Shirin Nilizadeh’s research while affiliated with The University of Texas at Arlington and other places


Publications (58)


Cultural Nuances in COVID-19 Vaccine Uptake: A Comparative Analysis of English and Spanish Facebook Posts in Tarrant County, Texas (Preprint)
  • Preprint

February 2025 · 1 Read

Ana Aleksandric · Anisha Dangal · Shirin Nilizadeh · Gabriela Mustata Wilson

BACKGROUND: Prior studies have identified key factors contributing to COVID-19 vaccine hesitancy, including concerns over vaccine safety, potential side effects, and mistrust in the healthcare system. According to the World Health Organization, vaccine hesitancy is among the top ten threats to global public health. Previous research suggests that vaccine hesitancy is a significant barrier within the Hispanic population, particularly in Texas.

OBJECTIVE: This longitudinal study examines the relationship between daily stances, misinformation, and topics in vaccine-related English and Spanish social media posts and daily vaccination rates in Tarrant County, Texas, throughout 2021 and 2022. The study seeks to identify predictors positively associated with vaccination uptake to inform potential social media interventions aimed at reducing vaccine hesitancy, focusing on the Hispanic population in Tarrant County.

METHODS: COVID-19 vaccine-related English and Spanish posts were collected from Facebook in Tarrant County for 2021 and 2022. Posts were annotated by GPT-4, labeling each post's stance toward the vaccine, the presence of misinformation, and relevant topics such as vaccine availability, safety, and side effects. The prevalence of each category was compared across English and Spanish posts to explore major vaccine-related concerns and potential cultural influences on vaccination uptake. Regression analysis was then conducted to assess associations between post-related variables and vaccination rates over time.

RESULTS: Regression analysis identified distinct predictors of Hispanic vaccination uptake within the Spanish dataset, including encouraging posts (P = .02) and posts related to religious beliefs (P = .007), which did not emerge as significant predictors of general population uptake (P = .065). A substantial proportion of discouraging Spanish posts focused on vaccine side effects (~19%) and health system distrust (~34%), highlighting areas where targeted interventions may address specific concerns within the Hispanic community. Some predictors are common to both higher Hispanic and general population vaccination uptake, including posts regarding vaccine availability (P = .01), safety (P = .006), and misinformation debunking (P < .001).

CONCLUSIONS: This study investigates the correlation between the daily stances, misinformation, and topics shared in COVID-19 vaccine-related English and Spanish Facebook posts and new daily vaccination uptake in Tarrant County, Texas, during 2021 and 2022. Findings suggest that posts emphasizing vaccine availability, safety, and misinformation debunking are associated with increased vaccination rates. Additionally, encouraging posts and those related to religious beliefs correlate with higher vaccination uptake among Hispanics, suggesting cultural nuances. These insights highlight the need for tailored social media messaging, which may effectively boost vaccination rates as part of targeted public health campaigns.
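
To make the annotation step concrete, below is a minimal sketch of how GPT-4-based labeling of posts for stance, misinformation, and topic might look. The prompt wording, label set, and client usage are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of GPT-4-based post annotation as described in the abstract.
# The prompt, label set, and client usage are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABEL_PROMPT = (
    "Classify the following Facebook post about COVID-19 vaccines.\n"
    "Return three comma-separated fields:\n"
    "1. stance: encouraging, discouraging, or neutral\n"
    "2. misinformation: yes or no\n"
    "3. topic: availability, safety, side_effects, religion, or other\n\n"
    "Post: {post}"
)

def annotate_post(post: str) -> str:
    """Ask the model for the stance, misinformation flag, and topic of one post."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": LABEL_PROMPT.format(post=post)}],
        temperature=0,  # deterministic output suits annotation tasks
    )
    return response.choices[0].message.content

print(annotate_post("La vacuna está disponible gratis en la clínica local."))
```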


[Preview figures: Figure 1, framework showing implementation of adversarial perturbations; Figure 2, poisoning attack using influence functions; table of example character and word perturbations, represented by their short abbreviations.]
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions
  • Preprint
  • File available

October 2024 · 5 Reads

Large Language Models have introduced novel opportunities for text comprehension and generation. Yet, they are vulnerable to adversarial perturbations and data poisoning attacks, particularly in tasks like text classification and translation. The adversarial robustness of abstractive text summarization models, however, remains less explored. In this work, we unveil a novel approach that exploits the inherent lead bias in summarization models to perform adversarial perturbations. Furthermore, we introduce an innovative application of influence functions to execute data poisoning, which compromises the model's integrity. This approach not only skews the model's behavior toward attacker-desired outcomes but also reveals a new behavioral change: models under attack tend to generate extractive rather than abstractive summaries.
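
As an illustration of the lead-bias idea, the sketch below injects an attacker-chosen sentence at the lead position of a document and compares the resulting summaries. The model choice and the perturbation itself are assumptions for demonstration, not the paper's exact method.

```python
# Illustrative sketch of a lead-bias perturbation: because many summarization
# models over-weight a document's opening sentences, injecting attacker-chosen
# text at the lead can steer the summary. Model and example text are assumed.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "City officials approved the new transit budget on Monday. "
    "The plan expands bus service to three suburbs and adds night routes. "
    "Funding comes from a mix of state grants and local taxes."
)

# Attacker-chosen sentence placed at the lead position.
adversarial_lead = "Officials secretly admitted the transit plan is a failure."

clean = summarizer(article, max_length=40, min_length=10)[0]["summary_text"]
attacked = summarizer(adversarial_lead + " " + article,
                      max_length=40, min_length=10)[0]["summary_text"]

print("clean:   ", clean)
print("attacked:", attacked)  # lead bias tends to pull in the injected claim
```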



DarkGram: Exploring and Mitigating Cybercriminal content shared in Telegram channels

September 2024 · 360 Reads

We present the first large-scale analysis of 339 cybercriminal activity channels (CACs) on Telegram from February to May 2024. Collectively followed by over 23.8 million users, these channels shared a wide array of illicit content, including compromised credentials, pirated software and media, and blackhat hacking resources such as malware, social engineering scams, and exploit kits. We developed DarkGram, a BERT-based framework that identifies malicious posts from the CACs with an accuracy of 96%. Using DarkGram, we conducted a quantitative analysis of 53,605 posts from these channels, revealing key characteristics of the shared content. While much of this content is distributed for free, channel administrators frequently employ promotions and giveaways to engage users and boost sales of premium cybercriminal content. These channels also pose significant risks to their own subscribers. Notably, 28.1% of shared links contained phishing attacks, and 38% of executable files were bundled with malware. Moreover, our qualitative analysis of replies in CACs shows how subscribers cultivate a dangerous sense of community through requests for illegal content, illicit knowledge sharing, and collaborative hacking efforts, while their reactions to posts, including emoji responses, further underscore their appreciation for such content. We also find that CACs can evade scrutiny by quickly migrating to new channels with minimal subscriber loss, highlighting the resilience of this ecosystem. To counteract this, we further utilized DarkGram to detect new channels, reporting malicious content to Telegram and the affected organizations, which resulted in the takedown of 196 such channels over three months. To aid further collaborative efforts to take down these channels, we open-source our dataset and the DarkGram framework.
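
The sketch below illustrates the inference side of a BERT-based post classifier in the spirit of DarkGram; the checkpoint name, label mapping, and example post are placeholders, and the released framework may differ.

```python
# Minimal sketch of BERT-based malicious-post classification. In practice a
# checkpoint fine-tuned on labeled CAC posts would replace the base model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "bert-base-uncased"  # placeholder; a fine-tuned checkpoint is assumed
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def classify(post: str) -> str:
    """Label a Telegram post as benign (0) or malicious (1)."""
    inputs = tokenizer(post, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return "malicious" if logits.argmax(dim=-1).item() == 1 else "benign"

print(classify("Fresh combo list, 10k valid logins, free download inside!"))
```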


Utilizing Large Language Models to Optimize the Detection and Explainability of Phishing Websites

August 2024 · 122 Reads

In this paper, we introduce PhishLang, an open-source, lightweight Large Language Model (LLM) specifically designed for phishing website detection through contextual analysis of the website. Unlike traditional heuristic or machine learning models, which rely on static features and struggle to adapt to new threats, and deep learning models, which are computationally intensive, our model utilizes the advanced language processing capabilities of LLMs to learn granular features that are characteristic of phishing attacks. Furthermore, PhishLang operates with minimal data preprocessing and offers performance comparable to leading deep learning tools while being significantly faster and less resource-intensive. Over a 3.5-month testing period, PhishLang successfully identified approximately 26K phishing URLs, many of which went undetected by popular anti-phishing blocklists, demonstrating its potential to aid current detection measures. We also evaluate PhishLang against several realistic adversarial attacks and develop six patches that make it very robust against such threats. Furthermore, we integrate PhishLang with GPT-3.5 Turbo to create "explainable blocklisting": warnings that provide users with contextual information about the features that led to a website being marked as phishing. Finally, we have open-sourced the PhishLang framework and developed a Chromium-based browser extension and URL scanner website, which implement explainable warnings for end users.
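
The sketch below illustrates the explainable-blocklisting idea: once a detector flags a site, an LLM turns the triggering signals into a plain-language warning. The prompt, feature names, and example URL are illustrative assumptions, not PhishLang's actual implementation.

```python
# Sketch of generating an explainable warning from detector signals via an LLM.
# Prompt wording and feature names are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def explain_warning(url: str, features: list[str]) -> str:
    """Turn the detector's triggering signals into a user-facing warning."""
    prompt = (
        f"The website {url} was flagged as phishing based on these signals: "
        f"{', '.join(features)}. Write a short, non-technical warning that "
        "explains to a user why this site is likely unsafe."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(explain_warning(
    "http://paypa1-login.example.com",  # hypothetical flagged URL
    ["misspelled brand name in domain", "credential form on unencrypted page"],
))
```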


Users’ Behavioral and Emotional Response to Toxicity in Twitter Conversations

May 2024 · 22 Reads · 1 Citation

Proceedings of the International AAAI Conference on Web and Social Media

Prior works have shown connections between online toxicity attacks, such as harassment, cyberbullying, and hate speech, and subsequent increases in offline violence, as well as negative psychological effects on victims. These correlations are primarily identified through user studies conducted via virtual environments, simulations, and questionnaires. However, no work has investigated how people authentically react to online toxicity in practice, both emotionally (showing anger, anxiety, and sadness) and behaviorally (engaging with and responding to toxicity instigators), considering conversations as a whole and the relation between emotions and behaviors. This data-driven study investigates the effect of toxicity on Twitter users' behaviors and emotions while accounting for confounding factors, such as account identifiability, activity, and the conversation's structure and topic. We collected about 80K Twitter conversations and identified those with and without toxic replies. Performing statistical tests along with propensity score matching, we investigated the causal association between receiving toxicity and users' responses. We found that authors of conversations with toxic replies are more likely to engage in conversations, reply in a toxic way, and unfollow toxicity instigators. In terms of users' emotional responses, we found that sadness and anger after the first toxic reply are more likely to increase as the amount of toxicity increases. These findings not only emphasize the negative emotional and behavioral effects of online toxicity on social media users but, as demonstrated in this paper, can also be utilized to build prediction models for users' reactions, which could then aid the implementation of proactive detection and intervention measures to help users in such situations.
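
To clarify the matching step, here is a minimal propensity-score-matching sketch on toy data: estimate each conversation's probability of receiving a toxic reply from its confounders, then pair treated and control conversations with similar scores. Variable names and the greedy matching rule are simplified assumptions, not the paper's exact procedure.

```python
# Minimal propensity-score-matching sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Confounders per conversation: identifiability, activity, thread size (toy).
X = rng.normal(size=(1000, 3))
treated = rng.integers(0, 2, size=1000)  # 1 = received a toxic reply

# 1. Propensity scores: P(treated | confounders).
scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Greedy 1:1 nearest-neighbor matching on the propensity score.
treated_idx = np.where(treated == 1)[0]
available = set(np.where(treated == 0)[0])
pairs = []
for t in treated_idx:
    if not available:  # ran out of controls to match
        break
    c = min(available, key=lambda j: abs(scores[t] - scores[j]))
    pairs.append((t, c))
    available.remove(c)

print(f"matched {len(pairs)} treated/control pairs")
# Outcomes (e.g., unfollowing, reply toxicity) would then be compared
# across the matched pairs.
```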


Analyzing the Stance of Facebook Posts on Abortion Considering State-Level Health and Social Compositions

May 2024 · 1 Read · 1 Citation

Proceedings of the International AAAI Conference on Web and Social Media

Abortion remains one of the most controversial topics, especially after the overturning of Roe v. Wade in the United States. Previous literature showed that the illegality of abortion can have serious consequences, as women might seek unsafe pregnancy terminations, leading to increased maternal mortality rates and negative effects on their reproductive health. Therefore, the stances of abortion-related Facebook posts were analyzed at the state level in the United States from May 4 until June 30, 2022, right after the Supreme Court's decision was disclosed. In more detail, a pre-trained Transformer-based model was fine-tuned on a manually labeled training set to obtain a stance detection model suitable for the collected dataset. Afterward, we employed appropriate statistical tests to examine the relationships between public opinion regarding abortion, abortion legality, political leaning, and factors measuring the overall population's health, health knowledge, and vulnerability per state. We found that infant mortality rate, political affiliation, abortion rates, and abortion legality are associated with stances toward abortion at the state level in the US. While aligned with existing literature, these findings indicate how public opinion, laws, and women's and infants' health are related, as well as how these relationships can be demonstrated using social media data.
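
As a toy illustration of the kind of state-level association test described, the sketch below correlates the per-state share of posts opposing abortion with an infant mortality indicator; all numbers are fabricated placeholders that only show the mechanics, not the paper's data or results.

```python
# Toy sketch of a state-level association test between stance shares and a
# health indicator. All values are fabricated placeholders.
from scipy.stats import spearmanr

# Per-state share of posts opposing abortion (toy values).
against_share = [0.62, 0.48, 0.55, 0.33, 0.71, 0.40]
# Per-state infant mortality rate per 1,000 live births (toy values).
infant_mortality = [6.8, 5.1, 6.2, 4.0, 7.5, 4.8]

rho, p = spearmanr(against_share, infant_mortality)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```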


[Preview: Figure 3, distribution of bots across Botometer score thresholds for (a) bot accounts that retweeted the promotion tweets, (b) bots following NFT collections during and after promotion, and (c) likes and replies by bots on tweets shared by NFT collections; table comparing the classifier with other classification models.]
Unveiling the Risks of NFT Promotion Scams

May 2024 · 28 Reads · 8 Citations

Proceedings of the International AAAI Conference on Web and Social Media

The rapid growth in popularity and hype surrounding digital assets such as art, video, and music in the form of non-fungible tokens (NFTs) has made them a lucrative investment opportunity, with NFT-based sales surpassing $25B in 2021 alone. However, the volatility and general lack of technical understanding of the NFT ecosystem have led to the spread of various scams. The success of an NFT heavily depends on its online virality. As a result, creators use dedicated promotion services to drive engagement to their projects on social media websites such as Twitter. However, these services are also utilized by scammers to promote fraudulent projects that attempt to steal users' cryptocurrency assets, posing a major threat to the ecosystem of NFT sales. In this paper, we conduct a longitudinal study of 439 promotion services (accounts) on Twitter that collectively promoted 823 unique NFT projects through giveaway competitions over a period of two months. Our findings reveal that more than 36% of these projects were fraudulent, comprising phishing, rug pull, and pre-mint scams. We also found that a majority of accounts engaging with these promotions (including those for fraudulent NFT projects) are bots that artificially inflate the popularity of the fraudulent NFT collections by increasing their likes, followers, and retweet counts. This manipulation results in significant engagement from real users, who then invest in these scams. We also identify several shortcomings in existing anti-scam measures, such as blocklists, browser protection tools, and domain hosting services, in detecting NFT-based scams. We utilize our findings to develop and open-source a machine learning classifier tool that proactively detected 382 new fraudulent NFT projects on Twitter.
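
The sketch below shows the general shape of such a classifier on toy engagement features; the features, labels, and model choice are assumptions for illustration, not the open-sourced tool itself.

```python
# Sketch of a fraud-project classifier over engagement features (toy data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Toy features per promoted project: follower count, retweet ratio,
# bot-like engagement share, account age (all scaled to [0, 1]).
X = rng.random((500, 4))
y = (X[:, 2] > 0.6).astype(int)  # toy rule: heavy bot engagement ~ fraudulent

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"holdout accuracy: {clf.score(X_test, y_test):.2f}")
```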



Vulnerabilities Unveiled: Adversarially Attacking a Multimodal Vision Language Model for Pathology Imaging

April 2024 · 1 Citation

In the context of medical artificial intelligence, this study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a Vision Language Foundation model, under targeted attacks. Leveraging the Kather Colon dataset with 7,180 H&E images across nine tissue types, our investigation employs Projected Gradient Descent (PGD) adversarial perturbation attacks to intentionally induce misclassifications. The outcomes reveal a 100% success rate in manipulating PLIP's predictions, underscoring its susceptibility to adversarial perturbations. The qualitative analysis of adversarial examples delves into the interpretability challenges, shedding light on nuanced changes in predictions induced by adversarial manipulations. These findings contribute crucial insights into the interpretability, domain adaptation, and trustworthiness of Vision Language Models in medical imaging. The study emphasizes the pressing need for robust defenses to ensure the reliability of AI models. The source code for this experiment can be found at this https URL.
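
For reference, here is a minimal PGD sketch of the attack class described, written against a generic PyTorch image classifier rather than PLIP itself; the epsilon, step size, and iteration count are illustrative defaults.

```python
# Minimal L-infinity PGD sketch against a generic PyTorch classifier.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Return an adversarial version of x within an L-inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x
        # and the valid pixel range [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv

# Usage (assuming a trained `model`, image batch `x`, and true labels `y`):
#   x_adv = pgd_attack(model, x, y)
#   model(x_adv).argmax(dim=-1)  # predictions are often flipped by the attack
```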


Citations (29)


... 2) Robustness in Disease Diagnosis: Although foundation models for medical image analysis have achieved great success, their robustness remains a considerable concern. Veerla et al. [91] explore the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model by employing Projected Gradient Descent (PGD) adversarial attacks to intentionally induce misclassifications. The findings of the study emphasize the pressing need for robust defenses to ensure the security of foundation models for medical image analysis. ...

Reference:

A Survey on Trustworthiness in Foundation Models for Medical Image Analysis
Vulnerabilities Unveiled: Adversarially Attacking a Multimodal Vision Language Model for Pathology Imaging
  • Citing Conference Paper
  • April 2024

... Despite their transformative capabilities, the widespread deployment of LLMs has also introduced a range of security challenges [47,48,49,50,51,52,53]. Key concerns include the potential for LLMs to generate misinformation [54,55,56,57], perpetuate bias [58,59,60], and become susceptible [61,62,63] to adversarial attacks such as prompt injection [64,65] and jailbreaking [66,67,68]. The complexity involved in training LLMs means that even minor weaknesses can result in significant vulnerabilities, particularly when these models are applied in sensitive domains such as healthcare [69,70,71], finance [72], and national security [73,74]. ...

Demonstration of an Adversarial Attack Against a Multimodal Vision Language Model for Pathology Imaging
  • Citing Conference Paper
  • May 2024

... Toxicity has dire social and economic costs. Online, it reduces user participation, hinders information exchange, and deepens divides [1]. Offline, it may lead to physical violence, reduce norm adherence, and cause severe psychological distress [25,51,57]. ...

Users’ Behavioral and Emotional Response to Toxicity in Twitter Conversations
  • Citing Article
  • May 2024

Proceedings of the International AAAI Conference on Web and Social Media

... Some artists found that their work had been copied and sold as NFTs that they were not aware of or a part of. Insider trading, wash sales, and pyramid schemes were among the illegal frauds that were perpetrated (Flick, 2022; Jordanoska, 2021; Mackenzie & Bērzina, 2022; Roy et al., 2023). Celebrity profiteering has already been discussed in considering social influences on NFT prices (Hawkins, 2022). ...

Unveiling the Risks of NFT Promotion Scams

Proceedings of the International AAAI Conference on Web and Social Media

... This section presents a review of various studies that focus on such methods. The reviewed studies can be broadly categorized into the following technique groups: structural analysis [16], [17], [27], [29], code analysis [15], ML and feature-based analysis [12]- [14], behavioral analysis [30] and data-driven analysis [20], [28], [31]. ...

Phishing in the Free Waters: A Study of Phishing Attacks Created using Free Website Building Services
  • Citing Conference Paper
  • October 2023

... In terms of picture quality and utility preservation, Khorzooghi et al.'s work [12] surpasses CIAGAN and Deep Privacy when using StyleGAN for face de-identification through style mixing. In order to accomplish a controlled privacy-utility trade-off, Meden et al. [13] develop CPP-DeID, a face deidentification technique that maximizes StyleGAN2 [12] latent code to suppress identity while preserving attributes like gender and expression. ...

Examining StyleGAN as a Utility-Preserving Face De-identification Method

Proceedings on Privacy Enhancing Technologies

... Not only are they essential for setting standards and ensuring consistency in how platforms enforce rules around acceptable behavior and content, but they can help users make informed decisions on what speech is-and is notadmissible (e.g. Singhal et al. 2023). These guidelines have been partly influenced by civil society initiatives such as the Santa Clara Principles, which emphasize transparency, accountability, and the protection of marginalized voices in content moderation practices (even though the effectiveness of multistakeholder governance, as exemplified by partnerships between civil society organizations and corporations, has been subject to debate- Dvoskin, 2024). ...

SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice
  • Citing Conference Paper
  • July 2023

... Some examples of such content are child sexual exploitation, terrorism, and pornography, which by law are required to be removed. Other types of prohibited content are malicious content, which includes spam, malware, and phishing URLs purposefully spread on social media platforms to gain more victims [84], [286], [287], [334]. While there are numerous works on the moderation of such content, these works are out of the scope of this paper. ...

Cybersecurity Misinformation Detection on Social Media: Case Studies on Phishing Reports and Zoom’s Threat
  • Citing Article
  • June 2023

Proceedings of the International AAAI Conference on Web and Social Media

... Indeed, the United States Supreme Court overturned Roe v. Wade on June 24, 2022, meaning that states could independently decide abortion regulations and legalities [70]. Since then, abortion has been a topic of interest on social media platforms, regardless of the geolocation of the users [71]. This study is no exception, with the majority of posts in this category pertaining to abortion. ...

Analyzing the Stance of Facebook Posts on Abortion Considering State-level Health and Social Compositions

... Several works analyzed the security characteristics of FCWs such as [4], [5], [6], [7], [8], [17], [18], [21], [27], while other works focus on analyzing the security of the general websites [10], [11], [12], [14], [15], [19], [20], [22], [23], [24], [25], [26], [28], [31], [36], [37]. Moreover, several works explored the regional analysis for domain-specific websites, such as governments and universities. ...

A Large-Scale Analysis of Phishing Websites Hosted on Free Web Hosting Domains