Gendered Disinformation Is Not a “Women-Only” Issue; It Poses a Threat to Election Integrity


As two billion citizens across the globe exercise their voting rights this year, the specter of disinformation looms large over election integrity and outcomes. While public attention is predominantly fixated on combating fake news, AI-powered propaganda, hate speech, and foreign influence operations, gendered disinformation remains an overlooked but substantial threat to election integrity. Because it is rooted in societal and cultural biases, it presents a formidable challenge for effective counteraction.

In 2024, gendered disinformation targeting female politicians will directly impact women’s future political participation worldwide. Fundamentally, any effort to counter gendered disinformation must also address the social forces that power it, such as misogynistic and sexist biases. In addition to these long-term efforts, social media platforms must ensure access to data and incorporate gender and other identity-based lenses in their content moderation policies and automated detection technology.

What Is Gendered Disinformation? Who Does It Target, and How?

Gendered disinformation is “a subset of online gendered abuse that uses false or misleading gender and sex-based narratives against women, often with a degree of coordination, aimed at deterring women from participating in the public sphere.” It also intersects with other identity-driven forms of disinformation, such as those based on sexual orientation, disability, race, ethnicity, and religion. Consequently, women most affected by gendered disinformation are typically those whose identities intersect with other marginalized groups, such as sexual, religious, or racial minority communities.

Forms of gendered disinformation include the dissemination of falsified or doctored sexual images, coordinated abuse denigrating a woman’s character, and caricaturing and demonizing supporters of gender equality. 

It is crucial to recognize that gendered disinformation is closely connected to technology-facilitated gender-based violence, violent extremism, and radicalization. In particular, misogyny has been found to be integral to the ideology, political identity, and political economy of contemporary violent extremist groups. Further, studies show that violent misogyny can motivate both men and women to participate in violent extremism and radicalization, and it informs the mobilization strategies of violent extremist groups. Misogynist narratives also shape the ideology of the so-called manosphere, a constellation of anti-feminist websites and communities. For instance, Men Going Their Own Way, a misogynist movement with ties to the alt-right, promotes anti-feminist narratives claiming that feminism is responsible for corrupting society.

Significantly, gendered disinformation generates offline consequences for its targets, potentially more so than other types of disinformation.

Gendered disinformation narratives reinforce existing biases. Gendered disinformation typically amplifies existing stereotypes to evoke emotional responses, gain credibility, and resonate with a broader audience. Recurring narratives include labeling women as devious, unintelligent, weak, or immoral, or sexualizing them to depict them as untrustworthy and unfit to hold positions of power.

Traditional media and social media platforms can amplify gendered disinformation narratives, either deliberately or unwittingly. In the 2016 U.S. presidential election, conspiracy theories and false allegations about candidate Hillary Clinton reflected demeaning gender biases and misogynistic attitudes toward female politicians. A keyword analysis comparing conspiracies targeting Clinton, such as “Pizzagate” and the “Hillary Health Scare,” with coverage of Donald Trump’s Access Hollywood tape exposed stark disparities in how analogous controversies were reported. Keywords related to Clinton, such as “ill,” “weak,” “tired,” “sick,” and “abuse,” appeared over 28,000 times in news articles, while words typifying Trump’s statements and related behavior, including “harass,” “grab,” and “grope,” appeared fewer than 5,000 times. Similarly, on Twitter, fake news-related terms about Clinton appeared nine times as often as terms related to real news stories about Trump’s derogatory behavior toward women (1,800 vs. 200 mentions).
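For readers curious about the mechanics, this kind of comparison can be sketched as a simple frequency count over a news corpus. The snippet below is a minimal illustration, not the original study’s methodology: the keyword sets echo the terms cited above, and `load_news_corpus()` is an assumed placeholder, not a real API.

```python
import re
from collections import Counter

# Illustrative keyword sets, echoing the terms cited in the analysis above.
CLINTON_TERMS = {"ill", "weak", "tired", "sick", "abuse"}
TRUMP_TERMS = {"harass", "grab", "grope"}

def keyword_frequencies(articles: list[str], keywords: set[str]) -> Counter:
    """Count total occurrences of each keyword across a corpus of texts."""
    counts = Counter()
    for text in articles:
        # Lowercase and tokenize on letters so "Sick," still matches "sick".
        tokens = re.findall(r"[a-z']+", text.lower())
        token_counts = Counter(tokens)
        for kw in keywords:
            counts[kw] += token_counts[kw]
    return counts

# Hypothetical usage; load_news_corpus() is an assumed helper, not a real API.
# articles = load_news_corpus()
# print(sum(keyword_frequencies(articles, CLINTON_TERMS).values()))
# print(sum(keyword_frequencies(articles, TRUMP_TERMS).values()))
```

A real analysis would also stem or lemmatize tokens (so “harass” matches “harassment”) and normalize for overall coverage volume, but even this crude count makes the disparity measurable.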

During the 2020 U.S. presidential election, gendered disinformation narratives targeted Kamala Harris. While disinformation also targeted then-candidate Joe Biden, it mostly focused on his age and policy positions, not his gender. Disinformation directed at Harris, by contrast, employed a classic misogynist narrative, claiming that she used sex to gain power and that her immigrant parents made her ineligible for the vice presidency.

Gendered Disinformation Tactics & Strategies

Gendered disinformation is harder to detect and counter than other forms of online disinformation. Common tactics involve inside jokes and coded terms comprehensible only to individuals with an extensive understanding of the political context of a campaign and its targets. Other tactics include nicknames and deliberately altered characters in common terms of abuse (e.g., “b!tch”), as the sketch below illustrates. Finally, visual disinformation, such as memes, screenshots, or images depicting political figures, tends to disproportionately direct hate toward female public figures, particularly women of color. Given these peculiarities, gendered disinformation can evade traditional content moderation, which often lacks a nuanced understanding of the profiles and backgrounds of disinformation targets or the political landscape in which they conduct electoral campaigns.
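To make the evasion problem concrete, the minimal sketch below shows how a naive keyword filter fails on character-substitution tactics until the text is normalized first. The substitution map and the one-word lexicon are purely illustrative; production systems face far more variation, plus coded terms that no character mapping can recover.

```python
# Common symbol-for-letter swaps; illustrative, far from exhaustive.
SUBSTITUTIONS = str.maketrans({
    "!": "i", "1": "i", "0": "o", "3": "e", "@": "a", "$": "s", "5": "s",
})

ABUSE_TERMS = {"bitch"}  # placeholder lexicon for illustration only

def naive_match(text: str) -> bool:
    """Match the raw text against the lexicon; defeated by substitutions."""
    return any(term in text.lower() for term in ABUSE_TERMS)

def normalized_match(text: str) -> bool:
    """Undo common character substitutions before matching."""
    normalized = text.lower().translate(SUBSTITUTIONS)
    return any(term in normalized for term in ABUSE_TERMS)

print(naive_match("what a b!tch"))       # False: "!" defeats the filter
print(normalized_match("what a b!tch"))  # True after normalization
```

Normalization handles crude obfuscation, but inside jokes, nicknames, and context-dependent coded terms still require moderators or models that understand the specific campaign, which is precisely where generic moderation pipelines fall short.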

Generative AI disproportionately targets women and presents new threats to female politicians. Women have historically been primary targets for malicious adaptations of new technologies. A 2019 study found that 96 percent of deepfakes online were nonconsensual pornography, and among those, 99 percent depicted women. Well before public attention was drawn to the deepfakes featuring Taylor Swift, such manipulative content had already been deployed against women in politics worldwide. For instance, India’s youngest parliamentarian, Chandrani Murmu, became the victim of a deepfake pornographic video circulated as part of the widespread online harassment of India’s female politicians. Similarly, in the run-up to the 2016 parliamentary election in Georgia, a series of videos depicting female politicians engaged in sex were released online, accompanied by intimidating messages. As quality improves and dissemination accelerates, deepfakes can lead to public humiliation, abuse, and physical threats against female politicians and ultimately influence election outcomes.

Foreign state actors exploit gendered disinformation in their influence efforts. Analysis of state-sponsored Twitter accounts linked to Russia’s military intelligence agency (GRU) and the Internet Research Agency (IRA), as well as to Iran and Venezuela, revealed that they disseminated gendered disinformation designed to demobilize civil society activists, amplify pro-government propaganda, and generate virality around divisive political topics. Rather than focusing on divisions between men and women, these state actors used narratives that undermine women’s collective identity, instrumentalizing in-group tensions and provoking resentment against opposition groups. This suggests that gendered disinformation narratives about feminism, women’s political participation, and gender equality can be weaponized during elections to inflame public debate and deepen polarization, ultimately swaying public discourse to the operatives’ benefit.

Addressing Gendered Disinformation

Gendered disinformation not only affects women’s political participation but also undermines global security and democracy. At the individual level, it can dissuade women from participating in politics. At the national level, it has shown a potential to deepen polarization and fuel violent extremist movements. More fundamentally, gendered disinformation destabilizes societies, erodes political representation, and perpetuates a hostile environment for women in public life. It has been (and likely will continue to be) exploited by foreign actors in their influence operations. Addressing gendered disinformation is therefore not only a priority for women’s rights groups but also a national security imperative.

While technical solutions are not a panacea, social media platforms must assume a leadership role in detecting gendered disinformation. Identity-based abuse and disinformation fundamentally stem from pre-existing cultural norms and biases, so any effort to counter gendered disinformation requires lasting cross-sectoral cooperation to combat misogyny, sexism, racism, and anti-LGBTQ+ hate and to build social cohesion at large. What can be done now to detect and counter gendered disinformation in a crucial year for democracy?

First, social media platforms must release or allow access to data on gendered disinformation and abuse. Access to data is crucial for researchers, regulators, and the media to detect and monitor gendered disinformation narratives and tactics. In this context, Meta’s recent decision to shut down CrowdTangle, its tool for tracking the spread of disinformation, raises concerns given its timing in a global election year. Researchers have pointed out severe implications for pre- and post-election monitoring of political disinformation and of online harassment directed at women and minorities. In previous years, international nongovernmental organizations such as the National Democratic Institute, as well as local organizations in Macedonia, Georgia, and Nepal, used CrowdTangle to inform their research and programming on gendered disinformation. Ensuring access to data and preserving CrowdTangle’s functionality in its replacement, the Meta Content Library, must therefore become a priority in 2024.

Second, social media platforms can enhance their ability to counter gendered disinformation by incorporating a gender mainstreaming approach into their policies. This entails training content moderators through gender and other identity-based lenses so they can identify gendered disinformation narratives and tactics. Alongside training efforts, the classifiers and keyword lists used for content moderation should be consistently updated to include the themes and narratives targeting candidates in a given electoral campaign, as sketched below. Lastly, improving automated detection of gendered disinformation requires expanding the training data for machine learning models and Large Language Models (LLMs) to encompass gender, racial, cultural, and linguistic nuances.
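As one way to picture the “consistently updated” requirement, the sketch below separates a stable base lexicon from campaign-specific terms that are refreshed each election cycle. This is a hypothetical structure with invented example keywords, not any platform’s actual moderation system.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationLexicon:
    """Keyword lists used to flag posts for human review.

    Campaign-specific terms (nicknames, coded phrases aimed at particular
    candidates) live apart from the base lexicon so they can be refreshed
    each election cycle without destabilizing the rest of the pipeline.
    """
    base_terms: set[str] = field(default_factory=set)
    campaign_terms: set[str] = field(default_factory=set)

    def refresh_campaign(self, new_terms: set[str]) -> None:
        # Replace rather than append, so stale narratives from past
        # campaigns do not accumulate and inflate false positives.
        self.campaign_terms = set(new_terms)

    def flags(self, post: str) -> bool:
        """Flag a post if it contains any base or campaign-specific term."""
        text = post.lower()
        return any(term in text for term in self.base_terms | self.campaign_terms)

# Hypothetical usage during an election cycle (all keywords illustrative):
lexicon = ModerationLexicon(base_terms={"unfit", "hysterical"})
lexicon.refresh_campaign({"too emotional to lead"})
print(lexicon.flags("She is too emotional to lead this country."))  # True
```

In practice, such flags would route posts to trained human reviewers rather than trigger automatic removal, which is where the moderator training described above comes in.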

Worthwhile Investment in 2024 and Beyond

Social media platforms are grappling with a perpetual stream of user-generated content. Consequently, detecting constantly evolving gendered disinformation narratives may seem like an endless task that falls short of eradicating the problem. However, embracing a gender mainstreaming approach in both human and automated content moderation is a worthwhile investment. Looking beyond 2024, such an approach will empower human moderators to make more informed decisions when reviewing content. And gender-inclusive, context-aware automated moderation will become more effective at flagging disinformation, reducing the time and effort needed for human review.
