
Opinion: AI is turbocharging disinformation attacks on voters, especially in communities of color

[Photo: A mobile phone displaying many social media apps. Communities of color and non-English speakers rely heavily on mobile phones for internet access, and these devices' user interfaces are particularly vulnerable to disinformation. (NurPhoto via Getty Images)]

As the general election campaign begins in earnest, we can expect disinformation attacks to target voters, especially in communities of color. This has happened before: In 2016, for example, Russia’s disinformation programs zeroed in on Black Americans, creating Instagram and Twitter accounts that masqueraded as Black voices and producing fake news websites such as blacktivist.info, blacktolive.org and blacksoul.us.

Advances in technology will make these efforts harder to recognize. Envision those same fake accounts and websites featuring hyper-realistic videos and images intended to sow racial division and mislead people about their voting rights. With the advent of generative artificial intelligence, that is possible at little to no cost, turbocharging the kind of disinformation that has always targeted communities of color.

It’s a problem for candidates, election offices and voter outreach groups in the months ahead. But voters themselves will ultimately have to figure out what is real and what is fake, what is authentic and what is AI-generated.



For immigrants and communities of color — who often face language barriers, distrust democratic systems and lack access to technology — the challenge is likely to be even greater. Across the nation, and especially in states such as California with large communities of immigrants and people with limited English proficiency, the government needs to help these groups identify and avoid disinformation.

Asian Americans and Latinos are particularly vulnerable. About two-thirds of the Asian American and Pacific Islander population are immigrants, and a Pew Research Center report states that “[86%] of Asian immigrants 5 and older say they speak a language other than English at home.” The same dynamics hold true for Latinos: Only 38% of the U.S. foreign-born Latino population reports being proficient in English.

Targeting non-English-speaking communities has several advantages for those who would spread disinformation. These groups are often cut off from mainstream news sources that have the greatest resources to debunk deepfakes and other disinformation, preferring online engagement in their native languages, where moderation and fact-checking are less prevalent. Forty-six percent of Latinos in the U.S. use WhatsApp, while many Asian Americans prefer WeChat. Wired magazine reported that the platform “is used by millions of Chinese Americans and people with friends, family, or business in China, including as a political organizing tool.”



Disinformation aimed at immigrant communities is poorly understood and difficult to track and counteract, yet it is getting easier and easier to create. In the past, producing false content in non-English languages required intensive human labor and was often low in quality. Now, AI tools can create hard-to-track, in-language disinformation at lightning speed, without the vulnerabilities and scaling problems posed by human limitations. Despite this, much research on misinformation and disinformation still concentrates on English-language content.

Attempts to target communities of color and non-English speakers with disinformation are aided by many immigrants’ heavy reliance on their mobile phones for internet access. Mobile user interfaces are particularly vulnerable to disinformation because many desktop design and branding elements are minimized in favor of content on smaller screens. With 13% of Latinos and 12% of African Americans dependent on mobile devices for broadband access, in contrast to 4% of white smartphone owners, these communities are more likely to receive — and share — false information.

Social media companies’ past efforts to counter voter disinformation have fallen short. Meta’s February announcement that it would flag AI-generated images on Facebook, Instagram and Threads is a positive but minor step toward stemming AI-generated disinformation, especially for ethnic and immigrant communities who may know little about its effects. Clearly, a stronger government response is needed.


The California Initiative for Technology and Democracy, or CITED, where we serve on the board of directors, will soon unveil a legislative package that would require broader transparency for generative AI content, making sure users of social media know what video, audio and images were made by AI tools. The bills would also require labeling of AI-assisted political disinformation on social media, prohibit campaign ads close to an election from using the technology and restrict anonymous trolls and bots.


In addition, CITED plans to hold a series of community forums around California with partner organizations rooted in their regions. The groups will speak directly to leaders in communities of color, labor leaders, local elected officials and other trusted messengers about the dangers of false AI-generated information likely to be circulating this election season.

The hope is that this information will be relayed at the community level, making voters in the state more aware of false or misleading content and more skeptical of it, and building trust in the election process, election results and our democracy.

Bill Wong is a campaign strategist and the author of “Better to Win: Hardball Lessons in Leadership, Influence, & the Craft of Politics.” Mindy Romero is a political sociologist and the director of the Center for Inclusive Democracy at the USC Price School of Public Policy.
