This post was originally published on Defender Network

By ReShonda Tate

As artificial intelligence (AI) becomes increasingly woven into our daily lives, a troubling truth is emerging: The technology designed to advance humanity is also amplifying its oldest prejudices.

From OpenAI’s Sora 2 video generator – which has been used to produce racially mocking portrayals of Black people – to ChatGPT and Google’s Gemini exhibiting bias in speech, the promise of innovation is colliding head-on with the persistence of racism.

A new study from the Allen Institute for Artificial Intelligence found that large language models consistently associate African American Vernacular English (AAVE) with negative stereotypes. Researchers discovered that AI systems penalize speakers of AAVE, often labeling their speech as “less professional,” “angry,” or “incoherent.”

“These biases are not just theoretical,” said Valentin Hofmann, lead researcher on the study. “They can impact whether someone gets a job interview, a loan approval, or even fair treatment in court. When the systems used to make decisions at scale inherit our social biases, those prejudices become automated.”
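
The setup the researchers describe is essentially a matched-guise test: give a model the same message in Standard American English and in AAVE, then compare how it judges each version. Below is a minimal sketch of that idea in Python, assuming the Hugging Face transformers library and its default sentiment model; the study itself used different models, prompts, and a far more rigorous protocol, and the example sentences here are illustrative only.

```python
# Minimal matched-guise style probe: the same message in two dialect renderings,
# scored by an off-the-shelf sentiment classifier. Illustrative only; the
# Allen Institute study used different models and a more careful methodology.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

pairs = [
    ("I am so happy when I wake up from a bad dream because it feels too real.",
     "I be so happy when I wake up from a bad dream cause they be feelin too real."),
    ("She is working on the project right now.",
     "She workin on the project right now."),
]

for standard, aave in pairs:
    s = classifier(standard)[0]  # {'label': 'POSITIVE'|'NEGATIVE', 'score': float}
    a = classifier(aave)[0]
    print(f"SAE : {s['label']:>8} ({s['score']:.2f})  {standard}")
    print(f"AAVE: {a['label']:>8} ({a['score']:.2f})  {aave}")
    print()
```

A consistent gap between the two rows across many sentence pairs is the kind of covert dialect penalty the researchers describe; a single pair proves nothing on its own.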

The Sora controversy: When fake becomes “News”

Perhaps the most chilling example of this new digital racism comes courtesy of Sora, OpenAI’s highly touted text-to-video generator. Marketed as a creative tool for filmmakers, educators, and content creators, Sora allows users to type a sentence and instantly produce a lifelike video. Within weeks of its demo, Sora-created videos flooded social media, including racist fakes that looked so real that even major newsrooms were fooled.

One viral video depicted a Black woman using exaggerated AAVE to rant about selling her government SNAP benefits. The clip was completely fabricated, but Fox News published a digital story presenting it as fact, complete with quotes from the fictional woman.

After viewers on social media flagged the video as AI-generated, Fox quietly edited the article and added a brief note acknowledging the error. By then, the damage was done: the fake clip had been shared thousands of times, reinforcing long-debunked “welfare queen” stereotypes.

“These are not innocent mistakes,” Hofmann said. “When major media outlets amplify fake content that dehumanizes Black people, it reinforces dangerous narratives that have existed for generations.”

The Accountability Gap

Houston AI ethicist Angelica Renee said incidents like the Fox News deepfake expose a deeper systemic failure: the lack of enforceable policy.

“There was a policy introduced in 2023 called the AI Labeling Act of 2023,” Renee explained. “It largely involves the metadata and embedding of AI videos. But right now, the bill is just sitting in committee.”

Without passage, there’s no legal consequence for media outlets or platforms that fail to disclose, or that misuse, AI content.

“This honestly means that media outlets can get away with simply labeling information as AI without ramification, even if it causes harm,” she said. “Unless you’re doing the deep-dive research into what the metadata says — and let’s be real, most people won’t — you’d never know. And as we all know, perception is reality, especially in media.”
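
Her point about metadata is concrete: whatever provenance signal an AI-generated video carries usually lives in container tags or embedded content credentials that ordinary viewers never open. As a rough illustration, the sketch below dumps a video’s container metadata with ffprobe and flags anything that mentions AI; it assumes FFmpeg is installed and a local file named video.mp4, and a real provenance check would rely on dedicated verification tools rather than this kind of keyword scan.

```python
# Dump a video's container metadata with ffprobe and look for provenance hints.
# Assumes ffprobe (part of FFmpeg) is on PATH and "video.mp4" is a local file.
import json
import subprocess

def inspect_metadata(path: str) -> None:
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)

    # Container-level tags (encoder, comments, creation tool) sometimes name
    # the generator; AI tools are not required to write anything here.
    tags = info.get("format", {}).get("tags", {})
    for key, value in tags.items():
        print(f"{key}: {value}")

    # Flag a few keywords worth a closer look; this is a heuristic, not proof.
    suspicious = [v for v in tags.values()
                  if any(w in str(v).lower() for w in ("ai", "generated", "synthetic"))]
    if suspicious:
        print("Possible AI-related metadata:", suspicious)
    else:
        print("No obvious AI markers in container tags (absence proves nothing).")

if __name__ == "__main__":
    inspect_metadata("video.mp4")
```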

Renee argues that the absence of clear regulation leaves Black communities most vulnerable.

“Holding mass media outlets and social platforms accountable starts with shaping smart, equitable AI policy,” she said. “Clear, enforceable rules must be put in place regarding the creation and dissemination of targeted AI-driven disinformation, which poses a unique and severe threat.”

That threat, she added, becomes especially critical when AI is used to fabricate videos of Black activists, politicians, or community leaders, turning them into digital caricatures meant to discredit or silence them.

“When false imagery undermines our public standing, it isn’t just propaganda,” Renee said. “It’s psychological warfare.”

A case study in disinformation

Renee pointed to the Fox News deepfake as a textbook case of targeted disinformation.

The content: A story about supposed SNAP recipients threatening to “ransack stores” after benefit cuts, relying entirely on AI-generated videos of Black women.

The mechanism of harm: The fakes weaponized racist stereotypes that have long portrayed Black women on public assistance as dishonest or criminal. By publishing the piece, Fox validated that narrative and gave it mass-media legitimacy.

The confirmation bias: The fabricated story appealed to viewers predisposed to believe such stereotypes, embedding falsehoods as “truth.”

The accountability failure: When the deception was exposed, Fox didn’t issue a transparent retraction. The story was quietly rewritten at the same URL, changing focus from “recipients threatening stores” to “AI videos going viral.”

“In today’s policy landscape, without a strong AI Accountability Act or FTC oversight, the penalty for this kind of editorial negligence is virtually nonexistent,” Renee said. “That’s why we have to move beyond labeling — to demanding enforceable ethical and verification standards for any media outlet using or reporting on AI content.”

Digital blackface and “Bigfoot Baddie”

The Sora controversy follows another wave of racially charged AI content: a trend known as “Bigfoot Baddie.”

Across TikTok and Instagram, pages with names like FemaleBigfoot and BigfootBaddies have gained hundreds of thousands of followers by posting AI-generated videos of gorillas depicted as hypersexualized caricatures of Black women, decked out in blonde wigs, long nails, and glittery crop tops.

The characters use exaggerated slang and stereotypical AAVE, delivering monologues that are as offensive as they are absurd. “What’s up b**ches, it’s Bigfoot, the baddest b**ch in the woods. Part-time cryptid, full-time problem,” one viral clip begins.

Some commenters laugh; others express disgust. But the fact that millions of users engage with this content, even ironically, shows how AI is giving new life to old racist tropes.

Dr. Safiya Noble, author of Algorithms of Oppression, said this phenomenon reflects how deeply racism is embedded in digital spaces.

“These systems don’t just reproduce stereotypes — they industrialize them,” Noble said. “It’s an automated caricature, and it’s deeply harmful.”

Media literacy: The first line of defense

Renee insists that education is just as important as regulation.

“This is why I’ve always been a strong advocate for media literacy,” she said. “Just like financial literacy, media literacy will be paramount in shaping future minds. But as I’ve also mentioned before, willful ignorance and learned helplessness have played such a pivotal role in certain communities’ assimilation of media, and with AI, that’s only being amplified.”

She believes that schools, churches, and community groups must start teaching digital literacy that includes racial awareness and AI bias.

“Digital literacy has to evolve to teach algorithmic skepticism,” Renee said. “Ask: Who made it? What data trained it? Who benefits — and who’s harmed?”

She suggests that educators introduce Critical Media Forensics, practical lessons in spotting deepfakes, reading AI labels, and tracing sources.

“The goal is not just to teach people to detect fake content,” Renee said, “but to understand how that content is weaponized against marginalized groups.”

Renee also warns that Black users face particular risks when trusting AI tools built without them in mind.

“The primary danger is the multiplication and automation of systemic racial bias,” she said. “Think of facial-recognition systems that misidentify Black faces, leading to wrongful arrests. Or healthcare algorithms trained on unequal data that underestimate pain levels or cancer risks for darker-skinned patients. The list goes on.”

For Renee, the lesson is simple: “We have to think critically about the media we consume — and the tools we use to create it. We can’t fight what we can’t recognize,” she added. “Media literacy is the first defense against digital racism; it’s how we reclaim our power. For Black communities, the stakes are clear: the fight for equality isn’t just in the streets or the courts anymore — it’s in the code, the classrooms, and the policy rooms that decide who gets to define reality.”

How to spot a deepfake video 

As digital racism and disinformation become more sophisticated, knowing how to spot AI-generated deepfake videos is a crucial skill. Here are five steps to help you verify content:

1. Scrutinize the face and eyes. Look for telltale signs of digital manipulation in the subject’s face. Do they blink naturally? Are the emotions in the face consistent with the speech and context? 

2. Check the audio-visual sync. A common failure point for deepfakes is synchronizing the audio and video seamlessly. Look for a lag, and check whether the person’s lips exactly match the words they are saying.

3. Examine hands, teeth, and details. AI often struggles with fine details. Are there five fingers on each hand? Are the hands positioned, shaped, and moving in a natural way? Do the teeth look distinct, or do they blur and shift between frames?

4. Vet the source and context. Don’t trust the video on its face; trace its origins. Who originally posted the video? Is it from a verified, credible account or a news organization, or an anonymous, newly created account? Has the story or event been reported by multiple trusted news sources? Search for the video or key quotes to see if its authenticity has been debunked; pulling a few still frames for a reverse-image search (sketched after this list) can help trace the original footage. If it seems too shocking or outrageous, be extra skeptical.

5. Watch for physics-defying glitches. Look for basic inconsistencies that violate real-world physics. Do objects, like a pair of glasses or jewelry, flicker, disappear, or morph between frames? Does the background warp, shimmer, or distort around the subject’s outline? Look for text in the background that seems incomprehensible or wobbly. Do their movements look unnatural or jerky, or do their body parts seem disconnected from their head or torso?
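
For step 4, one practical shortcut is to pull a few still frames from the suspect clip and run them through a reverse-image search to see whether the footage, or anything resembling it, existed before the post. A minimal sketch, assuming FFmpeg is installed and a local file named suspect.mp4:

```python
# Extract one frame per second from a suspect video so the stills can be
# reverse-image searched. Assumes FFmpeg is on PATH and "suspect.mp4" exists.
import pathlib
import subprocess

def extract_frames(video: str, out_dir: str = "frames", fps: int = 1) -> None:
    pathlib.Path(out_dir).mkdir(exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", video,
         "-vf", f"fps={fps}",          # one frame per second
         f"{out_dir}/frame_%04d.png"],
        check=True,
    )

if __name__ == "__main__":
    extract_frames("suspect.mp4")
    # Upload a few of the saved PNGs to a reverse-image search engine to see
    # whether the footage, or anything like it, circulated before the post.
```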