SB 11 Can Stop AI Deepfake Abuse Against Women

Josh Bocanegra
3 min read · Dec 19, 2024


Can you tell which woman in this photo is real and which is AI?

When I first wrote SB 11 (formerly SB 970), the bill was sparked by an unsettling story.

A mother received a call from her daughter, who was crying and claiming she had been kidnapped by someone demanding money to set her free.

Except, that wasn’t her daughter on the phone.

This was a fake kidnapping, powered by AI.

By cloning her daughter’s voice from social media videos, a scammer was able to prey on a mother’s love and fear.

This story struck a chord with me, both as the father of a teenage girl and as an AI developer.

As time passed, similar accounts began to emerge. Deepfaked images, manipulated videos, and voice clones weren’t just anomalies; they were part of a growing pattern.

And as that pattern became clearer, so did the truth.

Women are bearing the brunt of AI deepfake abuse.

When we talk about deepfakes, it’s tempting to treat them like a party trick or toy. But for many women, they’re becoming a personal nightmare.

The numbers are staggering: one widely cited 2019 study found that roughly 96 percent of deepfake videos online were non-consensual pornography, and nearly all of it targeted women. Women make up the overwhelming majority of victims, often subjected to the worst kind of intimate violence imaginable, repackaged into “content” for the masses.

Why women?

Simply put, deepfakes magnify longstanding patterns of online harassment and sexual objectification — rooted in narratives that cast women as objects and targets.

Deepfakes bring those ideas into a world where synthetic “evidence” never fades. The damage reverberates long after the headlines die down, leaving victims with lasting humiliation, anxiety, and the haunting question of whether it will ever truly go away.

What’s at stake is broken trust and lasting trauma — at scale.

As if we don’t already have a mental health crisis among teenage girls.

SB 11 will help women fight back

SB 11 recognizes that deepfakes aren’t harmless. It treats the use of a person’s likeness without their consent as what it so often is: a malicious tool of abuse.

With SB 11, victims gain a path to hold perpetrators accountable.

Here’s how:

  • Combating AI Deepfake Abuse: Criminalizes the unauthorized use of AI to replicate someone’s name, voice, likeness, or other personal identifiers.
  • Legal Recourse: Allows victims to pursue civil litigation against offenders who misuse deepfake technology for false impersonation.
  • Evidence Integrity: Directs the Judicial Council to develop rules to address AI-generated evidence in legal proceedings.
  • Consumer Warnings: Requires clear warnings on deepfake software, monitored by the Department of Justice.

At its core, this isn’t just about a single piece of legislation; it’s about grappling with what it means to have an identity in a world where the line between authentic and artificial feels meaningless.

Legal recourse doesn’t magically heal emotional wounds, but it can slow the flood. It can force bad actors to think twice.

We can’t go back to a time before deepfakes existed. But we can decide what to do next.

If you support SB 11, you can learn more and sign the petition to spread awareness below.

At the start of this article, I posed a question: Can you tell which woman in this photo is real and which is AI? The answer is that both women are AI.

But if the answer had been that neither woman was AI, would you have been able to tell the difference?

Written by Josh Bocanegra

Founder & AI Developer at Persona, former U.S. Congressional Candidate, and CA Comms Lead for RFK Jr. I write about philosophy, AI and public service.
