Reading the Signals: Why the Government Still Isn’t Keeping Up with AI
- Noemi Kaminski
- Oct 24
- 3 min read

I recently read an article by Sacha Alanoca and Maroussia Lévesque arguing that the US is regulating AI — just not where we expect it. And yet, as interesting as that argument is, my gut feeling is that the regulation still isn’t enough.
The authors correctly highlight that while a lot of attention goes to whether AI systems like ChatGPT or image generators are being checked for bias, misinformation or misuse, the US government has been more active behind the scenes: restricting access to advanced chips, controlling compute and imposing export limits. That’s real regulation, just under the hood.
But here’s the rub: a recent example shows how the visible regulatory gaps still matter.
Case in point: MLK, Deepfakes, and the “Oops” moment
OpenAI’s Sora 2 app let users generate hyper-realistic short videos of historical figures, including Martin Luther King Jr. After his estate raised alarms about disrespectful and offensive clips, OpenAI paused further generation of MLK’s likeness and introduced a mechanism for estates and representatives to opt out. To their credit, they did something.
But consider:
This is reactive, not proactive. The misuse happened, people protested, and only then came a fix.
There is no consistent federal law that says “you can’t use someone’s likeness in an AI-generated video without consent” across the board. (Existing laws vary state to state and are patchy when it comes to deceased public figures, AI deepfakes, etc.)
The incident shows how one high-profile case can slip through the cracks: historical figure + AI + deepfake = risk to reputation, legacy, dignity. Yet the government didn’t step in ahead of time.
Why this matters & why it shows “not enough”
Public trust and fairness: If anyone can generate a video of MLK saying or doing something absurd, that undermines trust in media, justice and respect for history. The tech is moving faster than the rules.
Consent and rights: We’re in a world where your image, voice and likeness can be replicated by AI. Yet the regulatory framework lets large platforms patch things afterward, rather than requiring pre-emptive guardrails.
Visible harms as distractions: While we talk about big macro AI issues (model safety, compute power, export controls), very real harms happen at the “surface layer” — misuse of someone’s identity, misleading videos, legacy distortion. Because the regulation sits deeper (chips, exports), we miss the fact that the visible layer is still weak.
Regulatory confusion: Because the “official story” is that the US is hands-off, many people take “no regulation” to mean “no rules at all” — yet what they do see (such as the MLK case) is patchy self-regulation by companies, not strong federal law. That gap is dangerous.
My take: What the US should be doing (and soon)
Federal statute + uniform national standard for use of likenesses, voices and digital replicas of individuals (living or deceased). Make clear what’s legal, what’s not.
Obligatory transparency and consent frameworks for generative-AI platforms when dealing with real persons’ identities. No “oops we’ll put an opt-out later” after a backlash.
Public oversight & citizen rights: People should know when they’re being “replicated” — watermarking, disclosures, a right to removal.
Broaden visible regulation: It’s not enough to regulate chips and compute; policy must cover applications and social harms too. Yes, the infrastructure is vital — but so is the platform level where people’s lives and reputations are affected.
Global cooperation: AI is cross-border. If one country has minimal rules for identity misuse, it becomes a safe harbour for miscreants. Regulation cannot just hide in export controls and chip deals.
Final thought
So yes — the US is regulating AI, just differently than many expect. But the fact that we still have major gaps (like how MLK’s likeness was misused) shows we’re not yet keeping up with the visible layer of risk. Regulation shouldn’t just be about invisible supply chains; it needs to protect the people, identities and legacies visible to all of us.