In May 2025, millions of people saw Billie Eilish at the Met Gala.
Or so they thought.
Photos and videos of the pop star in a striking gown went viral. Some fans praised the look. Others mocked it. The media picked it up, influencers weighed in, and the internet did what it does best: react.
But there was one problem. Billie Eilish was not there.
“I wasn’t there. That’s AI. I had a show in Europe that night. Let me be,” she posted.
What seemed real was entirely fake. And it fooled everyone.
And this isn’t just celebrity gossip.
Earlier this year, former President Donald Trump shared a doctored image of Kilmar Abrego Garcia, a man who had been mistakenly deported, showing a photo of the tattoos on his hand. The text “MS13” had been digitally added to promote a political narrative. It wasn’t real, but it was convincing enough to gain traction.
“We’re reaching a point where it’s impossible to tell the difference between a real photo or video and a fake just by looking at it,” said Yuval Noah Harari.
Deepfakes and the death of the real
Using AI and machine learning to create fake videos of humans is nothing new. Researchers and hobbyists have been making deepfakes for years. But tools like OpenAI’s Sora and Google’s Veo 3 are rapidly closing the gap between the synthetic and the real. What once required technical skill and time now takes a few prompts, and the results are strikingly realistic.
A recent study at the University of Waterloo found that people can distinguish real images from AI-generated ones only 61% of the time. The findings raise serious concerns about how much we can trust what we see, and highlight the growing need for tools to detect synthetic content.
Fake humans, fake events, fake identities: all generated by AI, all indistinguishable from reality.
And if we can no longer trust what we see, what will be the foundations of our society?
We have entered an age of synthetic reality, and it raises one urgent question.
Should AI-generated humans be banned, like counterfeit money, to prevent the collapse of social trust?
Harari draws a sharp comparison:
“Governments have always had very strict laws against forging money, because they knew that if fake money were allowed to circulate, people would lose their trust in money and the financial system would collapse.”
Today, tools available to anyone with a laptop can create fake people who pass for real, from their facial expressions down to their tone of voice.
We’ve already seen fake immigrant photos used to push false narratives. AI-generated influencers are building followings while pretending to be real. Deepfake impersonations are defrauding businesses and families.
Harari’s warning is hard to deny:
“Fake humans should be illegal. We need to maintain social trust as much as financial trust.”
The parallels between counterfeit money and AI-generated humans
Counterfeit money undermines economic trust; AI-generated humans (deepfakes) undermine social and civic trust. Both are often created to deceive. Yet the first is banned, while the second remains largely unregulated or loosely governed.
Both destabilize the foundations of society, one economically, the other socially. If we protect the financial system by law, we should protect the system of truth in the same way.
Should the government intervene and ban AI-generated humans?
It’s no longer a theoretical conversation.
When synthetic humans can be used to deceive, manipulate, or impersonate at scale, it becomes a problem the law needs to address.
Harari clearly puts it:
“If we allow fake people to circulate, people will lose their trust in each other, and society will collapse.”
Just as governments banned counterfeit money to protect the economy, should the creation of fake humans be banned to protect social trust?
What counts as fake? Where do we draw the line? And how do we keep things from spiraling further?
Why a complete ban may be too extreme
A complete ban on AI-generated humans could also put an entire startup segment at risk. Over the past year, many new companies have launched offering AI-generated spokespeople, UGC videos, customer-service avatars, influencers, and video content creators. These startups are riding the wave of synthetic media innovation, building tools that blur the line between human and machine.
If governments impose strict bans without nuance, many of these businesses could be forced to shut down or pivot entirely. Investors could pull back, founders could face regulatory uncertainty, and the broader innovation ecosystem could stall. While some of these tools are vulnerable to abuse, others simply aim to cut costs or democratize access to video production.
The challenge is to draw a line that protects society without strangling innovation.
Not all synthetic human use is harmful.
Blanket bans would:
Block legitimate use cases in film, games, education, and accessibility.
Push abuse further underground, into anonymous or offshore models.
Risk censoring creativity and parody by conflating them with deception.
Even Harari, while calling for a ban on deception, is not opposed to AI. What he seeks are restrictions on AI pretending to be human.
The point is not to ban all AI-generated humans. It’s to stop the ones designed to mislead.
What can we do instead?
Even if a complete ban makes no sense, strong guardrails are needed.
1. Require disclosure:
AI-generated human content must carry clear labels, both on screen and in its metadata.
2. Hold platforms accountable:
Social networks and publishers should be required to flag fake human content, just as they do spam and fraud.
3. Criminalize deceptive use:
Using AI to impersonate someone without their consent, whether the likeness is captured or constructed, should be treated like identity theft.
4. Bake in digital fingerprints:
AI tools that generate human-like content should embed invisible signatures so that origins can be traced.
5. No disguises allowed:
AI systems should never pretend to be human in conversation, marketing, or media. Period.
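To make the fingerprinting idea concrete, here is a minimal, purely illustrative Python sketch of a provenance manifest: the generator hashes and signs the content, and a verifier can later detect tampering. This is a toy under assumed names; real schemes (such as C2PA-style content credentials) use public-key cryptography and embed the manifest inside the media file itself, rather than a shared demo key.

```python
import hashlib
import hmac
import json

# Illustrative only: real provenance systems use PKI, not a shared secret.
SECRET_KEY = b"demo-generator-key"

def sign_content(content: bytes, generator: str) -> dict:
    """Produce a manifest binding the content to its (hypothetical) generator."""
    manifest = {
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(content: bytes, manifest: dict) -> bool:
    """Check that the content matches the manifest and the signature is intact."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed.get("sha256"):
        return False  # content was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

video = b"...synthetic video bytes..."
manifest = sign_content(video, generator="example-video-model")
print(verify_content(video, manifest))        # True: intact and signed
print(verify_content(b"tampered", manifest))  # False: content changed
```

The design point is the same one the guardrails make: the label travels with the content, so a platform or viewer can check origin mechanically instead of trusting their eyes.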
None of these measures will stop every deepfake. But they raise the cost of abuse and send a clear message: truth matters.
Why does the government have to act?
This is not about overregulation. It’s about drawing a line between truth and fiction.
The real threat is not AI. It’s an AI pretending to be human.
Just as fake cash erodes the economy, fake people erode trust in institutions, in relationships, and in reality itself.
Here is what governments can do now:
Make it illegal for AI to impersonate a human, real or fictional, without disclosure.
Require labels on AI-generated faces and voices.
Fine or penalize platforms that let synthetic humans spread unchecked.
Treat malicious deepfakes like digital counterfeiting and harassment.
Final Thoughts
We are now at the point where a celebrity has to deny attending an event she was never at… where a president shares fake photos to make a political point…
We are no longer discussing the future. We live in it.
Now we must draw the line, in law, in technology, and in public expectations, before truth itself becomes optional.