AI systems can collect data without consent and without clear limits. Camera footage, location data, biometrics, and digital traces become raw material for algorithms capable of reconstructing someone’s life in remarkable detail. Similar concerns arise in neurotechnology, where UNESCO warns of risks to mental privacy and the possibility of unauthorized biometric monitoring of the brain.
Models learn from data that reflect social inequalities. Facial recognition systems misidentify women and minorities more often, which can lead to wrongful identifications and unfair policing.
Article 5 of the AI Act prohibits:
- real‑time remote biometric identification in publicly accessible spaces
- untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- biometric categorisation based on sensitive traits
- emotion recognition in schools and workplaces
- social scoring
- AI systems that manipulate behaviour by exploiting vulnerabilities
Exceptions exist in only three situations: searching for missing persons or victims of abduction, preventing a specific and imminent terrorist threat, and locating suspects of the most serious crimes. Even then, prior judicial authorization and strict proportionality are required.
Europe has created a framework that protects everyone on its territory, regardless of nationality.
While Europe defines what is off‑limits in advance, the U.S. often reacts only after public outrage. The recent OpenAI–Pentagon case is a perfect example.
OpenAI entered into an agreement with the U.S. government to deploy its models in classified military operations. The company insisted the deal included more safeguards than previous military AI deployments. But the public reaction was swift and intense. Users began uninstalling ChatGPT, employees raised concerns, and researchers warned that the company was drifting away from its stated principles.
The timing made the situation even more controversial. The Pentagon had just halted its collaboration with Anthropic over fears that Claude could be used for mass surveillance or autonomous weapons. OpenAI suddenly became the replacement, raising questions about whether the deal had been rushed and poorly considered.
Under pressure, CEO Sam Altman announced that the agreement would be revised. He admitted the deal had been “opportunistic and sloppy” and that the company had rushed its announcement. He also said the contract would explicitly prohibit using OpenAI systems for domestic surveillance of Americans and restrict use by intelligence agencies unless the agreement was amended.
Although these changes were meant to calm critics, they left major questions unanswered. What counts as “intentional use”? Who ensures compliance? And why does the protection apply only to U.S. citizens?
This episode illustrates how fragile the line between innovation and responsibility becomes when decisions are made behind closed doors.
The EU AI Act does not rely on voluntary promises from companies.
It defines obligations, bans, and penalties. It doesn’t wait for scandals to erupt; it sets boundaries in advance. In this system, technology must adapt to society.
In the U.S., decisions are often made quietly, and the public reacts only when information leaks. In Europe, rules are created through transparent, democratic processes.
Responsible use of AI surveillance rests on several principles:
- minimal intrusion
- transparency toward the public
- clear institutional accountability
- testing systems for bias
- human oversight in every decision
These are not merely technical guidelines; they are the foundation of a social contract for the digital age.
Broadly, three models of AI surveillance are emerging.

1. The security‑driven model
Governments use AI surveillance broadly and aggressively. Safety increases, but freedom shrinks.
2. The democratic model
AI is used selectively, with strict oversight and transparency. This is the European path.
3. The corporate model
Private companies set the boundaries. This raises questions about accountability and democratic control.
Europe has clearly chosen the second model. The rest of the world is still deciding.
AI surveillance is, ultimately, a question of power, freedom, and values.
Artificial intelligence can improve safety, but it can also undermine fundamental rights. It can help find missing people, but it can also create a system that tracks every citizen at every moment.
That is why societies must define boundaries before the technology defines them for us. Europe has done this through the AI Act. The United States is still searching for a balance between innovation and responsibility. And the rest of the world is watching closely.
As AI continues to evolve, the real challenge of the digital age may not be building technology that sees everything, but building a society that understands what it wants to see.