
Whether what we are seeing is real or artificially generated is a question we find ourselves asking more and more often when faced with digital content. And while the question may arise almost spontaneously, the answer is not always immediate, especially without the proper tools to assess it. It is to bridge this technological and interpretative gap that IdentifAI was founded in 2024. The Milan-based start-up specializes in training “degenerative” models, designed to identify AI-generated content with an accuracy rate approaching 100%.
“Our goal is to guarantee a fundamental human right: the ability to know the true origin of the images, audio tracks, and videos circulating online,” explains Marco Ramilli, co-founder of IdentifAI, international cybersecurity expert, author, and ethical hacker. “Every day, we are called upon to take a stance on a wide range of issues—but how can we do so consciously and independently if we remain exposed to unverified and potentially false content?”
One of the best-known deepfake cases was, in fact, the trigger that led Ramilli and his business partner, Marco Castaldo, both already active in cybersecurity, to create IdentifAI.
“I still remember being in a taxi,” recalls Ramilli, “when some friends called to ask what I thought of an incredible photo circulating online: the Pope wearing an expensive designer puffer jacket. What struck me was not the photo itself, but the degree of polarization it had already generated on social media within just a few hours of its release. Even after the Holy See denied it and it was revealed to be AI-generated, the public continued to debate it. In that moment, I realized the impact that such content can have on society, and I told myself: I want to find a solution.”
Fraud and forgery are as old as time. But while in the past only a select few possessed the skills to carry out elaborate deceptions, today artificial intelligence has become a tool accessible to all.
“The ease of use of generative AI will cause the volume of manipulated content to skyrocket,” warns Ramilli. Tackling this issue will therefore require both greater public awareness and the development of robust detection technologies. “We are moving toward a future in which we will be forced to prove reality, not lies,” he reflects. “It is a radical paradigm shift.”
Considering that 90% of the information transmitted to the human brain is visual, and that images are processed 60,000 times faster than text, the risks are numerous for individuals and organizations alike. In the economic sphere, deepfakes may be used to manipulate financial markets, for example through fake videos of executives announcing decisions or crises that never occurred. They can also harm individuals through impersonation scams that lead to the theft of sensitive personal data.
The political and geopolitical domains are no less at risk: distorted information poses a direct threat to democracy and to the stability of international relations, which are already fragile—or in some cases, entirely compromised.
The challenge for the future is to put everyone in a position to instantly know the true origin of the images or videos they are viewing. “We believe that time is a critical factor when it comes to visual content, and therefore to emotions,” comments Ramilli. “If, for example, we see a photograph depicting the destruction caused by a bombing, our immediate reaction is to take sides and feel hatred toward those responsible. Later, when we discover that the image was AI-generated, we realize we have been manipulated on the very level that makes us most vulnerable as humans: our emotions.
“Nor should we overlook the potential consequences of the practical decisions we might make in reaction to particularly impactful misinformation. This is why it is our right to know from the outset whether what we are looking at is real or not.”
To achieve this, detection systems like IdentifAI's must be deployed upstream, so that manipulated images are already clearly flagged as non-authentic by the time they reach viewers.
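To make the idea of an upstream check concrete, here is a minimal sketch of how a publishing pipeline might label an image before it reaches viewers. The `detect_ai_generated` stub, the `DetectionResult` type, and the 0.9 threshold are hypothetical stand-ins for illustration, not IdentifAI's actual API or model.

```python
from dataclasses import dataclass


@dataclass
class DetectionResult:
    # Probability that the image is AI-generated:
    # 0.0 = almost certainly authentic, 1.0 = almost certainly generated.
    ai_probability: float


def detect_ai_generated(image_bytes: bytes) -> DetectionResult:
    """Hypothetical stand-in for a call to a detection model or service.

    A real deployment would send the bytes to a classifier (such as the
    "degenerative" models described above); here we return a fixed score
    so the sketch is self-contained and runnable.
    """
    return DetectionResult(ai_probability=0.97)


# Illustrative cutoff, not a vendor-recommended value.
FLAG_THRESHOLD = 0.9


def publish(image_bytes: bytes, metadata: dict) -> dict:
    """Upstream hook: run detection before the image is served,
    so the label travels with the content instead of arriving later."""
    result = detect_ai_generated(image_bytes)
    metadata["ai_probability"] = result.ai_probability
    if result.ai_probability >= FLAG_THRESHOLD:
        metadata["label"] = "flagged: likely AI-generated"
    else:
        metadata["label"] = "no manipulation detected"
    return metadata


if __name__ == "__main__":
    # Example: an uploaded image flows through the hook and comes out labeled.
    labeled = publish(b"\x89PNG...", {"source": "example-upload"})
    print(labeled["label"], labeled["ai_probability"])
```

The design point is simply that the verdict is attached at publication time, so it accompanies the image from the first moment a viewer sees it rather than arriving after the emotional reaction has already taken hold.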
“Beyond technological support,” concludes Ramilli, “it remains vitally important to teach the younger generations—constantly immersed in a stream of manipulated visual content—to always question the source of what they are seeing.”