
The Italian Idea to Detect Deepfake Photos and Videos: “Our De-Generative Models Work in 94% of Cases”

Sep 25, 2025

We Tested the IdentifAI Platform. Co-Founder Ramilli: “We Do Not Use a ‘Human’ Approach by Looking at the Image, but a Purely Mathematical One—We Analyze the Pixel Matrix”

“If AI can deceive the human mind, it can also be used to recognize generated content,” says Marco Ramilli, co-founder of IdentifAI. The start-up, founded together with CEO Marco Castaldo, has developed an AI-based platform capable of distinguishing synthetic from authentic content: in effect, a weapon against deepfakes and disinformation.

Ramilli, a cybersecurity expert and white-hat hacker, explains how the platform works: “We have 36 proprietary models we call ‘de-generative,’ capable of analyzing images and videos not semantically, but by examining the probabilistic patterns of pixels. We don’t look at the scene as a human would, but at the data texture.” This approach, which avoids searching for superficial details typically associated with deepfakes (such as distorted fingers) and instead relies purely on mathematics, promises to detect artificiality with high accuracy—even in sophisticated content.
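
To give a concrete, if simplified, sense of what analyzing “the data texture” rather than the scene can look like, here is a toy Python sketch that scores an image purely from its pixel statistics (here, the share of spectral energy in high frequencies). It is an illustration only, not IdentifAI’s method: the company’s 36 de-generative models are proprietary and presumably learned rather than hand-coded.

```python
# Illustrative only: a toy pixel-statistics score, not IdentifAI's proprietary
# "de-generative" models. It shows the general idea of judging an image by
# low-level pixel structure rather than by what the scene depicts.
import numpy as np

def high_frequency_ratio(pixels: np.ndarray) -> float:
    """Share of spectral energy in the high frequencies of an image.

    A real detector would learn which pixel-level statistics matter;
    this hand-coded ratio is only meant to make the idea tangible.
    """
    gray = pixels.mean(axis=2) if pixels.ndim == 3 else pixels
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    y, x = np.ogrid[:h, :w]
    radius = np.sqrt((y - h / 2) ** 2 + (x - w / 2) ** 2)
    high = spectrum[radius > min(h, w) / 4].sum()
    return float(high / spectrum.sum())

# Pure noise has an almost flat spectrum, so most energy sits in high frequencies.
noise = np.random.rand(256, 256, 3)
print(f"high-frequency energy ratio: {high_frequency_ratio(noise):.3f}")
```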

Supporting the classification process are heuristics—strategies that guide problem-solving and decision-making. “Each model issues a judgment. For example, one may indicate a 99% probability of artificiality, another 50%. Our heuristics select the most reliable model.” Essentially, each model competes with the others in flagging false content, while additional algorithms determine which has performed best. IdentifAI also offers integration via API and an on-premise option (running on the client’s own servers rather than in the cloud) to protect client privacy.
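
A rough sketch of that selection step, assuming each model returns a probability of artificiality together with some reliability measure, might look like the following; the names, fields, and selection rule are hypothetical, since IdentifAI’s actual heuristics are not public.

```python
# A minimal sketch of the ensemble step described above. The assumption is that
# each model returns a probability of artificiality plus a reliability measure,
# and that the heuristic simply trusts the most reliable model for a given input.
# Names, fields and the selection rule are hypothetical, not IdentifAI's.
from dataclasses import dataclass

@dataclass
class Verdict:
    model_name: str
    p_artificial: float  # the model's probability that the content is AI-generated
    reliability: float   # e.g. how well this model has performed on similar content

def select_verdict(verdicts: list[Verdict]) -> Verdict:
    """Return the judgment of the model deemed most reliable for this input."""
    return max(verdicts, key=lambda v: v.reliability)

verdicts = [
    Verdict("model_a", p_artificial=0.99, reliability=0.92),
    Verdict("model_b", p_artificial=0.50, reliability=0.81),
]
best = select_verdict(verdicts)
print(f"{best.model_name}: {best.p_artificial:.0%} probability of artificiality")
```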

How effective is the system? The company reports that the platform was tested in an international competition, achieving 88% accuracy—a figure which, according to further internal testing, has since improved to 94%. “One of our goals is to formally certify these advancements,” Ramilli stresses.

In our newsroom, we were able to test the platform over the past few weeks using images and videos that made recent headlines. The results were, in most cases, very strong—though still not flawless. “It is important to note,” clarifies Ramilli, “that we provide probabilities. Except in cases exceeding 97%, we do not recommend automatically filtering images or videos. The goal is always to leave a human review downstream.”
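
In code terms, the policy Ramilli describes amounts to a simple routing rule: only scores above the 97% threshold he cites are even candidates for automatic filtering, and everything else is sent to a human reviewer. The sketch below is illustrative, not the platform’s actual logic.

```python
# Sketch of the review policy described in the interview: scores above the 97%
# threshold are candidates for automatic filtering, everything else is routed
# to a human reviewer. Function name and labels are illustrative.
def route_content(p_artificial: float, auto_filter_threshold: float = 0.97) -> str:
    if p_artificial > auto_filter_threshold:
        return "eligible for automatic filtering"
    return "flag for human review"

for score in (0.99, 0.85, 0.40):
    print(f"score {score:.0%} -> {route_content(score)}")
```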

Examples tested included widely circulated images such as the fake photo of Giorgia Meloni and Elon Musk kissing, alongside authentic material, a deepfake video, and other manipulated pictures.

Where could such a system be applied? Not only in the fight against fake news, but also in industries where fraud detection is crucial—from banking to insurance, and even in less obvious areas. “We have found surprising cases in sectors like maintenance, where manipulated photos were used to fabricate reports,” Ramilli recounts.

With €2.2 million in funding led by United Ventures, IdentifAI is preparing for further expansion. “We are working with institutions and media organizations to counter disinformation,” Ramilli concludes. Corriere della Sera, too, has been involved in the Fight for Truth initiative, which equips media professionals with tools to defend the principles of truth and transparency.
