
Visual disinformation has become a global phenomenon, and the risk of being taken in by it grows every day.
Perhaps you recall the photo of Pope Francis in a Balenciaga puffer jacket, or the images of Donald Trump supposedly being arrested. Both were AI-generated, yet they deceived millions of people worldwide, fueling false narratives and polarizing public debate.
The problem? The quality of such images keeps improving. Tools like Midjourney and Stable Diffusion can generate hyper-realistic detail, making it nearly impossible to distinguish truth from fabrication with the naked eye.
Fortunately, there are tools to help us detect AI-generated images. One example is IdentifAI, which uses neural networks to analyze pixels and metadata, flagging signs of manipulation.
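
While IdentifAI's internals are proprietary, the metadata side of such a check is easy to illustrate. Below is a minimal Python sketch using Pillow (the file name is hypothetical): some image-generation front ends embed the prompt in PNG text chunks, while genuine camera photos usually carry an EXIF block.

```python
from PIL import Image

def inspect_metadata(path: str) -> None:
    """Print metadata fields that may hint at AI generation."""
    img = Image.open(path)

    # PNG text chunks: some generation front ends embed the prompt and
    # sampler settings here (often under a key such as "parameters").
    for key, value in (img.info or {}).items():
        print(f"{key}: {str(value)[:120]}")

    # EXIF block: genuine camera photos usually carry make, model, and
    # timestamp tags; generated or heavily re-encoded images often have none.
    exif = img.getexif()
    if not exif:
        print("No EXIF data found.")
    else:
        for tag_id, value in exif.items():
            print(f"EXIF tag {tag_id}: {value}")

inspect_metadata("suspect_image.png")  # hypothetical file name
```

Keep in mind that social platforms routinely strip metadata on upload, so an empty result proves nothing by itself; this kind of check only complements a dedicated detector.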
Other options include TinEye, which runs reverse image searches to check whether an image has been altered or comes from an unreliable source, and Google Lens, which can help verify whether the same image has already appeared elsewhere in a different context.
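
The core idea behind a reverse image search can be sketched with perceptual hashing: compact fingerprints that survive resizing and re-compression. The example below uses the `imagehash` library and is an illustration of the general technique, not TinEye's or Google's actual implementation; the file names are hypothetical.

```python
from PIL import Image
import imagehash  # pip install ImageHash

def likely_same_source(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Compare two images by perceptual hash (pHash).

    Perceptual hashes change little under resizing and re-compression,
    so a small Hamming distance suggests one image is a variant of the
    other. The threshold is a rough heuristic, not a calibrated value.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    distance = hash_a - hash_b  # Hamming distance between 64-bit hashes
    print(f"Hamming distance: {distance}")
    return distance <= threshold

# Hypothetical file names for illustration.
print(likely_same_source("original.jpg", "suspect_copy.jpg"))
```

A search engine applies the same principle at scale, indexing fingerprints for billions of images so a single query can surface earlier or altered versions.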
Beyond these tools, certain details can give a fake away: hands with deformed fingers, asymmetrical eyes, oddly blurred backgrounds, or garbled text within the image.
We cannot stop this technology, but we can learn to recognize its output, and avoid becoming unwitting instruments of disinformation.