An AI-generated image of Pope Francis, created using Midjourney.

These images were the product of generative AI, a term that refers to any tool based on a deep-learning software model that can generate text or visual content based on the data it is trained on.

Of particular concern for open source researchers are AI-generated images. DALL-E, Stable Diffusion, and Midjourney - the latter was used to create the fake Francis photos - are just some of the tools that have emerged in recent years, capable of generating images realistic enough to fool human eyes.

AI-fuelled disinformation will have direct implications for open source research: a single undiscovered fake image, for example, could compromise an entire investigation. Earlier this year, the New York Times tested five tools designed to detect these AI-generated images.