Misinformation and disinformation have emerged as serious problems in the 21st century. Although false information has existed since the dawn of human civilization, artificial intelligence (AI) is exacerbating these challenges. AI tools make it easy for anyone to create fake images and news stories that are hard to distinguish from accurate information. From elections to wars, those with ill intentions can mass-produce propaganda and disseminate it on social media.
According to NewsGuard, which tracks fake news sites, the number of AI-enabled fake news sites increased tenfold in 2023. These sites operate with little or no human supervision.
This does not mean that we are defenseless. Researchers, tech companies, and governments are collaborating to fight AI-powered misinformation with AI technology. Platform companies partner with professional fact-checkers and content moderators to label false information and use those labels to detect misinformation early in the diffusion stage. Previous research shows that during the COVID-19 pandemic, this early detection and filtering approach significantly reduced exposure to misinformation. Social scientists are gathering evidence to identify fabricated information, explain what distinguishes true statements from false ones, and determine who spreads which kinds of information.
Beyond automatically detecting fake news and images, greater media literacy is becoming increasingly important. Media literacy emphasizes the critical skills and mindsets required for navigating the digital space. Media literacy programs also offer techniques for quickly spotting fake imagery and resources for tracking down the original source of questionable information. My recent projects investigated whether media literacy actually helps Internet users process information critically and make informed decisions. The results demonstrate the efficacy of media literacy skills. In particular, journalists and news media play a significant role in closing the gap between the digitally literate and the digitally illiterate.