AI and News

By Frank Waddell, Associate Professor, Journalism

The capabilities of AI as a content producer are expanding. News writing is no exception: major news organizations now rely on generative AI to write thousands of articles on topics like finance, weather and sports.

AI’s ability to write the news, however, does not guarantee that news readers are receptive to it. News-writing algorithms offer both challenges and opportunities to news organizations. For example, AI-powered reporting can free newsrooms from monotonous, data-focused tasks (e.g., stock reports) while maintaining a high level of precision. However, this is only a benefit if news readers are willing to accept AI in this new role; moving from AI that cleans our rugs to AI that writes our news is a big leap!

We are exploring this question of audience receptivity to news-writing AI. A typical study compares how audiences respond to the same news article attributed to either a human or an AI author. Because the content itself is unchanged, any difference we find reflects the reader responding differently to the AI, rather than to a stylistic difference between authors.

Our results show a unique pattern where audiences prefer a “man-machine marriage.” Specifically, audiences see the news as less biased when they believe the content has been co-authored by AI and humans working together than when the same news is attributed to either a human or an AI author alone. We believe this happens because AI-human co-authorship offers the best of both worlds: the reader gets the assumed human touch of a traditional journalist alongside the assumed objectivity of AI. Notably, this effect arises purely from perceptions of the writer, without any actual difference in the writing itself.

Although this research shows the possible utility of AI, there are also challenges. It is unclear how AI should be credited for its contributions to news: do we call it an algorithm, a robot reporter, or something else altogether? Our research shows that audiences prefer to know from the start if AI is the author of news (rather than saving the disclosure until the end, as some news organizations do). Attribution blindness can also be a problem: put simply, news readers often miss or ignore the authorship of news articles. In studies where we encourage participants to pay attention to the author of a story, their recall of the source improves, so news organizations might consider how their websites can prompt this attention naturally so that readers reap the benefits of AI authorship.

AI-generated news thus holds the potential to free journalists from otherwise banal tasks while gaining the benefits of the perceived objectivity that AI can garner from the reader. This works best when readers assume the two sources have worked together, e.g., the AI writes the story, but a reporter or editor double-checks it. News organizations must continue to work toward proper attribution of AI and carefully design their websites to clearly indicate when AI is a source, so that the barrier of attribution blindness can be avoided.

Our research at UF will continue to explore these possibilities to best understand audience response to generative AI in the changing news landscape. 
