AI and Language Bias

By Janet Coats, Managing Director, Consortium on Trust in Media and Technology

Research we’ve been conducting at the Consortium on Trust in Media and Technology finds that journalists use a common language to describe controversial and potentially divisive topics in ways that could further damage trust in reporting. The research suggests that by steering word choices away from inherently biased framings and toward the authentic language people use to describe their experiences, we may find one pathway that engenders trust.

Can we use AI tools to help us do that? The answer is a resounding yes.

The language journalists choose can signal that the information is trustworthy. Understanding the intention in language, and how audiences perceive that intention, is an essential element in coding for trust. We know, for instance, that persuasive language shifts perceptions, and that disinformation is framed with the intention of provoking powerful emotions like fear and anger. Language precision is too important to leave to gut instinct, and expecting audiences to reliably decode our intended meanings asks too much of them.

Using an AI-assisted linguistics approach, we analyzed coverage of controversial subjects such as abortion, climate change, and public protests. In each instance, we found common patterns of language used by journalists across media platforms. One case in point is coverage of the murder of George Floyd in 2020 and the protests that followed. Using computational linguistics, we saw that verbs used to describe protest actions repeatedly drew comparisons to fire or destruction, such as “spark,” “fuel,” “erupt,” “ignite,” “trigger” and “flare.” Is the recurrent use of this fiery language a deliberate choice, or is it a subconscious pattern? Is it accurate in all cases or simply a default?

What impact does that have on the perception of these demonstrations and of the people participating in them?
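
To make that kind of analysis concrete, here is a minimal sketch of verb-pattern counting over a corpus of coverage. It is not the Consortium’s actual pipeline: it assumes the open-source spaCy library, and the fire/destruction lexicon and two-sentence corpus below are illustrative stand-ins for the real data.

```python
# A minimal sketch of verb-pattern analysis over protest coverage.
# Not the Consortium's pipeline: the lexicon is illustrative and the
# corpus is a stand-in for a real collection of articles.
from collections import Counter

import spacy  # assumes: pip install spacy && python -m spacy download en_core_web_sm

# Hypothetical lexicon of fire/destruction verbs, drawn from the findings above.
FIRE_LEXICON = {"spark", "fuel", "erupt", "ignite", "trigger", "flare"}

nlp = spacy.load("en_core_web_sm")

def fire_verb_counts(articles):
    """Count how often fire/destruction verbs appear across a corpus."""
    counts = Counter()
    for text in articles:
        doc = nlp(text)
        for token in doc:
            # Lemmatize so "sparked" and "sparking" both count as "spark".
            if token.pos_ == "VERB" and token.lemma_.lower() in FIRE_LEXICON:
                counts[token.lemma_.lower()] += 1
    return counts

# Stand-in corpus; a real analysis would load thousands of articles.
corpus = [
    "The verdict sparked protests that erupted across the city.",
    "Anger fueled demonstrations, and tensions flared near downtown.",
]
print(fire_verb_counts(corpus))  # e.g. Counter({'spark': 1, 'erupt': 1, ...})
```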

Based on these findings, we’re developing an AI-assisted tool journalists can use to identify potentially biased language and use that feedback to make more intentional word choices. The tool is aimed at equipping journalists with the insights to make informed decisions in their writing. Operating in real time, it flags words that may require careful consideration. By providing richer context about the connotations of language, it gives journalists the opportunity to ask themselves: Is this really what I meant to say? Does this accurately represent the events I’m describing? Is this language biased?
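
As a sketch of how such real-time flagging might work, the snippet below matches drafted text against a small lexicon and returns a reflection prompt for each hit. The lexicon, the prompts, and the flag_draft helper are hypothetical illustrations under simple assumptions, not the Consortium’s tool, which would draw on much richer connotation data.

```python
import re

# Hypothetical lexicon: flagged word stems mapped to reflection prompts.
# A production tool would use context, not bare pattern matches.
FLAGGED = {
    r"\b(spark\w*|ignit\w*|erupt\w*)\b": "Fire metaphor: does this accurately represent the event?",
    r"\b(trigger\w*|flare\w*|fuel\w*)\b": "Connotes escalation: is this really what you meant to say?",
}

def flag_draft(text):
    """Return (word, prompt) pairs for terms that may deserve a second look."""
    hits = []
    for pattern, prompt in FLAGGED.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((match.group(0), prompt))
    return hits

draft = "The ruling sparked outrage and triggered new demonstrations."
for word, prompt in flag_draft(draft):
    print(f"{word!r}: {prompt}")
```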

Through this work, we’ve identified three factors we hope to improve in the “language of journalism”: authenticity, intention and precision. These ideas have implications beyond language, extending to how journalists approach their reporting. For too long, journalists have “parachuted in” to communities where they have no connection to extract news of the tragic, then exited as quickly as they arrived, not to be seen again until the next tragedy. The default language of this kind of journalism is one of distance. Trust is built up close; doing that requires a language of proximity. We can use technology to help us see the distant “language of journalism” we’re using and to understand the bias we may not intend but practice nonetheless.
