New Research Explains Why Information Spaces Are Rife With Misinformation
A new model out of the University of Tennessee and the Institute of Evolutionary Biology in Barcelona enables stakeholders to estimate misinformation risk
Public policy can focus on mitigating the general conditions for online misinformation by monitoring two characteristics, information transparency and popularity bias, across different media spaces, according to a new research study co-authored by a University of Tennessee, Knoxville professor and Howard H. Baker Jr. Center affiliate.
The research showed that the risk of misinformation is highest in environments where people tend to copy one another easily and there is little transparency about where information came from or whether it is credible.
The research also advances the field of cultural evolution, specifically the study of how societies accumulate knowledge over time through “big leaps” that lead to paradigm shifts, followed by many “little leaps” that build on each advance. In such environments, science, health recommendations and genuine advances in knowledge are left behind, not because of any specific agenda or conspiracy, but because the environment becomes more like a lottery than a system in which the best information is selected.
The spread of misinformation is a challenge for public policy, whether that challenge is trust in public institutions and science, health-related behaviors, or civil discourse in society. Online media complicates this with a flood of information, choices, social pressure and algorithms.
To address this challenge, a new research study co-authored by Alex Bentley, professor and Baker Center affiliate at the University of Tennessee, Knoxville, and published in the Journal of the Royal Society Interface, proposes looking at the problem in a new way: How can we identify the information spaces where misinformation spreads easily and improve those spaces generally, rather than continually responding to each new instance?
Through mathematical modeling and simulation, researchers led by Blai Vidiella, a postdoctoral researcher at the Institute of Evolutionary Biology (UPF-CSIC) in Barcelona, focused on the balance between two characteristics of the decision space: “popularity bias,” the relative tendency to copy others, and “transparency of information,” a measure of how clearly a piece of information can be judged on its merits or benefits.
“The critical threshold is the relative balance of popularity bias to information transparency,” Vidiella explained. When popularity bias is high, such as when algorithms sort choices by “most liked” or “most viewed,” the space is primed for misinformation to spread and for useful advances in knowledge to slow or stop entirely.
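The paper’s actual equations are not reproduced in this article, but the trade-off Vidiella describes can be illustrated with a toy agent-based simulation. The sketch below is a minimal illustration under assumed rules, not the authors’ published model: the parameter names (transparency, popularity_bias), the merit-weighted versus popularity-weighted update rule, and all numbers are illustrative assumptions.

```python
import random

# Hypothetical toy model of the transparency vs. popularity-bias trade-off.
# Parameter names and update rules are illustrative assumptions, not the
# equations from the published study.

def simulate(n_agents=1000, n_options=20, transparency=0.3,
             popularity_bias=2.0, n_steps=5000, seed=0):
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_options)]    # intrinsic merit of each option
    choice = [rng.randrange(n_options) for _ in range(n_agents)]

    for _ in range(n_steps):
        agent = rng.randrange(n_agents)
        if rng.random() < transparency:
            # Transparent evaluation: choose in proportion to merit.
            picked = rng.choices(range(n_options), weights=quality)[0]
        else:
            # Popularity-biased copying: choose in proportion to current
            # popularity raised to the bias exponent.
            counts = [0] * n_options
            for c in choice:
                counts[c] += 1
            weights = [(c + 1) ** popularity_bias for c in counts]
            picked = rng.choices(range(n_options), weights=weights)[0]
        choice[agent] = picked

    # Fraction of agents who end up holding the highest-merit option.
    best = max(range(n_options), key=lambda i: quality[i])
    return sum(1 for c in choice if c == best) / n_agents

print(simulate(transparency=0.05))  # popularity-dominated: best option rarely wins
print(simulate(transparency=0.8))   # transparency-dominated: merit tends to win out
```

In this kind of toy setup, when transparency is low relative to popularity bias, which option dominates is essentially a lottery amplified by copying; raising transparency lets the highest-merit option reliably spread, mirroring the threshold behavior described above.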
Using only this kind of aggregate popularity data, and without invading the privacy of individual users, the study shows how to estimate the likelihood that false information will spread in a given space.
“We can use this model to identify information spaces that are poised for misinformation to spread and suggest modifications that would help surface evidence and expertise,” Bentley added.
“Our evolutionary model can account for a wide range of behaviors,” said Sergi Valverde, co-author and tenured scientist at the Institute of Evolutionary Biology. “It demonstrates how information transparency interacts with popularity bias and population size to drive the advance of knowledge.”
The full paper is published in the Journal of the Royal Society Interface.