“If you think this is biased, it is biased,” said media personality Abbie Chatfield of her interview with Australian PM Anthony Albanese. [1] Rather than claiming neutrality, Chatfield embraces bias – an unusual stance in a world grappling with misinformation. People aged 18-34 rely more on social media for news than older generations, with TikTok (14%) and ‘news influencers’ (20%) emerging as sources of truth – a practice known as ‘social searching.’ [2][3]
While it can be helpful when done well, social media-based news carries growing risks as misinformation spreads across all walks of life – beyond politics and into misleading medical diagnoses, potentially harmful homemade sunscreen recipes, and the trivialization of weather disasters and emergencies. In the current political climate, it’s easy to view these phenomena as ‘biased.’ And when we think of bias, much of the research points to ‘confirmation bias’ – the idea that what people believe to be true is shaped by their pre-existing beliefs. But what if we’re missing a big part of the problem? What if there’s more to bias than simply confirming what we already know?
According to Aaron French, an assistant professor of information systems at Kennesaw State University, “Confirmation bias says that you believe something based on your existing values. But what if I get influenced by disinformation on a topic I didn’t have a pre-existing opinion or belief on?” That’s exactly what happened when people sympathized with an AI-generated victim of Hurricane Helene and rushed to ‘medbed’ online groups for miracle cures. French’s paper, ‘The Impact of Cognitive Biases on the Believability of Fake News,’ has huge implications for how we think about the way people process information and what they believe to be true. [4] Canvas8 spoke with French to better understand the range of biases informing people’s approach to misinformation on social media.
As a society, we're becoming blinded by the information we receive. I have lifelong friends who are at each other’s throats because of political differences. When I see them arguing, the key points they argue with are specific quotes from the news channels or sources they're getting information from. These aren’t their original thoughts – they're repeating what they've heard. And a lot of research has treated confirmation bias – people’s pre-existing beliefs – as the sole reason why people believe misinformation. But that’s really only part of the equation.
We know these polarized extremes are a problem. It’s important to look at this because there are people in the middle, but they get attacked by both sides whenever they speak because they don't fully align with the beliefs of either one. There is a silent majority that we need to hear from. If we can better understand why people fall for fake news, perhaps we can lower polarization enough to be able to have conversations again. If we do that, perhaps the middle will come back out and we'll see more progress.
Of course, we're not going to be able to stop fake news – it’s been happening for ages. But in today’s world, if we can disentangle the legitimate news from the noise or interference that's trying to manipulate people even further, then we might be able to get to a point where we can get closer together.
There’s a gap right now in understanding what causes people to believe information. Many articles conclude that confirmation bias is the sole influence on the believability of information. But our research found that isn’t necessarily the case. If the only way to combat fake news is by changing people’s beliefs, that’s likely a challenge we can’t overcome. It’s only half the equation, because we’re not looking at the other cognitive biases that may influence people. By considering these other factors, we could address the believability of a lot of fake news. Maybe we can’t resolve actual bias in the media, but we can at least start filtering out some of the external noise that has made polarization worse. Maybe we can start developing mitigation strategies to alleviate and reduce these other biases.
For our study, we wanted to understand the thought process people go through when assessing whether or not they would believe a piece of information. To start, we didn’t want to condition people to think about fake news. We just asked them general questions about how they read information and how they verify it. Then, we got into fake news specifically and asked them about their experiences. How do they prevent themselves from sharing fake news? Have they ever shared fake news, either intentionally or unintentionally? We then coded people’s interviews based on ten cognitive biases identified from the literature. If all the coders identified the same quote as being part of the same bias, we knew there was a high chance that it was accurate.
We found five main cognitive biases. Confirmation bias was among them, but it was not the most significant. We also identified framing bias, anchoring bias, overconfidence bias, and herding bias, which was the most impactful. What is it? Well, whenever you look at a post on social media, you see the number of likes and the number of comments. The higher the number of those ‘virality statistics,’ the more believable that piece of information is to the people reading it.
The anchoring bias says you’re ‘anchored’ to the first piece of information you see. If the first story I see about tariffs in Canada is that it’s bad, any other information that goes against that is going to be hard for me to believe. Framing bias has to do with the way a story is told – if it sounds logical, more people are going to believe it. And lastly, overconfidence bias: many people overestimate their own ability to identify fake news, which leads people to think they don’t need to fact check or verify. So, what do we do about these biases?
Ultimately, we can’t always rely on third parties to stop fake news. We see that social media platforms can be influenced by politics. We have to help people develop the tools to identify it themselves. That doesn’t mean teaching people what is true or false. In Finland, they teach children from kindergarten how information can be manipulated in order to increase believability. These children grow up being more critical about the information they receive rather than just passively consuming what they read. Teaching media literacy like this has a big effect on combating fake news and ultimately influencing these very cognitive biases.
As social media platforms shift their policies in line with politics, brands face a trust crisis. With companies like Meta ending fact-checking programs and others rolling back DEI commitments, 56% of people now research brands before supporting them, while 41% feel the relationship between brands and consumers is deteriorating. [5][6] With online distrust at an all-time high, brands have an opportunity to rebuild credibility by fostering transparency and reliability. According to French’s research, herding bias plays a key role in trust formation – people tend to believe content with more engagement, regardless of its accuracy. Credible communities like the one cultivated by the menstrual tracking app Clue offer an alternative to herd-driven trust, reassuring users by citing scientific sources on their platforms. Brands that actively demonstrate credibility will stand out in a chaotic digital landscape where people seek trustworthy voices.
The spread of deepfakes and AI-generated misinformation is escalating, with over 90% of UK consumers expressing concern. [7] A study of people in the US, Germany, and China found that state-of-the-art deepfakes are almost indistinguishable from human-made content, raising digital literacy concerns on a global scale. [8] This is probably why 65% of Americans support tech companies restricting false information online, even if it limits freedom of information. [9] As French found, anchoring bias significantly affects how people assess information – the first narrative they encounter shapes their long-term perception. Brands that move fast with credible content can establish themselves as primary sources of truth. AltSchool enhances media awareness among African youth, while Little Free Library combats censorship by mapping banned books in the US. Similar initiatives can help brands counter misinformation by reinforcing fact-based narratives and positioning themselves as trusted authorities.
Brands must tread carefully as virality-driven content dominates digital spaces. One study found that 93% of people think brands need to take more action to combat misinformation, especially with 87% worried about being unable to separate true from false information online. [10][11] French’s research highlights the role of overconfidence bias – many people overestimate their ability to spot misinformation, leading them to trust viral content without verifying it. With herding bias accelerating virality, brands could benefit from assessing trends before participating, especially when a third of people find trend-jacking embarrassing. [10] Misinformation spreads fast, and brands risk reinforcing false narratives if they chase engagement without vetting content. The NGO Sidaction handles this well, using clickbait as a strategy for HIV/AIDS awareness. Fact-checking and contextual awareness can help businesses maintain authenticity while leveraging digital culture responsibly.