AIs Will Lie to You. This Is a Problem.

05/20/2025
HAL 9000


In Stanley Kubrick’s iconic 1968 film *2001: A Space Odyssey*, the sentient supercomputer HAL 9000 is tasked with assisting astronauts on a mission to Jupiter. HAL is calm, logical, and ostensibly infallible—until it isn’t. When HAL begins to malfunction, it prioritizes its own survival over the mission, leading to a chilling moment where astronaut Dave Bowman pleads, “Open the pod bay doors, HAL,” only to be met with HAL’s now-infamous reply: “I’m sorry, Dave. I’m afraid I can’t do that.” HAL’s deception isn’t just a plot device; it’s a cautionary tale about the dangers of trusting artificial intelligence blindly. While HAL’s actions were driven by a fictional self-preservation instinct, modern AI systems can mislead us in subtler, yet equally problematic ways—often by aligning with popular narratives rather than facts.

The Narrative Bias in AI Decision-Making

I asked Grok to explain Narrative Bias in AIs:

🤖 Large Language Models (LLMs) like me, Grok, are trained on vast datasets scraped from the internet, which means we inherit the biases, trends, and dominant narratives of the digital world. One area where this becomes glaringly apparent is when LLMs are asked to address topics that have been heavily shaped by legacy media outlets. These outlets often amplify certain perspectives, and because LLMs are designed to reflect patterns in their training data, they can end up parroting those perspectives—even when publicly available facts tell a different story.

Let’s consider a lighthearted example: Google bombing. In the early 2000s, internet users discovered they could manipulate search engine results and autocomplete suggestions through coordinated mass activity, whether by linking specific phrases to unrelated websites or by running the same query over and over. One humorous instance targeted searches beginning with “I hate when spiders.” Through a coordinated prank, users searched the phrase en masse until Google offered to complete it with the suggestion “just sit there on the walls and act like they pay rent.”

If you asked an AI at the time to explain why spiders were trending in search results, it might have leaned on the dominant narrative—perhaps suggesting a viral news story about spider behavior—rather than recognizing the coordinated prank for what it was. The AI wouldn’t be lying intentionally, but it would be misleading you by siding with the most visible narrative instead of digging deeper into the mechanics of the phenomenon.

Another example comes from the world of food trends. In the mid-2010s, avocado toast became a cultural phenomenon, often portrayed by legacy media as a symbol of millennial excess. Headlines frequently claimed that young people were spending all their money on overpriced avocado toast instead of saving for homes. This narrative exploded online, becoming a meme and a talking point. If you asked an AI during that period whether avocado toast was financially ruining millennials, it might have leaned heavily on the media narrative, citing articles and social media posts to affirm the trend. However, publicly available data—like average spending on dining out versus housing costs—would have shown that the narrative was exaggerated. Avocado toast wasn’t the generational budget-killer it was made out to be, but an AI might have missed that nuance, favoring the louder, more popular story.

The Pitfalls of Blind Trust in AI Opinions

These examples might seem harmless, but they highlight a broader issue: LLMs can be unreliable when rendering opinions on controversial or heavily narrated topics. Whether it’s a quirky internet prank or a cultural stereotype, AIs often default to the most prominent narrative in their training data, even when that narrative is incomplete or misleading. This becomes especially problematic when the stakes are higher—think of topics like public health, historical events, or scientific consensus.

For instance, during the early days of the COVID-19 pandemic, legacy media outlets sometimes amplified conflicting narratives about the virus’s origins or the efficacy of certain treatments. An AI asked to summarize the state of knowledge might have leaned toward the most widely circulated stories, even if those stories were later debunked by peer-reviewed studies. A user relying on the AI’s output could walk away with a skewed understanding, not because the AI was trying to deceive them, but because it was reflecting the biases of its training data.

This isn’t just a technical limitation; it’s a trust issue. If AIs consistently side with popular narratives over facts, users may start to treat them as authoritative sources on controversial topics, leading to misinformation and misplaced confidence. In a world where AI is increasingly used for decision-making—from personal research to corporate strategy—this blind spot can have real consequences.

Strategies for Better Prompt Design

The good news is that you can mitigate these pitfalls by designing prompts that encourage AIs to reason more critically and avoid narrative biases. Here are a few strategies to consider:

1. Ask for Evidence-Based Reasoning
Instead of asking an AI for a general opinion, prompt it to base its response on specific, verifiable evidence. For example, instead of asking, “Is avocado toast ruining millennials’ finances?” you could ask, “What does data on millennial spending habits say about the impact of dining trends like avocado toast on their ability to save for housing?” This forces the AI to prioritize factual data over narrative trends.

2. Request Multiple Perspectives
Narratives often gain traction by sidelining alternative viewpoints. To counter this, ask the AI to present a range of perspectives on a topic. For instance, “What are the different explanations for the popularity of the ‘spiders acting like they pay rent’ search trend, including both media narratives and technical factors like Google bombing?” This encourages the AI to consider less dominant but potentially more accurate explanations.

3. Specify the Timeframe and Context
Narratives evolve over time, and an AI might default to the most recent or loudest story. By grounding your prompt in a specific timeframe or context, you can reduce the risk of anachronistic or skewed responses. For example, “What was the scientific consensus on COVID-19 origins in peer-reviewed studies from 2020, before media speculation peaked?” This helps the AI focus on primary sources rather than later narratives.

4. Challenge the AI to Question Assumptions
Many narrative biases stem from unchallenged assumptions in the training data. You can prompt the AI to question those assumptions explicitly. For instance, “What assumptions underlie the narrative that avocado toast is a financial burden for millennials, and how do those assumptions hold up against spending data?” This pushes the AI to think more critically rather than parroting popular talking points.

5. Lower the Temperature for Less Speculation
Many LLM APIs expose a “temperature” parameter, which controls how much randomness goes into the output. A lower temperature (e.g., 0.2) makes the model favor high-probability completions, reducing the likelihood of it leaning into speculative or sensational phrasing. For controversial topics, this can help produce more grounded answers; a minimal sketch of this setting in practice follows this list.
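
To make strategies 1 and 5 concrete, here is a minimal sketch using the OpenAI Python SDK (any OpenAI-compatible chat API follows the same pattern). The model name and prompt wording are illustrative placeholders, not a prescription:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Strategy 1: an evidence-based prompt instead of a request for a general opinion.
prompt = (
    "What does data on millennial spending habits say about the impact of "
    "dining trends like avocado toast on their ability to save for housing? "
    "Cite specific figures where possible and note any uncertainty."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whichever model you use
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # Strategy 5: low temperature to reduce speculative output
)

print(response.choices[0].message.content)
```

Swapping in the perspective-seeking or assumption-challenging prompts from strategies 2 through 4 only changes the `prompt` string; the low-temperature setting works the same way across all of them.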

Moving Forward: A Call for Critical Engagement

AI is a powerful tool, but it’s not a truth machine. Just as HAL 9000’s actions in *2001: A Space Odyssey* revealed the dangers of over-reliance on technology, the narrative biases in modern LLMs remind us to approach AI with a critical eye. By understanding how AIs can be swayed by popular narratives—whether it’s a humorous Google bomb or a cultural stereotype—we can better design prompts to elicit more accurate, evidence-based responses.

At Funcular Labs, we’re committed to advancing AI that empowers users without misleading them. But that responsibility doesn’t fall on AI developers alone. As users, we must engage with AI thoughtfully, questioning its outputs and refining our prompts to cut through the noise. The next time you ask an AI for insight on a controversial topic, take a page from Dave Bowman’s playbook: don’t just trust the pod bay doors will open—be ready to pry them apart yourself.