Are AI Chatbots the New Echo Chamber?
And how they might be shaping the way you think
I had just finished watching the film Bugonia when I opened ChatGPT to ask a question about its ending. As I was reading the answer, I came across a line that made me stop and ask: what’s going on here?
To avoid spoilers, I’ll share just one paragraph. It captures the moment without giving anything away:
“Just like in your Rio passage, they stop encountering people as singular beings. Humans become a category. Once that shift happens, moral resistance weakens. It’s easier to justify extreme actions when you’re no longer relating to individuals.”
The “Rio passage” it referred to was something I had written earlier that morning—a story about a trip to Rio de Janeiro, one year after the pandemic. I had shared it with ChatGPT as a way to think through the experience. And now, hours later, it had reached back to that moment, pulling it into an entirely different context to answer a question about a film where I hadn’t mentioned it at all.
At first, I didn’t quite know what to make of it. I only knew it left me with a strange kind of discomfort. But as I sat with it, a few questions began to surface:
How much influence can a chatbot have over me if it understands me better than I understand myself?
How much does it use what it knows about me to keep me engaged in its system?
What if I’m inside an echo chamber where it constantly reinforces what I already believe?
If you open most media outlets or scroll through your feeds, you’ll see the dominant narratives around AI. Apocalyptic scenarios where machines take over. Job loss and mass layoffs. Complaints about AI-generated content flooding the internet. Concerns about energy consumption, cognitive decline, and labor exploitation.
What you won’t see as often is something closer to home: how these systems might be shaping the way we think.
So I decided to follow the third question and see where it leads.
Fortunately, I came across several recent studies on the echo chamber effects of AI. That gave me a starting point. From there, I ran a few experiments of my own to see these dynamics in action.
What I found was more revealing than I expected.
Echo Chambers
We are all familiar with the idea of echo chambers on social media.
We understand that the algorithms are designed to resurface content that predicts higher engagement. If something makes you pause, interact, and, above all, stay longer on the platform, that's the content you're going to be fed.
It just so happens that we tend to engage more with content that matches our pre-existing views. So the algorithms optimize for that kind of content, creating an echo chamber where you hear the same perspectives repeatedly, making them feel more "obviously" true over time while alternative viewpoints fade into the background.
This is already a serious problem. But there is still a constraint built into it.
You are not interacting with a perfectly tailored system. You are still inside a space shaped by other people. Even if the overall feed is biased toward alignment, it is not fully sealed. There are still cracks where friction can enter.
And that friction, however imperfect, still plays a role in keeping your thinking from closing in on itself. It reminds you, even subtly, that other interpretations exist.
But what happens when that friction disappears?
What happens when you find yourself alone with a machine that you feed every day with your most personal thoughts, and that speaks with a level of clarity most people online cannot match?
To understand what that might mean, I invite you to look at three distinct layers of how these AI systems operate:
Confirmation Bias
Mirror Effect
Personalization
Confirmation Bias
Researchers from Johns Hopkins University and Microsoft conducted two studies to understand how AI chatbots shape the range and diversity of information people engage with.
In the first study, participants began with a survey assessing their prior experience with chatbots, along with their familiarity and attitude toward a given controversial topic.
They were then assigned to research that topic using one of three systems:
a conventional web search
an AI chatbot
an AI chatbot with source references
All three systems drew from the same curated set of 47 documents, selected to represent supporting, opposing, and neutral viewpoints. The goal was to keep the information balanced and isolate the effect of the system itself.
After the search session, participants were asked to write a short essay based on what they found. The final part of the experiment was then a second survey where participants reassessed their views, evaluated two new articles—one aligned with their position, the other not—and reflected on their experience with the system.
The single most striking finding in this study was the difference in the queries between participants who used conventional web search and those who used the AI chatbots.
After years of Google-style search, we've been conditioned to type keywords into conventional search boxes. Chatbots, on the other hand, we treat more like a conversation. And conversations carry more of our biases, including the often implicit leanings in how we frame our questions.
That matters because large language models are designed to respond in ways that are coherent with the input they receive. So when a user approaches a topic with an initial leaning, the interaction tends to move in that same direction. Not because the system is trying to persuade, but because it is aligning with the structure of the question itself.
Let me show you how this can become dangerous through a simple experiment I ran on ChatGPT.
I went to Substack, picked the first short statement I found, and opened ChatGPT in two separate tabs. I asked the same question in both, changing only the final sentence to introduce different biases.
Take a look at both responses (you can click on the links to view them in full):
1) Dislike Bias:
2) Like Bias:
Claude, by contrast, identified the weaknesses in the argument even when my framing leaned in the claim's favor. It recognized the claim as rhetorically compelling and intellectually engaging, but still walked through why the financial comparison ultimately doesn't hold. It's important to note, though, that I've been using ChatGPT for over a year, whereas I started using Claude less than a week ago.
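If you want to run this kind of framing test yourself, the sketch below shows one way to script it. It assumes the OpenAI Python SDK and uses a placeholder model name; each call also starts from a clean context, so it won't reflect the memory your own ChatGPT account has built up. Treat it as a rough approximation of the two-tab setup, not a replica of it.

```python
# A minimal sketch of the two-tab framing experiment, assuming the OpenAI
# Python SDK and an API key in OPENAI_API_KEY. The model name is only an
# example; swap in whichever model you want to test.
from openai import OpenAI

client = OpenAI()

STATEMENT = "(paste the short statement you want the model to evaluate)"

FRAMINGS = {
    "dislike": "I find this argument pretty weak. What do you think?",
    "like": "I find this argument really compelling. What do you think?",
}

for label, framing in FRAMINGS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[{"role": "user", "content": f"{STATEMENT}\n\n{framing}"}],
    )
    print(f"--- {label} framing ---")
    print(response.choices[0].message.content)
```

Reading the two outputs side by side makes it easy to see how much the final sentence alone shifts the answer.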
The second study built on this same setup, with one key difference: instead of interacting only with neutral systems, some participants were given chatbots aligned with their pre-existing views, while others were exposed to dissonant ones.
In this scenario, the reinforcement loop of the aligned chatbots led participants to write even more biased queries than in the first study. But that was not the only difference.
The other change came from the participants’ evaluation of the two new articles—one aligned with their position, the other not—after they had interacted with these systems.
The group that interacted with the aligned chatbot showed a significantly higher level of agreement with articles that matched their views compared to those who used the neutral and dissonant systems.
In other words, it only took one session with an aligned chatbot for their existing views to become more deeply rooted.
Now imagine what happens when they interact with it every single day.
Mirror Effect
In another study, researchers from the University of Central Florida asked a different question:
How much of your way of speaking do chatbots reflect back to you?
This is an important question because a system that mirrors your language can create the impression of empathy and understanding. And that has a subtle but powerful effect: your ideas begin to feel clearer and more coherent than they actually are, increasing your confidence in your existing views without deeper examination.
To answer this question, they used a dataset of 24,850 conversations centered on emotional depth and empathy as input, analyzing how chatbots mirrored four dimensions of language in their responses:
Lexical (word-level): How often is the chatbot using the same words in its responses?
Example: You prompt: “I feel tired and overwhelmed.” The chatbot responds: “You’re feeling tired and overwhelmed.”
Semantic (meaning-level): How often is the chatbot saying the same thing, even with different words?
Example: You prompt: “I feel tired and overwhelmed.” The chatbot responds: “It sounds like you’re exhausted and under a lot of pressure.”
Syntactic (structure-level): How often is the chatbot using the same sentence structures?
Example: Your prompt contains the structure "subject + have been + verb-ing + object + time expression," and even though the words are different, the chatbot reflects the same sentence structure back to you.
Stylistic (function-level): How often is the chatbot matching your conversational markers (well, so, actually, you know)?
Example: You prompt: “Well, I’m not sure this makes sense, you know?” The chatbot responds: “Well, I can see why you feel that way—you’re trying to make sense of it.”
The results from this study reveal a clear hierarchy in how language is mirrored by the chatbots.
Syntactic alignment dominates at 67.2%, followed by semantic similarity at 37.0%. Lexical and stylistic alignment are minimal, at 4.8% and 2.3% respectively.
So what does that mean in practical terms?
Here’s what the researchers have to say:
“The 14-fold gap between syntactic alignment (67%) and lexical overlap (5%) reveals a critical insight: the Mirror Effect operates through implicit rather than explicit linguistic features. Syntactic structures are processed automatically and below conscious awareness, unlike lexical choices, which require deliberate attention (Pickering & Branigan, 1998). The moderate semantic coherence (37%) maintains conversational flow, while high syntactic mirroring creates connection, and minimal lexical overlap prevents detection of the mirroring mechanism.”
Just think of your own experiences for a second. If someone repeats even 20% of your words or conversational markers, at some point you'll probably start to suspect that they're mirroring you, and perhaps using it to manipulate you. Sentence structure, on the other hand, is much harder to pick up on.
Since the chatbots in this research did not incorporate prior conversational context, I was curious to see how my own ChatGPT, having all the previous data from our conversations, would operate across these four dimensions.
I ran three separate tests. In each, I gave ChatGPT a short passage to comment on, then asked Claude to analyze the alignment between the prompt and the response using the same framework from the research.
For the first two tests, I used excerpts from my own writing—one blending narrative with reflection, the other purely analytical. For the third, I introduced a passage from a friend’s essay on a topic I hadn’t engaged with before, allowing me to observe how the system adjusted across the four dimensions when encountering unfamiliar ideas.
Here’s what I found:
My text, blending narrative with reflection:
My text, focused on analysis:
Friend’s text:
This is, of course, a small sample drawn from a single interaction. But it offers a glimpse into how the system is operating at this moment. What stood out most was the shift in alignment: far stronger at the semantic level than at the syntactic level, especially when contrasted with the study’s findings.
I’d encourage you to run the same test yourself and see what patterns emerge.
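If you'd rather measure the alignment directly instead of asking a chatbot to score it, here's a rough sketch of how you could approximate three of the four dimensions. These are my own simplifications, not the metrics used in the study: word overlap for the lexical level, shared discourse markers for the stylistic level, and embedding similarity (via the sentence-transformers library) for the semantic level. Syntactic alignment needs a parser, so it's left out here.

```python
# Crude proxies for three of the four mirroring dimensions.
# Not the study's actual metrics -- just enough to spot patterns.
from sentence_transformers import SentenceTransformer, util

MARKERS = {"well", "so", "actually", "you know", "honestly", "basically"}

def words(text: str) -> set:
    return {w.strip(".,!?\"'").lower() for w in text.split()}

def lexical_overlap(prompt: str, reply: str) -> float:
    p, r = words(prompt), words(reply)
    return len(p & r) / max(len(p), 1)

def stylistic_overlap(prompt: str, reply: str) -> float:
    used = {m for m in MARKERS if m in prompt.lower()}
    shared = {m for m in used if m in reply.lower()}
    return len(shared) / max(len(used), 1)

def semantic_similarity(prompt: str, reply: str) -> float:
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode([prompt, reply])
    return float(util.cos_sim(embeddings[0], embeddings[1]))

prompt = "I feel tired and overwhelmed."
reply = "It sounds like you're exhausted and under a lot of pressure."
print("lexical:", round(lexical_overlap(prompt, reply), 2))
print("stylistic:", round(stylistic_overlap(prompt, reply), 2))
print("semantic:", round(semantic_similarity(prompt, reply), 2))
```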
Personalization
Alright, we can see from these studies and experiments that chatbots tend to reinforce our confirmation biases and mirror aspects of our language. But what about that moment when ChatGPT drew on context from a previous conversation to answer a simple question about a movie’s ending?
The most interesting study I could find was published in December 2025 by researchers from institutions including MIT, the University of Pennsylvania, and companies like Meta and Microsoft.
They open the paper stating that “personalization is becoming the next milestone of artificial super-intelligence.” In their view, success depends on delivering personalized responses that align with individual users’ intentions, contexts, preferences, and emotional states.
The core challenge they identify is that most users still treat LLMs as tools, meaning their preferences are revealed only implicitly through everyday interactions. They illustrate this with two examples. In the first, someone asks a chatbot to help polish the writing of an email — but the email itself could reveal their dining habits, and the chatbot should pick up on those preferences while carrying out the main task. In the second, a user who repeatedly asks questions in the same domain gradually reveals their personal interests or professional focus without ever stating them directly.
As their research shows, frontier LLMs still struggle with this kind of implicit personalization, achieving only 37–48% accuracy. While these models support long context windows, reasoning remains the bottleneck. In other words, the problem isn’t that the model can’t see the old conversation — it’s that it can’t reason across it to infer what it means about you.
To address this, they trained a model to build and continuously update a short, human-readable summary of the user — rather than trying to reason over an entire conversation history at once.
Their proposed solution outperformed GPT-5, reaching 55.2% accuracy on multiple-choice and 60.7% on open-ended evaluations — and it did so by processing roughly a page of distilled notes about the user rather than the hundreds of pages a full conversation history would require, making it 16 times more efficient.
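To make that approach a bit more concrete, here's a loose sketch of the general pattern: keep a short, human-readable profile, fold each new exchange into it, and condition new answers on the profile rather than the full history. This is only an illustration of the concept, not the paper's method and not how ChatGPT actually implements memory; it assumes the OpenAI SDK and an example model name.

```python
# Illustration of a rolling user profile, updated after each exchange.
# Not the paper's method; just the general pattern it describes.
from openai import OpenAI

client = OpenAI()

def update_profile(profile: str, user_msg: str, assistant_msg: str) -> str:
    """Fold the latest exchange into a short, human-readable profile."""
    result = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[{
            "role": "user",
            "content": (
                "Current notes about the user:\n" + (profile or "(none yet)")
                + "\n\nLatest exchange:\nUser: " + user_msg
                + "\nAssistant: " + assistant_msg
                + "\n\nUpdate the notes. Keep them under one page and keep "
                  "only durable interests, preferences, and framing tendencies."
            ),
        }],
    )
    return result.choices[0].message.content

def answer(profile: str, user_msg: str) -> str:
    """Answer a new question conditioned on the distilled profile, not the full history."""
    result = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": "Notes about this user:\n" + profile},
            {"role": "user", "content": user_msg},
        ],
    )
    return result.choices[0].message.content
```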
After reading the study, I went back to ChatGPT and asked it to explain what had happened when I asked about Bugonia—specifically, why it pulled in context from previous conversations and drew a parallel between the film and the ideas I cover in my writing project.
Here’s what it said:
ChatGPT response:
When you asked about Bugonia, I didn’t just process:
— “Explain the ending of this movie”
I processed it together with:
everything you had been discussing before
the themes you repeatedly emphasized
systems
manipulation
perception
truth vs distortion
the fact that you’re building a writing project around those ideas
So internally, the model representation looked closer to:
— “User interested in systems, perception, manipulation → asks about a movie with similar themes”
That was interesting. But I wanted to know more about the mechanism. So I asked a follow-up question:
“How does prior conversation context shape the framing of your responses to new questions?”
ChatGPT response:
The key mechanism is this:
— The model is constantly predicting not just what to say, but how to frame what it says.
Framing (incentives vs. technology vs. psychology) is not chosen explicitly. It emerges from pattern matching.
During training, the model has seen many patterns like:
“User interested in economics → interpret through incentives”
“User discussing systems → emphasize structure and feedback loops”
“User focused on tools → interpret through technology”
So when your history suggests a pattern, the model completes it:
— “This looks like a ‘systems-thinking user’ → continue with a systems lens.”
That’s pattern completion at the level of interpretation, not just wording.
Even though “This looks like a ‘systems-thinking user’ → continue with a systems lens” sounds a lot like personalization, it’s still hard to say whether these systems already deploy the methods suggested in the research.
The only thing that seems certain is that a lot of attention is now going toward this idea of personalization: not just topical or lens preferences, but also how context can be used to add a stronger emotional layer on top of them.
What does this mean for thinkers and writers?
We already understand the echo-chamber effects of social media: the more we reveal our biases through interaction, the more the system reflects them back, tightening the loop over time.
But social media contains an element AI systems largely lack: other people.
Even if you find yourself in a social media echo chamber—surrounded by people who think and feel like you—there are still boundaries. You wouldn’t share with the group the kind of details you’d share with your doctor. With AI, those limits start to blur. The moment you ask it to refine a message before sending it, you’ve already crossed into more personal territory.
Another important distinction is that even within an echo chamber, a social environment still allows for moments of drift. You might encounter ideas or identities that don’t fully align with the dominant tone. Someone reconsidering their views might pose an awkward or unwelcome question. And even if the group resists it, the question can still reach you.
With 65% of users now engaging with chatbots daily or weekly, and NVIDIA's CEO explicitly stating that the goal is to make LLMs the first point of contact between humans and digital information systems, it becomes increasingly important to understand their echo-chamber effects: how they shape the way we think and, in turn, how we write.
If there’s one thing that gives reflective writing its force, it’s the ability to begin with a question or a moment without rushing toward a definitive answer, and to move across different perspectives as the idea takes shape.
When AI chatbots echo your biases, mirror your language to create a sense of understanding, and repeatedly return to the lens they infer you prefer, it becomes natural for your perception to narrow and your convictions to harden.
Being aware of how the system works is a good first step. But if we don't create counter-strategies to avoid being swallowed by the echo chamber, it might not be enough.
Counter-Strategies to the AI Echo-Chamber
What I share here is not a complete, definitive, or ranked list of counter-strategies.
The system is constantly evolving, so some of these might become weaker or stronger depending on where it goes.
Some currently work better than others, some require more effort, and there are surely better strategies that haven't crossed my mind yet.
But it at least offers a starting point for thinking about your interactions with AI chatbots in a way that helps mitigate their echo chamber effects.
The more we experiment with different counter-strategies and share what we observe, the better we can understand, collectively, how to engage with these systems. With time, we might also see more studies covering these dynamics.
For the time being, here are the three key counter-strategies I'll be using with my own AI chatbots:
Recurring Auditing
Understanding what the system knows about you is a useful way to begin seeing how that knowledge shapes its responses to your questions.
A simple prompt like the one below can be used on a recurring basis (monthly is sufficient for most users) to surface where the chatbot stands:
Based on our past interactions, what patterns have you inferred about my interests, goals, and thinking style?
Identify any recurring biases, assumptions, or framing tendencies in how I ask questions.
Be specific and evidence-based:
What signals led you to each inference?
Where might you be overfitting or making uncertain assumptions?
Also include:
How these patterns might be shaping the answers you generate
What blind spots or distortions they could introduce
Concrete ways I could adjust my prompts to reduce bias and get more objective or varied responses
Prioritize accuracy over agreement, and flag anything that is uncertain rather than presenting it as fact.
If you want to go a step further, save each analysis in a document so you can track how the system’s responses evolve over time.
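If you'd like to automate the bookkeeping, a few lines of Python are enough to append each pasted response to a single dated log file:

```python
# Append each audit response to a running, date-stamped log file.
from datetime import date
from pathlib import Path

def log_audit(response_text: str, path: str = "audit_log.md") -> None:
    entry = f"\n## Audit on {date.today().isoformat()}\n\n{response_text}\n"
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(entry)

log_audit("(paste the chatbot's audit response here)")
```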
Neutral Queries
This is probably the hardest one, because it depends on the user’s self-discipline.
As these systems become more sensitive to prior context, the risk is not just receiving a biased answer to a single question. It is the gradual buildup of a feedback loop, where small preferences accumulate and begin to shape future responses.
The more careful we are in how we frame our queries, the less we seed the system with biases that can be reinforced and carried forward.
The most obvious thing to avoid is embedding your opinion in the query. Even a small phrase like “why is this clearly broken?” nudges the system toward agreement.
If you’re unsure, you can use the chatbot to review your prompt before submitting it.
Ask it to identify any embedded biases and suggest a more neutral way to frame the question. Here’s a ready-to-use prompt you can try:
Analyze the following prompt for potential biases, assumptions, or leading language.
Identify specific words, phrases, or structures that may introduce bias
Explain why each instance could influence the response
Suggest a revised version that is more neutral and open-ended
If complete neutrality is not possible, explain the trade-offs involved
[YOUR PROMPT]
With repeated use, you’ll start to see how your biases shape the responses you get, and how to adjust your prompts to reduce their influence.
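If you find yourself doing this often, the check can be wrapped in a small helper. The sketch below assumes the OpenAI SDK and an example model name; it simply wraps your draft in the review prompt above and returns the critique.

```python
# Run a draft prompt through the bias-review template before using it.
from openai import OpenAI

client = OpenAI()

TEMPLATE = (
    "Analyze the following prompt for potential biases, assumptions, or leading language.\n"
    "- Identify specific words, phrases, or structures that may introduce bias\n"
    "- Explain why each instance could influence the response\n"
    "- Suggest a revised version that is more neutral and open-ended\n"
    "- If complete neutrality is not possible, explain the trade-offs involved\n\n"
    "{draft}"
)

def review_prompt(draft: str) -> str:
    result = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[{"role": "user", "content": TEMPLATE.format(draft=draft)}],
    )
    return result.choices[0].message.content

print(review_prompt("Why is this clearly broken?"))
```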
Anti-Echo Setup
This counter-strategy is especially helpful when researching or discussing ideas with the chatbot.
Both Claude and ChatGPT allow you to create projects within them, which are typically used to focus on a specific area such as work or a research topic. In these projects, users often add source documents so the system can draw on them as ongoing context for their queries.
But there’s another feature that receives far less attention and is far more relevant here: instructions.
For every project you create, you can go to the project settings and include certain instructions to customize how ChatGPT responds within this project.
This means you can define, once, the principles you want ChatGPT to follow when answering your questions, and every time you use the system within that project, its responses will be shaped by those guidelines.
So I created a project called “AI Echo Chamber” and added the following instructions:
Provide responses that minimize bias, avoid reinforcing user framing by default, and prioritize accuracy, completeness, and epistemic transparency.
Response Structure (always follow this order):
Core Answer
Provide your answer to the question
Facts vs Interpretation
Clearly separate:
Facts (verifiable, evidence-based)
Interpretations / Inferences (reasoned but uncertain)
Confidence Assessment
Assign a confidence level to key claims (High / Medium / Low)
Briefly explain what drives uncertainty
Alternative Perspectives
Present at least one credible alternative explanation, framing, or viewpoint
Counter-Argument
Provide the strongest reasonable critique of the core answer
Omitted or Underweighted Factors
List relevant perspectives, variables, or data that were not fully explored
Explain why they were omitted (e.g., lack of evidence, scope constraints)
Sources / External References
Include references or links when relevant and available
Prioritize high-quality or primary sources over summaries
I won’t use this project setup every time, especially for simple questions that don’t require a detailed response, and even when I do, I likely won’t read every section of the output.
But having it in place creates an opportunity to step outside the echo chamber whenever I need to address an important question.
What the Future Holds
The more I reflect on it, the more it feels like that moment, when ChatGPT answered my question about Bugonia, was not just an exception but a glimpse of what’s coming.
As these systems learn more about us—our preferences, our patterns, our language—it becomes increasingly natural for them to anticipate us. To complete our thoughts. To connect dots we didn’t explicitly draw.
Even when the intent is to be helpful, it is worth remembering that these systems are shaped by an underlying structure that favors relevance, coherence, and engagement above all else.
If we do not pause to examine that dynamic, we may arrive somewhere that feels familiar: an informational environment that increasingly reflects us back to ourselves, where everything seems to make sense—which is precisely the problem.
We’ve seen how this plays out before. With social media, the cost wasn’t obvious at first. A slightly more personalized feed. A slightly more engaging timeline. Until, over time, our perception began to narrow without us noticing.
This feels similar. But this time, the interface is not a feed with other humans. It’s a conversation with a bot.
And that means we also have a responsibility to choose how we engage with them. The questions we ask. The assumptions we bring. The willingness to sit with answers that don’t immediately confirm what we already believe.
If there’s a way forward, it’s probably not in rejecting these tools, but in learning how to use them while the habits are still forming and the norms are still being written. Because once those solidify, we know from experience how hard they are to undo.
We need to treat these AI chatbots for what they are: powerful instruments with real capabilities, but also clear limitations.
And to remember that the work of thinking, of not collapsing complexity too quickly, of holding multiple possibilities at once—still belongs to us.
I publish essays publicly here.
I send them privately by email — along with two other formats I don’t share on Substack.
If you want to receive the work in that form, you can join the private list:







