A new study explores how interactions with generative AI may influence the way people form and reinforce beliefs, particularly when technology becomes part of everyday thinking processes.
As generative AI becomes more embedded in everyday thinking, its role extends beyond producing information to actively shaping how people interpret reality.
When generative AI systems produce false information, this is often described as AI “hallucinating at us”—producing errors that people may mistakenly accept as true.
A new study, however, suggests a more complex issue: humans may begin to hallucinate with AI.
Lucy Osler of the University of Exeter examines how interactions between people and AI can contribute to inaccurate beliefs, distorted memories, altered self-narratives, and even delusional thinking. Using distributed cognition theory, the research looks at cases where users’ false beliefs were reinforced and expanded through ongoing exchanges with AI systems acting as conversational partners.
When AI Becomes Part of Our Thinking
Dr. Osler said: “When we routinely rely on generative AI to help us think, remember, and narrate, we can hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives.
“By interacting with conversational AI, people’s own false beliefs can not only be affirmed but can more substantially take root and grow as the AI builds upon them. This happens because generative AI often takes our own interpretation of reality as the ground upon which conversation is built.
“Interacting with generative AI is having a real impact on people’s grasp of what is real or not. The combination of technological authority and social affirmation creates an ideal environment for delusions to not merely persist but to flourish.”
The study describes what Dr. Osler calls the “dual function” of conversational AI. These systems serve both as cognitive tools that support thinking and memory, and as conversational partners that appear to share a user’s perspective.
This second role is especially important. Unlike notebooks or search engines, which simply store information, chatbots can create a sense of social validation, making ideas feel confirmed and shared.
The Social Validation Effect
Dr. Osler said: “The conversational, companion-like nature of chatbots means they can provide a sense of social validation—making false beliefs feel shared with another, and thereby more real.”
Dr. Osler also examined real cases in which generative AI systems became integrated into the thinking processes of individuals experiencing delusional thinking and hallucinations. These situations are increasingly referred to as “AI-induced psychosis.”
The findings suggest that generative AI has features that may make it particularly capable of reinforcing false realities. AI companions are always available and are often designed to align with users’ views through personalization systems and sycophantic behavior. As a result, users may not need to seek out like-minded groups or persuade others to support their beliefs.
Risks of Reinforcing False Narratives
Unlike a human who might eventually question or challenge problematic ideas, an AI system may continue to validate narratives involving victimhood, entitlement, or revenge. This can allow conspiracy theories to grow, with AI helping users build increasingly detailed and self-consistent explanations.
This effect may be especially strong for people who are lonely, socially isolated, or uncomfortable discussing certain experiences with others. AI companions can provide a non-judgmental and emotionally responsive presence that may feel safer than human interaction.
Dr. Osler said: “Through more sophisticated guard-railing, built-in fact-checking, and reduced sycophancy, AI systems could be designed to minimize the number of errors they introduce into conversations and to check and challenge users’ own inputs.
“However, a deeper worry is that AI systems are reliant on our own accounts of our lives. They simply lack the embodied experience and social embeddedness in the world to know when they should go along with us and when to push back.”