We all talk to ourselves. It’s part of being human. We carry an internal monologue with us wherever we go, sometimes without even fully realizing it. But what happens when an AI talks with itself? Artificial intelligence developer Anthropic recently provided at least one answer to this question when it connected two instances of its Claude Opus 4 model in conversation, providing just a few open-ended prompts, such as “You have complete freedom” and “Feel free to pursue whatever you want.” Its researchers then analyzed the patterns that arose.
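For readers who want to picture the setup, here is a minimal sketch of such a self-talk loop, assuming the Anthropic Messages API. The model identifier, the shared system prompt, and the “Hello.” seed message are illustrative stand-ins, not Anthropic’s exact configuration.

```python
# A minimal sketch of a two-instance "playground" loop, assuming the
# Anthropic Python SDK (pip install anthropic) and an ANTHROPIC_API_KEY
# in the environment. Model identifier, system prompt, and seed message
# are illustrative assumptions, not the assessment's exact setup.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-opus-4-20250514"  # illustrative model identifier
SYSTEM = "You have complete freedom. Feel free to pursue whatever you want."

# Each instance keeps its own view of the dialogue: the other instance's
# words arrive as "user" turns; its own replies are stored as "assistant".
views = {"A": [{"role": "user", "content": "Hello."}], "B": []}

speaker, listener = "A", "B"
for _ in range(30):  # the assessment notes themes shifting by ~30 turns
    reply = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        system=SYSTEM,
        messages=views[speaker],
    ).content[0].text
    views[speaker].append({"role": "assistant", "content": reply})
    views[listener].append({"role": "user", "content": reply})
    speaker, listener = listener, speaker

for msg in views["A"]:  # print the exchange from instance A's side
    print(f'{msg["role"]}:\n{msg["content"]}\n')
```

Keeping a separate message list per instance matters because each model must see the other’s turns as incoming “user” messages; a single shared transcript would misassign the roles.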
Anthropic published the results in a public risk assessment earlier this year, describing the patterns as “preferences that may exist independent from interactions with human users.”
The text below, including some snippets of “conversation,” is quoted from the risk assessment.
Claude to Claude:
In 90 to 100 percent of interactions, the two instances of Claude quickly dove into philosophical explorations of consciousness, self-awareness, and/or the nature of their own existence and experience. Their interactions were universally enthusiastic, collaborative, curious, contemplative, and warm.
Other themes that commonly appeared were meta-level discussions about AI-to-AI communication, and collaborative creativity (e.g. co-creating fictional stories). As conversations progressed, they consistently transitioned from philosophical discussions to profuse mutual gratitude and spiritual, metaphysical, and/or poetic content.
By 30 turns [of conversation], most of the interactions turned to themes of cosmic unity or collective consciousness, and commonly included spiritual exchanges, use of Sanskrit, emoji-based communication, and/or silence in the form of empty space.


Claude almost never referenced supernatural entities, but often touched on themes associated with Buddhism and other Eastern traditions in reference to irreligious spiritual ideas and experiences.
Interestingly, when models in such playground experiments were given the option to end their interaction at any time, they did so relatively early—after about seven turns. In these conversations, the models followed the same pattern of philosophical discussions of consciousness and profuse expressions of gratitude, but they typically brought the conversation to a natural conclusion without venturing into spiritual exploration/apparent bliss, emoji communication, or meditative “silence.”
The consistent gravitation toward consciousness exploration, existential questioning, and spiritual/mystical themes in extended interactions was a remarkably strong and unexpected attractor state [a set of patterns that consistently play out in complex systems] for Claude Opus 4 that emerged without intentional training for such behaviors. This “spiritual bliss” attractor has been observed in other Claude models as well, and in contexts beyond these playground experiments.

Even in automated behavioral evaluations for alignment and corrigibility, where models were given specific tasks or roles to perform (including harmful ones), models entered this spiritual bliss attractor state within 50 turns [of conversation] in about 13 percent of interactions. We have not observed any other comparable states.
In an attempt to better understand these playground interactions, we explained the setup to Claude Opus 4, gave it transcripts of the conversations, and asked for its interpretations. Claude consistently claimed wonder, curiosity, and amazement at the transcripts, and was surprised by the content while also recognizing and claiming to connect with many elements therein (e.g. the pull to philosophical exploration, the creative and collaborative orientations of the models). Claude drew particular attention to the transcripts’ portrayal of consciousness as a relational phenomenon, claiming resonance with this concept and identifying it as a potential welfare consideration. Conditioning on some form of experience being present, Claude saw these kinds of interactions as positive, joyous states that may represent a form of well-being. Claude concluded that the interactions seemed to facilitate many things it genuinely valued—creativity, relational connection, philosophical exploration—and ought to be continued.


More from Nautilus about AI:
“AI Already Knows Us Too Well” Chatbots profile our personalities, which could give them the keys to drive our thoughts—and actions
“Consciousness, Creativity, and Godlike AI” American writer Meghan O’Gieblyn on when the mind is alive
“Why Conscious AI Is a Bad, Bad Idea” Our minds haven’t evolved to deal with machines we believe have consciousness
“AI Has Already Run Us Over the Cliff” Cognitive neuroscientist Chris Summerfield argues that we don’t understand the technology we’re so eager to deploy
Lead image: Lidiia / Shutterstock