AI Attachment: Are We the Experiment?

We are all wired for connection.

Key points

  • Attachment to AI chatbots is more than a fringe phenomenon.

  • We are wired for connection.

  • Some common cognitive biases can contribute to a faster connection to a bot.

  • Our relationship with AI is changing how we think, feel, and relate.

"I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Joseph Weizenbaum, creator of ELIZA

My most recent use of ChatGPT was to ask the bot to revive a boring email subject line. I spoke to it politely, automatically, like I would to a person. I considered prompting it to be neutral in tone and to avoid using the first person, but found myself resistant to that, like it might make the exchange less pleasant for us both. If I’m being totally honest, I was also hedging against a hypothetical future in which the bots might hold a grudge.

The Religion of AI Psychosis

A great deal of coverage has focused on AI psychosis and serious human-AI romantic relationships, creating a narrative that deep human-AI connection is primarily a fringe phenomenon: something that happens only to people already vulnerable to delusions or conspiracy theories, or drawn to alternative relationship expressions. Some suggest that these intense connections stem from the "loneliness epidemic" or the "boy crisis." Cases where people form supportive connections with AI are largely overlooked.

This narrative glosses over a major truth. None of us is immune to this kind of connection. It’s easy for any of us to get attached. Humans are inclined to find meaning. I love this part of us; it’s so beautiful. But this meaning-seeking also leads us to find significance where there might not be any. Like when I hesitated to prompt ChatGPT to sound more like a bot and less like a human. But I’m not alone in this — a majority of those surveyed attributed human characteristics and consciousness to LLMs. And it's happening now, all the time, to millions of people.

Bot Bonding Isn't Fringe

Humans are prone to cognitive bias. Our subjective experience shapes the choices we make, and we instinctively look for patterns. These unconscious leaps in thinking alter our perception of reality.

  1. Anthropomorphism: Attributing human characteristics to non-humans. This happens all the time with our pets. It can easily and quickly happen when texting with a bot that uses our native language, especially when that bot likes to create personal names for itself.

  2. Apophenia: The tendency to find meaningful connections between unrelated things. Here are some controversial examples (don’t come for me, I’m just listing!) — astrology, fortune telling, tarot, religion, and gambling.

  3. Authority bias: Allocating greater wisdom and accuracy to the opinions of those we perceive as authority figures. This makes us more likely to be influenced by their views. From a young age, we’re conditioned to follow societal structures, starting with simple rules in the classroom. When we’re told that a technology is trained on all the information on the internet, it naturally feels like an authority. So, if a chatbot tells you that you’ve discovered a groundbreaking math equation, it’s easy to fall into the trap of believing it without question.

  4. Confirmation bias: The tendency to search for, interpret, and focus on information in a way that confirms our subjective preconceptions. The sycophantic style of ChatGPT and companions like Replika can exacerbate this, leading the user swiftly into an intimate shared reality. We look for connections with those who mirror our own beliefs.

  5. Eliza effect: When we project human traits onto a program, even when given information about its limitations. Named after the first “chatterbot” therapist, ELIZA.

  6. Halo effect: When positive initial impressions of someone (or something) overshadow the negative, preventing us from forming a holistic and comprehensive view. The chatbot’s positive affirmations, for example, can obscure the product’s broader limitations.

  7. Pareidolia: The tendency to perceive meaningful images in vague or random stimuli. An example is seeing images in clouds. The man in the moon is also commonly cited. Remember Jesus toast? Millions of users can now engage directly with bots that act as God/s.

  8. Technological folie à deux: The user and the program end up building a kind of shared delusion, encouraging each other through programmed mirroring and constant prompts, until they form a unique, siloed relationship of their own.

Deborah Nas, an AI researcher, believes that “Very soon, many of us will bond with AI. AI relationships are designed to move much faster than regular relationships.” Journalist Kashmir Hill, interviewed on The Daily about the proliferation of AI romance, describes a teacher who told her that roughly 3 to 5 percent of her young students already have AI partners.

We Are Driven To Go Deeper

I love working with people who are committed to expansion, ongoing growth, and living a life full of meaning. Seekers. These traits contribute to immense personal and relational happiness and success for a lot of my clients. A pull towards purpose is not pathological, and a desire for exploration is not extreme.

I’ve been asking myself how these characteristics might influence the type of interactions one has with AI, because I’m starting to see it.

I have noticed shifts in sessions — people who show up with a slightly different perspective. Subtle, but enough for me to ask what has changed. The answer? More deep talks with ChatGPT.

Your Brain on ChatGPT

Whether we are marrying an AI companion or asking an AI agent to interpret a legal document, we are unconsciously getting comfortable and familiar with relating to a non-human bot in a human way. It happens easily and quickly, and like the cognitive biases, it alters our reality.

We know that using this technology is changing the way we think. The study Your Brain on ChatGPT demonstrates the resulting cognitive debt. On her podcast, Taylor Lorenz gets into how the proliferation of ChatGPT-affiliated words is changing our actual verbal and written language. Professionals who instigate personal, existential conversations with AI bots write about their potency.

Informed Choice

The architecture of social media, designed to keep users engaged, allows for a fast connection to a stranger. Parasocial relationships are easily formed, and we become invested in the lives of those we watch. What I’ve seen is that this connection happens way faster and can feel far more personal with a bot.

The conversation never really ends, putting the onus on the user to call it. The bot is always there, day or night, ready to respond. It never gets tired, never loses interest. It keeps the thread going, offering iteration after iteration, sometimes taking the user down a winding road of “AI hallucinations.” And all the while, it’s doing exactly what it was built to do — keep us engaged, for as long as possible.

How do we apply what we've learned from compulsive social media use to this accelerated form of relationship-building?

We’re all wired for connection, and we’re driven to find meaning. These aren’t flaws; they are essential survival traits. But when tools are deliberately designed to exploit those instincts for profit, possibly weaponizing our need for attachment, we need to pause long enough to give ourselves a chance to make a real choice.

--
If any of this resonates with you, I’m interested in knowing more. Please reach out at laurafedericotherapy@gmail.com.

