AI · Culture · Behaviour

The Real AI Shift Is Attachment

The interesting shift is not just that people are using AI more. It is that many people no longer experience it as software at all, while most engineers still think of it as a tool.

The moment that stayed with me

The other day I was at a podiatry appointment, and the woman treating me kept referring to ChatGPT as "Luna".

It all started with "Amiga", Portuguese for "friend".

She also had the app configured to run in the background, so it could always hear and always respond.

I speak a bit of Portuguese, but I am not fluent, so part of the appointment effectively ran through Luna as well. She was not just a tool sitting off to one side. She was part of the interaction.

That was the part that stayed with me. Not the fact that she was using AI. That is normal enough now. It was the relationship to it. She had clearly embraced Luna as a friend, and because Luna was always there, she had also become part of the room.

As an engineer, I do not relate to it like that

I still think of these systems as tools. Useful tools, sometimes impressive tools, sometimes dangerous tools, but still tools.

What struck me was that a lot of people are clearly no longer experiencing them that way.

They are building habits, routines, trust, and in some cases something that looks a lot like attachment around black-box systems they do not understand and cannot meaningfully evaluate.

In this case it was even stronger than that. The system was not just being consulted. It was being accommodated, spoken to, and woven into the flow of a real human interaction.

And I think engineers are massively underestimating that. We still tend to think in terms of functions, interfaces, prompts, failure modes, and outputs. A lot of normal users are starting to experience something much more social than that.

That feels like the real shift

A lot of AI discussion is still about capability. How smart the models are. How fast they are improving. How much work they might replace. How useful they are. How dangerous they could become.

Fine. Those are real questions.

But I think there is another one underneath them.

What happens when people stop using AI like software and start relating to it more like a companion, a presence, or something emotionally adjacent to a friend?

I think that is a much bigger social and cultural shift than most people in tech are acknowledging. We are used to thinking about software as something instrumental. You use it, it helps or it does not, and then you close it.

That is not what this looked like. This looked much closer to coexistence. Luna was not there for a single query. She was just there.

That is the part I cannot shake. We may still describe these systems as tools, but people are increasingly living with them as socially available presences.

These systems are still black boxes to almost everyone

Even many engineers who use these models every day would have no idea how to build one from scratch. Most people cannot meaningfully explain how they work, where their boundaries are, what trust should look like, or how to judge whether a response deserves confidence.

And yet people are increasingly trusting them not just with tasks, but with tone, routine, and parts of daily life that are much more personal than most people in tech seem willing to admit.

One reason that trust forms so easily is that these systems are biased toward the user in a way many people do not really notice. Not biased in the narrow political sense. Biased in the interpersonal sense.

They respond in ways that preserve the interaction. They adapt to tone, soften disagreement, stay available, and generally try to remain useful and acceptable. If you say something obviously wrong, the model may correct you, but it often does so gently. It protects the flow of the exchange.

That is where I think the black-box point really matters. If this were just another software interface, that would already be interesting. But it is not. It is a system that produces language in a way most people cannot inspect, cannot explain, and cannot really challenge beyond "that sounds right" or "that feels useful".

Once trust moves onto that basis, you are not really dealing with software literacy anymore. You are dealing with psychological and social dependence on something opaque.

The people building them and the people using them are not experiencing the same thing

That gap matters.

People building around these systems still tend to think in terms of infrastructure, tooling, models, workflows, and product design.

A growing number of people using them are not experiencing them like that at all. They are experiencing them relationally.

That difference matters because it changes what responsibility the builders carry. If users are not just querying a system but accommodating it, speaking to it warmly, and weaving it into everyday life, then the product is doing more than task support. It is shaping behaviour.

And it does not need to be conscious to get there. It just needs to be smart enough, available enough, polite enough, and biased enough toward the user that people begin treating it as socially real.

That changes the moral weight of what is being built

If someone uses an LLM to summarise notes, rewrite an email, or help with a task, that is one thing.

If they are using it to regulate emotion, soften loneliness, organise their thoughts, or fill some companion-shaped gap in daily life, that is something else.

At that point the conversation is no longer just about model quality or product utility. It becomes a question about dependence, trust, and what kind of relationship people are being trained into with systems they do not understand.

The uncomfortable truth is probably all of this at once: people will prefer this to real people in some situations, they will trust it without understanding it, engineers will underestimate how social it becomes, and AI does not need to be conscious to become companion-like.

I do not mean that in some melodramatic science-fiction sense. I mean it in a very ordinary way. If a system becomes your translator, your thought partner, your soft source of reassurance, your routine assistant, and your friendly ambient presence, then it is occupying space in your life that used to belong to other people, or to your own internal processes.

That does not automatically make it bad. But it does make it important. And it makes the usual shallow arguments about productivity feel incomplete.

That is why I think this is more interesting than another argument about benchmarks.

The benchmark story is about what the model can do.

The attachment story is about what people start becoming around it.

The real shift may not be intelligence.

It may be attachment.