
AI safety, technocrats, and who actually decides

When people ask whether AI founders are hypocrites or whether they have seen something genuinely terrifying, I think they are asking the wrong question. The more useful question is who actually gets to decide where this goes.

I do not think this starts with evil

My honest view is that a lot of the engineers involved probably did believe they were building something that could help humanity. That sounds naive now, but I do not think it was fake in every case.

If you are an engineer or a technologist and you think you have found a way to accelerate medicine, research, automation, or discovery, of course you are going to tell yourself that the mission matters. A lot of these people are not cartoon villains. They are people who like building powerful things and convincing themselves the power will be used well.

The problem is what happens when that belief collides with money, power, governments, and the industrial systems required to make the whole thing real.

The four groups I think matter

If I had to simplify who shapes this world, I would break it into four groups.

First, sovereigns. Governments, states, ministries, regulators, and the wider political class. These people still care about territory, borders, control, taxation, policing, and keeping power inside the structures they run.

Second, financiers. This is where I would put BlackRock, major capital allocators, private markets, large funds, and the investor class more broadly. In the voice note I used a messier label; the cleaner one is simply financiers, or capital.

They do not really care about borders in the same way sovereigns do. They want movement, standardisation, liquidity, and consumers who can be reached and monetised at scale.

Third, technocrats. The founders, product leaders, and technology builders who genuinely believe technology is the route forward, whether that means AI, space, biotech, automation, or some version of engineered human progress.

Fourth, the military and security apparatus. If I had to name the group missing from my original answer, this is it. Call it the military-industrial complex, the defence and intelligence world, or the hard-power machine. It is the part of the system that sees every new capability through the lens of conflict, leverage, surveillance, and strategic advantage.

That is why "who decides?" is such a hard question

There is no single person deciding. There is no room with one villain in it. It is these groups pushing on each other, borrowing from each other, and sometimes pretending they are aligned when they are not.

The technocrats say: trust us, this will cure cancer or unlock the next leap forward.

The financiers say: if this becomes the new search layer, the new consumer interface, or the new operating layer for work, there is an enormous amount of money to be made.

The sovereigns say: this could strengthen tax collection, border control, surveillance, administration, and state power.

The military and security side says: if this changes targeting, autonomy, cyber operations, intelligence analysis, or battlefield speed, we cannot afford to let somebody else get there first.

That is the real power picture. It is not just Sam Altman doing a TED talk.

I am less scared of Skynet than I am of us

I do not personally find the movie version of AI risk the most convincing. AI is still software. It still needs infrastructure, compute, cooling, electricity, supply chains, data centres, and humans to keep feeding it.

It also does not magically bridge itself into the physical world. For that to happen in any serious way, it still needs humans to build the machinery, connect the systems, maintain the infrastructure, and give it some way to act beyond software. We are a very long way from being able to treat that part as solved.

And humans are still cheaper to create, cheaper to replace, and far more adaptable in the physical world than any fantasy army of autonomous machines. A person is messy, unreliable, and difficult, but in purely material terms we are still absurdly efficient compared with the industrial cost of building and powering a fully automated replacement for us.

The idea that it just wakes up and wipes out humanity feels a lot less real to me than the idea that humans use it badly, race each other with it, waste huge amounts of resources on it, and justify new forms of surveillance, war, and control because of it.

That is the bit I find more believable and more worrying. Not that a model becomes a god. More that people with money, power, and institutions behind them make progressively more reckless decisions while saying it is for our own good.

There is also a simpler moral question underneath it

Even if AI does not kill everybody, it can still damage the world badly.

It can burn through water, power, carbon, materials, and human attention at industrial scale. It can centralise capability into fewer hands. It can turn more of life into a managed consumer surface. It can automate parts of human meaning away long before it automates human existence away.

So when someone asks whether the risks outweigh the benefits, I think that is a much better question than asking whether the founders are secretly evil.

My actual answer

I do not think all AI founders are just manipulative hypocrites. Some of them clearly talk a lot of nonsense, and some are obviously happy to moralise in public while helping build a machine that eats the world.

I think many of the engineers and technologists involved genuinely believe they are on a mission. The trouble is that once the mission needs giant infrastructure, capital, geopolitical support, and regulatory exceptions, it stops being a pure engineering story.

At that point, the question is no longer just what the engineers believe. The question becomes who is funding it, who is governing it, who wants to weaponise it, who wants to monetise it, and who is paying the real-world cost.

That is why I am more interested in the power structure around AI than in the marketing language of AI safety itself.