AI in security: what actually changes
AI does change some things in security, but not in the magical way people keep implying. The fundamentals are still motivation, resources, layered controls, and whether the humans involved know what they are doing.
Nothing fundamental has changed
The basic logic of security is still the same. If someone wants in badly enough, has the resources, and has a real reason to persist, they will keep going until they get access or get what they want. That was true before AI and it is true now.
Most people and most companies are not under constant pressure from a nation-state team with unlimited patience. A lot of security is still more basic than people like to admit. Are you a worthwhile target? Are you easy compared with the next option? Do you have something worth stealing? Is the attacker motivated enough to keep spending time on you?
That is why small companies and ordinary people are often effectively low-hanging fruit, or not worth the effort at all. The technology changes, but the economics of attention and motivation still matter.
What AI does change
What AI changes is that it gives people something that looks a lot like an on-demand engineering team for parts of security work. On the defensive side, that can look like a much stronger testing and validation layer. On the offensive side, it can look like an automated red team.
That is useful because a lot of traditional security tooling is basically pattern matching. It sees something that looks similar to an old issue and shouts. Teams then get drowned in noise and eventually stop trusting the output.
A better system can test in a way that is closer to how a real user or a real attacker would behave. It can say this is actually exploitable, or no, this looked suspicious but in practice it goes nowhere. That reduction in noise is real value.
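The difference between the two approaches can be sketched in a few lines. This is a toy illustration, not a real scanner: the finding records, rule names, and the idea of a `confirmed` flag set by a validation step are all assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str          # the signature that fired (hypothetical names)
    confirmed: bool    # did a follow-up validation step actually reproduce it?

def pattern_match_alerts(findings):
    # Traditional approach: every signature hit becomes an alert.
    return list(findings)

def validated_alerts(findings):
    # Validation-first approach: only keep findings a test reproduced.
    return [f for f in findings if f.confirmed]

findings = [
    Finding("sql-injection-like", confirmed=True),
    Finding("hardcoded-secret-like", confirmed=False),
    Finding("path-traversal-like", confirmed=False),
]

print(len(pattern_match_alerts(findings)))  # 3 alerts: everything shouts
print(len(validated_alerts(findings)))      # 1 alert: only the exploitable one
```

Same inputs, one-third of the noise. The hard part in practice is the validation step itself, which is exactly where an AI-driven tester earns its keep.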
The bit people keep skipping: cost
Every polished demo sounds impressive: "We pointed the model at the problem and it found these deep issues." Fine. But the obvious question is what it cost to do that.
If you are a large vendor, you can throw a serious amount of compute and engineering time at a target and call the result a breakthrough. Most normal companies are not going to do that. That does not make the result fake, but it does change what it means in practice.
There is a big difference between proving a capability exists and showing that it is a sensible day-to-day operating model for ordinary teams. A lot of the current discussion quietly blurs those two things together.
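A back-of-envelope calculation makes the question concrete. Every number below is made up for illustration; the point is the shape of the arithmetic, not the figures.

```python
# All figures are hypothetical, chosen only to illustrate the calculation.
compute_cost_per_run = 500.0      # assumed model compute spend per deep scan
engineer_hours_per_run = 20       # assumed human triage and validation time
hourly_rate = 150.0               # assumed loaded engineering cost per hour
confirmed_findings_per_run = 2    # assumed real, exploitable issues found

total_cost = compute_cost_per_run + engineer_hours_per_run * hourly_rate
cost_per_finding = total_cost / confirmed_findings_per_run

print(f"total: ${total_cost:.0f}, per confirmed finding: ${cost_per_finding:.0f}")
# → total: $3500, per confirmed finding: $1750
```

A vendor absorbs that cost once for a headline. A team paying it every sprint has to ask whether each confirmed finding is actually worth that much to them. That is the difference between a capability and an operating model.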
Security is still much bigger than code
Another thing these discussions keep flattening is security itself. They make it sound like if you scan the code hard enough, you solve the problem. That is not how the real world works.
Security is infrastructure, access control, endpoint controls, logging, monitoring, change management, recovery, response, vendors, and people. Code matters, obviously, but code is only one layer.
You can have AI review every line of your codebase and still get breached because somebody plugs a random USB stick into a company machine, or gives away access through social engineering, or because your operational controls are weak. Humans are still a massive part of the problem, and that has not magically gone away because the models got better.
The bigger problem is misplaced trust
The risk is not that the tools exist. The risk is that companies start treating them like a transfer of responsibility.
They are not that.
This is the same misunderstanding people have had for years with cloud platforms. Shared responsibility did not disappear because AWS gave you a better dashboard. If you rent infrastructure from somebody else and then run your systems badly on top of it, that is still your problem.
Same here. If you use an AI tool to scan your code, test your systems, or tell you what looks dangerous, you still own the outcome. If it misses something, or your team implements the fix badly, or the tool has no context and breaks something else, the vendor is not going to absorb that risk for you. You still carry it.
Where I think AI fits well
Used properly, AI is a force multiplier for teams that already know what they are doing. It can help them test more, explore more, validate more, and reduce the amount of noise they have to sort through. It can make some forms of internal red-team style work far more available than they were before.
Used badly, it becomes an excuse to remove experienced people, skip proper review, and pretend the tool now owns the difficult bit. That is where the problems start.
My actual view on it
AI does matter in security. I do not think it is fake or overblown in the sense that it does nothing. It clearly makes some things faster and changes what both defenders and attackers can attempt.
What I do think is overblown is the idea that it changes the fundamentals. It does not. The fundamentals are still motivated attackers, layered controls, operational discipline, social engineering, and whether the team actually understands the system it is responsible for.
So yes, use the tools. Use them to reduce noise. Use them to strengthen real teams. Use them to test more aggressively.
Just do not pretend the old problems have disappeared, because they have not.