Anthropic’s Labor Market Impacts of AI report: what it means for physical security.

Charlie Bennett, our Regional Director for Europe, shares his reflections on the current discussion around AI’s impact on the jobs market and the common misconceptions around its role in modern security operations.

“So We Don’t Need CCTV Operators Any More…”

It’s a comment I hear surprisingly often when talking about artificial intelligence in the security industry.

Usually it comes up in conversations about video analytics.

Someone will say something like:

“Once AI can monitor the cameras, we won’t need operators anymore.”

Or:

“Installers won’t be necessary once systems configure themselves.”

Or even:

“Robots will replace security guards.”

At first glance, these statements seem logical.

If AI can detect intrusions, identify suspicious behaviour, recognise faces, and analyse video faster than a human — surely the human will become redundant?

But when we step away from these assumptions and look at the data, the reality is far less apocalyptic. And far more interesting.

A landmark new labour market study published by Anthropic in March 2026 offers a fascinating glimpse into how AI is actually affecting jobs today.

And the findings should give everyone in the physical security industry good reason to take a deep breath.


What the Research Actually Shows

The Anthropic study introduces a new metric called “Observed Exposure” — a measure that doesn’t just ask whether AI could theoretically do a job, but whether it is actually being used to do it in real workplaces.

The Anthropic team combined three datasets: the O*NET database of task descriptions for around 800 US occupations, their own usage data from millions of Claude conversations, and a prior academic measure of which tasks LLMs could theoretically accelerate. By overlaying actual usage patterns on theoretical capability, they could see where AI was genuinely changing work — and where it wasn’t.
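To make the overlay idea concrete, here is a toy sketch — not Anthropic’s actual pipeline, and using entirely made-up task data — of how a measure like this differs from theoretical exposure. A task only counts toward observed exposure if it is both theoretically automatable and actually seen in usage data:

```python
# Toy illustration only (invented data, not the study's method or numbers).
# Each occupation is a list of tasks, each tagged with two flags:
# (theoretically_exposed, seen_in_usage_data).
occupations = {
    "computer programmer": [(True, True), (True, True), (True, True), (True, False)],
    "cctv operator":       [(True, False), (False, False), (False, False), (False, False)],
    "security installer":  [(False, False), (False, False), (False, False), (False, False)],
}

def observed_exposure(tasks):
    """Share of an occupation's tasks where AI use is actually observed,
    not merely theoretically possible."""
    hits = sum(1 for theoretical, observed in tasks if theoretical and observed)
    return hits / len(tasks)

for name, tasks in occupations.items():
    print(f"{name}: {observed_exposure(tasks):.0%}")
```

Note how the hypothetical CCTV operator scores zero even though one of their tasks is theoretically automatable: capability alone doesn’t register until real usage appears — which is the distinction the rest of this article turns on.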

The results were revealing.

Even in the occupations most theoretically exposed to AI, real-world AI task coverage is a fraction of what’s possible. The most exposed occupation they identified — computer programming — showed 75% task coverage. But then the numbers drop fast. And at the bottom end, a full 30% of all workers have zero AI task coverage. Their jobs simply don’t appear in AI usage data at all.

In fact, the jobs most exposed to AI today are overwhelmingly white-collar digital roles — programming, customer service, data entry, and research.

Which raises an interesting question: what does this mean for the security industry?


Why Video Analytics Won’t Replace CCTV Operators

Let’s deal with the most persistent myth first, because it’s the one most likely to be repeated at your next client meeting.

Video analytics — AI systems that can automatically detect motion, flag anomalies, search for individuals across camera feeds, and generate alerts — are genuinely impressive. The technology has advanced rapidly. And yes, it will change how CCTV monitoring is done.

But replace operators? No. And the Anthropic data helps explain why.

The Observed Exposure measure is highest for tasks that are cognitive, text-based, data-processing, and automatable in isolation. Writing. Summarising. Coding. Data entry. These are tasks with clear inputs and clear outputs, where an AI can operate independently and the result can be evaluated without human judgment.

CCTV monitoring is none of these things. The job of a skilled CCTV operator is not to watch a screen and press a button when something moves — any motion-detection algorithm can do that. The job is to interpret. To notice that the person running in a public space is doing something slightly different from someone taking exercise. To understand whether the argument happening at the front reception is about to turn into a fight or resolve itself. To maintain situational awareness across dozens of feeds simultaneously and build a coherent mental model of what is normal and what is not.

This is contextual, experience-dependent judgment. It is exactly the category of cognitive work that AI is worst at and shows no signs of mastering.

What video analytics actually does — and what it’s genuinely good at — is handling the high-volume, low-judgment detection work that currently consumes enormous amounts of operator attention. Flagging the car that’s been parked in a restricted zone. Alerting to motion in a sterile area out of hours. Identifying a face or number plate that matches a known exclusion list.

This is AI augmenting the operator by handing them the signal so they can make the decision. The Anthropic study found that augmentative AI use is far less disruptive to employment than fully automated replacement, and weighted it accordingly in their exposure calculations.

The CCTV operator of 2030 will have better tools. They will not be replaced by them.


Why Robots Won’t Replace Security Guards

The robot security guard is a favourite of sci-fi fans and AI doomists. And to be fair, the hardware has become genuinely interesting. Autonomous patrol robots exist. Some can navigate complex indoor environments, detect intruders, and transmit footage in real time. A handful of organisations, such as airports and campuses, are piloting them.

But “piloting an interesting robot” and “replacing your security workforce” are separated by a chasm that neither the technology nor the practical realities of security work can currently bridge.

Consider what a security officer actually does. Yes, they patrol. Yes, they monitor access points. But they also de-escalate the situation that’s about to turn violent. They reassure the person having a mental health crisis in a public space. They make a judgment call about whether to challenge someone or let them pass. They notice that something feels wrong before they can articulate exactly what it is — and they act on that feeling in a way that prevents an incident rather than just recording it.

These are fundamentally human capacities. They require social intelligence, physical presence, and adaptability in novel situations — qualities that no robotic system possesses, and that the Anthropic research shows AI is nowhere near developing.

The study is explicit that physical roles with complex, in-person, judgment-dependent task structures have zero observed AI exposure. Not just low exposure. Zero. The gap between “a robot can patrol a corridor” and “a robot can do the job of a security guard” is the entire human dimension of security work — and that gap is not closing any time soon.

There’s also a practical dimension that often gets overlooked: liability, public trust, and regulatory requirements. Clients don’t just want monitored premises — they want a legal chain of responsibility, someone who can be accountable for the decisions made on their site. A robot cannot be held responsible for a bad decision. A security officer can, and that accountability is part of the service clients are buying.

Robots will find a role in the security industry as support tools or as patrol aids, but they will not replace the security guard.


Why the Installation Job Market Is More Resilient Than You Think

The installation side of the physical security industry faces a different version of the AI anxiety. Here the concern is less about being replaced by AI and more about whether a contracting market — fewer operators needed, fewer guards needed — means fewer systems being installed.

This logic is flawed, and the Anthropic data explains why.

If AI tools are augmenting human security workers rather than replacing them, the demand for the systems those workers use will remain strong — and, in many scenarios, will actually increase. An organisation that moves from having operators watch raw feeds to having operators supervise AI-filtered alerts still needs all the cameras. It may need more of them, to give the analytics system the visibility it requires. It almost certainly needs upgraded infrastructure: better bandwidth, more storage, more processing power, more sophisticated integration between systems.

The movement toward smarter physical security doesn’t reduce the requirement for physical security infrastructure — it expands it.

There’s also the question of installation skills themselves. The Anthropic study notes that the workers with zero AI exposure are those in hands-on, physical, technically complex roles. Installing a CCTV system requires physical presence. It requires working at height, in confined spaces, in live environments. It requires surveying a building — understanding where a camera needs to be, how to run a cable through the existing structure, how to troubleshoot a connection problem on a commissioned system. These are exactly the tasks that AI cannot perform and shows no sign of being able to perform.

The study found no evidence of displacement in physical, skilled trades roles.


The Real Shift: From Monitoring to Intelligence

What the Anthropic research reveals — and what almost no one in the AI-will-change-everything conversation acknowledges — is that even in the most AI-exposed roles, the gap between theoretical capability and actual deployment is enormous.

Computer programming — the most AI-exposed occupation in the study — showed 75% task coverage. That means even in the field where AI tools are most advanced and most widely adopted, a quarter of what programmers do remains untouched. And for every other occupation, the coverage is lower. Dramatically lower.

The research also found something important about unemployment. Despite all the fear, despite just over three years of rapidly advancing AI capability since ChatGPT launched in late 2022, there has been no measurable increase in unemployment among the most AI-exposed workers. Any effect that might indicate displacement is statistically indistinguishable from zero.

There is one concerning data point: hiring of workers aged 22–25 into the most exposed roles appears to have slowed slightly. This suggests AI is affecting entry-level pathways into certain occupations. But again — this is in high-exposure roles. Not in physical security. Not in installation. Not in guarding.


What This Means for the Industry

The physical security industry should be paying close attention to AI — but for the right reasons.

The right reasons are: AI tools will make your operators more effective. Video analytics will reduce alert fatigue and improve response times. Better data processing will improve incident review and investigation. AI-assisted scheduling and resource planning will improve operational efficiency.

These are meaningful changes that are worth understanding, adapting to, and in many cases investing in.

What AI tools are not is a replacement for the people who do the work. The Anthropic study draws a clear line between augmentation and automation — and physical security sits firmly on the augmentation side of that line. AI gives your people better tools. It does not give you a world where you don’t need people.

So when someone says: “We don’t need CCTV operators any more…” the answer is simple. We need them more than ever — just not in the way we used to. Because the future of security is not about watching cameras, it’s about understanding what the cameras are telling us, and that still requires human intelligence.


This article references the Anthropic study “Labor Market Impacts of AI: A New Measure and Early Evidence” by Maxim Massenkoff and Peter McCrory, published March 2026. The full paper is available at the link below.

🔗 Read the Anthropic study: https://www.anthropic.com/research/labor-market-impacts
