Your AI CCTV System is a Near-Sighted Toddler

By Andrew Heller

Why Lares Adversarial Engineers Are Exploiting Physical Security AI and What You Can Do About It

At SnoopCon, a private security conference hosted by British Telecom, Lares Adversarial Engineer Jethro Inwald delivered a talk titled “Social Engineering Near-Sighted Toddlers.” His message was clear: AI-enabled CCTV systems have the potential to lead the future of physical intrusion detection, but today most are still brittle, untested, and easily bypassed through subtle, low-tech behavioral changes.

The problem is not the concept. It's the lack of tuning. These models rarely encounter real adversarial behavior before deployment. Lares has seen them fail across industries, often in the exact scenarios they were designed to detect.

This is fixable. But it requires testing, iteration, and collaboration. Purple teaming is the fastest path to making these systems better.


What Practical Bypass Looks Like

Jethro defines a “practical attack” as one that does not require access to camera feeds, models, or vendor training data. The attacker does not need to know the system’s architecture, only that cameras exist and where they’re pointed. This is how real adversaries operate. Anything beyond that is academic, not a realistic threat simulation.


Tested Techniques That Consistently Work

These tactics have been used during real physical security assessments by Lares engineers. They are not demos or hypotheticals. They are proven in the field.

Behavioral Mutation and Obfuscation

AI models are trained on expected behaviors, such as upright walking and accessing sensitive assets directly. We have bypassed detection by crawling, moving irregularly, or pausing between movements. In one case, Jethro entered a secure cage inside a data center while hidden in a cardboard box. The system only flagged humans. It missed the intrusion completely.
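The cardboard box failure above comes down to class-filtered alerting: if the pipeline only alarms on detections labeled as a person, anything the model classifies as something else passes silently. A minimal sketch of that failure mode, with invented class labels (not any vendor's actual taxonomy):

```python
# Illustrative sketch of class-filtered alerting: the system only raises
# an alarm for detections labeled "person". An intruder hidden inside a
# box that the model classifies as "cardboard_box" never trips it.
# Class names here are invented for illustration.
ALERT_CLASSES = {"person"}

def raises_alarm(detections: list[str]) -> bool:
    """Alarm only when an alert-worthy class appears in the frame."""
    return any(label in ALERT_CLASSES for label in detections)

normal_walker = raises_alarm(["person"])         # upright walking: flagged
box_crawl = raises_alarm(["cardboard_box"])      # same intruder, hidden: missed
```

The fix is not just adding classes; it is alerting on anomalous movement of *any* object inside a restricted zone, not on object identity alone.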

We have also exploited camera angles to hide intent, such as working an under-door tool while appearing to tie a shoe, or using heavy foot traffic to obscure an adversary's actions from the camera's view.

Sequence Disruption and TTP Decoupling

Most systems detect based on behavior sequences: approach, interact, breach. When those steps are spread across time or broken apart, detections often fail.

In one instance, we used a tool to unlock a door, walked away, then returned later to open and enter. The system logged it as a person walking through an open door.

We have also opened server racks inside data center cages and closed them again before alert thresholds were triggered. Remote implants were installed and never detected.
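The decoupling failures above can be sketched as a naive sequence correlator: it only fires when the approach, interact, and breach events all land inside one correlation window. The window length and event model here are assumptions for illustration, not any vendor's real logic:

```python
# Illustrative sketch: a sequence detector that correlates "approach",
# "interact", and "breach" events, but only within a fixed time window.
# The 60-second window is an invented value.
from dataclasses import dataclass

WINDOW_SECONDS = 60

@dataclass
class Event:
    kind: str        # "approach", "interact", or "breach"
    timestamp: float # seconds since start of observation

def detects_breach(events: list[Event]) -> bool:
    """Fire only if the full sequence fits inside one window."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    for first in ordered:
        if first.kind != "approach":
            continue
        window = [e for e in ordered
                  if first.timestamp <= e.timestamp <= first.timestamp + WINDOW_SECONDS]
        kinds = {e.kind for e in window}
        if {"interact", "breach"} <= kinds:
            return True
    return False

# A tightly clustered intrusion is caught:
fast = [Event("approach", 0), Event("interact", 10), Event("breach", 20)]
# The same TTPs, spread across an hour, slip through:
slow = [Event("approach", 0), Event("interact", 1800), Event("breach", 3600)]
```

Stretching the attack over time does not change what happens; it only changes whether the correlator can stitch it back together.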

Exploiting Logic Gaps and Entry Assumptions

Detection logic is often hard-coded to monitor specific entry points or known patterns. Jethro has accessed secure areas by removing wall panels and entering through unmonitored gaps. If the system only monitors the gate, it cannot register someone entering through the wall.

Some systems forget people they lose track of for an extended period. We have entered monitored areas, waited out the memory window while out of sight, and then resumed movement without being tracked by the AI.
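That memory window can be sketched as a tracker that drops any identity not seen within a fixed timeout. The 120-second value and the idea of a stable track ID are both simplifying assumptions; real products vary:

```python
# Illustrative sketch of a track-memory window: a tracker that forgets
# any track not observed within MEMORY_SECONDS. Timeout value is invented.
MEMORY_SECONDS = 120

class TrackMemory:
    def __init__(self) -> None:
        self.last_seen: dict[str, float] = {}

    def observe(self, track_id: str, now: float) -> bool:
        """Return True if this sighting links to a remembered track."""
        known = (track_id in self.last_seen
                 and now - self.last_seen[track_id] <= MEMORY_SECONDS)
        self.last_seen[track_id] = now
        return known

tracker = TrackMemory()
tracker.observe("intruder", now=0)                 # first sighting: new track
linked = tracker.observe("intruder", now=60)       # within window: still tracked
forgotten = tracker.observe("intruder", now=300)   # waited out the window: treated as new
```

Once the window expires, the person who walked in is, as far as the system knows, a brand-new arrival with no history.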

Overloading Centralized AI Processing

Systems that offload AI analysis to central servers are vulnerable to resource constraints. We have created distractions in other zones that the AI struggles to process, drawing computing power away from the real target area. With enough noise, critical detections begin to fail.
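The starvation effect can be sketched as a central server with a fixed per-cycle compute budget: flood the queue with decoy-zone frames and the frame that matters never gets analyzed. Budget, queue behavior, and zone names are all invented for illustration:

```python
# Illustrative sketch: a central analysis server that can process only
# BUDGET_PER_CYCLE frames each cycle, FIFO, and discards the rest.
from collections import deque

BUDGET_PER_CYCLE = 5  # invented capacity

def analyze_cycle(queue: deque) -> list[str]:
    """Process frames until the budget runs out; drop whatever remains."""
    processed = []
    for _ in range(BUDGET_PER_CYCLE):
        if not queue:
            break
        processed.append(queue.popleft())
    queue.clear()  # unprocessed frames are discarded, not carried over
    return processed

frames = deque()
frames.extend(f"decoy-zone-{i}" for i in range(10))  # attacker-generated noise
frames.append("target-zone")                          # the real intrusion
seen = analyze_cycle(frames)                          # target frame never analyzed
```

The attacker never touches the server; they just make sure the budget is spent somewhere else.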

Wind-Based Radar Evasion

Ground radar sensors use environmental data to filter out noise. We measure wind conditions using handheld meters and plan routes through naturally windy zones. Wind farms and storm fronts create ideal conditions for this kind of evasion.

This same approach helps bypass fence sensors and vibration detection systems.
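The evasion works because environmental filtering is typically a hard suppression rule: above some measured wind speed, motion energy gets attributed to the weather. A minimal sketch, with an assumed 30 km/h cutoff (not any vendor's actual threshold):

```python
# Illustrative sketch of environmental noise filtering on a ground-radar
# sensor: detections are suppressed outright when wind exceeds a cutoff.
# Threshold and score scale are invented values.
WIND_SUPPRESS_KMH = 30.0

def should_alert(motion_score: float, wind_kmh: float) -> bool:
    """Alert on strong motion, unless wind is high enough to blame."""
    if wind_kmh >= WIND_SUPPRESS_KMH:
        return False  # all motion attributed to environmental noise
    return motion_score > 0.5

calm_day = should_alert(motion_score=0.9, wind_kmh=10.0)     # alert fires
storm_front = should_alert(motion_score=0.9, wind_kmh=45.0)  # same movement, no alert
```

An adversary with a handheld wind meter can simply wait for, or route through, conditions where the sensor has already decided to disbelieve itself.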


Surveillance Is Not Detection. Detection Is Not Response.

Jethro highlighted real-world cases that demonstrate how surveillance fails. The Dubai assassination of Mahmoud al-Mabhouh was captured on camera, yet none of the perpetrators were caught. The CCTV system recorded everything and prevented nothing.

The UnitedHealth CEO’s assassin escaped New York City and was found only by chance some time later.

Despite increasing investment in surveillance, police solve only a small percentage of burglaries in the United States. Most systems are reactive and rely on forensic review after the fact.


How Adversaries Plan Their Entry

Jethro discussed using telephoto lenses, night vision monoculars, and laser rangefinders, along with ATAK, a military-grade mapping tool, to map camera placement, determine sensor coverage, and identify gaps. OSINT provides additional insight into routines, schedules, and environmental factors.

This type of recon allows attackers to move with precision, avoiding detection without triggering any alerts.


Defensive Advice That Works

AI camera systems should not replace human investigation or act as standalone security. They should replace traditional motion sensors and serve as a smarter trigger for deeper review. When treated this way, they offer more value without creating blind trust.
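Used this way, every detection becomes a trigger for human review rather than a final verdict. A minimal sketch of that routing posture; the queue structure and field names are hypothetical:

```python
# Illustrative sketch of the recommended posture: route every AI detection
# to a human-review queue instead of letting the model auto-resolve it.
# Queue shape and field names are hypothetical.
review_queue: list[dict] = []

def on_detection(zone: str, label: str, confidence: float) -> None:
    """Open a review item for an analyst; never silently discard."""
    review_queue.append({
        "zone": zone,
        "label": label,
        "confidence": confidence,
        "status": "needs_human_review",
    })

on_detection("loading-dock", "person", 0.42)  # even low confidence gets human eyes
```

The model's job is to decide *what a human looks at next*, not what gets ignored.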

Jethro recommends purple teaming any AI camera platform you use or plan to implement. These systems often fail not because they are flawed by design, but because they were never tested against skilled adversaries.

Most behavioral detection vendors welcome feedback. Many can retrain their models quickly using real footage of failed detections. You do not need a full test lab to make progress. A focused engagement with Lares can expose the weak points, provide actionable evidence, and give your team and your vendor a path forward.

Schedule a Physical Security Engagement

Talk to Lares and let us test your surveillance system with real adversarial pressure. We do not just look for gaps. We exploit them, document them, and show you how to close them before someone else finds them first.
