An AI agent hacks the AI platform of a global leader. Total time: just two hours. 

The news is sending shockwaves through the tech world. On 9 March 2026, researchers from CodeWall successfully compromised McKinsey’s internal AI platform “Lilli” and gained full read and write access to highly sensitive corporate data.[1]

First Deloitte, now McKinsey. The lesson is as simple as it is uncomfortable: many companies are currently installing Formula 1 engines into their organisations without brakes or seatbelts. 

The Brutal Truth About AI Security 

The McKinsey breach was not a mysterious AI magic trick. It was a chain of familiar security weaknesses that an autonomous AI agent was able to exploit quickly and efficiently. 

According to the researchers, the platform contained several critical vulnerabilities: 

  • Unauthenticated API endpoints: 22 out of 200+ endpoints required no authentication at all 
  • SQL injection via JSON keys: Field names were concatenated directly into SQL queries rather than being parameterised 
  • IDOR (Insecure Direct Object Reference): Broken access control that allowed cross-user data access (reading other employees’ search histories) 
  • Publicly exposed API documentation: The full API surface was openly documented with no restrictions 
  • Unprotected system prompt storage: AI prompts were stored in the same database with write access available through the SQL injection
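The “SQL injection via JSON keys” finding is worth dwelling on, because it is subtler than classic injection through values. A minimal sketch of the pattern, using a hypothetical preference-update handler (the function and field names are illustrative, not from the CodeWall report): identifiers cannot be bound as query parameters, so if field names from a JSON body are interpolated into the SQL, an attacker controls part of the statement itself.

```python
import sqlite3

def save_prefs_vulnerable(conn, user_id, prefs: dict):
    # VULNERABLE: JSON keys (field names) are concatenated into the SQL.
    # A key like 'role = "admin", theme' rewrites the statement, even
    # though the *value* is safely parameterised.
    for field, value in prefs.items():
        conn.execute(
            f"UPDATE users SET {field} = ? WHERE id = ?", (value, user_id)
        )

# Hypothetical whitelist of columns a user may actually update.
ALLOWED_FIELDS = {"theme", "language"}

def save_prefs_safe(conn, user_id, prefs: dict):
    # Identifiers can't be bound as parameters, so validate them against
    # a whitelist before interpolation; values remain parameterised.
    for field, value in prefs.items():
        if field not in ALLOWED_FIELDS:
            raise ValueError(f"unexpected field: {field}")
        conn.execute(
            f"UPDATE users SET {field} = ? WHERE id = ?", (value, user_id)
        )
```

With the vulnerable version, a request body of `{"role = \"admin\", theme": "light"}` silently escalates the caller’s role; the whitelisted version rejects it outright.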

Individually, these issues might appear minor. But when chained together and probed by an autonomous AI system capable of running thousands of tests within minutes, they became a master key to the entire platform.

AI Security Matters for Every Organisation 

This incident shows something many companies are still underestimating. AI does not just accelerate productivity. It also accelerates the speed and efficiency of cyberattacks. An autonomous AI agent can test vulnerabilities, chain weaknesses together and iterate far faster than any human attacker. What used to take days or weeks can now happen in hours. Traditional security checks were never designed for this speed.

At the same time, organisations are rapidly deploying AI systems across their operations, connecting them to internal knowledge bases, data platforms and business processes. This creates an entirely new attack surface. 

Innovation cannot come at the cost of integrity. Many organisations are rushing to deploy generative AI while neglecting the security and governance controls required to operate it safely. 

AI Assurance Is Becoming Essential

Deploying AI is easy. Operating AI safely, reliably and in compliance with applicable requirements is much harder. Organisations need to understand: 

  • what risks their AI systems introduce
  • what controls need to be implemented 
  • whether those controls are actually working 
  • how these systems remain safe as they evolve 

This is where AI assurance becomes critical. At AIQURIS, we focus on exactly this challenge. Not just helping organisations adopt AI, but helping them operate it safely and with confidence. 

With AIQURIS Risk+ and Control+, organisations can: 

  • identify and assess AI risks 
  • derive the necessary controls 
  • maintain visibility into how AI systems operate in practice 

Because incidents like this highlight an important reality. AI risk is not theoretical. It is operational. And organisations deploying AI without the right controls may be exposing themselves to risks they do not yet fully see. 


Sources

  1. CodeWall, “How We Hacked McKinsey’s AI Platform” (March 2026).
    https://codewall.ai/blog/how-we-hacked-mckinseys-ai-platform