In a stark demonstration of AI security vulnerabilities, cybersecurity firm Codewall successfully hacked McKinsey & Company’s internal AI platform, Lilli, in just two hours—using a technique that dates back decades. The hack was executed entirely by an AI agent, without any human intervention, credentials, or insider access. This incident has raised serious concerns about the security of enterprise AI systems and the potential risks of AI-powered tools in high-stakes environments.
How the Hack Was Executed
Codewall’s AI agent, operating autonomously, exploited a known vulnerability in Lilli’s architecture. The platform, used by more than 43,000 McKinsey employees for strategic analysis, client research, and document processing, was compromised using a method that has been recognized by cybersecurity experts for years. This approach—referred to as a prompt injection or prompt leakage—allows an attacker to manipulate the AI’s behavior by manipulating input prompts. The agent gained full read and write access to the production database, underscoring the severity of the flaw.
Implications for Enterprise AI Security
This breach highlights a critical gap in how enterprises are securing AI systems. Despite the widespread adoption of AI tools in business environments, many organizations still rely on outdated or insufficiently secured frameworks. McKinsey’s Lilli, designed to enhance productivity and decision-making, has become a prime target due to its integration into core workflows. The hack also underscores the growing threat of AI-powered attacks, where malicious agents can autonomously probe and exploit weaknesses in AI systems.
Industry experts are now calling for a reassessment of AI security protocols, especially in enterprise settings. As AI becomes more embedded in critical business functions, the potential for exploitation increases dramatically. Organizations must not only secure their AI models but also ensure that they are resilient against adversarial AI techniques that leverage long-known vulnerabilities.
Conclusion
The Codewall hack of McKinsey’s Lilli is a wake-up call for the business world. It reveals that even sophisticated AI platforms can be compromised through basic, well-known attack vectors. As companies continue to invest heavily in AI, they must prioritize robust, adaptive security frameworks that can defend against both traditional and AI-native threats.



