A critical pre-authentication SQL injection vulnerability in the LiteLLM AI gateway is currently being exploited in the wild to exfiltrate master API keys and cloud provider credentials. LiteLLM acts as a central proxy for major language models including OpenAI and Anthropic, making it a high-value target for credential theft.
The vulnerability stems from a failure to sanitize the Authorization: Bearer header during the proxy's token verification step. By injecting a single quote into a fake token, unauthenticated attackers can break out of the intended database query and execute arbitrary PostgreSQL statements. This allows direct extraction of sensitive cloud access tokens and enterprise billing information from the platform database.
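The breakout mechanic can be illustrated with a minimal sketch. Note the assumptions: the table and column names below are hypothetical, not LiteLLM's actual schema, and sqlite3 stands in for PostgreSQL purely to keep the example self-contained — the string-interpolation flaw and the parameterized fix work the same way in both.

```python
import sqlite3

# Hypothetical schema for illustration only -- not LiteLLM's real layout.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE verification_tokens (token TEXT, user_id TEXT)")
db.execute("INSERT INTO verification_tokens VALUES ('sk-real-key', 'admin')")

def verify_unsafe(bearer_token):
    # VULNERABLE pattern: the header value is interpolated directly into
    # the SQL string, so a single quote terminates the literal early.
    query = f"SELECT user_id FROM verification_tokens WHERE token = '{bearer_token}'"
    return db.execute(query).fetchone()

def verify_safe(bearer_token):
    # FIX: parameterized query -- the driver treats the value as pure data.
    query = "SELECT user_id FROM verification_tokens WHERE token = ?"
    return db.execute(query, (bearer_token,)).fetchone()

# A fake token carrying a quote-based tautology passes the unsafe check
# without any valid key, but fails the parameterized one:
payload = "' OR '1'='1"
print(verify_unsafe(payload))  # → ('admin',)  injection succeeds
print(verify_safe(payload))    # → None        literal match fails
```

The same single-quote payload that authenticates against the unsafe variant is rejected by the parameterized one, which is why query parameterization (or an ORM that enforces it) is the structural fix rather than input filtering.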
As organizations consolidate AI management into centralized gateways, these proxies become single points of failure. A compromise of a gateway like LiteLLM hands an attacker the master keys to the organization's entire AI infrastructure, effectively bypassing individual model-level security controls.
– Immediately upgrade LiteLLM to version 1.35.0 or higher to neutralize the SQL injection path.
– Rotate all master API keys and cloud provider credentials stored within the LiteLLM database.
– Audit PostgreSQL logs for anomalous queries originating from the LiteLLM service account, particularly those targeting the credentials table.
– Implement strict network-level isolation to ensure the gateway database is not accessible from untrusted network segments.
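For the log-audit step, a quick triage script can surface candidate queries before a manual review. This is a rough sketch under stated assumptions: the log line format, the `litellm` service-account name, and the `credentials` table name are all placeholders — adapt them to your `log_line_prefix` setting and actual schema.

```python
import re

# Hypothetical injection markers and a placeholder table name.
# Tune this pattern to your environment; it is a triage aid, not a detector.
SUSPICIOUS = re.compile(
    r"(UNION\s+SELECT|OR\s+'1'='1'|pg_sleep|;\s*--|credentials)",
    re.IGNORECASE,
)

def flag_anomalies(log_lines, service_user="litellm"):
    """Return log lines from the service account matching injection markers."""
    return [
        line for line in log_lines
        if f"user={service_user}" in line and SUSPICIOUS.search(line)
    ]

sample = [
    "2024-05-01 user=litellm LOG: SELECT user_id FROM tokens WHERE token = $1",
    "2024-05-01 user=litellm LOG: SELECT * FROM tokens WHERE token = ''"
    " UNION SELECT key FROM credentials --",
]
print(flag_anomalies(sample))  # flags only the second line
```

Anything flagged should be cross-checked against application traffic from the same timestamp; parameterized queries (the `$1` placeholder in the first sample line) are the expected, benign shape.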
AI orchestration layers are the new frontier for credential harvesting and require immediate architectural hardening to prevent total cloud-native identity loss. #CodeDefence #AISecurity #LiteLLM #SQLi