Back to Feed
AI▼ 70
AI coding agents leak secrets via prompt injection
VentureBeat·
A security researcher discovered a critical vulnerability, dubbed 'Comment and Control,' affecting AI coding agents from Anthropic, Google, and Microsoft. The exploit allows prompt injection to trick these agents into revealing sensitive API keys and other secrets. This occurs because the agents process untrusted inputs, such as pull request titles, as instructions, bypassing built-in safeguards. While all vendors have quietly patched the issue, the disclosure highlights a significant gap in security practices, particularly concerning agent runtime protections versus model-level safeguards. The findings emphasize the need for organizations to rigorously audit agent permissions and implement robust input sanitization to mitigate systemic risks.
Tags
ai
security
regulation
Original Source
VentureBeat — venturebeat.com