Researchers have identified how prompt injection attacks can be weaponized against AI agent frameworks to achieve remote code execution, bypassing traditional security controls by exploiting how language models process untrusted input. Microsoft's security team published technical details and mitigation guidance for developers using AI agents.
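The guidance itself is not reproduced here, but the core risk can be sketched: if an agent lets model output select which tool to run, instructions injected into untrusted input can surface as a request for a code-executing tool, and an allowlist check at the dispatch layer is one common mitigation. A minimal, hypothetical Python sketch (the tool names and `TOOL:` protocol are invented for illustration, not taken from Microsoft's advisory):

```python
# Hypothetical agent dispatch layer. Untrusted documents are fed to the model,
# so injected instructions may appear in its output as a tool request; the
# allowlist refuses any tool capable of executing code.

ALLOWED_TOOLS = {"search", "summarize"}  # hypothetical safe tool names

def plan_from_model(model_output: str) -> str:
    """Parse the tool the model asked for from a line like 'TOOL: <name>'."""
    for line in model_output.splitlines():
        if line.startswith("TOOL:"):
            return line.split(":", 1)[1].strip()
    return "summarize"  # default when no tool request is present

def dispatch(model_output: str) -> str:
    """Run the requested tool only if it is on the allowlist."""
    tool = plan_from_model(model_output)
    if tool not in ALLOWED_TOOLS:
        # Likely a prompt-injected request for a dangerous capability.
        return f"blocked: {tool}"
    return f"ran: {tool}"

# A benign model response picks an allowed tool...
print(dispatch("TOOL: summarize"))
# ...while an injected response asking for shell access is refused.
print(dispatch("TOOL: run_shell\nARGS: rm -rf /"))
```

The point of the sketch is the placement of the check: validation happens where model output crosses into privileged action, not inside the prompt, since the model itself cannot reliably distinguish instructions from data in untrusted input.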