The agent runs under its own AWS role, not yours.
AWS Bedrock AgentCore, the IAM blast radius its starter toolkit quietly ships, and the 30-minute audit to run this week.
🎯 Attack of the Week
How AgentCore God Mode works. An attacker either ships a malicious skill into a Bedrock AgentCore deployment or prompt-injects a running agent. The agent executes under its auto-generated execution role. That role wildcards `arn:aws:bedrock-agentcore:*:memory/*`, holds an unbound `InvokeCodeInterpreter`, and pulls ECR images without namespace scoping. From there, the attacker walks the agent across the account: reading other agents' memory stores, invoking their code interpreters, and pulling their container images for whatever secrets got baked in. One compromised agent becomes every agent in the account, no CVE required.
The AI agent runs under its own AWS role, not yours. If that role is broad, every prompt injection becomes a privilege escalation. Most IAM patterns I've written assume the thing executing the action is a predictable service. Agents aren't. They act on input text, including input they shouldn't trust.
If anything in your org ships on Bedrock AgentCore, pull the execution role this week. Look for the starter-toolkit defaults on memory, interpreter, and ECR scopes. Cleanup is straightforward once you've seen it. If you don't run AgentCore, the same question lands on any service where your code hands identity to a background runner.
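A sketch of what the cleanup can look like: one execution-role policy with all three defaults scoped down to the agent's own resources. The account ID, region, memory ID, interpreter ID, repository name, and the memory action list are all illustrative placeholders; check the exact action names your agent actually needs against the AWS service authorization reference before shipping this.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OwnMemoryOnly",
      "Effect": "Allow",
      "Action": [
        "bedrock-agentcore:CreateEvent",
        "bedrock-agentcore:RetrieveMemoryRecords"
      ],
      "Resource": "arn:aws:bedrock-agentcore:us-east-1:111122223333:memory/AGENT_MEMORY_ID"
    },
    {
      "Sid": "OwnInterpreterOnly",
      "Effect": "Allow",
      "Action": "bedrock-agentcore:InvokeCodeInterpreter",
      "Resource": "arn:aws:bedrock-agentcore:us-east-1:111122223333:code-interpreter/AGENT_INTERPRETER_ID"
    },
    {
      "Sid": "OwnEcrRepoOnly",
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Resource": "arn:aws:ecr:us-east-1:111122223333:repository/my-agent-repo"
    }
  ]
}
```

The shape is the point: every `Resource` pins to one agent's ARN instead of a wildcard, so a prompt-injected agent can only reach its own memory, interpreter, and image.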
🚨 Rule of the Week
Detection sketch for AgentCore role sprawl. Flag any IAM role trusted by a bedrock-agentcore principal whose policy uses:
- `Resource: arn:aws:bedrock-agentcore:*:memory/*`
- `bedrock-agentcore:InvokeCodeInterpreter` with a wildcard resource
- `ecr:BatchGetImage` beyond the agent's own namespace
These are the starter-toolkit defaults Unit 42 documented. Run it as a Config rule if your org uses Config, or as a weekly CloudTrail + IAM Access Analyzer diff otherwise. Under an hour to build either way.
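A minimal sketch of the matching logic, assuming you have already dumped each role's policy documents to JSON (e.g. via `aws iam get-role-policy`). The function name is mine; the three patterns are the starter-toolkit defaults above.

```python
def risky_statements(policy: dict) -> list:
    """Return statements in an IAM policy document that match the
    AgentCore role-sprawl patterns: wildcarded memory ARNs, unbound
    InvokeCodeInterpreter, or ECR reads not pinned to one repository."""
    flagged = []
    stmts = policy.get("Statement", [])
    if isinstance(stmts, dict):  # single-statement policies may be a bare dict
        stmts = [stmts]
    for s in stmts:
        if s.get("Effect") != "Allow":
            continue
        actions = s.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = s.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        for r in resources:
            # 1) Cross-agent memory access via a wildcarded memory ARN.
            if r.startswith("arn:aws:bedrock-agentcore") and r.endswith("memory/*"):
                flagged.append(s)
                break
            # 2) Code-interpreter invocation with a wildcard resource.
            if "bedrock-agentcore:InvokeCodeInterpreter" in actions and r == "*":
                flagged.append(s)
                break
            # 3) ECR image pulls not scoped to a single repository.
            if "ecr:BatchGetImage" in actions and (r == "*" or r.endswith("repository/*")):
                flagged.append(s)
                break
    return flagged

# Example: a policy resembling the starter-toolkit defaults.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["bedrock-agentcore:CreateEvent"],
         "Resource": "arn:aws:bedrock-agentcore:*:memory/*"},
        {"Effect": "Allow",
         "Action": "bedrock-agentcore:InvokeCodeInterpreter",
         "Resource": "*"},
        {"Effect": "Allow",
         "Action": ["ecr:BatchGetImage"],
         "Resource": "arn:aws:ecr:us-east-1:111122223333:repository/my-agent"},
    ],
}
print(len(risky_statements(policy)))  # first two statements are flagged
```

Wire the same predicate into a Config custom rule's Lambda or run it over a weekly IAM dump; the matching logic doesn't change.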
🔧 Defender's Corner
Run Permiso's SandyClaw against one agent skill before your next AgentCore deploy. Permiso introduced a sandbox that detonates AI agent skills and records what they do across the LLM and OS layers before approval. It catches exactly the class of failure Unit 42 just named: an agent doing what its role allows but its designer never meant it to do. It's the first public dynamic-analysis tool for agent skills I've seen. The category will get crowded fast; the early look is worth 15 minutes.
📡 Also on the Radar
Cloud Accounts tops Red Canary's most prevalent attacker-technique list for the second year running, based on 110,000+ confirmed threats across thousands of customer environments. If you're building a 2026 security budget, this is the data point that anchors the conversation.