Execution: Container Escape

Preview

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

This document describes a threat finding type in Security Command Center. Threat findings are generated by threat detectors when they detect a potential threat in your cloud resources. For a full list of available threat findings, see Threat findings index.

Overview

A process running inside the container ran a known suspicious tool binary for container escape activities. This indicates a possible container escape attempt, where a process inside the container tries to break out of isolation and interact with the host system or other containers. This is a high-severity finding, because it suggests that an attacker might be attempting to gain access beyond the container's boundaries, potentially compromising the host or other infrastructure. Container escapes can result from misconfigurations, vulnerabilities in container runtimes, or exploitation of privileged containers.

Agent Engine Threat Detection is the source of this finding.

How to respond

To respond to this finding, do the following:

Review finding details

  1. Open the Execution: Container Escape finding as directed in Reviewing findings. Review the details on the Summary and JSON tabs.

  2. On the Summary tab, review the information in the following sections:

    • What was detected, especially the following fields:
      • Program binary: the absolute path of the executed binary
      • Arguments: the arguments passed during binary execution
    • Affected resource
  3. On the JSON tab, note the following fields:

    • resource:
      • project_display_name: the name of the project that contains the AI agent.
    • finding:
      • processes:
        • binary:
          • path: the full path of the executed binary.
        • args: the arguments that were provided while executing the binary.
  4. Identify other findings that occurred at a similar time for the affected AI agent. Related findings might indicate that this activity was malicious, rather than a failure to follow best practices.

  5. Review the settings of the affected AI agent.

  6. Check the logs for the affected AI agent.
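When triaging findings exported from Security Command Center, the JSON fields listed above can be extracted programmatically. The following is a minimal sketch; the sample finding below is illustrative and abridged, not an actual exported finding, but the field paths (`resource.project_display_name`, `finding.processes[].binary.path`, `finding.processes[].args`) follow the structure described in the steps above.

```python
import json

# Illustrative, abridged sample of an exported finding; real findings
# contain many more fields and hypothetical values are used here.
finding_export = """
{
  "resource": {"project_display_name": "example-ai-project"},
  "finding": {
    "category": "Execution: Container Escape",
    "processes": [
      {
        "binary": {"path": "/tmp/escape_tool"},
        "args": ["--target", "/host"]
      }
    ]
  }
}
"""

data = json.loads(finding_export)

# Project that contains the affected AI agent.
project = data["resource"]["project_display_name"]

# Executed binary and its arguments, one entry per detected process.
suspect_processes = [
    (proc["binary"]["path"], proc["args"])
    for proc in data["finding"]["processes"]
]

for path, args in suspect_processes:
    print(f"project={project} binary={path} args={' '.join(args)}")
```

Collecting the binary path and arguments this way makes it easier to compare the activity against known tooling and to correlate with log entries for the affected AI agent.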

Research attack and response methods

  1. Review the MITRE ATT&CK framework entry for this finding type: Escape to Host.
  2. To develop a response plan, combine your investigation results with MITRE research.

Implement your response

For response recommendations, see Respond to AI threat findings .
