Impact: Remove Bulk Data From Disk

This document describes a threat finding type in Security Command Center. Threat findings are generated by threat detectors when they detect a potential threat in your cloud resources. For a full list of available threat findings, see the Threat findings index.

Overview

A process was identified performing bulk data deletion operations, which could indicate an attempt to erase forensic evidence, disrupt services, or execute a data-wiping attack. This activity is concerning because attackers may remove logs, databases, or important files to cover their tracks or sabotage the system. Data destruction is often part of ransomware attacks, insider threats, or advanced persistent threats (APTs) attempting to evade detection and cause operational damage.

How to respond

To respond to this finding, do the following:

Step 1: Review finding details

  1. Open an Impact: Remove Bulk Data From Disk finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.

  2. On the Summary tab, review the information in the following sections:

    • What was detected, especially the following fields:
      • Program binary: the absolute path of the executed binary.
      • Arguments: the arguments passed during binary execution.
    • Affected resource, especially the following fields:
      • Resource full name: the full resource name of the cluster including the project number, location, and cluster name.
  3. In the detail view of the finding, click the JSON tab.

  4. In the JSON, note the following fields.

    • resource:
      • project_display_name: the name of the project that contains the cluster.
    • finding:
      • processes:
        • binary:
          • path: the full path of the executed binary.
        • args: the arguments that were provided while executing the binary.
    • sourceProperties:
      • Pod_Namespace: the name of the Pod's Kubernetes namespace.
      • Pod_Name: the name of the GKE Pod.
      • Container_Name: the name of the affected container.
      • Container_Image_Uri: the name of the container image being deployed.
      • VM_Instance_Name: the name of the GKE node where the Pod executed.
  5. Identify other findings that occurred at a similar time for this container. Related findings might indicate that this activity was malicious, instead of a failure to follow best practices.
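The related-findings check can also be scripted. The following is a minimal sketch, assuming a hypothetical organization ID and resource name (substitute values from your own finding); the final command is printed rather than executed so the sketch runs without credentials:

```shell
# Sketch: compose a Security Command Center findings query scoped to the
# same resource. ORG_ID, RESOURCE_NAME, and the timestamp are hypothetical
# placeholders, not values from a real finding.
ORG_ID="123456789"
RESOURCE_NAME="//container.googleapis.com/projects/my-project/zones/us-central1-c/clusters/my-cluster"

SCC_FILTER="resourceName=\"${RESOURCE_NAME}\" AND eventTime>\"2024-01-01T00:00:00Z\""

# Printed, not executed, so no credentials are needed:
echo gcloud scc findings list "${ORG_ID}" --filter="${SCC_FILTER}"
```

Narrowing by eventTime keeps the results to the window around the original finding, which is what makes co-occurring findings meaningful.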

Step 2: Review cluster and node

  1. In the Google Cloud console, go to the Kubernetes clusters page.

    Go to Kubernetes clusters

  2. On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.

  3. Select the cluster listed on the Resource full name row in the Summary tab of the finding details. Note any metadata about the cluster and its owner.

  4. Click the Nodes tab. Select the node listed in VM_Instance_Name.

  5. Click the Details tab and note the container.googleapis.com/instance_id annotation.
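The same cluster and node metadata can be pulled from the CLI. This is a sketch with placeholder names (substitute the cluster, location, node, and project from your finding); the commands are printed rather than executed so the block runs without credentials:

```shell
# Sketch: CLI equivalents of the console steps above. All values are
# hypothetical placeholders.
CLUSTER_NAME="my-cluster"
LOCATION="us-central1-c"
NODE_NAME="gke-my-cluster-default-pool-abcd1234-xyz9"   # VM_Instance_Name
PROJECT="my-project"

# Cluster metadata (labels, node pools, owner annotations):
echo gcloud container clusters describe "${CLUSTER_NAME}" --zone "${LOCATION}" --project "${PROJECT}"

# The node's numeric instance ID, as shown in the annotation:
echo gcloud compute instances describe "${NODE_NAME}" --zone "${LOCATION}" --project "${PROJECT}" --format="value(id)"
```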

Step 3: Review Pod

  1. In the Google Cloud console, go to the Kubernetes Workloads page.

    Go to Kubernetes Workloads

  2. On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.

  3. Filter on the cluster listed on the Resource full name row in the Summary tab of the finding details and the Pod namespace listed in Pod_Namespace, if necessary.

  4. Select the Pod listed in Pod_Name. Note any metadata about the Pod and its owner.
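If you prefer the CLI, the Pod metadata can be inspected with kubectl once cluster credentials are in place. A minimal sketch with placeholder names, printed rather than executed:

```shell
# Sketch: inspect the Pod from the command line. POD_NAMESPACE and POD_NAME
# are hypothetical placeholders; substitute the values from the finding.
POD_NAMESPACE="default"
POD_NAME="my-pod"

# Full manifest, including owner references and labels:
echo kubectl get pod "${POD_NAME}" --namespace "${POD_NAMESPACE}" -o yaml

# Human-readable summary, including recent events:
echo kubectl describe pod "${POD_NAME}" --namespace "${POD_NAMESPACE}"
```

The ownerReferences field in the manifest identifies the controller (for example, a ReplicaSet or Job) that manages the Pod, which matters later if the Pod must be stopped.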

Step 4: Check logs

  1. In the Google Cloud console, go to Logs Explorer.

    Go to Logs Explorer

  2. On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.

  3. Set Select time range to the period of interest.

  4. On the page that loads, do the following:

    1. Find Pod logs for Pod_Name by using the following filter:
      • resource.type="k8s_container"
      • resource.labels.project_id="RESOURCE.PROJECT_DISPLAY_NAME"
      • resource.labels.location="LOCATION"
      • resource.labels.cluster_name="CLUSTER_NAME"
      • resource.labels.namespace_name="POD_NAMESPACE"
      • resource.labels.pod_name="POD_NAME"
    2. Find cluster audit logs by using the following filter:
      • logName="projects/RESOURCE.PROJECT_DISPLAY_NAME/logs/cloudaudit.googleapis.com%2Factivity"
      • resource.type="k8s_cluster"
      • resource.labels.project_id="RESOURCE.PROJECT_DISPLAY_NAME"
      • resource.labels.location="LOCATION"
      • resource.labels.cluster_name="CLUSTER_NAME"
      • POD_NAME
    3. Find GKE node console logs by using the following filter:
      • resource.type="gce_instance"
      • resource.labels.instance_id="INSTANCE_ID"
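The first of these filters can also be composed for gcloud logging read instead of the Logs Explorer UI. A sketch with placeholder values (substitute the fields from your finding); the final command is printed rather than executed so the block runs without credentials:

```shell
# Sketch: build the Pod-log filter from finding fields. All values are
# hypothetical placeholders.
PROJECT="my-project"            # resource.project_display_name
LOCATION="us-central1-c"
CLUSTER_NAME="my-cluster"
POD_NAMESPACE="default"
POD_NAME="my-pod"

LOG_FILTER='resource.type="k8s_container"
resource.labels.project_id="'"${PROJECT}"'"
resource.labels.location="'"${LOCATION}"'"
resource.labels.cluster_name="'"${CLUSTER_NAME}"'"
resource.labels.namespace_name="'"${POD_NAMESPACE}"'"
resource.labels.pod_name="'"${POD_NAME}"'"'

# Printed, not executed, so no credentials are needed:
echo gcloud logging read "${LOG_FILTER}" --project="${PROJECT}" --limit=100
```

Multiple lines in a Logging filter are ANDed together, so this returns only entries matching every label.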

Step 5: Investigate running container

If the container is still running, it might be possible to investigate the container environment directly.

  1. Go to the Google Cloud console.

    Open Google Cloud console

  2. On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.

  3. Click Activate Cloud Shell.

  4. Obtain GKE credentials for your cluster by running the following commands.

    For zonal clusters:

      gcloud container clusters get-credentials CLUSTER_NAME \
          --zone LOCATION \
          --project PROJECT_NAME

    For regional clusters:

      gcloud container clusters get-credentials CLUSTER_NAME \
          --region LOCATION \
          --project PROJECT_NAME

    Replace the following:

    • CLUSTER_NAME: the cluster listed in resource.labels.cluster_name
    • LOCATION: the location listed in resource.labels.location
    • PROJECT_NAME: the project name listed in resource.project_display_name
  5. Retrieve the executed binary:

      kubectl cp \
          POD_NAMESPACE/POD_NAME:PROCESS_BINARY_FULLPATH \
          -c CONTAINER_NAME \
          LOCAL_FILE

    Replace LOCAL_FILE with a local file path in which to store the copied binary.

  6. Connect to the container environment by running the following command:

      kubectl exec \
          --namespace=POD_NAMESPACE \
          -ti POD_NAME \
          -c CONTAINER_NAME \
          -- /bin/sh

    This command requires the container to have a shell installed at /bin/sh.

Step 6: Research attack and response methods

  1. Review the MITRE ATT&CK framework entry for this finding type: Data Destruction.
  2. To develop a response plan, combine your investigation results with MITRE research.

Step 7: Implement your response

The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.

  • Contact the owner of the project with the compromised container.
  • Stop or delete the compromised container and replace it with a new container.
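One possible containment sequence can be sketched as follows. All names are hypothetical placeholders, the commands are printed rather than executed, and this is only one option among the responses above, not a prescribed procedure:

```shell
# Sketch of a containment sequence (placeholders throughout). Note that a
# Pod managed by a Deployment is recreated automatically after deletion, so
# the workload itself must be scaled down to stop it.
POD_NAMESPACE="default"
POD_NAME="my-pod"
DEPLOYMENT="my-deployment"   # hypothetical owning workload

# Preserve state for forensics before removing anything:
echo kubectl get pod "${POD_NAME}" --namespace "${POD_NAMESPACE}" -o yaml

# Stop the compromised workload:
echo kubectl scale deployment "${DEPLOYMENT}" --namespace "${POD_NAMESPACE}" --replicas=0
echo kubectl delete pod "${POD_NAME}" --namespace "${POD_NAMESPACE}"
```

Capturing the Pod manifest first preserves image URIs, owner references, and labels that are lost once the Pod is deleted.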
