This page describes the security features, configurations, and settings in Google Kubernetes Engine (GKE) Autopilot, which is the recommended way to run GKE.
Who should use this page?
This page is intended for security administrators who want to understand the security restrictions that Google specifically applies to Autopilot clusters, and the security features that are available for use in Autopilot.
You should also read the GKE security overview, which describes the hardening options, measures, and recommendations that apply to all GKE clusters, network configurations, and workloads.
Security measures in Autopilot
Autopilot clusters enable and apply security best practices and settings by default, including many of the recommendations in the security overview and in Harden your cluster security.
If you want recommended resources based on your use case, skip to Security resources by use case. The following sections describe the security policies that Autopilot applies for you.
Autopilot and the Kubernetes Pod Security Standards
The Kubernetes project has a set of security guidelines named the Pod Security Standards that define the following policies:
- Privileged: No access restrictions. Not used in Autopilot.
- Baseline: Prevents known privilege escalation pathways. Allows most workloads to run without significant changes.
- Restricted: Highest level of security. Requires significant changes to most workloads.
Autopilot applies the Baseline policy with some modifications for usability. Autopilot also applies many constraints from the Restricted policy, but avoids restrictions that would block a majority of your workloads from running. Autopilot applies these constraints at the cluster level using an admission controller that Google controls. If you need to apply additional restrictions to comply with the full Restricted policy, you can optionally use the PodSecurity admission controller in specific namespaces.
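If you choose to apply the full Restricted policy in specific namespaces, the upstream PodSecurity admission controller is configured through standard namespace labels. As a minimal sketch (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: restricted-apps   # hypothetical namespace name
  labels:
    # Enforce the upstream Restricted Pod Security Standard in this namespace
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```

Pods in this namespace must then satisfy the Restricted policy in addition to the constraints that Autopilot already applies at the cluster level.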
The following list describes how Autopilot clusters implement the Baseline and Restricted policies. For descriptions of each control, see the corresponding entry in Pod Security Standards.
When evaluating compliance, we considered how the constraints apply to your own workloads. This excludes verified Google Cloud partner workloads and system workloads that require specific privileges to function.
- Capabilities: By default, Autopilot workloads can only access the capabilities specified in the Baseline Pod Security Standard. You can manually enable the following capabilities:
  - NET_RAW for ping and SYS_PTRACE for debugging: add the capability to the Pod SecurityContext.
  - NET_ADMIN for service meshes such as Istio: specify --workload-policies=allow-net-admin in your cluster creation command. Available on new and upgraded existing clusters running GKE version 1.27 and later.

  Autopilot also allows some verified partner workloads to set dropped capabilities.
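As an illustration, the optional NET_RAW and SYS_PTRACE capabilities described above might be requested in a container's securityContext like this (the Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod        # placeholder name
spec:
  containers:
  - name: app
    image: busybox       # placeholder image
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        add:
        - NET_RAW        # allows ping inside the container
        - SYS_PTRACE     # allows debugging tools such as strace
```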
- HostPath volumes: Autopilot allows read-only hostPath access to /var/log for debugging, but denies all other hostPath read or write access.
- /proc mount type: If you set procMount to "Unmasked", GKE automatically overrides it with "Default".
- Seccomp: Autopilot applies the RuntimeDefault seccomp profile to all workloads. You can manually override this setting for specific workloads by setting the profile to Unconfined in the Pod specification.
- Volume types: Autopilot allows hostPath volumes (read-only access to /var/log) for debugging, gcePersistentDisk for Compute Engine persistent disks, and nfs for network file system volumes.
- Running as root: Autopilot allows workloads to set runAsUser to 0. Industry surveys show that 76% of containers run as root, so Autopilot allows running as root to support most workloads.

Built-in security configurations
Google applies many built-in security settings to Autopilot clusters based on industry best practices and our expertise. The following are some of the security configurations that Autopilot applies for you:
You can use the following Linux capabilities:
"SETPCAP", "MKNOD", "AUDIT_WRITE", "CHOWN", "DAC_OVERRIDE", "FOWNER", "FSETID", "KILL", "SETGID", "SETUID", "NET_BIND_SERVICE", "SYS_CHROOT", "SETFCAP", "SYS_PTRACE"

You can also manually enable the following capabilities:
- NET_RAW for ping: add it to the Pod SecurityContext.
- SYS_PTRACE for debugging: add it to the Pod SecurityContext.
- NET_ADMIN for service meshes such as Istio: use --workload-policies=allow-net-admin when you create a cluster or update an existing cluster. After that, add the capability to the Pod SecurityContext. Available on GKE version 1.27 and later.

In GKE versions earlier than 1.21, the "SYS_PTRACE" capability is not supported.
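As a sketch, the NET_ADMIN workload policy might be enabled when creating an Autopilot cluster as follows; CLUSTER_NAME and LOCATION are placeholders for your own values:

```shell
gcloud container clusters create-auto CLUSTER_NAME \
    --location=LOCATION \
    --workload-policies=allow-net-admin
```

After the cluster is created or updated with this flag, add the NET_ADMIN capability to the Pod SecurityContext of the workloads that need it.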
Workloads can't run in managed namespaces such as kube-system.

Autopilot enforces the following restrictions on containers to limit the impact of container escape vulnerabilities.
Linux capabilities and kernel security
- Autopilot applies the RuntimeDefault seccomp profile to all Pods in the cluster unless the Pods use GKE Sandbox. GKE Sandbox enforces host isolation and ignores seccomp rules specified in the Pod manifest. The sandbox is considered the security boundary for GKE Sandbox Pods.
- Autopilot drops the CAP_NET_RAW Linux capability for all containers. This permission is not often used and has been the subject of multiple escape vulnerabilities. The ping command might fail inside your containers because this capability is dropped. You can manually re-enable this capability by setting it in your Pod SecurityContext.
- Autopilot drops the CAP_NET_ADMIN Linux capability for all containers. To re-enable this capability, specify the --workload-policies=allow-net-admin flag in your cluster creation or update command. NET_ADMIN is required by some workloads, such as Istio.
- Autopilot enables Workload Identity Federation for GKE, which prevents Pod access to sensitive metadata on the node.
- Autopilot blocks Kubernetes Services that set the spec.externalIPs field, to protect against CVE-2020-8554.
- Autopilot allows only the following types of volumes: "configMap", "csi", "downwardAPI", "emptyDir", "gcePersistentDisk", "nfs", "persistentVolumeClaim", "projected", "secret". Other types of volumes are blocked because they require node privileges. HostPath volumes are blocked by default, but containers can request read-only access to /var/log paths for debugging.
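As an illustration of the hostPath exception, a Pod might request read-only access to /var/log like this (the Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: log-reader       # placeholder name
spec:
  containers:
  - name: reader
    image: busybox       # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: varlog
      mountPath: /var/log
      readOnly: true     # hostPath mounts must be read-only in Autopilot
  volumes:
  - name: varlog
    hostPath:
      path: /var/log     # only /var/log paths are permitted
```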
To enforce additional policies, you can use the PodSecurity admission controller, Gatekeeper, or Policy Controller. However, you might not need to use any of these if the built-in security configurations described on this page already meet your requirements.

Autopilot blocks SSH access to nodes. GKE handles all operational aspects of the nodes, including node health and all Kubernetes components running on the nodes.
You can still connect remotely to your running containers by using the Kubernetes exec functionality to run commands in your containers for debugging, including connecting to an interactive shell, for example with kubectl exec -it deploy/YOUR_DEPLOYMENT -- sh.
The kube-apiserver user and the system:masters group can't be impersonated.

Autopilot modifies mutating webhooks to exclude resources in managed namespaces, such as kube-system, from being intercepted.
Autopilot also rejects webhooks that specify one or more of the following resources, and any sub-resources of those resources.
- group: "", resource: nodes
- group: "", resource: persistentVolumes
- group: certificates.k8s.io, resource: certificatesigningrequests
- group: authentication.k8s.io, resource: tokenreviews
You can't use the *
wildcard for resources or groups to
bypass this restriction.
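For example, a mutating webhook scoped to Pods, like the following sketch, is allowed; changing the rule to target nodes, tokenreviews, or the "*" wildcard would cause Autopilot to reject it. All names and the service reference here are placeholders:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-webhook          # placeholder name
webhooks:
- name: pods.example.com         # placeholder webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: webhook-svc          # placeholder service
      namespace: default
      path: /mutate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]          # "nodes" or "*" here would be rejected
```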
Autopilot modifies ValidatingAdmissionPolicy objects to exclude resources in managed namespaces, such as kube-system, from being intercepted.
Autopilot also rejects ValidatingAdmissionPolicy
objects that specify one or more of the following resources, and any
sub-resources of those resources.
- group: "", resource: nodes
- group: "", resource: persistentVolumes
- group: certificates.k8s.io, resource: certificatesigningrequests
- group: authentication.k8s.io, resource: tokenreviews
You can't use the *
wildcard for resources or groups to
bypass this restriction.
Security boundaries in Autopilot
Autopilot provides access to the Kubernetes API but removes permissions to use some highly privileged Kubernetes features, such as privileged Pods. The goal is to limit the ability to access, modify, or directly control the node virtual machine (VM). Autopilot implements these restrictions to limit workloads from having low-level access to the node VM, so that Google Cloud can offer full management of nodes, and a Pod-level SLA .
Our intent is to prevent unintended access to the node VM. We accept submissions to that effect through the Google Vulnerability Reward Program (VRP) and will reward reports at the discretion of the Google VRP reward panel.
By design, privileged users such as cluster administrators have full control of any GKE cluster. As a security best practice, we recommend that you avoid granting powerful GKE or Kubernetes privileges widely and instead use namespace administrator delegation wherever possible as described in our multi-tenancy guidance .
Autopilot provisions single-tenant VMs in your project for your exclusive use. On each individual VM, your Autopilot workloads might run together, sharing a security-hardened kernel. Because the shared kernel represents a single security boundary, if you require strong isolation, such as for high-risk or untrusted workloads, we recommend running your workloads on GKE Sandbox Pods to provide multi-layer security protection.
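For workloads that need this stronger isolation, GKE Sandbox is requested through the Pod's RuntimeClass. A minimal sketch (the Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod        # placeholder name
spec:
  runtimeClassName: gvisor   # runs the Pod in GKE Sandbox
  containers:
  - name: app
    image: busybox           # placeholder image
    command: ["sleep", "3600"]
```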
Security resources based on use case
The following sections provide you with links and recommendations to plan, implement, and manage the security of your Autopilot clusters depending on your use case.
Plan cluster security
- For a high-level overview of cluster security, read the GKE security overview.
- To understand how we secure the Kubernetes control plane, read Control plane security.
- To understand the GKE in-cluster trust model, read Cluster trust.
- For hardening best practices, read the GKE hardening guide.
- For guidance on responding to security incidents, read Mitigating security incidents.
- Read the GKE audit policy.
- Learn about the audit logs that GKE creates.
Authenticate and authorize
After setting up your Autopilot clusters, you might need to authenticate your users and applications to use resources such as the Kubernetes API or Google Cloud APIs.
- To authenticate users, read Authenticating users.
- To authenticate applications, read Authenticating applications, which provides steps for authenticating from apps in the same cluster, in other Google Cloud environments, or in external environments.
Harden clusters and workloads
If you have specialized isolation or hardening requirements beyond the pre-configured Autopilot measures, consider the following resources:
Use case | Resources |
---|---|
Restrict public access to your cluster endpoint | Create your Autopilot clusters as private clusters, which disable the public IP address of the cluster control plane. For instructions, refer to Private clusters. |
Restrict cluster access to specific networks | Use control plane authorized networks to specify IP address ranges that can access your cluster. |
Store sensitive information outside your cluster | Storing sensitive data in an external, encrypted storage provider with versioning enabled is a common compliance requirement and a best practice. Use Secret Manager to store your data and access it from your Autopilot clusters using Workload Identity Federation for GKE. For instructions, refer to Access secrets stored outside GKE clusters using Workload Identity Federation for GKE. |
Verify container images before deployment to your cluster | Use Binary Authorization to check the integrity of the container images referenced in your Pod manifests at deploy time. For instructions, refer to Verify container images at deploy time using Binary Authorization. |
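As an example of restricting cluster access to specific networks, control plane authorized networks might be configured with gcloud as follows; the cluster name, location, and CIDR range are placeholders:

```shell
gcloud container clusters update CLUSTER_NAME \
    --location=LOCATION \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/24
```

Only clients in the specified ranges can then reach the cluster control plane endpoint.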
Monitor your security posture
After setting up your clusters and deploying your workloads, you should set up and configure monitoring and logging so that you have observability over your cluster security posture. We recommend that you do all of the following:
- Enroll your clusters in the GKE security posture dashboard to audit workloads for concerns such as problematic security configurations or vulnerabilities in your container operating system packages and get actionable mitigation information.
- Get notified about new security bulletins and upgrade events using cluster notifications.
- Observe your clusters using the GKE dashboard in Cloud Monitoring or the Observability tab in GKE.
- Learn how to view and manage your GKE audit logs in Cloud Logging.
What's next
- Read the GKE security overview.
- Read the GKE hardening guide.
- Subscribe to security bulletins and release notes.