GKE release notes archive
This page contains a historical archive of all release notes for
Google Kubernetes Engine prior to 2020. To view more recent release notes, see the Release notes.
You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console,
or programmatically access release notes in BigQuery.
To get the latest product updates delivered to you, add the URL of this page to your feed
reader, or add the feed URL directly.
December 23, 2019
Rapid channel (1.16.x)
Feature
Global access for internal TCP/UDP load balancing Services is now in Beta. Global access allows
internal load balancing IP addresses to be accessed from any region within
a VPC.
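Global access is enabled per Service through an annotation. The following is a minimal sketch assuming the Beta-era annotation names and a hypothetical Service; verify the annotation against current GKE documentation before use:

```shell
# Sketch: internal LoadBalancer Service with global access enabled.
# Service name, selector, and ports are hypothetical placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
EOF
```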
December 13, 2019
Version updates
GKE cluster versions have been updated.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
By default, firewall rules restrict your cluster master to only initiate
TCP connections to your nodes on ports 443 (HTTPS) and 10250 (kubelet).
For some Kubernetes features, you might need to add firewall rules to
allow access on additional ports. For example, in Kubernetes 1.9 and
older, kubectl top accesses heapster, which needs a firewall rule to
allow TCP connections on port 8080. To grant such access, you can add
firewall rules.
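As an illustration, a rule opening port 8080 from the master to the nodes might look like the following sketch; the network name, master CIDR, and node tag are placeholders you would replace with your cluster's actual values:

```shell
# Sketch: allow the cluster master to reach heapster on TCP 8080.
# "default", 172.16.0.0/28, and "my-cluster-nodes" are hypothetical values.
gcloud compute firewall-rules create allow-master-to-heapster \
  --network default \
  --direction INGRESS \
  --source-ranges 172.16.0.0/28 \
  --allow tcp:8080 \
  --target-tags my-cluster-nodes
```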
Feature
Node-local DNS caching is now available in Beta. Note that the node-local cache is
a single point of failure: if it goes down, DNS resolution for all Pods
on that node is broken until the cache comes back up.
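Assuming the gcloud beta addon name of this period, enabling the cache at cluster creation might look like the following sketch (cluster name and zone are placeholders):

```shell
# Sketch: create a cluster with the Beta node-local DNS cache addon.
gcloud beta container clusters create my-cluster \
  --zone us-central1-a \
  --addons NodeLocalDNS
```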
Known Issues
Issue
There is a low risk that consumers of the published OpenAPI document
that made assumptions about the absence of schema
info for a given type (for example, "no schema info means a resource
is a custom resource") could have those assumptions broken once custom
resources start publishing schema definitions.
Stable channel and 1.13.x
Stable channel
There are no changes to the Stable channel this week.
No channel
1.13.11-gke.15
1.13.12-gke.16
Regular channel and 1.14.x
Regular channel
There are no changes to the Regular channel, but 1.15 will be available
in this channel in January 2020.
No channel
1.14.7-gke.25
1.14.8-gke.21
1.14.9-gke.2
Rapid channel (1.16.x)
Rapid channel
1.16.0-gke.20
GKE 1.16.0-gke.20 (alpha) is now available for testing
and validation in the Rapid release channel.
Retired APIs
Deprecated
extensions/v1beta1, apps/v1beta1, and apps/v1beta2 won't be served by
default.
All resources under apps/v1beta1 and apps/v1beta2: use apps/v1 instead.
New clusters have the cos-metrics-enabled flag enabled by
default. This change allows kernel crash logs to be collected. You can
disable it by adding --metadata cos-metrics-enabled=false when you create clusters.
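Based on the flag described above, an opt-out at creation time would look like this sketch (cluster name and zone are placeholders):

```shell
# Sketch: create a cluster with kernel crash log collection disabled.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --metadata cos-metrics-enabled=false
```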
Fixed
All of the versions made available include a fix for the issue where
newly created node pools are created successfully but are incorrectly
shown as PROVISIONING, as reported on December 6th, 2019.
The December 4, 2019 rollout is paused.
Versions that were made available for upgrades and new clusters in that release
will no longer be available. This is to address an issue where newly created
node pools are created successfully but are incorrectly shown as PROVISIONING.
December 4, 2019
Fixed
We have fixed an issue with cluster upgrade from a version earlier than
1.14.2-gke.10 when gVisor is enabled in the cluster. It's now safe to upgrade to
any version greater than 1.14.7-gke.17. This issue was originally noted in the release notes for October 30, 2019.
Version updates
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
v1.12.x
No new v1.12.x versions this week.
Stable channel and 1.13.x
Stable channel
There are no changes to the Stable channel this week.
There are no changes to the Rapid channel this week.
November 22, 2019
Fixed
The known issue in the COS kernel that may cause kernel panic, previously
reported on November 5th, 2019, is resolved.
The versions available in this release use updated versions of COS.
GKE 1.12 uses cos-69-10895-348-0, and versions 1.13 and 1.14 use cos-stable-73-11647-348-0.
Version updates
GKE cluster versions have been updated.
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version → Upgrade version
1.12.10-gke.15 → 1.12.10-gke.17
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
There are no changes to the Rapid channel this week.
Versions no longer available
The following versions are no longer available for new clusters or upgrades.
1.12.10-gke.15
1.13.11-gke.5
1.13.11-gke.9
1.13.11-gke.11
1.13.12-gke.2
1.14.7-gke.10
1.14.7-gke.14
1.14.7-gke.17
1.14.8-gke.2
November 18, 2019
Fixed
The known issue in the COS kernel that may cause nodes to crash,
previously reported on November 5th, 2019,
is resolved. This release downgrades COS to cos-73-11647-293-0.
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version → Upgrade version
1.13.0-gke.0 to 1.13.11-gke.13 → 1.13.11-gke.14 (Stable channel)
1.13.12-gke.0 to 1.13.12-gke.7 → 1.13.12-gke.8
1.14.0-gke.0 to 1.14.7-gke.22 → 1.14.7-gke.23
1.14.8-gke.0 to 1.14.8-gke.11 → 1.14.8-gke.12 (Regular channel)
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
v1.12.x
1.12.10-gke.17
No new v1.12.x versions this week.
Stable channel and 1.13.x
Stable channel
1.13.11-gke.14
Fixed
This version includes a fix for a known issue in the COS kernel that may have
caused nodes to crash.
No channel
1.13.12-gke.8
Fixed
This version includes a fix for a known issue in the COS kernel that may have
caused nodes to crash.
Regular channel and 1.14.x
Regular channel
1.14.8-gke.12
Fixed
This version includes a fix for a known issue in the COS kernel that may have
caused nodes to crash.
No channel
1.14.7-gke.23
Fixed
This version includes a fix for a known issue in the COS kernel that may have
caused nodes to crash.
Rapid channel (1.15.x)
1.15.4-gke.15
No new v1.15.x versions this week.
November 11, 2019
Changes
Change
After November 11, 2019,
new clusters and node pools created with gcloud have node auto-upgrade enabled by default.
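If you need the old behavior, auto-upgrade can still be disabled explicitly per node pool; a sketch with placeholder names:

```shell
# Sketch: create a node pool with node auto-upgrade explicitly disabled
# (the new default is enabled).
gcloud container node-pools create my-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --no-enable-autoupgrade
```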
November 05, 2019
Version updates
GKE cluster versions have been updated.
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version → Upgrade version
v1.12.x → 1.12.10-gke.15
v1.13.x → 1.13.11-gke.5
v1.14.x → 1.14.7-gke.10
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
v1.12.x
v1.12.10-gke.17
Fixed
This release includes a patch for the golang vulnerability
CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.
Known issues
Issue
We have found an issue in COS that might cause kernel panics
on nodes.
This impacts node versions:
1.13.11-gke.9
1.13.11-gke.11
1.13.11-gke.12
1.13.12-gke.1
1.13.12-gke.2
1.13.12-gke.3
1.13.12-gke.4
1.14.7-gke.14
1.14.7-gke.17
1.14.8-gke.1
1.14.8-gke.2
1.14.8-gke.6
1.14.8-gke.7
A patch is being tested and will roll out soon. In the meantime, we recommend that customers
avoid these node versions or downgrade to previous, unaffected patch versions.
New features
Feature
Surge upgrades are now in Beta. Surge upgrades let you configure the speed and disruption
of node upgrades.
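Surge behavior is configured per node pool. A sketch using the Beta-era flags, with placeholder names and illustrative values:

```shell
# Sketch: allow up to 2 extra (surge) nodes during an upgrade, with at
# most 1 node unavailable at a time.
gcloud beta container node-pools update my-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --max-surge-upgrade 2 \
  --max-unavailable-upgrade 1
```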
Changes
Change
Node auto-provisioning has reached General Availability. Node auto-provisioning creates or deletes
node pools in your cluster based on resource requests.
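A minimal sketch of enabling node auto-provisioning on an existing cluster; the resource limits are illustrative only:

```shell
# Sketch: enable node auto-provisioning with cluster-wide CPU and
# memory limits that bound what it may create.
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --enable-autoprovisioning \
  --min-cpu 1 --max-cpu 32 \
  --min-memory 1 --max-memory 128
```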
October 30, 2019
Version updates
GKE cluster versions have been updated.
New default version
The default version for new clusters is now v1.13.11-gke.9
(previously v1.13.10-gke.0). Clusters enrolled
in the Stable release channel will be auto-upgraded to this version.
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version → Upgrade version
1.12.x versions → 1.12.10-gke.17
1.13.x versions → 1.13.11-gke.5
1.14.x versions → 1.14.7-gke.10
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
The following versions are no longer available for new clusters or upgrades.
1.12.10-gke.15
1.13.7-gke.24
1.13.9-gke.3
1.13.9-gke.11
1.13.10-gke.0
1.13.10-gke.7
1.14.6-gke.1
1.14.6-gke.2
1.14.6-gke.13
Known Issues
Issue
If you use Sandbox Pods in your GKE cluster and plan to upgrade from a
version less than 1.14.2-gke.10 to a version greater than 1.14.2-gke.10, you
need to manually run kubectl delete mutatingwebhookconfiguration gvisor-admission-webhook-config after the upgrade.
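The cleanup step described above can be run as-is once the upgrade completes:

```shell
# Remove the stale gVisor admission webhook left over from pre-1.14.2-gke.10.
kubectl delete mutatingwebhookconfiguration gvisor-admission-webhook-config
```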
October 18, 2019
Version updates
GKE cluster versions have been updated.
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version → Upgrade version
1.12.x versions → 1.13.7-gke.24
1.14.x versions 1.14.6-gke.0 and older → 1.14.6-gke.1
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
The following versions are no longer available for new clusters or upgrades.
1.12.9-gke.15
1.12.9-gke.16
1.12.10-gke.5
1.12.10-gke.11
Security bulletin
Issue
A vulnerability was recently discovered in Kubernetes, described in CVE-2019-11253,
which allows any user authorized to make POST requests to execute a remote
Denial-of-Service attack on a Kubernetes API server. For more information,
see the security bulletin.
October 11, 2019
Version updates
GKE cluster versions have been updated.
New default version
The default version for new clusters is now v1.13.10-gke.0 (previously
v1.13.7-gke.24). Clusters enrolled in the Stable release channel will be
auto-upgraded to this version.
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version → Upgrade version
versions older than 1.12.9-gke.13 → 1.12.9-gke.15
1.13.x versions older than 1.13.7-gke.19 → 1.13.7-gke.24
1.14.x versions older than 1.14.6-gke.0 → 1.14.6-gke.1
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
Node image for Ubuntu updated to ubuntu-gke-1804-d1809-0-v20190918. Upgrades the Nvidia GPU driver to the 418 driver, adds the Vulkan ICD for graphical workloads, and fixes the nvidia-uvm installation order.
Change
Upgrades GPU device plugin to the latest version with Vulkan support.
Issue
Do not upgrade to this version if you use Workload Identity. There is a known issue where the gke-metadata-server Pods crashloop if you create or upgrade a cluster to 1.14.6-gke.13.
Fixed
Fixes an issue where cronjobs cannot be scheduled when the total number of existing jobs exceeds 500.
Rapid channel (1.15.x)
1.15.3-gke.18
GKE 1.15.3-gke.18 (alpha) is now available for testing
and validation in the Rapid release channel.
Change
Upgraded Istio to 1.2.5.
Change
Improvements to gVisor.
Change
Node image for Container-Optimized OS updated to cos-rc-77-12371-44-0. This update includes upgrading the kernel to 4.19 from 4.14 and upgrading Docker to 19.03 from 18.09.
Change
Node image for Ubuntu updated to ubuntu-gke-1804-d1903-0-v20190917a.
This update includes upgrading the kernel to 5 from 4.15 and upgrading
Docker to 19.03 from 18.09.
Issue
Do not update to this version if you have clusters with hundreds of nodes per cluster or with I/O-intensive workloads. Clusters with these characteristics may be impacted by a known issue in versions 4.19 and 5.0 of the Linux kernel that introduces performance regressions in the fdatasync system call.
Versions no longer available
v1.14.3-gke.11 is no longer available for new clusters or upgrades.
Fixed a bug with fluentd that would prevent new nodes from starting on
large clusters with over 1000 nodes on v1.12.6.
October 2, 2019
Feature
Maintenance windows and exclusions now give you granular control over when automatic maintenance occurs on your
clusters. You can specify the start time, duration, and recurrence of a
cluster's maintenance window. You can also designate specific periods of time
when non-essential automatic maintenance should not occur.
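For example, a recurring weekend maintenance window might be configured as follows; cluster name, zone, and window values are illustrative, and exclusions use separate flags on the same command:

```shell
# Sketch: a 4-hour maintenance window recurring every Saturday and Sunday.
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --maintenance-window-start 2019-10-05T04:00:00Z \
  --maintenance-window-end 2019-10-05T08:00:00Z \
  --maintenance-window-recurrence 'FREQ=WEEKLY;BYDAY=SA,SU'
```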
September 26, 2019
Version updates
GKE cluster versions have been updated.
New default version
The default version for new clusters is now v1.13.7-gke.24
(previously v1.13.7-gke.8). Clusters enrolled
in the Stable release channel will be auto-upgraded to this version.
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version → Upgrade version
versions older than 1.12.9-gke.13 → 1.12.9-gke.15
1.13.x versions older than 1.13.7-gke.19 → 1.13.7-gke.24
Auto-upgrades are currently occurring two days behind the rollout schedule.
Some 1.11 clusters will be upgraded to 1.12 in the week of October 7th.
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
1.12.x
No new v1.12.x versions this week.
Stable channel (1.13.x)
No new v1.13.x versions this week.
Change
v1.13.7-gke.24 is now available in the Stable release channel.
Regular channel (1.14.x)
There are no changes to the Regular channel in this release.
1.14.6-gke.2
Fixed
This release includes a patch for CVE-2019-9512 and CVE-2019-9514.
Starting with GKE v1.15, the open source Kubernetes Dashboard is no longer natively supported in GKE as a managed add-on.
To deploy it manually, follow the deployment instructions in the Kubernetes Dashboard documentation.
Change
Resizing PersistentVolumes is now a beta feature. As part of this
change, resizing a PersistentVolume no longer requires you to restart
the Pod.
Versions no longer available
The following versions are no longer available for new clusters or upgrades.
1.12.7-gke.25
1.12.7-gke.26
1.12.8-gke.10
1.12.8-gke.12
1.12.9-gke.7
1.12.9-gke.13
1.13.6-gke.13
1.13.7-gke.8
1.13.7-gke.19
September 20, 2019
Feature
Ingress Controllerv1.6, which was previously available in beta, is
generally available for clusters running v1.13.7-gke.5 and higher.
Along with Ingress Controller, the following are also generally available:
The release notes for September 16, 2019 were incorrectly
published early, on September 9. The incorrect release notes included an
announcement of the availability of a security patch that was not
actually made available on that date. For more
information about the security patch, see the security bulletin for September 16, 2019.
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version → Upgrade version
v1.11 → v1.12
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
v1.12.x
v1.12.10-gke.5
Fixed
Fixes an issue where Vertical Pod Autoscaler would reject valid Pod
patches.
Feature
Network Endpoint Groups,
which allow HTTP(S) load balancers to target Pods directly, are now
generally available.
Feature
Release channels,
which provide more control over which automatic upgrades your cluster receives,
are generally available. In addition to the Rapid channel, you can now enroll
your clusters in the Regular or Stable channel.
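Enrollment happens when you create the cluster; a sketch assuming the gcloud beta surface of this period, with placeholder names:

```shell
# Sketch: create a cluster enrolled in the Regular release channel.
gcloud beta container clusters create my-cluster \
  --zone us-central1-a \
  --release-channel regular
```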
September 9, 2019
Correction
The release notes for September 16, 2019 were incorrectly
published early, on September 9. The incorrect release notes included an
announcement of the availability of a security patch that was not
actually made available until the week of September 16, 2019. For more
information about the patch, see the security bulletin for September 16, 2019.
No GKE releases occurred the week of September 9, 2019.
September 5, 2019
Version updates
GKE cluster versions have been updated.
New default version
The default version for new clusters is now 1.13.7-gke.8 (previously
1.12.8-gke.10).
Scheduled automatic upgrades
Auto-upgrades are no longer paused.
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version → Upgrade version
1.11.x → 1.12.7-gke.25
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
1.11.10-gke.6
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
For example, the default RBAC policy no longer grants access to discovery and
permission-checking APIs, and you must take specific action to preserve the
old behavior for newly-created cluster users.
Differences between GKE v1.14.x and Kubernetes 1.14
The RunAsGroup feature has been promoted to beta and enabled by default. PodSpec and
PodSecurityPolicy objects can be used to control the primary GID of
containers on the Docker and containerd runtimes.
Feature
Early-access to test Windows containers is now available. If you are
interested in testing Windows containers, fill out this form.
Other changes
Change
The node.k8s.io API group and runtimeclasses.node.k8s.io resource
have been migrated to a built-in API. If you were using RuntimeClasses,
you must recreate each of them after upgrading, and also delete the
runtimeclasses.node.k8s.io CRD. RuntimeClasses can no
longer be created without a defined handler.
Change
When creating a new GKE cluster, Stackdriver Kubernetes
Engine Monitoring is now the default Stackdriver support option. This is a
change from prior versions where Stackdriver Logging and Stackdriver
Monitoring were the default Stackdriver support option. For more
information, see Overview of
Stackdriver support for GKE.
Deprecated
OS and Arch information is now recorded in the kubernetes.io/os and kubernetes.io/arch labels on Node objects. The previous
labels (beta.kubernetes.io/os and beta.kubernetes.io/arch) are still recorded, but are
deprecated and targeted for removal in Kubernetes 1.18.
Known Issues
Issue
Users with the Quobyte Volume plugin are advised not to upgrade
between GKE 1.13.x and 1.14.x due to an issue with
Kubernetes 1.14. This will be fixed in an upcoming release.
Bug fixes and performance improvements.
Rapid
The following versions are available to clusters enrolled in the Rapid release channel.
1.14.5-gke.5
GKE 1.14.5-gke.5 is now available in the Rapid release
channel. It includes bug fixes and performance improvements.
For more details, refer to the release notes for Kubernetes v1.14.
You can now use Customer-managed encryption keys (beta) to control the encryption used for attached persistent disks in your
clusters. This is available as a dynamically provisioned PersistentVolume.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
Fixes an issue that can cause Horizontal Pod Autoscaler to increase
the replica count to the maximum, regardless of other autoscaling
factors.
Fixed
Upgrades Istio to 1.1.13 to address two vulnerabilities announced by the Istio project. These vulnerabilities can be used to mount
a Denial of Service (DoS) attack against services using Istio.
Change
The node image for Container-Optimized OS (COS) is now cos-69-10895-329-0.
v1.13.x
Multiple v1.13.x versions are available this week:
Fixes an issue that can cause Horizontal Pod Autoscaler to increase
the replica count to the maximum during a rolling update, regardless of other autoscaling
factors.
Fixed
Upgrades Istio to 1.1.13 to address two vulnerabilities announced by the Istio project. These vulnerabilities can be used to mount
a Denial of Service (DoS) attack against services using Istio.
Change
The node image for Container-Optimized OS (COS) is now cos-73-11647-267-0.
Upgrades Istio to 1.1.13 to address two vulnerabilities announced by the Istio project. These vulnerabilities can be used to mount
a Denial of Service (DoS) attack against services using Istio.
Change
The node image for Container-Optimized OS (COS) is now cos-73-11647-267-0.
New features
Feature
Config Connector is a Kubernetes add-on that
allows you to manage your Google Cloud resources through Kubernetes
configuration.
In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not
guaranteed to receive bug fixes or security updates, and the control plane may
become incompatible with nodes running unsupported versions.
Specifically, the Kubernetes v1.13.x control plane is not compatible with nodes
running v1.10.x. Clusters in such a configuration could become unreachable or
fail to run your workloads correctly. Additionally, security
patches are not applied to v1.10.x and below.
We previously published a notice that Google would enable node auto-upgrade to
node pools running v1.10.x or lower, to bring those clusters into a supported
configuration and mitigate the incompatibility risk described above. To allow
for sufficient time for customers to complete the upgrade themselves, Google
postponed upgrading cluster control planes to 1.13 until mid-September 2019.
Please plan your manual node upgrade to keep your clusters healthy and up to
date.
Scheduled automatic upgrades
Auto-upgrades are currently paused.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. Seethese instructionsfor more information on the Kubernetes versioning scheme.
Fixes a problem where Cluster Autoscaler can create too many nodes when
scaling up.
1.12.9-gke.10
Fixed
Fixes a problem where Vertical Pod Autoscaler would reject valid patches
to Pods.
Fixed
Improvements to Cluster Autoscaler.
Fixed
Updates Istio to v1.0.9-gke.0.
v1.12.8-gke.12
Fixed
Updates Istio to v1.0.9-gke.0.
1.12.7-gke.2
Fixed
Updates Istio to v1.0.9-gke.0.
Fixed
Fixes a problem where the kubelet could fail to start a Pod for the first time if the node
was not completely configured and the Pod's restart policy was NEVER.
v1.13.x
Multiple v1.13.x versions are available this week:
Fixes a problem where Cluster Autoscaler can create too many nodes when
scaling up.
1.13.7-gke.15
Fixed
Fixes a problem where Vertical Pod Autoscaler would reject valid patches
to Pods.
Fixed
Improvements to Cluster Autoscaler.
Feature
You can now use Vulkan with GPUs to process graphics workloads. The Vulkan configuration directory is mounted at /etc/vulkan/icd.d in the container.
Fixed
Updates Istio to v1.1.10-gke.0.
Fixed
Fixes a problem where the kubelet could fail to start a Pod for the first time if the node
was not completely configured and the Pod's restart policy was NEVER.
Fixes a problem where Cluster Autoscaler can create too many nodes when
scaling up.
Change
In v1.14.3-gke.10 and higher, GKE Sandbox uses the gvisor.config.common-webhooks.networking.gke.io webhook, which
is created when the cluster starts and makes sandboxed nodes available faster.
Security bulletin
Issue
Kubernetes recently discovered a vulnerability, CVE-2019-11247,
which allows cluster-scoped custom resource instances to be acted on as if
they were namespaced objects existing in all Namespaces. This vulnerability
is fixed in GKE versions also announced today. For more
information, see the security bulletin.
New features
Feature
Clusters running v1.13.6-gke.0 or higher can use Shielded GKE Nodes (beta),
which provide strong, verifiable node identity and integrity to increase the
security of your nodes.
New versions available for upgrades and new clusters
During the week of July 8, 2019, a release resulted in a partial rollout.
Release notes were not published at that time. Changes discussed in the
rest of this entry were applied only to the following zones:
europe-west2-a
us-east1
us-east1-d
In those zones only, the following new versions are available:
1.13.7-gke.15
1.12.9-gke.10
1.12.7-gke.26
1.12.8-gke.12
In those zones only, the following versions are no longer available for
new clusters or nodes:
1.11.10-gke.5
In those zones only, clusters running v1.11.x with auto-upgrade enabled
were upgraded to v1.12.7-gke.25.
Security bulletin
Fixed
GKE v1.13.7.x includes patches that mitigate multiple
vulnerabilities present in v1.13.6. Clusters running any v1.13.6.x
version should upgrade to v1.13.7.x to mitigate these vulnerabilities,
which are described in the following security bulletins:
GKE usage metering (Beta) now supports tracking actual
consumption, in addition to resource requests, for clusters running
v1.12.8-gke.8 and higher, v1.13.6-gke.7 and higher, or 1.14.2-gke.8 and higher.
A new BigQuery table, gke_cluster_resource_consumption, is
created automatically in the BigQuery dataset. For more information
about this and other improvements to usage metering, see Usage metering (Beta).
VPC-native is no longer the default cluster network mode for new
clusters created using gcloud v256.0.0 or higher. Instead, the routes-based
cluster network mode is used by default. We recommend manually enabling VPC-native, to
avoid exhausting routes quota.
VPC-native clusters are created by default when you use the Google Cloud console or gcloud versions 251.0.0 through 255.0.0.
Routes-based clusters are created by default when using the REST API.
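To opt in to VPC-native mode explicitly, regardless of client defaults, a sketch with placeholder names:

```shell
# Sketch: --enable-ip-alias selects VPC-native (alias IP) networking,
# avoiding consumption of routes quota.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --enable-ip-alias
```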
June 27, 2019
Version updates
GKE cluster versions have been updated.
Important changes to clusters running unsupported versions
In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not
guaranteed to receive bug fixes or security updates, and the control plane may
become incompatible with nodes running unsupported versions.
For example, the Kubernetes v1.13.x control plane is not compatible with nodes
running v1.10.x. Clusters in such a configuration could become unreachable or
fail to run your workloads correctly. Additionally, security
patches are not applied to v1.10.x and below.
To keep your clusters operational and to protect Google's infrastructure, we
strongly recommend that you upgrade existing nodes to v1.11.x or higher before
the end of June 2019. At that time, Google will enable node auto-upgrade on node
pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the
control plane can be upgraded to v1.13.x and remain compatible with existing
node pools.
We strongly recommend leaving node auto-upgrade enabled.
NOTE: As of 1.12, all kubelets are issued certificates from the cluster CA, and
verification of kubelet certificates is enabled automatically if all node pools
are 1.12+. We have observed that introducing older (pre-1.12) node pools after
certificate verification has started may cause connection problems for kubectl
logs/exec/attach/port-forward commands, and should be avoided.
Versions no longer available for upgrades and new clusters
The following versions are no longer available for new clusters or cluster
upgrades:
1.11.8-gke.10
1.11.10-gke.4
1.12.7-gke.10
1.12.7-gke.21
1.12.7-gke.22
1.12.8-gke.6
1.12.8-gke.7
1.12.9-gke.3
1.13.6-gke.5
1.13.6-gke.6
1.13.7-gke.0
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
v1.11.x
1.11.10-gke.5
Fixed
This version contains a patch for recently discovered TCP
vulnerabilities in the Linux kernel. See the associated security bulletin for more information.
v1.12.x
1.12.7-gke.25
Fixed
This version contains a patch for recently discovered TCP
vulnerabilities in the Linux kernel. See the associated security bulletin for more information.
1.12.8-gke.10
Fixed
This version contains a patch for recently discovered TCP
vulnerabilities in the Linux kernel. See the associated security bulletin for more information.
1.12.9-gke.7
Fixed
This version contains a patch for recently discovered TCP
vulnerabilities in the Linux kernel. See the associated security bulletin for more information.
v1.13.x
1.13.6-gke.13
Fixed
This version contains a patch for recently discovered TCP
vulnerabilities in the Linux kernel. See the associated security bulletin for more information.
1.13.7-gke.8
Fixed
This version contains a patch for recently discovered TCP
vulnerabilities in the Linux kernel. See the associated security bulletin for more information.
Rapid channel
1.14.3-gke.9
Fixed
This version contains a patch for recently discovered TCP
vulnerabilities in the Linux kernel. See the associated security bulletin for more information.
Security bulletins
Fixed
Patched versions are now available to address TCP vulnerabilities in the Linux
kernel. For more information, see the security bulletin.
In accordance with the documented support policy, patches will not be applied to
GKE version 1.10 and older.
Issue
Kubernetes recently discovered a vulnerability in kubectl,
CVE-2019-11246. For more information, see the security bulletin.
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version → Upgrade version
1.11.9 → 1.12.7-gke.10
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
1.11.8-gke.6
1.11.9-gke.8
1.11.9-gke.13
1.14.2-gke.1 [Preview]
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
v1.11.x
No v1.11.x versions this week.
v1.12.x
v1.12.8-gke.7 includes the following changes:
Change
Improved Node Auto-Provisioning support for multi-zonal clusters with
GPUs.
Cloud Run 0.6
v1.13.x
v1.13.6-gke.6 includes the following changes:
Change
Improved Node Auto-Provisioning support for multi-zonal clusters with
GPUs.
Cloud Run 0.6
COS images now use the Nvidia GPU 418.67 driver. Nvidia drivers on COS
are now pre-compiled, greatly reducing driver installation time.
Issue
GKE nodes running Kubernetes v1.13.6 are affected by
CVE-2019-11245. Information about the impact and mitigation of this vulnerability
is available in thisKubernetes issue report.
In addition to security concerns, this bug can cause Pods that must run as
a specific UID to fail.
Rapid channel
Change
v1.14.1-gke.5 is the default for new Rapid channel clusters. This version
includes patched node images that addressCVE-2019-11245.
Issue
GKE nodes running Kubernetes v1.14.2 are affected by
CVE-2019-11245. Information about the impact and mitigation of this vulnerability
is available in thisKubernetes issue report.
In addition to security concerns, this bug can cause Pods that must run as
a specific UID to fail.
Security bulletin
Issue
GKE nodes running Kubernetes v1.13.6 and v1.14.2 are affected by
CVE-2019-11245. Information about the impact and mitigation of this vulnerability
is available in thisKubernetes issue report.
In addition to security concerns, this bug can cause Pods that must run as
a specific UID to fail.
Changes
Change
Currently, VPC-native is the default
for new clusters created with gcloud or the Google Cloud console. However,
VPC-native is not the default for new clusters created with the REST API.
Basic authentication and client certificate issuance are disabled by default
for clusters created with GKE 1.12 and higher. We recommend
switching your clusters to use OpenID instead. However, you can still enable
basic authentication and client certificate issuance manually.
This information was inadvertently omitted from the February 27, 2019 release note. However,
the documentation about cluster routing was updated.
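As a sketch of what re-enabling the legacy auth methods looked like at cluster creation, assuming the gcloud flag names of that era (the cluster name and zone are placeholders, and flags should be verified against your installed gcloud version):

```shell
# Hypothetical example: create a cluster with legacy auth re-enabled.
# --username enables basic authentication; --issue-client-certificate
# requests a client certificate. Both are disabled by default on 1.12+.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --username admin \
  --issue-client-certificate
```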
Change
The rollout dates for the May 28, 2019 releases are
incorrect. Day 2 spanned May 29-30, day 3 was May 31, and day 4 was June 3.
May 28, 2019
Version updates
GKE cluster versions have been updated.
Important changes to clusters running unsupported versions
In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not
guaranteed to receive bug fixes or security updates, and the control plane may
become incompatible with nodes running unsupported versions.
For example, the Kubernetes v1.13.x control plane is not compatible with nodes
running v1.10.x. Clusters in such a configuration could become unreachable or
fail to run your workloads correctly.
To keep your clusters operational and to protect Google's infrastructure, we
strongly recommend that you upgrade existing nodes to v1.11.x or higher before
the end of June 2019. At that time, Google will enable node auto-upgrade on node
pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the
control plane can be upgraded to v1.13.x and remain compatible with existing
node pools.
We strongly recommend leaving node auto-upgrade enabled.
Scheduled automatic upgrades
No new automatic upgrades this week; previously-announced automatic upgrades
may still be ongoing.
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
v1.11.x
v1.11.10-gke.4 includes the following changes:
Fixed
The node image for Container-Optimized OS (COS) is now cos-69-10895-242-0.
Node images have been updated to fix Microarchitectural Data Sampling
(MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.
The patch alone is not sufficient to mitigate exposure to this vulnerability.
For more information, see the security bulletin.
v1.12.x
v1.12.8-gke.6 includes the following changes:
Fixed
The node image for Container-Optimized OS (COS) is now cos-69-10895-242-0.
Node images have been updated to fix Microarchitectural Data Sampling
(MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.
The patch alone is not sufficient to mitigate exposure to this vulnerability.
For more information, see the security bulletin.
Rapid channel
v1.14.2-gke.2 is the default for new Rapid channel clusters, and includes
the following changes:
Feature
GKE Sandbox is supported on v1.14.x clusters running
v1.14.2-gke.2 or higher.
Node images have been updated to fix Microarchitectural Data Sampling
(MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.
The patch alone is not sufficient to mitigate exposure to this vulnerability.
For more information, see the security bulletin.
Nodes using these images are now Shielded VMs with the following properties:
Important changes to clusters running unsupported versions
In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not
guaranteed to receive bug fixes or security updates, and the control plane may
become incompatible with nodes running unsupported versions.
For example, the Kubernetes v1.13.x control plane is not compatible with nodes
running v1.10.x. Clusters in such a configuration could become unreachable or
fail to run your workloads correctly.
To keep your clusters operational and to protect Google's infrastructure, we
strongly recommend that you upgrade existing nodes to v1.11.x or higher before
the end of June 2019. At that time, Google will enable node auto-upgrade on node
pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the
control plane can be upgraded to v1.13.x and remain compatible with existing
node pools.
We strongly recommend leaving node auto-upgrade enabled.
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version
Upgrade version
1.10.x (nodes only, completing)
1.11.8-gke.6
1.12.6-gke.10
1.12.6-gke.11
1.14.1-gke.4 and older 1.14.x (Alpha)
1.14.1-gke.5 (Alpha)
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
Early access to test Windows Containers, expected in early June
May 13, 2019
Version updates
GKE cluster versions have been updated.
Important changes to clusters running unsupported versions
In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not
guaranteed to receive bug fixes or security updates, and the control plane may
become incompatible with nodes running unsupported versions.
For example, the Kubernetes v1.13.x control plane is not compatible with nodes
running v1.10.x. Clusters in such a configuration could become unreachable or
fail to run your workloads correctly.
To keep your clusters operational and to protect Google's infrastructure, we
strongly recommend that you upgrade existing nodes to v1.11.x or higher before
the end of June 2019. At that time, Google will enable node auto-upgrade on node
pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the
control plane can be upgraded to v1.13.x and remain compatible with existing
node pools.
We strongly recommend leaving node auto-upgrade enabled.
New default version
The default version for new clusters is now 1.12.7-gke.10
(previously 1.11.8-gke.6). If your cluster is using v1.12.6-gke.10, upgrade to
this version to avoid a potential issue that causes auto-repairing nodes to fail.
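The recommended move to the new default version can be done with gcloud; a minimal sketch, assuming a zonal cluster (cluster name and zone are placeholders):

```shell
# Upgrade the control plane first, then the node pools, to 1.12.7-gke.10.
gcloud container clusters upgrade my-cluster --zone us-central1-a \
  --master --cluster-version 1.12.7-gke.10
gcloud container clusters upgrade my-cluster --zone us-central1-a \
  --cluster-version 1.12.7-gke.10
```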
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version
Upgrade version
All 1.10.x versions, including v1.10.12-gke.14 (continuing after unpausing node auto-upgrade)
v1.11.8-gke.6
v1.11.x versions older than v1.11.8-gke.6
v1.11.8-gke.6
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
v1.11.x
v1.11.9-gke.13
Improvements to Vertical Pod Autoscaler
Improvements to Cluster Autoscaler
Cloud Run for GKE now uses the default Istio sidecar
injection behavior
Fix an issue that prevented the kubelet from seeing all GPUs available to
nodes using the Ubuntu node image.
Fix an issue that sets the dynamic maximum volume count to 16 if your
nodes use a custom machine type. The value is now set to 128.
v1.13.x
v1.13.5-gke.10
Upgrading to GKE v1.13.x
To prepare to upgrade your clusters, read the Kubernetes 1.13 release notes and the following information. You may need to modify your cluster before
upgrading.
Deprecated
scheduler.alpha.kubernetes.io/critical-pod is deprecated. To mark
Pods as critical, use Pod priority and preemption.
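The replacement mechanism can be sketched as follows; the class name and value below are illustrative, not GKE defaults:

```shell
# Define a PriorityClass, then reference it from the Pod spec instead of
# the deprecated scheduler.alpha.kubernetes.io/critical-pod annotation.
kubectl apply -f - <<EOF
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority        # illustrative name
value: 1000000               # higher values schedule first and can preempt
globalDefault: false
description: "Priority class for critical workloads."
EOF
```

Pods then opt in by setting spec.priorityClassName: high-priority.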
Deprecated
node.status.volumes.attached.devicePath is deprecated for Container
Storage Interface (CSI) volumes and will not be enabled in future
releases.
Deprecated
The built-in system:csi-external-provisioner and system:csi-external-attacher Roles are no longer automatically created.
You can create your own Roles and modify your Deployments to use them.
Deprecated
Support for CSI drivers using 0.3 and older versions of the CSI API is
deprecated. Users should upgrade CSI drivers to use the 1.0 API during the
deprecation period.
Issue
Kubernetes cannot distinguish between manually-provisioned zonal and
regional persistent disks with the same name. Ensure that persistent disks
have unique names across the Google Cloud project. This issue does not occur
when using dynamically provisioned persistent disks.
Issue
If kubelet fails to register a CSI driver, it does not make a second
attempt. To work around this issue, restart the CSI driver Pod.
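The workaround amounts to deleting the driver's Pod so that its controller recreates it and the kubelet retries registration; the namespace and label selector below are hypothetical and depend on how your CSI driver is deployed:

```shell
# Restart the CSI driver Pod to trigger a new registration attempt.
# Adjust the namespace and label selector to match your driver's manifests.
kubectl delete pod -n kube-system -l app=my-csi-driver
```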
Issue
After resizing a PersistentVolumeClaim (PVC), the PVC is sometimes left
with a spurious RESIZING condition when expansion has already completed.
The condition is spurious as long as the PVC's reported size is correct.
If the value of pvc.spec.capacity['storage'] matches pvc.status.capacity['storage'], the condition is spurious and you can
delete or ignore it.
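The comparison can be made with kubectl and JSONPath; my-pvc is a placeholder, and note that the requested size lives under spec.resources.requests.storage in the API:

```shell
# If these two values match, a lingering RESIZING condition is spurious.
kubectl get pvc my-pvc -o jsonpath='{.spec.resources.requests.storage}'
kubectl get pvc my-pvc -o jsonpath='{.status.capacity.storage}'
```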
Issue
The CSI driver-registrar external sidecar container v1.0.0 has a
known issue where it takes up to a minute to restart.
Change
DaemonSets now use scheduling features that require kubelet version 1.11 or
higher. Google will update kubelet to 1.11 before upgrading clusters to
v1.13.x.
Change
kubelets can no longer delete their own Node API objects.
Change
Use of the --node-labels flag to set labels under the kubernetes.io/ and k8s.io/ prefixes will be subject to restriction by the NodeRestriction
admission plugin in future releases. See the admission plugin documentation for the list of allowed labels.
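A sketch of what the restriction means for the kubelet flag; the label key example.com/team is illustrative:

```shell
# Labels outside the reserved prefixes remain freely settable by the kubelet:
kubelet --node-labels=example.com/team=storage ...

# Labels under kubernetes.io/ or k8s.io/ will be limited to an allowlist
# (for example kubernetes.io/hostname, kubernetes.io/os, kubernetes.io/arch);
# other keys under those prefixes will be rejected by NodeRestriction.
```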
You cannot yet create an alpha cluster running GKE
v1.14.x. If you attempt to use the --enable-kubernetes-alpha flag,
cluster creation fails.
Security bulletin
Issue
If you run untrusted code in your own multi-tenant services within
Google Kubernetes Engine, we recommend that you disable Hyper-Threading to mitigate
Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel. For
more information, see the security bulletin.
New features
Feature
With GKE 1.13.5-gke.10, GKE 1.13 is
now generally available for use in production. You can upgrade clusters
running older v1.13.x versions manually.
GKE v1.13.x has the following differences from Kubernetes
1.13:
We are introducing Release channels,
a new way to keep your GKE clusters up to date. The Rapid
release channel is available, and includes v1.14.1-gke.5 (alpha). You can sign up to try release channels and preview GKE v1.14.x.
Feature
GKE Sandbox (Beta) is now available for clusters running v1.12.7-gke.17 and higher and
v1.13.5-gke.15 and higher. You can use GKE Sandbox to
isolate untrusted workloads in a sandbox to protect your nodes, other
workloads, and cluster metadata from defective or malicious code.
Changes
Change
For clusters running v1.12.x or higher and using nodes with less than 1 GB
of memory, GKE reserves 255 MiB of memory. This is not a
new change, but it was not previously noted. For more details about node
resources, see Allocatable memory and CPU resources.
Masters only with auto-upgrade enabled will be upgraded as follows:
Current version
Upgrade version
All 1.10.x versions, including v1.10.12-gke.14 (continuing)
1.11.8-gke.6
1.13.4-gke.x
1.13.5-gke.10
Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
1.12.6-gke.11
Nodes continue to use Docker as the default runtime.
Fix a performance regression introduced in 1.12.6-gke.10. This regression
caused delays when the kubelet reads the /sys/fs/cgroup/memory/memory.stat file to determine a node's memory usage.
The following versions are no longer available for new clusters or cluster
upgrades:
1.11.9-gke.5
1.12.7-gke.7
1.13.4-gke.10
1.13.5-gke.7
Fixed issues
Fixed
A problem was fixed in the Stackdriver Kubernetes Monitoring (Beta) Metadata
agent. This problem caused the agent to generate unnecessary log messages.
Changes
Change
Alpha clustersrunning
Kubernetes 1.13 and higher created with the Google Cloud CLI version 242.0.0
and higher have auto-upgrade and auto-repair disabled. Previously, you were
required to disable these features manually.
Known issues
Issue
Under certain circumstances, Google-managed SSL certificates (Beta) are not
being provisioned in regional clusters. If this happens, you are unable to
create or update managed certificates. If you are experiencing this issue, contact Google Cloud support.
Issue
Node auto-upgrade is currently disabled. You can still upgrade node pools
manually.
We expect the following changes in the coming weeks.
This information is not a guarantee, but is provided to help you plan for
upcoming changes.
Node auto-upgrade will be re-enabled
etcd will be upgraded
Improvements to Vertical Pod Autoscaler
Improvements to Cluster Autoscaler
Improvements to Managed Certificates
April 26, 2019
Due to delays during theApril 22 GKE release rollout,
the release will not complete by April 26, 2019 as originally planned. Rollout
is expected to complete by April 29, 2019 GMT.
April 25, 2019
Changes
Change
Google Cloud Observability Kubernetes Monitoring users: Google Cloud Observability Kubernetes Monitoring logging label fields
change when you upgrade your GKE clusters to
GKE v1.12.6 or higher. The following changes were
effective the week of March 26, 2019:
Kubernetes Pod labels, currently
located in the metadata.userLabels field, are moved to the labels field in the LogEntry, and the label keys have a
prefix of k8s-pod/. The filter expressions in your sinks, logs-based metrics, log exclusions, or queries might
need to change.
Google Cloud Observability system labels that are in the metadata.systemLabels field are no longer available.
For detailed information about what changed, see the release guide for
Google Cloud Observability Beta Monitoring and Logging,
also known as Google Cloud Observability Kubernetes Monitoring (Beta).
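As an illustration of the filter migration described above, assuming a hypothetical sink named my-sink that matched a Pod label app=frontend:

```shell
# Before: a sink filter matched the Pod label via metadata.userLabels:
#   metadata.userLabels."app"="frontend"
# After the change, the same label lives under labels with a k8s-pod/ prefix:
gcloud logging sinks update my-sink \
  --log-filter='resource.type="k8s_container" AND labels."k8s-pod/app"="frontend"'
```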
April 22, 2019
Version updates
GKE cluster versions have been updated.
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version
Upgrade version
All 1.10.x versions, including v1.10.12-gke.14
1.11.8-gke.6
This roll-out will be phased across multiple weeks.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
Fix a performance regression introduced in v1.11.x node images older than 1.11.9-gke.8. This regression
caused delays when the kubelet reads the /sys/fs/cgroup/memory/memory.stat file to determine a node's memory usage.
Fix a performance regression introduced in v1.12.x node images older than v1.12.6-gke.10. This regression
caused delays when the kubelet reads the /sys/fs/cgroup/memory/memory.stat file to determine a node's memory usage.
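The file in question can be inspected directly on a node with the cgroup v1 layout these versions used:

```shell
# The kubelet derives a node's memory usage from this cgroup v1 file; on
# affected versions, reads of it were slow enough to delay stats collection.
grep -E '^total_(rss|cache|inactive_file)' /sys/fs/cgroup/memory/memory.stat
```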
The following versions are no longer available for new clusters or cluster
upgrades:
All 1.10.x versions, including v1.10.12-gke.14
Fixed issues
Fixed
A known issue in v1.12.6-gke.10 and older has been fixed in 1.12.7-gke.10.
This issue causes node auto-repair to fail. Upgrading is recommended.
Fixed
A known issue in 1.12.7-gke.7 and older has been fixed in 1.12.7-gke.10.
The currentMetrics field now reports the correct
value. The problem only affected reporting and did not impact the
functionality of Horizontal Pod Autoscaler.
Deprecations
GKE v1.10.x has been deprecated, and is no longer
available for new clusters, master upgrades, or node upgrades.
The Cluster.FIELDS.initial_node_count field has been deprecated
in favor of nodePool.initial_node_count in the v1 and v1beta1 GKE APIs.
We expect the following changes in the coming weeks.
This information is not a guarantee, but is provided to help you plan for
upcoming changes.
etcd will be upgraded
Improvements to Vertical Pod Autoscaler
Improvements to Cluster Autoscaler
Improvements to Managed Certificates
April 19, 2019
Change
You can now use Usage metering with GKE 1.12.x and 1.13.x clusters.
April 18, 2019
Feature
You can now run GKE clusters in region asia-northeast2 (Osaka, Japan) with zones asia-northeast2-a, asia-northeast2-b, and asia-northeast2-c.
The new region and zones will be included in future rollout schedules.
April 15, 2019
Version updates
GKE cluster versions have been updated.
New default version
The default version for new clusters has been updated to 1.11.8-gke.6
(previously 1.11.7-gke.12).
Scheduled automatic upgrades
Masters and nodes with auto-upgrade enabled will be upgraded:
Current version
Upgrade version
1.10.x versions 1.10.12-gke.13 and older
1.10.12-gke.14
1.11.x versions 1.11.8-gke.5 and older
1.11.8-gke.6
1.12.x versions 1.12.6-gke.9 and older
1.12.6-gke.10
1.13.x versions 1.13.4-gke.9 and older
1.13.4-gke.10 (Preview)
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
Cluster Autoscaler is now supported for GKE 1.13 clusters
Fix a problem that caused the currentMetrics field for Horizontal Pod
Autoscaler with AverageValue target to always report unknown. The
problem only affected reporting and did not impact the functionality of
Horizontal Pod Autoscaler.
The following versions are no longer available for new clusters or cluster
upgrades:
1.10.12-gke.7
1.10.12-gke.9
1.11.6-gke.11
1.11.6-gke.16
1.11.7-gke.12
1.11.7-gke.18
1.11.8-gke.2
1.11.8-gke.4
1.11.8-gke.5
1.12.5-gke.5
1.12.6-gke.7
1.13.4-gke.1
1.13.4-gke.5
Changes
Change
Improvements have been made to the automated rules for the add-on resizer.
It now uses 5 nodes as the inflection point.
Known issues
Issue
GKE 1.12.7-gke.7 and older, and 1.13.4-gke.10 and older have
a known issue where the currentMetrics field for
Horizontal Pod Autoscaler with AverageValue target always reports unknown. The
problem only affects reporting and does not impact the functionality of
Horizontal Pod Autoscaler.
This issue has already been fixed in GKE 1.13.5-gke.7.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
GKE 1.13.4-gke.1 is available for alpha clusters as a public preview. The preview period helps Google Cloud to improve the
quality of the final GA release, and allows you to test the new version
earlier.
To create a cluster using this version, use the following
command, replacing my-alpha-cluster with the name
of your cluster. Use the exact cluster version provided in the command. You can
add other configuration options, but do not change any of the ones below.
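A sketch of such a command, assuming the standard alpha-cluster flags of the period (the zone is a placeholder; verify flags against your gcloud version):

```shell
# Hypothetical reconstruction of the alpha-cluster creation command.
gcloud container clusters create my-alpha-cluster \
  --zone us-central1-a \
  --enable-kubernetes-alpha \
  --cluster-version 1.13.4-gke.1
```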
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.
Includes the fix for CVE-2019-1002100. For more information, see the security bulletin.
Known issues
GKE 1.13.4-gke.1 clusters may experience a previously-published known issue related to elevated
master error rates, if Namespaces exist with names longer than 44
characters. To work around the issue, use shorter Namespace names.
Cluster autoscaler is not operational in this GKE version.
The following versions are no longer available for new clusters or cluster
upgrades:
1.11.5-gke.5
1.11.6-gke.2
1.11.6-gke.3
1.11.6-gke.6
1.11.6-gke.8
1.11.7-gke.4
1.11.7-gke.6
Deprecated
GKE 1.12.5-gke.10 is no longer available for new clusters, master upgrades,
or node upgrades.
Last week, we began to make GKE 1.12.5-gke.10 unavailable for new clusters
or upgrades, due to increased error rates. That process completes this week.
If you have already upgraded to 1.12.5-gke.10 and are experiencing elevated
error rates, you can contact support.
Automated master and node upgrades
The following versions will be updated for masters and nodes with
auto-upgrade enabled. Automated upgrades are rolled out over multiple weeks to
ensure cluster stability.
1.11.6: Masters and nodes with auto-upgrade enabled that are using version
1.11.6-gke.10 or earlier will begin to be upgraded to 1.11.7-gke.12.
1.11.7: Masters and nodes with auto-upgrade enabled that are using version
1.11.7-gke.11 or earlier will begin to be upgraded to 1.11.7-gke.12.
We expect the following changes in the coming weeks.
This information is not a guarantee, but is provided to help you plan for
upcoming changes.
Nodes with auto-upgrade enabled and masters running 1.11.x will be upgraded to 1.11.7-gke.12
GKE 1.12.x masters will begin using the containerd runtime with an upcoming release.
March 14, 2019
Deprecated
GKE 1.12.5-gke.10 is no longer available for new clusters or master upgrades.
We have received reports of master nodes experiencing elevated error rates
when upgrading to version 1.12.5-gke.10 in all regions. Therefore, we have
begun the process of making it unavailable for new clusters or upgrades.
If you have already upgraded to 1.12.5-gke.10 and are experiencing elevated
error rates, you can contact support.
March 11, 2019
Feature
You can now run GKE clusters in region europe-west6 (Zürich, Switzerland) with zones europe-west6-a, europe-west6-b, and europe-west6-c.
The new region and zones will be included in future rollout schedules.
March 5, 2019
Version updates
GKE cluster versions have been updated.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for opt-in
master upgrades for existing clusters:
1.10.12-gke.7 - This version is being made available again after being
previously removed.
1.10.12-gke.9
1.11.7-gke.12
1.12.5-gke.10
Node image updates
Change
Container-Optimized OS with containerd image for GKE 1.11 clusters
The Container-Optimized OS with containerd node image has been upgraded from cos-69-10895-138-0-c115 to cos-69-10895-138-0-c116 for clusters running Kubernetes 1.11+.
Container-Optimized OS with containerd image for GKE 1.12 clusters
The Container-Optimized OS with containerd node image has been upgraded from cos-69-10895-138-0-c123 to cos-69-10895-138-0-c124 for clusters running Kubernetes 1.12.5-gke.10+ and alpha clusters running Kubernetes 1.13+.
cos-69-10895-138-0-c124 upgrades Docker to v18.09.0.
We expect the following changes in the coming weeks.
This information is not a guarantee, but is provided to help you plan for
upcoming changes.
Nodes with auto-upgrade enabled and masters running 1.11.x will be upgraded to 1.11.7-gke.12
February 27, 2019
GKE 1.12.5-gke.5 is generally available and includes Kubernetes 1.12. Kubernetes
1.12 provides faster auto-scaling, faster affinity scheduling, topology-aware
dynamic provisioning of storage, and advanced audit logging. For more
information, see Digging into Kubernetes 1.12 on the Google Cloud blog.
Version updates
GKE cluster versions have been updated.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for opt-in
master upgrades for existing clusters:
A known issue in GKE 1.12.5-gke.5 and all 1.11.x versions below 1.11.6 can
cause significant delays when the cluster autoscaler adds new nodes to
the cluster, if the cluster has hundreds of unschedulable Pods due to resource
starvation. It may require a few minutes before all Pods are
scheduled, depending on the number of unschedulable Pods and the size of the
cluster. The workaround is to add an adequate number of nodes manually. If
adding nodes does not resolve the issue,contact
support.
Issue
A known issue in GKE 1.12.5-gke.5 can cause unbounded memory usage. This is
caused by a memory leak in ReflectorMetricsProvider. See this issue for
further details. This will be fixed in an upcoming patch.
Issue
A known issue in GKE 1.12.5-gke.5 slows down or stops Pod scheduling in
clusters with large numbers of terminated Pods. See this issue for
further details. This will be fixed in an upcoming patch.
Coming soon
We expect the following changes in the coming weeks.
This information is not a guarantee, but is provided to help you plan for
upcoming changes.
Nodes with auto-upgrade enabled and masters running 1.10 will begin to be upgraded to 1.11
February 18, 2019
Version updates
GKE cluster versions have been updated.
New default version for new clusters
Kubernetes version 1.11.7-gke.4 is the default version for new clusters, available
according to this week's rollout schedule.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for opt-in
master upgrades for existing clusters:
1.11.7-gke.6
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
1.10.x
Node image updates
Change
The Container-Optimized OS node image has been upgraded from cos-69-10895-123-0 to cos-69-10895-138-0.
See the COS image release notes for more information.
GKE Ingress has been upgraded from v1.4.2 to v1.4.3 for clusters running 1.11.7-gke.6+. For details, see the detailed changelog and release notes.
Coming soon
We expect the following changes in the coming weeks.
This information is not a guarantee, but is provided to help you plan for
upcoming changes.
GKE 1.12 will be made generally available.
Nodes with auto-upgrade enabled and masters running 1.10 will begin to be upgraded to 1.11.7-gke.4.
February 11, 2019
Version updates
GKE cluster versions have been updated.
New versions available for upgrades and new clusters
The following Kubernetes versions will be available for new clusters and for opt-in
master upgrades of existing clusters this week according to the rollout schedule:
1.11.6-gke.11
1.11.7-gke.4
1.10.12-gke.7
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See these instructions to get a full list of the Kubernetes versions you can
run on your Kubernetes Engine masters and nodes.
For information about changes expected in the coming weeks, seeComing soon.
New default version for new clusters
GKE version 1.11.6-gke.2 is the default version for new clusters,
available according to this week's rollout schedule.
New versions available for upgrades and new clusters
The following Kubernetes Engine versions are available, according to this week's
rollout schedule, for new clusters and for opt-in master upgrades for existing
clusters:
1.11.6-gke.6
GKE Ingress controller update
Change
GKE Ingress has been upgraded from v1.4.1 to v1.4.2 for clusters running
1.11.6-gke.6+. For details, see the change log and the release notes.
Fixed Issues
Fixed
A bug in version 1.10.x and 1.11.x may lead to periodic persistent disk
commit latency spikes exceeding one second. This may trigger master
re-elections of GKE components and cause short (a few seconds) periods of
unavailability in the cluster control plane. The issue is fixed in version
1.11.6-gke.6.
We expect the following changes in the coming weeks.
This information is not a guarantee, but is provided to help you plan for
upcoming changes.
25% of the upgrades from 1.10 to 1.11.6-gke.2 will be complete.
Version 1.11.6-gke.8 will be made available.
Version 1.10 will be made unavailable.
January 21, 2019
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See these instructions to get a full list of the Kubernetes versions you can
run on your Kubernetes Engine masters and nodes.
For information about changes expected in the coming weeks, seeComing soon.
New default version for new clusters
Kubernetes version 1.10.11-gke.1 is the default version for new clusters,
available according to this week's rollout schedule.
New versions available for upgrades and new clusters
The following Kubernetes Engine versions are now available for new clusters and
for opt-in master upgrades for existing clusters:
1.10.12-gke.1
1.11.6-gke.3
The following versions are no longer available for new clusters or cluster
upgrades:
1.10.6-gke.13
1.10.7-gke.11
1.10.7-gke.13
1.10.9-gke.5
1.10.9-gke.7
1.11.2-gke.26
1.11.3-gke.24
1.11.4-gke.13
Scheduled master auto-upgrades
Cluster masters running 1.10.x will be upgraded to 1.10.11-gke.1.
Cluster masters running 1.11.2 through 1.11.4 will be upgraded to 1.11.5-gke.5.
Scheduled node auto-upgrades
Cluster nodes with auto-upgrade enabled will be upgraded:
1.10.x nodes with auto-upgrade enabled will be upgraded to 1.10.11-gke.1.
1.11.2 through 1.11.4 nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.5.
Changes
Change
GKE will not set --max-nodes-total, because --max-nodes-total is inaccurate when the cluster uses Flexible Pod CIDR ranges.
This will be gated in 1.11.7+.
We expect the following changes in the coming weeks.
This information is not a guarantee, but is provided to help you plan for
upcoming changes.
GKE 1.11.6-gke.6 will be available.
A new COS image will be available.
January 14, 2019
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See these instructions to get a full list of the Kubernetes versions you can
run on your Kubernetes Engine masters and nodes.
For information about changes expected in the coming weeks, seeComing soon.
New versions available for upgrades and new clusters
The following Kubernetes Engine versions are now available for new clusters and
for opt-in master upgrades for existing clusters:
1.10.12-gke.0
1.11.6-gke.0
1.11.6-gke.2
The following versions are no longer available for new clusters or cluster
upgrades:
1.11.2-gke.25
1.11.3-gke.23
1.11.4-gke.12
1.11.5-gke.4
Scheduled master auto-upgrades
Cluster masters running 1.9.x will be upgraded to 1.10.9-gke.5.
Cluster masters running 1.11.2-gke.25 will be upgraded to 1.11.2-gke.26.
Cluster masters running 1.11.3-gke.23 will be upgraded to 1.11.3-gke.24.
Cluster masters running 1.11.4-gke.12 will be upgraded to 1.11.4-gke.13.
Cluster masters running 1.11.5-gke.4 will be upgraded to 1.11.5-gke.5.
Scheduled node auto-upgrades
Cluster nodes with auto-upgrade enabled will be upgraded:
1.9.x nodes with auto-upgrade enabled will be upgraded to 1.10.9-gke.5.
1.11.2-gke.25 nodes with auto-upgrade enabled will be upgraded to 1.11.2-gke.26.
1.11.3-gke.23 nodes with auto-upgrade enabled will be upgraded to 1.11.3-gke.24.
1.11.4-gke.12 nodes with auto-upgrade enabled will be upgraded to 1.11.4-gke.13.
1.11.5-gke.4 nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.5.
GKE Ingress controller update
The GKE Ingress controller has been upgraded from v1.4.0 to v1.4.1 for clusters
running 1.11.6-gke.2+. For details, see the change log and the release notes.
Fixed Issues
Fixed
If you use Stackdriver Kubernetes Monitoring Beta with structured JSON
logging, an issue with the parsing of structured JSON log entries was
introduced in GKE v1.11.4-gke.12. See the release guide for Stackdriver Kubernetes Monitoring.
This is fixed by upgrading your cluster:
1.11.6-gke.2
Fixed
Users of GKE 1.11.2.x, 1.11.3-gke.18, 1.11.4-gke.8, or 1.11.5-gke.2
on clusters that use Calico network policies may experience failures due to
a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This is fixed by the automatic upgrades to masters and nodes
that have auto-upgrade enabled.
Fixed
A problem in Endpoints API object validation could prevent updates during an
upgrade, leading to stale network information for Services. Symptoms of
the problem include failed health checks with a 502 status code
or a message such as Forbidden: Cannot change NodeName. This is
fixed by the automatic upgrades to masters and nodes that have auto-upgrade
enabled.
We expect the following changes in the coming weeks.
This information is not a guarantee, but is provided to help you plan for
upcoming changes.
All GKE 1.10.x masters will be upgraded to the latest 1.10 version.
All GKE 1.11.0 through 1.11.4 masters will be upgraded to the latest 1.11.5 version.
January 8, 2019
The rollout beginning January 8, 2019 has been paused after two days. This is
being done as a caution, so that we can investigate an issue that will be fixed
in next week's rollout. This is not a bug in any GKE version currently available
or planned to be made available.
December 17, 2018
Version updates
GKE cluster versions have been updated.
For information about changes expected in the coming weeks, see Coming soon.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for opt-in
master upgrades for existing clusters:
1.11.2-gke.26
1.11.3-gke.24
1.11.4-gke.13
1.11.5-gke.5
The following versions are no longer available for new clusters or cluster
upgrades:
1.11.2-gke.18
1.11.2-gke.20
1.11.3-gke.18
1.11.4-gke.8
Scheduled master auto-upgrades
Remaining cluster masters running GKE 1.9.x will be upgraded to GKE
1.10.9-gke.5 in January 2019.
Scheduled node auto-upgrades
Cluster nodes with auto-upgrade enabled will be upgraded:
1.11.2-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.2-gke.25
1.11.3-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.3-gke.23
1.11.4-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.4-gke.12
1.11.5-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.4
Fixed Issues
Fixed
Users upgrading to GKE 1.11.2.x, 1.11.3-gke.18, 1.11.4-gke.8, or 1.11.5-gke.2
on clusters that use Calico network policies may experience failures due to
a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This problem does not affect newly-created clusters. This is fixed
by upgrading your cluster to one of the following versions:
1.11.2-gke.25
1.11.3-gke.23
1.11.4-gke.12
1.11.5-gke.4
Fixed
A problem in Endpoints API object validation could prevent updates during an
upgrade, leading to stale network information for Services. Symptoms of
the problem include failed health checks with a 502 status code
or a message such as Forbidden: Cannot change NodeName. If you
encounter this problem, upgrade your cluster to one of the following versions:
1.11.2-gke.26
1.11.3-gke.24
1.11.4-gke.13
1.11.5-gke.5
This problem can also affect earlier versions of GKE, but the fix is not
yet available for those versions. If you are running an earlier version and
encounter this issue,contact support.
We expect the following changes in the coming weeks.
This information is not a guarantee, but is provided to help you plan for
upcoming changes.
Remaining GKE 1.9.x masters are expected to be upgraded in January 2019.
December 10, 2018
Version updates
GKE cluster versions have been updated.
For information about changes expected in the coming weeks, see Coming soon.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for opt-in
master upgrades for existing clusters:
1.10.11-gke.1
1.11.2-gke.25
1.11.3-gke.23
1.11.4-gke.12
1.11.5-gke.4
The following versions are no longer available for new clusters or cluster
upgrades:
1.9.x
1.10.6-gke.11
Scheduled master auto-upgrades
We will begin upgrading cluster masters running GKE 1.9.x to GKE 1.10.9-gke.5.
The upgrade will be completed in January 2019.
Scheduled node auto-upgrades
Cluster nodes with auto-upgrade enabled will be upgraded:
1.11.2-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.2-gke.25
1.11.3-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.3-gke.23
1.11.4-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.4-gke.12
1.11.5-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.4
Node image updates
Change
Container-Optimized OS node image has been upgraded to cos-stable-69-10895-91-0 for clusters running Kubernetes 1.11.2, Kubernetes 1.11.3, Kubernetes 1.11.4,
and Kubernetes 1.11.5.
Fixed
Users upgrading to GKE 1.11.3 on clusters that use Calico network policies
may experience failures due to a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This problem
does not affect newly-created clusters. This is fixed by upgrading your
GKE 1.11.3 clusters to 1.11.3-gke.23.
Fixed
Users modifying or upgrading existing GKE 1.11.x clusters that use Alias
IP may experience network failures due to a mismatch between the new
IP range assigned to the Pods and the alias IP address range for the nodes.
This is fixed by upgrading your GKE 1.11.x clusters to one of the following
versions:
1.11.2-gke.25
1.11.3-gke.23
1.11.4-gke.12
1.11.5-gke.4
Changes
Change
Node Problem Detector (NPD) has been upgraded from 0.5.0 to 0.6.0 for
clusters running GKE 1.10.11-gke.1+ and 1.11.5-gke.1+. For details, see the upstream pull request.
Known Issues
Issue
In GKE v1.11.4-gke.12 and later, if you use Stackdriver Kubernetes
Monitoring Beta with structured JSON logging, there is an issue with the
parsing of structured JSON log entries. As a workaround, you can downgrade
to GKE 1.11.3. For more information, see the release guide for Stackdriver Kubernetes Monitoring.
The following warning is now displayed to SSH clients that connect to
Nodes using SSH or to run remote commands on Nodes over an SSH connection:
WARNING: Any changes on the boot disk of the node must be made via
DaemonSet in order to preserve them across node (re)creations.
Node will be (re)created during manual-upgrade, auto-upgrade,
auto-repair or auto-scaling.
You can now drain node pools and delete Nodes in parallel.
GKE data in Cloud Asset Inventory and Search is now available in
near-real-time. Previously, data was dumped at 6-hour intervals.
Fixed Issues
Fixed
When upgrading to GKE 1.11.x versions prior to GKE 1.11.4-gke.8, a problem
with provisioning the ExternalIP on one or more Nodes causes the kubectl command to fail. The following error is logged in the kube-apiserver log:
Failed to getAddresses: no preferred addresses found; known addresses
This issue is fixed in GKE 1.11.4-gke.8. If you can't upgrade to that
version, you can work around this issue by following these steps:
Determine which Nodes have no ExternalIP set:
kubectl get nodes -o wide
Look for entries where the last column is <none>.
Restart affected nodes.
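The check in the first step can also be scripted rather than inspected by eye. The following is a minimal sketch, assuming kubectl access to the affected cluster; it prints only the names of Nodes that report no ExternalIP address:

```shell
# List each node name followed by its ExternalIP (if any), then keep
# only the lines where no address was printed (a sketch; requires a
# kubeconfig pointing at the affected cluster).
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}' \
  | awk 'NF==1 {print $1}'
```

Any node printed by this pipeline is a candidate for the restart described in the last step.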
Known Issues
Issue
Users upgrading to GKE 1.11.3 on clusters that use Calico network policies
may experience failures due to a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This problem does not
affect newly-created clusters. This is expected to be fixed in the coming
weeks.
To work around this problem, you can create the BGPConfigurations.crd.projectcalico.org resource manually:
Copy the following script into a file named bgp.yaml:
Apply the change to the affected cluster using the following command:
kubectl apply -f bgp.yaml
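After applying the file, you can confirm that the CustomResourceDefinition now exists. This is a minimal check, assuming kubectl access to the affected cluster:

```shell
# Confirm the Calico CRD was recreated; a successful fix returns the
# CRD name and its creation timestamp instead of a NotFound error.
kubectl get crd bgpconfigurations.crd.projectcalico.org
```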
Issue
Users modifying or upgrading existing GKE 1.11.x clusters that use Alias
IP may experience network failures due to a mismatch between the new
IP range assigned to the Pods and the alias IP address range for the nodes.
This is expected to be fixed in the coming weeks.
To work around this problem, follow these steps. Use the name of your node
in place of [NODE_NAME], and use your cluster's
zone in place of [ZONE].
When upgrading to GKE 1.11.x versions prior to GKE 1.11.4-gke.8, a problem
with provisioning the ExternalIP on one or more Nodes causes some kubectl commands to fail. The following error is logged in the kube-apiserver log:
Failed to getAddresses: no preferred addresses found; known addresses
You can work around this issue by following these steps:
Vertical Pod Autoscaler (beta)
is now available on 1.11.3-gke.11 and higher.
November 12, 2018
Version updates
GKE cluster versions have been updated.
New default version for new clusters
Kubernetes version 1.9.7-gke.11 is the default version for new clusters, available
according to this week's rollout schedule.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for opt-in
master upgrades for existing clusters:
1.9.7-gke.11
1.10.6-gke.11
1.10.7-gke.11
1.10.9-gke.5
1.11.2-gke.18
Scheduled master auto-upgrades
Cluster masters will be auto-upgraded as described below:
All clusters running 1.9.7 will be upgraded to 1.9.7-gke.11
All clusters running 1.10.6 will be upgraded to 1.10.6-gke.11
All clusters running 1.10.7 will be upgraded to 1.10.7-gke.11
All clusters running 1.10.9 will be upgraded to 1.10.9-gke.5
All clusters running 1.11.2 will be upgraded to 1.11.2-gke.18
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
1.9.7-gke.7
1.10.6-gke.9
1.10.7-gke.9
1.10.9-gke.3
1.11.2-gke.15
Known Issues
Issue
When upgrading to GKE 1.11.x versions prior to GKE 1.11.4-gke.8, a problem
with provisioning the ExternalIP on one or more Nodes causes some kubectl commands to fail. The following error is logged in the kube-apiserver log:
Failed to getAddresses: no preferred addresses found; known addresses
You can work around this issue by following these steps:
Determine which Nodes have no ExternalIP set:
kubectl get nodes -o wide
Look for entries where the last column is <none>.
Restart affected nodes.
Other Updates
Change
Patch 2 for Tigera Technical Advisory TTA-2018-001. See the security bulletin for further details.
Kubernetes version 1.9.7-gke.7 is the default version for new clusters, available
according to this week's rollout schedule.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for opt-in
master upgrades for existing clusters:
1.9.7-gke.7
1.10.6-gke.9
1.10.7-gke.9
1.10.9-gke.3
1.11.2-gke.15
Scheduled master auto-upgrades
Cluster masters will be auto-upgraded as described below:
All clusters running 1.9.x will be upgraded to 1.9.7-gke.7
All clusters running 1.10.6 will be upgraded to 1.10.6-gke.9
All clusters running 1.10.7 will be upgraded to 1.10.7-gke.9
All clusters running 1.10.9 will be upgraded to 1.10.9-gke.3
All clusters running 1.11.2 will be upgraded to 1.11.2-gke.15
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
1.9.7-gke.6
1.10.6-gke.6
1.10.7-gke.6
1.10.9-gke.0
1.11.2-gke.9
Other Updates
Change
Patch 1 for Tigera Technical Advisory TTA-2018-001. See the security bulletin for further details. The November 12th release contains additional fixes that
address TTA-2018-001 and we recommend customers upgrade to that release.
GKE cluster versions have been updated as detailed in the
following sections. See supported
versions for a full list of the Kubernetes versions you can run on your
GKE masters and nodes.
New versions available for upgrades and new clusters
GKE 1.11.2-gke.9 is now generally available.
You can now select Container-Optimized OS with containerd images when creating, modifying, or upgrading a cluster to GKE v1.11.
Visit Using Container-Optimized OS with containerd for details.
The CustomResourceDefinition API supports a versions list
field (and deprecates the previous singular version field)
that you can use to support multiple versions of custom resources you
have developed, to indicate the stability of a given custom resource.
All versions must currently use the same schema, so if you need to add
a field, you must add it to all versions. Currently, versions only
indicate the stability of your custom resource, and do not allow for
any difference in functionality among versions. For more
information, visit Versions of CustomResourceDefinitions.
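As a sketch of what the new field looks like, the following minimal CRD declares two versions of a hypothetical CronTab resource (the group, kind, and version names here are illustrative, not from GKE documentation; in 1.11 both versions must share one schema):

```shell
# Apply a minimal CRD with a versions list (v1 is the storage version).
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com
spec:
  group: example.com
  names:
    kind: CronTab
    plural: crontabs
  scope: Namespaced
  versions:
  - name: v1beta1     # still served for older clients
    served: true
    storage: false
  - name: v1          # the version persisted in etcd
    served: true
    storage: true
EOF
```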
Kubernetes 1.11 introduces beta support for increasing the size of an
existing PersistentVolume. To increase the size of a PersistentVolume,
edit the PersistentVolumeClaim (PVC) object. Kubernetes expands the
file system automatically.
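A resize request can then look like the following sketch, where my-claim is a hypothetical PVC whose StorageClass has allowVolumeExpansion enabled:

```shell
# Raise the PVC's storage request; Kubernetes expands the underlying
# PersistentVolume and grows the file system automatically.
kubectl patch pvc my-claim \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```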
Kubernetes 1.11 also includes alpha support for expanding an online
PersistentVolume (one which is in use by a running deployment). To
test this feature, use an alpha cluster.
Subresources allow you to add capabilities to custom resources. You
can enable /status and /scale REST endpoints for a
given custom resource. You can access these endpoints to view or modify
the behavior of the custom resource, using PUT, POST, or PATCH requests. Visit Subresources for details.
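For instance, once the /scale subresource is enabled on a custom resource, standard scaling tooling works against it. This sketch assumes a hypothetical CronTab custom resource named my-crontab with /scale configured:

```shell
# Scale the custom resource through its /scale endpoint
kubectl scale crontabs/my-crontab --replicas=3

# Confirm the replica count recorded on the object
kubectl get crontabs my-crontab -o jsonpath='{.spec.replicas}'
```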
Also, 1.10.9-gke.0 is available.
Scheduled master auto-upgrades
Cluster masters running GKE 1.10.6 will be upgraded to 1.10.6-gke.6.
Cluster masters running GKE 1.10.7 will be upgraded to 1.10.7-gke.6.
Fixed Issues
Fixed
GKE 1.10.7-gke.6 and 1.11.2-gke.9 fix an issue that is present in GKE
1.10.6-gke.2 and higher and 1.11.2-gke.4 and higher, where
master component logs are missing from Stackdriver Logging.
Other Updates
Container-Optimized OS node image has been upgraded to cos-beta-69-10895-52-0
for clusters running Kubernetes 1.11.2-gke.9, 1.10.9-gke.0, or 1.10.7-gke.6. See COS image release notes for more information.
Cluster templates are now available when creating new GKE clusters in
Google Cloud console.
Changes
Change
The kubectl command on new nodes has been upgraded from version
1.9 to 1.10. The kubectl version is always one version behind the
highest GKE version, to ensure compatibility with all supported versions.
Known Issues
Issue
In GKE 1.10.6-gke.2 and higher and 1.11.2-gke.4 and higher,
master component logs are missing from Stackdriver Logging.
This is due to an issue in the version of fluentd used in those
versions of GKE.
Update:This issue is fixed in GKE 1.10.7-gke.6 and 1.11.2-gke.9, available
from October 30, 2018.
October 22, 2018
Fixed
Fixed
Kubernetes 1.11.0+:Fixes a bug in kubeDNS where hostnames in SRV records were being incorrectly compressed.
Version updates
GKE cluster versions have been updated.
Scheduled master auto-upgrades
20% of cluster masters running Kubernetes versions 1.10.6-gke.x will be updated to Kubernetes 1.10.6-gke.6, according to this week's rollout schedule.
20% of cluster masters running Kubernetes versions 1.10.7-gke.x will be updated to Kubernetes 1.10.7-gke.6, according to this week's rollout schedule.
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
The problem is fixed in GKE v1.10.7 and higher. However, it cannot be fixed
in GKE v1.10.6. If your cluster uses Ingress, do not upgrade to v1.10.6.
Do not use GKE v1.10.6 for new clusters. If your cluster does not use
Ingress for load balancing and you cannot upgrade to GKE v1.10.7 or higher,
you can still use GKE v1.10.6.
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See supported versions for a full
list of the Kubernetes versions you can run on your Kubernetes Engine masters
and nodes.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for
opt-in master upgrades for existing clusters:
1.10.6-gke.6
1.10.7-gke.6
1.11.2-gke.9 as EAP
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
1.10.6-gke.4
1.10.7-gke.2
Node image updates
Change
Container-Optimized OS node image cos-dev-69-10895-23-0 is now
available. See COS image release notes for more information.
Change
Container-Optimized OS with containerd node image cos-b-69-10895-52-0-c110 is now available. See COS image release notes for more information.
1.10.7-gke.1 fixes an issue where preempted GPU Pods would restart without
proper GPU libraries.
August 20, 2018
Version updates
GKE cluster versions have been updated.
New versions available for upgrades and new clusters
1.11.2-gke.3 (preview)
1.10.6-gke.2
1.9.7-gke.6
Scheduled master auto-upgrades
Auto-upgrades of Kubernetes
1.8.x clusters to 1.9.7-gke.5 continue for the second week. You can always
upgrade your Kubernetes 1.8 masters manually.
Node image updates
Change
Container-Optimized OS node image has been upgraded from cos-stable-66-10452-109-0 to cos-dev-69-10895-23-0 for clusters running Kubernetes 1.10.6-gke.2 and Kubernetes 1.11.2-gke.3.
See COS imagerelease notesfor more information.
Container-Optimized OS node image has been upgraded from cos-stable-65-10323-98-0-p2 to cos-stable-65-10323-99-0-p2 for clusters running Kubernetes 1.9.7-gke.6.
See COS imagerelease notesfor more information.
GCE-Ingress has been upgraded to version 1.3.0. HTTP2 support for Ingress is promoted to Beta.
Private endpoints are promoted to Beta, for customers using private clusters.
At cluster creation time, customers can now choose to use the Kubernetes
master's private IP address as their API server endpoint.
Fixes
This week's releases address an L1 Terminal Fault vulnerability.
Customers running containers from different customers on the same GKE Node, as
well as customers using COS images, should prioritize updating those
environments.
August 13, 2018
Version updates
GKE cluster versions have been updated.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for opt-in
master upgrades for existing clusters:
Kubernetes 1.11.2-gke.2 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.11.0-gke.1 in Alpha Clusters.
1.10.6-gke.1
Scheduled master auto-upgrades
10% of cluster masters running Kubernetes versions 1.8.x will be updated to Kubernetes 1.9.7-gke.5, according to this week's rollout schedule.
Cluster masters running Kubernetes versions 1.9.x will be updated to Kubernetes 1.9.7-gke.5, according to this week's rollout schedule.
Cluster masters running Kubernetes versions 1.10.x will be updated to Kubernetes 1.10.6-gke.1, according to this week's rollout schedule.
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
Containerd integration on the Container-Optimized OS (COS) image is now beta. You can now create a cluster or a node pool with image type cos_containerd. Refer to Container-Optimized OS with containerd for details.
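Creating such a node pool might look like this sketch (my-cluster and my-pool are hypothetical names):

```shell
# Create a node pool whose nodes run the containerd variant of COS
gcloud container node-pools create my-pool \
    --cluster my-cluster \
    --image-type cos_containerd
```

The same --image-type flag applies when creating a whole cluster with gcloud container clusters create.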
Kubernetes Engine cluster versions have been updated as detailed in the following sections. See supported versions for a full list of the
Kubernetes versions you can run on your Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and opt-in
master upgrades for existing clusters:
Kubernetes 1.9.7-gke.5 is now generally available for use with
Kubernetes Engine clusters.
New default version for new clusters
Kubernetes version 1.9.7-gke.5 is the default version for new
clusters, available according to this week's rollout schedule.
Scheduled master auto-upgrades
Change
Cluster masters running Kubernetes version 1.8.10-gke.0 will be updated
to Kubernetes 1.8.10-gke.2, according to this week's rollout schedule.
Change
Cluster masters running Kubernetes versions 1.8.12-gke.1 and 1.8.12-gke.2 will be updated to Kubernetes 1.8.12-gke.3,
according to this week's rollout schedule.
Change
Cluster masters running Kubernetes version 1.9.6-gke.1 will be updated
to Kubernetes 1.9.6-gke.2, according to this week's rollout schedule.
Change
Cluster masters running Kubernetes versions 1.9.7-gke.0, 1.9.7-gke.1, 1.9.7-gke.3, and 1.9.7-gke.4 will be updated to Kubernetes 1.9.7-gke.5,
according to this week's rollout schedule.
Change
Cluster masters running Kubernetes versions 1.10.2-gke.0, 1.10.2-gke.1, and 1.10.2-gke.3 will be updated to Kubernetes 1.10.2-gke.4, according to this week's rollout schedule.
Change
Cluster masters running Kubernetes versions 1.10.4-gke.0 and 1.10.4-gke.2 will be updated to Kubernetes 1.10.4-gke.3,
according to this week's rollout schedule.
Change
Cluster masters running Kubernetes versions 1.10.5-gke.0 and 1.10.5-gke.3 will be updated to Kubernetes 1.10.5-gke.4,
according to this week's rollout schedule.
A patch for Kubernetes vulnerability CVE-2018-5390 is now available according to this week's rollout schedule. We recommend that you manually upgrade your nodes as soon as the patch becomes available in your cluster's zone.
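A manual node upgrade can be triggered along these lines (my-cluster, default-pool, and the zone are hypothetical names):

```shell
# Upgrade the nodes in one node pool to the cluster master's version
gcloud container clusters upgrade my-cluster \
    --node-pool default-pool \
    --zone us-central1-a
```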
August 3, 2018
New Features
In a future release, all newly-created Google Kubernetes Engine
clusters will be VPC-native by default.
July 30, 2018
Version updates
GKE cluster versions have been updated.
Kubernetes 1.10.5-gke.3 is now generally available for use with
Google Kubernetes Engine clusters.
July 12, 2018
New Features
Feature
Cloud TPU is now
available with GKE in Beta.
Run your machine learning workload in a Kubernetes cluster on
Google Cloud, and let GKE manage and scale the
Cloud TPU resources for you.
Version updates
GKE cluster versions have been updated.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and opt-in
master upgrades for existing clusters:
Kubernetes 1.8.12-gke.2 is now generally available for use with
Google Kubernetes Engine clusters.
Kubernetes 1.9.7-gke.4 is now generally available for use with
Google Kubernetes Engine clusters.
Kubernetes 1.10.5-gke.2 is now generally available for use with
Google Kubernetes Engine clusters.
Kubernetes 1.11.0-gke.1 clusters are now available for whitelisted
early-access users. Non-whitelisted users can specify version 1.11.0-gke.1 in Alpha Clusters.
Issue
Enabling or disabling network policy on existing 1.11 clusters may not work
properly.
Scheduled master auto-upgrades
Cluster masters running Kubernetes versions 1.8 will be updated to Kubernetes
1.8.10-gke.0 according to this week's rollout schedule.
You can now run GKE clusters in region us-west2 (Los Angeles) with zones us-west2-a, us-west2-b, and us-west2-c.
June 28, 2018
Version Updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning and
upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
The following versions are now available according to this week's rollout
schedule:
Feature
Kubernetes 1.10.5-gke.0 is now generally available for use
with GKE clusters.
New default version for new clusters
Kubernetes version 1.9.7-gke.3 is the default version for new clusters,
available according to this week's rollout schedule.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and opt-
in master upgrades for existing clusters:
1.10.5-gke.0
Scheduled master auto-upgrades
Cluster masters running Kubernetes versions older than 1.8.10-gke.0 will be
updated to Kubernetes 1.8.10-gke.0 according to this week's rollout schedule.
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
Currently, OS Login is not fully compatible with Google Kubernetes Engine clusters
running Kubernetes version 1.10.x. The following functionalities of kubectl
might not work properly when OS Login is enabled: kubectl logs, proxy, exec,
attach, and port-forward. Until OS Login is fully supported, project-level
OS Login settings are ignored by Kubernetes Engine nodes.
June 18, 2018
Version Updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning and
upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
The following versions are now available according to this week's rollout
schedule:
Feature
Kubernetes 1.10.4-gke.2 is now generally available for use
with GKE clusters.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and opt-
in master upgrades for existing clusters:
1.10.4-gke.2
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning and
upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
The following versions are now available according to this week's rollout
schedule:
Feature
Kubernetes 1.10.4-gke.0 is now generally available for use
with GKE clusters.
The base image for this version is cos-stable-66-10452-101-0,
which contains a fix for an issue that causes deadlock in the Linux kernel.
New Features
Feature
You can now run GKE clusters in region europe-north1 (Finland) with zones europe-north1-a, europe-north1-b, and europe-north1-c.
Refer to the rollout schedule below for the specific rollout dates in each
zone.
The rollout of the release has been delayed. Refer to the revised rollout
schedule below.
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning and
upgrades for a full list
of the Kubernetes versions you can run on your Kubernetes Engine masters and
nodes.
New versions available for upgrades and new clusters
Feature
Clusters running Kubernetes 1.9.0 - 1.9.6-gke.0 that have opted into automatic node upgrades will be upgraded to Kubernetes 1.9.6-gke.1 according to this week's
rollout schedule.
Feature
Kubernetes 1.10.2-gke.1 is now generally available for use
with Google Kubernetes Engine clusters.
Feature
Kubernetes 1.9.7-gke.1 is now generally available for use
with Google Kubernetes Engine clusters.
Feature
Kubernetes 1.8.12-gke.1 is now generally available for use
with Google Kubernetes Engine clusters.
New default version for new clusters
The following versions are now default according to this week's rollout
schedule:
Change
Kubernetes 1.8.10-gke.0 is now the default version for new clusters.
These images contain a fix for Linux kernel CVE-2018-1000199. Refer to USN-3641-1 for more information.
May 7, 2018
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning and
upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
Scheduled master auto-upgrades
Change
100% of cluster masters running Kubernetes versions 1.7.0 and 1.7.12-gke.2 will be updated to Kubernetes
1.8.8-gke.0, according to this week's rollout schedule.
Change
100% of cluster masters running Kubernetes versions 1.7.14-gke.1 and 1.7.15-gke.0 will be updated to Kubernetes
1.8.10-gke.0, according to this week's rollout schedule.
Change
100% of cluster masters running Kubernetes versions 1.9.x will be updated to Kubernetes
1.9.6, according to this week's rollout schedule.
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
The Kubernetes Dashboard in version 1.8.8-gke.0 isn't compatible with nodes running versions
1.7.13 through 1.7.15.
May 1, 2018
Known Issues
Issue
In Kubernetes versions 1.9.7, 1.10.0, and 1.10.2, if an NVIDIA GPU
device plugin restarts but the associated kubelet does not, then the node
allocatable for the GPU resource nvidia.com/gpu stays zero
until the kubelet restarts. This prevents new pods from consuming GPU
devices.
The most likely scenario when this problem occurs is after a cluster is
created or upgraded with Kubernetes 1.9.7, 1.10.0, or 1.10.2 and the cluster
master is upgraded to a new version, which triggers an NVIDIA GPU device
plugin DaemonSet upgrade. The DaemonSet upgrade causes the NVIDIA GPU device
plugin to restart itself.
If you use the GPU feature, do not create or upgrade your cluster with
Kubernetes 1.9.7, 1.10.0, or 1.10.2. This issue will be addressed in an
upcoming release.
April 30, 2018
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning and
upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
The following versions are now available according to this week's rollout
schedule:
Feature
Kubernetes 1.8.12-gke.0 is now generally available for use
with Google Kubernetes Engine clusters.
Feature
Kubernetes 1.9.7-gke.0 is now generally available for use
with Google Kubernetes Engine clusters.
Feature
Kubernetes 1.10.2-gke.0 clusters are now available for whitelisted
early-access users. Non-whitelisted users can specify version 1.10.2-gke.0
in Alpha Clusters.
Scheduled master auto-upgrades
Change
100% of cluster masters running Kubernetes versions 1.7.x will be updated to Kubernetes
1.8.8-gke.0, according to this week's rollout schedule.
The base image has been changed to cos-stable-65-10323-75-0-p for clusters running Kubernetes 1.8.12-gke.0.
Change
The base image has been changed to cos-stable-65-10323-75-0-p2 for clusters running Kubernetes 1.9.7-gke.0.
Change
The base image has been changed to cos-stable-66-10452-74-0 for clusters running Kubernetes 1.10.2-gke.0.
April 24, 2018
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning and
upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
Scheduled master auto-upgrades
10% of cluster masters running Kubernetes versions 1.7.x will be updated to Kubernetes
1.8.8-gke.0, according to this week's rollout schedule.
Cluster masters running Kubernetes versions 1.8.x will be updated to
Kubernetes 1.8.8-gke.0, according to this week's rollout schedule.
Cluster masters running Kubernetes versions 1.9.x will be updated to
Kubernetes 1.9.3-gke.0, according to this week's rollout schedule.
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes 1.9.6-gke.0
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
The following versions are now available according to this week's rollout
schedule:
Feature
Kubernetes 1.9.6-gke.1 is now generally available for use
with Google Kubernetes Engine clusters.
Feature
Kubernetes 1.10.0-gke.0 clusters are now available for whitelisted
early-access users. Non-whitelisted users can specify version 1.10.0-gke.0
in Alpha Clusters.
Scheduled master auto-upgrades
Cluster masters running Kubernetes versions 1.7.x will be updated to Kubernetes
1.7.12-gke.2, according to this week's rollout schedule.
Versions no longer available
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes 1.7.12-gke.1
Other Updates
Container-Optimized OS node image has been upgraded to cos-stable-65-10323-69-0-p2 for clusters running Kubernetes 1.9.6-gke.1. See the COS image release notes for more information.
Container-Optimized OS node image is using cos-beta-66-10452-28-0 for clusters running Kubernetes 1.10.0-gke.0. See the COS image release notes for more information.
In ubuntu-gke-1604-xenial-v20180207-1,
used by Kubernetes 1.9.3-gke.0 and 1.9.4-gke.1, new pods could not be
scheduled to nodes where Docker was restarted.
Container-Optimized OS node image has been upgraded to cos-beta-65-10323-12-0 for clusters running Kubernetes 1.7.15-gke.0. See the COS image release notes for more information.
March 27, 2018
New versions available for upgrades and new clusters
The following versions are now available according to this week's rollout
schedule:
Feature
Kubernetes 1.9.6-gke.0, Kubernetes 1.8.10-gke.0, and Kubernetes 1.7.15-gke.0 are now generally available for use with Google Kubernetes Engine clusters.
New default version for new clusters
The following versions are now default according to this week's rollout
schedule:
Zonal clusters
Change
Kubernetes 1.8.9-gke.1 is now the default version for new zonal and regional clusters.
Versions no longer available
Change
The following versions are no longer available for new clusters or cluster
upgrades:
In ubuntu-gke-1604-xenial-v20180207-1,
used by Kubernetes 1.9.3-gke.0 and 1.9.4-gke.1, new pods could not be
scheduled to nodes where Docker was restarted.
Container-Optimized OS node image has been upgraded to cos-beta-65-10323-12-0 for clusters running Kubernetes 1.7.15-gke.0. See the COS image release notes for more information.
Kubernetes 1.9.4+: Fixes a bug that prevented clusters with IP aliases from
appearing.
March 13, 2018
Fixed
A patch for Kubernetes vulnerabilities CVE-2017-1002101 and CVE-2017-1002102 is now available according to this week's rollout
schedule. We recommend that you manually upgrade your nodes as soon as the patch becomes available in your cluster's zone.
Issues
Issue
Breaking Change: Do not upgrade your cluster if your application
requires mounting a secret, configMap, downwardAPI, or projected volume with write access.
To fix security vulnerability CVE-2017-1002102, Kubernetes 1.9.4-gke.1, Kubernetes 1.8.9-gke.1, and Kubernetes 1.7.14-gke.1 changed secret, configMap, downwardAPI, and projected volumes to mount
read-only, instead of allowing applications to write data and then reverting
it automatically. We recommend that you modify your application to accommodate
these changes before you upgrade your cluster.
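One way to adapt an application that writes to such a mount is to copy the mounted data into a writable emptyDir volume before the main container starts. A minimal sketch, assuming a hypothetical secret named app-secret and application image app-image:

```yaml
# Hedged sketch: mount the secret read-only, then copy it into a writable
# emptyDir so the application can still modify its local copy.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
  - name: secret-ro
    secret:
      secretName: app-secret    # hypothetical secret name
  - name: secret-rw
    emptyDir: {}
  initContainers:
  - name: copy-secret
    image: busybox
    command: ["sh", "-c", "cp /ro/* /rw/"]
    volumeMounts:
    - name: secret-ro
      mountPath: /ro
    - name: secret-rw
      mountPath: /rw
  containers:
  - name: app
    image: app-image            # hypothetical application image
    volumeMounts:
    - name: secret-rw
      mountPath: /etc/app-secret
```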
Issue
If your cluster uses IP Aliases and was created with the --enable-ip-alias flag, upgrading the
master to Kubernetes 1.9.4-gke.1 will prevent it from starting properly.
This issue will be addressed in an upcoming release.
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
The following versions are now available according to this week's rollout
schedule:
Feature
Kubernetes 1.9.4-gke.1, Kubernetes 1.8.9-gke.1, and Kubernetes 1.7.14-gke.1 are now generally available for use with Google Kubernetes Engine clusters.
New default version for new clusters
The following versions are now default according to this week's rollout
schedule:
Zonal clusters
Change
Kubernetes 1.8.8-gke.0 is now the default version for new zonal and regional clusters.
Scheduled auto-upgrades
Change
Clusters running the following Kubernetes versions will be automatically
upgraded as follows, according to the rollout schedule:
Regional clusters running Kubernetes 1.7.x will be
upgraded to Kubernetes 1.8.7-gke.1.
This upgrade applies to cluster masters.
Versions no longer available
Change
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes 1.8.7-gke.1
New Features
Feature
You can now use version aliases with gcloud's --cluster-version option to specify Kubernetes versions. Version aliases allow you to specify
the latest version or a specific version, without including the `-gke.0` version
suffix. See versioning
and upgrades for a complete overview of version aliases.
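For example, a version alias can stand in for the full patch version. A hedged sketch (cluster name hypothetical; the exact aliases available depend on the current rollout):

```shell
# Create a cluster at the latest available version.
gcloud container clusters create my-cluster --cluster-version=latest

# Create a cluster at the newest 1.9 patch release, without the -gke.N suffix.
gcloud container clusters create my-cluster --cluster-version=1.9
```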
March 12, 2018
Issues
Issue
A patch for Kubernetes vulnerabilities CVE-2017-1002101 and CVE-2017-1002102 will be available in the upcoming release. We recommend that you manually upgrade your nodes as soon as the patch becomes available.
You can now easily debug your Kubernetes services from the Google Cloud console with port-forwarding and web preview.
March 06, 2018
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
Versions no longer available
Change
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
The following versions are now available according to this week's rollout
schedule:
Feature
Kubernetes 1.9.3-gke.0, Kubernetes 1.8.8-gke.0, and Kubernetes 1.7.12-gke.2 are now generally available for use with Google Kubernetes Engine clusters.
Scheduled auto-upgrades
Change
Clusters running the following Kubernetes versions will be automatically
upgraded as follows, according to the rollout schedule:
Clusters running Kubernetes 1.8.x will be
upgraded to Kubernetes 1.8.7-gke.1.
Regional clusters running Kubernetes 1.8.x will have
etcd upgraded to etcd 3.1.11.
This upgrade applies to cluster masters.
Versions no longer available
Change
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes 1.8.5-gke.0
Change
The following versions are no longer available for new clusters or cluster
upgrades:
Beginning with Kubernetes version 1.9.3, you can enable metadata
concealment to prevent user Pods from accessing certain VM metadata
for your cluster's nodes. For more information, see Protecting
Cluster Metadata.
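At the time, metadata concealment was enabled per node pool with a beta gcloud flag. A hedged sketch (cluster and node pool names hypothetical; the flag shown here, --workload-metadata-from-node=SECURE, is our best recollection of the beta interface):

```shell
# Create a node pool with metadata concealment enabled (beta at the time).
gcloud beta container node-pools create secure-pool \
  --cluster my-cluster \
  --workload-metadata-from-node=SECURE
```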
Other Updates
Change
Ubuntu node image has been upgraded from ubuntu-gke-1604-xenial-v20180122 to ubuntu-gke-1604-xenial-v20180207 for clusters running Kubernetes 1.7.12-gke.2 and 1.8.8-gke.0.
Ubuntu node image has been upgraded from ubuntu-gke-1604-xenial-v20180122 to ubuntu-gke-1604-xenial-v20180207-1 for clusters running Kubernetes 1.9.3-gke.0.
Docker has been upgraded from 1.12 to 17.03, and the default storage driver has changed to overlay2.
Known issue: when Docker is restarted on a node, new pods cannot be scheduled on that node and remain stuck in the `ContainerCreating` state.
Change
Container-Optimized OS node image has been upgraded from cos-stable-63-10032-71-0 to cos-beta-65-10323-12-0 for clusters running Kubernetes 1.9.3-gke.0 and 1.8.8-gke.0. See the COS image release notes for more information.
February 13, 2018
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New default version for new clusters
The following versions are now default according to this week's rollout
schedule:
Zonal clusters
Change
Kubernetes version 1.8.7-gke.1 is now the default version for new zonal and regional clusters.
Scheduled auto-upgrades
Change
Clusters running the following Kubernetes versions will be automatically
upgraded as follows, according to the rollout schedule:
Clusters running Kubernetes 1.6.13-gke.1 and 1.7.12-gke.0 will be
upgraded to Kubernetes 1.7.12-gke.1.
Clusters running Kubernetes 1.9.1-gke.0 and 1.9.2-gke.0 will be
upgraded to Kubernetes 1.9.2-gke.1.
Clusters running etcd 2.* will be
upgraded to etcd 3.0.17-gke.2.
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
The following versions are now available according to this week's rollout
schedule:
Feature
Kubernetes 1.9.2-gke.1 is now generally available for use
with Google Kubernetes Engine clusters.
New default version for new clusters
The following versions are now default according to this week's rollout
schedule:
Zonal clusters
Change
Kubernetes version 1.7.12-gke.1 is now the default version for new zonal clusters.
Regional clusters
Change
Kubernetes version 1.8.7-gke.1 is now the default
version for new regional clusters.
Beginning with Kubernetes version 1.9.x on Google Kubernetes Engine, you can
now perform horizontal pod autoscaling based on custom metrics from
Stackdriver Monitoring (in addition to the default scaling based on CPU
utilization). For more information, see Scaling an Application and the custom metrics autoscaling tutorial.
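A hedged sketch of a HorizontalPodAutoscaler scaling on a custom metric, using the autoscaling/v2beta1 API level current in Kubernetes 1.9 (the Deployment name and metric name are hypothetical):

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metricName: queue_depth      # custom metric exported to Stackdriver
      targetAverageValue: 100
```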
Known Issues
Issue
Beginning with Kubernetes version 1.9.x, automatic firewall rules have
changed such that workloads in your Google Kubernetes Engine cluster cannot
communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.
You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new
firewall rule on your cluster.
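Such a rule allows traffic from the cluster's Pod range to other VMs on the same network. A hedged sketch (the network name and Pod CIDR here are hypothetical; substitute your cluster's actual Pod range):

```shell
# Allow traffic from the cluster's Pod range (hypothetically 10.4.0.0/14)
# to other VMs on the "default" network, restoring pre-1.9 behavior.
gcloud compute firewall-rules create my-cluster-pods-to-vms \
  --network default \
  --direction INGRESS \
  --source-ranges 10.4.0.0/14 \
  --allow tcp,udp,icmp
```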
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New default version for new clusters
The following versions are now default according to this week's rollout
schedule:
Change
Kubernetes version 1.7.12-gke.0 is now the default version for new
zonal clusters.
Change
Kubernetes version 1.8.6-gke.0 is now the default
version for new regional clusters.
New versions available for upgrades and new clusters
Feature
The following versions are now available according to this week's rollout
schedule:
Kubernetes 1.8.7-gke.0
Kubernetes 1.9.2-gke.0 clusters are now available for whitelisted early-access
users. Non-whitelisted users can specify version 1.9.2-gke.0 in Alpha Clusters.
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
Feature
The following versions are now available according to this week's rollout
schedule:
Kubernetes 1.9.1 clusters are now available for whitelisted early-access
users. Non-whitelisted users can specify version 1.9.1 in Alpha Clusters.
Scheduled auto-upgrades
Change
Clusters running the following Kubernetes versions will be automatically
upgraded as follows, according to the rollout schedule:
Clusters running Kubernetes 1.6.x will be upgraded to 1.7.11-gke.1.
You can now run Container Engine clusters in region europe-west4 (Netherlands).
Feature
You can now run Container Engine clusters in region northamerica-northeast1 (Montréal).
January 9, 2018
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New default version for new clusters
Change
Kubernetes version 1.7.11-gke.1 is now the default version for new clusters,
available according to this week's rollout schedule.
Scheduled auto-upgrades
Change
Clusters running the following Kubernetes versions will be automatically
upgraded as follows, according to the rollout schedule:
Clusters running Kubernetes 1.6.x will be upgraded to 1.6.13-gke.1.
Clusters running Kubernetes 1.7.x will be upgraded to 1.7.11-gke.1.
Clusters running Kubernetes 1.8.x will be upgraded to 1.8.5-gke.0.
This upgrade applies to cluster masters and, if node auto-upgrades are enabled, all cluster nodes.
New versions available for upgrades and new clusters
Feature
The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:
Kubernetes 1.8.6-gke.0
Kubernetes 1.7.12-gke.0
Versions no longer available
Change
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New default version for new clusters
Change
Kubernetes version 1.7.11-gke.1 is now the default version for new clusters,
available according to this week's rollout schedule.
New versions available for upgrades and new clusters
Feature
The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:
Kubernetes 1.8.5-gke.0
Versions no longer available
Change
The following versions are no longer available for new clusters or cluster
upgrades:
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
Feature
The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:
Kubernetes 1.8.4-gke.1
Kubernetes 1.7.11-gke.1
Kubernetes 1.6.13-gke.1
These version updates change the default node image for Kubernetes Engine nodes to Container-Optimized OS version cos-stable-63-10032-71-0-p.
Versions no longer available
Change
The following versions are no longer available for new clusters or
opt-in master and node upgrades:
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
Feature
The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:
Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
Feature
The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:
Kubernetes Engine's kubectl version has been updated from
1.8.2 to 1.8.3.
November 7, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.
New versions available for upgrades and new clusters
Feature
The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:
Added an option to the gcloud container clusters create command: --enable-basic-auth. This option allows you to create a cluster with basic authorization enabled.
Feature
Added options to the gcloud container clusters update command: --enable-basic-auth, --username, and --password. These options allow you to enable or
disable basic authorization and change the username and password for an existing cluster.
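A hedged sketch of these options in use (cluster name hypothetical; per the later October 3, 2017 note, passing an empty username disables basic authorization):

```shell
# Rotate the basic-auth credentials on an existing cluster.
gcloud container clusters update my-cluster \
  --username admin --password "$(openssl rand -base64 16)"

# Disable basic authorization by providing an empty username.
gcloud container clusters update my-cluster --username ""
```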
October 31, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.
New versions available for upgrades and new clusters
Feature
The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:
Kubernetes 1.7.9-gke.0
Scheduled auto-upgrades
Change
Clusters running the following Kubernetes versions will be automatically
upgraded as follows, according to the rollout schedule:
Clusters running Kubernetes 1.6.x will be upgraded to 1.6.11-gke.0.
Clusters running Kubernetes 1.7.x will be upgraded to 1.7.8-gke.0.
Clusters running Kubernetes 1.8.x will be upgraded to 1.8.1-gke.1.
This upgrade applies to cluster masters and, if node auto-upgrades are enabled, all cluster nodes.
New default version for new clusters
Change
Kubernetes version 1.7.8-gke.0 is now the default version for new clusters,
available according to this week's rollout schedule.
You can now run Container Engine clusters in region asia-south1 (Mumbai).
Fixes
Fixed
Clusters using the Container-Optimized
OS node image version cos-stable-61 can be affected by Docker daemon crashes and
restarts and become unable to schedule pods.
To mitigate this issue, clusters running Kubernetes versions 1.6.x, 1.7.x,
and 1.8.x are slated to automatically upgrade to versions 1.6.11-gke.0,
1.7.8-gke.0, and 1.8.1-gke.1 respectively. These versions have been remapped
to use the cos-stable-60-9592-90-0 node image.
Known Issues
Issue
Clusters running Kubernetes version 1.7.6 might see inaccurate memory usage
metrics for pods running on the cluster. Clusters are slated to automatically
upgrade to version 1.7.8-gke.0 to mitigate this issue. If node auto-upgrades
are not enabled for your cluster, you can manually upgrade to 1.7.8-gke.0.
October 24, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.
New versions available for upgrades and new clusters
Feature
Kubernetes version 1.8.1 is now generally available, according to this
week's rollout schedule. See the Google
Cloud blog post on Container Engine 1.8 for more information on the Kubernetes capabilities highlighted in this release.
You can now run CronJobs on your Container Engine cluster. CronJob is a Beta feature in Kubernetes
version 1.8.
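A hedged sketch of a minimal CronJob, using the batch/v1beta1 API level current in Kubernetes 1.8 (names and schedule are illustrative):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"          # run every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo hello"]
```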
Feature
You can now view the status of your cluster's nodes using the Google Cloud console.
Feature
The Google Cloud console browser-integrated cloud shell can now automatically
generate commands for the kubectl command-line interface.
Feature
You can now edit your cluster's workloads when viewing them with the
Google Cloud console.
Known Issues
Issue
Kubernetes Third-party Resources, previously deprecated, have been removed
in version 1.8. These resources will cease to function on clusters upgrading
to version 1.8.1 or later.
Issue
Audit Logging, a beta feature in Kubernetes 1.8, is currently not enabled
on Container Engine.
Issue
Horizontal Pod Autoscaling with Custom Metrics, a beta feature in
Kubernetes 1.8, is currently not enabled on Container Engine.
Other Updates
Change
Beta features in the Container Engine API (and gcloud command-line interface) are now exposed via the new v1beta1 API surface. To use beta
features on Container Engine, you must configure the gcloud command-line interface to use the Beta API surface to run gcloud beta container commands. See API organization for more information.
October 10, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.
New versions available for upgrades and new clusters
Feature
The following Kubernetes versions are now available for new clusters and opt-in
master upgrades for existing clusters, according to this week's rollout schedule:
1.7.8
1.6.11
Feature
Clusters running Kubernetes version 1.6.11 can safely upgrade to Kubernetes
versions 1.7.x.
Clusters running Kubernetes versions 1.7.8 and 1.6.11 have upgraded the
version of Container-Optimized OS running on cluster nodes from version cos-stable-60-9592-84-0 to cos-stable-61-9765-66-0. See the release notes for more details.
This upgrade updates the node's Docker version from 1.13
to 17.03. See the Docker documentation for details on feature deprecations.
October 3, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.
New versions available for upgrades and new clusters
Feature
Kubernetes version 1.8.0-gke.0 is now available for early access partners
and alpha clusters only.
To try out v1.8.0-gke.0, sign up for the early access program.
Scheduled master auto-upgrades
Change
Cluster masters running Kubernetes versions 1.7.x will be automatically
upgraded to Kubernetes v1.7.6-gke.1 according to this week's rollout schedule.
You can now rotate your username for basic authorization on existing
clusters, or disable basic authorization by providing an empty username.
Fixes
Fixed
Kubernetes 1.7.6-gke.1: Fixed a regression in fluentd.
Fixed
Kubernetes 1.7.6-gke.1: Updated the kube-dns add-on to
patch dnsmasq vulnerabilities announced on October 2. For more
information on the vulnerability, see the associated Kubernetes
Security Announcement.
Known Issues
Issue
Kubernetes 1.8.0-gke.0 (early access and alpha clusters only):
Clusters created with a subnetwork with an automatically-generated name that
contains a hash (e.g. "default-38b01f54907a15a7") might encounter issues
where their internal
load balancers fail to sync.
Container Engine clusters can enter a bad state if you convert your
automatically-configured network to a manually-configured one. In this
state, internal
load balancers might fail to sync, and node pool upgrades might
fail.
September 27, 2017
New Features
Feature
You can now configure a maintenance
window for your Container Engine clusters. You can use the maintenance
window feature to designate specific spans of time for scheduled maintenance
and upgrades to your master and nodes. Maintenance window is a beta feature on Container Engine.
Feature
Container Engine's node
auto upgrade feature is now generally available.
Feature
The Ubuntu node image is
now generally available for use on your Container Engine cluster nodes.
September 25, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.
Scheduled master auto-upgrades
Change
Cluster masters running Kubernetes versions 1.7.x will be automatically
upgraded to Kubernetes v1.7.5 according to this week's rollout schedule.
Change
Cluster masters running Kubernetes versions 1.6.x will be automatically
upgraded to Kubernetes v1.6.10 according to this week's rollout schedule.
Kubernetes v1.7.5: Fixed an issue with Kubernetes v1.7.0 to v1.7.4
in which controller-manager could become unhealthy and enter
a repair loop.
Fixed
Kubernetes v1.6.10: Fixed an issue in which a Google Cloud Load Balancer
could enter a persistently bad state if an API call failed while the ingress
controller was starting.
September 18, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.
New default version for new clusters
Change
Kubernetes v1.7.5 is the default version for new clusters, available according to this week's
rollout schedule.
New versions available for upgrades and new clusters
Feature
The following Kubernetes versions are now available for new clusters and
opt-in master upgrades for existing clusters:
1.7.6
1.6.10
New versions available for node upgrades and downgrades
Feature
The following Kubernetes versions are now available for node upgrades and downgrades:
Starting in Kubernetes version 1.7.6, the available resources on cluster
nodes have been updated to account for the CPU and memory requirements of
Kubernetes node daemons. See the Node
documentation in the cluster
architecture overview for more information.
Feature
You can now set a cluster
network policy on your Container Engine clusters running Kubernetes
version 1.7.6 or later.
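A hedged sketch of a network policy using the networking.k8s.io/v1 API available as of Kubernetes 1.7 (the pod labels here are hypothetical): it allows ingress to Pods labeled app=db only from Pods labeled app=api.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api
spec:
  podSelector:
    matchLabels:
      app: db            # policy applies to database Pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api       # only API Pods may connect
```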
Other Updates
Change
The deprecated container-vm node image type has been removed
from the list of valid Container Engine node images. Existing clusters and
node pools will continue to function, but you can no longer create new
clusters and node pools that run the container-vm node
image.
Issue
Clusters that use the deprecated container-vm as a node image
cannot be upgraded to Kubernetes v1.7.6 or later.
September 12, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.
New versions available for upgrades and new clusters
Feature
The following Kubernetes versions are now available for new clusters and
opt-in master upgrades for existing clusters:
1.7.5
1.6.9
1.6.7
Scheduled master auto-upgrades
Change
Cluster masters running Kubernetes versions 1.6.x will be upgraded to Kubernetes v1.6.9 according to this week's rollout schedule.
You can now use IP aliases with an existing subnetwork when creating a cluster. IP aliases are a Beta
feature in Google Kubernetes Engine version 1.7.5.
September 05, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.
New default version for new clusters
Change
Kubernetes v1.6.9 is the default version for new clusters, available according to this week's
rollout schedule.
New versions available for upgrades and new clusters
Feature
Kubernetes
v1.7.5 is now available for new clusters and opt-in master upgrades.
Versions no longer available
Change
The following Kubernetes versions are no longer available for new
clusters or upgrades to existing cluster masters:
Container Engine's kubectl version has been updated from
1.7.4 to 1.7.5.
Change
You can now run Container Engine clusters in region southamerica-east1 (São Paulo).
August 28, 2017
Kubernetes
v1.7.4 is available for new clusters and opt-in master upgrades.
Kubernetes
v1.6.9 is available for new clusters and opt-in master upgrades.
Clusters with a master version of v1.6.7 and Node
Auto-Upgrades enabled will have
nodes upgraded to v1.6.7.
Clusters with a master version of v1.7.3 and Node
Auto-Upgrades enabled will have
nodes upgraded to v1.7.3.
Starting at version v1.7.4, when Cloud Monitoring is enabled for a cluster, container system metrics will start
to be pushed by Heapster to the Stackdriver Monitoring API. The metrics remain
free, though Stackdriver Monitoring API quota will be affected.
Clusters running Kubernetes v1.6.9 and v1.7.4 have updated node images:
The COS node image was upgraded from cos-stable-59-9460-73-0 to cos-stable-60-9592-84-0. Please see the COS image release
notes for details.
The new COS image includes an upgrade of Docker, from v1.11.2 to
v1.13.1. This Docker upgrade contains many stability and performance
fixes. A full list of the Docker features that have been deprecated
between v1.11.2 and v1.13.1 is available on Docker's
website.
Three features in Docker v1.13.1 are disabled by default in the COS
m60 image, but are planned to be enabled in a later node image
release: live-restore, shared PID namespaces, and overlay2.
Known issue: Docker v1.13.1 supports HEALTHCHECK,
which was previously ignored by Docker v1.11.2 on COS m59. Kubernetes
supports more powerful liveness/readiness checks for containers, and
it currently does not surface or consume the HEALTHCHECK status
reported by Docker. We encourage users to disable HEALTHCHECK in
Docker images to reduce unnecessary overhead, especially if
performance degradation is observed after node upgrade.
Note that HEALTHCHECK could be inherited from the base image.
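A HEALTHCHECK inherited from a base image can be switched off in a derived image. A minimal sketch (base image name hypothetical):

```dockerfile
# The base image may define its own HEALTHCHECK instruction.
FROM some-base-image:latest

# HEALTHCHECK NONE disables any health check inherited from the base image,
# avoiding the per-container check overhead described above.
HEALTHCHECK NONE
```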
There is a known issue with StatefulSets in 1.7.X that causes StatefulSet pods
to become unavailable in DNS upon upgrade. We are currently recommending that
you not upgrade to 1.7.X if you are using DNS with StatefulSets. A fix is
being prepared. Additional information can be found here:
https://github.com/kubernetes/kubernetes/issues/48327
August 21, 2017
When using IP aliases, you can now represent service CIDR blocks by using a
secondary range instead of a subnetwork. This means you can use IP aliases
without specifying the --create-subnetwork option.
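A hedged sketch of creating such a cluster, assuming pre-created secondary ranges on an existing subnet (the cluster, subnet, and range names here are hypothetical):

```shell
# Create a cluster with IP aliases, reusing existing secondary ranges
# instead of letting GKE create a new subnetwork.
gcloud container clusters create my-cluster \
  --enable-ip-alias \
  --subnetwork my-subnet \
  --cluster-secondary-range-name pods \
  --services-secondary-range-name services
```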
Cluster etcd fragmentation/compaction fixes.
Known Issues upgrading to v1.7.3:
There is a known issue with StatefulSets in 1.7.X regarding annotations, so
we are currently recommending that you not upgrade to 1.7.X if you are using
them. A fix is being prepared. Additional information can be found here:
https://github.com/kubernetes/kubernetes/issues/48327
August 14, 2017
Cluster masters running Kubernetes versions 1.7.X will be upgraded to v1.7.3 according to the following schedule:
You can now specify a minimum CPU size/class for Alpha clusters by using
the --min-cpu-platform flag with gcloud alpha container commands.
Cluster resize commands (gcloud alpha container clusters resize or gcloud
beta container clusters resize) now safely drain nodes before removal.
Updated Google Container Engine's kubectl from version 1.7.2 to 1.7.3.
Added --logging-service flag to gcloud beta container clusters update.
This flag controls the enabling and disabling of Stackdriver Logging integration.
Use --logging-service=logging.googleapis.com to enable and --logging-service=none to disable.
Modified the --scopes flag in gcloud beta container clusters create and gcloud beta container node-pools create commands to default to logging.write, monitoring, and to support passing an empty list.
August 7, 2017
Kubernetes v1.7.3 is available for new clusters and opt-in master upgrades.
Kubernetes v1.6.8 is available for new clusters and opt-in master upgrades.
Cluster masters running Kubernetes version v1.6.6 or older will be upgraded to v1.6.7 according to the following schedule:
Node pools can now be created with an initial node count of 0.
Cloud monitoring can only be enabled in clusters that have monitoring scope
enabled in all node pools.
Known Issues upgrading to v1.6.7:
Kubernetes 1.6.7 includes version 0.9.5 of the Google Cloud Ingress Controller. This version contains a
fix for a bug that caused the controller to incorrectly synchronize Google Cloud URL Maps. Changes to
the ingress resource may not have caused the Google Cloud URL Map to update. Using the fixed controller
will ensure maps reflect the host and path rules. To avoid potential disruption, validate that
all ingress objects contain the desired host or path rules.
August 3, 2017
Users with access to Kubernetes Secret objects
can no longer view the secrets' values in the Google Container Engine UI.
The recommended way to access them is with the kubectl tool.
August 1, 2017
The VM firewall rule (e.g. cluster-<hash>-vms) for non-legacy auto-mode
networks now includes both the primary and reserved VM ranges (10.128/9)
if the primary range lies outside of the reserved range.
You can now use the beta Ubuntu node image with clusters running Kubernetes
version 1.6.4 or higher.
You can now run Container Engine clusters in region europe-west3 (Frankfurt).
July 26, 2017
You can now use the Google Cloud console to add additional
zones to, or remove zones from, your existing multi-zone clusters.
For more information, see Multi-Zone Clusters.
You can now use the Google Cloud console to define Master Authorized Networks: a restricted range of IP addresses that are permitted
to access your container cluster's Kubernetes master endpoint.
July 25, 2017
Kubernetes v1.7.2 is available for new clusters and opt-in master upgrades.
Known Issues upgrading to v1.7.2:
If you are upgrading nodes from 1.7.0 or 1.7.1 to 1.7.2, you may
experience service disruption if you have services of
type=LoadBalancer. To mitigate this potential disruption, see the upgrade instructions for versions 1.7.0 and 1.7.1.
Kubernetes v1.6.7 is the default version for new clusters, released according to the following
schedule:
gcloud container clusters create now allows the Kubernetes Dashboard to be
disabled for a new cluster via the --disable-addons=KubernetesDashboard flag.
gcloud container clusters update now allows the Kubernetes Dashboard to be
disabled on existing clusters via the --update-addons=KubernetesDashboard=DISABLED flag.
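The two dashboard flags above can be sketched as full commands (cluster names are placeholders):

```shell
# Sketch: disable the Kubernetes Dashboard add-on at creation time and on
# an existing cluster. "my-cluster" is a placeholder name.
gcloud container clusters create my-cluster \
    --disable-addons=KubernetesDashboard
gcloud container clusters update my-cluster \
    --update-addons=KubernetesDashboard=DISABLED
```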
July 18, 2017
Kubernetes v1.7.1 is available for new clusters and opt-in master upgrades.
Cluster masters running Kubernetes version v1.7.0 will be upgraded to v1.7.1 according to the following schedule:
Container Engine now respects Kubernetes Pod Disruption Budgets,
making stateful workloads more stable during upgrades. This also reduces
disruptions during node auto-upgrades.
gcloud container clusters get-credentials now correctly respects the
HOMEDRIVE/HOMEPATH and USERPROFILE environment variables when generating the
kubectl config file on Windows.
Known Issues with v1.7.1:
Google Cloud Internal Load Balancers created through Kubernetes services (a
Beta feature in 1.7) have an issue that causes health-checks to fail
preventing them from functioning. This will be fixed in a future patch
release.
Services of type=LoadBalancer in clusters that have nodes running
Kubernetes v1.7 may fail Google Cloud Load Balancer health checks. However, the Load
Balancers will continue to forward traffic to backends. This issue will be
fixed in a future patch release and may require special upgrade actions.
July 13, 2017
New views available in Google Container Engine UI, allowing cross-cluster
overview and inspection of various Kubernetes Objects. This new UI will be
rolling out in the coming week:
Workloads:
inspect and diagnose your pods and their controllers.
Kubernetes 1.7 is being made available as an optional version for clusters.
Please see the release announcement for more details on new features.
You can now use HTTP re-encryption through Google Cloud Load Balancing to
allow HTTPS access from the Google Cloud Load Balancer to your service backend. This
feature ensures that your data is fully encrypted in all phases of transit,
even after it enters Google's global network.
Support for all-private IP (RFC 1918) addresses is generally available. These
addresses allow you to create clusters and access resources in all-private IP
ranges, and extend your ability to use Container Engine clusters with
existing networks.
Support for external source IP preservation is now generally available.
This feature allows applications to be fully aware of client IP addresses
for Kubernetes services you expose.
Cluster autoscaler now supports scaling node pools to 0 or 1, for when
you don't need capacity.
Cluster autoscaler can now use a pricing-based expander, which applies additional
cost-based constraints to let you use auto-scaling in the most cost-effective
manner. This is the default as of 1.7.0 and is not user-configurable.
Cluster autoscaler now supports balanced scale-outs of similar node groups.
This is useful for clusters that span multiple zones.
You can now use API Aggregation to extend the Kubernetes API with custom APIs.
For example, you can now add existing API solutions such as service catalog,
or build your own.
The following new features are available on Alpha clusters running Kubernetes
version 1.7:
Local storage
External webhook admission controllers
Known Issues with v1.7.0:
Kubelet certificate rotation is not enabled for Alpha clusters. This issue
will be fixed in a future release.
Kubernetes services with network load balancers using static IP will cause the kube-controller-manager to crash loop, leading to multiple master repairs. See issue #48848 for more details. This issue will be fixed in a future release.
You can now disable basic authentication for new clusters using the Google Cloud console.
You can now disable client certificate generation for new clusters using the Google Cloud console.
June 26, 2017
Known Issues with v1.6.6: A bug in the version of fluentd bundled with Kubernetes v1.6.6 causes JSON-formatted logs to be exported as plain text. This issue will be
fixed in v1.6.7. Meanwhile v1.6.6 will remain available as an optional
version for new cluster creation and opt-in master upgrades, but will not be
made the default. See issue #48018 for more
details.
There will be no release for the week of July 3rd, since this is a holiday
in the US. The next release is planned for the week of July 10th.
The original plan to upgrade container cluster masters to 1.6 this week has been postponed due to a bug in the GLBC ingress controller
that causes unintentional overwrites of manual health check edits (see known issues for v1.6.4).
This bug is fixed in 1.6.6.
DeleteNodepool now drains all nodes in the pool before deletion.
You can now run Container Engine clusters in region australia-southeast1 (Sydney).
June 13, 2017
v1.5.7 will no longer be available for new clusters and master upgrades.
All cluster masters will be upgraded to v1.6.4 in the week of 2017-06-19.
June 5, 2017
Cluster masters running Kubernetes versions v1.6.0 - v1.6.3 will be
upgraded to v1.6.4 according to the following schedule:
You can now use the Google Cloud console to choose whether
clusters should use legacy authorization permissions. This option is available in clusters
running version 1.6 or later.
See the Role-Based Access Control documentation for more information.
May 10, 2017
Cluster masters running Kubernetes versions v1.5.6 and below will be
upgraded to v1.5.7 according to the following schedule:
v1.6.0 is no longer available for container cluster node
upgrades/downgrades.
Known Issues
A known issue with Container Engine's IP Rotation feature can cause it to break Kubernetes features that depend on the proxy
endpoint (such as kubectl exec and kubectl logs), as well as cluster metrics
exports into Stackdriver. This issue only affects your cluster if you ran CompleteIPRotation, and have also disabled the default SSH
firewall rule for cluster nodes. There is a simple manual fix; see IP Rotation known issues for details.
May 3, 2017
You can now use the Google Cloud console to choose whether
existing node pools should be automatically upgraded when a new Kubernetes
version becomes available.
See the Node Auto-Upgrade documentation for more information.
You can now use the Google Cloud console to scale existing
clusters running Kubernetes version 1.6.0 or later up to 5000 nodes in most
zones.
May 2, 2017
Kubernetes v1.5.7 is the default version for new clusters. This version will be available for
new clusters and opt-in master upgrades according to the following planned
schedule:
Cluster masters running Kubernetes versions v1.6.0 and v1.6.1 will be upgraded to v1.6.2.
April 26, 2017
Kubernetes v1.6.2 will be available for new clusters and opt-in master upgrades.
You can create a cluster with HTTP basic authentication disabled by passing
an empty username: gcloud container clusters create CLUSTER_NAME --username="". This feature only works with version 1.6.0 and later.
Fixed a bug where SetMasterAuth would fail silently on clusters below
v1.6.0. SetMasterAuth is only allowed for clusters at v1.6.0 and above.
Fixed a bug for clusters at v1.6.0 and above where fluentd pods were
mistakenly created on all nodes when logging was disabled.
The gcloud kubectl version is now 1.6.2 instead of 1.6.0.
April 12, 2017
Kubernetes v1.6.1 will be available for new clusters and opt-in master upgrades
according to the following planned schedule:
Container Engine hosted masters will be upgraded to v1.5.6 according to the
planned schedule mentioned above.
Known issue:
gcloud container clusters update --set-password (or --generate-password), for setting or rotating your cluster admin password, does not work on clusters running Kubernetes version 1.5.x or earlier. Please use this method only on clusters running Kubernetes version 1.6.x or later.
April 4, 2017
Kubernetes v1.6.0 will be available for new clusters and opt-in master upgrades
according to the following planned schedule:
Container-Optimized OS is now generally available. You can create or upgrade clusters and node
pools that use Container-Optimized OS by specifying imageType values of
either COS or GCI.
A new system daemon, node problem detector, is introduced in Kubernetes v1.6
on COS node images. It detects node problems (e.g. kernel/network/container
runtime issues) and reports them as node conditions and events.
Starting in 1.6, a default StorageClass instance with the gce-pd provisioner
is installed. All unbound PVCs that don't specify a StorageClass will
automatically use the default provisioner, which is different behavior from
previous releases and can be disabled by modifying the default StorageClass
and removing the storageclass.beta.kubernetes.io/is-default-class
annotation. This feature replaces alpha dynamic provisioning, but the
alpha annotation will still be allowed and will retain the same behavior.
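One way to opt out, as a sketch, is to clear the annotation named above on the installed class; here "standard" is an assumed StorageClass name, not one stated in the release note:

```shell
# Sketch: stop the installed gce-pd StorageClass from being the default by
# setting its default-class annotation to "false".
# "standard" is an assumed StorageClass name.
kubectl patch storageclass standard -p \
  '{"metadata":{"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"false"}}}'
```

With the annotation cleared, unbound PVCs without an explicit StorageClass revert to the pre-1.6 behavior of not being dynamically provisioned.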
gcloud container clusters create|get-credentials will now configure
kubectl to use the credentials of the active gcloud account by default,
instead of using application default credentials. This requires kubectl
1.6.0 or higher. You can update kubectl by running gcloud components update kubectl.
If you prefer to use application default credentials to authenticate kubectl
to Google Container Engine clusters, you can revert to the previous behavior
by setting the container/use_application_default_credentials property:
gcloud config set container/use_application_default_credentials true
Google Cloud CLI kubectl version updating to 1.6.0.
New clusters launched at 1.6.0 will use etcd3 in the master.
Existing cluster masters will be automatically updated to use etcd3 in a
future release.
Starting in 1.6, RBAC can be used to grant permissions for users and Service Accounts to the
cluster's API. To help transition to using RBAC, the cluster's legacy
authorization permissions are enabled by default, allowing Kubernetes
Service Accounts full access to the API like they had in previous versions
of Kubernetes. An option will be rolled out soon to allow the legacy
authorization mode to be disabled in order to take full advantage of RBAC.
You can now use gcloud to set or rotate the admin password for container
clusters by running gcloud container clusters update --set-password.
During node upgrades, Container Engine will now verify and recreate the
Managed Instance Group for a node pool (at size 0) if required.
March 29, 2017
Kubernetes v1.5.6 is the default version for new clusters. This version will be available for
new clusters and opt-in master upgrades according to the following planned
schedule:
Google Cloud CLI kubectl version updating to 1.5.3.
February 14, 2017
It is no longer necessary to disable the HttpLoadBalancing add-on when you
create a cluster without adding the compute read/write scope to nodes. Previously, when you created a cluster without adding the
compute read/write scope, you were required to disable HttpLoadBalancing.
January 31, 2017
Google Cloud CLI kubectl version updating to 1.5.2.
The Google Cloud CLI and kubectl 1.5+ support using gcloud credentials for authentication.
Currently, gcloud container clusters create and gcloud container clusters
get-credentials configure kubectl to use Application Default
Credentials to authenticate to container clusters. If these differ from the Identity and Access Management (IAM) role that
the Google Cloud CLI is using, kubectl requests can fail authentication
(#30617). With Google
Cloud CLI 140.0.0 and kubectl 1.5+, the Google Cloud CLI can configure kubectl to use its
own credentials. This means that if, for example, the gcloud command line is configured to use a
service account, kubectl will authenticate as the same service account.
To enable using the Google Cloud CLI's own credentials, set the container/use_application_default_credentials property to false: gcloud config set container/use_application_default_credentials false
The current default behavior is to continue using application default
credentials. The Google Cloud CLI credentials will be made the default for kubectl
configuration (via gcloud container clusters create|get-credentials) in a
future release.
Rollout of Kubernetes v1.5 as the default for new clusters is postponed until
v1.5.2 to fix known issues with v1.5.1.
Fixed an issue where Node Upgrades would fail if one of the nodes was not
registered with the Master.
Google Cloud CLI kubectl version updating to 1.5.1.
Known Issues with Kubernetes v1.5.1
#39680 Defining a pod
with a resource request of 0 will cause Controller Manager to crashloop.
#38322 Kubelet can
evict or refuse to admit critical pods (kube-proxy, static pods) when under
memory pressure.
January 4, 2017
The default cluster version for new clusters will be changed to Kubernetes v1.5.1 in the week of January 9th.
January 3, 2017
The Google Cloud console now allows setting newly created
clusters and node pools to automatically upgrade when a new Kubernetes version
becomes available.
See the documentation for details.
Node pools can now opt in to automatically upgrade when a new Kubernetes
version becomes available.
See the documentation for details.
Node pool upgrades can now be rolled back using the gcloud alpha container node-pools rollback <pool-name> command.
See gcloud alpha container node-pools rollback --help for more details.
December 7, 2016
The Google Cloud console now allows choosing between
Container-VM Image (GCI) and the deprecated container-vm when adding new node
pools to existing clusters.
To learn more about image types, click here.
December 5, 2016
Container Engine hosted masters running v1.4 will be upgraded to
v1.4.6.
November 29, 2016
Increased the master disk size in large Google Container Engine clusters. This is needed because etcd requires much higher IOPS in large clusters.
Changed the gcloud container list-tags command to support user-specified filters on occurrences and to expose a column summarizing vulnerability information.
The Google Cloud console now allows choosing between
Container-VM Image (GCI) and the deprecated container-vm on cluster creation.
To learn more about image types, click here.
Kubernetes v1.4.5 and v1.3.10 include fixes for CVE-2016-5195 (Dirty Cow),
which is a Linux kernel vulnerability that allows privilege escalation. If
your clusters are running nodes with lower versions, we strongly encourage you
to upgrade them to a version of Kubernetes that includes a node image that is
not vulnerable, such as Kubernetes 1.3.10 or 1.4.5. To upgrade a cluster, see
https://cloud.google.com/kubernetes-engine/docs/clusters/upgrade.
Upgrade operations can now be cancelled using gcloud alpha container
operations cancel <operation_id>. See gcloud alpha container operations
cancel --help for more details.
Reminder that the base OS image for nodes has changed in the 1.4 release. A
set of known issues has been identified and documented here.
If you suspect that your application or workflow is having problems with new
clusters, you may select the old ContainerVM by following the opt-out
instructions documented here.
Rewrote the node upgrade logic to make it less disruptive by waiting for the node to
register with the Kubernetes master before upgrading the next node.
Added support for new clusters and node pools to use preemptible
VM instances by using the --preemptible flag. See gcloud beta container clusters create --help and gcloud beta container node-pools create --help for more details.
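Sketching the preemptible flag as a full command (pool and cluster names are placeholders):

```shell
# Sketch: add a node pool of preemptible VMs to an existing cluster.
# "preemptible-pool" and "my-cluster" are placeholder names.
gcloud beta container node-pools create preemptible-pool \
    --cluster my-cluster \
    --preemptible
```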
Fixed a bug in gcloud beta container images list-tags.
Added support for Kubernetes labels on new clusters and node pools by passing --node-labels=label1=value1,label2=value2.... See gcloud container clusters create --help and gcloud container node-pools create --help for more details and
examples.
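The node-labels syntax above might be used like this (the cluster name and label keys/values are illustrative):

```shell
# Sketch: label every node of a new cluster at creation time.
# "my-cluster", "env", and "tier" are placeholder names.
gcloud container clusters create my-cluster \
    --node-labels=env=prod,tier=backend
```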
Update kubectl to version 1.4.1.
October 5, 2016
Can now specify the cluster-version when creating Google Container Engine clusters.
Update kubectl to version 1.4.0.
Introduce 1.3.8 as a valid cluster version. 1.3.8 fixes a log rotation leak on the master.
Container-VM Image (GCI), which was introduced earlier this year, is now the default
ImageType for new clusters and node pools. The old container-vm is now deprecated; it
will be supported for a limited time. To learn more about how to use GCI, click here.
Can now create temporary clusters with all Kubernetes alpha features enabled.
init-containers are now supported on Container Engine, but only when master and nodes
are running 1.4.0 or higher. Other configurations are not supported.
Customers manually upgrading masters to 1.4 should be aware that the
lowest node version supported with it is 1.2.
September 20, 2016
Container Engine hosted masters will be upgraded to v1.3.7 in zones according to the following planned schedule:
Container Engine hosted masters have been upgraded to v1.3.6.
Known Issues with v1.3.6 fixed in v1.3.7
#32415 Fixes a bug
in kubelet hostport logic which flushes the KUBE-MARK-MASQ
iptables chain.
#30790 Fixes
the panic that occurs in the federation controller manager when
registering a Container Engine cluster to the federation.
September 6, 2016
Cluster update to add node locations (API: rest/v1/projects.zones.clusters/update,
CLI: gcloud beta container clusters update --additional-zones) will now wait
for all nodes to be healthy before marking operation completed (DONE).
#27653 Volume manager should be more robust across restarts.
#29997 loadBalancerSourceRanges does not work on Container Engine.
Known Issues with older versions fixed in v1.3.6
#31219 Graceful termination fails if terminationGracePeriodSeconds > 2
#30828 Netsplit causes pods to get stuck in NotReady for < 1.2 nodes
#29358 Google Compute Engine PD Detach fails if node no longer
exists.
cluster.master_auth.password is no longer required in a clusters.create request. If a password is not specified for a cluster, one will be generated.
Google Cloud CLI kubectl version updated to v1.3.5
Image Type selection for gcloud container commands is now GA. You can now use gcloud container clusters create --image-type=... and gcloud container clusters upgrade --image-type=...
The Google Cloud CLI changed the container/use_client_certificate property default value to false. This makes the gcloud container clusters create and gcloud container clusters get-credentials commands configure kubectl to use Google OAuth2 credentials by default instead of the legacy client certificate.
Existing Google Container Engine cluster masters were upgraded to Kubernetes v1.2.5 over the previous week.
Improved error messages when a cluster is already being operated on.
Now supports creating clusters and node pools with local SSDs attached to
nodes. See Container Cluster Operations for examples.
Cluster autoscaling is now available for clusters running v1.3.0. Autoscaling
options can be specified on cluster create and update. See Container Cluster Operations for examples.
Existing single-zone clusters can now be updated to multi-zone clusters by
running gcloud beta container clusters update --additional-zones. See Container Cluster Operations for examples.
Known issues:
Scaling v1.3.0 clusters after creation (including via cluster
autoscaling) can cause bad routes to be created with colliding target
CIDRs. Bad routes can be detected and manually fixed as follows:
1. List routes with duplicate destination ranges:
gcloud compute routes list --filter="name ~ gke-$CLUSTER_NAME" --format='value(destRange)' | uniq -d
If the above returns any values, the bad routes can be fixed by
deleting one of the target instances. A new one will be automatically
recreated with a working route.
2. Replace $DUPE_RANGE with a destination range from step 1:
gcloud compute routes list --filter="destRange:$DUPE_RANGE"
3. Delete one of the target instances listed by step 2:
gcloud compute instances delete $TARGET_INSTANCE
kubectl authorization for v1.3.0 clusters fails if the cluster is
created with a non-default master auth username (gcloud container
clusters create --username ...). This can be worked around by
authenticating with the cluster certificate instead.
The gcloud container clusters update command is now available for updating
cluster settings of an existing container cluster.
The gcloud container node-pools commands are now available for creating,
deleting, describing, and listing node pools of a cluster.
The Google Cloud console supports listing node pools. Listed
node pools can also be upgraded/downgraded to supported Kubernetes versions.
May 18, 2016
gcloud alpha container commands (e.g. create) now support specifying
alternate ImageTypes, such as the newly available Beta Container-VM Image.
To try it out, update to the latest gcloud (gcloud components install alpha ;
gcloud components update) and then create a new cluster: gcloud alpha
container clusters create --image-type=GCI $NAME. Support for ImageTypes in
the Google Cloud console will follow at a later date.
The gcloud container clusters list command now sorts the clusters
based on zone and then on cluster name.
The gcloud container clusters create command now allows specifying --max-nodes-per-pool (default 1000) to create multiple node pools for
large clusters.
May 16, 2016
Container Engine hosted masters have been upgraded to v1.2.4.
Google Cloud CLI kubectl version updated to v1.2.4.
CreateCluster calls now accept multiple NodePool objects.
May 6, 2016
Container Engine hosted masters have been upgraded to v1.2.3.
Google Cloud CLI kubectl version updated to v1.2.3.
April 29, 2016
Kubernetes v1.2.3 is the default version for new clusters.
gcloud container clusters resize now allows specifying a node pool
via --node-pool.
April 21, 2016
Can now create a multi-zone cluster, which is a cluster whose nodes span
multiple zones, enabling higher availability of applications running in the
cluster. More details on multi-zone clusters can be found at
http://kubernetes.io/docs/admin/multiple-zones/. The ability to convert
existing clusters to be multi-zone will be coming soon.
gcloud container clusters create now allows specifying multiple zones within
a region for your cluster's nodes to be created in by using the --additional-zones flag.
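As an illustration, a multi-zone cluster could be created along these lines (the cluster name and zone names are placeholders):

```shell
# Sketch: create a cluster whose nodes span three zones in us-central1.
# "my-cluster" and the zone names are illustrative.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --additional-zones us-central1-b,us-central1-c
```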
Fixed a bug that caused the kubectl component to be missing from
gcloud components list on Windows.
Google Cloud CLI kubectl version updated to v1.2.2.
April 13, 2016
Known issue: the "bastion route"
workaround for accessing services from outside of a kubernetes cluster no
longer works with 1.2.0 - 1.2.2 nodes, due to a change in kube-proxy. If you
are using this workaround, we recommend not upgrading nodes to 1.2.x at this
time. This will be addressed in a future patch release.
April 11, 2016
Kubernetes v1.2.2 is the default version for new clusters.
The Google Cloud console supports the "Google Kubernetes Engine master upgrade" option,
which allows proactive upgrade of cluster masters. Note this is the same
functionality available via gcloud container clusters upgrade --master.
April 4, 2016
Kubernetes v1.2.1 is the default version for new clusters.
March 29, 2016
The API Discovery Doc and Client Libraries have been updated.
gcloud container clusters create|get-credentials will warn or fail, respectively,
if the HOME env var isn't set. The variable is required to store kubectl
credentials (kubeconfig).
The Google Cloud CLI kubectl component is now available for Windows.
March 21, 2016
Kubernetes v1.2.0 is the default version for new clusters. This update contains significant
changes from v1.1, described in detail at releases-1.2.0.
Major changes include:
Increased cluster scale by 400% to 1000 nodes with 30,000 pods per
cluster.
Kubelet supports 100 pods per node with 4x reduced system overhead.
Deployment and DaemonSet API now Beta. Job and HorizontalPodAutoscaler
APIs moved from Beta to GA.
Ingress supports HTTPS.
Kube-Proxy now defaults to an iptables-based proxy.
Docker v1.9.1.
Dynamic configuration for applications via ConfigMap API provides
alternative to baking in commandline flags when building container.
New kubernetes GUI that enables the same functionality as CLI.
Graceful node shutdown via the kubectl drain command to evict
pods from nodes.
Clusters created without compute read/write node scopes must also disable HttpLoadBalancing.
Note that disabling compute read/write is only possible via the raw API, not the
Google Cloud CLI or the Google Cloud console.
ClusterUpdates to clusters whose node scopes do not have compute read/write
must also specify an AddonsConfig with HttpLoadBalancing disabled.
Google Cloud CLI kubectl version updated to 1.2.0.
March 16, 2016
CreateCluster will now succeed if the kubernetes API reports at least 99% of
nodes have registered and are healthy within a startup deadline.
gcloud container clusters create prints a warning if cluster creation
finished with at least 99% but less than 100% of nodes registered/healthy.
March 2, 2016
Container Engine hosted master upgrades from v1.1.7 to v1.1.8 were
completed this week.
February 26, 2016
Kubernetes v1.1.8 is the default version for new clusters.
DeleteCluster will fail fast with an error if there are backend services that
target the cluster's node group, as existence of such services will block
deletion of the nodes.
You can now self-initiate an upgrade of a cluster's hosted master to the
latest supported Kubernetes version by running gcloud container clusters upgrade --master. This lets you access versions
ahead of automatic Container Engine hosted master upgrades.
February 10, 2016
Container Engine hosted master upgrades from v1.1.3, v1.1.4 to v1.1.7 were
completed this week.
Google Cloud CLI kubectl version is 1.1.7.
January 28, 2016
Kubernetes v1.1.7 is the default version for new clusters.
January 15, 2016
Kubernetes v1.1.4 is the default version for new clusters.
Can now run gcloud container clusters resize to resize Container Engine clusters.
gcloud container clusters describe and list now notify the user when a
node upgrade is available.
Google Cloud CLI kubectl version is 1.1.3.
January 5, 2016
Fixed an issue where the Google Cloud console incorrectly
disallowed users from creating clusters with Cloud Monitoring enabled.
Fixed an issue where users could not create clusters in domain-scoped projects.
December 8, 2015
Kubernetes v1.1.3 is the default version for new clusters.
Added support for custom machine types.
Create cluster now checks that the network for the cluster has a route to the
default internet gateway. If no such route exists, the request returns with an
error immediately, instead of timing out waiting for the nodes to register.
The Google Container Engine v1beta1 API, which was previously deprecated, is
now disabled.
Container Engine hosted masters were upgraded to v1.1.2 this week, except
for clusters with nodes older than v1.0.1, which will be upgraded once v1.1.3
is available.
November 30, 2015
Kubernetes v1.1.2 is the default version for new clusters.
Container Engine now supports manual-subnet networks.
Subnetworks are an Alpha feature of Google Compute Engine and you must be
whitelisted to use them. See the Subnetworks documentation for whitelist information.
Once whitelisted, the subnetwork is specified in the cluster create
request. In the REST API, this is specified as the value of the subnetwork field of the cluster object;
when using gcloud container commands, pass a --subnetwork flag to gcloud container clusters create.
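The gcloud form of that request might look like this (the network and subnetwork names are placeholders, and the project must be whitelisted for the alpha Subnetworks feature as noted above):

```shell
# Sketch: create a cluster in a specific manual-subnet network.
# "my-cluster", "my-network", and "my-subnet" are placeholder names.
gcloud container clusters create my-cluster \
    --network my-network \
    --subnetwork my-subnet
```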
Improved reliability of cluster creation and deletion.
The release documented below is being rolled out over the next few days.
Clusters can now be created with up to 250 nodes.
The Google Compute Engine load balancer controller addon is added by default
to new clusters. Learn more.
Kubernetes v1.1.1 is the default version for new clusters.
Important Note: The packaged kubectl is version 1.0.7; consequently,
new Kubernetes 1.1 APIs like autoscaling will not be available via kubectl until next week's push of the kubectl binary.
Users who want access before then can manually download a 1.1 kubectl from:
And then chmod a+x kubectl; cp kubectl $(which kubectl) to install it.
Kubernetes v0.19.3 and v0.21.4 are no longer supported for nodes.
New clusters using the f1-micro machine type must contain at least three
nodes. This ensures that there is enough memory in the cluster to run more
than just a couple of very small pods.
kubectl version is 1.0.7.
November 4, 2015
Kubernetes v1.0.7 is the default version for new clusters.
Existing clusters will have their masters upgraded from v1.0.6 to
v1.0.7 over the coming week.
Added a detail field to operation objects to show progress details for
long-running operations (such as cluster updates).
Better categorization of errors caused by projects not being fully
initialized with the default service accounts.
October 19, 2015
The --container-ipv4-cidr flag has been deprecated in favor of --cluster-ipv4-cidr.
The current node count of Container Engine clusters is available from the REST
API.
Metrics in Cloud Monitoring are now available with a much shorter delay.
Cluster names now only need to be unique within each zone, not within the
entire project.
Error messages involving regular expressions have more useful, human-readable
hints.
October 12, 2015
You can now specify custom metadata to be added to the nodes when creating
a cluster with the REST API.
September 25, 2015
Cluster self links now contain the project ID rather than the project number.
kubectl version is 1.0.6.
September 18, 2015
Kubernetes v1.0.6 is the default version for new clusters.
Existing clusters will have their masters upgraded from v1.0.4 to
v1.0.6 over the coming week.
September 4, 2015
Fixed a bug where a CreateCluster request would be rejected if it contained
a ClusterApiVersion. Since the field is output-only, it is now silently
ignored.
August 31, 2015
To avoid creating clusters without any space for non-system containers, there
are now limits on clusters consisting of f1-micro instances:
A single-node f1-micro cluster must disable both logging and monitoring.
A two-node f1-micro cluster must disable at least one of logging and
monitoring.
August 26, 2015
Google Container Engine is out of beta.
All gcloud beta container commands are now in the gcloud container command group instead.
You can now use the Google Container Engine API to enable or disable Google
Cloud Monitoring on your cluster. Use the desiredMonitoringService field of the cluster update method.
When updating this field, the Kubernetes apiserver will see a brief outage
as the master is updated.
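As a sketch, an update request toggling monitoring might look like the following; the endpoint path and the project/zone/cluster names are illustrative, and only the desiredMonitoringService field comes from the note above:

```
PUT https://container.googleapis.com/v1/projects/my-project/zones/us-central1-a/clusters/my-cluster

{
  "update": {
    "desiredMonitoringService": "monitoring.googleapis.com"
  }
}
```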
August 14, 2015
Kubernetes v1.0.3 is the default version for new clusters.
The compute and devstorage.read_only auth scopes are no longer required
and are no longer automatically added server-side to new clusters. The gcloud command and Google Cloud console still add these scopes on the
client side when creating new clusters; the REST API does not.
Listing container clusters in a non-existent zone now results in a 404: Not Found error instead of an empty list.
The get-credentials command has moved to gcloud beta container clusters get-credentials. Running gcloud beta container get-credentials prints an error redirecting to the new
location.
The new gcloud beta container get-server-config command returns:
The default Kubernetes version currently used for new clusters.
The list of supported versions for node upgrades
(via gcloud beta container clusters upgrade).
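For example (the zone and the exact output fields shown here are illustrative, not a transcript of the historical command):

```shell
$ gcloud beta container get-server-config --zone us-central1-a
defaultClusterVersion: 1.0.3
validNodeVersions:
- 1.0.3
- 1.0.1
```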
August 4, 2015
Kubernetes v1.0.1 is the default version for new clusters.
The kubectl version is 1.0.1.
Removed the v1beta1 API discovery doc in preparation for deprecation.
The gcloud alpha container commands target the Container Engine v1 API. The
options for gcloud alpha container clusters create have been updated
accordingly:
--user renamed to --username.
--cluster-api-version removed. The cluster version is not selectable
in the v1 API; new clusters are always created at the latest supported version.
--image option removed. The source image is not selectable in the v1 API;
clusters are always created with the latest supported ContainerVM image.
Note that using an unsupported image (i.e. not ContainerVM) would
result in an unusable cluster in most cases anyway.
Added --no-enable-cloud-monitoring to turn off cloud monitoring
(on by default).
Added --disk-size option for specifying the boot disk size of node VMs.
July 27, 2015
A firewall rule is now created at the time of cluster creation to make node
VMs accessible via SSH. This ensures that the Kubernetes proxy functionality
works.
Disabled the --source-image option in the v1beta1 API. Attempting to
run gcloud alpha container clusters create --source-image now returns an
error.
Removed the option to create clusters in the 172.16.0.0/12 private IP block.
July 24, 2015
Upgrade to Kubernetes v1 - Action Required
Users must upgrade their configuration files to the v1 Kubernetes API
before August 5th, 2015. This applies to any Beta Container Engine cluster
created before July 21st.
Google Container Engine will upgrade container cluster masters beginning on
August 5th, to use the v1 Kubernetes API. If you'd like to upgrade
prior, please sign up for an early upgrade.
This upgrade removes support for the v1beta3 API. All configuration files
must be formatted according to the v1 specification to ensure that your
cluster remains functional. The v1 API represents the production-ready set of
APIs for Kubernetes and Container Engine.
Some helpful resources are:
An upgrade script to convert your v1beta3 configuration files to v1.
If your configuration files already use the v1 specification, no action is
required.
July 15, 2015
Kubernetes v0.21.2 is the default version for new clusters.
Existing masters running versions 0.19.3 or higher will be upgraded to 0.21.2.
Customers should upgrade their container clusters at
their convenience. Clusters running versions older than 0.19.3 cannot be
updated.
The kubectl version is now 0.20.2.
July 10, 2015
Kubernetes v0.21.1 is the default version for new clusters.
The kubectl version is now 0.20.1.
Known issue:
The rolling-update command will fail when using kubectl v0.20.1 with
clusters running v0.19.3 of the Kubernetes API. To resolve the issue, specify --api-version=v1beta3 as a flag to the rolling-update command.
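For example, for a hypothetical replication controller named my-rc (the controller name and image are placeholders; only the --api-version=v1beta3 flag comes from the note above):

```shell
kubectl rolling-update my-rc --image=gcr.io/my-project/my-image:v2 --api-version=v1beta3
```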
The REST API returns a more accurate error message when the region is out of
quota.
gcloud container clusters create supports specifying disk size for
nodes with the --disk-size flag.
June 22, 2015
Google Container Engine is now in Beta.
Kubernetes master VMs are no longer created for new clusters. They are now run
as a hosted service. There is no Compute Engine instance charge for the
hosted master. Read more about pricing details.
Kubernetes v0.19.3 is the default version for new clusters.
For projects with default regional Compute Engine CPUs quota, container
clusters are limited to 3 per region.
Documentation updated to use the gcloud beta command group.
Documentation updated to use apiVersion: v1 in all samples.
Known issue:
kubectl exec is broken for cluster version 0.19.3.
June 10, 2015
Documentation updated to use v1beta3.
Kubernetes v0.18.2 is the default version for new clusters.
June 3, 2015
Kubernetes v0.18.0 is the default version for new clusters.
Clusters launched with 0.18.0 and above are deployed using Managed Instance
Groups.
New clusters can no longer be created at v0.16.0.
Fixed a race condition that could cause routes to be leaked on cluster
deletion.
Fail faster and with a helpful message if the project is lacking specific
resource quota to create a functioning cluster.
Google Cloud CLI:
The gcloud alpha container clusters create command always sets kubectl's
current context to the newly created cluster.
The clusters create and get-credentials commands look for and write kubectl configuration to a KUBECONFIG environment variable. This matches
the behavior of kubectl config * commands.
The gcloud alpha container kubectl command is disabled. Simply use kubectl instead.
May 22, 2015
Kubernetes v0.17.1 is the default version for new clusters.
Kubernetes v0.16.0 is still supported. However, new clusters can no longer be
created at Kubernetes v0.17.0 due to the bug listed below.
Fixes a bug that was preventing containers from accessing the Google Compute
Engine metadata service.
Kubernetes service DNS names are now suffixed with .<namespace>.svc.cluster.local instead of .<namespace>.kubernetes.local.
kubectl 0.17.0 notes:
Updated kubectl cluster-info to show v1beta3 addresses.
Added kubectl log --previous support to view the last terminated container log.
Added kubectl_label to custom functions in bash completion.
Changed IP to IP(S) in service columns for kubectl get.
Added TerminationGracePeriod field to PodSpec and a grace-period flag to kubectl stop.
May 13, 2015
Kubernetes v0.17.0 is the default version for new clusters.
New clusters can no longer be created at Kubernetes version 0.15.0.
Standalone kubectl works with Container Engine created clusters without
needing to set the KUBECONFIG env var.
gcloud alpha container kubectl is deprecated. The command still works, but
prints a warning with directions for using kubectl directly.
Added a new command, gcloud alpha container get-credentials. The command
fetches cluster auth and updates the local kubectl configuration.
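A typical sequence, with placeholder cluster and zone names (the flag spellings shown are illustrative, not a transcript of the alpha-era command):

```shell
gcloud alpha container get-credentials --cluster my-cluster --zone us-central1-a
kubectl get pods   # kubectl is now configured to talk to my-cluster
```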
The gcloud alpha container kubectl and clusters delete|describe commands print more
helpful error messages when the cluster cannot be found due to an incorrect
zone flag/default.
gcloud alpha container clusters create exits with a non-zero return code if
the cluster create succeeded but cert data could not be fetched.
Master VMs are now created with a data persistent disk to store important
cluster data, leaving the boot disk for the OS / software.
May 2, 2015
Kubernetes v0.16.0 is the default version for new clusters.
Clusters that don't have nginx will use bearer token auth instead of basic
auth.
KUBE_PROXY_TOKEN added to kube-env metadata.
April 22, 2015
A CIDR can now be requested during cluster creation when using the
Google Cloud CLI or the REST API. For the Google Cloud CLI, use the --container-ipv4-cidr flag. If not set, the server will choose a
CIDR for the cluster.
Standalone kubectl instructions are now available from gcloud alpha container kubectl --help.
When fetching cluster credentials after creating a cluster using the
Google Cloud CLI, you'll never have to enter the passphrase for your SSH
key more than once.
The gcloud alpha container clusters ... commands default to
human-readable (table) output.
April 16, 2015
Container Engine:
Kubernetes v0.15.0 is the default version for new clusters. v0.14.2 is still supported.
The Kubernetes v1beta3 API is now enabled for new clusters.
New clusters can no longer be created at Kubernetes version 0.13.2.
Google Cloud CLI:
The kubectl version is now v0.14.1.
The deprecated gcloud alpha container pods|services|replicationcontrollers commands have been removed. Use gcloud alpha container kubectl instead.
April 9, 2015
Container Engine:
Kubernetes v0.14.2 is the default version for new clusters.
New clusters can no longer be created at Kubernetes version 0.14.1.
Cluster creation is more reliable.
Clusters created via the Google Cloud console will pre-fill the cluster
name with a project-unique name instead of a zone-unique name.
Kubernetes v0.14.1 is the default version for new clusters.
New clusters can no longer be created at version 0.11.0.
Container Engine's cluster firewall no longer specifies target-tags. This
allows pods to make outgoing connections by default (in the private network).
Google Cloud CLI:
Clusters created by the Google Cloud CLI now automatically send logs to Google Cloud Logging unless explicitly disabled using the --no-enable-cloud-logging flag. Logs are visible in the logs section of the Google Cloud console once your
project has enabled the Google Cloud Logging API.
You can now access Container Engine clusters with standalone kubectl (i.e. without gcloud alpha container) after setting an environment
variable, which is printed after successful
cluster creation and/or the first time accessing a cluster with gcloud alpha container kubectl.
Gcloud will always try to fetch certificate files for the cluster if they are
missing. "WARNING: No certificate files found in..." will resolve itself on a
subsequent gcloud alpha container kubectl command run if the cluster is
healthy.
Known issue: container commands are included in the alpha component, but
the Kubernetes client (kubectl) is still installed with the preview component, so users will need both.
April 1, 2015
All Container Engine commands have moved from gcloud preview to gcloud alpha. Run gcloud components update alpha to install
this command group. Documentation has been updated to use the alpha commands.
March 25, 2015
Kubernetes v0.13.2 is the default version for new clusters.
The kubectl version is now v0.13.1.
Updated to container-vm-v20150317, which starts up more reliably.
The default boot disk size for cluster nodes has been increased from 10GB to
100GB.
February 25, 2015
Google Cloud CLI:
The kubectl wrapper commands
(gcloud preview container pods|services|replicationcontrollers) have been
deprecated in favor of using gcloud preview container kubectl directly.
Calling the deprecated commands prints the equivalent kubectl command.
The kubectl version has been bumped to 0.11.0.
Fixed a bug that prevented kubectl update with --patch from working.
The kubectl command now automatically tries refetching the configuration
if the command fails with a stale configuration error.
February 19, 2015
Google Container Engine:
Kubernetes v0.11.0 is the default version for new clusters.
Removed support for creating clusters at Kubernetes v0.9.2.
Nodes now use the container-vm-v20150129 image.
Google Cloud CLI:
Pods created with gcloud preview container pods create no longer bind to
a host port. As a result, the scheduler can assign more than one pod to each
host.
The version of kubectl used by the gcloud preview container kubectl command is 0.10.1.
February 12, 2015
Kubernetes v0.10.1 is the default version for new clusters.
Removed support for creating clusters at Kubernetes v0.10.0.
Improved API enablement flow and error messages when first visiting the
Container Engine page of the Google Cloud console.
February 5, 2015
Google Container Engine:
Kubernetes v0.10.0 is the default version for new clusters.
Removed support for creating clusters at Kubernetes version 0.8.1.
Google Cloud CLI:
The gcloud preview container kubectl command is upgraded to version 0.9.1:
kubectl create handles bulk creation from a file or directory.
The createall command has been removed.
Added the kubectl rollingupdate command, which runs controlled updates
of replicated pods.
Added the kubectl run-container command, which simplifies creation of an
(optionally replicated) pod from an image.
Added the kubectl stop command to cleanly shut down a replication
controller.
Added kubectl config ... commands for managing config for multiple
clusters/users. (Note: this is not yet compatible with gcloud preview
container kubectl.)
Kubernetes v0.9.2 is the default version for new clusters.
Removed support for creating clusters at v0.7.1. Existing clusters at this
version can still be used and deleted.
SkyDNS is supported for services
on clusters using v0.9.2 onwards.
January 21, 2015
Improved error messages during pod creation when the source image is invalid.
Fixed a bug affecting Compute Engine routes whose destRange fields are plain
IP addresses.
Improved the reliability of cluster creation when provisioning is slow.
January 15, 2015
Kubernetes v0.8.1 is the default version for newly created clusters. Our
v0.8.1 support includes changes on the 0.8 branch at 0.8.1.
Removed support for creating clusters at Kubernetes v0.8.0.
Existing clusters at this version can still be used and deleted.
Service accounts and auth scopes can be added to node instances at the time
of creation for all pods to use.
The command line interface now renders multiple error messages across
newlines and tabs, instead of using a comma separator.
Machine type information has been fixed in the cluster details page of the
Google Cloud console.
January 8, 2015
Kubernetes v0.8.0 is the default version for newly created clusters.
Kubernetes v0.7.1 is also supported. Refer to the Kubernetes release notes for information about each release. Our v0.7.1 support includes changes on the
0.7 branch at 0.7.1. Our v0.8.0 support includes changes in the 0.7.2 and
0.8.0 releases.
Removed support for creating clusters at Kubernetes v0.6.1 and v0.7.0.
Existing clusters at these versions can still be used and deleted.
The pods|services|replicationcontrollers create commands now validate
the resource type when creating with --config-file. This fixes the known
issue in the December 12, 2014 release.
December 19, 2014
Kubernetes v0.7.0 is the default version for newly created clusters.
Removed support for creating clusters at Kubernetes v0.4.4 and v0.5.5.
Existing clusters at these versions can still be used and deleted.
December 12, 2014
Known issues:
The pods|services|replicationcontrollers create commands do not validate
the resource type when creating with --config-file. The command creates
the resource specified in the configuration file, regardless of the command
group specified. For example, calling pods create and passing a service
configuration file creates a service instead of failing.
Updates:
Kubernetes v0.6.1 is the default version for newly created clusters.
Google Container Engine now reserves a /14 CIDR range for new clusters.
Previously, a /16 was reserved.
New clusters created with Kubernetes v0.4.4 now use the
backports-debian-7-wheezy-v20141108 image. This replaces the previous
backports-debian-7-wheezy-v20141021 image.
New clusters created with Kubernetes v0.5.5 or v0.6.1 now use the
container-vm image, instead of the Debian backports image.
The Service Operations documentation has been updated to describe the createExternalLoadBalancer option.
A new gcloud preview container kubectl command has been added to the CLI.
This is a pass-through command to call the native Kubernetes kubectl client with arbitrary commands, using the Google Cloud CLI to handle authentication.
The --cluster-name flag in all CLI commands has been renamed to --cluster.
New describe and list support for cluster operations.
December 5, 2014
The syntax for creating a pod with the Google Container Engine command line
interface has changed. The name of the pod is now specified as the value of
a --name flag. See the Pod Operations page for details.
Clusters and Operations returned by the API now include a selfLink field and
Operations also include a targetLink field, which contain the full URL of
the given resource.
Added support for Kubernetes v0.4.4 and Kubernetes v0.5.5. The default
version is now v0.4.4. Refer to the Kubernetes release notes for information about each release. Our v0.4.4 support includes changes on the
0.4 branch from 0.4.2 through 0.4.4. Our v0.5.5 support includes changes on
the 0.5 branch through 0.5.5.
Removed support for creating clusters at Kubernetes v0.4.2. Existing clusters
at this version can still be used and deleted.
November 20, 2014
Updates to the gcloud preview container commands:
New error message that catches cluster creation failure due to a missing default network.
Specify the default zone and cluster:
gcloud config set compute/zone ZONE
gcloud config set container/cluster CLUSTER_NAME
There is currently a bug preventing the default cluster name from working
if the local configuration cache is missing. If you see a stack trace
when omitting --cluster-name, repeat the command once with the flag
specified. Subsequent commands can omit the flag.
The default cluster name is set to the value of the new cluster when a
cluster is successfully created.
The gcloud preview container clusters list command lists clusters across
all zones if no --zone flag is specified. The list command ignores any
default zone that may be set.
Cluster error state information is available in the Google Cloud console.
November 4, 2014
(Updated November 10, 2014: Added two additional known issues with Google Container Engine.)
Google Container Engine is a new service that creates and manages Kubernetes clusters for Google Cloud users.
Container Engine is currently in Alpha state; it is suitable for
experimentation and is intended to provide an early view of the production
service, but customers are strongly encouraged not to run production workloads
on it.
The underlying open source Kubernetes project is being actively developed by
the community and is not considered ready for production use. This version of
Google Container Engine is based on Kubernetes public build v0.4.2.
While the Kubernetes community is working hard to address
community-reported issues as they are reported, there are some known issues in
the v0.4.2 release that will be addressed in v0.5 and that will be incorporated
into Google Container Engine in the coming days.
Known issues with the Kubernetes 0.4.2 release
(Issue #1730) External health checks that use in-container scripts (exec) do not work. Process, HTTP, and TCP health checks work properly.
Health checks that use in-container shell execution are not functioning;
they always report Unknown. This is a result of the transition to docker exec introduced in Docker version 1.3. At this time process-level
health checks, TCP socket health checks, and HTTP level health checks are
functional. This has been addressed in v0.5 and will be available shortly.
(Issue #1712) Pod update operations fail.
In v0.4.2, pod update functionality is not implemented, and a call to the
update API returns an unimplemented error. Pods must be updated by tearing them
down and recreating them. This will be implemented in v0.5.
(Issue #974) Silent failure on internal service port number collision: Each Kubernetes service needs a unique network port assignment. Currently if
you try to create a second service with a port number that conflicts with an
existing service, the operation succeeds but the second service will not
receive network traffic. This has been fixed, and will be available in v0.5.
(Issue #1161) External service load balancing. The current Kubernetes design includes
a model that does a 1:1 mapping between an externally-exposed port number at
the cluster level, and a service. This means that only a single external
service can exist on a given port. For now this is a hard limitation of the
service.
Known issues with Google Container Engine
In addition to issues with the underlying Kubernetes project, there are some
known issues with the Google Container Engine tools and API that will be
addressed in subsequent releases.
Kubecfg binary conflicts: During the Google Cloud SDK
installation, kubecfg v0.4.1 is installed and placed on the path by the
Google Cloud CLI. Depending on your $PATH variable, this version may
conflict with other installed versions from the open source Kubernetes
product.
Containers are assigned private IPs in the range 10.40.0.0/16 to 10.239.0.0/16.
If you have changed your default network settings from 10.240.0.0/16,
clusters may create successfully, but fail during operation.
All Container Engine nodes are started with and require project-level read-write scope. This is temporarily required to support the dynamic
mounting of PD-based volumes to nodes. In future releases nodes will revert
to default read-only project scope.
Windows is not currently supported. The gcloud preview container command is built on top of the Kubernetes client's kubecfg binary, which is
not yet available on Windows.
The default network is required. Container Engine relies on the existence
of the default network, and tries to create routes that use it. If you don't
have a default network, Container Engine cluster creation will fail.
To recreate it:
Go to the Networks page in
the Google Cloud console and select your project.
Click New network.
Enter the following values:
Name: default
Address range: 10.240.0.0/16
Gateway: 10.240.0.1
Click Create.
Next, recreate the firewall rules:
Click default in the All networks list.
Click Create new next to Firewall rules.
Enter the following values:
Name: default-allow-internal
Source IP ranges: 10.240.0.0/16
Protocols & ports: tcp:1-65535; udp:1-65535; icmp
Click Create.
Create a second firewall rule with the following values:
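The console steps above can also be approximated from the command line. The sketch below uses present-day gcloud compute syntax, which differs from the tooling available at the time; the values mirror the first firewall rule described above:

```shell
# Recreate a legacy-style default network covering 10.240.0.0/16.
gcloud compute networks create default --range 10.240.0.0/16

# Re-allow all internal traffic within the network.
gcloud compute firewall-rules create default-allow-internal \
  --network default \
  --source-ranges 10.240.0.0/16 \
  --allow tcp:1-65535,udp:1-65535,icmp
```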
Last updated 2026-04-08 UTC.