Quickstart

This topic shows you how to create a workload on GKE on AWS and expose it internally to your cluster.

Before you begin

Before you start using GKE on AWS, make sure you have performed the following tasks:

  • Install a management service.
  • Create a user cluster.
  • From your anthos-aws directory, use anthos-gke to switch context to your user cluster.
    cd anthos-aws 
    env HTTPS_PROXY=http://localhost:8118 \
      anthos-gke aws clusters get-credentials CLUSTER_NAME 
    
    Replace CLUSTER_NAME with your user cluster name.

You can perform these steps with kubectl, or with the Google Cloud console if you have authenticated with Connect. If you are using the Google Cloud console, skip to Launch an NGINX Deployment.

To connect to your GKE on AWS resources, perform the following steps. Select whether you have an existing AWS VPC (or a direct connection to your VPC) or created a dedicated VPC when creating your management service.

Existing VPC

If you have a direct or VPN connection to an existing VPC, omit the line env HTTPS_PROXY=http://localhost:8118 \ from the commands in this topic.
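
For example, with a direct or VPN connection the cluster connectivity check used later in this topic becomes the plain kubectl command:

     kubectl cluster-info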

Dedicated VPC

When you create a management service in a dedicated VPC, GKE on AWS includes a bastion host in a public subnet.

To connect to your management service, perform the following steps:

  1. Change to the directory with your GKE on AWS configuration. You created this directory when installing the management service.

    cd anthos-aws 
    
  2. To open the tunnel, run the bastion-tunnel.sh script. The tunnel forwards to localhost:8118.

    To open a tunnel to the bastion host, run the following command:

     ./bastion-tunnel.sh -N

    Messages from the SSH tunnel appear in this window. When you are ready to close the connection, stop the process by using Control+C or closing the window.

  3. Open a new terminal and change into your anthos-aws directory.

    cd anthos-aws 
    
  4. Check that you're able to connect to the cluster with kubectl.

     env HTTPS_PROXY=http://localhost:8118 \
       kubectl cluster-info

    The output includes the URL for the management service API server.
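
The exact wording varies by kubectl and Kubernetes version, but the output looks roughly like the following, where MANAGEMENT_API_SERVER_ENDPOINT stands for the address reported for your installation:

     Kubernetes control plane is running at https://MANAGEMENT_API_SERVER_ENDPOINT
     To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.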

Launch an NGINX Deployment

In this section, you create a Deployment of the NGINX web server named nginx-1.

kubectl

  1. Use kubectl create to create the Deployment. A manifest-based sketch of the same Deployment appears after these steps.

     env HTTPS_PROXY=http://localhost:8118 \
       kubectl create deployment --image nginx nginx-1
  2. Use kubectl to get the status of the Deployment. Note the Deployment's NAME.

     env HTTPS_PROXY=http://localhost:8118 \
       kubectl get deployment
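
If you prefer a declarative workflow, the following is a minimal sketch of a manifest that should be roughly equivalent to the kubectl create deployment command above. The single replica and the app: nginx-1 label mirror what kubectl create deployment generates by default; adjust them as needed.

     cat <<EOF | env HTTPS_PROXY=http://localhost:8118 kubectl apply -f -
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: nginx-1
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: nginx-1
       template:
         metadata:
           labels:
             app: nginx-1
         spec:
           containers:
           - name: nginx
             image: nginx
     EOF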
    

Console

To launch an NGINX Deployment with the Google Cloud console, perform the following steps:

  1. Visit the GKE Workloads menu in Google Cloud console.

    Visit the Workloads menu

  2. Click Deploy.

  3. Under Edit container, select Existing container image to choose a container image available from Container Registry. Fill Image path with the container image that you want to use and its version. For this quickstart, use nginx:latest.

  4. Click Done, and then click Continue. The Configuration screen appears.

  5. You can change your Deployment's Application name and Kubernetes Namespace. For this quickstart, you can use the application name nginx-1 and the namespace default.

  6. From the Cluster drop-down menu, select your user cluster. By default, your first user cluster is named cluster-0.

  7. Click Deploy. GKE on AWS launches your NGINX Deployment. The Deployment details screen appears.

Exposing your pods

This section shows how to do one of the following:

  • Expose your Deployment internally in your cluster and confirm it is available with kubectl port-forward.

  • Expose your Deployment from the Google Cloud console to the addresses allowed by your node pool security group.

kubectl

  1. Expose port 80 of the Deployment to the cluster with kubectl expose. A manifest-level sketch of the resulting Service follows these steps.

     env HTTPS_PROXY=http://localhost:8118 \
       kubectl expose deployment nginx-1 --port=80

    The Deployment is now accessible from within the cluster.

  2. Forward port 80 on the Deployment to port 8080 on your local machine with kubectl port-forward.

     env HTTPS_PROXY=http://localhost:8118 \
       kubectl port-forward deployment/nginx-1 8080:80
  3. Connect to http://localhost:8080 with curl or your web browser. The default NGINX web page appears.

     curl http://localhost:8080
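
The kubectl expose command in step 1 creates a ClusterIP Service that selects the Deployment's Pods. A minimal manifest sketch that should produce a roughly equivalent Service, assuming the default app: nginx-1 label set by kubectl create deployment, looks like this:

     cat <<EOF | env HTTPS_PROXY=http://localhost:8118 kubectl apply -f -
     apiVersion: v1
     kind: Service
     metadata:
       name: nginx-1
     spec:
       type: ClusterIP
       selector:
         app: nginx-1
       ports:
       - port: 80
         targetPort: 80
     EOF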
    

Console

  1. Visit the GKE Workloads menu in Google Cloud console.

    Visit the Workloads menu

  2. From the Deployment details screen, click Expose. The Expose a deployment screen appears.

  3. In the Port mapping section, leave the default port (80), and click Done.

  4. For Service type, select Load balancer. For more information on other options, see Publishing services (ServiceTypes) in the Kubernetes documentation.

  5. Click Expose. The Service details screen appears. GKE on AWS creates a Classic Elastic Load Balancer for the Service. A kubectl equivalent is sketched after these steps.

  6. Click on the link for External Endpoints. If the load balancer is ready, the default NGINX web page appears.
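
As a rough kubectl equivalent of this console flow (an illustrative sketch, not necessarily byte-for-byte what the console creates), you could expose the Deployment through a LoadBalancer-type Service named nginx-1-service, the default name the console uses:

     env HTTPS_PROXY=http://localhost:8118 \
       kubectl expose deployment nginx-1 --name nginx-1-service --type LoadBalancer --port 80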

View your Deployment on Google Cloud console

If your cluster is connected to the Google Cloud console, you can view your Deployment in the GKE Workloads page. To view your workload, perform the following steps:

  1. In your browser, visit the Google Kubernetes Engine Workloads page.

    Visit the Google Kubernetes Engine Workloads page

    The list of Workloads appears.

  2. Click the name of your workload, nginx-1. The Deployment details screen appears.

  3. From this screen, you can get details on your Deployment; view and edit YAML configuration; and take other Kubernetes actions.

For more information on options available from this page, see Deploying a stateless application in the GKE documentation.

Cleanup

To delete your NGINX Deployment, use kubectl delete or the Google Cloud console.

kubectl

 env HTTPS_PROXY=http://localhost:8118 \
   kubectl delete service nginx-1 && \
 env HTTPS_PROXY=http://localhost:8118 \
   kubectl delete deployment nginx-1
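
Optionally, verify that both resources are gone. Once the deletions have completed, the following lookup should report that nothing named nginx-1 is found for either resource type:

 env HTTPS_PROXY=http://localhost:8118 \
   kubectl get deployment,service nginx-1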

Console

  1. Visit the Services and Ingress menu in the Google Cloud console.

    Visit the Services and Ingress page

  2. Find your NGINX Service and click its Name. By default, the name is nginx-1-service. The Service details screen appears.

  3. Click Delete and confirm that you want to delete the Service. GKE on AWS deletes the load balancer.

  4. Visit the Google Kubernetes Engine Workloads page.

    Visit the Google Kubernetes Engine Workloads page

    The list of Workloads appears.

  5. Click the name of your workload, nginx-1. The Deployment details screen appears.

  6. Click Delete and confirm that you want to delete the Deployment. GKE on AWS deletes the Deployment.

What's next?

Create an internal or external load balancer using one of the following Services:

You can use other types of Kubernetes Workloads with GKE on AWS. See the GKE documentation for more information on Deploying workloads.
