Overview
Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machines (specifically Compute Engine instances) grouped to form a container cluster.
In this lab, you get hands-on practice with container creation and application deployment with GKE.
What you'll learn to do
In this lab you will learn how to:
- Deploy an application to the GKE cluster
Cluster orchestration with Google Kubernetes Engine
Google Kubernetes Engine (GKE) clusters are powered by the Kubernetes open source cluster management system. Kubernetes provides the mechanisms through which you interact with your container cluster. You use Kubernetes commands and resources to deploy and manage your applications, perform administrative tasks, set policies, and monitor the health of your deployed workloads.
Kubernetes draws on the same design principles that run popular Google services and provides the same benefits: automatic management, monitoring and liveness probes for application containers, automatic scaling, rolling updates, and more. When you run your applications on a container cluster, you're using technology based on Google's 10+ years of experience with running production workloads in containers.
Kubernetes on Google Cloud
When you run a GKE cluster, you also gain the benefit of advanced cluster management features that Google Cloud provides. These include:
- Load balancing for Compute Engine instances
- Node pools to designate subsets of nodes within a cluster for additional flexibility
- Automatic scaling of your cluster's node instance count
- Automatic upgrades for your cluster's node software
- Node auto-repair to maintain node health and availability
- Logging and monitoring with Cloud Monitoring for visibility into your cluster
Now that you have a basic understanding of Kubernetes, you will learn how to deploy a containerized application with GKE in less than 15 minutes. Follow the steps below to set up your lab environment.
Setup and requirements
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Pre-configured resource:
- You have a pre-created GKE cluster for this lab named lab-cluster.
Task 1. Set a default compute zone
Your compute zone is an approximate regional location in which your clusters and their resources live. For example, us-central1-a is a zone in the us-central1 region.
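If you are unsure which zones exist in your region, you can list them first. This is an optional check rather than a graded lab step, and us-central1 is used only as an example region; substitute the region assigned to your lab:
gcloud compute zones list --filter="region:us-central1"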
In your cloud terminal, run the following commands.
Set the default compute region:
gcloud config set compute/region "REGION"
Expected output:
Updated property [compute/region].
Set the default compute zone:
gcloud config set compute/zone "ZONE"
Expected output:
Updated property [compute/zone].
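Optionally, you can confirm that both defaults were stored by reading back the active gcloud configuration:
gcloud config get-value compute/region
gcloud config get-value compute/zone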
Task 2. Get authentication credentials for the GKE cluster
You need authentication credentials to interact with your cluster.
Use the following gcloud command to retrieve the cluster credentials:
gcloud container clusters get-credentials lab-cluster
Expected output:
Fetching cluster endpoint and auth data.
kubeconfig entry generated for lab-cluster.
This command is used to authenticate your kubectl client to a specific Google Kubernetes Engine (GKE) cluster so that you can interact with the cluster. This command fetches the cluster's credentials and updates your kubectl configuration with the necessary information.
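As an optional sanity check (not a graded step), you can confirm that kubectl is now pointed at lab-cluster and can reach its nodes:
kubectl config current-context
kubectl get nodes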
Task 3. Deploy an application to the GKE cluster
You can now deploy a containerized application to the cluster. For this lab, you'll deploy hello-app on your cluster.
GKE uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the internet.
To create a new Deployment named hello-server from the hello-app container image, run the following kubectl create command:
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
Expected output:
deployment.apps/hello-server created
This Kubernetes command creates a deployment object that represents hello-server. In this case, --image specifies a container image to deploy. The command pulls the example image from a Container Registry bucket. gcr.io/google-samples/hello-app:1.0 indicates the specific image version to pull. If a version is not specified, the latest version is used.
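Before moving on, you can optionally confirm that the Deployment and its Pod are running (the Pod name will include a generated suffix):
kubectl get deployments
kubectl get pods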
Click Check my progress to verify the objective.
Create a new Deployment: hello-server
To create a Kubernetes Service, which is a Kubernetes resource that lets you expose your application to external traffic, run the following kubectl expose command:
kubectl expose deployment hello-server --type=LoadBalancer --port 8080
In this command:
- --port specifies the port that the container exposes.
- --type=LoadBalancer creates a Compute Engine load balancer for your container.
Expected output:
service/hello-server exposed
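If you want more detail than the summary shown in the next step, kubectl describe prints the Service's selector, ports, and load-balancer events; this is optional:
kubectl describe service hello-server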
To inspect the hello-server Service, run kubectl get:
kubectl get service
Expected output:
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
hello-server   LoadBalancer   10.39.244.36   35.202.234.26   8080:31991/TCP   65s
kubernetes     ClusterIP      10.39.240.1    <none>          443/TCP          5m13s
Note: It might take a minute for an external IP address to be generated. Run the previous command again if the EXTERNAL-IP column status is pending.
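Alternatively, rather than re-running the command manually, you can watch the Service until the external IP is assigned (press Ctrl+C to stop watching):
kubectl get service hello-server --watch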
To view the application from your web browser, open a new tab and enter the following address, replacing [EXTERNAL-IP] with the EXTERNAL-IP value for hello-server:
http://[EXTERNAL-IP]:8080
Expected output: The browser tab displays the message Hello, world! as well as the version and hostname.
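You can also check the endpoint directly from the cloud terminal with curl, again replacing [EXTERNAL-IP] with the address shown by kubectl get service:
curl http://[EXTERNAL-IP]:8080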
Click Check my progress to verify the objective.
Create a Kubernetes Service
Solution of Lab
Quick run (fetch and execute the solution script):
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/GSP821/lab.sh
source lab.sh
