<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[David Nguyen]]></title><description><![CDATA[A passionate full-stack developer from VIETNAM.
]]></description><link>https://eplus.dev</link><generator>RSS for Node</generator><lastBuildDate>Thu, 30 Apr 2026 08:29:51 GMT</lastBuildDate><atom:link href="https://eplus.dev/rss.xml?after=Njk5ZmI0NjFjOTAxNWMzN2Y2Y2ZmZGYyXzIwMjYtMDItMjZUMDI6NDg6MDEuNzIxWg==" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><atom:link rel="self" href="https://eplus.dev/rss.xml?after=Njk5ZmI0NjFjOTAxNWMzN2Y2Y2ZmZGYyXzIwMjYtMDItMjZUMDI6NDg6MDEuNzIxWg==" type="application/rss+xml"/><atom:link rel="first" href="https://eplus.dev/rss.xml"/><atom:link rel="next" href="https://eplus.dev/rss.xml?after=Njk4NmRjMzU4MjYyOGViMDljODczYTg2XzIwMjYtMDItMDdUMDY6MzE6MTcuMDcyWg=="/><item><title><![CDATA[Create Firewall Rule to Enable SSH Access (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario

Your colleague created a custom VPC network with a compute instance in that network. You need to connect to the compute instance through SSH, but you are facing an error while connecting. After investigating, you discovered the issue is with the firewall: at the moment there is no firewall rule that allows SSH to this instance.

Your task is to create a firewall rule so that you can connect to the instance through SSH.


Click Check my progress to verify the objective.

Solution of Lab


curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/create-firewall-rule-to-enable-ssh-access-solution/lab.sh
source lab.sh

Script Alternative
VPC=$(gcloud compute instances describe $(gcloud compute instances list --format="value(name)") --zone=$(gcloud compute instances list --format="value(zone)") --format="value(networkInterfaces[0].network.basename())"); gcloud compute firewall-rules create allow-ssh --network=$VPC --allow=tcp:22 --source-ranges=0.0.0.0/0 --target-tags=http-server
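The one-liner packs three gcloud lookups into nested command substitutions. As a purely local illustration of what it assembles (the network name below is a hypothetical stand-in; the real script derives it from gcloud):

```shell
# Stand-in for the value the nested gcloud describe/list calls would return
VPC="custom-vpc1"   # hypothetical network name, for illustration only

# The firewall command the one-liner assembles from that value
CMD="gcloud compute firewall-rules create allow-ssh --network=$VPC --allow=tcp:22 --source-ranges=0.0.0.0/0"
echo "$CMD"
```

Breaking it apart this way also makes it easier to swap in a known network name by hand if the nested lookups fail.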

]]></description><link>https://eplus.dev/create-firewall-rule-to-enable-ssh-access-solution</link><guid isPermaLink="true">https://eplus.dev/create-firewall-rule-to-enable-ssh-access-solution</guid><category><![CDATA[Create Firewall Rule to Enable SSH Access]]></category><category><![CDATA[Create Firewall Rule to Enable SSH Access (Solution)]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Wed, 25 Feb 2026 09:27:45 GMT</pubDate></item><item><title><![CDATA[Modify VM Instance for Cost Optimization (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario
You work as a cloud administrator for a technology company that uses Google Cloud extensively for its operations. Today, you have been tasked with modifying a virtual machine (VM) instance to align with updated resource requirements by moving it to a lower-cost General purpose machine type.

Currently, you have an existing high-cost VM instance named Instance_name. Your task is to update its machine type to e2-medium, a lower-cost option suitable for the workload.

Click Check my progress to verify the objective.
Update the Machine type of the VM instance.

Solution of Lab
https://www.youtube.com/watch?v=BlPbr1A1dOw
We gratefully acknowledge Google's learning resources that make cloud education accessible.
export VM_NAME="lab-vm"
export ZONE="us-east4-c"  # Replace with your actual zone

gcloud compute instances stop $VM_NAME --zone $ZONE

gcloud compute instances set-machine-type $VM_NAME \
  --machine-type e2-medium \
  --zone $ZONE

gcloud compute instances start $VM_NAME --zone $ZONE

If you get an error, run the following commands instead:

gcloud auth list

export ZONE=$(gcloud compute project-info describe --format="value(commonInstanceMetadata.items[google-compute-default-zone])")

export PROJECT_ID=$(gcloud config get-value project)

gcloud config set compute/zone "$ZONE"

gcloud compute instances stop lab-vm --zone="$ZONE"

sleep 10

gcloud compute instances set-machine-type lab-vm --machine-type e2-medium --zone="$ZONE"

sleep 10

gcloud compute instances start lab-vm  --zone="$ZONE"

]]></description><link>https://eplus.dev/modify-vm-instance-for-cost-optimization-solution</link><guid isPermaLink="true">https://eplus.dev/modify-vm-instance-for-cost-optimization-solution</guid><category><![CDATA[Modify VM Instance for Cost Optimization (Solution)]]></category><category><![CDATA[Modify VM Instance for Cost Optimization]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Tue, 24 Feb 2026 02:27:21 GMT</pubDate></item><item><title><![CDATA[Docker Essentials: Container Networking - gem-docker-networking]]></title><description><![CDATA[Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

Click Activate Cloud Shell
 
 at the top of the Google Cloud console.


When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

(Optional) You can list the active account name with this command:

gcloud auth list


Click Authorize.

Your output should now look like this:


Output:
ACTIVE: *
ACCOUNT: student-01-xxxxxxxxxxxx@qwiklabs.net

To set the active account, run:
    $ gcloud config set account `ACCOUNT`


(Optional) You can list the project ID with this command:

gcloud config list project

Output:
[core]
project = &lt;project_ID&gt;

Example output:
[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.
Overview
This lab provides a practical exploration of Docker networking. You will learn how containers communicate with each other and the outside world using various networking modes. You'll also learn how to create custom networks and control container communication. We will use Artifact Registry to host the container images used in this lab.
Task 1. Setting up the Environment
In this task, you will configure your environment and pull the necessary images from Artifact Registry.

Set your Project ID to qwiklabs-gcp-00-192ff2ed31f3.

gcloud config set project qwiklabs-gcp-00-192ff2ed31f3

Note:This command sets your active project identity.

Set your default region to us-west1

gcloud config set compute/region us-west1

Note:This command sets your active compute region.

Enable the Artifact Registry API.

gcloud services enable artifactregistry.googleapis.com

Note:Enables the Artifact Registry service.

Create a Docker repository in Artifact Registry. Replace lab-registry with a name for your repository. It must be unique within the specified region.

gcloud artifacts repositories create lab-registry --repository-format=docker --location=us-west1 --description="Docker repository"

Note:Creates a Docker repository in Artifact Registry.

Configure Docker to authenticate with Artifact Registry.

gcloud auth configure-docker us-west1-docker.pkg.dev

Note:This command configures Docker to use your Google Cloud credentials for authentication with Artifact Registry.

Pull the alpine/curl image from Docker Hub and tag it for your Artifact Registry.

docker pull alpine/curl &amp;&amp;  docker tag alpine/curl us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest

Note:This will pull the image from docker hub and tag it for Artifact Registry.

Push the alpine/curl image to Artifact Registry.

docker push us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest

Note:This command pushes the tagged image to your Artifact Registry repository.

Pull the nginx:latest image from Docker Hub and tag it for your Artifact Registry.

docker pull nginx:latest &amp;&amp; docker tag nginx:latest us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/nginx:latest

Note:This will pull the image from docker hub and tag it for Artifact Registry.

Push the nginx:latest image to Artifact Registry.

docker push us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/nginx:latest

Note:This command pushes the tagged image to your Artifact Registry repository.
Task 2. Exploring Default Bridge Network
This task explores the default bridge network Docker creates. You will run containers and observe their communication within this network.

Run container1 using the alpine/curl image.

docker run -d --name container1 us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest sleep infinity


Run container2 using the alpine/curl image.

docker run -d --name container2 us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest sleep infinity

Note:This starts two containers in detached mode. The sleep infinity command keeps the containers running.

Inspect the default bridge network.

docker network inspect bridge

Note:This shows details of the bridge network, including connected containers and IP addresses.

From container1, try to ping container2 using its name. Docker's embedded DNS resolves container names only on user-defined networks, not on the default bridge.

docker exec -it container1 ping container2

Note:This executes the ping command within container1, targeting container2. The default bridge network does not provide DNS resolution, so the ping command cannot resolve the container name and is expected to fail.

Stop and remove container2.

docker stop container2 &amp;&amp; docker rm container2


Restart container2 running as an HTTP server.

docker run -d --name container2 -p 8080:80 us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/nginx:latest

Note:Starts a new container2 running nginx, publishing host port 8080 to container port 80.

From container1, use curl to make an HTTP request to container2.

docker exec -it container1 curl container2:8080

Note:Attempts a curl request from container1 to container2. The default bridge network does not provide DNS resolution, so the curl command cannot resolve the container name and is expected to fail.
Task 3. Creating and Using Custom Networks
This task demonstrates how to create a custom network which supports DNS and connect containers to it, providing more control over network configuration.

Create a new network named my-net.

docker network create my-net

Note:Creates a new Docker network named my-net.

Run container3, connecting it to the my-net network.

docker run -d --name container3 --network my-net us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest sleep infinity


Run container4, connecting it to the my-net network.

docker run -d --name container4 --network my-net us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest sleep infinity

Note:Starts two containers connected to the my-net network.

Inspect the my-net network to see the connected containers and their IP addresses.

docker network inspect my-net

Note:Displays details about the my-net network.

From container3, ping container4 using its name. Unlike the default bridge, custom networks provide automatic DNS-based name resolution.

docker exec -it container3 ping container4

Note:Tests connectivity between containers within my-net.

Stop and remove container4.

docker stop container4 &amp;&amp; docker rm container4


Re-create container4 on my-net, publishing host port 8081.

docker run -d --name container4 --network my-net -p 8081:80 us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/nginx:latest


From container3, use curl to make an HTTP request to container4.

docker exec -it container3 curl container4:80

Note:Tests HTTP connectivity from container3 to container4 over my-net; name resolution succeeds on the custom network.

Stop and remove container4.

docker stop container4 &amp;&amp; docker rm container4

Task 4. Publishing Ports and Accessing Containers from the Host
Learn how to publish container ports and access containerized services from the host machine.

Run an nginx container, publishing port 80 to the host's port 8080.

docker run -d --name container4 -p 8080:80 us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/nginx:latest

Note:Publishes port 80 of the container to port 8080 on the host.

Access the nginx service from the host machine using curl.

curl localhost:8080

Note:This command sends an HTTP request to the published port on the host machine.

Use docker port to check the port mapping.

docker port container4 80

Note:This command shows the mapping for port 80 of the container.
Task 5. Cleaning Up
Remove the created containers and networks.

Stop all containers.

docker stop container1 container2 container3 container4


Remove all containers.

docker rm container1 container2 container3 container4

Note:This stops and removes the containers created in the previous steps.

Remove the my-net network.

docker network rm my-net

Note:This removes the custom network.

Solution of Lab
https://www.youtube.com/watch?v=c_w7Utw7l50
 


💡
The lab will automatically complete in approximately 5 minutes. Just sit tight and let it finish 👍



]]></description><link>https://eplus.dev/docker-essentials-container-networking-gem-docker-networking-1</link><guid isPermaLink="true">https://eplus.dev/docker-essentials-container-networking-gem-docker-networking-1</guid><category><![CDATA[Docker Essentials: Container Networking - gem-docker-networking]]></category><category><![CDATA[Docker Essentials: Container Networking]]></category><category><![CDATA[gem-docker-networking]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 09:02:28 GMT</pubDate></item><item><title><![CDATA[Docker Essentials: Containers and Artifact Registry - gem-docker-basics]]></title><description><![CDATA[Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

Click Activate Cloud Shell


at the top of the Google Cloud console.


When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

(Optional) You can list the active account name with this command:

gcloud auth list


Click Authorize.

Your output should now look like this:


Output:
ACTIVE: *
ACCOUNT: student-01-xxxxxxxxxxxx@qwiklabs.net

To set the active account, run:
    $ gcloud config set account `ACCOUNT`


(Optional) You can list the project ID with this command:

gcloud config list project

Output:
[core]
project = &lt;project_ID&gt;

Example output:
[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.
Overview
This lab provides a hands-on introduction to essential Docker operations, including building, running, managing, and publishing Docker containers. You will learn how to containerize a simple application, interact with the container, and push the resulting image to Google Artifact Registry. This lab assumes familiarity with basic Linux commands and Docker concepts.
Task 1. Setting up your environment and Artifact Registry
In this task, you'll configure your environment, enable the necessary services, and create an Artifact Registry repository to store your Docker images.

Set your Project ID:

gcloud config set project qwiklabs-gcp-04-3dba7879dc58

Note:This configures the gcloud CLI to use your project.

Enable Artifact Registry API

gcloud services enable artifactregistry.googleapis.com

Note:This command enables the Artifact Registry API for your project, allowing you to create and manage repositories.

Create an Artifact Registry Repository in region: us-central1

gcloud artifacts repositories create my-docker-repo \
    --repository-format=docker \
    --location=us-central1 \
    --description="My Docker image repository"

Note:Creates a Docker repository in Artifact Registry named my-docker-repo.

Configure Docker to authenticate with Artifact Registry:

gcloud auth configure-docker us-central1-docker.pkg.dev

Note:Authenticates Docker with Artifact Registry for the specified region. This allows you to push and pull images.
Task 2. Building a Docker Image
Here, you will create a simple 'Hello World' application and build a Docker image for it using a Dockerfile.

Create a directory for your application:

mkdir myapp &amp;&amp; cd $_

Note:Creates a new directory named myapp and navigates into it.

Create a simple app.py file:

cat &gt; app.py &lt;&lt;EOF
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, Docker!\n"

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=8080)
EOF

Note:Creates a simple Flask application that returns 'Hello, Docker!'. This will be our application.

Create a requirements.txt file:

cat &gt; requirements.txt &lt;&lt;EOF
Flask
EOF

Note:Specifies the dependencies for your application (Flask).

Create a Dockerfile:

cat &gt; Dockerfile &lt;&lt;EOF
FROM python:3.9-slim-buster
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
EOF

Note:Defines the steps to build your Docker image. It uses a Python base image, installs dependencies, copies the application code, and specifies the command to run the application.

Build the Docker image. Replace us-central1 and qwiklabs-gcp-04-3dba7879dc58 with your region and project ID if they differ:

docker build -t us-central1-docker.pkg.dev/qwiklabs-gcp-04-3dba7879dc58/my-docker-repo/hello-docker:latest .

Note:Builds the Docker image using the Dockerfile in the current directory. It tags the image with the Artifact Registry repository URL.
Task 3. Running and Testing the Docker Container
In this task, you will run the Docker image you built and test it to ensure it's working correctly.

Run the Docker container:

docker run -d -p 8080:8080 us-central1-docker.pkg.dev/qwiklabs-gcp-04-3dba7879dc58/my-docker-repo/hello-docker:latest

Note:Runs the Docker image in detached mode (`-d`) and maps port 8080 on the host to port 8080 in the container. You may need to configure firewall rules to allow external traffic on port 8080.

Check if the container is running:

docker ps

Note:Lists the currently running Docker containers.

Test the application:

curl http://localhost:8080

Note:Sends an HTTP request to the application running in the container. You should see 'Hello, Docker!' in the output.

Stop the Docker container:

docker stop $(docker ps -q)

Note:Stops all running Docker containers. docker ps -q returns only the container IDs.
Task 4. Pushing the Image to Artifact Registry
Now that you have a working image, you will push it to your Artifact Registry repository.

Push the Docker image. Replace us-central1 and qwiklabs-gcp-04-3dba7879dc58 with your region and project ID if they differ:

docker push us-central1-docker.pkg.dev/qwiklabs-gcp-04-3dba7879dc58/my-docker-repo/hello-docker:latest

Note:Pushes the Docker image to the Artifact Registry repository. This makes the image available for others to use.
Task 5. Cleaning Up
Remove local artifacts to ensure a clean environment.

Remove the application directory:

cd .. &amp;&amp; rm -rf myapp

Note:Removes the myapp directory and all its contents.

Solution of Lab
https://www.youtube.com/watch?v=qy-rVvwVBR0

💡
The lab will automatically complete in approximately 5 minutes. Just sit tight and let it finish 👍


]]></description><link>https://eplus.dev/docker-essentials-containers-and-artifact-registry-gem-docker-basics-1</link><guid isPermaLink="true">https://eplus.dev/docker-essentials-containers-and-artifact-registry-gem-docker-basics-1</guid><category><![CDATA[Docker Essentials: Containers and Artifact Registry - gem-docker-basics]]></category><category><![CDATA[Docker Essentials: Containers and Artifact Registry]]></category><category><![CDATA[gem-docker-basics]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 08:58:43 GMT</pubDate></item><item><title><![CDATA[Create Custom VPC with Subnets Configuration (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario

You have an existing project with the default VPC. Following VPC best practices, you have decided to move to a custom VPC for better network isolation and control.

Your task is to delete the default VPC and create a custom VPC with two subnets, one in us-central1 and one in asia-southeast1, within the provided time frame.


Click Check my progress to verify the objective.
Custom VPC with two subnets

Solution of Lab
https://www.youtube.com/watch?v=0PS9SVjnvJI
 
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/create-custom-vpc-with-subnets-configuration-solution/lab.sh
source lab.sh

Script Alternative

gcloud compute firewall-rules list --filter="network=default" --format="value(name)" | xargs -r -I {} gcloud compute firewall-rules delete {} --quiet &amp;&amp; \
gcloud compute networks delete default --quiet &amp;&amp; \
gcloud compute networks create custom-vpc --subnet-mode=custom &amp;&amp; \
gcloud compute networks subnets create custom-subnet-us --network=custom-vpc --region=us-central1 --range=10.0.1.0/24 &amp;&amp; \
gcloud compute networks subnets create custom-subnet-asia --network=custom-vpc --region=asia-southeast1 --range=10.0.2.0/24
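Subnets in the same VPC must not have overlapping IP ranges; the two /24 ranges above occupy distinct blocks. A trivial local sanity check of that (pure shell, no gcloud involved):

```shell
US_RANGE="10.0.1.0/24"
ASIA_RANGE="10.0.2.0/24"

# For two /24 prefixes, distinct network portions (everything before the
# last octet) mean the ranges cannot overlap
[ "${US_RANGE%.*}" != "${ASIA_RANGE%.*}" ] && echo "ranges do not overlap"
# prints: ranges do not overlap
```

This comparison only works because both prefixes are the same length; ranges of different lengths need a proper CIDR containment check.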

]]></description><link>https://eplus.dev/create-custom-vpc-with-subnets-configuration-solution</link><guid isPermaLink="true">https://eplus.dev/create-custom-vpc-with-subnets-configuration-solution</guid><category><![CDATA[Create Custom VPC with Subnets Configuration]]></category><category><![CDATA[Create Custom VPC with Subnets Configuration (Solution)]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 08:45:55 GMT</pubDate></item><item><title><![CDATA[Arcade Hero: Enter the VPC - ARC122-VPC]]></title><description><![CDATA[Overview
In this lab you will learn the fundamentals of topic using Google Cloud.
If you are new to topic or looking for an overview of how to get started, you are in the right place. Read on to learn about the specifics of this lab and areas that you will get hands-on practice with.
In this lab, you will learn:

The use cases for topic

How to implement topic


Prerequisites
Over the course of this lab the following elements are required:

topic

Task 1. Access the Ticket Application
Open the service URL to gain access to the lab chat application.
Note:The application link works in both a normal browser tab and an incognito window. An initial loading screen will appear while the lab data is being prepared.
From here you will be able to interact with the application interface during the course of this lab.

Note:The main kanban screen includes the available tickets, reflecting different knowledge domains. The number of tickets displayed depends on the level and persona selected.
The lab mimics a kanban application scenario. Select an active ticket to view the lab specific task. To complete the lab successfully ensure the ticket task is fulfilled per instructions given.

Solution of Lab
https://www.youtube.com/watch?v=zuU-LkW1m4U
 


]]></description><link>https://eplus.dev/arcade-hero-enter-the-vpc-arc122-vpc-1</link><guid isPermaLink="true">https://eplus.dev/arcade-hero-enter-the-vpc-arc122-vpc-1</guid><category><![CDATA[Arcade Hero: Enter the VPC (Solution)]]></category><category><![CDATA[Arcade Hero: Enter the VPC]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 08:33:51 GMT</pubDate></item><item><title><![CDATA[Developer Essentials: Google Cloud Storage Static Website Hosting - gem-cloud-storage-host-static-site]]></title><description><![CDATA[Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

Click Activate Cloud Shell
 
 at the top of the Google Cloud console.


When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

(Optional) You can list the active account name with this command:

gcloud auth list


Click Authorize.

Your output should now look like this:


Output:
ACTIVE: *
ACCOUNT: student-01-xxxxxxxxxxxx@qwiklabs.net

To set the active account, run:
    $ gcloud config set account `ACCOUNT`


(Optional) You can list the project ID with this command:

gcloud config list project

Output:
[core]
project = &lt;project_ID&gt;

Example output:
[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.
Overview
This lab guides you through hosting a static website using Google Cloud Storage (GCS). You'll create a GCS bucket, upload website files, configure the bucket for website hosting, and use Artifact Registry to manage container images. This lab assumes basic familiarity with Google Cloud and command-line operations.
Task 1. Create a Google Cloud Storage Bucket
Create a GCS bucket to store your website's files.

Set your Project ID.

gcloud config set project qwiklabs-gcp-02-e5754c0fd87b

Note:Sets the active project in the Cloud SDK.

Create a GCS bucket.

gcloud storage buckets create gs://qwiklabs-gcp-02-e5754c0fd87b-website --uniform-bucket-level-access

Note:Creates a new GCS bucket with uniform bucket-level access enabled.
Task 2. Upload Website Files
Upload your website's HTML, CSS, JavaScript, and image files to the GCS bucket.

Create a simple index.html file.

cat &gt; index.html &lt;&lt;EOF
&lt;html&gt;
&lt;head&gt;
  &lt;title&gt;My Static Website&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;p&gt;Hello from Google Cloud Storage!&lt;/p&gt;
&lt;/body&gt;
&lt;/html&gt;
EOF

Note:Creates a basic HTML file.

Upload the index.html file to your bucket.

gcloud storage cp index.html gs://qwiklabs-gcp-02-e5754c0fd87b-website

Note:Copies the index.html file to the GCS bucket.
Task 3. Configure Bucket for Website Hosting
Configure the GCS bucket to serve your static website.

Enable website configuration on the bucket.

gcloud storage buckets update gs://qwiklabs-gcp-02-e5754c0fd87b-website --web-main-page-suffix=index.html

Note:Sets index.html as the default index page for the bucket.

Make the bucket objects publicly readable.

gcloud storage buckets add-iam-policy-binding gs://qwiklabs-gcp-02-e5754c0fd87b-website --member=allUsers --role=roles/storage.objectViewer

Note:Grants public read access to objects in the bucket.
Task 4. Access Your Website
Access your hosted static website via the GCS bucket's URL.

Get the public URL of your website.

echo "https://storage.googleapis.com/qwiklabs-gcp-02-e5754c0fd87b-website/index.html"

Note:Prints the URL to access your website.
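Public GCS objects follow the fixed URL pattern `https://storage.googleapis.com/BUCKET/OBJECT`, so the link can be assembled from the bucket and object names (bucket name taken from this lab):

```shell
BUCKET="qwiklabs-gcp-02-e5754c0fd87b-website"
OBJECT="index.html"

# Assemble the public object URL from its two components
URL="https://storage.googleapis.com/${BUCKET}/${OBJECT}"
echo "$URL"
# prints: https://storage.googleapis.com/qwiklabs-gcp-02-e5754c0fd87b-website/index.html
```

The same pattern works for any other object you upload to the bucket.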

Open the URL in your browser to view your website.

Note:This is not a command, just a text reminder to the user.
Task 5. Clean Up
Clean up resources to prevent unintended charges.

Remove the bucket.

gcloud storage rm -r gs://qwiklabs-gcp-02-e5754c0fd87b-website

Note:Deletes the GCS bucket.

Solution of Lab
https://www.youtube.com/watch?v=-NjQp5Y8J8I
 

]]></description><link>https://eplus.dev/developer-essentials-google-cloud-storage-static-website-hosting-gem-cloud-storage-host-static-site-1</link><guid isPermaLink="true">https://eplus.dev/developer-essentials-google-cloud-storage-static-website-hosting-gem-cloud-storage-host-static-site-1</guid><category><![CDATA[gem-cloud-storage-host-static-site]]></category><category><![CDATA[Developer Essentials: Google Cloud Storage Static Website Hosting - gem-cloud-storage-host-static-site]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 08:29:02 GMT</pubDate></item><item><title><![CDATA[Create VPC Peering Connection between VPCs (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario
As a network administrator, you have been assigned the responsibility of connecting two Virtual Private Clouds (VPCs), workspace-vpc and private-vpc, in your project. This peering connection will establish a direct and secure communication pathway between the resources residing in each VPC, allowing them to interact seamlessly.
Your tasks are:

Create a peering connection from workspace-vpc to private-vpc

Create a peering connection from private-vpc to workspace-vpc


Note: To SSH into the VM instance, run the following command:
gcloud compute ssh INSTANCE_NAME --project=PROJECT_ID --zone=INSTANCE_ZONE

When asked if you want to continue, enter Y. When prompted for a passphrase, press ENTER for no passphrase, then ENTER again.
Click Check my progress to verify the objective.

Solution of Lab
https://www.youtube.com/watch?v=iZwRujG_g2Y
 
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/create-vpc-peering-connection-between-vpcs-solution/lab.sh
source lab.sh

Script Alternative

gcloud auth list

export ZONE=$(gcloud compute project-info describe --format="value(commonInstanceMetadata.items[google-compute-default-zone])")

export PROJECT_ID=$(gcloud config get-value project)

gcloud config set compute/zone "$ZONE"

gcloud compute networks create workspace-vpc --subnet-mode=custom

gcloud compute networks create private-vpc --subnet-mode=custom

gcloud compute networks peerings create workspace-to-private --network=workspace-vpc --peer-network=private-vpc --auto-create-routes

gcloud compute networks peerings create private-to-workspace --network=private-vpc --peer-network=workspace-vpc --auto-create-routes

gcloud compute ssh workspace-vm --project="$PROJECT_ID" --zone="$ZONE"

]]></description><link>https://eplus.dev/create-vpc-peering-connection-between-vpcs-solution-1</link><guid isPermaLink="true">https://eplus.dev/create-vpc-peering-connection-between-vpcs-solution-1</guid><category><![CDATA[Create VPC Peering Connection between VPCs (Solution)]]></category><category><![CDATA[Create VPC Peering Connection between VPCs]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 07:56:45 GMT</pubDate></item><item><title><![CDATA[Manage Cloud Storage Lifecycle Policy using gcloud storage (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario
You are managing a Cloud Storage bucket named qwiklabs-gcp-01-cb18e338debe-bucket. This bucket serves multiple purposes within your organization and contains a mix of active project files, archived documents, and temporary logs. To optimize storage costs, you need to implement a lifecycle management policy that automatically aligns the storage classes of these files with their access patterns.

Design a lifecycle management policy with the following objectives:

Active Project Files: Files within the /projects/active/ folder modified within the last 30 days should reside in Standard storage for fast access.

Archives: Files within /archive/ modified within the last 90 days should be moved to Nearline storage. After 180 days, they should transition to Coldline storage.

Temporary Logs: Files within /processing/temp_logs/ should be automatically deleted after 7 days.




Click Check my progress to verify the objective.
Create a lifecycle management policy

Solution of Lab
https://www.youtube.com/watch?v=jI00HyPDPr4
 
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/manage-cloud-storage-lifecycle-policy-using-gcloud-storage-solution/lab.sh
source lab.sh

Script Alternative
# I Know you will Steal it
PROJECT_ID=$(gcloud config get-value project)

cat &lt;&lt;EOF &gt; lifecycle.json
{
  "rule": [
    {
      "action": {
        "type": "SetStorageClass",
        "storageClass": "NEARLINE"
      },
      "condition": {
        "daysSinceNoncurrentTime": 30,
        "matchesPrefix": ["projects/active/"]
      }
    },
    {
      "action": {
        "type": "SetStorageClass",
        "storageClass": "NEARLINE"
      },
      "condition": {
        "daysSinceNoncurrentTime": 90,
        "matchesPrefix": ["archive/"]
      }
    },
    {
      "action": {
        "type": "SetStorageClass",
        "storageClass": "COLDLINE"
      },
      "condition": {
        "daysSinceNoncurrentTime": 180,
        "matchesPrefix": ["archive/"]
      }
    },
    {
      "action": {
        "type": "Delete"
      },
      "condition": {
        "age": 7,
        "matchesPrefix": ["processing/temp_logs/"]
      }
    }
  ]
}
EOF


gsutil lifecycle set lifecycle.json gs://$PROJECT_ID-bucket
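
The lifecycle JSON above can also be generated programmatically and sanity-checked before upload. A minimal Python sketch (the lifecycle.json file name matches the script above; the rules mirror the heredoc exactly):

```python
import json

# Build the four lifecycle rules described in the scenario.
rules = [
    {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
     "condition": {"daysSinceNoncurrentTime": 30, "matchesPrefix": ["projects/active/"]}},
    {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
     "condition": {"daysSinceNoncurrentTime": 90, "matchesPrefix": ["archive/"]}},
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
     "condition": {"daysSinceNoncurrentTime": 180, "matchesPrefix": ["archive/"]}},
    {"action": {"type": "Delete"},
     "condition": {"age": 7, "matchesPrefix": ["processing/temp_logs/"]}},
]
policy = {"rule": rules}

# Sanity checks before writing the file.
assert len(policy["rule"]) == 4
assert all("action" in r and "condition" in r for r in rules)

with open("lifecycle.json", "w") as f:
    json.dump(policy, f, indent=2)

print("lifecycle.json written with", len(rules), "rules")
```

With the file in place, the gcloud storage equivalent of the gsutil command (matching the lab title) is: gcloud storage buckets update gs://$PROJECT_ID-bucket --lifecycle-file=lifecycle.json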

]]></description><link>https://eplus.dev/manage-cloud-storage-lifecycle-policy-using-gcloud-storage-solution</link><guid isPermaLink="true">https://eplus.dev/manage-cloud-storage-lifecycle-policy-using-gcloud-storage-solution</guid><category><![CDATA[Manage Cloud Storage Lifecycle Policy using gcloud storage (Solution)]]></category><category><![CDATA[Manage Cloud Storage Lifecycle Policy using gcloud storage]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 07:23:57 GMT</pubDate></item><item><title><![CDATA[Configure Cloud Storage Bucket for Website Hosting using gsutil (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario

You have an existing Cloud Storage bucket named qwiklabs-gcp-00-8809a1bbfbd0-bucket that contains the following files necessary for a simple static website:

index.html (The main landing page)

error.html (Custom error page)

style.css

logo.jpg



Currently, the bucket is not configured for website hosting. Your task is to update the configuration to make this website publicly accessible.

For now, there is no need to create a load balancer or CDN to redirect requests to the Cloud Storage bucket.


Click Check my progress to verify the objective.
Configure a bucket for website hosting

Solution of Lab
https://www.youtube.com/watch?v=y-gYQ9Vkec0
 
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/configure-cloud-storage-bucket-for-website-hosting-using-gsutil-solution/lab.sh
source lab.sh

Script Alternative
export BUCKET="$(gsutil ls -b | head -n 1 | sed 's#gs://##; s#/*$##')"
gsutil web set -m index.html -e error.html gs://$BUCKET
gsutil uniformbucketlevelaccess set off gs://$BUCKET
gsutil defacl set public-read gs://$BUCKET
gsutil acl set -a public-read gs://$BUCKET/index.html
gsutil acl set -a public-read gs://$BUCKET/error.html
gsutil acl set -a public-read gs://$BUCKET/style.css
gsutil acl set -a public-read gs://$BUCKET/logo.jpg
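
Once the ACLs are public, each object is served at a predictable URL. A minimal sketch of how the public URL is formed (the bucket name is the example from the scenario):

```shell
BUCKET="qwiklabs-gcp-00-8809a1bbfbd0-bucket"   # example bucket from the scenario
URL="https://storage.googleapis.com/${BUCKET}/index.html"
echo "$URL"
```

You can then curl -I that URL to confirm a 200 response once the ACL changes have propagated.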

]]></description><link>https://eplus.dev/configure-cloud-storage-bucket-for-website-hosting-using-gsutil-solution-1</link><guid isPermaLink="true">https://eplus.dev/configure-cloud-storage-bucket-for-website-hosting-using-gsutil-solution-1</guid><category><![CDATA[Configure Cloud Storage Bucket for Website Hosting using gsutil (Solution)]]></category><category><![CDATA[Configure Cloud Storage Bucket for Website Hosting using gsutil]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 07:11:26 GMT</pubDate></item><item><title><![CDATA[Configure Cloud CDN for Storage using gcloud (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario
You're the cloud architect for a news network, which is rapidly growing as an online news platform. The website serves a massive amount of static content (images, videos, articles) to a global audience. You've noticed that page load times are increasing, especially for users in regions far from your Cloud Storage bucket in the pre-selected region. Create a Cloud CDN configuration to cache the site's static content that is hosted on the pre-created Cloud Storage bucket.
Click Check my progress to verify the objective.
Create a Cloud CDN configuration to cache the site's static content.

Solution of Lab
https://youtu.be/VPwleeJam3Y
 
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/configure-cloud-cdn-for-storage-using-gcloud-solution/lab.sh
source lab.sh

Script Alternative
#!/bin/bash
set -euo pipefail

# ============================================================
# Cloud CDN + HTTP Load Balancer for a pre-created GCS bucket
# Then request (curl) a file via CDN/LB to complete the lab.
# ============================================================

# -----------------------------
# 1) Detect the pre-created bucket
# -----------------------------
BUCKET_NAME="$(gsutil ls -b | head -n 1 | sed 's#gs://##; s#/*$##')"
if [[ -z "${BUCKET_NAME}" ]]; then
  echo "ERROR: No Cloud Storage bucket found in this project."
  exit 1
fi
echo "Using bucket: gs://${BUCKET_NAME}"

# -----------------------------
# 2) Pick a test object from the bucket
#    (any static file is fine; the first object is enough)
# -----------------------------
OBJECT_PATH="$(gsutil ls "gs://${BUCKET_NAME}/**" 2&gt;/dev/null | head -n 1 | sed "s#gs://${BUCKET_NAME}/##")"
if [[ -z "${OBJECT_PATH}" ]]; then
  echo "ERROR: Bucket gs://${BUCKET_NAME} has no objects to request with curl."
  exit 1
fi
echo "Test object path: ${OBJECT_PATH}"

# -----------------------------
# 3) Resource names (idempotent)
# -----------------------------
BACKEND_BUCKET="static-backend-bucket"
URL_MAP="cdn-map"
PROXY="cdn-http-proxy"
FORWARDING_RULE="cdn-http-rule"

# -----------------------------
# 4) Create Cloud CDN backend bucket
# -----------------------------
if ! gcloud compute backend-buckets describe "$BACKEND_BUCKET" &gt;/dev/null 2&gt;&amp;1; then
  echo "Creating backend bucket with Cloud CDN enabled..."
  gcloud -q compute backend-buckets create "$BACKEND_BUCKET" \
    --gcs-bucket-name="$BUCKET_NAME" \
    --enable-cdn
else
  echo "Backend bucket already exists: $BACKEND_BUCKET"
fi

# -----------------------------
# 5) Create URL map
# -----------------------------
if ! gcloud compute url-maps describe "$URL_MAP" &gt;/dev/null 2&gt;&amp;1; then
  echo "Creating URL map..."
  gcloud -q compute url-maps create "$URL_MAP" \
    --default-backend-bucket="$BACKEND_BUCKET"
else
  echo "URL map already exists: $URL_MAP"
fi

# -----------------------------
# 6) Create target HTTP proxy
# -----------------------------
if ! gcloud compute target-http-proxies describe "$PROXY" &gt;/dev/null 2&gt;&amp;1; then
  echo "Creating target HTTP proxy..."
  gcloud -q compute target-http-proxies create "$PROXY" \
    --url-map="$URL_MAP"
else
  echo "Target HTTP proxy already exists: $PROXY"
fi

# -----------------------------
# 7) Create global forwarding rule on port 80
# -----------------------------
if ! gcloud compute forwarding-rules describe "$FORWARDING_RULE" --global &gt;/dev/null 2&gt;&amp;1; then
  echo "Creating global forwarding rule..."
  gcloud -q compute forwarding-rules create "$FORWARDING_RULE" \
    --global \
    --target-http-proxy="$PROXY" \
    --ports=80
else
  echo "Forwarding rule already exists: $FORWARDING_RULE"
fi

# -----------------------------
# 8) Get LB IP and request a file via CDN using curl
# -----------------------------
IP_ADDRESS="$(gcloud compute forwarding-rules describe "$FORWARDING_RULE" --global --format="value(IPAddress)")"
if [[ -z "${IP_ADDRESS}" ]]; then
  echo "ERROR: Could not determine forwarding rule IP address."
  exit 1
fi

echo "Load Balancer IP: ${IP_ADDRESS}"
echo "Requesting file via CDN/LB with curl:"
echo "URL: http://${IP_ADDRESS}/${OBJECT_PATH}"
echo

# Use -I to fetch headers only (fast), or remove -I to download the full file.
curl -I "http://${IP_ADDRESS}/${OBJECT_PATH}"

echo
echo "DONE. Now click 'Check my progress' in the lab."
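
To confirm the CDN is actually caching, request the same object twice; on a cache hit Cloud CDN adds an Age header to the response. A sketch of pulling that header out (the header text here is a hard-coded sample standing in for a live curl -sI "http://${IP_ADDRESS}/${OBJECT_PATH}" response):

```shell
# Sample response headers (stand-in for a real curl -sI response).
HEADERS=$(printf 'HTTP/1.1 200 OK\r\nContent-Type: image/png\r\nAge: 42\r\n')

# Extract the Age header value; a non-empty value indicates a cache hit.
AGE=$(printf '%s' "$HEADERS" | grep -i '^Age:' | awk '{print $2}' | tr -d '\r')
echo "Age: ${AGE:-absent (cache miss)}"
```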

]]></description><link>https://eplus.dev/configure-cloud-cdn-for-storage-using-gcloud-solution</link><guid isPermaLink="true">https://eplus.dev/configure-cloud-cdn-for-storage-using-gcloud-solution</guid><category><![CDATA[Configure Cloud CDN for Storage using gcloud (Solution)]]></category><category><![CDATA[Configure Cloud CDN for Storage using gcloud]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 06:56:29 GMT</pubDate></item><item><title><![CDATA[Configure Secure CORS for Cloud Storage (Solution)]]></title><description><![CDATA[Configure Secure CORS for Cloud Storage
10 minutes · No cost · Introductory
Note: This lab may incorporate AI tools to support your learning.

Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario
Your company, a scientific research organization, uses a Google Cloud Storage bucket for public data dissemination. A research partner needs to access this data from their web application at http://example.com but is facing cross-origin access errors.
Configure secure CORS on the created bucket, permitting only GET requests from http://example.com, adhering to the principle of least privilege.
Click Check my progress to verify the objective.
Configure secure CORS on the created bucket.

Solution of Lab
https://youtu.be/AgbAjpY7J5o
 
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/configure-secure-cors-for-cloud-storage-solution/lab.sh
source lab.sh

Script Alternative
echo '[{"origin":["http://example.com"],"method":["GET"],"responseHeader":["Content-Type"],"maxAgeSeconds":3600}]' &gt; cors.json
gcloud storage buckets update gs://$(gcloud config get-value project)-bucket --cors-file=cors.json
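
The one-line JSON above is easy to mistype. A short Python sketch that builds the same cors.json and checks the least-privilege constraints (single origin, GET only) before writing it:

```python
import json

# CORS policy: only GET from http://example.com, per least privilege.
cors = [{
    "origin": ["http://example.com"],
    "method": ["GET"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600,
}]

# Least-privilege sanity checks before writing the file.
assert cors[0]["method"] == ["GET"]
assert cors[0]["origin"] == ["http://example.com"]

with open("cors.json", "w") as f:
    json.dump(cors, f)

print("cors.json written")
```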

]]></description><link>https://eplus.dev/configure-secure-cors-for-cloud-storage-solution</link><guid isPermaLink="true">https://eplus.dev/configure-secure-cors-for-cloud-storage-solution</guid><category><![CDATA[Configure Secure CORS for Cloud Storage]]></category><category><![CDATA[Configure Secure CORS for Cloud Storage (Solution)]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 06:43:01 GMT</pubDate></item><item><title><![CDATA[Secure a Public Storage Bucket - gcloud (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario

You're a cloud architect for a media company. A critical video archive bucket named qwiklabs-gcp-00-6372eb9e6b50-urgent has mistakenly been made public. Your task is to quickly secure it in the provided time frame.

Click Check my progress to verify the objective.
Make the media archive folder private.
Note: If you have already removed public access, allow a few minutes for the change to propagate.

Solution of Lab
https://youtu.be/jO8dnodJ2a4
 
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/Secure%20a%20Public%20Storage%20Bucket%20-%20gcloud/lab.sh
source lab.sh

Script Alternative
PROJECT_ID=$(gcloud config get-value project)
gsutil iam ch -d allUsers:objectViewer gs://$PROJECT_ID-urgent &amp;&amp; gsutil iam get gs://$PROJECT_ID-urgent

]]></description><link>https://eplus.dev/secure-a-public-storage-bucket-gcloud-solution-1</link><guid isPermaLink="true">https://eplus.dev/secure-a-public-storage-bucket-gcloud-solution-1</guid><category><![CDATA[Secure a Public Storage Bucket - gcloud (Solution)]]></category><category><![CDATA[Secure a Public Storage Bucket - gcloud]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sun, 15 Feb 2026 06:33:11 GMT</pubDate></item><item><title><![CDATA[Analyze the Text Prompt and Use it to Build an AI Image (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included IDE is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


In a challenge lab, you're given a scenario and a set of tasks. Instead of following step-by-step instructions, you will use the skills learned from the labs in the course to figure out how to complete the tasks on your own! An automated scoring system (shown on this page) will provide feedback on whether you have completed your tasks correctly.
When you take a challenge lab, you will not be taught new Google Cloud concepts. You are expected to extend your learned skills, like changing default values and reading and researching error messages to fix your own mistakes.
To score 100% you must successfully complete all tasks within the time period! Are you ready for the challenge?
Challenge scenario
Scenario: You're a developer at a company specializing in AI-driven sports venue design. Your clients are sports organizations and city planners who want to visualize potential new venues or renovations. Your system generates realistic images of these venues based on textual descriptions. Your main application will invoke the relevant methods based on user interaction; to facilitate that, you need to finish the tasks below:
Task: Develop a Python function named generate_image(prompt). This function should invoke the imagen-3.0-generate-002 model with the supplied prompt, using Gemini's ability to understand the text prompt and build an AI image from it.
For this challenge, use the prompt: "Create an image of a cricket ground in the heart of Los Angeles".
Follow these steps to interact with the Generative AI APIs using Vertex AI Python SDK.

Click File &gt; New File to open a new file within the Code Editor.

Write the Python code to use Google's Vertex AI SDK to interact with the pre-trained Text Generation AI model.

Create and save the Python file.

Execute the Python file by running the command below in the terminal within the Code Editor pane, replacing FILE_NAME, to view the output.


/usr/bin/python3 /FILE_NAME.py


To view the generated image, click EXPLORER &gt; image.jpeg.

Note: You can ignore any warnings related to Python version dependencies.
Click Check my progress to verify the objective.
Send a text prompt to Gen AI and receive an image response
Note: If you have already created and run the Python file, wait until the logs are created.

Solution of Lab
https://www.youtube.com/watch?v=20h0_LcltWs
 
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/analyze-the-text-prompt-and-use-it-to-build-an-ai-image-solution/lab.sh
source lab.sh

Script Alternative
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

def generate_image(prompt: str):
    # Initialize Vertex AI
    vertexai.init()

    # Load Imagen 3 model
    model = ImageGenerationModel.from_pretrained(
        "imagen-3.0-generate-002"
    )

    # Generate image
    images = model.generate_images(
        prompt=prompt,
        number_of_images=1
    )

    # Save the generated image
    image = images[0]
    image.save("image.jpeg")

    print("Image generated and saved as image.jpeg")

if __name__ == "__main__":
    generate_image(
        "Create an image of a cricket ground in the heart of Los Angeles"
    )

]]></description><link>https://eplus.dev/analyze-the-text-prompt-and-use-it-to-build-an-ai-image-solution</link><guid isPermaLink="true">https://eplus.dev/analyze-the-text-prompt-and-use-it-to-build-an-ai-image-solution</guid><category><![CDATA[Analyze the Text Prompt and Use it to Build an AI Image]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Wed, 11 Feb 2026 08:24:39 GMT</pubDate></item><item><title><![CDATA[Encrypt a Persistent Disk with a Customer-Supplied Key (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario
You are a system administrator at a large enterprise company. Your compliance team has informed you that you need to start encrypting data at rest with your own key (customer-supplied encryption key or CSEK). Your task is to create a persistent disk with encryption using the CSEK and attach that persistent disk to a VM instance.
Click Check my progress to verify the objective.
Create and attach a CSEK-encrypted persistent disk to the VM instance.

Solution of Lab
https://www.youtube.com/watch?v=e-CPhuZ865s
 
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/encrypt-a-persistent-disk-with-a-customer-supplied-key-solution/lab.sh
source lab.sh

Script Alternative
export PROJECT_ID=$(gcloud config get-value project) ZONE=$(gcloud compute instances list --limit=1 --format="value(zone)") VM_NAME=$(gcloud compute instances list --limit=1 --format="value(name)") BASE64_KEY=$(head -c 32 /dev/urandom | base64)
gcloud compute disks create csek-encrypted-disk --size=200GB --zone=$ZONE --csek-key-file=&lt;(echo "[{\"uri\": \"https://www.googleapis.com/compute/v1/projects/$PROJECT_ID/zones/$ZONE/disks/csek-encrypted-disk\", \"key\": \"$BASE64_KEY\", \"key-type\": \"raw\"}]")
gcloud compute instances attach-disk $VM_NAME --disk=csek-encrypted-disk --zone=$ZONE --csek-key-file=&lt;(echo "[{\"uri\": \"https://www.googleapis.com/compute/v1/projects/$PROJECT_ID/zones/$ZONE/disks/csek-encrypted-disk\", \"key\": \"$BASE64_KEY\", \"key-type\": \"raw\"}]")
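
A raw CSEK must be exactly 256 bits (32 bytes) encoded as base64 — the same thing head -c 32 /dev/urandom | base64 produces above. A Python sketch that generates and validates such a key:

```python
import base64
import os

# Generate 32 random bytes and base64-encode them, mirroring:
#   head -c 32 /dev/urandom | base64
raw_key = os.urandom(32)
b64_key = base64.b64encode(raw_key).decode("ascii")

# A valid raw CSEK decodes back to exactly 32 bytes (44 base64 chars).
assert len(base64.b64decode(b64_key)) == 32
assert len(b64_key) == 44

print("CSEK (base64):", b64_key)
```

Keep the key safe: Google does not store customer-supplied keys, so losing it means losing access to the disk's data.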

]]></description><link>https://eplus.dev/encrypt-a-persistent-disk-with-a-customer-supplied-key-solution</link><guid isPermaLink="true">https://eplus.dev/encrypt-a-persistent-disk-with-a-customer-supplied-key-solution</guid><category><![CDATA[Encrypt a Persistent Disk with a Customer-Supplied Key]]></category><category><![CDATA[Encrypt a Persistent Disk with a Customer-Supplied Key (Solution)]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Wed, 11 Feb 2026 07:38:42 GMT</pubDate></item><item><title><![CDATA[Assign External IP to VM Instance using gcloud (Solution)]]></title><description><![CDATA[Overview

Labs are timed and cannot be paused. The timer starts when you click Start Lab.

The included cloud terminal is preconfigured with the gcloud SDK.

Use the terminal to execute commands and then click Check my progress to verify your work.


Challenge scenario
You are a system administrator for a company that uses Google Cloud to host its applications. You have an existing compute instance without an external IP in the default VPC network.
Your task is to assign a static external IP address to your virtual machine (VM) instance.
Click Check my progress to verify the objective.
Create a static external IP address

Solution of Lab
https://www.youtube.com/watch?v=MsQUNVAX2kE
 
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/assign-external-ip-to-vm-instance-using-gcloud-solution/lab.sh
source lab.sh

Script Alternative
export VM_NAME=$(gcloud compute instances list --format='value(name)' --limit=1) &amp;&amp; export ZONE=$(gcloud compute instances list --format='value(zone)' --limit=1) &amp;&amp; export REGION=${ZONE%-*}
gcloud compute addresses create lab-static-ip --region=$REGION
export IP_ADDRESS=$(gcloud compute addresses describe lab-static-ip --region=$REGION --format='get(address)') &amp;&amp; gcloud compute instances add-access-config $VM_NAME --zone=$ZONE --address=$IP_ADDRESS
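
The ${ZONE%-*} expansion above derives the region by stripping the zone's trailing letter. A standalone sketch with a hard-coded example zone:

```shell
ZONE="us-central1-a"     # example zone; the script reads the real one from gcloud
REGION="${ZONE%-*}"      # drop the trailing "-a" suffix, leaving us-central1
echo "$REGION"
```

Static external addresses are regional resources, which is why the region (not the zone) is passed to gcloud compute addresses create.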

]]></description><link>https://eplus.dev/assign-external-ip-to-vm-instance-using-gcloud-solution</link><guid isPermaLink="true">https://eplus.dev/assign-external-ip-to-vm-instance-using-gcloud-solution</guid><category><![CDATA[Assign External IP to VM Instance using gcloud (Solution)]]></category><category><![CDATA[Assign External IP to VM Instance using gcloud]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Wed, 11 Feb 2026 07:16:28 GMT</pubDate></item><item><title><![CDATA[Arcade February 2026 Sprint 4 (Solution)]]></title><description><![CDATA[Overview
Welcome to Arcade February 2026 Sprint 4! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.

Quiz

In Google Cloud IAM, what do you call a defined collection of specific permissions?
 Select ONE answer that would be relevant

Folder

Member

Role

Policy



Which Google Cloud tool allows you to manage permissions and access levels for different users?
 Select ONE answer that would be relevant

VPC Peering

Compute Engine

IAM

Cloud CDN



Which Google Cloud resource is required to define the configuration (machine type, image) for a Managed Instance Group?
 Select ONE answer that would be relevant

Disk Image

Instance Snapshot

Instance Template

Machine Image



What feature of a Google Cloud MIG (Managed Instance Groups) automatically adds more VMs when web traffic increases?
 Select ONE answer that would be relevant

Autoscaling

VPC Peering

Snapshotting

Load Balancing



When deploying a VM instance with multiple network interfaces in Google Cloud, what is a mandatory requirement for each interface regarding its network connection?
 Select ONE answer that would be relevant

Each interface must be attached to a different VPC network

Each interface must have a static external IP address

All interfaces must belong to the same subnetwork

Every interface must have a dedicated Cloud NAT gateway



How will you attach multiple network interfaces to a VM instance during creation using gcloud?
 Select ONE answer that would be relevant

Use the --network-interface flag multiple times

Use the --interfaces list

Use the --add-network command

Use the --attach-multiple-nics flag






]]></description><link>https://eplus.dev/arcade-february-2026-sprint-4-solution</link><guid isPermaLink="true">https://eplus.dev/arcade-february-2026-sprint-4-solution</guid><category><![CDATA[Arcade February 2026 Sprint 4 (Solution)]]></category><category><![CDATA[Arcade February 2026 Sprint 4]]></category><category><![CDATA[Arcade February 2026]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sat, 07 Feb 2026 06:46:37 GMT</pubDate></item><item><title><![CDATA[Arcade February 2026 Sprint 3 (Solution)]]></title><description><![CDATA[Overview
Welcome to Arcade February 2026 Sprint 3! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.

Quiz

In Google Cloud, which component is used to route incoming traffic to the nearest healthy region?
 Select ONE answer that would be relevant

Cloud Build

Global Load Balancer

Local HDD

Static VM



Which Google Cloud component monitors the status of instances to ensure traffic is only sent to healthy VMs?
 Select ONE answer that would be relevant

kubectl run pods

Traffic Director

kubectl show pods

Health Check



Which command is used to create a new deployment in a Google Kubernetes Engine cluster?
 Select ONE answer that would be relevant

kubectl create deployment

kubectl start deploy

kubectl run instance

kubectl new pod



In Google Kubernetes Engine, what is the smallest deployable object that represents a single instance of a running process?
 Select ONE answer that would be relevant

Node

Cluster

Service

Pod



Which command-line utility is used to manage and deploy resources within a Kubernetes cluster?
 Select ONE answer that would be relevant

gsutil

kubectl

bq

terraform



Which Google Cloud command creates a new cluster in Google Kubernetes Engine?
 Select ONE answer that would be relevant

gcloud k8s new-cluster

gcloud compute clusters add

gcloud container clusters create

gcloud container new






]]></description><link>https://eplus.dev/arcade-february-2026-sprint-3-solution</link><guid isPermaLink="true">https://eplus.dev/arcade-february-2026-sprint-3-solution</guid><category><![CDATA[Arcade February 2026 Sprint 3 (Solution)]]></category><category><![CDATA[Arcade February 2026 Sprint 3]]></category><category><![CDATA[Arcade February 2026]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sat, 07 Feb 2026 06:41:19 GMT</pubDate></item><item><title><![CDATA[Arcade February 2026 Sprint 2 (Solution)]]></title><description><![CDATA[Overview
Welcome to Arcade February 2026 Sprint 2! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.

Quiz

In Google Cloud, which service is used to manage and orchestrate containerized applications?
 Select ONE answer that would be relevant

Kubernetes Engine

Compute Engine

Cloud Run

Cloud Build



Which Kubernetes command is used to list all the pods currently running in your Google Cloud cluster?
 Select ONE answer that would be relevant

kubectl run pods

kubectl show pods

kubectl start pods

kubectl get pods



Which Google Cloud command is used to provision a Virtual Private Cloud (VPC) network?
 Select ONE answer that would be relevant

gcloud compute vpc add

gcloud vpc provision

gcloud networks new

gcloud compute networks create



Which Google Cloud command is used to create a new virtual machine instance?
 Select ONE answer that would be relevant

gcloud vm create

gcloud provision instance

gcloud compute instances create

gcloud compute new-vm



Which Google Cloud service caches content at the network edge to reduce user latency?
 Select ONE answer that would be relevant

Cloud Armor

Cloud NAT

Cloud CDN

Cloud DNS



To use Cloud CDN, it must be enabled on which specific Google Cloud component?
 Select ONE answer that would be relevant

BigQuery

Load Balancer

Cloud PubSub

Cloud SQL






]]></description><link>https://eplus.dev/arcade-february-2026-sprint-2-solution</link><guid isPermaLink="true">https://eplus.dev/arcade-february-2026-sprint-2-solution</guid><category><![CDATA[Arcade February 2026 Sprint 2 (Solution)]]></category><category><![CDATA[Arcade February 2026 Sprint 2]]></category><category><![CDATA[Arcade February 2026]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sat, 07 Feb 2026 06:35:29 GMT</pubDate></item><item><title><![CDATA[Arcade February 2026 Sprint 1 (Solution)]]></title><description><![CDATA[Overview
Welcome to Arcade February 2026 Sprint 1! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.

Quiz

How will you create a new Linux server instance in Google Cloud using the Console?
 Select ONE answer that would be relevant

Use Cloud Functions

Use Google Drive

Use Compute Engine

Use Cloud Spanner



In Google Cloud, what does the "Machine Type" configuration primarily determine?
 Select ONE answer that would be relevant

Disk type

OS version

Network speed

Hardware resources



Which gcloud command is used to display all the configuration properties of your current environment?
 Select ONE answer that would be relevant

gcloud help

gcloud auth list

gcloud config list

gcloud info



Which gcloud command is used to view a list of active account names in your environment?
 Select ONE answer that would be relevant

gcloud help

gcloud info

gcloud auth list

gcloud config list



In Google Cloud, how will you create a new persistent disk in a specific zone using the command line?
 Select ONE answer that would be relevant

gcloud make disk

gcloud compute disks create

gcloud disk provision

gcloud storage new



Which Google Cloud command is used to attach an existing Persistent Disk to a virtual machine instance?
 Select ONE answer that would be relevant

gcloud compute instances attach-disk

Send it to a printer

Click Delete

gcloud vm mount-disk






]]></description><link>https://eplus.dev/arcade-february-2026-sprint-1-solution</link><guid isPermaLink="true">https://eplus.dev/arcade-february-2026-sprint-1-solution</guid><category><![CDATA[Arcade February 2026 Sprint 1 (Solution)]]></category><category><![CDATA[Arcade February 2026 Sprint 1]]></category><category><![CDATA[Arcade February 2026]]></category><dc:creator><![CDATA[David Nguyen]]></dc:creator><pubDate>Sat, 07 Feb 2026 06:31:17 GMT</pubDate></item></channel></rss>