
May 23, 2019 By Sumo Logic

Getting Started with Kubernetes and Google Container Engine

Docker, Vagrant, Ansible, Jenkins—none of these seem particularly scary, but Kubernetes sounds like something out of a jargon-filled sci-fi movie; as in “entering phantom zone, engaging Kubernetes firedrive now.” The fact of the matter, though, is that Kubernetes really isn’t all that complicated. To quote straight from the source, Kubernetes is “an open-source system for automating deployment, scaling, and management of containerized applications.”

At the risk of overgeneralizing, it's an automated Docker management platform. Now, let's take a look at how to get your containers integrated with Google Container Engine and Kubernetes.

Getting Kubernetes into Google

Believe it or not, getting your feet wet with Kubernetes doesn’t require a massive architecture or complicated application. All it takes is a Docker image and a Google Cloud Platform (GCP) account to use with Google Container Engine. In order to get started, we first need to log into our GCP account and create a new project.
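If you already have the gcloud CLI installed (we'll cover installing Cloud Tools later in this walkthrough), the project can also be created from the command line. Here's a minimal sketch; the project ID my-demo-project and its display name are just examples:

# create a new GCP project to hold the cluster and container images
gcloud projects create my-demo-project --name="Kubernetes Demo"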



Creating a Kubernetes Cluster in Google Container Engine

Once we have a GCP Project, we can actually start digging into Kubernetes. The first thing we need to do is create a cluster. In Kubernetes-speak, a cluster is a collection of nodes, which in turn are collections of pods, which themselves are collections of containers. To put it more clearly, a cluster is the base Kubernetes service, nodes are like servers, and pods act as applications.

Kubernetes cluster diagram
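Once a cluster exists (we'll create one next), kubectl lets you inspect each layer of that hierarchy directly. A quick sketch of the relevant read-only commands:

# list the nodes (the server-like workers) registered with the cluster
kubectl get nodes

# list the pods (the application units) scheduled onto those nodes
kubectl get pods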

To create a cluster, head on over to Google Container Engine (GKE) and click the obvious button that says “create a container cluster.” This will pop up an incredibly straightforward form that will allow you to enter some details about your new cluster. For the purposes of this demo, the default settings are more than sufficient; however, I recommend hovering over some of the question marks to get a better idea of what each individual setting means.

Create a Kubernetes Cluster
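If you'd rather script this step than click through the console, the gcloud CLI can create an equivalent cluster. This is a minimal sketch, assuming a hypothetical cluster name of demo-cluster in the us-west1-a zone:

# create a small cluster with the default machine type
gcloud container clusters create demo-cluster --zone=us-west1-a --num-nodes=3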

Building the Container for Kubernetes

After we create our cluster, we then need something to deploy on it. Because Kubernetes is a container automation platform, the application we deploy will come in the form of a Docker image. For the sake of simplicity (and to keep the focus on Kubernetes), I’ve created the world’s simplest Dockerfile by importing the yeasy/simple-web image.

FROM yeasy/simple-web:latest

For the sake of continuity, we next need to build the Docker image from the Dockerfile. This can be done via the docker build -t organization/image-name . command (note the trailing dot, which tells Docker to use the current directory as the build context).

Building the Dockerfile

Once we have a built Docker image, we then need to tag it into a repository. What this basically means is that we will be associating our local Docker image with a remote destination. While the docker tag command is pretty straightforward, it's important to note the format of the remote repository image:

docker tag organization/image-name gcr.io/project-id/image-name

Tag Docker Image
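To make the build and tag steps concrete, here is the same sequence with hypothetical values filled in: an organization of examplecorp, an image name of simple-web, and a project ID of my-demo-project:

# build the image from the Dockerfile in the current directory
docker build -t examplecorp/simple-web .

# associate the local image with its Google Container Registry destination
docker tag examplecorp/simple-web gcr.io/my-demo-project/simple-web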

Your GCP project ID can be found on the Project Settings page, but you can also find it via a quick shortcut by clicking on the Project Name dropdown in the menu bar and hovering over Project Name:

Setting a project ID in GCP

Next, we need to upload our Docker image to the Google Container Registry (GCR). Before we can do that, though, we need to install the GCP Cloud Tools application. The details of doing this are outside the scope of this article, so I recommend heading over to the Cloud Tools download page and following the instructions for your operating system before continuing.

Once we've installed and configured Cloud Tools, it's time to push up our image to the GCR. This command will look similar to any docker push command, but will be prefixed with the gcloud command. This bootstraps authentication for us, which makes pushing up new images incredibly straightforward.

gcloud docker push gcr.io/project-id/image-name
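If the push is rejected with an authentication or project error, it usually means the CLI isn't logged in or isn't pointed at the right project. A quick sketch, again assuming the hypothetical project ID my-demo-project:

# authenticate the gcloud CLI and select the target project
gcloud auth login
gcloud config set project my-demo-project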

Launching a Node in GCP

Now that the necessary pieces of our puzzle are in place, it’s time to launch a node with our recently uploaded image. The first step to accomplishing this is to get the login credentials to our initially created cluster. This step is important, as it allows us to run our authenticated kubectl commands.

gcloud container clusters get-credentials cluster-name --zone=us-west1-a

After fetching the cluster data from GCP, we are now able to deploy a node to it using the kubectl run command. This brings up one node containing one pod running the designated Docker image.

kubectl run node-name --image=gcr.io/project-id/image-name --port=8080

If everything goes well, we can list our deployments using the kubectl get deployments command, and we should see the node that we just launched.

Verifying node deployment

Exposing the Node Deployment

Unfortunately, we’re not quite done. Our recently created node can only be accessed via an internal network, which means that it doesn’t do the rest of the world much good. In order to make our application accessible to the outside world, we need to expose our defined ports.

kubectl expose deployment node-name --type="LoadBalancer"

Node deployment waiting for external IP

Once we expose our deployment, we need to pull up the public IP address using the kubectl get services command. It’s important to note that allocating an external IP address could take a few minutes, so you may see something like this at first:
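The command to check, for reference (the EXTERNAL-IP column reads <pending> until GCP finishes allocating the address):

# list services along with their cluster and external IPs
kubectl get services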

Give it a few minutes and run the command again. When the public IP has been allocated, it will replace the pending placeholder in the services response.

Node deployment with external IP

If everything goes as expected, we should be able to navigate to the provided external IP address and our previously defined port in a browser and receive the simple web response!
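The same check works from the command line. A quick sketch, substituting the external IP reported by kubectl get services (8080 matches the port we exposed earlier):

# request the page served by the simple-web container
curl http://EXTERNAL-IP:8080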

Successful node deployment

While Kubernetes is a bit more complicated than typical serverless architectures, it isn't anything to be scared of. Getting started is really as simple as setting up a cluster and deploying a new node. The Kubernetes lingo can be a little confusing at first, but like any new technology, it's just a matter of practice. With its automation capabilities and direct Docker integrations, Kubernetes is the perfect platform for building scalable and highly available applications with minimal overhead.


