A first introduction to Kubernetes


The objective of this article is to give you an overview of a Kubernetes system, first in theory and then through practice, to build a better understanding. It assumes the reader has some basic knowledge of Docker (the concepts of images and containers).

1. The arrival of Kubernetes

With the advent of container virtualization technology, most notably Docker, and the many benefits it brings, more and more products are being deployed in containers.

Let’s compare container-based deployment with the deployment approaches that have long been in use.

Traditional deployment: Applications are deployed directly on physical servers. When multiple applications run on the same server, they can conflict over resources. The fix is to run each application on its own physical server, but that drives up hardware costs.

Deployment on virtual machines: A physical server runs many different virtual machines, and each application is deployed on its own virtual machine. This prevents resource conflicts, saves costs, and makes it easier to scale applications.

Deployment in containers: A container is a virtualized environment similar to a virtual machine, but deploying an application in a container is lighter and more resource-efficient. The reason is that each virtual machine consumes extra resources for its own guest operating system, whereas different containers share the host's operating system.

However, deploying products to a production environment with Docker alone raises many difficulties, especially for large applications and systems:

  • Managing Docker hosts in bulk
  • Container scheduling
  • Rolling updates
  • Scaling / auto scaling
  • Monitoring the life cycle and status of containers (a container may shut down or fail unexpectedly)
  • Self-healing in case of an error (the ability to detect and correct errors automatically)
  • Service discovery
  • Load balancing
  • Data management
  • Log management
  • Integration and interoperation with other systems

And Kubernetes appeared to solve exactly these problems.

Kubernetes is an open-source platform that makes it easy to deploy, manage, and scale containerized applications automatically.

The name Kubernetes derives from the Greek word for helmsman (the person who steers a ship). It was introduced by Google in 2014, building on internal systems (the Borg project, and later its successor Omega) that had been used for years to deploy container applications across thousands, even tens of thousands, of servers.

Fittingly, the Kubernetes logo is shaped like a ship's steering wheel.

The picture below gives the most basic description of a Kubernetes system. Applications deployed with Kubernetes run on one or more clusters. A cluster has at least one master node and one or more worker nodes (a node is a physical machine or a virtual machine).

So what exactly is a cluster, and what do master nodes and worker nodes look like? We will find out in the next section.

2. The architecture of a cluster in Kubernetes

As introduced above, a cluster in Kubernetes has two main components:

  • Master node: coordinates and manages the entire Kubernetes system.
  • Worker node: responsible for running the application containers.

Going into more detail, the components of the master node and the worker node are:

Master node:

  • Kubernetes API Server: the central component of a Kubernetes cluster, through which the other components and users communicate.
  • Controller Manager: monitors the status (running or shut down) of containers as well as worker nodes, and is responsible for reacting when errors occur.
  • Scheduler: the coordinator; it selects which worker node each container should run on.
  • etcd: a distributed key-value store that the master node uses to persist the cluster's configuration and state.

Worker node:

  • Kubelet: runs on each worker node, communicates with the API Server on the master node, and manages the containers running on its node.
  • Kube-proxy: handles network traffic routing and load balancing for services.
  • Container Runtime: the container virtualization software, such as Docker, rkt, etc.
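
Once you have a cluster running (for example, the Minikube cluster set up in section 4), you can see these components from the command line. A small sketch using standard kubectl commands:

    # List the nodes in the cluster (master and workers) and their status
    kubectl get nodes

    # Show the address of the API Server and other cluster endpoints
    kubectl cluster-info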

3. Running an application on Kubernetes

To run an application on Kubernetes, we need to package the application's components as container images, push those images to a registry (such as Docker Hub), and finally submit a description of the application (an app descriptor) to the Kubernetes API server.

We take a look at the image below to better understand how applications are deployed on Kubernetes. The app descriptor lists four containers, grouped into three groups. The first two groups contain one container each, while the last contains two, meaning those two containers need to run together and should not be isolated from each other. Next to each group is the number of copies (replicas) of that group to deploy. After the descriptor is submitted to Kubernetes, the Scheduler component on the master node assigns each group to the available worker nodes. The Kubelets on those nodes then instruct Docker to pull the container images from Docker Hub and start the containers.
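
The article does not reproduce its descriptor, but as a rough sketch, a descriptor for one of the single-container groups might look like the following (the name web-server, the image example/web:1.0, and the replica count of five are all hypothetical):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-server
    spec:
      replicas: 5                  # number of copies of this group to run
      selector:
        matchLabels:
          app: web-server
      template:
        metadata:
          labels:
            app: web-server
        spec:
          containers:
          - name: web-server
            image: example/web:1.0   # image pulled from Docker Hub
            ports:
            - containerPort: 8080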

Maintaining a steady state of the containers

When the application is running, Kubernetes continually ensures that the deployed state of the application matches the description you provided. For example, if you specify that you always want five instances of a web server running, Kubernetes will keep exactly five instances running. If something unexpected happens, such as a container's process crashing or ceasing to respond, Kubernetes restarts it automatically.
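
You can watch this self-healing yourself. A sketch, assuming the hypothetical web-server deployment above is running (the pod name web-server-abc123 is a placeholder; take a real one from kubectl get pods):

    # Delete one of the running pods...
    kubectl delete pod web-server-abc123

    # ...then list the pods again: Kubernetes is already starting
    # a replacement to bring the count back to five
    kubectl get pods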

Similarly, if a whole worker node dies or becomes inaccessible, Kubernetes selects a new node for the containers that were running there and runs them on the newly selected node.

Scaling

While the application is running, we can decide to increase or decrease the number of container copies. And if we want, Kubernetes can take this work on itself: it can adjust the number automatically, based on real-time metrics such as CPU load, memory consumption, queries per second, or any other metric your application exposes.
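
Both forms of scaling map to one-line kubectl commands. A sketch, again using the hypothetical web-server deployment from above:

    # Manually set the number of replicas
    kubectl scale deployment web-server --replicas=10

    # Or let Kubernetes adjust the count automatically based on CPU load
    kubectl autoscale deployment web-server --min=3 --max=10 --cpu-percent=80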

4. Kubernetes demo on a local machine

In this section we will try deploying a simple application written in Node.js to Kubernetes, using Minikube as the tool for running Kubernetes locally. If you want to run Kubernetes in a production environment, you should look at tools such as Rancher or Helm instead.

We write the most basic Node.js application:

File app.js
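
The original listing is not included here; a minimal server in the same spirit, assuming port 8080 (the port the later steps map), could look like this:

    // app.js: the most basic HTTP server, replying with the host name
    const http = require('http');
    const os = require('os');

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('You have hit ' + os.hostname() + '\n');
    });

    server.listen(8080, () => {
      console.log('Server listening on port 8080');
    });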

Along with it is a Dockerfile in the same directory as app.js:
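
The original Dockerfile is likewise not shown; a minimal one along these lines would work (the node:14 base image tag is an assumption, any recent Node.js image will do):

    # Build on an official Node.js base image
    FROM node:14
    # Copy the application file into the image
    ADD app.js /app.js
    # Run the server when the container starts
    ENTRYPOINT ["node", "/app.js"]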

B1: Build the Docker image

We build an image named kuber:
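
Assuming the Dockerfile above sits next to app.js, the build command is:

    # Build an image tagged 'kuber' from the current directory
    docker build -t kuber .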

B2: Run a Docker container from the image

We map port 8080 in the container to port 8080 on the host machine:
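
A sketch of the run command (the container name kuber-container is just a label of our choosing):

    # Run the container in the background, mapping host port 8080
    # to container port 8080
    docker run -d --name kuber-container -p 8080:8080 kuber

    # Check that the server responds
    curl http://localhost:8080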

B3: Install Minikube

On Linux (refer to the official Minikube documentation for how to install it on other platforms):
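
One documented way to install it on Linux, per the official Minikube release channel (adjust the binary name for your architecture):

    # Download the latest Minikube binary and put it on the PATH
    curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    chmod +x minikube
    sudo mv minikube /usr/local/bin/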

B4: Start Minikube

The boot process will take a few minutes
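
The command itself is a single line; the cluster bootstrap that follows is what takes the time:

    # Create and start a local single-node Kubernetes cluster
    minikube start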

B5: Install the Kubernetes client (kubectl)

On Linux (refer to the official Kubernetes documentation for how to install kubectl on other platforms):
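
One documented way to install it on Linux, again per the official release channel:

    # Download the latest stable kubectl binary and put it on the PATH
    curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
    chmod +x kubectl
    sudo mv kubectl /usr/local/bin/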

B6: Deploy

Deploying applications directly with kubectl commands is the quicker, simpler way; in return, it cannot express the full range of configuration details that JSON or YAML descriptor files can. For beginners, though, plain kubectl is the better option.
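
A sketch of the deployment step, assuming the kuber image from B1 has been pushed to Docker Hub (<user> is a placeholder for your Docker Hub username):

    # Tag and push the image so the cluster can pull it
    docker tag kuber <user>/kuber
    docker push <user>/kuber

    # Create a deployment that runs the image in the cluster
    kubectl create deployment kuber --image=<user>/kuber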

B7: Expose the application with a LoadBalancer service

To gain access to the containers in the Kubernetes cluster from outside, we need to expose them through a service. A LoadBalancer-type service lets us do exactly that.
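
A sketch of the expose step (the service name kuber-http is our own choice):

    # Expose the deployment through a LoadBalancer-type service
    kubectl expose deployment kuber --type=LoadBalancer --port=8080 --name=kuber-http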

B8: Get the service
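
Listing the services shows the one we just created. Note that on Minikube the EXTERNAL-IP of a LoadBalancer service typically stays pending, so we ask Minikube for a reachable URL instead:

    # List the services in the cluster
    kubectl get services

    # Get a URL that works from the local machine
    minikube service kuber-http --url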

B9: Access the application

Now, through the service, we can access the server running in the deployed container:
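
A final check, using the URL Minikube reports for our kuber-http service:

    # Send a request to the deployed server through the service
    curl $(minikube service kuber-http --url)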



Article source: Viblo