Core concepts in Kubernetes (Part 2)

Tram Ho

1. ReplicaSets

In Part 1, we discussed ReplicationControllers, the Kubernetes component that manages the state of Pods and keeps the desired number of them available across the nodes.

ReplicaSets in Kubernetes have a role similar to ReplicationControllers ; more precisely, ReplicaSets were introduced to replace ReplicationControllers .

Comparing ReplicaSets and ReplicationControllers

  • ReplicationControllers can only be created directly, by writing a yaml configuration file or using kubectl on the command line. ReplicaSets , in addition to being initialized the same way as ReplicationControllers , are also created automatically when we initialize a Deployment object (we will learn about Deployments later).
  • ReplicaSets support set-based selectors, so a single ReplicaSet can match several values of the same label key. A ReplicationController can only match one value per label key. For example, a ReplicaSet can select Pods with the label env=production as well as env=development , while a ReplicationController can only select Pods with a single value such as env=development .
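As a minimal sketch of the set-based selector described above (the ReplicaSet name and container image are hypothetical, chosen only for illustration):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-rs            # hypothetical name
spec:
  replicas: 3
  selector:
    matchExpressions:         # set-based selector: matches several label values
      - key: env
        operator: In
        values:
          - production
          - development
  template:
    metadata:
      labels:
        env: production
    spec:
      containers:
        - name: app
          image: nginx        # placeholder image
```

A ReplicationController, by contrast, only supports an equality-based `selector` such as `env: development`, which is why it cannot target both label values at once.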

2. Volumes

As we know, the Kubernetes system creates new Pods to replace Pods that fail, die, or crash. So where does the data stored in the old Pods go? Will the new Pods get back the data of the old Pods so they can continue using it? The concept of Volumes helps solve these problems.

Volumes are components of Pods, defined in the yaml configuration file when creating the Pod. A container can mount a directory inside itself to a volume belonging to the same Pod.

Containers in a Pod mounting two volumes to share data with each other

Types of volumes

  • emptyDir
  • hostPath
  • gitRepo
  • nfs
  • Cloud volumes: gcePersistentDisk (Google Compute Engine Persistent Disk), awsElasticBlockStore (Amazon Web Services Elastic Block Store Volume), azureDisk (Microsoft Azure Disk Volume).
  • cinder, cephfs, iscsi, flocker, glusterfs, quobyte, rbd, flexVolume, vsphereVolume, photonPersistentDisk, scaleIO
  • configMap, secret, downwardAPI
  • persistentVolumeClaim

As you can see, there are many different volume types; within the scope of this article we cannot explore them all, so we will look at a few of them.

EmptyDir Volumes

emptyDir is the simplest volume type. It starts as an empty directory; containers can use an emptyDir volume to read, write, and share data with the other containers in the same Pod. When the Pod crashes or is deleted, the emptyDir volume is lost along with the data in it.

Creating an emptyDir volume (in the Pod's configuration file):
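The original configuration image is not reproduced here. Based on the description below, and on the fortune example from Kubernetes in Action that this walkthrough appears to follow, a sketch of the Pod manifest might look like this (the generator image name is an assumption taken from that book's example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fortune
spec:
  containers:
    - name: html-generator
      image: luksa/fortune          # assumed image, from Kubernetes in Action
      volumeMounts:
        - name: html                # mounts the shared volume at /var/htdocs
          mountPath: /var/htdocs
    - name: web-server
      image: nginx:alpine
      volumeMounts:
        - name: html                # same volume, mounted read-only
          mountPath: /usr/share/nginx/html
          readOnly: true
      ports:
        - containerPort: 80
  volumes:                          # the last lines: volume name and type
    - name: html
      emptyDir: {}
```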


Looking at the configuration file above, we see:

  • The Pod includes 2 containers: html-generator and web-server .
  • The html-generator container mounts the volume at /var/htdocs .
  • The web-server container mounts the volume at /usr/share/nginx/html as readOnly (the container can only read data from the volume).

In this example, the html-generator container regenerates the index.html file in the /var/htdocs folder every 10 seconds. Each time a new html file is created, it is written to the volume and the web-server container can read it. When a user sends a request to the nginx web-server container, the data returned is the latest index.html file.

  • The last 3 lines contain the volume's name and its type, emptyDir . By default, emptyDir uses the worker node's hard drive for storage. Alternatively, we can use the worker node's RAM as follows:
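A sketch of that alternative (volume name assumed), using the `medium` field of emptyDir:

```yaml
  volumes:
    - name: html
      emptyDir:
        medium: Memory   # store the volume in tmpfs (node RAM) instead of disk
```

Note that a RAM-backed emptyDir counts against the containers' memory limits, so it is best suited to small, fast scratch data.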

hostPath volume

As we know, with an emptyDir volume, data is lost when the Pod fails, is deleted, or crashes, because the volume is part of the Pod. With a hostPath volume, data stored in the volume is not lost when the Pod fails, because it lives outside the Pod, in the worker node's file system. When a new Pod is created to replace the old one, it can mount the hostPath volume and continue working with the old Pod's data, provided it is scheduled on the same worker node.
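A minimal hostPath sketch, assuming a hypothetical Pod name and node directory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo             # hypothetical name
spec:
  containers:
    - name: app
      image: nginx                # placeholder image
      volumeMounts:
        - name: node-data
          mountPath: /data        # path inside the container
  volumes:
    - name: node-data
      hostPath:
        path: /var/lib/app-data   # directory on the worker node (assumed)
        type: DirectoryOrCreate   # create the directory if it does not exist
```

Because the data lives on a specific node, hostPath is generally used for node-level concerns (logs, node agents) rather than application data in multi-node clusters.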

ConfigMap and Secret

Normally when programming applications, we put important variables (such as the DB url, password, secret key, DB name, etc.) into .env files as environment variables to keep them confidential. In the Kubernetes system, ConfigMap and Secret are the two volume types that store environment variables for use by the containers of different Pods. ConfigMap is used for environment variables that do not contain sensitive information. Secret , as its name implies, is used to store sensitive and important environment variables. Unlike other volume types, ConfigMap and Secret are defined in their own yaml files instead of in the yaml file that initializes the Pod.
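A sketch of the separation described above, with hypothetical object names and keys, showing a ConfigMap and a Secret defined on their own and then consumed by a Pod as environment variables:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  DB_NAME: mydb               # non-sensitive configuration
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret            # hypothetical name
type: Opaque
stringData:
  DB_PASSWORD: s3cret         # stored base64-encoded by Kubernetes
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx            # placeholder image
      env:
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DB_NAME
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_PASSWORD
```

Both objects can also be mounted as volumes, in which case each key appears as a file inside the container.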

3. Deployments

So far, the first component initialized in a Kubernetes system has been none other than Pods . And as we know, to manage the status of those Pods we also need to create ReplicationControllers, which is quite cumbersome. Imagine a large system with hundreds, thousands, or tens of thousands of Pods: it would be a pain to create a ReplicationController for each group of Pods according to its labels.

Kubernetes introduced the concept of Deployments to simplify this process. With Deployments , we just define the configuration and create a Deployment; the system automatically creates one or more corresponding Pods and a ReplicaSet to manage the status of those Pods. In addition, Deployments provide a mechanism that helps system administrators easily update and roll back the version of the application (the container version running in the Pods).

As an example, we can define a deployment called kubia: the replicas configuration always maintains 3 Pods, the running Pods have the label app=kubia , and the containers running in the Pods are built from the image luksa/kubia:v1 .
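Putting the details from the text together (name kubia, 3 replicas, label app=kubia, image luksa/kubia:v1), the Deployment manifest might look like this sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3                  # always maintain 3 Pods
  selector:
    matchLabels:
      app: kubia               # manage Pods carrying this label
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
        - name: kubia
          image: luksa/kubia:v1
```

Updating `image` to a new tag (e.g. `luksa/kubia:v2`) and re-applying the manifest is what triggers the rolling update and rollback mechanism mentioned above.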


Kubernetes in Action

Kubernetes Docs


Source : Viblo