Deploy Air-viewer on Kubernetes using Nginx ingress

Tram Ho

In the previous part I deployed NGINX along with sample services like tea and coffee. In this part I will deploy a web application with the architecture described in the picture above. I will not go into detail on the sensor part; you can refer to it here or via GitHub. We have services like Flask, MySQL, and Nuxt.js routed through nginx-ingress, and we use the worker nodes for the domain and the load balancer. Air-viewer, a project that measures the concentration of dust pollution in the air, is also built as microservices; you can find it at https://github.com/sun-asterisk-research/air-viewer

Clean up the previous section
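If the demo services from the previous part are still running, they can be removed first. A minimal sketch, assuming the tea and coffee demo was deployed from the cafe example manifests (the file names here are assumptions):

```bash
# Remove the demo app, its TLS secret, and its ingress from the previous part
kubectl delete -f cafe-ingress.yaml -f cafe-secret.yaml -f cafe.yaml
```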

Clone Project Air-viewer
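The project lives at the GitHub URL given in the introduction:

```bash
git clone https://github.com/sun-asterisk-research/air-viewer.git
cd air-viewer
```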

Installation, for those who have not read the previous post

I already installed kubernetes-ingress in the previous section, so I will go through this quickly for those who have not installed it yet. From now on, all commands are by default executed on the master node only.

Create a Namespace, a Service Account, the Default Secret, the Customization Config Map, and the Custom Resource Definitions
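These objects ship as manifests in the official controller repository. A sketch, assuming we clone nginxinc/kubernetes-ingress and work from its deployments folder (paths follow the 1.x layout):

```bash
git clone https://github.com/nginxinc/kubernetes-ingress.git
cd kubernetes-ingress/deployments
# Namespace and service account for the controller
kubectl apply -f common/ns-and-sa.yaml
```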

Create a secret with the TLS certificate and key for the NGINX default server:
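The repo ships a self-signed certificate and key for the default server:

```bash
kubectl apply -f common/default-server-secret.yaml
```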

Create a config map for customizing NGINX configuration
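The customization config map also comes with the repo:

```bash
kubectl apply -f common/nginx-config.yaml
```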

Create custom resource definitions for VirtualServer and VirtualServerRoute
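In the 1.x layout the two CRDs are separate files:

```bash
kubectl apply -f common/vs-definition.yaml
kubectl apply -f common/vsr-definition.yaml
```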

Configure RBAC
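Bind the cluster role to the service account created above:

```bash
kubectl apply -f rbac/rbac.yaml
```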

Deploy the Ingress Controller
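The controller can run either as a Deployment or as a DaemonSet; a Deployment is enough here. After applying, verify that the pod comes up:

```bash
kubectl apply -f deployment/nginx-ingress.yaml
kubectl get pods --namespace=nginx-ingress
```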

Configure Air-viewer

Create Namespaces
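All Air-viewer objects in the rest of this post live in the air-viewer namespace:

```bash
kubectl create namespace air-viewer
```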

Deploy MySQL

To deploy MySQL we need a persistent volume, so the database is not lost when the container goes away, and Secrets holding the root and user passwords, so that the passwords do not show up in plain text when others managing the services run `kubectl describe deployment mysql`. Finally, we write the files containing the service and the deployment.

Secret
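Values in a Secret's data field must be base64-encoded; for example, encoding the root password:

```bash
echo -n 'root' | base64
```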

Output: cm9vdA==
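A minimal sketch of the two Secrets, mysql-secrets for root and mysql-pass-non-root for the application user (the key names and the non-root values are assumptions):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secrets
  namespace: air-viewer
type: Opaque
data:
  password: cm9vdA==        # base64 of "root"
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass-non-root
  namespace: air-viewer
type: Opaque
data:
  username: YWlydmlld2Vy    # base64 of "airviewer" (assumed user name)
  password: cm9vdA==        # assumed non-root password
```

Apply both with kubectl apply -f.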

We can list the secret in the air-viewer namespace
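For example:

```bash
kubectl get secrets --namespace air-viewer
```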

Output:

Describing mysql-secrets shows its information; mysql-pass-non-root is similar
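For example:

```bash
kubectl describe secret mysql-secrets --namespace air-viewer
```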

Output:

Persistent Storage

Containers are ephemeral: any changes made inside a running container are lost when the container stops. Since containers alone are not suitable for storing a database, we must mount a persistent volume into MySQL's pods.
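A sketch of the claim, backed here by a hostPath volume that is fine for a lab but not for production; the claim name mysql-pv-claim matches what the deployment below mounts, while the capacity and the path are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 5Gi            # assumed size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/mysql   # assumed path on the node; lab use only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: air-viewer
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```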

Deployment

Below you can see the mapping between the deployment, the secret, and mysql-pv-claim.
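A sketch of the deployment, assuming the stock mysql:5.7 image; the root password is injected from mysql-secrets and the data directory is mounted from mysql-pv-claim:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: air-viewer
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate          # a database pod should not run two copies at once
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7    # assumed image
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
```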

Check the status of the mysql deployment
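For example:

```bash
kubectl get deployment mysql --namespace air-viewer
```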

output:

Access the internal environment of the created pods
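First find the pod's name:

```bash
kubectl get pods --namespace air-viewer -l app=mysql
```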

output:
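Then open a MySQL shell inside the pod, using the root password from the secret:

```bash
kubectl exec -it <mysql-pod-name> --namespace air-viewer -- mysql -u root -p
```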

output:

Service

Build a Service for the pods
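A sketch of the service; a headless service (clusterIP: None) is a common choice for a single stateful MySQL pod, and the service name mysql is what the backend will later use as its database host:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: air-viewer
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
```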

Review:
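For example:

```bash
kubectl get svc --namespace air-viewer
```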

output:

Deploy Backend (Flask, uWSGI)

Deployment

This deployment uses the user and password from the secret created earlier.
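A sketch of the backend deployment; the image name and the environment variable names are assumptions (they depend on how the Flask app reads its configuration), but they show the wiring to the secret and to the mysql service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: air-viewer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: airviewer/backend:latest   # hypothetical image name
        ports:
        - containerPort: 5000             # assumed uWSGI port
        env:
        - name: MYSQL_HOST
          value: mysql                    # the MySQL service created above
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-pass-non-root
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass-non-root
              key: password
```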

Check:
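For example:

```bash
kubectl get pods --namespace air-viewer -l app=backend
```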

output:

Service
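A sketch of the service in front of the backend pods (port 5000 is the assumed uWSGI port from the deployment above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
  namespace: air-viewer
spec:
  ports:
  - port: 5000
    targetPort: 5000
  selector:
    app: backend
```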

Deploy Frontend (Nuxt.js)

Deployment
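A sketch of the frontend deployment; the image name is hypothetical, and port 3000 is Nuxt.js's default:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: air-viewer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: airviewer/frontend:latest  # hypothetical image name
        ports:
        - containerPort: 3000             # Nuxt.js default port
```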

Service
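And the matching service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: air-viewer
spec:
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: frontend
```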

Check:
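For example:

```bash
kubectl get all --namespace air-viewer
```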

Output:

Ingress controller config

VirtualServer: the VirtualServer resource configures load balancing for a domain

In this part I rely on the VirtualServer and VirtualServerRoute resources; you can learn more here

Secret for the domain framgia2c.mylabserver.com
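A TLS secret for the domain, created from a certificate/key pair (the file names are assumptions; a self-signed pair is enough for a lab). The secret name air-viewer-secret is referenced by the VirtualServer below:

```bash
kubectl create secret tls air-viewer-secret \
  --namespace air-viewer \
  --cert=framgia2c.mylabserver.com.crt \
  --key=framgia2c.mylabserver.com.key
```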

Backend Virtual Server Route

Configure the backend's route with the prefix /api
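A sketch using the VirtualServerRoute resource from the CRDs installed earlier; the upstream points at the backend service and port assumed above:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: backend-air-viewer
  namespace: air-viewer
spec:
  host: framgia2c.mylabserver.com
  upstreams:
  - name: backend
    service: backend-svc
    port: 5000
  subroutes:
  - path: /api
    action:
      pass: backend
```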

Frontend Virtual Server Route

Configure the frontend’s route with the prefix /
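The frontend route has the same shape, with / as the prefix:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: frontend-air-viewer
  namespace: air-viewer
spec:
  host: framgia2c.mylabserver.com
  upstreams:
  - name: frontend
    service: frontend-svc
    port: 3000
  subroutes:
  - path: /
    action:
      pass: frontend
```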

Air-viewer Virtual Server

Map the path rules for the NGINX controller
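A sketch of the VirtualServer that ties together the domain, the TLS secret, and the two routes; each route field delegates a path to a VirtualServerRoute given as namespace/name:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: air-viewer
  namespace: air-viewer
spec:
  host: framgia2c.mylabserver.com
  tls:
    secret: air-viewer-secret
  routes:
  - path: /api
    route: air-viewer/backend-air-viewer
  - path: /
    route: air-viewer/frontend-air-viewer
```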

frontend-air-viewer is applied in the same way as backend-air-viewer:
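Assuming the three manifests above are saved to files (the names are illustrative):

```bash
kubectl apply -f backend-virtual-server-route.yaml
kubectl apply -f frontend-virtual-server-route.yaml
kubectl apply -f virtual-server.yaml
```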

output:

Check virtualserver air-viewer
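For example:

```bash
kubectl get virtualserver air-viewer --namespace air-viewer
```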

output:

Check for browser access

Visit https://framgia2c.mylabserver.com/api/ and we reach the backend service thanks to the /api prefix rule

output:

Visit https://framgia2c.mylabserver.com/en/faq and we are taken to the Air-viewer FAQ

output:

Since we do not currently have a Pi + sensor client collecting air data and sending it to this domain, the data will be empty, just like at http://airviewer.sun-asterisk.vn/

Because the server is for learning purposes, it may run for only a few hours before being shut down, after which the domain will no longer be accessible; please bear with me.

A few small experiments

Experiment 1 (eliminating 1 worker node)

We will shut down worker node 2, which serves the domain framgia3c.mylabserver.com; this affects neither the current domain nor the master node. We can see that the app is still running normally, and checking the pods shows they have been moved to worker node 1 to replace the pods that were shut down.
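Pod placement can be verified with the wide output, which shows which node each pod landed on:

```bash
kubectl get pods --namespace air-viewer -o wide
```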

Experiment 2 (completely removing the master node)

We have 3 nodes running, 1 master and 2 workers, and we try shutting down the master node. The app still runs as usual; only the calls that Nuxt.js's nuxtServerInit makes server-side to the Flask API go dead, while the other services and APIs keep working normally. It is like an organization whose leader takes a day off: the members keep working as usual, but if the schedule needs to change or a member runs into trouble (a worker node shutting down at that moment), our web app will surely die, because the pods can no longer be coordinated once the master node stops managing operations.

Through these two experiments we can see how Kubernetes delivers zero downtime.

End

This implementation is still incomplete; we would also need to combine other DevOps tools to manage monitoring and logging, and to collect and analyze reports on the system's status. There are many other things that I am still shaky on. I promise to cover them in a future article.


Source: Viblo