Today’s tutorial shows you how to work with Kubernetes (K8s) on Google Cloud using commands instead of the web UI. There are many advantages to not relying too heavily on the web interface. There are two ways to do this:
- Use **Cloud Shell**
  - Activate Cloud Shell following this guide
  - gcloud and kubectl (the Kubernetes command-line tool) are already available in Cloud Shell
- Use **command-line tools locally**
  - Install gcloud on your local machine from here
  - Use gcloud to install kubectl:
```
gcloud components install kubectl
```
If the installation fails, you can follow this guide instead.

The preparations are complete; now on to the main part.
- Configure the default gcloud parameters:
```
gcloud init
```
Follow the prompts and choose the appropriate parameters:
- Sign in to your Google account (on first use)
- Select the default project
- Select the default zone (which determines the default region). In this example I choose the zone us-central1-a, located in region us-central1.
Once configuration is done, you can review your settings:
```
gcloud config list
```
The result will look like this (*** is where I mask my personal information):
```
[compute]
region = us-central1
zone = us-central1-a
[core]
account = ***@gmail.com
disable_usage_reporting = True
project = ***
```
- Create a cluster. In this example I create an Autopilot cluster named my-cluster:
```
gcloud container clusters create-auto my-cluster --region us-central1
```
Wait about five minutes for the system to create the cluster.
- Connect to the cluster:
```
gcloud container clusters get-credentials my-cluster --region us-central1
```

Output:

```
Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-cluster.
```
- Create a deployment from the sample image nginx:latest, pulled from Docker Hub:
```
kubectl create deployment hello-app --image=nginx:latest
```

Output:

```
deployment.apps/hello-app created
```
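If you prefer the declarative style, here is a minimal sketch of a Deployment manifest roughly equivalent to the command above (the file name hello-app-deployment.yaml is my own choice):

```yaml
# hello-app-deployment.yaml — a minimal sketch, not the exact manifest
# that kubectl generates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  labels:
    app: hello-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: nginx          # kubectl derives the container name from the image
          image: nginx:latest
```

You would apply it with `kubectl apply -f hello-app-deployment.yaml`.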
- Create a HorizontalPodAutoscaler for the deployment, so the system scales it automatically based on CPU usage percentage:
```
kubectl autoscale deployment hello-app --cpu-percent=80 --min=1 --max=5
```
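The same autoscaler can also be written declaratively. A sketch using the autoscaling/v2 API (the object name hello-app is assumed to match the deployment, as `kubectl autoscale` does):

```yaml
# hello-app-hpa.yaml — declarative sketch of the autoscale command above
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-app
spec:
  scaleTargetRef:              # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: hello-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scale out above 80% average CPU
```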
You can check the pods that have been created:
```
kubectl get pods
```
- The pods are running, but only on the cluster's internal network; to use these services from outside, we need to create a service. Because nginx listens on port 80, we set --target-port=80. To open the link from outside on port 8080, we set --port=8080. We expose it with type LoadBalancer so that, when there are multiple pods, requests are distributed across them:
```
kubectl expose deployment hello-app --name=hello-app-service --type=LoadBalancer --port 8080 --target-port 80
```
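Declaratively, the same service is roughly this manifest (a sketch, assuming the pods carry the app=hello-app label that `kubectl create deployment` applies):

```yaml
# hello-app-service.yaml — declarative sketch of the expose command above
apiVersion: v1
kind: Service
metadata:
  name: hello-app-service
spec:
  type: LoadBalancer         # provisions an external load balancer
  selector:
    app: hello-app           # routes traffic to pods with this label
  ports:
    - port: 8080             # port exposed to the outside
      targetPort: 80         # port nginx listens on inside the pod
```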
- Check the services that have been created
```
kubectl get service
```
Output:

```
NAME                TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
hello-app-service   LoadBalancer   10.62.1.116   <pending>     8080:31699/TCP   29s
kubernetes          ClusterIP      10.62.0.1     <none>        443/TCP          49m
```
Initially, EXTERNAL-IP is `<pending>`. Wait a moment and check again; the system will provision and assign an external IP to the service, and service creation is then complete.
```
NAME                TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
hello-app-service   LoadBalancer   10.62.1.116   35.184.165.64   8080:31699/TCP   25m
kubernetes          ClusterIP      10.62.0.1     <none>          443/TCP          73m
```
In my case, I could then access the web server at http://35.184.165.64:8080/.
- When you want to update the deployment to a new image version:
```
kubectl set image deployment/hello-app nginx=nginx:1.16.1
```
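In the declarative style, the equivalent is to change the image tag in the Deployment manifest and re-apply it. A minimal sketch, using the same assumed manifest fields as above:

```yaml
# hello-app-deployment.yaml with only the image tag changed;
# re-apply with: kubectl apply -f hello-app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  labels:
    app: hello-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.16.1   # was nginx:latest
```

Either way, Kubernetes performs a rolling update, replacing the old pods gradually with new ones.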
- When no longer needed, you can delete the service and the cluster (the cluster name must match the one created earlier, my-cluster):

```
kubectl delete service hello-app-service
gcloud container clusters delete my-cluster --region us-central1
```