How I started learning DevOps. Part 2: Configure

Tram Ho


In Part 1, I mentioned that the work of a DevOps engineer is building fully automated, digital pipelines that move code from a developer’s machine to production. Doing this work effectively requires a basic understanding of the fundamentals, described as follows:

It also requires a good command of the tools and skills (illustrated below) that build on these fundamentals. Note: your goal is to learn the blue items first, from left to right, and then the purple items, from left to right. There are six columns in total, one for each month.

OK, back to the topic. In this article, we will focus entirely on the first stage of the digital pipeline: Configure.


What happens in the Configure phase? Since the code we write needs machines to run on, the Configure stage is where we build the foundation for running that code. In the past, provisioning infrastructure was a lengthy process: it required many people and invited mistakes. Today, thanks to the cloud, provisioning can be done with just one click. Or, at most, lots of clicks. It turns out, though, that clicking through these tasks is a bad idea. Why?

  1. It is error-prone (people make mistakes)
  2. It is not versioned (clicks can’t be saved in git)
  3. It is not repeatable (multiple machines = multiple rounds of clicking)
  4. It is not testable (you don’t know whether your clicks will work or mess things up)

For example, think of all the work needed to provision your dev environment, then staging … then prod in the US … then prod in the EU … It becomes tedious and annoying very fast. A new approach was clearly needed. That new approach is infrastructure-as-code, and it is why the Configure phase is so important.

As a best practice, infrastructure-as-code mandates that whatever work is needed to provision computing resources must be done via code only.

Note: by “computing resources” I mean everything needed to properly run prod: compute, storage, networking, databases, etc. A typical infrastructure-as-code workflow looks like this:

  1. Describe the desired infrastructure state in Terraform
  2. Store it in source control (e.g., git)
  3. Go through a Pull Request to get feedback
  4. Test it
  5. Apply it to provision all the necessary resources
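To make the workflow above concrete, here is a minimal sketch of a Terraform configuration. All names, the region, and the AMI ID are placeholders chosen for illustration, not values from the article:

```hcl
# A minimal Terraform configuration: one file, committed to git,
# reviewed via Pull Request, then applied to provision a single VM.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "example-web"
  }
}
```

Running `terraform plan` previews what would change, and `terraform apply` provisions the resources; because the file lives in git, every change is versioned, reviewable, and repeatable.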

Why Terraform and not the others?

Now, there is an obvious question: “Why Terraform? Why not Chef or Puppet or Ansible or CFEngine or Salt or CloudFormation or something else?” That is a good question! Many posts have been written about this. In short, I think you should learn Terraform because: 1. It is trending and in high demand on the job market. 2. It is easier to learn than the alternatives. 3. It is multi-cloud.

That said, you can absolutely choose one of the alternatives and still succeed.

Note: this space is crowded and quite tangled. I want to take a few minutes to talk about the current landscape and where I see things heading.

Traditionally, Terraform and CloudFormation have been used to provision infrastructure, while Ansible has been used to configure it.

You can think of it this way: Terraform lays the foundation, Ansible builds the house on top, and then, if you wish, Ansible also deploys the application (it can do that too).

In other words, you create VMs with Terraform and then use Ansible to configure those servers, and potentially also to deploy your applications. That is why the two often go together. However, note that Ansible can do much of what Terraform does, and the opposite is largely true as well.
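A sketch of that division of labor is below. Terraform provisions the VM and exports its address, and Ansible then configures the host over SSH; the resource names and AMI ID are hypothetical:

```hcl
# Sketch only: Terraform provisions the VM and exposes its public IP,
# which can then be fed to Ansible as an inventory entry.
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
}

output "app_public_ip" {
  value = aws_instance.app.public_ip
}
```

After `terraform apply`, something like `ansible-playbook -i "$(terraform output -raw app_public_ip)," site.yml` would hand the freshly provisioned host to Ansible for configuration (the playbook name here is made up).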

Don’t let that confuse you. Just understand that Terraform is one of the dominant players in the infrastructure-as-code space, which is why I insist you use it when getting started.

The truth is, right now expertise in Terraform + AWS is one of the hottest skill combinations on the market.

Immutable Deployments

In fact, I predict that configuration management tools such as Ansible will diminish in importance, while infrastructure provisioning tools like Terraform or CloudFormation will increase in importance.


Why? Because of something called “immutable deployments”.

In the simplest terms, immutable deployments means you never modify deployed infrastructure. In other words, your unit of deployment is a VM or a Docker container, not a piece of code.

So, you don’t deploy code onto a static set of virtual machines; you deploy whole VMs with the code already baked in.

You don’t change the configuration of running VMs; you deploy new VMs with the desired configuration.

You don’t patch prod machines; you deploy new machines that already have the patches applied.

You don’t run one VM image in dev and a different one in prod; they are the same.

In fact, you can safely disable all SSH access to all prod machines, because there is nothing to do there: no settings to change, no logs to view (more about logs later).

You see where this is going?

When used correctly, this is a very powerful model and I highly recommend it!

Note: immutable deployments mandate that configuration be kept separate from your code. Please read the 12 Factor App, which covers this in detail along with many other great ideas; it is well worth reading for DevOps practitioners.

It’s important to separate code from configuration: you certainly don’t want to re-deploy the entire application stack every time you change your DB password. Instead, make sure the application retrieves it from an external configuration store (SSM / Consul / etc.).
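For example, with Terraform you could keep the password in AWS SSM Parameter Store rather than baking it into the application image. The parameter path and variable name below are made up for illustration:

```hcl
# Sketch: store the DB password in SSM Parameter Store so the
# application reads it at runtime instead of carrying it in the image.
variable "db_password" {
  type      = string
  sensitive = true # supplied out-of-band, never committed to git
}

resource "aws_ssm_parameter" "db_password" {
  name  = "/myapp/prod/db_password" # hypothetical parameter path
  type  = "SecureString"
  value = var.db_password
}
```

Rotating the password then means updating one parameter, not re-deploying the stack.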

Moreover, you can easily see why, as immutable deployments have taken off, tools like Ansible have begun to play a less prominent role.

The reason is that you only need to configure a server image once, then deploy it any number of times as part of an auto-scaling group.
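Here is a sketch of what that looks like in Terraform, assuming releases are baked into AMIs under a hypothetical `myapp-release-*` naming convention (all names and the availability zone are placeholders):

```hcl
# Sketch of an immutable deployment: every release is a new pre-baked
# AMI, and deploying means pointing the launch template at that image.
data "aws_ami" "app_release" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["myapp-release-*"] # hypothetical AMI naming convention
  }
}

resource "aws_launch_template" "app" {
  name_prefix   = "myapp-"
  image_id      = data.aws_ami.app_release.id
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "app" {
  min_size           = 2
  max_size           = 4
  availability_zones = ["us-east-1a"] # placeholder

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}
```

No server is ever reconfigured in place: a new release means a new AMI, and the auto-scaling group rolls out fresh instances from it.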

Or, if you are working with containers, you almost certainly want immutable deployments by definition: you don’t want your dev containers to differ from your QA and prod containers.

You want the exact same container in all of your environments. This avoids configuration drift and simplifies rollbacks when a problem occurs.

All that aside, for beginners, provisioning AWS infrastructure with Terraform is textbook DevOps and something you really need to master.

But wait … don’t you need to look at logs to fix problems? Well, you won’t be logging into machines to look at logs anymore; instead, you’ll be looking at your centralized logging infrastructure for all your logs.

In fact, there are many detailed articles on how to deploy an ELK stack in AWS; read them if you want to see how this works in practice.

Once again, you can disable remote access completely and feel good about being more secure than most people out there!


The path to becoming a DevOps engineer begins with the resources needed to run your code: the Configure phase. And the best way to handle that phase is through immutable deployments.

Finally, if you’re wondering where to start, Terraform + AWS is the combo to begin the journey with.

The article also mentions some tools and concepts you should explore or dig into more deeply:

  1. Immutable Deployments
  2. 12 Factor App
  3. Terraform
  4. Config store (SSM / Consul)
  5. Deploy an ELK stack in AWS

This part contains a lot of theory and potentially confusing ideas without concrete examples. In the next parts I will mix in practical sections to make things easier to follow; perhaps I will deploy a WordPress blog on AWS with Terraform and Ansible as a demo. I hope you will support me.



Source : Viblo