Today, software is commonly delivered as services, known as web apps or software-as-a-service (SaaS). Increasingly complex web apps require an approach that lets applications be built quickly while remaining easy to deploy and scale. The Twelve-Factor App methodology was born to solve these problems.
The Twelve-Factor App is built on the following criteria:
- Use declarative formats for setup automation, to minimize the time and effort required by new developers joining a project.
- Have a clean contract with the underlying operating system, offering maximum portability between execution environments.
- Are suitable for deployment on cloud platforms, reducing the need for servers and system administration.
- Minimize divergence between environments (development and production) to achieve maximum agility in continuous deployment.
- Can scale up without significant changes to tools, architecture, or development practices.
The Twelve-Factor App is not bound to any programming language or software stack, so it can be applied to a wide variety of applications.
The Twelve-Factor App is becoming more and more popular as companies gradually switch to container technology to build better services. It is a set of twelve factors distilled from the experience of developing and operating hundreds of services, by experts experienced in software development. The factors are:
- Code base
- Dependencies
- Config
- Backing services
- Build, release, run
- Processes
- Port binding
- Concurrency
- Disposability
- Dev / prod parity
- Logs
- Admin processes
1. Code base
The first of the twelve factors is the code base. This factor means you need to manage and track the code base through a version control system such as GitHub, GitLab, Subversion, and so on. Each application should be associated with exactly one code base, managed in one repo (a 1-1 relation).
- If there are many code bases, it is not an application but rather a distributed system. Each component in the distributed system is considered a separate application, and the twelve factors can be applied to each one individually.
- Multiple apps that share part of their source code violate the Twelve-Factor App guidelines. In this case the shared source code should be split into a library and included as a dependency.
One note when managing a code base in a version control system: the team should adopt shared conventions, such as git-flow, for naming branches, writing commit messages, and so on. This makes the code base easier to manage and simplifies setting up a CI/CD pipeline.
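As a small sketch, a team could even enforce such a naming convention automatically, for example in a CI check or a git hook. The exact pattern below is a hypothetical team rule inspired by git-flow, not part of git-flow itself:

```python
import re

# Hypothetical git-flow-style branch naming rule (an illustration, not an
# official specification): long-lived branches "main"/"develop", plus
# prefixed short-lived branches like "feature/user-login".
BRANCH_PATTERN = re.compile(
    r"^(main|develop|(feature|release|hotfix)/[a-z0-9][a-z0-9._-]*)$"
)

def is_valid_branch(name: str) -> bool:
    """Return True if the branch name follows the team convention."""
    return BRANCH_PATTERN.match(name) is not None

print(is_valid_branch("feature/user-login"))  # True
print(is_valid_branch("Fix stuff"))           # False
```

A check like this could run in the CI pipeline mentioned above, rejecting pushes whose branch names do not match the convention.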
2. Dependencies
Dependencies are the packages and libraries that must be installed for an application to run. When it comes to dependencies, we need to consider two things:
Dependency declaration: Common programming languages come with a tool to manage dependencies (a package manager), such as Node.js (npm, yarn), Ruby (gem), Python (pip), and Java (Maven, Gradle). These tools let you declare dependencies explicitly in a manifest: Ruby (Gemfile), Node.js (package.json), Java (pom.xml), Python (requirements.txt), and so on. Declaring dependencies explicitly simplifies the setup process for a new developer.
Dependency isolation: Dependencies must be isolated per app. Why is this important?
For example, suppose two applications run on the same machine: application A uses library X version 1.0, while application B uses library X version 1.1. If dependencies are not isolated and both applications implicitly depend on a system-wide copy of library X, the versions can conflict and leave one or both applications inoperable. To solve this, dependencies should be explicitly versioned and installed within each application's own environment. Common programming languages provide tools for this, such as Ruby (Bundler) and Python (virtualenv). In addition, we can use container technology like Docker to create isolated environments that separate an application and its dependencies from the host machine.
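As a minimal sketch of what isolation looks like in practice, a Python process can check whether it is running inside an isolated environment. This check works for `venv`-style environments on Python 3; classic virtualenv releases before v20 used a different marker (`sys.real_prefix`), so treat this as an illustration rather than a universal test:

```python
import sys

def in_virtualenv() -> bool:
    """Detect a venv-style isolated environment.

    Inside a venv, sys.prefix points at the environment directory while
    sys.base_prefix still points at the system interpreter; outside a
    venv, the two are equal.
    """
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print(in_virtualenv())
```

A startup script could use a check like this to refuse to run against the system-wide site-packages, forcing dependencies to stay isolated per app.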
3. Config
An application can run in many different environments, such as development, staging, and production. Managing configuration well enables flexible switching between environments. We certainly do not want the crude approach of hard-coding values and revising the source code every time the environment changes. There are two common ways to externalize config: store it in a file, or use environment variables.
The Twelve-Factor App encourages the use of environment variables to change the runtime configuration, so the app can be deployed without changing source code, and to avoid frustrating situations such as accidentally pushing production configs to the version control system.
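A minimal sketch of environment-variable config in Python might look like this. The variable names (`DATABASE_URL`, `LOG_LEVEL`, `PORT`) and the development defaults are assumptions for illustration, not a fixed standard:

```python
import os

def load_config() -> dict:
    """Read all environment-specific values from environment variables,
    falling back to safe defaults suitable for local development."""
    return {
        "database_url": os.environ.get(
            "DATABASE_URL", "mongodb://localhost:27017/dev"
        ),
        "log_level": os.environ.get("LOG_LEVEL", "DEBUG"),
        "port": int(os.environ.get("PORT", "8000")),
    }

# In production the platform sets these variables; simulate that here:
os.environ["PORT"] = "9090"
print(load_config()["port"])  # 9090
```

Because no value is hard-coded, the same build can run in development, staging, or production with only the environment changing, and no config file ever needs to be committed to version control.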
4. Backing services
Normally, an application deployed in a production environment needs to connect to several different services (database, mail server, cache server, and so on). According to the Twelve-Factor App, these services should be treated as attached resources, separate from the application and swappable through config changes alone. This makes it easy to change or upgrade a backing service without touching the running application.
For example, in a local environment we might connect to a MongoDB database running locally; when deploying, we simply edit the server's config so the application connects to MongoDB on a cloud service such as MongoDB Atlas, without modifying or affecting the application itself.
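The MongoDB example above can be sketched as follows. The application identifies the database only by a URL taken from config; `MONGO_URL` is an assumed variable name, and the cluster hostname is a made-up placeholder:

```python
import os
from urllib.parse import urlparse

def mongo_target() -> str:
    """Return the hostname of the MongoDB backing service, taken entirely
    from config. The application code never names a specific server."""
    url = os.environ.get("MONGO_URL", "mongodb://localhost:27017/app")
    return urlparse(url).hostname

# Local development: the default config points at localhost.
print(mongo_target())  # localhost

# Switching to a hosted cluster is a config change only; no code changes:
os.environ["MONGO_URL"] = "mongodb://cluster0.example.mongodb.net:27017/app"
print(mongo_target())  # cluster0.example.mongodb.net
```

In a real application the URL would be passed to a MongoDB client library; the point is that the local database and the cloud database are interchangeable resources distinguished only by config.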
5. Build, release, run
Deploying the application from source code requires the following steps:
- Build: Get source code, install dependencies, build code into binary (depending on language).
- Release: Combines the build with the environment's config to create a release, ready to run in the execution environment.
- Run: Executes the application in the execution environment by launching the application's processes against a selected release.
Splitting deployment into these three steps has the following benefits:
- Prevents changing source code directly while the application is running.
- Easy to rollback when the system has problems.
- Easy to restart the application when a process or server crashes.
One point to note: each release should be assigned a unique ID, and any change must produce a new release with a new ID, so that we can roll back to an old version easily and accurately.
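The release step above can be sketched as a small in-memory model. This is an illustration of the idea (immutable releases with unique IDs, rollback by ID), not a real deployment tool; the function and field names are invented:

```python
import itertools

# Monotonically increasing release IDs: every change gets a new ID.
_ids = itertools.count(1)
releases: dict[int, dict] = {}

def cut_release(build: str, config: dict) -> int:
    """Combine a build with a config snapshot into an immutable release."""
    release_id = next(_ids)
    releases[release_id] = {"build": build, "config": dict(config)}
    return release_id

def run(release_id: int) -> str:
    """Run any past release by ID; rollback is just running an old ID."""
    r = releases[release_id]
    return f"running build {r['build']} with config {r['config']}"

v1 = cut_release("app-1.0.0", {"PORT": "8000"})
v2 = cut_release("app-1.0.0", {"PORT": "9000"})  # config change => new release
print(run(v1))  # the old release is intact and can be run again at any time
```

Because a release is never mutated after it is cut, rolling back is simply running an earlier ID, which is exactly the benefit the three-step split provides.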
6. Processes
The Twelve-Factor App recommends running the application as one or more stateless processes.
Stateless process: a process that does not store application state or the results of handling a request or transaction. After a request is processed, the data in memory is discarded.
Stateful process: a process that stores state or processing results in its own memory.
If we want to persist the application's state, it must be saved in a backing service. For example, using sticky sessions in a load balancer stores each user's data, requests, or transactions in the memory of an individual server, which violates the Twelve-Factor App. Instead we can use Redis or Memcached as a backing service to store that data.
Using a stateless process will bring benefits such as:
- Easy to deploy, change config, or restart without fear of losing data, because data is stored in backing services, not in process memory.
- Stateless processes are fully independent of one another, so the app scales easily by increasing the number of processes.
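The sticky-session example above can be sketched as follows. A plain dict stands in for a backing service such as Redis or Memcached; the handler itself keeps nothing in process memory, so any process instance can serve any request:

```python
# Stand-in for a backing service (e.g. Redis/Memcached); in production this
# would be a shared external store, not a dict inside the process.
session_store: dict[str, dict] = {}

def handle_request(session_id: str, item: str) -> int:
    """Add an item to the user's cart and return the cart size.

    The handler is stateless: all session data lives in session_store,
    so two different processes handling two requests for the same user
    see the same cart.
    """
    cart = session_store.setdefault(session_id, {"items": []})
    cart["items"].append(item)
    return len(cart["items"])

# These two calls could be served by two different processes behind a load
# balancer; the result is the same because the state is in the shared store.
handle_request("user-42", "book")
print(handle_request("user-42", "pen"))  # 2
```

With a sticky session, the second request would have to reach the same server that held the cart in memory; with a backing store, any process will do, which is what makes horizontal scaling trivial.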
In this article I introduced the first six factors of the Twelve-Factor App. You may already have applied these factors in your projects without their being explicitly codified. In the next article I will introduce the remaining factors (or you can read more in the reference documents). I hope the article is useful and of interest to everyone.