10 things you should know about Docker

Ngoc Huynh

Container technology isn’t new, but that hasn’t stopped Docker from taking the world by storm.

If you work in enterprise IT, you’ve heard of Docker. Even among hot technologies like Puppet, Hadoop, and MongoDB, Docker stands out.

Before you download Docker and give it a spin, however, there are 10 things you should know about the super-popular container technology.

1: Docker is a way to package and distribute software

A modern software system comprises many parts, including binaries, libraries, configuration files, and dependencies. It’s hard enough to assemble such diverse components on a single machine, and it’s dramatically more complex when you “ship” that software: you need a way to package all of these pieces together and put them wherever they need to run. Docker is a container technology that makes it easy to package software, along with all its dependencies, and ship it to the developer across the room, to staging or production, or wherever it needs to run.
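
In day-to-day use that packaging step is usually a Dockerfile plus the docker CLI (docker build, then docker run), but the same workflow can be driven programmatically. The sketch below is a minimal illustration, assuming the Docker SDK for Python (pip install docker) and a local Docker daemon; the demo-app tag and the inline Dockerfile are made up for this example.

```python
import io
import docker

# Talk to the local Docker daemon (assumes it is running and you have access).
client = docker.from_env()

# A minimal, self-contained Dockerfile: the application and its runtime
# are described in one place.
dockerfile = io.BytesIO(b"""
FROM python:3.11-slim
CMD ["python", "-c", "print('hello from inside a container')"]
""")

# Build an image from that Dockerfile and tag it (hypothetical tag).
image, build_logs = client.images.build(fileobj=dockerfile, tag="demo-app:latest")

# Run the packaged software anywhere a Docker daemon exists.
output = client.containers.run("demo-app:latest", remove=True)
print(output.decode())
```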

2: Docker isn’t particularly new

Docker has been around for only a few short years, but container technology has been with us for decades. While containers proved useful in the mainframe era, Docker has hit its stride now due to a confluence of factors, including the prominence of Linux, the spread of virtualization, and the cloud’s erosion of the importance of operating systems.

3: Just about everyone offers Docker

No matter who your chosen vendor happens to be, odds are roughly 100% that they support Docker. From Amazon Web Services to Red Hat to Google, everyone loves Docker.

4: Docker isn’t just for Linux

Docker’s roots are in Linux, but Microsoft has embraced it in a big way. Or it will. Docker depends on Linux technologies like Linux Containers (LXC) and the cgroups and namespaces capabilities, which don’t currently exist in Windows. So Microsoft is furiously working on building out such hooks to enable Docker containers to run on Windows Server, too. Microsoft has been running its own containerization technology on Windows for years, but the company is broadening its approach to also support the community standard, Docker.

5: Docker lets you allocate specific amounts of CPU, memory, and disk resources to each process, just like virtual machines

At the heart of Docker are Linux’s cgroups (Control Groups), which provide the means to account for and limit the amount of CPU, memory, network, and disk resources that a container uses. This offers some of the benefits of virtualization, like the ability to carve up a computer into smaller chunks of resources so you don’t have one process taking over all of the computer and starving the others — but it doesn’t come with the heavy overhead or cost of VMware.
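
As a small sketch of what those cgroup limits look like in practice, the Docker SDK for Python exposes them as parameters on containers.run; the values below (half a CPU, 256 MB of memory) are arbitrary examples, and the SDK and a local daemon are assumed.

```python
import docker

client = docker.from_env()  # assumes a local Docker daemon

# cgroups enforce these limits: the container gets at most ~0.5 CPU and
# 256 MB of RAM, no matter how greedy the process inside is.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('running with capped resources')"],
    nano_cpus=500_000_000,   # 0.5 CPU (1 CPU = 1_000_000_000 nano-CPUs)
    mem_limit="256m",        # hard memory cap
    remove=True,
)
print(output.decode())
```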

6: Starting a Docker container is faster than starting a virtual machine (milliseconds vs. minutes)

If you want to run multiple jobs on a single server, the traditional approach is to carve it up into virtual machines and use each VM to run one job. But VMs are slow to start, since each must boot a full operating system, which can take minutes, and they’re resource intensive, because each VM runs its own OS instance. Containers offer much of the same behavior but are far faster, because starting a container is essentially starting a process, and the overhead is little more than that of a process.
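
A rough way to see the difference, assuming the Docker SDK for Python and an image that has already been pulled: time a complete run of a trivial container. The measurement includes API round trips, so it overstates the raw container start time, yet it still typically lands well under a second, versus minutes for a VM boot.

```python
import time
import docker

client = docker.from_env()
client.images.pull("alpine", tag="latest")  # pull ahead of time so the timing excludes the download

start = time.perf_counter()
client.containers.run("alpine", "true", remove=True)  # start, run, and exit a container
elapsed = time.perf_counter() - start

print(f"container lifecycle took {elapsed:.3f} seconds")
```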

7: Docker won’t eliminate VMs… yet

Docker is not a like-for-like replacement for virtual machines because it requires that all your containers share the same underlying operating system. That means you won’t be able to run Windows and Linux apps on the same server, for example. Plus, as Docker containers currently stand, they offer much weaker security isolation than VMs, making them inappropriate options for some types of multi-tenancy.

8: Docker is being developed at a torrid pace

Climbing aboard the Docker train is less like boarding a steam engine and more like jumping onto a Japanese bullet train… as it passes you at 250 MPH. Consider that Docker’s 1.5 year-old API is already at revision 15, and you’ll get a sense for how fast it’s changing. While Docker has been “ripening as it begins to mature into stable, enterprise-worthy software,” it still doesn’t carry the 10-year support commitment that enterprises expect from their software.
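
If you want to see which API revision your own daemon speaks, the Docker SDK for Python can report it. This is a small sketch under the same SDK assumption; the exact keys in the response can vary across engine versions.

```python
import docker

client = docker.from_env()
info = client.version()  # version details reported by the Docker daemon

print("Engine version:", info.get("Version"))
print("API version:   ", info.get("ApiVersion"))
print("Minimum API:   ", info.get("MinAPIVersion"))
```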

9: Docker has growing competition

As popular as Docker is, it’s not impervious to competition. For example, CoreOS recently released a competing Docker runtime, Rocket, and Linux darling Ubuntu has its own LXD container project. These and other competitors seem to chafe at Docker’s closed ecosystem. In the Docker world, everything depends on the Docker registry. You must rely on Docker Inc.’s hosted registry or run a copy of the registry in your own datacenter, which isn’t free.
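
That registry dependence shows up directly in image names: an image reference can carry the registry host, so pointing at your own registry is a matter of naming. A hedged sketch, again assuming the Docker SDK for Python, where registry.example.com is a hypothetical private registry:

```python
import docker

client = docker.from_env()

# With no registry host in the name, Docker resolves the image through the
# default registry (Docker Hub).
client.images.pull("alpine", tag="latest")

# Prefixing the name with a host pulls from that registry instead.
# registry.example.com is hypothetical; substitute your own registry.
client.images.pull("registry.example.com/team/app", tag="1.0")
```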

10: You should proceed with caution

As with any new technology, you’re probably going to want to walk before you run with Docker. From a technology standpoint, there are definite do’s and don’ts you should consider while you become familiar with running applications in Docker containers. But it’s more than just code. As suggested above, Docker’s community may not be a fit for you. You should join the community — attend meetups, read and participate on mailing lists, etc. — and decide for yourself whether Docker is where you want to invest your time.

However you choose to approach Docker, containers are here to stay. Containers simplify so much that is difficult in modern computing. Docker is the leader of the container pack, and you’re going to need to come to terms with it. What those terms are will depend on you and the state of your enterprise infrastructure.


Source: http://www.techrepublic.com/