Containers are currently one of the hottest topics in the Dev and Ops worlds. The rise of Docker and of companies building around Docker and containers has led to booming growth in tools that help with using containers, from container registry services to container-based operating systems to tools that make the process of working with containers easier.
Today I’d like to focus on one area of this container-driven development that has been evolving rapidly and shows an incredible amount of potential even though the industry is young: container scheduling and orchestration.
There are several different tools out in the wild now with more in development. This post will highlight some of the bigger, more well-known container orchestration tools and briefly cover some of the philosophies and benefits of using each. There is quite a bit of overlap between the different tools and it is easy to get confused about them, so hopefully this post will help clear up some of the questions.
I won’t be covering design or architecture details here, just some notes and observations. If you have any specific questions, feel free to ask in the comments and I will try to point readers in the right direction.
Kubernetes (Google)
This tool has shown tremendous promise, and the core ideas behind it actually run Google’s own infrastructure. Check out the Borg paper for some of the more excruciating details on the design philosophy.
Kubernetes was also the first of these tools to go production-ready, which happened in mid-2015 with the 1.0 release. Since then, the pace of development has not slowed, and developers have been rapidly adding great features. In fact, more and more companies are getting involved, adding features that are immediately beneficial to them but also help out the community.
Kubernetes is open source like the other tools, but development moves so fast that it is best to check the releases section on GitHub for the latest. A lot of the recent work has gone toward addressing some of the scalability issues.
One long-standing criticism of Kubernetes has been its lack of documentation, owing to the speed at which development happens, as well as how complicated it is to work with. Recently, developers have made strides in both areas, and getting a Kubernetes cluster up and running is easier now than it has ever been.
One great thing about Kubernetes is how easy it is to use out of the box with some of the newer tools the community has created. There are tools and tutorials out now that make setting up Kubernetes in a development environment trivially easy, greatly improving the getting-started experience:
- CoreOS-Vagrant Kubernetes Cluster for OS X
- Kubernetes Vagrant Coreos Cluster
- Kubernetes Installation with Vagrant & CoreOS
Another great selling point for Kubernetes is that it scales, a lot. In recent scalability testing, engineers showed Kubernetes 1.2 scaling up to 1,000 nodes in a single cluster, running 30,000 pods (groups of one or more containers) without breaking a sweat.
Here is the full post detailing the scalability. Obviously, Google was thinking about scalability when they started building Kubernetes because they run these types of tools at extraordinary scale and it is just one factor that makes Kubernetes so appealing.
Thinking ahead, the engineers designed Kubernetes in such a way that you don’t need to be the size of Google or run at web scale to benefit from running your own Kubernetes cluster. If your environment has a smaller workload, you can create a small Kubernetes cluster and still get all of the benefits of running Kubernetes.
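To make the pod concept above a little more concrete, here is a minimal sketch of a Kubernetes pod manifest. The names and image are illustrative, not from any real deployment:

```yaml
# A minimal pod: the smallest unit Kubernetes schedules.
# A pod can hold one or more tightly coupled containers.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web        # hypothetical name
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.9   # any container image works here
      ports:
        - containerPort: 80
```

Saved as pod.yaml, running `kubectl create -f pod.yaml` against a cluster (such as one of the Vagrant setups above) would ask the scheduler to place the pod on an available node.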
Kubernetes shows a tremendous amount of promise. It is getting easier and easier to use with some of the great tooling that has come out and the improving documentation is helping as well. As Kubernetes continues to evolve, it is gaining momentum and is being adopted in organizations across the board to run production workloads. Kubernetes is a solid orchestration platform with a lot of room to grow.
P.S. Go check out the Kubernetes Slack channel if you’re interested in getting started but aren’t sure how to get help.
Rancher (Rancher Labs)
Rancher is an interesting piece of software. It is a sort of hybrid that bridges the gap between schedulers and orchestration. For example, Rancher can run Kubernetes and use it as the scheduling system, but it also has its own built-in layer that can handle scheduling and orchestration, which makes it a nice option because of its flexibility.
Rancher just announced earlier this month that it is production-ready. It has been picking up steam for the last year or so, with many developments and a rapid release cycle similar to Kubernetes, though not quite as fast-paced. The easiest way to keep up with developments is to check the GitHub releases page for the Rancher project.
Rancher is a unique solution to orchestration and scheduling. It would mostly be considered an orchestration tool, but it also has some basic scheduling functionality built in. Rancher sits at the same layer as other tools (Kubernetes, Swarm) but extends orchestration via its own custom API, its own implementation of docker-compose called rancher-compose, its own network overlay and, importantly, a very nice, robust web interface that sits on top of everything and makes interacting with all of the components easy. All of these pieces combine to make Rancher very easy to configure and use.
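As a rough sketch of how rancher-compose builds on the docker-compose format, a service can be described in a standard docker-compose.yml and then given Rancher-specific settings in a companion rancher-compose.yml. The service name and image here are hypothetical:

```yaml
# docker-compose.yml -- standard compose syntax
web:
  image: nginx:1.9
  ports:
    - "80:80"
```

```yaml
# rancher-compose.yml -- Rancher-specific settings layered on top
web:
  scale: 3          # run three instances of the web service
```

Running `rancher-compose up` with both files in place would, roughly, ask Rancher to create the service and scale it out to three containers across the environment.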
Rancher also has great documentation, and setting up a Rancher environment is painless and straightforward, at least until you get into more complicated scenarios. Even with the more complicated topics, it isn’t any more difficult to troubleshoot than other tools. The last highlight worth mentioning is that the Rancher staff and community are very friendly and helpful. As with Kubernetes, as the project and tools continue to mature and evolve, more and more folks are getting involved, and the community does a great job of supporting users and creating useful features. The easiest place to find help with Rancher issues is either GitHub (for specific issues) or, for more general help and guidance, the #rancher IRC channel on freenode.
Rancher is definitely the most flexible option when it comes to scheduling and orchestration because it allows you to pick and choose how you want to set up different environments. For example, you can create a Kubernetes cluster, manage it with the Rancher web interface, and automate Kubernetes via its API as well as Rancher itself via its own API, which could be very cool.
Rancher is also scalable, though I don’t have any specific numbers, and I don’t think it scales quite as well as Kubernetes or Swarm. The biggest win for Rancher is that its web interface makes managing environments incredibly easy, and its API makes automation straightforward.
Docker Swarm (Docker)
Docker released its own rendition of container orchestration, Docker Swarm, which went production-ready at the end of 2015.
Docker Swarm fits somewhere between Kubernetes and Rancher in the scheduling and orchestration realm, though closer to Kubernetes, as it has orchestration functionality via compose and some basic scheduling via its network overlay. One thing that makes Swarm an attractive option is that it is maintained and developed by the Docker folks, so it works really well out of the box with other Docker tools and technologies. This means that getting started with Swarm is a little smoother if you are already familiar with Docker, and both the integration and the learning curve are a little better.
Swarm scales well. In fact, it is comparable to Kubernetes in that department. According to benchmarking done by some of the engineers at Docker, a single Swarm cluster was able to scale up to 1,000 nodes running 30,000 containers and not fall over. You can find the details of the experiment here. This is good news if you run a Docker-only environment or are on the fence between Kubernetes and Swarm, because it definitely opens up the options.
Swarm, compared to Kubernetes, is less opinionated. This is an area where Kubernetes is improving, but it is one of the reasons folks might choose Swarm. Swarm chooses some sane defaults for its users but follows the Docker philosophy of “batteries included but removable.” This means that users can pick and choose the components that work best for their environment and plug them into Swarm. This philosophy makes Swarm very flexible.
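To illustrate that defaults-plus-overrides style, here is a small sketch of how classic Swarm lets a compose service steer scheduling with constraint filters. The node name and image are hypothetical:

```yaml
# docker-compose.yml targeting a classic Swarm cluster
web:
  image: nginx:1.9
  environment:
    # Swarm reads special "constraint:" entries from the environment
    # and only schedules this container on nodes matching the expression.
    - "constraint:node==web-node-1"
```

Launching this with docker-compose against a Swarm manager endpoint would pin the web container to the node named web-node-1; dropping the constraint falls back to Swarm’s default spread scheduling.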
Docker offers a layer that sits on top of Swarm to add a nice graphical interface, called Universal Control Plane (UCP), which really helps with managing and understanding your container environment, similar to Rancher.
Additionally, Docker offers a few commercial options. The first is a sort of CaaS/managed service for running containers, recently released, called Docker Datacenter.
The other option is their full workflow product, also recently released, called Docker Cloud. Docker Cloud adds functionality on top of Docker Datacenter to facilitate a more complete workflow. The downside to these tools is that they are all relatively new to the game and therefore haven’t had as many eyes on them, or as much time to develop the features that other, more mature products have at this point.
Concluding Thoughts on Container Orchestration
As you can see, there are a variety of options for choosing a container scheduler and/or orchestrator. Hopefully this post was able to shine some light on the different container orchestration tools that are available and show readers some of the benefits of each.
There are quite a few more orchestration and scheduling projects out there, so don’t take this post as exhaustive. There are simply too many tools in development to keep up with, and the three covered above have the biggest followings and adoption.
I would love to look at some of the newer tools in the future, like ECS for folks who run containers on AWS, or Nomad for fans of HashiCorp products. There is also Mesos, a complete scheduling and orchestration platform with built-in support for running containers and an even greater container focus planned for the future, which also looks very interesting.
Each of these container orchestration tools has been designed with slightly different goals in mind, so there is not a one-size-fits-all type of answer for choosing the right tool for your environment. Honestly, the best advice is to pick out a few of the orchestrators that sound like they would be a good fit for your environment and start playing with each of them. In the future, there might be a few tutorial posts on how to set up and use some of these tools.