Recently, O’Reilly and Dynatrace Ruxit conducted a survey about the container and Docker ecosystem. The goal of the survey was to understand technology adoption across the application lifecycle. Besides offering some surprising stats on Docker adoption, the survey identifies key challenges faced by early adopters of Docker.
Docker is the delivery stack of the future
Adoption of Docker and related container technologies in production environments is on the rise. A majority of survey participants are already using Docker and, for most of the others, adoption of Docker is on the agenda for the next 12 months. The cloud ecosystem supporting Docker is also rapidly growing with AWS announcing their container service enhancements and major cloud stacks like OpenStack supporting Docker as a key technology for application delivery. If Docker has not been on your technology agenda until now, it’s time for you to take a closer look.
Production is more than just another environment
As with any technology innovation, people begin using Docker in small, controlled environments. So it’s no surprise that Docker is used most widely in development environments. Technology pioneers and industry leaders have already moved well beyond that stage, however. Once you’ve experienced the simple, seamless application delivery process offered by Docker, using it for continuous delivery into your production environment is a no-brainer.
Once people take Docker into production, they realize that the real change is not the container technology itself. The big changes come in controlling and mastering an entirely new ecosystem for application delivery.
Mastering quickly evolving toolchains is key
Automation is key to delivering new microservices features and updates effectively and efficiently. In Docker-based environments, continuous integration and continuous deployment processes must be adapted so that they support the push and pull of images to and from a registry. Usually, build servers (e.g., Jenkins) and configuration management tools (e.g., Ansible) are integrated with Docker to allow for the building and shipping of Docker images as artifacts.
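As a concrete illustration, a build job might wrap the Docker CLI in a few lines of Python. This is a minimal sketch rather than a complete pipeline, and the registry host, image name, and tag are hypothetical placeholders:

# Minimal sketch of a CI build-and-push step, e.g. run by a Jenkins job.
# Registry host, image name, and tag are hypothetical placeholders.
import subprocess

REGISTRY = "registry.example.com"          # hypothetical private registry
IMAGE = f"{REGISTRY}/shop/checkout-service"
TAG = "1.4.2"                              # typically derived from the VCS revision

def run(*cmd):
    """Run a command and fail the build immediately if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Build the image from the checked-out workspace and push it to the
# registry so that downstream deployment stages can pull it as an artifact.
run("docker", "build", "-t", f"{IMAGE}:{TAG}", ".")
run("docker", "push", f"{IMAGE}:{TAG}")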
Docker-specific automation technology is also evolving quickly, with tons of new features and improvements introduced with each new release. The tools involved in building, deploying, and operating containers typically include Docker’s own tools, like Compose and Swarm, but may also include third-party tools like Wercker, Kubernetes, and Mesosphere’s Marathon on top of Mesos.
Ensuring that these toolchains work as intended can be a challenge, due both to the fast evolution of their features and to most organizations’ lack of long-term experience with such highly automated toolchains.
Monitoring proves particularly valuable here because it allows you to track communication between tools and validate the results of the automation process. In this way, monitoring can identify inconsistencies and shortcomings in tool configuration in your production environment. Application monitoring is key to confirming whether the automated process chain results in shippable applications that perform as expected. Interestingly, the key requirement here is not the ability to collect metrics; it’s real-time visualization of the Docker environment.
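For example, a pipeline can close the loop with a simple post-deployment check before traffic is shifted to a freshly deployed container. The health endpoint and latency budget below are hypothetical; this is a sketch of the idea, not any specific tool’s check:

# Hypothetical post-deployment smoke test: verify that the freshly
# deployed container answers its health endpoint within an acceptable time.
import time
import urllib.request

HEALTH_URL = "http://checkout-service.internal:8080/health"  # hypothetical endpoint
MAX_LATENCY_S = 0.5

start = time.monotonic()
with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
    assert resp.status == 200, f"unexpected status {resp.status}"
latency = time.monotonic() - start
assert latency < MAX_LATENCY_S, f"health check too slow: {latency:.2f}s"
print(f"deployment verified in {latency:.2f}s")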
The “self-driving” container infrastructure isn’t here yet
One of the big reasons for adopting containerization is that it allows you to migrate from clunky monolithic application architectures to lightweight, flexible microservices. Docker is perfect for encapsulating, shipping, and running small, scalable microservices across multiple hosts.
Having small units to deploy enables fine-grained, dynamic scaling of architectures. Because there are many more “knobs to turn” in environments with numerous microservices, orchestration tools are now indispensable.
Tools like Mesos/Marathon, AWS ECS, and Kubernetes are especially effective in handling the coordination and communication between containers hosting microservices. They make it easier to deploy and scale these environments by adding containers to clusters of hosts and registering the containers with load balancers. These tools can even handle failovers and redeployments of broken containers to maintain the required number of containers in service.
Despite all this functionality, solutions to some key orchestration challenges are still in their early stages. Simply put, these tools handle the logistics of scaling, but they require input on when and how specific services should be scaled. This information usually comes from application monitoring, which provides deep, real-time insight into services, including each service’s inbound and outbound communication with other services.
Having this high-quality performance data is key to determining the impact that adding or removing containers has on the response times and performance of each service. As a consequence, monitoring tools are now part of the feedback loop with orchestration tools: monitoring data drives the tweaking of orchestration configurations (i.e., when to reduce or increase the number of containers).
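To make that feedback loop concrete, here is a minimal sketch using the official Kubernetes Python client. The metric source, thresholds, and deployment names are hypothetical, and a production setup would typically use the orchestrator’s own autoscaling hooks rather than a hand-rolled loop:

# Sketch of a monitoring-driven scaling decision for a Kubernetes Deployment.
# get_p95_response_time() stands in for a query against a monitoring tool's
# API; the deployment name, namespace, and thresholds are hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

NAME, NAMESPACE = "checkout-service", "production"
SCALE_UP_MS, SCALE_DOWN_MS = 400, 100

def get_p95_response_time() -> float:
    # Placeholder: in practice, fetch the 95th-percentile latency (ms)
    # for the service from your monitoring tool here.
    return 250.0

scale = apps.read_namespaced_deployment_scale(NAME, NAMESPACE)
replicas = scale.spec.replicas
p95 = get_p95_response_time()

if p95 > SCALE_UP_MS:
    replicas += 1                      # service is slow: add a container
elif p95 < SCALE_DOWN_MS and replicas > 1:
    replicas -= 1                      # plenty of headroom: remove one

apps.patch_namespaced_deployment_scale(
    NAME, NAMESPACE, {"spec": {"replicas": replicas}}
)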
Management and monitoring must evolve
Highly dynamic and scalable microservices environments require monitoring that scales with them. In a 2014 Monitorama talk, Adrian Cockcroft said that monitoring systems need to be more available and scalable than the systems they monitor. He’s right.
Monitoring needs to adjust autonomously to changing environment configurations. Manual configuration is as impractical as manual deployment and orchestration. Auto-discovery and self-learning of performance baselines, along with highly dynamic dashboarding capabilities, have made many traditional monitoring solutions obsolete.
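To illustrate the principle behind baseline self-learning, here is a toy sketch (not any product’s actual approach): learn a rolling mean and deviation from a stream of response-time samples, and flag new values that fall outside the learned band.

# Toy illustration of a self-learned performance baseline: keep a rolling
# window of samples and flag new values that deviate strongly from it.
from collections import deque
from statistics import mean, stdev

class Baseline:
    def __init__(self, window: int = 100):
        self.samples = deque(maxlen=window)

    def is_anomalous(self, value: float, sigma: float = 3.0) -> bool:
        """Learn from every sample; flag values more than sigma deviations off."""
        anomalous = False
        if len(self.samples) >= 10:  # need some history before judging
            mu, sd = mean(self.samples), stdev(self.samples)
            anomalous = abs(value - mu) > sigma * max(sd, 1e-9)
        self.samples.append(value)
        return anomalous

Real monitoring products go far beyond this (seasonality, multi-dimensional baselines, topology awareness), but the point stands: the baseline is learned from the environment, not configured by hand.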
As these environments scale dynamically, monitoring solutions need to keep up with them. This makes SaaS-based solutions ideal candidates for monitoring Docker environments. In cases where SaaS is not an option, the “feels-like-SaaS” approach to on-premises monitoring provided by Dynatrace Managed is the solution of choice.
Innovation needed to monitor Docker environments
When deploying Docker and related container technologies in production, monitoring is particularly important for understanding and proving that your applications are working properly. The growing number of Docker-related tools and projects required to provide basic infrastructure for running distributed applications creates a whole new set of monitoring requirements that go well beyond classic, metrics-only approaches. Visualizing and understanding the dynamics of container environments is at least as important as health metrics. These dynamics and scalability requirements call for a new set of monitoring tools, because retrofitted classic monitoring tools fail to deliver innovation on the key challenges of these environments.