Along with those business cases, we enjoyed the chance to share our own experience running a Docker infrastructure at scale over the past few years. SignalFx’s own Docker expert, Maxime Petazzoni, started the week by publishing a guide to getting started with container monitoring and why we love collectd. We ran into loads of SignalFx users (and some future users) who credit collectd not only with scaling their monitoring strategy but also with setting them on the path toward operationalizing their microservices strategy. Max outlined four key questions to ask about your Docker and microservices objectives, and explained how the answers shape both how you monitor and how you operate your containerized environment as you set goals for production.
- Do you want to track application-specific metrics or just system-level metrics?
- Is your application placement static or dynamic? (i.e., Do you use a static mapping of what runs where, or do you use dynamic container placement, scheduling, and bin-packing?)
- If you have application-specific metrics, do you poll those metrics from your application, or are they being pushed to some external endpoint? If you poll the metrics, are they available through a TCP port you’re comfortable exposing from your container?
- Do you run lightweight, bare-bones, single-process Docker containers, or heavyweight images with supervisord (or something similar)?
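For teams whose answers point toward collectd-based collection, the setup can be quite small. As an illustrative sketch (not the exact configuration from the guide), here is roughly what enabling SignalFx’s docker-collectd-plugin looks like in a collectd config file; the install path is an assumption and the plugin reads container stats from the Docker daemon’s Unix socket:

```
# Hypothetical minimal collectd configuration for container metrics.
# Assumes the docker-collectd-plugin is checked out at the ModulePath
# below -- adjust the path for your installation.
LoadPlugin python

<Plugin python>
  ModulePath "/usr/share/collectd/docker-collectd-plugin"
  Import "dockerplugin"

  <Module dockerplugin>
    # Talk to the local Docker daemon over its Unix socket.
    BaseURL "unix://var/run/docker.sock"
    # Give slow stats calls a few seconds before timing out.
    Timeout 3
  </Module>
</Plugin>
```

With something like this in place, per-container CPU, memory, network, and block I/O metrics flow through collectd alongside your existing system-level metrics, which is what makes the static-versus-dynamic placement question above so important: dynamic placement means the set of reporting containers changes constantly.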
Best of all, we got to share the best practices we use to monitor Docker in production at SignalFx today. We rely on SignalFx’s Host &amp; Container Navigator for instant visibility into the status of all our Docker containers, with a real-time, continuous survey of container status across our environment. From the Host &amp; Container Navigator view, we can drill into different perspectives of our environment (by availability zone, plugin, or microservice, for example), which gives us a starting point for quickly determining where our attention is required. Finally, the recently released SignalFx Insights feature lets us explore correlations between metrics and dimensions across all the system- and application-level data flowing out of our environment. We can home in on one infrastructure metric, such as high memory utilization, and see which dimensions the affected group of containers has in common.
Even though DockerCon is over, we’re planning to keep the discussion going. We love working with the Docker team as a member of the Ecosystem Technology Partner Program, and we’re eager to hear more from the community about what’s required to make cloud monitoring and intelligent alerting a fundamental part of your production strategy for Docker. To learn more, check out our webinar with Zenefits on operating Docker and orchestrating microservices, in which Max talks about his experience operating Docker at scale in a high-performance environment.