Docker is the best-known containerization platform, but it doesn’t exist in isolation. An entire ecosystem of complementary tools and spin-off projects has sprung up around the shift to containers.
Here’s a round-up of 10 open-source analyzers, indexers, and orchestrators that make Docker even more convenient and useful. Whether you’re still early in your Docker journey, or you’re a seasoned practitioner using the tech in production, you might find something here that’s worth including alongside your next project.
Docker Compose
Docker Compose is the only tool on this list that’s actually part of Docker. Compose is an accessible way to build “stacks” of Docker containers that you can manage in unison.
The standard Docker CLI lets you interact with individual containers. Compose provides a similar interface for working with containers in aggregate. This makes it possible to easily control systems that require multiple containers, such as an app server, database, and caching layer. You define these components as services in a docker-compose.yml file, then use the docker-compose binary to start them all together:
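A minimal sketch of such a file might look like this (the application image name, database engine, and ports are placeholders):

```yaml
version: "3"
services:
  app:
    image: example.com/example-app:latest  # placeholder application image
    ports:
      - "80:80"
    depends_on:
      - database
      - cache
  database:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example  # demo-only credential
  cache:
    image: redis:7
```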
Running docker-compose up -d would create three containers, one each for the app, database, and cache services. They’ll be automatically linked together. This is much more manageable than repeating the docker run command multiple times.
Portainer
Portainer is a GUI for your Docker installation. It’s a browser-based tool that offers a complete interface for viewing, creating, and configuring your containers. You can also interact with other Docker object types such as images, networks, and volumes.
Portainer is deployed as its own Docker image:
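A typical launch command looks something like this (the container and volume names are arbitrary):

```sh
docker run -d \
  -p 9000:9000 \
  --name portainer \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce
```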
This sets up a Portainer instance which you can access at localhost:9000. It works by mounting your host’s Docker socket into the Portainer container. Portainer can therefore use the socket to manage the containers running on your host.
Kubernetes
Kubernetes is a distributed container orchestration platform. It’s a common way to move Dockerized workloads into production environments. A Kubernetes cluster consists of multiple Nodes (physical or virtual machines) that are each eligible to host container instances.
Kubernetes gives you straightforward scaling and distribution. Whereas plain Docker exposes individual containers on a single machine, Kubernetes manages multiple containers that run seamlessly over several Nodes.
As Kubernetes is OCI-compatible, you can deploy your existing Docker images into your cluster:
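A Deployment manifest along these lines would do it (the name and labels are placeholders); apply it with kubectl apply -f deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: example.com/example-image:latest
          ports:
            - containerPort: 80
```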
This example creates a Kubernetes deployment of the example.com/example-image:latest image. The replicas: 3 field means you’ll end up with three container instances, providing redundancy for your system. The Deployment is similar to running docker run -d -p 80:80 example.com/example-image:latest, although this would only start a single container.
Traefik
Traefik is an HTTP reverse proxy that’s easy to integrate with container workloads. It automatically reconfigures itself with new routes as you create and remove containers.
Traefik lets you attach labels to your containers to define domain names and forwarding behavior. The software will create appropriate proxy routes each time a container with matching labels joins the Traefik network.
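As a rough sketch, assuming a Traefik v2 instance is already running with its Docker provider enabled and attached to a network named traefik (both are assumptions here), exposing a container could look like this:

```sh
docker run -d \
  --network traefik \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
  traefik/whoami
```

The router name, domain, and network are illustrative; Traefik picks up the labels and starts routing requests for that hostname to the new container.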
Traefik also offers load balancing capabilities, support for WebSockets, a REST API, integrated metrics, and a web-based dashboard so you can monitor your traffic in real-time. It’s a good way to expose multiple public-facing containers via domain names using a single Docker installation.
Trivy
Trivy is a container image scanner which uncovers known vulnerabilities. Scanning your images before you deploy them into production gives you confidence your workloads are safe and secure.
Trivy is available as its own Docker image. You can start a simple scan of the example-image:latest image using the following command:
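Using Trivy’s own image and mounting your Docker socket (so it can see locally built images), that looks roughly like this:

```sh
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy image example-image:latest
```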
Trivy identifies the software packages in your image, looks for vulnerabilities, and produces a report containing each issue’s CVE ID, severity, and impacted version range. You should upgrade each package to the FIXED VERSION indicated by Trivy. Running the tool after you build an image is therefore an easy way to boost the security of your deployments.
Syft
Syft generates SBOMs (software bills of materials) from Docker images. These are lists of all the OS packages and programming language dependencies included in the image.
Syft helps you audit your software supply chain. Docker makes it easy to reference remote content and layer up complex filesystems without necessarily realizing it. It’s even harder for your image’s users to work out what lies inside.
Recent high-profile attacks have demonstrated that overly long software supply chains are a serious threat. Running Syft on your images keeps you informed of their composition, letting you assess whether you can remove some packages or switch to a more minimal base image.
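As a brief example, pointing the Syft binary at an image prints an inventory of its packages (the image name is a placeholder):

```sh
# List the OS packages and language dependencies inside the image
syft example-image:latest

# Emit a machine-readable SBOM instead, e.g. in SPDX JSON format
syft example-image:latest -o spdx-json
```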
Dive
On a related theme, Dive simplifies Docker image filesystem inspections. Images are fairly opaque by default so it’s common to start a container to work out what lies inside. This could put you at risk if the image contains a malicious process.
Dive lets you navigate an image’s filesystem using an interactive tree view in your terminal. You can also browse individual layers to see how the image has been constructed. Viewing a single layer in isolation shows you the changes applied by each build stage, even if you don’t have access to the original Dockerfile.
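One way to try it is via Dive’s own image, mounting the Docker socket so it can read images from your local daemon (the target image name is a placeholder):

```sh
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  wagoodman/dive example-image:latest
```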
Flocker
Flocker is a volume manager which combines the management of containers and their persistent data. It supports multi-host environments, simplifying the migration of volumes between hosts as containers get rescheduled.
This portability ensures volumes are available wherever your containers are scheduled. Traditional Docker volumes can’t leave the host they’re created on, which ties your containers to that host as well.
Dokku
Dokku uses Docker to let you self-host your own Platform-as-a-Service (PaaS). It automatically spins up Docker containers when you push code using Git.
As a complete application platform, Dokku lets you map domains, add SSL, deploy multiple environments via Git branches, and configure auxiliary services such as databases. It’s a great alternative to commercial platforms like Heroku and Firebase, letting you keep your production deployments on your own hardware.
Setting up a Dokku server lets you start applications in isolated containers without learning all the intricacies of manual container management. You can concentrate on writing and committing code using established Git-based workflows. Adding your Dokku server as a Git remote means you can git push to deploy your changes, either locally in your terminal or as part of a CI pipeline.
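As a sketch, assuming a Dokku server reachable at dokku.example.com and an app called demo-app (both placeholders), the workflow looks roughly like this:

```sh
# On the Dokku server: create the app once
dokku apps:create demo-app

# On your machine: add the server as a Git remote, then push to deploy
git remote add dokku dokku@dokku.example.com:demo-app
git push dokku main
```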
Hadolint
Hadolint is a Dockerfile linter that checks that your build stages adhere to recommended best practices. Running Hadolint can uncover common configuration issues that make your builds slower and less secure. It also uses ShellCheck internally to lint the shell scripts in your Dockerfile’s RUN instructions.
You can download Hadolint as a precompiled binary, try it on the web, or use its own Docker image, hadolint/hadolint. Start a scan by supplying the path to a Dockerfile to the Hadolint binary:
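Either of these forms works, depending on how you installed it:

```sh
# Lint a Dockerfile with a locally installed binary
hadolint Dockerfile

# Or run the official image and pipe the Dockerfile in on stdin
docker run --rm -i hadolint/hadolint < Dockerfile
```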
Hadolint will scan your Dockerfile for problems and present the results in your terminal. Some of the bundled rules include checking for absolute WORKDIR paths, mandating unique COPY --from aliases, and warning when the Dockerfile’s final user is root. Running Hadolint regularly will result in safer and more performant image builds that comply with community standards.
Summary
Docker is a great developer tool but it gets even better when paired with other popular projects. Community initiatives can boost the security of your images, help you spot issues in your Dockerfiles, and provide versatile GUIs for managing your containers.
New tools are constantly emerging so it’s worth browsing code sharing sites like GitHub to discover upcoming projects. The Docker topic is a good starting point for your exploration.