This blog post first appeared at http://blog.cloudpassage.com/.  For more info on CloudPassage, visit https://www.cloudpassage.com/.

Virtualization containers, with their extraordinarily efficient hardware utilization, can be like a dream come true for development teams. While containerization will probably never entirely unseat VMs in enterprise application development and deployment, increasingly popular systems like Docker check every box on the wish list for the speed and agility required to develop, test, and deploy modern software at scale. No heavy hypervisor. Exceptional portability. Resource isolation. Incredibly lightweight containers. Open standards. Perfect for micro-service architectures. Lots of tidy app packages all wrapped up and humming away on top of a single Linux instance. What's not to love?

It's easy for dev teams to get excited by the possibilities that such speed and ease imply (there have been over 400 million Docker container downloads to date, which represents a lot of excitement). But concerns about containerization and security do exist. And while you certainly don't want to rein in enthusiasm to the degree that it stifles rapid iteration and innovation (thus completely negating all that wonderful potential), you do need to avoid developing a culture of cowboy programming and keep security considerations at the fore if your organization is to safely embrace Docker.

To be clear, the Docker model does address security, but responsible use is a linchpin. When you start using Docker, you quickly discover that there are lots of downloadable templates ("images") available from repositories ("repos") that can be used as shortcuts for writing your own micro-services, dramatically speeding development. The problem is that you don't know which of these images are secure; they may contain vulnerabilities. And therein lies the source of recent security cautions. Image vulnerabilities may not be of much concern for individual app developers — but for the enterprise, security and data compliance policies are critical and must be maintained. Thus the question becomes: How can they be applied to Docker usage?
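Before building on an image pulled from a public repo, it's worth seeing what it actually contains. A minimal sketch using standard Docker commands (the `ubuntu:latest` image here is just an example):

```shell
# Require signed images for this shell session via Docker Content Trust
export DOCKER_CONTENT_TRUST=1

# Pull the image you intend to build on
docker pull ubuntu:latest

# List each layer and the command that created it
docker history ubuntu:latest

# Dump the image's full configuration: entrypoint, env vars, exposed ports, user
docker inspect ubuntu:latest
```

`docker history` and `docker inspect` won't find vulnerabilities for you, but they do show exactly what a parent image drags in, which is the first step toward verifying it against your own security requirements.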

Docker Best Practices

To establish best practices for your organization, the nonprofit Center for Internet Security (CIS) provides a detailed 100+ page Benchmark for safe and secure Docker configuration, and there are a few specific areas of focus to keep in mind.
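Much of the CIS guidance can be checked automatically: Docker maintains an open-source audit script, docker-bench-security, that tests a host against many of the Benchmark's recommendations. A sketch of running it on a Docker host:

```shell
# Fetch the Docker Bench for Security audit script
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security

# Run the audit; it prints a pass/warn report for each Benchmark check
sudo sh docker-bench-security.sh
```

The report is a starting point, not a certification: each warning still needs to be weighed against your own security policy.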

  1. Be aware of what images you’re using 
    All containers inherit from a parent image, typically a base OS and its dependencies (a shell, default users, libraries, and any dependent packages). As the Docker security page plainly explains: “One primary risk with running Docker containers is that the default set of capabilities and mounts given to a container may provide incomplete isolation, either independently, or when used in combination with kernel vulnerabilities.” So it's up to you to set container capabilities and verify any images used against your security requirements — and that applies to every container.
  2. Deploy agents
    Agents aid in setting security parameters for your containers because they automatically give you visibility into what's coming along with the parent image. Since individual container security is the responsibility of the end user, you need a way to check dependencies yourself; although images are constantly being scanned, shared, and updated on Docker Hub, you cannot rely on mailing lists and issue reports to manage vulnerabilities. You should understand the underlying details of what you're introducing into your shop, and perform your own scans and verifications. Agents do this with little overhead at both the host server and Docker container level.
  3. Consider the way you run
    One of the best ways to stay safe is to run Docker containers in read-only mode so their filesystems can't be modified at runtime. If you run in read-only mode, you don't need an agent in every container, and you can reuse a verified parent image. If you do run in read/write mode, the best practice is to put an agent in every container. You should also set a rule against taking images from public repos, and never run containers in privileged mode.
  4. Manage container interaction with the outside world
    A container can accept connections on exposed ports on any network interface — a red flag from a security standpoint. A good idea is to have only a specific interface exposed externally, with services such as intrusion detection, intrusion prevention, firewall, and load balancing run on it to screen any incoming public traffic. Container ports should also be bound to a specific host interface on a trusted host port.
  5. Strong Linux administration skills required
    Docker offers security enhancement capabilities, but none are on by default; so it is critical to have a Linux pro establish the basics for your Docker workflow and harden the Linux host to prevent misconfiguration (most common mistakes with Docker occur when users set configurations incorrectly).
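Several of the practices above (read-only filesystems, binding to a specific host interface, avoiding privileged mode) map directly onto `docker run` flags. A hedged sketch, with the image name and port numbers as placeholders for your own:

```shell
# --read-only mounts the container's root filesystem read-only;
#   --tmpfs gives the service a writable scratch area without touching the image.
# -p 127.0.0.1:8080:80 binds the container port to one trusted host interface
#   instead of every interface (the default).
# --cap-drop/--cap-add trims Linux capabilities down to the minimum the
#   service needs. And never add --privileged.
docker run -d \
  --read-only \
  --tmpfs /tmp \
  -p 127.0.0.1:8080:80 \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  my-verified-image:1.0
```

Capturing these flags in your deployment scripts, rather than typing them ad hoc, is what turns them from good intentions into enforced policy.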

Overall, your best strategy for enterprise Docker use is to meld the CIS Benchmark with your existing security policy; it will guide you in establishing a secure configuration posture for all Docker containers and help you create a safer playing field for your dev teams to have at it.