DevOps · May 14, 2018 · 3 min read

The Rise of Containerization: Why We Are Moving Our Production to Docker

Aunimeda

It is May 2018, and if you are still manually configuring servers using Bash scripts or (heaven forbid) manual FTP uploads, you are already behind. At our agency, we’ve spent the last few months migrating our internal workflows to Docker, and the results have been transformative for our deployment velocity.

The promise of containerization is simple: environmental parity. We want the exact same binary, the same OS libraries, and the same runtime version on a developer's MacBook, our staging server, and our production cluster.

1. Eliminating the "It Works on My Machine" Syndrome

We’ve all been there. A developer pushes code that works perfectly in their local environment, but it crashes in production because the server is running Ubuntu 16.04 while they are on macOS with a different version of libpng.

With Docker, we define our environment in a Dockerfile. This file is version-controlled right alongside the code.

# Our standard 2018 Node.js environment
FROM node:8.11-slim

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./

RUN npm install --only=production

# Bundle app source
COPY . .

EXPOSE 8080
CMD [ "npm", "start" ]
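With that Dockerfile checked in, any machine can produce an identical image. A minimal sketch of the day-to-day workflow (the image name `my-app` and tag are illustrative placeholders):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t my-app:1.0.0 .

# Run it locally, mapping the container's port 8080 to the host
docker run -d -p 8080:8080 --name my-app my-app:1.0.0

# Tail the application logs
docker logs -f my-app
```

Because the tag is explicit, the exact image tested locally is the one pushed to staging and production.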

2. Micro-services and Orchestration

As we build more complex systems in 2018, the "Monolith" is becoming harder to manage. Docker allows us to split a project into smaller, specialized services—a database container, a Redis container for caching, and the Node.js API container.

To manage these, we are heavily utilizing docker-compose for local development. It allows a new developer to join a project and have the entire stack running with a single command: docker-compose up.
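A minimal docker-compose.yml sketch for the stack described above; service names, the Postgres password, and the volume name are illustrative placeholders:

```yaml
version: "3"
services:
  api:
    build: .                       # builds from the Dockerfile above
    ports:
      - "8080:8080"
    depends_on:
      - db
      - cache
  db:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use real secrets in production
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:4-alpine
volumes:
  db-data:
```

With this file in the repository root, `docker-compose up` builds the API image and starts all three services on a shared network, where each container can reach the others by service name (`db`, `cache`).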

3. The Performance Myth

One concern we often hear in 2018 is that containers add "overhead." While it’s true that there is a tiny performance hit compared to running bare-metal, the trade-off in operational reliability is massive. Modern Linux kernels have made cgroups and namespaces so efficient that the overhead is negligible for 99% of web applications.

Furthermore, Docker allows us to utilize "Immutable Infrastructure." Instead of patching a running server, we simply build a new image and replace the old container. This makes rollbacks as simple as changing a tag.
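In practice, a rollback under this model is just pointing the runtime at the previous image tag. A hedged sketch, assuming a private registry and tags that are purely illustrative:

```shell
# Deploy the new version by replacing the running container with a new image
docker pull registry.example.com/my-app:1.4.0
docker stop my-app && docker rm my-app
docker run -d --name my-app -p 8080:8080 registry.example.com/my-app:1.4.0

# Roll back by starting the previous, still-cached image instead
docker stop my-app && docker rm my-app
docker run -d --name my-app -p 8080:8080 registry.example.com/my-app:1.3.2
```

No server is ever patched in place; the old image remains available in the registry for as long as the rollback window requires.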

Looking Forward: The Orchestration Wars

As we look toward the second half of 2018, the industry is converging on Kubernetes as the winner of the orchestration wars (beating out Docker Swarm and Mesos). We are currently experimenting with managed Kubernetes services like GKE and the newly announced EKS from AWS.
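For anyone starting to evaluate Kubernetes, the core object is the Deployment, which keeps a desired number of identical container replicas running and replaces them on update. A minimal manifest sketch; the app name, image, and replica count are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.4.0
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` gives you the same immutable-image model as plain Docker, plus self-healing and rolling updates.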

The takeaway for 2018: Containerization is no longer a "cool trend" for Silicon Valley startups. It is a fundamental requirement for any professional agency that takes uptime and scalability seriously.

Is your infrastructure still living in 2015? Let's talk about how containerization can stabilize your next release.

