Why Docker?
Docker is the standard for containerization, but you should be able to port these concepts over to tools like Podman or nerdctl without much trouble.
As for the question of why, there are plenty of reasons! Let's go over some of the primary reasons developers decide to containerize their applications.
Development Environments
One of the biggest reasons is standardizing a development environment that every developer on your team can use. Once you set up a docker-compose.yml, it's more-or-less transferable between systems. You can bundle all of the setup instructions into an isolated container, which helps in the never-ending goal of reproducibility.
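In practice, onboarding a new teammate can shrink to a couple of commands. A minimal sketch (the repository URL here is a hypothetical placeholder):

```shell
# Clone the project and bring up the whole stack; Compose reads
# docker-compose.yml and builds/starts every service it defines.
git clone https://github.com/your-org/your-app.git
cd your-app
docker compose up
```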
Simpler Deployment
Often, especially when working with more bespoke technologies, finding a platform to host your code can be a pain. You don't need an entire EC2 instance just for your tiny app, but no one wants to host your <insert bespoke language here>! Luckily, containers solve this problem too: there are dozens of container hosting options, including DigitalOcean, ECS, Google Cloud, Microsoft Azure, and several more.
Cheaper than a VM
Virtual machines (VMs) can be costly because they require dedicated resources for every layer of the system, providing substantial power but often exceeding the needs of smaller applications. In contrast, containers offer a more cost-effective, "pay-as-you-go" solution. Containers share the underlying hardware and operating system with other containers, which significantly reduces overhead and expenses. This makes containers an ideal choice for smaller applications or those needing to scale efficiently without the high costs associated with full VMs.
Scalability
Ah yes, the infamous "scalability". Buzzword or not, the fact remains that nothing competes with containerization when it comes to scale. The biggest reason is the number of mature players in the container orchestration space, most notably Kubernetes.
While there are many more points, I hope this gives a good introduction to what makes containerization special. Now let's go through some examples!
Containerizing an Express + React Application
To get started, we'll want to take inventory of the runtimes this application needs, along with any other programs it depends on.
For example, the Express server needs a Node runtime, and the React + Vite client needs its own Node runtime as well.
Let's start crafting our docker-compose.yml:
```yaml
version: '3'
services:
  client:
  server:
```
While we could probably get away with pulling the Node image directly, it's going to be easier in the long run to write our own Dockerfile for each service, so let's do that! Let's assume we have a client/ and a server/ directory.
```dockerfile
# Select a Node version from https://hub.docker.com/_/node
FROM node:22-alpine3.19
# `cd` to the place in the container where you want your client code
WORKDIR /usr/app
# Copy your package.json/package-lock.json for installation
# (paths are relative to the build context, which is `client/`)
COPY package*.json ./
# Install dependencies
RUN npm ci -qy
# Copy the rest of the code
COPY . .
# Expose port 3000 for use with other containers
EXPOSE 3000
# Run the start script from your `package.json`
CMD ["npm", "start"]
```
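Before wiring it into Compose, you can sanity-check this Dockerfile on its own; the `my-client` tag is just an arbitrary name for this sketch:

```shell
# Build the image using client/ as the build context,
# then run it with the container's port 3000 mapped to the host.
docker build -t my-client ./client
docker run --rm -p 3000:3000 my-client
```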
This Dockerfile essentially goes through the steps we'd normally follow when setting up a Node project, but this time we only do it once! After that, anyone can pull this project and simply run docker compose up to start it. Let's not jump too far ahead, though; first, let's finish the server Dockerfile.
Since the server also uses Node here, we can copy our client/Dockerfile to server/Dockerfile, then double-check that the Node version matches, the exposed port is updated (8080 instead of 3000), and the start script matches the server's package.json.
Finally, we come back to the docker-compose.yml file. Let's add our build contexts, volumes, and ports!
```yaml
version: '3'
services:
  server: # This can be named anything; it just makes the most sense as `server`
    build:
      context: ./server/ # where `server/Dockerfile` is held
    volumes:
      - ./server/:/usr/app # map the server project root to the project root in the container
      - /usr/app/node_modules # anonymous volume so the bind mount above doesn't hide `node_modules`
    ports:
      - '8080:8080' # Same port that is exposed in `server/Dockerfile`
  client: # This can be named anything; it just makes the most sense as `client`
    build:
      context: ./client/ # where `client/Dockerfile` is held
    volumes:
      - ./client/:/usr/app # map the client project root to the project root in the container
      - /usr/app/node_modules # anonymous volume so the bind mount above doesn't hide `node_modules`
    ports:
      - '3000:3000' # Same port that is exposed in `client/Dockerfile`
    depends_on:
      - server # Ensures `client` starts after `server`
```
Now that that's all added, all we need to do is navigate to the directory containing the docker-compose.yml and run the following:
```shell
docker compose up      # start and set everything up
docker compose down    # stop and remove the containers, networks, and other resources
docker compose stop    # stop the services (SIGTERM, then SIGKILL after a grace period)
docker compose start   # start previously stopped services again
docker compose pause   # suspend all processes in the running containers
docker compose restart # stop and start the services
```
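Beyond the lifecycle commands above, a few day-to-day commands are worth keeping handy (service names here match the `client`/`server` services defined above):

```shell
docker compose ps             # list the services and their current state
docker compose logs -f client # follow the logs of a single service
docker compose exec server sh # open a shell inside a running container
```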
Containerizing a Django Application
Django is a full-stack framework that uses Python. This means we'll likely only have one container for the application. However, let's add a service for the database as well this time!
```dockerfile
# Select a Python version from https://hub.docker.com/_/python
FROM python:3.12-alpine
# Installation steps:
WORKDIR /app
COPY requirements.txt /app
RUN pip3 install -r requirements.txt --no-cache-dir
COPY . /app
EXPOSE 8000
# Command for running the Django dev server:
ENTRYPOINT ["python3"]
CMD ["manage.py", "runserver", "0.0.0.0:8000"]
```
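As a quick sanity check, the image can be built and run directly. This sketch assumes the Dockerfile lives in an app/ directory (matching the Compose file in this section), and `mysite` is an arbitrary tag:

```shell
# Build the Django image from the app/ directory and run the dev server,
# mapping container port 8000 to the host.
docker build -t mysite ./app
docker run --rm -p 8000:8000 mysite
```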
Much like before, this Dockerfile simply goes through the installation and setup steps the "Docker" way. Now let's add our docker-compose.yml, much like we did before, but this time with a database!
```yaml
services:
  # Build our image from above:
  web:
    build:
      context: app
    ports:
      - '8000:8000'
    depends_on:
      - db
  # Pull the official Postgres image
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data # persist the database between runs
    environment:
      # Credentials our Django application uses to access the database:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
```
And that's it! Depending on your specific Django application, you might have to tweak it a bit, but this should be enough to get you started!
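One tweak you'll almost certainly need: with the stack running, standard Django management commands can be run inside the `web` service, for example to apply migrations and create an admin user:

```shell
# Run one-off Django management commands inside the running `web` container
docker compose exec web python3 manage.py migrate
docker compose exec web python3 manage.py createsuperuser
```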
Containerizing a .NET Application
.NET is a framework by Microsoft that makes use of the C# language. Typically, this will manifest as a full-stack MVC app with an MSSQL database.
The build steps tend to be a little more complicated, as we're emulating what an IDE like Visual Studio might usually do, but for the most part it's fairly straightforward.
```dockerfile
# Runtime image for the final stage
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app

# SDK image used to build the project
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
COPY . /src
WORKDIR /src
# Build the project and prepare for hosting
RUN dotnet build "aspnetapp.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "aspnetapp.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
# Run the application
ENTRYPOINT ["dotnet", "aspnetapp.dll"]
```
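As with the other examples, you can test the multi-stage image by itself before adding Compose; `aspnetapp` here is an arbitrary tag, and the path assumes the project lives in app/aspnetapp/, matching the Compose file in this section:

```shell
# Build the multi-stage image, then run it with port 80 mapped to the host
docker build -t aspnetapp ./app/aspnetapp
docker run --rm -p 80:80 aspnetapp
```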
Now we just need to add that to the docker-compose.yml, along with the database, which will look a bit different from PostgreSQL.
```yaml
services:
  web:
    build: app/aspnetapp
    ports:
      - 80:80
  db:
    environment:
      ACCEPT_EULA: 'Y' # Accept the EULA
      SA_PASSWORD: example_123 # System admin password
    image: mcr.microsoft.com/azure-sql-edge:1.0.4 # An MSSQL-compatible image
    restart: always # restart the container automatically if it stops
    healthcheck:
      # Used to make sure the database is accepting connections
      test:
        [
          'CMD-SHELL',
          "/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P example_123 -Q 'SELECT 1' || exit 1",
        ]
      interval: 10s
      retries: 10
      start_period: 10s
      timeout: 3s
```
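Because of the healthcheck, you can watch the database come up before relying on it:

```shell
# Start everything in the background, then check service status;
# the db service should eventually report "healthy"
docker compose up -d
docker compose ps
```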
And, as easy as that, we're done!
Conclusion
Docker is fairly simple to get started with, as long as you have a basic example to work from. What I like to do (and what I did to build out these quick tutorials) is go to Awesome Compose, which provides great examples for many of the common use-cases with Docker.
Happy hacking!