Looking for a great (remote only) React Dev? Or just feel like having a chat? Visit my profile on LinkedIn and say hi! 😃
LENGTH WARNING: This is going to be a fairly long article. Apologies, there’s simply a lot of content to get through.
If you want to avoid my nostalgic ramblings, just skip to the actual tutorial below the page breaks.
Oh wow, I’ve been avoiding this one for a while. Like 7 freaking YEARS… Yes, that’s right. In 2013, I was working for a guy called Gabe Monroy who now is a big wig at Microsoft Azure. His startup had this CLI tool called Deis that in one command allowed you to push your app to any of a number of different cloud providers and customize your deployment.
Enter Docker to the stage, and everyone lost their absolute minds. The industry was “disrupted” (damn, I hate that word…), meetups were held, some jobs were created, others were lost, furious debates took place concerning Linux containerization, documentation was written… and I, as a Junior Front End Developer threw up my hands in the air and said — “No thank you!! I’ve got enough on my plate.” and thoroughly refused to learn about Docker.
To name a few of the things I first had to learn before I could focus on Docker, there was — jQuery, Wordpress, PHP, MySQL, Python, PyTest, Backbone, Marionette, Mocha, Jasmine, Selenium, HTML5, CSS3, OOP, Git, Github, Linux, Unix, Functional Programming, NodeJS, React, Redux, Reselect, Redux Sagas, Jest, TDD, ES6+, REST, TypeScript, Next.JS, GraphQL, Postgres, D3, Apollo, C, Heroku, AWS, Google AppEngine, Chef, TCP/IP, SSL… etc.
It took another seven years until I had the time and energy to study Docker, and that is such a shame, because it takes like a day to learn it at the Developer level, and a few days to learn it at a serious DevOps level.
Hopefully you can avoid putting it off for as long as I have, so let’s get you over that fear and dive right in!
Yeah, I know… this is such a cliche way to start, but just like having a colonoscopy every 10 years when you’re over 30 — it’s necessary. 😕💩
Server deployments are tricky. There are startup scripts to run, libraries to install, services to configure, and of course, apps to host. Not only do you need to turn everything on and hook it up together (e.g. App can access the database), but you also need to make sure that every different service is of the right version number!
At a previous company of mine, the entire app would crash if you weren’t on the right version of NodeJS.
But wait, there’s more! When an app goes down, you have to detect that the VM is down, and before Docker, you’d need to SSH into the problematic server, diagnose the issue, fix it, and restart all of the services on that box — sometimes live in production.
Does that sound like a lot of work? Well, to me it does!!
Docker fixes a LOT of these issues.
Using Docker, you configure everything ONCE in a Docker image (a template for your container). You define the services that should run, how they should connect to each other (e.g. which port), and what version number everything should be.
Then you just push it to every server that needs to run the app.
You are then able to monitor all your containers using a tool like Kubernetes, and if there’s a problem with one of them, you simply delete it and replace it with a new container that is working properly. You can diagnose the issue later, since production is all green.
Et voilà! You have now vastly simplified Server/App Deployment and maintenance.
Here are a few helpful explanations for some terms that are regularly used in the Docker ecosphere:
A container is where your application and resources are located.
An image is a blueprint for a container, like a class is for an object. When you create a new container, it is usually based off a predefined image.
A volume holds the data for your containers. Containers are disposable and their filesystems are ephemeral, so any data that needs to persist lives on volumes.
Networking hooks together all the pieces of the architecture above.
A Full Working Example
I created a simple Node Express app that listens on PORT 4000 and responds with a string. You can find it, along with all the instructions for how to set it up and use it, here:
The Docker Workflow
We’re going to go through the full workflow of creating an image, setting up an app, and viewing it in the browser.
After that, we’ll go over how to use docker-compose to achieve the same thing, but much more efficiently.
Step 1: Install the Docker Client
Docker has a desktop client, and it’s highly recommended to use it for local development. You can download it here:
Step 2: Create a .dockerignore File
A .dockerignore file instructs Docker not to add certain files and folders to the image. Here’s a sample one:
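The original sample wasn't preserved here, so this is a typical minimal .dockerignore for a Node project (a sketch, not the author's exact file):

```
node_modules
npm-debug.log
.git
.env
```

Excluding node_modules matters most: dependencies get reinstalled inside the image by npm install, so copying the local folder in would only bloat the build context.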
Step 3: Create a Docker Image with a Dockerfile
We use a Dockerfile to define our image. It’s really just a list of commands that the Docker engine executes to build the image.
Note: There are a lot of base images that are already built and live on the Docker Hub that you can use as standalone images, or you can incorporate them when building your own image.
The Dockerfile should be stored in the root of the project, and the convention is to use UPPER-CASE for the build commands. These commands are run in order, and usually start with a FROM command.
Here’s a full example of a Dockerfile. We’ll go through it line by line.
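The file itself was lost from this copy of the article, so here it is reconstructed from the line-by-line walkthrough that follows — treat it as a sketch; the entry file name index.js and the MESSAGE value are my placeholders:

```dockerfile
# Parent image: the official Node image, pinned to tag 12.18.3
FROM node:12.18.3

# Create the app directory and use it as the working directory
WORKDIR /usr/src/app

# Environment variable available to any process inside the container
ENV MESSAGE "Hello from Docker!"

# Copy package.json first, then install dependencies
COPY package.json .
RUN npm install

# Copy the rest of the local code into the image
COPY . .

# The port the app listens on
EXPOSE 4000

# Default command — note the double quotes; single quotes are invalid here
CMD ["node", "index.js"]
```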
The FROM command specifies the parent image and tells Docker we’re going to build on top of it — e.g. if we’re building a Node app, this parent image will be node:12.18.3. The numbers after the colon specify the tag we want to target. You can see a list of tags on the image’s page on Docker Hub. e.g. Link to Node’s page.
The WORKDIR command tells Docker to create a directory called usr/src/app and use that as our current working directory.
The ENV command declares an environment variable called MESSAGE and assigns it a string value. Any process inside the container will be able to access this environment variable — e.g. via process.env.MESSAGE in Node.
The COPY command tells Docker to copy our package.json file to the current working directory.
The RUN command tells Docker to execute a command inside the image. In this case, Docker will execute npm install.
We then re-use the COPY command to copy all of the code in our local working directory into the image.
The EXPOSE command makes your app available at the port number you provide. In our case, it’s 4000.
And finally, we use the CMD command to give the Docker engine the default command that it should run when it starts our container.
Step 4: Build Your Image
The main command to use in your CLI to build an image is, surprisingly, called:
docker build .
Note the period at the end, which specifies the path. In this case, it’s the current directory.
Tagging your image
It’s advisable to add the -t flag, which enables you to give your image a name and a tag:
docker build -t my-app .
If you wanted to add a tag as well as a name:
docker build -t my-app:1.10.22 .
Once you run this, Docker will read our Dockerfile and start executing the commands.
To see all the images that you have on your machine, you can run:
docker images
If you want to remove some of them, you can use:
docker rmi IMAGE_NAME
Step 5: Run your Image
The command to run your image is:
docker run -p 4000:4000 --name PROCESS_NAME -d IMAGE_NAME
You can substitute PROCESS_NAME for any container name you like, and using the above example, IMAGE_NAME would be my-app:
docker run -p 4000:4000 --name hello-world -d my-app
The -p flag maps a local port to a port that will be used by the container. The --name flag allows you to give a custom name to the process, and the -d flag will run this in detached mode, so it doesn’t lock up our terminal.
To see the container that we just ran, or any other Docker containers running on your system, you can use:
docker ps
ps stands for process status. This command gives you a list of processes, which we can also refer to as containers. Hopefully this will stop you from getting confused between the two terms, which mean the same thing in this context.
To really see everything, including the processes that have been stopped, or those that errored out and failed, you can add the -a flag:
docker ps -a
Dealing with a Failing Process/Container
Sometimes you build a container and run it and it totally fails. You can tell this because the container won’t show up when you run docker ps, but it will when you run docker ps -a.
The first thing to do is to look at its logs.
So, imagine you have an output of docker ps -a like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f42b9fdcd304 my-app "docker-entrypoint.s…" 17 seconds ago Exited (127) 16 seconds ago bens-app-1
Sorry for the crappy formatting. Here, we can see that our container (ID f42b9fdcd304) exited with status code 127 one second after we ran it, meaning it failed.
We need to read the logs to figure out what happened and why it failed, so let’s do so with the command:
docker logs f42b9fdcd304
Note that we also could have used the container’s name — docker logs bens-app-1.
We can also add the -f flag, which stands for follow and prints out any logs as they occur in real time. This is good to use if you’re trying to diagnose a problem in a container. Press ctrl-c to exit.
Once we’ve diagnosed the problem (in our case, we used single quotes instead of double quotes in our Dockerfile), we may want to delete this container so that we can reclaim the name bens-app-1. To do that, run:
docker rm bens-app-1
Or you can use the Container ID.
Starting and Stopping Containers
If the process is running and we want to stop it, use:
docker stop PROCESS_NAME
Once it’s stopped, if we come back and want to restart it, we just run docker ps -a to find its name or ID, and then run:
docker start PROCESS_NAME
Navigate to Your App
At this point, if your app is simple enough — e.g. the example app I provided above — you can navigate to localhost:4000 in your browser to view your app.
Step 6: Use Docker-Compose
Docker Compose basically does everything we went through above (you still need to set up a Dockerfile, though), except that now you only need to run a single command — docker-compose up -d. i.e. it combines the docker build and docker run commands!
More specifically, Docker Compose is a tool that allows you to manage multiple containers and set up your entire back end with a single file. It’s useful if you want to add a database or other services and make them available to your app.
A docker-compose.yml file will let Docker know exactly what services we want to compose and what application we want to start.
In this article, we’re just going to use docker-compose to run our simple app.
We’ll go over setting up a database in a future article (which will likely be released tomorrow).
Here’s a sample docker-compose.yml file:
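The file itself didn't survive in this copy, so here it is reconstructed from the description that follows (a sketch; service and container names come from the walkthrough):

```yaml
version: "2"
services:
  web:
    build:
      # context is the path Docker should build from
      context: .
      # the Dockerfile to look for and load
      dockerfile: Dockerfile
    container_name: web
    ports:
      - "4000:4000"
```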
We need to use version 2, since version 1 will error out. Then we declare a service with a name of web. In the build section, we define the container’s context, which is another name for the container’s path. We then give it the name of the Dockerfile that it should look for and load, and then set the container name to web. Finally, we expose and map port 4000 to port 4000, just like we did in the docker run command.
Once you have saved this to your project root directory, run it with:
docker-compose up -d
The -d flag will run this in detached mode, so it doesn’t lock up your terminal. You can use docker ps to make sure that it’s running, and then navigate to localhost:4000 in your browser to see the output.
Basically, Docker Compose will create an image, run it, and give it the name that you defined.
So if you now run docker images, you’ll see docker-simple-example_web (if you used my example app on Github), and if you run docker ps, you’ll see a container called web.
If you want to stop your application, run:
docker-compose down
Docker solves many of the problems that used to exist with Server Deployments. It enables you to quickly and effectively set up and configure services, and reduces the amount of time you need to focus on this stuff so that you can really concentrate on building your awesome app.
If you enjoyed this article, please give it a clap.
Happy coding! 😃