Opinionated Docker development workflow for Node.js projects - Part 2  

In a recent post, I described why you’d want to use Docker to develop server applications. Then I described how to use an opinionated workflow. In this post, I’ll describe how the workflow works under the covers.

If you haven’t read part 1, I strongly suggest you check it out now.

Directory Structure and Files

Remember the directory structure? We established a layout that isolates all your application-specific code in a single sub-folder, while the top-level directory holds all the workflow files.

It looks like this:

└── Main_Project_Directory/
    ├── server-code/
    │   ├── server.js
    │   ├── package.json
    │   └── ... (All your other source code files)
    ├── .gitignore
    ├── .dockerignore
    ├── Dockerfile
    ├── docker-compose.yml
    └── README.md

Dockerfile

Here’s the complete Dockerfile:

# Base node images can be found here: https://hub.docker.com/_/node?tab=description&page=1&name=alpine
ARG NODE_IMAGE=node:16.17-alpine

#####################################################################
# Base Image
#
# All these commands are common to both development and production builds
#
#####################################################################
FROM $NODE_IMAGE AS base
ARG NPM_VERSION=npm@8.18.0

# While root is the default user to run as, why not be explicit?
USER root

# Run tini as the init process and it will clean up zombie processes as needed
# Generally you can achieve the same effect by adding `--init` to your `docker run` command
# And Node.js servers tend not to spawn processes, so this is belt and suspenders
# More info: https://github.com/krallin/tini
RUN apk add --no-cache tini
# Tini is now available at /sbin/tini
ENTRYPOINT ["/sbin/tini", "--"]

# Upgrade some global packages
RUN npm install -g $NPM_VERSION

# Specific to your framework
#
# Some frameworks force a global install tool such as aws-amplify or firebase.  Run those commands here
# RUN npm install -g firebase

# Create space for our code to live
RUN mkdir -p /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app

# Switch to the `node` user instead of running as `root` for improved security
USER node

# Expose the port to listen on here.  Our Express server listens on 8080 by default, so we'll set that here.
ENV PORT=8080
EXPOSE $PORT

#####################################################################
# Development build
#
# These commands are unique to the development builds
#
#####################################################################
FROM base AS development

# Copy the package.json file over and run `npm install`
COPY server-code/package*.json ./
RUN npm install

# Now copy the rest of the code.  We separate these copies so that Docker can cache the node_modules directory.
# Only when you add/remove/update the package.json file will Docker rebuild the node_modules dir.
COPY server-code ./

# Finally, if the container is run in headless, non-interactive mode, start up node
# This can be overridden by the user running the Docker CLI by specifying a different command
CMD ["npx", "nodemon","server.js"]

#####################################################################
# Production build
#
# These commands are unique to the production builds
#
#####################################################################
FROM base AS production

# Indicate to all processes in the container that this is a production build
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}

# Now copy all source code
COPY --chown=node:node server-code ./
RUN npm install && npm cache clean --force

# Finally, if the container is run in headless, non-interactive mode, start up node
# This can be overridden by the user running the Docker CLI by specifying a different command
CMD ["node","server.js"]

Dockerfile: Version Management and Multi-Stage

Let’s start at the top of the file:

# Base node images can be found here: https://hub.docker.com/_/node?tab=description&page=1&name=alpine
ARG NODE_IMAGE=node:16.17-alpine

#####################################################################
# Base Image
#
# All these commands are common to both development and production builds
#
#####################################################################
FROM $NODE_IMAGE AS base
ARG NPM_VERSION=npm@8.18.0

First, versions of Node.js and npm change over time. These are critical dependencies, so we declare them as build arguments at the top of the file, where developers can update them when starting a new project.

Also note that this is a multi-stage Dockerfile. This is to accommodate a single file for both development and production builds. So we have:

  1. A common base target
  2. A development target
  3. A production target
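
Because the image and npm versions are declared as build arguments, you can also override them at build time without editing the Dockerfile. A minimal sketch of what that might look like (the newer versions shown here are purely illustrative):

docker build . -t mynodeapp:DEV --target=development \
  --build-arg NODE_IMAGE=node:18-alpine \
  --build-arg NPM_VERSION=npm@9.8.1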

Dockerfile: Base stage

These are the commands in the Dockerfile for the base stage.

We explicitly set the user to root for the next several instructions.

# While root is the default user to run as, why not be explicit?
USER root

We install tini as the init process so that termination signals from the system are forwarded to the application and it can shut down gracefully. While not super important for Node.js containers, it’s a good practice in case we want to reuse this Dockerfile for other languages such as Python. More details can be found at https://github.com/krallin/tini.

# Run tini as the init process and it will clean up zombie processes as needed
# Generally you can achieve the same effect by adding `--init` to your `docker run` command
# And Node.js servers tend not to spawn processes, so this is belt and suspenders
# More info: https://github.com/krallin/tini
RUN apk add --no-cache tini
# Tini is now available at /sbin/tini
ENTRYPOINT ["/sbin/tini", "--"]

We install the specified version of npm here. This is also the place to install any other packages that must be global, such as aws-amplify or firebase.
While I hope the trend of requiring global packages goes away, we can easily support it with Docker. Be sure to pin a specific version of your global package to prevent nasty surprises later!

# Upgrade some global packages
RUN npm install -g $NPM_VERSION

# Specific to your framework
#
# Some frameworks force a global install tool such as aws-amplify or firebase.  Run those commands here
# RUN npm install -g firebase

Create a location inside the container to hold your code. You can call this anything you’d like, but we’re following the traditional Linux layout here for consistency. Note that we also have to change the ownership of this directory to the proper user (called node) so that the user can create, read, and write files in it.

So your application will live in the directory /home/node/app. Finally, we switch subsequent build instructions to run as the node user, which follows the recommended best practice of not running as root.

# Create space for our code to live
RUN mkdir -p /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app

# Switch to the `node` user instead of running as `root` for improved security
USER node

Expose the port on which the application will listen. This is configurable, but note that the port number appears in several files, so you’ll need to change it everywhere.

# Expose the port to listen on here.  Our Express server listens on 8080 by default, so we'll set that here.
ENV PORT=8080
EXPOSE $PORT
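
The Dockerfile only sets and exposes the port; your application still has to listen on it. As a minimal sketch (the route is purely illustrative), a server.js that honors the PORT variable and the termination signals forwarded by tini might look like this:

// server.js (sketch): listen on the PORT provided by the Dockerfile, defaulting to 8080
const express = require('express');

const app = express();
app.get('/', (req, res) => res.send('Hello from the container'));

const port = process.env.PORT || 8080;
const server = app.listen(port, () => console.log(`Listening on ${port}`));

// tini forwards SIGTERM (e.g. from `docker stop`); close open connections before exiting
process.on('SIGTERM', () => {
  server.close(() => process.exit(0));
});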

Dockerfile: Development stage

These are the commands in the Dockerfile for the development stage.
When doing a build, you must specify the stage using the flag: --target=development.

First we copy over the package.json file from the server-code directory into the working directory /home/node/app (set above by WORKDIR). We also include any package-lock.json files.

#####################################################################
# Development build
#
# These commands are unique to the development builds
#
#####################################################################
FROM base AS development

# Copy the package.json file over and run `npm install`
COPY server-code/package*.json ./

Next, we run npm install to establish all the application dependencies including development dependencies.

RUN npm install

Then we copy the rest of the code. As the comment states, we separate this step for build-cache optimization: if we don’t modify the package requirements, we won’t have to re-run the npm install step, which can take a long time. More often we’ll be changing just the application source code, so subsequent builds will be very fast.

And finally, we run npx nodemon server.js to start the application using nodemon to reload the code when it changes.

# Now copy the rest of the code.  We separate these copies so that Docker can cache the node_modules directory.
# Only when you add/remove/update the package.json file will Docker rebuild the node_modules dir.
COPY server-code ./

# Finally, if the container is run in headless, non-interactive mode, start up node
# This can be overridden by the user running the Docker CLI by specifying a different command
CMD ["npx", "nodemon","server.js"]

Dockerfile: Production stage

These are the commands in the Dockerfile for the production stage.
When doing a build, you must specify the stage using the flag: --target=production.

#####################################################################
# Production build
#
# These commands are unique to the production builds
#
#####################################################################
FROM base AS production

Set some environment variables so all downstream scripts can discover this is a production build and act accordingly.

# Indicate to all processes in the container that this is a production build
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
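
Application code can then branch on this variable. A trivial, hypothetical example:

// sketch: gate verbose logging on the NODE_ENV value set by the Dockerfile
const isProduction = process.env.NODE_ENV === 'production';

if (!isProduction) {
  console.log('Verbose logging enabled (non-production build)');
}

Express also checks this variable internally, for example to cache view templates when running in production.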

Copy over all the source code at once. We are going to do a full npm install, so there’s no need to break up the copy steps like we did in the development stage. Because NODE_ENV is set to production, npm will skip devDependencies. After the install, we tell npm to clear its cache to keep the image size smaller.

# Now copy all source code
COPY --chown=node:node server-code ./
RUN npm install && npm cache clean --force

Here we start the application by telling node to run server.js.

# Finally, if the container is run in headless, non-interactive mode, start up node
# This can be overridden by the user running the Docker CLI by specifying a different command
CMD ["node","server.js"]

Development Mode without Docker Compose

We’ll show how to use this Dockerfile using just the command line interface (CLI). In the next section, we’ll simplify this with Docker Compose.

  1. If this is the first time you are running the container, or if you have changed any package dependencies, run:

    docker build . -t mynodeapp:DEV --target=development
    

    This will build the image using the development stage instructions from the Dockerfile. You have to name the image something, so we’re calling it mynodeapp with the tag DEV. Using DEV helps prevent accidental production deployment since it isn’t a semantic version.

  2. To run the container, type the following command:

    docker run -ti --rm -p 8080:8080 -v "$(pwd)/server-code:/home/node/app" -v /home/node/app/node_modules mynodeapp:DEV
    

Docker will now run your container in the terminal window, and any console messages will appear as they are printed.

Changes to the source code should trigger a reload of Node and will be reflected in the console.
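
If edits on your host do not trigger a reload, file-change events sometimes fail to propagate through the bind mount on certain platforms. One possible workaround is nodemon’s polling mode, which you could try by overriding the default command at the end of the docker run invocation:

docker run -ti --rm -p 8080:8080 -v "$(pwd)/server-code:/home/node/app" -v /home/node/app/node_modules mynodeapp:DEV npx nodemon --legacy-watch server.js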

Notes

  • Any time you add, remove, or update a package dependency, you’ll have to re-run the docker build command shown above.
  • We assume that node will be running on port 8080. If this is not the case for your project, feel free to change it, but make sure to change it everywhere.

This workflow is made possible by some clever Docker options. Let’s break down the docker run command above:

  • docker run: This is the primary Docker command to take a container image and run it
  • -ti: This instructs Docker to run this container interactively so you can see the output console
  • --rm: After you exit the container instance by pressing Ctrl-C, this flag instructs Docker to remove the stopped container
  • -p 8080:8080: Ensure port 8080 on the container is mapped to port 8080 on your local machine so you can use http://localhost:8080
  • -v "$(pwd)/server-code:/home/node/app": This maps the directory server-code (along with your source code) into the container directory /home/node/app. So your source code and everything in the server-code directory is available in the container.
  • -v /home/node/app/node_modules: This creates an anonymous volume for the node_modules directory so the bind mount above doesn’t hide the node_modules that was built inside the container. This is important because the node_modules on your local machine may contain packages built for your local operating system, and we want the container to use the packages installed during the image build.
  • mynodeapp:DEV: This is whatever you want to call your container image. We tag this image with DEV to make sure you don’t accidentally deploy this version.

Development Mode simplified with Docker Compose

The commands that enable this workflow are long, complex, and difficult to remember. To simplify things, we have a few options. One common option is to create shell scripts for the commands above. This works, but it is operating-system specific (e.g., Windows needs a different solution than Unix-like shells).

Instead, we can use Docker Compose to simplify our workflow. Docker Compose is a tool that lets you put all of these options into a file that does the work for you. It also allows you to spin up multiple containers at the same time, which is useful if you need both a web server and a database server for local development; that scenario is outside the scope of this post.

Note that the Docker Compose we are going to show here is for development flow only.

First, create the docker-compose.yml file:

services:
  app:
    build:
      context: .
      target: development
      args:
        - NODE_ENV=development
    environment:
      - NODE_ENV=development
    ports:
      - "8080:8080"
    volumes:
      - ./server-code:/home/node/app
      - /home/node/app/node_modules

Note that we’ve put the same command-line arguments shown earlier into the file itself. That makes it easy to build:

docker compose build

And really easy to run:

docker compose up

And when you’re finished:

docker compose down
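
As with the plain CLI flow, you’ll need to rebuild whenever package dependencies change. Compose can do both steps in one command:

docker compose up --build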

Conclusion

And that’s about it. Don’t forget the .dockerignore and, optionally, the .gitignore; you can customize both as you see fit.
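
As a starting point, a minimal .dockerignore might exclude locally installed modules and repository metadata so they never reach the build context; treat this as a sketch and adjust it to your project:

# .dockerignore (sketch)
server-code/node_modules
npm-debug.log
.git
*.md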