Opinionated Docker development workflow for Node.js projects - Part 1  

In a recent post, I described why you’d want to use Docker to develop server applications. In this post, I’ll describe how to develop a Node.js application with Docker.

Overview

The goals we’d like to accomplish in this blog post:

  • Focus on a Node.js environment for writing a server-side application (e.g. Express, Sails, or similar)
  • Allow local development without Docker (optional)
  • Allow local development with Docker with hot refresh when code changes
  • Provide instructions to build images for testing and production
  • Isolate container scripts from source code so one folder structure can be used for many projects
  • Be straightforward, but explain all the steps so modifications and updates can be made

This post is divided into two parts:

  1. How to use the workflow (this post!)
  2. Dive into the details of how the Dockerfiles work

We’ll start with how to use the workflow and readers can continue on to the working details if interested.

Quick Docker Terminology

For those who are new to Docker, I use some terms in this post and wanted to quickly define them:

Dockerfile: A file containing a set of instructions that tell Docker Desktop how to build an image.

Image: A file that contains the end result of the instructions in a Dockerfile after Docker Desktop performs a build.

Container: A running instance of your image that can execute code.
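
To make these concrete, here is roughly how the three relate on the command line (the image name is illustrative):

    docker build -t myimage .   # Dockerfile + build = image
    docker run myimage          # image + run = container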

That should be enough to get you rolling - let’s get to the workflow!

How to use the workflow

Directory Structure and Files

We establish a clear directory structure that isolates all your application-specific code in a single sub-folder, while the top-level directory holds all the workflow files.

It looks like this:

└── Main_Project_Directory/
    ├── server-code/
    │   ├── server.js
    │   ├── package.json
    │   └── ... (All your other source code files)
    ├── .gitignore
    ├── .dockerignore
    ├── Dockerfile
    ├── docker-compose.yml
    └── README.md

This diagram was generated with https://tree.nathanfriend.io/

Notes

  • The name server-code is arbitrary. You may rename the directory, but be sure to update all the references in the Dockerfiles and Docker commands shown in this blog post. The purpose of this subdirectory is to isolate your server code from all the workflow stuff.
  • The Dockerfile, docker-compose.yml, and .dockerignore files will be taken from this repo.
  • Your application’s entry point must be server.js, because the container runs node server.js when launching. If you want to rename it, you will have to update the workflow files to refer to your own start file.
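
If you don’t have a server yet, here is a minimal placeholder server.js you could start from. It’s a sketch that assumes only what this workflow expects: an entry point named server.js that listens on port 8080. It uses Node’s built-in http module, so it has no dependencies:

    // Minimal placeholder server.js -- swap in your Express/Sails/etc. app
    const http = require('http');

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from the container\n');
    });

    // The workflow below assumes the app listens on port 8080
    server.listen(8080, () => {
      console.log('Listening on http://localhost:8080/');
    });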

Setup

You’ll need to install Docker Desktop.

Clone this repo: https://github.com/edgarroman/docker-setup-node-container, or just take the Docker-related files and build a directory structure as shown above.
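
For example:

    git clone https://github.com/edgarroman/docker-setup-node-container.git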

Updates

As time goes on, you’ll want to modify or upgrade the versions of Node.js and npm. You can find them at the top of the Dockerfile. At the time of this writing, the lines look like:

# Base node images can be found here: https://hub.docker.com/_/node?tab=description&page=1&name=alpine
ARG NODE_IMAGE=node:16.17-alpine

For the version of Node.js, head to the official Node.js page on Docker Hub and pick your base image. I recommend you stick with alpine unless you have additional needs. Replace the 2nd line in the Dockerfile with your desired tag.

For the npm version, see line 11 of the Dockerfile. Update this as you see fit.

ARG NPM_VERSION=npm@8.18.0
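
This ARG is presumably consumed later in the Dockerfile by a line along these lines (an assumption on my part; part two walks through the actual file):

    # Assumption: upgrade the image's bundled npm to the pinned version
    RUN npm install -g ${NPM_VERSION}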

All other package versions and whatnot are up to your preferences inside your app.

Workflow Guide

We’ll explore several workflows for developing and testing your code. This post covers three:

  1. Local development without containers
  2. Local development with containers (Preferred)
  3. Production Build and Local Testing with containers

Local development without containers

This optional workflow does not use Docker at all. It lets you develop your code locally on your system with the fewest abstractions and complications, but it also means you have to install the correct versions of Node.js and npm locally.

Your local system will be running Node directly and loading your code directly. This workflow requires the least amount of processing power from your machine and will provide the most responsive development environment: when you make changes to your code, they’ll be reflected as quickly as possible, with nodemon hot-reloading your code when changes are detected.
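
For reference, a local session with this workflow might look like the following, assuming nodemon is listed in your project’s devDependencies (an assumption; the contents of package.json are up to you):

    cd server-code
    npm install             # installs dependencies locally, including nodemon
    npx nodemon server.js   # restarts Node whenever a source file changes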

The downside to this approach is that your local machine is most likely not running the operating system that your final container will be running. If you’re running Windows, macOS, or even some flavors of Linux, the packages used locally may not be identical to those ultimately used in production.

These differences between platforms could inject subtle bugs and errors that are confounding and difficult to debug. While many straightforward JavaScript packages are identical between platforms, differences may appear when your code needs to interact with the host machine’s operating system.

With the pitfalls noted above, why should you take this approach? I would only recommend this approach if you are working in an environment where running Docker Desktop puts too much stress on your machine.

In general, I suggest using the next workflow.

Local development with containers (Preferred)

This workflow allows you to develop by running your code in a container environment. This container environment matches exactly what you will be deploying to production. And you don’t need to install anything on your local machine aside from Docker Desktop.

In addition, if you are working with a team, you can be assured that regardless of which operating system each member is running, the code will behave the same across all hosts.

A key benefit of this workflow is that you can edit your source code and any updates will be reflected in the container. We are still using nodemon to detect source code changes and reload Node. This greatly eases development by allowing developers to see changes much faster than having to rebuild the image on every change.
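
The actual docker-compose.yml comes from the repo above and is dissected in part two. As a rough sketch of the mechanism only, a compose file that enables this kind of hot reload typically looks something like the following; the service name, stage name, and container paths are assumptions, not the repo’s exact contents:

    services:
      app:
        build:
          context: .
          target: development   # assumed name of a dev stage in the Dockerfile
        command: npx nodemon server.js
        ports:
          - "8080:8080"
        volumes:
          # Bind-mount your source so edits on the host show up in the container
          - ./server-code:/usr/src/app
          # Anonymous volume so the image's node_modules isn't masked by the host's
          - /usr/src/app/node_modules

In a setup like this sketch, the anonymous node_modules volume on the last line is also why you may see an empty node_modules folder appear on your host (see the notes below).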

Steps to get up and running

  1. Start Docker Desktop on your local machine

  2. Navigate to the main project directory (not in server-code)

  3. If this is the first time you are running this workflow, or if you have changed any package dependencies, then run:

    docker compose build
    

    This step will run npm install in your image and lock in whatever you list in package.json.

  4. Now run the following command to create a container (a running instance of your image):

    docker compose up
    
  5. You’ll be able to see your project running at http://localhost:8080/, and any logs will be printed to the console.

  6. Press Control-C to exit the console and stop the container. (Equivalent to docker compose stop if you’re familiar with Docker commands)

  7. At this point your container is stopped, but Docker has it ready to start up again just in case. If you’re finished developing or you need to make package changes, type the following to have Docker Desktop do a complete cleanup. It will remove the container, but keep your image around in case you want to start it again.

    docker compose down
    

Notes

  • If you make any changes to dependent packages, then you’ll have to run the docker compose build command as shown above. Do this anytime you add, update, or remove a package.
  • We assume that Node will be running on port 8080. If this is not the case for your project, feel free to change it, but make sure to change it everywhere, especially in the Dockerfile and docker-compose.yml.
  • You may find an empty node_modules under the server-code directory. That’s ok. When you do a build, the node_modules directory is created inside your Docker image, but not pulled from your local machine. So an empty node_modules is normal.

Production Build and Local Testing with containers

This workflow allows you to test your container by running it locally but with production settings. It’s an exact match of what you would deploy to production, but it lets you view the console output to help track down any bugs or errors.

For this workflow, there is no live reloading of source code, so if you make a change to the source code, you’ll have to re-run the build step for every change.

Steps to get up and running

  1. Start Docker Desktop on your local machine

  2. Navigate to the main project directory (not in server-code)

  3. To build the image:

    docker build . --target=production -t mynodeapp:1.00
    

    The --target=production flag is very important here. It tells the build process to strip out any development dependencies and extraneous files. (A sketch of the general shape of such a Dockerfile appears after these steps.)

    A note here on the name of your image: I’ve picked mynodeapp as the name and 1.00 as the version. I suggest you call your application something that is meaningful to you and follow semantic versioning.

  4. To run an instance of your production image locally, run the following command:

    docker run -ti --rm -p 8080:8080 mynodeapp:1.00
    

    One more note: you’re running the production version of your application here, but most non-trivial apps will need connectivity to other services such as a database. Providing that connectivity is left as an exercise for the reader, though one common approach is sketched below.
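
To make the --target=production flag concrete, here is a rough sketch of the general shape of a multi-stage Dockerfile with a production stage. This is an illustration only, not the repo’s actual Dockerfile (part two walks through the real one); the stage names and paths are assumptions:

    # Sketch only -- not the repo's actual Dockerfile
    ARG NODE_IMAGE=node:16.17-alpine
    FROM ${NODE_IMAGE} AS base
    WORKDIR /usr/src/app
    COPY server-code/package*.json ./

    # Development stage: full install, including nodemon and other dev tools
    FROM base AS development
    RUN npm install
    COPY server-code/ ./
    CMD ["npx", "nodemon", "server.js"]

    # Production stage: runtime dependencies only
    FROM base AS production
    RUN npm ci --omit=dev
    COPY server-code/ ./
    EXPOSE 8080
    CMD ["node", "server.js"]

As for connectivity to other services, one common approach is to pass configuration through environment variables at run time. DATABASE_URL below is a hypothetical variable name; your application would have to read it itself (e.g. via process.env.DATABASE_URL):

    docker run -ti --rm -p 8080:8080 \
      -e DATABASE_URL="postgres://user:pass@host:5432/mydb" \
      mynodeapp:1.00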

Production Deployment

Deploying your production image is outside the scope of this blog post, especially since it varies wildly based on your Docker hosting environment.

Conclusion

This is the end of the first part of this post, where we explained how to use the workflow. Part two will dive into the details of how the Dockerfiles were created and how they enable the workflow.