Developing with Docker and Webpack
One of the big advantages of using containers in general is that you can keep your environments (dev, test, staging, prod, or even another developer’s machine) the same, which helps in tracking down those “prod only” bugs. Because of that, I won’t be using webpack-dev-server. I know, I know, but I’m really interested in making sure that my development environment matches the others as closely as possible, and webpack-dev-server is definitely not a production-ready server. So, here’s the plan:
- Create a small container to act as a web server. I’m going to use `nginx` for this, but there’s nothing stopping you from using your favourite web server.
- Create another container that will transpile local source code. The idea here is that we want `webpack` to watch our code for changes and rebuild whenever we save.
- Create a shared volume for the two containers. This volume will hold the result of the transpilation above and will be available to both the web server container and the transpiler container.
First things first: if you’ve never used Docker before, you’ll need to install the Docker Toolbox. It contains all you need to get started spinning up containers and such. I also suggest you install Kitematic (it’s part of the toolbox). We’ll be setting everything up via the command line, so for the most part we won’t be using Kitematic, but it provides great access to container logs so you don’t have to go hunting for them in the event you run into a problem.
Once the toolbox is installed, open up a Docker Quickstart Terminal. It’s from here we’ll be running all of our commands.
We’ll start with the `nginx` container, as it’s the simpler of the two. All we’re trying to accomplish here is to set up a simple web server. Straightforward, right? There are prebuilt nginx example Docker images available, but where’s the fun in that? We’ll put together an image from scratch just to see how everything works. We’ll need a `Dockerfile`, which describes what your image is going to look like, what it’ll run and the things it’ll do. Here’s my really simple `Dockerfile` for the `nginx` container. We’ll call this one `docker.nginx`.
```dockerfile
# docker.nginx
FROM nginx
RUN mkdir /wwwroot
COPY nginx.conf /etc/nginx/nginx.conf
RUN service nginx start
```
Ok, let’s break this down.
- `FROM nginx` – Dockerfiles have a sort of inheritance built into them. This line indicates that we’re inheriting from the official nginx Docker image, which gives us a Debian-based container with nginx preinstalled. Without this inheritance, Docker would be much more difficult to use, as you’d have to build your images from the ground up. If we built our image with just this statement, it would mirror the official image.
- `RUN mkdir /wwwroot` – This creates the directory our web server will serve files from; later on, we’ll mount a shared volume here.
- `COPY nginx.conf /etc/nginx/nginx.conf` – This line assumes you want to make some modifications to your `nginx` web server; it replaces the image’s default configuration with our own. Take a look at the nginx.conf here.
- `RUN service nginx start` – Unsurprisingly, this starts the `nginx` server. (Strictly speaking, `RUN` executes at build time, and the official image’s default command already starts `nginx` when the container runs, so this line is harmless but redundant.)
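The nginx.conf linked above may no longer resolve, so here’s a rough sketch (not the original file) of what a configuration matching this setup could look like, assuming we listen on port 8080 and serve files out of the shared `/wwwroot` directory:

```nginx
# Hypothetical nginx.conf: listen on 8080 and serve the shared /wwwroot directory
events {}

http {
  include /etc/nginx/mime.types;

  server {
    listen 8080;
    root   /wwwroot;
    index  index.html;
  }
}
```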
Now that we’ve got a `Dockerfile` ready to go, we’ll need to build the image. We’ll call the image `my-nginx` to separate it from the official `nginx` Docker image.
```shell
docker build -t my-nginx -f docker.nginx .
```
Here, we’re running the Docker build command with two arguments: `-t` indicates a “tag”, which is how we’ll reference the built image later, and the `-f` parameter tells Docker which Dockerfile to use. The `.` at the end of the command is important, as it tells Docker the build context for the image: the directory whose contents are sent to the Docker daemon and made available to instructions like `COPY`.
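Because the whole context directory gets shipped to the Docker daemon, it’s worth keeping it lean. A `.dockerignore` file (my suggestion, not part of the original setup) excludes things the build doesn’t need; assuming a typical Node project, it might look like:

```
# .dockerignore: keep the build context small
node_modules
.git
```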
So far so good? Ok, let’s move on to the more complicated `webpack` container. Its `Dockerfile` is a little more involved than the last. It’s called `docker.webpack`.
```dockerfile
# docker.webpack
FROM ubuntu:latest
WORKDIR /app
COPY . /app
RUN apt-get update
RUN apt-get install curl -y
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && apt-get install nodejs -y
RUN npm install webpack -g
RUN npm install
CMD webpack --watch --watch-poll
```
Let’s break it down.
- `FROM ubuntu:latest` – This indicates that we’re building a standard Ubuntu container. The official Node Docker image expects us to be running a Node app, which isn’t what we’ll be doing, so we’ll have to install Node manually.
- `WORKDIR /app` – The `/app` directory is where our source code will live. The `WORKDIR` statement indicates to Docker that every subsequent command should be run from this directory.
- `COPY . /app` – Now we’re copying our code from the host machine to the container. The `.` indicates that we want to copy from the current working directory on the host, and `/app` is the destination folder in the container.
- `RUN apt-get update` – Update our package lists so `apt` knows about the latest available packages.
- `RUN apt-get install curl -y` – We’re going to use `curl` to fetch the Node.js setup script, so we’ll need to install it first.
- `RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && apt-get install nodejs -y` – Finally, install Node.js itself using the NodeSource setup script.
- `RUN npm install webpack -g` – Now, onto the good stuff. Here we’re installing the `webpack` CLI globally using `npm`. It’s what we’re going to execute to transpile our code.
- `RUN npm install` – Before we do any transpiling, however, we’ll need to make sure all of our project’s dependencies are installed properly.
- `CMD webpack --watch --watch-poll` – Now that everything’s ready to go, we can get `webpack` to watch for code changes in our directory and transpile them. Note that we have to use polling here, as file system watch events aren’t delivered over network shares, which is how the container mounts our app directory. Another note: you’ll have to make some changes to your `webpack.config.js` to allow for watching with polling (see here).
Now, we want to build our `webpack` Docker image. We’ll call it `my-webpack`.
```shell
docker build -t my-webpack -f docker.webpack .
```
If you require more assistance with Dockerfiles, the Docker documentation is excellent.
Running the Docker Images
So now we’ve got two Docker images built: `my-nginx` and `my-webpack`. We’re not done, however, as we still need to run these images as containers. We do that with the `docker run` command. Here are the run commands for the two containers.
```shell
docker run --name my-nginx-container -p 8080:8080 -v wwwroot:/wwwroot my-nginx
docker run --name my-webpack-container -p 35729:35729 -v ~/path/to/code:/app -v /app/node_modules -v wwwroot:/wwwroot my-webpack
```
I’ll explain what’s going on with the `nginx` container’s run command, as they’re both pretty similar. First, the `--name` flag sets the name of the container once it’s up and running. It’s not required, so if you omit it, Docker will generate a readable name for you (you can view running containers with the `docker ps` command). The `-p` flag publishes port 8080 from the container to the host, which is useful for a web server so we can access the served files. The `-v` flag tells the run command to mount a volume between the host and the container; any file changes made on either side of the mount point will be reflected in the other. I’ll dive into volumes a little later. (In the `webpack` container’s command, the extra `-v /app/node_modules` is a common trick: it creates an anonymous volume so the `node_modules` installed inside the container isn’t hidden by the host directory mounted over `/app`.) Finally, the last portion of the command specifies the name of the image we built above.
Once you’ve run these commands, you should verify that the containers are running properly with `docker ps`. These commands are verbose to say the least, and you’ll have to run them in conjunction with the build commands each time you make a change to your Dockerfiles. Fortunately, the folks at Docker realized there’s a lot of stuff going on here, so they put together a tool to spin up multiple containers at once.
Docker Compose allows us to write a configuration file that essentially duplicates the command-line options given in the `docker run` commands for our images. The file is typically called `docker-compose.yml`, and here are the docs for it. For our situation, the file should look like this.
```yaml
version: '2'
services:
  nginx:
    build:
      context: .
      dockerfile: docker.nginx
    image: my-nginx
    container_name: my-nginx-container
    ports:
      - "8080:8080"
    volumes:
      - wwwroot:/wwwroot
  webpack:
    build:
      context: .
      dockerfile: docker.webpack
    image: my-webpack
    container_name: my-webpack-container
    ports:
      - "35729:35729" # for live reload
    volumes:
      - .:/app
      - /app/node_modules
      - wwwroot:/wwwroot
volumes:
  wwwroot:
    driver: local
```
So what’s going on here? The first line indicates the version of the Docker Compose configuration format; version 2 introduces some options that weren’t available in version 1. Next comes the juicy part, in which we specify the services to spin up or tear down. Let’s run through the `nginx` configuration section.
- `build` – This section tells Docker Compose what to do if it needs to build the image. We specify the context and Dockerfile, which match the `docker build` command-line arguments from earlier.
- `image` – The name of the image to place into a container.
- `container_name` – The name of the container.
- `ports` – This allows us to forward ports directly to the host. Here, we’re forwarding port 8080 because that’s where the nginx instance in our container will be listening.
- `volumes` – Here, we specify the volumes to mount. There’s just the one: a shared volume mounted at `/wwwroot`. The `nginx.conf` from right at the start references this directory as the server root, so it’ll serve up files from here.
Below the `services` section, there’s a `volumes` part, which tells Docker Compose to create a shared volume that will be used by both of our containers. The idea here is that we want to have our `webpack` container watch for changes in its app root, transpile them, then push the results up to the shared `wwwroot` volume. From there, the `nginx` container can serve them to anyone who wants them. The Docker documentation talks at length about volumes if you require more clarification.
Now that we’ve got our `docker-compose.yml` file all ready to go, we can build and spin up both of our containers at the same time with just one command.
```shell
docker-compose up --build -d
```
The `--build` flag indicates that we want to build our images if necessary, using the `build` section under each of our services in the config file. The `-d` flag tells Docker Compose to run the containers in the background.
Now that our containers are running, you should be able to view your compiled web site at http://192.168.99.100:8080. This is the IP address of the VM that Docker Toolbox created on your behalf, but it’s not always the same. You can determine your VM’s IP address with the `docker-machine ip default` command.