Building Custom Images in Docker

Thousands of images are available to pull from Docker Hub and can be run as containers with a simple run command, but it is almost certain that none of them fits exactly what you want running in your production environment. Most of the time, you will find yourself running containers, tweaking them, installing binaries and libraries, and doing a bunch of other things before those containers are finally ready to be deployed in your own specific environment.

Once you have your containers fully updated and running, you might want to reuse them for future deployments or share them with others. Long story short, you will want to create your own custom image so you can run ready-to-use containers from it.

In Docker, there are two ways to create a custom image: a basic one, where you commit a container instance as an image, and a much more powerful and useful one, where you build an image from a Dockerfile.

In this post, we will explore both methods of creating custom Docker images.

Creating a Custom Image from a Container


Let's get a running base Ubuntu container.
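Assuming Docker is already installed, a base container can be started interactively like this (using the official ubuntu image and a bash shell; adjust to taste):

```shell
# Pull the ubuntu image if needed and start an interactive shell in it
docker run -it ubuntu bash
```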

Now, let's pretend that we need nano and elinks installed before deploying this container to production. For the moment, neither is installed there!

We will update the system, then install those packages on the running container.
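Inside the running container, the update and installation would look something like this (package names as discussed above):

```shell
# Refresh the package index, then install the two packages
apt-get update
apt-get install -y nano elinks
```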

Done! We can now exit the container.

Let's get the container name or ID, as it will be needed when creating the image.
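Back on the host, exited containers (along with their IDs and names) can be listed with:

```shell
# -a includes stopped containers, not just running ones
docker ps -a
```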

To create an image based on this container, we use the docker commit command, as follows.
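Using the container ID from the previous listing (shown here as a placeholder), the command would be:

```shell
# Save the container's current filesystem as a new image named custom-image
docker commit <container-id> custom-image
```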

The above command will create an image called custom-image from our container using its ID.

The image is now created and visible in the images list locally.
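The local image list can be checked with:

```shell
docker images
```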

Consequently, we have a ready-to-use image with nano and elinks installed. As such, any container you now run from custom-image will already have those packages installed.

Creating a Custom Image using Dockerfile


This method consists of using a file called a Dockerfile. A Dockerfile is a text document that contains all the instructions for building the image. It is much handier and easier than the previous method, especially as your image grows, because the Dockerfile includes all the commands needed to build the image from scratch: pulling a base image from Docker Hub, tweaking it, then saving it.

On the other hand, if you want to rebuild your image, let's say because you want to install a new version of nano or elinks, you will not have to run the container again, upgrade the installed packages, then commit the container again (which is what you have to do with the first method). You will only have to rebuild the image using the instructions in the Dockerfile.

So building images with a Dockerfile is the preferred method in a very active and dynamic environment, where images tend to change very often.

To use a Dockerfile, you will need to create it. As mentioned before, it is nothing but a text file that lists all the instructions that make up your image. Some instructions are mandatory and some are optional.

A very basic Dockerfile with only two instructions looks like this:
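For example (using the official ubuntu image as the base):

```dockerfile
FROM ubuntu
RUN apt-get update
```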

The first line contains the FROM instruction, which pulls an ubuntu image. Note that a valid Dockerfile must always start with a FROM instruction. The image can be any valid image.

The second instruction is self-explanatory. This one runs a system update on the ubuntu container.

To sum up, this simple Dockerfile pulls the latest ubuntu image from Docker Hub and makes sure the system is updated. But we want more, don't we? Like having the packages we need available in our image.

To go ahead and install nano, elinks or may be python in the image, our Dockerfile will have the following instructions:
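A sketch of such a Dockerfile, installing the two packages used throughout this post:

```dockerfile
FROM ubuntu
# Chaining update and install keeps them in one image layer
RUN apt-get update && apt-get install -y nano elinks
```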

The -y argument is very important here. Because we have no way to interact with the build while Docker is executing the Dockerfile's instructions, we need to confirm the package installation beforehand.

Once we have the Dockerfile with all the instructions, we use it as the source to build the image with the docker build command.
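Run from the directory containing the Dockerfile:

```shell
# The trailing dot is the build context (the current directory)
docker build .
```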

Most commonly, the Dockerfile is located in your working directory and the (.) is used to confirm this to Docker. However, the file can be located anywhere in your file system other than your working directory. If this is the case, you use the -f flag with docker build.
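If the Dockerfile lives elsewhere, its location can be passed explicitly (the path shown is a hypothetical example):

```shell
# -f points docker build at a Dockerfile outside the working directory
docker build -f /path/to/Dockerfile .
```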

Note that the path to the Dockerfile can also be a URL, such as a GitHub repository.
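For instance, with a hypothetical repository as the build context:

```shell
# Docker clones the repository and uses it as the build context
docker build https://github.com/<user>/<repo>.git
```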

Docker will go through the steps listed in the Dockerfile and build the intended image, printing each step as it runs.

The image is now built and can be found in the images list, but it does not have a name.

We might optionally tag it with a proper name.
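Using the image ID from the images list (shown here as a placeholder), the tagging command would be:

```shell
# Give the unnamed image a proper repository name
docker tag <image-id> custom-image
```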

Any container pulled from this image will have the needed packages.

A More Interesting Example: Running a Node.js Application


Let's admit it! Building a flat Linux image with nano and elinks installed is not that exciting. So let's look at a more interesting example. This time we will run a Node.js application on top of a Node.js image.

The Node.js application consists of a single file called app.js with the contents shown in the following listing.

Mainly, this code starts up an HTTP server on port 8080. The server responds with an HTTP response status code 200 OK and the text “You’ve hit <hostname>” to every request.

Note that os.hostname() returns the container's hostname (its ID by default), not the Docker host's name, because the response is generated inside the container itself.

To run this application, there is no need to install Node.js on the host. We'll use a Dockerfile to package the app into a container image, allowing it to be run anywhere without downloading or installing anything (except Docker, which does need to be installed on the machine that will run the image).

This Dockerfile will pull the latest node.js image from Docker Hub, copy local host app.js to the / directory (the ADD directive is used for this) of the image, then run the node.js script with the node command.
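A Dockerfile matching that description could look like this (using the official node image; the ENTRYPOINT form shown is one common way to run the script):

```dockerfile
FROM node:latest
# ADD copies app.js from the build context into the image root
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]
```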

Let’s build the image and tag it as node-custom.
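From the directory containing app.js and the Dockerfile:

```shell
# -t names (tags) the resulting image in one step
docker build -t node-custom .
```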

When the build process completes, you will have a new node-custom image stored locally.

You can now use your image to run the Node.js application in a container named node-app, listening on port 8080 (on both the host and the container), with the following command:
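Something along these lines:

```shell
# -d runs detached; -p maps host port 8080 to container port 8080
docker run --name node-app -d -p 8080:8080 node-custom
```

You can then verify the app responds with `curl http://localhost:8080`.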

You now have an application running on your Docker host from a customized Node.js image containing your code, and it displays the container's name.

Launch a browser on your Docker host and type http://localhost:8080 to see the running application in action.

I hope this post helps you see the usefulness of creating your own customized images for future use, and the different use cases for doing so. Thanks for reading!
