
Docker Best Practices

Docker is being widely adopted. Let's talk about some of the Docker best practices we can follow.

Use a .dockerignore file

To improve build performance, you can exclude files and directories that the build does not need by adding a .dockerignore file. This also keeps the build context small, so less data is sent to the Docker daemon.
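As a sketch, a minimal .dockerignore might exclude version-control metadata, local dependency directories, and environment files (the entries below are illustrative; tailor them to your project):

```
# Version control metadata
.git
.gitignore

# Local dependency and build artifacts
node_modules
__pycache__
*.pyc

# Local environment files and logs
.env
*.log
```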

Minimize the number of layers / Consolidate instructions

Each instruction in the Dockerfile adds an extra layer to the Docker image.
Keep the number of instructions and layers to a minimum, as this ultimately affects build performance and build time.
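For instance, chaining related commands into a single RUN instruction produces one layer instead of three (the package name below is illustrative):

```dockerfile
# Three separate RUN instructions -> three layers:
#   RUN apt-get update
#   RUN apt-get install -y curl
#   RUN rm -rf /var/lib/apt/lists/*

# One chained RUN instruction -> one layer:
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*
```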

Use the COPY instruction instead of ADD
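COPY is preferred because it is explicit: it only copies local files and directories into the image. ADD has extra, sometimes surprising behavior: it auto-extracts local tar archives and can fetch remote URLs. A sketch (paths are illustrative):

```dockerfile
# COPY only copies local files/directories into the image.
COPY app/ /usr/src/app/

# ADD would auto-extract a local tar archive rather than copy it as-is,
# and can also download remote URLs. Prefer COPY unless you need this.
#   ADD archive.tar.gz /usr/src/app/
```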

Avoid installing unnecessary packages

Take advantage of the Docker cache to reduce build time

Docker creates a layer on top of the existing layers for each instruction in the Dockerfile, and caches it. When you re-run docker build, Docker looks for each layer in the cache: if it is there, the cached layer is used; otherwise the cache is invalidated and all layers after that point are built again. Since we build after every code change, the layer created by copying the code, and every layer after it, will be invalidated. Install the dependencies first and then copy the code in the Dockerfile, so that the cache can be leveraged.


COPY code/ /usr/src/app

RUN pip install -r requirements.txt

In this case, every time you change the code, pip install runs again, increasing the build time.

COPY requirements.txt /usr/src/app

RUN pip install -r requirements.txt

COPY code/ /usr/src/app

In this case, pip install does not run if there are no changes in requirements.txt; only the COPY instruction runs, reducing the total build time.


RUN vs. CMD vs. ENTRYPOINT

The recommendation is to use CMD in your Dockerfile when you want the user of your image to have the flexibility to run whichever executable they choose when starting the container.

RUN executes command(s) in a new layer on top of the current image and commits the result. For example, it is often used for installing software packages.

CMD sets the default command and/or parameters, which can be overridden from the command line when the container runs.

ENTRYPOINT configures a container that will run as an executable.
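A common pattern combines the two: ENTRYPOINT fixes the executable, while CMD supplies default arguments that the user can override at docker run time. A sketch (the command is illustrative):

```dockerfile
ENTRYPOINT ["ping"]
CMD ["localhost"]   # default argument, overridable at run time

# docker run <image>           -> ping localhost
# docker run <image> 8.8.8.8   -> ping 8.8.8.8
```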

Shell vs. Exec form of ENTRYPOINT/CMD

We should always run ENTRYPOINT or CMD in exec form: in exec form the executable gets PID 1 and can receive signals such as SIGTERM.

Shell form

<instruction> <command>


RUN apt-get install python3

CMD echo "Hello world"

ENTRYPOINT echo "Hello world"

When an instruction is executed in shell form, Docker calls /bin/sh -c <command> under the hood and normal shell processing happens. For example, the following snippet in a Dockerfile

ENV name John Dow

ENTRYPOINT echo "Hello, $name"

when the container is run as docker run -it <image>, will produce the output

Hello, John Dow

Note that variable name is replaced with its value.

Exec form

This is the preferred form for CMD and ENTRYPOINT instructions.

<instruction> ["executable", "param1", "param2", ...]


RUN ["apt-get", "install", "python3"]

CMD ["/bin/echo", "Hello world"]

ENTRYPOINT ["/bin/echo", "Hello world"]

When an instruction is executed in exec form, Docker calls the executable directly, and shell processing does not happen. For example, the following snippet in a Dockerfile

ENV name John Dow

ENTRYPOINT ["/bin/echo", "Hello, $name"]

when the container is run as docker run -it <image>, will produce the output

Hello, $name

Note that variable name is not substituted.

Gracefully stopping Docker containers

The docker stop command attempts to stop a running container by first sending a SIGTERM signal to the root process (PID 1) in the container. If the process hasn't exited within the timeout period, a SIGKILL signal is sent.
The docker kill command doesn't give the container process an opportunity to exit gracefully; it simply issues a SIGKILL to terminate the container.

When you use docker stop or docker kill to signal a container, that signal is sent only to the container process running as PID 1.

Since in shell form /bin/sh doesn't forward signals to its child processes, the SIGTERM we send never reaches our script/executable. So if we want our app to receive signals from the host, we need to run it as PID 1; we can achieve this by running the executable in the exec form discussed above.
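As a sketch, the two forms below differ in which process gets PID 1 (the script name is illustrative):

```dockerfile
# Shell form: /bin/sh is PID 1. The SIGTERM from `docker stop` stops at the
# shell and never reaches the app, so the container is SIGKILLed after the
# timeout:
#   ENTRYPOINT python server.py

# Exec form: python is PID 1 and receives SIGTERM directly,
# so the application can shut down gracefully.
ENTRYPOINT ["python", "server.py"]
```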

Avoid RUN apt-get upgrade and dist-upgrade

Many of the "essential" packages from the parent image cannot be upgraded inside an unprivileged container. If a package contained in the parent image is out of date, contact its maintainers. If you know there is a particular package, foo, that needs to be updated, use apt-get install -y foo to update it.

Always combine RUN apt-get update with apt-get install in the same RUN statement.

RUN apt-get update && apt-get install -y \
    package-bar \
    package-baz

Using apt-get update alone in a RUN statement causes caching issues, and subsequent apt-get install instructions can fail. For example, say you have a Dockerfile:

FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y curl

After building the image, all layers are in the Docker cache. Suppose you later modify apt-get install by adding an extra package:

FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y curl nginx

Docker sees the initial and modified instructions as identical and reuses the cache from previous steps. As a result, apt-get update is not executed, because the build uses the cached version. Since apt-get update is not run, your build can potentially get outdated versions of the curl and nginx packages.

Using RUN apt-get update && apt-get install -y together ensures your Dockerfile installs the latest package versions with no further coding or manual intervention. This technique is known as cache busting.

Docker best practices help improve performance and usability, and show how to manage Docker images and containers in the best possible way to get the most out of them.
