
Understanding containers (part 04): multiple containers

Thanks for all the feedback on the previous parts of this #UnderstandContainers series. I hope you also enjoyed the exercises with the OrientDB database.

Let’s keep digging into containers!

I assume that you have the container myorientdb01 running; if it was stopped in the meantime, start it again with docker start myorientdb01.

Let’s run the 2nd instance of OrientDB server – attempt #1

For no practical reason other than the sake of today’s exercise, let’s run a second instance, myorientdb02, of the OrientDB database server. As you might remember from part 01, we can use the following statement:

docker run --name myorientdb02 -d -p 2480:2480 -p 2424:2424 -e ORIENTDB_ROOT_PASSWORD=root orientdb

which looks good, but … it should fail and return the message “docker: Error response from daemon: driver failed programming external connectivity on endpoint myorientdb02 (...): Bind for 0.0.0.0:2480 failed: port is already allocated.” The message makes it clear that port 2480 on your host computer (your laptop, I assume) is already used to bind the same port for the container myorientdb01.

What’s more: even though the container was not started, it was created and its name was assigned. You can check with the command docker ps --filter='ancestor=orientdb' --all, which lists all containers created from the orientdb image. Or, for short:

docker ps -f='ancestor=orientdb' -a

So, before we move on, please delete that failed container with docker container rm myorientdb02, or for short:

docker rm myorientdb02

Ok, so we are back to the situation with only one OrientDB server container myorientdb01 running.

Let’s run the 2nd instance of OrientDB server – attempt #2

Now at least we know that every additional container on the same host needs different host ports assigned. We can do this manually with the options -p 2481:2480 -p 2425:2424, which tell Docker to map the application ports 2480 and 2424 from inside the container to the host ports 2481 and 2425. This works as long as ports 2481 and 2425 are not in use yet.
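Putting it together, the manual variant would look like the sketch below (any free host ports would do instead of 2481 and 2425). We are not going to run it here, though; in a moment we will use the -P option instead.

```shell
# Map container ports 2480 and 2424 to the (hopefully free) host ports 2481 and 2425
docker run --name myorientdb02 -d \
  -p 2481:2480 -p 2425:2424 \
  -e ORIENTDB_ROOT_PASSWORD=root orientdb
```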

The other option, which we are going to use, is -P. It automatically picks free ports on the host to map the ports exposed by the image (we will get deeper into exposed ports later).

So, let’s run the following command:

docker run --name myorientdb02 -d -P -e ORIENTDB_ROOT_PASSWORD=root orientdb

and — once the new container has been successfully created — check its assigned host ports:

docker container port myorientdb02

On my laptop the output looked something like this.
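For illustration, the output has one line per mapped port, similar to the following (32768 and 32769 are the host ports from my run; yours may differ):

```shell
$ docker container port myorientdb02
2424/tcp -> 0.0.0.0:32769
2480/tcp -> 0.0.0.0:32768
```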

So, the port of OrientDB Studio running in the second container was mapped to the host’s port 32768. Let’s check it by opening http://localhost:32768 in the browser (replace localhost with the proper IP address if you are running Docker Toolbox, and 32768 with the port assigned on your machine).

So, there are two instances of OrientDB running, as can be seen by connecting to their two Studio apps on different ports.

But, what’s the point…

…of running these two instances?

Well, to show one important feature of containers that makes them different from traditional VMs.

Run this command to display all containers based on the orientdb image, plus their sizes, thanks to the added --size (or -s for short) option. Please note that because of the single hyphen -, the -as is a combination of the two options -a and -s, not a single “as” option, as it would be with a double hyphen --.

docker ps -f='ancestor=orientdb' -as
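For illustration, the output looks roughly like this (the container IDs are made up, and the exact sizes will differ on your machine):

```shell
$ docker ps -f='ancestor=orientdb' -as
CONTAINER ID   IMAGE      ...   NAMES          SIZE
0f1e2d3c4b5a   orientdb   ...   myorientdb02   99.8kB (virtual 333MB)
5a4b3c2d1e0f   orientdb   ...   myorientdb01   101kB (virtual 333MB)
```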

Now we have something interesting here to understand. Let’s look at the “SIZE” column…

…because in containers SIZE matters

Image credit: The Size Matters Coffee Mug

So, what does container size mean? It is one of the key differentiators between containers and traditional VMs.

Simply put,

container’s virtual size = image size + container’s delta file system size.

It means that a container’s file system is not a separate copy of the image, but just a read/write layer of files on top of the read-only layers of files from the image.

What do I mean? Let’s first check the size of the image itself with docker image list orientdb, or for short:

docker images orientdb

So the size of all files in the image is 333MB.

And the container layers of both myorientdb01 and myorientdb02 are currently only around 100KB each! Should we start a 3rd, 4th, and so on instance of the OrientDB server, they would all reuse the same read-only image.

So, what’s in a container’s layer?

We can check this using the docker container diff myorientdb02 command, or for short:

docker diff myorientdb02

Lines starting with A are files/directories added compared to the image, and lines starting with C are changed ones.
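For illustration, on my container the output looked roughly like the following. The exact paths depend on what the server has written so far, so treat these as illustrative:

```shell
$ docker diff myorientdb02
C /orientdb
C /orientdb/log
A /orientdb/log/orient-server.log.0
```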

This way, multiple containers can share access to the same underlying image and yet have their own data state.

What about modifying/deleting image files in containers?

That was my first question when I learned about the read-only image and the read/write container layer. Let’s check.

docker exec myorientdb02 ls -l
docker exec myorientdb02 rm readme.txt
docker exec myorientdb02 ls -l

So, in myorientdb02 we deleted the shared file /orientdb/readme.txt, which comes from the orientdb image.

What happened with that file in myorientdb01 container then?

It is still there. Why? Let’s check the myorientdb02 container’s layer once again.

docker diff myorientdb02

The file was just marked as deleted (D) in that container’s layer. So, it is not visible in that container anymore, but it still exists in the other one.
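The relevant entry in the diff output looks like this (other A/C lines omitted):

```shell
$ docker diff myorientdb02
...
D /orientdb/readme.txt
```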

Disk vs Memory footprint

So far we’ve been looking at disk size of images and containers. But what about the memory?

We can check this with docker container stats, or for short:

docker stats
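Note that docker stats keeps refreshing until you stop it; adding the --no-stream option prints one snapshot and exits. On my laptop the relevant columns looked roughly like this (the IDs and CPU percentages are illustrative):

```shell
$ docker stats --no-stream
CONTAINER ID   NAME           CPU %   MEM USAGE / LIMIT   MEM %   ...
0f1e2d3c4b5a   myorientdb02   0.50%   1.1GiB / 12GiB      9.17%   ...
5a4b3c2d1e0f   myorientdb01   0.45%   1.1GiB / 12GiB      9.17%   ...
```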

Each container has its own memory, and each of them is currently using around 1.1GiB of RAM.

Quick note on where this MEM LIMIT of 12GB comes from, because it is important for running your containers smoothly:

  • In Docker Desktop, it is the memory limit you assign in the advanced preferences.
  • In Docker Toolbox, it is the “Base Memory” setting of the VM hosting the Docker Machine.

This post does not exhaust the topic…

…of storage yet, so stay tuned for another post discussing Docker volumes.

 

Ok, my train is arriving…

…to Wrocław 🚂. Just like part 02, this part was written on the train too. It is just that now I am returning from Katowice, where Ewelina Pękała organized another successful SAP Stammtisch Silesia.

Yet this is not the end of my travels today. Tonight I am going to Walldorf to join SAP Inside Track. My presentation “Learning Docker from Scratch … with SAP Data Management” is on Saturday. Will you be there?


We will keep digging into Docker and containers in the next posts. I will tag these posts UnderstandContainers for easy search.

Stay tuned!

-Vitaliy (aka @Sygyzmundovych)

1 Comment
  • Thanks for another insightful blog on containers! I learned several things…

    • -P autopicks open ports, seemingly from the range starting at 32768. My ports assigned were also 32768 and 32769. It doesn’t seem “random”, even though docker run --help states for -P: “Publish all exposed ports to random ports”.
    • I didn’t know the image was read-only with a complementary read-write layer (an overlay?). A very helpful insight!
    • ps flags for filter (-f) and size (-as). Good use of -as to illustrate the incremental size of the read/write layer.
    • docker diff – had not seen diff used in docker before. Docker appears to assume diff to the image, so only 1 parameter. It’s very helpful to illustrate file diffs (A/C/D) from the read-only image.

    Looking forward to more of your insights on docker storage and volumes. Ultimately, I’d like to get HXE + XSA running in Docker for Mac rather than a VMWare Fusion VM. Running HXE in Docker would seem much more efficient!