
My love affair with Docker

The last few days have been the worst for our business, and part of it has to do with a much-hated hosting provider – OVH. Some devs like it, and some don’t! If you read the reviews about this host, you’d probably find more bad things said about them and their network than good ones. The only reason we prefer OVH over AWS (which we do use for most of our production apps) has a lot to do with their no-questions-asked policy for IPv4 addresses.

I am sure most of you know we have a shortage of IPv4 addresses. It’s been in the news, and nearly 80% of the people who heard about it probably had absolutely no clue what was going on in the computer world. Anyway, I won’t go into explaining that for the newbies; it would take me away from what I want to talk about in this post. Hopefully, this will act as a guide for those who are facing similar issues to ours. There is another good reason we go with OVH: they are damn cheap. Two of the basic dedicated public cloud instances cost us $50 or so to run every month. I am pretty sure Amazon can’t beat that on a month-to-month contract. They could probably beat it on a three-year lease, but not on a monthly contract.

Anyway, my love affair with Docker started with issues on our traditional OVH dedicated instance. We had all kinds of trouble. We were running close to 160 containers on a 32GB v2 configuration with Ubuntu 14.04 LTS. That is not too bad, given that Docker containers share the host’s kernel and memory instead of each reserving a fixed slice the way a VM would. But as soon as I configured more than 160 containers, all hell broke loose. We received a whole lot of errors, and the IPs that were configured stopped working. This was probably the most frustrating moment of the whole experience, because there are no real guidelines on optimising memory usage for Docker. You just have to have more memory if you want to run lots of containers.
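
Before you start tearing things down, it helps to see what each container is actually costing you. A minimal check using the stock docker stats command (the --no-stream flag takes one snapshot instead of streaming forever):

# One-off snapshot of per-container memory and CPU usage
docker stats --no-stream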

Anyway, here are a couple of things to help you run a lot more smoothly and hopefully resolve a lot of those errors. They are in no particular order. We pretty much tried all of them, and they work flawlessly on the virtual instances we were running. Now, I am not sure what your purpose for running Docker containers is, so please use these commands with caution. If you are hesitant about executing them, I’d say consult your developer or someone who knows what they are doing (a Docker expert).

1) Stop all Docker Containers

docker stop $(docker ps -a -q)

2) Remove all Docker Containers

docker rm $(docker ps -a -q)

3) Remove any volumes that are unused.

docker volume rm $(docker volume ls -qf dangling=true)

4) Remove problematic networks

docker network rm $(docker network ls -q)

5) Find out if any of the processes are still occupying a port

lsof -nP | grep LISTEN

Then you’d get an output similar to this…

Dropbox             384  IPv4 0x82c      TCP 127.0.0.1:17600 (LISTEN)
com.docker.slirp   6218  IPv4 0x82c      TCP *:5432 (LISTEN) <<<MOSTLY THE PROBLEM
Python             6268  IPv4 0x82c      TCP 127.0.0.1:51617 (LISTEN)

Now, just kill it…

kill -9 6218
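
If you already know which port is the problem, you can skip the grep and feed lsof straight into kill. A one-liner sketch, assuming 5432 is the stuck port from the output above:

# -t prints bare PIDs, -i :5432 matches whatever is listening on that port
kill -9 $(lsof -ti :5432)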

6) Find the “docker.service” file and set this in it (helps with starting up lots of containers)

TasksMax=infinity
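
On systemd-based distros, a cleaner way than editing the unit file directly is a drop-in override, which survives package upgrades. A minimal sketch, assuming systemd 226 or newer (older versions don’t recognise TasksMax); the drop-in file name here is my own choice:

# Create a drop-in override for the Docker unit
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/tasksmax.conf <<'EOF'
[Service]
TasksMax=infinity
EOF
# Reload systemd and restart Docker to apply the new limit
sudo systemctl daemon-reload
sudo systemctl restart docker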

7) One of the Docker limitations is running into kernel limits, like running out of process IDs or kernel keyring entries. Use these commands to overcome those issues (adjust the numbers as you see fit)

echo 4194304 > /proc/sys/kernel/pid_max
echo "20000000" > /proc/sys/kernel/keys/root_maxbytes
echo "20000000" > /proc/sys/kernel/keys/maxbytes
echo "1000000" > /proc/sys/kernel/keys/root_maxkeys
echo "1000000" > /proc/sys/kernel/keys/maxkeys

8) Docker clean-up (because it does get dirty, and it’s not good at cleaning itself)

docker ps --filter status=dead --filter status=exited -aq \
  | xargs docker rm -v
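
If you are on a newer Docker release, the prune subcommands roll most of the clean-up from steps 1–4 and 8 into a single call (docker system prune arrived in 1.13; the --volumes flag came a few releases later). Be warned that this removes every stopped container, dangling image, and unused network in one go, plus unused volumes with the extra flag, so read the confirmation prompt:

# Sweep out stopped containers, dangling images, unused networks and volumes
docker system prune --volumes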

9) A couple of system-level commands to clean up the host itself (helps with high disk space usage)

apt-get autoclean
apt-get autoremove


Some other things that help include cleaning up unused images. You can find more variations of the commands online; ask your best friend Google. Always remember to estimate the amount of RAM you would need from the footprint of your container. If your container has a footprint of 1MB, 10k containers would cost you 10GB of memory. Compare that with a 100MB footprint, where you would need 1TB of memory. That’s a lot. If you are looking to start up quite a lot of containers, this article is quite good (Docker insane scale on IBM Power Systems). It talks about the limitations of Docker when you want to start up lots of containers. We found it quite helpful.
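
Since I mentioned cleaning up unused images, here is one way to do it; a sketch using the same dangling filter as the volume clean-up in step 3 (newer Docker releases also offer docker image prune for the same job):

# Remove dangling images (untagged layers left behind by rebuilds)
docker rmi $(docker images -qf dangling=true)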

I am in love with Docker. I have to say…it was love at first sight. It’s so awesome! It’s useful for a lot of things, but I don’t know how much longer we’ll stay together, because technology is emerging at a very fast pace. Let’s hope Docker advances, in which case it’ll be until death do us part. If not, then…yeah. I’d rather not talk about that.

Here are a couple of things I currently love about Docker.

#1 Docker has everything in containers and I love containers. Since 2013, the eco-system has contributed nearly 100,000 public images on Docker Hub. Love, love, love.

#2 Developers love Docker, and Docker loves them back. Docker provides full life-cycle control, and that’s important for any system architecture. It works flawlessly on practically anything. So when you wake up at 2 AM for troubleshooting, you know you can switch on your laptop, run the image, and start debugging the script that went bad. There are lots of other reasons why developers dig this.

#3 I have hired and spoken to lots of systems developers, and they love Docker. Whenever I ask them to install or configure anything, they love hitting up Docker Hub to look for images they can use. Why? It saves them time and, more importantly, a lot of headaches with incompatibilities. So when newer technologies emerge, you can easily try them out and put them into production without having to worry about where they work and where they don’t. You don’t even need to worry about breaking links or dependencies.

Good Luck!