docker-compose does not inject host environment variables on macOS

When using docker-compose to orchestrate your container services it’s common practice to pass environment variables set in the host machine through to your containers. This is especially useful when you want to configure passwords/secrets for your applications running inside the container. By doing this you can avoid including such sensitive information in your docker-compose.yml file.

Let’s consider the following extract of a docker-compose.yml file which defines two environment variables required for a MySQL container service named ‘db’. Note how the two environment variables are not given any values. This essentially tells docker-compose to inject the values from the corresponding environment variables on the host machine.

version: "2.0"
services:
  db:
    image: mysql:5.6.26
    ports:
      - '3306:3306'
    environment:
      - MYSQL_ROOT_PASSWORD
      - MYSQL_DATABASE

Now usually what you would do on Linux or macOS is use the “export” shell command to set the values on the host machine before calling docker-compose up to bring up the containers.

export MYSQL_ROOT_PASSWORD=password
export MYSQL_DATABASE=moviedb
docker-compose up

This works on Linux (in my case an Ubuntu 16.04 droplet on DigitalOcean) without any issue. But it doesn’t seem to work on my dev machine running macOS Sierra with the latest version of Docker, 17.03.1-ce, at the time of writing.

Strangely, when running docker-compose up on macOS, the MySQL container complains that the MYSQL_ROOT_PASSWORD environment variable is not set. Further inspection revealed that both variables were empty in the container.
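One quick way to confirm this yourself (a rough sketch; the container name below is hypothetical, use docker ps -a to find the actual name compose generated) is to inspect the environment of the created container:

docker inspect --format '{{ .Config.Env }}' moviedb_db_1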

So it seems that variables set with the “export” shell command are not picked up by docker-compose on macOS.

The trick to solving this problem on macOS is to instead use a one-liner in the shell, like so:

MYSQL_ROOT_PASSWORD=password docker-compose up

Note: I have removed the MYSQL_DATABASE=moviedb segment from the above command for brevity.
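For completeness, the full one-liner with both variables set inline would look like this:

MYSQL_ROOT_PASSWORD=password MYSQL_DATABASE=moviedb docker-compose up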


Key Learning from the GitLab Incident

GitLab is an awesome product! Although I don’t use their hosted service, I’ve been a very happy user of the product in an internally hosted setup.

They had a pretty bad (and well publicized) incident a couple of days back which started with spammers hammering their Postgres DBs and unfortunately ended with a sysadmin accidentally removing almost 300GB of production data.

I can empathize (#HugOps) with the engineers who were working tirelessly to rectify the situation. Shit can hit the fan anytime you have a production system with so many users open to the wild internet. The transparency shown by the GitLab team in keeping their users informed during the incident was awesome and took amazing guts!

Now most blogs/experts talk about the technical aspects of the unfortunate incident. These mainly focus on DB backup, replication and restoration processes, which are, no doubt, highly valid points.

I’d like to suggest another key aspect that came to my mind when going through the incident report: the human aspect!

This aspect seems to be ignored by many. From all accounts it looks like the team member working on the database issue was alone, tired and frustrated. The data removal disaster might have been averted if not one but two engineers had been working on the problem together. Think pair programming. Obviously, screen sharing can be used if the engineers are not co-located.

I know this still does not guarantee against a serious f*ck up, but as a company/startup you would probably have better odds on your side.

An engineer should never work alone when fixing a highly critical production issue.

Image Courtesy: Flickr (licensed under Creative Commons)

When trying to fix critical production issues in software systems it’s super important to have an aircraft-style co-pilot working with you, on the lookout for potential howlers that can occur, e.g. rm -rf’ing the wrong folder.

There is always something to learn from adversity. Rock on, GitLab! Still a big fan.

DevOps, Git, Version Control Systems

How to use Jenkins to keep your branches fresh

Although I prefer feature toggles over branching, there can be instances where feature toggles are not practical. Think of a situation where you need to upgrade the underlying framework (or platform) of your application. Some examples from the Java world could be newer JDKs, a new version of the Spring framework that contains breaking API changes, or a new release of the NetBeans platform that your desktop client is based on.

In such situations, when you have created a branch in your Git repo for the development work, there is a risk that the master code base will diverge drastically from the branch. Therefore it is a good idea to keep your branch fresh by regularly merging ‘master’ changes into it. This not only saves you from having to resolve many conflicts when you perform a ‘big bang’ merge, but it also verifies, more frequently, that the functionality implemented in master still works in your branch.

Now you can do this manually by regularly merging master changes into your branch, but instead you can use Jenkins to easily automate the process. Since this can be scheduled to run frequently you will get faster feedback when a change in the master breaks your branch.

Here’s how you do it.

1: Set up a job that tracks both your master and dev branches under Source Code Management. Use the Additional Behaviours/Merge before build option to merge master changes into your branch.


In short, this would fetch master, merge into branch, build and (ideally) run all tests of your application.
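For clarity, here is roughly the sequence of Git operations the job performs before building (the branch name ‘dev’ and the build command are placeholders for whatever your project uses):

git fetch origin                 # fetch the latest master from the remote
git checkout dev                 # the branch under development ('dev' is a placeholder)
git merge origin/master          # merge the master changes into the branch
mvn test                         # build and run all tests (build tool is just an example)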

2: Next use the Git publisher feature in the post-build section to push your branch containing the merged changes to the remote Git repo.
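In plain Git terms the publisher step is roughly equivalent to (again, ‘dev’ is a placeholder branch name):

git push origin dev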


You can perhaps schedule this job @hourly to ensure your branch stays fresh with master changes.
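For reference, in a freestyle job this schedule goes into the Build Triggers/Build periodically field; Jenkins accepts either the @hourly shorthand or the equivalent spread-out cron form:

H * * * *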

Architecture, General

Architects are trade-off evaluators

Most software problems have a finite set of solutions to choose from. An important role of an architect is to understand the trade-offs of each solution and decide on the best solution for the given business case.


For example, one solution to a given problem could be less performant but may result in a clean and maintainable codebase. The job of the software architect in this situation would be to determine whether performance or maintainability is the most important aspect for the problem at hand. The compromise reached should always be in the best interest of the software product.

It is important to document the reason for picking a particular solution, along with its trade-offs, for future reference. Software people are notoriously forgetful of their design decisions after a few days 🙂

As the saying goes, there are many ways to skin a cat; the architect should find the best way to do it given the resources, while achieving the end goal.

General, Management

Product Passion

The most amazing thing to me about this video is not the awesome rocket technology or the brilliance of Elon Musk but the crazy passion shown by the SpaceX employees (including Musk) towards the success of the PRODUCT!

Product passion is a result of product focus. You don’t have to be Musk or SpaceX to have crazy passion for your product; you just need the culture and mindset, from top to bottom.

Finally, product passion fuels employee engagement, and then everything else becomes secondary!

General, Subversion, Technology, Version Control Systems

Subversion Revert with Externals

Disclaimer: I know Git rocks, but people still use Subversion 🙂!

Let’s say you have a Subversion checkout containing externals. Now you’ve made changes in many places within the folder structure and you want to get back to the original clean state.

So your typical approach would be to go to the top directory of the working copy and do a recursive revert using:

svn revert -R .

But unfortunately nothing happens! The reason is that the working copy is made up of sub-folders containing externals, and in order to revert them you need to go into each sub-folder and issue the svn revert command there. This can be cumbersome if you have a working copy containing many sub-folders corresponding to externals.

Well, the solution is pretty simple if you have a bash shell (Windows users will require Cygwin or something similar):

for d in ./*/ ; do (cd "$d" && svn revert -R .); done

This little bash one-liner loops over each immediate sub-folder, changes (cd) into it, and executes a recursive svn revert within each ‘external’ folder.

The solution was inspired by this thread on StackExchange.

Cloud, DevOps, General, Technology

How To Move your large VirtualBox VM disk created by Docker

So you’ve been using Docker Toolbox (DTB) on Windows and the ‘default’ docker host created by docker-machine is growing alarmingly large on your limited C: drive.

The super large disk.vmdk file for the “default” VM created by DTB is usually located at C:\Users\[username]\.docker\machine\machines\default

Now you want to move the existing disk.vmdk file to your much larger D: drive without having to recreate a docker machine/host from scratch and pull all the images onto it again.

The important thing to note here is that the VM disk is an implementation detail of VirtualBox (VBox), not Docker. docker-machine just uses VBox as a provider to create a Docker host.

Therefore, if you need to move the VM disk file to another location you should change the VBox configuration for the VM instead of changing any docker-machine configuration (or using any docker commands).

So here are the steps you need to follow.

1. Stop the running docker machine (i.e. VBox VM) like so:

docker-machine stop

Note: This will effectively power off the VBox VM, named ‘default’. You can check this by opening the VBox GUI.
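If you’d rather not open the GUI, the machine’s state can also be checked from the command line (this should simply report Stopped):

docker-machine status default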


2. Copy the disk.vmdk file from C:\Users\[username]\.docker\machine\machines\default to a suitable folder in your bigger D: drive. I created D:\docker-machines\default for this.
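From a Windows command prompt this could look something like the following (the target folder is just the one I chose; adjust the paths to suit your setup):

mkdir D:\docker-machines\default
copy C:\Users\[username]\.docker\machine\machines\default\disk.vmdk D:\docker-machines\default\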

Now the interesting part 🙂 We need to tell VBox about the new location of the disk.vmdk file.

3. The default.vbox file located at C:\Users\[username]\.docker\machine\machines\default\default specifies the path to the vmdk file. This .vbox file is an XML file, so just open it up in any editor and set the Machine/MediaRegistry/HardDisks/HardDisk/location attribute to the new location on your D: drive.


Note: Don’t worry about the “DO NOT EDIT THIS FILE..” statement at the top; since you have already stopped the VM, the file will not be overwritten. And I found this method easier than using the GUI 🙂
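As an illustration, the relevant part of the XML looks something like this (the UUID and other attributes will differ in your file; only the location value needs to change):

<MediaRegistry>
  <HardDisks>
    <HardDisk uuid="{...}" location="D:\docker-machines\default\disk.vmdk" format="VMDK" type="Normal"/>
  </HardDisks>
</MediaRegistry>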

4. Now power up the docker machine using:

docker-machine start

If the ‘default’ machine starts without any problem then you are good to go!

Now check if all your images are still available using:

docker images

5. You can verify that the vmdk file on D: is being used by firing up VBox, selecting the “default” VM, and clicking on Settings/Storage/disk.vmdk as shown below.
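If you prefer the command line over the GUI, something like this should also show which disk file is attached to the VM (assuming VBoxManage is on your PATH):

VBoxManage showvminfo default | findstr /i vmdk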


6. Now you are done! Just go ahead and delete the huge disk.vmdk from your C: drive located at C:\Users\[username]\.docker\machine\machines\default