DevOps

Docker compose does not inject host environment variables on macOS


When using docker-compose to orchestrate your container services, it’s common practice to pass environment variables set on the host machine through to your containers. This is especially useful when you want to configure passwords/secrets for the applications running inside a container, because it lets you avoid including such sensitive information in your docker-compose.yml file.

Let’s consider the following extract of a docker-compose.yml file, which defines two environment variables required for a MySQL container service named ‘db’. Note how the two variables under environment are not given any values. This essentially tells docker-compose to inject the values from the corresponding environment variables on the host machine.

version: "2.0"
services:
  ...

  db:
    image: mysql:5.6.26
    environment:
      - MYSQL_ROOT_PASSWORD
      - MYSQL_DATABASE
    ports:
      - '3306:3306'

Now, what you would usually do on Linux or macOS is use the “export” shell command to set the values on the host machine before calling docker-compose up to bring up the containers.

export MYSQL_ROOT_PASSWORD=password
export MYSQL_DATABASE=moviedb
docker-compose up

This works on Linux without any issue; in my case, an Ubuntu 16.04 droplet on DigitalOcean. But it doesn’t seem to work on my dev machine running macOS (Sierra) with the latest version of Docker, 17.03.1-ce at the time of writing.

Strangely, when running docker-compose up on macOS, the MySQL container complains that the MYSQL_ROOT_PASSWORD environment variable is not set. Further inspection revealed that both variables were empty inside the container.

So it seems that variables set with the “export” shell command are not picked up by docker-compose on macOS.

The trick to getting around this problem on macOS is to instead set the variables inline, as a one-liner in the shell, like so:

MYSQL_ROOT_PASSWORD=password docker-compose up

Note: I have removed the MYSQL_DATABASE=moviedb segment from the above command for brevity.
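For completeness, here is the same command with both variables set inline. As an alternative (assuming a reasonably recent docker-compose, which reads a .env file placed next to docker-compose.yml), you can keep the values in a file instead of typing them on every run; just keep that file out of version control if it holds real secrets:

# Both variables set inline for a single invocation
MYSQL_ROOT_PASSWORD=password MYSQL_DATABASE=moviedb docker-compose up

# Alternative: create a .env file next to docker-compose.yml containing
#   MYSQL_ROOT_PASSWORD=password
#   MYSQL_DATABASE=moviedb
# and then simply run:
docker-compose up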

DevOps, Git, Version Control Systems

How to use Jenkins to keep your branches fresh


Although I prefer feature toggles over branching, there can be instances where feature toggles are not practical. Think of a situation where you need to upgrade the underlying framework (or platform) of your application. Some examples from the Java world could be newer JDKs, the latest Spring Framework release containing breaking API changes, or a new version of the NetBeans platform that your desktop client is based on.

In such situations, when you have created a branch in your Git repo for the development work, there is a risk that the master code base will diverge drastically from the branch. Therefore it is a good idea to keep your branch fresh by regularly merging master changes into it. This not only saves you from having to resolve many conflicts when you perform a ‘big bang’ merge at the end, but it also verifies more frequently that the functionality implemented on master still works in your branch.

You can do this manually by regularly merging master changes into your branch, but you can also use Jenkins to easily automate the process. Since the job can be scheduled to run frequently, you will get faster feedback when a change on master breaks your branch.

Here’s how you do it.

1: Set up a job that tracks both your master and dev branch under Source Code Management. Use the Additional Behaviours / Merge before build option to merge master changes into your branch, as shown below.

[Screenshot: Jenkins Source Code Management configuration with the ‘Merge before build’ behaviour]

In short, this will fetch master, merge it into the branch, build, and (ideally) run all the tests of your application.
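Expressed as plain git commands, the job conceptually does something like the following (a sketch; the branch name my-feature-branch, the remote name origin, and the Maven build step are assumptions for illustration):

git checkout my-feature-branch   # the dev branch being kept fresh
git fetch origin                 # get the latest changes from the remote
git merge origin/master          # merge master into the dev branch
mvn clean verify                 # build and run the test suite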

2: Next, use the Git Publisher feature in the post-build section to push your branch containing the merged changes to the remote Git repo.

[Screenshot: Jenkins Git Publisher post-build action configured to push the merged branch]
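Again as a sketch, the push performed by Git Publisher is equivalent to (my-feature-branch being the assumed branch name from above):

git push origin HEAD:my-feature-branch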

You can perhaps schedule this job @hourly to ensure your branch stays fresh with master changes.
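For reference, this goes under Build Triggers / Build periodically, which uses Jenkins’ cron-style syntax; @hourly is a built-in alias:

@hourly    # same as: H * * * *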

Cloud, DevOps, General, Technology

How to move your large VirtualBox VM disk created by Docker


So you’ve been using Docker Toolbox (DTB) on Windows, and the ‘default’ Docker host created by docker-machine is growing alarmingly large on your limited C: drive.

The super large disk.vmdk file for the “default” VM created by DTB is usually located at C:\Users\[username]\.docker\machine\machines\default

Now you want to move the existing disk.vmdk file to your much larger D: drive without having to recreate the Docker machine/host from scratch and pull all the images onto it again.

The important thing to note here is that the VM disk is an implementation detail of VirtualBox (VBox), not Docker; docker-machine just uses VBox as a provider to create a Docker host.

Therefore, if you need to move the VM disk file to another location, you should change the VBox configuration for the VM instead of changing any docker-machine configuration (or using any docker commands).

So here are the steps you need to follow.

1. Stop the running docker machine (i.e. VBox VM) like so:

docker-machine stop

Note: This will effectively power off the VBox VM named ‘default’. You can check this by opening the VBox GUI.

[Screenshot: VirtualBox GUI showing the ‘default’ VM powered off]

2. Copy the disk.vmdk file from C:\Users\[username]\.docker\machine\machines\default to a suitable folder in your bigger D: drive. I created D:\docker-machines\default for this.
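On a Windows command prompt the copy could look like this (the target folder is just my choice; adjust [username] and the paths to suit your setup):

mkdir D:\docker-machines\default
copy C:\Users\[username]\.docker\machine\machines\default\disk.vmdk D:\docker-machines\default\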

Now the interesting part 🙂 We need to tell VBox about the new location of the disk.vmdk file.

3. The default.vbox file, located under C:\Users\[username]\.docker\machine\machines\default\default, specifies the path to the vmdk file. This .vbox file is an XML file, so just open it up in any editor and set the location attribute of the Machine/MediaRegistry/HardDisks/HardDisk element to the new location on your D: drive.
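The relevant element looks roughly like this (a trimmed sketch; the uuid, format and surrounding attributes will differ in your file, so only change location):

<MediaRegistry>
  <HardDisks>
    <HardDisk uuid="{...}" location="D:\docker-machines\default\disk.vmdk" format="VMDK" type="Normal"/>
  </HardDisks>
</MediaRegistry>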

[Screenshot: default.vbox opened in an editor, with the HardDisk location attribute pointing to the D: drive]

Note: Don’t worry about the “DO NOT EDIT THIS FILE..” statement at the top; since you have already stopped the VM, the file will not be overwritten. And I found this method easier than using the GUI 🙂

4. Now power up the docker machine using:

docker-machine start

If the ‘default’ machine starts without any problems then you are good to go!

Now check if all your images are still available using:

docker images

5. You can verify that the vmdk file on D: is being used by firing up VBox, selecting the “default” VM and clicking Settings/Storage/disk.vmdk as shown below.

[Screenshot: VirtualBox storage settings showing disk.vmdk attached from the new D: drive location]

6. Now you are done! Just go ahead and delete the huge disk.vmdk from your C: drive, located at C:\Users\[username]\.docker\machine\machines\default
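From the command prompt that would be (only do this after confirming the machine starts cleanly from the new disk):

del C:\Users\[username]\.docker\machine\machines\default\disk.vmdk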

Cloud, DevOps, MySQL

Connecting to MySQL running inside Vagrant


Vagrant is pretty neat! It helps dev teams, or even stand-alone developers, to easily set up their machines by making their development environments “configurable, reproducible and portable“. All the details about your dev setup are specified in a file called Vagrantfile.

This is really handy if you use a Windows/Mac machine for your IDE but want a Linux-based “runtime box” consistent with your Cloud/VPS environment to deploy and test your app on.

But this post is not about how to set up and use Vagrant itself; the Vagrant docs do a superb job of helping you get started. This post is rather about a simple method you can use to connect to a MySQL server running inside your Vagrant box.

Let’s say you are running a MySQL server on the guest (e.g. Ubuntu) inside your Vagrant box. How do you connect to it from your host (e.g. Windows)?

The first thing that came to my mind was the port-forwarding functionality provided by Vagrant itself, where you could map, say, 3366 (Windows/host) -> 3306 (Ubuntu/guest).
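In the Vagrantfile that mapping would look something like this (standard Vagrant forwarded-port syntax; the port numbers are just the example above):

Vagrant.configure("2") do |config|
  # Forward host port 3366 to MySQL's port 3306 on the guest
  config.vm.network "forwarded_port", guest: 3306, host: 3366
end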

But there is a much easier way. Since Vagrant by default allows the host to connect to the guest through SSH, you can use MySQL Workbench installed on your host (Windows/Mac machine) to connect, over an SSH tunnel, to MySQL running inside your Vagrant/Ubuntu box, as shown below.

[Screenshot: MySQL Workbench connection configured with ‘Standard TCP/IP over SSH’ to the Vagrant box]

Note: The connection only uses the default SSH port forwarding (2222 -> 22); the MySQL port remains the same.
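If you prefer the command line over Workbench, the same idea works with a manual SSH tunnel (a sketch; the private key path is where recent Vagrant versions usually store it, and 3366 is an arbitrary local port):

# Open a tunnel: local port 3366 -> the guest's MySQL on 3306, via Vagrant's SSH (2222 -> 22)
ssh -i .vagrant/machines/default/virtualbox/private_key -p 2222 -N -L 3366:127.0.0.1:3306 vagrant@127.0.0.1

# In another shell, connect with the mysql client through the tunnel
mysql -h 127.0.0.1 -P 3366 -u root -p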