Key Learning from the GitLab Incident

GitLab is an awesome product! Although I don’t use their hosted service, I’ve been a very happy user of the product in an internally hosted setup.

They had a pretty bad (and well publicized) incident a couple of days back, which started with spammers hammering their Postgres DBs and unfortunately ended with a sysadmin accidentally removing almost 300GB of production data.

I can empathize (#HugOps) with the engineers who were working tirelessly to rectify the situation. Shit can hit the fan anytime you have a production system with so many users open to the wild internet. The transparency the GitLab team showed in keeping their users informed during the incident was awesome and took amazing guts!

Now most blogs/experts talk about the technical aspects of the unfortunate incident, mainly focusing on DB backup, replication and restoration processes, which are, no doubt, highly valid points.

I’d like to suggest another key aspect that came to my mind when going through the incident report: the human aspect!

This aspect seems to be ignored by many. From all accounts it looks like the team member working on the database issue was alone, tired and frustrated. The data removal disaster might have been averted if not one but two engineers had been working on the problem together. Think pair programming. Obviously, screen sharing can be used if the engineers are not co-located.

I know this still doesn’t guarantee against a serious f*ck up, but as a company/startup you would probably have better odds on your side.

An engineer should never work alone when fixing a highly critical production issue.


When trying to fix critical production issues in software systems, it’s super important to have an aircraft-style co-pilot working with you, on the lookout for potential howlers, e.g. rm -rfing the wrong folder.

There is always something to learn from adversity, Rock-on GitLab! Still a big fan.

DevOps, Git, Version Control Systems

How-to use Jenkins to keep your branches fresh

Although I prefer feature toggles over branching, there can be instances where feature toggles are not practical. Think of a situation where you need to upgrade the underlying framework (or platform) of your application. Some examples from the Java world could be a newer JDK, the latest Spring framework release that contains breaking API changes, a new version of the NetBeans platform that your desktop client is based on, and so on.

In such situations, when you have created a development branch in your Git repo, there is a risk that the master code base will diverge drastically from the branch. Therefore it is a good idea to keep your branch fresh by regularly merging ‘master changes’ into it. This not only saves you from having to resolve many conflicts when you perform a ‘big bang’ merge, but it also gives you frequent assurance that the functionality implemented on master still works in your branch.

Now you can do this manually, but you can also use Jenkins to easily automate the process. Since the automated job can be scheduled to run frequently, you will get faster feedback when a change on master breaks your branch.

Here’s how you do it.

1: Set up a job that tracks both your master and dev branch under Source Code Management. Use the Additional Behaviours/Merge before build option to merge master changes into your branch.


In short, this would fetch master, merge into branch, build and (ideally) run all tests of your application.

2: Next, use the Git publisher feature in the post-build section to push your branch containing the merged changes to the remote Git repo.


You can perhaps schedule this job @hourly to ensure your branch stays fresh with master changes.
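If you ever need to reproduce the job by hand (or debug it), the whole cycle boils down to something like the following; the branch name feature/upgrade is just an assumption for illustration:

```shell
# What the Jenkins job does under the hood (sketch):
git checkout feature/upgrade      # the long-lived dev branch
git fetch origin master           # grab the latest master changes
git merge origin/master           # fail fast here on conflicts
# ...build and run the full test suite here...
git push origin feature/upgrade   # publish the freshened branch
```

If the merge or the tests fail, Jenkins marks the build red and nothing is pushed, which is exactly the fast feedback you want.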

Architecture, General

Architects are trade-off evaluators

Most software problems have a finite set of solutions to choose from. An important role of an architect is to understand the trade-offs of each solution and decide on the best solution for the given business case.


For example, one solution to a given problem could be less performant but may result in a clean and maintainable codebase. The job of the software architect in this situation would be to determine whether performance or maintainability is the most important aspect for the problem at hand. The compromise reached should always be in the best interest of the software product.

It is important to document the reason for picking a particular solution, along with its trade-offs, for future reference. Software people are notoriously forgetful of their design decisions after a few days 🙂
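One lightweight convention for capturing such decisions is the Architecture Decision Record (ADR): a short, numbered note kept in the repo next to the code. The headings below are one common ADR layout, and the scenario in it is entirely made up for illustration:

```markdown
# ADR 007: Use synchronous REST between billing and invoicing

## Status
Accepted

## Context
Invoicing needs billing data in near real time; the team has no
message-broker experience and the expected load is modest.

## Decision
Call the billing service synchronously over REST.

## Consequences
+ Simple to implement, easy to debug and trace.
- Invoicing availability is now coupled to billing, and latency is
  bounded by the slowest call in the chain.
```

Six months later, the Context and Consequences sections are exactly what a forgetful team needs to re-evaluate the trade-off.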

As the saying goes, there are many ways to skin a cat; the architect should find the best way to do it given the resources while still achieving the end goal.

General, Management

Product Passion

The most amazing thing to me about this video is not the awesome rocket technology or the brilliance of Elon Musk but the crazy passion shown by the SpaceX employees (including Musk) towards the success of the PRODUCT!

Product passion is a result of product focus. You don’t have to be Musk or SpaceX to have crazy passion for your product, you just need the culture and mindset from top to bottom.
Finally, product passion fuels employee engagement, and then everything else becomes secondary!

General, Subversion, Technology, Version Control Systems

Subversion Revert with Externals

Disclaimer: I know Git rocks, but people still use Subversion 🙂!

Let’s say you have a Subversion checkout containing externals. Now you’ve made changes in many places within the folder structure and you want to get back to the original clean state.

So your typical approach would be to go to the top directory of the working copy and do a recursive revert using:

svn revert -R .

But unfortunately nothing happens! The reason is that the working copy is made up of subfolders containing externals, and in order to revert them you need to go into each subdirectory and issue the svn revert command there. This can be cumbersome if you have a working copy with many subfolders corresponding to externals.

Well, the solution is pretty simple if you have a bash shell (Windows users will require Cygwin or something similar).

for d in ./*/ ; do (cd "$d" && svn revert -R .); done

This little bash one-liner loops over all immediate subfolders, changes (cd) into each one, and executes a recursive svn revert within each ‘external’ folder.
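Note that the loop only covers externals that sit directly under the top folder. If your externals are nested deeper, one variant (a sketch, relying on plain svn status flagging externals with an X in the first column) is:

```shell
# Revert the top-level working copy first, then every external
# reported by `svn status` (flagged with an X), at any depth.
svn revert -R .
svn status | awk '$1 == "X" { print $2 }' | while read -r ext; do
  (cd "$ext" && svn revert -R .)
done
```

The subshell around the cd keeps the loop anchored at the top of the working copy between iterations.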

The solution was inspired by this thread on StackExchange.

Cloud, DevOps, General, Technology

How To Move your large VirtualBox VM disk created by Docker

So you’ve been using Docker Toolbox (DTB) on Windows and the ‘default’ Docker host created by docker-machine is growing alarmingly large on your limited C: drive.

The super large disk.vmdk file for the “default” VM created by DTB is usually located at C:\Users\[username]\.docker\machine\machines\default

Now you want to move the existing disk.vmdk file to your much larger D: drive without having to recreate a Docker machine/host from scratch and pull all the images onto it again.

The important thing to note here is that the VM disk is an implementation detail of VirtualBox (VBox), not Docker; docker-machine just uses VBox as a provider to create a Docker host.

Therefore, if you need to move the VM disk file to another location, you should change the VBox configuration for the VM instead of changing any docker-machine configuration (or using any docker commands).

So here are the steps you need to follow.

1. Stop the running docker machine (i.e. VBox VM) like so:

docker-machine stop

Note: This will effectively power off the VBox VM, named ‘default’. You can check this by opening the VBox GUI.


2. Copy the disk.vmdk file from C:\Users\[username]\.docker\machine\machines\default to a suitable folder in your bigger D: drive. I created D:\docker-machines\default for this.

Now the interesting part 🙂 We need to tell VBox about the new location of the disk.vmdk file.

3. The default.vbox file, located at C:\Users\[username]\.docker\machine\machines\default, specifies the path to the vmdk file. This vbox file is an XML file, so just open it up in any editor and set the Machine/MediaRegistry/HardDisks/HardDisk/location attribute to the new location on your D: drive.
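For orientation, the relevant fragment of default.vbox looks roughly like this; the uuid and the other attributes will differ on your machine, and only location needs to change:

```xml
<Machine ...>
  <MediaRegistry>
    <HardDisks>
      <HardDisk uuid="{...}" location="D:\docker-machines\default\disk.vmdk"
                format="VMDK" type="Normal"/>
    </HardDisks>
  </MediaRegistry>
  ...
</Machine>
```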


Note: Don’t worry about the “DO NOT EDIT THIS FILE..” warning at the top; since you have already stopped the VM, the file will not be overwritten. And I found this method easier than using the GUI 🙂

4. Now power up the docker machine using:

docker-machine start

If the ‘default’ machine starts without any problems then you are good to go!

Now check if all your images are still available using:

docker images

5. You can verify that the vmdk file on D: is being used by firing up VBox, selecting the “default” VM, and checking Settings/Storage/disk.vmdk.


6. Now you are done! Just go ahead and delete the huge disk.vmdk from your C: drive, located at C:\Users\[username]\.docker\machine\machines\default


Tip from Basketball for Software Firms

Many times a software developer’s performance is judged purely on the number of “tasks” that he or she has completed. This can be the number of bug fixes or user stories completed during a given period of time, which typically correlates with the amount of code an engineer contributed to the product during that period. Now this can be an important performance metric, no doubt.

But in my mind software firms need to pay more attention to another important non-tangible metric when evaluating a developer’s performance… ASSISTS! I got this idea while watching some highlights of this year’s NBA finals. In basketball, an assist is a pass to a teammate that directly results in a basket and points for the team. The number of assists a player makes is considered an important stat when judging his performance.

Similarly, in software teams some developers contribute in many “non-tangible” ways by assisting other developers. These contributions can take the form of architecture/design tips, suggestions for new product features, code improvements, or pointing developers to similar implementations in other areas of the same product.

Contributing to the team in such ways is a key trait of a good software engineer, and software firms should have mechanisms in place to “quantify” (at least to some degree) the “assists” made by team members. Project/product managers and even architects can play a key role in helping firms gauge the number of assists a developer has contributed during a given period when evaluating her performance. This is by no means an exact science, but a “ball park” rating can be very useful.

Software engineers should also realize that their value proposition is not just about writing code, but also about contributing to the team goal in other, non-tangible ways.

Look at Steph Curry, for example: he is not only a huge points scorer but also brilliant at assists; that’s why he is MVP!