General, Technology

User Experience and the Locality Principle

Designing a great user experience (UX) for complex business software is non-trivial, to say the least.

The complex screens and large amounts of data in such systems do not make things easier.

One of the key elements of good UX is making it easy for end-users to find the “things” in the system that are relevant to them.

This is where I think the Locality Principle comes into play. It’s a well-known principle in computer science, with applications in memory-access optimization.

So what does this principle tell us? To keep things super simple, have a look at this video, which explains the gist of the principle.

Now how do we apply this concept when designing user experience (UX) in business software?

End-users of a business software system typically work with (or “reuse”) a relatively small set of data (and screens) during a given time period (e.g. a business week). Therefore the information they search for and work with during this period should be fairly predictable.

If we can build some “intelligence” (short of AI 🙂) into these systems to identify users’ information-access patterns, we could use that “intelligence” to suggest the next possible interaction on the current UI for a given user. This would alleviate the need for the user to “find” the next screen to work with and make it easy to “jump” between screens.

It must be noted that this is richer than “Recent Screens” functionality, which merely puts one user’s last-used screens in a stack. Think something like Amazon Recommendations instead.

So the idea is simple: leverage the principle of locality in UX design to predict the next information requirement (e.g. the next screen) based on the user’s previous UI interaction patterns.
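As a rough sketch of what such “intelligence” could look like (all screen names here are hypothetical, not from any real product), a per-user transition table that counts which screen tends to follow the current one is already enough to power a simple “next screen” suggestion:

```python
from collections import Counter, defaultdict

class ScreenRecommender:
    """Suggest the next screen from observed transition frequencies.

    A minimal sketch of the idea in this post: remember which screens a
    user tends to open after the current one, and recommend the most
    frequent follow-ups.
    """

    def __init__(self):
        # transitions[current_screen] counts the screens opened next
        self.transitions = defaultdict(Counter)

    def record(self, from_screen, to_screen):
        self.transitions[from_screen][to_screen] += 1

    def suggest(self, current_screen, top_n=3):
        # Most frequent follow-up screens first
        return [s for s, _ in self.transitions[current_screen].most_common(top_n)]

# Example: a user's typical weekly pattern
r = ScreenRecommender()
for _ in range(5):
    r.record("Invoices", "Approvals")
r.record("Invoices", "Reports")
print(r.suggest("Invoices"))  # ['Approvals', 'Reports']
```

A real system would also decay old counts over time (locality is temporal, after all) and keep one table per user, but the core mechanism is just this.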


Management, Strategy, Technology

What’s the Chorus of your Product?

After listening to a song you like what part of the song plays over and over in your head?

What part of the song do you keep on singing in the car to work or in the shower?

What part of the song do you hum to your friends when telling them how great it is?

It’s the Chorus!

The chorus of a song gives it a unique identity. It is the most important ‘feature’ that makes it different from other songs. It’s the song’s USP.

So what can the software industry learn from the music industry?

Let’s first have a look at this fascinating video (starts at 0:15) of how Sia comes up with a new song. (In fact, it was the inspiration behind this blog post.)

The cool thing to note here is how she mainly focuses on identifying the chorus and getting it to sound awesome, even without any lyrics.

So if you are building a consumer (or even enterprise) software product, make sure you identify its Chorus, the feature that makes it unique. Then focus most of your effort and resources on it, to make your users fall in love with it and talk about it with other potential users.

It’s amazing how we can stimulate product strategy in our own industry by looking at other, almost unrelated industries. The key is to not isolate your knowledge to your own industry alone but to widen it to other industries and try to draw parallels that will help with your own strategy.


Strategy, Technology

‘Time to Value’ should be the new ‘Time to Market’


It’s fascinating how the software industry has built and leveraged technologies that can deliver good-quality software products at amazing speed.

The key technologies (in my opinion) that enable such speed are:

Cloud – a fast and reliable distribution channel for software.

Microservices – smaller units of software that can be developed and deployed independently and ‘quickly’ by two-pizza teams.

Containers (e.g. Docker) – making sure that the software (primarily microservices) has a reliable and consistent environment in which to execute.

Software vendors should now focus on how fast their customers can start extracting actual value from software instead of how fast they can get their software products to market.

Time to Value should be the new Time to Market!

Management, Strategy, Technology

The biggest challenge for “traditional” software vendors moving to a SaaS model is not technical

Cloud Computing is probably the longest-surviving buzzword in the IT industry over the past decade or more. From a software buyer’s point of view, the important decision of going for a “Cloud Solution” is based on economics, more specifically the CapEx vs. OpEx trade-off. The pay-as-you-go nature of cloud computing is perhaps its most important economic feature for customers.

Cloud computing has three well-known service models: IaaS, PaaS and SaaS. Of these, Software-as-a-Service (SaaS) is perhaps the most convenient model of acquiring IT for operating a business. The huge success of enterprise SaaS vendors such as Salesforce and, more recently, Workday is evidence that many enterprise customers are moving towards SaaS for software “procurement”.

These new kids on the block have prompted the “brick and mortar” software vendors, which follow the old model of building software, burning it on a CD and shipping it to customers for on-premise installation, to follow suit. These vendors are now making their software more architecturally and technically cloud-friendly. What this usually means is that the software can now run on cloud infrastructure (IaaS) such as Amazon AWS or Microsoft Azure.

Now, building software that is more cloud-friendly is one thing, but actually moving to a true pay-as-you-go SaaS delivery model is a whole new ballgame for the traditional vendors.

I think the biggest challenge for existing non-SaaS vendors is not technical; rather, it’s about overhauling their business/financial model. When moving to SaaS, customers who used to pay all the license fees upfront will now be on a subscription payment model. This means the company’s financials (such as cash flow) need to be looked at from a different angle. It may also affect how sales and marketing approach their roles, since customer LTV (Lifetime Value) is now a bigger concern.
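To make the cash-flow point concrete, here is a toy calculation (all numbers made up purely for illustration): under a subscription model, the vendor waits years to collect what it used to book on day one.

```python
# Toy numbers, purely illustrative: a $120k upfront license
# vs. a $4k/month subscription for the same product.
license_upfront = 120_000
monthly_fee = 4_000

# Months of subscription revenue needed to match the old upfront payment
break_even_months = -(-license_upfront // monthly_fee)  # ceiling division
print(break_even_months)  # 30

# Cumulative subscription revenue over the first few years
for year in (1, 2, 3):
    print(year, monthly_fee * 12 * year)
```

Two and a half years to break even on a single deal is exactly the kind of shift that forces finance, sales and marketing to rethink their models.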

One possible way to overcome this challenge would be to partner with (or even merge with/acquire) another cloud company and piggyback on its business model for SaaS delivery. But when choosing a cloud partner, it would probably be wise to avoid another SaaS provider and instead select an IaaS or PaaS provider, to avoid market-share erosion due to conflicting products.

Another way to address this challenge would be to set up a separate business unit for the cloud SaaS business. This would enable all new customers to join the SaaS business unit directly while existing customers are gradually migrated.

General, Subversion, Technology, Version Control Systems

Subversion Revert with Externals

Disclaimer: I know Git rocks, but people still use Subversion 🙂

Let’s say you have a Subversion checkout containing externals. Now you’ve made changes in many places within the folder structure and you want to get back to the original clean state.

So your typical approach would be to go to the top directory of the working copy and do a recursive revert using:

svn revert -R .

But unfortunately nothing happens! The reason is that the working copy is made up of sub folders containing externals and in order to revert them you need to go into each sub directory and then issue the svn revert command. This can be cumbersome if you have a working copy containing many subfolders corresponding to externals.

Well, the solution is pretty simple if you have a Bash shell (Windows users will need Cygwin or something similar):

for d in ./*/ ; do (cd "$d" && svn revert -R .); done

This little Bash one-liner loops over all the immediate subfolders, changes (cd) into each one, and executes a recursive svn revert inside each ‘external’ folder.

The solution was inspired by this thread on StackExchange.

Cloud, DevOps, General, Technology

How To Move your large VirtualBox VM disk created by Docker

So you’ve been using Docker Toolbox (DTB) on Windows, and the ‘default’ Docker host created by docker-machine is growing alarmingly large on your limited C: drive.

The super-large disk.vmdk file for the “default” VM created by DTB is usually located at C:\Users\[username]\.docker\machine\machines\default

Now you want to move the existing disk.vmdk file to your much larger D: drive without having to recreate a docker machine/host from scratch and pulling all images on to it again.

The important thing to note here is that the vmdisk is an implementation detail of VirtualBox (VBox) not Docker. docker-machine just uses VBox as a provider to create a Docker host.

Therefore, if you need to move the VM disk file to another location, you should change the VBox configuration for the VM instead of changing any Docker machine configuration (or using any docker commands).

So here are the steps you need to follow.

1. Stop the running docker machine (i.e. VBox VM) like so:

                   docker-machine stop  

Note: This will effectively power off the VBox VM, named ‘default’. You can check this by opening the VBox GUI.


2. Copy the disk.vmdk file from C:\Users\[username]\.docker\machine\machines\default to a suitable folder in your bigger D: drive. I created D:\docker-machines\default for this.

Now the interesting part 🙂 We need to tell VBox about the new location of the disk.vmdk file.

3. The default.vbox file, located in C:\Users\[username]\.docker\machine\machines\default, specifies the path to the vmdk file. This vbox file is an XML file, so just open it in any editor and set the Machine/MediaRegistry/HardDisks/HardDisk/location attribute to the new location on your D: drive.
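The XML edit can also be scripted. Below is a minimal Python sketch that operates on a simplified stand-in for the .vbox content; the real default.vbox carries a VirtualBox XML namespace and many more elements, but the change is the same idea: rewrite the HardDisk location attribute.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a .vbox file. The real file has a namespace
# and many more elements, but the edit itself is just one attribute.
vbox_xml = """<VirtualBox>
  <Machine>
    <MediaRegistry>
      <HardDisks>
        <HardDisk location="disk.vmdk" format="VMDK"/>
      </HardDisks>
    </MediaRegistry>
  </Machine>
</VirtualBox>"""

root = ET.fromstring(vbox_xml)
disk = root.find(".//HardDisk")
# Point the disk at its new home on the D: drive
disk.set("location", r"D:\docker-machines\default\disk.vmdk")
print(disk.get("location"))
```

For the actual file you would parse it with `ET.parse(path)`, make the same `set` call, and write it back, keeping a backup copy first.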


Note: Don’t worry about the “DO NOT EDIT THIS FILE..” statement at the top; since you have already stopped the VM, the file will not be overwritten. I also found this method easier than using the GUI 🙂

4. Now power up the docker machine using:

                docker-machine start

If the ‘default’ machine starts without any problems, then you are good to go!

Now check if all your images are still available using:

docker images

5. You can verify that the vmdk file on D: is being used by firing up VBox, selecting the “default” VM, and clicking on Settings/Storage/disk.vmdk.


6. Now you are done! Just go ahead and delete the huge disk.vmdk from your C: drive, located at C:\Users\[username]\.docker\machine\machines\default

General, Technology, Uncategorized

Post Notifications plugin tip for WordPress

If you use the Post Notification plugin in your hosted WordPress (WP) blog you will notice that it maintains a separate MySQL table called post_notification_emails for users who subscribe for email alerts about new blog posts.

But unfortunately there is no easy, configurable way to automatically subscribe a user for post notifications when he or she registers a new account in the blog.

Remember, when a new user registers, he or she has to provide an email address, which is stored in the core WP users table; that table has no link to the post_notification_emails table.

A simple solution for this is to create a MySQL trigger in your WP database like so:

DELIMITER $$
CREATE TRIGGER `subscribe_user_for_notifications`
AFTER INSERT ON users FOR EACH ROW BEGIN
  INSERT INTO post_notification_emails (email_addr, last_modified, date_subscribed, gets_mail)
  VALUES (NEW.user_email, NOW(), NOW(), 1);
  INSERT INTO post_notification_cats (id, cat_id) VALUES (LAST_INSERT_ID(), 0);
END $$
DELIMITER ;


  • I used the phpMyAdmin tool to execute commands on the MySQL database. This comes with the WAMP installation which is used to host the blog.
  • The table names mentioned above may have table prefixes if you provided them during the WP setup (e.g. travelblog_post_notification_emails).

Although I’m sure there are other ways of doing this, like hacking the PHP code in the Post Notification plugin (which, by the way, is no longer maintained) to query the users table for emails in addition to post_notification_emails, for now I prefer this backend DB solution 🙂