Strategy, Technology

'Time to Value' should be the new 'Time to Market'



It's fascinating how the software industry has built and leveraged technologies that can deliver software products of good quality at amazing speed.

The key technologies (in my opinion) that enable such speed are:

Cloud – a fast and reliable distribution channel for software.

Microservices – smaller units of software that can be developed and deployed independently and quickly by two-pizza teams.

Containers (e.g. Docker) – making sure that the software (primarily microservices) has a reliable and consistent environment in which to execute.

Software vendors should now focus on how fast their customers can start extracting actual value from software instead of how fast they can get their software products to market.

Time to Value should be the new Time to Market!

Management, Strategy, Technology

The biggest challenge for "traditional" software vendors moving to a SaaS model is not technical


[Image: cloud computing, defined. Credit: http://www.webopedia.com]

Cloud Computing is probably the longest-surviving buzzword in the IT industry of the past decade or more. From a software buyer's point of view, the important decision of going for a "Cloud Solution" comes down to economics, more specifically the CapEx vs. OpEx trade-off. The pay-as-you-go nature of cloud computing is perhaps its most important economic feature for customers.

Cloud computing has three well-known service models: IaaS, PaaS and SaaS. Of these, Software-as-a-Service (SaaS) is perhaps the most convenient model for acquiring IT to operate a business. The huge success of enterprise SaaS vendors such as Salesforce and, more recently, Workday is evidence that many enterprise customers are moving towards SaaS for software "procurement".

These new kids on the block have prompted the "brick and mortar" software vendors – those following the old model of building software, burning it on a CD and shipping it to customers for on-premise installation – to follow suit. These vendors are now making their software more architecturally and technically cloud friendly. What this usually means is that the software can now run on cloud infrastructure (IaaS) like Amazon AWS or Microsoft Azure.

Now, building software that is cloud friendly is one thing, but actually moving to a true pay-as-you-go SaaS delivery model is a whole new ballgame for traditional vendors.

I think the biggest challenge for existing non-SaaS vendors is not technical; rather, it is overhauling their business/financial model. When moving to SaaS, customers who used to pay all their license fees upfront will now pay through a subscription model. This means the company's financials (such as cash flow) need to be looked at from a different angle. It may also affect how sales and marketing approach their roles, since customer LTV (Lifetime Value) becomes a bigger concern.

One possible way to overcome this challenge would be to partner with (or even merge with or acquire) a cloud company and piggyback on its business model for SaaS delivery. When choosing a cloud partner, though, it would probably be wise to avoid another SaaS provider and instead select an IaaS or PaaS provider, to avoid market-share erosion from competing products.

Another way to address this challenge would be to set up a separate business unit for the cloud SaaS business. All new customers would then become part of the SaaS business unit directly, while existing customers are gradually migrated.

DevOps

Docker Compose does not inject host environment variables on macOS


When using docker-compose to orchestrate your container services, it's common practice to pass environment variables set on the host machine through to your containers. This is especially useful when you want to configure passwords/secrets for the applications running inside the containers, since it lets you keep such sensitive information out of your docker-compose.yml file.

Let's consider the following extract of a docker-compose.yml file, which defines two environment variables required for a MySQL container service named 'db'. Note how MYSQL_ROOT_PASSWORD and MYSQL_DATABASE are not given any values; this essentially tells docker-compose to inject the values from the corresponding environment variables on the host machine.

version: "2.0"
services:
 ...

 db:
   image: mysql:5.6.26
   environment:
     - MYSQL_ROOT_PASSWORD
     - MYSQL_DATABASE
   ports:
     - '3306:3306'

Now, what you would usually do on Linux or macOS is use the export shell command to set the values on the host machine before calling docker-compose up to bring up the containers.

export MYSQL_ROOT_PASSWORD=password
export MYSQL_DATABASE=moviedb
docker-compose up

This works on Linux, in my case an Ubuntu 16.04 droplet on DigitalOcean, without any issue. But it doesn't seem to work on my dev machine running macOS (Sierra) with the latest version of Docker, 17.03.1-ce (at the time of writing).

Strangely, when running docker-compose up on macOS, the MySQL container complains that the MYSQL_ROOT_PASSWORD environment variable is not set. Further inspection revealed that both variables were empty inside the container.

So it seems like the export shell command does not work with docker-compose on macOS.
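If you want to verify this yourself, one quick check (a sketch, assuming the 'db' service is up) is to list the environment inside the container:

docker-compose exec db env | grep MYSQL

If the values were not injected, the variables will show up empty (or not at all).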

The trick to solving this problem on macOS is to instead set the variables inline, as a one-liner in the shell, like so:

MYSQL_ROOT_PASSWORD=password docker-compose up

Note: I have removed the MYSQL_DATABASE=moviedb segment from the above command for brevity.
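Another option worth considering: docker-compose can also read a .env file placed next to docker-compose.yml and use it for variable substitution in the compose file. A minimal sketch, reusing the example values from above (remember to keep the .env file out of version control):

# .env
MYSQL_ROOT_PASSWORD=password
MYSQL_DATABASE=moviedb

# docker-compose.yml (excerpt) – values substituted from .env
services:
  db:
    image: mysql:5.6.26
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}

You can run docker-compose config to see the resolved values before bringing the containers up.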

Management

Key Learning from the GitLab Incident


GitLab is an awesome product! Although I don't use their hosted service at GitLab.com, I've been a very happy user of the product in an internally hosted setup.

They had a pretty bad (and well-publicized) incident a couple of days back, which started with spammers hammering their Postgres DBs and unfortunately ended with a sysadmin accidentally removing almost 300GB of production data.

I can empathize (#HugOps) with the engineers who were working tirelessly to rectify the situation. Shit can hit the fan anytime you have a production system with so many users open to the wild internet. The transparency shown by the GitLab team to keep their users informed during the incident was awesome and required amazing guts!

Now, most blogs/experts talk about the technical aspects of the unfortunate incident, focusing mainly on DB backup, replication and restoration processes. These are, no doubt, highly valid points.

I'd like to suggest another key aspect that came to my mind when going through the incident report: the human aspect!

This aspect seems to be ignored by many. From all accounts, it looks like the team member working on the database issue was alone, tired and frustrated. The data-removal disaster may have been averted if not one but two engineers had been working on the problem together. Think pair programming. Obviously, screen sharing can be used if the engineers are not co-located.

I know this still does not guarantee that a serious f*ck up won't happen, but as a company/startup you would probably have better odds on your side.

An engineer should never work alone when fixing a highly critical production issue.

[Image: cockpit. Courtesy: Flickr (licensed under Creative Commons)]

When trying to fix critical production issues in software systems, it's super important to have an aircraft-style co-pilot working with you, on the lookout for potential howlers, e.g. rm -rf'ing the wrong folder.

There is always something to learn from adversity. Rock on, GitLab! Still a big fan.

DevOps, Git, Version Control Systems

How to use Jenkins to keep your branches fresh


Although I prefer feature toggles over branching, there are instances where feature toggles are not practical. Think of a situation where you need to upgrade the underlying framework (or platform) of your application. Some examples from the Java world: newer JDKs, a new version of the Spring framework that contains breaking API changes, a new version of the NetBeans platform that your desktop client is based on, and so on.

In such situations, when you have created a branch in your Git repo for this development work, there is a risk that the master code base will diverge drastically from the branch. It is therefore a good idea to keep your branch fresh by regularly merging master changes into it. This not only saves you from having to resolve many conflicts in a 'big bang' merge at the end, it also verifies more frequently that the functionality implemented in master still works in your branch.

You can do this manually by regularly merging master changes into your branch, but you can also use Jenkins to easily automate the process. Since the Jenkins job can be scheduled to run frequently, you get faster feedback when a change in master breaks your branch.

Here's how you do it.

1: Set up a job that tracks both your master and dev branches under Source Code Management. Use the Additional Behaviours / Merge before build option to merge master changes into your branch.

[Screenshot: Jenkins Source Code Management configuration with Merge before build]

In short, this fetches master, merges it into the branch, builds, and (ideally) runs all the tests of your application.

2: Next, use the Git Publisher feature in the post-build section to push the branch containing the merged changes to the remote Git repo.

[Screenshot: Jenkins Git Publisher configuration]

You can schedule this job to run @hourly, for example, to ensure your branch stays fresh with master changes.
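For reference, the Jenkins job above is roughly equivalent to the following shell sequence (a sketch; 'dev-branch' and the Maven command are placeholders for your own branch name and build step):

# fetch the latest changes and merge master into the dev branch
git fetch origin
git checkout dev-branch
git merge origin/master   # a merge conflict here should fail the job
mvn clean test            # build and run all tests
# push the merged branch back only if the build and tests pass
git push origin dev-branch

If the merge or the tests fail, the job goes red and the branch is left untouched on the remote, which is exactly the fast feedback you want.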

Architecture, General

Architects are trade-off evaluators


Most software problems have a finite set of solutions to choose from. An important role of an architect is to understand the trade-offs of each solution and decide on the best solution for the given business case.

[Comic: appliance repair. Credit: xkcd.com]

For example, one solution to a given problem could be less performant but may result in a clean and maintainable codebase. The job of the software architect in this situation is to determine whether performance or maintainability is the more important aspect for the problem at hand. The compromise reached should always be in the best interest of the software product.

It is important to document the reason for picking a particular solution, along with its trade-offs, for future reference. Software people are notoriously forgetful of their design decisions after a few days 🙂

As the saying goes, there are many ways to skin a cat; the architect should find the best way to do it given the resources while achieving the end goal.

General, Management

Product Passion


The most amazing thing to me about this video is not the awesome rocket technology or the brilliance of Elon Musk, but the crazy passion shown by the SpaceX employees (including Musk) towards the success of the PRODUCT!


Product passion is a result of product focus. You don't have to be Musk or SpaceX to have crazy passion for your product; you just need the culture and mindset, from top to bottom.
Finally, product passion fuels employee engagement, and then everything else becomes secondary!