MicroServices for a small island

by Graham Pohl on 11/09/2017 10:00


In New Zealand, most solutions, and the dev teams working on them, are tiny compared to those at global software companies. Yet many developers, architects and CxOs I work with want to do things 'because Netflix does it' – especially when it comes to MicroServices.

There is plenty of guidance out there – but it's hard to apply when software development isn't your core business, your team is small, and scale isn't the problem you're trying to solve. Here are some observations from 2 years of leading a small dev team refactoring a complex system for a government agency using MicroServices.

1. Is MicroServices the simplest way to solve your problem?

If you have an unsustainable monolith, but can feed your team with 2 pizzas and count concurrent sessions on your fingers – then refactoring into a more modular monolith may be a simpler approach.

Our problem was how to safely refactor something that wasn't a single monolith, but a collection of distributed yet tightly-coupled polyglot monoliths. We needed to maintain normal service to users while increasing the release cadence of a multitude of server-side and client-side components with different deployment methods, written in a mix of .Net, Java, a proprietary scripting language and some JS.

Even so, we would have preferred to rewrite the entire system as a new (modular) monolith – but that didn't fly, so MicroServices became a way for us to gradually and safely slice out functionality that was split across the monoliths.

2. Better software requires adaptable people

We got a lot of benefit from introducing some new paradigms.

None of those paradigms are bleeding edge – yet many of the local developers, operational support staff and project/change managers hadn't worked with them before. Some of our team adapted quickly and we're making better software – but I worry about the sustainability of new paradigms when adaptable people are in short supply in the local market.

3. Making a Polyglot Continuous Delivery pipeline is hard work

Most toolsets for continuous deployment, automated testing and code quality are not language agnostic.

In our case, before we started refactoring, we already had code in .Net, Java, a proprietary scripting language and some JS. Our tooling includes SVN, Ant, Nexus, Maven, Jenkins, SonarQube, Cucumber and Liquibase. While these now work more-or-less consistently, it took a LOT of effort, and the pipeline has been fragile when individual tools have been upgraded to new versions.

4. Refactoring service boundaries and making them performant is hard work and the results often aren't pretty

We chose our MicroService boundaries based on DDD bounded contexts – with a complex business domain we expect to get them wrong and iterate. There are two areas where we've chosen to be pragmatic rather than purist about those boundaries. Sometimes we found bounded contexts that:

  • cut across almost the entire existing codebase, but weren't high priority to the business - so refactoring would have been a massive effort, for low value.
  • meant what had been a simple in-process call became many sequential http roundtrips - so would have required a lot of effort in performance tuning.

Where it made sense, we chose to modify the service boundary instead - knowing responsibilities would be duplicated, or split across services.
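To make the roundtrip problem concrete, here is a minimal sketch (all class and method names are hypothetical, not from our codebase, and the remote calls are simulated with a counter) of how a chatty boundary turns one in-process call into a remote call per item, and how a coarser batch operation on the boundary collapses that back to a single roundtrip:

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch: CustomerService stands in for a remote MicroService;
// each method call counts as one HTTP roundtrip.
public class BoundarySketch {
    static class CustomerService {
        int roundtrips = 0;
        private final Map<Integer, String> data =
                Map.of(1, "Aroha", 2, "Ben", 3, "Chen");

        // Fine as an in-process call; chatty when it crosses a service boundary.
        String getName(int id) {
            roundtrips++;
            return data.get(id);
        }

        // Coarser-grained batch operation: one roundtrip for many ids.
        Map<Integer, String> getNames(Collection<Integer> ids) {
            roundtrips++;
            Map<Integer, String> result = new HashMap<>();
            for (int id : ids) result.put(id, data.get(id));
            return result;
        }
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(1, 2, 3);

        CustomerService chatty = new CustomerService();
        List<String> names =
                ids.stream().map(chatty::getName).collect(Collectors.toList());
        System.out.println(names + " in " + chatty.roundtrips + " roundtrips");

        CustomerService batched = new CustomerService();
        Map<Integer, String> byId = batched.getNames(ids);
        System.out.println(byId.size() + " names in " + batched.roundtrips + " roundtrip");
    }
}
```

With three items the per-item style costs three roundtrips and the batch style costs one; over a network, widening the interface like this (or moving the boundary so the loop stays in-process) is usually cheaper than tuning each individual call.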

Image: Palau_2008030818_4749 by LuxTonnerre used under creative commons license CC BY 2.0.

Graham Pohl is a Principal Consultant specialising in strategy and architecture, based in Equinox IT's Wellington office. See Graham's profile.
