The microservices architectural style has been the hot topic over the last year. At the recent O'Reilly software architecture conference, it seemed like every session talked about microservices. Enough to get everyone's over-hyped-bullshit detector up and flashing. One of the consequences of this is that we've seen teams be too eager to embrace microservices, [1] not realizing that microservices introduce complexity on their own account. This adds a premium to a project's cost and risk - one that often gets projects into serious trouble.

While this hype around microservices is annoying, I do think it's a useful bit of terminology for a style of architecture which has been around for a while, but needed a name to make it easier to talk about. The important thing here is not how annoyed you feel about the hype, but the architectural question it raises: is a microservice architecture a good choice for the system you're working on?

"It depends" must start my answer, but then I must shift the focus to what factors it depends on. The fulcrum of whether or not to use microservices is the complexity of the system you're contemplating. The microservices approach is all about handling a complex system, but in order to do so the approach introduces its own set of complexities. When you use microservices you have to work on automated deployment, monitoring, dealing with failure, eventual consistency, and other factors that a distributed system introduces. There are well-known ways to cope with all this, but it's extra effort, and nobody I know in software development seems to have acres of free time.
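To give a feel for that premium, here's a minimal sketch of the kind of failure-handling code a distributed design obliges you to write. `flaky_service` is my invented stand-in for a call across the network; an in-process call within a monolith simply never needs this wrapper.

```python
import random
import time

def call_with_retries(fn, attempts=3, backoff=0.01):
    """Retry a remote call, backing off exponentially between attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(backoff * (2 ** attempt))

# Invented stand-in for a network call, which can fail in ways an
# in-process call never does.
rng = random.Random(0)

def flaky_service():
    if rng.random() < 0.5:
        raise ConnectionError("service unavailable")
    return {"status": "ok"}

result = call_with_retries(flaky_service)
```

And retries are only one of the factors: a production microservice system would also need timeouts, circuit breakers, and monitoring around every such call.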

So my primary guideline would be don't even consider microservices unless you have a system that's too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don't try to split it into separate services.

The complexity that drives us to microservices can come from many sources including dealing with large teams [2], multi-tenancy, supporting many user interaction models, allowing different business functions to evolve independently, and scaling. But the biggest factor is that of sheer size - people finding they have a monolith that's too big to modify and deploy.

At this point I feel a certain frustration. Many of the problems ascribed to monoliths aren't essential to that style. I've heard people say that you need to use microservices because it's impossible to do ContinuousDelivery with monoliths - yet there are plenty of organizations that succeed with a cookie-cutter deployment approach: Facebook and Etsy are two well-known examples.

I've also heard arguments that say that as a system increases in size, you have to use microservices in order to have parts that are easy to modify and replace. Yet there's no reason why you can't make a single monolith with well-defined module boundaries. At least there's no reason in theory; in practice it seems too easy for module boundaries to be breached and monoliths to get tangled as well as large.

We should also remember that there's a substantial variation in service-size between different microservice systems. I've seen microservice systems vary from a team of 60 with 20 services to a team of 4 with 200 services. It's not clear to what degree service size affects the premium.

As size and other complexity boosters kick in on a project, I've seen many teams find that microservices are a better place to be. But unless you're faced with that complexity, remember that the microservices approach brings a high premium, one that can slow down your development considerably. So if you can keep your system simple enough to avoid the need for microservices: do.


1: It's a common enough problem that our recent radar called it out as Microservice Envy.

2: Conway's Law says that the structure of a system follows the organization of the people that built it. Some examples of microservice usage had organizations deliberately split themselves into small, loosely coupled groups in order to push the software into a similar modular structure - a notion that's called the Inverse Conway Maneuver.


I stole much of this thinking from my colleagues: James Lewis, Sam Newman, Thiyagu Palanisamy, and Evan Bottcher. Stefan Tilkov's comments on an earlier draft were instrumental in sharpening this post. Rob Miles, David Nelson, Brian Mason, and Scott Robinson discussed drafts of this article on our internal mailing list.



extreme programming · clean code · refactoring


Kent Beck came up with his four rules of simple design while he was developing ExtremeProgramming in the late 1990s. I express them like this. [1]

The rules are in priority order, so "passes the tests" takes priority over "reveals intention".

Kent Beck developed Extreme Programming and Test Driven Development, and can always be relied on for good Victorian facial hair for his local ballet.

The most important of the rules is "passes the tests". XP was revolutionary in how it raised testing to a first-class activity in software development, so it's natural that testing should play a prominent role in these rules. The point is that whatever else you do with the software, the primary aim is that it works as intended and tests are there to ensure that happens.

"Reveals intention" is Kent's way of saying the code should be easy to understand. Communication is a core value of Extreme Programming, and many programmers like to stress that programs are there to be read by people. Kent's form of expressing this rule implies that the key to enabling understanding is to express your intention in the code, so that your readers can understand what your purpose was when writing it.

The "no duplication" rule is perhaps the most powerfully subtle of these rules. It's a notion expressed elsewhere as DRY or SPOT [2]; Kent expressed it as saying everything should be said "Once and only Once." Many programmers have observed that the exercise of eliminating duplication is a powerful way to drive out good designs. [3]
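A small, invented illustration of how that works: removing a duplicated discount rule surfaces a concept, `bulk_discount`, that the design was missing.

```python
# Before: two totalling functions repeat the same discount arithmetic.
def invoice_total_before(prices):
    total = sum(prices)
    if total > 100:          # duplicated discount rule
        total = total * 0.9
    return total

def quote_total_before(prices):
    total = sum(prices)
    if total > 100:          # the same rule, copied
        total = total * 0.9
    return total

# After: the duplication is factored into a named concept, bulk_discount,
# giving the rule a single home and making the design intent explicit.
def bulk_discount(total):
    return total * 0.9 if total > 100 else total

def invoice_total(prices):
    return bulk_discount(sum(prices))

def quote_total(prices):
    return bulk_discount(sum(prices))
```

The behavior is unchanged, but the refactored form both reveals intention and gives the discount rule one place to change.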

The last rule tells us that anything that doesn't serve the three prior rules should be removed. At the time these rules were formulated there was a lot of design advice around adding elements to an architecture in order to increase flexibility for future requirements. Ironically the extra complexity of all of these elements usually made the system harder to modify and thus less flexible in practice.

People often find there is some tension between "no duplication" and "reveals intention", leading to arguments about which order those rules should appear in. I've always seen their order as unimportant, since they feed off each other in refining the code. Adding duplication to increase clarity is often papering over a problem, when it would be better to solve it. [4]

What I like about these rules is that they are very simple to remember, yet following them improves code in any language or programming paradigm that I've worked with. They are an example of Kent's skill in finding principles that are generally applicable and yet concrete enough to shape my actions.

At the time there was a lot of “design is subjective”, “design is a matter of taste” bullshit going around. I disagreed. There are better and worse designs. These criteria aren’t perfect, but they serve to sort out some of the obvious crap and (importantly) you can evaluate them right now. The real criteria for quality of design, “minimizes cost (including the cost of delay) and maximizes benefit over the lifetime of the software,” can only be evaluated post hoc, and even then any evaluation will be subject to a large bag full of cognitive biases. The four rules are generally predictive.

-- Kent Beck

Further Reading

There are many expressions of these rules out there; here are a few that I think are worth exploring:


Kent reviewed this post and sent me some very helpful feedback, much of which I appropriated into the text.


1: Authoritative Formulation

There are many expressions of the four rules out there; Kent stated them in lots of media, and plenty of other people have liked them and phrased them their own way. So you'll see plenty of descriptions of the rules, but each author has their own twist - as do I.

If you want an authoritative formulation from the man himself, probably your best bet is from the first edition of The White Book (p 57) in the section that outlines the XP practice of Simple Design.

  • Runs all the tests
  • Has no duplicated logic. Be wary of hidden duplication like parallel class hierarchies
  • States every intention important to the programmer
  • Has the fewest possible classes and methods

(Just to be confusing, there's another formulation on page 109 that omits "runs all the tests" and splits "fewest classes" and "fewest methods" over the last two rules. I recall this was an earlier formulation that Kent improved on while writing the White Book.)

2: DRY stands for Don't Repeat Yourself, and comes from The Pragmatic Programmer. SPOT stands for Single Point Of Truth.

3: This principle was the basis of my first design column for IEEE Software.

4: When reviewing this post, Kent said "In the rare case they are in conflict (in tests are the only examples I can recall), empathy wins over some strictly technical metric." I like his point about empathy - it reminds us that when writing code we should always be thinking of the reader.



database · big data


Data Lake is a term that's appeared in this decade to describe an important component of the data analytics pipeline in the world of Big Data. The idea is to have a single store for all of the raw data that anyone in an organization might need to analyze. Commonly people use Hadoop to work on the data in the lake, but the concept is broader than just Hadoop.

When I hear about a single point to pull together all the data an organization wants to analyze, I immediately think of the notion of the data warehouse (and data mart [1]). But there is a vital distinction between the data lake and the data warehouse. The data lake stores raw data, in whatever form the data source provides. There are no assumptions about the schema of the data; each data source can use whatever schema it likes. It's up to the consumers of that data to make sense of it for their own purposes.

This is an important step: many data warehouse initiatives didn't get very far because of schema problems. Data warehouses tend to go with the notion of a single schema for all analytics needs, but I've taken the view that a single unified data model is impractical for anything but the smallest organizations. To model even a slightly complex domain you need multiple BoundedContexts, each with its own data model. In analytics terms, you need each analytics user to use a model that makes sense for the analysis they are doing. Shifting to storing only raw data firmly puts the responsibility on the data analyst.
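A toy sketch of that division of responsibility (the sources and record formats here are invented): the lake stores each source's records verbatim, and a consumer imposes its own interpretation on each raw form.

```python
import json

lake = []  # stand-in for HDFS or another bulk store

def ingest(source, raw_record):
    # store the record exactly as the source provided it - no shared schema
    lake.append({"source": source, "raw": raw_record})

# two sources with incompatible formats happily coexist in the lake
ingest("crm", json.dumps({"customer": "Ada", "ltv": 1200}))
ingest("weblog", "2014-01-05T10:00:00Z,Ada,page_view")

def customer_names():
    """A consumer makes sense of each source's raw form for its own purposes."""
    names = set()
    for item in lake:
        if item["source"] == "crm":
            names.add(json.loads(item["raw"])["customer"])
        elif item["source"] == "weblog":
            names.add(item["raw"].split(",")[1])
    return names
```

Nothing is normalized on the way in; the interpretive work lives entirely with the analyst's code.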

Another source of problems for data warehouse initiatives is ensuring data quality. Trying to get an authoritative single source for data requires lots of analysis of how the data is acquired and used by different systems. System A may be good for some data, and system B for another. You run into rules where system A is better for more recent orders but system B is better for orders of a month or more ago, unless returns are involved. On top of this, data quality is often a subjective issue: different analyses have different tolerances for data quality issues, or even a different notion of what good quality is.

This leads to a common criticism of the data lake - that it's just a dumping ground for data of widely varying quality, better named a data swamp. The criticism is both valid and irrelevant. The hot title of the New Analytics is "Data Scientist". Although it's a much-abused title, many of these folks do have a solid background in science. And any serious scientist knows all about data quality problems. Consider what you might think is the simple matter of analyzing temperature readings over time. You have to take into account that some weather stations are relocated in ways that may subtly affect the readings, anomalies due to problems in equipment, and missing periods when the sensors aren't working. Many of the sophisticated statistical techniques out there were created to sort out data quality problems. Scientists are always skeptical about data quality and are used to dealing with questionable data. So for them the lake is important because they get to work with raw data and can be deliberate about applying techniques to make sense of it, rather than relying on some opaque data cleansing mechanism that probably does more harm than good.

Data warehouses usually would not just cleanse but also aggregate the data into a form that made it easier to analyze. But scientists tend to object to this too, because aggregation implies throwing away data. The data lake should contain all the data, because you don't know what people will find valuable, either today or in a couple of years' time.

One of my colleagues illustrated this thinking with a recent example: "We were trying to compare our automated predictive models versus manual forecasts made by the company's contract managers. To do this we decided to train our models on year old data and compare our predictions to the ones made by managers at that time. Since we now know the correct results, this should be a fair test of accuracy. When we started to do this, it appeared that the manager's predictions were horrible and that even our simple models, made in just two weeks, were crushing them. We suspected that this out-performance was too good to be true. After a lot of testing and digging we discovered that the time stamps associated with those manager predictions were incorrect. They were being modified by some end-of-month processing report. So in short, these values in the data warehouse were useless; we feared that we would have no way of performing this comparison. After more digging we found that these reports had been stored and so we could extract the real forecasts made at that time. (We're crushing them again but it's taken many months to get there)."

The complexity of this raw data means that there is room for something that curates the data into a more manageable structure (as well as reducing the considerable volume of data). The data lake shouldn't be accessed directly very much. Because the data is raw, you need a lot of skill to make any sense of it. Relatively few people work in the data lake; as they uncover generally useful views of data in the lake, they can create a number of data marts, each of which has a specific model for a single bounded context. A larger number of downstream users can then treat these lakeshore marts as an authoritative source for that context.
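As a hedged illustration (the raw events and the mart's model are invented for this sketch), a lakeshore mart condenses event-level lake data into a model that serves one bounded context:

```python
from collections import defaultdict

# raw, event-level data as it might sit in the lake
raw_events = [
    {"user": "ada", "ts": "2015-01-01T09:00Z", "page": "/home"},
    {"user": "ada", "ts": "2015-01-01T09:05Z", "page": "/pricing"},
    {"user": "bob", "ts": "2015-01-02T11:00Z", "page": "/home"},
]

def build_engagement_mart(events):
    """Condense raw events into a per-user model for one bounded context."""
    mart = defaultdict(lambda: {"visits": 0, "last_seen": ""})
    for e in events:
        row = mart[e["user"]]
        row["visits"] += 1
        row["last_seen"] = max(row["last_seen"], e["ts"])
    return dict(mart)

engagement_mart = build_engagement_mart(raw_events)
```

Downstream users query the small, curated mart; only the lake workers deal with the full raw event stream.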

So far I've described the data lake as a singular point for integrating data across an enterprise, but I should mention that isn't how it was originally intended. The term was coined by James Dixon in 2010; when he did so, he intended a data lake to be used for a single data source, with multiple data sources instead forming a "water garden". Despite its original formulation, the prevalent usage now is to treat a data lake as combining many sources. [2]

You should use a data lake for analytic purposes, not for collaboration between operational systems. When operational systems collaborate they should do this through services designed for the purpose, such as RESTful HTTP calls, or asynchronous messaging. The lake is too complex to trawl for operational communication. It may be that analysis of the lake can lead to new operational communication routes, but these should be built directly rather than through the lake.

It is important that all data put in the lake should have a clear provenance in place and time. Every data item should have a clear trace to what system it came from and when the data was produced. The data lake thus contains a historical record. This might come from feeding Domain Events into the lake, a natural fit with Event Sourced systems. But it could also come from systems doing a regular dump of current state into the lake - an approach that's valuable when the source system doesn't have any temporal capabilities but you want a temporal analysis of its data. A consequence of this is that data put into the lake is immutable; an observation, once stated, cannot be removed (although it may be refuted later). You should also expect ContradictoryObservations.
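A sketch of what that might look like (the record layout is my invention, not a standard): each observation carries its provenance, and corrections are appended as refutations rather than overwriting history.

```python
lake = []  # append-only: nothing is ever removed or overwritten

def observe(source, produced_at, payload):
    # every observation records which system it came from and when
    lake.append({"source": source, "produced_at": produced_at,
                 "payload": payload})

observe("billing", "2015-01-10", {"invoice": 42, "total": 100})
# a correction arrives: append a refuting observation, don't delete the first
observe("billing", "2015-01-12",
        {"invoice": 42, "total": 90, "refutes": "2015-01-10"})

def current_view(invoice):
    """The latest observation wins, but the full history stays queryable."""
    matches = [o for o in lake if o["payload"].get("invoice") == invoice]
    return max(matches, key=lambda o: o["produced_at"])["payload"]
```

The refuted observation stays in the lake, so a temporal analysis can still see what the billing system believed on any given date.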

The data lake is schemaless; it's up to the source systems to decide what schema to use and for consumers to work out how to deal with the resulting chaos. Furthermore, the source systems are free to change their inflow data schemas at will, and again the consumers have to cope. Obviously we prefer such changes to be as minimally disruptive as possible, but scientists prefer messy data to losing data.

Data lakes are going to be very large, and much of the storage is oriented around the notion of a large schemaless structure - which is why Hadoop and HDFS are usually the technologies people use for data lakes. One of the vital tasks of the lakeshore marts is to reduce the amount of data you need to deal with, so that analysts don't have to wade through the full volume of the lake.

The Data Lake's appetite for a deluge of raw data raises awkward questions about privacy and security. The principle of Datensparsamkeit is very much in tension with the data scientists' desire to capture all data now. A data lake makes a tempting target for crackers, who might love to siphon choice bits into the public oceans. Restricting direct lake access to a small data science group may reduce this threat, but doesn't avoid the question of how that group is kept accountable for the privacy of the data they sail on.


1: The usual distinction is that a data mart is for a single department in an organization, while a data warehouse integrates across all departments. Opinions differ on whether a data warehouse should be the union of all data marts or whether a data mart is a logical subset (view) of data in the data warehouse.

2: In a later blog post, Dixon emphasizes the lake versus water garden distinction, but (in the comments) says that it is a minor change. For me the key point is that the lake stores a large body of data in its natural state, the number of feeder streams isn't a big deal.


My thanks to Anand Krishnaswamy, Danilo Sato, David Johnston, Derek Hammer, Duncan Cragg, Jonny Leroy, Ken Collier, Shripad Agashe, and Steven Lowe for discussing drafts of this post on our internal mailing lists





I've often been involved in discussions about deliberately increasing the diversity of a group of people. The most common case in software is increasing the proportion of women. Two examples are in hiring and conference speaker rosters where we discuss trying to get the proportion of women to some level that's higher than usual. A common argument against pushing for greater diversity is that it will lower standards, raising the spectre of a diverse but mediocre group.

To understand why this is an illusionary concern, I like to consider a little thought experiment. Imagine a giant bucket that contains a hundred thousand marbles. You know that 10% of these marbles have a special sparkle that you can see when you carefully examine them. You also know that 80% of these marbles are blue and 20% pink, and that sparkles exist evenly across both colors [1]. If you were asked to pick out ten sparkly marbles, you know you could confidently go through some and pick them out. So now imagine you're told to pick out ten marbles such that five were blue and five were pink.

I don't think you would react by saying "that's impossible". After all, there are two thousand pink sparkly marbles in there; getting five of them is not beyond the wit of even a man. Similarly in software: there may be fewer women in the software business, but there are still enough good women to fill the roles a company or a conference needs.

The point of the marbles analogy, however, is to focus on the real consequence of the demand for 50:50 split. Yes it's possible to find the appropriate marbles, but the downside is that it takes longer. [2]
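A small simulation, using the invented numbers from the thought experiment, makes the point concrete: five sparkly marbles of either color are findable, but the rarer color takes several times as many draws.

```python
import random

def draws_until_five_sparkly(color, rng):
    """Count draws until five sparkly marbles of the given color turn up."""
    draws, found = 0, 0
    while found < 5:
        draws += 1
        marble_color = "blue" if rng.random() < 0.8 else "pink"
        sparkly = rng.random() < 0.1
        if sparkly and marble_color == color:
            found += 1
    return draws

rng = random.Random(42)
trials = 200
avg_blue = sum(draws_until_five_sparkly("blue", rng)
               for _ in range(trials)) / trials
avg_pink = sum(draws_until_five_sparkly("pink", rng)
               for _ in range(trials)) / trials
# analytically, blue averages about 5/0.08 ≈ 62 draws, pink about 5/0.02 ≈ 250
```

Both searches succeed every time; the only difference is the effort spent digging.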

That notion applies to finding the right people too. Getting a better than base proportion of women isn't impossible, but it does require more work, often much more work. This extra effort reinforces the rarity: if people have difficulty finding good people as it is, it takes determined effort to spend the extra time to get a higher proportion of the minority group - even if you are only trying to raise the proportion of women up to 30%, rather than a full 50%.

In recent years we've made increasing our diversity a high priority at ThoughtWorks. This has led to a lot of effort trying to go to where we are more likely to run into the talented women we are seeking: women's colleges, women-in-IT groups and conferences. We encourage our women to speak at conferences, which helps let other women know we value a diverse workforce.

When interviewing, we make a point of ensuring there are women involved. This gives women candidates someone to relate to, and someone to ask questions which are often difficult to ask men. It's also vital to have women interview men, since we've found that women often spot problematic behaviors that men miss as we just don't have the experiences of subtle discriminations. Getting a diverse group of people inside the company isn't just a matter of recruiting, it also means paying a lot of attention to the environment we have, to try to ensure we don't have the same AlienatingAtmosphere that much of the industry exhibits. [3]

One argument I've heard against this approach is that if everyone did this, then we would run out of pink, sparkly marbles. We'll know this is something to be worried about when women are paid significantly more than men for the same work.

One anecdote that stuck in my memory was from a large, traditional company who wanted to improve the number of women in senior management positions. They didn't impose a quota on appointing women to those positions, but they did impose a quota for women on the list of candidates. (Something like: "there must be at least three credible women candidates for each post".) This candidate quota forced the company to actively seek out women candidates. The interesting point was that just doing this, with no mandate to actually appoint these women, correlated with an increased proportion of women in those positions.

For conference planning it's a similar strategy: just putting out a call for papers and saying you'd like a diverse speaker lineup isn't enough. Neither are such things as blind review of proposals (and I'm not sure that's a good idea anyway). The important thing is to seek out women and encourage them to submit ideas. Organizing conferences is hard enough work as it is, so I can sympathize with those that don't want to add to the workload, but those that do can get there. FlowCon is a good example of a conference that made this an explicit goal and did far better than the industry average (and in case you were wondering, there was no difference between men's and women's evaluation scores).

So now that we recognize that getting greater diversity is a matter of application and effort, we can ask ourselves whether the benefit is worth the cost. In a broad professional sense, I've argued that it is, because our DiversityImbalance is reducing our ability to bring the talent we need into our profession, and reducing the influence our profession needs to have on society. In addition I believe there is a moral argument to push back against long-standing wrongs faced by HistoricallyDiscriminatedAgainst groups.

Conferences have an important role to play in correcting this imbalance. The roster of speakers is, at least subconsciously, a statement of what the profession should look like. If it's all white guys like me, then that adds to the AlienatingAtmosphere that pushes women out of the profession. Therefore I believe that conferences need to strive to get an increased proportion of historically-discriminated-against speakers. We, as a profession, need to push them to do this. It also means that women have an extra burden to become visible and act as part of that better direction for us. [4]

For companies, the choice is more personal. For me, ThoughtWorks's efforts to improve its diversity are a major factor in why I've been an employee here for over a decade. I don't think it's a coincidence that ThoughtWorks is also a company that has a greater open-mindedness, and a lack of political maneuvering, than most of the companies I've consulted with over the years. I consider those attributes to be a considerable competitive advantage in attracting talented people, and providing an environment where we can collaborate effectively to do our work.

But I'm not holding ThoughtWorks up as an example of perfection. We've made a lot of progress over the decade I've been here, but we still have a long way to go. In particular we are very short of senior technical women. We've introduced a number of programs around networks, and leadership development, to help grow women to fill those gaps. But these things take time - all you have to do is look at our Technical Advisory Board to see that we are a long way from the ratio we seek.

Despite my knowledge of how far we still have to climb, I can glimpse the summit ahead. At a recent AwayDay in Atlanta I was delighted to see how many younger technical women we've managed to bring into the company. While struggling to keep my head above water as the sole male during a late night game of Dominion, I enjoyed a great feeling of hope for our future.


1: That is, 10% of blue marbles are sparkly, as are 10% of pink ones.

2: Actually, if I dig around for a while in that bucket, I find that some marbles are neither blue nor pink, but some engaging mixture of the two.

3: This is especially tricky for a company like us, where so much of our work is done in client environments, where we aren't able to exert as much of an influence as we'd like. Some of our offices have put together special training to educate both sexes on how to deal with sexist situations with clients. As a man, I feel it's important for me to know how I can be supportive; it's not something I do well, but it is something I want to learn to improve.

4: Many people find the pressure of public speaking intimidating (I've come to hate it, even with all my practice). Feeling that you're representing your entire gender or race only makes it worse.


Camila Tartari, Carol Cintra, Dani Schufeldt, Derek Hammer, Isabella Degen, Korny Sietsma, Lindy Stephens, Mridula Jayaraman, Nikki Appleby, Rebecca Parsons, Sarah Taraporewalla, Stefanie Tinder, and Suzi Edwards-Alexander commented on drafts of this article.



process theory · evolutionary design · application architecture


You're sitting in a meeting, contemplating the code that your team has been working on for the last couple of years. You've come to the decision that the best thing you can do now is to throw away all that code, and rebuild on a totally new architecture. How does that make you feel about that doomed code, about the time you spent working on it, about the decisions you made all that time ago?

For many people throwing away a code base is a sign of failure, perhaps understandable given the inherent exploratory nature of software development, but still failure.

But often the best code you can write now is code you'll discard in a couple of years' time.

Often we think of great code as long-lived software. I'm writing this article in an editor that dates back to the 1980s. Much thinking on software architecture is about how to facilitate that kind of longevity. Yet success can also be built on top of code long since sent to /dev/null.

Consider the story of eBay, one of the web's most successful large businesses. It started as a set of Perl scripts built over a weekend in 1995. In 1997 it was all torn down and replaced with a system written in C++ on top of the Windows tools of the time. Then in 2002 the application was rewritten again in Java. Were these early versions an error because they were replaced? Hardly. eBay is one of the great successes of the web so far, but much of that success was built on the discarded software of the 90's. Like many successful websites, eBay has seen exponential growth - and exponential growth isn't kind to architectural decisions. The right architecture to support 1996-eBay isn't going to be the right architecture for 2006-eBay. The 1996 one won't handle 2006's load, but the 2006 version is too complex to build, maintain, and evolve for the needs of 1996.

Indeed this guideline can be baked into an organization's way of working. At Google, the explicit rule is to design a system for ten times its current needs, with the implication that if the needs exceed an order of magnitude then it's often better to throw away and replace from scratch [1]. It's common for subsystems to be redesigned and thrown away every few years.

Indeed it's a common pattern to see people coming into a maturing code base denigrating its lack of performance or scalability. But often in the early period of a software system you're less sure of what it really needs to do, so it's important to put more focus on flexibility for changing features rather than performance or availability. Later on you need to switch priorities as you get more users, but getting too many users on an unperformant code base is usually a better problem to have than its inverse. Jeff Atwood coined the phrase "performance is a feature", which some people read as saying that performance is always priority number one. But any feature is something you have to choose versus other features. That's not to say you should ignore things like performance - software can get sufficiently slow and unreliable to kill a business - but the team has to make the difficult trade-offs with other needs. Often these are more business decisions than technology ones.

So what does it mean to deliberately choose a sacrificial architecture? Essentially it means accepting now that in a few years time you'll (hopefully) need to throw away what you're currently building. This can mean accepting limits to the cross-functional needs of what you're putting together. It can mean thinking now about things that can make it easier to replace when the time comes - software designers rarely think about how to design their creation to support its graceful replacement. It also means recognizing that software that's thrown away in a relatively short time can still deliver plenty of value.

Knowing your architecture is sacrificial doesn't mean abandoning the internal quality of the software. Usually sacrificing internal quality will bite you more rapidly than the replacement time, unless you're already working on retiring the code base. Good modularity is a vital part of a healthy code base, and modularity is usually a big help when replacing a system. Indeed one of the best things to do with an early version of a system is to explore what the best modular structure should be so that you can build on that knowledge for the replacement. While it can be reasonable to sacrifice an entire system in its early days, as a system grows it's more effective to sacrifice individual modules - which you can only do if you have good module boundaries.

One thing that's easily missed when it comes to handling this problem is accounting. Yes, really — we've run into situations where people have been reluctant to replace a clearly unviable system because of the way they were amortizing the codebase. This is more likely to be an issue for big enterprises, but don't forget to check it if you live in that world.

You can also apply this principle to features within an existing system. If you're building a new feature it's often wise to make it available to only a subset of your users, so you can get feedback on whether it's a good idea. To do that you may initially build it in a sacrificial way, so that you don't invest the full effort on a feature that you find isn't worth full deployment.
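One common way to expose a sacrificial feature to only a subset of users is a percentage-based feature toggle. Here's a minimal sketch in Python; the function name, feature names, and rollout mechanism are all hypothetical, not a reference to any particular feature-flag product:

```python
import hashlib


def feature_enabled(user_id: str, feature: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing user id + feature name gives a stable bucket in 0..99,
    so the same user always sees the same variant while the flag
    is active, and raising the percentage only adds users.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent


# Expose the experimental feature to roughly 10% of users.
if feature_enabled("user-42", "new-checkout", 10):
    pass  # serve the sacrificial implementation
else:
    pass  # serve the existing path
```

The deterministic hash matters: random sampling per request would flip users between variants, making the feedback you're gathering much harder to interpret.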

Modular replaceability is a principal argument in favor of a microservices architecture, but I'm wary of recommending that for a sacrificial architecture. Microservices imply distribution and asynchrony, which are both complexity boosters. I've already run into a couple of projects that took the microservice path without really needing to, seriously slowing down their feature pipeline as a result. So a monolith is often a good sacrificial architecture, with microservices introduced later to gradually pull it apart.

The team that writes the sacrificial architecture is the team that decides it's time to sacrifice it. This is a different case from a new team coming in, hating the existing code, and wanting to rewrite it. It's easy to hate code you didn't write, without an understanding of the context in which it was written. Knowingly sacrificing your own code is a very different dynamic, and knowing you're going to sacrifice the code you're about to write is a useful variant on that.


Conversations with Randy Shoup encouraged and helped me formulate this post, in particular describing the history of eBay (and some similar stories from Google). Jonny Leroy pointed out the accounting issue. Kief Morris, Jason Yip, Mahendra Kariya, Jessica Kerr, Rahul Jain, Andrew Kiellor, Fabio Pereira, Pramod Sadalage, Jen Smith, Charles Haynes, Scott Robinson and Paul Hammant provided useful comments.


1: As Jeff Dean puts it, "design for ~10X growth, but plan to rewrite before ~100X".





As I talk to people about using a microservices architectural style I hear a lot of optimism. Developers enjoy working with smaller units and have expectations of better modularity than with monoliths. But as with any architectural decision there are trade-offs. In particular with microservices there are serious consequences for operations, who now have to handle an ecosystem of small services rather than a single, well-defined monolith. Consequently if you don't have certain baseline competencies, you shouldn't consider using the microservice style.

Rapid provisioning: you should be able to fire up a new server in a matter of hours. Naturally this fits in with CloudComputing, but it's also something that can be done without a full cloud service. To be able to do such rapid provisioning, you'll need a lot of automation - it may not have to be fully automated to start with, but to do serious microservices later it will need to get that way.

Basic monitoring: with many loosely-coupled services collaborating in production, things are bound to go wrong in ways that are difficult to detect in test environments. As a result it's essential that a monitoring regime is in place to detect serious problems quickly. The baseline here is detecting technical issues (counting errors, service availability, etc) but it's also worth monitoring business issues (such as detecting a drop in orders). If a sudden problem appears then you need to ensure you can roll back quickly, hence…

Rapid application deployment: with many services to manage, you need to be able to quickly deploy them, both to test environments and to production. Usually this will involve a DeploymentPipeline that can execute in no more than a couple of hours. Some manual intervention is alright in the early stages, but you'll be looking to fully automate it soon.

These capabilities imply an important organizational shift - close collaboration between developers and operations: the DevOps culture. This collaboration is needed to ensure that provisioning and deployment can be done rapidly; it's also important to ensure you can react quickly when your monitoring indicates a problem. In particular any incident management needs to involve the development team and operations, both in fixing the immediate problem and in the root-cause analysis to ensure the underlying problems are fixed.

With this kind of setup in place, you're ready for a first system using a handful of microservices. Deploy this system and use it in production, and expect to learn a lot about keeping it healthy and ensuring the devops collaboration is working well. Give yourself time to do this, learn from it, and grow more capability before you ramp up your number of services.

If you don't have these capabilities now, you should ensure you develop them so they are ready by the time you put a microservice system into production. Indeed these are capabilities that you really ought to have for monolithic systems too. While they aren't universally present across software organizations, there are very few places where they shouldn't be a high priority.

Going beyond a handful of services requires more. You'll need to trace business transactions through multiple services and automate your provisioning and deployment by fully embracing ContinuousDelivery. There's also the shift to product-centered teams that needs to be started. You'll need to organize your development environment so developers can easily swap between multiple repositories, libraries, and languages. Some of my contacts are sensing that there could be a useful MaturityModel here that can help organizations as they take on more microservice implementations - we should see more conversation on that in the next few years.


This list originated in discussions with my ThoughtWorks colleagues, particularly those who attended the microservice summit earlier this year. I then structured and finalized the list in discussion with Evan Bottcher, Thiyagu Palanisamy, Sam Newman, and James Lewis.

And as usual there were valuable comments from our internal mailing list from Chris Ford, Kief Morris, Premanand Chandrasekaran, Rebecca Parsons, Sarah Taraporewalla, and Ian Cartwright.





A maturity model is a tool that helps people assess the current effectiveness of a person or group and supports figuring out what capabilities they need to acquire next in order to improve their performance. In many circles maturity models have gained a bad reputation, but although they can easily be misused, in proper hands they can be helpful.

Maturity models are structured as a series of levels of effectiveness. It's assumed that anyone in the field will pass through the levels in sequence as they become more capable.

So a whimsical example might be that of mixology, the craft of making cocktails. We might define levels like this:

  1. Knows how to make a dozen basic drinks (eg "make me a Manhattan")
  2. Knows at least 100 recipes, can substitute ingredients (eg "make me a Vieux Carre in a bar that lacks Peychaud's")
  3. Able to come up with cocktails (either invented or recalled) with a few simple constraints on ingredients and styles (eg "make me something with sherry and tequila that's moderately sweet").

Working with a maturity model begins with assessment: determining the level at which the subject is currently performing. Once you've carried out an assessment to determine your level, you use the level above your own to prioritize what capabilities you need to learn next. This prioritization of learning is really the big benefit of using a maturity model. It's founded on the notion that if you are at level 2 in something, it's much more important to learn the things at level 3 than those at level 4. The model thus acts as a guide to what to learn, putting some structure on what would otherwise be a more complex process.

The vital point here is that the true outcome of a maturity model assessment isn't what level you are but the list of things you need to work on to improve. Your current level is merely a piece of intermediate work in order to determine that list of skills to acquire next.

Any maturity model, like any model, is a simplification: wrong but hopefully useful. Sometimes even a crude model can help you figure out what the next step is to take, but if your needed mix of capabilities varies too much in different contexts, then this form of simplification isn't likely to be worthwhile.

A maturity model may have a single dimension or several. With multiple dimensions you might be level 2 in 19th-century cocktails but level 3 in tiki drinks. Adding dimensions makes the model more nuanced, but also more complex - and much of the value of a model comes from simplification, even if it's a bit of an over-simplification.

As well as using a maturity model to prioritize learning, you can also use it to inform the investment decisions involved. A maturity model can contain generalized estimates of progress, such as "to get from level 4 to 5 usually takes around 6 months and a 25% productivity reduction". Such estimates are, of course, as crude as the model, and like any estimation you should only use it when you have a clear PurposeOfEstimation. Timing estimates can also be helpful in dealing with impatience, particularly with level changes that take many months. The model can help structure such generalizations by being applied to past work ("we've done 7 level 2-3 shifts and they took 3-7 months").

Most people I know in the software world treat maturity models with an inherent feeling of disdain, most of which you can understand by looking at the Capability Maturity Model (CMM) - the best known maturity model in the software world. The disdain for the CMM sprang from two main roots. The first was that the CMM was strongly associated with a document-heavy, plan-driven culture, one deeply at odds with the agile software community.

But the more serious problem with the CMM was the corruption of its core value by certification. Software development companies realized that they could gain a competitive advantage by having themselves certified at a higher level than their competitors - this led to a whole world of often-bogus certification levels, levels that lacked a CertificationCompetenceCorrelation. Using a maturity model to say one group is better than another is a classic example of ruining an informational metric by incentivizing it. My feeling is that anyone doing an assessment should never publicize the current level outside the group they are working with.

It may be that this tendency to compare levels to judge worth is a fundamentally destructive feature of a maturity model, one that will always undermine any positive value that comes from it. Certainly it feels too easy to see maturity models as catnip for consultants looking to sell performance improvement efforts - which is why there's always lots of pushback on our internal mailing list whenever someone suggests a maturity model to add some structure to our consulting work.

In an email discussion over a draft of this article, Jason Yip observed a more fundamental problem with maturity models:

"One of my main annoyances with most maturity models is not so much that they're simplified and linear, but more that they're suggesting a poor learning order, usually reflecting what's easier to what's harder rather than you should typically learn following this path, which may start with some difficult things.

In other words, the maturity model conflates level of effectiveness with learning path"

Jason's observation doesn't mean maturity models are never a good idea, but it does raise extra questions when assessing their fitness. Whenever you use any kind of model to understand a situation and draw inferences, you need to first ensure that the model is a good fit to the circumstances. If the model doesn't fit, that doesn't mean it's a bad model, but it does mean it's inappropriate for this situation. Too often, people don't put enough care into evaluating the fitness of a model for a situation before they leap to using it.


Jeff Xiong reminded me that a model can be helpful for investment decisions. Sriram Narayan and Jason Yip contributed some helpful feedback.