Accidental Complexity is killing your testing efforts and IT budget
You’re working hard to transform your ways of working, with a range of different goals. Common aims of digital transformations include:
- To become more agile;
- To deliver faster through DevOps;
- To migrate all of your systems to the cloud;
- To enable regular change.
Whatever your desired outcome, there’s one common problem that most organisations (if not all) ignore. Overlooking it means the initiative will fail, fall behind schedule, overrun its budget, or at best be severely hampered going forward.
This perennial (and perennially ignored) problem is “accidental complexity”: both the accidental complexity already baked into the way you make changes today, and the accidental complexity you’ll introduce tomorrow through the way you choose to make change in future.
“While essential complexity is inherent and unavoidable, accidental complexity is caused by the chosen approach to solve the problem.”
Hugo Sereno Ferreira, “Incomplete by Design: Thoughts on Agile Architectures” (Agile Portugal: 2010).
Organisations rarely have the opportunity to start fresh when it comes to IT systems. Any agile transformation — which is fundamentally a move to small, iterative, emergent change — is completed within brown-field architectures. These inevitably have accidental complexity built-in.
This article sets out different symptoms of accidental complexity, discussing how they derail transformation initiatives. My previous article then offers inspiration for how you can tackle this accidental complexity.
Too many UI tests
Because interfaces aren’t particularly well understood or documented, organisations fall back on creating tests that focus on the user interface. This over-focus on UI testing is inefficient and costly, and undermines our ability to test early and iteratively. It also tends towards low overall coverage, leaving systems exposed to bugs.
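The contrast can be illustrated with a toy sketch. The business rule and the `apply_discount` function below are entirely hypothetical; the point is that a rule checked directly at the interface or service layer runs in-process in milliseconds and can be exercised from the first iteration, whereas the equivalent check driven through a browser needs a deployed stack, test data and a UI that hasn’t changed shape since the test was written:

```python
# A minimal sketch, assuming a hypothetical service-layer rule
# "apply_discount". An interface-level check like this runs instantly and
# survives UI redesigns; a UI test verifying the same rule must drive a
# browser through every screen on the journey.

def apply_discount(total: float, customer_type: str) -> float:
    """Toy business rule: existing customers receive 10% off."""
    return round(total * 0.9, 2) if customer_type == "existing" else total

# Interface-level checks exercise the rule directly:
assert apply_discount(100.0, "existing") == 90.0
assert apply_discount(100.0, "new") == 100.0
```

A handful of UI journeys are still worth keeping to confirm the screens hang together; the waste comes from using them to verify business logic that sits below the UI.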
Combinatorial explosions — subjective coverage
Because our understanding of the system sits at the e2e user-flow level, we face a multiplying explosion of business logic.
This “combinatorial explosion” is complex beyond human comprehension, and impossible to test exhaustively within any reasonable timeframe. In this scenario, the only way to achieve a valuable outcome in testing is to apply a risk-based approach. Yet this is rarely recognised, or risk is based on an SME’s opinion of what constitutes “enough” testing.
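To see why exhaustive e2e coverage is unattainable, and what a structured alternative looks like, consider a small sketch. The journey layers and their variants below are invented for illustration, and the greedy all-pairs reduction is one well-known risk-based technique (pairwise testing), offered here as an example rather than as the approach any particular organisation uses:

```python
from itertools import combinations, product

# Hypothetical e2e journey: each layer has a handful of variants.
# All names and numbers are invented for illustration.
layers = {
    "channel":  ["web", "mobile", "branch"],
    "product":  ["current", "savings", "loan", "mortgage"],
    "customer": ["new", "existing", "delinquent"],
    "payment":  ["card", "direct_debit", "transfer"],
}

keys = list(layers)
all_combos = list(product(*layers.values()))
exhaustive = len(all_combos)  # 3 * 4 * 3 * 3 = 108 full journeys

def pairs(combo):
    """Every (layer, value) pairing exercised by one e2e journey."""
    return set(combinations(zip(keys, combo), 2))

# Risk-based alternative: cover every *pair* of variant values rather than
# every full combination, via a simple greedy all-pairs selection.
uncovered = set().union(*(pairs(c) for c in all_combos))
suite = []
while uncovered:
    best = max(all_combos, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(f"exhaustive journeys: {exhaustive}, pairwise suite: {len(suite)}")
```

Even in this tiny example the pairwise suite is a small fraction of the exhaustive set, and every additional layer multiplies the exhaustive figure while barely moving the pairwise one. With realistic numbers of systems and variants, the exhaustive count quickly exceeds anything a team could execute, which is why an explicit coverage model beats an SME’s gut feel about what is “enough”.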
Bloated Regression test pack with lots of duplication
This combinatorial explosion multiplies complexity in your testing, but also in your ability to understand your systems. The problem simply becomes too big to comprehend. We are all taught to break a problem down into smaller parts, yet this lesson seems to elude many test approaches.
Huge data requirements
The large volume of tests needed to traverse the multiple systems in e2e journeys proliferates the demand for test data. Test data becomes embroiled in complexity, not just because the data required for a test isn’t well understood, but also because of the systems of record in which those data items reside.
These systems are themselves under constant change, during which accidental complexity plays its part. They are often poorly understood and poorly documented. Provisioning data for testing in turn isn’t the transactional request you thought it was; it brings massive complexity, heavy manual labour, errors and bottlenecks.
Huge test environment requirements
The snowball of complexity continues to grow: because I need e2e tests, I need e2e environments.
Testing and development in turn need numerous channels, middleware and systems of record. Often these will be legacy (mainframe) systems that you can’t build overnight. In fact, organisations have often lost the ability and knowledge to build these systems from the ground up. As a consequence, the only choice left is to use the finite number of fully integrated e2e environments available in the organisation.
Yet even one of these environments will cost millions in infrastructure alone, and the same again many times over in the resources to maintain it. And that’s not the only problem: organisations have many teams making change, and they all need to test e2e. Teams queue for environments, creating a huge bottleneck that drains the organisation’s change budget.
Test drift from the System Under Test
A separation between what is being tested and how the system actually works will inevitably occur if you lack an effective means of refactoring your tests in line with changes to the system under test. Very rarely will you see a team talk about how they refactor test assets, because most don’t do it. This not only leads to test bloat; it creates outdated and invalid tests, and misalignment between your test efforts and what they claim to cover.
Organisations are much more than a structure chart
The problems discussed so far are much more systemic than testing. Accidental complexity additionally stems from organisations and their structures. Challenges include:
- Organisations are siloed and so are IT change teams. Conway’s law tells us that these silos create an architecture where interfaces are not well understood or maintained. Teams don’t talk to each other unless they have to…
- IT change creates “layering” in the understanding of how systems work. As systems grow, they become increasingly complex, with more unknowns.
The Three Ways of DevOps speak to Flow, Feedback, and Experimentation/Learning. Flow tells us to “never allow local optimization to create global degradation”. This should cause teams to rethink the way in which they approach change, but it rarely seems to [1].
This quote indicates the need for collaboration across teams. Such collaboration is blocked by the “pizza box sized” team, which carries on working in a silo, throws its work over the fence, and finds problems during large end-to-end integration testing events. However much a team works in isolation, it inevitably needs to integrate the system it is working on with the rest of the organisation.
This choice of approach might have been taken as the path of least resistance to get started. It might even have been taken in the name of experimentation, or a “start-up” initiative within a larger organisation. Whatever the rationale, it likely did not consider the accidental complexity such an approach creates, or contributes to, within the wider organisation.
You might see testing as the barometer of an organisation’s maturity in making change. If systems are testable, change is understood and observable. If quality and risk are discussed, you have a healthy ecosystem. If all of this seems too hard, you will unfortunately continue down an ever-steepening spiral of complexity.
Can’t change, won’t change
“Culture eats strategy for breakfast”
Peter Drucker
Whilst no doubt you will recognise many of the points raised in this article, the biggest challenge isn’t knowing how to change, but rather wanting to change.
Many organisations work to a certain drumbeat. The innovators go off and get the latest tech, not yet realising they are just rebuilding the same problems with a flashier tool. Then you have the laggards, stuck in a way of working from yesteryear. They will say “we did agile before it was agile” and yearn to go back to a time when there was even less documentation, and even less change or version control.
Each camp has some good practices, but both create more accidental complexity and optimise locally, not globally across the organisation. So how will you face into your organisation’s accidental complexity?
“Accidental complexity is when something can be scaled down or made less complex, not lose any value, and likely add value because it’s been simplified.”
Kristi Pelzel, “Design Theory: Accidental and Essential Complexity” (Medium: 2022)
You can read Rich’s previous article, “Going Lean on Your Testing Approach”, where he talks about how you might face into accidental complexity in your organisation, starting with getting your test approach right.
About the author: Rich Jordan is an Enterprise Solutions Architect at Curiosity Software and has spent the past 20 years within the Testing Industry — mostly in Financial Services, leading teams creating test capabilities that have won multiple awards in Testing and DevOps categories.
References
[1] Gene Kim, “The Three Ways: The Principles Underpinning DevOps” (IT Revolution: 2012).
Originally published at https://www.curiositysoftware.ie.