March 30, 2026

On Complexity: The So-Called Irreducible Evil

Modern software systems are veritable Rube Goldberg contraptions.  Layers upon layers upon layers of code, libraries, and frameworks, thousands of modules, components, and containers, all working in an impressively orchestrated symphony of... mostly delivering trivial things like "take my credit card" or "here's another picture of a cute cat".

The jarring disconnect between the ever-increasing complexity of the hidden technological plumbing and the mundanity of what that plumbing does for its users raises the $64 trillion question: where does all that complexity come from, and can something be done to control it?

Back in 1986, Fred Brooks in his classic No Silver Bullet introduced the notions of essential and accidental complexity, arguing that essential complexity - i.e. complexity inherent in the nature of the problem being solved - is irreducible, while accidental complexity can be reduced with better and more careful design.   On its own, this statement is nearly tautological (a careful reader may note that the very definition of "essential" implied in the distinction is circular: what we cannot reduce must be essential).

Like many novel takes in computer science, this one got its cult following, with the cultish aspect being the overinterpretation of the argument to mean simply that complexity is irreducible and can only be moved from one place to another. This would imply that modern software tooling has already eliminated most accidental complexity by providing programmers with abstracted building blocks.

This view is the starting point of the rather insightful essay by Ivan Turkovic, "Complexity Is Never Eliminated. It Is Only Relocated."

This view is also simply wrong.

Firstly, this treats complexity as a single large lump, whereas in reality a large system composed of multiple well-delineated and conceptually simple modules is much easier to extend, debug, and maintain than a monolithic bezoar, even if the combined code size is the same.  Modularization is the mainstay of engineering.

The general error of treating complexity as an objective metric lies in the fact that the human ability to understand complex concepts is severely limited.  It is not complexity defined in some mathematical way that matters: what matters is the human ability to gain a useful understanding of the concepts and the implementation.  This ability declines sharply and non-linearly as the size of a module grows.

There is a useful definition of essential (objective) complexity that is not circular: Kolmogorov complexity (aka algorithmic complexity), defined simply as the length of the shortest program among all programs equivalent to the one being analyzed. Like any mathematical abstraction, this is only good for dealing with spherical cows in a vacuum.  Real-world program quality cannot be reduced to code size alone, and other criteria (such as speed of execution, memory required, fault tolerance, etc.) are often more important.  This means that essential complexity is not limited to the code itself - it depends on the computer and the performance requirements.
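While true Kolmogorov complexity is not directly measurable, compressed size is a commonly used computable upper bound on it (a decompressor plus the compressed bytes is itself a program that reproduces the original text).  A minimal sketch of the idea, with made-up example "programs":

```python
import zlib

def kc_upper_bound(text: str) -> int:
    # Compressed size serves as a crude, computable *upper bound*
    # on algorithmic complexity; it is a proxy, not the real thing.
    return len(zlib.compress(text.encode(), level=9))

# Two behaviorally equivalent "programs": the same output written out
# literally versus generated by a loop.
literal = "print(" + ", ".join(str(i) for i in range(1000)) + ")"
looped = "print(*range(1000))"

# The literal version is far longer, yet the short equivalent program
# witnesses that its essential complexity is much lower than its raw
# size suggests.
print(len(literal), kc_upper_bound(literal), len(looped))
```

Note that even the compressed size of the literal program overestimates its essential complexity: the hand-written loop is shorter still, which is exactly why the true minimum is so elusive.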

This said, Kolmogorov complexity still offers some insight into the nature of essential complexity in real-life systems: reducing accidental complexity can be seen as an optimization problem.  However, a procedure for finding algorithmic complexity is, in general, uncomputable.   This means that it is impossible (in the general case) to determine how much of our program's complexity is accidental and how much is irreducibly essential; even trying all possible programs up to the length of the one we are analyzing, on all possible inputs, won't give us an answer.

Another interesting feature of algorithmic complexity is its subadditivity: the algorithmic complexity of a combination of two programs does not exceed the sum of the complexities of those programs alone. (Informally, this is easy to see by observing that optimizing two programs together can eliminate redundancies such as code common to both.)  This means that the essential complexity of a monolithic program is no larger (and usually less) than the essential complexity of a modularized one - which flies completely in the face of actual engineering experience.
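Subadditivity is easy to demonstrate with compressed size standing in for algorithmic complexity.  In this sketch (the modules and their shared helper are invented for illustration), compressing two modules together costs less than compressing them separately, because the shared code is deduplicated:

```python
import zlib

def compressed_size(text: str) -> int:
    # crude stand-in for algorithmic complexity (an upper bound)
    return len(zlib.compress(text.encode(), level=9))

# Two hypothetical modules sharing a sizable chunk of common code.
shared = "def helper(x):\n    return x * x + 1\n" * 40
mod_a = shared + "print(helper(2))\n"
mod_b = shared + "print(helper(3))\n"

separate = compressed_size(mod_a) + compressed_size(mod_b)
combined = compressed_size(mod_a + mod_b)

# Combining wins: the second copy of the shared code compresses
# to almost nothing once the first copy is in the window.
print(separate, combined)
```

This is precisely the counterintuitive property in question: by this objective measure, gluing everything into one blob never costs more - which is why objective complexity alone cannot explain why modularization helps.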

Therefore this artificial division between essential and accidental complexity is rather useless as a guide to understanding the complexity of real-life software systems.  What we need to consider instead is perceived complexity: objective complexity of some sort, adjusted by a non-linear function, rapidly increasing to infinity, which quantifies the amount of effort needed to comprehend a program of a given objective complexity (let's call it a cognitive effort function, CE(oc)).

The perceived complexity of a program decomposed into modules will be

     PC = sum_i(CE(oc_i)) + CE(oc_glue)

where oc_i is the objective complexity of module i and oc_glue is the objective complexity of the glue code combining the modules.   Note that this definition can be applied recursively if we decompose the modules further.

This approach matches reality much better: a program whose objective complexity lies in the region where CE is rising rapidly (or is infinite - i.e. beyond human capacity to comprehend) will, when decomposed into simpler modules, have a much lower PC.  That said, slicing too finely increases the complexity of the glue, so there is an optimal module size.  A good decomposition also reduces cross-dependencies and links between modules, thus simplifying the glue.
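The trade-off above can be sketched numerically.  This toy model uses a quadratic CE - a purely illustrative choice for "superlinear cognitive effort" - and made-up module and glue sizes; only the shape of the result matters:

```python
def CE(oc: float) -> float:
    # Hypothetical cognitive effort function: superlinear in
    # objective complexity.  The quadratic is an arbitrary choice.
    return oc ** 2

def perceived_complexity(module_sizes, glue_size):
    # PC = sum(CE(oc_i)) + CE(oc_glue)
    return sum(CE(s) for s in module_sizes) + CE(glue_size)

# One program of objective complexity 100, decomposed three ways;
# more modules require more glue (glue sizes are made up).
monolith = perceived_complexity([100], 0)        # 10000
moderate = perceived_complexity([25] * 4, 10)    # 4*625 + 100 = 2600
oversliced = perceived_complexity([2] * 50, 60)  # 50*4 + 3600 = 3800

print(monolith, moderate, oversliced)
```

A moderate decomposition beats both the monolith and the over-sliced version, reproducing the "optimal module size" observation - without changing the total objective complexity much at all.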

To paraphrase: good architecture decreases perceived complexity.  This is not new and has been a generally accepted engineering principle for a century now.

The second problem with the idea of somehow irreducible complexity is duplication.  Complexity would be irreducible only if your program never needed the same functionality in more than one place.  If it does, you can abstract that functionality into a procedure or module, eliminating the complexity of the duplicated code at the cost of adding the complexity of an abstraction.
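A minimal sketch of this trade, with invented functions: the same formatting logic appears twice, then is factored into one helper.  The duplication disappears; in exchange the reader must learn one extra abstraction:

```python
# Before: the same normalize-and-label logic duplicated in two places.
def report_user(name: str, age: int) -> str:
    label = name.strip().title()
    return f"{label}: {age}"

def report_city(name: str, population: int) -> str:
    label = name.strip().title()
    return f"{label}: {population}"

# After: the shared logic abstracted into a single helper.
def labeled(name: str, value: int) -> str:
    label = name.strip().title()
    return f"{label}: {value}"
```

The behavior is identical, but a change to the labeling rule now happens in one place instead of two - the cognitive saving compounds with every additional call site.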

The elephant in the room, however, is duplication across different programs.  The reason why libraries and compilers are so effective in reducing the perceived complexity of systems is that they eliminate duplication of cognitive effort.  The burden of the complexity of a development tool chain is amortized across all users of that tool chain.  Given the large population of programmers, the cost of compiler complexity to an individual developer is rather minimal (and comes mostly as the cost of learning how to use the tool chain).

Thus, the diagnosis of the complexity problem as an inability to eliminate essential complexity is wrong, and so is the contention that software tools have already brought accidental complexity down to manageable levels.

While we have some notable successes (such as the use of high-level languages and the reuse of libraries), actual programs are still full of boilerplate and repetitive (if slightly varied) code.  It gets even worse across different software projects: it seems that every project reimplements the same set of functions not provided by the technology stack of the day.

The low level of reuse results from the phenomenon of the abstraction level ceiling, as we discussed here.

The rest of Ivan Turkovic's essay is spot on.  The real reason why we do not see much programming productivity increase from using LLMs to generate code is that LLMs do not (and cannot) raise the abstraction level.  They generate code (which still needs to be reviewed by humans) at the same level of abstraction as the traditional development tool chains.  LLMs do not reduce cognitive load - you still need to understand what the generated code does, simply because LLMs are by their very nature probabilistic - they are statistical predictors, not logical machines - and will generate buggy code no matter what.

LLMs are not a silver bullet.  Nemesis is undefeated (and not even fazed much).

And if we look at the software industry as a whole, wide adoption of LLMs will simply increase overall complexity with randomly varied repetitions of the same patterns - generated at industrial scale, and with very lax supervision.  Prepare for the brown wave of even shittier software, folks.