> The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence. (No Silver Bullet, 1986)
> Complexity is the business we are in, and complexity is what limits us. (No Silver Bullet Refired, 1995)
The reason "complexity is the business we are in" is that software doesn't live in a vacuum. Like the economy and like bureaucracy, it is built to serve a certain social purpose. Of course, there needs to be a constant struggle not to over-complicate, but it should be done with the understanding that over-simplification is just as problematic.
In software there are many things that are more complex than they need to be.
Sometimes improvement can be a subtractive practice, where a person deletes or simplifies, but this happens much less often than it should.
> “Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.”
Complexity is one of the manifestations of this problem, but that doesn't mean that complexity cannot arise due to other reasons (i.e. essential vs incidental).
The answer is more along the lines of better refactoring (or programming) tools, stronger type systems, better cross-program and whole-system static checking, etc.
1. Even software with no "accidental" complexity is necessarily complex (in fact, that's why it's useful).
2. If the manifesto's point is that we should stop over-complicating things, then wouldn't it be better to first examine whether, and if so why, things are overcomplicated now, given that the previous generation also went on a crusade to abolish (over)complication? And if the point is simply to declare "don't overcomplicate!", and you believe things are overcomplicated now, what use is a declaration that has clearly not been effective?
In the early days people usually lived close to bodies of water (rivers, lakes, etc.) because the closer you lived to the water, the better your society thrived (water being essential for life, agriculture, transportation, etc.). But there is a limit: if you go too far (i.e. into the water), the benefits diminish and may even reverse (building houses in the water adds complexity, flooding, etc.).
So there is probably a (global?) optimum that is neither too complex nor too simplistic.
Except for that special guru who writes The Framework for the company, because why not a Forth-based DSL for everyone to work in? Overblown, I know, but there are kitchen sink types, or those who just don't know that there's an easier way, and other reasons to wind up with too many tools or layers or libraries.
(I've been guilty of not knowing easier ways, so...)
This 100%.
I'd also add that, often times, attempts at simplifying end up having the opposite effect and actually increase complexity because, as you say, the reasons for the current complexity haven't been explored/understood.
I think the explanation is similar to, but slightly more complex than, this.
I don't think we'd necessarily be in a worse place without crusades like this, because I don't really believe they have any impact on general trends in software. But that's not the same as having no impact or value.
Rather, I think these crusades are important for their niche influence; without them, we would only have the overcomplicated mainstream, without the minority of simpler options, that will never come to dominance but, hopefully, with the help of advocacy like this, will continue to have some small following.
> there needs to be a constant struggle not to over-complicate, but it should be done with the understanding that over-simplification is just as problematic
While I agree with the sentiment, the latter seems to be less of a problem in practice, no?
The reason this plethora exists may well be the crusade that you mention. Instead of solving the problems with the current programming language, let us just invent a new one instead. The former is much harder to pull off, of course, but it should limit the amount of complexity in the long run.
I would like to differentiate simplification, aka "dumbing things down", from purification, which is finding the right frame of reference to enable a more compact representation of the necessary complexity. Purification is always highly desirable (unless it's just a facade used to make oversimplification more appealing).
The problem with purification is that you usually don't have the time to ruminate on your problem space for months.
There probably are exceptions. Software can sometimes be complex for no good reason except maybe its architects didn’t have a good plan and folks just added more bad code on top of existing bad code. I’m just not sure if that’s as common as the crusaders want it to be, and when folks say they want to simplify, I think that most of the time it’s because they don’t understand what they’re getting themselves into.
Abstractions are a way to create contexts where none apparently exist; the main objective of these contexts is to provide simple models at every contextual layer for understanding the infinitely complex environment in which everything exists.
The TLDW is that Kay thinks we haven't discovered all the fundamental rules of computing, à la Maxwell's equations for physics. If you are able to figure these out, you can substantially cut down your accidental complexity.
FWIW, my unscientific view is that achieving simplicity is really hard and doesn't scale people-wise. Instead of C99 without a bunch of stuff, I'd probably use Scheme to create very dense (lots of bespoke abstractions and idioms) but simple software. That approach works reasonably well, but try onboarding a new developer!
> Computers have been invented to surprise us
> If we knew what computers do, we would not use them, and we would not have built any.
This means that computers are most useful precisely where the complexity of the problem is beyond our efficient reasoning powers. Like Brooks, I think that our accidental complexity is already quite low (no more than 50%), and while it's always good to decrease it, what we're left with will still be very complex -- that's why it's useful.
I understand what you're getting at, but I'd argue that what you're suggesting isn't actually simple by any means.
I hear this a lot, in various contexts, and I even used to say it a lot. But it has actually stopped making sense for me. I don't think I've ever genuinely seen a case where there is only one obvious way to do something, or where there's a way that is obviously the "right" way. Everything non-trivial is chock full of trade-offs and hidden corners that could make it obviously wrong in hindsight.
Does someone disagree? Can you share your perspective?
"There is never only one way to do something. You can't know which is correct, so go for the simplest option that is easy to change later"
And once customers get their hands on your simple thing, management won't let you change it to something better...
(Also known as, "The demo becomes the product.")
Your simple solution is embedded in a complex social reality. If you're not taking that into account, it probably won't stay simple for long: i.e. simple to maintain, simple to operate, simple to build new features on.
Complete trumps simple.
(Grumble grumble when parts of protocols are ignored, and not even for the sake of simplicity.)
While I agree that simplicity is always better than complexity, in practice it is an abstract target rather than a hard requirement.
I don't believe that follows.
For example, there is one 'correct way' to unscramble a Rubik's Cube, but humans unscramble them in a much more roundabout manner.
Just because there is one correct way to do it doesn't imply that all Rubik's Cube solvers are fungible.
This is because of the very high cognitive load of finding the correct way.
Math, especially number theory, is full of conjectures that are easy to state but take hundreds of years to resolve, and at its base computer science is math.
I used to be exhausted by so many libraries which all do more or less the same thing. I often imagined a fantasy ultimate implementation which would moot all the other partial or less optimal alternatives.
I now believe that each use case entails very different tradeoffs.
I now believe that fit and finish matter more than completeness.
Further, I enthusiastically support reinventing the wheel, for self study, batting practice, or just for kicks.
That would kind of make more sense with the follow-up remark that two ways is more complex.
Otherwise I completely agree, every non-trivial design will involve tradeoffs.
C is anything but simple. It looks simple on the outside, then beats you to death with undefined behavior and either overexplicitness (if you don't use macros) or nested-macro hell (if you do). Their choice of C is rooted more in the authors' coming from the Unix community than in their desire for simple software.
If anything, they should start with a truly simple programming language. Something like Go minus the huge runtime and maybe plus basic generics.
Software is complex because the real world is messy. That messiness cannot ever be "abstracted away", it can only be hidden.
You can't derive a better software stack from first principles. What we have is the tireless work of many people, quite a few of whom are smarter than you. Rust and LLVM, for instance, are much more fit for purpose than "my pet compiler backend + my favorite C subset" because even the most disciplined C subset is still full of UB traps, and Rust has had going on a decade of work, by many hands, addressing the very cases where C falls flat on its ass. The same goes for LLVM vs. whatever brain fart J. Random Hacker thinks up as an ideal abstract machine.
How would you hide something without hiding it behind an abstraction?
> You can't derive a better software stack from first principles.
What do you consider "first principles"?
Consider human learning as an optimization in the mathematical sense. Note that humans are good at learning things along a gradient. Now note that this is not just about how the learning works but also how far we go. For example, if a human thinks it is too cold, they turn the knob not where they want the temperature to be but much farther up, and then when the desired temperature is reached, they turn it back. Or consider how you find a page in a book. You overshoot and then go back. Or how you choose the speed when driving. You go as fast as you feel comfortable, and you find that by overshooting and correcting.
I think this is inherent in humans. Evolution made us this way.
The downside is that humans are bad in environments where there is no clear limit or where you can't correct after overshooting, for example driving too fast and then having an accident, or substance abuse.
Now take this thought and apply it to programming.
If you find me a way to manage complexity (for example modularization, abstracting it away, etc.), it will not lead to us writing a good version of the software we had before. It will lead us to writing larger software that will again be at the limit of what we can do with the available technology.
So far it is just a theory, but I can see in my own software selection that all the programs I consider good and simple in the sense this manifesto describes are old programs, written when the functionality they offer was the limit of what was achievable with the technology of the time. The reason they did not metastasize into something unwieldy is that (a) they did, but it was still comparatively small, or (b) they already contain all the features anyone could think of.
Try comparing two programs of your choice that do essentially the same thing. You will find that if there is an old one and a new one, the newer one will be bigger and slower and will usually work in fewer environments and have fewer features.
It also turns out that no two people have the same idea of what "simple" means. For me, with software, simple means it does all I really need it to do, and nothing else. Other people usually need it to do other things and they don't need some of the stuff I need.
So basically Parkinson's law applied to complexity rather than time. I think it's definitely true. More complex software is (ideally) more versatile, handles more cases of the problem domain, and so there's competitive pressure to make software as complex as is feasible to solve as many problems as possible, and thus, provide as much value as possible. The ceiling is exactly as you said: what level of complexity is manageable given the other constraints that must be satisfied (performance, robustness, etc.)?
Edit: to be clear, I mean "essential complexity" above, not accidental complexity from poor designs. Being able to handle more essential complexity is a good thing.
As a CJK person, the most prominent example is multilingual support in the FOSS community; I still have not seen any Linux GUI app with proper out-of-the-box IME support (and that includes DEs, WMs, etc...) mostly because for some reason (hint, most are from the western world) the devs of Linux distributions turn off IME support by default. Is it really ‘simple’ to turn off some features that you don’t use, resulting in total breakage of apps when the feature is in use? Really?
Yep. The problem is that sometimes the reason is the authors are trying to do the last 80% of work required to add the last 20% of features, which 99% of users will never need.
Each new generation of programmers creates new tools to replace the 'complex' ones that came before without even bothering to understand them.
In my 25 year career I've seen it multiple times. We are all going downhill.
Attention spans in the smartphone era might be preventing people from reading the f*ing manual.
This means that any system or library which has an operating model that must be understood before it can be effectively used¹ simply can't be understood by the majority of programmers today.
1: The currently popular things in software world are mostly the type of system that lack such a model and which can be 'mastered' by asking discrete questions on stack overflow. This lack of complexity causes a loss of expressive power.
> Constraining the user to simple tools encourages creativity and allows the system to be more quickly understood
> There is only one correct way to do something, because two ways would be more complex
When you encourage creativity with combinations of simple tools, creative people will come up with multiple ways of doing things.
https://github.com/search?q=%22simple+yet+powerful%22&type=C...
Is there any camp that DOESN'T consider their preferred way of doing thing to be simple yet powerful?
- thou shalt learn how to use the wheel; thou shalt not reimplement it
- the best software barely satisfies the task it was made for
- antipasta statement: no lasagna, no spaghetti, and no ravioli
- the code forming the interface will be at most 20% of the actual code written
- no interface will be designed to solve all of humanity's problems for the next millennium
I think those were the biggest pains I've had within 23+ years of development.
What were yours?
Also, with no C macros there are no C generics. I wish the author luck in maintaining that mess.
Looking at their central axioms, I would guess:
1 - simple software breaks in simple ways: weak, dynamic typing means a lot of problems that would be trivially solved by static and/or strong typing.
2 - simple tools: no idea. I remember cpan being fairly great and perldoc being excellent.
3 - one correct way to do things versus Perl's TMTOWTDI central theme
4 - dependencies are part of the system: lots of CPAN modules, so there's a tendency to import black box modules instead of writing it yourself and retaining ownership.
5 - standards are useful but full of cruft: perl has a much larger standard lib than, say, Go or Python as far as I understand.
As to why it would be what's wrong with git: after a cursory glance at the git repo (0), I would guess the problem is that it still contains .perl files?
This kind of arbitrary simplification is particularly unfortunate because C is a professional, dangerous tool; removing the main way to organize code is like simplifying a chainsaw by omitting the power-switch interlocks.
1. All software breaks, so let's avoid failure by not even trying.
2. Constraining the user to simple tools, because the user is an idiot.
3. There is only one correct way to do something, and it's mine. Coming up with one mediocre way to do things is already a great accomplishment; thinking about design tradeoffs is unthinkable. YOLO!
4. The dependencies of a system are not fun, so they are your responsibility. Good luck.
5. Standards are a source of inspiration, idiots (see #2 above) aren't going to run into interoperability problems, missing features, and other consequences of implementing only the "simple" parts of the standard.
The cproc missing features list is a brief but very insightful example of this way of thinking (I mean, this way of not thinking).
Very few organizations use real data and the scientific method to make decisions on architectures and design. Most rely on anecdotes, gut feel, politics and software cults to make these decisions.
Consequently, there's rarely real proof on the advantages of one approach over another. The rigor, and thus cost, needed to make those determinations is perceived to be too great but the reality is, those perceptions are often unfounded when considering the total cost of software.
This all hit me years ago when working for a company that had to go through CMMI certification. As part of the effort, a consultant would come out quarterly to coach us along and monitor our progress. During that period I noticed a significant portion of my time was now spent on implementing the process. When the consultant was out for a visit, I told him I had concerns because we did not have a charge number for the extra time it was taking to implement the process. He asked why that would be necessary, and I asked how else you could make an objective determination that the benefit/cost ratio of implementing the process was greater than one. Disappointingly, the blank stare ensued.
I'd like to note that "simple software" means a different thing to the developer, and to the user. And in the grand scheme of things, the developer and all the code he wrote is highly irrelevant. We'd like to think we are, but how much does it really matter what kind of trowel the mason used, and how clean he kept it? What matters is the cathedral he built, the feel of the space within, the durability of the walls.
In my view, the complexity is closely related to scale. I'd like to draw a parallel with nature, where things scale linearly up to some critical point, at which a system goes through reorganization and only then continues to grow. I think mimicking this phenomenon can be a path towards software simplicity.
Oh god, no. This horrible idea escalated so far that there is now a billion-dollar industry (RPA) automating braindead tasks just because software cannot be arsed any more to include an API, code/CLI-based access, or, to steal the wording, complex tools at all.
IMO this is another round of weirdly religious adherence to "UX" schemes which ends in massive damage, like the unconditional love for whitespace everywhere.
Please stop using "simplicity" as a cop out to prevent having to build software that, for example, has basic things like stack deletion. The result will just suck.
Everyone loves to bash POSIX. The idea of replacing it lingers on, but has anyone considered the idea of updating POSIX to remove the gnarly parts? This is a somewhat incompatible change and would have to be versioned appropriately.
Excellent!
> There is only one correct way to do something
!!! The correct way to do something depends on what else you're doing at the same time.
> Simple software in practice > C
Dear God
Rewrite in Zig!