For me, a perfect system is one that is stable over long periods of time. Achieving that first requires careful selection of tools, frameworks, and platforms. Building anything on unstable foundations is never going to end well over time.
We have large chunks of code that haven't been modified since ~2014. How many shiny frameworks and languages have come and gone between then and now? You can be certain I have basically forgotten about this code. Not out of disdain or neglect, but because it just works 100% of the time now.
We have a lot of problems to deal with in front of us, so we find the courage to properly settle things and move on.
Lessons learned over and over: working code works, and don't let perfect be the enemy of good.
If you can change it with little effort, it's as close to perfect as it has to be.
Of course not all domains are the same, but here I am with quicksand in my hands.
All systems we construct are at best models of reality. Because the information content of reality is always greater than the model's, the model must fail at some point: by the Bekenstein bound, information content is proportional to surface area, so every model is trying to represent, with less information, a reality that contains more.
This gap means you can never have perfect predictive fidelity in any system. You don't even need to invoke human frailty, but even that is the same: the human mind can never model and predict the universe, for the same reason, so anything we imagine will always have limits and, as a model and predictor, become wrong at some point.
It’s impossible to have a correct model, but you can have a model that is valid for its intended purpose. The hard part of engineering is drawing a box around the scope and getting everyone to agree that that’s what is to be done.
* On the topic of systems changing: the "perfect" system is a moving target. Have you ever encountered an entire module that could be replaced by a library call? The plot twist is that the library didn't exist five years ago when the code was written. Found an ugly hack that's completely unnecessary? It was written to overcome a system limitation that has since been removed. As time passes, what a "perfect" system looks like changes as well.
* On the topic of never the same implementation: we aren't all experts in everything. Being assigned to a project in a tech you have no knowledge of whatsoever is a very humbling experience. You'll make tons of mistakes, and the code will be far from ideal, but it's only with time that you'll get to appreciate (and fix) those mistakes.
* On the point of good > perfect: there is the topic of diminishing returns and leverage. In the previous two points we either can't or don't know how to write better systems. Sometimes we can and do know how, but it's better not to. A while back I wrote a service that's central to our infrastructure. I spent a bit of time making sure the code is clean and we have good test coverage. It has been in production a while and has never caused an issue. Sometimes I look at it and see some obvious faults: interfaces could be tidier, or some code could be cleaner. However, I force myself not to spend time fixing it. Why? It isn't a high-leverage activity. Rather than polishing a service that works well, I should spend that time fixing open bugs or improving parts of the system that don't work as well. Even if it isn't as satisfying to our ego, the return on investment will be much higher.
I wouldn't go so far as to say this.
The perfect system does exist we just need to define it formally. This will take a long time and a lot of research but we can get there.
Any time you hear the words "designing systems," they refer to some aspect of reality we don't understand, and we go through this "design process" where we attempt to guess and check our way to a very suboptimal solution.
Take, for example, the shortest distance between two points. The answer is a straight line, and we have a formal definition for it. We do not need to design the shortest distance between two points. If we complicate the problem and ask what the best way to travel between two points in the United States is... well, then the answer gets much more complicated. Do you take a car? A bus? A plane? Which is cheaper? Which is faster? Which is better for the environment? All of these decisions make it too complicated to calculate a solution, so we turn to design. We use "design" to create systems where no closed-form solution yet exists. And in the past decade we've used machine learning as one possible way of finding solutions to these kinds of problems.
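The contrast above can be made concrete in a few lines. The distance question has a closed-form answer; the travel question forces us to invent weights for cost, time, and emissions, and the weights themselves are a design decision with no formally correct value. (The travel options and all the numbers below are made up purely for illustration.)

```python
import math

# Closed form: the shortest distance between two points needs no design.
def euclidean_distance(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

# "Design": no closed form exists, so we must choose how to weigh
# cost vs. time vs. emissions. Hypothetical numbers for illustration.
options = {
    "car":   {"cost": 120, "hours": 14, "kg_co2": 180},
    "bus":   {"cost": 60,  "hours": 20, "kg_co2": 60},
    "plane": {"cost": 250, "hours": 5,  "kg_co2": 400},
}

def best_option(weights):
    # Lower weighted score wins -- but picking the weights is itself
    # a judgment call, not a theorem.
    score = lambda o: sum(weights[k] * o[k] for k in weights)
    return min(options, key=lambda name: score(options[name]))

print(euclidean_distance((0, 0), (3, 4)))  # 5.0
print(best_option({"cost": 1.0, "hours": 5.0, "kg_co2": 0.1}))  # bus
```

Change the weights and a different option wins; nothing in the problem statement tells you which weights are "right." That gap is exactly where design currently lives.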
While we have to use designs for building planes and such I do not believe that this will always be the case for programming. I truly believe in a world where it is possible to calculate our program designs. If you really squint... I sort of see a path leading to this world within functional programming and category/type theory.
I do think we can and should define a self-consistent internal logic for the program designs we do write (e.g. under a given set of valid inputs, the program is memory-safe, is thread-safe, does not produce UB, has program-specific invariants upheld by each function's preconditions and postconditions, and produces correct results in, say, O(n log n) time, making O(n) allocations totaling O(n) bytes ever allocated), then verify that the code we write upholds this logic. Every program has an internal logic, correct programs have self-consistent logic, and any holes in the logic show up as bugs in the program. And easily understood code makes this logic self-evident to readers.
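A minimal sketch of that idea, using plain runtime assertions rather than a formal verifier: a merge of two sorted lists whose precondition (inputs sorted) and postconditions (output sorted, output a permutation of the inputs) are stated and checked directly in the code. The function and its contract are my own illustration, not from the thread.

```python
from collections import Counter

def merge_sorted(xs, ys):
    # Precondition: both inputs are already sorted.
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
    assert all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))

    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    out.extend(xs[i:])
    out.extend(ys[j:])

    # Postconditions: output is sorted and is a permutation of the
    # inputs, built in O(len(xs) + len(ys)) time.
    assert all(out[k] <= out[k + 1] for k in range(len(out) - 1))
    assert Counter(out) == Counter(xs) + Counter(ys)
    return out

print(merge_sorted([1, 3, 5], [2, 4]))  # [1, 2, 3, 4, 5]
```

Tools like formal verifiers or refinement types take the same contracts and discharge them statically instead of at runtime; the assertions here are just the cheapest way to make the internal logic explicit.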
I'm not talking about efficiency here. The theory of time complexity is already worked out: we know how to calculate quantitative measures of average-case and worst-case performance, and for many problems we know how to optimize for both.
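As a quick reminder of what that quantitative theory buys us, here is a tiny experiment (my own illustration) that counts the comparisons insertion sort makes on a best-case (already sorted) versus worst-case (reversed) input of the same size, matching the known bounds of n − 1 and n(n − 1)/2.

```python
def insertion_sort_comparisons(xs):
    # Sorts a copy of xs with insertion sort, returning how many
    # element comparisons were performed.
    xs = list(xs)
    comparisons = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            comparisons += 1
            if xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
            else:
                break
    return comparisons

n = 100
print(insertion_sort_comparisons(range(n)))         # best case: n - 1 = 99
print(insertion_sort_comparisons(range(n, 0, -1)))  # worst case: n(n-1)/2 = 4950
```

The point is that these numbers are derivable before running anything; no comparable calculation yet exists for "how well organized" a codebase is.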
When I refer to design, I refer to the aspect of computer science we do not have a theory for. How do you organize your logic? What organization of code is optimal and future-proof? This area, I believe, can be formally defined.
The theorems and axioms of geometry are consistent and evident. That is what I'm proposing: we must define an axiomatic notion of optimal program organization. There may be several metrics here, but as with the axioms of geometry, we must pick something foundational. For example, "the shortest distance between two points is a straight line" is a foundational axiom chosen for Euclidean geometry.
Just as geometry, group theory, or probability follows from a set of rules and axioms, I foresee the possibility of the same thing happening for program organization.
> I think my supervisor summed it up best... there is nothing constant except change itself.
Your supervisor's mind is clouded by the endless circle of redesign happening throughout the industry. He doesn't think above and beyond it. What is design? And what is optimal? Those are the questions he should ask. Sure, everyone has their own opinion on what is "optimal," but at the same time everyone on the face of the earth agrees on some foundational concepts that optimality encompasses (including the trade-offs). Therein lie the axioms of program organization. Somewhere within this universal agreement, which nobody has really sought to fully crystallize yet, exists the formal theory of program organization.
We were able to formalize our notion of "luck" with probability. Probability is humanity's universal agreement on the true nature of "chance" or "luck." Before probability, luck and chance were fuzzy, qualitative, opinionated concepts that were ripe for formalization.
If we can formalize "luck" then we can do the same with how we organize logic.
Experience helps. Inventiveness helps. Trying to do things better than you did before helps. Asking for input helps. Research helps.