“the promise of technological fixes peddled by Silicon Valley entrepreneurs that seem to allow us to continue with business as usual.”
“But if we are to listen to Silicon Valley entrepreneurs and their allies in government and academia, we should not worry about changing our collective way of living on the planet: climate change is simply a problem that can be solved with “disruptive” new engineering innovations, from carbon capture and storage to electric cars.”
The only direct quote is from an unnamed Tesla executive answering an unspecified question: “those are questions for philosophers—next question.” For all we know, he might have been asked why bad things happen to good people.
Attacking whole groups of people for unspecified and unattributed proposals is a truly obnoxious rhetorical tactic. You can blame anything on anyone this way.
Edit: I want to clarify that I think the thesis of the article, that engineering is used both to enact and obscure political outcomes, is true and important. The engineering problems described are a fine example of this dynamic, and I wish the author had stopped there, rather than undermining this important argument in such an easily avoidable way.
If we wish to be taken seriously when attributing claims or beliefs to people, we need to substantiate them with evidence like a cited quote, not just hearsay. The author is a PhD candidate, so they should understand why it is important to cite sources.
It's not blaming engineers so much as recognizing that all this tech doesn't address the fundamental "problem" and may actually make it harder to solve, or worse: he alludes to as much with the lithium/electric-car issue. Imagine the engineers had built an even better system the first time: it would probably have allowed even more growth, worsening the unsolved problem.
I frequently say that we've solved (almost) all first- and second-degree problems (problems which are cause->effect or cause->effect->effect). A big problem we're facing today is that we have higher-order problems and we're treating them like first-degree ones (the refrain is always "it's easy, you just..."). We've clearly advanced to a stage, at least in the first world, where we have extremely complex issues that are interconnected with many others. Luckily we're also at a stage (in all worlds) where we can recognize this, but we need to act on it. I often see complex issues (name literally any popular topic in the political climate: climate, guns, civil rights, reparations, etc.) treated as simple to solve. While all of them are solvable, oversimplifying actually distracts from the problem, and I think the author of this article would agree with me that oversimplified fixes often make things worse in the long run. With the great advancements we've made, we not only should, but have a duty to, think harder about the future and the complexity of the issues at hand.
A good engineer can solve a problem. A great engineer recognizes the usefulness of a thousand-page reference manual on o-rings and uses that reference.
It is inconceivable to me that, for a project of this nature and magnitude, the geologists, engineers, etc. did not anticipate the "land subsidence" problem. They might have missed the rate of growth of the city, which might have exacerbated the problem, but they certainly would not have repeated the same or a similar mistake twice. To be sure, it is a non-trivial engineering problem, but the article seems to imply that subsequent fixes resulted in problems in other parts of the system which were not anticipated.
The author is an environmental engineer himself, and I really found these paragraphs worthy of thought (they sound eerily true of software projects/development!):
The ... System succeeded precisely by failing in the most mundane and invisible way possible. It transformed a catastrophic problem into a creeping one, out of sight ... It displaced the costs ... onto the margins, far from the centers of power—and onto future generations.
Yet ... politicians and business elites will not judge ... by its mundane failures, such as the groundwater depletion and subsidence it facilitates. These effects are slow-moving and concentrated on the urban periphery, far from the centers of power. Instead, elites will consider ... a success insofar as it prevents the kind of catastrophic flooding that might stall their dreams of a fast-growing ...
But it also has the outlines of a broader truth: in engineering, the “success” of a technology often has less to do with solving problems than rendering them opaque or distant from our imagination. Like an endless game of whack-a-mole, the problems never truly go away—they come back with a vengeance decades later and miles away in new forms, often made worse by the very infrastructure engineers created.
But to be able to wrestle with these questions, we need to change the language we use to think about engineering and technology. Saying engineers “solve problems” implies a kind of mathematical tidiness that doesn’t reflect our messy reality. This language suggests that problems just disappear or are neatly contained through technologies. ..., we should instead talk about how engineers transform problems.
This subtle shift in language brings our attention to the fact that any “solution” produces, inevitably, more and different problems—many of which may not be visible in the moment or place it is implemented, or to the particular group of people designing the intervention. This seems to be, at first glance, obvious. We often say that a given tool “creates more problems than it solves.” Yet the idiom is rarely taken to heart—even if, as engineers, we talk about tradeoffs and generate cost-benefit analyses of different “alternative solutions.” Anyone who has ever worked in an engineering firm or the government knows that these are inevitably influenced by our own biases and interests, whether conscious or not. Furthermore, not every effect of an engineering solution can be quantified in dollars and placed into our analysis.
I think that one of the issues with doing as the author suggests and thinking of technology as a transformation is that it can be hard to come up with the downsides of your new tech. Engineers will constantly be encouraged by management and customers to deliver a solution, and often saying 'this is my solution, but it comes with caveats' will be frowned upon. Additionally, it's clear in retrospect that the expansion of the city the Grand Canal allowed would lead to depletion of the groundwater, which would lead to sinking of the city and the canal becoming ineffective, but was that known at the time? Could engineers working on a canal project have anticipated socioeconomic trends like this? I'm not saying they couldn't have, just that it's difficult.
Mexico City has grown from 4 million in 1951 (the time of flood mentioned in the first paragraph) to over 20 million today. Few civil engineering projects solve problems 70 years in the future for a city five times as large. As far as I can tell from the article, the engineers did anticipate population growth and designed a system that would last for decades; furthermore, they monitored the system, were aware of stresses and possible failure points, and took several appropriate corrective actions as the decades went on. All of this is just from reading the article.
If at my job I criticized a design by saying, "this design is terrible because if it is wildly successful and gives us 5X growth, then 70 years from now it might require some rework," I would be laughed at. If I followed it up by suggesting that this reflected some sort of fundamental problem with the methods of engineering, I would probably not be taken seriously. Engineers are not godlike miracle workers, setting in motion an ineffable plan that somehow makes the world holistically better over an indefinite time frame. It's OK to just solve the tractable problems in front of us and to get a good couple of decades out of a system.
Is it? Is it really? The point the article is making is that by designing a system which solves the problems in front of them to get a good many decades out of a system, those engineers have actually made other problems worse. I think you're being overly simplistic when you suggest that the problem the article is proposing is that the system requires some rework.
The problem the article is proposing is that fundamental, irreparable damage has been done. We can't undo that damage. Short-term thinking like what you're proposing is, by the weight of the evidence and the best-supported theories, destroying our world and risking the survival of our species. It is not okay to just solve the problems in front of us now if the cost is that all our children die in chaos and poverty, and it looks like that is exactly the cost.
We see the same problem with the U.S.'s approach to the levee system. A neighborhood commissions an engineer to spare their homes from flooding, but without any care taken to protecting those homes outside of town lines.
The problem here isn't with the engineer; the problem is North America's unwillingness to fund long-term solutions that benefit everyone, not just a chosen (and temporary) few.
The engineers did solve one problem, that of the flooding in the metropolitan city center and they did it pretty well. The transformation the author mentions is the side effect caused by hauling all that water away instead of letting it sink into the ground and replenish the water table. That's another problem, and now the engineers can solve that too.
Damming rivers across the US kickstarted the US economy after the Great Depression but also led to unintended consequences (depleted water supply downstream, silt buildup near dams, etc.). It is the nature of engineering to solve one problem at a time; only an oracle could foresee all possible consequences of engineering works.
That reminds me a lot of the days when managers did not want proper code testing and coverage because, without them, they could ship more code. Defining the problem upfront and doing rapid iteration not just on the solution but also on the problem definition is nowadays more important and effective than just shipping.
There are always, always trade-offs involved in engineering. As for which trade-offs are acceptable, sometimes you have influence over that and sometimes it's out of your hands. Design thinking is a tool that can help, but it is not a panacea.
For people who don't work in the field, perhaps it's not obvious. A good friend of mine is a PhD chemist who works in research (I work as a materials engineer in industry), and he is always winding me up about engineers doing a half-assed job. I've always thought that our interactions highlight one of the key differences between science and engineering: scientists strive for perfection, if you will, and engineers want workable...
From experience working for a leading hydrodynamics organisation: you don't mess around with nature lightly. OK, you can do coastal defence, but you just move the problem around.
Oh, and normally it's the technician who fixes the "engineer's" math :-)
"I wish we didn't have so much water!"
"Your wish is my command: all the water now drains out of the city."
"Wait, I meant..."
I understand the allure of creating a system that allows the "popular power" to make decisions, but that in itself is a difficult problem to solve.
We couldn't possibly crowd-source how to solve this flooding problem until the majority of that crowd has been educated on several aspects of the issue. Perhaps what the author is implying is that there needs to be a better interface between Politicians/Engineers and the people such that the P/Es say "hey here's our plan, find the flaws" and the people say "here are the flaws"
But there's the difficult problem. There will always be flaws and trade-offs, and this kind of interface eats up large amounts of time that may be better spent implementing a short-term solution to buy a few more years until a long-term solution is reached. It's a catch-22.
The decisions must eventually come down to the P/Es, but maybe we just need to add a few more feedback points into the decision-making system.
So the interface between the engineers and the people is not to find solutions together, or to find flaws together, but for engineers to find solutions and flaws, and the people to pick the one that they are happiest with.
(I'm not certain my understanding of the quote is right, just sharing my interpretation / an alternative to the ones you shared)
The costs (in various dimensions) of an engineering decision in projects of this magnitude are borne by the whole population. Therefore the populace must have a seat at the decision-making table to choose amongst alternatives.
if as you say,
> The decisions must eventually come down to the P/Es
then there's reason to believe that an entire class of people will always bear the externalities of these "solutions", which are solutions in the sense of out of sight, out of mind.
http://web.mit.edu/2.75/resources/random/How%20Complex%20Sys...
(And if you are, you're in for a re-treat.)
General Systemantics addresses similar points:
All solutions to problems have design constraints, and all solutions are limited. All actions have consequences.
The "move fast and break things" mentality is asking for trouble, certainly. Even acting as prudently as possible, serious issues will arise.
This article reminded me of Kim Stanley Robinson's novel "Aurora", about a generation ship's various engineering issues and how the inhabitants find various ways to rebalance the closed system, but never completely.
It makes perfect sense to me, but is it correct to blame engineering?
Philosophically speaking, my pet theory is that the entire history of human civilization is an eternal process of solving existing problems by creating new forms of society, which in turn create their own problems, ad infinitum. It began with the use of fire, the invention of language, and systematic agriculture, and it moves towards ever more complex forms simply because it has to. I think some radical philosophers have argued not only that the industrial revolution was a mistake, but that civilization itself can be seen as a type of technology, and that it too was a mistake.
Although some thinkers believe we should somehow degrow and freeze civilization in the best interests of human happiness, I don't think that's really coming. Human civilization on Earth is a very centralized system today. It may become possible in a future space age, when human civilization spreads across the galaxy, centralization is no longer feasible, and some regions can choose a primitive approach to civilization; or in a future digital age, when computational resources are practically post-scarce and minds and civilizations can exist independently in cyberspaces. (Even then, engineers would have to work tirelessly to increase the computational power of the system before it collapses, although the laws of physics set an extremely high upper limit for reversible computation, unlike many types of physical resources, so I don't think it would be a problem for many centuries if improvement continues.)
But before that, the ride will go on. If we are lucky enough not to accidentally destroy ourselves in a massive environmental incident or a world war, and we can keep engineering new solutions before the current system collapses, the ride towards at least solar-system domination seems certain.
So I don't really think creating new problems to solve is an engineering failure, or that one should blame engineering for not solving problems.
I don't think the author is blaming Engineering/Engineers so much as cautioning them against short-term thinking when the consequences of getting it wrong can be so catastrophic.
1. Code is a liability.
2. Therefore, adding new code to your codebase is adding liability to your codebase, at the margins.
3. Refining/documenting code at the leaves transforms some of those leaves' liabilities into assets.
4. Refining/documenting code in the branch/trunk transforms some of that branch's liability into an asset, but usually undoes any progress made in the leaves.
5. We are paid to (a) create liabilities and (b) transform liabilities into assets. If you do too much of (a) and not enough of (b), you are a bad engineer.
The fun thing is that you can analyze a software program like this (module A is a leaf to module B). You can also analyze the whole stack like this (Redux is a leaf to React, is a leaf to JS, is a leaf to Chromium, is a leaf to Intel, is a leaf to the von Neumann model).
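To make that leaf/branch analysis concrete, here's a minimal sketch of walking a dependency graph to pick out the leaves (code nothing else builds on) and the trunk (foundations everything rests on). The module names and the `deps` graph are made up purely for illustration:

```python
# Hypothetical dependency graph: an edge A -> B means "A is a leaf
# to B", i.e. A builds on B. All names here are invented examples.
deps = {
    "redux_app": ["react", "redux"],
    "react": ["js_runtime"],
    "redux": ["js_runtime"],
    "js_runtime": ["browser"],
    "browser": [],
}

def leaves(graph):
    """Modules no other module builds on: the outermost code,
    where refining/documenting (point 3) pays off safely."""
    depended_on = {d for ds in graph.values() for d in ds}
    return sorted(m for m in graph if m not in depended_on)

def trunk(graph):
    """Modules that build on nothing further down the stack: the
    foundation whose churn undoes progress above it (point 4)."""
    return sorted(m for m in graph if not graph[m])

print(leaves(deps))  # ['redux_app']
print(trunk(deps))   # ['browser']
```

Under this framing, refactoring `browser` (the trunk) risks invalidating work done in every module above it, while polishing `redux_app` (a leaf) disturbs nothing else.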
This ties into the article, because engineers don't "just" solve problems (unless you're von Neumann?). Usually, we first create a problem (which is usually the dual of the problem we are nominally paid to solve), and then we very slowly and iteratively "solve" that newly created problem over time.
It sounds somewhat Sisyphean, but that's life/evolution! It's a joy to see things slowly crystallize into highly-functional, specialized components. Even if those components will inevitably become obsolete one day, they will still make for interesting fossils (see Zork's source code, dinosaurs, etc.).
I start with the very debatable axiom that all new code begins its life as a liability. I've argued this in the past by deduction:
You type a single character into your editor, and (depending on the change, and your language's execution story), you've probably broken your code. Type a few more characters, and it is now less of a liability, because it compiles. A few words more, and it is less of a liability, because it accomplishes some small task (imperfectly). By the time you have committed some set of changes into version control, you've hopefully ironed out the liabilities. But, it's entirely possible that you've created more liabilities than assets. Code review hopefully refines it further, but things still sneak through. I could keep going, but I think this is clear?
Measuring the liability:asset ratio for a given piece of code is a challenge, and is more qualitative than quantitative. I believe that this is also an argument in favor of "new code is a liability", otherwise code review should catch all of our mistakes.
But yeah, I have found it to be useful "in the real world" as a heuristic for thinking about how code bases evolve over time. I do not think that it translates into a hard-and-fast rule to weaponize in code reviews. However, it does temper your expectations about the fallibility of code, and it is also a convincing argument for code review as a quality-control process.
And finally, just so you know, this is a problem for all major urban centers throughout the world. Here is a scary report on India - https://edition.cnn.com/2019/06/27/india/india-water-crisis-... This is a thorny problem for governments, politicians, urban planners, and engineers which, if not properly evaluated, will have catastrophic consequences. That is why, when people do research on this and point out problems and possible solutions, you should listen carefully and with gravitas.