The obvious example is the quick-to-build MVP, but many of the bigger problems come from platform conflicts. Because we have at least five different actively uncooperating operating system platforms, it's hard to build portable native apps - so people build electron apps instead. We also use the web browser as a competitive battleground; due to coordination problems only one programming language and UI model is possible, although another is creeping in via webassembly.
Then there's the ongoing War On Native Apps. Every platform holder would love to take the 30% cut of the profits and veto which applications can run on the platform. We're left with Windows (non-app-store) and sort of MacOS (although watch out for notarisation turning into a veto in the future). And sadly this has very real benefits in malware prevention. Systems which run arbitrary code get exploited.
Beyond that there's cryptocurrency, where finding a less-efficient algorithm is a design goal to maximise the energy wasted, in order to impose a global rate limit on "minting" virtual tokens.
I'm no prophet, but I did predict then that the browser would have eaten all the business applications space by now. It was just obvious. Colleagues objected that the web could not match rich native programs.
Excel is a much better product than Google Sheets, but having the better product doesn't mean having the winning product.
It's gotten much worse. Now you have iOS, Android, Windows, Mac, web, and Linux(?). In 2000, you had Windows. You couldn't do anything interesting on mobile, Web 2.0 (cringe) wasn't a thing, and Mac's market share was about 3%.
Turns out that it was already removed but slack was still displaying it.
⌘ + R (the refresh-page shortcut) solved it. Electron might help devs get something out quickly, but all these layers have a cost.
I don't disagree with the gist of this, but your technical description verges on nonsense. I'm questioning if you're serious.
>...finding a less-efficient algorithm is a design goal...
At no point is anyone searching for an algorithm. Most mining algorithms were chosen at random or for novelty; Bitcoin uses double SHA-256, Litecoin uses scrypt, Primecoin searches for primes.
>...maximise the energy wasted...
Energy is wasted during mining in order to maximize security. The waste is a side effect.
>...in order to impose a global rate limit...
This is plain false.
>..."minting"...
It's called "mining". I wouldn't complain if this weren't in quotes.
The whitepaper is only nine pages, but nobody seems to read it. https://bitcoin.org/bitcoin.pdf
>> finding a less-efficient algorithm is a design goal
The Bitcoin paper doesn't actually mandate a specific algorithm at all; it just says "such as":
> To implement a distributed timestamp server on a peer-to-peer basis, we will need to use a proof-of-work system similar to Adam Back's Hashcash [6], rather than newspaper or Usenet posts. The proof-of-work involves scanning for a value that when hashed, such as with SHA-256, the hash begins with a number of zero bits.
The Hashcash paper uses the term "minting".
>>...in order to impose a global rate limit..
> To compensate for increasing hardware speed and varying interest in running nodes over time, the proof-of-work difficulty is determined by a moving average targeting an average number of blocks per hour. If they're generated too fast, the difficulty increases.
ie to limit the global rate of block generation. Which is what makes it useful as a global distributed timestamp server.
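To make the mechanism being argued about concrete, here's a toy hashcash-style proof-of-work in Python. This is a sketch only: real Bitcoin hashes an 80-byte block header and encodes the target differently, but the "scan nonces until the hash starts with enough zero bits" loop is the same idea.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Scan nonces until the double SHA-256 of the data falls below the
    target, i.e. the hash starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each extra difficulty bit doubles the expected number of hashes tried,
# which is the knob that rate-limits global block production.
nonce = mine(b"example block", difficulty_bits=12)
```

Raising `difficulty_bits` is what the moving-average adjustment in the quote controls: if blocks arrive too fast, the network demands more leading zero bits.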
>> maximise the energy wasted
> The steady addition of a constant amount of new coins is analogous to gold miners expending resources to add gold to circulation. In our case, it is CPU time and electricity that is expended.
As everyone noticed fairly early on, like gold mining, this creates a means of expending energy to produce something which can be sold. Just as it's economically advantageous to burn down rainforest, it's economically advantageous to perform a trillion SHA operations and throw away the results of almost all of them.
A fun read on cloud scale vs optimized code is this recent article comparing ClickHouse and ScyllaDB (https://www.altinity.com/blog/2020/1/1/clickhouse-cost-effic...)
Most of the numerical code that cares about performance for linear algebra uses this API and links an appropriate implementation.
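As a concrete illustration, assuming a NumPy build (which is linked against some BLAS implementation at build time):

```python
import numpy as np

# NumPy dispatches matrix multiplication to whatever BLAS it was built
# against (OpenBLAS, MKL, Apple Accelerate, ...), so the same Python
# code picks up platform-tuned *gemm kernels for free.
a = np.arange(6, dtype=np.float64).reshape(2, 3)
b = np.ones((3, 2))
c = a @ b  # backed by the BLAS dgemm routine
```

`np.show_config()` reports which implementation a given build is linked against.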
Advances in language, compiler, and runtime implementations will continue to keep up with any growth in the need for performant applications for the foreseeable future, despite the looming collapse of Moore's Law.
It would be great if most applications worked at human speed. Instead we have web applications taking 5 seconds to load what is basically 3 records from a small database.
I've often complained out loud with coworkers, while waiting for some horrible webapp to do its thing: "This computer can execute over a billion instructions every second. How many instructions does it take to render some formatted text!?!?"
Consider also that spending an hour at the DMV for them to update a database entry or two is also human speed.
I want to live in your alternate reality, because in ours anything under 45 seconds is a miracle.
If you prefer, call it carbon footprint. Python has a huge carbon footprint. We should get rid of slow languages for environmental reasons.
The trick, as always, is finding balance between paying for hardware and paying developers.
People really bought into the ‘people are more expensive than hardware’ as an excuse to get screwed like this. For $5k in human cost, these guys (and their investors) now save 200k/year in hosting. And this is not an isolated story; I am working on another one at this very moment. Programmers have become so incredibly sloppy with the ‘autoscaling’ and ‘serverless’ cloud ‘revolution’.
I.O.W., if you have a program with millions of users, you make something that performs well enough for people to pay for it. Each wasted CPU cycle then becomes millions of cycles, but you never get billed for those, so it doesn't matter to you.
I wonder if an ecosystem is possible where software providers have to pay for ALL resources consumed. It sounds ridiculous, but having any transaction going on would make monetizing software a lot easier.
It would for the most part boil down to billing the end user for the data stored, the cpu cycles and the bandwidth consumed. A perfectly competitive vehicle. Want to invest in growing your user base? Pay part of their fees and undercut the competition.
It would make it more logical if they didn't own the device. The hardware can scale with usage. You just replace the desktop, console or phone with one better fit for their consumption.
Programmers keep saying this, and users keep complaining about slow software.
But what does that even mean? A 3Ghz quad-core can do 12 billion things per second yet I still regularly experience lags keeping up with typing or mouse movements, scrolling webpages, redrawing windows... the actual interactive experience has gotten much worse since the 90s.
I learned this by greatly improving a scheduling-system algorithm that could schedule 10-12 related (to each other) medical procedures while accounting for 47 dynamic rules (existing appointments, outage blocks, usage blocks, zero patient waiting, procedure split times, etc.), bringing it to sub-second from the existing algorithm's 13 seconds. You know what? It didn't matter. That was our speed-test scenario (the most realistically complex one a customer had).
The customer was fine with 13 seconds because it was so much faster than doing it by hand and these customers were paying hundreds of thousands of dollars for the licenses. Because of this, the improved algorithm was never implemented. It was a neat algorithm though.
Absolute maximum performance has its place, it's just not every place.
I've seen locking brought forward as a critical limit. Long discussions about new hardware and adding nodes and all sorts of expenditure required. We need a larger kubernetes. More GPUs!
I've also been in the situation where we switched to a plain redis queue (LPOP, RPUSH) scheme and gotten 10x the improvement just by lowering message overhead. A lot of the very complex solutions require so much processing power overhead simply because they involve wading through gigabytes. Better alternative solutions involve less gigabytes. Same hardware, different mindset. Not even talking about assembly language or other forms of optimisation being required. Just different philosophy and different methodology.
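The discipline there is just a FIFO list. Here's a minimal sketch with a `deque` standing in for the Redis list; with redis-py the equivalent calls would be `r.rpush("jobs", msg)` and `r.lpop("jobs")`, but the queue semantics are identical.

```python
from collections import deque
from typing import Optional

queue = deque()  # stand-in for the Redis list

def rpush(msg: str) -> None:
    """Producer side: append to the tail (Redis RPUSH)."""
    queue.append(msg)

def lpop() -> Optional[str]:
    """Consumer side: take from the head (Redis LPOP)."""
    return queue.popleft() if queue else None

rpush("job-1")
rpush("job-2")
first = lpop()  # strict FIFO: no broker negotiation, minimal framing
```

The whole point is how little machinery sits between producer and consumer compared to a heavyweight message broker.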
Perhaps we need programmers with the mental flexibility to run experiments and be open to alternatives. (Spoiler: we've already got plenty of these people.)
Contracting is such a strange world. I've drifted so far into it I've lost the ability to see how salary based people even get work. All I can do is keep the door open for as many people as possible. Sometimes I need to actually assert the door into existence. This was something I didn't know was possible until recently.
There are still improvements being made to the current tech, and new takes on it, that aren't yet incorporated in the current crop of consumer processors.
Also I happen to think that what makes a computer fast is the removal of bottlenecks in the hardware. You can take quite an old machine (I have a Core 2 Quad machine under my desk) slap in an SSD and suddenly it doesn't feel much slower than my Ryzen 3 machine.
Sure it is true. It isn't a tech journo writing a quick piece to get some clicks. I am quite cynical these days.
There wasn't any competition in the desktop CPU space for years, until 2019.
Also, clock rates haven't increased since the mid-2000s (there were 5GHz P4 chips). Clock rate stopped being an indication of speed back when I could buy a "slower"-clocked Athlon XP chip that was comparable to a P4 with a faster clock.
Also more stuff is getting offloaded from the CPU to custom chips (usually the GPU).
> We need developers who understand these new low level details as much now as we needed that kind of developer in the past.
I suspect that compilers and languages will get better. I work with .NET stuff, and the performance increase from a rewrite to .NET Core is ridiculous.
>In a lecture in 1997, Nathan Myhrvold, who was once Bill Gates’s chief technology officer, set out his Four Laws of Software. 1: software is like a gas – it expands to fill its container. 2: software grows until it is limited by Moore’s law. 3: software growth makes Moore’s law possible – people buy new hardware because the software requires it. And, finally, 4: software is only limited by human ambition and expectation.
Codified anti-recycling.
Not to mention manually written algorithms are, in many cases, more accurate than ML heuristics (for a terrible yet relevant example in the finance industry, identifying the correct sum of a set of numbers).
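The stdlib already has a hand-written exact algorithm for precisely that case: `math.fsum` tracks the rounding error that a naive float accumulation silently drops.

```python
import math

values = [0.1] * 10

naive = sum(values)        # accumulates binary rounding error
exact = math.fsum(values)  # Shewchuk's algorithm: tracks partial error terms

# The naive sum misses 1.0 by a small epsilon; fsum recovers it exactly.
```

No heuristic involved: the algorithm is provably exact as long as intermediate results don't overflow.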
I kind of disagree with this based on intuition alone. Most developers, professional developers, are using web tech (the JS stack in particular: Node and hyped front-end frameworks). Yet we're seeing "interest" in compiled languages such as Rust, despite almost nobody using it professionally and almost nobody doing much with it outside of simple proofs of concept.
To me it points toward a developing sense of insecurity in modern professional developers that simply being a JS dev isn't really programming/development and they've to "prove" themselves with lower level tech.
Something that indicates that, for me, is in StackOverflow's 2019 survey [0] the most used tech was JS and that which surrounds it, followed by Python and other easy-to-get-going-well-supported tech. Yet the "Most Loved" was Rust.
I could be wrong, and I'm open to being, but intuitively I don't believe the interest in performant technologies is a response to the sheer bloat we've seen, particularly on the web front.
>Programmer skill will become more important in the future.
My prediction on this, not so AI specific, is that developing and deploying web-tech will continue to become easier and easier, meaning it'll take less people to do it. Sure, work may arise from developing countries/economies to supplant a drop in demand for it in the developed world, but maybe not.
Combined with a potential bubble burst in tech, I think those relying on web dev for a living could be in trouble in the coming decade.
I don't foresee much in terms of companies trying to optimize operational costs by instructing their devs to write their code more efficiently with memory/performance in mind to reduce operating costs, and thus spur a push toward jumping on compiled languages. If anything, cloud computing will continue to get cheaper and cheaper as the big 3 continue to try and absorb as much marketshare as possible.
When we could, we optimized for ease of writing code, and it led to bloated and slow systems. This is the current state of things.
We are optimizing again for performance, but we want to have our cake and eat it, we want both the performance and the ease of writing code. And the reduction of bugs in the end result.
This change takes time, perhaps decades. Longer than bubbles and market growth. It takes time because we are curious, and we want to test all possibilities. We want fast games, easy abstractions, zero bugs, the whole package.
But it will happen, at some point. Rust, Go, D: maybe one of these languages will replace JavaScript, or maybe it will be a totally new language.
> I expect that programming in the future will be more about getting the AI to do what you want rather than writing code directly.
This is the clear endgame. The question is how long it takes to get there.
If this is universally qualified, where are the scientific HPC simulations written in python, AAA video games written in Haskell, and fin tech trading apps written in Lisp?
I am not stating that there are no places where compiled vs interpreted can never produce acceptable results, it’s just more nuanced than a forall type proposition.
Addendum: I am aware C# could sort of be ‘interpreted’ and Unity is C#, so there is at least some evidence in the game category, but I’d quibble over the best-in-class C#/Unity game being considered 100% C#.
This is where I see the SRE (site reliability engineers) role. The developers making changes are put into a position where they measure the cost impact of a decision.
It's these feedback loops, and the practices they instill, that I believe we need. New programmers can help break the mold, but without good feedback they'll fall into the same traps.
We have great optimization tools freely available these days, and when necessary they are used. We also have great standard libraries with most languages that make it fairly easy to choose the right types of containers and other data structures. (You can still screw it up if you want, though.)
As soon as it becomes economically necessary to write more efficient code, we will be tasked with that. I work on professional software and we do a hell of a lot of optimization. Some of it is hard, but a lot of it could be done by regular programmers if they were taught how to use the tools.
A lot of the latest revolutions (good or bad, that's up to the reader): crunching huge data, ML, ever more realistic simulations, etc., come from ever-faster machines. If that growth stops, the article suggests we do something that was (and still is, in some circles) normal in the 70s-80s with home computers and consoles. Because you could not upgrade the hardware, and almost nothing was compatible with the next generation (the most common reason the IBM PC won), you optimised the software to get everything from the existing hardware on your desk. And people are still doing that.
One of my personal miracle examples: my first love was the MSX computer, a 3.58MHz Z80 machine with (in my case) 128KB of RAM. This machine could do nice games for the time and some business applications. Many years later, that same physical hardware (I still have my first one) can do this [0]. Obviously the hardware was always capable of it, but it took many years (decades) for programmers to figure out how to get every ounce of performance and memory utilization out of these things and push them beyond what anyone thought possible.
If the improvements in performance stagnate, there is a lot of room for getting the most out of existing hardware. I would think, though, that in the case of modern hardware, the geniuses who get to that point will have some language compile to this optimised optimum instead of having to hand-code and optimise applications like the SymbOS guy did.
Case in point: a matrix library I used to use needed a full row/column pass each time. We put a layer between it and our code and reduced the lookups required by 30%. We were processing the same amount of data and getting the same results while requiring far less time. That layer also reduced memory requirements, so we could process larger datasets faster with the same hardware. That's just one example.
Your choice of CPU and other hardware isn't always the limiting factor. Even the language choice has an impact. Some languages/solutions require more data processing overhead than others to get the same final result.
Even the way your program's Makefile or module composition is set up can affect compile performance. I remember a code generator we included that had to regenerate a massive amount of code each run because its input files were marked as changed. We improved it by a ridiculous amount simply by hashing its inputs and comparing the hashes before running the generator. Simply not running that code generator every time cut 5-10 minutes off 30-minute builds. Same hardware. And it was easily triggered by a trivial file change.
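That hash-gating trick is only a few lines. A sketch of the idea (names are illustrative, not from the actual build system):

```python
import hashlib
from pathlib import Path

def inputs_digest(paths) -> str:
    """Combined SHA-256 over the generator's input files, in stable order."""
    h = hashlib.sha256()
    for p in sorted(str(p) for p in paths):
        h.update(Path(p).read_bytes())
    return h.hexdigest()

def maybe_generate(paths, stamp_file, run_generator) -> bool:
    """Run the expensive code generator only when the inputs really changed."""
    stamp = Path(stamp_file)
    digest = inputs_digest(paths)
    if stamp.exists() and stamp.read_text() == digest:
        return False  # same inputs as last time: skip the generator
    run_generator()
    stamp.write_text(digest)  # remember what we generated from
    return True
```

With content hashing, a `touch` or a checkout that rewrites timestamps no longer triggers regeneration; only actual content changes do.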
My philosophy at the moment is to use HPC only when I've exhausted other possibilities. I think many people jump to HPC prematurely. The simpler approaches are so much cheaper that I think it's usually worthwhile. I'm skeptical of the argument that it's cheaper to use HPC than it is to use more efficient methods in this case, because the more efficient methods are often something like a few days spent reading to find the right equation or existing experimental data vs. at least that much setting up a simulation and longer to run it.
Edit: Bill Rider has a bunch of blog posts that make similar points:
https://wjrider.wordpress.com/2016/06/27/we-have-already-los...
https://wjrider.wordpress.com/2015/12/25/the-unfortunate-myt...
https://wjrider.wordpress.com/2016/05/04/hpc-is-just-a-tool-...
https://wjrider.wordpress.com/2016/11/17/a-single-massive-ca...
https://wjrider.wordpress.com/2014/02/28/why-algorithms-and-...
And by the way, I believe software is not the only place that could be made more efficient. What if we removed all the legacy stuff from the x86 architecture - wouldn't it become more efficient? What if we designed a new CPU with a particular modern programming language and advanced high-level computer science concepts in mind - wouldn't that make writing efficient code easier?
Also, what are the actual tasks we need so much number-crunching power for, besides things of questionable value like face recognition, deep-fake, passwords cracking and algorithmic trading?
One of the biggest pieces of bloat I've seen is doing the same thing in multiple places, with the new feature not being an improvement over the old workflow in 90% of cases: the efficiency gained in the 10% was lost in the other 90%.
Sounds like an interesting read; do you mind sharing a link (or submitting it to HN)?
I understand that 3D has thermal issues but couldn't this be prevented by increasing (dead) dark silicon and maybe water cooling inside the 3D chip?
Not directly comparable, but brains are the state of the art of computing, and they are three-dimensional.
Cloud computing and SaaS have extended the deadline for coming up with an answer to "What comes after Moore's Law." But it is much more likely to not be based on every coder learning what us olds learned 40 years ago. Instead, optimization is more likely to get automated. Even what we call "architecture" will become automated. People don't scale well, and the problem is larger than un-automated people can solve.
Beyond that, developers being conscientious of what they send over the wire, and being just a bit critical of what the framework or ORM produces also can yield substantial gains.
I say this as a “DevOps” guy who is responsible for budget at a mid-size startup, where we’re hitting scale where this becomes important. We save about 8 production cores per service that we convert from Rails to Go. Devs lose some convenience, yes, but they’re still happy with the language, and they’re far from writing hyper-optimized, close to the metal code.
Elixir itself is an almost completely stay-out-of-your-way language as well, meaning that if your request takes 10ms to do everything it needs, it's almost guaranteed that 9.95ms of those 10 are spent in the DB and in receiving the request and sending the response; Elixir itself takes almost no CPU resources.
I worked with a lot of languages, Go/JS/Ruby/PHP/Elixir included. Elixir so far has hit the best balance between programmer productivity and DevOps happiness. (Although I can't deny that the single binary outputs of Go and Rust are definitely the ideal pieces to maintain from a sysadmin's perspective.)
Out of everything I worked with in the last 15 years I'd heartily recommend Rust for uber-performant-yet-mostly-easy-to-use language.
As for my own opinion: yes, optimization is key, but we gotta remember not to make it premature. Take advantage of the fast hardware to actually create something; once we know that the something is viable, let's refactor and optimize.
I've seen many products die simply because customers get frustrated with laggy or buggy experience and leave.
By the time the businessmen wake up, it's usually too late.
I'm lucky at work we write lots of stuff to avoid the tell/mound, but hello! where is the rest of the industry on this?
[You can use our stuff if you like, it is all public. Let's rebuild together.]
This is remarkably accurate for games as well. Insurgency: Sandstorm, for example. I was full of hope when I learned it was being developed in Unreal Engine, which supports large-scale combat much better than Insurgency's Source engine. Unfortunately, when it came out, it performed much worse than its predecessor. Working with these engines has become so easy that you don't really have to 'think' anymore and can just keep throwing stuff in.
For all the programmers out there: _how do we do this?_ I came into programming through Matlab and Python in Economics and Data Science. I don't have formal training in software engineering. I know some C, some Fortran, and have a journeyman's understanding of how my tools interact with the hardware they run on.
Where can I learn how to be extremely efficient and always treat my operating environment as resource-constrained? Am I right that the rise of point-and-click cloud-configuration hell-sites like AWS is masking the problem by distributing inefficiency? (Sorry if that's unrelated; I spent hours debugging Amazon Glue code last night and it struck me as related.)
In other words -- how can we tell what is the path forward?
Meaning there's no point in optimizing an expensive function if 99% of your program's memory and run time is spent in a different function.
This means the absolute most important skill to writing efficient software is not assembly language skills, but profiling so you know where to focus your efforts in the first place.
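A minimal illustration with the stdlib profiler (the function names here are made up for the example):

```python
import cProfile
import io
import pstats

def cheap() -> int:
    return sum(range(100))

def expensive() -> int:
    return sum(i * i for i in range(200_000))

def work() -> int:
    cheap()
    return expensive()

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# The report shows where the time actually goes: `expensive` dominates,
# so hand-optimizing `cheap` would be wasted effort.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
```

Ten minutes with a report like this beats a week of optimizing by gut feeling.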
Maybe there's no business point in optimizing those. But I feel this line of thinking got us into the current mess to begin with. Everybody is either like "we can't afford to optimize" (blatant lie at least 80% of the time btw) or "nah, not my effing job".
Plus that philosophy only really works when your business is fighting for survival in its initial stages. After you stabilize a little and have some runway you absolutely definitely should invest in technical excellence because it also lends itself pretty well to preventing laggy and/or buggy user experience (and those can bleed your subscriber numbers).
My guess is that we will slowly approach this wall and spend a lot of time trying for incremental gains, trying to avoid the inevitable, which would be the design of new chipsets with new instructions, sets of new languages explicitly designed to take advantage of the new hardware, and then tons of advances in compiler theory and technology. On top of it, very tight protocols designed for specific use.
I think we have layers upon layers of inefficiency, each using what was at hand. All reasonable things to do, in the short term, under the pressures of business. But at the end of the day we're still transmitting video over HTTP, of all things. Sure, we did it! But you can't tell me it's efficient, or even within the original scope of the protocol's design.
Naturally, I think the whole thing would run about a trillion dollars and take armies of geniuses, but it would at least be feasible, just ... it would require a lot of will. And money.
1) hardware that doesn't change. One C64 is just like every other C64 out there. You knew what the hardware was and since it doesn't change, you can start exploiting undefined behavior because it will just work [1].
2) The problem domain doesn't change---once a program is done, it's done, and any bugs left in are usually not fixed [2]. The problem domain was fixed because:
3) The software was limited in scope. When you only have 64K of RAM (at best---a lot of machines had less; 48K, 32K, and 16K were common sizes) you couldn't have complex software, and a lot of what we take for granted these days wasn't possible. A program like Rogue, which originally ran on minicomputers (with way more resources than the 8-bit computers of the 1980s), was still simple compared to what is possible today (it eventually became NetHack, which wouldn't even run on the minicomputers of the 1980s, and it's still a text-based game).
4) The entire program is nothing but optimizations, which make the resulting source code both hard to follow and reuse. There are techniques that no longer make sense (embedding instructions inside instructions to save a byte) or can make the code slower (self-modifying code causing the instruction cache to be flushed) and make it hard to debug.
5) Almost forgot---you're writing everything in assembly. It's not hard, just tedious. That's because at the time, compilers weren't good enough on 8-bit computers, and depending upon the CPU, a high level language might not even be a good match (thinking of C on the 6502---horrible idea).
[1] Of course, except when it doesn't. A game that hits the C64 hard on a PAL based machine may not work properly on a NTSC based machine because the timing is different.
[2] Bug fixes for video games started happening in the 1990s with the rise of PC gaming. Of course, PCs didn't have fixed hardware.
EDIT: Add point #5.
I can't prove it, but I intuitively feel there's a lot of spite out there. Many people are unhappy with the status quo but are also unhappy with the idea of sacrificing their resources for everybody else -- and those people will likely not only be ungrateful; they might try to pull an Oracle or Amazon and sue the creators over the rights to their own labour.
Things really do seem stuck in this giant tug of war game lately.
There isn't a single place to learn how to be efficient; it's better to start by being extremely curious about how things actually work. A scary number of people I've met don't even attempt to learn how the library functions they use actually work.
> I always try to imagine a physical character performing a task that i'm trying to code. How far does imaginary character needs to travel, how many trips do they need to make.
Dude, that's why we have optimising compilers. Functional programming is demonstrably less efficient on our imperative/mutable CPU architectures but a lot of compilers are extremely smart and turn those higher-level FP languages into very decently efficient machine code that's not much worse than what GCC for C++ produces. Especially compilers like those of OCaml and Haskell are famous for this. They shrunk the gap between FP and the languages that are closer to the metal. They shrunk that gap by a lot and even if they are not 100% there, I'm seeing results that make me think they are 75% - 85% there.
We need languages that rid us of endlessly thinking about minutiae and we must start assembling bigger LEGO constructs in our heads if we want anything in IT to actually get unstuck and start progressing again. (Of course, this paragraph doesn't apply to kernel and driver authors. They have to micro-optimise everything they can on the lowest level they can. That's a given.)
> Scary number of people I've met do not even attempt to learn how a library functions they use actually work.
I couldn't care less. How a library function works is an implementation detail. I only need to know what it does. That's why it's a 3rd-party library, after all. The creator might notice a hot path during stress tests and optimise that implementation detail into an entirely different algorithm and/or data structure. And boom, your code that relies on an implementation quirk you weren't supposed to look at in the first place is now slow or even buggy.
Compilers are what mediate between these two domains, but tend to become more bloated as they have to accommodate both more diverse hardware and more numerous languages.
This helps the working programmer ignore the problem of writing good code but only for so long. It only delays the inevitable as the returns from clever compilation can't go on forever, and in fact these returns become more volatile as hardware architectures become more complex (typically through more cores or extra caches, incurring synchronization costs). Thus for maximum performance through binaries one would have to practice tweaking compiler settings which just creates another layer of abstraction and defeats the point of having this step automated for you.
Programmer training in particular needs to become both more comprehensive and more specialized. More comprehensive means knowing how each layer of abstraction gets built up from the most common machines (like x86). More specialized means filtering out a lot of people who were trained-for-the-tool and facilitating more cross collaboration between those that can program in a domain but not program for performance. This might mean better methodologies for prototyping across domains or experimentation with organizational structures to complement such methodologies.
Functional algebraic programming as a paradigm still seems somewhat underrated to me as a way of cross-cutting conceptual boundaries and getting programmers refocused on how their code is interpreted from the point of denotation. But it comes at great risk from continuing the trend towards more redundant abstraction which is responsible for bloatware.
At that point it seems that knowing how these problems are solved without classes, types, and libraries, or at least how classes, types, and libraries resolve the complexities of just doing it using the native capabilities of the operating environment (recursing down to the point of maximal control), might be a big improvement, as it means reversing the greater-abstraction trend.
Under these discretions languages like OCaml and Rust seem to make the cut. A lot of good ideas from these languages seem to seep into the design of others. But the white whale is browser programming/web programming, as the browser has become the de facto endpoint for universal application deployment. WASM may or may not fix this. But then we just get to compilers again.
This talk did the most for developing my point of view here: https://www.youtube.com/watch?v=443UNeGrFoM Choice quotes include "If you're going to program, really program, and learn to implement everything yourself" and "At first you want ease, but in the end, all you'll want is control."
Or just take up another field. We probably need more farmers and doctors than programmers now.
I absolutely agree! I am gradually learning both and I am just getting so angry that I didn't know about OCaml like 10-15 years ago. :( I was just so damn busy surviving and being depressed for a heckton of [dumb] reasons for 15 years. And then I woke up.
Now I am just a regular web CRUD idiot dev who, even though he was very clever and inventive and creative in the past, nowadays seems to get pissed at small details like configuring web frameworks (even though I am still much better than a lot of others, I dare say -- proven with practice... or so I like to think). And now I have to work against the negative inertia of my last 15 years and learn the truly valuable concepts and how they are implemented in those two extremely efficient, if a bit quirky in syntax, languages.
But it seems every time somebody says "let's just keep these N languages and kill everything else", no discussion is possible... And I feel we really must only keep a few languages/runtimes around and scrap everything else.
I fear that only when people realized the economy needed bridges that could bear large mammals crossing at one time did they really engineer bridges to support that weight. I think the same metaphor applies to computing.
For things to go well and optimally, the pendulum should never swing to the extremes. Sure, you're in a hurry. OK. But I must protect my name and your interests, and do a good job as well. Don't make me imitate the sloppiest cut-rate contractors; just go hire them instead.
Businessmen aren't very good at compromises when it comes to techies. I am still coming to terms with that fact and to this day I cannot explain its origins and reasoning well.
There are exceptions to this, as with everything, but it's not as easy as this article makes it sound, i.e. "Just make faster stuff dummy!" There's always a cost.
The problem is that software practices have gotten so bad that a simple text messenger or email client uses at least as many resources as a program streaming HD video inside a virtual-reality environment, just to send or receive a few bytes of text now and then.
I'd be ok with losing 30-40% of the overbloated apps, because then they could be replaced with apps that don't need 2GB of dependencies to left-pad a string. We've really gone overboard on the "code reuse is great" and "don't reinvent the wheel" to the point that every program tries to include as much as possible of all code ever written and every wheel ever designed.
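The left-pad jab is easy to make concrete: the whole function is a handful of lines of standard-library code, no dependency required. A minimal sketch in Rust (the name `left_pad` and its signature are my own, chosen for illustration):

```rust
// left-pad needs no dependency: pad `s` on the left with `fill`
// until it is at least `width` characters long.
fn left_pad(s: &str, width: usize, fill: char) -> String {
    let len = s.chars().count();
    if len >= width {
        return s.to_string();
    }
    let mut out = String::new();
    for _ in 0..(width - len) {
        out.push(fill);
    }
    out.push_str(s);
    out
}

fn main() {
    println!("{}", left_pad("42", 5, '0')); // prints 00042
}
```

Pulling in a dependency tree for something this small is exactly the over-application of "don't reinvent the wheel" being complained about here.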
Dude, I agree to lose at least 80% of them, most are useless and with bad UX on top of that. Even worse: they are distracting.
At some point hiring the programmers to pour software by the kilogram becomes a visible problem -- when the businessmen wake up to the fact that the amortised cost of a job sloppily done (say, over the course of the next 2 years) is much higher than investing 20-30% more upfront. That's what the article is arguing for, IMO.
I'll also reminisce a bit: back in the 2000s, my 266MHz, 64MB, 4.1GB-HDD PC would let me install a full-featured third-person adventure game (Legacy of Kain: Soul Reaver, for example) worth nearly double-digit hours of play; currently it takes twice the disk space to install a basic platformer giving 1-2 hours of fun. Every new game lags to hell on a new PC because I opted for a one-year-old graphics card. I can view a PDF nicely with SumatraPDF, yet Adobe Acrobat Reader takes triple-digit MB to offer the same feature and 5x more time to start. I could use IRC in the 2000s, while Slack takes all the RAM I have. A website back in the day would be a few kB; people here frequently compare HN with Jira, or note how funny it is that Netflix has to spend engineering effort improving time-to-first-render on its landing page, which is static!
Those are facts, but the comparisons aren't great: Soul Reaver vs. Assassin's Creed is a bad matchup, because back then people didn't mind if grass was a flat texture or the hero looked like walking cubes. SumatraPDF can open a PDF, but Adobe Reader gives me annotation, form filling, signing, etc. NFS2 was just racing; NFS: Heat players demand customizable exhaust-gas color. The Netflix home page loads more images combined than any page "back in the day" and must adapt to big and small screens so it looks great everywhere. Jira lets me drag-and-drop a ticket, while updating the same ticket back then took three times as long across several form refreshes. HN is the simplest CRUD; it just lets me vote and post basic text, and heck, it delegated search to Algolia (a different service)! The features Slack offers would require 5-7 extra services if I were to use IRC.
But that kind of reality doesn't get posts upvoted, so instead it's always rants about why WhatsApp needs more resources than the SMS app when both let me send text to someone else.
Anyway, things change over time. In the 2000s, my PC would lag if I opened MS Word while Windows Media Player played HD video, and a game would crash if I tabbed out of it to check something. But now I have 20+ tabs open that live-update stock tickers, with pages infested by hundreds of ad-tracking scripts, while a tiny window plays the news in a corner as I type away happily in the IntelliJ IDE with an ML model training in the background. Now I can also record an HD version of my gameplay and tab out, too. I think in the future complex development will move to the cloud; we'll probably have high-speed internet everywhere and online IDEs or similar, so everything happens in the cloud. Just as a 4GB HDD cost a fortune in the 2000s while the same price gets me 100x the capacity now, cloud resources will improve while prices go down. :)
However, saying that things are just fine today is not strictly true either. You are mostly correct, but there's a lot of room for improvement, and some ceilings are starting to be hit (people regularly complain that Docker pre-allocates 64GB on their 128GB-SSD MacBooks, or that Slack alone kills the MacBook Air they use only for messaging while traveling). And still nobody seems to care, and then people like you come along and say "don't complain, things were actually much worse before".
...Well, duh? Of course they were.
But things aren't as rosy today as you make them look. Not everybody has ultrabooks or professional workstations. I know some 50 programmers who are quite happy using MacBook Pros from 2013 to 2015. Those machines are still very adequate today, yet it's no fun when Slack and Docker together take away a very solid chunk of their resources, for reasons not very well defined (Docker, for example, could have preallocated just 16GB or even 8GB; make the damn files grow over time, damn it!).
---
TL;DR -- Sure, things weren't that good in the past, yeah. But the situation today is quite far from perfect... and you seem to imply things are fine, which I disagree with.
(BTW: thanks for the nostalgia trip mentioning Legacy of Kain! They'll remain my most favourite games until my death.)
I make this point as someone whose job is Haskell. Too many people expect awesome magic sauce and basically write the same old imperative stuff in functional programming languages: not in the small, but in the large. There's still plenty of benefit to using a good language for that, but you won't get zomg auto-parallelism.
It's quite comical and sad to watch at the same time.
I agree with the article's title: we really need a new breed of programmers.
I'm new to FP myself, and it seems like, done wisely, it simplifies multithreaded, parallel processing quite a bit.
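The simplification comes from purity: when work units have no side effects and share no mutable state, splitting them across threads needs no locks at all. A minimal sketch in Rust's functional style (the function name and the half-and-half split are my own choices for illustration):

```rust
use std::thread;

// Sketch: sum of squares, split across two threads. Each half is a
// pure computation over its own data, so no synchronization is needed.
fn parallel_sum_of_squares(xs: Vec<i64>) -> i64 {
    let mid = xs.len() / 2;
    let (left, right) = (xs[..mid].to_vec(), xs[mid..].to_vec());
    // The spawned thread owns `right` outright; nothing is shared.
    let handle = thread::spawn(move || right.iter().map(|x| x * x).sum::<i64>());
    let l: i64 = left.iter().map(|x| x * x).sum();
    l + handle.join().unwrap()
}

fn main() {
    println!("{}", parallel_sum_of_squares(vec![1, 2, 3, 4])); // prints 30
}
```

The caveat from the Haskell comment above still applies: you get this only if the code is actually written functionally; imperative code with shared state dressed up in FP syntax parallelizes no better than it did before.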
I remember how well and how fast software worked 20 years ago. Today I have to reboot my telephone to make a call.
I think you are looking at the past with rose-tinted glasses. The software I remember from 20 years ago was generally slow, clunky, unstable, and often didn't work very well.
Open source as we know it is the perfect playground for trying out new technologies, just for the sake of it or for building resumes. This is exactly "reinventing the wheel" as you say it.
Lastly, I'm not sure there's a link between open source and software quality.