And yet the big boys just can't help themselves. I've been doing a bit of work with Google Sheets the last few months, and even in that time the user interface has changed a few times. Only in small ways, will be the offered excuse.
People are not just playing with this software, they're trying to use it. Usually as a minor component of some workflow where other things occupy working memory and attention.
That's exactly how I see Go, and why I personally use it, even though it has clear limitations and can be frustrating at times. And the language gets quite a lot of hate because of these choices and their consequences. Minimalistic often means that you have to implement your own solution to some common problems, and/or write a good amount of boilerplate. Stable means that you don't change things much over time, even as trends shift and new patterns/paradigms are developed, so you're potentially losing people and/or powerful techniques (you also avoid the ones that haven't been proven yet and may come to be seen as harmful in the future).
On the other hand you have the real benefits of having something small that you can easily keep in your mind: a tool that won't eat into your maintenance budget, one you can learn once and then be good with for years without feeling the need to constantly catch up with the newest changes and ideas. That's quite relaxing IMHO.
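To make the "boilerplate" point concrete, here's a minimal sketch of what minimalism meant in pre-generics Go (before 1.18): no built-in `min`/`max` for integers, so every codebase wrote or copied per-type helpers like this (the helper name here is made up for illustration):

```go
package main

import "fmt"

// minInt is the kind of one-off helper pre-generics Go codebases
// repeated for each type, since there was no generic min/max.
func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(minInt(3, 7)) // prints 3
}
```

Trivial, yes, but multiply it by every utility function and every type and you get the "good amount of boilerplate" described above.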
The upper case languages. C, AWK, SQL, COBOL, FORTRAN (yes yes I know the latter two like to be written in mixed case these days).
Also Ada ... hmm, perhaps not so minimalistic.
And Scheme.
The tradeoff is that a small language can make your programs big, hard to read, and hard to keep in your head. Abstractions like polymorphism, monads, recursion schemes or lenses add (what initially seems like a lot, but later feels like) a little to the language, in return for (if used appropriately) immensely cutting down on the amount of code you end up writing.
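Polymorphism is the mildest item on that list, and even a toy sketch shows the shape of the tradeoff (Go is used here purely for illustration; the types are invented):

```go
package main

import "fmt"

// One small abstraction -- an interface -- lets a single function
// handle arbitrarily many concrete types, instead of duplicating
// the summing loop per type.
type Shape interface {
	Area() float64
}

type Rect struct{ W, H float64 }
type Circle struct{ R float64 }

func (r Rect) Area() float64   { return r.W * r.H }
func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }

// totalArea works for any mix of Shapes, present or future.
func totalArea(shapes []Shape) float64 {
	var sum float64
	for _, s := range shapes {
		sum += s.Area()
	}
	return sum
}

func main() {
	fmt.Println(totalArea([]Shape{Rect{W: 2, H: 3}, Circle{R: 1}})) // 6 + pi, roughly 9.14
}
```

The language feature is tiny; the saving is that adding a new shape never touches `totalArea`. Monads and lenses make the same bet at a larger scale.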
Take my recent experience with Gogs and Gitea as an example: both not just incidentally come out of the Go world, but also consciously brand themselves as "a painless self-hosted Git service". The latter abandons this, and the gestalt of Go, right out of the gate. Builds are not in any sense fast, it depends heavily on the NPM ecosystem, and there are other issues. The former is the original, and supposed to eschew all the "move fast and break things" attitude of the latter, right? Try building it and it fails. The fix was upgrading to 1.14, which at the time was the latest version of Go and had only been released a few months earlier. So then you build the binary on your local machine, move it to the VPS, and find out that you can't set it up because sqlite support hasn't been built in: that requires building with cgo, which proves to be a real pain in the neck on a different system, so you're better off doing it on the system it's going to run on. So much for Go's thoughtful compiler architecture. So you re-clone the repo, this time on the remote system. And of course that won't build, because (remember?) it needs the latest version of Go. And now you're running as a non-root user on a system with a system-level go binary, so that's going to involve twiddling symlinks and/or your PATH so `go` invokes the right version.
I say all this with an awareness of the tremendous unrest against the trajectory of Go's "simplicity" from within its own fanbase, and the Go team's capitulations over the last year or so. As the blog post lays out, this is not unique; it's the way mainstream software development goes in general. The reason is what I said before: it's the "economy", the interactions of a bunch of stakeholders.
What seems to be the problem with every system, no matter what tech stack it's built on or the principles it set out to embody at the beginning, is that it tends towards consultant-driven, devops-scale complexity. Intentional or not, there's an invisible force that moves things in the direction of maximizing "payoff", which most often means literal money ending up in the hands of the people putting up with all this stuff. Pretty much everyone in the workforce benefits when their line of work accumulates more esoterica, thus proving the value of consultants and others who derive their paycheck from being wizards taming incomprehensible systems. Even well-meaning, non-nefarious actors are susceptible to it. What also happens is that most developers get so far up their own ass with respect to their chosen tech stack that they're not able to see the problems their "progress" is causing, and they've usually got a whole community egging them on. This isn't too far off from what we've seen with Twitter-driven polarization in politics, where everyone is able to find a set of likeminded people telling them that they're doing and thinking the right things.
...wait what?
That link discusses it. Seems there's a heat issue when charging on the left side which results in an impact on performance.
Connecting to WIFI takes me 8 clicks! As someone who regularly goes to cafes, this is very frustrating:
1. Open the generic top-right menu
2. Open the wifi submenu
3. Turn on (the modal closes automatically!)
4. Open the generic top-right menu again
5. Open the wifi submenu
6. Select Network (a popup opens)
7. Click the desired network
8. Connect
KDE would have been the alternative.
I really don't get the hate against Unity.
Right now I'm using Ubuntu Mate 20.04 since Ubuntu 20.04 won't run on my machine. After a fresh install it just boots up into a black screen, and I can't ctrl-alt-f2 my way into a terminal, etc.
I've always found KDE pretty decent.
There was a very vocal minority of shouties in various forums who spewed their venomous hate at Unity, but by and large most people who tried it really liked it. I still get very positive feedback from non-technical users even today when they find out I was heavily involved in that project.
The criticism levied in the feature article is the same tired old one that boiled down to "I didn't like it because it wasn't the Microsoft Windows I used when I was first learning." There is always a certain merit to the "all change is bad" argument, but since it's entirely based on visceral reaction and not technical merit or rational discourse, it can be difficult to use to convince others without appearing petulant.
Supposedly, it’s also super nice on mobile and tablet; I didn’t get a chance to try it.
If you want GNOME 2, Mint is still maintaining it as MATE, and it’s in the Ubuntu repositories.
I'm dreaming that by focusing on correctness one could reduce the maintenance churn that leads to various other spurious changes. But I don't know whether any of those languages are really suitable for "real world" use, nor whether they provide the dramatic reduction in bugs one would hope for.
> F* (pronounced F star) is a general-purpose functional programming language with effects aimed at program verification
I haven't used either, so that's why I'm asking. Certainly, going by their marketing, F* seems very practically oriented, fusing common sensibilities from F# with formal methods.
1. Sustainability
2. Chaos
3. Reorganization
Programming in the large has a natural-ecosystem quality to it. It resists standardization and easily falls into patchworks. So I have come around to the idea that one should embrace the change and discover points of stability by accident. The sustainable part happens when the system survives chaos and is sufficiently flexible to be reorganized, i.e. there is a benchmark for passing the test.
Long story short, it doesn't come easily by design. Designing small and designing to retarget your output are good ideas, because that reduces two forms of change cost. But we trip over the problem of having immediate solutions at the cost of complexity and single-use implementation. Designing for extension turns your system into a platform, which gives it a definite kind of lifespan and ability to address new problems.
I worked with Go for a while and gradually got fed up with the accumulation of little issues. I have come to the conclusion that Haxe, and most transpiled languages, actually do the job of sustainability better than Wirthian languages, because being retargetable gives your code some insulation from platform changes. The intent is preserved, and the bulk of breaking changes occur outside the compiler tooling. A cost is paid in having to debug multiple times, often at a distance from the original source, and in having imperfect access to platform features. But that is a much smaller thing than a codebase with hidden dependencies, which constantly sneak into native code systems and make VM language runtimes grow over time.
My thoughts on Eclipse the entire time I used it was "why can't this be as fast and reliable as VB6?"
Visual C++ 6.0 vs Visual.NET
Visual Studio 2010 vs Visual Studio 2015
[1] https://markwatson.com/blog/2005/08/04/ancient-software.html
That is, yes, there is room for stable languages, but it’s already fully occupied. So only languages targeting a market whose mindset is open to "better to keep moving continuously than risk sclerosis" can flourish.
Now, that might be a bit of a caricature, of course.
Most, if not all, code from 5 years ago runs with no changes whatsoever.
The stewards of Clojure heavily advocate and practice “stable” development where backwards compatibility is a requirement. Combined with the JVM's famous backwards compatibility, this makes the whole enterprise a very stable foundation for building stuff.