The Apple Developer Program is only needed for macOS if you want to sign your binaries or distribute through the Mac App Store. And you only have to pay Microsoft if you want to publish to the Microsoft Store (or use Visual Studio at a company with more than 5 Visual Studio users, more than 250 computers, or more than $1 million USD in annual revenue).
> buy certificates for signing binaries
Fair (though both Windows and macOS will run apps that haven't been signed, with more warnings of course).
> share 30% of my revenue with them for barely any reason
Only if you use their stores (Mac App Store or Microsoft Store), and it looks like the Microsoft Store won't take any cut if you do your own payments and it's not a game.
As for Apple, I don't know, but I suspect you can make Mac applications without a developer account. You do need a developer account for iPhone. It was $99 a year the last time I looked, which is not a lot of money if you are serious about making an application.
Compare with the web where LetsEncrypt just works without demanding a king's ransom.
As for the APIs, it is very easy to get into dependency hell between all the different UI technologies, .NET implementations, and target systems. Want to develop a brand new plain-old GUI app? Probably simple (although I've never tried, the web is right there). Need to develop a plugin for an existing application, or a new app for something like Hololens? Have fun.
It is a lot of money when you consider it should be free and serves exactly no purpose.
Maybe not for you.
Set up CC processing on the web:
How much are you going to pay Stripe? 2.9% + 30¢ ... that means you have to charge 10 bucks just to get down to a ~6% transaction fee. Quite the price floor, and an interesting cap on your pricing model!
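A quick sketch of that arithmetic, using the 2.9% + 30¢ schedule quoted above (the fixed 30¢ is what dominates at low prices):

```python
def effective_fee_rate(price_usd: float) -> float:
    """Fraction of the sale taken by a 2.9% + $0.30 fee schedule."""
    return 0.029 + 0.30 / price_usd

# At $1 the fee eats ~33% of the sale; you need to charge ~$10
# before the effective rate falls to roughly 6%.
print(round(effective_fee_rate(1.00), 3))   # 0.329
print(round(effective_fee_rate(10.00), 3))  # 0.059
```

The fixed per-transaction fee is why percentage-only comparisons mislead at low price points.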
What does managing chargebacks cost you? The moment you're taking money, you're going to hire in customer service, or spend time dealing with CS yourself. What happens when you get a chargeback, or do a refund? Most of the time you lose money (processing fees, etc.)
If you're under a million bucks a year, Apple takes 15%. If you're building a low-price app or a value-add app, odds are that Apple is going to be a far better deal for you than doing it on your own.
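For what it's worth, the break-even price works out like this (a rough sketch that ignores chargebacks, CS time, and tax handling, and assumes the 2.9% + 30¢ processor fee quoted upthread):

```python
# Price p at which Apple's 15% cut equals a 2.9% + $0.30 processor fee:
#   0.15 * p = 0.029 * p + 0.30
breakeven = 0.30 / (0.15 - 0.029)
print(round(breakeven, 2))  # 2.48 -- below ~$2.48, the flat 15% is the smaller fee
```

So purely on fees, Apple's small-business rate wins only for very cheap items; the real argument above is about the bundled CS/chargeback/tax handling.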
Chargebacks = customer support. I agree with that, but if you have a B2C business with any non-trivial revenue (OP is talking about word doc apps, so we’re obviously not talking about indie $2 side project apps), then you would already have CS anyway. I fully understand there is an opportunity cost with any service and where those costs get realized, but your examples don’t seem like a slam dunk in Apple’s favor.
How are chargebacks managed by Apple? I doubt they are swallowing the cost on their side, so I don't really see the difference from what you'd get with a bank: you're losing the money anyway.
And once your rate goes to 30%, does it stay there the following year, or does the whole system reset to zero each year?
How do you calculate a price for not being able to release your main product? Usually without clear indications of what exact interpretation of a rule you are breaking...
We've had delays of a week over things like mentioning "Android" in an integration setting that had been there for years.
?
Your math seems to show the exact opposite.
I'd happily build iOS apps without Xcode or any of Apple's frameworks to save the 30% fee. Heck, I'd do it even if I still had to pay the 30%; I hate being forced to use Xcode.
That said, I don't know about Mac, but you can build apps using free tools - maybe not in as convenient a way, but you certainly can.
I remember, because I was someone who couldn't afford Visual Studio licence and had to make do with GNU tools.
The greed of these companies put me off from developing anything.
Does the App Store collect sales tax and remit it on your behalf? If it does, then I think it's worth it; otherwise, registering in both the EU and UK ($0 tax threshold), as well as in 50 US states (once you hit the allowed limit), will take you a long time.
Apps like that get made anyway but as it stands at least there’s a healthy crop of smaller/indie native alternatives which often best the behemoths in UI/UX. That would likely disappear with the addition of a standardized UI API, as it would probably also come with the abandonment of the old specialized APIs.
[1] Yes, one can use Qt for commercial software without buying a license (as long as it is dynamically linked), but their marketing does everything it can to hide that fact. Also, the newer additions to Qt do not fall in this category – for those, you have to pay.
That's not realistic for Apple users who are used to ergonomic software. It's not technically required to notarize, but practically speaking, it is.
Choosing overly complex web frameworks is still a guilty pleasure of too many projects.
Once the beancounters at the rent-seeking companies (Apple, Microsoft, …) have figured out that web development is where all the money is, this will change rapidly. Google has already started gatekeeping the web via Chrome.
Considering the whole point of having Windows is to use apps, I'd expect them to have made the process super smooth.
I'm old enough to remember when buying development tooling for DOS or Windows was $$$$$$
Today Apple takes a percentage of every dollar made by application developers who participate in their App Store, and they are making it increasingly difficult to avoid this with every release. IMHO, they are making far more dollars today than they ever did selling development hardware and SDK licenses.
This is an important subject, thus it's one for which clickbait is generated.
Size is a problem. I watch my Rust compiles scroll by and wonder, "why is that in there?". I managed to get tokio out, which took some effort. The whole "zbus" system was pulled in because the program asks whether the user is in "dark mode". That brought in the "event-listener" system.
Lately, "bash" in a Linux console has become much slower at echoing characters. Did someone stick in spell check, or an LLM for autocomplete, or something?
Obviously your statement is true about most other sites, but I thought it was an odd thing to say about a platform that famously doesn’t serve ads.
I wish Mozilla or Google or someone aggregated statistics for cpu/memory/energy usage by domain to shame devs who clearly don't otherwise care.
The web feels like 2005 again. Only thing is, this time the popups are embedded in the page...
Software has been freeriding on hardware improvements for a few decades, especially on web and desktop apps.
Moore's law has been a blessing and a curse.
The software you use today was written by people who learned their craft while this free-ride was still fully ongoing.
Now, just a decade later, a computer with less than 8GB of RAM is unusable, and a computer with 8GB of RAM is barely usable. Every new piece of software uses Electron and consumes roughly 1GB of RAM minimum! Browsers consume a ton of RAM; basically everything consumes an absurd amount of memory.
Not to mention Windows; I don't even know how people can use it. Every time I help my mother, her computer is so slow, and we're talking about a recent PC with an i5 and 8GB of RAM. It takes ages to start up, software takes ages to launch, and it takes an hour if you need to do updates. How can people use these systems and not complain? I would throw my computer out of the window if it took more than a minute to boot up; even Windows 98 was faster!
Addition: consider also how few resources these applications used, and how, if they were able to run natively on contemporary systems, they would have minuscule system demands compared to their present equivalents with only somewhat less capability.
I think that is some kind of fallacy. We are doing the same things but the quality of those things is vastly different. I collect vintage computers and I think you'd be surprised how limited we were while doing the same things. I wouldn't want to go back.
Although I will say your experience with Windows is different than mine. On all my machines, regardless of specs, start up is fast to the point where I don't even think about it.
I pulled down an audiobook player the other day; once all dependencies were met, it needed 1.3GB to function! At least VLC is still slim.
> I would throw my computer out of the window if it takes more than a minute to boot up, even Windows 98 was faster!
Sure, Windows has grown a lot in size (as have other OSes). But startup is typically bounded by disk random access, not compute power or memory (granted, I don't use Windows; if 8GB is not enough to boot the OS, then things are much worse than I thought). Have you tried putting an SSD in that thing?
(And yes, I realise the irony of saying "just buy more expensive hardware". But SSDs are actually really cheap these days.)
Well said. I believe many of the "hard" issues in software were not "solved" but worked around. IMO containers are a perfect example. Polyglot application distribution was not solved, it was bypassed with container engines. There are tools to work AROUND this issue; I ship build scripts that install compilers and tools on users' machines if they want, but that can't be tested well, so containers it is. Redbean and Cosmopolitan libc are the closest I have seen to "solving" this issue.
It's also a matter of competition: if I want users to deploy my apps easily and reliably, container it is. Then boom, there goes 100MB+ of disk space, plus the container engine.
It's really only Linux where you have to ship a complete copy of the OS (sans kernel) to even reliably boot up a web server. A lot of that is due to coordination problems. Linux is UNIX with extra bits, and UNIX wasn't really designed with software distribution in mind, so it's never moved beyond that legacy. A Docker-style container is a natural approach in such an environment.
As long as we have COMPETITION as the main principle for all tech development — between countries or corporations etc. — we will not be able to rein in global crises such as climate change, destruction of ecosystems, or killer AI.
We need “collaboration” and “cooperation” at the highest levels as an organizing principle, instead. Competition causes many huge negative externalities to the rest of the planet.
Cooperation with no competition subtracts all urgency because one must prioritize not rocking the boat and one never knows what negative consequences any decision one makes might prove to have. You need both forces to be present, but cooperation must also be the background/default touchstone with adversarial competition employed as a tool within that framework.
Every modernization (hardware and framework) in software is a tax on the underlying software in its functional entirety
Great take. It feels like the path of least resistance peppered with obscene amounts of resume driven development.
Complexity in all the wrong places.
It wasn't supposed to be like this, but it looks like most people still haven't found the way.
So, misguided efforts, wasted resources, and technical debt piles up like never before, and at an even faster rate than efficiency of the software itself declines on the surface.
We use JITs and GPU acceleration and stuff in our mega frameworks, and maybe more importantly, we kind of maxed out the amount of crazy JS powered animations and features people actually want.
Well, except backdrop filter. That still slows everything down insanely whenever it feels like it.
Developers certainly like to have their completely integrated, connected and universal computing platform (the web). And users do not seem to particularly care about performance as long as it is good enough. And that is exactly the standard that is set, software is allowed to be so bad that it doesn't really annoy the user too much. Management doesn't care either, certainly creating good software isn't important when good enough software has already been developed.
Sure, I would like things to be different, but until one group decides that a drastic departure is necessary, nothing will change. There are also no real incentives for change, from any perspective.
If you feel that people don't care about performance and download size, you may be asking the wrong people the wrong questions.
This is not exactly a new phenomenon. People have been complaining about software bloat since at least the mid-1990s. I suspect someone older than me would gleefully explain that the complaints went back to the mid-1980s, mid-1970s, etc. Eventually it gets to the point where only outliers will complain. Everyone else will simply upgrade, put up with the bloat, or stick with old software.
Then there is the question of whether the bloat is worth the benefits. If Docs were simply a clone of Word, few people would have adopted it. Some people use it because it is free, others because they want to work on or access their documents from various devices, yet others because they want to collaborate on documents seamlessly. If you're getting something out of the bloat, you're less likely to think of it as bloat.
We also have to consider that some bloat isn't really bloat. It's easy to point to AppleWorks on the Apple II and bemoan how modern word processors require about five orders of magnitude more resources, while ignoring how resource-intensive the niceties are. Want to use proportional fonts that look nice at any size and have them rendered properly on the screen? That's about three orders of magnitude more video memory, plus additional CPU use for rendering the text, etc. I'm using that example since it is something people can actually see. Now consider the things they cannot see (such as working on documents larger than the computer's main memory, the memory required for Unicode fonts, the ability to switch between the working document and research notes, memory protection to prevent an ill-behaved application from wiping out all of your work). Yes, bloat exists. On the other hand, a lot of the increased resource use is actually quality-of-life improvements.
Apps today are no more bloated than they were last century, while we gained a lot of functionalities that would have been considered witchcraft in the days of Win98.
Web developers do of course, but I've hardly touched web development myself. Web interfaces etc., are a choice, but I think it's driven by commercial needs-- a desire for subscription revenue instead of one-time sales, etc.
Much of the modern cloud-based or half-online world is quite unnatural from a user perspective, and where there is no need for monetisation -- for example with OpenOffice -- the software can be expected to remain a desktop application.
This is certainly a big part of it, but it's not the whole story.
For one thing, there were ways to achieve those business models with native software too - "web-based=subscription" isn't actually a requirement, or the only way to go.
But users in the early 2000s, many of them more technical than users today, rejected this idea. It felt like native apps had to cost money one time, and doing otherwise would be wrong. But with Web, since users understood that it was hosted elsewhere, it "made sense" for it to be a subscription, so users went with it.
This also affected the technical things as well. Auto-updating was incredibly frowned upon in the 2000s - you bought it, you got to keep it as-is. So companies had to work very hard to keep multiple versions working all the time.
Most of these biases by technical users have gone away. We now have auto-updating subscription native apps, e.g. Photoshop works that way today. But these technical biases drove the usage of the web, because it was so much technically easier for businesses, and allowed much better business models.
(And, and this isn't even getting into the whole "installing software was really hard for users" thing!)
Nobody complained about that. In fact, only a few people complained about the portions of the app that had abysmal performance. It often wasn’t until 60-second load times that customers started complaining.
They still raved and raved about that software because it solved an extremely valuable problem for them. A job that took literally a week could now be done in minutes.
As the space heated up, we needed to improve some things, but most people literally did not care. It would always be stack ranked last in the list.
The problem with the long startup is that it tends to cloud any discussion on performance. Code loading and parsing is basically the biggest bar in the app-perf breakdown of your profiles, and thus spins this narrative that this is the thing to optimize for, because it's the biggest bang. Rather than say, responding to user selections, reducing jitter and sluggishness while scrolling, etc...
I'm starting to believe that for a large class of apps, developers should look at it as if they are writing video games: the user will tolerate the spinner before the level, but then it needs to be silky smooth after. And the _smooth after_ requires a whole class of other optimizations; it's striving for a flat memory profile, it's small ad-hoc data transfer, it's formatting data into usable layout at lower levels in the stack, it's lazy loading of content in the background, etc. Those are the areas where web devs should be looking.
(again, this only applies for that sort of SPA; e.g read-only content, blogs and such, should display _fast_).
This is a big point isn't it.
We seem to think that customers are choosing "slow" over "fast", when a lot of times they are really choosing between "slow" vs "manual" (i.e. very very slow)
It is always fun to pull up the task manager on these and see them using 20-30MB of RAM, with a large part of that being the current database loaded in.
VLC & Blender are other examples of this.
It's kind of funny to imagine this parallel world where you send a PC of today back to the 70s. Whichever government got their hands on it would be keeping it ultra classified and hiding it away, like it was some device, too dangerous for the public, that could computationally solve any problem imaginable, create anything imaginable.
That's different from us devs losing efficiency from our deployment platform.
The reality is that these are all business decisions:
1) Move to the cloud because the business likes the steady payout of subscriptions. Business customers love not having to hire IT teams and demand six 9s of uptime because it is someone else’s responsibility. But performance needs to just be acceptable to end users.
2) Customers refusing to upgrade on-premises software, that led to long maintenance cycles and endless patches
3) Developing once for the web vs. multiple times for different platforms – each needing its own developers and testers.
No amount of expertise on the part of developers is going to address these fundamental forces.
After a certain period of time, that software worked just fine for those customers. Photoshop is a great example. Sure, you won’t get the flashiest features, but CS4 will still work for you on a Win7 machine without any additional fees paid.
It did pretty much everything it does now, only lacked a grammar checker. (WordPerfect had one.)
Now we measure things in GB units. 1000X bigger, but what was gained?
We not only lost the way, we don’t even know the destination any more.
Functionality and graphics.
For instance 'dict.words' alone on Linux is 4.8MB. Arial Unicode is a 20MB-ish font. The icon for an application I work on is 400K. The Google Crashpad handler for handling crashes is somewhere around several MB.
A 4K true-color display's framebuffer is about 162 times larger than 640x480 at 16 colors (24 bpp vs. 4 bpp).
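The ratio depends on the bit depths you assume; at 24 bits per pixel for true color and 4 bits per pixel for 16 colors:

```python
# Raw framebuffer sizes, in bits
bits_4k  = 3840 * 2160 * 24  # 4K true color, 24 bpp
bits_vga = 640 * 480 * 4     # 640x480 at 16 colors, 4 bpp
print(bits_4k // bits_vga)   # 162 (216 if you assume a 32 bpp 4K buffer)
```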
With your examples, it could be:
- introducing a global spell checker.
- having emoji?
- fixing blurry icons?
- being able to search through crash logs?
- not having to switch between windows.
Do we need GBs instead of MBs for that? Why? Was that problem not fixed already? Could we not fix it in a way that didn't demand magnitudes more resources?
I'm asking because I highly doubt there's a technical reason that requires an improved piece of software, or a solved problem, to consume orders of magnitude more resources.
Sure, slack is far superior in UX to IRC. But could we really not get that UX without bloatware hogging my CPU, taking hundreds of MBs installation size and often biting off significant chunks of my memory? Is that truly, technically impossible?
A Word document isn't just text and some formatting sigils. Editing isn't just appending bytes to the end of a file descriptor.
It's a huge in-memory structure that has to hold an edit history so undo and track changes work; the spelling and grammar checker needs to live entirely in RAM since it runs in realtime as you edit; and the application itself has thousands if not millions of allocated objects, for everything from UI elements to WordArt. The rendering engine potentially needs to hold a dozen fonts in memory, not just for the system but for any fonts specified (but not immediately used) by the base document template.
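A toy sketch of one of those structures (hypothetical, not how Word actually implements it): an undo stack has to keep every past edit alive, so memory grows with editing history, not just with document size.

```python
# Hypothetical minimal editor: each edit is retained as a live object
# so it can be undone later. History length, not document length,
# drives the memory footprint.
class Document:
    def __init__(self):
        self.text = ""
        self.history = []  # every edit kept alive for undo

    def insert(self, pos: int, s: str):
        self.history.append(("insert", pos, s))
        self.text = self.text[:pos] + s + self.text[pos:]

    def undo(self):
        op, pos, s = self.history.pop()
        if op == "insert":
            self.text = self.text[:pos] + self.text[pos + len(s):]

doc = Document()
doc.insert(0, "Hello")
doc.insert(5, " world")
doc.undo()
print(doc.text)  # Hello
```

A real word processor adds track changes, styling runs, and collaborative merge metadata on top of this, each retained per edit.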
It's not like Google Docs is any lighter on RAM than Word. Features require memory. Fast features are usually going to require more memory.
People can use AbiWord if they want a much slimmer word processor. They could also just use nano and Markdown if they wanted even slimmer. But a lot of people want document sharing over the Internet with track changes, grammar checking, and the ability to drag in and edit an Excel spreadsheet.
The features used in native software follow a bathtub curve. But not just one but several. No two groups necessarily use the same sets of advanced/uncommon features.
> Functionality and graphics.
And massive amounts of telemetry.
But now functionality is moving to the cloud, we'll just be stuck with gigabytes for graphics and telemetry.
Having used that version of Word when it was the latest one, I can say the current ones have quite some added functionality (lots of very tiny things, and a few bigger ones), but I'm totally sure the same could be done with 10x less memory usage if MS cared about it. But there's no incentive to do it. Computers are faster and have lots of memory, and we don't depend on floppy disks any more. It would just cost them more money. I'm not saying this is a good thing (I think the opposite, especially as I'm starting to think that software bloat might have a non-negligible environmental impact), but as long as nobody complains strongly enough (or as long as the EU doesn't come out with an anti-software-bloat law... seems just a dream, but who knows), that won't change. And I can clearly remember having the same bloated-software feeling when I tried Office 2000 or XP, compared to Office 97, so there's nothing so new here.
As a final note, I've recently seen the source code of MS Word for Windows 1.0 on a GitHub repo (MS released it, see original release on Computer History Museum: https://computerhistory.org/blog/microsoft-word-for-windows-...). It was pure C, with even very large parts of code written in assembly! But the code is really ugly... totally incomparable to current C or C++ coding standards, patterns and language capabilities.
I once saw it described that software is like a gas - it expands to take up the space we now have.
You see it with live distros too. They used to be 700MB to fit on a CD-R, but now it's getting rare to find one that'll fit on a 2GB USB stick; although yay for 'minimal' images gaining ground.
Prime example of "Hardware is cheap and inefficiencies be damned"
It just takes me longer to use, finding what I want among the bloated set of other things that were added.
Hah! Good one. It's unfortunate, but true.
In terms of frontend, which the post focuses on (Google Docs and a 30MB doc), I guess I'm conflicted. While I tend to favor native apps + web pages, I'm also a daily Tiddlywiki user, and I really think web apps have their place (heck, one idea I'm working on is a lightweight local server that lets you run web apps like Tiddlywiki). But without a doubt, Tiddlywiki is more resource intensive than Emacs (my go-to for notetaking when I'm not on TW). My tab for a 6MB Tiddlywiki file uses 155MB of RAM, and my (heavily customized, dozens of open buffers) Emacs session uses 88MB. So I do think the author has a good point.
1. Company executive decides their developers need top-of-the-line hardware to remain competitive in today's market
2. Developers make web apps on their company-provided M5 Ultra Pro Max 128GB RAM powerhouse laptop
3. They never test it on their father's old 2010 family PC, or at least they don't test often/thoroughly enough to realize many parts are broken or unusable
I develop using Dev Tools set to mobile and throttled connection. This way mobile-first responsiveness (limited screen-estate) and potential problems with bad connections are first class citizens.
Now, what usually happens is that I signal problems to the product owner and they wave them away. So I might update your third point that sometimes they do test it on their father's old 2010 family PC but it's not of concern to more relevant stakeholders.
I used to do the same on iOS, but came to find that performance differences on older devices there generally weren’t nearly as severe and that iOS users as a whole tend to use newer devices. When combined with reasonably well written Swift, performance on old devices generally isn’t a problem.
It was suggested to only display the first 100 items, and to require the user to type at least 3 characters before it started rendering.
Unfortunately this is the reality for many these days.
Of course instead we just fixed the shitty react code and it rendered instantly.
If the new frameworks make the problem blindingly obvious so that someone can actually justify fixing it, all the more reason to use those frameworks.
As long as they’re sorted and I can jump with the keyboard, that bare-ass drop-down is probably going to “just work” with default behavior. Anything further and we don’t know the intended use case for the element itself, but on the surface… it could be fine.
We used to have developers who took less time and wrote better code.
This is the flagship product of one of the largest companies, and even they cannot get UI performance right...
The rest are simply modifiers on that value. A more intuitive UI allows users to gain value more efficiently. Performance allows users to gain value more efficiently.
Efficiency is important, but Value more so.
The rest of us and the vast majority of professional SE work for a marketing/sales person and “number of new features released this Friday” as you said.
"The classic response to accusations of bloat is that this growth is an efficient response to the additional resources available with improved hardware. That is, programmers are investing in adding features rather than investing in improving performance, disk footprint or other efforts to reduce bloat because the added hardware resources make the effective cost of bloat minimal. The argument runs that this is in direct response to customer demand.
It is most definitely the case that when you see wide-spread consistent behavior across all these different computing ecosystems, it is almost certainly the case that the behavior is in response to direct signals and feedback rather than moral failures of the participants. The question is whether this is an efficient response.
The more likely underlying process is that we are seeing a system that exhibits significant externalities. That is, the cost of bloat is not directly borne by those introducing it. Individual efforts to reduce bloat have little effect since there is always another bad actor out there to use up the resource and the improvements do not accrue to those making the investments. The final result is sub-optimal but there is no obvious path to improving things."
Web pages/applications are probably even worse in this regard because I'm not sure users even conceptualize them as using resources on their local computers, so the sites don't get blamed for it (people seem to attribute resource usage only to the web browsers, not the sites themselves).
Some will argue that it is indirectly benefitting the users who can get more features quicker. But most people care more about stability and not having to upgrade their computer yet again than features.
Gonna need some data for that assertion; there is surely some "balance point" that depends on the industry/software, and it's not all one way or the other.
As a personal example, these days I've been using Apple Pages (or whatever it's called) and it crashes about once an hour. But it has some features that allow me to quickly iterate over a document, so I am back to cmd+s as in the old days, versus using LibreOffice's interface, where it would take me an estimated 2x-5x more time.
Try to upload a folder with 20+ small files (say, a picture gallery): it takes a lot of time to process and upload them. And if you add a new file to the folder and try to upload the folder again, it will need to upload the whole thing again.
It was slow as molasses - unusable, really, and we're not talking about huge amounts of data either.
At least then, it seemed to be the google drive i/o that was the bottleneck, and the solution was to upload the training files to the colab session / VM.
They just open up "internet" and work on docs, and for 99% of the cases Google Docs works fine despite running in a browser that is much less efficient than a native "app". For most cases it's more than enough for the regular user who is used to "computers being slow" anyway.
By then, the app is built and running, even though the code is a mess because the developer only knows about React, and nothing about the DOM or software architecture.
1) Your browser is always open, whereas you need to close your current app and open the app store app
2) Google Search is better at giving you what you want in the fewest number of keystrokes than any search from any other company including app store search boxes
3) installing a "web app" is one click from the Google search results, or wherever someone posts a link. As you might know from social media, even one click is a lot, and most people won't click links in a comment. In the Apple App Store I have to tap on the app in the search results, tap install, double-click the lock button, scan my face, then wait many seconds. Some websites take many seconds to load too, but that's not that common, and it's considered a bad website.
4) software is reinstalled on each use, meaning it's always up to date. Native apps update randomly with a big delay, and I sometimes have to check manually whether there's an update.
5) with native software there's a risk it will not support your device. Risk is commonly expressed as a cost, so the fact that I have searched for an app and found out it's not supported on iPad, or not supported in my country, can be counted as every single app store download since then taking me 0.1 seconds longer, because I now know there's a risk I will waste my time looking for the app. Web apps also don't work sometimes, but it's more predictable, since it's usually tied to real limitations like the screen size of your device. Another risk is malware: I feel nervous and powerless when installing native software because I don't know who I'm giving access to and to what, whereas I understand what web apps can track. Also, the fact that it's HTML and JavaScript instead of opaque assembly instructions makes ad blockers possible and cheap.
Installation is an integral part of using software, and a big part of the reason the web won and continues to win.
That seems to be contrary to the general opinion (at least on HN): Google has become utterly irrelevant, serving mostly content-farm, AI-generated, blogspam-type junk, and Google is more concerned about ad revenue than anything else (including result quality).
Then I got uBlock Origin to turn off JavaScript, remote fonts, and large media items.
Result: 116KB
So 98.61% of the page is extraneous...
1. Why bother optimizing when the developer's time is more expensive than RAM and CPU power? I see this a lot.
2. From the times I can remember (mid '80s) till now, only top developers write software that is efficient. Most developers are average (this is not bad, it is just an observation), and for the average developer software optimization is too expensive in terms of time invested. Some don't know how to do it; some aren't proficient enough to do it within the constraints of the projects given to them by bean-counting managers. "Good enough" quality in software management is much safer than "good enough" Boeing planes, so when even Boeing is cutting corners, managers of developers cut even more.
The comparison should be between the developer's time and the time spent (wasted) by all users combined. This depends on the number of users and how often they run the software.
For a one-off, with a few dozen users running it occasionally, yes developer's time is expensive.
For popular software with 100M+ or even billions of daily users, developer time is practically irrelevant; spending weeks or months to shave a tenth of a second off each user's run would be a no-brainer.
Most software sits somewhere in between.
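As a back-of-the-envelope sketch of that trade-off (all numbers here are made-up assumptions, not data from any real product):

```javascript
// Hypothetical numbers: how much total human time does a 0.1 s
// per-run slowdown cost across a large user base?
const users = 100_000_000;   // assumed daily users
const runsPerDay = 5;        // assumed runs per user per day
const secondsWasted = 0.1;   // assumed slowdown per run

const secondsPerDay = users * runsPerDay * secondsWasted;
const personYearsPerDay = secondsPerDay / (365 * 24 * 3600);

console.log(secondsPerDay);     // 50000000 seconds wasted per day
console.log(personYearsPerDay); // ~1.59 person-years wasted every day
```

Under those assumptions, a few developer-months spent on the fix pays for itself almost immediately.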
But... developer is paid by company not by end users. And company cares about other things than the interests of society-at-large.
So it's mostly a case of bad incentives. Companies don't care about / aren't rewarded (enough, anyway) for saving end users' time. Open source developers might, but often they are not rewarded, period.
"... the cost of bloat is not directly borne by the those introducing it. Individual efforts to reduce bloat have little effect since there is always another bad actor out there to use up the resource and the improvements do not accrue to those making the investments."
This is the obvious low effort, low complexity solution. Of course you could make it fast, but that would take time and effort for a feature most people won't notice.
The new Passwords is a joke: UX errors all over the place, a modal-based view with a toggle to start editing. If you need to enter a password in another area of System Preferences, you have to back out of the auth flow, switch to Passwords, and copy the credentials over to a temporary file.
So when my doctor sends me a one-page checklist of how to prepare for a procedure, I have to open it in a powerful word processor, and since I'm not using MS Word, the fonts and formatting aren't as expected.
On Windows, WordPad was plenty for most needs, came preinstalled for free, and barely consumed any resources, but I understand it no longer ships with Windows. Office 365 is where the money is now, even for basic needs.
https://github.com/jmechner/Prince-of-Persia-Apple-II https://www.jordanmechner.com/en/books/journals/
When I went to recreate it, I contacted QNX and asked if I could speak to the guy who did the work, and he had died the year before. So, I just took apart his floppy, and figured out how he did it.
The things you can do when you invest your time 100% in something.
Today’s software systems are more generalized, though they are solving the same business problems, just with more details / functionality than before.
Sad for me who wanted to be a software architect. I had to watch all this unfold in real time from inside various companies and never had the ability to fix the problems. Last time I tried to prevent major architectural flaws from being implemented in software during the design phase, I couldn't convince management and had to quit the company... Then 2 years later, from the outside, I witnessed the project turn into a complete failure. They literally abandoned the whole thing and started using a competitor's platform... Which, to rub salt into the wound, is almost just as awful.
Typically the "architect" title I see thrown around today pertains to gluing together some abominations of "systems", typically cloud services to do web stuff at "scale"
Architect today seems == "cloud infrastructure decision maker" and has no bearing on the code written, libs/frameworks/whatever used
Why didn’t you just, like, get him Word? Why did you make him try to use a shitty web app that assumes everyone’s computer is brand new, then install an open-source program that’s going to be constantly playing catch-up with Word’s updates and may cause problems down the line when Dad wants to work with someone else’s Word docs?
Maybe there was a perfectly good reason for this choice. I can think of a few. Maybe you helped Dad enter The Year Of Linux On The Desktop recently. Maybe Dad didn't want to pay for Office. Who knows. Whatever the reason, you didn't put it in this post. And you ended it with a plug for your completely unrelated SaaS, too.
Update: I updated the article.
Now that the zero-interest economy is over, the entire tech sector is readjusting.
I'm not going to deny that we could do better, but it is more nuanced than that:
OP uses Word as an anecdotal example, but Word is not designed with a goal of being optimized. It is designed with a goal of being backwards-compatible to decades of history.
We cannot assume that all software shares the same goals because they simply do not. When we look at the problem any given software is trying to solve, performance optimization is almost always important, but almost never #1... #1 is "solve the problem". Doing it fast is always secondary to doing it at all.
Also, despite its reputation, Word isn't actually compatible with decades of history; I remember a post finding that LibreOffice was more compatible with old documents (though not with current ones).
Once all that XML is naively loaded as nodes, we might be talking about more than 1000 MB of RAM usage.
Did you consider trying Microsoft's own browser-based Word editor? It's free too. And .docx is its native format.
Or, consider doing a conversion to Google Docs native format first (you'll lose some formatting though, possibly a lot of it).
[0]: https://ecma-international.org/publications-and-standards/st... [1]: https://en.wikipedia.org/wiki/Office_Open_XML#Application_su... [2]: https://www.theverge.com/2019/4/10/18304978/google-docs-shee...
It is just a different kind of efficient: there is economic incentive in making the software development process more efficient, and not much incentive in making the software itself efficient.
Software development is more accessible to millions and millions of new developers thanks to the work on higher-level languages, frameworks, IDEs, libraries, low code, copilots, and so on.
Each of these innovations made software development more efficient (not necessarily faster).
Nobody buys or uses software because it is faster, only because it is cheaper.
Now software is merely inelegant, or not as fast as it could be, but it's at least fast enough during initial development, and so no effort is spent on making it better.
I believe this can only be fixed through regulation: either making engineers liable (thereby empowering them to make decisions) or regulating energy use and user experience.
If we don't want this, then things will just stay the way they are now.
But what are companies doing with this fast turnaround time? Features suck and are largely incomplete in modern software. For example: Sonos speakers, if the WiFi goes down they don't reconnect. Why? Why is basically every device and every bit of software chock full of obvious stuff like that? Do we really need AI or something to tell us how to build something properly?
Also, in the past you had to care explicitly about how much memory you allocated, which forced you to stop and think. Now you can pretend you have infinite resources, because everything happens implicitly.
Compounded with this [0]:
> O(n^2) is the sweet spot of badly scaling algorithms: fast enough to make it into production, but slow enough to make things fall down once it gets there
you get what you get. Ever opened a GitHub pull request with 2000+ files changed? It hangs an M1 MBP. The solution is probably not rocket science; it's just that nobody has prioritized the fix.
[0] https://twitter.com/BruceDawson0xB/status/112038140670042931...
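A toy illustration of that sweet spot (hypothetical code, nothing to do with GitHub's actual implementation): de-duplicating a list with `Array.indexOf` inside a loop is O(n^2) and sails through review on small inputs, while a `Set` does the same job in roughly O(n):

```javascript
// O(n^2): each indexOf call scans the output array built so far.
function dedupQuadratic(items) {
  const out = [];
  for (const item of items) {
    if (out.indexOf(item) === -1) out.push(item);
  }
  return out;
}

// O(n): a Set gives (amortized) constant-time membership checks.
function dedupLinear(items) {
  return [...new Set(items)];
}

// 50k elements with 10k distinct values: small enough to ship,
// large enough for the quadratic version to visibly lag.
const data = Array.from({ length: 50_000 }, (_, i) => i % 10_000);
console.time('quadratic');
dedupQuadratic(data);
console.timeEnd('quadratic'); // noticeably slower...
console.time('linear');
dedupLinear(data);
console.timeEnd('linear');    // ...than this, and the gap grows with n
```

Both functions return the same result; only the scaling differs, which is exactly why the slow one survives until production-sized inputs arrive.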
1. JS doesn't support shared-memory multithreading (Web Workers only pass messages), nor many other features that are useful for performance (e.g. mmap). This severely limits what you can do and makes it hard to scale up by parallelizing.
2. JS is a very pointer heavy language that was never designed for performance, so the CPU finds it harder to execute than old-school C++ of the type you'd find in Word. It's hard to design tight data structures of the kind you'd find at the core of Word.
3. The browser's one-size-fits-all security model sacrifices a lot of performance for what is essentially a mix of moral, legal and philosophical reasons. The sandbox is high overhead, but Docs is made by the same company as Chrome so they know it isn't malicious. They could just run the whole thing outside of the sandbox and win some perf back. But they never will, because giving themselves that kind of leg up would be an antitrust violation, and they don't want to get into the game of paying big review teams to hand out special Officially Legit™ cards in the same way that desktop vendors are willing to do.
4. The DOM is a highly generic, page oriented structure, that isn't well suited for app-like UIs. As a concrete example Chrome's rendering pipeline contains an absolute ton of complexity to try and handle very long static pages, like tiled rendering, but if the page takes over rendering itself like Docs does then all this just gets in the way and slows things down. But you can't opt out (see point 3).
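Point 2 can be illustrated with a toy sketch (an assumption-laden micro-comparison, not anything from Word or Docs): summing a million numbers stored as an array of heap-allocated objects chases a pointer per element, while a Float64Array keeps them in one contiguous buffer:

```javascript
const N = 1_000_000;

// Pointer-heavy: each element is a separate heap object.
const boxed = Array.from({ length: N }, (_, i) => ({ value: i * 0.5 }));

// Tight: one contiguous 8 MB buffer of doubles.
const packed = new Float64Array(N);
for (let i = 0; i < N; i++) packed[i] = i * 0.5;

function sumBoxed(arr) {
  let s = 0;
  for (const o of arr) s += o.value; // a dereference per element
  return s;
}

function sumPacked(arr) {
  let s = 0;
  for (let i = 0; i < arr.length; i++) s += arr[i]; // sequential reads
  return s;
}

console.log(sumBoxed(boxed) === sumPacked(packed)); // true: same result, very different memory traffic
```

The boxed layout is what ordinary idiomatic JS produces; the packed one is what a C++ codebase like Word's gets by default.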
Exclusively the minds of developers and the stance of management.
It is of course possible to build responsive, high-quality, performant websites. But that is hard, much, much harder than making something that merely works, that maybe takes a few seconds to load, sometimes doesn't work quite right, and can be a bit tedious to use.
Anyone who can't be bothered to replace their ten-year-old laptop because it's slow is also not going to spend a lot on faster software. There's just not a lot of money in optimizing stuff. And a modern laptop doesn't really benefit much from the optimizations you'd make to run smoothly on a ten-year-old machine, especially when those optimizations amount to turning off things that don't get in the way on faster hardware, like cheap 3D effects, pretty colors, animations, etc.
Anyway, I'm old enough to know that this is not a new debate. We never lost our way on this front. It was always like this even when computers were several orders of magnitude slower.
Also, cloud-based synchronization using CRDTs is a hard problem, significantly more complex than just loading the document.
Can't claim we are going backwards when comparing apples and oranges.
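For a taste of why CRDT-style sync is its own problem, here is a minimal grow-only counter (a textbook G-Counter sketch; real collaborative-text CRDTs, and whatever Docs actually uses internally, are far hairier):

```javascript
// G-Counter: each replica increments only its own slot; merge takes
// the element-wise max, so merging is commutative and idempotent.
class GCounter {
  constructor(id) { this.id = id; this.counts = {}; }
  increment() { this.counts[this.id] = (this.counts[this.id] ?? 0) + 1; }
  value() { return Object.values(this.counts).reduce((a, b) => a + b, 0); }
  merge(other) {
    for (const [id, n] of Object.entries(other.counts)) {
      this.counts[id] = Math.max(this.counts[id] ?? 0, n);
    }
  }
}

const a = new GCounter('a'), b = new GCounter('b');
a.increment(); a.increment(); // two edits on replica a
b.increment();                // one concurrent edit on replica b
a.merge(b); b.merge(a);       // sync in either order
console.log(a.value(), b.value()); // 3 3 -- both replicas converge
```

Even this trivial structure needs per-replica state and careful merge semantics; a full document CRDT has to do the equivalent for every character and its position.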
It uses 500 MB of RAM fresh, and within a couple of weeks that climbs to 1.5 to 2 GB and I have to kill the tab and reopen it.
This is the modern javascript world...
There are some new simplified approaches that are starting to be interesting again.
While I've been involved in releasing both types of software (a ye olde standalone Windows app written in the Qt framework, and new software that's released every 2-3 days or so), I find the new way much less painful. I couldn't imagine releasing a Qt app on such a cycle (how would updates work? maybe like Minecraft: every other launch downloads a new 64MB package)...
...but as a user, I feel much more comfortable when the code is on my computer, available for me to run. While I use Google Docs extensively to share documents with my wife (calendar, spreadsheets) and band (songlist, lyrics), I'm scared that if I moved everything to the cloud (paid mail, photo enhancing (thankfully I'm a standalone Lightroom buyer), vector graphics (Inkscape user), photoshop (GIMP user), etc.), one day I would hit the roof in payments on my per-month-but-bound-to-year plans. In 2030 it would also be netflix, shmetflix, televisionix, phonix, car-as-a-service, heated-seats-as-a-service, air-conditioning-as-a-service, toilet-as-a-service, smartwatch-as-a-service, etc., building up to an unbearable rent for things that used to be free or paid for once. I don't feel it's my duty to provide a constant supply of money to my beloved corporations
It looks like a joke, seeing an i5 choking on files that PCs from 2003 opened fine.
No.
Right?
- how much computing power is needed to present a single useful bit of information to the user;
- how much computing power is needed to process a single useful bit of information;
- how much total data transfer is needed to transfer a single useful bit of information.
Of course, to answer the above questions you need to define the term "useful single bit". And the hint here is: if we agree that, say, a rainbow has 7 colors, then identifying any one of those 7 colors takes just 3 bits of data, doesn't it?
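The hint checks out as a quick sanity calculation: identifying one of 7 values needs ceil(log2 7) = 3 bits, since 2^3 = 8 >= 7:

```javascript
// Minimum bits needed to distinguish one of `colors` values.
const colors = 7;
const bitsNeeded = Math.ceil(Math.log2(colors));
console.log(bitsNeeded);                // 3
console.log(2 ** bitsNeeded >= colors); // true: 8 codes cover 7 colors
```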
long story below:
This question pops up at least twice every year, from someone completely frustrated by the current state of the computer industry. And if you think it's limited to the software side of things, then, well... ignorance is bliss.
Think of it from the incentives and rationale perspective.
Whenever you encounter a bloated piece of software or an over-engineered hardware box, put yourself in the shoes of its author. Once you delve into the details of why and how any specific tool or technology was created, you can understand why it looks so bad.
Some of the most notoriously hated programming languages were created in a very short time, without any strategic thinking, by people who had no prior experience designing programming languages.
And people continue this pattern in all types of software, driven most of the time by business requirements rather than by their engineering and scientific aspirations and talent (or lack thereof).
In other words, bloated software is the result of time limitations imposed on developers. Efficiency, size, quality, stability and security go out of the window when you need to pursue other, more "important" goals.
"We need to go ahead of the competition and the time to market is our priority. We'll cut corners and burn cash. No thorough think through, just do it."
Another perspective is resource limitation. You as a developer have access to virtually unlimited computing, networking and storage resources. Remember this "memory is cheaper than developer's time" mantra?
Now put yourself in the shoes of a NES game developer. You need to squeeze the whole universe into 32KB, with graphics, music, and gameplay that look attractive and responsive running on a 1.8MHz single-core 8-bit CPU.
Or put yourself in the shoes of the Voyager 1/2 team, whose objective is to keep a small piece of metal running in a hostile environment for the next 50 years, complete with remote debugging capabilities, over-the-air software updates, and continuous telemetry transmission back to Earth.
If Brendan Eich had been given more than 10 days to draft the JavaScript spec, would we see something different in the frontend world today? Or would we still see 10MB of garbage downloaded by every other website, just for the sake of keeping the cables busy?
And here is one of my favorite quotes by Alan Kay:
"Think about it. HTML and the Internet has gone back to the dark ages because it presupposes that there should be a browser that should understand its formats. This has to be one of the worst ideas since ms-dos, this is really a shame. it's maybe what happens when physicists decide to play with computers."
hiring bar was dropped. expecting a mid-level engineer to work with a byte buffer is considered "too complex" and non-differentiated work.
the literal goal is to pump out features written up by the mba/product team. none of these mbas use the product, mind you. they're chasing stupid features they think vice presidents want, because the thinking is it will drive promotions.
this is a cynical post and i will stop here. my org has a problem of incentives, nothing else. if you incentivize the wrong things, then this happens
(probably comparing the cost of making an ancient Word vs. the newest one... :) ) but there could be lots of other examples of 'modern' apps built along current trends vs. classic ones with a similar feature set; I wonder how that cost comparison would play out... :) )
Moore's law kept going, and software started getting a little bit faster, which was enough to stop undoing the gains made by hardware, and now things are back to mostly snappy.
Occasionally you'll get a 30MB file that's slow... but subjectively things sure seem better than 10 years ago, when you couldn't even mention optimization without someone beating you over the head with the "premature optimization is the root of all evil" quote.
This is the inverse of my experience. There are few applications with a UI I would call 'snappy'. In fact, I'm trying to come up with a single example, and at the moment I can't think of one.
If it doesn't matter, it doesn't matter. If the goal is a document format flexible enough to accommodate history, concurrent editing, various layouts and embedding, etc., all of that comes with abstractions that add inefficiency. The trade-off is ease of adding to and changing the format, and the software that consumes and produces it. If the real-world consequence is effectively unobservable in any material way, who cares?
Maybe as a moral matter it's offensive that something lacking parsimony is practical. But by any meaningful measure - the user's perspective, the developer's, even the company paying for the processing - if it doesn't matter, it literally doesn't matter.
Comparing Google Docs to a program hosted on an Apollo-era flight computer is obtuse to an extreme, and I would rather write my collaboratively edited documents with Google Docs than with an Apollo-era flight computer any day, no matter which one is less parsimonious.
Except the post you're responding to was literally in response to a user problem trying to edit a 30MB document in Google Docs. So it very much does matter, from the user perspective.
> Comparing Google Docs to a program hosted on an Apollo era flight computer is obtuse to an extreme, and I would rather write my collaboratively edited documents with Google Docs than Apollo era flight computer any day no matter whether one is less parsimonious than the other.
Straw man. The post compares Google Docs to LibreOffice (a competing product), and points out that LibreOffice solves the user's problem (editing a 30MB document) and Google Docs cannot.