What the iPhone was a LOT better at than everyone else was UX. Of which speed is one component, of course. It's funny how people never get this even though it happened in front of us, it happened to us. At the time I was working at Nokia Research, and I remember my girlfriend telling me how her boss got this wonderful phone that you can take photos with and view them, etc. The funny thing is that I had had such a phone since 2001. I had been working with smartphones for 6 years by then; she knew it, she listened to me when I told her or others what I was doing (and then listened to others responding "yeah, but phones are for making phone calls"). She saw me browsing the net on my phones (a 9210 Communicator and then a 9500), sending emails from the beach, etc.
Still, it somehow didn't register, because it looked like something that she'd never use. And then the iPhone, which did a lot less, made her and basically everyone else understand what a smartphone is. (Even though Symbian smartphones were pretty common by then, most people didn't use them as smartphones.)
So no, it's not simply the speed. It's the UX. And even if we talk about speed, it's still not the raw speed but the perception of speed, about which a lot has been written: delay (lag) matters a lot even if average speed is OK.
Remember WAP[1] and WML[2], the HTTP and HTML substitutes for mobile phones too anemic/limited to support the real thing? Back then, many web sites simply didn't support access from a mobile device. (It's the polar opposite of "mobile first" or "mobile only".) A few did, but many just tossed up an error page.
With the iPhone, Apple put together all the key ingredients to be able to say, if you're on the go and suddenly need to access your bank's web site to check your balance or whatever, you will be able to, even if your bank doesn't support mobile devices. The experience may not be great, but it will at least be possible.
Those key ingredients included a big screen, a fast enough processor and large enough RAM to handle pages that were somewhat bloated, a browser that supported enough (JS, etc.) to make most pages work, and special features for making the most of desktop-oriented pages by zooming in on text. To some extent, Apple brought these key ingredients together by designing it that way, but they also did it by not entering the market until powerful enough hardware was available.
The iPhone flipped mobile web access on its head. Instead of implementing whatever was convenient and punting on 50+% of the web, leaving users at the mercy of web sites to decide if mobile access was worth it to them, Apple created a device and browser that took responsibility for doing anything and everything it could to make sites work.
The web is a killer feature for the internet, and getting meaningful access to the web was a killer feature for internet-connected mobile devices. Paradoxically, it worked so well that the platform was enormously successful and it became essential to offer mobile web support.
---
[1] https://en.wikipedia.org/wiki/Wireless_Application_Protocol
[2] https://en.wikipedia.org/wiki/Wireless_Markup_Language
Quite frankly, I still tend to think it's as much about Apple knocking it out of the park with the UX (and marketing), as it is about Microsoft doing literally everything wrong in response. WM could have become what Android is today.
But I also agree with you: my own first iPhone was the iPhone 3G, and the 3G part and later the App Store became a major thing for the device, whether it was browsing the web or using internet-centered apps (chat, SMS, Reddit, etc.).
The KILLER feature is the total time to do something that the user intends to do.
If you have a very fast OS, but bad UX then the rate-limiting step is the UX, not the OS. And the converse is also true.
Having an expressive vocabulary and complex grammar is great for saying a lot quickly if you're fluent, but painfully slow for anyone who isn't.
With this in place, commerce could begin on the phone. Once everyone added mobile pay options, it could end there as well. And now everyone who can has one.
However, I think you make a great point that the two are interrelated.
My straw man starting point would be: A poor user experience that is lightning fast can still be a great experience.
But a great user experience that lags or is slow will typically not be successful.
The iPhone succeeded because its great user experience was so fast that it felt like interacting with objects in the real world.
The iPhone won because it looked amazing and had the App Store. Looks and features. How did you reach the conclusion that it was speed?
It would help me, at least, if you could specify/list what you think were the things in the UX that made it so much better than Symbian phones.
We can think of things like slide-to-unlock, but much more importantly scrolling. If anything shows the mastery of the iPhone UX of that time, it is definitely scrolling; who even categorizes it as a gesture now? It's completely normalized. On all other platforms of that time, you had to play with arrows and the scroll bar. Now every platform has it.
Using an iPhone felt like directly manipulating the underlying content. It was a qualitatively different experience that only superficially resembled previous touchscreen devices insofar as it used similar input hardware.
Note that Apple didn't invent the concept of a responsive UI thread with physics-based UI metaphors, for example Jeff Han demonstrated a fairly sophisticated example at TED2006[1] the year before. But to my knowledge the iPhone was the first mass-produced device with a direct manipulation interface.
[1] https://www.ted.com/talks/jeff_han_the_radical_promise_of_th...
But in the case of iOS vs Symbian:
- as others said: the capacitive touch screen (this is not an OS issue, but the iPhone was among the first to use one, definitely earlier than Nokia). This is huge. The thing that everyone was talking about (around me) when the original iPhone came out was how you could swipe to see the pictures. And it wasn't just for paging, it defined how you could interact with the phone (think pinch zooming and rotation - not sure when these were added).
- the touch screen UI itself. Nokia played around with touch UIs before, but never really liked them. It was expressed several times internally that touch was just a no-go. But no wonder: the resistive touch screen is pretty bad, but also Symbian itself was built on the assumption that all you have is keys, while iOS was built with a touch UI in mind from the very beginning. (Now, of course, touch was added to Symbian later, but that's just not the same. Or they didn't put in the effort. Nokia even had an experimental touch phone released to the market in 2003, the 7700[1], but it was mostly ridiculous.)
- the UI just was a lot more polished, looked better, classier; the graphics were better. They had OpenGL and probably a graphics accelerator - nothing like that in Symbian, of course. (It even took the Android guys by surprise; I remember reading/hearing in an interview that when they saw a demo or the release, they realized they had to redo their UI from scratch. Before that they had this Blackberry-ish/Symbian-ish idea; they thought they were competing with that.)
- I'm pretty sure it had a better browser.
And this pretty much defines the experience, the feel a user gets from the phone. The iPhone couldn't send or receive MMSes (some people may have used them then, but most, I guess, just wanted to have the feature), and it couldn't receive 'push' email - i.e. you had to manually refresh your inbox, emails didn't just arrive. It didn't even have apps. Symbian had all these. It had had them for years by then. It even had an app-store-like thing (at least you had to send in your app for verification, and it would then be signed by Nokia or it couldn't be installed - that was a new thing around 2004-2006, something I think nobody really did before).
It helps that the iPhone was an iPod with a phone attached, instead of a phone with a multi-use compute device attached.
OG iPods were single-purpose music players, and features that made sense were slowly introduced over time (and were optional). Adding support for photo viewing made sense because album art is universal and, well, album art is no different from a photo. Adding video made sense because you have this nice color screen for showing the photos/album art, and music videos are a thing people enjoy. Then adding a camera made sense, because you can already view photos/videos. Once you have all that in one package, adding phone capabilities makes a lot of sense when you realize that people are carrying around iPods along with a cellphone.
If we consider the iPhone innovative, we cannot overstate how important timing was.
The iPhone was a phone with a big screen optimized for the internet. As for people who are not into technology, from what I can see, their main reason to go for a smartphone has been WhatsApp and free calling in general. And as time went on, more and more services of all kinds, including administrative ones, became more practical to use on the internet than in real life.
Anyone else have a PDA and see the glaring opportunity to add cellular functionality to it?
The INSTANT I hit the button to complete the order, the built in printer almost spat the ticket at me. I ordered a second sandwich just so I could get a video of that happening again.
Edit: Just found and uploaded the video :) https://youtu.be/TX_-dXIpPvA
Edit2: looks like it was a soda, not a second sandwich.
Thanks for sharing this, it's crazy how much super fast experiences still surprise us.
The POS app that I worked on (not related to the one shown in the video) also went to pretty serious lengths to get rid of the pause between the user pressing "enter" and the receipt coming out. The store operators rightfully insisted on this, because they wanted to keep the checkout lines moving as fast as possible.
I liked those printers and remember wanting one for myself even though I had no use for it. They start at around $200 and take up space, so I managed to resist.
Costco is the only place where I've seen this. I don't understand how it gets an authorization for any amount that fast, since it can't know the total while the cashier is still scanning items, and it's Costco, so it could be anywhere from $50 to $5,000 so surely it's getting the authorization after the transaction finishes? The flow is almost perfect. I have them scan my Costco membership, I use tap to pay on the card reader, then I or 2nd cashier move to organizing the items into the cart, and then the cashier hands me a receipt with basically zero wasted time.
Netflix is faster in every way. There’s a button on my TV specifically to launch it, the videos start faster, fast forwarding is faster, there’s less buffering in general. Every single touch point is fast. And it’s because they put the effort in where the others didn’t.
And I know Apple is a weird one there. On the Apple TV, they offer pretty much a version of iOS. There's multiple options to build your UI, but iirc you can build it native if you want to.
And this has been Apple's differentiator; they were FAST. The code for apps compiled down to native, as opposed to a lot of Java-based phones at the time (and later with Android).
I've always maintained that Apple had a 5-year head start on Android when it comes to performance (as well as UX, even in their skeuomorphic designs), and after 5 years it was mainly Android smartphone companies focusing on more performance rather than the Android OS or apps becoming faster. It was Android phones that went for quad-core (and beyond) processors first, while Apple was just fine with a single core and later, almost reluctantly, a dual-core. Simply because their earlier technology choices made their stuff so much faster and more efficient.
I'm so glad Apple didn't go ahead and make web technology the main development path, as they initially planned (or so I gathered).
Sometimes I have fantasies about sending an email direct to Jeff Bezos just to say: dude, did you know about this?
Suffice to say, I don’t watch much Prime.
Meanwhile Apple TV+ will just go ahead and try to use a super heavy 4k stream on my iPhone over 4G - won't even let me download it (at least this was the case a year ago, the last time I was out of wifi range).
Also, given the choice, I'll rent a movie on Amazon because they even give refunds if they detect the quality was low.
Netflix also never stutters, but it starts with like 480p and gets better over time.
Nevertheless the Netflix UI is far superior.
The amount of engineering work going into that must be amazing.
Why do content streaming platforms assume that you want to watch everything other than the content?!
YouTube does this too on my TV, and it's infuriating. Not only does it hide half the screen for a long time after the content starts playing, it then helpfully hides most of the screen before the end of the content also!
This is the computer equivalent of someone shoving their hand in your face to block your vision.
It's rude when a human does it. It's rude when a computer does it.
But I'll always check Netflix for something to watch first, because it's faster and easier (unless there's something specific I know I want).
Being the default first choice is very valuable, and speed is the reason they're it.
I'd honestly pay extra for an iPhone where I can disable ALL motion, too. But that's the least of the problems.
I don't want to become the grumpy old grandpa yelling "back in my day!..." but we have waaaaaaaaaaay too many young JS devs who know nothing else.
We need to get back to native UIs. Is it awfully hard? Yes it is. Do the users care? No, they don't. Many people want fast UIs.
But to be fair to all sides -- there are also a lot of people who don't notice certain slowdowns that I scoff at. So there is a middle ground that won't cost billions to achieve and IMO that should be chased after.
The crazy thing is that all these web apps also do a fraction of the things that the native apps used to do. They've somehow managed to strip down all the features while making the apps slow and bloated. Watching Microsoft's To-Do blog is tragicomic. Elon Musk will be living on Mars before the Microsoft tools allow you to schedule todos by dragging them to the calendar like Outlook has done since, what, '98? (You can drag a todo from the web sidebar to the calendar now—but it somehow doesn't actually set the due or start date in the todo itself or even have any link back to the todo.) And I feel like that's one thing that's different now. I also complained that Word 97 was a slow, bloated pig compared to WordPerfect, etc. But back in the day there was feature bloat. Now, everything is both slow and non-functional.
I have to assume that it’s a structural thing with the industry. Machine learning, big data, security, etc., has become the hot areas, so all the “A” teams have migrated over there. I hear Apple is having trouble even getting people to do kernel work on MacOS.
They won't bring in a ton of cash, but I can continue to make beautiful apps that are fast, focused, and respect the user's time and computing resources.
But anyway in the enterprise sector, it doesn’t matter whether an app is web or native, it will be slow regardless lol.
- User visits website
- Downloads a binary (preferably small in size; use an appropriate language and cross-platform graphics library)
- Launches it (preferably without installation)
- Perhaps a local storage directory needs to be created on the file system the first time
- And voilà!
What would be the main obstacles to such a workflow? Are there projects that try to work like this?
On some days, I manage to type faster than Xcode can display the letters on screen. There is no excuse for that with a 3 GHz CPU.
And yes, 200ms seems plausible to me:
Bluetooth adds delay over PS2 (about 28ms). DisplayPort adds delay over VGA. LCD screens need to buffer internally. Most even buffer 2-3 frames for motion smoothing (= 50ms). And suddenly you have 78 ms in hardware delay.
If the app you're using is Electron or the like, then the click will be buffered for 1 frame, then there's the click handler, then 1 frame of delay until the DOM is updated and another frame of delay for redraw. Maybe add 1 more frame for the Windows compositor. So that's 83ms in software-caused delay.
So I'd estimate a minimum of 161ms of latency if you use an Electron-based app with a wireless mouse on a DisplayPort-connected LCD screen, i.e. VSCode on my Mac.
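As a sanity check, here's the arithmetic above tallied in a few lines of TypeScript (all numbers are the rough estimates from this thread, not measurements):

    // Rough input-to-photon latency budget, using the ballpark figures above.
    const frameMs = 1000 / 60; // ~16.7 ms per frame at 60 Hz

    const hardwareMs = {
      bluetoothMouse: 28,   // vs. PS/2
      displayBuffering: 50, // LCD buffering 2-3 frames for motion smoothing
    };

    const softwareMs = {
      inputBuffering: frameMs, // click buffered for one frame
      clickHandler: frameMs,   // the handler itself, give or take
      domUpdate: frameMs,      // one frame until the DOM is updated
      redraw: frameMs,         // one frame for the repaint
      compositor: frameMs,     // maybe one more for the OS compositor
    };

    const totalMs = [...Object.values(hardwareMs), ...Object.values(softwareMs)]
      .reduce((sum, ms) => sum + ms, 0);
    console.log(Math.round(totalMs)); // ~161 ms before anything shows up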
You type in a letter and that starts off a cascade of computations, incremental compilation, table lookups, and such to support syntax highlighting, completion, etc. and then it updates whatever parts of a dynamic UI (the user decides which widgets are on the screen and where) need to be updated.
It almost has to be done in a "managed language", whether that is Emacs Lisp, Java, etc., and is likely to have an extension facility that might let the user add updating operations that could use unbounded time and space. (I am wary of adding any plug-ins to Eclipse.)
I usually use a powerful Windows laptop and notice that IDE responsiveness is very much affected by the power state: if I turn down the power use because it is getting too warm for my lap, the keypress lag increases greatly.
Surely VGA would have more latency than DP for an LCD? It's gotta convert from digital to analogue and then back to digital again at the other end.
Is the overhead of the protocol really greater than that? (genuine question)
I'm sure I'd notice if typing had that much lag in VS Code. I am using Manjaro Linux, but I can't imagine that it would be much faster than OS X.
Months ago I noticed picom causing issues with keynav I was too lazy to find a (proper, pretty-window-shadow retaining) fix for, so I just killed it and — while I can’t confidently say I remember noticing a significant lag decrease — I can say I don’t really miss it (and my CPU, RAM, and electricity use almost certainly decreased by some small fractions).
If you are using Android, you are in luck.
1. Open Settings > About Phone and tap the build number 7 times (or google other methods to open the Developer menu for your phone model)
2. Go to Developer options -> Drawing
3. Set all the animation scales to 0.5x
You'd be amazed at how fast the phone feels.
It helps a lot with that "computer user bill of rights" issue where you start to worry at some point that the button press wasn't registered, and might then mash the button with unpredictable effects.
(e.g. you might get more customer satisfaction from a crosswalk button that doesn't do anything at all except 'click' instantaneously)
These settings completely disabled my on-screen home button and other UI elements, and setting the anim scale back to 1.0 and rebooting did not fix that, no more home button for now.
I probably have to reset the phone, did not find any further info so far on how to fix it (pointers, anyone?). But the UI seemed snappy indeed at 0.5 ...
Edit: "other UI elements" including e.g. the Tab switcher in the Lightning browser. The widgets are all displayed, but totally unresponsive.
The researchers telling me I don't notice 100ms delays are smoking something. Yes, human reaction time is 200ms on average, but we process information much faster than that. Moreover, the delays make it impossible to perform "learned" chains of actions because of the constant interruptions.
Hackers typing insanely fast and windows popping up everywhere in movies? The reason why that looks very unrealistic is just that our tools do not behave like that at all.
You can absolutely detect when your ping gets above 25ms even. It can't be missed.
> Hackers typing insanely fast and windows popping up everywhere in movies? The reason why that looks very unrealistic is just that our tools do not behave like that at all.
Right on. That's why, even though I have an insanely pretty Apple display (on the iMac Pro) I move more and more of my day work to the terminal. Those movie UIs are achievable.
Related: I invest a lot of time and energy into learning my every tool's keyboard shortcuts. This increases productivity.
Yeah, this resonates for sure. Multiple times per day I tell Citrix ctrl+alt+break, down arrow, return (minimise full-screen Citrix, go to my personal desktop), and about 50% of the time an app inside the Citrix session will be delivered the down arrow, return keystrokes :-/
These days I see people casually adding network hops to web applications like it's nothing. These actually take multiple milliseconds in common scenarios such as cloud hosting on a PaaS. (I measured. Have you?)
At that point it's not even relevant how fast your CPUs are, you're blowing your "time budget" in just a handful of remote function calls.
If you stop and think about it, the "modern" default protocol stack for a simple function call consists of:
- Creating an object graph scattered randomly on the heap
- Serialising it with dynamic reflection
...to a *text* format!
...written into a dynamically resizing buffer
- Gzip compressing it to another resizing buffer
- Encrypting it to stop the spies in the data centre
- Buffering
- Kernel transition
- Buffering again in the NIC
- Router(s)
- Firewall(s)
- Load balancer
and then the reverse of the above for the data to be received! Then the forward and backward stack, again, for the response.
If this isn't insanity, I don't know what is...
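To make it concrete, here's a minimal Node-flavored TypeScript sketch of what one such "simple" call expands into (the endpoint and payload are invented for illustration; each step maps onto the list above):

    import { gzipSync } from "node:zlib";

    // One "simple" remote function call, step by step.
    const request = { userId: 42, action: "getBalance" }; // object graph on the heap
    const text = JSON.stringify(request);                 // reflection -> a *text* format
    const body = gzipSync(Buffer.from(text));             // compressed into yet another buffer

    // fetch() then takes care of TLS encryption, the kernel transition, NIC
    // buffering, routers, firewalls, and the load balancer -- and the whole
    // stack runs in reverse (twice) before you see the response.
    const response = await fetch("https://api.example.com/rpc", {
      method: "POST",
      headers: { "content-type": "application/json", "content-encoding": "gzip" },
      body,
    });
    const result = await response.json();
    console.log(result);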
I mean the web stack itself was never designed per se. HTML is essentially a text annotation format, which has been abused to support the needs of arbitrary layouts. The weakness of CSS is evident by how difficult it has been to properly center something within a container until relatively recently. And Javascript was literally designed in a week.
And then in terms of deploying web content, you have this situation where you have multiple browsers which are moving targets, so you can't even really just target raw HTML+CSS+JS if you want to deploy something - you need a tool like webpack to take care of all the compatibility issues, and translate a tool which is actually usable like React into an artifact which will behave predictably across all environments. I don't blame web developers for abusing libraries, because it's almost impossible to strip it all down and work with the raw interfaces.
The whole thing is an enormous hack. If you view your job as a programmer as writing code to drive computer hardware - which is what the true reality of programming is - then web development is so far divorced from that. I think it's a huge problem.
Rendering hundreds or thousands of meshes and doing complicated 3D math for physics is no problem, but UI is still extremely hard and complex, especially if you are supporting multiple arbitrary resolutions, for example.
Godot, for example, has a full UI toolkit built in (the Godot editor was made using Godot components). However to actually get it working the way you want in most cases is a horrendous struggle, a struggle with ratios, screen sizes, minimum and maximum UI control sizes, size/growth flags, and before it gets any more complicated please just throw me a Tailwind flex/grid box model instead, because HTML/CSS has solved these problems repeatedly already.
So responsive Electron apps are certainly possible.
Blame OS vendors for refusing to get together to specify a cross-platform standard API for UIs. We have mostly standard APIs for networking, file I/O, even 3D graphics, but not for putting a window on the screen and putting buttons on it.
OS vendors are still trying to play the lock-in game by forcing everyone to write GUI apps for only their platform. This is a non-starter, so everyone goes to Electron.
There are a few third-party cross-platform UI libraries around. They suck. Qt is as bloated as HTML-based UIs, and then there's wxWidgets, which is ugly and has an awful API based on 1990s MFC.
We could have something better, but it's an extremely large and difficult project and nobody will fund it. OS vendors won't because they don't want cross platform (even though all developers and users do). Nobody else will because nobody pays for dev tools or building blocks. The market has been educated to believe that stuff should all be free-as-in-beer.
Bullshit. Qt is much faster than Electron; the Mumble client is really fast on my Turion laptop, and that's with OpenBSD.
And I say this even if I prefer Barnard IRL.
1) Would need to be lowest-common-denominator by nature
2) Would quickly stagnate due to friction against changes/additions
3) Would have few allowances for platform HIGs
If it were permissible to have vendor-specific additions on top of a common core, that could probably work fine; otherwise this hypothetical standard UI library would share many of the problems suffered by Qt, wxWidgets, etc.
The other option I could see working is something like SwiftUI, in which some control over the behavior, layout, and presentation is ceded to the platform — basically having developers provide a set of basic specifications rather than instructions for every pixel on-screen.
As for the free aspect, I feel like that ship sailed 20 years ago. Nobody will pay for a UI toolkit these days. This is not Unreal Engine 4, you know. That stuff only works in the AAA games market, apparently (although I am curious as to why it doesn't work everywhere else -- likely thin profit margins and/or middle management greed outside of gaming).
I get annoyed with Windows having the cursor randomly stutter for a split second rather than moving smoothly. Or Teams taking half a second to load the conversation I clicked on. Or PowerShell taking 3 seconds between the initial render and giving me a damn prompt. Or the delay between me pressing the Windows button and the start menu appearing. None of these delays exist on my Linux machine, where I've had the freedom to select the programs I use.
I've made fast UIs with Javascript & React. Like all optimization, it comes down to sitting down & profiling. Not taking "this is as fast as it can be" as an answer. In short, saying "Javascript is just slow" is part of the problem.
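For what it's worth, the browser's User Timing API makes that first profiling pass cheap. A sketch in TypeScript, where renderList() is a hypothetical stand-in for whatever hot path you suspect:

    // Bracket a suspect code path with marks; the measurement also shows up
    // in the DevTools Performance panel.
    function renderList(items: string[]): void { // hypothetical hot path
      for (const item of items) {
        const li = document.createElement("li");
        li.textContent = item;
        document.body.append(li);
      }
    }

    performance.mark("render-start");
    renderList(["a", "b", "c"]);
    performance.mark("render-end");
    performance.measure("render", "render-start", "render-end");

    const [entry] = performance.getEntriesByName("render");
    console.log(`render took ${entry.duration.toFixed(1)} ms`);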
Blaming languages is chasing a fad. I deal with it when people think the service I'm working on in Ruby is going to be slow because Ruby is slow. Nope, architectures are slow. If you know what you're doing, Ruby will do just fine at doing nothing, which is really the trick behind speed.
Languages like JS and Ruby make it easier to write slower code (and harder to detect that you're doing it) by the virtue of how their ecosystem and culture turned out with time.
I stood behind the romantic statement of "you are holding it wrong" when I was younger but nowadays it seems to me that the languages live and die by the culture of their communities. It rarely if ever matters if the language itself can be better / faster.
So while I agree JS/Ruby might have undeserved reputation for being slow, I think you should also agree that they are easy targets because observably a lot of software written with them is in fact slow.
I am looking at it empirically / historically while you are postulating a theoretical construct. I don't disagree with you per se but prefer to work with the reality that's in front of me.
---
That being said, kudos for being the exception in the group of the JS devs! The web frontend industry needs much more people like yourself. Keep up the good work. <3
As an example, my grandmother-in-law has been putting up with Microsoft Jigsaw's desktop app for years. Last time I watched her load it, we sat there for a while and had to restart multiple times because it was getting stuck loading some advertisements. The startup time was absolutely brutal, and the run-time performance while playing wasn't great either, even with a decent laptop.
So when I saw how slow, bloated and laggy this app was, I wanted to try to make her a better jigsaw app for the web and I think I succeeded [1]. It loads almost instantly, has no advertisements, and feels super smooth while playing... and it's mostly just js, svelte and a little bit of Rust WASM.
Anyway, I do prefer a good native app over a web app when available. But with native apps, it's also harder to block ads and other trackers compared to the web.
I worked with the horrors called Windows MFC and Java Swing a long time ago. It was tough, but if you did it right (moderately hard) you had a very snappy app on a computer that was 5x slower and had 10x less RAM than a midrange Android device today.
Especially after the walkbacks that Xbox Game Studios had to do after flak about scummy microtransactions in Halo, Gears, and Forza, it still seems incredible that Microsoft continues to allow Arkadium to do it to a far bigger audience (and a lot of people's parents and grandparents especially) with their brand name attached to it.
Meanwhile, here I am, making a decentralized social media server and being afraid to add an extra <div> lest it bloats the page.
TL;DR - users can get used to pretty much anything because they don't know it could be so much better
https://apple.stackexchange.com/questions/17929/how-can-i-di...
https://superuser.com/questions/455313/disable-os-x-enter-fu...
It looks "pretty" to the UI people.
https://thedailywtf.com/articles/The-Speedup-Loop
tl;dr: a programmer inserts a large empty loop in a UI so that, in weeks when he achieves nothing, he can remove a single zero from the end of the loop counter to speed things up a bit.
We later created a light version for a specific use case, and the product owner came prepared with a nice splash screen for this one too. The app was so lightweight that it loaded near instantaneously - so the engineer added a six second delay just to meet the splash screen requirement.
Buys them time to get stuff done under the hood while you are gazing upon the 'sands of time' (good old Windows hourglass).
It conditions you/me/everyone to be impatient. I opt out of all such transition effects on my phone. I prefer that the screen goes black or freezes until the next screen comes up. This way I don't get distracted by irrelevant junk (spinning wheels, hourglasses, etc.). It is crunching bits. Don't add more junk to it. Let it crunch bits without tricking me.
Chromium is a huge boon to developers for this reason. Now there could have been a different history here. Apple after acquiring NeXT had also gotten OpenStep, https://en.m.wikipedia.org/wiki/OpenStep . OpenStep was a cross platform UI development kit, even the web could be a target. Apple decided (possibly for good reasons, hard to argue with success) to kill this off. But, they had toyed with it, https://www.macrumors.com/2007/06/14/yellow-box-seems-to-exi... . So, Apple had effectively what Chromium has become. A cross-platform development and runtime environment.
Would things be different today if that wasn’t killed off? Would Apple have never come back from the brink of death to become the behemoth it is today, because it would have starved its own platforms? One thing you might have had is a cross-platform “native” UI platform, and that might have meant faster more efficient UIs like you want now.
Shoutout to GNUStep trying to keep the dream alive: https://en.m.wikipedia.org/wiki/GNUstep
Follow up question: maybe with Apple being so successful, now they could revive this and make it profitable for themselves, rather than starving their own platforms?
The bad news is that doesn't seem likely to happen.
People here crying about load times and FPS rendering are completely out of touch with the reality of SW development - getting stuff to function correctly and reliably with constantly changing requirements > performance, and that's hard enough even with tools that simplify SW development. Optimising for performance is a luxury very few can afford.
Game engineers are wizards, but real general-purpose UI is a different problem than the one they are generally solving. A game UI is typically very limited in terms of what types of information have to be displayed and how. Many applications have to support what is essentially arbitrary 2D content which has to be laid out dynamically at runtime, and this is something different from the problems most games have to solve.
Most game developers will make it as fast as they have to... in fact most developers do that.
Games are usually developed as abandonware. Do you want your apps to be developed as abandonware?
Not really. I'm sure plenty of people remember the quick feel of early PC UIs. Ironically, q3 kind of came at the end of that era.
Some of the same people might even remember when, with a little training, voice recognition software could do its thing without an internet connection and a warehouse full of computers at the other end, on a PC with less RAM than the framebuffer of a modern PC or phone...
I'm sensitive to latency.. the first thing I do when I set up a new Android phone is go into the developer settings and speed up all animations.
For our own company [0], we also treat speed as a top feature, though it's not something that's easy to market. It's something that power users appreciate. I even wrote a similar blog post [1] to this one. The magic number, from what I've found, is 100ms. If you can respond to a user action in 100ms, it feels instant to the user.
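One common trick for staying inside that 100ms budget even when the real work is slow is to acknowledge the action optimistically and reconcile afterwards. A minimal TypeScript sketch (the endpoint and function names are hypothetical):

    // Optimistic UI sketch: give feedback within a frame or two,
    // then let the slow network call catch up. The /api route is made up.
    async function toggleStar(itemId: string, button: HTMLButtonElement) {
      button.classList.toggle("starred"); // instant feedback, well under 100 ms
      try {
        const res = await fetch(`/api/items/${itemId}/star`, { method: "POST" });
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
      } catch {
        button.classList.toggle("starred"); // roll back if the server disagrees
      }
    }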
I would immediately apply but I'm not interested in Ruby or HTML/CSS anymore (although I still know the last two rather well and plan on making a return there to author my own blog theme).
My main focuses are Elixir and Rust -- the latter exactly because I want to make efficient and ultra-fast software. I'm also very invested and averagely skilled in DevOps/sysadmin activities.
I hope there are more companies like yours out there -- and that yours is thriving!
The current situation is people creating abstractions at the wrong level and not understanding the performance cost of things like reflection and ORMs.
Kudos to you. We need more people like you.
I wouldn't say native UIs necessarily, IMO, but I definitely agree that something has to change.
Current systems are not only getting slower and less useful, but they're also getting harder to develop, test and maintain as well -- and, consequently, buggier.
The fact that there still are many old, TUI-based systems out there AND that users favor them over the newer ones exposes a lesson we've been insisting on overlooking.
It doesn’t feel right with no animation (like Reduced Motion in settings) since spatial hints are lost.
This can be hard to achieve if you work off templates and plugins.
Yet, I find it supremely important. I frequently lose my train of thought while waiting for pages to load.
The problem is not the language or framework; it's that very few people/devs/businesses actually care about performance anymore, so they just implement the quickest solution without even thinking about the performance impact.
Profiling: https://i.snipboard.io/UPhtmH.jpg
When I built my blog, I tried to find every opportunity to reduce cruft (even stripping out extra CSS classes) so reading it would feel as close to a native app as possible.
You could argue that HN succeeded because it's focused on speed above all else.
(Also - fellow former Q1/Q3 player here, I competed in CPL, Quakecon, and a few other events).
Not sure I agree with this. I wrote a bunch of data vis GUIs with PyQt and Pyqtgraph, all Python, with everything keyboard shortcuts and accelerators, and it was Vim-like speed except where CPU bound by data processing (NumPy).
So I think it can be fairly easy, yet Qt dies (frequently, on HN) on the altar of native look/feel/platform (i.e. it doesn't look/feel like a macOS app on macOS).
The only change we might see is more "native" UIs written in C#, Swift, etc. Also, Swift will not be a suitable replacement in its current form. Any replacement needs to at minimum work on macOS plus Windows, and by "work" I mean you can create a UI without crazy amounts of platform-specific code.
Funny, Java applets in the '90s gained fame for being slow, caused mostly by junior devs putting stuff on the UI thread.
:-(
I do that on my Android phone. Feels snappier than the iPhone now. Not perfect though. Scrolling sucks.
JavaScript is bad but nowadays that's not the main evil. That falls on the hideous awful libraries on top of JS that everyone seem to love these days. They need to die ASAP.
I can never go back to Android now. I'm sure if you studied the phones under a high speed camera we'd be talking about differences of only tens of ms but when you tap something 1000x a day it really adds up. It's just like how most programmers are hyper sensitive to text editor latency.
Every second support email in the early days of https://www.darwinmail.app was from users who were wondering why the website wasn't faster to load and operate.
I knew that this was going to slowly kill the product if I didn't focus on optimising the speed immediately. I also heard somewhere that even a 0.1s increase in load time for Amazon's website would cost them somewhere in the region of hundreds of millions.
1. I gathered feedback from all users that said the website was slow (in any way and in any page/component/workflow).
2. I created a Trello board https://trello.com/c/PPuhLtW0/95-upgrade-performance for all the feedback.
3. Since that week of initial performance enhancement research and groundwork, I have essentially been completing todo's on that Trello card and adding more tasks as time goes on. I think the more speed improvements I make, the more I learn about what other parts of the application can be sped up. It's like economics, the more you learn, the more you realise you have so much more to learn :D
A few years later and I have not received an email suggesting to increase the speed of the app in several months, although I continue to make speed improvements on a regular basis.
Netflix has been my source of inspiration here. They are leagues ahead of every other streaming service, and their custom architecture placed at the ISP level is absolutely incredible and paramount to how they deliver content at such amazing speeds.
When fixing performance problems you shouldn't guess, just profile it to find the bottlenecks.
I've seen plenty of performance 'fixes' that weren't, pure guesses by developers that did nothing, when a quick profile immediately revealed the culprit.
In your case you also need to figure out if it's happening server-side or client-side. I generally start with the server-side logs, get a few days/weeks worth of data, find average page request times, plus how much deviation on those requests, then go from there. That gets you the server-side. For client-side, unfortunately it's a lot harder. Google analytics page load speed, for example, is a pile of crap. But, again, there's a profiler in dev tools, remember js compile time is a significant thing and can slow load time too so check that out as well as the actual run times (js compile time shows in the page load graph).
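To sketch the server-side half of that in TypeScript (the durations array stands in for whatever your own log pipeline produces; the values are made up):

    // Summarize request durations pulled from server logs.
    const durationsMs = [12, 15, 14, 230, 18, 16, 19, 480, 17, 13];

    function percentile(sorted: number[], p: number): number {
      const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
      return sorted[idx];
    }

    const sorted = [...durationsMs].sort((a, b) => a - b);
    const mean = sorted.reduce((a, b) => a + b, 0) / sorted.length;
    console.log(`mean=${mean.toFixed(1)}ms`);       // 83.4 -- looks tolerable
    console.log(`p50=${percentile(sorted, 50)}ms`); // 16 -- most requests are fine
    console.log(`p95=${percentile(sorted, 95)}ms`); // 480 -- the tail users feel

This is why looking at the deviation matters: a decent-looking mean can hide the tail latency that actual users complain about.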
That's something no other smartphone could do. I don't know how things are today, but it looks like Android more or less "solved" the problem by throwing powerful hardware at it.
The killer feature is not really speed, but low input latency. And this is achieved by taking performance into consideration during development. And contrary to the old "premature optimization is the root of all evil" saying, you have to do it early, because while it can be relatively easy to increase throughput, latency is much harder to deal with.
This is also part of the success of Google Chrome. While it didn't load pages that much faster than its competition it was great at showing you something quickly. It took ages for Firefox to catch up, and it looks like it did mostly because Chrome became slower over time. How is Servo going BTW?
> That's something no other smartphone could do.
Either I'm misreading you, or you have a strangely narrow version of the world we live in. What is so magical about the iPhone that no other smartphone can "react quickly to your input by showing you a nice, smooth but slow animation while work is being done in the background"?
(Part of my doubt probably comes from using a OnePlus 7 Pro as my daily driver. 90Hz refresh rate and everything is ridiculously fast and smooth. But that's not actually possible, is it?)
Hum... I don't think that makes much sense. Yes, there are some latency optimizations that are certain and architecture wide, so they are much easier to do at first write time, but there are a lot of latency optimizations that are iffy and local, and thus much easier to do with an actual profiler running.
The thing is, throughput optimizations also come in both forms. I'm having a very hard time remembering any large and general enough experience on the ratios, or arriving at a property that would change them for latency or throughput. I think that dimension is really not relevant to them.
I find Android is now terrible on both my Samsung Tab 2 and my Galaxy S8. Sometimes I click something and it takes over a second to do any UI changes and looks like it hasn't responded. Just as you go to click it again, it comes up. I find the same in multiple apps where basic actions take too long even simple menu/view apps like email.
I don't know what has happened but it does seem crazy that in 20 years with hardware that is 1000s of times more powerful, we still can't consistently solve click latency.
Maybe it's just me.
Animations are very seldom used on iOS to hide any work happening in the background. Most things happen instantly, and animations are added for usability, to give spatial hints and make the UI easier to follow.
Pretty much dead, unfortunately.
There's a reason why it's hard to ever go back, once you've experienced the fluidity of even just your mouse cursor reacting instantly to your movements.
If you've ever used the iPad Pro, there's clearly something special about the experience. It just _feels_ better, and for all the same reasons described in the article.
60hz is far from smooth, and that number is a leftover from days past, not what is actually optimal or good.
Display technologies unfortunately still have ways to go when it comes to high resolution, color accurate panels, with high refresh rates, but the general direction on the market is that high refresh rates are not available in the "productivity" category of monitors, even if sometimes the manufacturer has panels that would fit the bill. You unfortunately always need to look in the gaming category, which usually lack many of the features you'd like in a more productivity centered display. Such as a fully adjustable stand, high color accuracy and viewing angles, virtual display splitting, or just overall design of the enclosure.
I could go on another rant about display enclosure designs... Why isn't there a single company out there (with perhaps the exception of Dell) that's creating nice and minimal display enclosures that aren't covered in cheap plastic and "aesthetic" ornaments? Apple's Cinema Display from 2004 is to this day one of the better looking enclosures out there.
I don't think you can blame this on the consumers really. For the higher end market that I'm talking about in general here, I'd be willing to take a bet on if you build it they will come. I'd certainly be praising any company willing to take this on to high heavens.
I want a great, fast, accurate panel with a nice, minimal, aluminum enclosure. Is that just too much to ask?
But you probably haven't, except in, well, games?
> 60hz is far from smooth, and that number is a leftover from days past, not what is actually optimal or good.
Well, it would already be wonderful if we actually had 60 Hz in modern applications/devices, including 16 ms response time. I fired up an old game the other day on my arcade cab (CRT screen), some shoot'em up, and it was silky smooth. I'm pretty sure it was "only" 60 Hz, but it was constantly 60 Hz: any input with the joystick or buttons had results the very next frame.
This felt so much smoother than any of the army of modern devices I'm using on a daily basis: even if they can animate stuff at high refresh rate, the latency before the animation starts is what makes using them painful.
Refresh rate is a thing, but so is the latency between your input and when, visually, it produces a result.
I've seen people working on ports of old arcade games where they'd record, with high-speed cameras, LEDs physically hooked to the joysticks/buttons to make sure that "input at frame x means response at frame x + 1". Short of that, your app very probably is not responding in 16 ms or less, unless you really know what you're doing.
There was this famous rant by John Carmack where he lamented that on PCs it was faster to do a transatlantic ping than to push one pixel to the screen. I don't know how far we've come, but comparing modern devices to my old arcade cab and its measly 60 Hz (but 16 ms latency), I'm still not impressed.
A 120 Hz or 144 Hz or 240 Hz display is no good if it takes 35 ms between when you move the mouse and when you see the result on screen: that's not "120 Hz" but more like 30 Hz. And 30 Hz feels laggier than a 35-year-old arcade cab: it is that shameful. 35 years, and still feeling more responsive than any productivity app.
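To put rough numbers on that claim (a back-of-envelope calculation, in TypeScript for concreteness):

    // Refresh rate sets the frame *budget*; responsiveness is set by
    // end-to-end input-to-photon latency.
    const frameBudgetMs = (hz: number) => 1000 / hz;
    console.log(frameBudgetMs(60).toFixed(1));  // 16.7 -- input this frame, result next frame
    console.log(frameBudgetMs(144).toFixed(1)); // 6.9
    // 35 ms of input latency behaves like a ~29 Hz loop, whatever the panel does:
    console.log((1000 / 35).toFixed(1));        // 28.6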
I remember a recent tool posted here (I think for OS X: maybe an editor) by someone who was fed up with this extreme "input lag" and guaranteed his program would answer in less than 16 ms (maybe it was 24 ms, I don't remember exactly). But that is an exception.
I think you're highly underestimating how smooth 60 Hz already is when there's no input lag. Now, of course, I'm taking 120 Hz or more any day over 60 Hz, but we badly need to focus on input lag too.
And, sadly, we live in a world where I'd scientifically guesstimate that 99.99% of the programmers are totally unable, due to limitations of their tools (do they have high speed cameras and can they prove how fast things are pushed to the screen?) / knowledge (I'm not John Carmack and modern software stacks sure seems complicated) / languages (let's not start a flame war) / mindset (never optimize anything / 100 MB JavaScript downloads are fine, etc.), to push anything to the screen in 8ms or 4ms.
Except for top-notch game programmers working on AAA titles.
So 240 Hz monitors, sure: bring them up. But bring me too the programmers and tools needed so that in 4 ms I'll see the result of my inputs.
In our case, our users had a specific flow through the application they would use, and it worked, but it required clicking many (10+) buttons and waiting for a web request on each. People on the team were satisfied that the flow worked and going through it didn't take TOO long... But what people on our side didn't get is that our customers had to go through this flow dozens if not hundreds of times - some users would need to do it this many times regularly. It effectively made our users hate using the product, or they would refuse to, or they'd use it but only a little bit and they'd try to minimize the cost.
I tried to get people on our side to experience the pain points, e.g. asking PMs to follow this flow one hundred times, and things like that, but I never could get through to anyone that we should redesign and refocus on making it usable. Maybe a mockup of a faster flow was what was needed to be persuasive there.
I bring this up with others, and they are lukewarm about it. I feel like our company is in deeeeep shit if I don't convince people this is a problem.
I always felt like resistive screens were more responsive than capacitive screens.
Case in point: my 3DS resistive screen and Palm Centro responded instantly. I think their downside was the necessary use of a stylus (because of the additional precision, their UIs required you to pull out the stylus before you could do anything effective).
What the Apple iPod Touch / iPhone did, was allow you to touch without using a stylus.
Anyway, I read this post as if it's a mirror image of my reality. The one thing I remember about Apple's capacitive push was that it felt slower than what I was used to. Honest.
--------
With that being said: I've played fighting games vs opponents who can one-frame link and counter-throw within 7 frames (115 milliseconds). I'm well aware of the human brain's capability to process data far faster than most people realize.
Musicians, video game players, athletes... I expect most of them to have reaction speeds well above average: below the 200ms typical human. Even then, "average" humans have far better reaction speeds and ability to perceive things that happen in fractions of a second than commonly assumed (at least once you make them aware of those things).
UI-speed is absolutely a great feature. I just disagree that Apple's iPod Touch or iPhone was a good representation of that.
I upgraded my monitor to 144hz, got a low delay mouse and headset (some headsets have over 400ms delay and audio response is faster than visual!). My ranking in games I've played for years has gone up about 1 standard deviation. I'm at my record high ranking in every game and it continues to rise.
Likely biased study, but Nvidia found an eyebrow raising difference in player performance when using higher refresh rates. https://www.nvidia.com/en-us/geforce/news/geforce-gives-you-...
We can look at input lag [1] and Microsoft Research's work [2] on touch input, Apple's ProMotion being part of that as well. For the past 20 years we have made 10-100x improvements in bandwidth at the expense of latency. Now we need to do more work on it, especially if we want VR or AR, which are extremely latency sensitive. John Carmack [3] used to talk a lot about this when he was still working on Oculus: how it was faster to send something hundreds of miles away than to show it on your computer screen.
[1] https://danluu.com/input-lag/
Speed matters. I CAN perceive the latency of using an SPA vs a native application. I notice the difference between executing a GNU binary vs running a JS-based script.
I agree with your post. We as a community have completely subverted the meaning of this quote. It is originally about the need to profile your code, and about how programmers instincts often fail them, making them optimize the wrong things.
But when it mixed with Startup Culture it morphed into "don't worry about speed, just write whatever shitty code comes into your head and only optimize if a customer complains... scratch that, let's not listen to customer complaints because we know better".
Like you said, some companies with good products and some good developers are following what Knuth had to say and are constantly optimizing for speed (but after profiling). Others are engaged in a race to the bottom and are trying to convince everyone else that careless engineering is somewhat better.
This applies to every metric, not just performance. For example: don't optimise your top of the funnel when your bottleneck is actually conversion.
So measure, and then optimise. Don't optimise prematurely.
This. Yet, I’d say it’s not the teams. In my experience it’s usually management that demands new features and doesn’t care about speed.
This line of reasoning makes me sad. It highlights so many problems in a company that a developer has to deal with:
- Seeing 'management' and 'devs' as opposing teams shows a lack of communication and a lack of understanding from everyone involved.
- A company where managers aren't willing to listen to developers is never going to put out a great product. Developers have expertise and know what they're doing.
- A company where developers think they know best is never going to put out a great product either. Managers also have expertise and know what they're doing.
- If the "managers" dictating that features are added are "higher ups" rather than product managers then the company is never going to put out a great product because the people who talk to the customers and look at usage metrics should be driving the product roadmap. Customer needs should be driving what gets added.
- Developers who aren't putting up a fight to write good, fast code because they're not being listened to stop caring about what they're building, and that means there's very likely to be other problems like significant bugs, tech debt, etc. That just grinds you down and stresses you out.
All in all, if your opinion is "the product I build sucks because managers make it suck" you probably need to find a new job. Not every company is like that. Find a good one.
Speaking of Teams, there is something I don't really understand, which is that we have just experienced nearly a full year of intense competition between Teams, Webex, Zoom, Bluejeans, Skype and all the rest. All of those products should be AMAZING by now. But actually most of them are still clunky as hell, and Teams itself is probably the worst of them, it's still as slow as it was a year ago, still unreliable and still missing (trivial) features that people actually want, like the ability to block/ignore certain contacts. And it can't be - if anyone at Microsoft actually uses it themselves - that they don't know how bad it is. But they seem to be doing absolutely nothing about it.
However, speed is not “the killer feature”. Speed does not add any value in isolation; your app needs to solve a need for the user first. If you don’t have PMF don’t think about speed yet.
The article gestures at objectivity by linking some cases where people measured revenue gains from speed improvements, but fails to follow through and actually propose an experiment or ROI calc. If you think your app is slow, run an experiment and measure the impact on conversion. (You can even take a page from Google’s book and _add_ delay with a simple sleep() if you don’t want to spend any time on perf work before you get data. Or just do the first bit of low-hanging fruit and measure the impact.)
Talk to your users and ask them what frustrates them in the app. It might be “takes so long to check out”, or it might be “it just lacks feature X that competitor Y has”. I’d suggest it’s unwise to spend time on perf work if you are pre-pmf and the main feedback was the latter. Again, do experiments too because customers don’t always tell you what they need. In particular enterprise users often don’t care as much about speed, as long as you tick all of the boxes. Many users are used to line-of-business software that is slow and buggy, so your bar in B2B is not always high here.
Finally do an ROI calculation. If a perf iteration is going to cost you $20k in dev resource, and get you 7% improvement on $10k of monthly revenue, that might not be the right thing to focus on. Ideally you’re looking at features that will improve your top of funnel volume more than that.
It’s all a trade-off. It depends on your company’s level of maturity, Product/Market fit, and the value of the marginal feature that you’d be deferring to make your app go faster.
If we interpret this to be a political manifesto carrying the message “you should care more about speed/performance”, I’d prefer the meta-level “you should care more about trade-offs and marginal value”.
How could the majority of people collect a salary working on software that has no users? That makes no sense.
1) As programmers we're biased to feel like speed is the most important thing because it's very fun and satisfying to optimize. In reality, for actual users, it's one of many different axes of value that have to be weighed against each other. In some domains it's critical, in some it matters very little, in most cases it's one important factor among many.
2) There are different types of "speed". Generally anything that's supposed to mimic something physical - basic UI feedback, real-time games/simulations, etc - has a much higher speed requirement than some abstract process. Will the process take long enough that it makes sense to show a loading spinner at all? Then the user probably won't mind waiting a couple extra seconds. Will it take <500ms? then the user will approximate it to "instant", and will notice if there's a bit of "lag".
> Phones in 2007 had the same features as the iPhone. The Palm Treo even had a touch screen. The difference was speed.
If the original iPhone had taken twice as long as the Treo to load a web page, but the touch screen was still more responsive, people still would have perceived it as being "faster". The extra seconds matter less than shaving off the extra milliseconds.
All that said, I do agree with the general thesis. Evernote has just come up with a huge update of all their apps, having ported them all to Electron to standardize development. The only problem is, they're all brutally slow compared to the native apps that preceded them, and it truly ruins the experience.
Stopped listening to these people for product expertise. Even took a chance on Facebook at $19 when HN was gleefully expounding on how this was obvious and the company was doomed. Glad I did that.
Did it again when everyone on HN was convinced SMCI was spying for China. Worked out again.
I'm going to call it "Tech Enthusiast Inverse Sentiment Index" TEISI. List it on the NYSE and people can make big money doing the opposite of people here. Maybe you get a couple of losses like WeWork and whatever but overall, I think you win.
I'll never get another Samsung, even though I don't know if they did it deliberately, or if it was even them that did it.
Somehow my current phone has lasted 3 years with no appreciable slowing.
Everything is just too slow -- and it doesn't need to be.
It seems to me that basically 100% of the UI/UX developers at the big tech companies are woefully ignorant of the massive amount of data and papers on human-computer interaction. I'm guessing that's because few comp-sci programs even touch the topic, spending all their time instead on more esoteric/mathematical material.
In summary, a very large number of studies were done from the 1960s through the 1980s on the _human_ aspects of responsiveness (important when timesharing became common): how people learned computer interfaces, and how effective they were at operating them. Despite some of these papers being over 40 years old, none of it has really changed, because the studies were about humans more than computers. The underlying computing may have changed from a time-shared terminal to a phone in someone's hand connected to a server, but in that time the human cognitive loop hasn't changed.
IMHO, and somewhat backed by the science, any system that isn't responding in under 100ms is broken unless it's performing something extraordinary. If it's actually interactive (like typing at a command prompt), even that is far too slow. User frustration and loss of attention are real things, and you can bet that, given the choice, users will pick the less frustrating system. The saving grace for many of these platforms is that the entire industry is trying to be like the fashion industry and follow the latest trends, so it doesn't matter if BigCoX makes a huge UI blunder - all the others will follow it down the lemming hole.
So tell me why some of the conclusions in a paper like http://larch-www.lcs.mit.edu/~corbato/sjcc62/ (1962) are wrong. How about: http://yusufarslan.net/sites/yusufarslan.net/files/upload/co... (1968)
Amusingly, other classics like https://www.microsoft.com/en-us/research/wp-content/uploads/... get rediscovered regularly too (1983).
Thanks everyone for sharing really awesome examples in the comments here - from games to receipt printers to apps, it's clear that speed is valued.
Or... that there's a big opportunity to bring back lightning-fast products :)
1/3 of a second is already insane lag, 3 seconds is just ridiculous.
My first impression was "unbelievable" - how on earth could anyone think a Palm device is slow?! Then I followed the link and saw a Palm Treo 750/V... oh, of course, that thing ran Windows Mobile.
A Palm device running Palm OS is blazing fast! I switched from a Treo 650 to an iPhone in 2009, and almost everything became much slower. The iPhone software was slow, and so was the user interaction (in the UX sense).
Palm only started using Windows in its later years, and there were actually very few Windows Palm phones. Most Palm PDAs and phones ran Palm OS and were very, very fast.
Speed has always been not only the most important thing, but virtually the only important thing. Back before most of you were born, there was a review (in PC Magazine, IIRC) of spreadsheet programs. MBA Analyst dominated the others (VisiCalc and Lotus 1-2-3, IIRC) in every category except speed, where it was OK but not great. That's why you never heard of it.
The speed requirement is closely related to the self-importance fallacy. If a computer needs time to think, maybe we could make good use of a few moments' pause, too.
For example, line-of-business (LOB) apps are built with ROI in mind. LOB apps help businesses run more efficiently, and they employ the vast majority of developers. These are the most-used apps in the world, and company owners are much more interested in functionality, automation, and distribution than in performance and usability.
Examples I can think of: the emoji selector (ctrl+cmd+space) is quite slow. On my brand-new MacBook it's a small but noticeable pause, and on my old MacBook it's several seconds (during which keyboard input is lost).
> If you can’t speed up a specific action, you can often fake it. Perceived speed is just as important as actual speed.
A second example is FaceTime on my iPhone. It fakes being fast by showing the last opened screen, which for me is very often "most recent calls". The problem is that in the meantime there's been another call. The result: I see the person I want to call back, tap where they are on the screen, the content changes under my finger, and I call the wrong person. This happens often enough that I should have learned by now, but somehow I don't.
A very strange phone to reference. The first iPhone was slow as molasses, with all of its excessive visual effects.
It was only around the OMAP iPhones that they first got proper hardware acceleration.
Palms were noticably faster than WinMo 6, and WinMo 6 was faster than 5 which was indeed painful to use because of input lag.
Ironically, Android is still somewhat slower than WinMo 6 on input lag, despite every trick Google throws at it.
I read somewhere they even tried to wire the input layer directly to hardware acceleration to make scrolling less laggy.
It was the first all-in-one (camera, music player, phone, game system, organizer, etc.) that didn't make you a bully target.
Not saying speed is unimportant... I'm saying this is straight-up lying.
Like back in the AOL days, when dial-up was a thing: the internet was dogshit slow, but you still had to get in line to use it. It took hours more often than not.
If people valued speed more than anything, AOL would have gone bankrupt. People are willing to pay extra for speed, but they can live without it as long as the features are there.
This is starting to bug me, because for startups this is bad advice. It's actually harmful, since at the beginning it's all about product-market fit. You're better off throwing away code than optimizing it.
But Google Pay released a new update built on the Flutter framework, and now even scrolling takes ages to complete. I complained on the Play Store, but the reply said to check my internet speed.
Meanwhile, PayTM has also redesigned their app, but unlike Google Pay, their updates actually made the app much faster and more intuitive. I still check Google Pay from time to time to see if they've fixed it, but the scrolling is still laggy (it feels like you're on a web page) and the loading page still flickers.
Few apps are native anymore, they're all just wrappers around web pages. It sucks.
I think it's fine to say faster page loading makes users happier and will increase conversions, but you should avoid generalising with such specific figures (I often see page-speed articles whose titles quote conversion-rate changes to 4 significant figures). It's going to vary wildly based on the product, audience, price, exclusivity, customer loyalty, etc., and you'll hit diminishing returns as well.
The impact page speed has on amazon.com conversions isn't going to be the same as on your side-project website for lots of reasons.
It’s incredible that nobody has 1-click flight bookings à la Amazon yet.
If you deliberately add a second to the checkout and measure the conversion rate, it will go down. But to then claim that reducing latency creates that much extra conversion is a lie. There is an oft-touted figure from when the BBC deliberately slowed its website to assess engagement.
However, the truth is that speed is good.
It's painful (for me, that is :D), but I know it's the right thing to do.
Now turn off the screen with the power button.
Notice how the annoying delay when turning off the screen is gone? Enjoy :-)
Of course, you also lose the ability to invoke the wallet manually, but luckily, if you hold the phone near a payment terminal, it will auto-activate.
This probably won't work if you're a heavy Apple Wallet user, but if you only use it sporadically, I personally think it's worth it. I found the delay very annoying when I switched to an iPhone without a home button.
Speed (or, more likely, perceived speed) is only one part of UX, and how much it matters depends a lot on what else is going on and on the user's expectations. Even focusing merely on responsiveness feels a bit superficial.
Something a bit closer to the core of it: whenever a user is stuck waiting for your software, their experience suffers. That can be articulated better, I'm sure - and it's still only one part of a complex equation.
The thing with the iPhone was the capacitive screen, which made touch UI work. At that point I had already worked with phone touch UIs for seven years, and that's the thing that felt like magic.
I was reminded of this today as I installed Debian on a new computer. Why do the GNOME makers imagine it's OK to default to slow ("Animations" on) rather than instant? Do they really think we'll be happy with a 200ms or 500ms delay every time we minimize or open a window?
For my personal notes, I'm still organizing it in local files (via vimwiki), but for team notes, Notion needs to step up its game.
Oddly the links were right next to each other.
You can cheat in some weird and fun ways, though. For instance, if you say "no user of this system will ever be more than 50ms away", you get to play some really interesting games with vertical scaling and consolidating compute in an all-in way - i.e. server-side technology run out of a single datacenter near the userbase.
If your latency domain allows it, something like server-side Blazor can be an incredible experience for your users. First load is almost immediate because there's virtually no client code, and everything is incremental from there. If you're within 50ms of the server, the UI feels instant in my experience. The way applications are developed with this tech means that if your business services complete requests within the performance budget, you can be almost certain the end user will see the same.
Going all the way down the rabbit hole, understanding how NUMA impacts performance can make 5+ orders of magnitude of difference in latency and throughput. Letting a thread warm up on a hot path and keeping it fed with well-structured data is foundational to ultra-low-latency processing.
You can handle well over a million events per second on a single thread on any recent PC using techniques such as the LMAX Disruptor combined with a web hosting technology like Kestrel. The difference between a class and a struct can be 10x at that level of optimization. I measure user interactions in microseconds in my stack these days.
A millisecond is a fucking eternity. You shouldn't be doing a bunch of back and forth bullshit in that kind of latency domain. Stream client events to server as fast as possible, microbatch and process, prepare final DOM changeset and send to client all at once. How could any other client-server architecture be faster than this, especially if we are forced to care about a bucket of shared state?
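For flavor, here's a rough sketch of that stream-microbatch-flush loop, written in TypeScript for consistency with the other examples rather than the .NET stack described above (all the types, the transport stub, and the 1ms window are illustrative):

    type ClientEvent = { userId: string; kind: string; payload?: unknown };
    type DomChangeset = { applied: string }[];

    const queue: ClientEvent[] = [];

    // Producers: push client events into the queue as fast as they arrive.
    function enqueue(ev: ClientEvent): void {
      queue.push(ev);
    }

    // Placeholder transport back to the client (e.g. a websocket send).
    function sendToClient(userId: string, changes: DomChangeset): void {
      console.log(userId, changes);
    }

    // Apply business logic to a whole batch in one pass.
    function processBatch(events: ClientEvent[]): DomChangeset {
      return events.map((ev) => ({ applied: ev.kind }));
    }

    // Single consumer: drain everything queued since the last tick,
    // process it in one go, and ship one changeset per user.
    setInterval(() => {
      if (queue.length === 0) return;
      const batch = queue.splice(0, queue.length);
      const byUser = new Map<string, ClientEvent[]>();
      for (const ev of batch) {
        const list = byUser.get(ev.userId) ?? [];
        list.push(ev);
        byUser.set(ev.userId, list);
      }
      for (const [userId, events] of byUser) {
        sendToClient(userId, processBatch(events));
      }
    }, 1); // ~1ms microbatch window

The point is the shape: many tiny round trips get replaced by one inbound stream and one outbound changeset per batch.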
How do you explain, then, that iPhones took over the market even though Nokias had many more features? Speed, or the feeling of speed, was part of it, I'm sure.
That said, I feel like it is sort of belaboring the obvious.
I think that our overdependence on dependencies has a lot to do with UI latency.
Not saying I necessarily disagree with the premise but they chose a poor example.