It's an entirely custom toolkit, so don't expect it to have a native look and feel, but it's a GPU-first design with multiple back-ends. It can be used in native OpenGL apps too. It's an immediate-mode UI, so it's very easy to build and update even complex windows. Great choice if you want to prototype a game.
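For readers unfamiliar with the term: in an immediate-mode UI the whole interface is re-declared from application state every frame, so there is no retained widget tree to keep in sync. A minimal sketch of the idea (the `Ui` class and its methods are hypothetical, not egui's actual API):

```javascript
// Immediate-mode sketch: every frame rebuilds the UI from scratch out of
// plain function calls; input handling happens inline with declaration.
class Ui {
  constructor(clicks) { this.clicks = clicks; this.output = []; }
  // button() both draws the widget and reports whether it was clicked
  button(label) { this.output.push(`[${label}]`); return this.clicks.has(label); }
  label(text) { this.output.push(text); }
}

function drawFrame(state, clicks) {
  const ui = new Ui(clicks);
  if (ui.button("Increment")) state.count += 1; // click handled inline
  ui.label(`Count: ${state.count}`);
  return ui.output;
}

const state = { count: 0 };
drawFrame(state, new Set());                 // frame 1: no input
drawFrame(state, new Set(["Increment"]));    // frame 2: button clicked
console.log(state.count); // 1
```

There is no "update the widget" step anywhere: the next frame simply draws the new state, which is why complex windows stay easy to keep current.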
While few people (like myself) do this directly, lots of people use it indirectly through extensions like Adblock, Stylus, etc.
For example, I find the fonts hard to read due to the colour choices; then I zoom in and it's better, but now this "Widget gallery" no longer fits on my screen and there is no scrollbar.
I suspect that if Amazon considered this at all, they certainly didn't see it as a disadvantage!
This is the context: they use egui to display debugger information, where performance doesn't matter.
The major PITA is keyboard handling when selecting, cutting, and pasting text in edit controls and the markup editor.
There are demo apps that you can just grab and edit to your liking, and the docs explain what you need to build the WASM app and run it.
In any case, the fashion (for the last ~10 years) is to consider immediate mode the modern approach, and retained mode an awkward practice of the 90s. So you can call "immediate mode" a lot of things, but you can't call it "outdated" when most consider it the new black...
On my old tablet, Netflix runs a-ok, the media player has hardware acceleration and runs a-ok, but Amazon Prime stutters.
I haven't checked it in years, so maybe it is better now, but I am skeptical this is the case, because if the original creators of the Android app didn't think hardware accelerated video might be a nice idea, why would it be different now? This is a structural issue.
There are trade-offs to be made.
Surely AVC, which is H.264, should still be supported...
Maybe it gets tripped up trying to use Google's software decoder codecs instead?
That said, I'm not able to test it anymore. The tablet is on KitKat and I can't find a site today hosting an APK with a minSdk older than Lollipop. It's not a big deal, but it's disappointing to see APKs for services that used to work just disappear. They always go on about "security". Why is it a problem for everyone except Netflix? Netflix, at least, loves supporting old devices.
> xyz [4K/UHD]
4K is not available when using a computer
> HD is not available because you're not using Windows
(shown even when using Windows)
> HD is not available because you don't have HDCP
Graphics driver begs to differ.
Prime SD is usually somewhere between 240p and 320p, HD looks like a decently encoded 720p file. Never seen it but I'm guessing 4K might actually approach the quality of a 2007 Blu-Ray.
Apple TV has been the best platform so far for 4K and Atmos content, if you also want an option to purchase a title.
https://www.highdefdigest.com/news/show/everything-on-amazon...
Just checked, and Prime is using 90 MB on my TV while YouTube uses 214 KB, so maybe I already have the wasm monstrosity.
HDR content is somewhat better when played through the Prime Video PS5 app, though it still seems much darker than HDR content played on the Netflix/AppleTV/HBO LGTV apps.
I bought the 1080p Blu-Ray version from the UK, and even my parents standing 10 feet away from our 48" 1080p TV (without me explaining the problems with the Prime version) could tell the difference after about 10 seconds.
I feel like the problems with their app are more algorithmic than in the implementation details. I once had to go edit my watchlist on amazon.com because I had added enough movies that whatever super-linear algorithm they were using simply could not comprehend my watchlist and the input watchdog timer would murder the app. And come to think of it that was when I used Prime on a Roku, so it's a cross-platform problem (yay, WASM!).
edit: HDR, not UHD... though I've only noticed it predominantly on their originals vs. purchases.
Not to mention that their show recommendations are the worst ever, I would be ashamed to be part of their ML team honestly. (For reference I'm an American/North African woman staying in France and all of my recommendations are for Indian action movies, literally all of them. I've never seen a single Bollywood film ever.)
Anyway this is to say, technical innovation won't undo the damage done by shitty product managers.
"Man, that's an awful lot of work to avoid writing a native app."
Caveat: previously worked at Prime Video though not in this area, still at Amazon
That Qt application was only lightly modified for each platform, IIRC, and it appeared all over the place from those cheap Linux-based Blu-ray players to Smart TV clients. Anywhere that was embedded, generally Linux-based, and unlikely to get frequent updates.
As for some of the larger players like Roku or Android, it's more obvious who to hire: Java developers and BrightScript developers.
Likewise, Amazon needs to run on all sorts of different smart TV platforms, not just iOS and Android. All of those apps need to be consistent to each other, rather than to their host platforms. That's almost the textbook case for using a cross-platform framework.
It really doesn’t. I remember this being brought up back when the Uber engineer crying about how their compiler couldn’t match their scale was making the rounds, and in the end the numbers just did not add up to support the app size they have right now.
> All of those apps need to be consistent to each other, rather than to their host platforms.
Why? Platform-specific apps, for the most part, should feel at home on their platforms. Not doing this is how you get YouTube for Apple TV and similarly bad apps.
How is the breadth of their payment back-end an excuse for a bloated client?
This is interesting. So essentially Uber is a "global" app by default?
[1]...until Branding(tm) enters the chat.
A native app on Windows, macOS, iOS and Android (TV), and some other solution for other platforms, isn't an unreasonable ask in this context.
If streaming services desire to replace local playback, they’ll also have to replace the convenience that provided.
My own strategy would be to keep the generic electron version and then make a native app for the top 5 most popular TV models but that’s just me.
https://eng.uber.com/how-uber-deals-with-large-ios-app-size/
But for TVs, ehh. There's a lot more fragmentation on the TV market. And if things were slightly different, we'd have a lot more fragmentation on the mobile market as well. Actually there are plenty of mobile alternatives, but developers don't really want to support all operating systems - and these cross-platform tools often have substandard support for mobile operating systems that aren't iOS or Android.
From a startup POV, going with native apps will require multiple teams (usually just Android + iOS, but occasionally Windows as well).
React Native will only require one team, or even just one engineer, depending on the size of the project.
Better yet, if your front-end developer is fluent in React (web), then that developer is already fluent in React Native.
I completely disagree on the "awful lot of work", when you could literally get your FE developer to work on your "native" apps.
Sure, where performance must be squeezed, it's probably optimal to go native, but that bar is set a lot lower than where it actually should be.
If React Native is good enough for Discord, it's good for 3 quarters of all apps out there.
- Web devs typically don’t have strong instincts for mobile apps; they require more mentorship and onboarding time.
- Animation and interactivity are strongly limited in RN - you’re always so many levels removed from the actual APIs being used. For example, UIKit and Core Animation are absolutely amazing and super lightweight on the CPU/GPU in comparison. UICollectionView is a modern miracle that offers so much more control than FlatList.
- JS/Babel/node_modules is hell, no I won’t elaborate.
- Multi-threading, priority/concurrent queues etc for maintaining good all-around perf while doing intense operations is very strong with native platforms. RN just has sophisticated band-aids.
Honestly it mostly depends on the type of work the app will be doing, the dev resources available to you, and the level of quality you want to attain. I strive to create the best of the best user experience, and native is the best way to attain that.
The app I work on still uses RN for some features, but the results historically have been disappointing on average.
Happy coding!
Native apps shine when you do something novel, something that’s beyond text, images and video consumption or you leverage platform specific functions.
IMHO, sticking to Web technologies is a better option for things like Netflix or Amazon Prime but not good for Uber and alike.
I'm forced to use Slack at $job, and it is awful, for example. Some text fields look like text fields and are used as text fields, but don't respect Shift+End/Home/Arrow keyboard commands. Others do. It is annoying as hell. It is Electron.
Even more «native» toolkits like GTK and Qt have small discrepancies. Simple example: my Windows has an English MUI (interface language) but a Russian locale (time and date format, etc.). In POSIX terms it is something like LC_MESSAGES=en_GB with LANG=ru_RU (per POSIX, the more specific category overrides the generic fallback). 95% of both Qt-based and GTK-based software with a Russian translation speaks Russian to me (and, if I'm lucky, has a setting somewhere to enforce English). They think that «(system language)» is Russian. But it is not! That's not very «native», IMHO. OK, GTK-based software is typically built via cygwin/mingw, so it really isn't very native and is «hacked» to run on Windows, but most Qt-based software IS built for Windows, as Windows is a fully supported, tier-1 platform for Qt. But still.
If I started writing down all the small nitpicks about cross-platform software which show that it is not native, I could find many in any cross-platform software: corner cases in keyboard shortcuts, non-copyable message boxes, strange tab behaviors, non-standard open/save/print dialogs, etc., etc. If you've used a platform for 25+ years (yes, Windows changed a lot from 3.11 for Workgroups, which was my first version, but these changes were small, step-by-step ones, and A LOT of the UX still hasn't changed!), you have a shitton of muscle memory, and all these nitpicks throw you off.
Yes, I know that there is no "native" toolkit for Linux. IMHO, that is a big problem, much bigger than all the problems of X11 that Wayland wants to solve.
The point is moot except for anyone watching Prime Video in the web browser.
The UI, however, is easily the worst of any streaming platform. Why is a search result returned for each individual season of a TV series? Why doesn't it follow ANY of the Apple TV UI conventions?
But somehow it's more reliable than Netflix, Hulu, and HBO Max (hbo isn't that high of a bar though).
Start an episode, pause immediately because you just realized you forgot something in the kitchen? Oops now you disabled subtitles because that UI button was still busy popping up (and subtitles are necessary to understand their whispers without being DEAFENED BY LOUD SCENES without adjusting volume by 50% every 2 minutes).
Want to hide the UI because you just finished navigating the menus to re-enable those subtitles and hit the back button? Oops now you go back to the main overview because the first click did register, it just took a minute for the TV to respond and now your second click takes you back. Now have fun scrolling down five rows three columns to find the thing you were watching 1 second ago and do this dance again.
This is not helped by LG shipping a CPU literally slower than a Raspberry Pi's in a €550 2019 TV, but other apps do not have this problem. It's too bad Prime has some exclusives like The Expanse; I look forward to finishing the couple of series we have in the queue and cancelling Prime again.
Overall it has the shittiest experience, also thanks to the monstrosity that is X-ray.
At best, this article can only say that Prime Video would maybe be even more shitty without WASM?
> This architecture split allows us to deliver new features and bug fixes without having to go through the very slow process of updating the C++ layer. The downloadable code is delivered through a fully automated continuous integration and delivery pipeline that can release updates as often as every few hours
They do support a lot of device types, which means a lot of compilation to produce many different binaries and probably more reliance on the app update systems of third parties.
Exactly right. Big platforms with guaranteed support is one thing, but here's the situation for smart TVs right now:
It's 2022. You build an app for SuperScreen's latest range of smart TV. You update it every so often, and SuperScreen are happy to certify those updates and help you push them.
It's 2023. You port your app to SuperScreen's latest range of smart TVs. You keep updating your 2022 app, but SuperScreen are slightly less helpful now, as most of their attention is focused on the 2023 set (and upcoming 2024 range).
It's 2024. You port your app to SuperScreen's latest range of smart TVs. But when you want to update your 2022 app, SuperScreen say "hmm, we don't really have a lot of time now to certify apps for these older devices, please try to make as few updates as possible". It's a little tough but you have to do it, as the alternative is that 2022 TVs don't get any of your new features.
It's 2025. You port your app to SuperScreen's latest range of smart TVs. But now SuperScreen refuse to certify updates to your 2022 app saying "these are legacy devices and we can't justify the time or effort to certify apps on this platform any longer... unless you were to pay us a LOT of money".
You might get approval for one last update, paying SuperScreen the money, and then you effectively mothball your 2022 app. It becomes a "legacy" app for you as well, stuck in maintenance mode, not getting any updates or new features. And this is on a TV that is only four years old. So the best way forward is to make your native layer as small and light as possible, and shift all of the heavy lifting into the runtime client layer. If most of your client UI is a downloadable JS / WASM runtime, then you can keep supporting it even if the platform owners don't want to play ball any more.
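The "small native layer plus downloadable runtime" strategy boils down to the shell only knowing how to pick and run a compatible UI bundle at startup. A rough sketch of such a version check (all names and fields here are hypothetical, not Amazon's actual scheme):

```javascript
// Hypothetical thin-shell update check: the frozen native layer exposes a
// stable ABI version, and the server advertises UI bundles with a minimum
// required ABI. Old TVs keep getting the newest bundle they can still run,
// with no platform re-certification involved.
function pickBundle(installed, available) {
  // Keep only bundles this shell's ABI can host, newest first.
  const compatible = available.filter(b => b.minShellAbi <= installed.shellAbi);
  compatible.sort((a, b) => b.version - a.version);
  return compatible[0] ?? installed.fallbackBundle;
}

const shell = { shellAbi: 3, fallbackBundle: { version: 1, minShellAbi: 1 } };
const server = [
  { version: 7, minShellAbi: 2 },
  { version: 8, minShellAbi: 4 }, // needs a newer shell than this 2022 TV has
];
console.log(pickBundle(shell, server).version); // 7
```

The design consequence is exactly the one described above: the mothballed 2022 device stops at whatever features its ABI supports, but it never falls off the update train entirely.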
Objective-C’s success was driven by iOS.
C’s success was driven by Unix.
C++’s early success was driven in large part by 90’s GUI frameworks (MFC, OWL, Qt, etc.).
I think Wasm is shaping up to be huge in that you can safely run high-performance code across multiple operating systems/CPUs. Of all the languages, I think Rust has the best WASM tooling, and the 2020’s may end up being the WASM/Rust decade.
Regardless, being open standards, not obnoxiously object oriented, and better designed to interoperate with normal HTML content, the modern HTML5 ecosystem is certainly way more promising than Java applets / Flash / Silverlight.
Still pales in comparison to .NET
Nope, C/C++ has, emscripten remains undefeated
You don't just put a name on something and expect things to skyrocket.
C++/Go/C# have a better chance of becoming the standard toolkit for WASM than Rust does.
Rust's problem is that people promote "Rust"™ more than their projects; the crabs definitely overshadow them. It's unfortunate.
The Rust tooling certainly was the best, no question. But as the Mozilla WASM devs have moved on, the Rust WASM tooling has sadly stagnated, and the C++ tooling has caught up.
Languages like C# and Go are inherently worse for WASM because they require GC and have big runtimes they have to bring along.
Well, Objective-C is from 1984 and its major success was being cloned to make a programming language called Java.
James Gosling's earlier object oriented PostScript based NeWS interpreter was a lot more like Objective C and Smalltalk than his later Java language was. (But I'm not going to mention the earlier abomination that was Gosling Emacs MockLisp. Oops!)
https://medium.com/@donhopkins/bill-joys-law-2-year-1984-mil...
>Bill Joy’s Law: 2^(Year-1984) Million Instructions per Second
>The peak computer speed doubles each year and thus is given by a simple function of time. Specifically, S = 2^(Year-1984), in which S is the peak computer speed attained during each year, expressed in MIPS. -Wikipedia, Joy’s law (computing)
>Introduction: These are some highlights from a prescient talk by Bill Joy in February of 1991.
>“It’s vintage wnj. When assessing wnj-speak, remember Eric Schmidt’s comment that Bill is almost always qualitatively right, but the time scale is sometimes wrong.” -David Hough
>C++++-=: “C++++-= is the new language that is a little more than C++ and a lot less.” -Bill Joy
>In this talk from 1991, Bill Joy predicts a new hypothetical language that he calls “C++++-=”, which adds some things to C++, and takes away some other things.
>Oak: It’s no co-incidence that in 1991, James Gosling started developing a programming language called Oak, which later evolved into Java.
>“Java is C++ without the guns, knives, and clubs.” -James Gosling
>Fortunately James had the sense to name his language after the tree growing outside his office window, instead of calling it “C++++-=”. (Bill and James also have very different tastes in text editors, too!)
>[...]
I imagine the other streaming apps use similar techniques to share code, so I wonder why theirs is so poor in this regard.
Outside of the major players though it gets a lot worse - Funimation's streaming app is horrendous, as is Crunchyroll's. At least Prime Video is doing better than that tier.
My girlfriend’s smart TV runs WebOS and if we have to use Prime for some reason it’s easier to plug the laptop in via HDMI and do it via a browser. Amazon’s app seems to just be broken on WebOS there.
Not only that, but what the heck is up with the completely uninspired UI and user-hostile UX? The importance of seasons being lumped together is something a focus group should have noticed.
Amazon was a web-first company - you really have to wonder how they screwed up their user experience so, so badly.
Or the AWS API?
I actually think this is a product of Amazon's workplace culture. The old joke is that the first 90% takes 90% of the time, and the last 10% takes the other 90%. Amazon only invests in the first 90%, and doesn't give a shit about the rest because they have already won the monopoly, and their sociopathic stack-ranking forced-firing culture won't reward people working on polish.
In UIs, that second 90% is polish. Polish comes from low priority ticket/feature fulfillment, and if you're working on low priority tickets, that means you're getting fired in the next reaper cycle.
If you have any specific question, I could try to answer it here.
Our Wasm investigations started in August 2020, when we built some prototypes to compare the performance of Wasm VMs and JavaScript VMs in simulations involving the type of work our low-level JavaScript components were doing. In those experiments, code written in Rust and compiled to Wasm was 10 to 25 times as fast as JavaScript.
For video processing, especially high fidelity, high frequency, and high resolution video, I can see WASM crushing JavaScript performance by orders of magnitude. But, that isn’t this. They are just launching an app.
I have verified in my own personal application that I can achieve superior performance and response times in a GUI compared to nearly identical interfaces provided by the OS on the desktop.
There are some caveats though.
First, rendering in the browser is offloaded to the GPU so performance improvements attributed to interfaces in the browser are largely a reflection of proper hardware configurations on new hardware. The better the hardware the better a browser interface can perform compared to a desktop equivalent and I suspect the inverse to also be true.
Second, performance improvements in the browser apply only up to a threshold. In my performance testing on my best hardware, that threshold is somewhere between 30,000 and 50,000 nodes rendered from a single instance. I am not a hardware guy, but I suspect this could be due to a combination of JavaScript being single-threaded and memory allocation designed for speed in a garbage-collected logical VM, as opposed to being allocated for efficiency.
Third, the developers actually have to know what they are doing. This is the most important factor for performance, and all the hardware improvements in the world won't compensate. There are two APIs to render any interface in the browser: canvas and the DOM. Each has different limitations. The primary interface is the DOM, which is more memory-intensive but requires far less from the CPU/GPU, so the DOM can scale to a higher quantity of nodes without breaking a sweat - but without cool stuff like animation.
There are only a few ways to modify the DOM. Most performance variations come from reading the DOM. In most cases, but not all, the fastest access comes from the old static methods like getElementById, getElementsByTagName, and getElementsByClassName. Other access approaches are faster only when there is not a static method equivalent, such as querying elements by attribute.
The most common and preferred means of DOM access are querySelectors, which are incredibly slow. The performance difference can be as large as 250,000x in Firefox. Modern frameworks tend to make this even slower by supplying additional layers of abstraction and executing querySelectors with unnecessary repetition.
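Given the repetition problem above, a common mitigation is to query once and cache the result. A minimal sketch of that memoization pattern (using a hypothetical stub in place of the real `document.querySelector` so it runs outside a browser):

```javascript
// Hypothetical stub standing in for the (slow) document.querySelector;
// in a real page you would pass document.querySelector.bind(document).
let queryCount = 0;
const slowQuery = (selector) => {
  queryCount += 1;
  return { selector }; // stand-in for an Element
};

// Memoize lookups so each distinct selector hits the slow path only once.
function makeCachedQuery(query) {
  const cache = new Map();
  return (selector) => {
    if (!cache.has(selector)) cache.set(selector, query(selector));
    return cache.get(selector);
  };
}

const $ = makeCachedQuery(slowQuery);
$(".player");
$(".player");
$(".player");
console.log(queryCount); // 1 - only the first lookup ran the query
```

A real cache would also need invalidation whenever the DOM changes under it; this sketch deliberately ignores that to show only the repetition point.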