But maybe web technologies have advanced recently so that that’s no longer the case.
I used to work with a group that created training materials using Flash. We had a bunch of animators that knew nothing about coding, but they could produce all of these amazing animated videos using Flash.
They could also produce interactive animations. One job we did was creating 3D renderings of printers. The printers could be torn down in the Flash app to the smallest screw. A technician could choose what they wanted to do to the printer, and the Flash movie would show them, step by step and in full animation, what to do. At any time they could rotate the printer on all axes to view different angles. It was amazing. And it was animators who knew nothing about code who built it all. The things they could do you just don't see anymore.
I remember another project we worked on that had these mini games you could play in place of multiple-choice quizzes.
All of that is just gone. There was so much animation and interaction and fun that has been replaced with boring text, images, and videos.
The best part was, it was all vectors, so the file sizes for even long animated movies were incredibly small. Back in those days we had very low bandwidth, but Flash worked great. We had retailers in Asia that consumed our content no problem. Now our content is mostly text, images, and videos. Those same retailers have higher bandwidth, but really struggle because the file sizes are enormous compared to Flash.
The big problem is it was proprietary and the cost of entry was high at a time when I was short of cash for quite a few years.
Still, I wish I'd invested in learning it back in the day, as I really only came to appreciate its benefits after its fate was effectively sealed by Steve Jobs: could have made some good money whilst having some good fun doing it for 8 or 10 years up to around 2010.
(Of course, I would have had to deal with the consequence of my skillset being obsoleted seemingly overnight, but I got kind of a taste of that from the Microsoft ecosystem as well: it's WinForms, no, it's WPF, no it's Silverlight, no, etc...)
Re-reading Apple’s “Thoughts on Flash” tonight, I was surprised at how many times Apple mentioned vendor lock-in. JavaScript is definitely here to stay. Ten years later, though, I wonder if we have ended up in nearly the same place we strove to avoid (high power consumption), but with worse authoring tools. At least we have open standards, which are good.
I figure the magic moment will occur when some tiny startup builds friendly authoring tools on top of a tight runtime à la Decker. Like HyperCard/Director/Flash it’ll only get you 66% there, but that will be enough for many folks. Hackers. Hobbyists. Students. Tiny companies, and guerrilla departments.
The challenges will be preventing feature-itis, keeping it embeddable / avoiding JS library dependencies, and making it at least free-as-in-beer with an open format. I know there’s space for such a thing. My money’s on some lean 5-person startup with strong opinions.
This might be partially a fashion thing: people want the flashy interactions less. I for one appreciate some boring text with minimal animation! Also, outside of games (where animated interaction is part of the point), designing a good animation that works better than text+images is often not as easy as many think, and you need to contend with that before you attempt to implement the design.
As fewer people want to make the animations, the tools that would have replaced Flash's tooling (but outputting SVG+JS instead), of which there were a fair few in active development at one point, have languished unloved and incomplete. So not only do fewer people want to make such animations, but those who do don't have easy tooling to do so.
There are a couple of e-learning sites out there using interactive animations to illustrate their points, one often has sponsor spots on a couple of podcasts & video series I follow though I forget its name ATM. I wonder what tooling they use, if any (though I doubt it is all manually coded).
The set of cases where "animation works better than text+images" is incredibly small:
I can read faster than you can animate - speed it up!
Or I want to read longer than you animate - slow it down!
Text (+images) solves this perfectly - and has for millennia
It's a pity SVG animation authoring tools never eventuated since it's an underutilized but useful native format. Though I know Blender allows for SMIL export via its Freestyle feature (the animation is effectively frame-by-frame only though, not leveraging SMIL's path interpolation/morph ability).
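To illustrate the path-interpolation ability mentioned above, here is a minimal hand-written sketch (not tool-generated output): an <animate> on a path's d attribute morphs between shapes natively, with no JS at all, as long as each keyframe shares the same command structure.

```xml
<!-- Minimal SMIL sketch: the curve's control point bounces between
     two positions, morphing the path with native interpolation. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <path d="M10,80 Q50,10 90,80" fill="none" stroke="black">
    <animate attributeName="d"
             dur="2s" repeatCount="indefinite"
             values="M10,80 Q50,10 90,80;
                     M10,80 Q50,90 90,80;
                     M10,80 Q50,10 90,80"/>
  </path>
</svg>
```

A frame-by-frame exporter would instead emit one full path per frame, which is exactly the SMIL ability left on the table.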
[1] (ignore the GIF-converted examples, the originals are lightweight and vector-based) https://airbnb.io/lottie/
As for the cause, I think it's not only that Flash is dead, but also that there are now many other easy options to create content including professional looking video and much more refined animations. That wasn't an option 20 years ago for most.
Doesn't the animation tool still exist just not under the name "flash"?
My understanding is that it was the free browser plugin for viewing the animations that got killed, but the actual animation software used to produce them was bought by Adobe and is still widely used to produce animations; it just doesn't have the option to produce SWF files anymore (and even during the Flash plugin era it was also being used to produce animations for TV, etc.).
Around the latter half of the 00s, I had a Windows XP machine with 256MB of RAM and a Pentium 4 processor, with a matching spinning metal-disk hard drive. I could make decent Flash animations and games with it. I could view entire websites in Flash with Internet Explorer 8.
Today, my personal machine is a laptop I bought in 2016. Core i7 (4 cores), AMD graphics card, 16GB of RAM. It wasn't top-of-the-line even when I bought it, but I put ~200 hours into The Witcher 3 on it, finishing the main quest and the two paid DLCs. That said, modern web dev is a PITA on it, mostly because I'm too cheap to upgrade to an SSD. Oh, also, the GitHub homepage that showed live commits on the globe makes the fans scream in Chrome unless I enable hardware acceleration. (In fairness that's enabled by default, but it so happened I had to turn it off and forgot to turn it back on.)
I still can't figure out why my personal machine can't provide a decent feedback loop on a React/Angular+Typescript stack (that's not even doing animations!) just because I don't have an SSD. In my very opinionated opinion, only a database server needs an SSD.
A Phaser+TypeScript project (my preferred modern web gaming stack) is just a slightly better experience than React. But, again, I wonder why it needs hardware acceleration at all to function well, and I've only implemented very "static" boardgames so far. Maybe I'm just a sloppy programmer, but I'm definitely better than I was when I was making stuff in Flash. This shouldn't be eating that much in resources, right?
How is it that physicists and engineers are pushing the limits of physics to bring us ever faster processors and we still can't smoothly scroll a webpage, whereas I can design an embedded product with a GUI on a 40MHz CPU?
Remember a while back, when Casey Muratori told Microsoft their terminal emulator could be a lot faster if it used a glyph index, and professional Microsoft engineers told him that was complexity worthy of a doctoral thesis [0]?
There's a reason he got so mad at them, and it's because this level of "competence" is pervasive in our industry and he has to see it every single f'ing day. See also: "When does the draw window change?", a demonstration of how Microsoft's flagship IDE's debugger fails to compete with a one-man project.
https://imgs.xkcd.com/comics/voting_software.png
[0] For anyone not in the know, this is essentially just emulating how a real dumb terminal from the 70s would work, is the blindingly obvious implementation, and would be familiar to just about any game developer on earth. Casey spent the next weekend coding a 3-orders-of-magnitude faster terminal display demo that was completely unoptimized as a demonstration of the lower bound of how fast a terminal should be. Microsoft's terminal is still not as fast, even though they did eventually implement his solution without, initially, giving him any credit.
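For anyone who wants the idea spelled out, the glyph-index trick is roughly this (a toy sketch, not Casey's or Microsoft's actual code): rasterize each distinct character once, cache the result, and from then on drawing text is just cache lookups and blits.

```python
# Toy sketch of a glyph cache. The expensive step (rasterizing a
# character into a bitmap) happens at most once per distinct glyph;
# every subsequent draw of that glyph is a cheap dictionary lookup.

def make_renderer(rasterize):
    """rasterize(ch) -> bitmap; assumed to be the expensive step."""
    cache = {}
    stats = {"rasterized": 0, "blitted": 0}

    def draw(text):
        out = []
        for ch in text:
            if ch not in cache:
                cache[ch] = rasterize(ch)   # expensive: once per glyph
                stats["rasterized"] += 1
            out.append(cache[ch])           # cheap: lookup + blit
            stats["blitted"] += 1
        return out

    return draw, stats

draw, stats = make_renderer(lambda ch: f"<bitmap {ch}>")
draw("hello hello")
# 11 characters blitted, but only 5 distinct glyphs ever rasterized
```

A real terminal adds font fallback and ligatures on top, but the core loop is exactly this flat, which is why the "doctoral thesis" response rankled.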
1. Better GFX quality. Modern screens have vastly higher pixel resolutions than before and vastly higher than any embedded device. Also everything now has to be anti-aliased by default and we expect sophisticated Unicode support everywhere.
This means UI is lovely and sharp - our stuff just looks fantastic compared to the Windows 95 era. But this came at a really high and non-linear cost increase because you can't do CPU rendering and keep up the needed pixel rates anymore. This has caused a lot of awkwardness, complexity and difficulties lower down in the stack as people try to move more and more graphics work to the GPU but hit problems of internal code complexity, backwards compatibility etc.
2. Windows dominance ended, meaning apps have to be platform neutral at reasonable cost. In turn that means you can't use the OS native GUI widgets anymore unless you're writing mobile apps or some artisanal macOS app - you have to use some cross platform abstraction. This also led to the widespread use of GCd languages for UI, because ain't nobody got time for mucking around with refcounting and memory ownership in their UI code anymore.
3. For various reasons like distribution and sandboxing, browsers met people's needs better than other ways of writing apps, but browser rendering engines are massively constrained in how much they can improve due to the backwards compatibility requirements of the web again. Flash demonstrated that viscerally back when it was around. So a lot of potential performance got lost there in the transition to web apps, and memory usage exploded due to the highly indirect and complicated DOM rendering model, which in turn needs layers of (non-mmap-shareable) code to make it digestible for devs.
4. Browser devs lost confidence in language based sandboxes and so moved to process based sandboxes, but a process is an extremely heavyweight thing - lots of CPU cost from all the IPC and context switching and especially expensive in terms of memory overhead.
You ask why is embedded different. Others here are asking why games are different. This is simple enough:
1. Embedded apps don't care about OS independence, don't care about security or sandboxing and only sometimes have large hi-res displays. If you're on a 40MHz CPU your display is probably a dinky LCD. You can lose the abstractions and write much closer to the metal.
2. Game engines and GPUs co-evolved with the needs of games driving GPU features and capabilities. In contrast, nobody was buying a hot new NVIDIA card to make their browser scroll faster. Games also benefit from historically being disposable software in which the core tech isn't really evolved over a long period of time, so devs can start over from scratch quite frequently without backwards compatibility being a big deal. Normal application software can't justify this. Of course games are going the same way as app software with Unreal becoming a kind of OS for games, but ultimately, it's shipped with the app every time and porting titles between major engine versions is rare, so they can change things up every so often to get better performance.
Could things have been different? Maybe. If just a tiny handful of decisions had been different in the late 90s, then Jobs would never have come back to Apple and the dominance of Windows would never have ended. The iPhone would never have happened and Android would have remained a BlackBerry competitor at best, with a UI to match. If the Windows team had executed better and paid more attention to security basics like sandboxing, ActiveX could have remained a common and viable way to ship apps inside the browser. Flash might still be around, because it was ultimately Google and Apple who killed it off by fiat - Microsoft wanted to compete via Silverlight, but were by then sufficiently respectful of anti-trust concerns that they wouldn't have simply announced they were going to murder it in cold blood.
So it's easy to imagine a parallel universe where our tech stack looks very different. But, this is the one we live in.
Glad that you pointed that out.
Around 2002-2003 I was working an office night-shift, in front of a computer that was in no way the fastest even by those times' standards. But, even so, I was able to follow almost every at-bat in the MLB, live, through their Flash app (while I was doing my regular work in another browser tab). And it wasn't only showing stuff like Batter X hit a 1B; it showed you the exact location and the speed of the incoming pitch, where the ball landed (more or less), what the outfield formation of the pitching side was, etc. Really cool stuff given the limited processing resources.
I'd say a HTML+JS solution (which, obviously, wasn't even possible back then for that kind of stuff) would require at least an order of magnitude more resources on the client side.
Where did you acquire that (somewhat flawed) opinion?
I've been using SSDs since ~2009, and those early devices were transformational even on already elderly equipment. You can pick up a modern 1TB SSD with a SATA interface for ~GBP77.00. A price well worth paying for the increase in productivity and quality of life not waiting for spinning rust disks to do their thing.
Wow, that is cheap. We’re talking, what, $40? Basically everything these days assumes fast random disk IO, and will be slow if you don’t have it. If you’ve upgraded your computer at all in the last 10 years then you probably have been better off keeping the old one and getting an SSD.
I think the point is that they shouldn't. The majority of things that people do every day should fit in memory, requiring only short bursts of sequential access when loading and when saving files. Even databases and filesystems are pretty good at avoiding or overlapping random accesses. Exceptions exist, to be sure, but if "basically everything" gets slow when it doesn't have fast random access then "basically everything minus epsilon" is broken.
- https://danluu.com/keyboard-latency/
I like living dangerously.
In practice, I think the authoring experience has been lost. There is still Adobe Animate (which is the renamed Flash application), but I think you cannot export to a fully interactive HTML application - only to static videos or simple HTML (like interactive ads). Maybe it is now possible, but I haven't seen much use of it.
The magic of Flash started with a very nice vector editor - it even allowed you to "paint" vectors with a brush. Then you had very intuitive tools for animation (tweening, onion skinning). You could add simple interaction from the GUI. But it was easy to run custom ActionScript (basically JS) code on events and move objects around on the screen. When you needed more control, you could also go completely to the ActionScript level and create your objects from code.
Nowadays, what do you use to create animations? After Effects seems what a lot of people use but it is overkill. Animate seems to be barely maintained. And if you want to do a game? I think Unity & co. have taken over, but they lack a bit the low barrier to entry that Flash had.
I love how "Turing complete" gets thrown around every time. Can a Turing machine by itself run a voice chat, print documents, display images, stream videos? If I emulated a Turing machine on a 1 Hz CPU, would it be capable of decoding a 4K video stream in acceptable time?
But that "Turing completeness" or "result achievability" is not at all the interesting point.
The point is that despite this "completeness" or "equivalency" there is a difference. You could write a line of business application in plain C or say even FORTH, but it is probably easier and makes more sense in C# or Java. And you can create great games and animations in HTML+JS, but something was lost in the experience, and in the tooling, that was available with Flash.
Unless they're quantum computers where they may as well create a wormhole or something https://news.ycombinator.com/item?id=33802711
The question is what you can do, not what can you write a function to calculate.
You can, with the help of integrated EaselJS. But it is way more complicated, so I also never used it and I don't know if anyone is using it at all.
And yes, the ease of Flash is gone. That was the most appealing thing about it.
Is there some technical limitation I haven't figured out? Or is it simply a matter that no one has done it yet?
A good example of the former is an e-card service that eventually chose to move to video. Video is worse for their product, but there was/is a more robust path to move to video than to modern vector options.
I'm old enough that HSR is still part of my lexicon. Kids just think I'm weird, which is fine.
Also the fact that the new whatever would be either Youtube or Twitter. Or Tiktok. Or something most people here have never even heard of.
Among the top HN posts of all time is an interactive watch visualization: https://news.ycombinator.com/item?id=31261533
This is evidence that people still enjoy interactive experiences, but they are just harder to create.
I think that niche is now filled by the app stores. A lot of little games can be installed on your phone.
The monetization of those games has made them less fun, in my opinion, but that's more on the evolution of game makers, and not really on the development technology.
Allow artists to create simple animation quickly.
Run fast.
The key selling point of flash was that it was an author once, run anywhere tool. I could use keyframe animation along with a simple scripting language to make almost anything. Not only that it was fast to build and deploy.
Nothing I've seen recently allows an artist to do the same. I've tried a couple of times over the years to make an animated SVG on the web, and all of them required me to program keyframes using code, which sucked. Not only that, it's dogshit slow: animating <5 low-complexity shapes (think squares and circles) would eat 50% of a CPU.
Worse still all of those libraries are deprecated now, so if I want to do it again, I'll need to start again from scratch and select a new animation library.
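For what it's worth, simple cases like the squares-and-circles example above can be done declaratively with plain CSS inside inline SVG, with no library to get deprecated. A minimal sketch (transform animations like this are typically offloaded to the browser's compositor, which helps with the CPU cost, though I can't promise it fixes every case):

```xml
<!-- A square sliding back and forth using only CSS keyframes:
     no JS, no animation library, nothing to deprecate. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">
  <style>
    @keyframes slide {
      from { transform: translateX(0); }
      to   { transform: translateX(150px); }
    }
    .box { animation: slide 1.5s ease-in-out infinite alternate; }
  </style>
  <rect class="box" x="0" y="25" width="50" height="50" fill="teal"/>
</svg>
```

Of course this only covers hand-coded simple motion; it's no substitute for an artist-friendly keyframe editor, which is the actual gap.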
This is why I stopped being interested in the modern web. Everything is deprecated at an unbelievable pace; you can't keep track of it unless you work full time in the field. If it were all for great efficiency and performance, I'd get it, but it seems to just follow the newest fad every 2 years. Maybe with wasm that could change, but I'll believe it when I see it.
Is this really true? The same JavaScript I wrote 3 years ago still works, for multiple different applications. It's really uncommon for browsers to break "user-space" JavaScript; I can't even remember the last heavily-depended-on API that got removed and caused havoc.
What does change very often is the latest trends/fads in JavaScript frameworks/UI libraries, but if you pick one and stick with it, it won't magically break because JavaScript changed. I think what's causing your problem is here is the want/need to stick with the latest flavor of frameworks/libraries instead of becoming deeply familiar with one and sticking with it.
Provided you have Flash player installed. Doesn't the same apply to web technologies provided you have the same browser installed?
Every game using Unity/Unreal + WASM needs to download 10-20 MB of JavaScript before it even starts loading the actual assets. With Flash, that could have been a 10 KB SWF instead.
Similarly, Flash had really fluid animations that were drawn purely on the CPU. Why my 266 MHz P2 could have better vector animations than people nowadays can squeeze out of a 3070 with WebGL is beyond me.
In my opinion, we still DO NOT have any worthwhile replacement for the ease of cross-platform development combined with excellent performance that Flash had.
I experienced Flash for the last time a while ago, though, so my memory might have been corrupted in the meantime. Also, I am talking about computing capacity of a decade ago, so the comparison is never going to be fair.
But I am definitely sure that Java applets were a disaster in terms of UX (slow to load, interaction feeling very foreign).
https://blog-api.unity.com/sites/default/files/styles/focal_...
I couldn't find a live demo to confirm, which strongly speaks against Unity being anywhere near as flashy.
Bevy loads pretty instantly in comparison. That's where I'd invest my time if I was doing web games.
I mean I guess you could do something awful with base64 encoding assets into a giant HTML file, but that sounds horrible.
Flash was also a platform for hosting bandwidth-efficient animation and people have just gone to using video now. That sacrifices incidental, easter egg-type interactivity and drives people to centralised services like YouTube.
It might have been more "creative" but that was also due to the fact that it was around at the beginning of the web, when everything was more creative. People had not figured out UI or UX patterns yet at that point, and there was massive wheel reinvention taking place all the time.
Flash was an absolute catastrophe for accessibility as well. A screen reader would just say "Flash movie", as that's all that was actually in the page.
We've had to reinvent a lot of that stuff, true, but we've also gained interoperability, security, and accessibility. I think those are all worthy trade-offs.
What is the modern replacement for my nephews now? There might be one but I haven't seen it.
Compared to what? All SVG/canvas/WebGL doesn't solve the problem either.
Later flash versions regressed in performance and can't run this game without stutter in my experience. A way to play the original game today is to get an old version of "Adobe flash player projector". If you are paranoid, use some OS level sandbox around it.
I have not seen an in-browser game since the original N that matches its low system requirements and smooth gameplay.
[1] https://www.thewayoftheninja.org/n_history.html
BTW, the successor N v2 is also pretty good, and it's not flash-based, AFAIK.
While it is possible in modern browsers to make a) cross-platform, front-end web applications with b) smooth vector and c) bitmap animation incorporating d) multimedia sound and e) video driven by f) a backend API (called dynamic data back then), it is not nearly as prevalent as it was. Animations and multi-media objects were first-class citizens in Flash in a way they are not in browsers.
I do think that's not entirely the browser makers fault. Web aesthetics have changed. Users liked swooping and diving logos with dramatic drop-shadows then, and not so much now.
I use CRDP (chrome remote debugging protocol)[3] to run Ruffle on pages that need it (sort of like a Chrome extension content script). Ruffle itself uses wasm and is quite fast.
It's cool seeing the audio and video work and playing those old games.
-
[0]: https://github.com/ruffle-rs/ruffle/wiki/Test-SWFs
[1]: https://github.com/crisdosyago/BrowserBox#bb-pro-vs-regular-...
[2]: https://github.com/ruffle-rs/ruffle
[3]: https://chromedevtools.github.io/devtools-protocol/tot/
Playing smooth vector animations on a single-core Pentium III computer with 64MB RAM.
If Adobe had not been the way they were, they would now dominate the animation scene too.
Instead, inadvertently because of patents, we don't have anything cool to show for it.
All this minimalism everywhere is disgusting. And they never get it right anyway, because you have to put skyscraper ads everywhere you can.
While it was still Macromedia calling the shots, people had trust in Flash.
And I thought it was Steve Jobs' wielding of monopoly power in the name of "taste." Or was it "security?" Or the fact that Apple couldn't keep Flash from crashing Macs?
I would rather blame the fact that it always ran in a plugin and needed to be constantly updated, at a time when browsers did not easily provide such a mechanism.
Flash was very cool but also very frustrating if you "just want to use your computer"
Even if web standards have caught up with what Adobe Flash used to do a full decade ago, writing games for the browser became a lot more difficult to do.
Barring Adobe killing Flash by disappearing it from the internet, I can run anything I wrote in 2004 on Flash Player in 2022. This level of compatibility is only surpassed by Windows programs. In today's web, I not only need to be careful of supporting 3 major browser vendors, but also support any breaking changes that any of these make, which means I might need to recompile my project long after I have archived it.
The closest thing that has allowed me to make stuff easily for a web browser is PICO-8. A fantasy console, with very limited resolution and capabilities, but all its dev tools integrated, that allows you to quickly export to HTML5 and optionally upload it to its BBS (the website). PICO-8 has made amateur gamedev fun for me again.
I have no idea how to go about recreating my projects today in JS at all...
As others have said, it’s mostly tooling and workflow. As much as I’d like to sneer at Flash in general, my experience with it on projects was positive. The IDEs and tools were good; you could approach everything code-first with unit tests. The process of cutting up Photoshop PSDs from a designer and building UIs in Flash Builder, while orchestrating everything in code in an MVC style, was very pleasant.
I know that they can be remade with other technologies, but except for some clones remade to be a pay-to-win mobile apps, a lot of those games are just gone now.
The closest I’ve seen (and actually use) is Hype (https://tumult.com/hype/), but it is Mac only.
I still have one customer with a Flash-based web site, even though it doesn't work on any modern browser. They actually have some old version of IE and the Flash plugin available for download just to use their site.
The loop just worked and flash trimmed out silence automatically when playing the music. Very useful for games.
But not a single one has the same 'all in one' nature that Flash did. Back in the day, people who'd never even think of making a game could be exposed to it and eventually make one, now everything is more specialized and compartmentalized. The smallest webgame engines I see still have 5 MB of bloat, too, whereas a decently big flash game could be 500~ish KB.
That's not possible with Web APIs. Web APIs are more high-level (complex, purpose-oriented), e.g. WebSockets, WebRTC, WebTransport. The low-level stuff is not exposed to user code and probably never will. One of the reasons is security.
HTML does most of the stuff now in theory, but it's so much more difficult to access that it's not fun.
Probably…
This year the situation is a little better. There's GDevelop, Rive, ct.js, Godot, Construct 3, and Wick Editor. None of them feel as friendly as Flash (except maybe Wick), or as efficient (I just played a small Rive game that made 500+ network requests to load assets). But they come with their upsides, like better game frameworks, better addons, better 3D support, etc.
So I wonder why we're still not seeing small free games. Are the tools not good enough? Packaging is too hard? Lack of good portals, à la Miniclip/Newgrounds/Kongregate? Or has internet culture simply moved away from interactivity to catchy videos?
Show me a tool where a 10-year-old can draw and animate stick figure cartoons and publish it in a lightweight vector format (not video)!
Generalizing, tech progress swings repeatedly and unpredictably between three optimands or poles: Developer-first, User-first, and Machine-first.
Developer-first tech is like Flash -- it's powerful because the developer experience is powerful. The quality and profusion of great products drives the adoption and other network effects. Yet these programs tend to be heavier (non-optimal for machines) and, for lack of a better term, frou-frou-ier; the ease of applying themes and styles promotes 'mystery UI', where the dev reinvents (or more commonly, bowdlerizes) the visual language of the user interface. Flash apps always had ornate, ridiculous scroll bars and buttons and so forth, mostly because the DX made customization so easy, and this imposed a sort of psychological tax on the users, often to the point of compromising the user experience.
By contrast, user-first tech emphasizes the experience of the end user or consumer. For example, HTML5/CSS3 is harder to author than Flash, but, as an open standard, offered the end-user choice of platform and graceful degradation on mobile.
That said, user-first tech can often have terrible developer experience (DX), and that is very true of HTML5/CSS3. It's basically coders-only, for certain lightweight values of the word 'code'.
Finally, there is machine-first tech -- apps that prioritize the machine's experience. This family of tools generally pursues performance at the expense of both the developer and the user. Generally one can think of Moore's Law as describing an asymptotic decline in the importance of what I'll haltingly call 'MX', because as machines get more powerful, it's less important to cater to them.
So you can think of the birth of the web as a move away from optimizing for MX towards optimizing some balance of DX or UX, but which of those two other poles picks up the bounty of Moore's Law is up to fashion, social trends, etc.
Finally, it's common for premature market anticipation of MX-shedding to drive clumsy, non-performant solutions that privileged DX or UX (or both) to such a degree that the machines simply don't like to run it much anymore. In other words, people overshoot in their attempts to optimize DX, UX, or both, and in doing so compromise the machine's experience of that code. When this happens, the market tends to lurch back towards MX. I perceive the move to (say) Rust from higher-level languages (Python, etc) as an example of trading off DX and even UX for MX. Arguably, Wayland does this as well, shedding both UX and DX considerations to improve machine-oriented properties such as security.
Going deeper, one could think of MX as being 'hardware-owner-first', as machines are not the sort of things that can have genuine experiences of their own -- however, animism is a useful form of lossy compression here.