Personally though, I think the distinctive choices are a boon. You are never confused about what language you are writing, because Lua code is so obviously Lua. There is value in this. Once you have written enough Lua, your mind easily switches in and out of Lua mode. JavaScript, on the other hand, is filled with poor semantic decisions which, for me, cancel out any benefits from syntactic familiarity.
More importantly, Lua has a crucial feature that JavaScript lacks: tail call optimization (TCO). There are programs that I can easily write in Lua, in spite of its syntactic verbosity, that I cannot write in JavaScript because of this limitation. Perhaps this particular JS implementation has TCO, but reading the release notes, I doubt it.
I have learned as much from Lua as I have from Forth (Smalltalk doesn't interest me), and my programming skill has increased significantly since I switched to it as my primary language. Lua is the only lightweight language that I am aware of with TCO. In my programs, I have banned the use of loops. This is a liberation that is not possible in JS or even C, where TCO cannot be relied upon.
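A minimal sketch of what that looks like (my example, not the commenter's code) in standard Lua, where `return f(...)` is a guaranteed proper tail call:

-- Iteration written as a tail call. Lua mandates proper tail calls,
-- so this runs in constant stack space no matter how large n is.
local function count(i, n)
  if i > n then return end
  -- ... do the loop body's work with i here ...
  return count(i + 1, n)  -- tail position: the stack frame is reused
end

count(1, 10000000)  -- fine in Lua; this pattern can overflow the stack in JS or C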
In particular, Lua is an exceptional language for writing compilers. Compilers are inherently recursive and thus languages lacking TCO are a poor fit (even if people have been valiantly forcing that square peg through a round hole for all this time).
Having said all that, perhaps as a scripting language for Redis, JS is a better fit. For me, though, Lua is clearly better than JS on many different dimensions, and I don't appreciate the needless denigration of Lua, especially from someone as influential as you.
Is it needless? It's useful specifically because he is someone influential: someone might say "Lua was antirez's choice when making Redis, and I trust and respect his engineering, so I'm going to keep Lua as a top contender for use in my project because of that", and him being clear on his choices and reasoning is useful in that respect. In any case, if you think he has a responsibility to be careful about what he says because of that influence, that same influence is also a reason he should definitely explain his thoughts on it, then and now.
Scheme is pretty lightweight.
This. And not just Lua: having a different kind of syntax for scripting languages or very high-level languages signals that it is something entirely different, and not C as in a systems programming language.
The syntax is also easier for people who don't intend to make programming their profession, but simply want to get something done. In the old days people would design simple programming languages for beginners: the ActionScript/Flash era, and even HyperCard before that. Unfortunately the industry is no longer interested in that, and if anything intends to make everything as complicated as possible.
I'm not familiar with Lua, but I expect TCO to be a feature of the compiler, not of the language. Am I wrong?
I'd love to hear more about how it is: the state of the library ecosystem, language evolution (wasn't there a new major version recently?), pros/cons, reasons to use it compared to other languages.
About tail calls: in other languages I've sometimes found converting a recursive algorithm to a flat iterative loop with an explicit stack/queue to be effective. But it can be a pain, and less elegant or intuitive than TCO.
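For example, a sketch of that conversion in Lua (my toy example; the `node.children` tree shape is hypothetical):

-- Depth-first traversal without recursion, using an explicit stack.
local function visit_all(root, visit)
  local stack = { root }
  while #stack > 0 do
    local node = table.remove(stack)    -- pop the top of the stack
    visit(node)
    for _, child in ipairs(node.children or {}) do
      stack[#stack + 1] = child         -- push children for later
    end
  end
end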
Does the language give any guarantee that TCO was applied? In other words, can it give you an error if the recursion is not in tail-call form? Because I can imagine writing a recursion and relying on it being TCO-optimized when it's not. I would prefer it if a language had some form of explicit TCO modifier for a function. Is there any language that has this?
V8 had proper tail calls (PTC), but removed them because they insisted tail calls MUST have a new keyword. When they were shot down, they threw a childish fit and removed PTC from their JIT.
> [...] In my programs, I have banned the use of loops. This is a liberation that is not possible in JS or even c, where TCO cannot be relied upon.
This is not a great language feature, IMO. There are two ways to go here:
1. You can go the Python way, and have no TCO, not ever. Guido van Rossum's reasoning on this is outlined here[1] and here[2], but the high level summary is that TCO makes it impossible to provide acceptably-clear tracebacks.
2. You can go the Chicken Scheme way, and do TCO, and ALSO do CPS conversion, which makes EVERY call into a tail call, without the language user having to restructure their code to make sure their recursion happens at the tail.
Either of these approaches has its upsides and downsides, but TCO WITHOUT CPS conversion gives you the worst of both worlds. The only upside is that you can write most of your loops as recursion, but as van Rossum points out, most cases that can be handled with tail recursion can AND SHOULD be handled with higher-order functions. This is just a much cleaner way to do it in most cases.
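For a concrete example of the higher-order route (my sketch, in Lua since that's the language at hand):

-- A generic fold: the one loop lives here, and callers never
-- write recursion (tail or otherwise) at all.
local function fold(f, acc, list)
  for _, x in ipairs(list) do
    acc = f(acc, x)
  end
  return acc
end

print(fold(function(a, b) return a + b end, 0, { 1, 2, 3, 4 }))  --> 10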
And the downsides to TCO without CPS conversion are:
1. Poor tracebacks.
2. Having to restructure your code awkwardly to make recursive calls into tail calls.
3. Easy to accidentally turn a tail call into a non-tail call, resulting in stack overflows. (A sketch of 2 and 3 follows below.)
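To make downsides 2 and 3 concrete, a hedged Lua sketch (mine, not from the thread):

-- Natural version: NOT a tail call. The multiply happens after the
-- recursive call returns, so deep inputs can still overflow the stack
-- (downside 3 is exactly this easy to hit).
local function fact(n)
  if n == 0 then return 1 end
  return n * fact(n - 1)
end

-- Tail-call version: requires restructuring with an accumulator
-- argument (downside 2).
local function fact_acc(n, acc)
  acc = acc or 1
  if n == 0 then return acc end
  return fact_acc(n - 1, acc * n)
end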
I'll also add that the main reason recursion is preferable to looping is that it enables all sorts of formal verification. There's some tooling around formal verification for Scheme, but the benefits to eliminating loops are felt most in static, strongly typed languages like Haskell or OCaml. As far as I know Lua has no mature tooling whatsoever that benefits from preferring recursion over looping. It may be that the author of the post I am responding to finds recursion more intuitive than looping, but my experience contains no evidence that recursion is inherently more intuitive than looping: which is more intuitive appears to me to be entirely a function of the programmer's past experience.
In short, treating TCO without CPS conversion as a killer feature seems to me to be a fetishization of functional programming without understanding why functional programming is effective, embracing the madness with none of the method.
EDIT: To point out a weakness to my own argument: there are a bunch of functional programming language implementations that implement TCO without CPS conversion. I'd counter by saying that this is a function of when they were implemented/standardized. Requiring CPS conversion in the Scheme standard would pretty clearly make Scheme an easier to use language, but it would be unreasonable in 2025 to require CPS conversion because so many Scheme implementations don't have it and don't have the resources to implement it.
EDIT 2: I didn't mean for this post to come across as negative on Lua: I love Lua, and in my hobby language interpreter I've been writing, I have spent countless hours implementing ideas I got from Lua. Lua has many strengths--TCO just isn't one of them. When I'm writing Scheme and can't use a higher-order function, I use TCO. When I'm writing Lua and can't use a higher order function, I use loops. And in both languages I'd prefer to use a higher order function.
[1] https://neopythonic.blogspot.com/2009/04/tail-recursion-elim...
[2] https://neopythonic.blogspot.com/2009/04/final-words-on-tail...
Rather, you no longer see what they're doing clearly.
I scrolled most of this subthread and the GP doesn't seem to be replying to any of the replies they got.
I'm fairly certain antirez is the author of Redis.
Do you really need to write compilers with limitless nesting? Or is nesting, say, 100,000 deep enough, perhaps?
Also, you'll usually allocate some data structure for the AST at each level, so that means you'll have some finite limit anyway. And that limit is a lot easier to hit in the real world, as it applies not just to nesting depth but to the entire size of your compilation unit.
Lua was first released in 1993. I think it's pretty conventional for the time, though yeah, it did not follow Algol syntax but Pascal's and Ada's (which were more popular in Brazil at the time than C, which is why that is the case)!
Ruby, which appeared just 2 years later, departs a lot more, arguably without good reasons either? Perl, which is 5 years older and was very popular at the time, is much more "different" from what we now consider mainstream than Lua is.
Perl, Python, OCaml, Lua and Rust were all fine (Rust wasn't around in 2010 of course).
I doubt we ever would have heard about Ruby without its syntax decisions. From my understanding, its entire raison d'être was readability.
def ruby(is)
  it = is
  a = "bad"
  example()
  begin
    it["had"] = pascal(:like)
  rescue
    flow
  end
end

Not quite sure what you mean by that; all of Lua, Pascal, and Ada follow Algol's syntax much more closely than C does.
People go through all this effort to separate parsing and lexing, but never exploit the ability to just plug in a different lexer that allows for e.g. "{" and "}" tokens instead of "then" and "end", or vice versa.
1. <https://hn.algolia.com/?type=comment&prefix=true&query=cxr%2...>
2. <https://old.reddit.com/r/Oberon/comments/1pcmw8n/is_this_sac...>
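To sketch the pluggable-lexer idea concretely (a toy example of mine; note that in real Lua "{" also begins a table constructor, so an actual skin would need context to disambiguate):

-- Remap surface tokens before the parser sees them, so the same
-- grammar accepts either skin.
local braces_to_keywords = { ["{"] = "then", ["}"] = "end" }

local function reskin(tokens)
  local out = {}
  for i, tok in ipairs(tokens) do
    out[i] = braces_to_keywords[tok] or tok
  end
  return out
end

print(table.concat(reskin({ "if", "x", "{", "f(x)", "}" }), " "))
--> if x then f(x) end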
The problem with "skins" is that they create variety where people strive for uniformity to lower the cognitive load. OTOH transparent switching between skins (about as easy as changing the tab sizes) would alleviate that.
I see this argument a lot with Lua. People simply don't like its syntax because we live in a world where C-style syntax is more common, and the departure from that seems unnecessary. So going "well actually, in 1993 when Lua was made, C-style syntax was less familiar" won't help, because in the current year C syntax is more familiar.
The first language I learned was Lua, and because of that it seems to have a special place in my heart or something. The reason is that around 2006, the sandbox game "Garry's Mod" was extended with scripting support and chose Lua for seemingly the same reasons as Redis.
The game's author famously didn't like Lua: its unfamiliarity, its syntax, etc. He even modified it to add C-style comments and operators. His new sandbox game "s&box" is based on C#, which is the language closest to his heart, I think.
The point I'm trying to make is just that Lua is familiar to me and not to you for seemingly no objective reason. Had Garry chosen a different language, I would likely have a different favorite language, and Lua would feel unfamiliar and strange to me.
For example, Premake [1] uses Lua as it is: without a custom syntax parser, but with a set of domain-specific functions.
This is pure Lua:
workspace "MyWorkspace"
configurations { "Debug", "Release" }
project "MyProject"
kind "ConsoleApp"
language "C++"
files { "**.h", "**.cpp" }
filter { "configurations:Debug" }
defines { "DEBUG" }
symbols "On"
filter { "configurations:Release" }
defines { "NDEBUG" }
optimize "On"
In that sense Premake looks significantly better than CMake with its esoteric constructs.
Having a regular and robust PL to implement those 10% of configuration cases that cannot be defined with "standard" declarations is the way to go, IMO.

Come to think of it, I don't think I can name a single mainstream language other than Lua that wasn't invented in the G7.
So, even if an implementation like MicroQuickJS existed in 2010, it's unlikely that too many people would have chosen JS over Lua, given all the shortcomings that JavaScript had at the time.
For those not familiar with Tcl, the C API is flavoured like main: callbacks take a list of strings, argv-style, and an argc count. Tcl is stringly typed, which sounds bad, but the data comes from strings in the HTML and script blocks, and the page HTML is also text, so it fits nicely and the C callbacks are easy to write.
[1] Mosaic Netscape 0.9 was released the week before
This would have been a catastrophic loss. Lua is better than JavaScript in every single way except for ordinal indexing.
Thank god it wasn’t then.
I did once manage to compile Lua 5.4 on a Macintosh SE with 4MB of RAM, and THINK C 5.0 (circa 1991), which was a sick trick. Unfortunately, it took about 30 seconds for the VM to fully initialize, and it couldn't play well with the classic MacOS MMU-less handle-based memory management scheme.
It also helps that it has ridiculously high performance for a scripting language.
Frankly, I welcome the fact that Redis doesn’t use JavaScript. It’s an abomination of a language. The fewer times I need to use it the better.
Lua is a pretty old language. In 1993 the world had not really settled on C style syntax. Compared to Perl or Tcl, Lua's syntax seems rather conventional.
Some design decisions might be a bit unusual, but overall the language feels very consistent and predictable. JS is a mess in comparison.
> because it departs from a more Algol-like syntax
Huh? Lua's syntax is actually very Algol-like, since it uses keywords to delimit blocks (e.g. if ... then ... end).
But only after a long time did I try to check what Algol actually looked like. To my surprise, Algol does not look anything like C to me.
I would be quite interested in the expanded version of “C has inherited syntax from Algol”
Edit: apparently the inheritance from Algol is a formula: lexical scoping + value-returning functions (expression-based) - parenthesitis. Only the last item is about the visual part of the syntax.
Algol's alternatives were: COBOL, Fortran, Lisp, APL.
That's what matters to me: not how similar Lua is to other languages, but that the language is well-designed within its own system of rules and conventions. It makes sense; every part of it contributes to a harmonious whole. JavaScript, on the other hand...
When speaking of Algol or C-style syntax, it makes me imagine a "Common C" syntax, like taking the best, or the least common denominator, of all C-like languages. A minimal subset that fits in your head, instead of what modern C is turning out to be, not to mention C++ or Rust.
https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists...
Good to see you alive and kicking. Happy holidays
[0] https://redbean.dev/ - the single-file distributable web server built with Cosmopolitan as an αcτµαlly pδrταblε εxεcµταblε
I had Claude Code for web figure out how to run this in a bunch of different ways this morning - I have working prototypes of calling it as a Python FFI library (via ctypes), as a Python compiled module and compiled to WebAssembly and called from Deno and Node.js and Pyodide and Wasmtime https://github.com/simonw/research/blob/main/mquickjs-sandbo...
PR and prompt I used here: https://github.com/simonw/research/pull/50 - using this pattern: https://simonwillison.net/2025/Nov/6/async-code-research/
No matter how much you hate LLM stuff I think it's useful to know that there's a working proof of concept of this library compiled to WASM and working as a Python library.
I didn't plan to share this on HN but then MicroQuickJS showed up on the homepage so I figured people might find it useful.
(If I hadn't disclosed I'd used Claude for this I imagine I wouldn't have had any down-votes here.)
Your github research/ links are an interesting case of this. On one hand, late AI adopters may appreciate your example prompts and outputs. But it feels like trivially reproducible noise to expert LLM users, especially if they are unaware of your reputation for substantive work.
The HN AI pushback then drowns out your true message in favor of squashing perceived AI fluff.
In this particular case AI has nothing to do with Fabrice Bellard.
We can have something different on HN like what Fabrice Bellard is up to.
You can continue AI posting as normal in the coming days.
I would guess people don't know how you expect them to evaluate this, so it comes off as spamming us with a bunch of AI slop.
(That C can be compiled to WASM or wrapped as a python library isn't really something that needs a proof-of-concept, so again it could be understood as an excuse to spam us with AI slop.)
If you care that much, write a blog post and post that, we don't need low effort LLM show and tell all day everyday.
A lot of HN people got cut by AI in one way or another, so they seem to have personal beefs with AI. I am talking about not only job shortages but also general humbling of the bloated egos.
(Keep posting please. Downvotes due to mentioning LLMs will be perceived as a quaint historic artifact in the not so distant future…)
Just having a WebAssembly engine available isn't enough for this - something has to take that user-provided string of JavaScript and execute it within a safe sandbox.
Generally that means you need a JavaScript interpreter that has itself been compiled to WebAssembly. I've experimented with QuickJS itself for that in the past - demo here: https://tools.simonwillison.net/quickjs - but MicroQuickJS may be interesting as a smaller alternative.
If there's a better option than that I'd love to hear about it!
In a browser environment it's much easier to sandbox Wasm successfully than to sandbox JS.
I was looking for something like Pyodide but runnable from Python, but that doesn't seem to exist quite yet. I can get a Python interpreter to run in wasmtime, but that doesn't have the Pyodide goodies like Micropip etc. Sadly, Pyodide itself seems fully married to JS, as it compiles to Emscripten and not WASI.
I'm almost tempted to just go with a small binary embedding V8 and running Pyodide inside V8 isolates or something.
(I know I can do this via Firecracker / GVisor / whatever, that is not the solution I'm looking for.)
And +1000 on linking to your own (or any other well-written) blog.
But there are other ways, e.g. run the logic isolated within gvisor/firecracker/kata.
[1] github.com/microsoft/CCF under src/js/core
One example: given this database table run this JavaScript function against every value in this column to calculate a value to be stored in another column.
Or once a day fetch the JSON from this URL and transform it with this JavaScript and store it here.
You can’t restrict JS that way on the web because of compatibility. But I totally buy that restricting it this way for embedded systems will result in something that sparks joy
I bet MQJS will also be very popular. Quite impressive that bro is going to have two JS engines to brag about in addition to a lot of other very useful things!
Well, now we can run this thing in WASM and get, I imagine, sane runtime errors :)
It's a variant of my QuickJS playground here: https://tools.simonwillison.net/quickjs
The QuickJS page loads 2.28 MB (675 KB transferred). The MicroQuickJS one loads 303 KB (120 KB transferred).
emcc -O3
(and maybe even adding --closure 1)
edit: actually the QuickJS playground looks already optimized - just the MicroQuickJS one could be improved.
https://github.com/simonw/research/pull/5
Thats now live on https://tools.simonwillison.net/microquickjs
https://bellard.org/jslinux/vm.html?cpu=riscv64&url=fedora33...
I am envious that I will never anywhere near his level of productivity.
That said, judging by the license file this was based on QuickJS anyway, making it a moot comparison.
https://github.com/yt-dlp/yt-dlp/wiki/EJS
(Note that Bellard's QuickJS is already a supported option.)
> It only supports a subset of Javascript close to ES5 [...]
I have not read the code of the solver, but solving YouTube's JS challenge is so demanding that the team behind yt-dlp ditched their JS emulator written in Python.
* espruino (https://www.espruino.com/)
* elk (https://github.com/cesanta/elk)
* DeviceScript (Microsoft Research's now defunct effort, https://github.com/microsoft/devicescript)
<https://www.moddable.com/faq#comparison>
If you take a look at the MicroQuickJS README, you can see that it's not a full implementation of even ES5, and it's incompatible in several ways.
Just being able to run JS also isn't going to automatically give you any bindings for the environment.
So cool. Where did the name come from? I am so stoked and glad that we are going to have a JS to native binary compiler. The best thing ever!
I was going to set up an AI automation to run on this against the autotests, but as I got started, I felt - why not just create a new language where I can pick my own concurrency paradigms and syntax? So I went with that instead.
So glad someone is doing this. What more do you know about this project, kind person?
At first glance Espruino has broader coverage including quite a bit of ES6 and even up to parts of ES2020. (https://www.espruino.com/Features). And obviously has a ton of libraries and support for a wide range of hardware.
For a laugh, and to further annoy the people annoyed by @simonw's experiments, I got Cursor to butcher it and run as a REPL on an ESP32-S3 over USB-Serial using ESP-IDF.
Blink is now running so my work here is done :-)
led.init(48)

function blink() {
  led.rgb(0, 0, 255)
  setTimeout(function() {
    led.off();
    setTimeout(blink, 500)
  }, 500)
}
blink()

One strategy is to wait for the US to wake up, then post during their morning.
Another strategy is to post the same thing periodically until there is a response.
- Date: only Date.now() is supported. [0]
I certainly understand not shipping the JS Date library, especially in an embedded environment, both for code-size and practicality reasons (it's not a great date library), but that would be an issue in many projects (even if you don't use it, libraries you use almost certainly do).
https://github.com/bellard/mquickjs/blob/main/README.md#:~:t...
- FFmpeg: https://bellard.org
- QEMU: https://bellard.org/qemu/
- JSLinux: https://bellard.org/jslinux/
- TCC: https://bellard.org/tcc/
- QuickJS: https://bellard.org/quickjs/
Legendary.
That was a sort of defining moment in my personal coding; a lot of my websites and apps are now single file source wherever possible/practical.
It doesn't necessarily translate to people who are less brilliant.
Honestly, it's a reminder that, for the time it takes, it's incredibly fun to build from scratch and understand through-and-through your own system.
Although you have to take detours from, say, writing a bytecode VM, to writing floating-point printing and parsing routines...
You are absolutely wrong here. Most of us wish that somebody would get him to sit for an in-depth interview and/or get him to write a book on his thinking, problem-solving approach, advice etc. i.e. "we want to pick his brain".
But he is not interested and seems to live on a different plane :-(
Can you elaborate a little about the methods you mention and how you analysed them?
played with implementing analog modem DSP in software in 1999 (linmodem is ~50-80% there, sadly never finished)
probably leading to
played with implementing SDR (again DSP) using VGA output to transmit DVB-T/NTSC/PAL in 2005
probably leading to
Amarisoft SDR 5G base station, commercial product started in 2012 - his current job https://www.amarisoft.com/company/about-us
I would not want to dismiss or diminish by any amount the incredible work he has done. It's just interesting to me that the problems he appears to pick generally take the form of "user sets up the parameters, the program runs to completion".
Guy is a genius. I hope he tries Rust someday
The math checks out.
Real people have to sleep at some point!
In my engine Arrays are always dense from a memory perspective and Objects don't special case indexes, so we're on the same page in that sense. I haven't gotten around to creating the "no holes" version of Array semantics yet, and now that we have an existing version of it I believe I'll fully copy out Bellard's semantics: I personally mildly disagree with throwing errors on over-indexing since it doesn't align with TypedArrays, but I'd rather copy an existing semantic than make a nearly identical but slightly different semantic of my own.
Just in time for RAM to become super expensive. How easy would it be to shove this into Chromium and Electron?
The good news is that it would probably not matter much for chromium's memory footprint anyway...
Being an engineer and coding at this stage/level is just remarkable. Sadly this tradecraft is missing in most (big?) companies, as you get promoted away into oblivion.
One such award is the Turing Award [1], given "for contributions of lasting and major technical importance to computer science."
Here's the commit history for this project
b700a4d (2025-12-22T1420) - Creates an empty project with an MIT license
295a36b (2025-12-22T1432) - Implements the JavaScript engine, the C API, the REPL, and all documentation
He went from zero to a complete JS implementation in just 12 minutes!
I couldn't do that even if you gave me twice as much time.
Okay, but seriously, this is super cool, and I continue to be amazed by Fabrice. I honestly do think it would be interesting to do an analysis of a day or week of Fabrice's commits to see if there's something about his approach that others can apply besides just being a hardworking genius.
// global, initialized by SomeConstructor
var fooInstance
class SomeConstructor {
  constructor(...args) {
    fooInstance = this;
  }
  static getInstance(...args) {
    if (fooInstance != null) return fooInstance;
    return new SomeConstructor(...args);
  }
}

a = []
a[0] = 1; // OK to extend the array length
a[10] = 2; // TypeError
If you need an array-like object with holes, use a normal object instead.

Guess I'm a bit fuzzy on this. I wouldn't use numeric keys to populate a "sparse array", but why would it be a problem to just treat it as an iterable with missing values undefined? Something to do with how memory is being reserved in C...? If someone jumps from defining arr[0] to arr[3], why not just reserve 1 and 2 and inform them that there's a memory penalty (i.e. that you don't get the benefit of sparseness)?
- A small and efficient JS subset, HTML, CSS
- A family of very simple browsers that do just that
- A new Web that adheres to the above
That would make my year.

Browsers are complex because they solve a complex problem: running arbitrary applications in a secure manner across a wide range of platforms. So any "simple" browser you can come up with just won't work in the real world (yes, that means being compatible with websites that normal people use).
The embedded use case is obvious, but it'd also be excellent for things like documentation — with such a browser you could probably have a dozen+ doc pages open with resource usage below that of a single regular browser tab. Perfect for things that you have sitting open for long periods of time.
I understand this has been tried before (flash, silverlight, etc). They weren't bad ideas, they were killed because of companies that were threatened by the browser as a standard target for applications.
Or maybe just make it all a single lispy language
Work towards an eventual feature freeze and final standardisation of the web would be fantastic though, and a huge benefit to pretty much everyone other than maybe the Chrome developers.