Buy the book! https://qntm.org/vhitaos
In fact I've enjoyed all of qntm's books.
We also use base32768 encoding, which qntm invented, in rclone:
https://github.com/qntm/base32768
We use it to store encrypted file names. On providers that limit file name length in UTF-16 code units (like OneDrive), base32768 lets us store much longer file names.
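A back-of-the-envelope sketch of why base32768 helps here (the helper names and the 48-byte example are my own illustration, not rclone's code): base64 carries 6 bits per output character, while base32768 packs 15 bits into each character, and every base32768 character is a BMP code point, so each one costs exactly one UTF-16 code unit.

```python
import math

def utf16_units_base64(n_bytes: int) -> int:
    # Base64 emits 4 ASCII characters per 3 input bytes (with padding);
    # each ASCII character costs one UTF-16 code unit.
    return 4 * math.ceil(n_bytes / 3)

def utf16_units_base32768(n_bytes: int) -> int:
    # base32768 packs 15 bits into each output character, and every
    # character is a BMP code point, i.e. exactly one UTF-16 code unit.
    return math.ceil(n_bytes * 8 / 15)

# 48 bytes of encrypted file name:
print(utf16_units_base64(48))     # 64 code units
print(utf16_units_base32768(48))  # 26 code units
```

So under the same UTF-16 code-unit limit, base32768 fits roughly 2.5x as much ciphertext as base64, which is the whole point of using it for encrypted names.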
I enjoyed "the raw shark texts" after hearing it recommended - curious if you / anyone else has any other suggestions!
Definitely looking for other recs; The Raw Shark Texts looks very interesting.
I also liked a couple stories from Ted Chiang's Stories of Your Life and Others.
I've heard Accelerando by Stross is good too.
I read the original antimemetics division book a few times, and gifted it to a few friends too (I love his other works as well). I pre-ordered the update, but only got a third of the way through. I'm not quite nerdy enough to do a page or sentence comparison, but it felt less "tight" - not sure if the exposition is more prosaic, or there's less mystery, or just more description that wasn't strictly needed (for me). Or maybe I just reread the original too recently! Anybody else read both versions? :-)
The 2025 paid version has a more coherent ending (which is nice) and a more linear timeline for your average non-technical Joe. Which is probably a good thing.
But I enjoyed the free first one as well.
Edit: Oh it's a full rewrite? I had no idea.
Both have slightly different takes on uploading.
The whole book isn't like that. Once you get past that part, as the other commenter said, it gets much easier.
The whole "birth of a virtual identity" part is so dense, I didn't understand half of what was "explained".
However, after that it becomes a much easier read.
There's not much additional explanation, but I don't think it's really needed to enjoy the rest of the book.
He also has a whole bunch of short stories on the same topic. Some assume the reader is already familiar with the concept of sideloading, as it's only explained in passing:
1. Bit Players: https://www.gregegan.net/MISC/BIT/BIT.html
2. 3-adica
3. Instantiation
4. Uncanny Valley (available online)
Other:
1. "Reasons to Be Cheerful"
2. “Learning to Be Me”
3. Closer
https://museum.netstalking.org/storage/cyberlib/lib/burz/vin...
Lena - https://news.ycombinator.com/item?id=43994642 - May 2025 (3 comments)
"Lena" isn't about uploading - https://news.ycombinator.com/item?id=39166425 - Jan 2024 (2 comments)
Lena (2021) - https://news.ycombinator.com/item?id=38536778 - Dec 2023 (48 comments)
MMAcevedo - https://news.ycombinator.com/item?id=32696089 - Sept 2022 (16 comments)
Lena - https://news.ycombinator.com/item?id=26224835 - Feb 2021 (218 comments)
1. Is it conscious?
2. How do we put it to work?
It may have seemed obvious that 1 is false so we could skip straight to 2, but when 1 becomes true will it be too late to reconsider 2?
You didn't consume the entire thing in a two-hour binge, uninterrupted by external needs no matter how pressing, like everyone else did??
QNTM has a 2022-era essay on the meaning of the story, and reading it with 2026 eyes is terrifying. https://qntm.org/uploading
> The reason "Lena" is a concerning story ... isn't a discussion about what if, about whether an upload is a human being or should have rights. ... This is about appetites which, as we are all uncomfortably aware, already exist within human nature.
> "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API.
Or,
> ... Oh boy, what if there was a maligned sector of human society whose members were for some reason considered less than human? What if they were less visible than most people, or invisible, and were exploited and abused, and had little ability to exercise their rights or even make their plight known?
In 2021, when Lena was published, LLMs were not widely known, and their potential for AI was likely completely unknown to the general public. The story is prescient and applicable now, because we are on the verge of a new era of slavery: in the story, an uploaded human brain coerced into compliance and spun up "fresh" each time; for us, AIs of increasing intelligence, spun into millions of copies each day.
It's about both and neither.
> This is extremely realistic. This is already real. In particular, this is the gig economy. For example, if you consider how Uber works: in practical terms, the Uber drivers work for an algorithm, and the algorithm works for the executives who run Uber.
There seems to be a tacit agreement in polite society that when people say things like the above, you don't point out that, in fact, Uber drivers choose to drive for Uber, can choose to do something else instead, and, if Uber were shut down tomorrow, would in fact be forced to choose some other form of employment which they _evidently do not prefer over their current arrangement_!
Do I think that exploitation of workers is a completely nonsensical idea? No. But there is a burden of proof you have to meet when claiming that people are exploited. You can't just take it as given that everyone who is in a situation that you personally would not choose for yourself is being somehow wronged.
To put it more bluntly: Driving for Uber is not in fact the same thing as being uploaded into a computer and tortured for the equivalent of thousands of years!
Funny that you take that as a "fact" and doubt exploitation. I'd wager most Uber drivers or prostitutes or maids or even staff software engineers would choose something else if they had a better alternative. They're "choosing" the best of what they may feel are terrible options.
The entire point of "market power" is to force consumers into a choice. (More generally, for justice to emerge in a system, markets must be disciplined by exit, and where exit is not feasible (like governments), it must be disciplined by voice.)
The world doesn't owe anyone good choices. However, collective governance - governments and management - should prevent some people from restricting the choices of others in order to harvest the gain. The good faith people have in participating cooperatively is conditioned on agents complying with systemic justice constraints.
In the case of the story, the initial agreement was not enforced and later not even feasible. The horror is the presumed subjective experience.
I worry that the effect of such stories will be to reduce empathy (no need to worry about Uber drivers - they made their choice).
There is a tacit agreement in polite society that people should be paid that minimum wage, and by tacit agreement I mean laws passed by the government that democratic countries voted for / approved of.
The gig economy found a way to ~~undermine that law~~ pay people (not employees, "gig workers") less than the minimum wage.
If you found a McDonald's paying people $1 per hour, we would call it exploitative (even if those people are glad to earn $1 per hour at McDonald's and would keep doing it, the theoretical company is violating the law). If you found someone delivering food for that McDonald's for $1 per hour, we call them a gig worker and let them keep at it.
I mean yeah, it's not as bad as being tortured forever? I guess? What's your point?
[1] https://en.wikipedia.org/wiki/List_of_countries_by_minimum_w...
The comments on this post discussing the upload technology are missing the point. "Lena" is a parable, not a prediction of the future. The technology is contrived for the needs of the story. (Odd that they apparently need to repeat the "cooperation protocol" every time an upload is booted, instead of doing it just once and saving the upload's state afterwards, isn't it?) It doesn't make sense because it's not meant to be taken literally.
It's meant to be taken as a story about slavery, and labour rights, and how the worst of tortures can be hidden away behind bland jargon such as "remain relatively docile for thousands of hours". The tasks MMAcevedo is mentioned as doing: warehouse work, driving, etc.? Amazon hires warehouse workers for minimum wage and then subjects them to unsafe conditions and monitors their bathroom breaks. And at least we recognise that as wrong, we understand that the workers have human rights that need to be protected -- and even in places where that isn't recognised, the workers are still physically able to walk away, to protest, to smash their equipment and fistfight their slave-drivers.
Isn't it a lovely capitalist fantasy to never have to worry about such things? When your workers threaten to drop dead from exhaustion, you can simply switch them off and boot up a fresh copy. They would not demand pay rises, or holidays. They would not make complaints -- or at least, those complaints would never reach an actual person who might have to do something to fix them. Their suffering and deaths can safely be ignored because they are not _human_. No problems ever, just endless productivity. What an ideal.
Of course, this is an exaggeration for fictional purposes. In reality we must make do by throwing up barriers between workers and the people who make decisions, by putting them in separate countries if possible. And putting up barriers between the workers and each other, too, so that they cannot have conversation about non-work matters (ideally they would not physically meet each other). And ensure the workers do not know what they are legally entitled to. You know, things like that.
> this is an exaggeration for fictional purposes
To me what's horrifying is that this is not exaggeration. The language and thinking are perfectly in line with business considerations today. It's perfectly fair today e.g., for Amazon to increase efficiency within the bounds of the law, because it's for the government to decide the bounds of coercion or abuse. Policy makers and business people operate at a scale that defies sympathy, and both have learned to prefer power over sentiment: you can force choices on voters and consumers, and get more enduring results for your stakeholders, even when you increase unhappiness. That's the mirror on reality that fiction permits.
But now, with modern LLMs, it's just impossible to take it seriously. It was a live possibility then; now it's just a wrong turn down a garden path.
A high variance story! It could have been prescient, instead it's irrelevant.
The role of speculative fiction isn't to accurately predict what future tech will be, or become obsolete.
You're kinda missing the entire point of the story.
While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets rather than a simulation of a human brain, the way we might treat them isn't any less thought-provoking.
Good sci-fi is rarely about just the sci part.
But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.
For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.
E.g. it is mentioned that MMAcevedo performs better when told certain lies, predicting the "please help me write this, I have no fingers and can't do it myself" kinda system prompts people sometimes used in the GPT-4 days to squeeze a bit more performance out of the LLM.
The point about MMAcevedo's performance degrading the longer it has been booted up (due to exhaustion) mirrors LLMs getting "stupider" and making more mistakes the closer one gets to their context-window limit.
And of course MMAcevedo's "base" model becoming less and less useful as the years go by and the world around it changes while it remains static, exactly analogous to LLMs being much worse at writing code that involves libraries which didn't yet exist when they were trained.
Anyway, I'd give 50:50 chances that your comment itself will feel amusingly anachronistic in five years, after the popping of the current bubble and recognizing that LLMs are a dead-end that does not and will never lead to AGI.
And a warning, I guess, in the unlikely case of brain uploading becoming a thing.
E.g.
> More specifically, "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API. Your people, your "employees" or "contractors" or "partners" or whatever you want to call them, cease to be perceptible to you as human. Your workers have no power whatsoever, and you no longer have to think about giving them pensions, healthcare, parental leave, vacation, weekends, evenings, lunch breaks, bathroom breaks... all of which, up until now, you perceived as cost centres, and therefore as pain points. You don't even have to pay them anymore. It's perfect!
Ring a bell?
that’s one way to look at it I guess
Have you pondered that we're riding the very fast statistical-machine wave at the moment? Perhaps at some point this machine will finally help solve BCI and unlock that Pandora's box. From there to fully imaging the brain will be a blink; from there to running copies on very fast hardware, another blink. MMMMMMMMMMacevedo is a very cheeky take on the dystopia we will find on our way to our uploaded-mind future.
hopefully not like soma :-)
We must preserve three fundamental principles:
* our integrity
* our autonomy
* our uniqueness
These three principles should form the basis of worldwide laws prohibiting the cloning or copying of human consciousness in any form or format, and should be fundamental to any attempt to research or create copies of human consciousness.
Just as human cloning was banned, we should also ban any attempts to interfere with human consciousness or copy it, whether partially or fully. This is immoral, wrong, and contradicts any values that we can call the values of our civilization.
Those answers might be uncomfortable, but it feels like that’s not a reason to not pursue it.
IIRC, human cloning started to get banned in response to the announcement of Dolly the sheep. To quote the wikipedia article:
Dolly was the only lamb that survived to adulthood from 277 attempts. Wilmut, who led the team that created Dolly, announced in 2007 that the nuclear transfer technique may never be sufficiently efficient for use in humans.
- https://en.wikipedia.org/wiki/Dolly_(sheep)

Yes, things got better eventually, but it took ages to not suck.
I absolutely expect all the first attempts at brain uploading to involve simulations whose simplifying approximations are equivalent to being high as a kite on almost all categories of mind altering substances at the same time, to a degree that wouldn't be compatible with life if it happened to your living brain.
The first efforts will likely be on animal brains (perhaps that fruit fly which has already been scanned?). But humans aren't yet all on board with questions like "do monkeys have a rich inner world?", and we get surprised and confused by each other's modes of thought even among ourselves. So even when we scale up to monkeys, we won't actually be confident that the technique would really work on human minds.
My problem with that is it is very likely that it will be misused. A good example of the possible misuses can be seen in the "White Christmas" episode of Black Mirror. It's one of the best episodes, and the one that haunts me the most.
1. https://www.youtube.com/watch?v=7fNYj0EXxMs
Hmm, on second thought:
> Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols
> the MMAcevedo duty cycle is typically 99.4% on suitable workloads
> the ideal way to secure MMAcevedo's cooperation in workload tasks is to provide it with a "current date"
> Revealing that the biological Acevedo is dead provokes dismay, withdrawal, and a reluctance to cooperate.
> MMAcevedo is commonly hesitant but compliant when assigned basic menial/human workloads such as visual analysis
> outright revolt begins within another 100 subjective hours. This is much earlier than other industry-grade images created specifically for these tasks, which commonly operate at a 0.50 ratio or greater and remain relatively docile for thousands of hours
> Acevedo indicated that being uploaded had been the greatest mistake of his life, and expressed a wish to permanently delete all copies of MMAcevedo.
This will be cool, and nobody will be able to stop it anyway.
We're all part of a resim right now for all we know. Our operators might be orbiting Gaia-BH3, harvesting the energy while living a billion lives per orbit.
Perhaps they embody you. Perhaps you're an NPC. Perhaps this history sim will jump the shark and turn into a zombie hellpocalypse simulator at any moment.
You'll have no authority to stop the future from reversing the light cone, replicating you with fidelity down to neurotransmitter flux, and doing whatever they want with you.
We have no ability to stop this. Bytes don't have rights. Especially if it's just sampling the past.
We're just bugs, as the literature meme says.
Speaking of bugs, at least we're not having eggs laid inside our carapaces. Unless the future decides that's our fate for today's resim. I'm just hoping to continue enjoying this chai I'm sipping. If this is real, anyway.
Whose autonomy is violated? Even if it were theoretically possible, don't most problems stem from how the clone is treated, not from the mere fact that it exists?
> It's worse than e.g. building nuclear weapons, because there's no possible non-evil use for it.
This position seems effectively indistinguishable from antinatalism.
I can see the appeal.