I realize the framing of the idea is hyperbolic, but nonetheless I haven't been able to get it out of my head. This article feels like it's another piece of the same puzzle.
And how is this a bad thing? Would it be good for oxygen to be valuable, or water?
What's with this fetishistic obsession with "value"? What's bad about living in a world where nothing holds any value whatsoever?
Not really. To a thirsty soul in the desert, the value of water is as high as they value their own life (to some there is little), provided they have currency of value to the seller. Still, once thirst is quenched, the value to that soul drops to near zero.
For an optional good, the value only rises to the point that there are excess assets in the inventories of those who would like to add the option.
I would suggest that what you are looking for is that each new technology shifts some scarcities. Things like the sincere attention of others, or more exclusive emotional attachments, become relatively more scarce in a goods-abundant existence. Likewise, insights on where to apply the tool, and on where one should attend, become more scarce.
Something you would have to accept, if you believe your statement, is that you would never value (i.e. need) water again if we could produce far more than we could ever use. Your body's need and use would not cease, even if the economics might collapse.
Financializing everything can lead one into foolish conclusions.
What you are talking about is needs, which is an entirely different discussion. Value can also be used to discuss needs (how valuable is something to survive), but I think I'm quite clearly using it to describe something different.
Diamonds are pretty worthless but expensive because they're scarce (putting aside industrial applications), water is extremely valuable but cheap.
No doubt there are some goods where the value is related to price, but these are probably mostly status related goods. e.g. to many buyers, the whole point in a Rolex is that it's expensive.
AI is like oil, in that it's "burning" a resource that took geological timescales to accrue. Its value derives from the energy-dense and instantaneous act of combusting a fossil fuel, and in this particular part of the "terrain", it will be a local maximum for A Long Time.
Just like how it's taken absurdly long (still very much WIP) for human societies to prioritize weaning ourselves off of fossil fuels, I fear we are going to latch onto GenAI/LLMs pretty hard, and not let go.
Some "value" will be lost, but other value will be generated. People WILL lose their jobs, as what they did can either be done by AI directly, or an AI agent can write a script/program to do some or all of what they did.
No, it’s not. This is where your concept fails. AI is a tool, like any other tool. It doesn’t provide unlimited anything, and, furthermore, it needs human inputs and direction to provide anything. “Go make me a profitable startup from scratch” is not a useful prompt.
Perhaps it's not how you use LLMs, but it is the promise of AI.
For the record, I make a distinction between LLMs (a current tool that we have today) and AI (a promise of some mystical all-powerful science-fiction entity).
There is nothing intelligent about what we have today, artificial or otherwise.
It's called utopia.
But my issue with AI hype is that it's not clear how it will lead to "everyone can do anything." Like, how is it going to free up land so everyone can afford a house with a yard if they want one?
If you want everyone to be able to afford a house with a yard within walking distance of downtown Palo Alto, there aren’t enough of them for everyone that wants to do that, and AI (and utopia) can’t change that. Proximity to others creates scarcity because of basic physical laws. This is why California is expensive.
This is something I always wondered about in Banks’ post-scarcity utopian Culture novels. How do they decide who gets to live next door to the coolest/best restaurant or bar or social gathering place? Does Hub (the AI that runs the habitat, and notionally is the city) simply decide and adjudicate like a dictator or king?
It's not. We already see this on social media: creating pictures and clips has become basically effortless, and the result isn't utopia, it's a massive steaming pile of worthless shit.
Most in the west don't understand water scarcity.
The digitization of information and media, combined with the Internet and widespread use of electronic devices, practically means that in some important ways we are already grappling with post-scarcity in certain fields. 600 years ago, "books" and other texts were rare and valuable; then there was an explosive transformation with the invention of the printing press. But while much easier, there was still a laborious printing process, and a copy of a book was still a valuable thing. Now, a "book" can exist as an .epub and be copied perfectly a million times practically for free. It is similarly true for movies, photos, recorded music, news articles, etc.
As a capitalist society, we've really struggled with how to deal with this post-scarcity arrangement. We understand in the abstract that this stuff is important, and that creating it is a laborious process, but we do not really know how to assign value to copies of those works (because, once created, they immediately become infinitely abundant). The best idea we seem to have settled on is artificially creating scarcity by locking the digital works behind paywalls and subscription services that require an account, or maybe DRM paired with a EULA. But I think people generally, and the HN crowd specifically, understand that this is a lousy arrangement.
Could energy become so abundant that it is also post-scarcity? Between fusion energy and advancements in solar, wind, and geothermal energy, maybe! It is a tantalizing vision to dream of, but what does that look like under capitalism?
Electricity that is too cheap to meter is possible today. I'm pretty sure that we are technologically capable of producing enough solar panels to supply reasonable energy needs (ignoring AI data center nonsense, for now). I think this is happening already in certain countries, but the economics of it get weird, because even as a public utility, you have to charge something. A market that drives prices down to almost nothing will then cease to exist, and powerful people don't want that to happen.
The real solution is that governments should just build out power capacity and provide electricity as a service to their citizens, like healthcare and education. The solution we'll probably get is some Dickensian torment nexus where orphans are pushed into a meat grinder and our electric bills go up.
Sounds like a semantics trick. Value is value. Sure, something can have a different value if you exchange it versus if you use it. It can also have a different value if you eat it, or drink it, or smash it, or wear it, or gift it to a family member, or gift it to a friend, or gift it to a lover. "Exchange" is simply one way of use.
Unfortunately, the labor theory of value is self-contradictory. If you invent a new machine that replaces human labor, it will clearly produce more value, yet human labor is reduced. It follows that not all value can be attributed to human labor.
What this really breaks down is meritocracy. If you cannot unambiguously attribute the "effort" of each individual (her labor) to the "value" produced, then such attribution can no longer be used as moral guidance.
So this breaks the right-wing idea that the different incomes are somehow deserved. But this is not new, it's just more pronounced with AI, because the last bastion of meritocracy, human intelligence ("I make more because I'm smarter"), is now falling.
Addendum: Although accounts differ on this, Marx seemed to struggle with LTV, IIRC Steve Keen's Debunking Economics shows Marx contradicting himself on it.
Marx agrees on this, Adam Smith agrees on this, John Locke agrees on this, heck even Keynes agrees on this; all (sensible) economists agree on this. If you do not have labor somewhere in the process, you do not have value.
“Equal quantities of labor, at all times and places, may be said to be of equal value to the laborer. In his ordinary state of health, strength and spirits; in the ordinary degree of his skill and dexterity, he must always lay down the same portion of his ease, his liberty, and his happiness. The price which he pays must always be the same, whatever may be the quantity of goods which he receives in return for it. Of these, indeed, it may sometimes purchase a greater and sometimes a smaller quantity; but it is their value which varies, not that of the labor which purchases them.” says Smith.
From this Smith concludes that “labor alone, therefore, never varying in its own value, is alone the ultimate and real standard by which the value of all commodities can at all times and places be estimated and compared. It is their real price; money is their nominal price only.”
It's a bit long-winded (brief by his standards, though), but the essence is that labor ultimately determines whether there is value. But, critically, not the amount of value. Marx would say that the amount of value is the amount of effort that goes into the commodity, something we now know not to be true.
Still, the point here with AI:
If the labor in the products and services that AI produces goes to 0, then the value of those goods and services must also go to 0.
As a brief example, look at chess or F1 racing. There are chessbots that can beat any human, there are F1 robots that can outrace and win against any human. Yet still, we find no value in watching a robot beat up on a human or on another robot. No-one watches or cares to watch those kinds of competitions. We only care to watch other humans compete against each other. There are many reasons for this, but one is that there is labor involved in the competition.
There's an interesting book called "This Life: Secular Faith and Spiritual Freedom" by Martin Hägglund. Part 2 of the book is really concerned with the Labor Theory of Value, and it articulated it in a way I'd never really understood before. It's hard to summarize in a short post, but here's an essay that engages with the ideas in the span of a few pages: https://www.radicalphilosophy.com/article/the-revival-of-heg...
Really, I encourage people to check out the book. It was at times challenging, but always thought-provoking. Even when I found myself disagreeing (I have some fundamental disagreements with part 1), it helped me articulate my own worldview in a way that few books have before. It's something special. Anyway, the book really cemented and clarified my views on the labor theory of value.
The labor theory of value is one of multiple theories of value [1]. And it is still widely debated.
> Marx would say that the amount of value is the amount of effort that goes into the commodity, something we now know not to be true.
Marx would say the _exchange_ value of a commodity is proportional to the amount of _socially necessary labor time_ required to produce it. Again, something that is debatable.
[1] https://en.wikipedia.org/wiki/Value_(economics)#Theories [2] https://en.wikipedia.org/wiki/Criticisms_of_the_labour_theor...
1) I think it's the destruction of our value, as workers. Without an unthinkable change in society, we'll be discarded.
2) I think it will also destroy the unrealized value of not-yet-created work, first by overwhelming everything with a firehose of mediocre slop, then by disincentivizing the development of human talent and skill (because it will be an easy button that removes the incentives to do that). AI will exceed humans primarily by making humans dumber, not by exceeding humans' present-day capabilities. Eventually creative output will settle at some crappy, derivative level without any peaks that rise above it.
The only people I see handwringing over AI slop replacing their jobs are people who produce things of quality on the level of AI slop. Nobody truly creative seems to feel this way.
I haven't felt too bad about my creative works being fed into training models. Taken by themselves, my creations are minuscule. But it's very apparent when I look at AI as a whole, having taken from everyone in aggregate.
I feel that.
It's been the exact opposite for me. Coding assistance is a great boon towards productivity simply because otherwise I wouldn't work on any of my old ideas stashed in numerous note taking apps. It's way easier today to go from 0 to something like an MVP, and see if there's something there. If there isn't, not much is lost. But without these tools it would be 0 all across the board.
So, how hard can it be to translate them automatically? I tried a few ready-made tools, and they either just plain didn't work or made stupid mistakes due to bad prompting.
I fired up Claude and, within a few hours, had an MVP that could 1) rip English subtitles out of an mkv file and 2) shove them to the OpenAI API for translation in batches.
v2 took two evenings of me watching TV and bouncing around between Claude+Codex+Crush(GLM-4.6)
1) Get subtitles from original
2) Feed full subtitle file to GPT-4o for first pass analysis. It finds names for people, locations etc and decides a singular translation so they don't vary and gives a general context for the tone/genre of the movie for translation
3) Give gpt-4o-mini the first pass context file and a section of the subtitles in a loop
4) Save them as .srt
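The batching loop in steps 3 and 4 can be sketched roughly like this. All names here are illustrative, not the commenter's actual code, and `translate_batch` stands in for the real OpenAI chat-completion call (which would also receive the first-pass context file):

```python
import re
from dataclasses import dataclass

@dataclass
class Cue:
    index: int
    timing: str  # e.g. "00:00:01,000 --> 00:00:02,000"
    text: str

def parse_srt(raw: str) -> list[Cue]:
    """Split an .srt file into cues: index line, timing line, text lines."""
    cues = []
    for block in re.split(r"\n\s*\n", raw.strip()):
        lines = block.splitlines()
        if len(lines) >= 3:
            cues.append(Cue(int(lines[0]), lines[1], "\n".join(lines[2:])))
    return cues

def translate_srt(raw: str, translate_batch, batch_size: int = 20) -> str:
    """Translate cue texts in batches and re-emit a valid .srt string.

    translate_batch: callable taking a list of strings and returning the
    translated strings in the same order -- in practice a wrapper around
    an OpenAI API call that also carries the movie's context/tone notes.
    """
    cues = parse_srt(raw)
    out_blocks = []
    for i in range(0, len(cues), batch_size):
        batch = cues[i:i + batch_size]
        translated = translate_batch([c.text for c in batch])
        for cue, text in zip(batch, translated):
            out_blocks.append(f"{cue.index}\n{cue.timing}\n{text}")
    return "\n\n".join(out_blocks) + "\n"
```

Keeping the timing lines untouched and only swapping the text is what makes the output drop straight back into a player as a .srt file.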
The results are SO MUCH better than most of the "real" translation we've encountered. I think a bunch of them were done by a zero-shot AI or a human who didn't give a fuck about quality and had to meet a quota.
And none of this would've been done without AI; there's so much crappy boilerplate on top of the few actually interesting bits that I would've just tolerated the bad ready-made solutions instead.
I don't know how it plays out in the long run. Will the ease of creating result in increased creativity or will the lack of uniqueness result in a decrease?
I've recently started using a Chrome extension of my own making[0] that allows me to block and highlight users on Hacker News. I remember trying to do this once a long time ago and it was so much work. I had to learn about the Chrome manifest's possible permissions and I had to format my options page nicely with CSS and I had to learn how to make the extension connect to a web page.
Same with these tools I've built for our family. My wife wanted to be able to give an AI some rough notes and have it polish things up using the notes she already had. She wanted to use ChatGPT. I know the theory about the thing:
* ChatGPT has custom GPTs
* Custom GPTs have Actions that call APIs
* If you have an OpenAPI spec, custom GPTs can understand how to call them
* If you have the HTTPS server, custom GPTs can access the endpoints
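For anyone who hasn't wired up an Action before, the OpenAPI document a custom GPT consumes can be sketched roughly like this. The `/polish` endpoint, `operationId`, and schema are hypothetical examples, not the actual service described above:

```python
# Minimal shape of an OpenAPI 3.1 document for a custom GPT Action.
# The GPT reads this spec, sees the operationId and request schema,
# and figures out when and how to call the endpoint on its own.
openapi_spec = {
    "openapi": "3.1.0",
    "info": {"title": "Notes polisher", "version": "1.0.0"},
    "servers": [{"url": "https://example.com"}],  # your HTTPS server
    "paths": {
        "/polish": {
            "post": {
                "operationId": "polishNotes",  # the name the GPT invokes
                "summary": "Polish rough notes into clean prose",
                "requestBody": {
                    "required": True,
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {"notes": {"type": "string"}},
                                "required": ["notes"],
                            }
                        }
                    },
                },
                "responses": {"200": {"description": "Polished text"}},
            }
        }
    },
}
```

Paste a spec of this shape into the custom GPT's Actions panel, and the model handles the rest of the call plumbing.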
Previously, I'd slog through each step one by one. This time, on a day when I was watching my infant daughter, I managed to finish the whole thing fully functioning during a short period that she slept! Eufy Baby iOS app up in the corner of the screen, claude code on the left, ChatGPT in Chrome on the right. Knocked it out in an hour and my wife uses it every day.
Astounding tool. Then yesterday I wanted to print an alternative mount for our baby monitor so I can place it in a specific place. I couldn't find the camera mount STL files anywhere. 15 minutes for my wife to find my calipers, 5 minutes measuring, then untold GPU hours but zero of my time as codex built me a test mount[2] while verifying it against a mesh-contiguity script.
And my wife is a graphic designer, so she iterated on our wedding clothes by using Dall-E to design the ideas she had, polished it up in Adobe CS, and then we got an embroiderer we know to put it on my sherwani[3]. At first I thought that perhaps it was just us engineers who get a lot of value out of these tools, but my wife uses them to design things to make on the Cricut, or to help with stuff to 3d-print on our Bambu, and both of us have used it to come up with modifications to recipes which have had surprisingly decent effect!
Dude, my life is like 10x better, maybe 100x better. Everything I dreamed of is no longer gated by my lack of specific skill. I am only gated out by my taste.
1: https://wiki.roshangeorge.dev/w/Blog/2025-10-17/Custom_GPTs
2: https://wiki.roshangeorge.dev/w/Blog/2025-12-01/Grounding_Yo...
Since before ai all my tiny little works have been public domain and it tickles me pink when i see something of mine out in the wild.
Journey before destination.
With that said though, the people who press the button and fashion themselves creatives piss me off. Heck anyone who has more than a passing interest in gen ai art disappoints me. After all, what is interesting about printing the Mona Lisa compared to creating your own shitty version by hand?
Radio amateurs used to be a thing. Because playing with radios is fun, but also because this provided a way to hear things that otherwise could not be heard.
I got the same feeling as when you defeat a boss in some soulslike game, it's frustrating but you feel so good when you're done with it.
AI art didn't feel special to me because you can generate as many as you want, I got bored very fast. I guess this is because I'm a process oriented person.
I'm cautiously optimistic, because perhaps AGI will find a way to implement it: if it is sentient at that level, then it would be rather difficult for it to be owned by some person or corporation. Obviously LLMs are very far from there and remain just tools for now.
There are much much bigger forces that impact society in the way the author describes.
Corporations are just so large and powerful that people feel hopeless. But we could still get together and enact legislation that will override them. Nothing is impossible; it just takes some imagination and organisation.
Like Chomsky once said, if the peasants of Haiti could organise and overthrow their government and create a functioning democracy, then surely we can too, with far more advantages.
Personally, I'd rather have inequality if it means everyone can live a peaceful life. Let the rich have their yachts or whatever.
I don't see why the increased productivity provided by AI won't make things better, given that all of the ills of the world are caused by scarcity: that is, insufficient productivity.
I don’t subscribe to this basic idea. Copyrights are a legal fiction designed to prop up an industry. Somehow from that we went to the idea that creative work output is property. It isn’t. It’s a service. This is why “works made for hire” is a thing.
This is the same reason that reasonable people don’t believe that fanfic authors should be jailed.
The "knowledge cutoff date" is 12 to 18 months ago for models, which essentially means that copyright has, in some ways, shrunk to that period, since designing around it is now very easy.
Given most people live on what they produced recently and not 20 years ago there’s an argument this makes access to knowledge and techniques fairer. Constant new creation is required to obtain a markup and that drives forward productivity
In other words it’s the copyright/patent argument all over again. And it’s perhaps a debate we need to have again as a service society.
I wonder if the "vibe coders" can fix their own code. Or find a very subtle logic error in it. Hell, I wonder if someone with access to AI can fix a bicycle if they haven't even touched a spanner.
The article is not about AI, it's about this stage of capitalism. Unlike the author, I would argue that it is very much in line with the consequences of Robert Nozick's thinking. On the other hand, the way China is doing its AI development and rollout seems more aligned with a Rawlsian notion of distributing the benefits.
I can perfectly understand why everyone in the West is so jaded and worried about how AI benefits will be distributed. That doesn't change the fact that, like any technology, it can also be used to make everyone wealthier, like industrialization also did.
No one steals from them.
So far AI companies were settling by throwing VC cash at it so the vocal ones that do have IP will be paid off.
The premise is that AI no longer allows one to do this, which is completely false. It may not allow one to do it in the same way, so it's true that some jobs may disappear, but others will be created.
The article is too alarmist, written by someone who has drunk all of the corporate hype. AI is not AGI. AI is an automation tool, like any other we have invented before. The cool thing is that now we can use natural language as a programming language, which was not possible before. If you treat AI as something that can think, you will fail again and again. If you treat it as an automation tool that cannot think, you will get all of the benefits.
Here I am talking about work. Of course AI has introduced a new scale of AI slop, and that has other psychological impacts on society.
AI is still shit. There are prompt networks, which some people call agents, but presently models are still primarily trained as singular models, not made to operate as agents in different contexts, with RL on each agent being used to improve the whole indirectly.
Tokens will eventually become cheap enough that it will be possible to actually train proper agents. So we probably will end up with very powerful systems in time, systems that might actually be at least some kind of AGI-lite. I don't think that is far off. At most a decade.
What does the author suggest is the "moral foundation of modern society"?
Likewise, I've felt like the meritocracy story that the author sets up as the "moral foundation" has heavily attenuated in this century. It's still used as the justification in America (I'm rich because I deserve it, you're not rich because you didn't work as hard/smart as me) but it feels like that story is wearing thin. Or that the relative proportion of the luck / ovarian lottery aspect has become so much larger than the skill+hard work aspect.
The trend of the rich getting richer, of them using their power to manipulate the system to further advantage them and theirs at the expense of everyone else, existed before AI burst into the public in '20-21. Maybe, like the fake media, it will finally be the kick people need to let go of the meritocracy trap* and realize we need a change.
* I like the notion of meritocracy; it just seems like America has moved from aiming for it to using the story of it as an excuse to, or opiate for, the masses.
All of this stuff is clearly a highly cherry-picked gymnastic exercise to justify a pre-existing position. Classic Elephant and Rider stuff.
It’s the same as support for the snail darter. Same as the story about how groups shouldn’t go out during COVID but BLM protests are fine. And if by some incredible chance it had been the FSF or Brewster Kahle who had produced GPT then you guys would be talking about how information should be unchained because creative work belongs to all Man.
Couching this blatantly motivated reasoning by quoting past philosophers is just such middle-brow woe-is-me whining. Take one look at yourselves in an honest sense. Do you have any principles or will you slave them all to your outcomes?
And now I must repeat the litany lest one assume that my opposition to this kind of balderdash be construed as some kind of political tribalism:
* I don’t think we should destroy endangered species
* I think COVID wasn’t a hoax and does spread in large groups
* I think people have a right to protest if they are discriminated against and that includes the black people at BLM
* I love the Internet Archive and have donated to them
I don't think you have a good grasp of why it was OK for outdoor protests to happen while people should not have gone into crowded buildings. The chance of getting sick from a protest is much less than the chance of getting sick from an indoor gathering at, say, a club. Getting sick is not binary, on or off; it's exposure time and magnitude versus your immune system's defenses.
At the same time, if someone designs a robot that prints out copyright-infringing material out of the blue, then they are infringing copyright every time it does so.
You used to need the privilege of knowing the right people or knowing the right sources. But you don’t need that anymore.
What honestly could be better for meritocracy than AI?
"We’re in a narrow window where institutional choices still matter, where there’s enough distributed economic and political power to reshape who controls AI infrastructure and how its benefits flow."
China has AI infrastructure and has made it very clear how its benefits flow. Not to people, "despite" China being communist. In other words we can't do this unless we were to stop China first, or China honestly cooperates, which they've never done. And if you don't do that, then your institutions can only sabotage our side.
You want to exercise control, fine, but first need to have that control.
Some people will point to their supposed crimes or immoral actions while in office, having to do with execution of their duties as president. Large countries tend to do many questionable things. But the current US administration is pretty unique in terms of its corruption, avoidance of accountability, authoritarian and fascist tendencies, etc.
It's not a useful contribution to the discussion to essentially claim that "they're all the same" without making some sort of case for it.
As long as we wait for a godlike leader to rescue us, the end result is the same as with Stalin, Hitler, Trump, Thiel, Epstein, Musk, ...
The godlikeness can come in many forms: political (Trump), propaganda (Musk/Zuck/Thiel), or via extortion material and money, like Epstein.
A good litmus test for a decision-maker is the universal ethical principle mentioned in the article, put into concrete terms: compare everything through the lens of "what if all eight billion current humans, and also future generations, did this?"
Right now nobody's daring to do this, but as soon as we start asking "who's afraid of the narcissist zillionaire?", the world starts to make sense and the solution appears.
For example, this is why the way someone treats service workers is a good indicator of someone's character.
AI is exposing the myths of talent and erasing differences between humans, making them all a uniform array of subjects. However, this erasure comes at the cost of a return to a monarchy-style economic model, where wealth would be moved from the common population to the owners of AI, the neo-monarchs.
AI feels like a more extreme version of that flattening, but without the civic purpose that justified it. You end up with legibility without legitimacy. That’s the part I think we don’t have a good framework for yet.
So, as a preface: I would argue that in the marketplace for information (ideas, content, works, etc., if you will), value has two components: 1) utility value, and 2) scarcity value (which I would argue itself has two components: availability value (the information horizon[0]) and saliency value (preference as a result of the information horizon[1])). Obviously the rise of copyright over the last few centuries attempts to limit commerce in ideas, based somewhat loosely on the concept of (real, physical) property law, itself a glorified, codified extension of the animal behaviour of territoriality, a.k.a. survival by means of securing resources. Ownership is treated as a moral value autogenous to an idea's sui generis expression on, by, or through a recorded medium. In other words, the mere act of recording an idea in a "creative" fashion brings with it "moral" ownership of that idea. Of course, the "creative" and "moral" parts from which the law prescribes and proscribes these limits are debatable. The legally mandated "monopoly" on an idea (even though the marginal cost of replication approaches zero) provides this scarcity. So, in the end, this scarcity, and the resulting "rent seeking" (corporations seeking to charge, typically over time, more than the cost to create and distribute information), are in effect value (read: wealth) redistribution.
So, what is more "moral"? Using a constructivist approach, which is more socially acceptable (and hence far more likely to be codified into law): broad access to information, distilled to the point that the sui generis element is "removed" so that concepts (which in general can't be copyrighted) are freely available for transformative use; or extending the "moral right" of copyright, now that information distillation is so cheap, to include the ability to "copyright" an idea rather than the expression of it? Or are we seeing a "breakdown" of the codification of the law, where the granularity of the spectrum is so fine that enforcement becomes so costly that the "value" of the law in reducing the transaction costs of daily social interaction in the information sphere collapses, and we revert to pre-copyright behaviours, i.e. information hoarding?
[0] https://www.researchgate.net/publication/261773523_Informati... The concept is quite simple: there is a limit to a) the amount and quality of information a human can get access to (and, following from that, what they prefer / value), and b) relatedly, the amount that can be synthesised for actual use. [1] https://www.sciencedirect.com/science/article/abs/pii/S07408...
It opened the PR automatically, took the comments and applied the fixes (renaming a few things, simple refactoring and added a few unit tests).
Effort from me: maybe 5 minutes going through the PR comments and deciding they were super simple monkey work and 10 seconds typing instructions for Claude.
Then maybe 5 minutes of checking the work, commit and push.
All of this could've been done with a Github integrated Claude tbh without me in the loop more than accepting the changes one by one.
There’s an old joke about a mechanic banging a hammer on a car and charging $1000 for it.
Man, us engineers severely undervalue each other's worth, huh? One thing all this AI hype has taught me is to be very conscious of what my value is. It's a hard habit to break.