In Accelerando, the VO (Vile Offspring) are a species of trillions of AI beings that are loosely descended from us. They have a civilization of their own.
Also, what a shortsighted sci-fi book; yet techies readily invest in that particular fantasy because it's not your usual spaceship fare.
https://marshallbrain.com/manna1
I think I've internalized these stories enough to comfortably say (without giving anything away) that AI is incompatible with capitalism and probably money itself. That's why I consider it to be the last problem in computer science, because once we've solved problem solving, then the (artificial) scarcity of modern capitalism and the social Darwinism it relies upon can simply be opted out of. Unless we collectively decide to subjugate ourselves under a Star Wars empire or Star Trek Borg dystopia.
The catch being that I have yet to see a billionaire speak out against the dangers of performative economics once machines surpass human productivity or take any meaningful action to implement UBI before it's too late. So on the current timeline, subjugation under an Iron Heel in the style of Jack London feels inevitable.
I hear this all the time, but to what end? If the input costs to produce most things end up driving towards zero, then why would there be a need for UBI? Wouldn't UBI _be_ the performative economics mentioned?
Talk about getting the wrong message. No one show those guys a copy of 1984! Wow, then…
There's a level of autonomy in the AI agents (each determines the next step on its own) that is not predefined.
Agreed though that there’s lots of similarities.
Autonomy/automation makes sense where error-prone repetitive human activity is involved. But rule definitions are not repetitive human tasks. They are defined once and run every time by automation. Why does one need to go for a probabilistic rule definition for a one-time manual task? I don't see huge gains here.
1) Human agent, manual retrieval (included for completeness)
2) One-off script to get exactly the content I want
3) Traditional workflow, write & maintain
4) One-off prompt to the agent to write the script in #2, then sort and arrange the content for grouping based on the descriptions it receives (this is what I used; 3 hours later I had a year's worth of journal abstracts on various subjects downloaded, sorted, indexed, and summarized in a chromadb. I'd just asked for the content, but the Python code it left for me included a parameterized CLI with assorted variables and some thoughtful presets for semantic search options.)
5) One-off prompt to the agent to write the workflow in #3, run at-will or by an agent
6) Prompt an agent to write some prompts, one of which will be a prompt for this task, the others whatever they want: "Write a series of prompts that will be given to agents for task X. Break task X down into these components…"
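As a toy illustration of the kind of artifact #4 produces, here's a minimal sketch of a parameterized grouping CLI. Everything here (the names, the naive keyword-overlap matching) is hypothetical and far cruder than a real embedding store like chromadb, but it shows the shape of what the agent left behind:

```python
# Hypothetical sketch of a parameterized abstract-grouping CLI.
# A real version would embed documents into a vector store (e.g. chromadb)
# instead of the keyword-overlap matching used here.
import argparse
from collections import defaultdict

def group_abstracts(abstracts, topics):
    """Assign each abstract to the topic sharing the most words with it."""
    groups = defaultdict(list)
    for title, text in abstracts.items():
        words = set(text.lower().split())
        best = max(topics, key=lambda t: len(words & set(t.lower().split())))
        groups[best].append(title)
    return dict(groups)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Group abstracts by topic")
    parser.add_argument("--topics", nargs="+",
                        default=["machine learning", "cell biology"])
    # parse_known_args so stray argv entries don't crash the sketch
    args, _ = parser.parse_known_args()
    sample = {
        "paper-a": "deep learning models for image classification",
        "paper-b": "protein expression in cell membranes",
    }
    print(group_abstracts(sample, args.topics))
```

The parameterized `--topics` flag is the part that maps to "assorted variables and thoughtful presets" in the comment above.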
For inference, the difference between expensive data-center hardware and home GPUs largely comes down to RAM. That limitation is being actively worked around (unfortunately, the well-funded orgs are not that interested in this).
Here's one from Deepmind:
1. https://www.x402.org/ - micropayments for ai agents to access resources without needing to sign up for an api key
2. https://8004.org/ - open AI agent registry standard
https://en.wikipedia.org/wiki/Decentralized_autonomous_organ...
I feel like co-ops were awful anyway even without the blockchain.
The hackability of these things, though, remains a very valid topic; it's orthogonal to the fact that AI has arrived on the scene.
Tools for: mass harassment campaigns against rich people/companies that don't support human life anymore, dynamically calculating the most damage you can do without crossing into illegality.
Automatically suggesting alternatives of local human businesses vs the bigevils, or collecting like minded groups of people to start up new competition. Tracking individual rich people and which new companies and decisions of theirs are doing ongoing damage; somehow recognizing and categorizing big tech's tendency to "do the same old illegal shit, except through an app now" before the legal system can catch up.
Capitalism sure turns out to be real fucking dumb if it can't even come up with proper market-analysis tools giving workers some kind of knowledge about where they can best leverage their skills, while companies get away with breaking all the rules and creating coercion hierarchies everywhere.
I hate to say it (because the legal system has never worked, ever), but the only workable future to me seems like forcing agents/robots to be tied to humans. If a company wants 100 robots, it must somehow be paying a human for every robot it utilizes. Maybe a dynamic ratio: if the government decided most people are getting enough resources to survive, then maybe 2 robots per human paid.
This is so bonkers and absurd I don't know what to say.
> Automatically suggesting alternatives of local human businesses vs the bigevils, or collecting like minded groups of people to start up new competition
I think you are correct on the competition part.
I think we are going to see an avalanche of millions of small businesses taking market share away from big businesses (B2B SaaS is the first casualty, but others will follow as the technology advances).
However, I think it'll happen due to AI/LLMs themselves more than through regulation.
This is what I’ve been thinking lately as well. Couple that with legal responsibility for any repercussions, and you might have a way society can thrive alongside AI and robotics.
I think any AI or robotic system acting upon the world in some way (even LLM chatbots) should require a human “co-signer” who takes legal responsibility for anything the system does, as if they had performed the action themselves.
There isn't any sense in which an AI agent gives rise to economic value, because it wants nothing, promises nothing, and has nothing to exchange. An AI agent can only 'enable' economic transactions as a means of production (etc.) -- the price of any good cannot derive from a system which has no subjective desire grounded in final ends.
There's no fundamental reason why AI systems can't become corporate-type legal persons. With offshoring and multiple jurisdictions, it's probably legally possible now. There have been a few blockchain-based organizations where voting was anonymous and based on token ownership. If an AI was operating in that space, would anyone be able to stop it? Or even notice?
The paper starts to address this issue at "4.3 Rethinking the legal boundaries of the corporation.", but doesn't get very far.
Sooner or later, probably sooner, there will be a collision between the powers AIs can have, and the limited responsibilities corporations do have. Go re-read this famous op-ed from Milton Friedman, "The Social Responsibility of Business Is to Increase Its Profits".[1] This is the founding document of the modern conservative movement. Do AIs get to benefit from that interpretation?
[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...
Not quite. It's scarcity, not time. Scarcity of economic inputs (land, labor, capital, and technology). By "time" you mean labor, and that's just one input.
Economics is like a constrained optimization problem: how to allocate scarce resources given unlimited desires.
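That simile can be made literal with a toy brute-force allocator; the prices, the utility function, and the budget below are all made up for illustration:

```python
# Toy "allocate scarce resources given unlimited desires": choose quantities
# of each good to maximize utility subject to a budget, by brute force.
from itertools import product

def best_allocation(prices, utility, budget, max_qty=10):
    """Exhaustively search quantity vectors that fit the budget."""
    feasible = (
        q for q in product(range(max_qty + 1), repeat=len(prices))
        if sum(p * n for p, n in zip(prices, q)) <= budget
    )
    return max(feasible, key=utility)

# Diminishing returns: utility grows with the square root of quantity.
u = lambda q: sum(n ** 0.5 for n in q)
print(best_allocation(prices=[1, 2], utility=u, budget=6))
```

The "unlimited desires" part is the utility function always rewarding more; the "scarce resources" part is the budget constraint filtering the search space.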
I.e., the reason a cookie is 1 USD is never that some "merely legal entity" had a fictional (merely legal) desire for cookies for some reason.
Instead, from this pov, it's that the workers each have their desire for something; the customers; the owners; and so on. It all bottoms out in people doing things for other people -- legal fictions are just dispensable ways of talking about arrangements of people.
Incidentally, I think this is an open question. Maybe groups of people have unique desires, unique kinds of lives, a unique time limitation etc. that means a group of people really can give rise to different kinds of economic transactions and different economic values.
Consider a state: it issues debt. But why does that have value? Because we expect the population of a state to be stable or grow, so this debt, in the future, has people who can honour it. Their time is being promised today. Could a state issue debt if this weren't true? I don't think so. I think in the end, some people have to be around to exchange their time for this debt; if none are, or none want to, the debt has no value.
You'll need to give a citation for this if you want to be taken seriously.
You can program AI with "market values" that arise from people; but absent that, how do these values arise naturally? I.e., why is it that I value anything at all, in order to exchange it?
Well, if I live forever, can labour forever, and so on -- then the value to me of anything is, if not always zero, almost always zero. I don't need anything from you: I can make everything myself. I don't need to exchange.
We engage in exchange because we are primarily time limited. We do not have the time, quite literally, to do for ourselves everything we need. We, today, cannot farm (etc.) on our own behalf.
Now, there are a few caveats and so on to add; and there's an argument that we are limited in other ways that can give rise to the need to exchange.
But why things have an exchange value at all -- why there are economic transactions -- is mostly due to the need to exchange time with each other, because we don't have enough of it.
However that seems completely tangential to the current AI tech trajectory and probably going to arise entirely separately.