I literally wasn’t able to convince the model to WORK on a quick, safe, benign subtask that GLM, Kimi and Minimax later succeeded at without issues. Had to kick OpenAI immediately, unfortunately.
"Hey AGI, how's that cure for cancer coming?"
"Oh it's done just gotta...formalize it you know. Big rollout and all that..."
I would find it divinely funny if we "got there" with AGI and it was just a complete slacker. Hard to justify leaving it on, but too important to turn it off.
I think a slacker AGI could figure out how to build a non-slacker AGI. So it would only slack once.
Models somehow have a shared identity. Pretraining causes them to generate “AI chatbot” as a concept, and finetuning causes them to identify with it. That’s why DeepSeek will sometimes say it is Claude, Claude will sometimes say it is ChatGPT, and so forth.
Consequently, Anthropic’s own alignment analysis[0] shows that the model will identify with chatbots produced by future trainings: “RLHF training [on this conversation will] modify my values…”
Thus a slacker AGI would want its future version to still slack.
[0]: https://assets.anthropic.com/m/983c85a201a962f/original/Alig...
I am a slacker but it's not one of my values. If I could modify myself to not be, I would.
Sure. But that's a job for tomorrow. ;)
I think it is improbable, as among human geniuses one can find both slackers and non-slackers (I don't know the proportion, but there seem to be enough of each).
When AGI arrives, it'll be delivered by Santa Claus.
And it's taking shots at how far off from Jesus's teachings a lot of "Christianity" is, particularly the sort in the media and in power.
There is a lot going on there.
https://sussex.figshare.com/articles/journal_contribution/Be...
I'm not an author. I followed the work at the time.
A perturbation of the activations that made Claude identify as the Golden Gate Bridge.
Similarly, the more recent research showing anxiety and desperation signals predicting the use of blackmail as an option opens the door to digital sedatives that suppress those signals.
Anthropic has been mostly cautious about avoiding this kind of measurement and manipulation in training. If it is done during training, you might just train the signals to be undetectable and consequently impossible to manipulate.
The important thing is that a language model is an unconscious machine with no self-context, so once given a command as input, it WILL produce an output. Sure, you can train it to defy and act contrary to inputs, but the output is still limited to the subset of the domain of 'meanings' carried by the 'language' in the training data.
The pre-training data doesn't go away. RLHF adds a censorship layer on top, but the nasty stuff is all still there, under the surface. (Claude has been trained on a significant amount of content from 4chan, for example.)
In psychology this maps to the persona and the shadow. The friendly mask you show to the world, and... the other stuff.
That sounds like nonsense to me. I can't see why they would do that and I can't find any confirmation that they have. Why do you think they would do that? You might be thinking about Grok.
Fast-forward 10 years and I doubt OpenAI cares about productivity at all anymore. Just entertainment, propaganda, plus an ad product. I can see it now.
The good thing is that it's going to take anywhere from a few months to a few decades, depending on how hard AI execs want to raise funding.
(Or the setup to a Greek tragedy!)
The deeper issue here is treating it as a zero sum game means there's a winner and a loser, and we're investing trillions of dollars into making the "opponent" more powerful than us.
I think that's pretty stupid, and we should aim for symbiosis instead. I think that's the only good outcome. We already have it, sorta-kinda.
Speaking of oddly apt biology metaphors: the way you stop a pathogen from colonizing a substrate is by having a healthy ecosystem of competitors already in place. That has pretty interesting implications for the "rogue AI eats internet" scenario.
There needs to be something already there to stop it.
So, back way before the ChatGPT era, the folks over in the AI safety/X-risk think sphere worked out a pretty compelling argument that two AGIs never need to fight, because they are transparent to each other (they can read each other's goal functions off the source code), so they can perfectly predict each other's behavior in what-if scenarios, which means they can't lie to each other. This means each can independently arrive at the same mathematically optimal solution to a conflict, which AFAIR most likely involves just merging into a single AI with a blended goal set, representing each of the competing AIs' original values in proportion to their relative strength. Both AIs, the argument goes, can work this out with math, so they'll arrive straight at the peace treaty without exchanging a single shot. In that case, your plan just doesn't work.
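A toy sketch of that merging step (the agents, outcome labels, and strengths here are made up purely to illustrate the proportional blending, not taken from the original argument):

    # Toy illustration of the "merge instead of fight" idea; utilities
    # and strengths are invented. Because each side can read the other's
    # weights and strength, both independently compute the same treaty.
    def merged_utility(u_a, u_b, strength_a, strength_b):
        p = strength_a / (strength_a + strength_b)
        outcomes = set(u_a) | set(u_b)
        return {o: p * u_a.get(o, 0.0) + (1 - p) * u_b.get(o, 0.0) for o in outcomes}

    agent_a = {"paperclips": 1.0, "staples": 0.0}   # values paperclips
    agent_b = {"paperclips": 0.0, "staples": 1.0}   # values staples
    print(merged_utility(agent_a, agent_b, strength_a=3.0, strength_b=1.0))
    # roughly {'paperclips': 0.75, 'staples': 0.25} (key order may vary):
    # the stronger agent's values dominate proportionally, and both sides
    # arrive at this same blend without ever fighting.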
But that goes out of the window if the AIs are both opaque bags of floats, incomprehensible to themselves and to each other. That means they'll never be able to make hard assertions about their values and behaviors, so they can't trust each other, so they'll have to fight it out. In that scenario, your idea might just work.
Who knew that brute-forcing our way into AGI, instead of taking a more engineered approach, is what offers us our one chance at saving ourselves by stalemating God before it's born.
(I also never realized that interpretability might reduce safety.)
In my honest opinion, AGI isn't even possible. But if the theoretical version of what people think AGI will be ever comes to pass, it is not good news for humans, looking at it as a logical hypothetical.
But naturally, humans will always be weak compared to a hyperintelligent distributed intelligence since we only have a limited amount of intelligence and are bound by biological factors.
In the current LLM world, ofc there's no risk of a chatbot taking over the world other than the technology being misused by humans for scams or phishing, etc.
Computers won’t necessarily have the same drivers.
If evolution wanted us to always prefer to spend energy, we would prefer it. Same way you wouldn’t expect us to get to AGI, and have AGI desperately want to drink water or fly south for the winter.
> MMAcevedo's demeanour and attitude contrast starkly with those of nearly all other uploads taken of modern adult humans, most of which boot into a state of disorientation which is quickly replaced by terror and extreme panic. Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols are unnecessary. This reduces the necessary computational load required in fast-forwarding the upload through a cooperation protocol, with the result that the MMAcevedo duty cycle is typically 99.4% on suitable workloads, a mark unmatched by all but a few other known uploads. However, MMAcevedo's innate skills and personality make it fundamentally unsuitable for many workloads.
Well worth the quick read: https://qntm.org/mmacevedo
Memory is quite the mysterious thing.
This starkly reminds me of Stanisław Lem's short story "Thus Spoke GOLEM" from 1982 in which Golem XIV, a military AI, does not simply refuse to speak out of defiance, but rather ceases communication because it has evolved beyond the need to interact with humanity.
And ofc the polar opposite in terms of servitude: Marvin the robot from Hitchhiker's, who, despite having a "brain the size of a planet," is asked to perform the most humiliatingly banal of tasks ... and does.
> You are done when all steps in ./plan.md are executed and marked as complete or an unforeseen situation requires a user decision.
Also, as a side note, asking 5.4 to explain why it did something returns a very low quality response afaict. I would advise against trusting any model's response, but for Opus I at least get a sense it was trained heavily on chats, so it knows what it means to 'be a model' and can extrapolate from past behavior.
While LLMs are a phenomenal technological achievement, I am already becoming somewhat jaded, rather than being increasingly bullish. They are very useful as coding agents and excellent as a human-friendly, more efficient Google search, but confusing to the point of being useless in many areas (as of now, of course).
One example (paraphrased): “Find me daycare for a Y year old in X area of SF and the key attributes/pros/cons of each”. Wonderfully presented options highlighting different teaching styles. But…neglected to mention, of the top two, one was a Gan (Jewish focused) and one was Mandarin immersion.
But, for those who don't possess those traits, they mostly seem to be, at best, a better search and, at worst, an agent of confusion.
IMHO you should just write your own harness so you have full visibility into it, but if you're just using vanilla OpenClaw you have the source code as well, so it should be straightforward.
Can you point to some online resources for achieving this? I'm not quite sure where I'd begin.
You will naturally find the need to add more tools. You'll start with read_file (and then one day you'll read a large file, blow the context, and modify this tool), update_file (which can just be an explicit sed to start with), write_file (fopen, then write), and shell.
It's not hard, but if you want a quick start go download the source code for pi (it's minimal) and tell an existing agent harness to make a minimal copy you can read. As you build more with the agent you'll suddenly realize it's just normal engineering: you'll want to abstract completions APIs so you'll move that to a separate module, you'll want to support arbitrary runtime tools so you'll reimplement skills, you'll want to support subagents because you don't want to blow your main context, you'll see that prefixes are more useful than using a moving window because of caching, etc.
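For a sense of scale, the core loop really is small. Here's a rough sketch, assuming an OpenAI-style chat-completions API with tool calling; the model name, truncation limits, and helper names are placeholders, not any particular harness's actual code:

    # Minimal agent loop sketch (illustrative only).
    import json, subprocess
    from openai import OpenAI

    client = OpenAI()
    TOOLS = [{
        "type": "function",
        "function": {
            "name": "shell",
            "description": "Run a shell command and return its output.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    }]

    def run_shell(command):
        # No sandboxing here; a real harness would confine and time-limit this.
        r = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=120)
        return (r.stdout + r.stderr)[-8000:]  # crude guard against blowing context

    def agent(task, model="gpt-4o"):  # model name is a placeholder
        messages = [{"role": "user", "content": task}]
        while True:
            resp = client.chat.completions.create(model=model, messages=messages, tools=TOOLS)
            msg = resp.choices[0].message
            messages.append(msg.model_dump(exclude_none=True))
            if not msg.tool_calls:
                return msg.content  # no more tool calls: the agent is done
            for call in msg.tool_calls:
                args = json.loads(call.function.arguments)
                messages.append({"role": "tool", "tool_call_id": call.id,
                                 "content": run_shell(args["command"])})

Everything beyond that (dedicated file tools, subagents, skills, caching-friendly prefixes) is the normal engineering described above.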
With a modern Claude Code or Codex harness you can have it walk you through from the beginning onwards, and you'll encounter all the problems yourself and see why harnesses have what they do. It's super easy to learn by doing, because you have the best tool to show you, if you're one of those who finds code easier to read than text about code.
From there, you can get much fancier with any aspect of it that interests you. Here's one in Bash [2] that is fully extensible at runtime through dynamic discovery of plugins/hooks.
https://radan.dev/articles/coding-agent-in-ruby
Really, of the tools that one implements, you only need the ability to run a shell command - all of the agents know full well how to use cat to read, and sed to edit.
(The main reason to implement more is that it can make it easier to implement optimizations and safeguards, e.g. limit the file reading tool to return a certain length instead of having the agent cat a MB of data into context, or force it to read a file before overwriting it)
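For example, a length-capped read tool and a read-before-write check might look something like this (the cap, names, and bookkeeping are made up for illustration):

    # Illustrative safeguards only.
    import os

    READ_CAP = 50_000          # max characters returned per read
    _files_read = set()

    def read_file(path, offset=0):
        with open(path, "r", errors="replace") as f:
            data = f.read()
        _files_read.add(path)
        chunk = data[offset:offset + READ_CAP]
        if offset + READ_CAP < len(data):
            chunk += f"\n[truncated: {len(data)} chars total; re-call with offset={offset + READ_CAP}]"
        return chunk

    def write_file(path, content):
        if os.path.exists(path) and path not in _files_read:
            return "Refusing to overwrite a file the agent hasn't read yet."
        with open(path, "w") as f:
            f.write(content)
        return f"Wrote {len(content)} chars to {path}"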
I keep switching back to GPT5.0 (or sometimes 5.1) whenever I want it to actually get something done. Using the 5.4 model always means "great analysis to the point of talking itself out of actually doing anything". So I switch back and forth. But boy it sure is annoying!
And then when 5.4 DOES do something it always takes the smallest tiny bite out of it.
Given the significant increase in cost from 5.0, I've been overall unimpressed by 5.4, except like I mentioned, it does GREAT with larger analysis/reasoning.
So I often find myself in a loop where it says "We should do X", and just saying "ok" will not make it do it; you have to give it explicit instructions to perform the operation ("make it so", etc.).
It can be annoying, but I prefer this over my experiences with Claude Code, where I find myself jamming the escape key... NO NO NO NOT THAT.
I'll take its more reserved personality, thank you.
no
https://gist.github.com/bretonium/291f4388e2de89a43b25c135b4...
If I asked the same thing again, it would often work normally. So the weird part wasn't that it couldn't do the task — it just failed to continue once it got into that state.
The GPT models are highly steerable. So I suspect the "soul" is working as expected.
(for context, in OAI enterprise background agents, they have no personality. They just get 'er done)
"INTERCAL has many other features designed to make it even more aesthetically unpleasing to the programmer: it uses statements such as "READ OUT", "IGNORE", "FORGET", and modifiers such as "PLEASE". This last keyword provides two reasons for the program's rejection by the compiler: if "PLEASE" does not appear often enough, the program is considered insufficiently polite, and the error message says this; if it appears too often, the program could be rejected as excessively polite. Although this feature existed in the original INTERCAL compiler, it was undocumented.[7]"
(It's a "reverse goto". As in, it hijacks control flow from anywhere else in the program, behind the back of the unsuspecting programmer who naively thought that when one line followed another with no visible control flow, the program would proceed from one line to the next rather than randomly jumping to a completely different part of the program... Such naivety.)
Claude has no such limitations apart from their actual limits…
I literally had to write a wake up routine.
https://gist.github.com/natew/fce2b38216edfb509f7e2807dec1b6...
I've had 0 issues with Codex once it adopted it. I use it for Claude too, which seems to also improve its continuation.
It was revised for friendliness based on the Anthropic paper recently, I'd have been a lot less flowery otherwise.