I don't think there's even an agreed-upon criterion for what AGI is. Current models can easily pass the Turing test (except for some gotchas, but those don't really test intelligence).
You're wrong on reliability. Humans are also quite unreliable, and formal reasoning systems in silico can fail too (due to, e.g., cosmic-ray bit flips); the probability is just astronomically low.
And in engineering, we know quite well how to take a component that fails less than 50% of the time and turn it into a system with any degree of reliability we want: we just run it over and over and take a majority vote (or check that independent runs agree).
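Concretely, here's a minimal sketch of that amplification (standard repetition plus majority vote; the strong assumption is that the runs fail independently):

```python
from math import comb

def majority_vote_error(p_correct: float, n_runs: int) -> float:
    """Probability that the majority of n_runs independent runs is wrong,
    given each individual run is correct with probability p_correct (> 0.5)."""
    # The majority is wrong when at most n_runs // 2 of the runs are correct.
    return sum(
        comb(n_runs, k) * p_correct**k * (1 - p_correct)**(n_runs - k)
        for k in range(n_runs // 2 + 1)
    )

# Even a barely-better-than-chance component amplifies quickly:
for n in (1, 11, 101, 1001):
    print(n, majority_vote_error(0.6, n))
```

With a per-run accuracy of just 60%, a 1001-run majority vote is wrong with probability well below one in a billion. That's the whole trick: once you're past 50%, repetition buys you arbitrary reliability.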
And Claude Code (as an LLM harness) can do this. It can write tests. It can check whether a program runs correctly (produces the expected result). It can be pushed to any degree of reliability you desire. We've crossed that 50% threshold.
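The shape of that loop is roughly the following (a sketch only; `generate_candidate` and the pytest invocation are stand-ins I'm assuming, not Claude Code's actual internals):

```python
import subprocess
from typing import Callable

def solve_with_verification(
    generate_candidate: Callable[[str], str],  # hypothetical: e.g. an LLM call
    task: str,
    test_file: str,  # tests that exercise the generated candidate.py
    max_attempts: int = 10,
) -> str | None:
    """Retry generation until a candidate passes its tests.

    Each attempt is verified against an external check (here: pytest),
    so overall reliability is bounded by the quality of the tests,
    not by any single generation being correct.
    """
    for _ in range(max_attempts):
        candidate = generate_candidate(task)
        with open("candidate.py", "w") as f:
            f.write(candidate)
        result = subprocess.run(
            ["pytest", test_file, "-q"], capture_output=True, text=True
        )
        if result.returncode == 0:  # all tests passed
            return candidate
    return None  # verification never succeeded within the budget
```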
The same happens when models are learning. They start with heuristics, but eventually they generalize enough to internalize the formal rules of logic and reasoning and to apply them with a high degree of reliability. Again, we've probably crossed that threshold, which is consistent with many users' experience that models get more reliable with each iteration.
Does it make me uneasy that I don't know what the underlying learned formal reasoning system is? Yes. But that doesn't mean it's not AGI.
"Or do you think that a general intelligence would be in the habit of lying to people and concealing why?"
First, why couldn't it? "At the end of the day, that would be not only unintelligent, but hostile" is hardly an argument against it. We ourselves are AGI, yet we take unintelligent and hostile actions all the time. And who said it's unintelligent to begin with? As an AGI, it might very well be in my intelligent self-interest to lie about it.
Second, why is "knows it and can verify" a necessary condition? An AGI could very well not know it's one.
>And there is such a thing as "the truth", and it can be verified by anyone repeatably in the requisite (fair, accurate) circumstances, and it's not based in word games.
Epistemologically speaking, this is hardly the slam-dunk argument you think it is.
The question is not whether an AGI knows that it is an AGI. The question is whether it knows that it is not one. And you're missing the fact that there's no such thing as "it" here.
If you go around acting hostile to good people, that's still not very intelligent. In fact, I would question whether you have any concept of why you're doing it at all. Chances are you're doing it to run from yourself, not because you know what you're doing.
Anyway, you're just speculating, and the fact of the matter is that you don't have to speculate. If you actually wanted to verify what I said, it would be very easy to do so. It's no surprise that someone who doesn't want to know something will have deaf ears. So I'm not going to pretend that I stand a chance of convincing you when I already know that my argument is accurate.
Don't be so sure that you meet the criteria for AGI.
And as for my slam dunk: any attempt to argue against the existence of truth automatically validates the assumption of its existence. So don't make the mistake of thinking I had to argue for it. I was merely stating a fact.
Sorry, I'm not interested in replying to ad-hominem jabs and insults when I made perfectly clear (if basic), non-personal arguments.
In any case, your comments ignore just about all of epistemology and simply take for granted whatever naive folk epistemology you've arrived at, and you're not interested in counter-arguments anyway, so: have a nice life.