By the same logic, we should worry about the sun not coming up tomorrow, since we know the following to be true:
- The sun consumes hydrogen in nuclear reactions all the time.
- The sun has a finite amount of hydrogen available.
There are a lot of unjustifiable assumptions baked into those axioms, such as that we're anywhere close to superintelligence, or that the sun is anywhere close to running out of hydrogen.
AFAIK we haven't even seen "AI trying to escape"; we've seen "AI roleplaying as if it's trying to escape", which is very different.
I'm not even sure you can construct a prompt scenario without the prompt itself biasing the response toward faking an escape.
I think it's hard at this point to maintain the claim that "LLMs are intelligent"; they're clearly not. They might be useful, but that's another story entirely.