> And it replied with some random http response showing that this hostname exists.
And that's not logical? ChatGPT doesn't know what is actually there, so it answers logically based on what should happen there. Obviously, giving two different answers makes it less consistent, but I have seen plenty of people make logic errors in real life too.
It's crazy to me that for an AGI to count as one, it needs to be infallible in logic...