https://twitter.com/goodside/status/1581805503897735168?s=20...
GPT-3 is perfectly capable of recognizing what kinds of things it will be bad at, and can be encouraged to generate machine-executable queries to fill in that gap.
(note this is based on prompting GPT-3, not ChatGPT, but the principles about what this class of language model is capable of still apply)
It can even go one level deeper - there's an example there of it generating a python script that uses the 'wikipedia' library to look up the date of death of the Queen, as a way to fill in knowledge it doesn't have. Tell it it can use the wolframalpha module to answer questions that involve complex units, quantities, or advanced mathematics, and it'll almost certainly do that too.
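The pattern described above can be sketched offline: the model, told it may emit a tool query for facts it can't know, produces something like a lookup call, and a small harness executes it. The `LOOKUP(...)` syntax and the hard-coded table (standing in for the real `wikipedia` library, so this runs without network access) are illustrative assumptions, not Riley's actual prompt.

```python
import re

# Hard-coded stand-in for the 'wikipedia' library, so the sketch runs offline.
# The date here is the factually correct answer the real lookup would return.
FAKE_WIKIPEDIA = {
    "Elizabeth II date of death": "8 September 2022",
}

def run_tool_call(model_output: str) -> str:
    """If the model emitted a LOOKUP(...) query, execute it; otherwise
    treat the output as a direct answer."""
    match = re.search(r"LOOKUP\(wikipedia, '([^']+)'\)", model_output)
    if match:
        return FAKE_WIKIPEDIA.get(match.group(1), "not found")
    return model_output  # no tool call: the model answered directly

# The model recognizes the fact is past its training cutoff and emits a query:
print(run_tool_call("LOOKUP(wikipedia, 'Elizabeth II date of death')"))
```

Swapping the lookup table for the real `wikipedia` (or `wolframalpha`) module is what turns this toy harness into the setup in the tweet.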
One of the things I love is this reply to that tweet - https://twitter.com/JulienMouchnino/status/15820120109127065...
"How is it possible that GPT-3 understands what a human can compute in his/her head?"
Riley's quick demo prompt shows that GPT-3's sense of which mathematical results a person can easily compute in their head matches human intuition surprisingly well.