How would one tell if the AI-created "proof" is both accurate and adequate?
Just yesterday I was playing with ChatGPT and found a discrepancy between the code it generated and its explanation of that code. It contradicted itself.
However, when I caught the error, I asked it to explain further, since the explanation appeared to contradict the code it had generated. It came back with an apology, acknowledged it had made a mistake, and was able to understand and fix the error. I was specific about the mistake, though. I might try the same test again later today and see whether it learned or generates the same error again. If it does, I will ask it to confirm that its explanation and code match, rather than pointing out the error directly.
Even your single-datapoint explanation/POV/understanding will help accelerate this entire process.
I have to keep reminding programmers that Co-Pilot exists, is real, and makes FEWER mistakes than entry-level datagrunt software engineers. And it costs pennies of electricity to run daily.
All capitalism is, at bottom, is the search to maximize efficiency, and monotonous human labor (~80%) is the most expensive part of that equation... this is not an "if" but a "when" situation. Burying your head in the sand will remain a safe place for lesser programmers to keep making money, for at least another few years.
But as was said elsewhere in this thread: if you do not know how to write PROOF code to VERIFY these inevitable AI-assistant-coders' outputs, you will not have a job. Human mindpower cannot compete in the brute-force arena; all ChatGPT is right now is a bunch of autistic middle-aged asshole trolls with WAY TOO MUCH MONEY, and EVEN MORE TIME (to play around with this).
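To make the "PROOF code" point concrete, here is a minimal sketch of what verifying an AI assistant's output can look like. The function `top_n` stands in for a hypothetical AI-generated snippet (the name and behavior are my invention, purely for illustration); the verifier checks the assistant's claimed behavior against a trusted oracle (Python's built-in `sorted`) on many randomized inputs, which is how you catch a mismatch between what the code does and what the explanation says it does.

```python
import random

# Hypothetical stand-in for AI-generated code. Suppose the assistant
# claims: "returns the n largest values, in descending order."
def top_n(values, n):
    return sorted(values, reverse=True)[:n]

# "Proof" code: property check against a trusted oracle on random inputs.
# If the generated code and its explanation disagree, some random case
# will usually expose it.
def verify_top_n(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        values = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        n = rng.randint(0, len(values))
        result = top_n(values, n)
        expected = sorted(values, reverse=True)[:n]  # oracle for the claim
        assert result == expected, f"mismatch on {values!r}, n={n}"
    return True

print(verify_top_n())
```

This is exactly the kind of cheap, mechanical cross-check a human can write in minutes but an over-trusting user never runs; it is also the skill the paragraph above says will keep you employed.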
I encourage you, as a more-artistic-than-technical (but still fairly intelligent) person, to "just pretend" that this is your new Fiverr tasker. Because it already is, and it will only become more so once it is more widely understood and accepted.
Peace.
I sense that "I'm sorry, Dave..." isn't quite as far away as we thought...