I teach first-year university math in Argentina. We have non-mandatory take-home exercises in each class. If I spend 10 minutes writing them on the blackboard instead of handing out photocopies, I get about twice as many answers from students. It's important that they write out the answers and that I can comment on them, because otherwise they get to the midterms and can't write the answers correctly, or the answers are just wrong and they never noticed. So I spend those 10 minutes. Humans are weird, and for some tasks they want another human.
Efficiency-seeking players will adopt this quickly, but self-sustaining bureaucracies have successfully avoided most modernization over the past 30 years - so why not AI as well.
I think we often view teaching as knowledge-in, knowledge-out, which is true for later grades. For early ones, though, many teachers are teaching children how to be "human", as crazy as that sounds.
A great example would be handing a double-sided worksheet to a child in 1st grade. A normal person might just hand the child the paper and a pencil and tell them to go work on it. A teacher will teach the child where and how to write their name, to read the instructions carefully, and to flip the paper over to check for more questions.
We often don't think about things like that, since we don't remember them at all.
I can imagine a future where AIs greatly ease the paperwork, planning, etc. of teachers so that they can focus wholly on human-to-human interaction.
There's much more I'm missing here that teachers of younger grades do, but I hope my point has gotten across.
Teaching is a very hands-on, front-line job. It's more like being a stage performer than a bureaucrat.
It's not a competitive field. Teachers won't get replaced as new, more efficient modes of learning become available.
Barely any western education system has adapted to the existence of the internet - they are still teaching facts and using rote repetition where it is completely useless.
We have high-quality online courses that should render much of high school and university redundant, yet the system continues on the old tracks, almost unchanged. It has never been competitive, and it has likely always been more about certifying traits than about actual learning. Both, I think, are pointers toward rapid change being unlikely.
[1] Michael Levin: "Non-neural, developmental bioelectricity as a precursor for cognition", https://www.youtube.com/watch?v=3Cu-g4LgnWs
[2] And ChatGPT agrees, like a good parrot:
"Regarding the assertion that LLMs are better at selecting the search space than specifying it, I believe this is accurate. LLMs are trained on large datasets and can identify patterns and relationships within that data. However, they do not create the data or define the search space themselves. Instead, they rely on the data provided to them to guide their decision-making process."
But then, given the prompt "What do you think about: LLMs are very helpful, they are some form of legitimate reasoning or knowledge: they are a better search space selector, and they also specify the search space",
ChatGPT also agrees: "When it comes to search space selection, LLMs can be used to generate relevant search queries or to rank search results based on their relevance to the query. LLMs can also be used to specify the search space by limiting the search to a specific domain or topic.
In terms of legitimate reasoning or knowledge, LLMs can provide insights and predictions based on their training data. However, it's important to note that LLMs are only as good as the data they are trained on, and they may not always provide accurate or unbiased results."
If only Plato could see this Sophist as a Service, he would go completely apoplectic.