No.
An LLM itself has a better shot at replacing a lawyer; the person typing the prompt is not necessarily required to be as educated. Though a case can be made that you still need to validate the information.
As it happens, we can already see how this works. Software engineering has seen many abstractions, each of which comes with its own complexity in verification.
What tends to happen is that people don't really do a lot of verification; we are just "mostly right" very fast and leave an immense amount of inefficiency and indirection behind us.
If I need someone to help me interact with a legal LLM, I will want to (and probably be able to, for 300k) hire someone with a law degree. In fact, I anticipate many lawyers in the future will effectively become “prompt engineers” for legal LLMs.
How do you use this as a lawyer?
I mean, as a stereotypical evil lawyer in a world of naïve people who don't learn from experience, you could maybe use it to win cases until you destroy the justice system.
But other than that...
Sure, there are matters I would only trust a lawyer to handle, but there are a great many I wouldn't.
Further, the average quality of a human lawyer will likely be the same tomorrow as it is today, while AI will only get better. An LLM today, perhaps some hybrid stack tomorrow; it's only a matter of time before an AI lawyer is the way to go for just about any legal matter. And let me be clear: that time might be 10 years, or it might be 100+, but it is coming.
This is a strange statement. No one is training LLMs to generate “misinformation”. It’s the opposite - it’s trained to generate the most likely next word, given the preceding ~2000 words, using billions of examples from a real-world training corpus. So it will try to generate as much information as is present in the corpus. Maybe even more, but that’s debatable.
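For the curious, here’s a toy sketch of that objective in pure Python - counts over a made-up corpus standing in for a neural net trained on billions of examples, and a one-word context standing in for the ~2000-word one:

    # Toy sketch of "predict the most likely next word" (corpus invented).
    from collections import Counter, defaultdict

    corpus = "the court ruled for the plaintiff and the court adjourned".split()

    # Count which word follows each word (context length 1, standing in
    # for a real model's ~2000-token context window).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def most_likely_next(word):
        # Return the highest-count continuation: the "most likely next word".
        return following[word].most_common(1)[0][0]

    print(most_likely_next("the"))  # -> "court" (2 of the 3 continuations)

There is no notion of "true" or "false" anywhere in that loop - only "frequent in the corpus" - which is the point being argued over in the rest of this thread.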
That is phrased like it is stating a fact about the training process, but it is a statement about the intent of the training, isn't it? So I don't see it as rebutting my comment.
>It’s the opposite - it’s trained to generate the most likely next word
Sure, of course, what else? But if you take any correct statement about something and modify it slightly, it's not very likely it will still be correct.
It seems intuitive to me that there are going to be a million billion (understatement) wrong things next to anything correct in the inputs. It's a sort of combinatorial, mathematical thing: you just (in principle) count all the ways to be wrong that are similar to being right.
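To make that counting concrete, a back-of-the-envelope version in Python (both numbers are invented for illustration):

    # A statement of n tokens over a vocabulary of size V has n * (V - 1)
    # single-token variants, and almost all of them are wrong.
    n = 20          # tokens in a short factual statement (assumed)
    V = 50_000      # vocabulary size, roughly GPT-scale (assumed)

    one_token_variants = n * (V - 1)
    print(one_token_variants)  # 999,980 near-misses per correct statement

And that's only single-token edits; allow two or three and the count explodes further.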
Nobody trained it to get anything right! It doesn't matter what people expect if they don't have a procedure to achieve it.
If a statement is adjacent to things that are also "correct", that almost implies a lack of information in the original statement. It seems borne out by the impressive BS'ing - the key to BS'ing is saying things that can't really be wrong.