Rudely? Ha - they misrepresented my point, that RAG tooling is not meant to replace lawyers, turning it into a straw man about replacing lawyers. I never said that; I said the opposite.
Secondly, it's obvious they have not used RAG, or they wouldn't say things like "inaccurate responses" etc. RAG's retrieval step is as accurate as the database behind it, because it is a database lookup: the information from your uploaded files is indexed in a store, the relevant passages are retrieved from that store, and the model's answer is grounded in those passages rather than in whatever it memorized during training. The commenter fundamentally misunderstands the technology and likely hasn't even used it, yet feels the need to comment on it like an expert. It's not the same as using plain ChatGPT, and in any case it's not in lieu of a lawyer anyway; that was just a straw man argument that runs counter to my actual post.
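To make the "it reads from a database" point concrete, here is a minimal sketch of the RAG mechanics. It is a toy: a keyword-overlap retriever stands in for a real vector database and embedding model, and the case file contents are invented purely for illustration.

```python
# Toy sketch of RAG indexing and retrieval: uploaded files are chunked into
# a store, and answering starts with a lookup over that store, not with the
# model's memory. Keyword overlap stands in for real embedding search.
import re

def tokenize(text):
    """Lowercase the text and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def build_index(documents):
    """Split each uploaded document into chunks and store them in an index."""
    index = []
    for doc_id, text in documents.items():
        for chunk in text.split("\n\n"):
            index.append({"doc": doc_id, "text": chunk})
    return index

def retrieve(index, query, top_k=1):
    """Return the stored chunks whose words overlap most with the query."""
    q_words = tokenize(query)
    ranked = sorted(index,
                    key=lambda c: len(q_words & tokenize(c["text"])),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical case file, invented for this example.
docs = {"case_notes.txt":
        "Filing deadline is March 3.\n\nOpposing counsel is Smith LLP."}
index = build_index(docs)
hits = retrieve(index, "What is the filing deadline?")
print(hits[0]["text"])  # prints the chunk the answer would be grounded in
```

In a production system the index would be a vector store and the retrieved chunk would be passed to the LLM as context, but the shape of the pipeline - index your files, look up the relevant passage, answer from it - is exactly this.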
I did respond to the points about accuracy and legal precedents. Unlike the other false statements that were made, these are legitimate concerns a lot of people share about whether or not LLM tooling should be used by legal professionals.
Is ChatGPT sufficient to replace a lawyer? No.
Is ChatGPT sufficient as a legal advice tool that a lawyer might use on a case-by-case basis or generally? No.
Could the same LLM technology be used on a body of specific case documents to surface information through a convenient language interface to a legal expert? Yes. The retrieval itself is about as safe as a SQL query, and the legal expert reviewing the surfaced passages is still the one exercising judgment.
The point about pricing and inventory is that, unlike a bare LLM answering from its training data, RAG retrieves specific facts from a document (or collection of documents); the language model mostly handles your query and matches it to that information. None of the points he made about inaccuracies, insufficient answers, or replacing lawyers apply.
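The pricing example can be sketched the same way. This toy shows the "retrieve the fact, then answer from it" flow: `PRICE_LIST` is invented data, and `answer_from()` is a hypothetical stand-in for the LLM generation step, which in a real system would phrase the answer from the retrieved context.

```python
# Toy "pricing" lookup in the RAG style: the fact comes from the retrieved
# document line, not from the model's memory. PRICE_LIST is invented data
# and answer_from() is a stand-in for the LLM generation step.
import re

PRICE_LIST = "Widget A: $4.50\nWidget B: $7.25\nWidget C: $2.00"

def retrieve_line(document, query):
    """Pick the document line sharing the most word tokens with the query."""
    q = set(re.findall(r"[a-z0-9]+", query.lower()))
    return max(document.splitlines(),
               key=lambda line: len(q & set(re.findall(r"[a-z0-9]+",
                                                       line.lower()))))

def answer_from(context, query):
    """Stand-in for generation: the answer quotes the retrieved fact."""
    return f"Per the uploaded documents: {context}"

fact = retrieve_line(PRICE_LIST, "price of widget b")
print(answer_from(fact, "price of widget b"))
```

The point the sketch makes: if the price isn't in the documents, there is nothing to retrieve, which is a very different failure mode from a bare LLM confidently making a price up.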