So, I think many people in our sector have a progress bias from years of Moore's law and the like, and operate under the assumption that anything in tech that displays problems (in this case, "produces incorrect information") will just resolve itself with time and progress.
People outside of tech don't necessarily think this way. So hearing that "it produces incorrect answers" is kind of a deal breaker, no?
Who is right here? I actually think the LLM approach has limits we could hit a wall on, and that the technique may never get past the "I'm just making shit up" problem. The skeptics may in fact be quite right.
LLMs are exciting as automatic language-generation tools: they chain words together in ways that sound human, and they extract reasoning patterns that look like human reasoning. But they're not reasoning, and it doesn't take much to trip them up in basic argumentation. Yet because they look like they're "thinking," some people with a tech-optimist bias get excited and just assume the problems will resolve themselves. They could be very wrong; there could in fact be hard limits to this approach.
... More worrisome is if LLMs become omnipresent despite this flaw, and we just accept bullshit from computers the way we now seem to accept complete bullshit from politicians and businessmen....