Do you mean using the LLM as a post-processing step within a ChatGPT conversation? Or more generally, as part of the Whisper pipeline itself? If it's the former, I've found that ChatGPT is good at working around transcription errors. Regarding the latter, I agree, but it wouldn't be hard to use the GPT API for that.
Yes, I mean as part of the GUI. But you're right, I hadn't thought of that: maybe transcription errors don't matter if ChatGPT works out from context that the transcript is wrong and gives a correct answer anyway.
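For what it's worth, the "use the GPT API for that" idea could look something like the sketch below: take Whisper's raw transcript and send it through the chat-completions endpoint as a clean-up pass. The model name and the prompt wording are my assumptions, not anything settled in this thread.

```python
# Hedged sketch: post-process a speech-to-text transcript with the
# OpenAI chat-completions API. Uses only the stdlib for the HTTP call.
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(transcript: str, model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completions request asking the model to repair
    likely transcription errors while leaving the wording intact."""
    return {
        "model": model,  # assumed model; swap for whatever you have access to
        "messages": [
            {"role": "system",
             "content": ("The user text is a speech-to-text transcript. "
                         "Fix obvious transcription errors and return only "
                         "the corrected text.")},
            {"role": "user", "content": transcript},
        ],
    }

def clean_transcript(transcript: str, api_key: str) -> str:
    """Send the transcript through the clean-up pass and return the result."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(transcript)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The upside of doing it as a separate pass like this, rather than inside the chat, is that the corrected transcript can be shown to the user before it's sent anywhere.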