I’ve also recently been involved in designing a DevOps/Docker deployment pipeline for a customer. They use Java, and I haven’t used Java in decades.
Previously, I would have just done my POC using a Python or Node container and relied on the fact that they knew Java well enough to get the concepts. Instead, I used Java and started the chain of questions with: “Answer all questions based on talking to someone who doesn’t know Java. Explain everything step by step.”
In both cases, ChatGPT will usually get me 99% there. But I have to keep trying things and giving it the error messages and iterating.
Of course, there is the hallucination issue.
On the other hand, I’ve done a lot of work professionally with old-school chatbots integrated with web pages and call centers, where the only “intelligent” component was that we could parse out parts of speech (nouns, verbs, adjectives, etc.) and search only on those.
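The part-of-speech filtering idea can be sketched roughly like this. This is a toy illustration, not the actual system I worked on: the `CONTENT_WORDS` lexicon and `extract_search_terms` function are made up for the example, and a real deployment would use a trained POS tagger (e.g. NLTK or spaCy) rather than a hardcoded word list.

```python
# Toy sketch of "parse out parts of speech, search only on those".
# A tiny hand-rolled lexicon stands in for a real POS tagger so the
# example is self-contained and runnable.

# Hypothetical content-word lexicon: only these survive filtering.
CONTENT_WORDS = {
    "password": "noun",
    "reset": "verb",
    "account": "noun",
    "locked": "adjective",
}

# Function words we discard before searching.
STOP_WORDS = {"i", "my", "is", "how", "do", "the", "a", "an", "to", "please"}

def extract_search_terms(utterance: str) -> list[str]:
    """Keep only nouns/verbs/adjectives; drop everything else."""
    terms = []
    for word in utterance.lower().replace("?", "").split():
        if word in STOP_WORDS:
            continue
        if word in CONTENT_WORDS:
            terms.append(word)
    return terms

print(extract_search_terms("How do I reset my password?"))
# ['reset', 'password']
```

The search index then only ever sees the content words, which is why these systems were predictable enough to vet: every answer could be mapped back to a known set of keywords.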
I would never recommend putting an LLM-style chatbot in front of a customer. When I work with customers - especially in the government - the questions and answers are heavily vetted before being put into production.
They would never take the chance that a customer could jailbreak the chatbot and have it say something that triggers a political argument about “bias,” or that it would give incorrect information about a government benefit.