I am a complete LLM beginner but would like to get into the practical application of the technology by interfacing a GPT instance with an internal database at work.
I understand that creating LLM agents to handle unstructured data is quite a common use case. Ironically, more experienced peers have told me that the same exercise with structured data can be more challenging.
Are there any resources you could point me to regarding best practices and tools for such a project? In my mind, I would 'magically feed' the DB schema to the LLM, have it write valid SQL queries, and translate the results back into text. Does this make sense? Are there better ways to do this?
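To make my mental model concrete, here is a minimal sketch of the pipeline I'm imagining, using SQLite and a stubbed `call_llm` function standing in for a real model API (the function name, prompt wording, and canned response are all hypothetical placeholders, not any particular library's interface):

```python
import sqlite3

# Hypothetical stand-in for a real LLM API call; it returns a canned
# query here so the whole pipeline is runnable without credentials.
def call_llm(prompt: str) -> str:
    return "SELECT name, salary FROM employees WHERE salary > 50000;"

def get_schema(conn: sqlite3.Connection) -> str:
    """The 'magic feeding' step: extract the CREATE TABLE statements."""
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return "\n".join(r[0] for r in rows)

def answer_question(conn: sqlite3.Connection, question: str) -> str:
    prompt = (
        f"Schema:\n{get_schema(conn)}\n\n"
        f"Question: {question}\n"
        "Reply with a single read-only SQL query."
    )
    sql = call_llm(prompt)
    # Guardrail: never execute anything but a SELECT from the model.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("Refusing to run non-SELECT SQL")
    rows = conn.execute(sql).fetchall()
    # A real system would make a second LLM call to phrase rows as prose.
    return f"Found {len(rows)} matching rows: {rows}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ada", 70000), ("Bob", 40000)])
print(answer_question(conn, "Who earns more than 50k?"))
```

Is this roughly the right shape, or do production setups structure the schema-prompting and result-summarization steps differently?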