Thank you!
On the part about info leaving your machine: since we're using OpenAI, the table metadata does get sent out to draft the SQL. But the conversations, messages, stored results, and everything else stay local in a SQLite DB. No cloud involved at all. With local LLMs it will indeed be fully airtight.
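For anyone curious what that split looks like, here's a rough sketch of the data flow (not our actual code, and the function/table names are made up for illustration): only the schema metadata and the question go to OpenAI to draft the SQL, while the conversation and query results are written to a local SQLite file.

```python
import sqlite3
from openai import OpenAI  # assumes the official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_sql(table_metadata: str, question: str) -> str:
    """Only the schema metadata and the question leave the machine."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Write a single SQL query for the given schema."},
            {"role": "user",
             "content": f"Schema:\n{table_metadata}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

def save_locally(db_path: str, question: str, sql: str, result: str) -> None:
    """Conversation and results stay in a local SQLite file."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS messages (question TEXT, sql TEXT, result TEXT)"
    )
    con.execute("INSERT INTO messages VALUES (?, ?, ?)", (question, sql, result))
    con.commit()
    con.close()
```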
We don't support local LLMs yet because we want to ensure high-quality results. We only expose models after testing them thoroughly (the eval pipeline is nearly done, probably less than a week left). So soon we'll be releasing more supported models, including local LLMs, but only if they're good enough and fast enough. Speed is one concern for local LLMs, but the quality is the bigger issue: pretty underwhelming right now. We haven't tested all of them yet though, so I can't generalize. The eval pipeline will make that much easier.