We're currently using GPT-3.5 and GPT-4 as the LLMs. We'll add support for more models under the hood over time, depending on the prompt and use case.
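As a rough illustration of what per-prompt model selection could look like, here's a minimal sketch; the function name, thresholds, and flags are all hypothetical, not our actual routing logic:

```python
def pick_model(prompt: str, needs_reasoning: bool = False) -> str:
    """Hypothetical router: pick the cheapest model likely to handle the request."""
    # Reasoning-heavy or very long prompts go to the larger model.
    if needs_reasoning or len(prompt) > 2000:
        return "gpt-4"
    # Everything else stays on the cheaper, faster model.
    return "gpt-3.5-turbo"

print(pick_model("Summarize this sentence."))  # gpt-3.5-turbo
print(pick_model("Walk through this proof step by step.", needs_reasoning=True))  # gpt-4
```

The point of a router like this is that the cheap model is the default, and escalation to the expensive one has to be earned by the request itself.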
We're experimenting with doing everything on the cheapest, smallest models we can, because ultimately we want to run it all on-device.