What? Where have you been the last 3 months?
> the quality of the responses from the model are correlated with how large (and therefore how much compute) the model has
There's a lot more to this, including model architecture, training methods, number of training tokens, quality of training data, etc.
I'm not at all saying that Vicuna/Alpaca/SuperCOT/other LLaMA-based models are as good as GPT-3.5, but they should be capable of this; they still produce coherent answers.
Ideally you want 24GB of VRAM, but you can get away with less, or you can offload to system memory (although that'll be slow).
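For reference, here's a minimal sketch of what that offloading looks like with Hugging Face transformers - the Vicuna model ID is just an example, swap in whatever you're actually running:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/vicuna-13B-1.1-HF"  # example model, substitute your own

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" fills available VRAM with as many layers as fit
# and offloads the remainder to system RAM (slower, but it runs)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 halves memory use vs fp32
    device_map="auto",
)

inputs = tokenizer("def fib(n):", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```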
There's actually an OpenAI API proxy that might make this work without too much effort.
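e.g. with the (pre-1.0) openai Python client you only have to repoint api_base - a sketch, assuming the proxy exposes an OpenAI-compatible endpoint; the URL, key, and model name here are all placeholders:

```python
import openai

# Point the client at the local proxy instead of api.openai.com.
# The address/port is hypothetical - use whatever your proxy listens on.
openai.api_base = "http://localhost:5001/v1"
openai.api_key = "not-needed-locally"  # most local proxies ignore this

resp = openai.ChatCompletion.create(
    model="local-model",  # proxies typically map this to the loaded model
    messages=[{"role": "user", "content": "Write a haiku about VRAM."}],
)
print(resp["choices"][0]["message"]["content"])
```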
EDIT: The readme actually says they plan to support StableLM, which is interesting because, at least at the moment, that's not a well-performing model.
EDIT 2: You should try the replit-code-v1-3b model - it's surprisingly good at programming - https://huggingface.co/spaces/replit/replit-code-v1-3b-demo
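If you want to poke at it locally instead of through the demo Space, something along these lines should work - it needs trust_remote_code=True since the repo ships custom model code, and the sampling parameters are just guesses:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# replit-code-v1-3b ships custom modeling code, hence trust_remote_code=True
tokenizer = AutoTokenizer.from_pretrained(
    "replit/replit-code-v1-3b", trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "replit/replit-code-v1-3b", trust_remote_code=True
)

x = tokenizer.encode("def binary_search(arr, target):", return_tensors="pt")
y = model.generate(x, max_new_tokens=100, do_sample=True, top_p=0.95, temperature=0.2)
print(tokenizer.decode(y[0], skip_special_tokens=True))
```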