This assumes the team deploying the solution is equally capable of engineering a RAG-based system and of finetuning an LLM. Those are different skillsets, and even selecting which LLM to finetune is a complex question, let alone aligning it, deploying it, optimizing inference, etc.
The budget question comes into play as well. Even if the same text is repeatedly fed to the LLM, that cost accrues gradually over time, whereas finetuning is a sort of upfront capex; spread out like that, RAG can be financially more accessible.
Now bear in mind, I'm a big proponent of finetuning where applicable and I try to raise awareness of the possibilities it opens up. But one cannot deny RAG is a lot more accessible to teams made up of developers / AI engineers rather than ML engineers/researchers.