In this case it is not, though. As much as I'd like a self-hostable, cheap, and lean model for this specific task, what we have instead is a completely inflexible model that I can't just prompt-tweak into behaving better, even in not-so-special cases like the one above.
I'm sure there are good examples of specialised LLMs that do work well (like ones trained on specific sciences), but here the model doesn't have enough language comprehension to follow plain-English instructions. How do I tweak it without fine-tuning? With a traditional scraping approach this kind of adjustment is trivial, but here it's infeasible for the end user.
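To illustrate what "trivial" means with a traditional scraper: the extraction rule is explicit in the code, so fixing a misbehaving case is a one-line selector change rather than a retraining job. This is a minimal sketch using only Python's standard library; the `span`/`class="price"` structure is a hypothetical page layout chosen for the example.

```python
from html.parser import HTMLParser

class PriceExtractor(HTMLParser):
    """Collects text from <span class="price"> elements.

    The tag and class here are assumptions about the target page;
    adapting to a layout change means editing these two checks.
    """
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_price = False

page = '<div><span class="price">$19.99</span><span>other</span></div>'
parser = PriceExtractor()
parser.feed(page)
print(parser.prices)  # ['$19.99']
```

With an instruction-deaf specialised model there is no equivalent knob: you can't point at the failing case and patch the rule.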