This technique can reliably make any good LLM fluent in an API that it's never seen in its training data.
I’ve had better luck manually including the page, but including the indexed docs is usually enough to fix API mistakes.
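Concretely, the "include the docs" trick just amounts to stuffing the relevant reference material into the context before asking for code. Here's a rough sketch of what that looks like with the OpenAI Node SDK; the model name and docs path are placeholders, and any chat-style API works the same way:

```typescript
// Sketch: teach the model an unfamiliar API by pasting its docs into the prompt.
// The docs path and model name are placeholders, not specific recommendations.
import OpenAI from "openai";
import { readFileSync } from "node:fs";

const docs = readFileSync("docs/some-library-api.md", "utf8"); // local copy of the library's docs
const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "system",
      content: `Answer using only the API described in these docs:\n\n${docs}`,
    },
    {
      role: "user",
      content: "Write a function that loads a PDF and returns the text of page 1.",
    },
  ],
});

console.log(response.choices[0].message.content);
```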
I find Copilot useful when I already know what I want and start typing it out; at a certain point the scope of the problem is narrowed enough for the LLM to fill in the rest. Of course, this is more along the lines of “glorified autocomplete” than the “replacing junior devs” claims I keep hearing.
Because it's faster.
Here's an example: https://tools.simonwillison.net/ocr
That's an entirely client-side web page: you open a PDF, it converts every page to an image (using PDF.js), runs each image through the Tesseract.js OCR library, and lets you copy out the resulting text.
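The core loop looks roughly like this; this is a sketch of the approach rather than the page's actual source, and the function name, worker path, and options are mine:

```typescript
// Sketch only: render each PDF page to a canvas with PDF.js, then OCR it
// with Tesseract.js. Names and options are illustrative.
import * as pdfjsLib from "pdfjs-dist";
import Tesseract from "tesseract.js";

// PDF.js needs to know where its worker script lives; this path is an assumption
pdfjsLib.GlobalWorkerOptions.workerSrc = "/pdf.worker.min.js";

async function ocrPdf(file: File): Promise<string> {
  const data = new Uint8Array(await file.arrayBuffer());
  const pdf = await pdfjsLib.getDocument({ data }).promise;
  const pages: string[] = [];

  for (let i = 1; i <= pdf.numPages; i++) {
    // Render page i to an off-screen canvas (2x scale helps OCR accuracy)
    const page = await pdf.getPage(i);
    const viewport = page.getViewport({ scale: 2 });
    const canvas = document.createElement("canvas");
    canvas.width = viewport.width;
    canvas.height = viewport.height;
    const ctx = canvas.getContext("2d")!;
    await page.render({ canvasContext: ctx, viewport }).promise;

    // Run OCR on the rendered image and collect the text
    const { data: result } = await Tesseract.recognize(canvas, "eng");
    pages.push(result.text);
  }

  return pages.join("\n\n");
}
```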
I built the first version of that in about 5 minutes while paying attention to a talk at a conference, by pasting in examples of PDF.js and Tesseract.js usage. Here's that transcript: https://gist.github.com/simonw/6a9f077bf8db616e44893a24ae1d3...
I wrote more about that process here, including the prompts I used: https://simonwillison.net/2024/Mar/30/ocr-pdfs-images/
That's why I'm bothering: I can produce useful software in just a few minutes, while only paying partial attention to what the LLM is doing for me.
We actually had to make a rule at work that if you use an LLM to create a PR and can't explain the changes without using more LLMs, you can't submit the PR. I've seen it almost work: code that looks right but does a bunch of unnecessary stuff, and then it took a real person (me) to clean it up, which ended up taking just as much time as if it had been written correctly the first time.
I've struggled to get any productivity benefits beyond single-file contexts. I've started playing with aider in an attempt to handle more complex workflows and multi-file editing, but I keep running into snags and end up spinning my wheels fighting my tools instead of making forward progress...