Here’s a user stylesheet I’ve been using for 2½ years, since color-mix() landed behind a pref in Firefox Nightly:
```css
:any-link {
  text-decoration: underline color-mix(in srgb, currentcolor 30%, transparent) !important;
}
:any-link:is(:hover, :active, :focus) {
  text-decoration: underline !important;
}
```
This means links get a semitransparent underline normally, and a full-opacity one on hover. I reckon it’s an excellent balance. Occasionally you’ll get double underlining because people used border-bottom instead of text-decoration for some unfathomable reason, and occasionally there’ll be link-styled buttons that won’t get this underline, but all up I’ve found it a pretty good intervention.

(I’ve been using this technique on websites I make since 2019, though I haven’t yet had the opportunity to use color-mix(), which only stabilised in browsers 6–11 months ago, on a public site. My preferred technique there will be `:any-link:where(:not(:hover, :active, :focus)) { text-decoration-color: color-mix(in srgb, currentcolor 30%, transparent); }`.)
Question: why would you prefer the latter technique on public sites, vs what's in your user stylesheet?
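One plausible answer (my reading, not confirmed in the thread) is scope and overridability: the user-stylesheet version forces underlines everywhere with `!important`, which is fine as a personal intervention but too blunt for a site you publish. The public-site variant quoted above, written out in full:

```css
/* Lower-impact variant for public sites:
   - sets only text-decoration-color, so it doesn't force an underline
     where the site's author removed one, and doesn't clobber other
     text-decoration properties;
   - :where() gives the :not(...) part zero specificity, so the whole
     selector costs no more than `:any-link` alone;
   - no !important, so ordinary author rules can still override it. */
:any-link:where(:not(:hover, :active, :focus)) {
  text-decoration-color: color-mix(in srgb, currentcolor 30%, transparent);
}
```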
https://github.com/angular/angular-cli/issues/26028
It would be very useful, because there are so many edge cases that aren't covered in the docs but are probably buried in some WONTFIX issues.
2. Hallucinate Tests based on the Manual;
3. Hallucinate the Code to pass the Tests;
Voila! You hallucinated the whole software!
But again, the naming is horrible; I thought it was a Copilot just for markdown.
I am very disappointed that it isn't.
I had some success having ChatGPT document my code; in fact, it does a better job than I do by myself. Sometimes I have to fix a misunderstanding, but when it comes to writing and finding the right words, it is way better than I am. For me, it works better for this than for code generation, and I think that kind of "Copilot for Docs" would make a lot of sense.
If you are eager to try something similar without dealing with a community, I’m assuming it’s possible to download the documentation from your favorite framework and start a “GPT” on OpenAI by giving it the role of a helpful coding assistant.
I definitely feel like there’s something to be done beyond the RAG aspect: figuring out how to address the “other question” problem (you ask how you can do A, but only because A is your way of finding a solution to B, and there’s a much easier solution to B that you miss because you don’t understand the abstraction C). That would require a much more deliberate community effort to formalise properly.
nvm, I found it on the top right of the copilot for docs page
It has full knowledge of your entire codebase rather than being limited to currently open files (I assume embeddings with RAG) and will index documentation (or any other URL).
I find it especially useful for things like Swift/SwiftUI APIs, given how quickly they can evolve, not to mention Copilot’s other limitations when it comes to languages beyond JS/TS, Python, etc.
It was unclear in the quick pitch, but I am wondering if they're essentially throwing RAG/search on top of a GPT model? I'm guessing so, because I can't imagine you could train a model on such a limited source and get meaningful information. If that's the case, perhaps this will be interesting, but I think there are other interesting angles to this approach than focusing on a single library/codebase.
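For concreteness, "throwing RAG/search on top of a GPT model" usually means: embed the documentation in chunks, retrieve the chunks nearest to the user's question, and stuff them into the prompt. The sketch below is a toy illustration of that shape, not Copilot's actual pipeline (which isn't public); the `embed()` function is a stand-in hashed bag-of-words where a real system would call an embedding model.

```python
# Minimal RAG sketch: embed doc chunks, retrieve the closest ones for a
# query, and prepend them to the prompt sent to the LLM.
import math
from collections import Counter

def embed(text: str, dims: int = 256) -> list[float]:
    """Toy embedding: hashed bag-of-words, L2-normalised.
    A real system would use an embedding model here."""
    vec = [0.0] * dims
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dims] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks whose embeddings are closest to the query's
    (cosine similarity; vectors are already unit-length)."""
    q = embed(query)
    scored = sorted(chunks, key=lambda c: -sum(a * b for a, b in zip(q, embed(c))))
    return scored[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Stuff the retrieved context ahead of the question, RAG-style."""
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using this documentation:\n{context}\n\nQuestion: {query}"
```

The point is that no training happens on the docs at all: the base model stays fixed, and the "knowledge" arrives at query time through retrieval, which is why it can work on a single small codebase.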
What I’d love to see is such an agent *learn* from the interactions with people asking:
* What questions do they ask more often than for other frameworks, and how can the documentation clarify those points (explicitly or not);
* Can you detect when questions betray a naive grasp of the problem, and when the person asking would benefit from dedicated training rather than just a quick answer;
* Can an LLM structure programming concepts and suggest ways to describe a framework that would help people make sense of what each framework actually is?
Copilot for docs is ___
GitHub Copilot for Docs™ is an award winning, Gartner® Magic Quadrant™ leading, blazing fast AI solution that empowers business leaders to maximize their ROI by leveraging state of the art GPU enabled Hybrid Cloud infrastructure on the Edge to break knowledge silos and improve productivity at scale.
It's an LLM trained on our docs, OpenAPI spec, GitHub issues, and forum posts.
Definitely been helpful to our customers and users to find answers/pointers to docs more quickly.
I get that docs are uneven in quality but there’s also been what I think is a renewed focus on quality & interactive examples. There’s some truly fantastic OSS documentation out there.
The tone of this just didn’t sit right with me.