Just summary features: save me 20min of reading a transcript, turn it into 20s. That's a huge enabler.
If I paste the actual non-trivial code, it starts deviating fast. And it isn't even that complex, it's just less like "parallel sort two arrays" and more like "wait for an image on screen by execing scrot (silently) repeatedly, passing each screenshot to this detect-cv2.py script with all the matching options described in this ts type, reading stdout JSON as in this other ts type; if there's a match, wait the specified animation timeout and test again to get the settled match coords after the animation finishes; throw after a total timeout". Not rocket science, pretty dumb shit, but right there they fall flat and start imagining things, heavily.
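For what it's worth, the control flow being described is only about this much code. A minimal sketch in TypeScript, with the scrot/detect-cv2.py invocation abstracted behind a `detect` callback; all names here (`DetectResult`, `waitForImage`, the option fields) are hypothetical, not the actual types from the comment:

```typescript
// Result shape stood in for the real stdout-JSON type.
interface DetectResult {
  matched: boolean;
  x: number;
  y: number;
}

// In the real version this would exec scrot and pipe the screenshot
// into detect-cv2.py; here it is injected so the loop logic is testable.
type DetectFn = () => Promise<DetectResult>;

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function waitForImage(
  detect: DetectFn,
  opts: { pollMs: number; animMs: number; totalMs: number },
): Promise<DetectResult> {
  const deadline = Date.now() + opts.totalMs;
  while (Date.now() < deadline) {
    const first = await detect();
    if (first.matched) {
      // Match found: wait out the animation, then re-detect to get
      // the settled coordinates rather than a mid-animation position.
      await sleep(opts.animMs);
      const settled = await detect();
      if (settled.matched) return settled;
      // Match vanished during the animation; fall through and keep polling.
    }
    await sleep(opts.pollMs);
  }
  throw new Error("waitForImage: no settled match within total timeout");
}
```

The double-detect (match, settle, re-detect) is the part that matters: a single positive hit can land mid-animation and return coordinates that are stale by the time you click them.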
I guess it shines if you ask it to make an html form, but I couldn’t call that life-changing unless I had to make these damn forms all day.
Effective and information-dense communication is really hard. That doesn't mean we should just accept the useless fluff surrounding the actual information and/or analysis. People could learn a lot from the Ig Nobel Prize ceremony's 24/7 lecture format: a complete technical description in 24 seconds, then a summary anyone can understand in seven words.
Sadly, it seems we are heading towards a future where you may need an LLM to distill the relevant information out of a sea of noise.
But if that's the only place that contained the information you needed, then you have no choice.
There's a lot of material out there that is badly written, badly organized, and badly presented. LLMs can be a godsend for extracting the information you actually need without wasting 20 minutes wading through the muck.
Not all content is worth consuming, and not all content is dense.