I guess this goes to show that even a subtle touch of an LLM can undermine authenticity.
edit: i've removed that line. I don't like to edit articles after publishing (call me old-fashioned, but i try to be honest and transparent), but in this case the line adds nothing and your call-out has taught me a good lesson: shit human writing is better than "good" AI writing.
The best practice for writing docs with LLMs, in my opinion, which you have done, is to write as much as you can first, feed that into an LLM for context, and then work with the LLM to finalise it. Maybe half the time is spent writing and half is spent going back and forth polishing the doc.
Finally, I think it's important to give the LLM very clear writing guidelines based on your own writing style. I did this by feeding Claude around 20 of my handwritten docs, asking it to analyse my writing style, and then adding the result to its Claude.md. After a few rounds of iteration you can get great results!
realistically, though, i quite like writing. The other article i've posted ( https://www.dardar.co/articles/your-data-agent-is-wrong ) is 100% me, but as a consequence it feels kinda preachy and verbose in places haha
Though I'd love to see an analysis of pre-GPT writing to see whether these tropes were more prevalent than we remember and we simply lacked the acute sensitivity to them.
There's also the possibility that AI started it, but that people who read AI output now organically propagate AI tropes in their own words, because it's part of the writing they consume.
Changed my mental model of using Skills a bit anyway.
> LLMs forget things as their context grows. When a conversation gets long, the context window fills up, and Claude Code starts compacting older messages. To prevent the agent from forgetting the skill’s instructions during a long thread, Claude Code registers the invoked skill in a dedicated session state.
> When the conversation history undergoes compaction, Claude Code references this registry and explicitly re-injects the skill’s instructions: you never lose the skill guardrails to context bloat.
If true, this means that over time a session can grow to contain all or most skills, negating the benefit of progressive disclosure. I'd expect it to be better to let compaction do its thing, with the possibility of the agent re-fetching a skill if needed.
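If the registry behaviour described above is real, the lifecycle would look roughly like this sketch. Every identifier here is hypothetical, invented for illustration; Claude Code's actual internals aren't public in this form:

```typescript
// Hypothetical sketch of the compaction behaviour the quoted article
// describes. All names are invented; this is not Claude Code's real API.

interface Skill {
  name: string;
  instructions: string;
}

interface SessionState {
  messages: string[];
  invokedSkills: Map<string, Skill>; // the "dedicated session state" registry
}

function invokeSkill(state: SessionState, skill: Skill): void {
  state.messages.push(skill.instructions);
  state.invokedSkills.set(skill.name, skill); // remembered across compactions
}

function compact(state: SessionState, summarize: (msgs: string[]) => string): void {
  // Older messages collapse into a summary...
  state.messages = [summarize(state.messages)];
  // ...then every registered skill's instructions are explicitly re-injected.
  for (const skill of state.invokedSkills.values()) {
    state.messages.push(skill.instructions);
  }
}
```

The re-injection loop is what worries me: if nothing ever evicts entries from the registry, every compaction pastes back every skill the session has ever invoked, and the context converges on "all skills loaded", which is exactly what progressive disclosure exists to avoid.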
I don't trust the article though. It looks like someone just pointed a LLM at the codebase and asked it to write an article.
> It looks like someone just pointed a LLM at the codebase and asked it to write an article.
Not entirely true. I pointed an LLM at the codebase to get me to the right files for understanding skills, and to map out the dependencies and lifecycles. Then I spent quite a bit of time reading the code myself and writing about it.
An AI review at the end of the writing (to "sharpen" the language) unfortunately brought in a couple of AI fingerprints (note the "mic drop" comment above).
edit: write -> right (it's 8am)