1. Be uncomfortably explicit in prompts: Claude Code in particular is very sensitive to ambiguity. When I write a prompt, I’ll often (condensed example prompt after this list):
- Specify coding style, performance constraints, and even “avoid X library” if needed.
- Give sample input/output (even hand-written).
- Explicitly state: “Prefer simplicity and readability over cleverness.”
2. Break down problems more than feels necessary: If I give Claude a 5-step plan and ask for code for the whole thing, it often stumbles. But if I ask for one function at a time, or have it generate stub functions first and then fill each one in, the output is much more solid (stub-first sketch after this list).
3. Always get it to generate unit tests (and run them immediately): I now habitually ask: “Write code that does X. Then write at least 3 edge-case unit tests.” Even if the code needs cleanup, the tests usually expose the gaps (example after the list).
4. Plan mode can work, but human-tighten the plan first: I’ve found Claude’s “plan” sometimes overestimates its own reasoning ability. After it makes a plan, I’ll review and adjust before asking for code generation. Shorter, concrete steps help.
5. Use “summarize” and “explain” after code generation: If I get weird or hard-to-read output, I’ll paste it back and ask, “Explain this block, step by step.” That helps catch misunderstandings early.
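To make tip 1 concrete, here’s a condensed version of the kind of prompt I mean (the task, constraints, and sample I/O are purely illustrative):

```
Write a Python function dedupe_emails(emails) that returns the list with
duplicates removed case-insensitively, preserving first-seen order.
Constraints: standard library only (no pandas), O(n) time. Prefer simplicity
and readability over cleverness.
Example: ["A@x.com", "b@x.com", "a@x.com"] -> ["A@x.com", "b@x.com"]
Then write at least 3 edge-case unit tests.
```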
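For tip 2, the stub-first flow looks roughly like this (hypothetical task and names; I first ask for signatures and docstrings only, then have it fill in one body per message):

```python
# Step 1: ask Claude for stubs only. These raise until filled in, so nothing
# silently "works" before it's actually implemented.

def parse_log_line(line: str) -> dict:
    """Split a raw log line into {"timestamp", "level", "message"}."""
    raise NotImplementedError  # filled in by a follow-up prompt

def filter_by_level(records: list[dict], level: str) -> list[dict]:
    """Keep only records whose level matches, case-insensitively."""
    raise NotImplementedError  # filled in by a follow-up prompt

def summarize(records: list[dict]) -> dict:
    """Count records per level, e.g. {"ERROR": 3, "INFO": 12}."""
    raise NotImplementedError  # filled in by a follow-up prompt
```

Each follow-up prompt then targets exactly one stub, which keeps the model from improvising glue code across the whole plan at once.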
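And for tip 3, a minimal sketch of the shape of output I ask for, assuming pytest (the clamp function is made up):

```python
import pytest

def clamp(value: float, low: float, high: float) -> float:
    """Restrict value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Edge cases: value exactly on a bound, a degenerate range, invalid bounds.
def test_clamp_on_boundary():
    assert clamp(10, 0, 10) == 10

def test_clamp_degenerate_range():
    assert clamp(5, 3, 3) == 3

def test_clamp_invalid_bounds():
    with pytest.raises(ValueError):
        clamp(1, 5, 0)
```

Running the tests immediately (not after a review pass) is the point: the boundary and error-path tests are what expose the gaps.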
Re: Parallelization and rate limits: I suspect most people hitting rate limits are power users running multiple agents/tools at once, or scripting API calls. I’m in the same boat as you: the limiting factor is usually review/rework time, not the API.
Last tip: I keep a running doc of prompts that work well and bad habits to avoid. When I start to see spurious/overly complex output, it’s nearly always because I gave unclear requirements or tried to do too much in one message.