I was formalizing an unrelated cognitive phenomenon — the involuntary mini-narrative your mind constructs when you briefly make eye contact with a stranger. When I worked out the trigger conditions, the model required two independent variables to fire, not one. After publishing the formal version (Zenodo, https://doi.org/10.5281/zenodo.19446426), I noticed the same two-variable structure maps cleanly onto how an LLM responds to a prompt:
T1 (Template Specificity) — how precisely the prompt activates deep patterns. Domain vocabulary, structural constraints, persona, intellectual tradition. Vague inputs activate nothing.
T2 (Generative Headroom) — how much of the output space the prompt leaves open. If you fully constrain the output (translate X to Y), the model becomes a lookup engine, not a generator.
Quality is the interaction T1 × T2, not the sum. That gives four quadrants: high T1 + high T2 is Optimal, high T1 + low T2 is Overconstrained, low T1 + high T2 is Underspecified, and low/low is the Dead zone.
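To make the multiplicative scoring concrete, here's a toy sketch in TypeScript. It is not the shipped scorer; the 0.5 cutoff and the assumption that both scores are normalized to [0, 1] are mine:

    // Toy illustration of the T1 × T2 interaction, not the
    // @dualcondition/scorer internals. Assumes both scores are in [0, 1];
    // the 0.5 cutoff is arbitrary.
    type Quadrant = "Optimal" | "Overconstrained" | "Underspecified" | "Dead zone";

    function quadrant(t1: number, t2: number, cutoff = 0.5): Quadrant {
      if (t1 >= cutoff && t2 >= cutoff) return "Optimal";
      if (t1 >= cutoff) return "Overconstrained"; // specific, but no room left to generate
      if (t2 >= cutoff) return "Underspecified";  // open, but activates nothing
      return "Dead zone";
    }

    // Multiplicative, so one near-zero axis tanks the whole score:
    const quality = (t1: number, t2: number) => t1 * t2;

    console.log(quadrant(0.9, 0.2));               // "Overconstrained"
    console.log(quality(0.9, 0.2).toFixed(2));     // "0.18", well below the 0.56 a balanced 0.7/0.8 prompt gets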
The analyzer at the link runs the scoring entirely client-side — zero API calls, zero data retention. The whole engine is on npm as @dualcondition/scorer (MIT, 60 KB, no deps). It uses four signal channels (lexical / structural / openness / metacognitive) instead of an LLM-as-judge, which is why it's free and instant.
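For flavor, here is what a heuristic signal channel can look like. This is my own toy, not the package's actual lexical channel, and the marker list is made up:

    // Toy lexical-specificity heuristic — my illustration of a non-LLM
    // signal channel, not code from @dualcondition/scorer.
    const DOMAIN_MARKERS = /\b(schema|latency|throughput|tokenizer|gradient|p99)\b/gi;

    function lexicalChannel(prompt: string): number {
      const hits = (prompt.match(DOMAIN_MARKERS) ?? []).length;
      // Three or more domain terms saturates this toy channel at 1.
      return Math.min(1, hits / 3);
    }

    console.log(lexicalChannel("write something good"));                    // 0
    console.log(lexicalChannel("cut p99 latency without schema changes"));  // 1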
Things I'd genuinely like feedback on:
1. Calibration. Try the five example prompts and tell me if any score feels wrong. The whole framework lives or dies on whether engineers trust the score on prompts they already have intuitions about.
2. The "constraint adherence" anchor-decay metric (a separate function in the scorer). It's an alternative to embedding-based topic-drift detection for long-running agents; I sketch the idea after this list. I think it's the better metric for what people actually want to monitor, but I'd love to be told I'm wrong.
3. Whether "Template Specificity" and "Generative Headroom" feel like the right names. The framework is real; the terminology is mutable.
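On point 2, the promised sketch. This is my reconstruction of what an anchor-decay metric does (explicit constraints from the original prompt checked against each turn, with a recency weight), not the actual function shipped in the scorer; halfLife and the regex anchors are assumptions:

    // Sketch of an anchor-decay style adherence metric — a reconstruction of
    // the idea, not the function shipped in @dualcondition/scorer.
    // Each "anchor" is an explicit constraint from the original prompt; we
    // check whether each turn still honors it, weighting recent turns more.
    function anchorDecay(
      anchors: RegExp[],   // constraints extracted from the original prompt
      turns: string[],     // agent outputs, oldest first
      halfLife = 5         // turns until a turn's weight halves (assumed)
    ): number {
      const lambda = Math.LN2 / halfLife;
      let weighted = 0, totalWeight = 0;
      turns.forEach((output, i) => {
        const age = turns.length - 1 - i;    // 0 = most recent turn
        const w = Math.exp(-lambda * age);   // exponential recency weight
        const held = anchors.filter(a => a.test(output)).length / anchors.length;
        weighted += w * held;
        totalWeight += w;
      });
      return totalWeight ? weighted / totalWeight : 1; // 1 = full adherence
    }

    // Example: the agent gradually stops honoring the original constraints.
    const score = anchorDecay(
      [/JSON/i, /no external calls/i],
      ["Returning JSON, no external calls.", "Here's the JSON.", "Done!"]
    );
    console.log(score.toFixed(2)); // ~0.45 — recent turns drag the score down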
The full essay translating the paper for engineers is here: https://dualcondition.substack.com/p/your-prompts-have-two-d...
Happy to answer anything about the formal model, the calibration, or the weird road from cognitive science to LLM tooling.