I don't understand why you're stuck on the word "lie."
These are both true statements:
- We've just developed our new top model for agentic coding
- We've just developed a model capable of finding cybersecurity vulnerabilities at a scale never before seen
The problem is that the first statement is something everyone says; OpenAI said something similar for 5.5 just this morning. Once you loudly frame your release in the latter terms, you're not lying... but you're being very intentional about grabbing headlines.
Every top release from a frontier lab now enables the same thing. That's why both OpenAI and Anthropic have already had response-level filters on cybersecurity for months.
Technically, every time either lab has released a top model over the last several months, they've been "enabling automated cybersecurity penetration at a scale never before seen." It was Anthropic that decided to quadruple down on the language and create a ton of buzz.
But OpenAI showed today that the existing cybersecurity mitigations already address the misuse concern. Anthropic has the same (or even stricter) detection for wide-scale automated attacks, and could have relied on it to ship Mythos quietly, if not for the marketing points.