> You can confirm that the people who say things are in a position to know.
What is the above commenter's sense of how well one can 'confirm' such a thing?
Looking at an HN account and its comment history provides some signal, but that doesn't satisfy me, given the incentives at play. We're talking about OpenAI, a ~$800B company; reputation matters a lot. The stakes are higher than, say, "does so-and-so really work at e.g. Mozilla and know the details of a messy Rust governance issue?" (to pick a deliberately lower-stakes example).
When we decide what to let into our brains around OpenAI, Anthropic, etc., the bar needs to be higher than e.g. "does an HN account seem consistent with someone who works at OpenAI?" (I'm not sure if this is the above commenter's position, or close to it?)
We need stronger proofs, ideally cryptographic ones with credibility rooted in a legitimate trust model. In 2026 this is certainly possible technically, if a platform made it a priority; the barriers are largely social, cultural, and economic.
HN does not make real-world identity a priority. There might be workarounds involving information posted in one's profile, but practically speaking I'm not seeing how that would work or what level of identity assurance it would provide. Am I missing something?
If I start hand-waving, I might dream up something like the following: maybe someone could stitch together a trusted content time-stamping server with a proof that they control an OpenAI email address, then post that cryptographic evidence in their HN profile. It sounds practically unappealing at best, and I haven't seen it done. Maybe I'm overlooking a good way; I'm all ears. We're going to need better solutions.
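To make the hand-waving slightly more concrete, here is a minimal commit-and-verify sketch of the time-stamping half of that idea. All names and addresses below are placeholders, and the trusted time-stamping service itself is assumed rather than shown; this only illustrates that any edit to a claim breaks the proof once a digest has been attested.

```python
import hashlib

def commit(statement: str) -> str:
    """Return the hex SHA-256 digest of a statement.

    Only this digest would be sent to a trusted time-stamping
    service; the statement itself stays private until disclosed.
    """
    return hashlib.sha256(statement.encode("utf-8")).hexdigest()

def verify(statement: str, timestamped_digest: str) -> bool:
    """Check that a later-disclosed statement matches the digest
    the time-stamping service attested to."""
    return commit(statement) == timestamped_digest

# Hypothetical identity claim (user and email are made up).
claim = "HN user 'throwaway123' controls alice@openai.com, 2026-01-15"
digest = commit(claim)        # this digest goes to the time-stamper
assert verify(claim, digest)              # later, anyone can check
assert not verify(claim + "!", digest)    # any edit breaks the proof
```

This covers only integrity over time; binding the digest to actual control of the email address (e.g. a signed challenge sent to that inbox) and to the HN account would need additional steps, which is exactly where the practical unappeal comes in.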