I still have no idea what they have found. Far too little info to estimate how relevant this is.
Until then, there's not really much to see here.
It's a pity if research findings are now announced via PR teasers.
Alternatively, an HTTPS implementation that can silently update itself without admin permissions, the way browsers can. The web moves too fast for manual security patches.
The client sends a list of acceptable cipher suites to the server. The server then selects one that it also agrees with (this can be based on many factors: strongest first, quickest first, a custom priority, etc.). Because of this, if you send, say, 40 ciphers, the server will only respond with one. To get an accurate list of supported cipher suites, you must offer each one individually to truly test whether the server supports that particular cipher suite or not.
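As a rough sketch of that one-cipher-at-a-time probing, here's what it might look like with Python's `ssl` module. The host/port are placeholders, and this only covers suites your local OpenSSL knows for TLS 1.2 and below (TLS 1.3 suites are configured separately and can't be narrowed with `set_ciphers` the same way), so it's an illustration of the idea, not a complete scanner.

```python
# Sketch: probe a server's supported cipher suites one at a time.
# A ClientHello offering many ciphers only reveals the ONE suite the
# server picks, so we restrict each connection to a single cipher and
# see whether the handshake succeeds.
import socket
import ssl

def supported_ciphers(host, port=443):
    """Return the subset of our local ciphers the server accepts."""
    # Every cipher the local OpenSSL build is willing to offer.
    candidates = [c["name"] for c in ssl.create_default_context().get_ciphers()]
    accepted = []
    for name in candidates:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False       # we're probing, not authenticating
        ctx.verify_mode = ssl.CERT_NONE
        try:
            ctx.set_ciphers(name)        # offer exactly one suite
        except ssl.SSLError:
            continue                     # cipher unknown to local OpenSSL
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    accepted.append(name)  # handshake succeeded
        except (ssl.SSLError, OSError):
            pass                         # server rejected this suite
    return accepted
```

One probe per suite is slow but unambiguous: a rejected handshake means the server genuinely won't negotiate that cipher, rather than merely preferring a different one from your list.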
Not to mention, if a server is sending out its 'patch level' to a client machine on request, that in itself is a vulnerability.
The government does use such a tool - or at least something similar - https://pulse.cio.gov/ is just one example of a public repository of some of those results.
In fact, now that I think about it, disclosing a list of sites with known vulnerabilities could be a legitimately irresponsible thing to do; you'd be picking targets for bad actors unless there was a confidential disclosure process.
(And you shouldn't trust that data anyway.)
That's one of the reasons why, in my thesis (which I defend in... 1 week!), I propose replacing security indicators with risk indicators [1]. I think technical properties of a web page, in conjunction with the context of specific interactions, can be used to determine whether those interactions might be risky. By being informed of the risks they may be taking, users can feel more confident making their own trust decisions.
(Meanwhile on the back-end: as a web server developer, I'm trying to find ways to make it easier to do upgrades when vulnerabilities in protocols are fixed, etc. It's also hard.)