Sensitivity is the fraction of affected people the test actually finds. Here: 90%, so 10% are missed.
Specificity is the fraction of unaffected people correctly identified as unaffected. Here: 95%, so 5% are wrongly flagged ("false positives").
In Europe, there are 60 cases of lung cancer per 100 000 people.
That makes 54 correctly detected per 100 000, missing 6 cases. It also means about 5,000 people incorrectly suspected of lung cancer (5% of the ~99,940 unaffected).
Update: using the accuracy from the article itself (99%), we would still get a combined total of about 1,000 false negatives (affected but not detected) and false positives (unaffected but suspected). Incidence is still 60/100 000.
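A quick sketch to check that arithmetic (Python; the `screen` helper and its name are just for illustration, and reading the article's single "accuracy" figure as both sensitivity and specificity is an assumption):

```python
def screen(population, incidence_per_100k, sensitivity, specificity):
    # Expected counts for one round of screening.
    affected = population * incidence_per_100k / 100_000
    unaffected = population - affected
    true_pos = affected * sensitivity            # correctly detected
    false_neg = affected - true_pos              # missed cases
    false_pos = unaffected * (1 - specificity)   # healthy but flagged
    return true_pos, false_neg, false_pos

# 90% sensitivity / 95% specificity, 60 cases per 100,000:
print(screen(100_000, 60, 0.90, 0.95))   # ~ (54, 6, 4997)

# The article's 99% "accuracy" applied as both sensitivity and specificity:
print(screen(100_000, 60, 0.99, 0.99))   # ~ (59.4, 0.6, 999.4)
```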
Either that, or driving up healthcare costs significantly, as those 5,000 people are going to need an MRI, a CAT scan, or something else to rule out cancer.
An MRI without contrast has no real health impact. An MRI with contrast has relatively little. A biopsy would only be done if the MRI with contrast lit up areas of concern, and by the point a PET is ordered, you have narrowed the false-positive pool substantially and probably want the scan no matter what.
This kind of analysis has been done, most memorably for breast cancer screening. The conclusion I recall from a few years ago was that it did more harm than good (opportunity cost of unnecessary spending, pain and complications of biopsy, unnecessary mastectomies, psychological damage, etc.). The follow-up tests and analysis also have an error rate, and no treatment is zero cost.
It might only be 1 or 2 people out of 5,000, but those 5,000 were perfectly healthy and never had cancer to start with.
To amplify your point,
99% sensitivity over 100,000 people with an incidence of 60 means 1 false negative, assuming you can't detect 0.4 of a person and floor to an integer.
99% specificity over the same pool means 999 false positives, same assumption.
You mentioned that re: the 1,000 total, but here's the kicker:
Total population: 59 true positives + 999 false positives.
So if I test positive, absent any other knowledge, that means a 59/(999 + 59) chance of being a true positive, or around 6%.
Probably enough to warrant follow-up testing, but an interesting demo of why the test's statistical accuracy is meaningless unless you also know the actual incidence. 99% becomes not many percent right quick.
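To make the Bayes step explicit, the ~6% above is just the positive predictive value (a minimal sketch, assuming the same 99%/99% figures and 60/100,000 incidence; `ppv` is a hypothetical helper):

```python
def ppv(incidence, sensitivity, specificity):
    # P(actually affected | positive test), by Bayes' theorem.
    true_pos = incidence * sensitivity
    false_pos = (1 - incidence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

print(ppv(60 / 100_000, 0.99, 0.99))   # ~0.056, i.e. about a 6% chance
```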
Some cancers, like pancreatic, are a death sentence because they're usually caught too late.
"Toshiba says its device tests for 13 cancer types with 99% accuracy from a single drop of blood"
"The test will be used to detect gastric, esophageal, lung, liver, biliary tract, pancreatic, bowel, ovarian, prostate, bladder and breast cancers as well as sarcoma and glioma."
Those particular cancer types top the list of cancer deaths by type, by the way. See https://ourworldindata.org/grapher/total-cancer-deaths-by-ty...
The idea is that you have something cheap and easy up front before or in parallel to further downstream diagnostic procedures.
You'd still be able to identify a pool of people that, as a group, will develop this cancer at a rate 20x above the normal population. That still seems like a big deal. For instance, if I discovered I had a genetic factor that made me 20x more likely to get a particular cancer, I think I would want to be tested for it as a precaution. This seems like the same thing.
(Now if the only further test you can do is itself super invasive or risky, that obviously has to be weighed into the decision too).
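For what it's worth, the 20x figure is roughly what falls out of the 90%/95% numbers from upthread (a sketch under that assumption):

```python
incidence = 60 / 100_000
sensitivity, specificity = 0.90, 0.95

true_pos = incidence * sensitivity
false_pos = (1 - incidence) * (1 - specificity)
post_test = true_pos / (true_pos + false_pos)   # P(cancer | positive), ~1.1%

print(post_test / incidence)   # ~17.9: roughly the 20x enrichment
```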
If all it takes is a drop of blood (as opposed to more invasive tests) to know with ~90% accuracy whether I have cancer (and when the machine says I do, a more accurate follow-up test is done), then it's far more likely that more people will get diagnosed sooner.
If run twice, requiring both tests to come back positive and assuming independent errors, we'd have 49 correctly detected, 11 missed, and 250 incorrectly suspected.
Run thrice, keeping the 2 most similar results (a 2-of-3 majority vote): most people correctly identified? A sketch of both strategies is below.
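Assuming the errors of repeated runs really are independent (a big if, per the question below), both repeat strategies work out as a binomial; `repeat_screen` is a hypothetical helper:

```python
from math import comb

def repeat_screen(runs, needed, sensitivity, specificity,
                  population=100_000, cases=60):
    # Expected counts when flagging someone only if at least `needed`
    # of `runs` tests are positive, with errors independent across runs.
    def p_at_least(p):
        return sum(comb(runs, k) * p**k * (1 - p)**(runs - k)
                   for k in range(needed, runs + 1))
    detected = cases * p_at_least(sensitivity)
    false_pos = (population - cases) * p_at_least(1 - specificity)
    return detected, cases - detected, false_pos

print(repeat_screen(2, 2, 0.90, 0.95))   # ~ (48.6, 11.4, 249.9): run twice
print(repeat_screen(3, 2, 0.90, 0.95))   # ~ (58.3, 1.7, 724.6): 2-of-3 vote
```

Under those assumptions the 2-of-3 vote catches more cases than a single run (about 58 of 60 versus 54) while cutting false positives from roughly 5,000 to about 724.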
Say you run the test every day/week/month: can you just pool the results, or do the failure cases for the test depend on the individual?