> “Cancer screening was never really designed to increase longevity. Screenings are really designed to decrease premature deaths from cancer.” Explained another way, Dahut said, if a person’s life expectancy at birth was 80, a cancer screening may prevent their premature death at 65, but it wouldn’t necessarily mean they’d live to be 90 instead of the predicted 80.
Personally I think this is just a matter of terminology in public health not necessarily aligning with our intuitive understanding. I presume most people would think that preventing a shortening of lifespan is prolonging your life, but the article makes clear that they are different.
When has someone claimed that treating cancer would increase life expectancy above the average?
This is a dangerous article. People don't need more reasons to avoid cancer screenings.
https://youtu.be/yNzQ_sLGIuA?si=fUttSVFQjsrIqc-p
I guess the main gist is that screening is not completely benign: a positive screen might lead to more interventions for what could, in the end, just be a benign tumor. Then there's the other point of detecting it late in life. Treatment for cancer might not make much sense if you are already 89 years old.
If we say that cancer screening has no effect on life expectancy, that's exactly the same as saying that it doesn't prevent premature deaths.
If cancer screening effectively prevents premature deaths, but the effect on the population's life expectancy is small, that effect is the wrong thing to be focusing on, potentially resulting in a harmful takeaway message.
That's what they've found with PSA--it kills (via treating things that wouldn't actually have killed the patient) as many as it saves.
The main purpose of screening for early detection of disease or genetic conditions, regardless of the illness (cancer, heart attack, stroke, etc.), is not really to prolong longevity, even though that's a wonderful side effect. The main thing is to prevent the complication(s) that may arise from late detection.
Apparently death does not incur extra and massive medical bills; complications, on the other hand, certainly do. If you connect the dots, this suddenly becomes a really good and plausible conspiracy theory.
It's a shame that most health organizations, I'm looking at you American Heart Association (AHA), are really against any form of routine screening [1],[2].
[1] American Heart Association 14-Element Screening (Maron, BJ Circulation 2014):
https://med.stanford.edu/content/dam/sm/ppc/documents/HSuper...
[2] ACC/AHA Release Recommendations For Congenital and Genetic Heart Disease Screenings in Youth:
https://www.acc.org/latest-in-cardiology/articles/2014/09/15...
The paper is indeed presenting strong evidence that regular screenings have minimal value.
[1] - https://jamanetwork.com/journals/jamainternalmedicine/fullar...
He did regular screening, and nothing came up in it. Then suddenly, in one blood report, his counts were low; that's when we did some extensive testing. Even with extensive testing, the diagnosis kept changing: from myelofibrosis to MDS to AML to AEL.
The whole area of testing and the related studies is very confusing. Tests are not at all accurate, and testing is very expensive.
The testing and pharma industries seem like a scam when it comes to such disorders. The bottom line is that nothing helped in my father's case, and we lost him after spending tons of money.
The fact that the medical system remains so inefficient to this day indicates to me that world leaders don't want to solve health problems. It's an industry for them, with lots of money. Everyone seems motivated to keep the population sick and subscribed to their drugs.
Sorry for the rant.
That sounds, uh, significant. So to clarify, does that mean people gain 4 months on average by getting this screening? Those who get the cancer gain years, and those that don't gain 0, so the average is 4 months?
That is, putting aside monetary costs for the moment (which I know is not a good idea in reality, but I want to focus on another issue): you often hear about how false positives cause "added anxiety and unnecessary treatments, which can cause harm". But if a breast cancer screening lets, say, one woman live until 80 instead of dying at 40, how many other people's "added anxiety" would it take before we'd say "OK, that test wasn't worth it"? I think a lot of folks would say the 40 years she gains should be valued far more highly than the time and anxiety "wasted" on false positives.
Yes, life expectancy should count premature deaths
Saying it "doesn't extend life" goes against that, because it absolutely does increase life expectancy
Science communication is bad
Just take as given that the analysis is correct, and screening for rare Disease A on net has no effect on life expectancy. Almost no one actually gets Disease A, but everyone is screened for it, and that has some diffuse cost to life expectancy: Screen enough people enough times and someone will die in a car accident on the way to or from the doctor's office. More likely the screening crowds out other more net-beneficial medical testing or is taken as some false comfort to continue an unhealthy lifestyle.
Modern cancer treatment, especially for the most common types (i.e. the most likely to be screened for) is very good, even if the cancer is caught later due to lack of screening. So even the folks who catch it early due to screening don't incur a benefit in many cases, further pushing down the life-expectancy win on average.
Still: This is like saying home insurance is a bad deal because on average the insurance companies make money. Screening is an insurance policy (not a free one, to be sure) against a catastrophic outcome.
If you're a public health authority in a utilitarian and budget-constrained mindset, sure, don't encourage screenings by the logic and findings of this analysis. But I don't think individuals should consider on-average-LE-negative screenings as something to avoid.
Why? Imagine 1 million people get tested for a one-in-a-million disease. We know that, on average, exactly 1 person in that group is going to have it. But our 99% accurate test will ring up a false positive 1% of the time: about 10,000 positives. So the odds that you really have the disease are the odds that you're the 1 in those 10,000 -- 99.99% against! Well, just run the test again. Oh no! It turns up positive again! Same math: out of the 10,000 people who tested positive the first time, the test will again flag 1%, about 100 people, plus the 1 who really has the disease. So even after two positives, your odds of having it are about 1 in 100, or 99% against.
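The comment's arithmetic is just Bayes' rule applied twice. A quick sketch, using the comment's hypothetical numbers (one-in-a-million prevalence, 1% false-positive rate, plus my added assumption that the test never misses a true case):

```python
prevalence = 1 / 1_000_000   # the comment's one-in-a-million disease
false_positive_rate = 0.01   # 1% of healthy people test positive
sensitivity = 1.0            # assumption: the test never misses a true case

def p_disease_given_positives(prior, n_positives):
    """Posterior probability of disease after n independent positive tests."""
    odds = prior / (1 - prior)
    for _ in range(n_positives):
        # Each positive multiplies the odds by the likelihood ratio.
        odds *= sensitivity / false_positive_rate
    return odds / (1 + odds)

print(p_disease_given_positives(prevalence, 1))  # ~0.0001, i.e. ~1 in 10,000
print(p_disease_given_positives(prevalence, 2))  # ~0.01, i.e. ~1 in 100
```

Each retest with the same error characteristics buys another factor of 100 in the odds, which is why even two positives still leave you about 99% likely to be healthy.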
I'm not especially interested in being tested for rare conditions.
Imagine you test early and often for a condition in country A, much more often than in country B, which waits until some later-stage, easier-to-detect symptoms occur. Now compare survival rates. Much higher in country A! We should clearly also be testing in country B, right?
Depends. If the condition is something that doesn't often actually kill, and typically remains in a non-lethal but detectable state, then all you might have done is treat a lot of extra non-fatal conditions that, in country B, only get detected once they evolve to a more advanced and dangerous state. You may have put many people in country A through an unnecessary, expensive, and frightening treatment regime.
The point is that these things are complex and need thorough analysis.
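The country A/B trap above is classic lead-time bias, and it's easy to simulate. In this deliberately rigged model (all numbers invented), detection timing never changes when anyone dies; screening only moves the detection date earlier, yet the screened country's 5-year survival looks far better:

```python
import random

random.seed(0)

def five_year_survival(mean_lead_years, n=100_000):
    """Fraction of patients still alive 5 years after detection, when the
    time from detection to (unchanged) death is exponentially distributed."""
    survived = 0
    for _ in range(n):
        lead = random.expovariate(1 / mean_lead_years)
        if lead >= 5:
            survived += 1
    return survived / n

print(f"country A (screened early):   {five_year_survival(6.0):.0%}")
print(f"country B (symptom-detected): {five_year_survival(2.0):.0%}")
```

Country A "wins" by roughly a factor of five on 5-year survival even though not a single death has moved, which is exactly why survival-from-diagnosis statistics can't settle the screening question on their own.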
I'd say this depends on the nature of the diffuse negative effects you mention - if it's car accidents on the way to the doctor's office that's one thing, but if it's people dying during surgery they actually didn't need that's another
The risks are not 'car crash from doctors visit' they are 'got cancer from imaging', 'unnecessary surgery and complications', etc - the risks are directly related to the screening.
This is an active topic in medical ethics as well, which you seem unaware of (no offense intended), given that you are framing this in terms of insurance or public health from the perspective of a bureaucrat. The bottom line is that if your screening is more likely to kill or maim you than the thing being screened for, then that screening shouldn't be standard practice; and when it's less clear-cut than that, you still have to make a determination about which screenings make sense to perform at the population level.
That is something that a caring doctor has to think about as part of their duty, there is the very real potential to do much more harm than good by being thoughtless about the interventions you perform.
I’m arguing that this specific analysis has very little to tell individuals about how they should perceive the value of any particular test. A different analysis—looking at the particular negative outcomes of the testing itself or the reaction to false positives—would be a different story entirely.
What do you think the individual patient should take away from this analysis in actionable terms?
I dunno about all types of cancer (and screening methods), but mammograms definitely are not helpful and this has been known for a long time [0]
[0] https://www.vox.com/2015/7/6/8900751/breast-cancer-overdiagn... (2015)
> I often wonder how different public attitudes would be towards treatment and prevention if the US healthcare system wasn’t profit driven.
In Japan, some prefectures or cities will send you leaflets/guides about how to get prescreening for cancer markers and early treatment once you're past a certain age. I seem to remember they also offer incentives, like a first screening for only 700 yen (about 5 bucks), etc.
so, yea, different systems, different incentives i guess
Open: https://simplecast.econtalk.org/episodes/vinay-prasad-on-can...
Apple: https://podcasts.apple.com/us/podcast/econtalk/id135066958?i...
Highly recommend the episode and show. The gist is that cancer is only 4% of deaths, and screening only reduces those by 20% (so it does nothing whatsoever for the other 80% of cancer deaths). But there is a lot more to the episode.
The most important thing you need to do to prevent early death is reduce heart attack and stroke, and that requires better diet and exercise. Really the best drug of all is diet and exercise.
Also, cancer screenings already happen in the US. So it's 750k deaths without screenings, and 80% of that = 600k deaths with screenings, i.e. 150k deaths postponed.
Are they using some unusual definition or something?
I'm having trouble finding numbers, but the comment by Vinay was basically saying that each screening is testing for something that has 1-4% chance of killing you. Not that 4% of all cause mortality is cancer, which is incorrect. In the context of individual screenings per cancer, the numbers roughly make sense based on what I could find, but I am by no means an expert.
The specific video/content they are rehashing is the one by Vinay here - https://www.youtube.com/watch?v=-9hQO7X1bmU
I'd highly recommend listening to the Econtalk conversation as there is a lot of nuance behind why screenings (at least as they are done today) could potentially be a net negative at the individual level and don't seem to have improved all-cause mortality in a significant way (according to Vinay).
https://jamanetwork.com/journals/jamainternalmedicine/fullar...
and I do not like it. It suffers from what I consider an extremely common problem in statistics: if you define the question poorly, then your output is garbage no matter how fancy or careful your analysis is.
We can start with the beginning of the abstract:
> Importance Cancer screening tests are promoted to save life by increasing longevity, but it is unknown whether people will live longer with commonly used cancer screening tests.
I have never heard of a doctor suggesting a cancer screening by saying "this might save your life by increasing your longevity." What does that even mean?
So let's try to figure it out. The paper uses the terms "lifetime" and "longevity" somewhat interchangeably, and it does not define either term. The best I can figure out is that, for an individual deceased person, they have a certain lifetime in days from when they were screened to when they died. (Or a certain lifetime in days from birth to death, and I'm not sure this distinction matters.)
Great, but this is only for one patient. What about for a sample of patients or for a population in general or for the probability distribution of lifetimes of a given patient conditioned on whether they do or do not get screened? The article does not say, and a single "lifetime" number is not a probability distribution. Is it an expected value or a mean? A median? A mode? No comment.
One of the headline conclusions is:
> Based on the observed relative risks for all-cause mortality and the reported follow-up time in the trials, the only screening test that significantly increased longevity was sigmoidoscopy, by 110 days (95% CI, 0-274 days) (Table 2, Figure 2)
Figure 2 is useless. Table 2 is somewhat informative, and it has a column for relative risk of all-cause mortality and a column for lifetime gained and its 95% CI. But WAIT A MOMENT! The only way you can know the lifetime of an individual patient is if they're dead. If they're dead, their risk of all-cause mortality by the time they died is 100%. That's not 100% plus or minus something with some relative risk thrown in -- they are dead enough to have a date of death so that someone could compute their lifetime! Or maybe "lifetime" means something else, and the authors didn't bother to figure it out and say what they meant. So what exactly is this paper even analyzing?
So I suspect this is a meta-analysis of studies, of which some may or may not have been high enough quality to define their terms, and probably several of which used "lifetime" to mean some estimated property of a distribution, and this meta-analysis completely failed to figure out what the included studies were talking about.
So I rate this meta-analysis as almost entirely useless, on account of it failing to actually analyze anything that makes sense.
So I don't think any conclusions can be drawn. Although... ACS puts the lifetime risk of colorectal cancer at 1:25 or so. So one might naively translate a 110-day lifetime extension for everyone to a 110 day · 25 = 2750 day = ~7.5 year expected lifetime extension for people who actually get colorectal cancer. Sign me up -- 7.5 more expected years of life and presumably more than that of quality life years in the event I contract a not-particularly-rare disease sounds like a pretty good deal. (Colorectal cancer screening is not all that unpleasant, and I apparently only have a 96% chance of the screening being unnecessary.)
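As a sanity check on that back-of-envelope (both inputs come from the comment itself: the 110-day average gain and the ACS ~1-in-25 lifetime-risk figure):

```python
avg_gain_days = 110      # average lifetime gain across everyone screened
lifetime_risk = 1 / 25   # ACS lifetime risk of colorectal cancer (per the comment)

# Naive assumption: the entire average gain accrues to those who get the cancer.
gain_if_affected_years = avg_gain_days / lifetime_risk / 365
print(round(gain_if_affected_years, 1))  # ~7.5 years
```

This is the crude "divide the population-average gain by the incidence" translation; it ignores that some of the gain may accrue to people whose polyps are removed before they ever become cancer.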
Also not as harmless as you think. You're suggesting performing it on 24 other patients who won't ever have colon cancer -- and that might be you, right? With a real risk (albeit small, but you need to multiply it by 25) of a serious complication which may eventually result in a premature death.
This statement makes it clear what's going on. In ML terms: class imbalance. 99.9% of people won't ever get colon cancer, and therefore won't ever benefit from a colonoscopy. It won't make any statistical difference in overall population survival. But for the 0.1%? It will save their lives.
But they may have complications from the colonoscopy; that's the idea. No test is completely harmless, not even blood work. You save some lives but may lose others; that's the point of the paper. And of course you waste resources that could be used to find a cure.
The lifetime risk of being diagnosed with colorectal cancer is ~4%. (With the odds trending higher for the younger generation.) The risk of _death_ from this cancer is ~1%.
So, while I see the "statistically" part, asking everyone to get zero screenings until you start coughing up blood (or whatever happens when the cancer starts showing very obvious signs)... it just seems weird on the individual level. Nobody knows (nor can they know) if you're the person who would be cured with treatment, or if you're the person whose outcome wouldn't have changed a bit with treatment (for better or for worse). That question matters individually, and while "statistically you're slightly more likely to be in the second group" according to these studies, that doesn't make me feel great about just declining all screenings.
Or your screening might have caused that tumor? X-rays are not harmless, and they can cause cancer. Other screenings have their respective complications. Aren't you disturbed by the thought that maybe you wouldn't have any cancer if not for the screening? Or that you would have a healthy colon if not for the intern who perforated it during a colonoscopy?
In colonoscopy, the entire colon is examined by using a long flexible tube. In sigmoidoscopy, only the lower portion of the colon is checked, again by using a flexible tube.
The things that come to mind is that sigmoidoscopy may be better tolerated by patients or have fewer complications.
The cancers involved are also dissimilar: those with a good chance of presenting early with symptoms make screening less useful than those that don't present early.
If you assume all the costs are fungible, then the analysis is straightforward (the costs are not actually fungible -- some things cost money, other things cost patient time, or use resources that are in finite supply -- so it turns into a linear programming problem, and we've been able to solve those since before computers existed).
With fungible costs, for each of the available health care services, you take the expected increase in life expectancy, divide it by the cost, and prioritize the things that have the highest benefit/cost ratio.
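Under the fungible-cost assumption, that prioritization is a simple greedy ranking (the real problem, with non-fungible resources, becomes the linear program mentioned above). All interventions and numbers below are invented for illustration:

```python
# (name, expected life-expectancy gain in days, cost in dollars) -- made up.
interventions = [
    ("smoking cessation program", 120, 200),
    ("sigmoidoscopy screening",   110, 400),
    ("exotic late-stage chemo",    30, 50_000),
]

def prioritize(items, budget):
    """Fund interventions in order of benefit/cost ratio until the budget runs out."""
    funded = []
    for name, gain_days, cost in sorted(items, key=lambda x: x[1] / x[2], reverse=True):
        if cost <= budget:
            funded.append(name)
            budget -= cost
    return funded

print(prioritize(interventions, 700))
# ['smoking cessation program', 'sigmoidoscopy screening']
```

With a $700 budget the cheap, high-ratio items get funded and the expensive, low-ratio chemo does not, which is the whole logic of the ratio heuristic.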
The article doesn't talk at all about the cost of the screens, which is fairly low (vs. spending time exercising, or a year of exotic chemo). It also doesn't look at patient quality of life.
That is shockingly bad. An extra three months of life is not nothing, but not exactly a clear win to push people towards the added stress and effort to be screened.
You're conflating this with individual patient decisions.
The real cost of a preventive health scan goes well beyond the price tag - https://news.ycombinator.com/item?id=37266189 - Aug 2023 (254 comments)
- Early testing means more visits to the hospitals/clinics. Those are famously dangerous places to get sick.
- False positives mean unneeded treatment. That can shorten your life.
- Finding things earlier also means earlier treatment. That can kill you. Every trip to a hospital means a non-zero risk of catching MRSA and other nasties.
- Some things would be handled by the immune system anyway. The treatment might provide no benefit over never having known.
I also find it easy to believe that among all the oncologists and hospitals, some of them might be at best neutral, or even net-negative in mortality and morbidity. This is about the whole population, not just leading units at famous hospitals. There are doctors out there with the ethics of a bent mechanic.
The only explanation is that these experts disagree with the study, because if it was properly conducted and its findings are true, then its conclusions do in fact add up to skipping colonoscopies and mammograms.
The study is saying that the diagnostic procedures have no effect on outcome. You should not waste time on procedures that don't change outcome.
If that's your bar, then most preventative screening is probably not worthwhile.
It's pretty dishonest to obscure the effect of screening by counting unscreened individuals.
It's like claiming that seat belts don't work by looking only at fatality figures that don't inform to what extent seatbelts were worn.
A diagnostic approach to healthcare can be easily bureaucratized (come collect your stamp after your colonoscopy!). A touchy-feely "million factors" approach cannot.
Quote:
> "Two recent data reviews deserve further mention. The United Kingdom commissioned an independent review after dissenting voices swelled, for the purpose of better informing shared decision making and educational materials about the harms and benefits of screening.6 Unfortunately the review concluded that screening mammography trials were inadequately powered to detect an impact on all-cause mortality, and therefore used breast cancer mortality as a primary outcome. They concluded a 20% reduction and used this in their discussion of harms and benefits. As noted above, cause-specific mortality is both scientifically unstable (conclusion reversal is common when all-cause mortality is considered)7 and disease-centered rather than patient centered (patients would prefer to avoid death altogether). Thus, either screening mammography does not save lives or else we have inadequate data to say whether it does or does not. In neither case can a benefit be scientifically claimed."
> "They concluded a 20% reduction and used this in their discussion of harms and benefits."
We want to do things that will help people. On an individual level, people have the right to make decisions for themselves. At the population level, we should do things that can be proven to be helpful. This leads to decisions that feel heartless - denying people a vaccine during a deadly pandemic while it is being tested; delaying or denying introduction of a new medicine that shows only marginal improvements over an old one.
So their independent review showed benefit by changing the criteria for comparison. We want to help people, and medical interventions feel like helping. Unfortunately, sometimes they can be the equivalent of The Politician's Fallacy: "There was a problem. I did something about the problem. Therefore the problem is fixed"
> "As noted above, cause-specific mortality is both scientifically unstable (conclusion reversal is common when all-cause mortality is considered)7 and disease-centered rather than patient centered (patients would prefer to avoid death altogether)."
People die of /something/. That thing becomes a target to fix; and we deploy resources to fix it. We become emotionally connected to some interventions. (Speaking personally, I'm participating in a bike ride in a few weeks to raise money for breast cancer screenings.) Some of those resources are beneficial, some are not, some may be harmful.
> "Thus, either screening mammography does not save lives or else we have inadequate data to say whether it does or does not. In neither case can a benefit be scientifically claimed."
If anyone's interested, the "Wilson criteria" are the basis of decisions about screening programmes, and many running programmes fail them.
I found this comment useful: https://news.ycombinator.com/item?id=37297963
Several comments noted that a 4 month improvement in life expectancy (for sigmoidoscopy) over the whole population is actually pretty good for a low-incidence cancer. That's several years for the people who actually get it.
But the comments seem naive about the downsides of broad screening: over-treatment, iatrogenic disease, false positives, opportunity cost, etc.
Positive screening result usually leads to treatment of one kind or another. Without screening, treatment would start later, if at all. Is that a corollary of this study?
There is no study powered enough to draw conclusions on all-cause mortality, which may not be the best measure anyway.
"This is an important, though uncommonly discussed, issue in the translation of evidence from cancer screening trials.1 It is known that overdiagnosis (treatment of cancers that would have been no threat), and high false positive rates (misdiagnosis) lead to medical harms and unnecessary surgeries, chemotherapy, and radiation...
...margin of benefit suggested by the analysis above it seems likely that if there is a benefit to screening mammography it is balanced out by mortal harms from overdiagnosis and false-positives"
If you are unfortunately talking about a personal experience, sorry, but you can't confuse n=1 stories with large research studies.
Screening mammography unequivocally improves cancer-specific mortality. Making the leap to overall mortality is hard, and even a meta analysis is likely too underpowered given the very low overall mortality to begin with. Recall that the smaller the absolute difference is the larger the study will have to be to detect the difference.
For breast cancer we would probably be talking about something like 10 million patients to be adequately powered to draw any conclusion. The older trials also aren't useful/can't be used because the diagnosis and treatment of breast cancer is dramatically different than it was 10 years ago when BI-RADS was in its nascent stage. Accordingly it's not even possible to conduct such a study as it would be unethical to randomize millions of patients to no screening when we know it has proven benefits.
This also raises the question of which outcome measure matters more? All-cause mortality is a good one but it has both pros and cons. Pros being it captures hidden and misattributed deaths and is the least susceptible to bias. Cons include that it underestimates the impact of diseases that aren't high causes of death (i.e. if the patient population is more likely to die of something else the all-cause mortality won't change).
All of this leads to what are we trying to solve with breast cancer screening? It's unequivocal that a screen detected breast cancer is less advanced (i.e. no systemic therapy required) and is associated with cancer-related mortality benefit. The harms of overdiagnosis have also significantly lessened with modern radiology/histology classifications, biopsy techniques and treatment algorithms. Is cancer-specific mortality good enough? I would argue yes given the significant morbidity with systemic therapy and metastatic disease.
To summarize:
1. Screening mammography has been proven to reduce cancer specific mortality in many studies.
2. The only accurate statement about all-cause mortality is "we don't know" rather than yes/no. None of the studies are powered or controlled enough to draw any conclusions.
3. All-cause mortality may not be the best outcome measure to determine whether an intervention is "saving lives" and certainly is not the only measure to consider when deciding on a screening program.
So let's assume that an annual X-ray causes another cancer in some women who would never have developed breast cancer (i.e., 87% of them). You are saying "we don't know," but the authors of that paper are trying to answer exactly that question. We may have saved lives in the 13% group (that would be < 2.5% of those dying from breast cancer), but we may have lost some lives in the 87% group. According to the paper, the net outcome is around 0.
The last link explains the NCCN rationale and decision making process.
http://www.aapec.org/images/b69a380d-519a-415b-8e8c-8e7f2b02...
https://www.nejm.org/doi/full/10.1056/nejmoa1000727
https://academic.oup.com/jnci/article/106/11/dju261/1496367?...
https://www.bmj.com/content/bmj/352/bmj.h6080.full.pdf
https://jnccn.org/view/journals/jnccn/16/11/article-p1398.xm...
After reading: "this study, like ~every study, detected no effect on all-cause mortality because that needs a crazy big sample size"
A three-year study where one group magically had zero car crashes would need a sample size of 5,000,000 to detect a difference in all-cause mortality. It's really hard. (80% chance of p < 0.05)
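A normal-approximation power calculation shows where a number like 5,000,000 comes from. The rates here are my assumptions, not the comment's (roughly 0.9%/year all-cause mortality and ~13 crash deaths per 100,000 per year): the tiny crash-death difference has to be detected against the much larger all-cause background.

```python
import math

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Sample size per arm for a two-proportion test (two-sided alpha=0.05, 80% power)."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# 3-year trial: the control arm dies at the background rate, the treatment
# arm magically avoids all crash deaths.
p_control = 3 * 0.009              # assumed all-cause mortality over 3 years
p_treated = p_control - 3 * 13e-5  # minus 3 years of crash deaths
print(n_per_arm(p_control, p_treated))  # ~2.7 million per arm, ~5.4M total
```

Under these assumed rates the trial needs on the order of 2.7 million people per arm, about 5.4 million total, which is the same ballpark as the comment's figure.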
(The headline is probably technically true, but I think it is intentionally misleading and disingenuous.)