In ‘Modern mammography screening and breast cancer mortality: population study’ (BMJ, 17 June 2014) Harald Weedon-Fekjær and colleagues conclude:
‘Invitation to modern mammography screening may reduce deaths from breast cancer by about 28%.’
But does the public understand the difference between a population study and a randomised controlled trial? And how many people realise that a 28% reduction in disease-specific mortality is not as impressive as it sounds? Why not? Because that figure is a relative risk reduction, and the difference between relative and absolute risk matters.
Commenting on this study, Aaron Carroll explains this neatly on his blog: http://theincidentaleconomist.com/wordpress/relative-and-absolute-risk-mammogram-edition/
“That sounds amazing. Who wouldn’t want this? But that’s a relative risk reduction. It doesn’t tell you how much you reduced your absolute risk.
To the authors’ credit, they reported an NNT (Number Needed to Treat) for invitations to the program. You’d need to invite 368 women to participate in order for one breast cancer death to be prevented over a lifetime. That means the absolute risk reduction is (100/368) = 0.27%.
Sticking to the NNT, that means that of the 368 women you invite, 367 will see no benefit. And they could see harms! Overdiagnosis, extra procedures, expense, potential sequelae, etc. And it’s possible that the one woman saved might die of other causes. We’ve covered that before.
But what’s more concerning is that I’m sure the “28% reduction” will be in many media stories that cover this paper. Very few will mention that the results show that the screening program will reduce absolute risk by 0.27%. The former number will blow past anyone’s concerns about harms. The latter would likely make them think hard about whether a program is worth it. Especially if we discussed the NNT along with NNH.”
(Number Needed to Harm – http://en.wikipedia.org/wiki/Number_needed_to_harm).
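The arithmetic behind Carroll’s point can be checked directly. A minimal sketch, using only the figures quoted above (the 28% relative risk reduction and the NNT of 368 from the paper; the function and variable names are mine):

```python
def absolute_risk_reduction(nnt):
    """Absolute risk reduction (ARR) implied by a number needed to treat."""
    return 1.0 / nnt

nnt = 368   # invitations per breast cancer death prevented (from the paper)
rrr = 0.28  # the headline relative risk reduction

arr = absolute_risk_reduction(nnt)
print(f"Absolute risk reduction: {arr:.2%}")        # about 0.27%
print(f"No benefit: {nnt - 1} of {nnt} invited")    # 367 of 368

# A relative risk reduction says nothing on its own; it only becomes an
# absolute figure when multiplied by the baseline risk. Back-of-envelope,
# the baseline risk implied by these two numbers is:
implied_baseline = arr / rrr
print(f"Implied baseline risk: {implied_baseline:.2%}")  # roughly 1%
```

The same 28% relative reduction applied to a tiny baseline risk yields a tiny absolute benefit, which is the whole point of the quote.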
With regard to the harms – many people confuse ‘overdiagnosis’ with ‘false positive’ and think the only harm will be a false diagnosis of cancer for a short while, entailing unnecessary biopsies. ‘Overdiagnosis’ is quite different: the detection of a real abnormality that might progress to an aggressive cancer, or might never cause a problem in a person’s lifetime (any more than a freckle would), yet is treated as full-blown cancer, often with mastectomy.
A colleague, Miriam Pryke, sent me this comment:
‘All screenees pay a price. They lose something, however trivial. Turning up is a loss. A dose of radiation, being diagnosed, being treated are losses.
They pay for a 1 in n chance of net gain. Only those who get longer life with quality, if any, get net gain. Everyone else sustains a net loss. If the odds are good enough, even a high price is worth it to some. Nevertheless, a price is a price; they do not get something for nothing.
A reduction in chance of dying of breast cancer is not a chance of net gain.
That study, like others, did not show that anybody gets net gain. It showed a chance of not dying of breast cancer but not a chance of postponing death.
We do not know whether anybody actually gets to avert death – to die later, with the time gained lived at a quality that makes it worth it.
This study, like others, does not show that lives are extended. Everyone takes the gamble hoping for net gain, but this study doesn’t show that they have such a chance, whatever the number of breast cancer deaths avoided, because it does not show that any deaths overall were averted.
The risks of harms were not given by this study. Like the chance of net benefit, they are not known. Everybody who gets screened pays a price. 100% of screenees pay by turning up, by getting a dose of radiation, etc. After that the price gets higher and the numbers paying get smaller.
Worth it for some, depending on the chance of net benefit against the chances and nature of each of the various harms. Nobody knows those numbers with a suitable degree of confidence, and this study did not give them.
“Suitable degree of confidence” = value judgement. A value judgement has to be made about whether to offer screening, and whether to accept.
There may be those who would take a leap of faith. I think that odd.
Re probability: each person asks, “how likely is it that x will happen to me?” The odds are of the form 1 in n for each of the things that happen to people who get screened. In this case, each person has a 1 in 368 chance of not dying of breast cancer, and a 367 in 368 chance of paying – with no benefit – anything from getting a dose of radiation to dying younger. The chance of net gain by extending life was not given.
The chance of paying a high price for nothing was not given but, if you take the Marmot figures (NHS Independent Review), it is three times greater than the chance of not dying of breast cancer. Only a proportion of those who don’t die of breast cancer, maybe 0%, live longer, but we don’t know the number.
If any get net gain nobody will know it is them and each has to know it is improbable that it is them: they have all paid a price, some a high price, and for each it is probably for nothing.
This study is just one of many estimates, which differ widely, based on assumptions that are contested amongst authorities, and it should not be cherry-picked but seen in the perspective of all the evidence there is about screening, including the evidence about the nature of the harms and the numbers affected.’
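The per-person odds Pryke describes can be laid out the same way. A sketch, assuming the study’s NNT of 368 and the roughly three-to-one ratio of overdiagnoses to breast cancer deaths averted taken from the Marmot figures cited above (the variable names are mine):

```python
nnt = 368                     # invitations per breast cancer death prevented
overdx_per_death_averted = 3  # approximate Marmot-review ratio, as cited above

p_benefit = 1 / nnt               # chance a given invitee avoids a breast cancer death
p_no_benefit = 1 - p_benefit      # chance of paying some price for no benefit
p_overdx = overdx_per_death_averted / nnt  # chance of overdiagnosis and needless treatment

print(f"Chance of avoiding a breast cancer death: {p_benefit:.2%}")    # ~0.27%
print(f"Chance of paying a price for no benefit:  {p_no_benefit:.2%}") # ~99.73%
print(f"Chance of overdiagnosis:                  {p_overdx:.2%}")     # ~0.82%
```

On these assumptions the chance of being overdiagnosed is about three times the chance of benefiting, while everyone pays at least the entry price of attending and the radiation dose – which is exactly the asymmetry the comment is pointing at.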