The problem: there is robust research, and there is other ‘research’ that might as well compare apples with oranges and oranges with eggs – then conclude that apples are like eggs. So much comes down to correct interpretation – critical analysis. The average person can be easily misled. A good example is a recent letter in the Lancet, ‘Effect of population-based screening on breast cancer mortality’, by Bock et al.
It states that 27% of women with screen-detected cancers have a mastectomy, compared with 52% among those with clinically detected cancers. True, but misleading. There is a need to compare like with like, not apples with oranges.
I understand that screen-detected breast cancers are different from those detected because of symptoms, for several reasons:
1 The comparison is made within a population where all are offered screening. This means there is really no “control group”; the compliant are simply compared with the non-compliant. Attenders are preferentially of high social class, affluent, well educated, diet-aware non-smokers with long-lived parents, whereas non-attenders are the opposite. So screening “selects” those who would present promptly with symptoms of small cancers in the absence of screening, whereas those who would wait longer and present with large cancers do not turn up anyway.
2 Screening will preferentially detect slow-growing lesions because there is more time in which to detect them (length bias). So, screening “selects” small cancers, with the aggressive ones growing fast enough to “slip through the screen” and appear between rounds as large cancers.
3 Many of the small, screen-detected cancers are overdiagnosed: modern screening detects small ‘changes’ that would never progress to cause a problem in a lifetime, yet they are treated. This inflates the number of breast-conserving surgical interventions in the screened group, which “artificially” reduces the percentage of mastectomies in this group.
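Point 2 above – length bias – can be made concrete with a toy simulation. All parameters below are hypothetical, chosen only to show the mechanism: each tumour has a pre-clinical “detectable window”, and a screen at a fixed time catches a tumour only if it falls inside that window, so long-window (slow-growing) tumours are over-sampled.

```python
import random

# Hypothetical parameters, not real tumour biology: a sketch of length bias.
random.seed(42)

SLOW_WINDOW = 4.0   # years a slow-growing tumour stays screen-detectable
FAST_WINDOW = 0.5   # years for a fast-growing tumour
SCREEN_TIME = 5.0   # a single screen takes place at year 5
N = 100_000         # tumours of each type; onsets uniform over 10 years

def screen_detected(window):
    """True if the screen falls inside this tumour's detectable window."""
    onset = random.uniform(0.0, 10.0)
    return onset <= SCREEN_TIME <= onset + window

slow_caught = sum(screen_detected(SLOW_WINDOW) for _ in range(N))
fast_caught = sum(screen_detected(FAST_WINDOW) for _ in range(N))

share_slow = slow_caught / (slow_caught + fast_caught)
print(f"slow-growing share of screen-detected tumours: {share_slow:.0%}")
# Although the simulated population is 50/50, the screen's catch is
# dominated by slow-growing tumours, roughly in proportion to the
# window lengths (4 : 0.5).
```

With these made-up numbers the screen-detected cancers are overwhelmingly the slow-growing kind, even though both kinds are equally common – which is exactly why screen-detected and clinically detected cancers cannot be compared directly.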
What is needed is to compare the rate of mastectomies (per, say, 100,000 women) in a non-screened population with the rate in a screened population. There are already three sources for such a comparison.
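The difference between a percentage of treated cancers and a population rate is the crux of the argument, and a toy calculation shows it. All numbers below are hypothetical, chosen only to illustrate the mechanism: overdiagnosis enlarges the pool of (mostly small, breast-conserving) surgeries, so the mastectomy percentage falls even while the mastectomy rate per 100,000 women rises.

```python
# Hypothetical toy numbers only - not real data.
POPULATION = 100_000

# Unscreened population: 500 cancers treated, 250 by mastectomy.
unscreened_mast, unscreened_total = 250, 500

# Screened population: overdiagnosis adds 250 extra small cancers (all
# treated with breast-conserving surgery), and mastectomies rise to 300.
screened_mast, screened_total = 300, 750

pct_unscreened = 100 * unscreened_mast / unscreened_total
pct_screened = 100 * screened_mast / screened_total

rate_unscreened = unscreened_mast / POPULATION * 100_000
rate_screened = screened_mast / POPULATION * 100_000

print(f"percentage: {pct_unscreened:.0f}% -> {pct_screened:.0f}%  (falls)")
print(f"rate/100k:  {rate_unscreened:.0f} -> {rate_screened:.0f}  (rises)")
```

In this sketch the percentage drops from 50% to 40% while the absolute number of mastectomies per 100,000 women rises from 250 to 300 – the denominator simply grew faster than the numerator. This is why the 27% vs 52% comparison in the Lancet letter is misleading.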
1 The randomised trials. These are the most reliable source, and they showed 20% more mastectomies in the screened populations.
2 Population-based data from Denmark, where 20% of the administrative regions offered screening for 17 years before the remaining 80% joined the programme following a government vote. Again, more mastectomies were done in the screened areas; the data were presented in ‘Radiology’ earlier this year.
3 Population-based data from Norway, which also had a gradual introduction of mammography screening. The same picture emerged as in Denmark, as described in BMJ earlier this year.
Regarding these two studies, there may be geographical variation apart from the presence of screening, but the consistency of the findings, together with those from the randomised trials, indicates that screening does in fact increase the use of mastectomies. The increase is caused by overdiagnosis, and Autier has shown that screening does not reduce the occurrence of large invasive cancers – those that are treated by mastectomy.
Here is another example, by Peter C Gøtzsche and Karsten Juhl Jørgensen, both from the Nordic Cochrane Centre, Copenhagen, in their response this week to a French overdiagnosis study in the BMJ: ‘Overdiagnosis from non-progressive cancer detected by screening mammography: stochastic simulation study with calibration to population based registry data’
BMJ 2011;343:d7017, doi:10.1136/bmj.d7017 (published 23 November 2011)