Gary Schwitzer has an awesome post on the “Three common errors in medical reporting”. I suggest everyone read it to better understand how medical reporters (or reporters in general) can get things wrong even when they’re trying to get it right. My favorite error is the absolute vs. relative risk fallacy. As Mr. Schwitzer describes it:
Many stories use relative risk reduction or benefit estimates without providing the absolute data. So, in other words, a drug is said to reduce the risk of hip fracture by 50% (relative risk reduction), without ever explaining that it’s a reduction from 2 fractures in 100 untreated women down to 1 fracture in 100 treated women. Yes, that’s 50%, but in order to understand the true scope of the potential benefit, people need to know that it’s only a 1% absolute risk reduction (and that all the other 99 who didn’t benefit still had to pay and still ran the risk of side effects).
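To make the distinction concrete, here’s a quick sketch in Python using the numbers from the quote above. The function name and its layout are my own invention, but the arithmetic (absolute risk reduction, relative risk reduction, and the related “number needed to treat”) is standard:

```python
def risk_summary(events_control, n_control, events_treated, n_treated):
    """Summarize a two-arm comparison in both relative and absolute terms."""
    risk_control = events_control / n_control
    risk_treated = events_treated / n_treated

    arr = risk_control - risk_treated   # absolute risk reduction
    rrr = arr / risk_control            # relative risk reduction
    nnt = 1 / arr                       # number needed to treat for one benefit

    return arr, rrr, nnt

# Schwitzer's hip-fracture example: 2 fractures per 100 untreated women,
# 1 fracture per 100 treated women.
arr, rrr, nnt = risk_summary(2, 100, 1, 100)
print(f"Relative risk reduction: {rrr:.0%}")  # 50%
print(f"Absolute risk reduction: {arr:.0%}")  # 1%
print(f"Number needed to treat:  {nnt:.0f}")  # 100
```

Same trial, same data: “50% reduction” and “1 woman in 100 helped” are both true, but only the second tells you that roughly 100 women have to be treated for one of them to benefit.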
This is something that people on both sides of a scientific debate are guilty of doing. Some overzealous public health people may say that an intervention is the best thing ever because it “cut in half” the number of new cases of some disease or condition. On the flip side, pseudoscientific zealots may say that some “natural” remedy is “the shit” because it “more than doubled” the life expectancy of a person with cancer… Or something like that.
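The survival version of the trick works the same way. A tiny sketch with made-up numbers (there is no real remedy or trial behind these; they just show the arithmetic):

```python
# Hypothetical numbers, purely for illustration: a remedy that
# "more than doubles" life expectancy can still buy very little time.
median_survival_untreated = 2.0  # months
median_survival_treated = 4.5    # months

relative_gain = median_survival_treated / median_survival_untreated
absolute_gain = median_survival_treated - median_survival_untreated

print(f"Relative: {relative_gain:.2f}x the life expectancy")  # 2.25x
print(f"Absolute: {absolute_gain:.1f} extra months")          # 2.5 months
```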
In both instances, it is absolutely critical that the person making the assertion report the actual numbers, not just how much – or how little – impact the intervention had. So trust your sources, but always verify. Just because something is “statistically significant” doesn’t mean that it’s “impressive”.
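To see that last point in action, here’s one more sketch with hypothetical trial numbers (nothing here comes from a real study), using a standard two-proportion z-test written in plain Python:

```python
import math

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-proportion z-test using the pooled standard error."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical mega-trial: 2.0% vs. 1.9% event rates, one million
# people per arm -- a 0.1% absolute risk reduction.
z = two_proportion_z(20_000, 1_000_000, 19_000, 1_000_000)
print(f"z = {z:.2f}")  # ~5.11, p << 0.001: highly "significant"
```

With a big enough sample, even a 0.1% absolute difference clears any significance threshold, yet you’d have to treat about 1,000 people to prevent a single event. Statistically significant? Yes. Impressive? That’s a separate question, and it’s answered by the absolute numbers.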