My mom showed me an article on her phone just a few days ago. It had to do with mercury-laden fish intake during pregnancy and later ADHD symptoms in the child. The title of the article and the short blurb underneath made it sound as though eating fish raised the risk of ADHD.
Having recently read the actual study myself, I had her click through to the rest of the article. The findings? Fish intake actually protected against ADHD development (expected), while mercury levels in hair increased the risk (again expected).
One of my continuing pet peeves is that most doctors, regardless of specialty, do not read the medical literature. If anything, their knowledge is based on promotional material from a drug rep, a presentation at a seminar, or just the abstract.
If someone doesn’t understand medical research, it can be very easy to be confused by the numbers thrown out in both the abstract and the study. Terms like relative risk, absolute risk, and number needed to treat can be daunting.
I’ve given the example time and time again of the statin class of drugs used to lower cholesterol. All of the promotional data and abstract data is given in relative risk, which makes it sound like a fantastic drug to be given to everyone with a heartbeat.
The absolute risk reduction, however, flat out sucks. About 1%. You have to treat 1,000 people for 5 years to prevent 11 heart attacks. While this is a 50% relative risk reduction (without treatment, 22 would have had a heart attack, so going from 22 down to 11 is a 50% reduction), it is only a measly 1% absolute risk reduction.
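The arithmetic above is simple enough to sketch in a few lines. This uses only the illustrative statin numbers from the paragraph (22 vs. 11 heart attacks per 1,000 patients over 5 years), not data from any specific trial:

```python
# Risk arithmetic for the hypothetical statin example in the text:
# of 1,000 untreated patients, 22 have a heart attack over 5 years;
# of 1,000 treated patients, only 11 do.
untreated_events = 22
treated_events = 11
n_per_group = 1000

untreated_risk = untreated_events / n_per_group   # 0.022
treated_risk = treated_events / n_per_group       # 0.011

# Relative risk reduction: how much the risk shrank in proportional terms.
rrr = (untreated_risk - treated_risk) / untreated_risk

# Absolute risk reduction: how much the risk shrank in real terms.
arr = untreated_risk - treated_risk

# Number needed to treat: how many patients must be treated to prevent one event.
nnt = 1 / arr

print(f"Relative risk reduction: {rrr:.0%}")   # the headline number: 50%
print(f"Absolute risk reduction: {arr:.1%}")   # the fine print: about 1%
print(f"Number needed to treat: {nnt:.0f}")    # roughly 91 people per event prevented
```

The same pair of numbers yields both the impressive-sounding "50%" and the underwhelming "1%"; which one appears in the abstract is exactly the kind of choice the rest of this post is about.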
This type of wordsmithing goes on all too frequently in the medical literature and leads the general public and unsuspecting doctors to the wrong conclusions. But just how frequently does this occur?
This particular study looked at the presence of “spin” in the abstract of a medical journal article and in the media coverage of research studies. The results should make you a little more leery next time you listen to that CNN report:
Researchers looked at 498 press releases and narrowed them down to 70 that were based on double-blind research studies. They then compared the abstracts and the press stories written about them against the data in the original studies themselves. They defined “spin” as “specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment.”
- Spin was identified in 40% of the article abstract conclusions and in 47% of press releases
- If spin was present in the article abstract conclusions, the press release was about 5.6 times as likely to contain spin as well
Basically, the presence of some type of spin or misrepresentation of the data (such as the example given above with statins) was very common and usually involved inflating the benefit of the drug being tested.
The last time you picked a new doctor, did you think to ask how he or she stayed connected to new medical research?