When There’s Big News about Health, Should You Believe It?

A behind-the-scenes look at how the media report medical research

To be fair, at first glance, it did sound like a huge story.

“Metastatic Prostate Cancer Cases Skyrocket,” proclaimed the press-release headline in July of 2016. New cases of an incurable form of prostate cancer rose a whopping 72 percent from 2004 to 2013, according to a study from prestigious Northwestern University.

The news headlines came fast and furious:

“Most Aggressive Form of Prostate Cancer on the Rise”

Newsweek

“Advanced Prostate Cancer Cases Soar”

AARP

“Advanced Prostate Cancer on the Rise, Screening at Age 50 Key to Detection”

Huffington Post

The shocking increase could be due to “lax” screenings, the press release suggested. In recent years, various organizations, including the respected US Preventive Services Task Force (USPSTF), had relaxed their prostate-cancer screening guidelines, a controversial move. Was that the reason for the increase? Or perhaps prostate cancer, a disease that mostly affects men 50 and older, had become more aggressive.

Actually, what few reporters seemed to recognize was that there was a strong chance that neither factor was to blame—because there may have been no cancer increase at all.

This prostate-cancer frenzy was the perfect storm that experts in health and science journalism warn of. From the press release to the articles, it was a meld of sensationalism, misunderstanding and lack of due diligence.

But it wasn’t an anomaly. After all, news about research these days has become a running joke. Anything could kill you. Anything could be good for you. Think coffee is unhealthy? Just wait till tomorrow.

It’s funny, until it isn’t. Research on issues related to aging, in particular, helps shape our world. It affects medical guidelines, policy debates, social programs, even personal wellness decisions. To be accurately informed, people need to understand what the research really shows.

I’ve worked as a health journalist for over a decade, with a specialty in aging for much of that time. I started out with an unusual educational leg up: my father, James Hubbard, a family doctor and writer, taught me key points about understanding studies. Yet I still struggled at first. Research papers were gobbledygook—supposedly in English but impossible to make sense of.

Over the years, the studies haven’t gotten simpler, but I’ve gotten savvier—not only as a journalist but as a research news consumer. You can too. After all, in the midst of all that sensationalism, sometimes studies do come out that you’d actually benefit from knowing about. Deciphering which ones are likely worth a look just takes a little jargon know-how—and a deeper understanding of how research and journalism really work.

From Misled to Misleading

In the midst of the prostate-cancer frenzy, none other than Otis W. Brawley, MD, the American Cancer Society’s chief medical officer, stepped in to stem the tide.

It’s the rate that matters, not the raw numbers, he reminded the media through a statement on the ACS Pressroom Blog. “A rising number of cases can be due simply to a growing and aging population among other factors,” his statement read. If the number of new cases per 100,000 men ages 50 to 69 had risen by 72 percent, that would have been news. “In addition, in this study, the rise [the researchers] detected began before USPSTF guidelines for screening changed,” he wrote. 
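Brawley’s arithmetic is easy to check for yourself. Here is a minimal sketch in Python, using invented numbers rather than the study’s actual data, of how a raw case count can jump sharply while the rate per 100,000 men in the at-risk group barely moves, simply because that group grew:

    # Toy illustration (invented numbers, not the study's data): raw case counts
    # can climb even when the rate per 100,000 men stays essentially flat,
    # because the at-risk population itself grows and ages.

    def rate_per_100k(cases, population):
        """New cases per 100,000 people in the at-risk group."""
        return cases / population * 100_000

    # Hypothetical at-risk populations and case counts for two years.
    pop_2004, cases_2004 = 20_000_000, 1_700
    pop_2013, cases_2013 = 29_000_000, 2_500   # about 47 percent more cases in raw numbers

    print(rate_per_100k(cases_2004, pop_2004))  # 8.5 per 100,000
    print(rate_per_100k(cases_2013, pop_2013))  # roughly 8.6 per 100,000; essentially unchanged

With numbers like these, a headline could truthfully report a big jump in cases even though an individual man’s risk had hardly changed.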

Brawley continued:

The issue of whether and how screening may affect deaths from prostate cancer in the US is an incredibly important one. This study and its promotion get us no closer to the answer, and in fact cloud the waters. We hope reporters understand that and use this study to ask another important question: can we allow ourselves to be seriously misled by active promotion of flawed data on important health matters?

His stinging question isn’t new. The problem has been discussed—by both researchers and journalists—for decades. There are remedies. But in many ways, society has only gotten further from them. 

Behind the Scenes in a Newsroom

The quality of research reporting varies. Some is fantastic. Some is abysmal. But here’s how such reporting is ideally done in my field, health journalism: a reporter reads the full study, paying particular attention to its limitations and weaknesses. She conducts interviews, including at least one with a researcher who isn’t affiliated with the study and who provides objective opinion and overall context. If part of her job is to suggest headlines for her stories, she writes one that’s not sensational or misleading.

But journalism doesn’t live in an ideal world. So here’s how health reporting is often done instead: the reporter interviews one of the lead researchers (maybe). She writes a compelling article based on that interview, the press release and the study abstract (summary). A click-worthy headline is added—either by her or an editor—and then it’s on to the next story.

What happened? No time, no education in reading studies and lots of pressure to drive clicks.

“The people who are dedicated health reporters at a lot of the major media outlets have really been dramatically cut. Where there used to be 10, 20 people, now there are two,” says Lisa Schwartz, MD, codirector of the Medicine in the Media program at the Dartmouth Institute for Health Policy and Clinical Practice.

Today’s reporters—on any beat—are notoriously overworked. In addition, “a lot of places have laid off staff like copy editors,” says Liz Seegert, a freelance health journalist who’s written for the Silver Century Foundation and is the topic leader on aging with the Association of Health Care Journalists. “So the checks and balances that used to be there have in many instances disappeared. In the rush to get published, you’ve got to be your own fact-checker, you’ve got to be your own editor.”

Yet reporters are often not even trained to do their main job. Specialty training for the health beat isn’t a big part of many university journalism programs, says Schwartz, who’s also a professor of medicine at the Dartmouth Institute. And because of the shrinking newsrooms, these reporters often don’t have so much as a mentor who’s been there longer to help them along, she points out.

Therefore, many health journalists have had no training that would, for example, help them read that prostate-cancer study—whose second paragraph, by the way, begins as follows:

From the National Cancer Data Base (NCDB), all men diagnosed with adenocarcinoma of the prostate (International Classification of Diseases for Oncology histology codes 8550 and 8140) from 2004 through 2013 were included. Only patients with data available to risk stratify based on National Comprehensive Cancer Network (NCCN) guidelines were included (low risk: cT1–cT2a, PSA <10 ng ml−1 and Gleason score ≤6; intermediate risk: cT2b–T2c, PSA 10–20 ng ml−1 and Gleason score 7; high risk: cT3–4, PSA >20 ng ml−1 and Gleason score 8–10; metastatic: cN1 or cM1).

A press release is a lot easier to get through. So that’s often what journalists depend on (perhaps along with the study abstract). And some press releases do explain studies fairly. But many others exaggerate, misrepresent or worse. After all, publicists—not to mention researchers and universities—want those media hits.

“There are lots of self-interests that are served by getting great media coverage,” Schwartz says. “That’s part of how researchers advance their careers—by showing that their research is important. It’s also how institutions raise money. And part of that is to write a really exciting press release.”

Add to all this the intense pressure on the reporter to draw an audience.

“Journalists sometimes feel the need to play carnival barkers, hyping a story to draw attention to it,” health policy journalist Susan Dentzer wrote in 2009 in an article for the New England Journal of Medicine about the pitfalls of health care journalism. “This leads them to frame a story as new or different—depicting study results as counterintuitive or a break from the past—if they want it to be featured prominently or even accepted by an editor at all.”

Not all reporters fall into these traps. Paula Span, who writes the New York Times column The New Old Age, reads the studies even though she doesn’t have a science background. She calls the researchers for help translating.

“I find that most researchers are extremely glad to help out,” she says. “They want their information to get a broader audience.” If she’s reporting on a controversial issue, she’ll get opinions from researchers who weren’t involved in the study, as well.

Span also does something that many media watchers wish journalists would do more often: she reports on studies that have negative results—those that find no benefit to a treatment or supplement, for example.

Span, who’s also the author of When the Time Comes: Families with Aging Parents Share Their Struggles and Solutions (2009), believes that studies with negative findings should be covered more often. “We are coming to learn how much overtesting and overtreatment there is of older people and how detrimental this can be to them. I’ve written about a number of different studies that show no benefit to doing something.”

How to Analyze Research Stories

As the journalism, research and marketing worlds continue to sort all this out, the public still needs reliable information. So here’s how to get it: learn to be research-media savvy. The first step is to watch for three telling things in a story: association, size and risk.

First: association. One of the most common problems in research reporting is that the difference between association and causation is not made clear, says James Hubbard, MD, my father, who, full disclosure, publishes a website I edit, TheSurvivalDoctor.com.

For example, when an article says a particular fruit is “associated with” or “linked to” a reduced risk of developing some disease, that does not necessarily mean the fruit caused the reduced risk.

Usually, for association studies like this, “researchers take a big group of people and ask some questions and then try to associate different illnesses with the people’s habits,” Hubbard says. “This gives the investigators something to be suspicious of. Then more specific studies that are much more accurate and precise are done.”

For example, if a study finds that women who sip a nightly glass of red wine are less likely to get osteoporosis, maybe the wine reduced the risk. Or maybe the wine drinkers also tended to do yoga or eat dairy or do something else that was the true risk reducer. Further studies would be needed to find that out. Some studies that show association pan out; many don’t.

With association studies, there’s also often the question of which came first. For example, if older people with a positive attitude tend to be healthier, perhaps positivity improved their health. On the other hand, maybe they feel positive because they’re healthier.

Second: size. In general, bigger studies are better; smaller studies are preliminary. This is especially true of association studies, Hubbard says.

Stronger types of studies don’t have to be as large to be impactful. One of the strongest types is the randomized, double-blind, placebo-controlled study. All those terms are good to know:

  • Randomized: The participants are randomly divided into groups (commonly, two). Because neither researchers nor participants choose who gets into what group, the groups are likely to be similar. For example, neither group ends up with more of the severely ill people. (A simple sketch of random assignment follows this list.)
  • Double-blind: Neither the researchers nor the participants know who’s getting what treatment (for example, who’s getting what medication) until the study is over. This ensures that participants are objective about effects or the lack thereof, Hubbard explains. And researchers don’t, for instance, unknowingly give more positive reinforcement to one group than the other. (“It looks like you’re getting better.”)
  • Placebo-controlled: One group gets the treatment; the other gets a placebo, a “treatment” that’s secretly inactive. When analyzing study results, researchers evaluate whether the people who got the real treatment experienced stronger effects than those who got the placebo—in other words, whether the treatment has more than just a placebo effect (an actual or perceived effect caused by believing something is affecting you even though it really isn’t).
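For readers who want to see the mechanics, here is a minimal sketch in Python of what random assignment looks like, using made-up participant labels; the point is simply that chance, not anyone’s judgment, decides who lands in which group:

    # Minimal sketch of random assignment (hypothetical participant labels).
    # Chance, not the researchers or the participants, builds the two groups.
    import random

    participants = [f"participant_{i}" for i in range(1, 101)]  # 100 hypothetical volunteers

    random.shuffle(participants)             # shuffling removes any ordering or selection bias
    half = len(participants) // 2
    treatment_group = participants[:half]    # these people get the real treatment
    placebo_group = participants[half:]      # these people get the inactive placebo

    # In a double-blind trial, each person would then be tracked only by a coded
    # label, so neither participants nor researchers know who is in which group
    # until the study ends.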

Third: risk. When a study finds that the risk for something has increased or decreased, consider what your risk was to begin with. The book Know Your Chances (2008), which Schwartz co-wrote and which is available for free at the PubMed Health website, explains it this way:

When someone tells you something like this—“42 percent fewer deaths”—the most important question to ask is “42 percent fewer than what?” Unless you know what number is being lowered by 42 percent, it’s impossible to judge how big the change is.

Thinking about risk reduction is like deciding when to use a coupon at a store. Imagine that you have a coupon for 50 percent off any one purchase. You go to the store to buy a pack of gum, which costs 50 cents, and a large Thanksgiving turkey, which costs $35.00. Will you use the coupon for the gum or for the turkey? Most people would use the coupon for the turkey.
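To put numbers on that idea, here is a minimal sketch in Python with invented figures: the same 42 percent relative reduction avoids very few deaths when the starting risk is tiny and many deaths when the starting risk is large.

    # Toy example (invented numbers) of relative versus absolute risk reduction:
    # "42 percent fewer deaths" means little until you know the starting risk.

    def deaths_avoided_per_1000(baseline_deaths_per_1000, relative_reduction):
        """Absolute benefit: deaths avoided per 1,000 people."""
        return baseline_deaths_per_1000 * relative_reduction

    # Scenario A: a rare condition, 2 deaths per 1,000 people without treatment.
    print(deaths_avoided_per_1000(2, 0.42))    # 0.84 deaths avoided per 1,000 people

    # Scenario B: a common, deadlier condition, 200 deaths per 1,000 people.
    print(deaths_avoided_per_1000(200, 0.42))  # 84 deaths avoided per 1,000 people

Both headlines could honestly say “42 percent fewer deaths,” yet the absolute benefit in the second scenario is a hundred times larger: the turkey rather than the gum.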

So far, you’re watching out for association, size and risk. But there are a few other important things to consider:

  • Who were the subjects? Animal studies often don’t pan out in humans. For human studies, consider whether you fit into the researched category. Were the participants all one gender? Did they all have a certain disease or fall within a certain age range? For example, many studies don’t include people 65 and older even though medications commonly affect older people differently than younger ones.
  • Who funded the study? Does the funder—or the researcher—have a possible conflict of interest? For instance, was a study about the amazing benefits of oranges funded by a company that sells oranges? Does the lead researcher of a medication study have a relationship with the drug’s manufacturer? If the news story doesn’t include this information, the study should. Medical journals are now providing some studies in full, for free, online. Conflicts of interest are usually listed at the end.
  • Who’s quoted? Does the article include insight from someone other than the researchers involved with the study?
  • Has the study been published? Where? Ideally, it’s been published in a peer-reviewed journal, meaning experts in that study’s topic evaluated it before it was accepted. Well-known examples of such journals are the New England Journal of Medicine, JAMA and BMJ, but there are many more.

These are some of the main points experts want you to understand when you’re reading news stories about medical research. But if you want to delve deeper, check out the review criteria from HealthNewsReview.org, which publishes critiques of health-news articles. The site points out, for example, that the cost and possible harms of medical interventions are important to consider, not just the exciting positive possibilities.

HealthNewsReview.org also recommends a few websites to check out for reliable medical news, including a couple of my favorites, MedPageToday, which is written for health care professionals, and Kaiser Health News.

Overall, health news tends to be hit-and-miss, according to the experts I spoke with. No one outlet was mentioned by everyone as a go-to for great medical news. Schwartz believes newspapers with health sections and with reporters dedicated to those sections tend to do a better job. Both Hubbard and Seegert say that even when you find a good source, you shouldn’t trust it implicitly. “Even the best stories and the best done studies can be skewed,” Seegert says, “so look at multiple sources.”

Positively Percolating

Despite all the problems in research-related journalism, there are some positive signs for older people concerned about health.

For one thing, even though the aging beat is “not seen as a magnet for advertising or political or market support from the editorial suites upstairs, it has stayed alive because it percolates up from the bottom of the newsroom,” says Paul Kleyman, director of the Ethnic Elders Newsbeat at New America Media. Though corporate may not be pushing for stories on issues related to aging, reporters, editors and television producers continually encounter such issues in their own lives. “I’ve always felt that, like ‘all politics are local,’ ‘all journalism is personal,’” Kleyman says.

Also, these days, a number of fellowship and training programs support journalism that’s focused on aging or health. The Silver Century Foundation cosponsors one of them through a yearly grant to the Journalists in Aging Fellows Program, run by the Gerontological Society of America and New America Media. Seegert, the topic leader on aging with the Association of Health Care Journalists, was a fellow in 2015.

One of the most important things journalists learn from educational courses is when not to cover a study, says Schwartz, whose Medicine in the Media training program is, incidentally, on hold due to a halt in federal funding for it.

“There are lots of studies that help science to move forward but that are not ready for the public,” she says. Sometimes, she’ll get an email from a journalist who’s proud to have fought to keep a study out of the news. “That prostate cancer study is a great example,” she says, noting that, though many big outlets bit, a number of others didn’t. “When journalists take a stronger line about things that they feel aren’t in the public’s interest—and argue to get those things out of the news—they’re doing an incredible public service.”

When they don’t, though, the public needn’t be fooled. There are usually red flags galore; people just need to know how to recognize them.