
Taking Psychology with You

Lying with Statistics


We have seen that statistical procedures are indispensable tools for assessing research. But in the real world, statistics can be manipulated, misrepresented, and even made up by people hoping to promote a particular political or social agenda. That is why an essential part of critical and scientific thinking is learning not only how to use statistics correctly but also how to identify their misuse.

A primary reason for the misuse of statistics is “innumeracy” (mathematical illiteracy). In Damned Lies and Statistics, Joel Best (2012) told of a graduate student who copied this statistic from a professional journal: “Every year since 1950, the number of American children gunned down has doubled.” Sounds scary, right? But if that claim were true, then even if only one child had been gunned down in 1950, by 1987 the number of children gunned down each year would have surpassed 137 billion, more than the total human population throughout history; and by 1995, the annual number of victims would have been 35 trillion!
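To see how quickly annual doubling becomes absurd, here is a minimal back-of-the-envelope check in Python. It assumes, purely for illustration, that a single child was gunned down in 1950; even that lowest possible starting point produces the impossible totals above.

# Back-of-the-envelope check of the "doubles every year" claim.
# Assumption (for illustration only): exactly one victim in 1950.
baseline_1950 = 1
for year in (1987, 1995):
    victims = baseline_1950 * 2 ** (year - 1950)
    print(f"{year}: {victims:,} victims if the number doubled every year")
# Prints roughly 137 billion for 1987 and 35 trillion for 1995.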

Where did this wildly inaccurate number come from? The author of the original article misrepresented a statistic from the Children's Defense Fund (CDF), which in 1994 claimed that “The number of American children killed each year by guns has doubled since 1950.” Notice the difference: The CDF was saying that there were twice as many deaths in 1994 as in 1950, not that the number had doubled every year.

We don't want you to distrust all statistics. Statistics don't lie; people do—or, more likely, they misrepresent or misinterpret what the numbers mean. When statistics are used correctly, they neither confuse nor mislead. On the contrary, they can expose unwarranted conclusions, promote clarity and precision, and protect us from our biases and blind spots. You need to be careful, though. Here are a few things you can do when you hear that “2 million people do this” or “one out of four people are that”:

Ask how the number was computed. Suppose someone on your campus gives a talk about a hot social issue and cites some big number to show how serious and widespread the problem is. You should ask how the number was calculated. Was it based on government data, such as the census? Did it come from just one small study or from a meta-analysis of many studies? Or is it pure conjecture?

Ask about base rates and absolute numbers. If we tell you that college students who eat a bagel every morning are three times as likely to get ulcers (relax, they aren't!), that sounds pretty alarming, but it does not tell you much. You would need to know how many students get ulcers in the first place, and then how many bagel-eating students get ulcers. If that threefold risk is a jump from 100 students in every thousand to 300, you might reasonably be concerned. If it is a shift from one student in every thousand to three, the relative risk has still tripled, but the absolute risk is tiny and the difference could even be a random fluke. Many health findings are reported as an increased relative risk of this or that, in ways that invite worry and even panic. What you want to know is the absolute risk: what the actual numbers show, which may be quite trivial (Bluming & Tavris, 2009; Gigerenzer et al., 2008).
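The gap between relative and absolute risk takes only a few lines of arithmetic to see. This sketch uses the hypothetical bagel numbers from the paragraph above; the cases per thousand students are invented for illustration.

# The same relative risk can hide very different absolute risks.
# Baseline vs. "bagel-eater" cases per 1,000 students (hypothetical numbers).
scenarios = {
    "common outcome": (100, 300),
    "rare outcome": (1, 3),
}
for label, (baseline, exposed) in scenarios.items():
    relative_risk = exposed / baseline        # 3.0 in both cases
    extra_cases = exposed - baseline          # per 1,000 students
    print(f"{label}: relative risk {relative_risk:.0f}x, "
          f"{extra_cases} extra cases per 1,000 students")
# Both scenarios show a threefold relative risk, but the absolute impact is
# 200 extra cases per 1,000 in one and only 2 per 1,000 in the other.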

Ask how terms were defined. If we hear that “one out of every four women” will be raped at some point in her life, we need to ask: How was rape defined? If women are asked if they have ever experienced any act of unwanted sex, the percentages are higher than if they are asked specifically whether they have been forced or coerced into intercourse. Similarly, although far more women are raped by men they know than by strangers, many women do not define acts of date rape or acquaintance rape as “rape.”

Always, always look for the control group. If an experiment does not have a control group, then, as they say in New York, “fuhgeddaboudit.” The kinds of “findings” often reported without a control group tend to be those promoting a new herbal supplement, treatment, or self-improvement program. People are motivated to justify any program or treatment in which they have invested time, money, or effort. Furthermore, thanks to the placebo effect, it is often people's expectation of success that helps them, not the treatment itself. This is why testimonials don't provide a full or accurate picture of a medication's or treatment's benefits or harms. It's like the bartender who says to the customer, “Why are you waving your arms around like that?” And the customer says, “It keeps the gerbils away.” “But there aren't any gerbils here,” the bartender says. “See?” says the customer, “It works!” All the arm-waving in the world won't substitute for a good study.

Be cautious about correlations. We said this before, but we'll say it again: With correlational findings, you usually cannot be sure what's causing what. A study reported that teenagers who listened to music 5 or more hours a day were eight times more likely to be depressed than those who didn't listen that often (Primack et al., 2011). Does listening to music make you depressed? A more likely explanation is that being depressed causes teenagers to tune out and listen to music, as they don't have the mental energy to do much else. “At this point, it is not clear whether depressed people begin to listen to more music to escape, or whether listening to large amounts of music can lead to depression, or both,” said the lead researcher.
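One way to convince yourself that a strong association says nothing about the direction of cause is to simulate data in which causation runs the “wrong” way. In this toy Python sketch (the base rate, listening hours, and threshold are all made-up numbers, not the study's data), depression is assigned at random and depressed teens simply listen to more music; heavy listeners still end up looking far more likely to be depressed.

import random

random.seed(0)

# Toy model: depression drives listening time, not the other way around.
teens = []
for _ in range(10_000):
    depressed = random.random() < 0.10                    # assumed 10% base rate
    hours = random.gauss(6, 1) if depressed else random.gauss(2, 1)
    teens.append((depressed, hours >= 5))                 # heavy listener: 5+ hours/day

heavy = [dep for dep, is_heavy in teens if is_heavy]
light = [dep for dep, is_heavy in teens if not is_heavy]
print(f"Depression rate among heavy listeners: {sum(heavy) / len(heavy):.1%}")
print(f"Depression rate among light listeners: {sum(light) / len(light):.1%}")
# Heavy listeners look dramatically more "at risk," even though listening causes
# nothing in this simulation; the association reflects reverse causation alone.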

The statistics that most people like best are usually the ones that support their own opinions and prejudices. Unfortunately, bad statistics, repeated again and again, can infiltrate popular culture, spread like a virus on the Internet, and become difficult to eradicate. The information in this chapter gets you started on telling the difference between numbers that are helpful and those that mislead or deceive.
