Although writer Jeanna Bryner makes some unexpected moves in the article that I find refreshing, such as acknowledging that some of the "duh" studies she describes might actually have some value, the tone of the headline disturbs me, in part because it echoes a message I've been hearing a lot lately in radio commentaries, editorials, and student papers.
The gist of the message is, "Those silly scientists! They could have just asked me, and I would have saved them a lot of time and money!"
The people sending these messages are often missing the message themselves.
Let's set aside, for the time being, the very real possibility that the journalists are focusing only on the headlines and missing critical, helpful details deeper in the studies they're discussing. That's often the case, but even if it weren't, the commentators are missing the point.
The point is this: There is value to "duh" studies.
Look at it this way: Every time someone releases a study that surprises us by coming up with an unexpected finding, we pay respect because they've taught us something new.
But the only way they could arrive at an unexpected conclusion is to test the "obvious" stuff we think is right. Scientists learn not to trust their instincts. What we think is right is often wrong. So they test everything.
When researchers come up with something surprising, we are illuminated.
- We discover that, air resistance aside, all objects fall at the same rate no matter what they weigh, even though many people (even today) would never predict that.
- We discover that time passes differently for satellites than it does for people on the ground -- you'd think, if you go with "obvious" instincts, that a minute is a minute is a minute. But no, the satellite's minute isn't quite the same as yours -- we actually have to correct satellite clocks for the difference, or else your GPS and cell phones would stop working (see the rough calculation right after this list).
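For the curious, here's a back-of-the-envelope sketch of how big that satellite-clock difference is. It's only an illustration of the two competing relativistic effects, using rounded textbook values for Earth's gravity and the GPS orbit, not an engineering calculation:

```python
# Rough estimate of how much a GPS satellite's clock drifts relative to
# a clock on the ground, per day. Inputs are rounded approximations.

C = 2.998e8          # speed of light, m/s
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # Earth's radius, m
R_SAT = 2.657e7      # GPS orbital radius (~20,200 km altitude), m
V_SAT = (GM / R_SAT) ** 0.5   # orbital speed for a circular orbit, m/s
SECONDS_PER_DAY = 86_400

# Special relativity: the moving clock runs slow by roughly v^2 / (2c^2).
sr_drift = -(V_SAT ** 2) / (2 * C ** 2) * SECONDS_PER_DAY

# General relativity: the higher clock runs fast because it sits in a
# weaker gravitational potential than a clock at Earth's surface.
gr_drift = (GM / C ** 2) * (1 / R_EARTH - 1 / R_SAT) * SECONDS_PER_DAY

net_microseconds = (sr_drift + gr_drift) * 1e6
print(f"Net drift: about {net_microseconds:.1f} microseconds per day")
# With these rounded inputs this prints roughly +38.5 microseconds/day,
# close to the ~38 microseconds/day figure usually quoted. Left
# uncorrected, that drift would wreck GPS position fixes within a day.
```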
Fine, you say. But why report the findings, if they turn out to be obvious?
That's a good question, and, as it happens, there are two good answers:
1. It's useful for researchers to know when the obvious stuff is right. As I noted before, scientists (and other sorts of researchers) learn after a while that gut instincts and "obvious" conclusions can be wrong. So it's reassuring every once in a while to learn, "Oh, that assumption I've been making all these years is correct. It means the 30 articles I've written over my career are still possibly valid!"
2. Huge misunderstandings can develop when people don't report results.
Let's say 1,000 scientific studies have determined that, surprise, broccoli is good for you. They don't bother to publish the findings because, well, everyone knew that. Then scientist 1,001 comes along, and simply because incorrect results pop up randomly from time to time, he ends up with numbers that say eating broccoli is worse for your health than eating rocket fuel. Wow. He's wrong, but he doesn't know that. He reports it. Now the only study published on the health effects of broccoli says it's worse than eating rocket fuel. If the other 1,000 scientists had published their work, it'd be easy to take that weird finding with a grain of salt. But as far as the world knows, the only study done on the effects of broccoli shows it'll kill you stone dead.
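To make that concrete, here's a toy simulation of the broccoli problem. The effect size, the noise level, and the "too obvious to publish" rule are all invented for illustration; the point is only what the published record ends up looking like:

```python
# Toy simulation: many studies measure a real benefit, but only the
# "surprising" (wrong) results get written up.
import random

random.seed(1)

TRUE_EFFECT = 1.0   # broccoli really is good for you (positive effect)
NOISE = 0.5         # sampling noise in each study's estimate
N_STUDIES = 1001

published = []
for _ in range(N_STUDIES):
    estimate = random.gauss(TRUE_EFFECT, NOISE)
    # "Everyone knows broccoli is healthy" -- a positive result feels too
    # obvious to publish, so only negative estimates get reported.
    if estimate < 0:
        published.append(estimate)

print(f"Studies run:       {N_STUDIES}")
print(f"Studies published: {len(published)}")
print(f"Every published estimate is negative: {all(e < 0 for e in published)}")
# The literature now consists entirely of the rare flukes, even though
# the overwhelming majority of (unpublished) studies found the true,
# positive effect.
```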
This principle cuts both ways: It's important to report results, no matter how expected or unexpected they are.
Economist Steven Landsburg (the first freakonomist, before Freakonomics was ever written) wrote a column in Slate a while back about minimum wage studies. Years ago, economists had decided that minimum wage increases must kill lots of jobs. It seemed obvious: There would be less money to go around -- with the same payroll budget, you can hire 100 people for $1 each or only 10 people for $10 each. And, in apparent support of that obvious conclusion, studies were sometimes published showing that increasing the minimum wage reduced the number of jobs on the market.
But it turns out the impacts of minimum wage increases aren't so severe or so obvious. Later statistical analysis showed something suspicious about those previous studies: The findings didn't get any stronger as sample sizes increased, even though a real pattern should show up more and more clearly in bigger samples. Economists eventually concluded that lots of their colleagues had been doing studies on minimum wage impacts but throwing out the results when they didn't match expectations. Self-censorship led to confusion for a whole field.
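To see why that pattern is suspicious, here's a small simulation of how a genuine effect behaves as sample sizes grow. The effect size and noise are arbitrary placeholders, not minimum-wage data; the point is that a real effect's statistical signal strengthens roughly with the square root of the sample size:

```python
# Illustration: a real effect should look "stronger" (more statistically
# significant) in larger studies.
import random
import statistics

random.seed(7)

TRUE_EFFECT = 0.2   # a small but genuine effect
NOISE = 1.0

def run_study(n):
    """Simulate one study of size n and return its t-statistic."""
    samples = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(n)]
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / n ** 0.5
    return mean / se

for n in (50, 200, 800, 3200):
    t_values = [run_study(n) for _ in range(200)]
    print(f"n = {n:5d}: average t-statistic ~ {statistics.fmean(t_values):.1f}")
# Each fourfold increase in sample size roughly doubles the average
# t-statistic. The suspicious thing about the minimum-wage literature was
# that its reported findings showed no such strengthening as samples grew
# -- the signature of results being filtered, not of a real, stable effect.
```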
Similarly, although there are very good reasons to think that alarms over climate change are legitimate and deserve attention, skeptics frequently argue that dissenting opinions get squelched in official channels. If so -- if only articles that report expected findings are getting published -- that's as bad as publishing only the unexpected would be. (The famous article by Naomi Oreskes, in which her analysis found no disagreement with the established consensus on global warming in more than 900 scientific abstracts, may be a bad sign, viewed in this light.)
I'm not saying that the majority is wrong in the latter two cases, or that there's no such thing as a waste of grant money. Without doubt, there are studies that didn't deserve a penny. But they should be judged by their methodologies, not their conclusions.
In our age of rapidly disseminated information and promiscuous skimming of headlines, we need safe-text practices to keep us clean of memetic diseases. A good start is to be wary of any commentator who snorts with derision at a study simply because its conclusions are expected or unexpected.