Jen and I had some friends over for dinner recently and a cooking topic came up: should you pepper steaks after grilling instead of before, since pepper burns easily and will scorch on the grill? I’ve always peppered (and salted) steaks before grilling, but I could see the point that the pepper might in fact burn or become toasted, especially if you’re searing.
So I decided to do a simple experiment. I’d pepper a couple of steaks before grilling, and add fresh pepper to a couple of others after grilling. Then we’d have a blind taste test. The hypothesis was that fresh pepper added after grilling would taste better.
This is fundamentally how science works. You have a question, you come up with a hypothesis, and you develop a method to test your hypothesis.
And though it’s a very simple experiment, it gets pretty close to the gold standard of science: directly testing the hypothesis with blinded participants.
As usual, that’s where the problems started.
Last night when I went to grill the steaks for dinner I realized I was almost out of charcoal and it was too late to go to the store. No big deal, I had just enough to make it work.
But the result was that I didn’t have as much heat as I normally would, and the heat was more concentrated in one spot on the grill than I would have liked.
As a result, I wasn’t able to grill all four steaks exactly the same way I normally would.
Which means that if there was a detectable difference in taste, it could be down to the difference in grilling rather than the difference in peppering.
If at this point you’re wondering who cares this much about grilling and pepper, you haven’t zoomed out to look at the bigger picture.
SCIENCE IS HARD.
This experiment should have been very simple to execute perfectly, and it should have been easy to stick to the methodology and experimental design. And due to unforeseen circumstances, it turned out to be impossible.
Imagine how hard it is to examine nutritional differences across populations and decades.
Imagine what happens in a complex drug trial.
Imagine how hard it is to isolate any one factor in a moving, changing system.
Thinking critically about this, to say nothing of the mountains of evidence that have been dumped in the trash after we realized we were wrong about something we were previously so sure of, should lead you to the conclusion that the results of most scientific studies are total bullshit.
It’s only after time and many, many, many iterations of the same and similar experiments that we can even begin to have confidence that we have a handle on how something works and which factors correlate with which outcomes.
To use one of the biggest nutrition arguments as an example, the notion that we can say “carbs are bad” or “fats are bad” based on some garbage pop science is preposterous.
This is not to say science is worthless. It’s not. But science is not canon. It’s the best method we’ve found thus far to investigate the world around us and approach problem-solving. But it’s limited, extremely limited.
In fact by the very nature of the approach it’s almost entirely useless on an individual level. At best what you get from a large study might be a good starting point for self-experimentation. At worst it’s incredibly and dangerously misleading.
So the next time you see the results of the latest pop science study being pushed into the media firehose by authors hoping to lasso more funding for their programs, remember that regardless of whether its conclusions coincide with or refute your existing beliefs, it’s probably worthless and you should give it no more weight than the latest celebrity opinion.