I grew up in a house of mathematicians. Among other things,
this means that throughout my childhood, I heard a lot of jokes about
physicists (the mathematicians’ equivalent of blonde jokes). As a child, I used
to find those jokes mildly amusing. When I started learning about research
methods in psychology, I started finding them really funny, but sad at the same
time – after all, if a hard science like physics is a laughing stock because of
its dodgy scientific practices, what does this mean for a soft science like
psychology?
Here’s an example of a physicist joke:
A physicist is giving a talk at a
conference. He is holding up a graph and explaining what the data on it
means. After half an hour, a graduate student timidly raises her hand and
politely notes that the graph is upside down. The physicist stops, looks
at the graph, turns it the right way up, and says: “Why, you’re right! Well, in
this case the data is even easier to explain!”
The moral of the story is that, for a reasonably intelligent
and creative person, it is almost always possible to come up with a plausible-sounding
explanation for any set of results. This, of course, is already well-known: for
this reason, the scientific method entails explicitly stating a hypothesis
before the data is collected and analysed. However, in psychological research
this principle is not straightforward, for a simple reason: it is very rare
for the data to behave in the way that was anticipated.
Like probably everyone else in the field, I learnt this the
hard way during my PhD. I conducted a study to look at an effect in three
conditions (let’s call the size of the effect in the three conditions A, B, and
C, respectively). Two theories (let’s call them X and Y) made opposing
predictions:
If X is true, A < B < C.
If Y is true, A > B > C.
The result? B = 0 < A = C.
I think this scenario is familiar to anyone who has ever
done an experiment in the so-called soft sciences, and it’s a PhD student’s
worst nightmare. What does one do with a set of results like these? One of my advisors
said I could do one of the following: (1) figure out why we got this unexpected
set of results, (2) write up a paper with our initial predictions in the
introduction, report our results, and conclude that ‘more research is needed’ to
understand this unexpected set of results, or (3) forget about the whole thing.
In retrospect, I should have done (3), but due to my
stubbornness I went for (1) instead. I had numerous meetings with my advisors
to discuss how any theory could account for the obtained results – but in this
case, we could not even come up with a reasonable-sounding explanation. Then I
decided to collect some more data. I conducted four more studies with larger
samples, and eventually performed a meta-analysis of all the data on this
effect that I could get my hands on (which, aside from the data that I had
collected, was not much). Thus having maximised the power to detect a
potentially true effect, I found that A = B = 0, and C is only slightly larger
than zero. On the bright side, this finally allowed me to conclude that the
data is more compatible with Theory X than Theory Y, but at this stage I had
wasted hours of my advisors’ time and most of my PhD trying to understand the
results of the first experiment, which were basically just random noise.
This is where Hyman’s Maxim comes in. I came across it in this
blogpost by chance, after I had already submitted my PhD thesis. The maxim
says: “Do not try to explain something until you are sure there is something to
be explained.” Ray Hyman
started off as a magician, but later became a skeptic and a psychologist. Aside
from the blogpost, I have not found any publications on Hyman’s Maxim, but in
my opinion, this is the most important principle in psychological science, and
possibly any science that involves drawing inferences from data. As a
scientist’s main job is to obtain data that can support or refute theories, it
is easy to get carried away with drawing the link between data and theory, and
to forget how important it is to ensure that the data actually tells you what
you think it tells you. In psychology, with generally small effects and noisy
data, the non-zero probability that a statistically significant effect reflects
random noise is often forgotten. Consequently, any statistically significant
result is in danger of being interpreted as ‘meaningful’: if the a priori theory did not predict it, we
must be missing something; there must be some explanation or moderating
factor that accounts for the unexpected result.
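As a rough illustration (my own sketch, not something from Hyman; the sample size, alpha level, and number of simulated studies below are arbitrary choices), here is a small Python simulation. All three conditions are drawn from the same null distribution, so the true effect is exactly zero, yet a ‘significant’ pairwise difference that seems to demand an explanation turns up far more often than the nominal 5%:

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_experiments = 10_000    # number of simulated studies
n_per_condition = 20      # participants per condition (arbitrary)
alpha = 0.05

false_alarms = 0
for _ in range(n_experiments):
    # All three conditions come from the same distribution: the true effect is zero.
    a, b, c = (rng.normal(loc=0, scale=1, size=n_per_condition) for _ in range(3))
    # Pairwise t-tests between the three conditions.
    p_values = [stats.ttest_ind(x, y).pvalue for x, y in ((a, b), (a, c), (b, c))]
    if min(p_values) < alpha:
        false_alarms += 1

# Noticeably more than 5% of these pure-noise studies show at least one
# 'significant' pairwise difference crying out to be explained.
print(false_alarms / n_experiments)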
In conclusion, unless we have ensured that an unexpected
result is replicable, drawing inferences from a single study with a statistically
significant result that was not predicted a
priori is a lot like telling someone’s future from the stars or their tea
leaves. In fact, if the null hypothesis happens to be true, it is literally
like telling someone’s future from the stars or their tea leaves. On some
level, everyone knows this already, but it is easy to forget.
My proposed solution to this problem: Create motivational posters starring
Hyman’s Maxim. Put them up in every psychological scientist’s office and
bathroom.
*********************************************
Edit (29/3/15): I created some motivational posters starring Hyman's Maxim. Sorry for my awful photoshop skills; feel free to improve or make your own!