Tuesday, January 31, 2017

Why I suck at writing grant applications, or: Why grant agencies suck at distributing funding

I just got my first big grant rejection. That’s three months of grant application writing that I could have spent collecting data, learning about stats, practicing programming, writing blog posts, reading a nice book, enjoying the summer, playing the cello, hiking in the mountains, spending quality time with the family – in sum, three months wasted. I get it: the grant agencies need to make difficult decisions about which projects to fund by reading lots of applications in a short amount of time, and the applicant needs to ensure that their project is well thought-out, useful, and – most importantly – that this comes through in the limited number of pages that the evaluators can possibly read.

The problem, as I see it, is not that grant agencies necessarily need to reject a lot of proposals (including high-quality ones), or that we need to spend a lot of time planning a project that we would like to conduct, even with the high chance that we will never be able to finance it. The thing is: I know that my proposal was weak. I have not yet read the evaluations of my proposal, but I suspect that they will not contain any information that will help me to improve as a scientist. I tried my best to write a good proposal, but I just couldn’t.

A good proposal, in my ideal world, should start with a good idea. Mine, if I may say so myself, was not bad. Importantly, when I decided to apply for this grant, I resolved to really think the project through. My main question was: how do I maximise the chances that the data will be useful, no matter what results I get? How will I ensure that a null result will be interpretable? That the measures are reliable, that the sample size is big enough, that the statistical results will mean what I think they mean? I wrote down my thoughts about all of this, and introduced the project as one that was specifically designed with the replication crisis in mind; the aim was to conduct a strongly theory-based and methodologically rigorous study, which, in my opinion, is needed if we want to even begin to understand the potentially causal dynamics between specific cognitive processes and some behavioural skill (in the case of my proposal, statistical learning and reading).

This grant agency provides strict guidelines about how to write the application. The subsection headings are prescribed, and they suggest specific issues that the proposal needs to address. Unfortunately, none of them involves maximising the replicability of the work conducted under this funding scheme. They do involve many other things, however, which meant that, in order to fit in a description of what my study is actually about, I needed to cut pretty much all of my thoughts about making the study as strong as possible.

So, what kind of things did they want me to write about instead? A minor pet peeve for me was the focus on gender issues: we needed to specifically state how our research project addresses gender. At best, this is a waste of space at the cost of information that is actually useful (do, e.g., mathematicians and chemists also need to justify that gender can’t be a moderator in their work?). At worst, in the social sciences, it would encourage an atheoretical p-hacking expedition for potential gender differences. While I strongly believe that it is important to address gender bias in academia, grant agencies have millions of ways of contributing to gender equality that are more effective than asking applicants to include a section that is, in most cases, not relevant.

What really turned my proposal from (what I think is) a good idea into a mediocre proposal was the emphasis on practical value. It is especially good, I am told, if one can make a connection between one’s project and one of the focal societal issues listed on the agency’s website. For example, I am interested in the cognitive processes underlying skilled reading and reading acquisition. This obviously can’t cure cancer or prevent earthquakes (or can it???). After some thinking, however, I came up with the following link: understanding the processes underlying skilled reading will help us to understand what may go wrong in developmental dyslexia. Children and teenagers with dyslexia have higher drop-out rates from school, lower self-esteem, and often comorbid developmental disorders. All of these can lead to poverty. So, by studying whether the link (if any) between statistical learning and reading ability may be causal, I will actually be solving the problem of world poverty. Does this sound really dodgy to you? Yeah, apparently the evaluation committee thought so, too.

Why is this a problem? Well, obviously it is a problem for me, because nobody wants to fund my research. It goes deeper than that, though: the focus on immediate practical implications damages science by devaluing theoretical research and by supporting researchers who are particularly good at making a convincing case when overhyping their research.

Theoretical research is – let’s face it – more boring than a ground-breaking study that cures a disease, prevents a natural disaster, or tells us something about our psyche that we didn’t even know ourselves. However, it is extremely important. If we tried to fix our car without knowing how it works, we would spend a lot of time poking at random parts of the engine, replacing the battery, kicking the wheels. Maybe one of these things would work. Even if it did, there would be no guarantee that it would work the next time the car breaks down, or for a different car. But if we understand the underlying processes, we can look at the specific symptoms of the non-working state and determine which part may be broken. This saves a lot of time. In short – and I apologise for the corny metaphor – both theoretical and practical research are needed in order to achieve the kind of understanding that will have practical applications. But it is difficult to draw a direct link between a theoretical question and the implications it may eventually have. Admittedly, I am biased, because I have always been more drawn to theoretical issues in cognitive psychology than to its practical applications in clinical and educational settings. Perhaps this has something to do with my parents being mathematicians; I grew up listening to speeches about the value of theoretical work and the over-rating of immediate practical applications. But even Andrew Gelman argues, from a statistician’s point of view, that theory is very important.

The second issue with encouraging applicants to bend over backwards to make a link between their research and some immediate practical application is pretty self-explanatory. Some people are just very good at making others believe that their research will cure cancer. My guess is that this ability is, if anything, negatively correlated with research skills. Supporting young researchers who are particularly good at faking practical applications will lead to a future where, instead of disseminating useful research, professors will boast that they can cure dyslexia with computational modelling.


As a final disclaimer: I would probably not have written this blogpost if I had been awarded the grant. I would be too busy celebrating and preparing to conduct the study I had proposed. In this sense, it is hypocritical of me to write this critical blogpost. After all, this grant agency has supported many great research projects. And, given that I have not yet looked at the feedback that came with the rejection, it is possible that the evaluators found some fundamental flaw in my proposal which would make the study useless. My aim here, however, is not to whine about being rejected (though I won’t pretend I’m happy about that, either). The problem of supporting only research with immediate practical implications, and only researchers who are able to argue convincingly for immediate practical implications, holds true in academia beyond this specific grant application. Since the realisation that psychology has a replication crisis, there has been a lot of discussion about the damage done by overhyping the practical applications of one’s research and brushing aside methodological and statistical issues that would, ideally, need to be considered a priori. It is unfortunate that these discussions have had so little impact on the reward system in academia.

Tuesday, November 29, 2016

On Physics Envy

I followed my partner to a workshop in plasma physics. The workshop was held in a mountain resort in Poland – getting there was an adventure worthy, perhaps, of a separate blog post.

“I’m probably the only non-physicist in the room”, I say, apologetically, at the welcome reception, when professors come up to me to introduce themselves, upon seeing an unfamiliar face in the close-knit fusion community. 
Remembering this XKCD comic, I ask my partner: “How long do you think I could pretend to be a physicist for?” I clarify: “Let’s say, you are pretending to be a psychological scientist. I ask you what you’re working on. What would you say?”
“I’d say: ‘I’m working on orthographic depth and how it affects reading processes, and also on statistical learning and its relationship to reading’.” Pretty good, that’s what I would say as well.
“So, if you’re pretending to be a physicist, what would you say if I ask you what you’re working on?”, he asks me.
“I’m, uh, trying to implement… what’s it called… controlled fusion in real time.”
The look on my partner’s face tells me that I would not do very well as an undercover agent in the physics community.

The attendees are around 50 plasma physicists, mostly greying; there are about three women among the senior scientists, and perhaps five female post-docs or PhD students. Halfway through the reception dinner, I am asked about my work. In ten sentences, I try to describe what a cognitive scientist/psycholinguist does, trying to make it sound as scientific and non-trivial as possible. Several heads turn, curious to listen to my explanation. I’m asked if I use neuroimaging techniques. No, I don’t, but a lot of my colleagues and friends do. For the questions I’m interested in, anyway, I think we know too little about the relationship between brain and mind to draw meaningful conclusions.
“It’s interesting”, says one physicist, “that you could explain to us what you are doing in ten sentences. For us, it’s much more difficult.” More people join in, admitting that they have given up trying to explain to their families what it is they are doing.
“Ondra gave me a pretty good explanation of what he is doing”, I tell them, pointing at my partner. I sense some scepticism. 

Physics envy is a term coined by psychologists (who else?), describing the inferiority complex associated with striving to be taken seriously as a scientific field. Physics is the prototypical hard science: they have long formulae, exact measurements where even the fifth decimal place matters, shiny multi-billion-dollar machines, and stereotypical crazy geniuses who would probably forget their own head if it wasn’t attached to them. Physicists don’t always make it easy for their scientific siblings (or distant cousins)*, but, admittedly, they do have a right to be smug towards psychological scientists, given the replication crisis that we’re going through. The average physicist, unsurprisingly, finds it easier to grasp mathematical concepts than the average psychologist, which means that physicists have, in general, a better understanding of probability. When I tell physicists about some of the absurd statements that some psychologists have made (“Including unpublished studies in the meta-analysis erroneously biases an effect size estimate towards zero.”; “Those replicators were just too incompetent to replicate our results. It’s very difficult to create the exact conditions under which we get the effect: even we had to try it ten times before we got this significant result!”), they practically start rolling on the floor with laughter. “Why do you even want to stay in this area of research?” I was asked once, after the physicist I was talking to had wiped away the tears of laughter. The question sounded neither rhetorical nor snarky, so I gave a genuine answer: “Because there are a lot of interesting questions that can be answered, if we improve the methodology and statistics we use.”

In physics, I am told, no experiment is taken seriously until it has been replicated by an independent lab. (Unless it requires some unique equipment, in which case it can’t be replicated by an independent lab.) Negative results are still considered informative, unless they are due to experimental errors. Physicists still have issues with researchers who make their results look better than they actually are, by cherry-picking the experimental results that fit best with their hypothesis and by post-hoc parameter adjustments – after all, the publish-or-perish system looms over all of academia. However, the importance of replicating results is a lesson that physicists have learnt from their own replication crisis: in the late 1980s, there was a shitstorm about cold fusion, set off by experimental results that were of immense public interest but theoretically implausible and difficult to replicate, and that later turned out to be due to sloppy research and/or scientific misconduct. (Sound familiar?)

Physicists take their research very seriously, probably to a large extent because it is often of great financial interest. There are physicists who work closely with industry. Even for those who don’t, their work often involves very expensive experiments. In plasma physics, a shot on ASDEX-Upgrade, the machine at the Max Planck Institute for Plasma Physics, costs several thousand dollars. The number of shots required for an experiment depends on the research aims, and on whether other data are available, but it can go up to 50 or more. This provides a very strong motivation to make sure that one’s experiment is based on accurate calculations and sound theories which are supported by replicable studies. Furthermore, as there is only one machine – and only a handful of similar machines in all of Europe – it needs to be shared with all other internal and external projects. To ensure that shots (and experimental time) are not wasted, any team wishing to perform an experiment needs to submit an application; the call for proposals opens only once a year. A representative of the team also needs to give a talk in front of a committee, which consists of the world’s leading experts in the area. The committee decides whether the experiment is likely to yield informative and important results. In short, it is not possible – as it is in psychology – to spend one’s research career testing ideas one has on a whim, with twenty participants, and publishing only if it actually ‘works’. One would be booed off the stage pretty quickly.

It’s easy to slip into an us-and-them mentality, and into feelings of superiority and inferiority. No doubt all sciences have something of importance and interest to offer to society in general. But it is also important to understand how we can maximise the utility of the research that we produce, and in this sense we can take a leaf out of the physicists’ book. The importance of replication should be taken on board in the psychological literature, too: arguably, we should simply forget all theories that are based on non-replicable experiments. Perhaps more importantly, though, we should start taking our experiments more seriously. We need to increase our sample sizes; this conclusion seems to be gradually emerging as a consensus in psychological science. This also means that our experiments will become more expensive, both in terms of money and of time. When we conduct sloppy studies, we may not lose thousands of dollars of taxpayers’ (or, even worse, investors’) money for each botched experiment, but we do waste the time of our participants and the time, nerves and resources of researchers who try to make sense of or replicate our experiments, and we stall progress in our area of research – an area with strong implications for policy makers in fields ranging from education through social equality, prisoners’ rehabilitation, and political/financial decision making, to mental health care.
--------------------------------------

* Seriously, though, I haven’t met a physicist who is as bad as the linked comic suggests.   


Acknowledgement: I'd like to thank Ondřej Kudláček, not only for his input into this blogpost and discussions about good science, but also for his unconditional support in my quest to learn about statistics.

Thursday, November 24, 2016

Flexible measures and meta-analyses: The case of statistical learning


On a website called flexiblemeasures.com, Malte Elson lists 156 dependent measures that have been used in the literature to quantify the performance on the Competitive Reaction Time Task. A task which has this many possible ways of calculating the outcome measure is, in a way, convenient for researchers: without correcting for multiple comparisons, the probability that the effect of interest will be significant in at least one of the measures skyrockets.

So does, of course, the probability that a significant result is a Type-I error (false positive). Such testing of multiple variables and reporting only the one which gives a significant result is an instance of p-hacking. It becomes problematic when another researcher tries to establish whether there is good evidence for an effect: if one performs a meta-analysis of the published analyses (using standardised effect sizes to be able to compare the different outcome measures across tasks), one can get a significant effect, even if each study reports only random noise and one creatively calculated outcome variable that ‘worked’.
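
To make the arithmetic concrete, here is a minimal sketch (my own illustration, not taken from flexiblemeasures.com or from any of the studies discussed here) of how quickly the chance of finding some significant result grows with the number of available outcome measures, assuming the measures are independent and each is tested at an alpha level of .05:

```python
alpha = 0.05

# Familywise false-positive rate for k independent outcome measures, each
# tested at alpha, when there is no true effect at all.
for k in (1, 3, 8, 156):  # 156 = the number of measures listed for the CRTT
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:>3} measures: P(at least one significant result | no effect) = {p_at_least_one:.2f}")
```

With eight measures the rate is already around one in three, and with 156 it is essentially guaranteed. Correlated measures inflate the rate somewhat less, but the qualitative point stands.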

Similarly, it becomes difficult for a researcher to establish how reliable a task is. Take, for example, statistical learning. Statistical learning, the cognitive ability to derive regularities from the environment and apply them to future events, has been linked to everything from language learning to autism. The concept of statistical learning ties into many theoretically interesting and practically important questions, for example about how we learn, and about what enables us to use an abstract, complex system such as language before we even learn to tie a shoelace.

Unsurprisingly, many tasks have been developed that are supposed to measure this cognitive ability, and performance on these tasks has been correlated with various everyday skills. Let us set aside the theoretical issues with the proposition that a statistical learning mechanism underlies the learning of statistical regularities in the environment, and concentrate on the way statistical learning is measured. This is an important question for anyone who wants to study the statistical learning process: before running an experiment, one would like to be sure that the experimental task ‘works’.

As it turns out, statistical learning tasks don’t have particularly good psychometric properties: when the same individuals perform different tasks, the correlations between their performance on the different tasks are rather low, and the test-retest reliability varies across tasks, ranging from pretty good to pretty bad (Siegelman & Frost, 2015). For some tasks, performance is not above chance for the majority of participants, meaning that they cannot be used as valid indicators of individual differences in statistical learning skill. This raises questions about why such a large proportion of published studies finds that individual differences in statistical learning correlate with various life skills, and it explains anecdotal evidence, from myself and colleagues, of statistical learning experiments that just don’t work, in the sense that there is no evidence of statistical learning.* Relying on flexible outcome measures increases the researcher’s chances of finding a significant effect or correlation, which can be especially handy when the task has sub-optimal psychometric properties (low reliability and validity reduce the statistical power to find an effect if it exists). Rather than trying to improve the validity or reliability of the task, it is easier to keep analysing different variables until something becomes significant.
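
The parenthetical point about reliability and power can be made concrete with Spearman’s classic attenuation formula, which is a general psychometric result rather than anything specific to the studies discussed here; the numbers below are purely hypothetical:

```python
import math

def attenuated_r(true_r: float, rel_x: float, rel_y: float) -> float:
    """Expected observed correlation, given the true correlation and the
    reliabilities of the two measures (Spearman's attenuation formula)."""
    return true_r * math.sqrt(rel_x * rel_y)

# Illustration: a true correlation of .5 between statistical learning and
# reading, measured with reliabilities of .4 and .8.
print(round(attenuated_r(0.5, 0.4, 0.8), 2))  # 0.28 -- roughly half the true
                                              # correlation, with a
                                              # corresponding loss of power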

The first example of a statistical learning task is the Serial Reaction Time Task. Here, participants respond to a series of stimuli which appear at different positions on a screen, pressing buttons that correspond to the location of the stimulus. Unbeknownst to the participants, the sequence of locations repeats, and their error rates and reaction times decrease. Towards the end of the experiment, normally in the penultimate block, the order of the locations is scrambled, so that the learned sequence is disrupted, and participants perform worse in this scrambled block than in the sequential ones. Possible outcome variables (all of which can be found in the literature) are listed below; a sketch of how they could be computed follows the list:
- Comparison of accuracy in the scrambled block to the preceding block
- Comparison of accuracy in the scrambled block to the succeeding (final) block
- Comparison of accuracy in the scrambled block to the average of the preceding and succeeding blocks
- The increase in accuracy across the sequential blocks
- Comparison of reaction times in the scrambled block to the preceding block
- Comparison of reaction times in the scrambled block to the succeeding (final) block
- Comparison of reaction times in the scrambled block to the average of the preceding and succeeding blocks
- The decrease in reaction times across the sequential blocks.
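
Here is the sketch referred to above: a hypothetical example (the variable names and numbers are mine, not from any published study) of how all eight measures can be derived from the same block-level data, giving the researcher eight chances to find an ‘effect’:

```python
import numpy as np

rng = np.random.default_rng(0)
n_blocks = 8     # blocks 1-6 and 8 follow the sequence; block 7 is scrambled
scrambled = 6    # 0-based index of the scrambled (penultimate) block

# Simulated per-block means for one participant (pure noise, for illustration)
acc = rng.uniform(0.90, 0.98, n_blocks)
rt = rng.uniform(450, 550, n_blocks)

measures = {
    "acc: scrambled vs preceding":       acc[scrambled] - acc[scrambled - 1],
    "acc: scrambled vs final":           acc[scrambled] - acc[scrambled + 1],
    "acc: scrambled vs neighbour mean":  acc[scrambled] - (acc[scrambled - 1] + acc[scrambled + 1]) / 2,
    "acc: slope over sequential blocks": np.polyfit(range(scrambled), acc[:scrambled], 1)[0],
    "rt: scrambled vs preceding":        rt[scrambled] - rt[scrambled - 1],
    "rt: scrambled vs final":            rt[scrambled] - rt[scrambled + 1],
    "rt: scrambled vs neighbour mean":   rt[scrambled] - (rt[scrambled - 1] + rt[scrambled + 1]) / 2,
    "rt: slope over sequential blocks":  np.polyfit(range(scrambled), rt[:scrambled], 1)[0],
}

for name, value in measures.items():
    print(f"{name:35s} {value: .3f}")
```

Analysed across participants, each entry in this dictionary is a separate shot at a significant group difference or correlation.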

This can hardly compare to the 156 dependent variables from the Competitive Reaction Time Task, but it already gives the researcher increased flexibility to selectively report only the outcome measures that ‘worked’. As an example of how this can lead to conflicting conclusions about the presence or absence of an effect: in a recent review, we discussed the evidence for a statistical learning deficit in developmental dyslexia (Schmalz, Altoè, & Mulatti, in press). With regard to the Serial Reaction Time Task, we concluded that there was insufficient evidence to decide whether or not performance on this task differs between dyslexic participants and controls. This is partly because researchers tend to report different variables (presumably the ones that ‘worked’): as it is rare for researchers to report the average reaction times and accuracy per block (or to respond to requests for raw data), it was impossible to pick the same dependent measure from all studies (say, the difference between the scrambled block and the one preceding it) and perform a meta-analysis on it. Today, I stumbled across a meta-analysis of the same question: without taking into account differences between experiments in the dependent variable, Lum, Ullman, and Conti-Ramsden (2013) conclude that there is evidence for a statistical learning deficit in developmental dyslexia.

As a second example: in many statistical learning tasks, participants are exposed to a stream of stimuli which contains regularities. In a subsequent test phase, the participants then need to make decisions about stimuli which either follow the same patterns or not. This task can take many shapes, from a set of letter strings generated by a so-called artificial grammar (Reber, 1967) to strings of syllables with varying transitional probabilities (Saffran, Aslin, & Newport, 1996). It should be noted that both the overall accuracy rates (i.e., the observed rates of learning) and the psychometric properties vary across different variants of this task (see, e.g., Siegelman, Bogaerts, & Frost, 2016, who specifically aimed to create a statistical learning task with good psychometric properties). In these tasks, accuracy is normally too low to allow an analysis of reaction times; nevertheless, different dependent variables can be used: overall accuracy, accuracy on grammatical items only, or the sensitivity index (d’). And, if there is imaging data, one can apparently interpret brain patterns in the complete absence of any evidence of learning at the behavioural level.
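
As an illustration of how one and the same test phase yields several candidate scores, here is a small sketch with made-up response counts (a ‘hit’ being the endorsement of a grammatical item, a ‘false alarm’ the endorsement of an ungrammatical one):

```python
from statistics import NormalDist

n_grammatical, n_ungrammatical = 32, 32
hits, false_alarms = 21, 13      # hypothetical response counts

overall_accuracy = (hits + (n_ungrammatical - false_alarms)) / (n_grammatical + n_ungrammatical)
grammatical_only = hits / n_grammatical

z = NormalDist().inv_cdf         # probit transform
d_prime = z(hits / n_grammatical) - z(false_alarms / n_ungrammatical)

print(f"overall accuracy:  {overall_accuracy:.2f}")   # 0.62
print(f"grammatical only:  {grammatical_only:.2f}")   # 0.66
print(f"d' (sensitivity):  {d_prime:.2f}")            # ~0.64
```

Which of the three scores ends up in the paper can make the difference between a ‘significant’ learning effect and a null result.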

In summary, flexible measures could be an issue for evaluating the statistical learning literature, both for finding out which tasks are more likely to ‘work’ and for determining to what extent individual differences in statistical learning may be related to everyday skills such as language or reading. This does not mean that statistical learning does not exist, or that all existing work on this topic is flawed. However, it is cause for healthy scepticism about the published results, and it raises many interesting questions and challenges for future research. Above all, the field would benefit from increased awareness of issues such as flexible measures. Ideally, this would create pressure to increase the probability of getting a significant result by maximising statistical power, i.e., by decreasing the Type-II error rate (through larger sample sizes and more reliable and valid measures), rather than by using tricks that inflate the Type-I error rate.

References
Lum, J. A., Ullman, M. T., & Conti-Ramsden, G. (2013). Procedural learning is impaired in dyslexia: Evidence from a meta-analysis of serial reaction time studies. Research in Developmental Disabilities, 34(10), 3460-3476.
Reber, A. S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior, 6(6), 855-863.
Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274(5294), 1926-1928.
Schmalz, X., Altoè, G., & Mulatti, C. (in press). Statistical learning and dyslexia: a systematic review. Annals of Dyslexia. doi:10.1007/s11881-016-0136-0
Siegelman, N., Bogaerts, L., & Frost, R. (2016). Measuring individual differences in statistical learning: Current pitfalls and possible solutions. Behavior Research Methods, 1-15.
Siegelman, N., & Frost, R. (2015). Statistical learning as an individual ability: Theoretical perspectives and empirical evidence. Journal of Memory and Language, 81, 105-120.

------------------------------------------------------------------
* In my case, it’s probably a lack of flair, actually.

Wednesday, September 21, 2016

Some thoughts on methodological terrorism


Yesterday, I woke up to a shitstorm on Twitter, caused by an editorial-in-press by social psychologist Susan Fiske (who wrote my undergraduate Social Psych course textbook). The full text of the editorial, along with a superb commentary from Andrew Gelman, can be found here. This editorial, which launches an attack against so-called methodological terrorists who have the audacity to criticise their colleagues in public, has already inspired blog posts such as this one by Sam Schwarzkopf and this one by Dorothy Bishop, which broke the time-space continuum.

However, I would like to write about one aspect of Susan Fiske’s commentary which also emerged in a subsequent discussion with her at the congress of the German Society for Psychology (which, alas, I followed only on Twitter). In the editorial, Fiske states that psychological scientists at all stages of their careers are being bullied; she seems especially worried about graduate students who are leaving academia. In the subsequent discussion, as cited by Malte Elson, she specified that more than 30 graduate students had written to her in fear of cyberbullies.*

Being an early career researcher myself, I can try to imagine myself in a position where I would be scared of “methodological terrorists”. I can’t speak for all ECRs, but for what it’s worth, I don’t see any reason to stifle public debate. Of course, there is internet harassment which is completely inexcusable and should be punished (as covered by John Oliver in this video). But I have never seen, nor heard of, a scientific debate which descended to the level of violence, or of rape or death threats.

So, what is the worst thing that can happen in academia? Someone finds a mistake in your work (or thinks they have found one), and makes it public: on the internet (Twitter, a blog), in a peer-reviewed paper, or by screaming it out at an international conference after your talk. Of course, on a personal level, it is preferable that the critic approaches you privately before, or instead of, going public. On the other hand, the critic is not obliged to do this: as others build on your work, it is only fair that the public should be informed about a potential mistake. It is therefore, in practice, up to the critic to decide whether to approach you first, or whether a public approach would be more effective in getting the error fixed. Similarly, it would be nice of the critic to adopt a kind, constructive tone. That would probably make the experience more pleasant (or less unpleasant) for both parties, and it would be more effective in convincing the person being criticised to consider the critic’s point and to decide rationally whether or not it is valid. But again, the critic is not obliged to be nice. Someone who stands up at a conference to publicly destroy an early career researcher’s work is an a-hole, but not a criminal. (Though I can even imagine scenarios where such behaviour would be justified, for example, if the criticised researcher has been unresponsive to private expressions of concern about their work.)

As an early career researcher, it can be very daunting to face an audience of potential critics. It is even worse if someone accuses you of having done something wrong (whether it’s a methodological shortcoming of your experiment or a possibly intentional error in your analysis script). I have received some criticism throughout my five-year academic career; some of it was not fair, though most of it was (even if I would sometimes deny it in the initial stages). Furthermore, there are cultural differences in how researchers express their concerns about some aspect of somebody’s work: in English-speaking countries (Australia, the UK, the US), much softer words seem to be used for criticism than in many mainland European countries (Italy, Germany). When I spent six months in Germany during my PhD, I was shocked at some of the conversations I overheard between other PhD students and their supervisors: being used to the Australian style of conversation, I felt that German supervisors could be straight-out mean. Someone who is used to being told about a mistake with the phrase “This is good, but you might want to consider…” is likely to be shocked and offended if they go to an international conference and someone tells them straight out: “This is wrong.” This can leave people feeling personally attacked over what is more or less a cultural misunderstanding.

In any event, it is inevitable that one makes mistakes from time to time, and that someone finds something to criticise about one’s work. Indeed, this is how science progresses. We make mistakes, and we learn from them. We learn from others’ mistakes. Learning is what science is all about. Someone who doesn’t want to learn cannot be a scientist. And if nobody ever tells you that you made a mistake, you cannot learn from it. Yes, criticism stings, and some people are more sensitive than others. However, responding to criticism in a constructive way, and being aware of potential cultural differences in how criticism is conveyed, is part of the job description of an academic. Somebody who reacts explosively or defensively to criticism cannot be a scientist, just as someone who is afraid of water cannot be an Olympic swimmer.

---------------------------
* In response to this, Daniël Lakens wrote, in a series of tweets (I can’t phrase it better): “100+ students told me they think of quitting because science is no longer about science. [… They are the] ones you want to stay in science, because they are not afraid, they know what to do, they just doubt if a career in science is worth it.”

Monday, June 27, 2016

What happens when you try to publish a failure to replicate in 2015/2016

Anyone who has talked to me in the last year will have heard me complain about my 8-times-failure-to-replicate which nobody wants to publish. The preprint, raw data and analysis scripts are available here, so anyone can judge for themselves whether they think the rejections to date are justified. In fact, if anyone can show me that my conclusions are wrong – that the data are either inconclusive, or that they actually support the opposite view – I will buy them a bottle of the drink of their choice*. So far, this has not happened.

I promise to stop complaining about this after I publish this blog post. I think it is important to be aware of the current situation, but I am, by now, just getting tired of debates which go in circles (and I’m sure many others feel the same way). Therefore, I pledge that from now on I will stop writing whining blog posts, and I will only write happy ones – which have at least one constructive comment or suggestion about how we could improve things.

So, here goes my last ever complaining post. I should stress that the sentiments and opinions I describe here are entirely my own; although I’ve had lots of input from my wonderful co-authors in preparing the manuscript of my unfortunate paper, they would probably not agree with many of the things I am writing here.

Why is it important to publish failures to replicate?

People who haven’t been convinced by the arguments put forward to date will not be convinced by a puny little blogpost. In fact, they will probably not even read this. Therefore, I will not go into details about why it is important to publish failures to replicate. Suffice it to say that this is not my opinion – it’s a truism. If we combine a low average experimental power with selective publishing of positive results, we – to use Daniel Lakens’ words – get “a literature that is about as representative of real science as porn movies are representative of real sex”. We get over-inflated effect sizes across experiments, even if an effect is non-existent; or, in the words of Michael Inzlicht, “meta-analyses are fucked”.
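
For anyone who wants to see this for themselves, here is a quick simulation of the point (my own sketch, not taken from any of the papers or posts linked here): a literature of underpowered studies of a non-existent effect, of which only the positive, significant results get published.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, n_studies, true_d = 20, 10_000, 0.0

published_d = []
for _ in range(n_studies):
    a = rng.normal(true_d, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(a, b)
    if p < 0.05 and t > 0:   # only positive, significant results get published
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        published_d.append((a.mean() - b.mean()) / pooled_sd)

print(f"true effect:            d = {true_d}")
print(f"mean published effect:  d = {np.mean(published_d):.2f}")   # around 0.7-0.8
print(f"studies that 'made it': {len(published_d) / n_studies:.1%}")
```

A meta-analysis of the ‘published’ studies in this simulation would conclude, with great confidence, that a substantial effect exists.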

Our study

The interested reader can look up further details of our study in the OSF folder linked above (https://osf.io/myfk3/). The study is about the Psycholinguistic Grain Size Theory (Ziegler & Goswami, 2005)**. If you type the name of this theory into Google – or some other popular search terms, such as “dyslexia theory”, “reading across languages”, or “reading development theory” – you will see this paper on the first page. It has 1,650 citations at the time of writing. In other words, this theory is huge. People rely on it to interpret their data, and to guide their experimental designs and theories across diverse topics in reading and dyslexia.

The evidence for the Psycholinguistic Grain Size Theory is summarised in the preprint linked above; the reader can decide for themselves whether they find it convincing. During my PhD, I decided to do some follow-up experiments on the body-N effect (Ziegler & Perry, 1998; Ziegler et al., 2001; Ziegler et al., 2003). Why? Not because I wanted to build my career on the ruins of someone else’s work (which is apparently what some people think of replicators), but because I found the theory genuinely interesting, and I wanted to do further work to pin down the locus of this effect. So I ran study after study after study – blaming myself for the messy results – until I realised: I had conducted eight experiments, and the effect just wasn’t there. So I conducted a meta-analysis on all of our data, plus an unpublished study by a colleague with whom I’d talked about this effect, wrote it up, and submitted it.

Surely, in our day and age, journals should welcome null-results as much as positive results? And any rejections would be based on flaws in the study?

Well, here is what happened:

Submission 1: Relatively high-impact journal for cognitive psychology

Here is a section directly copied-and-pasted from a review:

“Although the paper is well-written and the analyses are quite substantial, I find the whole approach rather irritating for the following reasons:

1. Typically meta-analyses are done one [sic] published data that meet the standards for publishing in international peer-reviewed journals. In the present analyses, the only two published studies that reported significant effects of body-N and were published in Cognition and Psychological Science were excluded (because the trial-by-trial data were no longer available) and the authors focus on a bunch of unpublished studies from a dissertation and a colleague who is not even an author of the present paper. There is no way of knowing whether these unpublished experiments meet the standards to be published in high-quality journals.”

Of course, I picked the most extreme statement. Other reviewers had some cogent points – however, nothing that would compromise the conclusions. The paper was rejected because “the manuscript is probably too far from what we are looking for”.

Submission 2: Very high-impact psychology journal

As a very ambitious second plan, we submitted the paper to one of the top journals in psychology. It’s a journal which “publishes evaluative and integrative research reviews and interpretations of issues in scientific psychology. Both qualitative (narrative) and quantitative (meta-analytic) reviews will be considered, depending on the nature of the database under consideration for review” (from their website). They have even announced a special issue on Replicability and Reproducibility, because their “primary mission […] is to contribute a cohesive, authoritative, theory-based, and complete synthesis of scientific evidence in the field of psychology” (again, from their website). In fact, they published the original theoretical paper, so surely they would at least consider a paper which argues against this theory? As in, send it out for review? And reject it based on flaws, rather than the standard explanation of it being uninteresting to a broad audience? Given that they published the original theoretical article, and all? Right?

Wrong, on all points.

Submission 3: A well-respected, but not huge impact factor journal in cognitive psychology

I agreed to submit this paper to a non-open-access journal again, but only on the condition that at least one of my co-authors make a bet with me: if the paper got rejected, I would get a bottle of good whiskey. Spoiler alert: I am now the proud owner of a 10-year-old bottle of Bushmills.

To be fair, this round of reviews brought some cogent and interesting comments. The first reviewer provided some insightful remarks, but their main concern was that “The main message here seems to be a negative one.” Furthermore, the reviewer “found the theoretical rationale [for the choice of paradigm] to be rather simplistic”. Your words, not mine! For a failure to replicate, however, this is irrelevant: as many researchers rely on what may or may not be a simplistic theoretical framework based on the original studies, we need to know whether the evidence put forward by those studies is reliable.

I could not quite make sense of all of the second reviewer’s comments, but somehow they argued that the paper was “overkill”. (It is very long and dense, to be fair, but I do have a lot of data to analyse. I suspect most readers will skip from the introduction to the discussion anyway – but anyone who wants the juicy details of the analyses should have easy access to them.)

Next step: Open-access journal

I like the idea of open-access journals. However, when I submitted previous versions of the manuscript, I was somewhat swayed by the argument that going open access would decrease the visibility and credibility of the paper. This is probably true, but without any doubt, the next step will be to submit the paper to an open-access journal – preferably one with open review. I would like to see a reviewer calling a paper “irritating” in a public forum.

At least in this case, traditional journals have shown – well, let’s just say that we still have a long way to go in improving replicability in the psychological sciences. For now, I have uploaded a pre-print of the paper on OSF and on ResearchGate. On ResearchGate, the article has over 200 views, suggesting that there is some interest in this theory; the finding that the key study is not replicable seems relevant to researchers. Nevertheless, I wonder whether the failure to provide support for this theory will ever gain as much visibility as the original study – how many researchers will put their trust in a theory that they might be more sceptical about if they knew that the key study is not as robust as it may seem?

In the meantime, my offer of a bottle of beverage for anyone who can show that the analyses or data are fundamentally flawed still stands.

-------------------------------------------------------


* Beer, wine, whiskey, brandy: You name it. Limited only by my post-doc budget.
** The full references of all papers cited throughout the blogpost can be found in the preprint of our paper.

-----------------------------------------

Edit 30/6: Thanks all for the comments so far, I'll have a closer look at how I can implement your helpful suggestions when I get the chance!

Please note that I will delete comments from spammers and trolls. If you feel the urge to threaten physical violence, please see your local counsellor or psychologist.

Thursday, June 16, 2016

Naming, not shaming: Criticising a weak result is not the same as launching a personal attack

You are working on a theoretical paper about the proposed relationship between X and Y. A two-experiment study has previously shown that X and Y are correlated, and you are trying to explain the cognitive mechanisms that drive this correlation. This previous study draws its conclusions from partial correlations that control for a moderator which was not postulated a priori; raw correlations are not reported. The p-values for each of the two partial correlations are < 0.05, but > 0.04. In your theoretical paper, you stress that although it makes theoretical sense that there would be a correlation between these variables, we cannot be sure about this link.

In a different paradigm, several studies have found a group difference in a certain task. In most studies, this group difference has a Cohen’s d of around 0.2. However, three studies which all come from the same lab report Cohen’s ds ranging between 0.8 and 1.1. You calculate that it is very unlikely to obtain three huge effects such as these by chance alone (probability < 1%). 
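
The scenario above does not specify the numbers, so as an illustration of the kind of calculation meant here, assume (purely for the sake of the sketch) a true effect of d = 0.2, 25 participants per group, and three independent studies:

```python
import numpy as np

rng = np.random.default_rng(2)
true_d, n_per_group, n_sim = 0.2, 25, 100_000

# Simulate the sampling distribution of the observed Cohen's d
a = rng.normal(true_d, 1.0, (n_sim, n_per_group))
b = rng.normal(0.0, 1.0, (n_sim, n_per_group))
pooled_sd = np.sqrt((a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / 2)
observed_d = (a.mean(axis=1) - b.mean(axis=1)) / pooled_sd

p_single = (observed_d >= 0.8).mean()
print(f"P(observed d >= 0.8 in one study):   {p_single:.3f}")       # roughly 0.02
print(f"P(all three studies show d >= 0.8):  {p_single ** 3:.6f}")  # well below 1%
```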

For a different project, you fail to find an effect which has been reported by a previously published experiment. The authors of this previous study have published their raw data a few years after the original paper came out. You take a close look at this raw data, and find some discrepancies with the means reported in the paper. When you analyse the raw data, the effect disappears.

What would you do in each of the scenarios above? I would be very happy to hear about it in the comments!

From each of these scenarios, I would draw two conclusions: (1) the evidence reported by these studies is not strong, to say the least, and (2) it is likely that the authors used what we now call questionable research practices to obtain significant results. The question is what we can conclude in our hypothetical paper, where the presence or absence of the effect is critical. Throwing around accusations of p-hacking can turn ugly. First, we cannot be absolutely sure that there is something fishy: even if you calculate that the likelihood of obtaining a certain result is minimal, it is still greater than zero, so you can never be completely certain that something questionable is going on. Second, criticising someone else’s work is always a hairy issue. Feelings may get hurt, the desire for revenge may arise, and careers can get destroyed. Especially as an early-career researcher, one wants to steer clear of close-range combat.

Yet, if your work rests on these results, you need to make something of them. One could just ignore them – not cite these papers, pretend they don’t exist. It is difficult to draw conclusions from studies with questionable research practices, so they may as well not be there. But ignoring relevant published work would be childish and unscientific. Any reader of your paper who is interested in the topic will notice this omission. Therefore, one needs to at least explain why one thinks the results of these studies may not be reliable.

One can’t explain why one doesn’t trust a study without citing it – a general phrase such as “Previous work has shown this effect, but future research is needed to confirm its stability” will not do. We could remain vague in our accusations: “Previous work has shown this effect (Lemmon & Matthau, 2000), but future research is needed to confirm its stability.” This, again, does not sound very convincing.

There are therefore two possibilities: either we drop the topic altogether, or we write down exactly why the results of the published studies would need to be replicated before we would trust them, kind of like what I did in the examples at the top of the page. This, of course, could be misconstrued as a personal attack. Describing such studies in my own papers is an exercise involving very careful phrasing and proofreading for diplomacy by very nice colleagues. Unfortunately, this often leads to the watering down of arguments, and tip-toeing around the real issue, which is the believability of a specific result. And when we think about it, this is what we are criticising – not the original researchers. Knowledge about questionable research practices is spreading gradually; many researchers are still in the process of realising that they can really damage a research area. Therefore, judging researchers for what they have done in the past would be neither productive, nor wise.

Should we judge a scientist for having used questionable research practices? In general, I don’t think so. I am convinced that the majority of researchers don’t intend to cheat; rather, they believe that they have legitimately maximised their chance of finding a very small and subtle effect. It is, of course, the responsibility of the critic to make it clear that the problem lies with the study, not with the researcher who conducted it. But the researchers whose work is being criticised should also consider whether the criticism is fair, and respond accordingly. If they are prepared to correct any mistakes – publishing file-drawer studies, releasing untrimmed data, conducting a replication, or, in more extreme cases, publishing a correction or even retracting a paper – it is unlikely that they will be judged negatively by the scientific community; quite the contrary.

But there are a few hypothetical scenarios in which my opinion of the researcher would decrease: (1) if the questionable research practice was data fabrication rather than something more benign such as creative outlier removal, (2) if the researchers use any means possible to suppress studies which criticise or fail to replicate their work, or (3) if the researchers continue to engage in questionable research practices even after they learn that these increase their false-positive rate. This last point bears further consideration, because pleading ignorance is becoming less and less defensible. By now, a researcher would need to live under a rock not to have at least heard about the replication crisis. And a good, curious researcher should follow up on such rumours, to check whether issues of replicability might also apply to their own work.


In summary, criticising existing studies is essential for scientific progress. Identifying potential issues with experiments saves time, as researchers won’t go off on a wild-goose chase for an effect that doesn’t exist, and it helps us to narrow down which studies need to be replicated before we consider them backed up by evidence. The criticism of a study, however, should not be conflated with criticism of the researcher – either by the critic or by the person being criticised. A clear distinction between criticism of a study and criticism of a researcher would result in a climate where discussions about the reproducibility of specific studies lead to scientific progress rather than to a battlefield.