I just got my first big grant rejection. That’s three months of grant application writing that I could have spent collecting data, learning about stats, practicing programming, writing blog posts, reading a nice book, enjoying the summer, playing the cello, hiking in the mountains, spending quality time with the family – in sum, three months wasted. I get it: the grant agencies need to make difficult decisions about which projects to fund by reading lots of applications in a short amount of time, and the applicant needs to ensure that their project is well thought-out, useful, and – most importantly – that this comes through in the limited number of pages that the evaluators can possibly read.
The problem, as I see it, is not that grant agencies necessarily need to reject a lot of proposals (including high-quality ones), or that we need to spend a lot of time planning the project that we would like to conduct, even with the high chance that we will never be able to finance it. The thing is: I know that my proposal was weak. I have not yet read the evaluations of my proposal – I suspect that they will not contain any information that will help me to improve as a scientist. I tried my best to write a good proposal, but I just couldn’t.
A good proposal, in my ideal world, should start with a good idea. My idea, if I may say so myself, was not bad. Importantly, when I decided to apply for this grant, I resolved to really think it through. My main question was: how do I maximise the chances that the data will be useful, no matter what results I get? How will I ensure that a null result will be interpretable? That the measures are reliable, that the sample size is big enough, that the statistical results will mean what I think they mean? I wrote down my thoughts about that, and introduced the project as one that was specifically designed with the replication crisis in mind; the aim was to conduct a strongly theory-based and methodologically rigorous study which, in my opinion, is needed if we want to even begin to understand potentially causal dynamics between specific cognitive processes and some behavioural skill (in the case of my proposal, statistical learning and reading).
This grant agency has strict guidelines about how to write the application. The subsection headings are provided, and they prescribe specific issues that the proposal needs to address. Unfortunately, none of them involves maximising the replicability of the work conducted under this funding scheme. They involve many other things, however, which meant that, in order to fit in a description of what my study is actually about, I needed to cut pretty much all of my thoughts about making the study as strong as possible.
So, what kind of things did they want me to write about instead? A minor pet peeve for me was the focus on gender issues: we needed to specifically state how our research project addresses gender. At best, this is a waste of space at the cost of information that is actually useful (do, e.g., mathematicians and chemists also need to justify that gender can’t be a moderator in their work?). At worst, in the social sciences, it would encourage an atheoretical p-hacking expedition for potential gender differences. While I strongly believe that it is important to address gender bias in academia, grant agencies have millions of ways of contributing to gender equality that are more effective than asking applicants to include a section that is, in most cases, not relevant.
What really turned my proposal from (what I think is) a good idea into a mediocre proposal was the emphasis on practical value. It is especially good, I am told, if one can make a connection between one’s project and a focal societal issue from a list on the agency’s website. For example, I am interested in the cognitive processes underlying skilled reading and reading acquisition. This obviously can’t cure cancer or prevent earthquakes (or can it???). After some thinking, however, I came up with the following link: understanding the processes underlying skilled reading will help us to understand what may go wrong in developmental dyslexia. Children and teenagers with dyslexia have higher drop-out rates from school, lower self-esteem, and often comorbid developmental disorders. All of these can lead to poverty. So, by studying whether the link between statistical learning and reading ability, if there is one, is causal, I will actually be solving the problem of world poverty. Does this sound really dodgy to you? Yeah, apparently the evaluation committee thought so, too.
Why is this a problem? Well, obviously it is a problem for me, because nobody wants to fund my research. It goes deeper than that, though: the focus on immediate practical implications damages science by devaluing theoretical research and by supporting researchers who are particularly good at making a convincing case while overhyping their research.
Theoretical research is – let’s face it – more boring than a ground-breaking study that cures a disease, prevents a natural disaster, or tells us something about our psyche that we didn’t even know ourselves. However, it is extremely important. If we tried to fix our car without knowing how it works, we would spend a lot of time poking at random parts of the engine, replacing the battery, kicking the wheel. Maybe one of these things would work. Even if it did, there is no guarantee that it would work the next time the car breaks down, or for a different car. But if we understand the underlying processes, we can look at the specific symptoms and determine which part may be broken. This saves a lot of time. In short – and I apologise for the corny metaphor – both theoretical and practical research are needed in order to achieve the kind of understanding that will eventually have practical applications. But it is difficult to draw a direct link between a theoretical question and the possible implications that it will have. Admittedly, I am biased, because I have always been more drawn to theoretical issues in cognitive psychology than to its practical applications in clinical and educational settings. Perhaps this has something to do with my parents being mathematicians; I grew up listening to speeches about the value of theoretical work and the over-rating of immediate practical applications. But even Andrew Gelman argues, from a statistician’s point of view, that theory is very important.
The second issue with encouraging applicants to bend over backwards to make a link between their research and some immediate practical application is pretty self-explanatory. Some people are just very good at making others believe that their research will cure cancer. My guess is that this ability is, if anything, negatively correlated with research skills. Supporting young researchers who are particularly good at faking practical applications will lead to a future where, instead of disseminating useful research, professors will boast that they can cure dyslexia with computational modelling.
As a final disclaimer: I would probably not have written this blogpost if I had been awarded the grant. I would be too busy celebrating and preparing to conduct the study that I had proposed. In this sense, it is hypocritical of me to write this critical blogpost. After all, this grant agency has supported many great research projects. And, given that I have not yet looked at the feedback that came alongside the rejection, it is possible that the evaluators found some fundamental flaw in my proposal which would make the study useless. In this blogpost, however, my aim is not to whine about being rejected (though I won’t pretend I’m happy about that, either). The issue with supporting only research that has immediate practical implications, and only researchers who are able to convincingly argue for such implications, holds true in academia well beyond this specific grant application. Since the realisation that psychology has a replication crisis, there has been a lot of discussion about the damage done by overhyping the practical applications of one’s research and brushing away methodological and statistical issues that would, ideally, need to be considered a priori. It is unfortunate that these discussions have had so little impact on the reward system in academia.