Friday, October 16, 2020

Anecdotes showing the flaws of the current peer review system: Case 2

Earlier this week, I published a Twitter poll with a question relating to peer review. Here is the poll, as well as the results from 91 voters (see here for a link to the poll and to the responses).

The issue is one that I touched on in my last blogpost: When we think that a paper we are reviewing should be rejected, should we make this opinion clear to the editor, or is it simply our role to list the shortcomings and leave it up to the editor to decide whether they are serious enough to warrant a rejection? 

Most respondents would agree to re-review a paper that they think should not be published, but add a very clear statement to the editor about this opinion. This corresponds to the view of the reviewer as a gatekeeper, whose task it is to make sure that bad papers don't get published. About half as many respondents would agree to review again with an open mind, and to accept the paper if, eventually, the authors improve it sufficiently to warrant publication. This response reflects the view of a reviewer as a guide, who provides constructive criticism that will help the authors produce a better manuscript. About equally common was the response of declining to re-review in the first place. This reflects the view that it's ultimately not the reviewers' decision whether the paper should be published, but the editor's. The reviewers list the pros and cons, and if the concerns remain unaddressed and the editor still passes the paper on to the reviewers, clearly the editor doesn't think these concerns are major obstacles to publication. The problem with this approach is that it creates a loophole for a really bad paper: if the editor keeps inviting re-submissions and critical reviewers only provide one round of peer review, it is only a matter of time until the lottery results in only non-critical reviewers who are happy to wave the paper through. 

The view that it's the reviewer's role to provide pros and cons, and the editor's role to decide what to do with them, is the one that I held for a while, and which led me to decline a few invitations to re-review that, in retrospect, I regret. One of these I described in my last blogpost, linked above. Today, I'll describe the second case study. 

I don't want to attack anyone personally, so I made sure to describe the paper from my last blogpost in as little detail as possible. Here, I'd like to describe some more details, because the paper is on a controversial theory which has practical implications, some strong believers, and, in my view, close to no supporting evidence. Publications which make it look like the evidence is stronger than it actually is can potentially cause damage, both to other researchers, who invest their resources in following up on an illusory effect, and to the general public, who may trust a potential treatment that is not backed up by evidence. The topic is - unsurprisingly for anyone who has read my recent publications (e.g., here and here) - statistical learning and dyslexia. 

A while ago, I was asked to review a paper that compared a group of children with dyslexia and a group of children without dyslexia on statistical learning, along with some other cognitive tasks. The authors found a huge group difference, and I started to think that maybe I was wrong with my whole skepticism thing. Still, I asked for the raw data, as I do routinely; the authors declined, citing privacy concerns, but added scatterplots of their data instead. At this stage, after two rounds of peer review, I noticed something very strange: there was absolutely no overlap in the statistical learning scores between children with dyslexia and children without dyslexia. After having checked with a stats-savvy friend, I wrote the following point (this is an excerpt from the whole review, with only the relevant information): 

"I have noticed something unusual about the data, after inspecting the scatterplots (Figure 2). The scatterplots show the distribution of scores for reading, writing, orthographic awareness and statistical learning, separated by condition (dyslexic versus control). It seems that in the orthographic awareness and statistical learning tasks, there is no overlap between the two groups. I find this highly unlikely: Even if there is a group difference in the population, it would be strange not to find any child without dyslexia who isn’t worse than any child with dyslexia. If we were to randomly pick 23 men and 23 women, we would be very surprised if all women were shorter than all men – and the effects we find in psychology are generally much smaller than the sex difference in heights. Closer to home, White et al. (2006) report a multiple case study, where they tested phonological awareness, among other tasks, in 23 children with dyslexia and 22 controls. Their Figure 1 shows some overlap between the two groups of participants – and, unlike the statistical learning deficit, a phonological deficit has been consistently shown in dozens of studies since the 1980s, suggesting that the population effect size should be far greater for the phonological deficit compared to any statistical learning deficit. In the current study, it even seems that there was some overlap between scores in the reading and writing tasks across groups, which would suggest that a statistical learning task is more closely related to a diagnosis of dyslexia than reading and writing ability. In short, the data unfortunately do not pass a sanity check. I can see two reasons for this: (1) Either, there is a coding error (the most likely explanation I can think of would be some mistake in using the “sort” function in excel), or (2) by chance, the authors obtained an outlier set of data, where indeed all controls performed better than all children with dyslexia on a statistical learning task. 
I strongly suggest that the authors double-check that the data is reported correctly. If it is, the unusual pattern should be addressed in the discussion section. If the authors obtained an outlier set of data, the implication is that they are very likely to be reporting a Magnitude Error (see Gelman & Carlin, 2014): the obtained effect size is likely to be much larger than the real population effect size, meaning that future studies using the same methods are likely to give much smaller effect sizes. This should be clearly stated as a limitation and direction for future research."
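As an aside, the intuition behind this sanity check is easy to verify with a quick simulation. This is a sketch with made-up numbers (a standardised difference of d = 1 and two groups of 23, roughly matching the sample sizes above), not something from the original review:

```python
import random

random.seed(42)

def no_overlap_rate(d=1.0, n=23, sims=5000):
    """Estimate how often two samples of size n show complete
    separation (every control scoring above every child with
    dyslexia), given a true standardised group difference d."""
    hits = 0
    for _ in range(sims):
        controls = [random.gauss(d, 1) for _ in range(n)]
        dyslexic = [random.gauss(0, 1) for _ in range(n)]
        if min(controls) > max(dyslexic):
            hits += 1
    return hits / sims

# d = 1 would already be a huge effect by psychology standards,
# yet complete separation between the samples practically never occurs.
print(no_overlap_rate(d=1.0))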

Months later, I was invited to re-review the paper. The editor, in the invitation letter, wrote that the authors had collected more data and analysed it together with the already existing dataset. This, of course, is not an appropriate course of action, assuming I was right about my sorting-function hypothesis (which, to me, still seems like the most plausible benign explanation): analysing a probably non-real and certainly strongly biased dataset together with some additional real data points still leads to a strongly biased final result.
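A small simulation illustrates the point (hypothetical numbers, assuming the sort-mishap explanation): even when a mis-sorted dataset is pooled with clean new data of the same size, about half of the spurious effect survives in the combined estimate.

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

n_old, n_new = 23, 23

# Original dataset: in truth, no group difference at all.
old_scores = [random.gauss(0, 1) for _ in range(2 * n_old)]

# A "sort" mishap assigns the 23 highest scores to the controls:
old_scores.sort()
dyslexic_old, controls_old = old_scores[:n_old], old_scores[n_old:]

# Newly collected, unbiased data (again, no true difference):
controls_new = [random.gauss(0, 1) for _ in range(n_new)]
dyslexic_new = [random.gauss(0, 1) for _ in range(n_new)]

diff_old = mean(controls_old) - mean(dyslexic_old)
diff_combined = (mean(controls_old + controls_new)
                 - mean(dyslexic_old + dyslexic_new))

print(round(diff_old, 2))       # spurious "effect" from the sorted data
print(round(diff_combined, 2))  # diluted, but still far from zero
```

Doubling the sample does not fix the bias; only excluding or correcting the suspect dataset would.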

After some hesitation, I declined, with the justification that the editor and other reviewers should decide whether they thought my concerns were justified. Now, again months later, this article has been published, and it frequently shows up in my ResearchGate feed, with recommendations from colleagues who, I feel, would not endorse it if they knew its peer review history. The scatterplots in the published paper show the combined dataset: indeed, among the newly collected data, there is a lot of overlap in statistical learning between the two groups, which adds noise to the unrealistically and suspiciously neat plots from the original dataset. This means that even a skeptical reader looking at this scatterplot is unlikely to come to the same conclusion as I did. To be fair, I did not read the final version of the paper beyond looking at the plots: perhaps the authors honestly describe the very strange, probably spurious pattern in their original dataset, or provide an amazingly logical and obvious reason for it that I did not think of.

This anecdote demonstrates my own failure in acting as a gatekeeper who prevents articles that should not be published from making it into the peer-reviewed body of literature. The moral for myself is that, from now on, I will agree to re-review papers I've reviewed previously (unless there are timing constraints that prevent me from doing so), and I will be clearer when my recommendation is not to publish the paper, ever. (In my reviewing experience so far, this happens extremely rarely, but I have learned that it does happen, and not only in this single case.) 

As in my last blogpost, I will conclude with some broader questions and vague suggestions about the publication system in general. Some open questions: Are reviewers obliged to do their best to keep a bad paper out of the peer-reviewed literature? Should we blame them if they decline to re-review a paper instead of making sure that a serious concern of theirs has been addressed (and, if so, what about those who decline for a legitimate reason, such as health issues or leaving academia)? Or is it the editor's responsibility to ensure that all critical points raised by any of the reviewers are addressed before publication? If so, how should this be implemented in practice? Even as a reviewer, I sometimes find that, during the time that passes between writing a review and seeing the revised version, I have forgotten all about the issues that I'd raised previously. For editors, who probably handle more manuscripts than an average reviewer, remembering all reviewers' points might be too much to ask. 

And as a vague suggestion: To some extent, this issue would be addressed by publishing the reviews along with the paper. This practice wouldn't need to add weight to the manuscript: on the article page, there would simply be an option to download the reviews, next to the option to download any supplementary materials such as the raw data. This is already done, to some extent, by some journals, such as Collabra: Psychology. However, the authors need to agree to this, which in a case such as the one I described above seems very unlikely. To really address the issue, publishing the reviews (whether with or without the reviewers' identities) would need to be compulsory. This would come with the possibility of collateral damage to authors if a reviewer throws around wild and unjustified accusations. Post-publication peer review, as is done on PubPeer, would not fully address this particular issue. First, it comes with the same danger of unjustified criticism potentially damaging honest authors' reputations. Second, ultimately, a skeptical reviewer who doesn't follow the paper until the issues are resolved or the paper is rejected helps the authors to hide these issues, such that another skeptical reader will not be able to spot them so easily without knowing the peer review history.

Thursday, October 8, 2020

Anecdotes showing the flaws of the current peer review system: Case 1

A friend, who had decided not to pursue a PhD and an academic career after finishing his Masters degree, asked me how it's possible that so many of the papers that are published in peer-reviewed journals are - well - bullshit. As a response, I told him about a recent experience of mine. 

A while ago, I was asked to review a paper for a journal with a pretty high impact factor. I agreed: the paper was right in my area of expertise and sounded very interesting. When I read the manuscript, however, I was less enthusiastic. Let's say: I've seen better papers desk-rejected by lower impact factor journals. This was a sloppily designed study with overstated conclusions. I wrote the review following my standard template: first, summarise the paper in a few sentences, then write something nice about it, then list major and minor points, with suggestions that would address them whenever possible. I hold on to the belief that any study that the authors thought was worth conducting is also worth publishing, at least in some form. In the paper, I detected a potential major confound, and I had the impression that the authors wanted to hide some of the information relating to it, so I asked for clarifications. 

I submitted my review, and as always, a while later, received the decision letter. The other reviews were also lukewarm at best, so I was very surprised that the action editor invited a revision! When the authors resubmitted the paper, I agreed to review it again. However, most of my comments remained unaddressed, and my overall impression was that of the authors trying to hide some of the design flaws to blow up the importance of the conclusions. I wrote a slightly less reserved review, stating more clearly that I didn't think the paper should be published unless the authors addressed my comments. When I was invited to participate in the third round of reviews, I declined: I just didn't want to deal with it. 

Several months later, I saw the paper published in the very same high impact factor journal. As the academic world is small, I now knew for sure what I had suspected despite the anonymity of the peer review process: the senior author of that paper was a friend of the action editor's.

This is, of course, an anecdote, coloured by my own perceptions and preconceptions. There is nothing to suggest, other than my own impression, that the paper was published only because of the friendship between the author and editor. Maybe (probably) I'm way too skeptical in my reading of articles. That was also one of the reasons why I had declined to do a third round of review: I wanted to leave it up to the editor and the other reviewers to decide whether my concerns were justified. But let's be honest: Is anyone truly surprised that there are some cases where editors are more lenient when they personally know the author(s)? And, if we are truly honest, isn't this just a very natural thing that we do ourselves whenever we judge our colleagues' papers, be it as reviewers or editors or simply as readers: letting people we know and like get away with things that we would judge strangers harshly for? 

Maybe this anecdote, along with your own personal experiences, is convincing enough to show that at least sometimes, personal interest interferes with objective judgements and allows articles to pass peer review when they wouldn't hold up to scrutiny under other circumstances. This raises two questions, to which I don't have an answer: How often does this happen, and is this really a problem? And, more importantly, what is a better system? 

For years, I've been an advocate for as much transparency as possible in all aspects of the research process, and in line with this principle, I started signing my reviews shortly after I finished my PhD (though I stopped signing them later). Now, I am coming to the conclusion that anonymity has substantial advantages - not only when the reviewers don't know the identity of the authors, but also when the editors don't. Would this help? Well, maybe not. Years ago, I was told by a senior researcher that it doesn't matter whether peer review is anonymous or not, because it's normally obvious who exactly - or at least which lab - produced the paper. In my experience (I've reviewed ca. 60 papers since then), this is often true, and when I review an anonymous paper I cannot stop myself from taking a guess at who the authors are.

So, to conclude, I don't have the answers to the two questions I asked above. But I do know that experiencing such anecdotes leaves me discouraged and frustrated about a system where one's chances of being employed are determined based on whether one publishes in high impact factor journals or not.

Thursday, May 28, 2020

What is a habilitation?

One of my earliest childhood memories is my dad's habilitation party. My mum, sister and I met my dad in front of a huge building, the library of the University of Bonn. While we waited for him, my mum explained what a habilitation was. I don't remember her explanation in detail, but this is what I took away from it at the time: Our dad had to publish some things, which were now available in this library. No, it wouldn't be interesting for us (my sister and me) to go have a look inside the library, because they didn't have kids' books. (It blew my mind that there could be such a thing as a library which would not be interesting to me.) Yes, our dad was a published author. No, he had not written a book - he had written some articles that were published as a kind of chapter in journals, a bit like the "Bummi" magazine, but about maths. Then, I remember, we went out for dinner. To my mind, it took an eternity for our food to arrive after we had ordered, while the adults talked about adult stuff.

Ten years later, my dad had not found a professorship in Germany, and we moved to Australia. I finished high school, completed my bachelor's degree, and moved on to a PhD. I had not heard the word "habilitation" for a very long time. Only once did an Australian colleague ask me during my PhD: "I heard that in Germany, after you finish your PhD, you need to write another thesis." I shrugged. "Ah yes, there was something like that."

Around twenty years after my early memory from Bonn, against my expectations, I found myself in the German academic system. I was reminded of the concept of a habilitation when applying for post-doc grants, and reading successful applications, where the applicants promised to do a habilitation as part of the post-doc. "Is this... still a thing?" I asked my prospective boss (in other words, of course -- and in an email starting, as German etiquette demands, with "Highly esteemed Mr. Professor [...]"). "Yes, a habilitation is generally required", was the formal reply. I wrote in my proposal that I'd do a habilitation, but then forgot about it for a while.

After a few years, a few rejected grant proposals, but also a few successes, I had the funding to hire a PhD student. I invited a masters student whom I'd supervised during my previous post-doc at the University of Padova in Italy. Except, it turned out that while I had managed to get the funding to hire a PhD student within the German academic system, a habilitation is actually a prerequisite for supervising PhD students. To make things more complicated, our faculty had recently changed the PhD requirements: instead of registering the names of your supervisor(s) ("Promotionsbetreuer"), we needed to find a thesis advisory board of three people, all of them habilitated, and at least one of them external. "Can we list an external advisor from Australia?", I asked a bureaucrat on the phone. "She isn't habilitated: habilitation doesn't exist in Australia." I may as well have told her that I'd been hand-feeding a unicorn the other day. At some stage, my student took over the bureaucracy. "Does the habilitation requirement mean that Dr. Schmalz can't actually be my supervisor?", she asked the bureaucrat at some stage. "Of course she can be your supervisor", was the reply. "She just can't be on the advisory board." Translation: she can do the work, but she can't take the credit for it.

So, what actually is a habilitation? After some googling, I managed to patch together the following information: A habilitation is required as a demonstration that you can teach and research independently. After you finish a habilitation, you are formally qualified for a professorship position. Nowadays, there are alternative pathways to a professorship, but doing a habilitation still seems to be very common. In order to get the habilitation, you do what most aspiring researchers do anyway: you research and you teach. From my more experienced colleagues, I got the following step-by-step instructions for doing a habilitation: Step 1: Find a supervisor. Step 2: Make a written agreement about all the things you want to achieve during your habilitation (number of courses taught, number of first- or last-author publications, number of students supervised). Step 3: Your supervisor arranges the teaching for you.

To me, this does not make much sense. You're supposed to show that you can research independently by working with a supervisor. You can't be trusted to teach as a professor unless you have shown that you can teach by teaching. But the steps seemed quite straightforward. Until I learned something that led me to procrastinate on starting the procedure for another couple of years: in Germany, the habilitation title is tied to the university where you work, and in order to keep it, you need to continue teaching at this university until you find a professorship position. Once you start a professorship position, you lose the habilitation title, but you gain the professor's title, which is worth much more. (Not to mention that it comes with a very high status and a permanent position.) Except, of course, professorships are far more competitive than they were back in the day when the habilitation may have made sense: it is not uncommon for researchers to be on short-term contracts for more than a decade until they either leave academia or win the lottery in the form of a professorship position (or, as my dad did, take up a position in another country). Mostly, researchers who are habilitated but don't have a professorship position continue to teach for free.

The habilitation process seems to have changed a lot since I was waiting in front of the university library in Bonn. When I discussed this with my parents, they were surprised that a habilitation supervisor is required. They were also surprised that there is no stipend: back in his day, my dad had received a "Habilitationsstipendium" from the German Research Foundation to cover living expenses. I'd have to do it alongside the full-time project that I'm paid to work on. Like most researchers of the older generation, my parents encouraged me to do the habilitation. To the older generation, a habilitation seems to be an honorary title. To my generation, it seems to be more of a nuisance. "What's the point of doing a habilitation?" I asked my mentor, a professor in our faculty. "It's not like it will guarantee that I will get a professorship." -- "Well, to put it differently", my mentor said, "if you don't do a habilitation, you won't get a professorship." The thing I love about academics: they will counter pessimism and whining with indisputable logic.

In the meantime, I appear to have found a loophole to the problem of losing the habilitation title if you don't continue to teach: Other Central European countries also have habilitations, but you can keep the title for life. Luckily, Munich is close to the border of one such country: Austria. So, all I have to do is find a professor who could supervise me, write to them, and tell them, basically, that I'm willing to commute and to do free teaching for them, and put their department as the second affiliation for all of my publications.

To summarise: What is a habilitation? "I guess I haven't been in Germany for long enough", I confessed to a recently habilitated colleague. "I'm still very confused about the purpose of the habilitation." She laughed, and replied: "It's not because you haven't been in Germany for long enough!" It is noteworthy that my text processing software has underlined every instance of the word "habilitation" in this blogpost as a typo, and suggests the following alternatives: "rehabilitation", "habitation", "debilitation", and "habituation".

Monday, April 13, 2020

The Open Webinar Series on Psycholinguistics and Research Methods

For me, the last month has been a lot about rethinking and re-planning, about cancelling existing plans, or putting them on hold for an indeterminate amount of time.

One of my grand plans for this year had been to establish a seminar series at my department. After all, in my view, research cannot progress unless there is exchange of ideas, unless we learn about new approaches, and meeting new people and taking them out for a subsequent dinner or drinks for some informal conversations is always a bonus. From my time as a PhD student, I'd benefited a lot from the department's seminar series, even for talks where I'd come in with the expectation that I wouldn't hear anything that would ever apply to my work. I'd had the ambition to establish a seminar series at my current department for a number of years, but realising it requires money. There should be at least enough to cover travel costs, accommodation, and the dinner or drinks after the talk. This is why I was excited to get some funding that I could set aside for this purpose for this year. I had started contacting people whose work I'd like to hear more about, and already had two talks (almost) scheduled. This is one of the first plans that, with the Corona-crisis, had to be put on hold.

To summarise the problem: I'd like to initiate a seminar series, but the current situation does not allow guest speakers to travel to Munich, or any larger number of people to gather in one room for the talk. The solution is rather simple, and has been adopted across the globe for many similar events: do it in a digital format. This solution comes with the drawback of no dinner or drinks afterwards (though, I guess, with enough enthusiasm, it could be arranged if everyone brings their own drinks to the video-conference). However, it also comes with a few advantages. First, as the speaker is not required to travel, the digital format is cheaper and less time-consuming. This means that one can invite speakers without any funding limitations: a guest speaker from, say, Sydney, would cost the organiser in Munich as much as a guest speaker from Regensburg. Second, a video-conference room can be made open to everyone, not just to members of the department.

After turning these ideas over in my head for a few days, I wrote a few tweets about it, got a few encouraging responses, and decided: "Let's try it! What's there to lose?"

Things I thought through
My first step was to decide on the topic. Reluctant to name the series "Webinars on stuff that interests Xenia", I decided to keep it broad enough to encompass a wide range of topics, while keeping it narrow enough to be of interest to a specific audience.

Then, I created a Google Form to gauge interest. I didn't want to invest time into organising a regular event and recruit guest speakers if it would turn out that nobody has time to attend these webinars anyway. I've since de-activated the form, but below, I list the questions I asked.

The description of the form was as follows: "Many of us are working from home, which could present an opportunity to connect and exchange ideas beyond our close colleagues. If there is sufficient interest (> 10 people who promise to write it as a fixed slot in their calendar and to attend if they possibly can) I'll try to organise regular slots with speakers from around the world. Please fill out the form by the end of the week (28.3.2020)".

Then, I asked about people's names, email addresses, time zones, preferred days of the week, whether they had any preferences or suggestions for video conferencing software, and how often they'd prefer the meetings to take place. To gauge interest, I asked them to choose from three options: (1) "I'd like to attend", (2) "I'd like to present", and (3) "I'll write the fixed slot into my calendar and make it a priority to attend as many webinars as possible". I'd decided, a priori, to use the number of people ticking the third option to decide whether I'd take further steps in organising the event. For those who wanted to present, I also had a field where they could enter the topic they'd present on. Finally, for those who wanted to attend, I allowed them to choose from a list, or add their own answer, in response to the question of what kind of topics interested them most.

After I'd finished the form, I tweeted a link to it, and went through the list of people I follow to tag everyone who I thought might be interested in this topic and whose work I'd like to hear more about. Note that this, in addition to choosing the title of the webinar series, gave me some control over the direction in which the webinar series was going. Without wanting to exclude anyone, I wanted to push it in the direction of reading research: while I'm always interested in a broad array of topics, if the talks moved too far away from my research interests, the group would no longer serve its original purpose (for me). I also sent an email to my department, and encouraged everyone on Twitter to share the link.

Altogether, I got 78 responses, with more than 30 people promising to write the events into their calendars and attend whenever they could, and 14 people saying they would like to present. The other responses justified some executive decisions: the meetings would be held biweekly on Thursdays. Most respondents were from Europe, but there were also respondents from other parts of the world, ranging from America to Australia. This made it impossible to find a time slot that would fall within working hours for everyone. Instead, I decided on two different time slots: on every second Thursday of the month, the webinar would take place at 9am CET (a time slot that should be convenient for people based in Australia), and on every fourth Thursday, at 16:00 CET (a time slot that should be convenient for people based in America).

I was very pleased by the diversity of respondents. Many of the respondents were Early Career Researchers (ECRs; PhD students and post-docs). Also, a few of the more senior researchers whom I'd tagged signed up and offered to give a talk. Such a seminar series, of course, is a very good opportunity for junior researchers to present their work to an international audience. However, it is also good to have more experienced researchers, both to allow the ECRs to learn from them, and as a kind of "star" effect. There was variability in terms of the countries in which the researchers are based. I somewhat regret that I did not ask about this in the Google Form, but judging by the names, email addresses, and those people I know personally, the respondents are based in countries as diverse as Brazil, the USA, the Netherlands, the UK, Italy, Germany, Austria, Denmark, Serbia, Russia, Turkey, Israel, Iran, and Australia. Try to get such a diverse audience together for a department seminar series, or even an international workshop on psycholinguistics!

The next step was to create a Google Sheet. Quite simply, the sheet contains a list of slots, and the request for people to sign up for a slot where they'd like to present.

Things I didn't think through
In terms of software, the simplest solution seemed to be Zoom (which was also preferred by the majority of the respondents to the Google Form). After I had signed up for a professional account (which includes meetings of unlimited duration with up to 100 participants) and sent all of the form respondents the link to the event, I realised that, through my university, I could get an account with an even better plan (unlimited duration with up to 300 participants).

After having decided on Zoom, I read on Twitter that video conference organisers were having problems with trolls crashing their meetings and harassing the attendees and speakers. This is really something I don't want to deal with. It had been my plan to distribute the link to the meetings as widely as possible, to create few barriers for anyone who genuinely wants to join. Instead, I decided to create a Google Group, where anyone can sign up, but the posts are closed to non-members. The link and passwords to the webinars are sent to the members of this group, with the request to forward them to anyone who might be interested, but not to share them on any public platform. This is an unfortunate example of how something as stupid and petty as trolls can stand in the way of Open Science.

The additional step of the Google Group, I fear, created some confusion, especially since I forgot to change the information on the Google Sheet, where people sign up for talks, and where I'd originally written that I'd post the Zoom link next to each slot. Nevertheless, through advertising the group on Twitter, it now has more members than the number of people who had originally filled in the Google Form: 85.

Another thing that, in retrospect, I should have put more thought into is the choice of topics for the speakers. I'd made a word cloud of the keywords that the respondents had provided in my Google Form (in the Google Sheet linked above). However, there are also some things that I'd like to avoid for this group. One danger of giving absolutely free rein over the Google Sheet is that businesses (e.g., IBM and its SPSS 😱) could sign up for slots in order to promote their products. Therefore, in a third tab of the Google Sheet called "Updates", I added a request to avoid using slots to promote a product. This could interfere with the request of some respondents to also have tutorial sessions on particular methods or software, so I added that any such tutorials should be based on freely available software.

The first talk
Overall, especially considering the relatively small amount of time that went into organising this series so far, I am very happy with the result.

Last Thursday, we had the first slot: Mariella Paul from the Department of Neuropsychology at the MPI for Human Cognitive and Brain Sciences in Leipzig and the Berlin School of Mind and Brain gave the first presentation, "Harry Potter and the Methods of Reproducibility - a brief introduction to Open Science". There were, at peak, 48 attendees. I recorded the talk, and together with Mariella, I've uploaded both the slides and the recording to an OSF project page. The talk was scheduled as 45 minutes plus 15 minutes for questions; with many interesting questions, the discussion continued until approximately 10:15 (i.e., 15 minutes overtime). Members of the audience joined in answering the questions: for example, one question was about the timeline of a Registered Reports submission; two members of the audience turned out to be editors with experience handling Registered Reports, and could provide some insider knowledge and practical advice (e.g., if you submit a Registered Report and are under time pressure to start collecting data, check with the editor beforehand whether they can take this into account).

I'm looking forward to the next talk, which will take place on the 23rd of April, at 16:00 Central European Time. Suzi J. Styles from NTU in Singapore will be talking about "How do you catch a Hypothefish? Preregistration basics (Psycholinguistics remix ft N400s)". Does this sound like something you'd like to hear more about? Join the Google Group to get information about how to join the meeting closer to the date! Spread the word!

Some final, Open Science-related thoughts
The idea of this online webinar arose because many of us are working from home. For me, aside from the essential exchange with colleagues, the webinar format is a regular event in my working-from-home schedule which I otherwise still haven't managed to regularise or stabilise.

At this stage, nobody seems to know how long the recommendations or requirements to work from home will continue. But I hope that the usefulness of this group will by far outlive the restriction period.

At the SIPS 2019 conference, I attended an Unconference session about "The academic conference of the future" (a summary of what was discussed can be found here). The webinar format can offer an alternative for at least some aspects of conferences, and has some advantages over the traditional format of a large number of people coming together to discuss research.

Instead of having a seemingly endless series of talks, where at some stage not even buckets of coffee can keep your jetlagged brain focussed despite the interesting content, you have one slot per week (or fortnight), with enough time both to discuss and to digest the content before you need to take in the next wave of information.

Instead of applying for visas and scrambling for funding, and buying expensive airplane tickets and contributing to carbon emissions, you sign up to a Google group and click on a link 15 minutes before a talk starts.

Granted, this format requires some more creativity to make social events such as the conference dinner possible. It is possible to recruit world-leading researchers as speakers for a webinar, but then there will be no possibility for the PhD students to watch them get drunk at the conference dinner. But, let's be honest: Will we really miss this?

The webinar format is by no means novel, but I hope that it will become more and more popular as a tool for exchanging ideas and learning from researchers from across the world, across different career stages.

This is why I wrote this blog post: I outlined the steps I took (which were not many), so that you, too, can easily create a webinar series for your area of interest!

Saturday, March 21, 2020

An introvert's guide to surviving in isolation

On social media, extroverts are jokingly asking introverts for advice on how they survive being by themselves for long periods of time. Being extremely introverted, I indeed don't hear this question very often. And while I realise that it's often asked as a joke or a rhetorical question, I'm going to do the socially awkward thing (being an introvert) and give a long, serious answer.

Generally, I like being alone and find ways to entertain myself. However, even for me, there have been periods in my life with too much loneliness. In writing this blogpost, I'm keeping in mind the last of these periods: during my post-doc in Italy, I found out that, during the month of August, everyone is on holiday. The university was shut, with heavy chains blocking the main entrance to the building, and when I snuck in through the back door, I found the building completely empty, so I started working (or not working) from home. I didn't have many friends (being an introvert and finding it difficult to meet new people), and those I had were travelling themselves. I also had my own flat, and no housemates to socialise with. So I spent several weeks by myself, hardly talking to anyone. I'd like to list some things that might help others in a similar situation: being at home by themselves, perhaps working from home, or living in quarantine for a few weeks.

The situation was, of course, different from the Corona-situation now. I could travel, and scheduled regular day trips to nearby cities and towns, and one or two weekend trips to further-away cities. There was no pandemic, no external reason for fear and anxiety. However, it was tough (even for me), and thinking back, I remember a few things that helped me get through this period, and a couple of things that probably made me feel worse at the time. These are specific to me: an introvert who is impatient and has nerdy hobbies. I don't want to make it sound like the things that helped me will help everyone. But maybe some of my more extroverted friends will find some things to try out if they feel down during their time at home.

Things to do
1) Keep a routine. Eat regular meals, wake up at a reasonable time, get dressed, brush your teeth, go for walks, exercise, take showers.
2) Buy a pot plant. Even if you don't have a green thumb, a regular routine and too much time on your hands will allow you to look after it well. If you live by yourself, it's a way to bring something alive and low-maintenance into your life, and you'll be happy when the plant grows and starts blossoming.
3) Treat yourself to good meals on a regular basis. This was very easy for me, because I was living in Italy: I could taste cheeses and meats from the market, buy lots of fresh fruit and vegetables, and try out new recipes. Preparing and eating a nice dinner gave me something to look forward to each day.
4) Go outside every day. At this stage, in Bavaria, it is allowed to go outside for walks. Try to discover some new streets or parks near where you live. Find a nice place to watch the sunset, and make it part of your routine to watch it every day.
5) Vary your activities. For an introvert, there are many fun things to do at home. To name a few: reading, learning a language, writing a blog post or a short story, working on a novel (don't worry, you never have to show it to anyone - it's just a way to keep yourself busy and your brain active), playing the piano, listening to music, learning to code, watching movies, series, or YouTube videos, cooking. Go through a list of things you've always wanted to do: perhaps you have a Spanish language book on your shelf from when you wanted to learn Spanish but found out you didn't actually have the time, or your friends kept telling you about an awesome novel that you always forgot to download to your e-reader, or there's a guitar you bought ages ago and haven't touched since.

Things to avoid
1) Depressing things. There are times to read or watch movies about war, death, and destruction. But when in isolation, it's not a good time to expose yourself to things that drag you down. At this stage, this also involves my social media feeds and the news. It's important to keep up-to-date with what's happening, but it's not good to become completely absorbed by it. Minimise the amount of time you spend doing things that you know will make you feel depressed: maybe just check the news three times a day.
2) Drinking too much. In line with Point 3 from "Things to do", it's nice to have a glass of nice wine occasionally. However, it then becomes tempting to have another glass, and then another, and then another, and before you know it, you're feeling drunk, lonely, and terrible.
3) Binging on anything. For me, doing anything for too long makes me feel like my head is about to implode, and at the end of the day, it feels like it's been wasted. This involves spending the whole day binge-watching series, but also finding a book that is so interesting that I can't put it down and end up forgetting my whole routine, including sleep, until I get to the last page.
4) Building a den: It's nice to have a cozy place, for example, your favourite blanket and pillows on the couch. But it's not good to spend too much time in it. This relates back to the previous point: Spending a whole day binge watching or reading something in your den, eating in your den, drinking in your den, is not a day well-spent for me.

I focussed on things that an introvert would say, but, of course, there are other things that are advisable to do if you're in isolation and feel lonely. Actively seek contact: call or email an old friend and ask them if they're OK. Video-chat with your relatives. Check if your elderly neighbours need something from the shop. And, most of all: Stay safe, and take care of yourself!

Tuesday, March 10, 2020

Ten reasons to leave academia

I've been in academia for 9 years now (counting from the beginning of my PhD), and I don't actually hate my job most of the time. But sometimes I do, and then I wonder why I'm continuing in academia rather than finding a more stable, better-paid job. There are various things that make me wonder about this on a regular basis.

Why write a blogpost about these things? As a selfish reason, I hope that venting will make me feel better. As an altruistic reason, I think it's important for any aspiring academic to be aware of the problems associated with this career path. I'm not sure if striving towards an academic career is a rational thing to do. Either way, I'd recommend that anyone starting on this path consider alternatives, and make sure they acquire skills that will help them on the non-academic job market.

The list that I made describes my experiences, and most of them do not universally apply to academia across the board. These problems may not exist in other countries, universities, or fields of study, and conversely, there will be other things that other people hate about working in academia that I don't notice in my everyday life. Also, I've never worked outside of academia (other than a few student jobs), so I don't know whether the things that sometimes drive me insane in academia are better elsewhere. In short, any reader is encouraged to decide for themselves whether these things apply to their own academic system, and whether they would be bothered by these issues.

So, here is my list:

1) Lack of stability: Starting an academic career involves, first, doing a PhD, and then continuing with a post-doc. What happens afterwards depends on the country. In some countries it's relatively easy to get a lecturer's position after one or two post-docs (or even straight out of the PhD). In others, you do one post-doc after the other, only to end up with no professorship or tenure, no more post-doc funding, and being too old to get a good job outside of academia. In some countries, it's perfectly normal to meet post-docs who are in their late 40s. A post-doc position is fun and all when you're out of your PhD. However, at some stage, every extension of a post-doc contract feels like both a blessing and a curse, with no guarantee that it will be extended again and the knowledge that you're postponing the problem of finding a stable job for another 2 years, when it might already be more difficult to change your career path.

Often, it's expected that researchers change labs and countries every few years. This is an aspect of academia which I enjoyed very much - until I met my husband. As soon as one wants to settle down, one might find it impossible to get a permanent academic job (or any kind of academic job, for that matter) in the same place as the partner.

2) Bad long-term perspectives: There are more PhD students than post-doc positions, and way more post-doc positions than professorships. Statistically, this means that getting from the beginning of an academic career all the way to a professorship position is not very likely. The extent to which this is true depends on the country where you live. In Germany, the permanent positions are limited to professorships, which means that unless you get a professorship, there is no stability, and you probably won't get a professorship. 

3) Writing grants: If you want the opportunity to work on your own project, you need to get a grant. If you don't already have a position, you can apply for a grant that pays your salary. Some grants are so prestigious that they come with a guarantee of a permanent position for after the project is finished, such as the ERC Starting Grant. If you manage to get this kind of grant, Problems 1 and 2 from above are solved. The catch is: they're very competitive. The probability of success depends on the quality of your proposal, your track record, political issues (whom do you know? whom does your supervisor know?), and luck. The ratio of these four things is unknown (to me). However, luck is a big factor. Don't believe me? Write a mock proposal, and ask 10 different colleagues to give you feedback. Count the number of pieces of advice that you get that are in direct conflict with each other. Your reviewers could share any one of these different opinions: Which piece of advice do you incorporate, and which one do you ignore? Aside from the huge role that luck plays in getting grants: The high competitiveness means that, in the most likely scenario, you'll spend a lot of time writing a proposal that will bring you absolutely no benefit.

4) Publish or perish: For all three of the above points, your success depends a lot on where you publish. Ideally, you publish lots of papers in high impact-factor (IF) journals that get cited a lot and featured in the media. A high IF does not correspond to higher quality: if anything, the correlation between IF and different markers of quality is negative. Still, publishing in high IF journals is necessary not only for your own career, but also for your department: at our faculty, the income of the department is calculated through a system called "Leistungsorientierte Mittelvergabe" (LOM for short, translating to something like "achievement-based distribution of funds"). I am told that each additional IF point translates to roughly an additional 1,000 Euros that the department will get. This LOM also takes into account the amount of grant money you bring in and, to a lesser extent, the amount of teaching. If you don't publish a lot and don't get any grants, you're pretty much useless to your department.

Whether or not you manage to publish in a high-IF journal is, again, to some extent a matter of luck and connections. Recently, I was asked to review a paper for a relatively high-IF journal. This paper was simply not good: sloppy design, messy results. The other reviewers were also not overly enthusiastic. Yet, though in my experience such reviews normally lead to rejection, even in lower-IF journals, the editor's verdict was "major revision". After a couple of rounds of review, I decided that my comments didn't make much of a difference anyway, and declined to review the next version. The paper was, of course, eventually published, and I realised why the editor had supported it despite its so-so quality: there were a couple of big names among the authors. Here's a looming theme that applies to all of these points to some extent: success in academia depends a lot on having connections with big names, which, of course, leads to systematic discrimination against anyone who doesn't have the means to establish such connections.

5) Doing things for free: An advantage of academia is that a lot of people do it because they love it. And when people work on something that they love and truly believe in, they are willing to invest more than what they are paid for. And when people do things for free, there is the expectation that they will continue to do so. This means that you will be at a disadvantage if you don't do things for free. This leads to the next point: any nice thing that you do is no longer seen as a favour from your side, but is taken for granted, leading to a...

6) Lack of appreciation: Another advantage of academia is that you can work on a project that is very important to you. The problem is: everyone feels this way about their own project. You might be hoping for some occasional ego boost from your collaborators, colleagues, or reviewers. However, everyone else is busy with their own awesome projects and probably doesn't even have the time to read your latest paper.

7) Lack of communication: Ideally, academia should be a place of intellectual exchange. This is one of the parts that I love about my job: going to conferences and talks and learning about something new, discussing ideas with colleagues working on something completely different, and realising that you can use their approach in your own work. This hardly ever happens in real life: everyone is too busy with their own project. Any attempts to organise a reading group or after work drinks are greeted with great enthusiasm, and when the meetings actually start, the number of participants dwindles quickly from a handful of people to zero.

8) Bureaucracy: I've worked in three different countries, and in each of them, bureaucracy consumed too much time and energy. This seems to happen in different ways in different countries. In Australia, many decisions that, in other places, are made by academics are made by the administration, which leads to solutions that are not necessarily helpful for creating a good research environment (even when they are supposed to be). In Italy, bureaucracy is characterised by a lack of transparency, and in Germany, by a lack of flexibility. For example, I do some studies on reading ability in children. The easiest way to get participants would be to go to schools and test those children who have parental consent there: except the bureaucracy is so time-consuming that my colleagues advised me not to even try. Everyone spending any time at our department needs a medical certificate and up-to-date vaccinations, as well as a police check, even if they have no contact with patients or participants. Of course, nobody knows what to do if a visiting researcher can't easily get a police check from their country of origin. For external PhD supervisors, a habilitation is required; I tried and failed to explain to an administration officer that the habilitation is not a thing outside of Europe. In fact, it took months to convince the university administration to pay me a post-doc salary, because a formal requirement for getting a post-doc salary is a master's degree, and as I have a Bachelor with Honours degree and a PhD, I was considered under(?)qualified.

Bureaucracy is, of course, not specific to academia, and also makes everyday life difficult (if you're ever keen to hear a long and not-that-interesting story, ask me about getting my driver's licence changed from an Australian to a German one). However, I imagine that companies which aim to make a profit cut out a lot of the bureaucracy that costs time and brings no benefits (and is often directly damaging).

9) Salary: This is the last point on my list, because it doesn't bother me that much, personally. The salary is not bad, and I didn't go into academia because I wanted to get rich. However, if your goal is to have money, then you will probably get much more of that if you go to industry with your qualifications.

In Germany, salaries in the public sector (including universities) are determined by a class system, which is the same across all of Germany, regardless of how expensive life is in the city where you live. Munich is very expensive: buying property on an academic salary could be problematic. During your PhD, you should expect to have very little money, despite doing a job that requires a postgraduate qualification (i.e., a master's). In Germany, for jobs that require a master's degree, the standard salary class is E13 - the same salary class that is given to junior post-docs, and that already constitutes a decent salary. However, as PhD students are expected to get a shitty salary, someone came up with the disingenuous idea of paying them only part-time - normally between 50% and 75% - while expecting them to work full-time.

10) Actually, I ran out of things to complain about. I'm sure I'd think of more if I thought about it a bit longer, but what I have so far should already give some food for thought to aspiring academics (and maybe to some big shot who stumbled across this blog post and actually has the power to change some of the above?).

This blogpost is, of course, very negative, but that doesn't mean I think there are no reasons to stay in academia (I'm still here, right?). Academia, like all working environments, has pros and cons. However, in my experience, the pros of academia are often overhyped, and the cons are brushed aside as sacrifices that you need to make if you want to be a real scientist. Leaving academia is often seen as a failure, and considering alternative careers as a betrayal of your ideals. This mindset is incredibly damaging, as it allows for the exploitation of people who have been brainwashed into simply being grateful for the opportunity to strive for an academic career. So, even for readers who are not weighing up the pros and cons of staying in academia, I hope that this blogpost shows that academia is not perfect, and that there might be upsides to considering alternative careers. This realisation will make early career researchers, especially, less vulnerable to being abused and guilted into staying in academia.

To end on a brighter note, perhaps I'll get around to writing about "Ten reasons to stay in academia" for the next blogpost.

Wednesday, February 19, 2020

I would rather write an ERC grant proposal than buy lottery tickets after all

Scientists are supposed to be rational. Yet, at times, it feels like staying in academia is a completely irrational decision. The chances of success - a professorship, or a prestigious grant - are low. The costs - working overtime, dealing with high competition, occupational instability - are high.

The current blog post is inspired by a grant rejection. With a grant proposal, one invests a lot of time for a relatively small chance of success. In this sense, submitting a grant proposal feels like buying a lottery ticket. A very expensive lottery ticket: one that costs months of work, with the realisation of a project idea and a red carpet towards a professorship as the unlikely potential gain. We know that buying a lottery ticket is irrational: the long-term expected value, the gain relative to the investment, is negative. If it weren't negative for the consumer, there would be no gain for the organisers and therefore no motivation to run the lottery.

If we look just at the base success rates (e.g., for the ERC), we will find that most grants have a higher success rate than lottery tickets (e.g., the Australian National Monday Lottery Ticket). However, the investment in an ERC proposal is also much higher: a single lottery ticket costs AU$2.42, while an ERC proposal takes months of work. This made me wonder whether it wouldn't be worth it, in the long run, to buy lottery tickets instead of writing proposals. This suggestion did not go down well with my boss.

So I decided to run the numbers to see if this is indeed the case. In the following, I compare the expected value of buying an Australian National Monday Lottery Ticket to that of submitting a proposal for an ERC Starting Grant. How to calculate an expected gain for the lottery is explained here, and with a bit of research I learned how the lottery works and managed to find or estimate the necessary values. As a simplification, we consider only the possibility of winning the jackpot (AU$1,000,000). The odds of getting the jackpot, according to the lottery's website, are 1 in 8,145,060. Thus, we have a success rate (p), the potential gain (V), and the cost of a ticket (C). To calculate the expected gain, we still need to estimate the number of tickets sold (n), because the jackpot is shared if there are multiple winners. I could not find this number on the official website, but I found the overall number of winners of a recent draw on a different page. Given that 58,695 people won the last division, and that the probability of winning this division is 1 in 144, then, if I understand correctly how the lottery works, we can estimate that around 8,452,080 lottery tickets are bought per draw. Plugging these numbers into the formula (where the division by 1 + n * p accounts for having to share the jackpot with other winners):

E = p * V / (1 + n * p) - (1 - p) * C

we get:

E ≈ 0.06 - 2.42 ≈ -2.36 AU$

This means that, as we predicted, the expected value is negative: if we play the lottery, we expect, in the long term, to lose money.
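This back-of-the-envelope calculation fits in a few lines of Python. A sketch under the stated assumptions; the jackpot-sharing term and the ticket-count estimate are my own reading of how the draw works, so the exact figure is only approximate:

```python
# Expected value of one Australian Monday Lottery ticket,
# considering only the jackpot division (a simplification).
p = 1 / 8_145_060        # probability of hitting the jackpot
V = 1_000_000            # jackpot in AU$
C = 2.42                 # ticket price in AU$
n = 58_695 * 144         # rough estimate of tickets sold per draw

# If you win, you may have to share the jackpot; with n tickets in play,
# the expected number of other winners is roughly n * p.
expected_payout = V / (1 + n * p)

ev_lottery = p * expected_payout - (1 - p) * C
print(round(ev_lottery, 2))  # about -2.36: a long-term loss per ticket
```

With or without the sharing term, the result is dominated by the ticket price: the expected jackpot winnings per ticket are a few cents at most.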

Now, let's do the same thing for the ERC grant. Here, the success rate already takes into account the number of applicants, so we can use the simpler formula:

E = p * V - (1 - p) * C

Here, we need to somehow estimate the cost of submitting an ERC proposal. Assuming two months of focussing only on writing the proposal, and my current salary, the cost comes to approximately 5,000 Euros. The immediate financial gain of the proposal is 1,500,000 Euros; for now, we don't consider the additional gain of 5 years' occupational security and high chance of a permanent professorship position afterwards. The probability of success, according to the ERC, is 12.7%. This gives us:

E = 0.127 * 1,500,000 - 0.873 * 5,000 ≈ 186,135.

So, the good news is: the expected value of submitting an ERC proposal is not only positive but, with all the simplifying assumptions we made (and assuming I made no mistakes in the calculations), also quite large! I don't fully trust my lottery calculations, but even if they are wrong, it is reasonable to assume that the lottery's expected value is negative. Furthermore, the expected value for the ERC Starting Grant is conservative, in the sense that it considers only the immediate financial gain: in reality, a success comes with additional gains. The estimate may be too optimistic if my calculations underestimate the time spent writing the grant. However, keeping the other parameters constant, one would need to work for more than 7 years on the proposal before the expected value turned negative.
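The ERC side of the comparison, including the break-even point, can be checked the same way. A sketch under the stated assumptions; it uses the exact 1 - p = 0.873 rather than a rounded value, so the figure may differ slightly from the one above, and spreads the 5,000 Euros of writing cost over two months:

```python
# Expected value of submitting an ERC Starting Grant proposal.
p = 0.127                # success rate according to the ERC
V = 1_500_000            # immediate financial gain in Euros
cost_per_month = 2_500   # writing cost per month (5,000 Euros / 2 months)

ev_erc = p * V - (1 - p) * 2 * cost_per_month
print(round(ev_erc))  # roughly 186,000 Euros: large and positive

# Break-even: months of proposal writing before the expected value turns negative.
breakeven_months = (p * V) / ((1 - p) * cost_per_month)
print(round(breakeven_months / 12, 1))  # just over 7 years
```

Setting E = 0 and solving for the writing time gives the break-even point directly, which is where the "more than 7 years" figure comes from.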

There are some additional factors which may decrease the expected value. Perhaps the psychological toll of getting rejected again and again and again adds to the financial loss of the time spent on the grant. It is up to each individual to decide how much rejection affects them, and whether it tips their expected value into the negative. Furthermore, the current calculations only show that it's more rational to submit ERC Starting Grant proposals than to buy lottery tickets for the rest of your life. However, that's a really low bar to set: the gains and losses associated with finding a stable, well-paying non-academic job could far outweigh the gains associated with applying for grants.

Of course, there is one important difference between lottery tickets and grants, at least in theory. A lottery is controlled by completely random processes: I'm no more or less likely to win the lottery than any person sitting next to me in the bus on my way to work (provided we both buy the same kind of ticket). Grants, at least in theory, are awarded based on merit, not based on a random number generator. Whether or not this is actually the case is a matter of debate. Still, me applying for a grant this year and me applying for a grant next year are not independent events: my chance of success depends on my grant writing skills, connection to the reviewers, how impressive my track record looks, and so on.

Is it rational to submit proposals with the hope to stay in science? Well, maybe. For me, at least, the thought that I have a greater chance of success in academia than if I were to buy lottery tickets and hope for the best is an encouraging one.

Thursday, January 2, 2020

A year in the life of a post-doc

In some ways, the year 2019 has been remarkably unremarkable for me. In the beginning of the year, I started on a DFG-funded project, so the most acute uncertainties associated with employment have been postponed for another few years. The beginning of the project is slow; data collection has just started, projects from the previous years are at various stages of non-completion and imperfection. 

Why would I even write a blog post about my year 2019, then? Well, every fail comes with a win, and vice versa. To demonstrate: The first win of 2020 is that we booked a nice hotel in the Austrian Alps for cross-country skiing. The first fail, associated with that (and the actual reason for writing the blog post): I came down with a cold, and need to stay in the hotel room. And, again, as an attached win, I get to enjoy this view:

So, without any further ado, here's a list of my 2019 win/fail pairs: 

Win: Finalised a manuscript.
Fail: Rejected by 5 different journals.
Win: For a grant proposal, got together a team of amazing researchers from 13 different countries who agreed to collaborate with me on a cross-linguistic project.
Fail: The proposal has a <10% chance of being funded.
Win: Started learning Natural Language Processing (and data science, and programming in general).
Fail: Still need to improve a lot before I can actually apply it in my research, or use it when looking for jobs outside of academia (New Year's Resolution for 2020).
Win: Started supervising my first PhD student.
Fail: Issues with funding beyond the first year of her PhD (mainly due to stupid university bureaucracy).
Win: Started teaching.
Fail: Not sure how happy my students are with me hijacking their "research methods in clinical psychology" course and turning it into a course about statistics and open science.
Win: Learned that if I get an ERC Starting Grant, I'll be guaranteed a professorship.
Fail: Downloaded the manual for writing an ERC Starting Grant proposal, realised that it is 50 pages of densely written bureaucratese, and started wondering if I really want a professorship that much...
Win: Data collection for the new project going well, very competent research assistants.
Fail: One of the testing laptops gave a research assistant a strong electric shock and stopped working.