Monday, April 17, 2023

On graphomania

As we're moving flats soon, I threw out a pile of papers, about half a meter tall when all stacked up. I've accumulated these papers over the last six years that we've been living in our current flat. They fall into three categories:

1) Papers that I started reading but then realised they weren't as relevant or interesting as I thought.

2) Papers that I printed because the title and abstract sounded (and still sound) fascinating - but as I haven't read them while they've been lying around for years, I should give up on my wishful thinking and acknowledge the fact that I will probably never have the time to read them.

3) Papers that I've read but mostly forgotten about.

If it sounds discouraging that, as part of our academic jobs, we don't really have the time to read papers, it gets worse when you consider the implication that our very own papers are probably being treated the same way. Indeed, I have found that I myself am starting to forget what I wrote in various papers where I'm the first author. For example, I spent hours writing a discussion section for a paper I'd started months previously, only to discover that past me had already incorporated most of my arguments and examples in the introduction section!

Of course, this is not a new problem, and I'm not the first one to talk about it. Dorothy Bishop wrote a more detailed blogpost with more than anecdotal observations here. There, she showed that a researcher studying autism and ADHD would need to read about 8 papers a day to keep up with all the new literature in the field (assuming they're already up to date with all papers that have previously been published).

The reason why I write so much is obvious, too: I need publications so that I get a job and so that my department gets money. And yet, as much as I love writing and, more generally, working as a researcher, I wonder if there isn't a better way to spend my time, and thereby the taxpayers' money that pays for it...

In the meantime, I'll try to practice the art of minimalist writing.

Tuesday, January 3, 2023

New Year's Resolutions of an Early-Mid-Career Researcher in Germany

Three years ago (before COVID and the birth of my now toddler, which have put my academic life on hold in some ways), I wrote a New Year's post summarising my year and my new year's resolutions. Though I see it as a kind of superstition, I still like to take this time of the year to think about my achievements so far, and about what I need to do next to get where I want to get (and, of course, about where I want to get in the first place). In some years, it's easy: it is clear what I need to focus on. In other years, it's hard: Either there are too many things to focus on, or I decide that, actually, everything is going well, and I don't need to change anything. This year, it's hard in a different sense: It's not really clear what I can do to get any closer to my goals. 

My current position is not untypical for an early-to-mid-career researcher in Germany. In some ways, it is clear where I need to get to. The goal for most researchers here is a professorship. The timing is clear, too: there is a limit on the number of years one can work as a postdoc (a controversial German law with a beautiful compound word for a name: Wissenschaftszeitvertragsgesetz). This means that I need to get a professorship (or another permanent academic position) within ca. 2 years, or else leave academia. Getting a permanent position would be good in any case, when trying to lead a stable family life and after having taken out a mortgage for a flat. Professorship positions are very competitive, especially if you are not flexible about moving to a different city, and even more so if the city where you would like to stay is Munich.

With the high competition, the direct route to a professorship (i.e., applying for a professorship position and getting it) is very unlikely to work. The application process is rather intimidating and relies a lot on insider knowledge from other academics (the "hidden curriculum"). The procedure is often not very transparent, so it is difficult to know just how far I am from getting shortlisted, let alone selected as the winner. The alternative is to try other things that increase the probability of getting a professorship, such as applying for prestigious grants or publishing high-profile papers. At some stage, my university guaranteed a professorship to any winner of an ERC Starting Grant, but it has since cancelled this policy. Some funding bodies allow one to apply for financing for a professorship position, but this requires the university to commit to paying the new professor's salary after the end of the funding period. In any case, applying for prestigious grants is itself very competitive, so to increase the chances here, one needs to apply for less competitive grants and publish papers. In short, one just has to repeatedly try various things that cost a lot of resources and have a relatively low chance of success. This does not lend itself to a good new year's resolution, because there is no single action that I could commit to, either as a one-off or as a repeated activity.

Of course, my ambitions are not simply to get a professorship for the sake of getting a professorship: primarily, I would like to continue with my research agenda, and getting a professorship is one of the not-so-many ways to do this. Having a stable job to build up my research team is a necessary condition for doing good research, but it's not sufficient. There are skills that I still need to improve to keep up to date with the best research practices. Picking a skill to improve would be a good new year's resolution, but it may not get me any closer to a professorship position. Such skills could be learning a new language or improving my programming skills, for example, by learning more about Natural Language Processing. If I pick one such skill to focus on in 2023, I may find that I have to abandon it, because it will be more advantageous, in the short term, to focus on writing a paper or grant proposal. On top of that, I also somehow keep my head above water with student supervision, family life (which I will not compromise on), and bureaucratic duties (unlike the former two, something that I don't enjoy at all but that keeps increasing as I progress in my academic career). Keeping my head above water could be a good new year's resolution, but - well - it sounds a bit depressing.

With what I have written above, some (myself included) might wonder if my ambitions are too high. In the German system, an academic career is almost an all-or-none affair: leaving academia versus becoming a full professor (and full professors in Germany have a lot more freedom and power than professors in many other countries). There are options in between a professorship in Munich and leaving academia, though. I could apply for professorships at universities outside of Munich (which would be an inconvenience, but not a disaster for my family life), though these are also very competitive. There are non-university tertiary education institutions which hire professors, but I've heard that the teaching load there is so high that, in practice, there is just no time for research. There might be research positions outside of universities that could interest me, though I haven't found anything convincing yet. Maybe I should make it my new year's resolution to decide what I really want, and whether my ambitions are realistic. But this kind of decision is likely to change a lot with incoming information, such as future successes or failures, and is unlikely to be completed by the end of the year.

In the end, I think I'll just stick to eating more vegetables as my new year's resolution for 2023.

Thursday, December 1, 2022

Should I stay or should I go? Some thoughts on switching from Twitter to Mastodon

I have not been active on social media for a while, and I must admit that keeping off social media did not feel like a big loss. Lately, I have started checking in more frequently again, though. Ironically, the reason is that I have been considering whether or not to delete my Twitter account.

I will not pretend that I know exactly what's going on with the whole Elon-Musk-buying-Twitter thing and all the pros and cons of staying on Twitter. However, I like the general idea of moving from relying on large corporations to smaller providers, so I welcomed the idea of trying out Mastodon as an alternative, and created an account. I'm pretty happy so far, which leads to the question: Should I keep my Twitter account?

My considerations here are mostly pragmatic. I have benefited a lot from being on Twitter. I managed to catch the wave when the Open Science movement started, and through Twitter, I have become an active member of the Open Science community. I have also met many colleagues and discussed with them what I refer to as my "day job": my research on reading and dyslexia. My impression is that most Open Science people have moved to Mastodon, so I should not miss out on any new developments there if I delete my Twitter account. The same does not seem to be true for the reading research community, however. Perhaps, ironically, they are more interested than the open science community in retaining their outreach to a broader audience, and are skeptical about this being possible on Mastodon.

This leaves me with the following dilemma: As a pro of keeping my Twitter account, I will be able to keep in touch with the reading research community. As a con, I will have yet another social media account, and it's not like I have so much spare time that I can keep up with yet another timeline. 

There are some other things I could do. For example, I could revive my lurker account on Twitter, which I created when the atmosphere in the Open Science community got a bit too tense for my liking and where I follow exclusively reading researchers, and delete my main account. Or, I could leave social media altogether. 

There is no conclusion to this post; it's just some disorganised thoughts, quickly jotted down between two meetings. Maybe it will encourage some reading researchers to try out Mastodon? And, in case you notice that I disappear from Twitter, I hope to stay in touch with all of the amazing people that I have met throughout the years.

Friday, October 16, 2020

Anecdotes showing the flaws of the current peer review system: Case 2

Earlier this week, I published a Twitter poll with a question relating to peer review. The poll received 91 votes (see here for a link to the poll and to the responses).

The issue is one that I touched on in my last blogpost: When we think that a paper we are reviewing should be rejected, should we make this opinion clear to the editor, or is it simply our role to list the shortcomings and leave it up to the editor to decide whether they are serious enough to warrant a rejection?

Most respondents would agree to re-review a paper that they think should not be published, but add a very clear statement to the editor about this opinion. This corresponds to the view of the reviewer as a gatekeeper, whose task it is to make sure that bad papers don't get published. About half as many respondents would agree to review again with an open mind, and to accept the paper if, eventually, the authors improve it sufficiently to warrant publication. This response reflects the view of the reviewer as a guide, who provides constructive criticism that helps the authors produce a better manuscript. About equally common was the response of declining to re-review in the first place. This reflects the view that it's ultimately not the reviewers' decision whether the paper should be published, but the editor's: the reviewers list the pros and cons, and if the concerns remain unaddressed and the editor still passes the paper on to the reviewers, clearly the editor doesn't consider these concerns major obstacles to publication. The problem with this approach is that it creates a loophole for a really bad paper: if the editor keeps inviting re-submissions and critical reviewers only provide one round of peer review, it is only a matter of time until the lottery results in only non-critical reviewers who are happy to wave the paper through.

The view that it's the reviewer's role to provide pros and cons, and the editor's role to decide what to do with them, is the one that I held for a while, and it led me to decline a few invitations to re-review - decisions that, in retrospect, I regret. One of these I described in my last blogpost, linked above. Today, I'll describe the second case study.

I don't want to attack anyone personally, so I made sure to describe the paper from my last blogpost in as little detail as possible. Here, I'd like to describe some more details, because the paper concerns a controversial theory which has practical implications, some strong believers, and, in my view, close to no supporting evidence. Publications which make the evidence look stronger than it actually is can cause damage, both to other researchers, who invest their resources in following up on an illusory effect, and to the general public, who may trust a potential treatment that is not backed up by evidence. The topic is - unsurprisingly for anyone who has read my recent publications (e.g., here and here) - statistical learning and dyslexia.

A while ago, I was asked to review a paper that compared a group of children with dyslexia and a group of children without dyslexia on statistical learning, along with some other cognitive tasks. The paper reported a huge group difference, and I started to think that maybe I was wrong with my whole skepticism thing. Still, I asked for the raw data, as I do routinely; the authors argued against this with privacy concerns, but added scatterplots of their data instead. At this stage, after two rounds of peer review, I noticed something very strange: there was absolutely no overlap in the statistical learning scores between children with dyslexia and children without dyslexia. After having checked with a stats-savvy friend, I wrote the following point (this is an excerpt from the whole review, with only the relevant information):

"I have noticed something unusual about the data, after inspecting the scatterplots (Figure 2). The scatterplots show the distribution of scores for reading, writing, orthographic awareness and statistical learning, separated by condition (dyslexic versus control). It seems that in the orthographic awareness and statistical learning tasks, there is no overlap between the two groups. I find this highly unlikely: Even if there is a group difference in the population, it would be strange if no child without dyslexia scored worse than any of the children with dyslexia. If we were to randomly pick 23 men and 23 women, we would be very surprised if all women were shorter than all men – and the effects we find in psychology are generally much smaller than the sex difference in heights. Closer to home, White et al. (2006) report a multiple case study, where they tested phonological awareness, among other tasks, in 23 children with dyslexia and 22 controls. Their Figure 1 shows some overlap between the two groups of participants – and, unlike the statistical learning deficit, a phonological deficit has been consistently shown in dozens of studies since the 1980s, suggesting that the population effect size should be far greater for the phonological deficit compared to any statistical learning deficit. In the current study, it even seems that there was some overlap between scores in the reading and writing tasks across groups, which would suggest that a statistical learning task is more closely related to a diagnosis of dyslexia than reading and writing ability. In short, the data unfortunately do not pass a sanity check. I can see two reasons for this: (1) Either, there is a coding error (the most likely explanation I can think of would be some mistake in using the “sort” function in Excel), or (2) by chance, the authors obtained an outlier set of data, where indeed all controls performed better than all children with dyslexia on a statistical learning task.
I strongly suggest that the authors double check that the data is reported correctly. If this is the case, the unusual pattern should be addressed in the discussion section. If the authors obtained an outlier set of data, the implication is that they are very likely to report a Magnitude Error (see Gelman & Carlin, 2014): The obtained effect size is likely to be much larger than the real population effect size, meaning that future studies using the same methods are likely to give much smaller effect sizes. This should be clearly stated as a limitation and direction for future research."

Months later, I was invited to re-review the paper. The editor, in the invitation letter, wrote that the authors had collected more data and analysed it together with the already existing dataset. This, of course, is not an appropriate course of action, assuming I was right with my sorting-function hypothesis (which, to me, still seems like the most plausible benign explanation): analysing a probably non-real and definitely strongly biased dataset together with some additional real data points still leads to a very biased final result.

After some hesitation, I declined, with the justification that the editor and other reviewers should decide whether my concerns were justified. Now, again months later, this article has been published, and it frequently shows up in my ResearchGate feed, with recommendations from colleagues who, I feel, would not endorse it if they knew its peer review history. The scatterplots in the published paper show the combined dataset: indeed, among the newly collected data, there is a lot of overlap in statistical learning between the two groups, which adds noise to the unrealistically and suspiciously neat plots from the original dataset. This means that even a skeptical person looking at this scatterplot is unlikely to come to the same conclusion as I did. To be fair, I did not read the final version of the paper beyond looking at the plots: perhaps the authors honestly describe the very strange (and probably spurious) pattern in their original dataset, or provide an amazingly logical and obvious reason for it that I did not think of.

This anecdote demonstrates my own failure to act as a gatekeeper who prevents articles that should not be published from making it into the peer-reviewed literature. The moral for myself is that, from now on, I will agree to re-review papers I've reviewed previously (unless timing constraints prevent me from doing so), and I will be clearer when my recommendation is that the paper should not be published, ever. (In my reviewing experience so far, this happens extremely rarely, but I have learned that it does happen, and not only in this single case.)

As in my last blogpost, I will conclude with some broader questions and vague suggestions about the publication system in general. Some open questions: Are reviewers obliged to do their best to keep a bad paper out of the peer-reviewed literature? Should we blame them if they decline to re-review a paper instead of making sure that some serious concern of theirs has been addressed (and, if so, what about those who decline for a legitimate reason, such as health issues or leaving academia)? Or is it the editor's responsibility to ensure that all critical points raised by any of the reviewers are addressed before publication? If so, how should this be implemented in practice? Even as a reviewer, I sometimes find that, during the time that passes between writing a review and seeing the revised version, I have forgotten all about the issues I'd raised. For editors, who probably handle more manuscripts than the average reviewer, remembering all reviewers' points might be too much to ask.

And as a vague suggestion: To some extent, this issue would be addressed by publishing the reviews along with the paper. This practice wouldn't need to add weight to the manuscript: on the article page, there would simply be an option to download the reviews, next to the option to download any supplementary materials such as the raw data. This is already done, to some extent, by some journals, such as Collabra: Psychology. However, the authors need to agree to this, which seems very unlikely in a case such as the one I described above. To really address the issue, publishing the reviews (with or without the reviewers' identities) would need to be compulsory. This would come with the possibility of collateral damage to authors if a reviewer throws around wild and unjustified accusations. Post-publication peer review, as is done on PubPeer, would not fully address this particular issue. First, it comes with the same danger of unjustified criticism potentially damaging honest authors' reputations. Second, a skeptical reviewer who doesn't follow the paper until the issues are resolved or the paper is rejected ultimately helps the authors to hide these issues, such that another skeptical reader will not be able to spot them so easily without knowing the peer review history.

Thursday, October 8, 2020

Anecdotes showing the flaws of the current peer review system: Case 1

A friend, who had decided not to pursue a PhD and an academic career after finishing his Masters degree, asked me how it's possible that so many of the papers that are published in peer-reviewed journals are - well - bullshit. As a response, I told him about a recent experience of mine. 

A while ago, I was asked to review a paper for a journal with a pretty high impact factor. I agreed: the paper was right in my area of expertise and sounded very interesting. When I read the manuscript, however, I was less enthusiastic. Let's say: I've seen better papers desk-rejected by lower impact factor journals. This was a sloppily designed study with overstated conclusions. I wrote the review following my standard template: first, summarise the paper in a few sentences, then write something nice about it, then list major and minor points, with suggestions that would address them whenever possible. I hold on to the belief that any study that the authors thought was worth conducting is also worth publishing, at least in some form. In the paper, I detected a potential major confound, and I had the impression that the authors wanted to hide some of the information relating to it, so I asked for clarifications.

I submitted my review and, as always, received the decision letter a while later. The other reviews were also lukewarm at best, so I was very surprised that the action editor invited a revision! When the authors resubmitted the paper, I agreed to review it again. However, most of my comments remained unaddressed, and my overall impression was that the authors were trying to hide some of the design flaws to inflate the importance of the conclusions. I wrote a slightly less reserved review, stating more clearly that I didn't think the paper should be published unless the authors addressed my comments. When I was invited to participate in the third round of reviews, I declined: I just didn't want to deal with it anymore.

Several months later, I saw the paper published in the very same high impact factor journal. As the academic world is small, I now knew for sure what I had suspected despite the anonymity of the peer review process: the senior author of that paper was a friend of the action editor's.

This is, of course, an anecdote, coloured by my own perceptions and preconceptions. There is nothing to suggest, other than my own impression, that the paper was published only because of the friendship between the author and editor. Maybe (probably) I'm way too skeptical in my reading of articles. That was also one of the reasons why I had declined to do a third round of review: I wanted to leave it up to the editor and the other reviewers to decide whether my concerns were justified. But let's be honest: Is anyone truly surprised that there are some cases where editors are more lenient when they personally know the author(s)? And, if we are truly honest, isn't this just a very natural thing that we do ourselves whenever we judge our colleagues' papers, be it as reviewers or editors or simply as readers: letting people we know and like get away with things that we would judge strangers harshly for? 

Maybe this anecdote, along with your own personal experiences, is convincing enough to show that at least sometimes, personal interest interferes with objective judgements and allows articles to pass peer review when they wouldn't hold up to scrutiny under other circumstances. This raises two questions, to which I don't have an answer: How often does this happen, and is this really a problem? And, more importantly, what is a better system? 

For years, I've been an advocate for as much transparency as possible in all aspects of the research process, and in line with this principle, I started signing my reviews shortly after I finished my PhD (though I stopped signing them later). Now, I am coming to the conclusion that anonymity has substantial advantages, not only when the reviewers don't know the identity of the authors, but also when the editors don't. Would this help? Well, maybe not. Years ago, I was told by a senior researcher that it doesn't matter whether peer review is anonymous or not, because it's normally obvious who exactly - or at least which lab - produced the paper. In my experience (I've reviewed ca. 60 papers since then), this is often true, and when I review an anonymous paper I cannot stop myself from taking a guess at who the authors are.

So, to conclude, I don't have the answers to the two questions I asked above. But I do know that experiencing such anecdotes leaves me discouraged and frustrated about a system where one's chances of being employed are determined based on whether one publishes in high impact factor journals or not.

Thursday, May 28, 2020

What is a habilitation?

One of my earliest childhood memories is my dad's habilitation party. My mum, sister and I met my dad in front of a huge building, the library of the University of Bonn. While we waited for him, my mum explained what a habilitation was. I don't remember her explanation in detail, but this is what I took away from it at the time: Our dad had to publish some things, which were now available in this library. No, it wouldn't be interesting for us (my sister and me) to go have a look inside the library, because they didn't have kids' books. (It blew my mind that there could be such a thing as a library which would not be interesting to me.) Yes, our dad was a published author. No, he had not written a book - he wrote some articles that were published as sort of chapters in journals, kind of like the "Bummi" magazine, but about maths. Then, I remember, we went out for dinner. To my mind, it took an eternity for our food to arrive after we had ordered, while the adults were talking about adult stuff.

Ten years later, my dad had not found a professorship in Germany, and we moved to Australia. I finished high school, completed my bachelor's, and moved on to a PhD. I had not heard the word "habilitation" for a very long time. Only once did an Australian colleague ask me during my PhD: "I heard that in Germany, after you finish your PhD, you need to write another thesis." I shrugged. "Ah yes, there was something like that."

Around twenty years after my early memory from Bonn, against my expectations, I found myself in the German academic system. I was reminded of the concept of a habilitation when applying for post-doc grants and reading successful applications, where the applicants promised to do a habilitation as part of the post-doc. "Is this... still a thing?" I asked my prospective boss (in other words, of course -- and in an email starting, as German etiquette demands, with "Highly esteemed Mr. Professor [...]"). "Yes, a habilitation is generally required", was the formal reply. I wrote in my proposal that I'd do a habilitation, but then forgot about it for a while.

After a few years, a few rejected grant proposals, but also a few successes, I had the funding to hire a PhD student. I invited a masters student whom I'd supervised during my previous post-doc at the University of Padova in Italy. Except it turned out that, while I had managed to get the funding to hire a PhD student, within the German academic system a habilitation is actually a prerequisite for supervising PhD students. To make things more complicated, our faculty had recently changed the PhD requirements: instead of registering the names of your supervisor(s) ("Promotionsbetreuer"), we needed to find a thesis advisory board of three people, all of them habilitated, and at least one external. "Can we list an external advisor from Australia?", I asked a bureaucrat on the phone. "She isn't habilitated: habilitation doesn't exist in Australia." I may as well have told her that I'd been hand-feeding a unicorn the other day. At some stage, my student took over the bureaucracy. "Does the habilitation requirement mean that Dr. Schmalz can't actually be my supervisor?", she asked the bureaucrat. "Of course she can be your supervisor", was the reply. "She just can't be on the advisory board." Translation: she can do the work, but she can't take the credit for it.

So, what actually is a habilitation? After some googling, I managed to patch together the following information: A habilitation is required as a demonstration that you can teach and research independently. After you finish a habilitation, you are formally qualified for a professorship position. Nowadays, there are alternative pathways to a professorship, but doing a habilitation still seems to be very common. In order to get the habilitation, you do what most aspiring researchers do anyway: you research and you teach. From my more experienced colleagues, I got the following step-by-step instructions on how to go about doing a habilitation: Step 1: Find a supervisor. Step 2: Make a written agreement about all the things you want to achieve during your habilitation (number of courses taught, number of first- or last-author publications, number of students supervised). Step 3: Your supervisor arranges the teaching for you.

To me, this does not make much sense. You're supposed to show that you can research independently by working with a supervisor. You can't be trusted to teach as a professor unless you have shown that you can teach by teaching. But the steps seemed quite straightforward. Until I learned something that led me to put off starting the procedure for another couple of years: In Germany, the habilitation title is tied to the university where you work, and in order to keep it, you need to continue teaching at this university until you find a professorship position. Once you start a professorship position, you lose the habilitation title, but you gain the professor's title, which is worth much more. (Not to mention that it comes with a very high status and a permanent position.) Except, of course, professorships are far more competitive than they were back in the day, when the habilitation may have made sense: it is not uncommon for researchers to be on short-term contracts for more than a decade until they either leave academia or win the lottery in the form of a professorship position (or, as my dad did, take up a position in another country). Mostly, researchers who are habilitated but don't have a professorship position continue to teach for free.

The habilitation process seems to have changed a lot since I was waiting in front of the university library in Bonn. When I discussed this with my parents, they were surprised that a habilitation supervisor is required. They were also surprised that there is no stipend: back in his day, my dad had received a "Habilitationsstipendium" from the German Research Foundation to cover his living expenses. I would have to do my habilitation alongside the full-time project that I'm paid to work on. Like most researchers of the older generation, my parents encouraged me to do the habilitation. To the older generation, a habilitation seems to be an honorary title. To my generation, it seems to be more of a nuisance. "What's the point of doing a habilitation?" I asked my mentor, a professor in our faculty. "It's not like it will guarantee that I will get a professorship." -- "Well, to put it differently", my mentor said, "if you don't do a habilitation, you won't get a professorship." The thing I love about academics: they will counter pessimism and whining with indisputable logic.

In the meantime, I appear to have found a loophole for the problem of losing the habilitation title if you don't continue to teach: other Central European countries also have habilitations, but there you can keep the title for life. Luckily, Munich is close to the border of one such country: Austria. So, all I have to do is find a professor who could supervise me, write to them, and offer, basically, to commute, to do free teaching for them, and to put their department as the second affiliation on all of my publications.

To summarise: What is a habilitation? "I guess I haven't been in Germany long enough", I confessed to a recently habilitated colleague. "I'm still very confused about the purpose of the habilitation." She laughed and replied: "It's not because you haven't been in Germany long enough!" It is noteworthy that my word processor has underlined every instance of the word "habilitation" in this blogpost as a typo, and suggests the following alternatives: "rehabilitation", "habitation", "debilitation", and "habituation".

Monday, April 13, 2020

The Open Webinar Series on Psycholinguistics and Research Methods

For me, the last month has been a lot about rethinking and re-planning, about cancelling existing plans, or putting them on hold for an indeterminate amount of time.

One of my grand plans for this year had been to establish a seminar series at my department. After all, in my view, research cannot progress without an exchange of ideas, without learning about new approaches; and meeting new people and taking them out for dinner or drinks afterwards for some informal conversations is always a bonus. As a PhD student, I'd benefited a lot from my department's seminar series, even from talks where I'd come in expecting that I wouldn't hear anything that would ever apply to my own work. I'd had the ambition to establish a seminar series at my current department for a number of years, but realising it requires money: at the very least, enough to cover travel costs, accommodation, and the dinner or drinks after the talk. This is why I was excited to get some funding this year that I could set aside for this purpose. I had started contacting people whose work I'd like to hear more about, and already had two talks (almost) scheduled. This is one of the first plans that, with the Corona crisis, had to be put on hold.

To summarise the problem: I'd like to initiate a seminar series, but the current situation does not allow guest speakers to travel to Munich, or any larger number of people to gather in one room for the talk. The solution is rather simple, and has been adopted across the globe for many similar events: do it in a digital format. This solution comes with the drawback of no dinner or drinks afterwards (though, I guess, with enough enthusiasm, it could be arranged, if everyone brings their own drinks to the video conference). However, it also comes with a few advantages. First, as the speaker is not required to travel, the digital format is cheaper and less time-consuming. This means that one can invite speakers without any funding limitations: a guest speaker from, say, Sydney, would cost the organiser in Munich as much as a guest speaker from Regensburg. Second, a video-conference room can be made open to everyone, not just to members of the department.

After turning these ideas over in my head for a few days, I wrote a few tweets about it, got some encouraging responses, and decided: "Let's try it! What's there to lose?"

Things I thought through
My first step was to decide on the topic. Reluctant to name the series "Webinars on stuff that interests Xenia", I decided to keep it broad enough to encompass a wide range of topics, while keeping it narrow enough to be of interest to a specific audience.

Then, I created a Google Form to gauge interest. I didn't want to invest time into organising a regular event and recruiting guest speakers if it turned out that nobody had time to attend the webinars anyway. I've since deactivated the form, but below, I list the questions I asked.

The description of the form was as follows: "Many of us are working from home, which could present an opportunity to connect and exchange ideas beyond our close colleagues. If there is sufficient interest (> 10 people who promise to write it as a fixed slot in their calendar and to attend if they possibly can) I'll try to organise regular slots with speakers from around the world. Please fill out the form by the end of the week (28.3.2020)".

Then, I asked for people's names, email addresses, time zones, and preferred days of the week, whether they had any preferences or suggestions for video-conferencing software, and how often they'd prefer the meetings to take place. To gauge interest, I asked them to choose from three options: (1) "I'd like to attend", (2) "I'd like to present", and (3) "I'll write the fixed slot into my calendar and make it a priority to attend as many webinars as possible". I'd decided, a priori, to use the number of people ticking the third option to decide whether I'd take further steps in organising the event. For those who wanted to present, I also had a field where they could note the topic they'd present on. Finally, for those who wanted to attend, I let them choose from a list, or add their own answer, to the question of what kind of topics interest them most.

After I'd finished the form, I tweeted a link to it, and went through the list of people I follow to tag everyone who I thought might be interested in this topic and whose work I'd like to hear more about. Note that this, in addition to the choice of title for the webinar series, gave me some control over the direction in which the series would go. Without wanting to exclude anyone, I wanted to push it in the direction of reading research: while I'm always interested in a broad array of topics, if the talks moved too far away from my research interests, the group would no longer serve its original purpose (for me). I also sent an email to my department, and encouraged everyone on Twitter to share the link.

Altogether, I got 78 responses, with more than 30 people promising to write the events into their calendar and attend whenever they could, and 14 people saying they would like to present. The other responses justified some executive decisions: the meetings would be held biweekly on Thursdays. Most respondents were from Europe, but there were also respondents from other parts of the world, ranging from America to Australia. This made it impossible to find a single time slot that would fall within working hours for everyone. Instead, I decided on two different time slots: on every second Thursday of the month, the webinar takes place at 9am CET (a time slot that should be convenient for people based in Australia), and on every fourth Thursday, at 16:00 CET (a time slot that should be convenient for people based in America).
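(If you're planning something similar and want to sanity-check what your slots look like elsewhere in the world, a few lines of Python will do it. This is just an illustrative sketch, assuming Python 3.9+ with the standard-library zoneinfo module; the dates and IANA zone names are examples, and using Europe/Berlin rather than a fixed CET offset means daylight-saving changes are handled automatically.)

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library from Python 3.9

BERLIN = ZoneInfo("Europe/Berlin")

# Example dates for the two slot types (second and fourth Thursday of April 2020)
morning_slot = datetime(2020, 4, 9, 9, 0, tzinfo=BERLIN)     # 9:00 local time
afternoon_slot = datetime(2020, 4, 23, 16, 0, tzinfo=BERLIN)  # 16:00 local time

# Convert to the attendees' local time; astimezone handles DST on both ends
print(morning_slot.astimezone(ZoneInfo("Australia/Sydney")))
print(afternoon_slot.astimezone(ZoneInfo("America/New_York")))
```

For these example dates, the morning slot lands in the late afternoon in Sydney and the afternoon slot in the morning in New York, which is exactly the compromise described above.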

I was very pleased by the diversity of respondents. Many were Early Career Researchers (ECRs; PhD students and post-docs). Also, a few of the more senior researchers whom I'd tagged signed up and offered to give a talk. Such a seminar series is, of course, a very good opportunity for junior researchers to present their work to an international audience. However, it is also good to have more experienced researchers, both to allow the ECRs to learn from them, and for a kind of "star" effect. There was also variability in the countries in which the researchers are based. I somewhat regret that I did not ask about this in the Google Form, but judging by the names, email addresses, and the people I know personally, the respondents are based in countries as diverse as Brazil, the USA, the Netherlands, the UK, Italy, Germany, Austria, Denmark, Serbia, Russia, Turkey, Israel, Iran, and Australia. Try to get such a diverse audience together for a department seminar series, or even an international workshop on psycholinguistics!

The next step was to create a Google Sheet. Quite simply, the sheet contains a list of slots, and the request for people to sign up for a slot where they'd like to present.

Things I didn't think through
In terms of software, the simplest solution seemed to be Zoom (which was also preferred by the majority of the respondents to the Google Form). After I had signed up for a professional account (which allows sessions of unlimited length for up to 100 people) and sent all of the form respondents the link to the event, I realised that, through my university, I could get an account with an even better plan (unlimited time for up to 300 people).

After having decided on Zoom, I read on Twitter that video-conference organisers were having problems with trolls crashing their meetings and harassing the attendees and speakers. This is really something I don't want to deal with. It had been my plan to distribute the link to the meetings as widely as possible, to create as few barriers as possible for anyone who genuinely wants to join. Instead, I decided to create a Google Group, where anyone can sign up, but the posts are closed to non-members. The link and password for each webinar are sent to the members of this group, with the request to forward them to anyone who might be interested, but not to share them on any public platform. This is an unfortunate example of how something as stupid and petty as trolls can stand in the way of Open Science.

The additional step of the Google Group, I fear, created some confusion, especially since I forgot to update the information on the Google Sheet where people sign up for talks, and where I'd originally written that I'd post the Zoom link next to each slot. Nevertheless, through advertising the group on Twitter, it now has more members than the number of people who had originally filled in the Google Form: 85.

Another thing that, in retrospect, I should have put more thought into is the choice of topics for the speakers. I'd made a word cloud of the keywords that the respondents had provided in my Google Form (in the Google Sheet linked above). However, there are also some things that I'd like to avoid for this group. One danger of giving absolutely free rein over the Google Sheet could be that businesses (e.g., IBM and its SPSS 😱) would sign up for slots in order to promote their products. Therefore, in a third tab of the Google Sheet called "Updates", I added a request to avoid signing up for slots that aim to promote a product. This could interfere with the request of some respondents to also have some tutorial sessions for learning a particular method or software, so I added that any such tutorials should be based on freely available software.

The first talk
Overall, especially considering the relatively small amount of time that went into organising this series so far, I am very happy with the result.

Last Thursday, we had the first slot: Mariella Paul from the Department of Neuropsychology at the MPI for Human Cognitive and Brain Sciences in Leipzig and the Berlin School of Mind and Brain gave the first presentation, "Harry Potter and the Methods of Reproducibility - a brief introduction to Open Science". At peak time, there were 48 attendees. I recorded the talk, and together with Mariella, I've uploaded both the slides and the recording to an OSF project page. The talk was scheduled as 45 minutes + 15 minutes for questions; with many interesting questions, the discussion continued until approximately 10:15 (i.e., 15 minutes overtime). Members of the audience joined in answering the questions: for example, one question was about the timeline of a Registered Reports submission; two members of the audience turned out to be editors who had experience handling Registered Reports, and could provide some insider knowledge and practical advice (e.g., if you submit a Registered Report and you're under time pressure to start collecting the data, check with the editor beforehand whether they can take this into account).

I'm looking forward to the next talk, which will take place on the 23rd of April, at 16:00 Central European Time. Suzi J. Styles from NTU in Singapore will be talking about "How do you catch a Hypothefish? Preregistration basics (Psycholinguistics remix ft N400s)". Does this sound like something you'd like to hear more about? Join the Google Group to get information about how to join the meeting closer to the date! Spread the word!

Some final, Open Science-related thoughts
The idea for this online webinar series arose because many of us are working from home. For me, aside from the essential exchange with colleagues, the webinar is a regular fixture in a working-from-home schedule that I otherwise still haven't managed to regularise or stabilise.

At this stage, nobody seems to know how long the recommendations or requirements to work from home will continue. But I hope that the usefulness of this group will far outlive the restriction period.

At the SIPS 2019 conference, I attended an Unconference session about "The academic conference of the future" (a summary of what was discussed can be found here). The webinar format can offer an alternative for at least some aspects of conferences, and has some advantages over the traditional format of a large number of people coming together to discuss research.

Instead of having a seemingly endless series of talks, where at some stage not even buckets of coffee can keep your jetlagged brain focussed despite the interesting content, you have one slot per week (or fortnight), with enough time both to discuss and to digest the content before you need to take in the next wave of information.

Instead of applying for visas and scrambling for funding, and buying expensive airplane tickets and contributing to carbon emissions, you sign up to a Google group and click on a link 15 minutes before a talk starts.

Granted, this format requires some more creativity to make social events such as the conference dinner possible. It is possible to recruit world-leading researchers as speakers for a webinar, but then there will be no possibility for the PhD students to watch them get drunk at the conference dinner. But, let's be honest: Will we really miss this?

The webinar format is by no means novel, but I hope that it will become more and more popular as a tool for exchanging ideas and learning from researchers from across the world, across different career stages.

This is why I wrote this blogpost: I outlined the steps I took (which were not many) so that you can easily create a webinar series for your own area of interest, too!