Friday, July 28, 2023

What's wrong with science?

I think I really need a holiday. 

Many of us are researchers because, in some way or another, we want to make science better. Yet, we rarely keep this goal explicitly in mind when planning a specific project. If we did, what would such a research project look like? This seemingly simple question sent me on a downward spiral: What could I do that might really make a difference? Where do I want to make a difference? And why? And what is good science, anyway? Or science, for that matter? What is the purpose of it all? What am I doing with my life?

I spent Thursday afternoon ("Is it Friday yet?") quizzing my new friend, ChatGPT. Although ChatGPT was reluctant to answer the question "What am I doing with my life?", we had an interesting discussion about science and everything that's wrong with it. Setting aside existential angst, the three relevant questions are: (1) What is (good) science? (2) What are some aspects that still need improvement? (3) How do the solutions currently on the table in discussions about improving science relate to the aspects that need improvement?

To summarise ChatGPT's response to the first question (which I phrased as "What is the aim of science?"), it listed eight goals:

  1. Explanation
  2. Prediction 
  3. Understanding causality
  4. Falsifiability
  5. Reproducibility
  6. Continuous improvement (self-correction)
  7. Application and innovation
  8. Unification of knowledge.

Some of these points may be contentious (is prediction without explanation really science?), but overall, the list sounds reasonable enough.

As a next step, I asked the less nuanced question: "What's wrong with science?" Again, ChatGPT provided a list of eight items:

  1. Reproducibility crisis
  2. Publication bias
  3. p-hacking and cherry picking
  4. Funding and conflicts of interest
  5. Lack of diversity and inclusivity
  6. Ethical concerns
  7. Hypercompetitiveness and pressure to publish
  8. Miscommunication and sensationalism.

So it seems that my social media bubble is representative of a broader population, or, in any case, of ChatGPT's training data. All of these are important challenges that need to be addressed. For an ambitious researcher trying to make the world a better place, the question remains: Which gaps have not yet been addressed?

Broadly, the aims of science according to ChatGPT can be divided into methodological/technical and theoretical aspects. Reproducibility, self-correction, and application and innovation fall into the former category. There are clearly things that are wrong on this technical level: the reproducibility crisis, publication bias, p-hacking, funding and conflicts of interest, the pressure to publish, and miscommunication all relate to it. To put it bluntly: given these issues, when one reads about a finding, one simply cannot be sure whether it can be trusted. Without a doubt, this is the first general problem that needs to be tackled: I'm a firm believer in never trying to explain something unless one is sure that there is something to be explained (see my first-ever blog post).

Having replicable, reproducible, robust, and generalisable effects is still a far cry from achieving the more theoretical aims of science. Sure, knowing that two variables correlate is useful for prediction, but the correlation alone tells us nothing about explanation or causality, nor does it allow for a unification of knowledge. A lack of diversity and inclusivity especially hinders the goal of unifying knowledge, because it excludes many perspectives from the scientific discourse. Ethical concerns sit at a more basic level: they should be considered even before asking about the methodological or technical aspects of a study. This still leaves us with a gap, though, between having a robust finding and making sense of it.
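To make the correlation point concrete, here is a minimal simulation sketch of my own (not from ChatGPT's answers, and the variable names are purely illustrative): two variables that share an unobserved common cause end up strongly correlated, which is enough for prediction but says nothing about whether one explains or causes the other.

    # Minimal sketch: a confounder z drives both x and y, so x and y correlate
    # strongly even though neither causes the other.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000

    z = rng.normal(size=n)                          # unobserved confounder
    x = 0.8 * z + rng.normal(scale=0.5, size=n)     # x depends on z only
    y = 0.8 * z + rng.normal(scale=0.5, size=n)     # y depends on z only, not on x

    print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # roughly 0.7: fine for prediction
    # Yet intervening on x would leave y unchanged: the correlation alone cannot
    # distinguish this confounded structure from x actually causing y.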

Of course, the question of how to link results to theory is not new. Just in the last few months, I've come across this preprint by Lucy D'Agostino McGowan et al. and this blog post by Richard McElreath. Still, looking at how we do science in practice, I see room for improvement on this front. It's relatively easy to provide easy-to-follow rules for showing that a finding is credible (or at least easy-to-follow in principle, if you have unlimited resources). It's much harder to do the same for the less tangible question of linking a finding to an explanation.

The good news is: My summer holiday is starting next week. The bad news is: I'll probably spend it pondering and researching all of these questions.
