Are We Irrational?
As anyone who has ever spent any amount of time on social media can attest, human beings act irrationally. Online echo chambers are a good example. People join groups or follow accounts that share only information supporting what they already believe. When they encounter evidence that contradicts their beliefs, they ignore or dismiss it, engaging only with posts that reinforce their views. This irrational behavior distorts perceptions of reality, perpetuates misinformation, and causes people to become even more deeply entrenched in their skewed viewpoints.
This long-established behavioral observation, now corroborated by research across the social sciences, has vast implications for medicine, science, policy, and public health. Our irrationality is so thoroughly baked into our collective consciousness that most of us are not even aware of it.
For much of modern intellectual history, the prevailing view in academic circles was that people are inherently rational beings, naturally inclined toward rational decision-making. Classical economics, for instance, was built on the idea of a “rational actor” who makes decisions logically and consistently in their own best interests. It wasn’t until the 1970s that two psychologists, Daniel Kahneman and Amos Tversky, challenged these theories with groundbreaking research demonstrating that human decision-making is shaped by a set of predictable patterns of error in thinking called cognitive biases.
Speaking of “irrational,” we highly recommend “The Irrational Ape: Why Flawed Logic Puts Us All at Risk and How Critical Thinking Can Save the World” by David Robert Grimes. In it, Dr. Grimes presents a compelling case for the importance of clear, rational thinking, illustrated with real-world examples, including how critical thinking quite literally saved the world during a tense moment in the Cold War. It’s an excellent companion to our discussion of cognitive biases in science and offers practical strategies for improving our critical thinking skills.
What are cognitive biases?
The human brain is incredibly adept at processing vast amounts of information; its efficiency and capacity are truly remarkable. It’s responsible for interpreting sensory inputs, managing physiological functions, sorting and retrieving memories, and making decisions, all at the same time! Cognitive biases, therefore, arise from the brain’s attempt to simplify and expedite the processing of this large amount of information. They are systematic errors in thinking that affect how people perceive reality and make judgments, often leading to irrational behaviors and decisions. It is therefore crucial for scientists to be aware of cognitive biases as these errors in thinking can creep in and interfere with the objectivity and reliability of results. Let’s give a few examples of cognitive biases and how they can potentially interfere with the scientific process.
Anchoring Bias
This occurs when people place too much emphasis on the first piece of information they encounter (the “anchor”) when making judgments or decisions. In clinical trials, the initial estimates of the efficacy of a new pharmaceutical might serve as an “anchor” for subsequent evaluations of the drug. For instance, if early results suggest a treatment has a 75% success rate, researchers might unconsciously compare new findings to that initial number, potentially skewing interpretations or leading them to discount later, less favorable evidence.
Availability Heuristic
This bias involves relying on immediate examples that come to mind when evaluating a topic or decision. For example, a clinician-scientist who studies rare infectious diseases and frequently reads case studies in medical journals might overestimate the prevalence of these diseases in the general population. This could lead to a disproportionate emphasis on evaluating or even incorrectly diagnosing these types of conditions in patients relative to more common diseases.
Sunk Cost Fallacy
This is a particularly pervasive one in science. It refers to the tendency to continue an endeavor once an investment of time, money, or resources has been made, even when it is clear the endeavor is no longer beneficial. For instance, the board of a private-sector biotech company might continue funding a study of a particular pharmaceutical that is producing inconclusive or negative results simply because it has already invested significant resources. This bias often prevents scientists from abandoning stale hypotheses and pivoting to more promising research avenues.
Endowment Effect
A similar bias that often goes hand-in-hand with the sunk cost fallacy is the endowment effect, the inclination of people to value something more highly simply because they own or developed it. This penchant is particularly pervasive among established research scientists who have worked for years developing a hypothesis and generating data to support it, causing them to place undue emphasis on their own theories and data simply because they are the originators. This could lead to bias in interpreting results and/or a reluctance to accept alternative hypotheses or broaden the scope of a research topic.
Confirmation Bias
This is possibly the best-known cognitive bias. It affects virtually every aspect of life, from personal beliefs to professional judgments, and is one of the most insidious roadblocks to critical thinking and rational decision-making. Confirmation bias refers to the propensity of people to favor information or interpretations that confirm their preexisting beliefs or hypotheses. In other words, it is the natural tendency to seek out, interpret, and remember information in a way that supports what you already think or believe, while disregarding or undervaluing evidence that contradicts those views. In science, confirmation bias can drastically skew the research process, leading to results that overstate the value of a given theory or set of experiments and to the underreporting of potential problems. This is one of the reasons why “blinding” is so important in scientific studies.
Reducing Bias in Science
Blinding is a technique used in experimental research to prevent research participants and/or scientists from knowing specific details that might influence their behavior or interpretation of results. It helps ensure that a study’s outcomes are not affected by preexisting beliefs or expectations, whether intentional or unintentional. The prime example is the double-blind, placebo-controlled drug trial, often considered the “gold standard” for evaluating the efficacy and safety of a pharmaceutical intervention. A key feature of this design is that neither the participants nor the scientists administering the treatment know which participants are receiving the active treatment and which are receiving the control/placebo, which significantly helps mitigate confirmation bias and leads to more reliable and valid results.
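To make that a bit more concrete, here is a minimal, hypothetical sketch in Python of how blinded, balanced allocation can work. None of the names or details come from real trial software; it simply illustrates the idea that participants and researchers see only coded arms, while the code-to-treatment key is generated separately and kept sealed until the analysis is finished.

```python
import random

def blinded_assignment(participant_ids, seed=2024):
    """Balanced, blinded allocation: each participant gets a coded arm ('A' or 'B');
    the code-to-treatment key is held separately and sealed until analysis is complete."""
    rng = random.Random(seed)
    # Build equal-sized arms, then shuffle so neither participants nor the
    # researchers administering treatment can tell who receives the active drug.
    arms = (["A", "B"] * ((len(participant_ids) + 1) // 2))[: len(participant_ids)]
    rng.shuffle(arms)
    assignments = dict(zip(participant_ids, arms))
    # Whether code 'A' means drug or placebo is itself decided at random and sealed.
    key = {"A": "drug", "B": "placebo"} if rng.random() < 0.5 else {"A": "placebo", "B": "drug"}
    return assignments, key

ids = [f"P{i:03d}" for i in range(1, 9)]
assignments, sealed_key = blinded_assignment(ids)
print(assignments)  # during the trial, everyone involved sees only 'A' or 'B'
```

The point of the sketch is simply that the information needed to un-blind the study lives outside the heads of the people whose judgments it could bias.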
The bias blind spot
The interesting thing about cognitive biases, and why they are so difficult to combat, is that while we may tend to recognize bias in others, we often fail to see these same biases in ourselves. In other words, being aware of our cognitive biases does not entirely prevent them from impacting our thought processes and decision-making. These biases are deeply rooted and often extremely difficult to recognize.
Recent research on combating misinformation (or “misleading content”), as reported by NPR, offers insights that are highly relevant to addressing cognitive biases in science. Just as in personal interactions, effective scientific communication requires understanding the audience's perspective, using appropriate language, and providing detailed, culturally relevant information. Scientists must recognize that changing deeply held beliefs — whether in the general public or within the scientific community — is a gradual process that requires patience, empathy, and persistent, clear communication.
Unbiased Science?
Given all of these biases and how difficult they are to confront, can true, “unbiased science” ever be achieved? Science aims to be as unbiased as possible, but at the end of the day, it is a human endeavor and thus imperfect. However, scientists undergo years of extensive and rigorous training focused on enhancing objectivity and minimizing biases. A few of the most effective strategies are:
Research methodologies and experimental design, including blinding, randomization, and control groups
Adherence to established standards such as standardized experimental protocols and reporting guidelines, ensuring consistency
A process of peer review whereby other experts in the field evaluate a study to identify errors in thinking and analysis that the original researcher might have overlooked
Statistical analyses and corrections to ensure results are reliable and not due to random chance or misinterpretation, including adherence to predefined analytical plans (a brief sketch of one such correction follows this list)
Replication studies to verify/confirm whether original scientific research findings can be reproduced under similar conditions or with slight variations
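As a small, purely illustrative example of the statistical-corrections point above (the p-values are invented, not drawn from any real study), a Bonferroni-style adjustment tightens the significance threshold when several outcomes are tested at once:

```python
def bonferroni_correct(p_values, alpha=0.05):
    """Flag a result as significant only if its p-value clears
    alpha divided by the total number of tests performed."""
    threshold = alpha / len(p_values)
    return {name: (p, p < threshold) for name, p in p_values.items()}

# Invented p-values for a hypothetical study testing three outcomes at once.
results = {"primary_endpoint": 0.004, "secondary_1": 0.03, "secondary_2": 0.20}
print(bonferroni_correct(results))
# With alpha = 0.05 and three tests, only p-values below ~0.0167 count as
# significant, guarding against chance findings being reported as real effects.
```

Committing to this kind of threshold in a predefined analysis plan, before the data are seen, is precisely what keeps a hopeful researcher from talking themselves into a marginal result.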
By implementing these strategies and many others, scientists can reduce the impact of cognitive biases on their research, leading to more accurate and reliable findings. While we may never be able to keep biases from creeping in entirely, continuous vigilance and adherence to the scientific method help maintain scientific integrity and keep us striving toward that completely “unbiased” aspiration.
No nonsense, just…
-Unbiased Science
(But mainly our resident neuroscientist, Dr. Sarah Scheinman, who did the lion's share of writing for this piece!)