Confirmation Bias & Basing Research on Unsubstantiated Facts


Two key recurring faults I find when working in the research industry are confirmation bias and making assumptions based on a lack of data.

Take this excerpt from an interview in our ‘Cancer Drugs & Treatments Market - Data, Analysis and Forecasts to 2023’ report:



GMR Data: Does breast cancer incidence have anything to do with geographic location or race?

Dr. Ben Anderson: That to a certain degree may be a misunderstanding because the reason that it appears that breast cancer is more common in Caucasian women is because these women live in places where they have access to screening mammography. Screening will find cancers; however if you don’t have the screening you miss it….



So although there is a perception that Caucasian women have a higher propensity to breast cancer, the fact of the matter is that incidence rates are similar across all ethnic groups; it’s just that many women, particularly in emerging or developing countries, don’t have access to screening.


Probably the most famous example of confirmation bias is the theory that more babies are born during a full moon. After numerous studies, this theory has finally been put to rest, as highlighted in a recent Huffington Post article:


In 2005, researchers from the Mountain Area Health Education Center in North Carolina analyzed almost 600,000 births across 62 lunar cycles. The data were retrieved from birth certificates from 1997 to 2001. The result? No significant differences in the frequency of births across the eight stages of the moon. (1)


Time and again, newspaper articles have described how births, arrests by police, admissions to psychiatric wards and even dog bites increase during the full moon. Yet for every one of these reported ‘phenomena’, studies have shown that rates haven’t gone up at all; in fact, they have remained broadly the same across the monthly cycle.
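The kind of analysis those studies rely on can be sketched with a simple chi-square goodness-of-fit test: do observed counts differ significantly from an even spread across the eight moon phases? The birth counts below are hypothetical illustration values, not the study’s real data, and the critical value is the standard 5% threshold for 7 degrees of freedom.

```python
# Sketch of a chi-square goodness-of-fit test across the eight lunar phases.
# The observed counts are hypothetical, chosen only to illustrate the method.
observed = [74810, 75120, 74950, 75400, 74700, 75230, 74880, 75010]

# Under the null hypothesis ("the moon has no effect"), births are spread
# evenly, so each phase's expected count is an equal share of the total.
expected = sum(observed) / len(observed)

# Chi-square statistic: sum of squared deviations scaled by the expectation.
chi_sq = sum((o - expected) ** 2 / expected for o in observed)

# 5% critical value for a chi-square distribution with 7 degrees of freedom.
CRITICAL_5PCT_7DF = 14.067

print(f"chi-square statistic: {chi_sq:.2f}")
if chi_sq > CRITICAL_5PCT_7DF:
    print("significant difference between phases")
else:
    print("no significant difference between phases")
```

With counts this close to uniform, the statistic falls well below the critical value, which is exactly the "no significant differences" result the North Carolina researchers reported.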



Oliver Burkeman in the Guardian (2) (28/2/2014) highlighted some confirmation bias effects that had recently been studied by five Princeton researchers (3). He writes:

“Even when the risk of bias was explicitly pointed out to them, people remained confident that they weren’t susceptible to it; indeed, they actually rated their performance as more objective than they’d predicted it would be at the start of the test. ‘Even when people acknowledge that what they are about to do is biased,’ the researchers write, ‘…they still are inclined to see their resulting decisions as objective.’”


So how does this translate to the research industry?

With each new topic we tackle, we need to keep three thoughts foremost in our minds:

1) From my previous research and knowledge of this subject, what do I know already?

2) How can my personal experience bias the research that I am writing?

3) There will inevitably be some bias; this is unavoidable.