An apparently statistically significant observation may in fact have arisen by chance, simply because of the size of the parameter space that was searched.
The look-elsewhere effect is a bias that arises in the statistical analysis of scientific experiments, notably in particle physics. Because the parameter space being searched is so vast, apparently significant findings are occasionally obtained by coincidence. In astronomy, for example, a large region of space may be scanned in the hope of finding a single comet. Because such a large area of sky is examined, a comet will almost certainly be discovered; the chances of finding one would be far slimmer if the researchers had focused their search on a specific location. The sheer breadth of the search thus skews the apparent incidence of comets, making them look more common than they are.
Suppose your friend David is a medical researcher working on a medicine intended to help patients recover from colds faster. He runs an experiment evaluating his new therapy, gathers a large amount of data, and analyzes it with statistical tests. The analysis reveals no discernible effect of the therapy on recovery times. David is dismayed, but reasons that since he didn't find a significant result, he may simply have been looking in the wrong place. After running several more tests, he finally finds a statistically significant effect: the treatment group reported fewer headache symptoms than the control group. Success!
The look-elsewhere effect is fueled by cognitive errors that affect everyone, but it shows up most often in statistical tests and their interpretation. As a result, it mainly concerns scientists and researchers who use statistics to test (or refute) a hypothesis.
The look-elsewhere effect is a major contributor to the replication crisis that many fields of science are now experiencing. Replication means repeating an experiment to check whether it produces the same results as the first time, and it is a critical tool for ensuring that the machinery of science is operating as it should. If a study's findings cannot be replicated, the validity of its original findings is called into doubt.
Unfortunately, in recent years a substantial number of replication attempts have failed to reproduce the original studies' findings. Although psychology has garnered the most attention, comparable crises exist in numerous fields, including economics and even medicine, where by some estimates just 20 to 25 percent of studies replicate successfully. It should go without saying that this is a major issue: it hinders scientific progress while also diminishing public trust in experts.
To understand the look-elsewhere effect, we first need a basic idea of what it means for a result to be "statistically significant." When researchers wish to test a theory, they usually run an experiment comparing the outcomes of two groups: for example, one that receives the treatment under investigation and another that receives a placebo. If we find a difference in how these groups fare after all other factors have been thoroughly accounted for, we may safely conclude that the difference is due to the treatment. Right?
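To make this concrete, here is a minimal sketch of such a significance test in Python. Everything in it is invented for illustration: the recovery-time numbers are made up, and a permutation test is used as one simple way to compute a p-value, the probability of seeing a difference at least this large if the treatment did nothing at all.

```python
import random
import statistics

def permutation_p_value(treatment, control, n_perm=10_000, seed=0):
    """Two-sided p-value for the difference in group means, estimated
    by repeatedly reshuffling the group labels (a permutation test)."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(treatment) - statistics.mean(control))
    pooled = list(treatment) + list(control)
    n = len(treatment)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        # How often does a random relabeling produce a gap this large?
        diff = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical recovery times in days (made-up numbers)
treatment = [4.1, 5.0, 3.8, 4.6, 4.2, 3.9, 4.4, 4.0]
control = [5.2, 4.8, 5.5, 4.9, 5.1, 5.3, 4.7, 5.0]
p = permutation_p_value(treatment, control)
```

By convention, a result with p below 0.05 is called statistically significant; the look-elsewhere effect is about what goes wrong when many such tests are run in search of one that clears the bar.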
The difficulty is that even when other variables are accounted for, there is still a chance that any difference between the groups is attributable to random chance. This is because, although we want to draw broad generalizations about how the therapy would affect the whole population, we must test it on a much smaller group of people. If our sample turns out, for whatever reason, not to be representative of the entire population, our results will be misleading. Consider working at an ice cream shop where customers are welcome to taste the various flavors. One day a large party of roughly a hundred people arrives, all eager to try the mint chocolate chip. The mint chocolate chip bucket contains a lot of chocolate chips, but they aren't uniformly dispersed throughout the bucket. So, while you're handing out samples, the great majority of the time the samples contain some chocolate, but now and then an unlucky individual receives a sample that's just mint ice cream. That sample doesn't represent the flavor.
Sampling presents a similar problem in science: there's always the risk that, purely by chance, our experimental sample has features that cause it to respond to treatment differently from the rest of the population. If so, our findings would be coincidental, leading us to the wrong conclusion about the treatment.
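A quick simulation shows the scale of this risk. Under the purely illustrative assumption that the measured quantity is normally distributed in the population, we can count how often a random sample's mean lands far from the true population mean, that is, how often the sample is unrepresentative:

```python
import random
import statistics

def misleading_fraction(sample_size, true_mean=5.0, sd=1.0,
                        tolerance=0.5, trials=5_000, seed=1):
    """Fraction of random samples whose mean misses the true population
    mean by more than `tolerance` (all parameters are illustrative)."""
    rng = random.Random(seed)
    off = 0
    for _ in range(trials):
        sample = [rng.gauss(true_mean, sd) for _ in range(sample_size)]
        if abs(statistics.mean(sample) - true_mean) > tolerance:
            off += 1
    return off / trials

small = misleading_fraction(sample_size=5)   # small samples: often misleading
large = misleading_fraction(sample_size=50)  # large samples: rarely misleading
```

With samples of five, roughly a quarter of samples miss the true mean by more than half a standard deviation; with samples of fifty, almost none do. This is the sense in which larger samples are always preferable.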
The look-elsewhere effect occurs for a variety of reasons, one of which is purely mathematical. In statistics, it is known as the multiple comparisons problem. As the name implies, this problem arises when scientists run many statistical tests on the same dataset. While this may not seem like a problem, it increases the likelihood of making an alpha (Type I) error: the more times a researcher searches the same dataset for a result, the more likely they are to find something that appears fascinating on the surface but is simply noise, random fluctuation in the data. This is the statistical explanation for the look-elsewhere effect in a nutshell. It does not, however, tell the whole story. After all, statisticians are trained; they should know better than to throw a bunch of tests together at random. Furthermore, when running many separate tests is essential, there are statistical approaches that account for the multiple comparisons problem. So why does this issue persist in scientific research? Unconscious cognitive biases are to blame.
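The arithmetic behind the multiple comparisons problem is simple. If each independent test has a 5% chance of a false positive, the chance that at least one of n tests produces a false positive grows quickly with n; one common remedy, the Bonferroni correction, divides the per-test threshold by the number of tests. A minimal sketch:

```python
ALPHA = 0.05  # conventional per-test false-positive (alpha) rate

def familywise_error(n_tests, alpha=ALPHA):
    """Chance of at least one false positive across n independent tests,
    each run at significance level alpha: 1 - (1 - alpha)^n."""
    return 1 - (1 - alpha) ** n_tests

# One test: a 5% risk. Twenty tests: roughly a 64% risk.
risk_1 = familywise_error(1)
risk_20 = familywise_error(20)

# Bonferroni correction: run each test at alpha / n_tests instead,
# which keeps the familywise error rate at or below roughly alpha.
corrected = familywise_error(20, alpha=ALPHA / 20)
```

In other words, a researcher who runs twenty independent tests on pure noise has better-than-even odds of finding at least one "significant" result.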
People are susceptible to a variety of biases and heuristics that cause them to think in distorted ways. But unconscious biases are just that: unconscious. Even after being taught about our cognitive faults, it is frequently difficult to avoid slipping into the same cognitive traps. Harder still to stomach is the fact that this applies equally to specialists and laypeople. Many of us assume that scientists are somehow immune to the mistakes the rest of us make, but the evidence shows otherwise. More surprising still, scientists' formal training in statistics does not protect them from biased thinking when estimating probabilities. Sample size is a well-known example: in statistics, large samples are always preferable, because small samples make it harder to detect a real effect, yet research has shown that even recognized statisticians make mistakes when reasoning about sample size.
Repeated by many individuals over a long period, the look-elsewhere effect can have disastrous consequences, including for individual researchers. The replication crisis has cast doubt on the very reality of concepts on which many scholars have built their careers. For example, social psychologist and neuroscientist Michael Inzlicht noted in a blog post from June 2020 that a fundamental theme of his research, ego depletion, the assumption that self-control draws on a finite supply of resources, is "probably not true." This realization had a profound emotional impact on him: it undid his universe, as he put it. The look-elsewhere effect isn't limited to individuals, however. As a contributing cause of the replication crisis, it has far-reaching repercussions. In addition to delaying scientific progress and leading scientists to wrong conclusions, it damages science's standing as an institution. At a time when finding the truth is becoming ever more difficult and conspiracy theories are gaining worrying traction, the public's faith in scientific experts is critical. Unfortunately, the astonishingly large percentage of studies that cannot be replicated undermines that trust: in some areas of psychology, for example, up to 50% of all published research may not be repeatable.
Even when we're aware of cognitive biases, as we've seen, it isn't easy to prevent them. Still, there are procedures researchers can follow to avoid inappropriate statistical methods when it comes to the look-elsewhere effect, and as many scientists push for increased openness and transparency in their fields, several of these approaches are becoming more prevalent. Changes in the culture of science and academia would likely also help solve this challenge.

How it all started

In the early 2000s, concerns about replicability began to grow in a variety of scientific domains. Professor John Ioannidis of Stanford University argued in a famous 2005 paper titled "Why most published research findings are false" that a large number of published research papers were based on Type I errors and could not be replicated, owing to several statistical factors, including large numbers of statistical tests and flexibility in design and analysis.
When searching a huge parameter space for anomalies, such as events, peaks, objects, or particles, there is a good chance of finding false signals with high apparent significance. This phenomenon, known as the look-elsewhere effect, arises in cosmology, particle physics, and other fields. When determining the statistical significance of an anomaly, one must account for this effect to avoid making erroneous detection claims. This is usually done via the trials factor, which is typically calculated numerically through possibly costly simulations.
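A hedged sketch of what such a simulation might look like: assume, purely for illustration, that under the null hypothesis the test statistic in each of n independent search bins is standard normal. The "local" p-value judges one bin in isolation; the "global" p-value asks how often noise alone would produce an excess that large anywhere in the scan, and the ratio between the two approximates the trials factor.

```python
import random

def global_p_value(local_best_z, n_bins, n_sim=20_000, seed=0):
    """Monte Carlo estimate of the global significance of the largest
    excess seen when scanning n_bins independent bins, assuming each
    bin's statistic is standard normal under the null (a simplifying
    assumption; real analyses simulate the actual search procedure)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_sim):
        # Largest fluctuation anywhere in the scan, from noise alone
        best = max(rng.gauss(0.0, 1.0) for _ in range(n_bins))
        if best >= local_best_z:
            exceed += 1
    return exceed / n_sim

# A 3-sigma local excess (one-sided local p of about 0.00135) found
# while scanning 100 bins is far less impressive globally:
p_global = global_p_value(local_best_z=3.0, n_bins=100)
```

Analytically, the global p-value in this toy setup is 1 - (1 - 0.00135)^100, roughly 0.13, so the trials factor here is close to the number of bins scanned.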