Even though it is unintentional, scientists are misled by their own biases. I previously discussed one of the biggest, confirmation bias, but that is far from the only bias people have. In this blog post, I will tell you about the opportunistic bias and the publication bias.
Jamie DeCoster: “The opportunistic bias occurs when the reported relations are stronger or otherwise more supportive of the researcher’s theories than they would be without the exploratory process.”
The opportunistic bias occurs when researchers try out multiple analyses before deciding which one to report. This selection process makes it more likely to find significant results and large effect sizes, because you can pick the analysis that best favors your prediction or theory. According to DeCoster and Sparks, several procedures can shift your results towards significance: you can create an opportunistic bias by trying out the most favorable way of transforming variables, measuring a large collection of variables and only reporting the desirable results, or testing the same hypothesis with different analyses, different methods, or in different subgroups of participants. Another possibility is scrutinizing undesirable findings more closely than desirable ones (e.g. double-checking only the unexpected finding). Michèle Nuijten also mentioned several such activities and reported professors' self-admission rates. Below you'll find the three most frequent procedures:
- Failing to report all of the study’s dependent measures (63.4%);
- Deciding whether to collect more data after looking at the results and their significance (55.9%);
- Selectively reporting the studies that worked (45.8%).
As a consequence of the opportunistic bias, Type I errors (false positives) become far more likely. Reported p-values also lose their usual interpretation, because the actual probability of finding at least one significant result across all the analyses that were tried is much higher than the nominal 5%. Hence, the opportunistic bias can produce a significant effect even when no true effect exists. These wrongly drawn conclusions enter the literature, shape the general view (as researchers and the public read biased articles), and systematically distort meta-analyses.
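This inflation of Type I errors is easy to demonstrate. The sketch below is my own illustration (not taken from DeCoster, Sparks, or Nuijten): it simulates a researcher who measures five independent outcomes when there is no true effect anywhere, and who reports "the study worked" if any single test clears p < .05.

```python
import random

random.seed(42)

def significant(n=30):
    """One z-test on a sample from N(0, 1): the true effect is exactly zero."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n
    se = 1 / n ** 0.5  # standard error with known sd = 1
    return abs(mean / se) > 1.96  # two-sided test at alpha = .05

trials = 10_000

# Pre-specified analysis: one outcome, one test.
single = sum(significant() for _ in range(trials)) / trials

# Opportunistic analysis: five independent outcomes, report
# the study as "significant" if any one of them is.
best_of_5 = sum(any(significant() for _ in range(5)) for _ in range(trials)) / trials

print(f"False-positive rate, one pre-specified test: {single:.3f}")
print(f"False-positive rate, best of 5 analyses:     {best_of_5:.3f}")
```

With one pre-specified test the false-positive rate stays near the nominal 5%, but picking the best of five analyses pushes it to roughly 1 − 0.95⁵ ≈ 23%, which is why the reported p-value can no longer be taken at face value.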
Why do we want these significant results so badly? Why do we transform data in a favorable way or run different analyses to obtain them? One of the causes of the opportunistic bias is closely related to the publication bias.
Michèle Nuijten: “Publication bias is putting the non-significant results in the closet and publish the significant results in the journals.”
Publication bias distorts the whole picture in the literature: when only articles with significant results appear in journals, researchers are pushed to produce (only) significant results, which in turn fuels the opportunistic bias. Our view of the world changes, and the (scientific) knowledge we have is not as objective as it should be. In the field of medicine, for instance, this can be dangerous. Publication bias is not only noticeable in the scientific world, but also in journalism. Journalists need remarkable and sensational stories to get people to read a blog or newspaper and to get paid by their employers. As a result, incorrect and biased information is encouraged (even more) in our society, leading to a misguided worldview.
The problem is clear: the opportunistic bias is driven in large part by the publication bias. What can we do about it?
Researchers should aim to produce reliable articles with as little publication and opportunistic bias as possible. They can preregister their studies on the Open Science Framework (OSF), where a preregistration cannot be changed once it is posted. They should also be cautious when referring to other articles or previous theories. To judge whether an article is adequate, researchers can look for bad and good signs. Bad signs:
- Statistical errors;
- Many p-values just below .05;
- Post hoc explanations of covariates;
- Removing outliers without doing a sensitivity check;
- Vague and inaccurate language in the method section;
- Degrees of freedom that don’t match the sample size.

Good signs:

- High power or large sample size;
- Preregister hypotheses, method and analysis plan;
- Openness (share data, analyses, material online);
- Replication with high power and preregistration;
- Meta-analysis of different studies (test for publication bias).
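The last good sign, testing a meta-analysis for publication bias, matters because the file drawer skews pooled effects. Here is a minimal sketch (again my own illustration, not from the sources): the true effect is exactly zero, yet only studies that come out significant in the predicted direction get "published".

```python
import random

random.seed(1)

def study(n=20):
    """One simulated study of a true effect of exactly zero (sd = 1)."""
    mean = sum(random.gauss(0, 1) for _ in range(n)) / n
    se = 1 / n ** 0.5
    return mean, mean / se > 1.96  # significant in the predicted direction?

studies = [study() for _ in range(2000)]

all_effects = [m for m, _ in studies]
published = [m for m, sig in studies if sig]  # the rest stay in the file drawer

mean_all = sum(all_effects) / len(all_effects)
mean_published = sum(published) / len(published)

print(f"Mean effect across all {len(studies)} studies: {mean_all:+.3f}")
print(f"Mean effect of the published studies only: {mean_published:+.3f}")
```

A meta-analysis of only the published studies would conclude there is a solid positive effect even though the true effect is zero; funnel plots and related tests try to detect exactly this kind of asymmetry.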
What could journals do? They could adopt more rigorous and thorough reporting standards (e.g. reporting both the intended and the actual sample size, describing all measured variables, and stating which analyses were pre-specified and which were exploratory). In addition, journals could require increased disclosure (e.g. researchers have to keep a log of all performed analyses and procedures). In my opinion, journals should also publish non-significant results, because a non-significant result is still a result. They can do this by accepting or rejecting research proposals on the basis of the theory, the described method, and the proposed analysis. When a proposal is accepted, the journal agrees to publish the study regardless of whether the results turn out significant. I do think this format should come with some additional requirements to uphold the quality of research papers.
What can journalists do? They should be cautious when referring to an article. In my opinion, many journalists are not: they would rather be sensational than subtle. An example is the Washington Post report “Even Casually Smoking Marijuana Can Change Your Brain, Study Says”. The study it refers to only showed that the brains of casual pot users differed from those of nonusers; it did not show that these differences were caused by marijuana use (indeed, the study's design could not establish causality). Journalists should be critical and more skeptical, and not just write whatever sounds exciting.