Positive bias
Positive bias, or confirmation bias, is the tendency to test hypotheses with positive rather than negative examples, thereby risking missing obvious disconfirming tests.
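A minimal sketch of this dynamic, modeled on Wason's 2-4-6 task cited below (the over-specific hypothesis and the particular triples are illustrative assumptions, not taken from the article): a subject who tests only triples that fit their own hypothesis sees nothing but confirmation, while a single negative test exposes the mistake.

```python
def secret_rule(triple):
    """The rule in Wason's 2-4-6 task: any strictly ascending sequence."""
    a, b, c = triple
    return a < b < c

def subjects_hypothesis(triple):
    """A typical over-specific guess: 'numbers increasing by two'."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

# Positive testing: trying only triples the hypothesis itself predicts.
# Every such test "confirms" the guess, since these triples also fit the real rule.
positive_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
for t in positive_tests:
    assert secret_rule(t) == subjects_hypothesis(t)

# A negative test: a triple the hypothesis rejects.
disconfirming = (1, 2, 10)
print(secret_rule(disconfirming))          # True  -> fits the real rule
print(subjects_hypothesis(disconfirming))  # False -> hypothesis is falsified
```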
Kevin Kelly argues that negative results should be "saved, shared, compiled and analyzed, instead of being dumped. Positive results may increase their credibility when linked to negative results." [1] If so, this bias is particularly dangerous: the absence of negative results would itself cast doubt on even entirely valid conclusions.
As an extreme example, imagine one hundred stock-market prediction algorithms placed in one hundred safety deposit boxes under one hundred different assumed names. Ten years later, to great fanfare, only one box is opened: the one whose predictions turned out, after the fact, to be the most accurate. Without a discipline that forces the reporting of negative results, there is no way to prove whether the other 99 ever existed. Even a person who had filed away only one algorithm, once, under their own name, would therefore be suspect, since nothing distinguishes them from someone who quietly discarded ninety-nine failures.
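A short simulation of this selection effect (the parameters here, one hundred coin-flip "algorithms" over roughly ten years of trading days, are assumptions for illustration only): every predictor is pure chance, yet reporting only the best one makes it look meaningfully better than average.

```python
import random

random.seed(0)

N_ALGOS = 100   # hypothetical number of filed-away "algorithms"
N_DAYS = 2500   # roughly ten years of trading days

# True market directions, generated at random.
market = [random.choice([True, False]) for _ in range(N_DAYS)]

# Each "algorithm" guesses direction by coin flip; record its accuracy.
accuracies = []
for _ in range(N_ALGOS):
    guesses = [random.choice([True, False]) for _ in range(N_DAYS)]
    hits = sum(g == m for g, m in zip(guesses, market))
    accuracies.append(hits / N_DAYS)

# Opening only the winning box hides the 99 negative results
# that would reveal the whole scheme as chance.
print(f"Best reported accuracy: {max(accuracies):.3f}")
print(f"Average accuracy across all {N_ALGOS}: {sum(accuracies) / N_ALGOS:.3f}")
```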
External links
- Online test implementing Wason's experiment (link not working as of January 19, 2012)
- Online version of MBlume's C++ program
- [1] Kevin Kelly on future science
References
- Wason, P. C. (1960). "On the failure to eliminate hypotheses in a conceptual task". Quarterly Journal of Experimental Psychology, 12: 129–140.
See also
- Motivated skepticism
- Availability bias
- Surprise
- Narrative fallacy
- Privileging the hypothesis
- Write Your Hypothetical Apostasy