Confirmation bias is our tendency to seek to confirm what we already believe. When we evaluate a claim/argument/hypothesis/etc., we'll search for evidence that supports our own belief about the issue and ignore, downplay or not even look for any evidence against it.
A modern example is the use of search engines to seek information. When the results of our query are returned, we scan the list and click on the results that best match what we were hoping to find - then conclude we were right all along!
As a worked (hypothetical) example, let's look at someone's belief that hypnotherapy helps people to stop smoking. People do go to hypnotherapists and subsequently give up smoking and there are many people who will anecdotally state that hypnotherapy worked for them. It seems convincing, but is this proof that hypnotherapy really helps people to give up smoking?
If we only search for and consider evidence for outcomes that support what we want to find, what we're doing is introducing biases known as selective attention (seeing only what we want to see) and suppressed evidence (avoiding what we don't want to see). Of course the problem is that using a biased data sample will most likely result in a false conclusion.
Evidence needs context to be meaningful. Positive outcomes need to be compared to negative outcomes to give a success rate, and in turn, that success rate needs to be compared to something else to provide context. That could be a competing hypothesis, or doing nothing (as a control comparison).
Many problems can be analysed using a simple table like this one:
| | (A) Gave up smoking | (B) Failed to give up smoking |
|---|---|---|
| (1) Used hypnotherapy | 30 | 70 |
| (2) Did not use hypnotherapy | 45 | 105 |
Here we're counting the number of people who used hypnotherapy and gave up smoking (cell A1) but also the number who used hypnotherapy and failed to give up smoking (cell B1). Then we compare that result to a group of people who did not use hypnotherapy while attempting to give up smoking.
Because the two groups have different sample sizes, we convert the counts into success rates (successes divided by total attempts) and then compare the results. This can be done in the following way:
| Group | Calculation | Success rate |
|---|---|---|
| Used hypnotherapy | A1 / (A1 + B1) = 30 / 100 | = 0.3 (30%) |
| Did not use hypnotherapy | A2 / (A2 + B2) = 45 / 150 | = 0.3 (30%) |
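The same comparison can be sketched in a few lines of Python. The cell names (A1, B1, A2, B2) follow the row and column labels from the table above, and the counts are the hypothetical figures from this example:

```python
# Hypothetical counts from the 2x2 table: rows are "used hypnotherapy" /
# "did not use hypnotherapy", columns are "gave up" / "failed to give up".
def success_rate(successes, failures):
    """Fraction of attempts that succeeded."""
    return successes / (successes + failures)

hypno = success_rate(30, 70)       # A1 / (A1 + B1)
no_hypno = success_rate(45, 105)   # A2 / (A2 + B2)

print(f"Used hypnotherapy:        {hypno:.0%}")     # 30%
print(f"Did not use hypnotherapy: {no_hypno:.0%}")  # 30%
print(f"Net benefit of hypnotherapy: {hypno - no_hypno:+.0%}")
```

The point of computing both rates, rather than only the first, is exactly the point of the table: a single cell (30 successes) looks persuasive on its own, but the comparison shows the difference between the groups is zero.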
As can be seen from the figures, 30% of people who go to a hypnotherapist manage to give up smoking; however, when we give that figure context by comparing it to those who did not go to a hypnotherapist, we find that they too were successful 30% of the time. There is no difference between the two groups; the net benefit from using hypnotherapy is zero.
This example is hypothetical, but the model is what is important. Whether looking at psychics' claims, alternative remedies, the latest fad diet, or another miracle product, looking only at confirmatory evidence will likely lead to false conclusions. Only when disconfirming evidence is considered, and the hypothesis under consideration is compared to something else, can we state whether or not it is true.
Seeking out and being influenced by confirmatory evidence is something we do naturally. This leads to a situation where people can be influenced, often quite strongly, by information that they already believe is true or would like to be true.
Once we're aware of this bias, we can compensate for it by forcing ourselves to look at disconfirming evidence and arguments against what we believe in order to make more accurate assessments. This can be extremely beneficial, particularly with important decisions and issues - where mistakes can be very costly.