Sunk Cost Fallacy
The Sunk Cost Fallacy refers to people's tendency to continue with a behaviour or project because of the emotional and financial investment they have already made. One explanation, identified by psychologists Daniel Kahneman and Amos Tversky, is loss aversion: people feel losses more strongly than equivalent gains, and so tend to avoid realising a loss.
People who have incurred a sunk cost also over-estimate the probability that a project will succeed compared with those who have not. The fallacy may result from a combination of factors, including a bias caused by an ongoing commitment, the status quo bias, cognitive dissonance, plausible deniability and regret avoidance. Together these biases make people less willing to accept failure and halt their behaviour.
In an experiment by Hal R. Arkes and Catherine Blumer (1985), theatre season-ticket purchasers were randomly offered a discount of either $2, $7 or nothing. Customers who paid the full price attended more shows than those who received either discount. Because the discounts were assigned randomly, the groups should not have differed in either the costs or the benefits of attending; the only difference between them was the size of the sunk cost they had incurred.
We see evidence of the sunk cost fallacy in all areas of life. In business, many projects suffer from it: owners and project managers persevere with initiatives that are over budget and behind schedule despite warnings that they are unlikely to deliver a return on investment. Advertising campaigns that fail to deliver a return are often re-run on the basis that they deserve a second chance.
Implications for Conversion Rate Optimisation:
Breaking user tasks into smaller chunks, such as converting a long form into a multi-step form, can improve completion rates by harnessing the sunk cost fallacy. Once users have completed the first step of a form they are more likely to finish it, because they have already invested time and effort.
Optimisers can fall into the trap themselves if they don't review the costs and benefits of the changes in their roadmaps. If a change is running behind schedule and costing more than expected, revisit the decision, and stop working on it if it no longer offers a sufficient return on investment.
Review the progress of experiments after a week or so to evaluate the likelihood of the test achieving a significant uplift. If a test shows a large decline in its success metrics, it is unlikely to turn around and deliver a significant uplift. Rather than continuing with a test because of the investment already incurred, consider the opportunity cost: it may be more efficient to stop the test early so that another, more promising experiment can go live instead.
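To make this kind of interim review concrete, here is a minimal sketch (not from the original article; the function name and thresholds are illustrative) of checking a test's observed uplift with a two-proportion z-test using only Python's standard library:

```python
# Illustrative sketch only: an interim check on an A/B test using a
# two-proportion z-test. The -10% uplift and p < 0.05 stopping thresholds
# are hypothetical examples, not recommendations from the article.
from math import sqrt, erfc

def interim_check(conv_a, n_a, conv_b, n_b):
    """Return the relative uplift of variant B over control A and a
    two-sided p-value from a pooled two-proportion z-test."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    uplift = (rate_b - rate_a) / rate_a
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    p = erfc(abs(z) / sqrt(2))  # two-sided p-value, normal approximation
    return uplift, p

# After a week: variant B is converting well below control.
uplift, p = interim_check(conv_a=250, n_a=5000, conv_b=180, n_b=5000)
if uplift < -0.10 and p < 0.05:
    print("Large, significant decline - consider stopping the test early.")
```

Note that repeatedly peeking at a test inflates false-positive rates, so a check like this is best treated as a coarse guard against persevering with a clearly failing test, not as a substitute for a pre-planned stopping rule.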
Marketers are prone to the fallacy when they monitor and optimise campaigns using vanity metrics that show no correlation with the business's bottom line. Facebook's own Head of Marketing Science, for example, evaluated "Likes", "Shares", message posts and "Clicks", and concluded in an article for the Journal of Advertising Research that there is no correlation between these metrics and real-world effectiveness. Yet many marketers continue to spend time and money on these "shiny" metrics because they have invested so much in reporting on them.
"The allure of measurable and traceable 'shiny' metrics, such as social media users' 'Likes,' 'Shares,' message posts, and 'clicks,' has led marketers to endless, intricate reports on the irrelevant." – Brad Smallwood, Head of Marketing Science, Facebook