I recently read Michael Lewis’ book “The Undoing Project” about how a ground-breaking pair of psychologists, Daniel Kahneman and Amos Tversky, upended the science of decision making by demonstrating that the human mind can make systematic errors of judgment. Their work won a Nobel Prize in Economics, paved the way for Richard Thaler and Behavioral Economics, and has even led to the creation of “Nudge Units” within governments to help shape policy to fit the way people think.
I’ve long been an advocate of including feedback from your target audience frequently throughout the project lifecycle, and this book got me thinking about why that is important, and about the perceptual hurdles that prevent a project team from fully understanding the end user. My next post is about how to collect this early, unbiased feedback, but first we need to discuss the impact of cognition on envisioned project value.
PROBLEM: Why don’t people adopt changes designed to benefit them?
Often as project teams we define “Success” as Go-Live, the moment when we release our new change, software, or product for others to use. But really, value is only created as people start using what we released. If a start-up brings to market what it believes is a ground-breaking product and no one buys it, it will soon be out of business. Similarly, if a company releases a new process and accompanying software update, its value is directly tied to whether employees are willing and able to use it.
If we want to create the most value from our project, we need to be aware of what is driving or slowing adoption. So the question is: why don’t people adopt changes designed to benefit them? We can think about this in terms of two factors: the perceived effort required to make the change, and the perceived value of the future state. When someone believes that the EFFORT to change is greater than the VALUE they will receive, they will resist adoption. It is their perception of the effort and value that matters for us, as we don’t know the actual effort required and value received until after the fact. Also, the effort vs. value equation will not be the same for everyone involved.
Generalizing the audience by these factors, we can see patterns that affect adoption. The project team and executive sponsor(s) are unlikely to push out a new change or product unless they feel that the benefit is high and the effort to adopt by the target audience will be relatively low. Where these perceptions sit in relation to each other may vary depending on your project, and you should be aware of the perceptions of your core team to know if someone senses a risk that others may be ignoring. Usually at least some portion of your target audience is also aligned, and believes that the value of making the change is worth the effort to do so. These will be your early adopters; while they will tolerate helping to work out the kinks, others will be watching them to see how the effort vs. value equation works out in real life.
People who believe that the change will benefit them (in efficiency, quality, higher profits, or reduced costs) but also feel the move could be difficult will be willing to make the change but hesitant to get started. Attention by the project team to reducing the effort to adopt will help get them past their initial inertia. Other people may be skeptical of the value of the project while not seeing the transition as difficult. This might be because the main value of the project is not aimed at them, or because they have external constraints you did not take into consideration. Another possibility, if you are standardizing across the organization, is that the new maturity level is below what that group is used to. For these people, working on ease of use or ease of transition will be largely wasted. You should instead find a way either to include features they value in another area, or else to sell them (and their leaders) on the overall value for the organization and on this being just the first step in a larger journey.
The final group, of course, is the resistant users, who do not believe they will realize much value from the future state and who expect the transition to require a great deal of effort. These people will be the last to adopt the change, and will be willing to put up with inconveniences and painful workarounds to keep their status quo. They will be vocal with their negative reviews, seeking to persuade others to reject your project. They may engage in political maneuvers to delay or subvert the roll-out. They are also the most difficult to persuade, as you need to work on improving their perception of both factors at once.
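The four segments above can be summarized as a simple two-by-two classification on perceived value and perceived effort. Here is a minimal sketch of that mapping; the function name, the 0-to-1 scoring scale, and the 0.5 threshold are all hypothetical illustrations, not research findings.

```python
def classify(perceived_value: float, perceived_effort: float,
             threshold: float = 0.5) -> str:
    """Return the adoption segment for one stakeholder.

    Scores are assumed to be normalized to a 0-1 scale; the 0.5
    threshold is an arbitrary cut-off chosen for illustration.
    """
    high_value = perceived_value >= threshold
    low_effort = perceived_effort < threshold
    if high_value and low_effort:
        return "early adopter"   # value clearly worth the effort
    if high_value:
        return "hesitant"        # reduce the effort to unblock them
    if low_effort:
        return "skeptical"       # sell the value, not ease of use
    return "resistant"           # must improve both perceptions at once

print(classify(0.9, 0.2))  # early adopter
print(classify(0.8, 0.8))  # hesitant
print(classify(0.3, 0.9))  # resistant
```

The point of the sketch is only that the two "mixed" quadrants call for different interventions, which is why the paragraphs above treat them separately.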
Cognitive biases change the project team’s perception of effort vs. value from that of end users
In working on adjusting the perceptions of effort vs. value of your target audience, it helps to understand the cognitive biases that affect our judgements, and how many of these biases push the project team and end-users in opposite directions. If you are not careful, your team will be ready to go live with a new system or change and only then realize that your perception of the effort vs. value equation is vastly different from that of your target audience. For our purposes, we will discuss biases that lead our teams to underestimate the effort users believe a change will require, overestimate the value users believe they will receive from the change, or do both at the same time. I am focusing here on how these biases affect a project team’s understanding of the target audience’s perception of effort and value, as that is the first area we need to correct. However, these biases can also impact the target audience’s perception vs. reality.
Underestimating Users’ Perception of Effort
Blind Spots – The day-to-day actions of our target audience can be very nuanced, and even when we spend time understanding where they are coming from, it is impossible for a project team to see everything. When judging the difficulty of completing a task, we are incapable of factoring in items we don’t know about. On paper, we adjust for this by adding a buffer to our estimates, but it is much harder to add a buffer to our mental opinions.
Curse of Knowledge – Research has shown that once people understand a piece of knowledge, we systematically underestimate how difficult it will be for someone else to learn about it. For instance, a Stanford University study by Elizabeth Newton asked people to tap out the rhythm of a well-known song (such as “Happy Birthday”) on a table while a listener tried to guess the name of the song. The “tappers” predicted that listeners would guess the song correctly 50% of the time, when in fact listeners were only successful 2.5% of the time. Much as we are unable to account for information we lack, we are unable to properly discount information that we do have. In project settings, this causes us to believe that end-users will be able to learn a new system or process more quickly than they are actually able.
Empathy Gap – The more removed a project team is from the target audience, the more difficult it will be to account for their habits and preferences. For instance, an analyst fresh out of college will have trouble appreciating how a blue-collar worker close to retirement prefers a screen to be laid out. What is intuitive for one group will be confusing to another, and it is impossible to discern what works best without enlisting feedback from your target audience.
Loss-aversion – People prefer to avoid a loss rather than acquire an equivalent gain. A study published in the Journal of Services Marketing found that a price increase drives more customers to defect to competitors than a price decrease of similar magnitude attracts. Our target audience feels ownership over their current systems and processes, and so will be more reluctant to give them up than we expect. However, we can use this to our advantage: if we include users as co-creators in the project design process, they will feel ownership of the future state as well.
Overestimating Users’ Perception of Value
Ikea Effect – An experiment by Norton, Mochon and Ariely published in 2012 demonstrated that people value work they are involved in more than non-participants do. Subjects were willing to pay 63% more for furniture they helped assemble than for the same furniture pre-assembled. As project teams, we can’t help but see the effort that goes into a project, and are required to frequently persuade others of the value of continuing the work. This is a necessary component of demonstrating the professional will required to see a project to completion, but has an adverse effect on our perception of how much users will value the result.
Availability – When building a product or designing a new process, we account for the complications that are easiest to conceive of and validate. For instance, we are more likely to protect against bugs that can be found by a team member in a test environment than against issues that arise once multiple subject matter experts are using the system at the same time. An 80% unit-test coverage rate gives us a sense of quality, and then a major bug that only occurs with the version of Internet Explorer the finance team has installed renders the application unusable.
Recency – New information comes with a sense of urgency that makes it feel more important than information we have known about for a while. We assume that because we’ve known about an issue for a long time, it is less important to the overall quality of the project, or that there is a workaround in place. When users start to adopt a system or process, all of these quirks will be discovered together, so they will weigh the criticality of each piece differently than the project team does. This bias can also lead to project teams that jump into fighting the next fire before they have fully extinguished the last one.
Optimism – When we have a goal or outcome that we wish to achieve, it is common to err optimistically about our probability of achieving that outcome. This optimism can present itself positively (we feel more likely to be successful than we should) or negatively (we feel more likely to avoid a negative outcome than we should). Optimism bias has been shown to be greater when we have more perceived control. For instance, people believe they are less likely to be in a car accident if they are the one driving. Prior experience and putting in place a mechanism to test the results have been shown to reduce optimism bias.
Underestimating Effort and Overestimating Value
Confirmation Bias – Human tendency is to readily accept evidence that favors our point of view, and to reject evidence that contradicts it. We are more likely to interpret ambiguous information as supporting our point of view. This stems from inductive reasoning and a natural preference to win. The effect of this bias is stronger in emotionally charged situations, or when we have a stake in a particular outcome. As a project team, we are highly invested in the project being successful, so we discount issues or more readily accept workarounds to maintain the belief that the end result will be valuable overall.
Bandwagon Effect – Confirmation bias can be compounded by a dogmatic leader. Subordinates grow accustomed to a leader who will “not take NO for an answer,” and so filter out problems, or frame them in ways that minimize their impact on the effort vs. value equation, before presenting them to the team.
Anchoring – Research has shown that we put too much weight on the first piece of information we receive, even when it is not relevant to the question. In one study, some participants were asked whether Mahatma Gandhi died before or after the age of 9, while others were asked if he died before or after the age of 140. Both anchors are obviously ridiculous; nevertheless, the average response was 50 years old for the first question and 67 for the second. On our projects, once we have designed a workable solution to the problem we have been tasked to solve, it can be very difficult to accept alternatives. This is partly because we compare later partial solutions to the first, more complete solution and look for similar positive characteristics. Anchoring also leads us to overestimate the probability of conjunctive events (e.g. hitting every milestone on time: because the probability of each milestone separately is high, we judge the overall probability as more likely than it is) and to underestimate the probability of disjunctive events (e.g. that any one of many risks will impact the timeline: the probability of any single risk causing a delay is low, but their combined probability is higher than we expect). Complex systems amplify this bias because they tax our working memory.
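The conjunctive/disjunctive asymmetry is easy to verify with arithmetic. Assuming purely hypothetical numbers — ten independent milestones, each with a 95% chance of finishing on time, and ten independent risks, each with only a 5% chance of causing a delay — the anchor of "95% per milestone" feels much safer than it is:

```python
# Hypothetical, illustrative probabilities (not from the cited studies).
p_milestone_on_time = 0.95   # chance a single milestone is hit on time
p_risk_hits = 0.05           # chance a single risk causes a delay
n = 10                       # number of independent milestones / risks

# Conjunctive event: ALL ten milestones land on time.
p_all_on_time = p_milestone_on_time ** n

# Disjunctive event: AT LEAST ONE of ten low-probability risks hits.
p_any_delay = 1 - (1 - p_risk_hits) ** n

print(f"P(all {n} milestones on time) = {p_all_on_time:.2f}")  # ~0.60
print(f"P(at least one risk delays us) = {p_any_delay:.2f}")   # ~0.40
```

Even though each individual number anchors us toward confidence, the combined chance of a perfectly on-time plan is only about 60%, and the "unlikely" risks jointly threaten the timeline about 40% of the time.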
Simplification – In order to make comparisons and decisions, we take complex alternatives and reduce them to their simplest forms. The judgement we reach may not follow from the original situation, depending on the importance of the factors we stripped out. An example of this is causal oversimplification: we assume that because Y followed X, X caused Y. However, in reality A, B, C, D, etc. also contributed to Y, directly or indirectly.
Pressure to Deliver (Fixed vs. Growth Mindset) – Projects can be high-stress situations, especially as we approach milestones and deadlines. A cultural environment that follows a fixed mindset, i.e. one where resources are limited and people are static, can lead to project teams becoming protective and uncomfortable acknowledging that they don’t have all the answers. They feel required to protect themselves: the risk of losing face outweighs the benefit of open collaboration, especially when they feel they are on the right track. Teams are more likely to become entrenched and force their perspectives, including their view of the effort vs. value equation.
What Comes Next?
A true understanding of your target audience’s perception of effort vs. value will inform you about the adoption risk your project faces. Most of the cognitive biases listed here are difficult to self-moderate, as they are built into the way we reason and make judgements. Rules of thumb that work to our advantage most of the time lead us astray in certain contexts. As mentioned earlier, the best way to combat these biases is through early and frequent feedback from the target audience, which forces us out of our entrenched point of view. But what is the best way to collect this feedback for your project, and what do you do with the information once you have it? My follow-up post, Stop Complaining About Changing Requirements!, explains my approach based on my past project experience, but I am interested to hear your thoughts as well. What biases have you seen crop up on projects? What have you done to collect meaningful feedback from your target audience? How much of your project value is at risk due to cognitive bias?
Tversky, Amos; Kahneman, Daniel. (1974) “Judgment under uncertainty: Heuristics and biases.” Science, Vol 185(4157), Sep 1974, 1124-1131. http://dx.doi.org/10.1126/science.185.4157.1124
Heath, Chip; Heath, Dan. (2006) “The Curse of Knowledge” Harvard Business Review, December 2006. https://hbr.org/2006/12/the-curse-of-knowledge
Dawes, J. (2004) “Price Changes and Defection Levels in a Subscription-type Market: Can an Estimation Model Really Predict Defection Levels?” Journal of Services Marketing. 18 (1): 35–44. doi:10.1108/08876040410520690
Norton, Michael I.; Mochon, Daniel; Ariely, Dan (2012). “The IKEA effect: When labor leads to love” Journal of Consumer Psychology. 22 (3): 453–460. doi:10.1016/j.jcps.2011.08.002
Klein, Cynthia T. F.; Marie Helweg-Larsen (2002). “Perceived Control and the Optimistic Bias: A Meta-analytic Review.” Psychology and Health. 17 (4): 437–446. doi:10.1080/0887044022000004920
Strack, Fritz; Mussweiler, Thomas (1997). “Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility.” Journal of Personality and Social Psychology. 73 (3): 437–446. doi:10.1037/0022-3514.73.3.437