Have you ever been part of a software development team that complained it could not make progress because the users’ requirements kept changing? It is certainly frustrating to feel like you have wrapped something up, only to have to go back and tweak it over and over. Even “Agile” teams, built around the idea that requirements are a conversation that becomes clearer as you go, often find it hard to go back and revisit completed stories.
Do you know what is even more frustrating than changing requirements? Pushing a new system into production only to have users resist adopting it because it doesn’t meet their needs. My previous article, “How Much of Your Project Value is At-Risk Due to Cognitive Bias?”, discusses the many ways project teams can misperceive the target audience’s willingness to adopt. Setting aside higher-level political interests or organizational fears, people adopt when they believe the value of making a change will be greater than the effort required to make it. Essentially: given a choice, people adopt when perceived value > effort, and do not when perceived value < effort. When people are not given a choice, those who believe value < effort create organizational friction in the form of reduced productivity, political turmoil, or shadow IT. Systematic biases lead us to overestimate the value of our projects and to underestimate how much effort people will need to change their habits.
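To make that comparison concrete, here is a minimal sketch in Python. The 1–10 scoring scale and the example users are hypothetical, purely to illustrate the value > effort rule; nothing about the scores themselves comes from research.

```python
# Illustrative only: a toy model of the adoption rule described above.
# Scores and user segments are hypothetical, not from the article.

def will_adopt(perceived_value: float, perceived_effort: float) -> bool:
    """A person chooses to adopt when perceived value exceeds perceived effort."""
    return perceived_value > perceived_effort

# Example: (value, effort) scores on a 1-10 scale for three hypothetical users
users = {
    "analyst": (8, 3),  # high value, low effort  -> adopts
    "manager": (5, 5),  # value equals effort     -> no clear win, treat as no
    "clerk":   (2, 7),  # low value, high effort  -> resists (friction risk)
}

for name, (value, effort) in users.items():
    print(f"{name}: adopts={will_adopt(value, effort)}")
```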
So what do we do about it? How can we overcome deeply rooted problems in the way we process information? Research has demonstrated that you can’t simply think yourself out of these pitfalls; you need to expand your feedback loops to include people outside of your core team. Let’s start with what this means for project execution. It is not uncommon for teams to have some form of validation process in place, but that process often fails to break teams out of their cognitive biases. To be effective, a validation process must be designed to test the team’s assumptions with unbiased external parties. This approach of early, unbiased feedback emulates the scientific method and is composed of four steps: Hypothesize, Validate, Feedback, Adjust.
First, form a hypothesis about how to solve a need of the target audience. How will you help them with their Job to be Done? Define an assertion about the future state, and be explicit about the assumptions you have made that need to be tested.
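One lightweight way to keep those assumptions explicit is to write the hypothesis down in a structured form. The sketch below is one illustrative format in Python, not a prescribed template; the field names and example content are hypothetical.

```python
# A minimal sketch of one way to capture a hypothesis and its testable
# assumptions. Field names and example content are illustrative.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    job_to_be_done: str  # the user need you believe you are solving
    assertion: str       # the future state you expect to create
    assumptions: list = field(default_factory=list)  # what must be true; test these

h = Hypothesis(
    job_to_be_done="Field reps need order status without calling support",
    assertion="A self-serve status page will cut status calls in half",
    assumptions=[
        "Reps check status more than once per order",
        "Reps have reliable mobile connectivity on site",
    ],
)
```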
Second, create something tangible that you can validate with your target audience. In software development, this could be a wireframe, a clickable prototype, or your iteration deliverable. On a business intelligence project, it could be a simple version of a report built from a one-time data extract. When implementing a strategic process improvement, it could be a session where you roleplay a scenario with employees from different departments. The point is to give people a concrete feel for what the end state will mean to them. A straightforward way to accomplish this is to take the deliverables you are already creating for project sponsors and use them in one-on-one guerrilla interviews with members of your target audience. To be clear, giving a presentation to your Product Owner does not count. That person is a proxy for the target audience and is subject to the same biases as the rest of the team. Validation must be done with someone external to your project team in order to break past cognitive bias.
Third, define a feedback mechanism to synthesize the qualitative and quantitative learning from your validation exercises. It should enable you to test your assumptions, reveal blind spots, and consider what you have learned in aggregate, grouping your target audience by how they perceive the value and effort of the change. Teams fail at this step when they do not take the time to synthesize what they have learned and discuss how it should affect the project’s direction.
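As an illustration of that grouping, here is a minimal sketch assuming each validation session yields a perceived-value and perceived-effort score per participant. The scores, the threshold, and the quadrant labels are all hypothetical choices, not part of the original approach.

```python
# A minimal sketch of grouping validation feedback into value/effort
# quadrants. Scores, threshold, and labels are illustrative assumptions.
from collections import defaultdict

def quadrant(value: int, effort: int, threshold: int = 5) -> str:
    """Place a participant in a quadrant based on perceived value and effort."""
    if value > threshold and effort <= threshold:
        return "likely adopters"
    if value > threshold and effort > threshold:
        return "need effort reduced"
    if value <= threshold and effort <= threshold:
        return "need value demonstrated"
    return "adoption at risk"

# (participant, perceived value, perceived effort) from validation sessions
sessions = [("A", 8, 3), ("B", 7, 8), ("C", 3, 2), ("D", 2, 9)]

groups = defaultdict(list)
for participant, value, effort in sessions:
    groups[quadrant(value, effort)].append(participant)

for label, members in groups.items():
    print(f"{label}: {members}")
```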
Fourth, place deliberate checkpoints in your project plan to review user feedback and adjust as needed. A good process lets you continually course-correct toward the right intersection of effort and value for your target audience, while giving your development team enough stability to keep moving forward. Teams usually use feedback from validation sessions to adjust the scope of individual features, but it is less common to step back and assess whether the high-level project direction should change. We should force ourselves to reevaluate whether we are still on track to deliver real value at a reasonable level of effort.
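At a checkpoint, that reassessment can be as simple as a rule of thumb applied to the groups from the previous sketch. The example below is hypothetical; the 30% threshold and the group labels are illustrative assumptions, not recommendations from the approach itself.

```python
# A minimal sketch of a checkpoint rule, reusing the quadrant groups
# from the previous sketch. The 30% threshold is an illustrative
# assumption.
def needs_direction_review(groups: dict, threshold: float = 0.3) -> bool:
    """Flag a high-level direction review when too much of the target
    audience sits in low-value or high-risk quadrants."""
    total = sum(len(members) for members in groups.values())
    at_risk = (len(groups.get("adoption at risk", []))
               + len(groups.get("need value demonstrated", [])))
    return total > 0 and at_risk / total >= threshold

# Example with the groups computed above:
# needs_direction_review(groups)  # True -> revisit direction, not just scope
```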
Using this approach can help you step outside your own mindset and align more closely with the needs and pain points of your target audience. You are likely already doing many similar steps, but are your current methods truly enough to break your team out of its biases? My goal in publishing this article is to get feedback on my approach and resources so I can continue to improve them. How valuable do you think this approach would be for your teams? What might make it difficult to implement? If you have used some form of this approach, what methods of validation, feedback, and course correction have you found successful?
Static requirements are a sign that you have not been effectively seeking feedback from your target audience. Both your ideas and theirs about the effort and value involved in meeting their needs will change over time, and if your process does not allow for course correction, you are likely to build a solution to no one’s problem. Who will be complaining then?
Free Resources
You can download a feedback planning worksheet and adoption testing template I have developed here. All I ask is that if you do, you like or share this article so that more people can find them as well. If you use the tools, please leave a comment with your experience and feedback so that we can continue to improve them.
Thank you!