What is prioritization in product management?
Prioritization in product management is the disciplined process of evaluating the relative importance of work, ideas, and requests to eliminate wasteful practices and deliver customer value in the quickest possible way, given a variety of constraints.
The reality of building products is that you can never get everything done — priorities shift, resources are reallocated, funding is scarce. As product managers, it’s our job to make sure we’re working on the most important things first. We need to ruthlessly prioritize features before we run out of resources.
“Opportunity cost is when you never get the chance to do something important because you chose to work on something else instead.”
—Product Roadmaps Relaunched by C. Todd Lombardo, Bruce McCarthy, Evan Ryan, Michael Connors
An effective product prioritization process garners support from stakeholders, inspires a vision in your team, and minimizes the risk of working on something that nobody wants.
What are product prioritization frameworks?
In a 2016 survey conducted by Mind the Product, 47 product managers named the most significant challenge they face at work. While this data sample is too small to make this a statistically significant report, the results will sound painfully familiar to you if you are a product manager.
The biggest challenge for product managers is: Prioritizing the roadmap without market research.
A staggering 49% of respondents indicated that they don’t know how to prioritize new features and products without valuable customer feedback. In other words, product managers are not sure if they’re working on the right thing.
Due to the lack of customer data, we often fall into the trap of prioritizing based on gut reactions, feature popularity, or support requests, or, even worse, fighting an uphill feature-parity battle with our competitors.
Luckily for us, there is a more scientific way to prioritize our work.
Product prioritization frameworks are a set of principles; a strategy to help us decide what to work on next.
The right prioritization framework will help you answer questions such as:
- Are we working on the highest business value item?
- Are we delivering the necessary value to customers?
- Does our work contribute to the broader business objectives?
- Can we get this product to the market?
In this post, we’re going to introduce you to seven of the most popular prioritization frameworks.
- Value vs. Complexity Quadrant
- The Kano Model
- Weighted Scoring Prioritization
- The RICE framework
- ICE Scoring Model
- The MoSCoW method
- Opportunity Scoring
Value vs. Complexity Quadrant
The Value vs. Complexity Quadrant is a prioritization instrument in the form of a matrix: a simple 2 x 2 grid with "Value" plotted against "Complexity."
To make this framework work, the team has to quantify the value and complexity of each feature, update, fix, or other product initiative.
- Value is the benefit your customers and your business get out of the feature. Is the feature going to alleviate any customers’ pains, improve their day-to-day workflow, and help them achieve the desired outcome? Also, is the feature going to have a positive impact on the bottom line of your business?
- Complexity (or Effort) is what it takes for your organization to deliver this feature. It’s not enough that we create a feature that our customers love. The feature or product must also work for our business. Can you afford the cost of building and provisioning the feature? Operational costs, development time, skills, training, technology, and infrastructure costs are just some of the categories that you have to think about when estimating complexity.
If you can get more value with less effort, that's a feature you should prioritize.
Value/Complexity = Priority
Plotted together, the two criteria make up several groups (or quadrants) that objectively show which set of features to build first, which to do next, and which not to do at all.
The quadrants created by this matrix are:
- Quick Wins (upper-left). Due to their high value and low complexity, these features are the low-hanging-fruit opportunities in our business that we must execute with top priority.
- Major Projects, Big Bets, or Potential Features (upper-right). The initiatives that fall into this block are the big project releases that we know are valuable but are too risky to take on because of the resources and costs involved with them.
- Fill-Ins or Maybes (lower-left). This quadrant usually holds the "nice to have" features: small improvements to the interface and "one day, maybe" ideas.
- Time Sink Features (lower-right). Time sinks are the initiatives that we never want our team to be working on.
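The quadrant logic above can be sketched in a few lines. This is a minimal illustration, not part of the framework itself: the 1-10 scales, the midpoint threshold, and the example feature names are all assumptions you would tune to your own scoring scale.

```python
def classify(value: float, complexity: float, midpoint: float = 5.0) -> str:
    """Place a feature into one of the four quadrants."""
    high_value = value >= midpoint
    high_complexity = complexity >= midpoint
    if high_value and not high_complexity:
        return "Quick Win"
    if high_value and high_complexity:
        return "Big Bet"
    if not high_value and not high_complexity:
        return "Maybe"
    return "Time Sink"

# Hypothetical features scored as (value, complexity) on a 1-10 scale
features = {
    "CSV export": (8, 3),
    "Custom reporting": (9, 9),
    "New icon set": (3, 2),
    "Legacy API rewrite": (2, 8),
}

for name, (value, complexity) in features.items():
    print(f"{name}: {classify(value, complexity)}")
```

In practice the "scores" are usually a team conversation rather than precise numbers; the point of the grid is to force a relative comparison, not an exact calculation.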
The Value vs. Complexity Quadrant is an excellent framework to use for teams working on new products. Due to its simplicity, this framework is helpful if you need to make objective decisions fast. Also, if your team lacks resources, the Value vs. Complexity Quadrant is an easy way to identify low-hanging-fruit opportunities.
The drawback of the Value vs. Complexity diagram is that it can get quite busy if you’re working on a super mature product with a long list of features.
In productboard, the Prioritization matrix is an interactive visualization that helps you prioritize features within an objective by visualizing each feature’s value and effort. Just drag and drop features vertically to indicate their value to an objective, and horizontally to indicate estimated effort.
The Kano Model
Developed by Japanese professor Noriaki Kano and his team in 1984, the Kano model is a set of guidelines and techniques used to categorize and prioritize customer needs, guide product development, and improve customer satisfaction.
The idea behind the Kano model is that Customer Satisfaction depends on the level of Functionality that a feature provides (how well a feature is implemented).
The model contains two dimensions:
Satisfaction, also seen as Delight or Excitement (Y-axis), which runs from Total Satisfaction (Delighted or Excited) to Total Dissatisfaction (Frustrated or Disgusted).
Functionality, also seen as Achievement, Investment, Sophistication, or Implementation (X-axis), which shows how well we've executed a given feature. It runs from Didn't Do It at All (None or Done Poorly) to Did It Very Well.
Kano classifies features into four broad categories depending on the customer’s expectations (or needs):
- Expected (Must-Be or Basic). Some product features are simply expected. For example, being able to import your contacts into a CRM system. You must include these in your product requirements.
- Normal (or Performance). The more of these features we build, the more satisfied customers we get. Choose the right set of performance features to create an attractive product.
- Exciting (or Attractive). Some unspoken features, when presented, could create a delightful customer experience. Pick a few of these from your customer feedback and implement them for competitive differentiation.
- Indifferent. The presence or absence of some features doesn’t impact the customer value in any way.
Let’s take a restaurant business, for example:
- An Expected (or Basic) need is that the restaurant is clean and delivers the food on time. Without this, customers would be dissatisfied.
- A Normal (or Performance) requirement is that the food in the restaurant is tasty.
- An Exciting (or Attractive) requirement is that the restaurant offers an extra free meal with your order.
- An Indifferent need is that the restaurant is using a proprietary POS terminal.
The Kano model is useful when you’re prioritizing product features based on the customer’s perception of value:
Perception is the key word here. If the customer lives in an arid climate, rain-sensing wipers may seem unimportant to them, and there will be no delight. Using the Kano model (or any other model incorporating customer value) requires you to know your customer well.
—Product Roadmaps Relaunched by C. Todd Lombardo, Bruce McCarthy, Evan Ryan, Michael Connors
To determine your customers’ perception of your product, you must ask them a pair of questions for each of the features they use:
- If you had (feature), how would you feel?
- If you didn’t have (feature), how would you feel?
Users are asked to answer with one of five options:
- I like it
- I expect it
- I’m neutral
- I can tolerate it
- I dislike it
An example Kano questionnaire:
Then, we collect the functional and dysfunctional answers in what is called an evaluation table.
To learn more about categorizing features in the evaluation table, you can check Daniel Zacarias’ post on the topic.
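The evaluation table can be expressed as a lookup from a (functional, dysfunctional) answer pair to a Kano category. The mapping below follows the commonly published version of the table, including the Reverse and Questionable categories used to flag contradictory answers; treat it as a sketch and check it against your own reference before relying on it.

```python
ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]

def kano_category(functional: str, dysfunctional: str) -> str:
    """Map a (functional, dysfunctional) answer pair to a Kano category."""
    f, d = functional, dysfunctional
    # Liking (or disliking) a feature both when present and absent is contradictory
    if f == d == "like" or f == d == "dislike":
        return "Questionable"
    if f == "like":
        return "Performance" if d == "dislike" else "Attractive"
    if f == "dislike":
        return "Reverse"
    # f is expect / neutral / tolerate
    if d == "like":
        return "Reverse"
    if d == "dislike":
        return "Must-be"
    return "Indifferent"

print(kano_category("like", "dislike"))    # Performance
print(kano_category("expect", "dislike"))  # Must-be
print(kano_category("like", "neutral"))    # Attractive
print(kano_category("neutral", "neutral")) # Indifferent
```

Tally the categories across all respondents for each feature; the most frequent category is usually taken as that feature's classification.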
Weighted Scoring Prioritization
Weighted Scoring Prioritization is another framework that helps you decide what to put on your product roadmap.
The prioritization score is a weighted aggregation of drivers that are used to quantify the importance of a feature. It is calculated using a weighted average of each feature’s score across all drivers, which can serve to represent any prioritization criteria you’d like.
The weight given to each driver (out of a total of 100%) determines the driver’s relative contribution to the final score.
Here’s how to use the Weighted Scoring Prioritization framework:
- Start with a clear strategic overview of your next product release.
- Compile a list of product features that are related to that release. You don’t want to score every single feature in your backlog. Identify and group only the most relevant features for that release theme.
- Define the scoring criteria and assign weights to each driver. Come up with a list of drivers (or parameters) and decide their importance by giving each driver a specific weight from 0% (smallest contribution to the overall score) to 100% (biggest contribution to the score). Make sure all of the stakeholders agree on each criterion.
- Go through each feature and assign a score from 1 to 100 for each driver. The higher the score, the greater the impact the feature has on that driver.
Here’s an example scorecard:
Each feature’s score is multiplied by the driver’s weight, then added to the total Priority score. For example: 90*20% + 90*10% + 50*30% + 20*40% = 50 Total Priority.
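The calculation from the example can be sketched as follows. The driver names and weights here are illustrative assumptions (the scores and weights match the worked example above); your own drivers will differ.

```python
def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted average of a feature's driver scores; weights must sum to 100%."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[d] * weights[d] for d in weights)

# Hypothetical drivers, weighted out of a total of 100%
weights = {"Revenue": 0.20, "Retention": 0.10, "Ease of use": 0.30, "Effort": 0.40}

# One feature's scores (1-100) per driver, matching the example scorecard
feature_scores = {"Revenue": 90, "Retention": 90, "Ease of use": 50, "Effort": 20}

print(round(weighted_score(feature_scores, weights)))  # 50
```

Because the weights are explicit, stakeholders can debate the weights once and then score many features consistently.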
productboard makes the weighted scoring process intuitive by providing you with a visual interface to define the drivers’ weights. You can also filter features based on their prioritization score.
Weighting drivers in productboard
Scoring features in productboard
The RICE framework
The RICE framework is a straightforward scoring system developed by the brilliant product management team at Intercom.
RICE stands for the four factors that Intercom uses to evaluate product ideas: Reach, Impact, Confidence, and Effort.
Reach is how many people will be affected by that feature in a given time. For example, “users per month” or “conversions per quarter.”
Example: 1,000 of our users open this page every month, and 20% of them select this feature. The total Reach is 200 people.
Intercom scores the impact of a specific feature on an individual person level on a scale from 0.5 to 3.
- 3 – massive impact
- 2 – high
- 1 – medium
- 0.5 – low impact
As we previously mentioned in this guide, the number one problem for product managers is prioritizing features without customer feedback. The Confidence score in the RICE method takes this problem into account and allows you to score features based on your research data (or lack of it).
Confidence is a percentage value:
- 100% – high confidence in your data
- 80% – medium confidence in your data
- 50% – low confidence in your data or lack of data
Example: “I have data to support the reach and effort, but I’m unsure about the impact. This project gets an 80% confidence score.”
Effort is the total amount of time a feature will require from all team members. Effort is a negative factor, and it is measured in “person-months.”
Example: this feature will take 1 week of planning, 4 weeks of design, 3 weeks of front-end development, and 4 weeks of back-end development. It gets an effort score of 3 person-months.
Once you have all four factors scored, you use the following formula to calculate the RICE score for each feature:

RICE score = (Reach × Impact × Confidence) / Effort
Intercom has made our life easier by providing a spreadsheet that we can use to calculate the RICE score automatically. You want to work on the features with the highest RICE score first!
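The examples above plug directly into the formula. In this sketch, Reach (200), Confidence (80%), and Effort (3 person-months) come from the examples in this section; the Impact value of 2 ("high") is an assumed pick for illustration.

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort, where Effort divides."""
    return reach * impact * confidence / effort

# 200 users/month, high impact (2), 80% confidence, 3 person-months
score = rice_score(reach=200, impact=2, confidence=0.8, effort=3)
print(round(score, 1))  # 106.7
```

Because Effort sits in the denominator, RICE naturally favors high-value features that are cheap to build, which is the same intuition as the Value vs. Complexity quadrant, just quantified.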
ICE Scoring Model
If you’re looking for a speedy prioritization framework, look no further because the ICE Scoring Model is even more straightforward than the RICE framework.
In the words of Anuj Adhiya, author of “Growth Hacking for Dummies”: think of the ICE scoring model as a minimum viable prioritization framework.
It’s an excellent starting point if you’re just getting into the habit of prioritizing product initiatives, but it lacks the data-informed objectivity of the rest of the frameworks in this guide.
The model was popularized by Sean Ellis, the person credited with coining the term “growth hacking.” It was initially used to score and prioritize growth experiments but later became popular among the product management community.
ICE is an acronym for:
- Impact – how impactful do we expect this initiative to be?
- Confidence – how confident are we that this initiative will prove our hypothesis and deliver the desired results?
- Ease – how easy is this initiative to build and implement? What are the costs of the resources that are going to be needed?
Each of these factors is scored from 1–10, and the average of the three is the ICE score.
You can use this simple spreadsheet built by a member of the Growth Hackers community to calculate your ICE scores.
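The whole calculation fits in one line, which is exactly the point of a "minimum viable" framework. The initiative names and scores below are invented for illustration.

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """ICE = average of Impact, Confidence, and Ease, each scored 1-10."""
    return (impact + confidence + ease) / 3

# Hypothetical initiatives scored as (impact, confidence, ease)
initiatives = {
    "Onboarding checklist": (8, 6, 4),
    "Referral program": (9, 4, 3),
    "Tooltip copy tweaks": (3, 9, 9),
}

# Rank initiatives by their ICE score, highest first
for name, scores in sorted(initiatives.items(),
                           key=lambda item: ice_score(*item[1]),
                           reverse=True):
    print(f"{name}: {ice_score(*scores):.1f}")
```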
One of the issues with that model is that different people could score the same feature differently based on their own perceptions of impact, confidence, and ease. The reality is that the goal of the ICE model is to provide you with a system for relative prioritization, not a rigorous data-informed calculator.
“The point is that the “good enough” characteristic of the ICE score works well BECAUSE it is paired with the discipline of a growth process.”
—Anuj Adhiya, The Practical Advantage Of The ICE Score As A Test Prioritization Framework
To minimize inconsistent assessments, make sure to define what the ICE rankings mean. What do Impact 5, Confidence 7, Ease 3, and so on, mean for you and your team?
The MoSCoW method
The MoSCoW prioritization framework was developed by Dai Clegg while working at Oracle in 1994 and first used in the Dynamic Systems Development Method (DSDM)—an agile project delivery framework.
The MoSCoW method helps you sort product features into four unambiguous buckets, typically in conjunction with fixed timeframes.
This quirky acronym stands for:
- Must have (Mo)
- Should have (S)
- Could have (Co)
- Won’t have (W)
Features are prioritized to deliver the most immediate business value early. Product teams focus on implementing the “Must Have” initiatives before the rest. “Should Have” and “Could Have” features are important, but they’re the first to be dropped if resource or deadline pressures arise.
“Must Have” features are non-negotiable requirements to launch the product. An easy way to identify a “Must Have” feature is to ask the question, “What happens if this requirement is not met?” If the answer is “cancel the project,” then this needs to be labeled as a “Must Have” feature. Otherwise, move the feature to the “Should Have” or “Could Have” boxes. Think of these features as minimum-to-ship features.
“Should Have” features are not vital to launch but are essential for the overall success of the product. “Should Have” initiatives might be as crucial as “Must Haves” but are often not as time-critical.
“Could Have” features are desirable, but not as critical as “Should Have” features. They should only be implemented if spare time and budget allow for it. You can separate them from the “Should Have” features by the degree of discomfort that leaving them out would cause the customer.
“Won’t Have” features are items considered “out of scope” and not planned for release into the schedule of the next product delivery. In this box, we classify the least-critical features or tasks with the smallest return on investment and value for the customer.
When you start prioritizing features using the MoSCoW method, classify everything as a “Won’t Have” first, and then justify why each feature deserves a higher rank.
People often find pleasure in working on pet ideas that they find fun instead of initiatives with higher impact. The MoSCoW method is a great way to establish strict release criteria and prevent teams from falling into that trap.
Opportunity Scoring
The roots of Opportunity Scoring, also known as gap analysis or opportunity analysis, trace back to the 1990s and the concept of Outcome-Driven Innovation (ODI), popularized by the researcher Anthony Ulwick.
Opportunity Scoring is a prioritization framework that evaluates how important each feature is to customers and how satisfied they are with it. This method allows us to identify features that customers consider essential but are dissatisfied with.
To use the Opportunity Scoring method, you must conduct a brief survey asking customers to rate each feature from 1 to 10 on two questions:
- How important is this feature or outcome to you?
- How satisfied are you with the existing solution today?
Then, you use your aggregated numbers in the following formula:
Importance + (Importance – Satisfaction) = Opportunity
The features with the highest importance score and lowest satisfaction will represent your biggest opportunities.
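The formula above ranks features in a few lines. The feature names and ratings below are invented for illustration; Importance and Satisfaction would be the average 1-10 ratings from your survey.

```python
def opportunity(importance: float, satisfaction: float) -> float:
    """Opportunity = Importance + (Importance - Satisfaction)."""
    return importance + (importance - satisfaction)

# Hypothetical average survey ratings: (importance, satisfaction)
survey = {
    "Bulk editing": (9, 3),
    "Dark mode": (4, 8),
    "Search filters": (8, 6),
}

# Rank features by opportunity score, highest (most underserved) first
ranked = sorted(survey, key=lambda f: opportunity(*survey[f]), reverse=True)
for feature in ranked:
    print(feature, opportunity(*survey[feature]))
```

Note that Ulwick's published version of the formula caps the (Importance − Satisfaction) term at zero, so over-served features (high satisfaction, low importance) don't drag their score below their importance rating; the simpler form above lets them score lower, which still ranks correctly for finding gaps.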
“If 81% of surgeons, for example, rate an outcome very or extremely important, yet only 30% rate it very or extremely satisfied, that outcome would be considered underserved. In contrast, if only 30% of those surveyed rate an outcome very or extremely important, and 81% rate it very or extremely satisfied, that outcome would be considered over-served.”
—Eric Eskey, Quantify Your Customer’s Unmet Needs
Once you know your most viable opportunities, determine what it takes to close these gaps. You need to take into consideration the resources required to deliver the improved feature.
The Opportunity Scoring formula is an effective way to discover new ways to innovate your product and low-hanging-fruit opportunities to improve satisfaction metrics such as Net Promoter Score (NPS).
Prioritization frameworks — putting it all together
Here is a side-by-side overview of each framework to help you decide which one best suits your needs:
Value vs. Complexity Quadrant
- Choose when: Working on a new product, building an MVP or when development resources are scarce
- Pros: Great for identifying quick wins and low-hanging-fruit opportunities
- Cons: Hard to navigate when there’s an extensive list of features
The Kano Model
- Choose when: You need to make better decisions for product improvements and add-ons
- Pros: Prioritizing features based on the customers’ perception of value
- Cons: It doesn’t take into account complexity or effort; customer surveys can be time-consuming
Weighted Scoring Prioritization
- Choose when: Weighting a long list of feature drivers and product initiatives
- Pros: Quantifies feature importance and ROI
- Cons: Drivers’ weights can be manipulated to favor political decisions; requires full team alignment on the different drivers and features involved in the scoring process
The RICE framework
- Choose when: You need an objective scoring system that has been proven, instead of developing one from scratch
- Pros: Quantifies total impact per time worked
- Cons: Its predefined scoring factors don’t allow for customization and may not be a perfect fit for your organization
ICE Scoring Model
- Choose when: You’re just starting or need to exercise the discipline of prioritization in your team
- Pros: A simple scoring model that is “good enough” for relative prioritization
- Cons: Subjective; lacks data viability
The MoSCoW method
- Choose when: You need to communicate what needs to be included (or excluded) in a feature release
- Pros: Identifies product launch criteria
- Cons: Doesn’t set prioritization between features grouped in the same bucket
Opportunity Scoring
- Choose when: Finding innovative ways to improve existing solutions
- Pros: Great for finding gaps in your value delivery; identifies features that customers consider important but are dissatisfied with
- Cons: Does not work for new products or features due to the lack of customer data and market research
Ready to get on the path to product excellence? Sign up for a free two-week trial!
Questions? Comments? We’d love to hear from you! email@example.com