> How valuable is the feature to the target users, truly?
Take this question, for example. Answering it accurately is not always possible. A common mistake is to ask your users and take their word for how valuable they perceive a feature to be. That approach is better than nothing, but it can often lead teams astray. This isn't because your users are stupid; it's because they don't have the perspective that you have on (a) what is possible, and (b) the knock-on effects of the feature on other aspects of the software's value proposition.
Note: the above is _not_ true about bugs. If a new feature is actually a bug/issue fix raised by your users, they are usually right.
> What is the time, technical effort, and cost to build the feature?
Estimating technical effort is so difficult that it is an entire field in itself. When working on complex systems, you also have to consider the future complexity introduced by building on top of the new feature (linked to the previous question).
Then change your answers. For me, a method like this, assigning numbers to aspects of a choice and combining them in some way, exists not to be an oracle, but to direct your thinking, or discussion within a group.
For example, if your gut tells you a feature is definitely worth it, but a tool like this says it's only borderline useful, that shouldn't make you immediately discard the feature, but make you consider
- whether the list of aspects is complete
- whether you judged the existing ones correctly
- whether your gut was right (e.g. if your gut says it's worth it, but you also think it's hard to implement and only moderately useful, then clearly something is wrong or missing)
When making a group decision, a big advantage is that this moves you from exchanging opinions ("I think we should do A; you think we should do B") to a more concrete discussion ("I think it's worth a lot and easy to implement; you think it's worth something but too hard to support for what it's worth. Let's discuss those two points separately").
Items about which there's strong disagreement even after discussion may even trigger postponement: "let's get a better idea of how hard/useful this is first".
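That aspect-by-aspect workflow can be sketched mechanically. Everything here (the names, the scores, the spread threshold) is invented for illustration, not taken from any real tool:

```python
# Hypothetical per-person scores: person -> {aspect: 0..10}.
scores = {
    "ana": {"value": 9, "effort": 3, "risk": 2},
    "ben": {"value": 8, "effort": 8, "risk": 3},
}

def disagreements(scores, spread=3):
    """Return aspects whose max-min spread across people is at least `spread`."""
    aspects = next(iter(scores.values())).keys()
    flagged = []
    for a in aspects:
        vals = [person[a] for person in scores.values()]
        if max(vals) - min(vals) >= spread:
            flagged.append(a)
    return flagged

# "effort" differs by 5 points, so that's the aspect to discuss
# (or postpone) first.
print(disagreements(scores))  # -> ['effort']
```

The point is the same as in the comment: the numbers don't decide anything; they just pinpoint *which* aspect the group actually disagrees about.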
The only way to make an informed decision is by thresholding on some number scale, but as you say, it is also impossible to assign numbers to aspects of a solution.
"Hey Jack, we've been asking for an edit button for years! It's not that difficult."
Missed an opportunity to present the “don’t build” reasoning! :-)
The sad story: it's the technically inclined people who have powerful machines, despite being capable of making do with weak ones because they know how to manage limited computational resources, while it's the technically ignorant people who have weak machines and need more powerful ones.
In other news, after thinking about it from time to time for a few years, yesterday I finally disabled all font selection to see what it's like, so now pages can only choose monospace, sans-serif or serif (I chose my own fonts for those years ago; all this font stuff is in Firefox's preferences, BTW, no extension required). It's rather pleasant so far, a great improvement, with only very minor trouble from icon fonts.
https://www.intercom.com/blog/rice-simple-prioritization-for...
If a very complex feature is of truly high value to 90% of my users, it seems uncontroversially worthwhile, yet the tool gives me "No, but a close call (48%)".
I'd suggest putting a little more weight on value and user importance, and a little less on complexity/effort.
Otherwise, GREAT tool. Even just as an aid to get across the idea that some features should not be built, which is often not understood.
*For reference, the weights are currently as follows:
# users: 10
effort: -15
user importance: 20
biz complexity: -15
value: 20
risk: -20
Why is the result always between 5% and 95%?
Looking at the page's <script>, I think it's because they set the minimum score to 1 rather than 0.
In addition, for three of the questions a high score is a negative rather than a positive (e.g. a high score in "development effort" likely means "a lot of effort"), so under the hood they invert those scores by computing (10 - score).
The problem is then that positive questions range from 1 to 10 while inverted negative questions range from 0 to 9, which means the minimum total is 3 and the maximum is 57, rather than 0 and 60.
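A minimal sketch of that asymmetry (the slider grouping and the flat /60 normalization follow my reading of the comment, not the tool's actual source):

```python
# Three "positive" sliders are clamped to 1..10; three "negative" sliders
# are inverted with (10 - score), so a raw 1..10 becomes 0..9.

def normalize(positives, negatives):
    """positives/negatives: lists of raw slider values in 1..10."""
    total = sum(positives) + sum(10 - s for s in negatives)
    return total / 60  # 6 sliders x 10 points each

# Worst case: positives at their floor of 1, negatives maxed at 10.
low = normalize([1, 1, 1], [10, 10, 10])   # (3 + 0) / 60 = 0.05
# Best case: positives at 10, negatives at 1 (inverted to 9 each).
high = normalize([10, 10, 10], [1, 1, 1])  # (30 + 27) / 60 = 0.95
```

That reproduces the observed 5%-95% bounds; using a 0 floor for positives and (11 - score) for negatives would restore the full 0-60 range.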
For a more flowery answer: A developer never deals in absolutes :)
A nice additional feature would be a way to bookmark a set of slider values, so that it can be shared with others.
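One way such bookmarking could work is to serialize the slider state into a shareable query string; a sketch of the round trip (parameter names are invented, and the real tool would need matching read/write logic):

```python
from urllib.parse import urlencode, parse_qs

def encode_state(sliders):
    """Turn {slider_name: value} into a shareable query string."""
    return "?" + urlencode(sliders)

def decode_state(query):
    """Recover {slider_name: value} from a query string."""
    return {k: int(v[0]) for k, v in parse_qs(query.lstrip("?")).items()}

state = {"users": 9, "effort": 4, "value": 8}
link = encode_state(state)          # "?users=9&effort=4&value=8"
assert decode_state(link) == state  # round-trips losslessly
```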
It's kind of similar to what the RICE/ICE frameworks are trying to help achieve [0].
We built some scoring of impact/effort into our tool Kitemaker [1] and let teams prioritize their work by these things. We ended up going with really simple scores like S/M/L, since it's super hard to know the difference between a 6 and a 7 (and it probably doesn't really matter anyway).
0: https://medium.com/glidr/how-opportunity-scoring-can-help-pr...
Such a useful tool; I foresee referring to it regularly.
I thought that seeing charts of how the answer changes with each slider value over a given range might help, since, as others have mentioned, it's not easy to answer the questions accurately. It could help handle uncertainty, because people would then see the range of answers between their "best case" and "worst case".
What's a technical effort of 6 vs 4? What's a technical debt of 8 vs 6 or 7?
Title: Don't build (or build) that feature
Answer: Yes
I think that, given the way I answered the questions (high impact, low effort), it should tell me to build; but as I read it, the tool either tells me not to build, or answers an OR question with yes or no.
Take build cost. Suppose a project would take 2 engineers 4 weeks to build. A large team may call that a "2", but a small team would call it an "8".
Or perhaps, as an optimisation, he could set the total to 0% or 100% immediately when certain answers are 0: for example, if no users need the feature, it should be 0%; or if the time and cost are 0, it should be 100% (although that's absurd), etc.
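That short-circuit could be a thin wrapper around the normal score; a sketch where the slider names and the `score01` stand-in (the tool's normalized 0..1 result) are my own assumptions:

```python
def decide(sliders, score01):
    """Apply the extreme-answer short-circuits before trusting the score."""
    if sliders["users"] == 0:   # nobody needs it: hard no
        return 0.0
    if sliders["effort"] == 0:  # free to build: trivially yes (the absurd case)
        return 1.0
    return score01              # otherwise fall back to the weighted score

assert decide({"users": 0, "effort": 5}, 0.4) == 0.0
assert decide({"users": 5, "effort": 0}, 0.4) == 1.0
assert decide({"users": 5, "effort": 5}, 0.4) == 0.4
```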