Beyond quibbling about specific points, the MVP and the Vertical Slice may be functionally similar, but they are totally different in purpose. Video games are generally competing for a slice of a large preexisting market; the test is whether a game can compete against existing products. A new-to-the-world software startup is trying to serve a need that is currently unserved; the test is whether there is any market at all for this product (PMF). The 80/20 rule is about getting some validation that the thing you are building is worth building in the first place - not that it can be done, but that it should be done.
There aren't a ton of specific examples listed, but the images might hint at some products the author has in mind. I would just point out that Magic Leap and Humane were both hardware products that spent >5 years in development. They came out as fully finished products, complete by any standard. The problem wasn't that these products shipped with missing features; it was that nobody wanted what they were selling (see also the Apple Vision Pro, which is technically phenomenal in terms of design/engineering/manufacturing, but not really very useful). These products did the opposite of the Lean Methodology and show the risk of trying something brand new without validating the assumptions as quickly as possible.
Well, yes, in some cases. In lots of others it's just another implementation of existing software. Oh, you're building another food delivery app because the current one is missing feature xxx?
This doesn't make them bad, the world is full of different contexts, and providing a solution tailored to a specific context is valuable.
But most startups are not "novel". Even in a novel space (like self-driving cars) there are a bunch of companies in that space (basically competing for VC money).
I'm very much in the MVP camp - if anything I'm more extreme: I'm in the "show me a market and how to reach them before coding anything" camp.
Magic leap had like a 30 degree FOV and no software. It was a dev kit, literally. The consumer version was never released IIRC
Also, I don’t think you argued against the main thesis, which is something like “startup land is too good at the first 80% of a product, but not the latter 20%”.
That rings true for me. Our industry is not known for robustness and quality. There is robust and high quality software out there, but most of it is not.
[1] For example, the liquid metal fast breeder reactor https://www.iaea.org/sites/default/files/publications/magazi...
[2] https://kguttag.com/ tells you just how hard it is
[3] Apple Vision Pro is a refined if overly expensive product that takes a different approach to the same end, and consumers were indifferent.
So they released both a vertical slice and a horizontal slice then.
I feel like it's completely the opposite. A new game doesn't need to "replace" old games. People have played those games. It only needs to be new and good enough to get people's attention.
But tools usually are replacing something existing. Tech that actually creates new possibilities is few and far between. The internet is one example of that. Can you think of another? I think most new tech is aiming to replace older tech that is currently used for those problems.
Man, really? 99% of software is basically a glorified CRUD view over some DB. It's nothing novel.
If we venture outside of corporate software: most startups do not create anything novel - they just monetize existing businesses in a different (usually more predatory) way, or do X but digital. Both usually go for a 'virtual monopoly' by offering a service (never a product!) for free -> capturing the market -> enshittifying -> a new startup repeats the cycle.
The actual novelty where you find a need that is unserved is sub 0.1% of them.
In reality you aim for a product that fits the current buzzword meta for funding.
I do 100% agree that product-market-fit is probably the thing you should try to get ASAP though.
When we were in Rome and exited the Colosseum, it started raining. Some random dude walked to us and sold us an umbrella. Great business, both of us were better off.
His qualm is with companies that develop what amounts to a good demo (whether you call it MVP, prototype, beta product, etc.), validate their hypothesis, but then call it a finished product. Validation is supposed to be just that: validation. Instead, he's arguing, companies call that initial validation market success, then relabel the demo a working product, warts and all.
And you're correct in saying the Vision Pro didn't flop because it wasn't developed properly. It flopped because no one wants what it has to offer, and that's a different problem from what the author describes.
Since the comparison was made to games development, I think the closer equivalent is an early access release where you're generally paying for a WIP game at a lower price. Money changes hands, you get access to product in return, but there's no guarantee that it would ever be 'finished' or even what 'finished' might mean.
A vertical slice (VS) is a type of beautiful corner — it is done at production quality.
The purpose of the FP (first playable) is to prove the game loop and that a game is worth producing — that it is viable, or you could say that it has reached the minimum viable state. The purpose of the VS is to try out the entire production process and test burndowns, etc.
I can confirm, as someone who has worked in games for decades, that the author understands it correctly.
FP in automotive would be a prototype car, VS in automotive would be the first factory produced car. VS in games often marks the end of pre-production and a shift of priorities from iterating and experimenting to producing bulk content. MVP would be much earlier.
Then in another sense, an MVP is already marketable and commercially viable. But a game is neither of those at VS or FP. So if you look at MVP from that perspective, it is not even close to either VS or FP; it would be somewhere beyond alpha. In any case, MVP != VS :)
The MVP concept doesn’t work with game production that well because it’s a hit-driven industry where most of the costs go into producing the hit. As in movies, music, TV and book publishing, there are many stages of green-lighting before a product is first made available to the market, since going from zero to market is where the bulk of the costs are. Going zero-to-market MVP as the first green-light check would be quite expensive ($50M for market-leading VR/handheld games, $100M minimum for market-leading console and Windows games spent by the time a game is shown to the players) and risky. So instead, we start green-lighting and reviewing the prototype when <$2M is spent in most cases.
A vertical slice is not an MVP because people would never consider it minimally viable. It's a tiny product but it's completely polished and it's for a consumer of one: the publisher.
In other software, MVP is what you can go live with to all of your customers (often replacing something existing and omitting half the existing features to much chagrin).
It's the detail, the little touches, that result in the comparative advantage in the market - not the shared 80% (most chairs have 4 legs, a seat and a back; that 80% isn't what you compete on).
They're "finished" with it now, though: after years of weekly updates they went quiet in November. But they're also working on a new game.
I have not seen the code bases from the inside, but I would be very surprised if much of it hasn't been touched in recent years.
I firmly believe any sense of accomplishment comes from what you give players, not how ”complete” your implementation is.
This is why software development _as a job_ sucks, and sucks deeply: how often do you get to put the icing on the cake, put a ribbon on it, and end up with a result that matches what you were able to envision?
"Job" satisfaction is for _hobbyist_ software development. Capitalism generates crap software.
In a software job I rarely have to do that to get paid; I can spend most of my days on the easy stuff that gets far enough. The pay is good enough that I can spend my time outside of work doing what I want, put in the effort to grind through the last 20%, and really feel proud of the end result.
This may be why so many software developers gravitate to woodworking. If you have the time to put in the effort for that last 20%, it's very noticeable and satisfying.
As opposed to state-funded software development, which is renowned for its high quality and innovation.
E.g. Excel or git, or their potential eventual successors. Both have kept largely the same commands and feature set used by 99% of users since v1. They are now old, storied projects with decades of enhancements and improvements, and they have even inspired or spun out new products/projects from the ideas built within.
For the article itself, Pareto exists as a reminder that work expended is rarely if ever equal to results produced. There are instances where it pays off. But you always pay a price. Make sure you're willing to pay that price.
Sometimes a chair with 3 legs is all you need or care for. That 4th leg might give you more balance on an uneven surface, but I work in a decently flat garage and I'm not paying the premium for that 4th leg.
Medical device control software is not built like that, drone flight control is not built like that, power plant safety is not built like that.
The problem I've noticed in recent years is: people see the fast dev cycles for non-critical software and think they can replicate them in areas where they really do not fit.
I guess that’s how we ended up with Tesla's self-driving.
I am a bit worried that AI seems to be built like that, using development cycles fit for a convenience appliance on something that could be used as a weapon.
Seniority doesn't mean anything if a dev's 20 years has been spent flinging crap over the wall and then wondering how to keep up with all the support tickets being filed.
How does one get onto the "software is supposed to work" career track?
Boring companies that have had IT for a long time; industries like government, taxes, energy, administration, CRM, insurance, pensions, banking, etc. You won't get recruiters knocking on your doorstep to come and work for those though, and you'll possibly be working with 10+ year old tech and development practices.
Agreed.
> drone flight control is not built like that
You obviously haven't been working in the drone industry, have you? Just a guess :-).
45k€ agricultural spraying drone? At least the ones I had the privilege to look at were programmed quite paranoid. Farmers tend to get really pissed if their equipment dies because of software issues.
The few principles I live by are vague to avoid that predicament:
1. Don't let the perfect be the enemy of the good
2. Under-promise, over-deliver
3. Graveyards are full of indispensable men
1 took me a long time to really learn
2: In my case, where I've sucked at estimations, it's really not over-delivering, but delivering on my under-promise.
3 is a De Gaulle quote and my favorite - when I think I can't be replaced because I've had an ego boost from a recent accomplishment. Alternatively, it can also be interpreted as 'the world goes on'.
This gets conflated with products that are 80% reliable across all their tasks (LLMs, brittle software). That makes it difficult for users to rely on the product, because occasionally a failure will happen, and the user can’t build a mental model of what works and doesn’t.
Tax prep software for simple returns only is an entire product. Adding support for the other 20% would lose your initial base's interest.
Tax software that aims to solve all problems, whose MVP handles 80% of people's tax returns, is the Pareto the author is talking about. But the real complexity is in the other 20%.
Pareto as a minimalism process for focused product development is not engineering (good or bad).
Forgetting Pareto and believing (or lying) that you are truly 80% of the way there is a big problem in engineering and funding. The author is correct in that.
I was, and still am, prepared to use a large number of things that aren't perfect - housing, transport, furniture, computers, clothing, FOOD, and more.
edit: Added Food, I don't get the best chef on the planet to prepare my meals every day, or any chef for the most part, not only because I don't have the money, but also because I don't value that sort of thing enough - I'm more than happy with my own cooking for the most part.
My PM did not take the correct lesson away from the encounter.
If the market isn’t large enough, then the customer still got 80% of the value, whereas in the author's idealised world they likely wouldn’t have gotten anything at all, since the minimum cost to develop it was 5x higher (assuming 80/20 holds).
Overall it seems we’re better off with startups following the Pareto principle than not following it, and the author's real issue is just with bad product management decisions afterwards.
> Even people who advocate for these technologies rarely assert that the results are useable as-is, especially in a world where people are accustomed to a much higher, human-level quality. At best they are useful as a starting point for a human to then finish the image, or the cover letter...
I think this might be a bit of a thought trap. Because the people working on and advocating for these technologies are definitely in the top 20% of intelligence/capabilities/whatever you want to call it. For that 20%, the (arguably achieved) 80% might not be good enough. But there are a lot of people who are already vastly outperformed by many of the currently available AI tools in many tasks.
There are many people who are just awful at writing and can already benefit greatly from these tools, for example to write readable cover letters. Or for explaining things in basic language, something I often use it for when trying to understand complicated texts from another field.
The other side of Pareto is that perfect is the enemy of good. Sometimes 80% adds so much value for the majority, that the other 20% isn't necessary to label something good enough. The article contains an image of a Cyber Truck, which is fitting, but I truly loved my early Model 3 which was arguably also only 80% done.
I doubt that. From what I understand, Vilfredo Pareto introduced it to describe the existing allocation of wealth in Italy on the brink of fascism. He claimed that the crops in his garden followed this principle; I highly doubt that can be replicated. Ever since, people refer to the Pareto principle whenever they observe an 80/20 distribution, like it is some kind of natural law. But it is not. At least I have yet to see a scientific explanation of why an 80/20 distribution would have any special meaning. Just because some distributions are 80/20 doesn't mean there is anything special about it; a lot of distributions are not 80/20.
So I think society would be better off if people would stop acting like it is a natural law and there is nothing to change about it.
The Pareto distribution is dangerous since it is applied to justify hierarchies in society. And it is just not a good justification.
There's pretty much a mathematical rule that states that variables free to have any values tend to distribute themselves like that, just like the one that states that variables that are bounded tend to distribute themselves normally.
Or are you talking about the specific numbers? Because yes, the specific numbers are almost never correct.
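For what it's worth, the famous 80/20 split does fall out of a Pareto distribution with the right shape parameter. A quick stdlib Python sketch (the shape alpha ≈ 1.161, i.e. log4(5), is the textbook value for an exact 80/20 split; the sample size and seed here are arbitrary):

```python
import random

random.seed(42)

# Shape alpha = log4(5) ~= 1.161 is the value for which the top 20%
# of a Pareto distribution hold 80% of the total (in expectation).
alpha = 1.161
samples = sorted(random.paretovariate(alpha) for _ in range(100_000))

top_20 = samples[int(len(samples) * 0.8):]   # the largest 20% of values
share = sum(top_20) / sum(samples)

print(f"Share held by the top 20%: {share:.0%}")
```

Because the tail is heavy, any single run fluctuates noticeably around the theoretical 80%, which is exactly the point: the 80/20 numbers are a property of one particular shape parameter, not a law.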
The whole point is that we have these arbitrary subject divisions of knowledge for convenience. Physics, economics, biology, chemistry, etc. Then he spends 4 volumes trying to describe what he sees as the mechanics of society without worrying about these divisions of knowledge.
I have been stuck on volume 2 for years. It is a brutal read.
A Pareto distribution is a type of power law distribution.
80/20 is Joseph Juran's pathetic simplification. It is something MBAs can easily remember. It is like the most minor and trivial of Pareto's thought.
The main idea of his thought is that society is largely based on types of "non-logical conduct". Pseudo-logical conduct. The best I have been able to understand it is that Pareto believed society operates on unthinking, almost quasi religious beliefs, then after the fact we invent logical and scientific sounding reasons for the action/behavior and fool ourselves into believing the action/behavior was logically arrived at from the start.
His work is trying to catalog these non-logical conducts like a butterfly collector collecting types of butterflies.
Basically, nothing much at all to do with 80/20 or fascism.
Pretty sure all the big Italian political philosophers from that time believed in the Iron Law Of Oligarchy politically. Democracy being basically a type of performance art while it doesn't matter the system, you always get oligarchy in reality.
A major benefit of iterative development is that features sitting in a backlog, repeatedly pushed aside for higher-priority ones, never consume development and testing resources. Contrast that with a waterfall approach where an entire product is designed up front, requirements documented, and then built. That product likely ends up with features that are not important and rarely used.
Iterative development, or agile, processes are not the best process for every project. You definitely want high risk projects like nuclear power plant control software to use a waterfall approach to ensure safety.
In software development, waterfall projects were often iterative too, though in longer cycles. I'm sure there were some that weren't, but each version is essentially a waterfall iteration. For example, way back when, we would do 2 major releases and 2 patch releases every year, so our iterations were 3 months. Keep in mind this was software that we cut onto CDs and shipped out.
The benefit of waterfall is that the business is required to think about the project as a whole instead of as a wishlist. Sometimes with agile you end up with a Homermobile because the business isn't forced to think of everything at once. Both have pluses and minuses.
Not quite sure why so many devs assume that agile exclusively means iterative.
https://www.theverge.com/2023/12/7/23992737/google-gemini-mi...
Perhaps this is similar to the practice of "vaporware". Musk's Tesla is a recent example. Neverending promises, mixed results.
Clearly the applicability of this alternative in other domains might not be as direct as in gaming software, but it's an interesting way to think about how to structure deliverables on the way to the "1.0" release.
In other words, think of horizontal and vertical slices of work: horizontal means perfecting one functionality that applies across all product components (and may not be visible to the end user), while vertical means perfecting all functionalities of one component that is visible to the end user.
The Pareto principle is useless if considering just a specific predetermined goal in isolation. If I want to climb to the top of a mountain, I have to climb 100% of the height. Then it makes no practical difference, when I know that I can reach 80% of the height in 20% of the time, for example. However, the Pareto principle encourages us to (re)evaluate a goal in the light of limited resources. Is it better to be satisfied with only climbing 80% of the height of the mountain and use the time saved for other activities that I would otherwise miss out on? The answer to this question depends on other more general goals that I am pursuing.
Applied to software development, this tells us that we should consider, for example, implementing certain features or striving for a certain level of quality not as goals in themselves, but as framed by more fundamental goals. It is therefore no wonder that, given the same code base, the immediate goals of what to do next can differ greatly depending on what the fundamental goals are, such as earning money vs. having fun vs. taking pride in the work, etc. (they may align by coincidence, though). The Pareto principle helps us to (re)evaluate and compare immediate goals in the light of such fundamental goals: Is it better to implement feature A completely and dispense with feature B, or is it better to implement a simplified feature A´ and have room for a simplified feature B´ in the same timeframe (in this example the limiting resource)? Here, the fundamental goal is implicit in the utility function indicated by the word "better", in our example better according to earning money vs. better according to having fun vs. better according to taking pride in the work, etc.
Of course, considering the Pareto principle when (re)evaluating immediate goals does not guarantee arriving at the best conclusion. And there are additional considerations outside the Pareto principle, such as short-term goals competing with long-term goals under the same fundamental goals, or legal obligations that are non-negotiable. Here, we enter the sphere of policies, where the policy makers decide upon regulations beyond the individual fundamental goals. In practice we have a hierarchy of multiple goals. On each hierarchy level the Pareto principle is still worth considering as long as there are conflicting goals and limited resources.
To recapitulate: The Pareto principle can only be applied meaningfully when evaluating certain alternative goals according to a given utility function for one or more specific limited resources.
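The "feature A vs. simplified A´ + B´" comparison above can be sketched as a toy calculation. All effort and utility numbers here are invented for illustration; the point is only that "better" is decided by the utility function, not by the Pareto principle itself:

```python
# Toy comparison of two plans under a fixed time budget.
# Which plan wins depends entirely on the (invented) utility numbers.
time_budget = 10

plans = {
    "full A, no B":        {"effort": 10,    "utility": 7.0},
    "simplified A' + B'":  {"effort": 4 + 4, "utility": 5.0 + 4.0},
}

# Keep only plans that fit the limited resource (time, here).
feasible = {name: p for name, p in plans.items() if p["effort"] <= time_budget}

# "Better" = higher utility among feasible plans.
best = max(feasible, key=lambda name: feasible[name]["utility"])
print(best)
```

Swap in a different utility function (earning money vs. having fun vs. taking pride in the work) and the winning plan can flip, which is exactly the argument: the Pareto principle only helps once the fundamental goal is fixed.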
Game studios don't care about optimizations anymore, they care about shoveling something out the door as quickly and as cheaply as possible.
There are attributes of your project that are table stakes (absolutely required to compete) and there are regulatory or safety requirements. You need all those to ship.
However, everything else is “value” or even just “perceived value”, and it is up to your customer which ones “you have to do”.
Let’s put the 80/20 rule another way. If I can get 8 out of 10 features in 20% of the time, it makes sense to do those 8 (assuming they all have at least some value).
But what do I do with the 80% of the effort that it would take to get the last two features?
The answer is opportunity cost. What could I do with that time instead? Put another way, what am I NOT going to be able to accomplish because I chose to add those last two features?
If the answer is that I have other features that customers value more, I should do those instead. If I have a backlog of features that each take the same effort as the first 8, I can do 32 of them in the time it would take to deliver the original 2!
The statement was made that “customers do not like to use 80% of a website”.
If I read this article, maybe I implement the full 10 original features. If I embrace 80/20, maybe I implement 40 instead. Hey look, the “just do 100%” approach resulted in a website with 25% as many features as 80/20. If “customers do not like 80 percent of a website”, they are probably even less happy with 25%. Right?
Now, not all features have the same value. So, the math is not as simple as above. But, in my view, this is the right way to think about 80/20. The perfect is the enemy of the good.
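Spelling out the arithmetic above (the effort units are invented, and the 80/20 split and equal-cost backlog are the stated assumptions):

```python
# A total budget of 100 effort units: 8 "easy" features together take
# 20% of the effort, the 2 "hard" features take the remaining 80%.
budget = 100.0
easy_cost = (0.2 * budget) / 8     # 2.5 units per easy feature
hard_cost = (0.8 * budget) / 2     # 40 units per hard feature

# Option A: ship all 10 original features, spending the whole budget.
features_a = 10

# Option B: skip the 2 hard features and spend the freed-up effort on
# backlog features assumed to cost the same as the easy ones.
freed = budget - 8 * easy_cost     # 80 units left over
features_b = 8 + int(freed // easy_cost)

print(features_a, features_b)      # 10 vs 40 features for the same budget
```

Under these assumptions the "just do 100%" website ships 10 features while the 80/20 website ships 40, which is where the "25% as many features" comparison comes from.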
So that the perfectionists can hate me even more: the same is true “within” features (or whatever other axis you are evaluating). Sometimes customers “expect” or even “demand” features they do not really use. Compliance is an example. Or stuff that used to matter in a product category (and is still used as a filter) but really does not anymore. You can get a lot of value in your product by adding this stuff, but it is a waste of resources to “do it right” or match every competitor like for like. You may find again that you get essentially all the market-success “value” from doing some fraction of the work. Note, I am not saying to ship stuff that is buggy or stuff that does not really work; if that is what you think I am saying, you misunderstand. A shorter version may be to say: “build what your customers will actually use and not much more”.
What you really want to spend your time on is the stuff that excites people, that differentiates your offering, and takes “relatively” little effort to execute. That is probably not “the last 20%”, most of the time.