Curious what others think about forgoing design thinking in AI product development in favor of this more direct, concrete approach.
[1] https://www.deeplearning.ai/the-batch/concrete-ideas-make-st...
Not every product can be totally designed and spec’d out from the outset. Especially when time to market is important.
Maybe this works at the individual feature scale, but for any reasonably large product, designing _everything_ from the outset would result in a brittle design.
His argument is pointlessly contrarian, too. He says his proposal is counter to design thinking, but design thinking would encourage you to build the exact same prototype he is proposing. As his own piece acknowledges, if you're at an early stage where you don't have any specific product ideas, design thinking could be a good starting point.
In practice, this is all the same core idea. The end result is better if you investigate real ideas rather than rely on abstractions and assumptions. Test your ideas with prototypes. Be ready to discover your favourite idea doesn't work and change direction.
I wonder if he's really arguing against something that is independent of the method chosen: handing your money and control over to teams whose incentives are to spend as much time as possible on consultative exercises.
I'd argue that no product can be spec'd 100% from the outset. Not even something like the regular Notepad.exe.
You'll always find some hidden complexity overlooked that results in the revision of the spec at the middle of development.
Embrace the change.
I think the more interesting question for the PM is how are you going to make a differentiated product in the market if everything you're planning to build is trivial? If it's not trivial, maybe talk to an engineer or two.
Table stakes for any product manager, not just AI related.
Currently the generated prototype usually needs tweaks and that’s if it even works. But when it does work, it’s like the model is reading your mind.
In the future, as models improve at coding, they will anticipate the tweaks that make sense; less of the prompt will need to be specified, there will be less polish work after you get the generated artifact, and you can work at an even higher level of abstraction and thought. Domain experts will be able to create even bigger, cooler things without spending years acquiring software engineering skills.
Assemblers and compilers came along very early in our industry’s history. If you run the thought experiment that that’s where we are at with prompted software creation, it will be a wild and exciting future. More people creating more stuff means a tremendous amount of amazing creations to enjoy.
* Specify the product as concretely as possible
* Use existing applications to test feasibility
* Get non-engineer user feedback on early prototypes
These all obviously apply to product management more generally, but Andrew gives some examples of ways in which they apply specifically to AI products. Still, I feel like he's talking more about complex/abstract software engineering in general rather than about AI in particular.
This is no small task.
Nothing new - we heard the same message with Figma, containerisation… you name it.
Having a good sense of what problem to solve, building rapport and trust with early customers, and being a fantastic leader and communicator have always been the most important skills. Thanks, nothing to see here…
"here's some guidelines for being a PM of an AI dev team"
Rather than
"Here's how to use AI to be a better PM"
And this, I feel, is an example of the form evolving.
The point that AI could not learn from a vague mission statement (whereas most people today would think, wow, that's a good start to a two-year project) suggests that AI companies, as Ng describes them, are "just" well-thought-out companies.
Sorry, not making a lot of sense - what I think I mean is that one can write down a human sentence and the phase space of possible meanings is very large. The behaviours that meet the specification can be huge, and most projects are attempts to find a working output that meets it and that everyone understands.
But a *working* piece of software has a much more constrained phase space of possible behaviours - just getting it working (or even writing a set of tests it must pass) drastically reduces the possible behaviours, which makes intentions clearer and the discussion more focused.

A lot of people (most?) do not have this. People who are overworked, people whose management wants them to do things as cheaply as possible, people in physically or mentally bad environments...
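The point about tests constraining the phase space can be made concrete. A one-line prose spec like "deduplicate the list" admits several behaviours (keep the first or last occurrence? preserve order? sort?), while a single test pins one down. A minimal Python sketch (the function name and spec are hypothetical, not from the article):

```python
def deduplicate(items):
    # One of several behaviours consistent with the prose spec
    # "deduplicate the list": keep the FIRST occurrence of each
    # item and preserve the input order.
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# This single test already rules out the sorted-output and
# keep-last-occurrence interpretations of the same sentence:
assert deduplicate([3, 1, 3, 2, 1]) == [3, 1, 2]
```

One passing test narrows the space of acceptable implementations far more than another paragraph of prose would.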
Given an epic with keywords, organize tasks into that epic, estimate the time, and then track whether it's on track or not.
Yeah, not a lot to PM work.
Ooh, also a 50/50 coin flipper for saying no to ad hoc things.
There, that's an AI PM.
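The tongue-in-cheek "AI PM" above fits in a few lines of Python. This is a sketch of the joke, not a real tool - every name and heuristic here (keyword matching, a flat 4-hour estimate, the coin flip) is made up:

```python
import random

def organize_tasks(epic_keywords, tasks):
    """Assign a task to the epic if any epic keyword appears in its title."""
    return [t for t in tasks
            if any(k in t["title"].lower() for k in epic_keywords)]

def estimate_hours(task):
    """Placeholder heuristic: every task is a flat 4 hours."""
    return 4

def on_track(epic_tasks, hours_spent, budget_hours):
    """On track if spent time plus estimated remaining work fits the budget."""
    remaining = sum(estimate_hours(t) for t in epic_tasks if not t["done"])
    return hours_spent + remaining <= budget_hours

def handle_adhoc_request(request, rng=random):
    """The 50/50 coin flip for ad hoc asks."""
    return "yes" if rng.random() < 0.5 else "no"

tasks = [{"title": "Add search filters", "done": False},
         {"title": "Fix login bug", "done": True}]
epic = organize_tasks(["search"], tasks)
print(on_track(epic, hours_spent=0, budget_hours=10))   # prints True
print(handle_adhoc_request("can you ship it tomorrow?"))
```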
Otherwise it’s a role not too many teams need, but when you need one, you know it - and you really need one.
A couple of cron jobs can easily automate this for my team. Most of the PMs I've seen in the wild only do this.
Very few PMs are actually valuable to a team from the product perspective.