Most of the big CEOs appear to be CEO of multiple companies (Musk and Bezos come to mind). If you can be the CEO of five different companies, it really doesn't seem like it can be that hard of a job. ChatGPT might be able to do that just fine.
Most big CEOs are definitely not in charge of multiple companies. You hear about Musk and Bezos because they’re all over the news, but they’re all over the news because they’re not normal CEOs.
It’s well known that Musk’s relationship to his companies is primarily one of ownership and delegation. Bezos hasn’t been in charge of Amazon for a long time but people conveniently forget that at every opportunity.
> it really doesn't seem like it can be that hard of a job.
It’s funny how often I hear this about AI replacing jobs, with one consistent pattern: everyone who repeats it is highly confident that AI can replace other people’s jobs, but once you get to the work they do themselves, they’ll stop and explain why it’s actually much harder than you think.
LLMs cannot persuade anybody right now, but maybe soon.
Hard disagree. Using an LLM for these decisions transforms it into a game of manipulating inputs.
If you thought it was bad when people were gaming metrics for performance reviews, imagine the nightmare of a company where everyone is trying to manipulate their work to appeal to the current preferences of the prompts HR is feeding to the LLM. Now imagine the HR people manipulating the prompts to get the answers they want, or leaking the prompts to their friends.
Without humans in the loop to recognize when incentives are being gamed, it becomes a free-for-all.
So perhaps imagine something like Kaggle competitions, but for Harvard Business School case studies. Open to LLMs, humans, and collaboratives.
A first step might be to create a leadership/managerial LLM test set. I wonder if HBS case studies are in training sets. And whether we can generate good case studies yet. Perhaps use military leadership training material?
This feels like a desperate power grab.
Why can’t engineers be involved with the prompts? Why aren’t they allowed to do things like automated A/B testing or to implement ideas from papers they’ve read?
Banning engineers from prompts altogether feels extremely arbitrary. If this were a healthy relationship they’d at least say the engineers and PMs work together on prompts.
Banning them is just politics.
AI is transforming how we prototype and iterate, and products like v0 or Replit are only scratching the surface. Historically, though, low-code platforms lacked good integration with complex development cycles. There were many attempts, but they either failed or shifted their focus: Microsoft Expression Blend had a brilliant concept of integrating early sketching and ideation with development, but the product ultimately died with Silverlight; Framer had an editor that let users integrate React components with a design tool, but repurposed itself into a CMS-oriented tool like Webflow; Builder.io is following a similar path. It seems that in today’s market there is no clear fit for the RAD tools of the late 1990s. Maybe AI can change that and create the new equivalent of Visual Basic. The hardest part is the extra mile from a prototype to something robust that complies with multiple quality attributes: scalability, performance, security, maintainability, and testability.
For tools, it’s not clear how this works, since as you adjust parameters and whatnot you’re also presumably changing the code downstream when you “execute the call”.
But probably both sides of this will be done by LLMs directly in the future. I rarely write or tune prompts by hand now.
Some of them are painful. Some are impressive. The projects are small. Sometimes pulling in powerful off-the-shelf modules. They are getting better fast.
As a greybeard software architect, my current annoyance is that I'm spending all day talking to people when I want to get some practice voice prompting code :P
The vibe coding Reddit (http://reddit.com/r/vibecoding) already contains the full spectrum of “first time trying to code” to “just rolled my own custom GPT to optimize this.”
Personally I think AI will eat project management long before it eats software development.
> LLMs break traditional software development
> Develop your Prompts and Agents in code or UI
...
So of course the blog post is an ad, peddling what they want potential customers to think.
I don't doubt that we'll eventually come to a point where most code is written by AI, but that point is not at all where we are at right now, and for quite a while we'll need developers to drive the development process.
I totally agree that we're not at a point where AI can write most code. Though, I didn't ever say that. I just think it's blurring the boundary between engineers and PMs, with both taking on more of the other's role.
Also, it shouldn't be surprising that the product we're building is aligned with what we believe about the world :)
For writing code, with PMs replacing engineers with AI assistants? Very unlikely, even if they try. As soon as you get to the question of maintaining an existing codebase, that is where LLMs begin to hallucinate more and struggle.
The typical PM won't know that they generated bad code. You still need an engineer to detect that.
This is an obvious bubble, repeating like it's 1999.
Managing engineers is not the job of PMs anyway. PMs are supposed to manage projects, not people.
At the end of the day, the stack is collapsing in on itself, and the lines are blurring because we want people to own the total outcome rather than throwing responsibilities over the wall to each other.
That doesn't mean engineers are going to design, but they should know what a good design looks and feels like. That doesn't mean designers or product should write code, but they should be able to engage in high level architecture discussion to understand the capabilities of their products.
I have not read it yet, but there's a book by David Epstein (not that one), "Range: Why Generalists Triumph in a Specialized World". I'm interested because for the last 10 years I've thought of myself as a generalist but always thought companies were looking for specialists. I could show immense value when I got to those companies, but I don't typically think companies are looking for generalists. They are looking for specialists to solve an acute problem.
I love ChatGPT [1]. I use it all the time. I use it for coding, I use it to generate stuff like form letters, I use it for parsing out information from PDFs. Point is, I'm not a luddite with this stuff; I'm perfectly happy to play with and use new tech, including but not limited to AI.
Which makes me confident when I say this: Anyone who thinks that AI in its current state is "blurring the line between PMs and Engineers" doesn't know what they are talking about. ChatGPT is definitely very useful, but it's nowhere near a replacement for an engineer.
ChatGPT is really only useful if you already kind of know what you want. Like, if I ask it "I have a table with the columns name (a string), age (an integer), location (a string), can you write me an upsert statement for Postgres for the values 'tom', 34, 'new york'?", it will likely give you exactly what you want, complete with the proper "ON CONFLICT" clause, and it's cool and useful.
If I ask it "I want to put a value into a table. I also want to make sure that if there's a value in there, we don't just put the value in there, but instead we get the value, update it, and then put the new value back in", it's not as guaranteed to be correct. It might give you the upsert command, but it also might fetch the value from the database, check if it exists, and do an "insert" if it doesn't or an "update" if it does, which is likely incorrect because you risk race conditions.
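To make the contrast concrete, here's a sketch in Python using the stdlib sqlite3 module (SQLite supports the same ON CONFLICT upsert syntax as Postgres); the table and values are the ones from the example above, and `racy_put` is just a hypothetical name for the check-then-write pattern being criticized:

```python
import sqlite3

# In-memory database with the table from the example.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE people (name TEXT PRIMARY KEY, age INTEGER, location TEXT)"
)

# The atomic upsert you get when you ask for it by name.
# (Postgres uses the same ON CONFLICT syntax.)
upsert = """
    INSERT INTO people (name, age, location) VALUES (?, ?, ?)
    ON CONFLICT (name) DO UPDATE SET age = excluded.age,
                                     location = excluded.location
"""
conn.execute(upsert, ("tom", 34, "new york"))
conn.execute(upsert, ("tom", 35, "boston"))  # updates in place, no duplicate row

# The racy alternative the vaguer prompt can produce: SELECT, then INSERT or
# UPDATE. Between the SELECT and the INSERT, another connection could write
# the same key, so for any application with more than one user this is a bug.
def racy_put(conn, name, age, location):
    row = conn.execute("SELECT 1 FROM people WHERE name = ?", (name,)).fetchone()
    if row:
        conn.execute(
            "UPDATE people SET age = ?, location = ? WHERE name = ?",
            (age, location, name),
        )
    else:
        conn.execute(
            "INSERT INTO people (name, age, location) VALUES (?, ?, ?)",
            (name, age, location),
        )
```

Both paths produce the same rows when run single-threaded, which is exactly why a non-engineer pasting the second version would never notice the difference.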
My point is, the first example required knowing what an upsert is, and how to word it in a technical and precise way.
It certainly doesn't "blur the line" between PM and engineer for me. I have to pretty heavily modify and babysit its outputs, even when it is giving me useful stuff. You might be saying "well that's what a PM does!!", but not really; project managers aren't typically involved in the technical minutia of a project in my experience, they're not going to correct me for using the wrong kind of upsert.
These kinds of articles always seem to be operating on a theoretical "what if AIs could do this??" plane of existence.
[1] Deepseek is cool too, what I'm saying applies to that as well.
ETA:
Even if I wasn't a fan, this article definitely shouldn't have been flagged.
AI is like a lot of "Startup Founders", great at presenting all sorts of things as facts, but if you really dig and drill, they don't really have any domain expertise.
Re: did you read the article? I was quite specific about the ways I think LLMs are blurring the lines. I don't think it's true for general engineering, but I do think it's true for applications being built with LLMs.
Also, it's still very early.
A PM might generate that SQL thing I mentioned and just blindly cut and paste it. For any application with more than one user, that is a bug, it's incorrect, and it's not like this is some deep cut: upserts happen all over the place.
I didn't finish the entire article, I disagreed with the line, "Prompting Is Here To Stay and PMs—Not Engineers—Are Going to Do It", because I fundamentally do not think that is true unless AI models get considerably better.
It's possible they will, maybe OpenAI will crack AGI or maybe these models will just get a lot better at figuring out your intent or maybe there's another variable that I'm not thinking of that will fix it.
I hate the term "prompt engineer" because I don't think it's engineering, at least not really. I will agree that there's a skill to getting ChatGPT to give you what you want, I think I'm pretty good at it even, but I hesitate to call it engineering because it lacks a lot of "objectivity". I can come up with a "good" prompt that will 90% of the time give me a good answer, but 10% of the time give me utter rubbish, which doesn't really feel like engineering to me.
I saw the line: `As AI models become able to write complex applications end-to-end, the work of an engineer will increasingly start to resemble that of a product manager.`, and while I don't completely disagree, I also don't completely agree either. Even when I heavily abuse ChatGPT for code generation, it doesn't feel at all like I'm barking orders to a human. It might superficially resemble it but I'm not convinced that it's actually that similar.
I hope I'm not coming off as too much of a dick here, I apologize if I am, and obviously a blog post in which you wax philosophical about the implications of new technology is perfectly fine. I think I'm just a bit on edge with this stuff because you get morons like Zuckerberg claiming they'll be able to replace all their junior and mid level engineers with AI soon, and I think that's ridiculous unless they have access to considerably better models than I do.
“Jam tomorrow” will only get you so far.
OP here. Thanks for the (harsh!) feedback, I'll take it in a growth mindset.
The post does genuinely reflect my experiences and I do believe what I said. How would you advise I change the post to make it better?
Which parts do you think are untrue?
Thanks!
"By allowing non-technical people and domain experts to use English as the programming language, AI blurs the line between specification and implementation."
This is a non sequitur. You are saying that some PMs can update the prompts for an AI application. But it does not follow that AI can now specify and implement software. If you are talking specifically about "LLM applications that just pre-prompt a model can be updated by a PM instead of an engineer", then yes, that I would agree with. But you've extrapolated this wildly and close out with marketing for your tool.