If you genuinely "vibe-code 24/7", you aren't developing software: you're translating business-logic rules into a set of long, elaborate commands, putting them in a black box, and hoping it works...
Integrating AI in a meaningful way while keeping the consistency and maintainability of a software project is hard; just throwing EVERYTHING at an LLM hasn't really worked out over the past year or so...
From my experiments with "vibe coding", LLMs for code are good at following rules, not at being intelligent. They are especially bad at keeping the "big picture" in their context.
In fact, you can talk to your AI at whatever level of abstraction you want:
1 - You can ask it to create an entire app and, for sure, that'll be a black box full of unknown bugs.
2 - Or you can just give it precise instructions: "create the model for x entity with exactly those characteristics". "Now let's create the controller for the API, but pay attention to this or that".
If you are good at describing things, LLMs are excellent at doing what you are already good at. They'll write exactly the code you would have written, they'll take care of the boilerplate, they'll self-correct if needed, except they'll do it in seconds, while it would have taken you hours between typing the whole thing, debugging your own mistakes, going back and forth between files, refactoring your implementation...
Sure, it takes more time than just asking "make me a snake game in pygame", but it also takes so much less time than writing it yourself.
Also (though that's only my personal experience), working with LLMs, even with a lot of iterations, helps me a lot with staying in flow.
The thing I'm most excited about with 'vibe coding' isn't really existing software developers making things. It's that people with deep domain knowledge who can't program can now build stuff. It's more of an Excel replacement, imo, as it gets into the mainstream.
I am sure that most LLM-generated software projects are more easily understood at the code level than the average Excel 'megasheet' created by this kind of person, and they will certainly be less brittle, at least to a degree.
However, when I do vibe-coding, I feel like a PM asking a remote team to write code, and I get the response back asap. It could be good or it could be bad, but if I don't check it, I am just a PM trusting 100% what my engineering team did. How did this work in reality before AI? Well, the engineers had to take some kind of responsibility: testing and assuring that what they did was OK. Now that part is completely "disabled", and the PM is responsible for the tech aspect as well, without the knowledge (and I am avoiding the whole deploy/infra perspective).
I myself use AI all the time: for smaller things, for more math-heavy things, for decisions I already thought about but want to check against best practices (after all, LLMs are trained on the "hive mind" thoughts of programming). I use it as a companion, a tool akin to JetBrains refactor tools, something I can drop on a specific function to enhance my own work.
Chat is fundamentally the wrong interface for this. My instructions to the LLM in a chat session are effectively the software specifications, but chat encourages me to treat them as ephemeral.
It's analogous to writing code in a high-level language, then deleting the code after it's compiled and keeping only the low-level bytecode.
In the case of vibe-coding, my specifications to the LLM _are_ the code, and the LLM-emitted code is an _artifact_ that can in principle be discarded and re-generated, assuming my specs are precise enough to ensure consistent outputs (the nondeterminism of LLMs is definitely an issue here).
So I want tools that let me persist my specs, help me iterate on them, check them into version control, and help me test how the LLM interprets them. Not a chat.
Just write your specs in markdown files (or formal specification formats) in your repo and instruct the LLM to update them in the process.
In fact, you can just work on specs like this. Vibe speccing, I guess?
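A spec file like that could live in the repo right next to the code it describes. A hypothetical sketch (the file name, structure, and requirements are made up for illustration, not any standard format):

```markdown
<!-- specs/user-auth.md (hypothetical file) -->
# User authentication

## Requirements
- Users sign in with email + password.
- Passwords are hashed with bcrypt; never stored in plain text.
- After 5 failed login attempts, lock the account for 15 minutes.

## Out of scope
- OAuth / social login.
```

The point is that the LLM regenerates the implementation from this, and updates this file (not just the code) when requirements change.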
What matters is if the current output after each request complies with the given specifications, and if it's possible to solve the bugs until the code converges into stability.
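One concrete way to check whether "the current output complies with the given specifications" is to persist part of the spec as executable assertions: the generated code can be thrown away and regenerated, but the checks stay under version control. A minimal sketch in Python, where `slugify` is a hypothetical LLM-generated function, not something from this thread:

```python
# Hypothetical example: a spec expressed as executable checks.
# The implementation below stands in for LLM-generated code (hand-written
# here so the example is self-contained); regenerate it freely, keep the spec.

def slugify(title: str) -> str:
    # Lowercase the title and join words with hyphens.
    return "-".join(title.lower().split())

# The spec: properties ANY regenerated implementation must satisfy.
assert slugify("Hello World") == "hello-world"
assert slugify("  Mixed CASE  ") == "mixed-case"
```

If a regeneration breaks these assertions, the output has diverged from the spec, regardless of how plausible the new code looks.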
The human won't understand how the codebase works when the codebase is developed by AI. Coupled with the fact that AI misunderstands and gets things wrong, any company worth an audit needs more assurances.
Vibe coding, just like no-code, will work to a degree, as you can easily build simple apps with either solution. It's just when you start building something bigger that both systems completely fall apart. While I welcome both technologies and think they will help non-technical folks write code and build stuff, the world is still powered by "large" software, and that's the stuff you can't just vibe-code your way through.
Wanting "a better way to write code" is one thing. Finding a way to usefully fit LLMs into your workflow is another thing. "Vibe coding" is yet another thing, distinct from the first two. Vibe coding is to programming as throwing a water balloon full of paint at a canvas is to painting. It works just as long as you don't actually care about the result.
"Hej there, I'm Kenneth. I'm a partner at AlleyCorp, a New York-based venture capital firm, where I focus on AI, developer tools, and infrastructure."
Previous no-code tools could work but required total buy-in. On the other hand you can have Claude generate a small throwaway frontend that relies on Web technologies you'd be using anyway.
My mental model for this is "Excel++". It has similar strengths (very powerful, users don't need developers to write every last line) and similar weaknesses (easy to make a mess or go down blind alleys if you don't know what you're doing).
Hearing professional developers talk about how no-code must be useless and/or dead, is kind of like hearing a professional swimmer say that floaties are dead.
Completely disagree. What “vibe-coding” weirdos are experiencing is that 99% of the time they need code for a solved problem. However, this magical vibe coding falls apart as soon as you need nuance.
There is a difference between “generate me a nextjs app with typescript and unit tests” and “create-next-app” with typescript and unit tests. The former is guessing at what you need; the latter is pushing out exactly what you need, according to a template. Vibe coding falls apart when you need to spend time tweaking your TypeScript config, because the LLM just used the most popular collection of code strings it found that leads to a working app, instead of create-next-app.
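The deterministic path that comment contrasts with is just the official scaffolder. A sketch (flags as found in recent create-next-app versions; check `--help` for yours):

```shell
# Template-driven scaffolding: same input, same output, every time.
# No model inference involved, so no guessing at your intent.
npx create-next-app@latest my-app --typescript --eslint
```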
A wysiwyg editor provides a better interface because you know what you are getting when you click the button. You aren’t asking a computer to infer your intent. Vibe coding requires writing natural language that then needs to be parsed into intent. There is your error gap right there: going from “I need this” and “hoping” a computer grasps your intent correctly.
There are also people actively poisoning the well for LLMs with LLMs, so your prompt will always be at risk of being misinterpreted. That is not “production grade”.
You click a few things, move stuff around, and it feels like you're building something real. But then it gets messy. You try to change one thing, and ten other things break. It becomes frustrating fast.
But if you already know how to code, it's actually great. You can tell what went wrong, fix it, and keep going. It’s fun when you can fall back on your skills. Not so fun when you can’t.
I don't understand the disconnect between engineers who work with these tools daily and the investor/exec/founder/influencer-types who constantly hype them. My LinkedIn feed has become a comedy.
I feel like any senior SWE who's tried to maintain an AI-generated codebase, coached a junior who's become reliant on Cursor, or found themselves in an undo, yell, repeat cycle in Cline knows these tools still require significant expertise to build production-safe code.
Where's the disconnect, or what am I missing?
Yeah, I think I can safely ignore 100% of what you're saying.