But in this particular case I have to think a lot of people just haven’t tried it in its best form. No, not a local model on your MacBook. No, not the web interface on the free plan. Go lay down $300 into API credits, spend a weekend (or maybe two) fully setting up aider, really give it a shot. It’s ultimately a pretty small amount of money when it comes to figuring out whether the people who are telling you there’s an existential risk to your livelihood on the horizon are onto something, don’t you think?
Nope. I'd rather buy some books or a Jetbrains subscription.
I disagree. I view this as a "machine guns versus heat-seeking missiles from the '70s" dichotomy. Sure, using missiles is faster, but sometimes you're too close for missiles, and machine gun rounds are way cheaper than missiles. Still, when missiles first came out, they were viewed as the future. For a while, fighter jets were built without machine guns, but the guns were added back later because it turned out you need both.
Sometimes I find I want to drill down and edit what Claude generated. In that case, copilot is still really nice.
With regard to AI-assisted coding: the more you know what you're doing, and the better you know the code base, the better the result you'll get. To me it feels like a rototiller or some other power tool. It plows soil way faster than you can and is self-propelled, but it isn't self-directed. Using it still requires planning, and it's expensive to run. While using the tool, you have to micromanage its direction, constantly giving it haptic feedback with your hands, or it goes off course.
A rototiller could be compared to a hired hand doing the plowing himself, I guess, but a hired hand needs far less micromanagement than a rototiller.
Kind of like horses and cars. Horses can get you home if you're drunk. Cars can't.
The proper use of agentic AI tools is like operating heavy machinery. Juniors can really hurt themselves with it, but seniors can do a lot of good. The analogy goes further: sometimes you need to get out of the backhoe and dig with smaller tools like jackhammers or just shovels. The jackhammer is like Copilot -- a mid-grade power tool -- and Claude Code is like the backhoe: clunky, crude, but able to get massive amounts done quickly, if that's what's needed.
You know what's quicker in your analogy? A spell. Or, in the coding world: templates, snippets, code generators, frameworks, and metaprogramming, where you abstract all the boilerplate behind a few commands. You already know the blast radius of your brute modification tools, so you no longer have to micromanage them. And it's reliable.
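To make that concrete, here's a tiny sketch of the kind of boilerplate-hiding metaprogramming being described. This is entirely my own illustration (the Ticket class and its fields are made up), using Python's dataclass as the "spell":

    from dataclasses import dataclass, field

    # Hypothetical example of boilerplate hidden behind one "command": dataclass
    # generates __init__, __repr__, __eq__, and ordering methods, reliably and
    # the same way every time -- the bounded "blast radius" mentioned above.
    @dataclass(order=True)
    class Ticket:
        priority: int
        title: str
        tags: list[str] = field(default_factory=list)

    t = Ticket(1, "fix login bug")
    print(t)                       # Ticket(priority=1, title='fix login bug', tags=[])
    print(t < Ticket(2, "docs"))   # True: ordering compares fields in declared order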
Through that lens, I think the AI cult is more right than the crypto cult. At least I can use AI to do something tangible right now, while crypto is still pretty useless after many years.
In some sense I think these technologies need the cults and the critics though. It’s good to have people push new things forwards even if everyone isn’t along for the ride. It’s also good to have a counter side poke holes. I think the world is better with both optimists charting new paths forwards and pessimists making sure they don’t walk right off a cliff.
Whether more have been helped or hurt is debatable but it certainly has a tangible, if niche, use case with real value. It certainly has no value as a store of value, though.
It's been such a relief since the Crypto scammers finally shut the fuck up with their incessant ShitCoin and NFT shilling and get-rich-quick pyramid schemes, so please don't any of you start up again, for the love of God.
AI is NOT like Crypto in any way shape or form, and this is an AI discussion, not a Crypto discussion. And I'm sick and tired of hearing from Crypto shills yapping HODL and FUD while I'm actually getting productive work done and making real money while creating tangible value and delivering useful products with AI, without even having to continuously recruit greater fools and rip off senior citizens and naive suckers of their life savings by incessantly shilling and pumping and dumping and pulling rugs out from under people.
Unless you are imagining a world in which there's a global conflict and crypto isn't shut down in the first 12 months.
Didn't he just make a point about how fast the situation is evolving? I had some FOMO about AI last year; not anymore. I don't care that I don't have time to fully explore the current LLM state of the art, because in a month it will be obsolete. I'm happy waiting until it settles down.
And if their scenario ends up happening, and you can basically multiply a dev's productivity by N by paying N x K dollarinos, why would you choose a junior dev? It's cheaper, but sometimes a junior dev doesn't just take longer to arrive at a solution; they never arrive at one at all (the same goes for senior devs, don't get me wrong, but it happens less often).
And that's not to say his original post is wrong; the two should be taken together. He's saying those who adapt to the new paradigm will "win", whether senior or junior.
still, this "AI coding tools will deprecate real programming" bullshit will one day be laughed at, just like how most of us laugh at shitcoin maniacs
it just takes a lot of people way too long to learn
I spend much of the day reading and thinking and only a small portion actually writing code, because when I'm typing, I usually have a hypothetical solution that is 99% correct and I'm just bringing it to life. Or I'm refactoring. You can interrupt me at any time and I could give you the complete recipe of what I'm doing.
Which is why I don't use LLMs: it's actually twice the work for me, typing out the specs, then verifying and editing the result, when I could have typed the code in the first place. And they suck at prototyping. Sometimes I want to leave something in a bare state where only one incantation works, because I'm not sure of the design yet, and just leave a TODO comment, but they go and generate more complicated code, which is a pain to refactor later.
For example, yesterday I needed a parser for a mini-language. I wrote a grammar — actually not even the formal grammar, just some examples — and what I wanted the AST to look like. I said “write the tokenizer”, and it did. I told it to tweak a few things and write tests. It did. I told it to “write a recursive descent parser”, and it did. Add tests and do some tweaks, done.
The whole thing works and it took less than an hour.
So yeah, I had to know I needed a parser, but at that point I could pretty much hand off the details. (Which I still checked over, so I’m not even in full vibe mode, I guess.)
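For anyone curious what that hand-off produces, here's a minimal sketch in the same spirit. This is entirely my own toy example, assuming a simple arithmetic mini-language; it is not the commenter's actual grammar, tokenizer, or AST:

    import re
    from dataclasses import dataclass

    TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

    def tokenize(text):
        # Yield (kind, value) pairs: NUM for integer literals, OP for any other character.
        for num, op in TOKEN_RE.findall(text):
            if num:
                yield ("NUM", int(num))
            elif not op.isspace():
                yield ("OP", op)

    @dataclass
    class BinOp:
        op: str
        left: object
        right: object

    class Parser:
        # Grammar assumed for this sketch:
        #   expr   := term (('+' | '-') term)*
        #   term   := factor (('*' | '/') factor)*
        #   factor := NUM | '(' expr ')'
        def __init__(self, tokens):
            self.tokens = list(tokens)
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else (None, None)

        def eat(self, expected=None):
            kind, value = self.tokens[self.pos]
            if expected is not None and value != expected:
                raise SyntaxError(f"expected {expected!r}, got {value!r}")
            self.pos += 1
            return value

        def parse_expr(self):
            node = self.parse_term()
            while self.peek()[1] in ("+", "-"):
                node = BinOp(self.eat(), node, self.parse_term())
            return node

        def parse_term(self):
            node = self.parse_factor()
            while self.peek()[1] in ("*", "/"):
                node = BinOp(self.eat(), node, self.parse_factor())
            return node

        def parse_factor(self):
            kind, _ = self.peek()
            if kind == "NUM":
                return self.eat()
            self.eat("(")
            node = self.parse_expr()
            self.eat(")")
            return node

    # Builds BinOp('+', 1, BinOp('*', 2, BinOp('-', 3, 4)))
    print(Parser(tokenize("1 + 2 * (3 - 4)")).parse_expr())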
Generally, you wouldn't type out the spec either; you either provide an existing spec to the model (in the form of whiteboard notes, meeting notes, etc.) or you iterate conversationally until you arrive at an initial implementation plan.
It's a different way of working for sure, and it has distinct drawbacks and requires different mental modes depending on whether you're doing greenfield work, a demo/prototype, feature development in an existing large app, etc., but it's been a massive productivity enhancement, especially when tackling multiple projects in quick succession.
Everything is shifting so fast right now that it hardly matters anyways. Whatever I spend time learning will be outdated in a few years (when things are predicted to get good). It does matter if you're trying to sell AI products, though. Then you gotta convince people they're missing out, their livelihood is at stake if they don't use your new thing now now now.
No one: making anything great with AI.
Sourcegraph: an AI company, routinely promoting their LLM-optimism blogposts to HN, perpetuating the hype cycle their business model depends on.
If I may use an analogy, it's like what sampling is for music producers. The sample is out there—it’s already a beautiful sample, full of strings and keys, a glorious 8-bar loop. Anyone can grab it, but not every producer can sell it or turn it into a hit.
In the end, every hype train has some truth to it. I highly suggest you give it a shot, if you haven't already. I’m glad I did—it really helped me a lot, and I am (unfortunately, financially) hooked on it.
AI is an irrational market at the moment and this is not going to change anytime soon.
T: What's happening with them sausages, Charlie?
C: 2 minutes, Turkish.
-- 5 minutes later --
T: How long for the sausages?
C: 5 minutes, Turkish.
T: It was 2 minutes, 5 minutes ago.
I don't know why I remembered it. Is it AI, or self-driving cars, or both? Huh.

OK, I'll take the other side of that bet. If in Q4 '25 devs using cursor or whatever are 5x as productive as me using emacs, I'll give this AI stuff another chance. But I'm pretty sure it won't happen.
Notepad is a better IDE than emacs if we compare a really good dev using notepad vs a shitty dev using emacs.
In this agentic AI utopia of six months from now:
* Why would developers — especially junior developers — be assigned oversight of the AI clusters? This sounds more like an engineering management role that’s very hands-on. This makes sense because the skill set required for the desired outcomes is no longer “how do I write code that makes these computers work correctly” and rather “what’s the best solution for our customers and/or business in this problem space.” Higher-order thinking, expertise in the domain, and dare I say wisdom are more valuable than knowing the intricacies of React hooks.
* Economically speaking what are all these companies doing with all this code? Code is still a liability, not an asset. Mere humans writing code faster than they comprehend the problem space is already a problem and the brave new world described here makes this problem worse not better. In particular here, there’s no longer an economic “moat” to build a business off of if everything can be “solved” in a day with a swarm of AI agents.
* I wonder about the long-term scaling of these approaches. The trade-off seems to be extremely fast productivity at the start that falls off a cliff as the product matures and grows. It’s like a building that can be constructed in a day up to a few floors but quickly hits an upper limit, because you can’t keep building _on top of_ a foundational layer of poorly understood garbage.
* Heaven help the ops / infrastructure folks who have to run this garbage and deal with issues at scale.
Btw I don’t reject everything in this post — these tools are indeed powerful and compelling and the trendlines are undeniable.
They’re fine at basic tasks, but nothing more.
The article tells me something unfortunate about the wisdom of ever buying software from this person, based on how they write.
(Also, grain of salt required, because this is a blatant marketing post.)
Look, I've been hearing "the models will get better and make these core problems go away" since it became common to talk about "the models" at all. Maybe they will some day! But also, and critically, maybe they won't.
You also have to consider the future where some companies spend an additional $50-100k per developer and they DON'T see any of this supposed increase in performance, if these "trust me, it'll happen this time" promises don't come true. This is the kind of bet that can CRATER companies, so it's not surprising to see some hesitation here, a desire to see if the football will be again yanked away.
Plus, and I believe most damningly, this article appears to be engaging in the classic technocratic failure mode: mistaking social problems for technical ones.
Obviously, yes, developers engage in solving technical problems, but that is not all they do, and at the higher level, that becomes the least of what they do. More and more, a good developer ensures that they are solving the RIGHT problem in the RIGHT WAY. They're consulting with managers, (ideally) users, other teams, a whole host of people to ensure the right thing is built at the right time with the right features and that the right sacrifices are being made. LLMs are classically bad at this.
The author dismissively calls this "getting stuck", and handwaves the importance of it away, saying that the engineer will be able to unstick the model at first (which, if we're putting armies of "vibe coding" junior engineers in charge of the LLMs, who've not had time enough in their careers to develop this skill, HOW?), and then makes the classic claim "but the models will get better", and predicts the models will eventually be able to do it (which, if this is an intractable problem with LLMs -- and so far evidence has been leaning this way -- again, HOW?).
Forgive that appalling grammar. I am het up. But note well what I'm doing: I'm asking "should we even be doing this?" Which is something these models a) will have to do well to accomplish what the author insinuates they will, and b) have been persistently terrible at.
I'm going to remain skeptical for now, since it seems that's my one remaining superpower versus these LLMs, and I guess I'm going to need to keep that skill sharp if I want to avoid the breadline in this author's future. =)
I really recommend this to anyone reading - if you haven’t tried using cursor or copilot, check them out. It makes writing code less tedious.
The whole article seems to be written disingenuously for the junior developer audience but this one kinda irked me: the flat hiring is because interest rates are high and has nothing to do with companies figuring out what to do with vibe coding.
On topic, nothing in this article suggests anything fundamentally useful about vibe coding other than it being an easier way to start for juniors and entry-levels. If you are a junior, go ahead and keep vibe coding but also do your best to understand the code you’re given. I strongly suspect that will (continue to) be something that makes people stand out.
1) Cursor has been crashing several times an hour for me recently.
2) Cursor seems to ignore .cursorrules files. I'm using the JSON format that's supposed to let you filter on file name patterns (although how that works for cross-cutting agent stuff I don't know).
3) Cursor is obsessed with making sketchy, iffy defensive code that checks for the most recent symptom and tries to guess and shart its way out of it instead of addressing the real problem. And it's extremely hard to talk it out of doing that; I have to keep reminding it and admonishing it to cut it the fuck out, fail instead of mitigate, address the root cause not the symptoms, and stop trying to close the barn door after all the horses have escaped. It's as if it was only trained on Stack Overflow and PHP manual page discussions.