Broadly speaking, I think this is a wise assessment. There are opportunities for productivity gains right now, but I don't think it's a knockout for anyone using the tech, and I think that onboarding might be challenging for some people in the tech's current state.
It is safe to assume that the tech will continue to improve in both ways: productivity gains will increase, and onboarding will get easier. I think it will also become easier to choose a particular suite of products to use. Waiting is not a bad idea.
It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately.
The early adopters started years ago and they've seen improvements over time that they started attributing to their own skill. They tell you that if you didn't spend years prompting the AI, it will be difficult to catch up.
However, the exact opposite is happening. As the models get better, the need for the perfect prompt starts waning. Prompt engineering is a skill that is becoming obsolete faster than handwriting code.
I personally started using Codex in March and honestly, the hardest part was finding and setting up the sandbox (I use limactl with qemu and kvm). Meanwhile the agentic coding part just works.
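For anyone stuck on the same step, roughly what my setup boils down to (a sketch, not my exact config; "codex-box" is just a placeholder name, and it assumes a reasonably recent Lima):

    # create an Ubuntu VM backed by qemu (KVM acceleration is picked up
    # automatically on Linux hosts that expose /dev/kvm)
    limactl start --name=codex-box --vm-type=qemu template://default

    # get a shell inside the sandbox and run the agent in there
    limactl shell codex-box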
Most of my AI usage comes from doing things I don't enjoy doing, like making a series of small tweaks to a function or block of code. Honestly, I just levelled the playing field with vim users, and it's nothing to write home about.
Sometimes it is better to get into things early because it will grow more complex as time goes on, so it will be easier to pick up early in its development. Consider the Web. In the early days, it was just HTML. That was easy to learn. From there on, it was simply a matter of picking up new skills as the environment changed. I'm not sure how I would deal with picking up web development if I started today.
> There are 16,000 new lives being born every hour. They're all starting with a fairly blank slate.
Not long ago we were ridiculing Gen Z for not knowing why the save icon looks like a floppy disk.
Do you want to feel like that in the next 5-10 years?
*I should qualify that "using" CC in the strict sense has no learning curve, but really getting the most out of it may take some time as you see its limitations. But it's not learning tech in the traditional sense.
Mistakes are less costly in the beginning and the knowledge gained from them is more valuable.
Over-sharing on social media. Secret / IP leaks with LLMs. That kind of thing.
I agree:
FOMO is an all-in mindset. The author admits to dabbling out of curiosity and realizing the time is not right for him personally. I think that's a strong call.
And even if your product is genuinely great, distribution is becoming the real bottleneck. Discovery via prompting or search is limited, and paid acquisition is increasingly expensive.
One alternative is to loop between build and kill, letting usage emerge organically rather than trying to force distribution.
For me though, I'm dabbling in AI because it fascinates me. Bitcoin was like, I don't know, Herbalife? Never interesting to me at all.
There are loads of BS tools out there, of course, but I don't use that many of them.
I think it's a luxury to be able to ignore a trend like AI. Crypto was fine to ignore because it didn't really replace anyone, but AI is a different beast.
I waited until it seemed good enough to use without having to spend most of my time keeping up with the latest magical incantations.
Now I have multiple Claude instances running and producing almost all of my commits at work.
Yes, with a lot of time spent planning and validating.
Once you have a good social position, or at least one you're happy with, you stop doing this, and you grow ever more irritated at others doing it ... because it's your social position that they're coming after. And they're younger, more motivated and hungrier. More than that, a decent chunk of these people want a better social position, even if that means taking yours.
I don’t think folks are taking seriously the possible worlds at the P(0.25) tail of likelihood.
You do not get to pick up this stuff “on a timescale of my choosing”, in the worlds where the capability exponential keeps going for another 5-10 years.
I’m sure the author simply doesn’t buy that premise, but IMO it’s poor epistemics to refuse to even engage with the very obvious open question of why this time might be different.
For example (dodging the whole full-self-driving controversy), Tesla cars have had advanced safety features like traffic-aware cruise control and autosteer for over a decade.
So, buying into safety early...
For other technologies, there's sort of a rugpull effect: the people who get in early enjoy something with little drama vs. the late adopters. Ask people who bought into Sonos early vs. late; there are probably more examples of this.
So: getting the technology the founders envisioned, vs. later enshittified versions.
You do have to drag stubborn people, kicking and screaming, into the future or they will continue using old tech. The article is framed in the past tense: "someone tried", "the crypto grift was". As if it's not currently swallowing the world. I guess he is so maximally sensible that he self-assesses faster than MS and realizes bitcoin just isn't for him every time.
He has a strange, hyper-specific definition of utility and productivity; "wrote my MSc with it" and "had fun" don't count.
- If you'd invested in Bitcoin in 2016, you'd have made a 200x return
- If you'd specialized in neural networks before the transformer paper, you'd be one of the most sought-after specialists right now
- If you'd started making mobile games when the iPhone was released, you could have built the first Candy Crush
Of course, you could just as well have
- become an ActionScript specialist as it was clearly the future of interactive web design
- specialized in Blackberry app development as one of the first mobile computing platforms
- made major investments in NFTs (any time, really...)
Bottom line - if you want to have a chance at outsized returns, but are also willing to accept the risks of dead ends, be early. If you want a smooth, mid-level return, wait it out...
My goal in life is not to maximize financial return, it's to maximize my impact on things I care about. I try to stay comfortable enough financially to have the luxury to make the decisions that allow me to keep doing things I care about when the opportunities come along.
Deciding whether something new is the right path for me usually takes a little time to assess where it's headed and what the impacts may be.
I still think it's stupid, but I'd be a whole lot richer if I went along with it at the time!
I was ahead of the game with my intimate expertise in ActionScript and Silverlight! I made 3D engines in browsers well before WebGL was a spec.
It was quite profitable for a few years, then poof. Dead end lol
Except you would've probably sold it at any of the 1.5x, 2x, 4x, or 10x points. That's what people keep missing about this whole "early bitcoin" thing. You couldn't tell it would 2x when it was at 1.5x, you couldn't tell it would 4x when it was at 2x, and so on.
> - If you'd started making mobile games when the iPhone was released, you could have built the first Candy Crush
I disagree:
Concerning the first point: neural networks today are very different from how they were back then. Knowledge of neural networks from the past only very partially transfers to modern neural networks, and clearly does not make you a very sought-after specialist right now.
Concerning the second point: the success of mobile games is very marketing-centric. While it is plausible that being early in mobile game development when the iPhone was released might have opened doors in the game industry for you, I seriously doubt that this skill alone would have made you rich.
By the time the iPhone was released, it was already too late for small companies. When you developed apps and games for MS PocketPC and Blackberry, you charged $20 per app, and any average-quality product would make money.
In 2007, there were only 2 kinds of success stories:
1. Companies that were able to throw a lot of money around (your Candy Crush example).
2. Some rare flukes of someone getting rich with some app.
So what I was trying to say: The golden days were really before the release of the iPhone.
Luckily I was also doing frontend work alongside, so when the time came to transition to HTML+CSS+JavaScript, it wasn't much of a move at all; it was just putting down AS and focusing fully on JS.
1. 2016 was years after Bitcoin was developed. So you could still make 200x returns without being an early adopter.
2. Is this even true? I'd bet scraping experts or people who can fine-tune LLMs have an easier time finding a job than classical ML academics.
3. Candy Crush was released when the iPhone was on its 5th iteration.
If anything, you just added to OP's points. Being an early adopter gives limited advantage.
As a freelancer I do a bit of everything, and I’ve seen places where LLM breezes through and gets me what I want quickly, and times where using an LLM was a complete waste of time.
Feels like a false dichotomy.
Have I become faster with LLMs? Yes, maybe. Is it 10x or 1000x or 10,000x? Definitely not. I think actually in the past I would have leaned more on senior developers, books, stack overflow etc. but now I can be much more independent and proactive.
LLM-based tools are a wide spectrum, and to argue that the whole spectrum is worth exploring because one sliver of it has definite utility is a bit wonky. Kind of like saying $SHITCOIN is worth investing in because $BITCOIN mooned as a speculative asset:
- I’m bullish on LLMs chat interfaces replacing StackOverflow and O’Reilly
- I could not be more bearish on Agents automating software engineering
Feel like we’re back at Adobe Dreameaver release and everyone is claiming that web development jobs are dead.Sure, maybe crypto changed some lives, but an entire industry? I think ALL of software dev is going under a transformation and I think we're past the point of "wait it out" IMO.
Or I'm wrong, but right I'm being paid to develop a new skill professionally. Maybe the skill ends up not being useful - ok, back to writing code the old way then.
It's clearly a textbook example of survivorship bias.
In the 90s the same argument was directed at this new thing called the internet, and those who placed a bet on it being a fad ended up being forgotten by history.
It's rather obvious that this AI thing is a transformative event in world history, perhaps more critical than the advent of the internet. Take a look at traffic to established sites such as Stack Overflow to get a glimpse of the radical impact. Even in social media we started to see the dead internet theory put to practice in real time.
And coding is the lowest of low hanging fruits.
I can't really agree. I've never seen anything from an LLM that I would consider even helpful, never mind transformative.
How are you supposed to use them?
It may have reduced the time to an implementation, but based on my experiences I sincerely doubt the veracity of applying the adjective "working".
That's the point of the blog post. If you can't even say right now whether it's for the better, then there's no reason to rush in.
So, a decade of hanging by a thread, getting by and doubling down on CS, hoping that the job market sees an uptick? Or trying to switch careers?
I went to get a flat tire fixed yesterday and the whole time I was envious of the cheerful guy working on my car. A flat tire is a flat tire, no matter whether a recession is going on or whether LLMs are causing chaos in white collar work. If I had no debt and a little bit saved up I might just content myself with a humble moat like that.
Anecdata, but the few people I know who were looking to switch gigs all had multiple offers within a few weeks. One thing they all had in common was taking a very targeted approach with their search and leveraging their networks. Not spamming thousands of resumes into the ether.
There’s really not much stopping tire changing from being automated away. Further standardization of tires or wage increases would probably do the trick.
There’s still plenty of software to be created. You’ll probably have to learn some ML tricks or whatever, but there’s nothing going away, just changing as software has always done.
For an old dog like myself it feels like an unjust rug pull.
I'm still working in tech, and likely will forever in a much reduced capacity. But pottery is my life now.
MS Access and so many more "you won't need a programmer again" dev tools over the decades blazed the trail.
Bad idea. Automotive repair is barely a moat, because you don't need that much training to work those jobs. There's a lot of people who want to do it. And cars are definitely susceptible to recessions - if fewer people are buying cars, if there's a shift to transit, if your locality builds more pedestrian-friendly infrastructure, if businesses that use work vehicles are forced to close, then your demand drops and everyone already in the field is forced to compete with one another.
For moats, look for things that are complex (not everyone can do it), licensed and always needed.
But, well, I feel the same way you do.
Programmers (and other white collar workers) were able to luxuriously coast along the ZIRP era because capital (replenished twice via quantitative easing) was cheap and plentiful, and because the elites at the top had to pump huge amounts of money to create a shared fantasy of the "technological future" that validates the neoliberal era. Now that the reality of the actual "physical economy" (the economy of making tangible things) has clawed back at us because of that forbidden three-letter word (war), we all realize that doubling and tripling oil prices were actually dictating our lives rather than some "Skynet AI" crap, and thus our fantasy simulacra of "virtual" play-things have now come to an end. Oh, and we all found out that most of SaaS was actually bullshit anyway. In fact, if it could be completely replaced by AI, then it was already pretty bullshit in the first place.
So, for smart STEM people uninterested in programming and only looking for a stable career, I think they would be better off just doing engineering work that's a bit more tangible, like robotics, manufacturing, shipbuilding, construction, etc. (Or anything related to war, but only if you're able to stomach what you're doing.) If you don't like to sit all day for a salary, then niche blue collar work can also be a good option, since general-purpose robotics (Physical AI?) is still too far away because of many, many issues that are just too long to explain here. I still think if you like programming you should stick to it in the long run. There will be a very cold winter because of the combination of LLMs, the AI bubble pop, and general economic depression, but for those who survive this era there will be opportunity because of the shortage of skilled programmers (since no one bothered to hire juniors after the pop, no one will grow into a senior either!) Computing will still be with us forever, just not in the way investors thought it would "engulf the world".
Bitcoin is a good example: if you bought it 15 years ago and held it, you're probably quite wealthy by now. Even if you sold it 5 years ago, you would have made a ton of money. But if you quit your job and started a cryptocurrency company circa 2020, because you thought crypto would eat the entire economic system, you probably wasted a lot of time and opportunities. Too much invested, too much risked.
AI is another one. If you were using AI to create content in the months/years before it really blew up, you had a competitive advantage, and it might have really grown your business/website/etc. But if you're now starting an AI company that helps people generate content about something, you're a bit late. The cat is out of the bag, and people know what AI-speak is. The early-adopter advantage isn't there anymore.
It's easy to say "well of course I would have invested in Google in 1999" but there was nothing in 1999 to say that Google was going to be as big as it was. Why not Lycos or Dogpile or AskJeeves?
How many people dedicated their careers to Flash, only to have it die at the hands of Steve Jobs and HTML5? It's not just about bailing out: lots of folks had to start over because taking advantage of the opportunity means actually investing real time and money. "As a tulip bulb producer, I would have simply stopped producing tulip bulbs when it started to seem questionable." https://en.wikipedia.org/wiki/Tulip_mania
Counterpoint, I sold all my Bitcoin in 2011 when Mt Gox got hacked and the price plummeted 80%. Would have done it again after their 2014 hack too if I had any left.
> Bitcoin is a good example: if you bought it 15 years ago and held it, you're probably quite wealthy by now
But you just said bail the moment its future starts to be questionable. If you follow that, you would never have held it for 15 years.
The problem is this leaves you undifferentiated from every hype chaser in Silicon Valley. Our world is littered with folks who went to coding school, traded Bitcoin, did something in the metaverse and blogged about AI. That jack-of-all-trades knowledge can be useful. But only if you’re making unlikely connections. Having the same cutting-edge familiarity as every tech journalist doesn’t do that.
Better: develop deep knowledge and expertise in something. Anything. Not only does this give you some ability to recognize what expertise looks like from afar, it also lets you dip into new topics and have a chance at seeing something everyone else hasn’t already. That, in turn, gives you the ability to be a meaningful first mover.
If you sold the farm to get in early in the Metaverse, you're totally hosed now because that was a dead end. The idea of digital real estate was as terrible then as it is now.
What would you have done when the Bitcoin fork happened and it was 50/50? Would you have gone into ICOs? Which ones? Etc…
There are simply too many “new things”, so by trying to get exposure to all of them you’ll end up massively in the red.
Let’s say you get into 1000 “new things”, and you strike it lucky and hit BTC. You’d have had to buy BTC in early 2013, hold it over the whole period, and sell at the historical maximum just to break even.
If, instead of buying 1000 “new things”, you’d put your money into the S&P, you’d be at +250% over the same period.
But IMO the most fruitful thing for an engineering org to do RIGHT NOW is learn the tools well enough to see where they can be best applied.
Claude Code and its ilk can turn "maybe one day" internal projects into live features after a single hour of work. You really, honestly, and truly are missing out if you're not looking for valuable things like that!
You're right, it's possible. But you might be both overestimating the ease of onboarding and underestimating the variety of tasks and constraints devs are responsible for.
I've seen Claude knock out trivial stuff with a sufficiently good spec. But I've also seen it utterly choke on a bad spec or a hard task. I think these outcomes are pretty broadly established. So is the expectation that the tech will get better. Waiting isn't unwise.
I am dying inside when I make a comment and receive a response that has clearly been prompted toward my comment and possibly filtered in the voice of the responder if not copied and pasted directly. Particularly when it's wrong. And it often is wrong because the human using them doesn't know how to ask the right questions.
Fortunately, most of the fundamental technological infrastructure is well in place at this point (networking, operating systems, ...). Low skilled engineers vibe coding features for some fundamentally pointless SaaS is OK with me.
Ironically, one might even get projects to fix the mess left behind, as the magpies focus their attention on something else.
In the case of AI, the fallacy is thinking that even if riding the wave, everyone is allowed to stay around, now that the team can deliver more with fewer people.
Maybe rushing to the AI frontline won't bring in the returns that one is hoping for.
EDIT: To make the point even clearer: with SaaS and iPaaS products, serverless, and managed clouds, many projects now require a team that is rather small, versus having to develop everything from scratch on-prem. AI-based development reduces the team size even further.
Yeah, everyone is in panic mode - "they're killing all the horses" - but one really needs to consider similar historical events.
When ATMs were rolled out in the 70s, everyone assumed tellers were on their way out. What actually happened was counterintuitive: the number of bank tellers increased for the next few decades. ATMs lowered the cost of operating a branch, so banks opened more branches, which required more tellers. The teller's job also shifted from cash handling toward relationship-building. The predicted elimination took 40+ years to materialize, and even then it was gradual.
"Paperless office" in another example. Around 1970s futurists confidently predicted - computers would eliminate paper. Companies restructured workflows around that assumption. What happened, really? Paper consumption actually increased dramatically - laser printers and desktop publishing created more paper demand, not less. The prediction wasn't wrong, just half a century early.
US horse population peaked around 21 million in 1915 and crashed to roughly 3 million by 1950. The tragedy wasn't that people prematurely killed off horses - it's that the infrastructure around horses (farriers, feed suppliers, carriage makers, stable hands) was devastated faster than those workers could adapt, but that also took decades.
Imagine if in 1910, someone sold all their horses expecting automobiles to arrive in their rural county within two years, and then cars didn't reach reliable rural infrastructure until 1940. That's what it feels like to me when companies lay off thousands of programmers because of AI.
Yes, AI may replace programmers, but what will probably happen is that the meaning of "programmer" changes. Yet that won't happen within a year or two, even with unforeseen advancements in AI research.
But the curious early adopters were the ones best positioned to be leading the charge on "cloud migration" when the business finally pulled the trigger.
Similarly with mobile dev. As a Java dev at the time that Android came along, I didn't keep abreast of it - I can always get into it later. Suddenly the job ads were "Android Dev. Must have 3 years experience".
Sometimes, even just from self-interest, it's easier to get in on the ground floor when the surface area of things to learn is smaller than it is to wait too long before checking something out.
lol no. There's nothing actually different about managing VMs in EC2 versus managing physical servers in a datacenter. It's all the same skills, and anyone who is competent in one can pick up the other with zero adjustment.
Obviously there are tons of tools and systems building up around LLMs, and I don't intend to minimize that, but at the end of the day, an LLM is more analogous to a tool such as an IDE than a programming language. And I've never seen a job posting that dictated one must have X number of years in Y IDE; if they exist, they're rare, and it's hardly a massive hill to climb.
Sure, there's a continuum with regards to the difficulty of picking up a tool, e.g. learning a new editor is probably easier than learning, say, git. But learning git still has nothing on learning a whole tech stack.
I was very against LLM-assisted programming, but over time my position has softened, and Claude Code has become a regular part of my workflow. I've begun expanding out into the ancillary tools that interact with LLMs, and it's...not at all difficult to pick up. It's nothing like, say, learning iOS development. It's more like learning how to configure Neovim.
In fact, isn't this precisely one of the primary value propositions of LLMs -- that non-technical people can pick up these tools with ease and start doing technical work that they don't understand? If non-technical folks can pick up Claude Code, why would it be even _kind_ of difficult for a developer to?
So, I'm with the post author here: what is there to get left behind _from_?
> But the curious early adopters were the ones best positioned to be leading the charge on "cloud migration" when the business finally pulled the trigger.
From a technological perspective, these sysadmins were right: in nearly all cases (exception: you have a low average load, but it is essential that the servers can handle huge spikes in the load), buying cloud services is much more expensive overall than using your own servers.
The reason cloud computing took off is that many managers believed much more in the marketing claims of the cloud providers than in the technological expertise of their sysadmins.
So just read up on it and say you do. They don't really need 3 years experience, so you don't really need to have it.
As with any other skill, if you can't do something, it can be frustrating to peers. I don't want colleagues wasting time doing things that are automatable.
I'm not suggesting anyone should be cranking out 10k LOC in a week with these tools, but if you haven't yet done things like sent one in an agentic loop to produce a minimal reprex of a bug, or pin down a performance regression by testing code on different branches, then you could potentially be hampering the productivity of the team. These are examples of things where I now have a higher expectation of precision because it's so much easier to do more thorough analysis automatically.
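To be concrete about the regression example: the manual version is basically a bisect loop, and the agent just drives it for you. A rough sketch, assuming a hypothetical ./bench.sh that exits non-zero when the benchmark is slower than your threshold ("v1.4.0" is a placeholder tag where perf was still fine):

    git bisect start
    git bisect bad HEAD          # current tip is slow
    git bisect good v1.4.0       # last known-fast point
    git bisect run ./bench.sh    # git walks the history, bench.sh votes good/bad
    git bisect reset             # return to where you started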
There's always caveats, but I think the point stands that people generally like working with other people who are working as productively as possible.
I didn't pick them up until last November and I don't think I missed out on much. Earlier models needed tricks and scaffolding that are no longer needed. All those prompting techniques are pretty obsolete. In these 3-4 months I got up to speed very well, I don't think 2 years of additional experience with dumber AI would have given me much.
For now, I see value in figuring out how to work with the current AI. But next year even this experience may be useless. It's like, by the time you figure out the workarounds, the new model doesn't need those workarounds.
Just as in image generation maybe a year ago you needed five loras and controlnet and negative prompts etc to not have weird hands, today you just no longer get weird hands with the best models.
Long term the only skill we will need is to communicate our wants and requirements succinctly and to provide enough informational context. But over time we have to ask why this role will remain robust. Where do these requirements come from, do they simply form in our heads? Or are they deduced from other information, such that the AI can also deduce it from there?
This really hinges on what you mean by "didn't use git".
If you were using bzr or svn, that's one thing.
If you were saving multiple copies of files ("foo.old.didntwork" and the like), then I'd submit that you're making the point for the AI supporters. I consulted with a couple developers at the local university as recently as a couple years ago who were still doing the copy files method and were struggling, when git was right there ready to help.
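And the thing is, the replacement is tiny. A sketch of the whole copy-files workflow collapsed into git basics (sticking with the "foo" from above):

    # instead of cp foo foo.old.didntwork ...
    git init
    git add foo
    git commit -m "working version before the risky change"

    # later: see what the experiment changed, or throw it away
    git diff
    git restore foo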
I don't understand how this, at all, makes "the point" for anyone.
I'm still stuck with TFS and SVN in my day jobs but use Git on and off on side projects. I really wish all my clients would just switch to Git.
Clearly there's an advantage for being an early adopter, but the advantage is often overblown, and the cost to get it is often underestimated.
Nothing is happening. And if it is, it's just hype.
And if it isn't, it only works on toy problems. And if it doesn't, I'll learn it when it stabilizes.
And if I can't, the gains all go to owners anyway. And if they don't, it's just managers chasing metrics.
And if it isn't, well I'm a real programmer. And if I'm not, then neither are you.
This is a great framing.
To keep and/or increase my current compensation, I have to be competitive in the software development market.
(Whether I need AI to remain competitive is another matter.)
The 16,000 new babies will be competing in different markets.
Oh, and of those 16,000 babies, many are born in far less fortunate circumstances; they're already far behind their cohort. :/
I think the framing just doesn't help at all.
In contrast to the current top comment [1], I don't think this is a wise assessment. I'm already seeing companies in my network stall hiring, and in fact start firing. I think if you're not trying to take advantage of this technology today then there may not be a place for you tomorrow.
I find it hard to empathise with people who can't get value out of AI. It feels like they must be in a completely different bubble to me. I trust their experience, but in my own experience, it has made things possible in a matter of hours that I would never have even bothered to try.
Besides the individual contributor angle, where AI can make you code at Nx the rate of before (where N is say... between 0.5 and 10), I think the ownership class are really starting to see it differently from ICs. I initially thought: "wow, this tool makes me twice as productive, that's great". But that extra value doesn't accrue to individuals, it accrues to business owners. And the business owners I'm observing are thinking: "wow, this tool is a new paradigm making many people twice as productive. How far can we push this?"
The business owners I know who have been successful historically are seeing a 2x improvement and are completely unsatisfied. It's shattered their perspective on what is possible, and they're rebuilding their understanding of business from first principles with the new information. I think this is what the people who emerge as winners tomorrow are doing today. The game has changed.
Speaking as an IC who is both more productive than last year, but simultaneously more worried.
I think it depends on why you do programming. I like programming for its own sake. I enjoy understanding a complex system, figuring out how to make change to it, how to express that change within the language and existing code structure, how to effectively test it, etc. I actively like doing these things. It's fun and that keeps me motivated.
With AI I just type in an English sentence, wait a few minutes, and it does the thing, and then I stare out the window and think about all the things I could be doing with my life that I enjoy more than what just happened. I find my productivity is way down this year since the AI push at work, because I'm just not motivated to work. This isn't the job I signed up for. It's boring now.
The money's nice, I guess. But the joy is gone. Maybe I should go find more joy in another career, even if it pays less.
Why are they not satisfied with the 2x improvement? Could you give an example of the "rebuilding" that you mean?
But I think it's just a matter of when, not if.
My current guess at my slow Fortune 500 is ~1-2 years before we see real employment impact.
For startups it's happening now, at least in my anecdotal conversations. Right now the discussion is more about slower growth than actually doing layoffs. That coin will flip at some point.
> But that extra value doesn't accrue to individuals, it accrues to business owners.
What is value?
Is a 2X faster lumberjack 2X as valuable? Sure
Is a 2X faster programmer 2X as valuable? At what, fixing bugs? Adding features? That's not how the "ownership class" would define value.
Productivity is a measure of efficiency, not growth. Slashing labor costs while maintaining the status quo is still a big productivity gain.
Hopefully not too many people are "enhanced" to the tune of 0.5x!
A practitioner with more experience may be a few percentage points more productive, but the median approach (grab a subscription, get the tool, prompt) will be mostly good enough.
Sadly, I'm still disagreeing while crypto kiddies are driving past me in lambos. If it's the future of money, yes, we'll get there eventually, but like every technology shift, there's a lot of money to be made in the transition, not after. *
* I sold all crypto a few years ago and I'm a happier person :D
Meanwhile, the main category of people who have consistently gotten rich off the "crypto revolution" were various scammers and pump-and-dumpers who have since moved on to meme stocks, AI content farming, and so on.
But I wouldn't use crypto as a benchmark because AI has more substance. We can debate if it's going to change the world, but you can build some new types of businesses and services if you have near-perfect natural language comprehension on the cheap.
That is why I agree with the sentiment as well. I use AI a little. Not too much. And I'm as swamped with work as ever because my focus is on legacy stacks, where AI is really not strong.
I know a few people who got wealthy by being early to crypto. None of them had the correct reasoning at the time: They thought BTC was going to become a common way to pay for things or that “the flippening” was going to see worldwide currency replaced with BTC. They thought they’d be kings in a new economy but instead they’re just moderately wealthy with a large tax bill they’re determined to dodge.
I know far more people who lost money on crypto, though. Some were even briefly crypto-rich but failed to sell before the crash or did things like double down on the altcoin bubble.
The second group has gone quiet about their crypto, while the few people in the first group gloat and evangelize (because continued evangelization is necessary to keep their portfolios pumped). This creates an intense survivorship bias where it appears like all the crypto kiddies are wealthy, while a quiet mass of people who played with crypto are most definitely not.
Writing the actual code that's efficient is iffy at times and you better know the language well or you'll get yourself in trouble. I've watched AI make my code more complex and harder to read. I've seen it put an import in a loop. It's removed the walrus operator because it doesn't seem to understand it. It's used older libraries or built-ins that are no longer supported. It's still fun and does save me some time with certain things but I don't want to vibe code much because it removes the joy out of what you're doing.
Wonderful life lesson on hype cycles. I am curious if hype literacy will join media literacy in academia.
> Few are useful to me as they are now.
Except current AI tools are extremely useful, and I think you're missing something if you don't see that. This is one of the main differences between LLMs and cryptocurrency; cryptocurrencies were the "next big thing", always promising more utility down the road. Whereas LLMs are already extremely useful: I'm using them to prototype software faster, Terence Tao is using them to formalize proofs faster, my mom's using them to do administrative work faster.
I know, I know. I'm prompting it wrong. I'm using the wrong model. I need to pull the slot-machine arm just one more time.
I know I'm not as clever as Terence Tao, so I'll wait until the machines are useful to someone like me.
i'll just say, and i understand this is not the point of the article at all, but for all its faults, if you got in on flash as early as html 2.0 and you were staring at the upcoming dead-end of flash in, say, 2009, you also knew or had been exposed at that time to plenty of javascript, e4x and what were essentially entirely clientside SPAs, providing you a sort of bizarro preview of the react future a couple of years out. honestly, not a bad offramp even if flash itself didn't make it.
No, they are not.
But AI is a beast.
It's A LOT to learn: RAG, LLMs, architecture, tooling, ecosystem, frameworks, approaches, terms, etc., and this will not go away.
It's clear today, and it was already clear with GPT-3, that this is the next thing, and in comparison to other 'next things' it has arrived in the perfect environment: the internet allows for fast communication, and manufacturing has never been as fast, flexible, and globally scaled as it is today.
Which means whatever the internet killed and changed will happen, and is happening, a lot faster with AI.
And tbh, if someone gets fired in the AI future, it will always be the person who knows less about AI and less about how to leverage it than the other person.
For me personally, I just enjoy the whole new frontier of approaches, technologies, and progress.
But I would recommend EVERYONE to regularly spend time with this technology. Play around regularly. You don't need to use it for real work, but otherwise you won't gain any gut knowledge of model vs. model, and it will be A LOT to learn when it crosses the line for whatever you do.
The only scenario where I think it pays off to be on top of the hype is if you are chasing the money sloshing around the latest hype. You know, the hustle culture thing. If that's not your thing, waiting until things are established (if they ever get there) is harmless.
And yeah, AI as it is now is at best moderately useful. I use it on a daily basis, but could do without it with little harm.
As much as I dislike the idea of not writing/checking code I am responsible for, it was a surprise to me to see a few "anti/limited AI in coding" articles that don't pass an LLM detector. (I know those are not perfect, but there's not much else one can do.)
As an employee (perhaps even a highly stock-option compensated one) the equation is very different. Perhaps you're aligned if you're an employee of a startup/AI obsessed company. But for the vast majority they're not.
At any moment, you are failing at thousands of things that you may not even know about, and that is the gist of what I took away from it. The thing is that you have to be OK when you intentionally choose to not invest in something as regret is ultimately a poison.
The other thing is this: you are not obligated to bring people with you and you have a choice of free association.
When doing a code review, tell Claude "review commit xyz". It finds things humans are not finding in reviews at this point.
Not sure about what you want to do for some design? Use Claude as a rubber duck - tell it "I'm deciding between implementing this with A or B and here's why... is there any thing I'm missing that would make the decision more clear, or any potential solution I've missed?" It has the context of your existing code, which can be incredibly helpful for stuff like this.
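And if you want to try the review workflow without committing to an interactive session, something like this works (a sketch assuming the claude CLI; "abc123" is a placeholder hash):

    # -p runs one prompt non-interactively, prints the answer, and exits
    claude -p "Review commit abc123 for bugs, missed edge cases, and anything a reviewer tends to skim past"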
Opus 4.6 is good enough that it's undeniable that it's going to change the industry at this point. It's just a question of how big the change will be. The earlier models were shit. Everyone I worked with spurned them, including me. Everyone around me using Opus 4.6 is saying "well, shit, this is real now" with varying degrees of excitement and unease.
AI is not some magic pill. A lot of software development is not writing code, it's requirements gathering and design. That part doesn't magically go away, in fact it becomes way more important. But AI is speeding up the writing code part of things - if, and only if, you put down a good plan first.
People will structure their code so AI can churn on it (specs in markdown, and TDD all over again); it's useful enough to be worth it. We're going to get segmented into engineers who can use it and engineers who can't.
For this technology I'd not wait. Start learning it.
We're all going to be more focused on being architects instead of developers.
The answer was quite the opposite. I wanted to see if the technology lived up to the hype. The answer, unsurprisingly, was no. If only Zuck had listened to me :-)
It’s the same with games. I give them a few months for the inevitable bug fixes and balance patches before jumping in. Saves a lot of frustration. And a lot of wasted time.
I was developing in the metaverse space, and the problems we were facing led me to learn about the state of AI image generation (2018), and where the world was headed.
People assume the thing you are focused on is the thing that must win in the end, but that just means you are too focused on your little part of the world to take in the bigger picture of what's out there.
Prior to working in the metaverse (really a form of volumetric video, but I won't go into details), I was working in telehealth (2014). I did some research in augmented reality (2009), and lots of other areas of interest as well.
Some people would say I wasted my time on these, but there was a mass of secondary learnings which I value every day.
So many humans are uploading their code to GitHub for the AIs to train on, and thus digging their own graves, whereby they'll eventually be outsmarted by AI.
On the other side of the coin, don't the AIs have to be wary about training on prior output in order to maintain the purity of their sources? The dogfooding for them will end up becoming overwhelming.
Perhaps innovation should stay private or used to extract micropayments.
If they manage themselves well, hardware tinkerers have a solid future, because robots aren't yet as effective in the real world. I think that is the next golden age, actually.
The risk of getting in early on crypto was losing a little money. The risk of not getting in was missing out on money. You can't simply replay that later, the way you could invest the time to catch up on how git works.
There's a tough timing call that comes into play here, because there's no ability to predict, only to look back in hindsight. It's more of a hedge to be in and say "I bought a lottery ticket and had a chance during the AI boom" than it is to not engage.
I believe the saying is "you never win if you never play."
That said, my only regret with Bitcoin was deleting my early wallets when I realized the coins were only worth $.25 ... if I'd had any inkling what they'd be worth someday, I'd probably have just bought $1000 worth back then and zipped it up until closer to today. I'm truly curious how many bitcoins were similarly deleted from existence.
It also shows a passion for learning and improvement, something hiring managers are often looking for signals of.
But of course it's a trade off. This rewards people who don't have family or other obligations, who have time to learn all the new fads so they can be early on the winners.
When I eventually got around to using Rust, I was hooked, and now I don't use C++ anymore if I can choose Rust instead. The hype was not completely unjustified, but it was also misplaced, and to this day I disagree with most of those hype projects.
It was no issue to silently pick up Rust, write some code that solves problems, and enjoy it as a very very good language. I don't feel a need to personally contact C or C++ project maintainers and curse at them for not using Rust.
I do the same with AI. I'm not going around screaming at people who dare to write code by hand, going "Claude will replace you", or "I could vibe code this for 10 bucks". I silently write my code, I use AI where I find it brings value, and that's it.
Recognize these tools for what they are: Just tools. They have use-cases, tradeoffs, and a massive community of incompetent idiots who like it ONLY because they don't know better, not because they understand the actual value. And then there's the normal, every day engineers, who use tools because, and ONLY because, they solve a problem.
My advice: Don't be an idiot. It's not the solution for all problems. It can be good without being the solution to a problem. It can be useful without replacing skill. It can add value without replacing you. You don't have to pick a side.
I mean, if you did that starting in 2010, you'd have contributed ~$38K USD by 2026 and would have ~1.5B USD now. BTC being so cheap back then dominates the whole process, so to demonstrate my point further: if you had heard about it for all those years, were nervous about trying it, and decided to wait until 2016, you'd still only need to put in $24K overall to come out with ~$450K by 2026.
That's not biting your fingernails over the price changes, the hype cycles, the price-drop scares. You just set and forget a $200 recurring buy a month, put your energy elsewhere, and pocket half a million for basically no effort.
And if everything is only knowable in hindsight, then why, in hindsight, would you write an article acting like Bitcoin was a bad thing to be an early adopter of?
It is a skill, but not a special AI specific skill.
That does not obviously follow. I do worry about the ever increasing proportion of humanity who are no longer 'economically viable', and this includes people who are not yet born.
All I know is, I've always enjoyed building things. And I enjoy building things with AI-assisted tools too, so I'll continue doing it.
From Thomas Kuhn's Structure of Scientific Revolutions:
"a new scientific truth does not triumph by convincing its opponents... but rather because its opponents eventually die, and a new generation grows up that is familiar with it"
I am actually surprised by people willingly trying to be more productive, like... machines. And then crying when machines are proven to be better at being machines than meatbags.
Is it necessary to get involved right now? Perhaps not. Is it neat to be part of a transformative historical change just getting its feet under it? Yep.
But on the other hand... I also only learned git when I needed it at a new job... So we can pump the brakes a bit.
I saw a meme on X the other day which roughly says that one doesn't have to learn at all if one learns slowly enough in the age of AI. I guess the undertone is that AI evolves faster than one can learn the tricks of using it.
This is an excellent characterization of the kind of marketing tactic I see all over social media right now and that I find absolutely disgusting.
The keyword here is fear. Despite the faux-positive veneer, the messaging around certain technologies (especially GenAI) is clearly designed to induce anxiety and fear, rather than inspire genuine optimism or pique curiosity. This is significant, because fear is one of the most powerful tools for shutting down rational thinking.
The subliminal (although not very subtle) message there is something very primitive. "If you don't join our group, you will soon starve to death." This is radically different from how most transformative technologies were promoted in the past.
With AI people are able to say 'this is nonsense' without people getting the pitchforks out.
As for myself, I don't have the bandwidth to learn how to do clever things with AI. I know you just have to write a prompt and it all happens by magic, but I have been burned quite badly.
First off, my elderly father got tricked out of all of his money and my mother's savings, which were intended for my niece, when she comes of age. It was an AI chatbot that did the deed. So no inheritance for me, cheers AI, didn't need it anyway!
Then there was the time I wanted to tidy up the fonts list on my Ubuntu computer. I just wanted to remove Urdu, Hebrew and however many other fonts that don't have any use for me. So I asked Google and just copied and pasted the Gemini suggestion. Gemini specified command line options so that you could not review the changes, but the text said 'use this as you can review changes'. I thought the '-y' looked off, but I just wanted to do some drawing and was not really thinking. So I typed in the AI suggestion. It then began to remove all the fonts and the window manager, and the apps. It might as well have suggested 'sudo rm -fr /'.
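For the curious, the whole difference was one flag (the package name here is invented; this is just the shape of it):

    # what the accompanying text claimed: apt shows the removal list and waits
    sudo apt remove fonts-example-urdu

    # what the pasted command amounted to: -y answers "yes" for you, so the
    # dependency fallout (window manager included) scrolls past with no prompt
    sudo apt remove -y fonts-example-urdu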
This was my wakeup call. I am sure an AI evangelist could blame me for being stupid, which I freely admit to. However, as a clueless idiot, I have been copying and pasting from Stack Overflow for aeons, to never be tricked into destroying all my work.
My compromise is to allow some fun with cat pictures, featuring my uncle's cat, with Google Banana. This allows me to have a toe in the water.
Recently I went on a course with lots of people, few of them great intellects. I was amazed at how popular AI was with people who have no background in coding. They have collectively outsourced their critical thinking to AI.
I did not feel the FOMO. However, I am old enough to remember when Word came out. I was at university at the time and some of my coursemates were using it. I had genuine FOMO then. What is this Word tool? I was intimidated that I had this to learn on top of my studies. In time I did fire up Word, to find that there was nothing to learn of note, apart from 'styles', which few use to this day, preferring to highlight text and making it bold or biglier. I haven't used a word processor in decades, however, it was a useful tool for a long time.
Looking back, I could have skipped learning how to use a word processor, to stick to vi, latex and ghostscript until email became the way. But, for its time, it was the tool. AI is a bit like that, for some disciplines, you can choose to do it the hard way, using your own brain, or use the new tools. However, I have been badly burned, so I am waiting it out.
I was highly skeptical of this happening not that long ago, but I have to say that it seems increasingly likely. LLMs are still quite mediocre at esoteric stuff, but most software development work isn't esoteric. There's the viable argument that software development largely isn't about writing code, but the ability to write code is what justifies software developer salaries, because there's a large barrier to entry there that most just can't overcome. The 80/20 law seems to apply to everything, certainly here - 80% of your salary is justified from 20% of what you spend your time doing.
It's quite impossible to imagine what this will do to the overall market, because while this sounds highly negative for software developers, we're also talking about a future where going independent will be way easier than ever before, because one of the main barriers for fully independent development is gaps in your skillset. Those gaps may not be especially difficult, but they're just outside your domain. And LLMs do a terrific job of passably filling them in.
It'd be interesting if the entire domain of internet and software tech plummets in overall value due to excessive and trivialized competition. That'd probably be a highly disruptive but ultimately positive direction for society.
When AI becomes easy to pick up and guide, guess what: there will be no need for a programmer to pick it up. AI will be using itself, a Claude manager driving Claude programmers.
So leverage AI while you can still provide value by doing so.
It's literally a "use it or lose it" situation.
Of course those that believe that AI will convert into AGI and destroy society as we know it won't be convinced.
These companies are paying for the privilege of having their IP stolen.
> Might I be 7% more effective if I'd suffered through the early years? Maybe. But so what? I could just as easily have wasted my time learning something which never took off.
Chasing every new tech will lead to burnout and disillusionment at some point.
AI probably isn't going away the way NFTs largely did, and I use it to some degree. However, I don't see a lot of value in being on the bleeding edge of AI, as the shape it will take, and the skills that will matter for the next 10 years, are still forming. Trying to keep up now means constantly adapting how I work, where more time is spent keeping up with the changes in AI than actually doing something useful with it.
After the bubble pops, I think we'll start to see a much more clear picture of what the landscape of AI will look like long-term. Who are the winners, who are the losers, and what tools rise to the top after the hype is gone. I'll go deeper at that time.
Right now, the only thing I'm allowed to use at work is Copilot, so I just use that and don't bother messing around with much more in my free time.
I'm glad I jumped early on: Linux, Python, virtualization, cloud, nodejs, Solana.
I wish I'd gotten into Rust and LLMs earlier.
Like investing in index funds, a big part of it is psychology of the individual as Jeeves would say.
Find a way to scratch the FOMO itch without taking on too much risk.
Just go out and prove how useless it is. If, during your testing, you find that it has no good use case, toss it.
Waiting for others to validate a tech for you is a mistake IMO.
Ask the guy who delivered a pizza for a few bitcoins in the early days.
It is said that major providers more than break even on what they're charging.
But at the same time that's not the point of capitalism, is it? The point is to charge close to the value you're providing.
My lunch money is approximately $10, and I often blow through as much in Claude tokens generously provided by the company that hired me. But I'm not getting just $10 of value from those tokens; I'm getting much more.
The cost of entry to this market is extremely high. Should Anthropic win and become a near-monopoly, it is bound to keep increasing prices to the point where the value it's providing matches the cost.
That's the endgame of every AI company out there. It's worth using these tools now, while there's still competition and moats weren't established.
Employment?
(I'm not the earliest adopter of crypto and AI by any means. I only rode up crypto a couple of times for 2X and 3X kinda gains on my investment, and I only started using Claude last year.)
This line, as one example:
> For every HTML 2.0 you might have tried, you were just as likely to have got stuck in the dead-end of Flash.
Like a lot of tech, Flash had its moment in the sun and then faded away, but that “moment” lasted a decade, and plenty of people got their start because of it or built successful businesses around it. Did they have to pivot as Flash waned? Sure, but change is part of life.
I’m sorry but I find the take expressed in this piece to be absolutely miserable and uninspiring.
But, hey, congratulations on the 20:20 hindsight, I suppose.
I do think it's a bad take though. Not all new trends are the same: the metaverse was an obvious flop and crypto hasn't found practical applications. AI isn't like those because it's already practically changed the way I get my job done.
It takes time to learn skills, and getting started earlier means more time to use them in your working life.
I made these kinds of mistakes early in my career: stuck with PHP for far too long, ignoring all the changes in frontend design trends, React, etc. I was using jQuery far too late in my career and it really hurt me during interviews. What I was doing was seen as dated and it made ageism far worse for me.
Showing a portfolio website that was using tables instead of divs.
I had to rapidly skill up and it takes longer than you think when you stick too long with what works for you.
If AI truly is a nothing-burger, then guess what? Nothing lost, and perhaps you learned some adjacent tech that will help you later. My advice is to NEVER stop learning in this field.
Learning is your true superpower. Without that skill, you are a cog that will be easily replaced. AI has revealed to me who among my colleagues is curious, and a continuous learner. Those virtues have proven over the course of my 25+ year career in technology to be what keeps you relevant and marketable.
You're trying to make the point using Bitcoin, but in the early days I had just over 14,000 of them, so I can quite clearly see a point in getting in early.
> It is 100% OK to wait and see if something is actually useful.
> I took part in a vaccine trial
> Getting Jabbed With EXPERIMENTAL SCIENCE!
This is such a weird article. The author presents so many contradictory anecdotal experiences against the author's own conclusion.
it’s readily apparent who has bought into the LLM hype and who hasn’t
Not going to lie, I’d rather be poor. Not destitute - I’ve been poor but not destitute and I’d rather not go desperate - but poor? As in (because “poor” is very imprecise and can imply anything between utter poverty to “not owning three homes”) like having a low paying job but still enough to pay rent?
I’d rather be that than do AI assisted software development. Genuinely the only thing stopping me now is that there’s actually way more skill and qualifications in most low-paying jobs than a typical software developer imagines, and acquiring those takes time and money itself. But by now I know multiple people who made the jump even before the latest madness, and they’re all happier. Some still code, but don’t even publish. Some are like “I haven’t used a proper computer in _months_ this is great.” All work hard jobs at odd hours. None regret.
A ton of it is garbage. But 1 in 1000 gets to realize a vision that could never be marketed, never be discovered, and is something entirely new. For that I love it and thank those who made it.
Don't convert it all to XHTML, just start closing your <tags>.
Don't go full UML code generation mode, just use it to illustrate core business logic.
Don't buy an iPhone, get an n8.
Don't be a cryptobro, just get one, set it aside.
Don't go full TDD, but use it for bug fixes (see the sketch after this list).
Don't go full knowledge graph/org mode, just have a captain's log.md and an assets/ folder.
Don't go full microservices, create modular monoliths.
Don't go full agent teams, just use the normal CLI.
etc. etc.
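To make the TDD line above concrete: a minimal sketch, assuming Python and pytest, where the function and the bug are both hypothetical. The idea is that when a bug report comes in, you write one failing regression test that reproduces it, fix the code, and stop, rather than test-driving the whole module.

```python
# Hypothetical bug report: parse_price("1,299.00") used to return 1.299
# because the thousands separator was treated as a decimal point.

def parse_price(text: str) -> float:
    """Parse a human-formatted price string into a float."""
    # The fix: strip thousands separators before converting.
    return float(text.replace(",", ""))

def test_parse_price_handles_thousands_separator():
    # Written first, watched fail, then parse_price was fixed.
    # It stays behind to keep the bug from quietly returning.
    assert parse_price("1,299.00") == 1299.0
```

One test per bug documents the failure and guards against regression without committing you to test-first for everything.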
This is the lazy guy's path, not the wise one.
I mean... yeah? It's obviously true. However, people use LLM coding today not because they're "afraid of being left behind" or "investing in a new tech" or whatever abstract reasoning. It's because they're already reaping the benefits right away. It takes just a few hours to get through like 80% of the learning curve.
Then of course there are MANY problems, starting with BTC itself (https://blog.dshr.org/2025/09/the-gaslit-asset-class.html), but they generally aren't what most people talk about. And today, even if the complexity is insane, the documentation sparse, and the code quality questionable, Lightning is actually the only truly scalable solution we have for micropayments and payments of various (generally not large) amounts. It has some absurd aspects, but that hardly matters. It works.
More generally, we don't have a crystal ball; where we can, we diversify, certainly limiting risk but also taking a bit of a gamble; where we can't, we choose to watch and see how it goes, knowing that we'll pay for it in terms of returns.
As things stand for me personally: given the level of IT obscenity in the traditional banking world, which in 2026 still doesn't offer decent APIs to retail customers, or at least decent export functions, and instead offers shameful websites, a push towards completely unacceptable mobile apps, absurd limits, etc., the worst CEX is less bad than the best bank. I don't trust either, but I distrust the CEX less than the bank. And given WikiLeaks, Francesca Albanese, the protesters in Canada, the various private individuals illegally "sanctioned" by the EU Commission, and so on, I'd say it's madness to rely on banks for anything more than the bare minimum.
Most people today know nothing of this and don't weigh it up, but they will, and they'll pay for having delayed paying attention.
This is more on the scale of the invention of the printing press, the telegraph, or the internet itself.
"I'm ok being left behind, I will join this Internet thing when it really becomes useful"...
Ok... you do you. Hope you don't get there too late.
In general, we as a society have not adjusted to technology. We've gone through too much change to have any stable baselines. So we're going to float in insanity for a while until things finally settle down. Probably two wars, a famine, and several periods of resource scarcity away still, but we'll get there one day...
LLMs, at the moment, are all about giving up your own brain and becoming fully dependent on a subscription-based online service.
IMO it reads a little desperate, and very much like the hype bros but from the opposite side. Take a look at the articles if you don't believe me.
https://shkspr.mobi/blog/tag/ai/
- I'm OK being left behind, thanks!
- Unstructured Data and the Joy of having Something Else think for you
- This time is different
- How close are we to a vision for 2010?
- AI is a NAND Maximiser
- Reputation Scores for GitHub Accounts
- Agentic AI is brilliant because I loath my family
- Stop crawling my HTML you dickheads - use the API!
- Removing "/Subtype /Watermark" images from a PDF using Linux
- LLMs are still surprisingly bad at some simple tasks
- Books will soon be obsolete in school
- Winners don't use ChatGPT
- Grinding down open source maintainers with AI
- Why do people have such dramatically different experiences using AI?
- Large Language Models and Pareidolia
- How to Dismantle Knowledge of an Atomic Bomb
- GitHub's Copilot lies about its own documentation. So why would I trust it with my code?
- LLMs are good for coding because your documentation is shit
When maps apps came around, people totally lost the brain muscle for navigating on their own. Using LLMs is no different: people over-reliant on these tools are simply ngmi. They are going to be totally dependent on their favorite billionaire being willing to sell them competency via their thinking machines.
I would caution everyone to consider whether the billionaires who are screaming that you're going to be left behind, laid off, and made redundant if you don't (pay them to) use their brain-nerfing machines actually have your best interests at heart.
You're not going to be left behind.
Did handmade Swiss watch movements lose all demand when Asia started mass-manufacturing watches? No. There is always going to be more demand for quality over slop. It's the same reason that handmade clothes are worth 100x more than clothes at a department store.
This is all by design, too: these billionaires selling thinking machines are trying to make us all dependent on their fountain of tokens. Don't fall for it. Just like maps apps made everyone reliant on Google/Apple for the ability to navigate their own city, these billionaires want to do the same thing with your ability to think, build, plan, and even learn/read.
Don't fall for this scam. Unlike other hype cycles like NFTs and crypto, this one will damage more than just your bank account: it will fry your brain if you become over-reliant on it.
Take a second and consider why these LLM tool companies design their products like slot machines. They put multipliers in their UIs (run this x3, x4, x5 times) so that you inevitably treat the thing like a slot machine. And it is like a slot machine: you have no way to control the results, which are quite random; LLMs just have a better payout percentage, at the cost of making your brain dopaminergically and structurally dependent on their output. They convince people there is some occult art to the formation of a prompt, like a gambler who thinks that pressing the buttons in a certain order will get better results, or any of the other gambling superstitions.
If you're writing software, please take a moment to breathe and ask yourself whether it's really that useful to have piles of code where you have little idea how things work, even if they do. Billionaires will sell you on the idea that this doesn't matter because the LLM, which you conveniently have to pay them to use, will always be able to fix that bug.
Don't fall for the ruling class's trick: they want you reliant on this thing so they can tell you that your input isn't as valuable, and therefore your salary and skills are not as valuable. We have to stop this now.
These only really happen in mature codebases tied up in complex business requirements.
The last few times I’ve tried LLMs with this codebase it has not been fruitful.
Weird because it’s impressive in other areas, especially tech with no real users lmao
Why not simply evaluate things instead of ignoring them until it's too late?
Sure, we don't have infinite time, but the fact that OP mentions these two things means the pattern showed up often enough.
What jobs aren’t requiring usage of these tools by now?
I remember when React was the hotness and I was still using jQuery. I didn't learn it immediately; maybe a couple of years later is when I finally started to use React. I believe this delayed my chances of getting a job, especially around the time when hiring was good, e.g. 2016 or so.
Vibe-coding, though, just sucks the joy out of it. I can't feel happy if I can just say "make this" and it comes out. I enjoy the process... and yeah, you can say it's "dumb" or "a waste of time" to bother typing out code with your hands, but for me it isn't just about "here's the running code": I like architecting it, deciding how it goes together, which, yes, you can also do with prompts.
Idk, I'm fortunate that right now tools like Cursor/Windsurf/Copilot are not mandatory. I think in the long run, though, I will get out of working in software professionally for a company.
I do use AI, though, every time I search something and read Google's AI summary. And you could argue it would be faster to use a built-in thing that types for you vs. copy-paste.
Which, again... what is there to be proud of if you can just ask this magic box to produce something and claim it as your own? "I made this."
Even design can be done with AI (mechanical/3D design); then you put it into a 3D printer. Where is the passion/personality...
Anyway yeah, my own thoughts, I'm a luddite or whatever