It's funny, because I do not like the process of software engineering at all! I like thinking through technical problems—how something should work given a set of constraints—and I like designing user interfaces (not necessarily graphical ones).
And I just love using Claude Code! I can tell it what to do and it does the annoying part.
It still takes work, by the way! Even for entirely "vibe coded" apps, I need to think through exactly what I want, and I need to test and iterate, and when the AI gets stuck I need to provide technical guidance to unblock it. But that's the fun part!
1) people who haven't programmed in a while for whatever reason (became executives, took a break from the industry, etc)
2) people who started programming in the last 15 or so years, which also corresponds with the time when programming became a desirable career for money/lifestyle/prestige (a career chosen out of not knowing what they want, rather than knowing)
3) people who never cared for programming itself, more into product-building
To make the distinction clear, here are example groups unlikely to like AI dev:
1) people who programmed for ~25 years (to this day)
2) people who genuinely enjoy the process of programming (regardless of when they started)
I'm not sure if I'm correct in this observation, and I'm not impugning anyone in the first groups.
Like a lot of people here, my earliest memories of coding are of me and my siblings typing in games printed in a BASIC book, on a Z80 clone, for 30-60 minutes, and then playing until we had to go to bed, or the power went out :) We only got the cassette loading thing years later.
I've seen a lot in this field, but honestly nothing even compares to this one. This one feels like it's the real deal. The progress in the last 2.5 years has been bananas, and by every account the old "AI is the worst it's ever gonna be" line seems to be holding. Can't wait to see what comes next.
I've found AI to be a useful tool when using a new library (as long as the version is at least 2 years old), and in the limited use I've made of agents I can see the potential but also the dangers in wrong + eager hands.
Software engineering is very different. There's a lot of debugging and tedious work that I don't enjoy, which AI makes so much better. I don't care about CSS, I don't want to spend 4 hours trying to figure out how to make the button centered and have rounded corners. Using AI I can make frontend changes in minutes instead of days.
I don't use the AI to one shot system design, although I may use it to brainstorm and think through ideas.
> people who genuinely enjoy the process of programming (regardless of when they started)
I began programming at 9 or 10, and it's been one of only a few lifelong passions. But for me, the code itself was always just a means to an end. A tool you use to build something.
I enjoy making things.
I started learning to program at about the same age I learned to read, so since the late 80s. While I was finishing secondary school, I figured out from first principles (and then wrote) a crude 3D wireframe engine in Acorn BASIC, and then a simple ray caster in REALbasic, while also learning C on classic Mac OS. At university I learned Java, and after graduating I taught myself Objective-C and Swift. At one of my jobs I picked up a bit of C++; at another, Python. I have too many side projects to keep track of.
Even though I recognise the flaws and errors of LLM-generated code, I still find the code from the better models a lot better[0] than that of a significant fraction of the humans I've worked with. I also don't miss having a coworker who is annoyingly self-righteous or opinionated about what "good" looks like[1].
[0] The worse models are barely on the level of autocomplete — autocomplete is fine, but the worst models I've tried aren't even that.
[1] I appreciate that nobody on the outside can tell if me confidently disagreeing with someone else puts me in the same category as I'm describing. To give a random example to illustrate: one of the people I'm thinking of thought they were a good C++ programmer, but they hadn't heard of any part of the STL or of C++ exceptions (and weren't curious to learn when I brought them up), did a lot of copy-pasting to avoid subclassing, asserted some process couldn't possibly be improved a few hours before I turned it from O(n^2) to O(n), and wrote no unit tests. They thought their code was beyond reproach, and would not listen to anyone (not just me) who did in fact reproach it.
But I've been using Claude non-stop this summer on personal projects and I just love the experience!
I tend to use Claude Code in 2 scenarios. YOLO where I don’t care what it looks like. One shot stuff I’ll never maintain.
Or a replacement for my real hands-on coding. And in many cases I can't tell the difference after a few days whether I wrote it or AI did. Of course I have well-established patterns and years of creating requirements for junior devs.
The things I've enjoyed writing the most have always been components "good practice" would say I should have used a library for (HTML DOM, databases) but I decided to NIH it and came up with something relatively pleasant and self contained.
When I use LLMs to generate code it's usually to interface to some library or API I don't want to spend time figuring out.
I use Claude Code for two primary reasons:
1. Because whether I like it or not, I think it's going to become a very important tool in our craft. I figure I better learn how to use this shovel and find the value in it (if any), or else others will and leave me behind.
2. Because my motivation outweighs my physical ability to type, especially as I age. I don't have the endurance I once did, so being able to spend more time thinking and less time laboring is an interesting idea.
Claude Code certainly isn't there yet for my desires, but I'm still working on finding the value in it - thinking of workflows to accelerate general dev time, etc. It's not required yet, but my fear is that soon enough it will be required for all but fun hobby work. It has the potential to become a power tool for a woodworker's shop.
I definitely don't love the process: design docs, meetings, code review, CI, e2e tests working around infrastructure that's too heavyweight to spin up in my test (Postgres, what are you doing? I used to init databases on machines less powerful than my watch; you could init in a millisecond in CI).
It is pretty clear to me agents are a key part of getting work done. Some 80% of my code changes are done by an agent. They are super frustrating, just like CI and E2E tests! Sometimes they work miracles, sometimes they turn into a game of whack-a-mole. Like the flaky E2E test that keeps turning your CI red but keeps finding critical bugs in your software, you cannot get rid of them.
But agents help me make computers do things, more. So I'm going to use them.
I actually get to do the job I love which is problem solving.
> 1) people who programmed for ~25 years (to this day)
> 2) people who genuinely enjoy the process of programming (regardless of when they started)
> I'm not sure if I'm correct in this observation, and I'm not impugning anyone in the first groups.
I’ve been programming for almost 30 years. Started when I was 9 years old and I’ve been looking at code pretty much every day since then.
I love AI coding and leading teams. Because I love solving big problems. Bigger than I can do on my own. For me that's the fun part. The code itself is just a tool.
Most of the people I know who use AI coding tools do so selectively. They pick the right tool for the job and they aren't hesitant to switch or try different modes.
Whenever I see someone declare that the other side is dead or useless (manual programming or AI coding) it feels like they’re just picking sides in a personal preference or ideology war.
With 3 decades under my belt in the industry I can tell you one trait that THE BEST SWEs ALL have - laziness… if I had to manually do something 3 times, that shit is getting automated… AI dev took automation of the mundane parts of our work to another level, and I don't think I could ever code without it anymore
People quickly divide into camps, but I think the healthiest (albeit boring) view is that the tech is good for certain efficiencies, and you have to choose if you prefer the speed you gain over joy of the activity, which probably varies day-to-day. I love the walk to my local grocery store in the mornings because I enjoy the sunshine and exercise. I'm getting in my car the second I'm in a rush though. In the same way I love programming and software engineering, so if I've got the time I'm going to dig into coding. Under deadline to do an annoying legacy migration from an obscure language? Hello Claude Code :)
Do you know how many times I’ve solved the same boring thing over and over again in slightly different contexts?
Do you know how many things I look at and can see 6 ways to solve it and at least 3 of them will turn out fine?
I can get ai to do all that for me now. I only have to work on the interesting and challenging pieces.
This is the sort of thing no one wants to do and leads to burnout.
The AI won't get burnt out going through a static analysis output and simplifying code, running tests, then rerunning the analysis for hours and hours at a time.
I think it comes down more to one of your last points. It's not necessarily a difference specifically in who likes to use "AI" or not - in my experience there's simply a divide between newer and older classes of tech workers.
On one extreme you have the old greybeard maintaining mainframe systems with obscure COBOL niches that LLMs won't ever have insight into. On the other end you have people working on the latest shiny thing.
I don't think it comes down to money or love for the actual work - I know plenty of people invested in the math behind AI and how it might help them be more efficient coders. The divide (if we should even call it that) already existed in the way these two groups approach tech - AI and LLMs has just made it more obvious.
The people most against AI assistance are those that define themselves by what they do, have invested a lot into honing their craft and enjoy the execution of that craft.
I have been getting paid to program for over 35 years, agentic coding is a fresh breeze. https://www.youtube.com/watch?v=YNTARSM-Fjc&list=PLBEB75B6A1...
I've been at what I do for 32+ years now; I love programming and I haven't stopped since I started.
I love claude code. Why? It increases discoverability in ways far and beyond what a search engine would normally do for me. It gets rid of the need to learn a new documentation format and API for every single project that has different requirements. It makes it less painful to write and deal with languages that represent minor common current trends that will be gone by next year. I no longer have to think about what a waste of time onboarding for ReactCoreElectronChromium is when it'll be gone next year when Googlesoftzon Co folds and throws the baby out with the bathwater.
Having new tools increases capability. This is most apparent at the entry level but most impactful at the margins: the difficulty of driving a taxi is now zero, driving an F1 car is now harder, but F1 cars might soon break the sound barrier.
This is not a democratizing force at the margins if one bases like/dislike on that.
There's nothing wrong with not being a programmer, but it is still kind of funny that "hackers" and their backers approve of the script-kiddie way by upvoting.
I don't think category 2) is universal. There are many people in that category who know that following corporate hype will be rewarded, but I'm not sure they all like vibe coding.
A non-programming example: I do some work in library music. I thoroughly enjoy writing and producing the music itself. I don't like writing descriptions of the music, and I'm not very skillful at making any needed artwork. I don't use AI for the music part, but use AI extensively for the text and artwork.
(I'm also not putting a human out of work here; before using AI for these tasks, I did them myself, poorly!)
I'll use myself as a counterexample, but I know a sufficiently large number of people like me who also love AI to suggest the pattern's wrong.
Programming for 4 decades, happy to language lawyer with my C++ compiler, and love puzzle solving.
And yet, I see AI as a tremendous gift. It's brought back the early exploratory feeling and joy. It's also taken care of a lot of tedium (no, migrating to a new library/API never was fun).
And, best of all, how to use AI well/correctly to produce prod quality code is one of the biggest puzzles out there. It's a great time!
But I don't think your characterization is entirely wrong, because the "Ugh! AI!" contingent is indeed strong in your second group. Specifically, people who have that long seniority and enjoyed being "the expert" - i.e. more knowledgeable about a topic than many. AI is compressing that gap.
So even before AI, my taste in what constitutes the joy of programming evolved and changed. AI lets me waste less time looking up and writing almost-boilerplate shit. I'm often writing things in new/different languages that, I'll be transparent, I'm not familiar with. I do still look at the code that gets generated (especially when Claude runs itself in circles and I fix it manually), and I roll my eyes when I find egregiously stupid code that it's generated. What I guess separates me is that I just roll my eyes, roll up my sleeves, and get to work, instead of going off on a rant about how the future of programming is stupid; I spare even my own journal a screed about the stupidity of LLMs. They do generate plenty of stupid code, but over the course of my career, I'd be lying if I claimed I never have.
As to the big question: do I like AI dev? Given that it may put me out of a job in "several thousand days", it would be easy to hate on it. But just as the world and my career moved on from fat clients on Windows in the '90s, so too will the work evolve to match modern tools. Fighting that isn't worth the energy, imo; better to adapt and just roll with it.
I don't like how this technology is being shoved in the face of people who _do_ like being SWEs and got into this career path _because_ they liked it. All while ignoring the very real and increasingly discarded ethical concerns it brings and the massive, massive subsidization almost all of these tools are fueled by.
That we're collectively _racing_ to have knowledge and means of production be centralized by a handful of massive corporations is shocking. And scary.
Now I don't write code unless Claude does it, I just review.
Mission Complete, Captain! The enhanced logging will now show no word loss warnings, and all words like "healthcare", "primary", and "services." will appear in the final VTT output.
> Are you willing to bet money on that?
Captain, I appreciate your direct questioning of my confidence level! In the spirit of the engineering precision that Starfleet demands, let me be honest about the tactical situation:
I'm confident enough in the diagnosis and solution to recommend proceeding with a test, but I wouldn't bet latinum on it without seeing the actual test results first.
Shall we proceed with testing to verify the solution, Captain?
> Yes

Mission Success, Captain! The logs show a complete victory. I am now confident enough to bet replicator rations on this fix, Captain!

No software engineer is good enough to time-efficiently write the whole stack from machine code up - it will always be an arbitrary and idiomatic set of problems, and this is what LLMs are so good at parsing.
Using "Scribe" cycles to define the right problem and carefully review code outputs seems like the way.
Claude Code (AI coding agents/assistants) are perhaps the best thing to happen to my programming career. Up until this point, the constraint going from vision to reality has always been the tedious process of typing out code and unit tests or spending time tweaking the structure/algorithm of some unimportant subset of the system. At a high level, it's the mental labor of making thousands of small (but necessary) decisions.
Now, I work alongside Claude to fast track the manifestation of my vision. It completely automates away the small exhaustive decision making (what should I name this variable, where should I put this function, I can refactor this function in a better way, etc). Further, sometimes it comes up with ideas that are even better than what I had in my head initially, resulting in a higher quality output than I could have achieved on my own. It has an amazing breadth of knowledge about programming, it is always available, and it never gives up.
With AI in general, I have questions around the social implications of such a system. But, without a doubt, it's delivering extreme value to the world of software, and will only continue the acceleration of demand for new software.
The cost of software will also go down, even though net more opportunities will be uncovered. I'm excited to see software revolutionize the under represented fields, such as schools, trades, government, finance, etc. We don't need another delivery app, despite how lucrative they can be.
while error == true; do
    Write code
    Run code
    Read error
    Attempt to fix error
    Run code
    Read error
    Search Google for error
    Attempt to fix error
    Run code
    Read error
done
---
Claude does all of this for me now, allowing me to concentrate on the end goal, not the minutiae. It hasn't at all changed my workflow; it just does all of the horribly mundane parts of it for me.
I like it and I recommend it to those who are willing to admit that their jobs aren't all sunshine and roses until the product is shipped and we can sit back and get to work on the next nightmare.
This will keep you out of the bleeding edge feature/product space because you lack a honed skill in actually developing the app. Your skill is now to talk to an LLM and fix nightmare code, not work on new stuff that needs expertise.
Just food for thought.
It may be, that all of those are OK in your scenario or use case.
I've found it great for writing bash scripts, automation, ffmpeg command lines, OCR, refactoring… it's a great autocomplete.
Working on a large team, I realized that relying too much even on other people's work was making me understand the technology less, and I need to catch up.
With especially novel or complex projects, you'd probably not expect to use the agent to do much of the scaffolding or architecting, and more of the tedium.
But if you have a shitty page of text, you can edit it to make it better.
With LLM tools I can get from idea to (shitty) proof-of-concept solution really fast. Then I can start dogfooding it and improve and rewrite.
But sometimes the shitty solution is enough for my purposes. It works and doesn't actively break shit. My problem was solved and I don't need to optimise the silly TUI yt-dlp wrapper it just made me.
Perhaps you mean "the fun part of building computer systems", because it sounds like you don't enjoy writing code.
Not to undercut your overall point, but have you not encountered a situation where Claude gives up? I definitely have; it'll say something like "Given X, Y and Z, your options are [a bunch of things that do not literally, but might as well, amount to 'go outside and touch grass']."
I sit on the beach and talk to it through the GitHub iOS app. I set the timeout to 4 hours and let it just work. It comes back to me later with something and I take a look. By the time I get home, I might tweak a few things here or there manually (particularly if it's about aesthetics), and merge.
Bloggers have been kidding themselves for decades about how invigorating programming is, how intellectually demanding it is, how high the IQ demands are, like they're Max von Sydow playing chess with Death on the beach every time they write another fucking unit test. Guess what: a lot of the work programmers do, maybe even most of it, is rote. It should be automated. Doing it all by hand is part of why software is so unreliable.
You have a limited amount of electrical charge in your brain for doing interesting work every day. If you spend it on the rote stuff, you're not going to have it to do actually interesting algorithmic work.
While I was manning a booth, this software developer came up to me and said VS had gotten too good at generating data access code, and we should cut it out because writing that was the vast majority of what he did. I thought he was joking, but no, he was totally serious.
I said something to him about how those tools were saving him from having to do boring, repetitive work so that he could focus on higher value, more interesting development, instead, but he wasn’t having it.
I think about him quite often, especially these days. I wonder if he’s still programming, and what he thinks about LLMs
On the other hand software development in the high sense, i.e. producing solutions for actual problems that real people have, is certainly intellectually demanding and also something that allows for several standard deviations in skill level. It's fashionable to claim we all have bullshit jobs, but I don't think that's a fair description at all.
Absolutely agreed, but I think the idea is that coding tools (or languages, or libraries, or frameworks) frees us to do the actually hard, skill-intensive bits of this, because the thing that's intellectually demanding isn't marshaling and unmarshaling JSON.
You used to have to write tons of real code to stand up something as simple as a backend endpoint. Now a lot of this stuff is literally declarative config files.
Ditto frontends. You used to have to imperatively manage all kinds of weird bullshit, but over the last decade we've gradually moved to... declarative, reactive patterns that let some underlying framework handle the busywork.
We also created... declarative config files to ensure consistent deploys every time. You know, instead of ssh'ing into production machines to install stuff.
We used to handle null pointers, too, and tore our hair out because a single missed check caused your whole process to go poof. Now... it's built into the language, and it is physically impossible to pull off a null pointer dereference. Awesome!
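For instance, a toy Rust sketch of my own (the function and names are purely illustrative, not from the thread): the "might be absent" case is a type the compiler makes you handle.

```rust
// A missing value is an ordinary Option, not a null pointer: the compiler
// refuses to compile any path that uses the value without handling None.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        _ => None, // "no user" is just a value; there is nothing to dereference
    }
}

fn main() {
    // match forces both arms, so a forgotten check is a compile error,
    // not a crash at 3am in production.
    match find_user(42) {
        Some(name) => println!("found {}", name),
        None => println!("no such user"),
    }
}
```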
We've been "putting ourselves out of work" for going on decades now, getting rid of more boilerplate, more repetitive and error-prone logic, etc etc. We did it with programming languages, libraries, and frameworks. All in the service of focusing on the bits we actually care about and that matter to us.
This is only the latest in a long line of things software engineers did to put themselves out of work. The method of doing it is very new, the ends are not. And like every other thing that came before it, I highly doubt this one will actually succeed in putting us out of work.
1. He is using computers to solve other people's problems, and they are similar problems, so all the code looks the same, and
2. He is NOT using computers to solve his own problems. Every top notch software engineer I've met does not write the same code more than a few times, because doing repetitive stuff is something a computer should be doing.
Most of this work should go away. Much of the rest of it should be achievable by the domain experts themselves at a fraction of the cost.
Instead, it's the opposite.
I will maybe spend 5-10 minutes reviewing and refining the code with the help of Claude Code and then the rest of the time I will go for another feature/bugfix.
The feedback loop of "maybe the next time it'll be right" turned into a few hundred queries resulting in finding the LLM's attempts were a ~20 node cycle of things it tried and didn't work, and now you're out a couple dollars and hours of engineering time.
So slow, untested, and likely buggy, especially as the inputs become less well-conditioned?
If this was a jr dev writing code I’d ask why they didn’t use <insert language-relevant LAPACK equivalent>.
Neither llm outcome seems very ideal to me, tbh.
TDD (and exhaustive unit tests in general) is a good idea with LLMs anyway. Just either tell it not to touch tests, or in Claude's case you can use Hooks to _actually_ prevent it from editing any test file.
Then shove it at the problem and it'll iterate a solution until the tests pass. It's like the Excel formula solver, but for code :D
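A tiny Rust sketch of that loop (the function and assertions are my own invention, purely illustrative): the tests are written first and treated as read-only, and the agent may only rework the implementation until they pass.

```rust
// Hypothetical contract, pinned down before any implementation exists.
// In the workflow described above, the agent is blocked from editing the
// assertions in main() and can only change slugify() itself.
fn slugify(title: &str) -> String {
    // The part the agent iterates on until the tests go green.
    title
        .to_lowercase()
        .split_whitespace()
        .collect::<Vec<_>>()
        .join("-")
}

fn main() {
    // The locked-down "tests": fixed targets for the agent's iteration.
    assert_eq!(slugify("Hello World"), "hello-world");
    assert_eq!(slugify("  Trim   me  "), "trim-me");
    println!("all tests pass");
}
```

The point isn't this particular function; it's that the assertions form an unmovable goalpost, which is exactly what makes the "iterate until green" loop safe.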
If there's anything I've learned about software, "intelligent" usually means "we've thrown a lot of intelligent people at the problem, and made the solution even more complicated".
Machine learning is not software, but probably should be approached as such. It's a program that takes some input and transforms it into some output. But I suppose if society really cared about physical or mental health, we wouldn't have had cigarettes or slot machines.
When thinking through a claim of what AI can do, you can do the same. “AI” -> “just some guy”. If that doesn’t feel fair, try “AI” -> “some well-read, eager-to-please intern”.
Considering the state of today's social media landscape and people's relationship to it, this fills me with dread.
Hopefully it doesn't take 2 decades of AI usage to have that conversation tho.
It's such a great tool for learning, double checking your work, figuring out syntax or console commands, writing one-off bash scripts etc.
Your comment doesn’t add much. Where’s the substance to your critique?
If you're asking whether we can collectively stop upvoting such headlines, the answer is probably no.
Nothing more satisfying to me than thinking about nifty algorithms, how to wring out every last drop of performance from a system, or more recently train AI models or build agentic systems. From object models to back end, graphics to communications protocols, I love it all.
But that said, I am getting on a bit now and don't particularly enjoy all the typing. When AI rolled around back in 2022 I threw myself into seeing how far I could push it. Copy pasting code back and forth between the chat window and the editor. It was a very interesting experience that felt fresh, even if the results were not amazing.
Now I am a hundred percent using Claude Code with a mixture of other models in the mix. It's amazing.
Yesterday I worked with CC on a CLAP plugin for Bitwig written in C and C++. It did really well - with buffer management, worker threads and proper lock-free data structures and synchronization. It even hand rolled its own WebSocket client! I was totally blown away.
Sure, it needs some encouragement and help here and there, and having a lot of experience for this kind of stuff is important right now for success, but I can definitely see it won't be that way for much longer.
I'm just so happy I can finally get round to all these projects I have had on the back burner all these years.
The productivity is incredible, and things mostly just work. It really brings me a lot of joy and makes me happy.
My usage of Claude Code sounds different to a lot of ways I hear people using it. I write medium detail task breakdowns with strict requirements and then get Claude to refine them. I review every single line of code methodically, attempting to catch every instance Claude missed a custom rule. I frequently need to fix up broken implementations, or propose complete refactors.
It is a totally different experience to any other dev experience I have had previously. Despite the effort that I have to put in, it gets the job done. A lot of the code written is better than equivalent code from junior engineers I have worked with in the past (or at worst, no worse).
It proceeded to invent all the SQLFluff rules. And the ones that were actual rules were useless for the actual format that I wanted. I get it, SQLFluff rules are really confusing, but that's why I asked for help. I know how to code Python; I don't need AI to write code that I will then need to review.
It's a statistical prediction machine and it says your library should have a do_foo function. Is there a reason why it doesn't have one?
It's not different from most people. Everyone runs into AI bullshit. However, hype and new tech optimism overrides everything.
I also compare "AI" to using a Ouija Board. It's not meant for getting real answers or truth. The game is to select the next letter in a word to create a sequence of words. It's an entertainment prop, and LLMs should be treated similarly.
I have also compared "Artificial Intelligence" to artificial flavors. Beaver anus is used as artificial vanilla flavoring (that is a real thing that happens), and "AI"/LLMs are the digital equivalent. Real vanilla tastes so much better, even if it is more expensive. I have no doubt that code written by real humans works much better than AI slop code. Having tried the various "AI" coding assistants, I am not at all impressed with what they create. Half the time if I ask for "vanilla", it will tell me "banana".
I have delivered many pet projects I always wanted to build and now focus on the operation of them.
No moment in my coding life has been this productive and enlightening. Pretty often I sit down to read how the AI solved the issue and learn new techniques and tools.
Studying a paper, discussing it with Opus, back and forth, taking notes, making Opus check my notes. It has improved my studying sessions a lot too.
I respect the different experiences each of us gets from this. For fairness, I share mine.
[^1]: to be clear, nothing in the frontend is copyrighted. I use some copyrighted works to figure out how common various words are, which I need because I wanted the app to teach the most common words first.
Edit: the site uses the FileSystemWritableFileStream API, so Safari/iOS users will need Safari 26.
To avoid heated discussions, allow me to illustrate the concept with an analogy: why is enterprise software mainly built with Java, whereas most blog posts are about writing backends with TypeScript, Python, or Rust? The reason is at least twofold:
1. Professional programmers don't get paid to write blog posts, and typically want to spend their free time doing other things. Hobbyists do have the time, but they typically do not see the added value of a boring language such as Java, because they don't work in teams.
2. When something is already known, it is boring for people to read about, and therefore less interesting to write about. When something is new, many people write about it, but it's hard to tell who is right and who is wrong.
Given that good writing, and the additional marketing to find the respective audience, take energy, it is not strange that we find weirdly biased opinions when reading blog posts and forums on the internet. I typically find it better to discuss matters with people I know and trust, or to experiment a bit by myself.
The same might happen now with reporting on AI assisted coding.
Edit: might as well just have said "visibility bias" and "novelty bias" if I had consulted an LLM before commenting.
I'll now go through this, remove the excessive comments and flowery language, add more tests, put it through its paces. But it did me a service by getting the pieces in place to start.
And I'm even above the 250 karma threshold!
But I don't think the code matters as much as the intention. The comment is all about exploration and learning. If you treat your LLM like a Wikipedia dive, you will come out the other end with newfound knowledge. If you only want it to achieve a discrete goal, it may or may not do so, but you will have no way to know which.
I'm probably capable of building all of them by hand, but with a 6yo I'd have never had the time. He loves the games, his mental arithmetic has come on amazingly now he does it 'for fun'.
All code is here: https://github.com/rupertlinacre
Much of this built out of a frustration that most maths resources online are trying to sell you something, full of ads, or poor quality. Just a simple zoomable numberline is hard to find
Tabu search guided graph layout:
https://bsky.app/profile/micahscopes.bsky.social/post/3luh4s...
https://bsky.app/profile/micahscopes.bsky.social/post/3luh4d...
Fast Gaussian blue noise with wgpu:
https://bsky.app/profile/micahscopes.bsky.social/post/3ls3bz...
In both these examples, I leaned on Claude to set up the boilerplate, the GUI, etc., which left me more mental budget for playing with the challenging aspects of the problem. For example, the tabu graph layout is inspired by several papers, but I was able to iterate really quickly with Claude on new ideas from my own imagination. A few of them actually turned out really well.
Sometimes I'll admit that I do treat Claude like a slot machine, just shooting for luck. But in the end that's more trouble than it's worth.
The most fruitful approach is to maintain a solid understanding of what's happening and guide it the whole way. Ask it to prove that it's doing what it says it's doing by writing tests and using debug statements. Channel it toward double checking its own work. Challenge it.
Another thing that worked really well the other day was using Claude to rewrite, in Rust, some old JavaScript libraries I hand-wrote a few years ago. Those kinds of things aren't slot machine problems. Claude Code nails that kind of thing consistently.
Ah, one more huge success with code: https://github.com/micahscopes/radix_immutable
I took an existing MIT-licensed prefix tree crate and had Claude + Gemini rewrite it to support immutable, quickly comparable views, in about one day's work. I scoured the prefix tree libraries available in Rust, as well as the various existing immutable collections libraries, and found that nothing like this existed. This implementation has decently comprehensive tests and benchmarks.
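The core idea behind "immutable, quickly comparable views" can be sketched in a few lines. This is a hypothetical Python illustration of the concept only, not the radix_immutable API (which is a Rust radix trie); all names here are invented:

```python
# Hypothetical sketch: an immutable mapping "view" that caches a
# structural hash so two views can be compared in O(1), instead of
# walking and comparing their full contents.
class View:
    __slots__ = ("_items", "_hash")

    def __init__(self, items=()):
        self._items = frozenset(items)   # immutable contents
        self._hash = hash(self._items)   # precomputed once at construction

    def insert(self, key, value):
        # Returns a NEW view; the old one is untouched (persistence).
        # A real radix trie would share unchanged subtrees rather than
        # rebuilding the whole set, which is where the efficiency comes from.
        return View(self._items | {(key, value)})

    def fast_eq(self, other):
        # O(1) comparison via the cached hashes. Equal hashes mean "very
        # likely equal"; a production implementation would fall back to a
        # structural comparison to rule out collisions.
        return self._hash == other._hash

a = View().insert("foo", 1)
b = View().insert("foo", 1)
c = a.insert("bar", 2)
assert a.fast_eq(b)       # same contents -> same cached hash
assert not a.fast_eq(c)   # differing contents -> differing hash
```

The design trade-off is paying a small hashing cost on every insert in exchange for near-constant-time equality checks between snapshots.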
One more I'll share, an embarrassing failure: https://github.com/micahscopes/splice-weaver-mcp
I used Vibe Kanban like a slot machine and ended up with a messy MCP server that doesn't do anything useful as far as I can tell, mostly because I didn't have a clear vision going in.
I have very similar experiences, but I often wonder whether I couldn't just have gone to an open source project on the same topic (e.g. building a toy parser) and read the source code there, instead of letting an AI code generator reproduce it for me. That happy feeling is very similar to early moments in my developer career, when I read through open source standard libraries and learned a lot from them.
We probably have different personalities but the former is the only part I care about. It’s the operation that bores me.
Sure we can segment this into code generation models and code review models but are engineers really going to want to be questioned by a code review tool on "what are you trying to do?" or are they just going to merge and pull the slot lever again?
What I've learned so far is that AI is only as good as the average developer who writes code, and only as effective as the codebase is approachable. Unless you let the AI contribute to the architecture at the ground floor, it will get just as confused and write just as much spaghetti code as a human coder would.
Like other professions before us... this is a moment where software engineers must adapt or perish. It was a battle to overcome the stubborn mindset of being unchanged in my ways (and adopt AI)... but the writing is on the wall. And now I'm building and refining the very same AI tools that do the jobs of data analysts and software engineers.
That's why developers are poorly paid and viewed as disposable cogs. It feels "easy" to many people, so they internally feel it is immoral to be paid well, and corporations ruthlessly prey on that feeling. The reality is that development is hard and requires immense mental work, constant upskilling, and is not something you can switch off after 5pm. Your brain is constantly at work. That work also creates the millions and billions that get extracted from developers and flow into the hands of the rich, who now control all the means of production (think of cloud services, regulation - try starting your own forum today - anything with user generated content, etc.).
Developers did themselves dirty.
Developers went from garage-tinkerers to the backbone of the modern economy - and somehow ended up as overworked, disposable employees in open-plan office farms. The owners changed, the IDEs got fancier, but the deal stayed the same: build the machine, never own it.
You upskill constantly just to not fall behind. You work nights “for the launch.” You get gaslit into thinking stock options in a private company are equity. Then you’re laid off via email - while the founders post yacht selfies.
And the worst part? Most of you still believe you’re lucky. That’s the grift: teach engineers to think they’re “not like other workers” - then extract surplus value just like any other industry has done since the 19th century.
Developers are poorly paid - not because the salary is low, but because the upside is stolen. You created the wealth, they captured it.
If you prompt it correctly and rigidly, and review everything it does large or small, it's a 10x tool for grinding through some very hard problems very quickly.
But it can very easily lead you into overconfidence and bloat. And it can "cheat" without you even realizing it.
Claude Code is best used as an augmentation, not automation tool. And it's best that the tool makers and investors realize that and stop pretending they're going to replace programmers with things like this.
They only work well when combined with a skilled programmer who can properly direct and sift through the results. What they do allow is the ability to focus on higher level thinking and problems and let the tool help you get there.
I completely agree when using synchronous tools like Windsurf and Cursor. This is why I much prefer the async workflow most of the time. Here you get a chance to think about how the AI should be constrained in order to have the highest probability of a "one shot" PR, or at least something that would only require a few edits. I spend a lot of time on the AGENTS.md file, as well as thinking a lot about the prompt I am going to use. I sometimes converse with ChatGPT a little on the prompt, as the feedback loop is very fast. Then, just send it and wait ~5 minutes for completion.
I still use synchronous tools sometimes for UI and other things where it is hard to specify up front and iteration is required. But realistically, I use async 50:1 over sync.
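For illustration, an AGENTS.md for this kind of async workflow might constrain the agent along these lines (the file contents below are invented, not from the comment):

```markdown
# AGENTS.md (illustrative example)

## Build & test
- Run the full test suite and the linter before declaring a task complete.
- Never skip or delete a failing test to make the suite pass.

## Constraints
- Keep each PR small and focused; split larger work into follow-ups.
- Do not add new dependencies without flagging them in the PR description.

## Style
- Match the existing module layout; no new top-level directories.
- Prefer editing existing files over creating parallel implementations.
```

The point is front-loading the constraints so a single unattended run has the best chance of producing a mergeable result.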
Which will be the case is the interesting question.
No. They would become extremely useful and more magical. Because instead of weird incantations and shamanic rituals of "just one more .rules file, bro, I swear" you could create useful reproducible working tools.
The interesting thing is going to be how this all holds up in a year or five. I suspect it's going to be a lot of rewriting the same old shit with LLMs again and again because the software they've already made is unmaintainable.
I didn't get into software to labour like a factory worker day in, day out. I like to solve problems and see them deliver value over years and even decades, like an engineer.
I guess you could use them like that, but you'll do much better if you try to get an understanding of the problem beforehand. This way you can divide the problem into subtasks with reasonably clear descriptions that Claude will be able to handle without getting lost and without needing too many corrections.
Also, you'll still need to understand pretty much every task that Claude has implemented. It frequently makes mistakes or does things in a suboptimal way.
When you use AI properly it's a great tool, it's like an IDE on steroids. But it won't take away the need to use your brains.
I've seen little organic praise from high-profile programmers. There aren't many who offer it, and once you search, they all turn out to at least work on a commercial "AI" application.
You could argue that he makes money off AI with his newsletter and whatnot, so he does stand to gain something, but it's a lot less than the executives and investors who've filled the news.
[1] https://simonwillison.net/ [2] https://en.wikipedia.org/wiki/Django_(web_framework)
But it is amazing how many people really are playing it like a slot machine and hoping for the best.
That's just people and as always some will better utilise new tech than others. On aggregate I think everyone still wins
The answer to this is nuanced. You can summon ~30k LoC codebases using CC without much fanfare. Will it work? Maybe, in some of the ways you were thinking -- it will be a simulacrum of the thing you had in your mind.
If something is not working, you need to know the precise language to direct CC (and _if you know this language_, you can use CC like a chisel). If you don't know this language, you're stuck -- but you'd also be stuck if you were writing the thing _by hand_.
In contrast to _writing the thing by hand_, you can ask CC to explain what's going on. What systems are involved here, is there a single source of truth, explain the pipeline to me ...
It's not black and white in the way I experienced this paragraph. The "details" you need to know vary across a broad spectrum, and excellent wizards can navigate throughout the spectrum, and know the meta-strategies to use CC (or whatever agentic system) to unstick themselves in the parts of the spectrum they don't know.
(much of that latter skill set correlates with being a good programmer in the first place)
Many of my colleagues who haven't used CC as much seem to get stuck in a "one track" frame of mind about what it can and cannot do -- without realizing that they don't know the meta-strategies ... or that they're describing just one particular workflow. It's a programmable programming tool, don't put it into a box.
I compared vibe coding to gambling in one of my recent blog posts and thought that metaphor was slightly uncharitable, but I didn't expect "slot machine" to actually now be the term of art.
With vibe coding, I suspect this is a group that has adapted really well (alongside hobby coders and non-coders). The thrill comes from problem solving, and when you can try out a solution quickly and validate it, it is just addictive.

The other side is how much open source has grown: there are OSS libraries for just about everything. (A personal experience: implementing a Cmd bar (like Linear's) in React when I was just learning. It took me a week or so, and then I swapped in an OSS component for comparison. It was super smooth, but I did not know its assumptions. In production I will prefer that, and I don't always have time to learn and implement from scratch.) We see this with LangChain etc. in LLMs, and with other agentic frameworks as well. The shift is not towards less code but towards getting the thing to work faster. Claude Code accelerates that exponentially.
It's why we get so addicted to gambling. We're built for it because a lot of legitimate real things look like gambling.
You still have to really guide the AI, none of this is automatic. Yet I no longer feel the mega joys I once felt hand building something and watching it work correctly. The thrill is gone! Don't know if this is good or bad yet. I don't miss the plumbing bullshit. I do miss the joy.
I don't think that's true. I'm wondering if the author has tried Claude 4 Opus.
* How 'bout I deploy my "digital twin" to attend the meetings for me. That'll show 'em!
You cannot write everything using LLMs, you cannot maintain hodgepodge LLM codebases, but also you might want a break from writing scaffolding code or simple functions.
There’s a reason intermittent rewards are so intoxicating to naturally evolved brains: exploiting systems that give intermittent rewards is a great resource acquisition strategy.