AI is sometimes a productivity booster for a dev, sometimes not. And it's unpredictable when it will and won't be. It's not great at giving you confidence signals when you should be skeptical of its output.
In any sufficiently complex software project, as much of the development is about domain knowledge, asking the right questions, balancing resources, guarding against risks, interfacing with a team to scope and vet and iterate on a feature, managing resources, analyzing customer feedback, thinking of new features, improving existing features, etc.
When AI is a productivity booster, it's great, but modern software is an evolving, organic product, that requires a team to maintain, expand, improve, etc. As of yet, no AI can take the place of that.
If you say AI does 0% of your work, I'd say you're either a genius, behind the curve or being disingenuous.
Do I use LLMs as an alternative to Googling? Absolutely. That doesn't mean AI is doing my job. Google and Stack Overflow also do 0% of my job. It's great as a reference tool. But if you're going to be that pedantic, we've got to count any help I receive from any human or tool as doing some % of my job. Do I count the open source software I build on? Do I count Slack as doing some % of my job since I don't have to go into the office and interface with everyone face-to-face? Does Ford get some of the credit for building the vehicle that gets me to the office?
Have I used a meeting transcription tool? Occasionally, yeah. That doesn't mean it does any part of my work. My job was never to transcribe meetings. Do I use it to brainstorm? No, I've found it's fairly useless for that. Do I use it to create presentations? No, I just write my slides the old fashioned way.
And don't get me started on the "time savings" for boiler plate documentation. It messes up every time.
AI was doing 0% of my work 10 years ago too, why should I be any less effective without it now?
You think I'm behind the curve because I'm not buying into the AI craze?
Ok. What's so important about being on the curve anyways, exactly? My boss won't pay me a single cent more for using AI, so why should I care?
There are reasons that seasoned OSS developers reject AI PRs: https://news.itsfoss.com/curl-ai-slop/ (like the creator of curl). Additionally, the only study to date currently measuring the impact on LLMs on experienced developers found a modest 19% decline in productivity when using an LLM for their daily work.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
Now we can ponder behind the reasons that the study showed experienced developers get a decrease of productivity, and you anecdotally experience a boost of "productivity", but why think about things when we can ask an LLM?
- experienced developers -> measured decrease of productivity
- you -> perceived increase of productivity
Here is what ChatGPT-5 thinks about the potential reason (AI slop below):
"Why You Might Feel More Productive
If senior developers are seeing a decline in productivity, but you are experiencing the opposite, it stands to reason that you are more junior. Here are some reasons why LLMs might help junior developers like you to feel more productive:
Lower Barrier to Entry
- LLMs help fill in gaps in knowledge—syntax, APIs, patterns—so you can move faster without constantly Googling or reading docs.
- Confidence Boost You get instant feedback, suggestions, and explanations. That can make you feel more capable and reduce hesitation.
- Acceleration of Learning You’re not just coding—you’re learning as you go. LLMs act like a tutor, speeding up your understanding of concepts and best practices.
- More Output, Less Friction You might be producing more code, solving more problems, and feeling that momentum—especially if you are just starting your coding journey."
Because those projects are mature and have a very high bar for contributions, so aren't a good fit for AI?
Opening a PR on Linux is very different from opening a PR on a company's non-critical-path CRUD service.
It’s mostly seen as a force multiplier. Our platform is all Java+Spring so obviously the LLMs are particularly effective because it’s so common. It hasn’t really replaced anyone though, also because it’s Java+Spring so most of our platform is an enormous incomprehensible mess lol
Agreed that it’s inherently a bunch of barely comprehensible slop that the AI slop probably fits right in, lol.
My organization would still hire as many software engineers as we could afford.
- Stack Overflow has to be actually dead at this point. There's no reason to go there, or even Google, anymore.
- Using it for exploratory high level research and summarization into unfamiliar repos is pretty nice.
- Very rarely does AI write code that I feel would last a year without needing to be rewritten. That makes it good for things like knocking out a quick script or updating a button color.
- None of them actually follow instructions e.g. in Cursor rules. Its a serious problem. It doesn't matter how many times or where I tell it "one component per file, one component per file", all caps, threaten its children, offer it a cookie, it just does whatever it wants.
I wonder if we are going to pay for that, as a society. The number of times I went there, asking some tricky question about a framework, and have the actual author or one of the core contributors answer me was astonishing.
I think that a certain kind of craftsmanship will be lost.
I used to answer a lot of the basic questions just to help others as I've felt I had been helped, the moderation shift applying more and more rules started to make me feel unwelcomed to ask questions and even to answer. I do understand why it happened, with the influx of people trying to game the platform to show off in their resumés they were at the "top" of whatever buzzword was hot in the industry at the time but it still affected me as a contributing user out of kindness.
By 2018 I would not even login to vote or add comments, and I feel it was already going on a slow downhill path, and LLMs will definitely kill it.
We will definitely suffer, SO has been an incredible resource to figure out things not covered well in documentation, I remember when proper experts (i.e. maintainers of libraries/frameworks) would jump in to answer about a weird edge case, or clarify the usage of a feature, explain why it was misuse, etc.
Right now I don't see anything else that will provide this knowledge to LLMs, in 10-20 years time there will be a lot missing in training datasets, and it will be a slow degradation of knowledge available in the open for us all to learn from.
if they're not in stack overflow they have to be in reddit, or a discord channel, or something.
or else the AI has to be able to discern answers from first principles themselves, which most LLMs can't -- they're language model.
If, like the meme, you just copied from SO without using your brain then yes AI is comparable.
If you appreciate SO for the discussion (peer review) about the answers and contrasting approaches sometimes out of left field, well good luck because AI can't and won't give you that.
They used to, but in the past few years all I've ever gotten on the site is downvotes and close flags. No one is interested in actually answering questions, let alone discussing them: the site has trained everyone to become pedantic bureaucrats instead.
Now that we can use Copilot, we have a new CIO and I don’t hear about it so much. There is still some AI hype, but it’s more about how it’s being used in our products, rather than how to use it internally to do the work.
Apparently sometime in the next year we’re getting a new version of Jira with some AI that can do user stories on its own, but I don’t see that changing much of anything.
The bottleneck has rarely been the actual writing of code, it’s been people making decisions and general bureaucracy. AI isn’t solving that. Copilot has also not impressed anyone on my team. As far as the code we work on, it’s pretty bad. There are a few niche things it helps with, mostly writing queries to pull values out of complex json. That saves a little time, but hardly 30-50%. More like 1-2%.
Management stopped giving us new people, while pressuring us to do more, for many years now. This was a trend long before AI and I haven’t noticed any major change. I’d say it’s been this way for over 10 years now, ever since they had the realization that tasks could be automated.
in practice copilot is only useful for advanced search and it is really only useful for surface answers. Was a time saver for researching common tech I'm not strong in, like if certain Cisco platforms support X, or if there are simple ways to do Y in MongoDB, etc.
for example we had it search for 9-10 CVEs to see if any were exploited in the wild. Copilot got most of them wrong. ChatGPT got most of them right... except for one or two. But now I'm not sure I can trust anything and I'm checking them all from scratch anyway.
Both were confident in ways my security intern was not, and my intern was able to tell me "I have no idea about [CVE]".
In the psychological sense, I'm actually devastated. I'm honestly struggling to be motivated to learn/create new things. I'm always overthinking stuff like:
- "Why would I learn mobile app dev if in the near future there will be an AI making better UIs than me?" - "Why would I write a development blog?" - "Why would I publish an open-source library on GitHub? So that OpenAI can train its LLM on it?" - "Why would I even bother?"
And then, my motivation sharply drops to zero. What I've been up to lately is playing with non-tech related hobbies and considering switching careers...
- A lot of our code base is very specialized and complex, AI still not good enough to replace human judgement/knowledge but can help in various ways.
- Not yet clear (to me anyways) how much of a productivity gain we're getting.
- We've always had more things we want to do than what we could get done. So if we can get more productivity there's plenty of places to use it. But again, not clear that's actually happening in any major way.
I think the jury is still out on this one. Curious what others will say here. My personal opinion is that unless AI gets smart enough to replace more experienced developer completely, and it's far from that, then I'm quite sure there's not going to be less software jobs. If AI gets to a point where it is equal to a good/senior developer we'll have to see. Even then it might be that our jobs will just turn into more managing AI but it's not a zero sum game, we'll do more things. Superintelligence is a different story, i.e. AI that is better than humans in every cognitive aspect.
So to answer the original question of how my morale is? It is non-existent. I am quite open to fixing that, but haven't seen much that indicates now is the right time to search for something new.
I am under MUCH more pressure to deliver more in shorter periods of time, with just me involved in several layers of decision making, rather than having a whole team. Which may sound scary, but it pays the bills. At one company I contract with, I now have 2 PMs; where I am the only dev on a production app with users, shipping new features every few days (rather than weeks).
It feels more like performance art, than it even feels like software development at this point. I am still waiting for some of my features to come crashing prod down in fantastic fashion, being paged at 3am in the morning; debugging for 12 hours straight because AI has built such a gigantic footgun for me.... but it has yet to happen. If anything I am doing less work than before - being paid a little more, and the companies working with me have built a true dependency on my skills to both ship, maintain and implement stuff.
Are you using agentic features, given that you have not just one but two PMs?
I don't even use completions, really just agent mode. I do planning, wireframing, creating specs all with agents. Even small MVPs created in 5 minutes, deployed in 10, during a meeting to just brainstorm. As for the models. Go with Claude 3.5 or 4.0, GPT5. Use sequentialthinking and Taskmaster MCP. I could write a book about it... but the best way to go about it is to dive into, get frustrated, push through and then learn it the hard way. I started delegating a lot of my programming work the day ChatGPT came out; just copy and pasting, and since that day, my reliance on AI has just been increasing, and I have been getting better at it (and now I am at this stage.. with 2 PMS).
Not OP, but regarding your situation, I suggest moving to an agentic solution instead of “copy-pasting to GPT” — this will boost your coding productivity. There are several tools available, and to each their own, but try out Claude Code.I’m wondering how did you land your current gigs?
Thank you.
So I post on LinkedIn & Reddit, and I am not doing it in a spammy way. Do some outreach through LinkedIn and post on YCombinator on my personal account on the monthly who's hiring/freelancing posts. But a lot of the traffic I get comes from organic search and reddit -> clients. I had a client who told me they found me on Twitter; but I never even posted there, so someone reposted an article.
Tired of leadership who think productivity will raise.
Tired of AI summaries sent around unreflected as meeting minutes / action items. Tired of working and responding on these.
Overall, it sped up my learning greatly, but I had to verify everything it said and its code was a mess. It's a useful tool when used appropriately but it's not threatening my job anytime soon.
So in the end I use it fairly often when setting up new things (new infra, new files, new tools, new functions, etc). Although the time it saves is not coding time, but googling/boilerplating time. But in practice I work in a well established project where I rarely do this kind of thing (I don't think I even created a new file in the project last week).
If I am already familiar with the tool/library I almost always skip it (occasionally autocomplete is useful, but I could easily live without it). Occasionally I used for small self-contained snippets of code (usually no more than a single function). Last one I remember was some date formatting code.
Doesnt mean it wont get there - just that it isnt there yet
Letting it condense something like a paper and checking it afterwards might be a good learning exercise.
And yes, I did test ChatGPT, claude, cursor, aider... They produce subpar code, riddled with subtle and not so subtle bugs, each of my attempts turned out to be a massive waste of time.
LLM is a plague and I wish it had never showed up, the negative effects on so many aspects of the world are numerous and saddening.
I'm wearing glasses that tell me who all the fucking assholes and impostors are.
the experienced devs with an ability to do the deep dives become superheros since they can wade in and unfuck situations.
As of AI, I've been asked to test a partial rewrite of the current UI to the new components. For a few weeks I've been logging 10+ bugs a day. The only explanation I have, they use AI tool to produce nicely looking code which does not work properly in a complex app.
I'm Mexican (in Mexico) and I've seen this firsthand. There may be some truth to it, but soon enough these new companies will find out what several others found in the 90s (when the first wave of tech outsourcing came): The bottleneck is in communication and culture, not performance.
Anyway, point is, in a way AI is pushing the outsourcing trend a bit.
In terms of hiring- I co-own a small consultancy. I just hired a sub to help me while on parental leave with some UI work. AI isn’t going to help my team integrate, deploy, or make informed decision while I’m out.
Side note, with a newborn (sleeping on me at this moment), I can make real meaningful edits to my codebase pretty much on my phone. Then review, test, integrate when I have the time. It’s amazing, but I still feel you have to know what you are doing, and I am selective on what tasks, and how to split them up. I also throw away a lot of generated code, same as I throw away a lot of my first iterations, it’s all part of the process.
I think saying “AI is going X% of my work” is the wrong attitude. I’m still doing work when I use AI, it’s just different. That statement kind of assumes you are blindly shipping robot code, which sounds horrible and zero fun.
Because of the accumulated knowledge in these abstraction layers and because of the abstraction itself resulting in readable and maintainable code.
Yes you can move the abstraction one level up, but you don't control it if you nor the LLM meet the level of accumulated knowledge that is embedded in this abstraction. Let alone future contributors to your codebase.
Of course it is all depending on context and there is no one-fits-all strategy here.
For a good example, just look at how Google does "support." It's just robots doing shoddy work and screwing over people. Can better compensated and organized human support team do better? Of course, but the rich execs don't want to spend a penny to help people if they can get away with it.
It's definitely the most consistent topic but there's a lot of other stuff.
I agree that there's a lot of other stuff though, even on the worst days.
When code autocomplete first came out everyone thought software engineering would become 10x more productive.
Then it turned out writing code was only a small part of the complex endeavor of designing, building, and shipping a software system.
On the other hand I find it super useful for debugging. I can paste 500k tokens into Gemini with logs and a chunk of the codebase and ask it what’s wrong, 80% it gets it right.
Neural Networks have the long tail risk problem. In that they are good at returning the top 20% of answers to you. But they are pretty poor at returning the less common 80% answers.
I have seen the different tools return the wrong code, code with security issues or code that was open source that is not legal to use in your project without open sourcing it.
The big tech companies are selling it as AGI and this massive replacement tool to get more money. But really it is just gaining a few percentage points of productivity.
Most modern development tools can do the same. Like if you are able to build a app with some low code method. I have not seen a AI build app that i could not build in a low code system in the same amount of time.
All companies will end up with just one employee. If you don't agree with this, you don't know how to prompt.
You might want to be less credulous around LLM vendor marketing. Outside of possibly the blogspam/ultra-low-end journalism industry, and maybe low-end translation, LLMs aren’t doing 30-50% of anyone’s work.
I don't really see it replacing us in the near future though, it would be almost useless if I wasn't there to guide it, write interfaces it must satisfy, write the tests it uses to validate its work etc. I find that projects become highly modularised, with defined interfaces between everything, so it can just go to work in a folder satisfying tests and interfaces while I work on other stuff. Architecting for the agents seems to lead to better design overall which is a win.
I'm just writing crud apps though, I imagine it's less useful in other domains or in code bases which are older and less designed for agents.
My next experiment is designing a really high level component library to see if it can write dashboards and apps with. It seems to struggle with more interactive UI's as opposed to landing pages.
I use AI for taking info and restructuring it for me. Rewrite a Linear ticket in a proper format, take an info dump and turn it into a spike outcome or ADR doc that I can then refine. I also like it for the rote stuff I haven't memorized the structure of: building OpenSearch queries, writing boto3 snippets, etc. Other than that, my job is the same as it was pre-LLM hype. And from talking to other engineers, it seems that my experience is fairly standard.
They spend more time spotting and fixing bugs and basically have been feeling frustrated.
It's also annoying for the team in general. Projects that would otherwise take a couple of days have sometimes taken over 2 weeks, and it is hard to predict how long something will take. That adds a lot of pressure for everyone.
Mostly of having to try and explain to people why having an AI reduce software development workload by 30-50% doesn't reduce headcount or time taken similarly.
Turns out, lots of time is still sunk in talking about the features with PM's, stakeholders, customers etc.
Reducing the amount of time a dev NEEDS to spend doing boilerplate means they have more time to do the things that previously got ignored in a time poor state, like cleaning up tech debt or security checks or accessibility etc etc
I'm tired of having to try and explain that AI isn't remotely reducing my workload by 30-50%, and in fact it often probably slows me down because the stupid AI autocomplete gets in the way with incorrect suggestions and prevents me from getting into any kind of flow
If people can seriously have an AI do 50% of their work, that's usually a confession that they weren't actually doing real work in the first place. Or, at least, they lacked the basic competence with tools that that any university sophomore should have.
Sometimes, however, it is instead a confession "I previously wasn't allowed to copy the preexisting solutions, but thanks to the magic of copyright laundering, now I can!"
Strongly agree here. I am extremely skeptical of anyone reporting this kind of productivity gain.
So generally the people getting the most use out of LLMs are people who are using these higher levels of abstractions. And I imagine we will be building more abstractions like HTML to get more use out of it.
AI helps here and there but honestly the bottleneck for output is not how fast the code is produced. Task priorization, lacking requirements, information silos and similar issues cause a lot of 'non-coding work' for developers (and probably just waiting around for some who don't want to take initiative). Also I think the most time consuming coding task is usually debugging and AI tools don't really excel at that in my experience.
That being said, we are not hiring at the moment but that really doesn't have anything to do with AI.
I don't know any developers who use AI to that large extent.
Myself am mostly waiting for the hype to die out so we can have a sober conversation about the future.
Google AI summaries and ChatGPT have almost halved my traffic. They are a scourge on informational websites, parasites.
It’s depressing to see the independent web being strangled like that. It’s only a matter of time before they become the entire internet for many, and then the enshittification will be more brutal than anything before it.
I will be fine, but I have to divert 6-10 months on my life to damage control[0] instead of working on what matters to my audience. That happened by chance; other websites won’t be so lucky.
So yeah, morale is low. It feels like a brazen consolidation play by big tech, in all aspects of our lives.
On the bright side, it does make coding a bit easier. It spits out small bits of code and saves me a lot of API docs round trips. I can focus on business logic instead of basic code. AI is also a phenomenal writing tool. I use it for trying different phrasing options, reverse word and express search, and translation nuances. It does enable me in that way.
We were (finally!) given the go for even using AI just before summer vacation this year, and I was very excited, having been obsessed with AI 20 years ago. I was still excited quite a while, until I slowly understood all the limitations that come with LLMs. We can not trust them, and this is one of the fundamental problems: verifying things takes a long time, sometimes its even faster just writing the code yourself.
We do have non-core-product tasks that can greatly benefit from AI, but that is already a small part of our job.
I did find two areas where LLMs are very useful: generatin documentation from code (mermaidjs is useful here) and parsing GDB output!
Seriously, parsing GDB output was like an epiphany. I was for real blown away by it. It correctly generated a very detailed explanation of what kind of overwriting was happening when I happened to use a wild pointer. Its so good at seeing patterns, even combining data from several overwritten variables and parsing what was written there. I could have done it myself, but I seldom do such deep analysis in GDB and it did it in literally 10 seconds. Sadly, it was not that terribly useful this time, but I do feel that in the future GDB+AI is a winning concept. But at the same time, I spend very little time in GDB per year.
With AI, the future seems just so much worse for me. I feel that productivity boost will not benefit me in any way (apart from some distant trickle down dream). I expect the outsource, and remote work in general to be impacted negatively the most. Maybe there's going to be some defensive measures to protect domestic specialists, but that wouldn't apply to me anyway unless I relocate (and probably acquire citizenship).
>Is your company hiring more/ have they stopped hiring software engineers
Stopped hiring completely and reduced workforce, but the reasons stated were financial, not AI.
>Is the management team putting more pressure to get more things done
with less workforce, there is naturally more work to do. But I can't say there is a change in pressure, and no one forces AI upon you.
Even so, I have to constantly hound the AI to write concise code, to reuse code, to consolidate duplicate code blocks. When I ask it to remove the useless comments, it also removes the useful comments, so I have to goad it back into adding back individual helpful comments.
I had AI write some Python unit tests and it showed me how to mock, which I had never done in Python. That's great! But when I examined the tests, they were so "white box" and brittle that almost any change in implementation would break the tests.
When coding in a familiar language (C++), I tried turning on the auto-AI assistant (for our one project where security rules allowed it) and while I was impressed that it would auto-complete whole blocks of text based on my actual code base, not once was I able to accept those blocks as-is.
So for me at this point AI is at best a net +2% productivity improvement, though I surely have lots to learn about other ways in which it could be useful.
It can kickstart new projects to get over the blank page syndrome but after that there's still work, either prompting or fixing it yourself.
There are requirements-led approaches where you can try to stay in prompt mode as much as possible (like feeding spec to a junior dev) but there is a point where you just have to do things yourself.
Software development has never been about lines of code, it has always required a lot of back and forth discussion, decisions, digging into company/domain lore to get the background on stuff.
Reviewing AI code, and lots of it, is hard work - it can get stuff wrong when you least expect it ("I'll just stub out this authentication so it returns true and our test passes")
With all that in mind though, as someone who would pay other devs to do work I would be horrified if someone spent a week writing unit tests that I can clearly see an AI would generate in 30 seconds. There are some task that just make sense for AI to do now.
AI is an irrelevant implementation detail, and if the pace of your work is not determined by business needs but rather how quickly you can crank out code, you should probably quit and find a real job somewhere better that isn't run by morons.
No-Code Low-Code Vibe-Code
I have seen it all before
Great for 0 to 0.1
Soon as you have real domain problems to solve and any complexity the mess that is being created to create a solution is going to need fixing.
I don't think I'm being paid to 1:1 convert a dumb crud app or rest api from one language to another, although of course you do that once a decade in a typical job.
As a person I'm increasingly worried about the consequences of people using it, and of what happens when the bubble bursts.
The main thing that changed is that the CTO is in more of a "move fast, break things"-mood now (minus the insane silicon valley funding) because he can quickly vibe-code a proof-of-concept, so development gets derailed more often.
It is doing 0% of my work and honestly I am tired of 80% of HN posts being about it in one way or another.
I just simply don't get it. Productivity delta is literally negative.
I've been asking to do projects where I thought "oh, maybe this project has a chance of getting an AI productivity boost". Nope. Personal projects all failed as well.
I don't get it. I guess I'm getting old. "Grandpa let me write the prompt, you write it like this".
I find it wastes my time more than it helps
Everyone insists I must be using it wrong
I was never arrogant enough to think I'm a superior coder to many people, but AI code is so bad and the experience using it is so tedious that I'm starting to seriously question the skills of anyone who finds themselves more productive using AI for code instead of writing it themselves
In day to day work I could only trust it to help me with the most conventional problems that the average developer experiences in the "top N" most popular programming languages and frameworks, but I don't need help with that because search engines are faster and lead to more trustworthy results.
I turn to LLMs when I have a problem that I can't solve after at least 10 minutes of my own research, which probably means I've strayed off the beaten path a bit. This is where response quality goes down the drain. The LLM now succumbs to hallucinations and bad pattern-matching like disregarding important details, suggesting solutions to superficially similar problems, parroting inapplicable conventional wisdoms, and summarizing the top 5 google search results and calling it "deep research".
perhaps 1% of the time I've asked an LLM to write code for me, has it given me something useful and not taken more time than just writing the thing myself.
It has happened, but those instances are vastly outnumbered by it spewing out garbage that I would be professionally embarrassed to ever commit into a repo, and/or me repeatedly screaming at it "no, dumbass, I already told you why that isn't a solution to the problem"
It's not you getting old (although we all are), it's that you are probably already experienced and can produce better and more relevant code than the mid-to-low quality code produced by any LLM even with the best prompting.
Just so we are clear, in the only current actual study measuring productivity of experienced developers using an LLM so far, it actually led to a 19% decline in productivity. So there is a big chance that you are an experienced dev, and the ones that do experience a bump in productivity are the less experienced devs.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
I also want to mention that you are not alone. There are plenty of us, see here:
- Claude Code is a Slot Machine https://news.ycombinator.com/item?id=44702046
- GPTs and Feeling Left Behind: https://news.ycombinator.com/item?id=44851214
- I tried coding with AI, I became lazy and stupid: https://news.ycombinator.com/item?id=44858641
The current LLM hype reminds me of the Scrum/Agile hype where people could swear that it works for them and it if didn't for you, you were not following some scrum ritual right. It's the same with LLMs, apparently you are not asking nicely enough and giving writing 4000 lines of pseudocode and specs to produce 1 line of well written code. LLM coding is the new Scrum: useful to an extent and in moderation, but once they become a cult, you better not engage and let it die out on its own.
There will be a whole industry of prompting "experts", prompting books, the same as there were different crops of SCRUM, SAFe, and who knows what else. All we can do is sit on the sidelines and laugh.
About 10% of the devs in my location are using genAI as a dev tool to some degree, but for most of those devs, that degree is pretty small.
Ironically(?), our primary products are deep learning systems, so this is a very AI-savvy group.
It's nice as a passive thing like meeting transcription, or extra brainstorm head but that's about it.
It's helping with a lot of toil work that used to be annoying to do, PMs can do their own data analysis without having to pull me out of deliverable tasks to craft a SQL query for something and put it up on a dashboard; I don't need to go copy-paste-adapt test cases to cover a change in some feature, I don't need, most times, to open many different sections of documentation to figure out how a library/framework/language feature should be used.
It's a boost to many boring tasks but anything more complex takes as much work to setup and maintain the environment for a LLM to understand the context, the codebase, the services' relationships, the internal knowledge, the pieces of infrastructure, as it does for me to just do the work.
I've been hybridising as much as I can, when I feel there's something a LLM would be good at I do the foundational work to set it up, and prompt it incrementally to work on the task so I can review each step before it goes haywire (which it usually does), it takes effort to read what's been generated, explain what it did wrong so it can correct course, and iteratively build 80% of the solution, most times it's not able to completely finish it since there's a lot of domain knowledge that isn't documented (and there's no point in documenting since it changes often enough). Otherwise it's been more productive to just do the work myself: get pen and paper to think through the task, break it down after I have a potential solution, and use LLMs to just do the very boring scaffolding for the task.
Does it help me to get unstuck when there's some boring but straightforward thing to do? Absolutely. Has it ever managed to finish a complex task even after being given all the context, setup the Markdown documentation, explain the dependencies, the project's purpose, etc.? No, it hasn't, not even close, in many cases it gave me more work to actually massage the code it wrote into something useful than if I had done it myself. I'm tired of trying the many approaches people seem to praise about and see it crumble, I spent a whole week in 2 of our services writing all the Markdown files, iterating through them to fix any missing context it could need, and every single time it broke down at some point while trying to execute a task so, for now, I just decided to use it as a nice tool and stopped getting anxious about "missing out".
Cursor and Claude Code are by themselves awful engineers, who at best just save me a lot of keystrokes.
Like everyone said, my productivity boost is in 1-4% range and not more than that.
The difference here is that LLMs do have some genuine use cases, it's just they are far away from the hype.
with AI we can accomplish more, it's arguably easier to be a "10x" engineer. but to some extent with the cost of "quality".
i think AI is here to stay and i'm preparing for it. currently trying to find best way of utilizing AI in my work to be more efficient (i tried claude code back in nov24, to cursor pro, trae pro, now back to claude code). maximizing tokens. challenging myself with token limits but makes the best out of it.
but also to mitigate being "dumb" i tried to enjoy learning some low level programming lang while building something out of it (currently learning to build TUI)
My guess is that people who find the most benefit from AI are the people who have the most questions to ask, which aren’t the most productive developers either way.
Things change fast and the tech got way better as I moved from CoPilot to Cursor to finally Claude Code. But at the end of the day the amount that I understood the code I was producing versus how much I was writing was not worth it.
People talk a lot about "what you merge to production is still your code" and I agree with that. To that point whether its a shared experience or just how I'm wired I would need to put way more time in to trying to deeply understand the code than the time it took to write a great prompt that Claude could follow correctly.
We already had a script for boilerplate writing and honestly boilerplate is kinda fun to write sometimes if you have a nice keyboard. The annoying boilerplate is the more nitty gritty stuff like error handling in Go. So I guess I still support the AI tab, that's pretty nice.
I felt myself becoming "dumber" and my productivity increase wasn't worth it. Same reason I stay off TikTok and Instagram even though they sometimes have interesting reels that are actually informative.
Basically, no one cares.
Hiring is as haphazard and inadequate as it has been in the last 25 years, no change there.
AI usage is personal, widespread and on a don't ask don't tell basis.
I use it a lot to:
- Write bullshit reports that no one ever reads.
- Generate minimal documentation for decade old projects that had none.
- Small, low stakes, low complexity improvements, like when having to update this page that was ugly when someone created it in 1999, I'll plop it on aistudio to give it a basic bootstrap treatment.
- Simple automation that wasn't worth it before: Write me a bash script that does this thing that only comes up twice a year but I always hate.
- A couple times I have tried to come up with more complex greenfield stuff to do things that are needed but management doesn't ever acknowledge, but it always falls apart and starts needing actual work.
Morale is quite crappy, as ever, but since some of the above feels like secretly sticking it to The Man, there are these beautiful moments.
For example when the LLM almost nails your bimonthly performance self report from your chat history, and it takes 10 minutes instead of 2 hours, so you get to quietly look out of the window for a long while, feeling relaxed and smug about pocketing some of the gains from this awesome performance improvement.
Also from my experiences with agents, and given that I have been around computers since 1986, I can clearly see where the road is going.
Anyone involved with software engineering tasks, should see themselves becoming more of a technical architect for their coding agents, than raw coding, just like nowadays while Assembly is a required skill for some fields, others can code without ever learning anything about it.
Models will eventually become more relevant than specific programming languages, what is worth discussing X or Y is better, if I can generate any that I feel like asking for. If anything newer languages will have even harder time getting adopted, on top of everything that is expected, now they also have to be relevant for AI based workflows.
I'm a weaver using a loom and I miss weaving.
what sucks though is that its super inconsistent whether the thing is gonna throw an error and ruin the flow, whether thats synchronous or async.
I like that it makes it easy to learn new things by example.
I don't like that I have no idea if what I'm learning is correct (or at least recent / idiomatic), so everything I see that's new, I have to validate against other resources.
I also don't really know if it's any different from "tutorial hell".
https://www.linkedin.com/posts/georgzoeller_how-stupidly-eas...
Part of "AI-native" means being able to really focus on how we can improve our Product to lessen upfront burden on users and increase time-to-value. For the first time in a while, I feel like there is more skill needed in building an app than just doing MVC + REST + Validation + Form Building. We focus on the minimum data needed for each form upfront from our users, then stream things like Titles, Icons, Descriptions, etc in a progressive manner to reduce form filling burden on our users.
I've been able to hire and mentor Engineers at a quicker pace than in the past. We have a mix of newer and seasoned Engineers. The newer Engineers seem to be learning far quicker with focused mentoring on how to effectively prompt AI for code discovery, scaffolding, and writing tests. Seasoned Engineers are able to work across the stack to understand and contribute to dependencies outside of their main focus because it's easier to understand the codebase and work across languages/frameworks.
AI in development has proven useful for some things, but thoughtful architecture with skilled personnel driving always seems to get the best results. Our vision from our product is the same, we want it to be a force multiplier for skilled Platform Engineers.
Baffled because there are too many rank-and-file tech workers who seem to think AI exciting/useful/interesting. It’s none of those things.
Just ask yourself who wants AI to succeed and what their motivations are. It is certainly not for your benefit.
The AI will replace us all in 2028! For real this time.
But before that, all the mid-managers will be replaced first, then the tech writers, the QA people, the PM, the...
The devs are closing the lights behind...
First you augment, then replace.
Anyone doing simple web design/development work is fairly easily replaceable at this point.
That 50% is unit tests.
Moving fast in the beginning always has caveats.
In the meantime I'm doubling down on math and theory behind AI.
On one hand, AI helps me and my team ideate and prototype quickly. On the other, it’s replacing the fun part of coding with the annoying part: troubleshooting its thought process and fixing its generated code.
I’d say it’s still too early to have a concrete opinion.
I’m looking for a way out of tech because of it.
I still don’t see this, if only for the Managerial instinct for ass-covering.
If something really matters and a prod showstopper emerges, can those non-technical supervisory managers be completely, absolutely, 100% sure the AI can fix the code and bring everything back up? If not, the buck would surely stop with them and they would be utterly helpless in that situation. The Board waiting on conference call while they stare at a pageful of code that may as well be written in ancient Sumerian.
I can see developers taking a higher level role and using these tools, but I can’t really see managers interfacing directly with AI code generation. Unless they are completely risk tolerant, and you don’t get far up the greasy pole with those tendencies.
I expect a massive bubble burst and recession in the next year or so, but honestly I cannot wait. I fear it will be quite a while before hiring picks up again, unfortunately.
I keep thinking of a friend in finance in pre-crash 2008 who could see the emperor had no clothes but was derided by his enthusiastic coworkers every time he opened his mouth.