Adoption of AI at a FOMO corporate pace doesn't seem to include this consideration. They largely want your skills to atrophy as you instead beep boop the AI machine to do the job (arguably) faster. I think they're wrong and silly and any time they try to justify it, the words don't reconcile into a rational series of statements. But they're the boss and they can do the thing if they want to. At work I either do what they want in exchange for money or I say no thank you and walk away.
Which led me to the conclusion I'm currently at: I think I'm mostly just mourning the fact that I got to do my hobby as a career for the past 15 years, but that’s ending. I can still code at home.
I saw something similar in ML when neural nets came around. The whole “stack moar layerz” thing is a meme, but it was a real sentiment about newer entrants into the field not learning anything about ML theory or best practices. As it turns out, neural nets “won” and using them effectively required development and acquisition of some new domain knowledge and best practices. And the kids are ok. The people who scoffed at neural nets and never got up to speed, not so much.
Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods.
Well, it's not. There's a small moat around that right now because the UX is still being ironed out, but in a short while "able to use coding agents" will be the new "able to use Excel."
What will remain are the things that already differentiate a good developer from a bad one:
- Able to review the output of coding agents
- Able to guide the architecture of an application
- Able to guide the architecture of a system
- Able to minimize vulnerabilities
- Able to ensure test quality
- Able to interpret business needs
- Able to communicate with stakeholders
Yeah, but there’s “able to use Excel”, and then there’s “able to use Excel.”
There is a vast skill gap between those with basic Excel, those who are proficient, and those who have mastered it.
As an intermittent user of Excel I fall somewhere in the middle, although I’m probably a master of knowing how to find out how to do what I need with Excel.
The same will be true for agentic development (which is more than just coding).
That probably won’t be necessary in a few years.
If it does go as far in that direction as many seem to expect (or, indeed, want), then most people will be able to do it; there will be a dearth of jobs and many people wanting them, so it'll be a race to the bottom for all but the lucky few: development will become a minimum wage job, or so close to it that it'll make no odds. If I'm earning minimum wage it isn't going to be sat on my own doing someone else's prompting; I'll find a job that doesn't involve sitting alone in front of a screen and reclaim programming for hobby time (or just stop doing it at all, I have other hobbies to divide my time between). I dislike (effectively) being a remote worker already, but put up with it for the salary; if the salary goes because “AI” turns it into a race-to-the-bottom job, then I'm off.
Conversely: if that doesn't happen then I can continue to do what I want, which is program and not instruct someone else (be it a person I manage or an artificial construct) to program. I'm happy to accept the aid of tools for automation and such, I've written a few of my own, but there is a line past which my interest will just vanish.
No one will be eager to employ “AI-natives” who don’t understand what the LLM is pumping out; they’ll just keep the seasoned engineers who can manage and tame the output properly. Similarly, no one is going to hire a bunch of prompt engineers to replace their accountants; they’ll hire fewer seasoned accountants who can confidently review LLM output.
Reducing complexity is not just a matter of reducing the complexity of the code; it's also a matter of reducing the complexity of the problem. A programmer can do the former alone with the code, but the latter can only be done in a frank discussion with stakeholders.
A vibe coder using an LLM to generate complexity will not be able to tell which complexity to get rid of, and we don't have enough training data of well-curated complexity for LLMs to figure it out yet.
Overall, we are trying to "silo" LLM-generated code into its own services with a well-defined interface so that the code can just be thrown away and regenerated (or rewritten by hand) because maintaining it is so difficult.
The only way I was able to direct the AI to a better design was by saying the words I know in my head that describe better designs. Anyone without that knowledge wouldn't be able to tell the heavy interpreter architecture wasn't good, because it was fast enough for simple test cases which all passed.
And you can say "just prompt better", but we're very quickly coming to a place where people won't even have the words to say without AI first telling them what they are. At that point it might as well just say "The design is fine, don't worry about it", and how would the user know any better?
Or other people who just kept their research dataset private and milked it for years training incrementally better ML models on the same data. Then similar datasets appeared openly and they threw a hissy fit.
Usually there are a million little tricks and oral culture around how to use various datasets, configurations, hyperparameters etc and papers often only gave the high level ideas and math away. But when the code started to become open it freaked out many who felt they won't be able to keep up and just wanted to keep on until retirement by simply guarding their knowledge and skill from getting too known. Many of them were convinced it's going to go away. "Python is just a silly, free language. Serious engineers use Matlab, after all, that's a serious paid product. All the kiddies stacking layers in Theano will just go away, it's just a fad and we will all go back to SVM which has real math backing it up from VC theory." (The Vapnik-Chervonenkis kind, not the venture capital kind.)
I don't want to be too dismissive though. People build up an identity, like the blacksmith of the village back in the day, and just want to keep doing it and build a life on a skill they learned in their youth and then just do it 9 to 5 and focus on family etc. I get it. But wishing won't make it so.
Talented, skilled people with good intuition and judgement will be needed for a long time, but that will still require adapting to changing tools and workflows. And the bulk of the workforce is not that.
I agree if that's all you can do. Using a coding agent to complement a valuable domain-specific skill is gold.
At least when you're talking about shipping software customers pay for, or debugging it, etc. Research, narrow specializations, etc may be a different category and some will indeed be obsoleted.
> As it turns out, neural nets “won”
> The people who scoffed at neural nets and never got up to speed not so much.
I get the feeling you don’t know what you’re talking about. LLMs are impressive, but what have they “won” exactly? A decade on from their debut, they require millions of dollars of infrastructure to run, and we’re really having trouble using them for anything all that serious. Now, I’m sure in a few decades’ time this comment will read like that of a silly cynic, but I bet that will only be after those old school machine learning losers come back around and start making improvements again.
> you don’t know what you’re talking about
Consider: Why did Google have a bazillion TPUs, anyway?
It’s also the most important capability engineering orgs can be working on developing right now.
Software Engineering itself is being disrupted.
It does seem like a roundabout way of saying "but what if full sending on AI didn't have downsides, tho?"
Just phrased in a way that can put the onus on the other party with a perfect weasel word qualifier like "most important".
There's so much hand wringing about people not understanding how LLMs work and not nearly enough hand wringing about people not understanding how computer systems work.
Doing so will effectively force a (potentially unwanted) career change for many people and will lead to the end of software engineering (and software as a career), assuming AI continues to improve.
"Effectively" using agents means that you're writing specs and reading code (in batches through change diffs) instead of writing code directly. This requires the ability to write well (or well enough to get what you want from the agent) and clearly communicate intent (in your language of choice, not code; very different IMO).
The way that you read code is different with agents as well. Agents can produce a smattering of tests alongside implementation in a single turn. This is usually a lot of code. Thus, instead of red-green-refactor'ing a single change that you can cumulatively map in your head, you're prompt-build-executing entire features all at once and focusing on the result.
Code itself loses its importance as a result. See also: projects that are moving towards agentic-first development using agents for maintenance and PR review. Some maintainers don't even read their codebases anymore. They have no idea what the software is actually doing. Need security? Have an agent that does nothing but security look at it. DevOps? Use a DevOps agent.
This isn't too far off from what I was doing as a business analyst a little over 20 years ago (and what some technical product managers do now for spikes/prototypes). I wrote FRDs [^0] describing what the software should do. Architects would create TRDs [^1] from those FRDs. These got sent off to developers to get developed, then to QA to get bugs hammered out, then back to my team for UAT.
If agents existed back then, there would've been way fewer developers/QA in the middle. Architects would probably do a lot of what those developers and QA would've done. I foresee that this is the direction we're heading in, but with agents powered by staff engineers/Enterprise Architects in the middle.
> Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods.
People learn differently. I (and others) learn from doing. Typing code from Stack Overflow/Expertsexchange/etc instead of pasting it, then modifying it is how I learned to code. Some can learn from reading alone.
[^0]: https://www.modernanalyst.com/Resources/Articles/tabid/115/I...
I do not see why you can't write your spec in pseudocode if you really want to. Communicating your intent to the LLM for how the code should be developed is, skill-wise, far closer to programming than to writing.
If you expected things to stay the same forever, maybe software engineering wasn't the right career move for you. Even though it looked safe enough, given that we've spent 50 years writing the same old code the same old way, that was never guaranteed.
I for one am glad to see something genuinely new come along. The last dozen or so "paradigm shifts" turned out to be disappointing variations on the same old paradigm. Not this one, though.
Every company I've ever worked at has genuinely believed in and invested in improving developer skills.
It seems they were correct not to invest in your skills.
I've worked for six companies over almost 20 years. The majority invested in my skills, and I hope that investment has paid off for them!
I can't understand what people are looking for when they talk about lack of investment into training for engineers. It's not the kind of job where someone can train you. It's like an executive complaining they aren't trained. You're the one who's supposed to be coming up with answers and making decisions. You need to spend time on self-motivated learning/discovering how to better do your work. Every company I've been at big or small assumes that's part of the job.
In US you go to college for 4-5 years and pay $50k per year. Or more.
You pay to learn. A lot of money, a lot of time.
Then you get a job, where the idea is that you get paid for doing work and you expect the employer to do what?
You seem to expect that not only will you not be doing the things you're being paid to do, but that the employer will pay for your education on company time.
How many weeks per year of time off do you expect to get from a company?
You'll either say a reasonable number, like 1 or 2, which is insignificant compared to the time you supposedly spent learning (5 years). You just spent 250 weeks supposedly learning, but 1 or 2 weeks a year is supposed to make a difference?
Or you'll say unreasonable number (anything above 2 weeks) because employment is not free education.
With 35 companies, that would be around 1-2 years per company on average if you are retired or near retirement. I doubt any company seriously invests in a worker who will likely be gone the next year. Getting lip service already seems like a good deal at that point.
There doesn’t seem to be a plan for maintaining that culture.
Doesn't credentialism kinda throw a spanner in that - where it's not enough to have people with a good track record of solving issues, but then someone along the way says "Yeah, we'd also like the devs who'll work on the project to have Java certs." (I've done those certs, they're orthogonal to one's ability to produce good software)
Might just be govt. projects or particular orgs where such requirements are drawn up by dinosaurs, go figure (as much as I'd love software development to be "real" engineering with best practices spanning decades, it's still the Wild West in many respects). Then again, the same thing more or less applies to security; a lot of it seems like posturing and checklists (like how, some years back, the status quo was changing your password every 30-90 days because IT said so) instead of the stuff that actually matters.
Not to detract from the point too much, but I've very much seen people care less about solving problems and shipping fast than about stuff like that, or about covering their own asses by paying for Oracle support or whatever (even when it gets in the way of actually shipping, like ADF and WebLogic and the horror that is JDeveloper).
But yeah, I think many companies out there don't care that much about the individual growth of their employees, unless they have the ability to actually look further into the future - which most don't, given how they prefer not to train junior devs into mid/senior ones over years.
Back in the day, there were more or less two consumer flight sims: MS Flight Simulator and XPlane. MSFS was and has always been the much prettier one, much easier to work with; xplane is kludgy, very old-school *NIX, and chonky in terms of resource usage. I was doing some work integrating flight systems data (FDAU/FDR outputs) into a cheaper flight re-creation tool, since the aircraft OEM's tool cost more than my annual salary. Hmm, actually, ten years of my salary.
So why use xplane at all, then?
The difference was that MSFS flight dynamics was driven from a model using table-based lookup that reproduced performance characteristics for a given airframe, whereas xplane (as you might be able to tell from the company name, Laminar Research) does fluid and gas simulation over the actual skin of the airframe, and then does the physics for the forces and masses and such.
I caught some flak for going with xplane: "Why not MSFS!? It's so much prettier!"
Unless the airframe is in a state that is near-equivalent to the tabular lookup model, the flight is not going to be accurately re-created. A plane in distress is very often in a boundary state, at best. Or you might be flying a plane that doesn't really have a model, like, say, a brand new planform (like the company was trying to develop). Without the aerodynamic fundamentals, the further away you get from the model represented by the tabular lookups, the greater the risk gets.
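To make that concrete, here's a minimal sketch with made-up numbers (toy values, not real aerodynamic data): inside the table's envelope the two models agree, but past the table's edge the lookup silently clamps while a physics-style model keeps tracking the airframe.

```python
import numpy as np

# Toy lookup table: lift coefficient vs. angle of attack, valid for 0-15 deg.
aoa_table = np.array([0.0, 5.0, 10.0, 15.0])
cl_table = np.array([0.20, 0.57, 0.93, 1.30])

def cl_lookup(aoa_deg):
    # np.interp clamps to the edge values outside the table's range,
    # so this model silently flat-lines past 15 degrees.
    return float(np.interp(aoa_deg, aoa_table, cl_table))

def cl_physics(aoa_deg):
    # Stand-in for a first-principles model: lift collapses past stall.
    if aoa_deg <= 15.0:
        return 0.2 + aoa_deg * (1.1 / 15.0)
    return 1.3 - 0.08 * (aoa_deg - 15.0)  # post-stall drop-off

for aoa in (5.0, 15.0, 25.0):
    print(f"aoa={aoa:4.1f}  table={cl_lookup(aoa):.2f}  physics={cl_physics(aoa):.2f}")
# At aoa=5 both give ~0.57. At aoa=25 (a plane in distress, outside the
# envelope) the table still reports max lift while the physics model has stalled.
```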
And how does this relate?
Those fundamentals- aerodynamic or mathematical or electrical- will be able to deal with a much broader range than models trained on existing data, regardless of whether or not they are LLMs or tabular lookups. If we rely on LLMs for aerodynamics, for chemistry, for electrical engineering, we are setting ourselves up for something like the 2008 Econopalypse except now it affects ALL the physical sciences; a Black Swan event that breaks reality.
I am genuinely worried we're working ourselves into just such an event, where the fundamentals are all but forgotten, and a new phenomenon simply breaks the nuts and bolts of the applied sciences.
As for my xplane selection, it helped in other ways. Because often the FDR data is just plain wrong, but with xplane you could actually tell, because a control surface sticking out one way, while the flight instruments say another, lights up a "YOU GOT PROBLEMS" light in the cockpit as the aircraft inexplicably lurches to the right.
What's valuable to a company is not necessarily what's valuable to the customers or even more so, to a civilization at large.
It could hardly have been a hobby if people were willing to pay you for it (and good rates too)?
I will rephrase it like this - the market has shifted away from providing value to the customers of said companies to pumping itself instead and it does not need to employ people for that. Simple as.
There should be thousands or tens of thousands people worldwide that can build the operating systems, virtual machines, libraries, containers, and applications that AI is built on. But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.
God I hope it doesn't all crash at once.
Before it was "hey $senior_programmer where's the $thing defined in this project?", which either required a dedicated person onboarding or someone's flow was interrupted - an expected cost of bringing up juniors.
Now a properly configured AI Agent can answer that question in 60 seconds, unblocking the Junior to work on something.
And no, it doesn't mean Juniors or anyone else get to make 10k line PRs of code they haven't read nor understand. That's a very different issue that can be solved by slapping people over the head.
It will be the same with software. AI will be writing and consuming most software. We will be utilizing experiences built on top of that, probably generated in real time for hyper personalization. Every app on your phone will be replaced by one app. (Except maybe games, at least for a short while longer).
Everyone's treating writing code as this reverent thing. No one wrote code 100 years ago. Very few today write assembly. It will become lost because the economic necessity is gone.
It's the end of an era, but also the beginning of a new one. Building agentic systems is really hard, a hard enough problem that we need a ton of people building those systems. AI hardware devices have barely been registered, we need engineers who can build and integrate all sorts of systems.
Engineering as a discipline will be the last job to be automated, since who do you think is going to build all the world's automation?
You're low by several orders of magnitude. "The 2025 development cycle saw 2,134 developers contribute to [Linux] kernel 6.18" [1]
[1] https://commandlinux.com/statistics/linux-kernel-contributor...
How does building agentic systems, a "really hard" problem, not just end up a "regular code" problem? Because that is what it is. A distributed systems problem with non-deterministic run lengths. How do you switch agent contexts? Similar to how you solve regular program context switching. How do you search tool capabilities and verify them? How do you effectively manage scheduled tasks?
Oh, look, you've just invented the operating system kernel. Suddenly, those 'dozen or two' experts don't seem so archaic after all!
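A toy sketch of the point (all names and numbers invented): give agents non-deterministic run lengths, add preemption and a run queue, and you've written a scheduler.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AgentTask:
    name: str
    steps_left: int                               # non-deterministic run length
    context: list = field(default_factory=list)   # state saved across switches

def run_step(task: AgentTask) -> None:
    # One unit of "agent work"; we record it in the task's saved context.
    task.context.append(f"{task.name}: {task.steps_left} steps remaining")
    task.steps_left -= 1

def round_robin(tasks: list[AgentTask], quantum: int = 2) -> None:
    """Preempt each agent after `quantum` steps: a kernel-style scheduler."""
    queue = deque(tasks)
    while queue:
        task = queue.popleft()        # "context switch" in
        for _ in range(quantum):
            if task.steps_left == 0:
                break
            run_step(task)
        if task.steps_left > 0:
            queue.append(task)        # preempted: back of the run queue

round_robin([AgentTask("security-review", 3), AgentTask("devops", 5)])
```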
Also, 200 years ago we didn't have bike mechanics. Car mechanics. Boat mechanics. Plumbers. Electricians. Not all new professions fade away.
I have a hard time believing any tenured developer is not actually learning things when using LLMs to build. They make interesting choices that are repeatable (new CLIs I didn't even know existed, writing scripts to churn through tricky data, using specific languages for specific tasks, like Go for concurrently working through numerous large tasks, etc.)
Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems, or they had no foundational knowledge or interest in programming to begin with (which is also a valid way to use these tools, but they don't work very well without guidance for too long [yet]).
I've vibe coded plenty. I mostly don't look at the crap coming out. Don't want to. When I do I absorb a tiny bit, but not enough to recreate the thing from scratch. I might have a modicum more surface-level knowledge, but I don't have deep understanding and I don't have skills. To the extent that I've fixed or tweaked AI-generated code, it's not been to restructure, re-architect, or refactor. If this is all I did day in and day out, my entire skillset would atrophy.
I vividly remember understanding how calculus works after watching some 3blue1brown videos on YouTube, but once I looked at some exercises I quickly realized I was not able to solve them.
Similar thing happens with LLMs and programming. Sure I understand the code but I'm not intimately familiar with it like if I programmed it "old school".
So yes, I do learn more but I can't shake the feeling that there is some dunning kruger effect going on. In essence I think that "banging my head against the wall" while learning is a key part of the learning process. Or maybe it's just me :D
How many bytes is a pointer in C? How many bytes is a shared pointer in C++? What does sysctl do? What about fsync?
What is a mutex lock? How is it different from a spin lock?
You want to find the n nearest points to a given point on a 2-D Cartesian plane. Could you write the code to solve that on your own?
Can you answer any of these questions without searching for the answer?
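For what it's worth, here's one minimal sketch of an answer to the nearest-points question, using a heap so you avoid fully sorting all m candidate points (O(m log n) rather than O(m log m)):

```python
import heapq

def n_nearest(points, origin, n):
    """Return the n points closest to origin on a 2-D Cartesian plane."""
    ox, oy = origin
    # Squared distance avoids a sqrt; ordering is the same.
    return heapq.nsmallest(n, points, key=lambda p: (p[0] - ox) ** 2 + (p[1] - oy) ** 2)

print(n_nearest([(1, 1), (3, 4), (0, 2), (-5, 1)], (0, 0), 2))
# [(1, 1), (0, 2)]
```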
I don't use LLMs and I learn things fine. Always have. For several decades. I care deeply about the underlying code and systems. It annoys me when people say they do and they cannot even understand how the computer works. I'm fine with people having domain-specific knowledge of programming: maybe you've only been interested in web development and scripting DOM elements. But don't pretend that your expertise in that area means you understand how to write an operating system.
Or worse: that it prevents you from learning how to write an operating system.
You can do that without an LLM. There's no royal road. You have to understand the theory, read the books, read the code, write the code, make mistakes, fix mistakes, read papers, talk to other people with more experience than you... and just write code. And rewrite it. And do it all again.
I find the opposite is true: those who use LLM coding exclusively never enjoyed programming to begin with, only learned as much as they needed to, and want the end results.
I have been coding since long before the internet and before there was huge demand for software devs... and I will keep coding even after there is no demand for it.
That's only a brief moment in time. We learned it once, we can learn it again if we have to. People will tinker with those things as hobbies and they'll broadcast that out too. Worst case we hobble along until we get better at it. And if we have to hobble along and it's important, someone's going to be paying well for learning all of that stuff from zero, so the motivation will be there.
Why do people worry about a potential, temporary loss of skill?
Like, yeah, you have the resources right now to bootstrap your knowledge of most coding languages. But that is predicated on so many previous skills learned throughout your life, adulthood and childhood. Many of which we take for granted. And ultimately AI/LLMs aren't just affecting developers, they are infecting all strata of education. So it is quite possible that we build a society that is entirely dependent on these LLMs to function, because we have offloaded the knowledge from society's collective mind... And getting it back is not as simple as sitting down with a book.
Yes we can, but there is a big problem here. We will "learn it again" only after something breaks. And the way the world currently functions, there might not be time to react. It is like growing food on an industrial scale. We have slowly learned it over time. If it breaks now, with the knowledge gone, and we have to learn it again, it will end civilization as we know it.
I do use Claude code at home maybe a couple hours a week, mostly for code base exploration. Still haven’t figured out how to fully vibe code: the generated code just annoys me and the agents are too chatty. (Insert old man shaking fist at cloud).
With how much some people spend on tokens that they've shared on here, and concerns about rising prices, I've kind of been wondering if we're actually heading to a point where seniors who don't use AI are going to be cheaper than juniors who do.
And executives will get millions in bonuses for figuring it out, and the remaining programmers, probably one or two, will squabble over who's the best prompter and how everyone else was dumber than them for not figuring it out.
I’ve eyerolled way less with Codex CLI and the GPT models than with Claude.
Frankly, I don't think so. AI built on LLMs is the perpetual motion machine scam of our time. But it is cloaked in unimaginable complexity, and thus it is the perfect scam. Even the most elaborately hidden power source in a perpetual motion machine cannot fool nature, and the machine comes to a complete stop as that source runs out.
It kind of feels like companies are being fooled into outsourcing/offshoring their jr. developer level work. Then the companies depend on it because operational inertia is powerful, and will pay as the price keeps going up to cover the perpetual motion lie. Then they look back and realize they're just paying Microsoft for 20 jr. developers but are getting zero benefit from in-house skill development.
It's not perpetual motion, it's very real capability, you just have to be able to learn how to use it.
What I am saying is that once the high quality training data runs out, it will drop in its capabilities pretty fast. That is how I compare it to perpetual motion machine scams. In the case of a perpetual motion machine, it appears that it will continue to run indefinitely. That is analogous to the impression you have now. You feel that this will go on and on forever, and that is the scam you are falling for.
People yeeting a (shitty) GitHub clone together with Claude in a week apparently can't imagine it, but if you know the shit out of Rails, start with a good boilerplate, and have a good git library, a solo dev can also build a (shitty) GitHub clone in a week. And they'll be able to take it somewhere, unlike the LLM rat's nest that will require increasingly expensive tokens to (frustratingly) modify.
Is the world any better for them existing? Is the decline of coding and software engineering skills in humans, from outsourcing the practice to AI, worth it and sustainable long term?
Managers at companies are just doing what they've optimized their careers for: maintaining some edge over some competition, at some cost. What is pure FOMO to you or me is good strategy to anyone trying to win [1]. In other words, FOMO was always the strategy.
This self-reinforcing loop is also not going away. There hasn't been any real evidence that any part of knowledge work, including coding, cannot be automated [2]. Even if human-level quality or cost-effectiveness takes 10 more years, all tasks are functionally solved or about to be. I don't like it, but it's true.
The big problem is that the people who are removed from this loop, who have the time to understand its effects and the power to make changes, are doing fuck-all.
So, whether the loop stops for a while or speeds up even more, we're fucked until we figure out how to detach full-time employment from survival.
[1] I believe this is called meta in PvP games; even if you want to subvert the meta, you gotta know it well first.
[2] Although it could just be my impression, and I'd be happy to be proven otherwise.
We are at the point where sure, AI can write code, but we could always do that; lack of code-writing ability was not what killed the OOP automation efforts. There was plenty of ability to code back then as well. The distinction of whether it's an offshore team in India or Claude writing the code doesn't change things as far as the larger picture of building the software.
Yet every company does it, except the worst sweatshops.
More and more, the bar is being lowered. Don't fall into brain rot. Don't quiet quit. Stay active and engaged, and you'll begin to stand out among your peers.
Here's my advice: if there's someone around you who can teach you, learn from them. But if there isn't anyone around you who can teach you, find someone around you who can learn from you and mentor them. You'll actually grow more from the latter than from the former, if you can believe that.
I think there's a broad blindness in industry to the benefits of mentorship for the mentors. Mentoring has sharpened my thinking and pushed me to articulate why things are true in a way I never would have gone to the effort of otherwise.
If there are no juniors around to teach, seniors will forever be less senior than they might have been had they been getting reps at mentorship along the way.
It's simply that if you can't teach something, you don't really understand it.
And the act of having to simplify and break down a skill to explain it to others improves your knowledge of it.
I haven't really been a reader, but I can definitely notice when a book/text is "hard". I'm currently reading the Old Testament, and I understand very little (even the Oxford one that has a lot of annotations is hard for me). I like this, because it's a measurement of what I don't know (if that makes sense).
I'm trying to decide if my attention span has atrophied, or if I'm just more aware now of my ADD.
Either way, I'm hopeful that my attention span for this kind of reading will grow with practice.
I used to be an avid reader as a child, even as a teenager. That was a long time ago. I'm looking forward to that time when I will have the mental capacity to read long prose again.
But besides that, it's interesting so many people are willing to tailor their entire workflow and product to indeterminate machines and business culture.
I recommend everyone stop using these infernal cloud devices and start with a nice local model that doesn't instantly give you everything, but is quite capable of removing a select amount of drudgery, which is rather relaxing. And as soon as you get too lazy to do enough specifying or real coding, it fucks up your dev environment and you slap yourself a hundred times wondering why you ever trusted someone else to properly build your artifacts.
There's definitely some philosophy being edged into our spaces that needs to be combatted.
The local models are only going to get better, and the improvement curve has to top out eventually. Maybe the cloud models will still give you a few extra percentage points of performance, especially if they're based on data sets that aren't available to the public, but it won't make much difference on most tasks and the local models will have a lot of advantages too.
Most people are outsourcing thinking instead of using it to go deeper. The tools aren’t the problem, the default behavior is.
I have a friend who uses Google Maps to find places, then memorizes the route there and closes the app to navigate because he wants to build a better mental map of our city. Meanwhile, I just check the app every five seconds like a dummy, and my hippocampus stays small.
You’ve let them in and given them power in many aspects of your life without even a whimper of resistance. Of course you’ll accept them as your lords.
> Stay active and engaged, and you’ll begin to stand out among your peers.
Here’s how the rat race looks in the age of AI and how you can stay ahead.
I have never typed and expressed myself so much before I started talking to clankers. Telling them what to do, teaching them skills, giving them architecture decisions and yanking their chain when they veer off course. I type in plain English maybe more than 60 hours a week now.
A shocking number of my coworkers plot every single trip (even to work!) and claim this helps with traffic or undercover cameras or some other cope. But the traffic will be there regardless, and they shouldn't need Google Maps to remind them not to speed. They'd rather be glued to the screen than pay attention to their surroundings or learn any landmarks.
Isn't the point to use the GPS to avoid the traffic? In my experience, it's pretty effective for that a lot of the time.
The second part gave it away though.
We're obviously in an era where "good enough" is taken so far that what used to be the middle of the fictional line is not the middle point anymore but a new extreme. You're either someone who cares for the output or someone who cares how readable and easy to extend the code is.
I can only assume this is done on hopeful purpose, with the hope that LLMs will "only keep improving linearly" to the point where readability and extendability are not my problem but tomorrow's LLM's problem.
You'll still come here, read the comments, see something engaging and want to reply and... feel sad because shakes fist at [datacenter] clouds it's all just bots talking to each other anyway.
Seems lame. Keep talking anyway.
We certainly do suddenly live in "interesting times".
Soon to remove my access entirely to this website.
AI is the future. AI will do this. AI will cause that. It is inevitable. Everything is obviously changing.
They leave no room left for debate. No openness to pushback. And no evidence or proof. It just is because it is, and if you don't believe it, you're simply wrong. We saw the same sort of attitudes with blockchain and NFTs.
Which of course is a lie propagated by those who want to cast the disbelievers as mentally defective. Why the hell would you care how readable and easy to extend the code is if you weren’t interested in the result?
Turns out it sucks to produce original works when you know that, whereas previously a few people at best might see your work, now it’s a bunch of omniscient robots and maybe half of those original people are using the robots instead.
I'm curious: would you say the feeling of being watched online is making you afraid of some repercussion, or is it something else?
I get a feeling from overall anti-AI sentiment online that a lot of people feel they're entitled to 100% of value created by anything even tangentially related to their person, whether that's some intentional contribution or a random brain fart that happened in the vicinity of someone else doing something useful - and then become resentful they're not "getting their share".
There's hardly any other way to read all the proclamations of quitting to do anything because "cognitive dark forest" (itself a butchering of the original idea of "dark forest" across so many orthogonal dimensions in parallel, that it starts to look like a latent space of a transformer model).
Downloading public stuff off the internet with no regard for the creator's wishes or license is bad enough, but we have many people here who defended AI companies seeding models with pirated content.
The internet is a social contract. AI is not the first thing to try and erode it for profit, but it's by far the most aggressive one.
Rather, I don't like that the terms I released my work under aren't being respected. I believe LLMs are derivative works of the pieces they are trained on. I spent more than ten years working on open source code, and now the models that were trained on my GPL'd code are being used to make proprietary code against the terms of the license. I find this reprehensible.
While it wasn't an explicit term of release, generally I did not expect anyone to get any kind of financial value from the blog posts I wrote. I just wrote them for fun & maybe others would find them interesting. Now, LLMs have been trained on my blog posts and are generating financial value for some of the worst human beings on the planet who are using their money to murder, demean, and maim other humans.
I now know that blog posts I wrote for fun are putting money in some sociopath's bank account, and the GPL'd code I wrote is being used to create software to exploit me & other users. If I continue to create things publicly, it will be used against me and other people, and there's nothing I can do to stop it except to stop creating things. It's all very disrespectful & demoralizing.
Even if it's true and you genuinely have nothing to hide, have nothing to lose from being profiled, there are people who absolutely do.
Look at the radicalization happening in countries around the world, including the USA. It might be OK to be part of a minority or to have an uncommon opinion. A few years pass and suddenly the same person is considered an undesirable, a foreign agent, a terrorist or a deviant.
I've posted a lot of shit online which can be connected to my person and which could label me as any of the groups above. But that's a decision I make for myself. I would never dare make it for others or claim that they should not care about surveillance and take the same risks I do.
I know a guy from Russia who lost his job because he expressed an antiwar opinion. The same thing can happen in the US or any other country you consider civilized. The US proto-dictator is already sending death threats to people who only expressed the opinion that soldiers can refuse illegal orders. Neither you nor I can know what will happen next.
Obviously that's a false dichotomy and a pretty defeatist attitude. But it does have a point.
Writing a blog yes, feeding the beast no.
> (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?)
> The 1% utility AI has is overshadowed by the overwhelming mediocracy it regurgitates.
This sort of reasoning is why you might have been called extreme.
It's less extreme to say "many people see and/or get lots of benefit, but it's wrong to use the tool due to the harms it has".
There's nothing wrong with extreme, but since you asked.
I was an AI sceptic for a long time, until toward the end of last year when I seriously evaluated these tools and came to realise they could add tremendous value.
When someone comes along and declares that it's all hype, it goes against my experience that it's getting things done.
I can also see the harm it does, and I hope the tooling improves to reduce that harm. For example, there's a significant lack of caching in the tooling. It's constantly re-reading the same files every day, and more harmfully, constantly fetching the same help pages and blog-posts from the web.
If it had a generous built in HTTP cache, and instruction to maximise use of the cache, then it could avoid a lot of re-fetching of content, which would help reduce the harms.
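As a sketch of the kind of thing I mean, assuming tooling that routes its web fetches through the third-party requests-cache library (the cache name and lifetime here are just my guesses at sensible values):

```python
import requests_cache

# SQLite-backed cache on disk; entries live for a week before re-fetching.
session = requests_cache.CachedSession(
    "agent_http_cache",
    expire_after=7 * 24 * 3600,
)

def fetch_page(url: str) -> str:
    response = session.get(url)
    # response.from_cache tells you whether the network was touched at all.
    print(f"{url} served from cache: {response.from_cache}")
    return response.text

# The second call is served instantly from the local cache,
# instead of re-fetching the same help page from the web.
fetch_page("https://docs.python.org/3/library/sqlite3.html")
fetch_page("https://docs.python.org/3/library/sqlite3.html")
```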
Declaring my experience to be invalid and based on nothing but hype doesn't engage people like me at all.
And it's the people like me, the middle-of-the-road developer working on enterprise software, that either need convincing to not use the tools, or for our habits to change to minimise the harm.
Because otherwise we're quietly getting on with using it, potentially destroying forests and lakes as we do.
I think the position that AI is morally troubling enough that the downsides outweigh the positives is perfectly defensible. But the entire argument becomes a joke when you can’t accurately catalog the positives.
While this is a great idea, the harms are somewhat overblown. The big scare number for water consumption includes water used in power generation which itself includes evaporation from hydroelectric power.
Isn't this what the free software movement wanted? Code available to all?
Yes, code is cheap now. That's the new reality. Your value lies elsewhere.
You can lament the loss of your usefulness as a horse buggy mechanic, or you can adapt your knowledge and experience and use it towards those newfangled automobiles.
But this is not that. The current situations is closer to "what's yours is mine and what's mine is mine".
I have been releasing my writings under a Creative Commons Attribution-ShareAlike license which requires attribution and that anything built upon the material to be distributed "under the same license as the original". And yet I have no access to OpenAI's built-upon material (I know for a fact they scrape my posts) while they get my data for free. This is so far legal, but it's probably not ethical and definitely not what the free software movement wanted.
Sorry, you don't speak for the movement. Plenty of us want this world.
Available to all yes. Not available to the giant corpos while the lone hobbyist still fears getting sued to oblivion. In fact that's pretty much the opposite of what the free software movement wanted.
Also the other thing the free software movement wanted was to be able to fix bugs in the code they had to use, which AI is pulling us further and further away from.
Decompiling and re-engineering proprietary code has never been easier. You almost don't even need the source code anymore. The object code can be examined by your LLM, and binary patches applied.
Closed source is no longer the moat it was, and so keeping the source code to yourself is only going to hurt you as people pass you over for companies who realize this, and strive to make it easier for your LLM to figure their systems out.
We've always been able to do that, but that's not the point. There's a reason free software licenses require the "the preferred form of the work for making modifications to it" to be opened.
One of the core tenets is that any user should have the exact same access as the original developers.
Jesus Christ.
"The people who wanted everyone to have a home should be happy with the invention of the lockpick. You can just find a nice house and open the lock and move in. Ignore the lockpick company charging essentially whatever they want for lockpicks, or how they got access to everyone's keyfob, or the danger of someone breaking into your house."
That is basically your argument. Like, AI is a copyright theft machine, with companies owning the entire stack and being able to take it away at will, and committing crimes like decompiling source code instead of clean-room reverse engineering is not a selling point either...
The open source community wants people to upskill, people become tech literate, free solutions that grow organically out of people who care, features the community needs and wants and people having the freedom to modify that code to solve their own circumstances.
Or Oracle for databases.
Or Microsoft for operating systems.
Or DEC for computers.
There are perfectly good open source LLMs and agents out there, which are getting better by the day (especially after the recent leak!)
I want to support RISC V over Intel.
I want other things too, and on balance, Intel+Anthropic is most compliant with my various preferences, even if they're not perfect.
They’re not free in any sense of the word, from price to openness of the models. Would OpenAI cry if every bit of their models were wide open for us to use however we see fit? If so, then it’s not free, again, in any definition of the word.
Local models are a thing.
Me boycotting some company's product due to bad practices (slavery, etc).
Response: You know your boycotting isn't going to change anything, right?
Me: Yes, and ....? I'm not trying to change the world.
I don't use FF as some form of protest. It's just a browser I like more.
I'm not anti-AI the way much of HN is, but let's pretend I am. If I ban AI generated content on my site[1], I'm not trying to change the world. Just controlling my site.
Getting more to your sentiment: The world/Internet is a vast place. If even 1000 people think like me, it's more than enough. For a number of years I had valuable online interactions via BBS's with a population < 1000. As long as I get 1000 people, let the rest of the world burn!
It's like the constant "Emacs is dying" threads we used to have on HN, because the percentage of SO users using it kept dropping. When in reality, the absolute numbers kept increasing. Who cares if the world has moved on to VS Code? Emacs as an ecosystem was/is thriving!
[1] Assuming I live in a fantasy world where I can classify content accurately...
I think people are so afraid to do a hecking racism that they start comparing any normal thing to racism. I also think there’s an incentive here: by comparing it to racism they potentially gain some social status points, like: I’m morally superior to you because I didn’t do a hecking racism like you.
But it can backfire, like with your comment. People are catching on to how ridiculous this comparison is.
How large does a group have to be (absolute number or percentage of population) for you to change your mind on this?
Serious question - genuinely curious. My answer is about 10% provided they are organised in some way. 5% if you're particularly good at collective action.
Posting your most provocative and strong opinions in reaction to the latest controversy-of-the-week is what fuels the internet and culture more than anything these days. The attention economy demands hot takes mixed with preaching about every new thing.
So while I do worry about AI's impact on blogging/writing/etc., I do think to some extent, you either love the process or you don't. If you only write in order to have readers, you're in the wrong game.
For the vast majority of history it was all community theater, carnies, and “that guy in the square who knows the lute”.
Fun fact: in medieval Europe, acting was considered a sin so great you could not get a Christian burial, as you were channeling the spirit of others. In realpolitik terms, it was probably because actors were mostly queer in the cities and Roma in the countryside. What’s old is new…
There’s always Van Gogh and Starry Night to think about as well.
Suppose that I have discovered a novel algorithm that solves an important basic problem much more efficiently than current techniques do. How do I hide it from the web scrapers that will steal it if I put it on GitHub or elsewhere? Should I just write it up as a paper and be content with citations and minor glory? Or should I capture AI search results today for "write me code that does X", put my new code up under a restrictive license, capture search results a day later, demonstrate that an AI scraper has acquired the algorithm in violation of the license, and seek damages?
Unfortunately, that would be considered heresy on forums like HN, and people will continue to rail against AI and whatever it's causing and patents, instead of realizing that one is the only available leverage against the other.
Now I still show clean code videos from Bob and other old things to new hires and young colleagues.
Java got more features, granted, but the golden era of discovery is over.
The new big thing is AI, and I'm curious to see how it will feel to write real agents for my company-specific use cases.
But I'm also seeing people so bad at their daily jobs that I wish I could get their salary as tokens to use. It will change, and it is changing, our field.
Btw, "Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value." I disagree; video generation has a massive impact on the industry for a lot of people. Don't downplay this. NFTs, btw, never had any impact besides moving money from a to b.
Oof. The modern "Go away or I will replace you with a very small shell script"
But yeah, there is one person made of teflon. Nothing sticks. And I could point to that teflon person in every company I've worked at so far.
I’ve never found a way around it, and I don’t want to believe that some people can’t grok this field, but that is what I’ve experienced. Maybe other people can educate better.
I’ve just found that at some point you have to limit the blast radius and move onto more productive uses of your own time.
I don't see any proof that software development is not dead. Software engineering is not; it's much more than writing code, and it can be fun. But writing code is dead; there is no point in doing it if an LLM can output the same code 100x faster. Of course, architecture and operations stay in our hands (for now?).
Initially I was very sceptical; the first versions of ChatGPT or Claude were rather bad. I kept holding on to the thought that it couldn't get good. Then I spent a few months evaluating them: if you know how to code, there is no point in coding anymore, just instruct an LLM to do something, verify, merge, repeat. It's an editor of some sorts, an editor where you enter a thought and get code as an output. Changes the whole scene.
On the other hand, I can't help but think about ASM coders lamenting C, and especially C++. Also, god help you if you tell an embedded developer you use MicroPython instead of C. Maybe a current chapter is closing and a new one is beginning, and my part was in the last chapter, just like them.
I'll end with saying I really like using AI for code; it's got me excited about technology again. So many projects that were out of reach due to time (I have a family + stressful career) are now back on the table, like when I was in college with nothing but time on my hands.
I will never not hate on Micropython.
This doesn't come from a position of gatekeeping embedded software engineering.
I just hate Python.
For any serious system you still need to understand and guide the code, and unless you do some of the coding... you won't. It's just that the novelty right now is skewing our reasoning.
Verifying generated code and writing code are not equivalent at all. You're replacing a builder's mindset with a supervisor's mindset, and the supervisor's workers are dumb. You're bound to end up with the same class of problems we always had when programmers were reviewing their own code or testing their own features. Blind spots.
Fine for personal hobby tools or quick scripts. Not acceptable for business-critical software.
The LLM hype seems to have made an entire profession forget the painfully learned lessons of the last 70 years.
The only way to write like that is to have a real theory of mind for the two characters and understand that there are four processing speeds: that of both speakers, that of the narrator, and that of the reader.
Also, yes, I know the origin is Star Wars, but it went viral recently a very specific way.
The power of edgelord memes.
For the article it was nice, but the font is really what got me.
The broader corporate world has never wanted code monkeys. They want "boring" reliability and pay a reasonable wage for it. On the other hand, they also won't tolerate contrarians who can't deliver, so maybe some of the fear from people posting this sort of thing really is justified.
There is perhaps some relevance to the analogy however, because the US is designed in such a way that makes walking difficult to impossible. I am already seeing this pattern in vibe-coded areas where engineers will just use AI because it's too difficult to parse and edit by hand.
I didn't. Yesterday I walked 11 km for errands. Today I took a detour when walking to work, a more scenic route with less traffic.
For me walking is not much slower than using public transport (you need to get to it, then from it to the point of your destination), and not much slower than a car (stuck in traffic, finding parking, not to mention the road rage). I'd be faster on a bicycle but I'm not in a hurry and enjoy my walks.
They literally made it a crime to walk down the street.
It's also a crime to jog on the railroad tracks.
If a wagon or trolley hit someone that was considered the fault of the driver, every time.
When cars started arriving and being driven around by people who were reckless and bad at it, you started getting manufacturers and "motorists" lobbying for the concept of and laws around jaywalking. Even the word was a way to delegitimize what used to be normal. "Jay" was negative slang for a country person (think redneck). The idea was "modern city people stay out of the streets!"
"Fighting Traffic" by Peter D. Norton talks about this at length.
The suburbs didn't exist when automobiles hit the market. Most people lived in cities because that's where the jobs were and transportation (outside of whatever public transportation options the cities provided) was limited. Kids and adults used the streets freely (which were for horses, though they were widened as automobiles started growing in popularity).
This changed as cars killed kids (and adults) who didn't know that cars were much faster than horses and didn't react in time. Traffic deaths were so numerous, cities invested lots of money in "safety parades" that were kind of gruesome, actually (like showing tombstones of the future deceased). [^0] Jaywalking was a crime that was invented to deal with exactly this phenomenon.
People fought HARD to keep the streets free (where else are kids going to play?). People lost that battle, as we know.
[^0] https://www.bloomberg.com/news/features/2022-06-10/how-citie...
> The 1% utility AI has is overshadowed by the overwhelming mediocracy it regurgitates.
Alas, I think the tech crowd has collectively painted humanity into a corner where not playing is not an option anymore.
The combination of having subverted copyright and enabled cheap machine replication kills large swaths of creativity, at least as a viable living. One can still do many things on an artisanal level, certainly, and as excited as I am about AI, it's hard not to see it as a big L for humanity's creative output.
Nobody. When I said artisanal I meant that there is no mainstream market for it.
Therefore, things like writing, film, sales, etc are less productively scalable by bots
And things like code, where people don't care how the sausage is made as long as it "works" are more productively scalable by bots
And even in the situation of code, the job description leans more on defining what "works" which requires the human touch
You don't have to give up on everything to participate, but it can be a space to go to if you're tired of every social interaction being mediated by (I'm being glib) hustlers
Was this how other professionals dealt with their grief? Like a translator in the advent of ML based translations? Like a lift man?
I've come up with a set of rules that describe our reactions to technologies:

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.
A lot of the dissonance on this forum is probably age based.
You can try to avoid consuming AI-generated material, though of course part-way through a lot of things you may wonder whether it is partly AI-generated, and we don't yet have a credible "human-authored" stamp. But you can't really keep them from using your work to make cheap copies of you, or at least from reducing your audience by including information or insights from your work in the chat sessions of people who otherwise might have read your work.
Imagine having 6 software engineering jobs, each paying maybe $150k a year, all being done by agents.
Hell, I might even do this secretly without their consent. If I can hold just 10 jobs for about 3 or 4 years, I can retire and leave the industry before it all comes crumbling down in 2030.
The problem of course, is securing that many jobs. But maybe agents can help with applying for jobs.
You seriously need to go outside and touch grass if you are so defeated by another chess-winning machine.
Nobody wants to watch AI play chess; nobody wants to read AI blog posts.
AI makes human writing more valuable, not less.
I will pay good money for pure human-made books, certified as made without a single word auto-generated, whether in the original or during the process of translation.
In what world is "the media" not an integral, tightly-bound part of the ratchet mechanism that seeks to suppress all distinction?
The supposedly starved don't seem to care much for such food. Blogs are kind of a wasteland.
Well, there’s not much of a point leaving a comment saying “yes, this, exactly this,” so I’ll leave one here on behalf of my fellow lurkers.
The more AI gets shoved down my throat, the less I’m inclined to use it for anything, and the more I’m inspired to write my own writing, make my own art, and create my own code — with great creative joy and burning anger. Enjoy your 1000x productivity gains and your inevitable burnout as you downskill to a glorified inference loop.
I'm really getting tired of the programming obituaries. As if LLMs didn't fail at complex tasks, as if they didn't vomit shit code, as if they didn't just copy the patterns surrounding the new code, and as if they didn't hallucinate and write downright wrong code or make up libraries. Yet for some reason, every time you bring it up, someone will come along and say "You're not using it right, then." Is it that, or is it just that they're only doing toy projects? I'm led to believe the latter.
At this point I don't know what's organic and what's not. Reddit is filled with astroturfing for big LLM. Maybe this place is too? Even if that weren't the case, I'm led to believe that it isn't uncommon for people to swallow all of the big LLM propaganda and fall into despair, or into unrealistic expectations, and just parrot it everywhere else. One thing is for certain: LLM evangelism has all the money in the world, and LLM denial doesn't. It's only natural to think that the balance is tilted in terms of media presence.
At best, or worst, LLMs can't do anything you couldn't do better yourself with a scaffolding prompt plus manual editing, and at the end of the day, you still need the cognitive energy to review, veto, or come up with the implementation. What does this actually do, anyway, other than save you a bunch of keypresses? I wonder if the people touting it as all that really didn't think before LLMs, or just switched their brains off around them.
I used to really like this site, but I think that just consuming the RSS feed is enough for me. I think that lobste.rs has fewer "trend chasing" points of view these days, and I do wonder if it might be that on here there's a larger share of non-technical people eager to call for the funeral of things.
Setting aside the self-delusion that leads a considerable number to erroneously rate themselves above average, the reason you create blog posts should not be the attention you might gain; there simply are not the eyeballs. You create as a form of self-expression, to organise your thoughts, to create a record of them.
AI can never compete in those areas because, as it has always been, the act of creation itself is the goal.
I think most people cannot distinguish between "genuine" creativity and an artificial amalgamation of training data and human-provided context. For one, I do not know what already exists. Some work created by AI may be an obvious rip-off of the style of a particular artist, but I wouldn't know. To me it might look awesome and fresh.
I think many of the more human-centric thinkers will be disappointed at how many people just won't care.
Pop music is often composed by dozens of people who specialize in a thin sliver of the track - lyrics, vocals, drums, &c. - and then it's given a pretty face and makes the charts. That's really no different than something like Suno.
I think AI is forcing people who thought that THEIR thing was too nuanced or too complex to be replaced by technology to reckon with what makes them special.
And can or will AI create it?
AI is perfect for that. It reveals, perhaps to the dismay of those who revel in high art, that the idea of art having genuine creativity might be an illusion, if most people find AI output acceptable.
What AI represents to me is a teacher! I have so long lacked a music teacher and musical tools. I spent my entire career doing invisible software at the lowest levels, and now I can finally build cool tools that help me learn, practice, and enjoy playing music! Screw all the haters; if you're curious about a wide range of topics and already have some knowledge, you can gallivant across a vast space and learn a lot along the way.
AI is a bit of a bullshitter, but don't take its bullshit as truth, just as you should never take anything your teacher says as gospel. How do we know what's true? The truth of the universe and the world is that underneath it all, it is self-consistent, and we keep making measurement errors. The AI is an enormous pot of magic that it's up to you to organize with... your own skills.
You have to actively resist deskilling by doing things. AI should challenge you and reward you, not make you passive.
Use AI to teach yourself by asking lots of questions and constantly testing the results against reality.
For me right now, that's the fretboard.
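To make that concrete, here's a minimal sketch of the kind of self-made practice tool I mean: a fretboard note-naming drill. Everything in it (standard tuning, the names, the structure) is my own assumption, not anyone's published tool.

    import random

    # The 12 chromatic note names, starting from E so index math is simple.
    CHROMATIC = ["E", "F", "F#", "G", "G#", "A", "A#", "B", "C", "C#", "D", "D#"]
    # Standard-tuning open strings, low E (string 6) to high E (string 1) --
    # an assumption; swap these out for other tunings.
    OPEN_STRINGS = ["E", "A", "D", "G", "B", "E"]

    def note_at(string_index: int, fret: int) -> str:
        """Note name at a given string (0 = low E) and fret."""
        offset = CHROMATIC.index(OPEN_STRINGS[string_index])
        return CHROMATIC[(offset + fret) % 12]

    def drill(rounds: int = 5) -> None:
        """Quiz the user on random fretboard positions and check answers."""
        for _ in range(rounds):
            s, f = random.randrange(6), random.randrange(13)
            guess = input(f"String {6 - s}, fret {f} -- which note? ").strip()
            answer = note_at(s, f)
            print("Correct!" if guess.upper() == answer else f"Nope, it's {answer}.")

    if __name__ == "__main__":
        drill()

Twenty-odd lines, and every answer is immediately testable against a real instrument, which is exactly the "constantly test the results against reality" loop described above.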
According to the author AI is 99% hype.
That 1% of AI utility can unlock more for humanity than 99.999% of blogs; static text hosted from a laptop in a closet.
The oddball position that cheap publishing via the web is a path to The Next Generation for humanity is 100% hype.
Other than feeding dopamine addiction, humanity has not improved greatly since we read all those insipid posts on GeoCities that no one remembers today.
It's all been 99%+ hype to feed Wall Street. Young GenX and older Millennials with tech jobs were temporary political pawns and are going to end up bag holders, like many older GenXers and Boomers who lived through the car boom, the housing boom, the retail boom.
Same old human shit, different hype.
Everyone wants to be a famous author, or at least a published/somewhat acknowledged one; few are willing to write their novel and be satisfied with zero or near-zero sales/readings.
But that is exactly what you need to do, especially in the age of AI. Everyone who was "in it to win it" (think linkedinslop which existed before AI) is going to certainly use AI - because they do not give a shit about the quality of themselves - they just want the result.
And you can only become a writer (unpublished, unread, or no) by doing the writing - it takes time (10,000 hours?) that cannot be replaced by AI, just like you can't have the body of a marathon runner without running (yes, yes, the joke). You may be able to get 26 miles and change away, even very fast, but unless you personally do the running of that distance without cheating, you will not get the inherent benefits.
And if you instruct an AI, or another human even, to write for you, you may get some of the results you want, but you won't have changed to become a writer.
We shouldn't celebrate the successful blogs; they're already rewarded enough. It's celebrating the unsuccessful blogs that is needed - even if, frankly, the vast majority of them are sub-AI levels of crap, there is still a human changing and progressing behind them.
Babies fall over a lot but unless you take them out of the stroller and let them do so, they'll never progress to crawling, walking, running.
Might be just me though, but I definitely don’t get why blogging should be the solution.
The back and forth is more annoying now as well. Instead of trying different approaches to code to produce an outcome, I'm prompting "this is broken; can you fix it?" or "this almost works! can you do $x instead" and doing stuff in another window while Claude churns. This isn't fun or stimulating at all for me. It's like carrying a dog to a ball instead of making them run for it.
Using it also reminds me that a big part of the experience I've spent years accumulating is (or feels) no longer useful. That everyone can "write" (produce) code now is a good thing, but SO MUCH TIME AND ENERGY was spent on putting writing and understanding code on a pedestal (especially on this site), and seeing that get torn down while gaslighting us into thinking that this never happened has been affecting my psyche for sure.
There's also the dirty feeling I get from abdicating more and more of my skills to a big company that is probably salivating at the idea of developers not being able to write code without Claude Code anymore.
I have a handful of projects I'd like to work on, but I'd rather leave them on the shelf until I can hand-write them than use agents to finish them and, in doing so, create a codebase that I'm less likely to be able to maintain without agents.
I guess I can turn the question back at you: how are you not losing your mind at becoming a glorified spec writer?
Without AI I would probably never get to them because realistically, I do not have dozens, or hundreds of personal hours to devote to fun, but unnecessary projects.
What's more, I can already explore five ideas at once. There is no backlog formed by incapability, lol.
I don't understand the eagerness for "productivity as a service", celebrating quotas, and a lot of other conjecture I could get twisted up with... but I'll skip it this time. Rarely pays off :)
I mean, to put a price tag on enabling vastly more creation than would otherwise have occurred!
More pretentious gatekeeping from luddites who like to yell at clouds. This is someone who would love a piece of artwork created using AI tools right up until someone told them it was created using AI tools.
Apart from the fact that, yes, GenAI art is overwhelmingly shit and by definition derivative, it's also not unreasonable to not want to be lied to. People look for human connection. They like being lied to about human expression in a piece of art about as much as they like being catfished.
Mixed messages fr
Hot take: folks packing it in because of AI probably were not difference makers before AI, and wouldn't be difference makers after it either.
I agree with the author: keep writing. It helps hone your ability to communicate effectively, which we'll all need for some time to come (at least until we become batteries).
Anecdotal but I’ve been seeing a lot of the opposite. Some of those leaning in strongly are being propped up by the tools. Holding onto them like a lifeboat when they would have fallen off earlier.
That's generated audio. It may not be LLM-generated, but it's not read by a human.
To draw an arbitrary line between _this kind_ of generated content but not _that kind_ is seemingly a matter of perspective and preferences.
But partly overlapping with perspective is reasoning (subjective or objective), which is also something that makes a judgement not arbitrary. But your pre-judgement here tells me that that part is uninteresting to you.
- spend tons of tokens on useless stuff at work (so your boss knows it's not worth it)
- be very picky about AI-generated PRs: add tons of comments, slow down the merge, etc.
Eventually you are faced with company culture that sees review as a bottleneck stopping you from going 100x faster rather than a process of quality assurance and knowledge sharing, and I worry we'll just be mandated to stop doing them.
But that's the opposite of sabotage, you're actually helping your boss use AI effectively!
> spend tons of tokens on useless stuff at work (so your boss knows it’s not worth it)
Yes, but the "useless" stuff should be things like "carefully document how this codebase works" or "ruthlessly critique this 10k-lines AI slop pull request, and propose ways to improve it". So that you at least get something nice out of it long-term, even if it's "useless" to a clueless AI-pilled PHB.
But the Kool-Aid has been drunk, and the philosophy of Silicon Valley cemented in your field. It will take a lot of pain or work to get it to change.
There is nothing new about using machinery to automate boring / repetitive tasks, including the wall of resistance that comes up. But it should be clear that genuinely useful tooling and automation tends to become a normal part of life, from the plow, to the printing press, to the dishwasher, to digital video editing, to autocorrect, and now to large language models.
There's a lot that has to be worked out with LLMs in particular, as they are now encroaching heavily upon human creativity and thought. This is an extremely important topic. But rants like these, with terms like "the plagiarism machine" and "the solution is that we all must vow to never use AI in any shape or form", are not really contributing.