I think what’s interesting about AI, and why there’s so much conversation, is that in order to be a good user of AI, you have to really understand software development. All the people I work with who are getting the most value out of using AI to deliver software are people who are already very high-skilled engineers, and the more years of real experience they have, the better.
I know some guys who were road warriors for many years: everything from racking and cabling servers, setting up infrastructure, and getting huge cloud deployments going, all the way to embedded software, video game backends, etc. These guys were already really good at automation, seeing the whole life cycle of software, and understanding all the pressure points. For them, AI is the ultimate power tool. They’re just flying with it right now. (All of them are also aware that the AI vampire is very real.)
There’s still a lot to learn, and the tools are still very, very early on, but the value is clear.
I think for quite a few people, engaging with AI is maybe the first time in their entire career that they are having to engage with systems thinking in a very concrete and directed way. This is why so many software engineers are having an identity crisis: they’ve spent most of their career focusing on one very small section of the overall SDLC, all the while believing that was most of what they needed to know.
So I think we’re going to keep talking for quite a while, and the conversation will continue to be very unevenly distributed. Paradoxically, I’m not bored of it, because I’m learning so much listening to intelligent people share their learnings.
> I think what’s interesting about AI, and why there’s so much conversation, is that in order to be a good user of AI, you have to really understand software development.
This I agree with completely. You can see it in the difference between a prompt where you know exactly what you want and one where things are a little woolly. A tool is always better used in the hands of a well-trained craftsperson.
> So I think we’re going to keep talking for quite a while

Me neither, and to be clear I'm okay with that. This was mostly a rant about the lack of diversity in the discourse.
This isn’t to say there’s no hype. Just that if you’re not seeing big productivity gains, you need to make sure you really are an outlier and not just surplus to requirements.
I am amazed at the incredible things it can do - only to turn around and not be able to do a simple task a child can do. Just like people.
A lot of software engineering career capital was built on knowing which obscure method to call, which Stack Overflow answer to trust, how to navigate a specific framework's quirks. That knowledge was genuinely hard to acquire and it was a real signal. Now it's table stakes. The career capital that survives is knowing why you'd make a particular architectural decision, how to tell if generated code is actually correct, what the error message is really telling you.
The road warrior framing is right. Those people internalized systems thinking across the whole stack over years. AI doesn't replace that — it makes it worth more, because now one person with that mental model can move faster than a team without it. The people who are "bored of AI" are often the people who already made that transition and stopped finding it novel. The people still anxious about it usually haven't yet.
But at the same time, the more I read about AI, the more I realize I need to learn about AI. Thus far I'm just using Cursor and the Claude Code extension alongside obra's Superpowers, and I've been quite happy with it. But on Twitter I see people with multiple instances of Claude Code or OpenClaw talking to each other, and I don't even know how to begin to understand what's going on there. But I'm not letting myself get distracted — Claude Code and OpenClaw are tools. They could go away at any time. But systems thinking is something that won't go away. At least, that's my bet.
Specialists/generalists, top-down/bottom-up, BFS/DFS, pragmatists/idealists, ADHD/ASD; lots of continuums in software work and those at either extreme have biases.
Personally I think that there will be fewer programmers needed, and the ones that remain will have had to mellow out towards the center on all these continuums. We won't be able to rely on big teams balancing each other out.
Generalists will need to learn which details matter. Specialists will need to learn the delegation and risk tolerance usually reserved for the bemoaned management track. Hard to say which is the easier journey.
Well, there were also a lot of unrelated things that happened around last November for me, but yes, getting into vibecoding for real was one of them, and man, I feel physically drained coming back from work and going to use more AI.
Not sure what it is. I'm using AI personally to learn and bootstrap a lot of domain knowledge I never would have learned otherwise (even got into philosophy!), but man is it exhausting keeping up with AI. I would burn through a week's worth of credits in a day, and now I haven't vibe coded in a week.
I think I will chill. One day at a time.
Me too. A key purpose of HN, and a bright time for that.
AI makes a ton of bad decisions too, and it's up to you to work with it. If I had more knowledge of the dangers hidden in the things I'm developing, I'd move even faster.
I was able to make a great full web app, which I think is now hardened for prod, though it had to be refactored to get there. Which it happily did.
It's really about asking the right questions, breaking down tasks, and planning now. I'm going to tackle a huge project, hoping to share it here.
I don’t know if it’s the Universe delivering this farce or it’s the emergent LLM Singularity.
Since COVID I have seen teams scaled down; lots of custom development or devops/infra work got replaced with SaaS and iPaaS cloud products, serverless/lambda, managed containers.
This is the next step.
Great that people feel more productive; unfortunately for many of them, us, more productivity means the C-suites can do some head count reduction yet again.
It's what happened with the internet and computer usage. As Apple made it easier to get online with zero computer knowledge, suddenly we're electing people like Donald Trump.
Where are they flying, and why has software gone to shit?
Maybe these superstar programmers have to keep their reality-breaking technology secret, but everything has not only degraded, it has turned to absolute trash.
My partner teaches at a small college. These people are absolutely lost, with administration totally sold on the idea that "AI is the future" while lacking any kind of coherent theory about how to apply it to pedagogy.
Administrators are typically uncritically buying into the hype, professors are a mix of compliant and (understandably) completely belligerent to the idea.
Students are being told conflicting information -- in one class that "ChatGPT is cheating" and in the very next class that using AI is mandatory for a good grade.
It's an absolute disaster.
In the relocation industry, it's losing translators, relocation consultants and immigration lawyers a lot of work. Their cases are also getting tougher because people are getting false information from ChatGPT and arguing with them.
This problem is compounded by the lack of training data for that topic. I spent years surfacing that sort of information and putting it online, but with AI overviews killing the economics of running a website, it feels pointless.
I see such stories everywhere. People being replaced by something half as good but a tenth of the cost. It's putting everyone out of work and making everything worse.
The closer they can map their real problems to make-document-bigger, the better their results will be.
Alas, that alignment is nearly 100% when it comes to academic cheating.
Doesn't sound that different from my tech job
Now I have this love/hate relationship with it. Claude Code is amazing. I use it everyday because it makes me so much more efficient at my job. But I also know that by using it I’m contributing to making my job redundant one day.
At the same time I see how much resources we are wasting on AI. And to what end? Does anybody really buy the BS that this will all make the world a better place one day? So many people we could shelter and feed, but instead we are spending it on trying to make your computer check and answer your emails for you. At what point do we just look up and ask… what is the damn purpose of all of this? I guess money.
I know someone who worked for a nonprofit that made pregnancy health software that worked over text messaging. Its clients were women in Africa who didn’t have much, but they had a cell phone, so they could get reminders, track vitals, and so forth.
They had to find enough funding to pay several software engineers to build and maintain that system. If AI allows a single person to do it, at much lower cost, is that bad?
Unfortunately for fellow developers, software enables massive scale.
Yeah - I think there's a lot of cool sci-fi like stuff in the future.
To add to the list of questions: it's undeniable that AI is making humans dumber by doing mental work previously done by humans. So why do we spend so much energy making AI smarter and fellow humans dumber?
Shouldn't we be moving in the opposite direction - investing in people instead of some software and the greedy psychopaths at the helm of the large companies behind it?
I don't see how this is the case if you're anything more than a junior engineer... it unlocks so many possibilities. You can do so much more now. We are more limited by our ideas at this point than anything else.
Why is the reaction of so many people, once their menial work gets automated, "oh no, my menial work is automated." Why is it not "sweet, now I can do bigger/better/more ambitious things?"
(You can go on about corporate culture as the cause, but I've worked at regular corporations and most of FAANG. Initiative is rewarded almost everywhere.)
> Does anybody really buy the BS that this will all make the world a better place one day?
Why is it BS? I'm shocked that anyone with a love and passion for technology can feel this way. Have you not seen the long history of automation and what it has brought humanity?
There is a reason that we aren't dying of dysentery at the ripe age of 45 on some peasant field after a hard winter day's worth of hard labor. The march of automation and technology has already "made the world a better place."
All that said, it’s extremely exciting. I’ve been in tech, in one way or another, for 25 years. This is the most energizing (and simultaneously exhausting) atmosphere I’ve ever felt. The 2006-2011 years of early Facebook, Uber, etc. were exciting but nothing like this. The future is developing faster than we can process it.
Would it be such a bad thing if the "right way" to build a JavaScript frontend didn't change so much every year?
I write Typescript and SQL by day, my last two personal projects were Rust and Perl.
I do worry that I'm not learning them as deeply, but I am learning them and without AI as an accelerant I probably wouldn't be trying them at all.
We're about due for some new computing abstractions to shake things up I think. Those won't be conceived by LLMs, though they may aid in implementing them.
Same. I wonder if the use of AI will lead to less invention and adoption of new ideas in favour of ideas with lots of training data.
I think we've seen what the enthusiasm leads to, once these companies establish dominance. We even coined a word for it: enshittification.
Everyone is in their own place adapting (or not) to AI. The disconnect between even folks on the same team is just crazy. At least it's gotten more concrete (here's what works for me, what do you do) vs catastrophizing about the jobpocalypse or "teh singularity", at least in day-to-day conversations.
It's just not very interesting or useful to me to read about how you got AI to output better quality code or how you can program from your phone now without going into detail. And so many of the conversations are showing off the wins without talking about the tools, configurations, or other parts of the setup that made it possible.
> here's what works for me, what do you do
This is at least progress... but many want to remain in denial, and can't even contemplate this portion of the conversation.
We're also ignoring the light AI shines on our industry, and how (badly) we have been practicing our craft. As an example there is a lot of gnashing of teeth right now about the VOLUME of code generated and how to deal with it... how were you dealing with code reviews? How were you reviewing the dependencies in your package manager? (Another supply chain attack today so someone is looking but maybe not you). Do you look at your DB or OS? Does the 2 decades of leet code, brain teaser fang style interview qualify candidates who are skilled at reading code? What is good code? Because after close to 30 years working in the industry, let me tell you the sins of the LLM have nothing on what I have seen people do...
I hide it with a uBlock Origin filter:

    news.ycombinator.com##td.title:has-text(/LLM|AI/i)

Some of it is very interesting, but maybe it shouldn't be on the home page unless it reaches a certain critical mass (similar to Show HN)?
Alternately: the trough of disillusionment.
For me, the issue isn't that I can't conceive of work AI could help with. It's that most of the work I currently need to be doing involves things AI is useless for.
I look forward to using it when I have an appropriate task. However, I don't actually have a lot of those, especially in my personal life. I suspect this is a fairly common experience.
I am extremely skeptical of AI products anyone builds. It's just using one black box to build scaffolding around another black box, and then they typically want to charge money for it. I don't see any value there.
AI products can and do help make the raw models applicable to targeted domains. Think of them as a black box, sure, but that doesn't mean they don't add value.
Also, it depends on who the target user is.
AI can be used to build deterministic software
But I don't see it that way. I've been fascinated by AI since I was a little kid (watching Max Headroom, Knight Rider, Whiz Kids, Wargames, Tron, Short Circuit, etc in the 80's) up through college in the 1990's when I first read about the 1956 Dartmouth AI workshop that kicked the field off, and up to today where we have the most powerful AI systems we've had. Every single bit of this stuff is wildly fascinating to me, but that's at least in part because I recognize (or "believe" if you will) that there's a lot more to "AI" than just "LLM's" or "Generative AI".
I still believe there are plenty of neural network architectures that haven't been explored yet, plenty more meat on the bone of metaheuristics, all sorts of angles on neuro-symbolic AI to work on, etc. And even "Agents" are pretty exciting when you go back and read the 90's era literature on Agents and realize that the things passing for "Agents" right now are a pretty thin reflection of what Agents can be. Really understanding multi-agent systems (MASs) involves economics, game theory, computer science, maybe even a hint of sociology.
As such, I still find AI fascinating and love talking about it... at least in the right context and with the right people. :-)
And besides... as they[1] say: "Swarm mode is sick fun".
[1]: https://static0.srcdn.com/wordpress/wp-content/uploads/2022/...
> What makes this worse, is our bosses have bought into it this time too. My managers never cared much about database technologies, IDE’s or javascript frameworks; they just wanted the feature so they could sell it. Management seems to have stepped firmly and somewhat haphazardly into the implementation detail now. I reckon most of us have got some sort of company initiative to ‘use more AI’ in our objectives this year.
At my big tech, AI is every conversation with everyone, every day. Becoming AI native is a huge deal for us. Literally everyone is making AI usage a core part of their job and it's been a big productivity accelerator.
Perhaps it's different where you work, so you don't see the sentiment.
I spent 2024 on Mastodon[1] and I absorbed their groupthink that AI was useless... I wish I could get that year of my life back. I wish I had that extra year's headstart on AI compared to where I am now. So many of my coding frustrations that year might have been solved by using AI. I am reluctantly back on X - I hate what has been done to Twitter, but that's where so much of the useful information on using AI is being shared.
Well, back to it. Claude has been building another local MCP server tool for me in the background.
Before that we were excited about the wheel and the creation of fire. All capital drained into those ephemeral fancies.
The cycles cycle on.
Like the new frontend frameworks that came out every week from sometime after 2010. Not jumping on every single one, and instead waiting until React was declared the winner before learning it, worked well. Sure, someone who used it from day 1 had more experience, but one could quickly catch up.
The only thing that has stuck thus far is the cloud. Though not for infinite scalability and resiliency, because that just dumps big invoices in your lap.
The new HN is full of people filled with anxiety about being replaced by an advanced calculator.
To an outsider, it could almost be funny if it wasn't so sad.
---
Personally, I'm still very interested in the topic.
But since the tech is moving very fast, the discussion is just very very unevenly distributed: There's lots of interesting things to say. But a lot of takes that were relevant 6 months ago are still being digested by most.
Never heard this and I like it very much. This is just an off-topic comment to say thanks!
[1]: https://en.wikipedia.org/wiki/Mastodon_(social_network)
All they essentially did was tell the LLM to test and verify whether the answer is correct with a prompt like the following:
>"You just edited X. Before moving on, verify the change is correct: write a short inline python -c or a /tmp test script that exercises the changed code path, run it with bash, and confirm the output is as expected."
Now whether this is true, I don't know, but I think talking about this kind of stuff is cool!
Our local tech meetup is implementing an "LLM swear jar" where the first person to mention an LLM in a conversation has to put a dollar in the jar. At least it makes the inevitable gravitational pull of the subject somewhat more interesting.
I'd imagine, similarly, there were points in time when people went to concerts just to see the electric guitars and lighting setups.
Arguably, Jean-Michel Jarre concerts were 100% gear-porn shows.
But nobody wants to hear about prompt calibration or pipeline architecture. They want to hear "I replaced my whole team with agents." The boring, useful work is invisible, and the flashy stuff gets all the oxygen.
The new GenAI architectures and tooling supported by them just give more fun things to do and fun ways to do it.
I "tried" Claude the other day. It gave me 3 options for choosing, effectively, an API to call an AI. The first were sort of off limits, b/c my company… while I think we have a Claude Pro Max Ultra+ XL Premium S account, it's Conway's Law'd. But, oh, I can give it a vertex API key! "I can probably probably get one of those" — I thought to myself. The CLI even links you to docs, and … oh, the docs page is a 404. But the 404's prose misrepresents it as a 500.
Maybe Claude could take a bit of its own medicine before trying to peddle it on me?
We're on like our 8th? 9th? GitHub review bot. Absolutely none of them (Claude included) seems capable of writing an actual suggestion. Instead of "your code is trash, here's a patch" I get a long-form prose explanation of what I need to change, which I must then translate into code. That's if it is correct. The most recent comment was attached to the wrong line number: "f-string that does not need to be an f-string on line 130" — this comment, mind you, the AI put on line 50. Line 130? "else:" — no f-strings in sight.
"Phd level intelligence."
Another thing for me is that it has gotten a lot harder for small teams with few resources, let alone one person, to release anything that can really compete with what the big players put out.
Quite a few years back I was working on word2vec models / embeddings. With enough time and limited resources I was able, through careful data collection and preparation, to produce models that outperformed existing embeddings for our fairly generic data retrieval tasks. You could download models from Facebook (fastText) or other models available through gensim and other tools, and they were often larger embeddings (e.g. 1000 dimensions vs 300 for mine), but they would really underperform. And when evaluating on the general benchmarks that existed back then, we were basically equivalent to the best models in English and French, if not a little better at times. Similarly, later, some colleagues built a new architecture inspired by BERT after it came out that again outperformed any existing models we could find.
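For anyone who missed that era, the core training loop was small enough to sketch. Here is a minimal gensim 4.x example; the corpus and hyperparameters are toy placeholders, since the real advantage came from the careful data collection and preparation:

    from gensim.models import Word2Vec

    # Toy corpus: an iterable of tokenized sentences. In practice this
    # was a large, carefully collected and cleaned domain corpus.
    sentences = [
        ["query", "matches", "document", "ranking"],
        ["embedding", "vectors", "measure", "similarity"],
        ["careful", "preprocessing", "improves", "retrieval"],
    ]

    # 300-dimensional skip-gram embeddings (vs the 1000-dim downloads
    # mentioned above)
    model = Word2Vec(
        sentences,
        vector_size=300,
        window=5,
        min_count=1,  # raise this on a real corpus
        sg=1,         # skip-gram
        workers=4,
    )

    # Nearest neighbors in the learned space
    print(model.wv.most_similar("query", topn=3))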
But these days I feel like there is not much I can do in NLP. Even to fine-tune or distill the larger models, you need a very beefy setup.
I don't know how I'm burnt out from making this thing do work for me. But I am.
AI is the red herring that'll waste all our attention until it's too late.
I'm finding the detractors worse than the hype, because it seems like a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then. They'll say things like "why would I want to consume X amount of energy and Y amount of water just to get a wrong answer?"
In other words, the people who think generative AI is an absolutely worthless and useless product are more annoying than the ones who think it's going to solve all the world's problems. They have no idea how much AI has improved since it reached center stage 3 years ago. Hallucinations are exceptionally rare now, since models rely on searching for answers rather than just on what was in their training data.
We got Claude Desktop at work and it's been a godsend. It works so much better to find information from Confluence and present it to me in a digestible format than having to search by hand and combing through a dozen irrelevant results to find the one bit of information I need.
[0] For the purpose of this comment, this subset is meant to be detraction based on the quality of the product, not the other criticisms like copyright/content theft concerns, water/energy usage, whether or not Sam Altman is a good person, etc.
So tired of seeing this trope. Data center energy expenditure is like less than 1% of worldwide energy expenditure[1]. Have you heard of mining? Or agriculture? Or cars/airplanes/ships? It's just factually wrong and alarmist to spread the fake news that AI has any measurable effect on climate change.
[1] https://www.iea.org/reports/energy-and-ai/energy-supply-for-...
It will calm down once the dust starts to settle and there's some kind of consensus on how the chips have fallen.
Also there is an irony that talking about being sick of talking about AI is still talking about AI.
The only thing that triggers me about it is people's inability to understand how a scam works, even after falling for such scams for the n-th time.
Hyperloop, uBeam, blockchain, Elon Musk taking us all to Mars...
In this line of scams, LLMs are a wet dream...
I think the "can do" part gets boring but now I'm paralleling this to trust relationships and fiduciary responsiblities. What I mean is that we can not only instruct but then put a framework around an agent much like we do a trustee where they are compelled to act in the best interests of the beneficiaries (the human that created them in this case).
Anyway it's got me thinking in a different way.
I'm also learning art, and I'll never use AI there, so I thought I have less time for hobby programming and could just use AI for that, but then I come back to the concern I mentioned above. Plus, I can't proudly share anything I made with it either, because I wouldn't have done much of the work at all.
I'm also feeling burnt out about web dev in general and doing the same thing during my free time just feels like more work to my brain. I wish I could find something interesting to do, and if I don't I'll quit programming in free time for good.
As shown in "Normal Accidents" the strength is as high as its weaknesses, and in any complex system this is even more a problem. A catastrophic event is still to happen with AI as it happened in basically every complex system. They ocurred with trained people that wasnt believing in magic or laziness... so the scenario is even worse for AI.
Yes, I'm bored of people who believe in magic and the ghosts that are supposedly emerging but are yet to be seen.
I also have to say that I don't use AI in my personal or professional life. And that is simply because I haven't felt any need to use it.
I'm currently reading Non-things by Byung-Chul Han, which is an interesting exploration of the internet's impact on humanity/humans. Haven't finished it yet, but enjoying it so far.
My technical interests are varied, and it's so boring to come to HN and see that a third (or more) of the front page is about AI.
Enough already. Let's talk about other things! And yes, I know, I should be a part of the solution and submit more articles.
Currently it feels a bit like everyone is talking about what new editor they're using. I don't care about that type of developer tooling very much. AI isn't coming up with some exciting new database, type system, etc etc.
"Look at how I'm able to web dev x% faster" because of LLMs is boring.
> seems to have devolved into three different people’s (almost identical) Claude code workflow
I do feel like I've seen a number of those articles.
Asking it to draft was weakening my own skills.
Everything devolves from “cool, that was a nice single video” to “here’s my schtick…. AGAIN”.
At least I'm not tired of talking about how it's killing websites and filling everything with spam. I have spent most of a decade building a useful resource, and Google AI overviews has killed my traffic. It killed everyone's traffic. This thing gave me purpose, and I'm watching AI slowly strangle it.
I mourn the death of the independent web, and it frightens me that this is still the happy stage. We haven't yet felt the effect of stiffing content creators, and the LLM tools haven't yet begun to enshittify.
I am tired of discussions about agentic coding, but I would feel a lot better if we acknowledged all the harm being caused. Big tech went all in on this, stealing everything, putting everyone out of work, using up all resources with no regards for consequences, and they threaten to kill the economy if we don't let them have their way.
I feel like we are heading for a much worse place as a society, and all we can talk about is how to 10x our bullshit jobs, because we're afraid of falling behind.
I’m just kidding. LinkedIn feed became so unbearable, that I had to install an extension to turn it off.
Large organizations are making major decisions on the basis of it. Startups new and old will live and die by the shift that it's creating (is SaaS dead? Well investors will make it so). Mass engineering layoffs could be inevitable.
Sure. I vibe coded a thing is getting pretty tired. The rest? If anything we're not talking about it enough.
The most frustrating thing about AI, I find, is that it is (trying to) replace the things that I like: writing, art, programming, while leaving me with the things I absolutely hate, like testing, chores, etc. I do like reading code, so that's not a big issue; I spent most of the day doing that before AI. However, being a full-time QA was always my nightmare, but here we are; AI sucks at it for anything more than a trivial frontend, and at backend it's not that good either (I should write the tests, or rather tell the AI in detail what to test for, otherwise it will just positively test what it actually wrote, not what the spec says it had to write, in many cases).
But no, I like talking about AI, just not so much about slop, trivial usage (Show HN: here is a SaaS to turn you into a rabbit!) or hyperbole (our jubs!). (Although I do believe it is the end of code; as I said, I read and review code all day but have not written much for the past 6 months while working on complex, non-trivial SaaS projects; I am with antirez; it is automatic coding, not vibecoding, for me.)
Sounds very much like this blog I read too… he laughs at AI in his workplace a lot: www.sometimesworking.com
I've been spamming some auto research loops, and it is so addictive. Think about how many of humanity's problems will be solved because of this. Of course, it will also disappoint, like, we are still waiting for flying cars, but man, this is a unique moment in history.
You could use AI to do it! Fight fire with fire.
I'm neutral on AI - so far it seems useful but flawed. But I don't want to hear about it constantly.
I'm somewhat tired of seeing the same rehashed claims of future ability, non-ability, profit, loss.
I actually like talking about the implications, future risks and challenges of AI. I have made submissions on ways AI should be regulated to benefit society. The problem is the assumption of what is happening and what will happen.
Too many people seem to enter the conversation feeling that the absence of doubt is the same thing as being informed.
And especially people making claims based on premises they seem to believe will become true if they build big enough towers on them.
The number one thing that bothers me in all this, is people assuming the contents of the minds of others.
I find the pathologising of Sam Altman to be the most egregious form of this. It is one thing to disagree with someone's decisions, another thing to disagree with their stated opinions, but to decide upon a person's character based upon what you believe they are thinking in their private thoughts is simply projection.
I know this is an opinion of little worth to many, but my impression of Sam Altman is just a person who has different perspectives to mine. The capitalist tech world he lives in would inevitably shape values different from mine. What I have seen of him is consistent with a sincere expression of values. I can accept that a person might do something different from what I would, even the opposite of what I want, while believing that they can be doing so for reasons that seem to be morally the right thing to do.
This also happened with cryptocurrency. Crypto advocates believe that it is a good thing for the world. Too many consider those who believe that crypto could benefit society to be evil. There is a difference between being wrong and being evil. No matter how certain you are, you can still be wrong; in fact, beyond a point I would say increased certainty indicates a higher likelihood of being wrong.
So I'm happy to talk about AI. I have plenty to learn. I wonder if others went in with the goal to learn whether they would find it less tiring.
But the sooner we get to the part of history with the chromy-killer-robots and people-sabotaging-datacenters-and-foundries, the sooner we will get some meaningful excitement.
It's worse when there's a colleague of yours encouraging that by using AI blindly, piling up technical debt just to move at the pace that Management expects after signing you all up on some AI tool.
At the end of the day, everyone is talking about AI. For AI or against AI, it doesn't really matter.
The analogy is someone from the 19th century talking about their slaves all day, which is of course nonsense, because they had other things to talk about.
Bored of hearing about it, bored of reading about it.
I love using these LLM tools, but honestly, it feels like every man and his dog has something to say about it, and is angling to make a quick buck or two from it.
And the slop, oh my goodness, it's never-ending on every site and service.
Tack on to that the increasing number of political stuff on here as well just makes it less and less an interesting place to visit.
Don't agree with the angry mob on the political stuff especially and you get downvoted/flagged into oblivion.
Just another echo chamber looking to have viewpoints confirmed in yet another one of the disappearing places online that foster any level of intellectual curiosity.
Never thought I'd feel nostalgic about that era...
Or another thought: why is it that a stochastic parrot can solve logic puzzles consistently and accurately? It might not be 100%, but it’s still much better than what you might expect from a Markov model of n-grams.
Openclaw is only sort of interesting. How to vibe code your first product is uninteresting. Claims about productivity increase from model usage are speculative and uninteresting. Endless think pieces on the effects of AI slop are uninteresting. There’s a lot of hype and grift and bullshit that is downstream of this very interesting technology, and basically none of that is interesting. The cool parts are when you actually open the models up and try to figure out what’s going on.
So no, I’m not bored of talking about AI. I’m not sure I ever will be. My suspicion is that those who are bored of it aren’t digging deep enough. With that said, that will likely only be interesting to people who think math is fun and cool. On the whole, AI is unlikely to affect our lives in proportion to the ink spilled by influencers.
It can't. As you say in the very next sentence. If it isn't solving any given puzzle with a 100% success rate, but randomly failing, then it isn't consistent.
I'm so exhausted by this and ready for the economic crash.
AI is especially sensitive to this. Unlike coding, where giving away the secret sauce also makes you look smart, divulging AI secrets only demystifies you -- revealing the shriveled man behind the Wizard's curtain.
So anyone boasting about AI is likely not doing anything useful with it.
Similar to finance tips, btw.
As bad as the AI hype wave is now, I can't help but wonder if it could have been even worse.
There are other interesting things in the world today, and HN is overwhelmed with pretend intelligence.
Hype, detractors, ALL OF IT!
Maybe a separate web page or RSS feed could be created that is dedicated to the subject...
Then we can get back to the unglamorous, boring, thankless task of delivering business value to paying clients, and the public discourse will no longer be polluted by our inane witterings.
If you're reading this and your life hasn't been thrown into disarray, you're likely just behind the times. There are a lot of people who are deep in tech who still don't understand what agents and LLMs can do.
I used to have this idea that if I built something cool it would be valuable to donate it to the world for free. But now increasingly I'd be just making a donation to the training data, and on top of this I'm in competition with AI slop. Most people won't tell the difference and won't care. The noise floor for doing absolutely anything collaboratively on the computer is now 10x higher than it was before, and I'm basically checked out at this point. Even HN is becoming tiring to read since I think around 10-15% of comments that I read are AI generated. When that number reaches 30% I'm done forever, gone. My life is too short to waste time on this shit.
"Yes" Proceeds to talk about AI.
So at this point I have to just assume this shit doesn't work very well for some reason, because no one is outputting anything with it that resembles good, useful software.
> At serious risk of sounding like a heretic here, but I’m kinda bored of talking about AI.
Umm.
> I get it, AI is incredible. I use it every day, it’s completely changed my workflow. I recently started a new role in a tricky domain working at web scale (hey, remember web scale?) and it’s allowed me to go from 0-1 in terms of productivity in a matter of weeks.
It’s all positives. So what’s the problem?
There isn’t a problem with AI. Of course. It’s just the discourse around it is “boring”. And the managers are lame about it.
And what has been the AI discourse for the last few years. The same formula.
- AI is either good
- ... or it is the best thing to have happened to Me
- But I have feelings[1] or concerns about everything around AI, like the discourse, or people having two-hundred concurrent AI agents mania
It’s all just grease for the AI Inevitabilism bytemill.
[1] https://news.ycombinator.com/item?id=47487774
> … And yes, I’m painfully aware of the irony of a post about moaning about posts about AI. Sorry.
OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe”
Very self-aware.
And now 117 points and 53 comments in 23 minutes.
> And this one will be different?

I think you're talking about my blog post here, in which case no, I'm afraid not. Hence the admission at the bottom.
> Umm.

??
> It’s all positives. So what’s the problem?

The article is trying to say that these things are great, but the level of conversation leads to a lack of novelty.
> It’s just the discourse around it is “boring”. And the managers are lame about it.

Exactly.
> OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe” Very self-aware.
Is this sarcasm?
Why on earth is the parent comment downvoted? The title of TFA asks a question. This statement directly answers that question. Seems very on-topic.