I presume I'm not the only one.
Barely an hour goes by without a new four-page document that everyone is apparently meant to read, digest, and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.
With good management you will get great work faster.
The distinguishing feature between organisations competing in the AI era is process. AI can automate a lot of the work, but the human side owns the process, and if it's no good everything falls apart. Functional companies become hyper-functional while dysfunctional ones collapse.
Bad ideas used to be warded off by workers who, through some form of malicious compliance, would slow down and redirect the work while advocating for better solutions.
That can't happen as much anymore, because your manager or CEO can vibe-code something and throw it down the pipeline for the workers to fix.
If you have bad processes your company will die, or at best shrivel and stagnate. Companies with good processes will beat you.
I just went and deleted it because it's completely broken at every edge case and half of the happy paths too.
edit: LOL, called it: a bunch of useless garbage that no one really cares about, but which gets used to justify corporate jobs programs.
I'd been fighting to make this for two years and kept getting told no. I got Claude to make a PoC in a day, then got management support to continue for a couple of weeks. It's super beneficial, and targets so many of our pain points that really bog us down.
This was possible before but someone would maybe notice the insane spaghetti. Now it's just "we'll fix it with another layer of noodles".
And what’s worse is that when someone does build a decent tool, you can’t help but be skeptical because of all the absolute slop that has come out. And everyone thinks their slop doesn’t stink, so you can’t take them at their word when they say it doesn’t. Even in this thread, how are you to know who is talking about building something useful vs something they think is useful?
A lot of people that have always wanted to be developers but didn’t have the skills are now empowered to go and build… things. But AI hasn’t equipped them with the skill of understanding if it actually makes sense to build a thing, or how to maintain it, or how to evolve it, or how to integrate it with other tools. And then they get upset when you tell them their tool isn’t the best thing since sliced bread. It’s exhausting, and I think we’ve yet to see the true consequences of the slop firehose.
I run a team and am spending my time/tokens on serious pain points.
This is in a real-time stateful system, not a system where I'd necessarily expect the exact same thing to happen every time. I just wanted to understand why it behaved differently because there wasn't any obvious reason, to me, why it would.
The explanation it came back with was pretty wild. It essentially boiled down to a module not being adequately initialized before it was used the first time and then it maintained its state from then on out. The narrative touched a lot of code, and the source references it provided did an excellent job of walking me through the narrative. I independently validated the explanation using some telemetry data that the LLM didn't have access to. It was correct. This would have taken me a very long time to work out by hand.
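For a sense of the bug class, this is a made-up minimal sketch (not the actual code, and all names are invented) of the "initialized on first use, then sticky forever" pattern it described:

```python
# Hypothetical sketch of the bug class described above: a module whose
# behavior depends on whether it was explicitly initialized before first use.

_config = None  # module-level state, set once and then kept for the process lifetime

def init(threshold):
    """Intended entry point: callers are supposed to invoke this at startup."""
    global _config
    _config = {"threshold": threshold}

def check(value):
    global _config
    if _config is None:
        # Fallback path: silently self-initialize with defaults. If the first
        # caller reaches here before init() runs, these defaults stick for the
        # rest of the process lifetime.
        _config = {"threshold": 10}
    return value > _config["threshold"]

# check(5) before init(3) locks in threshold=10; the same call after a clean
# startup uses threshold=3 -- same code, different behavior from then on.
```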
Edit: I have done this multiple times and have been blown away each time.
It's crazy that experiences still vary so wildly that we get people using this strategy as a 'valid' gotcha.
AI works for the vast majority of nowhere-near-the-edge CS work -- you know, all the stuff the majority of people have to do every day.
I don't touch any kind of SQL manually anymore. I don't touch iptables or UFW. I don't touch polkit, dbus, or any other human-hostile IPC anymore. I don't write cron jobs or systemd unit files. I query for documentation rather than slogging through a stupid web wiki or equivalent. A decent LLM does it all with fairly easy 5-10 word prompts.
ever do real work with a mic and speech-to-text? It's 50x'd by LLM support. Gone are the days of saying "H T T P COLON FORWARD SLASH FORWARD SLASH W W W".
this isn't some untested frontier land anymore. People that embrace it find it really empowering except on the edges, and even those state-of-the-art edge people are using it to do the crap work.
This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
I personally noticed this. The speed at which development was happening at one gig I had was impossible to keep up with without agentic development, and serious review wasn't really possible because there wasn't even time to learn the codebase. Had a huge stack of rules and MCPs to leverage that kinda kept things on the rails, and apps were coming out, but, like, for why? It was like we were all abandoning the idea of good code and caring about the user, just trying to close tickets and keep management/the client happy. I'm not sure anyone anywhere on the line was measuring real-world outcomes. Apparently the client was thrilled.
It felt like... You know that story where two economists pass each other fifty bucks back and forth and in doing so skyrocket the local GDP? Felt like that.
Well, isn't that what AI can be used effectively for: to generate [auto]responses to the AI-generated content?
I guess you gotta look busy. But the stick will come when the shareholders look at the income statement and ask: "So I see an increase in operating expenses. Let me go calculate the ROIC. Hm, it's lower, what to do? Oh I know, let's fire the people who caused this" (it won't be the C-suite or management who takes the fall) lmao.
You could argue that all the spending is wasted (doubtless some is), but insisting that the decision is being made in complete ignorance of financial concerns reeks of that “everyone’s dumb but me” energy.
The real thing to look at is whether the future outlook for company AI spend is heading up or down.
Are they peeking over the shoulder of each team and individual? Of course not.
It can be the case that the spend is absolutely wasteful. Numbers don’t lie.
Oh, they were involved all right. They ran their analyses and realized that the increase in Acme Corp's share price from becoming "AI-enabled" will pay for the tokens several times over. For today. They plan to be retired before tomorrow.
Round-tripping used to be regulated. SPVs used to be regulated. If you needed a loan, you used to have to go to something called a bank; now it comes from who knows where: drug cartels, child traffickers, Blackstone, Russian and Chinese oligarchs. Even assuming it doesn't collapse tomorrow, why should they make double-digit returns on AI datacenters built on the backs of Americans?
Claude is a tool. It can be abused, or used in a sloppy way. But it can also be used rigorously.
I've been pushing my team to make the tooling they develop more papercut-free, and it's been rough, mostly because of the velocity.
But overall it's a huge net positive.
[waits for chickens to come home to roost]
"We are writing down X billions over 4 years, and have cancel several ambitious programs related to our AI experiments. We were following standard practice in the industry, so [shareholders] can't blame us for these chickens coming to roost. If everyone is guilty, is anyone really guilty?"
After all (Grug Chief reminds us), the only truly secure computing system is an inert rock.
> Security is less or no concern, bugs are more acceptable, performance / scalability rarely a concern. Quickest way to get things done
This is literally how the rest of the world works already, and always has. We'd still be living in caves otherwise. Fortunately most people (at least outside software) seem to understand that security is a trade-off against usefulness, not an end goal in itself.
Even right now, the difference between working with 'AI native' developers and regular developers is night and day.
I certainly wouldn't want a non-Claude-enabled developer on my team now.
You only want to work with people who are hip with the North Pole?
I wonder what I’m doing differently.
I did spend quite a bit of time, mostly manually, improving development processes such that the agent could effectively check its work. This made the difference between the agent mostly not working and mostly working. Maybe if I had instead spent gobs of money, it would have worked without the tooling improvements?
Haven't found a process that beats this yet and I burn very few tokens this way.
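For concreteness, the "check its work" part can be as simple as one script the agent runs after every change. A rough sketch, with the tool choices (ruff, pytest) purely illustrative of my setup, not a prescription:

```python
# Rough sketch (tool choices illustrative) of a single verification command
# the agent can run on its own output: it returns a machine-readable
# pass/fail summary instead of a wall of log text.
import json
import subprocess
import sys

def run_step(name, cmd):
    """Run one verification step and capture its outcome."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "step": name,
        "ok": proc.returncode == 0,
        # Keep only the tail; agents do better with short, relevant output.
        "output": (proc.stdout + proc.stderr)[-2000:],
    }

results = [
    run_step("format", ["ruff", "format", "--check", "."]),
    run_step("lint", ["ruff", "check", "."]),
    run_step("tests", ["pytest", "-q"]),
]
print(json.dumps(results, indent=2))
sys.exit(0 if all(r["ok"] for r in results) else 1)
```

The point is the structured pass/fail output: the agent can act on it immediately instead of wading through logs, which is what makes the feedback loop cheap in tokens.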
I like writing code, I’m good at writing code. What I hate doing is dredging through logs, filtering out test scenarios and putting together disparate information from knowledge silos - so I have the AI doing that. It’s my research assistant.
Effectively I’m using it like an automated search engine that indexes anything I want and refines the results by using the statistical near neighbors of how other people explained their searches.
It's now trivial to fix these problems while still doing our day jobs -- shipping a product.
This would previously have been too ambitious to ever scope, but we've been able to build essentially all of it in just two months. Since it sits on top of our other systems and acts as more of a window/pass-through control plane, the fact that it's vibe-coded poses little risk, since we still have all the existing infrastructure under it if something goes awry.
it's trivial to reimplement a better solution.
Also, I am not sure it is trivial to reimplement. The code is injected into many scenarios and workflows, so replacement will be painful and risky if the new solution breaks some edge case.
It's better than the "here's my code, it's a giant pile of spaghetti, but only luddites care about code quality and maintainability anyway" method, at least.
I've been using it to write tools that drastically facilitate spinning up a local k8s cluster with an entire suite of development services, something that used to take two days to set up in Docker.
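As a rough sketch of the shape of such a tool (assuming kind and helm are installed; the service list here is made up, not our actual stack):

```python
# Sketch of the sort of one-shot spin-up tool described above. The tool
# layout and service list are hypothetical; assumes `kind` and `helm`
# are on PATH.
import subprocess

def sh(*cmd):
    """Echo and run a command, failing fast on any error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One command for a local cluster instead of two days of hand-built Docker setup.
sh("kind", "create", "cluster", "--name", "dev")

# Install the development service suite from public charts.
sh("helm", "repo", "add", "bitnami", "https://charts.bitnami.com/bitnami")
sh("helm", "repo", "update")
for name, chart in [("postgres", "bitnami/postgresql"),
                    ("redis", "bitnami/redis"),
                    ("rabbitmq", "bitnami/rabbitmq")]:
    sh("helm", "install", name, chart, "--wait")
```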