It has also enabled a few people to write code or plan out implementation details who haven't done so in a long time (sometimes a decade or more), and so I'm getting some bizarre suggestions.
Otherwise, it really does depend on what kind of code. I hand write prod code, and the only thing that AI can do is review it and point out bugs to me. But for other things, like a throwaway script to generate a bunch of data for load testing? Sure, why not.
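To illustrate the throwaway-script case: something like this is what I'd happily hand to an LLM (a hypothetical sketch; the record fields and counts are made up, since the real shape depends on the system under test):

```python
import json
import random
import string

def fake_record(i):
    # Hypothetical record shape for load testing; adjust fields as needed.
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {"id": i, "name": name, "amount": round(random.uniform(1, 500), 2)}

# Generate a pile of synthetic records and dump them as JSON lines.
records = [fake_record(i) for i in range(10_000)]
with open("load_test_data.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```

If it's wrong, nothing downstream depends on it, so the review bar is basically "does it run".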
I've kind of decided this is my last job, so when this company folds or fires me, I'm just going to retire to my cabin in the rural Louisiana woods, and my wife will be the breadwinner. I only have a few tens of thousands left to make that home "free" (pay off the mortgage, add solar and batteries, plant more than just potatoes and tomatoes).
Though, post retirement, I will support my wife's therapy practice, and I have a goal of silly businesses that are just fun to do (until they aren't), like my potato/tomato hybrid (actually just a graft) so you can make fries and ketchup from the same plant!
We should be friends. I like your ideas.
They were the original non-familial homesteaders from 50+ years ago, when all this land was my wife's great grandfather's and he sold off small plots to people. He, in fact, inherited it from his father, who bought a half-mile square back in the 20s or 30s (I believe). The first house on the road was his (Great Great Grandpa's). The road WAS his driveway, then slowly but surely new generations of the family started building houses a few hundred yards away from each other. They started selling plots to people in the 60s, and sold the last of the original land in 2023, about a year before grandpa passed.
Now the only land left in "the family", is this 1.25 acre plot that I live on. I don't really have the desire to buy more from the folks that are dying, but my neighbor has already bought up about half of the vacant land.
Or I'll walk up to your desk and ask you to explain it.
It’s the asymmetric expectations—that one person can spew slop but the other must go full-effort—that for me personally feels disrespectful.
However, since people are not going to readily reveal that they used an LLM to produce said output, the most logical response seems to be to always use an LLM to consume inputs, because there's no reliable way to tell anymore whether something was created by an LLM or a human.
That way, thinking is communication! That's kind of why I loved math so much - it felt like I could solve a problem and succinctly communicate with the reader at the same time.
This has always been the case. Have some junior shit out a few thousand lines of code, leave, and leave it for the senior cleanup crew to figure out what the fuck just happened...
It's a breach of trust. I don't care if you're my friend, my boss, a stranger, or my dog - it crosses a line.
I value my time and my attention. I will willingly spend it on humans, but I most certainly won't spend it on your slop when you didn't even think me worth a human effort.
What I've learned in 2 years of heavy LLM use (ChatGPT, Gemini, and Claude) is that the real work is expressing and then refining goals and plans. The details are noise. The clear goals matter, and the plans are derived from those.
I regularly interrupt my tools to say, "Please document what you just said in ...". And I manage the document organization.
At any point I can start fresh with any AI tool and say, "read x, y, and z documents, and then let's discuss our plans". Although I find that with Gemini, despite saying, "let's discuss", it wants to go build stuff. The stop button is there for a reason.
I've found that SoTA LLMs sometimes implement / design differently (in the sense that "why didn't I think of that"), and that's always refreshing to see. I may run the same prompt through Gemini, Sonnet, and Codex just to see if they'd come up with some technique I didn't even know to consider.
> don't forsee a need to review it myself either
On the flip side, SoTA LLMs are crazy good at code review and bug fixes. I always use "find and fix business logic errors, edge cases, and api / language misuse" prompt after every substantial commit.
If I'm feeling brave, I let it write functions with very clear and well defined input/output, like a well established algorithm. I know it can one-shot those, or they can be easily tested.
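As an example of the kind of function I mean (a hypothetical sketch, not code from any actual project): a well-established algorithm with clear input/output, trivially verifiable against its spec.

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Easy to check exhaustively against the obvious spec.
assert binary_search([1, 3, 5, 7], 5) == 2
assert binary_search([1, 3, 5, 7], 4) == -1
assert binary_search([], 9) == -1
```

Functions like this are where one-shotting works: the contract is the whole spec, so a handful of tests tells you everything.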
But when doing something that I know will be further developed and maintained, I mostly end up writing it by hand. I used to have the LLM write that kind of code as well, but I found it to be slower in the long run.
For the most part, many of them work the first time and continue working in service of the project. I've done something similar in terms of scaffolding a test/demo environment around a component that I'm directly focused on... and sometimes for documentation site(s) for gh pages, etc.
Some things have gone surprisingly well.
Zizek had a great point about this.
But deep down I know that slop is noise and words no longer represent understanding.