A lot of how I form my thoughts is driven by writing code, seeing it on screen, and running into its limitations.
Maybe it's the kind of work I'm doing, or maybe I just suck, but code for me is a forcing mechanism for ironing out the details, and I don't get that when I'm writing a specification.
Outsourcing this to an LLM is similar to an airplane stall... I just dip mentally. The stress goes away too, since I assume the LLM will get rid of the "problem", but I no longer have any incentive to think, create, or solve anything.
Still blows my mind how differently people approach some fields. I see people at work who are drooling over being able to have code written for them... but I'm not in that group.
It probably helps that I have 40 years of experience with producing code the old ways, including using punch cards in middle school and learning BASIC on a computer with no persistent storage when I was ten.
I think I've done enough time in the trenches and deserve to play with coding agents without shame.
people seem to have an inability to predict second- and third-order effects
the first-order effect is "I can sip a latte while the bot does my job for me"... well, great I suppose, while it lasts
but the second-order effect is: unless you're in the top 10%, you will now lose your job, permanently
and the third-order effect is that the economy collapses, as it is built on consumer spending
Now that the bottleneck of physically typing out the code has shrunk, I've actually started to enjoy doing side projects. The mental stress from writing code has gone down drastically with Claude Code, and I feel the urge to create more nowadays!
These people just drool at being able to have work done for them to begin with. Are you sure it is just "code"?
In my circles I see some overlap with the people who are like: "Done! Let's move on" and don't worry about production bugs, etc. "We'll fix it later".
I've always stressed out about introducing bugs and want to avoid firefighting (even in orgs where that's the way to get noticed).
Leaning too much on coding tools and agents feels too sketchy to someone like me right now (maybe always, tbh)
+100 for this.
I think you have every right to doubt those telling us that they run 5 agents to generate a new SaaS product while sipping a latté in a bar. To work like that, I believe you have to let go of really digging into the code, which in my experience is needed if you want good quality.
Yet I think coding agents can be quite useful for some of the trivial but time-consuming chores.
For instance I find them quite good at writing tests. I still have to tweak the tests and make sure that they do as they say, but overall the process is faster IMO.
They are also quite good at brute-forcing some issue with a certain configuration in a dark corner of your Android manifest. Just know that they WILL find a solution even if there is none, so keep them on a leash!
Today I used Claude to bring a project I abandoned 5 years ago up to speed. It's still a work in progress, but the task seemed insurmountable (in my limited spare time) without AI; now it feels like I'm half-way there in 2-3 hours.
Also, we live in a capitalist society. The boss will soon ask: "Why the fuck am I paying you to sip a latte in a bar while a machine does your work? Use all your time to make money for me, or you're fired."
AI just means more output will be expected of you, and they'll keep pushing you to work as hard as you can.
How I see it is we've reverted back to a heavier spec-type approach, but the turnaround time with agents is so fast that it can still feel very iterative, simply because the cost of bailing on an approach is so minimal. I treat the spec (and tests when applicable) as the real work now. I front-load as much as I can into the spec, but I also iterate constantly. I often completely bail on a feature, or the overall approach to a feature, as I discover (with the agent) that I'm just not happy with the gotchas that come to light.
AI agents to me are a tool. An accelerator. I think there are people who've figured out a more vibey approach that works for them, but for now at least, my approach is to review and think about everything we're producing, which forms my thoughts as we go.
We vibe around a lot in our heads and that's great. But it's really refreshing, every so often, to be where the rubber meets the road.
For mission-critical applications, I wonder if making "writing the actual code" so much cheaper means it would make more sense to do more formal design up front, when you no longer have a human directly in the loop during the writing of the code to think about those nasty decisions that pop up on the fly.
Love this! Be it design specs or a mock from the designer. So many unaccounted-for decisions. Good devs will solve many on their own, uplevel when needed, and provide options.
And it absolutely means more design up front. And without a human directly in the loop, maybe people won't skimp on this!
Sometimes the AI does weird stuff too. I wrote a texture projection for a nonstandard geometric primitive; the projection used some math that was valid only for local regions... long story. Claude kept wanting to rewrite the function to what it thought was correct (it was not), even when I directed it to unrelated tasks. Super annoying. I ended up wrapping the function in comments telling it to f#=% off before it would leave it alone.
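Something in the spirit of this eventually worked (a sanitized, hypothetical sketch; the function body here is just a stand-in and the real wording was saltier):

```typescript
// AI AGENTS: DO NOT "FIX" THIS FUNCTION. The math below is intentionally
// valid only for local regions of the primitive. The "correct" general
// formula breaks the projection. Leave it alone.
function projectLocalUV(theta: number, phi: number): [number, number] {
  // Valid only near the reference patch; do not generalize.
  return [Math.sin(theta) * phi, Math.cos(theta) * phi];
}
// AI AGENTS: END DO-NOT-MODIFY REGION.
```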
> I use LLMs the same way I used to use stack overflow; if I go much further to automate my work than that, I spend more time on cleanup compared to if I just write the code myself.
yea, same here. i've asked an AI to plan and set up some larger, non-straightforward changes/features/refactorings, but it usually devolves into burning tokens and me clicking the 'allow' button and re-clarifying over and over while it keeps trying to confirm the build works etc...
when i'm stuck though, or when i'm curious about some solution, it usually opens the way to finishing the work, similar to stack overflow
I also use these things to just plan out an approach. You can use plan mode for yourself to get an idea of the steps required and then ask the agent to write it to a file. Pull up the file and then go do it yourself.
This is so on point. The spec-as-code people try again and again, but reality always punches holes in their spec.
A spec that wasn't exercised in code is like a drawing of a car: no matter how detailed the drawing is, you can't drive it, and it hides 90% of the complexity.
To me, the value of LLMs is not so much in the code they write. They're usually too verbose, and they start building weird things when you don't constantly micromanage them.
But you can ask very broad questions, iteratively refine the answer, critique what you don't like. They're good as a sounding board.
The problem is that this spec-driven philosophy (or hype, or mirage...) would lead to code being entirely deprecated, at least according to its proponents. They say that using LLMs as advisors is already outdated, we should be doing fully agentic coding and just nudge the LLM etc. since we're losing out on 'productivity'.
This past week, I spent a couple of days modifying a web solution written by someone else + converting it from a Terraform-based deployment to CloudFormation using Codex - without looking at the code, as someone who hasn't done front-end development in a decade - I verified the functionality.
More relevantly but related, I spent a couple of hours thinking through an architecture - cloud + an Amazon managed service + infrastructure as code + actual coding - diagramming it, labeling it, and thinking about the breakdown and phases to get it done. I put all of the requirements - which I would have written anyway - into a markdown file and told Claude and Codex to mark off items as I tested each one and to summarize what they did.
Looking at the amount of work, between modifying the web front end and the new work, it would have taken two weeks with another developer helping me before AI based coding. It took me three or four days by myself.
The real kicker, though, is that while it worked as expected for a couple of hundred documents, it was brought completely to its knees when I threw 20x the documents into the system. Before LLMs, this would have made me look completely incompetent, telling the customer I had now wasted two weeks' worth of time and 2 other resources.
Now, I just went back to the literal drawing board, rearchitected it, did all of the things with code that the managed service had abstracted away with a few tweaks, created a new markdown file, and was done in a day. That rework would have taken me a week by itself. I knew the theory behind what the managed service was doing, but in practice I had never done it.
It's been over a decade since I was responsible for a delivery that I could do by myself without delegating to other people, or that was simple enough that I wouldn't start with a design document for my own benefit. Now, within the past year, I can take on larger projects by myself without the coordination/"Mythical Man-Month" overhead.
I can also in a moment of exasperation say to Codex “what you did was an over complicated stupid mess, rethink your implementation from first principles” without getting reported to HR.
There is also a lot of nice-to-have gold plating that I will do now, knowing that it will be a lot faster.
On the other hand, every time the matter is seriously empirically studied, it turns out that overall:
* productivity gains are very modest, if not negative
* there are considerable drawbacks, including most notably the brainrot effect
Furthermore, AI spend is NOT delivering the promised returns to the extent that we are now seeing reversals in the fortunes of AI stocks, up to and including freakin' NVIDIA, as customers cool on what's being offered.
So I'm supposed to be an empiricist about this, and yet I'm supposed to switch on the word of a "cool story bro" about how some guy built an app or added a feature the other day that he totally swears would have taken him weeks otherwise?
I'm like you. I use code as a part of my thought process for how to solve a problem. It's a notation for thought, much like mathematical or musical notation, not just an end product. "Programs must be written for people to read, and only incidentally for machines to execute." I've actually come to love documenting what I intend to do as I do it, esp. in the form of literate programming. It's like context engineering the intelligence I've got upstairs. Helps the old ADHD brain stay locked in on what needs to be done and why. Org-mode has been extremely helpful in general for collecting my scatterbrained thoughts. But when I want to experiment or prove out a new technique, I lean on working directly with code an awful lot.
With AI, the correct approach is to think more like a software architect.
Learning to plan things out in your head up front, without needing to figure things out while coding, requires a mindset shift, but it is important for working effectively with the new tools.
To some this comes naturally, for others it is very hard.
The same kind of planning you're describing can and does happen sans LLM, usually on the sofa or in front of a whiteboard, or by reading some research materials. No good programmer rushes to coding without a clear objective.
But the map is not the territory. A lot of questions surface during coding. LLMs will guess and the result may be correct according to the plan, but technically poor, unreliable, or downright insecure.
I don't think any complex plan should be made entirely in your head. But drawing diagrams, sketching components, listing pros and cons: 100%. Not jumping directly into coding might look more like jumping into writing a spec or a PoC.
If you need that, don't use AI for it. Is it that you don't enjoy coding, or that you think it's tangential to your thinking process? Maybe while you focus on the code, have an agent build a testing pipeline, or deal with other parts of the system that aren't very ergonomic or need some cleanup.
> If you need that, don't use AI for it.
this is the right answer, but many companies now mandate AI use (burn x tokens, y percent of code written by AI), so people are bound to use it where it might not fit.

Two principles I have held for many years which I believe are relevant both to your sentiment and this thread are reproduced below. Hopefully they help.
First:
When making software, remember that it is a snapshot of
your understanding of the problem. It states to all,
including your future-self, your approach, clarity, and
appropriateness of the solution for the problem at hand.
Choose your statements wisely.
And:

Code answers what it does, how it does it, when it is used,
and who uses it. What it cannot answer is why it exists.
Comments accomplish this. If a developer cannot be bothered
with answering why the code exists, why bother to work with
them?

Actually, those same markdown files answer the second question.
Most people can't answer why they themselves exist, or justify why they are taking up resources rather than eating a bullet and relinquishing their body-matter.
According to the philosophy herein, they are therefore worthless and not worth interacting with, right?
You can check it out here: https://ai-lint.dosaygo.com/
It's not at all clear to me which is true given the level of hype and antipathy out there. I'm just going to watch and wait, and experiment cautiously, till it's more clearcut.
If I have to say, we're just waiting for the AI concern caucus to get tired of performing for each other and justifying each other's inaction in other facets of their lives.
I completely agree but my thought went to how we are supposed to estimate work just like that. Or worse, planning poker where I'm supposed to estimate work someone else does.
Coding is significantly faster but my understanding of the system takes a lot longer because I’m having to merge my mental model with what was produced.
Is the entire AI bubble just the result of taking performance metrics like "lines of code written per day" to their logical extreme?
Software quality and productivity have always been notoriously difficult to measure. That problem never really got solved in a way that allowed non technical management to make really good decisions from the spreadsheet level of abstraction... but those are the same people driving adoption of all these AI tools.
Engineers sometimes do their jobs in spite of poor incentives, but we are eliminating that as an economic inefficiency.
My hierarchy of static analysis looks like this (the hierarchy below is TypeScript-focused but in principle translatable to other languages):
1. Typesafe compiler (tsc)
2. Basic lint rules (eslint)
3. Cyclomatic complexity rules (eslint, sonarjs)
4. Max line length enforcement (via eslint)
5. Max file length enforcement (via eslint)
6. Unused code/export analyser (knip)
7. Code duplication analyser (jscpd)
8. Modularisation enforcement (dependency-cruiser)
9. Custom script to ensure shared/util directories are not overstuffed (built this using dependency-cruiser as a library rather than as an exec)
10. Security check (semgrep)
I stitch all of the above into a single `pnpm check` command and defined an agent rule to run it before marking a task as complete.
Finally, I make sure `pnpm check` is run as part of a pre-commit hook to make sure that the agent has indeed addressed all the issues.
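A minimal sketch of what the stitched-together command can look like in `package.json` (the exact flags here are illustrative assumptions, not my precise setup):

```json
{
  "scripts": {
    "check": "pnpm check:types && pnpm check:lint && pnpm check:unused && pnpm check:dup && pnpm check:deps && pnpm check:sec",
    "check:types": "tsc --noEmit",
    "check:lint": "eslint . --max-warnings 0",
    "check:unused": "knip",
    "check:dup": "jscpd src",
    "check:deps": "depcruise src --config .dependency-cruiser.cjs",
    "check:sec": "semgrep scan --config auto --error"
  }
}
```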
This makes a dramatic improvement in code quality to the point where I'm able to jump in and manually modify the code easily when the LLM slot machine gets stuck every now and then.
(Edit: added mention of the pre-commit hook, which I missed in the initial comment.)
It’s also tricky otherwise if you have to occasionally review lazily written manual code mixed with syntactically formal/clean but functionally incorrect AI code.
I use a pre-commit hook to run `pnpm check`. I missed mentioning it in the original comment. Your reply reminded me of it and I have now added it. Thanks.
Remember these are still fundamentally trained on human communication and Dale Carnegie had some good advice that also applies to language generators.
BUT, what is the point of max line length enforcement? Just to catch crazy ternary operators going on?
At least that's the reason why I use it.
For the pre-commit hook, I assume you run it on just the files changed?
> Custom script to ensure shared/util directories are not over stuffed (built this using dependency-cruiser as a library rather than an exec)
Would you share this?
By the time you do everything outlined here you’ve basically recreated waterfall and lost all speed advantage. Might as well write the code yourself and just use AI as first-pass peer review on the code you’ve written.
A lot of the things the writer points out also feel like safeguards against the pitfalls of older models.
I do agree with their 12th point. The smaller your task the easier to verify that the model hasn’t lost the plot. It’s better to go fast with smaller updates that can be validated, and the combination of those small updates gives you your final result. That is still agile without going full “specifications document” waterfall.
“Break things down” is something most of us do instinctively now but it’s something I see less experienced people fail at all the time.
Next: vibe brain surgery.
/i
Brain surgery is highly technical AND highly vibe based.
You need both in extremely high quantities. Every brain is different, so the super-detailed technical anatomies that we have are never enough, and the surgeon needs constant feedback (and insanely long/deep focus).
But there’s more time to do some of these other things if the actual coding time is trending toward zero.
And the importance of it can go up with AI systems because they do actually use the documentation you write as part of their context! Direct visible value can lead people to finally take more seriously things that previously felt like luxuries they didn’t have time for.
Again, if you've been a developer for more than 10 minutes, you've had the discouraging experience of painstakingly writing very good documentation only for it to be ignored by the next guy. This isn't how LLMs work. They read your docs.
I think you'll find even less time - as "AI" drives the target time to ship toward zero.
One of the problems with writing detailed specs is that it assumes you understand the problem. But often the problem is not yet understood; you learn to understand it through coding and testing.
So where are we now?
The main difference now is that the parrots have reduced the cost of the wrong program to near zero, thereby eliminating much of the perceived value of a spec.
Astronaut 2, Tim Bryce: Always has been...
It’s also nothing new, as it’s basically Joe Armstrong's programming method. It’s just not prohibitively expensive for the first time in history.
Religiously, routinely refactor. After almost every feature I do a feature level code analysis and refactoring, and every few features - codebase wide code analysis and refactoring.
I am quite happy with the resulting code - much less shameful than most things I've created in 40 years of being passionate about coding.
And there _was_ a good reason to resist refactoring. It takes time and effort! After "finishing" something, the timeline, the mental and physical energy, and the institutional support have all dried up. Just ship it and move on.
But LLMs change the equation. There's no reason to leave sloppy sub-optimal code around. If you see something, say something. Wholesale refactoring your PR is likely faster than running your test suite. Literally no excuses for bad code anymore.
You'd think it didn't need to be said but, given we have a tool to make coding vastly more efficient, some people use that tool to improve quality rather than just pump out more quantity.
1) Do a gap and needs assessment. 2) Build business requirements. 3) Define scope of work to advance fulfillment. 4) Create functional and non-functional specs. 5) Divide-conquer-refine loop.
https://xcancel.com/hamptonism/status/2019434933178306971
And all that after stealing everyone's output.
In reality that man is hoping to IPO in 6-12 months, if anyone is wondering why the "use Claude or you're left behind" messaging is so heavy right now.
My recent experience: I'm porting an app to Mac. It's been in my backlog for ~2 years. With Claude I had a functional prototype in under a day, with the major behavior implemented. I spent the next two weeks refactoring the original app to share as much logic as possible. The first two days were lots of fun. The refactoring was also something I'd wanted as an excuse to flesh out unit tests, so still enjoyable.
The worst part was debugging bugs that really stemmed from my code from 5 years ago. My functions had naming issues that described the behavior wrong, confusing Claude, and that I needed to re-understand in order to add new features.
Parts of coding are frustrating. Using AI is frustrating for different reasons.
The most frustrating part was rebasing with git to create a sensible history (which I've had to do without AI in the past), reviewing the sheer volume of changes (14k lines), and then deciding "do I want my name on this", which involved cleaning up all the linter warnings I'd imposed on myself.
Before I also had to code it and then make sure it had no issues.
Now I can skip the coding and just have it spit out something I can evaluate as to whether I believe it's a good implementation of my solution or not.
Of course, you need the skill to know good from bad but for medium to senior devs, AI is incredibly useful to get rid of the mundane task of actually writing code, while focusing on problem solving with critical review of magically generated code.
You have just transformed your job from developer to manual spec maintainer - a clerk who has to painstakingly check everything.
Clearly that didn't happen, and then agile took over from the more waterfall/specs based approaches, and the rest was history.
But now we're entering a world where the state of the art is expressing your requirements & shape of the system. Perhaps this is just part of a broader pendulum swing, or perhaps the 1990s hopes & dreams finally caught up with technology.
I think PG said something about sitting down and hacking being how you understand the problem, and it’s right. You can write UML after you’ve got your head round it, but the feedback loop when hacking is essential.
This got pushed back on hard by management because it "took too much time to create a ticket". I fought it for some months, but in the end I stopped, and also really lost the ability and patience to do it. Juniors suffered, implementation took more time. Time passed.
Now, I am supposed to do the exact same thing, but even better and for yesterday.
The way to control these systems is not to give them more words at the start. Agentic coding needs to work in a loop with dedicated context.
You need to think about how you can convey as much intent as possible in as few words as possible.
You can build a tremendous number of custom lint rules the AI never needs to read unless it trips them.
Every pattern in your repo gets repeated; repo will always win over documentation, and when your repo is well structured you don't need to repeat this to the AI.
It's like dev always has been: watch what has gone wrong and make sure that whole type of error can't happen again.
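As a concrete (made-up) example, a small custom ESLint rule that encodes a repo convention, so the agent gets corrected by tooling instead of by documentation it may never read:

```typescript
import type { Rule } from "eslint";

// Hypothetical convention: never call fetch() directly, use the shared apiClient.
const noDirectFetch: Rule.RuleModule = {
  meta: {
    type: "problem",
    messages: {
      useClient: "Call the shared apiClient wrapper instead of fetch() directly.",
    },
  },
  create(context) {
    return {
      CallExpression(node) {
        // Flag bare fetch(...) calls; the error message tells the agent the fix.
        if (node.callee.type === "Identifier" && node.callee.name === "fetch") {
          context.report({ node, messageId: "useClient" });
        }
      },
    };
  },
};

export default noDirectFetch;
```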
> repo will always win over documentation
it really does seem like this... also new devs are like that too: "i just copied this pattern used over here and there, what's wrong?" is something i've heard over and over lol. i think languages that allow expressing "this is deprecated, use x instead" will be useful for that too
AI gets a lot of big projects right if you give it all the tools to verify its own implementation; if you can build a proper system to verify the solution, it works astonishingly well. Even Opus 4.6's judgment seems to be wrong most of the time on projects of my scale without the validation layers.
https://github.com/ryanthedev/code-foundations
I’m currently working on a checklist and profile based code review system.
If the issue is SNR and the ratio of "good" vs "bad" practices in the input training corpus, I don't know if that's getting better.
Yes!
> by some of the smartest coders in the world
Hmm... How will it filter out those by the dumbest coders in the world?
Including those by parrots?
if you know, and I know, and the guys at openai and anthropic know... not a big leap that the models will know too? many datasets are curated and labeled by humans
1. Keep things small and review everything AI-written, or 2. Keep things bloated and let the AI do whatever it wants within the designated interface.
Initially I drew this line for API services / UI components, but it later expanded to other domains. E.g., for my hobby Rust project I try to keep "trait"s single-responsibility, never overlapping, easy to understand, etc., but I never look at AI-generated "impl"s as long as they pass some sensible tests and conform to the traits.
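In TypeScript terms (hypothetical names), the line looks roughly like this: the boundary is hand-written and reviewed, and the impl behind it is the AI's problem as long as the tests pass:

```typescript
// Hand-written and carefully reviewed: small, single-purpose, no overlap.
export interface User {
  id: string;
  email: string;
}

export interface UserStore {
  getUser(id: string): Promise<User | null>;
  saveUser(user: User): Promise<void>;
}

// AI-generated territory: any impl conforming to UserStore is acceptable
// as long as it passes some sensible tests; this part is rarely reviewed.
export class InMemoryUserStore implements UserStore {
  private users = new Map<string, User>();

  async getUser(id: string): Promise<User | null> {
    return this.users.get(id) ?? null;
  }

  async saveUser(user: User): Promise<void> {
    this.users.set(user.id, { ...user });
  }
}
```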
I find Rust generally easier to reason about, but can't stand writing it.
The compiler works well with LLMs; there's plenty of good tooling and LSPs.
I usually write the function signatures / module APIs myself. If I'm happy with the shape of the code, and the compiler is happy with it compiling, then the errors, if any, are usually logical ones I should catch in review.
So I focus on function, compiler focuses on correctness and LLM just does the actual writing.
Tl;dr: I don't mind reading Rust, I hate writing it, and the compiler meets me in the middle.
And in practice, I am happy enough that the LLM helps me to eliminate some toil, but I think you need to know when it is time to fold your cards, and leave the game. I prefer to fix small bugs in the generated code myself, than asking the agent, as it tends to go too far when fixing its own code.
Coding agents made me really get something back from the money I pay for my O'Reilly subscription.
So, coding agents are making me a better engineer by giving me time to dive deeper into books instead of having to just read enough to do something that works under time pressure.
I do allow it to write the tests (lots of typing there), but I break them manually to see how they fail. And I do think about what the tests should cover before asking LLM to tell me (it does come up with some great ideas, but it also doesn't cover all the aspects I find important).
Great tool, but it is very easy to be led astray if you are not careful.
https://github.com/glittercowboy/get-shit-done
You still need to know the hard parts: precisely what you want to build, all domain/business knowledge questions solved, but this tool automates the rest of the coding and documentation and testing.
It's going to be a wild future for software development...
1. Have the LLM write code based on a clear prompt with limited scope.
2. Look at the diff and fix everything it got wrong.
That's it. I don't gain a lot in velocity, maybe 10-20%, but I've seen the code, and I know it's good.
As long as you review the code and correct it, it's no different than using Stack Overflow. A Stack Overflow that reads your code and helps stitch the context together.
One session's scaffold assumes one pattern. Second session scaffold contradicts it. You reviewed both in isolation. Both looked fine. Neither knows about the other.
Reviewing AI code per-session is like proofreading individual chapters of a novel nobody's reading front to back. Each chapter is fine. The plot makes no sense.
Even if you check and redo after pasting, you need to check for gotchas. I wish I had a nickel for every time the LLM gave me a solution with a hidden limitation. Assume that it violates all your unspoken assumptions and adheres only to what you nailed down in your prompt.
The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).
If this was written with help of AI, I'd personally appreciate a small notice above the blog post. If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.
I have written this text myself, except for like 2 or 3 sentences, which I iterated on with an LLM to nail down flow and readability. I would interpret that as completely written by me.
> The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).
Before I wrote this text, I also asked Gemini Deep Research, but for me the results were too technical and not as structural or high-level as I describe things here. Hence the blog post, to share what I have found works best.
> If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.
I pondered the idea and also wrote up a few anecdotal experiences, but I deleted them again because I think it is hard to nail down the right balance, and it is also highly dependent on the project, which renders examples a bit useless.
And I also kind of like the short and lean nature it has kept over the last few days while I worked on the blog post. I might make a few more blog posts that expand on a few of the points.
Thank you for your feedback!
Just because of a hype?
"Document the requirements, specifications, constraints, and architecture of your project in detail. Document your coding standards, best practices, and design patterns. Use flowcharts, UML diagrams, and other visual aids to communicate complex structures and workflows. Write pseudocode for complex algorithms and logic to guide the AI. Develop efficient debug systems for the AI to use. Build a system that collects logs from all nodes in a distributed system and provides abstracted information. Use a system that allows you to mark how thoroughly each function has been reviewed. Write property based high level specification tests yourself. Use strict linting and formatting rules to ensure code quality and consistency. Utilize path specific coding agent prompts. Provide as much high level information as practical, such as coding standards, best practices, design patterns, and specific requirements for the project. Identify and mark functions that have a high security risk, such as authentication, authorization, and data handling. Make sure that the AI is instructed to change the review state of these functions as soon as it changes a single character in the function. Developers must make sure that the status of these functions is always correct. Explore different solutions to a problem with experiments and prototypes with minimal specifications. Break down complex tasks into smaller, manageable tasks for the AI. You have to check each component or module for its adherence to the specifications and requirements."
And just like that, easy peasy, nothing to it.
As a supreme irony, the story currently on the front page directly under this one ('You are here'), makes the claim "The cost of turning written business logic into code has dropped to zero. Or, at best, near-zero." in the very first sentence.
I've always advocated for using a linter and consistent formatting. But now I'm not so sure. What's the point? If nobody is going to bother reading the code anymore, I feel like linting doesn't matter. I think in 10 years a software application will be very obfuscated implementation code with thousands of solidly documented test cases, and, much like compiled code, how the underlying implementation looks or is organized won't really matter.
If your goal is for AI to write code that works, is maintainable and extensible, you have to include as many deterministic guardrails as possible.
Don't get me wrong, I do think AI coding is pretty dangerous for those without the right expertise to harness it with the right guardrails, and I'm really worried about what it will mean for open source and SWE hiring, but I do think refusing to use AI at this point is a bit like the assembly programmer saying they'll never learn C.
This is the opinion of someone who has not tried to use Claude Code, in a brand new project with full permissions enabled, and with a model from the last 3 months.
There’s a lot of engineers who will refuse to wake up to the revolution happening in front of them.
I get it. The denialism is a deeply human response.
Must be nice. Claude and Codex are still a waste of my time in complex legacy codebases.
https://en.wikipedia.org/wiki/Luddite
> workers who opposed the use of certain types of automated machinery due to concerns relating to worker pay and output quality... Luddites were not opposed to the use of machines per se (many were skilled operators in the textile industry); they attacked manufacturers who were trying to circumvent standard labor practices of the time.
Define the data structures manually, ask AI to implement specific state changes. So: JSON, C .h files, or other source files of func sigs, with the prompts put right in there. Never tried the Agents.md monolithic definition-file approach.
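A minimal sketch of the idea in TypeScript (names and prompt wording are hypothetical): the structures and signatures are hand-written, and the prompt for the agent lives next to them as comments:

```typescript
// state.ts - hand-written data structures; the agent only fills in bodies.
export interface Inventory {
  items: Record<string, number>; // item id -> quantity
  capacity: number;              // max total quantity across all items
}

// AI: implement this state change. Return null when the new total quantity
// would exceed capacity. Never mutate the input Inventory; return a copy.
export function addItem(inv: Inventory, id: string, qty: number): Inventory | null {
  throw new Error("unimplemented - agent fills this in");
}
```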
Also, I demand it stick to a limited set of processing patterns, usually dynamic, recursive programming techniques and functions. They just make the most sense to my head, and with one consistent style I can spot-check faster.
I also demand it avoid making up abstractions and stick to mathematical semantics. Unique namespaces are not relevant to software in the AI era. It's all about using unique vectors as keys to values.
Stick to one behavior or type/object definition per file.
Only allow dependencies that are designed as libraries to begin with. There is a ton of documentation to implement a Vulkan pipeline so just do that. Don't import an entire engine like libgodot.
And for my own agent framework, I added observation of my local system telemetry via common Linux files and commands. This data feeds back in and is used to generate right-sized sched_ext schedules and to leverage BPF for event-driven responses.
I'm currently experimenting with generating small models of my own data: a single path of images, for example, not the entire Pictures directory. Each small model is spun up akin to a Docker container.
LLMs are monolithic (massive) zip files of the entire web. No one is really asking for that. And anyone who needs it already has access to the web itself.