> The first thing I do with new agent driven project is set up quality checks. Linters, test frameworks, static analysis, etc
I do this too, but then I sit and observe how agent gets very creative by going around all of these layers just to get to the finish line faster.
Say, for example, if I needlessly pass a mutable reference and the linter screams at me, I know it's either linter is wrong in this case, or I should listen to it and change the signature. If I make the lazy choice, I will be dissatisfied with myself, I might even get scolded, or even fired if I keep making lazy choices.
LLM doesn't get these feelings.
LLM will almost always go for silencing it because it prevents it from reaching the 'reward'. If you put guardrails so that LLM isn't allowed to silence anything, then you get things like 'ok, I'll just do foo.accessed = 1 to satisfy the linter'.
Same story with tests. Who decides when it's the test that should be changed/deleted or the implementation?