In the future, I see a small percentage of benevolent system-thinkers, hackers, and architects still at the helm. And, even if 95% of the time they are the guy who feeds the dog that protects us from touching the machines, occasionally nature will force us to convene and tell the machines what to do (or at least bargain with them).
The rest of humanity will go back to the default state: digging potatoes out of the ground in a village of 200 down by the river.
> …to refactor and improve the quality of a microservice.
> …
> It worked. It vastly improved the code base…
That sounds like an amazing learning opportunity! Would you be up for sharing the before-refactor and after-refactor versions of your experiment in a public repo?
Thanks in advance.
- 1. Read the whole code of the repository.
- 2. Read the TASKS.md file if it exists.
- 2.1. If it exists and is not empty, pick a refactoring task from the list. Choose the most appropriate one.
- 2.1.1. Refactor the code according to the task description.
- 2.1.2. Commit the changes to git.
- 2.1.3. Remove the task from TASKS.md.
- 2.1.4. You are done.
- 2.2. If it doesn't exist or is empty:
- 2.2.1. Identify the parts of the code that could be refactored, following these principles:
- A class should have a single responsibility
- The dependencies of a class should be mockable and injected at instantiation
- Repeated code should be factored into functions
- Files shouldn't be longer than 1.5K lines
- 2.2.2. If, based on those insights, you think there is valuable refactoring work to be done:
- 2.2.2.1. Write a list of refactoring tasks in TASKS.md.
- 2.2.2.2. You are done.
- 2.2.3. If there is no more refactoring to be done, notify me with 'say "I am done with refactoring"'.
> …Read the TASKS.md file if it exists…
What about sharing that TASKS.md? I'd like to replicate your success as closely as possible. Having the same tasks would help me nail a similarly successful result.
I forgot to also ask: What language and which REST framework was your microservice implemented with?
For even better reproducibility, I'm thinking I should have my codebase be as similar to yours as I can get it.
TIA!
Instead, I ended up getting a coding agent to run through a thought experiment [1] based on the method described in the blog post.
The AI's critique sounds kinda harsh to me. So, I'm quoting only a snippet of it here…
———
…Running this specific recursive loop without a higher-level architectural constraint or a "convergence metric" will likely result in a specific type of technical debt known as _Ravioli Code_ (the inverse of Spaghetti Code)…
…
The prompt provided optimizes for _local code metrics_ (file length, shallow SRP) at the expense of _global architectural cohesion_.
In your specific $A \to B(C)$ scenario: _$A$ stops being a class that "does something" and becomes a class that "configures things that do things."_
The eventual system is one where every piece is perfectly unit-testable, perfectly mockable, and adhering to strict SRP, yet the system as a whole is incomprehensible to a human reader because the "story" of the code has been shredded into a thousand paragraphs scattered on the floor…
———
Again, those are the words of the automaton, not mine. I simply pointed it at the blog post and this thread, and asked what it thought of the approach and the final results.
Then, in the vibe coders' vernacular, I "let it rip" ;)
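To make the quoted $A \to B(C)$ point concrete, here's a toy illustration of my own (not the automaton's): after aggressive SRP and injection, the top-level class carries no behavior, only wiring.

```python
class ReportBuilder:
    """Extracted responsibility: building the report."""
    def build(self) -> str:
        return "weekly report"

class ListSender:
    """Extracted responsibility: delivering it (trivially mockable)."""
    def __init__(self):
        self.sent = []
    def send(self, report: str) -> None:
        self.sent.append(report)

class ReportMailer:
    """The former 'A': it no longer does anything itself,
    it only configures things that do things."""
    def __init__(self, builder: ReportBuilder, sender: ListSender):
        self._builder = builder
        self._sender = sender
    def send_weekly(self) -> None:
        self._sender.send(self._builder.build())
```

Every piece is unit-testable in isolation, yet reading `ReportMailer` alone tells you nothing about what actually happens — which is the "shredded story" the critique is warning about.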
Today I noticed a follow-up blog post [2]. It noted a number of things that resonate with my relatively lightweight experience with agentic coding…
> …
> We're not there yet…
> …
> If you want the agent to tackle a specific architectural smell, you need to name it…
> …
> You need to give direction. The refactoring principles are specific to each project's goals and technology choices.
> …
[1] https://g2ww.short.gy/TIL2Ralph
[2] https://frederic.vanderessen.com/posts/unsupervised-refactor...