This! I like most of the ideas and it sounds like a fun exercise to try a few for 1-2 weeks, but I think that implementing something twice could yield so much value!
In my role I prioritize code maintenance, probably because I hate working with painful codebases, probably because I’ve seen five-year, multimillion-dollar projects fail as the tech debt piles up and the codebase grows too fragile to implement the new features needed to go to market, and probably because I’m lazy and don’t like to read much to know what the code is doing.
But one of the best architectures I’ve ever built was one that I had to implement 3 times. Now, this wasn’t because of my vast wisdom but because I literally coded myself into a corner the first two times! (Go’s rigidity around circular dependencies is a double-edged sword)
https://wiki.c2.com/?PlanToThrowOneAway or for iPhone users https://archive.md/zdfXh (no idea why they are blocking iPhone)
Joe Armstrong is often quoted as saying he would solve the same problem in multiple iterations to refine the solution.
Some of his opinions: https://joearms.github.io/published/2014-02-07-why-programmi...
> EXPERIMENT: If you prayed to a god last week, try praying to a different one this week and see if you notice a difference
So, no different than from when I am actually streaming?
I ended up subscribing to a bunch of random calls on https://dialup.com/ and had some pretty amazing conversations.
People who don't use the debugger: do you generally have more/better logging? What other advantages would there be to not using a debugger? Perhaps simpler code, so that you don't need a debugger to debug it?
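One way the "more/better logging" answer tends to look in practice (this is just a sketch; the `orders` logger and `apply_discount` function are made up for illustration): log a function's inputs, decisions, and outputs at DEBUG level so a failure can be reconstructed from the log alone, without stepping through in a debugger.

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("orders")  # hypothetical module name

def apply_discount(total, rate):
    # Trace inputs and outputs so a bad result can be diagnosed after
    # the fact, from logs, with no debugger attached.
    log.debug("apply_discount(total=%r, rate=%r)", total, rate)
    if not 0 <= rate <= 1:
        log.warning("rate %r outside [0, 1]; clamping", rate)
        rate = min(max(rate, 0), 1)
    result = total * (1 - rate)
    log.debug("apply_discount -> %r", result)
    return result

apply_discount(100.0, 0.15)  # emits two DEBUG lines tracing the call
```

The lazy `%r` formatting also means the log arguments cost almost nothing when DEBUG is disabled in production.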
Sometimes you might simply not have a debugger. I had an old WinCE 6 system whose debugger I never got working, and an even older 80186 system with a custom RTOS. I have no idea how you would debug that. (The 286 added some pins for in-circuit emulation, so I’m guessing it would be easier to build a debugger for such a system.)
Sometimes having a debugger attached slows down the system so much that it is no longer functional. Hooking a debugger to HHVM, specifically, seems to slow down execution by quite a bit.
If you are debugging a real-time motion controller, you get one shot per move to capture what is going on. Your motor control software stops executing while the debugger is paused, but the physical motor is still moving. So when you resume execution, the motor is not where the controller thinks it should be, and the controller either tries to compensate or errors out.
Sometimes the act of attaching a debugger prevents the problem from occurring. For example, it can change the interleaving of multiple threads of execution and cause your race condition to go away. Or, on some microcontrollers, having the debugger read a hardware register can change the register's value.
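A minimal sketch of that kind of Heisenbug in Python (names and timings invented for illustration): two threads do an unlocked read-modify-write, so the race fires at full speed, but the lost update disappears when one thread is held back, much as a debugger breakpoint would hold it.

```python
import threading
import time

counter = 0

def unsafe_increment(pause):
    """Unlocked read-modify-write; `pause` widens the race window."""
    global counter
    tmp = counter          # read
    time.sleep(pause)      # another thread can read the stale value here
    counter = tmp + 1      # write back, possibly clobbering an update

def run(serialize):
    """serialize=True mimics a debugger holding one thread at a breakpoint."""
    global counter
    counter = 0
    t1 = threading.Thread(target=unsafe_increment, args=(0.1,))
    t2 = threading.Thread(target=unsafe_increment, args=(0.1,))
    t1.start()
    if serialize:
        t1.join()          # let t1 finish before t2 even starts
    t2.start()
    t1.join()
    t2.join()
    return counter

print(run(serialize=False))  # 1 -- the lost update: the race fires
print(run(serialize=True))   # 2 -- serialized "under the debugger", bug gone
```

The fix, of course, is a lock around the read-modify-write; the point here is only that changing the timing hides the symptom without fixing the bug.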
Sometimes your debugger can make the problem happen more reliably. I had an intermittent corruption in a floating point calculation. Stepping through with the debugger made it 100% reproducible, as the data-corrupting interrupt handler had plenty of time to run while I single-stepped the main thread.
If I do 5 of these micro experiments in a year, do I have 10 weeks of code that idiomatically doesn’t match the rest of the codebase?
> Some of these should probably not be tried for the first time in a professional context where you work with other people in the same code repository, while others can. Use your judgment.
I could see the testing example still being of use in a prod codebase, though. If you find a useful technique and write useful tests, you can keep them. If they're not useful, remove them at the end of the experiment.
I sort out my thoughts, see relationships, and plan naming and params ahead of time.