If you've never worked with stereo vision systems: they take input from multiple cameras, and the algorithms need to know the exact pose of each camera. These calibrations are extremely sensitive. If a camera moves even a teensy bit, the algorithms start to fail because image features aren't where they're expected to be. Whenever calibration went bad, we had to rerun it: a process where we waved a checkerboard in front of the cameras at tons of different angles, and software would process all the images and deduce the pose of each camera. It typically took at least an hour, and sometimes it would fail and you'd have to start over. We needed to recalibrate every few weeks, no matter how hard we tried to make the whole rig rigid.
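To give a feel for why a small bump breaks things, here's a toy sketch of the kind of health check such a system might run. Everything here is made up for illustration (the function names, the focal length, the 0.5-pixel threshold): the real system used full multi-camera calibration, not this simplified pinhole model. The idea is just that the software predicts where checkerboard corners *should* appear given the stored calibration, and flags the calibration as bad when observations drift too far from the prediction.

```python
# Hypothetical sketch of a calibration health check; all names and
# numbers are invented for illustration.
import math

def project(point, fx, fy, cx, cy, tx, ty, tz):
    """Pinhole projection of a 3D point after a simple translation."""
    X, Y, Z = point[0] + tx, point[1] + ty, point[2] + tz
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_error(points3d, observed2d, params):
    """Mean pixel distance between predicted and observed corners."""
    total = 0.0
    for p3, p2 in zip(points3d, observed2d):
        u, v = project(p3, *params)
        total += math.hypot(u - p2[0], v - p2[1])
    return total / len(points3d)

# Checkerboard corners on a plane 2 m in front of the camera,
# spaced 5 cm apart.
corners3d = [(x * 0.05, y * 0.05, 2.0) for x in range(5) for y in range(4)]
# Stored calibration: fx, fy, cx, cy, tx, ty, tz.
params = (600.0, 600.0, 320.0, 240.0, 0.0, 0.0, 0.0)

# Observations from the calibrated pose match the model exactly...
good_obs = [project(p, *params) for p in corners3d]
print(reprojection_error(corners3d, good_obs, params))  # 0.0

# ...but bump the camera a mere 2 mm sideways and the error jumps.
bumped = (600.0, 600.0, 320.0, 240.0, 0.002, 0.0, 0.0)
bad_obs = [project(p, *bumped) for p in corners3d]
err = reprojection_error(corners3d, bad_obs, params)
print("BAD CALIBRATION" if err > 0.5 else "OK")  # BAD CALIBRATION
```

A 2 mm shift at 2 m with a 600-pixel focal length already moves every feature by 0.6 pixels, which is why physically wiggling the cameras back into roughly the stored pose can quiet the warning, even if the researchers would rather you didn't.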
Anyways, one day I accidentally drove the robot's sensor head into a table right before a demo with the people funding the project. The program started spitting out "BAD CALIBRATION" warnings. This basically meant we'd need to cancel or postpone the demo to recalibrate, which would look really bad since they'd traveled all the way to our office only to be told "never mind!"
As a last-ditch effort, I grabbed the cameras and started wiggling them back and forth, and managed to force them into an orientation where the calibration checked out. The demo went off perfectly. When I later told the researchers about it, they hated it. "You should just always rerun calibration," etc.
My favorite hack that I've seen someone else do was at Google, where some specific project had a weird test that checked some ratio like "lines of tests to lines of code." Someone checked in a test with the comment "If you're not cheating you're not trying", and it just had the same assertion over and over for hundreds of lines to satisfy the metric. I never looked into why the person couldn't just disable the test, but I like the simplicity of the solution.