So I was challenged to prove it, which I did, in 10 minutes. When their code came back a day later, it didn't actually work. When we pointed that out to them, they came back another day later with code that looked almost identical to what I'd written in those 10 minutes.
tl;dr. Sometimes managers don't realise the complexity of modern software, but sometimes modern developers are actually just plain slow.
Every pattern, layer, feature, or tool that you introduce into a project makes it more complex, so you really have to use good judgment when deciding what to add.
Nowadays a web application, for example, has more tiers. Different frameworks are encouraged for the different tiers: Bootstrap, Spring, Hibernate, etc. Each one is its own ecosystem, built on top of other libraries. It's very common to make web service calls outside your WAN, and you quickly find out that "standards" are interpreted differently by different library authors.
UIs are no longer an afterthought. They affect how successful your application is. (My observation is that a well-designed UI can cut down on user errors and training by two thirds over a merely functional UI.)
I'm keeping the example simple by not mentioning necessary middle-tier components that we didn't use 20+ years ago. We also didn't worry about clustered environments, asynchronicity, or concurrency.
Not knowing the application you needed or how the analysis was done by the coding team, it's hard to say if some of their "slowness" was getting to know the problem AND coming to understand how extensible, performant, and reliable you wanted it. My own approach is usually to solve the "happy path" first and then start surrounding it with "what if's" - e.g. what if a null is passed into the function, etc. Over time I refactor and build in reliability and extensibility. The coding team you referred to may have used a different approach in which they tried abstracting use-cases and building an error handling model before solving the "happy path".
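That workflow might look like this minimal sketch; the function name and the specific checks are purely illustrative, not anyone's actual code:

```python
def parse_amount(text):
    # Happy-path version: assumes well-formed input and nothing else.
    return float(text.strip())

def parse_amount_hardened(text):
    # A later pass: surround the happy path with "what if"s,
    # e.g. what if a null (None) is passed into the function.
    if text is None:
        raise ValueError("amount is required")
    text = text.strip()
    if not text:
        raise ValueError("amount is empty")
    try:
        return float(text)
    except ValueError:
        raise ValueError(f"not a number: {text!r}")
```

The happy-path version ships first; the hardened version accretes around it as real failure modes turn up.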
Your "tl;dr" is spot on. But I'd like to raise a cautionary flag about judging modern development through a 25-year-old lens. The game has changed.
>We also didn't worry about clustered environments, asynchronicity, or concurrency.
Clustered environments, probably not. But asynchronicity and concurrency were the bane of my life. Writing comms software back in the day involved having to hand-craft both the interrupt-driven reading of data from the I/O port and the storage and tracking of that data in a memory-constrained queue, synchronised with displaying that data on the screen. And the windowed UI had to be hand-crafted as well. Error handling was no more of an afterthought then than it is now - and you couldn't roll out a patch for a minor defect without manually copying 500 floppy disks and posting them to clients.
I understand why some bits of development take a long time, but the reality is that 90+% of the development work that our place does these days is what an ex-manager used to refer to as "bricklaying" - dull and repetitive work that involves pretty much zero thought to implement. Extract file X (using the language's built in drag and drop file-extract wizard), sort it by date (using the language's built-in sort module), split into two separate files (using the language's built-in file split module) and load into database Y (using the language's built-in database load module).
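That "bricklaying" pipeline fits in a few lines even without wizards. This is a toy sketch: the column names ("date", "value"), the table name, and the cutoff-based split are all assumptions for illustration:

```python
import csv
import sqlite3

def bricklay(src_csv, cutoff, db_path):
    # Extract file X (assumes a header row with a 'date' column in ISO format).
    with open(src_csv, newline="") as f:
        rows = list(csv.DictReader(f))
    # Sort it by date (ISO date strings sort correctly as text).
    rows.sort(key=lambda r: r["date"])
    # Split into two separate batches around a cutoff date.
    old = [r for r in rows if r["date"] < cutoff]
    new = [r for r in rows if r["date"] >= cutoff]
    # Load into database Y.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS y (d TEXT, v TEXT)")
    con.executemany("INSERT INTO y VALUES (?, ?)",
                    [(r["date"], r["value"]) for r in old + new])
    con.commit()
    con.close()
    return len(old), len(new)
```

Which is rather the point: the thought content is near zero; the time goes elsewhere.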
And even with all of these tools, it still takes 10 times longer for people to develop these kinds of things than it did when we were writing all of this from scratch. It's not because of the complexity of the coding, the environments, or the frameworks. The problem is that much of the IT industry has replaced skill and knowledge with process, contracts, documentation and disinterested cheap labour.
Good on the development manager!
“An operating system,” replied the programmer.
The warlord uttered an exclamation of disbelief.
“Surely an accounting package is trivial next to the complexity of an operating system,” he said.
“Not so,” said the programmer, “when designing an accounting package, the programmer operates as a mediator between people having different ideas: how it must operate, how its reports must appear, and how it must conform to tax laws.
By contrast, an operating system is not limited by outward appearances. When designing an operating system, the programmer seeks the simplest harmony between machine and ideas. This is why an operating system is easier to design.”
The warlord of Wu nodded and smiled. “That is all good and well,” he said, “but which is easier to debug?”
The programmer made no reply.
— The Tao of Programming, Geoffrey James
First step is fixing the development environment / build process and getting a staging server up. It will inevitably be broken / nonexistent, with frequent edits directly to production necessary. The last guy will have internalized a great deal of operational workarounds that you'll need to rediscover then codify into the app.
Next you write tests. There will be none. Once you have a decent workflow, you can start to identify the worst offenders. All the while you'll be changing the codebase to meet project requirements; this will give you a good idea of where the really bad shit is. Unit test all of it, and if you're feeling froggy, write some integration tests. Once you get to this phase, you should be unit testing your project work.
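A first test at this stage is usually a characterization test: pin down what the code does now, not what it should do. A minimal sketch, where `legacy_total` is a hypothetical stand-in for an untested legacy routine:

```python
import unittest

def legacy_total(items):
    # Stand-in for an untested legacy routine we dare not change yet.
    return sum(i["qty"] * i["price"] for i in items)

class CharacterizationTest(unittest.TestCase):
    # Record current behavior, even if it later turns out to be odd;
    # these tests are the safety net for the refactoring phase.
    def test_empty_order_totals_zero(self):
        self.assertEqual(legacy_total([]), 0)

    def test_two_line_items(self):
        items = [{"qty": 2, "price": 3}, {"qty": 1, "price": 5}]
        self.assertEqual(legacy_total(items), 11)
```

Once enough behavior is pinned down this way, the refactoring step below becomes mechanical rather than terrifying.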
Only after those two are completed can you start refactoring. Treat it like TDD. Keep an eye on larger goals like 12-factor conformance. It may look pie-in-the-sky at first, but it will give you ideas on what to focus on. The main advantage of refactoring over a ground-up rewrite is that you don't have to sell it to your boss. You just do it, in and around normal project work.
The biggest hurdle is the first step. It's scary to fuck with deployment. The approach I've come up with is to fork the codebase and rebuild the tooling on top of that, deploying first to staging, then to production, alongside the current working copies. Once you're satisfied flip the switch. You may have to flip it back, but at least it will be easy and instantaneous.
These lessons are from my ongoing project to modernize an ancient Rails 2.3.5 website running on Debian Lenny. Linode doesn't even offer that OS anymore; I had to cannibalize a server with an EOL app on it for a staging environment. I can't use Vagrant because there aren't any Lenny boxes.
It's long, arduous and slow. I fucking love it.
Eventually that laptop was destroyed in a bizarre accident (dropped at the airport security checkpoint was the claim) and last I heard they were regularly backing up the directory on the web server and still dropping files in to compile at runtime.
This is what happens when someone writes something and no longer has responsibility to maintain it or document.
I'm sure that if Mr. Gates had implemented FAT at a later date, he would have needed a much longer plane ride.
I work at a networking company (Arista), and a lot of the interesting problems come from this sort of interaction. Our entire OS was built so an agent would be resilient to changes in another agent, and this modularization means there is very little "spaghetti code". However, when you are building a feature (say, a new routing protocol), you have to be extremely conscious of how it interacts with everything else: various reliability features (i.e. Stateful switchovers), configuration features (saved configs, public APIs), resource contention (TCAM utilization), etc. etc. If that new routing protocol was the first thing we implemented on our switch, it would be a complete breeze. In the context of other features though, this becomes a more intensive project (though in codebases without proper modularization you'd find this task "herculean" as opposed to "intensive").
> I don't mind if billg gets a little arrogant at times. One merely has to look at how he wrote the ROM code for the Altair to realize his abilities.
> Also, ALL of the concepts embodied in modern tablets and smartphones were "invented" by billg when he wrote the code for the Tandy 100. Things like "instant on", data stored in non-volatile memory, small productivity apps, continue from the same point after power down, etc. Personally, I put billg in the Top 5 of all-time CS people who contributed to computing.
This is an interesting claim, I was always under the impression that while Gates could code, he wasn't at all a CS giant.
Gates was certainly one of the brighter undergrads in those courses. I don't know if that makes him a "CS giant," but he was no slouch.
While you guys were coding away at Harvard, I was not yet able to properly focus my mind, and so took to running around the streets of Cambridge(port) in diapers instead ;-)
p.s. from the above one can assume I was either born in '72 or was taking far too much acid for my own good.
http://www.npr.org/templates/story/story.php?storyId=9223678...
http://www.6502.org/source/interpreters/sweet16.htm
Anyway, Engelbart, et al, had figured out all this stuff 10 years earlier.
"I wrote all my code on paper in hexadecimal. I couldn't afford an assembler to translate my programs into hexadecimal bytes, I did it myself. Even my BASIC interpreter is all hand written. I'd type 4K into the Apple I and ][ in about an hour. I, and many others too I think, could sit down and start typing hexadecimal in for a SMALL program to solve something that occured or something that somebody else wanted. I'd do this all the time for demos. I certainly don't remember which hexadecimal codes are which 6502 instructions any longer, but it was a part of life back then."
http://en.wikipedia.org/wiki/File_Allocation_Table#Original_...
Like a lot of old stories remembered after-the-fact to make a point, it seems that the truth is more complicated.
If I remember right, he did have a Compaq "portable" he lugged around in the early portable days. Something like http://oldcomputers.net/compaqi.html
It obviously came out after DOS / FAT.
At one work place, I spent most of the day away from the computer (terminal) waiting for the operational boys to get their stuff done.
I would spend most of my coding day scribbling out code changes onto the fanfold printout.
Coding is something you do in your brain, not in an editor.
I guess it's no wonder why the mainstream of Apple and Microsoft is what it is.
Coming up with the blueprint for such a thing on a flight of a few hours seems quite reasonable for someone with the requisite knowledge.
It seems this is largely their problem today: they no longer have a lofty goal to focus their energy on. Most companies have one; Google's is to organize the world's information, etc.
The best ideas I have ever had happened during my commute, so I feel familiar with the "I wrote FAT on an airplane" statement.
"I did X in Y time so why don't you follow my lead" may have been a motivational tactic at the time, but it did backfire. I'm forced to wonder if this is how the motif began.
What is optimization? As a very sparse description: given the set R of real numbers, positive integers m and n, and functions f: R^n --> R and g: R^n --> R^m, find x in R^n to solve
minimize f(x)
subject to
g(x) >= 0
So, set up a mathematical description of the problem based mostly on the 'cost' function f, that is, what we want to minimize, and the 'constraints' g that keep the solution 'feasible', that is, realistic for the real problem. Then look for a solution x. Yes, such problems can easily be NP-hard, and still difficult otherwise.
Likely what Gates wanted was for function f to be the execution time and function g to force honoring the 64KB limit per segment, etc.
Then a big question would be: which software do we assume is having its segments assigned?
So, for an answer, just take some Windows and/or Office code and get the total execution time on a typical workload.
Might also want to be sure that on any of several workloads, the optimal solution was no worse than some factor p in R of the best solution just for that workload. Then keep lowering p until the f(x) starts to rise significantly and come up with a curve of f(x) as a function of p. Present the curve to Gates and have him pick a point.
Gee, should we rule out having the same subroutine/function in more than one segment? Hmm ....
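The constraint side of that formulation can at least be sketched as a packing heuristic. This toy first-fit-decreasing sketch is not whatever Gates had in mind: the function name, the byte-size inputs, and the 64KB default are illustrative, and it ignores both the real objective (execution time) and the duplicated-subroutine question:

```python
def pack_segments(sizes, limit=64 * 1024):
    # First-fit-decreasing packing of routine sizes (bytes) into
    # segments no larger than `limit`. A crude stand-in for the real
    # problem, which would also weigh which routines call each other
    # across segment boundaries (the execution-time cost f).
    segments = []  # each entry: [remaining_space, [routine indices]]
    order = sorted(range(len(sizes)), key=lambda i: -sizes[i])
    for i in order:
        if sizes[i] > limit:
            raise ValueError(f"routine {i} exceeds the segment limit")
        for seg in segments:
            if seg[0] >= sizes[i]:  # first segment it fits in
                seg[0] -= sizes[i]
                seg[1].append(i)
                break
        else:
            segments.append([limit - sizes[i], [i]])  # open a new segment
    return [seg[1] for seg in segments]
```

Minimizing segment count is only a proxy; the point is that even the feasibility side (the constraints g) is a classic hard combinatorial problem.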
Just where Gates learned about optimization would be a question, but he had a point.
Or more likely he was being a bit of a jerk.
Edit with further detail: The thing that's unique about LFN is the rename and delete behavior. Lots of filesystems support names longer than 8.3, or multiple names for files (hardlinks), but I don't think any of them have alternate names that "stick" when you move a file into another directory, or that behave somewhat cleanly when a non-LFN-aware OS does so.