I have a personal theory:
1) Top companies receive far more applications than they have open positions, so they standardised on highly technical interviews as a way to eliminate false positives. I think these companies know this method produces plenty of false negatives, but the trade-off between the two (screening out candidates who wouldn't make it versus missing great ones) is favorable enough that it's fine. It does lead to some absurd results, though, such as the author of a library being deemed unqualified to maintain it.
2) Most of these top companies grew at such rates, hiring so aggressively from top colleges, that eventually the interview was built by relatively fresh grads for other fresh grads.
3) Many companies thought that replicating the brilliant results of these unicorns meant copying their practices. So you get OKR nonsense, or interviews like these.
And even for Google, leetcode has become noise because people simply cram it. When Microsoft started using leetcode-style interviews, there were no interview-prep sites, and later there was Cracking the Coding Interview at most. So the people who aced the interview were either naturally talented or so geeky that they devoured math and puzzle books. Unfortunately, we have lost that signal nowadays.
And the great irony is that most software is slow as shit and resource intensive. Knowing worst-case performance is good, but what about the mean? Or what you expect users to actually be doing? These can completely change the desired algorithm.
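A toy sketch of that point (my own illustration, not anything from the thread): insertion sort is O(n^2) worst case, which looks disqualifying on a whiteboard, but on nearly-sorted input, which many real workloads actually produce, it does close to n comparisons. Counting comparisons makes the gap concrete:

```python
# Toy illustration: worst-case vs expected-case behavior.
# Insertion sort is O(n^2) worst case, but near-linear on
# nearly-sorted input -- so the "right" algorithm depends on
# the input distribution you actually expect.

def insertion_sort_ops(a):
    """Sort a copy of `a`; return (sorted_list, comparison_count)."""
    a = list(a)
    comps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comps += 1              # one comparison per inner-loop step
            if a[j] > key:
                a[j + 1] = a[j]     # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comps

n = 1000
nearly_sorted = list(range(n))
# perturb two elements, as a mostly-sorted real-world feed might
nearly_sorted[10], nearly_sorted[900] = nearly_sorted[900], nearly_sorted[10]
reversed_input = list(range(n, 0, -1))   # the worst case

_, easy = insertion_sort_ops(nearly_sorted)
_, hard = insertion_sort_ops(reversed_input)
# `easy` is a few thousand comparisons; `hard` is ~n^2/2 (~500k)
```

Same function, same worst-case bound, two orders of magnitude apart in practice. This is exactly why Timsort (Python's built-in sort) goes out of its way to exploit already-sorted runs.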
And there's the long-running joke: "10 years of hardware advancements have been completely undone by 10 years of advancements in software."
Because people now rely on the hardware to do the work rather than trying to make the software better. It amazes me that even gaming companies do this! The root of the issue is pushing things out quickly, so a lot of software is really just a Lovecraftian monster made of spaghetti and duct tape. And for what? Apple released the M4 today, and who's going to use that power? Why did it take Apple years to ship a fucking PDF reader I can edit documents in? Why is it still a pain to open a PDF on my macbook and edit it on my iPad? It constantly fails and is unreliable, disconnecting despite the devices being <2ft from one another. Why can't I use an iPad Pro as my glorified SSH machine? Fuck man, that's why I have a laptop: so I can log in to another machine and code there. The other things I need are latex, word, and a browser.

I know I'm ranting a bit, but I just feel like we in computer science have lost the hacker mentality that made the field so great in the first place (and that brought about so many innovations). It feels like there's too much momentum now and no one is __allowed__ to innovate.
To bring it back to interviewing signals, I do think the rant relates. This same degradation makes it harder to pick out real ability in a group of candidates when there's so much pressure to recite the textbook. But I guess this is why so many ML enthusiasts compare LLMs to humans: because we want humans to be machines.
IME the most underrated optimization tool is the delete key. People don't realize how often you should use it. Delete a function, a file, or even a whole code base. Some things just need to be rewritten. Hell, most things I write get written several times. You do it for an essay or any other writing; why is code different?
Yeah, we have "move fast and break things" but we also have "clean up, everybody do their share." If your manager is pushing you around, ignore them. Manage your manager. You clean your room don't you? If most people's code was a house it'd be infested with termites and mold. It's not healthy. It wants to die. Stop trying to resuscitate it and let it die. Give birth to something new and more beautiful.
In part I think managers are to blame, because they don't have a good technical understanding, but engineers are also to blame for enabling the behavior and not managing their managers (you need each other, but they need you more).
I'll even note that we jump into huge code bases all the time, especially when starting out. Rewriting is a great way to learn that code! (Be careful pushing upstream, though, and make sure you communicate!!!) Even if you never push, it's often faster in the long run. Sure, you can duct tape shit together, but patch work is patch work, not a long-term solution (or even a medium-term one).
And dear God, open source developers, take your issues seriously. I know there are a lot of dumb ones, but a lot of people are trying to help and want to contribute. An issue isn't a mark of failure; it's a mark of success, because people are using your work. If they're having a hard time understanding the documentation, that's okay: your docs can be improved. If they want to do something your program can't, that's okay too, and you can admit it and even ask for help (don't fucking tell them it can and move on. No one's code is perfect, and your ego is getting in the way of your growth. You think you're so smart that you're preventing yourself from proving how smart you are, or from getting smarter!). Close stale, likely-resolved issues (with a message like "reopen if you still have issues"), but dear god, don't just respond and close an issue right away. Your users aren't door-to-door salesmen or Jehovah's Witnesses. A little kindness goes a long way.
You really need those 100x faster algorithms when everything is a web or Electron app.
The problem I have with it is that for this to be a reasonably effective strategy, you have to change the arbitrary metric every few years; otherwise it is likely to be hacked and can turn into a negative signal rather than a positive one. Essentially, your false positives can come to dominate via "studying to the test" rather than "studying".
I'd say the same is true for college admissions... because let's be honest, I highly doubt a randomly selected high school applicant would be significantly more or less successful than one chosen by the current process. I'd imagine the simple act of applying is already a strong enough natural filter to make this hypothesis hold (in practice, anyway; see my prior argument).
People (and machines) are just fucking good at metric hacking. We're all familiar with Goodhart's Law, right?
If it were otherwise, and those trendsetting companies actually believed LeetCode tested programming ability, then why isn't LeetCode used in ongoing employee evaluation? Surely the skill of programming ability a) varies over an employee's tenure at a firm and b) is a strong predictor of employee impact over the near term. So I surmise that such companies don't believe this, and that therefore LeetCode serves some other purpose, in some semi-deliberate way.
I give a very basic business problem with no connection to any clever algorithm, and explicitly state that there are no gotchas: we know all the inputs, and here's what they are.
Almost everyone fails this interview, because somehow there are a lot of smooth tech talkers who couldn’t program to save their lives.
Probably recent job performance is a stronger predictor of near future job performance.
I did interviews for senior engineers and had people fail to find the second-biggest number in a list, in a programming language of their own choosing. It had a depressingly high failure rate.
These weren't off-by-one errors or unhandled overflow; candidates couldn't get started at all.
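For reference, the single-pass answer that question is after is only a few lines. This is my own sketch, not the interviewer's rubric, and it treats a duplicated maximum as the second-biggest (adjust the comparison if you want strictly distinct values):

```python
def second_largest(nums):
    """Return the second-biggest value in nums in a single pass.

    Assumes at least two elements. A duplicated maximum counts,
    so second_largest([5, 5, 1]) == 5.
    """
    if len(nums) < 2:
        raise ValueError("need at least two elements")
    first = second = float("-inf")
    for x in nums:
        if x > first:
            first, second = x, first   # new max; old max becomes runner-up
        elif x > second:
            second = x                 # beats runner-up but not the max
    return second

second_largest([3, 1, 4, 1, 5, 9, 2, 6])  # -> 6
```

One loop, two variables, no sorting. The whole trick is demoting the old maximum when a new one appears, which is exactly the kind of "iterate through the loop once" reasoning the question probes.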
But I still think you can reasonably weed these people out without whiteboard problems, for exactly the same reasons engineers and scientists can. And let's be honest: for the most part, your resume should really be your GitHub. I learn so much more about a person by walking through their GitHub than from their resume.
To be a capital-E Engineer you have to pass a licensing exam. That filter obviously isn't going to catch everything, but it does raise the bar a little.
---
As far as the root question goes, they are allowed to propose that, and then I can try to tease out why they think it's the best approach and whether something else is better. But you would be surprised at the creative ways people manage to not iterate through a full loop even once.