Sigh. I wish that instead of the sarcasm, you had actually introspected on the toxic and needlessly aggressive way you're defending rote memorization of algorithm trivia. For example, your saying
> Source? or just trying to justify your own shortcomings?
is clearly, unequivocally unprovoked and needlessly antagonistic. (What do my shortcomings have to do with my point? Either you engage with the claim or you don't, but using ad hominem insults is not a valid discussion tactic. Yet you seem to assume no responsibility for your completely unprovoked hostility, and you continue on with sarcasm, even sarcasm about your own ridiculousness.)
In terms of data, obviously no study is perfect and we should not base everything on a single study, but the 2012 PISA results offer at least some evidence < http://www.oecd.org/pisa/keyfindings/pisa-2012-results-volum... >.
The particular sections of the study that cover the poor performance of students who focus on memorization are not available in the free sample (Chapter 2), but they were reported on, e.g. here: < http://hechingerreport.org/memorizers-are-the-lowest-achieve... >, including this:
> The U.S. has more memorizers than most other countries in the world. Perhaps not surprisingly as math teachers, driven by narrow state standards and tests, have valued those students over all others, communicating to many other students along the way – often girls – that they do not belong in math class.
> The fact that we have valued one type of learner and given others the idea they cannot do math is part of the reason for the widespread math failure and dislike in the U.S.
There is also the discussion from Google that test scores and brainteasers do not predict later job success < http://www.newyorker.com/tech/elements/why-brainteasers-dont... > (and many algorithm interviews are exactly the same kind of brainteaser nonsense).
From the New Yorker article:
> The major problem with most attempts to predict a specific outcome, such as interviews, is decontextualization: the attempt takes place in a generalized environment, as opposed to the context in which a behavior or trait naturally occurs. Google’s brainteasers measure how good people are at quickly coming up with a clever, plausible-seeming solution to an abstract problem under pressure. But employees don’t experience this particular type of pressure on the job. What the interviewee faces, instead, is the objective of a stressful, artificial interview setting: to make an impression that speaks to her qualifications in a limited time, within the narrow parameters set by the interviewer. What’s more, the candidate is asked to handle an abstracted “gotcha” situation, where thinking quickly is often more important than thinking well. Instead of determining how someone will perform on relevant tasks, the interviewer measures how the candidate will handle a brainteaser during an interview, and not much more.
The other thing we have to fight against is self-selection. It's not necessarily in everyone's interest to publicize that their riddles and algorithm hazing process isn't working. Especially not for start-ups, which need investors to feel like they are crammed to the brim with stereotypical nerds or something. So if you go looking for evidence along the lines of "well, my company uses riddles and we have hired well," you've already gone wrong. You haven't shown that you hired well. Your company (if it's a startup) probably isn't even remotely proven yet, even if it's well-funded, and the jury is still out on whether the people you rejected really should have been.