That doesn't work well in interviews, especially with how terrible most interviewers are at time management. I sometimes get 10 minutes for a system design problem because the interviewer is expected to get signals on my non-technical competencies as well as system design.
This is never enough time to ask clarifying questions, diagram things, and get a good solution out unless it's similar to a problem I've already solved.
It's often OK to not solve the problem as long as you give an interviewer insight into how you think, but some interviewers expect a miracle.
It's not, though. Many people have no clue how to interview well, and way too many tech interviewers are obsessed with whether or not the candidate got the "right" answer.
Anyone who dropped multiple mediocre solutions before a good one in an interview with me would likely get a strong hire — I love to see this kind of iterative thinking, and finding people who can role model a healthy exchange of rough-draft ideas is always a great boon for the team's psychological safety (and by extension, their creativity and productivity).
But I think, industry-wide, it's not great. As a SWAG, probably ~50% of interviewers are more interested in the correctness of the answer than in the caliber of the thought process and communication.
And this is why most of the interview processes as practiced today are a joke. It's far more a weird sort of cargo-cult hazing ritual than any reasonable assessment. Few people give challenging problems that they expect won't be solved, in order to step through thought processes; instead they have some predetermined ideal solution in mind, or expect the optimal algorithmic solution on the spot. That lends itself well to a combination of assessing rote memorization and chance, I suppose.
There are definitely engineers hunting max salary/equity over everything and they are going to jump ship if they can get a FAANG role. Being able to spit out rote memorized leetcode and systems design questions is probably correlated with someone who wants to maximize their salary.
I was a little irritated, though, because even though I was early in my career, I have no doubt that I'd have been a productive member of their team within a month. I had thought the interview went well: we discussed the problem they proposed in detail, we arrived at a reasonable solution by the end, I asked lots of questions, and I responded well when prompted about edge cases.
I was just fuzzy on implementation details and aspects of database design that I hadn't had direct experience with.
Anyway, I'm not bitter, not getting that job led to a fantastic gig that I still have today. But I did feel like they were focused on the wrong things in the interview.
On the other hand, many interviewers are just normal people who have no idea how to gauge "how the candidate thinks", but like to think they are. I saw this a lot in the heyday of Google's brainteaser type questions. (And middle managers seem to revel in it.)
So the right answer is as good a proxy as they can hope for. It still sucks, but there you go.
Some advantages:
- Interviewer and interviewee are at ease. There is no rush to solve a problem.
- You can easily spend 90 minutes to 2 hours on system design, 2-3 hours coding, and another 2 hours on behavioural/leadership topics and whatnot.
- The interviews can be progressive, meaning if you don't make it through the first 2 hours: goodbye.
- This can be done remotely as well as in person. Of course, in person would be better, hosting expenses and all.
At the end of the day, a decision is made and you can verbally convey an offer or rejection.
This calls for a lot of discipline and commitment from the companies and their interview panels, but so be it. Dedicate 1 month to hiring and be done, at least for senior positions. Just like you allocate time for your projects, allocate dedicated time for interviewing: every 6 months, every quarter, whatever.
Do they pay you for your time for those long multi-hour interviews?
Is that knowledge available in real time? When I interviewed at Google last year, the next thing I heard back from the recruiter was that I'd passed the interviews and should expect a job offer. She wished me congratulations. A couple weeks later, she informed me that my interview scores were too low to get a job offer. It remains unclear to me why there would be separate thresholds for "passing" and "eligible for hire".
In "Thinking Fast and Slow", Daniel Kahneman describes the "instant" answer as being given by "System 1", which the far slower (and more rational) "System 2" might distrust.
Deep wisdom here, indeed. Don't you dare say this during an actual interview, though -- especially when you're asked to do the system design part in 10 minutes or less. The last thing these companies could possibly want is to have reality intrude.
And then proceed with my non-optimal solution. They often don’t care about optimality at all. I’ve passed interviews by implementing a bubble sort before. And I have no shame in doing so. I’ve never needed to implement a sort in an actual job, and if I did need to I would be looking up how to do it.
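For what it's worth, the bubble sort in question really is only a few lines; a minimal sketch (names are my own):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):  # the last i elements are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # early exit: list is already sorted
            break
    return items
```

O(n^2), sure, but perfectly serviceable when n is small or when the point is just to show working code under time pressure.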
In the past I'd describe my experienced approach, and what I need from others: make it work (mediocre solutions that deliver 80% of the value are fine!), then make it better (reflect and refactor until it's readable/approachable/idiomatic), and finally (stretch goal) make it faster (break idiomatic paradigms, vectorize, etc.).
At interviews they stop me at my first attempt.
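That three-stage progression, sketched on a toy problem (summing squared differences; the function names and the NumPy step are my own illustration):

```python
import numpy as np

def ssd_make_it_work(a, b):
    # Make it work: the obvious loop. Delivers the value, nothing fancy.
    total = 0.0
    for i in range(len(a)):
        total += (a[i] - b[i]) ** 2
    return total

def ssd_make_it_better(a, b):
    # Make it better: idiomatic and readable, same behavior.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ssd_make_it_faster(a, b):
    # Make it faster: vectorize with NumPy for large inputs.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sum((a - b) ** 2))
```

All three return the same answer; only the later ones are worth showing off, but only the first one is needed to ship.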
Of course not - interviews aren't about assessing actual problem-solving ability (in the face of, you know, actual real problems). But rather your ability to recite stock answers to made-up problems the interviewer found on some website somewhere. That, and your ability to politely tolerate their horrible time management skills, laugh at their jokes -- and feign belief / interest in their product.
Experienced candidates give two solutions in seconds, juniors cannot figure out the task, and the overqualified are visibly annoyed by being asked to come to a whiteboard.
That's a remarkable generalization. In reference to the article, perhaps you should have thought longer about it.
(Not all of them, of course, but in general they are at least as intellectually-curious and open as anyone else I know.)
I gather you may not know any of these types yourself but they’re certainly abundant here on HN, dismissing Dropbox as easily replicated with rsync and other such sentiments.
I'm 6 months into my current role and only just starting to feel the confidence and comfort to question the approaches we've taken to problems in that time, in the hope of modifying the approaches we take in future and improving the quality of the output.
It takes time to "find the water level" and also working through applying that knowledge to each problem, and if time is 'pressurised' it can lead to suboptimal resolutions. On the flipside, no one will wait forever - and it feels as if the world is currently oversensitive to waiting time.
Find the water level. But that also takes time.
Dialogue is a necessity for the correct development of knowledge. To engage in dialogue we have recourse to language and a vocabulary of discourse. As knowledge is gained, dialogues branch off and specialize. The system of thought inherent to the ~unique arrangement of the elements of these dialogues partitions thought into the acceptable (an opening) and the unthinkable (a closing).
Throughout history you will find minds that navigated these intellectual currents by stepping away from their dominant belief system, gaining knowledge of the universality of meaning, and seeing hidden (filtered) vistas previously unseen, then stepping back onto their home ground to contribute to the development of the field in a positive manner, adding new elements to the vocabulary of the dialogue. They extend it, and create the possibility of synthesis in the future.
Equally, you will find that the further necessity of establishing schools, cults, churches, and institutions, which lend social prestige to their members and satellites, introduces incentives contrary to the pure love of knowledge, and this attracts a certain type of person, beyond the already present dangers of vanity and self-regard.
What is to be done about it?
Helpfully suggest that tolerance of this necessary evil may be the remedy for your gripe.
There is almost nothing to be done but just wait patiently.
Data-driven hiring practices require hard definitions. One such hard definition of intelligence is IQ, which correlates strongly with job success and with various other skills related to job success.
From the company's perspective, hiring with a data-driven, scientific approach is the right call simply because IQ is the best metric we have for intelligence.
Obviously there are aspects of intelligence that currently aren't quantified, but is it wise for a company to bet its future on gut feelings that are subject to bias, and on aspects of intelligence not yet in the realm of science?
Unfortunately no.
I'm pretty sure the Google interview process is somewhat of a data driven process but I can't be sure. Are any googlers in the know? Are specific interview question and answer pairs measured and correlated with the success of the employee? Or is it all qualitative judgement?
IQ, I believe, is actually not legal to test for as a condition of employment. But we know that this is the metric the FAANGs are actually attempting to replicate.
"People with lower IQ jump to conclusions"
If you don't think about edge cases, you get "solutions" faster.
That's why we have so much technical debt.
Maintaining/fixing/managing the code that the star developer produces is very thankless work.
Surely a group of people who deem themselves 'intelligent' can broaden their perspectives a little?
From article: "Participants were asked to identify logical rules in a series of patterns"
The more intelligent you are, the more different rules you can come up with for a pattern. It then becomes a task of figuring out which rule the interviewer thinks is the (only) correct one.
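A concrete illustration (my own, not from the article): the sequence 1, 2, 4, 8, 16 fits both "powers of two" and the circle-division rule from Moser's circle problem, and the two rules only diverge at the sixth term:

```python
from math import comb

def powers_of_two(n):
    # Rule A: each term doubles the previous one.
    return 2 ** (n - 1)

def circle_regions(n):
    # Rule B: maximal number of regions a circle is cut into
    # by chords between n boundary points (Moser's circle problem).
    return comb(n, 4) + comb(n, 2) + 1

first_five_a = [powers_of_two(n) for n in range(1, 6)]
first_five_b = [circle_regions(n) for n in range(1, 6)]
# Both rules produce 1, 2, 4, 8, 16 -- then 32 vs. 31 at n = 6.
```

So "the" rule behind the first five terms is genuinely ambiguous, and an interviewer grading against one fixed answer is testing which rule you guessed, not whether you found a valid one.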
Magnus Carlsen does not move his pieces faster than bad chess players. He thinks through more creative options and sees more useful "edge cases" than the bad players do.
A lot of this thread is this sort of thinking, that the comment author is the bright one and everyone else is a dunce.
Inexperienced devs code the first solution they have.
Experienced devs challenge their first solution with one (or more) other solution(s).
I find such summary ambiguous. Difficult for who? Isn't it very common that a difficult problem for me is a piece of cake for thee, like the PDE that Bezos couldn't solve for hours yet his classmate could solve in a matter of seconds?
In 1697 Jean Bernoulli challenged the mathematicians and scientists of Europe to solve the brachistochrone problem. Bernoulli was particularly proud of his own solution, which he had worked on for weeks. Yet Newton, who was 55 years old at the time and hadn't worked on science or math for years, worked out a brilliant solution overnight and submitted it anonymously. Reading the solution, Bernoulli famously said "I recognize the lion by his paw". Not only did Newton solve the problem, he also invented the Calculus of Variations.
This is spelled out in the article ... difficulty is relative among problems, with increasingly difficult problems within the same problem space.
What is "solve"? The example given was finding a route on a map. Is any route a valid solution, or only the best one? Does the time differ for finding a route of the same quality, or does it differ only for how long it takes until the brain is satisfied with the solution?
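The distinction matters: a toy sketch (graph and names are my own) where a depth-first search returns a valid route immediately while Dijkstra spends more effort to guarantee the best one:

```python
import heapq

GRAPH = {  # weighted road network: node -> {neighbor: distance}
    "A": {"C": 5, "B": 1},
    "B": {"A": 1, "C": 1},
    "C": {"A": 5, "B": 1},
}

def any_route(start, goal, seen=None):
    # Depth-first: returns the first route found, not necessarily the best.
    seen = seen or {start}
    if start == goal:
        return [start]
    for nxt in GRAPH[start]:
        if nxt not in seen:
            rest = any_route(nxt, goal, seen | {nxt})
            if rest:
                return [start] + rest
    return None

def best_route(start, goal):
    # Dijkstra: explores more of the graph, but guarantees the shortest route.
    queue = [(0, start, [start])]
    done = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, w in GRAPH[node].items():
            if nxt not in done:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None
```

Here the first route found (A to C directly, distance 5) is "a solution", while the best route (A-B-C, distance 2) costs extra search time, so which one counts as "solved" changes what a response time measures.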
Abstraction solves a lot of hard problems, but it is also costly.
He actually ended up teaching me some calculus so I could grok trig identities, even though we were still in the precalc year. And let me use it on the tests to prove stuff that would have been expected to be proved geometrically.
The other big gripe I have with the way kids are tested is the way people are locked out of life because they're only good at one or two subjects to the extent they fail some of the others. Congratulations, you have failed at school because you, a teenager, can't meet our minimum requirements for being able to write inane, shallow analysis of 19th century poetry, can't run n meters in less than m minutes, and have failed to learn German. Good luck pursuing your prolific talents with no high school diploma kthxbye.
I completely failed an English course and still graduated. Even went on to fail classes in college. Had to retake a class but still got a degree.
I haven't taken a test in years, but the number of abstractions my mind would cycle through was a major disadvantage on timed tests.
This may explain what happened to me. Of the various standardized tests that I have taken in life (SAT in 8th grade, SHSAT, PSAT, SAT, ACT, LSAT, GMAT), the LSAT is the only one that I did not score in the 99th percentile on, and its logic games section was my weakest. I have never had any test prep training of any kind other than taking old exams; if I had taken an LSAT class perhaps I might have approached the section differently.
At work you get stuff done rather than answer questions in an obscenely short amount of time.
A typical IQ test (WAIS) takes roughly an hour to administer.
Huh, I hadn't heard of this before. Does anyone know more about how exactly those "in silico" brains work and how they compare to their real-world counterparts? I mean, the article makes it sound as if the researchers fully understood how the brain works and had managed to create a faithful digital copy, which I find difficult to believe.
EDIT: The original paper says
> To study neuronal processing in silico we created BNMs [brain network models] for the 650 subjects using a tuning algorithm that fits each participant’s simulated FC with their empirical FC (Figs. 2 and 3). The BNMs use coupled neural mass models to simulate the electric, synaptic, firing, and hemodynamic (fMRI) activity of a 379-nodes whole-brain network. Each node consists of one excitatory and one inhibitory population that mutually and recurrently interact. To simulate long-range white matter coupling, the neural masses were connected by each participant’s SC, which were estimated by dwMRI tractography. Importantly, we added feedforward inhibition to increase biological realism
Sounds like they used small neural networks for simulation and adjusted the weights between the neurons to what they saw in participants' MRI measurements.
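Roughly. For intuition only, here's a toy sketch in the spirit of one such node: one excitatory and one inhibitory population coupled through a sigmoid, Euler-integrated over time. All parameters are invented, and this is vastly simpler than the paper's 379-node model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate_node(steps=1000, dt=0.01, drive=1.5,
                  w_ee=12.0, w_ei=10.0, w_ie=8.0, w_ii=3.0):
    """Euler-integrate a Wilson-Cowan-style excitatory/inhibitory pair.

    E and I are population activity levels in [0, 1]; the w_* weights
    set how strongly each population drives (or suppresses) the other.
    """
    E, I = 0.1, 0.1
    for _ in range(steps):
        dE = -E + sigmoid(w_ee * E - w_ie * I + drive)  # excitation minus inhibition
        dI = -I + sigmoid(w_ei * E - w_ii * I)           # inhibition driven by E
        E += dt * dE
        I += dt * dI
    return E, I
```

The paper's models additionally couple hundreds of such nodes through each subject's measured white-matter connectivity and fit the simulated functional connectivity to the empirical one, which is where the per-participant "digital copy" framing comes from.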
All I'm saying is I think it has more to do with familiarity of material than "intellect." Intelligence to me is being able to rapidly comprehend and assimilate new systems of logic and signification as they are approached. Biology is going to have different rules from Set Theory which is going to have different rules than Political Science. Someone might note that these things are all interrelated: which they are, of course. But I think intelligence here would be noticing the contradictions that appear when they are set in relation, not simply trying to understand how they all stand together in some grand, unified fashion; nor (on the other hand), making humble claims (like formalism) which are not truly held by anyone.
One student solved it in class, but the rest of us weren't told what the solution was. I tried solving it a bit in high school but still couldn't come up with a solution.
I really hope there is a solution that I couldn't find, and not that we were expected to discover that there is no solution. I expect that problems in the real world may not have a solution, but if you're going to pull that sort of thing in a 4th grade classroom (even a gifted one) I expect some setup so that the kids know this is a possibility.
Lower response time indicates jumping to a conclusion that happened to be correct. The percentage of correctly solved higher-level problems was predictably lower for people with a lower g-factor.
THEN I stopped, and thought:
"Hey, waait a minute! Is the article trolling me with a 'difficult problem' here? which I will now proceed to take a long time to 'solve'? because I'm 'intelligent'? well, screw you, troll - I can solve this quickly!", and so, I hastily clicked on something in order to make the popup go away.
Thus proving that I'm not intelligent.
I win.
HNer isn't at all intimidated by the wall of text and knows exactly what it means, but reads every word to check they haven't snuck in any particularly egregious privacy violations, and goes down the rabbit hole of evaluating the likely implications of blocking 'necessary' cookies and the virtues of doing that via the cookie popup or a browser plugin before making their selection...
Could we please get an ISO for usability, and regulation that requires compliance?
(Should still be styled as a button, however.)
The charitable interpretation is that the irony here was intentional
Not shocking.
> needed more time to solve challenging tasks but made fewer errors
I don't think this really has anything to do with "intelligence" as much as it does patience.
Who would you want to be your airline pilot: the person who solves 70% of problems quicker or who solves close to 100% of the problems slower? The ability to error check has a time cost but also is of infinite value for any position of consequence.
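That trade-off is often modeled as a drift-diffusion process: noisy evidence accumulates until it crosses a decision threshold, and raising the threshold buys accuracy at the cost of time. A toy simulation (my own illustration; all parameters invented):

```python
import random

def decide(threshold, drift=0.1, noise=1.0, rng=None):
    """Accumulate noisy evidence until it crosses +/- threshold.

    Returns (correct, steps). The positive drift points toward the
    correct answer, so crossing +threshold counts as correct.
    """
    rng = rng or random
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0.0, noise)
        steps += 1
    return evidence > 0, steps

def accuracy_and_time(threshold, trials=2000, seed=42):
    # Monte Carlo estimate of accuracy and mean decision time.
    rng = random.Random(seed)
    results = [decide(threshold, rng=rng) for _ in range(trials)]
    accuracy = sum(c for c, _ in results) / trials
    mean_time = sum(s for _, s in results) / trials
    return accuracy, mean_time
```

With these numbers, a cautious threshold of 8 answers correctly far more often than a hasty threshold of 2, but takes several times as long per decision, which is exactly the pilot you'd want.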
Fascinating..what a time to be alive!
1: https://collections.archives.caltech.edu/repositories/2/acce... Can someone verify that?
Uh huh
I guess you want to say that it’s a poor metric for intelligence.
Not so sure. IQ is measured in standardized tests, where participants have the same amount of time to answer the questions. If intelligent people take longer than non-intelligent people to reach the correct answers, then that seems paradoxical.
The idea could be to prolong the test. So we better see who reaches the correct answers without much time pressure (but still with the same amount). Or go even further and ask really difficult questions without any limit.
These would probably also measure the ability to concentrate, or motivation to continue, … which might not be what you want for an IQ test.
However, if your objection holds, designers of IQ tests could fix them in light of this research: plan for enough time that even the most intelligent have time to answer the questions. It's of course a problem to find a couple of really intelligent people if your test is flawed in the first place.
A standard IQ test takes an hour or so to administer.
For a very long time, I just have not been smart enough to go to sleep in time. (Metacognition related "intelligence"?)
That's how you explore the space.
You can't expect a good solution before you've fully explored the space.
(This is not to say it's false)
The article title refers to something that's been long known even in pop science since Daniel Kahneman's "Thinking Fast and Slow". In fact, the original paper[0] even cites Daniel Kahneman's work:
> Simulation results indicate that decision-making speed is traded with accuracy, resembling influential theories from the fields of economy and psychology on fast and slow thinking.
Kahneman's work might be summarized (very roughly) as follows: "System 1" often quickly suggests an intuitive (and frequently wrong) answer, whereas "System 2" is the part that does the slower, more rational, and conscious thinking. "Intelligent" people tend to be those who control their System 1, and thus their urge for intuitive answers, more effectively.
And indeed this is what the article says, too:
> Resting-state functional MRI scans showed that slower solvers had higher average functional connectivity, or temporal synchrony, between their brain regions. In personalized brain simulations of the 650 participants, the researchers could determine that brains with reduced functional connectivity literally “jump to conclusions” when making decisions, rather than waiting until upstream brain regions could complete the processing steps needed to solve the problem.
> “In more challenging tasks, you have to store previous progress in working memory while you explore other solution paths and then integrate these into each other. This gathering of evidence for a particular solution may sometimes take longer, but it also leads to better results.
However, if I understand correctly, the thing about the research here that's actually novel is that they now have a better understanding of the neural processes underlying System 1 vs. System 2 ("we identified a mechanistic link between functional connectivity, intelligence, processing speed and brain synchrony for trading accuracy with speed in dependence of excitation-inhibition balance") and that they in fact simulated the brain digitally, see my other comment.
1. Dunning Kruger in 1999 on overconfidence when unskilled.
2. Selection bias in favor of “quick thinkers”
3. Compounding effect of selection bias in capitalism, often survivorship bias in decision-making theories.