My interpretation of this is that interviewees who can communicate clearly about code (whether they wrote it or not) correlate with high technical ability. Does this suggest that rather than having the interviewee write code on the spot, one could give them some new code they've never seen before and ask them to reason about it aloud for 30 minutes, then gauge their technical ability based on their ability to communicate clearly about the code?
In other words, could you replace live-coding with "here's some code, tell me about it"?
Basically, give them fewer than ten lines of code, ask them what it does, where a couple of bugs are, what they would name the function, etc. Then we talk about how to improve it. I'd say fewer than 20% of the people I interview pass. It's amazing how few people can find a bug and communicate it clearly.
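To make that concrete, here is a made-up snippet in the spirit of what I mean (Python, invented for this comment; the bug comments would of course be stripped before handing it to a candidate):

```python
# Under ten lines, with a couple of planted bugs to find.
def average(scores):
    total = 0
    for i in range(1, len(scores)):  # bug: skips the first element
        total += scores[i]
    return total / len(scores)       # bug: divides by zero on an empty list
```

A good candidate spots both problems, explains them out loud, and can discuss whether `average` is even the right name once the bugs are fixed.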
Half the people don't even tell me what they are thinking. And no matter how many times I try to work with them, act like their buddy, or whatever, they just kind of shut down. They think silently in their heads and don't talk through the problem at all.
Maybe they are introverted or simply need to think before they talk. Around 50% of people are introverted (somewhat less in the USA). Many people are like that and are simultaneously quite skilled. Moreover, some environments punish errors, so people who worked or studied there tend to be conditioned to think before talking.
"And no matter how many times I try to work with them, act like their buddy, or w.e. they just kind of shut down."
You are not buddies; you are an interviewer about to decide whether they get hired. Many people shutting down might mean they are not comfortable juggling the "buddy" social role and the "serious job interview" social expectations simultaneously.
As it happens, most of my "aha!" code-related moments come when I'm in the shower or washing the dishes. Very rarely have I looked at a piece of code in front of my computer and been immediately struck by what it does or by some hidden bug in it. It's all sort of mechanical, at least while I'm sitting at my desk. I'd say it would be very hard for people like me to reproduce that "aha!" moment during an interview, in front of people who expect me to have that moment of enlightenment right there, at that precise moment.
I'm left wondering why this would be a problem.
Those are independent clauses. Just because the individual is not talking out loud does not mean they aren't working through the problem. Personally, I am a very visual thinker and lay out visual models in my head for problems with high (3-5D) dimensionality that would take longer to draw clearly in 2D space while explaining all the traversals I'm mentally making. I then reduce that to a solution and explain it, or apologize for zoning out.
It became obvious to my office mates when I was working through and reducing a problem space, because I would "hang" on the thought. Sometimes I would be stuck in thought for 15-30 seconds, and according to them they could tell because my face would lose emotion and my eyes would flutter about.
I'm a product manager and I suspect I would do pretty well in this type of interview unless the snippet is especially complicated and/or esoteric.
In that case you might be screening for people who can talk but not actually code, or people who can code but might not be the best communicators.
Interestingly enough, if you're optimizing for specific outcomes, you may not need to screen out the latter.
Engineers who struggle to communicate but are productive can be extremely successful given the right environment, including having someone on the team who knows how to work with various personalities and is technical enough.
Just some rambling, but it's worth pointing out given the scarcity of engineering talent. Not everyone is going to be Paul Buchheit, ace product manager and engineer all wrapped into one.
Also, could you possibly just do an initial phone/Skype interview to save time in that case?
Indeed, it's amazing how quickly people reveal themselves in this way.
No whiteboards, no quiz shows, no grilling of any kind needed. Just: "tell me your story".
You can act like a buddy all you want, but people aren't stupid. They know that if they take a few extra seconds too long, you're already thinking "this guy sucks."
You could even have HR do it. When they arrive, HR hands them the code and instructions. Then, they read it for 15 minutes, until you walk in.
The old saying "If you can't explain it simply, you don't understand it well enough" applies here I think. :)
Then for the next round of hiring, advance the employees up the education pyramid. The trainees become trainers, and the previous trainers mentor the new trainers by reviewing the newly trained employees.[1] It was one of the things I think we did really well and that I am really proud of (even though the business went belly-up).
When training is done in this pattern, hopefully a virtuous cycle of education is established throughout your organization.
1: Note that this was at a quickly growing company, with recruits that had little to no relevant actual know-how to do what we did.
Engineers who are capable can clearly explain their coding choices and their style is evident. If they've borrowed too liberally or had someone else write it for them, it becomes apparent pretty quickly because they can't clearly explain the code they supposedly wrote.
This is a fairly new approach for us, but so far the candidates have appreciated it, and the team reports that it gives them a good sense of a candidate's ability.
Have them delve through it: reason aloud, ask questions about it. See how they gain context and how they troubleshoot. The biggest downside was that they sometimes need to reference external projects or documentation. I would often have to search on my work laptop and then let them shoulder-surf the docs.
When I'm nervous, I don't code well.
I don't know about other people, but for me personally, when you measure me under a high-stress situation, all you are measuring is just my level of nervousness.
I like your idea though.
After hiring him, he was unable to even SSH to a box. As in he didn't know how to give us his private key, or how to configure his SSH client.
Maybe one guy takes interviews under other people's names then lets them mess up the work.
My brain simply froze during the live coding test. I looked like an idiot, even though the coding challenge was one I could have normally handled in my sleep.
My guess is that it was due to anxiety about doing well on the interview. But to this day I'm gun-shy regarding live coding tests.
People normally perform better by verbalizing the process than they would by sitting at their computer and performing it, since a lot of the concern in an interview can be whether you're getting your code/commands syntax-perfect or not.
The only way I can really see this going badly would be if the interviewer only asked generalized questions that someone like a non-coder HN reader could pick up from the zeitgeist without ever actually having implemented anything. Instead of asking about trends or fads, it's probably better to ask detailed questions about implementation processes.
If they really have internalized all the applicable concepts but can't express them in computer language, well, it shouldn't be hard for them to get up to speed on the syntactical fineries. Make sure you're testing for grokkiness of the core concepts needed, not how well they're following the trends or the news.
Personally I use a very small code test and put the rest of the energy into a detailed conversation.
But it's quite possible for people who are not naturally able to communicate with a wide range of people to communicate well with a smaller subset (say, people they are more comfortable with, or people working on problems similar to theirs).
The more interesting question is, could those people who communicate less broadly, in a situation where they do communicate well, perform technically at the same level (or better) as people who are more natural communicators?
Another way to say this is that evaluating interviews is fairly prone to bias towards people who interview well. As someone who is hiring, I'd be much more enthused about these results if they correlated with job performance, not hiring results. It's almost gospel at this point that there are people in software who interview well but do not make good hires, yet the methodology of this article doesn't address that at all.
All told, nothing about this research is persuasive to me on my opinion of "interviews are largely worthless in determining who will be a good hire".
That sounds like a trick question. I've done a lot of code optimization. Here are a few generalizations I've drawn:
(a) You rarely know what part of a program is the bottleneck until you profile it on an appropriate workload.
(b) You often don't know why it's slow until you dig into performance metrics like cache-miss ratios, branch-mispredicts, etc. Often at the disassembly level.
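As a sketch of point (a), the profile-first workflow in Python might look like this (the deliberately quadratic `slow_concat` workload is invented for illustration):

```python
import cProfile
import io
import pstats

def slow_concat(n):
    # Deliberately quadratic: repeated string concatenation.
    out = ""
    for i in range(n):
        out += str(i)
    return out

# Profile an appropriate workload first; only then guess at bottlenecks.
profiler = cProfile.Profile()
profiler.enable()
slow_concat(10_000)
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()  # the top entries name the actual hot spots
```

Cache-miss and branch-mispredict counters (point b) need hardware-level tools like `perf` instead, but the principle is the same: measure before you optimize.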
I'd rather see a question like "Please modify this piece of code to add X functionality."
Some engineers spend a lot of effort trying to project a specific image of themselves at the expense of bringing up the morale of others in their team.
On a few occasions, I've seen an engineer tear apart the work of another engineer at a team meeting in order to make themselves look good and more senior in front of their boss. Often the critique is delivered with such vigour that it sounds rehearsed. I've seen this behaviour mostly in startups.
Based on other responses to your post, it's heartening to see that others do this kind of interview, so maybe in the future I might encounter it.
The unsuitability should be extremely obvious, no "spot the missing semicolon". Maybe it has a severe performance problem, maybe an obvious security flaw, maybe a logic error. The point is to prompt a discussion of better alternative ways to do what the code is trying to do.
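For example, a hypothetical Python snippet with the kind of glaring security flaw I mean (the `users` table is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

def find_user(conn, username):
    # Flaw to discuss: string-built SQL, wide open to injection.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The alternative a candidate should land on: a parameterized query.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# The classic payload leaks every row from the naive version:
leaked = find_user(conn, "x' OR '1'='1")
```

The point isn't the gotcha itself; it's the discussion of why the parameterized version is better and what else in the surrounding code deserves the same scrutiny.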
Someone says "I am a 9/10 in JS." I give them something that uses the animation-frame API, the DOM API, and a few more JavaScript APIs that I personally am not fully comfortable claiming a 9/10 on (and I've been working with JS for over ten years), and I see how they decipher what it is doing, what jumps out, and how they would review it and give feedback.
9/10 times they don't actually understand Javascript.
One of the biggest keys to doing well on technical interviews is to completely separate the problem solving from the coding. The strongest interviewers will discuss the problem and solve it at an abstract level using diagrams. Once satisfied with the solution, they'll code the entire thing making few mistakes.
I think this is what drives most of those metrics. Strong interviewers submit code later, and have a higher chance of it being correct because they take the time to problem solve upfront. Their thought process seems more clear because there isn't the iteration of "this should work, let me code it, oh no wait, that's wrong, let me erase this now..."
I think Einstein's the one that said: "If I had an hour to solve a problem I'd spend 55 minutes thinking about the problem and 5 minutes thinking about solutions."
"Fix" "Real fix" "Fix the fix because of fix"
Then you know you don't want those on your team. It really bites on tight deadlines, when you have to push something to production but the poor dude needs to push that one really, really last fix.
I agree. In general I'm mediocre at coding interviews, but I do best when I have a chance to whiteboard and draw the problem. On the other hand, I do absolutely terribly on phone screens with a shared document and nowhere for the interviewer to see my drawing.
The "average" is too sensitive to outliers and should not be used for such a comparison...
[Edit] Being bored, I calculated the Kolmogorov-Smirnov statistic based on the chart. It is between 10% and 10.5%. The number of defined functions seems to be a significant but weak indicator.
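For readers unfamiliar with it, the two-sample Kolmogorov-Smirnov statistic is just the largest gap between the two empirical CDFs. A minimal sketch in Python, with made-up data standing in for the article's:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    gap between the two empirical CDFs."""
    points = sorted(set(a) | set(b))

    def ecdf(sample, x):
        return sum(1 for v in sample if v <= x) / len(sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

# Illustrative (invented) counts of defined functions per interview:
passed = [3, 4, 4, 5, 5, 6]
failed = [2, 3, 3, 4, 4, 5]
d = ks_statistic(passed, failed)
```

A D around 0.10, as read off the chart, is exactly the "real but weak" territory: the distributions differ, but they overlap heavily.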
What's wrong with this, I think, is that a (journalistic) title should give an ultra-condensed summary of the main point of the article. This title suggests that the authors gathered a lot of data but didn't find much.
(I find myself quite intrigued by clickbaity titles somehow, sorry for that.)
Even more galling when you have a healthy GitHub portfolio that they refuse to even look at in favor of a quiz (this has happened recently).
It gives a common baseline to judge. Each candidate does the same thing and we have a good idea of what we are looking for, the rest of it tells us how you think, how you approach your work, organize your work, and best of all? You can compare that to how others do it.
Now, that's not to say it works for everyone, but that is what we use it for.
---
Before, as an engineer/manager, I hated doing "live coding" tests when they weren't relevant. For example, doing "algorithms" or "palindrome" or "sliding window dns" or "O(n)" examples when you're hiring for a front-end or management position screams to me that the people doing the interviewing don't know what they actually want.
Instead quizzes or live coding that are relevant like "tell me how to access all the elements in this particular element and traverse the children to apply some styling" is much more relevant and will show me the thought process, their ability to retain information, and their recall. It also shows communication ability when they get stuck and ask for help or use me as a sounding board.
It's not always about your implementation, but how you handle the situation and communicate.
And the thought that quizzes provide a fair point of comparison comes across to me as putting process ahead of substance. Interviewing isn't meant to be fair to everyone - only one person gets the job, after all - so it's not like handing out cookies and stickers in middle school. It's meant to see, in part, whether the person is capable of generating working code. If you have a person who can provide samples to prove it, requiring an artificial quiz really is a slap in the face to a lot of good candidates.
That all said, the correlation between great whiteboard / coding tricks and them being a successful part of a team isn't so great. It's not their job to code short programs under pressure of being watched. It's a proxy. There are better ways.
Yep. Articulate attention is the name of the game (where "communication ability" sounds a little nebulous).
If you can't organize your thoughts, bring them to the forefront of your attention, name them, you're likely bad at handling abstractions. And abstractions are at the core of "technical ability" -- the ability to name things, find the appropriate abstraction boundaries, chisel structure out of chaos.
Articulate speech is the greatest human invention for a reason.
Testing for that (plus conscientiousness -- can you pay attention to details and get shit done?) during interviews makes perfect sense.
- liked the person
- rated the questions 3 or 4 stars
- gave the interviewer 3 or 4 stars for being helpful
Do the trends still hold?
How are those trends compared to only looking at interviews with:
- disliked the person
- rated the questions 1 or 2 stars
- gave the interviewer 1 or 2 stars for being helpful
Looks to me that interview length is correlated with success rate. If your interviewer stops before 60 minutes, there's a bias towards successful interviews. It seems like the interviews that end up being "no"s tend to get hard-stopped right at the 1-hour mark.
But that's reversed. It is in fact fairly difficult for a high-level language programmer to pick up C++, and facility with C++ (or at least C) is a common, accepted goal for C++ hiring shops. A C++ shop that hired candidates without regard for their aptitude at C++ would have real problems.
Interesting that this effect does not show up for Java programmers.
Why isn't it the same in Java? I'm not sure; perhaps it's because Java has fewer gotchas as a language (certainly a lot less undefined behaviour, fewer weird memory-related gotchas, no templates, no multiple inheritance, etc.) and C++ has this "it's a difficult language" prestige which Java doesn't have.
Clicking on the graph to go to plot.ly and viewing its "data" tab, it looks like there's a blank X value for:

  text: bucketed_success_rate: -0.02<br>pct: 0.1<br>`Interviewer Would Hire`: False
  y: 0.0987654320987654
  x: (blank)

Edit: it looks way better if I disable Open Sans. It might just be a font issue with Chrome or Windows.
Maybe that is a tad bit too harsh, but surely the use of "big difference" and of "significant" seems like not being justified by the actual data:
>On average, successful interviews had final interview code that was on average 2045 characters long, whereas unsuccessful ones were, on average, 1760 characters long. That’s a big difference! This finding is statistically significant and probably not very surprising.
An average of 1760 vs. an average of 2045 characters implies an overall mean of around 1900 characters, so the difference is roughly ±7%, and with ranges that close, anything could cause it.
To gain or lose 200 characters, merely naming variables a, b, c, etc. vs. FirstUserChoice or DefaultArrayIndexingField (you know what I mean) would be enough.
Same goes for:
>On average, successful candidates’ code ran successfully (didn’t result in errors) 64% of the time, whereas unsuccessful candidates’ attempts to compile code ran successfully 60% of the time, and this difference was indeed significant.
As I see it, 60% vs. 64% as averages are almost exactly the same number and bear very little practical significance. Maybe it is just me missing some sensibility...
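This is the usual distinction between statistical significance and effect size: with enough interviews, even a 4-point gap clears the p < 0.05 bar while remaining practically small. A sketch using the standard two-proportion z-test (the sample sizes here are assumptions, since the article doesn't give them):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    # Standard two-proportion z-test using the pooled proportion.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical sample sizes; |z| > 1.96 means p < 0.05 (two-sided).
z_small = two_proportion_z(0.64, 100, 0.60, 100)    # not significant
z_large = two_proportion_z(0.64, 5000, 0.60, 5000)  # "significant"
```

So "significant" in the article's sense only says the gap is unlikely to be noise; it says nothing about whether a 60% vs. 64% run-success rate matters in practice.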