The test has a number of fairly loose factors, which I explain to the candidate up front: it is more about approach and communication than a "correct" answer (though don't sit and wallow on one particular problem, like the Python GIL, when the exercise is meant to be about exploring together). 1. Many outages involve systems a team didn't even know existed, so being able to tell what you know from what you don't is a real skill: knowing when to stop wasting time and escalate, and when to keep troubleshooting the areas you're responsible for. Response time isn't the KPI in an interview and never should be. Part of why I ask this is to see whether a candidate will give up very easily and just escalate with "this isn't my problem" - that's not acceptable, we've all seen it at previous companies, and it's basically a criterion for failure. A more constructive approach is "I've done a lot of evaluation, including A, B, C, and D, and don't have anything else in mind; I'd appreciate another set of eyes" - that's all we can reasonably expect in an interview situation, and of course no new hire knows anything about a company's systems. There's usually no canonical expected answer either, since that would bias interviewers toward one "correct" solution or even one approach.
2. I usually ask candidates questions and scenarios based on their submitted code samples, and I'll presume familiarity with an OS or cloud provider they've listed on their resume. If they've lied for any reason it becomes obvious very fast, as I keep narrowing the question's scope until it's a purely toy problem - at which point it's useless for measuring anything beyond whether they've seen the problem before. Usually we can spot some errors in the submitted code samples, and we make clear to candidates that we don't expect perfect or even necessarily working code! The test isn't about being bug-free but about the attitude one has toward their work output and how they've thought about various failure modes. Surprisingly often, candidates say "that can't happen, I made sure of it in my test cases," and then I show with a quick test run that their submitted code does have flaws that would have caused issues in production - OOM conditions, segfaults, and so on. A successful candidate accepts and welcomes criticism as a team effort, holds strong but loosely held opinions that they update appropriately when data contradicts them, and can take a challenge from a junior engineer with respect and sincerity. Red flags: arguing with the interviewer for asking an admittedly irrelevant question, getting very defensive about a technical decision, or adamantly dismissing a question with "this should have been caught in code review, so it's not relevant." Yes, I've seen all of these responses. Nowhere in this process does "is an expert at X" enter the picture, and we also evaluate the quality of the questions the candidate asks. Many of us on the team have respectfully questioned the relevance and presumptions of an interview question, and that's a big plus in my opinion - it shows courage, respect, and critical thinking under pressure.
I make sure to tell candidates this during the interview, because people are so used to taking orders and keeping quiet that it's bad for the organization.
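To give a flavor of the "quick test run" described above: the flaw is often just an input class the candidate's own tests never exercised. This is a purely hypothetical sketch (the `average` function and its tests are my invention, not from any real submission) of how a one-line probe can contradict "that can't happen":

```python
# Hypothetical candidate helper: passes their own tests, so they
# believe the failure "can't happen."
def average(xs):
    return sum(xs) / len(xs)  # flaw: divides by zero on an empty list

# The candidate's original test cases - all pass:
assert average([1, 2, 3]) == 2
assert average([10]) == 10

# The interviewer's quick probe: an input class the tests never covered.
try:
    average([])
    crashed = False
except ZeroDivisionError:
    crashed = True

print("crashes on empty input:", crashed)  # -> crashes on empty input: True
```

The point isn't the bug itself but the reaction to it: "good catch, I never tested empty input" versus insisting the tests already proved it couldn't happen.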
Another red flag I've seen: "I've never had to do X, it's taken care of by Y," followed by an inability to reason or even conjecture about how Y could be designed - even after being reassured it's not about correctness - when Y is on their resume. For example, I once had a candidate who was clearly a competent, skilled programmer but was applying to be an SRE and couldn't work on a system below the container level with any tools, proprietary or open source; they were recommended to a different team, but not for the position they applied for.
I assure you that for years now, everyone I've interviewed has told recruiters in follow-ups that they had a great experience and that the interview felt like a technical conversation rather than a hazing or torture test, and nobody who was hired felt the questions were irrelevant to the job. I'm not an asshole of an interviewer trying to push people to some breaking point, which seems to be the outcome of most technical interviews in our industry.