The conditional probability that someone hopelessly lacks the software skills to do a job, given that they nonetheless passed a TripleByte exam or something similar, is quite high. Overfitting and memorizing for the sake of the test is extremely common.
But it’s much, much harder to fake competence when you have to dynamically and verbally explain technical details in a conversational interview about your past work experience.
It’s far, far easier to fake expertise when you have more context than the person asking the questions. Standard technical interview questions ensure that the questioner has more context than the recipient, pathological cases aside.
In fact, conversational interviewing of this kind has very little to do with the domain specifics of any one project. The point is to recursively probe for deeper technical specifics, so the candidate has to explain, at finer and finer levels of detail, what the tradeoffs were, why exactly certain decisions were made, and how certain problems were overcome.
It is precisely when someone did not dig into the technical weeds of a project themselves that they will be unable to fake or fast-talk their way through this type of interview.
That is the defining characteristic of this way of interviewing.
I’m speaking from roughly ten years of experience running my team’s recruiting at a quant finance firm, where many interview requirements, tests, and the like came down from executive managers, so I got to see a wide range of performance on assessments of all sorts: riddles, hardcore algorithm trivia, and so on.
The sum total of all that leads me to believe quite strongly that the best signal-to-noise ratio comes from extremely careful, tedious resume selection followed by conversational and behavioral interviews that recursively probe into ever more specific technical details.