So you're right that the quiz does try to be harder if you're doing well, but it'll also give you easier questions if an incorrect answer lowers its estimate of your ability. We have a pretty sizeable bank of potential questions to ask a candidate, but the quiz tries to strike an optimal balance between appropriate difficulty and maximum informativeness. For example, we wouldn't want to ask you a particularly difficult question unless we're confident that a) it's a good fit for your estimated ability level, and b) it will give us more information about your ability than any other question in the bank.
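To make "maximum informativeness" concrete: in item response theory this is usually done by picking the item with the highest Fisher information at the current ability estimate. Here's a minimal sketch under a standard 2PL model — the item bank, parameter values, and function names are illustrative, not Triplebyte's actual implementation:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct answer given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p).
    It peaks when the item's difficulty matches the candidate's ability."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_question(theta_hat, bank):
    """Pick the bank item that is most informative at the current
    ability estimate. Bank entries are hypothetical (id, a, b) tuples."""
    return max(bank, key=lambda item: item_information(theta_hat, item[1], item[2]))

bank = [("easy", 1.2, -1.0), ("medium", 1.5, 0.0), ("hard", 1.1, 1.5)]
print(next_question(0.1, bank)[0])  # near-zero ability estimate -> "medium"
```

Because information peaks where difficulty matches ability, this criterion naturally delivers both goals at once: the most informative item is also the one best matched to the candidate's level.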
You're right that tailoring question difficulty to ability level can drastically increase a test's accuracy. But while a logistic regression model works well for a fixed quiz or a small number of questions, it isn't flexible enough for a fully adaptive system like the one we have at Triplebyte. Our models are loosely based on the kinds of systems the GMAT and GRE use, but we've implemented significant extensions on top of those approaches to fit our needs.
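The adaptive part — re-estimating ability after every answer so the next question can be chosen — can be sketched with an expected-a-posteriori (EAP) update over a grid, a common approach in GRE-style computerized adaptive tests. This is a toy illustration, not our production model; the grid bounds, standard-normal prior, and response format are assumptions:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT response probability."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_estimate(responses, grid_lo=-4.0, grid_hi=4.0, n=161):
    """Expected-a-posteriori ability estimate computed on a grid,
    with a standard-normal prior. responses is a list of
    (discrimination, difficulty, answered_correctly) tuples."""
    step = (grid_hi - grid_lo) / (n - 1)
    num = den = 0.0
    for i in range(n):
        theta = grid_lo + i * step
        prior = math.exp(-0.5 * theta * theta)  # unnormalized N(0, 1)
        likelihood = 1.0
        for a, b, correct in responses:
            p = p_correct(theta, a, b)
            likelihood *= p if correct else (1.0 - p)
        posterior = prior * likelihood
        num += theta * posterior
        den += posterior
    return num / den

# A correct answer raises the estimate; a miss on an easy item pulls it back down.
after_one_right = eap_estimate([(1.5, 0.0, True)])
after_a_miss = eap_estimate([(1.5, 0.0, True), (1.2, -1.0, False)])
print(after_one_right, after_a_miss)
```

Unlike a single fitted logistic regression over a fixed question set, this estimate is recomputed after every response, which is what lets the quiz react immediately — in either direction — to how a candidate is doing.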