Yeah, I had a very hard time believing that the multithreaded approach would be slower. It's so counterintuitive, since at first blush walking N trees seems like an embarrassingly parallel problem. I tested up to 1000 trees and single-threaded was still faster. I'm sure at some point the multithreaded approach will win out, but it's beyond the number of trees and max depth we are using.
I've half-convinced myself it's because we're talking about GBMs and not Random Forests (where my mind goes first). One of the smart things about XGBoost is parallelizing training by multithreading the variable selection at each node, but that doesn't apply to inference; I imagine you've got to predict trees sequentially, since each one is trained against the previous output? Now I wonder what those extra threads were even doing...
Each tree can be traversed independently, even in a GBM; the sequential dependence is only in training. Each tree is traversed until a leaf node is reached, and those leaf values are summed across all the trees. The sum of all the leaves plus a base score is then returned. In the case of binary classification that sum is passed through a sigmoid function; for regression the raw sum is returned.
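To make that concrete, here's a minimal sketch of the inference loop. The dict-based tree layout, function names, and default base score are all illustrative, not XGBoost's actual internals:

```python
import math

def predict_tree(node, x):
    """Walk one tree from root to leaf; return the leaf value."""
    while "leaf" not in node:
        # Internal node: branch left if the feature is below the threshold.
        child = "left" if x[node["feature"]] < node["threshold"] else "right"
        node = node[child]
    return node["leaf"]

def predict(trees, x, base_score=0.0, objective="binary"):
    """Sum all leaf values plus a base score, then transform per objective."""
    total = base_score + sum(predict_tree(tree, x) for tree in trees)
    if objective == "binary":
        return 1.0 / (1.0 + math.exp(-total))  # sigmoid for classification
    return total  # raw sum for regression
```

Note that each `predict_tree` call in the `sum(...)` is independent of the others, which is why tree traversal at inference time is parallelizable in principle, even if the per-tree work is too small for threading to pay off at the tree counts discussed above.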