You're missing something.
The idea is to take one compiler source (S) and compile it with a diverse collection of compilers (Ck being a compiler in C0-CK), producing a diverse collection of binaries that are compilations of S: (Bk = Ck(S)). Because the different compilers are almost certainly not functionally identical to one another, the various Bk should not be expected to be bitwise identical. However, because they are compilations of the same source, they should be functionally identical; if not, one of the original compilers was broken (accidentally or deliberately).

So now we can compile that original source with each of the Bk compilers, and because these compilers are functionally identical, the results (Bk(S)) should be bitwise identical. There is certainly some chance of a false positive, due to bugs in the Ck compilers or exploitation of undefined behavior in S, but if you do get the same output (Bk(S)) from all of the Bk compilers, then you can be pretty confident that no Trusting Trust style attack is present: exceedingly confident when the various compilers have diverse histories, since it is then exceedingly unlikely that all the Ck compilers contain the same attack.

If there are any differences, you can manually inspect them to determine what the issue is and then, depending on what you find, either file a bug report with the appropriate compiler, change the source (S) to avoid undefined behavior, or notify people of the attack present in the compiler in question. This does involve some binary digging, but it is quite targeted compared to a full audit, and it may well not be necessary at all.
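The two-stage check above can be sketched with a toy model. This is not a real build script: the "compilers" here are stand-in callables, and the vendor names are made up. It just demonstrates the shape of the argument: stage-1 binaries Bk = Ck(S) differ bitwise, while stage-2 outputs Bk(S) are bitwise identical because each Bk's behavior is determined only by S.

```python
import hashlib

def make_toy_compiler(vendor_tag):
    """A stand-in for one of the diverse compilers Ck. The binaries it
    emits carry vendor-specific bytes, but the *behavior* of a binary
    it produces depends only on the source that was compiled -- the
    'functionally identical' assumption from the text."""
    def ck(source):
        # Stage-1 binary Bk = Ck(S): bitwise different per vendor.
        binary = f"{vendor_tag}:{source}".encode()
        # The compiled compiler Bk, modeled as a callable whose output
        # is a function of the compiled source S and its input alone.
        def bk(src):
            return hashlib.sha256(f"{source}->{src}".encode()).digest()
        return binary, bk
    return ck

S = "compiler-source-v1"  # the compiler source S (illustrative)
compilers = [make_toy_compiler(v) for v in ("gcc-ish", "clang-ish", "tcc-ish")]

stage1 = [ck(S) for ck in compilers]            # Bk = Ck(S)
stage2 = [bk(S) for _, bk in stage1]            # Bk(S)

print(len({binary for binary, _ in stage1}))    # stage-1 binaries all differ
print(len(set(stage2)))                         # stage-2 outputs collapse to one
```

In a real run, each stage would be an actual build of the compiler source, and the final comparison would be a hash over the produced binaries; a single matching hash across all Bk(S) is the pass condition, and any outlier names the specific compiler to investigate.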
Obviously, if you do have a trusted compiler, including it in the mix is great, but the technique doesn't rely on having one, nor on any two compilers producing bitwise-identical output except when those compilers are themselves compilations of the same source.