This is a misapplication of Bayes' theorem.
P(bad) is the probability that any given app is bad.
P(signed) is the probability that any given app is signed.
P(bad if signed) = P(signed if bad) * P(bad) / P(signed)
Essentially your trust model requires that "the fraction of bad apps that are signed is small", i.e. that P(signed if bad) approaches 0. But signed malware exists; Stuxnet is the famous example, and there have been others before and since: http://users.umiacs.umd.edu/~tdumitra/papers/CCS-2017.pdf
Malware authors have an incentive to make their apps appear legitimate, whether by stealing keys, impersonating companies, or other means. Signing also helps get past automated checks (per the paper above).
Further, those probabilities assume a random distribution, but really expensive/dangerous malware has a stronger incentive to appear safe, so it is even more likely to be signed, even if most malware is not. Stuxnet is a case in point: high-value, sophisticated malware, signed.
P(really bad if signed) = P(signed if really bad) * P(really bad) / P(signed)
P(really bad) is lower,
but P(signed if really bad) approaches 1,
so P(really bad if signed) approaches P(really bad) / P(signed), which is no smaller than P(really bad).
Meaning the worse the malware, the less the signature tells you.
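To make this concrete, here is a toy calculation of both cases. All the rates below are made-up numbers purely for illustration, not measured base rates:

```python
def p_bad_given_signed(p_bad, p_signed_given_bad, p_signed_given_good):
    """Posterior P(bad | signed) via Bayes' theorem, expanding
    P(signed) over the bad and good populations."""
    p_signed = p_signed_given_bad * p_bad + p_signed_given_good * (1 - p_bad)
    return p_signed_given_bad * p_bad / p_signed

# Ordinary malware: rarely signed, so a signature is strong evidence of safety.
# Posterior drops well below the 1% prior.
common = p_bad_given_signed(p_bad=0.01,
                            p_signed_given_bad=0.05,
                            p_signed_given_good=0.95)

# High-value malware: almost always signed, so the posterior stays
# essentially at the prior; the signature tells you almost nothing.
targeted = p_bad_given_signed(p_bad=0.001,
                              p_signed_given_bad=0.99,
                              p_signed_given_good=0.95)
```

With these numbers, the posterior for common malware falls by roughly 20x, while for the high-value case it lands right around the prior of 0.001, matching the derivation above.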