This is only true if there is some trust in whoever is signing them. If anyone can get a key then anyone can sign the malicious version of the app with their own key, or one they stole from someone else. The user doesn't know who is supposed to be signing the app -- and if they did, then you could be using TOFU or importing the expected source's key from a trusted channel, without having to pay fees to anyone.
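To make the TOFU alternative concrete, here is a minimal sketch of trust-on-first-use key pinning. All names are hypothetical: the client remembers the fingerprint of the key that signed the first version it saw, and rejects later versions signed with a different key.

```python
import hashlib
import json
import pathlib

PIN_FILE = pathlib.Path("pins.json")  # hypothetical local pin store

def check_tofu(app_name: str, pubkey_bytes: bytes) -> bool:
    """Trust-on-first-use: pin the signing key's fingerprint the first
    time we see an app; reject later versions signed with another key."""
    fingerprint = hashlib.sha256(pubkey_bytes).hexdigest()
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    if app_name not in pins:
        pins[app_name] = fingerprint          # first sight: remember the key
        PIN_FILE.write_text(json.dumps(pins))
        return True
    return pins[app_name] == fingerprint      # later: must match the pin
```

The weakness, of course, is the first use: TOFU only catches a key that *changes*, which is exactly why the user needs to know who is supposed to be signing in the first place.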
> and if it was malicious from the start, it can be disabled.
In the same way that Defender can block it. Then the attacker makes a new version signed with a different key.
The problem with CA-based signing is that it's a garbage trade-off: if you make it easy to get a signing key, the attacker can easily get more and it does nothing; if you make it hard, you're kicking small developers in the teeth.
> The non-signed up can have zillions of malicious variants which something like Defender may or may not catch.
Which is still possible with code signing. The attacker gets their own key, uses it to infect many users, then some of those users are developers with their own signing keys and the attacker can use each of those keys to infect even more people and get even more keys.
Using keys as a rate limiter doesn't really work when one key can get you many more.
> It also gets a shot of circumventing (or even exploiting) AV.
As opposed to a shot at exploiting the signature verification method and the AV.
There is a better version of this that doesn't require expensive code signing certificates. You have the developer host their code signing key(s) on their website, served over HTTPS. Then the name displayed in the "do you trust them" box is the name of the website -- which is what the user is likely more familiar with anyway. If the program is signed by a key served on the website, and the user trusts the website, then you're done.
The application itself can still be obtained from another source, only the key has to be from the developer's website. Then future versions of the software signed with the same key can be trusted, but compromised keys can be revoked (and then replacements obtained from the website again).
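As a sketch of how the client side of that scheme could work, here is a hypothetical example using the third-party `cryptography` package with Ed25519 signatures. The URL and the `website` dict are stand-ins for fetching the key file over HTTPS (where the TLS certificate authenticates the domain, as usual); nothing here is a real OS mechanism.

```python
# Hypothetical sketch: verify a release against a signing key published
# on the developer's website, instead of a CA-issued code-signing cert.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Developer side: generate a signing key and sign a release artifact.
dev_key = Ed25519PrivateKey.generate()
release = b"contents of installer.exe"
signature = dev_key.sign(release)

# The developer publishes the raw public key at a well-known URL.
# (Simulated here with a dict; a real client would fetch over HTTPS.)
website = {
    "https://example.dev/signing-key.pub": dev_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
}

def verify_release(key_url: str, blob: bytes, sig: bytes) -> bool:
    """Client side: the user trusts the *domain* in key_url, not a CA's
    attestation of a legal name. Fetch the key, then check the signature."""
    pub = Ed25519PublicKey.from_public_bytes(website[key_url])  # "HTTPS fetch"
    try:
        pub.verify(sig, blob)
        return True
    except InvalidSignature:
        return False
```

Revocation falls out naturally: the developer deletes the compromised key file from their site and publishes a replacement, and clients re-fetch.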
This is better in every way than paying for EV certificates. It doesn't cost the developer anything, because they already have a domain (and if not they're very inexpensive and independently useful). But the attacker can't just register thousands of garbage domains because they're displayed to the user and nobody is going to trust "jdyfihjasdfhjkas.ru" or in principle anything other than the known developer's actual website, which the user is more likely to actually be familiar with than the legal name of the developer or their company.
Blocking a specific executable blocks only that one. Depending on the AV used, simply rebuilding may get it through (different hash); failing that, some trivial modifications will do.