No, because PBKDFs are not a good mechanism for creating encryption keys. Instead you have an actual random key, and your devices gate access to that key with your device password.
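To illustrate the difference (a Python sketch; the passcode, salt, and iteration count are made-up parameters, not Apple's):

```python
import hashlib
import secrets

# A real content-encryption key: 256 bits drawn from the OS CSPRNG.
master_key = secrets.token_bytes(32)  # ~2^256 candidates to brute force

# A PBKDF-derived key is only as strong as the password behind it.
# A 6-digit passcode has at most 10^6 (~2^20) candidates, no matter
# how expensive the KDF's work factor makes each guess.
passcode = "483917"  # hypothetical user passcode
derived = hashlib.pbkdf2_hmac("sha256", passcode.encode(), b"salt", 100_000)

# So the device keeps the random master_key and merely *gates access*
# to it with the passcode, rather than deriving the key from the
# passcode itself.
print(len(master_key) * 8, "bits of real entropy vs ~20 bits behind 'derived'")
```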
> Does the decryption only occur on the user's device?
Yes, because only the user's devices have access to the key material needed to decrypt. Apple cannot decrypt them.
> Is this master password not reused for the account or has account authentication been changed to use a cryptographic proof produced on-device?
Not sure what you're asking here?
> If the key is ever decrypted on vendor's servers, everything else is theater.
As above, the vendor/Apple cannot decrypt anything[1], because they do not have the key material.
> And this is all of course also excluding auto-updating vendor-supplied authentication code from the threat model because the industry is not ready for that conversation yet.
Don't really agree. The malicious vendor update is something that is discussed reasonably actively; it's just that there isn't a solution to the problem. Even the extreme "publish all source code" idea doesn't work, as auditing these codebases for something malicious is simply not feasible - and even if it were, ensuring that the source code you get exactly matches the code in the update isn't feasible (because if you assume a malicious vendor, you have to assume they're willing to make the OS lie).
Anyway, here's a basic description of how to make a secure system for synchronizing anything, including key material (secure meaning "no one other than the end user can ever access the key material, under any circumstances, without having broken the core cryptographic algorithms that are used to secure everything").
Apple has some documentation on this scattered around, but essentially it works like this:
* There is a primary key - presumably AES, but I can't recall off the top of my head. This key is used to encrypt a bunch of additional keys for various services (this is fairly standard; the basic idea is that a compromise of one service doesn't compromise the others. To me this is "technically good", but I would assume that the most likely path to compromise is getting an individual device's keys, in which case you get everything anyway?)
* The first device you use to create an iCloud account or to enable syncing generates these keys
* That device also generates a bunch of asymmetric keys and pushes public keys to anywhere they need to go (i.e. iMessage keys)
* When you add a new device to your account, it messages your other devices asking for access to your synced key material. When you approve the addition of that new device on one of your existing ones, the existing device encrypts the master key with the public key provided by your new device and sends it back. At that point the new device can decrypt that response and use the key to decrypt the other key material for your account.
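The key hierarchy and device-addition flow above can be sketched roughly like this (Python stdlib only; HMAC-based derivation stands in for real key wrapping, and textbook-sized RSA numbers stand in for a real keypair - illustrative only, never use parameters this small):

```python
import hashlib
import hmac
import secrets

# The primary key gates per-service keys, so compromising one service
# doesn't expose the others. Real systems wrap keys under AES; HMAC
# derivation is a stdlib stand-in for that idea.
master_key = secrets.token_bytes(32)

def service_key(master: bytes, service: str) -> bytes:
    return hmac.new(master, service.encode(), hashlib.sha256).digest()

imessage_key = service_key(master_key, "iMessage")
keychain_key = service_key(master_key, "Keychain")

# Device addition: the new device generates an asymmetric keypair and
# publishes the public half; an existing, approved device wraps the
# master key to that public key and sends it back.
n, e, d = 3233, 17, 2753  # toy textbook RSA keypair (n = 61 * 53)

def wrap(key: bytes, n: int, e: int) -> list[int]:
    # existing device: encrypt each byte to the new device's public key
    return [pow(b, e, n) for b in key]

def unwrap(wrapped: list[int], n: int, d: int) -> bytes:
    # new device: decrypt with its private key
    return bytes(pow(c, d, n) for c in wrapped)

assert unwrap(wrap(master_key, n, e), n, d) == master_key
```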
All this is why in the Apple ecosystem if you lose all your devices, you historically lost pretty much everything in your account.
A few years ago Apple introduced "iCloud Key Vault" or some such marketing name for what are essentially very large sets of HSMs. When you set up a new device, that device pushes its key material to the HSMs - in what is functionally plaintext from the point of view of the HSMs - along with some combination of your account password and device passcode. You might now say "that means Apple has my key material", but Apple has designed these so that it cannot. Ivan Krstic did a talk about this at Black Hat a few years back, but essentially it works as follows:
* Apple buys a giant HSM
* Apple installs software on this HSM that is essentially a password+passcode protected account->key material database
* Installing software on an HSM requires what are called "admin cards", they're essentially just sharded hardware tokens. Once Apple has installed the software and configured the HSM, the admin cards are put through what Krstic called a "secure one way physical hashing function" (aka a blender)
* Once this has happened the HSM rolls its internal encryption keys. At this point it is no longer possible for Apple (or anyone else) to update the software, or in any way decrypt any data on the HSM.
* The recovery path then requires you to provide your account, account password, and passcode, and the HSM will only provide the key material if all of those match. Once your new device gets that material it can start to recover all the other material needed. As with your phone, the HSM itself enforces increasing delays between attempts. Unlike your phone, once a certain attempt count is reached the key material is destroyed and the only "recovery path" is an account reset - so at least you get to keep your existing purchases, email address, etc.
You might think it would be better to protect the data with some password-derived key, but that is strictly worse - especially as the majority of passwords and passcodes are neither great nor large. In general, having a secure piece of hardware gate access to a strong key is better than having the data encrypted under a weak key. The reason is that if the material is protected by that weak key rather than by enforced policy, an attacker can copy the encrypted material and brute force it offline, whereas a policy-based guard can directly enforce time and attempt restrictions.
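Some rough numbers show why (Python; the guess rates are made-up order-of-magnitude figures, not measurements):

```python
# Offline attack on a key *derived* from a 6-digit passcode: the
# attacker copies the encrypted blob and guesses at hardware speed.
candidates = 10 ** 6      # every possible 6-digit passcode
offline_rate = 10 ** 5    # assumed guesses/second against a slow KDF
offline_seconds = candidates / offline_rate
print(f"offline: keyspace exhausted in ~{offline_seconds:.0f} s")

# Policy-enforced attack against the HSM: attempts are capped, so the
# attacker's success probability is bounded regardless of hardware.
attempt_cap = 10
p_success = attempt_cap / candidates
print(f"online: at most {p_success:.3%} chance before the material is destroyed")
```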
[1] Excepting things that aren't end-to-end encrypted - most providers still have a few services that aren't E2E, though that mostly seems to be for historical reasons.