This also means that, if your OS certificate store trusts the cryptographic signer of the archive, then:

1. the archive can be auto-expanded after download, in effect giving you the experience of "downloading a folder" (or, on macOS, "downloading a bundle") directly;
2. the contents can skip having your OS's foreign-source / untrusted / quarantine xattr-of-choice applied to them; and
3. any disk images unpacked from the archive can have an xattr applied saying they've been pre-integrity-checked, letting them skip the checksum verification they'd normally do on mount.
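To make point 2 concrete, here's a minimal sketch of "strip the quarantine xattr for trusted content." On macOS the attribute is `com.apple.quarantine`; unprivileged Linux code can only write attributes in the `user.` namespace, so the attribute name below is a stand-in, and the "did the signature verify?" step is just a comment:

```python
import os
import tempfile

# A scratch file standing in for a freshly downloaded archive member.
# (Needs a filesystem with user-xattr support, e.g. ext4.)
fd, path = tempfile.mkstemp(dir=".")
os.close(fd)

# macOS uses "com.apple.quarantine"; this name is a demo stand-in.
QUARANTINE = "user.quarantine-demo"

# An untrusted download gets the xattr applied by the downloader...
os.setxattr(path, QUARANTINE, b"0081;00000000;SomeBrowser;")

# ...but if the archive's signer verified against the OS cert store,
# the unpacker could skip applying it (or strip it, as here), so the
# contents open without any "are you sure?" prompt.
os.removexattr(path, QUARANTINE)
```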
Sadly, Apple's implementation of .XIP (which is, AFAIK, the only existing implementation of .XIP) used to be happy to unpack arbitrary .xip files, but as of macOS 10.13 it only expands .xip files signed by Apple, treating anyone else's .xips as invalid. So .XIP itself is "broken": even if it were adopted by other players, on at least one major OS the default handler would just report the archive as corrupt.
But that doesn't mean that the concept behind .XIP is bad. Someone could totally create an open format that has equivalent extraction semantics to .XIP, but "for real." It's not like there's any patented tech here; it's just a format with metadata attached that archivers have special logic for. We'd just need support for the same sort of verification (but against the actual OS cert store) in extractors like 7zip, The Unarchiver, and GNOME's Archive Manager.
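The shape of that extractor-side check might look like the sketch below. Python's stdlib has no public-key signature primitives, so an HMAC stands in for the real X.509 signature such a format would carry; the point is the control flow: verify first, and never extract on failure.

```python
import hashlib
import hmac

# Stand-in for a key that would, in a real extractor, chain up to a
# root in the OS certificate store.
TRUSTED_KEY = b"stand-in for an OS-cert-store-trusted key"

def verify_then_unpack(archive: bytes, signature: bytes) -> bool:
    """Refuse to extract anything whose signature doesn't check out."""
    expected = hmac.new(TRUSTED_KEY, archive, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # report "corrupt"; the extraction path never runs
    # ...hand off to the normal extraction path here...
    return True
```

(Unlike Apple's post-10.13 behavior, an open extractor would accept *any* signer the local trust store vouches for, not one hardcoded vendor.)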
-----
Then again, maybe this would be a too-soon-obsolete technology anyway. Every OS's primary filesystem seems to be trending toward a concept of subvolumes these days.
In a world where all filesystems supported subvolumes, an ideal archive format would just be a CQRS event-stream representation of the construction process for the dirents+inodes+extents of a subvolume, in some vendor-neutral "abstract filesystem" format.
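As a sketch of what that "abstract filesystem" event stream could contain: a handful of vendor-neutral construction events, replayed in order to materialize the subvolume. The event names and fields here are invented for illustration; a real consumer would hand each event to the filesystem driver rather than build an in-memory tree.

```python
from dataclasses import dataclass

@dataclass
class Mkdir:
    path: str

@dataclass
class WriteExtent:   # one extent's worth of file data at an offset
    path: str
    offset: int
    data: bytes

@dataclass
class SetXattr:
    path: str
    name: str
    value: bytes

def replay(events):
    """Materialize the event stream into a toy in-memory 'subvolume'."""
    dirs, files, xattrs = set(), {}, {}
    for ev in events:
        if isinstance(ev, Mkdir):
            dirs.add(ev.path)
        elif isinstance(ev, WriteExtent):
            buf = bytearray(files.get(ev.path, b""))
            end = ev.offset + len(ev.data)
            buf.extend(b"\0" * (end - len(buf)))  # grow file if needed
            buf[ev.offset:end] = ev.data
            files[ev.path] = bytes(buf)
        elif isinstance(ev, SetXattr):
            xattrs[(ev.path, ev.name)] = ev.value
    return dirs, files, xattrs
```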
Older/traditional systems could hold onto, and pass around, such an archive as a file (though, to operate on it, they'd have to effectively rebuild a regular disk-image-alike from it).
But, upon receipt by a modern consumer OS, the downloading program could ask the filesystem to reserve not a file for the download, but rather a subvolume, and then feed in the download body as a "change stream" for that subvolume, à la `btrfs receive`.
So you'd never actually write such an archive to disk in its packed state; you'd just unpack it as you receive it into its own isolated subvolume space. Like a streaming `tar x`, but without the decision of where to put the result. The new subvolume could even be a "non-root item" in the garbage collection sense; ref-counted by its firm-link references within your existing filesystems (where one such firm-link would be added from the beginning by the downloading program).
Such a subvolume-event-stream archive format could have the same sort of cryptographic integrity-checking as .XIP, where subvolume-streams signed by trusted sources would have different forced-mount-flag metadata set for the unpacked subvolume. Interestingly, this cryptographic integrity-checking could be applied by the filesystem driver as it constructs the subvolume from the stream, thus making it impossible for an application to screw up, and ensuring that the contents are never made available to userspace if they're corrupt, or signed by a blacklisted signer.
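A toy model of that driver-side behavior, with a SHA-256 digest standing in for the full signature check (a real implementation would verify a signature over the digest against the cert store): the digest is updated as each chunk of the stream is applied, and the subvolume is only "published" to userspace if the final digest matches.

```python
import hashlib

def receive_stream(chunks, expected_digest: str):
    """Apply a change stream, but only expose the result if it verifies."""
    h = hashlib.sha256()
    staged = []                 # stands in for the not-yet-visible subvolume
    for chunk in chunks:
        h.update(chunk)
        staged.append(chunk)    # applied to the subvolume, but hidden
    if h.hexdigest() != expected_digest:
        return None             # corrupt/untrusted: never reaches userspace
    return b"".join(staged)     # publish atomically
```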
As a bonus, you could have [signed] subvolume-event-stream archives that are incremental, and OS and app authors could use those for version updates, patching a read-only subvolume "app-vN" or "OS-vN" with an update-stream to generate "app-vN+1" or "OS-vN+1." Like a cross between how CoreOS does OS updates, and Google Chrome's Courgette binary-diff updates.
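The incremental case can be sketched with the same event-stream idea: a delta from vN to vN+1 is just a (shorter) list of construction ops plus a "remove" op, applied to a copy-on-write snapshot of the old version. The `dict`-as-subvolume and op names are illustrative only.

```python
def apply_delta(base: dict, delta: list) -> dict:
    """Produce the vN+1 'subvolume' from a read-only vN snapshot."""
    new = dict(base)            # CoW-style copy; vN itself is untouched
    for op, path, data in delta:
        if op == "put":
            new[path] = data    # add or overwrite one path's contents
        elif op == "remove":
            new.pop(path, None)
    return new
```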