If the data is dumb content like text, this amounts to regular DRM content encryption, except that there's no decryption key to be found in the wrapper program or anywhere else; the key is "baked into" the logic of the program in a non-recoverable way. (This would allow for things like "true" TPM chips that can store your keys opaquely, beyond the reach of forensic recovery.)
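A loose stdlib analogue — vastly weaker than real indistinguishability obfuscation, but it captures the "no key in the wrapper" property for the special case of a point function: the shipped program contains only a hash of the secret, so the secret is baked into the check without being recoverable from the program text. Names and the toy secret are purely illustrative.

```python
import hashlib

# The shipped program contains only this digest; the secret itself
# ("abc" here -- a toy; a real secret would be high-entropy) appears
# nowhere in the program.
BAKED_IN_DIGEST = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

def unlock(guess: bytes) -> bool:
    """True iff the guess matches the baked-in secret."""
    return hashlib.sha256(guess).hexdigest() == BAKED_IN_DIGEST
```

Hashing only hides the input to an equality check; iO is the (much harder) generalization that hides the logic of an arbitrary program the same way.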
If, on the other hand, the data is itself a program for which the wrapper serves as an interpreter, this amounts to a mathematical basis for a real "Trusted Computing Base", enabling any manner of things, like simple distributed computation on untrusted hardware, or mathematically-strong anti-cheating protection for an MMO game, or satisfying cell carriers' desires for a protected "baseband processor" under their control without that needing to be instantiated as a physical chip.
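To make the "data is itself a program" case concrete, here's a minimal sketch of the interpreter shape being described — a trivial stack machine standing in for the wrapper VM. Under real iO, it's this dispatch loop that would be obfuscated, so the host CPU would execute it without being able to follow what the bytecode is doing. The opcode set is made up for illustration.

```python
# Toy stack-machine interpreter: the "wrapper" runs "data" that is
# itself a program. (Nothing here is obfuscated; this only shows the
# interpreter structure that iO would render opaque.)
def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

# (2 + 3) * 4
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
run(program)  # → 20
```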
Effectively, creating a wrapper VM (the "bootstrap program" in the article's terminology) would allow a processor to run a "binary" through the VM that is literally opaque to it; code that, even in its operation as instructions on the CPU, the CPU is incapable of comprehending or interfering with (beyond simply terminating/interrupting the wrapper VM, or restricting its hardware access). Not only would the interpreted program's code itself be opaque; the working state—the contents of the wrapper program's memory, the processor's registers, and whatever else—would be opaque. The only place you could see such a program's intent realized would be in the IO it does—and that might just be encrypted network traffic sent to peers, too.
Such a software process, if given a full CPU hypervisor slot rather than having to make system calls to an OS, would be for the first time a "first-class citizen" on a computer, functioning more like[1] a flashable FPGA coprocessor connected to the CPU than a series of instructions that the CPU can edit at its whim. The CPU could ignore such a coprocessor—choose to not interact with it or power it (not emulate it, in other words), or tell the IOMMU to remove the coprocessor's access to peripherals, etc. But the CPU couldn't reach inside the coprocessor to fiddle with it, even though it's a virtual coprocessor residing entirely within "the mind of" the CPU. [The CPU could arbitrarily corrupt the memory the coprocessor was using for its state—but with good encryption, that would just immediately crash the wrapper VM with an assertion failure, rather than leaking any info.]
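The bracketed point — corruption crashes rather than leaks — can be sketched with stdlib HMAC alone. This shows only the tamper-evidence half (a real wrapper would also encrypt the state, and under iO the key below would itself be hidden inside the obfuscated program rather than sitting in a variable; `STATE_KEY` and the "VM step" are illustrative assumptions).

```python
import hashlib
import hmac

STATE_KEY = b"hidden-inside-the-obfuscated-program"  # toy stand-in

def seal(state: bytes) -> bytes:
    """MAC-protect the VM's working state as an opaque blob."""
    tag = hmac.new(STATE_KEY, state, hashlib.sha256).digest()
    return tag + state

def step(blob: bytes) -> bytes:
    """One VM step: verify the state, transform it, re-seal it."""
    tag, state = blob[:32], blob[32:]
    expected = hmac.new(STATE_KEY, state, hashlib.sha256).digest()
    # Any corruption by the host fails here -- an immediate assertion
    # failure, before the VM ever computes on the tampered state.
    assert hmac.compare_digest(tag, expected)
    new_state = state + b"."  # stand-in for real computation
    return seal(new_state)

blob = step(step(seal(b"boot")))   # normal operation proceeds
corrupted = blob[:-1] + b"\x00"    # the host flips a byte of VM memory
# step(corrupted) now raises AssertionError instead of running on bad state
```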
---
[1] Note that this is just an analogy from the CPU's perspective; we already have flashable coprocessors, but that doesn't help us any, because while the CPU can't poke into them, people can. Indistinguishability Obfuscation means that we're in the position the CPU is in; we can no more see into the VM or its state than the CPU can reach over and take apart a coprocessor.