What would be a really big result is finding IO that doesn't rely on multilinear maps.
Security is a double-edged sword. You can use it to protect against others, and others can also use it to protect against you. As computer security becomes stronger, I think it could be almost irresponsible to only mention the "good" uses - hackers accessing your bank account seems to be the cliché example - without also mentioning the malware-hiding, user-hostile, locked-down devices and DRM uses. After seeing what's happened to computing over the years, I'm starting to think that maybe such strong security is not good for society as a whole after all...
If the data is dumb content like text, this amounts to regular DRM content encryption, except that there's no decryption key to be found in the wrapper program or anywhere else; the key is "baked into" the logic of the program in a non-recoverable way. (This would allow for things like "true" TPM chips, that can store your keys in a way that's opaque to forensic recovery.)
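To make the "baked into the logic" idea concrete, here's a toy sketch in Python. This is emphatically not real iO - in this naive form the key is trivially recoverable by reading the source - but it shows the shape of the wrapper program whose obfuscated version, under iO, would compute the same function with the key diffused non-recoverably through its logic. The key, the XOR "cipher", and the function names are all illustrative assumptions.

```python
# Toy sketch (NOT real iO): a wrapper program with a key baked into it.
# In this plain form the key sits in the source for anyone to read; the
# claim about iO is that an obfuscated but functionally identical version
# would reveal no more about SECRET_KEY than black-box use of the function.

SECRET_KEY = 0x5C  # hypothetical one-byte XOR "key", for illustration only

def wrapper_decrypt(ciphertext: bytes) -> bytes:
    """Decrypt DRM'd content. An iO-obfuscated build would compute the
    same mapping without the key appearing anywhere as recoverable data."""
    return bytes(b ^ SECRET_KEY for b in ciphertext)

# Round trip: XOR is its own inverse, so "encrypting" twice recovers the text.
content = bytes(b ^ SECRET_KEY for b in b"some licensed text")
assert wrapper_decrypt(content) == b"some licensed text"
```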
If, on the other hand, the data is itself a program for which the wrapper serves as an interpreter, this amounts to a mathematical basis for a real "Trusted Computing Base", enabling any manner of things, like simple distributed computation on untrusted hardware, or mathematically-strong anti-cheating protection for an MMO game, or satisfying cell carriers' desires for a protected "baseband processor" under their control without that needing to be instantiated as a physical chip.
Effectively, creating a wrapper VM (the "bootstrap program" in the article's terminology) would allow a processor to run a "binary" through the VM that is literally opaque to it; code that, even in its operation as instructions on the CPU, the CPU is incapable of comprehending or interfering with (beyond simply terminating/interrupting the wrapper VM, or restricting its hardware access). Not only would the interpreted program's code itself be opaque; the working state—the contents of the wrapper program's memory (and the processor's registers, and whatever else)—would be opaque. The only place you could see such a program's intent realized would be in the I/O it does—and that might be just encrypted network traffic sent to peers, too.
Such a software process, if given a full CPU hypervisor slot rather than having to make system calls to an OS, would for the first time be a "first-class citizen" on a computer, functioning more like[1] a flashable FPGA coprocessor connected to the CPU than a series of instructions that the CPU can edit at its whim. The CPU could ignore such a coprocessor—choose to not interact with it or power it (not emulate it, in other words), or tell the IOMMU to remove the coprocessor's access to peripherals, etc. But the CPU couldn't reach inside the coprocessor to fiddle with it, even though it's a virtual coprocessor residing entirely within "the mind of" the CPU. [The CPU could arbitrarily corrupt the memory the coprocessor was using for its state—but with good encryption, that would just immediately crash the wrapper VM with an assertion failure, rather than leaking any info.]
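The bracketed point - corruption crashes the VM instead of leaking anything - is just authenticated encryption applied to the VM's working state. A minimal stdlib-only sketch, assuming the state is a byte string and using an HMAC tag as the integrity check (in the iO setting, the MAC key would itself live inside the obfuscated program, which is an assumption this plain Python can't model):

```python
# Sketch: host-corrupted state fails authentication and aborts, leaking nothing.
import hashlib
import hmac
import os

STATE_KEY = os.urandom(32)  # in the iO story this key lives inside the VM

def seal(state: bytes) -> bytes:
    """Attach a MAC tag so any tampering with the state is detectable."""
    tag = hmac.new(STATE_KEY, state, hashlib.sha256).digest()
    return tag + state

def unseal(blob: bytes) -> bytes:
    """Verify the tag before using the state; abort on any corruption."""
    tag, state = blob[:32], blob[32:]
    expected = hmac.new(STATE_KEY, state, hashlib.sha256).digest()
    assert hmac.compare_digest(tag, expected), "state corrupted; halting VM"
    return state

blob = seal(b"vm working memory")
assert unseal(blob) == b"vm working memory"

# The "CPU" flips one bit of the stored state:
corrupted = bytearray(blob)
corrupted[-1] ^= 1
try:
    unseal(bytes(corrupted))
except AssertionError:
    pass  # the wrapper VM halts with an assertion failure, as described above
```

A real scheme would encrypt as well as authenticate the state, but the failure mode is the same: tampering yields a hard stop, not information.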
---
[1] Note that this is just an analogy from the CPU's perspective; we already have flashable coprocessors, but that doesn't help us any, because while the CPU can't poke into them, people can. Indistinguishability Obfuscation means that we're in the position the CPU is in; we can no more see into the VM or its state than the CPU can reach over and take apart a coprocessor.
I'm pretty sure it was posted on HN at some point. I don't remember the term IO being used, so it may have been a different kind of obfuscation. There were some allusions made to an unsolvable jigsaw puzzle.
AFAIK, the definition of IO is: we have two programs that perform the same computation. After we apply IO to both, we cannot tell which obfuscated program corresponds to which original program.
However, there is a flaw: programs encrypting data with different keys are performing different computations.
So the IO definition does not claim that IO is able to hide the key.
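That definition can be sketched as a guessing game. Assumptions: the "programs" are Python callables, and `obfuscate` is a do-nothing stand-in for a real obfuscator - the point is only the shape of the guarantee, not its realization.

```python
# Sketch of the iO security game with two functionally equivalent programs.
import random

def prog_a(x: int) -> int:
    return x + x        # one implementation of doubling

def prog_b(x: int) -> int:
    return 2 * x        # a different implementation of the same function

def obfuscate(prog):
    """Placeholder obfuscator. Real iO guarantees that the obfuscations of
    any two programs computing the SAME function are computationally
    indistinguishable; it promises nothing about programs that differ."""
    return lambda x: prog(x)

# The game: the adversary receives the obfuscation of a randomly chosen
# program and must guess which one it was. iO says no efficient adversary
# beats a coin flip - but only because the two agree on every input:
choice = random.randrange(2)
obf = obfuscate([prog_a, prog_b][choice])
assert all(obf(x) == prog_a(x) == prog_b(x) for x in range(10))
```

This is exactly why the objection above has teeth: two programs hard-coding different keys compute different functions on some inputs, so the iO guarantee, read literally, doesn't apply to them - hiding the key has to be argued separately (e.g. via puncturable PRFs in the published constructions).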
It turns out we can do just about anything in modern crypto using IO - it is an extremely powerful primitive - including symmetric encryption, public-key encryption, etc.
From what I've read, that doesn't even matter. The obfuscated program IS effectively the key. A copy of that obfuscated program is still a copy of the key. It's still not clear to me what the advantage is supposed to be.