1. Build an array of a few (let's say 8 or so) cameras pointing outward from a common center.
2. Use a stereo matching algorithm to extract a depth map from the perspective of each camera. Because the position and orientation of every camera in 3D space are known, those depth values become a point cloud associated with each camera.
3. Determine the 3D location and orientation of each "eye" you want to render, then render all the point clouds in 3D space to reconstruct a "reprojected" version of the scene from any desired viewpoint (a rough sketch of steps 2-3 follows this list). Of course, the farther the eyes deviate from the actual camera locations, the more stretched/warped the image will appear, but that won't matter much as long as you keep the eye coordinates within the physical space occupied by the camera array.
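For concreteness, here's a minimal sketch of steps 2-3 in Python with OpenCV and NumPy. It assumes adjacent cameras have been calibrated and rectified into stereo pairs, uses OpenCV's semi-global block matcher for step 2, and does a naive point-splat with a z-buffer for step 3. All the function names, intrinsics (K), poses (R, t), and parameters here are illustrative placeholders, not anything from the article.

    import cv2
    import numpy as np

    def depth_from_stereo_pair(img_left, img_right, focal_px, baseline_m):
        """Step 2: disparity via SGBM, then depth = f * B / d for one rectified pair."""
        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
        disp = matcher.compute(cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY),
                               cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)).astype(np.float32) / 16.0
        disp[disp <= 0] = np.nan                      # mask pixels with no valid match
        return focal_px * baseline_m / disp

    def backproject_to_world(depth, K, R, t):
        """Lift a depth map into a world-space point cloud using the camera pose (R, t)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # homogeneous pixels
        rays = np.linalg.inv(K) @ pix                                      # camera-space rays
        pts_cam = rays * depth.reshape(1, -1)                              # scale rays by depth
        pts_world = (R @ pts_cam) + t.reshape(3, 1)                        # camera -> world
        return pts_world.T[~np.isnan(depth.reshape(-1))]

    def reproject_to_eye(points_world, colors, K_eye, R_eye, t_eye, out_shape):
        """Step 3: splat the merged cloud into a virtual 'eye' view, keeping the nearest point."""
        pts_cam = R_eye.T @ (points_world.T - t_eye.reshape(3, 1))         # world -> eye camera
        in_front = pts_cam[2] > 0
        proj = K_eye @ pts_cam[:, in_front]
        uv = np.round(proj[:2] / proj[2]).astype(int)
        z = proj[2]
        img = np.zeros((*out_shape, 3), dtype=np.uint8)
        zbuf = np.full(out_shape, np.inf)
        ok = (uv[0] >= 0) & (uv[0] < out_shape[1]) & (uv[1] >= 0) & (uv[1] < out_shape[0])
        for (x, y), depth_val, c in zip(uv.T[ok], z[ok], colors[in_front][ok]):
            if depth_val < zbuf[y, x]:                                     # z-buffer: nearest wins
                zbuf[y, x] = depth_val
                img[y, x] = c
        return img

Forward-splatting like this leaves holes wherever the virtual eye sees surfaces the cameras didn't, which is exactly why the image stretches and warps as the eye moves away from the real camera positions; a serious implementation would mesh the cloud or inpaint the gaps instead.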
Honestly, I'd be kind of disappointed if CMU doesn't actually try this. It's discouraging to think that perhaps all this buzz about "hackathons" (as the article mentions) is encouraging -- even at major research universities -- quickly slapping components together to make something that sort of works, as opposed to fundamental algorithm development and properly engineered solutions.