This seems about as hard as digitally moving eyes.
I think the main source of artifacts is going to be lighting and reflections. Specular highlights and reflections are only visible when the light, the surface position and normal, and the observer are arranged in a specific way. If your two or more cameras are positioned elsewhere, there's no way to recover what color would be visible to a hypothetical camera in the center.
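To make the view dependence concrete, here's a minimal sketch using the Blinn-Phong specular term (the exact shading model is illustrative; real surfaces are more complicated). Two cameras looking at the same surface point under the same light see wildly different specular intensity, so neither off-axis camera observes the highlight a center camera would:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def blinn_phong_specular(light_dir, view_dir, normal, shininess=64):
    # Specular term depends on the half-vector between light and view:
    # move the camera and the highlight moves or vanishes.
    h = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    n_dot_h = max(0.0, sum(n * c for n, c in zip(normal, h)))
    return n_dot_h ** shininess

normal = (0.0, 0.0, 1.0)            # surface facing +z
light = normalize((0.0, 1.0, 1.0))  # light above and in front

# Camera near the mirror-reflection direction sees a bright highlight...
view_a = normalize((0.0, -1.0, 1.0))
# ...while a camera off to the side sees essentially nothing.
view_b = normalize((1.0, 0.0, 0.2))

print(blinn_phong_specular(light, view_a, normal))  # ~1.0
print(blinn_phong_specular(light, view_b, normal))  # ~0.0
```

The specular contribution isn't a property of the surface alone, so no amount of averaging the side cameras reconstructs the center view's highlight.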
Modern AI can try to guess, but fundamentally that information isn't anywhere in the video. It can assume the surface is made of a small number of uniform materials and extrapolate those materials across the frame and across frames, but that's going to fail too often for biological subjects like people.
Most of the literature I've seen has been on specifically gaze correction, which isn't actually what you would want.