The normal PCA encoding:
1) Given a mean-centered (and scaled) X matrix, get the latent variable matrix T from X = T * P’ + E, where P holds the loadings and E the residuals. P is your model: for a new vector x_new, you can calculate t_new = x_new * P (because P’ * P = I for orthonormal loadings).
This is the encoder; nothing changes here. The original matrix is dimensionally reduced and the residuals E are discarded, which is why PCA is lossy.
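As a rough sketch of that encoder in plain NumPy (the data, the choice of k, and the SVD-based fit are all just illustrative assumptions on my part):

```python
import numpy as np

# Illustrative sketch of the PCA encoder described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # made-up raw data, rows = samples

mu = X.mean(axis=0)                   # mean-centering (scaling omitted here)
Xc = X - mu

k = 3                                 # number of retained components (arbitrary)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k].T                          # loadings; columns orthonormal, so P.T @ P = I

T = Xc @ P                            # scores / latent variable matrix

x_new = rng.normal(size=10)           # encoding a new vector is the same projection
t_new = (x_new - mu) @ P
```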
The decoder is where things diverge.
The usual PCA decoder reconstructs a given latent vector t_any using the trained loadings P: x_reconstructed = t_any * P’. The reconstruction lies on a linear hyperplane (the subspace spanned by the loadings), so if the original data did not lie near that hyperplane, reconstruction errors are potentially high.
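Continuing the sketch above, the standard linear decoder is just the back-projection:

```python
# Standard linear PCA decoder: back-project the scores, then un-center.
x_reconstructed = t_new @ P.T + mu    # lies on the k-dimensional hyperplane
residual = x_new - x_reconstructed    # the part PCA simply discards
```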
In your proposal, instead of the linear decoder, you train a quadratic decoder (essentially classic ridge regression on quadratic features of the scores) to map back to the original X. So your reconstruction is x_reconstructed = poly(t_new).
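A rough sketch of what such a quadratic decoder could look like, continuing the variables from the sketch above (my assumed implementation using scikit-learn’s PolynomialFeatures and Ridge; your actual setup may differ):

```python
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

# Assumed implementation of the quadratic decoder: ridge regression from
# quadratic features of the scores back to the (centered) original X.
quad = PolynomialFeatures(degree=2, include_bias=True)
T_quad = quad.fit_transform(T)              # [1, t_i, t_i*t_j, t_i^2] per row
decoder = Ridge(alpha=1.0)                  # regularization strength is a free choice
decoder.fit(T_quad, Xc)                     # fit on the training corpus only

# Nonlinear reconstruction for a new point:
x_rec_quad = decoder.predict(quad.transform(t_new.reshape(1, -1)))[0] + mu
```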
This achieves lower reconstruction error in-sample (naturally, since a quadratic is a higher-order model than a linear one), but your poly function is fit to a particular corpus. That means when you’re in-distribution relative to that corpus you’re fine, but when you’re not, you can be very wrong in systematically biased ways that PCA’s linear reconstruction is not.
So this is not a better technique than PCA in a general sense. It’s a better reconstruction machine when your data is mostly in-sample: a kind of computationally cheap “specialization” on a particular distribution of data, which is useful when you’re mostly in-distribution but introduces new risks when you’re not.
Whereas PCA just drops the residual and makes modest claims, a quadratic decoder tries to predict the residual, and on out-of-sample data its extrapolation can be confidently, systematically wrong. In other words, it can hallucinate.
But with a large enough training corpus, chances are we’re going to be in-distribution most of the time, so maybe this could generalize well in practice.