No, the kernel trick is something else: it's essentially a nonlinear basis representation of the model. Fitting a polynomial model, or using splines, is effectively the same idea (though "kernel trick" is ML terminology rather than statistics terminology, and it usually comes up in the context of SVMs, even though it applies to linear regression too). Transforming the data means transforming the Y-outcome, most commonly with log(y) for things that tend to be right-skewed: house prices are the classic example, along with income, various blood biomarkers, and really anything that can't go below zero but can (in principle) be arbitrarily large.
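To make the distinction concrete, here's a rough sketch (with made-up data and coefficients, purely for illustration) showing both ideas in one regression: a polynomial basis expansion of x (model still linear in the coefficients) combined with a log transform of the right-skewed outcome y.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a strictly positive, right-skewed outcome
# whose log happens to be quadratic in x.
x = rng.uniform(0, 4, 300)
log_y = 1.0 + 0.8 * x - 0.15 * x**2 + rng.normal(0, 0.1, 300)
y = np.exp(log_y)

# Nonlinear basis expansion: the features are nonlinear functions of x
# (polynomial terms), but the model remains linear in its coefficients.
Phi = np.column_stack([np.ones_like(x), x, x**2])

# Transforming the outcome: regress log(y), not y, on that basis.
coef, *_ = np.linalg.lstsq(Phi, np.log(y), rcond=None)
print(coef)  # roughly [1.0, 0.8, -0.15]
```

On the log scale the residuals are approximately symmetric, which is exactly why the transform helps for skewed outcomes like prices or income.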
In a few rare cases I've found sqrt(y) or 1/y to be a clever and useful transform, but those are very situational, usually arising when some physical law with that mathematical form underlies the data-generating process.
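A sketch of the 1/y case (the "physical law" and its constants here are invented for illustration): if the process obeys y = 1/(a + b·x), then regressing 1/y on x turns it into an ordinary straight-line fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reciprocal law: 1/y is linear in x, i.e. y = 1/(a + b*x).
x = rng.uniform(1, 10, 200)
a, b = 0.5, 0.3
y = 1.0 / (a + b * x + rng.normal(0, 0.01, 200))

# The 1/y transform linearizes the relationship; plain least squares
# then recovers the constants of the (assumed) physical law.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, 1.0 / y, rcond=None)
print(coef)  # approximately [0.5, 0.3]
```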