PCA is a linear transformation that maps the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. PCA can be used for dimensionality reduction: by keeping the lower-order principal components and discarding the higher-order ones, it retains those characteristics of the dataset that contribute most to its variance. Such low-order components often contain the "most important" aspects of the data, but this is not guaranteed and depends on the application.

Let p and m denote, respectively, the original and reduced number of variables, and let X denote the original variables. In the simplest case, our measure of reconstruction accuracy is the sum of the p squared multiple correlations between the X-variables and the predictions of X made from the factors. In the more general case, we can weight each squared multiple correlation by the variance of the corresponding X-variable.
Since we can set those variances ourselves by multiplying the scores on each variable by any constant we choose, this amounts to the ability to assign any weights we choose to the different variables.
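The accuracy measure above can be sketched numerically. The following is a minimal illustration, not a reference implementation: it assumes hypothetical toy data with n = 500 observations of p = 5 variables, reduces to m = 2 components via the singular value decomposition, and then computes each variable's squared multiple correlation with its prediction from the retained components, both unweighted (the simple sum) and weighted by the variance of the corresponding X-variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 500, 5, 2

# Hypothetical toy data: p correlated variables driven by m latent
# factors plus a little noise, so m components should reconstruct well.
Z = rng.standard_normal((n, m))
X = Z @ rng.standard_normal((m, p)) + 0.3 * rng.standard_normal((n, p))

Xc = X - X.mean(axis=0)              # center each variable

# PCA via SVD: rows of Vt are the principal directions, ordered so the
# first carries the greatest variance, the second the next greatest, etc.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:m].T               # project onto the first m components
X_hat = scores @ Vt[:m]              # predictions of X from the m factors

# Squared multiple correlation between each X-variable and its prediction.
# (The prediction is a linear combination of the factors, so this is just
# the squared simple correlation with the fitted values.)
r2 = np.array([np.corrcoef(Xc[:, j], X_hat[:, j])[0, 1] ** 2
               for j in range(p)])

simple_accuracy = r2.sum()                    # sum of p squared multiple correlations
weighted_accuracy = (Xc.var(axis=0) * r2).sum()  # weighted by each variable's variance

print("per-variable R^2:", np.round(r2, 3))
print("simple accuracy:", round(simple_accuracy, 3))
print("weighted accuracy:", round(weighted_accuracy, 3))
```

Rescaling a column of X by a constant changes its variance, and hence its contribution to the weighted sum, without changing its squared multiple correlation; that is the sense in which choosing the variances amounts to choosing the weights.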