Book, English, Volume 28, 140 pages, format (W × H): 156 mm × 234 mm
Series: Foundations and Trends® in Machine Learning
ISBN: 978-1-68083-140-5
Publisher: Now Publishers
Principal components analysis (PCA) is a well-known technique for approximating a tabular data set by a low rank matrix. Here, we extend the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types. This framework encompasses many well-known techniques in data analysis, such as nonnegative matrix factorization, matrix completion, sparse and robust PCA, k-means, k-SVD, and maximum margin matrix factorization. The method handles heterogeneous data sets, and leads to coherent schemes for compressing, denoising, and imputing missing entries across all data types simultaneously. It also admits a number of interesting interpretations of the low rank factors, which allow clustering of examples or of features. We propose several parallel algorithms for fitting generalized low rank models, and describe implementations and numerical results.
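The simplest instance of this framework is quadratically regularized PCA: approximate a data matrix A by a product XY of two low rank factors, penalizing the Frobenius norms of both. A minimal sketch of fitting such a model by alternating minimization is shown below; the function name, the fixed iteration count, and the random initialization are illustrative choices, not the book's implementation.

```python
import numpy as np

def fit_glrm_quadratic(A, k, gamma=0.1, iters=50, seed=0):
    """Fit a quadratically regularized low rank model by alternating minimization.

    Minimizes ||A - X @ Y||_F^2 + gamma * (||X||_F^2 + ||Y||_F^2)
    over X (m x k) and Y (k x n). With one factor fixed, the other
    solves a ridge regression in closed form.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((m, k))
    Y = rng.standard_normal((k, n))
    reg = gamma * np.eye(k)
    for _ in range(iters):
        # Update X with Y fixed: each row of X is a ridge-regression solution.
        X = A @ Y.T @ np.linalg.inv(Y @ Y.T + reg)
        # Update Y with X fixed, symmetrically.
        Y = np.linalg.inv(X.T @ X + reg) @ X.T @ A
    return X, Y

# Example: recover rank-2 structure from a noisy 30 x 20 matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
A = A + 0.01 * rng.standard_normal(A.shape)
X, Y = fit_glrm_quadratic(A, k=2)
rel_err = np.linalg.norm(A - X @ Y) / np.linalg.norm(A)
```

Generalized low rank models replace the quadratic loss and penalty above with per-column losses and regularizers suited to each data type, and sum the loss only over observed entries, which is what yields compression, denoising, and imputation in one scheme.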
Contents:
1. Introduction
2. PCA and quadratically regularized PCA
3. Generalized regularization
4. Generalized loss functions
5. Loss functions for abstract data types
6. Multi-dimensional loss functions
7. Fitting low rank models
8. Choosing low rank models
9. Implementations
Acknowledgements. Appendices. References.
