Book, English, 774 pages, format (W × H): 161 mm x 238 mm, weight: 1310 g
Series: Chapman & Hall/CRC Data Science Series
ISBN: 978-1-4665-1084-5
Publisher: Taylor & Francis Inc
Statistical Foundations of Data Science gives a thorough introduction to commonly used statistical models and contemporary statistical machine learning techniques and algorithms, along with their mathematical insights and statistical theories. It aims to serve as a graduate-level textbook and a research monograph on high-dimensional statistics, sparsity and covariance learning, machine learning, and statistical inference. It includes ample exercises that involve both theoretical studies and empirical applications.
The book begins with an introduction to the stylized features of big data and their impacts on statistical analysis. It then introduces multiple linear regression and extends the techniques of model building via nonparametric regression and kernel tricks. It provides a comprehensive account of sparsity exploration and model selection for multiple regression, generalized linear models, quantile regression, robust regression, and hazards regression, among others. High-dimensional inference is also thoroughly addressed, as is feature screening. The book further gives a comprehensive account of high-dimensional covariance estimation and the learning of latent factors and hidden structures, as well as their applications to statistical estimation, inference, prediction, and machine learning problems. It also offers a thorough introduction to statistical machine learning theory and methods for classification, clustering, and prediction, including CART, random forests, boosting, support vector machines, clustering algorithms, sparse PCA, and deep learning.
Target audience
Academic
Authors/Editors
Subject areas
Further information & material
1. Introduction
2. Multiple and Nonparametric Regression
3. Introduction to Penalized Least-Squares
4. Penalized Least Squares: Properties
5. Generalized Linear Models and Penalized Likelihood
6. Penalized M-estimators
7. High Dimensional Inference
8. Feature Screening
9. Covariance Regularization and Graphical Models
10. Covariance Learning and Factor Models
11. Applications of Factor Models and PCA
12. Supervised Learning
13. Unsupervised Learning
14. An Introduction to Deep Learning