
E-book, English, Volume 52, 716 pages

Series: Advanced Studies in Theoretical and Applied Econometrics

Fuleky: Macroeconomic Forecasting in the Era of Big Data

Theory and Practice
1st edition, 2019
ISBN: 978-3-030-31150-6
Publisher: Springer Nature Switzerland
Format: PDF
Copy protection: PDF watermark




This book surveys big data tools used in macroeconomic forecasting and addresses related econometric issues, including how to capture dynamic relationships among variables; how to select parsimonious models; how to deal with model uncertainty, instability, non-stationarity, and mixed-frequency data; and how to evaluate forecasts. Each chapter is self-contained with its own references, provides solid background information, and reviews the latest advances in the field. Accordingly, the book offers a valuable resource for researchers, professional forecasters, and students of quantitative economics.


Peter Fuleky is an Associate Professor of Economics with a joint appointment at the University of Hawaii Economic Research Organization (UHERO) and the Department of Economics at the University of Hawaii at Manoa. His research focuses on econometrics, time series analysis, and forecasting. He is a co-author of UHERO's quarterly forecast reports on Hawaii's economy. He obtained his Ph.D. in Economics from the University of Washington, USA.




Further Information & Material


Foreword
Preface
Contents
List of Contributors

Part I: Introduction

1 Sources and Types of Big Data for Macroeconomic Forecasting
  1.1 Understanding What's Big About Big Data
    1.1.1 How Big Is Big?
    1.1.2 The Challenges of Big Data
      Undocumented and Changing Data Structures
      Need for Network Infrastructure and Distributed Computing
      Costs and Access Limitations
      Data Snooping, Causation, and Big Data Hubris
  1.2 Sources of Big Data for Forecasting
    1.2.1 Financial Market Data
    1.2.2 E-commerce and Scanner Data
    1.2.3 Mobile Phones
    1.2.4 Search Data
    1.2.5 Social Network Data
    1.2.6 Text and Media Data
    1.2.7 Sensors and the Internet of Things
    1.2.8 Transportation Data
    1.2.9 Other Administrative Data
    1.2.10 Other Potential Data Sources
  1.3 Conclusion
  References

Part II: Capturing Dynamic Relationships

2 Dynamic Factor Models
  2.1 Introduction
  2.2 From Exact to Approximate Factor Models
    2.2.1 Exact Factor Models
    2.2.2 Approximate Factor Models
  2.3 Estimation in the Time Domain
    2.3.1 Maximum Likelihood Estimation of Small Factor Models
    2.3.2 Principal Component Analysis of Large Approximate Factor Models
    2.3.3 Generalized Principal Component Analysis of Large Approximate Factor Models
    2.3.4 Two-Step and Quasi-Maximum Likelihood Estimation of Large Approximate Factor Models
    2.3.5 Estimation of Large Approximate Factor Models with Missing Data
  2.4 Estimation in the Frequency Domain
  2.5 Estimating the Number of Factors
  2.6 Forecasting with Large Dynamic Factor Models
    2.6.1 Targeting Predictors and Other Forecasting Refinements
  2.7 Hierarchical Dynamic Factor Models
  2.8 Structural Breaks in Dynamic Factor Models
    2.8.1 Markov-Switching Dynamic Factor Models
    2.8.2 Time-Varying Loadings
  2.9 Conclusion
  References

3 Factor Augmented Vector Autoregressions, Panel VARs, and Global VARs
  3.1 Introduction
  3.2 Modeling Relations Across Units
    3.2.1 Panel VAR Models
    3.2.2 Restrictions for Large-Scale Panel Models
      Cross-Sectional Homogeneity
      Dynamic Interdependencies
      Static Interdependencies
      Implementing Parametric Restrictions
    3.2.3 Global Vector Autoregressive Models
      The Full GVAR Model
    3.2.4 Factor Augmented Vector Autoregressive Models
    3.2.5 Computing Forecasts
  3.3 Empirical Application
    3.3.1 Data and Model Specification
      An Illustrative Example
      Model Specification
    3.3.2 Evaluating Forecasting Performance
      Performance Measures
    3.3.3 Results
      Overall Forecasting Performance
      Forecasts for Individual Countries
  3.4 Summary
  Appendix A: Details on Prior Specification
  References

4 Large Bayesian Vector Autoregressions
  4.1 Introduction
    4.1.1 Vector Autoregressions
    4.1.2 Likelihood Functions
  4.2 Priors for Large Bayesian VARs
    4.2.1 The Minnesota Prior
    4.2.2 The Natural Conjugate Prior
    4.2.3 The Independent Normal and Inverse-Wishart Prior
    4.2.4 The Stochastic Search Variable Selection Prior
  4.3 Large Bayesian VARs with Time-Varying Volatility, Heavy Tails, and Serially Dependent Errors
    4.3.1 Common Stochastic Volatility
    4.3.2 Non-Gaussian Errors
    4.3.3 Serially Dependent Errors
    4.3.4 Estimation
  4.4 Empirical Application: Forecasting with Large Bayesian VARs
    4.4.1 Data, Models, and Priors
    4.4.2 Forecast Evaluation Metrics
    4.4.3 Forecasting Results
  4.5 Further Reading
  Appendix A: Data
  Appendix B: Sampling from the Matrix Normal Distribution
  References

5 Volatility Forecasting in a Data-Rich Environment
  5.1 Introduction
  5.2 Classical Tools for Volatility Forecasting: ARCH Models
    5.2.1 Univariate GARCH Models
    5.2.2 Multivariate GARCH Models
    5.2.3 Dealing with Large Dimension in Multivariate Models
  5.3 Stochastic Volatility Models
    5.3.1 Univariate Stochastic Volatility Models
    5.3.2 Multivariate Stochastic Volatility Models
    5.3.3 Improvements on Classical Models
    5.3.4 Dealing with Large Dimensional Models
  5.4 Volatility Forecasting with High Frequency Data
    5.4.1 Measuring Realized Variances
    5.4.2 Realized Variance Modelling and Forecasting
    5.4.3 Measuring and Modelling Realized Covariances
    5.4.4 Realized (Co)variance Tools for Large Dimensional Settings
    5.4.5 Bayesian Tools
  5.5 Conclusion
  References

6 Neural Networks
  6.1 Introduction
    6.1.1 Fully Connected Networks
    6.1.2 Estimation
      Gradient Estimation
    6.1.3 Example: XOR Network
  6.2 Design Considerations
    6.2.1 Activation Functions
    6.2.2 Model Shape
    6.2.3 Weight Initialization
    6.2.4 Regularization
    6.2.5 Data Preprocessing
  6.3 RNNs and LSTM
  6.4 Encoder-Decoder
  6.5 Empirical Application: Unemployment Forecasting
    6.5.1 Data
    6.5.2 Model Specification
    6.5.3 Model Training
    6.5.4 Results
  6.6 Conclusion
  References

Part III: Seeking Parsimony

7 Penalized Time Series Regression
  7.1 Introduction
  7.2 Notation
  7.3 Linear Models
    7.3.1 Autoregressive Models
    7.3.2 Autoregressive Distributed Lag Models
    7.3.3 Vector Autoregressive Models
    7.3.4 Further Models
  7.4 Penalized Regression and Penalties
    7.4.1 Ridge Regression
    7.4.2 Least Absolute Shrinkage and Selection Operator (Lasso)
    7.4.3 Adaptive Lasso
    7.4.4 Elastic Net
    7.4.5 Adaptive Elastic Net
    7.4.6 Group Lasso
    7.4.7 Other Penalties and Methods
  7.5 Theoretical Properties
  7.6 Practical Recommendations
    7.6.1 Selection of the Penalty Parameters
      Cross-Validation
      Information Criteria
    7.6.2 Computer Implementations
  7.7 Simulations
  7.8 Empirical Example: Inflation Forecasting
    7.8.1 Overview
    7.8.2 Data
    7.8.3 Methodology
    7.8.4 Results
  7.9 Conclusions
  References

8 Principal Component and Static Factor Analysis
  8.1 Principal Component Analysis
    8.1.1 Introduction
    8.1.2 Variance Maximization
    8.1.3 Reconstruction Error Minimization
    8.1.4 Related Methods
      Independent Component Analysis
      Sparse Principal Component Analysis
  8.2 Factor Analysis with Large Datasets
    8.2.1 Factor Model Estimation by the Principal Component Method
      Estimation
      Estimating the Number of Factors
      Rate of Convergence and Asymptotic Distribution
      Factor-Augmented Regression
  8.3 Regularization and Machine Learning in Factor Models
    8.3.1 Machine Learning Methods
    8.3.2 Model Selection Targeted at Prediction
      Targeted Predictors
      Partial Least Squares and Sparse Partial Least Squares
  8.4 Policy Evaluation with Factor Models
    8.4.1 Rubin's Model and ATT
    8.4.2 Interactive Fixed-Effects Model
    8.4.3 Synthetic Control Method
  8.5 Empirical Application: Forecasting in Macroeconomics
    8.5.1 Forecasting with the Diffusion Index Method
      Forecasting Procedures
      Benchmark Models
      Diffusion Index Models
      Forecasting Performance
    8.5.2 Forecasting Augmented with Machine Learning Methods
      Diffusion Index Models
      Hard Thresholding Models
      Soft Thresholding Models
      Empirical Findings
    8.5.3 Forecasting with PLS and Sparse PLS
    8.5.4 Forecasting with ICA and Sparse PCA
  8.6 Empirical Application: Policy Evaluation with Interactive Effects
    8.6.1 Findings Based on Monte Carlo Experiments
    8.6.2 Empirical Findings
  References

9 Subspace Methods
  9.1 Introduction
  9.2 Notation
  9.3 Two Different Approaches to Macroeconomic Forecasting
    9.3.1 Forecast Combinations
    9.3.2 Principal Component Analysis, Diffusion Indices, Factor Models
  9.4 Subspace Methods
    9.4.1 Complete Subset Regression
      Subspace Dimension
      Weighting Schemes
      Limitations
    9.4.2 Random Subset Regression
    9.4.3 Random Projection Regression
    9.4.4 Compressed Regression
  9.5 Empirical Applications of Subspace Methods
    9.5.1 Macroeconomics
    9.5.2 Microeconomics
    9.5.3 Finance
    9.5.4 Machine Learning
  9.6 Theoretical Results: Forecast Accuracy
    9.6.1 Mean Squared Forecast Error
      Identity Covariance Matrix
    9.6.2 Mean Squared Forecast Error Bounds
    9.6.3 Theoretical Results in the Literature
  9.7 Empirical Illustrations
    9.7.1 Empirical Application: FRED-MD
      Results
    9.7.2 Empirical Application: Stock and Watson (2002)
      Methods
      Results
  9.8 Discussion
  References

10 Variable Selection and Feature Screening
  10.1 Introduction
  10.2 Marginal, Iterative, and Joint Feature Screening
    10.2.1 Marginal Feature Screening
    10.2.2 Iterative Feature Screening
    10.2.3 Joint Feature Screening
    10.2.4 Notation and Organization
  10.3 Independent and Identically Distributed Data
    10.3.1 Linear Model
    10.3.2 Generalized Linear Model and Beyond
    10.3.3 Nonparametric Regression Models
    10.3.4 Model-Free Feature Screening
    10.3.5 Feature Screening for Categorical Data
  10.4 Time-Dependent Data
    10.4.1 Longitudinal Data
    10.4.2 Time Series Data
  10.5 Survival Data
    10.5.1 Cox Model
    10.5.2 Feature Screening for the Cox Model
  References

Part IV: Dealing with Model Uncertainty

11 Frequentist Averaging
  11.1 Introduction
  11.2 Background: Model Averaging
  11.3 Forecast Combination
    11.3.1 The Problem
    11.3.2 Forecast Criteria
    11.3.3 MSFE
      The Forecast Combination Puzzle
      Is the Simple Average Optimal?
    11.3.4 MAD
  11.4 Density Forecast Combination
    11.4.1 Optimal Weights
    11.4.2 Theoretical Discussions
    11.4.3 Extension: Method of Moments
  11.5 Conclusion
  Technical Proofs
  References

12 Bayesian Model Averaging
  12.1 Introduction
  12.2 BMA in Economics
    12.2.1 Jointness
    12.2.2 Functional Uncertainty
  12.3 Statistical Model and Methods
    12.3.1 Model Specification
    12.3.2 Regression Parameter Priors
    12.3.3 Model Priors
      Independent Model Priors
      Dependent Model Priors
      Dirichlet Process Model Priors
    12.3.4 Inference
    12.3.5 Post-Processing
  12.4 Application
    12.4.1 Data Description
    12.4.2 Exploratory Data Analysis
    12.4.3 BMA Results
    12.4.4 Iterations Matter
    12.4.5 Assessing the Forecasting Performance
  12.5 Summary
  References

13 Bootstrap Aggregating and Random Forest
  13.1 Introduction
  13.2 Bootstrap Aggregating and Its Variants
    13.2.1 Bootstrap Aggregating (Bagging)
    13.2.2 Sub-sampling Aggregating (Subagging)
    13.2.3 Bootstrap Robust Aggregating (Bragging)
    13.2.4 Out-of-Bag Error for Bagging
  13.3 Decision Trees
    13.3.1 The Structure of a Decision Tree
    13.3.2 Growing a Decision Tree for Classification: ID3 and C4.5
    13.3.3 Growing a Decision Tree for Classification: CART
    13.3.4 Growing a Decision Tree for Regression: CART
    13.3.5 Variable Importance in a Decision Tree
  13.4 Random Forests
    13.4.1 Constructing a Random Forest
    13.4.2 Variable Importance in a Random Forest
    13.4.3 Random Forest as Adaptive Kernel Functions
  13.5 Recent Developments of Random Forest
    13.5.1 Extremely Randomized Trees
    13.5.2 Soft Decision Tree and Forest
  13.6 Applications of Bagging and Random Forest in Economics
    13.6.1 Bagging in Economics
    13.6.2 Random Forest in Economics
  13.7 Summary
  References

14 Boosting
  14.1 Introduction
  14.2 AdaBoost
    14.2.1 AdaBoost Algorithm
    14.2.2 An Example
    14.2.3 AdaBoost: Statistical View
  14.3 Extensions to AdaBoost Algorithms
    14.3.1 Real AdaBoost
    14.3.2 LogitBoost
    14.3.3 Gentle AdaBoost
  14.4 L2Boosting
  14.5 Gradient Boosting
    14.5.1 Functional Gradient Descent
    14.5.2 Gradient Boosting Algorithm
    14.5.3 Gradient Boosting Decision Tree
    14.5.4 Regularization
      Early Stopping
      Shrinkage Method
    14.5.5 Variable Importance
  14.6 Recent Topics in Boosting
    14.6.1 Boosting in Nonlinear Time Series Models
    14.6.2 Boosting in Volatility Models
    14.6.3 Boosting with Momentum (BOOM)
    14.6.4 Multi-Layered Gradient Boosting Decision Tree
  14.7 Boosting in Macroeconomics and Finance
    14.7.1 Boosting in Predicting Recessions
    14.7.2 Boosting Diffusion Indices
    14.7.3 Boosting with Markov-Switching
    14.7.4 Boosting in Financial Modeling
  14.8 Summary
  References

15 Density Forecasting
  15.1 Introduction
  15.2 Computing Density Forecasts
    15.2.1 Distribution Assumption
    15.2.2 Bootstrapping
      Residual-Based Bootstrapping of Density Forecasts
      Accounting for Autocorrelated or Heteroskedastic Errors
      Block Wild Bootstrapping of Density Forecasts
    15.2.3 Bayesian Inference
  15.3 Density Combinations
    15.3.1 Bayesian Model Averaging
    15.3.2 Linear Opinion Pool
    15.3.3 Generalized Opinion Pool
  15.4 Density Forecast Evaluation
    15.4.1 Absolute Accuracy
    15.4.2 Relative Accuracy
    15.4.3 Forecast Calibration
  15.5 Monte Carlo Methods for Predictive Approximation
    15.5.1 Accept-Reject
    15.5.2 Importance Sampling
    15.5.3 Metropolis-Hastings
    15.5.4 Constructing Density Forecasts Using GPUs
  15.6 Conclusion
  Appendix
  References

16 Forecast Evaluation
  16.1 Forecast Evaluation Using Point Predictive Accuracy Tests
    16.1.1 Comparison of Two Non-nested Models
    16.1.2 Comparison of Two Nested Models
      Clark and McCracken Tests for Nested Models
      Out-of-Sample Tests for Granger Causality
    16.1.3 A Predictive Accuracy Test that Is Consistent Against Generic Alternatives
    16.1.4 Comparison of Multiple Models
      A Reality Check for Data Snooping
      A Test for Superior Predictive Ability
      A Test Based on Sub-Sampling
  16.2 Forecast Evaluation Using Density-Based Predictive Accuracy Tests
    16.2.1 The Kullback-Leibler Information Criterion Approach
    16.2.2 A Predictive Density Accuracy Test for Comparing Multiple Misspecified Models
  16.3 Forecast Evaluation Using Density-Based Predictive Accuracy Tests that Are Not Loss Function Dependent: The Case of Stochastic Dominance
    16.3.1 Robust Forecast Comparison
  References

Part V: Further Issues

17 Unit Roots and Cointegration
  17.1 Introduction
  17.2 General Setup
  17.3 Transformations to Stationarity and Unit Root Pre-testing
    17.3.1 Unit Root Test Characteristics
      Size Distortions
      Power and Specification Considerations
    17.3.2 Multiple Unit Root Tests
      Controlling Generalized Error Rates
      Sequential Testing
      Multivariate Bootstrap Methods
  17.4 High-Dimensional Cointegration
    17.4.1 Modelling Cointegration Through Factor Structures
      Dynamic Factor Models
      Factor-Augmented Error Correction Model
      Estimating the Number of Factors
    17.4.2 Sparse Models
      Full-System Estimation
      Single-Equation Estimation
  17.5 Empirical Applications
    17.5.1 Macroeconomic Forecasting Using the FRED-MD Dataset
      Transformations to Stationarity
      Forecast Comparison After Transformations
      Forecast Comparisons for Cointegration Methods
    17.5.2 Unemployment Nowcasting with Google Trends
      Transformations to Stationarity
      Forecast Comparison
  17.6 Conclusion
  References

18 Turning Points and Classification
  18.1 Introduction
  18.2 The Forecasting Problem
    18.2.1 Real-Time Classification
    18.2.2 Classification and Economic Data
    18.2.3 Metrics for Evaluating Class Forecasts
  18.3 Machine Learning Approaches to Supervised Classification
    18.3.1 Cross-Validation
    18.3.2 Naïve Bayes
    18.3.3 k-Nearest Neighbors
    18.3.4 Learning Vector Quantization
    18.3.5 Classification Trees
    18.3.6 Bagging, Random Forests, and Extremely Randomized Trees
    18.3.7 Boosting
  18.4 Markov-Switching Models
  18.5 Application
  18.6 Conclusion
  References

19 Robust Methods for High-Dimensional Regression and Covariance Matrix Estimation
  19.1 Introduction
  19.2 Robust Statistics Tools
    19.2.1 Huber Contamination Models
    19.2.2 Influence Function and M-Estimators
  19.3 Robust Regression in High Dimensions
    19.3.1 A Class of Robust M-Estimators for Generalized Linear Models
    19.3.2 Oracle Estimators and Robustness
    19.3.3 Penalized M-Estimators
    19.3.4 Computational Aspects
      Fisher Scoring Coordinate Descent
      Tuning Parameter Selection
    19.3.5 Robustness Properties
      Finite Sample Bias
      Influence Function
  19.4 Robust Estimation of High-Dimensional Covariance Matrices
    19.4.1 Sparse Covariance Matrix Estimation
    19.4.2 The Challenge of Heavy Tails
    19.4.3 Revisiting Tools from Robust Statistics
    19.4.4 On the Robustness Properties of the Pilot Estimators
  19.5 Further Extensions
    19.5.1 Generalized Additive Models
    19.5.2 Sure Independence Screening
    19.5.3 Precision Matrix Estimation
    19.5.4 Factor Models and High-Frequency Data
  19.6 Conclusion
  References

20 Frequency Domain
  20.1 Introduction
  20.2 Background
  20.3 Granger Causality
  20.4 Wavelets
    20.4.1 Wavelet Forecasting
  20.5 ZVAR and the Generalised Shift Operator
    20.5.1 Generalised Shift Operator
    20.5.2 ZVAR Model
    20.5.3 Monte Carlo Evidence
  20.6 Conclusion
  References

21 Hierarchical Forecasting
  21.1 Introduction
  21.2 Hierarchical Time Series
  21.3 Point Forecasting
    21.3.1 Single-Level Approaches
      Bottom-Up
      Top-Down
      Middle-Out
    21.3.2 Point Forecast Reconciliation
      Optimal MinT Reconciliation
  21.4 Hierarchical Probabilistic Forecasting
    21.4.1 Probabilistic Forecast Reconciliation in the Gaussian Framework
    21.4.2 Probabilistic Forecast Reconciliation in the Non-parametric Framework
  21.5 Australian GDP
    21.5.1 Income Approach
    21.5.2 Expenditure Approach
  21.6 Empirical Application Methodology
    21.6.1 Models
    21.6.2 Evaluation
  21.7 Results
    21.7.1 Base Forecasts
    21.7.2 Point Forecast Reconciliation
    21.7.3 Probabilistic Forecast Reconciliation
  21.8 Conclusions
  Appendix
  References


