Book, English, 240 pages, format (W × H): 152 mm × 229 mm
Applied Time Series Analysis for the Social Sciences: Specification, Estimation, and Inference
ISBN: 978-0-470-74993-7
Publisher: John Wiley & Sons Inc
EXPLORE THIS INDISPENSABLE AND COMPREHENSIVE GUIDE TO TIME SERIES ANALYSIS FOR STUDENTS AND PRACTITIONERS IN A WIDE VARIETY OF DISCIPLINES
Applied Time Series Analysis for the Social Sciences: Specification, Estimation, and Inference delivers an accessible guide to time series analysis that combines theory and practice. The coverage spans developments from ARIMA intervention models and generalized least squares to the London School of Economics (LSE) approach and vector autoregression. The book breaks difficult concepts into manageable pieces and offers plenty of examples and exercises, and the author uses lag operator algebra throughout to clarify dynamic specification and the connections between model specifications that appear more different than they are.
The book is ideal for those with minimal mathematical experience and is intended to follow a course in multiple regression; its exercises build general skills such as using mathematical expectation to derive means and variances. Readers will also benefit from the inclusion of:
- A focus on social science applications and a mix of theory and detailed examples provided throughout
- An accompanying website with data sets and examples in Stata, SAS and R
- A simplified unit root testing strategy based on recent developments
- An examination of various uses and interpretations of lagged dependent variables and the common pitfalls students and researchers face in this area
- An introduction to LSE methodology such as the COMFAC critique, general-to-specific modeling, and the use of forecasting to evaluate and test models
Perfect for students and professional researchers in political science, public policy, sociology, and economics, Applied Time Series Analysis for the Social Sciences: Specification, Estimation, and Inference will also earn a place in the libraries of postgraduate students and researchers in public health, public administration and policy, and education.
Authors/Editors
Subject Areas
Further Information & Material
Acknowledgments xi
About the Companion Website xiii
1 Introduction 1
1.1 Why Time Series and Why This Book? 1
1.2 Time Series: Preliminaries 4
1.2.1 What is a Time Series? 4
1.2.2 Why Special Methods? 5
1.2.2.1 Autocorrelation and Omitted Variables 6
1.2.2.2 Delayed Effects 8
1.2.2.3 Causal Representation 9
1.3 Time Series Approaches: Some History, and Outline of the Book 10
1.3.1 Regression Approaches and Structural Models 10
1.3.2 Classical Time Series: ARIMA (Box–Jenkins) Models 13
1.3.3 The LSE/Hendry Approach 14
1.3.4 Vector Autoregression 15
1.4 Summary 16
References 16
2 Foundations 19
2.1 Multiple Interpretations: Some Intuition 19
2.2 The Lag Operator, the Difference Operator, and Lag Operator Algebra 22
2.2.1 The Lag Operator and Some Properties 22
2.2.2 The Difference Operator 23
2.3 Lag Operator Division and Infinite Series 24
2.3.1 The Inverse of “L” 24
2.3.2 The Inverse of “1 − βL” 24
2.4 Lag Operator Algebra: An Example 26
2.5 An Aside on Linear Difference Equations 27
Reference 27
3 Properties of Time Series: Mean and Variance Stationarity 29
3.1 Stationarity: Formal Definitions 32
3.2 Mean Non-stationarity: Stochastic Trend Versus Deterministic Trend 33
3.2.1 A Simple Illustration of Stochastic Trend: A Coin Flip 34
3.2.2 Modeling Mean Non-stationarity: Deterministic Trends, Stochastic Trends, and Unit Root Processes 37
3.2.2.1 A Random Walk Without Drift 38
3.2.2.2 A Random Walk with Drift 40
3.2.2.3 A Random Walk with Drift and Trend 41
3.3 Dickey–Fuller (D–F) Tests 42
3.3.1 D–F Estimating Equations and the D–F Distributions 43
3.3.2 Warning: The Null Hypothesis and Low Power 44
3.3.3 Testing for Drift and Trend: Individual Coefficients (t-statistics) Versus Restricted Regression (“F-tests”) 45
3.4 Unit Root Testing Strategies: Elder–Kennedy’s Simplified Approach 47
3.4.1 Part I: Clear Mean Stationarity or Clear Mean Non-stationarity 48
3.4.2 Example 1: Boston Robbery Series 49
3.4.3 Example 2: IV Iron Among US ESKD Patients 51
3.4.4 Part II: Uncertain Mean Non-stationarity 52
3.4.5 Example 3: The Dow Jones Industrial Average (A U.S. Stock Index) 54
3.5 Transforming Unit Root Series to Achieve Stationarity: Differencing 56
3.5.1 Differencing to Achieve Stationarity 56
3.5.2 Unit Root Check 1: Is the Differenced Series Stationary? 57
3.5.3 Unit Root Check 2: Could It Have Been a Deterministic Trend Instead? 57
3.6 Extensions of the Dickey–Fuller Test 58
3.6.1 Augmented Dickey–Fuller Tests 58
3.6.2 Phillips–Perron Tests 59
3.7 Seasonal Non-stationarity 61
3.8 Variance Non- stationarity 62
3.8.1 Count Data 62
3.8.2 Aside on Autoregressive Conditional Heteroskedasticity Models 64
3.9 Summary 64
References 65
4 Properties of Time Series: Autocorrelation 67
4.1 Rethinking Autocorrelation 67
4.2 Modeling Autocorrelation: Wold’s Theorem 69
4.3 Moving Average Processes 70
4.3.1 The MA(1) Process 71
4.3.2 The MA(2) Process 71
4.4 The Autocorrelation Function (ACF) and Sample Autocorrelation Function (SACF) 72
4.4.1 Definitions 72
4.4.2 Q-tests for SACFs 73
4.4.3 ACF for a White Noise Process 75
4.5 ACFs for MA Processes 76
4.5.1 The MA(1) Process 76
4.5.2 The General Form for the ACF of an MA(q) Process 77
4.5.3 Example 1: Test Yourself 78
4.5.4 Example 2: Did We Overdifference the DJIA Series? 79
4.5.5 Example 1: Answers 79
4.6 Autoregressive Processes 80
4.6.1 The AR(1) Process 80
4.6.2 AR(2) Processes 81
4.6.3 Moving Average vs. Autoregressive Representations 81
4.6.4 The ACF for an AR(1) Process 82
4.6.4.1 Aside: ACF for a Nonstationary Series 84
4.6.4.2 ACF for an AR(2) Process 86
4.6.5 ACFs for AR(p) Processes (General Form) 87
4.7 The Partial Autocorrelation Function 88
4.7.1 Definitions 88
4.7.2 PACFs for AR Processes 88
4.7.2.1 PACF for an AR(1) Process 88
4.7.2.2 PACF for an AR(2) Process 89
4.7.3 PACFs for MA Processes 90
4.8 Seasonality 90
4.8.1 Seasonal Non-stationarity 92
4.8.2 Seasonal Moving Average 94
4.8.3 Seasonal Autoregression 95
4.8.4 Combining Regular and Seasonal Components 95
4.9 ARMA (mixed) Processes 96
4.10 Summary 96
References 97
5 Autocorrelation: Univariate ARIMA Estimation and Forecasting 99
5.1 ARIMA(p,d,q)(P,D,Q)s Notation 100
5.2 The ARIMA Model-Building Process 102
5.2.1 Identification 103
5.2.2 Estimation Methods 104
5.2.3 Diagnosis 105
5.2.4 Meta-diagnosis: the Use of Information Criteria 105
5.3 Example: Directory Assistance (411) 106
5.4 Forecasting 113
5.4.1 Point Estimates: Unconditional and Conditional Forecasts 113
5.4.2 Forecast Errors for Interval Estimates 115
5.4.2.1 The Forecast Error When the True ARIMA Model is Known 115
5.4.2.2 Aside: Rewriting an ARMA Series as a Pure MA Process 116
5.4.2.3 The Forecast Error When the True ARIMA Model is Unknown 117
5.5 Summary 118
5.6 Appendix to Chapter 5: Non-linear Models and Numerical Estimation Methods 118
References 120
6 ARIMA Intervention Models 121
6.1 Transfer Functions: General Form 123
6.2 Simplest Form: Zero-Order Transfer Functions 125
6.2.1 Form and Notation 125
6.2.2 A Zero-order Transfer Function with a Pulse Intervention 126
6.2.3 A Zero-order Transfer Function with a Step Intervention 126
6.3 Gradual Changes: First-Order Transfer Functions 126
6.3.1 Form and Notation 126
6.3.2 First-Order Transfer Functions with Pulse Interventions 127
6.3.3 First-Order Transfer Functions with Step Interventions 128
6.4 Example: The Directory Assistance (411) Series 130
7 ARIMA with Continuous Explanatory Variables 135
7.1 The Cross-Correlation Function (CCF) 136
7.1.1 Definition and Properties 136
7.1.2 Why You Might Expect the CCF to be Helpful in Identifying the Appropriate Transfer Function 138
7.1.3 Using the CCF to Identify the Appropriate Transfer Function 140
7.2 Prewhitening 142
7.2.1 A Theoretical Illustration of Prewhitening 142
7.3 Example: Lydia Pinkham Advertising 143
7.4 Conclusion 147
Reference 147
8 OLS and the Gauss–Markov Assumptions 149
8.1 Autocorrelation and its Consequences: A Review 150
8.2 Detecting Autocorrelation 152
8.2.1 The Durbin–Watson Test 152
8.2.1.1 Limitations of the Durbin–Watson Statistic 153
8.2.2 Breusch–Godfrey Tests 154
8.2.3 Example: Civilian Deaths During the Iraq War 155
8.3 Generalized Least Squares 158
8.3.1 Standard Presentation of GLS 158
8.3.2 A Simpler Derivation Using Lag Operator Algebra 159
8.3.2.1 A Simple Extension 160
8.3.3 Pseudo-GLS Techniques 161
8.3.3.1 The Cochrane–Orcutt Method 161
8.3.3.2 The Prais–Winsten Correction 161
8.3.4 Example: Civilian Deaths During the Iraq War 161
8.4 Limitations of GLS Approaches 162
8.4.1 Dynamic Misspecification 162
8.4.2 The Common Factors Critique of GLS 162
8.5 An Aside: Newey–West Standard Errors 163
8.6 Summary 164
References 165
9 Dynamic Specification: Distributed Lag Models 167
9.1 Distributed Lag and Autoregressive Distributed Lag Models 168
9.2 The Koyck Model and Dynamic Specification 171
9.2.1 Motivation 172
9.2.2 Marginal Effects, Dynamic Effects, and Backward- and Forward-Thinking 173
9.3 General-to-Specific Modeling and the ADL(1,1) Model 175
9.4 Estimating Models with Lagged Dependent Variables 178
9.4.1 LDV with Uncorrelated Errors 178
9.4.2 LDV with Autocorrelated Errors 179
9.5 Testing Constraints 180
References 180
10 Regression with Non-stationary Series: Cointegration and Error Correction Models 183
10.1 Non-stationary Series and Spurious Regression 185
10.2 Cointegration and Cointegration Tests 187
10.2.1 Integrated Series: Definitions and Properties 187
10.2.2 Cointegration 188
10.2.3 Testing for Cointegration 189
10.3 Testing for Cointegration: Two Examples 190
10.3.1 Example 1: Lydia Pinkham’s Sales and Advertising Series 190
10.3.2 Example 2: Military Expenditures in Jordan and Urban Population in Fiji 191
10.4 Error Correction Models for Non-stationary Series 193
10.4.1 Estimating the ECM 194
10.4.2 Example 1: Lydia Pinkham’s Sales and Advertising 194
10.4.3 Example 2: What Moves Policy Sentiment? 196
10.4.4 Limitations of the Engle–Granger Approach 197
10.5 Summary 198
References 198
11 The LSE Approach: Encompassing, General-to-Specific Modeling, and Forecasting Success 201
11.1 Competing Models of the Consumption Function 202
11.1.1 Keynes’ Absolute Income Hypothesis (AIH) 203
11.1.2 The Permanent Income Hypothesis (PIH) 203
11.2 Additional Criteria: Forecast Success and Parameter Constancy 204
11.2.1 A Chow Test for Parameter Non-constancy 205
11.2.2 The Forecast χ² Test for Parameter Non-constancy 206
11.3 A Sketch of the DHSY Process 207
11.3.1 Encompassing and Critical Tests 207
11.3.2 Time Series Properties and General-to-Specific Modeling 208
11.3.3 Incorporation of the Adjustment (Error-Correction) Mechanism 209
11.4 Summary 210
References 210
12 A Brief Introduction to Vector Autoregression 211
12.1 VAR: Logic and Motivation 213
12.1.1 General-to-Specific Modeling 213
12.1.2 Identification in Multiple Equation Models 213
12.2 Estimating VAR Models 215
12.2.1 Basic Setup: From the Structural VAR to the Reduced Form 215
12.2.2 Choosing Lag Length 216
12.2.3 Example: Lydia Pinkham’s Sales and Advertising Data 217
12.2.4 Granger Causality 218
12.2.5 Impulse Response Functions 219
12.3 Summary 220
References 220
Index 221