Hager Pricing Portfolio Credit Derivatives by Means of Evolutionary Algorithms
2008
ISBN: 978-3-8349-9702-9
Publisher: Betriebswirtschaftlicher Verlag Gabler
Format: PDF
Copy protection: 1 - PDF watermark
E-book, English, 160 pages
Svenja Hager aims at pricing non-standard illiquid portfolio credit derivatives which are related to standard CDO tranches with the same underlying portfolio of obligors. Instead of assuming a homogeneous dependence structure between the default times of different obligors, as is assumed in the standard market model, the author focuses on the use of heterogeneous correlation structures.
Dr. Svenja Hager completed her doctorate under Prof. Dr.-Ing. Rainer Schöbel at the Chair of Business Administration, in particular Corporate Finance, at the University of Tübingen. She works as a credit and market risk expert.
Target audience
Research
Further Information & Material
Foreword
Acknowledgements
Table of Contents
List of Tables
List of Figures
List of Notations
Chapter 1 Introduction
Chapter 2 Collateralized Debt Obligations: Structure and Valuation
  2.1 Introduction
  2.2 Credit Risk Transfer Instruments
  2.3 Credit Risk Modeling
  2.4 Valuation of CDOs: Literature
Chapter 3 Explaining the Implied Correlation Smile
  3.1 Introduction
  3.2 Sensitivity of the Tranche Price to the Level of Correlation
  3.3 The Implied Tranche Correlation
  3.4 The Implied Correlation Smile
  3.5 The Implied Base Correlation
  3.6 Evolution of the Implied Correlation Smile
  3.7 Modeling the Correlation Smile: Literature
  3.8 Heterogeneous Dependence Structures
  3.9 Conclusion
Chapter 4 Optimization by Means of Evolutionary Algorithms
  4.1 Introduction
  4.2 Evolutionary Algorithms
  4.3 Notation
  4.4 Evolutionary Operators
  4.5 Basic Algorithms
  4.6 Parallel Algorithms
  4.7 Evolutionary Algorithms in Finance: Literature
Chapter 5 Evolutionary Algorithms in Finance: Deriving the Dependence Structure
  5.1 Introduction
  5.2 The Implied Correlation Structure
  5.3 The Optimization Problem
  5.4 Description of the Genotypes
  5.5 A Systematic Approach to Describe the Dependence Structure
  5.6 Conclusion
Chapter 6 Experimental Results
  6.1 Introduction
  6.2 Solution Evaluation
  6.3 Performance Comparison: Basic Strategies
  6.4 Performance Comparison: More Advanced Algorithms
  6.5 Implementation of a Parallel System
  6.6 Performance Comparison: Parallel Algorithms
  6.7 Deriving the Dependence Structure From Market Data
  6.8 Conclusion
Chapter 7 Summary and Outlook
References
Collateralized Debt Obligations: Structure and Valuation.- Explaining the Implied Correlation Smile.- Optimization by Means of Evolutionary Algorithms.- Evolutionary Algorithms in Finance: Deriving the Dependence Structure.- Experimental Results.- Summary and Outlook.
Chapter 4 Optimization by Means of Evolutionary Algorithms (pp. 73-74)
4.1 Introduction
In the preceding Chapter 3, we presented a possible explanation for the inability of the standard market approach to fit quoted CDO tranche prices and to model the correlation smile. We suggested overcoming this deficiency of the standard market model by means of non-flat dependence structures. In the subsequent Chapter 5, we will explain how a correlation matrix can be derived from observed tranche spreads such that all tranche spreads of the CDO structure are reproduced simultaneously. This idea can be represented as an optimization problem. The present chapter addresses optimization algorithms. Life in general and the domain of finance in particular confront us with many opportunities for optimization. Optimization is the process of searching for the optimal solution in a set of candidate solutions, i.e. the search space.
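As a rough illustration of how such a calibration can be cast as a maximization problem, the sketch below scores a candidate correlation matrix by how closely the resulting model spreads reproduce the quoted tranche spreads. This is only a minimal sketch under assumed names: model_spread_fn stands in for some CDO tranche pricer, and the squared-relative-error fitness is one plausible choice, not necessarily the measure used in the book.

import numpy as np

def fitness(corr_matrix, market_spreads, model_spread_fn):
    # Score a candidate correlation matrix: 0 means a perfect fit,
    # more negative values mean larger deviations from the market quotes.
    # model_spread_fn(corr_matrix, tranche_index) is a hypothetical pricer
    # returning the model spread of one tranche under the given matrix.
    market_spreads = np.asarray(market_spreads, dtype=float)
    model_spreads = np.array([model_spread_fn(corr_matrix, i)
                              for i in range(len(market_spreads))])
    relative_errors = (model_spreads - market_spreads) / market_spreads
    return -np.sum(relative_errors ** 2)

An optimizer, such as the evolutionary algorithms discussed in this chapter, would then search for the correlation matrix with the highest fitness.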
Optimization theory is a branch of mathematics which encompasses many different methodologies of minimization and maximization. In this chapter we represent optimization problems as maximization problems, unless mentioned otherwise. The function to be maximized is called the objective function. Optimization methods are similar to approaches to root finding, but they are generally more intricate. The idea behind root finding is to search for the zeros of a function, while the idea behind optimization is to search for the zeros of the objective function's derivative. However, often the derivative does not exist or is hard to find.
Another difficulty with optimization is to determine whether a given optimum is the global or only a local optimum. There are many different types of optimization problems: they can be one- or multidimensional, static or dynamic, discrete or continuous, constrained or unconstrained. Sometimes even the objective function is unknown. In line with the large number of different optimization problems, many different standard approaches have been developed for finding an optimal solution. Standard approaches are methods that are developed for a certain class of problems (though not specifically designed for an actual problem) and that do not use domain-specific knowledge in the search procedure. In the case of a discrete search space, the simplest optimization method is the total enumeration of all possible solutions.
Needless to say, this approach finds the global optimum but is very inefficient, especially when the problem size increases. Other approaches like linear or quadratic programming utilize special properties of the objective function. Possible solution techniques for nonlinear programming problems are local search procedures like the gradient-ascent method, provided that the objective function is real-valued and differentiable.
Most local search methods take the approach of heading uphill from a certain starting point. They differ in how they decide which direction to go and how far to move. If the search space is multi-modal (i.e. it contains several local extrema), all local search methods run the risk of getting stuck in a local optimum. But even if the objective function is not differentiable or if the search space is multi-modal, there are still standard approaches that deal with these kinds of problems.
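To make the hill-climbing behaviour concrete, the following sketch runs a simple numerical gradient ascent on a hypothetical two-peaked objective; the function, step size and starting points are illustrative assumptions, not examples from the book. Depending on the starting point, the search settles either on the local or on the global maximum.

import numpy as np

def objective(x):
    # Illustrative multi-modal function: a smaller peak near x = -1.5
    # and the global maximum near x = 1.5.
    return np.exp(-(x - 1.5) ** 2) + 0.6 * np.exp(-(x + 1.5) ** 2)

def gradient_ascent(x0, step=0.1, tol=1e-8, max_iter=10000):
    # Head uphill from x0 using a central-difference estimate of the derivative.
    x, h = x0, 1e-6
    for _ in range(max_iter):
        grad = (objective(x + h) - objective(x - h)) / (2 * h)
        x_next = x + step * grad
        if abs(x_next - x) < tol:
            break
        x = x_next
    return x

print(gradient_ascent(-2.0))  # converges near the local maximum around x = -1.5
print(gradient_ascent(0.5))   # converges near the global maximum around x = 1.5

This tendency to get trapped in a local optimum is one motivation for the population-based evolutionary algorithms discussed in the remainder of this chapter.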