Svenja Hager: Pricing Portfolio Credit Derivatives by Means of Evolutionary Algorithms
1st edition, 2008
ISBN: 978-3-8349-9702-9
Publisher: Betriebswirtschaftlicher Verlag Gabler
Format: PDF
Copy protection: 1 - PDF watermark
E-book, English, 160 pages, Web PDF
Series: Business and Economics
Svenja Hager aims at pricing non-standard, illiquid portfolio credit derivatives that are related to standard CDO tranches with the same underlying portfolio of obligors. Instead of assuming a homogeneous dependence structure between the default times of different obligors, as is assumed in the standard market model, the author focuses on the use of heterogeneous correlation structures. The intention is to find a correlation matrix flexible enough that all tranche spreads of a CDO structure can be reproduced simultaneously. This allows for consistent pricing. The calibrated model can then be used to determine the price of non-standard contracts. As there is no standard optimization technique to derive the correlation structure from market prices, Evolutionary Algorithms are applied.
Target audience
Research
Further information & material
Collateralized Debt Obligations: Structure and Valuation.- Explaining the Implied Correlation Smile.- Optimization by Means of Evolutionary Algorithms.- Evolutionary Algorithms in Finance: Deriving the Dependence Structure.- Experimental Results.- Summary and Outlook.
Chapter 4 Optimization by Means of Evolutionary Algorithms (pp. 73-74)
4.1 Introduction
In the preceding Chapter 3, we presented a possible explanation for the inability of the standard market approach to fit quoted CDO tranche prices and to model the correlation smile. We suggested overcoming this deficiency of the standard market model by means of non-flat dependence structures. In the subsequent Chapter 5, we will explain how a correlation matrix can be derived from observed tranche spreads such that all tranche spreads of the CDO structure are reproduced simultaneously. This idea can be formulated as an optimization problem. The present Chapter 4 addresses optimization algorithms. Life in general and the domain of finance in particular confront us with many opportunities for optimization. Optimization is the process of searching for the optimal solution in a set of candidate solutions, i.e. the search space.
Optimization theory is a branch of mathematics which encompasses many different methodologies of minimization and maximization. In this chapter we represent optimization problems as maximization problems, unless mentioned otherwise. The function to be maximized is called the objective function. Optimization methods are similar to approaches to root finding, but generally they are more intricate. The idea behind root finding is to search for the zeros of a function, while the idea behind optimization is to search for the zeros of the objective function's derivative. However, often the derivative does not exist or is hard to find.
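The connection between optimization and root finding can be illustrated with a minimal sketch (an assumption for illustration, not from the book): for a concave function f, applying a root finder such as bisection to the derivative f' locates the maximizer of f.

```python
# Sketch: maximizing f(x) = -(x - 2)^2 + 3 by finding the zero of its
# derivative f'(x) = -2(x - 2). The unique zero x = 2 is the maximizer.

def f_prime(x):
    return -2.0 * (x - 2.0)

def bisect_root(g, lo, hi, tol=1e-10):
    """Find a zero of g on [lo, hi] by bisection (g must change sign)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

x_star = bisect_root(f_prime, 0.0, 5.0)
print(round(x_star, 6))  # → 2.0, the maximizer of f
```

As the text notes, this route is only available when the derivative exists and can be evaluated.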
Another difficulty with optimization is to determine whether a given optimum is the global or only a local optimum. There are many different types of optimization problems: they can be one- or multidimensional, static or dynamic, discrete or continuous, constrained or unconstrained. Sometimes even the objective function is unknown. In line with the large number of different optimization problems, many different standard approaches have been developed to find an optimal solution. Standard approaches are methods that are developed for a certain class of problems (though not specifically designed for an actual problem) and that do not use domain-specific knowledge in the search procedure. In the case of a discrete search space, the simplest optimization method is the total enumeration of all possible solutions.
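Total enumeration of a discrete search space can be sketched as follows (the bit-string objective is a hypothetical example, not from the book): every candidate is evaluated, so the global maximum is certainly found, but the cost grows exponentially with problem size (2**n candidates here).

```python
# Sketch: total enumeration over all bit strings of length n.
from itertools import product

def objective(bits):
    # Hypothetical objective: reward ones, penalize adjacent equal bits.
    ones = sum(bits)
    clashes = sum(1 for a, b in zip(bits, bits[1:]) if a == b)
    return ones - clashes

n = 10
best = max(product((0, 1), repeat=n), key=objective)  # evaluates all 1024 candidates
print(best, objective(best))  # alternating bit string, value 5
```

Doubling n to 20 already requires over a million evaluations, which illustrates why enumeration quickly becomes impractical.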
Needless to say, this approach finds the global optimum but is very inefficient, especially when the problem size increases. Other approaches like linear or quadratic programming utilize special properties of the objective function. Possible solution techniques for nonlinear programming problems are local search procedures like the gradient-ascent method, provided that the objective function is real-valued and differentiable.
Most local search methods take the approach of heading uphill from a certain starting point. They differ in deciding in what direction to go and how far to move. If the search space is multi-modal (i.e. it contains several local extrema), all local search methods run the risk of getting stuck in a local optimum. But even if the objective function is not differentiable or if the search space is multi-modal, there are still some standard approaches that deal with these kinds of problems.
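The risk of getting stuck in a local optimum can be made concrete with a small sketch (the objective sin(x) is an assumption chosen for illustration): on a multi-modal function, gradient ascent converges to whichever local maximum lies uphill of the starting point.

```python
import math

# Sketch: gradient ascent on the multi-modal objective f(x) = sin(x),
# whose maxima lie at pi/2 + 2*k*pi. The gradient is cos(x).

def ascend(grad, x0, step=0.1, iters=500):
    x = x0
    for _ in range(iters):
        x += step * grad(x)
    return x

a = ascend(math.cos, x0=1.0)  # climbs to the maximum at pi/2
b = ascend(math.cos, x0=5.0)  # climbs to a different maximum, 5*pi/2
print(round(a, 4), round(b, 4))  # → 1.5708 7.854
```

Both runs use the identical update rule; only the starting point differs, yet they end in different local maxima. This is precisely the weakness that the population-based Evolutionary Algorithms of this chapter are designed to mitigate.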