
E-Book, English, 728 pages

Thomson / Emery Data Analysis Methods in Physical Oceanography


3rd Edition, 2014
ISBN: 978-0-12-387783-3
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub Watermark




Data Analysis Methods in Physical Oceanography, Third Edition is a practical reference to established and modern data analysis techniques in earth and ocean sciences. Its five major sections address data acquisition and recording, data processing and presentation, statistical methods and error handling, analysis of spatial data fields, and time series analysis methods. The revised Third Edition updates the instrumentation used to collect and analyze physical oceanic data and adds new techniques, including Kalman filtering. Additionally, the sections covering spectral, wavelet, and harmonic analysis techniques have been completely revised, since these techniques have attracted significant attention over the past decade as more accurate and efficient data gathering and analysis methods.

- Completely updated and revised to reflect new filtering techniques and a major updating of the instrumentation used to collect and analyze data
- Co-authored by scientists from academe and industry, both of whom have more than 30 years of experience in oceanographic research and field work
- Significant revision of the sections covering spectral, wavelet, and harmonic analysis techniques
- Examples address typical data analysis problems yet provide the reader with formulaic "recipes" for working with their own data
- Significant expansion to 350 figures, illustrations, diagrams and photos

Richard E. Thomson is a researcher in coastal and deep-sea physical oceanography within the Ocean Sciences Division. His research interests include coastal oceanographic processes on the continental shelf and slope, including coastally trapped waves, upwelling, and baroclinic instability; hydrothermal venting and the physics of buoyant plumes; the linkage between circulation and zooplankton biomass aggregations at hydrothermal venting sites; the analysis and modelling of landslide-generated tsunamis; and paleoclimate reconstruction using tree-ring records and sediment cores from coastal inlets and basins.


Further Information & Material


Front Cover
DATA ANALYSIS METHODS IN PHYSICAL OCEANOGRAPHY
Copyright
Dedication
Contents
Preface
Acknowledgments
Chapter 1 - Data Acquisition and Recording
  1.1 INTRODUCTION
  1.2 BASIC SAMPLING REQUIREMENTS
  1.3 TEMPERATURE
  1.4 SALINITY
  1.5 DEPTH OR PRESSURE
  1.6 SEA-LEVEL MEASUREMENT
  1.7 EULERIAN CURRENTS
  1.8 LAGRANGIAN CURRENT MEASUREMENTS
  1.9 WIND
  1.10 PRECIPITATION
  1.11 CHEMICAL TRACERS
  1.12 TRANSIENT CHEMICAL TRACERS
Chapter 2 - Data Processing and Presentation
  2.1 INTRODUCTION
  2.2 CALIBRATION
  2.3 INTERPOLATION
  2.4 DATA PRESENTATION
Chapter 3 - Statistical Methods and Error Handling
  3.1 INTRODUCTION
  3.2 SAMPLE DISTRIBUTIONS
  3.3 PROBABILITY
  3.4 MOMENTS AND EXPECTED VALUES
  3.5 COMMON PDFS
  3.6 CENTRAL LIMIT THEOREM
  3.7 ESTIMATION
  3.8 CONFIDENCE INTERVALS
  3.9 SELECTING THE SAMPLE SIZE
  3.10 CONFIDENCE INTERVALS FOR ALTIMETER-BIAS ESTIMATES
  3.11 ESTIMATION METHODS
  3.12 LINEAR ESTIMATION (REGRESSION)
  3.13 RELATIONSHIP BETWEEN REGRESSION AND CORRELATION
  3.14 HYPOTHESIS TESTING
  3.15 EFFECTIVE DEGREES OF FREEDOM
  3.16 EDITING AND DESPIKING TECHNIQUES: THE NATURE OF ERRORS
  3.17 INTERPOLATION: FILLING THE DATA GAPS
  3.18 COVARIANCE AND THE COVARIANCE MATRIX
  3.19 THE BOOTSTRAP AND JACKKNIFE METHODS
Chapter 4 - The Spatial Analyses of Data Fields
  4.1 TRADITIONAL BLOCK AND BULK AVERAGING
  4.2 OBJECTIVE ANALYSIS
  4.3 KRIGING
  4.4 EMPIRICAL ORTHOGONAL FUNCTIONS
  4.5 EXTENDED EMPIRICAL ORTHOGONAL FUNCTIONS
  4.6 CYCLOSTATIONARY EOFS
  4.7 FACTOR ANALYSIS
  4.8 NORMAL MODE ANALYSIS
  4.9 SELF ORGANIZING MAPS
  4.10 KALMAN FILTERS
  4.11 MIXED LAYER DEPTH ESTIMATION
  4.12 INVERSE METHODS
Chapter 5 - Time Series Analysis Methods
  5.1 BASIC CONCEPTS
  5.2 STOCHASTIC PROCESSES AND STATIONARITY
  5.3 CORRELATION FUNCTIONS
  5.4 SPECTRAL ANALYSIS
  5.5 SPECTRAL ANALYSIS (PARAMETRIC METHODS)
  5.6 CROSS-SPECTRAL ANALYSIS
  5.7 WAVELET ANALYSIS
  5.8 FOURIER ANALYSIS
  5.9 HARMONIC ANALYSIS
  5.10 REGIME SHIFT DETECTION
  5.11 VECTOR REGRESSION
  5.12 FRACTALS
Chapter 6 - Digital Filters
  6.1 INTRODUCTION
  6.2 BASIC CONCEPTS
  6.3 IDEAL FILTERS
  6.4 DESIGN OF OCEANOGRAPHIC FILTERS
  6.5 RUNNING-MEAN FILTERS
  6.6 GODIN-TYPE FILTERS
  6.7 LANCZOS-WINDOW COSINE FILTERS
  6.8 BUTTERWORTH FILTERS
  6.9 KAISER–BESSEL FILTERS
  6.10 FREQUENCY-DOMAIN (TRANSFORM) FILTERING
References
Appendix A - Units in Physical Oceanography
Appendix B - Glossary of Statistical Terminology
Appendix C - Means, Variances and Moment-Generating Functions for Some Common Continuous Variables
Appendix D - Statistical Tables
Appendix E - Correlation Coefficients at the 5% and 1% Levels of Significance for Various Degrees of Freedom
Appendix F - Approximations and Nondimensional Numbers in Physical Oceanography
  References
Appendix G - Convolution
  CONVOLUTION AND FOURIER TRANSFORMS
  CONVOLUTION OF DISCRETE DATA
  CONVOLUTION AS TRUNCATION OF AN INFINITE TIME SERIES
  DECONVOLUTION
Index


1.2. Basic Sampling Requirements
A primary concern in most observational work is the accuracy of the measurement device, a common performance statistic for the instrument. Absolute accuracy requires frequent instrument calibration to detect and correct for any shifts in behavior. The inconvenience of frequent calibration often causes the scientist to substitute instrument precision as the measurement capability of an instrument. Unlike absolute accuracy, precision is a relative term and simply represents the ability of the instrument to repeat the observation without deviation. Absolute accuracy further requires that the observation be consistent in magnitude with some universally accepted reference standard. In most cases, the user must be satisfied with having good precision and repeatability of the measurement rather than having absolute measurement accuracy. Any instrument that fails to maintain its precision fails to provide data that can be handled in any meaningful statistical fashion. The best instruments are those that provide both high precision and defensible absolute accuracy. It is sometimes advantageous to measure the same variable simultaneously with more than one reliable instrument. However, if the instruments have the same precision but not the same absolute accuracy, we are reminded of the saying that "a man with two watches does not know the time".

Digital instrument resolution is measured in bits, where a resolution of N bits means that the full range of the sensor is partitioned into 2^N equal segments (N = 1, 2, …). For example, eight-bit resolution means that the specified full-scale range of the sensor, say V = 10 V, is divided into 2^8 = 256 increments, giving a bit resolution of V/256 = 0.039 V. Whether the instrument can actually measure to a resolution or accuracy of V/2^N units is another matter. The sensor range can always be divided into an increasing number of smaller increments, but eventually one reaches a point where the value of each bit is buried in the noise level of the sensor and is no longer significant.
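To make the bit-resolution arithmetic above concrete, the following short Python lines are a minimal sketch of my own (the function name bit_resolution is hypothetical, not from the book); they compute the size of one digitizer increment for an N-bit sensor:

    # Size of one increment for an N-bit digitizer spanning a given
    # full-scale range (illustrative sketch of the arithmetic above).
    def bit_resolution(full_scale, n_bits):
        return full_scale / 2**n_bits

    # Example from the text: an 8-bit sensor with a 10-V full-scale range.
    print(bit_resolution(10.0, 8))   # 10/256 = 0.0390625 V per bit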
1.2.1. Sampling Interval

Assuming the instrument selected can produce reliable and useful data, the next highest priority sampling requirement is that the measurements be collected often enough in space and time to resolve the phenomena of interest. For example, in the days when oceanographers were only interested in the mean stratification of the world ocean, water property profiles from discrete-level hydrographic (bottle) casts were adequate to resolve the general vertical density structure. On the other hand, these same discrete-level profiles failed to resolve the detailed structure associated with interleaving and mixing processes, including those associated with thermohaline staircases (salt fingering and diffusive convection), that are now resolved by the rapid vertical sampling provided by modern conductivity-temperature-depth (CTD) probes. The need for higher resolution assumes that the oceanographer has some prior knowledge of the process of interest. Often this prior knowledge has been collected with instruments incapable of resolving the true variability and may, therefore, only be suggested by highly aliased (distorted) data collected using earlier techniques. In addition, laboratory and theoretical studies may provide information on the scales that must be resolved by the measurement system.

For discrete digital data x(t_i) measured at times t_i, the choice of the sampling increment Δt (or Δx in the case of spatial measurements) is the quantity of importance. In essence, we want to sample often enough that we can pick out the highest frequency component of interest in the time series, but not oversample so that we fill up the data storage file, use up all the battery power, or become swamped with unnecessary data. In the case of real-time cabled observatories, it is also possible to sample so rapidly (hundreds of times per second) that inserting the essential time stamps in the data string can disrupt the cadence of the record. We might also want to sample at irregular intervals to avoid built-in bias in our sampling scheme. If the sampling interval is too large to resolve higher frequency components, it becomes necessary to suppress these components during sampling using a sensor whose response is limited to frequencies equal to that of the sampling frequency. As we discuss in our section on processing satellite-tracked drifter data, these lessons are often learned too late, after the buoys have been cast adrift in the sea.

The important aspect to keep in mind is that, for a given sampling interval Δt, the highest frequency we can hope to resolve is the Nyquist (or folding) frequency, f_N, defined as

f_N = 1/(2Δt)   (1.1)

We cannot resolve any higher frequencies than this. For example, if we sample every 10 h, the highest frequency we can hope to see in the data is f_N = 0.05 cph (cycles per hour). Equation (1.1) states the obvious: it takes at least two sampling intervals (or three data points) to resolve a sinusoidal-type oscillation with period 1/f_N (Figure 1.1). In practice, we need to contend with noise and sampling errors, so that it takes something like three or more sampling increments (i.e., ≥ four data points) to accurately determine the highest observable frequency. Thus, f_N is an upper limit. The highest frequency we can resolve for a sampling of Δt = 10 h in Figure 1.1 is closer to 1/(3Δt) ≈ 0.033 cph. (Replacing Δt with Δx in the case of spatial sampling increments allows us to interpret these limitations in terms of the highest wavenumber, the Nyquist wavenumber, the data are able to resolve.)
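Equation (1.1) is simple enough to express directly in code. The Python lines below are a minimal sketch (the variable names are my own) for the 10-h sampling example just described:

    # Nyquist (folding) frequency, Equation (1.1): f_N = 1/(2*dt).
    def nyquist_frequency(dt):
        return 1.0 / (2.0 * dt)

    dt = 10.0                        # sampling interval in hours
    f_N = nyquist_frequency(dt)      # 0.05 cycles per hour (cph)
    f_practical = 1.0 / (3.0 * dt)   # ~0.033 cph once noise and sampling errors are allowed for
    print(f_N, f_practical)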
An important consequence of Equation (1.1) is the problem of aliasing. In particular, if there is energy at frequencies f > f_N (which we obviously cannot resolve because of the Δt we picked), this energy gets folded back into the range of frequencies f < f_N that we are attempting to resolve (hence the alternate name "folding frequency" for f_N). This unresolved energy does not disappear but gets redistributed within the frequency range of interest. To make matters worse, the folded-back energy is disguised (or aliased) within frequency components different from those of its origin. We cannot distinguish this folded-back energy from that which actually belongs to the lower frequencies. Thus, we end up with erroneous (aliased) estimates of the spectral energy variance over the resolvable range of frequencies. An example of highly aliased data would be current meter data collected using 13-h sampling in a region dominated by strong semidiurnal (12.42-h period) tidal currents. More will be said on this topic in Chapter 5.

As a general rule, one should plan a measurement program based on the frequencies and wavenumbers (estimated from the corresponding periods and wavelengths) of the parameters of interest over the study domain. This requirement may then dictate the selection of the measurement tool or technique. If the instrument cannot sample rapidly enough to resolve the frequencies of concern, it should not be used. It should be emphasized that the Nyquist frequency concept applies to both time and space, and the Nyquist wavenumber is a valid means of determining the fundamental wavelength that must be sampled.
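For the 13-h sampling example above, the frequency at which the semidiurnal tidal energy reappears can be found by folding the true frequency back into the resolvable band. This Python sketch is my own illustration of the standard folding (aliasing) relation, not code from the book:

    # Fold a frequency f that exceeds f_N back into the resolvable band
    # [0, f_N] for a sampling interval dt (standard aliasing relation).
    def aliased_frequency(f, dt):
        fs = 1.0 / dt                # sampling frequency
        f_fold = f % fs              # wrap into [0, fs)
        return fs - f_fold if f_fold > 0.5 * fs else f_fold

    # Example from the text: 12.42-h tidal currents sampled every 13 h.
    f_tide = 1.0 / 12.42             # ~0.0805 cph, well above f_N = 0.0385 cph
    f_a = aliased_frequency(f_tide, 13.0)
    print(f_a, 1.0 / f_a)            # ~0.0036 cph, an apparent period of roughly 280 h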
FIGURE 1.1 Plot of the function F(n) = sin(2πn/20 + φ), where time is given by the integer n = −1, 0, …, 24. The period 2Δt = 1/f_N is 20 units and φ is a random phase with a small magnitude in the range ±0.1 radians. Open circles denote measured points and solid points the curve F(n). Noise makes it necessary to use more than three data values to accurately define the oscillation period.
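The sampled function in Figure 1.1 can be approximated in a few lines of Python. This is my own sketch based on the caption; as an assumption, the small random phase is drawn independently for each point to mimic the noise the caption describes:

    import math, random

    # F(n) = sin(2*pi*n/20 + phi) for integer times n = -1, 0, ..., 24,
    # with a small random phase |phi| <= 0.1 rad added at each point.
    n_values = list(range(-1, 25))
    F = [math.sin(2.0 * math.pi * n / 20.0 + random.uniform(-0.1, 0.1))
         for n in n_values]

    # Sampling every dt = 10 units leaves only two or three points per
    # 20-unit cycle, which is why noise makes the period hard to pin down.
    sampled = [(n, f) for n, f in zip(n_values, F) if n % 10 == 0]
    print(sampled)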
1.2.2. Sampling Duration

The next concern is that one samples long enough to establish a statistically significant determination of the process being studied. For time-series measurements, this amounts to a requirement that the data be collected over a period sufficiently long that repeated cycles of the phenomenon are observed. This also applies to spatial sampling, where statistical considerations require a large enough sample to define...
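Although the excerpt breaks off here, the duration requirement has a simple counterpart to Equation (1.1): the lowest frequency (longest period) a record can resolve is set by its total length. The Python sketch below assumes, purely for illustration, that "repeated cycles" means at least three cycles of the target period; the book's own guidance may differ:

    # Lowest nonzero frequency resolvable from a record of total length T,
    # and the record length needed to observe several cycles of a target period.
    def fundamental_frequency(total_duration):
        return 1.0 / total_duration

    def required_duration(target_period, n_cycles=3):
        return n_cycles * target_period

    print(fundamental_frequency(30.0 * 24.0))   # 30-day record: ~0.0014 cph
    print(required_duration(14.77 * 24.0))      # ~44 days for three fortnightly (14.77-day) cycles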


