E-Book, English, 270 pages
Grabski, Semi-Markov Processes: Applications in System Reliability and Maintenance
1st edition, 2014
ISBN: 978-0-12-800659-7
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: 6 - ePub Watermark
Franciszek Grabski is a Full Professor and the Head of the Mathematics and Physics Department at the Naval University in Gdynia, Poland. His research interests focus on probability theory, in particular its applications in system reliability theory and practice. He has constructed and tested several new stochastic reliability models and has developed applications of Bayesian methods in reliability. He is the author or co-author of more than 100 scientific papers, course books, and monographs in the field of probability and reliability. His main monographs are published in Polish.
Further Information & Material
Front Cover (p. 1)
Semi-Markov Processes: Applications in System Reliability and Maintenance (p. 4)
Copyright (p. 5)
Dedication (p. 6)
Contents (p. 8)
Preface (p. 12)
Chapter 1: Discrete state space Markov processes (p. 16)
  1.1 Basic definitions and properties (p. 16)
  1.2 Homogeneous Markov chains (p. 17)
    1.2.1 Basic definitions and properties (p. 17)
    1.2.2 Classification of states (p. 19)
    1.2.3 Limiting distribution (p. 24)
  1.3 Continuous-time homogeneous Markov processes (p. 25)
  1.4 Important examples (p. 28)
    1.4.1 Poisson process (p. 28)
    1.4.2 Furry-Yule process (p. 28)
    1.4.3 Finite state space birth and death process (p. 29)
  1.5 Numerical illustrative examples (p. 30)
Chapter 2: Semi-Markov process (p. 34)
  2.1 Markov renewal processes (p. 34)
  2.2 Definition of discrete state space SMP (p. 38)
  2.3 Regularity of SMP (p. 39)
  2.4 Other methods of determining the SMP (p. 40)
  2.5 Connection between Semi-Markov and Markov process (p. 41)
  2.6 Illustrative examples (p. 42)
  2.7 Elements of statistical estimation (p. 44)
    2.7.1 Observation of SMP sample path (p. 44)
    2.7.2 Empirical estimators (p. 45)
    2.7.3 Nonparametric estimators of kernel elements densities (p. 46)
  2.8 Nonhomogeneous Semi-Markov process (p. 48)
Chapter 3: Characteristics and parameters of SMP (p. 52)
  3.1 First passage time to subset of states (p. 52)
  3.2 Interval transition probabilities (p. 59)
    3.2.1 Interval transition probabilities for alternating process (p. 60)
    3.2.2 Interval transition probabilities for Poisson process (p. 62)
    3.2.3 Interval transition probabilities for Furry-Yule process (p. 63)
  3.3 The limiting probabilities (p. 65)
  3.4 Reliability and maintainability characteristics (p. 69)
    3.4.1 Reliability function and parameters of the system (p. 70)
    3.4.2 Pointwise availability (p. 71)
    3.4.3 Maintainability function and parameters of the system (p. 72)
  3.5 Numerical illustrative example (p. 73)
    3.5.1 Description and assumptions (p. 73)
    3.5.2 Model construction (p. 74)
    3.5.3 Reliability characteristics and parameters (p. 75)
    3.5.4 Numerical illustrative example (p. 76)
Chapter 4: Perturbed Semi-Markov processes (p. 82)
  4.1 Introduction (p. 82)
  4.2 Shpak concept (p. 82)
  4.3 Pavlov and Ushakov concept (p. 86)
  4.4 Korolyuk and Turbin concept (p. 89)
  4.5 Exemplary approximation of the system reliability function (p. 92)
    4.5.1 Numerical illustrative example (p. 94)
  4.6 State space aggregation method (p. 95)
  4.7 Remarks on advanced perturbed Semi-Markov processes (p. 96)
Chapter 5: Stochastic processes associated with the SM process (p. 98)
  5.1 The renewal process generated by return times (p. 98)
    5.1.1 Characteristics and parameters (p. 99)
  5.2 Limiting distribution of the process (p. 101)
  5.3 Additive functionals of the alternating process (p. 102)
  5.4 Additive functionals of the Semi-Markov process (p. 107)
Chapter 6: SM models of renewable cold standby system (p. 114)
  6.1 Two different units of cold standby system with switch (p. 114)
    6.1.1 Introduction (p. 114)
    6.1.2 Description and assumptions (p. 115)
    6.1.3 Construction of Semi-Markov reliability model (p. 115)
    6.1.4 Reliability characteristics (p. 119)
    6.1.5 An approximate reliability function (p. 121)
    6.1.6 Illustrative numerical examples (p. 122)
    6.1.7 Conclusions (p. 125)
  6.2 Technical example (p. 126)
    6.2.1 Assumptions (p. 127)
    6.2.2 Model construction (p. 128)
    6.2.3 Reliability characteristic (p. 130)
  6.3 Cold standby system with series exponential subsystems (p. 131)
    6.3.1 Description and assumptions (p. 131)
    6.3.2 Construction of Semi-Markov reliability model (p. 132)
Chapter 7: SM models of multistage operation (p. 134)
  7.1 Introduction (p. 134)
  7.2 Description and assumptions (p. 134)
  7.3 Construction of Semi-Markov model (p. 135)
  7.4 Illustrative numerical examples (p. 137)
    7.4.1 Example 1 (p. 138)
    7.4.2 Example 2 (p. 142)
  7.5 Model of multimodal transport operation (p. 145)
    7.5.1 Introduction (p. 145)
    7.5.2 SM model (p. 145)
    7.5.3 Example 3 (p. 147)
Chapter 8: SM model of working intensity process (p. 150)
  8.1 Introduction (p. 150)
  8.2 Semi-Markov model of the ship engine load process (p. 150)
    8.2.1 Model (p. 151)
    8.2.2 Characteristics of engine load process (p. 153)
    8.2.3 Model of research test of toxicity of exhaust gases (p. 153)
  8.3 SM model for continuous working intensity process (p. 155)
    8.3.1 Model (p. 156)
    8.3.2 Estimation of the model parameters (p. 157)
    8.3.3 Analysis of the SM working intensity (p. 157)
  8.4 Model of car speed (p. 160)
Chapter 9: Multitask operation process (p. 164)
  9.1 Introduction (p. 164)
  9.2 Description and assumptions (p. 164)
  9.3 Model construction (p. 165)
  9.4 Reliability characteristics (p. 166)
  9.5 Approximate reliability function (p. 169)
  9.6 Numerical example (p. 172)
    9.6.1 Input characteristics and parameters (p. 172)
    9.6.2 Calculated model parameters (p. 173)
Chapter 10: Semi-Markov failure rate process (p. 176)
  10.1 Introduction (p. 176)
  10.2 Reliability function with random failure rate (p. 176)
  10.3 Semi-Markov failure rate process (p. 177)
  10.4 Random walk failure rate process (p. 180)
  10.5 Alternating failure rate process (p. 185)
  10.6 Poisson failure rate process (p. 186)
  10.7 Furry-Yule failure rate process (p. 188)
  10.8 Failure rate process depending on random load (p. 189)
  10.9 Conclusions (p. 191)
Chapter 11: Simple model of maintenance (p. 192)
  11.1 Introduction (p. 192)
  11.2 Description and assumptions (p. 193)
  11.3 Model (p. 193)
  11.4 Characteristics of operation process (p. 194)
  11.5 Problem of time to preventive service optimization (p. 195)
  11.6 Example (p. 198)
Chapter 12: Semi-Markov model of system component damage (p. 202)
  12.1 Semi-Markov model of multistate object (p. 202)
  12.2 General Semi-Markov model of damage process (p. 203)
  12.3 Multistate model of two kinds of failures (p. 207)
  12.4 Inverse problem for simple exponential model of damage (p. 209)
  12.5 Conclusions (p. 212)
Chapter 13: Multistate systems with SM components (p. 214)
  13.1 Introduction (p. 214)
  13.2 Structure of the system (p. 214)
  13.3 Reliability of unrepairable system components (p. 215)
  13.4 Binary representation of MMSs (p. 218)
  13.5 Reliability of unrepairable system (p. 219)
  13.6 Numerical illustrative example (p. 220)
  13.7 Renewable multistate system (p. 224)
  13.8 Conclusions (p. 229)
Chapter 14: Semi-Markov maintenance nets (p. 232)
  14.1 Introduction (p. 232)
  14.2 Model of maintenance net (p. 232)
    14.2.1 States of maintenance operation (p. 232)
    14.2.2 Model of maintenance operation (p. 234)
    14.2.3 Characteristics of maintenance operation (p. 235)
    14.2.4 Numerical example (p. 236)
    14.2.5 Income from maintenance operation (p. 239)
  14.3 Model of maintenance net without diagnostics (p. 240)
    14.3.1 Model of maintenance operation (p. 241)
    14.3.2 Numerical example (p. 243)
  14.4 Conclusions (p. 243)
Chapter 15: Semi-Markov decision processes (p. 244)
  15.1 Introduction (p. 244)
  15.2 Semi-Markov decision processes (p. 244)
  15.3 Optimization for a finite states change (p. 245)
  15.4 SM decision model of maintenance operation (p. 247)
  15.5 Optimal strategy for the maintenance operation (p. 250)
  15.6 Optimization problem for infinite duration process (p. 252)
  15.7 Decision problem for renewable series system (p. 255)
  15.8 Conclusions (p. 259)
Summary (p. 260)
Bibliography (p. 262)
Notation (p. 268)
Discrete state space Markov processes
Abstract
Markov processes are an important class of stochastic processes. The Markov property means that the evolution of a Markov process in the future depends only on the present state and not on past history: the process does not remember the past if the present state is given. Hence a Markov process is called a memoryless process. This chapter covers basic concepts, properties, and theorems on homogeneous Markov chains and continuous-time homogeneous Markov processes with a discrete set of states. The theory of these processes allows us to build models of real random processes, particularly in problems of reliability and maintenance.
Keywords
Markov process
Homogeneous Markov chain
Poisson process
Furry-Yule process
Birth and death process
1.1 Basic definitions and properties
Definition 1.1
A stochastic process {X(t) : t ∈ T} with a discrete (finite or countable) state space S is said to be a Markov process if, for all i, j, i_0, i_1, …, i_{n-1} ∈ S and t_0, t_1, …, t_n, t_{n+1} ∈ T such that 0 = t_0 < t_1 < … < t_n < t_{n+1},
P(X(t_{n+1}) = j | X(t_n) = i, X(t_{n-1}) = i_{n-1}, …, X(t_0) = i_0) = P(X(t_{n+1}) = j | X(t_n) = i).
(1.1)
If t0, t1, …, tn-1 are interpreted as the moments from the past, tn as the present instant, and tn + 1 as the moment in the future, then the above-mentioned equation says that the probability of the future state is independent of the past states, if a present state is given. So, we can say that evolution of the Markov process in the future depends only on the present state. The Markov process does not remember the past if the present state is given. Hence, the Markov process is called the stochastic process with memoryless property.
From the definition of the Markov process it follows that any process with independent increments is the Markov process.
If T = N_0 = {0, 1, 2, …}, the Markov process is said to be a Markov chain; if T = R_+ = [0, ∞), it is called a continuous-time Markov process. Let t_n = u, t_{n+1} = s. The conditional probabilities
p_ij(u, s) = P(X(s) = j | X(u) = i), i, j ∈ S
(1.2)
are said to be the transition probabilities from state i at the moment u to state j at the moment s.
Definition 1.2
The Markov process {X(t) : t ∈ T} is called homogeneous if, for all i, j ∈ S and u, s ∈ T such that 0 ≤ u < s,
p_ij(u, s) = p_ij(s − u).
(1.3)
This means that the transition probabilities are functions of the difference between the moments s and u. Substituting t = s − u we get
p_ij(t) = P(X(s − u) = j | X(u − u) = i) = P(X(t) = j | X(0) = i), i, j ∈ S, t ≥ 0.
The number p_ij(t) is called a transition probability from state i to state j during the time t.
If {X(t) : t ∈ R_+} is a process with stationary independent increments, taking values in a discrete state space S, then
p_ij(t) = P(X(t+h) = j | X(h) = i)
        = P(X(t+h) = j, X(h) = i) / P(X(h) = i)
        = P(X(t+h) − X(h) = j − i, X(h) = i) / P(X(h) = i)
        = P(X(t+h) − X(h) = j − i) P(X(h) = i) / P(X(h) = i)
        = P(X(t+h) − X(h) = j − i).
Therefore, any process with stationary independent increments is a homogeneous Markov process with transition probabilities
p_ij(t) = P(X(t+h) − X(h) = j − i).
(1.4)
For a homogeneous Markov process with a discrete state space, the transition probabilities satisfy the following conditions:
(a) p_ij(t) ≥ 0, t ∈ T;
(b) Σ_{j∈S} p_ij(t) = 1;
(c) p_ij(t+s) = Σ_{k∈S} p_ik(t) p_kj(s), t, s ∈ T.
(1.5)
The last formula is known as the Chapman-Kolmogorov equation.
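The Chapman-Kolmogorov equation can be checked numerically. As an illustration (not taken from the text), the sketch below uses the transition probabilities of a Poisson process with rate λ, namely p_ij(t) = e^{−λt}(λt)^{j−i}/(j−i)! for j ≥ i and 0 otherwise (the Poisson process is covered in Section 1.4.1); the rate value and time points are arbitrary choices.

```python
import math

def p(i: int, j: int, t: float, lam: float = 2.0) -> float:
    """Transition probability p_ij(t) of a Poisson process with rate lam:
    the probability of exactly j - i events in time t (0 if j < i)."""
    if j < i:
        return 0.0
    k = j - i
    return math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)

# Chapman-Kolmogorov: p_ij(t + s) = sum over k of p_ik(t) * p_kj(s).
i, j, t, s = 0, 5, 0.7, 1.3
lhs = p(i, j, t + s)
rhs = sum(p(i, k, t) * p(k, j, s) for k in range(i, j + 1))
assert abs(lhs - rhs) < 1e-12
```

The identity holds exactly here because convolving two Poisson increment distributions gives a Poisson distribution with the summed mean, which is the binomial theorem in disguise.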
1.2 Homogeneous Markov chains
As we have mentioned, a Markov chain is a special case of a Markov process. We now introduce the basic properties of Markov chains with a discrete state space. Proofs of the theorems omitted here may be found in Refs. [3, 9, 22, 47, 88, 90].
1.2.1 Basic definitions and properties
Now let us consider a discrete-time homogeneous Markov process {X_n : n ≥ 0} having a finite or countable state space S, which is called a homogeneous Markov chain (HMC). Recall that for each moment n ∈ N_0 and all states i, j, i_0, …, i_{n-1} ∈ S,
P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_1 = i_1, X_0 = i_0) = P(X_{n+1} = j | X_n = i)
(1.6)
whenever
P(X_n = i, X_{n-1} = i_{n-1}, …, X_1 = i_1, X_0 = i_0) > 0.
The transition probabilities of the HMC
p_ij(n, n+1) = P(X(n+1) = j | X(n) = i), i, j ∈ S, n ∈ N_0
(1.7)
are independent of n ∈ N_0:
p_ij(n, n+1) = p_ij, i, j ∈ S, n ∈ N_0.
(1.8)
The square matrix
P = [p_ij : i, j ∈ S]
(1.9)
is said to be the matrix of transition probabilities, or the transition matrix, of the HMC {X(n) : n ≥ 0}. It is easy to notice that
p_ij ≥ 0 for i, j ∈ S and Σ_{j∈S} p_ij = 1 for each i ∈ S.
(1.10)
A matrix P = [p_ij : i, j ∈ S] having these properties is called a stochastic matrix. A natural question arises: do the stochastic matrix P and a discrete probability distribution p(0) = [p_i = P(X(0) = i) : i ∈ S] completely define the HMC? The following theorem answers this question.
Theorem 1.1
Let P = [p_ij : i, j ∈ S] be a stochastic matrix and p = [p_i : i ∈ S] a one-row matrix with nonnegative elements such that Σ_{i∈S} p_i = 1. Then there exists a probability space (Ω, F, P) and an HMC {X(n) : n ≥ 0} defined on this space with the initial distribution p = [p_i : i ∈ S] and the transition matrix P = [p_ij : i, j ∈ S].
Proof
From (1.6) we obtain
P(X_0 = i_0, X_1 = i_1, …, X_n = i_n) = p_{i_0} p_{i_0 i_1} p_{i_1 i_2} … p_{i_{n-1} i_n}.
(1.11)
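Formula (1.11) lends itself to a quick numerical sanity check. The sketch below uses a hypothetical two-state chain (the matrix and initial distribution are invented for illustration): the probability of a sample path is the product in (1.11), and the probabilities of all paths of a fixed length must sum to 1.

```python
from itertools import product

# A hypothetical two-state HMC; values chosen only for illustration.
P = [[0.9, 0.1],
     [0.4, 0.6]]          # transition matrix [p_ij]
p0 = [0.3, 0.7]           # initial distribution [p_i]

def path_prob(path):
    """P(X_0 = i_0, ..., X_n = i_n) = p_{i0} * p_{i0 i1} * ... * p_{i_{n-1} i_n}."""
    prob = p0[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a][b]
    return prob

# Summing over every length-4 sample path must give total probability 1.
total = sum(path_prob(w) for w in product(range(2), repeat=4))
assert abs(total - 1.0) < 1e-12
```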
A number
p_ij(n) = P(X(n) = j | X(0) = i) = P(X(n+w) = j | X(w) = i)
(1.12)
denotes a transition probability from state i to state j over the period [0, n] (in n steps). Now the Chapman-Kolmogorov equation is given by the formula
p_ij(m+r) = Σ_{k∈S} p_ik(m) p_kj(r), i, j ∈ S
(1.13)
or in matrix form
P(m+r) = P(m) P(r),
where P(n) = [p_ij(n) : i, j ∈ S].
From the above equation we get
P(n) = P(1 + 1 + ··· + 1) = P(1) · P(1) · … · P(1) = P · P · … · P = P^n.
(1.14)
We suppose that
P(0) = I,
(1.15)
where I is a unit matrix.
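Relations (1.14) and (1.15) can be illustrated with a short computation. The sketch below (with a transition matrix invented for illustration) computes P(n) as the n-th matrix power starting from P(0) = I, then verifies the matrix form of the Chapman-Kolmogorov equation, P(m + r) = P(m) P(r), and that every P(n) is again a stochastic matrix.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """n-step transition matrix P(n) = P^n, with P(0) = I."""
    size = len(P)
    result = [[float(i == j) for j in range(size)] for i in range(size)]  # identity I
    for _ in range(n):
        result = mat_mul(result, P)
    return result

P = [[0.9, 0.1],
     [0.4, 0.6]]  # hypothetical transition matrix, for illustration only

P5, P2, P3 = mat_pow(P, 5), mat_pow(P, 2), mat_pow(P, 3)
prod = mat_mul(P2, P3)
# Chapman-Kolmogorov in matrix form: P(2 + 3) = P(2) P(3).
assert all(abs(P5[i][j] - prod[i][j]) < 1e-12 for i in range(2) for j in range(2))
# Every P(n) is again stochastic: its rows sum to 1.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P5)
```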
We write the one-dimensional distribution of the HMC as a one-row matrix
p(n) = [p_j(n) : j ∈ S], where p_j(n) = P(X(n) = j).
(1.16)
Using the formula for total probability and the...




