E-book, English, 266 pages
Chen / Zhu / Hu System Parameter Identification
1st edition, 2013
ISBN: 978-0-12-404595-8
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub watermark
Information Criteria and Algorithms
Badong Chen received the B.S. and M.S. degrees in control theory and engineering from Chongqing University in 1997 and 2003, respectively, and the Ph.D. degree in computer science and technology from Tsinghua University in 2008. He was a postdoctoral researcher at Tsinghua University from 2008 to 2010 and a postdoctoral associate at the University of Florida Computational NeuroEngineering Laboratory (CNEL) from October 2010 to September 2012. He is currently a professor at the Institute of Artificial Intelligence and Robotics (IAIR), Xi'an Jiaotong University. His research interests are in system identification and control, information theory, machine learning, and their applications in cognition and neuroscience.
Authors/Editors
Further information & material
Symbols and Abbreviations
The main symbols and abbreviations used throughout the text are listed as follows.
absolute value of a real number
Euclidean norm of a vector
inner product
indicator function
expectation value of a random variable
first-order derivative of the function
second-order derivative of the function
gradient of the function with respect to
sign function
Gamma function
vector or matrix transposition
identity matrix
inverse of matrix
determinant of matrix
trace of matrix
rank of matrix
natural logarithm function
unit delay operator
real number space
-dimensional real Euclidean space
correlation coefficient between random variables and
variance of random variable
probability of event
Gaussian distribution with mean vector and covariance matrix
uniform distribution over interval
chi-squared distribution with degree of freedom
Shannon entropy of random variable
-entropy of random variable
α-order Rényi entropy of random variable
α-order information potential of random variable
survival information potential of random variable
-entropy of discrete random variable
mutual information between random variables and
KL-divergence between random variables and
-divergence between random variables and
Fisher information matrix
Fisher information rate matrix
probability density function
Mercer kernel function
kernel function for density estimation
kernel function with width
Gaussian kernel function with width
reproducing kernel Hilbert space induced by Mercer kernel
feature space induced by Mercer kernel
weight vector
weight vector in feature space
weight error vector
step size
sliding data length
MSE mean square error
LMS least mean square
NLMS normalized least mean square
LS least squares
RLS recursive least squares
MLE maximum likelihood estimation
EM expectation-maximization
FLOM fractional lower order moment
LMP least mean p-power
LAD least absolute deviation
LMF least mean fourth
FIR finite impulse response
IIR infinite impulse response
AR autoregressive
ADALINE adaptive linear neuron
MLP multilayer perceptron
RKHS reproducing kernel Hilbert space
KAF kernel adaptive filtering
KLMS kernel least mean square
KAPA kernel affine projection algorithm
KMEE kernel minimum error entropy
KMC kernel maximum correntropy
PDF probability density function
KDE kernel density estimation
GGD generalized Gaussian density
SαS symmetric α-stable
MEP maximum entropy principle
DPI data processing inequality
EPI entropy power inequality
MEE minimum error entropy
MCC maximum correntropy criterion
IP information potential
QIP quadratic information potential
CRE cumulative residual entropy
SIP survival information potential
QSIP quadratic survival information potential
KLID Kullback–Leibler information divergence
EDC Euclidean distance criterion
MinMI minimum mutual information
MaxMI maximum mutual information
AIC Akaike’s information criterion
BIC Bayesian information criterion
MDL minimum description length
FIM Fisher information matrix
FIRM Fisher information rate matrix
MIH minimum identifiable horizon
ITL information theoretic learning
BIG batch information gradient
FRIG forgetting recursive information gradient
SIG stochastic information gradient
SIDG stochastic information divergence gradient
SMIG stochastic mutual information gradient
FP fixed point
FP-MEE fixed-point minimum error entropy
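As an illustration of how several of the quantities above fit together, the following minimal Python sketch (not taken from the book) estimates the quadratic information potential (QIP) of an error sample with a Gaussian kernel, in the spirit of kernel density estimation; the kernel width sigma and the toy error data are assumptions chosen only for illustration. Maximizing the QIP is equivalent to minimizing Rényi's quadratic entropy, i.e., the MEE criterion of order two.

import numpy as np

def gaussian_kernel(u, sigma):
    # Gaussian kernel G_sigma(u) = exp(-u^2 / (2 sigma^2)) / (sqrt(2*pi) * sigma)
    return np.exp(-u ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

def quadratic_information_potential(errors, sigma=1.0):
    # Sample estimator V_2(e) = (1/N^2) * sum_i sum_j G_{sigma*sqrt(2)}(e_i - e_j);
    # the width sigma*sqrt(2) arises from convolving two width-sigma Gaussian kernels.
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]  # all pairwise error differences
    return gaussian_kernel(diffs, sigma * np.sqrt(2.0)).mean()

# Toy usage with hypothetical error data drawn from a zero-mean Gaussian
rng = np.random.default_rng(0)
e = rng.normal(0.0, 0.5, size=200)
print("QIP estimate:", quadratic_information_potential(e, sigma=1.0))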