MIT Press
Dataset shift is a common problem in predictive modeling that occurs when
the joint distribution of inputs and outputs differs between training and test
stages. Covariate shift, a particular case of dataset shift, occurs when only the
input distribution changes. Dataset shift is present in most practical applications,
for reasons ranging from the bias introduced by experimental design to the
irreproducibility of the testing conditions at training time. (An example is email
spam filtering, which may fail to recognize spam that differs in form from the spam
the automatic filter has been built on.) Despite this, and despite the attention
given to the apparently similar problems of semi-supervised learning and active
learning, dataset shift has received relatively little attention in the machine
learning community until recently. This volume offers an overview of current efforts
to deal with dataset and covariate shift. The chapters offer a mathematical and
philosophical introduction to the problem, place dataset shift in relationship to
transfer learning, transduction, local learning, active learning, and
semi-supervised learning, provide theoretical views of dataset and covariate shift
(including decision theoretic and Bayesian perspectives), and present algorithms for
covariate shift. Contributors: Shai Ben-David, Steffen
Bickel, Karsten Borgwardt, Michael Brückner, David Corfield, Amir Globerson, Arthur
Gretton, Lars Kai Hansen, Matthias Hein, Jiayuan Huang, Choon Hui Teo, Takafumi
Kanamori, Klaus-Robert Müller, Sam Roweis, Neil Rubens, Tobias Scheffer, Marcel
Schmittfull, Bernhard Schölkopf, Hidetoshi Shimodaira, Alex Smola, Amos Storkey,
Masashi Sugiyama
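For orientation, a minimal formalization of the covariate shift setting described above; the notation (p_train, p_test) is assumed here for illustration rather than taken from the book's text:

% Covariate shift: the input distribution differs between training and test,
% while the conditional distribution of outputs given inputs is unchanged.
\[
p_{\mathrm{train}}(x) \neq p_{\mathrm{test}}(x),
\qquad
p_{\mathrm{train}}(y \mid x) = p_{\mathrm{test}}(y \mid x).
\]
% General dataset shift only requires the joint distributions to differ:
\[
p_{\mathrm{train}}(x, y) \neq p_{\mathrm{test}}(x, y).
\]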
Quiñonero-Candela / Sugiyama / Schwaighofer
Dataset Shift in Machine Learning