
Book, English, Volume 2777, 754 pages, paperback, format (W × H): 155 mm × 235 mm, weight: 1142 g

Series: Lecture Notes in Computer Science

Warmuth / Schölkopf

Learning Theory and Kernel Machines

16th Annual Conference on Computational Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24-27, 2003, Proceedings

ISBN: 978-3-540-40720-1
Publisher: Springer Berlin Heidelberg


This volume contains papers presented at the joint 16th Annual Conference on Learning Theory (COLT) and the 7th Annual Workshop on Kernel Machines, held in Washington, DC, USA, during August 24–27, 2003. COLT, which recently merged with EuroCOLT, has traditionally been a meeting place for learning theorists. We hope that COLT will benefit from the collocation with the annual workshop on kernel machines, formerly held as a NIPS postconference workshop.

The technical program contained 47 papers selected from 92 submissions. All 47 papers were presented as posters; 22 of the papers were additionally presented as talks. There were also two target areas with invited contributions. In computational game theory, a tutorial entitled “Learning Topics in Game-Theoretic Decision Making” was given by Michael Littman, and an invited paper on “A General Class of No-Regret Learning Algorithms and Game-Theoretic Equilibria” was contributed by Amy Greenwald. In natural language processing, a tutorial on “Machine Learning Methods in Natural Language Processing” was presented by Michael Collins, followed by two invited talks, “Learning from Uncertain Data” by Mehryar Mohri and “Learning and Parsing Stochastic Unification-Based Grammars” by Mark Johnson.

In addition to the accepted papers and invited presentations, we solicited short open problems that were reviewed and included in the proceedings. We hope that reviewed open problems might become a new tradition for COLT. Our goal was to select simple signature problems whose solutions are likely to inspire further research. For some of the problems the authors offered monetary rewards. Yoav Freund acted as the open problem area chair. The open problems were presented as posters at the conference.

Target audience


Research

Further information & material


Target Area: Computational Game Theory
Tutorial: Learning Topics in Game-Theoretic Decision Making
A General Class of No-Regret Learning Algorithms and Game-Theoretic Equilibria
Preference Elicitation and Query Learning
Efficient Algorithms for Online Decision Problems
Positive Definite Rational Kernels
Bhattacharyya and Expected Likelihood Kernels
Maximal Margin Classification for Metric Spaces
Maximum Margin Algorithms with Boolean Kernels
Knowledge-Based Nonlinear Kernel Classifiers
Fast Kernels for Inexact String Matching
On Graph Kernels: Hardness Results and Efficient Alternatives
Kernels and Regularization on Graphs
Data-Dependent Bounds for Multi-category Classification Based on Convex Losses
Poster Session 1
Comparing Clusterings by the Variation of Information
Multiplicative Updates for Large Margin Classifiers
Simplified PAC-Bayesian Margin Bounds
Sparse Kernel Partial Least Squares Regression
Sparse Probability Regression by Label Partitioning
Learning with Rigorous Support Vector Machines
Robust Regression by Boosting the Median
Boosting with Diverse Base Classifiers
Reducing Kernel Matrix Diagonal Dominance Using Semi-definite Programming
Optimal Rates of Aggregation
Distance-Based Classification with Lipschitz Functions
Random Subclass Bounds
PAC-MDL Bounds
Universal Well-Calibrated Algorithm for On-Line Classification
Learning Probabilistic Linear-Threshold Classifiers via Selective Sampling
Learning Algorithms for Enclosing Points in Bregmanian Spheres
Internal Regret in On-Line Portfolio Selection
Lower Bounds on the Sample Complexity of Exploration in the Multi-armed Bandit Problem
Smooth ε-Insensitive Regression by Loss Symmetrization
On Finding Large Conjunctive Clusters
Learning Arithmetic Circuits via Partial Derivatives
Poster Session 2
Using a Linear Fit to Determine Monotonicity Directions
Generalization Bounds for Voting Classifiers Based on Sparsity and Clustering
Sequence Prediction Based on Monotone Complexity
How Many Strings Are Easy to Predict?
Polynomial Certificates for Propositional Classes
On-Line Learning with Imperfect Monitoring
Exploiting Task Relatedness for Multiple Task Learning
Approximate Equivalence of Markov Decision Processes
An Information Theoretic Tradeoff between Complexity and Accuracy
Learning Random Log-Depth Decision Trees under the Uniform Distribution
Projective DNF Formulae and Their Revision
Learning with Equivalence Constraints and the Relation to Multiclass Learning
Target Area: Natural Language Processing
Tutorial: Machine Learning Methods in Natural Language Processing
Learning from Uncertain Data
Learning and Parsing Stochastic Unification-Based Grammars
Generality’s Price
On Learning to Coordinate
Learning All Subfunctions of a Function
When Is Small Beautiful?
Learning a Function of r Relevant Variables
Subspace Detection: A Robust Statistics Formulation
How Fast Is k-Means?
Universal Coding of Zipf Distributions
An Open Problem Regarding the Convergence of Universal A Priori Probability
Entropy Bounds for Restricted Convex Hulls
Compressing to VC Dimension Many Points

