Online Public Access Catalogue (OPAC)
Central Library - Vidyasagar University

“Education does not only mean learning, reading, writing, and arithmetic; it should provide a comprehensive knowledge”
- Ishwarchandra Vidyasagar



Neural Network Learning: Theoretical Foundations [electronic resource] / by Martin Anthony and Peter L. Bartlett.

By: Anthony, Martin.
Contributor(s): Bartlett, Peter L. [joint author].
Material type: Text
Publisher: Cambridge: Cambridge University Press, 2010
ISBN: 9780511624216
Subject(s): Computer Science | Pattern Recognition and Machine Learning
Genre/Form: Electronic books
DDC classification: 006.32
Online resources: https://doi.org/10.1017/CBO9780511624216
Summary: This book describes theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Research on pattern classification with binary-output networks is surveyed, including a discussion of the relevance of the Vapnik–Chervonenkis dimension, and calculating estimates of the dimension for several neural network models. A model of classification by real-output networks is developed, and the usefulness of classification with a 'large margin' is demonstrated. The authors explain the role of scale-sensitive versions of the Vapnik–Chervonenkis dimension in large margin classification, and in real prediction. They also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient constructive learning algorithms. The book is self-contained and is intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics.
Item type: E-Book
Current location: WWW
Call number: 006.32 ANT/N
Status: Available
Barcode: EB160



Powered by Koha