
A Probabilistic Theory of Pattern Recognition

Authors: Luc Devroye, Laszlo Györfi, Gabor Lugosi
Language: English · Paperback – 22 Nov 2013
Pattern recognition presents one of the most significant challenges for scientists and engineers, and many different approaches have been proposed. The aim of this book is to provide a self-contained account of probabilistic analysis of these approaches. The book includes a discussion of distance measures, nonparametric methods based on kernels or nearest neighbors, Vapnik-Chervonenkis theory, epsilon entropy, parametric classification, error estimation, tree classifiers, and neural networks. Wherever possible, distribution-free properties and inequalities are derived. A substantial portion of the results or the analysis is new. Over 430 problems and exercises complement the material.
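As a small illustration of the nonparametric nearest-neighbor methods the book analyzes, here is a minimal k-nearest-neighbor classifier sketch in plain Python. The function names and toy data are ours, not the book's; this is only meant to show the rule whose consistency the book studies, not an implementation from the text:

```python
from collections import Counter

def knn_classify(train, x, k=3):
    """Classify point x by majority vote among its k nearest
    training points, using squared Euclidean distance."""
    # train: list of (point, label) pairs; each point is a tuple of floats
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, x))
    nearest = sorted(train, key=lambda pl: dist(pl[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy two-class data in the plane
train = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((0.2, 0.1), 0),
         ((1.0, 1.0), 1), ((0.9, 1.1), 1), ((1.1, 0.9), 1)]
print(knn_classify(train, (0.15, 0.1)))   # near the class-0 cluster -> 0
print(knn_classify(train, (1.05, 0.95)))  # near the class-1 cluster -> 1
```

The book's distribution-free results concern exactly such rules: for example, the k-nearest-neighbor rule is universally consistent when k grows with the sample size n while k/n tends to zero.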

All formats and editions

Format                       Price        Delivery
Paperback (1)                581.07 lei   6-8 weeks
  Springer – 22 Nov 2013     581.07 lei   6-8 weeks
Hardback (1)                 773.79 lei   6-8 weeks
  Springer – 4 Apr 1996      773.79 lei   6-8 weeks

Price: 581.07 lei

Old price: 683.62 lei
-15% · New

Express points: 872

Estimated price in other currencies:
102.85€ · 119.89$ · 89.95£

Printed on demand

Economy delivery: 20 January – 3 February 2026

Order line: 021 569.72.76

Specifications

ISBN-13: 9781461268772
ISBN-10: 146126877X
Pages: 660
Illustrations: XV, 638 p.
Dimensions: 155 x 235 x 36 mm
Weight: 0.98 kg
Edition: Softcover reprint of the original 1st ed. 1996
Publisher: Springer
Place of publication: New York, NY, United States

Target audience

Research

Contents

* Preface
* Introduction
* The Bayes Error
* Inequalities and alternate distance measures
* Linear discrimination
* Nearest neighbor rules
* Consistency
* Slow rates of convergence
* Error estimation
* The regular histogram rule
* Kernel rules
* Consistency of the k-nearest neighbor rule
* Vapnik-Chervonenkis theory
* Combinatorial aspects of Vapnik-Chervonenkis theory
* Lower bounds for empirical classifier selection
* The maximum likelihood principle
* Parametric classification
* Generalized linear discrimination
* Complexity regularization
* Condensed and edited nearest neighbor rules
* Tree classifiers
* Data-dependent partitioning
* Splitting the data
* The resubstitution estimate
* Deleted estimates of the error probability
* Automatic kernel rules
* Automatic nearest neighbor rules
* Hypercubes and discrete spaces
* Epsilon entropy and totally bounded sets
* Uniform laws of large numbers
* Neural networks
* Other error estimates
* Feature extraction
* Appendix
* Notation
* References
* Index