
Validity, Reliability, and Significance: Empirical Methods for NLP and Data Science: Synthesis Lectures on Human Language Technologies

Authors: Stefan Riezler, Michael Hagmann
English | Paperback – 3 Dec 2021

We note with interest the appearance of the second edition of Validity, Reliability, and Significance, an essential methodological resource for NLP and data science researchers who want to move beyond raw performance metrics. The methodology proposed by Stefan Riezler and Michael Hagmann is anchored in interpretable probabilistic models, namely generalized additive models (GAMs) and linear mixed effects models (LMEMs). This approach makes it possible not only to measure an algorithm's success but also to understand the underlying causes of performance variation, identifying circular features that can compromise the learning process.
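To give a flavor of what a circularity check looks like, here is a minimal sketch on synthetic data. It is not the book's procedure (the book fits GAMs, and its companion code is in R; Python and scikit-learn are used here purely for illustration): a feature that was accidentally derived from the gold label fits the label almost perfectly on its own, which is exactly the red flag a validity test looks for.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
honest = rng.normal(size=n)                     # ordinary input feature
gold = 0.5 * honest + rng.normal(size=n)        # gold-standard score
leaked = gold + rng.normal(scale=1e-3, size=n)  # feature derived from the label

# Regress the gold score on each feature in isolation; a near-perfect
# single-feature fit (R^2 close to 1) signals a circular feature.
r2 = {}
for name, feat in [("honest", honest), ("leaked", leaked)]:
    X = feat.reshape(-1, 1)
    r2[name] = LinearRegression().fit(X, gold).score(X, gold)
```

A model trained with the leaked feature would score impressively while learning nothing transferable; the single-feature fit exposes this before training.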

The book is rigorously organized around the three fundamental questions of the empirical sciences. The section on validity examines whether a model truly measures the phenomenon of interest, while the chapter on reliability uses variance decomposition via the random-effect parameters of LMEMs to check the consistency of results. The final part of the volume is devoted to likelihood-ratio significance tests, offering a natural way to include variations in meta-parameter settings in hypothesis testing.
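The likelihood-ratio comparison of two systems can be sketched as follows. This is a simplified illustration under assumed synthetic data, not the book's own implementation (which is in R): per-item performance scores for a baseline and a new system are modeled by two nested LMEMs that share a random intercept per test item, and twice the log-likelihood difference is referred to a chi-squared distribution.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(1)
n_items = 30
items = np.repeat(np.arange(n_items), 2)         # each item scored by both systems
system = np.tile([0, 1], n_items)                # 0 = baseline, 1 = new system
item_effect = rng.normal(scale=0.5, size=n_items)[items]
score = 0.6 + 0.15 * system + item_effect + rng.normal(scale=0.1, size=2 * n_items)
df = pd.DataFrame({"score": score, "system": system, "item": items})

# Nested LMEMs: the full model adds a fixed effect for the system indicator;
# both share a random intercept per test item. ML (not REML) fits are
# required when comparing models that differ in fixed effects.
null = smf.mixedlm("score ~ 1", df, groups=df["item"]).fit(reml=False)
full = smf.mixedlm("score ~ system", df, groups=df["item"]).fit(reml=False)

lr = 2 * (full.llf - null.llf)   # likelihood-ratio statistic
p = chi2.sf(lr, df=1)            # one extra fixed-effect parameter
```

Because meta-parameter settings can be added as further effects in the same model, the test extends naturally beyond a single train/test split, which is the point the book develops in detail.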

Readers who have applied the ideas in Statistical Significance Testing for Natural Language Processing by Rotem Dror will find here a valuable theoretical and practical extension, moving from classical significance tests to an approach based on advanced statistical modeling. Unlike introductory textbooks, this title in the Synthesis Lectures on Human Language Technologies series offers a detailed worked example of an inferential reproducibility analysis, supported by R code, which makes it easy to apply the techniques directly in research or production projects.


From the series Synthesis Lectures on Human Language Technologies

Price: 342.96 lei

Old price: 428.69 lei
-20%

Puncte Express: 514

In stock

Economy delivery: 11–25 May


Specifications

ISBN-13: 9783031010552
ISBN-10: 3031010558
Pages: 147
Illustrations: XVII, 147 p.
Dimensions: 191 x 235 mm
Weight: 0.3 kg
Publisher: Springer International Publishing
Collection: Springer
Series: Synthesis Lectures on Human Language Technologies

Place of publication: Cham, Switzerland

Why read this book

We recommend this book to professionals in artificial intelligence and computational linguistics who need a mathematically rigorous framework for model validation. Readers gain concrete tools for distinguishing results due to chance from genuine algorithmic performance. It is a practical guide that turns empirical evaluation from a box-ticking exercise over metrics into a deep analytic process, supported by R examples and modern statistical models.


About the authors

Stefan Riezler is a recognized researcher in natural language processing and machine learning, with a particular interest in empirical methods and the statistical evaluation of computational systems. Michael Hagmann contributes expertise in applied statistics, collaborating on methodologies that bring mathematical rigor to modern data science. Together, the authors offer an interdisciplinary perspective, combining computer science with advanced statistics to raise the standards of reporting and replicability in the field.


Short description

Empirical methods are means of answering methodological questions of empirical sciences by statistical techniques. The methodological questions addressed in this book include the problems of validity, reliability, and significance. In the case of machine learning, these correspond to the questions of whether a model predicts what it purports to predict, whether a model's performance is consistent across replications, and whether a performance difference between two models is due to chance, respectively. The goal of this book is to answer these questions by concrete statistical tests that can be applied to assess validity, reliability, and significance of data annotation and machine learning prediction in the fields of NLP and data science. Our focus is on model-based empirical methods where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests such as a validity test that allows the detection of circular features that circumvent learning. Furthermore, the book discusses a reliability coefficient using variance decomposition based on random effect parameters of LMEMs. Lastly, a significance test based on the likelihood ratio of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and further facilitates a refined system comparison conditional on properties of input data. This book can be used as an introduction to empirical methods for machine learning in general, with a special focus on applications in NLP and data science.
The book is self-contained, with an appendix on the mathematical background on GAMs and LMEMs, and with an accompanying webpage including R code to replicate experiments presented in the book.
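The reliability coefficient via variance decomposition mentioned above can be illustrated with a random-intercept LMEM on synthetic replication scores. This is a hedged sketch, not the book's code (which is in R): the share of total variance attributable to the grouping factor, here test items replicated across several runs, is an intraclass-correlation-style coefficient.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_items, n_runs = 40, 5
item = np.repeat(np.arange(n_items), n_runs)          # each item scored in 5 runs
item_effect = rng.normal(scale=0.4, size=n_items)[item]
score = 0.7 + item_effect + rng.normal(scale=0.2, size=n_items * n_runs)
df = pd.DataFrame({"score": score, "item": item})

# Random-intercept LMEM: score_ij = mu + u_i + e_ij
m = smf.mixedlm("score ~ 1", df, groups=df["item"]).fit(reml=True)
var_item = float(m.cov_re.iloc[0, 0])   # between-item variance
var_resid = float(m.scale)              # residual (within-replication) variance

# Reliability as the fraction of variance explained by the grouping factor;
# values near 1 mean replications agree closely.
icc = var_item / (var_item + var_resid)
```

With true variances 0.16 (between items) and 0.04 (residual), the coefficient should land near 0.8, indicating that most variation is systematic rather than replication noise.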

Contents

Preface.- Acknowledgments.- Introduction.- Validity.- Reliability.- Significance.- Bibliography.- Authors' Biographies.

Biographical note

Stefan Riezler has been a full professor in the Department of Computational Linguistics at Heidelberg University, Germany, since 2010, and is also co-opted in Informatics at the Department of Mathematics and Computer Science. He received his Ph.D. (with distinction) in Computational Linguistics from the University of Tübingen in 1998, conducted post-doctoral work at Brown University in 1999, and spent a decade in industry research (Xerox PARC, Google Research). His research focus is on interactive machine learning for natural language processing problems, especially in the application areas of cross-lingual information retrieval and statistical machine translation. He serves on the editorial boards of the main journals of the field (Computational Linguistics and Transactions of the Association for Computational Linguistics) and is a regular member of the program committees of various natural language processing and machine learning conferences. He has published more than 100 journal and conference papers in these areas. He also conducts interdisciplinary research as a member of the Interdisciplinary Center for Scientific Computing (IWR), for example on the early prediction of sepsis using machine learning and natural language processing techniques.
Michael Hagmann has been a graduate research assistant in the Department of Computational Linguistics at Heidelberg University, Germany, since 2019. He holds an M.Sc. in Statistics (with distinction) from the University of Vienna, Austria, and received the Austrian Statistical Society's award for the best Master's thesis in Applied Statistics. He has worked as a medical statistician at the medical faculty of Heidelberg University in Mannheim, Germany, and in the section for Medical Statistics at the Medical University of Vienna, Austria. His research focus is on statistical methods for data science and, recently, NLP. He has published more than 50 papers in journals for medical research and mathematical statistics.

Back cover text

This book introduces empirical methods for machine learning with a special focus on applications in natural language processing (NLP) and data science.  The authors present problems of validity, reliability, and significance and provide common solutions based on statistical methodology to solve them. The book focuses on model-based empirical methods where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests such as a validity test that allows for the detection of circular features that circumvent learning. Furthermore, the book discusses a reliability coefficient using variance decomposition based on random effect parameters of LMEMs. Lastly, a significance test based on the likelihood ratios of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and further facilitates a refined system comparison conditional on properties of input data.  The book is self-contained with an appendix on the mathematical background of generalized additive models and linear mixed effects models as well as an accompanying webpage with the related R and Python code to replicate the presented experiments. The second edition also features a new hands-on chapter that illustrates how to use the included tools in practical applications.