Explainable Artificial Intelligence: Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part IV: Communications in Computer and Information Science, Book 2156
Edited by Luca Longo, Sebastian Lapuschkin, Christin Seifert · English · Paperback – 10 Jul 2024
The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on:
Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI.
Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI.
Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI.
Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.
| All formats and editions | Price | Delivery |
|---|---|---|
| Paperback (2) | 518.97 lei | 3–5 weeks |
| Springer Nature Switzerland – 10 Jul 2024 | 518.97 lei | 3–5 weeks |
| Springer Nature Switzerland – 10 Jul 2024 | 520.33 lei | 3–5 weeks |
Price: 520.33 lei
Old price: 650.42 lei (-20%, new)
Express points: 780
Estimated price in foreign currency: 92.06€ • 107.25$ • 80.39£
In stock
Economy delivery: 27 December 2025 – 10 January 2026
Phone orders: 021 569.72.76
Specifications
ISBN-13: 9783031638022
ISBN-10: 3031638026
Pages: 466
Illustrations: XVII, 466 p. 149 illus., 136 illus. in color.
Dimensions: 155 x 235 mm
Weight: 0.68 kg
Edition: 2024
Publisher: Springer Nature Switzerland
Collection: Springer
Series: Communications in Computer and Information Science
Place of publication: Cham, Switzerland
Contents
- Explainable AI in healthcare and computational neuroscience.
- SRFAMap: a method for mapping integrated gradients of a CNN trained with statistical radiomic features to medical image saliency maps.
- Transparently Predicting Therapy Compliance of Young Adults Following Ischemic Stroke.
- Precision medicine in student health: Insights from Tsetlin Machines into chronic pain and psychological distress.
- Evaluating Local Explainable AI Techniques for the Classification of Chest X-ray Images.
- Feature importance to explain multimodal prediction models. A clinical use case.
- Identifying EEG Biomarkers of Depression with Novel Explainable Deep Learning Architectures.
- Increasing Explainability in Time Series Classification by Functional Decomposition.
- Towards Evaluation of Explainable Artificial Intelligence in Streaming Data.
- Quantitative Evaluation of xAI Methods for Multivariate Time Series - A Case Study for a CNN-based MI Detection Model.
- Explainable AI for improved human-computer interaction and Software Engineering for explainability.
- Influenciae: A library for tracing the influence back to the data-points.
- Explainability Engineering Challenges: Connecting Explainability Levels to Run-time Explainability.
- On the Explainability of Financial Robo-advice Systems.
- Can I trust my anomaly detection system? A case study based on explainable AI.
- Explanations considered harmful: The Impact of misleading Explanations on Accuracy in hybrid human-AI decision making.
- Human emotions in AI explanations.
- Study on the Helpfulness of Explainable Artificial Intelligence.
- Applications of explainable artificial intelligence.
- Pricing Risk: An XAI Analysis of Irish Car Insurance Premiums.
- Exploring the Role of Explainable AI in the Development and Qualification of Aircraft Quality Assurance Processes: A Case Study.
- Explainable Artificial Intelligence applied to Predictive Maintenance: Comparison of Post-hoc Explainability Techniques.
- A comparative analysis of SHAP, LIME, ANCHORS, and DICE for interpreting a dense neural network in Credit Card Fraud Detection.
- Application of the representative measure approach to assess the reliability of decision trees in dealing with unseen vehicle collision data.
- Ensuring Safe Social Navigation via Explainable Probabilistic and Conformal Safety Regions.
- Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments.
- AcME-AD: Accelerated Model Explanations for Anomaly Detection.