Explainable Artificial Intelligence: Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part III: Communications in Computer and Information Science, Book 2155
Edited by Luca Longo, Sebastian Lapuschkin, Christin Seifert · English · Paperback – 10 July 2024
The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on:
Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI.
Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI.
Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI.
Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.
| All formats and editions | Price | Delivery |
|---|---|---|
| Paperback (2) | 518.97 lei | 3–5 weeks |
| Springer Nature Switzerland – 10 July 2024 | 518.97 lei | 3–5 weeks |
| Springer Nature Switzerland – 10 July 2024 | 520.33 lei | 3–5 weeks |
Price: 518.97 lei
Old price: 648.71 lei
-20% New
Express points: 778
Estimated price in foreign currency: 91.82€ • 107.13$ • 80.28£
Book in stock
Economy delivery: 27 December 25 – 10 January 26
Order line: 021 569.72.76
Specifications
ISBN-13: 9783031637995
ISBN-10: 3031637992
Pages: 456
Illustrations: XVII, 456 p. 130 illus., 103 illus. in color.
Dimensions: 155 x 235 mm
Edition: 2024
Publisher: Springer Nature Switzerland
Collection: Springer
Series: Communications in Computer and Information Science
Place of publication: Cham, Switzerland
Contents
- Counterfactual explanations and causality for eXplainable AI.
- Sub-SpaCE: Subsequence-based Sparse Counterfactual Explanations for Time Series Classification Problems.
- Human-in-the-loop Personalized Counterfactual Recourse.
- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images.
- Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence.
- CountARFactuals – Generating plausible model-agnostic counterfactual explanations with adversarial random forests.
- Causality-Aware Local Interpretable Model-Agnostic Explanations.
- Evaluating the Faithfulness of Causality in Saliency-based Explanations of Deep Learning Models for Temporal Colour Constancy.
- CAGE: Causality-Aware Shapley Value for Global Explanations.
- Fairness, trust, privacy, security, accountability and actionability in eXplainable AI.
- Exploring the Reliability of SHAP Values in Reinforcement Learning.
- Categorical Foundation of Explainable AI: A Unifying Theory.
- Investigating Calibrated Classification Scores through the Lens of Interpretability.
- XentricAI: A Gesture Sensing Calibration Approach through Explainable and User-Centric AI.
- Toward Understanding the Disagreement Problem in Neural Network Feature Attribution.
- ConformaSight: Conformal Prediction-Based Global and Model-Agnostic Explainability Framework.
- Differential Privacy for Anomaly Detection: Analyzing the Trade-off Between Privacy and Explainability.
- Blockchain for Ethical & Transparent Generative AI Utilization by Banking & Finance Lawyers.
- Multi-modal Machine learning model for Interpretable Mobile Malware Classification.
- Explainable Fraud Detection with Deep Symbolic Classification.
- Better Luck Next Time: About Robust Recourse in Binary Allocation Problems.
- Towards Non-Adversarial Algorithmic Recourse.
- Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring.
- XAI for Time Series Classification: Evaluating the Benefits of Model Inspection for End-Users.