Responsible Use of AI in Military Systems
Edited by Jan Maarten Schraagen · English · Paperback – 26 Dec 2025
| All formats and editions | Price | Express |
|---|---|---|
| Paperback – Taylor & Francis Ltd (Sales), 26 Dec 2025 | 359.26 lei (pre-order) | – |
| Hardback – CRC Press, 26 Apr 2024 | 780.57 lei (22–36 days) | +30.00 lei (5–11 days) |
Price: 359.26 lei
Previous price: 449.08 lei
-20% (pre-order)
Express points: 539
Estimated price in other currencies:
63.57€ • 74.55$ • 55.83£
Not yet published
Specifications
ISBN-13: 9781032531168
ISBN-10: 1032531169
Pages: 388
Dimensions: 156 x 234 x 20 mm
Weight: 0.54 kg
Publisher: Taylor & Francis Ltd (Sales)
Contents
Preface. Acknowledgements. Editor. Contributors.
1 Introduction to Responsible Use of AI in Military Systems.
SECTION I Implementing Military AI Responsibly: Models and Approaches.
2 A Socio‑Technical Feedback Loop for Responsible Military AI Life‑Cycles from Governance to Operation.
3 How Can Responsible AI Be Implemented?
4 A Qualitative Risk Evaluation Model for AI‑Enabled Military Systems.
5 Applying Responsible AI Principles into Military AI Products and Services: A Practical Approach.
6 Unreliable AIs for the Military.
SECTION II Liability and Accountability of Individuals and States.
7 Methods to Mitigate Risks Associated with the Use of AI in the Military Domain.
8 ‘Killer Pays’: State Liability for the Use of Autonomous Weapons Systems in the Battlespace.
9 Military AI and Accountability of Individuals and States for War Crimes in the Ukraine.
10 Scapegoats!: Assessing the Liability of Programmers and Designers for Autonomous Weapons Systems.
SECTION III Human Control in Human–AI Military Teams.
11 Rethinking ‘Meaningful Human Control’.
12 AlphaGo’s Move 37 and Its Implications for AI‑Supported Military Decision‑Making.
13 Bad, Mad, and Cooked: Moral Responsibility for Civilian Harms in Human–AI Military Teams.
14 Neglect Tolerance as a Measure for Responsible Human Delegation.
SECTION IV Policy Aspects.
15 Strategic Interactions: The Economic Complements of AI and the Political Context of War.
16 Promoting Responsible State Behavior on the Use of AI in the Military Domain: Lessons.
SECTION V Bounded Autonomy.
17 Bounded Autonomy.
Index.
About the editor
Jan Maarten Schraagen is Principal Scientist at TNO, The Netherlands. His research interests include human-autonomy teaming and responsible AI. He is the main editor of Cognitive Task Analysis (2000) and Naturalistic Decision Making and Macrocognition (2008), and co-editor of the Oxford Handbook of Expertise (2020). He is editor-in-chief of the Journal of Cognitive Engineering and Decision Making. Dr. Schraagen holds a PhD in Cognitive Psychology from the University of Amsterdam, The Netherlands.