The Banking Risk of AI Explanation
Forthcoming
This title has not yet been published. You can pre-order it now.
Artificial Intelligence brings significant benefits to banking, but it also introduces novel risks, among them the requirement that decisions made by complex machine learning systems and deep neural networks be adequately explained to humans. This work offers a multifaceted view of the topic, harmonizing the legal frameworks of Data Protection and Artificial Intelligence, the perspective of systems engineers, academic contributions from the computing community, and banking risk management under Basel III and the DORA Regulation. It proposes the Five Beacons as a structural model of explanation, intended to strengthen the protection of financial customers (and every citizen) against the machine. As the book states: "Rather than treating explainability as a single explanation delivered at the end of an automated process, this model establishes a sequence of prior explanations, each addressed to actors capable of understanding and validating increasingly complex aspects of the system's behavior. This structure ensures that the explanation ultimately provided to the customer is supported by earlier layers of technical, organizational, and supervisory accountability."
