IEEE 2894-2024 PDF
Original price: $77.00. Current price: $38.00.
IEEE Guide for an Architectural Framework for Explainable Artificial Intelligence
IEEE, 08/30/2024
File Format: PDF
Description
This guide provides a technological framework that aims to increase the trustworthiness of AI systems through explainable artificial intelligence (XAI) technologies and methods. The document also provides measurable solutions for evaluating AI systems in terms of explainability. Specifically, the document illustrates the following aspects of XAI systems:
a) The requirements for providing human-understandable explanations for AI systems in different use cases, for example, healthcare and financial applications
b) Approaches that offer a series of available tools for giving an AI model tenable explanations (see the illustrative sketch after this list)
c) A set of measurable solutions for evaluating AI systems and their corresponding performance, such as the availability, resiliency, accuracy, safety, security, and privacy of the AI system
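As a purely illustrative sketch of the kind of post-hoc, model-agnostic explanation tool referred to in item b), the example below computes permutation feature importance for a simple classifier. None of this code comes from IEEE 2894-2024; it assumes scikit-learn and NumPy are available and uses a public toy dataset.

```python
# Illustrative sketch only (not part of IEEE 2894-2024): a post-hoc,
# model-agnostic explanation via permutation feature importance.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
baseline = model.score(X_test, y_test)  # accuracy of the unmodified model

rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the link between feature j and the label
    importances.append(baseline - model.score(X_perm, y_test))

# Report the five features whose shuffling hurts accuracy the most.
for j in np.argsort(importances)[::-1][:5]:
    print(f"{data.feature_names[j]}: accuracy drop {importances[j]:.3f}")
```

A larger accuracy drop when a feature is shuffled suggests that the model relies more heavily on that feature, which is one human-understandable way of summarizing model behavior.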
Artificial intelligence systems are expected to be trustworthy and to adhere to human ethical principles, including fairness, privacy, and transparency. Achieving this trustworthiness requires that the underlying mechanisms of AI systems be transparent and understandable to all of their stakeholders. This need motivates the study of a variety of explainable artificial intelligence (XAI) technologies and methods. Given the urgent need for and profound influence of XAI, this guide aims to provide a technological framework that facilitates the adoption of relevant methods, supports the evaluation and comparison of different approaches, and showcases typical scenarios in which XAI can bring great value to AI system stakeholders and to society.
New IEEE Standard – Active. Dramatic success in machine learning has led to a new wave of artificial intelligence applications that offer extensive benefits to our daily lives. The loss of explainability during this transition, however, brings vulnerability to malicious data, poor model structure design, and suspicion from stakeholders and the general public, all with a range of legal implications. This dilemma has prompted the study of explainable AI (XAI), an active research field that aims to make the results of AI systems more understandable to humans. The field holds great promise for improving the trust and transparency of AI-based systems and is considered a necessary route for AI to move forward. This guide provides a technological blueprint for building, deploying, and managing machine learning models that meet the requirements of transparent and trustworthy AI by adopting a variety of XAI methodologies. It defines the architectural framework and application guidelines for explainable AI, including: the description and definition of XAI; the types of XAI methods and the application scenarios to which each type applies; and the performance evaluation of XAI.
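As one hedged illustration of what "performance evaluation of XAI" can look like in practice (this is a common proxy in the XAI literature, not a metric prescribed by IEEE 2894-2024), the sketch below measures the fidelity of an interpretable surrogate model to a black-box model, i.e., how often the surrogate's predictions agree with those of the model it is meant to explain. It assumes scikit-learn is available.

```python
# Illustrative sketch only: surrogate-model fidelity as one example proxy
# for evaluating an explanation. IEEE 2894-2024 defines its own criteria.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Opaque model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Interpretable surrogate trained to imitate the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
print(f"Surrogate fidelity to black-box model: {fidelity:.3f}")
print(f"Black-box test accuracy: {black_box.score(X_test, y_test):.3f}")
```

A shallow tree with high fidelity offers a compact, inspectable approximation of the black box; low fidelity signals that the explanation should not be trusted as a description of the model's behavior.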