A Brief Look at Explainable AI (XAI): Why It May Be Crucial for Industrial Production
Most of the time, we can't see how an AI model arrives at its results. Explainable AI, or XAI for short, is a set of methods that lets humans understand how a model works and what its results mean.
The set-up of XAI consists of three main methods:
- Prediction accuracy
- Traceability
- Decision understanding
Prediction accuracy and traceability address the technology side, while the third method addresses human needs. The most popular technique for explaining prediction accuracy is Local Interpretable Model-agnostic Explanations, or LIME, which explains the predictions of a classifier by approximating it locally with a simpler, interpretable model. Traceability, in turn, can be achieved by restricting how decisions are made, narrowing the scope of machine learning rules and features. One traceability technique is Deep Learning Important FeaTures, or DeepLIFT for short. It compares the activation of each neuron in the neural network with that of a reference neuron, revealing dependencies and linkages along the chain of traceability.
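To make the DeepLIFT idea concrete, here is a minimal sketch for the simplest possible case, a single linear neuron: each input's contribution is its weight times the difference between the actual input and a chosen reference input. All names and values are illustrative; this is not the real DeepLIFT library API, which also handles nonlinear layers.

```python
# Minimal sketch of the DeepLIFT idea for one linear neuron:
# contribution of feature i = weight_i * (x_i - reference_i).

def linear_contributions(weights, x, x_ref):
    """Contribution of each input feature to the change in the
    neuron's pre-activation, relative to the reference input."""
    return [w * (xi - ri) for w, xi, ri in zip(weights, x, x_ref)]

weights = [0.5, -1.0, 2.0]   # weights of a single neuron (illustrative)
x       = [1.0, 2.0, 3.0]    # actual input
x_ref   = [0.0, 0.0, 0.0]    # reference ("baseline") input

contribs = linear_contributions(weights, x, x_ref)
print(contribs)       # [0.5, -2.0, 6.0]
print(sum(contribs))  # 4.5 — equals w·x minus w·x_ref for a linear neuron
```

The key property is that the contributions sum exactly to the change in the neuron's output between the actual input and the reference, so nothing is left unexplained.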
The final method, decision understanding, involves informing and training teams to help them overcome their initial concerns about AI and understand how its decisions are made.
Advantages of XAI:
- Helps establish trust in the AI model.
- Helps identify problems and develop solutions that improve model performance.
- Enables analyzing the model and generating alerts when it deviates from the desired outcomes or performs inadequately.
- May assist developers in ensuring that the system functions as intended and meets regulatory requirements.
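The alerting advantage above can be sketched very simply: track the model's recent accuracy in a rolling window and raise an alert when it falls below a threshold. The class name, window size, and threshold here are hypothetical choices, not part of any particular XAI tool.

```python
# Hypothetical sketch: alert when a model's rolling accuracy drops
# below a chosen threshold.
from collections import deque

class AccuracyMonitor:
    def __init__(self, threshold=0.9, window=100):
        self.threshold = threshold
        self.recent = deque(maxlen=window)  # rolling 1/0 outcomes

    def record(self, prediction, actual):
        self.recent.append(1 if prediction == actual else 0)

    def alert(self):
        """True when rolling accuracy falls below the threshold."""
        if not self.recent:
            return False
        return sum(self.recent) / len(self.recent) < self.threshold

monitor = AccuracyMonitor(threshold=0.8, window=10)
for pred, actual in [(1, 1), (0, 1), (1, 0), (0, 0)]:
    monitor.record(pred, actual)
print(monitor.alert())  # accuracy is 0.5, below 0.8, so True
```

In practice, such a monitor would be paired with the explanation techniques above, so that when an alert fires, LIME or DeepLIFT can help diagnose *why* performance dropped.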
Reference:
- What is Explainable AI? (2022, May 4). YouTube. https://www.youtube.com/watch?v=jFHPEQi55Ko
