When Explainable AI (XAI) is incorporated into a Flow, users may want additional detail on how particular conclusions were reached.
To provide transparency and build confidence in the predictions generated, several methods are available for investigating how those predictions are made.
One common method is a waterfall plot, which examines a specific prediction by breaking down how much each field’s value contributed to it. Hovering over each value shows exactly how much that field pushed the prediction up or down relative to the rest of the data.
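As an illustration of the idea (not the Flow product's own implementation), the sketch below builds a waterfall explanation for a single prediction using the open-source shap library and a hypothetical model with made-up field names such as "age" and "income".

```python
# Minimal sketch, assuming the open-source `shap` and scikit-learn libraries;
# the model and field names are hypothetical illustrations only.
import pandas as pd
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data with named fields.
X, y = make_regression(n_samples=500, n_features=4, random_state=0)
X = pd.DataFrame(X, columns=["age", "income", "tenure", "visits"])

model = RandomForestRegressor(random_state=0).fit(X, y)

# Compute per-field contributions for each prediction.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Waterfall plot: how much each field's value pushed this one
# prediction above or below the baseline (expected) value.
shap.plots.waterfall(shap_values[0])
```

Each bar in the resulting plot corresponds to one field, so the breakdown mirrors the per-field impact described above.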
For more about incorporating XAI into a Flow, see this article.