Figure 2
The stages in an ML lifecycle, from problem formulation to deployment and monitoring, with their respective fairness considerations in orange. Clarify can be used at dataset construction and at model testing and monitoring to investigate them.
$AD$ (Accuracy Difference): we compare accuracy, i.e., the fraction of examples for which the prediction equals the label, across groups. We define $AD = (TP_a + TN_a)/n_a - (TP_d + TN_d)/n_d$.
$RD$ (Recall Difference): we compare recall (the fraction of label-positive examples that receive a positive prediction) across groups: $RD = TP_a/n^{(1)}_a - TP_d/n^{(1)}_d$, where $n^{(1)}$ denotes the number of label-positive examples in a group.

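The two definitions above map directly onto per-group confusion-matrix counts. The following is a minimal sketch, not Clarify's actual implementation; the `GroupCounts` container, function names, and example counts are our own, assuming binary labels and two groups $a$ and $d$:

```python
from dataclasses import dataclass

@dataclass
class GroupCounts:
    """Confusion-matrix counts for one group."""
    tp: int  # true positives
    tn: int  # true negatives
    fp: int  # false positives
    fn: int  # false negatives

    @property
    def n(self) -> int:
        """Total number of examples in the group."""
        return self.tp + self.tn + self.fp + self.fn

    @property
    def n_pos(self) -> int:
        """Number of label-positive examples in the group, n^(1)."""
        return self.tp + self.fn

def accuracy_difference(a: GroupCounts, d: GroupCounts) -> float:
    """AD = (TP_a + TN_a)/n_a - (TP_d + TN_d)/n_d."""
    return (a.tp + a.tn) / a.n - (d.tp + d.tn) / d.n

def recall_difference(a: GroupCounts, d: GroupCounts) -> float:
    """RD = TP_a/n_a^(1) - TP_d/n_d^(1)."""
    return a.tp / a.n_pos - d.tp / d.n_pos

# Hypothetical counts: group a is predicted both more accurately
# and with higher recall than group d.
a = GroupCounts(tp=40, tn=45, fp=5, fn=10)   # accuracy 0.85, recall 0.80
d = GroupCounts(tp=25, tn=40, fp=10, fn=25)  # accuracy 0.65, recall 0.50
print(accuracy_difference(a, d))  # ~0.20
print(recall_difference(a, d))    # ~0.30
```

For both metrics, a value of zero indicates parity between the groups, and the sign indicates which group the model favors.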

Source publication
Preprint
Understanding the predictions made by machine learning (ML) models and their potential biases remains a challenging and labor-intensive task that depends on the application, the dataset, and the specific model. We present Amazon SageMaker Clarify, an explainability feature for Amazon SageMaker that launched in December 2020, providing insights into...

Contexts in source publication

Context 1
... generating bias and explainability insights across the ML lifecycle, we integrated Clarify with the following SageMaker components: Data Wrangler to visualize dataset biases during data preparation, Studio & Experiments to explore biases and explanations of trained models, and Model Monitor to monitor these metrics. Figure 2 illustrates the stages of the ML lifecycle and the integration points of Clarify within SageMaker. ...
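For concreteness, here is a sketch of how such a bias analysis could be launched with the `sagemaker.clarify` module of the SageMaker Python SDK. The role ARN, bucket paths, column names, and facet are placeholders, and the exact SDK surface may vary by version:

```python
from sagemaker import Session, clarify

session = Session()

# Processor that runs the Clarify analysis as a SageMaker processing job.
processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the dataset lives and how it is laid out (placeholder paths/columns).
data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/train.csv",
    s3_output_path="s3://example-bucket/clarify-output",
    label="approved",
    headers=["age", "income", "gender", "approved"],
    dataset_type="text/csv",
)

# Which label value counts as favorable and which column is the facet.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="gender",
)

# Pre-training bias metrics, e.g. class imbalance (CI) and difference in
# proportions of labels (DPL), computed on the dataset before training.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)
```

Post-training metrics such as the accuracy and recall differences defined in Figure 2 additionally require a model endpoint configuration, since they compare predictions against labels.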
Context 2
... experience suggests that the successful adoption of fairness-aware ML approaches in practice requires building consensus and achieving collaboration across key stakeholders (such as product, policy, legal, engineering, and AI/ML teams, as well as end users and communities). Further, fairness- and explainability-related ethical considerations need to be taken into account during each stage of the ML lifecycle, for example by asking the questions stated in Figure 2. ...