September 2021 · The Journal of Financial Data Science
September 2021
Understanding the predictions made by machine learning (ML) models and their potential biases remains a challenging and labor-intensive task that depends on the application, the dataset, and the specific model. We present Amazon SageMaker Clarify, an explainability feature for Amazon SageMaker that launched in December 2020, providing insights into data and ML models by identifying biases and explaining predictions. It is deeply integrated into Amazon SageMaker, a fully managed service that enables data scientists and developers to build, train, and deploy ML models at any scale. Clarify supports bias detection and feature importance computation across the ML lifecycle: during data preparation, model evaluation, and post-deployment monitoring. We outline the desiderata derived from customer input, the modular architecture, and the methodology for bias and explanation computations. Further, we describe the technical challenges encountered and the tradeoffs we had to make. For illustration, we discuss two customer use cases. We present our deployment results, including qualitative customer feedback and a quantitative evaluation. Finally, we summarize lessons learned and discuss best practices for the successful adoption of fairness and explanation tools in practice.
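To make the "bias detection during data preparation" step concrete, here is a minimal standalone sketch of two pre-training bias metrics of the kind such a tool reports: Class Imbalance (CI) and Difference in Positive Proportions in Labels (DPL). This is an illustrative toy implementation, not the Clarify codebase; the loan-approval data at the bottom is invented for the example.

```python
def class_imbalance(facet):
    """CI = (n_a - n_d) / (n_a + n_d), where n_a / n_d are the sizes of the
    advantaged / disadvantaged groups. facet is a list of 0/1 flags with
    1 marking membership in the disadvantaged group."""
    n_d = sum(facet)
    n_a = len(facet) - n_d
    return (n_a - n_d) / (n_a + n_d)

def diff_positive_proportions(labels, facet):
    """DPL = q_a - q_d, the gap in positive-label rates between the
    advantaged and disadvantaged groups. labels are 0/1 outcomes."""
    pos_a = sum(l for l, f in zip(labels, facet) if f == 0)
    pos_d = sum(l for l, f in zip(labels, facet) if f == 1)
    n_d = sum(facet)
    n_a = len(facet) - n_d
    return pos_a / n_a - pos_d / n_d

# Toy loan-approval dataset: 3 advantaged rows, 2 disadvantaged rows.
facet  = [0, 0, 0, 1, 1]
labels = [1, 1, 0, 1, 0]   # 1 = approved
print(class_imbalance(facet))                     # 0.2
print(diff_positive_proportions(labels, facet))   # 2/3 - 1/2 ≈ 0.167
```

Both metrics are zero for a perfectly balanced dataset, which is why they are useful as early warnings before any model is trained.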
August 2021
... These algorithms must adhere to societal values, particularly those that promote nondiscrimination [17]. While machine learning can offer precise classifications, depending on the underlying data it can also inadvertently perpetuate biases in crucial domains such as loan approval [20] and criminal justice [23]. For instance, loan approval algorithms may unfairly disadvantage single applicants by considering marital status. ...
... Google's PAIR team, for example, released the What-If Tool [32] and Fairness Indicators [33], which allow developers to visualize model behavior for different slices of data and compute basic bias metrics. There are also fairness libraries like Themis [34] or AEC (Audit AI) [35], and industry services such as Amazon SageMaker Clarify [36]. SageMaker Clarify is a service that helps detect bias in machine learning data and models; the user specifies which features are sensitive (like gender or age), and Clarify then computes bias metrics over those features and produces a report. ...
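The workflow described in the snippet above (declare a sensitive feature, get back per-group metrics and a summary report) can be sketched in a few lines of plain Python. The `bias_report` function, column names, and data below are all hypothetical and simplified; a real service reports many more metrics.

```python
from collections import defaultdict

def bias_report(rows, label_col, sensitive_col):
    """Compute the positive-outcome rate for each value of a user-specified
    sensitive feature, plus the largest gap between any two groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in rows:
        group = row[sensitive_col]
        totals[group] += 1
        positives[group] += row[label_col]
    rates = {g: positives[g] / totals[g] for g in totals}
    return {"positive_rate": rates,
            "max_gap": max(rates.values()) - min(rates.values())}

# Hypothetical loan-approval rows; "gender" is the declared sensitive feature.
rows = [
    {"gender": "F", "approved": 1},
    {"gender": "F", "approved": 0},
    {"gender": "M", "approved": 1},
    {"gender": "M", "approved": 1},
]
report = bias_report(rows, "approved", "gender")
print(report["positive_rate"])  # {'F': 0.5, 'M': 1.0}
print(report["max_gap"])        # 0.5
```

The key design point the snippet highlights is that the tool cannot infer which features are sensitive; that judgment comes from the user, and the report is only as meaningful as that declaration.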