Amid ongoing concerns that AI systems can discriminate against different groups, Amazon plans to distribute warning cards for the AI software sold by its cloud computing division.
Each card is a documentation template that provides critical information on several factors. Fairness and bias examines how a system affects different user subpopulations, such as those defined by gender or ethnicity. Explainability covers mechanisms for understanding and evaluating an AI system's outputs. Privacy and security examines how model data is used, addressing privacy and legal issues as well as protection against theft and exposure.
Robustness covers the mechanisms that ensure an AI system operates consistently. Governance evaluates the processes used to define, implement, and enforce responsible AI practices within an organization, while transparency provides stakeholders with the information they need to make informed decisions about whether and how to use the system.
The AI Service Cards also facilitate responsible AI by providing a single location for information on intended use cases, limitations, and the design choices AWS made when developing each model, along with best practices for deployment and optimization. This is part of AWS' commitment to building AI services that are fair, free of bias, transparent, and secure.
The sources for this piece include an article from Reuters.