Salesforce, a leading provider of customer relationship management software, has released a new study, Guidelines for Trusted Generative AI, with recommendations for reducing AI bias. The study advises companies on how to build trust, safety, and sustainability into their AI data stores, and on when to give humans oversight of these models.
The guidelines are based on a set of principles developed by a multidisciplinary team of experts in artificial intelligence, ethics, and human rights. They emphasize transparency, accountability, and responsible use of Generative AI. They urge developers to make their systems transparent and explainable so that users can understand how they work and how they reach decisions. They also stress that developers must be accountable for the impact of their systems and must ensure they are not used to create harmful or malicious content.
The guidelines also emphasize the importance of addressing bias and fairness in Generative AI. They urge developers to train their systems on diverse and representative data sets, and to consider the potential impacts of their systems on marginalized communities. Careless slicing and dicing of data, by contrast, can introduce biases into models, including historical bias, representation bias, measurement bias, aggregation bias, and evaluation bias.
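As a concrete illustration of one of these problems, representation bias arises when a group's share of the training data differs sharply from its share of the population the model will serve. A minimal sketch of such a check is below; the function name, the toy data, and the reference shares are assumptions for illustration, not part of the Salesforce guidelines.

```python
from collections import Counter

def representation_gap(samples, reference_shares):
    """For each group, return (share in training data) - (expected
    population share). Large negative gaps hint at representation bias.
    Illustrative helper only; not from the Salesforce study."""
    total = len(samples)
    counts = Counter(samples)
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference_shares.items()
    }

# Toy data: group "B" is half the population but only 20% of the sample.
sample = ["A"] * 8 + ["B"] * 2
gaps = representation_gap(sample, {"A": 0.5, "B": 0.5})
print(gaps)
```

In practice, a gap like the -0.3 computed for group "B" here would prompt rebalancing or resampling the training set before a model is trained on it.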
Finally, the guidelines emphasize the importance of involving a diverse set of stakeholders in the development and deployment of Generative AI, such as users, ethics and human rights experts, and representatives from affected communities. The study goes on to suggest that companies consider verifiable data, safety, honesty, empowerment, and sustainability in order to keep bias at bay in AI models.
The sources for this piece include an article in TechRepublic.