In a new report, researchers at Stanford University have found that the biggest AI companies are not being transparent about their massive and powerful AI models.
The Foundation Model Transparency Index, which graded the flagship models of 10 major AI developers on 100 indicators of transparency, found that even the highest-scoring model, Meta’s Llama 2, received only 54 points out of 100.
The report’s authors say that this lack of transparency is a major concern because it makes the potential risks and benefits of these models hard to assess. For example, the researchers found that most companies do not disclose what data their models were trained on or how those models are being used in the real world, which makes it difficult to know whether the models are biased or whether they are being deployed in ways that could harm people.
The report also found that most companies are not transparent about the labour practices involved in developing and maintaining their AI models. Many companies rely on low-wage workers in developing countries to help train their models, yet they do not disclose who these workers are, what they are paid, or under what conditions they work.
The report’s authors say that the AI industry needs to be more transparent about its models in order to build trust with the public and to ensure that these models are used responsibly. They call on companies to disclose more information about their models, including the data they were trained on, how they are being used, and the labour practices that go into developing and maintaining them.
The sources for this piece include an article in IEEE Spectrum.