Zoom has backtracked on its policy of using customer calls to train its AI models after facing a backlash from users concerned about their privacy.
The company had previously said it could use audio, video, and chat content from calls to train its AI models without user consent. Following the backlash, Zoom has clarified that it will use this data for AI training only if users explicitly opt in.
The controversy erupted on August 6, when the tech blog Stack Diary highlighted Zoom’s terms of service. Section 10.2 of those terms suggested that users agreed to extensive use of Service Generated Data, including for machine learning and AI applications, which prompted a barrage of criticism from Zoom users online.
Individuals such as Gabriella Coleman and Brianna Wu took to social media to express their displeasure, threatening to switch to rival services. Zoom’s chief product officer, Smita Hashim, responded to the uproar with a blog post clarifying the updated terms of service and assuring users that audio, video, and chat content would not be used for AI training without their consent.
The backlash against Zoom’s original policy is part of a broader wave of public concern about the use of personal data to train AI models. In recent months, several high-profile cases have seen companies accused of training AI models on user data without consent.
For example, in July, more than 8,000 authors signed an open letter to AI companies demanding compensation for the use of their books to train AI systems without permission. And in May, a group of artists sued AI companies for using their artwork to train AI image generators without their consent.
The sources for this piece include an article from Business Insider.