
Researchers cite challenges to compliance with data protection regulations for AI chatbots

Researchers from CSIRO, Australia’s national science agency, and the Australian National University have found that AI chatbots and the machine learning applications behind them may not be able to comply with data protection laws such as the European Union’s General Data Protection Regulation (GDPR).

The researchers argue that large language models such as OpenAI’s ChatGPT, Google’s Flan-T5, Meta’s LLaMA, and Anthropic’s Claude process and store information differently from search engines, making it difficult for them to comply with the right to be forgotten.

The right to be forgotten allows individuals to request the removal of their personal data from search engines. Applying this right to large language models is far more complicated, however, because it is unclear what personal data a model has stored or how that data can be linked to specific individuals. Even if the relevant data could be identified and removed, doing so would likely degrade the model’s performance, and retraining a new version of the model is slow and expensive.

There is also a fundamental tension between the right to be forgotten and the persistent nature of data embedded in AI models, and scholars have pointed out this mismatch between legal requirements and technical realities. Model creators such as OpenAI are nonetheless trying to bridge the gap by offering ways for individuals to object to the use of their personal information and to exercise their data rights.

The sources for this piece include an article in The Register.
