In a concerning development for users of AI services, a ChatGPT user encountered a serious privacy issue when he discovered chat histories from unrelated users in his account. These chats included sensitive content such as unpublished research papers and private data. The incident, initially feared to be a leak from ChatGPT itself, has been attributed to an account compromise, according to OpenAI officials.
The ChatGPT user, who accesses his account from Brooklyn, New York, was surprised to find that the unauthorized logins traced back to Sri Lanka. Because he uses a strong, unique password, he was skeptical that his account had been compromised and suggested a deeper issue was at play.
OpenAI’s investigation concluded that the incident was not a case of ChatGPT inadvertently sharing chat histories among users; it was identified as an account takeover. The episode has drawn attention to the absence of security features on the ChatGPT platform, such as two-factor authentication (2FA) and the ability to review recent logins and their IP locations, both of which are standard measures on most major online platforms.
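For context, the 2FA most platforms offer is typically based on time-based one-time passwords (TOTP, RFC 6238), the rotating six-digit codes produced by authenticator apps. The sketch below is a minimal, standard-library-only Python illustration of how such codes are generated and checked; it is purely illustrative, says nothing about how OpenAI or any particular platform implements 2FA, and the secret shown is a made-up example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password.

    secret_b32 is the shared secret in base32, as typically encoded in
    an authenticator-app enrollment QR code.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of complete time steps since the Unix epoch (the TOTP counter).
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the low 4 bits of the last byte pick
    # an offset into the digest; take 31 bits starting there.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    # Constant-time comparison so verification doesn't leak digits via timing.
    return hmac.compare_digest(totp(secret_b32), submitted)

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"  # hypothetical example secret, not a real credential
    print("current code:", totp(demo_secret))
```

Because the code rotates every 30 seconds and is derived from a secret stored only on the user's device, a stolen password alone, as in the account takeover described here, would not be enough to log in on a platform that enforces it.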
The initial fears that ChatGPT was leaking private conversations, including sensitive login credentials and personal details, have been dispelled by OpenAI’s findings. Nonetheless, this incident serves as a stark reminder of the importance of robust security practices for online accounts. Users are advised to be vigilant about sharing personal information in AI service queries.
This incident, along with similar reports in the past, underscores the potential risks associated with using AI services and highlights the critical need for users to protect their personal and proprietary data.
Source: Ars Technica