According to the Astrix Security Research Group, the rapid adoption of AI-based applications has raised concern among security executives about the risks of unvetted apps being connected to critical corporate systems.
ChatGPT, a generative-AI tool, has already been linked to a data leak at Samsung: employees pasted sensitive corporate information, including customer PII and proprietary source code, into the app. Because information submitted to ChatGPT can be used to further train OpenAI's models, its confidentiality can no longer be guaranteed.
Furthermore, not all generative-AI programs come from reliable sources. According to Astrix, employees are linking high-privilege AI-based apps to essential systems such as GitHub and Salesforce, creating substantial security risks. Astrix previously identified a dubious integration, "GPT For Gmail," built by an untrustworthy developer and granted broad access to an organization's Gmail accounts. The integration's permissions included the ability to view, compose, send, and delete emails, an unsettling degree of access.
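To make the "degree of access" concrete, here is a minimal sketch of how a security team might inspect the OAuth scopes such an integration actually holds, using Google's public tokeninfo endpoint. The token value is a placeholder, and the set of scopes treated as high-risk is an illustrative assumption; `https://mail.google.com/` is the full-mailbox scope that permits reading, sending, and deleting mail.

```python
"""Sketch: check which Gmail OAuth scopes a third-party integration was granted.
Assumes you have obtained the integration's access token during an audit;
the token string below is a placeholder, not a real credential."""
import requests  # third-party HTTP library (pip install requests)

TOKENINFO_URL = "https://oauth2.googleapis.com/tokeninfo"

# Illustrative set of scopes that grant sweeping mailbox control.
HIGH_RISK_SCOPES = {
    "https://mail.google.com/",                        # full read/send/delete access
    "https://www.googleapis.com/auth/gmail.modify",    # read and modify messages
}

def audit_token(access_token: str) -> None:
    resp = requests.get(TOKENINFO_URL, params={"access_token": access_token}, timeout=10)
    resp.raise_for_status()
    info = resp.json()
    granted = set(info.get("scope", "").split())       # scopes are space-delimited
    risky = granted & HIGH_RISK_SCOPES
    print(f"App (OAuth client id): {info.get('aud')}")
    print(f"Granted scopes: {sorted(granted)}")
    if risky:
        print(f"WARNING: high-risk scopes granted: {sorted(risky)}")

if __name__ == "__main__":
    audit_token("ya29.placeholder-token")  # hypothetical token captured in an audit
```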
Astrix recommends that enterprises prioritize mature non-human identity management: gaining visibility into the third-party services employees connect, enforcing control over the permissions those services hold, and properly assessing the security risks they pose.
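As one example of what that visibility can look like in practice (this is a sketch, not Astrix's product), a Google Workspace admin can enumerate the third-party OAuth apps each employee has authorized via the Admin SDK Directory API's Tokens resource. The service-account key file, admin account, user list, and high-risk scope list below are all placeholder assumptions.

```python
"""Sketch: list third-party OAuth apps authorized by employees in Google
Workspace and flag those holding high-privilege scopes. Assumes a service
account with domain-wide delegation and the admin.directory.user.security
scope; file names, accounts, and the risk list are hypothetical."""
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]
HIGH_RISK = {
    "https://mail.google.com/",                 # full Gmail access
    "https://www.googleapis.com/auth/drive",    # full Drive access
}

creds = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=SCOPES                # hypothetical key file
).with_subject("admin@example.com")             # hypothetical admin to impersonate

directory = build("admin", "directory_v1", credentials=creds)

for user in ["alice@example.com", "bob@example.com"]:  # hypothetical user list
    tokens = directory.tokens().list(userKey=user).execute().get("items", [])
    for token in tokens:
        risky = [s for s in token.get("scopes", []) if s in HIGH_RISK]
        if risky:
            print(f"{user}: app '{token.get('displayText')}' "
                  f"(client id {token.get('clientId')}) holds: {risky}")
```

Running a report like this regularly gives the inventory of connected apps that permission reviews and revocation decisions depend on.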
The sources for this piece include an article in The Hacker News.