A recent report revealed that major companies, including Walmart, Delta, T-Mobile, Chevron, and Starbucks, are using artificial intelligence to monitor employee conversations on platforms such as Slack and Microsoft Teams. The surveillance relies on software from a startup called “Aware,” which scans messages for keywords indicating employee dissatisfaction or potential safety risks.
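Aware has not published how its scanning works, so the following is only a minimal illustrative sketch of what a keyword-based flagging pass could look like; the keyword lists, categories, and function names are invented for illustration, and a production system would almost certainly use trained language models rather than fixed word lists.

```python
import re
from dataclasses import dataclass

# Hypothetical keyword lists; any real lexicons would be proprietary,
# far larger, and likely model-driven rather than hand-written.
DISSATISFACTION_TERMS = {"quit", "unfair", "burned out", "underpaid"}
SAFETY_TERMS = {"threat", "weapon"}

@dataclass
class Flag:
    category: str   # "dissatisfaction" or "safety"
    term: str       # which watched term matched

def scan_message(text: str) -> list[Flag]:
    """Return a flag for every watched term found in a message."""
    lowered = text.lower()
    flags = []
    for category, terms in (("dissatisfaction", DISSATISFACTION_TERMS),
                            ("safety", SAFETY_TERMS)):
        for term in terms:
            # Word-boundary match so "quit" doesn't fire inside "quite".
            if re.search(rf"\b{re.escape(term)}\b", lowered):
                flags.append(Flag(category, term))
    return flags

if __name__ == "__main__":
    print(scan_message("I'm so underpaid I might just quit."))
    # [Flag(category='dissatisfaction', term='underpaid'),
    #  Flag(category='dissatisfaction', term='quit')]
```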
The company’s AI analyzes billions of messages to identify trends, sentiment, and potential risks in corporate communications, including bullying, harassment, and other inappropriate behavior. While the analytics tool is not designed to identify individual employees, a separate eDiscovery tool can pinpoint individuals in cases of extreme risk, as defined by the client.
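Since the analytics tool reportedly works on aggregates rather than named individuals, the distinction can be made concrete with a rough sketch of anonymized, per-channel sentiment aggregation. Everything here is assumed: the record shape, the channel names, the scores (imagined as output of some upstream sentiment model in the range -1.0 to 1.0), and the minimum-group threshold.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical record shape: (channel, author_id, sentiment_score).
MESSAGES = [
    ("#logistics", "u1", -0.6),
    ("#logistics", "u2", -0.4),
    ("#frontend",  "u3",  0.7),
    ("#frontend",  "u1",  0.2),
]

MIN_GROUP_SIZE = 2  # suppress groups too small to remain anonymous

def channel_sentiment(messages):
    """Aggregate sentiment per channel, discarding author identities."""
    by_channel = defaultdict(list)
    for channel, _author, score in messages:
        by_channel[channel].append(score)   # identity dropped here
    return {
        ch: round(mean(scores), 2)
        for ch, scores in by_channel.items()
        if len(scores) >= MIN_GROUP_SIZE
    }

print(channel_sentiment(MESSAGES))
# {'#logistics': -0.5, '#frontend': 0.45}
```

The minimum-group threshold in this sketch mirrors the reported design: the analytics layer surfaces team-level trends without singling anyone out, while individual attribution is deferred to a separate tool reserved for cases the client deems extreme.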
Critics argue that such surveillance technologies come dangerously close to treating employees as inventory, raising ethical concerns about privacy and evoking the notion of thought crime. The debate reflects a broader discomfort with AI’s growing role in workplace monitoring and the tension between corporate risk management and individual privacy rights.