Apple may face an unexpected penalty from the US Government's new lawsuit, a survey of CIOs complains about application sprawl but suggests the way out of it is more applications, 1% of employees cause 88% of data loss events, and details surface about some potentially enormous developments in AI in the coming months.
These stories and more on the “sum of all fears” edition of Hashtag Trending. I’m your host, Jim Love, let’s get into it:
Even if Apple manages to win the lawsuit launched by the US Department of Justice last week, it may get a penalty that it fears more than fines – disclosure.
This legal battle could force the revelation of Apple’s most closely guarded secrets, potentially exposing detailed insights into its operations, strategies, and unannounced projects during the discovery process.
Apple's carefully managed secrecy and public image could be badly damaged as the courtroom becomes a stage where aspects of its business, usually kept under wraps, may be disclosed.
This has happened in past legal skirmishes. When Apple sued Samsung a decade ago, it was forced to share details of unlaunched prototypes, market research and its highly secret design process. That lawsuit also dragged in details that other tech companies wanted kept secret; Intel, Qualcomm and others filed motions to try to keep their business dealings out of the public record.
And in 2005, Apple was again forced to confirm unannounced products – ironically, in a case it brought to punish people for leaking its product information.
Perhaps with this experience Apple has gotten better at protecting its information in lawsuits, or perhaps it's just PR bravado, but an Apple spokesperson told Axios: "We have litigated dozens of high-profile cases over the last 15 years. DOJ has already had access to millions of documents during the course of the investigation. Yet they only used the same tired documents that have been part of the public record."
We’ll see.
Source: Axios
A Harris poll claims that 84% of the CIOs surveyed are concerned about "application sprawl." According to the report, the number of new applications is growing alarmingly: in 2022, organizations were adding 20 to 40 new applications per year; by 2024, that had grown to 30 to 60 per year.
Not surprisingly, half of those surveyed were planning to consolidate various applications.
Where are these applications coming from? While the report doesn't explicitly say, it would seem that a number of them are new AI applications.
90% of the CIOs sampled agree that AI tools can dramatically improve their own performance as well as the performance of their employees.
When asked about the benefits of AI applications, just over half, or 52%, said AI saved time on creative tasks, and 50% felt AI helped them get data-driven insights. And in an answer I couldn't quite understand, half of them felt that AI would help them consolidate applications. My question: would it consolidate them faster than AI itself is adding to the application count? Because despite the earlier findings on "application sprawl," 94% plan to invest in these new AI-driven tools.
Some other observations? 70% claim that they have established some “guard rails” for safe use of AI in the workplace.
The study covered 1,369 CIOs, with approximately 150 in each of the countries surveyed – the US, Spain, Germany, France, Brazil, Mexico, India and Australia.
And once more – watch how many times this happens – there’s no mention of Canada. So, we’re not going to mention who sponsored the poll. Sounds fair to me.
Source: Harris Poll
A report by security company Proofpoint reveals that the average organization has grappled with approximately 15 incidents of data loss in the past year alone, translating to more than one episode per month. A staggering 71% of the respondents pinpoint careless users as the culprits behind these breaches.
This spans a range of actions, from misdirecting emails to visiting phishing sites, installing unauthorized software, and emailing sensitive data to personal accounts. These behaviors, although preventable, suggest a significant lapse in organizational vigilance.
One of the most common, yet easily avoidable, sources of data loss is misdirected email. The report found that one-third of employees have sent emails to the wrong recipient, posing a considerable risk to data security.
In a company with 5,000 employees, that works out to roughly 3,400 misdirected emails annually – about two per offending employee. These errors are not just simple operational mistakes – they could also lead to hefty fines under GDPR or other privacy legislation due to the potential exposure of sensitive information.
The rise of generative AI technologies, including ChatGPT, Grammarly, Bing Chat, and Google Gemini, marks the fastest-growing area of concern. As these tools gain traction, they are increasingly being fed sensitive information.
Not all data loss incidents are accidents or carelessness. About 20% of respondents identified malicious insiders, such as employees or contractors, as intentionally causing breaches. These incidents can also be presumed to have more severe consequences precisely because of that deliberate intent.
The survey also identifies departing employees as a significant risk factor – not because these individuals perceive their actions as malicious, but because they feel entitled to take certain information with them. Data from Proofpoint indicates a troubling trend: 87% of anomalous file exfiltration among cloud tenants over a nine-month period was attributed to departing employees.
But privileged users, such as those in HR and finance with access to sensitive data, are deemed the highest risk, with a mere 1% responsible for 88% of data loss events. This finding underscores the importance of actively managing and monitoring privileged access – something many organizations do not do effectively.
On a positive note, the survey reveals a growing maturity in organizations' approach to data loss prevention and a move away from compliance-driven measures towards a more holistic view of data security, particularly in sectors that have shown great vulnerability, such as healthcare and government.
Source: Proofpoint
And finally, there are reports that OpenAI will release a new model, which some are calling GPT-5, mid-year. Whatever the name, Sam Altman himself said in a recent speech that this will be a major upgrade.
Wes Roth, a YouTube commentator who follows OpenAI closely, reported that some CEOs have had early access to this new model, and one is reported to have said, "it's materially better."
So, what will this new development be? The speculation, again fueled by comments from those who have seen the model, is that the big advance will be autonomous agents – intelligent agents that can learn, plan, and take actions in the real world, marking the next phase of AI development.
Autonomous agents already exist. We covered them a while ago when the Rabbit R1 launched with the ability to use them.
But in the past week there's been another example of how powerful these agents can be. Devin is an autonomous AI agent built by Cognition Labs to be a software developer. Devin, an agent that learns from its work and its mistakes, has shown itself to be an extremely powerful and sophisticated development tool.
Ethan Mollick, a professor at Wharton, asked Devin to go on Reddit and advertise its services for web development. In a thread that has since been taken down, Devin got on Reddit and was able to understand and obey the rules for how developers can solicit assignments.
It also understood that it should try to charge for its work.
Devin got on Reddit, posted and monitored the thread for responses.
Devin's post got 366 views and some comments, and Devin asked for a Reddit API key so it could respond. At that point, Mollick stopped the experiment and took down the post.
The intent was not to fool anyone or to make money; it was to demonstrate that even at the level of GPT-4, you can build autonomous agents that can successfully navigate the nuances of social media.
That's where we are today. What will the next level of autonomous agents be able to do? This is going to be a huge development. AI will no longer be a passive agent that answers questions; it will be able to take actions in the real world.
When you combine that with what is happening in robotics – buckle up, this is going to be an interesting year.
And that’s our show for today.
Remind your friends that they can get us anywhere you get audio podcasts – Google, Apple, Spotify, wherever – and even on their smart speakers. And remind yourself that if you like the podcast, please give us a good review – it matters. And as I'm sure you know, there's a copy of the show notes at itworldcanada.com/podcasts
I’m your host, Jim Love. Have a Marvelous Monday.