Twitter source code appears on GitHub, Google says it does not use private Gmail data to train its AI, and Microsoft claims its AI is showing early signs of artificial general intelligence.
These stories and more on Hashtag Trending for Tuesday, March 28th.
I’m your guest host for the week, James Roy – here’s today’s top tech news stories.
Twitter said that parts of its source code were leaked online, according to court filings, first reported by the New York Times on Sunday.
Twitter filed the case on Friday with the U.S. District Court for the Northern District of California against GitHub, the software collaboration platform on which the code reportedly appeared.
The leak, according to Twitter, included “proprietary source code for Twitter’s platform and internal tools.”
GitHub took down the repository after Twitter filed a Digital Millennium Copyright Act (DMCA) takedown notice, but it has not provided the information Twitter is seeking in its attempt to hunt down the leaker.
Specifically, Twitter asked GitHub to provide the code submitter’s contact information, IP addresses, any session information, and other identifying details.
The GitHub user who posted the Twitter source code has the username “FreeSpeechEnthusiast,” possibly a reference to Twitter owner Elon Musk casting himself as a protector of free speech.
According to an article by Ars Technica, the leaker might have been one of the roughly 5,500 employees who left Twitter via layoff, firing, or resignation after Musk bought the company.
While Musk said in March that “all code used to recommend tweets” would be made open source by March 31, the leaked code may be much more sensitive.
The code, according to NYT’s sources, could include security vulnerabilities that could give hackers or other motivated parties the means to extract user data or take down the site.
Source: Ars Technica
Google said it did not train AI chatbot Bard from private Gmail accounts, a spokesperson confirmed to news site The Register.
The question surfaced when an AI researcher quizzed Bard on where its training data came from and was surprised when it mentioned internal data from Gmail.
A former Google employee, Blake Lemoine – who was fired for leaking company secrets – claimed it was, indeed, trained on text from Gmail.
Google said in its statement to the Register, “Like all LLMs, Bard can sometimes generate responses that contain inaccurate or misleading information while presenting it confidently and convincingly. This is an example of that. We do not use personal data from your Gmail or other private apps and services to improve Bard.”
Google launched Bard last week, and invited enthusiasts from the US and UK to join the waitlist to talk to the chatbot.
Bard has not yet demonstrated the unbridled and bizarre behaviors seen in Microsoft Bing’s earlier tests, but both chatbots share the same tendency to respond inappropriately when prompted, or to make up false information.
Source: The Register
France has decided to ban all recreational apps, including TikTok, from all government devices.
Minister of Transformation and Public Service Stanislas Guerini issued the policy, saying that no recreational apps have sufficiently robust security to be deployed on government-owned devices.
Guerini stated that some exceptions will be allowed, but only for apps needed for official communication.
This move comes as nations globally continue to clamp down on TikTok over its effects on children’s mental health, its loose privacy policies, and its alleged links to the Chinese Communist Party.
In fact, TikTok CEO Shou Zi Chew was grilled last week by the U.S. House Committee on Energy and Commerce. It was a trainwreck, with several members saying the app should be banned outright – not just from government devices.
On Sunday House speaker Kevin McCarthy tweeted that a law would soon be tabled to sanction TikTok. It remains unclear if that bill will force a sale of TikTok’s US operations, or ban the app.
But China’s government pointed out that TikTok could not do a deal without its approval. And with China’s Foreign Ministry accusing the U.S. of conducting a “xenophobic witch hunt,” it’s unlikely that Beijing would make it easy for a U.S. entity to take control of the app.
Source: The Register
Microsoft has unveiled a new, faster and redesigned Microsoft Teams, available now, in preview, to Windows users.
The company describes the new Teams client as being twice as fast, consuming 50 per cent less memory and up to 70 per cent less disk space compared to the current app.
Microsoft says that the new app will also launch three times faster, allowing users to switch between chats and channels up to 1.7 times faster than in classic Teams.
The new Microsoft Teams also bundles support for Copilot, Microsoft’s AI-powered assistant that will help users prepare before joining a meeting and help them answer questions in real time while chatting with their colleagues.
The enhanced Teams will become generally available starting June 2023.
Source: Bleeping Computer
Microsoft claims its AI, backed by OpenAI GPT large language models, is showing early signs of artificial general intelligence (AGI), meaning that its capabilities are at or above human level.
Microsoft made these claims in a paper released on the arXiv preprint server titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.”
The puzzling conclusion is at complete odds with what OpenAI’s CEO Sam Altman has been saying regarding GPT-4. For example, he said the model was “still flawed, still limited.”
Even the bulk of the paper itself is dedicated to listing the limitations and biases of the large language model.
This raises the question of how close GPT-4 really is to AGI, and whether AGI is instead being used as clickbait.
As a matter of fact, the researchers write in the paper’s abstract that GPT-4 is strikingly close to human-level performance, but then immediately contradict that statement.
They write, “Our claim that GPT-4 represents progress towards AGI does not mean that it is perfect at what it does, or that it comes close to being able to do anything that a human can do or that it has inner motivation and goals, which are key aspects in some definitions of AGI.”
Every time the researchers describe GPT-4’s impressive capabilities, the description is immediately followed by serious caveats.
Weeks before GPT-4’s release, Altman said, “The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from. People are begging to be disappointed and they will be. The hype is just like… We don’t have an actual AGI and that’s sort of what’s expected of us.”
Microsoft also clarified, in a statement to news site Motherboard, that it is not focused on trying to achieve AGI. Rather, it wants its AI technologies to assist humans with cognitive work, not replace them.
But it is clear that the “sparks” that researchers claimed to have found are largely overshadowed by the AI’s limitations.
Source: Vice
That’s the top tech news for today. Hashtag Trending goes to air five days a week with the daily tech news, and we have a special weekend edition where we do an in-depth interview with an expert on some tech development that is making the news.
Follow us on Apple, Google, Spotify or wherever you get your podcasts. Links to all the stories we’ve covered can be found in the text edition of this podcast at itworldcanada.com/podcasts.
We love your comments – good or bad. You can find ITWC CIO Jim Love on LinkedIn, Twitter, or on Mastodon as @therealjimlove on our Mastodon site technews.social. Or just leave a comment under the text version at itworldcanada.com/podcasts.
I’m your host, James Roy, have a terrific Tuesday!