Welcome to Hashtag Trending for Monday morning, November 20th. I’m your host, Jim Love.
Today we have a special edition covering the many stories about OpenAI that happened over the weekend.
In case you missed it, Sam Altman, head of OpenAI, was fired late Friday. And that's pretty much what we'll be covering in this special edition.
Just to put this in perspective, let’s highlight a couple of major events that occurred this month.
November 6th – OpenAI holds its "Developer Day," where Satya Nadella appears on stage with Sam Altman, affirming the key partnership with Microsoft. Altman goes on to reveal major developments at OpenAI, including the ability for anyone to develop their own applications, a new "app store" where these can be sold on a revenue-share basis, and a new API that would automate integration of AI-developed apps.
November 9th – Microsoft cuts employees off from ChatGPT due to "security concerns." A day later, the order is countermanded and Microsoft employees are back using ChatGPT.
November 15th – OpenAI announces that it is suspending new ChatGPT Plus sign-ups.
Friday, November 17th – Sam Altman is fired. According to reports, Altman was invited to a board meeting with thirty minutes' notice on Friday. Greg Brockman, then chair of the board, was also invited, but given only five minutes' notice.
The board meeting took place via Google Meet. OpenAI's six-person board consisted of OpenAI co-founder and President Greg Brockman, who was also chairman of the board; Ilya Sutskever, OpenAI's chief scientist; Adam D'Angelo; Tasha McCauley; Helen Toner; and Altman himself.
Reportedly, at that meeting, Ilya Sutskever, OpenAI's chief scientist, told Altman he was fired.
At the same meeting Greg Brockman was also ousted from his position as chairman of the board, but told that he could stay on in his current job. Brockman resigned in protest.
Within an hour, OpenAI posted a blog entitled "OpenAI announces leadership transition," which stated:
“Chief technology officer Mira Murati appointed interim CEO to lead OpenAI;
Sam Altman departs the company.”
But it had a rather shocking paragraph for a corporate announcement, which stated:
“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
What happened?
There are lots of rumours and much speculation. Altman and Brockman could not provide much in the way of answers. According to a tweet by Brockman, “Sam and I were shocked” and “still trying to figure out what happened.”
For the next 24 hours, everyone was trying to figure out what happened.
It wasn’t a “lie”
Despite the terse statement from the board that said Altman was not “candid” with the board, no evidence emerged that he had actually lied to the board or misled them. And there are enough leaks at OpenAI that if there had been something factual, it should have leaked.
AGI is achieved? A "safetyist" coup?
There is general agreement that the "coup" that ousted Altman was led by Ilya Sutskever, the chief scientist.
So one rumour that began circulating was that AGI (Artificial General Intelligence) had been achieved and that Sutskever panicked.
What is AGI? Altman defined it as “the equivalent of a median human that you could hire as a co-worker.” And yes, Altman has hinted that OpenAI had either achieved this or was much further along than anyone might think.
On September 18th, Altman put out a post on Reddit that stated that AGI had been achieved internally. A week later he edited that post to say it was a “meme” and that he certainly wouldn’t announce AGI in a Reddit post.
There were additional signs that OpenAI was much further ahead in the development of AGI than was officially reported.
The company had recently updated its core values. According to an article in Semafor:
OpenAI’s careers page previously listed six core values for its employees, according to a September 25 screenshot from the Internet Archive. They were Audacious, Thoughtful, Unpretentious, Impact-driven, Collaborative, and Growth-oriented.
The same page now lists five values, with “AGI focus” being the first. “Anything that doesn’t help with that is out of scope,” the website reads.
Other indicators that the company had advanced in AGI development? On October 20th, the company filed trademark applications for GPT-6 and GPT-7 – although it had only recently filed for a trademark on GPT-5, which has not even launched.
So was this a "safetyist" coup?
The Information reported that there were internal disagreements over “whether the company was developing artificial intelligence safely enough.” It also claimed that there had been an “all hands” meeting to discuss safety just prior to the Altman firing.
Stop Altman from revealing AGI?
It was clear that Altman seemed on the verge of reporting the development of AGI and that there was some backlash to this. He did have to back down on his Reddit post.
There were also indications that the company was trying to manage him in interviews. In a recent Wall Street Journal interview, Altman was asked about upcoming developments, and CTO Mira Murati jumped in to state clearly that they "wouldn't be talking about that," just as Altman appeared ready to answer.
But any “training wheels” placed on Altman wouldn’t last. Altman has little to lose. As he pointed out, “If I start going off, the OpenAI board should go after me for the value of my shares.”
The only problem with this is that Altman doesn’t have any shares or equity in the company.
Was the board in fear that Altman was unmanageable?
A data leak?
Could there have been a major leak of data? It has happened before. A month ago there was the story that information could be leaked from prompts.
Then there was the mysterious shutdown of access from Microsoft on November 9th. With the scrutiny that OpenAI is under, and with a history of company execs now being charged in the US for security breaches, could there have been some kind of security issue?
While this might sound plausible, when Altman was fired, the head of risk, Aleksander Madry, was one of the first to resign in support of Altman. If there had been such concerns, if this had truly been what one blogger called a "safetyist coup," why would the person in charge of risk resign in protest?
Altman’s reaction
For someone who had been ousted by his board, Altman was characteristically gracious in his response, choosing to focus on the positive outpouring of support that he received.
But one YouTube blogger claimed that Altman might have been trolling chief scientist Ilya Sutskever in his tweet, noting that the first letters of the opening line, "i love you all," spell out "Ilya."
But although Altman was being most gracious, it was clear that he was not going to give up on this project that had taken up 8 years of his life. The only question was, “what would his new venture be?”
The reaction was swift
The reaction to Altman’s leaving was swift and severe.
Officially, Microsoft's reaction to Altman's firing expressed continued support for OpenAI. But Windows Central reported that Satya Nadella was furious with the OpenAI board after being "blindsided" by the firing of Altman. Microsoft has invested more than 10 billion dollars in OpenAI, and that partnership is a huge part of its product strategy going forward.
Thrive Capital, which is leading a deal that would value the company at more than 80 billion dollars – triple its valuation of only a few months ago – let it be known that Altman's firing could jeopardize the deal.
Clearly, both Microsoft and Thrive would be worried about a "brain drain" following Altman's departure, and with good reason. In the first 24 hours, in addition to Greg Brockman, the former board chair, three key individuals resigned immediately:
- Jakub Pachocki, Director of Research
- Aleksander Madry, Head of Risk
- Szymon Sidor, open-source baseline researcher
As the weekend went on, it was clear that this was just the start. The New York Times reported that Brad Lightcap, OpenAI's COO, told employees on Saturday that the company had been talking with the board to "better understand the reason and the process behind the decisions." According to these reports, he also confirmed that there was no "financial malfeasance."
By Sunday, it was being reported that a large group of employees had set a deadline of 5 pm Sunday: if Altman were not reinstated, mass resignations would follow.
The board caves in?
By 3 pm on Sunday, the OpenAI board had reversed itself. Chief strategy officer Jason Kwon was quoted in The Information as saying, "We are still working towards a resolution and we remain optimistic. By resolution, we mean bringing back Sam, Greg, Jakub, Szymon, Aleksander and other colleagues (sorry if I missed you!) and remaining the place where people who want to work on AGI research, safety, products and policy can do their best work."
Will Altman return or start his own venture?
As we went to air on Sunday evening, it was not clear whether Altman would return. If he does, it will undoubtedly be on his terms. Will the board be forced to resign and a new board be appointed? Will the structure of the company have to change?
While tweets between interim CEO Mira Murati and Altman seem to confirm rumours that she had only learned about his firing the night before and had no part in it, there is a huge question about what will happen to Ilya Sutskever, the chief scientist who led the coup against Altman.
While anyone with business experience would think that Sutskever’s ousting was a slam dunk, nothing is predictable in this incredible story.
So that’s the story we had as we went to air on Sunday evening.
We’ll update this as it evolves on Monday and beyond.
Hashtag Trending goes to air 5 days a week with a daily news program and a special weekend edition interview show.
The show notes including links are posted at itworldcanada.com/podcasts.
I’m your host Jim Love, have a Marvelous Monday.