According to Sam Altman, CEO of OpenAI, future advances in artificial intelligence (AI) will not come primarily from ever-larger models trained on vast quantities of data. Instead, he proposes improving these models through "other methods."
Altman's viewpoint runs counter to the current trend in AI, in which models keep getting more massive, as illustrated by OpenAI's ChatGPT. In his view, too much emphasis is currently placed on parameter count; he argues that AI models will continue to improve even as their parameter counts shrink markedly.
GPT-2 contained 1.5 billion parameters, according to Wired, while GPT-3 raised that figure to 175 billion. However, OpenAI has not revealed the number of parameters in GPT-4, suggesting that size is no longer the primary goal for its large language models. Some reports indicate that training GPT-4 cost more than $100 million, though OpenAI has yet to confirm this figure.
Responding to an open letter signed by 100 technologists, Altman highlighted the need to build safety controls before releasing AI systems to the public. He also denied claims that OpenAI was training GPT-5.
The sources for this piece include an article from TechSpot.