Apple discussed AI and large language models at a recent internal event, revealing its experiments with language-generating artificial intelligence (AI) along the lines of ChatGPT.
According to The New York Times, many teams at the company, including those working on Siri, are regularly testing “language-generating concepts.” Furthermore, according to 9to5Mac, Apple has included a new framework for “Siri Natural Language Generation” in tvOS 16.4.
The move responds to the limitations of Siri, Alexa, and Google Assistant in understanding many aspects of natural language. Meanwhile, OpenAI has unveiled GPT-4, the next-generation AI engine that powers ChatGPT and accepts both image and text input, and Google has announced Bard, a competing AI service. Apple, it appears, does not want to be left out.
In an interview with The New York Times, former Apple engineer John Burke, who worked on Siri, said the assistant’s slow evolution was due to “clunky code” that made it difficult to push even basic feature updates. Burke also said that Siri relied on a large database containing a vast collection of words, so whenever engineers needed to add features or phrases, the entire database had to be rebuilt, a process that reportedly took up to six weeks.
Apple is currently testing new language-generation AI capabilities for Siri under the code name “Bobcat.” The testing is limited in scope for now but could expand to more capabilities and devices, according to the sources.
The sources for this piece include an article in TechCrunch.