Mark Zuckerberg, CEO of Meta, has stirred up controversy with his commitment to developing an Artificial General Intelligence (AGI) system. He has also suggested that this system, potentially on par with human intelligence, might be made open source. This has raised alarm bells among experts and academics alike.
Zuckerberg frames this next-generation technology as a key driver of tech services, even though the concept of AGI remains largely theoretical. Even OpenAI’s Sam Altman, who has talked about huge advancements in the upcoming version 5 of ChatGPT, is not yet ready to announce that AGI has been achieved.
AGI is an AI system capable of performing a wide range of tasks at or beyond human-level intelligence. The prospect of achieving such a breakthrough, and even more so of making it publicly accessible, has sparked fears about its potential to escape human control and pose significant threats.
Dame Wendy Hall, a prominent computer science professor and member of the UN’s AI advisory body, labeled the idea of open source AGI as “really very scary” and criticized Zuckerberg’s approach as irresponsible. She emphasized the urgent need for regulatory frameworks to ensure public safety in the face of such powerful technologies.
Meta’s previous decision to open source its Llama 2 AI model was also met with criticism, drawing parallels to “giving people a template to build a nuclear bomb.” The debate extends beyond Meta, with other tech giants like OpenAI and Google’s DeepMind also pursuing AGI, each with their own definitions and timelines.
Sources include: The Guardian