The success or failure of the European Organization for Nuclear Research’s (CERN) landmark Large Hadron Collider (LHC) project may be decided not underground, but in the cloud.
Located underground near Geneva on the French-Swiss border, the LHC is a 27-kilometre particle accelerator that scientists hope will hold the key to understanding the birth of the universe. CERN scientists plan to smash two particle beams together in an attempt to re-create the conditions just after the “Big Bang.”
But according to Markham, Ont.-based Platform Computing Corp., the project might never have gotten off the ground without the help of cloud (or grid) computing technology. Platform’s workload management software schedules and prioritizes the workloads of more than 20,000 Linux machines running a variety of applications on the project. The full LHC computing grid comprises a network of about 40,000 CPUs.
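Platform’s software itself is proprietary, but the core idea (a central queue that matches prioritized jobs to free CPU slots across a cluster) can be sketched in a few lines. The following is a simplified, hypothetical illustration in Python, not Platform’s actual product; the class, job names and slot counts are invented for the example.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                      # lower number = dispatched first
    name: str = field(compare=False)
    cpus: int = field(compare=False)

class WorkloadManager:
    """Toy scheduler: queue jobs by priority, dispatch to free CPU slots."""

    def __init__(self, total_cpus: int):
        self.free_cpus = total_cpus
        self.queue: list[Job] = []

    def submit(self, job: Job) -> None:
        heapq.heappush(self.queue, job)

    def dispatch(self) -> list[Job]:
        started = []
        # Start the highest-priority job while it fits in the free slots.
        # A production scheduler would also handle backfill, fair share,
        # preemption and node failures; this sketch omits all of that.
        while self.queue and self.queue[0].cpus <= self.free_cpus:
            job = heapq.heappop(self.queue)
            self.free_cpus -= job.cpus
            started.append(job)
        return started

grid = WorkloadManager(total_cpus=40_000)   # roughly the LHC grid's size
grid.submit(Job(2, "monte-carlo-simulation", 25_000))
grid.submit(Job(1, "event-reconstruction", 10_000))
for job in grid.dispatch():
    print(f"dispatching {job.name} on {job.cpus} CPUs")
```

Run as-is, the sketch dispatches the reconstruction job ahead of the larger simulation despite the later submission, which is the essential behaviour a workload manager provides at scale.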
Songnian Zhou, CEO of Platform, said that without its grid computing capabilities, CERN would be unable to distribute its workload across its thousands of servers, slowing the project down.
“Tens of thousands of scientists around the world are feeding off this data and doing all kinds of research themselves,” he said. “All of the data needs to be disseminated, but so does all the computing capabilities. Therefore, the cloud services, not just the data services, need to be accessible.”
With application requests coming into the cloud, Zhou said, both CERN’s compute and data processing clusters rely on Platform software to keep everything running smoothly. “The whole LHC experiment can be seen as a cloud for the high-energy physics community worldwide,” he added. “We’re helping to integrate all of the applications the scientists are using with the compute and storage resources they need.”
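As a hypothetical illustration of that integration layer (again a sketch, not CERN’s or Platform’s actual interface), a grid front end might do little more than match each incoming application request to a cluster with enough free capacity and the storage tier that holds its data. All cluster names, slot counts and tiers below are invented.

```python
# Hypothetical request router for a research grid's front end.
CLUSTERS = {"cern-batch": 12_000, "partner-site-a": 6_500}  # free CPU slots
STORAGE = {"raw": "tier0-tape", "derived": "tier1-disk"}    # dataset -> tier

def route(app: str, dataset: str, cpus_needed: int) -> dict:
    """Send the request to the cluster with the most free slots that fits."""
    fits = {c: free for c, free in CLUSTERS.items() if free >= cpus_needed}
    if not fits:
        raise RuntimeError("no cluster has enough free capacity")
    cluster = max(fits, key=fits.get)
    return {"app": app, "cluster": cluster, "storage": STORAGE[dataset]}

print(route("higgs-analysis", "raw", 4_000))
```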
But despite its role in the CERN project, cloud computing hasn’t yet lived up to its hype as the next big technology in IT. Even those in the grid computing business agree that many companies are still at the “kicking the tires” stage with the technology.
Reuven Cohen, founder and CTO of Enomaly, said his customers are primarily using grid computing for research and development projects, rather than production applications.
Kirill Sheynkman, head of start-up Elastra, said the early adopters of grid computing are Web 2.0 start-ups that want to get up and running quickly without heavy capital expenses, independent software vendors that want to offer their applications in a software-as-a-service model, and enterprises that have selected specific applications for the cloud, such as sales force automation or human resources.
“Equipment inside the corporate data centre isn’t going away anytime soon,” added Sheynkman. Companies remain reluctant to trust the cloud with their mission-critical applications for a variety of reasons, including concerns around licensing, compliance, interoperability, data privacy and security.
This is a trend that cloud computing proponents like Zhou hope will soon change, especially among Canadian enterprises.
“Even though the things they are doing are very specific, a lot of elements from this LHC experiment are highly general,” he said. “It can provide a model to all kinds of other industries and businesses to follow and I would like to see more adoption of this approach from Canadian enterprises.”
Zhou said that many governments and universities are also using cloud computing capabilities for their R&D projects. He also pointed to computer and chip vendors like AMD and Intel, which run internal clouds to help them design their various products.
But even Zhou admits that adoption for generic business applications, outside of R&D, won’t happen overnight. He said enterprises should first build up an internal cloud before off-loading some of their applications to external cloud providers.
“It’s a long evolution that’s going to happen among companies over the next 10 or 20 years, but it will happen,” he said.
–With files from Neal Weinberg, Network World (US)