FRAMINGHAM, MASS. AND DALLAS, TEXAS – Two years ago, Shell Oil’s IT subsidiary found itself in a somewhat ironic situation: it was facing staggering energy costs.
Across its three main worldwide data centers and 400 regional data centers, the company saw its power consumption grow by 3.6 megawatts over a nine-month period.
With local, state and international regulations, power and data center capacity aren’t easy to come by, said Kayoor Gajarawala, general manager of hosting, storage and enterprise service delivery for Shell Information Technology.
“We think cloud can help with that,” he told attendees at the SNW conference here. (Computerworld is a sponsor of the conference.)
Even more important than reducing energy costs, the company, which has 150,000 employees in 90 countries, needed to be more agile in deploying IT services and planning for user demand. In 2010, Shell began to build cloud infrastructure using Amazon’s Virtual Private Cloud (Amazon VPC).
Using Amazon VPC, Shell was able to provision a private, isolated section of the Amazon Web Services (AWS) cloud where it could launch resources in a virtual network.
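Conceptually, a VPC reserves a private IP address block that the customer subdivides into subnets for its resources. A minimal sketch of that address-space planning, using Python's standard `ipaddress` module (the CIDR ranges here are illustrative, not Shell's actual configuration):

```python
import ipaddress

# A VPC is defined by a private CIDR block; resources launch into
# subnets carved out of it. These ranges are illustrative only.
vpc_block = ipaddress.ip_network("10.0.0.0/16")

# Split the VPC range into /24 subnets, e.g. one per availability zone.
subnets = list(vpc_block.subnets(new_prefix=24))

print(len(subnets))                      # 256 subnets available
print(subnets[0])                        # 10.0.0.0/24
print(subnets[0].subnet_of(vpc_block))   # True
```

Because the block is private (RFC 1918 space), addresses inside the VPC are isolated from the public Internet unless the customer explicitly exposes them.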
Part of the challenge that Shell was and still is facing has to do with new seismic sensors that it recently created with Hewlett-Packard. The super-sensitive sensors allow the company to find oil in wells formerly thought to have run dry or in places where previous exploration indicated there was no oil.
The new sensors create massive amounts of geological data.
“The first test we did of these seismic sensors in the field, we generated a data set with a one petabyte file size,” Gajarawala said. “And we’d like to deploy these seismic sensors to approximately 10,000 wells.”
The company has 46 petabytes of data on disk subsystems.
In the next 10 years, Shell’s IT shop has to figure out how to drive costs down, manage the giant file sizes effectively and still make it profitable for the company to deploy its new sensors.
In 2011, Shell’s first big rollout of its cloud service was aimed at application developers. The company used software testing as a service for its pre-production systems, as well as its development and test environment. The payback was immediate.
“Between 200 and 300 project teams could be up and running in a day, versus weeks before,” Gajarawala said.
When Amazon experienced a major cloud outage in April 2011, Shell wasn’t affected because its service was spread across multiple geographic zones. Even so, the outage hurt the cloud’s image within Shell.
“That caused a lot of people to question why we were going to cloud computing,” he said. “After that slight dip in credibility … things started to move back up again.”
Shell is now piloting Hadoop in its Amazon cloud for big data analytics. The company is also focused on authentication techniques to ensure data security and third-party access to its cloud.
“There are huge challenges ahead with that, given that there’s not a market standard for that capability,” Gajarawala said. “How do we connect back to the enterprise and make sure we don’t allow malcontents into our environment, and how do we keep our business-critical data safe?”
Gajarawala said it took his team a solid year to get a sign-off from Shell’s Information Rights Management group. They’re now allowed to add data up to the “confidential” level into the public cloud.
Before setting up a cloud service, Gajarawala said the IT team needed to understand how the business used its services. The team had to develop a strong business engagement model, which included identifying the most critical applications and matching the right workloads to the right cloud capabilities.
Shell’s scientists, especially the geophysicists and drilling engineers, frequently use cloud computing to run models. They provision compute capacity themselves, run their models and then release the capacity, getting charged only for what they used.
“It has allowed them to experiment quickly, effectively and cheaply, helping them manage their R&D budgets effectively,” Gajarawala said.
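The pay-per-use model Gajarawala describes boils down to billing for capacity only while it is held. A minimal sketch of that cost arithmetic (the hourly rate and run length below are illustrative placeholders, not actual AWS pricing):

```python
def on_demand_cost(hours_used: float, hourly_rate: float) -> float:
    """Cost of self-provisioned compute: pay only for hours actually used."""
    return round(hours_used * hourly_rate, 2)

# A geophysicist provisions a node billed at a hypothetical $0.80/hour,
# runs a model for 6 hours, then releases the capacity; no charge
# accrues after termination.
print(on_demand_cost(6, 0.80))  # 4.8
```

Compare that with owning the hardware outright, where the capital cost is sunk whether the node runs one model a month or a hundred; this difference is what lets researchers experiment within a fixed R&D budget.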
Shell’s IT team also developed a “Center of Excellence” model to enable business-led conversations with IT on how to exploit cloud services for the best business results. The Center of Excellence is there to help users apply cloud services in the best ways possible, Gajarawala said.
“Sometimes we’ll get a phone call from a geophysicist saying ‘I’d like to run this computer model. What’s the best way to do that?’ The cloud Center of Excellence provides them with the guidance to do that,” he said.
Shell is still a long way from where it would like to be in its use of cloud services.
Today, the company’s in-house server infrastructure is 60% virtual. The other 40% remains on physical servers because of legacy applications that cannot run on virtual infrastructure. Gajarawala said that as those applications are upgraded, they too will move to a virtual server environment.
Shell’s end goal is to eventually create a hybrid cloud model, where some applications run in a public cloud and others in an on-premises, private cloud.
At this point, Gajarawala sees the cloud as an “additional capability” rather than a large-scale replacement for traditional IT.