Amazon Web Services (AWS) now lets users of its cloud computing platform take advantage of the extra processing power that graphics processing units (GPUs) can provide. The company hopes the move will attract high-performance computing applications to its service.
AWS has added a new Cluster GPU instance to the Elastic Compute Cloud (EC2) platform, which provides resizable computing capacity in the cloud.
Besides high-performance computing, the new instance is well suited to the demands of rendering and media processing applications, according to AWS. Like the company's other cloud services, it also eliminates the cost and complexity of buying, configuring and operating in-house compute clusters, Amazon said.
The “Cluster GPU Quadruple Extra Large Instance” comes with a pair of Nvidia Tesla M2050 “Fermi” GPUs, which each have 448 cores and 3 GB of RAM, and can together deliver over a trillion floating point operations per second. The instance specification also includes a pair of Intel Xeon X5570 quad-core processors, 22 GB of RAM and 1690 GB of local storage, according to an AWS blog post.
By default, each Amazon Web Services account can use up to eight GPU instances in a cluster, with the nodes communicating over 10 Gigabit Ethernet. For now, users who want larger clusters have to ask Amazon for permission. The default limit exists to help AWS gauge customer demand for the technology early on and is not a technical limitation, the company said. A similar default limit on its standard cluster instances has already been removed, according to the blog post.
To take full advantage of the GPUs, applications have to be written for CUDA, Nvidia's parallel computing architecture.
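To illustrate what CUDA compatibility involves, the sketch below shows a minimal CUDA program that adds two arrays on the GPU. The kernel and variable names here are purely illustrative and are not part of any AWS tooling.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Each thread adds one pair of elements; the GPU runs many threads in parallel.
__global__ void vector_add(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = a[i] + b[i];
    }
}

int main(void) {
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) buffers and copy the inputs over.
    float *d_a, *d_b, *d_out;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_out, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_out, n);

    // Copy the result back and spot-check one value.
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", h_out[0]);  // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_out);
    free(h_a); free(h_b); free(h_out);
    return 0;
}
```

Code structured this way, copying data to the device, launching many lightweight threads and copying results back, is how an application puts the 448 cores on each Tesla GPU to work.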
Users can either pay US$2.10 per hour on demand or reserve an instance, paying an upfront fee in exchange for a lower hourly rate: for example, $5,630 for a one-year term and then $0.74 per hour.
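At those published rates, reserving breaks even after roughly $5,630 / ($2.10 - $0.74), or about 4,140 hours of use, meaning the reserved option pays off for instances that run more or less continuously for around half the year or longer.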
The Cluster GPU instances are now available from Amazon’s Northern Virginia location and can run Linux.