Google has announced that it is cutting the price of Nvidia's Tesla GPUs on its Compute Engine by up to 36 percent. In U.S. regions, the somewhat older K80 GPUs will now cost $0.45 per hour, while the newer and more powerful P100 machines will cost $1.46 per hour (all with per-second billing).
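For a rough sense of what these rates mean with per-second billing, here is a minimal sketch. The hourly prices are the U.S.-region figures quoted above; the workload durations are made-up examples.

```python
# Rough cost estimate for per-second GPU billing at the quoted U.S.-region rates.
# The rates come from the announcement above; the example durations are hypothetical.

K80_PER_HOUR = 0.45   # USD per K80 die per hour
P100_PER_HOUR = 1.46  # USD per P100 per hour

def gpu_cost(hourly_rate: float, seconds: int, gpus: int = 1) -> float:
    """Cost of running `gpus` GPUs for `seconds` seconds, billed per second."""
    return hourly_rate / 3600 * seconds * gpus

# A 90-minute training run on a single K80:
print(f"K80, 90 min:  ${gpu_cost(K80_PER_HOUR, 90 * 60):.2f}")

# A four-hour run on four P100s:
print(f"P100 x4, 4 h: ${gpu_cost(P100_PER_HOUR, 4 * 3600, gpus=4):.2f}")
```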
The company is also dropping the prices for preemptible local SSDs by almost 40 percent. “Preemptible local SSDs” refers to local SSDs attached to Google’s preemptible VMs. You can’t attach GPUs to preemptible instances, though, so this is a nice little bonus announcement — but it isn’t going to directly benefit GPU users.
As for the new GPU pricing, it's clear that Google is aiming these cuts at developers who want to run their own machine learning workloads on its cloud, though there are also a number of other applications, including physical simulations and molecular modeling, that benefit greatly from the thousands of cores now available on these GPUs. The P100, which is officially still in beta on the Google Cloud Platform, features 3,584 CUDA cores, for example.
Developers can attach up to four P100 or eight K80 dies to each instance. As with regular VMs, GPU usage is also eligible for sustained-use discounts, though most users probably don't keep their GPUs running for a full month.
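To see why the sustained-use discount matters mostly to always-on workloads, here is a sketch of a monthly bill. It assumes the P100 rate quoted above and Google's nominal 730-hour billing month; the 100/80/60/40 percent tier schedule mirrors the Compute Engine VM discount of the time, and whether GPU usage followed exactly these tiers is an assumption made here for illustration.

```python
# Sketch of how a tiered sustained-use discount could shape a monthly GPU bill.
# Tier schedule (each successive quarter of the month billed at a lower share
# of the base rate) is assumed to match the VM discount; treat it as illustrative.

P100_PER_HOUR = 1.46      # USD per P100 per hour (U.S. regions)
HOURS_IN_MONTH = 730      # Google's nominal billing month

# (fraction of month, share of base rate billed in that tier)
TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def monthly_cost(hourly_rate: float, hours_used: float) -> float:
    """Bill `hours_used` hours in one month, applying the tiered discount."""
    remaining = hours_used
    cost = 0.0
    for fraction, rate_share in TIERS:
        tier_hours = min(remaining, fraction * HOURS_IN_MONTH)
        cost += tier_hours * hourly_rate * rate_share
        remaining -= tier_hours
        if remaining <= 0:
            break
    return cost

full_month = monthly_cost(P100_PER_HOUR, HOURS_IN_MONTH)
print(f"P100, full month: ${full_month:.2f} "
      f"(vs ${P100_PER_HOUR * HOURS_IN_MONTH:.2f} undiscounted)")
print(f"P100, 200 hours:  ${monthly_cost(P100_PER_HOUR, 200):.2f}")
```

Under these assumptions, only an instance that runs for most of the month approaches the full discount; a GPU used a couple of hundred hours a month is billed at or near the base rate.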
It’s hard not to see this announcement in the light of AWS’s upcoming annual developer conference, which will take over most of Las Vegas’s hotel conference space next week. AWS is expected to make a number of AI and machine learning announcements, and chances are we’ll see some price cuts from AWS, too.
Read the source article at TechCrunch.