From InfoWorld: First came Amazon, offering GPU-powered instances in its cloud back in 2010. Then, more recently, Microsoft Azure and IBM SoftLayer each provided their versions of the same, albeit with different pricing structures and instance types.
Starting next year, Google will offer GPU instances for both Google Compute Engine and Google Cloud Machine Learning users, with GPU profiles that complement both high-end number-crunching and more modest remote workstation computation loads.
Google’s plan to stand apart from the competition is to be more granular. Amazon’s machine-learning-oriented GPU instances are rented by the hour and come only in fixed instance types. Google, however, plans to let users “attach up to 8 GPU dies to any non-shared-core machine,” regardless of instance type.
Even more critical, Google’s GPU pricing will follow its existing model: billed by the minute, the same as Google’s VMs. This isn’t about consistency alone; it also reflects how GPU-powered machine learning is actually used. If a machine learning application needs GPUs only during training, it makes sense to be able to toggle the GPUs off when they’re not needed instead of changing instance types.
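A back-of-the-envelope sketch of why billing granularity matters. All rates and durations below are invented for illustration; they are not Google’s or Amazon’s actual prices:

```python
import math

def per_minute_cost(minutes, rate_per_hour):
    """Minute-level billing: pay only for the minutes actually used."""
    return minutes * rate_per_hour / 60

def per_hour_cost(minutes, rate_per_hour):
    """Hourly billing: usage is rounded up to whole hours."""
    return math.ceil(minutes / 60) * rate_per_hour

rate = 0.70            # assumed $/hour for one GPU die (hypothetical)
training_minutes = 95  # a training job that runs just past the hour mark

print(per_minute_cost(training_minutes, rate))  # ~ $1.11
print(per_hour_cost(training_minutes, rate))    # $1.40, a second full hour billed
```

The gap widens for workloads that alternate between short GPU-heavy training bursts and long GPU-idle stretches, which is exactly the usage pattern the per-minute model rewards.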
“Whether you need one or dozens of instances,” says Google, “you only pay for what you use.”