From X-bit Labs: Graphics sub-systems with multiple graphics processing units (GPUs) have existed for more than a decade, and in recent years the leading GPU developers – ATI, the graphics business group of Advanced Micro Devices, and Nvidia Corp. – have resurrected graphics boards with two chips. So far, multi-GPU operation has relied mostly on drivers from ATI or Nvidia, but a recent OpenGL extension from ATI allows game developers to optimize their titles for multi-chip rendering.
There is currently no way for applications to use GPU resources efficiently in systems that contain more than one GPU. Hardware developers have provided driver-side methods that attempt to split an application's workload among the available GPUs – e.g. alternate frame rendering, split frame rendering or tiled rendering – but these have proven rather inefficient, since most applications were never written with such optimizations in mind.
The new “WGL_AMD_gpu_association” extension provides a mechanism for applications to explicitly and individually use the GPU resources on a given system. By providing this functionality, a driver allows applications to make appropriate decisions about where and when to distribute rendering tasks. In particular, when multiple GPUs are present, a context can be created for off-screen rendering that is associated with a specific GPU, allowing an application to achieve its own application-specific distribution of GPU utilization.
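To make the mechanism concrete, here is a minimal, hedged sketch of how an application might enumerate GPUs and create a per-GPU off-screen context with this extension. The entry-point names (wglGetGPUIDsAMD, wglCreateAssociatedContextAMD, wglMakeAssociatedContextCurrentAMD, wglDeleteAssociatedContextAMD) come from the extension specification; the helper function name and the fixed-size ID array are illustrative assumptions, and a real program would already have a current OpenGL context before loading these pointers.

```c
/* Sketch only: Windows-specific, assumes the WGL_AMD_gpu_association
 * extension is exposed by the driver. Entry points are taken from the
 * extension spec; error handling is minimal for brevity. */
#include <windows.h>
#include <GL/gl.h>

typedef UINT  (WINAPI *PFNWGLGETGPUIDSAMDPROC)(UINT maxCount, UINT *ids);
typedef HGLRC (WINAPI *PFNWGLCREATEASSOCIATEDCONTEXTAMDPROC)(UINT id);
typedef BOOL  (WINAPI *PFNWGLMAKEASSOCIATEDCONTEXTCURRENTAMDPROC)(HGLRC hglrc);
typedef BOOL  (WINAPI *PFNWGLDELETEASSOCIATEDCONTEXTAMDPROC)(HGLRC hglrc);

/* Hypothetical helper: render off-screen once on each available GPU. */
void render_offscreen_on_each_gpu(void)
{
    PFNWGLGETGPUIDSAMDPROC wglGetGPUIDsAMD =
        (PFNWGLGETGPUIDSAMDPROC)wglGetProcAddress("wglGetGPUIDsAMD");
    PFNWGLCREATEASSOCIATEDCONTEXTAMDPROC wglCreateAssociatedContextAMD =
        (PFNWGLCREATEASSOCIATEDCONTEXTAMDPROC)
            wglGetProcAddress("wglCreateAssociatedContextAMD");
    PFNWGLMAKEASSOCIATEDCONTEXTCURRENTAMDPROC wglMakeAssociatedContextCurrentAMD =
        (PFNWGLMAKEASSOCIATEDCONTEXTCURRENTAMDPROC)
            wglGetProcAddress("wglMakeAssociatedContextCurrentAMD");
    PFNWGLDELETEASSOCIATEDCONTEXTAMDPROC wglDeleteAssociatedContextAMD =
        (PFNWGLDELETEASSOCIATEDCONTEXTAMDPROC)
            wglGetProcAddress("wglDeleteAssociatedContextAMD");

    if (!wglGetGPUIDsAMD || !wglCreateAssociatedContextAMD ||
        !wglMakeAssociatedContextCurrentAMD || !wglDeleteAssociatedContextAMD)
        return;  /* extension not supported by this driver */

    UINT ids[8];
    UINT count = wglGetGPUIDsAMD(8, ids);  /* enumerate GPU IDs */

    for (UINT i = 0; i < count && i < 8; ++i) {
        /* Create an off-screen context tied to one specific GPU. */
        HGLRC ctx = wglCreateAssociatedContextAMD(ids[i]);
        if (!ctx)
            continue;
        if (wglMakeAssociatedContextCurrentAMD(ctx)) {
            /* OpenGL commands issued here execute on GPU ids[i];
             * typically one renders into a framebuffer object. */
        }
        wglMakeAssociatedContextCurrentAMD(NULL);
        wglDeleteAssociatedContextAMD(ctx);
    }
}
```

The key design point is that GPU selection happens at context-creation time, so an application can keep one associated context per GPU alive and dispatch independent rendering work to each of them.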
An interesting point that the Geeks3D.com web-site noted is that Nvidia offers a similar OpenGL extension (WGL_NV_gpu_affinity): an application can bind an OpenGL render context to a specific GPU when several graphics chips are installed. However, Nvidia only offers this extension for its Quadro professional graphics cards, whereas ATI provides its extension both for Fire-series professional cards and for Radeon-series consumer graphics boards.
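For comparison, Nvidia's approach works through a GPU-affinity device context rather than a GPU ID. The sketch below uses the entry points from the WGL_NV_gpu_affinity specification (wglEnumGpusNV, wglCreateAffinityDCNV, wglDeleteDCNV); the HGPUNV typedef and the helper name are illustrative assumptions (wglext.h normally declares the real handle type), and a complete program would also set a pixel format on the affinity DC before creating the context.

```c
/* Sketch only: Windows-specific, requires an Nvidia Quadro driver that
 * exposes WGL_NV_gpu_affinity. Entry points per the extension spec. */
#include <windows.h>
#include <GL/gl.h>

typedef HANDLE HGPUNV;  /* assumption: normally declared in wglext.h */

typedef BOOL (WINAPI *PFNWGLENUMGPUSNVPROC)(UINT iGpuIndex, HGPUNV *phGpu);
typedef HDC  (WINAPI *PFNWGLCREATEAFFINITYDCNVPROC)(const HGPUNV *phGpuList);
typedef BOOL (WINAPI *PFNWGLDELETEDCNVPROC)(HDC hdc);

/* Hypothetical helper: create a render context bound to the first GPU. */
void bind_context_to_first_gpu(void)
{
    PFNWGLENUMGPUSNVPROC wglEnumGpusNV =
        (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
    PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV =
        (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");
    PFNWGLDELETEDCNVPROC wglDeleteDCNV =
        (PFNWGLDELETEDCNVPROC)wglGetProcAddress("wglDeleteDCNV");

    if (!wglEnumGpusNV || !wglCreateAffinityDCNV || !wglDeleteDCNV)
        return;  /* extension not supported (consumer GeForce drivers) */

    HGPUNV gpus[2] = { 0 };          /* NULL-terminated GPU list */
    if (!wglEnumGpusNV(0, &gpus[0])) /* index 0 = first GPU */
        return;

    HDC affinityDC = wglCreateAffinityDCNV(gpus);
    if (!affinityDC)
        return;

    /* A real program would choose/set a pixel format on affinityDC here. */
    HGLRC ctx = wglCreateContext(affinityDC);
    if (ctx) {
        /* Rendering through ctx is restricted to the listed GPU. */
        wglDeleteContext(ctx);
    }
    wglDeleteDCNV(affinityDC);
}
```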
View: Article @ Source Site