Message boards : GPUs : Configurable limit for compute units
Send message Joined: 8 Jul 14 Posts: 3
My GPU has 9 compute units. If a work unit uses more than 33% of the GPU power (monitored with Process Explorer), my desktop starts stuttering. I have already blocked an app from Einstein@Home, and now a new POEM@Home app also uses more than 33% of the GPU. In the end I'll have no GPU apps left to run... It would be great to have an option to configure a limit on the compute units used.

OS: Windows 7 64-bit
GPU: Radeon HD 6750
Send message Joined: 29 Aug 05 Posts: 15631
You'll have to ask the GPU's manufacturer for an API that does that, then; at the moment all a GPGPU can do is run either fully on or fully off. There is no option available yet to partially use the computing resources.
Send message Joined: 8 Jul 14 Posts: 3
I searched for "opencl device partitioning" on Google and found an old thread here on the forum [1]. Wouldn't that solve my problem? clCreateSubDevices is part of the OpenCL 1.2 spec [2], which AMD claims to implement.

[1] http://boinc.berkeley.edu/dev/forum_thread.php?id=7174
[2] http://www.khronos.org/registry/cl/sdk/1.2/docs/man/xhtml/clCreateSubDevices.html
Send message Joined: 29 Aug 05 Posts: 15631
Answer from David Anderson: BOINC allows apps to use fractional GPUs, i.e. for more than one job to run on a GPU.
Send message Joined: 5 Oct 06 Posts: 5149
Answer from David Anderson:

I don't think David understands GPU programming. As you said yourself, there's no API for that. BOINC can schedule fractional GPUs, but once an application finds itself running on a GPU, all alone, it is free to expand to fill the space available: BOINC does not constrain the app to run in half a GPU, a third of a GPU, etc. This is different from the multithreaded CPU application case, where we have both:

<ncpus> x (to define the scheduling)
<cmdline>--nthreads y (to control the app behaviour while running)

David likewise said (at first) that he wouldn't let us have direct access to nthreads: it took a long time, but he was eventually persuaded to add it to the app_config.xml specification. Please keep prodding for an equivalent in the GPU case, even though it might need cooperation with the maintainers of CUDA, OpenCL etc.
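For readers unfamiliar with the two knobs mentioned above, they map onto app_config.xml roughly like this (element names per the BOINC client documentation, which uses <avg_ncpus> for the scheduling side; the app name, plan class, and thread count are placeholders):

```xml
<app_config>
  <app_version>
    <app_name>example_app</app_name>   <!-- placeholder -->
    <plan_class>mt</plan_class>        <!-- multithreaded plan class -->
    <avg_ncpus>4</avg_ncpus>           <!-- what the BOINC scheduler budgets -->
    <cmdline>--nthreads 4</cmdline>    <!-- what the app actually uses -->
  </app_version>
</app_config>
```

The point of the post is that no analogous second knob exists for GPU apps: BOINC can budget a fraction of a GPU, but there is nothing like --nthreads to make the app respect it.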
Send message Joined: 29 Aug 05 Posts: 15631
I don't think David understands GPU programming. As you said yourself, there's no API for that.

Well, there is the thing that bergi points out: using clCreateSubDevices one can divide the compute units up into groups for calculations. But the problem here, and I already said this in my email to the developers, is that it's OpenCL 1.2 compliant only. This means that all OpenCL 1.0 and 1.1 GPUs cannot do it. This means that ALL Nvidia GPUs cannot do it. And then you'll need to find out whether there is a similar ability for CUDA, as otherwise everyone with an Nvidia card will scream that the developers found it possible to program this only for newer AMD and Intel GPUs. Setting aside the fact that Nvidia decided in their own wisdom to stop developing OpenCL, of course. So it's probably more trouble at this time than it is worth.
Send message Joined: 8 Jul 14 Posts: 3
The problem could also be solved on another level. A configurable virtual OpenCL driver could return devices created by the clCreateSubDevices function. Like VirtualCL [1], but instead of merging devices, the driver would split them. I'm not sure if BOINC could be configured to use only this virtual driver. Is the <type> element of <exclude_gpu> the OpenCL platform?

[1] http://www.mosix.cs.huji.ac.il/txt_vcl.html
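For reference, <exclude_gpu> lives in the <options> section of cc_config.xml, and per the BOINC client configuration documentation its <type> element names the GPU vendor (NVIDIA, ATI, or intel_gpu) rather than an OpenCL platform string. A sketch with placeholder values:

```xml
<cc_config>
  <options>
    <exclude_gpu>
      <url>http://example-project.org/</url>  <!-- placeholder project URL -->
      <device_num>0</device_num>             <!-- which GPU to exclude -->
      <type>ATI</type>                       <!-- vendor type, not an OpenCL platform -->
      <app>example_app</app>                 <!-- placeholder; omit to exclude for all apps -->
    </exclude_gpu>
  </options>
</cc_config>
```

Since <device_num> refers to the coprocessor enumeration of the real client, it is unclear whether sub-devices exposed by a virtual driver would be addressable this way.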
Copyright © 2025 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.