Message boards : Questions and problems : Recommended BOINC 7.0.28 x64 can't detect available NV GPU
Joined: 9 Apr 06, Posts: 302
From BOINC's log:

2012-10-03 16:23:05 | | No config file found - using defaults
2012-10-03 16:23:08 | | Starting BOINC client version 7.0.28 for windows_x86_64
2012-10-03 16:23:08 | | log flags: file_xfer, sched_ops, task
2012-10-03 16:23:08 | | Libraries: libcurl/7.25.0 OpenSSL/1.0.1 zlib/1.2.6
2012-10-03 16:23:08 | | Running as a daemon
2012-10-03 16:23:08 | | Data directory: O:\BOINCdata
2012-10-03 16:23:08 | | Running under account boinc_master
2012-10-03 16:23:08 | | Processor: 1 AuthenticAMD AMD Athlon(tm) 64 Processor 3200+ [Family 15 Model 31 Stepping 0]
2012-10-03 16:23:08 | | Processor: 512.00 KB cache
2012-10-03 16:23:08 | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm 3dnowext 3dnow
2012-10-03 16:23:08 | | OS: Microsoft Windows Server 2003 "R2": Enterprise Server x64 Edition, Service Pack 2, (05.02.3790.00)
2012-10-03 16:23:08 | | Memory: 2.00 GB physical, 2.33 GB virtual
2012-10-03 16:23:08 | | Disk: 157.49 GB total, 2.86 GB free
2012-10-03 16:23:08 | | Local time is UTC +3 hours
2012-10-03 16:23:08 | | No usable GPUs found

From the SETI x41g CUDA app running standalone:

Can't set up shared mem: -1
Will run in standalone mode.
setiathome_CUDA: Found 1 CUDA device(s):
  Device 1: GeForce 9400 GT, 511 MiB, regsPerBlock 8192
    computeCap 1.1, multiProcs 2
    clockRate = 1400000
setiathome_CUDA: No device specified, determined to use CUDA device 1: GeForce 9400 GT
SETI@home using CUDA accelerated device GeForce 9400 GT
Priority of process raised successfully
Priority of worker thread raised successfully
Cuda Active: Plenty of total Global VRAM (>300MiB). All early cuFft plans postponed, to parallel with first chirp.
) _ _ _)_ o _ _ (__ (_( ) ) (_( (_ ( (_ ( not bad for a human... _)
Multibeam x41g Preview, Cuda 3.20
Detected setiathome_enhanced_v7 task. Autocorrelations enabled, size 128k elements.
Work Unit Info:
...............
WU true angle range is : 1.326684
VRAM: cudaMalloc((void**) &dev_cx_DataArray, 1048576x 8bytes = 8388608bytes, offs256=0, rtotal= 8388608bytes
VRAM: cudaMalloc((void**) &dev_cx_ChirpDataArray, 1179648x 8bytes = 9437184bytes, offs256=0, rtotal= 17825792bytes
VRAM: cudaMalloc((void**) &dev_sample_rate, 1048576x 8bytes = 8388608bytes, offs256=0, rtotal= 26214400bytes
VRAM: cudaMalloc((void**) &dev_flag, 1x 8bytes = 8bytes, offs256=0, rtotal= 26214408bytes
VRAM: cudaMalloc((void**) &dev_WorkData, 1179648x 8bytes = 9437184bytes, offs256=0, rtotal= 35651592bytes
VRAM: cudaMalloc((void**) &dev_PowerSpectrum, 1048576x 4bytes = 4194304bytes, offs256=0, rtotal= 39845896bytes
VRAM: cudaMalloc((void**) &dev_t_PowerSpectrum, 1048584x 4bytes = 1048608bytes, offs256=0, rtotal= 40894504bytes
VRAM: cudaMalloc((void**) &dev_GaussFitResults, 1048576x 16bytes = 16777216bytes, offs256=0, rtotal= 57671720bytes
VRAM: cudaMalloc((void**) &dev_PoT, 1572864x 4bytes = 6291456bytes, offs256=0, rtotal= 63963176bytes
VRAM: cudaMalloc((void**) &dev_PoTPrefixSum, 1572864x 4bytes = 6291456bytes, offs256=0, rtotal= 70254632bytes
VRAM: cudaMalloc((void**) &dev_NormMaxPower, 16384x 4bytes = 65536bytes, offs256=0, rtotal= 70320168bytes
VRAM: cudaMalloc((void**) &dev_flagged, 1048576x 4bytes = 4194304bytes, offs256=0, rtotal= 74514472bytes
VRAM: cudaMalloc((void**) &dev_outputposition, 1048576x 4bytes = 4194304bytes, offs256=0, rtotal= 78708776bytes
VRAM: cudaMalloc((void**) &dev_PowerSpectrumSumMax, 262144x 12bytes = 3145728bytes, offs256=0, rtotal= 81854504bytes
VRAM: cudaMallocArray( &dev_gauss_dof_lcgf_cache, 1x 8192bytes = 8192bytes, offs256=136, rtotal= 81862696bytes
VRAM: cudaMallocArray( &dev_null_dof_lcgf_cache, 1x 8192bytes = 8192bytes, offs256=200, rtotal= 81870888bytes
VRAM: cudaMalloc((void**) &dev_find_pulse_flag, 1x 8bytes = 8bytes, offs256=0, rtotal= 81870896bytes
VRAM: cudaMalloc((void**) &dev_t_funct_cache, 1966081x 4bytes = 7864324bytes, offs256=0, rtotal= 89735220bytes
re-using dev_GaussFitResults array for dev_AutoCorrIn, 4194304 bytes
re-using dev_GaussFitResults+524288x8 array for dev_AutoCorrOut, 4194304 bytes

As one can see, the app was able to detect and use the GPU. The question is: why can't BOINC?
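For reference, what the standalone app does here is nothing exotic: it simply enumerates devices through the stock CUDA runtime API. Below is a minimal sketch of that enumeration (my own illustration, not code taken from x41g or from the BOINC client) that prints roughly the same device summary as the app output above:

    // Minimal CUDA runtime enumeration sketch - illustration only,
    // not taken from the x41g or BOINC sources.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess || count == 0) {
            // Roughly the situation BOINC reports as "No usable GPUs found".
            std::printf("No CUDA devices found: %s\n", cudaGetErrorString(err));
            return 1;
        }
        std::printf("Found %d CUDA device(s):\n", count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("  Device %d: %s, %zu MiB, regsPerBlock %d\n"
                        "    computeCap %d.%d, multiProcs %d\n"
                        "    clockRate = %d\n",
                        i + 1, prop.name,
                        (size_t)(prop.totalGlobalMem / (1024 * 1024)),
                        prop.regsPerBlock, prop.major, prop.minor,
                        prop.multiProcessorCount, prop.clockRate);
        }
        return 0;
    }

Since this works on the box when run directly, whatever 7.0.28 does differently when it runs as a daemon under the boinc_master account is apparently what loses the device.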
Joined: 9 Apr 06, Posts: 302
I downgraded to BOINC 6.12.34 x64 in the same configuration, and what a miracle - BOINC recognised the GPU!

2012-10-03 17:01:58 | | Starting BOINC client version 6.12.34 for windows_x86_64
2012-10-03 17:01:58 | | Config: GUI RPC allowed from:
2012-10-03 17:01:58 | | Config: 192.168.0.1
2012-10-03 17:01:58 | | log flags: file_xfer, sched_ops, task
2012-10-03 17:01:58 | | Libraries: libcurl/7.21.6 OpenSSL/1.0.0d zlib/1.2.5
2012-10-03 17:01:58 | | Running as a daemon
2012-10-03 17:01:58 | | Data directory: O:\BOINCdata
2012-10-03 17:01:58 | | Running under account boinc_master
2012-10-03 17:01:58 | | Processor: 1 AuthenticAMD AMD Athlon(tm) 64 Processor 3200+ [Family 15 Model 31 Stepping 0]
2012-10-03 17:01:58 | | Processor: 512.00 KB cache
2012-10-03 17:01:58 | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm 3dnowext 3dnow
2012-10-03 17:01:58 | | OS: Microsoft Windows Server 2003 "R2": Enterprise Server x64 Edition, Service Pack 2, (05.02.3790.00)
2012-10-03 17:01:58 | | Memory: 2.00 GB physical, 2.33 GB virtual
2012-10-03 17:01:58 | | Disk: 157.49 GB total, 2.82 GB free
2012-10-03 17:01:58 | | Local time is UTC +3 hours
2012-10-03 17:01:58 | | NVIDIA GPU 0: GeForce 9400 GT (driver version 30623, CUDA version 5000, compute capability 1.1, 512MB, 45 GFLOPS peak)

So, how did BOINC 7.0.28 manage to become the recommended version if it has such a show-stopping bug?
Joined: 5 Oct 06, Posts: 5149
Joined: 18 Jun 10, Posts: 73
The problem is that when they decided to "break GPU detection when installed as a service on Windows XP as well", they didn't implement any way for the user to force this detection (in cc_config.xml), e.g.:

<force_detect_gpus>1</force_detect_gpus>
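For illustration, the whole cc_config.xml I have in mind would look something like the sketch below. Note that <force_detect_gpus> is only the option proposed here (it does not exist in the client), while <coproc_debug> is an existing log flag that at least makes the detection attempt visible in the event log. The file lives in the BOINC data directory (O:\BOINCdata on the host above) and is read at client startup.

    <!-- Sketch only: <force_detect_gpus> is the option proposed above and is
         NOT implemented in any BOINC client. <coproc_debug> is a real log
         flag that traces coprocessor (GPU) detection. -->
    <cc_config>
        <log_flags>
            <coproc_debug>1</coproc_debug>
        </log_flags>
        <options>
            <force_detect_gpus>1</force_detect_gpus>
        </options>
    </cc_config>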
Joined: 9 Apr 06, Posts: 302
That's what I'm talking about. There is a marvelous Robert Sheckley story, "A Ticket to Tranai" (or something like that; I read it in Russian translation), where developers never improve things; they make things (robots in particular) more and more error-prone, to push users into buying newer and newer models... Sad to see BOINC going down this path too.
Joined: 9 Apr 06, Posts: 302
Please read message 40316. Well, a particular driver set from a particular vendor is broken. We have situations like that all the time. But it's not an ATI host, it's an NV host. And I'm fully aware that BOINC has no uniform GPU detection; NV and ATI use their own routes. Why was NV detection broken? Why was ATI detection broken completely? Is BOINC not capable of detecting the driver version before going any further?

EDIT: maybe it's worth mentioning another issue I had. When I downgraded from BOINC 7.0.28 to BOINC 6.12.34, BOINC Manager refused to start at all. It said some executable (with "ctl" in its name) couldn't start or couldn't be found. So I decided to do a server reboot (just think how that sounds - a SERVER reboot!). Well, I received a BSoD on the next boot. On the reboot after that the OS booted OK, BOINC Manager started without issues, and the GPU was detected. Ultimately I'm able to start the SETI CUDA app testing that all this was started for... Because I'm not going to reproduce this procedure, I didn't mention it in the initial post. But such a downgrade route is far from perfect... A server reboot was required, twice!