Message-ID: <CAAufJG5pgZ6DPHBb7xCtOr7EpVeWHnYYScV3YpiakA9ZSgsi9A@mail.gmail.com>
Date: Sun, 20 May 2012 15:18:03 -0300
From: Claudio André <claudioandre.br@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: OpenCL vs. CUDA CPU usage

2012/5/16 Solar Designer <solar@...nwall.com>

> This is just an observation: somehow with CUDA we're fully wasting an
> entire CPU core per GPU, whereas with OpenCL we're only using some CPU
> time on one core:
>

Something like this: "Increased CPU usage with latest drivers starting from 270.xx",
from http://forums.nvidia.com/index.php?showtopic=215813 ?

Not a new complaint: http://forums.nvidia.com/index.php?showtopic=77003


> I wonder if there's a tunable setting for CUDA to make it wait passively
> (so that the CPU would stay cooler or we could use this CPU core by
> another instance of John).
>

Seems it exists (for CUDA, not OpenCL on NVIDIA). The device flags are
cudaDeviceScheduleAuto, cudaDeviceScheduleSpin and cudaDeviceScheduleYield.
The Yield flag instructs CUDA to yield its thread when waiting for results
from the device. This can increase latency when waiting for the device, but
can increase the performance of CPU threads performing work in parallel
with the device.
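
A minimal sketch of how the flag could be set (assuming the plain CUDA
runtime API, not tested with john; the call has to happen before anything
else initializes the device):

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
	/* Ask the runtime to yield the host thread instead of spinning
	   while it waits for the GPU.  Must be called before any other
	   CUDA call creates the context. */
	cudaError_t err = cudaSetDeviceFlags(cudaDeviceScheduleYield);
	if (err != cudaSuccess) {
		fprintf(stderr, "cudaSetDeviceFlags: %s\n",
		        cudaGetErrorString(err));
		return 1;
	}
	/* ... normal initialization and kernel launches follow ... */
	return 0;
}

If yielding is not enough, there is also cudaDeviceScheduleBlockingSync,
which should make the thread block on a synchronization primitive instead
of polling; that seems closer to the "wait passively" behaviour you asked
about.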


> I also wonder if we'd see the same CPU wastage with OpenCL on the 570.
> I can easily try that next, indeed.
>

I would say yes. cudaDeviceScheduleSpin (which decreases latency when
waiting for the device, but may lower the performance of CPU threads)
seems to be the default.
