Message-ID: <17c75146.43c4cd4b.51e39fee.95fe7@o2.pl>
Date: Mon, 15 Jul 2013 09:08:30 +0200
From: marcus.desto <marcus.desto@...pl>
To: john-dev@...ts.openwall.com
Subject: Shared Memory on GPGPU?

Hello,

I am wondering whether it is possible to share data on a GPGPU: you push some data into the GPU's RAM and start a computation on it in one program. After that first computation finishes, you run a second program that starts another computation, but instead of uploading new input data it reuses the data the first program left in GPU RAM. Is that possible?

If it is, does JtR support that?
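
To make the scenario concrete, here is a minimal OpenCL host-side sketch (not JtR's own code; the context, queue, kernel names and sizes are assumptions for illustration). Within a single host process and OpenCL context, a buffer uploaded once can be handed to several kernels without transferring it again; a separately started process would normally have its own context and could not see the first program's buffers.

    /* Sketch only: assumes ctx, queue and the two kernels were built
       elsewhere; names, sizes and error handling are hypothetical. */
    #include <CL/cl.h>

    void run_two_stages(cl_context ctx, cl_command_queue queue,
                        cl_kernel stage1, cl_kernel stage2,
                        const unsigned char *input, size_t len)
    {
        cl_int err;
        size_t gws = 1024;              /* hypothetical global work size */

        /* Upload the input once; the buffer stays resident in GPU RAM. */
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, len, NULL, &err);
        clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, len, input, 0, NULL, NULL);

        /* First computation works on the device-resident data. */
        clSetKernelArg(stage1, 0, sizeof(cl_mem), &buf);
        clEnqueueNDRangeKernel(queue, stage1, 1, NULL, &gws, NULL, 0, NULL, NULL);

        /* Second computation reuses the same buffer; nothing is
           uploaded from the host again. */
        clSetKernelArg(stage2, 0, sizeof(cl_mem), &buf);
        clEnqueueNDRangeKernel(queue, stage2, 1, NULL, &gws, NULL, 0, NULL, NULL);

        clFinish(queue);
        clReleaseMemObject(buf);
    }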

Marcus
