Message-ID: <CA+TsHUDFhdg2oDcM7eW1oF6067cYyo36f+NTCW=TSZFoCcOm=A@mail.gmail.com>
Date: Mon, 15 Jul 2013 17:34:46 +0530
From: Sayantan Datta <std2048@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: Shared Memory on GPGPU?

Hi,

On Mon, Jul 15, 2013 at 12:38 PM, marcus.desto <marcus.desto@...pl> wrote:

> I am wondering whether it is possible to use shared data on a GPGPU:
> you push some data into the GPGPU's RAM and start a computation in one
> program. When that first computation finishes, you run a second program
> that starts another computation, but it does not upload new input data;
> it uses the data the first program left in GPGPU RAM. Is that possible?
>
> If it is, does JtR support that?
>

Yes, it does. In fact, our split-kernel implementations use this concept.

Regards,
Sayantan

