Message-ID: <20210312145503.GA15020@openwall.com>
Date: Fri, 12 Mar 2021 15:55:03 +0100
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: Apple M1 CPU

On Fri, Mar 12, 2021 at 02:05:17PM +0100, Albert Veli wrote:
> Has anybody tried to compile JtR on the new Apple M1 Arm CPUs? Did it
> work and how is the benchmark compared to similar Intel CPUs? I have
> seen reports of OpenCL working with Big Sur and M1, but does it work
> with John?

As magnum wrote, he recently did some work on this.  Yes, things work,
in one of two ways:

1. You can make a native build of the latest bleeding-jumbo (see the
build sketch after this list).  Then both CPU and OpenCL work.

2. You can run a(n older) build of (bleeding-)jumbo for Intel Mac
(x86-64 with SSE4, but not AVX2) via Rosetta 2 (it's transparent, you
don't need to do anything for it to work; see the second sketch after
this list).  Then the CPU works too, but slower - e.g., magnum reported
37M c/s native ASIMD vs. 27M c/s via Rosetta 2 for descrypt.  I guess
OpenCL will probably work too, to some extent, but we don't really have
data on that here.

The native ASIMD speeds that magnum mentioned on GitHub are quite OK
compared against an Intel AVX2 quad-core at a similarly conservative
clock rate.  (But they are about half of what a true 8-core achieves.)
This is actually impressive considering that ASIMD is 128-bit whereas
AVX2 is 256-bit, and that the new CPU is more energy-efficient.

So you shouldn't expect a speedup from upgrading to an M1, but you
shouldn't expect too much of a slowdown either.

Alexander
