Date: Thu, 23 Nov 2017 11:15:22 +0100
From: "Jeroen" <spam@...lab.nl>
To: <john-users@...ts.openwall.com>
Subject: Re: OpenMPI and .rec files?

A clean check on a local system with a very basic setup shows that this is not a generic issue:

---
bofh@...ncher:/opt/JohnTheRipper/run$ rm *rec; mpirun -np 99 ./john --format=raw-md4 /tmp/hashes
--------------------------------------------------------------------------
[[62017,1],34]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: cruncher

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Using default input encoding: UTF-8
Loaded 1200 password hashes with no different salts (Raw-MD4 [MD4 128/128 SSE4.1 4x3])
Node numbers 1-99 of 99 (MPI)
Send SIGUSR1 to mpirun for status
[cruncher:54746] 98 more processes have sent help message help-mpi-btl-base.txt / btl:no-nics
[cruncher:54746] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
^C^CAbort is in progress...hit ctrl-c again within 5 seconds to forcibly terminate
<SNAP>
bofh@...ncher:/opt/JohnTheRipper/run$ ls *rec|wc -l
99

bofh@...ncher:/opt/JohnTheRipper/run$ rm *rec; mpirun -np 120 ./john --format=raw-md4 /tmp/hashes
--------------------------------------------------------------------------
[[61661,1],114]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: cruncher

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Using default input encoding: UTF-8
Loaded 1200 password hashes with no different salts (Raw-MD4 [MD4 128/128 SSE4.1 4x3])
Node numbers 1-120 of 120 (MPI)
[cruncher:55110] 119 more processes have sent help message help-mpi-btl-base.txt / btl:no-nics
[cruncher:55110] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Send SIGUSR1 to mpirun for status
^C^C
bofh@...ncher:/opt/JohnTheRipper/run$ ls *rec|wc -l
120
---

Same outcome using --fork.
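
For reference, the --fork run was equivalent to something like this (a sketch mirroring the -np 99 MPI run above, not a verbatim transcript):

---
# same test, but with john's built-in forking instead of MPI
rm *rec; ./john --fork=99 --format=raw-md4 /tmp/hashes
# interrupt with ^C once it is running, then count the session files
ls *rec|wc -l
---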

I'll check cluster-specific settings and behavior. If you've got suggestions, please let me know.
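
In case it helps with reproducing this, the openib warnings in the output above can be silenced by excluding that BTL (a sketch; I haven't re-run it this way):

---
# exclude the openib BTL so the btl:no-nics warnings go away (untested here)
mpirun --mca btl ^openib -np 99 ./john --format=raw-md4 /tmp/hashes
---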

Thanks,

Jeroen
