Message-ID: <CADkMHCmCr3Kc9GDWR+DZcgNt3_9pgjMCw3k2Q7q9zMQT0n5Z5g@mail.gmail.com>
Date: Sat, 12 Jan 2013 08:48:46 -0700
From: RB <aoz.syn@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: parallel sort of text files under Windows

On Sat, Jan 12, 2013 at 8:18 AM, Solar Designer <solar@...nwall.com> wrote:
> So if you're on a machine with e.g. only 4 GB RAM, the "unique" approach
> is likely faster (since it's a simpler task). If you're on a machine
> with e.g. 16 GB RAM or more, the "sort" approach is likely faster (since
> it can use more RAM).

This is actually a huge win with nearly any modern size of memory. The
default internal buffer is rather small (I'd quote precisely what it is,
but the source is more opaque on that than I have time to disentangle),
and once the job exceeds that size sort will start using temporary files
and is subsequently limited to the speed of (by default) /tmp. As an
incident responder I run into monstrous sorts all the time, and -S and
--parallel are some of my best friends.
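A sketch of the kind of invocation described above, for GNU sort: raise the
in-memory buffer with -S, run multiple threads with --parallel, and point -T
at a fast temp directory so any spill doesn't land on a slow /tmp. The file
names, buffer size, and thread count here are illustrative assumptions, not
from the original post:

```shell
# Illustrative only: generate a tiny wordlist, then sort and deduplicate it.
# On a real job you'd tune -S and --parallel to your RAM and core count.
printf 'charlie\nalpha\nbravo\nalpha\n' > wordlist.txt
sort -u -S 1G --parallel=4 -T "${TMPDIR:-/tmp}" wordlist.txt -o sorted.txt
cat sorted.txt
```

Note that -u folds the "unique" pass into the sort itself, so whether this
beats a separate unique step still depends on available RAM, as discussed in
the quoted message.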