Message-ID: <CAH8erehFSOHf1M34dU=2KepuOcp_ck5d+Hh8Vub4Pg-zEMYOSQ@mail.gmail.com>
Date: Thu, 23 Mar 2023 17:18:53 -0300
From: Rodrigo s <rodrigozanattasilva@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: John the Ripper in the cloud update 2023/02

Lol, thanks a lot for the feedback!

But now I have done my homework, and I discovered an ugly truth: AWS is...
DIFFICULT! Man... I had to understand all of the instance types, and that is
not easy information. This page shows them all
<https://aws.amazon.com/ec2/instance-types/?nc1=h_ls> and what we really
want is a *g4dn.16xlarge* or the supreme *g4dn.metal*. But you need to
request access to those, and that is not an easy task. What I have to do is
so obscure that I gave up!

Why not use something easier? Just pay, rent a server, click, and start the
attack. So I discovered https://vast.ai/

And this is really what I had in mind from the start. I think this project
should migrate to it. All you need to do is create a Docker image on
https://hub.docker.com!

I don't know how to do this, but if you could install the best GPU drivers
in the image so that when I start the container it automatically has the
best configuration... man, life would be easy! I just pay, rent, start,
connect over ssh, type john, and recover the password!

Do you like this idea?

On Thu, Mar 9, 2023 at 20:03, Solar Designer <solar@...nwall.com> wrote:

> On Thu, Mar 09, 2023 at 12:21:12AM -0300, Rodrigo s wrote:
> > Thanks a lot for your help! I am really happy to say that it works now.
> >
> > All I did was exactly what that link said
> > <https://towardsthecloud.com/amazon-ec2-requested-more-vcpu-capacity>
> > to do. I put a random number like 80. After a day, AWS accepted it.
> >
> > Summary of service quota(s) requested for increase:
> > [US East (Northern Virginia)]: EC2 Instances / nu.general (All
> > Standard (A, C, D, H, I, M, R, T, Z) instances), New Limit = 80
>
> That's great, but now you can no longer help us test if our new default
> instance type would have worked with AWS defaults.
>
> > Now... I can't say whether it was what you did that solved the problem.
> > But after I tried again, using the same default configuration, it worked
> > without any problem and I got the Linux console. I can learn more and
> > understand how all of it works.
>
> I think what really made the "same" default configuration work is that
> I've changed the default instance type from p3.2xlarge to c6i.large.
>
> p3.2xlarge wouldn't work for you even after the quota increase, because
> P isn't among the instance type letters that you increased the quota
> for.  c6i.large was expected to work even before your quota increase,
> because it needs 2 vCPUs and the default is apparently 5, which you've
> now increased to 80.
>
> > But I am a little disappointed. I thought it would be much faster than
> > what I can do myself on my computer.
>
> c6i.large is certainly not extremely fast.  Their largest in that
> category is c6i.32xlarge, which is 64 times larger - it has 128 vCPUs,
> so wouldn't even fit in your increased quota now.  You can now try e.g.
> c6i.16xlarge, which has 64 vCPUs.
>
> > In my test (trying a random format I was using), it was doing about
> > 12.5 kc/s in each thread, so 12.5k * 2 = 25 kc/s. On my computer the same
> > operation makes 4.3 kc/s in each of its 12 threads, or about 50 kc/s.
>
> If you have 12 vCPUs (hardware threads), no surprise they're faster than
> the 2 in AWS.  However, apparently those in AWS are faster than yours
> each, perhaps because c6i instances have AVX-512 and your CPU does not.
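
As a quick sanity check, here is the arithmetic behind the speeds quoted
above as a small Python sketch (the per-thread rates are the rough figures
from the quoted message):

# Aggregate candidates-per-second from the per-thread rates quoted above.
aws_per_thread_cps = 12_500    # ~12.5 kc/s per vCPU on c6i.large (quoted figure)
aws_threads = 2                # c6i.large has 2 vCPUs

local_per_thread_cps = 4_300   # ~4.3 kc/s per thread on the local CPU (quoted figure)
local_threads = 12             # the local machine runs 12 threads

aws_total = aws_per_thread_cps * aws_threads        # 25,000 c/s
local_total = local_per_thread_cps * local_threads  # 51,600 c/s

print(f"AWS c6i.large: {aws_total:,} c/s total ({aws_per_thread_cps:,} c/s per thread)")
print(f"Local machine: {local_total:,} c/s total ({local_per_thread_cps:,} c/s per thread)")
# Each AWS vCPU is faster, but 12 local threads still beat 2 AWS vCPUs in total.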
>
> You'll need to mention what this "random format" was, or better yet show
> the "Loaded ..." lines, for me (and maybe others in here) to comment
> whether those speeds are good or bad, and maybe how to improve them.
>
> > Because we can easily "build" a new computer in AWS, I thought this
> > bundle could have the best possible configuration. What really makes the
> > program run faster? I always have this question in my mind. So, with AWS,
> > I could test it.
>
> Yes, you can test different kinds of hardware quickly, like Intel vs.
> AMD (c6i vs. c6a instances for the latest ones).
>
> > What do you think about it? Is there a cheap configuration in AWS? Or,
> > for it to work, do I just need to pay for the most expensive options in
> > AWS?
>
> For continued use over months or years, AWS is more expensive than
> buying your own hardware (but then you also need to maintain the
> hardware, and cost of that depends on cost of your time).  It is also
> more expensive than renting dedicated servers (but then you're tied to
> specific hardware and need to manage the OS and software installs).
>
> For occasional uses and experiments (up to days or weeks, but not
> months), or e.g. when you don't know how long an attack will take (with
> luck, can be quick), AWS can be cheaper.
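
As a rough illustration of that trade-off, here is a tiny break-even
estimate in Python; the prices below are made-up placeholders for
illustration, not actual AWS or hardware quotes:

# Hypothetical break-even point: renting an on-demand instance vs. buying hardware.
# All numbers are placeholder assumptions for illustration, not real prices.
cloud_cost_per_hour = 2.70         # assumed on-demand $/hour for a large instance
hardware_cost = 8000.0             # assumed up-front cost of a comparable machine
hardware_run_cost_per_hour = 0.10  # assumed electricity/upkeep $/hour

break_even_hours = hardware_cost / (cloud_cost_per_hour - hardware_run_cost_per_hour)
print(f"Buying wins after ~{break_even_hours:,.0f} hours "
      f"(~{break_even_hours / 24 / 30:.1f} months of continuous use)")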
>
> > Is there no shortcut for it?
>
> As explained on John the Ripper in the cloud homepage, usage of spot
> instances is a partial shortcut to reduce AWS costs.
>
> Please note that there's a separate service quota for spot instances.
> Your increase from 5(?) to 80 probably only applies to on-demand
> instances of those categories.  You're probably still limited to 5 spot
> instance vCPUs (if not to 0?), and you'll probably want to request an
> increase.
>
> Meanwhile, you can try running up to c6i.16xlarge as on-demand and up to
> c6i.xlarge as spot.
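
To spell out the vCPU math behind those suggestions, here is a small Python
sketch; the c6i sizes follow the usual 2 vCPUs per "large" multiple, and the
spot quota of 5 is an assumption as noted above:

# Check c6i instance sizes against the on-demand and spot vCPU quotas discussed above.
c6i_vcpus = {"c6i.large": 2, "c6i.xlarge": 4, "c6i.16xlarge": 64, "c6i.32xlarge": 128}

on_demand_quota = 80  # the increased quota from the request quoted above
spot_quota = 5        # assumed still at the default for spot instances

for name, vcpus in c6i_vcpus.items():
    on_demand = "fits" if vcpus <= on_demand_quota else "over quota"
    spot = "fits" if vcpus <= spot_quota else "over quota"
    print(f"{name:14} {vcpus:3} vCPUs  on-demand: {on_demand:10}  spot: {spot}")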
>
> > I really would like to help with this AWS project. Because I am still a
> > noob, I can only think about how to do it. But I can study it if you say
> > that what I show makes sense :)
>
> So far you're experimenting and learning.  This makes sense.
>
> Alexander
>
