Message-ID:
 <SE1P216MB24842DA9097DEB6DF91EEF579EB52@SE1P216MB2484.KORP216.PROD.OUTLOOK.COM>
Date: Sat, 27 Jul 2024 13:38:55 +0000
From: JinCheng Li <naiveli233@...look.com>
To: "musl@...ts.openwall.com" <musl@...ts.openwall.com>
CC: Markus Wichmann <nullplan@....net>
Subject: Re: Possible unfair scenarios in pthread_mutex_unlock

Hi

> > I have two questions about pthread_mutex_unlock.
> >
> >   1. Why do we need a vmlock only for a pshared mutex, and not for a
> >      private mutex?

> Because only in the pshared case is it valid to have the mutex in shared
> memory and to unmap that shared memory immediately after an unlock.

Sorry, I still don't fully understand.

  1. The vmlock is a private, per-process lock. How does it work for a
     pshared mutex (memory shared across processes)? What is its real
     role?
  2. Why does munmap have to do vm_wait? It looks like vm_wait is done
     even when I am not unmapping shared memory. If I am releasing a
     pshared lock, are all munmaps in the process blocked until the
     mutex has been unlocked? (See the sketch after the __munmap
     snippet below.)
  3. Can you give an example of this vm lock in action (a case where
     the lock must exist and do its job)?


int __munmap(void *start, size_t len)
{
    /* Block until no thread in this process holds the vm lock, i.e.
       until no thread is still in a window (such as the tail of a
       process-shared mutex unlock) where it may yet touch a shared
       synchronization object that could live in the memory being
       unmapped. */
    __vm_wait();
    return syscall(SYS_munmap, start, len);
}
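
For concreteness, here is a minimal sketch of the scenario I have in
mind (my own example, not from musl; I use an error-checking pshared
mutex since, as I read pthread_mutex_unlock, the vm lock is only taken
for non-normal process-shared types):

#define _GNU_SOURCE /* MAP_ANONYMOUS */
#include <pthread.h>
#include <sys/mman.h>

static pthread_mutex_t *m;

static void *unmapper(void *arg)
{
    (void)arg;
    /* This thread can acquire the mutex as soon as the main thread's
       unlock clears the lock word, then it immediately destroys it and
       unmaps the mapping that holds it. __vm_wait() in __munmap makes
       the munmap wait until the main thread has left the window in
       which it still references that mapping. */
    pthread_mutex_lock(m);
    pthread_mutex_unlock(m);
    pthread_mutex_destroy(m);
    munmap(m, sizeof *m);
    return 0;
}

int main(void)
{
    m = mmap(0, sizeof *m, PROT_READ|PROT_WRITE,
             MAP_SHARED|MAP_ANONYMOUS, -1, 0);
    if (m == MAP_FAILED) return 1;

    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_settype(&a, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(m, &a);
    pthread_mutexattr_destroy(&a);

    pthread_mutex_lock(m);

    pthread_t t;
    pthread_create(&t, 0, unmapper, 0);

    /* Is the short window after this unlock, during which the unlock
       path may still reference the mapping, what the vm lock protects
       against the munmap above? */
    pthread_mutex_unlock(m);

    pthread_join(t, 0);
    return 0;
}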



Best
Li
________________________________
From: Markus Wichmann <nullplan@....net>
Sent: Saturday, July 27, 2024 3:52
To: musl@...ts.openwall.com <musl@...ts.openwall.com>
Cc: JinCheng Li <naiveli233@...look.com>
Subject: Re: [musl] Possible unfair scenarios in pthread_mutex_unlock

On Fri, Jul 26, 2024 at 06:22:50AM +0000, JinCheng Li wrote:
> Hi
>
> I have two questions about pthread_mutex_unlock.
>
>   1. Why do we need a vmlock only for a pshared mutex, and not for a
>      private mutex?

Because only in the pshared case is it valid to have the mutex in shared
memory and to unmap that shared memory immediately after an unlock.
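
As a rough sketch of the pattern I mean (my own illustration, not from
musl; the shm name is made up and error handling is omitted):

#include <pthread.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* A process-shared mutex living in shared memory. */
    int fd = shm_open("/demo-pshared-mutex", O_CREAT|O_RDWR, 0600);
    ftruncate(fd, sizeof(pthread_mutex_t));
    pthread_mutex_t *m = mmap(0, sizeof *m, PROT_READ|PROT_WRITE,
                              MAP_SHARED, fd, 0);

    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(m, &a);
    pthread_mutexattr_destroy(&a);

    if (fork() == 0) {
        /* Child: take the lock, release it, and unmap right away.
           That is only legitimate because the mutex is process-shared;
           a private mutex has no business living in memory that gets
           shared and unmapped like this. */
        pthread_mutex_lock(m);
        pthread_mutex_unlock(m);
        munmap(m, sizeof *m);
        _exit(0);
    }

    pthread_mutex_lock(m);
    pthread_mutex_unlock(m);
    munmap(m, sizeof *m);

    wait(0);
    close(fd);
    shm_unlink("/demo-pshared-mutex");
    return 0;
}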

>   2. For a pshared mutex, after swapping _m_lock to the unlocked state,
>      we need to wait for vm_unlock before waking the other waiters.
>      During this period, if someone trylocks this pshared lock, queue
>      jumping may occur. Is this unfair to the previous waiters? They may
>      have to wait longer to get woken up. This can cause performance
>      issues; do we have a good way to avoid this?

Fairness is not guaranteed for mutexes. It is not guaranteed for
anything; any synchronization primitive can be sniped. In a situation
where new waiters keep arriving at the mutex, there is no guarantee
that any one waiter won't starve.
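
To make that concrete, a toy example (mine, not from musl): a thread
that spins on trylock can keep winning the mutex ahead of a thread that
is blocked in pthread_mutex_lock, because POSIX makes no FIFO promise
about which contender gets the lock next:

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static volatile int stop;

/* The "sniper" never blocks; it just keeps retrying trylock. Each time
   the holder releases the mutex, a fresh trylock can win it before a
   waiter sleeping in the kernel is scheduled and retries. */
static void *sniper(void *arg)
{
    (void)arg;
    while (!stop)
        if (pthread_mutex_trylock(&mtx) == 0)
            pthread_mutex_unlock(&mtx);
    return 0;
}

/* The waiter blocks in pthread_mutex_lock and may be overtaken
   arbitrarily many times; nothing forbids that. */
static void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mtx);
    pthread_mutex_unlock(&mtx);
    stop = 1;
    return 0;
}

int main(void)
{
    pthread_t s, w;
    pthread_mutex_lock(&mtx);        /* make both threads contend */
    pthread_create(&s, 0, sniper, 0);
    pthread_create(&w, 0, waiter, 0);
    sleep(1);
    pthread_mutex_unlock(&mtx);      /* now the two race for the lock */
    pthread_join(w, 0);
    pthread_join(s, 0);
    return 0;
}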

Ciao,
Markus
