Message-ID: <20181102154528.GI5150@brightrain.aerifal.cx>
Date: Fri, 2 Nov 2018 11:45:28 -0400
From: Rich Felker <dalias@...c.org>
To: "dirk@...iot.com" <dirk@...iot.com>
Cc: musl <musl@...ts.openwall.com>
Subject: Re: Deadlock when calling fflush/fclose in multiple threads

On Fri, Nov 02, 2018 at 10:29:15AM -0400, Rich Felker wrote:
> By removing the FILE being closed from the open file list (and
> unlocking the open file list, without which the removal can't be
> seen) before it's flushed and closed, fclose creates a race window
> where fflush(NULL) or exit() from another thread can complete without
> this file being flushed, potentially causing data loss.
>
> I think we just have to move the __ofl_lock to the top of the
> function, before FLOCK, and the __ofl_unlock to after the
> fflush/close. Unfortunately this makes fclose much more serializing
> than it was before, but I don't see any way to avoid it.

Another possibility seems to be moving the ofl lock and unlock to after
the fflush, close, and FUNLOCK, but before the free. This leaves a
'dead' FILE in the open file list momentarily, but the only things that
can act on it are pthread_create's init_file_lock, __stdio_exit's
close_file, and fflush(NULL), and none of these can have any side
effects except on a FILE with buffered data (which the FILE being
closed can't have at this point).

I think I like this solution better, and I think it's necessary to do
something other than the above-quoted idea; holding the ofl lock during
the flush can itself cause deadlock, since the flush could block, and
forward progress of whatever has it blocked (e.g. the other end of a
socket or pipe) could depend on forward progress of fclose, fopen, etc.
in another thread. Also, in light of having added support for
application-provided buffers with setvbuf, even on regular files the
flush operation could take a long time.

Rich
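
A rough, self-contained C model of the second ordering described above
(flush and close under the per-file lock while the stream is still on
the open file list, then take the list lock only to unlink it, then
free). All names here (file_like, ofl_head, model_fclose, ...) are
invented for illustration; this is not musl's actual fclose, which uses
its internal FILE, FLOCK/FUNLOCK and __ofl_lock/__ofl_unlock machinery.

    /* model of the proposed fclose ordering; illustrative only */
    #include <pthread.h>
    #include <stdlib.h>

    struct file_like {
        struct file_like *prev, *next;
        pthread_mutex_t lock;   /* stands in for FLOCK/FUNLOCK */
        char *buf;              /* buffered output, if any */
        size_t buf_len;
        int fd;
    };

    static struct file_like *ofl_head;  /* "open file list" */
    static pthread_mutex_t ofl_lock = PTHREAD_MUTEX_INITIALIZER;

    /* placeholder for the real flush: write out buf, report success */
    static int flush_stream(struct file_like *f)
    {
        f->buf_len = 0;
        return 0;
    }

    static struct file_like *model_fopen(int fd)
    {
        struct file_like *f = calloc(1, sizeof *f);
        if (!f) return NULL;
        pthread_mutex_init(&f->lock, NULL);
        f->fd = fd;
        pthread_mutex_lock(&ofl_lock);   /* link onto the open file list */
        f->next = ofl_head;
        if (ofl_head) ofl_head->prev = f;
        ofl_head = f;
        pthread_mutex_unlock(&ofl_lock);
        return f;
    }

    int model_fclose(struct file_like *f)
    {
        int r;

        /* Flush and close under the per-file lock only. While this
         * runs, f is still on the open file list, so a concurrent
         * flush-all (fflush(NULL)/exit) cannot miss its buffered data,
         * and no list lock is held while the flush might block. */
        pthread_mutex_lock(&f->lock);
        r = flush_stream(f);
        /* close(f->fd) would go here for a real stream */
        pthread_mutex_unlock(&f->lock);

        /* f is now a 'dead' entry: no buffered data, so a list walker
         * that still sees it has nothing to do. Take the list lock
         * only to unlink it, then free. Freeing is safe in this model
         * because walkers hold ofl_lock for the whole walk, so none
         * can still be at f once the unlink completes. */
        pthread_mutex_lock(&ofl_lock);
        if (f->prev) f->prev->next = f->next;
        if (f->next) f->next->prev = f->prev;
        if (ofl_head == f) ofl_head = f->next;
        pthread_mutex_unlock(&ofl_lock);

        free(f->buf);
        free(f);
        return r;
    }

    /* model of fflush(NULL)/exit-time flushing: walk the list under
     * the list lock and flush each stream under its own lock */
    int model_flush_all(void)
    {
        int r = 0;
        pthread_mutex_lock(&ofl_lock);
        for (struct file_like *f = ofl_head; f; f = f->next) {
            pthread_mutex_lock(&f->lock);
            if (f->buf_len) r |= flush_stream(f);
            pthread_mutex_unlock(&f->lock);
        }
        pthread_mutex_unlock(&ofl_lock);
        return r;
    }

    int main(void)
    {
        struct file_like *a = model_fopen(3), *b = model_fopen(4);
        if (!a || !b) return 1;
        model_flush_all();   /* sees both streams while they are listed */
        model_fclose(a);     /* flushed/closed before leaving the list */
        model_fclose(b);
        return 0;
    }

Note that the real fclose has details this model ignores, e.g. streams
that are never freed and the coordination with pthread_create's
init_file_lock mentioned above; the point of the sketch is only the
lock ordering: per-file lock for the flush/close, then the list lock
for the unlink, with the free last.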