Message-ID: <20110613064252.GB3877@albatros>
Date: Mon, 13 Jun 2011 10:42:52 +0400
From: Vasiliy Kulikov <segoon@...nwall.com>
To: kernel-hardening@...ts.openwall.com
Subject: destroy unused shmem segments

Solar,

I'm looking into the -ow patch and I see this code:

	#ifdef CONFIG_HARDEN_SHM
	void shm_exit (void)
	{
		int i;
		struct shmid_kernel *shp;

		for (i = 0; i <= shm_ids.max_id; i++) {
			shp = shm_get(i);
			if (!shp) continue;

			if (shp->shm_cprid != current->pid) continue;

			if (shp->shm_nattch <= 0) {
				shp->shm_flags |= SHM_DEST;
				shm_destroy (shp);
			}
		}
	}
	#endif


	NORET_TYPE void do_exit(long code)
	{
		...
	#ifdef CONFIG_HARDEN_SHM
		shm_exit();
	#endif
		...
	}

However, the shm segment should already be freed by exit_mm() => vma->close():

	static struct vm_operations_struct shm_vm_ops = {
		open:	shm_open,	/* callback for a new vm-area open */
		close:	shm_close,	/* callback for when the vm-area is released */
		nopage:	shmem_nopage,
	};

	static void shm_close (struct vm_area_struct *shmd)
	{
		...
		shp->shm_nattch--;
	#ifdef CONFIG_HARDEN_SHM
		if(shp->shm_nattch == 0) {
			shp->shm_flags |= SHM_DEST;
			shm_destroy (shp);
		}
	#else
		if(shp->shm_nattch == 0 &&
		   shp->shm_flags & SHM_DEST)
			shm_destroy (shp);
	#endif
		...
	}
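
That is, for the common attach case the teardown already happens from
shm_close() when the last mapping goes away at exit.  A minimal userspace
sketch of mine (not from the patch; it relies on plain SysV semantics,
where IPC_RMID sets SHM_DEST):

	/* Attach a segment, mark it for destruction, and exit without
	 * shmdt().  exit_mm() unmaps the attachment, shm_close() sees
	 * shm_nattch drop to 0 and, SHM_DEST being set, calls
	 * shm_destroy(); no extra loop in do_exit() is involved here. */
	#include <stdio.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>

	int main(void)
	{
		int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
		void *p;

		if (id < 0) {
			perror("shmget");
			return 1;
		}
		p = shmat(id, NULL, 0);
		if (p == (void *)-1) {
			perror("shmat");
			return 1;
		}
		if (shmctl(id, IPC_RMID, NULL) < 0)	/* destroy on last detach */
			perror("shmctl");
		printf("attached at %p, exiting without shmdt()\n", p);
		return 0;	/* exit_mm() unmaps p -> shm_close() -> shm_destroy() */
	}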

Is this an additional "safety" check or a workaround for some dubious
race?  I see no explicit need for such a freeing loop in do_exit().

Thanks,

-- 
Vasiliy
