[PATCHv2 8/9] zswap: add to mm/

Dan Magenheimer dan.magenheimer at oracle.com
Tue Jan 8 17:54:49 UTC 2013


> From: Dave Hansen [mailto:dave at linux.vnet.ibm.com]
> Sent: Tuesday, January 08, 2013 10:15 AM
> To: Seth Jennings
> Cc: Greg Kroah-Hartman; Andrew Morton; Nitin Gupta; Minchan Kim; Konrad Rzeszutek Wilk; Dan
> Magenheimer; Robert Jennings; Jenifer Hopper; Mel Gorman; Johannes Weiner; Rik van Riel; Larry
> Woodman; linux-mm at kvack.org; linux-kernel at vger.kernel.org; devel at driverdev.osuosl.org
> Subject: Re: [PATCHv2 8/9] zswap: add to mm/
> 
> On 01/07/2013 12:24 PM, Seth Jennings wrote:
> > +struct zswap_tree {
> > +	struct rb_root rbroot;
> > +	struct list_head lru;
> > +	spinlock_t lock;
> > +	struct zs_pool *pool;
> > +};
> 
> BTW, I spent some time trying to get this lock contended.  You thought
> the anon_vma locks would dominate and this spinlock would not end up
> very contended.
> 
> I figured that if I hit zswap from a bunch of CPUs that _didn't_ use
> anonymous memory (and thus the anon_vma locks), some more contention
> would pop up.  I did that with a bunch of CPUs writing to tmpfs, and
> this lock was still well down below anon_vma.  The anon_vma contention
> was obviously coming from _other_ anonymous memory around.
> 
> IOW, I feel a bit better about this lock.  I only tested on 16 cores on
> a system with relatively light NUMA characteristics, and it might be the
> bottleneck if all the anonymous memory on the system is mlock()'d and
> you're pounding on tmpfs, but that's pretty contrived.

IIUC, Seth's current "flush" code only gets called in the context of a
frontswap_store and is very limited in what it does, whereas the goal
is for flushing to both run as an independent thread and do more
complex things (e.g. reclaim whole pages rather than random zpages).
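
Just to illustrate the shape of what I mean, a rough (untested) sketch
of a per-tree flush thread is below.  zswap_flush_entry(), the "zswapd"
thread name, and the polling interval are made up for illustration, and
it assumes struct zswap_entry carries the lru list_head that the
per-tree LRU links through:

#include <linux/err.h>
#include <linux/jiffies.h>
#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

#define ZSWAP_FLUSH_INTERVAL_MS	100	/* made-up polling interval */

/* placeholder for the real decompress-and-write-back path */
static void zswap_flush_entry(struct zswap_tree *tree,
			      struct zswap_entry *entry);

static int zswap_flush_thread(void *data)
{
	struct zswap_tree *tree = data;
	struct zswap_entry *entry;

	while (!kthread_should_stop()) {
		spin_lock(&tree->lock);
		if (list_empty(&tree->lru)) {
			spin_unlock(&tree->lock);
			schedule_timeout_interruptible(
				msecs_to_jiffies(ZSWAP_FLUSH_INTERVAL_MS));
			continue;
		}
		/* oldest entry sits at the tail of the per-tree LRU */
		entry = list_entry(tree->lru.prev, struct zswap_entry, lru);
		list_del_init(&entry->lru);
		spin_unlock(&tree->lock);

		/*
		 * Placeholder: decompress the entry and write it back to
		 * the backing swap device so its zspage -- and eventually
		 * a whole page -- can be reclaimed.
		 */
		zswap_flush_entry(tree, entry);
	}
	return 0;
}

/* e.g. started once per swap type from zswap_frontswap_init() */
static int zswap_start_flush_thread(struct zswap_tree *tree)
{
	struct task_struct *task;

	task = kthread_run(zswap_flush_thread, tree, "zswapd");
	if (IS_ERR(task))
		return PTR_ERR(task);
	return 0;
}

That ignores removing the entry from the rbtree and all the refcounting
against a concurrent frontswap_load, but the point relevant to Dave's
numbers is that the flush thread would take tree->lock from a context
other than the store path.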

So it will be interesting to re-test contention when zswap is complete.

Dan


