[PATCH 7/8] zswap: add to mm/

Dan Magenheimer dan.magenheimer at oracle.com
Wed Jan 2 17:26:07 UTC 2013


> From: Dave Hansen [mailto:dave at linux.vnet.ibm.com]
> Subject: Re: [PATCH 7/8] zswap: add to mm/
> 
> On 01/01/2013 09:52 AM, Seth Jennings wrote:
> > On 12/31/2012 05:06 PM, Dan Magenheimer wrote:
> >> A second related issue that concerns me is that, although you
> >> are now, like zcache2, using an LRU queue for compressed pages
> >> (aka "zpages"), there is no relationship between that queue and
> >> physical pageframes.  In other words, you may free up 100 zpages
> >> out of zswap via zswap_flush_entries, but not free up a single
> >> pageframe.  This seems like a significant design issue.  Or am
> >> I misunderstanding the code?
> >
> > You understand correctly.  There is room for optimization here and it
> > is something I'm working on right now.
> 
> It's the same "design issue" that the slab shrinkers have, and they are
> likely dealing with substantially smaller and more consistently sized objects.
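
To make sure we're describing the same problem, here is a toy
userspace model of the concern (purely illustrative, not the real
zbud/zsmalloc layout): with two zpages packed per pageframe, evicting
zpages in pure LRU order can free half the objects without returning
a single pageframe to the page allocator.

/*
 * Toy model: each "pageframe" holds up to two compressed objects.
 * Evicting objects in pure LRU order can free many zpages while
 * freeing zero pageframes, because every frame still has one
 * resident left.
 */
#include <stdbool.h>
#include <stdio.h>

#define NFRAMES 4

struct frame {
        bool slot_used[2];      /* two zpages packed per pageframe */
};

static bool frame_is_free(const struct frame *f)
{
        return !f->slot_used[0] && !f->slot_used[1];
}

int main(void)
{
        struct frame frames[NFRAMES];
        int i, freed = 0;

        /* Fill both slots of every frame. */
        for (i = 0; i < NFRAMES; i++)
                frames[i].slot_used[0] = frames[i].slot_used[1] = true;

        /* "Evict" one zpage from each frame in LRU order. */
        for (i = 0; i < NFRAMES; i++)
                frames[i].slot_used[0] = false;

        for (i = 0; i < NFRAMES; i++)
                if (frame_is_free(&frames[i]))
                        freed++;

        printf("evicted %d zpages, freed %d pageframes\n", NFRAMES, freed);
        return 0;
}

Running it prints "evicted 4 zpages, freed 0 pageframes", which is
exactly the mismatch I was worried about.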

Understood, Dave, on the slab-shrinker comparison.  However, if one
compares the percentage of RAM that zswap uses for zpages against the
percentage of RAM used by slab, I suspect the zswap number will
dominate; might that be because zswap stores primarily data while
slab stores primarily metadata?

I don't claim to be any kind of expert here, but I'd imagine MM
doesn't try to manage the total amount of slab space because slab is
simply "a cost of doing business".  However, for in-kernel compression
to be widely useful, IMHO it will be critical for MM to somehow
balance the total pageframes used for compressed pages against the
total pageframes used for normal pages, just as today it balances
active pages against inactive pages.
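
As a very rough sketch of the kind of balancing I have in mind (the
names and the 20% figure below are made up for illustration; zswap
may already do something similar with a static pool limit, but the
point is dynamic MM-level balancing rather than a fixed cap):

/*
 * Hypothetical policy sketch: treat the pageframes backing the
 * compressed pool as something to be shrunk (e.g. written back to
 * the real swap device) once they exceed a target share of RAM.
 */
#include <stdbool.h>
#include <stdio.h>

static bool compressed_pool_over_target(unsigned long pool_frames,
                                        unsigned long total_frames,
                                        unsigned int max_percent)
{
        /* Start shrinking the pool once it crosses the target. */
        return pool_frames * 100 > total_frames * max_percent;
}

int main(void)
{
        unsigned long total = 262144;   /* 1 GiB worth of 4 KiB frames */
        unsigned long pool = 60000;     /* frames currently holding zpages */

        printf("pool over target: %s\n",
               compressed_pool_over_target(pool, total, 20) ? "yes" : "no");
        return 0;
}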
 
> >> A third concern is about scalability... the locking seems very
> >
> > The reason the coarse lock isn't a problem for zswap like the hash
> 
> Lock hold times don't often dominate lock cost these days.  The limiting
> factor tends to be the cost of atomic operations to bring the cacheline
> over to the CPUs acquiring the lock.

[I'll bow out of the scalability discussion as long as someone
else is thinking about it.]
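
(For anyone who does pick it up, here is a toy pthreads illustration
of the cacheline point Dave is making, with made-up names and nothing
to do with zswap's actual data structures.  The sharded variant does
not shorten any critical section; it just spreads the atomic traffic
across independent cachelines.)

#include <pthread.h>
#include <stdio.h>

#define NR_SHARDS 16

/* Coarse version: one lock whose cacheline ping-pongs between CPUs. */
static pthread_mutex_t coarse_lock = PTHREAD_MUTEX_INITIALIZER;

/* Sharded version: hash the swap offset to one of N independent locks. */
static pthread_mutex_t shard_lock[NR_SHARDS];

static pthread_mutex_t *lock_for(unsigned long offset)
{
        return &shard_lock[offset % NR_SHARDS];
}

int main(void)
{
        unsigned long offset = 12345;
        int i;

        for (i = 0; i < NR_SHARDS; i++)
                pthread_mutex_init(&shard_lock[i], NULL);

        /* Coarse: every thread touching any entry contends on one cacheline. */
        pthread_mutex_lock(&coarse_lock);
        pthread_mutex_unlock(&coarse_lock);

        /* Sharded: threads working on different offsets rarely share a lock. */
        pthread_mutex_lock(lock_for(offset));
        pthread_mutex_unlock(lock_for(offset));

        printf("offset %lu maps to shard %lu\n", offset, offset % NR_SHARDS);
        return 0;
}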

Dan



