[PATCH 3/3] x86: add local_tlb_flush_kernel_range()

Seth Jennings sjenning at linux.vnet.ibm.com
Wed Jun 27 21:41:45 UTC 2012


On 06/27/2012 04:15 PM, Dan Magenheimer wrote:
>> From: Seth Jennings [mailto:sjenning at linux.vnet.ibm.com]
>> I guess I'm not following.  Are you supporting the removal
>> of the "break even" logic?  I added that logic as a
>> compromise for Peter's feedback:
>>
>> http://lkml.org/lkml/2012/5/17/177
> 
> Yes, as long as I am correct that zsmalloc never has to map/flush
> more than two pages at a time, I think dealing with the break-even
> logic is overkill.

The implementation of local_flush_tlb_kernel_range()
shouldn't be influenced by zsmalloc at all.  Additionally,
we can't assume that zsmalloc will always be the only user
of this function.

> I see Peter isn't on this dist list... maybe
> you should ask him if he agrees, as long as we are only always
> talking about flush-two-TLB-pages vs flush-all.

Yes, I'm planning to send out the next version of patches
tomorrow (minus the first that has already been accepted)
and I'll include him like I should have the first time :-/

> (And, of course, per previous discussion, I think even mapping/flushing
> two TLB pages is unnecessary and overkill required only for protecting an
> abstraction, but will stop beating that dead horse. ;-)


With this patchset, I actually quantified the performance gain
of page table assisted mapping vs mapping via copy, and there is
a significant 40% difference in single-threaded performance.

You can run the test yourself by commenting out the
#define __HAVE_ARCH_LOCAL_FLUSH_TLB_KERNEL_RANGE
in tlbflush.h, which causes the copy-based mapping
method to be used instead.

--
Seth
