[PATCH v3 08/10] x86/hyper-v: use hypercall for remote TLB flush

KY Srinivasan kys at microsoft.com
Mon May 22 14:39:28 UTC 2017



> -----Original Message-----
> From: devel [mailto:driverdev-devel-bounces at linuxdriverproject.org] On
> Behalf Of Vitaly Kuznetsov
> Sent: Monday, May 22, 2017 3:44 AM
> To: Andy Lutomirski <luto at kernel.org>
> Cc: Stephen Hemminger <sthemmin at microsoft.com>; Jork Loeser
> <Jork.Loeser at microsoft.com>; Haiyang Zhang <haiyangz at microsoft.com>;
> x86 at kernel.org; linux-kernel at vger.kernel.org; Steven Rostedt
> <rostedt at goodmis.org>; Ingo Molnar <mingo at redhat.com>; H. Peter Anvin
> <hpa at zytor.com>; devel at linuxdriverproject.org; Thomas Gleixner
> <tglx at linutronix.de>
> Subject: Re: [PATCH v3 08/10] x86/hyper-v: use hypercall for remote TLB
> flush
> 
> Andy Lutomirski <luto at kernel.org> writes:
> 
> > On 05/19/2017 07:09 AM, Vitaly Kuznetsov wrote:
> >> The Hyper-V host can suggest that we use a hypercall for remote TLB
> >> flushes; this is supposed to work faster than IPIs.
> >>
> >> Implementation details: to do HvFlushVirtualAddress{Space,List}
> >> hypercalls we need to put the input somewhere in memory, and we don't
> >> want to allocate memory on each call, so we pre-allocate per-cpu memory
> >> areas on boot. These areas are of fixed size; limit them to an arbitrary
> >> 16 entries (16 gvas are able to specify 16 * 4096 pages).
> >>
> >> pv_ops patching is happening very early so we need to separate
> >> hyperv_setup_mmu_ops() and hyper_alloc_mmu().
> >>
> >> It is possible and easy to implement local TLB flushing too, and there
> >> is even a hint for that. However, I don't see room for optimization on
> >> the host side, as both a hypercall and a native TLB flush will result
> >> in a vmexit. The hint is also not set on modern Hyper-V versions.
> >
> > Why do local flushes exit?
> 
> "exist"? I don't know, to be honest. To me it makes no difference from
> the hypervisor's point of view, as intercepting TLB flushing instructions
> is no different from implementing a hypercall.
> 
> Hyper-V gives its guests 'hints' to indicate whether they need to use
> hypercalls for remote/local TLB flushes, and I don't remember ever
> seeing the 'local' bit set.
> 
> Microsoft folks can probably shed some light on why this was added.

As Vitaly has indicated, these are based on hints from the hypervisor.
Not sure what the perf impact might be for the local flush enlightenment.
> 
> >
> >> +static void hyperv_flush_tlb_others(const struct cpumask *cpus,
> >> +				    struct mm_struct *mm, unsigned long start,
> >> +				    unsigned long end)
> >> +{
> >
> > What tree will this go through?  I'm about to send a signature change
> > for this function for tip:x86/mm.
> 
> I think this was going to go through Greg's char-misc tree, but if we
> need to synchronize I think we can push this through x86.

It would be good to take this through Greg's tree, as that would simplify
coordination with other changes.
> 
> >
> > Also, how would this interact with PCID?  I have PCID patches that I'm
> > pretty happy with now, and I'm hoping to support PCID in 4.13.
> >
> 
> Sorry, I wasn't following this work closely. The .flush_tlb_others()
> hook is not going away from pv_mmu_ops, right? In that case we can have
> both in 4.13. Or do you see any other clashes?
> 
> --
>   Vitaly
> _______________________________________________
> devel mailing list
> devel at linuxdriverproject.org
> http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel
