[PATCH 1/1] Drivers: base: memory: Export symbols for onlining memory blocks

KY Srinivasan kys at microsoft.com
Tue Jul 23 17:21:08 UTC 2013



> -----Original Message-----
> From: Dave Hansen [mailto:dave.hansen at intel.com]
> Sent: Tuesday, July 23, 2013 12:01 PM
> To: KY Srinivasan
> Cc: Michal Hocko; gregkh at linuxfoundation.org; linux-kernel at vger.kernel.org;
> devel at linuxdriverproject.org; olaf at aepfle.de; apw at canonical.com;
> andi at firstfloor.org; akpm at linux-foundation.org; linux-mm at kvack.org;
> kamezawa.hiroyuki at gmail.com; hannes at cmpxchg.org; yinghan at google.com;
> jasowang at redhat.com; kay at vrfy.org
> Subject: Re: [PATCH 1/1] Drivers: base: memory: Export symbols for onlining
> memory blocks
> 
> On 07/23/2013 08:54 AM, KY Srinivasan wrote:
> >> > Adding memory usually requires allocating some large, contiguous areas
> >> > of memory for use as mem_map[] and other VM structures.  That's really
> >> > hard to do under heavy memory pressure.  How are you accomplishing this?
> > I cannot avoid failures because of lack of memory. In this case I notify the host
> > of the failure and also tag the failure as transient. Host retries the operation
> > after some delay. There is no guarantee it will succeed though.
> 
> You didn't really answer the question.
> 
> You have allocated some large, physically contiguous areas of memory
> under heavy pressure.  But you also contend that there is too much
> memory pressure to run a small userspace helper.  Under heavy memory
> pressure, I'd expect large, kernel allocations to fail much more often
> than running a small userspace helper.

I am only reporting what I am seeing. Broadly, I have two main failure conditions to
deal with: (a) a resource-related failure (add_memory() returning -ENOMEM) and (b) not being
able to online a segment that has been successfully hot-added. I have seen both of these failures
under high memory pressure. By supporting "in context" onlining, we can eliminate the second failure
class entirely. Our inability to online is not a recoverable failure from the host's point of view: the memory
is committed to the guest (since the hot add succeeded) but is not usable because it has not been onlined.
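The distinction between the two failure classes can be sketched with a small userspace simulation. Everything below is illustrative only (the function names, the flags, and the -EBUSY choice are assumptions for the sketch, not the actual hv_balloon or memory-hotplug code): model (a) leaves onlining to a userspace helper that may not run under memory pressure, while model (b) onlines in the same kernel context as the hot add, so only the transient, host-retryable -ENOMEM case remains.

```c
#include <errno.h>
#include <stdbool.h>

/* Simulated conditions discussed in the thread. */
static bool allocator_under_pressure;   /* heavy memory pressure in the guest */
static bool userspace_helper_runs;      /* can the udev online helper execute? */

/* add_memory()-style phase: may fail with -ENOMEM; the host treats
 * this as transient and retries the operation after a delay. */
static int hot_add_segment(void)
{
    return allocator_under_pressure ? -ENOMEM : 0;
}

/* Model (a): hot add succeeds, but onlining depends on a userspace
 * helper. If the helper cannot run, the memory is committed to the
 * guest yet unusable -- an unrecoverable state from the host's view. */
static int hot_add_userspace_online(void)
{
    int ret = hot_add_segment();
    if (ret)
        return ret;             /* transient: host retries */
    if (!userspace_helper_runs)
        return -EBUSY;          /* stranded: committed but never onlined */
    return 0;
}

/* Model (b): "in context" onlining inside the hot-add path itself.
 * The helper dependency disappears; only the transient -ENOMEM
 * failure class is left, which the host already knows how to retry. */
static int hot_add_in_context_online(void)
{
    int ret = hot_add_segment();
    if (ret)
        return ret;             /* the one remaining failure class */
    /* onlining happens here, in kernel context: nothing to strand */
    return 0;
}
```

Under pressure both models report -ENOMEM and the host retries; with the helper unavailable, only model (a) strands committed-but-offline memory.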
> 
> It _sounds_ like you really want to be able to have the host retry the
> operation if it fails, and you return success/failure from inside the
> kernel.  It's hard for you to tell if running the userspace helper
> failed, so your solution is to move what was previously done in
> userspace in to the kernel so that you can more easily tell if it failed
> or succeeded.
> 
> Is that right?

No; I am able to get the proper error code for recoverable failures (hot-add failures
because of lack of memory). By doing what I am proposing here, we can avoid one class
of failures completely, and I think this is what results in a better "hot add" experience
in the guest.

K. Y 