[PATCH 02/22] drm/i915: introduce simple gemfs

Joonas Lahtinen joonas.lahtinen at linux.intel.com
Wed Sep 27 07:50:59 UTC 2017


On Tue, 2017-09-26 at 23:34 +0200, Greg Kroah-Hartman wrote:
> On Tue, Sep 26, 2017 at 04:21:47PM +0300, Joonas Lahtinen wrote:
> > On Tue, 2017-09-26 at 09:52 +0200, Greg Kroah-Hartman wrote:
> > > On Mon, Sep 25, 2017 at 07:47:17PM +0100, Matthew Auld wrote:
> > > > Not a fully blown gemfs, just our very own tmpfs kernel mount. Doing so
> > > > moves us away from the shmemfs shm_mnt, and gives us the much needed
> > > > flexibility to do things like set our own mount options, namely huge=
> > > > which should allow us to enable the use of transparent-huge-pages for
> > > > our shmem backed objects.
> > > > 
> > > > v2: various improvements suggested by Joonas
> > > > 
> > > > v3: move gemfs instance to i915.mm and simplify now that we have
> > > > file_setup_with_mnt
> > > > 
> > > > v4: fallback to tmpfs shm_mnt upon failure to setup gemfs
> > > > 
> > > > v5: make tmpfs fallback kinder
> > > 
> > > Why do this only for one specific driver?  Shouldn't the drm core handle
> > > this for you, for all other drivers as well?  Otherwise trying to figure
> > > out how to "contain" this type of thing is going to be a pain (mount
> > > options, selinux options, etc.)
> > 
> > We actually started out quite grand by making a stripped-down
> > version of shmemfs for the DRM core, but kept running into NAKs
> > about how we were implementing it (after getting a recommendation
> > to try implementing it a certain way). After a few iterations and
> > massive engineering time, we have been progressively reducing the
> > amount of changes outside i915 in the hope of getting this merged.
> > 
> > And all the while the clock is ticking, so we thought the best way
> > to support our future work is to first implement this locally, with
> > minimal changes outside i915, and then once we have something
> > working it'll be easier to generalize for the DRM core. Otherwise
> > we'll never get to work on the huge page support, for which gemfs
> > is the stepping stone here.
> > 
> > So we're not planning on sitting on top of it; we'll just incubate
> > it under i915/ so that it'll be less painful for others to adopt
> > once the biggest hurdles with core MM interactions are sorted out.
> 
> But by doing this, you are now creating a new user/kernel api that you
> have to support for forever, right?  Will it not change if you make it
> "generic" to the drm core eventually?

Nope, this series is actually just for the driver to get some THPs,
regardless of whether the user asked for them or not. In this form it's
an opportunistic feature; no new API is introduced. We will also take
advantage if we happen to get order-4 pages (64KB).
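For reference, the private tmpfs mount described in the quoted patch
boils down to something like the following kernel-side sketch (the
function and field names here are illustrative, not the exact i915
code):

```c
/* Sketch of a driver-private tmpfs ("gemfs") mount; names are
 * illustrative, not the exact i915 implementation. */
static int i915_gemfs_init(struct drm_i915_private *i915)
{
	struct file_system_type *type;
	struct vfsmount *gemfs;

	type = get_fs_type("tmpfs");
	if (!type)
		return -ENODEV;

	gemfs = kern_mount(type);
	if (IS_ERR(gemfs))
		return PTR_ERR(gemfs);

	/*
	 * A private mount is where the driver can set its own mount
	 * options, e.g. huge=within_size, instead of inheriting the
	 * defaults of the global shm_mnt.
	 */
	i915->mm.gemfs = gemfs;
	return 0;
}

/* Objects are then backed by the private mount rather than shm_mnt: */
static struct file *i915_gemfs_file(struct drm_i915_private *i915,
				    const char *name, size_t size)
{
	if (i915->mm.gemfs)
		return shmem_file_setup_with_mnt(i915->mm.gemfs, name,
						 size, VM_NORESERVE);

	/* v4/v5 behaviour: fall back kindly to the global tmpfs mount */
	return shmem_file_setup(name, size, VM_NORESERVE);
}
```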

As for the API, the differences between GPU drivers are big enough that
we each have our own GEM buffer create IOCTLs, for example
I915_GEM_CREATE for i915. Those in turn call DRM core functions which
do the bulk of the work with the backing storage. So if we ever provide
an interface for the user to enforce huge pages, we'll simply add our
own bit to the IOCTL, which will then be translated to some DRM core
flag or function call.
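Purely hypothetically (nothing like this exists in the series, and the
flag, struct field, and core helper named below are all made up), such
a user-facing opt-in could look like:

```c
/* Hypothetical sketch only: no such uAPI bit, args field, or DRM core
 * helper exists in this series. */
#define I915_GEM_CREATE_HUGE_PAGES (1u << 0)	/* hypothetical uAPI bit */

int i915_gem_create_ioctl(struct drm_device *dev, void *data,
			  struct drm_file *file)
{
	struct drm_i915_gem_create *args = data;
	unsigned int flags = 0;

	/* Driver-specific bit translated to a core-level request... */
	if (args->flags & I915_GEM_CREATE_HUGE_PAGES)
		flags |= DRM_GEM_BACKING_HUGE;	/* hypothetical core flag */

	/* ...before handing off to common backing-storage code. */
	return drm_gem_object_create_with_flags(dev, args->size, flags);
}
```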

> Worst case, name it a generic name that everyone will end up using in
> the future, and then you can just claim that all other drivers need to
> implement it :)

"gem" is the DRM core memory manager (well, the other of them), so
"gemfs" is not an accidental name :) We're definitely driving it there.

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

