[patch 09/11] x86/vdso: Simplify the invalid vclock case

Andy Lutomirski luto at kernel.org
Mon Sep 17 19:25:38 UTC 2018


On Fri, Sep 14, 2018 at 5:50 AM, Thomas Gleixner <tglx at linutronix.de> wrote:
> The code flow for the vclocks is convoluted as it requires the vclocks
> which can be invalidated separately from the vsyscall_gtod_data sequence to
> store the fact in a separate variable. That's inefficient.
>

>  notrace static int do_hres(clockid_t clk, struct timespec *ts)
>  {
>         struct vgtod_ts *base = &gtod->basetime[clk];
>         unsigned int seq;
> -       int mode;
> -       u64 ns;
> +       u64 cycles, ns;
>
>         do {
>                 seq = gtod_read_begin(gtod);
> -               mode = gtod->vclock_mode;
>                 ts->tv_sec = base->sec;
>                 ns = base->nsec;
> -               ns += vgetsns(&mode);
> +               cycles = vgetcyc(gtod->vclock_mode);
> +               if (unlikely((s64)cycles < 0))
> +                       return vdso_fallback_gettime(clk, ts);

I was contemplating this, and I would suggest one of two optimizations:

1. Have all the helpers return a struct containing a u64 and a bool,
and use that bool.  The compiler should do okay.  (A rough sketch of
what I mean is below, after option 2.)

2. Be sneaky.  Later in the series, you do:

                if (unlikely((s64)cycles < 0))
                        return vdso_fallback_gettime(clk, ts);
-               ns += (cycles - gtod->cycle_last) * gtod->mult;
+               if (cycles > last)
+                       ns += (cycles - last) * gtod->mult;

How about:

if (unlikely((s64)cycles <= (s64)last)) {
        if ((s64)cycles < 0)    /* or cycles == -1 or whatever */
                return vdso_fallback_gettime(clk, ts);
} else {
        ns += (cycles - last) * gtod->mult;
}

which should actually make this whole mess essentially free: the common
case keeps the single cycles-vs-last comparison it needs anyway, and the
check for the invalid marker only runs on the cold branch.
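
To be concrete, option 1 would look something like the sketch below.
The struct name is made up, the non-TSC readers are elided, and the
do_hres() scaffolding around the call site is reconstructed from memory,
so treat it as an illustration of the calling convention rather than a
real patch:

struct vclock_cycles {
        u64     cycles;
        bool    valid;
};

notrace static struct vclock_cycles vgetcyc(int mode)
{
        struct vclock_cycles c = { .valid = true };

        if (mode == VCLOCK_TSC)
                c.cycles = rdtsc_ordered();
        else    /* pvclock/hvclock cases elided; they'd clear .valid on error */
                c.valid = false;

        return c;
}

notrace static int do_hres(clockid_t clk, struct timespec *ts)
{
        struct vgtod_ts *base = &gtod->basetime[clk];
        struct vclock_cycles c;
        unsigned int seq;
        u64 ns;

        do {
                seq = gtod_read_begin(gtod);
                ts->tv_sec = base->sec;
                ns = base->nsec;
                c = vgetcyc(gtod->vclock_mode);
                if (unlikely(!c.valid))
                        return vdso_fallback_gettime(clk, ts);
                if (c.cycles > gtod->cycle_last)
                        ns += (c.cycles - gtod->cycle_last) * gtod->mult;
                ns >>= gtod->shift;
        } while (unlikely(gtod_read_retry(gtod, seq)));

        ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
        ts->tv_nsec = ns;

        return 0;
}

A 16-byte struct like that comes back in a register pair on x86_64, so
the extra bool shouldn't cost anything over the sign-bit trick.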

Also, I'm not entirely convinced that this "last" thing is needed at
all.  John, what's the scenario under which we need it?

--Andy


