[PATCH 0/9] staging: octeon: multi rx group (queue) support

Aaro Koskinen <aaro.koskinen at iki.fi>
Wed Aug 31 06:29:15 UTC 2016


Hi,

On Tue, Aug 30, 2016 at 06:12:17PM -0700, Ed Swierk wrote:
> On Tue, Aug 30, 2016 at 11:47 AM, Aaro Koskinen <aaro.koskinen at iki.fi> wrote:
> > This series implements multiple RX group support, which should improve
> > networking performance on multi-core OCTEONs. Basically, we register
> > an IRQ and a NAPI instance for each group, and ask the hardware to
> > select the group for incoming packets based on a hash.
> >
> > Tested on an EdgeRouter Lite with a simple forwarding test using two
> > flows and 16 RX groups distributed between two cores: the routing
> > throughput is roughly doubled.
> 
> I applied the series to my 4.4.19 tree, which involved backporting a
> bunch of other patches from master, most of them trivial.
> 
> When I test it on a Cavium Octeon II (CN6880) board, I get an immediate
> crash (bus error) in the netif_receive_skb() call from cvm_oct_poll().
> Replacing the rx_group argument to cvm_oct_poll() with a plain int
> group, and dereferencing rx_group->group in the caller
> (cvm_oct_napi_poll()) instead, makes the crash disappear. Apparently
> there's some race in dereferencing rx_group from within cvm_oct_poll().

Oops, it looks like I tested without CONFIG_NET_POLL_CONTROLLER enabled,
and that code path seems to be broken. Sorry.
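
Just to make sure I read the workaround right, I understand it as
something like this (a sketch only, against ethernet-rx.c as patched by
this series, where struct oct_rx_group holds the NAPI context, the group
number and the IRQ):

	/* Prototype change: take a plain group number instead of the
	 * rx_group pointer.
	 */
	static int cvm_oct_poll(int group, int budget);

	static int cvm_oct_napi_poll(struct napi_struct *napi, int budget)
	{
		struct oct_rx_group *rx_group =
			container_of(napi, struct oct_rx_group, napi);
		int rx_count;

		/* Dereference rx_group->group here, in the NAPI handler,
		 * so cvm_oct_poll() never touches the rx_group pointer.
		 */
		rx_count = cvm_oct_poll(rx_group->group, budget);

		if (rx_count < budget) {
			/* No more work: re-enable the group's IRQ. */
			napi_complete(napi);
			enable_irq(rx_group->irq);
		}
		return rx_count;
	}

Is that what you did?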

> With this workaround in place, I can send and receive on XAUI
> interfaces, but I don't see any performance improvement. I'm guessing I
> need to set receive_group_order > 0, but any value between 1 and 4
> seems to break RX altogether. When I ping another host, I see both the
> request and the response on the wire, and the interface counters
> increase, but the response doesn't make it back to ping.

Can you see multiple Ethernet IRQs in /proc/interrupts, and are their
counters increasing?

With receive_group_order=4 you should see 16 IRQs.
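
A quick way to check is to count them, e.g. (the name the driver
registers in /proc/interrupts may differ, so treat the pattern as a
guess):

	grep -c -i eth /proc/interrupts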

> Is some other configuration needed to make use of multiple RX groups?

Once RX interrupts are working, you need to distribute them across
multiple cores using /proc/irq/<number>/smp_affinity, or use irqbalance
or a similar tool.
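
For example, to put two of the RX IRQs on two different cores (the IRQ
numbers below are made up, take the real ones from /proc/interrupts):

	echo 1 > /proc/irq/56/smp_affinity
	echo 2 > /proc/irq/57/smp_affinity

smp_affinity takes a CPU bitmask, so 1 targets core 0 and 2 targets
core 1.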

A.

