[PATCH v2] Drivers: hv: vmbus: Expose counters for interrupts and full conditions

Dexuan Cui decui at microsoft.com
Thu Jan 10 03:58:34 UTC 2019


> From: Michael Kelley <mikelley at microsoft.com>
> Sent: Saturday, January 5, 2019 1:01 PM
> > From: Kimberly Brown <kimbrownkd at gmail.com>
> > Sent: Friday, January 4, 2019 8:35 PM
> >
> >  static inline void set_channel_pending_send_size(struct vmbus_channel *c,
> >  						 u32 size)
> >  {
> > +	if (size) {
> > +		++c->out_full_total;
> > +
> > +		if (!c->out_full_flag) {
> > +			++c->out_full_first;
> > +			c->out_full_flag = true;
> > +		}
> > +	} else {
> > +		c->out_full_flag = false;
> > +	}
> > +
> >  	c->outbound.ring_buffer->pending_send_sz = size;
> >  }
> >
> 
> I think there may be an atomicity problem with the above code.  I looked
> in the hv_sock code, and didn't see any locks being held when
> set_channel_pending_send_size() is called.   The original code doesn't need
> a lock because it is just storing a single value into pending_send_sz.
>
> In the similar code in hv_ringbuffer_write(), the ring buffer spin lock is held
> while the counts are incremented and the out_full_flag is maintained, so all is
> good there.  But some locking may be needed here.  Dexuan knows the hv_sock
> code best and can comment on whether there is any higher level synchronization
> that prevents multiple threads from running the above code on the same channel.
> Even if there is such higher level synchronization, this code probably shouldn't
> depend on it for correctness.
> 
> Michael

Yes, there is indeed a higher-level per-"sock" lock. For example, in the code path
vsock_stream_sendmsg() -> vsock_stream_has_space() ->
transport->stream_has_space() -> hvs_stream_has_space() ->
hvs_set_channel_pending_send_size(), we take the lock with
lock_sock(sk) at the beginning of vsock_stream_sendmsg() and release it with
release_sock(sk) at the end of the function.
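
To illustrate, here is a heavily abridged sketch of that path -- not the actual
code in net/vmw_vsock/af_vsock.c, just an outline of where the sock lock is
held relative to the counter updates:

static int vsock_stream_sendmsg(struct socket *sock, struct msghdr *msg,
				size_t len)
{
	struct sock *sk = sock->sk;
	int err = 0;

	lock_sock(sk);		/* per-"sock" lock taken here */

	/*
	 * With the lock held, the sendmsg path eventually reaches:
	 *   vsock_stream_has_space()
	 *     -> transport->stream_has_space(), i.e. hvs_stream_has_space()
	 *        -> hvs_set_channel_pending_send_size()
	 *           -> set_channel_pending_send_size()
	 * so the new counters are only updated under lock_sock(sk).
	 */

	release_sock(sk);	/* lock dropped at the end of the function */
	return err;
}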

So there is no atomicity issue here for hv_sock, which is, so far, the only
user of set_channel_pending_send_size().
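
That said, if we ever want the counters not to depend on that sock-level
locking (Michael's last point), one option might be to take the outbound
ring's spin lock around the updates, roughly like the sketch below. This is
only a sketch and untested; it assumes c->outbound.ring_lock is the same lock
hv_ringbuffer_write() takes and is appropriate to reuse here:

static inline void set_channel_pending_send_size(struct vmbus_channel *c,
						 u32 size)
{
	unsigned long flags;

	/*
	 * Serialize the counter updates against the same updates done in
	 * hv_ringbuffer_write() (assumption: outbound.ring_lock is the
	 * right lock for this).
	 */
	spin_lock_irqsave(&c->outbound.ring_lock, flags);
	if (size) {
		++c->out_full_total;

		if (!c->out_full_flag) {
			++c->out_full_first;
			c->out_full_flag = true;
		}
	} else {
		c->out_full_flag = false;
	}
	spin_unlock_irqrestore(&c->outbound.ring_lock, flags);

	c->outbound.ring_buffer->pending_send_sz = size;
}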

Thanks,
-- Dexuan

