[PATCH] scsi: storvsc: use shost_for_each_device() instead of open coding

KY Srinivasan kys at microsoft.com
Fri Jul 3 18:34:41 UTC 2015



> -----Original Message-----
> From: Vitaly Kuznetsov [mailto:vkuznets at redhat.com]
> Sent: Wednesday, July 1, 2015 2:31 AM
> To: linux-scsi at vger.kernel.org
> Cc: Long Li; KY Srinivasan; Haiyang Zhang; James E.J. Bottomley;
> devel at linuxdriverproject.org; linux-kernel at vger.kernel.org
> Subject: [PATCH] scsi: storvsc: use shost_for_each_device() instead of open
> coding
> 
> A comment in struct Scsi_Host says that drivers are not supposed to
> access __devices directly. storvsc_host_scan() does not run in IRQ
> context, so we can simply use shost_for_each_device().
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets at redhat.com>

Signed-off-by: K. Y. Srinivasan <kys at microsoft.com>
> ---
>  drivers/scsi/storvsc_drv.c | 9 +--------
>  1 file changed, 1 insertion(+), 8 deletions(-)
> 
> diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
> index 3c6584f..9ea912b 100644
> --- a/drivers/scsi/storvsc_drv.c
> +++ b/drivers/scsi/storvsc_drv.c
> @@ -426,7 +426,6 @@ static void storvsc_host_scan(struct work_struct
> *work)
>  	struct storvsc_scan_work *wrk;
>  	struct Scsi_Host *host;
>  	struct scsi_device *sdev;
> -	unsigned long flags;
> 
>  	wrk = container_of(work, struct storvsc_scan_work, work);
>  	host = wrk->host;
> @@ -443,14 +442,8 @@ static void storvsc_host_scan(struct work_struct
> *work)
>  	 * may have been removed this way.
>  	 */
>  	mutex_lock(&host->scan_mutex);
> -	spin_lock_irqsave(host->host_lock, flags);
> -	list_for_each_entry(sdev, &host->__devices, siblings) {
> -		spin_unlock_irqrestore(host->host_lock, flags);
> +	shost_for_each_device(sdev, host)
>  		scsi_test_unit_ready(sdev, 1, 1, NULL);
> -		spin_lock_irqsave(host->host_lock, flags);
> -		continue;
> -	}
> -	spin_unlock_irqrestore(host->host_lock, flags);
>  	mutex_unlock(&host->scan_mutex);
>  	/*
>  	 * Now scan the host to discover LUNs that may have been added.
> --
> 2.4.3
