blk-mq: convert to serialize updating nr_requests with update_nr_hwq_lock

request_queue->nr_requests can be changed by:

a) switching the elevator by updating nr_hw_queues
b) switching the elevator via the elevator sysfs attribute
c) configuring the queue sysfs attribute nr_requests

The current lock order is:

1) update_nr_hwq_lock (cases a and b)
2) freeze_queue
3) elevator_lock (cases a, b and c)

Updating nr_requests is already serialized by elevator_lock. However, in
case c, new sched_tags must be allocated if nr_requests grows, and doing
that allocation with elevator_lock held and the queue frozen carries a
risk of deadlock.
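
For illustration only (not code from this patch): a minimal sketch of the
case c ordering before this change. The reclaim scenario in the comment is
the usual reasoning for not allocating under a frozen queue, spelled out
here as an assumption rather than quoted from the original message.

        memflags = blk_mq_freeze_queue(q);      /* q_usage_counter frozen */
        mutex_lock(&q->elevator_lock);
        /*
         * Growing nr_requests may require allocating a larger sched_tags
         * here.  If the allocation enters reclaim and the resulting I/O
         * targets this very queue, it waits for the queue to be unfrozen,
         * which cannot happen while we block here: deadlock.
         */
        err = blk_mq_update_nr_requests(q, nr);
        mutex_unlock(&q->elevator_lock);
        blk_mq_unfreeze_queue(q, memflags);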

Hence serialize with update_nr_hwq_lock instead: this makes it possible to
allocate memory when the tags grow, while still preventing nr_requests from
being changed concurrently.
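
In outline (simplified from the diff below; the nr == q->nr_requests early
exit and the validation failure paths are trimmed), the store path now
becomes:

        down_write(&set->update_nr_hwq_lock);
        /* validate nr against set->reserved_tags / MAX_SCHED_RQ / set->queue_depth */
        memflags = blk_mq_freeze_queue(q);
        mutex_lock(&q->elevator_lock);
        err = blk_mq_update_nr_requests(q, nr);
        mutex_unlock(&q->elevator_lock);
        blk_mq_unfreeze_queue(q, memflags);
        up_write(&set->update_nr_hwq_lock);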

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Author: Yu Kuai, 2025-09-10 16:04:39 +08:00 (committed by Jens Axboe)
parent b46d4c447d
commit 626ff4f8eb

block/blk-sysfs.c

@@ -68,13 +68,17 @@ queue_requests_store(struct gendisk *disk, const char *page, size_t count)
 	int ret, err;
 	unsigned int memflags;
 	struct request_queue *q = disk->queue;
+	struct blk_mq_tag_set *set = q->tag_set;
 	ret = queue_var_store(&nr, page, count);
 	if (ret < 0)
 		return ret;
-	memflags = blk_mq_freeze_queue(q);
-	mutex_lock(&q->elevator_lock);
+	/*
+	 * Serialize updating nr_requests with concurrent queue_requests_store()
+	 * and switching elevator.
+	 */
+	down_write(&set->update_nr_hwq_lock);
 	if (nr == q->nr_requests)
 		goto unlock;
@@ -82,20 +86,31 @@ queue_requests_store(struct gendisk *disk, const char *page, size_t count)
 	if (nr < BLKDEV_MIN_RQ)
 		nr = BLKDEV_MIN_RQ;
-	if (nr <= q->tag_set->reserved_tags ||
+	/*
+	 * Switching elevator is protected by update_nr_hwq_lock:
+	 * - read lock is held from elevator sysfs attribute;
+	 * - write lock is held from updating nr_hw_queues;
+	 * Hence it's safe to access q->elevator here with write lock held.
+	 */
+	if (nr <= set->reserved_tags ||
 	    (q->elevator && nr > MAX_SCHED_RQ) ||
-	    (!q->elevator && nr > q->tag_set->queue_depth)) {
+	    (!q->elevator && nr > set->queue_depth)) {
 		ret = -EINVAL;
 		goto unlock;
 	}
+	memflags = blk_mq_freeze_queue(q);
+	mutex_lock(&q->elevator_lock);
 	err = blk_mq_update_nr_requests(disk->queue, nr);
 	if (err)
 		ret = err;
-unlock:
 	mutex_unlock(&q->elevator_lock);
 	blk_mq_unfreeze_queue(q, memflags);
+unlock:
+	up_write(&set->update_nr_hwq_lock);
 	return ret;
 }
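
For reference, the resulting lock nesting on the nr_requests store path, as
read from the hunks above:

        /*
         * queue_requests_store()
         *   down_write(&set->update_nr_hwq_lock)
         *     blk_mq_freeze_queue(q)
         *       mutex_lock(&q->elevator_lock)
         *         blk_mq_update_nr_requests(q, nr)
         */

Note that the early "goto unlock" exits (unchanged or invalid nr) now run
before the queue is frozen, so they only drop update_nr_hwq_lock and never
touch elevator_lock or the freeze count.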