Add comment on metaslab_class_throttle_reserve() locking

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Issue #12314
Closes #12419
Alexander Motin 2021-07-26 19:30:20 -04:00 committed by Tony Hutter
parent 9429910781
commit e298ac5d04
1 changed file with 7 additions and 0 deletions

@@ -5617,6 +5617,13 @@ metaslab_class_throttle_reserve(metaslab_class_t *mc, int slots, int allocator,
 	if (GANG_ALLOCATION(flags) || (flags & METASLAB_MUST_RESERVE) ||
 	    zfs_refcount_count(&mca->mca_alloc_slots) + slots <= max) {
 		/*
+		 * The potential race between _count() and _add() is covered
+		 * by the allocator lock in most cases, or is irrelevant when
+		 * GANG_ALLOCATION() or METASLAB_MUST_RESERVE is set in others.
+		 * But even if we assume some other, non-existent scenario, the
+		 * worst that can happen is that a few more I/Os get to
+		 * allocation earlier, which is not a problem.
+		 *
 		 * We reserve the slots individually so that we can unreserve
 		 * them individually when an I/O completes.
 		 */
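For illustration, here is a minimal, self-contained sketch of the check-then-add pattern the new comment describes. The names here (alloc_counter_t, throttle_reserve, throttle_unreserve) are hypothetical stand-ins, not the actual ZFS API: the real code uses zfs_refcount_t with per-reservation tags under the allocator lock, while this sketch uses C11 atomics to show why the unlocked check-then-add race is benign.

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Hypothetical, simplified stand-in for the per-allocator slot counter.
 * The real code tracks reservations with zfs_refcount_t and a tag.
 */
typedef struct {
	atomic_long	ac_slots;	/* slots currently reserved */
} alloc_counter_t;

/*
 * Check-then-add reservation. The load and the increments below are
 * not one atomic step, so two racing threads can both pass the check.
 * The worst outcome is a brief overshoot of 'max' -- a few extra I/Os
 * reach allocation early -- which is harmless, exactly as the commit's
 * comment argues.
 */
static bool
throttle_reserve(alloc_counter_t *ac, long slots, long max,
    bool must_reserve)
{
	if (must_reserve ||
	    atomic_load(&ac->ac_slots) + slots <= max) {
		/*
		 * Reserve the slots individually so that they can be
		 * unreserved individually as each I/O completes.
		 */
		for (long i = 0; i < slots; i++)
			atomic_fetch_add(&ac->ac_slots, 1);
		return (true);
	}
	return (false);
}

static void
throttle_unreserve(alloc_counter_t *ac, long slots)
{
	for (long i = 0; i < slots; i++)
		atomic_fetch_sub(&ac->ac_slots, 1);
}

Reserving slot-by-slot rather than in one bulk add mirrors the design noted in the diff: each completing I/O can return exactly its own slot without coordinating with the other I/Os that were admitted alongside it.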