Fix arc_p aggressive increase

The original ARC paper called for an initial 50/50 MRU/MFU split,
and this is accounted for in various places where arc_p = arc_c >> 1,
with further adjustment based on ghost list size/hits. However, in
the current code both arc_adapt() and arc_get_data_impl() aggressively
grow arc_p until arc_c is reached, putting unneeded pressure on
MFU and greatly reducing its scan-resistance until the ghost list
adjustments kick in.

This patch restores the original behavior of initially setting arc_p
to 1/2 of the total ARC, while still allowing MRU to use up to 100%
of the total ARC when MFU is empty.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Gionatan Danti <g.danti@assyoma.it>
Closes #14137
Closes #14120
shodanshok 2022-11-11 19:41:36 +01:00 committed by Tony Hutter
parent 957c3776f2
commit d9de079a4b
1 changed file with 3 additions and 2 deletions

@@ -5166,7 +5166,7 @@ arc_adapt(int bytes, arc_state_t *state)
 		atomic_add_64(&arc_c, (int64_t)bytes);
 		if (arc_c > arc_c_max)
 			arc_c = arc_c_max;
-		else if (state == arc_anon)
+		else if (state == arc_anon && arc_p < arc_c >> 1)
 			atomic_add_64(&arc_p, (int64_t)bytes);
 		if (arc_p > arc_c)
 			arc_p = arc_c;
@@ -5379,7 +5379,8 @@ arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag,
 	if (aggsum_upper_bound(&arc_sums.arcstat_size) < arc_c &&
 	    hdr->b_l1hdr.b_state == arc_anon &&
 	    (zfs_refcount_count(&arc_anon->arcs_size) +
-	    zfs_refcount_count(&arc_mru->arcs_size) > arc_p))
+	    zfs_refcount_count(&arc_mru->arcs_size) > arc_p &&
+	    arc_p < arc_c >> 1))
 		arc_p = MIN(arc_c, arc_p + size);
 	}
 }