Fix the ZFS checksum error histograms with larger record sizes

My analysis in PR #14716 was incorrect.  Each histogram bucket counts
the incorrect bits at one position within a 64-bit word, accumulated
over the entire record.  A 2 KiB record holds 256 64-bit words, so an
8-bit bucket can overflow for record sizes above 2k.  To forestall
that, saturate each bucket at 255.  That should still get the point
across: either all bits are equally wrong, or just a couple are.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alan Somers <asomers@gmail.com>
Sponsored-by: Axcient
Closes #15049
commit 67c5e1ba4f (parent fdba8cbb79)
Author: Alan Somers
Date:   2023-07-14 17:13:15 -06:00 (committed by GitHub)
1 changed file with 1 addition and 1 deletion

@@ -790,7 +790,7 @@ update_histogram(uint64_t value_arg, uint8_t *hist, uint32_t *count)
 	/* We store the bits in big-endian (largest-first) order */
 	for (i = 0; i < 64; i++) {
 		if (value & (1ull << i)) {
-			hist[63 - i]++;
+			hist[63 - i] = MAX(hist[63 - i], hist[63 - i] + 1);
 			++bits;
 		}
 	}
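
For illustration, a minimal userspace sketch of the per-bit-position
histogram and the overflow scenario described above.  The saturation is
written here as an explicit bounds check rather than the MAX() expression
used in the patch, and the names (update_histogram_saturating, bad_bits)
are assumptions of this sketch, not identifiers from the ZFS source.

#include <stdint.h>
#include <stdio.h>

/*
 * For each of the 64 bit positions, count how many words in the record
 * had that bit wrong, saturating each 8-bit bucket at 255 instead of
 * letting it wrap back to zero.
 */
static void
update_histogram_saturating(uint64_t bad_bits, uint8_t *hist, uint32_t *count)
{
	/* Buckets are kept in big-endian (largest-first) bit order. */
	for (int i = 0; i < 64; i++) {
		if (bad_bits & (1ull << i)) {
			if (hist[63 - i] < UINT8_MAX)	/* saturate at 255 */
				hist[63 - i]++;
			(*count)++;
		}
	}
}

int
main(void)
{
	uint8_t hist[64] = { 0 };
	uint32_t count = 0;

	/*
	 * A 16 KiB record holds 2048 64-bit words.  If every bit were
	 * wrong, an unsaturated uint8_t bucket would wrap (2048 mod 256
	 * == 0) and report zero errors; the saturating version pins at
	 * 255, which still distinguishes "all bits wrong" from "a couple
	 * of bits wrong".
	 */
	for (int w = 0; w < 2048; w++)
		update_histogram_saturating(~0ull, hist, &count);

	printf("bucket 0 = %u, total flipped bits = %lu\n",
	    (unsigned)hist[0], (unsigned long)count);
	return (0);
}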