Improve logging of 128KB writes

Before my ZIL space optimization a few years ago, 128KB writes were
logged as two 64KB+ records in two 128KB log blocks.  After that change
they became ~127KB+/1KB+ in two 128KB log blocks, freeing space in the
second block for another record.  Unfortunately, in the case of
128KB-only writes, when space in the second block remained unused, that
change increased write latency by unbalancing checksum computation and
write times between parallel threads.  It also didn't help SLOG space
efficiency in that case.

This change introduces a new 68KB log block size, used both for writes
below 67KB and for 128KB-sharp writes.  Writes of 68-127KB still use
one 128KB block to avoid increasing processing overhead.  Writes above
131KB still use full 128KB blocks, since the possible savings there are
small.  Mixed loads will likely also fall back to the previous 128KB,
since the code uses the maximum of the last 16 requested block sizes.
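The bucket-based size selection described above can be sketched as a
small standalone C program.  This is an illustrative model, not the
actual ZFS source: `pick_blksz` and `buckets` are made-up names, and
the real code also caps the result by the pool's maximum block size.

```c
/*
 * Sketch of the ZIL bucket lookup: a record needing 'needed' bytes maps
 * to the smallest bucket whose limit it fits under, and the bucket then
 * dictates the log block size.  Note the 128KB bucket intentionally
 * returns the smaller 68KB block size, so a 128KB write is split across
 * two balanced 68KB blocks.  Names here are illustrative only.
 */
#include <stdint.h>

#define SPA_OLD_MAXBLOCKSIZE	(128 * 1024)

static const struct {
	uint64_t limit;		/* largest request served by this bucket */
	uint64_t blksz;		/* log block size allocated for it */
} buckets[] = {
	{ 4096,		 4096 },		  /* non TX_WRITE */
	{ 8192 + 4096,	 8192 + 4096 },		  /* database */
	{ 32768 + 4096,	 32768 + 4096 },	  /* NFS writes */
	{ 65536 + 4096,	 65536 + 4096 },	  /* 64KB writes */
	{ 131072,	 131072 },		  /* < 128KB writes */
	{ 131072 + 4096, 65536 + 4096 },	  /* 128KB writes */
	{ UINT64_MAX,	 SPA_OLD_MAXBLOCKSIZE },  /* > 128KB writes */
};

/* Pick the log block size for a record needing 'needed' bytes. */
uint64_t
pick_blksz(uint64_t needed)
{
	int i;

	/* Terminates because the last bucket's limit is UINT64_MAX. */
	for (i = 0; needed > buckets[i].limit; i++)
		continue;
	return (buckets[i].blksz);
}
```

For example, a 128KB record plus a small chain header falls into the
sixth bucket and gets a 68KB block, while anything larger falls through
to the full 128KB block.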

Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Closes #9409
Authored by Alexander Motin on 2019-11-11 20:27:59 +03:00; committed by Brian Behlendorf
parent 2f1ca8a32a
commit f15d6a5457
1 changed file with 13 additions and 7 deletions


@@ -1414,11 +1414,17 @@ zil_lwb_write_open(zilog_t *zilog, lwb_t *lwb)
 	 * aligned to 4KB) actually gets written. However, we can't always just
 	 * allocate SPA_OLD_MAXBLOCKSIZE as the slog space could be exhausted.
 	 */
-	uint64_t zil_block_buckets[] = {
-	    4096,		/* non TX_WRITE */
-	    8192+4096,		/* data base */
-	    32*1024 + 4096,	/* NFS writes */
-	    UINT64_MAX
+	struct {
+		uint64_t	limit;
+		uint64_t	blksz;
+	} zil_block_buckets[] = {
+		{ 4096,		4096 },			/* non TX_WRITE */
+		{ 8192 + 4096,	8192 + 4096 },		/* database */
+		{ 32768 + 4096,	32768 + 4096 },		/* NFS writes */
+		{ 65536 + 4096,	65536 + 4096 },		/* 64KB writes */
+		{ 131072,	131072 },		/* < 128KB writes */
+		{ 131072 + 4096, 65536 + 4096 },	/* 128KB writes */
+		{ UINT64_MAX,	SPA_OLD_MAXBLOCKSIZE },	/* > 128KB writes */
 	};

@@ -1502,9 +1508,9 @@ zil_lwb_write_issue(zilog_t *zilog, lwb_t *lwb)
 	 * pool log space.
 	 */
 	zil_blksz = zilog->zl_cur_used + sizeof (zil_chain_t);
-	for (i = 0; zil_blksz > zil_block_buckets[i]; i++)
+	for (i = 0; zil_blksz > zil_block_buckets[i].limit; i++)
 		continue;
-	zil_blksz = MIN(zil_block_buckets[i], zilog->zl_max_block_size);
+	zil_blksz = MIN(zil_block_buckets[i].blksz, zilog->zl_max_block_size);
 	zilog->zl_prev_blks[zilog->zl_prev_rotor] = zil_blksz;
 	for (i = 0; i < ZIL_PREV_BLKS; i++)
 		zil_blksz = MAX(zil_blksz, zilog->zl_prev_blks[i]);