zfs recv hangs if max recordsize is less than received recordsize
- Some optimizations for bqueue enqueue/dequeue.
- Added a fix to prevent deadlock when both bqueue_enqueue_impl() and
  bqueue_dequeue() wait for a signal to be triggered.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Signed-off-by: Ameer Hamza <ahamza@ixsystems.com>
Closes #13855
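The deadlock the message describes is the classic lost-wakeup hazard of a bounded producer/consumer queue: if the enqueuing side and the dequeuing side can each go to sleep without the other being guaranteed a wakeup, both block forever. The sketch below is a minimal, self-contained illustration of that signalling pattern, not the OpenZFS bqueue code; the toy_bqueue_* names, the fixed capacity, and the pthread primitives are assumptions made for the example. The key point is that each side signals the other's condition variable every time it changes the queue, so neither can sleep past a state change it was waiting for.

/*
 * Toy bounded producer/consumer queue (illustrative only).
 * Build with: cc -pthread toy_bqueue.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define	TOY_BQUEUE_CAP	8

typedef struct toy_bqueue {
	pthread_mutex_t	bq_lock;
	pthread_cond_t	bq_not_full;	/* producer waits here */
	pthread_cond_t	bq_not_empty;	/* consumer waits here */
	uint64_t	bq_items[TOY_BQUEUE_CAP];
	int		bq_count;
	int		bq_head;
	int		bq_tail;
} toy_bqueue_t;

static void
toy_bqueue_init(toy_bqueue_t *q)
{
	pthread_mutex_init(&q->bq_lock, NULL);
	pthread_cond_init(&q->bq_not_full, NULL);
	pthread_cond_init(&q->bq_not_empty, NULL);
	q->bq_count = q->bq_head = q->bq_tail = 0;
}

static void
toy_bqueue_enqueue(toy_bqueue_t *q, uint64_t item)
{
	pthread_mutex_lock(&q->bq_lock);
	while (q->bq_count == TOY_BQUEUE_CAP)
		pthread_cond_wait(&q->bq_not_full, &q->bq_lock);
	q->bq_items[q->bq_tail] = item;
	q->bq_tail = (q->bq_tail + 1) % TOY_BQUEUE_CAP;
	q->bq_count++;
	/* Wake a consumer that may be sleeping on an empty queue. */
	pthread_cond_signal(&q->bq_not_empty);
	pthread_mutex_unlock(&q->bq_lock);
}

static uint64_t
toy_bqueue_dequeue(toy_bqueue_t *q)
{
	uint64_t item;

	pthread_mutex_lock(&q->bq_lock);
	while (q->bq_count == 0)
		pthread_cond_wait(&q->bq_not_empty, &q->bq_lock);
	item = q->bq_items[q->bq_head];
	q->bq_head = (q->bq_head + 1) % TOY_BQUEUE_CAP;
	q->bq_count--;
	/* Wake a producer that may be sleeping on a full queue. */
	pthread_cond_signal(&q->bq_not_full);
	pthread_mutex_unlock(&q->bq_lock);
	return (item);
}

int
main(void)
{
	toy_bqueue_t q;

	toy_bqueue_init(&q);
	toy_bqueue_enqueue(&q, 42);
	printf("dequeued %llu\n", (unsigned long long)toy_bqueue_dequeue(&q));
	return (0);
}

The real bqueue is more involved (it tracks sizes in bytes and uses fill thresholds), but the same rule applies: whoever changes the queue must wake the side that could be waiting on that change.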
commit d5105f068f
parent faa1e4082d
@@ -1622,9 +1622,9 @@ typedef enum {
  * against the cost of COWing a giant block to modify one byte, and the
  * large latency of reading or writing a large block.
  *
- * Note that although blocks up to 16MB are supported, the recordsize
- * property can not be set larger than zfs_max_recordsize (default 1MB).
- * See the comment near zfs_max_recordsize in dsl_dataset.c for details.
+ * The recordsize property can not be set larger than zfs_max_recordsize
+ * (default 16MB on 64-bit and 1MB on 32-bit). See the comment near
+ * zfs_max_recordsize in dsl_dataset.c for details.
  *
  * Note that although the LSIZE field of the blkptr_t can store sizes up
  * to 32MB, the dnode's dn_datablkszsec can only store sizes up to
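To make the size arithmetic in the comment above concrete: as I understand the on-disk format, the dnode's dn_datablkszsec records the data block size as a count of 512-byte sectors in a 16-bit field, which is why its ceiling sits at roughly 32MB even though the recordsize property is further capped by zfs_max_recordsize. The snippet below is a hedged sketch of that conversion using stand-in TOY_* constants, not the actual ZFS header definitions.

#include <stdint.h>
#include <stdio.h>

#define	TOY_MINBLOCKSHIFT	9	/* 512-byte sectors (assumed) */

/* Convert a sector count, as dn_datablkszsec would store it, to bytes. */
static uint64_t
toy_datablksz_bytes(uint16_t datablkszsec)
{
	return ((uint64_t)datablkszsec << TOY_MINBLOCKSHIFT);
}

int
main(void)
{
	/* A 16MB record corresponds to 32768 sectors of 512 bytes. */
	uint16_t sectors_16m = (uint16_t)((16ULL << 20) >> TOY_MINBLOCKSHIFT);

	printf("16MB record -> %u sectors, %llu bytes\n",
	    (unsigned)sectors_16m,
	    (unsigned long long)toy_datablksz_bytes(sectors_16m));
	printf("16-bit field limit -> %u sectors, %llu bytes (just under 32MB)\n",
	    (unsigned)UINT16_MAX,
	    (unsigned long long)toy_datablksz_bytes(UINT16_MAX));
	return (0);
}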