/*
 * CDDL HEADER START
 *
 * This file and its contents are supplied under the terms of the
 * Common Development and Distribution License ("CDDL"), version 1.0.
 * You may only use this file in accordance with the terms of version
 * 1.0 of the CDDL.
 *
 * A full copy of the text of the CDDL should have accompanied this
 * source. A copy of the CDDL is also available via the Internet at
 * http://www.illumos.org/license/CDDL.
 *
 * CDDL HEADER END
 */
/*
 * Copyright (c) 2014, 2018 by Delphix. All rights reserved.
 */

#ifndef _BQUEUE_H
#define _BQUEUE_H

#ifdef __cplusplus
extern "C" {
#endif

#include <sys/zfs_context.h>

/*
 * The blocking queue (bqueue) is used by zfs send/receive to pass
 * messages between threads. To keep locking cheap for workloads that
 * process many blocks per second, the queue maintains three linked
 * lists:
 *
 * 1. An enqueuing list (bq_enqueuing_list), used only by the single
 *    enqueuing thread, and thus needing no locks.
 * 2. A shared list (bq_list), protected by bq_lock.
 * 3. A dequeuing list (bq_dequeuing_list), used only by the single
 *    dequeuing thread, and thus needing no locks.
 *
 * The entire enqueuing list can be moved to the shared list in
 * constant time, and the entire shared list can be moved to the
 * dequeuing list in constant time. These moves happen only when
 * bq_fill_fraction is reached or on an explicit flush, so the lock
 * is acquired infrequently.
 */
typedef struct bqueue {
	list_t bq_list;
	size_t bq_size;
	list_t bq_dequeuing_list;
	size_t bq_dequeuing_size;
	list_t bq_enqueuing_list;
	size_t bq_enqueuing_size;
	kmutex_t bq_lock;
	kcondvar_t bq_add_cv;
	kcondvar_t bq_pop_cv;
	size_t bq_maxsize;
	uint_t bq_fill_fraction;
	size_t bq_node_offset;
} bqueue_t;

typedef struct bqueue_node {
	list_node_t bqn_node;
	size_t bqn_size;
} bqueue_node_t;

int bqueue_init(bqueue_t *, uint_t, size_t, size_t);
void bqueue_destroy(bqueue_t *);
void bqueue_enqueue(bqueue_t *, void *, size_t);
void bqueue_enqueue_flush(bqueue_t *, void *, size_t);
void *bqueue_dequeue(bqueue_t *);

#ifdef __cplusplus
}
#endif

#endif /* _BQUEUE_H */