/* zfs/module/zfs/zfs_log.c */
/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */
#ifdef HAVE_ZPL
#include <sys/types.h>
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/sysmacros.h>
#include <sys/cmn_err.h>
#include <sys/kmem.h>
#include <sys/thread.h>
#include <sys/file.h>
#include <sys/vfs.h>
#include <sys/zfs_znode.h>
#include <sys/zfs_dir.h>
#include <sys/zil.h>
#include <sys/zil_impl.h>
#include <sys/byteorder.h>
#include <sys/policy.h>
#include <sys/stat.h>
#include <sys/mode.h>
#include <sys/acl.h>
#include <sys/dmu.h>
#include <sys/spa.h>
#include <sys/zfs_fuid.h>
#include <sys/ddi.h>
#include <sys/dsl_dataset.h>

#define	ZFS_HANDLE_REPLAY(zilog, tx) \
	if (zilog->zl_replay) { \
		dsl_dataset_dirty(dmu_objset_ds(zilog->zl_os), tx); \
		zilog->zl_replayed_seq[dmu_tx_get_txg(tx) & TXG_MASK] = \
		    zilog->zl_replaying_seq; \
		return; \
	}

/*
 * These zfs_log_* functions must be called within a dmu tx, in one
 * of 2 contexts depending on zilog->zl_replay:
 *
 * Non-replay mode
 * ---------------
 * We need to record the transaction so that if it is committed to
 * the Intent Log then it can be replayed. An intent log transaction
 * structure (itx_t) is allocated and all the information necessary to
 * possibly replay the transaction is saved in it. The itx is then assigned
 * a sequence number and inserted in the in-memory list anchored in the zilog.
 *
 * Replay mode
 * -----------
 * We need to mark the intent log record as replayed in the log header.
 * This is done in the same transaction as the replay so that they
 * commit atomically.
 */

int
zfs_log_create_txtype(zil_create_t type, vsecattr_t *vsecp, vattr_t *vap)
{
	int isxvattr = (vap->va_mask & AT_XVATTR);

	switch (type) {
	case Z_FILE:
		if (vsecp == NULL && !isxvattr)
			return (TX_CREATE);
		if (vsecp && isxvattr)
			return (TX_CREATE_ACL_ATTR);
		if (vsecp)
			return (TX_CREATE_ACL);
		else
			return (TX_CREATE_ATTR);
		/*NOTREACHED*/
	case Z_DIR:
		if (vsecp == NULL && !isxvattr)
			return (TX_MKDIR);
		if (vsecp && isxvattr)
			return (TX_MKDIR_ACL_ATTR);
		if (vsecp)
			return (TX_MKDIR_ACL);
		else
			return (TX_MKDIR_ATTR);
	case Z_XATTRDIR:
		return (TX_MKXATTR);
	}
	ASSERT(0);
	return (TX_MAX_TYPE);
}

/*
 * Build up the log data necessary for logging an xvattr_t.
 * First the lr_attr_t is initialized; following it the mapsize and
 * attribute bitmap are copied from the xvattr_t. Following the bitmap
 * comes a single 64-bit word holding the XAT0_* bits to set on replay,
 * then two 64-bit words reserved for the create time, which may be set,
 * and finally the anti-virus scanstamp. The data logged after the
 * lr_attr_masksize is therefore laid out as:
 *
 *	uint32_t	bitmap[xva_mapsize]	(starts at lr_attr_bitmap)
 *	uint64_t	attrs			(XAT0_* bits to apply on replay)
 *	uint64_t	crtime[2]		(optional create time)
 *	char		scanstamp[AV_SCANSTAMP_SZ]
 */
static void
zfs_log_xvattr(lr_attr_t *lrattr, xvattr_t *xvap)
{
	uint32_t *bitmap;
	uint64_t *attrs;
	uint64_t *crtime;
	xoptattr_t *xoap;
	void *scanstamp;
	int i;

	xoap = xva_getxoptattr(xvap);
	ASSERT(xoap);

	lrattr->lr_attr_masksize = xvap->xva_mapsize;
	bitmap = &lrattr->lr_attr_bitmap;
	for (i = 0; i != xvap->xva_mapsize; i++, bitmap++) {
		*bitmap = xvap->xva_reqattrmap[i];
	}

	/* Now pack the attributes up in a single uint64_t */
	attrs = (uint64_t *)bitmap;
	crtime = attrs + 1;
	scanstamp = (caddr_t)(crtime + 2);
	*attrs = 0;
	if (XVA_ISSET_REQ(xvap, XAT_READONLY))
		*attrs |= (xoap->xoa_readonly == 0) ? 0 :
		    XAT0_READONLY;
	if (XVA_ISSET_REQ(xvap, XAT_HIDDEN))
		*attrs |= (xoap->xoa_hidden == 0) ? 0 :
		    XAT0_HIDDEN;
	if (XVA_ISSET_REQ(xvap, XAT_SYSTEM))
		*attrs |= (xoap->xoa_system == 0) ? 0 :
		    XAT0_SYSTEM;
	if (XVA_ISSET_REQ(xvap, XAT_ARCHIVE))
		*attrs |= (xoap->xoa_archive == 0) ? 0 :
		    XAT0_ARCHIVE;
	if (XVA_ISSET_REQ(xvap, XAT_IMMUTABLE))
		*attrs |= (xoap->xoa_immutable == 0) ? 0 :
		    XAT0_IMMUTABLE;
	if (XVA_ISSET_REQ(xvap, XAT_NOUNLINK))
		*attrs |= (xoap->xoa_nounlink == 0) ? 0 :
		    XAT0_NOUNLINK;
	if (XVA_ISSET_REQ(xvap, XAT_APPENDONLY))
		*attrs |= (xoap->xoa_appendonly == 0) ? 0 :
		    XAT0_APPENDONLY;
	if (XVA_ISSET_REQ(xvap, XAT_OPAQUE))
		*attrs |= (xoap->xoa_opaque == 0) ? 0 :
		    XAT0_OPAQUE;
	if (XVA_ISSET_REQ(xvap, XAT_NODUMP))
		*attrs |= (xoap->xoa_nodump == 0) ? 0 :
		    XAT0_NODUMP;
	if (XVA_ISSET_REQ(xvap, XAT_AV_QUARANTINED))
		*attrs |= (xoap->xoa_av_quarantined == 0) ? 0 :
		    XAT0_AV_QUARANTINED;
	if (XVA_ISSET_REQ(xvap, XAT_AV_MODIFIED))
		*attrs |= (xoap->xoa_av_modified == 0) ? 0 :
		    XAT0_AV_MODIFIED;
	if (XVA_ISSET_REQ(xvap, XAT_CREATETIME))
		ZFS_TIME_ENCODE(&xoap->xoa_createtime, crtime);
	if (XVA_ISSET_REQ(xvap, XAT_AV_SCANSTAMP))
		bcopy(xoap->xoa_av_scanstamp, scanstamp, AV_SCANSTAMP_SZ);
}

static void *
zfs_log_fuid_ids(zfs_fuid_info_t *fuidp, void *start)
{
	zfs_fuid_t *zfuid;
	uint64_t *fuidloc = start;

	/* First copy in the ACE FUIDs */
	for (zfuid = list_head(&fuidp->z_fuids); zfuid;
	    zfuid = list_next(&fuidp->z_fuids, zfuid)) {
		*fuidloc++ = zfuid->z_logfuid;
	}
	return (fuidloc);
}

static void *
zfs_log_fuid_domains(zfs_fuid_info_t *fuidp, void *start)
{
	zfs_fuid_domain_t *zdomain;

	/* now copy in the domain info, if any */
	if (fuidp->z_domain_str_sz != 0) {
		for (zdomain = list_head(&fuidp->z_domains); zdomain;
		    zdomain = list_next(&fuidp->z_domains, zdomain)) {
			bcopy((void *)zdomain->z_domain, start,
			    strlen(zdomain->z_domain) + 1);
			start = (caddr_t)start +
			    strlen(zdomain->z_domain) + 1;
		}
	}
	return (start);
}

/*
 * zfs_log_create() is used to handle TX_CREATE, TX_CREATE_ATTR, TX_MKDIR,
 * TX_MKDIR_ATTR and TX_MKXATTR transactions.
 *
 * TX_CREATE and TX_MKDIR are standard creates, but they may have FUID
 * domain information appended prior to the name. In this case the
 * uid/gid in the log record will be a log centric FUID.
 *
 * TX_CREATE_ACL_ATTR and TX_MKDIR_ACL_ATTR handle special creates that
 * may contain attributes, ACL and optional fuid information.
 *
 * TX_CREATE_ACL and TX_MKDIR_ACL handle special creates that specify
 * an ACL and normal users/groups in the ACEs.
 *
 * There may be optional xvattr attribute information similar
 * to zfs_log_setattr.
 *
 * Also, after the file name, "domain" strings may be appended.
 */
void
zfs_log_create(zilog_t *zilog, dmu_tx_t *tx, uint64_t txtype,
    znode_t *dzp, znode_t *zp, char *name, vsecattr_t *vsecp,
    zfs_fuid_info_t *fuidp, vattr_t *vap)
{
	itx_t *itx;
	uint64_t seq;
	lr_create_t *lr;
	lr_acl_create_t *lracl;
	size_t aclsize;
	size_t xvatsize = 0;
	size_t txsize;
	xvattr_t *xvap = (xvattr_t *)vap;
	void *end;
	size_t lrsize;
	size_t namesize = strlen(name) + 1;
	size_t fuidsz = 0;

	if (zilog == NULL)
		return;

	ZFS_HANDLE_REPLAY(zilog, tx); /* exits if replay */

	/*
	 * If we have FUIDs present then add in space for
	 * domains and ACE fuid's if any.
	 */
	if (fuidp) {
		fuidsz += fuidp->z_domain_str_sz;
		fuidsz += fuidp->z_fuid_cnt * sizeof (uint64_t);
	}

	if (vap->va_mask & AT_XVATTR)
		xvatsize = ZIL_XVAT_SIZE(xvap->xva_mapsize);

	if ((int)txtype == TX_CREATE_ATTR || (int)txtype == TX_MKDIR_ATTR ||
	    (int)txtype == TX_CREATE || (int)txtype == TX_MKDIR ||
	    (int)txtype == TX_MKXATTR) {
		txsize = sizeof (*lr) + namesize + fuidsz + xvatsize;
		lrsize = sizeof (*lr);
	} else {
		aclsize = (vsecp) ? vsecp->vsa_aclentsz : 0;
		txsize =
		    sizeof (lr_acl_create_t) + namesize + fuidsz +
		    ZIL_ACE_LENGTH(aclsize) + xvatsize;
		lrsize = sizeof (lr_acl_create_t);
	}

	itx = zil_itx_create(txtype, txsize);

	lr = (lr_create_t *)&itx->itx_lr;
	lr->lr_doid = dzp->z_id;
	lr->lr_foid = zp->z_id;
	lr->lr_mode = zp->z_phys->zp_mode;
	if (!IS_EPHEMERAL(zp->z_phys->zp_uid)) {
		lr->lr_uid = (uint64_t)zp->z_phys->zp_uid;
	} else {
		lr->lr_uid = fuidp->z_fuid_owner;
	}
	if (!IS_EPHEMERAL(zp->z_phys->zp_gid)) {
		lr->lr_gid = (uint64_t)zp->z_phys->zp_gid;
	} else {
		lr->lr_gid = fuidp->z_fuid_group;
	}
	lr->lr_gen = zp->z_phys->zp_gen;
	lr->lr_crtime[0] = zp->z_phys->zp_crtime[0];
	lr->lr_crtime[1] = zp->z_phys->zp_crtime[1];
	lr->lr_rdev = zp->z_phys->zp_rdev;

	/*
	 * Fill in xvattr info if any
	 */
	if (vap->va_mask & AT_XVATTR) {
		zfs_log_xvattr((lr_attr_t *)((caddr_t)lr + lrsize), xvap);
		end = (caddr_t)lr + lrsize + xvatsize;
	} else {
		end = (caddr_t)lr + lrsize;
	}

	/* Now fill in any ACL info */
	if (vsecp) {
		lracl = (lr_acl_create_t *)&itx->itx_lr;
		lracl->lr_aclcnt = vsecp->vsa_aclcnt;
		lracl->lr_acl_bytes = aclsize;
		lracl->lr_domcnt = fuidp ? fuidp->z_domain_cnt : 0;
		lracl->lr_fuidcnt = fuidp ? fuidp->z_fuid_cnt : 0;
		if (vsecp->vsa_aclflags & VSA_ACE_ACLFLAGS)
			lracl->lr_acl_flags = (uint64_t)vsecp->vsa_aclflags;
		else
			lracl->lr_acl_flags = 0;

		bcopy(vsecp->vsa_aclentp, end, aclsize);
		end = (caddr_t)end + ZIL_ACE_LENGTH(aclsize);
	}

	/* drop in FUID info */
	if (fuidp) {
		end = zfs_log_fuid_ids(fuidp, end);
		end = zfs_log_fuid_domains(fuidp, end);
	}

	/*
	 * Now place file name in log record
	 */
	bcopy(name, end, namesize);

	seq = zil_itx_assign(zilog, itx, tx);
	dzp->z_last_itx = seq;
	zp->z_last_itx = seq;
}

/*
 * zfs_log_remove() handles both TX_REMOVE and TX_RMDIR transactions.
 */
void
zfs_log_remove(zilog_t *zilog, dmu_tx_t *tx, uint64_t txtype,
    znode_t *dzp, char *name)
{
	itx_t *itx;
	uint64_t seq;
	lr_remove_t *lr;
	size_t namesize = strlen(name) + 1;

	if (zilog == NULL)
		return;

	ZFS_HANDLE_REPLAY(zilog, tx); /* exits if replay */

	itx = zil_itx_create(txtype, sizeof (*lr) + namesize);
	lr = (lr_remove_t *)&itx->itx_lr;
	lr->lr_doid = dzp->z_id;
	bcopy(name, (char *)(lr + 1), namesize);

	seq = zil_itx_assign(zilog, itx, tx);
	dzp->z_last_itx = seq;
}

/*
 * zfs_log_link() handles TX_LINK transactions.
 */
void
zfs_log_link(zilog_t *zilog, dmu_tx_t *tx, uint64_t txtype,
    znode_t *dzp, znode_t *zp, char *name)
{
	itx_t *itx;
	uint64_t seq;
	lr_link_t *lr;
	size_t namesize = strlen(name) + 1;

	if (zilog == NULL)
		return;

	ZFS_HANDLE_REPLAY(zilog, tx); /* exits if replay */

	itx = zil_itx_create(txtype, sizeof (*lr) + namesize);
	lr = (lr_link_t *)&itx->itx_lr;
	lr->lr_doid = dzp->z_id;
	lr->lr_link_obj = zp->z_id;
	bcopy(name, (char *)(lr + 1), namesize);

	seq = zil_itx_assign(zilog, itx, tx);
	dzp->z_last_itx = seq;
	zp->z_last_itx = seq;
}

/*
 * zfs_log_symlink() handles TX_SYMLINK transactions.
 */
void
zfs_log_symlink(zilog_t *zilog, dmu_tx_t *tx, uint64_t txtype,
    znode_t *dzp, znode_t *zp, char *name, char *link)
{
	itx_t *itx;
	uint64_t seq;
	lr_create_t *lr;
	size_t namesize = strlen(name) + 1;
	size_t linksize = strlen(link) + 1;

	if (zilog == NULL)
		return;

	ZFS_HANDLE_REPLAY(zilog, tx); /* exits if replay */

	itx = zil_itx_create(txtype, sizeof (*lr) + namesize + linksize);
	lr = (lr_create_t *)&itx->itx_lr;
	lr->lr_doid = dzp->z_id;
	lr->lr_foid = zp->z_id;
	lr->lr_mode = zp->z_phys->zp_mode;
	lr->lr_uid = zp->z_phys->zp_uid;
	lr->lr_gid = zp->z_phys->zp_gid;
	lr->lr_gen = zp->z_phys->zp_gen;
	lr->lr_crtime[0] = zp->z_phys->zp_crtime[0];
	lr->lr_crtime[1] = zp->z_phys->zp_crtime[1];
	bcopy(name, (char *)(lr + 1), namesize);
	bcopy(link, (char *)(lr + 1) + namesize, linksize);

	seq = zil_itx_assign(zilog, itx, tx);
	dzp->z_last_itx = seq;
	zp->z_last_itx = seq;
}

/*
 * zfs_log_rename() handles TX_RENAME transactions.
 */
void
zfs_log_rename(zilog_t *zilog, dmu_tx_t *tx, uint64_t txtype,
    znode_t *sdzp, char *sname, znode_t *tdzp, char *dname, znode_t *szp)
{
	itx_t *itx;
	uint64_t seq;
	lr_rename_t *lr;
	size_t snamesize = strlen(sname) + 1;
	size_t dnamesize = strlen(dname) + 1;

	if (zilog == NULL)
		return;

	ZFS_HANDLE_REPLAY(zilog, tx); /* exits if replay */

	itx = zil_itx_create(txtype, sizeof (*lr) + snamesize + dnamesize);
	lr = (lr_rename_t *)&itx->itx_lr;
	lr->lr_sdoid = sdzp->z_id;
	lr->lr_tdoid = tdzp->z_id;
	bcopy(sname, (char *)(lr + 1), snamesize);
	bcopy(dname, (char *)(lr + 1) + snamesize, dnamesize);

	seq = zil_itx_assign(zilog, itx, tx);
	sdzp->z_last_itx = seq;
	tdzp->z_last_itx = seq;
	szp->z_last_itx = seq;
}

/*
 * zfs_log_write() handles TX_WRITE transactions.
 */
ssize_t zfs_immediate_write_sz = 32768;

void
zfs_log_write(zilog_t *zilog, dmu_tx_t *tx, int txtype,
    znode_t *zp, offset_t off, ssize_t resid, int ioflag)
{
	itx_wr_state_t write_state;
	boolean_t slogging;
	uintptr_t fsync_cnt;

	if (zilog == NULL || zp->z_unlinked)
		return;

	ZFS_HANDLE_REPLAY(zilog, tx); /* exits if replay */

	slogging = spa_has_slogs(zilog->zl_spa);
	if (resid > zfs_immediate_write_sz && !slogging && resid <= zp->z_blksz)
		write_state = WR_INDIRECT;
	else if (ioflag & (FSYNC | FDSYNC))
		write_state = WR_COPIED;
	else
		write_state = WR_NEED_COPY;

	if ((fsync_cnt = (uintptr_t)tsd_get(zfs_fsyncer_key)) != 0) {
		(void) tsd_set(zfs_fsyncer_key, (void *)(fsync_cnt - 1));
	}

	while (resid) {
		itx_t *itx;
		lr_write_t *lr;
		ssize_t len;

		/*
		 * If the write would overflow the largest block then split it.
		 */
		if (write_state != WR_INDIRECT && resid > ZIL_MAX_LOG_DATA)
			len = SPA_MAXBLOCKSIZE >> 1;
		else
			len = resid;

		itx = zil_itx_create(txtype, sizeof (*lr) +
		    (write_state == WR_COPIED ? len : 0));
		lr = (lr_write_t *)&itx->itx_lr;
		if (write_state == WR_COPIED && dmu_read(zp->z_zfsvfs->z_os,
		    zp->z_id, off, len, lr + 1, DMU_READ_NO_PREFETCH) != 0) {
			kmem_free(itx, offsetof(itx_t, itx_lr) +
			    itx->itx_lr.lrc_reclen);
			itx = zil_itx_create(txtype, sizeof (*lr));
			lr = (lr_write_t *)&itx->itx_lr;
			write_state = WR_NEED_COPY;
		}

		itx->itx_wr_state = write_state;
		if (write_state == WR_NEED_COPY)
			itx->itx_sod += len;
		lr->lr_foid = zp->z_id;
		lr->lr_offset = off;
		lr->lr_length = len;
		lr->lr_blkoff = 0;
		BP_ZERO(&lr->lr_blkptr);

		itx->itx_private = zp->z_zfsvfs;

		if ((zp->z_sync_cnt != 0) || (fsync_cnt != 0) ||
		    (ioflag & (FSYNC | FDSYNC)))
			itx->itx_sync = B_TRUE;
		else
			itx->itx_sync = B_FALSE;

		zp->z_last_itx = zil_itx_assign(zilog, itx, tx);

		off += len;
		resid -= len;
	}
}

/*
 * zfs_log_truncate() handles TX_TRUNCATE transactions.
 */
void
zfs_log_truncate(zilog_t *zilog, dmu_tx_t *tx, int txtype,
    znode_t *zp, uint64_t off, uint64_t len)
{
	itx_t *itx;
	uint64_t seq;
	lr_truncate_t *lr;

	if (zilog == NULL || zp->z_unlinked)
		return;

	ZFS_HANDLE_REPLAY(zilog, tx); /* exits if replay */

	itx = zil_itx_create(txtype, sizeof (*lr));
	lr = (lr_truncate_t *)&itx->itx_lr;
	lr->lr_foid = zp->z_id;
	lr->lr_offset = off;
	lr->lr_length = len;

	itx->itx_sync = (zp->z_sync_cnt != 0);
	seq = zil_itx_assign(zilog, itx, tx);
	zp->z_last_itx = seq;
}

/*
 * zfs_log_setattr() handles TX_SETATTR transactions.
 */
void
zfs_log_setattr(zilog_t *zilog, dmu_tx_t *tx, int txtype,
    znode_t *zp, vattr_t *vap, uint_t mask_applied, zfs_fuid_info_t *fuidp)
{
	itx_t *itx;
	uint64_t seq;
	lr_setattr_t *lr;
	xvattr_t *xvap = (xvattr_t *)vap;
	size_t recsize = sizeof (lr_setattr_t);
	void *start;

	if (zilog == NULL || zp->z_unlinked)
		return;

	ZFS_HANDLE_REPLAY(zilog, tx); /* exits if replay */

	/*
	 * If XVATTR set, then log record size needs to allow
	 * for lr_attr_t + xvattr mask, mapsize and create time
	 * plus actual attribute values
	 */
	if (vap->va_mask & AT_XVATTR)
		recsize = sizeof (*lr) + ZIL_XVAT_SIZE(xvap->xva_mapsize);

	if (fuidp)
		recsize += fuidp->z_domain_str_sz;

	itx = zil_itx_create(txtype, recsize);
	lr = (lr_setattr_t *)&itx->itx_lr;
	lr->lr_foid = zp->z_id;
	lr->lr_mask = (uint64_t)mask_applied;
	lr->lr_mode = (uint64_t)vap->va_mode;
	if ((mask_applied & AT_UID) && IS_EPHEMERAL(vap->va_uid))
		lr->lr_uid = fuidp->z_fuid_owner;
	else
		lr->lr_uid = (uint64_t)vap->va_uid;

	if ((mask_applied & AT_GID) && IS_EPHEMERAL(vap->va_gid))
		lr->lr_gid = fuidp->z_fuid_group;
	else
		lr->lr_gid = (uint64_t)vap->va_gid;

	lr->lr_size = (uint64_t)vap->va_size;
	ZFS_TIME_ENCODE(&vap->va_atime, lr->lr_atime);
	ZFS_TIME_ENCODE(&vap->va_mtime, lr->lr_mtime);
	start = (lr_setattr_t *)(lr + 1);
	if (vap->va_mask & AT_XVATTR) {
		zfs_log_xvattr((lr_attr_t *)start, xvap);
		start = (caddr_t)start + ZIL_XVAT_SIZE(xvap->xva_mapsize);
	}

	/*
	 * Now stick on domain information if any on end
	 */
	if (fuidp)
		(void) zfs_log_fuid_domains(fuidp, start);

	itx->itx_sync = (zp->z_sync_cnt != 0);
	seq = zil_itx_assign(zilog, itx, tx);
	zp->z_last_itx = seq;
}

/*
 * zfs_log_acl() handles TX_ACL transactions.
 */
void
zfs_log_acl(zilog_t *zilog, dmu_tx_t *tx, znode_t *zp,
    vsecattr_t *vsecp, zfs_fuid_info_t *fuidp)
{
	itx_t *itx;
	uint64_t seq;
	lr_acl_v0_t *lrv0;
	lr_acl_t *lr;
	int txtype;
	int lrsize;
	size_t txsize;
	size_t aclbytes = vsecp->vsa_aclentsz;

	if (zilog == NULL || zp->z_unlinked)
		return;

	ZFS_HANDLE_REPLAY(zilog, tx); /* exits if replay */

	txtype = (zp->z_zfsvfs->z_version < ZPL_VERSION_FUID) ?
	    TX_ACL_V0 : TX_ACL;

	if (txtype == TX_ACL)
		lrsize = sizeof (*lr);
	else
		lrsize = sizeof (*lrv0);

	txsize = lrsize +
	    ((txtype == TX_ACL) ? ZIL_ACE_LENGTH(aclbytes) : aclbytes) +
	    (fuidp ? fuidp->z_domain_str_sz : 0) +
	    sizeof (uint64_t) * (fuidp ? fuidp->z_fuid_cnt : 0);

	itx = zil_itx_create(txtype, txsize);

	lr = (lr_acl_t *)&itx->itx_lr;
	lr->lr_foid = zp->z_id;
	if (txtype == TX_ACL) {
		lr->lr_acl_bytes = aclbytes;
		lr->lr_domcnt = fuidp ? fuidp->z_domain_cnt : 0;
		lr->lr_fuidcnt = fuidp ? fuidp->z_fuid_cnt : 0;
		if (vsecp->vsa_mask & VSA_ACE_ACLFLAGS)
			lr->lr_acl_flags = (uint64_t)vsecp->vsa_aclflags;
		else
			lr->lr_acl_flags = 0;
	}
	lr->lr_aclcnt = (uint64_t)vsecp->vsa_aclcnt;

	if (txtype == TX_ACL_V0) {
		lrv0 = (lr_acl_v0_t *)lr;
		bcopy(vsecp->vsa_aclentp, (ace_t *)(lrv0 + 1), aclbytes);
	} else {
		void *start = (ace_t *)(lr + 1);

		bcopy(vsecp->vsa_aclentp, start, aclbytes);
		start = (caddr_t)start + ZIL_ACE_LENGTH(aclbytes);

		if (fuidp) {
			start = zfs_log_fuid_ids(fuidp, start);
			(void) zfs_log_fuid_domains(fuidp, start);
		}
	}

	itx->itx_sync = (zp->z_sync_cnt != 0);
	seq = zil_itx_assign(zilog, itx, tx);
	zp->z_last_itx = seq;
}

#endif /* HAVE_ZPL */