/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2011, Lawrence Livermore National Security, LLC.
 */

#ifndef _SYS_ZPL_H
#define _SYS_ZPL_H

#include <sys/vfs.h>
#include <linux/vfs_compat.h>
#include <linux/xattr_compat.h>
#include <linux/dcache_compat.h>
#include <linux/exportfs.h>
#include <linux/writeback.h>
#include <linux/falloc.h>
#include <linux/task_io_accounting_ops.h>

/*
 * Linux AIO support
 *
 * nfsd uses do_readv_writev() to implement fops->read and fops->write.
 * do_readv_writev() will attempt to read/write using fops->aio_read and
 * fops->aio_write, but it will fall back to fops->read and fops->write
 * when AIO is not available. However, the fallback performs a call for
 * each individual data page. Since our default recordsize is 128KB,
 * sequential operations on NFS will generate 32 DMU transactions where
 * only one was needed. We implement fops->aio_read and fops->aio_write
 * to eliminate that unnecessary overhead.
 *
 * ZFS originated in OpenSolaris, where the AIO API is implemented
 * entirely in userland's libc by mapping its calls to VOP_WRITE,
 * VOP_READ and VOP_FSYNC. Linux implements AIO inside the kernel
 * itself, so Linux filesystems must implement their own AIO logic, and
 * nearly all of them implement fops->aio_write synchronously;
 * consequently, they do not implement aio_fsync(). However, since the
 * ZPL works by mapping Linux's VFS calls to the functions implementing
 * Illumos' VFS operations, we instead implement AIO in the kernel by
 * mapping the operations to the VOP_READ, VOP_WRITE and VOP_FSYNC
 * equivalents. We therefore implement fops->aio_fsync as well.
 *
 * One might be inclined to make our fops->aio_write implementation
 * synchronous to make software that expects this behavior safe.
 * However, there are several reasons not to do this:
 *
 * 1. Other platforms do not implement aio_write() synchronously, and
 *    since the majority of userland software using AIO should be cross
 *    platform, expectations of synchronous behavior should not be a
 *    problem.
 * 2. We would hurt the performance of programs that use the POSIX
 *    interfaces properly while simultaneously encouraging the creation
 *    of more non-compliant software.
 * 3. The broader community concluded that userland software should be
 *    patched to properly use the POSIX interfaces instead of
 *    implementing hacks in filesystems to cater to broken software.
 *    This concept is best described as the O_PONIES debate.
 * 4. Making an asynchronous write synchronous is a non sequitur.
 *
 * Any software dependent on synchronous aio_write behavior will suffer
 * at most zfs_txg_timeout seconds (5 by default) of data loss on ZFS on
 * Linux in the event of a kernel panic or system failure. This seems
 * like a reasonable consequence of using non-compliant software.
 *
 * It should be noted that this is also a problem in the kernel itself,
 * where nfsd does not pass O_SYNC on the files it opens and instead
 * relies on an open()/write()/close() sequence to enforce synchronous
 * behavior, even though the flush is only guaranteed on the last close.
 *
 * Exporting via NFS any filesystem that does not implement AIO risks
 * data loss in the event of a kernel panic or system failure when
 * something else is also accessing the file. Exporting any filesystem
 * that implements AIO the way this patch does bears a similar risk.
 * However, it seems reasonable to forgo crippling our AIO
 * implementation in favor of developing patches that fix this problem
 * in Linux's nfsd, for the reasons stated earlier. In the interim, the
 * risk will remain; failing to implement AIO would not change the
 * problem that nfsd created, so there is no reason for nfsd's mistake
 * to block our implementation of AIO.
 *
 * It should also be noted that aio_cancel() will always return
 * AIO_NOTCANCELED under this implementation. It would be possible to
 * implement cancellation by deferring work to taskqs and using
 * kiocb_set_cancel_fn() to register a callback that cancels work sent
 * to them, but the simpler approach is allowed by the specification:
 *
 *	"Which operations are cancelable is implementation-defined."
 *
 * http://pubs.opengroup.org/onlinepubs/009695399/functions/aio_cancel.html
 *
 * According to a recursive grep of my system's /usr/src/debug, the only
 * programs there capable of using aio_cancel() are QEMU, beecrypt and
 * fio, which suggests that aio_cancel() users are rare. Implementing
 * aio_cancel() is left to a future date, when it is clear that there
 * are consumers that benefit enough to justify the work.
 *
 * Lastly, it is important to know that the handling of iovec updates in
 * read/write differs between Illumos and Linux: on Linux it is the
 * VFS' responsibility, while on Illumos it is the filesystem's. We take
 * the intermediate solution of copying the iovec so that the ZFS code
 * can update it as on Solaris while leaving the originals alone. This
 * imposes some overhead; we can revisit it should profiling show that
 * the allocations are a problem.
 *
 * Signed-off-by: Richard Yao <ryao@gentoo.org>
 * Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
 * Closes #223
 * Closes #2373
 */
#include <linux/aio.h>

/* zpl_inode.c */
extern void zpl_vap_init(vattr_t *vap, struct inode *dir,
    zpl_umode_t mode, cred_t *cr);

extern const struct inode_operations zpl_inode_operations;
extern const struct inode_operations zpl_dir_inode_operations;
extern const struct inode_operations zpl_symlink_inode_operations;
extern const struct inode_operations zpl_special_inode_operations;
extern dentry_operations_t zpl_dentry_operations;

/* zpl_file.c */
extern ssize_t zpl_read_common(struct inode *ip, const char *buf,
    size_t len, loff_t *ppos, uio_seg_t segment, int flags,
    cred_t *cr);
extern ssize_t zpl_write_common(struct inode *ip, const char *buf,
    size_t len, loff_t *ppos, uio_seg_t segment, int flags,
    cred_t *cr);
#if defined(HAVE_FILE_FALLOCATE) || defined(HAVE_INODE_FALLOCATE)
extern long zpl_fallocate_common(struct inode *ip, int mode,
    loff_t offset, loff_t len);
#endif /* defined(HAVE_FILE_FALLOCATE) || defined(HAVE_INODE_FALLOCATE) */

extern const struct address_space_operations zpl_address_space_operations;
extern const struct file_operations zpl_file_operations;
extern const struct file_operations zpl_dir_file_operations;

/* zpl_super.c */
extern void zpl_prune_sbs(int64_t bytes_to_scan, void *private);

typedef struct zpl_mount_data {
	const char *z_osname;	/* Dataset name */
	void *z_data;		/* Mount options string */
} zpl_mount_data_t;

extern const struct super_operations zpl_super_operations;
extern const struct export_operations zpl_export_operations;
extern struct file_system_type zpl_fs_type;

/* zpl_xattr.c */
extern ssize_t zpl_xattr_list(struct dentry *dentry, char *buf, size_t size);
extern int zpl_xattr_security_init(struct inode *ip, struct inode *dip,
    const struct qstr *qstr);
#if defined(CONFIG_FS_POSIX_ACL)
extern int zpl_set_acl(struct inode *ip, int type, struct posix_acl *acl);
extern struct posix_acl *zpl_get_acl(struct inode *ip, int type);
#if !defined(HAVE_GET_ACL)
#if defined(HAVE_CHECK_ACL_WITH_FLAGS)
extern int zpl_check_acl(struct inode *inode, int mask, unsigned int flags);
#elif defined(HAVE_CHECK_ACL)
extern int zpl_check_acl(struct inode *inode, int mask);
#elif defined(HAVE_PERMISSION_WITH_NAMEIDATA)
extern int zpl_permission(struct inode *ip, int mask, struct nameidata *nd);
#elif defined(HAVE_PERMISSION)
extern int zpl_permission(struct inode *ip, int mask);
#endif /* HAVE_CHECK_ACL | HAVE_PERMISSION */
#endif /* HAVE_GET_ACL */

extern int zpl_init_acl(struct inode *ip, struct inode *dir);
extern int zpl_chmod_acl(struct inode *ip);
#else
static inline int
zpl_init_acl(struct inode *ip, struct inode *dir)
{
	return (0);
}

static inline int
zpl_chmod_acl(struct inode *ip)
{
	return (0);
}
#endif /* CONFIG_FS_POSIX_ACL */

extern xattr_handler_t *zpl_xattr_handlers[];

/* zpl_ctldir.c */
extern const struct file_operations zpl_fops_root;
extern const struct inode_operations zpl_ops_root;

extern const struct file_operations zpl_fops_snapdir;
extern const struct inode_operations zpl_ops_snapdir;
#ifdef HAVE_AUTOMOUNT
extern const struct dentry_operations zpl_dops_snapdirs;
#else
extern const struct inode_operations zpl_ops_snapdirs;
#endif /* HAVE_AUTOMOUNT */

extern const struct file_operations zpl_fops_shares;
extern const struct inode_operations zpl_ops_shares;

#ifdef HAVE_VFS_ITERATE

#define	DIR_CONTEXT_INIT(_dirent, _actor, _pos) {	\
	.actor = _actor,				\
	.pos = _pos,					\
}

#else

typedef struct dir_context {
	void *dirent;
	const filldir_t actor;
	loff_t pos;
} dir_context_t;

#define	DIR_CONTEXT_INIT(_dirent, _actor, _pos) {	\
	.dirent = _dirent,				\
	.actor = _actor,				\
	.pos = _pos,					\
}

static inline bool
dir_emit(struct dir_context *ctx, const char *name, int namelen,
    uint64_t ino, unsigned type)
{
	return (ctx->actor(ctx->dirent, name, namelen, ctx->pos, ino, type)
	    == 0);
}

static inline bool
dir_emit_dot(struct file *file, struct dir_context *ctx)
{
	return (ctx->actor(ctx->dirent, ".", 1, ctx->pos,
	    file->f_path.dentry->d_inode->i_ino, DT_DIR) == 0);
}

static inline bool
dir_emit_dotdot(struct file *file, struct dir_context *ctx)
{
	return (ctx->actor(ctx->dirent, "..", 2, ctx->pos,
	    parent_ino(file->f_path.dentry), DT_DIR) == 0);
}

static inline bool
dir_emit_dots(struct file *file, struct dir_context *ctx)
{
	if (ctx->pos == 0) {
		if (!dir_emit_dot(file, ctx))
			return (false);
		ctx->pos = 1;
	}
	if (ctx->pos == 1) {
		if (!dir_emit_dotdot(file, ctx))
			return (false);
		ctx->pos = 2;
	}
	return (true);
}
#endif /* HAVE_VFS_ITERATE */

#endif /* _SYS_ZPL_H */