At last a useful user space interface for the Linux ZFS port arrives. With the addition of the ZVOL, real ZFS based block devices are available and can be compared head to head with Linux's MD and LVM block drivers. The Linux ZVOL has not yet had any performance work done, but from a user perspective it should be functionally complete and behave like any other Linux block device. The ZVOL has so far been tested using zconfig.sh on the following x86_64 based platforms: FC11, CHAOS4, RHEL5, RHEL6, and SLES11. However, more testing is required to ensure everything is working as designed.

What follows is a somewhat detailed list of the changes included in this commit to make ZVOLs possible. A few other issues were addressed in the context of these changes which will also be mentioned.

* Added module/zfs/zvol.c which is based on the original Solaris ZVOL implementation but rewritten to integrate with the Linux block device APIs. The basic design remains similar in Linux, with the major change being request processing. Request processing is handled by registering a request function which the elevator calls once all request merging is finished and the elevator unplugs. This function is called under a spin lock and the request structure is passed to the block driver to be queued for IO. The elevator must be notified asynchronously once the request completes or fails with an error. This gives the block driver a chance to handle many requests concurrently. For the ZVOL we maintain a taskq with a service thread per core. As requests are delivered by the elevator, each request is dispatched to the taskq. The taskq handles each request with a write or read helper function which basically copies the request data in to or out of the DMU object. Writes signal completion as soon as the DMU has the data, unless they are marked sync. Reads are all handled synchronously, however the elevator will merge many small reads in to a large read before submitting the request. (A rough sketch of this request handling pattern follows the list below.)

* Caching is worth specifically mentioning. Because the Linux VFS and the ZFS ARC both want to fully manage the cache, we unfortunately end up with two caches. This means our memory footprint is larger than otherwise expected, and it means we have an extra copy between the caches, but it does not impact correctness. All syncs and barrier requests are, I believe, handled correctly. Longer term there is lots of room for improvement here, but it will require fairly extensive changes to either the Linux VFS and VM layer, or additional DMU interfaces to handle managing buffers not directly allocated by the ARC.

* Added module/zfs/include/sys/blkdev.h which contains all the Linux compatibility foo required to handle changes in the Linux block APIs from 2.6.18 thru 2.6.31 based kernels. (An example of the kind of shim involved also follows the list.)

* The dmu_{read,write}_uio interfaces, which don't make sense on Linux, have been modified into dmu_{read,write}_req functions which consume the standard Linux IO request structure. Their function fundamentally remains the same, so this happily worked out pretty cleanly. (The approximate shape of the new prototypes is shown after the list as well.)

* The /dev/zfs character device is no longer created through the half implemented Solaris driver DDI interfaces. It is now simply created with its own major number as a Linux 'misc' device, which greatly simplifies everything. It is only capable of handling ioctls(), but this fits nicely because that's all it ever has to do. The ZVOL devices, unlike in Solaris, do not leverage the same major number as /dev/zfs but instead register their own major. Because only one major is allocated and space is reserved for 16 partitions per device, there is a limit of 16384 concurrent ZVOL devices. By using multiple majors, as the scsi driver does, this limit could be addressed if it becomes a problem.

* The {spa,zfs,zvol}_busy() functions have all been removed because they are not required on a Linux system. Under Linux the registered module exit function will not be called while there are still references to the module. Once the exit function is called, however, it must succeed or block; it may not fail, so returning an error on module unload makes no sense under Linux.

* With the addition of ZVOL support all the HAVE_ZVOL defines were removed for obvious reasons. However, the HAVE_ZPL defines have been relocated into the linux-{kernel,user}-disk topic branches and must remain until the ZPL is implemented.
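To make the request handling described in the first item concrete, below is a minimal sketch of the pattern: a request function registered with the block layer via blk_init_queue(), which hands each request off to a taskq service thread for asynchronous completion. The names (zvol_request(), zvol_read_task(), zv_taskq, and so on) and the use of the pre-2.6.31 elv_next_request() interface are illustrative assumptions, not the actual code in this commit.

    #include <linux/blkdev.h>
    #include <sys/taskq.h>                  /* SPL taskq interface */

    static struct request_queue *zv_queue;  /* hypothetical per-zvol queue */
    static spinlock_t zv_lock;              /* lock passed to blk_init_queue() */
    static taskq_t *zv_taskq;               /* one service thread per core */

    /* Helpers run in taskq context; they copy request data out of or in
     * to the DMU object and then notify the block layer of completion. */
    static void zvol_read_task(void *arg);
    static void zvol_write_task(void *arg);

    /* Request function, registered with blk_init_queue(zvol_request, &zv_lock).
     * The elevator calls it with the queue lock held once merging is done. */
    static void
    zvol_request(struct request_queue *q)
    {
            struct request *req;

            while ((req = elv_next_request(q)) != NULL) {
                    blkdev_dequeue_request(req);

                    /* Dispatch to a service thread so many requests can be
                     * handled concurrently; the elevator is notified later,
                     * from the task, when the DMU I/O completes or fails. */
                    taskq_dispatch(zv_taskq, (rq_data_dir(req) == WRITE) ?
                        zvol_write_task : zvol_read_task, req, TQ_NOSLEEP);
            }
    }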
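The compatibility code carried in blkdev.h is of the following flavor. For example, blk_fetch_request() only appeared in 2.6.31, so on older kernels it can be emulated in terms of the interfaces that did exist. This is a representative sketch, assuming an autoconf-detected HAVE_BLK_FETCH_REQUEST define, not the exact contents of the header.

    /* Provide blk_fetch_request() on kernels older than 2.6.31 where only
     * elv_next_request() and blkdev_dequeue_request() are available. */
    #ifndef HAVE_BLK_FETCH_REQUEST
    static inline struct request *
    blk_fetch_request(struct request_queue *q)
    {
            struct request *req;

            req = elv_next_request(q);
            if (req)
                    blkdev_dequeue_request(req);

            return (req);
    }
    #endif /* HAVE_BLK_FETCH_REQUEST */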
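The dmu_{read,write}_req change can be pictured roughly as swapping the uio argument for the Linux request structure. The _req prototypes below are an approximation inferred from the description above, not copied from the source.

    /* Solaris-style interfaces consuming a uio_t: */
    int dmu_read_uio(objset_t *os, uint64_t object, uio_t *uio, uint64_t size);
    int dmu_write_uio(objset_t *os, uint64_t object, uio_t *uio, uint64_t size,
        dmu_tx_t *tx);

    /* Linux replacements consuming a struct request (approximate): */
    int dmu_read_req(objset_t *os, uint64_t object, struct request *req);
    int dmu_write_req(objset_t *os, uint64_t object, struct request *req,
        dmu_tx_t *tx);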
README
============================ ZFS KERNEL BUILD ============================

1) Build the SPL (Solaris Porting Layer) module which is designed to
   provide many Solaris APIs in the Linux kernel which are needed by ZFS.
   To build the SPL:

        tar -xzf spl-x.y.z.tgz
        cd spl-x.y.z
        ./configure --with-linux=<kernel src>
        make
        make check <as root>

2) Build ZFS. This port is based on the build specified by the ZFS.RELEASE
   file. You will need to have both the kernel and SPL source available.
   To build ZFS for use as a Linux kernel module:

        tar -xzf zfs-x.y.z.tgz
        cd zfs-x.y.z
        ./configure --with-linux=<kernel src> \
                    --with-spl=<spl src>
        make
        make check <as root>

============================ ZPIOS TEST SUITE ============================

3) Provided is an in-kernel test application called zpios which can be
   used to simulate a parallel IO load. It may be used as a stress or
   performance test for your configuration. To simplify testing, scripts
   are provided in the scripts/ directory with a few pre-built zpool
   configurations and zpios test cases. By default 'make check' as root
   will run a simple test against several small loopback devices created
   in /tmp/.

        cd scripts
        ./zfs.sh                                # Load the ZFS/SPL modules
        ./zpios.sh -c lo-raid0.sh -t tiny -v    # Tiny zpios loopback test
        ./zfs.sh -u                             # Unload the ZFS/SPL modules

Enjoy,
Brian Behlendorf <behlendorf1@llnl.gov>