OpenZFS on Linux and FreeBSD
Brian Behlendorf fb1b00e9f4 Linux ZVOL implementation; kernel-side changes (2009-11-20)
At last, a useful user space interface for the Linux ZFS port arrives.
With the addition of the ZVOL, real ZFS-based block devices are available
and can be compared head to head with Linux's MD and LVM block drivers.
The Linux ZVOL has not yet had any performance work done, but from a user
perspective it should be functionally complete and behave like any other
Linux block device.

The ZVOL has so far been tested using zconfig.sh on the following x86_64-based
platforms: FC11, CHAOS4, RHEL5, RHEL6, and SLES11.  However, more
testing is required to ensure everything is working as designed.

What follows is a somewhat detailed list of the changes included in this
commit to make ZVOLs possible.  A few other issues that were addressed in
the context of these changes will also be mentioned.

* Added module/zfs/zvol.c which is based off the original Solaris ZVOL
implementation but rewritten to integrate with the Linux block device
APIs.  The basic design remains similar on Linux, with the major
change being request processing.  Request processing is handled by
registering a request function which the elevator calls once all request
merging is finished and the elevator unplugs.  This function is called
under a spin lock and the request structure is passed to the block driver
to be queued for IO.  The elevator must be notified asynchronously once
the request completes or fails with an error.  This gives the block
driver a chance to handle many requests concurrently.  For the ZVOL we
maintain a taskq with a service thread per core.  As requests are delivered
by the elevator each request is dispatched to the taskq.  The task queue
handles each request with a write or read helper function which basically
copies the request data into or out of the DMU object.  Writes signal
completion as soon as the DMU has the data, unless they are marked sync.
Reads are all handled synchronously; however, the elevator will merge many
small reads into a larger read before submitting the request.
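
A minimal sketch of this flow, written against 2.6.31-era block API
names and the SPL taskq interface; zvol_dispatch_rw(), zvol_write_req(),
and zvol_read_req() are illustrative names, not the exact code in this
commit:

    #include <linux/blkdev.h>
    #include <sys/taskq.h>

    static taskq_t *zvol_taskq;     /* One service thread per core, e.g.
                                     * taskq_create("zvol", num_online_cpus(),
                                     *     maxclsyspri, 1, INT_MAX,
                                     *     TASKQ_PREPOPULATE); */

    static int zvol_write_req(struct request *req);  /* illustrative helper */
    static int zvol_read_req(struct request *req);   /* illustrative helper */

    static void
    zvol_dispatch_rw(void *arg)     /* runs in taskq context, lock not held */
    {
            struct request *req = arg;
            int error;              /* 0 on success or negative errno */

            /* Copy the request data into or out of the DMU object. */
            if (rq_data_dir(req) == WRITE)
                    error = zvol_write_req(req);
            else
                    error = zvol_read_req(req);

            /* Asynchronously notify the elevator of completion. */
            blk_end_request(req, error, blk_rq_bytes(req));
    }

    /* Called by the elevator, under q->queue_lock, once merging is done. */
    static void
    zvol_request(struct request_queue *q)
    {
            struct request *req;

            while ((req = blk_fetch_request(q)) != NULL) {
                    /* Drop the queue lock while handing off to the taskq. */
                    spin_unlock(q->queue_lock);
                    taskq_dispatch(zvol_taskq, zvol_dispatch_rw, req, TQ_SLEEP);
                    spin_lock(q->queue_lock);
            }
    }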

* Caching is worth specifically mentioning.  Because the Linux VFS
and the ZFS ARC both want to fully manage the cache, we unfortunately
end up with two caches.  This means our memory footprint is larger
than otherwise expected, and it means we have an extra copy between
the caches, but it does not impact correctness.  All syncs are barrier
requests, which I believe are handled correctly.  Longer term there is lots of
room for improvement here, but it will require fairly extensive changes
to either the Linux VFS and VM layers, or additional DMU interfaces to
manage buffers not directly allocated by the ARC.

* Added module/zfs/include/sys/blkdev.h which contains all the Linux
compatibility foo required to handle changes in the Linux block
APIs from 2.6.18 through 2.6.31 based kernels.
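
A representative example of the kind of shim this header carries: in
2.6.31 blk_queue_hardsect_size() was renamed to
blk_queue_logical_block_size(), so an autoconf-detected define (the
name HAVE_BLK_QUEUE_LOGICAL_BLOCK_SIZE below is assumed) lets the rest
of the code use one name on every supported kernel:

    #include <linux/blkdev.h>

    #ifndef HAVE_BLK_QUEUE_LOGICAL_BLOCK_SIZE
    /* Pre-2.6.31 kernels only have the old name; map the new one to it. */
    #define blk_queue_logical_block_size(q, sz) \
            blk_queue_hardsect_size(q, sz)
    #endif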

* The dmu_{read,write}_uio interfaces, which don't make sense on Linux,
have been modified into dmu_{read,write}_req functions which consume the
standard Linux IO request structure.  Because their function fundamentally
remains the same, this worked out pretty cleanly.
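
A simplified sketch of the read side, showing the shape of the mapping
from a Linux struct request onto the DMU; error handling is trimmed and
the exact dmu_read() prototype varies between ZFS builds:

    #include <linux/blkdev.h>
    #include <linux/highmem.h>
    #include <sys/dmu.h>

    static int
    dmu_read_req(objset_t *os, uint64_t object, struct request *req)
    {
            uint64_t offset = blk_rq_pos(req) << 9; /* sectors to bytes */
            struct req_iterator iter;
            struct bio_vec *bvec;
            int error = 0;

            /* Walk every bio segment in the (possibly merged) request. */
            rq_for_each_segment(bvec, req, iter) {
                    void *buf = kmap(bvec->bv_page) + bvec->bv_offset;

                    error = dmu_read(os, object, offset, bvec->bv_len, buf);
                    kunmap(bvec->bv_page);
                    if (error)
                            break;

                    offset += bvec->bv_len;
            }

            return (error);
    }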

* The /dev/zfs character device is no longer created through the half
implemented Solaris driver DDI interfaces.  It is now simply registered
as a Linux misc device, which greatly simplifies everything.  It is only
capable of handling ioctls(), but this fits nicely
because that's all it ever has to do.  The ZVOL devices, unlike in Solaris,
do not leverage the same major number as /dev/zfs but instead register
their own major.  Because only one major is allocated and space is reserved
for 16 partitions per device, there is a limit of 16384 concurrent ZVOL
devices.  By using multiple majors like the scsi driver this limit could
be addressed if it becomes a problem.
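
A sketch of the registration scheme, with illustrative names only (the
"zvol" string, zfs_attach(), and the empty zfs_fops are assumptions):
/dev/zfs is a misc device, while the ZVOLs share one dynamically
allocated block major with 16 minors reserved per device:

    #include <linux/fs.h>
    #include <linux/init.h>
    #include <linux/miscdevice.h>
    #include <linux/module.h>

    #define ZVOL_MINORS     16      /* whole device + 15 partitions */

    static const struct file_operations zfs_fops = {
            .owner = THIS_MODULE,   /* ioctl handler elided */
    };

    static struct miscdevice zfs_misc = {
            .minor = MISC_DYNAMIC_MINOR,
            .name  = "zfs",
            .fops  = &zfs_fops,
    };

    static int zvol_major;

    static int __init
    zfs_attach(void)
    {
            int error;

            error = misc_register(&zfs_misc);
            if (error)
                    return (error);

            zvol_major = register_blkdev(0, "zvol"); /* 0 => dynamic major */
            if (zvol_major < 0) {
                    misc_deregister(&zfs_misc);
                    return (zvol_major);
            }

            return (0);
    }

    /* The i-th ZVOL occupies minors [i * ZVOL_MINORS, (i+1) * ZVOL_MINORS). */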

* The {spa,zfs,zvol}_busy() functions have all been removed because they
are not required on a Linux system.  Under Linux the registered module
exit function will not be called while there are still references to the
module.  Once the exit function is called, however, it must succeed or
block; it may not fail, so returning an error on module unload makes no
sense under Linux.
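
The contrast with Solaris, sketched for illustration only: a Solaris
_fini() routine may return EBUSY to refuse unloading, while a Linux
module exit handler returns void and only runs once the module's
reference count has already dropped to zero:

    #include <linux/init.h>
    #include <linux/module.h>

    static void __exit
    zfs_fini(void)
    {
            /* No *_busy() checks: the kernel guarantees no users remain. */
            /* ... unregister devices, tear down caches, etc. ... */
    }
    module_exit(zfs_fini);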

* With the addition of ZVOL support all the HAVE_ZVOL defines were removed
for obvious reasons.  However, the HAVE_ZPL defines have been relocated
into the linux-{kernel,user}-disk topic branches and must remain until
the ZPL is implemented.

README

============================ ZFS KERNEL BUILD ============================

1) Build the SPL (Solaris Porting Layer) module, which provides many
   of the Solaris APIs needed by ZFS in the Linux kernel.  To build
   the SPL:

        tar -xzf spl-x.y.z.tgz
        cd spl-x.y.z
        ./configure --with-linux=<kernel src>
        make
        make check <as root>

2) Build ZFS.  This port is based on the build specified by the ZFS.RELEASE
   file.  You will need to have both the kernel and SPL source available.
   To build ZFS for use as a Linux kernel module:

        tar -xzf zfs-x.y.z.tgz
        cd zfs-x.y.z
        ./configure --with-linux=<kernel src> \
                    --with-spl=<spl src>
        make
        make check <as root>

============================ ZPIOS TEST SUITE ============================

3) Provided is an in-kernel test application called zpios which can be
   used to simulate a parallel IO load.  It may be used as a stress
   or performance test for your configuration.  To simplify testing,
   scripts are provided in the scripts/ directory which supply a few
   pre-built zpool configurations and zpios test cases.  By default,
   'make check' run as root will perform a simple test against several
   small loopback devices created in /tmp/.

       cd scripts
       ./zfs.sh                                 # Load the ZFS/SPL modules
       ./zpios.sh -c lo-raid0.sh -t tiny -v     # Tiny zpios loopback test
       ./zfs.sh -u                              # Unload the ZFS/SPL modules

Enjoy,
Brian Behlendorf <behlendorf1@llnl.gov>