OpenZFS on Linux and FreeBSD
Brian Behlendorf ed97b4447d Adds the last missing block device support (merge_bdev support)
This change should wrap up the last of the missing block device
support in the vdev_disk layer.  With this change I can now
successfully create and use zpools which are layered on top of
md and lvm virtual devices.  The following changes include:

1) The big one: properly handle the case where a page cannot be added
to a bio due to a dynamic limitation imposed by a merge_bdev handler.  For
example, the md device will limit a bio to the configured stripe
size.  Our bio size may also end up being limited by the maximum
request size and other factors determined during bio construction.

To handle all of the above cases the code has been updated to
handle failures from bio_add_page().  This had been hardcoded to
never fail in the prototype proof-of-concept implementation.  In
the case of a failure, the number of bytes which still need to be
added to the bio is returned.  New bios are allocated and attached
to the dio until the entire data buffer is mapped to bios.  It is
then submitted as before to the request queue, and once all the bios
attached to a dio have finished, the completion callback is run.

2) The devid comments have been removed because it is not clear
that devid support will be needed.  They have been replaced
with a comment explaining that udev can and should be used instead.
2009-10-27 14:38:38 -07:00
cmd Merge commit 'refs/top-bases/zfs-branch' into zfs-branch 2009-10-23 12:24:39 -07:00
config Update zpool-configs to be udev aware. 2009-10-21 11:38:51 -07:00
doc Refresh zfs-branch 2008-12-05 09:46:11 -08:00
lib Merge commit 'refs/top-bases/zfs-branch' into zfs-branch 2009-10-14 15:57:10 -07:00
module Adds the last missing block device support (merge_bdev support) 2009-10-27 14:38:38 -07:00
scripts Test configs for md, dm, and ramdisk style block devices 2009-10-26 10:41:06 -07:00
.topdeps Refresh linux-kernel-disk 2008-12-05 11:16:18 -08:00
.topmsg Add TODO 2009-01-21 11:26:19 -08:00
AUTHORS Refresh zfs-branch 2008-12-05 09:46:11 -08:00
COPYING Refresh for consistency with COPYRIGHT 2009-06-08 11:59:13 -07:00
COPYRIGHT Readd accidentally dropped COPYRIGHT, it just references the 2009-06-08 11:01:13 -07:00
ChangeLog Refresh ChangeLog 2009-08-04 15:45:48 -07:00
DISCLAIMER Initial Linux ZFS GIT Repo 2008-11-20 12:01:55 -08:00
GIT Refresh type in topgit git://* reference 2009-01-26 21:58:32 -08:00
META Tag zfs-0.4.5. 2009-08-04 14:56:40 -07:00
Makefile.am Install zfs_config, zfs_unconfig, symbols in to correct location. 2009-07-01 12:51:06 -07:00
OPENSOLARIS.LICENSE Add CDDL license file 2008-12-01 14:49:34 -08:00
README Refresh README 2009-01-20 16:16:57 -08:00
TODO Tag zfs-0.4.5 for real 2009-08-04 16:12:28 -07:00
ZFS.RELEASE Rebase master to b121 2009-08-18 11:43:27 -07:00
autogen.sh Core target arch support for conditional compilation of SUBDIRs 2009-06-08 16:07:43 -07:00
configure.ac Additional set of build system tweaks for libefi library. 2009-10-09 16:37:32 -07:00
zfs-modules.spec.in Remove usage of the __id_u macro for portability. 2009-10-05 13:01:01 -07:00
zfs.spec.in Update build system for libblkid integration 2009-10-15 16:25:18 -07:00
zfs_unconfig.h Distro friendly build system / packaging improvements. 2009-07-01 10:53:05 -07:00

README

============================ ZFS KERNEL BUILD ============================

1) Build the SPL (Solaris Porting Layer) module, which provides many
   of the Solaris kernel APIs that ZFS needs in the Linux kernel.
   To build the SPL:

        tar -xzf spl-x.y.z.tgz
        cd spl-x.y.z
        ./configure --with-linux=<kernel src>
        make
        make check <as root>

2) Build ZFS.  This port is based on the build specified by the ZFS.RELEASE
   file.  You will need to have both the kernel and SPL sources available.
   To build ZFS for use as a Linux kernel module:

        tar -xzf zfs-x.y.z.tgz
        cd zfs-x.y.z
        ./configure --with-linux=<kernel src> \
                    --with-spl=<spl src>
        make
        make check <as root>

============================ ZPIOS TEST SUITE ============================

3) Provided is an in-kernel test application called zpios which can be
   used to simulate a parallel I/O load.  It may be used as a stress
   or performance test for your configuration.  To simplify testing,
   scripts are provided in the scripts/ directory which set up a few
   pre-built zpool configurations and zpios test cases.  By default,
   'make check' as root will run a simple test against several small
   loopback devices created in /tmp/.

       cd scripts
       ./zfs.sh					# Load the ZFS/SPL modules
       ./zpios.sh -c lo-raid0.sh -t tiny -v 	# Tiny zpios loopback test
       ./zfs.sh -u				# Unload the ZFS/SPL modules

Enjoy,
Brian Behlendorf <behlendorf1@llnl.gov>