Commit Graph

6344 Commits

Author SHA1 Message Date
Ned Bass a6ef9522ea Make vdev_id POSIX sh compatible
Full bash may not be available in all environments where udev helpers
run, such as in an initial ramdisk.  To avoid breakage in this case,
remove use of bash-specific features such as variable arrays and the
`declare' keyword from the vdev_id script.

Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #870
2012-11-27 14:23:22 -08:00
Brian Behlendorf f74a147c02 Fix NULL deref when zvol_alloc() fails
If zvol_alloc() fails zv will be set to NULL and dereferenced
in out_dmu_objset_disown.  To avoid this entirely the zv->objset
line is moved up into the success block.

Original-patch-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1109
2012-11-27 14:10:31 -08:00
Brian Behlendorf 30315d237b Increase ZFS_OBJ_MTX_SZ to 256
Increasing this limit costs us 6144 bytes of memory per mounted
filesystem, but this is a small price to pay for accomplishing
the following:

* Allows for up to 256-way concurrency when performing lookups
  which helps performance when there are a large number of
  processes.

* Minimizes the likelihood of encountering the deadlock
  described in issue #1101.  Because vmalloc() won't strictly
  honor __GFP_FS there is still a very remote chance of a
  deadlock.  See the zfsonlinux/spl@043f9b57 commit.
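
For illustration only, a hashed object-lock table of this kind can be sketched as follows; the container and helper names are hypothetical, while ZFS_OBJ_MTX_SZ and the SPL kmutex_t type come from the code being described.

  #include <sys/types.h>
  #include <sys/mutex.h>                      /* SPL kmutex_t */

  #define ZFS_OBJ_MTX_SZ  256

  typedef struct obj_locks {                  /* hypothetical container */
          kmutex_t ol_mtx[ZFS_OBJ_MTX_SZ];    /* one kmutex_t per hash slot */
  } obj_locks_t;

  /* Objects whose numbers hash to the same slot share a mutex. */
  static inline kmutex_t *
  obj_hash_mutex(obj_locks_t *ol, uint64_t obj)
  {
          return (&ol->ol_mtx[obj & (ZFS_OBJ_MTX_SZ - 1)]);
  }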

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1101
2012-11-27 13:46:32 -08:00
Brian Behlendorf 043f9b5724 Disable FS reclaim when allocating new slabs
Allowing the spl_cache_grow_work() function to reclaim inodes
allows for two unlikely deadlocks.  Therefore, we clear __GFP_FS
for these allocations.  The two deadlocks are:

* While holding the ZFS_OBJ_HOLD_ENTER(zsb, obj1) lock a function
  calls kmem_cache_alloc() which happens to need to allocate a
  new slab.  To allocate the new slab we enter FS level reclaim
  and attempt to evict several inodes.  To evict these inodes we
  need to take the ZFS_OBJ_HOLD_ENTER(zsb, obj2) lock and it
  just happens that obj1 and obj2 use the same hashed lock.

* Similar to the first case however instead of getting blocked
  on the hash lock we block in txg_wait_open() which is waiting
  for the next txg which isn't coming because the txg_sync
  thread is blocked in kmem_cache_alloc().

Note this isn't a 100% fix because vmalloc() won't strictly
honor __GFP_FS.  However, in practice this is sufficient because
several very unlikely things must all occur concurrently.
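
A minimal sketch of the idea (not the literal SPL patch): strip __GFP_FS so the slab-growth allocation cannot recurse into filesystem reclaim.

  #include <linux/slab.h>

  /*
   * Illustrative only: clearing __GFP_FS here is equivalent to
   * allocating with GFP_NOFS, so reclaim cannot re-enter the filesystem.
   */
  static void *
  grow_slab_buffer(size_t size)
  {
          return kmalloc(size, GFP_KERNEL & ~__GFP_FS);
  }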

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue zfsonlinux/zfs#1101
2012-11-27 13:43:27 -08:00
Brian Behlendorf 0e20a31b4b Recreate minors when renaming zvols
When a zvol with snapshots is renamed the device files under
/dev/zvol/ are not renamed.  This patch resolves the problem
by destroying and recreating the minors with the new name so
the links can be recreated by udev.
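
A rough sketch of that approach, using hypothetical helper names rather than the actual zvol minor-management calls:

  /* Hypothetical helpers illustrating the destroy/recreate approach. */
  extern void zvol_minors_remove(const char *name);   /* assumed interface */
  extern void zvol_minors_create(const char *name);   /* assumed interface */

  static void
  zvol_rename_minors(const char *oldname, const char *newname)
  {
          zvol_minors_remove(oldname);    /* drop /dev/zvol/<old>/... */
          zvol_minors_create(newname);    /* recreate so udev re-links */
  }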

Original-patch-by: Suman Chakravartula <schakrava@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #408
2012-11-19 16:59:44 -08:00
nordaux 33364b15d3 mount.zfs: canonicalize mount point for mtab
Canonicalize the mount point passed to the mount.zfs helper.
This way a clean path is always added to mtab which ensures
the umount can properly locate and remove the entry.

Test case:
$ mkdir /mnt/foo
$ mount -t zfs zpool/foo /mnt/../mnt/foo////
$ umount /mnt/foo
$ cat /etc/mtab | grep zpool/foo
zpool/foo /mnt/../mnt/foo//// zfs rw 0 0

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #573
2012-11-15 14:28:52 -08:00
Brian Behlendorf 54602c3771 Merge branch 'ashift'
This branch adds some overdue ashift improvements.

  * Add '-o ashift' to 'zpool add' and 'zpool attach'
  * Improve AF hard disk detection
  * Allow 'zpool import' to handle increases in ashift

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-15 11:51:23 -08:00
Cyril Plisko df83110856 Add "-o ashift" to zpool add and zpool attach
When adding devices to an existing pool the "ashift" property is
auto-detected.  However, if this property was overridden at
the pool creation time (i.e. zpool create -o ashift=12 tank ...)
this may not be what the user wants.  This commit lets the user
specify the value of "ashift" property to be used with newly
added drives. For example,

    zpool add -o ashift=12 tank disk1
    zpool attach -o ashift=12 tank disk1 disk2

Signed-off-by: Cyril Plisko <cyril.plisko@mountall.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #566
2012-11-15 11:06:14 -08:00
Brian Behlendorf 2404b01499 Improve AF hard disk detection
Use the bdev_physical_block_size() interface to determine the
minimum write size that can be issued without incurring a
read-modify-write operation.  This is used to set the ashift
correctly to prevent a performance penalty when using AF hard
disks.
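
As a hedged sketch of that mapping (the helper name is invented; bdev_physical_block_size() is the real kernel interface):

  #include <linux/blkdev.h>

  /* Sketch: ashift is log2 of the sector size, e.g. 4096 bytes -> 12. */
  static int
  ashift_from_bdev(struct block_device *bdev)
  {
          unsigned int size = bdev_physical_block_size(bdev);
          int ashift = 0;

          while ((1U << ashift) < size)
                  ashift++;

          return (ashift);
  }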

Unfortunately, this interface isn't entirely reliable because
it's not uncommon for disks to misreport this value.  For this
reason you may still need to manually set your ashift with:

  zpool create -o ashift=12 ...

The solution to this in the upstream Illumos source was to add
a white list of known offending drives.  Maintaining such a list
will be a burden, but it still may be worth doing if we can
detect a large number of these drives.  This should be considered
as future work.

Reported-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #916
2012-11-15 11:06:14 -08:00
George Wilson 32a9872bba Illumos #2671: zpool import should not fail if vdev ashift has increased
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Eric Schrock <eric.schrock@delphix.com>
Reviewed by: Richard Elling <richard.elling@richardelling.com>
Reviewed by: Gordon Ross <gwr@nexenta.com>
Reviewed by: Garrett D'Amore <garrett@damore.org>
Approved by: Richard Lowe <richlowe@richlowe.net>

References to Illumos issue:
      https://www.illumos.org/issues/2671

This patch has been slightly modified from the upstream Illumos
version.  In the upstream implementation a warning message is
logged to the console.  To prevent pointless console noise this
notification is now posted as an "ereport.fs.zfs.vdev.bad_ashift"
event.

The event indicates a non-optimal (but entirely safe) ashift
value was used to create the pool.  Depending on your workload
this may impact pool performance.  Unfortunately, the only way
to correct the issue is to recreate the pool with a new ashift.

NOTE: The unrelated fix to the comment in zpool_main.c appears
in the upstream commit and was preserved for consistency.

Ported-by: Cyril Plisko <cyril.plisko@mountall.com>
Reworked-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #955
2012-11-15 11:05:59 -08:00
Brian Behlendorf 3997bc7435 zfs-0.6.0-rc12 2012-11-13 14:35:44 -08:00
Brian Behlendorf e71a4534b3 SPL 0.6.0-rc12 2012-11-13 14:28:25 -08:00
Richard Yao 2af96a5df5 Fix hard coded path in 60-vdev.rules.in
The udev data directory was hard coded in 60-vdev.rules.in. That causes
a problem when a distribution changes the location of the directory.
This was not an issue in the past because virtually all distributions
used the same path, but that is beginning to change following a decision
by the systemd developers to change the directory location to reflect
their take-over of udev maintainership. The testing branch of Gentoo
Linux adopted this change, exposing a regression caused by the
hardcoded directory location.

Signed-off-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1085
2012-11-13 12:04:15 -08:00
Brian Behlendorf 4c837f0d93 Fix "allocating allocated segment" panic
Gunnar Beutner did all the hard work on this one by correctly
identifying that this issue is a race between dmu_sync() and
dbuf_dirty().

Now in all cases the caller is responsible for preventing this
race by making sure the zfs_range_lock() is held when dirtying
a buffer which may be referenced in a log record.  The mmap
case which relies on zfs_putpage() was not taking the range
lock.  This code was accidentally dropped when the function
was rewritten for the Linux VFS.

This patch adds the required range locking to zfs_putpage().
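
Schematically the change follows the usual range-lock pattern, as in the sketch below (assuming the ZoL zfs_range_lock()/zfs_range_unlock() interface of this era and the znode_t/rl_t types from the ZFS headers; not the literal diff):

  /* Sketch: hold the range lock for the span covered by the page. */
  static void
  putpage_range_locked(znode_t *zp, uint64_t off, uint64_t len)
  {
          rl_t *rl = zfs_range_lock(zp, off, len, RL_WRITER);

          /* ... dirty the dbuf and log the write while the range is held ... */

          zfs_range_unlock(rl);
  }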

It also adds the missing ZFS_ENTER()/ZFS_EXIT() macros which
aren't strictly required due to the VFS holding a reference.
However, this makes the code more consistent with the upstream
code and there's no harm in being extra careful here.

Original-patch-by: Gunnar Beutner <gunnar@beutner.name>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #541
2012-11-09 19:01:09 -08:00
Brian Behlendorf e26ade5101 Fix zvol+btrfs hang
When using a zvol to back a btrfs filesystem the btrfs mount
would hang.  This was due to the bio completion callback used
in btrfs assuming that lower level drivers would never modify
the bio->bi_io_vecs after they were submitted via submit_bio().
If they are modified btrfs will miscalculate which pages need
to be unlocked resulting in a hang.

It's worth mentioning that other file systems such as ext[234]
and xfs work fine because they do not make the same assumption
in the bio completion callback.

The most straightforward way to fix the issue is to present
the semantics expected by btrfs.  This is done by cloning the
bios attached to each request and then using the clones' bvecs
to perform the required accounting.  The clones are freed after
each read/write and the original unmodified bios are linked back
in to the request.

Signed-off-by: Chris Wedgwood <cw@f00f.org>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #469
2012-11-09 12:24:51 -08:00
Brian Behlendorf 366346c565 Merge branch 'kmem-cache-optimization'
This branch contains kmem cache optimizations designed to resolve
the lockups reported in zfsonlinux/zfs#922.  The lockups were
largely the result of spin lock contention in the slab under low
memory conditions.  Fundamentally, these changes are all designed
to minimize that contention through a variety of methods.

  * Improved vmem cached deadlock detection
  * Track emergency objects in rbtree
  * Optimize spl_kmem_cache_free()
  * Never spin in kmem_cache_alloc()

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
zfsonlinux/zfs#922
2012-11-08 11:09:17 -08:00
Brian Behlendorf dc1b30224f Never spin in kmem_cache_alloc()
If we are reaping from the cache and a concurrent allocation
occurs then the caller must block until the reaping is complete.
This is signaled by the clearing of the KMC_BIT_REAPING bit.

Otherwise the caller will be in a tight loop which takes and
releases the skc->skc_cache lock.  When there are multiple
concurrent callers the system will thrash on the lock and
appear to lock up.
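
A minimal sketch of the blocking behavior, with a hypothetical cache structure standing in for spl_kmem_cache_t:

  #include <linux/wait.h>
  #include <linux/bitops.h>

  #define KMC_BIT_REAPING  0              /* bit number is illustrative */

  struct cache_sketch {                   /* hypothetical structure */
          unsigned long           flags;
          wait_queue_head_t       waitq;
  };

  /* Sleep until the reaper clears the bit instead of spinning on a lock. */
  static void
  wait_for_reaping(struct cache_sketch *skc)
  {
          wait_event(skc->waitq, !test_bit(KMC_BIT_REAPING, &skc->flags));
  }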

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 15:48:39 -08:00
Brian Behlendorf a1af8fb1ea Optimize spl_kmem_cache_free()
Because only virtual slabs may have emergency objects, and these
objects are guaranteed to have physical addresses, it can be
easily determined whether the passed object is a virtual slab object
or an emergency object.  This allows us to completely optimize
the emergency object free case out of the common free path.
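
The check itself can be sketched with the stock kernel helper is_vmalloc_addr(); the wrapper name is made up.

  #include <linux/mm.h>

  /*
   * Objects from virtual (vmalloc'd) slabs have vmalloc addresses, while
   * emergency objects are kmalloc'd and therefore physical.  For a cache
   * backed by virtual slabs, one cheap address check separates the two.
   */
  static int
  obj_is_emergency(const void *obj)
  {
          return (!is_vmalloc_addr(obj));
  }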

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:54:19 -08:00
Brian Behlendorf ed3163484d Track emergency object in rbtree
In the initial implementation emergency objects were tracked on a
per-cache list.  The assumption was that under normal operation we
would never allocate more than a handful of these objects.  So the
cost of walking the list during free was expected to be negligible.

However real world usage has shown that emergency objects tend to
be allocated in batches.  A deadlock will be detected and several
thousand emergency objects will be allocated before the original
blocked slab allocation can complete.

Therefore the original list has been replaced by a red-black tree
which is sorted by the memory address of each allocated object.
This bounds the worst case insertion and removal time to O(log n)
which minimizes contention on the associated spin lock.
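
An address-ordered insert of this kind, written against the stock Linux rbtree API, looks roughly like the sketch below; the record layout is hypothetical.

  #include <linux/rbtree.h>

  struct emergency_obj {          /* hypothetical record, keyed by address */
          struct rb_node  node;
          void            *ptr;
  };

  /* O(log n) insert, ordered by the object's memory address. */
  static int
  emergency_insert(struct rb_root *root, struct emergency_obj *e)
  {
          struct rb_node **p = &root->rb_node, *parent = NULL;

          while (*p) {
                  struct emergency_obj *t =
                      rb_entry(*p, struct emergency_obj, node);

                  parent = *p;
                  if (e->ptr < t->ptr)
                          p = &(*p)->rb_left;
                  else if (e->ptr > t->ptr)
                          p = &(*p)->rb_right;
                  else
                          return (-1);    /* duplicate address */
          }

          rb_link_node(&e->node, parent, p);
          rb_insert_color(&e->node, root);

          return (0);
  }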

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:54:19 -08:00
Brian Behlendorf 165f13c33a Improved vmem cached deadlock detection
The entire goal of performing the slab allocations asynchronously
is to be able to detect when a vmalloc() deadlocks.  In this case,
and only this case, do we want to start allocating emergency objects.
The trick here is to minimize false positives because the overhead
of tracking emergency objects is far higher than normal slab objects.

With that goal in mind the code was reworked to be less sensitive
to slow allocations by increasing the wait time.  Once a cache is
marked deadlocked all subsequent allocations which cannot be
satisfied with existing cache objects will immediately allocate new
emergency objects.  This behavior persists until the asynchronous
allocation completes and clears the deadlocked flag.

The result of these tweaks is that far fewer emergency objects
get created which is important because this minimizes the cost of
releasing them later in kmem_cache_free().

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:54:15 -08:00
Brian Behlendorf 65c2fc5a2e Merge branch 'splat'
Additional debugging, some cleanup, and an assortment of fixes
to the SPLAT tests and infrastructure.  Full details in the
individual patches.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:49:14 -08:00
Brian Behlendorf 1112486356 splat kmem:slab_overcommit: Disabled
Disable this test because it may result in an OOM event on the
system which can result in the test infrastructure being killed.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:48:57 -08:00
Brian Behlendorf b8296bf3e6 splat atomic:64-bit: Create thread outside spin lock
The Fedora 3.6 debug kernel identified the following issue where
we create a thread under a spin lock.  This isn't safe because
sleeping could result in a deadlock.  Therefore the lock is changed
to a mutex so it's safe to sleep.

  BUG: sleeping function called from invalid context at mm/slub.c:930
  in_atomic(): 1, irqs_disabled(): 0, pid: 10583, name: splat
  1 lock held by splat/10583:
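
A minimal sketch of the corrected pattern, written with generic kernel primitives rather than the SPLAT code itself:

  #include <linux/kthread.h>
  #include <linux/mutex.h>

  static DEFINE_MUTEX(test_lock);       /* was a spinlock; mutexes may sleep */

  static int
  worker_fn(void *arg)
  {
          return 0;
  }

  /* kthread_run() can sleep, so it is only safe under a sleepable lock. */
  static void
  start_worker(void *arg)
  {
          mutex_lock(&test_lock);
          kthread_run(worker_fn, arg, "splat-worker");
          mutex_unlock(&test_lock);
  }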

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:48:57 -08:00
Brian Behlendorf 0e149d4204 splat: Fix log buffer locking
The Fedora 3.6 debug kernel identified the following issue where
we call copy_to_user() under a spin lock().  This used to be safe
in older kernels but no longer appears to be true so the spin
lock was changed to a mutex.  None of this code is performance
critical so allowing the process to sleep is harmless.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:48:56 -08:00
Brian Behlendorf df870a697f splat: Cleanup headers
Restructure the SPLAT headers such that each test only
includes the minimal set of headers it requires.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:48:56 -08:00
Brian Behlendorf d2733258d0 Condition variable reference counts
Reference count every entry and exit from the condition variable
functions: cv_wait(), cv_wait_timeout(), cv_signal(), cv_broadcast().

This allows us to safely block in cv_destroy() until all consumers
have been scheduled and are no longer accessing the condition
variable memory.

In addition poison the magic value at the start of cv_destroy() to
ensure there are never any new callers after cv_destroy() is called.
The consumer is responsible for ensuring this never occurs.
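
The reference-counting idea can be sketched as follows; the structure here is hypothetical and the real SPL condvar layout may differ.

  #include <linux/atomic.h>
  #include <linux/sched.h>

  struct cv_sketch {                      /* hypothetical layout */
          int             cv_magic;
          atomic_t        cv_refs;
  };

  /* Every cv_wait()/cv_signal()/cv_broadcast() brackets its work this way. */
  static void
  cv_op(struct cv_sketch *cvp)
  {
          atomic_inc(&cvp->cv_refs);
          /* ... operate on the underlying wait queue ... */
          atomic_dec(&cvp->cv_refs);
  }

  static void
  cv_destroy_sketch(struct cv_sketch *cvp)
  {
          cvp->cv_magic = 0;                      /* poison: no new callers */
          while (atomic_read(&cvp->cv_refs) > 0)
                  schedule();                     /* let callers drain */
  }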

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:48:55 -08:00
Brian Behlendorf 87efc30b27 Merge remote branch 'eris/stats'
Bring in support for the new KSTAT_TYPE_TXG type.  This allows for
additional visibility into the txg handling.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:48:43 -08:00
Brian Behlendorf bbf8c74805 Merge remote branch 'eris/stats'
Bring in support for the new KSTAT_TYPE_TXG type.  This allows for
additional visibility into the txg handling.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:48:06 -08:00
Brian Behlendorf 9dcb971983 Log I/Os longer than zio_delay_max (30s default)
There have been reports of ZFS deadlocking due to what appears to
be a lost IO.  This patch adds some debugging to determine the
exact state of the IO which neither 1) completed, 2) failed, nor
3) timed out after zio_delay_max (30) seconds.

This information will be logged using the ZFS FMA infrastructure
as a 'delay' event and posted to the internal zevent log.  By
default the last 64 events will be kept in the log but the limit
is configurable via the zfs_zevent_len_max module option.

To dump the contents of the log use the 'zpool events -v' command
and look for the resource.fs.zfs.delay event.  It will include
various information about the pool, vdev, and zio which may shed
some light on the issue.

In the context of this change the 120 second kernel blocked thread
watchdog has been disabled for synchronous IOs.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #930
2012-11-02 15:45:59 -07:00
Brian Behlendorf e95853a331 Add txgs-<pool> kstat file
Create a kstat file which contains useful statistics about the
last N txgs processed.  This can be helpful when analyzing pool
performance.  The new KSTAT_TYPE_TXG type was added for this
purpose and it tracks the following statistics per-txg.

  txg          - Unique txg number
  state        - State (O)pen/(Q)uiescing/(S)yncing/(C)ommitted
  birth        - Creation time
  nread        - Bytes read
  nwritten     - Bytes written
  reads        - IOPs read
  writes       - IOPs write
  open_time    - Length in nanoseconds the txg was open
  quiesce_time - Length in nanoseconds the txg was quiescing
  sync_time    - Length in nanoseconds the txg was syncing

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-02 15:45:56 -07:00
Brian Behlendorf dba79fcbf2 Add KSTAT_TYPE_TXG type
Add a new kstat type for tracking useful statistics about a TXG.
The new KSTAT_TYPE_TXG type can be used to track the following
statistics per-txg.

  txg          - Unique txg number
  state        - State (O)pen/(Q)uiescing/(S)yncing/(C)ommitted
  birth        - Creation time
  nread        - Bytes read
  nwritten     - Bytes written
  reads        - IOPs read
  writes       - IOPs write
  open_time    - Length in nanoseconds the txg was open
  quiesce_time - Length in nanoseconds the txg was quiescing
  sync_time    - Length in nanoseconds the txg was syncing
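
For reference, a record holding these fields might look like the sketch below; the actual layout used by the SPL may differ.

  #include <sys/types.h>                  /* uint32_t, uint64_t */

  typedef struct kstat_txg_sketch {       /* illustrative layout */
          uint64_t        txg;            /* unique txg number */
          char            state;          /* O/Q/S/C */
          uint64_t        birth;          /* creation time */
          uint64_t        nread;          /* bytes read */
          uint64_t        nwritten;       /* bytes written */
          uint32_t        reads;          /* read IOPs */
          uint32_t        writes;         /* write IOPs */
          uint64_t        open_time;      /* ns the txg was open */
          uint64_t        quiesce_time;   /* ns the txg was quiescing */
          uint64_t        sync_time;      /* ns the txg was syncing */
  } kstat_txg_sketch_t;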

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-02 15:17:40 -07:00
Brian Behlendorf e8fd45a0f9 Add ddt_object_count() error handling
The interface for the ddt_zap_count() function assumes it can
never fail.  However, internally ddt_zap_count() is implemented
with zap_count() which can potentially fail.  Now because there
was no way to return the error to the caller a VERIFY was used
to ensure this case never happens.

Unfortunately, it has been observed that pools can be damaged in
such a way that zap_count() fails.  The result is that the pool can
not be imported without hitting the VERIFY and crashing the system.

This patch reworks ddt_object_count() so the error can be safely
caught and returned to the caller.  This allows a pool which has
been damaged in this way to be safely rewound for import.
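
The reworked interface has roughly the shape sketched below, with the zap_count() error handed back to the caller instead of asserted away (the signature here is illustrative):

  #include <sys/zap.h>            /* zap_count() */

  /* Sketch: propagate the error instead of VERIFYing that it cannot occur. */
  static int
  ddt_count_sketch(objset_t *os, uint64_t object, uint64_t *count)
  {
          return (zap_count(os, object, count));
  }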

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #910
2012-10-29 08:57:45 -07:00
Brian Behlendorf 178e73b376 Revert "Don't ashift-align vdev read requests."
This reverts commit a5c20e2a0a which
accidentally introduced a regression for real 4k sector devices.
See issue #1065 for details.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1065
2012-10-24 15:25:33 -07:00
Brian Behlendorf 71c9f0b003 Make kstat.ks_update() callback atomic
Move the kstat ks_update() callback under the ks_lock.  This
enables dynamically sized kstats without modification to the
kstat API.

  * Create a kstat with the KSTAT_FLAG_VIRTUAL flag.
  * Register a ->ks_update() callback which does:
    o Free any existing ks_data buffer.
    o Set ks_data_size to the kstat array size.
    o Set ks_data to an allocated buffer of size ks_data_size.
    o Populate the array of buffers with the required data.

The buffer allocated in the ks_update() callback is guaranteed
to remain allocated and valid while the proc sequence handler
iterates over the buffer.  The lock will not be dropped until
the kstat_seq_stop() function is run, making it safe for concurrent
access.  To allow the ks_update() callback to perform memory
allocations the lock was changed to a mutex.
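
A sketch of such a callback is shown below; my_entry_t and current_entry_count() are hypothetical stand-ins for the real per-entry type and count.

  typedef struct my_entry { uint64_t value; } my_entry_t;   /* hypothetical */
  static int current_entry_count(void);                     /* hypothetical */

  /* Sketch of a dynamically sized ->ks_update() callback. */
  static int
  my_kstat_update(kstat_t *ksp, int rw)
  {
          if (rw == KSTAT_WRITE)
                  return (EACCES);

          /* ks_lock (now a mutex) is held, so freeing and reallocating
           * the data buffer here is safe. */
          if (ksp->ks_data)
                  kmem_free(ksp->ks_data, ksp->ks_data_size);

          ksp->ks_data_size = current_entry_count() * sizeof (my_entry_t);
          ksp->ks_data = kmem_alloc(ksp->ks_data_size, KM_SLEEP);

          /* ... populate the freshly allocated array ... */

          return (0);
  }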

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-10-23 09:36:19 -07:00
Brian Behlendorf f21e5c6a17 Remove 'Resized bio's/dio' warning
The following warning was originally added to provide visibility
into how often a dio gets heavily fragmented into over 16 bios.
This can happen due to constraints imposed by the block device
and may have a negative impact on performance but is otherwise
harmless.  To prevent needless confusion and worry the message
has been removed.

  kernel: WARNING: Resized bio's/dio to 32

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-10-22 10:17:10 -07:00
Brian Behlendorf 30b937ee15 Update spare and cache device names on import
During 'zpool import' all ZPOOL_CONFIG_PATH names are supposed
to be updated by fix_paths().  This was not happening for spare
and cache devices because the proper names were getting filtered
out of the pool_list_t->names.  Interestingly, the names were
being filtered because the spare and cache devices do not
contain the pool name in their vdev label.

The fix is to exclude the device path from the list only if:

  1) it has a valid ZPOOL_CONFIG_POOL_NAME key in the label, and
  2) that pool name does not match the specified pool name.

Since the label is valid and because it does properly store the
vdev guid it will be correctly assembled without the pool name.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #725
2012-10-22 08:46:02 -07:00
Brian Behlendorf eac4720465 Allow 'zpool replace' to use short device names
The 'zpool replace' command would fail when given a short name
because unlike on other platforms the short name cannot be
deterministically expanded to a single path.  Multiple path
prefixes must be checked and in addition the partition suffix
for whole disks is determined by the prefix.

To handle this complexity a zfs_strcmp_pathname() function was
added which takes either a short or fully qualified device name.
Short names will be expanded using the prefixes in the default
import search path, or the ZPOOL_IMPORT_PATH environment variable
if it's defined.  All possible expansions are then compared against
the comparison path.  Care is taken to strip redundant slashes to
ensure legitimate matches are not missed.

In the context of this work the existing zfs_resolve_shortname()
function was extended to consider the ZPOOL_IMPORT_PATH when set.
The zfs_append_partition() interface was also simplified to take
only a single buffer.

The vast majority of these changes rework existing Linux specific
code which was originally written to accommodate udev.  However,
there is some minimal cleanup which removes Illumos specific code.
This was done to improve readability but the basic flow and intent
of the upstream code was maintained.

These changes are the logical conclusion of the previous work to
adjust the 'zpool import' search behavior, see commit 44867b6a.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #544
Closes #976
2012-10-22 08:45:58 -07:00
Brian Behlendorf 1e0c2c2ccf Linux 3.7 compat, __clear_close_on_exec() removed
Commit torvalds/linux@b8318b0 moved the __clear_close_on_exec()
function out of include/linux/fdtable.h and in to fs/file.c
making it unavailable to the SPL.

Now as it turns out we only used this function to tear down
some test infrastructure for the vn_getf()/vn_releasef() SPLAT
regression tests.  Rather than implement even more autoconf
compatibility code to handle this we just remove the test case.
This also allows us to drop three existing autoconf tests.

This does mean the SPLAT tests will no longer verify these
functions but historically they have never been a problem.
And if we feel we absolutely need this test coverage I'm
sure a more portable version of the test case could be added.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #183
2012-10-18 13:36:44 -07:00
Brian Behlendorf c7dfc08629 Quote snapshot and mountpoint for .zfs automount
When automounting a snapshot in the .zfs/snapshot directory
make sure to quote both the dataset name and the mount point.
This ensures that if either component contains spaces, which
are allowed, they get handled correctly.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1027
2012-10-17 13:26:18 -07:00
Brian Behlendorf 658a0140f3 Merge branch 'zil-performance'
This branch brings some ZIL performance optimizations, with
significant increases in synchronous write performance for
some workloads and pool configurations.

See the individual commit messages for details.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1013
2012-10-17 08:57:49 -07:00
Etienne Dechamps 5d7a86d114 Use the slog even with logbias=throughput.
In the current code, logbias=throughput implies the following:
 1) All synchronous writes are logged in indirect mode.
 2) The slog is not used.

(1) makes sense because it avoids writing the data twice, which is
obviously a good thing when the user wants maximum pool throughput.

(2), however, is a surprising decision. Considering all writes are
indirect, the log record doesn't contain the actual data, only pointers
to DMU blocks. As a result, log records written in logbias=throughput
mode are quite small, and as such, it doesn't make any sense to write
them to the main pool since slogs are usually optimized for small
synchronous writes.

In fact, the current behavior is actually harmful for performance,
because log blocks and data blocks from dmu_sync() seldom have the same
allocation size and as a result are usually allocated from different
metaslabs. This means that if a spindle has to write both log blocks and
DMU blocks (which is likely to happen under heavy load), it will have to
seek between the two. Allocating the log blocks from the slog pool
instead of the main pool avoids these unnecessary seeks.

This commit makes ZFS use the slog on datasets with logbias=throughput.
Real-life performance testing shows a 50% synchronous write performance
increase with some large commit sizes, and no negative effect in other
cases.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1013
2012-10-17 08:56:46 -07:00
Etienne Dechamps 920dd524fb Add FASTWRITE algorithm for synchronous writes.
Currently, ZIL blocks are spread over vdevs using hint block pointers
managed by the ZIL commit code and passed to metaslab_alloc(). Spreading
log blocks across vdevs is important for performance: indeed, using
multiple disks in parallel decreases the ZIL commit latency, which is
the main performance metric for synchronous writes. However, the current
implementation suffers from the following issues:

1) It would be best if the ZIL module was not aware of such low-level
details. They should be handled by the ZIO and metaslab modules;

2) Because the hint block pointer is managed per log, simultaneous
commits from multiple logs might use the same vdevs at the same time,
which is inefficient;

3) Because dmu_write() does not honor the block pointer hint, indirect
writes are not spread.

The naive solution of rotating the metaslab rotor each time a block is
allocated for the ZIL or dmu_sync() doesn't work in practice because the
first ZIL block to be written is actually allocated during the previous
commit. Consequently, when metaslab_alloc() decides the vdev for this
block, it will do so while a bunch of other allocations are happening at
the same time (from dmu_sync() and other ZILs). This means the vdev for
this block is chosen more or less at random. When the next commit
happens, there is a high chance (especially when the number of blocks
per commit is slightly less than the number of the disks) that one disk
will have to write two blocks (with a potential seek) while other disks
are sitting idle, which defeats spreading and increases the commit
latency.

This commit introduces a new concept in the metaslab allocator:
fastwrites. Basically, each top-level vdev maintains a counter
indicating the number of synchronous writes (from dmu_sync() and the
ZIL) which have been allocated but not yet completed. When the metaslab
is called with the FASTWRITE flag, it will choose the vdev with the
least amount of pending synchronous writes. If there are multiple vdevs
with the same value, the first matching vdev (starting from the rotor)
is used. Once metaslab_alloc() has decided which vdev the block is
allocated to, it updates the fastwrite counter for this vdev.
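
A sketch of the selection rule is shown below; the spa_t/vdev_t types come from the ZFS headers, and the field and function names here are taken as illustrative rather than the actual metaslab code.

  /* Pick the top-level vdev with the fewest pending synchronous writes;
   * the real implementation breaks ties starting from the rotor. */
  static vdev_t *
  fastwrite_pick_vdev(spa_t *spa)
  {
          vdev_t *root = spa->spa_root_vdev;
          vdev_t *best = NULL;
          uint64_t c;

          for (c = 0; c < root->vdev_children; c++) {
                  vdev_t *vd = root->vdev_child[c];

                  if (best == NULL || vd->vdev_pending_fastwrite <
                      best->vdev_pending_fastwrite)
                          best = vd;
          }

          return (best);
  }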

The rationale goes like this: when an allocation is done with
FASTWRITE, it "reserves" the vdev until the data is written. Until then,
all future allocations will naturally avoid this vdev, even after a full
rotation of the rotor. As a result, pending synchronous writes at a
given point in time will be nicely spread over all vdevs. This contrasts
with the previous algorithm, which is based on the implicit assumption
that blocks are written instantaneously after they're allocated.

metaslab_fastwrite_mark() and metaslab_fastwrite_unmark() are used to
manually increase or decrease fastwrite counters, respectively. They
should be used with caution, as there is no per-BP tracking of fastwrite
information, so leaks and "double-unmarks" are possible. There is,
however, an assert in the vdev teardown code which will fire if the
fastwrite counters are not zero when the pool is exported or the vdev
removed. Note that as stated above, marking is also done implicitly by
metaslab_alloc().

ZIO also got a new FASTWRITE flag; when it is used, ZIO will pass it to
the metaslab when allocating (assuming ZIO does the allocation, which is
only true in the case of dmu_sync). This flag will also trigger an
unmark when zio_done() fires.

A side-effect of the new algorithm is that when a ZIL stops being used,
its last block can stay in the pending state (allocated but not yet
written) for a long time, polluting the fastwrite counters. To avoid
that, I've implemented a somewhat crude but working solution which
unmarks these pending blocks in zil_sync(), thus guaranteeing that
lingering fastwrites will get pruned at each sync event.

The best performance improvements are observed with pools using a large
number of top-level vdevs and heavy synchronous write workflows
(especially indirect writes and concurrent writes from multiple ZILs).
Real-life testing shows a 200% to 300% performance increase with
indirect writes and various commit sizes.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1013
2012-10-17 08:56:41 -07:00
Etienne Dechamps 142e6dd100 Add atomic_sub_* functions to libspl.
Both the SPL and the ZFS libspl export most of the atomic_* functions,
except atomic_sub_* functions which are only exported by the SPL, not by
libspl. This patch remedies that by implementing atomic_sub_* functions
in libspl.
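
A userspace sketch of the missing 64-bit primitives, written here with GCC atomic builtins (the actual libspl implementation may use a different mechanism):

  #include <stdint.h>

  void
  atomic_sub_64(volatile uint64_t *target, int64_t delta)
  {
          (void) __sync_sub_and_fetch(target, delta);
  }

  uint64_t
  atomic_sub_64_nv(volatile uint64_t *target, int64_t delta)
  {
          return (__sync_sub_and_fetch(target, delta));
  }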

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1013
2012-10-17 08:56:37 -07:00
Brian Behlendorf 82f46731fd Merge branch 'condvar'
Auditing the code to verify that all instances of cv_signal() and
cv_broadcast() are called under the proper associated mutex turned
up several races. None of these have been conclusively seen in the
wild but the following patch set resolves them.

For reference, from the cv_signal(9F) man page:

  cv_signal() signals the condition and wakes one blocked thread.
  All blocked threads can be unblocked by calling cv_broadcast().
  You must acquire the mutex passed into cv_wait() before calling
  cv_signal() or cv_broadcast()
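
A minimal sketch of the correct pattern the man page describes:

  #include <sys/mutex.h>          /* SPL kmutex_t */
  #include <sys/condvar.h>        /* SPL kcondvar_t */

  /* Take the mutex associated with the cv before signaling so a
   * concurrent cv_wait() cannot be missed. */
  static void
  wake_waiters(kmutex_t *mp, kcondvar_t *cvp, boolean_t *donep)
  {
          mutex_enter(mp);
          *donep = B_TRUE;
          cv_broadcast(cvp);
          mutex_exit(mp);
  }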

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1048
2012-10-17 08:48:29 -07:00
Brian Behlendorf a298dbde92 Condition variable usage, zp->r_{rd,wr}_cv
The following incorrect usage of cv_broadcast() was caught by
code inspection.  The cv_broadcast() function must be called
under the associated mutex to prevent racing with cv_wait().

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-10-15 16:02:03 -07:00
Brian Behlendorf 8c0712fd88 Condition variable usage, zilog->zl_cv_batch
The following incorrect usage of cv_signal() and cv_broadcast()
was caught by code inspection.  The cv_signal() and cv_broadcast()
functions must be called under the associated mutex to prevent
racing with cv_wait().

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-10-15 16:01:58 -07:00
Brian Behlendorf 99db9bfde7 Condition variable usage, zevent_cv
The following incorrect usage of cv_broadcast() was caught by
code inspection.  The cv_broadcast() function must be called
under the associated mutex to prevent racing with cv_wait().

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-10-15 16:01:54 -07:00
Andrew Reid 6cb7ab069d Do not return /dev/loop-control in unused_loop_device
The function unused_loop_device in /usr/libexec/zfs/common.sh
returns /dev/loop-control on the first call. This device is NOT
a loop device (https://github.com/torvalds/linux/commit/770fe30);
it is a control device.  This in turn causes the script zconfig.sh
to fail with:

  zpool-create.sh: Error 1 creating /tmp/zpool-vdev0 ->
  /dev/loop-control loopback

The patch makes the function return /dev/loop[0-9]* which are
loop devices.

Signed-off-by: Andrew Reid <ColdCanuck@nailedtotheperch.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #797
2012-10-15 10:02:42 -07:00
Massimo Maggi 6f53a6a229 Switch KM_SLEEP to KM_PUSHPAGE
In this particular instance the allocation occurred in the context
of sys_msync()->...->zpl_putpage() where we must be careful not to
initiate additional I/O.
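
A minimal sketch of the substitution (not the literal patch); kmem_alloc() and KM_PUSHPAGE are the SPL interfaces involved.

  #include <sys/kmem.h>           /* SPL kmem_alloc(), KM_PUSHPAGE */

  /* Allocations reachable from writeback (e.g. zpl_putpage()) must not
   * use KM_SLEEP, which could recurse into the filesystem and deadlock. */
  static void *
  putpage_alloc(size_t size)
  {
          return (kmem_alloc(size, KM_PUSHPAGE));
  }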

Signed-off-by: Massimo Maggi <massimo@mmmm.it>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1038
2012-10-15 09:32:38 -07:00
Brian Behlendorf c418410393 Limit zfs_vdev_aggregation_limit to SPA_MAXBLOCKSIZE
Prevent users from setting the zfs_vdev_aggregation_limit tuning
larger than SPA_MAXBLOCKSIZE.
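
The clamp itself is simple; a sketch (not the exact change), assuming the usual MIN() macro from the ZFS headers:

  extern int zfs_vdev_aggregation_limit;          /* module tunable */

  /* Never aggregate I/O beyond the largest possible block size. */
  static uint64_t
  vdev_aggregation_limit(void)
  {
          return (MIN(zfs_vdev_aggregation_limit, SPA_MAXBLOCKSIZE));
  }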

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #520
2012-10-15 09:28:43 -07:00