Merge commit 'refs/top-bases/gcc-init-pragmas' into gcc-init-pragmas

Brian Behlendorf 2008-12-01 16:18:09 -08:00
commit 9ac0ad5671
21 changed files with 0 additions and 1511 deletions

4
AUTHORS

@@ -1,4 +0,0 @@
Brian Behlendorf <behlendorf1@llnl.gov>,
Herb Wartens <wartens2@llnl.gov>,
Jim Garlick <garlick@llnl.gov>,
Ricardo M. Correia <Ricardo.M.Correia@sun.com>

114
ChangeLog

@@ -1,114 +0,0 @@
2008-11-19 Brian Behlendorf <behlendorf1@llnl.gov>
* : Tag zfs-0.4.0
* : ZFS project migrated from Subversion which leveraged a
quilt based patch stack to Git and a TopGit managed patch
stack. The new method treats all patches as Git branches
which can be more easily shared for distributed development.
Consult the top level GIT file for detailed information on
how to properly develop for this package using Git+TopGit.
2008-11-12 Brian Behlendorf <behlendorf1@llnl.gov>
* : Tag zfs-0.3.4
* zfs-07-create-dev-zfs.patch:
Ricardo M. Correia <Ricardo.M.Correia@sun.com>
- Make libzfs create /dev/zfs if it doesn't exist.
* zfs-05-check-zvol-size.patch:
Ricardo M. Correia <Ricardo.M.Correia@sun.com>
- Properly check zvol size under Linux.
* zfs-04-no-openat-fdopendir.patch:
Ricardo M. Correia <Ricardo.M.Correia@sun.com>
- Do not use openat() and fdopendir() since they are not available
on older systems.
* zfs-03-fix-bio-sync.patch:
Ricardo M. Correia <Ricardo.M.Correia@sun.com>
- Fix memory corruption in RHEL4 due to synchronous IO becoming
asynchronous.
2008-11-06 Brian Behlendorf <behlendorf1@llnl.gov>
* zfs-02-zpios-fix-stuck-thread-memleak.patch:
Ricardo M. Correia <Ricardo.M.Correia@sun.com>
- Fix stuck threads and memory leaks when errors occur while writing.
* zfs-01-zpios-arg-corruption.patch:
Ricardo M. Correia <Ricardo.M.Correia@sun.com>
- Fix zpios cmd line argument corruption problem.
* zfs-00-minor-fixes.patch:
Ricardo M. Correia <Ricardo.M.Correia@sun.com>
- Minor build system improvements
- Minor script improvements
- Create a full copy and not a link tree with quilt
- KPIOS_MAJOR changed from 231 to 232
- BIO_RW_BARRIER flag removed from IO request
2008-06-30 Brian Behlendorf <behlendorf1@llnl.gov>
* : Tag zfs-0.3.3
* : Minor script updates and tweaks to be compatible with
the latest version of the SPL.
2008-06-13 Brian Behlendorf <behlendorf1@llnl.gov>
* vdev_disk.diff: Replace vdev_disk implementation which was
based on the kmalloc'ed logical address space with a version
which works with vmalloc'ed memory in the virtual address space.
This was done to support the new SPL slab implementation which
is based on virtual addresses to avoid the need for contiguously
allocated memory.
2008-06-05 Brian Behlendorf <behlendorf1@llnl.gov>
* arc-vm-integration.diff: Reduce maximum default arc memory
usage to 1/4 of total system memory. Because all the bulk data
is still allocated on the slab, memory fragmentation is a serious
concern. To address this in the short term we simply need to
leave lots of free memory.
* fix-stack.diff: First step towards reducing stack usage so
we can run the full ZFS stack using a stock kernel.
2008-06-04 Brian Behlendorf <behlendorf1@llnl.gov>
* : Tag zfs-0.3.2
* : Extensive improvements to the build system to detect kernel
API changes so we can flexibly build with a wider range of kernel
versions. The code has now been tested with the 2.6.18-32chaos
and 2.6.25.3-18.fc9 kernels, however we should also be compatible
with other kernels in the range of 2.6.18-2.6.25. The only
remaining issue preventing us from running with a stock
kernel is ZFS stack usage.
2008-05-21 Brian Behlendorf <behlendorf1@llnl.gov>
* : Tag zfs-0.3.1
* : License headers including UCRL added for release.
2008-05-21 Brian Behlendorf <behlendorf1@llnl.gov>
* : Tag zfs-0.3.0
* configure.ac: Improved autotools support and configurable debug.
2008-05-15 Brian Behlendorf <behlendorf1@llnl.gov>
* : Updating original ZFS sources to build 89 which
includes the new write throttling changes plus support
for using ZFS as your root device. Neither will work exactly
right without some more work, but this gets us much closer to
the latest source.
2008-02-28 Brian Behlendorf <behlendorf1@llnl.gov>
* : First attempt based on SPL module and zfs-lustre sources

186
GIT

@@ -1,186 +0,0 @@
=========================== WHY USE GIT+TOPGIT? ==========================
Three major concerns were on our mind when setting up this project.
o First we needed to structure the project in such a way that it would be
easy to rebase all of our changes on the latest official ZFS release
from Sun. We absolutely need to be able to benefit from the upstream
improvements and not get locked in to an old version of the code base.
o Secondly, we wanted to be able to easily manage our changes in terms
of a patch stack or graph. This allows us to easily isolate specific
changes and push them upstream for inclusion. It also allows us to
easily update or drop specific changes based on what occurs upstream.
o Thirdly we needed our DVCS to be integrated with the management of this
patch stack or graph. We have tried other methods in the past such as
SVN+Quilt but have found managing the patch stack becomes cumbersome.
By using Git+TopGit to more tightly integrate our patches into the repo
we expect several benefits. One of the most important will be the
ability to easily work on the patches with a distributed development
team; additionally, the repo can track patch history, and we can utilize
Git to merge patches and resolve conflicts.
TopGit is designed to specifically address these concerns by providing
tools to simplify the handling of large numbers of interdependent topic
branches. When using a TopGit aware repo every topic branch represents
a 'patch', and each branch records the branches it depends on. The union
of all these branches is your final source base.
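For example, each topic branch carries its metadata in two versioned
files: .topdeps lists the branches it depends on and .topmsg holds the
patch description used by 'tg patch'. A quick way to inspect a branch
(the branch name and output below are only illustrative):
> git checkout feature-commit-cb   # Checkout a topic branch
> cat .topdeps                     # Branches this patch depends on
master
> cat .topmsg                      # Patch header sent upstream
From: Your Name <you@example.com>
Subject: [PATCH] feature commit cb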
========================= SETTING UP GIT+TOPGIT ==========================
First off you need to install a Git package on your system. For my
purposes I have been working on a RHEL5 system with git version 1.5.4.5
installed and it has been working well. You will also need to go get
the latest version of TopGit which likely is not packaged nicely so you
will need to build it from source. You can use Git to clone TopGit
from the official site here and you're all set:
> git clone git://repo.or.cz/w/topgit.git
> make
> make install # Default installs to $(HOME)
========================== TOPGIT AND ZFS ================================
Once you have Git and TopGit installed you will want to clone a copy of
the Linux ZFS repo. While this project does not yet have a public home
it hopefully will some day. In the meantime, if you have VPN access to
LLNL you can clone the latest official repo here. Cloning a TopGit
controlled repo is very similar to cloning a normal Git repo, but you
need to remember to use 'tg remote' to populate all topic branches.
> git clone http://eris.llnl.gov/git/zfs.git zfs
> cd zfs
> tg remote --populate origin
Now that you have the Linux ZFS repo the first thing you will probably
want to do is have a look at all the topic branches. TopGit provides
a summary command which shows all the branches and a brief summary for
each branch obtained from the .topmsg files.
> tg summary
0 feature-branch [PATCH] feature-branch
feature-commit-cb [PATCH] feature commit cb
feature-zap-cursor-to-key [PATCH] feature zap cursor to key
...
By convention all TopGit branches are usually prefixed with 't/';
however, I have chosen not to do this for simplicity. A different
convention I have adopted is to tag the topmost TopGit branch as 'top'
for easy reference.
This provides a consistent label to be used when you need to reference the
branch which contains the union of all topic branches.
One thing you may also notice about the 'tg summary' command is that it
does not show the branches in dependency order. This is because TopGit
allows each branch to express multiple dependencies as a DAG. Initially
this seemed like an added complication which I planned to avoid by just
implementing a stack using the graph. However, this ended up being
problematic because with a stack, when a change was made to a branch
near the base, it was a very expensive operation to merge the change up
to the top of the stack. By defining the dependencies as a graph it is
possible to keep the depth much shallower, thus minimizing the merging.
It has also proved insightful as to each patch's actual dependencies.
To see the dependencies you will need to use the --graphviz option and pipe
the result to dot for display. The following command works fairly well for
me. Longer term it would be nice to update this option to use preferred
config options stored in the repo.
> tg summary --graphviz | dot -Txlib -Nfontsize=8
========================= UPDATING A TOPIC BRANCH ========================
Updating a topic branch in TopGit is pretty straightforward, but there
are a few rules you need to be aware of. The basic process involves
checking out the relevant topic branch where the changes need to be made,
making the changes, committing the changes to the branch and then merging
those changes into dependent branches. TopGit provides some tools to make
this pretty easy, although it may be a little sluggish depending on how many
dependent branches are impacted by the change. Here is an example:
> git checkout modify-topic-branch # Checkout the proper branch
> ...update branch... # Update the branch
> git commit -a # Commit your changes
> git checkout top # Checkout the top branch
> tg update # Recursively merge in new branch
Assuming your change does not introduce any conflicts, you're done. All
branches which depend on your change will have had the change merged in.
If your change introduced a conflict you will need to resolve the
conflict and then continue on with the update.
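If 'tg update' does stop on a conflict, a minimal sketch of the recovery
path looks like the following (the conflicting file name is hypothetical):
> ...resolve conflict...           # Edit the conflicted files
> git add module/zfs/arc.c         # Mark the conflict as resolved
> git commit                       # Complete the in-progress merge
> tg update                        # Resume the recursive update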
========================== ADDING A TOPIC BRANCH =========================
Adding a topic branch in TopGit can be pretty straightforward. If you're
adding a non-conflicting patch in parallel with other patches of the same
type, then things are pretty easy and TopGit does all the work.
> git co existing-topic-branch # Checkout the branch to add after
> tg create new-topic-branch # Create a new topic branch
> ...update .topmsg... # Update the branch message
> ...create patch... # Update with your changes
> git commit -a # Commit your changes
> git co dependent-topic-branch # Checkout dependent branch
> tg depend add new-topic-branch # Update dependencies
> git checkout top # Checkout the top branch
> tg update # Recursively merge in new branch
If you need to add your patch in series with another change, things are
a little more complicated. In this case TopGit does not yet support removing
dependencies so you will need to do it by hand, as follows.
> git co existing-topic-branch # Checkout the branch to add after
> tg create new-topic-branch # Create a new topic branch
> ...update .topmsg... # Update the branch message
> ...create patch... # Update with your changes
> git commit -a # Commit your changes
> git co dependent-topic-branch # Checkout dependent branch
> ...update .topdeps... # Manually update dependencies
> git commit -a # Commit your changes
> tg update # TopGit update
> git checkout top # Checkout the top branch
> tg update # Recursively merge in new branch
Once you're done, I find it is a good idea to view the repo using the
'tg summary --graphviz' command and verify the updated dependency graph.
========================= REMOVING A TOPIC BRANCH ========================
Removing a topic branch in TopGit is also currently not very easy. To remove
a dependent branch the basic process is to commit a patch which reverts all
changes on the branch. Then that reversion must be merged into all
dependent branches, the dependencies manually updated, and finally the
branch removed.
If the branch is not empty you will not be able to remove it.
> git co delete-topic-branch # Checkout the branch to delete
> tg patch | patch -R -p1 # Revert all branch changes
> git commit -a # Commit your changes
> git checkout top # Checkout the top branch
> tg update # Recursively merge revert
> git co dependent-topic-branch # Checkout dependent branch
> ...update .topdeps... # Manually update dependencies
> git commit -a # Commit your changes
> tg delete delete-topic-branch # Delete empty topic branch
Once you're done, I find it is a good idea to view the repo using the
'tg summary --graphviz' command and verify the updated dependency graph.
============================ TOPGIT TODO =================================
TopGit is still a young package which seems to be under active development
by its author. It provides the minimum set of commands needed but there
are clearly areas which simply have not yet been implemented. My short
list of features includes:
o 'tg summary --deps', option to display a text version of the topic
branch dependency DAG.
o 'tg depend list', list all topic branch dependencies.
o 'tg depend delete', cleanly remove a topic branch dependency.
o 'tg create', cleanly insert a topic branch in the middle
of the graph and properly take care of updating all dependencies.
o 'tg delete', cleanly delete a topic branch in the middle
of the graph and properly take care of updating all dependencies.

74
README

@@ -1,74 +0,0 @@
============================ ZFS KERNEL BUILD ============================
1) Build the SPL (Solaris Porting Layer) module which is designed to
provide many Solaris APIs in the Linux kernel which are needed
by ZFS. To build the SPL:
tar -xzf spl-x.y.z.tgz
cd spl-x.y.z
./configure --with-linux=<kernel src>
make
make check <as root>
2) Build ZFS. This port is based on build 89 of ZFS from OpenSolaris.
You will need to have both the kernel and SPL source available.
To build ZFS for use as a Linux kernel module (default):
tar -xzf zfs-x.y.z.tgz
cd zfs-x.y.z
./configure --with-linux=<kernel src> \
--with-spl=<spl src>
make
make check <as root>
========================= ZFS USER LIBRARY BUILD =========================
1) Build ZFS. This port is based on build 89 of ZFS from OpenSolaris.
To build ZFS as a userspace library:
tar -xzf zfs-x.y.z.tgz
cd zfs-x.y.z
./configure --zfsconfig=user
make
make check <as root>
============================ ZFS LUSTRE BUILD ============================
1) Build the SPL (Solaris Porting Layer) module which is designed to
provide many Solaris APIs in the Linux kernel which are needed
by ZFS. To build the SPL:
tar -xzf spl-x.y.z.tgz
cd spl-x.y.z
./configure --with-linux=<kernel src>
make
make check <as root>
2) Build ZFS. This port is based on build 89 of ZFS from OpenSolaris.
To build ZFS as a userspace library for use by a Lustre filesystem:
tar -xzf zfs-x.y.z.tgz
cd zfs-x.y.z
./configure --zfsconfig=lustre \
--with-linux=<kernel src> \
--with-spl=<spl src>
make
make check <as root>
3) Provided is an in-kernel test application called kpios which can be
used to simulate a Lustre IO load. It may be used as a stress test
or as a performance test to measure your configuration. To simplify
testing there are scripts provided in the scripts/ directory. A
single test can be run as follows:
WARNING: You MUST update DEVICES in the create-zpool.sh script
to reference the devices you wish to use.
cd scripts
./load-zfs.sh # Load the ZFS/SPL module stack
./create-zpool.sh # Modify DEVICES to list your zpool devices
./zpios.sh # Modify for your particular kpios test
./unload-zfs.sh # Unload the ZFS/SPL module stack
Enjoy,
Brian Behlendorf <behlendorf1@llnl.gov>

17
scripts/check.sh

@@ -1,17 +0,0 @@
#!/bin/bash
prog=check.sh
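# Sanity test: load and then unload the complete ZFS/SPL module stack.
# Must be run as root.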
die() {
echo "${prog}: $1" >&2
exit 1
}
if [ $(id -u) != 0 ]; then
die "Must run as root"
fi
./load-zfs.sh || die ""
./unload-zfs.sh || die ""
exit 0

42
scripts/create-zpool.sh

@@ -1,42 +0,0 @@
#!/bin/bash
prog=create-zpool.sh
. ../.script-config
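# Create a zpool named 'lustre' from the DEVICES list selected below,
# then report its configuration with 'zpool list' and 'zpool status'.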
# Single disk ilc dev nodes
DEVICES="/dev/sda"
# All disks in a Thumper config
#DEVICES="/dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
# /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl \
# /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr \
# /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx \
# /dev/sdy /dev/sdz /dev/sdaa /dev/sdab /dev/sdac /dev/sdad \
# /dev/sdae /dev/sdaf /dev/sdag /dev/sdah /dev/sdai /dev/sdaj \
# /dev/sdak /dev/sdal /dev/sdam /dev/sdan /dev/sdao /dev/sdap \
# /dev/sdaq /dev/sdar /dev/sdas /dev/sdat /dev/sdau /dev/sdav"
# Sun style disk in Thumper config
#DEVICES="/dev/sda /dev/sdb /dev/sdc \
# /dev/sdi /dev/sdj /dev/sdk \
# /dev/sdr /dev/sds /dev/sdt \
# /dev/sdz /dev/sdaa /dev/sdab"
# Promise JBOD config (ilc23)
#DEVICES="/dev/sdb /dev/sdc /dev/sdd \
# /dev/sde /dev/sdf /dev/sdg \
# /dev/sdh /dev/sdi /dev/sdj \
# /dev/sdk /dev/sdl /dev/sdm"
echo
echo "zpool create lustre <devices>"
${CMDDIR}/zpool/zpool create -F lustre ${DEVICES}
echo
echo "zpool list"
${CMDDIR}/zpool/zpool list
echo
echo "zpool status lustre"
${CMDDIR}/zpool/zpool status lustre

58
scripts/load-zfs.sh

@@ -1,58 +0,0 @@
#!/bin/bash
prog=load-zfs.sh
. ../.script-config
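# Load the module stack in dependency order: spl, zlib_deflate, then
# the ZFS modules. $1 is passed through to spl.ko and $2 to zpool.ko
# as module options.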
spl_options=$1
zpool_options=$2
spl_module=${SPLBUILD}/modules/spl/spl.ko
zlib_module=/lib/modules/${KERNELSRCVER}/kernel/lib/zlib_deflate/zlib_deflate.ko
zavl_module=${ZFSBUILD}/lib/libavl/zavl.ko
znvpair_module=${ZFSBUILD}/lib/libnvpair/znvpair.ko
zport_module=${ZFSBUILD}/lib/libport/zport.ko
zcommon_module=${ZFSBUILD}/lib/libzcommon/zcommon.ko
zpool_module=${ZFSBUILD}/lib/libzpool/zpool.ko
zctl_module=${ZFSBUILD}/lib/libdmu-ctl/zctl.ko
zpios_module=${ZFSBUILD}/lib/libzpios/zpios.ko
die() {
echo "${prog}: $1" >&2
exit 1
}
load_module() {
echo "Loading $1"
/sbin/insmod $* || die "Failed to load $1"
}
if [ $(id -u) != 0 ]; then
die "Must run as root"
fi
if /sbin/lsmod | egrep -q "^spl|^zavl|^znvpair|^zport|^zcommon|^zlib_deflate|^zpool"; then
die "Must start with modules unloaded"
fi
if [ ! -f ${zavl_module} ] ||
[ ! -f ${znvpair_module} ] ||
[ ! -f ${zport_module} ] ||
[ ! -f ${zcommon_module} ] ||
[ ! -f ${zpool_module} ]; then
die "Source tree must be built, run 'make'"
fi
load_module ${spl_module} ${spl_options}
load_module ${zlib_module}
load_module ${zavl_module}
load_module ${znvpair_module}
load_module ${zport_module}
load_module ${zcommon_module}
load_module ${zpool_module} ${zpool_options}
load_module ${zctl_module}
load_module ${zpios_module}
sleep 1
echo "Successfully loaded ZFS module stack"
exit 0

128
scripts/profile-kpios-disk.sh

@@ -1,128 +0,0 @@
#!/bin/bash
# profile-kpios-disk.sh
#
# /proc/diskstats <after skipping major/minor>
# Field 1 -- device name
# Field 2 -- # of reads issued
# Field 3 -- # of reads merged
# Field 4 -- # of sectors read
# Field 5 -- # of milliseconds spent reading
# Field 6 -- # of writes completed
# Field 7 -- # of writes merged
# Field 8 -- # of sectors written
# Field 9 -- # of milliseconds spent writing
# Field 10 -- # of I/Os currently in progress
# Field 11 -- # of milliseconds spent doing I/Os
# Field 12 -- weighted # of milliseconds spent doing I/Os
RUN_PIDS=${0}
RUN_LOG_DIR=${1}
RUN_ID=${2}
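# Print a CSV table for one /proc/diskstats field: one row per sample
# interval, one column per sd* device, each cell the delta between
# consecutive snapshots, plus a per-row total.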
create_table() {
local FIELD=$1
local ROW_M=()
local ROW_N=()
local HEADER=1
local STEP=1
for DISK_FILE in `ls -r --sort=time --time=ctime ${RUN_LOG_DIR}/${RUN_ID}/disk-[0-9]*`; do
ROW_M=( ${ROW_N[@]} )
ROW_N=( `cat ${DISK_FILE} | grep sd | cut -c11- | cut -f${FIELD} -d' ' | tr "\n" "\t"` )
if [ $HEADER -eq 1 ]; then
echo -n "step, "
cat ${DISK_FILE} | grep sd | cut -c11- | cut -f1 -d' ' | tr "\n" ", "
echo "total"
HEADER=0
fi
if [ ${#ROW_M[@]} -eq 0 ]; then
continue
fi
if [ ${#ROW_M[@]} -ne ${#ROW_N[@]} ]; then
echo "Badly formatted profile data in ${DISK_FILE}"
break
fi
TOTAL=0
echo -n "${STEP}, "
for (( i=0; i<${#ROW_N[@]}; i++ )); do
DELTA=`echo "${ROW_N[${i}]}-${ROW_M[${i}]}" | bc`
let TOTAL=${TOTAL}+${DELTA}
echo -n "${DELTA}, "
done
echo "${TOTAL}, "
let STEP=${STEP}+1
done
}
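# Same as create_table() but converts each sector-count delta to MB/s:
# (delta * 512 bytes) / TIME seconds, scaled to megabytes.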
create_table_mbs() {
local FIELD=$1
local TIME=$2
local ROW_M=()
local ROW_N=()
local HEADER=1
local STEP=1
for DISK_FILE in `ls -r --sort=time --time=ctime ${RUN_LOG_DIR}/${RUN_ID}/disk-[0-9]*`; do
ROW_M=( ${ROW_N[@]} )
ROW_N=( `cat ${DISK_FILE} | grep sd | cut -c11- | cut -f${FIELD} -d' ' | tr "\n" "\t"` )
if [ $HEADER -eq 1 ]; then
echo -n "step, "
cat ${DISK_FILE} | grep sd | cut -c11- | cut -f1 -d' ' | tr "\n" ", "
echo "total"
HEADER=0
fi
if [ ${#ROW_M[@]} -eq 0 ]; then
continue
fi
if [ ${#ROW_M[@]} -ne ${#ROW_N[@]} ]; then
echo "Badly formatted profile data in ${DISK_FILE}"
break
fi
TOTAL=0
echo -n "${STEP}, "
for (( i=0; i<${#ROW_N[@]}; i++ )); do
DELTA=`echo "${ROW_N[${i}]}-${ROW_M[${i}]}" | bc`
MBS=`echo "scale=2; ((${DELTA}*512)/${TIME})/(1024*1024)" | bc`
TOTAL=`echo "scale=2; ${TOTAL}+${MBS}" | bc`
echo -n "${MBS}, "
done
echo "${TOTAL}, "
let STEP=${STEP}+1
done
}
echo
echo "Reads issued per device"
create_table 2
echo
echo "Reads merged per device"
create_table 3
echo
echo "Sectors read per device"
create_table 4
echo "MB/s per device"
create_table_mbs 4 3
echo
echo "Writes issued per device"
create_table 6
echo
echo "Writes merged per device"
create_table 7
echo
echo "Sectors written per device"
create_table 8
echo "MB/s per device"
create_table_mbs 8 3
exit 0

130
scripts/profile-kpios-pids.sh

@@ -1,130 +0,0 @@
#!/bin/bash
# profile-kpios-pids.sh
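# Reduce the per-pid stat snapshots gathered by profile-kpios.sh into a
# CSV of CPU time (as a percentage of each interval) consumed by every
# class of ZFS thread.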
RUN_PIDS=${0}
RUN_LOG_DIR=${1}
RUN_ID=${2}
ROW_M=()
ROW_N=()
ROW_N_SCHED=()
ROW_N_WAIT=()
HEADER=1
STEP=1
for PID_FILE in `ls -r --sort=time --time=ctime ${RUN_LOG_DIR}/${RUN_ID}/pids-[0-9]*`; do
ROW_M=( ${ROW_N[@]} )
ROW_N=( 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 )
ROW_N_SCHED=( `cat ${PID_FILE} | cut -f15 -d' ' | tr "\n" "\t"` )
ROW_N_WAIT=( `cat ${PID_FILE} | cut -f17 -d' ' | tr "\n" "\t"` )
ROW_N_NAMES=( `cat ${PID_FILE} | cut -f2 -d' ' | cut -f2 -d'(' |
cut -f1 -d')' | cut -f1 -d'/' | tr "\n" "\t"` )
for (( i=0; i<${#ROW_N_SCHED[@]}; i++ )); do
SUM=`echo "${ROW_N_WAIT[${i}]}+${ROW_N_SCHED[${i}]}" | bc`
case ${ROW_N_NAMES[${i}]} in
zio_taskq) IDX=0;;
zio_req_nul) IDX=1;;
zio_irq_nul) IDX=2;;
zio_req_rd) IDX=3;;
zio_irq_rd) IDX=4;;
zio_req_wr) IDX=5;;
zio_irq_wr) IDX=6;;
zio_req_fr) IDX=7;;
zio_irq_fr) IDX=8;;
zio_req_cm) IDX=9;;
zio_irq_cm) IDX=10;;
zio_req_ctl) IDX=11;;
zio_irq_ctl) IDX=12;;
txg_quiesce) IDX=13;;
txg_sync) IDX=14;;
txg_timelimit) IDX=15;;
arc_reclaim) IDX=16;;
l2arc_feed) IDX=17;;
kpios_io) IDX=18;;
*) continue;;
esac
let ROW_N[${IDX}]=${ROW_N[${IDX}]}+${SUM}
done
if [ $HEADER -eq 1 ]; then
echo "step, zio_taskq, zio_req_nul, zio_irq_nul, " \
"zio_req_rd, zio_irq_rd, zio_req_wr, zio_irq_wr, " \
"zio_req_fr, zio_irq_fr, zio_req_cm, zio_irq_cm, " \
"zio_req_ctl, zio_irq_ctl, txg_quiesce, txg_sync, " \
"txg_timelimit, arc_reclaim, l2arc_feed, kpios_io, " \
"idle"
HEADER=0
fi
if [ ${#ROW_M[@]} -eq 0 ]; then
continue
fi
if [ ${#ROW_M[@]} -ne ${#ROW_N[@]} ]; then
echo "Badly formatted profile data in ${PID_FILE}"
break
fi
# Original values are in jiffies and we expect HZ to be 1000
# on most 2.6 systems thus we divide by 10 to get a percentage.
IDLE=1000
echo -n "${STEP}, "
for (( i=0; i<${#ROW_N[@]}; i++ )); do
DELTA=`echo "${ROW_N[${i}]}-${ROW_M[${i}]}" | bc`
DELTA_PERCENT=`echo "scale=1; ${DELTA}/10" | bc`
let IDLE=${IDLE}-${DELTA}
echo -n "${DELTA_PERCENT}, "
done
IDLE_PERCENT=`echo "scale=1; ${IDLE}/10" | bc`
echo "${IDLE_PERCENT}"
let STEP=${STEP}+1
done
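# NOTE: The early exit below disables the legacy per-pid summary that
# follows; it appears to be kept for reference only.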
exit
echo
echo "Percent of total system time per pid"
for PID_FILE in `ls -r --sort=time --time=ctime ${RUN_LOG_DIR}/${RUN_ID}/pids-[0-9]*`; do
ROW_M=( ${ROW_N[@]} )
ROW_N_SCHED=( `cat ${PID_FILE} | cut -f15 -d' ' | tr "\n" "\t"` )
ROW_N_WAIT=( `cat ${PID_FILE} | cut -f17 -d' ' | tr "\n" "\t"` )
for (( i=0; i<${#ROW_N_SCHED[@]}; i++ )); do
ROW_N[${i}]=`echo "${ROW_N_WAIT[${i}]}+${ROW_N_SCHED[${i}]}" | bc`
done
if [ $HEADER -eq 1 ]; then
echo -n "step, "
cat ${PID_FILE} | cut -f2 -d' ' | tr "\n" ", "
echo
HEADER=0
fi
if [ ${#ROW_M[@]} -eq 0 ]; then
continue
fi
if [ ${#ROW_M[@]} -ne ${#ROW_N[@]} ]; then
echo "Badly formatted profile data in ${PID_FILE}"
break
fi
# Original values are in jiffies and we expect HZ to be 1000
# on most 2.6 systems thus we divide by 10 to get a percentage.
echo -n "${STEP}, "
for (( i=0; i<${#ROW_N[@]}; i++ )); do
DELTA=`echo "scale=1; (${ROW_N[${i}]}-${ROW_M[${i}]})/10" | bc`
echo -n "${DELTA}, "
done
echo
let STEP=${STEP}+1
done
exit 0

67
scripts/profile-kpios-post.sh

@@ -1,67 +0,0 @@
#!/bin/bash
prog=profile-kpios-post.sh
. ../.script-config
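# Post-run hook for zpios (--postrun): stop the background profiling
# script via SIGHUP, capture the ARC and vdev cache kstats, and generate
# the per-pid and per-disk CSV summaries.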
RUN_POST=${0}
RUN_PHASE=${1}
RUN_LOG_DIR=${2}
RUN_ID=${3}
RUN_POOL=${4}
RUN_CHUNK_SIZE=${5}
RUN_REGION_SIZE=${6}
RUN_THREAD_COUNT=${7}
RUN_REGION_COUNT=${8}
RUN_OFFSET=${9}
RUN_REGION_NOISE=${10}
RUN_CHUNK_NOISE=${11}
RUN_THREAD_DELAY=${12}
RUN_FLAGS=${13}
RUN_RESULT=${14}
PROFILE_KPIOS_PIDS_BIN=/home/behlendo/src/zfs/scripts/profile-kpios-pids.sh
PROFILE_KPIOS_PIDS_LOG=${RUN_LOG_DIR}/${RUN_ID}/pids-summary.csv
PROFILE_KPIOS_DISK_BIN=/home/behlendo/src/zfs/scripts/profile-kpios-disk.sh
PROFILE_KPIOS_DISK_LOG=${RUN_LOG_DIR}/${RUN_ID}/disk-summary.csv
PROFILE_KPIOS_ARC_LOG=${RUN_LOG_DIR}/${RUN_ID}/arcstats
PROFILE_KPIOS_VDEV_LOG=${RUN_LOG_DIR}/${RUN_ID}/vdev_cache_stats
KERNEL_BIN="/lib/modules/`uname -r`/kernel/"
SPL_BIN="${SPLBUILD}/modules/spl/"
ZFS_BIN="${ZFSBUILD}/lib/"
OPROFILE_SHORT_ARGS="-a -g -l -p ${KERNEL_BIN},${SPL_BIN},${ZFS_BIN}"
OPROFILE_LONG_ARGS="-d -a -g -l -p ${KERNEL_BIN},${SPL_BIN},${ZFS_BIN}"
OPROFILE_LOG=${RUN_LOG_DIR}/${RUN_ID}/oprofile.txt
OPROFILE_SHORT_LOG=${RUN_LOG_DIR}/${RUN_ID}/oprofile-short.txt
OPROFILE_LONG_LOG=${RUN_LOG_DIR}/${RUN_ID}/oprofile-long.txt
PROFILE_PID=${RUN_LOG_DIR}/${RUN_ID}/pid
if [ "${RUN_PHASE}" != "post" ]; then
exit 1
fi
# opcontrol --stop >>${OPROFILE_LOG} 2>&1
# opcontrol --dump >>${OPROFILE_LOG} 2>&1
kill -s SIGHUP `cat ${PROFILE_PID}`
rm -f ${PROFILE_PID}
# opreport ${OPROFILE_SHORT_ARGS} >${OPROFILE_SHORT_LOG} 2>&1
# opreport ${OPROFILE_LONG_ARGS} >${OPROFILE_LONG_LOG} 2>&1
# opcontrol --deinit >>${OPROFILE_LOG} 2>&1
cat /proc/spl/kstat/zfs/arcstats >${PROFILE_KPIOS_ARC_LOG}
cat /proc/spl/kstat/zfs/vdev_cache_stats >${PROFILE_KPIOS_VDEV_LOG}
# Summarize system time per pid
${PROFILE_KPIOS_PIDS_BIN} ${RUN_LOG_DIR} ${RUN_ID} >${PROFILE_KPIOS_PIDS_LOG}
# Summarize per device performance
${PROFILE_KPIOS_DISK_BIN} ${RUN_LOG_DIR} ${RUN_ID} >${PROFILE_KPIOS_DISK_LOG}
exit 0

69
scripts/profile-kpios-pre.sh

@@ -1,69 +0,0 @@
#!/bin/bash
# profile-kpios-pre.sh
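# Pre-run hook for zpios (--prerun): record all run parameters, start
# the background profiling script, and wait for its SIGHUP before
# allowing the test to begin.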
trap "PROFILE_KPIOS_READY=1" SIGHUP
RUN_PRE=${0}
RUN_PHASE=${1}
RUN_LOG_DIR=${2}
RUN_ID=${3}
RUN_POOL=${4}
RUN_CHUNK_SIZE=${5}
RUN_REGION_SIZE=${6}
RUN_THREAD_COUNT=${7}
RUN_REGION_COUNT=${8}
RUN_OFFSET=${9}
RUN_REGION_NOISE=${10}
RUN_CHUNK_NOISE=${11}
RUN_THREAD_DELAY=${12}
RUN_FLAGS=${13}
RUN_RESULT=${14}
PROFILE_KPIOS_BIN=/home/behlendo/src/zfs/scripts/profile-kpios.sh
PROFILE_KPIOS_READY=0
OPROFILE_LOG=${RUN_LOG_DIR}/${RUN_ID}/oprofile.txt
PROFILE_PID=${RUN_LOG_DIR}/${RUN_ID}/pid
RUN_ARGS=${RUN_LOG_DIR}/${RUN_ID}/args
if [ "${RUN_PHASE}" != "pre" ]; then
exit 1
fi
rm -Rf ${RUN_LOG_DIR}/${RUN_ID}/
mkdir -p ${RUN_LOG_DIR}/${RUN_ID}/
echo "PHASE=${RUN_PHASE}" >>${RUN_ARGS}
echo "LOG_DIR=${RUN_LOG_DIR}" >>${RUN_ARGS}
echo "ID=${RUN_ID}" >>${RUN_ARGS}
echo "POOL=${RUN_POOL}" >>${RUN_ARGS}
echo "CHUNK_SIZE=${RUN_CHUNK_SIZE}" >>${RUN_ARGS}
echo "REGION_SIZE=${RUN_REGION_SIZE}" >>${RUN_ARGS}
echo "THREAD_COUNT=${RUN_THREAD_COUNT}" >>${RUN_ARGS}
echo "REGION_COUNT=${RUN_REGION_COUNT}" >>${RUN_ARGS}
echo "OFFSET=${RUN_OFFSET}" >>${RUN_ARGS}
echo "REGION_NOISE=${RUN_REGION_NOISE}" >>${RUN_ARGS}
echo "CHUNK_NOISE=${RUN_CHUNK_NOISE}" >>${RUN_ARGS}
echo "THREAD_DELAY=${RUN_THREAD_DELAY}" >>${RUN_ARGS}
echo "FLAGS=${RUN_FLAGS}" >>${RUN_ARGS}
echo "RESULT=${RUN_RESULT}" >>${RUN_ARGS}
# XXX: Oprofile support seems to be broken when I try and start
# it via a user mode helper script, I suspect the setup is failing.
# opcontrol --init >>${OPROFILE_LOG} 2>&1
# opcontrol --setup --vmlinux=/boot/vmlinux >>${OPROFILE_LOG} 2>&1
# Start the profile script
${PROFILE_KPIOS_BIN} ${RUN_PHASE} ${RUN_LOG_DIR} ${RUN_ID} &
echo "$!" >${PROFILE_PID}
# Sleep waiting for profile script to be ready, it will
# signal us via SIGHUP when it is ready to start profiling.
while [ ${PROFILE_KPIOS_READY} -eq 0 ]; do
sleep 0.1
done
# opcontrol --start-daemon >>${OPROFILE_LOG} 2>&1
# opcontrol --start >>${OPROFILE_LOG} 2>&1
exit 0

222
scripts/profile-kpios.sh

@@ -1,222 +0,0 @@
#!/bin/bash
# profile-kpios.sh
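# Harvest the pids of all ZFS worker threads, signal the parent via
# SIGHUP once ready, then snapshot /proc/<pid>/stat and /proc/diskstats
# every POLL_INTERVAL seconds until we are sent SIGHUP.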
trap "RUN_DONE=1" SIGHUP
RUN_PHASE=${1}
RUN_LOG_DIR=${2}
RUN_ID=${3}
RUN_DONE=0
POLL_INTERVAL=2.99
# Log these pids, the exact pid numbers will vary from system to system
# so I harvest pid for all the following type of processes from /proc/<pid>/
#
# zio_taskq/#
# spa_zio_issue/#
# spa_zio_intr/#
# txg_quiesce_thr
# txg_sync_thread
# txg_timelimit_t
# arc_reclaim_thr
# l2arc_feed_thre
# kpios_io/#
ZIO_TASKQ_PIDS=()
ZIO_REQ_NUL_PIDS=()
ZIO_IRQ_NUL_PIDS=()
ZIO_REQ_RD_PIDS=()
ZIO_IRQ_RD_PIDS=()
ZIO_REQ_WR_PIDS=()
ZIO_IRQ_WR_PIDS=()
ZIO_REQ_FR_PIDS=()
ZIO_IRQ_FR_PIDS=()
ZIO_REQ_CM_PIDS=()
ZIO_IRQ_CM_PIDS=()
ZIO_REQ_CTL_PIDS=()
ZIO_IRQ_CTL_PIDS=()
TXG_QUIESCE_PIDS=()
TXG_SYNC_PIDS=()
TXG_TIMELIMIT_PIDS=()
ARC_RECLAIM_PIDS=()
L2ARC_FEED_PIDS=()
KPIOS_IO_PIDS=()
show_pids() {
echo "* zio_taskq: { ${ZIO_TASKQ_PIDS[@]} } = ${#ZIO_TASKQ_PIDS[@]}"
echo "* zio_req_nul: { ${ZIO_REQ_NUL_PIDS[@]} } = ${#ZIO_REQ_NUL_PIDS[@]}"
echo "* zio_irq_nul: { ${ZIO_IRQ_NUL_PIDS[@]} } = ${#ZIO_IRQ_NUL_PIDS[@]}"
echo "* zio_req_rd: { ${ZIO_REQ_RD_PIDS[@]} } = ${#ZIO_REQ_RD_PIDS[@]}"
echo "* zio_irq_rd: { ${ZIO_IRQ_RD_PIDS[@]} } = ${#ZIO_IRQ_RD_PIDS[@]}"
echo "* zio_req_wr: { ${ZIO_REQ_WR_PIDS[@]} } = ${#ZIO_REQ_WR_PIDS[@]}"
echo "* zio_irq_wr: { ${ZIO_IRQ_WR_PIDS[@]} } = ${#ZIO_IRQ_WR_PIDS[@]}"
echo "* zio_req_fr: { ${ZIO_REQ_FR_PIDS[@]} } = ${#ZIO_REQ_FR_PIDS[@]}"
echo "* zio_irq_fr: { ${ZIO_IRQ_FR_PIDS[@]} } = ${#ZIO_IRQ_FR_PIDS[@]}"
echo "* zio_req_cm: { ${ZIO_REQ_CM_PIDS[@]} } = ${#ZIO_REQ_CM_PIDS[@]}"
echo "* zio_irq_cm: { ${ZIO_IRQ_CM_PIDS[@]} } = ${#ZIO_IRQ_CM_PIDS[@]}"
echo "* zio_req_ctl: { ${ZIO_REQ_CTL_PIDS[@]} } = ${#ZIO_REQ_CTL_PIDS[@]}"
echo "* zio_irq_ctl: { ${ZIO_IRQ_CTL_PIDS[@]} } = ${#ZIO_IRQ_CTL_PIDS[@]}"
echo "* txg_quiesce: { ${TXG_QUIESCE_PIDS[@]} } = ${#TXG_QUIESCE_PIDS[@]}"
echo "* txg_sync: { ${TXG_SYNC_PIDS[@]} } = ${#TXG_SYNC_PIDS[@]}"
echo "* txg_timelimit: { ${TXG_TIMELIMIT_PIDS[@]} } = ${#TXG_TIMELIMIT_PIDS[@]}"
echo "* arc_reclaim: { ${ARC_RECLAIM_PIDS[@]} } = ${#ARC_RECLAIM_PIDS[@]}"
echo "* l2arc_feed: { ${L2ARC_FEED_PIDS[@]} } = ${#L2ARC_FEED_PIDS[@]}"
echo "* kpios_io: { ${KPIOS_IO_PIDS[@]} } = ${#KPIOS_IO_PIDS[@]}"
}
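# If NAME (e.g. zio_taskq/0) matches TYPE, add PID to the given list,
# indexed by the trailing thread number when one is present, and echo
# the updated list.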
check_pid() {
local PID=$1
local NAME=$2
local TYPE=$3
local PIDS=( "$4" )
local NAME_STRING=`echo ${NAME} | cut -f1 -d'/'`
local NAME_NUMBER=`echo ${NAME} | cut -f2 -d'/'`
if [ "${NAME_STRING}" == "${TYPE}" ]; then
if [ -n "${NAME_NUMBER}" ]; then
PIDS[${NAME_NUMBER}]=${PID}
else
PIDS[${#PIDS[@]}]=${PID}
fi
fi
echo "${PIDS[@]}"
}
# NOTE: This whole process is crazy slow but it will do for now
acquire_pids() {
echo "--- Acquiring ZFS pids ---"
for PID in `ls /proc/ | grep [0-9] | sort -n -u`; do
if [ ! -e /proc/${PID}/status ]; then
continue
fi
NAME=`cat /proc/${PID}/status | head -n1 | cut -f2`
ZIO_TASKQ_PIDS=( `check_pid ${PID} ${NAME} "zio_taskq" \
"$(echo "${ZIO_TASKQ_PIDS[@]}")"` )
ZIO_REQ_NUL_PIDS=( `check_pid ${PID} ${NAME} "zio_req_nul" \
"$(echo "${ZIO_REQ_NUL_PIDS[@]}")"` )
ZIO_IRQ_NUL_PIDS=( `check_pid ${PID} ${NAME} "zio_irq_nul" \
"$(echo "${ZIO_IRQ_NUL_PIDS[@]}")"` )
ZIO_REQ_RD_PIDS=( `check_pid ${PID} ${NAME} "zio_req_rd" \
"$(echo "${ZIO_REQ_RD_PIDS[@]}")"` )
ZIO_IRQ_RD_PIDS=( `check_pid ${PID} ${NAME} "zio_irq_rd" \
"$(echo "${ZIO_IRQ_RD_PIDS[@]}")"` )
ZIO_REQ_WR_PIDS=( `check_pid ${PID} ${NAME} "zio_req_wr" \
"$(echo "${ZIO_REQ_WR_PIDS[@]}")"` )
ZIO_IRQ_WR_PIDS=( `check_pid ${PID} ${NAME} "zio_irq_wr" \
"$(echo "${ZIO_IRQ_WR_PIDS[@]}")"` )
ZIO_REQ_FR_PIDS=( `check_pid ${PID} ${NAME} "zio_req_fr" \
"$(echo "${ZIO_REQ_FR_PIDS[@]}")"` )
ZIO_IRQ_FR_PIDS=( `check_pid ${PID} ${NAME} "zio_irq_fr" \
"$(echo "${ZIO_IRQ_FR_PIDS[@]}")"` )
ZIO_REQ_CM_PIDS=( `check_pid ${PID} ${NAME} "zio_req_cm" \
"$(echo "${ZIO_REQ_CM_PIDS[@]}")"` )
ZIO_IRQ_CM_PIDS=( `check_pid ${PID} ${NAME} "zio_irq_cm" \
"$(echo "${ZIO_IRQ_CM_PIDS[@]}")"` )
ZIO_REQ_CTL_PIDS=( `check_pid ${PID} ${NAME} "zio_req_ctl" \
"$(echo "${ZIO_REQ_CTL_PIDS[@]}")"` )
ZIO_IRQ_CTL_PIDS=( `check_pid ${PID} ${NAME} "zio_irq_ctl" \
"$(echo "${ZIO_IRQ_CTL_PIDS[@]}")"` )
TXG_QUIESCE_PIDS=( `check_pid ${PID} ${NAME} "txg_quiesce" \
"$(echo "${TXG_QUIESCE_PIDS[@]}")"` )
TXG_SYNC_PIDS=( `check_pid ${PID} ${NAME} "txg_sync" \
"$(echo "${TXG_SYNC_PIDS[@]}")"` )
TXG_TIMELIMIT_PIDS=( `check_pid ${PID} ${NAME} "txg_timelimit" \
"$(echo "${TXG_TIMELIMIT_PIDS[@]}")"` )
ARC_RECLAIM_PIDS=( `check_pid ${PID} ${NAME} "arc_reclaim" \
"$(echo "${ARC_RECLAIM_PIDS[@]}")"` )
L2ARC_FEED_PIDS=( `check_pid ${PID} ${NAME} "l2arc_feed" \
"$(echo "${L2ARC_FEED_PIDS[@]}")"` )
done
# Wait for kpios_io threads to start
kill -s SIGHUP ${PPID}
echo "* Waiting for kpios_io threads to start"
while [ ${RUN_DONE} -eq 0 ]; do
KPIOS_IO_PIDS=( `ps ax | grep kpios_io | grep -v grep | \
sed 's/^ *//g' | cut -f1 -d' '` )
if [ ${#KPIOS_IO_PIDS[@]} -gt 0 ]; then
break;
fi
sleep 0.1
done
echo "`show_pids`" >${RUN_LOG_DIR}/${RUN_ID}/pids.txt
}
log_pids() {
echo "--- Logging ZFS profile to ${RUN_LOG_DIR}/${RUN_ID}/ ---"
ALL_PIDS=( ${ZIO_TASKQ_PIDS[@]} \
${ZIO_REQ_NUL_PIDS[@]} \
${ZIO_IRQ_NUL_PIDS[@]} \
${ZIO_REQ_RD_PIDS[@]} \
${ZIO_IRQ_RD_PIDS[@]} \
${ZIO_REQ_WR_PIDS[@]} \
${ZIO_IRQ_WR_PIDS[@]} \
${ZIO_REQ_FR_PIDS[@]} \
${ZIO_IRQ_FR_PIDS[@]} \
${ZIO_REQ_CM_PIDS[@]} \
${ZIO_IRQ_CM_PIDS[@]} \
${ZIO_REQ_CTL_PIDS[@]} \
${ZIO_IRQ_CTL_PIDS[@]} \
${TXG_QUIESCE_PIDS[@]} \
${TXG_SYNC_PIDS[@]} \
${TXG_TIMELIMIT_PIDS[@]} \
${ARC_RECLAIM_PIDS[@]} \
${L2ARC_FEED_PIDS[@]} \
${KPIOS_IO_PIDS[@]} )
while [ ${RUN_DONE} -eq 0 ]; do
NOW=`date +%s.%N`
LOG_PIDS="${RUN_LOG_DIR}/${RUN_ID}/pids-${NOW}"
LOG_DISK="${RUN_LOG_DIR}/${RUN_ID}/disk-${NOW}"
for PID in "${ALL_PIDS[@]}"; do
if [ -z ${PID} ]; then
continue;
fi
if [ -e /proc/${PID}/stat ]; then
cat /proc/${PID}/stat | head -n1 >>${LOG_PIDS}
else
echo "<${PID} exited>" >>${LOG_PIDS}
fi
done
cat /proc/diskstats >${LOG_DISK}
NOW2=`date +%s.%N`
DELTA=`echo "${POLL_INTERVAL}-(${NOW2}-${NOW})" | bc`
sleep ${DELTA}
done
}
acquire_pids
log_pids
exit 0

102
scripts/survey.sh

@@ -1,102 +0,0 @@
#!/bin/bash
prog=survey.sh
. ../.script-config
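# Run a series of zpios tests, each isolating one tuning (prefetch,
# zerocopy, checksum, pending IO depth, slab backing) against a
# baseline for comparison.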
LOG=/home/`whoami`/zpios-logs/`uname -r`/kpios-`date +%Y%m%d`/
mkdir -p ${LOG}
# Apply all tunings described below to generate some best case
# numbers for what is achievable with some more elbow grease.
NAME="prefetch+zerocopy+checksum+pending1024+kmem"
echo "----------------------- ${NAME} ------------------------------"
./zpios.sh \
"" \
"zfs_prefetch_disable=1 zfs_vdev_max_pending=1024 zio_bulk_flags=0x100" \
"--zerocopy" \
${LOG}/${NAME}/ \
"${CMDDIR}/zfs/zfs set checksum=off lustre" | \
tee ${LOG}/${NAME}.txt
# Baseline number for an out of the box config with no manual tuning.
# Ideally, we will want things to be automatically tuned and for this
# number to approach the tweaked out results above.
NAME="baseline"
echo "----------------------- ${NAME} ------------------------------"
./zpios.sh \
"" \
"" \
"" \
${LOG}/${NAME}/ | \
tee ${LOG}/${NAME}.txt
# Disable ZFS's prefetching. For some reason, still not clear to me,
# the current prefetching policy is quite bad for a random workload.
# Allowing the algorithm to detect a random workload and do nothing
# may be the way to address this issue.
NAME="prefetch"
echo "----------------------- ${NAME} ------------------------------"
./zpios.sh \
"" \
"zfs_prefetch_disable=1" \
"" \
${LOG}/${NAME}/ | \
tee ${LOG}/${NAME}.txt
# As expected, simulating a zerocopy IO path improves performance
# by freeing up lots of CPU which is wasted moving data between buffers.
NAME="zerocopy"
echo "----------------------- ${NAME} ------------------------------"
./zpios.sh \
"" \
"" \
"--zerocopy" \
${LOG}/${NAME}/ | \
tee ${LOG}/${NAME}.txt
# Disabling checksumming should show some (if small) improvement
# simply due to freeing up a modest amount of CPU.
NAME="checksum"
echo "----------------------- ${NAME} ------------------------------"
./zpios.sh \
"" \
"" \
"" \
${LOG}/${NAME}/ \
"${CMDDIR}/zfs/zfs set checksum=off lustre" | \
tee ${LOG}/${NAME}.txt
# Increasing the pending IO depth also seems to improve things, likely
# at the expense of latency. This should be explored more because I'm
# seeing a much bigger impact than I would have expected. There
# may be some low hanging fruit to be found here.
NAME="pending"
echo "----------------------- ${NAME} ------------------------------"
./zpios.sh \
"" \
"zfs_vdev_max_pending=1024" \
"" \
${LOG}/${NAME}/ | \
tee ${LOG}/${NAME}.txt
# To avoid memory fragmentation issues our slab implementation can be
# based on a virtual address space. Interestingly, we take a pretty
# substantial performance penalty for this somewhere in the low level
# IO drivers. If we back the slab with kmem pages we see far better
# read performance numbers at the cost of memory fragmentation and
# general system instability due to large allocations. This may be
# because of an optimization in the low level drivers due to the
# contiguous kmem based memory. This needs to be explained. The good
# news here is that with zerocopy interfaces added at the DMU layer we
# could guarantee kmem based memory for a pool of pages.
#
# 0x100 = KMC_KMEM - Force kmem_* based slab
# 0x200 = KMC_VMEM - Force vmem_* based slab
NAME="kmem"
echo "----------------------- ${NAME} ------------------------------"
./zpios.sh \
"" \
"zio_bulk_flags=0x100" \
"" \
${LOG}/${NAME}/ | \
tee ${LOG}/${NAME}.txt

55
scripts/unload-zfs.sh

@@ -1,55 +0,0 @@
#!/bin/bash
prog=unload-zfs.sh
. ../.script-config
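# Unload the ZFS/SPL module stack in the reverse of load order; set
# DUMP=1 to capture the SPL debug log before spl.ko is removed.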
spl_module=${SPLBUILD}/modules/spl/spl.ko
zlib_module=/lib/modules/${KERNELSRCVER}/kernel/lib/zlib_deflate/zlib_deflate.ko
zavl_module=${ZFSBUILD}/lib/libavl/zavl.ko
znvpair_module=${ZFSBUILD}/lib/libnvpair/znvpair.ko
zport_module=${ZFSBUILD}/lib/libport/zport.ko
zcommon_module=${ZFSBUILD}/lib/libzcommon/zcommon.ko
zpool_module=${ZFSBUILD}/lib/libzpool/zpool.ko
zctl_module=${ZFSBUILD}/lib/libdmu-ctl/zctl.ko
zpios_module=${ZFSBUILD}/lib/libzpios/zpios.ko
die() {
echo "${prog}: $1" >&2
exit 1
}
unload_module() {
echo "Unloading $1"
/sbin/rmmod $1 || die "Failed to unload $1"
}
if [ $(id -u) != 0 ]; then
die "Must run as root"
fi
unload_module ${zpios_module}
unload_module ${zctl_module}
unload_module ${zpool_module}
unload_module ${zcommon_module}
unload_module ${zport_module}
unload_module ${znvpair_module}
unload_module ${zavl_module}
unload_module ${zlib_module}
# Set DUMP=1 to generate debug logs on unload
if [ -n "${DUMP}" ]; then
sysctl -w kernel.spl.debug.dump=1
# This is racy, I don't like it, but for a helper script it will do.
SPL_LOG=`dmesg | tail -n 1 | cut -f5 -d' '`
${SPLBUILD}/cmd/spl ${SPL_LOG} >${SPL_LOG}.log
echo
echo "Dumped debug log: ${SPL_LOG}.log"
tail -n1 ${SPL_LOG}.log
echo
fi
unload_module ${spl_module}
echo "Successfully unloaded ZFS module stack"
exit 0

110
scripts/zpios-jbod.sh

@@ -1,110 +0,0 @@
#!/bin/bash
prog=zpios-jbod.sh
. ../.script-config
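# JBOD variant of zpios.sh: builds the 'lustre' pool from a fixed list
# of twelve JBOD devices; positional arguments mirror zpios.sh.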
SPL_OPTIONS=$1
ZPOOL_OPTIONS=$2
KPIOS_OPTIONS=$3
PROFILE_KPIOS_LOGS=$4
KPIOS_PRE=$5
KPIOS_POST=$6
PROFILE_KPIOS_PRE=/home/behlendo/src/zfs/scripts/profile-kpios-pre.sh
PROFILE_KPIOS_POST=/home/behlendo/src/zfs/scripts/profile-kpios-post.sh
echo ------------------------- ZFS TEST LOG ---------------------------------
echo -n "Date = "; date
echo -n "Kernel = "; uname -r
echo ------------------------------------------------------------------------
echo
./load-zfs.sh "${SPL_OPTIONS}" "${ZPOOL_OPTIONS}"
sysctl -w kernel.spl.debug.mask=0
sysctl -w kernel.spl.debug.subsystem=0
echo ---------------------- SPL Sysctl Tunings ------------------------------
sysctl -A | grep spl
echo
echo ------------------- SPL/ZPOOL Module Tunings ---------------------------
grep [0-9] /sys/module/spl/parameters/*
grep [0-9] /sys/module/zpool/parameters/*
echo
DEVICES="/dev/sdn /dev/sdo /dev/sdp \
/dev/sdq /dev/sdr /dev/sds \
/dev/sdt /dev/sdu /dev/sdv \
/dev/sdw /dev/sdx /dev/sdy"
${CMDDIR}/zpool/zpool create -F lustre ${DEVICES}
${CMDDIR}/zpool/zpool status lustre
if [ -n "${KPIOS_PRE}" ]; then
${KPIOS_PRE}
fi
# Usage: zpios
# --chunksize -c =values
# --chunksize_low -a =value
# --chunksize_high -b =value
# --chunksize_incr -g =value
# --offset -o =values
# --offset_low -m =value
# --offset_high -q =value
# --offset_incr -r =value
# --regioncount -n =values
# --regioncount_low -i =value
# --regioncount_high -j =value
# --regioncount_incr -k =value
# --threadcount -t =values
# --threadcount_low -l =value
# --threadcount_high -h =value
# --threadcount_incr -e =value
# --regionsize -s =values
# --regionsize_low -A =value
# --regionsize_high -B =value
# --regionsize_incr -C =value
# --cleanup -x
# --verify -V
# --zerocopy -z
# --threaddelay -T =jiffies
# --regionnoise -I =shift
# --chunknoise -N =bytes
# --prerun -P =pre-command
# --postrun -R =post-command
# --log -G =log directory
# --pool | --path -p =pool name
# --load -L =dmuio
# --help -? =this help
# --verbose -v =increase verbosity
# --threadcount=256,256,256,256,256 \
CMD="${CMDDIR}/zpios/zpios \
--load=dmuio \
--path=lustre \
--chunksize=1M \
--regionsize=4M \
--regioncount=16384 \
--threadcount=256 \
--offset=4M \
--cleanup \
--verbose \
--human-readable \
${KPIOS_OPTIONS} \
--prerun=${PROFILE_KPIOS_PRE} \
--postrun=${PROFILE_KPIOS_POST} \
--log=${PROFILE_KPIOS_LOGS}"
echo
date
echo ${CMD}
$CMD
date
if [ -n "${KPIOS_POST}" ]; then
${KPIOS_POST}
fi
${CMDDIR}/zpool/zpool destroy lustre
./unload-zfs.sh

133
scripts/zpios.sh

@@ -1,133 +0,0 @@
#!/bin/bash
prog=zpios.sh
. ../.script-config
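# Positional arguments: $1 extra spl module options, $2 zpool module
# options, $3 extra zpios flags, $4 zpios log directory, $5/$6 optional
# pre/post commands run around the test.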
SPL_OPTIONS="spl_debug_mask=0 spl_debug_subsys=0 ${1}"
ZPOOL_OPTIONS=$2
KPIOS_OPTIONS=$3
PROFILE_KPIOS_LOGS=$4
KPIOS_PRE=$5
KPIOS_POST=$6
PROFILE_KPIOS_PRE=/home/behlendo/src/zfs/scripts/profile-kpios-pre.sh
PROFILE_KPIOS_POST=/home/behlendo/src/zfs/scripts/profile-kpios-post.sh
DEVICES="/dev/hda"
echo ------------------------- ZFS TEST LOG ---------------------------------
echo -n "Date = "; date
echo -n "Kernel = "; uname -r
echo ------------------------------------------------------------------------
echo
./load-zfs.sh "${SPL_OPTIONS}" "${ZPOOL_OPTIONS}"
echo ---------------------- SPL Sysctl Tunings ------------------------------
sysctl -A | grep spl
echo
echo ------------------- SPL/ZPOOL Module Tunings ---------------------------
if [ -d /sys/module/spl/parameters ]; then
grep [0-9] /sys/module/spl/parameters/*
grep [0-9] /sys/module/zpool/parameters/*
else
grep [0-9] /sys/module/spl/*
grep [0-9] /sys/module/zpool/*
fi
echo
echo "${CMDDIR}/zpool/zpool create -f lustre ${DEVICES}"
${CMDDIR}/zpool/zpool create -f lustre ${DEVICES}
echo "${CMDDIR}/zpool/zpool status lustre"
${CMDDIR}/zpool/zpool status lustre
echo "Waiting for /dev/kpios to come up..."
while [ ! -c /dev/kpios ]; do
sleep 1
done
if [ -n "${KPIOS_PRE}" ]; then
${KPIOS_PRE}
fi
# Usage: zpios
# --chunksize -c =values
# --chunksize_low -a =value
# --chunksize_high -b =value
# --chunksize_incr -g =value
# --offset -o =values
# --offset_low -m =value
# --offset_high -q =value
# --offset_incr -r =value
# --regioncount -n =values
# --regioncount_low -i =value
# --regioncount_high -j =value
# --regioncount_incr -k =value
# --threadcount -t =values
# --threadcount_low -l =value
# --threadcount_high -h =value
# --threadcount_incr -e =value
# --regionsize -s =values
# --regionsize_low -A =value
# --regionsize_high -B =value
# --regionsize_incr -C =value
# --cleanup -x
# --verify -V
# --zerocopy -z
# --threaddelay -T =jiffies
# --regionnoise -I =shift
# --chunknoise -N =bytes
# --prerun -P =pre-command
# --postrun -R =post-command
# --log -G =log directory
# --pool | --path -p =pool name
# --load -L =dmuio
# --help -? =this help
# --verbose -v =increase verbosity
# --prerun=${PROFILE_KPIOS_PRE} \
# --postrun=${PROFILE_KPIOS_POST} \
CMD="${CMDDIR}/zpios/zpios \
--load=dmuio \
--path=lustre \
--chunksize=1M \
--regionsize=4M \
--regioncount=64 \
--threadcount=4 \
--offset=4M \
--cleanup \
--verbose \
--human-readable \
${KPIOS_OPTIONS} \
--log=${PROFILE_KPIOS_LOGS}"
echo
date
echo ${CMD}
$CMD
date
if [ -n "${KPIOS_POST}" ]; then
${KPIOS_POST}
fi
${CMDDIR}/zpool/zpool destroy lustre
echo ---------------------- SPL Sysctl Tunings ------------------------------
sysctl -A | grep spl
echo
echo ------------------------ KSTAT Statistics ------------------------------
echo ARCSTATS
cat /proc/spl/kstat/zfs/arcstats
echo
echo VDEV_CACHE_STATS
cat /proc/spl/kstat/zfs/vdev_cache_stats
echo
echo SLAB
cat /proc/spl/kmem/slab
echo
./unload-zfs.sh