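# Helper scripts are installed under $(libexecdir)/@PACKAGE@, where
# @PACKAGE@ is substituted by configure; with the usual autoconf defaults
# this works out to something like /usr/libexec/zfs, though the exact
# path depends on how the tree was configured.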
pkglibexecdir = $(libexecdir)/@PACKAGE@
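
# The nobase_ prefix keeps the relative paths of the entries below, so the
# udev-rules/, zpool-config/, zpios-test/ and zpios-profile/ contents are
# installed into matching subdirectories of $(pkglibexecdir) rather than
# being flattened into a single directory.  Listing them as _SCRIPTS
# (rather than _DATA) preserves their execute permissions on install.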
nobase_pkglibexec_SCRIPTS = common.sh
nobase_pkglibexec_SCRIPTS += zconfig.sh
nobase_pkglibexec_SCRIPTS += zfs.sh
nobase_pkglibexec_SCRIPTS += zpool-create.sh
nobase_pkglibexec_SCRIPTS += udev-rules/*
nobase_pkglibexec_SCRIPTS += zpool-config/*
nobase_pkglibexec_SCRIPTS += zpios.sh
nobase_pkglibexec_SCRIPTS += zpios-sanity.sh
nobase_pkglibexec_SCRIPTS += zpios-survey.sh
nobase_pkglibexec_SCRIPTS += zpios-test/*
nobase_pkglibexec_SCRIPTS += zpios-profile/*
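
# Everything above is also shipped in the distribution tarball, along with
# zfs-update.sh, which is distributed but not installed.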
EXTRA_DIST = zfs-update.sh $(nobase_pkglibexec_SCRIPTS)
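
# Helpers used by the 'check' target below.  The shell scripts are taken
# from the source tree, while ztest is a built binary and is therefore
# referenced through ${top_builddir}.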
ZFS=${top_srcdir}/scripts/zfs.sh
ZCONFIG=${top_srcdir}/scripts/zconfig.sh
ZTEST=${top_builddir}/cmd/ztest/ztest
ZPIOS_SANITY=${top_srcdir}/scripts/zpios-sanity.sh
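
# The test suite is run with 'make check'.  A sketch of the expected
# workflow, assuming a configured and built tree and root privileges
# (the kernel modules are loaded and unloaded along the way):
#
#   $ ./configure
#   $ make
#   $ sudo make check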
#
# Reasonable output from 'make check' looks roughly like the following;
# all of the zpios test results are consolidated into a single table
# which can be easily scanned for failures/problems.
#
# ==================================== ZTEST ====================================
# 5 vdevs, 7 datasets, 23 threads, 300 seconds...
# Pass 1, SIGKILL, 1 ENOSPC, 13.8% of 238M used, 17% done, 4m07s to go
# Pass 2, SIGKILL, 1 ENOSPC, 23.7% of 238M used, 38% done, 3m04s to go
# Pass 3, SIGKILL, 0 ENOSPC, 27.0% of 238M used, 66% done, 1m42s to go
# Pass 4, SIGKILL, 0 ENOSPC, 27.4% of 238M used, 75% done, 1m14s to go
# Pass 5, SIGKILL, 0 ENOSPC, 27.9% of 238M used, 89% done, 32s to go
# Pass 6, Complete, 0 ENOSPC, 14.0% of 476M used, 100% done, 0s to go
# 5 killed, 1 completed, 83% kill rate
#
# ==================================== ZPIOS ====================================
# status    name           id    wr-data  wr-ch  wr-bw    rd-data  rd-ch  rd-bw
# -------------------------------------------------------------------------------
# PASS:     file-raid0     0     64m      64     13.04m   64m      64     842.22m
# PASS:     file-raid10    0     64m      64     134.19m  64m      64     842.22m
# PASS:     file-raidz     0     64m      64     87.56m   64m      64     853.45m
# PASS:     file-raidz2    0     64m      64     134.19m  64m      64     853.45m
# PASS:     lo-raid0       0     64m      64     429.59m  64m      64     14.63m
# PASS:     lo-raid10      0     64m      64     397.57m  64m      64     771.19m
# PASS:     lo-raidz       0     64m      64     206.48m  64m      64     688.27m
# PASS:     lo-raidz2      0     64m      64     14.34m   64m      64     711.21m
check:
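# ZTEST: zfs.sh loads the module stack, ztest runs in verbose mode (-V),
# and 'zfs.sh -u' unloads the modules again afterwards.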
	@echo
	@echo -n "===================================="
	@echo -n " ZTEST "
	@echo "===================================="
	@echo
	@$(ZFS)
	@$(ZTEST) -V
	@$(ZFS) -u
	@echo
	@echo
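# ZCONFIG: run the zconfig.sh configuration sanity tests.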
	@echo -n "==================================="
	@echo -n " ZCONFIG "
	@echo "==================================="
	@echo
	@$(ZCONFIG)
	@echo
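# ZPIOS: load the module stack, run the zpios-sanity.sh I/O tests, then
# unload the modules again.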
	@echo -n "===================================="
	@echo -n " ZPIOS "
	@echo "===================================="
	@echo
	@$(ZFS)
	@$(ZPIOS_SANITY)
	@$(ZFS) -u
	@echo