Merged test and dist Makefile to master branch for easier dev

U-STARBUCK\gina
2009-04-04 12:16:18 -07:00
8 changed files with 936 additions and 0 deletions

2
tests/Makefile Normal file

@@ -0,0 +1,2 @@
test:
$(MAKE) -C .. test

219
tests/README Normal file

@@ -0,0 +1,219 @@
todo.sh tests
=============
This directory holds test scripts for todo.sh. The
first part of this short document describes how to run the tests
and read their output.
When fixing the tools or adding enhancements, you are strongly
encouraged to add tests in this directory to cover what you are
trying to fix or enhance. The latter part of this short document
describes how your test scripts should be organized.
Running Tests
-------------
The easiest way to run tests is to say "make test" from the top-level.
This runs all the tests.
rm -rf tests/test-results "tests/trash directory"*
cd tests && sh t0000-config.sh
* ok 1: no config file
* ok 2: config file (default location 1)
* ok 3: config file (default location 2)
* ok 4: config file (command line)
* ok 5: config file (env variable)
* passed all 5 test(s)
cd tests && sh t0001-null.sh
* ok 1: null ls
* passed all 1 test(s)
rm -rf tests/test-results
Or you can run each test individually from the command line, like
this:
$ ./t0001-null.sh
* ok 1: null ls
* passed all 1 test(s)
You can pass --verbose (or -v), --debug (or -d), and --immediate
(or -i) as command line arguments to a test script, or set
GIT_TEST_OPTS appropriately before running "make" (see the example
after the option list).
--verbose::
This makes the test more verbose. Specifically, the
commands being run and their output, if any, are also
shown.
--debug::
This may help the person who is developing a new test.
It causes the command defined with test_debug to run.
--immediate::
This causes the test to immediately exit upon the first
failed test.
--long-tests::
This causes additional long-running tests to be run (where
available), for more exhaustive testing.
--tee::
In addition to printing the test output to the terminal,
write it to files named 'tests/test-results/$TEST_NAME.out'.
As the names depend on the tests' file names, it is safe to
run the tests with this option in parallel.
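
For example, to run the whole suite verbosely and stop at the first
failure (a sketch, assuming the Makefile forwards GIT_TEST_OPTS to
each test script as described above):

    $ GIT_TEST_OPTS='--verbose --immediate' make test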
Skipping Tests
--------------
In some environments, certain tests have no way of succeeding
due to platform limitations, such as the lack of an 'unzip' program,
or a filesystem that does not allow arbitrary sequences of non-NUL
bytes as pathnames.
You should be able to say something like
$ SKIP_TESTS=t0000.2 sh ./t0000-config.sh
and even:
$ SKIP_TESTS='t[0-4]??? t91?? t9200.8' make
to omit such tests. The value of the environment variable is an
SP-separated list of patterns that tells which tests to skip; each
pattern can match either the "t[0-9]{4}" part, to skip the whole
script, or the "t[0-9]{4}" part followed by ".$number", to skip that
particular test.
Note that some tests in the existing test suite rely on a previous
test item, so you cannot arbitrarily disable one and expect the
remaining tests to check what they were originally intended to
check.
Naming Tests
------------
The test files are named as:
tNNNN-commandname-details.sh
where N is a decimal digit.
The first digit tells the family:
0 - the absolute basics and global stuff
1 - basic every-day usage
2 - add-ons
The second digit tells the particular command we are testing.
The third digit (optionally) tells the particular switch or group of
switches we are testing.
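
For example, a hypothetical test of the basic 'add' command might be
named something like:

    t1001-add.sh

with the first digit placing it in the every-day usage family and the
following digits identifying the command and switch under test.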
If you create files under the tests/ directory (i.e. here) that are
not top-level test scripts, never name them to match the above
pattern. The Makefile here considers all such files top-level test
scripts and tries to run them all. Care is especially needed if you
are creating a common test library file, similar to test-lib.sh,
because such a library file may not be suitable for standalone
execution.
Writing Tests
-------------
The test script is written as a shell script. It should start
with the standard "#!/bin/sh" with copyright notices, and an
assignment to the variable 'test_description', like this:
#!/bin/sh
#
# Copyright (c) 2005 Junio C Hamano
#
test_description='xxx test (option --frotz)
This test registers the following structure in the cache
and tries to run git-ls-files with option --frotz.'
Source 'test-lib.sh'
--------------------
After assigning test_description, the test script should source
test-lib.sh like this:
. ./test-lib.sh
This test harness library does the following things:
- If the script is invoked with command line argument --help
(or -h), it shows the test_description and exits.
- Creates an empty test directory with an empty todo file
database and chdir(2)s into it. This directory is
'tests/trash directory.<test-name>' if you must know, but
I do not think you care.
- Defines standard test helper functions for your scripts to
use. These functions are designed to make all scripts behave
consistently when the command line arguments --verbose (or -v),
--debug (or -d), and --immediate (or -i) are given.
End with test_done
------------------
Your script will be a sequence of tests, using helper functions
from the test harness library. At the end of the script, call
'test_done'.
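
Putting it together, a minimal test script looks like this (a sketch;
the test body is only illustrative):

    #!/bin/sh
    test_description='minimal example test'
    . ./test-lib.sh
    test_expect_success 'todo.sh lists an empty file' '
        todo.sh ls > output
    '
    test_done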
Test harness library
--------------------
There are a handful of helper functions defined in the test harness
library for your script to use.
- test_expect_success <message> <script>
This takes two strings as parameters, and evaluates the
<script>. If it yields success, the test is considered
successful. <message> should state what it is testing.
Example:
test_expect_success \
'git-write-tree should be able to write an empty tree.' \
'tree=$(git-write-tree)'
- test_expect_failure <message> <script>
This is NOT the opposite of test_expect_success, but is used
to mark a test that demonstrates a known breakage. Unlike
the usual test_expect_success tests, which say "ok" on
success and "FAIL" on failure, this will say "FIXED" on
success and "still broken" on failure. Failures from these
tests won't cause -i (immediate) to stop.
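Example (hypothetical; it documents a breakage that has not
been fixed yet):

    test_expect_failure \
        'listall should survive a missing done file.' \
        'rm -f done.txt && todo.sh listall'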
- test_debug <script>
This takes a single argument, <script>, and evaluates it only
when the test script is started with the --debug command line
argument. This is primarily meant for use during the
development of a new test script.
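Example:

    test_debug 'cat todo.txt'

This dumps the current todo file, but only when the script is
run with --debug.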
- test_done
Your test script must have test_done at the end. Its purpose
is to summarize successes and failures in the test script and
exit with an appropriate error code.
Credits
-------
This test framework was derived from the framework used by
git itself, originally written by Junio C Hamano and licensed
for use under the GPL.

34
tests/aggregate-results.sh Executable file

@@ -0,0 +1,34 @@
#!/bin/sh
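# Sum up the per-test result files written by test_done and print the
# totals. Presumed usage (an assumption; see the top-level Makefile):
#
#   sh aggregate-results.sh test-results/*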
fixed=0
success=0
failed=0
broken=0
total=0
for file
do
while read type value
do
case $type in
'')
continue ;;
fixed)
fixed=$(($fixed + $value)) ;;
success)
success=$(($success + $value)) ;;
failed)
failed=$(($failed + $value)) ;;
broken)
broken=$(($broken + $value)) ;;
total)
total=$(($total + $value)) ;;
esac
done <"$file"
done
printf "%-8s%d\n" fixed $fixed
printf "%-8s%d\n" success $success
printf "%-8s%d\n" failed $failed
printf "%-8s%d\n" broken $broken
printf "%-8s%d\n" total $total

61
tests/t0000-config.sh Executable file

@@ -0,0 +1,61 @@
#!/bin/sh
test_description='todo.sh configuration file location
This test just makes sure that todo.sh can find its
config files in the default locations and take arguments
to find it somewhere else.
'
. ./test-lib.sh
# Remove the pre-created todo.cfg to test behavior in its absence
rm -f todo.cfg
echo "Fatal error: Cannot read configuration file $HOME/todo.cfg" > expect
test_expect_success 'no config file' '
todo.sh > output 2>&1 || test_cmp expect output
'
# All the below tests will output the usage message.
cat > expect << EOF
Usage: todo.sh [-fhpantvV] [-d todo_config] action [task_number] [task_description]
Try 'todo.sh -h' for more information.
EOF
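# Build a config file that records its own use: sourcing it touches
# used_config, so the tests below can tell that todo.sh read it.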
cat > test.cfg << EOF
export TODO_DIR=.
export TODO_FILE="$TODO_DIR/todo.txt"
export DONE_FILE="$TODO_DIR/done.txt"
export REPORT_FILE="$TODO_DIR/report.txt"
export TMP_FILE="$TODO_DIR/todo.tmp"
touch used_config
EOF
rm -f used_config
test_expect_success 'config file (default location 1)' '
cp test.cfg todo.cfg
todo.sh > output;
test_cmp expect output && test -f used_config &&
rm -f todo.cfg
'
rm -f used_config
test_expect_success 'config file (default location 2)' '
cp test.cfg .todo.cfg
todo.sh > output;
test_cmp expect output && test -f used_config &&
rm -f .todo.cfg
'
rm -f used_config
test_expect_success 'config file (command line)' '
todo.sh -d test.cfg > output;
test_cmp expect output && test -f used_config
'
rm -f used_config
test_expect_success 'config file (env variable)' '
TODOTXT_CFG_FILE=test.cfg todo.sh > output;
test_cmp expect output && test -f used_config
'
test_done

101
tests/t0001-null.sh Executable file

@@ -0,0 +1,101 @@
#!/bin/sh
test_description='todo.sh basic null functionality test.
This test just makes sure the basic commands work,
when there are no todos.
'
. ./test-lib.sh
#
# ls|list
#
cat > expect <<EOF
--
TODO: 0 of 0 tasks shown from $HOME/todo.txt
EOF
test_expect_success 'null ls' '
todo.sh ls > output && test_cmp expect output
'
test_expect_success 'null list' '
todo.sh list > output && test_cmp expect output
'
test_expect_success 'null list filter' '
todo.sh list filter > output && test_cmp expect output
'
#
# lsp|listpri
#
# Re-use expect from ls.
test_expect_success 'null lsp' '
todo.sh lsp > output && test_cmp expect output
'
test_expect_success 'null listpri' '
todo.sh listpri > output && test_cmp expect output
'
test_expect_success 'null listpri a' '
todo.sh listpri a > output && test_cmp expect output
'
#
# lsa|listall
#
cat > expect <<EOF
--
TODO: 0 of 0 tasks shown from $HOME/todo.tmp
EOF
test_expect_success 'null lsa' '
todo.sh lsa > output && test_cmp expect output
'
test_expect_success 'null listall' '
todo.sh listall > output && test_cmp expect output
'
test_expect_success 'null listall filter' '
todo.sh listall filter > output && test_cmp expect output
'
#
# lsc|listcon
#
test_expect_success 'null lsc' '
todo.sh lsc > output && ! test -s output
'
test_expect_success 'null listcon' '
todo.sh listcon > output && ! test -s output
'
#
# lsprj|listproj
#
test_expect_success 'null lsprj' '
todo.sh lsprj > output && ! test -s output
'
test_expect_success 'null listproj' '
todo.sh listproj > output && ! test -s output
'
#
# lf|listfile
#
cat > expect <<EOF
TODO: File does not exist.
EOF
# XXX really should give a better usage error message here.
test_expect_success 'null lf' '
todo.sh lf > output || test_cmp expect output
'
test_expect_success 'null listfile' '
todo.sh listfile > output || test_cmp expect output
'
cat > expect <<EOF
TODO: File foo.txt does not exist.
EOF
test_expect_success 'null listfile foo.txt' '
todo.sh listfile foo.txt > output || test_cmp expect output
'
test_done

464
tests/test-lib.sh Normal file

@@ -0,0 +1,464 @@
#!/bin/sh
#
# Copyright (c) 2005 Junio C Hamano
#
# if --tee was passed, write the output not only to the terminal, but
# additionally to the file test-results/$BASENAME.out, too.
case "$TEST_TEE_STARTED, $* " in
done,*)
# do not redirect again
;;
*' --tee '*|*' --va'*)
mkdir -p test-results
BASE=test-results/$(basename "$0" .sh)
(TEST_TEE_STARTED=done ${SHELL-sh} "$0" "$@" 2>&1;
echo $? > $BASE.exit) | tee $BASE.out
test "$(cat $BASE.exit)" = 0
exit
;;
esac
# Keep the original TERM for say_color
ORIGINAL_TERM=$TERM
# For repeatability, reset the environment to known value.
LANG=C
LC_ALL=C
PAGER=cat
TZ=UTC
TERM=dumb
export LANG LC_ALL PAGER TERM TZ
EDITOR=:
VISUAL=:
# Protect ourselves from common misconfiguration to export
# CDPATH into the environment
unset CDPATH
# Each test should start with something like this, after copyright notices:
#
# test_description='Description of this test...
# This test checks if command xyzzy does the right thing...
# '
# . ./test-lib.sh
[ "x$ORIGINAL_TERM" != "xdumb" ] && (
TERM=$ORIGINAL_TERM &&
export TERM &&
[ -t 1 ] &&
tput bold >/dev/null 2>&1 &&
tput setaf 1 >/dev/null 2>&1 &&
tput sgr0 >/dev/null 2>&1
) &&
color=t
while test "$#" -ne 0
do
case "$1" in
-d|--d|--de|--deb|--debu|--debug)
debug=t; shift ;;
-i|--i|--im|--imm|--imme|--immed|--immedi|--immedia|--immediat|--immediate)
immediate=t; shift ;;
-l|--l|--lo|--lon|--long|--long-|--long-t|--long-te|--long-tes|--long-test|--long-tests)
TODOTXT_TEST_LONG=t; export TODOTXT_TEST_LONG; shift ;;
-h|--h|--he|--hel|--help)
help=t; shift ;;
-v|--v|--ve|--ver|--verb|--verbo|--verbos|--verbose)
verbose=t; shift ;;
-q|--q|--qu|--qui|--quie|--quiet)
quiet=t; shift ;;
--no-color)
color=; shift ;;
--no-python)
# noop now...
shift ;;
--tee)
shift ;; # was handled already
*)
break ;;
esac
done
if test -n "$color"; then
say_color () {
(
TERM=$ORIGINAL_TERM
export TERM
case "$1" in
error) tput bold; tput setaf 1;; # bold red
skip) tput bold; tput setaf 2;; # bold green
pass) tput setaf 2;; # green
info) tput setaf 3;; # brown
*) test -n "$quiet" && return;;
esac
shift
printf "* %s" "$*"
tput sgr0
echo
)
}
else
say_color() {
test -z "$1" && test -n "$quiet" && return
shift
echo "* $*"
}
fi
error () {
say_color error "error: $*"
trap - EXIT
exit 1
}
say () {
say_color info "$*"
}
test "${test_description}" != "" ||
error "Test script did not set test_description."
if test "$help" = "t"
then
echo "$test_description"
exit 0
fi
exec 5>&1
if test "$verbose" = "t"
then
exec 4>&2 3>&1
else
exec 4>/dev/null 3>/dev/null
fi
test_failure=0
test_count=0
test_fixed=0
test_broken=0
test_success=0
die () {
echo >&5 "FATAL: Unexpected exit with code $?"
exit 1
}
trap 'die' EXIT
# The semantics of the editor variables are that of invoking
# sh -c "$EDITOR \"$@\"" files ...
#
# If our trash directory contains shell metacharacters, they will be
# interpreted if we just set $EDITOR directly, so do a little dance with
# environment variables to work around this.
#
# In particular, quoting isn't enough, as the path may contain the same quote
# that we're using.
test_set_editor () {
FAKE_EDITOR="$1"
export FAKE_EDITOR
VISUAL='"$FAKE_EDITOR"'
export VISUAL
}
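# Hypothetical usage, assuming a fake-editor script in the test
# directory:
#
#   test_set_editor "$TEST_DIRECTORY/fake-editor.sh"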
# You are not expected to call test_ok_ and test_failure_ directly, use
# the test_expect_* functions instead.
test_ok_ () {
test_success=$(($test_success + 1))
say_color "" " ok $test_count: $@"
}
test_failure_ () {
test_failure=$(($test_failure + 1))
say_color error "FAIL $test_count: $1"
shift
echo "$@" | sed -e 's/^/ /'
test "$immediate" = "" || { trap - EXIT; exit 1; }
}
test_known_broken_ok_ () {
test_fixed=$(($test_fixed+1))
say_color "" " FIXED $test_count: $@"
}
test_known_broken_failure_ () {
test_broken=$(($test_broken+1))
say_color skip " still broken $test_count: $@"
}
test_debug () {
test "$debug" = "" || eval "$1"
}
test_run_ () {
eval >&3 2>&4 "$1"
eval_ret="$?"
return 0
}
test_skip () {
test_count=$(($test_count+1))
to_skip=
for skp in $SKIP_TESTS
do
case $this_test.$test_count in
$skp)
to_skip=t
esac
done
case "$to_skip" in
t)
say_color skip >&3 "skipping test: $@"
say_color skip "skip $test_count: $1"
: true
;;
*)
false
;;
esac
}
test_expect_failure () {
test "$#" = 2 ||
error "bug in the test script: not 2 parameters to test-expect-failure"
if ! test_skip "$@"
then
say >&3 "checking known breakage: $2"
test_run_ "$2"
if [ "$?" = 0 -a "$eval_ret" = 0 ]
then
test_known_broken_ok_ "$1"
else
test_known_broken_failure_ "$1"
fi
fi
echo >&3 ""
}
test_expect_success () {
test "$#" = 2 ||
error "bug in the test script: not 2 parameters to test-expect-success"
if ! test_skip "$@"
then
say >&3 "expecting success: $2"
test_run_ "$2"
if [ "$?" = 0 -a "$eval_ret" = 0 ]
then
test_ok_ "$1"
else
test_failure_ "$@"
fi
fi
echo >&3 ""
}
test_expect_code () {
test "$#" = 3 ||
error "bug in the test script: not 3 parameters to test-expect-code"
if ! test_skip "$@"
then
say >&3 "expecting exit code $1: $3"
test_run_ "$3"
if [ "$?" = 0 -a "$eval_ret" = "$1" ]
then
test_ok_ "$2"
else
test_failure_ "$@"
fi
fi
echo >&3 ""
}
# test_external runs external test scripts that provide continuous
# test output about their progress, and succeeds/fails on
# zero/non-zero exit code. It outputs the test output on stdout even
# in non-verbose mode, and announces the external script with "* run
# <n>: ..." before running it. When providing relative paths, keep in
# mind that all scripts run in "trash directory".
# Usage: test_external description command arguments...
# Example: test_external 'Perl API' perl ../path/to/test.pl
test_external () {
test "$#" -eq 3 ||
error >&5 "bug in the test script: not 3 parameters to test_external"
descr="$1"
shift
if ! test_skip "$descr" "$@"
then
# Announce the script to reduce confusion about the
# test output that follows.
say_color "" " run $test_count: $descr ($*)"
# Run command; redirect its stderr to &4 as in
# test_run_, but keep its stdout on our stdout even in
# non-verbose mode.
"$@" 2>&4
if [ "$?" = 0 ]
then
test_ok_ "$descr"
else
test_failure_ "$descr" "$@"
fi
fi
}
# Like test_external, but in addition tests that the command generated
# no output on stderr.
test_external_without_stderr () {
# The temporary file has no (and must have no) security
# implications.
tmp="$TMPDIR"; if [ -z "$tmp" ]; then tmp=/tmp; fi
stderr="$tmp/todotxt-external-stderr.$$.tmp"
test_external "$@" 4> "$stderr"
[ -f "$stderr" ] || error "Internal error: $stderr disappeared."
descr="no stderr: $1"
shift
say >&3 "expecting no stderr from previous command"
if [ ! -s "$stderr" ]; then
rm "$stderr"
test_ok_ "$descr"
else
if [ "$verbose" = t ]; then
output=`echo; echo Stderr is:; cat "$stderr"`
else
output=
fi
# rm first in case test_failure exits.
rm "$stderr"
test_failure_ "$descr" "$@" "$output"
fi
}
# This is not among top-level (test_expect_success | test_expect_failure)
# but is a prefix that can be used in the test script, like:
#
# test_expect_success 'complain and die' '
# do something &&
# do something else &&
# test_must_fail git checkout ../outerspace
# '
#
# Writing this as "! git checkout ../outerspace" is wrong, because
# the failure could be due to a segv. We want a controlled failure.
test_must_fail () {
"$@"
test $? -gt 0 -a $? -le 129 -o $? -gt 192
}
# test_cmp is a helper function to compare actual and expected output.
# You can use it like:
#
# test_expect_success 'foo works' '
# echo expected >expected &&
# foo >actual &&
# test_cmp expected actual
# '
#
# This could be written as either "cmp" or "diff -u", but:
# - cmp's output is not nearly as easy to read as diff -u
# - not all diff versions understand "-u"
test_cmp() {
diff -u "$@"
}
test_done () {
trap - EXIT
test_results_dir="$TEST_DIRECTORY/test-results"
mkdir -p "$test_results_dir"
test_results_path="$test_results_dir/${0%.sh}-$$"
echo "total $test_count" >> $test_results_path
echo "success $test_success" >> $test_results_path
echo "fixed $test_fixed" >> $test_results_path
echo "broken $test_broken" >> $test_results_path
echo "failed $test_failure" >> $test_results_path
echo "" >> $test_results_path
if test "$test_fixed" != 0
then
say_color pass "fixed $test_fixed known breakage(s)"
fi
if test "$test_broken" != 0
then
say_color error "still have $test_broken known breakage(s)"
msg="remaining $(($test_count-$test_broken)) test(s)"
else
msg="$test_count test(s)"
fi
case "$test_failure" in
0)
say_color pass "passed all $msg"
# Clean up this test.
test -d "$remove_trash" &&
cd "$(dirname "$remove_trash")" &&
rm -rf "$(basename "$remove_trash")"
exit 0 ;;
*)
say_color error "failed $test_failure among $msg"
exit 1 ;;
esac
}
# Make sure we are testing the latest version.
TEST_DIRECTORY=$(pwd)
PATH=$TEST_DIRECTORY/..:$PATH
# Test repository
test="trash directory.$(basename "$0" .sh)"
test ! -z "$debug" || remove_trash="$TEST_DIRECTORY/$test"
rm -fr "$test" || {
trap - EXIT
echo >&5 "FATAL: Cannot prepare test area"
exit 1
}
# Most tests can use the created repository, but some may need to create more.
# Usage: test_init_todo <directory>
test_init_todo () {
test "$#" = 1 ||
error "bug in the test script: not 1 parameter to test_init_todo"
owd=`pwd`
root="$1"
mkdir -p "$root"
cd "$root" || error "Cannot setup todo dir in $root"
# Initialize the configuration file. Carefully quoted.
sed -e 's|TODO_DIR=.*$|TODO_DIR="'"$TEST_DIRECTORY/$test"'"|' $TEST_DIRECTORY/../todo.cfg > todo.cfg
cd "$owd"
}
test_init_todo "$test"
# Use -P to resolve symlinks in our working directory so that the cwd
# in subprocesses equals our $PWD (for pathname comparisons).
cd -P "$test" || exit 1
# Since todo.sh refers to the home directory often,
# make sure we don't accidentally grab the tester's config
# but use something specified by the framework.
HOME=$(pwd)
export HOME
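# Derive the test family name (e.g. t0001) from the script file name.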
this_test=${0##*/}
this_test=${this_test%%-*}
to_skip=
for skp in $SKIP_TESTS
do
    case "$this_test" in
    $skp)
        to_skip=t
    esac
done
case "$to_skip" in
t)
    say_color skip >&3 "skipping test $this_test altogether"
    say_color skip "skip all tests in $this_test"
    test_done
esac