1 # -*- Mode: cperl; cperl-indent-level: 4 -*-
6 use Test::Harness::Straps;
7 use Test::Harness::Assert;
13 use vars '$has_time_hires';
16 eval "use Time::HiRes 'time'";
17 $has_time_hires = !$@;
22 @ISA @EXPORT @EXPORT_OK
23 $Verbose $Switches $Debug
24 $verbose $switches $debug
33 Test::Harness - Run Perl standard test scripts with statistics
43 # Backwards compatibility for exportable variable names.
45 *switches = *Switches;
48 $ENV{HARNESS_ACTIVE} = 1;
49 $ENV{HARNESS_VERSION} = $VERSION;
53 delete $ENV{HARNESS_ACTIVE};
54 delete $ENV{HARNESS_VERSION};
57 # Some experimental versions of OS/2 build have broken $?
58 my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
60 my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
62 $Strap = Test::Harness::Straps->new;
sub strap { return $Strap }
67 @EXPORT = qw(&runtests);
68 @EXPORT_OK = qw($verbose $switches);
70 $Verbose = $ENV{HARNESS_VERBOSE} || 0;
71 $Debug = $ENV{HARNESS_DEBUG} || 0;
73 $Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
74 $Columns--; # Some shells have trouble with a full line of text.
80 runtests(@test_files);
84 B<STOP!> If all you want to do is write a test script, consider
85 using Test::Simple. Test::Harness is the module that reads the
86 output from Test::Simple, Test::More and other modules based on
Test::Builder. You don't need to know about Test::Harness to use
those modules.
90 Test::Harness runs tests and expects output from the test in a
91 certain format. That format is called TAP, the Test Anything
92 Protocol. It is defined in L<Test::Harness::TAP>.
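As a minimal illustration, a TAP stream is just a plan line followed by one C<ok>/C<not ok> line per test. The test descriptions below are made up; normally a module like Test::More emits this for you:

```perl
# Build a minimal TAP stream by hand to show the format Test::Harness reads.
my $tap = join '',
    "1..2\n",                          # the plan: two tests expected
    "ok 1 - addition works\n",         # a passing test
    "not ok 2 - subtraction works\n";  # a failing test
print $tap;
```

Test::Harness parses exactly this kind of output from each test script's STDOUT.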
C<Test::Harness::runtests(@tests)> runs all the test scripts named
as arguments and checks standard output for the expected strings
of output.
98 The F<prove> utility is a thin wrapper around Test::Harness.
102 Test::Harness will honor the C<-T> or C<-t> in the #! line on your
test files. So if you begin a test with:

    #!perl -T

the test will be run with taint mode on.
=head2 Configuration variables
111 These variables can be used to configure the behavior of
112 Test::Harness. They are exported on request.
116 =item C<$Test::Harness::Verbose>
118 The package variable C<$Test::Harness::Verbose> is exportable and can be
119 used to let C<runtests()> display the standard output of the script
without altering the behavior otherwise. The F<prove> utility's C<-v>
flag will set this.
123 =item C<$Test::Harness::switches>
125 The package variable C<$Test::Harness::switches> is exportable and can be
126 used to set perl command line options used for running the test
127 script(s). The default value is C<-w>. It overrides C<HARNESS_SWITCHES>.
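A short sketch of setting both variables from a driver script. The C<-Ilib> switch here is illustrative, not a default:

```perl
use Test::Harness qw(runtests $verbose $switches);

# Echo each test script's raw output as it runs (same effect as prove -v).
$Test::Harness::Verbose = 1;

# Run every test with warnings on and lib/ on @INC (illustrative values).
$Test::Harness::switches = '-w -Ilib';
```

With these set, a subsequent C<runtests(@test_files)> call picks up both settings.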
134 When tests fail, analyze the summary report:
136 t/base..............ok
137 t/nonumbers.........ok
138 t/ok................ok
139 t/test-harness......ok
140 t/waterloo..........dubious
141 Test returned status 3 (wstat 768, 0x300)
142 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
143 Failed 10/20 tests, 50.00% okay
144 Failed Test Stat Wstat Total Fail Failed List of Failed
145 -----------------------------------------------------------------------
146 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
147 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
Everything passed but F<t/waterloo.t>. It failed 10 of 20 tests and
exited with a non-zero status, indicating that something dubious happened.
152 The columns in the summary report mean:
158 The test file which failed.
162 If the test exited with non-zero, this is its exit status.
166 The wait status of the test.
170 Total number of tests expected to run.
174 Number which failed, either from "not ok" or because they never ran.
178 Percentage of the total tests which failed.
180 =item B<List of Failed>
A list of the tests which failed. Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).
Test::Harness currently has only one function; here it is.
197 my $allok = runtests(@test_files);
199 This runs all the given I<@test_files> and divines whether they passed
200 or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.
204 It returns true if everything was ok. Otherwise it will C<die()> with
205 one of the messages in the DIAGNOSTICS section.
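A self-contained sketch: generate a trivial passing test file and feed it to C<runtests()>, wrapped in C<eval> because C<runtests()> dies when anything fails. The file name and contents are invented for the example:

```perl
use File::Temp qw(tempdir);
use Test::Harness qw(runtests);

# Write a minimal passing test script to a temporary directory.
my $dir   = tempdir( CLEANUP => 1 );
my $tfile = "$dir/pass.t";
open my $fh, '>', $tfile or die "Can't write $tfile: $!";
print $fh qq{print "1..1\\nok 1\\n";\n};
close $fh;

# runtests() prints its report to STDOUT and dies if any test failed.
my $allok = eval { runtests($tfile) };
die $@ if $@;
```

On success C<$allok> is true and the usual "All tests successful." summary is printed.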
214 my($tot, $failedtests) = _run_all_tests(@tests);
215 _show_results($tot, $failedtests);
217 my $ok = _all_ok($tot);
219 assert(($ok xor keys %$failedtests),
220 q{ok status jives with $failedtests});
229 my $ok = _all_ok(\%tot);
231 Tells you if this test run is overall successful or not.
238 return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
243 my @files = _globdir $dir;
245 Returns all the files in a directory. This is shorthand for backwards
246 compatibility on systems where C<glob()> doesn't work right.
252 my @f = readdir DIRH;
258 =item B<_run_all_tests>
260 my($total, $failed) = _run_all_tests(@test_files);
262 Runs all the given C<@test_files> (as C<runtests()>) but does it
quietly (no report). $total is a hash ref summarizing all the tests
run. Its keys and values are these:
    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped
278 If C<< $total->{bad} == 0 >> and C<< $total->{max} > 0 >>, you've
279 got a successful test.
281 $failed is a hash ref of all the test scripts which failed. Each key
282 is the name of a test script, each value is another hash representing
283 how that script failed. Its keys are these:
    name            Name of the test which failed
    estat           Script's exit value
    wstat           Script's wait status
    max             Number of individual tests
    failed          Number which failed
    percent         Percentage of tests which failed
    canon           List of tests which failed (as string).
293 C<$failed> should be empty if everything passed.
295 B<NOTE> Currently this function is still noisy. I'm working on it.
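To make the shape of C<$failed> concrete, here is a hypothetical entry (values taken from the F<t/waterloo.t> report earlier in this document) and a loop over it; no harness call is involved:

```perl
# Hypothetical data shaped like a $failed entry as documented above;
# the values mirror the t/waterloo.t summary shown earlier.
my $failed = {
    't/waterloo.t' => {
        name    => 't/waterloo.t',
        estat   => 3,
        wstat   => 768,
        max     => 20,
        failed  => 10,
        percent => 50.00,
        canon   => '1 3 5 7 9 11 13 15 17 19',
    },
};

for my $script (sort keys %$failed) {
    my $f = $failed->{$script};
    printf "%s: %d/%d failed (%s)\n",
        $f->{name}, $f->{failed}, $f->{max}, $f->{canon};
}
```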
299 # Turns on autoflush for the handle passed
301 my $flushy_fh = shift;
302 my $old_fh = select $flushy_fh;
310 _autoflush(\*STDOUT);
311 _autoflush(\*STDERR);
323 tests => scalar @tests,
331 @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
my $run_start_time = Benchmark->new;
334 my $width = _leader_width(@tests);
335 foreach my $tfile (@tests) {
336 $Last_ML_Print = 0; # so each test prints at least once
337 my($leader, $ml) = _mk_leader($tfile, $width);
344 $Strap->{_seen_header} = 0;
345 if ( $Test::Harness::Debug ) {
346 print "# Running: ", $Strap->_command_line($tfile), "\n";
348 my $test_start_time = time;
349 my %results = $Strap->analyze_file($tfile) or
350 do { warn $Strap->{error}, "\n"; next };
351 my $test_end_time = time;
352 my $elapsed = $test_end_time - $test_start_time;
353 $elapsed = $has_time_hires ? sprintf( " %8.3fs", $elapsed ) : "";
355 # state of the current test.
356 my @failed = grep { !$results{details}[$_-1]{ok} }
357 1..@{$results{details}};
360 'next' => $Strap->{'next'},
361 max => $results{max},
363 bonus => $results{bonus},
364 skipped => $results{skip},
365 skip_reason => $results{skip_reason},
366 skip_all => $Strap->{skip_all},
370 $tot{bonus} += $results{bonus};
371 $tot{max} += $results{max};
372 $tot{ok} += $results{ok};
373 $tot{todo} += $results{todo};
374 $tot{sub_skipped} += $results{skip};
376 my($estatus, $wstatus) = @results{qw(exit wait)};
378 if ($results{passing}) {
379 # XXX Combine these first two
380 if ($test{max} and $test{skipped} + $test{bonus}) {
push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
    if $test{skipped};
push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
    if $test{bonus};
386 print "$test{ml}ok$elapsed\n ".join(', ', @msg)."\n";
388 elsif ( $test{max} ) {
389 print "$test{ml}ok$elapsed\n";
391 elsif ( defined $test{skip_all} and length $test{skip_all} ) {
392 print "skipped\n all skipped: $test{skip_all}\n";
396 print "skipped\n all skipped: no reason given\n";
402 # List unrun tests as failures.
403 if ($test{'next'} <= $test{max}) {
404 push @{$test{failed}}, $test{'next'}..$test{max};
406 # List overruns as failures.
408 my $details = $results{details};
409 foreach my $overrun ($test{max}+1..@$details) {
410 next unless ref $details->[$overrun-1];
411 push @{$test{failed}}, $overrun
$failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                       $estatus, $wstatus);
418 $failedtests{$tfile}{name} = $tfile;
420 elsif($results{seen}) {
421 if (@{$test{failed}} and $test{max}) {
my ($txt, $canon) = _canonfailed($test{max},$test{skipped},
                                 @{$test{failed}});
424 print "$test{ml}$txt";
425 $failedtests{$tfile} = { canon => $canon,
427 failed => scalar @{$test{failed}},
429 percent => 100*(scalar @{$test{failed}})/$test{max},
435 print "Don't know which tests failed: got $test{ok} ok, ".
436 "expected $test{max}\n";
437 $failedtests{$tfile} = { canon => '??',
449 print "FAILED before any test output arrived\n";
451 $failedtests{$tfile} = { canon => '??',
462 if (defined $Files_In_Dir) {
463 my @new_dir_files = _globdir $Files_In_Dir;
464 if (@new_dir_files != @dir_files) {
466 @f{@new_dir_files} = (1) x @new_dir_files;
467 delete @f{@dir_files};
468 my @f = sort keys %f;
469 print "LEAKED FILES: @f\n";
470 @dir_files = @new_dir_files;
$tot{bench} = timediff(Benchmark->new, $run_start_time);
476 $Strap->_restore_PERL5LIB;
478 return(\%tot, \%failedtests);
483 my($leader, $ml) = _mk_leader($test_file, $width);
485 Generates the 't/foo........' leader for the given C<$test_file> as well
486 as a similar version which will overwrite the current line (by use of
487 \r and such). C<$ml> may be empty if Test::Harness doesn't think you're
490 The C<$width> is the width of the "yada/blah.." string.
495 my($te, $width) = @_;
500 $te =~ s/^.*\.t\./\[.t./s;
502 my $leader = "$te" . '.' x ($width - length($te));
505 if ( -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose ) {
506 $ml = "\r" . (' ' x 77) . "\r$leader"
509 return($leader, $ml);
512 =item B<_leader_width>
514 my($width) = _leader_width(@test_files);
Calculates how wide the leader should be based on the length of the
longest test name.
527 my $suflen = length $suf;
528 $maxlen = $len if $len > $maxlen;
529 $maxsuflen = $suflen if $suflen > $maxsuflen;
531 # + 3 : we want three dots between the test name and the "ok"
532 return $maxlen + 3 - $maxsuflen;
537 my($tot, $failedtests) = @_;
540 my $bonusmsg = _bonusmsg($tot);
543 print "All tests successful$bonusmsg.\n";
545 elsif (!$tot->{tests}){
546 die "FAILED--no tests were run for some reason.\n";
548 elsif (!$tot->{max}) {
549 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
550 die "FAILED--$tot->{tests} test $blurb could be run, ".
551 "alas--no output ever seen\n";
554 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
555 my $percent_ok = 100*$tot->{ok}/$tot->{max};
my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                     $tot->{max} - $tot->{ok}, $tot->{max},
                     $percent_ok;
560 my($fmt_top, $fmt) = _create_fmts($failedtests);
562 # Now write to formats
563 for my $script (sort keys %$failedtests) {
564 $Curtest = $failedtests->{$script};
568 $bonusmsg =~ s/^,\s*//;
569 print "$bonusmsg.\n" if $bonusmsg;
die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
    "$subpct\n";
575 printf("Files=%d, Tests=%d, %s\n",
576 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
581 header => \&header_handler,
582 test => \&test_handler,
583 bailout => \&bailout_handler,
586 $Strap->{callback} = \&strap_callback;
588 my($self, $line, $type, $totals) = @_;
589 print $line if $Verbose;
591 my $meth = $Handlers{$type};
592 $meth->($self, $line, $type, $totals) if $meth;
597 my($self, $line, $type, $totals) = @_;
599 warn "Test header seen more than once!\n" if $self->{_seen_header};
601 $self->{_seen_header}++;
603 warn "1..M can only appear at the beginning or end of tests\n"
604 if $totals->{seen} &&
605 $totals->{max} < $totals->{seen};
609 my($self, $line, $type, $totals) = @_;
611 my $curr = $totals->{seen};
612 my $next = $self->{'next'};
613 my $max = $totals->{max};
614 my $detail = $totals->{details}[-1];
616 if( $detail->{ok} ) {
617 _print_ml_less("ok $curr/$max");
619 if( $detail->{type} eq 'skip' ) {
620 $totals->{skip_reason} = $detail->{reason}
621 unless defined $totals->{skip_reason};
622 $totals->{skip_reason} = 'various reasons'
623 if $totals->{skip_reason} ne $detail->{reason};
627 _print_ml("NOK $curr");
630 if( $curr > $next ) {
631 print "Test output counter mismatch [test $curr]\n";
633 elsif( $curr < $next ) {
634 print "Confused test output: test $curr answered after ".
635 "test ", $next - 1, "\n";
640 sub bailout_handler {
641 my($self, $line, $type, $totals) = @_;
643 die "FAILED--Further testing stopped" .
644 ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
649 print join '', $ML, @_ if $ML;
# For slow connections, we save lots of bandwidth by printing only once
# per second.
656 if ( $Last_ML_Print != time ) {
658 $Last_ML_Print = time;
666 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
667 " UNEXPECTEDLY SUCCEEDED)")
670 if ($tot->{skipped}) {
671 $bonusmsg .= ", $tot->{skipped} test"
672 . ($tot->{skipped} != 1 ? 's' : '');
673 if ($tot->{sub_skipped}) {
674 $bonusmsg .= " and $tot->{sub_skipped} subtest"
675 . ($tot->{sub_skipped} != 1 ? 's' : '');
677 $bonusmsg .= ' skipped';
679 elsif ($tot->{sub_skipped}) {
680 $bonusmsg .= ", $tot->{sub_skipped} subtest"
681 . ($tot->{sub_skipped} != 1 ? 's' : '')
688 # Test program go boom.
689 sub _dubious_return {
690 my($test, $tot, $estatus, $wstatus) = @_;
691 my ($failed, $canon, $percent) = ('??', '??');
693 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
694 "(wstat %d, 0x%x)\n",
696 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
701 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
702 print "\tafter all the subtests completed successfully\n";
704 $failed = 0; # But we do not set $canon!
707 push @{$test->{failed}}, $test->{'next'}..$test->{max};
708 $failed = @{$test->{failed}};
709 (my $txt, $canon) = _canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
710 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
715 return { canon => $canon, max => $test->{max} || '??',
718 estat => $estatus, wstat => $wstatus,
724 my($failedtests) = @_;
726 my $failed_str = "Failed Test";
727 my $middle_str = " Stat Wstat Total Fail Failed ";
728 my $list_str = "List of Failed";
730 # Figure out our longest name string for formatting purposes.
731 my $max_namelen = length($failed_str);
732 foreach my $script (keys %$failedtests) {
733 my $namelen = length $failedtests->{$script}->{name};
734 $max_namelen = $namelen if $namelen > $max_namelen;
737 my $list_len = $Columns - length($middle_str) - $max_namelen;
738 if ($list_len < length($list_str)) {
739 $list_len = length($list_str);
740 $max_namelen = $Columns - length($middle_str) - $list_len;
741 if ($max_namelen < length($failed_str)) {
742 $max_namelen = length($failed_str);
743 $Columns = $max_namelen + length($middle_str) + $list_len;
747 my $fmt_top = "format STDOUT_TOP =\n"
748 . sprintf("%-${max_namelen}s", $failed_str)
754 my $fmt = "format STDOUT =\n"
755 . "@" . "<" x ($max_namelen - 1)
756 . " @>> @>>>> @>>>> @>>> ^##.##% "
757 . "^" . "<" x ($list_len - 1) . "\n"
758 . '{ $Curtest->{name}, $Curtest->{estat},'
759 . ' $Curtest->{wstat}, $Curtest->{max},'
760 . ' $Curtest->{failed}, $Curtest->{percent},'
761 . ' $Curtest->{canon}'
763 . "~~" . " " x ($Columns - $list_len - 2) . "^"
764 . "<" x ($list_len - 1) . "\n"
765 . '$Curtest->{canon}'
773 return($fmt_top, $fmt);
776 sub _canonfailed ($$@) {
777 my($max,$skipped,@failed) = @_;
my %seen;
@failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
780 my $failed = @failed;
784 my $last = $min = shift @failed;
787 for (@failed, $failed[-1]) { # don't forget the last one
788 if ($_ > $last+1 || $_ == $last) {
789 push @canon, ($min == $last) ? $last : "$min-$last";
795 push @result, "FAILED tests @canon\n";
796 $canon = join ' ', @canon;
799 push @result, "FAILED test $last\n";
803 push @result, "\tFailed $failed/$max tests, ";
805 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
808 push @result, "?% okay";
810 my $ender = 's' x ($skipped > 1);
812 my $good = $max - $failed - $skipped;
813 my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";
815 my $goodper = sprintf("%.2f",100*($good/$max));
816 $skipmsg .= "$goodper%)";
821 push @result, $skipmsg;
824 my $txt = join "", @result;
841 C<&runtests> is exported by Test::Harness by default.
843 C<$verbose>, C<$switches> and C<$debug> are exported upon request.
849 =item C<All tests successful.\nFiles=%d, Tests=%d, %s>
If all tests are successful, some statistics about the performance are
printed.
854 =item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
For any single script that has failing subtests, statistics like the
above are printed.
859 =item C<Test returned status %d (wstat %d)>
For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.
864 =item C<Failed 1 test, %.2f%% okay. %s>
866 =item C<Failed %d/%d tests, %.2f%% okay. %s>
If not all tests were successful, the script dies with one of the
above messages.
871 =item C<FAILED--Further testing stopped: %s>
873 If a single subtest decides that further testing will not make sense,
874 the script dies with this message.
878 =head1 ENVIRONMENT VARIABLES THAT TEST::HARNESS SETS
880 Test::Harness sets these before executing the individual tests.
884 =item C<HARNESS_ACTIVE>
886 This is set to a true value. It allows the tests to determine if they
887 are being executed through the harness or by any other means.
889 =item C<HARNESS_VERSION>
891 This is the version of Test::Harness.
895 =head1 ENVIRONMENT VARIABLES THAT AFFECT TEST::HARNESS
899 =item C<HARNESS_COLUMNS>
901 This value will be used for the width of the terminal. If it is not
902 set then it will default to C<COLUMNS>. If this is not set, it will
903 default to 80. Note that users of Bourne-sh based shells will need to
904 C<export COLUMNS> for this module to use that variable.
906 =item C<HARNESS_COMPILE_TEST>
When true, the harness will attempt to compile the test using
C<perlcc> before running it.
B<NOTE> This currently only works when sitting in the perl source
directory!
914 =item C<HARNESS_DEBUG>
916 If true, Test::Harness will print debugging information about itself as
917 it runs the tests. This is different from C<HARNESS_VERBOSE>, which prints
918 the output from the test being run. Setting C<$Test::Harness::Debug> will
919 override this, or you can use the C<-d> switch in the F<prove> utility.
921 =item C<HARNESS_FILELEAK_IN_DIR>
923 When set to the name of a directory, harness will check after each
924 test whether new files appeared in that directory, and report them as
926 LEAKED FILES: scr.tmp 0 my.db
If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called. Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.
932 =item C<HARNESS_IGNORE_EXITCODE>
When defined, makes harness ignore the exit status of child processes.
936 =item C<HARNESS_NOTTY>
When set to a true value, forces harness to behave as though STDOUT
were not a console. You may need to set this if you don't want harness
to output more frequent progress messages using carriage returns. Some
consoles may not handle carriage returns properly (which results in
somewhat messy output).
944 =item C<HARNESS_PERL>
946 Usually your tests will be run by C<$^X>, the currently-executing Perl.
947 However, you may want to have it run by a different executable, such as
948 a threading perl, or a different version.
950 If you're using the F<prove> utility, you can use the C<--perl> switch.
952 =item C<HARNESS_PERL_SWITCHES>
954 Its value will be prepended to the switches used to invoke perl on
955 each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
956 run all tests with all warnings enabled.
958 =item C<HARNESS_VERBOSE>
960 If true, Test::Harness will output the verbose results of running
961 its tests. Setting C<$Test::Harness::verbose> will override this,
962 or you can use the C<-v> switch in the F<prove> utility.
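These are ordinary environment variables, so a driver script can set them before loading Test::Harness. Setting them in a C<BEGIN> block before the C<use> matters for those (like C<HARNESS_VERBOSE>) that are read into package variables at load time; the values below are illustrative:

```perl
# Configure the harness via the environment before Test::Harness loads,
# since e.g. HARNESS_VERBOSE is copied into $Verbose at load time.
BEGIN {
    $ENV{HARNESS_VERBOSE}       = 1;     # echo raw TAP from each script
    $ENV{HARNESS_NOTTY}         = 1;     # no \r progress lines (CI-friendly)
    $ENV{HARNESS_PERL_SWITCHES} = '-w';  # prepend -w to every perl invocation
}
use Test::Harness qw(runtests);
```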
Here's how Test::Harness tests itself:
970 $ cd ~/src/devel/Test-Harness
971 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
972 $verbose=0; runtests @ARGV;' t/*.t
973 Using /home/schwern/src/devel/Test-Harness/blib
974 t/base..............ok
975 t/nonumbers.........ok
976 t/ok................ok
977 t/test-harness......ok
978 All tests successful.
979 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
The included F<prove> utility for running test scripts from the command line,
L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, and L<Devel::Cover> for test coverage
analysis.
990 Provide a way of running tests quietly (ie. no printing) for automated
991 validation of tests. This will probably take the form of a version
992 of runtests() which rather than printing its output returns raw data
993 on the state of the tests. (Partially done in Test::Harness::Straps)
997 Fix HARNESS_COMPILE_TEST without breaking its core usage.
999 Figure a way to report test names in the failure summary.
1001 Rework the test summary so long test names are not truncated as badly.
1002 (Partially done with new skip test styles)
1004 Add option for coverage analysis.
1008 Implement Straps total_results()
1012 Completely redo the print summary code.
1014 Implement Straps callbacks. (experimentally implemented)
1016 Straps->analyze_file() not taint clean, don't know if it can be
1018 Fix that damned VMS nit.
1020 HARNESS_TODOFAIL to display TODO failures
1022 Add a test for verbose.
1024 Change internal list of test results to a hash.
1026 Fix stats display when there's an overrun.
1028 Fix so perls with spaces in the filename work.
Keep whittling away at _run_all_tests()
1032 Clean up how the summary is printed. Get rid of those damned formats.
HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.
1039 Please use the CPAN bug ticketing system at L<http://rt.cpan.org/>.
1040 You can also mail bugs, fixes and enhancements to
C<< <bug-test-harness at rt.cpan.org> >>.
Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
1047 with perl distributions for ages. Numerous anonymous contributors
1048 exist. Andreas Koenig held the torch for many years, and then
1051 Current maintainer is Andy Lester C<< <andy at petdance.com> >>.
1056 by Michael G Schwern C<< <schwern at pobox.com> >>,
1057 Andy Lester C<< <andy at petdance.com> >>.
1059 This program is free software; you can redistribute it and/or
1060 modify it under the same terms as Perl itself.
1062 See L<http://www.perl.com/perl/misc/Artistic.html>.