# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.14.2.13 2002/01/07 22:34:32 schwern Exp $

use Test::Harness::Straps;
use Test::Harness::Assert;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches
            @ISA @EXPORT @EXPORT_OK
           );

# Backwards compatibility for exportable variable names.
*switches = \$Switches;

$Have_Devel_Corestack = 0;

$ENV{HARNESS_ACTIVE} = 1;

END { delete $ENV{HARNESS_ACTIVE} }

# Some experimental versions of OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

my $Strap = Test::Harness::Straps->new;

@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;              # Some shells have trouble with a full line of text.
=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

  runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test
script.  Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.
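For instance, a minimal hand-rolled test script satisfying this
protocol (the file name and the arithmetic checks are only
illustrative) could look like:

    #!perl -w
    # Print the 1..M header first, then one "ok N" / "not ok N"
    # line per test on standard output.
    print "1..2\n";
    print 1 + 1 == 2 ? "ok 1\n" : "not ok 1\n";
    print 2 * 2 == 4 ? "ok 2\n" : "not ok 2\n";

Saved as, say, F<t/arith.t>, it can be handed to runtests() like any
other test file.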
=head2 The test script output

The following explains how Test::Harness interprets the output of your
test script.

=over 4

=item B<'1..M'>

This header tells how many tests there will be.  It should be the
first line output by your test program (but it's okay if it's preceded
by comments).

In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted (but not
encouraged) for the 1..M header to appear as the B<last> line output
by your test (again, it can be followed by further comments).  But we
strongly encourage you to put it first.

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.
=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user.  Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.
=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  It is tolerated if the test numbers after 'ok' are
omitted.  In this case Test::Harness maintains temporarily its own
counter until the script supplies test numbers again.  So the following
test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay
Anything after the test number but before the # is considered to be
the name of the test.

    ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.
=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

    ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include an explanation in a C<1..0> line
emitted if the test script is skipped completely:

    1..0 # Skipped: no leverage found
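As a sketch, a script that cannot do its work on the current platform
might skip everything up front (the C<Some::Leverage> prerequisite
module is made up):

    # Skip the whole script before running any tests.
    unless (eval { require Some::Leverage }) {   # hypothetical module
        print "1..0 # Skipped: no leverage found\n";
        exit 0;
    }
    print "1..1\n";
    print "ok 1\n";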
=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
afterwards is the thing that has to be done before this test will
succeed.

    not ok 13 # TODO harness the power of the atom

Alternatively, you can specify a list of what tests are todo as part
of the test header.

This only works if the header appears at the beginning of the test.

This style is B<deprecated>.
These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list.  They are
B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.  This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.
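A todo test is thus just a failing test with a marker on it; a sketch:

    print "1..2\n";
    print "ok 1\n";
    # Expected to fail for now; Test::Harness counts it as a todo
    # test, and reports a bonus if it ever starts passing.
    print "not ok 2 # TODO harness the power of the atom\n";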
=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately.  In that case the test script prints the magic words

    Bail out!

to standard output.  Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.
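For example, a script whose database is unreachable might stop the
whole run (the DSN here is illustrative only):

    use DBI;
    # If we cannot even connect, none of the remaining tests can pass.
    my $dbh = eval { DBI->connect("dbi:SQLite:dbname=test.db") };
    unless ($dbh) {
        print "Bail out! Cannot connect to the test database\n";
        exit 0;
    }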
=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

    # Life is good, the sun is shining, RAM is cheap.
    # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output Test::Harness sees, it will silently ignore B<BUT WE
PLAN TO CHANGE THIS!>  If you wish to place additional output in your
test script, please use a comment.

=back
=head2 Taint mode

Test::Harness will honor the C<-T> in the #! line on your test files.  So
if you begin a test with:

    #!perl -T

the test will be run with taint mode on.
=head2 Configuration variables.

These variables can be used to configure the behavior of
Test::Harness.  They are exported on request.

=over 4

=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without altering the behavior otherwise.

=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set perl command line options used for running the test
script(s).  The default value is C<-w>.

=back
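Putting these together, a small driver script might import and set
both variables before running the suite; a sketch:

    use Test::Harness qw(&runtests $verbose $switches);

    $verbose  = 1;         # echo each test script's standard output
    $switches = '-w -I.';  # perl switches for every test (default: -w)

    runtests(glob 't/*.t');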
It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    t/waterloo..........dubious
            Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
            Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.
The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test, i.e. the raw value of C<$?>.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (ie. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back
=head1 FUNCTIONS

Test::Harness currently only has one function; here it is.

=over 4

=item B<runtests>

  my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok, false otherwise.
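Because a failing run die()s with one of the messages listed under
DIAGNOSTICS, a caller that wants to continue after failures can wrap
the call; a sketch:

    use Test::Harness qw(&runtests);

    my $allok = eval { runtests(glob 't/*.t') };
    if ($@) {
        # runtests() died; the summary report was already printed.
        warn "Test run failed: $@";
    }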
=cut

# This is just _run_all_tests() plus _show_results()
sub runtests {
    my(@tests) = @_;

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});
  my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

sub _all_ok {
    my($tot) = @_;
    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
}

  my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;
    return @f;
}
=item B<_run_all_tests>

  my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report).  $total is a hash ref summary of all the tests run.  Its keys
and values are this:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test run.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script, each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string)

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.
        tests   => scalar @tests,

    local($ENV{'PERL5LIB'}) = $Strap->_INC2PERL5LIB;

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        $Strap->_reset_file_state;

        my($leader, $ml) = _mk_leader($tfile, $width);

        my $fh = _open_test($tfile);

        # state of the current test.
            skip_reason => undef,

        my($seen_header, $tests_seen) = (0,0);

            if( _parse_header($_, \%test, \%tot) ) {
                warn "Test header seen twice!\n" if $seen_header;

                warn "1..M can only appear at the beginning or end of tests\n"
                  if $tests_seen && $test{max} < $tests_seen;
            }
            elsif( _parse_test_line($_, \%test, \%tot) ) {

        my($estatus, $wstatus) = _close_fh($fh);
        my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;

            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
            $failedtests{$tfile}{name} = $tfile;

        if ($test{max} and $test{skipped} + $test{bonus}) {
            push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
            push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
            print "$test{ml}ok, ".join(', ', @msg)."\n";
        } elsif ($test{max}) {
            print "$test{ml}ok\n";
        } elsif (defined $test{skip_reason}) {
            print "skipped: $test{skip_reason}\n";

            print "skipped test on this platform\n";

            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};
            }
            if (@{$test{failed}}) {
                my ($txt, $canon) = canonfailed($test{max},$test{skipped},
                print "$test{ml}$txt";
                $failedtests{$tfile} = { canon   => $canon,
                                         failed  => scalar @{$test{failed}},
                                         percent => 100*(scalar @{$test{failed}})/$test{max},

                print "Don't know which tests failed: got $test{ok} ok, ".
                      "expected $test{max}\n";
                $failedtests{$tfile} = { canon => '??',

        } elsif ($test{'next'} == 0) {
            print "FAILED before any test output arrived\n";

            $failedtests{$tfile} = { canon => '??',

        $tot{sub_skipped} += $test{skipped};
        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }

    $tot{bench} = timediff(new Benchmark, $t_start);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
}
  my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such).  $ml may be empty if Test::Harness doesn't think you're
on a tty.

The $width is the width of the "yada/blah.." string.

sub _mk_leader {
    my($te, $width) = @_;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank  = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml     = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}
=item B<_leader_width>

  my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test name.

sub _leader_width {
    my $maxlen    = 0;
    my $maxsuflen = 0;
    foreach (@_) {
        my $len    = length;
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
}
sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if (_all_ok($tot)) {
        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}){
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},
                             $percent_ok;

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }
        $bonusmsg =~ s/^,\s*//;
        print "$bonusmsg.\n" if $bonusmsg;
        die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
            "$subpct\n";
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}
sub _parse_header {
    my($line, $test, $tot) = @_;

    my $is_header = 0;

    if( $Strap->_is_header($line) ) {
        $is_header = 1;

        $test->{max} = $Strap->{max};
        for ( keys %{$Strap->{todo}} ) { $test->{todo}{$_} = 1; }

        $test->{skip_reason} = $Strap->{skip_all}
          if not $test->{max} and defined $Strap->{skip_all};

        $test->{'next'} = 1 unless $test->{'next'};

        $tot->{max} += $test->{max};
    }

    return $is_header;
}

sub _open_test {
    my($test) = @_;

    my $s = _set_switches($test);

    # XXX This is WAY too core specific!
    my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
                ? "./perl -I../lib ../utils/perlcc $test "
                  . "-r 2>> ./compilelog |"
                : "$^X $s $test|";
    $cmd = "MCR $cmd" if $^O eq 'VMS';

    if( open(PERL, $cmd) ) {
        return \*PERL;
    }
    else {
        print "can't run $test. $!\n";
        return 0;
    }
}
sub _parse_test_line {
    my($line, $test, $tot) = @_;

    my %result;
    if ( $Strap->_is_test($line, \%result) ) {
        $test->{'next'} ||= 1;
        my $this = $test->{'next'};

        my($not, $tnum) = (!$result{ok}, $result{number});

        $this = $tnum if $tnum;

        my($type, $reason) = ($result{type}, $result{reason});

        my($istodo, $isskip);
        if( defined $type ) {
            $istodo = 1 if $type eq 'todo';
            $isskip = 1 if $type eq 'skip';
        }

        $test->{todo}{$this} = 1 if $istodo;

        $tot->{todo}++ if $test->{todo}{$this};

        if( $not ) {
            print "$test->{ml}NOK $this" if $test->{ml};
            if (!$test->{todo}{$this}) {
                push @{$test->{failed}}, $this;
            }
        }
        else {
            print "$test->{ml}ok $this/$test->{max}" if $test->{ml};

            $test->{skipped}++ if $isskip;

            $reason = '[no reason given]'
              if $isskip and not defined $reason;
            if (defined $reason and defined $test->{skip_reason}) {
                # print "was: '$skip_reason' new '$reason'\n";
                $test->{skip_reason} = 'various reasons'
                  if $test->{skip_reason} ne $reason;
            } elsif (defined $reason) {
                $test->{skip_reason} = $reason;
            }

            $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
        }

        if ($this > $test->{'next'}) {
            print "Test output counter mismatch [test $this]\n";
            push @{$test->{failed}}, $test->{'next'}..$this-1;
        }
        elsif ($this < $test->{'next'}) {
            # we have seen more "ok" lines than the number suggests
            print "Confused test output: test $this answered after ".
                  "test ", $test->{'next'}-1, "\n";
            $test->{'next'} = $this;
        }

        $test->{'next'} = $this + 1;
    }

    my $bail_reason;
    if( $Strap->_is_bail_out($line, \$bail_reason) ) {   # bail out!
        die "FAILED--Further testing stopped" .
            ($bail_reason ? ": $bail_reason\n" : ".\n");
    }
    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
        if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
# VMS has some subtle nastiness with closing the test files.
sub _close_fh {
    my($fh) = @_;

    close($fh);   # must close to reap child resource values

    my $wstatus = $Ignore_Exitcode ? 0 : $?;    # Can trust $? ?
    my $estatus;
    $estatus = ($^O eq 'VMS'
                  ? eval 'use vmsish "status"; $estatus = $?'
                  : $wstatus >> 8);

    return($estatus, $wstatus);
}
# Set up the command-line switches to run perl as.
    $s .= $Strap->_switches($test);
# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus, $wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
        print "\tafter all the subtests completed successfully\n";
        $percent = 0;
        $failed  = 0;        # But we do not set $canon!
    }
    else {
        push @{$test->{failed}}, $test->{'next'}..$test->{max};
        $failed = @{$test->{failed}};
        (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},
                                        @{$test->{failed}});
        $percent = 100*(scalar @{$test->{failed}})/$test->{max};
        print "DIED. ",$txt;
    }

    return { canon   => $canon,   max   => $test->{max} || '??',
             failed  => $failed,
             percent => $percent,
             estat   => $estatus, wstat => $wstatus,
           };
}
sub _garbled_output {
    my($gibberish) = shift;
    warn "Confusing test output: '$gibberish'\n";
}
sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str   = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . "  @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'

    return($fmt_top, $fmt);
}
my $tried_devel_corestack;

sub corestatus {
    my($st) = @_;

    eval { require 'wait.ph' };
    my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;

    eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
        unless $tried_devel_corestack++;

    return $ret;
}
sub canonfailed ($@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;

        my $last = $min = shift @failed;

        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {

                push @canon, "$min-$last";

        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;

        push @result, "FAILED test $last\n";

    push @result, "\tFailed $failed/$max tests, ";
    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";

        my $ender   = 's' x ($skipped > 1);
        my $good    = $max - $failed - $skipped;
        my $goodper = sprintf("%.2f",100*($good/$max));
        push @result, " (-$skipped skipped test$ender: $good okay, ".

    my $txt = join "", @result;
=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.

=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped%s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back
=head1 ENVIRONMENT

=over 4

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces harness to behave as though STDOUT were
not a console.  You may need to set this if you don't want harness to
output more frequent progress messages using carriage returns.  Some
consoles may not handle carriage returns properly (which results in a
somewhat messy output).

=item C<HARNESS_COMPILE_TEST>

When true, harness will attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

  LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called.  Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test.  For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal.  If it is not
set then it will default to C<COLUMNS>.  If this is not set, it will
default to 80.  Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests.  This allows
the tests to determine if they are being executed through the harness
or by any other means.

=back
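A test script can use this, for example, to suppress output that only
makes sense when the script is run by hand; a sketch:

    # Inside a test script: extra diagnostics only when run directly,
    # not under Test::Harness.
    unless ($ENV{HARNESS_ACTIVE}) {
        print "# running outside the harness; extra diagnostics follow\n";
    }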
=head1 EXAMPLE

Here's how Test::Harness tests itself:

  $ cd ~/src/devel/Test-Harness
  $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
  Using /home/schwern/src/devel/Test-Harness/blib
  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  All tests successful.
  Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know.  What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages.  Numerous anonymous contributors
exist.  Andreas Koenig held the torch for many years.

Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
=head1 TODO

Provide a way of running tests quietly (ie. no printing) for automated
validation of tests.  This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure out a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.

Merge back into bleadperl.

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

Keep whittling away at _run_all_tests().

Clean up how the summary is printed.  Get rid of those damned formats.
=head1 BUGS

Test::Harness uses $^X to determine the perl binary to run the tests
with.  Test scripts running via the shebang (C<#!>) line may not be
portable because $^X is not consistent for shebang scripts across
platforms.  This is no problem when Test::Harness is run with an
absolute path to the perl binary or when $^X can be found in the path.

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.