# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.17 2001/09/07 06:20:29 schwern Exp $
use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches
            @ISA @EXPORT @EXPORT_OK
           );

# Backwards compatibility for exportable variable names.
*verbose  = \$Verbose;
*switches = \$Switches;

$Have_Devel_Corestack = 0;

$ENV{HARNESS_ACTIVE} = 1;
# Some experimental versions of OS/2 builds have a broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;             # Some shells have trouble with a full line of text.
=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

  use Test::Harness;

  runtests(@test_files);
=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple. Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects. However, if you need to
know the specifics, read on!)
Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers. The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.
=head2 The test script output

The following explains how Test::Harness interprets the output of your
test script.

=over 4

=item B<'1..M'>

This header tells how many tests there will be. It should be the
first line output by your test program (but it's okay if it's preceded
by comments).

In certain instances, you may not know how many tests you will
ultimately be running. In this case, it is permitted (but not
encouraged) for the 1..M header to appear as the B<last> line output
by your test (again, it can be followed by further comments). But we
strongly encourage you to put it first.

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.
=item B<'ok', 'not ok'. Ok?>

Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user. Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests(). All other lines are discarded.

C</^not ok/> indicates a failed test. C</^ok/> indicates a successful test.
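The filtering rule above can be sketched as a small classifier (Python is used here purely for illustration; the harness itself is Perl, and the function name is made up):

```python
import re

def classify(line):
    r"""Classify one line of test output the way the harness's filter does:
    lines matching /^(not\s+)?ok\b/ are feedback for the harness; every
    other line on standard output is silently discarded."""
    if not re.match(r'(not\s+)?ok\b', line):
        return None                                  # discarded
    return 'fail' if re.match(r'not ok', line) else 'pass'
```

Note that trailing text after the match (test numbers, comments, skip directives) does not affect the pass/fail classification itself.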
=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number. It is tolerated if the test numbers after 'ok' are
omitted. In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again. So the following
test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay
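The counter behaviour can be sketched like this (Python for illustration only; the function name is invented):

```python
def number_tests(lines):
    """Assign a number to each 'ok'/'not ok' line the way the harness does
    when a script omits test numbers: keep a private counter, and
    resynchronize whenever the script supplies a number again."""
    numbers, expected = [], 1
    for line in lines:
        # strip the leading 'ok' / 'not ok' to look for an explicit number
        rest = line[6:] if line.startswith("not ok") else line[2:]
        fields = rest.split()
        n = int(fields[0]) if fields and fields[0].isdigit() else expected
        numbers.append(n)
        expected = n + 1
    return numbers
```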
=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without otherwise altering its behavior.
=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set perl command line options used for running the test
script(s). The default value is C<-w>.
141 =item B<Skipping tests>
143 If the standard output line contains the substring C< # Skip> (with
144 variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
145 counted as a skipped test. If the whole testscript succeeds, the
146 count of skipped tests is included in the generated output.
147 C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
150 ok 23 # skip Insufficient flogiston pressure.
152 Similarly, one can include a similar explanation in a C<1..0> line
153 emitted if the test script is skipped completely:
155 1..0 # Skipped: no leverage found
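A sketch of the reason extraction described above (Python for illustration; the function name is made up):

```python
import re

def skip_reason(line):
    """Pull out the reason text following ' # Skip' -- any spacing and
    case, optionally suffixed as in '# Skipped:' -- or None if absent."""
    m = re.search(r'#\s*[Ss]kip\S*\s+(.*)', line)
    return m.group(1) if m else None
```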
=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
afterwards is the thing that has to be done before this test will
succeed.

    not ok 13 # TODO harness the power of the atom

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list. They are
B<not> expected to succeed. Should a todo test begin succeeding,
Test::Harness will report it as a bonus. This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.
=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately. In that case the test script prints the magic words

    Bail out!

to standard output. Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.
=item B<Comments>

Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them.

    ok 1
    # Life is good, the sun is shining, RAM is cheap.
    not ok 2
    # got 'Bush' expected 'Gore'
=item B<Anything else>

Any other output Test::Harness sees, it will silently ignore B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
test script, please use a comment.

=back
=head2 Failure

It will happen: your tests will fail. After you mop up your ego, you
can begin examining the summary report:

  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  t/waterloo..........dubious
          Test returned status 3 (wstat 768, 0x300)
  DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
          Failed 10/20 tests, 50.00% okay
  Failed Test  Stat Wstat Total Fail  Failed  List of Failed
  -----------------------------------------------------------------------
  t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
  Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
Everything passed but t/waterloo.t. It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.
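The two percentages in the report's last line are simple ratios, sketched here in Python with the numbers from the example above:

```python
# Script-level: 4 of 5 test scripts passed ("Failed 1/5 test scripts").
good_scripts, total_scripts = 4, 5
script_pct = 100 * good_scripts / total_scripts    # "80.00% okay"

# Subtest-level: 34 of 44 individual tests passed ("10/44 subtests failed").
ok_subtests, max_subtests = 34, 44
subtest_pct = 100 * ok_subtests / max_subtests     # "77.27% okay"
```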
The columns in the summary report mean:
=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test, i.e. the raw C<$?> value.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed. Successive failures may be
abbreviated (ie. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back
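The abbreviation of successive failures can be sketched as follows (Python for illustration; the module does this in its canonfailed() helper):

```python
def abbreviate(failed):
    """Collapse a sorted list of failed test numbers into the compact
    form used by the report, e.g. [15, 16, 17, 18, 19, 20] -> '15-20'."""
    runs = []
    start = prev = failed[0]
    for n in failed[1:] + [None]:      # None flushes the final run
        if n is not None and n == prev + 1:
            prev = n
            continue
        runs.append(str(start) if start == prev else f"{start}-{prev}")
        if n is not None:
            start = prev = n
    return " ".join(runs)
```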
=head2 Functions

Test::Harness currently only has one function; here it is.

  my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok, false otherwise.
This is just _run_all_tests() plus _show_results().

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    die q{Assert '$ok xor keys %$failedtests' failed!}
        unless $ok xor keys %$failedtests;
  my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
  my @files = _globdir $dir;

Returns all the files in a directory. This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

    my @f = readdir DIRH;
=item B<_run_all_tests>

  my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report). $total is a hash ref summary of all the tests run. Its keys
are:

  bonus         Number of individual todo tests unexpectedly passed
  max           Number of individual tests run
  ok            Number of individual tests passed
  sub_skipped   Number of individual tests skipped
  todo          Number of individual todo tests

  files         Number of test files run
  good          Number of test files passed
  bad           Number of test files failed
  tests         Number of test files originally given
  skipped       Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test run.

$failed is a hash ref of all the test scripts which failed. Each key
is the name of a test script, each value is another hash representing
how that script failed. Its keys are these:

  name        Name of the test which failed
  estat       Script's exit value
  wstat       Script's wait status
  max         Number of individual tests
  failed      Number which failed
  percent     Percentage of tests which failed
  canon       List of tests which failed (as string)

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy. I'm working on it.
                tests => scalar @tests,

    # pass -I flags to children
    my $old5lib = $ENV{PERL5LIB};

    # VMS has a 255-byte limit on the length of %ENV entries, so
    # toss the ones that involve perl_root, the install location
        $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
        $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g;

        $new5lib = join($Config{path_sep}, @INC);

    local($ENV{'PERL5LIB'}) = $new5lib;

    # ("my VAR if COND" has undefined behaviour, so declare and
    # assign separately.)
    my @dir_files;
    @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;
    foreach (@tests) { # The same code in t/TEST
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $len    = length;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    my $width = $maxlen + 3 - $maxsuflen;
    foreach my $tfile (@tests) {
        my($leader, $ml) = _mk_leader($tfile, $width);

        my $fh = _open_test($tfile);

        # state of the current test.
            skip_reason => undef,

        my($seen_header, $tests_seen) = (0, 0);

            if( _parse_header($_, \%test, \%tot) ) {
                warn "Test header seen twice!\n" if $seen_header;

                warn "1..M can only appear at the beginning or end of tests\n"
                    if $tests_seen && $test{max} < $tests_seen;
            }
            elsif( _parse_test_line($_, \%test, \%tot) ) {

        my($estatus, $wstatus) = _close_fh($fh);

        my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;
            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                   $estatus, $wstatus);
            $failedtests{$tfile}{name} = $tfile;

            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                    if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                    if $test{bonus};
                print "$test{ml}ok, ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_reason}) {
                print "skipped: $test{skip_reason}\n";

                print "skipped test on this platform\n";

            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};
            }
            if (@{$test{failed}}) {
                my ($txt, $canon) = canonfailed($test{max}, $test{skipped},
                                                @{$test{failed}});
                print "$test{ml}$txt";
                $failedtests{$tfile} = { canon   => $canon,
                                         failed  => scalar @{$test{failed}},
                                         percent => 100*(scalar @{$test{failed}})/$test{max},

                print "Don't know which tests failed: got $test{ok} ok, ".
                      "expected $test{max}\n";
                $failedtests{$tfile} = { canon => '??',

            } elsif ($test{'next'} == 0) {
                print "FAILED before any test output arrived\n";
                $failedtests{$tfile} = { canon => '??',

        $tot{sub_skipped} += $test{skipped};
        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }

    $tot{bench} = timediff(new Benchmark, $t_start);

    if (defined $old5lib) {
        $ENV{PERL5LIB} = $old5lib;
    }
    else {
        delete $ENV{PERL5LIB};
    }

    return(\%tot, \%failedtests);
  my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such). $ml may be empty if Test::Harness doesn't think you're
on a TTY.

The $width is the width of the "yada/blah.." string.
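The leader construction itself is just dot padding, sketched here in Python for illustration (name invented; the real code also handles a VMS path quirk):

```python
def mk_leader(test_name, width):
    """Build the 't/foo........' leader: the test name padded with dots
    out to the given width (a sketch of the formatting only)."""
    return test_name + "." * (width - len(test_name))
```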
    my($te, $width) = @_;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank  = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml     = "";
    $ml = "\r$blank\r$leader"
        if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if (_all_ok($tot)) {
        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}){
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},
                             $percent_ok;

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};

        $bonusmsg =~ s/^,\s*//;
        print "$bonusmsg.\n" if $bonusmsg;
        die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
            "$subpct\n";
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
    my($line, $test, $tot) = @_;

    print $line if $Verbose;

    # 1..10 todo 4 7 10;
    if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
        $test->{max} = $1;
        for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }

        $tot->{max} += $test->{max};
    }
    # 1..10
    # 1..0 # skip Why? Because I said so!
    elsif ($line =~ /^1\.\.([0-9]+)
                     (\s*\#\s*[Ss]kip\S*\s* (.+))?
                    /x)
    {
        $test->{max} = $1;
        $tot->{max} += $test->{max};

        $test->{'next'} = 1 unless $test->{'next'};
        $test->{skip_reason} = $3 if not $test->{max} and defined $3;
    my $s = _set_switches($test);

    # XXX This is WAY too core specific!
    my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
                ? "./perl -I../lib ../utils/perlcc $test "
                  . "-r 2>> ./compilelog |"
                : "$^X $s $test|";
    $cmd = "MCR $cmd" if $^O eq 'VMS';

    if( open(PERL, $cmd) ) {
        return \*PERL;
    }
    else {
        print "can't run $test. $!\n";
        return 0;
    }
sub _parse_test_line {
    my($line, $test, $tot) = @_;

    if ($line =~ /^(not\s+)?ok\b/i) {
        $test->{'next'} ||= 1;
        my $this = $test->{'next'};

        if ($line =~ /^(not )?ok\s*(\d*)[^#]*(\s*#.*)?/) {
            my($not, $tnum, $extra) = ($1, $2, $3);

            $this = $tnum if $tnum;

            # ("my VAR if COND" is undefined behaviour, so declare
            # first and assign conditionally.)
            my($type, $reason);
            ($type, $reason) = $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/
                if defined $extra;

            my($istodo, $isskip);
            if( defined $type ) {
                $istodo = 1 if $type =~ /TODO/;
                $isskip = 1 if $type =~ /skip/i;
            }

            $test->{todo}{$this} = 1 if $istodo;

            $tot->{todo}++ if $test->{todo}{$this};

                print "$test->{ml}NOK $this" if $test->{ml};
                if (!$test->{todo}{$this}) {
                    push @{$test->{failed}}, $this;
                }

                print "$test->{ml}ok $this/$test->{max}" if $test->{ml};

                $test->{skipped}++ if $isskip;

                $reason = '[no reason given]'
                    if $isskip and not defined $reason;
                if (defined $reason and defined $test->{skip_reason}) {
                    # print "was: '$skip_reason' new '$reason'\n";
                    $test->{skip_reason} = 'various reasons'
                        if $test->{skip_reason} ne $reason;
                } elsif (defined $reason) {
                    $test->{skip_reason} = $reason;
                }

                $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
        }
        elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
            $this = $1 if $1 > 0;
            print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
        }

        # an ok or not ok not matching the 3 cases above...
        # just ignore it for compatibility with TEST

        if ($this > $test->{'next'}) {
            print "Test output counter mismatch [test $this]\n";
            push @{$test->{failed}}, $test->{'next'}..$this-1;
        }
        elsif ($this < $test->{'next'}) {
            # we have seen more "ok" lines than the number suggests
            print "Confused test output: test $this answered after ".
                  "test ", $test->{'next'}-1, "\n";
            $test->{'next'} = $this;
        }

        $test->{'next'} = $this + 1;
    }
    elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
        die "FAILED--Further testing stopped" .
            ($1 ? ": $1\n" : ".\n");
    }
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
        if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
# VMS has some subtle nastiness with closing the test files.

    close($fh); # must close to reap child resource values

    my $wstatus = $Ignore_Exitcode ? 0 : $?;    # Can trust $? ?
    my $estatus;
    $estatus = ($^O eq 'VMS'
                  ? eval 'use vmsish "status"; $estatus = $?'
                  : $wstatus >> 8);

    return($estatus, $wstatus);
# Set up the command-line switches to run perl as.

    open(TEST, $test) or print "can't open $test. $!\n";
    my $first = <TEST>;
    my $s = $Switches;
    $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
        if exists $ENV{'HARNESS_PERL_SWITCHES'};
    $s .= join " ", qq[ "-$1"], map {qq["-I$_"]} @INC
        if $first =~ /^#!.*\bperl.*-\w*([tT])/;
    close(TEST) or print "can't close $test. $!\n";
# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus, $wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        }
        else {
            print "\ttest program seems to have generated a core\n";
        }
    }

        if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
            print "\tafter all the subtests completed successfully\n";
            $percent = 0;
            $failed  = 0;       # But we do not set $canon!
        }
        else {
            push @{$test->{failed}}, $test->{'next'}..$test->{max};
            $failed = @{$test->{failed}};
            (my $txt, $canon) = canonfailed($test->{max}, $test->{skipped},
                                            @{$test->{failed}});
            $percent = 100*(scalar @{$test->{failed}})/$test->{max};
        }

    return { canon => $canon,   max   => $test->{max} || '??',
             estat => $estatus, wstat => $wstatus,
sub _garbled_output {
    my($gibberish) = shift;
    warn "Confusing test output: '$gibberish'\n";
}
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail Failed ";
    my $list_str   = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . " @>> @>>>> @>>>> @>>> ^##.##% "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . ' $Curtest->{wstat}, $Curtest->{max},'
              . ' $Curtest->{failed}, $Curtest->{percent},'
              . ' $Curtest->{canon}'
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'

    return($fmt_top, $fmt);
my $tried_devel_corestack;

sub corestatus {
    my($st) = @_;

    eval {require 'wait.ph'};
    my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;

    eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
        unless $tried_devel_corestack++;

    return $ret;
}
sub canonfailed ($@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;

        my $last = $min = shift @failed;

        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                push @canon, "$min-$last";

        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;

        push @result, "FAILED test $last\n";

    push @result, "\tFailed $failed/$max tests, ";
    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    my $ender   = 's' x ($skipped > 1);
    my $good    = $max - $failed - $skipped;
    my $goodper = sprintf("%.2f",100*($good/$max));
    push @result, " (-$skipped skipped test$ender: $good okay, ".
                  "$goodper%)";

    my $txt = join "", @result;
=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.
=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped%s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back
=head1 ENVIRONMENT

=over 4

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces harness to behave as though STDOUT were
not a console. You may need to set this if you don't want harness to
output more frequent progress messages using carriage returns. Some
consoles may not handle carriage returns properly (which results in a
somewhat messy output).

=item C<HARNESS_COMPILE_TEST>

When true it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

  LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the current
directory at the moment runtests() was called. Putting an absolute path
into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_COLUMNS>

This value will be used as the width of the terminal. If it is not
set then it will default to C<COLUMNS>. If this is not set, it will
default to 80. Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.
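The width resolution, including the one-column safety margin applied at startup (some shells have trouble with a completely full line of text), can be sketched as follows (Python for illustration; the function name is invented):

```python
def report_width(env):
    """Resolve the width used for the summary the way the harness does:
    HARNESS_COLUMNS, then COLUMNS, then 80 -- minus one, because some
    shells mishandle a completely full line of text."""
    cols = int(env.get("HARNESS_COLUMNS") or env.get("COLUMNS") or 80)
    return cols - 1
```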
=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests. This allows
the tests to determine if they are being executed through the harness
or by any other means.

=back
=head1 EXAMPLE

Here's how Test::Harness tests itself:

  $ cd ~/src/devel/Test-Harness
  $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
  Using /home/schwern/src/devel/Test-Harness/blib
  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  All tests successful.
  Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.
=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years.

Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
=head1 TODO

Provide a way of running tests quietly (ie. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which, rather than printing its output, returns raw data
on the state of the tests.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure out a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.

Merge back into bleadperl.

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

Keep whittling away at _run_all_tests().

Clean up how the summary is printed. Get rid of those damned formats.
=head1 BUGS

Test::Harness uses $^X to determine the perl binary to run the tests
with. Test scripts running via the shebang (C<#!>) line may not be
portable because $^X is not consistent for shebang scripts across
platforms. This is no problem when Test::Harness is run with an
absolute path to the perl binary or when $^X can be found in the path.

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.