# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.14.2.13 2002/01/07 22:34:32 schwern Exp $

use Test::Harness::Straps;
use Test::Harness::Assert;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches
            @ISA @EXPORT @EXPORT_OK

# Backwards compatibility for exportable variable names.
*switches = \$Switches;

$Have_Devel_Corestack = 0;

$ENV{HARNESS_ACTIVE} = 1;

delete $ENV{HARNESS_ACTIVE};

# Some experimental versions of OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

my $Running_In_Perl_Tree = 0;
++$Running_In_Perl_Tree if -d "../t" and -f "../sv.c";

my $Strap = Test::Harness::Straps->new;

@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;                # Some shells have trouble with a full line of text.
Test::Harness - run perl standard test scripts with statistics

    runtests(@test_files);
B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> lines.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.
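For illustration, here is a minimal stand-alone script that produces output of the shape described above.  It is a sketch in plain sh rather than Perl, since only the output lines matter for this protocol:

```shell
#!/bin/sh
# Emit the "1..M" plan first, then one "ok N" / "not ok N" line per test.
echo "1..3"
echo "ok 1"
echo "ok 2"
echo "not ok 3"
```

Run under runtests(), output like this would be reported as one failed test out of three.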
=head2 The test script output

The following explains how Test::Harness interprets the output of your
test script.

=over 4

=item B<'1..M'>

This header tells how many tests there will be.  It should be the
first line output by your test program (but it is okay if it is
preceded by comments).

In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted (but not
encouraged) for the 1..M header to appear as the B<last> line output
by your test (again, it can be followed by further comments).  But we
strongly encourage you to put it first.

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.
=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored and
passed straight through, so the user will see it.  Lines written to
standard output containing C</^(not\s+)?ok\b/> are interpreted as
feedback for runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.
=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  It is tolerated if the test numbers after 'ok' are
omitted.  In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again.  A script that
omits some numbers might, for example, end its run with a summary such
as:

    Failed 3/6 tests, 50.00% okay
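To make the unnumbered case concrete, here is a sketch of a script (again in plain sh, showing only the output protocol) that omits test numbers entirely; the harness's internal counter treats the lines as tests 1 through 3:

```shell
#!/bin/sh
# Unnumbered "ok"/"not ok" lines: Test::Harness counts them itself,
# so these lines are tests 1 (ok), 2 (not ok) and 3 (ok).
echo "1..3"
echo "ok"
echo "not ok"
echo "ok"
```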
Anything after the test number but before the # is considered to be
the name of the test.

    ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.
=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

    ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include an explanation in a C<1..0> line
emitted if the test script is skipped completely:

    1..0 # Skipped: no leverage found
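For example, a script might decide at startup that it cannot run at all.  This sketch (plain sh, with a deliberately non-existent prerequisite path standing in for a real dependency check) emits the C<1..0> form:

```shell
#!/bin/sh
# Whole-script skip: if a prerequisite is missing, emit a "1..0" plan
# with a reason instead of running any tests.
if [ ! -e /no/such/prerequisite ]; then
    echo "1..0 # Skipped: prerequisite not found"
    exit 0
fi
echo "1..1"
echo "ok 1"
```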
=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
afterwards is the thing that has to be done before this test will
succeed.

    not ok 13 # TODO harness the power of the atom

Alternatively, you can specify a list of what tests are todo as part
of the test header.  This only works if the header appears at the
beginning of the test.

This style is B<deprecated>.
These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list.  They are
B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.  This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.
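Put together, a script with one todo test might look like this sketch (plain sh, output lines only):

```shell
#!/bin/sh
# Test 2 is a todo test: it is expected to fail, so "not ok 2" does
# not count against the script.  If it ever printed "ok 2 # TODO ...",
# Test::Harness would report it as a bonus.
echo "1..2"
echo "ok 1"
echo "not ok 2 # TODO harness the power of the atom"
```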
=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately.  In that case the test script prints the magic words

    Bail out!

to standard output.  Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.
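A sketch of a script that gives up after its first test (plain sh; the text after C<Bail out!> becomes the displayed reason):

```shell
#!/bin/sh
# After "Bail out!" Test::Harness stops the entire run, not just this
# script, and shows the rest of the line as the reason.
echo "1..4"
echo "ok 1"
echo "Bail out! Couldn't connect to the test database."
```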
=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

    # Life is good, the sun is shining, RAM is cheap.

    # got 'Bush' expected 'Gore'
=item B<Anything else>

Test::Harness will silently ignore any other output it sees, B<BUT WE
PLAN TO CHANGE THIS!>  If you wish to place additional output in your
test script, please use a comment.
=head2 Taint mode

Test::Harness will honor the C<-T> in the #! line on your test files.
So if you begin a test with:

    #!perl -T

the test will be run with taint mode on.
=head2 Configuration variables

These variables can be used to configure the behavior of
Test::Harness.  They are exported on request.

=over 4

=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without altering the behavior otherwise.

=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set perl command line options used for running the test
script(s).  The default value is C<-w>.
It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    t/waterloo..........dubious
            Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
            Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.
The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test: the raw C<$?> value, which combines the
exit status with any signal the test died from.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).
Test::Harness currently only has one function; here it is.

    my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed, along with a summary report and
how long it all took.

It returns true if everything was ok, false otherwise.

This is just _run_all_tests() plus _show_results().
    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});
    my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;

    my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

    my @f = readdir DIRH;
=item B<_run_all_tests>

    my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report).  $total is a hash ref summary of all the tests run.  Its keys
and values are this:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test run.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script, each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string)

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.
        tests   => scalar @tests,

    local($ENV{'PERL5LIB'}) = $Strap->_INC2PERL5LIB;

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        $Strap->_reset_file_state;

        my($leader, $ml) = _mk_leader($tfile, $width);

        my $fh = _open_test($tfile);

        # state of the current test.
            skip_reason => undef,

        my($seen_header, $tests_seen) = (0,0);

            if( _parse_header($_, \%test, \%tot) ) {
                warn "Test header seen twice!\n" if $seen_header;

                warn "1..M can only appear at the beginning or end of tests\n"
                  if $tests_seen && $test{max} < $tests_seen;
            }
            elsif( _parse_test_line($_, \%test, \%tot) ) {

        my($estatus, $wstatus) = _close_fh($fh);

        my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;

            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
            $failedtests{$tfile}{name} = $tfile;

            if ($test{max} and $test{skipped} + $test{bonus}) {
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                print "$test{ml}ok, ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_reason}) {
                print "skipped: $test{skip_reason}\n";
                print "skipped test on this platform\n";

            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};

            if (@{$test{failed}}) {
                my ($txt, $canon) = canonfailed($test{max},$test{skipped},
                print "$test{ml}$txt";
                $failedtests{$tfile} = { canon   => $canon,
                                         failed  => scalar @{$test{failed}},
                                         percent => 100*(scalar @{$test{failed}})/$test{max},

                print "Don't know which tests failed: got $test{ok} ok, ".
                      "expected $test{max}\n";
                $failedtests{$tfile} = { canon => '??',
            } elsif ($test{'next'} == 0) {
                print "FAILED before any test output arrived\n";
                $failedtests{$tfile} = { canon => '??',

        $tot{sub_skipped} += $test{skipped};

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;

    $tot{bench} = timediff(new Benchmark, $t_start);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
    my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such).  $ml may be empty if Test::Harness doesn't think you're
running on a terminal.

$width is the width of the "yada/blah.." string.

    my($te, $width) = @_;

    $te =~ s/^.*\.t\./\[.t./s;

    $te =~ s,\\,/,g if $^O eq 'MSWin32';
    $te =~ s,^\.\./,/, if $Running_In_Perl_Tree;
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
=item B<_leader_width>

    my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test file name.

        my $suf    = /\.(\w+)$/ ? $1 : '';
        $len -= 2 if $Running_In_Perl_Tree and m{^\.\.[/\\]};
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;

    # we want three dots between the test name and the "ok" for
    # typical lengths, and just two dots if longer than 30 characters
    $maxlen -= $maxsuflen;
    return $maxlen + ($maxlen >= 30 ? 2 : 3);
    my($tot, $failedtests) = @_;

    my $bonusmsg = _bonusmsg($tot);

        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}){
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";

        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};

        $bonusmsg =~ s/^,\s*//;
        print "$bonusmsg.\n" if $bonusmsg;
        die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
    my($line, $test, $tot) = @_;

    if( $Strap->_is_header($line) ) {
        $test->{max} = $Strap->{max};
        for ( keys %{$Strap->{todo}} ) { $test->{todo}{$_} = 1; }

        $test->{skip_reason} = $Strap->{skip_all}
          if not $test->{max} and defined $Strap->{skip_all};

        $test->{'next'} = 1 unless $test->{'next'};

        $tot->{max} += $test->{max};

    my $s = _set_switches($test);

    # XXX This is WAY too core specific!
    my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
                ? "./perl -I../lib ../utils/perlcc $test "
                  . "-r 2>> ./compilelog |"
    $cmd = "MCR $cmd" if $^O eq 'VMS';

    if( open(PERL, $cmd) ) {
        print "can't run $test.  $!\n";
sub _parse_test_line {
    my($line, $test, $tot) = @_;

    if ( $Strap->_is_test($line, \%result) ) {
        $test->{'next'} ||= 1;
        my $this = $test->{'next'};

        my($not, $tnum) = (!$result{ok}, $result{number});

        $this = $tnum if $tnum;

        my($type, $reason) = ($result{type}, $result{reason});

        my($istodo, $isskip);
        if( defined $type ) {
            $istodo = 1 if $type eq 'todo';
            $isskip = 1 if $type eq 'skip';

        $test->{todo}{$this} = 1 if $istodo;

        $tot->{todo}++ if $test->{todo}{$this};

            print "$test->{ml}NOK $this" if $test->{ml};
            if (!$test->{todo}{$this}) {
                push @{$test->{failed}}, $this;

            print "$test->{ml}ok $this/$test->{max}" if $test->{ml};

            $test->{skipped}++ if $isskip;

            $reason = '[no reason given]'
              if $isskip and not defined $reason;
            if (defined $reason and defined $test->{skip_reason}) {
                # print "was: '$skip_reason' new '$reason'\n";
                $test->{skip_reason} = 'various reasons'
                  if $test->{skip_reason} ne $reason;
            } elsif (defined $reason) {
                $test->{skip_reason} = $reason;

            $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};

        if ($this > $test->{'next'}) {
            print "Test output counter mismatch [test $this]\n";
            push @{$test->{failed}}, $test->{'next'}..$this-1;
        }
        elsif ($this < $test->{'next'}) {
            # we have seen more "ok" lines than the number suggests
            print "Confused test output: test $this answered after ".
                  "test ", $test->{'next'}-1, "\n";
            $test->{'next'} = $this;
        }
        $test->{'next'} = $this + 1;

    if( $Strap->_is_bail_out($line, \$bail_reason) ) {   # bail out!
        die "FAILED--Further testing stopped" .
            ($bail_reason ? ": $bail_reason\n" : ".\n");
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')

# VMS has some subtle nastiness with closing the test files.
    close($fh);   # must close to reap child resource values

    my $wstatus = $Ignore_Exitcode ? 0 : $?;    # Can trust $? ?

    $estatus = ($^O eq 'VMS'
                  ? eval 'use vmsish "status"; $estatus = $?'

    return($estatus, $wstatus);
# Set up the command-line switches to run perl as.
    $s .= $Strap->_switches($test);

# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",

    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) {   # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
            print "\ttest program seems to have generated a core\n";

    if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
        print "\tafter all the subtests completed successfully\n";
        $failed = 0;    # But we do not set $canon!

        push @{$test->{failed}}, $test->{'next'}..$test->{max};
        $failed = @{$test->{failed}};
        (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},
                                        @{$test->{failed}});
        $percent = 100*(scalar @{$test->{failed}})/$test->{max};

    return { canon => $canon,  max => $test->{max} || '??',
             estat => $estatus, wstat => $wstatus,

sub _garbled_output {
    my($gibberish) = shift;
    warn "Confusing test output: '$gibberish'\n";
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str   = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . " @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'

    return($fmt_top, $fmt);
    my $tried_devel_corestack;

    eval { require 'wait.ph' };
    my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;

    eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
      unless $tried_devel_corestack++;
sub canonfailed ($@) {
    my($max,$skipped,@failed) = @_;

    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;

    my $last = $min = shift @failed;

    for (@failed, $failed[-1]) {    # don't forget the last one
        if ($_ > $last+1 || $_ == $last) {
            push @canon, "$min-$last";

    push @result, "FAILED tests @canon\n";
    $canon = join ' ', @canon;

    push @result, "FAILED test $last\n";

    push @result, "\tFailed $failed/$max tests, ";
    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    my $ender = 's' x ($skipped > 1);
    my $good = $max - $failed - $skipped;
    my $goodper = sprintf("%.2f",100*($good/$max));
    push @result, " (-$skipped skipped test$ender: $good okay, ".

    my $txt = join "", @result;
C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.

=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped%s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.
=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console.  You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns.  Some consoles may not handle carriage returns properly
(which results in a somewhat messy output).

=item C<HARNESS_COMPILE_TEST>

When true it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory.

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called.  Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test.  For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_COLUMNS>

This value will be used as the width of the terminal.  If it is not
set then it will default to C<COLUMNS>.  If this is not set, it will
default to 80.  Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests.  This allows
the tests to determine if they are being executed through the harness
or by any other means.
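A test script can branch on this variable; a minimal sketch in plain sh:

```shell
#!/bin/sh
# HARNESS_ACTIVE is set while Test::Harness runs the script, so a test
# can suppress output that only makes sense when run by hand.
if [ -n "$HARNESS_ACTIVE" ]; then
    echo "running under the harness"
else
    echo "running standalone"
fi
```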
Here's how Test::Harness tests itself:

    $ cd ~/src/devel/Test-Harness
    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
        $verbose=0; runtests @ARGV;' t/*.t
    Using /home/schwern/src/devel/Test-Harness/blib
    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.
Either Tim Bunce or Andreas Koenig, we don't know.  What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages.  Numerous anonymous contributors
exist.  Andreas Koenig held the torch for many years.

Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests.  This will probably take the form of a version
of runtests() which, rather than printing its output, returns raw data
on the state of the tests.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure out a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.

Merge back into bleadperl.

Deal with VMS's "not \nok 4\n" mistake.

Add an option for coverage analysis.

Keep whittling away at _run_all_tests().

Clean up how the summary is printed.  Get rid of those damned formats.
Test::Harness uses $^X to determine the perl binary to run the tests
with.  Test scripts running via the shebang (C<#!>) line may not be
portable because $^X is not consistent for shebang scripts across
platforms.  This is no problem when Test::Harness is run with an
absolute path to the perl binary, or when $^X can be found in the path.

HARNESS_COMPILE_TEST currently assumes it is run from the Perl source
directory.