1 # -*- Mode: cperl; cperl-indent-level: 4 -*-
2 # $Id: Harness.pm,v 1.11 2001/05/23 18:24:41 schwern Exp $
12 use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
13 $Columns $verbose $switches
14 @ISA @EXPORT @EXPORT_OK
17 # Backwards compatibility for exportable variable names.
19 *switches = \$Switches;
21 $Have_Devel_Corestack = 0;
25 $ENV{HARNESS_ACTIVE} = 1;
27 # Some experimental versions of the OS/2 build have broken $?
28 my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
30 my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
34 @EXPORT = qw(&runtests);
35 @EXPORT_OK = qw($verbose $switches);
39 $Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
40 $Columns--; # Some shells have trouble with a full line of text.
45 Test::Harness - run perl standard test scripts with statistics
51 runtests(@test_files);
55 B<STOP!> If all you want to do is write a test script, consider using
56 Test::Simple. Otherwise, read on.
58 (By using the Test module, you can write test scripts without
59 knowing the exact output this module expects. However, if you need to
60 know the specifics, read on!)
62 Perl test scripts print to standard output C<"ok N"> for each single
63 test, where C<N> is an increasing sequence of integers. The first line
64 output by a standard test script is C<"1..M"> with C<M> being the
65 number of tests that should be run within the test
66 script. Test::Harness::runtests(@tests) runs all the test scripts
67 named as arguments and checks standard output for the expected
70 After all tests have been performed, runtests() prints some
71 performance statistics that are computed by the Benchmark module.
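For illustration only, here is a minimal test script of the kind
runtests() expects (the file name F<t/example.t> is made up):

    #!/usr/bin/perl -w
    # t/example.t -- print a plan, then one "ok"/"not ok" line per test
    print "1..2\n";
    print "ok 1\n";
    print( (1 + 1 == 2) ? "ok 2\n" : "not ok 2\n" );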
73 =head2 The test script output
75 The following explains how Test::Harness interprets the output of your
82 This header tells how many tests there will be. It should be the
83 first line output by your test program (but it's okay if it's preceded
86 In certain instances, you may not know how many tests you will
87 ultimately be running. In this case, it is permitted (but not
88 encouraged) for the 1..M header to appear as the B<last> line output
89 by your test (again, it can be followed by further comments). But we
90 strongly encourage you to put it first.
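A sketch of a script that only knows its test count at the end (the
test data here is made up; putting the plan first is still preferred):

    my $tests = 0;
    foreach my $word (qw(alpha beta gamma)) {
        $tests++;
        print length($word) > 0 ? "ok $tests\n" : "not ok $tests\n";
    }
    print "1..$tests\n";    # the 1..M plan emitted as the last line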
92 Under B<no> circumstances should 1..M appear in the middle of your
93 output or more than once.
96 =item B<'ok', 'not ok'. Ok?>
98 Any output from the test script to standard error is ignored and
99 bypassed, and will thus be seen by the user. Lines written to standard
100 output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
101 runtests(). All other lines are discarded.
103 C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
106 =item B<test numbers>
108 Perl normally expects the 'ok' or 'not ok' to be followed by a test
109 number. It is acceptable for the test numbers after 'ok' to be
110 omitted. In this case Test::Harness temporarily maintains its own
111 counter until the script supplies test numbers again. So the following
126 Failed 3/6 tests, 50.00% okay
129 =item B<$Test::Harness::verbose>
131 The global variable $Test::Harness::verbose is exportable and can be
132 used to let runtests() display the standard output of the script
133 without altering the behavior otherwise.
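A minimal sketch of running a suite verbosely (the F<t/*.t> glob is
only an example):

    use Test::Harness qw(&runtests $verbose);
    $verbose = 1;               # echo each script's standard output
    runtests(glob 't/*.t');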
135 =item B<$Test::Harness::switches>
137 The global variable $Test::Harness::switches is exportable and can be
138 used to set perl command line options used for running the test
139 script(s). The default value is C<-w>.
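For example, to add a switch to the default (the C<-Ilib> flag is only
an illustration; assigning to C<$switches> replaces the default, so
repeat C<-w> if you still want it):

    use Test::Harness qw(&runtests $switches);
    $switches = '-w -Ilib';
    runtests(glob 't/*.t');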
141 =item B<Skipping tests>
143 If the standard output line contains the substring C< # Skip> (with
144 variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
145 counted as a skipped test. If the whole test script succeeds, the
146 count of skipped tests is included in the generated output.
147 C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
150 ok 23 # skip Insufficient flogiston pressure.
152 Similarly, an explanation can be included in a C<1..0> line
153 emitted if the test script is skipped completely:
155 1..0 # Skipped: no leverage found
159 If the standard output line contains the substring C< # TODO> after
160 C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
161 afterwards is the thing that has to be done before this test will
164 not ok 13 # TODO harness the power of the atom
166 These tests represent a feature to be implemented or a bug to be fixed
167 and act as something of an executable "thing to do" list. They are
168 B<not> expected to succeed. Should a todo test begin succeeding,
169 Test::Harness will report it as a bonus. This indicates that whatever
170 you were supposed to do has been done and you should promote this to a
175 As an emergency measure, a test script can decide that further tests
176 are useless (e.g. missing dependencies) and testing should stop
177 immediately. In that case the test script prints the magic words
181 to standard output. Any message after these words will be displayed by
182 C<Test::Harness> as the reason why testing is stopped.
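For example, a test script might abort the whole run like this (the
DBI module is just a stand-in for any missing prerequisite):

    unless (eval { require DBI; 1 }) {
        print "Bail out! DBI is not installed.\n";
        exit;
    }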
186 Additional comments may be put into the testing output on their own
187 lines. Comment lines should begin with a '#'; Test::Harness will
191 # Life is good, the sun is shining, RAM is cheap.
193 # got 'Bush' expected 'Gore'
195 =item B<Anything else>
197 Any other output Test::Harness sees will be silently ignored B<BUT WE
198 PLAN TO CHANGE THIS!> If you wish to place additional output in your
199 test script, please use a comment.
206 It will happen: your tests will fail. After you mop up your ego, you
207 can begin examining the summary report:
209 t/base..............ok
210 t/nonumbers.........ok
211 t/ok................ok
212 t/test-harness......ok
213 t/waterloo..........dubious
214 Test returned status 3 (wstat 768, 0x300)
215 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
216 Failed 10/20 tests, 50.00% okay
217 Failed Test Stat Wstat Total Fail Failed List of Failed
218 -----------------------------------------------------------------------
219 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
220 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
222 Everything passed but t/waterloo.t. It failed 10 of 20 tests and
223 exited with non-zero status, indicating that something dubious happened.
225 The columns in the summary report mean:
231 The test file which failed.
235 If the test exited with non-zero, this is its exit status.
239 The wait status of the test, i.e. the raw C<$?> value, which encodes the exit status together with any signal that terminated the test.
243 Total number of tests expected to run.
247 Number which failed, either from "not ok" or because they never ran.
251 Percentage of the total tests which failed.
253 =item B<List of Failed>
255 A list of the tests which failed. Successive failures may be
256 abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
264 Test::Harness currently has only one function; here it is.
270 my $allok = runtests(@test_files);
272 This runs all the given @test_files and divines whether they passed
273 or failed based on their output to STDOUT (details above). It prints
274 out each individual test which failed along with a summary report and
275 how long it all took.
277 It returns true if everything was ok, false otherwise.
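A minimal usage sketch (note that, as described under L</DIAGNOSTICS>,
the current implementation will die() rather than return false when
scripts fail, so wrap the call in eval if you want to recover):

    use Test::Harness;

    my $ok = eval { runtests(glob 't/*.t') };
    print $ok ? "all tests passed\n" : "some tests failed\n";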
280 This is just _run_all_tests() plus _show_results()
289 my($tot, $failedtests) = _run_all_tests(@tests);
290 _show_results($tot, $failedtests);
292 my $ok = ($tot->{bad} == 0 && $tot->{max});
294 die q{Assert '$ok xor keys %$failedtests' failed!}
295 unless $ok xor keys %$failedtests;
304 my @files = _globdir $dir;
306 Returns all the files in a directory. This is shorthand for backwards
307 compatibility on systems where glob() doesn't work right.
313 my @f = readdir DIRH;
319 =item B<_run_all_tests>
321 my($total, $failed) = _run_all_tests(@test_files);
323 Runs all the given @test_files (as runtests()) but does it quietly (no
324 report). $total is a hash ref summary of all the tests run. Its keys
327 bonus Number of individual todo tests unexpectedly passed
328 max Number of individual tests run
329 ok Number of individual tests passed
330 sub_skipped Number of individual tests skipped
332 files Number of test files run
333 good Number of test files passed
334 bad Number of test files failed
335 tests Number of test files originally given
336 skipped Number of test files skipped
338 If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
341 $failed is a hash ref of all the test scripts which failed. Each key
342 is the name of a test script, each value is another hash representing
343 how that script failed. Its keys are these:
345 name Name of the test which failed
346 estat Script's exit value
347 wstat Script's wait status
348 max Number of individual tests
349 failed Number which failed
350 percent Percentage of tests which failed
351 canon List of tests which failed (as string).
353 Needless to say, $failed should be empty if everything passed.
355 B<NOTE> Currently this function is still noisy. I'm working on it.
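A rough sketch of how the two return values might be consumed (these
are internal routines, so treat the interface as subject to change;
the F<t/*.t> glob is only an example):

    my @test_files = glob 't/*.t';
    my($total, $failed) = Test::Harness::_run_all_tests(@test_files);

    printf "%d/%d scripts passed, %d/%d subtests ok\n",
        $total->{good}, $total->{tests}, $total->{ok}, $total->{max};

    for my $script (sort keys %$failed) {
        print "$script failed: $failed->{$script}{canon}\n";
    }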
372 tests => scalar @tests,
378 # pass -I flags to children
379 my $old5lib = $ENV{PERL5LIB};
381 # VMS has a 255-byte limit on the length of %ENV entries, so
382 # toss the ones that involve perl_root, the install location
386 $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
387 $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g; # quote switches containing uppercase letters so DCL preserves their case
390 $new5lib = join($Config{path_sep}, @INC);
393 local($ENV{'PERL5LIB'}) = $new5lib;
395 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
396 my $t_start = new Benchmark;
400 foreach (@tests) { # The same code in t/TEST
401 my $suf = /\.(\w+)$/ ? $1 : '';
403 my $suflen = length $suf;
404 $maxlen = $len if $len > $maxlen;
405 $maxsuflen = $suflen if $suflen > $maxsuflen;
407 # + 3 : we want three dots between the test name and the "ok"
408 my $width = $maxlen + 3 - $maxsuflen;
409 foreach my $tfile (@tests) {
410 my($leader, $ml) = _mk_leader($tfile, $width);
413 my $fh = _open_test($tfile);
415 # state of the current test.
424 skip_reason => undef,
428 my($seen_header, $tests_seen) = (0,0);
430 if( _parse_header($_, \%test, \%tot) ) {
431 warn "Test header seen twice!\n" if $seen_header;
435 warn "1..M can only appear at the beginning or end of tests\n"
436 if $tests_seen && $test{max} < $tests_seen;
438 elsif( _parse_test_line($_, \%test, \%tot) ) {
444 my($estatus, $wstatus) = _close_fh($fh);
446 my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;
449 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
451 $failedtests{$tfile}{name} = $tfile;
454 if ($test{max} and $test{skipped} + $test{bonus}) {
456 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
458 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
460 print "$test{ml}ok, ".join(', ', @msg)."\n";
461 } elsif ($test{max}) {
462 print "$test{ml}ok\n";
463 } elsif (defined $test{skip_reason}) {
464 print "skipped: $test{skip_reason}\n";
467 print "skipped test on this platform\n";
474 if ($test{'next'} <= $test{max}) {
475 push @{$test{failed}}, $test{'next'}..$test{max};
477 if (@{$test{failed}}) {
478 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
480 print "$test{ml}$txt";
481 $failedtests{$tfile} = { canon => $canon,
483 failed => scalar @{$test{failed}},
485 percent => 100*(scalar @{$test{failed}})/$test{max},
490 print "Don't know which tests failed: got $test{ok} ok, ".
491 "expected $test{max}\n";
492 $failedtests{$tfile} = { canon => '??',
502 } elsif ($test{'next'} == 0) {
503 print "FAILED before any test output arrived\n";
505 $failedtests{$tfile} = { canon => '??',
516 $tot{sub_skipped} += $test{skipped};
518 if (defined $Files_In_Dir) {
519 my @new_dir_files = _globdir $Files_In_Dir;
520 if (@new_dir_files != @dir_files) {
522 @f{@new_dir_files} = (1) x @new_dir_files;
523 delete @f{@dir_files};
524 my @f = sort keys %f;
525 print "LEAKED FILES: @f\n";
526 @dir_files = @new_dir_files;
530 $tot{bench} = timediff(new Benchmark, $t_start);
533 if (defined $old5lib) {
534 $ENV{PERL5LIB} = $old5lib;
536 delete $ENV{PERL5LIB};
540 return(\%tot, \%failedtests);
545 my($leader, $ml) = _mk_leader($test_file, $width);
547 Generates the 't/foo........' $leader for the given $test_file as well
548 as a similar version which will overwrite the current line (by use of
549 \r and such). $ml may be empty if Test::Harness doesn't think you're
550 on a TTY. The width is the width of the "yada/blah..." string.
555 my ($te, $width) = @_;
557 chop($te); # XXX chomp?
559 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
560 my $blank = (' ' x 77);
561 my $leader = "$te" . '.' x ($width - length($te));
564 $ml = "\r$blank\r$leader"
565 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
567 return($leader, $ml);
572 my($tot, $failedtests) = @_;
575 my $bonusmsg = _bonusmsg($tot);
577 if ($tot->{bad} == 0 && $tot->{max}) {
578 print "All tests successful$bonusmsg.\n";
579 } elsif ($tot->{tests}==0){
580 die "FAILED--no tests were run for some reason.\n";
581 } elsif ($tot->{max} == 0) {
582 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
583 die "FAILED--$tot->{tests} test $blurb could be run, ".
584 "alas--no output ever seen\n";
586 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
587 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
588 $tot->{max} - $tot->{ok}, $tot->{max},
589 100*$tot->{ok}/$tot->{max};
591 my($fmt_top, $fmt) = _create_fmts($failedtests);
593 # Now write to formats
594 for my $script (sort keys %$failedtests) {
595 $Curtest = $failedtests->{$script};
599 $bonusmsg =~ s/^,\s*//;
600 print "$bonusmsg.\n" if $bonusmsg;
601 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
606 printf("Files=%d, Tests=%d, %s\n",
607 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
612 my($line, $test, $tot) = @_;
616 print $line if $Verbose;
619 if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
621 for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }
623 $tot->{max} += $test->{max};
629 # 1..0 # skip Why? Because I said so!
630 elsif ($line =~ /^1\.\.([0-9]+)
631 (\s*\#\s*[Ss]kip\S*\s* (.+))?
636 $tot->{max} += $test->{max};
638 $test->{'next'} = 1 unless $test->{'next'};
639 $test->{skip_reason} = $3 if not $test->{max} and defined $3;
654 my $s = _set_switches($test);
656 # XXX This is WAY too core specific!
657 my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
658 ? "./perl -I../lib ../utils/perlcc $test "
659 . "-r 2>> ./compilelog |"
661 $cmd = "MCR $cmd" if $^O eq 'VMS';
663 if( open(PERL, $cmd) ) {
667 print "can't run $test. $!\n";
679 sub _parse_test_line {
680 my($line, $test, $tot) = @_;
682 if ($line =~ /^(not\s+)?ok\b/i) {
683 my $this = $test->{'next'} || 1;
685 if ($line =~ /^(not )?ok\s*(\d*)(\s*#.*)?/) {
686 my($not, $tnum, $extra) = ($1, $2, $3);
688 $this = $tnum if $tnum;
690 my($type, $reason) = $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/
693 my($istodo, $isskip);
694 if( defined $type ) {
695 $istodo = $type =~ /TODO/;
696 $isskip = $type =~ /skip/i;
699 $test->{todo}{$tnum} = 1 if $istodo;
702 print "$test->{ml}NOK $this" if $test->{ml};
703 if (!$test->{todo}{$this}) {
704 push @{$test->{failed}}, $this;
711 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
714 $test->{skipped}++ if $isskip;
716 if (defined $reason and defined $test->{skip_reason}) {
717 # print "was: '$skip_reason' new '$reason'\n";
718 $test->{skip_reason} = 'various reasons'
719 if $test->{skip_reason} ne $reason;
720 } elsif (defined $reason) {
721 $test->{skip_reason} = $reason;
724 $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
728 elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
729 $this = $1 if $1 > 0;
730 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
735 # an ok or not ok not matching the 3 cases above...
736 # just ignore it for compatibility with TEST
740 if ($this > $test->{'next'}) {
741 # print "Test output counter mismatch [test $this]\n";
742 # no need to warn probably
743 push @{$test->{failed}}, $test->{'next'}..$this-1;
745 elsif ($this < $test->{'next'}) {
746 #we have seen more "ok" lines than the number suggests
747 print "Confused test output: test $this answered after ".
748 "test ", $test->{'next'}-1, "\n";
749 $test->{'next'} = $this;
751 $test->{'next'} = $this + 1;
754 elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
755 die "FAILED--Further testing stopped" .
756 ($1 ? ": $1\n" : ".\n");
765 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
766 " UNEXPECTEDLY SUCCEEDED)")
769 if ($tot->{skipped}) {
770 $bonusmsg .= ", $tot->{skipped} test"
771 . ($tot->{skipped} != 1 ? 's' : '');
772 if ($tot->{sub_skipped}) {
773 $bonusmsg .= " and $tot->{sub_skipped} subtest"
774 . ($tot->{sub_skipped} != 1 ? 's' : '');
776 $bonusmsg .= ' skipped';
778 elsif ($tot->{sub_skipped}) {
779 $bonusmsg .= ", $tot->{sub_skipped} subtest"
780 . ($tot->{sub_skipped} != 1 ? 's' : '')
787 # VMS has some subtle nastiness with closing the test files.
791 close($fh); # must close to reap child resource values
793 my $wstatus = $Ignore_Exitcode ? 0 : $?; # Can trust $? ?
795 $estatus = ($^O eq 'VMS'
796 ? eval 'use vmsish "status"; $estatus = $?'
799 return($estatus, $wstatus);
803 # Set up the command-line switches with which to run perl.
808 open(TEST, $test) or print "can't open $test. $!\n";
811 $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
812 if exists $ENV{'HARNESS_PERL_SWITCHES'};
813 $s .= join " ", q[ "-T"], map {qq["-I$_"]} @INC
814 if $first =~ /^#!.*\bperl.*-\w*T/;
816 close(TEST) or print "can't close $test. $!\n";
822 # Test program go boom.
823 sub _dubious_return {
824 my($test, $tot, $estatus, $wstatus) = @_;
825 my ($failed, $canon, $percent) = ('??', '??');
827 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
828 "(wstat %d, 0x%x)\n",
830 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
832 if (corestatus($wstatus)) { # until we have a wait module
833 if ($Have_Devel_Corestack) {
834 Devel::CoreStack::stack($^X);
836 print "\ttest program seems to have generated a core\n";
843 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
844 print "\tafter all the subtests completed successfully\n";
846 $failed = 0; # But we do not set $canon!
849 push @{$test->{failed}}, $test->{'next'}..$test->{max};
850 $failed = @{$test->{failed}};
851 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
852 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
857 return { canon => $canon, max => $test->{max} || '??',
860 estat => $estatus, wstat => $wstatus,
865 sub _garbled_output {
866 my($gibberish) = shift;
867 warn "Confusing test output: '$gibberish'\n";
872 my($failedtests) = @_;
874 my $failed_str = "Failed Test";
875 my $middle_str = " Stat Wstat Total Fail Failed ";
876 my $list_str = "List of Failed";
878 # Figure out our longest name string for formatting purposes.
879 my $max_namelen = length($failed_str);
880 foreach my $script (keys %$failedtests) {
881 my $namelen = length $failedtests->{$script}->{name};
882 $max_namelen = $namelen if $namelen > $max_namelen;
885 my $list_len = $Columns - length($middle_str) - $max_namelen;
886 if ($list_len < length($list_str)) {
887 $list_len = length($list_str);
888 $max_namelen = $Columns - length($middle_str) - $list_len;
889 if ($max_namelen < length($failed_str)) {
890 $max_namelen = length($failed_str);
891 $Columns = $max_namelen + length($middle_str) + $list_len;
895 my $fmt_top = "format STDOUT_TOP =\n"
896 . sprintf("%-${max_namelen}s", $failed_str)
902 my $fmt = "format STDOUT =\n"
903 . "@" . "<" x ($max_namelen - 1)
904 . " @>> @>>>> @>>>> @>>> ^##.##% "
905 . "^" . "<" x ($list_len - 1) . "\n"
906 . '{ $Curtest->{name}, $Curtest->{estat},'
907 . ' $Curtest->{wstat}, $Curtest->{max},'
908 . ' $Curtest->{failed}, $Curtest->{percent},'
909 . ' $Curtest->{canon}'
911 . "~~" . " " x ($Columns - $list_len - 2) . "^"
912 . "<" x ($list_len - 1) . "\n"
913 . '$Curtest->{canon}'
921 return($fmt_top, $fmt);
925 my $tried_devel_corestack;
930 eval {require 'wait.ph'};
931 my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
933 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
934 unless $tried_devel_corestack++;
940 sub canonfailed ($@) {
941 my($max,$skipped,@failed) = @_;
943 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
944 my $failed = @failed;
948 my $last = $min = shift @failed;
951 for (@failed, $failed[-1]) { # don't forget the last one
952 if ($_ > $last+1 || $_ == $last) {
956 push @canon, "$min-$last";
963 push @result, "FAILED tests @canon\n";
964 $canon = join ' ', @canon;
966 push @result, "FAILED test $last\n";
970 push @result, "\tFailed $failed/$max tests, ";
971 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
972 my $ender = 's' x ($skipped > 1);
973 my $good = $max - $failed - $skipped;
974 my $goodper = sprintf("%.2f",100*($good/$max));
975 push @result, " (-$skipped skipped test$ender: $good okay, ".
979 my $txt = join "", @result;
996 C<&runtests> is exported by Test::Harness by default.
998 C<$verbose> and C<$switches> are exported upon request.
1005 =item C<All tests successful.\nFiles=%d, Tests=%d, %s>
1007 If all tests are successful, some statistics about the performance are
1010 =item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
1012 For any single script that has failing subtests, statistics like the
1015 =item C<Test returned status %d (wstat %d)>
1017 For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
1018 and C<$?> are printed in a message similar to the above.
1020 =item C<Failed 1 test, %.2f%% okay. %s>
1022 =item C<Failed %d/%d tests, %.2f%% okay. %s>
1024 If not all tests were successful, the script dies with one of the
1027 =item C<FAILED--Further testing stopped%s>
1029 If a single subtest decides that further testing will not make sense,
1030 the script dies with this message.
1038 =item C<HARNESS_IGNORE_EXITCODE>
1040 Makes harness ignore the exit status of child processes when defined.
1042 =item C<HARNESS_NOTTY>
1044 When set to a true value, forces harness to behave as though STDOUT were
1045 not a console. You may need to set this if you don't want harness to
1046 output more frequent progress messages using carriage returns. Some
1047 consoles may not handle carriage returns properly (which results in a
1048 somewhat messy output).
1050 =item C<HARNESS_COMPILE_TEST>
1052 When true, it will make harness attempt to compile the test using
1053 C<perlcc> before running it.
1055 B<NOTE> This currently only works when sitting in the perl source
1058 =item C<HARNESS_FILELEAK_IN_DIR>
1060 When set to the name of a directory, harness will check after each
1061 test whether new files appeared in that directory, and report them as
1063 LEAKED FILES: scr.tmp 0 my.db
1065 If relative, the directory name is interpreted with respect to the current
1066 directory at the moment runtests() was called. Putting an absolute path into
1067 C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
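Note that this variable is read when Test::Harness is loaded, so set it
beforehand, for example (the F<t/tmp> directory is made up):

    BEGIN { $ENV{HARNESS_FILELEAK_IN_DIR} = 't/tmp' }
    use Test::Harness;
    runtests(glob 't/*.t');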
1069 =item C<HARNESS_PERL_SWITCHES>
1071 Its value will be prepended to the switches used to invoke perl on
1072 each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
1073 run all tests with all warnings enabled.
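For example (this variable is consulted for each test as it is run, so
it can also be set from Perl just before calling runtests()):

    use Test::Harness;
    $ENV{HARNESS_PERL_SWITCHES} = '-W';
    runtests(glob 't/*.t');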
1075 =item C<HARNESS_COLUMNS>
1077 This value will be used for the width of the terminal. If it is not
1078 set then it will default to C<COLUMNS>. If this is not set, it will
1079 default to 80. Note that users of Bourne-sh based shells will need to
1080 C<export COLUMNS> for this module to use that variable.
1082 =item C<HARNESS_ACTIVE>
1084 Harness sets this before executing the individual tests. This allows
1085 the tests to determine if they are being executed through the harness
1086 or by any other means.
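For example, a test script can check it to adjust its behaviour when
run under the harness (a sketch; comment lines are ignored by harness):

    if ($ENV{HARNESS_ACTIVE}) {
        print "# running under Test::Harness, skipping interactive prompts\n";
    }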
1092 Here's how Test::Harness tests itself
1094 $ cd ~/src/devel/Test-Harness
1095 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1096 $verbose=0; runtests @ARGV;' t/*.t
1097 Using /home/schwern/src/devel/Test-Harness/blib
1098 t/base..............ok
1099 t/nonumbers.........ok
1100 t/ok................ok
1101 t/test-harness......ok
1102 All tests successful.
1103 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
1107 L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1108 the underlying timing routines, L<Devel::CoreStack> to generate core
1109 dumps from failed tests and L<Devel::Cover> for test coverage
1114 Either Tim Bunce or Andreas Koenig, we don't know. What we know for
1115 sure is that it was inspired by Larry Wall's TEST script that came
1116 with perl distributions for ages. Numerous anonymous contributors
1117 exist. Andreas Koenig held the torch for many years.
1119 Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1123 Provide a way of running tests quietly (i.e. no printing) for automated
1124 validation of tests. This will probably take the form of a version
1125 of runtests() which rather than printing its output returns raw data
1126 on the state of the tests.
1128 Fix HARNESS_COMPILE_TEST without breaking its core usage.
1130 Figure a way to report test names in the failure summary.
1132 Rework the test summary so long test names are not truncated as badly.
1134 Merge back into bleadperl.
1136 Deal with VMS's "not \nok 4\n" mistake.
1138 Add option for coverage analysis.
1141 Keep whittling away at _run_all_tests().
1144 Clean up how the summary is printed. Get rid of those damned formats.
1148 Test::Harness uses $^X to determine the perl binary to run the tests
1149 with. Test scripts running via the shebang (C<#!>) line may not be
1150 portable because $^X is not consistent for shebang scripts across
1151 platforms. This is no problem when Test::Harness is run with an
1152 absolute path to the perl binary or when $^X can be found in the path.
1154 HARNESS_COMPILE_TEST currently assumes it's run from the Perl source