# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.80 2003/12/31 02:39:21 andy Exp $

use Test::Harness::Straps;
use Test::Harness::Assert;
@ISA @EXPORT @EXPORT_OK
$Verbose $Switches $Debug
$verbose $switches $debug
Test::Harness - Run Perl standard test scripts with statistics

$Header: /home/cvs/test-harness/lib/Test/Harness.pm,v 1.80 2003/12/31 02:39:21 andy Exp $
# Backwards compatibility for exportable variable names.
*switches = *Switches;

$Have_Devel_Corestack = 0;

$ENV{HARNESS_ACTIVE} = 1;

delete $ENV{HARNESS_ACTIVE};
# Some experimental versions of OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

my $Ok_Slow = $ENV{HARNESS_OK_SLOW};

$Strap = Test::Harness::Straps->new;
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose = $ENV{HARNESS_VERBOSE} || 0;
$Debug   = $ENV{HARNESS_DEBUG} || 0;

$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;             # Some shells have trouble with a full line of text.
    runtests(@test_files);
B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)
Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
output.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.
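For example, a hand-rolled script that runs two trivial tests could produce this output itself (a minimal sketch; in practice you would use the Test or Test::Simple modules rather than printing by hand):

```perl
#!/usr/bin/perl -w
# A minimal test script emitting the output Test::Harness expects.

print "1..2\n";                          # plan: two tests will run

# Each test prints "ok N" on success, "not ok N" on failure.
print( (1 + 1 == 2)     ? "ok 1\n" : "not ok 1\n" );
print( ("foo" eq "foo") ? "ok 2\n" : "not ok 2\n" );
```

Run directly, this prints C<1..2>, C<ok 1>, C<ok 2>; run via runtests(), it is reported as a passing script.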
=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.
=item B<'1..M'>

This header tells how many tests there will be.  For example, C<1..10>
means you plan on running 10 tests.  This is a safeguard in case your
test dies quietly in the middle of its run.
It should be the first non-comment line output by your test program.
In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted for the 1..M
header to appear as the B<last> line output by your test (again, it
can be followed by further comments).

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.
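For example, a script that only discovers its test count while running may legitimately emit the header last:

```
ok 1
ok 2
1..2
```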
=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored and
passed through, and will thus be seen by the user.  Lines written to
standard output containing C</^(not\s+)?ok\b/> are interpreted as
feedback for runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.
=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  It is tolerated if the test numbers after 'ok' are
omitted.  In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again.  So the following
test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay
Anything after the test number but before the # is considered to be
the name of the test.

    ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.
=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

    ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include an explanation in a C<1..0> line
emitted if the test script is skipped completely:

    1..0 # Skipped: no leverage found
=item B<Todo tests>

If the standard output line contains the substring C< # TODO > after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
afterwards is the thing that has to be done before this test will
succeed.

    not ok 13 # TODO harness the power of the atom

Note that the TODO must have a space after it.
Alternatively, you can specify a list of what tests are todo as part
of the test header.

This only works if the header appears at the beginning of the test.

This style is B<deprecated>.
These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list.  They are
B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.  This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.
=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately.  In that case the test script prints the magic words

    Bail out!

to standard output.  Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.
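For example, a script might bail out when a prerequisite can't be loaded (a sketch; C<Some::Prerequisite> is a made-up module name):

```perl
#!/usr/bin/perl -w
# Bail out before running any tests if a prerequisite is missing.
print "1..2\n";

eval { require Some::Prerequisite };     # hypothetical dependency
if ($@) {
    print "Bail out! Some::Prerequisite is not installed\n";
    exit 0;
}

print "ok 1\n";
print "ok 2\n";
```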
=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

    ok 1
    # Life is good, the sun is shining, RAM is cheap.
    not ok 2
    # got 'Bush' expected 'Gore'
=item B<Anything else>

Test::Harness silently ignores any other output, B<BUT WE PLAN TO
CHANGE THIS!>  If you wish to place additional output in your
test script, please use a comment.
=head2 Taint mode

Test::Harness will honor the C<-T> or C<-t> in the #! line on your
test files.  So if you begin a test with:

    #!perl -T

the test will be run with taint mode on.
=head2 Configuration variables

These variables can be used to configure the behavior of
Test::Harness.  They are exported on request.
=item B<$Test::Harness::Verbose>

The global variable C<$Test::Harness::Verbose> is exportable and can be
used to let C<runtests()> display the standard output of the script
without otherwise altering its behavior.  The F<prove> utility's C<-v>
flag will set this.
=item B<$Test::Harness::switches>

The global variable C<$Test::Harness::switches> is exportable and can be
used to set perl command line options used for running the test
script(s).  The default value is C<-w>.  It overrides C<HARNESS_SWITCHES>.
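For example, a one-off driver script might set the verbosity flag before running a suite (a sketch; the throwaway F<demo.t> file only exists to make the example self-contained):

```perl
use Test::Harness qw(runtests $verbose);

# Create a trivial passing test file to run.
open my $fh, '>', 'demo.t' or die "Can't write demo.t: $!";
print $fh qq{print "1..1\\nok 1\\n";\n};
close $fh;

$verbose = 1;                       # alias for $Test::Harness::Verbose
my $all_ok = runtests('demo.t');    # true if every script passed

unlink 'demo.t';
```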
=head1 FAILURE

It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    t/waterloo..........dubious
            Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
            Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.
The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back
=head1 FUNCTIONS

Test::Harness currently only has one function, here it is.

    my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok.  Otherwise it will die() with
one of the messages in the DIAGNOSTICS section.
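Because runtests() dies on failure, a driver that needs to continue afterwards can trap the death with eval (a sketch; F<failing.t> is a throwaway file created just for the example):

```perl
use Test::Harness qw(runtests);

# Create a deliberately failing test file.
open my $fh, '>', 'failing.t' or die "Can't write failing.t: $!";
print $fh qq{print "1..1\\nnot ok 1\\n";\n};
close $fh;

my $ok = eval { runtests('failing.t') };
print "Harness died with: $@" if $@;   # one of the DIAGNOSTICS messages

unlink 'failing.t';
```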
This is just _run_all_tests() plus _show_results().

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});
    my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
    my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

    my @f = readdir DIRH;
=item B<_run_all_tests>

    my($total, $failed) = _run_all_tests(@test_files);

Runs all the given C<@test_files> (as C<runtests()>) but does it
quietly (no report).  $total is a hash ref summary of all the tests
run.  Its keys and values are these:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests ran
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files ran
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If C<< $total->{bad} == 0 >> and C<< $total->{max} > 0 >>, you've
got a successful test.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script, each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string)

C<$failed> should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.
        tests => scalar @tests,

    @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = Benchmark->new;
    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        if ( $Test::Harness::Debug ) {
            print "# Running: ", $Strap->_command_line($tfile), "\n";
        }
        $Last_ML_Print = 0;   # so each test prints at least once
        my($leader, $ml) = _mk_leader($tfile, $width);

        $Strap->{_seen_header} = 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn $Strap->{error}, "\n"; next };
        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                     1..@{$results{details}};

            'next'      => $Strap->{'next'},
            max         => $results{max},
            bonus       => $results{bonus},
            skipped     => $results{skip},
            skip_reason => $results{skip_reason},
            skip_all    => $Strap->{skip_all},
        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};
        if ($results{passing}) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                  if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                  if $test{bonus};
                print "$test{ml}ok\n        ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_all} and length $test{skip_all}) {
                print "skipped\n        all skipped: $test{skip_all}\n";
            } else {
                print "skipped\n        all skipped: no reason given\n";
            }
        # List unrun tests as failures.
        if ($test{'next'} <= $test{max}) {
            push @{$test{failed}}, $test{'next'}..$test{max};
        }

        # List overruns as failures.
        my $details = $results{details};
        foreach my $overrun ($test{max}+1..@$details) {
            next unless ref $details->[$overrun-1];
            push @{$test{failed}}, $overrun;
        }

        if ($wstatus) {
            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                   $estatus, $wstatus);
            $failedtests{$tfile}{name} = $tfile;
        }
        elsif ($results{seen}) {
            if (@{$test{failed}} and $test{max}) {
                my($txt, $canon) = _canonfailed($test{max}, $test{skipped},
                                                @{$test{failed}});
                print "$test{ml}$txt";
                $failedtests{$tfile} = { canon   => $canon,
                                         failed  => scalar @{$test{failed}},
                                         percent => 100*(scalar @{$test{failed}})/$test{max},
            } else {
                print "Don't know which tests failed: got $test{ok} ok, ".
                      "expected $test{max}\n";
                $failedtests{$tfile} = { canon => '??',
            }
        } else {
            print "FAILED before any test output arrived\n";
            $failedtests{$tfile} = { canon => '??',
        }
        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }

    $tot{bench} = timediff(Benchmark->new, $t_start);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
    my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given C<$test_file> as well
as a similar version which will overwrite the current line (by use of
\r and such).  C<$ml> may be empty if Test::Harness doesn't think you're
on TTY.

The C<$width> is the width of the "yada/blah.." string.
    my($te, $width) = @_;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
=item B<_leader_width>

    my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test name.

    my $suf    = /\.(\w+)$/ ? $1 : '';
    my $len    = length;
    my $suflen = length $suf;
    $maxlen    = $len    if $len    > $maxlen;
    $maxsuflen = $suflen if $suflen > $maxsuflen;

    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
    my($tot, $failedtests) = @_;

    my $bonusmsg = _bonusmsg($tot);

    if (_all_ok($tot)) {
        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}) {
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests} == 1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},
                             $percent_ok;
        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }

        $bonusmsg =~ s/^,\s*//;
        print "$bonusmsg.\n" if $bonusmsg;
        die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
            "$subpct\n";
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
    $Strap->{callback} = sub {
        my($self, $line, $type, $totals) = @_;
        print $line if $Verbose;

        my $meth = $Handlers{$type};
        $meth->($self, $line, $type, $totals) if $meth;
    };
    $Handlers{header} = sub {
        my($self, $line, $type, $totals) = @_;

        warn "Test header seen more than once!\n" if $self->{_seen_header};

        $self->{_seen_header}++;

        warn "1..M can only appear at the beginning or end of tests\n"
          if $totals->{seen} &&
             $totals->{max} < $totals->{seen};
    };
    $Handlers{test} = sub {
        my($self, $line, $type, $totals) = @_;

        my $curr   = $totals->{seen};
        my $next   = $self->{'next'};
        my $max    = $totals->{max};
        my $detail = $totals->{details}[-1];

        if( $detail->{ok} ) {
            _print_ml_less("ok $curr/$max");

            if( $detail->{type} eq 'skip' ) {
                $totals->{skip_reason} = $detail->{reason}
                  unless defined $totals->{skip_reason};
                $totals->{skip_reason} = 'various reasons'
                  if $totals->{skip_reason} ne $detail->{reason};
            }
        }
        else {
            _print_ml("NOK $curr");
        }

        if( $curr > $next ) {
            print "Test output counter mismatch [test $curr]\n";
        }
        elsif( $curr < $next ) {
            print "Confused test output: test $curr answered after ".
                  "test ", $next - 1, "\n";
        }
    };
    $Handlers{bailout} = sub {
        my($self, $line, $type, $totals) = @_;

        die "FAILED--Further testing stopped" .
            ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
    };
sub _print_ml {
    print join '', $ML, @_ if $ML;
}

# For slow connections, we save lots of bandwidth by printing only once
# per second.
sub _print_ml_less {
    if( !$Ok_Slow || $Last_ML_Print != time ) {
        _print_ml(@_);
        $Last_ML_Print = time;
    }
}
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
        if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                   . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                       . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                   . ($tot->{sub_skipped} != 1 ? 's' : '')
                   . " skipped";
    }
# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus, $wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
    if (_corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }
    if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
        print "\tafter all the subtests completed successfully\n";
        $failed = 0;   # But we do not set $canon!
    }
    else {
        push @{$test->{failed}}, $test->{'next'}..$test->{max};
        $failed = @{$test->{failed}};
        (my $txt, $canon) = _canonfailed($test->{max}, $test->{skipped},
                                         @{$test->{failed}});
        $percent = 100*(scalar @{$test->{failed}})/$test->{max};
    }
    return { canon   => $canon,  max   => $test->{max} || '??',
             failed  => $failed,
             percent => $percent,
             estat   => $estatus, wstat => $wstatus,
           };
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail Failed ";
    my $list_str   = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }
    my $fmt_top = "format STDOUT_TOP =\n"
                . sprintf("%-${max_namelen}s", $failed_str)

    my $fmt = "format STDOUT =\n"
            . "@" . "<" x ($max_namelen - 1)
            . " @>> @>>>> @>>>> @>>> ^##.##% "
            . "^" . "<" x ($list_len - 1) . "\n"
            . '{ $Curtest->{name}, $Curtest->{estat},'
            . ' $Curtest->{wstat}, $Curtest->{max},'
            . ' $Curtest->{failed}, $Curtest->{percent},'
            . ' $Curtest->{canon}'
            . "~~" . " " x ($Columns - $list_len - 2) . "^"
            . "<" x ($list_len - 1) . "\n"
            . '$Curtest->{canon}'

    return($fmt_top, $fmt);
my $tried_devel_corestack;

    eval { # we may not have a WCOREDUMP
        local $^W = 0;   # *.ph files are often *very* noisy
        $did_core = WCOREDUMP($st);
    };
    if ($@) {
        $did_core = $st & 0200;
    }

    eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
      unless $tried_devel_corestack++;
sub _canonfailed ($$@) {
    my($max, $skipped, @failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;

    my $last = $min = shift @failed;

    for (@failed, $failed[-1]) { # don't forget the last one
        if ($_ > $last+1 || $_ == $last) {

            push @canon, "$min-$last";

    push @result, "FAILED tests @canon\n";
    $canon = join ' ', @canon;

    push @result, "FAILED test $last\n";

    push @result, "\tFailed $failed/$max tests, ";

    push @result, sprintf("%.2f", 100*(1-$failed/$max)), "% okay";

    push @result, "?% okay";

    my $ender = 's' x ($skipped > 1);
    my $good  = $max - $failed - $skipped;

    my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";

    my $goodper = sprintf("%.2f", 100*($good/$max));
    $skipmsg .= "$goodper%)";

    push @result, $skipmsg;

    my $txt = join "", @result;
C<&runtests> is exported by Test::Harness by default.

C<$verbose>, C<$switches> and C<$debug> are exported upon request.
=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.
=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests.  This allows
the tests to determine if they are being executed through the harness
or by any other means.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal.  If it is not
set then it will default to C<COLUMNS>.  If this is not set, it will
default to 80.  Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.
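For example, a test might behave differently when run directly rather than under the harness:

```perl
#!/usr/bin/perl -w
# Detect whether this test script is running under Test::Harness.
print "1..1\n";

if ($ENV{HARNESS_ACTIVE}) {
    print "ok 1 # running under the harness\n";
}
else {
    print "ok 1 # running standalone\n";
}
```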
=item C<HARNESS_COMPILE_TEST>

When true it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_DEBUG>

If true, Test::Harness will print debugging information about itself as
it runs the tests.  This is different from C<HARNESS_VERBOSE>, which prints
the output from the test being run.  Setting C<$Test::Harness::Debug> will
override this, or you can use the C<-d> switch in the F<prove> utility.
=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is taken with respect to the current
directory at the moment runtests() was called.  Putting an absolute path
into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces it to behave as though STDOUT were
not a console.  You may need to set this if you don't want harness to
output more frequent progress messages using carriage returns.  Some
consoles may not handle carriage returns properly (which results in
somewhat messy output).

=item C<HARNESS_OK_SLOW>

If true, the C<ok> messages are printed out only once per second.  This
reduces output and may help increase testing speed over slow
connections, or with very large numbers of tests.
=item C<HARNESS_PERL>

Usually your tests will be run by C<$^X>, the currently-executing Perl.
However, you may want to have it run by a different executable, such as
a threading perl, or a different version.

If you're using the F<prove> utility, you can use the C<--perl> switch.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test.  For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.
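For example, a driver script could set the variable locally so only the tests it starts are affected (a sketch; F<demo.t> is a throwaway file created to keep the example self-contained):

```perl
use Test::Harness qw(runtests);

# Create a trivial passing test file to run.
open my $fh, '>', 'demo.t' or die "Can't write demo.t: $!";
print $fh qq{print "1..1\\nok 1\\n";\n};
close $fh;

{
    # local() keeps the change scoped: every test started in this
    # block is invoked with perl -W prepended to its switches.
    local $ENV{HARNESS_PERL_SWITCHES} = '-W';
    runtests('demo.t');
}

unlink 'demo.t';
```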
=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running
its tests.  Setting C<$Test::Harness::verbose> will override this,
or you can use the C<-v> switch in the F<prove> utility.
Here's how Test::Harness tests itself:

    $ cd ~/src/devel/Test-Harness
    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
    Using /home/schwern/src/devel/Test-Harness/blib
    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.
Either Tim Bunce or Andreas Koenig, we don't know.  What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages.  Numerous anonymous contributors
exist.  Andreas Koenig held the torch for many years, and then
Michael G Schwern.

Current maintainer is Andy Lester C<< <andy@petdance.com> >>.
This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See L<http://www.perl.com/perl/misc/Artistic.html>.
Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests.  This will probably take the form of a version
of runtests() which, rather than printing its output, returns raw data
on the state of the tests.  (Partially done in Test::Harness::Straps)

Document the format.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

Implement Straps total_results().
Completely redo the print summary code.

Implement Straps callbacks.  (experimentally implemented)

Straps->analyze_file() not taint clean, don't know if it can be.

Fix that damned VMS nit.

HARNESS_TODOFAIL to display TODO failures.

Add a test for verbose.

Change internal list of test results to a hash.

Fix stats display when there's an overrun.

Fix so perls with spaces in the filename work.
Keep whittling away at _run_all_tests().

Clean up how the summary is printed.  Get rid of those damned formats.

=head1 BUGS

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

Please use the CPAN bug ticketing system at L<http://rt.cpan.org/>.
You can also mail bugs, fixes and enhancements to
C<< <bug-test-harness@rt.cpan.org> >>.
Original code by Michael G Schwern, maintained by Andy Lester.

Copyright 2003 by Michael G Schwern C<< <schwern@pobox.com> >>,
Andy Lester C<< <andy@petdance.com> >>.

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See L<http://www.perl.com/perl/misc/Artistic.html>.