# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.54 2003/08/15 01:05:00 andy Exp $

use Test::Harness::Straps;
use Test::Harness::Assert;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches $ML $Strap
            @ISA @EXPORT @EXPORT_OK $Last_ML_Print

# Backwards compatibility for exportable variable names.
*switches = *Switches;

$Have_Devel_Corestack = 0;

$ENV{HARNESS_ACTIVE} = 1;

delete $ENV{HARNESS_ACTIVE};
# Some experimental OS/2 builds have a broken $?.
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

my $Ok_Slow = $ENV{HARNESS_OK_SLOW};

$Strap = Test::Harness::Straps->new;

@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose = $ENV{HARNESS_VERBOSE} || 0;

$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;                # Some shells have trouble with a full line of text.
Test::Harness - Run Perl standard test scripts with statistics

    runtests(@test_files);

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)
Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.
=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

This header tells how many tests there will be.  For example, C<1..10>
means you plan on running 10 tests.  This is a safeguard in case your
test dies quietly in the middle of its run.

It should be the first non-comment line output by your test program.

In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted for the 1..M
header to appear as the B<last> line output by your test (again, it
can be followed by further comments).

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.
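Putting these rules together: the protocol is plain text on standard
output, so any language can produce it.  The following sketch (shell,
with invented test results) shows the exact stream a conforming
two-test script would emit:

    printf '1..2\n'        # the plan comes first: two tests will run
    printf 'ok 1\n'        # test 1 passed
    printf 'not ok 2\n'    # test 2 failed, but still counts toward the plan

Run through the harness, this stream would be reported as one failed
test out of two.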
=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored and
bypassed, and will thus be seen by the user.  Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.
=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  It is tolerated if the test numbers after 'ok' are
omitted.  In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again.  So the following

    Failed 3/6 tests, 50.00% okay
Anything after the test number but before the # is considered to be
the name of the test.

    ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.
=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

    ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include a similar explanation in a C<1..0> line
emitted if the test script is skipped completely:

    1..0 # Skipped: no leverage found
If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
afterwards is the thing that has to be done before this test will
succeed.

    not ok 13 # TODO harness the power of the atom

Alternatively, you can specify a list of what tests are todo as part
of the test header.

This only works if the header appears at the beginning of the test.

This style is B<deprecated>.

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list.  They are
B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.  This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.
As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately.  In that case the test script prints the magic words

    Bail out!

to standard output.  Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.
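Since bailing out is just another line on standard output, it works
from any language.  A hypothetical shell sketch (the module name is
invented) of a run that gives up after its first test:

    printf '1..7\n'
    printf 'ok 1\n'
    # A required dependency turned out to be missing: abandon the run.
    printf 'Bail out! Flogiston::Gauge is not installed\n'

Everything after the magic words becomes the reason that the harness
displays.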
Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

    # Life is good, the sun is shining, RAM is cheap.

    # got 'Bush' expected 'Gore'
=item B<Anything else>

Any other output Test::Harness sees, it will silently ignore B<BUT WE
PLAN TO CHANGE THIS!>  If you wish to place additional output in your
test script, please use a comment.
Test::Harness will honor the C<-T> in the #! line on your test files.  So
if you begin a test with:

    #!perl -T

the test will be run with taint mode on.
=head2 Configuration variables.

These variables can be used to configure the behavior of
Test::Harness.  They are exported on request.

=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without otherwise altering its behavior.

=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set the perl command-line options used for running the test
script(s).  The default value is C<-w>.
It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    t/waterloo..........dubious
            Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
            Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.

The columns in the summary report mean:
The test file which failed.

If the test exited with non-zero, this is its exit status.

The wait status of the test: the raw C<$?>, which packs the exit
status together with the number of any signal the test died from and
a flag indicating whether it dumped core.

Total number of tests expected to run.

Number which failed, either from "not ok" or because they never ran.

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).
Test::Harness currently has only one function; here it is.

    my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok.  Otherwise it will die() with
one of the messages in the DIAGNOSTICS section.
This is just _run_all_tests() plus _show_results().

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});

    my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
    my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

    my @f = readdir DIRH;
=item B<_run_all_tests>

    my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report).  $total is a hash ref summary of all the tests run.  Its keys
and values are these:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test run.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script, each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string)

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.
        tests => scalar @tests,

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        $Last_ML_Print = 0;  # so each test prints at least once
        my($leader, $ml) = _mk_leader($tfile, $width);

        $Strap->{_seen_header} = 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn "$Strap->{error}\n";  next };

        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                     1..@{$results{details}};

            'next'       => $Strap->{'next'},
            max          => $results{max},
            bonus        => $results{bonus},
            skipped      => $results{skip},
            skip_reason  => $results{skip_reason},
            skip_all     => $Strap->{skip_all},

        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};

        if ($results{passing}) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                print "$test{ml}ok\n ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_all} and length $test{skip_all}) {
                print "skipped\n all skipped: $test{skip_all}\n";
                print "skipped\n all skipped: no reason given\n";

            # List unrun tests as failures.
            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};

            # List overruns as failures.
            my $details = $results{details};
            foreach my $overrun ($test{max}+1..@$details)
                next unless ref $details->[$overrun-1];
                push @{$test{failed}}, $overrun

            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
            $failedtests{$tfile}{name} = $tfile;
        elsif($results{seen}) {
            if (@{$test{failed}} and $test{max}) {
                my($txt, $canon) = canonfailed($test{max},$test{skipped},
                print "$test{ml}$txt";
                $failedtests{$tfile} = { canon   => $canon,
                                         failed  => scalar @{$test{failed}},
                                         percent => 100*(scalar @{$test{failed}})/$test{max},
                print "Don't know which tests failed: got $test{ok} ok, ".
                      "expected $test{max}\n";
                $failedtests{$tfile} = { canon => '??',
            print "FAILED before any test output arrived\n";
            $failedtests{$tfile} = { canon => '??',

    if (defined $Files_In_Dir) {
        my @new_dir_files = _globdir $Files_In_Dir;
        if (@new_dir_files != @dir_files) {
            @f{@new_dir_files} = (1) x @new_dir_files;
            delete @f{@dir_files};
            my @f = sort keys %f;
            print "LEAKED FILES: @f\n";
            @dir_files = @new_dir_files;

    $tot{bench} = timediff(new Benchmark, $t_start);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
    my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such).  $ml may be empty if Test::Harness doesn't think you're
writing to a terminal.

The $width is the width of the "yada/blah.." string.

    my($te, $width) = @_;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
=item B<_leader_width>

    my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test file name.

        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;

    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
    my($tot, $failedtests) = @_;

    my $bonusmsg = _bonusmsg($tot);

        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}){
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";

        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};

        $bonusmsg =~ s/^,\s*//;
        print "$bonusmsg.\n" if $bonusmsg;
        die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
$Strap->{callback} = sub {
    my($self, $line, $type, $totals) = @_;
    print $line if $Verbose;

    my $meth = $Handlers{$type};
    $meth->($self, $line, $type, $totals) if $meth;

$Handlers{header} = sub {
    my($self, $line, $type, $totals) = @_;

    warn "Test header seen more than once!\n" if $self->{_seen_header};

    $self->{_seen_header}++;

    warn "1..M can only appear at the beginning or end of tests\n"
      if $totals->{seen} &&
         $totals->{max}  < $totals->{seen};

$Handlers{test} = sub {
    my($self, $line, $type, $totals) = @_;

    my $curr   = $totals->{seen};
    my $next   = $self->{'next'};
    my $max    = $totals->{max};
    my $detail = $totals->{details}[-1];

    if( $detail->{ok} ) {
        _print_ml_less("ok $curr/$max");

        if( $detail->{type} eq 'skip' ) {
            $totals->{skip_reason} = $detail->{reason}
              unless defined $totals->{skip_reason};
            $totals->{skip_reason} = 'various reasons'
              if $totals->{skip_reason} ne $detail->{reason};

        _print_ml("NOK $curr");

    if( $curr > $next ) {
        print "Test output counter mismatch [test $curr]\n";
    elsif( $curr < $next ) {
        print "Confused test output: test $curr answered after ".
              "test ", $next - 1, "\n";

$Handlers{bailout} = sub {
    my($self, $line, $type, $totals) = @_;

    die "FAILED--Further testing stopped" .
      ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");

    print join '', $ML, @_ if $ML;

# For slow connections, we save lots of bandwidth by printing only once
    if( !$Ok_Slow || $Last_ML_Print != time ) {
        $Last_ML_Print = time;
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        $bonusmsg .= ' skipped';
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",

    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
            print "\ttest program seems to have generated a core\n";

    if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
        print "\tafter all the subtests completed successfully\n";
        $failed = 0;  # But we do not set $canon!

        push @{$test->{failed}}, $test->{'next'}..$test->{max};
        $failed = @{$test->{failed}};
        (my $txt, $canon) = canonfailed($test->{max}, $test->{skipped},
                                        @{$test->{failed}});
        $percent = 100*(scalar @{$test->{failed}})/$test->{max};

    return { canon => $canon,   max   => $test->{max} || '??',
             estat => $estatus, wstat => $wstatus,
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail Failed ";
    my $list_str   = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . " @>> @>>>> @>>>> @>>> ^##.##% "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . ' $Curtest->{wstat}, $Curtest->{max},'
              . ' $Curtest->{failed}, $Curtest->{percent},'
              . ' $Curtest->{canon}'
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'

    return($fmt_top, $fmt);
my $tried_devel_corestack;

    eval { # we may not have a WCOREDUMP
        local $^W = 0;  # *.ph files are often *very* noisy
        $did_core = WCOREDUMP($st);

        $did_core = $st & 0200;

    eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
      unless $tried_devel_corestack++;
sub canonfailed ($$@) {
    my($max,$skipped,@failed) = @_;

    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;

    my $last = $min = shift @failed;

    for (@failed, $failed[-1]) { # don't forget the last one
        if ($_ > $last+1 || $_ == $last) {
            push @canon, "$min-$last";

        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;

        push @result, "FAILED test $last\n";

    push @result, "\tFailed $failed/$max tests, ";
    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    push @result, "?% okay";

    my $ender = 's' x ($skipped > 1);
    my $good  = $max - $failed - $skipped;

    my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";

    my $goodper = sprintf("%.2f",100*($good/$max));
    $skipmsg .= "$goodper%)";

    push @result, $skipmsg;

    my $txt = join "", @result;
C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.
=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests.  This allows
the tests to determine if they are being executed through the harness
or by any other means.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal.  If it is not
set then it will default to C<COLUMNS>.  If this is not set, it will
default to 80.  Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.
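For example, in a Bourne-style shell the assignment and the export are
separate steps; a sketch (the width here is arbitrary):

    COLUMNS=132        # desired terminal width
    export COLUMNS     # without this, child perl processes never see it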
=item C<HARNESS_COMPILE_TEST>

When true it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called.  Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces it to behave as though STDOUT were
not a console.  You may need to set this if you don't want harness to
output more frequent progress messages using carriage returns.  Some
consoles may not handle carriage returns properly (which results in a
somewhat messy output).

=item C<HARNESS_OK_SLOW>

If true, the C<ok> messages are printed out only every second.  This
reduces output and may, for example, help when testing over slow
connections.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test.  For example, setting C<HARNESS_PERL_SWITCHES> to C<-W>
will run all tests with all warnings enabled.

=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running its
tests.  Setting $Test::Harness::verbose will override this.
Here's how Test::Harness tests itself:

    $ cd ~/src/devel/Test-Harness
    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
    Using /home/schwern/src/devel/Test-Harness/blib
    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

Either Tim Bunce or Andreas Koenig, we don't know.  What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages.  Numerous anonymous contributors
exist.  Andreas Koenig held the torch for many years, and then

Current maintainer is Andy Lester C<< <andy@petdance.com> >>.

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See L<http://www.perl.com/perl/misc/Artistic.html>
Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests.  This will probably take the form of a version
of runtests() which, rather than printing its output, returns raw data
on the state of the tests.  (Partially done in Test::Harness::Straps)

Document the format.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

Implement Straps total_results()

Completely redo the print summary code.

Implement Straps callbacks.  (experimentally implemented)

Straps->analyze_file() not taint clean, don't know if it can be

Fix that damned VMS nit.

HARNESS_TODOFAIL to display TODO failures

Add a test for verbose.

Change internal list of test results to a hash.

Fix stats display when there's an overrun.

Fix so perls with spaces in the filename work.

Keep whittling away at _run_all_tests()

Clean up how the summary is printed.  Get rid of those damned formats.

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.