1 # -*- Mode: cperl; cperl-indent-level: 4 -*-
2 # $Id: Harness.pm,v 1.47 2003/04/24 19:33:05 andy Exp $
7 use Test::Harness::Straps;
8 use Test::Harness::Assert;
14 use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
15 $Columns $verbose $switches $ML $Strap
16 @ISA @EXPORT @EXPORT_OK $Last_ML_Print
19 # Backwards compatibility for exportable variable names.
21 *switches = *Switches;
23 $Have_Devel_Corestack = 0;
27 $ENV{HARNESS_ACTIVE} = 1;
31 delete $ENV{HARNESS_ACTIVE};
34 # Some experimental versions of OS/2 build have broken $?
35 my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
37 my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
39 $Strap = Test::Harness::Straps->new;
42 @EXPORT = qw(&runtests);
43 @EXPORT_OK = qw($verbose $switches);
45 $Verbose = $ENV{HARNESS_VERBOSE} || 0;
47 $Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
48 $Columns--; # Some shells have trouble with a full line of text.
53 Test::Harness - run perl standard test scripts with statistics
59 runtests(@test_files);
63 B<STOP!> If all you want to do is write a test script, consider using
64 Test::Simple. Otherwise, read on.
66 (By using the Test module, you can write test scripts without
67 knowing the exact output this module expects. However, if you need to
68 know the specifics, read on!)
70 Perl test scripts print to standard output C<"ok N"> for each single
71 test, where C<N> is an increasing sequence of integers. The first line
72 output by a standard test script is C<"1..M"> with C<M> being the
73 number of tests that should be run within the test
script. Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.
78 After all tests have been performed, runtests() prints some
79 performance statistics that are computed by the Benchmark module.
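A minimal script in this format (a hypothetical F<t/demo.t>, not part of any
distribution) would be:

```perl
# Hypothetical t/demo.t: the smallest output Test::Harness understands.
print "1..2\n";      # header: this script plans to run 2 tests
print "ok 1\n";      # test 1 passed
print "not ok 2\n";  # test 2 failed
```

Running it under runtests() would report one failed test out of two.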
81 =head2 The test script output
83 The following explains how Test::Harness interprets the output of your
90 This header tells how many tests there will be. For example, C<1..10>
91 means you plan on running 10 tests. This is a safeguard in case your
92 test dies quietly in the middle of its run.
94 It should be the first non-comment line output by your test program.
96 In certain instances, you may not know how many tests you will
97 ultimately be running. In this case, it is permitted for the 1..M
98 header to appear as the B<last> line output by your test (again, it
99 can be followed by further comments).
101 Under B<no> circumstances should 1..M appear in the middle of your
102 output or more than once.
105 =item B<'ok', 'not ok'. Ok?>
Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user. Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests(). All other lines are discarded.
112 C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
115 =item B<test numbers>
Perl normally expects the 'ok' or 'not ok' to be followed by a test
number. It is tolerated if the test numbers after 'ok' are
omitted. In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again. So the following
135 Failed 3/6 tests, 50.00% okay
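As a hedged sketch, a script that omits some test numbers might look like
this; the harness counts the unnumbered lines itself:

```perl
# Hypothetical script mixing numbered and unnumbered "ok" lines.
print "1..3\n";
print "ok\n";     # counted by the harness as test 1
print "ok 2\n";   # explicit number: the counter resyncs here
print "ok\n";     # counted as test 3
```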
139 Anything after the test number but before the # is considered to be
140 the name of the test.
142 ok 42 this is the name of the test
144 Currently, Test::Harness does nothing with this information.
146 =item B<Skipping tests>
148 If the standard output line contains the substring C< # Skip> (with
149 variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test. If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.
155 ok 23 # skip Insufficient flogiston pressure.
157 Similarly, one can include a similar explanation in a C<1..0> line
158 emitted if the test script is skipped completely:
160 1..0 # Skipped: no leverage found
If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
afterwards is the thing that has to be done before this test will
succeed.
169 not ok 13 # TODO harness the power of the atom
Alternatively, you can specify a list of which tests are todo as part
of the test header.
178 This only works if the header appears at the beginning of the test.
180 This style is B<deprecated>.
184 These tests represent a feature to be implemented or a bug to be fixed
185 and act as something of an executable "thing to do" list. They are
186 B<not> expected to succeed. Should a todo test begin succeeding,
187 Test::Harness will report it as a bonus. This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.
193 As an emergency measure, a test script can decide that further tests
194 are useless (e.g. missing dependencies) and testing should stop
immediately. In that case the test script prints the magic words

  Bail out!

199 to standard output. Any message after these words will be displayed by
200 C<Test::Harness> as the reason why testing is stopped.
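For example (hypothetical; C<DBI> stands in for any missing dependency):

```perl
# A script that gives up immediately when a prerequisite is absent.
print "1..5\n";
if ( eval { require DBI; 1 } ) {
    print "ok 1\n";
}
else {
    print "Bail out! DBI is not installed\n";  # halts the whole run
}
```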
204 Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them. For example:
209 # Life is good, the sun is shining, RAM is cheap.
211 # got 'Bush' expected 'Gore'
213 =item B<Anything else>
Any other output Test::Harness sees will be silently ignored B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
test script, please use a comment.
224 Test::Harness will honor the C<-T> in the #! line on your test files. So
if you begin a test with:

    #!perl -T

the test will be run with taint mode on.
232 =head2 Configuration variables.
234 These variables can be used to configure the behavior of
235 Test::Harness. They are exported on request.
239 =item B<$Test::Harness::verbose>
241 The global variable $Test::Harness::verbose is exportable and can be
242 used to let runtests() display the standard output of the script
243 without altering the behavior otherwise.
245 =item B<$Test::Harness::switches>
247 The global variable $Test::Harness::switches is exportable and can be
248 used to set perl command line options used for running the test
249 script(s). The default value is C<-w>.
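For example (using fully qualified names rather than the exported
variables):

```perl
use Test::Harness ();

$Test::Harness::verbose  = 1;     # echo each script's raw output while it runs
$Test::Harness::switches = '-w';  # perl switches used to run every test script

# e.g. perl this_script.pl t/*.t (only runs if file names were given)
Test::Harness::runtests(@ARGV) if @ARGV;
```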
It will happen: your tests will fail. After you mop up your ego, you
257 can begin examining the summary report:
259 t/base..............ok
260 t/nonumbers.........ok
261 t/ok................ok
262 t/test-harness......ok
263 t/waterloo..........dubious
264 Test returned status 3 (wstat 768, 0x300)
265 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
266 Failed 10/20 tests, 50.00% okay
267 Failed Test Stat Wstat Total Fail Failed List of Failed
268 -----------------------------------------------------------------------
269 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
270 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
272 Everything passed but t/waterloo.t. It failed 10 of 20 tests and
273 exited with non-zero status indicating something dubious happened.
275 The columns in the summary report mean:
281 The test file which failed.
285 If the test exited with non-zero, this is its exit status.
The raw wait status (C<$?>) returned when the test process exited.
293 Total number of tests expected to run.
297 Number which failed, either from "not ok" or because they never ran.
301 Percentage of the total tests which failed.
303 =item B<List of Failed>
A list of the tests which failed. Successive failures may be
abbreviated (ie. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).
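The range compression can be sketched as a standalone helper (hypothetical
name C<canon_list>; inside Test::Harness the real work happens in
canonfailed()):

```perl
use strict;
use warnings;

# Collapse a sorted list of failed test numbers into ranges,
# e.g. (15, 16, 17, 18, 19, 20) -> "15-20".
sub canon_list {
    my @failed = sort { $a <=> $b } @_;
    my @canon;
    my ( $min, $last ) = ( $failed[0], $failed[0] );
    for my $t ( @failed[ 1 .. $#failed ], undef ) {
        if ( !defined $t or $t > $last + 1 ) {
            push @canon, $min == $last ? $min : "$min-$last";
            ( $min, $last ) = ( $t, $t ) if defined $t;
        }
        else {
            $last = $t;
        }
    }
    return join ' ', @canon;
}
```

canon_list(15 .. 20) returns the string "15-20".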
Test::Harness currently has only one function; here it is.
320 my $allok = runtests(@test_files);
This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.
327 It returns true if everything was ok. Otherwise it will die() with
328 one of the messages in the DIAGNOSTICS section.
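Typical use; since runtests() dies on failure, wrap it in eval if you want
to continue afterwards:

```perl
use Test::Harness;

# Hypothetical driver: run every test under t/ and report the outcome.
my $allok = eval { runtests( glob 't/*.t' ) };
if ($allok) {
    print "all tests passed\n";
}
else {
    print "test run failed: $@";
}
```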
332 This is just _run_all_tests() plus _show_results()
341 my($tot, $failedtests) = _run_all_tests(@tests);
342 _show_results($tot, $failedtests);
344 my $ok = _all_ok($tot);
346 assert(($ok xor keys %$failedtests),
347 q{ok status jives with $failedtests});
356 my $ok = _all_ok(\%tot);
358 Tells you if this test run is overall successful or not.
365 return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
370 my @files = _globdir $dir;
372 Returns all the files in a directory. This is shorthand for backwards
373 compatibility on systems where glob() doesn't work right.
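A self-contained sketch of such a helper (hypothetical name C<globdir>,
using opendir/readdir just as the real function does):

```perl
use strict;
use warnings;

# Return every entry in a directory, as a portable stand-in for glob().
sub globdir {
    my ($dir) = @_;
    opendir my $dh, $dir or die "Can't opendir $dir: $!";
    my @files = readdir $dh;
    closedir $dh;
    return @files;
}
```

For example, C<globdir('.')> lists everything in the current directory,
including the C<.> and C<..> entries.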
379 my @f = readdir DIRH;
385 =item B<_run_all_tests>
387 my($total, $failed) = _run_all_tests(@test_files);
389 Runs all the given @test_files (as runtests()) but does it quietly (no
390 report). $total is a hash ref summary of all the tests run. Its keys
393 bonus Number of individual todo tests unexpectedly passed
max Number of individual tests run
395 ok Number of individual tests passed
396 sub_skipped Number of individual tests skipped
397 todo Number of individual todo tests
files Number of test files run
400 good Number of test files passed
401 bad Number of test files failed
402 tests Number of test files originally given
403 skipped Number of test files skipped
If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test run.
408 $failed is a hash ref of all the test scripts which failed. Each key
409 is the name of a test script, each value is another hash representing
410 how that script failed. Its keys are these:
412 name Name of the test which failed
413 estat Script's exit value
414 wstat Script's wait status
415 max Number of individual tests
416 failed Number which failed
417 percent Percentage of tests which failed
418 canon List of tests which failed (as string).
420 Needless to say, $failed should be empty if everything passed.
422 B<NOTE> Currently this function is still noisy. I'm working on it.
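As an illustration, here is sample data shaped like one $failed entry
(values taken from the failure-summary example above, not from a real run),
and code that walks it:

```perl
use strict;
use warnings;

# Sample data mirroring the documented keys of a $failed entry.
my %failed = (
    't/waterloo.t' => {
        name    => 't/waterloo.t',
        estat   => 3,        # script's exit value
        wstat   => 768,      # script's wait status
        max     => 20,       # number of individual tests
        failed  => 10,       # number which failed
        percent => 50.00,    # percentage of tests which failed
        canon   => '1 3 5 7 9 11 13 15 17 19',
    },
);

for my $script ( sort keys %failed ) {
    my $t = $failed{$script};
    printf "%s: failed %d/%d tests (%s)\n",
        $t->{name}, $t->{failed}, $t->{max}, $t->{canon};
}
```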
440 tests => scalar @tests,
447 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
my $t_start = Benchmark->new;
450 my $width = _leader_width(@tests);
451 foreach my $tfile (@tests) {
452 $Last_ML_Print = 0; # so each test prints at least once
453 my($leader, $ml) = _mk_leader($tfile, $width);
459 $Strap->{_seen_header} = 0;
460 my %results = $Strap->analyze_file($tfile) or
461 do { warn "$Strap->{error}\n"; next };
463 # state of the current test.
464 my @failed = grep { !$results{details}[$_-1]{ok} }
465 1..@{$results{details}};
468 'next' => $Strap->{'next'},
469 max => $results{max},
471 bonus => $results{bonus},
472 skipped => $results{skip},
473 skip_reason => $results{skip_reason},
474 skip_all => $Strap->{skip_all},
478 $tot{bonus} += $results{bonus};
479 $tot{max} += $results{max};
480 $tot{ok} += $results{ok};
481 $tot{todo} += $results{todo};
482 $tot{sub_skipped} += $results{skip};
484 my($estatus, $wstatus) = @results{qw(exit wait)};
486 if ($results{passing}) {
487 if ($test{max} and $test{skipped} + $test{bonus}) {
489 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
491 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
493 print "$test{ml}ok\n ".join(', ', @msg)."\n";
494 } elsif ($test{max}) {
495 print "$test{ml}ok\n";
496 } elsif (defined $test{skip_all} and length $test{skip_all}) {
497 print "skipped\n all skipped: $test{skip_all}\n";
500 print "skipped\n all skipped: no reason given\n";
506 # List unrun tests as failures.
507 if ($test{'next'} <= $test{max}) {
508 push @{$test{failed}}, $test{'next'}..$test{max};
510 # List overruns as failures.
512 my $details = $results{details};
513 foreach my $overrun ($test{max}+1..@$details)
515 next unless ref $details->[$overrun-1];
516 push @{$test{failed}}, $overrun
521 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
523 $failedtests{$tfile}{name} = $tfile;
525 elsif($results{seen}) {
526 if (@{$test{failed}} and $test{max}) {
527 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
529 print "$test{ml}$txt";
530 $failedtests{$tfile} = { canon => $canon,
532 failed => scalar @{$test{failed}},
534 percent => 100*(scalar @{$test{failed}})/$test{max},
539 print "Don't know which tests failed: got $test{ok} ok, ".
540 "expected $test{max}\n";
541 $failedtests{$tfile} = { canon => '??',
552 print "FAILED before any test output arrived\n";
554 $failedtests{$tfile} = { canon => '??',
565 if (defined $Files_In_Dir) {
566 my @new_dir_files = _globdir $Files_In_Dir;
567 if (@new_dir_files != @dir_files) {
569 @f{@new_dir_files} = (1) x @new_dir_files;
570 delete @f{@dir_files};
571 my @f = sort keys %f;
572 print "LEAKED FILES: @f\n";
573 @dir_files = @new_dir_files;
$tot{bench} = timediff(Benchmark->new, $t_start);
579 $Strap->_restore_PERL5LIB;
581 return(\%tot, \%failedtests);
586 my($leader, $ml) = _mk_leader($test_file, $width);
588 Generates the 't/foo........' $leader for the given $test_file as well
589 as a similar version which will overwrite the current line (by use of
\r and such). $ml may be empty if Test::Harness doesn't think you're
on a TTY.
593 The $width is the width of the "yada/blah.." string.
598 my($te, $width) = @_;
602 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
603 my $blank = (' ' x 77);
604 my $leader = "$te" . '.' x ($width - length($te));
607 $ml = "\r$blank\r$leader"
608 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
610 return($leader, $ml);
613 =item B<_leader_width>
615 my($width) = _leader_width(@test_files);
Calculates how wide the leader should be based on the length of the
longest test name.
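The calculation can be sketched on its own (hypothetical name
C<leader_width>, mirroring the arithmetic in the method below):

```perl
use strict;
use warnings;

# Width = longest test name, plus three dots, minus the longest suffix
# (the ".t" suffix is trimmed from the displayed leader).
sub leader_width {
    my ( $maxlen, $maxsuflen ) = ( 0, 0 );
    for my $name (@_) {
        my $suf    = $name =~ /\.(\w+)$/ ? $1 : '';
        my $len    = length $name;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    return $maxlen + 3 - $maxsuflen;
}
```

For example, leader_width('t/ok.t', 't/waterloo.t') is 14: the longest name
is 12 characters, plus 3 dots, minus the 1-character "t" suffix.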
626 my $suf = /\.(\w+)$/ ? $1 : '';
628 my $suflen = length $suf;
629 $maxlen = $len if $len > $maxlen;
630 $maxsuflen = $suflen if $suflen > $maxsuflen;
632 # + 3 : we want three dots between the test name and the "ok"
633 return $maxlen + 3 - $maxsuflen;
638 my($tot, $failedtests) = @_;
641 my $bonusmsg = _bonusmsg($tot);
644 print "All tests successful$bonusmsg.\n";
645 } elsif (!$tot->{tests}){
646 die "FAILED--no tests were run for some reason.\n";
647 } elsif (!$tot->{max}) {
648 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
649 die "FAILED--$tot->{tests} test $blurb could be run, ".
650 "alas--no output ever seen\n";
652 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
653 my $percent_ok = 100*$tot->{ok}/$tot->{max};
654 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
655 $tot->{max} - $tot->{ok}, $tot->{max},
658 my($fmt_top, $fmt) = _create_fmts($failedtests);
660 # Now write to formats
661 for my $script (sort keys %$failedtests) {
662 $Curtest = $failedtests->{$script};
666 $bonusmsg =~ s/^,\s*//;
667 print "$bonusmsg.\n" if $bonusmsg;
668 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
673 printf("Files=%d, Tests=%d, %s\n",
674 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
679 $Strap->{callback} = sub {
680 my($self, $line, $type, $totals) = @_;
681 print $line if $Verbose;
683 my $meth = $Handlers{$type};
684 $meth->($self, $line, $type, $totals) if $meth;
688 $Handlers{header} = sub {
689 my($self, $line, $type, $totals) = @_;
691 warn "Test header seen more than once!\n" if $self->{_seen_header};
693 $self->{_seen_header}++;
695 warn "1..M can only appear at the beginning or end of tests\n"
696 if $totals->{seen} &&
697 $totals->{max} < $totals->{seen};
700 $Handlers{test} = sub {
701 my($self, $line, $type, $totals) = @_;
703 my $curr = $totals->{seen};
704 my $next = $self->{'next'};
705 my $max = $totals->{max};
706 my $detail = $totals->{details}[-1];
708 if( $detail->{ok} ) {
709 _print_ml_less("ok $curr/$max");
711 if( $detail->{type} eq 'skip' ) {
712 $totals->{skip_reason} = $detail->{reason}
713 unless defined $totals->{skip_reason};
714 $totals->{skip_reason} = 'various reasons'
715 if $totals->{skip_reason} ne $detail->{reason};
719 _print_ml("NOK $curr");
722 if( $curr > $next ) {
723 print "Test output counter mismatch [test $curr]\n";
725 elsif( $curr < $next ) {
726 print "Confused test output: test $curr answered after ".
727 "test ", $next - 1, "\n";
732 $Handlers{bailout} = sub {
733 my($self, $line, $type, $totals) = @_;
735 die "FAILED--Further testing stopped" .
736 ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
741 print join '', $ML, @_ if $ML;
# For slow connections, we save lots of bandwidth by printing only once
# per second.
748 if( $Last_ML_Print != time ) {
750 $Last_ML_Print = time;
758 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
759 " UNEXPECTEDLY SUCCEEDED)")
762 if ($tot->{skipped}) {
763 $bonusmsg .= ", $tot->{skipped} test"
764 . ($tot->{skipped} != 1 ? 's' : '');
765 if ($tot->{sub_skipped}) {
766 $bonusmsg .= " and $tot->{sub_skipped} subtest"
767 . ($tot->{sub_skipped} != 1 ? 's' : '');
769 $bonusmsg .= ' skipped';
771 elsif ($tot->{sub_skipped}) {
772 $bonusmsg .= ", $tot->{sub_skipped} subtest"
773 . ($tot->{sub_skipped} != 1 ? 's' : '')
780 # Test program go boom.
781 sub _dubious_return {
782 my($test, $tot, $estatus, $wstatus) = @_;
783 my ($failed, $canon, $percent) = ('??', '??');
785 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
786 "(wstat %d, 0x%x)\n",
788 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
790 if (corestatus($wstatus)) { # until we have a wait module
791 if ($Have_Devel_Corestack) {
792 Devel::CoreStack::stack($^X);
794 print "\ttest program seems to have generated a core\n";
801 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
802 print "\tafter all the subtests completed successfully\n";
804 $failed = 0; # But we do not set $canon!
807 push @{$test->{failed}}, $test->{'next'}..$test->{max};
808 $failed = @{$test->{failed}};
809 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
810 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
815 return { canon => $canon, max => $test->{max} || '??',
818 estat => $estatus, wstat => $wstatus,
824 my($failedtests) = @_;
826 my $failed_str = "Failed Test";
827 my $middle_str = " Stat Wstat Total Fail Failed ";
828 my $list_str = "List of Failed";
830 # Figure out our longest name string for formatting purposes.
831 my $max_namelen = length($failed_str);
832 foreach my $script (keys %$failedtests) {
833 my $namelen = length $failedtests->{$script}->{name};
834 $max_namelen = $namelen if $namelen > $max_namelen;
837 my $list_len = $Columns - length($middle_str) - $max_namelen;
838 if ($list_len < length($list_str)) {
839 $list_len = length($list_str);
840 $max_namelen = $Columns - length($middle_str) - $list_len;
841 if ($max_namelen < length($failed_str)) {
842 $max_namelen = length($failed_str);
843 $Columns = $max_namelen + length($middle_str) + $list_len;
847 my $fmt_top = "format STDOUT_TOP =\n"
848 . sprintf("%-${max_namelen}s", $failed_str)
854 my $fmt = "format STDOUT =\n"
855 . "@" . "<" x ($max_namelen - 1)
856 . " @>> @>>>> @>>>> @>>> ^##.##% "
857 . "^" . "<" x ($list_len - 1) . "\n"
858 . '{ $Curtest->{name}, $Curtest->{estat},'
859 . ' $Curtest->{wstat}, $Curtest->{max},'
860 . ' $Curtest->{failed}, $Curtest->{percent},'
861 . ' $Curtest->{canon}'
863 . "~~" . " " x ($Columns - $list_len - 2) . "^"
864 . "<" x ($list_len - 1) . "\n"
865 . '$Curtest->{canon}'
873 return($fmt_top, $fmt);
877 my $tried_devel_corestack;
883 eval { # we may not have a WCOREDUMP
884 local $^W = 0; # *.ph files are often *very* noisy
886 $did_core = WCOREDUMP($st);
889 $did_core = $st & 0200;
892 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
893 unless $tried_devel_corestack++;
899 sub canonfailed ($@) {
900 my($max,$skipped,@failed) = @_;
902 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
903 my $failed = @failed;
907 my $last = $min = shift @failed;
910 for (@failed, $failed[-1]) { # don't forget the last one
911 if ($_ > $last+1 || $_ == $last) {
915 push @canon, "$min-$last";
922 push @result, "FAILED tests @canon\n";
923 $canon = join ' ', @canon;
925 push @result, "FAILED test $last\n";
929 push @result, "\tFailed $failed/$max tests, ";
931 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
933 push @result, "?% okay";
935 my $ender = 's' x ($skipped > 1);
936 my $good = $max - $failed - $skipped;
938 my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";
940 my $goodper = sprintf("%.2f",100*($good/$max));
941 $skipmsg .= "$goodper%)";
945 push @result, $skipmsg;
948 my $txt = join "", @result;
965 C<&runtests> is exported by Test::Harness by default.
967 C<$verbose> and C<$switches> are exported upon request.
974 =item C<All tests successful.\nFiles=%d, Tests=%d, %s>
If all tests are successful, some statistics about the performance are
printed.
For any single script that has failing subtests, statistics like the
above are printed.
984 =item C<Test returned status %d (wstat %d)>
For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.
989 =item C<Failed 1 test, %.2f%% okay. %s>
991 =item C<Failed %d/%d tests, %.2f%% okay. %s>
If not all tests were successful, the script dies with one of the
above messages.
996 =item C<FAILED--Further testing stopped: %s>
998 If a single subtest decides that further testing will not make sense,
999 the script dies with this message.
1007 =item C<HARNESS_ACTIVE>
1009 Harness sets this before executing the individual tests. This allows
1010 the tests to determine if they are being executed through the harness
1011 or by any other means.
1013 =item C<HARNESS_COLUMNS>
1015 This value will be used for the width of the terminal. If it is not
1016 set then it will default to C<COLUMNS>. If this is not set, it will
1017 default to 80. Note that users of Bourne-sh based shells will need to
1018 C<export COLUMNS> for this module to use that variable.
1020 =item C<HARNESS_COMPILE_TEST>
1022 When true it will make harness attempt to compile the test using
1023 C<perlcc> before running it.
B<NOTE> This currently only works when sitting in the perl source
directory!
1028 =item C<HARNESS_FILELEAK_IN_DIR>
1030 When set to the name of a directory, harness will check after each
1031 test whether new files appeared in that directory, and report them as
1033 LEAKED FILES: scr.tmp 0 my.db
If relative, the directory name is interpreted relative to the current
directory at the moment runtests() was called. Putting an absolute path
into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
1039 =item C<HARNESS_IGNORE_EXITCODE>
1041 Makes harness ignore the exit status of child processes when defined.
1043 =item C<HARNESS_NOTTY>
When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console. You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns. Some consoles may not handle carriage returns properly
(which results in a somewhat messy output).
1051 =item C<HARNESS_PERL_SWITCHES>
1053 Its value will be prepended to the switches used to invoke perl on
1054 each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
1055 run all tests with all warnings enabled.
1057 =item C<HARNESS_VERBOSE>
1059 If true, Test::Harness will output the verbose results of running
1060 its tests. Setting $Test::Harness::verbose will override this.
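The environment variables described above can also be set from Perl before
calling runtests(); for example:

```perl
use strict;
use warnings;
use Test::Harness;

# Configure the harness via its environment variables
# (the values here are examples only).
local $ENV{HARNESS_VERBOSE}       = 1;     # echo each script's output
local $ENV{HARNESS_PERL_SWITCHES} = '-W';  # all warnings in every test
local $ENV{HARNESS_NOTTY}         = 1;     # plain output, no \r updates

my @test_files = @ARGV;
runtests(@test_files) if @test_files;
```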
Here's how Test::Harness tests itself:
1068 $ cd ~/src/devel/Test-Harness
1069 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1070 $verbose=0; runtests @ARGV;' t/*.t
1071 Using /home/schwern/src/devel/Test-Harness/blib
1072 t/base..............ok
1073 t/nonumbers.........ok
1074 t/ok................ok
1075 t/test-harness......ok
1076 All tests successful.
1077 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
1081 L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1082 the underlying timing routines, L<Devel::CoreStack> to generate core
1083 dumps from failed tests and L<Devel::Cover> for test coverage
Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years.
1093 Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1097 This program is free software; you can redistribute it and/or
1098 modify it under the same terms as Perl itself.
1100 See F<http://www.perl.com/perl/misc/Artistic.html>
1105 Provide a way of running tests quietly (ie. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which, rather than printing its output, returns raw data
on the state of the tests. (Partially done in Test::Harness::Straps)
1110 Fix HARNESS_COMPILE_TEST without breaking its core usage.
1112 Figure a way to report test names in the failure summary.
1114 Rework the test summary so long test names are not truncated as badly.
1115 (Partially done with new skip test styles)
1117 Deal with VMS's "not \nok 4\n" mistake.
1119 Add option for coverage analysis.
Keep whittling away at _run_all_tests().
1125 Clean up how the summary is printed. Get rid of those damned formats.
HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.