1 # -*- Mode: cperl; cperl-indent-level: 4 -*-
2 # $Id: Harness.pm,v 1.31 2002/05/18 20:19:06 schwern Exp $
7 use Test::Harness::Straps;
8 use Test::Harness::Assert;
14 use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
15 $Columns $verbose $switches $ML
16 @ISA @EXPORT @EXPORT_OK
19 # Backwards compatibility for exportable variable names.
21 *switches = \$Switches;
23 $Have_Devel_Corestack = 0;
27 $ENV{HARNESS_ACTIVE} = 1;
31 delete $ENV{HARNESS_ACTIVE};
34 # Some experimental versions of OS/2 build have broken $?
35 my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
37 my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
39 my $Strap = Test::Harness::Straps->new;
42 @EXPORT = qw(&runtests);
43 @EXPORT_OK = qw($verbose $switches);
45 $Verbose = $ENV{HARNESS_VERBOSE} || 0;
47 $Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
48 $Columns--; # Some shells have trouble with a full line of text.
53 Test::Harness - run perl standard test scripts with statistics
59 runtests(@test_files);
63 B<STOP!> If all you want to do is write a test script, consider using
64 Test::Simple. Otherwise, read on.
66 (By using the Test module, you can write test scripts without
67 knowing the exact output this module expects. However, if you need to
68 know the specifics, read on!)
70 Perl test scripts print to standard output C<"ok N"> for each single
71 test, where C<N> is an increasing sequence of integers. The first line
72 output by a standard test script is C<"1..M"> with C<M> being the
73 number of tests that should be run within the test
74 script. Test::Harness::runtests(@tests) runs all the test scripts
75 named as arguments and checks standard output for the expected
78 After all tests have been performed, runtests() prints some
79 performance statistics that are computed by the Benchmark module.
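To make this concrete, here is a sketch of a hand-rolled two-test script; it uses no test module, and the comparisons are invented for illustration:

```perl
# A hand-rolled test script emitting exactly the output Test::Harness expects.
my @lines = ("1..2\n");                        # header: two tests planned
push @lines, (1 + 1 == 2)     ? "ok 1\n" : "not ok 1\n";
push @lines, ("foo" eq "foo") ? "ok 2\n" : "not ok 2\n";
print @lines;
```

Saved as, say, t/basic.t, both tests pass and runtests() reports the file as ok.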
81 =head2 The test script output
83 The following explains how Test::Harness interprets the output of your
90 This header tells how many tests there will be. For example, C<1..10>
91 means you plan on running 10 tests. This is a safeguard in case your
92 test dies quietly in the middle of its run.
94 It should be the first non-comment line output by your test program.
96 In certain instances, you may not know how many tests you will
97 ultimately be running. In this case, it is permitted for the 1..M
98 header to appear as the B<last> line output by your test (again, it
99 can be followed by further comments).
101 Under B<no> circumstances should 1..M appear in the middle of your
102 output or more than once.
105 =item B<'ok', 'not ok'. Ok?>
107 Any output from the test script to standard error is ignored and
108 bypassed, and will thus be seen by the user. Lines written to standard
109 output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
110 runtests(). All other lines are discarded.
112 C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
115 =item B<test numbers>
117 Perl normally expects the 'ok' or 'not ok' to be followed by a test
118 number. It is tolerated if the test numbers after 'ok' are
119 omitted. In this case Test::Harness temporarily maintains its own
120 counter until the script supplies test numbers again. So the following
135 Failed 3/6 tests, 50.00% okay
139 Anything after the test number but before the # is considered to be
140 the name of the test.
142 ok 42 this is the name of the test
144 Currently, Test::Harness does nothing with this information.
146 =item B<Skipping tests>
148 If the standard output line contains the substring C< # Skip> (with
149 variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
150 counted as a skipped test. If the whole test script succeeds, the
151 count of skipped tests is included in the generated output.
152 C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
155 ok 23 # skip Insufficient flogiston pressure.
157 Similarly, one can include an explanation in a C<1..0> line
158 emitted if the test script is skipped completely:
160 1..0 # Skipped: no leverage found
162 If you don't have any comment on skipping, just print C<1..0> with
163 nothing after it. Test::Harness will say something like this:
165 op/64bitint..............skipped
166 skipped: no reason given
171 If the standard output line contains the substring C< # TODO> after
172 C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
173 afterwards is the thing that has to be done before this test will
176 not ok 13 # TODO harness the power of the atom
180 Alternatively, you can specify a list of what tests are todo as part
185 This only works if the header appears at the beginning of the test.
187 This style is B<deprecated>.
191 These tests represent a feature to be implemented or a bug to be fixed
192 and act as something of an executable "thing to do" list. They are
193 B<not> expected to succeed. Should a todo test begin succeeding,
194 Test::Harness will report it as a bonus. This indicates that whatever
195 you were supposed to do has been done and you should promote this to a
200 As an emergency measure, a test script can decide that further tests
201 are useless (e.g. missing dependencies) and testing should stop
202 immediately. In that case the test script prints the magic words
206 to standard output. Any message after these words will be displayed by
207 C<Test::Harness> as the reason why testing is stopped.
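For example, a script might bail out when a prerequisite is missing. The config-file path below is purely hypothetical:

```perl
# Emit the bail-out line if a prerequisite is missing, otherwise run the test.
# "/etc/flogiston.conf" is a made-up prerequisite, for illustration only.
my $config = "/etc/flogiston.conf";
my $bail   = -e $config ? "" : "Bail out! Cannot find $config\n";
if ($bail) {
    print $bail;           # Test::Harness stops the entire run here
}
else {
    print "1..1\n";
    print "ok 1\n";
}
```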
211 Additional comments may be put into the testing output on their own
212 lines. Comment lines should begin with a '#'; Test::Harness will
216 # Life is good, the sun is shining, RAM is cheap.
218 # got 'Bush' expected 'Gore'
220 =item B<Anything else>
222 Any other output Test::Harness sees will be silently ignored B<BUT WE
223 PLAN TO CHANGE THIS!> If you wish to place additional output in your
224 test script, please use a comment.
231 Test::Harness will honor the C<-T> in the #! line on your test files. So
232 if you begin a test with:
236 the test will be run with taint mode on.
239 =head2 Configuration variables.
241 These variables can be used to configure the behavior of
242 Test::Harness. They are exported on request.
246 =item B<$Test::Harness::verbose>
248 The global variable $Test::Harness::verbose is exportable and can be
249 used to let runtests() display the standard output of the script
250 without otherwise altering its behavior.
252 =item B<$Test::Harness::switches>
254 The global variable $Test::Harness::switches is exportable and can be
255 used to set perl command line options used for running the test
256 script(s). The default value is C<-w>.
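For example, a driver script might import both and set them before calling runtests(); the C<-w -T> combination below is just an illustration:

```perl
use Test::Harness qw(runtests $verbose $switches);

$verbose  = 1;         # echo each test's complete output
$switches = '-w -T';   # run every test with warnings and taint mode
# runtests(@test_files) called now would use both settings.
```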
263 It will happen: your tests will fail. After you mop up your ego, you
264 can begin examining the summary report:
266 t/base..............ok
267 t/nonumbers.........ok
268 t/ok................ok
269 t/test-harness......ok
270 t/waterloo..........dubious
271 Test returned status 3 (wstat 768, 0x300)
272 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
273 Failed 10/20 tests, 50.00% okay
274 Failed Test Stat Wstat Total Fail Failed List of Failed
275 -----------------------------------------------------------------------
276 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
277 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
279 Everything passed but t/waterloo.t. It failed 10 of 20 tests and
280 exited with non-zero status indicating something dubious happened.
282 The columns in the summary report mean:
288 The test file which failed.
292 If the test exited with non-zero, this is its exit status.
296 The wait status of the test: the raw C<$?>, with the exit status in the high byte and the terminating signal (plus the core-dump flag) in the low byte.
300 Total number of tests expected to run.
304 Number which failed, either from "not ok" or because they never ran.
308 Percentage of the total tests which failed.
310 =item B<List of Failed>
312 A list of the tests which failed. Successive failures may be
313 abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
321 Test::Harness currently has only one function; here it is.
327 my $allok = runtests(@test_files);
329 This runs all the given @test_files and divines whether they passed
330 or failed based on their output to STDOUT (details above). It prints
331 out each individual test which failed along with a summary report and
332 how long it all took.
334 It returns true if everything was ok, false otherwise.
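A complete driver is therefore only a few lines. To keep the sketch below self-contained it writes a trivial passing test to a temporary directory; a real driver would simply pass C<t/*.t>:

```perl
use Test::Harness;
use File::Temp qw(tempdir);

# Write a throwaway passing test so this example can actually run.
my $dir   = tempdir(CLEANUP => 1);
my $tfile = "$dir/basic.t";
open my $fh, '>', $tfile or die "Can't write $tfile: $!";
print $fh qq{print "1..1\\nok 1\\n";\n};
close $fh;

# runtests() prints the report, dies if anything failed, and
# returns true when everything passed.
my $allok = runtests($tfile);
```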
337 This is just _run_all_tests() plus _show_results().
346 my($tot, $failedtests) = _run_all_tests(@tests);
347 _show_results($tot, $failedtests);
349 my $ok = _all_ok($tot);
351 assert(($ok xor keys %$failedtests),
352 q{ok status jives with $failedtests});
361 my $ok = _all_ok(\%tot);
363 Tells you if this test run is overall successful or not.
370 return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
375 my @files = _globdir $dir;
377 Returns all the files in a directory. This is shorthand for backwards
378 compatibility on systems where glob() doesn't work right.
384 my @f = readdir DIRH;
390 =item B<_run_all_tests>
392 my($total, $failed) = _run_all_tests(@test_files);
394 Runs all the given @test_files (as runtests()) but does it quietly (no
395 report). $total is a hash ref summary of all the tests run. Its keys
398 bonus Number of individual todo tests unexpectedly passed
399 max Number of individual tests ran
400 ok Number of individual tests passed
401 sub_skipped Number of individual tests skipped
402 todo Number of individual todo tests
404 files Number of test files ran
405 good Number of test files passed
406 bad Number of test files failed
407 tests Number of test files originally given
408 skipped Number of test files skipped
410 If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
413 $failed is a hash ref of all the test scripts which failed. Each key
414 is the name of a test script, each value is another hash representing
415 how that script failed. Its keys are these:
417 name Name of the test which failed
418 estat Script's exit value
419 wstat Script's wait status
420 max Number of individual tests
421 failed Number which failed
422 percent Percentage of tests which failed
423 canon List of tests which failed (as string).
425 Needless to say, $failed should be empty if everything passed.
427 B<NOTE> Currently this function is still noisy. I'm working on it.
445 tests => scalar @tests,
452 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
453 my $t_start = new Benchmark;
455 my $width = _leader_width(@tests);
456 foreach my $tfile (@tests) {
458 my($leader, $ml) = _mk_leader($tfile, $width);
464 $Strap->{_seen_header} = 0;
465 my %results = $Strap->analyze_file($tfile);
467 # state of the current test.
468 my @failed = grep { !$results{details}[$_-1]{ok} }
469 1..@{$results{details}};
472 'next' => $Strap->{'next'},
473 max => $results{max},
475 bonus => $results{bonus},
476 skipped => $results{skip},
477 skip_reason => $Strap->{_skip_reason},
478 skip_all => $Strap->{skip_all},
482 $tot{bonus} += $results{bonus};
483 $tot{max} += $results{max};
484 $tot{ok} += $results{ok};
485 $tot{todo} += $results{todo};
486 $tot{sub_skipped} += $results{skip};
488 my($estatus, $wstatus) = @results{qw(exit wait)};
491 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
493 $failedtests{$tfile}{name} = $tfile;
495 elsif ($results{passing}) {
496 if ($test{max} and $test{skipped} + $test{bonus}) {
498 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
500 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
502 print "$test{ml}ok\n ".join(', ', @msg)."\n";
503 } elsif ($test{max}) {
504 print "$test{ml}ok\n";
505 } elsif (defined $test{skip_all}) {
506 print "skipped\n all skipped: $test{skip_all}\n";
509 print "\n skipped test on this platform\n";
516 if ($test{'next'} <= $test{max}) {
517 push @{$test{failed}}, $test{'next'}..$test{max};
519 if (@{$test{failed}}) {
520 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
522 print "$test{ml}$txt";
523 $failedtests{$tfile} = { canon => $canon,
525 failed => scalar @{$test{failed}},
527 percent => 100*(scalar @{$test{failed}})/$test{max},
532 print "Don't know which tests failed: got $test{ok} ok, ".
533 "expected $test{max}\n";
534 $failedtests{$tfile} = { canon => '??',
544 } elsif ($test{'next'} == 0) {
545 print "FAILED before any test output arrived\n";
547 $failedtests{$tfile} = { canon => '??',
558 if (defined $Files_In_Dir) {
559 my @new_dir_files = _globdir $Files_In_Dir;
560 if (@new_dir_files != @dir_files) {
562 @f{@new_dir_files} = (1) x @new_dir_files;
563 delete @f{@dir_files};
564 my @f = sort keys %f;
565 print "LEAKED FILES: @f\n";
566 @dir_files = @new_dir_files;
570 $tot{bench} = timediff(new Benchmark, $t_start);
572 $Strap->_restore_PERL5LIB;
574 return(\%tot, \%failedtests);
579 my($leader, $ml) = _mk_leader($test_file, $width);
581 Generates the 't/foo........' $leader for the given $test_file as well
582 as a similar version which will overwrite the current line (by use of
583 \r and such). $ml may be empty if Test::Harness doesn't think you're
586 The $width is the width of the "yada/blah.." string.
591 my($te, $width) = @_;
595 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
596 my $blank = (' ' x 77);
597 my $leader = "$te" . '.' x ($width - length($te));
600 $ml = "\r$blank\r$leader"
601 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
603 return($leader, $ml);
606 =item B<_leader_width>
608 my($width) = _leader_width(@test_files);
610 Calculates how wide the leader should be based on the length of the
619 my $suf = /\.(\w+)$/ ? $1 : '';
621 my $suflen = length $suf;
622 $maxlen = $len if $len > $maxlen;
623 $maxsuflen = $suflen if $suflen > $maxsuflen;
625 # + 3 : we want three dots between the test name and the "ok"
626 return $maxlen + 3 - $maxsuflen;
631 my($tot, $failedtests) = @_;
634 my $bonusmsg = _bonusmsg($tot);
637 print "All tests successful$bonusmsg.\n";
638 } elsif (!$tot->{tests}){
639 die "FAILED--no tests were run for some reason.\n";
640 } elsif (!$tot->{max}) {
641 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
642 die "FAILED--$tot->{tests} test $blurb could be run, ".
643 "alas--no output ever seen\n";
645 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
646 my $percent_ok = 100*$tot->{ok}/$tot->{max};
647 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
648 $tot->{max} - $tot->{ok}, $tot->{max},
651 my($fmt_top, $fmt) = _create_fmts($failedtests);
653 # Now write to formats
654 for my $script (sort keys %$failedtests) {
655 $Curtest = $failedtests->{$script};
659 $bonusmsg =~ s/^,\s*//;
660 print "$bonusmsg.\n" if $bonusmsg;
661 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
666 printf("Files=%d, Tests=%d, %s\n",
667 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
672 $Strap->{callback} = sub {
673 my($self, $line, $type, $totals) = @_;
674 print $line if $Verbose;
676 my $meth = $Handlers{$type};
677 $meth->($self, $line, $type, $totals) if $meth;
681 $Handlers{header} = sub {
682 my($self, $line, $type, $totals) = @_;
684 warn "Test header seen more than once!\n" if $self->{_seen_header};
686 $self->{_seen_header}++;
688 warn "1..M can only appear at the beginning or end of tests\n"
689 if $totals->{seen} &&
690 $totals->{max} < $totals->{seen};
693 $Handlers{test} = sub {
694 my($self, $line, $type, $totals) = @_;
696 my $curr = $totals->{seen};
697 my $next = $self->{'next'};
698 my $max = $totals->{max};
699 my $detail = $totals->{details}[-1];
701 if( $detail->{ok} ) {
702 _print_ml("ok $curr/$max");
704 if( $detail->{type} eq 'skip' ) {
705 $self->{_skip_reason} = $detail->{reason}
706 unless defined $self->{_skip_reason};
707 $self->{_skip_reason} = 'various reasons'
708 if $self->{_skip_reason} ne $detail->{reason};
712 _print_ml("NOK $curr");
715 if( $curr > $next ) {
716 print "Test output counter mismatch [test $curr]\n";
718 elsif( $curr < $next ) {
719 print "Confused test output: test $curr answered after ".
720 "test ", $next - 1, "\n";
725 $Handlers{bailout} = sub {
726 my($self, $line, $type, $totals) = @_;
728 die "FAILED--Further testing stopped" .
729 ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
734 print join '', $ML, @_ if $ML;
742 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
743 " UNEXPECTEDLY SUCCEEDED)")
746 if ($tot->{skipped}) {
747 $bonusmsg .= ", $tot->{skipped} test"
748 . ($tot->{skipped} != 1 ? 's' : '');
749 if ($tot->{sub_skipped}) {
750 $bonusmsg .= " and $tot->{sub_skipped} subtest"
751 . ($tot->{sub_skipped} != 1 ? 's' : '');
753 $bonusmsg .= ' skipped';
755 elsif ($tot->{sub_skipped}) {
756 $bonusmsg .= ", $tot->{sub_skipped} subtest"
757 . ($tot->{sub_skipped} != 1 ? 's' : '')
764 # Test program go boom.
765 sub _dubious_return {
766 my($test, $tot, $estatus, $wstatus) = @_;
767 my ($failed, $canon, $percent) = ('??', '??');
769 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
770 "(wstat %d, 0x%x)\n",
772 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
774 if (corestatus($wstatus)) { # until we have a wait module
775 if ($Have_Devel_Corestack) {
776 Devel::CoreStack::stack($^X);
778 print "\ttest program seems to have generated a core\n";
785 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
786 print "\tafter all the subtests completed successfully\n";
788 $failed = 0; # But we do not set $canon!
791 push @{$test->{failed}}, $test->{'next'}..$test->{max};
792 $failed = @{$test->{failed}};
793 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
794 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
799 return { canon => $canon, max => $test->{max} || '??',
802 estat => $estatus, wstat => $wstatus,
808 my($failedtests) = @_;
810 my $failed_str = "Failed Test";
811 my $middle_str = " Stat Wstat Total Fail Failed ";
812 my $list_str = "List of Failed";
814 # Figure out our longest name string for formatting purposes.
815 my $max_namelen = length($failed_str);
816 foreach my $script (keys %$failedtests) {
817 my $namelen = length $failedtests->{$script}->{name};
818 $max_namelen = $namelen if $namelen > $max_namelen;
821 my $list_len = $Columns - length($middle_str) - $max_namelen;
822 if ($list_len < length($list_str)) {
823 $list_len = length($list_str);
824 $max_namelen = $Columns - length($middle_str) - $list_len;
825 if ($max_namelen < length($failed_str)) {
826 $max_namelen = length($failed_str);
827 $Columns = $max_namelen + length($middle_str) + $list_len;
831 my $fmt_top = "format STDOUT_TOP =\n"
832 . sprintf("%-${max_namelen}s", $failed_str)
838 my $fmt = "format STDOUT =\n"
839 . "@" . "<" x ($max_namelen - 1)
840 . " @>> @>>>> @>>>> @>>> ^##.##% "
841 . "^" . "<" x ($list_len - 1) . "\n"
842 . '{ $Curtest->{name}, $Curtest->{estat},'
843 . ' $Curtest->{wstat}, $Curtest->{max},'
844 . ' $Curtest->{failed}, $Curtest->{percent},'
845 . ' $Curtest->{canon}'
847 . "~~" . " " x ($Columns - $list_len - 2) . "^"
848 . "<" x ($list_len - 1) . "\n"
849 . '$Curtest->{canon}'
857 return($fmt_top, $fmt);
861 my $tried_devel_corestack;
867 local $^W = 0; # *.ph files are often *very* noisy
871 my $did_core = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
873 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
874 unless $tried_devel_corestack++;
880 sub canonfailed ($@) {
881 my($max,$skipped,@failed) = @_;
883 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
884 my $failed = @failed;
888 my $last = $min = shift @failed;
891 for (@failed, $failed[-1]) { # don't forget the last one
892 if ($_ > $last+1 || $_ == $last) {
896 push @canon, "$min-$last";
903 push @result, "FAILED tests @canon\n";
904 $canon = join ' ', @canon;
906 push @result, "FAILED test $last\n";
910 push @result, "\tFailed $failed/$max tests, ";
911 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
912 my $ender = 's' x ($skipped > 1);
913 my $good = $max - $failed - $skipped;
914 my $goodper = sprintf("%.2f",100*($good/$max));
915 push @result, " (less $skipped skipped test$ender: $good okay, ".
919 my $txt = join "", @result;
936 C<&runtests> is exported by Test::Harness by default.
938 C<$verbose> and C<$switches> are exported upon request.
945 =item C<All tests successful.\nFiles=%d, Tests=%d, %s>
947 If all tests are successful, some statistics about the performance are
950 =item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
952 For any single script that has failing subtests, statistics like the
955 =item C<Test returned status %d (wstat %d)>
957 For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
958 and C<$?> are printed in a message similar to the above.
960 =item C<Failed 1 test, %.2f%% okay. %s>
962 =item C<Failed %d/%d tests, %.2f%% okay. %s>
964 If not all tests were successful, the script dies with one of the
967 =item C<FAILED--Further testing stopped: %s>
969 If a single subtest decides that further testing will not make sense,
970 the script dies with this message.
978 =item C<HARNESS_ACTIVE>
980 Harness sets this before executing the individual tests. This allows
981 the tests to determine if they are being executed through the harness
982 or by any other means.
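A test script can use this, for example, to skip interactive checks when run under the harness; the test number and skip reason below are invented:

```perl
# Inside a test script: pick a non-interactive path under the harness.
my $under_harness = $ENV{HARNESS_ACTIVE} ? 1 : 0;
my $line = $under_harness
    ? "ok 5 # skip no interactive prompts under the harness\n"
    : "ok 5\n";        # the interactive check would run here instead
print $line;
```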
984 =item C<HARNESS_COLUMNS>
986 This value will be used for the width of the terminal. If it is not
987 set then it will default to C<COLUMNS>. If this is not set, it will
988 default to 80. Note that users of Bourne-sh based shells will need to
989 C<export COLUMNS> for this module to use that variable.
991 =item C<HARNESS_COMPILE_TEST>
993 When true, it makes harness attempt to compile the test using
994 C<perlcc> before running it.
996 B<NOTE> This currently only works when sitting in the perl source
999 =item C<HARNESS_FILELEAK_IN_DIR>
1001 When set to the name of a directory, harness will check after each
1002 test whether new files appeared in that directory, and report them as
1004 LEAKED FILES: scr.tmp 0 my.db
1006 If relative, the directory name is interpreted relative to the current
1007 directory at the moment runtests() was called. Putting an absolute path into
1008 C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
1010 =item C<HARNESS_IGNORE_EXITCODE>
1012 When defined, makes harness ignore the exit status of child processes.
1014 =item C<HARNESS_NOTTY>
1016 When set to a true value, forces Test::Harness to behave as though STDOUT were
1017 not a console. You may need to set this if you don't want harness to
1018 output more frequent progress messages using carriage returns. Some
1019 consoles may not handle carriage returns properly (which results in a
1020 somewhat messy output).
1022 =item C<HARNESS_PERL_SWITCHES>
1024 Its value will be prepended to the switches used to invoke perl on
1025 each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
1026 run all tests with all warnings enabled.
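The same effect is available from within a driver script; this sketch reuses the C<-W> example:

```perl
# Prepend -W to the perl command line of every test file run afterwards.
local $ENV{HARNESS_PERL_SWITCHES} = '-W';
# Any runtests(@test_files) call made now runs each test under perl -W.
```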
1028 =item C<HARNESS_VERBOSE>
1030 If true, Test::Harness will output the verbose results of running
1031 its tests. Setting $Test::Harness::verbose will override this.
1037 Here's how Test::Harness tests itself:
1039 $ cd ~/src/devel/Test-Harness
1040 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1041 $verbose=0; runtests @ARGV;' t/*.t
1042 Using /home/schwern/src/devel/Test-Harness/blib
1043 t/base..............ok
1044 t/nonumbers.........ok
1045 t/ok................ok
1046 t/test-harness......ok
1047 All tests successful.
1048 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
1052 L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1053 the underlying timing routines, L<Devel::CoreStack> to generate core
1054 dumps from failed tests and L<Devel::Cover> for test coverage
1059 Either Tim Bunce or Andreas Koenig, we don't know. What we know for
1060 sure is that it was inspired by Larry Wall's TEST script that came
1061 with perl distributions for ages. Numerous anonymous contributors
1062 exist. Andreas Koenig held the torch for many years.
1064 Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1068 Provide a way of running tests quietly (i.e. no printing) for automated
1069 validation of tests. This will probably take the form of a version
1070 of runtests() which rather than printing its output returns raw data
1071 on the state of the tests. (Partially done in Test::Harness::Straps)
1073 Fix HARNESS_COMPILE_TEST without breaking its core usage.
1075 Figure a way to report test names in the failure summary.
1077 Rework the test summary so long test names are not truncated as badly.
1078 (Partially done with new skip test styles)
1080 Deal with VMS's "not \nok 4\n" mistake.
1082 Add option for coverage analysis.
1085 Keep whittling away at _run_all_tests()
1088 Clean up how the summary is printed. Get rid of those damned formats.
1092 HARNESS_COMPILE_TEST currently assumes it's run from the Perl source