# -*- Mode: cperl; cperl-indent-level: 4 -*-
use Test::Harness::Straps;
use Test::Harness::Assert;
    @ISA @EXPORT @EXPORT_OK
    $Verbose $Switches $Debug
    $verbose $switches $debug
    eval "use Time::HiRes 'time'";
    $has_time_hires = !$@;
Test::Harness - Run Perl standard test scripts with statistics
$VERSION = eval $VERSION;

# Backwards compatibility for exportable variable names.
*switches = *Switches;
$ENV{HARNESS_ACTIVE}  = 1;
$ENV{HARNESS_VERSION} = $VERSION;
    delete $ENV{HARNESS_ACTIVE};
    delete $ENV{HARNESS_VERSION};
my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
$Strap = Test::Harness::Straps->new;
sub strap { return $Strap };
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw(&execute_tests $verbose $switches);
$Verbose = $ENV{HARNESS_VERBOSE} || 0;
$Debug   = $ENV{HARNESS_DEBUG} || 0;
$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;             # Some shells have trouble with a full line of text.
$Timer   = $ENV{HARNESS_TIMER} || 0;
    runtests(@test_files);
B<STOP!> If all you want to do is write a test script, consider
using Test::Simple. Test::Harness is the module that reads the
output from Test::Simple, Test::More and other modules based on
Test::Builder. You don't need to know about Test::Harness to use
those modules.

Test::Harness runs tests and expects output from the test in a
certain format. That format is called TAP, the Test Anything
Protocol. It is defined in L<Test::Harness::TAP>.

C<Test::Harness::runtests(@tests)> runs all the test scripts named
as arguments and checks standard output for the expected strings
of output.

The F<prove> utility is a thin wrapper around Test::Harness.

Test::Harness will honor the C<-T> or C<-t> in the #! line on your
test files. So if you begin a test with:

    #!perl -T

the test will be run with taint mode on.

=head2 Configuration variables.

These variables can be used to configure the behavior of
Test::Harness. They are exported on request.
=item C<$Test::Harness::Verbose>

The package variable C<$Test::Harness::Verbose> is exportable and can be
used to let C<runtests()> display the standard output of the script
without altering the behavior otherwise. The F<prove> utility's C<-v>
flag will set this.

=item C<$Test::Harness::switches>

The package variable C<$Test::Harness::switches> is exportable and can be
used to set perl command line options used for running the test
script(s). The default value is C<-w>. It overrides C<HARNESS_SWITCHES>.

=item C<$Test::Harness::Timer>

If set to true, and C<Time::HiRes> is available, print elapsed seconds
after each test file.
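For example, here is a minimal sketch of setting these variables before
a run (the file name F<t/example.t> is purely illustrative):

    use Test::Harness qw(&runtests $verbose);

    $verbose = 1;                          # show raw test output
    $Test::Harness::Switches = '-w -Ilib'; # switches for each test script
    runtests('t/example.t');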
When tests fail, analyze the summary report:

    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    t/waterloo..........dubious
            Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
            Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
Everything passed but F<t/waterloo.t>. It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.

The columns in the summary report mean:

The test file which failed.

If the test exited with a non-zero status, this is its exit status.

The wait status of the test.

Total number of tests expected to run.

Number which failed, either from "not ok" or because they never ran.

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed. Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).
The following functions are available.

=head2 runtests( @test_files )

This runs all the given I<@test_files> and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok. Otherwise it will C<die()> with
one of the messages in the DIAGNOSTICS section.
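Because C<runtests()> dies when a run fails, a caller that wants to
keep going can trap the failure with C<eval> (a sketch; the glob
pattern is illustrative):

    use Test::Harness qw(&runtests);

    eval { runtests(glob 't/*.t') };
    warn "Test run failed: $@" if $@;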
    my($tot, $failedtests, $todo_passed) = execute_tests(tests => \@tests);
    print get_results($tot, $failedtests, $todo_passed);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});
# my $ok = _all_ok(\%tot);
# Tells you if this test run is overall successful or not.
    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
# Returns all the files in a directory. This is shorthand for backwards
# compatibility on systems where C<glob()> doesn't work right.
    my @f = readdir DIRH;
=head2 execute_tests( tests => \@test_files, out => \*FH )

Runs all the given C<@test_files> (just like C<runtests()>) but
doesn't generate the final report. During testing, progress
information will be written to the currently selected output
filehandle (usually C<STDOUT>), or to the filehandle given by the
C<out> parameter. The C<out> parameter is optional.

Returns a list of two values, C<$total> and C<$failed>, describing the
results. C<$total> is a hash ref summary of all the tests run. Its
keys and values are these:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If C<< $total->{bad} == 0 >> and C<< $total->{max} > 0 >>, you've
got a successful test.

C<$failed> is a hash ref of all the test scripts that failed. Each key
is the name of a test script, each value is another hash representing
how that script failed. Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string)

C<$failed> should be empty if everything passed.
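As a sketch (using only the keys documented above), the return values
can be examined like this:

    use Test::Harness qw(&execute_tests);

    my($total, $failed) = execute_tests(tests => [glob 't/*.t']);

    if ($total->{bad} == 0 && $total->{max} > 0) {
        print "All $total->{ok} subtests in $total->{good} files passed\n";
    }
    for my $script (sort keys %$failed) {
        my $f = $failed->{$script};
        print "$f->{name}: failed $f->{failed}/$f->{max} ($f->{canon})\n";
    }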
    my @tests = @{$args{tests}};
    my $out   = $args{out} || select();

    # We allow filehandles that are symbolic refs
    _autoflush(\*STDERR);
        tests => scalar @tests,
    @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $run_start_time = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        $Last_ML_Print = 0;  # so each test prints at least once
        my($leader, $ml) = _mk_leader($tfile, $width);
        $Strap->{_seen_header} = 0;
        if ( $Test::Harness::Debug ) {
            print $out "# Running: ", $Strap->_command_line($tfile), "\n";
        }
        my $test_start_time = $Timer ? time : 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn $Strap->{error}, "\n"; next };
        my $elapsed;
        if ( $Timer ) {
            $elapsed = time - $test_start_time;
            if ( $has_time_hires ) {
                $elapsed = sprintf( " %8d ms", $elapsed*1000 );
            }
            else {
                $elapsed = sprintf( " %8s s", $elapsed ? $elapsed : "<1" );

        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                     1..@{$results{details}};
        my @todo_pass = grep { $results{details}[$_-1]{actual_ok} &&
                               $results{details}[$_-1]{type} eq 'todo' }
                        1..@{$results{details}};
            'next'      => $Strap->{'next'},
            max         => $results{max},
            todo_pass   => \@todo_pass,
            todo        => $results{todo},
            bonus       => $results{bonus},
            skipped     => $results{skip},
            skip_reason => $results{skip_reason},
            skip_all    => $Strap->{skip_all},
        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};

        if ($results{passing}) {
            # XXX Combine these first two
            if ($test{max} and $test{skipped} + $test{bonus}) {
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                my($txt, $canon) = _canondetail($test{todo}, 0, 'TODO passed',
                                                @{$test{todo_pass}});
                $todo_passed{$tfile} = {
                    failed  => $test{bonus},
                    percent => 100*$test{bonus}/$test{todo},
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded\n$txt");
            }
            print $out "$test{ml}ok$elapsed\n        ".join(', ', @msg)."\n";
        }
        elsif ( $test{max} ) {
            print $out "$test{ml}ok$elapsed\n";
        }
        elsif ( defined $test{skip_all} and length $test{skip_all} ) {
            print $out "skipped\n        all skipped: $test{skip_all}\n";
            print $out "skipped\n        all skipped: no reason given\n";

            # List unrun tests as failures.
            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};
            }
            # List overruns as failures.
            else {
                my $details = $results{details};
                foreach my $overrun ($test{max}+1..@$details) {
                    next unless ref $details->[$overrun-1];
                    push @{$test{failed}}, $overrun
            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                   $estatus, $wstatus);
            $failedtests{$tfile}{name} = $tfile;
        }
        elsif ($results{seen}) {
            if (@{$test{failed}} and $test{max}) {
                my($txt, $canon) = _canondetail($test{max}, $test{skipped}, 'Failed',
                                                @{$test{failed}});
                print $out "$test{ml}$txt";
                $failedtests{$tfile} = { canon   => $canon,
                                         failed  => scalar @{$test{failed}},
                                         percent => 100*(scalar @{$test{failed}})/$test{max},
                print $out "Don't know which tests failed: got $test{ok} ok, ".
                           "expected $test{max}\n";
                $failedtests{$tfile} = { canon => '??',
            print $out "FAILED before any test output arrived\n";
            $failedtests{$tfile} = { canon => '??',

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print $out "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;

    $tot{bench} = timediff(new Benchmark, $run_start_time);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests, \%todo_passed);
# Turns on autoflush for the handle passed
    my $flushy_fh = shift;
    my $old_fh = select $flushy_fh;

=for private _mk_leader

    my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' leader for the given C<$test_file> as well
as a similar version which will overwrite the current line (by use of
\r and such). C<$ml> may be empty if Test::Harness doesn't think you're

The C<$width> is the width of the "yada/blah.." string.

    my($te, $width) = @_;
    $te =~ s/^.*\.t\./\[.t./s;
    my $leader = "$te" . '.' x ($width - length($te));
    if ( -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose ) {
        $ml = "\r" . (' ' x 77) . "\r$leader"
    return($leader, $ml);

=for private _leader_width

    my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the

        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $suflen = length $suf;
        $maxlen    = $len if $len > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
    my $failedtests = shift;
    my $todo_passed = shift;

    my $bonusmsg = _bonusmsg($tot);
        $out .= "All tests successful$bonusmsg.\n";
            my($fmt_top, $fmt) = _create_fmts("Passed Todo", $todo_passed);

            # Now write to formats
            for my $script (sort keys %{$todo_passed||{}}) {
                my $Curtest = $todo_passed->{$script};
                $out .= swrite( $fmt_top );
                $out .= swrite( $fmt, @{ $Curtest }{qw(name estat wstat max failed percent canon)} );
    elsif (!$tot->{tests}) {
        die "FAILED--no tests were run for some reason.\n";
    elsif (!$tot->{max}) {
        my $blurb = $tot->{tests} == 1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},

        my($fmt_top, $fmt1, $fmt2) = _create_fmts("Failed Test", $failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            my $Curtest = $failedtests->{$script};
            $out .= swrite( $fmt_top );
            $out .= swrite( $fmt1, @{ $Curtest }{qw(name estat wstat max failed percent canon)} );
            $out .= swrite( $fmt2, $Curtest->{canon} );
        $bonusmsg =~ s/^,\s*//;
        $out .= "$bonusmsg.\n" if $bonusmsg;
        $out .= "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.$subpct\n";
    $out .= sprintf("Files=%d, Tests=%d, %s\n",
                    $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
    formline($format, @_);
    header  => \&header_handler,
    test    => \&test_handler,
    bailout => \&bailout_handler,
$Strap->{callback} = \&strap_callback;
    my($self, $line, $type, $totals) = @_;
    print $line if $Verbose;

    my $meth = $Handlers{$type};
    $meth->($self, $line, $type, $totals) if $meth;
    my($self, $line, $type, $totals) = @_;

    warn "Test header seen more than once!\n" if $self->{_seen_header};

    $self->{_seen_header}++;

    warn "1..M can only appear at the beginning or end of tests\n"
      if $totals->{seen} &&
         $totals->{max} < $totals->{seen};
    my($self, $line, $type, $totals) = @_;

    my $curr   = $totals->{seen};
    my $next   = $self->{'next'};
    my $max    = $totals->{max};
    my $detail = $totals->{details}[-1];

    if( $detail->{ok} ) {
        _print_ml_less("ok $curr/$max");

        if( $detail->{type} eq 'skip' ) {
            $totals->{skip_reason} = $detail->{reason}
              unless defined $totals->{skip_reason};
            $totals->{skip_reason} = 'various reasons'
              if $totals->{skip_reason} ne $detail->{reason};
        }
    }
    else {
        _print_ml("NOK $curr");

    if( $curr > $next ) {
        print "Test output counter mismatch [test $curr]\n";
    }
    elsif( $curr < $next ) {
        print "Confused test output: test $curr answered after ".
              "test ", $next - 1, "\n";

sub bailout_handler {
    my($self, $line, $type, $totals) = @_;

    die "FAILED--Further testing stopped" .
      ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
    print join '', $ML, @_ if $ML;

# Print updates only once per second.
    my $now = CORE::time;
    if ( $Last_ML_Print != $now ) {
        $Last_ML_Print = $now;
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus, $wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
    if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
        print "\tafter all the subtests completed successfully\n";
        $failed = 0;  # But we do not set $canon!
        push @{$test->{failed}}, $test->{'next'}..$test->{max};
        $failed = @{$test->{failed}};
        (my $txt, $canon) = _canondetail($test->{max}, $test->{skipped}, 'Failed',
                                         @{$test->{failed}});
        $percent = 100*(scalar @{$test->{failed}})/$test->{max};
    return { canon => $canon, max => $test->{max} || '??',
             estat => $estatus, wstat => $wstatus,
    my $failed_str  = shift;
    my $failedtests = shift;

    my($type)  = split /\s/, $failed_str;
    my $short  = substr($type, 0, 4);
    my $total  = $short eq 'Pass' ? 'Todos' : 'Total';
    my $middle_str = " Stat Wstat $total $short $type ";
    my $list_str   = "List of $type";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;

    my $fmt_top = sprintf("%-${max_namelen}s", $failed_str)
    my $fmt1 = "@" . "<" x ($max_namelen - 1)
             . " @>> @>>>> @>>>> @>>> ^##.##% "
             . "^" . "<" x ($list_len - 1) . "\n";
    my $fmt2 = "~~" . " " x ($Columns - $list_len - 2) . "^"
             . "<" x ($list_len - 1) . "\n";

    return($fmt_top, $fmt1, $fmt2);
    @detail = sort { $a <=> $b } grep !$seen{$_}++, @detail;
    my $detail = @detail;
    my $last = $min = shift @detail;
    my $uc_type = uc($type);
    for (@detail, $detail[-1]) {  # don't forget the last one
        if ($_ > $last+1 || $_ == $last) {
            push @canon, ($min == $last) ? $last : "$min-$last";
        push @result, "$uc_type tests @canon\n";
        $canon = join ' ', @canon;
        push @result, "$uc_type test $last\n";
    return (join("", @result), $canon)
    push @result, "\t$type $detail/$max tests, ";
        push @result, sprintf("%.2f", 100*(1-$detail/$max)), "% okay";
        push @result, "?% okay";
    my $ender = 's' x ($skipped > 1);
    my $good = $max - $detail - $skipped;
    my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";
    my $goodper = sprintf("%.2f", 100*($good/$max));
    $skipmsg .= "$goodper%)";
    push @result, $skipmsg;

    my $txt = join "", @result;
    return ($txt, $canon);
C<&runtests> is exported by Test::Harness by default.

C<&execute_tests>, C<$verbose>, C<$switches> and C<$debug> are
exported upon request.

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.
=head1 ENVIRONMENT VARIABLES THAT TEST::HARNESS SETS

Test::Harness sets these before executing the individual tests.

=item C<HARNESS_ACTIVE>

This is set to a true value. It allows the tests to determine if they
are being executed through the harness or by any other means.

=item C<HARNESS_VERSION>

This is the version of Test::Harness.

=head1 ENVIRONMENT VARIABLES THAT AFFECT TEST::HARNESS

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal. If it is not
set then it will default to C<COLUMNS>. If this is not set, it will
default to 80. Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.
=item C<HARNESS_COMPILE_TEST>

When true it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory.

=item C<HARNESS_DEBUG>

If true, Test::Harness will print debugging information about itself as
it runs the tests. This is different from C<HARNESS_VERBOSE>, which prints
the output from the test being run. Setting C<$Test::Harness::Debug> will
override this, or you can use the C<-d> switch in the F<prove> utility.

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called. Using an absolute
path in C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.

=item C<HARNESS_NOTTY>

When set to a true value, this forces Test::Harness to behave as though
STDOUT were not a console. You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns. Some consoles may not handle carriage returns properly (which
results in somewhat messy output).

=item C<HARNESS_PERL>

Usually your tests will be run by C<$^X>, the currently-executing Perl.
However, you may want to have them run by a different executable, such as
a threading perl, or a different version.

If you're using the F<prove> utility, you can use the C<--perl> switch.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running
its tests. Setting C<$Test::Harness::verbose> will override this,
or you can use the C<-v> switch in the F<prove> utility.
Here's how Test::Harness tests itself:

    $ cd ~/src/devel/Test-Harness
    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
    Using /home/schwern/src/devel/Test-Harness/blib
    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
The included F<prove> utility for running test scripts from the command line,
L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, and L<Devel::Cover> for test coverage
analysis.

Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which, rather than printing its output, returns raw data
on the state of the tests. (Partially done in Test::Harness::Straps)

Document the format.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure out a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Add option for coverage analysis.

Implement Straps total_results()

Completely redo the print summary code.

Implement Straps callbacks. (experimentally implemented)

Straps->analyze_file() not taint clean, don't know if it can be

Fix that damned VMS nit.

HARNESS_TODOFAIL to display TODO failures

Add a test for verbose.

Change internal list of test results to a hash.

Fix stats display when there's an overrun.

Fix so perls with spaces in the filename work.

Keep whittling away at _run_all_tests()

Clean up how the summary is printed. Get rid of those damned formats.
Please report any bugs or feature requests to
C<bug-test-harness at rt.cpan.org>, or through the web interface at
L<http://rt.cpan.org/NoAuth/ReportBug.html?Queue=Test-Harness>.
I will be notified, and then you'll automatically be notified of progress on
your bug as I make changes.

You can find documentation for this module with the F<perldoc> command.

    perldoc Test::Harness

You can get docs for F<prove> with

    perldoc prove

You can also look for information at:

=item * AnnoCPAN: Annotated CPAN documentation

L<http://annocpan.org/dist/Test-Harness>

=item * CPAN Ratings

L<http://cpanratings.perl.org/d/Test-Harness>

=item * RT: CPAN's request tracker

L<http://rt.cpan.org/NoAuth/Bugs.html?Dist=Test-Harness>

L<http://search.cpan.org/dist/Test-Harness>

The source code repository for Test::Harness is at
L<http://svn.perl.org/modules/Test-Harness>.

Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's F<TEST> script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years, and then

Current maintainer is Andy Lester C<< <andy at petdance.com> >>.

by Michael G Schwern C<< <schwern at pobox.com> >>,
Andy Lester C<< <andy at petdance.com> >>.

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See L<http://www.perl.com/perl/misc/Artistic.html>.