# -*- Mode: cperl; cperl-indent-level: 4 -*-

use Test::Harness::Straps;
use Test::Harness::Assert;

    @ISA @EXPORT @EXPORT_OK
    $Verbose $Switches $Debug
    $verbose $switches $debug

eval "use Time::HiRes 'time'";
$has_time_hires = !$@;
Test::Harness - Run Perl standard test scripts with statistics
# Backwards compatibility for exportable variable names.
*switches = *Switches;

$ENV{HARNESS_ACTIVE}  = 1;
$ENV{HARNESS_VERSION} = $VERSION;

delete $ENV{HARNESS_ACTIVE};
delete $ENV{HARNESS_VERSION};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

$Strap = Test::Harness::Straps->new;

sub strap { return $Strap };
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw(&execute_tests $verbose $switches);

$Verbose = $ENV{HARNESS_VERBOSE} || 0;
$Debug   = $ENV{HARNESS_DEBUG}   || 0;

$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;                # Some shells have trouble with a full line of text.
$Timer   = $ENV{HARNESS_TIMER} || 0;
    runtests(@test_files);
B<STOP!> If all you want to do is write a test script, consider
using Test::Simple. Test::Harness is the module that reads the
output from Test::Simple, Test::More and other modules based on
Test::Builder. You don't need to know about Test::Harness to use
those modules.

Test::Harness runs tests and expects output from the test in a
certain format. That format is called TAP, the Test Anything
Protocol. It is defined in L<Test::Harness::TAP>.
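For orientation, a minimal TAP stream looks like this (the test names
here are invented for illustration):

    1..3
    ok 1 - module loads
    ok 2 - object created
    not ok 3 - result matches

The C<1..3> line is the plan; each following line reports the result
of one test.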
C<Test::Harness::runtests(@tests)> runs all the test scripts named
as arguments and checks standard output for the expected strings
in TAP format.

The F<prove> utility is a thin wrapper around Test::Harness.
Test::Harness will honor the C<-T> or C<-t> in the #! line on your
test files. So if you begin a test with:

    perl -T

the test will be run with taint mode on.
=head2 Configuration variables.

These variables can be used to configure the behavior of
Test::Harness. They are exported on request.
=item C<$Test::Harness::Verbose>

The package variable C<$Test::Harness::Verbose> is exportable and can be
used to let C<runtests()> display the standard output of the script
without altering the behavior otherwise. The F<prove> utility's C<-v>
flag will set this.
=item C<$Test::Harness::switches>

The package variable C<$Test::Harness::switches> is exportable and can be
used to set perl command line options used for running the test
script(s). The default value is C<-w>. It overrides C<HARNESS_SWITCHES>.
=item C<$Test::Harness::Timer>

If set to true, and C<Time::HiRes> is available, print elapsed seconds
after each test file.
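Taken together, a small harness driver might set these before calling
C<runtests()>. This is a sketch; the F<t/*.t> layout is an assumed
convention, not something Test::Harness requires:

    use Test::Harness qw(&runtests $verbose);

    $verbose = 1;                        # show each test's raw output
    $Test::Harness::Switches = '-w -T';  # run every test with warnings and taint mode
    $Test::Harness::Timer    = 1;        # report per-file elapsed time

    runtests(glob 't/*.t');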
When tests fail, analyze the summary report:

  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  t/waterloo..........dubious
          Test returned status 3 (wstat 768, 0x300)
  DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
          Failed 10/20 tests, 50.00% okay
  Failed Test  Stat Wstat Total Fail  Failed  List of Failed
  -----------------------------------------------------------------------
  t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
  Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
Everything passed but F<t/waterloo.t>. It failed 10 of 20 tests and
exited with a non-zero status, indicating something dubious happened.

The columns in the summary report mean:
The test file which failed.

If the test exited with a non-zero status, this is its exit status.

The wait status of the test.

Total number of tests expected to run.

Number which failed, either from "not ok" or because they never ran.

Percentage of the total tests which failed.
=item B<List of Failed>

A list of the tests which failed. Successive failures may be
abbreviated (e.g. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).
The following functions are available.

=head2 runtests( @test_files )

This runs all the given I<@test_files> and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok. Otherwise it will C<die()> with
one of the messages in the DIAGNOSTICS section.
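Since C<runtests()> C<die()>s on failure, a caller that needs to carry
on afterwards can trap the error with C<eval>. A minimal sketch:

    use Test::Harness;

    eval { runtests(@test_files) };
    if ($@) {
        # $@ holds one of the DIAGNOSTICS messages below.
        warn "Test run failed: $@";
    }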
    my($tot, $failedtests, $todo_passed) = execute_tests(tests => \@tests);
    print get_results($tot, $failedtests, $todo_passed);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});

# my $ok = _all_ok(\%tot);
# Tells you if this test run is overall successful or not.

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;

# Returns all the files in a directory.  This is shorthand for backwards
# compatibility on systems where C<glob()> doesn't work right.

    my @f = readdir DIRH;
=head2 execute_tests( tests => \@test_files, out => \*FH )

Runs all the given C<@test_files> (just like C<runtests()>) but
doesn't generate the final report. During testing, progress
information will be written to the currently selected output
filehandle (usually C<STDOUT>), or to the filehandle given by the
C<out> parameter. The C<out> parameter is optional.

Returns a list of two values, C<$total> and C<$failed>, describing the
results. C<$total> is a hash ref summary of all the tests run. Its
keys and values are these:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If C<< $total->{bad} == 0 >> and C<< $total->{max} > 0 >>, you've
got a successful test.

C<$failed> is a hash ref of all the test scripts that failed. Each key
is the name of a test script, each value is another hash representing
how that script failed. Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as a string)

C<$failed> should be empty if everything passed.
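A sketch of calling C<execute_tests()> directly and walking the two
returned hash refs (key names as documented above):

    use Test::Harness qw(&execute_tests);

    my($total, $failed) = execute_tests(tests => \@test_files);

    if ($total->{bad} == 0 && $total->{max} > 0) {
        print "All $total->{ok}/$total->{max} subtests passed\n";
    }
    else {
        for my $script (sort keys %$failed) {
            my $t = $failed->{$script};
            print "$t->{name}: failed $t->{failed}/$t->{max} tests: $t->{canon}\n";
        }
    }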
    my @tests = @{$args{tests}};
    my $out   = $args{out} || select();

    # We allow filehandles that are symbolic refs
    _autoflush(\*STDERR);

        tests       => scalar @tests,

    @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $run_start_time = Benchmark->new;
    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        $Last_ML_Print = 0;    # so each test prints at least once
        my($leader, $ml) = _mk_leader($tfile, $width);

        $Strap->{_seen_header} = 0;
        if ( $Test::Harness::Debug ) {
            print $out "# Running: ", $Strap->_command_line($tfile), "\n";
        }

        my $test_start_time = $Timer ? time : 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn $Strap->{error}, "\n";  next };

            $elapsed = time - $test_start_time;
            if ( $has_time_hires ) {
                $elapsed = sprintf( " %8d ms", $elapsed * 1000 );
            }
            else {
                $elapsed = sprintf( " %8s s", $elapsed ? $elapsed : "<1" );
            }

        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                     1..@{$results{details}};
        my @todo_pass = grep { $results{details}[$_-1]{ok} &&
                               $results{details}[$_-1]{type} eq 'todo' }
                        1..@{$results{details}};
            'next'       => $Strap->{'next'},
            max          => $results{max},
            todo_pass    => \@todo_pass,
            bonus        => $results{bonus},
            skipped      => $results{skip},
            skip_reason  => $results{skip_reason},
            skip_all     => $Strap->{skip_all},

        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};
        if ($results{passing}) {
            # XXX Combine these first two
            if ($test{max} and $test{skipped} + $test{bonus}) {

                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")

                my($txt, $canon) = _canondetail($test{max}, $test{skipped}, 'TODO passed',
                                                @{$test{todo_pass}});
                $todo_passed{$tfile} = {

                    failed  => $test{bonus},

                    percent => 100 * $test{bonus} / $test{max},

                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded\n$txt");

                print $out "$test{ml}ok$elapsed\n ".join(', ', @msg)."\n";
            }
            elsif ( $test{max} ) {
                print $out "$test{ml}ok$elapsed\n";
            }
            elsif ( defined $test{skip_all} and length $test{skip_all} ) {
                print $out "skipped\n all skipped: $test{skip_all}\n";
            }

                print $out "skipped\n all skipped: no reason given\n";
        # List unrun tests as failures.
        if ($test{'next'} <= $test{max}) {
            push @{$test{failed}}, $test{'next'}..$test{max};
        }

        # List overruns as failures.
        my $details = $results{details};
        foreach my $overrun ($test{max}+1 .. @$details) {
            next unless ref $details->[$overrun-1];
            push @{$test{failed}}, $overrun

            $failedtests{$tfile} = _dubious_return(\%test, \%tot,

            $failedtests{$tfile}{name} = $tfile;

        elsif ($results{seen}) {
            if (@{$test{failed}} and $test{max}) {
                my($txt, $canon) = _canondetail($test{max}, $test{skipped}, 'Failed',

                print $out "$test{ml}$txt";
                $failedtests{$tfile} = { canon  => $canon,

                                         failed => scalar @{$test{failed}},

                                         percent => 100*(scalar @{$test{failed}})/$test{max},

                print $out "Don't know which tests failed: got $test{ok} ok, ".
                           "expected $test{max}\n";
                $failedtests{$tfile} = { canon => '??',

            print $out "FAILED before any test output arrived\n";

            $failedtests{$tfile} = { canon => '??',

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print $out "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }

    $tot{bench} = timediff(Benchmark->new, $run_start_time);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests, \%todo_passed);
# Turns on autoflush for the handle passed
    my $flushy_fh = shift;
    my $old_fh = select $flushy_fh;
=for private _mk_leader

    my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' leader for the given C<$test_file> as well
as a similar version which will overwrite the current line (by use of
\r and such). C<$ml> may be empty if Test::Harness doesn't think you're
on a TTY.

The C<$width> is the width of the "yada/blah.." string.
    my($te, $width) = @_;

    $te =~ s/^.*\.t\./\[.t./s;

    my $leader = "$te" . '.' x ($width - length($te));

    my $ml = "";
    if ( -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose ) {
        $ml = "\r" . (' ' x 77) . "\r$leader";
    }

    return($leader, $ml);
=for private _leader_width

    my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test name.

        my $suf    = /\.(\w+)$/ ? $1 : '';

        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;

    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
    my $failedtests = shift;
    my $todo_passed = shift;

    my $bonusmsg = _bonusmsg($tot);

        $out .= "All tests successful$bonusmsg.\n";

        my($fmt_top, $fmt) = _create_fmts("Passed", $todo_passed);
        # Now write to formats
        for my $script (sort keys %{ $todo_passed || {} }) {
            my $Curtest = $todo_passed->{$script};

            $out .= swrite( $fmt_top );
            $out .= swrite( $fmt, @{ $Curtest }{qw(name estat wstat max failed percent canon)} );
        }
    elsif (!$tot->{tests}) {
        die "FAILED--no tests were run for some reason.\n";
    }
    elsif (!$tot->{max}) {
        my $blurb = $tot->{tests} == 1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    }

        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},
                             $percent_ok;

        my($fmt_top, $fmt1, $fmt2) = _create_fmts("Failed", $failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            my $Curtest = $failedtests->{$script};
            $out .= swrite( $fmt_top );
            $out .= swrite( $fmt1, @{ $Curtest }{qw(name estat wstat max failed percent canon)} );
            $out .= swrite( $fmt2, $Curtest->{canon} );
        }

        $bonusmsg =~ s/^,\s*//;
        $out .= "$bonusmsg.\n" if $bonusmsg;
        $out .= "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.$subpct\n";

    $out .= sprintf("Files=%d, Tests=%d, %s\n",
                    $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
    formline($format, @_);

        header  => \&header_handler,
        test    => \&test_handler,
        bailout => \&bailout_handler,

    $Strap->{callback} = \&strap_callback;

        my($self, $line, $type, $totals) = @_;
        print $line if $Verbose;

        my $meth = $Handlers{$type};
        $meth->($self, $line, $type, $totals) if $meth;
    my($self, $line, $type, $totals) = @_;

    warn "Test header seen more than once!\n" if $self->{_seen_header};

    $self->{_seen_header}++;

    warn "1..M can only appear at the beginning or end of tests\n"
      if $totals->{seen} &&
         $totals->{max} < $totals->{seen};

    my($self, $line, $type, $totals) = @_;

    my $curr   = $totals->{seen};
    my $next   = $self->{'next'};
    my $max    = $totals->{max};
    my $detail = $totals->{details}[-1];

    if( $detail->{ok} ) {
        _print_ml_less("ok $curr/$max");

        if( $detail->{type} eq 'skip' ) {
            $totals->{skip_reason} = $detail->{reason}
              unless defined $totals->{skip_reason};
            $totals->{skip_reason} = 'various reasons'
              if $totals->{skip_reason} ne $detail->{reason};
        }
    }

        _print_ml("NOK $curr");

    if( $curr > $next ) {
        print "Test output counter mismatch [test $curr]\n";
    }
    elsif( $curr < $next ) {
        print "Confused test output: test $curr answered after ".
              "test ", $next - 1, "\n";
    }
sub bailout_handler {
    my($self, $line, $type, $totals) = @_;

    die "FAILED--Further testing stopped" .
      ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
}

    print join '', $ML, @_ if $ML;

# Print updates only once per second.
    my $now = CORE::time;
    if ( $Last_ML_Print != $now ) {

        $Last_ML_Print = $now;
    $bonusmsg = (" ($tot->{bonus} subtest" . ($tot->{bonus} > 1 ? 's' : '') .
                 " UNEXPECTEDLY SUCCEEDED)")

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;

    my($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus " .
           "(wstat %d, 0x%x)\n",
           $wstatus, $wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
        print "\tafter all the subtests completed successfully\n";

        $failed = 0;    # But we do not set $canon!
    }
    else {
        push @{$test->{failed}}, $test->{'next'}..$test->{max};
        $failed = @{$test->{failed}};
        (my $txt, $canon) = _canondetail($test->{max}, $test->{skipped}, 'Failed', @{$test->{failed}});
        $percent = 100 * (scalar @{$test->{failed}}) / $test->{max};
    }

    return { canon => $canon, max => $test->{max} || '??',

             estat => $estatus, wstat => $wstatus,
           };
    my $failedtests = shift;

    my $short      = substr($type, 0, 4);
    my $failed_str = "$type Test";
    my $middle_str = " Stat Wstat Total $short $type ";
    my $list_str   = "List of $type";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = sprintf("%-${max_namelen}s", $failed_str)

    my $fmt1 = "@" . "<" x ($max_namelen - 1)
             . " @>> @>>>> @>>>> @>>> ^##.##% "
             . "^" . "<" x ($list_len - 1) . "\n";
    my $fmt2 = "~~" . " " x ($Columns - $list_len - 2) . "^"
             . "<" x ($list_len - 1) . "\n";

    return($fmt_top, $fmt1, $fmt2);
    @detail = sort { $a <=> $b } grep !$seen{$_}++, @detail;
    my $detail = @detail;

        my $last = $min = shift @detail;

        my $uc_type = uc($type);

        for (@detail, $detail[-1]) {    # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                push @canon, ($min == $last) ? $last : "$min-$last";

        push @result, "$uc_type tests @canon\n";
        $canon = join ' ', @canon;

        push @result, "$uc_type test $last\n";

    return (join("", @result), $canon)

    push @result, "\t$type $detail/$max tests, ";

        push @result, sprintf("%.2f", 100*(1 - $detail/$max)), "% okay";

        push @result, "?% okay";

    my $ender = 's' x ($skipped > 1);

    my $good = $max - $detail - $skipped;
    my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";

        my $goodper = sprintf("%.2f", 100*($good/$max));
        $skipmsg .= "$goodper%)";

    push @result, $skipmsg;

    my $txt = join "", @result;

    return ($txt, $canon);
C<&runtests> is exported by Test::Harness by default.

C<&execute_tests>, C<$verbose>, C<$switches> and C<$debug> are
exported upon request.
=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.
=head1 ENVIRONMENT VARIABLES THAT TEST::HARNESS SETS

Test::Harness sets these before executing the individual tests.

=item C<HARNESS_ACTIVE>

This is set to a true value. It allows the tests to determine if they
are being executed through the harness or by any other means.
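For example, a test script can consult this to suppress output that
only makes sense when the script is run by hand (a sketch):

    # Inside a .t file:
    unless ($ENV{HARNESS_ACTIVE}) {
        print "# running standalone, enabling extra diagnostics\n";
    }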
=item C<HARNESS_VERSION>

This is the version of Test::Harness.
=head1 ENVIRONMENT VARIABLES THAT AFFECT TEST::HARNESS

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal. If it is not
set then it will default to C<COLUMNS>. If this is not set, it will
default to 80. Note that users of Bourne-shell-based shells will need
to C<export COLUMNS> for this module to use that variable.
=item C<HARNESS_COMPILE_TEST>

When true, harness will attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!
=item C<HARNESS_DEBUG>

If true, Test::Harness will print debugging information about itself as
it runs the tests. This is different from C<HARNESS_VERBOSE>, which prints
the output from the test being run. Setting C<$Test::Harness::Debug> will
override this, or you can use the C<-d> switch in the F<prove> utility.
=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If the directory name is relative, it is taken with respect to the
current directory at the moment runtests() was called. Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.
=item C<HARNESS_NOTTY>

When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console. You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns. Some consoles may not handle carriage returns properly (which
results in a somewhat messy output).
=item C<HARNESS_PERL>

Usually your tests will be run by C<$^X>, the currently-executing Perl.
However, you may want to have it run by a different executable, such as
a threading perl, or a different version.

If you're using the F<prove> utility, you can use the C<--perl> switch.
=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.
=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running
its tests. Setting C<$Test::Harness::verbose> will override this,
or you can use the C<-v> switch in the F<prove> utility.
Here's how Test::Harness tests itself:

    $ cd ~/src/devel/Test-Harness
    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
    Using /home/schwern/src/devel/Test-Harness/blib
    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
The included F<prove> utility for running test scripts from the command line,
L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, and L<Devel::Cover> for test coverage
analysis.
Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which, rather than printing its output, returns raw data
on the state of the tests. (Partially done in Test::Harness::Straps.)

Document the format.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure out a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with the new skip test styles.)
Add an option for coverage analysis.

Implement Straps total_results().

Completely redo the print summary code.

Implement Straps callbacks. (Experimentally implemented.)

Straps->analyze_file() is not taint clean; don't know if it can be
fixed.

Fix that damned VMS nit.

Add HARNESS_TODOFAIL to display TODO failures.

Add a test for verbose.

Change the internal list of test results to a hash.

Fix the stats display when there's an overrun.

Fix so perls with spaces in the filename work.

Keep whittling away at _run_all_tests().

Clean up how the summary is printed. Get rid of those damned formats.
Please report any bugs or feature requests to
C<bug-test-harness at rt.cpan.org>, or through the web interface at
L<http://rt.cpan.org/NoAuth/ReportBug.html?Queue=Test-Harness>.
I will be notified, and then you'll automatically be notified of progress on
your bug as I make changes.
You can find documentation for this module with the F<perldoc> command.

    perldoc Test::Harness

You can get docs for F<prove> with

You can also look for information at:

=item * AnnoCPAN: Annotated CPAN documentation

L<http://annocpan.org/dist/Test-Harness>

=item * CPAN Ratings

L<http://cpanratings.perl.org/d/Test-Harness>

=item * RT: CPAN's request tracker

L<http://rt.cpan.org/NoAuth/Bugs.html?Dist=Test-Harness>

L<http://search.cpan.org/dist/Test-Harness>

The source code repository for Test::Harness is at
L<http://svn.perl.org/modules/Test-Harness>.
Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's F<TEST> script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years, and then
Michael G Schwern.

Current maintainer is Andy Lester C<< <andy at petdance.com> >>.

by Michael G Schwern C<< <schwern at pobox.com> >>,
Andy Lester C<< <andy at petdance.com> >>.
This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See L<http://www.perl.com/perl/misc/Artistic.html>.