# -*- Mode: cperl; cperl-indent-level: 4 -*-
use Test::Harness::Straps;
use Test::Harness::Assert;

@ISA @EXPORT @EXPORT_OK
$Verbose $Switches $Debug
$verbose $switches $debug

eval "use Time::HiRes 'time'";
$has_time_hires = !$@;
Test::Harness - Run Perl standard test scripts with statistics
# Backwards compatibility for exportable variable names.
*switches = *Switches;

$ENV{HARNESS_ACTIVE}  = 1;
$ENV{HARNESS_VERSION} = $VERSION;

delete $ENV{HARNESS_ACTIVE};
delete $ENV{HARNESS_VERSION};

# Some experimental OS/2 builds have a broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

$Strap = Test::Harness::Straps->new;

sub strap { return $Strap };

@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose  = $ENV{HARNESS_VERBOSE} || 0;
$Debug    = $ENV{HARNESS_DEBUG}   || 0;

$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;    # Some shells have trouble with a full line of text.
$Timer = $ENV{HARNESS_TIMER} || 0;
    runtests(@test_files);
B<STOP!> If all you want to do is write a test script, consider
using Test::Simple. Test::Harness is the module that reads the
output from Test::Simple, Test::More and other modules based on
Test::Builder. You don't need to know about Test::Harness to use
those modules.

Test::Harness runs tests and expects output from the test in a
certain format. That format is called TAP, the Test Anything
Protocol. It is defined in L<Test::Harness::TAP>.
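For reference, a minimal TAP stream from a two-test script looks like
this:

    1..2
    ok 1 - module loads
    ok 2 - basic arithmetic

The C<1..2> header declares how many tests to expect; each following
line reports a single test as C<ok> or C<not ok>.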
C<Test::Harness::runtests(@tests)> runs all the test scripts named
as arguments and checks standard output for the expected strings.

The F<prove> utility is a thin wrapper around Test::Harness.
Test::Harness will honor the C<-T> or C<-t> in the #! line on your
test files. So if you begin a test with:

    #!perl -T

the test will be run with taint mode on.
=head2 Configuration variables.

These variables can be used to configure the behavior of
Test::Harness. They are exported on request.
=item C<$Test::Harness::Verbose>

The package variable C<$Test::Harness::Verbose> is exportable and can be
used to let C<runtests()> display the standard output of the script
without altering the behavior otherwise. The F<prove> utility's C<-v>
flag will set this.
=item C<$Test::Harness::switches>

The package variable C<$Test::Harness::switches> is exportable and can be
used to set perl command line options used for running the test
script(s). The default value is C<-w>. It overrides C<HARNESS_SWITCHES>.

=item C<$Test::Harness::Timer>

If set to true, and C<Time::HiRes> is available, print elapsed seconds
after each test file.
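Taken together, a minimal sketch of configuring a run through these
variables (the C<t/basic.t> file name is hypothetical):

    use strict;
    use warnings;
    use Test::Harness;

    # Echo each test's raw TAP output, like prove's -v flag.
    $Test::Harness::Verbose = 1;

    # Print elapsed seconds after each test file.
    $Test::Harness::Timer = 1;

    # runtests() dies if anything fails, so guard real calls with eval.
    runtests('t/basic.t') if -e 't/basic.t';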
When tests fail, analyze the summary report:

    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    t/waterloo..........dubious
        Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
        Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
Everything passed but F<t/waterloo.t>. It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.
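The C<wstat 768, 0x300> in the report is the raw wait status; a
child's exit status occupies the high byte, so exit status 3 becomes
C<3 E<lt>E<lt> 8 == 768 == 0x300>. In Perl terms:

    my $wstat   = 768;            # what $? held
    my $estatus = $wstat >> 8;    # 3, the script's exit status
    printf "exit=%d wstat=%d (0x%x)\n", $estatus, $wstat, $wstat;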
The columns in the summary report mean:

The test file which failed.

If the test exited with non-zero, this is its exit status.

The wait status of the test.

Total number of tests expected to run.

Number which failed, either from "not ok" or because they never ran.

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed. Successive failures may be
abbreviated (ie. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).
Test::Harness currently has only one function; here it is.
    my $allok = runtests(@test_files);

This runs all the given I<@test_files> and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok. Otherwise it will C<die()> with
one of the messages in the DIAGNOSTICS section.
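Because C<runtests()> die()s on failure, callers usually wrap it in
C<eval>. A self-contained sketch (the generated test file is purely
illustrative):

    use strict;
    use warnings;
    use Test::Harness;
    use File::Temp qw(tempdir);

    # Write a tiny passing test script so the example runs anywhere.
    my $dir   = tempdir( CLEANUP => 1 );
    my $tfile = "$dir/pass.t";
    open my $fh, '>', $tfile or die "Can't write $tfile: $!";
    print $fh qq{print "1..1\\nok 1\\n";\n};
    close $fh;

    # Returns true on success; die()s with a DIAGNOSTICS message otherwise.
    my $all_ok = eval { runtests($tfile) };
    print $@ ? "FAILED: $@" : "suite passed\n";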
    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});
    my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
    my @files = _globdir $dir;

Returns all the files in a directory. This is shorthand for backwards
compatibility on systems where C<glob()> doesn't work right.

    my @f = readdir DIRH;
=item B<_run_all_tests>

    my($total, $failed) = _run_all_tests(@test_files);

Runs all the given C<@test_files> (as C<runtests()>) but does it
quietly (no report). $total is a hash ref summary of all the tests
run. Its keys and values are these:
    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped
If C<< $total->{bad} == 0 >> and C<< $total->{max} > 0 >>, you've
got a successful test.

$failed is a hash ref of all the test scripts which failed. Each key
is the name of a test script, each value is another hash representing
how that script failed. Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string).

C<$failed> should be empty if everything passed.

B<NOTE> Currently this function is still noisy. I'm working on it.
# Turns on autoflush for the handle passed
    my $flushy_fh = shift;
    my $old_fh    = select $flushy_fh;

_autoflush(\*STDOUT);
_autoflush(\*STDERR);
        tests => scalar @tests,

    @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $run_start_time = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        $Last_ML_Print = 0;  # so each test prints at least once
        my($leader, $ml) = _mk_leader($tfile, $width);

        $Strap->{_seen_header} = 0;
        if ( $Test::Harness::Debug ) {
            print "# Running: ", $Strap->_command_line($tfile), "\n";

        my $test_start_time = $Timer ? time : 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn $Strap->{error}, "\n"; next };

            $elapsed = time - $test_start_time;
            if ( $has_time_hires ) {
                $elapsed = sprintf( " %8.3fs", $elapsed );

                $elapsed = sprintf( " %8ss", $elapsed ? $elapsed : "<1" );

        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                     1..@{$results{details}};

            'next'      => $Strap->{'next'},
            max         => $results{max},

            bonus       => $results{bonus},
            skipped     => $results{skip},
            skip_reason => $results{skip_reason},
            skip_all    => $Strap->{skip_all},

        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};

        if ($results{passing}) {
            # XXX Combine these first two
            if ($test{max} and $test{skipped} + $test{bonus}) {

                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")

                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")

                print "$test{ml}ok$elapsed\n ".join(', ', @msg)."\n";

            elsif ( $test{max} ) {
                print "$test{ml}ok$elapsed\n";

            elsif ( defined $test{skip_all} and length $test{skip_all} ) {
                print "skipped\n all skipped: $test{skip_all}\n";

                print "skipped\n all skipped: no reason given\n";

            # List unrun tests as failures.
            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};

            # List overruns as failures.

            my $details = $results{details};
            foreach my $overrun ($test{max}+1..@$details) {
                next unless ref $details->[$overrun-1];
                push @{$test{failed}}, $overrun

            $failedtests{$tfile} = _dubious_return(\%test, \%tot,

            $failedtests{$tfile}{name} = $tfile;

        elsif ($results{seen}) {
            if (@{$test{failed}} and $test{max}) {
                my ($txt, $canon) = _canonfailed($test{max},$test{skipped},

                print "$test{ml}$txt";
                $failedtests{$tfile} = { canon   => $canon,

                                         failed  => scalar @{$test{failed}},

                                         percent => 100*(scalar @{$test{failed}})/$test{max},

                print "Don't know which tests failed: got $test{ok} ok, ".
                      "expected $test{max}\n";
                $failedtests{$tfile} = { canon => '??',

            print "FAILED before any test output arrived\n";

            $failedtests{$tfile} = { canon => '??',

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {

                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;

    $tot{bench} = timediff(new Benchmark, $run_start_time);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
    my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' leader for the given C<$test_file> as well
as a similar version which will overwrite the current line (by use of
\r and such). C<$ml> may be empty if Test::Harness doesn't think you're
on TTY.

The C<$width> is the width of the "yada/blah.." string.

    my($te, $width) = @_;

    $te =~ s/^.*\.t\./\[.t./s;

    my $leader = "$te" . '.' x ($width - length($te));

    if ( -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose ) {
        $ml = "\r" . (' ' x 77) . "\r$leader"

    return($leader, $ml);
=item B<_leader_width>

    my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test name.

        my $suf    = /\.(\w+)$/ ? $1 : '';

        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;

    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
    my($tot, $failedtests) = @_;

    my $bonusmsg = _bonusmsg($tot);

        print "All tests successful$bonusmsg.\n";

    elsif (!$tot->{tests}) {
        die "FAILED--no tests were run for some reason.\n";

    elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";

        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};

        $bonusmsg =~ s/^,\s*//;
        print "$bonusmsg.\n" if $bonusmsg;
        die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
    header  => \&header_handler,
    test    => \&test_handler,
    bailout => \&bailout_handler,

$Strap->{callback} = \&strap_callback;

    my($self, $line, $type, $totals) = @_;
    print $line if $Verbose;

    my $meth = $Handlers{$type};
    $meth->($self, $line, $type, $totals) if $meth;
    my($self, $line, $type, $totals) = @_;

    warn "Test header seen more than once!\n" if $self->{_seen_header};

    $self->{_seen_header}++;

    warn "1..M can only appear at the beginning or end of tests\n"
      if $totals->{seen} &&
         $totals->{max} < $totals->{seen};
    my($self, $line, $type, $totals) = @_;

    my $curr   = $totals->{seen};
    my $next   = $self->{'next'};
    my $max    = $totals->{max};
    my $detail = $totals->{details}[-1];

    if( $detail->{ok} ) {
        _print_ml_less("ok $curr/$max");

        if( $detail->{type} eq 'skip' ) {
            $totals->{skip_reason} = $detail->{reason}
              unless defined $totals->{skip_reason};
            $totals->{skip_reason} = 'various reasons'
              if $totals->{skip_reason} ne $detail->{reason};

        _print_ml("NOK $curr");

    if( $curr > $next ) {
        print "Test output counter mismatch [test $curr]\n";

    elsif( $curr < $next ) {
        print "Confused test output: test $curr answered after ".
              "test ", $next - 1, "\n";
sub bailout_handler {
    my($self, $line, $type, $totals) = @_;

    die "FAILED--Further testing stopped" .
        ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
    print join '', $ML, @_ if $ML;

# Print updates only once per second.
    my $now = CORE::time;
    if ( $Last_ML_Print != $now ) {

        $Last_ML_Print = $now;
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');

        $bonusmsg .= ' skipped';

    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;

    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",

    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
        print "\tafter all the subtests completed successfully\n";

        $failed = 0;    # But we do not set $canon!

        push @{$test->{failed}}, $test->{'next'}..$test->{max};
        $failed = @{$test->{failed}};
        (my $txt, $canon) = _canonfailed($test->{max},$test->{skipped},
                                         @{$test->{failed}});
        $percent = 100*(scalar @{$test->{failed}})/$test->{max};

    return { canon => $canon, max => $test->{max} || '??',

             estat => $estatus, wstat => $wstatus,
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail Failed ";
    my $list_str   = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
    my $fmt_top = "format STDOUT_TOP =\n"
                . sprintf("%-${max_namelen}s", $failed_str)

    my $fmt = "format STDOUT =\n"
            . "@" . "<" x ($max_namelen - 1)
            . " @>> @>>>> @>>>> @>>> ^##.##% "
            . "^" . "<" x ($list_len - 1) . "\n"
            . '{ $Curtest->{name}, $Curtest->{estat},'
            . ' $Curtest->{wstat}, $Curtest->{max},'
            . ' $Curtest->{failed}, $Curtest->{percent},'
            . ' $Curtest->{canon}'

            . "~~" . " " x ($Columns - $list_len - 2) . "^"
            . "<" x ($list_len - 1) . "\n"
            . '$Curtest->{canon}'

    return($fmt_top, $fmt);
sub _canonfailed ($$@) {
    my($max,$skipped,@failed) = @_;

    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;

    my $last = $min = shift @failed;

    for (@failed, $failed[-1]) { # don't forget the last one
        if ($_ > $last+1 || $_ == $last) {
            push @canon, ($min == $last) ? $last : "$min-$last";

        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;

        push @result, "FAILED test $last\n";

    push @result, "\tFailed $failed/$max tests, ";

    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";

        push @result, "?% okay";

    my $ender = 's' x ($skipped > 1);

        my $good = $max - $failed - $skipped;
        my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";

        my $goodper = sprintf("%.2f",100*($good/$max));
        $skipmsg .= "$goodper%)";

        push @result, $skipmsg;

    my $txt = join "", @result;
C<&runtests> is exported by Test::Harness by default.

C<$verbose>, C<$switches> and C<$debug> are exported upon request.
=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.
=head1 ENVIRONMENT VARIABLES THAT TEST::HARNESS SETS

Test::Harness sets these before executing the individual tests.

=item C<HARNESS_ACTIVE>

This is set to a true value. It allows the tests to determine if they
are being executed through the harness or by any other means.

=item C<HARNESS_VERSION>

This is the version of Test::Harness.
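A test script can consult these to adapt its behavior; a small sketch:

    use strict;
    use warnings;

    # True when the script was started by Test::Harness, which sets
    # HARNESS_ACTIVE before executing each test.
    sub under_harness { return $ENV{HARNESS_ACTIVE} ? 1 : 0 }

    if (under_harness()) {
        print "# run by Test::Harness $ENV{HARNESS_VERSION}\n";
    }
    else {
        print "# run standalone\n";
    }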
=head1 ENVIRONMENT VARIABLES THAT AFFECT TEST::HARNESS

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal. If it is not
set then it will default to C<COLUMNS>. If this is not set, it will
default to 80. Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_COMPILE_TEST>

When true it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_DEBUG>

If true, Test::Harness will print debugging information about itself as
it runs the tests. This is different from C<HARNESS_VERBOSE>, which prints
the output from the test being run. Setting C<$Test::Harness::Debug> will
override this, or you can use the C<-d> switch in the F<prove> utility.

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called. Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console. You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns. Some consoles may not handle carriage returns properly
(which results in a somewhat messy output).

=item C<HARNESS_PERL>

Usually your tests will be run by C<$^X>, the currently-executing Perl.
However, you may want to have it run by a different executable, such as
a threading perl, or a different version.

If you're using the F<prove> utility, you can use the C<--perl> switch.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running
its tests. Setting C<$Test::Harness::verbose> will override this,
or you can use the C<-v> switch in the F<prove> utility.
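Several of these can be combined from the shell; in this sketch the
one-test script is created only to make the example self-contained:

    mkdir -p /tmp/harness-demo
    printf '%s\n' 'print "1..1\nok 1\n";' > /tmp/harness-demo/pass.t
    HARNESS_VERBOSE=1 HARNESS_NOTTY=1 \
        perl -MTest::Harness -e 'runtests(@ARGV)' /tmp/harness-demo/pass.t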
Here's how Test::Harness tests itself:

    $ cd ~/src/devel/Test-Harness
    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
    Using /home/schwern/src/devel/Test-Harness/blib
    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
The included F<prove> utility for running test scripts from the command line,
L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, and L<Devel::Cover> for test coverage
analysis.
Provide a way of running tests quietly (ie. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests. (Partially done in Test::Harness::Straps)

Document the format.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Add option for coverage analysis.

Implement Straps total_results()

Completely redo the print summary code.

Implement Straps callbacks. (experimentally implemented)

Straps->analyze_file() not taint clean, don't know if it can be

Fix that damned VMS nit.

HARNESS_TODOFAIL to display TODO failures

Add a test for verbose.

Change internal list of test results to a hash.

Fix stats display when there's an overrun.

Fix so perls with spaces in the filename work.

Keep whittling away at _run_all_tests()

Clean up how the summary is printed. Get rid of those damned formats.
HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

Please use the CPAN bug ticketing system at L<http://rt.cpan.org/>.
You can also mail bugs, fixes and enhancements to
C<< <bug-test-harness at rt.cpan.org> >>.
Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is, that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years, and then
Michael G Schwern.

Current maintainer is Andy Lester C<< <andy at petdance.com> >>.

by Michael G Schwern C<< <schwern at pobox.com> >>,
Andy Lester C<< <andy at petdance.com> >>.
This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See L<http://www.perl.com/perl/misc/Artistic.html>.