1 # -*- Mode: cperl; cperl-indent-level: 4 -*-
6 use Test::Harness::Straps;
7 use Test::Harness::Assert;
15 @ISA @EXPORT @EXPORT_OK
16 $Verbose $Switches $Debug
17 $verbose $switches $debug
26 Test::Harness - Run Perl standard test scripts with statistics
36 # Backwards compatibility for exportable variable names.
38 *switches = *Switches;
41 $ENV{HARNESS_ACTIVE} = 1;
45 delete $ENV{HARNESS_ACTIVE};
48 # Some experimental versions of OS/2 build have broken $?
49 my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
51 my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
53 $Strap = Test::Harness::Straps->new;
55 sub strap { return $Strap };
58 @EXPORT = qw(&runtests);
59 @EXPORT_OK = qw($verbose $switches);
61 $Verbose = $ENV{HARNESS_VERBOSE} || 0;
62 $Debug = $ENV{HARNESS_DEBUG} || 0;
64 $Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
65 $Columns--; # Some shells have trouble with a full line of text.
71 runtests(@test_files);
75 B<STOP!> If all you want to do is write a test script, consider
76 using Test::Simple. Test::Harness is the module that reads the
77 output from Test::Simple, Test::More and other modules based on
78 Test::Builder. You don't need to know about Test::Harness to use
79 those modules.
81 Test::Harness runs tests and expects output from the test in a
82 certain format. That format is called TAP, the Test Anything
83 Protocol. It is defined in L<Test::Harness::TAP>.
85 C<Test::Harness::runtests(@tests)> runs all the test scripts named
86 as arguments and checks standard output for the expected strings.
89 The F<prove> utility is a thin wrapper around Test::Harness.
93 Test::Harness will honor the C<-T> or C<-t> in the #! line on your
94 test files. So if you begin a test with:

    #!perl -T

98 the test will be run with taint mode on.
100 =head2 Configuration variables.
102 These variables can be used to configure the behavior of
103 Test::Harness. They are exported on request.
107 =item C<$Test::Harness::Verbose>
109 The package variable C<$Test::Harness::Verbose> is exportable and can be
110 used to let C<runtests()> display the standard output of the script
111 without altering the behavior otherwise. The F<prove> utility's C<-v>
112 flag sets this as well.
114 =item C<$Test::Harness::switches>
116 The package variable C<$Test::Harness::switches> is exportable and can be
117 used to set perl command line options used for running the test
118 script(s). The default value is C<-w>. It overrides C<HARNESS_SWITCHES>.
125 When tests fail, analyze the summary report:
127 t/base..............ok
128 t/nonumbers.........ok
129 t/ok................ok
130 t/test-harness......ok
131 t/waterloo..........dubious
132 Test returned status 3 (wstat 768, 0x300)
133 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
134 Failed 10/20 tests, 50.00% okay
135 Failed Test Stat Wstat Total Fail Failed List of Failed
136 -----------------------------------------------------------------------
137 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
138 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
140 Everything passed but F<t/waterloo.t>. It failed 10 of 20 tests and
141 exited with non-zero status indicating something dubious happened.
143 The columns in the summary report mean:
149 The test file which failed.
153 If the test exited with non-zero, this is its exit status.
157 The wait status of the test.
161 Total number of tests expected to run.
165 Number which failed, either from "not ok" or because they never ran.
169 Percentage of the total tests which failed.
171 =item B<List of Failed>
173 A list of the tests which failed. Successive failures may be
174 abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
175 20 failed).
182 Test::Harness currently has only one function; here it is.
188 my $allok = runtests(@test_files);
190 This runs all the given I<@test_files> and divines whether they passed
191 or failed based on their output to STDOUT (details above). It prints
192 out each individual test which failed along with a summary report and
193 how long it all took.
195 It returns true if everything was ok. Otherwise it will C<die()> with
196 one of the messages in the DIAGNOSTICS section.
205 my($tot, $failedtests) = _run_all_tests(@tests);
206 _show_results($tot, $failedtests);
208 my $ok = _all_ok($tot);
210 assert(($ok xor keys %$failedtests),
211 q{ok status jibes with $failedtests});
220 my $ok = _all_ok(\%tot);
222 Tells you if this test run is overall successful or not.
229 return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
234 my @files = _globdir $dir;
236 Returns all the files in a directory. This is shorthand for backwards
237 compatibility on systems where C<glob()> doesn't work right.
243 my @f = readdir DIRH;
249 =item B<_run_all_tests>
251 my($total, $failed) = _run_all_tests(@test_files);
253 Runs all the given C<@test_files> (as C<runtests()>) but does it
254 quietly (no report). $total is a hash ref summary of all the tests
255 run. Its keys and values are these:
257 bonus Number of individual todo tests unexpectedly passed
258 max Number of individual tests run
259 ok Number of individual tests passed
260 sub_skipped Number of individual tests skipped
261 todo Number of individual todo tests
263 files Number of test files run
264 good Number of test files passed
265 bad Number of test files failed
266 tests Number of test files originally given
267 skipped Number of test files skipped
269 If C<< $total->{bad} == 0 >> and C<< $total->{max} > 0 >>, you've
270 got a successful test.
272 $failed is a hash ref of all the test scripts which failed. Each key
273 is the name of a test script, each value is another hash representing
274 how that script failed. Its keys are these:
276 name Name of the test which failed
277 estat Script's exit value
278 wstat Script's wait status
279 max Number of individual tests
280 failed Number which failed
281 percent Percentage of tests which failed
282 canon List of tests which failed (as string).
284 C<$failed> should be empty if everything passed.
286 B<NOTE> Currently this function is still noisy. I'm working on it.
290 # Turns on autoflush for the handle passed
292 my $flushy_fh = shift;
293 my $old_fh = select $flushy_fh;
301 _autoflush(\*STDOUT);
302 _autoflush(\*STDERR);
314 tests => scalar @tests,
322 @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
323 my $t_start = Benchmark->new;
325 my $width = _leader_width(@tests);
326 foreach my $tfile (@tests) {
327 $Last_ML_Print = 0; # so each test prints at least once
328 my($leader, $ml) = _mk_leader($tfile, $width);
335 $Strap->{_seen_header} = 0;
336 if ( $Test::Harness::Debug ) {
337 print "# Running: ", $Strap->_command_line($tfile), "\n";
339 my %results = $Strap->analyze_file($tfile) or
340 do { warn $Strap->{error}, "\n"; next };
342 # state of the current test.
343 my @failed = grep { !$results{details}[$_-1]{ok} }
344 1..@{$results{details}};
347 'next' => $Strap->{'next'},
348 max => $results{max},
350 bonus => $results{bonus},
351 skipped => $results{skip},
352 skip_reason => $results{skip_reason},
353 skip_all => $Strap->{skip_all},
357 $tot{bonus} += $results{bonus};
358 $tot{max} += $results{max};
359 $tot{ok} += $results{ok};
360 $tot{todo} += $results{todo};
361 $tot{sub_skipped} += $results{skip};
363 my($estatus, $wstatus) = @results{qw(exit wait)};
365 if ($results{passing}) {
366 if ($test{max} and $test{skipped} + $test{bonus}) {
368 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
370 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
372 print "$test{ml}ok\n ".join(', ', @msg)."\n";
373 } elsif ($test{max}) {
374 print "$test{ml}ok\n";
375 } elsif (defined $test{skip_all} and length $test{skip_all}) {
376 print "skipped\n all skipped: $test{skip_all}\n";
379 print "skipped\n all skipped: no reason given\n";
385 # List unrun tests as failures.
386 if ($test{'next'} <= $test{max}) {
387 push @{$test{failed}}, $test{'next'}..$test{max};
389 # List overruns as failures.
391 my $details = $results{details};
392 foreach my $overrun ($test{max}+1..@$details) {
393 next unless ref $details->[$overrun-1];
394 push @{$test{failed}}, $overrun
399 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
401 $failedtests{$tfile}{name} = $tfile;
403 elsif($results{seen}) {
404 if (@{$test{failed}} and $test{max}) {
405 my ($txt, $canon) = _canonfailed($test{max},$test{skipped},
407 print "$test{ml}$txt";
408 $failedtests{$tfile} = { canon => $canon,
410 failed => scalar @{$test{failed}},
412 percent => 100*(scalar @{$test{failed}})/$test{max},
417 print "Don't know which tests failed: got $test{ok} ok, ".
418 "expected $test{max}\n";
419 $failedtests{$tfile} = { canon => '??',
430 print "FAILED before any test output arrived\n";
432 $failedtests{$tfile} = { canon => '??',
443 if (defined $Files_In_Dir) {
444 my @new_dir_files = _globdir $Files_In_Dir;
445 if (@new_dir_files != @dir_files) {
447 @f{@new_dir_files} = (1) x @new_dir_files;
448 delete @f{@dir_files};
449 my @f = sort keys %f;
450 print "LEAKED FILES: @f\n";
451 @dir_files = @new_dir_files;
455 $tot{bench} = timediff(Benchmark->new, $t_start);
457 $Strap->_restore_PERL5LIB;
459 return(\%tot, \%failedtests);
464 my($leader, $ml) = _mk_leader($test_file, $width);
466 Generates the 't/foo........' leader for the given C<$test_file> as well
467 as a similar version which will overwrite the current line (by use of
468 \r and such). C<$ml> may be empty if Test::Harness doesn't think you're
469 on TTY.
471 The C<$width> is the width of the "yada/blah.." string.
476 my($te, $width) = @_;
480 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
481 my $blank = (' ' x 77);
482 my $leader = "$te" . '.' x ($width - length($te));
485 $ml = "\r$blank\r$leader"
486 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
488 return($leader, $ml);
491 =item B<_leader_width>
493 my($width) = _leader_width(@test_files);
495 Calculates how wide the leader should be based on the length of the
496 longest test name.
504 my $suf = /\.(\w+)$/ ? $1 : '';
506 my $suflen = length $suf;
507 $maxlen = $len if $len > $maxlen;
508 $maxsuflen = $suflen if $suflen > $maxsuflen;
510 # + 3 : we want three dots between the test name and the "ok"
511 return $maxlen + 3 - $maxsuflen;
516 my($tot, $failedtests) = @_;
519 my $bonusmsg = _bonusmsg($tot);
522 print "All tests successful$bonusmsg.\n";
523 } elsif (!$tot->{tests}){
524 die "FAILED--no tests were run for some reason.\n";
525 } elsif (!$tot->{max}) {
526 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
527 die "FAILED--$tot->{tests} test $blurb could be run, ".
528 "alas--no output ever seen\n";
530 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
531 my $percent_ok = 100*$tot->{ok}/$tot->{max};
532 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
533 $tot->{max} - $tot->{ok}, $tot->{max},
536 my($fmt_top, $fmt) = _create_fmts($failedtests);
538 # Now write to formats
539 for my $script (sort keys %$failedtests) {
540 $Curtest = $failedtests->{$script};
544 $bonusmsg =~ s/^,\s*//;
545 print "$bonusmsg.\n" if $bonusmsg;
546 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
551 printf("Files=%d, Tests=%d, %s\n",
552 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
557 header => \&header_handler,
558 test => \&test_handler,
559 bailout => \&bailout_handler,
562 $Strap->{callback} = \&strap_callback;
564 my($self, $line, $type, $totals) = @_;
565 print $line if $Verbose;
567 my $meth = $Handlers{$type};
568 $meth->($self, $line, $type, $totals) if $meth;
573 my($self, $line, $type, $totals) = @_;
575 warn "Test header seen more than once!\n" if $self->{_seen_header};
577 $self->{_seen_header}++;
579 warn "1..M can only appear at the beginning or end of tests\n"
580 if $totals->{seen} &&
581 $totals->{max} < $totals->{seen};
585 my($self, $line, $type, $totals) = @_;
587 my $curr = $totals->{seen};
588 my $next = $self->{'next'};
589 my $max = $totals->{max};
590 my $detail = $totals->{details}[-1];
592 if( $detail->{ok} ) {
593 _print_ml_less("ok $curr/$max");
595 if( $detail->{type} eq 'skip' ) {
596 $totals->{skip_reason} = $detail->{reason}
597 unless defined $totals->{skip_reason};
598 $totals->{skip_reason} = 'various reasons'
599 if $totals->{skip_reason} ne $detail->{reason};
603 _print_ml("NOK $curr");
606 if( $curr > $next ) {
607 print "Test output counter mismatch [test $curr]\n";
609 elsif( $curr < $next ) {
610 print "Confused test output: test $curr answered after ".
611 "test ", $next - 1, "\n";
616 sub bailout_handler {
617 my($self, $line, $type, $totals) = @_;
619 die "FAILED--Further testing stopped" .
620 ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
625 print join '', $ML, @_ if $ML;
629 # For slow connections, we save lots of bandwidth by printing only once
630 # per second.
632 if ( $Last_ML_Print != time ) {
634 $Last_ML_Print = time;
642 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
643 " UNEXPECTEDLY SUCCEEDED)")
646 if ($tot->{skipped}) {
647 $bonusmsg .= ", $tot->{skipped} test"
648 . ($tot->{skipped} != 1 ? 's' : '');
649 if ($tot->{sub_skipped}) {
650 $bonusmsg .= " and $tot->{sub_skipped} subtest"
651 . ($tot->{sub_skipped} != 1 ? 's' : '');
653 $bonusmsg .= ' skipped';
655 elsif ($tot->{sub_skipped}) {
656 $bonusmsg .= ", $tot->{sub_skipped} subtest"
657 . ($tot->{sub_skipped} != 1 ? 's' : '')
664 # Test program go boom.
665 sub _dubious_return {
666 my($test, $tot, $estatus, $wstatus) = @_;
667 my ($failed, $canon, $percent) = ('??', '??');
669 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
670 "(wstat %d, 0x%x)\n",
672 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
677 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
678 print "\tafter all the subtests completed successfully\n";
680 $failed = 0; # But we do not set $canon!
683 push @{$test->{failed}}, $test->{'next'}..$test->{max};
684 $failed = @{$test->{failed}};
685 (my $txt, $canon) = _canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
686 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
691 return { canon => $canon, max => $test->{max} || '??',
694 estat => $estatus, wstat => $wstatus,
700 my($failedtests) = @_;
702 my $failed_str = "Failed Test";
703 my $middle_str = " Stat Wstat Total Fail Failed ";
704 my $list_str = "List of Failed";
706 # Figure out our longest name string for formatting purposes.
707 my $max_namelen = length($failed_str);
708 foreach my $script (keys %$failedtests) {
709 my $namelen = length $failedtests->{$script}->{name};
710 $max_namelen = $namelen if $namelen > $max_namelen;
713 my $list_len = $Columns - length($middle_str) - $max_namelen;
714 if ($list_len < length($list_str)) {
715 $list_len = length($list_str);
716 $max_namelen = $Columns - length($middle_str) - $list_len;
717 if ($max_namelen < length($failed_str)) {
718 $max_namelen = length($failed_str);
719 $Columns = $max_namelen + length($middle_str) + $list_len;
723 my $fmt_top = "format STDOUT_TOP =\n"
724 . sprintf("%-${max_namelen}s", $failed_str)
730 my $fmt = "format STDOUT =\n"
731 . "@" . "<" x ($max_namelen - 1)
732 . " @>> @>>>> @>>>> @>>> ^##.##% "
733 . "^" . "<" x ($list_len - 1) . "\n"
734 . '{ $Curtest->{name}, $Curtest->{estat},'
735 . ' $Curtest->{wstat}, $Curtest->{max},'
736 . ' $Curtest->{failed}, $Curtest->{percent},'
737 . ' $Curtest->{canon}'
739 . "~~" . " " x ($Columns - $list_len - 2) . "^"
740 . "<" x ($list_len - 1) . "\n"
741 . '$Curtest->{canon}'
749 return($fmt_top, $fmt);
752 sub _canonfailed ($$@) {
753 my($max,$skipped,@failed) = @_;
755 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
756 my $failed = @failed;
760 my $last = $min = shift @failed;
763 for (@failed, $failed[-1]) { # don't forget the last one
764 if ($_ > $last+1 || $_ == $last) {
768 push @canon, "$min-$last";
775 push @result, "FAILED tests @canon\n";
776 $canon = join ' ', @canon;
778 push @result, "FAILED test $last\n";
782 push @result, "\tFailed $failed/$max tests, ";
784 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
786 push @result, "?% okay";
788 my $ender = 's' x ($skipped > 1);
790 my $good = $max - $failed - $skipped;
791 my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";
793 my $goodper = sprintf("%.2f",100*($good/$max));
794 $skipmsg .= "$goodper%)";
798 push @result, $skipmsg;
801 my $txt = join "", @result;
818 C<&runtests> is exported by Test::Harness by default.
820 C<$verbose>, C<$switches> and C<$debug> are exported upon request.
826 =item C<All tests successful.\nFiles=%d, Tests=%d, %s>
828 If all tests are successful some statistics about the performance are
829 printed.
831 =item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
833 For any single script that has failing subtests statistics like the
834 above are printed.
836 =item C<Test returned status %d (wstat %d)>
838 For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
839 and C<$?> are printed in a message similar to the above.
841 =item C<Failed 1 test, %.2f%% okay. %s>
843 =item C<Failed %d/%d tests, %.2f%% okay. %s>
845 If not all tests were successful, the script dies with one of the
846 above messages.
848 =item C<FAILED--Further testing stopped: %s>
850 If a single subtest decides that further testing will not make sense,
851 the script dies with this message.
859 =item C<HARNESS_ACTIVE>
861 Harness sets this before executing the individual tests. This allows
862 the tests to determine if they are being executed through the harness
863 or by any other means.
865 =item C<HARNESS_COLUMNS>
867 This value will be used for the width of the terminal. If it is not
868 set then it will default to C<COLUMNS>. If this is not set, it will
869 default to 80. Note that users of Bourne-sh based shells will need to
870 C<export COLUMNS> for this module to use that variable.
872 =item C<HARNESS_COMPILE_TEST>
874 When true it will make harness attempt to compile the test using
875 C<perlcc> before running it.
877 B<NOTE> This currently only works when sitting in the perl source
878 directory!
880 =item C<HARNESS_DEBUG>
882 If true, Test::Harness will print debugging information about itself as
883 it runs the tests. This is different from C<HARNESS_VERBOSE>, which prints
884 the output from the test being run. Setting C<$Test::Harness::Debug> will
885 override this, or you can use the C<-d> switch in the F<prove> utility.
887 =item C<HARNESS_FILELEAK_IN_DIR>
889 When set to the name of a directory, harness will check after each
890 test whether new files appeared in that directory, and report them as
892 LEAKED FILES: scr.tmp 0 my.db
894 If relative, directory name is with respect to the current directory at
895 the moment runtests() was called. Putting absolute path into
896 C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
898 =item C<HARNESS_IGNORE_EXITCODE>
900 Makes harness ignore the exit status of child processes when defined.
902 =item C<HARNESS_NOTTY>
904 When set to a true value, forces the harness to behave as though STDOUT were
905 not a console. You may need to set this if you don't want harness to
906 output more frequent progress messages using carriage returns. Some
907 consoles may not handle carriage returns properly (which results in a
908 somewhat messy output).
910 =item C<HARNESS_PERL>
912 Usually your tests will be run by C<$^X>, the currently-executing Perl.
913 However, you may want to have it run by a different executable, such as
914 a threading perl, or a different version.
916 If you're using the F<prove> utility, you can use the C<--perl> switch.
918 =item C<HARNESS_PERL_SWITCHES>
920 Its value will be prepended to the switches used to invoke perl on
921 each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
922 run all tests with all warnings enabled.
924 =item C<HARNESS_VERBOSE>
926 If true, Test::Harness will output the verbose results of running
927 its tests. Setting C<$Test::Harness::verbose> will override this,
928 or you can use the C<-v> switch in the F<prove> utility.
934 Here's how Test::Harness tests itself:
936 $ cd ~/src/devel/Test-Harness
937 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
938 $verbose=0; runtests @ARGV;' t/*.t
939 Using /home/schwern/src/devel/Test-Harness/blib
940 t/base..............ok
941 t/nonumbers.........ok
942 t/ok................ok
943 t/test-harness......ok
944 All tests successful.
945 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
949 The included F<prove> utility for running test scripts from the command line,
950 L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
951 the underlying timing routines, and L<Devel::Cover> for test coverage
952 analysis.
956 Provide a way of running tests quietly (ie. no printing) for automated
957 validation of tests. This will probably take the form of a version
958 of runtests() which rather than printing its output returns raw data
959 on the state of the tests. (Partially done in Test::Harness::Straps)
963 Fix HARNESS_COMPILE_TEST without breaking its core usage.
965 Figure a way to report test names in the failure summary.
967 Rework the test summary so long test names are not truncated as badly.
968 (Partially done with new skip test styles)
970 Add option for coverage analysis.
974 Implement Straps total_results()
978 Completely redo the print summary code.
980 Implement Straps callbacks. (experimentally implemented)
982 Straps->analyze_file() not taint clean, don't know if it can be
984 Fix that damned VMS nit.
986 HARNESS_TODOFAIL to display TODO failures
988 Add a test for verbose.
990 Change internal list of test results to a hash.
992 Fix stats display when there's an overrun.
994 Fix so perls with spaces in the filename work.
996 Keep whittling away at _run_all_tests()
998 Clean up how the summary is printed. Get rid of those damned formats.
1002 HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
1003 directory.
1005 Please use the CPAN bug ticketing system at L<http://rt.cpan.org/>.
1006 You can also mail bugs, fixes and enhancements to
1007 C<< <bug-test-harness at rt.cpan.org> >>.
1011 Either Tim Bunce or Andreas Koenig, we don't know. What we know for
1012 sure is that it was inspired by Larry Wall's TEST script that came
1013 with perl distributions for ages. Numerous anonymous contributors
1014 exist. Andreas Koenig held the torch for many years, and then
1017 Current maintainer is Andy Lester C<< <andy at petdance.com> >>.
1022 by Michael G Schwern C<< <schwern at pobox.com> >>,
1023 Andy Lester C<< <andy at petdance.com> >>.
1025 This program is free software; you can redistribute it and/or
1026 modify it under the same terms as Perl itself.
1028 See L<http://www.perl.com/perl/misc/Artistic.html>.