# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.11 2001/05/23 18:24:41 schwern Exp $
use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches
            @ISA @EXPORT @EXPORT_OK
           );

# Backwards compatibility for exportable variable names.
*verbose  = \$Verbose;
*switches = \$Switches;

$Have_Devel_Corestack = 0;
$ENV{HARNESS_ACTIVE} = 1;

# Some experimental versions of OS/2 builds have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;             # Some shells have trouble with a full line of text.
=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

    use Test::Harness;

    runtests(@test_files);

=head1 DESCRIPTION
B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)
Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M"> with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.
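For instance, a minimal conforming test script (a sketch; any script
that prints a plan and numbered C<ok> lines will do) could look like
this:

    #!/usr/bin/perl -w

    print "1..2\n";             # the plan: two tests follow

    print "ok 1\n";             # test 1 passed

    my $ok = (1 + 1 == 2);      # the condition under test
    print $ok ? "ok 2\n" : "not ok 2\n";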
=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

=over 4

=item B<'1..M'>
This header tells how many tests there will be.  It should be the
first line output by your test program (but it's okay if it's preceded
by comments).

In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted (but not
encouraged) for the 1..M header to appear as the B<last> line output
by your test (again, it can be followed by further comments).  But we
strongly encourage you to put it first.
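So output such as the following (a sketch consistent with the rules
above) is accepted, with the plan arriving last:

    ok 1
    ok 2
    ok 3
    1..3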
Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.
=item B<'ok', 'not ok'.  Ok?>
Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user.  Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.
=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  It is tolerated if the test numbers after 'ok' are
omitted.  In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again.  So a test
script that prints

    1..6
    not ok
    ok
    not ok
    ok
    ok

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay
=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without altering the behavior otherwise.
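A sketch of typical usage, using the exportable alias C<$verbose>
described under EXPORT below:

    use Test::Harness qw(&runtests $verbose);

    $verbose = 1;       # echo each test script's STDOUT as it runs
    runtests(@test_files);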
=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set perl command line options used for running the test
script(s).  The default value is C<-w>.
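For example, to run every test script with warnings and taint checks
enabled (a sketch, using the exportable alias C<$switches>):

    use Test::Harness qw(&runtests $switches);

    $switches = '-w -T';    # options passed to perl for each test script
    runtests(@test_files);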
=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

    ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include an explanation in a C<1..0> line
emitted if the test script is skipped completely:

    1..0 # Skipped: no leverage found
=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
afterwards is the thing that has to be done before this test will
succeed.

    not ok 13 # TODO harness the power of the atom

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list.  They are
B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.  This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.
=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately.  In that case the test script prints the magic words

    Bail out!

to standard output.  Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.
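For example (the reason text here is made up):

    print "Bail out! Couldn't connect to the test database.\n";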
=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

    # Life is good, the sun is shining, RAM is cheap.
    not ok 1
    # got 'Bush' expected 'Gore'
=item B<Anything else>

Any other output Test::Harness sees, it will silently ignore B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
test script, please use a comment.
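For example, to emit a diagnostic without confusing the harness (a
sketch; C<$count> is a made-up variable):

    print "# widget count is $count\n";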
=back

It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    t/waterloo..........dubious
        Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
        Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with non-zero status indicating something dubious happened.
The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test: the raw C<$?> value, which combines the
exit status with any signal and core-dump information.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back
=head2 Functions

Test::Harness currently only has one function; here it is.

=over 4

=item B<runtests>

    my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok, false otherwise.
=for _private
This is just _run_all_tests() plus _show_results().

=cut

sub runtests {
    my(@tests) = @_;

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = ($tot->{bad} == 0 && $tot->{max});

    die q{Assert '$ok xor keys %$failedtests' failed!}
      unless $ok xor keys %$failedtests;

    return $ok;
}
=item B<_globdir>

    my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=cut

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}
=item B<_run_all_tests>

    my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report).  $total is a hash ref summary of all the tests run.  Its keys
and values are these:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script, each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string)

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.
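Here is a sketch of how the two return values might be inspected; bear
in mind that _run_all_tests() is a private function whose interface may
change:

    my($total, $failed) = Test::Harness::_run_all_tests(@test_files);

    printf "%d of %d test files passed\n",
           $total->{good}, $total->{files};

    for my $script (sort keys %$failed) {
        my $t = $failed->{$script};
        print "$script: failed $t->{failed} of $t->{max} ($t->{canon})\n";
    }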
=cut

sub _run_all_tests {
    my(@tests) = @_;

    my(%failedtests);

    # Test-wide totals (see the key list above).
    my(%tot) = (bonus => 0, max => 0, ok => 0, sub_skipped => 0,
                files => 0, good => 0, bad => 0, skipped => 0,
                tests => scalar @tests);

    # pass -I flags to children
    my $old5lib = $ENV{PERL5LIB};

    # VMS has a 255-byte limit on the length of %ENV entries, so
    # toss the ones that involve perl_root, the install location
    my $new5lib;
    if ($^O eq 'VMS') {
        $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
        $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g;
    }
    else {
        $new5lib = join($Config{path_sep}, @INC);
    }

    local($ENV{'PERL5LIB'}) = $new5lib;

    my @dir_files;
    @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = Benchmark->new;
    foreach my $tfile (@tests) {
        my($leader, $ml) = _mk_leader($tfile);

        my $fh = _open_test($tfile);

        # state of the current test.
        my %test = (ok => 0, 'next' => 0, max => 0, failed => [],
                    bonus => 0, skipped => 0,
                    skip_reason => undef, ml => $ml);

        my($seen_header, $tests_seen) = (0, 0);
        while (<$fh>) {
            if( _parse_header($_, \%test, \%tot) ) {
                warn "Test header seen twice!\n" if $seen_header;

                $seen_header = 1;

                warn "1..M can only appear at the beginning or end of tests\n"
                  if $tests_seen && $test{max} < $tests_seen;
            }
            elsif( _parse_test_line($_, \%test, \%tot) ) {
                $tests_seen++;
            }
        }

        my($estatus, $wstatus) = _close_fh($fh);

        my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;

        if ($wstatus) {
            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                   $estatus, $wstatus);
            $failedtests{$tfile}{name} = $tfile;
        }
        elsif ($allok) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                  if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                  if $test{bonus};
                print "$test{ml}ok, ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_reason}) {
                print "skipped: $test{skip_reason}\n";
            } else {
                print "skipped test on this platform\n";
            }
            $tot{good}++;
        }
        else {
            # Count any tests that never ran as failures.
            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};
            }
            if (@{$test{failed}}) {
                my ($txt, $canon) = canonfailed($test{max},$test{skipped},
                                                @{$test{failed}});
                print "$test{ml}$txt";
                $failedtests{$tfile} = { canon   => $canon,
                                         max     => $test{max},
                                         failed  => scalar @{$test{failed}},
                                         name    => $tfile,
                                         percent => 100*(scalar @{$test{failed}})/$test{max},
                                         estat   => '',
                                         wstat   => '',
                                       };
            } elsif ($test{max}) {
                print "Don't know which tests failed: got $test{ok} ok, ".
                      "expected $test{max}\n";
                $failedtests{$tfile} = { canon   => '??',
                                         max     => $test{max},
                                         failed  => '??',
                                         name    => $tfile,
                                         percent => undef,
                                         estat   => '',
                                         wstat   => '',
                                       };
            } elsif ($test{'next'} == 0) {
                print "FAILED before any test output arrived\n";
                $failedtests{$tfile} = { canon   => '??',
                                         max     => '??',
                                         failed  => '??',
                                         name    => $tfile,
                                         percent => undef,
                                         estat   => '',
                                         wstat   => '',
                                       };
            }
        }

        $tot{sub_skipped} += $test{skipped};
        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }
    $tot{bench} = timediff(Benchmark->new, $t_start);

    if (defined $old5lib) {
        $ENV{PERL5LIB} = $old5lib;
    }
    else {
        delete $ENV{PERL5LIB};
    }

    return(\%tot, \%failedtests);
}
=item B<_mk_leader>

    my($leader, $ml) = _mk_leader($test_file);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such).  $ml may be empty if Test::Harness doesn't think you're
on TTY.

=cut

sub _mk_leader {
    my $te = shift;
    chomp($te);     # was chop(); chomp only strips a trailing newline

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x (20 - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}
sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if ($tot->{bad} == 0 && $tot->{max}) {
        print "All tests successful$bonusmsg.\n";
    } elsif ($tot->{tests} == 0) {
        die "FAILED--no tests were run for some reason.\n";
    } elsif ($tot->{max} == 0) {
        my $blurb = $tot->{tests} == 1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                              $tot->{max} - $tot->{ok}, $tot->{max},
                              100*$tot->{ok}/$tot->{max};

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }

        $bonusmsg =~ s/^,\s*//;
        print "$bonusmsg.\n" if $bonusmsg;
        die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
            "$subpct\n";
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}
sub _parse_header {
    my($line, $test, $tot) = @_;

    my $is_header = 0;

    print $line if $Verbose;

    # 1..10 todo 4 7 10
    if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
        $test->{max} = $1;
        for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }

        $tot->{max} += $test->{max};
        $is_header = 1;
    }
    # 1..0 # skip Why? Because I said so!
    elsif ($line =~ /^1\.\.([0-9]+)
                      (\s*\#\s*[Ss]kip\S*\s* (.+))?
                    /x
          )
    {
        $test->{max} = $1;
        $tot->{max} += $test->{max};

        $test->{'next'} = 1 unless $test->{'next'};
        $test->{skip_reason} = $3 if not $test->{max} and defined $3;
        $is_header = 1;
    }

    return $is_header;
}
sub _open_test {
    my($test) = shift;

    my $s = _set_switches($test);

    # XXX This is WAY too core specific!
    my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
                ? "./perl -I../lib ../utils/perlcc $test "
                  . "-r 2>> ./compilelog |"
                : "$^X $s $test|";
    $cmd = "MCR $cmd" if $^O eq 'VMS';

    if( open(PERL, $cmd) ) {
        return \*PERL;
    }
    else {
        print "can't run $test. $!\n";
        return;
    }
}
sub _parse_test_line {
    my($line, $test, $tot) = @_;

    if ($line =~ /^(not\s+)?ok\b/i) {
        my $this = $test->{'next'} || 1;

        if ($line =~ /^(not )?ok\s*(\d*)(\s*#.*)?/) {
            my($not, $tnum, $extra) = ($1, $2, $3);

            $this = $tnum if $tnum;

            # Declare first, assign conditionally; "my ... if ..." is a bug.
            my($type, $reason);
            ($type, $reason) = $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/
              if defined $extra;

            my($istodo, $isskip);
            if( defined $type ) {
                $istodo = $type =~ /TODO/;
                $isskip = $type =~ /skip/i;
            }

            $test->{todo}{$tnum} = 1 if $istodo;

            if( $not ) {
                print "$test->{ml}NOK $this" if $test->{ml};
                if (!$test->{todo}{$this}) {
                    push @{$test->{failed}}, $this;
                }
            }
            else {
                print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
                $test->{ok}++;
                $tot->{ok}++;
                $test->{skipped}++ if $isskip;

                if (defined $reason and defined $test->{skip_reason}) {
                    # print "was: '$skip_reason' new '$reason'\n";
                    $test->{skip_reason} = 'various reasons'
                      if $test->{skip_reason} ne $reason;
                } elsif (defined $reason) {
                    $test->{skip_reason} = $reason;
                }

                $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
            }
        }
        elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
            $this = $1 if $1 > 0;
            print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
            $test->{ok}++;
            $tot->{ok}++;
        }
        else {
            # an ok or not ok not matching the 3 cases above...
            # just ignore it for compatibility with TEST
        }

        if ($this > $test->{'next'}) {
            # print "Test output counter mismatch [test $this]\n";
            # no need to warn probably
            push @{$test->{failed}}, $test->{'next'}..$this-1;
        }
        elsif ($this < $test->{'next'}) {
            # we have seen more "ok" lines than the number suggests
            print "Confused test output: test $this answered after ".
                  "test ", $test->{'next'}-1, "\n";
            $test->{'next'} = $this;
        }
        $test->{'next'} = $this + 1;
    }
    elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
        die "FAILED--Further testing stopped" .
            ($1 ? ": $1\n" : ".\n");
    }
}
sub _bonusmsg {
    my($tot) = @_;

    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
      if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
                     . " skipped";
    }

    return $bonusmsg;
}
# VMS has some subtle nastiness with closing the test files.
sub _close_fh {
    my($fh) = shift;

    close($fh);  # must close to reap child resource values

    my $wstatus = $Ignore_Exitcode ? 0 : $?;  # Can trust $? ?
    my $estatus;
    $estatus = ($^O eq 'VMS'
                  ? eval 'use vmsish "status"; $estatus = $?'
                  : $wstatus >> 8);

    return($estatus, $wstatus);
}
# Set up the command-line switches to run perl as.
sub _set_switches {
    my($test) = shift;

    open(TEST, $test) or print "can't open $test. $!\n";
    my $first = <TEST>;
    my $s = $Switches;
    $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
      if exists $ENV{'HARNESS_PERL_SWITCHES'};
    $s .= join " ", q[ "-T"], map {qq["-I$_"]} @INC
      if $first =~ /^#!.*\bperl.*-\w*T/;

    close(TEST) or print "can't close $test. $!\n";

    return $s;
}
# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my($failed, $canon, $percent) = ('??', '??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus, $wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
        print "\tafter all the subtests completed successfully\n";
        $failed = 0;    # But we do not set $canon!
    }
    else {
        push @{$test->{failed}}, $test->{'next'}..$test->{max};
        $failed = @{$test->{failed}};
        (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},
                                        @{$test->{failed}});
        $percent = 100*(scalar @{$test->{failed}})/$test->{max};
        print "DIED. ", $txt;
    }

    return { canon  => $canon,   max     => $test->{max} || '??',
             failed => $failed,  percent => $percent,
             estat  => $estatus, wstat   => $wstatus,
           };
}
sub _garbled_output {
    my($gibberish) = shift;
    warn "Confusing test output: '$gibberish'\n";
}
sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str   = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)
                  . $middle_str
                  . $list_str . "\n"
                  . "-" x $Columns
                  . "\n.\n";

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . "  @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "\n}\n"
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'
              . "\n.\n";

    eval $fmt_top;
    die $@ if $@;
    eval $fmt;
    die $@ if $@;

    return($fmt_top, $fmt);
}
my $tried_devel_corestack;

sub corestatus {
    my($st) = @_;

    eval { require 'wait.ph' };
    my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;

    eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
      unless $tried_devel_corestack++;

    $ret;
}
sub canonfailed ($@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;
    my(@result, @canon, $min, $canon);
    my $last = $min = shift @failed;

    if (@failed) {
        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                if ($min == $last) {
                    push @canon, $last;
                } else {
                    push @canon, "$min-$last";
                }
                $min = $_;
            }
            $last = $_;
        }
        local $" = ", ";
        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;
    } else {
        push @result, "FAILED test $last\n";
        $canon = $last;
    }

    push @result, "\tFailed $failed/$max tests, ";
    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";

    my $ender = 's' x ($skipped > 1);
    my $good = $max - $failed - $skipped;
    my $goodper = sprintf("%.2f",100*($good/$max));
    push @result, " (-$skipped skipped test$ender: $good okay, ".
                  "$goodper%)"
      if $skipped;
    push @result, "\n";

    my $txt = join "", @result;
    return($txt, $canon);
}
=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.
=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped%s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back
=head1 ENVIRONMENT

=over 4

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console.  You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns.  Some consoles may not handle carriage returns properly (which
results in a somewhat messy output).
=item C<HARNESS_COMPILE_TEST>

When true it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db
If the directory name is relative, it is interpreted with respect to
the current directory at the moment runtests() was called.  Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more predictable
results.
=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test.  For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.
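A wrapper script might set it before calling runtests() (a sketch):

    $ENV{HARNESS_PERL_SWITCHES} = '-W';   # prepended for every test script
    runtests(@test_files);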
=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal.  If it is not
set then it will default to C<COLUMNS>.  If this is not set, it will
default to 80.  Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.
=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests.  This allows
the tests to determine if they are being executed through the harness
or by any other means.
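For example, a test script might check it to decide whether to suppress
interactive prompts (a sketch):

    if ($ENV{HARNESS_ACTIVE}) {
        # running under Test::Harness; skip interactive prompts
    }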
=back

=head1 EXAMPLE

Here's how Test::Harness tests itself:

    $ cd ~/src/devel/Test-Harness
    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
        $verbose=0; runtests @ARGV;' t/*.t
    Using /home/schwern/src/devel/Test-Harness/blib
    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.
=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know.  What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages.  Numerous anonymous contributors
exist.  Andreas Koenig held the torch for many years.

Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
=head1 TODO

Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests.  This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.

Merge back into bleadperl.

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

Keep whittling away at _run_all_tests().

Clean up how the summary is printed.  Get rid of those damned formats.
=head1 BUGS

Test::Harness uses $^X to determine the perl binary to run the tests
with.  Test scripts running via the shebang (C<#!>) line may not be
portable because $^X is not consistent for shebang scripts across
platforms.  This is no problem when Test::Harness is run with an
absolute path to the perl binary or when $^X can be found in the path.

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

=cut