d667a7e6 1# -*- Mode: cperl; cperl-indent-level: 4 -*-
356733da 2# $Id: Harness.pm,v 1.14.2.18 2002/04/25 05:04:35 schwern Exp $
b82fa0b7 3
a0d0e21e 4package Test::Harness;
5
b82fa0b7 6require 5.004;
13287dd5 7use Test::Harness::Straps;
8use Test::Harness::Assert;
a0d0e21e 9use Exporter;
10use Benchmark;
4633a7c4 11use Config;
760ac839 12use strict;
13
b82fa0b7 14use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
15 $Columns $verbose $switches
16 @ISA @EXPORT @EXPORT_OK
17 );
4633a7c4 18
9c5c68c8 19# Backwards compatibility for exportable variable names.
20*verbose = \$Verbose;
21*switches = \$Switches;
22
23$Have_Devel_Corestack = 0;
24
356733da 25$VERSION = '2.03';
9b0ceca9 26
f19ae7a7 27$ENV{HARNESS_ACTIVE} = 1;
28
13287dd5 29END {
30 # For VMS.
31 delete $ENV{HARNESS_ACTIVE};
32}
33
9b0ceca9 34# Some experimental versions of OS/2 build have broken $?
9c5c68c8 35my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
36
37my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
9b0ceca9 38
13287dd5 39my $Strap = Test::Harness::Straps->new;
17a79f5b 40
9c5c68c8 41@ISA = ('Exporter');
42@EXPORT = qw(&runtests);
43@EXPORT_OK = qw($verbose $switches);
4633a7c4 44
356733da 45$Verbose = $ENV{HARNESS_VERBOSE} || 0;
9c5c68c8 46$Switches = "-w";
47$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
b82fa0b7 48$Columns--; # Some shells have trouble with a full line of text.
49
50
51=head1 NAME
52
53Test::Harness - run perl standard test scripts with statistics
54
55=head1 SYNOPSIS
56
57 use Test::Harness;
58
59 runtests(@test_files);
60
61=head1 DESCRIPTION
a0d0e21e 62
b82fa0b7 63B<STOP!> If all you want to do is write a test script, consider using
64Test::Simple. Otherwise, read on.
65
66(By using the Test module, you can write test scripts without
67knowing the exact output this module expects. However, if you need to
68know the specifics, read on!)
69
70Perl test scripts print to standard output C<"ok N"> for each single
71test, where C<N> is an increasing sequence of integers. The first line
72output by a standard test script is C<"1..M"> with C<M> being the
73number of tests that should be run within the test
script. Test::Harness::runtests(@tests) runs all the test scripts
75named as arguments and checks standard output for the expected
76C<"ok N"> strings.
77
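As a minimal illustration (not taken from the module itself), a
conforming test script can be as simple as the following sketch; the
two checks are made up for the example:

    #!perl -w
    # A hand-rolled test script: the plan comes first, then one
    # "ok N" or "not ok N" line per test.
    print "1..2\n";
    print 2 + 2 == 4        ? "ok 1\n" : "not ok 1\n";
    print "perl" =~ /^perl/ ? "ok 2\n" : "not ok 2\n";
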
78After all tests have been performed, runtests() prints some
79performance statistics that are computed by the Benchmark module.
80
81=head2 The test script output
82
83The following explains how Test::Harness interprets the output of your
84test program.
85
86=over 4
87
88=item B<'1..M'>
89
356733da 90This header tells how many tests there will be. For example, C<1..10>
91means you plan on running 10 tests. This is a safeguard in case your
92test dies quietly in the middle of its run.
93
94It should be the first non-comment line output by your test program.
b82fa0b7 95
356733da 96In certain instances, you may not know how many tests you will
97ultimately be running. In this case, it is permitted for the 1..M
98header to appear as the B<last> line output by your test (again, it
99can be followed by further comments).
b82fa0b7 100
101Under B<no> circumstances should 1..M appear in the middle of your
102output or more than once.
103
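For example, a script that only discovers its test count at run time
might emit the plan last; here is a sketch with made-up test data:

    my @cases = ([1, 1, 2], [2, 2, 4]);    # made-up test data
    my $ran   = 0;
    foreach my $case (@cases) {
        my($x, $y, $want) = @$case;
        $ran++;
        my $status = ($x + $y == $want) ? "ok" : "not ok";
        print "$status $ran\n";
    }
    print "1..$ran\n";    # the plan, emitted last as permitted above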
104
105=item B<'ok', 'not ok'. Ok?>
106
Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user. Lines written to standard
109output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
110runtests(). All other lines are discarded.
111
112C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
113
114
115=item B<test numbers>
116
117Perl normally expects the 'ok' or 'not ok' to be followed by a test
118number. It is tolerated if the test numbers after 'ok' are
omitted. In this case Test::Harness temporarily maintains its own
120counter until the script supplies test numbers again. So the following
121test script
122
123 print <<END;
124 1..6
125 not ok
126 ok
127 not ok
128 ok
129 ok
130 END
131
132will generate
133
134 FAILED tests 1, 3, 6
135 Failed 3/6 tests, 50.00% okay
136
13287dd5 137=item B<test names>
b82fa0b7 138
13287dd5 139Anything after the test number but before the # is considered to be
140the name of the test.
b82fa0b7 141
13287dd5 142 ok 42 this is the name of the test
b82fa0b7 143
13287dd5 144Currently, Test::Harness does nothing with this information.
b82fa0b7 145
146=item B<Skipping tests>
147
148If the standard output line contains the substring C< # Skip> (with
149variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test. If the whole test script succeeds, the
151count of skipped tests is included in the generated output.
152C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
356733da 153for skipping.
b82fa0b7 154
155 ok 23 # skip Insufficient flogiston pressure.
156
Similarly, one can include an explanation in a C<1..0> line
158emitted if the test script is skipped completely:
159
160 1..0 # Skipped: no leverage found
161
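For instance, a script that only makes sense on a particular platform
might skip itself entirely; the platform check here is just an
illustration:

    if ($^O ne 'MSWin32') {
        print "1..0 # Skipped: this test is only relevant on Windows\n";
        exit;
    }
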
162=item B<Todo tests>
163
164If the standard output line contains the substring C< # TODO> after
165C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
166afterwards is the thing that has to be done before this test will
167succeed.
168
169 not ok 13 # TODO harness the power of the atom
170
13287dd5 171=begin _deprecated
172
173Alternatively, you can specify a list of what tests are todo as part
174of the test header.
175
176 1..23 todo 5 12 23
177
178This only works if the header appears at the beginning of the test.
179
180This style is B<deprecated>.
181
182=end _deprecated
183
b82fa0b7 184These tests represent a feature to be implemented or a bug to be fixed
185and act as something of an executable "thing to do" list. They are
186B<not> expected to succeed. Should a todo test begin succeeding,
187Test::Harness will report it as a bonus. This indicates that whatever
188you were supposed to do has been done and you should promote this to a
189normal test.
190
191=item B<Bail out!>
192
193As an emergency measure, a test script can decide that further tests
194are useless (e.g. missing dependencies) and testing should stop
195immediately. In that case the test script prints the magic words
196
197 Bail out!
198
199to standard output. Any message after these words will be displayed by
200C<Test::Harness> as the reason why testing is stopped.
201
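For instance, a test script might bail out when a prerequisite is
missing; Some::Module here is a made-up placeholder:

    unless (eval { require Some::Module; 1 }) {
        print "Bail out! Some::Module is not installed\n";
        exit;
    }
    print "1..3\n";
    # ... the real tests would follow here ...
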
202=item B<Comments>
203
204Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
206ignore them.
207
208 ok 1
209 # Life is good, the sun is shining, RAM is cheap.
210 not ok 2
211 # got 'Bush' expected 'Gore'
212
213=item B<Anything else>
214
Test::Harness will silently ignore any other output it sees B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
217test script, please use a comment.
218
219=back
220
221
13287dd5 222=head2 Taint mode
223
224Test::Harness will honor the C<-T> in the #! line on your test files. So
225if you begin a test with:
226
227 #!perl -T
228
229the test will be run with taint mode on.
230
231
=head2 Configuration variables
233
234These variables can be used to configure the behavior of
235Test::Harness. They are exported on request.
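A short usage sketch follows the list below.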
236
237=over 4
238
239=item B<$Test::Harness::verbose>
240
241The global variable $Test::Harness::verbose is exportable and can be
242used to let runtests() display the standard output of the script
243without altering the behavior otherwise.
244
245=item B<$Test::Harness::switches>
246
247The global variable $Test::Harness::switches is exportable and can be
248used to set perl command line options used for running the test
249script(s). The default value is C<-w>.
250
251=back
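
A brief usage sketch; the extra C<-Ilib> switch is only an example:

    use Test::Harness qw(runtests $verbose $switches);

    $verbose  = 1;            # show each test's standard output
    $switches = "-w -Ilib";   # switches passed to perl for every test
    runtests(glob "t/*.t");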
252
253
b82fa0b7 254=head2 Failure
255
It will happen: your tests will fail. After you mop up your ego, you
257can begin examining the summary report:
258
2fe373ce 259 t/base..............ok
260 t/nonumbers.........ok
261 t/ok................ok
262 t/test-harness......ok
263 t/waterloo..........dubious
b82fa0b7 264 Test returned status 3 (wstat 768, 0x300)
265 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
266 Failed 10/20 tests, 50.00% okay
267 Failed Test Stat Wstat Total Fail Failed List of Failed
268 -----------------------------------------------------------------------
269 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
270 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
271
272Everything passed but t/waterloo.t. It failed 10 of 20 tests and
273exited with non-zero status indicating something dubious happened.
274
275The columns in the summary report mean:
276
277=over 4
278
279=item B<Failed Test>
280
281The test file which failed.
282
283=item B<Stat>
284
285If the test exited with non-zero, this is its exit status.
286
287=item B<Wstat>
288
289The wait status of the test I<umm, I need a better explanation here>.
290
291=item B<Total>
292
293Total number of tests expected to run.
294
295=item B<Fail>
296
297Number which failed, either from "not ok" or because they never ran.
298
299=item B<Failed>
300
301Percentage of the total tests which failed.
302
303=item B<List of Failed>
304
305A list of the tests which failed. Successive failures may be
306abbreviated (ie. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
30720 failed).
308
309=back
310
311
312=head2 Functions
313
Test::Harness currently has only one function; here it is.
315
316=over 4
317
318=item B<runtests>
319
320 my $allok = runtests(@test_files);
321
322This runs all the given @test_files and divines whether they passed
323or failed based on their output to STDOUT (details above). It prints
324out each individual test which failed along with a summary report and
325a how long it all took.
326
327It returns true if everything was ok, false otherwise.
328
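Note that when tests fail, runtests() normally die()s with a summary
message (see DIAGNOSTICS below), so a caller that wants to continue
regardless can wrap the call in eval; a sketch:

    use Test::Harness;

    my @test_files = @ARGV;
    my $all_passed = eval { runtests(@test_files) };
    warn $@ ? "Test run aborted: $@" : "Some tests failed\n"
        unless $all_passed;
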
329=for _private
330This is just _run_all_tests() plus _show_results()
331
332=cut
17a79f5b 333
a0d0e21e 334sub runtests {
335 my(@tests) = @_;
9c5c68c8 336
b82fa0b7 337 local ($\, $,);
338
339 my($tot, $failedtests) = _run_all_tests(@tests);
9c5c68c8 340 _show_results($tot, $failedtests);
341
2fe373ce 342 my $ok = _all_ok($tot);
b82fa0b7 343
13287dd5 344 assert(($ok xor keys %$failedtests),
345 q{ok status jives with $failedtests});
b82fa0b7 346
347 return $ok;
348}
349
350=begin _private
351
2fe373ce 352=item B<_all_ok>
353
354 my $ok = _all_ok(\%tot);
355
356Tells you if this test run is overall successful or not.
357
358=cut
359
360sub _all_ok {
361 my($tot) = shift;
362
363 return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
364}
365
b82fa0b7 366=item B<_globdir>
367
368 my @files = _globdir $dir;
369
370Returns all the files in a directory. This is shorthand for backwards
371compatibility on systems where glob() doesn't work right.
372
373=cut
374
375sub _globdir {
376 opendir DIRH, shift;
377 my @f = readdir DIRH;
378 closedir DIRH;
379
380 return @f;
9c5c68c8 381}
382
b82fa0b7 383=item B<_run_all_tests>
384
385 my($total, $failed) = _run_all_tests(@test_files);
386
387Runs all the given @test_files (as runtests()) but does it quietly (no
388report). $total is a hash ref summary of all the tests run. Its keys
and values are these:
390
391 bonus Number of individual todo tests unexpectedly passed
392 max Number of individual tests ran
393 ok Number of individual tests passed
394 sub_skipped Number of individual tests skipped
2fe373ce 395 todo Number of individual todo tests
b82fa0b7 396
397 files Number of test files ran
398 good Number of test files passed
399 bad Number of test files failed
400 tests Number of test files originally given
401 skipped Number of test files skipped
402
403If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
404test.
405
406$failed is a hash ref of all the test scripts which failed. Each key
407is the name of a test script, each value is another hash representing
408how that script failed. Its keys are these:
9c5c68c8 409
b82fa0b7 410 name Name of the test which failed
411 estat Script's exit value
412 wstat Script's wait status
413 max Number of individual tests
414 failed Number which failed
415 percent Percentage of tests which failed
416 canon List of tests which failed (as string).
417
418Needless to say, $failed should be empty if everything passed.
419
420B<NOTE> Currently this function is still noisy. I'm working on it.
421
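A sketch of how the two return values might be inspected (this calls
the private function directly, which is not part of the public
interface, and assumes @test_files holds the script names):

    my($tot, $failed) = Test::Harness::_run_all_tests(@test_files);

    printf "%d/%d test scripts passed\n", $tot->{good}, $tot->{files};
    foreach my $script (sort keys %$failed) {
        print "$script failed tests: $failed->{$script}{canon}\n";
    }
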
422=cut
423
424sub _run_all_tests {
9c5c68c8 425 my(@tests) = @_;
a0d0e21e 426 local($|) = 1;
9c5c68c8 427 my(%failedtests);
428
429 # Test-wide totals.
430 my(%tot) = (
431 bonus => 0,
432 max => 0,
433 ok => 0,
434 files => 0,
435 bad => 0,
436 good => 0,
437 tests => scalar @tests,
438 sub_skipped => 0,
2fe373ce 439 todo => 0,
9c5c68c8 440 skipped => 0,
2fe373ce 441 bench => 0,
9c5c68c8 442 );
774d564b 443
13287dd5 444 local($ENV{'PERL5LIB'}) = $Strap->_INC2PERL5LIB;
a0d0e21e 445
b82fa0b7 446 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
a0d0e21e 447 my $t_start = new Benchmark;
9c5c68c8 448
13287dd5 449 my $width = _leader_width(@tests);
b82fa0b7 450 foreach my $tfile (@tests) {
13287dd5 451 $Strap->_reset_file_state;
452
7a315204 453 my($leader, $ml) = _mk_leader($tfile, $width);
b82fa0b7 454 print $leader;
9c5c68c8 455
b82fa0b7 456 my $fh = _open_test($tfile);
9c5c68c8 457
356733da 458 $tot{files}++;
459
9c5c68c8 460 # state of the current test.
461 my %test = (
462 ok => 0,
b82fa0b7 463 'next' => 0,
9c5c68c8 464 max => 0,
465 failed => [],
466 todo => {},
467 bonus => 0,
468 skipped => 0,
469 skip_reason => undef,
470 ml => $ml,
471 );
472
473 my($seen_header, $tests_seen) = (0,0);
2fe373ce 474 while (<$fh>) {
13287dd5 475 print if $Verbose;
476
477 $Strap->{line}++;
9c5c68c8 478 if( _parse_header($_, \%test, \%tot) ) {
479 warn "Test header seen twice!\n" if $seen_header;
480
481 $seen_header = 1;
482
483 warn "1..M can only appear at the beginning or end of tests\n"
484 if $tests_seen && $test{max} < $tests_seen;
485 }
486 elsif( _parse_test_line($_, \%test, \%tot) ) {
487 $tests_seen++;
d667a7e6 488 }
9c5c68c8 489 # else, ignore it.
2fe373ce 490 }
9c5c68c8 491
492 my($estatus, $wstatus) = _close_fh($fh);
493
b82fa0b7 494 my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;
495
2fe373ce 496 if ($wstatus) {
b82fa0b7 497 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
9c5c68c8 498 $estatus, $wstatus);
b82fa0b7 499 $failedtests{$tfile}{name} = $tfile;
2fe373ce 500 }
b82fa0b7 501 elsif ($allok) {
2fe373ce 502 if ($test{max} and $test{skipped} + $test{bonus}) {
503 my @msg;
504 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
505 if $test{skipped};
506 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
507 if $test{bonus};
508 print "$test{ml}ok, ".join(', ', @msg)."\n";
509 } elsif ($test{max}) {
510 print "$test{ml}ok\n";
511 } elsif (defined $test{skip_reason}) {
512 print "skipped: $test{skip_reason}\n";
513 $tot{skipped}++;
514 } else {
515 print "skipped test on this platform\n";
516 $tot{skipped}++;
517 }
518 $tot{good}++;
519 }
b82fa0b7 520 else {
521 if ($test{max}) {
522 if ($test{'next'} <= $test{max}) {
523 push @{$test{failed}}, $test{'next'}..$test{max};
524 }
525 if (@{$test{failed}}) {
526 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
527 @{$test{failed}});
528 print "$test{ml}$txt";
529 $failedtests{$tfile} = { canon => $canon,
530 max => $test{max},
531 failed => scalar @{$test{failed}},
532 name => $tfile,
533 percent => 100*(scalar @{$test{failed}})/$test{max},
534 estat => '',
535 wstat => '',
536 };
537 } else {
538 print "Don't know which tests failed: got $test{ok} ok, ".
539 "expected $test{max}\n";
540 $failedtests{$tfile} = { canon => '??',
541 max => $test{max},
542 failed => '??',
543 name => $tfile,
544 percent => undef,
545 estat => '',
546 wstat => '',
547 };
548 }
549 $tot{bad}++;
550 } elsif ($test{'next'} == 0) {
551 print "FAILED before any test output arrived\n";
552 $tot{bad}++;
553 $failedtests{$tfile} = { canon => '??',
554 max => '??',
555 failed => '??',
556 name => $tfile,
557 percent => undef,
558 estat => '',
559 wstat => '',
560 };
561 }
562 }
563
2fe373ce 564 $tot{sub_skipped} += $test{skipped};
565
566 if (defined $Files_In_Dir) {
567 my @new_dir_files = _globdir $Files_In_Dir;
568 if (@new_dir_files != @dir_files) {
569 my %f;
570 @f{@new_dir_files} = (1) x @new_dir_files;
571 delete @f{@dir_files};
572 my @f = sort keys %f;
573 print "LEAKED FILES: @f\n";
574 @dir_files = @new_dir_files;
575 }
576 }
13287dd5 577
578 close $fh;
a0d0e21e 579 }
9c5c68c8 580 $tot{bench} = timediff(new Benchmark, $t_start);
d667a7e6 581
13287dd5 582 $Strap->_restore_PERL5LIB;
9c5c68c8 583
584 return(\%tot, \%failedtests);
585}
586
b82fa0b7 587=item B<_mk_leader>
588
7a315204 589 my($leader, $ml) = _mk_leader($test_file, $width);
b82fa0b7 590
591Generates the 't/foo........' $leader for the given $test_file as well
592as a similar version which will overwrite the current line (by use of
593\r and such). $ml may be empty if Test::Harness doesn't think you're
on a TTY.
595
596The $width is the width of the "yada/blah.." string.
b82fa0b7 597
598=cut
599
600sub _mk_leader {
2fe373ce 601 my($te, $width) = @_;
602 chomp($te);
b695f709 603 $te =~ s/\.\w+$/./;
b82fa0b7 604
356733da 605 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
b82fa0b7 606 my $blank = (' ' x 77);
7a315204 607 my $leader = "$te" . '.' x ($width - length($te));
b82fa0b7 608 my $ml = "";
609
610 $ml = "\r$blank\r$leader"
611 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
612
613 return($leader, $ml);
614}
615
13287dd5 616=item B<_leader_width>
617
618 my($width) = _leader_width(@test_files);
619
620Calculates how wide the leader should be based on the length of the
621longest test name.
622
623=cut
624
625sub _leader_width {
626 my $maxlen = 0;
627 my $maxsuflen = 0;
628 foreach (@_) {
629 my $suf = /\.(\w+)$/ ? $1 : '';
630 my $len = length;
631 my $suflen = length $suf;
632 $maxlen = $len if $len > $maxlen;
633 $maxsuflen = $suflen if $suflen > $maxsuflen;
634 }
356733da 635 # + 3 : we want three dots between the test name and the "ok"
636 return $maxlen + 3 - $maxsuflen;
13287dd5 637}
638
9c5c68c8 639
640sub _show_results {
641 my($tot, $failedtests) = @_;
642
643 my $pct;
644 my $bonusmsg = _bonusmsg($tot);
645
2fe373ce 646 if (_all_ok($tot)) {
647 print "All tests successful$bonusmsg.\n";
648 } elsif (!$tot->{tests}){
649 die "FAILED--no tests were run for some reason.\n";
650 } elsif (!$tot->{max}) {
651 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
652 die "FAILED--$tot->{tests} test $blurb could be run, ".
9c5c68c8 653 "alas--no output ever seen\n";
c07a80fd 654 } else {
2fe373ce 655 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
656 my $percent_ok = 100*$tot->{ok}/$tot->{max};
657 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
658 $tot->{max} - $tot->{ok}, $tot->{max},
659 $percent_ok;
0a931e4a 660
9c5c68c8 661 my($fmt_top, $fmt) = _create_fmts($failedtests);
0a931e4a 662
2fe373ce 663 # Now write to formats
664 for my $script (sort keys %$failedtests) {
665 $Curtest = $failedtests->{$script};
666 write;
667 }
668 if ($tot->{bad}) {
669 $bonusmsg =~ s/^,\s*//;
670 print "$bonusmsg.\n" if $bonusmsg;
671 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
9c5c68c8 672 "$subpct\n";
2fe373ce 673 }
c07a80fd 674 }
f0a9308e 675
9c5c68c8 676 printf("Files=%d, Tests=%d, %s\n",
677 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
678}
679
680
681sub _parse_header {
682 my($line, $test, $tot) = @_;
683
684 my $is_header = 0;
685
13287dd5 686 if( $Strap->_is_header($line) ) {
687 $is_header = 1;
9c5c68c8 688
13287dd5 689 $test->{max} = $Strap->{max};
690 for ( keys %{$Strap->{todo}} ) { $test->{todo}{$_} = 1; }
9c5c68c8 691
13287dd5 692 $test->{skip_reason} = $Strap->{skip_all}
693 if not $test->{max} and defined $Strap->{skip_all};
9c5c68c8 694
b82fa0b7 695 $test->{'next'} = 1 unless $test->{'next'};
9c5c68c8 696
13287dd5 697
698 $tot->{max} += $test->{max};
9c5c68c8 699 }
700 else {
701 $is_header = 0;
702 }
703
704 return $is_header;
c07a80fd 705}
706
9c5c68c8 707
b82fa0b7 708sub _open_test {
709 my($test) = shift;
710
711 my $s = _set_switches($test);
712
356733da 713 my $perl = -x $^X ? $^X : $Config{perlpath};
714
b82fa0b7 715 # XXX This is WAY too core specific!
716 my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
717 ? "./perl -I../lib ../utils/perlcc $test "
2fe373ce 718 . "-r 2>> ./compilelog |"
356733da 719 : "$perl $s $test|";
b82fa0b7 720 $cmd = "MCR $cmd" if $^O eq 'VMS';
721
722 if( open(PERL, $cmd) ) {
723 return \*PERL;
724 }
725 else {
726 print "can't run $test. $!\n";
727 return;
728 }
729}
730
b82fa0b7 731
9c5c68c8 732sub _parse_test_line {
733 my($line, $test, $tot) = @_;
734
13287dd5 735 my %result;
736 if ( $Strap->_is_test($line, \%result) ) {
2fe373ce 737 $test->{'next'} ||= 1;
738 my $this = $test->{'next'};
2fe373ce 739
13287dd5 740 my($not, $tnum) = (!$result{ok}, $result{number});
2fe373ce 741
13287dd5 742 $this = $tnum if $tnum;
2fe373ce 743
13287dd5 744 my($type, $reason) = ($result{type}, $result{reason});
2fe373ce 745
13287dd5 746 my($istodo, $isskip);
747 if( defined $type ) {
748 $istodo = 1 if $type eq 'todo';
749 $isskip = 1 if $type eq 'skip';
750 }
2fe373ce 751
13287dd5 752 $test->{todo}{$this} = 1 if $istodo;
356733da 753 if( $test->{todo}{$this} ) {
754 $tot->{todo}++;
755 $test->{bonus}++, $tot->{bonus}++ unless $not;
756 }
2fe373ce 757
356733da 758 if( $not && !$test->{todo}{$this} ) {
13287dd5 759 print "$test->{ml}NOK $this" if $test->{ml};
356733da 760 push @{$test->{failed}}, $this;
9c5c68c8 761 }
13287dd5 762 else {
9c5c68c8 763 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
764 $test->{ok}++;
765 $tot->{ok}++;
13287dd5 766 $test->{skipped}++ if $isskip;
767
768 $reason = '[no reason given]'
769 if $isskip and not defined $reason;
770 if (defined $reason and defined $test->{skip_reason}) {
771 # print "was: '$skip_reason' new '$reason'\n";
772 $test->{skip_reason} = 'various reasons'
773 if $test->{skip_reason} ne $reason;
774 } elsif (defined $reason) {
775 $test->{skip_reason} = $reason;
776 }
9c5c68c8 777 }
778
b82fa0b7 779 if ($this > $test->{'next'}) {
2fe373ce 780 print "Test output counter mismatch [test $this]\n";
356733da 781
782 # Guard against resource starvation.
783 if( $this > 100000 ) {
                print "Enormous test number seen [test $this]\n";
785 }
786 else {
787 push @{$test->{failed}}, $test->{'next'}..$this-1;
788 }
9c5c68c8 789 }
b82fa0b7 790 elsif ($this < $test->{'next'}) {
9c5c68c8 791 #we have seen more "ok" lines than the number suggests
792 print "Confused test output: test $this answered after ".
b82fa0b7 793 "test ", $test->{'next'}-1, "\n";
794 $test->{'next'} = $this;
9c5c68c8 795 }
b82fa0b7 796 $test->{'next'} = $this + 1;
9c5c68c8 797
798 }
13287dd5 799 else {
800 my $bail_reason;
801 if( $Strap->_is_bail_out($line, \$bail_reason) ) { # bail out!
802 die "FAILED--Further testing stopped" .
803 ($bail_reason ? ": $bail_reason\n" : ".\n");
804 }
9c5c68c8 805 }
806}
807
808
809sub _bonusmsg {
810 my($tot) = @_;
811
812 my $bonusmsg = '';
813 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
2fe373ce 814 " UNEXPECTEDLY SUCCEEDED)")
815 if $tot->{bonus};
9c5c68c8 816
817 if ($tot->{skipped}) {
2fe373ce 818 $bonusmsg .= ", $tot->{skipped} test"
9c5c68c8 819 . ($tot->{skipped} != 1 ? 's' : '');
2fe373ce 820 if ($tot->{sub_skipped}) {
821 $bonusmsg .= " and $tot->{sub_skipped} subtest"
822 . ($tot->{sub_skipped} != 1 ? 's' : '');
823 }
824 $bonusmsg .= ' skipped';
9c5c68c8 825 }
826 elsif ($tot->{sub_skipped}) {
2fe373ce 827 $bonusmsg .= ", $tot->{sub_skipped} subtest"
828 . ($tot->{sub_skipped} != 1 ? 's' : '')
829 . " skipped";
9c5c68c8 830 }
831
832 return $bonusmsg;
833}
834
835# VMS has some subtle nastiness with closing the test files.
836sub _close_fh {
837 my($fh) = shift;
838
839 close($fh); # must close to reap child resource values
840
2fe373ce 841 my $wstatus = $Ignore_Exitcode ? 0 : $?; # Can trust $? ?
9c5c68c8 842 my $estatus;
843 $estatus = ($^O eq 'VMS'
844 ? eval 'use vmsish "status"; $estatus = $?'
845 : $wstatus >> 8);
846
847 return($estatus, $wstatus);
848}
849
850
851# Set up the command-line switches to run perl as.
852sub _set_switches {
853 my($test) = shift;
854
9c5c68c8 855 my $s = $Switches;
13287dd5 856 $s .= $Strap->_switches($test);
9c5c68c8 857
858 return $s;
859}
860
861
862# Test program go boom.
863sub _dubious_return {
864 my($test, $tot, $estatus, $wstatus) = @_;
865 my ($failed, $canon, $percent) = ('??', '??');
866
867 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
868 "(wstat %d, 0x%x)\n",
869 $wstatus,$wstatus;
870 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
871
872 if (corestatus($wstatus)) { # until we have a wait module
873 if ($Have_Devel_Corestack) {
874 Devel::CoreStack::stack($^X);
875 } else {
876 print "\ttest program seems to have generated a core\n";
877 }
878 }
879
880 $tot->{bad}++;
881
882 if ($test->{max}) {
b82fa0b7 883 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
9c5c68c8 884 print "\tafter all the subtests completed successfully\n";
885 $percent = 0;
2fe373ce 886 $failed = 0; # But we do not set $canon!
9c5c68c8 887 }
888 else {
b82fa0b7 889 push @{$test->{failed}}, $test->{'next'}..$test->{max};
9c5c68c8 890 $failed = @{$test->{failed}};
891 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
892 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
893 print "DIED. ",$txt;
894 }
895 }
896
897 return { canon => $canon, max => $test->{max} || '??',
898 failed => $failed,
66fd8cb9 899 percent => $percent,
9c5c68c8 900 estat => $estatus, wstat => $wstatus,
901 };
902}
903
904
905sub _garbled_output {
906 my($gibberish) = shift;
907 warn "Confusing test output: '$gibberish'\n";
908}
909
910
911sub _create_fmts {
912 my($failedtests) = @_;
913
b82fa0b7 914 my $failed_str = "Failed Test";
915 my $middle_str = " Stat Wstat Total Fail Failed ";
9c5c68c8 916 my $list_str = "List of Failed";
917
918 # Figure out our longest name string for formatting purposes.
919 my $max_namelen = length($failed_str);
920 foreach my $script (keys %$failedtests) {
921 my $namelen = length $failedtests->{$script}->{name};
922 $max_namelen = $namelen if $namelen > $max_namelen;
923 }
924
925 my $list_len = $Columns - length($middle_str) - $max_namelen;
926 if ($list_len < length($list_str)) {
927 $list_len = length($list_str);
928 $max_namelen = $Columns - length($middle_str) - $list_len;
929 if ($max_namelen < length($failed_str)) {
930 $max_namelen = length($failed_str);
931 $Columns = $max_namelen + length($middle_str) + $list_len;
932 }
933 }
934
935 my $fmt_top = "format STDOUT_TOP =\n"
b82fa0b7 936 . sprintf("%-${max_namelen}s", $failed_str)
9c5c68c8 937 . $middle_str
2fe373ce 938 . $list_str . "\n"
939 . "-" x $Columns
940 . "\n.\n";
9c5c68c8 941
942 my $fmt = "format STDOUT =\n"
2fe373ce 943 . "@" . "<" x ($max_namelen - 1)
b82fa0b7 944 . " @>> @>>>> @>>>> @>>> ^##.##% "
2fe373ce 945 . "^" . "<" x ($list_len - 1) . "\n"
946 . '{ $Curtest->{name}, $Curtest->{estat},'
947 . ' $Curtest->{wstat}, $Curtest->{max},'
948 . ' $Curtest->{failed}, $Curtest->{percent},'
949 . ' $Curtest->{canon}'
950 . "\n}\n"
951 . "~~" . " " x ($Columns - $list_len - 2) . "^"
952 . "<" x ($list_len - 1) . "\n"
953 . '$Curtest->{canon}'
954 . "\n.\n";
9c5c68c8 955
956 eval $fmt_top;
957 die $@ if $@;
958 eval $fmt;
959 die $@ if $@;
960
961 return($fmt_top, $fmt);
962}
963
b82fa0b7 964{
965 my $tried_devel_corestack;
9c5c68c8 966
b82fa0b7 967 sub corestatus {
968 my($st) = @_;
c0ee6f5c 969
356733da 970 eval {
971 local $^W = 0; # *.ph files are often *very* noisy
972 require 'wait.ph'
973 };
974 return if $@;
975 my $did_core = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
c0ee6f5c 976
b82fa0b7 977 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
978 unless $tried_devel_corestack++;
c0ee6f5c 979
356733da 980 return $did_core;
b82fa0b7 981 }
c0ee6f5c 982}
983
c07a80fd 984sub canonfailed ($@) {
89d3b7e2 985 my($max,$skipped,@failed) = @_;
6c31b336 986 my %seen;
987 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
c07a80fd 988 my $failed = @failed;
989 my @result = ();
990 my @canon = ();
991 my $min;
992 my $last = $min = shift @failed;
760ac839 993 my $canon;
c07a80fd 994 if (@failed) {
2fe373ce 995 for (@failed, $failed[-1]) { # don't forget the last one
996 if ($_ > $last+1 || $_ == $last) {
997 if ($min == $last) {
998 push @canon, $last;
999 } else {
1000 push @canon, "$min-$last";
1001 }
1002 $min = $_;
1003 }
1004 $last = $_;
1005 }
1006 local $" = ", ";
1007 push @result, "FAILED tests @canon\n";
1008 $canon = join ' ', @canon;
a0d0e21e 1009 } else {
2fe373ce 1010 push @result, "FAILED test $last\n";
1011 $canon = $last;
a0d0e21e 1012 }
c07a80fd 1013
1014 push @result, "\tFailed $failed/$max tests, ";
89d3b7e2 1015 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
1016 my $ender = 's' x ($skipped > 1);
1017 my $good = $max - $failed - $skipped;
1018 my $goodper = sprintf("%.2f",100*($good/$max));
9c5c68c8 1019 push @result, " (-$skipped skipped test$ender: $good okay, ".
1020 "$goodper%)"
1021 if $skipped;
89d3b7e2 1022 push @result, "\n";
760ac839 1023 my $txt = join "", @result;
1024 ($txt, $canon);
a0d0e21e 1025}
1026
b82fa0b7 1027=end _private
9c5c68c8 1028
b82fa0b7 1029=back
d667a7e6 1030
b82fa0b7 1031=cut
9c5c68c8 1032
9c5c68c8 1033
b82fa0b7 10341;
1035__END__
9c5c68c8 1036
1037
cb1a09d0 1038=head1 EXPORT
1039
C<&runtests> is exported by Test::Harness by default.
cb1a09d0 1041
9c5c68c8 1042C<$verbose> and C<$switches> are exported upon request.
1043
1044
cb1a09d0 1045=head1 DIAGNOSTICS
1046
1047=over 4
1048
1049=item C<All tests successful.\nFiles=%d, Tests=%d, %s>
1050
If all tests are successful, some statistics about the performance are
1052printed.
1053
6c31b336 1054=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
1055
For any single script that has failing subtests, statistics like the
1057above are printed.
1058
1059=item C<Test returned status %d (wstat %d)>
1060
For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
1062and C<$?> are printed in a message similar to the above.
6c31b336 1063
1064=item C<Failed 1 test, %.2f%% okay. %s>
cb1a09d0 1065
6c31b336 1066=item C<Failed %d/%d tests, %.2f%% okay. %s>
cb1a09d0 1067
1068If not all tests were successful, the script dies with one of the
1069above messages.
1070
d667a7e6 1071=item C<FAILED--Further testing stopped%s>
1072
1073If a single subtest decides that further testing will not make sense,
1074the script dies with this message.
1075
cb1a09d0 1076=back
1077
9b0ceca9 1078=head1 ENVIRONMENT
1079
37ce32a7 1080=over 4
1081
356733da 1082=item C<HARNESS_ACTIVE>
37ce32a7 1083
356733da 1084Harness sets this before executing the individual tests. This allows
1085the tests to determine if they are being executed through the harness
1086or by any other means.
37ce32a7 1087
356733da 1088=item C<HARNESS_COLUMNS>
9b0ceca9 1089
356733da 1090This value will be used for the width of the terminal. If it is not
1091set then it will default to C<COLUMNS>. If this is not set, it will
1092default to 80. Note that users of Bourne-sh based shells will need to
1093C<export COLUMNS> for this module to use that variable.
0d0c0d42 1094
b82fa0b7 1095=item C<HARNESS_COMPILE_TEST>
9636a016 1096
37ce32a7 1097When true it will make harness attempt to compile the test using
1098C<perlcc> before running it.
1099
b82fa0b7 1100B<NOTE> This currently only works when sitting in the perl source
1101directory!
1102
1103=item C<HARNESS_FILELEAK_IN_DIR>
37ce32a7 1104
1105When set to the name of a directory, harness will check after each
1106test whether new files appeared in that directory, and report them as
17a79f5b 1107
1108 LEAKED FILES: scr.tmp 0 my.db
1109
If relative, the directory name is taken relative to the current
directory at the moment runtests() was called. Putting an absolute
path into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
17a79f5b 1113
356733da 1114=item C<HARNESS_IGNORE_EXITCODE>
1115
1116Makes harness ignore the exit status of child processes when defined.
1117
1118=item C<HARNESS_NOTTY>
1119
When set to a true value, forces harness to behave as though STDOUT were
1121not a console. You may need to set this if you don't want harness to
1122output more frequent progress messages using carriage returns. Some
1123consoles may not handle carriage returns properly (which results in a
1124somewhat messy output).
1125
b82fa0b7 1126=item C<HARNESS_PERL_SWITCHES>
37ce32a7 1127
1128Its value will be prepended to the switches used to invoke perl on
b82fa0b7 1129each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
37ce32a7 1130run all tests with all warnings enabled.
1131
356733da 1132=item C<HARNESS_VERBOSE>
37ce32a7 1133
356733da 1134If true, Test::Harness will output the verbose results of running
1135its tests. Setting $Test::Harness::verbose will override this.
37ce32a7 1136
1137=back
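
For example, several of these variables can be combined on the command
line of a Bourne-style shell; the directory name here is made up:

    $ HARNESS_VERBOSE=1 HARNESS_FILELEAK_IN_DIR=t/tmp \
      perl -MTest::Harness -e 'runtests @ARGV' t/*.t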
0a931e4a 1138
b82fa0b7 1139=head1 EXAMPLE
1140
Here's how Test::Harness tests itself:
1142
1143 $ cd ~/src/devel/Test-Harness
1144 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1145 $verbose=0; runtests @ARGV;' t/*.t
1146 Using /home/schwern/src/devel/Test-Harness/blib
1147 t/base..............ok
1148 t/nonumbers.........ok
1149 t/ok................ok
1150 t/test-harness......ok
1151 All tests successful.
1152 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
f19ae7a7 1153
cb1a09d0 1154=head1 SEE ALSO
1155
b82fa0b7 1156L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1157the underlying timing routines, L<Devel::CoreStack> to generate core
1158dumps from failed tests and L<Devel::Cover> for test coverage
1159analysis.
c07a80fd 1160
1161=head1 AUTHORS
1162
1163Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
b876d4a6 1165with perl distributions for ages. Numerous anonymous contributors
b82fa0b7 1166exist. Andreas Koenig held the torch for many years.
1167
1168Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1169
1170=head1 TODO
1171
1172Provide a way of running tests quietly (ie. no printing) for automated
1173validation of tests. This will probably take the form of a version
1174of runtests() which rather than printing its output returns raw data
356733da 1175on the state of the tests. (Partially done in Test::Harness::Straps)
b82fa0b7 1176
1177Fix HARNESS_COMPILE_TEST without breaking its core usage.
1178
1179Figure a way to report test names in the failure summary.
37ce32a7 1180
b82fa0b7 1181Rework the test summary so long test names are not truncated as badly.
1182
b82fa0b7 1183Deal with VMS's "not \nok 4\n" mistake.
1184
1185Add option for coverage analysis.
1186
1187=for _private
1188Keeping whittling away at _run_all_tests()
1189
1190=for _private
1191Clean up how the summary is printed. Get rid of those damned formats.
cb1a09d0 1192
1193=head1 BUGS
1194
356733da 1195HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
b82fa0b7 1196directory.
1197
cb1a09d0 1198=cut