d667a7e6 1# -*- Mode: cperl; cperl-indent-level: 4 -*-
6e5a998b 2# $Id: Harness.pm,v 1.29 2002/05/17 23:04:11 schwern Exp $
b82fa0b7 3
a0d0e21e 4package Test::Harness;
5
b82fa0b7 6require 5.004;
13287dd5 7use Test::Harness::Straps;
8use Test::Harness::Assert;
a0d0e21e 9use Exporter;
10use Benchmark;
4633a7c4 11use Config;
760ac839 12use strict;
13
b82fa0b7 14use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
308957f5 15 $Columns $verbose $switches $ML
b82fa0b7 16 @ISA @EXPORT @EXPORT_OK
17 );
4633a7c4 18
9c5c68c8 19# Backwards compatibility for exportable variable names.
20*verbose = \$Verbose;
21*switches = \$Switches;
22
23$Have_Devel_Corestack = 0;
24
6e5a998b 25$VERSION = '2.22';
9b0ceca9 26
f19ae7a7 27$ENV{HARNESS_ACTIVE} = 1;
28
13287dd5 29END {
30 # For VMS.
31 delete $ENV{HARNESS_ACTIVE};
32}
33
# Some experimental versions of the OS/2 build have a broken $?
9c5c68c8 35my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
36
37my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
9b0ceca9 38
13287dd5 39my $Strap = Test::Harness::Straps->new;
17a79f5b 40
9c5c68c8 41@ISA = ('Exporter');
42@EXPORT = qw(&runtests);
43@EXPORT_OK = qw($verbose $switches);
4633a7c4 44
356733da 45$Verbose = $ENV{HARNESS_VERBOSE} || 0;
9c5c68c8 46$Switches = "-w";
47$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
b82fa0b7 48$Columns--; # Some shells have trouble with a full line of text.
49
50
51=head1 NAME
52
53Test::Harness - run perl standard test scripts with statistics
54
55=head1 SYNOPSIS
56
57 use Test::Harness;
58
59 runtests(@test_files);
60
61=head1 DESCRIPTION
a0d0e21e 62
b82fa0b7 63B<STOP!> If all you want to do is write a test script, consider using
64Test::Simple. Otherwise, read on.
65
66(By using the Test module, you can write test scripts without
67knowing the exact output this module expects. However, if you need to
68know the specifics, read on!)
69
70Perl test scripts print to standard output C<"ok N"> for each single
71test, where C<N> is an increasing sequence of integers. The first line
72output by a standard test script is C<"1..M"> with C<M> being the
73number of tests that should be run within the test
script. Test::Harness::runtests(@tests) runs all the test scripts
75named as arguments and checks standard output for the expected
76C<"ok N"> strings.
77
78After all tests have been performed, runtests() prints some
79performance statistics that are computed by the Benchmark module.
80
81=head2 The test script output
82
83The following explains how Test::Harness interprets the output of your
84test program.
85
86=over 4
87
88=item B<'1..M'>
89
356733da 90This header tells how many tests there will be. For example, C<1..10>
91means you plan on running 10 tests. This is a safeguard in case your
92test dies quietly in the middle of its run.
93
94It should be the first non-comment line output by your test program.
b82fa0b7 95
356733da 96In certain instances, you may not know how many tests you will
97ultimately be running. In this case, it is permitted for the 1..M
98header to appear as the B<last> line output by your test (again, it
99can be followed by further comments).
b82fa0b7 100
101Under B<no> circumstances should 1..M appear in the middle of your
102output or more than once.
103
104
105=item B<'ok', 'not ok'. Ok?>
106
107Any output from the testscript to standard error is ignored and
108bypassed, thus will be seen by the user. Lines written to standard
109output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
110runtests(). All other lines are discarded.
111
112C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
113
114
115=item B<test numbers>
116
Perl normally expects the 'ok' or 'not ok' to be followed by a test
number. It is tolerated if the test numbers are omitted; in that case
Test::Harness temporarily maintains its own counter until the script
supplies test numbers again. So the following test script
122
123 print <<END;
124 1..6
125 not ok
126 ok
127 not ok
128 ok
129 ok
130 END
131
132will generate
133
134 FAILED tests 1, 3, 6
135 Failed 3/6 tests, 50.00% okay
136
13287dd5 137=item B<test names>
b82fa0b7 138
13287dd5 139Anything after the test number but before the # is considered to be
140the name of the test.
b82fa0b7 141
13287dd5 142 ok 42 this is the name of the test
b82fa0b7 143
13287dd5 144Currently, Test::Harness does nothing with this information.
b82fa0b7 145
146=item B<Skipping tests>
147
148If the standard output line contains the substring C< # Skip> (with
149variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test. If the whole test script succeeds, the
151count of skipped tests is included in the generated output.
152C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
356733da 153for skipping.
b82fa0b7 154
155 ok 23 # skip Insufficient flogiston pressure.
156
Similarly, an explanation can be included in a C<1..0> line emitted
if the test script is skipped completely:
159
160 1..0 # Skipped: no leverage found
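
For example, a test script can skip itself entirely when an optional
dependency is missing. A minimal sketch (C<Some::Optional::Module> is
only illustrative):

    #!/usr/bin/perl -w
    # Skip everything unless the optional dependency is available.
    unless (eval { require Some::Optional::Module; 1 }) {
        print "1..0 # Skipped: Some::Optional::Module not installed\n";
        exit 0;
    }
    print "1..2\n";
    print "ok 1\n";
    print "ok 2 # skip not relevant on this platform\n";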
161
162=item B<Todo tests>
163
If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
that follows describes what has to be done before this test will
succeed.
168
169 not ok 13 # TODO harness the power of the atom
170
13287dd5 171=begin _deprecated
172
173Alternatively, you can specify a list of what tests are todo as part
174of the test header.
175
176 1..23 todo 5 12 23
177
178This only works if the header appears at the beginning of the test.
179
180This style is B<deprecated>.
181
182=end _deprecated
183
b82fa0b7 184These tests represent a feature to be implemented or a bug to be fixed
185and act as something of an executable "thing to do" list. They are
186B<not> expected to succeed. Should a todo test begin succeeding,
187Test::Harness will report it as a bonus. This indicates that whatever
188you were supposed to do has been done and you should promote this to a
189normal test.
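
For instance, a script with one known-broken feature might print the
following (a minimal sketch):

    print "1..2\n";
    print "ok 1\n";
    print "not ok 2 # TODO object serialisation not implemented yet\n";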
190
191=item B<Bail out!>
192
193As an emergency measure, a test script can decide that further tests
194are useless (e.g. missing dependencies) and testing should stop
195immediately. In that case the test script prints the magic words
196
197 Bail out!
198
199to standard output. Any message after these words will be displayed by
200C<Test::Harness> as the reason why testing is stopped.
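
For example, a test script might bail out when a module it cannot do
without is missing (a minimal sketch; C<DBI> merely stands in for
whatever your tests require):

    print "1..14\n";
    unless (eval { require DBI; 1 }) {
        print "Bail out! DBI is not installed\n";
        exit;
    }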
201
202=item B<Comments>
203
Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them.
207
208 ok 1
209 # Life is good, the sun is shining, RAM is cheap.
210 not ok 2
211 # got 'Bush' expected 'Gore'
212
213=item B<Anything else>
214
Any other output is silently ignored by Test::Harness, B<BUT WE PLAN
TO CHANGE THIS!> If you wish to place additional output in your test
script, please use a comment.
218
219=back
220
221
13287dd5 222=head2 Taint mode
223
224Test::Harness will honor the C<-T> in the #! line on your test files. So
225if you begin a test with:
226
227 #!perl -T
228
229the test will be run with taint mode on.
230
231
=head2 Configuration variables
233
234These variables can be used to configure the behavior of
235Test::Harness. They are exported on request.
236
237=over 4
238
239=item B<$Test::Harness::verbose>
240
241The global variable $Test::Harness::verbose is exportable and can be
242used to let runtests() display the standard output of the script
243without altering the behavior otherwise.
244
245=item B<$Test::Harness::switches>
246
247The global variable $Test::Harness::switches is exportable and can be
248used to set perl command line options used for running the test
249script(s). The default value is C<-w>.
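
For example, a small driver script could import both variables and
adjust them before calling runtests() (a sketch only; pick switches
appropriate to your tests):

    use Test::Harness qw(runtests $verbose $switches);

    $verbose  = 1;          # echo each test script's output
    $switches = '-w -T';    # run every test with warnings and taint checks
    runtests(glob 't/*.t');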
250
251=back
252
253
b82fa0b7 254=head2 Failure
255
It will happen: your tests will fail. After you mop up your ego, you
can begin examining the summary report:
258
2fe373ce 259 t/base..............ok
260 t/nonumbers.........ok
261 t/ok................ok
262 t/test-harness......ok
263 t/waterloo..........dubious
b82fa0b7 264 Test returned status 3 (wstat 768, 0x300)
265 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
266 Failed 10/20 tests, 50.00% okay
267 Failed Test Stat Wstat Total Fail Failed List of Failed
268 -----------------------------------------------------------------------
269 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
270 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
271
Everything passed but t/waterloo.t. It failed 10 of 20 tests and
exited with a non-zero status, indicating something dubious happened.
274
275The columns in the summary report mean:
276
277=over 4
278
279=item B<Failed Test>
280
281The test file which failed.
282
283=item B<Stat>
284
285If the test exited with non-zero, this is its exit status.
286
287=item B<Wstat>
288
The raw wait status of the test (C<$?>): the exit status in the high
byte plus any signal and core-dump information in the low byte.
290
291=item B<Total>
292
293Total number of tests expected to run.
294
295=item B<Fail>
296
297Number which failed, either from "not ok" or because they never ran.
298
299=item B<Failed>
300
301Percentage of the total tests which failed.
302
303=item B<List of Failed>
304
305A list of the tests which failed. Successive failures may be
306abbreviated (ie. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
30720 failed).
308
309=back
310
311
312=head2 Functions
313
Test::Harness currently has only one public function; here it is.
315
316=over 4
317
318=item B<runtests>
319
320 my $allok = runtests(@test_files);
321
322This runs all the given @test_files and divines whether they passed
323or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.
326
327It returns true if everything was ok, false otherwise.
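
A tiny driver built around it might look like this (a sketch; adjust
the glob pattern to wherever your tests live):

    use Test::Harness;

    my @tests = sort glob 't/*.t';
    runtests(@tests);   # prints the report; dies if any script failed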
328
329=for _private
330This is just _run_all_tests() plus _show_results()
331
332=cut
17a79f5b 333
a0d0e21e 334sub runtests {
335 my(@tests) = @_;
9c5c68c8 336
b82fa0b7 337 local ($\, $,);
338
339 my($tot, $failedtests) = _run_all_tests(@tests);
9c5c68c8 340 _show_results($tot, $failedtests);
341
2fe373ce 342 my $ok = _all_ok($tot);
b82fa0b7 343
13287dd5 344 assert(($ok xor keys %$failedtests),
345 q{ok status jives with $failedtests});
b82fa0b7 346
347 return $ok;
348}
349
350=begin _private
351
2fe373ce 352=item B<_all_ok>
353
354 my $ok = _all_ok(\%tot);
355
356Tells you if this test run is overall successful or not.
357
358=cut
359
360sub _all_ok {
361 my($tot) = shift;
362
363 return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
364}
365
b82fa0b7 366=item B<_globdir>
367
368 my @files = _globdir $dir;
369
370Returns all the files in a directory. This is shorthand for backwards
371compatibility on systems where glob() doesn't work right.
372
373=cut
374
375sub _globdir {
376 opendir DIRH, shift;
377 my @f = readdir DIRH;
378 closedir DIRH;
379
380 return @f;
9c5c68c8 381}
382
b82fa0b7 383=item B<_run_all_tests>
384
385 my($total, $failed) = _run_all_tests(@test_files);
386
387Runs all the given @test_files (as runtests()) but does it quietly (no
388report). $total is a hash ref summary of all the tests run. Its keys
and values are as follows:
390
    bonus       Number of individual todo tests unexpectedly passed
    max         Number of individual tests run
    ok          Number of individual tests passed
    sub_skipped Number of individual tests skipped
    todo        Number of individual todo tests

    files       Number of test files run
    good        Number of test files passed
    bad         Number of test files failed
    tests       Number of test files originally given
    skipped     Number of test files skipped
402
403If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
404test.
405
406$failed is a hash ref of all the test scripts which failed. Each key
407is the name of a test script, each value is another hash representing
408how that script failed. Its keys are these:
9c5c68c8 409
    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string).
417
418Needless to say, $failed should be empty if everything passed.
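
As a sketch, the returned structures might be walked like this:

    my($total, $failed) = _run_all_tests(@test_files);
    foreach my $script (sort keys %$failed) {
        my $f = $failed->{$script};
        printf "%s: failed %s/%s tests (%s)\n",
               $script, $f->{failed}, $f->{max}, $f->{canon};
    }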
419
420B<NOTE> Currently this function is still noisy. I'm working on it.
421
422=cut
423
308957f5 424#'#
b82fa0b7 425sub _run_all_tests {
9c5c68c8 426 my(@tests) = @_;
a0d0e21e 427 local($|) = 1;
9c5c68c8 428 my(%failedtests);
429
430 # Test-wide totals.
431 my(%tot) = (
432 bonus => 0,
433 max => 0,
434 ok => 0,
435 files => 0,
436 bad => 0,
437 good => 0,
438 tests => scalar @tests,
439 sub_skipped => 0,
2fe373ce 440 todo => 0,
9c5c68c8 441 skipped => 0,
2fe373ce 442 bench => 0,
9c5c68c8 443 );
774d564b 444
b82fa0b7 445 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
a0d0e21e 446 my $t_start = new Benchmark;
9c5c68c8 447
13287dd5 448 my $width = _leader_width(@tests);
b82fa0b7 449 foreach my $tfile (@tests) {
13287dd5 450
7a315204 451 my($leader, $ml) = _mk_leader($tfile, $width);
308957f5 452 local $ML = $ml;
b82fa0b7 453 print $leader;
9c5c68c8 454
356733da 455 $tot{files}++;
456
308957f5 457 $Strap->{_seen_header} = 0;
458 my %results = $Strap->analyze_file($tfile);
459
9c5c68c8 460 # state of the current test.
308957f5 461 my @failed = grep { !$results{details}[$_-1]{ok} }
462 1..@{$results{details}};
9c5c68c8 463 my %test = (
308957f5 464 ok => $results{ok},
465 'next' => $Strap->{'next'},
466 max => $results{max},
467 failed => \@failed,
468 bonus => $results{bonus},
469 skipped => $results{skip},
470 skip_reason => $Strap->{_skip_reason},
9c5c68c8 471 ml => $ml,
472 );
473
308957f5 474 $tot{bonus} += $results{bonus};
475 $tot{max} += $results{max};
476 $tot{ok} += $results{ok};
477 $tot{todo} += $results{todo};
478 $tot{sub_skipped} += $results{skip};
9c5c68c8 479
308957f5 480 my($estatus, $wstatus) = @results{qw(exit wait)};
b82fa0b7 481
2fe373ce 482 if ($wstatus) {
b82fa0b7 483 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
9c5c68c8 484 $estatus, $wstatus);
b82fa0b7 485 $failedtests{$tfile}{name} = $tfile;
2fe373ce 486 }
308957f5 487 elsif ($results{passing}) {
2fe373ce 488 if ($test{max} and $test{skipped} + $test{bonus}) {
489 my @msg;
490 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
491 if $test{skipped};
492 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
493 if $test{bonus};
f0008e52 494 print "$test{ml}ok\n ".join(', ', @msg)."\n";
2fe373ce 495 } elsif ($test{max}) {
496 print "$test{ml}ok\n";
497 } elsif (defined $test{skip_reason}) {
f0008e52 498 print "skipped\n all skipped: $test{skip_reason}\n";
2fe373ce 499 $tot{skipped}++;
500 } else {
f0008e52 501 print "\n skipped test on this platform\n";
2fe373ce 502 $tot{skipped}++;
503 }
504 $tot{good}++;
505 }
b82fa0b7 506 else {
507 if ($test{max}) {
508 if ($test{'next'} <= $test{max}) {
509 push @{$test{failed}}, $test{'next'}..$test{max};
510 }
511 if (@{$test{failed}}) {
512 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
513 @{$test{failed}});
514 print "$test{ml}$txt";
515 $failedtests{$tfile} = { canon => $canon,
516 max => $test{max},
517 failed => scalar @{$test{failed}},
518 name => $tfile,
519 percent => 100*(scalar @{$test{failed}})/$test{max},
520 estat => '',
521 wstat => '',
522 };
523 } else {
524 print "Don't know which tests failed: got $test{ok} ok, ".
525 "expected $test{max}\n";
526 $failedtests{$tfile} = { canon => '??',
527 max => $test{max},
528 failed => '??',
529 name => $tfile,
530 percent => undef,
531 estat => '',
532 wstat => '',
533 };
534 }
535 $tot{bad}++;
536 } elsif ($test{'next'} == 0) {
537 print "FAILED before any test output arrived\n";
538 $tot{bad}++;
539 $failedtests{$tfile} = { canon => '??',
540 max => '??',
541 failed => '??',
542 name => $tfile,
543 percent => undef,
544 estat => '',
545 wstat => '',
546 };
547 }
548 }
549
2fe373ce 550 if (defined $Files_In_Dir) {
551 my @new_dir_files = _globdir $Files_In_Dir;
552 if (@new_dir_files != @dir_files) {
553 my %f;
554 @f{@new_dir_files} = (1) x @new_dir_files;
555 delete @f{@dir_files};
556 my @f = sort keys %f;
557 print "LEAKED FILES: @f\n";
558 @dir_files = @new_dir_files;
559 }
560 }
a0d0e21e 561 }
9c5c68c8 562 $tot{bench} = timediff(new Benchmark, $t_start);
d667a7e6 563
13287dd5 564 $Strap->_restore_PERL5LIB;
9c5c68c8 565
566 return(\%tot, \%failedtests);
567}
568
b82fa0b7 569=item B<_mk_leader>
570
7a315204 571 my($leader, $ml) = _mk_leader($test_file, $width);
b82fa0b7 572
573Generates the 't/foo........' $leader for the given $test_file as well
574as a similar version which will overwrite the current line (by use of
\r and such). $ml may be empty if Test::Harness doesn't think you're
on a TTY.
577
578The $width is the width of the "yada/blah.." string.
b82fa0b7 579
580=cut
581
582sub _mk_leader {
2fe373ce 583 my($te, $width) = @_;
584 chomp($te);
b695f709 585 $te =~ s/\.\w+$/./;
b82fa0b7 586
356733da 587 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
b82fa0b7 588 my $blank = (' ' x 77);
7a315204 589 my $leader = "$te" . '.' x ($width - length($te));
b82fa0b7 590 my $ml = "";
591
592 $ml = "\r$blank\r$leader"
593 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
594
595 return($leader, $ml);
596}
597
13287dd5 598=item B<_leader_width>
599
600 my($width) = _leader_width(@test_files);
601
602Calculates how wide the leader should be based on the length of the
603longest test name.
604
605=cut
606
607sub _leader_width {
608 my $maxlen = 0;
609 my $maxsuflen = 0;
610 foreach (@_) {
611 my $suf = /\.(\w+)$/ ? $1 : '';
612 my $len = length;
613 my $suflen = length $suf;
614 $maxlen = $len if $len > $maxlen;
615 $maxsuflen = $suflen if $suflen > $maxsuflen;
616 }
356733da 617 # + 3 : we want three dots between the test name and the "ok"
618 return $maxlen + 3 - $maxsuflen;
13287dd5 619}
620
9c5c68c8 621
622sub _show_results {
623 my($tot, $failedtests) = @_;
624
625 my $pct;
626 my $bonusmsg = _bonusmsg($tot);
627
2fe373ce 628 if (_all_ok($tot)) {
629 print "All tests successful$bonusmsg.\n";
630 } elsif (!$tot->{tests}){
631 die "FAILED--no tests were run for some reason.\n";
632 } elsif (!$tot->{max}) {
633 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
634 die "FAILED--$tot->{tests} test $blurb could be run, ".
9c5c68c8 635 "alas--no output ever seen\n";
c07a80fd 636 } else {
2fe373ce 637 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
638 my $percent_ok = 100*$tot->{ok}/$tot->{max};
639 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
640 $tot->{max} - $tot->{ok}, $tot->{max},
641 $percent_ok;
0a931e4a 642
9c5c68c8 643 my($fmt_top, $fmt) = _create_fmts($failedtests);
0a931e4a 644
2fe373ce 645 # Now write to formats
646 for my $script (sort keys %$failedtests) {
647 $Curtest = $failedtests->{$script};
648 write;
649 }
650 if ($tot->{bad}) {
651 $bonusmsg =~ s/^,\s*//;
652 print "$bonusmsg.\n" if $bonusmsg;
653 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
9c5c68c8 654 "$subpct\n";
2fe373ce 655 }
c07a80fd 656 }
f0a9308e 657
9c5c68c8 658 printf("Files=%d, Tests=%d, %s\n",
659 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
660}
661
662
308957f5 663my %Handlers = ();
664$Strap->{callback} = sub {
665 my($self, $line, $type, $totals) = @_;
666 print $line if $Verbose;
9c5c68c8 667
308957f5 668 my $meth = $Handlers{$type};
669 $meth->($self, $line, $type, $totals) if $meth;
670};
9c5c68c8 671
9c5c68c8 672
308957f5 673$Handlers{header} = sub {
674 my($self, $line, $type, $totals) = @_;
9c5c68c8 675
308957f5 676 warn "Test header seen more than once!\n" if $self->{_seen_header};
9c5c68c8 677
308957f5 678 $self->{_seen_header}++;
9c5c68c8 679
308957f5 680 warn "1..M can only appear at the beginning or end of tests\n"
681 if $totals->{seen} &&
682 $totals->{max} < $totals->{seen};
683};
13287dd5 684
308957f5 685$Handlers{test} = sub {
686 my($self, $line, $type, $totals) = @_;
9c5c68c8 687
308957f5 688 my $curr = $totals->{seen};
689 my $next = $self->{'next'};
690 my $max = $totals->{max};
691 my $detail = $totals->{details}[-1];
b82fa0b7 692
308957f5 693 if( $detail->{ok} ) {
694 _print_ml("ok $curr/$max");
356733da 695
308957f5 696 if( $detail->{type} eq 'skip' ) {
697 $self->{_skip_reason} = $detail->{reason}
698 unless defined $self->{_skip_reason};
699 $self->{_skip_reason} = 'various reasons'
700 if $self->{_skip_reason} ne $detail->{reason};
701 }
b82fa0b7 702 }
703 else {
308957f5 704 _print_ml("NOK $curr");
b82fa0b7 705 }
b82fa0b7 706
308957f5 707 if( $curr > $next ) {
708 print "Test output counter mismatch [test $curr]\n";
709 }
710 elsif( $curr < $next ) {
711 print "Confused test output: test $curr answered after ".
712 "test ", $next - 1, "\n";
713 }
b82fa0b7 714
308957f5 715};
2fe373ce 716
308957f5 717$Handlers{bailout} = sub {
718 my($self, $line, $type, $totals) = @_;
9c5c68c8 719
308957f5 720 die "FAILED--Further testing stopped" .
721 ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
722};
356733da 723
9c5c68c8 724
308957f5 725sub _print_ml {
726 print join '', $ML, @_ if $ML;
9c5c68c8 727}
728
729
730sub _bonusmsg {
731 my($tot) = @_;
732
733 my $bonusmsg = '';
734 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
2fe373ce 735 " UNEXPECTEDLY SUCCEEDED)")
736 if $tot->{bonus};
9c5c68c8 737
738 if ($tot->{skipped}) {
2fe373ce 739 $bonusmsg .= ", $tot->{skipped} test"
9c5c68c8 740 . ($tot->{skipped} != 1 ? 's' : '');
2fe373ce 741 if ($tot->{sub_skipped}) {
742 $bonusmsg .= " and $tot->{sub_skipped} subtest"
743 . ($tot->{sub_skipped} != 1 ? 's' : '');
744 }
745 $bonusmsg .= ' skipped';
9c5c68c8 746 }
747 elsif ($tot->{sub_skipped}) {
2fe373ce 748 $bonusmsg .= ", $tot->{sub_skipped} subtest"
749 . ($tot->{sub_skipped} != 1 ? 's' : '')
750 . " skipped";
9c5c68c8 751 }
752
753 return $bonusmsg;
754}
755
9c5c68c8 756# Test program go boom.
757sub _dubious_return {
758 my($test, $tot, $estatus, $wstatus) = @_;
759 my ($failed, $canon, $percent) = ('??', '??');
760
761 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
762 "(wstat %d, 0x%x)\n",
763 $wstatus,$wstatus;
764 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
765
766 if (corestatus($wstatus)) { # until we have a wait module
767 if ($Have_Devel_Corestack) {
768 Devel::CoreStack::stack($^X);
769 } else {
770 print "\ttest program seems to have generated a core\n";
771 }
772 }
773
774 $tot->{bad}++;
775
776 if ($test->{max}) {
b82fa0b7 777 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
9c5c68c8 778 print "\tafter all the subtests completed successfully\n";
779 $percent = 0;
2fe373ce 780 $failed = 0; # But we do not set $canon!
9c5c68c8 781 }
782 else {
b82fa0b7 783 push @{$test->{failed}}, $test->{'next'}..$test->{max};
9c5c68c8 784 $failed = @{$test->{failed}};
785 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
786 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
787 print "DIED. ",$txt;
788 }
789 }
790
791 return { canon => $canon, max => $test->{max} || '??',
792 failed => $failed,
66fd8cb9 793 percent => $percent,
9c5c68c8 794 estat => $estatus, wstat => $wstatus,
795 };
796}
797
798
9c5c68c8 799sub _create_fmts {
800 my($failedtests) = @_;
801
b82fa0b7 802 my $failed_str = "Failed Test";
803 my $middle_str = " Stat Wstat Total Fail Failed ";
9c5c68c8 804 my $list_str = "List of Failed";
805
806 # Figure out our longest name string for formatting purposes.
807 my $max_namelen = length($failed_str);
808 foreach my $script (keys %$failedtests) {
809 my $namelen = length $failedtests->{$script}->{name};
810 $max_namelen = $namelen if $namelen > $max_namelen;
811 }
812
813 my $list_len = $Columns - length($middle_str) - $max_namelen;
814 if ($list_len < length($list_str)) {
815 $list_len = length($list_str);
816 $max_namelen = $Columns - length($middle_str) - $list_len;
817 if ($max_namelen < length($failed_str)) {
818 $max_namelen = length($failed_str);
819 $Columns = $max_namelen + length($middle_str) + $list_len;
820 }
821 }
822
823 my $fmt_top = "format STDOUT_TOP =\n"
b82fa0b7 824 . sprintf("%-${max_namelen}s", $failed_str)
9c5c68c8 825 . $middle_str
2fe373ce 826 . $list_str . "\n"
827 . "-" x $Columns
828 . "\n.\n";
9c5c68c8 829
830 my $fmt = "format STDOUT =\n"
2fe373ce 831 . "@" . "<" x ($max_namelen - 1)
b82fa0b7 832 . " @>> @>>>> @>>>> @>>> ^##.##% "
2fe373ce 833 . "^" . "<" x ($list_len - 1) . "\n"
834 . '{ $Curtest->{name}, $Curtest->{estat},'
835 . ' $Curtest->{wstat}, $Curtest->{max},'
836 . ' $Curtest->{failed}, $Curtest->{percent},'
837 . ' $Curtest->{canon}'
838 . "\n}\n"
839 . "~~" . " " x ($Columns - $list_len - 2) . "^"
840 . "<" x ($list_len - 1) . "\n"
841 . '$Curtest->{canon}'
842 . "\n.\n";
9c5c68c8 843
844 eval $fmt_top;
845 die $@ if $@;
846 eval $fmt;
847 die $@ if $@;
848
849 return($fmt_top, $fmt);
850}
851
b82fa0b7 852{
853 my $tried_devel_corestack;
9c5c68c8 854
b82fa0b7 855 sub corestatus {
856 my($st) = @_;
c0ee6f5c 857
356733da 858 eval {
859 local $^W = 0; # *.ph files are often *very* noisy
860 require 'wait.ph'
861 };
862 return if $@;
863 my $did_core = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
c0ee6f5c 864
b82fa0b7 865 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
866 unless $tried_devel_corestack++;
c0ee6f5c 867
356733da 868 return $did_core;
b82fa0b7 869 }
c0ee6f5c 870}
871
c07a80fd 872sub canonfailed ($@) {
89d3b7e2 873 my($max,$skipped,@failed) = @_;
6c31b336 874 my %seen;
875 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
c07a80fd 876 my $failed = @failed;
877 my @result = ();
878 my @canon = ();
879 my $min;
880 my $last = $min = shift @failed;
760ac839 881 my $canon;
c07a80fd 882 if (@failed) {
2fe373ce 883 for (@failed, $failed[-1]) { # don't forget the last one
884 if ($_ > $last+1 || $_ == $last) {
885 if ($min == $last) {
886 push @canon, $last;
887 } else {
888 push @canon, "$min-$last";
889 }
890 $min = $_;
891 }
892 $last = $_;
893 }
894 local $" = ", ";
895 push @result, "FAILED tests @canon\n";
896 $canon = join ' ', @canon;
a0d0e21e 897 } else {
2fe373ce 898 push @result, "FAILED test $last\n";
899 $canon = $last;
a0d0e21e 900 }
c07a80fd 901
902 push @result, "\tFailed $failed/$max tests, ";
89d3b7e2 903 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
904 my $ender = 's' x ($skipped > 1);
905 my $good = $max - $failed - $skipped;
906 my $goodper = sprintf("%.2f",100*($good/$max));
f0008e52 907 push @result, " (less $skipped skipped test$ender: $good okay, ".
9c5c68c8 908 "$goodper%)"
909 if $skipped;
89d3b7e2 910 push @result, "\n";
760ac839 911 my $txt = join "", @result;
912 ($txt, $canon);
a0d0e21e 913}
914
b82fa0b7 915=end _private
9c5c68c8 916
b82fa0b7 917=back
d667a7e6 918
b82fa0b7 919=cut
9c5c68c8 920
9c5c68c8 921
b82fa0b7 9221;
923__END__
9c5c68c8 924
925
cb1a09d0 926=head1 EXPORT
927
c0ee6f5c 928C<&runtests> is exported by Test::Harness per default.
cb1a09d0 929
9c5c68c8 930C<$verbose> and C<$switches> are exported upon request.
931
932
cb1a09d0 933=head1 DIAGNOSTICS
934
935=over 4
936
937=item C<All tests successful.\nFiles=%d, Tests=%d, %s>
938
If all tests are successful, some statistics about the performance are
940printed.
941
6c31b336 942=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
943
For any single script that has failing subtests, statistics like the
945above are printed.
946
947=item C<Test returned status %d (wstat %d)>
948
9c5c68c8 949Scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
950and C<$?> are printed in a message similar to the above.
6c31b336 951
952=item C<Failed 1 test, %.2f%% okay. %s>
cb1a09d0 953
6c31b336 954=item C<Failed %d/%d tests, %.2f%% okay. %s>
cb1a09d0 955
956If not all tests were successful, the script dies with one of the
957above messages.
958
308957f5 959=item C<FAILED--Further testing stopped: %s>
d667a7e6 960
961If a single subtest decides that further testing will not make sense,
962the script dies with this message.
963
cb1a09d0 964=back
965
9b0ceca9 966=head1 ENVIRONMENT
967
37ce32a7 968=over 4
969
356733da 970=item C<HARNESS_ACTIVE>
37ce32a7 971
356733da 972Harness sets this before executing the individual tests. This allows
973the tests to determine if they are being executed through the harness
974or by any other means.
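
A test can check this to adjust its behaviour, for instance to keep
diagnostic chatter out of harness runs (a minimal sketch):

    # Only print debugging noise when run directly, not under the harness.
    print STDERR "debug: starting fixture setup\n"
        unless $ENV{HARNESS_ACTIVE};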
37ce32a7 975
356733da 976=item C<HARNESS_COLUMNS>
9b0ceca9 977
356733da 978This value will be used for the width of the terminal. If it is not
979set then it will default to C<COLUMNS>. If this is not set, it will
980default to 80. Note that users of Bourne-sh based shells will need to
981C<export COLUMNS> for this module to use that variable.
0d0c0d42 982
b82fa0b7 983=item C<HARNESS_COMPILE_TEST>
9636a016 984
37ce32a7 985When true it will make harness attempt to compile the test using
986C<perlcc> before running it.
987
b82fa0b7 988B<NOTE> This currently only works when sitting in the perl source
989directory!
990
991=item C<HARNESS_FILELEAK_IN_DIR>
37ce32a7 992
993When set to the name of a directory, harness will check after each
994test whether new files appeared in that directory, and report them as
17a79f5b 995
996 LEAKED FILES: scr.tmp 0 my.db
997
If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called. Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.
17a79f5b 1001
356733da 1002=item C<HARNESS_IGNORE_EXITCODE>
1003
1004Makes harness ignore the exit status of child processes when defined.
1005
1006=item C<HARNESS_NOTTY>
1007
1008When set to a true value, forces it to behave as though STDOUT were
1009not a console. You may need to set this if you don't want harness to
1010output more frequent progress messages using carriage returns. Some
1011consoles may not handle carriage returns properly (which results in a
1012somewhat messy output).
1013
b82fa0b7 1014=item C<HARNESS_PERL_SWITCHES>
37ce32a7 1015
1016Its value will be prepended to the switches used to invoke perl on
b82fa0b7 1017each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
37ce32a7 1018run all tests with all warnings enabled.
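
A driver script could also set it before calling runtests(), for
example to add an include path for every child perl (a sketch):

    $ENV{HARNESS_PERL_SWITCHES} = '-Ilib';  # prepended to each test's switches
    runtests(glob 't/*.t');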
1019
356733da 1020=item C<HARNESS_VERBOSE>
37ce32a7 1021
356733da 1022If true, Test::Harness will output the verbose results of running
1023its tests. Setting $Test::Harness::verbose will override this.
37ce32a7 1024
1025=back
0a931e4a 1026
b82fa0b7 1027=head1 EXAMPLE
1028
1029Here's how Test::Harness tests itself
1030
1031 $ cd ~/src/devel/Test-Harness
1032 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1033 $verbose=0; runtests @ARGV;' t/*.t
1034 Using /home/schwern/src/devel/Test-Harness/blib
1035 t/base..............ok
1036 t/nonumbers.........ok
1037 t/ok................ok
1038 t/test-harness......ok
1039 All tests successful.
1040 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
f19ae7a7 1041
cb1a09d0 1042=head1 SEE ALSO
1043
b82fa0b7 1044L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1045the underlying timing routines, L<Devel::CoreStack> to generate core
1046dumps from failed tests and L<Devel::Cover> for test coverage
1047analysis.
c07a80fd 1048
1049=head1 AUTHORS
1050
Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that has come
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years.
1055
1056Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1057
1058=head1 TODO
1059
1060Provide a way of running tests quietly (ie. no printing) for automated
1061validation of tests. This will probably take the form of a version
1062of runtests() which rather than printing its output returns raw data
356733da 1063on the state of the tests. (Partially done in Test::Harness::Straps)
b82fa0b7 1064
1065Fix HARNESS_COMPILE_TEST without breaking its core usage.
1066
1067Figure a way to report test names in the failure summary.
37ce32a7 1068
b82fa0b7 1069Rework the test summary so long test names are not truncated as badly.
308957f5 1070(Partially done with new skip test styles)
b82fa0b7 1071
b82fa0b7 1072Deal with VMS's "not \nok 4\n" mistake.
1073
1074Add option for coverage analysis.
1075
1076=for _private
1077Keeping whittling away at _run_all_tests()
1078
1079=for _private
1080Clean up how the summary is printed. Get rid of those damned formats.
cb1a09d0 1081
1082=head1 BUGS
1083
356733da 1084HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
b82fa0b7 1085directory.
1086
cb1a09d0 1087=cut