d667a7e6 1# -*- Mode: cperl; cperl-indent-level: 4 -*-
11c6125c 2# $Id: Harness.pm,v 1.38 2002/06/19 21:01:01 schwern Exp $
b82fa0b7 3
a0d0e21e 4package Test::Harness;
5
b82fa0b7 6require 5.004;
13287dd5 7use Test::Harness::Straps;
8use Test::Harness::Assert;
a0d0e21e 9use Exporter;
10use Benchmark;
4633a7c4 11use Config;
760ac839 12use strict;
13
b82fa0b7 14use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
d5d4ec93 15 $Columns $verbose $switches $ML $Strap
b82fa0b7 16 @ISA @EXPORT @EXPORT_OK
17 );
4633a7c4 18
9c5c68c8 19# Backwards compatibility for exportable variable names.
20*verbose = \$Verbose;
21*switches = \$Switches;
22
23$Have_Devel_Corestack = 0;
24
11c6125c 25$VERSION = '2.26';
9b0ceca9 26
f19ae7a7 27$ENV{HARNESS_ACTIVE} = 1;
28
13287dd5 29END {
30 # For VMS.
31 delete $ENV{HARNESS_ACTIVE};
32}
33
# Some experimental versions of the OS/2 build have a broken $?
9c5c68c8 35my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
36
37my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
9b0ceca9 38
d5d4ec93 39$Strap = Test::Harness::Straps->new;
17a79f5b 40
9c5c68c8 41@ISA = ('Exporter');
42@EXPORT = qw(&runtests);
43@EXPORT_OK = qw($verbose $switches);
4633a7c4 44
356733da 45$Verbose = $ENV{HARNESS_VERBOSE} || 0;
9c5c68c8 46$Switches = "-w";
47$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
b82fa0b7 48$Columns--; # Some shells have trouble with a full line of text.
49
50
51=head1 NAME
52
53Test::Harness - run perl standard test scripts with statistics
54
55=head1 SYNOPSIS
56
57 use Test::Harness;
58
59 runtests(@test_files);
60
61=head1 DESCRIPTION
a0d0e21e 62
b82fa0b7 63B<STOP!> If all you want to do is write a test script, consider using
64Test::Simple. Otherwise, read on.
65
66(By using the Test module, you can write test scripts without
67knowing the exact output this module expects. However, if you need to
68know the specifics, read on!)
69
Perl test scripts print C<"ok N"> to standard output for each test,
where C<N> is an increasing sequence of integers. The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts named as
arguments and checks standard output for the expected C<"ok N">
strings.
77
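A minimal hand-rolled test script therefore looks something like the
sketch below (real scripts usually let the Test or Test::Simple
modules produce these lines instead):

    print "1..2\n";

    print 1 + 1 == 2 ? "ok 1\n" : "not ok 1\n";
    print 2 * 2 == 4 ? "ok 2\n" : "not ok 2\n";
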
78After all tests have been performed, runtests() prints some
79performance statistics that are computed by the Benchmark module.
80
81=head2 The test script output
82
83The following explains how Test::Harness interprets the output of your
84test program.
85
86=over 4
87
88=item B<'1..M'>
89
356733da 90This header tells how many tests there will be. For example, C<1..10>
91means you plan on running 10 tests. This is a safeguard in case your
92test dies quietly in the middle of its run.
93
94It should be the first non-comment line output by your test program.
b82fa0b7 95
In certain instances, you may not know how many tests you will
ultimately be running. In this case, it is permitted for the 1..M
header to appear as the B<last> line output by your test (it may
still be followed by further comment lines).
b82fa0b7 100
101Under B<no> circumstances should 1..M appear in the middle of your
102output or more than once.
103
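For example, a script that does not know its test count in advance
might print the header last; a small sketch (the file list is only
illustrative):

    my @files = glob 't/data/*.conf';   # whatever the script checks
    my $test_num = 0;
    foreach my $file (@files) {
        $test_num++;
        my $ok = -r $file ? "ok" : "not ok";
        print "$ok $test_num\n";
    }
    print "1..$test_num\n";
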
104
105=item B<'ok', 'not ok'. Ok?>
106
Any output from the test script to standard error is neither captured
nor interpreted; it passes straight through and will be seen by the
user. Lines written to standard output matching C</^(not\s+)?ok\b/>
are interpreted as feedback for runtests(). All other lines are
discarded.
111
112C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
113
114
115=item B<test numbers>
116
Perl normally expects the 'ok' or 'not ok' to be followed by a test
number. It is tolerated if the test numbers after 'ok' are
omitted. In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again. So the following
test script
122
123 print <<END;
124 1..6
125 not ok
126 ok
127 not ok
128 ok
129 ok
130 END
131
132will generate
133
134 FAILED tests 1, 3, 6
135 Failed 3/6 tests, 50.00% okay
136
13287dd5 137=item B<test names>
b82fa0b7 138
13287dd5 139Anything after the test number but before the # is considered to be
140the name of the test.
b82fa0b7 141
13287dd5 142 ok 42 this is the name of the test
b82fa0b7 143
13287dd5 144Currently, Test::Harness does nothing with this information.
b82fa0b7 145
146=item B<Skipping tests>
147
148If the standard output line contains the substring C< # Skip> (with
149variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test. If the whole test script succeeds, the
count of skipped tests is included in the generated output.
152C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
356733da 153for skipping.
b82fa0b7 154
155 ok 23 # skip Insufficient flogiston pressure.
156
Similarly, an explanation can be included in a C<1..0> line emitted
when the test script is skipped completely:
159
160 1..0 # Skipped: no leverage found
161
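A test script will typically emit such a line when it decides up front
that it cannot run at all; a minimal sketch (the module name is only
illustrative):

    unless (eval { require Some::Optional::Module; 1 }) {
        print "1..0 # Skipped: Some::Optional::Module not installed\n";
        exit 0;
    }
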
162=item B<Todo tests>
163
If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
that follows describes what has to be done before this test will
succeed.
168
169 not ok 13 # TODO harness the power of the atom
170
13287dd5 171=begin _deprecated
172
173Alternatively, you can specify a list of what tests are todo as part
174of the test header.
175
176 1..23 todo 5 12 23
177
178This only works if the header appears at the beginning of the test.
179
180This style is B<deprecated>.
181
182=end _deprecated
183
b82fa0b7 184These tests represent a feature to be implemented or a bug to be fixed
185and act as something of an executable "thing to do" list. They are
186B<not> expected to succeed. Should a todo test begin succeeding,
187Test::Harness will report it as a bonus. This indicates that whatever
188you were supposed to do has been done and you should promote this to a
189normal test.
190
191=item B<Bail out!>
192
193As an emergency measure, a test script can decide that further tests
194are useless (e.g. missing dependencies) and testing should stop
195immediately. In that case the test script prints the magic words
196
197 Bail out!
198
199to standard output. Any message after these words will be displayed by
200C<Test::Harness> as the reason why testing is stopped.
201
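For example, a test script might give up as soon as its setup fails;
a small sketch (setup_test_environment() is a hypothetical helper):

    print "1..9\n";
    unless (eval { setup_test_environment(); 1 }) {
        chomp(my $error = $@);
        print "Bail out! Could not set up test environment: $error\n";
        exit 255;
    }
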
202=item B<Comments>
203
Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them.
207
208 ok 1
209 # Life is good, the sun is shining, RAM is cheap.
210 not ok 2
211 # got 'Bush' expected 'Gore'
212
213=item B<Anything else>
214
Test::Harness will silently ignore any other output it sees, B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
test script, please use a comment.
218
219=back
220
221
13287dd5 222=head2 Taint mode
223
224Test::Harness will honor the C<-T> in the #! line on your test files. So
225if you begin a test with:
226
227 #!perl -T
228
229the test will be run with taint mode on.
230
231
=head2 Configuration variables
233
234These variables can be used to configure the behavior of
235Test::Harness. They are exported on request.
236
237=over 4
238
239=item B<$Test::Harness::verbose>
240
The global variable $Test::Harness::verbose is exportable and can be
used to make runtests() display the standard output of each test
script without otherwise altering its behavior.
244
245=item B<$Test::Harness::switches>
246
The global variable $Test::Harness::switches is exportable and can be
used to set the perl command-line switches used to run the test
script(s). The default value is C<-w>.
250
251=back
252
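A short sketch of using both variables together (the extra include
switch is only illustrative):

    use Test::Harness qw(runtests $verbose $switches);

    $verbose  = 1;            # echo each script's output
    $switches = '-w -Ilib';   # perl switches used to run each script
    runtests(@test_files);
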
253
b82fa0b7 254=head2 Failure
255
It will happen: your tests will fail. After you mop up your ego, you
can begin examining the summary report:
258
2fe373ce 259 t/base..............ok
260 t/nonumbers.........ok
261 t/ok................ok
262 t/test-harness......ok
263 t/waterloo..........dubious
b82fa0b7 264 Test returned status 3 (wstat 768, 0x300)
265 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
266 Failed 10/20 tests, 50.00% okay
267 Failed Test Stat Wstat Total Fail Failed List of Failed
268 -----------------------------------------------------------------------
269 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
270 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
271
Everything passed but t/waterloo.t. It failed 10 of 20 tests and
exited with a non-zero status, indicating that something dubious
happened.
274
275The columns in the summary report mean:
276
277=over 4
278
279=item B<Failed Test>
280
281The test file which failed.
282
283=item B<Stat>
284
285If the test exited with non-zero, this is its exit status.
286
287=item B<Wstat>
288
289The wait status of the test I<umm, I need a better explanation here>.
290
291=item B<Total>
292
293Total number of tests expected to run.
294
295=item B<Fail>
296
297Number which failed, either from "not ok" or because they never ran.
298
299=item B<Failed>
300
301Percentage of the total tests which failed.
302
303=item B<List of Failed>
304
A list of the tests which failed. Successive failures may be
abbreviated (e.g. 15-20 indicates that tests 15, 16, 17, 18, 19 and
20 failed).
308
309=back
310
311
312=head2 Functions
313
Test::Harness currently has only one function; here it is.
315
316=over 4
317
318=item B<runtests>
319
320 my $allok = runtests(@test_files);
321
This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed, along with a summary report
and how long it all took.
326
d5d4ec93 327It returns true if everything was ok. Otherwise it will die() with
328one of the messages in the DIAGNOSTICS section.
b82fa0b7 329
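Because runtests() dies when the tests do not pass, a caller that
wants to continue afterwards has to trap the exception itself; a
minimal sketch:

    use Test::Harness;

    my $all_passed = eval { runtests(@test_files) };
    unless ($all_passed) {
        warn $@ if $@;    # one of the DIAGNOSTICS messages
        # react to the failure here
    }
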
330=for _private
331This is just _run_all_tests() plus _show_results()
332
333=cut
17a79f5b 334
a0d0e21e 335sub runtests {
336 my(@tests) = @_;
9c5c68c8 337
b82fa0b7 338 local ($\, $,);
339
340 my($tot, $failedtests) = _run_all_tests(@tests);
9c5c68c8 341 _show_results($tot, $failedtests);
342
2fe373ce 343 my $ok = _all_ok($tot);
b82fa0b7 344
13287dd5 345 assert(($ok xor keys %$failedtests),
346 q{ok status jives with $failedtests});
b82fa0b7 347
348 return $ok;
349}
350
351=begin _private
352
2fe373ce 353=item B<_all_ok>
354
355 my $ok = _all_ok(\%tot);
356
357Tells you if this test run is overall successful or not.
358
359=cut
360
361sub _all_ok {
362 my($tot) = shift;
363
364 return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
365}
366
b82fa0b7 367=item B<_globdir>
368
369 my @files = _globdir $dir;
370
371Returns all the files in a directory. This is shorthand for backwards
372compatibility on systems where glob() doesn't work right.
373
374=cut
375
376sub _globdir {
377 opendir DIRH, shift;
378 my @f = readdir DIRH;
379 closedir DIRH;
380
381 return @f;
9c5c68c8 382}
383
b82fa0b7 384=item B<_run_all_tests>
385
386 my($total, $failed) = _run_all_tests(@test_files);
387
Runs all the given @test_files (as runtests() does) but does it
quietly (no report). $total is a hash ref summarizing all the tests
run. Its keys and values are these:
391
    bonus        Number of individual todo tests unexpectedly passed
    max          Number of individual tests run
    ok           Number of individual tests passed
    sub_skipped  Number of individual tests skipped
    todo         Number of individual todo tests

    files        Number of test files run
    good         Number of test files passed
    bad          Number of test files failed
    tests        Number of test files originally given
    skipped      Number of test files skipped
403
404If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
405test.
406
$failed is a hash ref of all the test scripts which failed. Each key
is the name of a test script; each value is another hash representing
how that script failed. Its keys are these:
9c5c68c8 410
    name         Name of the test which failed
    estat        Script's exit value
    wstat        Script's wait status
    max          Number of individual tests
    failed       Number which failed
    percent      Percentage of tests which failed
    canon        List of tests which failed (as string).
418
419Needless to say, $failed should be empty if everything passed.
420
421B<NOTE> Currently this function is still noisy. I'm working on it.
422
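As a sketch of how the two structures relate (using only the keys
documented above):

    my($tot, $failed) = _run_all_tests(@test_files);

    if ($tot->{bad} == 0 && $tot->{max} > 0) {
        print "all $tot->{ok}/$tot->{max} subtests passed\n";
    }
    else {
        foreach my $script (sort keys %$failed) {
            my $f = $failed->{$script};
            print "$script: failed $f->{failed} of $f->{max} tests\n";
        }
    }
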
423=cut
424
308957f5 425#'#
b82fa0b7 426sub _run_all_tests {
9c5c68c8 427 my(@tests) = @_;
a0d0e21e 428 local($|) = 1;
9c5c68c8 429 my(%failedtests);
430
431 # Test-wide totals.
432 my(%tot) = (
433 bonus => 0,
434 max => 0,
435 ok => 0,
436 files => 0,
437 bad => 0,
438 good => 0,
439 tests => scalar @tests,
440 sub_skipped => 0,
2fe373ce 441 todo => 0,
9c5c68c8 442 skipped => 0,
2fe373ce 443 bench => 0,
9c5c68c8 444 );
774d564b 445
b82fa0b7 446 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
a0d0e21e 447 my $t_start = new Benchmark;
9c5c68c8 448
13287dd5 449 my $width = _leader_width(@tests);
b82fa0b7 450 foreach my $tfile (@tests) {
13287dd5 451
7a315204 452 my($leader, $ml) = _mk_leader($tfile, $width);
308957f5 453 local $ML = $ml;
b82fa0b7 454 print $leader;
9c5c68c8 455
356733da 456 $tot{files}++;
457
308957f5 458 $Strap->{_seen_header} = 0;
0be28027 459 my %results = $Strap->analyze_file($tfile) or
460 do { warn "$Strap->{error}\n"; next };
308957f5 461
9c5c68c8 462 # state of the current test.
308957f5 463 my @failed = grep { !$results{details}[$_-1]{ok} }
464 1..@{$results{details}};
9c5c68c8 465 my %test = (
308957f5 466 ok => $results{ok},
467 'next' => $Strap->{'next'},
468 max => $results{max},
469 failed => \@failed,
470 bonus => $results{bonus},
471 skipped => $results{skip},
472 skip_reason => $Strap->{_skip_reason},
c0bb2de7 473 skip_all => $Strap->{skip_all},
9c5c68c8 474 ml => $ml,
475 );
476
308957f5 477 $tot{bonus} += $results{bonus};
478 $tot{max} += $results{max};
479 $tot{ok} += $results{ok};
480 $tot{todo} += $results{todo};
481 $tot{sub_skipped} += $results{skip};
9c5c68c8 482
308957f5 483 my($estatus, $wstatus) = @results{qw(exit wait)};
b82fa0b7 484
2fe373ce 485 if ($wstatus) {
b82fa0b7 486 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
9c5c68c8 487 $estatus, $wstatus);
b82fa0b7 488 $failedtests{$tfile}{name} = $tfile;
2fe373ce 489 }
308957f5 490 elsif ($results{passing}) {
2fe373ce 491 if ($test{max} and $test{skipped} + $test{bonus}) {
492 my @msg;
493 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
494 if $test{skipped};
495 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
496 if $test{bonus};
f0008e52 497 print "$test{ml}ok\n ".join(', ', @msg)."\n";
2fe373ce 498 } elsif ($test{max}) {
499 print "$test{ml}ok\n";
d5d4ec93 500 } elsif (defined $test{skip_all} and length $test{skip_all}) {
c0bb2de7 501 print "skipped\n all skipped: $test{skip_all}\n";
2fe373ce 502 $tot{skipped}++;
503 } else {
0be28027 504 print "skipped\n all skipped: no reason given\n";
2fe373ce 505 $tot{skipped}++;
506 }
507 $tot{good}++;
508 }
b82fa0b7 509 else {
510 if ($test{max}) {
511 if ($test{'next'} <= $test{max}) {
512 push @{$test{failed}}, $test{'next'}..$test{max};
513 }
514 if (@{$test{failed}}) {
515 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
516 @{$test{failed}});
517 print "$test{ml}$txt";
518 $failedtests{$tfile} = { canon => $canon,
519 max => $test{max},
520 failed => scalar @{$test{failed}},
521 name => $tfile,
522 percent => 100*(scalar @{$test{failed}})/$test{max},
523 estat => '',
524 wstat => '',
525 };
526 } else {
527 print "Don't know which tests failed: got $test{ok} ok, ".
528 "expected $test{max}\n";
529 $failedtests{$tfile} = { canon => '??',
530 max => $test{max},
531 failed => '??',
532 name => $tfile,
533 percent => undef,
534 estat => '',
535 wstat => '',
536 };
537 }
538 $tot{bad}++;
539 } elsif ($test{'next'} == 0) {
540 print "FAILED before any test output arrived\n";
541 $tot{bad}++;
542 $failedtests{$tfile} = { canon => '??',
543 max => '??',
544 failed => '??',
545 name => $tfile,
546 percent => undef,
547 estat => '',
548 wstat => '',
549 };
550 }
551 }
552
2fe373ce 553 if (defined $Files_In_Dir) {
554 my @new_dir_files = _globdir $Files_In_Dir;
555 if (@new_dir_files != @dir_files) {
556 my %f;
557 @f{@new_dir_files} = (1) x @new_dir_files;
558 delete @f{@dir_files};
559 my @f = sort keys %f;
560 print "LEAKED FILES: @f\n";
561 @dir_files = @new_dir_files;
562 }
563 }
a0d0e21e 564 }
9c5c68c8 565 $tot{bench} = timediff(new Benchmark, $t_start);
d667a7e6 566
13287dd5 567 $Strap->_restore_PERL5LIB;
9c5c68c8 568
569 return(\%tot, \%failedtests);
570}
571
b82fa0b7 572=item B<_mk_leader>
573
7a315204 574 my($leader, $ml) = _mk_leader($test_file, $width);
b82fa0b7 575
576Generates the 't/foo........' $leader for the given $test_file as well
577as a similar version which will overwrite the current line (by use of
\r and such). $ml may be empty if Test::Harness doesn't think you're
on a TTY.
580
581The $width is the width of the "yada/blah.." string.
b82fa0b7 582
583=cut
584
585sub _mk_leader {
2fe373ce 586 my($te, $width) = @_;
587 chomp($te);
b695f709 588 $te =~ s/\.\w+$/./;
b82fa0b7 589
356733da 590 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
b82fa0b7 591 my $blank = (' ' x 77);
7a315204 592 my $leader = "$te" . '.' x ($width - length($te));
b82fa0b7 593 my $ml = "";
594
595 $ml = "\r$blank\r$leader"
596 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
597
598 return($leader, $ml);
599}
600
13287dd5 601=item B<_leader_width>
602
603 my($width) = _leader_width(@test_files);
604
605Calculates how wide the leader should be based on the length of the
606longest test name.
607
608=cut
609
610sub _leader_width {
611 my $maxlen = 0;
612 my $maxsuflen = 0;
613 foreach (@_) {
614 my $suf = /\.(\w+)$/ ? $1 : '';
615 my $len = length;
616 my $suflen = length $suf;
617 $maxlen = $len if $len > $maxlen;
618 $maxsuflen = $suflen if $suflen > $maxsuflen;
619 }
356733da 620 # + 3 : we want three dots between the test name and the "ok"
621 return $maxlen + 3 - $maxsuflen;
13287dd5 622}
623
9c5c68c8 624
625sub _show_results {
626 my($tot, $failedtests) = @_;
627
628 my $pct;
629 my $bonusmsg = _bonusmsg($tot);
630
2fe373ce 631 if (_all_ok($tot)) {
632 print "All tests successful$bonusmsg.\n";
633 } elsif (!$tot->{tests}){
634 die "FAILED--no tests were run for some reason.\n";
635 } elsif (!$tot->{max}) {
636 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
637 die "FAILED--$tot->{tests} test $blurb could be run, ".
9c5c68c8 638 "alas--no output ever seen\n";
c07a80fd 639 } else {
2fe373ce 640 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
641 my $percent_ok = 100*$tot->{ok}/$tot->{max};
642 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
643 $tot->{max} - $tot->{ok}, $tot->{max},
644 $percent_ok;
0a931e4a 645
9c5c68c8 646 my($fmt_top, $fmt) = _create_fmts($failedtests);
0a931e4a 647
2fe373ce 648 # Now write to formats
649 for my $script (sort keys %$failedtests) {
650 $Curtest = $failedtests->{$script};
651 write;
652 }
653 if ($tot->{bad}) {
654 $bonusmsg =~ s/^,\s*//;
655 print "$bonusmsg.\n" if $bonusmsg;
656 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
9c5c68c8 657 "$subpct\n";
2fe373ce 658 }
c07a80fd 659 }
f0a9308e 660
9c5c68c8 661 printf("Files=%d, Tests=%d, %s\n",
662 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
663}
664
665
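# The Straps object reports each parsed line of test output through this
# callback.  We echo the raw line in verbose mode, then dispatch on the
# line type ('header', 'test', 'bailout') to the handlers set up below.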
308957f5 666my %Handlers = ();
667$Strap->{callback} = sub {
668 my($self, $line, $type, $totals) = @_;
669 print $line if $Verbose;
9c5c68c8 670
308957f5 671 my $meth = $Handlers{$type};
672 $meth->($self, $line, $type, $totals) if $meth;
673};
9c5c68c8 674
9c5c68c8 675
308957f5 676$Handlers{header} = sub {
677 my($self, $line, $type, $totals) = @_;
9c5c68c8 678
308957f5 679 warn "Test header seen more than once!\n" if $self->{_seen_header};
9c5c68c8 680
308957f5 681 $self->{_seen_header}++;
9c5c68c8 682
308957f5 683 warn "1..M can only appear at the beginning or end of tests\n"
684 if $totals->{seen} &&
685 $totals->{max} < $totals->{seen};
686};
13287dd5 687
308957f5 688$Handlers{test} = sub {
689 my($self, $line, $type, $totals) = @_;
9c5c68c8 690
308957f5 691 my $curr = $totals->{seen};
692 my $next = $self->{'next'};
693 my $max = $totals->{max};
694 my $detail = $totals->{details}[-1];
b82fa0b7 695
308957f5 696 if( $detail->{ok} ) {
697 _print_ml("ok $curr/$max");
356733da 698
308957f5 699 if( $detail->{type} eq 'skip' ) {
700 $self->{_skip_reason} = $detail->{reason}
701 unless defined $self->{_skip_reason};
702 $self->{_skip_reason} = 'various reasons'
703 if $self->{_skip_reason} ne $detail->{reason};
704 }
b82fa0b7 705 }
706 else {
308957f5 707 _print_ml("NOK $curr");
b82fa0b7 708 }
b82fa0b7 709
308957f5 710 if( $curr > $next ) {
711 print "Test output counter mismatch [test $curr]\n";
712 }
713 elsif( $curr < $next ) {
714 print "Confused test output: test $curr answered after ".
715 "test ", $next - 1, "\n";
716 }
b82fa0b7 717
308957f5 718};
2fe373ce 719
308957f5 720$Handlers{bailout} = sub {
721 my($self, $line, $type, $totals) = @_;
9c5c68c8 722
308957f5 723 die "FAILED--Further testing stopped" .
724 ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
725};
356733da 726
9c5c68c8 727
308957f5 728sub _print_ml {
729 print join '', $ML, @_ if $ML;
9c5c68c8 730}
731
732
733sub _bonusmsg {
734 my($tot) = @_;
735
736 my $bonusmsg = '';
737 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
2fe373ce 738 " UNEXPECTEDLY SUCCEEDED)")
739 if $tot->{bonus};
9c5c68c8 740
741 if ($tot->{skipped}) {
2fe373ce 742 $bonusmsg .= ", $tot->{skipped} test"
9c5c68c8 743 . ($tot->{skipped} != 1 ? 's' : '');
2fe373ce 744 if ($tot->{sub_skipped}) {
745 $bonusmsg .= " and $tot->{sub_skipped} subtest"
746 . ($tot->{sub_skipped} != 1 ? 's' : '');
747 }
748 $bonusmsg .= ' skipped';
9c5c68c8 749 }
750 elsif ($tot->{sub_skipped}) {
2fe373ce 751 $bonusmsg .= ", $tot->{sub_skipped} subtest"
752 . ($tot->{sub_skipped} != 1 ? 's' : '')
753 . " skipped";
9c5c68c8 754 }
755
756 return $bonusmsg;
757}
758
9c5c68c8 759# Test program go boom.
760sub _dubious_return {
761 my($test, $tot, $estatus, $wstatus) = @_;
762 my ($failed, $canon, $percent) = ('??', '??');
763
764 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
765 "(wstat %d, 0x%x)\n",
766 $wstatus,$wstatus;
767 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
768
769 if (corestatus($wstatus)) { # until we have a wait module
770 if ($Have_Devel_Corestack) {
771 Devel::CoreStack::stack($^X);
772 } else {
773 print "\ttest program seems to have generated a core\n";
774 }
775 }
776
777 $tot->{bad}++;
778
779 if ($test->{max}) {
b82fa0b7 780 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
9c5c68c8 781 print "\tafter all the subtests completed successfully\n";
782 $percent = 0;
2fe373ce 783 $failed = 0; # But we do not set $canon!
9c5c68c8 784 }
785 else {
b82fa0b7 786 push @{$test->{failed}}, $test->{'next'}..$test->{max};
9c5c68c8 787 $failed = @{$test->{failed}};
788 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
789 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
790 print "DIED. ",$txt;
791 }
792 }
793
794 return { canon => $canon, max => $test->{max} || '??',
795 failed => $failed,
66fd8cb9 796 percent => $percent,
9c5c68c8 797 estat => $estatus, wstat => $wstatus,
798 };
799}
800
801
9c5c68c8 802sub _create_fmts {
803 my($failedtests) = @_;
804
b82fa0b7 805 my $failed_str = "Failed Test";
806 my $middle_str = " Stat Wstat Total Fail Failed ";
9c5c68c8 807 my $list_str = "List of Failed";
808
809 # Figure out our longest name string for formatting purposes.
810 my $max_namelen = length($failed_str);
811 foreach my $script (keys %$failedtests) {
812 my $namelen = length $failedtests->{$script}->{name};
813 $max_namelen = $namelen if $namelen > $max_namelen;
814 }
815
816 my $list_len = $Columns - length($middle_str) - $max_namelen;
817 if ($list_len < length($list_str)) {
818 $list_len = length($list_str);
819 $max_namelen = $Columns - length($middle_str) - $list_len;
820 if ($max_namelen < length($failed_str)) {
821 $max_namelen = length($failed_str);
822 $Columns = $max_namelen + length($middle_str) + $list_len;
823 }
824 }
825
826 my $fmt_top = "format STDOUT_TOP =\n"
b82fa0b7 827 . sprintf("%-${max_namelen}s", $failed_str)
9c5c68c8 828 . $middle_str
2fe373ce 829 . $list_str . "\n"
830 . "-" x $Columns
831 . "\n.\n";
9c5c68c8 832
833 my $fmt = "format STDOUT =\n"
2fe373ce 834 . "@" . "<" x ($max_namelen - 1)
b82fa0b7 835 . " @>> @>>>> @>>>> @>>> ^##.##% "
2fe373ce 836 . "^" . "<" x ($list_len - 1) . "\n"
837 . '{ $Curtest->{name}, $Curtest->{estat},'
838 . ' $Curtest->{wstat}, $Curtest->{max},'
839 . ' $Curtest->{failed}, $Curtest->{percent},'
840 . ' $Curtest->{canon}'
841 . "\n}\n"
842 . "~~" . " " x ($Columns - $list_len - 2) . "^"
843 . "<" x ($list_len - 1) . "\n"
844 . '$Curtest->{canon}'
845 . "\n.\n";
9c5c68c8 846
847 eval $fmt_top;
848 die $@ if $@;
849 eval $fmt;
850 die $@ if $@;
851
852 return($fmt_top, $fmt);
853}
854
b82fa0b7 855{
856 my $tried_devel_corestack;
9c5c68c8 857
b82fa0b7 858 sub corestatus {
859 my($st) = @_;
c0ee6f5c 860
356733da 861 eval {
862 local $^W = 0; # *.ph files are often *very* noisy
863 require 'wait.ph'
864 };
865 return if $@;
866 my $did_core = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
c0ee6f5c 867
b82fa0b7 868 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
869 unless $tried_devel_corestack++;
c0ee6f5c 870
356733da 871 return $did_core;
b82fa0b7 872 }
c0ee6f5c 873}
874
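# Build the failure summary for a single script: collapse the sorted list
# of failed test numbers into ranges (e.g. "15-20"), and return both the
# multi-line report text and the canonical range string shown in the
# "List of Failed" column.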
c07a80fd 875sub canonfailed ($@) {
89d3b7e2 876 my($max,$skipped,@failed) = @_;
6c31b336 877 my %seen;
878 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
c07a80fd 879 my $failed = @failed;
880 my @result = ();
881 my @canon = ();
882 my $min;
883 my $last = $min = shift @failed;
760ac839 884 my $canon;
c07a80fd 885 if (@failed) {
2fe373ce 886 for (@failed, $failed[-1]) { # don't forget the last one
887 if ($_ > $last+1 || $_ == $last) {
888 if ($min == $last) {
889 push @canon, $last;
890 } else {
891 push @canon, "$min-$last";
892 }
893 $min = $_;
894 }
895 $last = $_;
896 }
897 local $" = ", ";
898 push @result, "FAILED tests @canon\n";
899 $canon = join ' ', @canon;
a0d0e21e 900 } else {
2fe373ce 901 push @result, "FAILED test $last\n";
902 $canon = $last;
a0d0e21e 903 }
c07a80fd 904
905 push @result, "\tFailed $failed/$max tests, ";
89d3b7e2 906 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
907 my $ender = 's' x ($skipped > 1);
908 my $good = $max - $failed - $skipped;
909 my $goodper = sprintf("%.2f",100*($good/$max));
f0008e52 910 push @result, " (less $skipped skipped test$ender: $good okay, ".
9c5c68c8 911 "$goodper%)"
912 if $skipped;
89d3b7e2 913 push @result, "\n";
760ac839 914 my $txt = join "", @result;
915 ($txt, $canon);
a0d0e21e 916}
917
b82fa0b7 918=end _private
9c5c68c8 919
b82fa0b7 920=back
d667a7e6 921
b82fa0b7 922=cut
9c5c68c8 923
9c5c68c8 924
b82fa0b7 9251;
926__END__
9c5c68c8 927
928
cb1a09d0 929=head1 EXPORT
930
C<&runtests> is exported by Test::Harness by default.
cb1a09d0 932
9c5c68c8 933C<$verbose> and C<$switches> are exported upon request.
934
935
cb1a09d0 936=head1 DIAGNOSTICS
937
938=over 4
939
940=item C<All tests successful.\nFiles=%d, Tests=%d, %s>
941
If all tests are successful, some statistics about the performance
are printed.
944
6c31b336 945=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
946
For any single script that has failing subtests, statistics like the
above are printed.
949
950=item C<Test returned status %d (wstat %d)>
951
For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.
6c31b336 954
955=item C<Failed 1 test, %.2f%% okay. %s>
cb1a09d0 956
6c31b336 957=item C<Failed %d/%d tests, %.2f%% okay. %s>
cb1a09d0 958
959If not all tests were successful, the script dies with one of the
960above messages.
961
308957f5 962=item C<FAILED--Further testing stopped: %s>
d667a7e6 963
964If a single subtest decides that further testing will not make sense,
965the script dies with this message.
966
cb1a09d0 967=back
968
9b0ceca9 969=head1 ENVIRONMENT
970
37ce32a7 971=over 4
972
356733da 973=item C<HARNESS_ACTIVE>
37ce32a7 974
356733da 975Harness sets this before executing the individual tests. This allows
976the tests to determine if they are being executed through the harness
977or by any other means.
37ce32a7 978
356733da 979=item C<HARNESS_COLUMNS>
9b0ceca9 980
356733da 981This value will be used for the width of the terminal. If it is not
982set then it will default to C<COLUMNS>. If this is not set, it will
983default to 80. Note that users of Bourne-sh based shells will need to
984C<export COLUMNS> for this module to use that variable.
0d0c0d42 985
b82fa0b7 986=item C<HARNESS_COMPILE_TEST>
9636a016 987
When true, it will make harness attempt to compile the test using
989C<perlcc> before running it.
990
b82fa0b7 991B<NOTE> This currently only works when sitting in the perl source
992directory!
993
994=item C<HARNESS_FILELEAK_IN_DIR>
37ce32a7 995
996When set to the name of a directory, harness will check after each
997test whether new files appeared in that directory, and report them as
17a79f5b 998
999 LEAKED FILES: scr.tmp 0 my.db
1000
If relative, the directory name is interpreted relative to the current
directory at the moment runtests() was called. Putting an absolute
path into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
17a79f5b 1004
356733da 1005=item C<HARNESS_IGNORE_EXITCODE>
1006
When defined, makes harness ignore the exit status of child processes.
1008
1009=item C<HARNESS_NOTTY>
1010
When set to a true value, forces harness to behave as though STDOUT
were not a console. You may need to set this if you don't want harness
to output more frequent progress messages using carriage returns. Some
consoles may not handle carriage returns properly (which results in
somewhat messy output).
1016
b82fa0b7 1017=item C<HARNESS_PERL_SWITCHES>
37ce32a7 1018
1019Its value will be prepended to the switches used to invoke perl on
b82fa0b7 1020each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
37ce32a7 1021run all tests with all warnings enabled.
1022
356733da 1023=item C<HARNESS_VERBOSE>
37ce32a7 1024
356733da 1025If true, Test::Harness will output the verbose results of running
1026its tests. Setting $Test::Harness::verbose will override this.
37ce32a7 1027
1028=back
0a931e4a 1029
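A few of these can be driven from a Perl wrapper instead of the shell;
a small sketch (the path and switches are only illustrative). Note
that several of the variables are read when Test::Harness is first
loaded, so they are set in a BEGIN block that runs before the C<use>:

    BEGIN {
        $ENV{HARNESS_VERBOSE}         = 1;
        $ENV{HARNESS_PERL_SWITCHES}   = '-Ilib';
        $ENV{HARNESS_FILELEAK_IN_DIR} = 't/tmp';
    }
    use Test::Harness qw(runtests);

    runtests(@test_files);
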
b82fa0b7 1030=head1 EXAMPLE
1031
1032Here's how Test::Harness tests itself
1033
1034 $ cd ~/src/devel/Test-Harness
1035 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1036 $verbose=0; runtests @ARGV;' t/*.t
1037 Using /home/schwern/src/devel/Test-Harness/blib
1038 t/base..............ok
1039 t/nonumbers.........ok
1040 t/ok................ok
1041 t/test-harness......ok
1042 All tests successful.
1043 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
f19ae7a7 1044
cb1a09d0 1045=head1 SEE ALSO
1046
b82fa0b7 1047L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1048the underlying timing routines, L<Devel::CoreStack> to generate core
1049dumps from failed tests and L<Devel::Cover> for test coverage
1050analysis.
c07a80fd 1051
1052=head1 AUTHORS
1053
Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that has come
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years.
1058
1059Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1060
1061=head1 TODO
1062
Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests. This will probably take the form of a version of
runtests() which, rather than printing its output, returns raw data on
the state of the tests. (Partially done in Test::Harness::Straps)
b82fa0b7 1067
1068Fix HARNESS_COMPILE_TEST without breaking its core usage.
1069
1070Figure a way to report test names in the failure summary.
37ce32a7 1071
b82fa0b7 1072Rework the test summary so long test names are not truncated as badly.
308957f5 1073(Partially done with new skip test styles)
b82fa0b7 1074
b82fa0b7 1075Deal with VMS's "not \nok 4\n" mistake.
1076
1077Add option for coverage analysis.
1078
1079=for _private
1080Keeping whittling away at _run_all_tests()
1081
1082=for _private
1083Clean up how the summary is printed. Get rid of those damned formats.
cb1a09d0 1084
1085=head1 BUGS
1086
356733da 1087HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
b82fa0b7 1088directory.
1089
cb1a09d0 1090=cut