d667a7e6 1# -*- Mode: cperl; cperl-indent-level: 4 -*-
c0bb2de7 2# $Id: Harness.pm,v 1.31 2002/05/18 20:19:06 schwern Exp $
b82fa0b7 3
a0d0e21e 4package Test::Harness;
5
b82fa0b7 6require 5.004;
13287dd5 7use Test::Harness::Straps;
8use Test::Harness::Assert;
a0d0e21e 9use Exporter;
10use Benchmark;
4633a7c4 11use Config;
760ac839 12use strict;
13
b82fa0b7 14use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
308957f5 15 $Columns $verbose $switches $ML
b82fa0b7 16 @ISA @EXPORT @EXPORT_OK
17 );
4633a7c4 18
9c5c68c8 19# Backwards compatibility for exportable variable names.
20*verbose = \$Verbose;
21*switches = \$Switches;
22
23$Have_Devel_Corestack = 0;
24
c0bb2de7 25$VERSION = '2.23';
9b0ceca9 26
f19ae7a7 27$ENV{HARNESS_ACTIVE} = 1;
28
13287dd5 29END {
30 # For VMS.
31 delete $ENV{HARNESS_ACTIVE};
32}
33
9b0ceca9 34# Some experimental versions of OS/2 build have broken $?
9c5c68c8 35my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
36
37my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
9b0ceca9 38
13287dd5 39my $Strap = Test::Harness::Straps->new;
17a79f5b 40
9c5c68c8 41@ISA = ('Exporter');
42@EXPORT = qw(&runtests);
43@EXPORT_OK = qw($verbose $switches);
4633a7c4 44
356733da 45$Verbose = $ENV{HARNESS_VERBOSE} || 0;
9c5c68c8 46$Switches = "-w";
47$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
b82fa0b7 48$Columns--; # Some shells have trouble with a full line of text.
49
50
51=head1 NAME
52
53Test::Harness - run perl standard test scripts with statistics
54
55=head1 SYNOPSIS
56
57 use Test::Harness;
58
59 runtests(@test_files);
60
61=head1 DESCRIPTION
a0d0e21e 62
b82fa0b7 63B<STOP!> If all you want to do is write a test script, consider using
64Test::Simple. Otherwise, read on.
65
66(By using the Test module, you can write test scripts without
67knowing the exact output this module expects. However, if you need to
68know the specifics, read on!)
69
70Perl test scripts print to standard output C<"ok N"> for each single
71test, where C<N> is an increasing sequence of integers. The first line
72output by a standard test script is C<"1..M"> with C<M> being the
73number of tests that should be run within the test
script. Test::Harness::runtests(@tests) runs all the test scripts
75named as arguments and checks standard output for the expected
76C<"ok N"> strings.
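
For example, a minimal hand-rolled test script might produce such
output like this (the arithmetic check is only an illustration):

    print "1..2\n";
    print "ok 1\n";
    my $sum_ok = (1 + 1 == 2);
    print $sum_ok ? "ok 2\n" : "not ok 2\n";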
77
78After all tests have been performed, runtests() prints some
79performance statistics that are computed by the Benchmark module.
80
81=head2 The test script output
82
83The following explains how Test::Harness interprets the output of your
84test program.
85
86=over 4
87
88=item B<'1..M'>
89
356733da 90This header tells how many tests there will be. For example, C<1..10>
91means you plan on running 10 tests. This is a safeguard in case your
92test dies quietly in the middle of its run.
93
94It should be the first non-comment line output by your test program.
b82fa0b7 95
356733da 96In certain instances, you may not know how many tests you will
97ultimately be running. In this case, it is permitted for the 1..M
98header to appear as the B<last> line output by your test (again, it
99can be followed by further comments).
b82fa0b7 100
101Under B<no> circumstances should 1..M appear in the middle of your
102output or more than once.
103
104
105=item B<'ok', 'not ok'. Ok?>
106
Any output from the test script to standard error is ignored and
bypassed, and will thus be seen by the user. Lines written to standard
109output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
110runtests(). All other lines are discarded.
111
112C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
113
114
115=item B<test numbers>
116
Perl normally expects the 'ok' or 'not ok' to be followed by a test
number. It is tolerated if the test numbers after 'ok' are omitted. In
this case Test::Harness temporarily maintains its own counter until
the script supplies test numbers again. So the following test script
122
123 print <<END;
124 1..6
125 not ok
126 ok
127 not ok
128 ok
129 ok
130 END
131
132will generate
133
134 FAILED tests 1, 3, 6
135 Failed 3/6 tests, 50.00% okay
136
13287dd5 137=item B<test names>
b82fa0b7 138
13287dd5 139Anything after the test number but before the # is considered to be
140the name of the test.
b82fa0b7 141
13287dd5 142 ok 42 this is the name of the test
b82fa0b7 143
13287dd5 144Currently, Test::Harness does nothing with this information.
b82fa0b7 145
146=item B<Skipping tests>
147
148If the standard output line contains the substring C< # Skip> (with
149variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test. If the whole test script succeeds, the
151count of skipped tests is included in the generated output.
152C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
356733da 153for skipping.
b82fa0b7 154
155 ok 23 # skip Insufficient flogiston pressure.
156
Similarly, one can include an explanation in a C<1..0> line
158emitted if the test script is skipped completely:
159
160 1..0 # Skipped: no leverage found
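
A test script might emit such a line itself when it discovers that a
prerequisite is missing; here is a sketch (Some::Module is only a
placeholder name):

    unless (eval { require Some::Module }) {
        print "1..0 # Skipped: Some::Module not installed\n";
        exit;
    }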
161
162=item B<Todo tests>
163
164If the standard output line contains the substring C< # TODO> after
165C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
166afterwards is the thing that has to be done before this test will
167succeed.
168
169 not ok 13 # TODO harness the power of the atom
170
13287dd5 171=begin _deprecated
172
173Alternatively, you can specify a list of what tests are todo as part
174of the test header.
175
176 1..23 todo 5 12 23
177
178This only works if the header appears at the beginning of the test.
179
180This style is B<deprecated>.
181
182=end _deprecated
183
b82fa0b7 184These tests represent a feature to be implemented or a bug to be fixed
185and act as something of an executable "thing to do" list. They are
186B<not> expected to succeed. Should a todo test begin succeeding,
187Test::Harness will report it as a bonus. This indicates that whatever
188you were supposed to do has been done and you should promote this to a
189normal test.
190
191=item B<Bail out!>
192
193As an emergency measure, a test script can decide that further tests
194are useless (e.g. missing dependencies) and testing should stop
195immediately. In that case the test script prints the magic words
196
197 Bail out!
198
199to standard output. Any message after these words will be displayed by
200C<Test::Harness> as the reason why testing is stopped.
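
For instance, a test script might bail out when a hard dependency
cannot be loaded at all; this sketch uses DBI purely as an example:

    print "1..5\n";
    unless (eval { require DBI }) {
        print "Bail out! DBI is not installed\n";
        exit;
    }
    # ... the real tests would follow here ...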
201
202=item B<Comments>
203
204Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them.
207
208 ok 1
209 # Life is good, the sun is shining, RAM is cheap.
210 not ok 2
211 # got 'Bush' expected 'Gore'
212
213=item B<Anything else>
214
Any other output Test::Harness sees will be silently ignored B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
217test script, please use a comment.
218
219=back
220
221
13287dd5 222=head2 Taint mode
223
224Test::Harness will honor the C<-T> in the #! line on your test files. So
225if you begin a test with:
226
227 #!perl -T
228
229the test will be run with taint mode on.
230
231
=head2 Configuration variables
233
234These variables can be used to configure the behavior of
235Test::Harness. They are exported on request.
236
237=over 4
238
239=item B<$Test::Harness::verbose>
240
241The global variable $Test::Harness::verbose is exportable and can be
242used to let runtests() display the standard output of the script
243without altering the behavior otherwise.
244
245=item B<$Test::Harness::switches>
246
247The global variable $Test::Harness::switches is exportable and can be
248used to set perl command line options used for running the test
249script(s). The default value is C<-w>.
250
251=back
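
For example, a small driver script might set both of these before
calling runtests(); the extra C<-Ilib> switch is only an illustration:

    use Test::Harness qw(&runtests $verbose $switches);

    $verbose  = 1;           # also show each test's raw output
    $switches = "-w -Ilib";  # switches passed to perl for every test
    runtests(@ARGV);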
252
253
b82fa0b7 254=head2 Failure
255
It will happen: your tests will fail. After you mop up your ego, you
257can begin examining the summary report:
258
2fe373ce 259 t/base..............ok
260 t/nonumbers.........ok
261 t/ok................ok
262 t/test-harness......ok
263 t/waterloo..........dubious
b82fa0b7 264 Test returned status 3 (wstat 768, 0x300)
265 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
266 Failed 10/20 tests, 50.00% okay
267 Failed Test Stat Wstat Total Fail Failed List of Failed
268 -----------------------------------------------------------------------
269 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
270 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
271
272Everything passed but t/waterloo.t. It failed 10 of 20 tests and
exited with a non-zero status, indicating that something dubious happened.
274
275The columns in the summary report mean:
276
277=over 4
278
279=item B<Failed Test>
280
281The test file which failed.
282
283=item B<Stat>
284
285If the test exited with non-zero, this is its exit status.
286
287=item B<Wstat>
288
289The wait status of the test I<umm, I need a better explanation here>.
290
291=item B<Total>
292
293Total number of tests expected to run.
294
295=item B<Fail>
296
297Number which failed, either from "not ok" or because they never ran.
298
299=item B<Failed>
300
301Percentage of the total tests which failed.
302
303=item B<List of Failed>
304
305A list of the tests which failed. Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
30720 failed).
308
309=back
310
311
312=head2 Functions
313
Test::Harness currently has only one function; here it is.
315
316=over 4
317
318=item B<runtests>
319
320 my $allok = runtests(@test_files);
321
322This runs all the given @test_files and divines whether they passed
323or failed based on their output to STDOUT (details above). It prints
324out each individual test which failed along with a summary report and
how long it all took.
326
327It returns true if everything was ok, false otherwise.
328
329=for _private
330This is just _run_all_tests() plus _show_results()
331
332=cut
17a79f5b 333
a0d0e21e 334sub runtests {
335 my(@tests) = @_;
9c5c68c8 336
b82fa0b7 337 local ($\, $,);
338
339 my($tot, $failedtests) = _run_all_tests(@tests);
9c5c68c8 340 _show_results($tot, $failedtests);
341
2fe373ce 342 my $ok = _all_ok($tot);
b82fa0b7 343
13287dd5 344 assert(($ok xor keys %$failedtests),
345 q{ok status jives with $failedtests});
b82fa0b7 346
347 return $ok;
348}
349
350=begin _private
351
2fe373ce 352=item B<_all_ok>
353
354 my $ok = _all_ok(\%tot);
355
356Tells you if this test run is overall successful or not.
357
358=cut
359
360sub _all_ok {
361 my($tot) = shift;
362
363 return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
364}
365
b82fa0b7 366=item B<_globdir>
367
368 my @files = _globdir $dir;
369
370Returns all the files in a directory. This is shorthand for backwards
371compatibility on systems where glob() doesn't work right.
372
373=cut
374
375sub _globdir {
376 opendir DIRH, shift;
377 my @f = readdir DIRH;
378 closedir DIRH;
379
380 return @f;
9c5c68c8 381}
382
b82fa0b7 383=item B<_run_all_tests>
384
385 my($total, $failed) = _run_all_tests(@test_files);
386
387Runs all the given @test_files (as runtests()) but does it quietly (no
388report). $total is a hash ref summary of all the tests run. Its keys
and values are as follows:
390
391 bonus Number of individual todo tests unexpectedly passed
392 max Number of individual tests ran
393 ok Number of individual tests passed
394 sub_skipped Number of individual tests skipped
2fe373ce 395 todo Number of individual todo tests
b82fa0b7 396
397 files Number of test files ran
398 good Number of test files passed
399 bad Number of test files failed
400 tests Number of test files originally given
401 skipped Number of test files skipped
402
403If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
404test.
405
406$failed is a hash ref of all the test scripts which failed. Each key
407is the name of a test script, each value is another hash representing
408how that script failed. Its keys are these:
9c5c68c8 409
b82fa0b7 410 name Name of the test which failed
411 estat Script's exit value
412 wstat Script's wait status
413 max Number of individual tests
414 failed Number which failed
415 percent Percentage of tests which failed
416 canon List of tests which failed (as string).
417
418Needless to say, $failed should be empty if everything passed.
419
420B<NOTE> Currently this function is still noisy. I'm working on it.
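
For illustration only (this is private API, so the call itself is just
a sketch), the documented keys might be consumed like this:

    my($total, $failed) = _run_all_tests(@test_files);
    printf "%d/%d test scripts passed\n", $total->{good}, $total->{tests};
    print "$_ failed tests: $failed->{$_}{canon}\n" for sort keys %$failed;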
421
422=cut
423
308957f5 424#'#
b82fa0b7 425sub _run_all_tests {
9c5c68c8 426 my(@tests) = @_;
a0d0e21e 427 local($|) = 1;
9c5c68c8 428 my(%failedtests);
429
430 # Test-wide totals.
431 my(%tot) = (
432 bonus => 0,
433 max => 0,
434 ok => 0,
435 files => 0,
436 bad => 0,
437 good => 0,
438 tests => scalar @tests,
439 sub_skipped => 0,
2fe373ce 440 todo => 0,
9c5c68c8 441 skipped => 0,
2fe373ce 442 bench => 0,
9c5c68c8 443 );
774d564b 444
b82fa0b7 445 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
a0d0e21e 446 my $t_start = new Benchmark;
9c5c68c8 447
13287dd5 448 my $width = _leader_width(@tests);
b82fa0b7 449 foreach my $tfile (@tests) {
13287dd5 450
7a315204 451 my($leader, $ml) = _mk_leader($tfile, $width);
308957f5 452 local $ML = $ml;
b82fa0b7 453 print $leader;
9c5c68c8 454
356733da 455 $tot{files}++;
456
308957f5 457 $Strap->{_seen_header} = 0;
458 my %results = $Strap->analyze_file($tfile);
459
9c5c68c8 460 # state of the current test.
308957f5 461 my @failed = grep { !$results{details}[$_-1]{ok} }
462 1..@{$results{details}};
9c5c68c8 463 my %test = (
308957f5 464 ok => $results{ok},
465 'next' => $Strap->{'next'},
466 max => $results{max},
467 failed => \@failed,
468 bonus => $results{bonus},
469 skipped => $results{skip},
470 skip_reason => $Strap->{_skip_reason},
c0bb2de7 471 skip_all => $Strap->{skip_all},
9c5c68c8 472 ml => $ml,
473 );
474
308957f5 475 $tot{bonus} += $results{bonus};
476 $tot{max} += $results{max};
477 $tot{ok} += $results{ok};
478 $tot{todo} += $results{todo};
479 $tot{sub_skipped} += $results{skip};
9c5c68c8 480
308957f5 481 my($estatus, $wstatus) = @results{qw(exit wait)};
b82fa0b7 482
2fe373ce 483 if ($wstatus) {
b82fa0b7 484 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
9c5c68c8 485 $estatus, $wstatus);
b82fa0b7 486 $failedtests{$tfile}{name} = $tfile;
2fe373ce 487 }
308957f5 488 elsif ($results{passing}) {
2fe373ce 489 if ($test{max} and $test{skipped} + $test{bonus}) {
490 my @msg;
491 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
492 if $test{skipped};
493 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
494 if $test{bonus};
f0008e52 495 print "$test{ml}ok\n ".join(', ', @msg)."\n";
2fe373ce 496 } elsif ($test{max}) {
497 print "$test{ml}ok\n";
c0bb2de7 498 } elsif (defined $test{skip_all}) {
499 print "skipped\n all skipped: $test{skip_all}\n";
2fe373ce 500 $tot{skipped}++;
501 } else {
f0008e52 502 print "\n skipped test on this platform\n";
2fe373ce 503 $tot{skipped}++;
504 }
505 $tot{good}++;
506 }
b82fa0b7 507 else {
508 if ($test{max}) {
509 if ($test{'next'} <= $test{max}) {
510 push @{$test{failed}}, $test{'next'}..$test{max};
511 }
512 if (@{$test{failed}}) {
513 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
514 @{$test{failed}});
515 print "$test{ml}$txt";
516 $failedtests{$tfile} = { canon => $canon,
517 max => $test{max},
518 failed => scalar @{$test{failed}},
519 name => $tfile,
520 percent => 100*(scalar @{$test{failed}})/$test{max},
521 estat => '',
522 wstat => '',
523 };
524 } else {
525 print "Don't know which tests failed: got $test{ok} ok, ".
526 "expected $test{max}\n";
527 $failedtests{$tfile} = { canon => '??',
528 max => $test{max},
529 failed => '??',
530 name => $tfile,
531 percent => undef,
532 estat => '',
533 wstat => '',
534 };
535 }
536 $tot{bad}++;
537 } elsif ($test{'next'} == 0) {
538 print "FAILED before any test output arrived\n";
539 $tot{bad}++;
540 $failedtests{$tfile} = { canon => '??',
541 max => '??',
542 failed => '??',
543 name => $tfile,
544 percent => undef,
545 estat => '',
546 wstat => '',
547 };
548 }
549 }
550
2fe373ce 551 if (defined $Files_In_Dir) {
552 my @new_dir_files = _globdir $Files_In_Dir;
553 if (@new_dir_files != @dir_files) {
554 my %f;
555 @f{@new_dir_files} = (1) x @new_dir_files;
556 delete @f{@dir_files};
557 my @f = sort keys %f;
558 print "LEAKED FILES: @f\n";
559 @dir_files = @new_dir_files;
560 }
561 }
a0d0e21e 562 }
9c5c68c8 563 $tot{bench} = timediff(new Benchmark, $t_start);
d667a7e6 564
13287dd5 565 $Strap->_restore_PERL5LIB;
9c5c68c8 566
567 return(\%tot, \%failedtests);
568}
569
b82fa0b7 570=item B<_mk_leader>
571
7a315204 572 my($leader, $ml) = _mk_leader($test_file, $width);
b82fa0b7 573
574Generates the 't/foo........' $leader for the given $test_file as well
575as a similar version which will overwrite the current line (by use of
576\r and such). $ml may be empty if Test::Harness doesn't think you're
on a TTY.
578
579The $width is the width of the "yada/blah.." string.
b82fa0b7 580
581=cut
582
583sub _mk_leader {
2fe373ce 584 my($te, $width) = @_;
585 chomp($te);
b695f709 586 $te =~ s/\.\w+$/./;
b82fa0b7 587
356733da 588 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
b82fa0b7 589 my $blank = (' ' x 77);
7a315204 590 my $leader = "$te" . '.' x ($width - length($te));
b82fa0b7 591 my $ml = "";
592
593 $ml = "\r$blank\r$leader"
594 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
595
596 return($leader, $ml);
597}
598
13287dd5 599=item B<_leader_width>
600
601 my($width) = _leader_width(@test_files);
602
603Calculates how wide the leader should be based on the length of the
604longest test name.
605
606=cut
607
608sub _leader_width {
609 my $maxlen = 0;
610 my $maxsuflen = 0;
611 foreach (@_) {
612 my $suf = /\.(\w+)$/ ? $1 : '';
613 my $len = length;
614 my $suflen = length $suf;
615 $maxlen = $len if $len > $maxlen;
616 $maxsuflen = $suflen if $suflen > $maxsuflen;
617 }
356733da 618 # + 3 : we want three dots between the test name and the "ok"
619 return $maxlen + 3 - $maxsuflen;
13287dd5 620}
621
9c5c68c8 622
623sub _show_results {
624 my($tot, $failedtests) = @_;
625
626 my $pct;
627 my $bonusmsg = _bonusmsg($tot);
628
2fe373ce 629 if (_all_ok($tot)) {
630 print "All tests successful$bonusmsg.\n";
631 } elsif (!$tot->{tests}){
632 die "FAILED--no tests were run for some reason.\n";
633 } elsif (!$tot->{max}) {
634 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
635 die "FAILED--$tot->{tests} test $blurb could be run, ".
9c5c68c8 636 "alas--no output ever seen\n";
c07a80fd 637 } else {
2fe373ce 638 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
639 my $percent_ok = 100*$tot->{ok}/$tot->{max};
640 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
641 $tot->{max} - $tot->{ok}, $tot->{max},
642 $percent_ok;
0a931e4a 643
9c5c68c8 644 my($fmt_top, $fmt) = _create_fmts($failedtests);
0a931e4a 645
2fe373ce 646 # Now write to formats
647 for my $script (sort keys %$failedtests) {
648 $Curtest = $failedtests->{$script};
649 write;
650 }
651 if ($tot->{bad}) {
652 $bonusmsg =~ s/^,\s*//;
653 print "$bonusmsg.\n" if $bonusmsg;
654 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
9c5c68c8 655 "$subpct\n";
2fe373ce 656 }
c07a80fd 657 }
f0a9308e 658
9c5c68c8 659 printf("Files=%d, Tests=%d, %s\n",
660 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
661}
662
663
308957f5 664my %Handlers = ();
665$Strap->{callback} = sub {
666 my($self, $line, $type, $totals) = @_;
667 print $line if $Verbose;
9c5c68c8 668
308957f5 669 my $meth = $Handlers{$type};
670 $meth->($self, $line, $type, $totals) if $meth;
671};
9c5c68c8 672
9c5c68c8 673
308957f5 674$Handlers{header} = sub {
675 my($self, $line, $type, $totals) = @_;
9c5c68c8 676
308957f5 677 warn "Test header seen more than once!\n" if $self->{_seen_header};
9c5c68c8 678
308957f5 679 $self->{_seen_header}++;
9c5c68c8 680
308957f5 681 warn "1..M can only appear at the beginning or end of tests\n"
682 if $totals->{seen} &&
683 $totals->{max} < $totals->{seen};
684};
13287dd5 685
308957f5 686$Handlers{test} = sub {
687 my($self, $line, $type, $totals) = @_;
9c5c68c8 688
308957f5 689 my $curr = $totals->{seen};
690 my $next = $self->{'next'};
691 my $max = $totals->{max};
692 my $detail = $totals->{details}[-1];
b82fa0b7 693
308957f5 694 if( $detail->{ok} ) {
695 _print_ml("ok $curr/$max");
356733da 696
308957f5 697 if( $detail->{type} eq 'skip' ) {
698 $self->{_skip_reason} = $detail->{reason}
699 unless defined $self->{_skip_reason};
700 $self->{_skip_reason} = 'various reasons'
701 if $self->{_skip_reason} ne $detail->{reason};
702 }
b82fa0b7 703 }
704 else {
308957f5 705 _print_ml("NOK $curr");
b82fa0b7 706 }
b82fa0b7 707
308957f5 708 if( $curr > $next ) {
709 print "Test output counter mismatch [test $curr]\n";
710 }
711 elsif( $curr < $next ) {
712 print "Confused test output: test $curr answered after ".
713 "test ", $next - 1, "\n";
714 }
b82fa0b7 715
308957f5 716};
2fe373ce 717
308957f5 718$Handlers{bailout} = sub {
719 my($self, $line, $type, $totals) = @_;
9c5c68c8 720
308957f5 721 die "FAILED--Further testing stopped" .
722 ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
723};
356733da 724
9c5c68c8 725
308957f5 726sub _print_ml {
727 print join '', $ML, @_ if $ML;
9c5c68c8 728}
729
730
731sub _bonusmsg {
732 my($tot) = @_;
733
734 my $bonusmsg = '';
735 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
2fe373ce 736 " UNEXPECTEDLY SUCCEEDED)")
737 if $tot->{bonus};
9c5c68c8 738
739 if ($tot->{skipped}) {
2fe373ce 740 $bonusmsg .= ", $tot->{skipped} test"
9c5c68c8 741 . ($tot->{skipped} != 1 ? 's' : '');
2fe373ce 742 if ($tot->{sub_skipped}) {
743 $bonusmsg .= " and $tot->{sub_skipped} subtest"
744 . ($tot->{sub_skipped} != 1 ? 's' : '');
745 }
746 $bonusmsg .= ' skipped';
9c5c68c8 747 }
748 elsif ($tot->{sub_skipped}) {
2fe373ce 749 $bonusmsg .= ", $tot->{sub_skipped} subtest"
750 . ($tot->{sub_skipped} != 1 ? 's' : '')
751 . " skipped";
9c5c68c8 752 }
753
754 return $bonusmsg;
755}
756
9c5c68c8 757# Test program go boom.
758sub _dubious_return {
759 my($test, $tot, $estatus, $wstatus) = @_;
760 my ($failed, $canon, $percent) = ('??', '??');
761
762 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
763 "(wstat %d, 0x%x)\n",
764 $wstatus,$wstatus;
765 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
766
767 if (corestatus($wstatus)) { # until we have a wait module
768 if ($Have_Devel_Corestack) {
769 Devel::CoreStack::stack($^X);
770 } else {
771 print "\ttest program seems to have generated a core\n";
772 }
773 }
774
775 $tot->{bad}++;
776
777 if ($test->{max}) {
b82fa0b7 778 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
9c5c68c8 779 print "\tafter all the subtests completed successfully\n";
780 $percent = 0;
2fe373ce 781 $failed = 0; # But we do not set $canon!
9c5c68c8 782 }
783 else {
b82fa0b7 784 push @{$test->{failed}}, $test->{'next'}..$test->{max};
9c5c68c8 785 $failed = @{$test->{failed}};
786 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
787 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
788 print "DIED. ",$txt;
789 }
790 }
791
792 return { canon => $canon, max => $test->{max} || '??',
793 failed => $failed,
66fd8cb9 794 percent => $percent,
9c5c68c8 795 estat => $estatus, wstat => $wstatus,
796 };
797}
798
799
9c5c68c8 800sub _create_fmts {
801 my($failedtests) = @_;
802
b82fa0b7 803 my $failed_str = "Failed Test";
804 my $middle_str = " Stat Wstat Total Fail Failed ";
9c5c68c8 805 my $list_str = "List of Failed";
806
807 # Figure out our longest name string for formatting purposes.
808 my $max_namelen = length($failed_str);
809 foreach my $script (keys %$failedtests) {
810 my $namelen = length $failedtests->{$script}->{name};
811 $max_namelen = $namelen if $namelen > $max_namelen;
812 }
813
814 my $list_len = $Columns - length($middle_str) - $max_namelen;
815 if ($list_len < length($list_str)) {
816 $list_len = length($list_str);
817 $max_namelen = $Columns - length($middle_str) - $list_len;
818 if ($max_namelen < length($failed_str)) {
819 $max_namelen = length($failed_str);
820 $Columns = $max_namelen + length($middle_str) + $list_len;
821 }
822 }
823
824 my $fmt_top = "format STDOUT_TOP =\n"
b82fa0b7 825 . sprintf("%-${max_namelen}s", $failed_str)
9c5c68c8 826 . $middle_str
2fe373ce 827 . $list_str . "\n"
828 . "-" x $Columns
829 . "\n.\n";
9c5c68c8 830
831 my $fmt = "format STDOUT =\n"
2fe373ce 832 . "@" . "<" x ($max_namelen - 1)
b82fa0b7 833 . " @>> @>>>> @>>>> @>>> ^##.##% "
2fe373ce 834 . "^" . "<" x ($list_len - 1) . "\n"
835 . '{ $Curtest->{name}, $Curtest->{estat},'
836 . ' $Curtest->{wstat}, $Curtest->{max},'
837 . ' $Curtest->{failed}, $Curtest->{percent},'
838 . ' $Curtest->{canon}'
839 . "\n}\n"
840 . "~~" . " " x ($Columns - $list_len - 2) . "^"
841 . "<" x ($list_len - 1) . "\n"
842 . '$Curtest->{canon}'
843 . "\n.\n";
9c5c68c8 844
845 eval $fmt_top;
846 die $@ if $@;
847 eval $fmt;
848 die $@ if $@;
849
850 return($fmt_top, $fmt);
851}
852
b82fa0b7 853{
854 my $tried_devel_corestack;
9c5c68c8 855
b82fa0b7 856 sub corestatus {
857 my($st) = @_;
c0ee6f5c 858
356733da 859 eval {
860 local $^W = 0; # *.ph files are often *very* noisy
861 require 'wait.ph'
862 };
863 return if $@;
864 my $did_core = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
c0ee6f5c 865
b82fa0b7 866 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
867 unless $tried_devel_corestack++;
c0ee6f5c 868
356733da 869 return $did_core;
b82fa0b7 870 }
c0ee6f5c 871}
872
c07a80fd 873sub canonfailed ($@) {
89d3b7e2 874 my($max,$skipped,@failed) = @_;
6c31b336 875 my %seen;
876 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
c07a80fd 877 my $failed = @failed;
878 my @result = ();
879 my @canon = ();
880 my $min;
881 my $last = $min = shift @failed;
760ac839 882 my $canon;
c07a80fd 883 if (@failed) {
2fe373ce 884 for (@failed, $failed[-1]) { # don't forget the last one
885 if ($_ > $last+1 || $_ == $last) {
886 if ($min == $last) {
887 push @canon, $last;
888 } else {
889 push @canon, "$min-$last";
890 }
891 $min = $_;
892 }
893 $last = $_;
894 }
895 local $" = ", ";
896 push @result, "FAILED tests @canon\n";
897 $canon = join ' ', @canon;
a0d0e21e 898 } else {
2fe373ce 899 push @result, "FAILED test $last\n";
900 $canon = $last;
a0d0e21e 901 }
c07a80fd 902
903 push @result, "\tFailed $failed/$max tests, ";
89d3b7e2 904 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
905 my $ender = 's' x ($skipped > 1);
906 my $good = $max - $failed - $skipped;
907 my $goodper = sprintf("%.2f",100*($good/$max));
f0008e52 908 push @result, " (less $skipped skipped test$ender: $good okay, ".
9c5c68c8 909 "$goodper%)"
910 if $skipped;
89d3b7e2 911 push @result, "\n";
760ac839 912 my $txt = join "", @result;
913 ($txt, $canon);
a0d0e21e 914}
915
b82fa0b7 916=end _private
9c5c68c8 917
b82fa0b7 918=back
d667a7e6 919
b82fa0b7 920=cut
9c5c68c8 921
9c5c68c8 922
b82fa0b7 9231;
924__END__
9c5c68c8 925
926
cb1a09d0 927=head1 EXPORT
928
C<&runtests> is exported by Test::Harness by default.
cb1a09d0 930
9c5c68c8 931C<$verbose> and C<$switches> are exported upon request.
932
933
cb1a09d0 934=head1 DIAGNOSTICS
935
936=over 4
937
938=item C<All tests successful.\nFiles=%d, Tests=%d, %s>
939
If all tests are successful, some statistics about the performance
are printed.
942
6c31b336 943=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
944
For any single script that has failing subtests, statistics like the
946above are printed.
947
948=item C<Test returned status %d (wstat %d)>
949
For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.
6c31b336 952
953=item C<Failed 1 test, %.2f%% okay. %s>
cb1a09d0 954
6c31b336 955=item C<Failed %d/%d tests, %.2f%% okay. %s>
cb1a09d0 956
957If not all tests were successful, the script dies with one of the
958above messages.
959
308957f5 960=item C<FAILED--Further testing stopped: %s>
d667a7e6 961
962If a single subtest decides that further testing will not make sense,
963the script dies with this message.
964
cb1a09d0 965=back
966
9b0ceca9 967=head1 ENVIRONMENT
968
37ce32a7 969=over 4
970
356733da 971=item C<HARNESS_ACTIVE>
37ce32a7 972
356733da 973Harness sets this before executing the individual tests. This allows
974the tests to determine if they are being executed through the harness
975or by any other means.
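
For example, a test script might emit extra diagnostics only when it
is being run by hand:

    print "# verbose debugging output\n" unless $ENV{HARNESS_ACTIVE};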
37ce32a7 976
356733da 977=item C<HARNESS_COLUMNS>
9b0ceca9 978
356733da 979This value will be used for the width of the terminal. If it is not
980set then it will default to C<COLUMNS>. If this is not set, it will
981default to 80. Note that users of Bourne-sh based shells will need to
982C<export COLUMNS> for this module to use that variable.
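
Note that the value is read when Test::Harness is loaded, so a driver
script would have to set it early; for example:

    BEGIN { $ENV{HARNESS_COLUMNS} = 120 }  # before Test::Harness is loaded
    use Test::Harness;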
0d0c0d42 983
b82fa0b7 984=item C<HARNESS_COMPILE_TEST>
9636a016 985
37ce32a7 986When true it will make harness attempt to compile the test using
987C<perlcc> before running it.
988
b82fa0b7 989B<NOTE> This currently only works when sitting in the perl source
990directory!
991
992=item C<HARNESS_FILELEAK_IN_DIR>
37ce32a7 993
994When set to the name of a directory, harness will check after each
995test whether new files appeared in that directory, and report them as
17a79f5b 996
997 LEAKED FILES: scr.tmp 0 my.db
998
If relative, the directory name is taken relative to the current
directory at the moment runtests() was called. Putting an absolute path
into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
17a79f5b 1002
356733da 1003=item C<HARNESS_IGNORE_EXITCODE>
1004
1005Makes harness ignore the exit status of child processes when defined.
1006
1007=item C<HARNESS_NOTTY>
1008
When set to a true value, forces the harness to behave as though STDOUT were
1010not a console. You may need to set this if you don't want harness to
1011output more frequent progress messages using carriage returns. Some
1012consoles may not handle carriage returns properly (which results in a
1013somewhat messy output).
1014
b82fa0b7 1015=item C<HARNESS_PERL_SWITCHES>
37ce32a7 1016
1017Its value will be prepended to the switches used to invoke perl on
b82fa0b7 1018each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
37ce32a7 1019run all tests with all warnings enabled.
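
For example, a driver script might set it in the environment before
calling runtests() (it can just as well be exported from the shell):

    $ENV{HARNESS_PERL_SWITCHES} = '-W';
    runtests(@test_files);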
1020
356733da 1021=item C<HARNESS_VERBOSE>
37ce32a7 1022
356733da 1023If true, Test::Harness will output the verbose results of running
1024its tests. Setting $Test::Harness::verbose will override this.
37ce32a7 1025
1026=back
0a931e4a 1027
b82fa0b7 1028=head1 EXAMPLE
1029
Here's how Test::Harness tests itself:
1031
1032 $ cd ~/src/devel/Test-Harness
1033 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1034 $verbose=0; runtests @ARGV;' t/*.t
1035 Using /home/schwern/src/devel/Test-Harness/blib
1036 t/base..............ok
1037 t/nonumbers.........ok
1038 t/ok................ok
1039 t/test-harness......ok
1040 All tests successful.
1041 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
f19ae7a7 1042
cb1a09d0 1043=head1 SEE ALSO
1044
b82fa0b7 1045L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1046the underlying timing routines, L<Devel::CoreStack> to generate core
1047dumps from failed tests and L<Devel::Cover> for test coverage
1048analysis.
c07a80fd 1049
1050=head1 AUTHORS
1051
1052Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
b876d4a6 1054with perl distributions for ages. Numerous anonymous contributors
b82fa0b7 1055exist. Andreas Koenig held the torch for many years.
1056
1057Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1058
1059=head1 TODO
1060
Provide a way of running tests quietly (i.e. no printing) for automated
1062validation of tests. This will probably take the form of a version
1063of runtests() which rather than printing its output returns raw data
356733da 1064on the state of the tests. (Partially done in Test::Harness::Straps)
b82fa0b7 1065
1066Fix HARNESS_COMPILE_TEST without breaking its core usage.
1067
1068Figure a way to report test names in the failure summary.
37ce32a7 1069
b82fa0b7 1070Rework the test summary so long test names are not truncated as badly.
308957f5 1071(Partially done with new skip test styles)
b82fa0b7 1072
b82fa0b7 1073Deal with VMS's "not \nok 4\n" mistake.
1074
1075Add option for coverage analysis.
1076
1077=for _private
1078Keeping whittling away at _run_all_tests()
1079
1080=for _private
1081Clean up how the summary is printed. Get rid of those damned formats.
cb1a09d0 1082
1083=head1 BUGS
1084
356733da 1085HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
b82fa0b7 1086directory.
1087
cb1a09d0 1088=cut