# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.47 2003/04/24 19:33:05 andy Exp $

package Test::Harness;

require 5.004;
use Test::Harness::Straps;
use Test::Harness::Assert;
use Exporter;
use Benchmark;
use Config;
use strict;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches $ML $Strap
            @ISA @EXPORT @EXPORT_OK
           );

# Backwards compatibility for exportable variable names.
*verbose  = \$Verbose;
*switches = \$Switches;

$Have_Devel_Corestack = 0;

$VERSION = '2.28';

$ENV{HARNESS_ACTIVE} = 1;

END {
    # For VMS.
    delete $ENV{HARNESS_ACTIVE};
}

# Some experimental versions of the OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

$Strap = Test::Harness::Straps->new;

@ISA       = ('Exporter');
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose  = $ENV{HARNESS_VERBOSE} || 0;
$Switches = "-w";
$Columns  = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;                # Some shells have trouble with a full line of text.


=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

  use Test::Harness;

  runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple. Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects. However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each test,
where C<N> is an increasing sequence of integers. The first line
output by a standard test script is C<"1..M"> with C<M> being the
number of tests that should be run within the test
script. Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.

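As an illustration only (the file name F<t/example.t> and its contents
are made up for this document), a minimal test script following this
protocol, and a driver that runs it, might look like:

    # t/example.t -- a hypothetical, minimal test script
    print "1..2\n";
    print "ok 1\n";
    print((2 + 2 == 4 ? "ok" : "not ok"), " 2\n");

    # a driver that runs it through the harness
    use Test::Harness;
    runtests('t/example.t');
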
=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

=over 4

=item B<'1..M'>

This header tells how many tests there will be. For example, C<1..10>
means you plan on running 10 tests. This is a safeguard in case your
test dies quietly in the middle of its run.

It should be the first non-comment line output by your test program.

In certain instances, you may not know how many tests you will
ultimately be running. In this case, it is permitted for the 1..M
header to appear as the B<last> line output by your test (again, it
can be followed by further comments).

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.


=item B<'ok', 'not ok'. Ok?>

Any output from the test script to standard error is ignored and
bypassed, and will thus be seen by the user. Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests(). All other lines are discarded.

C</^not ok/> indicates a failed test. C</^ok/> is a successful test.


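As an illustrative sketch only (the real parsing lives in
Test::Harness::Straps, and C<$test_output> here is an assumed
filehandle), the classification amounts to:

    while (my $line = <$test_output>) {
        next unless $line =~ /^(not\s+)?ok\b/;    # everything else is discarded
        my $passed = !defined $1;
        print $passed ? "pass: $line" : "fail: $line";
    }
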
=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number. It is tolerated if the test numbers after 'ok' are
omitted. In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again. So the following
test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay

=item B<test names>

Anything after the test number but before the # is considered to be
the name of the test.

    ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.

=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test. If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

    ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include a similar explanation in a C<1..0> line
emitted if the test script is skipped completely:

    1..0 # Skipped: no leverage found

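For illustration (the C<$^O> check is just an example condition, not a
recommendation), a script can decide at run time to skip a single test
or the whole file:

    # skip the whole file:
    if ($^O eq 'MSWin32') {
        print "1..0 # Skipped: not relevant on this platform\n";
        exit 0;
    }

    # or skip a single test:
    print "1..2\n";
    print "ok 1\n";
    print "ok 2 # skip Insufficient flogiston pressure.\n";
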
=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
afterwards is the thing that has to be done before this test will
succeed.

    not ok 13 # TODO harness the power of the atom

=begin _deprecated

Alternatively, you can specify a list of what tests are todo as part
of the test header.

    1..23 todo 5 12 23

This only works if the header appears at the beginning of the test.

This style is B<deprecated>.

=end _deprecated

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list. They are
B<not> expected to succeed. Should a todo test begin succeeding,
Test::Harness will report it as a bonus. This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately. In that case the test script prints the magic words

    Bail out!

to standard output. Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.

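As a sketch (C<Some::Prerequisite> is a made-up module name), a script
might bail out when a dependency is missing:

    unless (eval { require Some::Prerequisite }) {
        print "Bail out! Some::Prerequisite is not installed\n";
        exit 0;
    }
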
=item B<Comments>

Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them.

    ok 1
    # Life is good, the sun is shining, RAM is cheap.
    not ok 2
    # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output Test::Harness sees will be silently ignored B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
test script, please use a comment.

=back


=head2 Taint mode

Test::Harness will honor the C<-T> in the #! line on your test files. So
if you begin a test with:

    #!perl -T

the test will be run with taint mode on.


=head2 Configuration variables.

These variables can be used to configure the behavior of
Test::Harness. They are exported on request.

=over 4

=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without otherwise altering its behavior.

=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set perl command line options used for running the test
script(s). The default value is C<-w>.

=back

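For example, a driver script might import and set both variables
before calling runtests() (a usage sketch; the switch values are
arbitrary):

    use Test::Harness qw(runtests $verbose $switches);

    $verbose  = 1;             # echo each test's output
    $switches = '-w -Ilib';    # passed to perl for every test script
    runtests(@test_files);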

=head2 Failure

It will happen: your tests will fail. After you mop up your ego, you
can begin examining the summary report:

    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    t/waterloo..........dubious
            Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
            Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t. It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test I<umm, I need a better explanation here>.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed. Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back


=head2 Functions

Test::Harness currently only has one function; here it is.

=over 4

=item B<runtests>

    my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok. Otherwise it will die() with
one of the messages in the DIAGNOSTICS section.

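Because runtests() die()s when anything fails, a driver that needs to
carry on afterwards can wrap the call in C<eval>; a small sketch:

    use Test::Harness;

    my $all_ok = eval { runtests(@test_files) };
    unless ($all_ok) {
        warn $@ if $@;    # one of the messages from DIAGNOSTICS
        # react to the failure here
    }
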
=for _private

This is just _run_all_tests() plus _show_results()

=cut

sub runtests {
    my(@tests) = @_;

    local ($\, $,);

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});

    return $ok;
}

=begin _private

=item B<_all_ok>

    my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

=cut

sub _all_ok {
    my($tot) = shift;

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
}

=item B<_globdir>

    my @files = _globdir $dir;

Returns all the files in a directory. This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=cut

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}

=item B<_run_all_tests>

    my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report). $total is a hash ref summary of all the tests run. Its keys
and values are these:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test.

$failed is a hash ref of all the test scripts which failed. Each key
is the name of a test script, each value is another hash representing
how that script failed. Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string).

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy. I'm working on it.

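As a sketch of how the two return values fit together (remember this
is a private function and may change):

    my($tot, $failed) = _run_all_tests(@test_files);
    printf "ran %d test files, %d individual tests\n",
        $tot->{files}, $tot->{max};
    foreach my $script (sort keys %$failed) {
        print "$script failed: $failed->{$script}{canon}\n";
    }
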
=cut

#'#
sub _run_all_tests {
    my(@tests) = @_;
    local($|) = 1;
    my(%failedtests);

    # Test-wide totals.
    my(%tot) = (
        bonus       => 0,
        max         => 0,
        ok          => 0,
        files       => 0,
        bad         => 0,
        good        => 0,
        tests       => scalar @tests,
        sub_skipped => 0,
        todo        => 0,
        skipped     => 0,
        bench       => 0,
    );

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {

        my($leader, $ml) = _mk_leader($tfile, $width);
        local $ML = $ml;
        print $leader;

        $tot{files}++;

        $Strap->{_seen_header} = 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn "$Strap->{error}\n"; next };

        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                          1..@{$results{details}};
        my %test = (
            ok          => $results{ok},
            'next'      => $Strap->{'next'},
            max         => $results{max},
            failed      => \@failed,
            bonus       => $results{bonus},
            skipped     => $results{skip},
            skip_reason => $results{skip_reason},
            skip_all    => $Strap->{skip_all},
            ml          => $ml,
        );

        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};

        if ($results{passing}) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                    if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                    if $test{bonus};
                print "$test{ml}ok\n        ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_all} and length $test{skip_all}) {
                print "skipped\n        all skipped: $test{skip_all}\n";
                $tot{skipped}++;
            } else {
                print "skipped\n        all skipped: no reason given\n";
                $tot{skipped}++;
            }
            $tot{good}++;
        }
        else {
            # List unrun tests as failures.
            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};
            }
            # List overruns as failures.
            else {
                my $details = $results{details};
                foreach my $overrun ($test{max}+1..@$details) {
                    next unless ref $details->[$overrun-1];
                    push @{$test{failed}}, $overrun;
                }
            }

            if ($wstatus) {
                $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                       $estatus, $wstatus);
                $failedtests{$tfile}{name} = $tfile;
            }
            elsif($results{seen}) {
                if (@{$test{failed}}) {
                    my ($txt, $canon) = canonfailed($test{max},$test{skipped},
                                                    @{$test{failed}});
                    print "$test{ml}$txt";
                    $failedtests{$tfile} = { canon   => $canon,
                                             max     => $test{max},
                                             failed  => scalar @{$test{failed}},
                                             name    => $tfile,
                                             percent => 100*(scalar @{$test{failed}})/$test{max},
                                             estat   => '',
                                             wstat   => '',
                                           };
                } else {
                    print "Don't know which tests failed: got $test{ok} ok, ".
                          "expected $test{max}\n";
                    $failedtests{$tfile} = { canon   => '??',
                                             max     => $test{max},
                                             failed  => '??',
                                             name    => $tfile,
                                             percent => undef,
                                             estat   => '',
                                             wstat   => '',
                                           };
                }
                $tot{bad}++;
            } else {
                print "FAILED before any test output arrived\n";
                $tot{bad}++;
                $failedtests{$tfile} = { canon   => '??',
                                         max     => '??',
                                         failed  => '??',
                                         name    => $tfile,
                                         percent => undef,
                                         estat   => '',
                                         wstat   => '',
                                       };
            }
        }

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }
    $tot{bench} = timediff(new Benchmark, $t_start);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
}

=item B<_mk_leader>

    my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such). $ml may be empty if Test::Harness doesn't think you're
on a TTY.

The $width is the width of the "yada/blah.." string.

=cut

sub _mk_leader {
    my($te, $width) = @_;
    chomp($te);
    $te =~ s/\.\w+$/./;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}

=item B<_leader_width>

    my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test name.

=cut

sub _leader_width {
    my $maxlen = 0;
    my $maxsuflen = 0;
    foreach (@_) {
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $len    = length;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
}


sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if (_all_ok($tot)) {
        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}) {
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},
                             $percent_ok;

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }
        if ($tot->{bad}) {
            $bonusmsg =~ s/^,\s*//;
            print "$bonusmsg.\n" if $bonusmsg;
            die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
                "$subpct\n";
        }
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}


my %Handlers = ();
$Strap->{callback} = sub {
    my($self, $line, $type, $totals) = @_;
    print $line if $Verbose;

    my $meth = $Handlers{$type};
    $meth->($self, $line, $type, $totals) if $meth;
};


$Handlers{header} = sub {
    my($self, $line, $type, $totals) = @_;

    warn "Test header seen more than once!\n" if $self->{_seen_header};

    $self->{_seen_header}++;

    warn "1..M can only appear at the beginning or end of tests\n"
      if $totals->{seen} &&
         $totals->{max} < $totals->{seen};
};

$Handlers{test} = sub {
    my($self, $line, $type, $totals) = @_;

    my $curr = $totals->{seen};
    my $next = $self->{'next'};
    my $max  = $totals->{max};
    my $detail = $totals->{details}[-1];

    if( $detail->{ok} ) {
        _print_ml("ok $curr/$max");

        if( $detail->{type} eq 'skip' ) {
            $totals->{skip_reason} = $detail->{reason}
              unless defined $totals->{skip_reason};
            $totals->{skip_reason} = 'various reasons'
              if $totals->{skip_reason} ne $detail->{reason};
        }
    }
    else {
        _print_ml("NOK $curr");
    }

    if( $curr > $next ) {
        print "Test output counter mismatch [test $curr]\n";
    }
    elsif( $curr < $next ) {
        print "Confused test output: test $curr answered after ".
              "test ", $next - 1, "\n";
    }

};

$Handlers{bailout} = sub {
    my($self, $line, $type, $totals) = @_;

    die "FAILED--Further testing stopped" .
      ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
};


sub _print_ml {
    print join '', $ML, @_ if $ML;
}


sub _bonusmsg {
    my($tot) = @_;

    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
      if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
                     . " skipped";
    }

    return $bonusmsg;
}

# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus,$wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    $tot->{bad}++;

    if ($test->{max}) {
        if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
            print "\tafter all the subtests completed successfully\n";
            $percent = 0;
            $failed = 0;        # But we do not set $canon!
        }
        else {
            push @{$test->{failed}}, $test->{'next'}..$test->{max};
            $failed = @{$test->{failed}};
            (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
            $percent = 100*(scalar @{$test->{failed}})/$test->{max};
            print "DIED. ",$txt;
        }
    }

    return { canon => $canon,  max => $test->{max} || '??',
             failed => $failed,
             percent => $percent,
             estat => $estatus, wstat => $wstatus,
           };
}


sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)
                  . $middle_str
                  . $list_str . "\n"
                  . "-" x $Columns
                  . "\n.\n";

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . "  @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "\n}\n"
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'
              . "\n.\n";

    eval $fmt_top;
    die $@ if $@;
    eval $fmt;
    die $@ if $@;

    return($fmt_top, $fmt);
}

{
    my $tried_devel_corestack;

    sub corestatus {
        my($st) = @_;

        my $did_core;
        eval { # we may not have a WCOREDUMP
            local $^W = 0;  # *.ph files are often *very* noisy
            require 'wait.ph';
            $did_core = WCOREDUMP($st);
        };
        if( $@ ) {
            $did_core = $st & 0200;
        }

        eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
          unless $tried_devel_corestack++;

        return $did_core;
    }
}

sub canonfailed ($@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;
    my @result = ();
    my @canon = ();
    my $min;
    my $last = $min = shift @failed;
    my $canon;
    if (@failed) {
        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                if ($min == $last) {
                    push @canon, $last;
                } else {
                    push @canon, "$min-$last";
                }
                $min = $_;
            }
            $last = $_;
        }
        local $" = ", ";
        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;
    } else {
        push @result, "FAILED test $last\n";
        $canon = $last;
    }

    push @result, "\tFailed $failed/$max tests, ";
    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    my $ender = 's' x ($skipped > 1);
    my $good = $max - $failed - $skipped;
    my $goodper = sprintf("%.2f",100*($good/$max));
    push @result, " (less $skipped skipped test$ender: $good okay, ".
                  "$goodper%)"
      if $skipped;
    push @result, "\n";
    my $txt = join "", @result;
    ($txt, $canon);
}

=end _private

=back

=cut


1;
__END__


=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.


=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back

=head1 ENVIRONMENT

=over 4

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests. This allows
the tests to determine if they are being executed through the harness
or by any other means.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal. If it is not
set then it will default to C<COLUMNS>. If this is not set, it will
default to 80. Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_COMPILE_TEST>

When true, harness will attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is taken with respect to the current
directory at the moment runtests() was called. Putting an absolute path
into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.

=item C<HARNESS_IGNORE_EXITCODE>

When defined, makes harness ignore the exit status of child processes.

=item C<HARNESS_NOTTY>

When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console. You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns. Some consoles may not handle carriage returns properly (which
results in a somewhat messy output).

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running
its tests. Setting $Test::Harness::verbose will override this.

A short sketch showing some of these variables in use follows this
list.

=back

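As a usage sketch (the values, and the F<t/tmp> directory, are only
examples), a driver can set these variables before the module is
loaded, since several of them are read at load time:

    BEGIN {
        $ENV{HARNESS_VERBOSE}         = 1;
        $ENV{HARNESS_FILELEAK_IN_DIR} = 't/tmp';    # hypothetical directory
    }
    use Test::Harness;

    runtests(glob 't/*.t');
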
=head1 EXAMPLE

Here's how Test::Harness tests itself:

  $ cd ~/src/devel/Test-Harness
  $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
  Using /home/schwern/src/devel/Test-Harness/blib
  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  All tests successful.
  Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)

=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years.

The current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>

=head1 LICENSE

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See F<http://www.perl.com/perl/misc/Artistic.html>


=head1 TODO

Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests. (Partially done in Test::Harness::Straps)

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure out a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

=for _private
Keep whittling away at _run_all_tests()

=for _private
Clean up how the summary is printed. Get rid of those damned formats.

=head1 BUGS

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

=cut