# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.33 2002/05/29 23:02:48 schwern Exp $

package Test::Harness;

require 5.004;
use Test::Harness::Straps;
use Test::Harness::Assert;
use Exporter;
use Benchmark;
use Config;
use strict;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches $ML
            @ISA @EXPORT @EXPORT_OK
           );

# Backwards compatibility for exportable variable names.
*verbose  = \$Verbose;
*switches = \$Switches;

$Have_Devel_Corestack = 0;

$VERSION = '2.24';

$ENV{HARNESS_ACTIVE} = 1;

END {
    # For VMS.
    delete $ENV{HARNESS_ACTIVE};
}

# Some experimental versions of OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

my $Strap = Test::Harness::Straps->new;

@ISA = ('Exporter');
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose  = $ENV{HARNESS_VERBOSE} || 0;
$Switches = "-w";
$Columns  = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;             # Some shells have trouble with a full line of text.


=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

    use Test::Harness;

    runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts named as
arguments and checks standard output for the expected C<"ok N"> strings.

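
For example, a minimal test script following this protocol might look
like the sketch below (the two checks are purely illustrative):

    #!perl
    print "1..2\n";
    print 1 + 1 == 2 ? "ok 1\n" : "not ok 1\n";
    print 2 * 3 == 6 ? "ok 2\n" : "not ok 2\n";
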
After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.

=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

=over 4

=item B<'1..M'>

This header tells how many tests there will be.  For example, C<1..10>
means you plan on running 10 tests.  This is a safeguard in case your
test dies quietly in the middle of its run.

It should be the first non-comment line output by your test program.

In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted for the 1..M
header to appear as the B<last> line output by your test (again, it
can be followed by further comments).

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.


=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored by the
harness and passed straight through, so it will be seen by the user.
Lines written to standard output containing C</^(not\s+)?ok\b/> are
interpreted as feedback for runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.


=item B<test numbers>

Test::Harness normally expects the 'ok' or 'not ok' to be followed by
a test number.  It is tolerated if the test numbers are omitted; in
this case Test::Harness temporarily maintains its own counter until
the script supplies test numbers again.  So the following test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay

=item B<test names>

Anything after the test number but before the # is considered to be
the name of the test.

    ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.

=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

    ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include an explanation in a C<1..0> line emitted if
the test script is skipped completely:

    1..0 # Skipped: no leverage found

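
A test script can also skip itself entirely when a prerequisite is
missing.  A minimal sketch (C<Flogiston::Gauge> is a made-up module
name standing in for whatever your tests actually require):

    unless (eval { require Flogiston::Gauge }) {
        print "1..0 # Skipped: Flogiston::Gauge not installed\n";
        exit 0;
    }
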
=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
afterwards is the thing that has to be done before this test will
succeed.

    not ok 13 # TODO harness the power of the atom

=begin _deprecated

Alternatively, you can specify a list of what tests are todo as part
of the test header.

    1..23 todo 5 12 23

This only works if the header appears at the beginning of the test.

This style is B<deprecated>.

=end _deprecated

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list.  They are
B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.  This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately.  In that case the test script prints the magic words

    Bail out!

to standard output.  Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.

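
For example, a test script might bail out when a module it cannot work
without is absent.  A minimal sketch (the use of C<DBI> and the message
text are illustrative):

    print "1..4\n";
    unless (eval { require DBI }) {
        print "Bail out! DBI is required for the remaining tests.\n";
        exit 0;
    }
    # ... the four real tests would follow here.
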
=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

    ok 1
    # Life is good, the sun is shining, RAM is cheap.
    not ok 2
    # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output is silently ignored by Test::Harness, B<BUT WE PLAN
TO CHANGE THIS!>  If you wish to place additional output in your test
script, please use a comment.

=back


=head2 Taint mode

Test::Harness will honor the C<-T> in the #! line on your test files.  So
if you begin a test with:

    #!perl -T

the test will be run with taint mode on.


=head2 Configuration variables

These variables can be used to configure the behavior of
Test::Harness.  They are exported on request.

=over 4

=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without altering the behavior otherwise.

=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set the perl command line options used for running the test
script(s).  The default value is C<-w>.  A short example follows this
list.

=back

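
For example, to run a test suite verbosely with warnings and taint
checking enabled, a small driver script could look like this sketch
(the C<t/*.t> glob is illustrative):

    use Test::Harness qw(runtests $verbose $switches);

    $verbose  = 1;        # echo each test script's output
    $switches = "-w -T";  # perl switches used to run every test
    runtests(glob 't/*.t');
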
=head2 Failure

It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    t/waterloo..........dubious
            Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
            Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test, i.e. the raw C<$?> I<(umm, I need a
better explanation here)>.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back


=head2 Functions

Test::Harness currently has only one function; here it is.

=over 4

=item B<runtests>

    my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok, false otherwise.

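
A small driver script might call it like this (a sketch; the test file
names are illustrative):

    use Test::Harness;

    # runtests() prints per-test results plus a summary report and
    # returns true only if every script passed.
    my $ok = runtests('t/basic.t', 't/regression.t');
    print "Everything passed\n" if $ok;
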
=for _private
This is just _run_all_tests() plus _show_results()

=cut

sub runtests {
    my(@tests) = @_;

    local ($\, $,);

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});

    return $ok;
}

=begin _private

=item B<_all_ok>

    my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

=cut

sub _all_ok {
    my($tot) = shift;

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
}

=item B<_globdir>

    my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=cut

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}

=item B<_run_all_tests>

    my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report).  $total is a hash ref summary of all the tests run.  Its keys
and values are these:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script, each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string).

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.

=cut

#'#
sub _run_all_tests {
    my(@tests) = @_;
    local($|) = 1;
    my(%failedtests);

    # Test-wide totals.
    my(%tot) = (
                bonus        => 0,
                max          => 0,
                ok           => 0,
                files        => 0,
                bad          => 0,
                good         => 0,
                tests        => scalar @tests,
                sub_skipped  => 0,
                todo         => 0,
                skipped      => 0,
                bench        => 0,
               );

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {

        my($leader, $ml) = _mk_leader($tfile, $width);
        local $ML = $ml;
        print $leader;

        $tot{files}++;

        $Strap->{_seen_header} = 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn "$Strap->{error}\n";  next };

        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                     1..@{$results{details}};
        my %test = (
                    ok          => $results{ok},
                    'next'      => $Strap->{'next'},
                    max         => $results{max},
                    failed      => \@failed,
                    bonus       => $results{bonus},
                    skipped     => $results{skip},
                    skip_reason => $Strap->{_skip_reason},
                    skip_all    => $Strap->{skip_all},
                    ml          => $ml,
                   );

        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};

        if ($wstatus) {
            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                   $estatus, $wstatus);
            $failedtests{$tfile}{name} = $tfile;
        }
        elsif ($results{passing}) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                    if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                    if $test{bonus};
                print "$test{ml}ok\n        ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (length $test{skip_all}) {
                print "skipped\n        all skipped: $test{skip_all}\n";
                $tot{skipped}++;
            } else {
                print "skipped\n        all skipped: no reason given\n";
                $tot{skipped}++;
            }
            $tot{good}++;
        }
        else {
            if ($test{max}) {
                if ($test{'next'} <= $test{max}) {
                    push @{$test{failed}}, $test{'next'}..$test{max};
                }
                if (@{$test{failed}}) {
                    my ($txt, $canon) = canonfailed($test{max},$test{skipped},
                                                    @{$test{failed}});
                    print "$test{ml}$txt";
                    $failedtests{$tfile} = { canon   => $canon,
                                             max     => $test{max},
                                             failed  => scalar @{$test{failed}},
                                             name    => $tfile,
                                             percent => 100*(scalar @{$test{failed}})/$test{max},
                                             estat   => '',
                                             wstat   => '',
                                           };
                } else {
                    print "Don't know which tests failed: got $test{ok} ok, ".
                          "expected $test{max}\n";
                    $failedtests{$tfile} = { canon   => '??',
                                             max     => $test{max},
                                             failed  => '??',
                                             name    => $tfile,
                                             percent => undef,
                                             estat   => '',
                                             wstat   => '',
                                           };
                }
                $tot{bad}++;
            } elsif ($test{'next'} == 0) {
                print "FAILED before any test output arrived\n";
                $tot{bad}++;
                $failedtests{$tfile} = { canon   => '??',
                                         max     => '??',
                                         failed  => '??',
                                         name    => $tfile,
                                         percent => undef,
                                         estat   => '',
                                         wstat   => '',
                                       };
            }
        }

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }
    $tot{bench} = timediff(new Benchmark, $t_start);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
}

=item B<_mk_leader>

    my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such).  $ml may be empty if Test::Harness doesn't think you're
on a TTY.

The $width is the width of the "yada/blah.." string.

=cut

sub _mk_leader {
    my($te, $width) = @_;
    chomp($te);
    $te =~ s/\.\w+$/./;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}

=item B<_leader_width>

    my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test name.

=cut

sub _leader_width {
    my $maxlen = 0;
    my $maxsuflen = 0;
    foreach (@_) {
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $len    = length;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
}


sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if (_all_ok($tot)) {
        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}){
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                              $tot->{max} - $tot->{ok}, $tot->{max},
                              $percent_ok;

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }
        if ($tot->{bad}) {
            $bonusmsg =~ s/^,\s*//;
            print "$bonusmsg.\n" if $bonusmsg;
            die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
                "$subpct\n";
        }
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}


my %Handlers = ();
$Strap->{callback} = sub {
    my($self, $line, $type, $totals) = @_;
    print $line if $Verbose;

    my $meth = $Handlers{$type};
    $meth->($self, $line, $type, $totals) if $meth;
};


$Handlers{header} = sub {
    my($self, $line, $type, $totals) = @_;

    warn "Test header seen more than once!\n" if $self->{_seen_header};

    $self->{_seen_header}++;

    warn "1..M can only appear at the beginning or end of tests\n"
      if $totals->{seen} &&
         $totals->{max} < $totals->{seen};
};

$Handlers{test} = sub {
    my($self, $line, $type, $totals) = @_;

    my $curr = $totals->{seen};
    my $next = $self->{'next'};
    my $max  = $totals->{max};
    my $detail = $totals->{details}[-1];

    if( $detail->{ok} ) {
        _print_ml("ok $curr/$max");

        if( $detail->{type} eq 'skip' ) {
            $self->{_skip_reason} = $detail->{reason}
              unless defined $self->{_skip_reason};
            $self->{_skip_reason} = 'various reasons'
              if $self->{_skip_reason} ne $detail->{reason};
        }
    }
    else {
        _print_ml("NOK $curr");
    }

    if( $curr > $next ) {
        print "Test output counter mismatch [test $curr]\n";
    }
    elsif( $curr < $next ) {
        print "Confused test output: test $curr answered after ".
              "test ", $next - 1, "\n";
    }

};

$Handlers{bailout} = sub {
    my($self, $line, $type, $totals) = @_;

    die "FAILED--Further testing stopped" .
      ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
};


sub _print_ml {
    print join '', $ML, @_ if $ML;
}


sub _bonusmsg {
    my($tot) = @_;

    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
               " UNEXPECTEDLY SUCCEEDED)")
        if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
                     . " skipped";
    }

    return $bonusmsg;
}

# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus,$wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    $tot->{bad}++;

    if ($test->{max}) {
        if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
            print "\tafter all the subtests completed successfully\n";
            $percent = 0;
            $failed = 0;        # But we do not set $canon!
        }
        else {
            push @{$test->{failed}}, $test->{'next'}..$test->{max};
            $failed = @{$test->{failed}};
            (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
            $percent = 100*(scalar @{$test->{failed}})/$test->{max};
            print "DIED. ",$txt;
        }
    }

    return { canon => $canon,  max => $test->{max} || '??',
             failed => $failed,
             percent => $percent,
             estat => $estatus, wstat => $wstatus,
           };
}


sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)
                  . $middle_str
                  . $list_str . "\n"
                  . "-" x $Columns
                  . "\n.\n";

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . "  @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "\n}\n"
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'
              . "\n.\n";

    eval $fmt_top;
    die $@ if $@;
    eval $fmt;
    die $@ if $@;

    return($fmt_top, $fmt);
}

{
    my $tried_devel_corestack;

    sub corestatus {
        my($st) = @_;

        eval {
            local $^W = 0;  # *.ph files are often *very* noisy
            require 'wait.ph'
        };
        return if $@;
        my $did_core = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;

        eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
          unless $tried_devel_corestack++;

        return $did_core;
    }
}

sub canonfailed ($@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;
    my @result = ();
    my @canon = ();
    my $min;
    my $last = $min = shift @failed;
    my $canon;
    if (@failed) {
        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                if ($min == $last) {
                    push @canon, $last;
                } else {
                    push @canon, "$min-$last";
                }
                $min = $_;
            }
            $last = $_;
        }
        local $" = ", ";
        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;
    } else {
        push @result, "FAILED test $last\n";
        $canon = $last;
    }

    push @result, "\tFailed $failed/$max tests, ";
    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    my $ender = 's' x ($skipped > 1);
    my $good = $max - $failed - $skipped;
    my $goodper = sprintf("%.2f",100*($good/$max));
    push @result, " (less $skipped skipped test$ender: $good okay, ".
                  "$goodper%)"
        if $skipped;
    push @result, "\n";
    my $txt = join "", @result;
    ($txt, $canon);
}

=end _private

=back

=cut


1;
__END__


=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.


=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance
are printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back

=head1 ENVIRONMENT

=over 4

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests.  This allows
the tests to determine if they are being executed through the harness
or by any other means.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal.  If it is not
set then it will default to C<COLUMNS>.  If this is not set, it will
default to 80.  Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_COMPILE_TEST>

When true, it will make the harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, the harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called.  Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more predictable
results.

=item C<HARNESS_IGNORE_EXITCODE>

When defined, makes the harness ignore the exit status of child
processes.

=item C<HARNESS_NOTTY>

When set to a true value, forces the harness to behave as though STDOUT
were not a console.  You may need to set this if you don't want the
harness to output more frequent progress messages using carriage
returns.  Some consoles may not handle carriage returns properly (which
results in a somewhat messy output).

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test.  For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running
its tests.  Setting $Test::Harness::verbose will override this.  See
the sketch after this list for one way to set these variables from
Perl.

=back

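
Several of these variables (for example C<HARNESS_VERBOSE> and
C<HARNESS_FILELEAK_IN_DIR>) are read when Test::Harness is loaded, so
they are usually set in the shell before the harness starts.  They can
also be set from Perl in a C<BEGIN> block, as in this sketch (the
C<t/tmp> directory and the C<t/*.t> glob are illustrative):

    BEGIN {
        $ENV{HARNESS_VERBOSE}         = 1;        # echo each test's output
        $ENV{HARNESS_FILELEAK_IN_DIR} = 't/tmp';  # watch for stray files
    }
    use Test::Harness;

    runtests(glob 't/*.t');
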
=head1 EXAMPLE

Here's how Test::Harness tests itself:

    $ cd ~/src/devel/Test-Harness
    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
      $verbose=0; runtests @ARGV;' t/*.t
    Using /home/schwern/src/devel/Test-Harness/blib
    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)

=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know.  What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages.  Numerous anonymous contributors
exist.  Andreas Koenig held the torch for many years.

Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>

=head1 TODO

Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests.  This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests.  (Partially done in Test::Harness::Straps)

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure out a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

=for _private
Keep whittling away at _run_all_tests()

=for _private
Clean up how the summary is printed.  Get rid of those damned formats.

=head1 BUGS

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

=cut