# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.47 2003/04/24 19:33:05 andy Exp $

package Test::Harness;

require 5.004;
use Test::Harness::Straps;
use Test::Harness::Assert;
use Exporter;
use Benchmark;
use Config;
use strict;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches $ML $Strap
            @ISA @EXPORT @EXPORT_OK $Last_ML_Print
           );

# Backwards compatibility for exportable variable names.
*verbose  = *Verbose;
*switches = *Switches;

$Have_Devel_Corestack = 0;

$VERSION = '2.28';

$ENV{HARNESS_ACTIVE} = 1;

END {
    # For VMS.
    delete $ENV{HARNESS_ACTIVE};
}

# Some experimental versions of OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

$Strap = Test::Harness::Straps->new;

@ISA       = ('Exporter');
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose  = $ENV{HARNESS_VERBOSE} || 0;
$Switches = "-w";
$Columns  = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;       # Some shells have trouble with a full line of text.


=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

  use Test::Harness;

  runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M"> with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts named as
arguments and checks standard output for the expected C<"ok N">
strings.
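For example, a minimal passing test script would announce three tests
and then report each one:

  1..3
  ok 1
  ok 2
  ok 3

A C<not ok> on any of those result lines would mark that test as
failed.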
77
78After all tests have been performed, runtests() prints some
79performance statistics that are computed by the Benchmark module.
80
81=head2 The test script output
82
83The following explains how Test::Harness interprets the output of your
84test program.
85
86=over 4
87
88=item B<'1..M'>
89
356733da 90This header tells how many tests there will be. For example, C<1..10>
91means you plan on running 10 tests. This is a safeguard in case your
92test dies quietly in the middle of its run.
93
94It should be the first non-comment line output by your test program.
b82fa0b7 95
356733da 96In certain instances, you may not know how many tests you will
97ultimately be running. In this case, it is permitted for the 1..M
98header to appear as the B<last> line output by your test (again, it
99can be followed by further comments).
b82fa0b7 100
101Under B<no> circumstances should 1..M appear in the middle of your
102output or more than once.
103
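For example, a test that only discovers its test count while running
may emit the plan after the results:

  ok 1
  ok 2
  ok 3
  1..3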

=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored and
bypassed, and will thus be seen by the user.  Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.

=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  It is tolerated if the test numbers after 'ok' are omitted.
In this case Test::Harness temporarily maintains its own counter until
the script supplies test numbers again.  So the following test script

  print <<END;
  1..6
  not ok
  ok
  not ok
  ok
  ok
  END

will generate

  FAILED tests 1, 3, 6
  Failed 3/6 tests, 50.00% okay

=item B<test names>

Anything after the test number but before the # is considered to be
the name of the test.

  ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.

b82fa0b7 145
146=item B<Skipping tests>
147
148If the standard output line contains the substring C< # Skip> (with
149variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
150counted as a skipped test. If the whole testscript succeeds, the
151count of skipped tests is included in the generated output.
152C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
356733da 153for skipping.
b82fa0b7 154
155 ok 23 # skip Insufficient flogiston pressure.
156
157Similarly, one can include a similar explanation in a C<1..0> line
158emitted if the test script is skipped completely:
159
160 1..0 # Skipped: no leverage found
161
162=item B<Todo tests>
163
164If the standard output line contains the substring C< # TODO> after
165C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
166afterwards is the thing that has to be done before this test will
167succeed.
168
169 not ok 13 # TODO harness the power of the atom
170
13287dd5 171=begin _deprecated
172
173Alternatively, you can specify a list of what tests are todo as part
174of the test header.
175
176 1..23 todo 5 12 23
177
178This only works if the header appears at the beginning of the test.
179
180This style is B<deprecated>.
181
182=end _deprecated
183
b82fa0b7 184These tests represent a feature to be implemented or a bug to be fixed
185and act as something of an executable "thing to do" list. They are
186B<not> expected to succeed. Should a todo test begin succeeding,
187Test::Harness will report it as a bonus. This indicates that whatever
188you were supposed to do has been done and you should promote this to a
189normal test.
190
191=item B<Bail out!>
192
193As an emergency measure, a test script can decide that further tests
194are useless (e.g. missing dependencies) and testing should stop
195immediately. In that case the test script prints the magic words
196
197 Bail out!
198
199to standard output. Any message after these words will be displayed by
200C<Test::Harness> as the reason why testing is stopped.
201
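For example, printing

  Bail out! Couldn't connect to the test database.

stops the harness immediately, and the text after C<Bail out!> is
reported as the reason.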
=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

  ok 1
  # Life is good, the sun is shining, RAM is cheap.
  not ok 2
  # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output Test::Harness sees will be silently ignored B<BUT WE
PLAN TO CHANGE THIS!>  If you wish to place additional output in your
test script, please use a comment.

=back


=head2 Taint mode

Test::Harness will honor the C<-T> in the #! line on your test files.  So
if you begin a test with:

  #!perl -T

the test will be run with taint mode on.


=head2 Configuration variables

These variables can be used to configure the behavior of
Test::Harness.  They are exported on request.

=over 4

=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without altering the behavior otherwise.

=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set perl command line options used for running the test
script(s).  The default value is C<-w>.

=back


=head2 Failure

It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  t/waterloo..........dubious
          Test returned status 3 (wstat 768, 0x300)
  DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
          Failed 10/20 tests, 50.00% okay
  Failed Test  Stat Wstat Total Fail  Failed  List of Failed
  -----------------------------------------------------------------------
  t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
  Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test, i.e. the raw C<$?> from waiting on the
child process.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back


=head2 Functions

Test::Harness currently only has one function; here it is.

=over 4

=item B<runtests>

  my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok.  Otherwise it will die() with
one of the messages in the DIAGNOSTICS section.
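Since runtests() die()s on failure, a caller that wants to carry on
after a failed run can wrap the call in eval.  A minimal sketch (the
test file names are hypothetical):

  use Test::Harness qw(&runtests);

  my $all_ok = eval { runtests('t/basic.t', 't/extra.t') };
  if( !defined $all_ok ) {
      # runtests() died; $@ holds one of the DIAGNOSTICS messages.
      print "Test run aborted: $@";
  }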

=for _private

This is just _run_all_tests() plus _show_results()

=cut

sub runtests {
    my(@tests) = @_;

    local ($\, $,);

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});

    return $ok;
}

=begin _private

=item B<_all_ok>

  my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

=cut

sub _all_ok {
    my($tot) = shift;

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
}

=item B<_globdir>

  my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=cut

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}

=item B<_run_all_tests>

  my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report).  $total is a hash ref summary of all the tests run.  Its keys
and values are this:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script, each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string).

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.

=cut
#'#
sub _run_all_tests {
    my(@tests) = @_;
    local($|) = 1;
    my(%failedtests);

    # Test-wide totals.
    my(%tot) = (
                bonus    => 0,
                max      => 0,
                ok       => 0,
                files    => 0,
                bad      => 0,
                good     => 0,
                tests    => scalar @tests,
                sub_skipped  => 0,
                todo     => 0,
                skipped  => 0,
                bench    => 0,
               );

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        $Last_ML_Print = 0;  # so each test prints at least once
        my($leader, $ml) = _mk_leader($tfile, $width);
        local $ML = $ml;
        print $leader;

        $tot{files}++;

        $Strap->{_seen_header} = 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn "$Strap->{error}\n";  next };

        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                     1..@{$results{details}};
        my %test = (
                    ok          => $results{ok},
                    'next'      => $Strap->{'next'},
                    max         => $results{max},
                    failed      => \@failed,
                    bonus       => $results{bonus},
                    skipped     => $results{skip},
                    skip_reason => $results{skip_reason},
                    skip_all    => $Strap->{skip_all},
                    ml          => $ml,
                   );

        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};

        if ($results{passing}) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                    if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                    if $test{bonus};
                print "$test{ml}ok\n        ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_all} and length $test{skip_all}) {
                print "skipped\n        all skipped: $test{skip_all}\n";
                $tot{skipped}++;
            } else {
                print "skipped\n        all skipped: no reason given\n";
                $tot{skipped}++;
            }
            $tot{good}++;
        }
        else {
            # List unrun tests as failures.
            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};
            }
            # List overruns as failures.
            else {
                my $details = $results{details};
                foreach my $overrun ($test{max}+1..@$details) {
                    next unless ref $details->[$overrun-1];
                    push @{$test{failed}}, $overrun
                }
            }

            if ($wstatus) {
                $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                      $estatus, $wstatus);
                $failedtests{$tfile}{name} = $tfile;
            }
            elsif($results{seen}) {
                if (@{$test{failed}} and $test{max}) {
                    my ($txt, $canon) = canonfailed($test{max},$test{skipped},
                                                    @{$test{failed}});
                    print "$test{ml}$txt";
                    $failedtests{$tfile} = { canon   => $canon,
                                             max     => $test{max},
                                             failed  => scalar @{$test{failed}},
                                             name    => $tfile,
                                             percent => 100*(scalar @{$test{failed}})/$test{max},
                                             estat   => '',
                                             wstat   => '',
                                           };
                } else {
                    print "Don't know which tests failed: got $test{ok} ok, ".
                          "expected $test{max}\n";
                    $failedtests{$tfile} = { canon   => '??',
                                             max     => $test{max},
                                             failed  => '??',
                                             name    => $tfile,
                                             percent => undef,
                                             estat   => '',
                                             wstat   => '',
                                           };
                }
                $tot{bad}++;
            } else {
                print "FAILED before any test output arrived\n";
                $tot{bad}++;
                $failedtests{$tfile} = { canon   => '??',
                                         max     => '??',
                                         failed  => '??',
                                         name    => $tfile,
                                         percent => undef,
                                         estat   => '',
                                         wstat   => '',
                                       };
            }
        }

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }
    $tot{bench} = timediff(new Benchmark, $t_start);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
}

=item B<_mk_leader>

  my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such).  $ml may be empty if Test::Harness doesn't think you're
on a TTY.

The $width is the width of the "yada/blah.." string.

=cut

sub _mk_leader {
    my($te, $width) = @_;
    chomp($te);
    $te =~ s/\.\w+$/./;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}

=item B<_leader_width>

  my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test name.

=cut

sub _leader_width {
    my $maxlen = 0;
    my $maxsuflen = 0;
    foreach (@_) {
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $len    = length;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
}


sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if (_all_ok($tot)) {
        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}){
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                              $tot->{max} - $tot->{ok}, $tot->{max},
                              $percent_ok;

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }
        if ($tot->{bad}) {
            $bonusmsg =~ s/^,\s*//;
            print "$bonusmsg.\n" if $bonusmsg;
            die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
                "$subpct\n";
        }
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}


my %Handlers = ();
$Strap->{callback} = sub {
    my($self, $line, $type, $totals) = @_;
    print $line if $Verbose;

    my $meth = $Handlers{$type};
    $meth->($self, $line, $type, $totals) if $meth;
};


$Handlers{header} = sub {
    my($self, $line, $type, $totals) = @_;

    warn "Test header seen more than once!\n" if $self->{_seen_header};

    $self->{_seen_header}++;

    warn "1..M can only appear at the beginning or end of tests\n"
      if $totals->{seen} &&
         $totals->{max}  < $totals->{seen};
};

$Handlers{test} = sub {
    my($self, $line, $type, $totals) = @_;

    my $curr = $totals->{seen};
    my $next = $self->{'next'};
    my $max  = $totals->{max};
    my $detail = $totals->{details}[-1];

    if( $detail->{ok} ) {
        _print_ml_less("ok $curr/$max");

        if( $detail->{type} eq 'skip' ) {
            $totals->{skip_reason} = $detail->{reason}
              unless defined $totals->{skip_reason};
            $totals->{skip_reason} = 'various reasons'
              if $totals->{skip_reason} ne $detail->{reason};
        }
    }
    else {
        _print_ml("NOK $curr");
    }

    if( $curr > $next ) {
        print "Test output counter mismatch [test $curr]\n";
    }
    elsif( $curr < $next ) {
        print "Confused test output: test $curr answered after ".
              "test ", $next - 1, "\n";
    }

};

$Handlers{bailout} = sub {
    my($self, $line, $type, $totals) = @_;

    die "FAILED--Further testing stopped" .
      ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
};


sub _print_ml {
    print join '', $ML, @_ if $ML;
}


# For slow connections, we save lots of bandwidth by printing only once
# per second.
sub _print_ml_less {
    if( $Last_ML_Print != time ) {
        _print_ml(@_);
        $Last_ML_Print = time;
    }
}

sub _bonusmsg {
    my($tot) = @_;

    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
        if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
                     . " skipped";
    }

    return $bonusmsg;
}

# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus,$wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    $tot->{bad}++;

    if ($test->{max}) {
        if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
            print "\tafter all the subtests completed successfully\n";
            $percent = 0;
            $failed = 0;        # But we do not set $canon!
        }
        else {
            push @{$test->{failed}}, $test->{'next'}..$test->{max};
            $failed = @{$test->{failed}};
            (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
            $percent = 100*(scalar @{$test->{failed}})/$test->{max};
            print "DIED. ",$txt;
        }
    }

    return { canon   => $canon,  max => $test->{max} || '??',
             failed  => $failed,
             percent => $percent,
             estat   => $estatus, wstat => $wstatus,
           };
}


sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)
                  . $middle_str
                  . $list_str . "\n"
                  . "-" x $Columns
                  . "\n.\n";

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . " @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "\n}\n"
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'
              . "\n.\n";

    eval $fmt_top;
    die $@ if $@;
    eval $fmt;
    die $@ if $@;

    return($fmt_top, $fmt);
}

{
    my $tried_devel_corestack;

    sub corestatus {
        my($st) = @_;

        my $did_core;
        eval { # we may not have a WCOREDUMP
            local $^W = 0;  # *.ph files are often *very* noisy
            require 'wait.ph';
            $did_core = WCOREDUMP($st);
        };
        if( $@ ) {
            $did_core = $st & 0200;
        }

        eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
          unless $tried_devel_corestack++;

        return $did_core;
    }
}

sub canonfailed ($@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;
    my @result = ();
    my @canon = ();
    my $min;
    my $last = $min = shift @failed;
    my $canon;
    if (@failed) {
        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                if ($min == $last) {
                    push @canon, $last;
                } else {
                    push @canon, "$min-$last";
                }
                $min = $_;
            }
            $last = $_;
        }
        local $" = ", ";
        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;
    } else {
        push @result, "FAILED test $last\n";
        $canon = $last;
    }

    push @result, "\tFailed $failed/$max tests, ";
    if ($max) {
        push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    } else {
        push @result, "?% okay";
    }
    my $ender = 's' x ($skipped > 1);
    my $good = $max - $failed - $skipped;
    if ($skipped) {
        my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";
        if ($max) {
            my $goodper = sprintf("%.2f",100*($good/$max));
            $skipmsg .= "$goodper%)";
        } else {
            $skipmsg .= "?%)";
        }
        push @result, $skipmsg;
    }
    push @result, "\n";
    my $txt = join "", @result;
    ($txt, $canon);
}
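# For example (a sketch of the behavior described above):
#
#   my($txt, $canon) = canonfailed(20, 0, 1, 3, 15, 16, 17, 18, 19, 20);
#
# gives $canon eq '1 3 15-20' and $txt starting with
# "FAILED tests 1, 3, 15-20" -- runs of consecutive failures are
# collapsed into ranges.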

=end _private

=back

=cut


1;
__END__


=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.


=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back

=head1 ENVIRONMENT

=over 4

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests.  This allows
the tests to determine if they are being executed through the harness
or by any other means.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal.  If it is not
set then it will default to C<COLUMNS>.  If this is not set, it will
default to 80.  Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_COMPILE_TEST>

When true it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

  LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is taken with respect to the current
directory at the moment runtests() was called.  Putting an absolute
path into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces it to behave as though STDOUT were
not a console.  You may need to set this if you don't want harness to
output more frequent progress messages using carriage returns.  Some
consoles may not handle carriage returns properly (which results in a
somewhat messy output).

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test.  For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running
its tests.  Setting $Test::Harness::verbose will override this.

=back

=head1 EXAMPLE

Here's how Test::Harness tests itself:

  $ cd ~/src/devel/Test-Harness
  $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
  Using /home/schwern/src/devel/Test-Harness/blib
  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  All tests successful.
  Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)

=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know.  What we know for
sure is, that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages.  Numerous anonymous contributors
exist.  Andreas Koenig held the torch for many years.

Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>

=head1 LICENSE

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See F<http://www.perl.com/perl/misc/Artistic.html>


=head1 TODO

Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests.  This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests.  (Partially done in Test::Harness::Straps)

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

=for _private
Keep whittling away at _run_all_tests()

=for _private
Clean up how the summary is printed.  Get rid of those damned formats.

=head1 BUGS

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

=cut