# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.52 2003/07/17 19:02:48 andy Exp $

package Test::Harness;

require 5.004;
use Test::Harness::Straps;
use Test::Harness::Assert;
use Exporter;
use Benchmark;
use Config;
use strict;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches $ML $Strap
            @ISA @EXPORT @EXPORT_OK $Last_ML_Print
           );

# Backwards compatibility for exportable variable names.
*verbose  = *Verbose;
*switches = *Switches;

$Have_Devel_Corestack = 0;

$VERSION = '2.29';

$ENV{HARNESS_ACTIVE} = 1;

END {
    # For VMS.
    delete $ENV{HARNESS_ACTIVE};
}

# Some experimental versions of OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

my $Ok_Slow = $ENV{HARNESS_OK_SLOW};

$Strap = Test::Harness::Straps->new;

@ISA = ('Exporter');
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose  = $ENV{HARNESS_VERBOSE} || 0;
$Switches = "-w";
$Columns  = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;                # Some shells have trouble with a full line of text.


=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

  use Test::Harness;

  runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple. Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects. However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers. The first line
output by a standard test script is C<"1..M"> with C<M> being the
number of tests that should be run within the test
script. Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

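For example, a minimal test script that follows this protocol might look
like the following (the file name and the checks themselves are purely
illustrative):

  # t/example.t -- a hypothetical test script
  print "1..2\n";                          # plan: 2 tests will run
  print "ok 1\n";                          # test 1 passed
  my $sum_ok = (2 + 2 == 4);
  print $sum_ok ? "ok 2\n" : "not ok 2\n"; # test 2
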
After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.

=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

=over 4

=item B<'1..M'>

This header tells how many tests there will be. For example, C<1..10>
means you plan on running 10 tests. This is a safeguard in case your
test dies quietly in the middle of its run.

It should be the first non-comment line output by your test program.

In certain instances, you may not know how many tests you will
ultimately be running. In this case, it is permitted for the 1..M
header to appear as the B<last> line output by your test (again, it
can be followed by further comments).

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.

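For instance, a script that only learns its test count while running
might emit the plan last; a minimal sketch:

  print "ok 1\n";
  print "ok 2\n";
  print "1..2\n";    # plan printed last, once the count is known
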

=item B<'ok', 'not ok'. Ok?>

Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user. Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests(). All other lines are discarded.

C</^not ok/> indicates a failed test. C</^ok/> is a successful test.


=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number. It is tolerated if the test numbers after 'ok' are omitted;
in this case Test::Harness temporarily maintains its own counter
until the script supplies test numbers again. So the following
test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay

=item B<test names>

Anything after the test number but before the # is considered to be
the name of the test.

  ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.

=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test. If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

  ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include a similar explanation in a C<1..0> line
emitted if the test script is skipped completely:

  1..0 # Skipped: no leverage found

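A test script typically emits such lines conditionally; a minimal
sketch (the environment variable checked here is made up purely for
illustration):

  print "1..2\n";
  print "ok 1\n";
  if ($ENV{FLOGISTON_PRESSURE}) {   # stand-in for a real capability check
      print "ok 2\n";
  } else {
      print "ok 2 # skip Insufficient flogiston pressure.\n";
  }
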
=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
that follows describes what has to be done before this test will
succeed.

  not ok 13 # TODO harness the power of the atom

=begin _deprecated

Alternatively, you can specify a list of what tests are todo as part
of the test header.

  1..23 todo 5 12 23

This only works if the header appears at the beginning of the test.

This style is B<deprecated>.

=end _deprecated

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list. They are
B<not> expected to succeed. Should a todo test begin succeeding,
Test::Harness will report it as a bonus. This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately. In that case the test script prints the magic words

  Bail out!

to standard output. Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.

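A script might bail out when a prerequisite is missing; a minimal
sketch (the DBI requirement is only an example):

  print "1..5\n";
  unless (eval { require DBI }) {
      print "Bail out! DBI is not installed\n";
      exit;
  }
  print "ok 1\n";
  # ... remaining tests ...
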
=item B<Comments>

Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them.

  ok 1
  # Life is good, the sun is shining, RAM is cheap.
  not ok 2
  # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output is silently ignored by Test::Harness, B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
test script, please use a comment.

=back


=head2 Taint mode

Test::Harness will honor the C<-T> in the #! line on your test files. So
if you begin a test with:

    #!perl -T

the test will be run with taint mode on.


=head2 Configuration variables.

These variables can be used to configure the behavior of
Test::Harness. They are exported on request.

=over 4

=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to make runtests() display the standard output of each test script
without otherwise altering its behavior (see the example below).

=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set the perl command line options used for running the test
script(s). The default value is C<-w>.

=back

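A sketch of using both variables from a small driver script (the extra
include path passed via C<-I> is only an example):

  use Test::Harness qw(runtests $verbose $switches);

  $verbose  = 1;            # show each test's output while it runs
  $switches = "-w -Ilib";   # keep warnings on, add an include path
  runtests(glob 't/*.t');
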

=head2 Failure

It will happen: your tests will fail. After you mop up your ego, you
can begin examining the summary report:

  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  t/waterloo..........dubious
          Test returned status 3 (wstat 768, 0x300)
  DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
          Failed 10/20 tests, 50.00% okay
  Failed Test  Stat Wstat Total Fail  Failed  List of Failed
  -----------------------------------------------------------------------
  t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
  Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t. It failed 10 of 20 tests and
exited with a non-zero status, indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test I<umm, I need a better explanation here>.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed. Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back


=head2 Functions

Test::Harness currently has only one function; here it is.

=over 4

=item B<runtests>

  my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok. Otherwise it will die() with
one of the messages in the DIAGNOSTICS section.

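Because runtests() die()s when the run is not clean, a caller that
wants to keep going afterwards can wrap it in an eval; a minimal
sketch:

  use Test::Harness;

  my $ok = eval { runtests(glob 't/*.t') };
  warn "Test run failed: $@" unless $ok;
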
=for _private

This is just _run_all_tests() plus _show_results()

=cut

sub runtests {
    my(@tests) = @_;

    local ($\, $,);

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});

    return $ok;
}

=begin _private

=item B<_all_ok>

  my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

=cut

sub _all_ok {
    my($tot) = shift;

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
}

=item B<_globdir>

  my @files = _globdir $dir;

Returns all the files in a directory. This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=cut

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}

=item B<_run_all_tests>

  my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report). $total is a hash ref summary of all the tests run. Its keys
and values are these:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test.

$failed is a hash ref of all the test scripts which failed. Each key
is the name of a test script, each value is another hash representing
how that script failed. Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string).

Needless to say, $failed should be empty if everything passed.

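As an illustration only, the t/waterloo.t failure from the summary
report shown earlier would appear in %$failed roughly as:

    't/waterloo.t' => {
        name    => 't/waterloo.t',
        estat   => 3,
        wstat   => 768,
        max     => 20,
        failed  => 10,
        percent => 50,
        canon   => '1 3 5 7 9 11 13 15 17 19',
    },
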
B<NOTE> Currently this function is still noisy. I'm working on it.

=cut

#'#
sub _run_all_tests {
    my(@tests) = @_;
    local($|) = 1;
    my(%failedtests);

    # Test-wide totals.
    my(%tot) = (
                bonus    => 0,
                max      => 0,
                ok       => 0,
                files    => 0,
                bad      => 0,
                good     => 0,
                tests    => scalar @tests,
                sub_skipped  => 0,
                todo     => 0,
                skipped  => 0,
                bench    => 0,
               );

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        $Last_ML_Print = 0;  # so each test prints at least once
        my($leader, $ml) = _mk_leader($tfile, $width);
        local $ML = $ml;
        print $leader;

        $tot{files}++;

        $Strap->{_seen_header} = 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn "$Strap->{error}\n";  next };

        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                     1..@{$results{details}};
        my %test = (
                    ok          => $results{ok},
                    'next'      => $Strap->{'next'},
                    max         => $results{max},
                    failed      => \@failed,
                    bonus       => $results{bonus},
                    skipped     => $results{skip},
                    skip_reason => $results{skip_reason},
                    skip_all    => $Strap->{skip_all},
                    ml          => $ml,
                   );

        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};

        if ($results{passing}) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                    if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                    if $test{bonus};
                print "$test{ml}ok\n        ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_all} and length $test{skip_all}) {
                print "skipped\n        all skipped: $test{skip_all}\n";
                $tot{skipped}++;
            } else {
                print "skipped\n        all skipped: no reason given\n";
                $tot{skipped}++;
            }
            $tot{good}++;
        }
        else {
            # List unrun tests as failures.
            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};
            }
            # List overruns as failures.
            else {
                my $details = $results{details};
                foreach my $overrun ($test{max}+1..@$details)
                {
                    next unless ref $details->[$overrun-1];
                    push @{$test{failed}}, $overrun
                }
            }

            if ($wstatus) {
                $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                       $estatus, $wstatus);
                $failedtests{$tfile}{name} = $tfile;
            }
            elsif($results{seen}) {
                if (@{$test{failed}} and $test{max}) {
                    my ($txt, $canon) = canonfailed($test{max},$test{skipped},
                                                    @{$test{failed}});
                    print "$test{ml}$txt";
                    $failedtests{$tfile} = { canon   => $canon,
                                             max     => $test{max},
                                             failed  => scalar @{$test{failed}},
                                             name    => $tfile,
                                             percent => 100*(scalar @{$test{failed}})/$test{max},
                                             estat   => '',
                                             wstat   => '',
                                           };
                } else {
                    print "Don't know which tests failed: got $test{ok} ok, ".
                          "expected $test{max}\n";
                    $failedtests{$tfile} = { canon   => '??',
                                             max     => $test{max},
                                             failed  => '??',
                                             name    => $tfile,
                                             percent => undef,
                                             estat   => '',
                                             wstat   => '',
                                           };
                }
                $tot{bad}++;
            } else {
                print "FAILED before any test output arrived\n";
                $tot{bad}++;
                $failedtests{$tfile} = { canon   => '??',
                                         max     => '??',
                                         failed  => '??',
                                         name    => $tfile,
                                         percent => undef,
                                         estat   => '',
                                         wstat   => '',
                                       };
            }
        }

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }
    $tot{bench} = timediff(new Benchmark, $t_start);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
}

=item B<_mk_leader>

  my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such). $ml may be empty if Test::Harness doesn't think you're
on a TTY.

The $width is the width of the "yada/blah.." string.

=cut

sub _mk_leader {
    my($te, $width) = @_;
    chomp($te);
    $te =~ s/\.\w+$/./;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}

=item B<_leader_width>

  my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test name.

=cut

sub _leader_width {
    my $maxlen = 0;
    my $maxsuflen = 0;
    foreach (@_) {
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $len    = length;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
}


sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if (_all_ok($tot)) {
        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}){
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                              $tot->{max} - $tot->{ok}, $tot->{max},
                              $percent_ok;

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }
        if ($tot->{bad}) {
            $bonusmsg =~ s/^,\s*//;
            print "$bonusmsg.\n" if $bonusmsg;
            die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
                "$subpct\n";
        }
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}


my %Handlers = ();
$Strap->{callback} = sub {
    my($self, $line, $type, $totals) = @_;
    print $line if $Verbose;

    my $meth = $Handlers{$type};
    $meth->($self, $line, $type, $totals) if $meth;
};


$Handlers{header} = sub {
    my($self, $line, $type, $totals) = @_;

    warn "Test header seen more than once!\n" if $self->{_seen_header};

    $self->{_seen_header}++;

    warn "1..M can only appear at the beginning or end of tests\n"
      if $totals->{seen} &&
         $totals->{max} < $totals->{seen};
};

$Handlers{test} = sub {
    my($self, $line, $type, $totals) = @_;

    my $curr = $totals->{seen};
    my $next = $self->{'next'};
    my $max  = $totals->{max};
    my $detail = $totals->{details}[-1];

    if( $detail->{ok} ) {
        _print_ml_less("ok $curr/$max");

        if( $detail->{type} eq 'skip' ) {
            $totals->{skip_reason} = $detail->{reason}
              unless defined $totals->{skip_reason};
            $totals->{skip_reason} = 'various reasons'
              if $totals->{skip_reason} ne $detail->{reason};
        }
    }
    else {
        _print_ml("NOK $curr");
    }

    if( $curr > $next ) {
        print "Test output counter mismatch [test $curr]\n";
    }
    elsif( $curr < $next ) {
        print "Confused test output: test $curr answered after ".
              "test ", $next - 1, "\n";
    }

};

$Handlers{bailout} = sub {
    my($self, $line, $type, $totals) = @_;

    die "FAILED--Further testing stopped" .
      ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
};


sub _print_ml {
    print join '', $ML, @_ if $ML;
}


# For slow connections, we save lots of bandwidth by printing only once
# per second.
sub _print_ml_less {
    if( !$Ok_Slow || $Last_ML_Print != time ) {
        _print_ml(@_);
        $Last_ML_Print = time;
    }
}

sub _bonusmsg {
    my($tot) = @_;

    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
        if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
                     . " skipped";
    }

    return $bonusmsg;
}

# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus,$wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    $tot->{bad}++;

    if ($test->{max}) {
        if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
            print "\tafter all the subtests completed successfully\n";
            $percent = 0;
            $failed = 0;        # But we do not set $canon!
        }
        else {
            push @{$test->{failed}}, $test->{'next'}..$test->{max};
            $failed = @{$test->{failed}};
            (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
            $percent = 100*(scalar @{$test->{failed}})/$test->{max};
            print "DIED. ",$txt;
        }
    }

    return { canon => $canon,  max => $test->{max} || '??',
             failed => $failed,
             percent => $percent,
             estat => $estatus, wstat => $wstatus,
           };
}


sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)
                  . $middle_str
                  . $list_str . "\n"
                  . "-" x $Columns
                  . "\n.\n";

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . " @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "\n}\n"
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'
              . "\n.\n";

    eval $fmt_top;
    die $@ if $@;
    eval $fmt;
    die $@ if $@;

    return($fmt_top, $fmt);
}

{
    my $tried_devel_corestack;

    sub corestatus {
        my($st) = @_;

        my $did_core;
        eval { # we may not have a WCOREDUMP
            local $^W = 0;  # *.ph files are often *very* noisy
            require 'wait.ph';
            $did_core = WCOREDUMP($st);
        };
        if( $@ ) {
            $did_core = $st & 0200;
        }

        eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
          unless $tried_devel_corestack++;

        return $did_core;
    }
}

sub canonfailed ($$@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;
    my @result = ();
    my @canon = ();
    my $min;
    my $last = $min = shift @failed;
    my $canon;
    if (@failed) {
        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                if ($min == $last) {
                    push @canon, $last;
                } else {
                    push @canon, "$min-$last";
                }
                $min = $_;
            }
            $last = $_;
        }
        local $" = ", ";
        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;
    } else {
        push @result, "FAILED test $last\n";
        $canon = $last;
    }

    push @result, "\tFailed $failed/$max tests, ";
    if ($max) {
        push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    } else {
        push @result, "?% okay";
    }
    my $ender = 's' x ($skipped > 1);
    my $good = $max - $failed - $skipped;
    if ($skipped) {
        my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";
        if ($max) {
            my $goodper = sprintf("%.2f",100*($good/$max));
            $skipmsg .= "$goodper%)";
        } else {
            $skipmsg .= "?%)";
        }
        push @result, $skipmsg;
    }
    push @result, "\n";
    my $txt = join "", @result;
    ($txt, $canon);
}

9c5c68c8 955
b82fa0b7 956=back
d667a7e6 957
b82fa0b7 958=cut
9c5c68c8 959
9c5c68c8 960
b82fa0b7 9611;
962__END__
9c5c68c8 963
964
cb1a09d0 965=head1 EXPORT
966
e8df9912 967C<&runtests> is exported by Test::Harness by default.
cb1a09d0 968
9c5c68c8 969C<$verbose> and C<$switches> are exported upon request.
970
971
cb1a09d0 972=head1 DIAGNOSTICS
973
974=over 4
975
976=item C<All tests successful.\nFiles=%d, Tests=%d, %s>
977
978If all tests are successful some statistics about the performance are
979printed.
980
6c31b336 981=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
982
983For any single script that has failing subtests statistics like the
984above are printed.
985
986=item C<Test returned status %d (wstat %d)>
987
9c5c68c8 988Scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
989and C<$?> are printed in a message similar to the above.
6c31b336 990
991=item C<Failed 1 test, %.2f%% okay. %s>
cb1a09d0 992
6c31b336 993=item C<Failed %d/%d tests, %.2f%% okay. %s>
cb1a09d0 994
995If not all tests were successful, the script dies with one of the
996above messages.
997
308957f5 998=item C<FAILED--Further testing stopped: %s>
d667a7e6 999
1000If a single subtest decides that further testing will not make sense,
1001the script dies with this message.
1002
cb1a09d0 1003=back
1004
=head1 ENVIRONMENT

=over 4

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests. This allows
the tests to determine if they are being executed through the harness
or by any other means.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal. If it is not
set then it will default to C<COLUMNS>. If this is not set, it will
default to 80. Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_COMPILE_TEST>

When true, it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

  LEAKED FILES: scr.tmp 0 my.db

If the directory name is relative, it is interpreted with respect to
the current directory at the moment runtests() was called. Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more predictable
results.

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console. You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns. Some consoles may not handle carriage returns properly (which
results in a somewhat messy output).

=item C<HARNESS_OK_SLOW>

If true, the C<ok> messages are printed out only once per second.
This reduces output and therefore may, for example, help when testing
over slow connections.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running
its tests. Setting $Test::Harness::verbose will override this.

=back

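Several of these (for example C<HARNESS_VERBOSE>, C<HARNESS_COLUMNS> and
C<HARNESS_FILELEAK_IN_DIR>) are read when Test::Harness is loaded, so a
driver script needs to set them before the C<use>; a minimal sketch (the
directory name and values are only examples):

  BEGIN {
      $ENV{HARNESS_VERBOSE}         = 1;        # show test output as it runs
      $ENV{HARNESS_FILELEAK_IN_DIR} = 't/tmp';  # watch this directory for leaks
  }
  use Test::Harness qw(runtests);

  runtests(glob 't/*.t');
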
=head1 EXAMPLE

Here's how Test::Harness tests itself:

  $ cd ~/src/devel/Test-Harness
  $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
  Using /home/schwern/src/devel/Test-Harness/blib
  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  All tests successful.
  Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)

=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years.

Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>

=head1 LICENSE

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See F<http://www.perl.com/perl/misc/Artistic.html>


=head1 TODO

Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests. (Partially done in Test::Harness::Straps)

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

=for _private
Keep whittling away at _run_all_tests()

=for _private
Clean up how the summary is printed. Get rid of those damned formats.

=head1 BUGS

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

=cut