d667a7e6 1# -*- Mode: cperl; cperl-indent-level: 4 -*-
13287dd5 2# $Id: Harness.pm,v 1.14.2.13 2002/01/07 22:34:32 schwern Exp $
b82fa0b7 3
a0d0e21e 4package Test::Harness;
5
b82fa0b7 6require 5.004;
13287dd5 7use Test::Harness::Straps;
8use Test::Harness::Assert;
a0d0e21e 9use Exporter;
10use Benchmark;
4633a7c4 11use Config;
760ac839 12use strict;
13
b82fa0b7 14use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
15 $Columns $verbose $switches
16 @ISA @EXPORT @EXPORT_OK
17 );
4633a7c4 18
9c5c68c8 19# Backwards compatibility for exportable variable names.
20*verbose = \$Verbose;
21*switches = \$Switches;
22
23$Have_Devel_Corestack = 0;
24
13287dd5 25$VERSION = '2.01';
9b0ceca9 26
f19ae7a7 27$ENV{HARNESS_ACTIVE} = 1;
28
13287dd5 29END {
30 # For VMS.
31 delete $ENV{HARNESS_ACTIVE};
32}
33
9b0ceca9 34# Some experimental versions of the OS/2 build have broken $?
9c5c68c8 35my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
36
37my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
9b0ceca9 38
13287dd5 39my $Strap = Test::Harness::Straps->new;
17a79f5b 40
9c5c68c8 41@ISA = ('Exporter');
42@EXPORT = qw(&runtests);
43@EXPORT_OK = qw($verbose $switches);
4633a7c4 44
9c5c68c8 45$Verbose = 0;
46$Switches = "-w";
47$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
b82fa0b7 48$Columns--; # Some shells have trouble with a full line of text.
49
50
51=head1 NAME
52
53Test::Harness - run perl standard test scripts with statistics
54
55=head1 SYNOPSIS
56
57 use Test::Harness;
58
59 runtests(@test_files);
60
61=head1 DESCRIPTION
a0d0e21e 62
b82fa0b7 63B<STOP!> If all you want to do is write a test script, consider using
64Test::Simple. Otherwise, read on.
65
66(By using the Test module, you can write test scripts without
67knowing the exact output this module expects. However, if you need to
68know the specifics, read on!)
69
70Perl test scripts print to standard output C<"ok N"> for each single
71test, where C<N> is an increasing sequence of integers. The first line
72output by a standard test script is C<"1..M"> with C<M> being the
73number of tests that should be run within the test
74script. Test::Harness::runtests(@tests) runs all the test scripts
75named as arguments and checks standard output for the expected
76C<"ok N"> strings.
77
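For example, a minimal hand-rolled test script that follows this
protocol might look like the sketch below (the file name F<t/example.t>
is purely illustrative):

    #!/usr/bin/perl -w
    # Hypothetical t/example.t: print the plan, then one line per test.
    print "1..2\n";
    print( (1 + 1 == 2)       ? "ok 1\n" : "not ok 1\n" );
    print( ("perl" =~ /perl/) ? "ok 2\n" : "not ok 2\n" );
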
78After all tests have been performed, runtests() prints some
79performance statistics that are computed by the Benchmark module.
80
81=head2 The test script output
82
83The following explains how Test::Harness interprets the output of your
84test program.
85
86=over 4
87
88=item B<'1..M'>
89
90This header tells how many tests there will be. It should be the
91first line output by your test program (but it's okay if it's preceded
92by comments).
93
94In certain instances, you may not know how many tests you will
95ultimately be running. In this case, it is permitted (but not
96encouraged) for the 1..M header to appear as the B<last> line output
97by your test (again, it can be followed by further comments). But we
98strongly encourage you to put it first.
99
100Under B<no> circumstances should 1..M appear in the middle of your
101output or more than once.
102
103
104=item B<'ok', 'not ok'. Ok?>
105
106Any output from the test script to standard error is ignored and
107bypassed, and thus will be seen by the user. Lines written to standard
108output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
109runtests(). All other lines are discarded.
110
111C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
112
113
114=item B<test numbers>
115
116Perl normally expects the 'ok' or 'not ok' to be followed by a test
117number. It is tolerated if the test numbers after 'ok' are
118omitted. In this case Test::Harness temporarily maintains its own
119counter until the script supplies test numbers again. So the following
120test script
121
122 print <<END;
123 1..6
124 not ok
125 ok
126 not ok
127 ok
128 ok
129 END
130
131will generate
132
133 FAILED tests 1, 3, 6
134 Failed 3/6 tests, 50.00% okay
135
13287dd5 136=item B<test names>
b82fa0b7 137
13287dd5 138Anything after the test number but before the # is considered to be
139the name of the test.
b82fa0b7 140
13287dd5 141 ok 42 this is the name of the test
b82fa0b7 142
13287dd5 143Currently, Test::Harness does nothing with this information.
b82fa0b7 144
145=item B<Skipping tests>
146
147If the standard output line contains the substring C< # Skip> (with
148variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
149counted as a skipped test. If the whole test script succeeds, the
150count of skipped tests is included in the generated output.
151C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
152for skipping.
153
154 ok 23 # skip Insufficient flogiston pressure.
155
156Similarly, one can include an explanation in a C<1..0> line
157emitted if the test script is skipped completely:
158
159 1..0 # Skipped: no leverage found
160
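For instance, a test script might emit such a line itself when an
optional prerequisite is missing; a sketch (the module name here is
made up):

    # Skip this script entirely if an optional module is absent.
    unless (eval { require Flogiston::Gauge; 1 }) {
        print "1..0 # Skipped: Flogiston::Gauge not installed\n";
        exit 0;
    }
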
161=item B<Todo tests>
162
163If the standard output line contains the substring C< # TODO> after
164C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
165that follows describes what has to be done before this test will
166succeed.
167
168 not ok 13 # TODO harness the power of the atom
169
13287dd5 170=begin _deprecated
171
172Alternatively, you can specify a list of what tests are todo as part
173of the test header.
174
175 1..23 todo 5 12 23
176
177This only works if the header appears at the beginning of the test.
178
179This style is B<deprecated>.
180
181=end _deprecated
182
b82fa0b7 183These tests represent a feature to be implemented or a bug to be fixed
184and act as something of an executable "thing to do" list. They are
185B<not> expected to succeed. Should a todo test begin succeeding,
186Test::Harness will report it as a bonus. This indicates that whatever
187you were supposed to do has been done and you should promote this to a
188normal test.
189
190=item B<Bail out!>
191
192As an emergency measure, a test script can decide that further tests
193are useless (e.g. missing dependencies) and testing should stop
194immediately. In that case the test script prints the magic words
195
196 Bail out!
197
198to standard output. Any message after these words will be displayed by
199C<Test::Harness> as the reason why testing is stopped.
200
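A sketch of how a test script might trigger this (the file checked for
is purely illustrative):

    # Stop the whole run early if a required fixture is missing.
    unless (-e "t/fixtures/required.conf") {
        print "Bail out! t/fixtures/required.conf is missing\n";
        exit 0;
    }
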
201=item B<Comments>
202
203Additional comments may be put into the testing output on their own
204lines. Comment lines should begin with a '#'; Test::Harness will
205ignore them.
206
207 ok 1
208 # Life is good, the sun is shining, RAM is cheap.
209 not ok 2
210 # got 'Bush' expected 'Gore'
211
212=item B<Anything else>
213
214Any other output Test::Harness sees will be silently ignored, B<BUT WE
215PLAN TO CHANGE THIS!> If you wish to place additional output in your
216test script, please use a comment.
217
218=back
219
220
13287dd5 221=head2 Taint mode
222
223Test::Harness will honor the C<-T> in the #! line on your test files. So
224if you begin a test with:
225
226 #!perl -T
227
228the test will be run with taint mode on.
229
230
231=head2 Configuration variables
232
233These variables can be used to configure the behavior of
234Test::Harness. They are exported on request.
235
236=over 4
237
238=item B<$Test::Harness::verbose>
239
240The global variable $Test::Harness::verbose is exportable and can be
241used to let runtests() display the standard output of the script
242without otherwise altering its behavior.
243
244=item B<$Test::Harness::switches>
245
246The global variable $Test::Harness::switches is exportable and can be
247used to set the perl command-line options used when running the test
248script(s). The default value is C<-w>.
249
250=back
251
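As a sketch, a small driver script might use them like this (nothing
here beyond what is documented above):

    use Test::Harness qw(runtests $verbose $switches);

    $verbose  = 1;            # echo each test's output while it runs
    $switches = "-w -Ilib";   # switches passed to perl for every test
    runtests(@ARGV);
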
252
b82fa0b7 253=head2 Failure
254
255It will happen: your tests will fail. After you mop up your ego, you
256can begin examining the summary report:
257
2fe373ce 258 t/base..............ok
259 t/nonumbers.........ok
260 t/ok................ok
261 t/test-harness......ok
262 t/waterloo..........dubious
b82fa0b7 263 Test returned status 3 (wstat 768, 0x300)
264 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
265 Failed 10/20 tests, 50.00% okay
266 Failed Test Stat Wstat Total Fail Failed List of Failed
267 -----------------------------------------------------------------------
268 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
269 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
270
271Everything passed but t/waterloo.t. It failed 10 of 20 tests and
272exited with a non-zero status, indicating something dubious happened.
273
274The columns in the summary report mean:
275
276=over 4
277
278=item B<Failed Test>
279
280The test file which failed.
281
282=item B<Stat>
283
284If the test exited with non-zero, this is its exit status.
285
286=item B<Wstat>
287
288The wait status of the test I<umm, I need a better explanation here>.
289
290=item B<Total>
291
292Total number of tests expected to run.
293
294=item B<Fail>
295
296Number of tests which failed, either from "not ok" or because they never ran.
297
298=item B<Failed>
299
300Percentage of the total tests which failed.
301
302=item B<List of Failed>
303
304A list of the tests which failed. Successive failures may be
305abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
30620 failed).
307
308=back
309
310
311=head2 Functions
312
313Test::Harness currently has only one function; here it is.
314
315=over 4
316
317=item B<runtests>
318
319 my $allok = runtests(@test_files);
320
321This runs all the given @test_files and divines whether they passed
322or failed based on their output to STDOUT (details above). It prints
323out each individual test which failed along with a summary report and
324how long it all took.
325
326It returns true if everything was ok, false otherwise.
327
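Note that when any test fails, runtests() ultimately dies with a
summary message (see L</DIAGNOSTICS>), so a driver that wants to carry
on regardless might wrap the call; a sketch:

    use Test::Harness qw(runtests);

    my $all_passed = eval { runtests(glob 't/*.t') };
    warn "test run failed: $@" unless $all_passed;
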
328=for _private
329This is just _run_all_tests() plus _show_results()
330
331=cut
17a79f5b 332
a0d0e21e 333sub runtests {
334 my(@tests) = @_;
9c5c68c8 335
b82fa0b7 336 local ($\, $,);
337
338 my($tot, $failedtests) = _run_all_tests(@tests);
9c5c68c8 339 _show_results($tot, $failedtests);
340
2fe373ce 341 my $ok = _all_ok($tot);
b82fa0b7 342
13287dd5 343 assert(($ok xor keys %$failedtests),
344 q{ok status jives with $failedtests});
b82fa0b7 345
346 return $ok;
347}
348
349=begin _private
350
2fe373ce 351=item B<_all_ok>
352
353 my $ok = _all_ok(\%tot);
354
355Tells you if this test run is overall successful or not.
356
357=cut
358
359sub _all_ok {
360 my($tot) = shift;
361
362 return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
363}
364
b82fa0b7 365=item B<_globdir>
366
367 my @files = _globdir $dir;
368
369Returns all the files in a directory. This is shorthand for backwards
370compatibility on systems where glob() doesn't work right.
371
372=cut
373
374sub _globdir {
375 opendir DIRH, shift;
376 my @f = readdir DIRH;
377 closedir DIRH;
378
379 return @f;
9c5c68c8 380}
381
b82fa0b7 382=item B<_run_all_tests>
383
384 my($total, $failed) = _run_all_tests(@test_files);
385
386Runs all the given @test_files (as runtests()) but does it quietly (no
387report). $total is a hash ref summary of all the tests run. Its keys
388and values are these:
389
390 bonus Number of individual todo tests unexpectedly passed
391 max Number of individual tests ran
392 ok Number of individual tests passed
393 sub_skipped Number of individual tests skipped
2fe373ce 394 todo Number of individual todo tests
b82fa0b7 395
396 files Number of test files ran
397 good Number of test files passed
398 bad Number of test files failed
399 tests Number of test files originally given
400 skipped Number of test files skipped
401
402If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
403test.
404
405$failed is a hash ref of all the test scripts which failed. Each key
406is the name of a test script, each value is another hash representing
407how that script failed. Its keys are these:
9c5c68c8 408
b82fa0b7 409 name Name of the test which failed
410 estat Script's exit value
411 wstat Script's wait status
412 max Number of individual tests
413 failed Number which failed
414 percent Percentage of tests which failed
415 canon List of tests which failed (as string).
416
417Needless to say, $failed should be empty if everything passed.
418
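As a rough sketch of how the two structures fit together (this is a
private interface and may change):

    my($total, $failed) = _run_all_tests(@test_files);

    printf "%d test files, %d/%d subtests ok\n",
           $total->{files}, $total->{ok}, $total->{max};

    for my $script (sort keys %$failed) {
        my $f = $failed->{$script};
        print "$script: failed $f->{failed}/$f->{max} ($f->{canon})\n";
    }
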
419B<NOTE> Currently this function is still noisy. I'm working on it.
420
421=cut
422
423sub _run_all_tests {
9c5c68c8 424 my(@tests) = @_;
a0d0e21e 425 local($|) = 1;
9c5c68c8 426 my(%failedtests);
427
428 # Test-wide totals.
429 my(%tot) = (
430 bonus => 0,
431 max => 0,
432 ok => 0,
433 files => 0,
434 bad => 0,
435 good => 0,
436 tests => scalar @tests,
437 sub_skipped => 0,
2fe373ce 438 todo => 0,
9c5c68c8 439 skipped => 0,
2fe373ce 440 bench => 0,
9c5c68c8 441 );
774d564b 442
13287dd5 443 local($ENV{'PERL5LIB'}) = $Strap->_INC2PERL5LIB;
a0d0e21e 444
b82fa0b7 445 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
a0d0e21e 446 my $t_start = new Benchmark;
9c5c68c8 447
13287dd5 448 my $width = _leader_width(@tests);
b82fa0b7 449 foreach my $tfile (@tests) {
13287dd5 450 $Strap->_reset_file_state;
451
7a315204 452 my($leader, $ml) = _mk_leader($tfile, $width);
b82fa0b7 453 print $leader;
9c5c68c8 454
b82fa0b7 455 my $fh = _open_test($tfile);
9c5c68c8 456
457 # state of the current test.
458 my %test = (
459 ok => 0,
b82fa0b7 460 'next' => 0,
9c5c68c8 461 max => 0,
462 failed => [],
463 todo => {},
464 bonus => 0,
465 skipped => 0,
466 skip_reason => undef,
467 ml => $ml,
468 );
469
470 my($seen_header, $tests_seen) = (0,0);
2fe373ce 471 while (<$fh>) {
13287dd5 472 print if $Verbose;
473
474 $Strap->{line}++;
9c5c68c8 475 if( _parse_header($_, \%test, \%tot) ) {
476 warn "Test header seen twice!\n" if $seen_header;
477
478 $seen_header = 1;
479
480 warn "1..M can only appear at the beginning or end of tests\n"
481 if $tests_seen && $test{max} < $tests_seen;
482 }
483 elsif( _parse_test_line($_, \%test, \%tot) ) {
484 $tests_seen++;
d667a7e6 485 }
9c5c68c8 486 # else, ignore it.
2fe373ce 487 }
9c5c68c8 488
489 my($estatus, $wstatus) = _close_fh($fh);
490
b82fa0b7 491 my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;
492
2fe373ce 493 if ($wstatus) {
b82fa0b7 494 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
9c5c68c8 495 $estatus, $wstatus);
b82fa0b7 496 $failedtests{$tfile}{name} = $tfile;
2fe373ce 497 }
b82fa0b7 498 elsif ($allok) {
2fe373ce 499 if ($test{max} and $test{skipped} + $test{bonus}) {
500 my @msg;
501 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
502 if $test{skipped};
503 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
504 if $test{bonus};
505 print "$test{ml}ok, ".join(', ', @msg)."\n";
506 } elsif ($test{max}) {
507 print "$test{ml}ok\n";
508 } elsif (defined $test{skip_reason}) {
509 print "skipped: $test{skip_reason}\n";
510 $tot{skipped}++;
511 } else {
512 print "skipped test on this platform\n";
513 $tot{skipped}++;
514 }
515 $tot{good}++;
516 }
b82fa0b7 517 else {
518 if ($test{max}) {
519 if ($test{'next'} <= $test{max}) {
520 push @{$test{failed}}, $test{'next'}..$test{max};
521 }
522 if (@{$test{failed}}) {
523 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
524 @{$test{failed}});
525 print "$test{ml}$txt";
526 $failedtests{$tfile} = { canon => $canon,
527 max => $test{max},
528 failed => scalar @{$test{failed}},
529 name => $tfile,
530 percent => 100*(scalar @{$test{failed}})/$test{max},
531 estat => '',
532 wstat => '',
533 };
534 } else {
535 print "Don't know which tests failed: got $test{ok} ok, ".
536 "expected $test{max}\n";
537 $failedtests{$tfile} = { canon => '??',
538 max => $test{max},
539 failed => '??',
540 name => $tfile,
541 percent => undef,
542 estat => '',
543 wstat => '',
544 };
545 }
546 $tot{bad}++;
547 } elsif ($test{'next'} == 0) {
548 print "FAILED before any test output arrived\n";
549 $tot{bad}++;
550 $failedtests{$tfile} = { canon => '??',
551 max => '??',
552 failed => '??',
553 name => $tfile,
554 percent => undef,
555 estat => '',
556 wstat => '',
557 };
558 }
559 }
560
2fe373ce 561 $tot{sub_skipped} += $test{skipped};
562
563 if (defined $Files_In_Dir) {
564 my @new_dir_files = _globdir $Files_In_Dir;
565 if (@new_dir_files != @dir_files) {
566 my %f;
567 @f{@new_dir_files} = (1) x @new_dir_files;
568 delete @f{@dir_files};
569 my @f = sort keys %f;
570 print "LEAKED FILES: @f\n";
571 @dir_files = @new_dir_files;
572 }
573 }
13287dd5 574
575 close $fh;
a0d0e21e 576 }
9c5c68c8 577 $tot{bench} = timediff(new Benchmark, $t_start);
d667a7e6 578
13287dd5 579 $Strap->_restore_PERL5LIB;
9c5c68c8 580
581 return(\%tot, \%failedtests);
582}
583
b82fa0b7 584=item B<_mk_leader>
585
7a315204 586 my($leader, $ml) = _mk_leader($test_file, $width);
b82fa0b7 587
588Generates the 't/foo........' $leader for the given $test_file as well
589as a similar version which will overwrite the current line (by use of
590\r and such). $ml may be empty if Test::Harness doesn't think you're
2fe373ce 591on a TTY.
592
593The $width is the width of the "yada/blah.." string.
b82fa0b7 594
595=cut
596
597sub _mk_leader {
2fe373ce 598 my($te, $width) = @_;
599 chomp($te);
b695f709 600 $te =~ s/\.\w+$/./;
b82fa0b7 601
602 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
603 my $blank = (' ' x 77);
7a315204 604 my $leader = "$te" . '.' x ($width - length($te));
b82fa0b7 605 my $ml = "";
606
607 $ml = "\r$blank\r$leader"
608 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
609
610 return($leader, $ml);
611}
612
13287dd5 613=item B<_leader_width>
614
615 my($width) = _leader_width(@test_files);
616
617Calculates how wide the leader should be based on the length of the
618longest test name.
619
620=cut
621
622sub _leader_width {
623 my $maxlen = 0;
624 my $maxsuflen = 0;
625 foreach (@_) {
626 my $suf = /\.(\w+)$/ ? $1 : '';
627 my $len = length;
628 my $suflen = length $suf;
629 $maxlen = $len if $len > $maxlen;
630 $maxsuflen = $suflen if $suflen > $maxsuflen;
631 }
632 # + 3 : we want three dots between the test name and the "ok"
633 return $maxlen + 3 - $maxsuflen;
634}
635
9c5c68c8 636
637sub _show_results {
638 my($tot, $failedtests) = @_;
639
640 my $pct;
641 my $bonusmsg = _bonusmsg($tot);
642
2fe373ce 643 if (_all_ok($tot)) {
644 print "All tests successful$bonusmsg.\n";
645 } elsif (!$tot->{tests}){
646 die "FAILED--no tests were run for some reason.\n";
647 } elsif (!$tot->{max}) {
648 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
649 die "FAILED--$tot->{tests} test $blurb could be run, ".
9c5c68c8 650 "alas--no output ever seen\n";
c07a80fd 651 } else {
2fe373ce 652 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
653 my $percent_ok = 100*$tot->{ok}/$tot->{max};
654 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
655 $tot->{max} - $tot->{ok}, $tot->{max},
656 $percent_ok;
0a931e4a 657
9c5c68c8 658 my($fmt_top, $fmt) = _create_fmts($failedtests);
0a931e4a 659
2fe373ce 660 # Now write to formats
661 for my $script (sort keys %$failedtests) {
662 $Curtest = $failedtests->{$script};
663 write;
664 }
665 if ($tot->{bad}) {
666 $bonusmsg =~ s/^,\s*//;
667 print "$bonusmsg.\n" if $bonusmsg;
668 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
9c5c68c8 669 "$subpct\n";
2fe373ce 670 }
c07a80fd 671 }
f0a9308e 672
9c5c68c8 673 printf("Files=%d, Tests=%d, %s\n",
674 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
675}
676
677
678sub _parse_header {
679 my($line, $test, $tot) = @_;
680
681 my $is_header = 0;
682
13287dd5 683 if( $Strap->_is_header($line) ) {
684 $is_header = 1;
9c5c68c8 685
13287dd5 686 $test->{max} = $Strap->{max};
687 for ( keys %{$Strap->{todo}} ) { $test->{todo}{$_} = 1; }
9c5c68c8 688
13287dd5 689 $test->{skip_reason} = $Strap->{skip_all}
690 if not $test->{max} and defined $Strap->{skip_all};
9c5c68c8 691
b82fa0b7 692 $test->{'next'} = 1 unless $test->{'next'};
9c5c68c8 693
13287dd5 694
695 $tot->{max} += $test->{max};
696 $tot->{files}++;
9c5c68c8 697 }
698 else {
699 $is_header = 0;
700 }
701
702 return $is_header;
c07a80fd 703}
704
9c5c68c8 705
b82fa0b7 706sub _open_test {
707 my($test) = shift;
708
709 my $s = _set_switches($test);
710
711 # XXX This is WAY too core specific!
712 my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
713 ? "./perl -I../lib ../utils/perlcc $test "
2fe373ce 714 . "-r 2>> ./compilelog |"
715 : "$^X $s $test|";
b82fa0b7 716 $cmd = "MCR $cmd" if $^O eq 'VMS';
717
718 if( open(PERL, $cmd) ) {
719 return \*PERL;
720 }
721 else {
722 print "can't run $test. $!\n";
723 return;
724 }
725}
726
b82fa0b7 727
9c5c68c8 728sub _parse_test_line {
729 my($line, $test, $tot) = @_;
730
13287dd5 731 my %result;
732 if ( $Strap->_is_test($line, \%result) ) {
2fe373ce 733 $test->{'next'} ||= 1;
734 my $this = $test->{'next'};
2fe373ce 735
13287dd5 736 my($not, $tnum) = (!$result{ok}, $result{number});
2fe373ce 737
13287dd5 738 $this = $tnum if $tnum;
2fe373ce 739
13287dd5 740 my($type, $reason) = ($result{type}, $result{reason});
2fe373ce 741
13287dd5 742 my($istodo, $isskip);
743 if( defined $type ) {
744 $istodo = 1 if $type eq 'todo';
745 $isskip = 1 if $type eq 'skip';
746 }
2fe373ce 747
13287dd5 748 $test->{todo}{$this} = 1 if $istodo;
2fe373ce 749
13287dd5 750 $tot->{todo}++ if $test->{todo}{$this};
751
752 if( $not ) {
753 print "$test->{ml}NOK $this" if $test->{ml};
754 if (!$test->{todo}{$this}) {
755 push @{$test->{failed}}, $this;
756 } else {
2fe373ce 757 $test->{ok}++;
758 $tot->{ok}++;
2fe373ce 759 }
9c5c68c8 760 }
13287dd5 761 else {
9c5c68c8 762 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
763 $test->{ok}++;
764 $tot->{ok}++;
13287dd5 765 $test->{skipped}++ if $isskip;
766
767 $reason = '[no reason given]'
768 if $isskip and not defined $reason;
769 if (defined $reason and defined $test->{skip_reason}) {
770 # print "was: '$skip_reason' new '$reason'\n";
771 $test->{skip_reason} = 'various reasons'
772 if $test->{skip_reason} ne $reason;
773 } elsif (defined $reason) {
774 $test->{skip_reason} = $reason;
775 }
776
777 $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
9c5c68c8 778 }
779
b82fa0b7 780 if ($this > $test->{'next'}) {
2fe373ce 781 print "Test output counter mismatch [test $this]\n";
b82fa0b7 782 push @{$test->{failed}}, $test->{'next'}..$this-1;
9c5c68c8 783 }
b82fa0b7 784 elsif ($this < $test->{'next'}) {
9c5c68c8 785 #we have seen more "ok" lines than the number suggests
786 print "Confused test output: test $this answered after ".
b82fa0b7 787 "test ", $test->{'next'}-1, "\n";
788 $test->{'next'} = $this;
9c5c68c8 789 }
b82fa0b7 790 $test->{'next'} = $this + 1;
9c5c68c8 791
792 }
13287dd5 793 else {
794 my $bail_reason;
795 if( $Strap->_is_bail_out($line, \$bail_reason) ) { # bail out!
796 die "FAILED--Further testing stopped" .
797 ($bail_reason ? ": $bail_reason\n" : ".\n");
798 }
9c5c68c8 799 }
800}
801
802
803sub _bonusmsg {
804 my($tot) = @_;
805
806 my $bonusmsg = '';
807 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
2fe373ce 808 " UNEXPECTEDLY SUCCEEDED)")
809 if $tot->{bonus};
9c5c68c8 810
811 if ($tot->{skipped}) {
2fe373ce 812 $bonusmsg .= ", $tot->{skipped} test"
9c5c68c8 813 . ($tot->{skipped} != 1 ? 's' : '');
2fe373ce 814 if ($tot->{sub_skipped}) {
815 $bonusmsg .= " and $tot->{sub_skipped} subtest"
816 . ($tot->{sub_skipped} != 1 ? 's' : '');
817 }
818 $bonusmsg .= ' skipped';
9c5c68c8 819 }
820 elsif ($tot->{sub_skipped}) {
2fe373ce 821 $bonusmsg .= ", $tot->{sub_skipped} subtest"
822 . ($tot->{sub_skipped} != 1 ? 's' : '')
823 . " skipped";
9c5c68c8 824 }
825
826 return $bonusmsg;
827}
828
829# VMS has some subtle nastiness with closing the test files.
830sub _close_fh {
831 my($fh) = shift;
832
833 close($fh); # must close to reap child resource values
834
2fe373ce 835 my $wstatus = $Ignore_Exitcode ? 0 : $?; # Can trust $? ?
9c5c68c8 836 my $estatus;
837 $estatus = ($^O eq 'VMS'
838 ? eval 'use vmsish "status"; $estatus = $?'
839 : $wstatus >> 8);
840
841 return($estatus, $wstatus);
842}
843
844
845# Set up the command-line switches to run perl as.
846sub _set_switches {
847 my($test) = shift;
848
9c5c68c8 849 my $s = $Switches;
13287dd5 850 $s .= $Strap->_switches($test);
9c5c68c8 851
852 return $s;
853}
854
855
856# Test program go boom.
857sub _dubious_return {
858 my($test, $tot, $estatus, $wstatus) = @_;
859 my ($failed, $canon, $percent) = ('??', '??');
860
861 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
862 "(wstat %d, 0x%x)\n",
863 $wstatus,$wstatus;
864 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
865
866 if (corestatus($wstatus)) { # until we have a wait module
867 if ($Have_Devel_Corestack) {
868 Devel::CoreStack::stack($^X);
869 } else {
870 print "\ttest program seems to have generated a core\n";
871 }
872 }
873
874 $tot->{bad}++;
875
876 if ($test->{max}) {
b82fa0b7 877 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
9c5c68c8 878 print "\tafter all the subtests completed successfully\n";
879 $percent = 0;
2fe373ce 880 $failed = 0; # But we do not set $canon!
9c5c68c8 881 }
882 else {
b82fa0b7 883 push @{$test->{failed}}, $test->{'next'}..$test->{max};
9c5c68c8 884 $failed = @{$test->{failed}};
885 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
886 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
887 print "DIED. ",$txt;
888 }
889 }
890
891 return { canon => $canon, max => $test->{max} || '??',
892 failed => $failed,
66fd8cb9 893 percent => $percent,
9c5c68c8 894 estat => $estatus, wstat => $wstatus,
895 };
896}
897
898
899sub _garbled_output {
900 my($gibberish) = shift;
901 warn "Confusing test output: '$gibberish'\n";
902}
903
904
905sub _create_fmts {
906 my($failedtests) = @_;
907
b82fa0b7 908 my $failed_str = "Failed Test";
909 my $middle_str = " Stat Wstat Total Fail Failed ";
9c5c68c8 910 my $list_str = "List of Failed";
911
912 # Figure out our longest name string for formatting purposes.
913 my $max_namelen = length($failed_str);
914 foreach my $script (keys %$failedtests) {
915 my $namelen = length $failedtests->{$script}->{name};
916 $max_namelen = $namelen if $namelen > $max_namelen;
917 }
918
919 my $list_len = $Columns - length($middle_str) - $max_namelen;
920 if ($list_len < length($list_str)) {
921 $list_len = length($list_str);
922 $max_namelen = $Columns - length($middle_str) - $list_len;
923 if ($max_namelen < length($failed_str)) {
924 $max_namelen = length($failed_str);
925 $Columns = $max_namelen + length($middle_str) + $list_len;
926 }
927 }
928
929 my $fmt_top = "format STDOUT_TOP =\n"
b82fa0b7 930 . sprintf("%-${max_namelen}s", $failed_str)
9c5c68c8 931 . $middle_str
2fe373ce 932 . $list_str . "\n"
933 . "-" x $Columns
934 . "\n.\n";
9c5c68c8 935
936 my $fmt = "format STDOUT =\n"
2fe373ce 937 . "@" . "<" x ($max_namelen - 1)
b82fa0b7 938 . " @>> @>>>> @>>>> @>>> ^##.##% "
2fe373ce 939 . "^" . "<" x ($list_len - 1) . "\n"
940 . '{ $Curtest->{name}, $Curtest->{estat},'
941 . ' $Curtest->{wstat}, $Curtest->{max},'
942 . ' $Curtest->{failed}, $Curtest->{percent},'
943 . ' $Curtest->{canon}'
944 . "\n}\n"
945 . "~~" . " " x ($Columns - $list_len - 2) . "^"
946 . "<" x ($list_len - 1) . "\n"
947 . '$Curtest->{canon}'
948 . "\n.\n";
9c5c68c8 949
950 eval $fmt_top;
951 die $@ if $@;
952 eval $fmt;
953 die $@ if $@;
954
955 return($fmt_top, $fmt);
956}
957
b82fa0b7 958{
959 my $tried_devel_corestack;
9c5c68c8 960
b82fa0b7 961 sub corestatus {
962 my($st) = @_;
c0ee6f5c 963
b82fa0b7 964 eval {require 'wait.ph'};
965 my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
c0ee6f5c 966
b82fa0b7 967 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
968 unless $tried_devel_corestack++;
c0ee6f5c 969
b82fa0b7 970 $ret;
971 }
c0ee6f5c 972}
973
c07a80fd 974sub canonfailed ($@) {
89d3b7e2 975 my($max,$skipped,@failed) = @_;
6c31b336 976 my %seen;
977 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
c07a80fd 978 my $failed = @failed;
979 my @result = ();
980 my @canon = ();
981 my $min;
982 my $last = $min = shift @failed;
760ac839 983 my $canon;
c07a80fd 984 if (@failed) {
2fe373ce 985 for (@failed, $failed[-1]) { # don't forget the last one
986 if ($_ > $last+1 || $_ == $last) {
987 if ($min == $last) {
988 push @canon, $last;
989 } else {
990 push @canon, "$min-$last";
991 }
992 $min = $_;
993 }
994 $last = $_;
995 }
996 local $" = ", ";
997 push @result, "FAILED tests @canon\n";
998 $canon = join ' ', @canon;
a0d0e21e 999 } else {
2fe373ce 1000 push @result, "FAILED test $last\n";
1001 $canon = $last;
a0d0e21e 1002 }
c07a80fd 1003
1004 push @result, "\tFailed $failed/$max tests, ";
89d3b7e2 1005 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
1006 my $ender = 's' x ($skipped > 1);
1007 my $good = $max - $failed - $skipped;
1008 my $goodper = sprintf("%.2f",100*($good/$max));
9c5c68c8 1009 push @result, " (-$skipped skipped test$ender: $good okay, ".
1010 "$goodper%)"
1011 if $skipped;
89d3b7e2 1012 push @result, "\n";
760ac839 1013 my $txt = join "", @result;
1014 ($txt, $canon);
a0d0e21e 1015}
1016
b82fa0b7 1017=end _private
9c5c68c8 1018
b82fa0b7 1019=back
d667a7e6 1020
b82fa0b7 1021=cut
9c5c68c8 1022
9c5c68c8 1023
b82fa0b7 10241;
1025__END__
9c5c68c8 1026
1027
cb1a09d0 1028=head1 EXPORT
1029
c0ee6f5c 1030C<&runtests> is exported by Test::Harness by default.
cb1a09d0 1031
9c5c68c8 1032C<$verbose> and C<$switches> are exported upon request.
1033
1034
cb1a09d0 1035=head1 DIAGNOSTICS
1036
1037=over 4
1038
1039=item C<All tests successful.\nFiles=%d, Tests=%d, %s>
1040
1041If all tests are successful, some statistics about the performance are
1042printed.
1043
6c31b336 1044=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
1045
1046For any single script that has failing subtests, statistics like the
1047above are printed.
1048
1049=item C<Test returned status %d (wstat %d)>
1050
9c5c68c8 1051For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
1052and C<$?> are printed in a message similar to the above.
6c31b336 1053
1054=item C<Failed 1 test, %.2f%% okay. %s>
cb1a09d0 1055
6c31b336 1056=item C<Failed %d/%d tests, %.2f%% okay. %s>
cb1a09d0 1057
1058If not all tests were successful, the script dies with one of the
1059above messages.
1060
d667a7e6 1061=item C<FAILED--Further testing stopped%s>
1062
1063If a single subtest decides that further testing will not make sense,
1064the script dies with this message.
1065
cb1a09d0 1066=back
1067
9b0ceca9 1068=head1 ENVIRONMENT
1069
37ce32a7 1070=over 4
1071
b82fa0b7 1072=item C<HARNESS_IGNORE_EXITCODE>
37ce32a7 1073
1074When defined, makes harness ignore the exit status of child processes.
1075
b82fa0b7 1076=item C<HARNESS_NOTTY>
9b0ceca9 1077
37ce32a7 1078When set to a true value, forces the harness to behave as though STDOUT were
1079not a console. You may need to set this if you don't want harness to
1080output more frequent progress messages using carriage returns. Some
1081consoles may not handle carriage returns properly (which results in a
1082somewhat messy output).
0d0c0d42 1083
b82fa0b7 1084=item C<HARNESS_COMPILE_TEST>
9636a016 1085
37ce32a7 1086When true, it will make harness attempt to compile the test using
1087C<perlcc> before running it.
1088
b82fa0b7 1089B<NOTE> This currently only works when sitting in the perl source
1090directory!
1091
1092=item C<HARNESS_FILELEAK_IN_DIR>
37ce32a7 1093
1094When set to the name of a directory, harness will check after each
1095test whether new files appeared in that directory, and report them as
17a79f5b 1096
1097 LEAKED FILES: scr.tmp 0 my.db
1098
1099If relative, the directory name is interpreted with respect to the current
1100directory at the moment runtests() was called. Putting an absolute path into
13287dd5 1101C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
17a79f5b 1102
b82fa0b7 1103=item C<HARNESS_PERL_SWITCHES>
37ce32a7 1104
1105Its value will be prepended to the switches used to invoke perl on
b82fa0b7 1106each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
37ce32a7 1107run all tests with all warnings enabled.
1108
b82fa0b7 1109=item C<HARNESS_COLUMNS>
37ce32a7 1110
1111This value will be used for the width of the terminal. If it is not
1112set then it will default to C<COLUMNS>. If this is not set, it will
1113default to 80. Note that users of Bourne-sh based shells will need to
1114C<export COLUMNS> for this module to use that variable.
2b32313b 1115
b82fa0b7 1116=item C<HARNESS_ACTIVE>
37ce32a7 1117
1118Harness sets this before executing the individual tests. This allows
1119the tests to determine if they are being executed through the harness
1120or by any other means.
1121
1122=back
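
For instance, a test script can consult C<HARNESS_ACTIVE> to notice
that it is being driven by the harness; a small sketch:

    # Tests can tell whether they are running under Test::Harness.
    if ($ENV{HARNESS_ACTIVE}) {
        print "# running under Test::Harness\n";   # comments are ignored
    }
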
0a931e4a 1123
b82fa0b7 1124=head1 EXAMPLE
1125
1126Here's how Test::Harness tests itself:
1127
1128 $ cd ~/src/devel/Test-Harness
1129 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1130 $verbose=0; runtests @ARGV;' t/*.t
1131 Using /home/schwern/src/devel/Test-Harness/blib
1132 t/base..............ok
1133 t/nonumbers.........ok
1134 t/ok................ok
1135 t/test-harness......ok
1136 All tests successful.
1137 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
f19ae7a7 1138
cb1a09d0 1139=head1 SEE ALSO
1140
b82fa0b7 1141L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1142the underlying timing routines, L<Devel::CoreStack> to generate core
1143dumps from failed tests and L<Devel::Cover> for test coverage
1144analysis.
c07a80fd 1145
1146=head1 AUTHORS
1147
1148Either Tim Bunce or Andreas Koenig, we don't know. What we know for
1149sure is that it was inspired by Larry Wall's TEST script that came
b876d4a6 1150with perl distributions for ages. Numerous anonymous contributors
b82fa0b7 1151exist. Andreas Koenig held the torch for many years.
1152
1153Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1154
1155=head1 TODO
1156
1157Provide a way of running tests quietly (i.e. no printing) for automated
1158validation of tests. This will probably take the form of a version
1159of runtests() which, rather than printing its output, returns raw data
1160on the state of the tests.
1161
1162Fix HARNESS_COMPILE_TEST without breaking its core usage.
1163
1164Figure a way to report test names in the failure summary.
37ce32a7 1165
b82fa0b7 1166Rework the test summary so long test names are not truncated as badly.
1167
1168Merge back into bleadperl.
1169
1170Deal with VMS's "not \nok 4\n" mistake.
1171
1172Add option for coverage analysis.
1173
1174=for _private
1175Keeping whittling away at _run_all_tests()
1176
1177=for _private
1178Clean up how the summary is printed. Get rid of those damned formats.
cb1a09d0 1179
1180=head1 BUGS
1181
1182Test::Harness uses $^X to determine the perl binary to run the tests
6c31b336 1183with. Test scripts running via the shebang (C<#!>) line may not be
1184portable because $^X is not consistent for shebang scripts across
cb1a09d0 1185platforms. This is no problem when Test::Harness is run with an
6c31b336 1186absolute path to the perl binary or when $^X can be found in the path.
cb1a09d0 1187
b82fa0b7 1188HARNESS_COMPILE_TEST currently assumes it is run from the Perl source
1189directory.
1190
cb1a09d0 1191=cut