Allow for deeper t/ and also a single test.pl.
[p5sagit/p5-mst-13.2.git] / lib / Test / Harness.pm
d667a7e6 1# -*- Mode: cperl; cperl-indent-level: 4 -*-
b82fa0b7 2# $Id: Harness.pm,v 1.11 2001/05/23 18:24:41 schwern Exp $
3
a0d0e21e 4package Test::Harness;
5
b82fa0b7 6require 5.004;
a0d0e21e 7use Exporter;
8use Benchmark;
4633a7c4 9use Config;
760ac839 10use strict;
11
b82fa0b7 12use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
13 $Columns $verbose $switches
14 @ISA @EXPORT @EXPORT_OK
15 );
4633a7c4 16
9c5c68c8 17# Backwards compatibility for exportable variable names.
18*verbose = \$Verbose;
19*switches = \$Switches;
20
21$Have_Devel_Corestack = 0;
22
b82fa0b7 23$VERSION = "1.21";
9b0ceca9 24
f19ae7a7 25$ENV{HARNESS_ACTIVE} = 1;
26
9b0ceca9 27# Some experimental versions of OS/2 build have broken $?
9c5c68c8 28my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
29
30my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
9b0ceca9 31
17a79f5b 32
9c5c68c8 33@ISA = ('Exporter');
34@EXPORT = qw(&runtests);
35@EXPORT_OK = qw($verbose $switches);
4633a7c4 36
9c5c68c8 37$Verbose = 0;
38$Switches = "-w";
39$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
b82fa0b7 40$Columns--; # Some shells have trouble with a full line of text.
41
42
43=head1 NAME
44
45Test::Harness - run perl standard test scripts with statistics
46
47=head1 SYNOPSIS
48
49 use Test::Harness;
50
51 runtests(@test_files);
52
53=head1 DESCRIPTION
a0d0e21e 54
b82fa0b7 55B<STOP!> If all you want to do is write a test script, consider using
56Test::Simple. Otherwise, read on.
57
58(By using the Test module, you can write test scripts without
59knowing the exact output this module expects. However, if you need to
60know the specifics, read on!)
61
Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers. The first line
output by a standard test script is C<"1..M"> with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts named as
arguments and checks standard output for the expected C<"ok N">
strings.
69
70After all tests have been performed, runtests() prints some
71performance statistics that are computed by the Benchmark module.
72
73=head2 The test script output
74
75The following explains how Test::Harness interprets the output of your
76test program.
77
78=over 4
79
80=item B<'1..M'>
81
This header tells how many tests there will be. It should be the
first line output by your test program (but it's okay if it's preceded
by comments).
85
In certain instances, you may not know how many tests you will
ultimately be running. In this case, it is permitted (but not
encouraged) for the 1..M header to appear as the B<last> line output
by your test (again, it can be followed by further comments). But we
strongly encourage you to put it first.
91
92Under B<no> circumstances should 1..M appear in the middle of your
93output or more than once.
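
Putting the pieces together, a minimal (and purely hypothetical) test
script in the format described here looks like this:

```perl
# Minimal test script in the format the harness expects.
# The plan header comes first, then one result line per test.
print "1..2\n";       # two tests will run
print "ok 1\n";       # test 1 passed
print "not ok 2\n";   # test 2 failed
```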
94
95
96=item B<'ok', 'not ok'. Ok?>
97
Any output from the test script to standard error is not captured by
the harness and thus will be seen by the user. Lines written to
standard output matching C</^(not\s+)?ok\b/> are interpreted as
feedback for runtests(). All other lines are discarded.
102
103C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
104
105
106=item B<test numbers>
107
The harness normally expects the 'ok' or 'not ok' to be followed by a
test number. It is tolerated if the test numbers after 'ok' are
omitted. In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again. So the following
test script
113
114 print <<END;
115 1..6
116 not ok
117 ok
118 not ok
119 ok
120 ok
121 END
122
123will generate
124
125 FAILED tests 1, 3, 6
126 Failed 3/6 tests, 50.00% okay
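
The fallback numbering can be sketched as follows (a hypothetical
helper; the real counter lives inside runtests()):

```perl
# Given a result line and the previously assigned test number,
# return the number for this test: an explicit number on the line
# resynchronizes the counter, otherwise we just count upwards.
sub next_test_number {
    my ($line, $last) = @_;
    if ($line =~ /^(?:not\s+)?ok\s*(\d+)/i) {
        return $1;        # explicit number wins
    }
    return $last + 1;     # no number: keep counting ourselves
}
```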
127
128
129=item B<$Test::Harness::verbose>
130
131The global variable $Test::Harness::verbose is exportable and can be
132used to let runtests() display the standard output of the script
133without altering the behavior otherwise.
134
135=item B<$Test::Harness::switches>
136
137The global variable $Test::Harness::switches is exportable and can be
138used to set perl command line options used for running the test
139script(s). The default value is C<-w>.
140
141=item B<Skipping tests>
142
If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test. If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.
149
150 ok 23 # skip Insufficient flogiston pressure.
151
Similarly, an explanation can be included in a C<1..0> line
emitted if the test script is skipped completely:
154
155 1..0 # Skipped: no leverage found
156
157=item B<Todo tests>
158
159If the standard output line contains the substring C< # TODO> after
160C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
161afterwards is the thing that has to be done before this test will
162succeed.
163
164 not ok 13 # TODO harness the power of the atom
165
166These tests represent a feature to be implemented or a bug to be fixed
167and act as something of an executable "thing to do" list. They are
168B<not> expected to succeed. Should a todo test begin succeeding,
169Test::Harness will report it as a bonus. This indicates that whatever
170you were supposed to do has been done and you should promote this to a
171normal test.
172
173=item B<Bail out!>
174
175As an emergency measure, a test script can decide that further tests
176are useless (e.g. missing dependencies) and testing should stop
177immediately. In that case the test script prints the magic words
178
179 Bail out!
180
181to standard output. Any message after these words will be displayed by
182C<Test::Harness> as the reason why testing is stopped.
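
For example, a test script might bail out when a prerequisite module
is missing (a sketch; the C<Some::Module> dependency is purely
illustrative):

```perl
# Bail out of the whole run if a prerequisite can't be loaded.
# Some::Module is a hypothetical dependency for illustration.
eval { require Some::Module };
if ($@) {
    print "Bail out! Some::Module is not installed\n";
    exit 0;
}
print "1..1\n";
print "ok 1\n";
```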
183
184=item B<Comments>
185
Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them.
189
190 ok 1
191 # Life is good, the sun is shining, RAM is cheap.
192 not ok 2
193 # got 'Bush' expected 'Gore'
194
195=item B<Anything else>
196
Any other output Test::Harness sees, it will silently ignore B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
test script, please use a comment.
200
201=back
202
203
204=head2 Failure
205
It will happen: your tests will fail. After you mop up your ego, you
can begin examining the summary report:
208
209 t/base..............ok
210 t/nonumbers.........ok
211 t/ok................ok
212 t/test-harness......ok
213 t/waterloo..........dubious
214 Test returned status 3 (wstat 768, 0x300)
215 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
216 Failed 10/20 tests, 50.00% okay
217 Failed Test Stat Wstat Total Fail Failed List of Failed
218 -----------------------------------------------------------------------
219 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
220 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
221
Everything passed but t/waterloo.t. It failed 10 of 20 tests and
exited with a non-zero status, indicating something dubious happened.
224
225The columns in the summary report mean:
226
227=over 4
228
229=item B<Failed Test>
230
231The test file which failed.
232
233=item B<Stat>
234
235If the test exited with non-zero, this is its exit status.
236
237=item B<Wstat>
238
The wait status of the test, i.e. the raw C<$?> the test process
returned.
240
241=item B<Total>
242
243Total number of tests expected to run.
244
245=item B<Fail>
246
247Number which failed, either from "not ok" or because they never ran.
248
249=item B<Failed>
250
251Percentage of the total tests which failed.
252
253=item B<List of Failed>
254
A list of the tests which failed. Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).
258
259=back
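
The B<List of Failed> abbreviation can be sketched like this (a
simplified stand-in for the module's internal canonfailed() routine):

```perl
# Collapse a sorted list of failed test numbers into ranges,
# e.g. (15,16,17,18,19,20) becomes "15-20".
sub collapse_ranges {
    my @failed = sort { $a <=> $b } @_;
    my @canon;
    while (@failed) {
        my $min = my $last = shift @failed;
        # extend the run while the next number is consecutive
        $last = shift @failed while @failed && $failed[0] == $last + 1;
        push @canon, $min == $last ? $min : "$min-$last";
    }
    return join ' ', @canon;
}
```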
260
261
262=head2 Functions
263
Test::Harness currently has only one function; here it is.
265
266=over 4
267
268=item B<runtests>
269
270 my $allok = runtests(@test_files);
271
This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.
276
277It returns true if everything was ok, false otherwise.
278
279=for _private
280This is just _run_all_tests() plus _show_results()
281
282=cut
17a79f5b 283
a0d0e21e 284sub runtests {
285 my(@tests) = @_;
9c5c68c8 286
b82fa0b7 287 local ($\, $,);
288
289 my($tot, $failedtests) = _run_all_tests(@tests);
9c5c68c8 290 _show_results($tot, $failedtests);
291
b82fa0b7 292 my $ok = ($tot->{bad} == 0 && $tot->{max});
293
294 die q{Assert '$ok xor keys %$failedtests' failed!}
295 unless $ok xor keys %$failedtests;
296
297 return $ok;
298}
299
300=begin _private
301
302=item B<_globdir>
303
304 my @files = _globdir $dir;
305
306Returns all the files in a directory. This is shorthand for backwards
307compatibility on systems where glob() doesn't work right.
308
309=cut
310
311sub _globdir {
312 opendir DIRH, shift;
313 my @f = readdir DIRH;
314 closedir DIRH;
315
316 return @f;
9c5c68c8 317}
318
b82fa0b7 319=item B<_run_all_tests>
320
321 my($total, $failed) = _run_all_tests(@test_files);
322
Runs all the given @test_files (as runtests()) but does it quietly (no
report). $total is a hash ref summary of all the tests run. Its keys
and values are these:
326
  bonus           Number of individual todo tests unexpectedly passed
  max             Number of individual tests run
  ok              Number of individual tests passed
  sub_skipped     Number of individual tests skipped

  files           Number of test files run
  good            Number of test files passed
  bad             Number of test files failed
  tests           Number of test files originally given
  skipped         Number of test files skipped
337
338If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
339test.
340
341$failed is a hash ref of all the test scripts which failed. Each key
342is the name of a test script, each value is another hash representing
343how that script failed. Its keys are these:
9c5c68c8 344
b82fa0b7 345 name Name of the test which failed
346 estat Script's exit value
347 wstat Script's wait status
348 max Number of individual tests
349 failed Number which failed
350 percent Percentage of tests which failed
351 canon List of tests which failed (as string).
352
353Needless to say, $failed should be empty if everything passed.
354
355B<NOTE> Currently this function is still noisy. I'm working on it.
356
357=cut
358
359sub _run_all_tests {
9c5c68c8 360 my(@tests) = @_;
a0d0e21e 361 local($|) = 1;
9c5c68c8 362 my(%failedtests);
363
364 # Test-wide totals.
365 my(%tot) = (
366 bonus => 0,
367 max => 0,
368 ok => 0,
369 files => 0,
370 bad => 0,
371 good => 0,
372 tests => scalar @tests,
373 sub_skipped => 0,
374 skipped => 0,
375 bench => 0
376 );
774d564b 377
378 # pass -I flags to children
81ff29e3 379 my $old5lib = $ENV{PERL5LIB};
774d564b 380
1250aba5 381 # VMS has a 255-byte limit on the length of %ENV entries, so
382 # toss the ones that involve perl_root, the install location
383 # for VMS
384 my $new5lib;
385 if ($^O eq 'VMS') {
386 $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
9c5c68c8 387 $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g;
1250aba5 388 }
389 else {
390 $new5lib = join($Config{path_sep}, @INC);
391 }
392
393 local($ENV{'PERL5LIB'}) = $new5lib;
a0d0e21e 394
b82fa0b7 395 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
a0d0e21e 396 my $t_start = new Benchmark;
9c5c68c8 397
63b097aa 398 my $maxlen = 0;
7a315204 399 foreach (@tests) {
400 my $len = length;
401 $maxlen = $len if $len > $maxlen;
402 }
403 # +3 : we want three dots between the test name and the "ok"
404 # -2 : the .t suffix
405 my $width = $maxlen + 3 - 2;
b82fa0b7 406 foreach my $tfile (@tests) {
7a315204 407 my($leader, $ml) = _mk_leader($tfile, $width);
b82fa0b7 408 print $leader;
9c5c68c8 409
b82fa0b7 410 my $fh = _open_test($tfile);
9c5c68c8 411
412 # state of the current test.
413 my %test = (
414 ok => 0,
b82fa0b7 415 'next' => 0,
9c5c68c8 416 max => 0,
417 failed => [],
418 todo => {},
419 bonus => 0,
420 skipped => 0,
421 skip_reason => undef,
422 ml => $ml,
423 );
424
425 my($seen_header, $tests_seen) = (0,0);
c07a80fd 426 while (<$fh>) {
9c5c68c8 427 if( _parse_header($_, \%test, \%tot) ) {
428 warn "Test header seen twice!\n" if $seen_header;
429
430 $seen_header = 1;
431
432 warn "1..M can only appear at the beginning or end of tests\n"
433 if $tests_seen && $test{max} < $tests_seen;
434 }
435 elsif( _parse_test_line($_, \%test, \%tot) ) {
436 $tests_seen++;
d667a7e6 437 }
9c5c68c8 438 # else, ignore it.
c07a80fd 439 }
9c5c68c8 440
441 my($estatus, $wstatus) = _close_fh($fh);
442
b82fa0b7 443 my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;
444
68dc0745 445 if ($wstatus) {
b82fa0b7 446 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
9c5c68c8 447 $estatus, $wstatus);
b82fa0b7 448 $failedtests{$tfile}{name} = $tfile;
9c5c68c8 449 }
b82fa0b7 450 elsif ($allok) {
9c5c68c8 451 if ($test{max} and $test{skipped} + $test{bonus}) {
7b13a3f5 452 my @msg;
9c5c68c8 453 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
454 if $test{skipped};
455 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
456 if $test{bonus};
457 print "$test{ml}ok, ".join(', ', @msg)."\n";
458 } elsif ($test{max}) {
459 print "$test{ml}ok\n";
460 } elsif (defined $test{skip_reason}) {
461 print "skipped: $test{skip_reason}\n";
462 $tot{skipped}++;
c0ee6f5c 463 } else {
45c0de28 464 print "skipped test on this platform\n";
9c5c68c8 465 $tot{skipped}++;
c0ee6f5c 466 }
9c5c68c8 467 $tot{good}++;
6c31b336 468 }
b82fa0b7 469 else {
470 if ($test{max}) {
471 if ($test{'next'} <= $test{max}) {
472 push @{$test{failed}}, $test{'next'}..$test{max};
473 }
474 if (@{$test{failed}}) {
475 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
476 @{$test{failed}});
477 print "$test{ml}$txt";
478 $failedtests{$tfile} = { canon => $canon,
479 max => $test{max},
480 failed => scalar @{$test{failed}},
481 name => $tfile,
482 percent => 100*(scalar @{$test{failed}})/$test{max},
483 estat => '',
484 wstat => '',
485 };
486 } else {
487 print "Don't know which tests failed: got $test{ok} ok, ".
488 "expected $test{max}\n";
489 $failedtests{$tfile} = { canon => '??',
490 max => $test{max},
491 failed => '??',
492 name => $tfile,
493 percent => undef,
494 estat => '',
495 wstat => '',
496 };
497 }
498 $tot{bad}++;
499 } elsif ($test{'next'} == 0) {
500 print "FAILED before any test output arrived\n";
501 $tot{bad}++;
502 $failedtests{$tfile} = { canon => '??',
503 max => '??',
504 failed => '??',
505 name => $tfile,
506 percent => undef,
507 estat => '',
508 wstat => '',
509 };
510 }
511 }
512
9c5c68c8 513 $tot{sub_skipped} += $test{skipped};
514
515 if (defined $Files_In_Dir) {
b82fa0b7 516 my @new_dir_files = _globdir $Files_In_Dir;
17a79f5b 517 if (@new_dir_files != @dir_files) {
518 my %f;
519 @f{@new_dir_files} = (1) x @new_dir_files;
520 delete @f{@dir_files};
521 my @f = sort keys %f;
522 print "LEAKED FILES: @f\n";
523 @dir_files = @new_dir_files;
524 }
525 }
a0d0e21e 526 }
9c5c68c8 527 $tot{bench} = timediff(new Benchmark, $t_start);
d667a7e6 528
774d564b 529 if ($^O eq 'VMS') {
530 if (defined $old5lib) {
531 $ENV{PERL5LIB} = $old5lib;
b876d4a6 532 } else {
774d564b 533 delete $ENV{PERL5LIB};
534 }
535 }
9c5c68c8 536
537 return(\%tot, \%failedtests);
538}
539
b82fa0b7 540=item B<_mk_leader>
541
7a315204 542 my($leader, $ml) = _mk_leader($test_file, $width);
b82fa0b7 543
544Generates the 't/foo........' $leader for the given $test_file as well
545as a similar version which will overwrite the current line (by use of
546\r and such). $ml may be empty if Test::Harness doesn't think you're
7a315204 547on TTY. The width is the width of the "yada/blah..." string.
b82fa0b7 548
549=cut
550
551sub _mk_leader {
7a315204 552 my ($te, $width) = @_;
553
b82fa0b7 554 chop($te); # XXX chomp?
555
556 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
557 my $blank = (' ' x 77);
7a315204 558 my $leader = "$te" . '.' x ($width - length($te));
b82fa0b7 559 my $ml = "";
560
561 $ml = "\r$blank\r$leader"
562 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
563
564 return($leader, $ml);
565}
566
9c5c68c8 567
568sub _show_results {
569 my($tot, $failedtests) = @_;
570
571 my $pct;
572 my $bonusmsg = _bonusmsg($tot);
573
574 if ($tot->{bad} == 0 && $tot->{max}) {
7b13a3f5 575 print "All tests successful$bonusmsg.\n";
9c5c68c8 576 } elsif ($tot->{tests}==0){
6c31b336 577 die "FAILED--no tests were run for some reason.\n";
9c5c68c8 578 } elsif ($tot->{max} == 0) {
579 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
580 die "FAILED--$tot->{tests} test $blurb could be run, ".
581 "alas--no output ever seen\n";
c07a80fd 582 } else {
9c5c68c8 583 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
6c31b336 584 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
9c5c68c8 585 $tot->{max} - $tot->{ok}, $tot->{max},
586 100*$tot->{ok}/$tot->{max};
0a931e4a 587
9c5c68c8 588 my($fmt_top, $fmt) = _create_fmts($failedtests);
0a931e4a 589
590 # Now write to formats
9c5c68c8 591 for my $script (sort keys %$failedtests) {
592 $Curtest = $failedtests->{$script};
760ac839 593 write;
594 }
9c5c68c8 595 if ($tot->{bad}) {
9b0ceca9 596 $bonusmsg =~ s/^,\s*//;
597 print "$bonusmsg.\n" if $bonusmsg;
9c5c68c8 598 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
599 "$subpct\n";
c07a80fd 600 }
601 }
f0a9308e 602
9c5c68c8 603 printf("Files=%d, Tests=%d, %s\n",
604 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
605}
606
607
608sub _parse_header {
609 my($line, $test, $tot) = @_;
610
611 my $is_header = 0;
612
613 print $line if $Verbose;
614
615 # 1..10 todo 4 7 10;
616 if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
617 $test->{max} = $1;
618 for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }
619
620 $tot->{max} += $test->{max};
621 $tot->{files}++;
622
623 $is_header = 1;
624 }
625 # 1..10
626 # 1..0 # skip Why? Because I said so!
627 elsif ($line =~ /^1\.\.([0-9]+)
b82fa0b7 628 (\s*\#\s*[Ss]kip\S*\s* (.+))?
9c5c68c8 629 /x
630 )
631 {
632 $test->{max} = $1;
633 $tot->{max} += $test->{max};
634 $tot->{files}++;
b82fa0b7 635 $test->{'next'} = 1 unless $test->{'next'};
9c5c68c8 636 $test->{skip_reason} = $3 if not $test->{max} and defined $3;
637
638 $is_header = 1;
639 }
640 else {
641 $is_header = 0;
642 }
643
644 return $is_header;
c07a80fd 645}
646
9c5c68c8 647
b82fa0b7 648sub _open_test {
649 my($test) = shift;
650
651 my $s = _set_switches($test);
652
653 # XXX This is WAY too core specific!
654 my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
655 ? "./perl -I../lib ../utils/perlcc $test "
656 . "-r 2>> ./compilelog |"
657 : "$^X $s $test|";
658 $cmd = "MCR $cmd" if $^O eq 'VMS';
659
660 if( open(PERL, $cmd) ) {
661 return \*PERL;
662 }
663 else {
664 print "can't run $test. $!\n";
665 return;
666 }
667}
668
669sub _run_one_test {
670 my($test) = @_;
671
672
673}
674
675
9c5c68c8 676sub _parse_test_line {
677 my($line, $test, $tot) = @_;
678
679 if ($line =~ /^(not\s+)?ok\b/i) {
b82fa0b7 680 my $this = $test->{'next'} || 1;
9c5c68c8 681 # "not ok 23"
37ce32a7 682 if ($line =~ /^(not )?ok\s*(\d*)(\s*#.*)?/) {
683 my($not, $tnum, $extra) = ($1, $2, $3);
684
685 $this = $tnum if $tnum;
686
687 my($type, $reason) = $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/
688 if defined $extra;
689
690 my($istodo, $isskip);
691 if( defined $type ) {
692 $istodo = $type =~ /TODO/;
693 $isskip = $type =~ /skip/i;
694 }
695
696 $test->{todo}{$tnum} = 1 if $istodo;
697
698 if( $not ) {
699 print "$test->{ml}NOK $this" if $test->{ml};
700 if (!$test->{todo}{$this}) {
701 push @{$test->{failed}}, $this;
702 } else {
703 $test->{ok}++;
704 $tot->{ok}++;
705 }
706 }
707 else {
708 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
709 $test->{ok}++;
710 $tot->{ok}++;
711 $test->{skipped}++ if $isskip;
712
713 if (defined $reason and defined $test->{skip_reason}) {
714 # print "was: '$skip_reason' new '$reason'\n";
715 $test->{skip_reason} = 'various reasons'
716 if $test->{skip_reason} ne $reason;
717 } elsif (defined $reason) {
718 $test->{skip_reason} = $reason;
719 }
720
721 $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
722 }
9c5c68c8 723 }
724 # XXX ummm... dunno
725 elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
726 $this = $1 if $1 > 0;
727 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
728 $test->{ok}++;
729 $tot->{ok}++;
730 }
731 else {
732 # an ok or not ok not matching the 3 cases above...
733 # just ignore it for compatibility with TEST
734 next;
735 }
736
b82fa0b7 737 if ($this > $test->{'next'}) {
9c5c68c8 738 # print "Test output counter mismatch [test $this]\n";
739 # no need to warn probably
b82fa0b7 740 push @{$test->{failed}}, $test->{'next'}..$this-1;
9c5c68c8 741 }
b82fa0b7 742 elsif ($this < $test->{'next'}) {
9c5c68c8 743 #we have seen more "ok" lines than the number suggests
744 print "Confused test output: test $this answered after ".
b82fa0b7 745 "test ", $test->{'next'}-1, "\n";
746 $test->{'next'} = $this;
9c5c68c8 747 }
b82fa0b7 748 $test->{'next'} = $this + 1;
9c5c68c8 749
750 }
751 elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
752 die "FAILED--Further testing stopped" .
753 ($1 ? ": $1\n" : ".\n");
754 }
755}
756
757
758sub _bonusmsg {
759 my($tot) = @_;
760
761 my $bonusmsg = '';
762 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
763 " UNEXPECTEDLY SUCCEEDED)")
764 if $tot->{bonus};
765
766 if ($tot->{skipped}) {
767 $bonusmsg .= ", $tot->{skipped} test"
768 . ($tot->{skipped} != 1 ? 's' : '');
769 if ($tot->{sub_skipped}) {
770 $bonusmsg .= " and $tot->{sub_skipped} subtest"
771 . ($tot->{sub_skipped} != 1 ? 's' : '');
772 }
773 $bonusmsg .= ' skipped';
774 }
775 elsif ($tot->{sub_skipped}) {
776 $bonusmsg .= ", $tot->{sub_skipped} subtest"
777 . ($tot->{sub_skipped} != 1 ? 's' : '')
778 . " skipped";
779 }
780
781 return $bonusmsg;
782}
783
784# VMS has some subtle nastiness with closing the test files.
785sub _close_fh {
786 my($fh) = shift;
787
788 close($fh); # must close to reap child resource values
789
790 my $wstatus = $Ignore_Exitcode ? 0 : $?; # Can trust $? ?
791 my $estatus;
792 $estatus = ($^O eq 'VMS'
793 ? eval 'use vmsish "status"; $estatus = $?'
794 : $wstatus >> 8);
795
796 return($estatus, $wstatus);
797}
798
799
800# Set up the command-line switches to run perl as.
801sub _set_switches {
802 my($test) = shift;
803
b82fa0b7 804 local *TEST;
805 open(TEST, $test) or print "can't open $test. $!\n";
806 my $first = <TEST>;
9c5c68c8 807 my $s = $Switches;
808 $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
809 if exists $ENV{'HARNESS_PERL_SWITCHES'};
810 $s .= join " ", q[ "-T"], map {qq["-I$_"]} @INC
811 if $first =~ /^#!.*\bperl.*-\w*T/;
812
b82fa0b7 813 close(TEST) or print "can't close $test. $!\n";
9c5c68c8 814
815 return $s;
816}
817
818
819# Test program go boom.
820sub _dubious_return {
821 my($test, $tot, $estatus, $wstatus) = @_;
822 my ($failed, $canon, $percent) = ('??', '??');
823
824 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
825 "(wstat %d, 0x%x)\n",
826 $wstatus,$wstatus;
827 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
828
829 if (corestatus($wstatus)) { # until we have a wait module
830 if ($Have_Devel_Corestack) {
831 Devel::CoreStack::stack($^X);
832 } else {
833 print "\ttest program seems to have generated a core\n";
834 }
835 }
836
837 $tot->{bad}++;
838
839 if ($test->{max}) {
b82fa0b7 840 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
9c5c68c8 841 print "\tafter all the subtests completed successfully\n";
842 $percent = 0;
843 $failed = 0; # But we do not set $canon!
844 }
845 else {
b82fa0b7 846 push @{$test->{failed}}, $test->{'next'}..$test->{max};
9c5c68c8 847 $failed = @{$test->{failed}};
848 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
849 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
850 print "DIED. ",$txt;
851 }
852 }
853
854 return { canon => $canon, max => $test->{max} || '??',
855 failed => $failed,
66fd8cb9 856 percent => $percent,
9c5c68c8 857 estat => $estatus, wstat => $wstatus,
858 };
859}
860
861
862sub _garbled_output {
863 my($gibberish) = shift;
864 warn "Confusing test output: '$gibberish'\n";
865}
866
867
868sub _create_fmts {
869 my($failedtests) = @_;
870
b82fa0b7 871 my $failed_str = "Failed Test";
872 my $middle_str = " Stat Wstat Total Fail Failed ";
9c5c68c8 873 my $list_str = "List of Failed";
874
875 # Figure out our longest name string for formatting purposes.
876 my $max_namelen = length($failed_str);
877 foreach my $script (keys %$failedtests) {
878 my $namelen = length $failedtests->{$script}->{name};
879 $max_namelen = $namelen if $namelen > $max_namelen;
880 }
881
882 my $list_len = $Columns - length($middle_str) - $max_namelen;
883 if ($list_len < length($list_str)) {
884 $list_len = length($list_str);
885 $max_namelen = $Columns - length($middle_str) - $list_len;
886 if ($max_namelen < length($failed_str)) {
887 $max_namelen = length($failed_str);
888 $Columns = $max_namelen + length($middle_str) + $list_len;
889 }
890 }
891
892 my $fmt_top = "format STDOUT_TOP =\n"
b82fa0b7 893 . sprintf("%-${max_namelen}s", $failed_str)
9c5c68c8 894 . $middle_str
895 . $list_str . "\n"
896 . "-" x $Columns
897 . "\n.\n";
898
899 my $fmt = "format STDOUT =\n"
900 . "@" . "<" x ($max_namelen - 1)
b82fa0b7 901 . " @>> @>>>> @>>>> @>>> ^##.##% "
9c5c68c8 902 . "^" . "<" x ($list_len - 1) . "\n"
903 . '{ $Curtest->{name}, $Curtest->{estat},'
904 . ' $Curtest->{wstat}, $Curtest->{max},'
905 . ' $Curtest->{failed}, $Curtest->{percent},'
906 . ' $Curtest->{canon}'
907 . "\n}\n"
908 . "~~" . " " x ($Columns - $list_len - 2) . "^"
909 . "<" x ($list_len - 1) . "\n"
910 . '$Curtest->{canon}'
911 . "\n.\n";
912
913 eval $fmt_top;
914 die $@ if $@;
915 eval $fmt;
916 die $@ if $@;
917
918 return($fmt_top, $fmt);
919}
920
b82fa0b7 921{
922 my $tried_devel_corestack;
9c5c68c8 923
b82fa0b7 924 sub corestatus {
925 my($st) = @_;
c0ee6f5c 926
b82fa0b7 927 eval {require 'wait.ph'};
928 my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
c0ee6f5c 929
b82fa0b7 930 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
931 unless $tried_devel_corestack++;
c0ee6f5c 932
b82fa0b7 933 $ret;
934 }
c0ee6f5c 935}
936
c07a80fd 937sub canonfailed ($@) {
89d3b7e2 938 my($max,$skipped,@failed) = @_;
6c31b336 939 my %seen;
940 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
c07a80fd 941 my $failed = @failed;
942 my @result = ();
943 my @canon = ();
944 my $min;
945 my $last = $min = shift @failed;
760ac839 946 my $canon;
c07a80fd 947 if (@failed) {
948 for (@failed, $failed[-1]) { # don't forget the last one
949 if ($_ > $last+1 || $_ == $last) {
950 if ($min == $last) {
951 push @canon, $last;
952 } else {
953 push @canon, "$min-$last";
954 }
955 $min = $_;
956 }
957 $last = $_;
958 }
959 local $" = ", ";
960 push @result, "FAILED tests @canon\n";
b82fa0b7 961 $canon = join ' ', @canon;
a0d0e21e 962 } else {
c07a80fd 963 push @result, "FAILED test $last\n";
760ac839 964 $canon = $last;
a0d0e21e 965 }
c07a80fd 966
967 push @result, "\tFailed $failed/$max tests, ";
89d3b7e2 968 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
969 my $ender = 's' x ($skipped > 1);
970 my $good = $max - $failed - $skipped;
971 my $goodper = sprintf("%.2f",100*($good/$max));
9c5c68c8 972 push @result, " (-$skipped skipped test$ender: $good okay, ".
973 "$goodper%)"
974 if $skipped;
89d3b7e2 975 push @result, "\n";
760ac839 976 my $txt = join "", @result;
977 ($txt, $canon);
a0d0e21e 978}
979
b82fa0b7 980=end _private
9c5c68c8 981
b82fa0b7 982=back
d667a7e6 983
b82fa0b7 984=cut
9c5c68c8 985
9c5c68c8 986
b82fa0b7 9871;
988__END__
9c5c68c8 989
990
cb1a09d0 991=head1 EXPORT
992
C<&runtests> is exported by Test::Harness by default.
cb1a09d0 994
9c5c68c8 995C<$verbose> and C<$switches> are exported upon request.
996
997
cb1a09d0 998=head1 DIAGNOSTICS
999
1000=over 4
1001
1002=item C<All tests successful.\nFiles=%d, Tests=%d, %s>
1003
1004If all tests are successful some statistics about the performance are
1005printed.
1006
6c31b336 1007=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
1008
For any single script that has failing subtests, statistics like the
above are printed.
1011
1012=item C<Test returned status %d (wstat %d)>
1013
For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.
6c31b336 1016
1017=item C<Failed 1 test, %.2f%% okay. %s>
cb1a09d0 1018
6c31b336 1019=item C<Failed %d/%d tests, %.2f%% okay. %s>
cb1a09d0 1020
1021If not all tests were successful, the script dies with one of the
1022above messages.
1023
d667a7e6 1024=item C<FAILED--Further testing stopped%s>
1025
1026If a single subtest decides that further testing will not make sense,
1027the script dies with this message.
1028
cb1a09d0 1029=back
1030
9b0ceca9 1031=head1 ENVIRONMENT
1032
37ce32a7 1033=over 4
1034
b82fa0b7 1035=item C<HARNESS_IGNORE_EXITCODE>
37ce32a7 1036
1037Makes harness ignore the exit status of child processes when defined.
1038
b82fa0b7 1039=item C<HARNESS_NOTTY>
9b0ceca9 1040
When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console. You may need to set this if you don't want
the harness to output progress messages using carriage returns; some
consoles may not handle carriage returns properly (which results in a
somewhat messy output).
0d0c0d42 1046
b82fa0b7 1047=item C<HARNESS_COMPILE_TEST>
9636a016 1048
37ce32a7 1049When true it will make harness attempt to compile the test using
1050C<perlcc> before running it.
1051
b82fa0b7 1052B<NOTE> This currently only works when sitting in the perl source
1053directory!
1054
1055=item C<HARNESS_FILELEAK_IN_DIR>
37ce32a7 1056
1057When set to the name of a directory, harness will check after each
1058test whether new files appeared in that directory, and report them as
17a79f5b 1059
1060 LEAKED FILES: scr.tmp 0 my.db
1061
If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called. Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.
1065
b82fa0b7 1066=item C<HARNESS_PERL_SWITCHES>
37ce32a7 1067
1068Its value will be prepended to the switches used to invoke perl on
b82fa0b7 1069each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
37ce32a7 1070run all tests with all warnings enabled.
1071
b82fa0b7 1072=item C<HARNESS_COLUMNS>
37ce32a7 1073
1074This value will be used for the width of the terminal. If it is not
1075set then it will default to C<COLUMNS>. If this is not set, it will
1076default to 80. Note that users of Bourne-sh based shells will need to
1077C<export COLUMNS> for this module to use that variable.
2b32313b 1078
b82fa0b7 1079=item C<HARNESS_ACTIVE>
37ce32a7 1080
1081Harness sets this before executing the individual tests. This allows
1082the tests to determine if they are being executed through the harness
1083or by any other means.
1084
1085=back
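
A test script can use C<HARNESS_ACTIVE> to decide how chatty to be (a
sketch, not part of the module's API):

```perl
# Detect whether we are being run under Test::Harness, which sets
# HARNESS_ACTIVE before executing each test.
sub running_under_harness { $ENV{HARNESS_ACTIVE} ? 1 : 0 }

# Only emit free-form diagnostics when a human is watching.
print "# verbose diagnostics here\n" unless running_under_harness();
```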
0a931e4a 1086
b82fa0b7 1087=head1 EXAMPLE
1088
1089Here's how Test::Harness tests itself
1090
1091 $ cd ~/src/devel/Test-Harness
1092 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1093 $verbose=0; runtests @ARGV;' t/*.t
1094 Using /home/schwern/src/devel/Test-Harness/blib
1095 t/base..............ok
1096 t/nonumbers.........ok
1097 t/ok................ok
1098 t/test-harness......ok
1099 All tests successful.
1100 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
f19ae7a7 1101
cb1a09d0 1102=head1 SEE ALSO
1103
b82fa0b7 1104L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1105the underlying timing routines, L<Devel::CoreStack> to generate core
1106dumps from failed tests and L<Devel::Cover> for test coverage
1107analysis.
c07a80fd 1108
1109=head1 AUTHORS
1110
Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years.
1115
1116Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1117
1118=head1 TODO
1119
1120Provide a way of running tests quietly (ie. no printing) for automated
1121validation of tests. This will probably take the form of a version
1122of runtests() which rather than printing its output returns raw data
1123on the state of the tests.
1124
1125Fix HARNESS_COMPILE_TEST without breaking its core usage.
1126
1127Figure a way to report test names in the failure summary.
37ce32a7 1128
b82fa0b7 1129Rework the test summary so long test names are not truncated as badly.
1130
1131Merge back into bleadperl.
1132
1133Deal with VMS's "not \nok 4\n" mistake.
1134
1135Add option for coverage analysis.
1136
1137=for _private
1138Keeping whittling away at _run_all_tests()
1139
1140=for _private
1141Clean up how the summary is printed. Get rid of those damned formats.
cb1a09d0 1142
1143=head1 BUGS
1144
1145Test::Harness uses $^X to determine the perl binary to run the tests
6c31b336 1146with. Test scripts running via the shebang (C<#!>) line may not be
1147portable because $^X is not consistent for shebang scripts across
cb1a09d0 1148platforms. This is no problem when Test::Harness is run with an
6c31b336 1149absolute path to the perl binary or when $^X can be found in the path.
cb1a09d0 1150
b82fa0b7 1151HARNESS_COMPILE_TEST currently assumes its run from the Perl source
1152directory.
1153
cb1a09d0 1154=cut