lib/Test/Harness.pm (from p5sagit/p5-mst-13.2.git)
d667a7e6 1# -*- Mode: cperl; cperl-indent-level: 4 -*-
b82fa0b7 2# $Id: Harness.pm,v 1.11 2001/05/23 18:24:41 schwern Exp $
3
a0d0e21e 4package Test::Harness;
5
b82fa0b7 6require 5.004;
a0d0e21e 7use Exporter;
8use Benchmark;
4633a7c4 9use Config;
760ac839 10use strict;
11
b82fa0b7 12use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
13 $Columns $verbose $switches
14 @ISA @EXPORT @EXPORT_OK
15 );
4633a7c4 16
9c5c68c8 17# Backwards compatibility for exportable variable names.
18*verbose = \$Verbose;
19*switches = \$Switches;
20
21$Have_Devel_Corestack = 0;
22
b82fa0b7 23$VERSION = "1.21";
9b0ceca9 24
f19ae7a7 25$ENV{HARNESS_ACTIVE} = 1;
26
9b0ceca9 27# Some experimental versions of OS/2 build have broken $?
9c5c68c8 28my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
29
30my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
9b0ceca9 31
17a79f5b 32
9c5c68c8 33@ISA = ('Exporter');
34@EXPORT = qw(&runtests);
35@EXPORT_OK = qw($verbose $switches);
4633a7c4 36
9c5c68c8 37$Verbose = 0;
38$Switches = "-w";
39$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
b82fa0b7 40$Columns--; # Some shells have trouble with a full line of text.
41
42
43=head1 NAME
44
45Test::Harness - run perl standard test scripts with statistics
46
47=head1 SYNOPSIS
48
49 use Test::Harness;
50
51 runtests(@test_files);
52
53=head1 DESCRIPTION
a0d0e21e 54
b82fa0b7 55B<STOP!> If all you want to do is write a test script, consider using
56Test::Simple. Otherwise, read on.
57
58(By using the Test module, you can write test scripts without
59knowing the exact output this module expects. However, if you need to
60know the specifics, read on!)
61
Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers. The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test
script. Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.
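
For example, a minimal test script can emit this protocol by hand (a
sketch; real scripts would normally use the Test or Test::Simple
modules rather than printing directly):

```perl
#!/usr/bin/perl -w
use strict;

# Each expression is one test; true means "ok", false means "not ok".
my @results = (1 + 1 == 2, "perl" =~ /perl/);

print "1..", scalar(@results), "\n";    # the "1..M" header
my $n = 0;
for my $passed (@results) {
    $n++;
    print $passed ? "ok $n\n" : "not ok $n\n";
}
```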
69
70After all tests have been performed, runtests() prints some
71performance statistics that are computed by the Benchmark module.
72
73=head2 The test script output
74
75The following explains how Test::Harness interprets the output of your
76test program.
77
78=over 4
79
80=item B<'1..M'>
81
This header tells how many tests there will be. It should be the
first line output by your test program (but it's okay if it's preceded
by comments).
85
In certain instances, you may not know how many tests you will
ultimately be running. In this case, it is permitted (but not
encouraged) for the 1..M header to appear as the B<last> line output
by your test (again, it can be followed by further comments). But we
strongly encourage you to put it first.
91
92Under B<no> circumstances should 1..M appear in the middle of your
93output or more than once.
94
95
96=item B<'ok', 'not ok'. Ok?>
97
Any output from the test script to standard error is ignored by the
harness and passed straight through, so it will be seen by the
user. Lines written to standard output containing
C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests(). All other lines are discarded.
102
103C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
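
A sketch of that classification in plain Perl (simplified; the real
runtests() also tracks test numbers, skips and todos):

```perl
#!/usr/bin/perl -w
use strict;

# Classify a line of test output the way the harness does (simplified).
sub classify {
    my $line = shift;
    return 'fail'    if $line =~ /^not\s+ok\b/;
    return 'pass'    if $line =~ /^ok\b/;
    return 'ignored';                 # everything else is discarded
}

my @verdicts = map { classify($_) } "ok 1", "not ok 2", "# just a comment";
```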
104
105
106=item B<test numbers>
107
Perl normally expects the 'ok' or 'not ok' to be followed by a test
number. The test numbers after 'ok' may, however, be omitted; in that
case Test::Harness temporarily maintains its own counter until the
script supplies test numbers again. So the following
test script
113
114 print <<END;
115 1..6
116 not ok
117 ok
118 not ok
119 ok
120 ok
121 END
122
123will generate
124
125 FAILED tests 1, 3, 6
126 Failed 3/6 tests, 50.00% okay
127
128
129=item B<$Test::Harness::verbose>
130
The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without otherwise altering its behavior.
134
135=item B<$Test::Harness::switches>
136
137The global variable $Test::Harness::switches is exportable and can be
138used to set perl command line options used for running the test
139script(s). The default value is C<-w>.
140
141=item B<Skipping tests>
142
If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test. If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as the reason
for skipping.
149
150 ok 23 # skip Insufficient flogiston pressure.
151
Similarly, an explanation can be included in a C<1..0> line
emitted if the test script is skipped completely:
154
155 1..0 # Skipped: no leverage found
156
157=item B<Todo tests>
158
159If the standard output line contains the substring C< # TODO> after
160C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
161afterwards is the thing that has to be done before this test will
162succeed.
163
164 not ok 13 # TODO harness the power of the atom
165
166These tests represent a feature to be implemented or a bug to be fixed
167and act as something of an executable "thing to do" list. They are
168B<not> expected to succeed. Should a todo test begin succeeding,
169Test::Harness will report it as a bonus. This indicates that whatever
170you were supposed to do has been done and you should promote this to a
171normal test.
172
173=item B<Bail out!>
174
175As an emergency measure, a test script can decide that further tests
176are useless (e.g. missing dependencies) and testing should stop
177immediately. In that case the test script prints the magic words
178
179 Bail out!
180
181to standard output. Any message after these words will be displayed by
182C<Test::Harness> as the reason why testing is stopped.
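
For instance, a script might bail out when a prerequisite module is
absent (a sketch; C<Acme::Nonexistent> is a made-up stand-in for
whatever your tests actually require):

```perl
#!/usr/bin/perl -w
use strict;

# Acme::Nonexistent is a hypothetical prerequisite used for illustration.
my $have_prereq = eval { require Acme::Nonexistent; 1 };

if ($have_prereq) {
    print "1..1\n";
    print "ok 1\n";
}
else {
    # Ask the harness to stop all further testing.
    print "Bail out! Acme::Nonexistent is not installed\n";
}
```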
183
184=item B<Comments>
185
Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them.
189
190 ok 1
191 # Life is good, the sun is shining, RAM is cheap.
192 not ok 2
193 # got 'Bush' expected 'Gore'
194
195=item B<Anything else>
196
Test::Harness silently ignores any other output, B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
test script, please use a comment.
200
201=back
202
203
204=head2 Failure
205
It will happen: your tests will fail. After you mop up your ego, you
can begin examining the summary report:
208
209 t/base..............ok
210 t/nonumbers.........ok
211 t/ok................ok
212 t/test-harness......ok
213 t/waterloo..........dubious
214 Test returned status 3 (wstat 768, 0x300)
215 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
216 Failed 10/20 tests, 50.00% okay
217 Failed Test Stat Wstat Total Fail Failed List of Failed
218 -----------------------------------------------------------------------
219 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
220 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
221
222Everything passed but t/waterloo.t. It failed 10 of 20 tests and
223exited with non-zero status indicating something dubious happened.
224
225The columns in the summary report mean:
226
227=over 4
228
229=item B<Failed Test>
230
231The test file which failed.
232
233=item B<Stat>
234
235If the test exited with non-zero, this is its exit status.
236
237=item B<Wstat>
238
The wait status of the test, i.e. the raw C<$?> value from when the
test's process exited.
240
241=item B<Total>
242
243Total number of tests expected to run.
244
245=item B<Fail>
246
247Number which failed, either from "not ok" or because they never ran.
248
249=item B<Failed>
250
251Percentage of the total tests which failed.
252
253=item B<List of Failed>
254
A list of the tests which failed. Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).
258
259=back
260
261
262=head2 Functions
263
Test::Harness currently has only one function; here it is.
265
266=over 4
267
268=item B<runtests>
269
270 my $allok = runtests(@test_files);
271
This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.
276
277It returns true if everything was ok, false otherwise.
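
A tiny driver built on that return value might look like this (a
sketch; the C<t/*.t> glob is an assumption, and note that runtests()
itself die()s when scripts fail, so real drivers often just let that
propagate):

```perl
#!/usr/bin/perl -w
use strict;
use Test::Harness;

# Run every test under t/ (assumed layout) and report the overall result.
my @test_files = glob 't/*.t';
my $all_ok = @test_files && eval { runtests(@test_files) };

print $all_ok ? "PASS\n" : "FAIL\n";
```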
278
279=for _private
280This is just _run_all_tests() plus _show_results()
281
282=cut
17a79f5b 283
a0d0e21e 284sub runtests {
285 my(@tests) = @_;
9c5c68c8 286
b82fa0b7 287 local ($\, $,);
288
289 my($tot, $failedtests) = _run_all_tests(@tests);
9c5c68c8 290 _show_results($tot, $failedtests);
291
b82fa0b7 292 my $ok = ($tot->{bad} == 0 && $tot->{max});
293
294 die q{Assert '$ok xor keys %$failedtests' failed!}
295 unless $ok xor keys %$failedtests;
296
297 return $ok;
298}
299
300=begin _private
301
302=item B<_globdir>
303
304 my @files = _globdir $dir;
305
306Returns all the files in a directory. This is shorthand for backwards
307compatibility on systems where glob() doesn't work right.
308
309=cut
310
311sub _globdir {
312 opendir DIRH, shift;
313 my @f = readdir DIRH;
314 closedir DIRH;
315
316 return @f;
9c5c68c8 317}
318
b82fa0b7 319=item B<_run_all_tests>
320
321 my($total, $failed) = _run_all_tests(@test_files);
322
Runs all the given @test_files (as runtests()) but does it quietly (no
report). $total is a hash ref summary of all the tests run. Its keys
and values are these:
326
  bonus         Number of individual todo tests unexpectedly passed
  max           Number of individual tests run
  ok            Number of individual tests passed
  sub_skipped   Number of individual tests skipped

  files         Number of test files run
  good          Number of test files passed
  bad           Number of test files failed
  tests         Number of test files originally given
  skipped       Number of test files skipped
337
338If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
339test.
340
341$failed is a hash ref of all the test scripts which failed. Each key
342is the name of a test script, each value is another hash representing
343how that script failed. Its keys are these:
9c5c68c8 344
  name          Name of the test which failed
  estat         Script's exit value
  wstat         Script's wait status
  max           Number of individual tests
  failed        Number which failed
  percent       Percentage of tests which failed
  canon         List of tests which failed (as string).
352
353Needless to say, $failed should be empty if everything passed.
354
355B<NOTE> Currently this function is still noisy. I'm working on it.
356
357=cut
358
359sub _run_all_tests {
9c5c68c8 360 my(@tests) = @_;
a0d0e21e 361 local($|) = 1;
9c5c68c8 362 my(%failedtests);
363
364 # Test-wide totals.
365 my(%tot) = (
366 bonus => 0,
367 max => 0,
368 ok => 0,
369 files => 0,
370 bad => 0,
371 good => 0,
372 tests => scalar @tests,
373 sub_skipped => 0,
374 skipped => 0,
375 bench => 0
376 );
774d564b 377
378 # pass -I flags to children
81ff29e3 379 my $old5lib = $ENV{PERL5LIB};
774d564b 380
1250aba5 381 # VMS has a 255-byte limit on the length of %ENV entries, so
382 # toss the ones that involve perl_root, the install location
383 # for VMS
384 my $new5lib;
385 if ($^O eq 'VMS') {
386 $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
9c5c68c8 387 $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g;
1250aba5 388 }
389 else {
390 $new5lib = join($Config{path_sep}, @INC);
391 }
392
393 local($ENV{'PERL5LIB'}) = $new5lib;
a0d0e21e 394
    my @dir_files;  # "my" with a statement modifier is undefined behavior
    @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
a0d0e21e 396 my $t_start = new Benchmark;
9c5c68c8 397
63b097aa 398 my $maxlen = 0;
908801fe 399 my $maxsuflen = 0;
400 foreach (@tests) { # The same code in t/TEST
401 my $suf = /\.(\w+)$/ ? $1 : '';
402 my $len = length;
403 my $suflen = length $suf;
404 $maxlen = $len if $len > $maxlen;
405 $maxsuflen = $suflen if $suflen > $maxsuflen;
7a315204 406 }
908801fe 407 # + 3 : we want three dots between the test name and the "ok"
408 my $width = $maxlen + 3 - $maxsuflen;
b82fa0b7 409 foreach my $tfile (@tests) {
7a315204 410 my($leader, $ml) = _mk_leader($tfile, $width);
b82fa0b7 411 print $leader;
9c5c68c8 412
b82fa0b7 413 my $fh = _open_test($tfile);
9c5c68c8 414
415 # state of the current test.
416 my %test = (
417 ok => 0,
b82fa0b7 418 'next' => 0,
9c5c68c8 419 max => 0,
420 failed => [],
421 todo => {},
422 bonus => 0,
423 skipped => 0,
424 skip_reason => undef,
425 ml => $ml,
426 );
427
428 my($seen_header, $tests_seen) = (0,0);
c07a80fd 429 while (<$fh>) {
9c5c68c8 430 if( _parse_header($_, \%test, \%tot) ) {
431 warn "Test header seen twice!\n" if $seen_header;
432
433 $seen_header = 1;
434
435 warn "1..M can only appear at the beginning or end of tests\n"
436 if $tests_seen && $test{max} < $tests_seen;
437 }
438 elsif( _parse_test_line($_, \%test, \%tot) ) {
439 $tests_seen++;
d667a7e6 440 }
9c5c68c8 441 # else, ignore it.
c07a80fd 442 }
9c5c68c8 443
444 my($estatus, $wstatus) = _close_fh($fh);
445
b82fa0b7 446 my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;
447
68dc0745 448 if ($wstatus) {
b82fa0b7 449 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
9c5c68c8 450 $estatus, $wstatus);
b82fa0b7 451 $failedtests{$tfile}{name} = $tfile;
9c5c68c8 452 }
b82fa0b7 453 elsif ($allok) {
9c5c68c8 454 if ($test{max} and $test{skipped} + $test{bonus}) {
7b13a3f5 455 my @msg;
9c5c68c8 456 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
457 if $test{skipped};
458 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
459 if $test{bonus};
460 print "$test{ml}ok, ".join(', ', @msg)."\n";
461 } elsif ($test{max}) {
462 print "$test{ml}ok\n";
463 } elsif (defined $test{skip_reason}) {
464 print "skipped: $test{skip_reason}\n";
465 $tot{skipped}++;
c0ee6f5c 466 } else {
45c0de28 467 print "skipped test on this platform\n";
9c5c68c8 468 $tot{skipped}++;
c0ee6f5c 469 }
9c5c68c8 470 $tot{good}++;
6c31b336 471 }
b82fa0b7 472 else {
473 if ($test{max}) {
474 if ($test{'next'} <= $test{max}) {
475 push @{$test{failed}}, $test{'next'}..$test{max};
476 }
477 if (@{$test{failed}}) {
478 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
479 @{$test{failed}});
480 print "$test{ml}$txt";
481 $failedtests{$tfile} = { canon => $canon,
482 max => $test{max},
483 failed => scalar @{$test{failed}},
484 name => $tfile,
485 percent => 100*(scalar @{$test{failed}})/$test{max},
486 estat => '',
487 wstat => '',
488 };
489 } else {
490 print "Don't know which tests failed: got $test{ok} ok, ".
491 "expected $test{max}\n";
492 $failedtests{$tfile} = { canon => '??',
493 max => $test{max},
494 failed => '??',
495 name => $tfile,
496 percent => undef,
497 estat => '',
498 wstat => '',
499 };
500 }
501 $tot{bad}++;
502 } elsif ($test{'next'} == 0) {
503 print "FAILED before any test output arrived\n";
504 $tot{bad}++;
505 $failedtests{$tfile} = { canon => '??',
506 max => '??',
507 failed => '??',
508 name => $tfile,
509 percent => undef,
510 estat => '',
511 wstat => '',
512 };
513 }
514 }
515
9c5c68c8 516 $tot{sub_skipped} += $test{skipped};
517
518 if (defined $Files_In_Dir) {
b82fa0b7 519 my @new_dir_files = _globdir $Files_In_Dir;
17a79f5b 520 if (@new_dir_files != @dir_files) {
521 my %f;
522 @f{@new_dir_files} = (1) x @new_dir_files;
523 delete @f{@dir_files};
524 my @f = sort keys %f;
525 print "LEAKED FILES: @f\n";
526 @dir_files = @new_dir_files;
527 }
528 }
a0d0e21e 529 }
9c5c68c8 530 $tot{bench} = timediff(new Benchmark, $t_start);
d667a7e6 531
774d564b 532 if ($^O eq 'VMS') {
533 if (defined $old5lib) {
534 $ENV{PERL5LIB} = $old5lib;
b876d4a6 535 } else {
774d564b 536 delete $ENV{PERL5LIB};
537 }
538 }
9c5c68c8 539
540 return(\%tot, \%failedtests);
541}
542
b82fa0b7 543=item B<_mk_leader>
544
7a315204 545 my($leader, $ml) = _mk_leader($test_file, $width);
b82fa0b7 546
Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such). $ml may be empty if Test::Harness doesn't think you're
on a TTY. The width is the width of the "yada/blah..." string.
b82fa0b7 551
552=cut
553
554sub _mk_leader {
7a315204 555 my ($te, $width) = @_;
556
b82fa0b7 557 chop($te); # XXX chomp?
558
559 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
560 my $blank = (' ' x 77);
7a315204 561 my $leader = "$te" . '.' x ($width - length($te));
b82fa0b7 562 my $ml = "";
563
564 $ml = "\r$blank\r$leader"
565 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
566
567 return($leader, $ml);
568}
569
9c5c68c8 570
571sub _show_results {
572 my($tot, $failedtests) = @_;
573
574 my $pct;
575 my $bonusmsg = _bonusmsg($tot);
576
577 if ($tot->{bad} == 0 && $tot->{max}) {
7b13a3f5 578 print "All tests successful$bonusmsg.\n";
9c5c68c8 579 } elsif ($tot->{tests}==0){
6c31b336 580 die "FAILED--no tests were run for some reason.\n";
9c5c68c8 581 } elsif ($tot->{max} == 0) {
582 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
583 die "FAILED--$tot->{tests} test $blurb could be run, ".
584 "alas--no output ever seen\n";
c07a80fd 585 } else {
9c5c68c8 586 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
6c31b336 587 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
9c5c68c8 588 $tot->{max} - $tot->{ok}, $tot->{max},
589 100*$tot->{ok}/$tot->{max};
0a931e4a 590
9c5c68c8 591 my($fmt_top, $fmt) = _create_fmts($failedtests);
0a931e4a 592
593 # Now write to formats
9c5c68c8 594 for my $script (sort keys %$failedtests) {
595 $Curtest = $failedtests->{$script};
760ac839 596 write;
597 }
9c5c68c8 598 if ($tot->{bad}) {
9b0ceca9 599 $bonusmsg =~ s/^,\s*//;
600 print "$bonusmsg.\n" if $bonusmsg;
9c5c68c8 601 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
602 "$subpct\n";
c07a80fd 603 }
604 }
f0a9308e 605
9c5c68c8 606 printf("Files=%d, Tests=%d, %s\n",
607 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
608}
609
610
611sub _parse_header {
612 my($line, $test, $tot) = @_;
613
614 my $is_header = 0;
615
616 print $line if $Verbose;
617
618 # 1..10 todo 4 7 10;
619 if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
620 $test->{max} = $1;
621 for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }
622
623 $tot->{max} += $test->{max};
624 $tot->{files}++;
625
626 $is_header = 1;
627 }
628 # 1..10
629 # 1..0 # skip Why? Because I said so!
630 elsif ($line =~ /^1\.\.([0-9]+)
b82fa0b7 631 (\s*\#\s*[Ss]kip\S*\s* (.+))?
9c5c68c8 632 /x
633 )
634 {
635 $test->{max} = $1;
636 $tot->{max} += $test->{max};
637 $tot->{files}++;
b82fa0b7 638 $test->{'next'} = 1 unless $test->{'next'};
9c5c68c8 639 $test->{skip_reason} = $3 if not $test->{max} and defined $3;
640
641 $is_header = 1;
642 }
643 else {
644 $is_header = 0;
645 }
646
647 return $is_header;
c07a80fd 648}
649
9c5c68c8 650
b82fa0b7 651sub _open_test {
652 my($test) = shift;
653
654 my $s = _set_switches($test);
655
656 # XXX This is WAY too core specific!
657 my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
658 ? "./perl -I../lib ../utils/perlcc $test "
659 . "-r 2>> ./compilelog |"
660 : "$^X $s $test|";
661 $cmd = "MCR $cmd" if $^O eq 'VMS';
662
663 if( open(PERL, $cmd) ) {
664 return \*PERL;
665 }
666 else {
667 print "can't run $test. $!\n";
668 return;
669 }
670}
671
672sub _run_one_test {
673 my($test) = @_;
674
675
676}
677
678
9c5c68c8 679sub _parse_test_line {
680 my($line, $test, $tot) = @_;
681
682 if ($line =~ /^(not\s+)?ok\b/i) {
b82fa0b7 683 my $this = $test->{'next'} || 1;
9c5c68c8 684 # "not ok 23"
37ce32a7 685 if ($line =~ /^(not )?ok\s*(\d*)(\s*#.*)?/) {
686 my($not, $tnum, $extra) = ($1, $2, $3);
687
688 $this = $tnum if $tnum;
689
690 my($type, $reason) = $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/
691 if defined $extra;
692
693 my($istodo, $isskip);
694 if( defined $type ) {
695 $istodo = $type =~ /TODO/;
696 $isskip = $type =~ /skip/i;
697 }
698
699 $test->{todo}{$tnum} = 1 if $istodo;
700
701 if( $not ) {
702 print "$test->{ml}NOK $this" if $test->{ml};
703 if (!$test->{todo}{$this}) {
704 push @{$test->{failed}}, $this;
705 } else {
706 $test->{ok}++;
707 $tot->{ok}++;
708 }
709 }
710 else {
711 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
712 $test->{ok}++;
713 $tot->{ok}++;
714 $test->{skipped}++ if $isskip;
715
716 if (defined $reason and defined $test->{skip_reason}) {
717 # print "was: '$skip_reason' new '$reason'\n";
718 $test->{skip_reason} = 'various reasons'
719 if $test->{skip_reason} ne $reason;
720 } elsif (defined $reason) {
721 $test->{skip_reason} = $reason;
722 }
723
724 $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
725 }
9c5c68c8 726 }
727 # XXX ummm... dunno
728 elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
729 $this = $1 if $1 > 0;
730 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
731 $test->{ok}++;
732 $tot->{ok}++;
733 }
        else {
            # an ok or not ok not matching the 3 cases above...
            # just ignore it for compatibility with TEST.
            # (return, not next: "next" here would exit the subroutine
            # by unwinding into the caller's loop, with a warning)
            return;
        }
739
b82fa0b7 740 if ($this > $test->{'next'}) {
9c5c68c8 741 # print "Test output counter mismatch [test $this]\n";
742 # no need to warn probably
b82fa0b7 743 push @{$test->{failed}}, $test->{'next'}..$this-1;
9c5c68c8 744 }
b82fa0b7 745 elsif ($this < $test->{'next'}) {
9c5c68c8 746 #we have seen more "ok" lines than the number suggests
747 print "Confused test output: test $this answered after ".
b82fa0b7 748 "test ", $test->{'next'}-1, "\n";
749 $test->{'next'} = $this;
9c5c68c8 750 }
b82fa0b7 751 $test->{'next'} = $this + 1;
9c5c68c8 752
753 }
754 elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
755 die "FAILED--Further testing stopped" .
756 ($1 ? ": $1\n" : ".\n");
757 }
758}
759
760
761sub _bonusmsg {
762 my($tot) = @_;
763
764 my $bonusmsg = '';
765 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
766 " UNEXPECTEDLY SUCCEEDED)")
767 if $tot->{bonus};
768
769 if ($tot->{skipped}) {
770 $bonusmsg .= ", $tot->{skipped} test"
771 . ($tot->{skipped} != 1 ? 's' : '');
772 if ($tot->{sub_skipped}) {
773 $bonusmsg .= " and $tot->{sub_skipped} subtest"
774 . ($tot->{sub_skipped} != 1 ? 's' : '');
775 }
776 $bonusmsg .= ' skipped';
777 }
778 elsif ($tot->{sub_skipped}) {
779 $bonusmsg .= ", $tot->{sub_skipped} subtest"
780 . ($tot->{sub_skipped} != 1 ? 's' : '')
781 . " skipped";
782 }
783
784 return $bonusmsg;
785}
786
787# VMS has some subtle nastiness with closing the test files.
788sub _close_fh {
789 my($fh) = shift;
790
791 close($fh); # must close to reap child resource values
792
793 my $wstatus = $Ignore_Exitcode ? 0 : $?; # Can trust $? ?
794 my $estatus;
795 $estatus = ($^O eq 'VMS'
796 ? eval 'use vmsish "status"; $estatus = $?'
797 : $wstatus >> 8);
798
799 return($estatus, $wstatus);
800}
801
802
803# Set up the command-line switches to run perl as.
804sub _set_switches {
805 my($test) = shift;
806
b82fa0b7 807 local *TEST;
808 open(TEST, $test) or print "can't open $test. $!\n";
809 my $first = <TEST>;
9c5c68c8 810 my $s = $Switches;
811 $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
812 if exists $ENV{'HARNESS_PERL_SWITCHES'};
813 $s .= join " ", q[ "-T"], map {qq["-I$_"]} @INC
814 if $first =~ /^#!.*\bperl.*-\w*T/;
815
b82fa0b7 816 close(TEST) or print "can't close $test. $!\n";
9c5c68c8 817
818 return $s;
819}
820
821
822# Test program go boom.
823sub _dubious_return {
824 my($test, $tot, $estatus, $wstatus) = @_;
825 my ($failed, $canon, $percent) = ('??', '??');
826
827 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
828 "(wstat %d, 0x%x)\n",
829 $wstatus,$wstatus;
830 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
831
832 if (corestatus($wstatus)) { # until we have a wait module
833 if ($Have_Devel_Corestack) {
834 Devel::CoreStack::stack($^X);
835 } else {
836 print "\ttest program seems to have generated a core\n";
837 }
838 }
839
840 $tot->{bad}++;
841
842 if ($test->{max}) {
b82fa0b7 843 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
9c5c68c8 844 print "\tafter all the subtests completed successfully\n";
845 $percent = 0;
846 $failed = 0; # But we do not set $canon!
847 }
848 else {
b82fa0b7 849 push @{$test->{failed}}, $test->{'next'}..$test->{max};
9c5c68c8 850 $failed = @{$test->{failed}};
851 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
852 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
853 print "DIED. ",$txt;
854 }
855 }
856
857 return { canon => $canon, max => $test->{max} || '??',
858 failed => $failed,
66fd8cb9 859 percent => $percent,
9c5c68c8 860 estat => $estatus, wstat => $wstatus,
861 };
862}
863
864
865sub _garbled_output {
866 my($gibberish) = shift;
867 warn "Confusing test output: '$gibberish'\n";
868}
869
870
871sub _create_fmts {
872 my($failedtests) = @_;
873
b82fa0b7 874 my $failed_str = "Failed Test";
875 my $middle_str = " Stat Wstat Total Fail Failed ";
9c5c68c8 876 my $list_str = "List of Failed";
877
878 # Figure out our longest name string for formatting purposes.
879 my $max_namelen = length($failed_str);
880 foreach my $script (keys %$failedtests) {
881 my $namelen = length $failedtests->{$script}->{name};
882 $max_namelen = $namelen if $namelen > $max_namelen;
883 }
884
885 my $list_len = $Columns - length($middle_str) - $max_namelen;
886 if ($list_len < length($list_str)) {
887 $list_len = length($list_str);
888 $max_namelen = $Columns - length($middle_str) - $list_len;
889 if ($max_namelen < length($failed_str)) {
890 $max_namelen = length($failed_str);
891 $Columns = $max_namelen + length($middle_str) + $list_len;
892 }
893 }
894
895 my $fmt_top = "format STDOUT_TOP =\n"
b82fa0b7 896 . sprintf("%-${max_namelen}s", $failed_str)
9c5c68c8 897 . $middle_str
898 . $list_str . "\n"
899 . "-" x $Columns
900 . "\n.\n";
901
902 my $fmt = "format STDOUT =\n"
903 . "@" . "<" x ($max_namelen - 1)
b82fa0b7 904 . " @>> @>>>> @>>>> @>>> ^##.##% "
9c5c68c8 905 . "^" . "<" x ($list_len - 1) . "\n"
906 . '{ $Curtest->{name}, $Curtest->{estat},'
907 . ' $Curtest->{wstat}, $Curtest->{max},'
908 . ' $Curtest->{failed}, $Curtest->{percent},'
909 . ' $Curtest->{canon}'
910 . "\n}\n"
911 . "~~" . " " x ($Columns - $list_len - 2) . "^"
912 . "<" x ($list_len - 1) . "\n"
913 . '$Curtest->{canon}'
914 . "\n.\n";
915
916 eval $fmt_top;
917 die $@ if $@;
918 eval $fmt;
919 die $@ if $@;
920
921 return($fmt_top, $fmt);
922}
923
b82fa0b7 924{
925 my $tried_devel_corestack;
9c5c68c8 926
b82fa0b7 927 sub corestatus {
928 my($st) = @_;
c0ee6f5c 929
b82fa0b7 930 eval {require 'wait.ph'};
931 my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
c0ee6f5c 932
b82fa0b7 933 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
934 unless $tried_devel_corestack++;
c0ee6f5c 935
b82fa0b7 936 $ret;
937 }
c0ee6f5c 938}
939
c07a80fd 940sub canonfailed ($@) {
89d3b7e2 941 my($max,$skipped,@failed) = @_;
6c31b336 942 my %seen;
943 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
c07a80fd 944 my $failed = @failed;
945 my @result = ();
946 my @canon = ();
947 my $min;
948 my $last = $min = shift @failed;
760ac839 949 my $canon;
c07a80fd 950 if (@failed) {
951 for (@failed, $failed[-1]) { # don't forget the last one
952 if ($_ > $last+1 || $_ == $last) {
953 if ($min == $last) {
954 push @canon, $last;
955 } else {
956 push @canon, "$min-$last";
957 }
958 $min = $_;
959 }
960 $last = $_;
961 }
962 local $" = ", ";
963 push @result, "FAILED tests @canon\n";
b82fa0b7 964 $canon = join ' ', @canon;
a0d0e21e 965 } else {
c07a80fd 966 push @result, "FAILED test $last\n";
760ac839 967 $canon = $last;
a0d0e21e 968 }
c07a80fd 969
970 push @result, "\tFailed $failed/$max tests, ";
89d3b7e2 971 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
972 my $ender = 's' x ($skipped > 1);
973 my $good = $max - $failed - $skipped;
974 my $goodper = sprintf("%.2f",100*($good/$max));
9c5c68c8 975 push @result, " (-$skipped skipped test$ender: $good okay, ".
976 "$goodper%)"
977 if $skipped;
89d3b7e2 978 push @result, "\n";
760ac839 979 my $txt = join "", @result;
980 ($txt, $canon);
a0d0e21e 981}
982
b82fa0b7 983=end _private
9c5c68c8 984
b82fa0b7 985=back
d667a7e6 986
b82fa0b7 987=cut
9c5c68c8 988
9c5c68c8 989
b82fa0b7 9901;
991__END__
9c5c68c8 992
993
cb1a09d0 994=head1 EXPORT
995
C<&runtests> is exported by Test::Harness by default.
cb1a09d0 997
9c5c68c8 998C<$verbose> and C<$switches> are exported upon request.
999
1000
cb1a09d0 1001=head1 DIAGNOSTICS
1002
1003=over 4
1004
1005=item C<All tests successful.\nFiles=%d, Tests=%d, %s>
1006
If all tests are successful, some statistics about the performance are
printed.
1009
6c31b336 1010=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
1011
For any single script that has failing subtests, statistics like the
above are printed.
1014
1015=item C<Test returned status %d (wstat %d)>
1016
For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.
6c31b336 1019
1020=item C<Failed 1 test, %.2f%% okay. %s>
cb1a09d0 1021
6c31b336 1022=item C<Failed %d/%d tests, %.2f%% okay. %s>
cb1a09d0 1023
1024If not all tests were successful, the script dies with one of the
1025above messages.
1026
d667a7e6 1027=item C<FAILED--Further testing stopped%s>
1028
1029If a single subtest decides that further testing will not make sense,
1030the script dies with this message.
1031
cb1a09d0 1032=back
1033
9b0ceca9 1034=head1 ENVIRONMENT
1035
37ce32a7 1036=over 4
1037
b82fa0b7 1038=item C<HARNESS_IGNORE_EXITCODE>
37ce32a7 1039
1040Makes harness ignore the exit status of child processes when defined.
1041
b82fa0b7 1042=item C<HARNESS_NOTTY>
9b0ceca9 1043
When set to a true value, forces the harness to behave as though STDOUT
were not a console. You may need to set this if you don't want the
harness to output more frequent progress messages using carriage
returns. Some consoles may not handle carriage returns properly (which
results in somewhat messy output).
0d0c0d42 1049
b82fa0b7 1050=item C<HARNESS_COMPILE_TEST>
9636a016 1051
When true, it will make the harness attempt to compile the test using
C<perlcc> before running it.
1054
b82fa0b7 1055B<NOTE> This currently only works when sitting in the perl source
1056directory!
1057
1058=item C<HARNESS_FILELEAK_IN_DIR>
37ce32a7 1059
1060When set to the name of a directory, harness will check after each
1061test whether new files appeared in that directory, and report them as
17a79f5b 1062
1063 LEAKED FILES: scr.tmp 0 my.db
1064
If relative, the directory name is taken with respect to the current
directory at the moment runtests() was called. Putting an absolute path
into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
1068
b82fa0b7 1069=item C<HARNESS_PERL_SWITCHES>
37ce32a7 1070
1071Its value will be prepended to the switches used to invoke perl on
b82fa0b7 1072each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
37ce32a7 1073run all tests with all warnings enabled.
1074
b82fa0b7 1075=item C<HARNESS_COLUMNS>
37ce32a7 1076
1077This value will be used for the width of the terminal. If it is not
1078set then it will default to C<COLUMNS>. If this is not set, it will
1079default to 80. Note that users of Bourne-sh based shells will need to
1080C<export COLUMNS> for this module to use that variable.
2b32313b 1081
b82fa0b7 1082=item C<HARNESS_ACTIVE>
37ce32a7 1083
1084Harness sets this before executing the individual tests. This allows
1085the tests to determine if they are being executed through the harness
1086or by any other means.
1087
1088=back
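
The C<HARNESS_ACTIVE> flag described above can be probed from inside a
test script like so (a sketch):

```perl
#!/usr/bin/perl -w
use strict;

# True when running under runtests(), false when run by hand.
my $under_harness = $ENV{HARNESS_ACTIVE} ? 1 : 0;

print "1..1\n";
print "ok 1\n";
print "# running under Test::Harness\n" if $under_harness;
```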
0a931e4a 1089
b82fa0b7 1090=head1 EXAMPLE
1091
Here's how Test::Harness tests itself:
1093
1094 $ cd ~/src/devel/Test-Harness
1095 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1096 $verbose=0; runtests @ARGV;' t/*.t
1097 Using /home/schwern/src/devel/Test-Harness/blib
1098 t/base..............ok
1099 t/nonumbers.........ok
1100 t/ok................ok
1101 t/test-harness......ok
1102 All tests successful.
1103 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
f19ae7a7 1104
cb1a09d0 1105=head1 SEE ALSO
1106
b82fa0b7 1107L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1108the underlying timing routines, L<Devel::CoreStack> to generate core
1109dumps from failed tests and L<Devel::Cover> for test coverage
1110analysis.
c07a80fd 1111
1112=head1 AUTHORS
1113
Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years.
1118
1119Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1120
1121=head1 TODO
1122
Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which, rather than printing its output, returns raw data
on the state of the tests.
1127
1128Fix HARNESS_COMPILE_TEST without breaking its core usage.
1129
1130Figure a way to report test names in the failure summary.
37ce32a7 1131
b82fa0b7 1132Rework the test summary so long test names are not truncated as badly.
1133
1134Merge back into bleadperl.
1135
1136Deal with VMS's "not \nok 4\n" mistake.
1137
1138Add option for coverage analysis.
1139
1140=for _private
1141Keeping whittling away at _run_all_tests()
1142
1143=for _private
1144Clean up how the summary is printed. Get rid of those damned formats.
cb1a09d0 1145
1146=head1 BUGS
1147
1148Test::Harness uses $^X to determine the perl binary to run the tests
6c31b336 1149with. Test scripts running via the shebang (C<#!>) line may not be
1150portable because $^X is not consistent for shebang scripts across
cb1a09d0 1151platforms. This is no problem when Test::Harness is run with an
6c31b336 1152absolute path to the perl binary or when $^X can be found in the path.
cb1a09d0 1153
b82fa0b7 1154HARNESS_COMPILE_TEST currently assumes its run from the Perl source
1155directory.
1156
cb1a09d0 1157=cut