# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.11 2001/05/23 18:24:41 schwern Exp $

package Test::Harness;

require 5.004;
use Exporter;
use Benchmark;
use Config;
use strict;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches
            @ISA @EXPORT @EXPORT_OK
           );

# Backwards compatibility for exportable variable names.
*verbose  = \$Verbose;
*switches = \$Switches;

$Have_Devel_Corestack = 0;

$VERSION = "1.21";

$ENV{HARNESS_ACTIVE} = 1;

# Some experimental versions of OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};


@ISA = ('Exporter');
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose  = 0;
$Switches = "-w";
$Columns  = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;             # Some shells have trouble with a full line of text.


=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

    use Test::Harness;

    runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple. Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects. However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers. The first line
output by a standard test script is C<"1..M"> with C<M> being the
number of tests that should be run within the test
script. Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.

=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

=over 4

=item B<'1..M'>

This header tells how many tests there will be. It should be the
first line output by your test program (but it's okay if it's preceded
by comments).

In certain instances, you may not know how many tests you will
ultimately be running. In this case, it is permitted (but not
encouraged) for the 1..M header to appear as the B<last> line output
by your test (again, it can be followed by further comments). But we
strongly encourage you to put it first.

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.


=item B<'ok', 'not ok'. Ok?>

Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user. Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests(). All other lines are discarded.

C</^not ok/> indicates a failed test. C</^ok/> is a successful test.


=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number. The test numbers after 'ok' may be omitted; in that case
Test::Harness temporarily maintains its own counter until the script
supplies test numbers again. So the following test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay

129=item B<$Test::Harness::verbose>
130
131The global variable $Test::Harness::verbose is exportable and can be
132used to let runtests() display the standard output of the script
133without altering the behavior otherwise.
134
135=item B<$Test::Harness::switches>
136
137The global variable $Test::Harness::switches is exportable and can be
138used to set perl command line options used for running the test
139script(s). The default value is C<-w>.
140
141=item B<Skipping tests>
142
143If the standard output line contains the substring C< # Skip> (with
144variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
145counted as a skipped test. If the whole testscript succeeds, the
146count of skipped tests is included in the generated output.
147C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
148for skipping.
149
150 ok 23 # skip Insufficient flogiston pressure.
151
152Similarly, one can include a similar explanation in a C<1..0> line
153emitted if the test script is skipped completely:
154
155 1..0 # Skipped: no leverage found
156
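A test script will typically print the C<1..0> line itself when it
discovers at startup that it cannot run at all, for example because a
prerequisite module is missing (a sketch; C<Some::Prerequisite> is a
made-up module name):

    unless (eval { require Some::Prerequisite }) {
        print "1..0 # Skipped: Some::Prerequisite not installed\n";
        exit 0;
    }
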
=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
afterwards is the thing that has to be done before this test will
succeed.

    not ok 13 # TODO harness the power of the atom

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list. They are
B<not> expected to succeed. Should a todo test begin succeeding,
Test::Harness will report it as a bonus. This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately. In that case the test script prints the magic words

    Bail out!

to standard output. Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.
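
For instance, a test script might bail out when a resource it cannot
do without is missing (a sketch; the file name is made up):

    unless (-e "t/test.conf") {
        print "Bail out! Can't find t/test.conf\n";
        exit;
    }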

=item B<Comments>

Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them.

    ok 1
    # Life is good, the sun is shining, RAM is cheap.
    not ok 2
    # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output Test::Harness sees, it will silently ignore B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
test script, please use a comment.

=back


=head2 Failure

It will happen: your tests will fail. After you mop up your ego, you
can begin examining the summary report:

    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    t/waterloo..........dubious
            Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
            Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t. It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test I<umm, I need a better explanation here>.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed. Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back


=head2 Functions

Test::Harness currently has only one function; here it is.

=over 4

=item B<runtests>

    my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok, false otherwise.

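Note that on failure the summary code currently C<die>s with the
failure report rather than returning false (see _show_results()), so
wrap the call in C<eval> if your program needs to continue afterwards:

    use Test::Harness;

    my $ok = eval { runtests(@test_files) };
    print "Some tests failed: $@" unless $ok;
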
=for _private
This is just _run_all_tests() plus _show_results()

=cut

sub runtests {
    my(@tests) = @_;

    local ($\, $,);

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = ($tot->{bad} == 0 && $tot->{max});

    die q{Assert '$ok xor keys %$failedtests' failed!}
      unless $ok xor keys %$failedtests;

    return $ok;
}

=begin _private

=item B<_globdir>

    my @files = _globdir $dir;

Returns all the files in a directory. This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=cut

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}

=item B<_run_all_tests>

    my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report). $total is a hash ref summary of all the tests run. Its keys
and values are this:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test.

$failed is a hash ref of all the test scripts which failed. Each key
is the name of a test script, each value is another hash representing
how that script failed. Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string).

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy. I'm working on it.

=cut

sub _run_all_tests {
    my(@tests) = @_;
    local($|) = 1;
    my(%failedtests);

    # Test-wide totals.
    my(%tot) = (
                bonus        => 0,
                max          => 0,
                ok           => 0,
                files        => 0,
                bad          => 0,
                good         => 0,
                tests        => scalar @tests,
                sub_skipped  => 0,
                skipped      => 0,
                bench        => 0
               );

    # pass -I flags to children
    my $old5lib = $ENV{PERL5LIB};

    # VMS has a 255-byte limit on the length of %ENV entries, so
    # toss the ones that involve perl_root, the install location
    # for VMS
    my $new5lib;
    if ($^O eq 'VMS') {
        $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
        $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g;
    }
    else {
        $new5lib = join($Config{path_sep}, @INC);
    }

    local($ENV{'PERL5LIB'}) = $new5lib;

    # 'my $x = ... if $cond' leaves $x in an undefined state when the
    # condition is false, so declare and assign separately.
    my @dir_files;
    @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $maxlen = 0;
    my $maxsuflen = 0;
    foreach (@tests) { # The same code in t/TEST
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $len    = length;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    my $width = $maxlen + 3 - $maxsuflen;
    foreach my $tfile (@tests) {
        my($leader, $ml) = _mk_leader($tfile, $width);
        print $leader;

        my $fh = _open_test($tfile);

        # state of the current test.
        my %test = (
                    ok          => 0,
                    'next'      => 0,
                    max         => 0,
                    failed      => [],
                    todo        => {},
                    bonus       => 0,
                    skipped     => 0,
                    skip_reason => undef,
                    ml          => $ml,
                   );

        my($seen_header, $tests_seen) = (0,0);
        while (<$fh>) {
            if( _parse_header($_, \%test, \%tot) ) {
                warn "Test header seen twice!\n" if $seen_header;

                $seen_header = 1;

                warn "1..M can only appear at the beginning or end of tests\n"
                  if $tests_seen && $test{max} < $tests_seen;
            }
            elsif( _parse_test_line($_, \%test, \%tot) ) {
                $tests_seen++;
            }
            # else, ignore it.
        }

        my($estatus, $wstatus) = _close_fh($fh);

        my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;

        if ($wstatus) {
            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                   $estatus, $wstatus);
            $failedtests{$tfile}{name} = $tfile;
        }
        elsif ($allok) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                  if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                  if $test{bonus};
                print "$test{ml}ok, ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_reason}) {
                print "skipped: $test{skip_reason}\n";
                $tot{skipped}++;
            } else {
                print "skipped test on this platform\n";
                $tot{skipped}++;
            }
            $tot{good}++;
        }
        else {
            if ($test{max}) {
                if ($test{'next'} <= $test{max}) {
                    push @{$test{failed}}, $test{'next'}..$test{max};
                }
                if (@{$test{failed}}) {
                    my ($txt, $canon) = canonfailed($test{max},$test{skipped},
                                                    @{$test{failed}});
                    print "$test{ml}$txt";
                    $failedtests{$tfile} = { canon   => $canon,
                                             max     => $test{max},
                                             failed  => scalar @{$test{failed}},
                                             name    => $tfile,
                                             percent => 100*(scalar @{$test{failed}})/$test{max},
                                             estat   => '',
                                             wstat   => '',
                                           };
                } else {
                    print "Don't know which tests failed: got $test{ok} ok, ".
                          "expected $test{max}\n";
                    $failedtests{$tfile} = { canon   => '??',
                                             max     => $test{max},
                                             failed  => '??',
                                             name    => $tfile,
                                             percent => undef,
                                             estat   => '',
                                             wstat   => '',
                                           };
                }
                $tot{bad}++;
            } elsif ($test{'next'} == 0) {
                print "FAILED before any test output arrived\n";
                $tot{bad}++;
                $failedtests{$tfile} = { canon   => '??',
                                         max     => '??',
                                         failed  => '??',
                                         name    => $tfile,
                                         percent => undef,
                                         estat   => '',
                                         wstat   => '',
                                       };
            }
        }

        $tot{sub_skipped} += $test{skipped};

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }
    $tot{bench} = timediff(new Benchmark, $t_start);

    if ($^O eq 'VMS') {
        if (defined $old5lib) {
            $ENV{PERL5LIB} = $old5lib;
        } else {
            delete $ENV{PERL5LIB};
        }
    }

    return(\%tot, \%failedtests);
}

=item B<_mk_leader>

    my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such). $ml may be empty if Test::Harness doesn't think you're
on TTY. The width is the width of the "yada/blah..." string.

=cut

sub _mk_leader {
    my($te, $width) = @_;

    $te =~ s/\.\w+$/./;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}


sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if ($tot->{bad} == 0 && $tot->{max}) {
#        print "All tests successful$bonusmsg.\n";
        print "All tests successful.\n";
    } elsif ($tot->{tests}==0){
        die "FAILED--no tests were run for some reason.\n";
    } elsif ($tot->{max} == 0) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},
                             100*$tot->{ok}/$tot->{max};

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }
        if ($tot->{bad}) {
            $bonusmsg =~ s/^,\s*//;
            print "$bonusmsg.\n" if $bonusmsg;
            die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
                "$subpct\n";
        }
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}


sub _parse_header {
    my($line, $test, $tot) = @_;

    my $is_header = 0;

    print $line if $Verbose;

    # 1..10 todo 4 7 10;
    if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
        $test->{max} = $1;
        for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }

        $tot->{max} += $test->{max};
        $tot->{files}++;

        $is_header = 1;
    }
    # 1..10
    # 1..0 # skip Why? Because I said so!
    elsif ($line =~ /^1\.\.([0-9]+)
                     (\s*\#\s*[Ss]kip\S*\s* (.+))?
                    /x
          )
    {
        $test->{max} = $1;
        $tot->{max} += $test->{max};
        $tot->{files}++;
        $test->{'next'} = 1 unless $test->{'next'};
        $test->{skip_reason} = $3 if not $test->{max} and defined $3;

        $is_header = 1;
    }
    else {
        $is_header = 0;
    }

    return $is_header;
}


sub _open_test {
    my($test) = shift;

    my $s = _set_switches($test);

    # XXX This is WAY too core specific!
    my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
                ? "./perl -I../lib ../utils/perlcc $test "
                  . "-r 2>> ./compilelog |"
                : "$^X $s $test|";
    $cmd = "MCR $cmd" if $^O eq 'VMS';

    if( open(PERL, $cmd) ) {
        return \*PERL;
    }
    else {
        print "can't run $test. $!\n";
        return;
    }
}

sub _run_one_test {
    my($test) = @_;


}

sub _parse_test_line {
    my($line, $test, $tot) = @_;

    if ($line =~ /^(not\s+)?ok\b/i) {
        my $this = $test->{'next'} || 1;
        # "not ok 23"
        if ($line =~ /^(not )?ok\s*(\d*)(\s*#.*)?/) {
            my($not, $tnum, $extra) = ($1, $2, $3);

            $this = $tnum if $tnum;

            my($type, $reason) = $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/
              if defined $extra;

            my($istodo, $isskip);
            if( defined $type ) {
                $istodo = $type =~ /TODO/;
                $isskip = $type =~ /skip/i;
            }

            $test->{todo}{$tnum} = 1 if $istodo;

            if( $not ) {
                print "$test->{ml}NOK $this" if $test->{ml};
                if (!$test->{todo}{$this}) {
                    push @{$test->{failed}}, $this;
                } else {
                    $test->{ok}++;
                    $tot->{ok}++;
                }
            }
            else {
                print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
                $test->{ok}++;
                $tot->{ok}++;
                $test->{skipped}++ if $isskip;

                if (defined $reason and defined $test->{skip_reason}) {
                    # print "was: '$skip_reason' new '$reason'\n";
                    $test->{skip_reason} = 'various reasons'
                      if $test->{skip_reason} ne $reason;
                } elsif (defined $reason) {
                    $test->{skip_reason} = $reason;
                }

                $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
            }
        }
        # XXX ummm... dunno
        elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
            $this = $1 if $1 > 0;
            print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
            $test->{ok}++;
            $tot->{ok}++;
        }
        else {
            # an ok or not ok not matching the 3 cases above...
            # just ignore it for compatibility with TEST.
            # (return, not next: "next" would exit the sub via the
            # caller's loop and warn "Exiting subroutine via next".)
            return;
        }

        if ($this > $test->{'next'}) {
            # print "Test output counter mismatch [test $this]\n";
            # no need to warn probably
            push @{$test->{failed}}, $test->{'next'}..$this-1;
        }
        elsif ($this < $test->{'next'}) {
            #we have seen more "ok" lines than the number suggests
            print "Confused test output: test $this answered after ".
                  "test ", $test->{'next'}-1, "\n";
            $test->{'next'} = $this;
        }
        $test->{'next'} = $this + 1;

    }
    elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
        die "FAILED--Further testing stopped" .
            ($1 ? ": $1\n" : ".\n");
    }
}


sub _bonusmsg {
    my($tot) = @_;

    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
        if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
                     . " skipped";
    }

    return $bonusmsg;
}

# VMS has some subtle nastiness with closing the test files.
sub _close_fh {
    my($fh) = shift;

    close($fh); # must close to reap child resource values

    my $wstatus = $Ignore_Exitcode ? 0 : $?;    # Can trust $? ?
    my $estatus;
    $estatus = ($^O eq 'VMS'
                  ? eval 'use vmsish "status"; $estatus = $?'
                  : $wstatus >> 8);

    return($estatus, $wstatus);
}


# Set up the command-line switches to run perl as.
sub _set_switches {
    my($test) = shift;

    local *TEST;
    open(TEST, $test) or print "can't open $test. $!\n";
    my $first = <TEST>;
    my $s = $Switches;
    $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
      if exists $ENV{'HARNESS_PERL_SWITCHES'};
    $s .= join " ", q[ "-T"], map {qq["-I$_"]} @INC
      if $first =~ /^#!.*\bperl.*-\w*T/;

    close(TEST) or print "can't close $test. $!\n";

    return $s;
}


# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus,$wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    $tot->{bad}++;

    if ($test->{max}) {
        if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
            print "\tafter all the subtests completed successfully\n";
            $percent = 0;
            $failed = 0;        # But we do not set $canon!
        }
        else {
            push @{$test->{failed}}, $test->{'next'}..$test->{max};
            $failed = @{$test->{failed}};
            (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
            $percent = 100*(scalar @{$test->{failed}})/$test->{max};
            print "DIED. ",$txt;
        }
    }

    return { canon   => $canon,  max => $test->{max} || '??',
             failed  => $failed,
             percent => $percent,
             estat   => $estatus, wstat => $wstatus,
           };
}


sub _garbled_output {
    my($gibberish) = shift;
    warn "Confusing test output: '$gibberish'\n";
}


sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)
                  . $middle_str
                  . $list_str . "\n"
                  . "-" x $Columns
                  . "\n.\n";

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . " @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "\n}\n"
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'
              . "\n.\n";

    eval $fmt_top;
    die $@ if $@;
    eval $fmt;
    die $@ if $@;

    return($fmt_top, $fmt);
}

{
    my $tried_devel_corestack;

    sub corestatus {
        my($st) = @_;

        eval {require 'wait.ph'};
        my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;

        eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
          unless $tried_devel_corestack++;

        $ret;
    }
}

sub canonfailed ($@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;
    my @result = ();
    my @canon = ();
    my $min;
    my $last = $min = shift @failed;
    my $canon;
    if (@failed) {
        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                if ($min == $last) {
                    push @canon, $last;
                } else {
                    push @canon, "$min-$last";
                }
                $min = $_;
            }
            $last = $_;
        }
        local $" = ", ";
        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;
    } else {
        push @result, "FAILED test $last\n";
        $canon = $last;
    }

    push @result, "\tFailed $failed/$max tests, ";
    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    my $ender = 's' x ($skipped > 1);
    my $good = $max - $failed - $skipped;
    my $goodper = sprintf("%.2f",100*($good/$max));
    push @result, " (-$skipped skipped test$ender: $good okay, ".
                  "$goodper%)"
        if $skipped;
    push @result, "\n";
    my $txt = join "", @result;
    ($txt, $canon);
}

=end _private

=back

=cut


1;
__END__


=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.


=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped%s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back

=head1 ENVIRONMENT

=over 4

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console. You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns. Some consoles may not handle carriage returns properly (which
results in a somewhat messy output).

=item C<HARNESS_COMPILE_TEST>

When true, harness will attempt to compile the test using C<perlcc>
before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called. Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal. If it is not
set then it will default to C<COLUMNS>. If this is not set, it will
default to 80. Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests. This allows
the tests to determine if they are being executed through the harness
or by any other means.

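A test script can check it like any other environment variable (a
minimal sketch):

    # In a test script: stay quiet under the harness,
    # chatty when run by hand.
    my $by_hand = !$ENV{HARNESS_ACTIVE};
    print "# not running under Test::Harness\n" if $by_hand;
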
=back

=head1 EXAMPLE

Here's how Test::Harness tests itself:

    $ cd ~/src/devel/Test-Harness
    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
      $verbose=0; runtests @ARGV;' t/*.t
    Using /home/schwern/src/devel/Test-Harness/blib
    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)

=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years.

Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>

=head1 TODO

Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.

Merge back into bleadperl.

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

=for _private
Keep whittling away at _run_all_tests()

=for _private
Clean up how the summary is printed. Get rid of those damned formats.

=head1 BUGS

Test::Harness uses $^X to determine the perl binary to run the tests
with. Test scripts running via the shebang (C<#!>) line may not be
portable because $^X is not consistent for shebang scripts across
platforms. This is no problem when Test::Harness is run with an
absolute path to the perl binary or when $^X can be found in the path.

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

=cut