[p5sagit/p5-mst-13.2.git] / lib / Test / Harness.pm
d667a7e6 1# -*- Mode: cperl; cperl-indent-level: 4 -*-
b82fa0b7 2# $Id: Harness.pm,v 1.11 2001/05/23 18:24:41 schwern Exp $
3
a0d0e21e 4package Test::Harness;
5
b82fa0b7 6require 5.004;
a0d0e21e 7use Exporter;
8use Benchmark;
4633a7c4 9use Config;
760ac839 10use strict;
11
b82fa0b7 12use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
13 $Columns $verbose $switches
14 @ISA @EXPORT @EXPORT_OK
15 );
4633a7c4 16
9c5c68c8 17# Backwards compatibility for exportable variable names.
18*verbose = \$Verbose;
19*switches = \$Switches;
20
21$Have_Devel_Corestack = 0;
22
b82fa0b7 23$VERSION = "1.21";
9b0ceca9 24
f19ae7a7 25$ENV{HARNESS_ACTIVE} = 1;
26
9b0ceca9 27# Some experimental versions of the OS/2 build have a broken $?
9c5c68c8 28my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
29
30my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
9b0ceca9 31
17a79f5b 32
9c5c68c8 33@ISA = ('Exporter');
34@EXPORT = qw(&runtests);
35@EXPORT_OK = qw($verbose $switches);
4633a7c4 36
9c5c68c8 37$Verbose = 0;
38$Switches = "-w";
39$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
b82fa0b7 40$Columns--; # Some shells have trouble with a full line of text.
41
42
43=head1 NAME
44
45Test::Harness - run perl standard test scripts with statistics
46
47=head1 SYNOPSIS
48
49 use Test::Harness;
50
51 runtests(@test_files);
52
53=head1 DESCRIPTION
a0d0e21e 54
b82fa0b7 55B<STOP!> If all you want to do is write a test script, consider using
56Test::Simple. Otherwise, read on.
57
58(By using the Test module, you can write test scripts without
59knowing the exact output this module expects. However, if you need to
60know the specifics, read on!)
61
62Perl test scripts print to standard output C<"ok N"> for each single
63test, where C<N> is an increasing sequence of integers. The first line
64output by a standard test script is C<"1..M"> with C<M> being the
65number of tests that should be run within the test
66script. Test::Harness::runtests(@tests) runs all the test scripts
67named as arguments and checks standard output for the expected
68C<"ok N"> strings.
69
70After all tests have been performed, runtests() prints some
71performance statistics that are computed by the Benchmark module.
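
A minimal script that follows this protocol needs nothing more than a
few prints. The example below is illustrative, not taken from any real
test suite:

```perl
# A minimal standard test script: the "1..M" header first,
# then one "ok N" line per test.
my @output = ("1..2\n", "ok 1\n", "ok 2\n");
print @output;

# runtests() would see two tests here, both passing.
my $passed = grep { /^ok\b/ } @output;
```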
72
73=head2 The test script output
74
75The following explains how Test::Harness interprets the output of your
76test program.
77
78=over 4
79
80=item B<'1..M'>
81
82This header tells how many tests there will be. It should be the
83first line output by your test program (but its okay if its preceded
84by comments).
85
86In certain instances, you may not know how many tests you will
87ultimately be running. In this case, it is permitted (but not
88encouraged) for the 1..M header to appear as the B<last> line output
89by your test (again, it can be followed by further comments). But we
90strongly encourage you to put it first.
91
92Under B<no> circumstances should 1..M appear in the middle of your
93output or more than once.
94
95
96=item B<'ok', 'not ok'. Ok?>
97
98Any output from the test script to standard error is ignored and
99bypassed, and thus will be seen by the user. Lines written to standard
100output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
101runtests(). All other lines are discarded.
102
103C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
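
The classification can be sketched as follows. This is a simplified
stand-in for the real parser in C<_parse_test_line>, which also handles
test numbers, skips and todos:

```perl
# Simplified sketch of how a harness classifies output lines.
sub classify {
    my $line = shift;
    return 'fail' if $line =~ /^not\s+ok\b/;
    return 'pass' if $line =~ /^ok\b/;
    return 'ignored';    # everything else is discarded
}
```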
104
105
106=item B<test numbers>
107
108Perl normally expects the 'ok' or 'not ok' to be followed by a test
109number. It is tolerated if the test numbers after 'ok' are
110omitted. In this case Test::Harness temporarily maintains its own
111counter until the script supplies test numbers again. So the following
112test script
113
114 print <<END;
115 1..6
116 not ok
117 ok
118 not ok
119 ok
120 ok
121 END
122
123will generate
124
125 FAILED tests 1, 3, 6
126 Failed 3/6 tests, 50.00% okay
127
128
129=item B<$Test::Harness::verbose>
130
131The global variable $Test::Harness::verbose is exportable and can be
132used to let runtests() display the standard output of the script
133without altering the behavior otherwise.
134
135=item B<$Test::Harness::switches>
136
137The global variable $Test::Harness::switches is exportable and can be
138used to set perl command line options used for running the test
139script(s). The default value is C<-w>.
140
141=item B<Skipping tests>
142
143If the standard output line contains the substring C< # Skip> (with
144variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
145counted as a skipped test. If the whole test script succeeds, the
146count of skipped tests is included in the generated output.
147C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
148for skipping.
149
150 ok 23 # skip Insufficient flogiston pressure.
151
152Similarly, one can include an explanation in a C<1..0> line
153emitted if the test script is skipped completely:
154
155 1..0 # Skipped: no leverage found
156
157=item B<Todo tests>
158
159If the standard output line contains the substring C< # TODO> after
160C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
161afterwards is the thing that has to be done before this test will
162succeed.
163
164 not ok 13 # TODO harness the power of the atom
165
166These tests represent a feature to be implemented or a bug to be fixed
167and act as something of an executable "thing to do" list. They are
168B<not> expected to succeed. Should a todo test begin succeeding,
169Test::Harness will report it as a bonus. This indicates that whatever
170you were supposed to do has been done and you should promote this to a
171normal test.
172
173=item B<Bail out!>
174
175As an emergency measure, a test script can decide that further tests
176are useless (e.g. missing dependencies) and testing should stop
177immediately. In that case the test script prints the magic words
178
179 Bail out!
180
181to standard output. Any message after these words will be displayed by
182C<Test::Harness> as the reason why testing is stopped.
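
Internally the harness recognizes the magic words with a pattern much
like the one below; anything after them becomes the reason (the message
text here is made up):

```perl
# Extract the reason from a "Bail out!" line, in the same way
# _parse_test_line does.
my $line = "Bail out! No database available.\n";
my ($reason) = $line =~ /^Bail out!\s*(.*)/i;
```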
183
184=item B<Comments>
185
186Additional comments may be put into the testing output on their own
187lines. Comment lines should begin with a '#', Test::Harness will
188ignore them.
189
190 ok 1
191 # Life is good, the sun is shining, RAM is cheap.
192 not ok 2
193 # got 'Bush' expected 'Gore'
194
195=item B<Anything else>
196
197Test::Harness will silently ignore any other output B<BUT WE
198PLAN TO CHANGE THIS!> If you wish to place additional output in your
199test script, please use a comment.
200
201=back
202
203
204=head2 Failure
205
206It will happen: your tests will fail. After you mop up your ego, you
207can begin examining the summary report:
208
209 t/base..............ok
210 t/nonumbers.........ok
211 t/ok................ok
212 t/test-harness......ok
213 t/waterloo..........dubious
214 Test returned status 3 (wstat 768, 0x300)
215 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
216 Failed 10/20 tests, 50.00% okay
217 Failed Test Stat Wstat Total Fail Failed List of Failed
218 -----------------------------------------------------------------------
219 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
220 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
221
222Everything passed but t/waterloo.t. It failed 10 of 20 tests and
223exited with a non-zero status, indicating something dubious happened.
224
225The columns in the summary report mean:
226
227=over 4
228
229=item B<Failed Test>
230
231The test file which failed.
232
233=item B<Stat>
234
235If the test exited with non-zero, this is its exit status.
236
237=item B<Wstat>
238
239The wait status of the test I<umm, I need a better explanation here>.
240
241=item B<Total>
242
243Total number of tests expected to run.
244
245=item B<Fail>
246
247Number which failed, either from "not ok" or because they never ran.
248
249=item B<Failed>
250
251Percentage of the total tests which failed.
252
253=item B<List of Failed>
254
255A list of the tests which failed. Successive failures may be
256abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
25720 failed).
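
The abbreviation logic can be sketched like this; it is a simplified
version of what canonfailed() does when it builds this column:

```perl
# Collapse a list of failed test numbers into ranges, e.g.
# (15,16,17,18,19,20) becomes "15-20" and (1,3,5) stays "1 3 5".
sub abbrev {
    my @nums = sort { $a <=> $b } @_;
    my @out;
    while (@nums) {
        my $start = my $end = shift @nums;
        # extend the range while the next number is consecutive
        $end = shift @nums while @nums && $nums[0] == $end + 1;
        push @out, $start == $end ? $start : "$start-$end";
    }
    return join ' ', @out;
}
```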
258
259=back
260
261
262=head2 Functions
263
264Test::Harness currently has only one function; here it is.
265
266=over 4
267
268=item B<runtests>
269
270 my $allok = runtests(@test_files);
271
272This runs all the given @test_files and divines whether they passed
273or failed based on their output to STDOUT (details above). It prints
274out each individual test which failed, along with a summary report and
275how long it all took.
276
277It returns true if everything was ok, false otherwise.
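
For example, a small driver script might look like this (the C<t/*.t>
glob is illustrative). Note that on failure runtests() also die()s with
a summary, so the eval traps that and turns it into a plain boolean:

```perl
use Test::Harness;

# Hypothetical driver: run every test file under t/.
# runtests() die()s with a summary when anything fails,
# so trap that rather than letting it kill us.
my @tests      = glob 't/*.t';
my $all_passed = @tests ? (eval { runtests(@tests) } || 0) : 0;
```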
278
279=for _private
280This is just _run_all_tests() plus _show_results()
281
282=cut
17a79f5b 283
a0d0e21e 284sub runtests {
285 my(@tests) = @_;
9c5c68c8 286
b82fa0b7 287 local ($\, $,);
288
289 my($tot, $failedtests) = _run_all_tests(@tests);
9c5c68c8 290 _show_results($tot, $failedtests);
291
b82fa0b7 292 my $ok = ($tot->{bad} == 0 && $tot->{max});
293
294 die q{Assert '$ok xor keys %$failedtests' failed!}
295 unless $ok xor keys %$failedtests;
296
297 return $ok;
298}
299
300=begin _private
301
302=item B<_globdir>
303
304 my @files = _globdir $dir;
305
306Returns all the files in a directory. This is shorthand for backwards
307compatibility on systems where glob() doesn't work right.
308
309=cut
310
311sub _globdir {
312 opendir DIRH, shift;
313 my @f = readdir DIRH;
314 closedir DIRH;
315
316 return @f;
9c5c68c8 317}
318
b82fa0b7 319=item B<_run_all_tests>
320
321 my($total, $failed) = _run_all_tests(@test_files);
322
323Runs all the given @test_files (as runtests()) but does it quietly (no
324report). $total is a hash ref summary of all the tests run. Its keys
325and values are this:
326
327 bonus Number of individual todo tests unexpectedly passed
 328    max             Number of individual tests run
329 ok Number of individual tests passed
330 sub_skipped Number of individual tests skipped
331
 332    files           Number of test files run
333 good Number of test files passed
334 bad Number of test files failed
335 tests Number of test files originally given
336 skipped Number of test files skipped
337
338If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
339test.
340
341$failed is a hash ref of all the test scripts which failed. Each key
342is the name of a test script, each value is another hash representing
343how that script failed. Its keys are these:
9c5c68c8 344
b82fa0b7 345 name Name of the test which failed
346 estat Script's exit value
347 wstat Script's wait status
348 max Number of individual tests
349 failed Number which failed
350 percent Percentage of tests which failed
351 canon List of tests which failed (as string).
352
353Needless to say, $failed should be empty if everything passed.
354
355B<NOTE> Currently this function is still noisy. I'm working on it.
356
357=cut
358
359sub _run_all_tests {
9c5c68c8 360 my(@tests) = @_;
a0d0e21e 361 local($|) = 1;
9c5c68c8 362 my(%failedtests);
363
364 # Test-wide totals.
365 my(%tot) = (
366 bonus => 0,
367 max => 0,
368 ok => 0,
369 files => 0,
370 bad => 0,
371 good => 0,
372 tests => scalar @tests,
373 sub_skipped => 0,
374 skipped => 0,
375 bench => 0
376 );
774d564b 377
378 # pass -I flags to children
81ff29e3 379 my $old5lib = $ENV{PERL5LIB};
774d564b 380
1250aba5 381 # VMS has a 255-byte limit on the length of %ENV entries, so
382 # toss the ones that involve perl_root, the install location
383 # for VMS
384 my $new5lib;
385 if ($^O eq 'VMS') {
386 $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
9c5c68c8 387 $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g;
1250aba5 388 }
389 else {
390 $new5lib = join($Config{path_sep}, @INC);
391 }
392
393 local($ENV{'PERL5LIB'}) = $new5lib;
a0d0e21e 394
b82fa0b7 395 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
a0d0e21e 396 my $t_start = new Benchmark;
9c5c68c8 397
b82fa0b7 398 foreach my $tfile (@tests) {
399 my($leader, $ml) = _mk_leader($tfile);
400 print $leader;
9c5c68c8 401
b82fa0b7 402 my $fh = _open_test($tfile);
9c5c68c8 403
404 # state of the current test.
405 my %test = (
406 ok => 0,
b82fa0b7 407 'next' => 0,
9c5c68c8 408 max => 0,
409 failed => [],
410 todo => {},
411 bonus => 0,
412 skipped => 0,
413 skip_reason => undef,
414 ml => $ml,
415 );
416
417 my($seen_header, $tests_seen) = (0,0);
c07a80fd 418 while (<$fh>) {
9c5c68c8 419 if( _parse_header($_, \%test, \%tot) ) {
420 warn "Test header seen twice!\n" if $seen_header;
421
422 $seen_header = 1;
423
424 warn "1..M can only appear at the beginning or end of tests\n"
425 if $tests_seen && $test{max} < $tests_seen;
426 }
427 elsif( _parse_test_line($_, \%test, \%tot) ) {
428 $tests_seen++;
d667a7e6 429 }
9c5c68c8 430 # else, ignore it.
c07a80fd 431 }
9c5c68c8 432
433 my($estatus, $wstatus) = _close_fh($fh);
434
b82fa0b7 435 my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;
436
68dc0745 437 if ($wstatus) {
b82fa0b7 438 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
9c5c68c8 439 $estatus, $wstatus);
b82fa0b7 440 $failedtests{$tfile}{name} = $tfile;
9c5c68c8 441 }
b82fa0b7 442 elsif ($allok) {
9c5c68c8 443 if ($test{max} and $test{skipped} + $test{bonus}) {
7b13a3f5 444 my @msg;
9c5c68c8 445 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
446 if $test{skipped};
447 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
448 if $test{bonus};
449 print "$test{ml}ok, ".join(', ', @msg)."\n";
450 } elsif ($test{max}) {
451 print "$test{ml}ok\n";
452 } elsif (defined $test{skip_reason}) {
453 print "skipped: $test{skip_reason}\n";
454 $tot{skipped}++;
c0ee6f5c 455 } else {
45c0de28 456 print "skipped test on this platform\n";
9c5c68c8 457 $tot{skipped}++;
c0ee6f5c 458 }
9c5c68c8 459 $tot{good}++;
6c31b336 460 }
b82fa0b7 461 else {
462 if ($test{max}) {
463 if ($test{'next'} <= $test{max}) {
464 push @{$test{failed}}, $test{'next'}..$test{max};
465 }
466 if (@{$test{failed}}) {
467 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
468 @{$test{failed}});
469 print "$test{ml}$txt";
470 $failedtests{$tfile} = { canon => $canon,
471 max => $test{max},
472 failed => scalar @{$test{failed}},
473 name => $tfile,
474 percent => 100*(scalar @{$test{failed}})/$test{max},
475 estat => '',
476 wstat => '',
477 };
478 } else {
479 print "Don't know which tests failed: got $test{ok} ok, ".
480 "expected $test{max}\n";
481 $failedtests{$tfile} = { canon => '??',
482 max => $test{max},
483 failed => '??',
484 name => $tfile,
485 percent => undef,
486 estat => '',
487 wstat => '',
488 };
489 }
490 $tot{bad}++;
491 } elsif ($test{'next'} == 0) {
492 print "FAILED before any test output arrived\n";
493 $tot{bad}++;
494 $failedtests{$tfile} = { canon => '??',
495 max => '??',
496 failed => '??',
497 name => $tfile,
498 percent => undef,
499 estat => '',
500 wstat => '',
501 };
502 }
503 }
504
9c5c68c8 505 $tot{sub_skipped} += $test{skipped};
506
507 if (defined $Files_In_Dir) {
b82fa0b7 508 my @new_dir_files = _globdir $Files_In_Dir;
17a79f5b 509 if (@new_dir_files != @dir_files) {
510 my %f;
511 @f{@new_dir_files} = (1) x @new_dir_files;
512 delete @f{@dir_files};
513 my @f = sort keys %f;
514 print "LEAKED FILES: @f\n";
515 @dir_files = @new_dir_files;
516 }
517 }
a0d0e21e 518 }
9c5c68c8 519 $tot{bench} = timediff(new Benchmark, $t_start);
d667a7e6 520
774d564b 521 if ($^O eq 'VMS') {
522 if (defined $old5lib) {
523 $ENV{PERL5LIB} = $old5lib;
b876d4a6 524 } else {
774d564b 525 delete $ENV{PERL5LIB};
526 }
527 }
9c5c68c8 528
529 return(\%tot, \%failedtests);
530}
531
b82fa0b7 532=item B<_mk_leader>
533
534 my($leader, $ml) = _mk_leader($test_file);
535
536Generates the 't/foo........' $leader for the given $test_file as well
537as a similar version which will overwrite the current line (by use of
538\r and such). $ml may be empty if Test::Harness doesn't think you're
539on TTY.
540
541=cut
542
543sub _mk_leader {
544 my $te = shift;
545 chop($te); # XXX chomp?
546
547 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
548 my $blank = (' ' x 77);
549 my $leader = "$te" . '.' x (20 - length($te));
550 my $ml = "";
551
552 $ml = "\r$blank\r$leader"
553 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
554
555 return($leader, $ml);
556}
557
9c5c68c8 558
559sub _show_results {
560 my($tot, $failedtests) = @_;
561
562 my $pct;
563 my $bonusmsg = _bonusmsg($tot);
564
565 if ($tot->{bad} == 0 && $tot->{max}) {
7b13a3f5 566 print "All tests successful$bonusmsg.\n";
9c5c68c8 567 } elsif ($tot->{tests}==0){
6c31b336 568 die "FAILED--no tests were run for some reason.\n";
9c5c68c8 569 } elsif ($tot->{max} == 0) {
570 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
571 die "FAILED--$tot->{tests} test $blurb could be run, ".
572 "alas--no output ever seen\n";
c07a80fd 573 } else {
9c5c68c8 574 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
6c31b336 575 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
9c5c68c8 576 $tot->{max} - $tot->{ok}, $tot->{max},
577 100*$tot->{ok}/$tot->{max};
0a931e4a 578
9c5c68c8 579 my($fmt_top, $fmt) = _create_fmts($failedtests);
0a931e4a 580
581 # Now write to formats
9c5c68c8 582 for my $script (sort keys %$failedtests) {
583 $Curtest = $failedtests->{$script};
760ac839 584 write;
585 }
9c5c68c8 586 if ($tot->{bad}) {
9b0ceca9 587 $bonusmsg =~ s/^,\s*//;
588 print "$bonusmsg.\n" if $bonusmsg;
9c5c68c8 589 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
590 "$subpct\n";
c07a80fd 591 }
592 }
f0a9308e 593
9c5c68c8 594 printf("Files=%d, Tests=%d, %s\n",
595 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
596}
597
598
599sub _parse_header {
600 my($line, $test, $tot) = @_;
601
602 my $is_header = 0;
603
604 print $line if $Verbose;
605
606 # 1..10 todo 4 7 10;
607 if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
608 $test->{max} = $1;
609 for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }
610
611 $tot->{max} += $test->{max};
612 $tot->{files}++;
613
614 $is_header = 1;
615 }
616 # 1..10
617 # 1..0 # skip Why? Because I said so!
618 elsif ($line =~ /^1\.\.([0-9]+)
b82fa0b7 619 (\s*\#\s*[Ss]kip\S*\s* (.+))?
9c5c68c8 620 /x
621 )
622 {
623 $test->{max} = $1;
624 $tot->{max} += $test->{max};
625 $tot->{files}++;
b82fa0b7 626 $test->{'next'} = 1 unless $test->{'next'};
9c5c68c8 627 $test->{skip_reason} = $3 if not $test->{max} and defined $3;
628
629 $is_header = 1;
630 }
631 else {
632 $is_header = 0;
633 }
634
635 return $is_header;
c07a80fd 636}
637
9c5c68c8 638
b82fa0b7 639sub _open_test {
640 my($test) = shift;
641
642 my $s = _set_switches($test);
643
644 # XXX This is WAY too core specific!
645 my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
646 ? "./perl -I../lib ../utils/perlcc $test "
647 . "-r 2>> ./compilelog |"
648 : "$^X $s $test|";
649 $cmd = "MCR $cmd" if $^O eq 'VMS';
650
651 if( open(PERL, $cmd) ) {
652 return \*PERL;
653 }
654 else {
655 print "can't run $test. $!\n";
656 return;
657 }
658}
659
660sub _run_one_test {
661 my($test) = @_;
662
663
664}
665
666
9c5c68c8 667sub _parse_test_line {
668 my($line, $test, $tot) = @_;
669
670 if ($line =~ /^(not\s+)?ok\b/i) {
b82fa0b7 671 my $this = $test->{'next'} || 1;
9c5c68c8 672 # "not ok 23"
37ce32a7 673 if ($line =~ /^(not )?ok\s*(\d*)(\s*#.*)?/) {
674 my($not, $tnum, $extra) = ($1, $2, $3);
675
676 $this = $tnum if $tnum;
677
678 my($type, $reason) = $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/
679 if defined $extra;
680
681 my($istodo, $isskip);
682 if( defined $type ) {
683 $istodo = $type =~ /TODO/;
684 $isskip = $type =~ /skip/i;
685 }
686
687 $test->{todo}{$tnum} = 1 if $istodo;
688
689 if( $not ) {
690 print "$test->{ml}NOK $this" if $test->{ml};
691 if (!$test->{todo}{$this}) {
692 push @{$test->{failed}}, $this;
693 } else {
694 $test->{ok}++;
695 $tot->{ok}++;
696 }
697 }
698 else {
699 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
700 $test->{ok}++;
701 $tot->{ok}++;
702 $test->{skipped}++ if $isskip;
703
704 if (defined $reason and defined $test->{skip_reason}) {
705 # print "was: '$skip_reason' new '$reason'\n";
706 $test->{skip_reason} = 'various reasons'
707 if $test->{skip_reason} ne $reason;
708 } elsif (defined $reason) {
709 $test->{skip_reason} = $reason;
710 }
711
712 $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
713 }
9c5c68c8 714 }
715 # XXX ummm... dunno
716 elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
717 $this = $1 if $1 > 0;
718 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
719 $test->{ok}++;
720 $tot->{ok}++;
721 }
722 else {
723 # an ok or not ok not matching the 3 cases above...
724 # just ignore it for compatibility with TEST
 725            return;  # 'next' here would exit the sub via the caller's loop
726 }
727
b82fa0b7 728 if ($this > $test->{'next'}) {
9c5c68c8 729 # print "Test output counter mismatch [test $this]\n";
730 # no need to warn probably
b82fa0b7 731 push @{$test->{failed}}, $test->{'next'}..$this-1;
9c5c68c8 732 }
b82fa0b7 733 elsif ($this < $test->{'next'}) {
9c5c68c8 734 #we have seen more "ok" lines than the number suggests
735 print "Confused test output: test $this answered after ".
b82fa0b7 736 "test ", $test->{'next'}-1, "\n";
737 $test->{'next'} = $this;
9c5c68c8 738 }
b82fa0b7 739 $test->{'next'} = $this + 1;
9c5c68c8 740
741 }
742 elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
743 die "FAILED--Further testing stopped" .
744 ($1 ? ": $1\n" : ".\n");
745 }
746}
747
748
749sub _bonusmsg {
750 my($tot) = @_;
751
752 my $bonusmsg = '';
753 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
754 " UNEXPECTEDLY SUCCEEDED)")
755 if $tot->{bonus};
756
757 if ($tot->{skipped}) {
758 $bonusmsg .= ", $tot->{skipped} test"
759 . ($tot->{skipped} != 1 ? 's' : '');
760 if ($tot->{sub_skipped}) {
761 $bonusmsg .= " and $tot->{sub_skipped} subtest"
762 . ($tot->{sub_skipped} != 1 ? 's' : '');
763 }
764 $bonusmsg .= ' skipped';
765 }
766 elsif ($tot->{sub_skipped}) {
767 $bonusmsg .= ", $tot->{sub_skipped} subtest"
768 . ($tot->{sub_skipped} != 1 ? 's' : '')
769 . " skipped";
770 }
771
772 return $bonusmsg;
773}
774
775# VMS has some subtle nastiness with closing the test files.
776sub _close_fh {
777 my($fh) = shift;
778
779 close($fh); # must close to reap child resource values
780
781 my $wstatus = $Ignore_Exitcode ? 0 : $?; # Can trust $? ?
782 my $estatus;
783 $estatus = ($^O eq 'VMS'
784 ? eval 'use vmsish "status"; $estatus = $?'
785 : $wstatus >> 8);
786
787 return($estatus, $wstatus);
788}
789
790
791# Set up the command-line switches to run perl as.
792sub _set_switches {
793 my($test) = shift;
794
b82fa0b7 795 local *TEST;
796 open(TEST, $test) or print "can't open $test. $!\n";
797 my $first = <TEST>;
9c5c68c8 798 my $s = $Switches;
799 $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
800 if exists $ENV{'HARNESS_PERL_SWITCHES'};
801 $s .= join " ", q[ "-T"], map {qq["-I$_"]} @INC
802 if $first =~ /^#!.*\bperl.*-\w*T/;
803
b82fa0b7 804 close(TEST) or print "can't close $test. $!\n";
9c5c68c8 805
806 return $s;
807}
808
809
810# Test program go boom.
811sub _dubious_return {
812 my($test, $tot, $estatus, $wstatus) = @_;
813 my ($failed, $canon, $percent) = ('??', '??');
814
815 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
816 "(wstat %d, 0x%x)\n",
817 $wstatus,$wstatus;
818 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
819
820 if (corestatus($wstatus)) { # until we have a wait module
821 if ($Have_Devel_Corestack) {
822 Devel::CoreStack::stack($^X);
823 } else {
824 print "\ttest program seems to have generated a core\n";
825 }
826 }
827
828 $tot->{bad}++;
829
830 if ($test->{max}) {
b82fa0b7 831 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
9c5c68c8 832 print "\tafter all the subtests completed successfully\n";
833 $percent = 0;
834 $failed = 0; # But we do not set $canon!
835 }
836 else {
b82fa0b7 837 push @{$test->{failed}}, $test->{'next'}..$test->{max};
9c5c68c8 838 $failed = @{$test->{failed}};
839 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
840 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
841 print "DIED. ",$txt;
842 }
843 }
844
845 return { canon => $canon, max => $test->{max} || '??',
846 failed => $failed,
66fd8cb9 847 percent => $percent,
9c5c68c8 848 estat => $estatus, wstat => $wstatus,
849 };
850}
851
852
853sub _garbled_output {
854 my($gibberish) = shift;
855 warn "Confusing test output: '$gibberish'\n";
856}
857
858
859sub _create_fmts {
860 my($failedtests) = @_;
861
b82fa0b7 862 my $failed_str = "Failed Test";
863 my $middle_str = " Stat Wstat Total Fail Failed ";
9c5c68c8 864 my $list_str = "List of Failed";
865
866 # Figure out our longest name string for formatting purposes.
867 my $max_namelen = length($failed_str);
868 foreach my $script (keys %$failedtests) {
869 my $namelen = length $failedtests->{$script}->{name};
870 $max_namelen = $namelen if $namelen > $max_namelen;
871 }
872
873 my $list_len = $Columns - length($middle_str) - $max_namelen;
874 if ($list_len < length($list_str)) {
875 $list_len = length($list_str);
876 $max_namelen = $Columns - length($middle_str) - $list_len;
877 if ($max_namelen < length($failed_str)) {
878 $max_namelen = length($failed_str);
879 $Columns = $max_namelen + length($middle_str) + $list_len;
880 }
881 }
882
883 my $fmt_top = "format STDOUT_TOP =\n"
b82fa0b7 884 . sprintf("%-${max_namelen}s", $failed_str)
9c5c68c8 885 . $middle_str
886 . $list_str . "\n"
887 . "-" x $Columns
888 . "\n.\n";
889
890 my $fmt = "format STDOUT =\n"
891 . "@" . "<" x ($max_namelen - 1)
b82fa0b7 892 . " @>> @>>>> @>>>> @>>> ^##.##% "
9c5c68c8 893 . "^" . "<" x ($list_len - 1) . "\n"
894 . '{ $Curtest->{name}, $Curtest->{estat},'
895 . ' $Curtest->{wstat}, $Curtest->{max},'
896 . ' $Curtest->{failed}, $Curtest->{percent},'
897 . ' $Curtest->{canon}'
898 . "\n}\n"
899 . "~~" . " " x ($Columns - $list_len - 2) . "^"
900 . "<" x ($list_len - 1) . "\n"
901 . '$Curtest->{canon}'
902 . "\n.\n";
903
904 eval $fmt_top;
905 die $@ if $@;
906 eval $fmt;
907 die $@ if $@;
908
909 return($fmt_top, $fmt);
910}
911
b82fa0b7 912{
913 my $tried_devel_corestack;
9c5c68c8 914
b82fa0b7 915 sub corestatus {
916 my($st) = @_;
c0ee6f5c 917
b82fa0b7 918 eval {require 'wait.ph'};
919 my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
c0ee6f5c 920
b82fa0b7 921 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
922 unless $tried_devel_corestack++;
c0ee6f5c 923
b82fa0b7 924 $ret;
925 }
c0ee6f5c 926}
927
c07a80fd 928sub canonfailed ($@) {
89d3b7e2 929 my($max,$skipped,@failed) = @_;
6c31b336 930 my %seen;
931 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
c07a80fd 932 my $failed = @failed;
933 my @result = ();
934 my @canon = ();
935 my $min;
936 my $last = $min = shift @failed;
760ac839 937 my $canon;
c07a80fd 938 if (@failed) {
939 for (@failed, $failed[-1]) { # don't forget the last one
940 if ($_ > $last+1 || $_ == $last) {
941 if ($min == $last) {
942 push @canon, $last;
943 } else {
944 push @canon, "$min-$last";
945 }
946 $min = $_;
947 }
948 $last = $_;
949 }
950 local $" = ", ";
951 push @result, "FAILED tests @canon\n";
b82fa0b7 952 $canon = join ' ', @canon;
a0d0e21e 953 } else {
c07a80fd 954 push @result, "FAILED test $last\n";
760ac839 955 $canon = $last;
a0d0e21e 956 }
c07a80fd 957
958 push @result, "\tFailed $failed/$max tests, ";
89d3b7e2 959 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
960 my $ender = 's' x ($skipped > 1);
961 my $good = $max - $failed - $skipped;
962 my $goodper = sprintf("%.2f",100*($good/$max));
9c5c68c8 963 push @result, " (-$skipped skipped test$ender: $good okay, ".
964 "$goodper%)"
965 if $skipped;
89d3b7e2 966 push @result, "\n";
760ac839 967 my $txt = join "", @result;
968 ($txt, $canon);
a0d0e21e 969}
970
b82fa0b7 971=end _private
9c5c68c8 972
b82fa0b7 973=back
d667a7e6 974
b82fa0b7 975=cut
9c5c68c8 976
9c5c68c8 977
b82fa0b7 9781;
979__END__
9c5c68c8 980
981
cb1a09d0 982=head1 EXPORT
983
c0ee6f5c 984C<&runtests> is exported by Test::Harness by default.
cb1a09d0 985
9c5c68c8 986C<$verbose> and C<$switches> are exported upon request.
987
988
cb1a09d0 989=head1 DIAGNOSTICS
990
991=over 4
992
993=item C<All tests successful.\nFiles=%d, Tests=%d, %s>
994
995If all tests are successful some statistics about the performance are
996printed.
997
6c31b336 998=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
999
1000For any single script that has failing subtests statistics like the
1001above are printed.
1002
1003=item C<Test returned status %d (wstat %d)>
1004
9c5c68c8 1005For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
1006and C<$?> are printed in a message similar to the above.
6c31b336 1007
1008=item C<Failed 1 test, %.2f%% okay. %s>
cb1a09d0 1009
6c31b336 1010=item C<Failed %d/%d tests, %.2f%% okay. %s>
cb1a09d0 1011
1012If not all tests were successful, the script dies with one of the
1013above messages.
1014
d667a7e6 1015=item C<FAILED--Further testing stopped%s>
1016
1017If a single subtest decides that further testing will not make sense,
1018the script dies with this message.
1019
cb1a09d0 1020=back
1021
9b0ceca9 1022=head1 ENVIRONMENT
1023
37ce32a7 1024=over 4
1025
b82fa0b7 1026=item C<HARNESS_IGNORE_EXITCODE>
37ce32a7 1027
1028Makes harness ignore the exit status of child processes when defined.
1029
b82fa0b7 1030=item C<HARNESS_NOTTY>
9b0ceca9 1031
37ce32a7 1032When set to a true value, forces Test::Harness to behave as though
1033STDOUT were not a console. You may need to set this if you don't want
1034harness to output more frequent progress messages using carriage
1035returns. Some consoles may not handle carriage returns properly
1036(which results in somewhat messy output).
0d0c0d42 1037
b82fa0b7 1038=item C<HARNESS_COMPILE_TEST>
9636a016 1039
37ce32a7 1040When true it will make harness attempt to compile the test using
1041C<perlcc> before running it.
1042
b82fa0b7 1043B<NOTE> This currently only works when sitting in the perl source
1044directory!
1045
1046=item C<HARNESS_FILELEAK_IN_DIR>
37ce32a7 1047
1048When set to the name of a directory, harness will check after each
1049test whether new files appeared in that directory, and report them as
17a79f5b 1050
1051 LEAKED FILES: scr.tmp 0 my.db
1052
1053If relative, the directory name is interpreted with respect to the
1054current directory at the moment runtests() was called. Putting an
1055absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
1056
b82fa0b7 1057=item C<HARNESS_PERL_SWITCHES>
37ce32a7 1058
1059Its value will be prepended to the switches used to invoke perl on
b82fa0b7 1060each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
37ce32a7 1061run all tests with all warnings enabled.
1062
b82fa0b7 1063=item C<HARNESS_COLUMNS>
37ce32a7 1064
1065This value will be used for the width of the terminal. If it is not
1066set then it will default to C<COLUMNS>. If this is not set, it will
1067default to 80. Note that users of Bourne-sh based shells will need to
1068C<export COLUMNS> for this module to use that variable.
2b32313b 1069
b82fa0b7 1070=item C<HARNESS_ACTIVE>
37ce32a7 1071
1072Harness sets this before executing the individual tests. This allows
1073the tests to determine if they are being executed through the harness
1074or by any other means.
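
A test script can use this to tone down diagnostics when run under the
harness; the C<$chatty> flag below is purely illustrative:

```perl
# Inside a test script: detect whether the harness is running us.
my $under_harness = $ENV{HARNESS_ACTIVE} ? 1 : 0;

# Only emit verbose diagnostics when run by hand.
my $chatty = $under_harness ? 0 : 1;
```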
1075
1076=back
0a931e4a 1077
b82fa0b7 1078=head1 EXAMPLE
1079
1080Here's how Test::Harness tests itself:
1081
1082 $ cd ~/src/devel/Test-Harness
1083 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1084 $verbose=0; runtests @ARGV;' t/*.t
1085 Using /home/schwern/src/devel/Test-Harness/blib
1086 t/base..............ok
1087 t/nonumbers.........ok
1088 t/ok................ok
1089 t/test-harness......ok
1090 All tests successful.
1091 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
f19ae7a7 1092
cb1a09d0 1093=head1 SEE ALSO
1094
b82fa0b7 1095L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1096the underlying timing routines, L<Devel::CoreStack> to generate core
1097dumps from failed tests and L<Devel::Cover> for test coverage
1098analysis.
c07a80fd 1099
1100=head1 AUTHORS
1101
1102Either Tim Bunce or Andreas Koenig, we don't know. What we know for
1103sure is that it was inspired by Larry Wall's TEST script that came
b876d4a6 1104with perl distributions for ages. Numerous anonymous contributors
b82fa0b7 1105exist. Andreas Koenig held the torch for many years.
1106
1107Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1108
1109=head1 TODO
1110
1111Provide a way of running tests quietly (i.e. no printing) for automated
1112validation of tests. This will probably take the form of a version
1113of runtests() which rather than printing its output returns raw data
1114on the state of the tests.
1115
1116Fix HARNESS_COMPILE_TEST without breaking its core usage.
1117
1118Figure a way to report test names in the failure summary.
37ce32a7 1119
b82fa0b7 1120Rework the test summary so long test names are not truncated as badly.
1121
1122Merge back into bleadperl.
1123
1124Deal with VMS's "not \nok 4\n" mistake.
1125
1126Add option for coverage analysis.
1127
1128=for _private
1129Keeping whittling away at _run_all_tests()
1130
1131=for _private
1132Clean up how the summary is printed. Get rid of those damned formats.
cb1a09d0 1133
1134=head1 BUGS
1135
1136Test::Harness uses $^X to determine the perl binary to run the tests
6c31b336 1137with. Test scripts running via the shebang (C<#!>) line may not be
1138portable because $^X is not consistent for shebang scripts across
cb1a09d0 1139platforms. This is no problem when Test::Harness is run with an
6c31b336 1140absolute path to the perl binary or when $^X can be found in the path.
cb1a09d0 1141
b82fa0b7 1142HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
1143directory.
1144
cb1a09d0 1145=cut