[p5sagit/p5-mst-13.2.git] / lib / Test / Harness.pm
d667a7e6 1# -*- Mode: cperl; cperl-indent-level: 4 -*-
b82fa0b7 2# $Id: Harness.pm,v 1.11 2001/05/23 18:24:41 schwern Exp $
3
a0d0e21e 4package Test::Harness;
5
b82fa0b7 6require 5.004;
a0d0e21e 7use Exporter;
8use Benchmark;
4633a7c4 9use Config;
760ac839 10use strict;
11
b82fa0b7 12use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
13 $Columns $verbose $switches
14 @ISA @EXPORT @EXPORT_OK
15 );
4633a7c4 16
9c5c68c8 17# Backwards compatibility for exportable variable names.
18*verbose = \$Verbose;
19*switches = \$Switches;
20
21$Have_Devel_Corestack = 0;
22
b82fa0b7 23$VERSION = "1.21";
9b0ceca9 24
f19ae7a7 25$ENV{HARNESS_ACTIVE} = 1;
26
9b0ceca9 27# Some experimental versions of the OS/2 build have a broken $?
9c5c68c8 28my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};
29
30my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};
9b0ceca9 31
17a79f5b 32
9c5c68c8 33@ISA = ('Exporter');
34@EXPORT = qw(&runtests);
35@EXPORT_OK = qw($verbose $switches);
4633a7c4 36
9c5c68c8 37$Verbose = 0;
38$Switches = "-w";
39$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
b82fa0b7 40$Columns--; # Some shells have trouble with a full line of text.
41
42
43=head1 NAME
44
45Test::Harness - run perl standard test scripts with statistics
46
47=head1 SYNOPSIS
48
49 use Test::Harness;
50
51 runtests(@test_files);
52
53=head1 DESCRIPTION
a0d0e21e 54
b82fa0b7 55B<STOP!> If all you want to do is write a test script, consider using
56Test::Simple. Otherwise, read on.
57
58(By using the Test module, you can write test scripts without
59knowing the exact output this module expects. However, if you need to
60know the specifics, read on!)
61
62Perl test scripts print to standard output C<"ok N"> for each single
63test, where C<N> is an increasing sequence of integers. The first line
64output by a standard test script is C<"1..M"> with C<M> being the
65number of tests that should be run within the test
 66script. Test::Harness::runtests(@tests) runs all the test scripts
67named as arguments and checks standard output for the expected
68C<"ok N"> strings.
69
70After all tests have been performed, runtests() prints some
71performance statistics that are computed by the Benchmark module.
72
73=head2 The test script output
74
75The following explains how Test::Harness interprets the output of your
76test program.
77
78=over 4
79
80=item B<'1..M'>
81
82This header tells how many tests there will be. It should be the
 83first line output by your test program (but it's okay if it's preceded
84by comments).
85
 86In certain instances, you may not know how many tests you will
87ultimately be running. In this case, it is permitted (but not
88encouraged) for the 1..M header to appear as the B<last> line output
89by your test (again, it can be followed by further comments). But we
90strongly encourage you to put it first.
91
92Under B<no> circumstances should 1..M appear in the middle of your
93output or more than once.
94
95
96=item B<'ok', 'not ok'. Ok?>
97
 98Any output from the test script to standard error is ignored and
 99passed through, and thus will be seen by the user. Lines written to standard
100output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
101runtests(). All other lines are discarded.
102
103C</^not ok/> indicates a failed test. C</^ok/> is a successful test.
104
105
106=item B<test numbers>
107
108Perl normally expects the 'ok' or 'not ok' to be followed by a test
109number. It is tolerated if the test numbers after 'ok' are
 110omitted. In this case Test::Harness temporarily maintains its own
111counter until the script supplies test numbers again. So the following
112test script
113
114 print <<END;
115 1..6
116 not ok
117 ok
118 not ok
119 ok
120 ok
121 END
122
123will generate
124
125 FAILED tests 1, 3, 6
126 Failed 3/6 tests, 50.00% okay
127
128
129=item B<$Test::Harness::verbose>
130
131The global variable $Test::Harness::verbose is exportable and can be
132used to let runtests() display the standard output of the script
133without altering the behavior otherwise.
134
135=item B<$Test::Harness::switches>
136
137The global variable $Test::Harness::switches is exportable and can be
138used to set perl command line options used for running the test
139script(s). The default value is C<-w>.
140
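For example, a driver script could enable both (a sketch; the C<-Ilib>
switch is only illustrative):

    use Test::Harness qw(&runtests $verbose $switches);

    $verbose  = 1;           # echo each script's output
    $switches = "-w -Ilib";  # add a hypothetical include path
    runtests(@ARGV);
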
141=item B<Skipping tests>
142
143If the standard output line contains the substring C< # Skip> (with
144variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
 145counted as a skipped test. If the whole test script succeeds, the
146count of skipped tests is included in the generated output.
147C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
148for skipping.
149
150 ok 23 # skip Insufficient flogiston pressure.
151
 152Similarly, one can include an explanation in a C<1..0> line
153emitted if the test script is skipped completely:
154
155 1..0 # Skipped: no leverage found
156
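To illustrate, a minimal test script might skip its second test like
this (a sketch; the module being probed is just an example condition):

    print "1..2\n";
    print "ok 1\n";
    if (eval { require File::Spec }) {
        print "ok 2\n";
    } else {
        print "ok 2 # Skip File::Spec not installed\n";
    }
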
157=item B<Todo tests>
158
159If the standard output line contains the substring C< # TODO> after
160C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
161afterwards is the thing that has to be done before this test will
162succeed.
163
164 not ok 13 # TODO harness the power of the atom
165
166These tests represent a feature to be implemented or a bug to be fixed
167and act as something of an executable "thing to do" list. They are
168B<not> expected to succeed. Should a todo test begin succeeding,
169Test::Harness will report it as a bonus. This indicates that whatever
170you were supposed to do has been done and you should promote this to a
171normal test.
172
173=item B<Bail out!>
174
175As an emergency measure, a test script can decide that further tests
176are useless (e.g. missing dependencies) and testing should stop
177immediately. In that case the test script prints the magic words
178
179 Bail out!
180
181to standard output. Any message after these words will be displayed by
182C<Test::Harness> as the reason why testing is stopped.
183
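As a sketch (the DBI dependency here is purely illustrative), a script
might bail out like this:

    print "1..5\n";
    unless (eval { require DBI }) {
        print "Bail out! DBI is not installed\n";
        exit 0;
    }
    print "ok 1\n";
    # ... remaining tests ...
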
184=item B<Comments>
185
186Additional comments may be put into the testing output on their own
 187lines. Comment lines should begin with a '#'; Test::Harness will
188ignore them.
189
190 ok 1
191 # Life is good, the sun is shining, RAM is cheap.
192 not ok 2
193 # got 'Bush' expected 'Gore'
194
195=item B<Anything else>
196
 197Test::Harness will silently ignore any other output B<BUT WE
 198PLAN TO CHANGE THIS!> If you wish to place additional output in your
199test script, please use a comment.
200
201=back
202
203
204=head2 Failure
205
 206It will happen: your tests will fail. After you mop up your ego, you
207can begin examining the summary report:
208
209 t/base..............ok
210 t/nonumbers.........ok
211 t/ok................ok
212 t/test-harness......ok
213 t/waterloo..........dubious
214 Test returned status 3 (wstat 768, 0x300)
215 DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
216 Failed 10/20 tests, 50.00% okay
217 Failed Test Stat Wstat Total Fail Failed List of Failed
218 -----------------------------------------------------------------------
219 t/waterloo.t 3 768 20 10 50.00% 1 3 5 7 9 11 13 15 17 19
220 Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.
221
222Everything passed but t/waterloo.t. It failed 10 of 20 tests and
223exited with non-zero status indicating something dubious happened.
224
225The columns in the summary report mean:
226
227=over 4
228
229=item B<Failed Test>
230
231The test file which failed.
232
233=item B<Stat>
234
 235If the test exited with a non-zero status, this is that exit status.
236
237=item B<Wstat>
238
 239The wait status of the test, i.e. the raw C<$?> value from the child process.
240
241=item B<Total>
242
243Total number of tests expected to run.
244
245=item B<Fail>
246
247Number which failed, either from "not ok" or because they never ran.
248
249=item B<Failed>
250
251Percentage of the total tests which failed.
252
253=item B<List of Failed>
254
255A list of the tests which failed. Successive failures may be
 256abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
25720 failed).
258
259=back
260
261
262=head2 Functions
263
 264Test::Harness currently has only one public function; here it is.
265
266=over 4
267
268=item B<runtests>
269
270 my $allok = runtests(@test_files);
271
272This runs all the given @test_files and divines whether they passed
273or failed based on their output to STDOUT (details above). It prints
 274out each individual test which failed along with a summary report and
 275how long it all took.
276
277It returns true if everything was ok, false otherwise.
278
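A small driver sketch; note that runtests() dies with a summary message
when tests fail (see L</DIAGNOSTICS>), so trap it if you want to
continue afterwards:

    use Test::Harness;

    my $allok = eval { runtests(glob "t/*.t") };
    print STDERR $@ if $@;
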
279=for _private
280This is just _run_all_tests() plus _show_results()
281
282=cut
17a79f5b 283
a0d0e21e 284sub runtests {
285 my(@tests) = @_;
9c5c68c8 286
b82fa0b7 287 local ($\, $,);
288
289 my($tot, $failedtests) = _run_all_tests(@tests);
9c5c68c8 290 _show_results($tot, $failedtests);
291
b82fa0b7 292 my $ok = ($tot->{bad} == 0 && $tot->{max});
293
294 die q{Assert '$ok xor keys %$failedtests' failed!}
295 unless $ok xor keys %$failedtests;
296
297 return $ok;
298}
299
300=begin _private
301
302=item B<_globdir>
303
304 my @files = _globdir $dir;
305
306Returns all the files in a directory. This is shorthand for backwards
307compatibility on systems where glob() doesn't work right.
308
309=cut
310
311sub _globdir {
312 opendir DIRH, shift;
313 my @f = readdir DIRH;
314 closedir DIRH;
315
316 return @f;
9c5c68c8 317}
318
b82fa0b7 319=item B<_run_all_tests>
320
321 my($total, $failed) = _run_all_tests(@test_files);
322
323Runs all the given @test_files (as runtests()) but does it quietly (no
324report). $total is a hash ref summary of all the tests run. Its keys
 325and values are these:
326
327 bonus Number of individual todo tests unexpectedly passed
 328 max Number of individual tests run
329 ok Number of individual tests passed
330 sub_skipped Number of individual tests skipped
331
 332 files Number of test files run
333 good Number of test files passed
334 bad Number of test files failed
335 tests Number of test files originally given
336 skipped Number of test files skipped
337
338If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
339test.
340
341$failed is a hash ref of all the test scripts which failed. Each key
342is the name of a test script, each value is another hash representing
343how that script failed. Its keys are these:
9c5c68c8 344
b82fa0b7 345 name Name of the test which failed
346 estat Script's exit value
347 wstat Script's wait status
348 max Number of individual tests
349 failed Number which failed
350 percent Percentage of tests which failed
351 canon List of tests which failed (as string).
352
353Needless to say, $failed should be empty if everything passed.
354
355B<NOTE> Currently this function is still noisy. I'm working on it.
356
357=cut
358
359sub _run_all_tests {
9c5c68c8 360 my(@tests) = @_;
a0d0e21e 361 local($|) = 1;
9c5c68c8 362 my(%failedtests);
363
364 # Test-wide totals.
365 my(%tot) = (
366 bonus => 0,
367 max => 0,
368 ok => 0,
369 files => 0,
370 bad => 0,
371 good => 0,
372 tests => scalar @tests,
373 sub_skipped => 0,
374 skipped => 0,
375 bench => 0
376 );
774d564b 377
378 # pass -I flags to children
81ff29e3 379 my $old5lib = $ENV{PERL5LIB};
774d564b 380
1250aba5 381 # VMS has a 255-byte limit on the length of %ENV entries, so
382 # toss the ones that involve perl_root, the install location
383 # for VMS
384 my $new5lib;
385 if ($^O eq 'VMS') {
386 $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
9c5c68c8 387 $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g;
1250aba5 388 }
389 else {
390 $new5lib = join($Config{path_sep}, @INC);
391 }
392
393 local($ENV{'PERL5LIB'}) = $new5lib;
a0d0e21e 394
b82fa0b7 395 my @dir_files = defined $Files_In_Dir ? _globdir($Files_In_Dir) : ();
a0d0e21e 396 my $t_start = new Benchmark;
9c5c68c8 397
63b097aa 398 my $maxlen = 0;
908801fe 399 my $maxsuflen = 0;
400 foreach (@tests) { # The same code in t/TEST
401 my $suf = /\.(\w+)$/ ? $1 : '';
402 my $len = length;
403 my $suflen = length $suf;
404 $maxlen = $len if $len > $maxlen;
405 $maxsuflen = $suflen if $suflen > $maxsuflen;
7a315204 406 }
908801fe 407 # + 3 : we want three dots between the test name and the "ok"
408 my $width = $maxlen + 3 - $maxsuflen;
b82fa0b7 409 foreach my $tfile (@tests) {
7a315204 410 my($leader, $ml) = _mk_leader($tfile, $width);
b82fa0b7 411 print $leader;
9c5c68c8 412
b82fa0b7 413 my $fh = _open_test($tfile);
9c5c68c8 414
415 # state of the current test.
416 my %test = (
417 ok => 0,
b82fa0b7 418 'next' => 0,
9c5c68c8 419 max => 0,
420 failed => [],
421 todo => {},
422 bonus => 0,
423 skipped => 0,
424 skip_reason => undef,
425 ml => $ml,
426 );
427
428 my($seen_header, $tests_seen) = (0,0);
c07a80fd 429 while (<$fh>) {
9c5c68c8 430 if( _parse_header($_, \%test, \%tot) ) {
431 warn "Test header seen twice!\n" if $seen_header;
432
433 $seen_header = 1;
434
435 warn "1..M can only appear at the beginning or end of tests\n"
436 if $tests_seen && $test{max} < $tests_seen;
437 }
438 elsif( _parse_test_line($_, \%test, \%tot) ) {
439 $tests_seen++;
d667a7e6 440 }
9c5c68c8 441 # else, ignore it.
c07a80fd 442 }
9c5c68c8 443
444 my($estatus, $wstatus) = _close_fh($fh);
445
b82fa0b7 446 my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;
447
68dc0745 448 if ($wstatus) {
b82fa0b7 449 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
9c5c68c8 450 $estatus, $wstatus);
b82fa0b7 451 $failedtests{$tfile}{name} = $tfile;
9c5c68c8 452 }
b82fa0b7 453 elsif ($allok) {
9c5c68c8 454 if ($test{max} and $test{skipped} + $test{bonus}) {
7b13a3f5 455 my @msg;
9c5c68c8 456 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
457 if $test{skipped};
458 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
459 if $test{bonus};
460 print "$test{ml}ok, ".join(', ', @msg)."\n";
461 } elsif ($test{max}) {
462 print "$test{ml}ok\n";
463 } elsif (defined $test{skip_reason}) {
464 print "skipped: $test{skip_reason}\n";
465 $tot{skipped}++;
c0ee6f5c 466 } else {
45c0de28 467 print "skipped test on this platform\n";
9c5c68c8 468 $tot{skipped}++;
c0ee6f5c 469 }
9c5c68c8 470 $tot{good}++;
6c31b336 471 }
b82fa0b7 472 else {
473 if ($test{max}) {
474 if ($test{'next'} <= $test{max}) {
475 push @{$test{failed}}, $test{'next'}..$test{max};
476 }
477 if (@{$test{failed}}) {
478 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
479 @{$test{failed}});
480 print "$test{ml}$txt";
481 $failedtests{$tfile} = { canon => $canon,
482 max => $test{max},
483 failed => scalar @{$test{failed}},
484 name => $tfile,
485 percent => 100*(scalar @{$test{failed}})/$test{max},
486 estat => '',
487 wstat => '',
488 };
489 } else {
490 print "Don't know which tests failed: got $test{ok} ok, ".
491 "expected $test{max}\n";
492 $failedtests{$tfile} = { canon => '??',
493 max => $test{max},
494 failed => '??',
495 name => $tfile,
496 percent => undef,
497 estat => '',
498 wstat => '',
499 };
500 }
501 $tot{bad}++;
502 } elsif ($test{'next'} == 0) {
503 print "FAILED before any test output arrived\n";
504 $tot{bad}++;
505 $failedtests{$tfile} = { canon => '??',
506 max => '??',
507 failed => '??',
508 name => $tfile,
509 percent => undef,
510 estat => '',
511 wstat => '',
512 };
513 }
514 }
515
9c5c68c8 516 $tot{sub_skipped} += $test{skipped};
517
518 if (defined $Files_In_Dir) {
b82fa0b7 519 my @new_dir_files = _globdir $Files_In_Dir;
17a79f5b 520 if (@new_dir_files != @dir_files) {
521 my %f;
522 @f{@new_dir_files} = (1) x @new_dir_files;
523 delete @f{@dir_files};
524 my @f = sort keys %f;
525 print "LEAKED FILES: @f\n";
526 @dir_files = @new_dir_files;
527 }
528 }
a0d0e21e 529 }
9c5c68c8 530 $tot{bench} = timediff(new Benchmark, $t_start);
d667a7e6 531
774d564b 532 if ($^O eq 'VMS') {
533 if (defined $old5lib) {
534 $ENV{PERL5LIB} = $old5lib;
b876d4a6 535 } else {
774d564b 536 delete $ENV{PERL5LIB};
537 }
538 }
9c5c68c8 539
540 return(\%tot, \%failedtests);
541}
542
b82fa0b7 543=item B<_mk_leader>
544
7a315204 545 my($leader, $ml) = _mk_leader($test_file, $width);
b82fa0b7 546
547Generates the 't/foo........' $leader for the given $test_file as well
548as a similar version which will overwrite the current line (by use of
549\r and such). $ml may be empty if Test::Harness doesn't think you're
7a315204 550on TTY. The width is the width of the "yada/blah..." string.
b82fa0b7 551
552=cut
553
554sub _mk_leader {
7a315204 555 my ($te, $width) = @_;
556
b695f709 557 $te =~ s/\.\w+$/./;
b82fa0b7 558
559 if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
560 my $blank = (' ' x 77);
7a315204 561 my $leader = "$te" . '.' x ($width - length($te));
b82fa0b7 562 my $ml = "";
563
564 $ml = "\r$blank\r$leader"
565 if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;
566
567 return($leader, $ml);
568}
569
9c5c68c8 570
571sub _show_results {
572 my($tot, $failedtests) = @_;
573
574 my $pct;
575 my $bonusmsg = _bonusmsg($tot);
576
577 if ($tot->{bad} == 0 && $tot->{max}) {
5d23d6cd 578# print "All tests successful$bonusmsg.\n";
579 print "All tests successful.\n";
9c5c68c8 580 } elsif ($tot->{tests}==0){
6c31b336 581 die "FAILED--no tests were run for some reason.\n";
9c5c68c8 582 } elsif ($tot->{max} == 0) {
583 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
584 die "FAILED--$tot->{tests} test $blurb could be run, ".
585 "alas--no output ever seen\n";
c07a80fd 586 } else {
9c5c68c8 587 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
6c31b336 588 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
9c5c68c8 589 $tot->{max} - $tot->{ok}, $tot->{max},
590 100*$tot->{ok}/$tot->{max};
0a931e4a 591
9c5c68c8 592 my($fmt_top, $fmt) = _create_fmts($failedtests);
0a931e4a 593
594 # Now write to formats
9c5c68c8 595 for my $script (sort keys %$failedtests) {
596 $Curtest = $failedtests->{$script};
760ac839 597 write;
598 }
9c5c68c8 599 if ($tot->{bad}) {
9b0ceca9 600 $bonusmsg =~ s/^,\s*//;
601 print "$bonusmsg.\n" if $bonusmsg;
9c5c68c8 602 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
603 "$subpct\n";
c07a80fd 604 }
605 }
f0a9308e 606
9c5c68c8 607 printf("Files=%d, Tests=%d, %s\n",
608 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
609}
610
611
612sub _parse_header {
613 my($line, $test, $tot) = @_;
614
615 my $is_header = 0;
616
617 print $line if $Verbose;
618
619 # 1..10 todo 4 7 10;
620 if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
621 $test->{max} = $1;
622 for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }
623
624 $tot->{max} += $test->{max};
625 $tot->{files}++;
626
627 $is_header = 1;
628 }
629 # 1..10
630 # 1..0 # skip Why? Because I said so!
631 elsif ($line =~ /^1\.\.([0-9]+)
b82fa0b7 632 (\s*\#\s*[Ss]kip\S*\s* (.+))?
9c5c68c8 633 /x
634 )
635 {
636 $test->{max} = $1;
637 $tot->{max} += $test->{max};
638 $tot->{files}++;
b82fa0b7 639 $test->{'next'} = 1 unless $test->{'next'};
9c5c68c8 640 $test->{skip_reason} = $3 if not $test->{max} and defined $3;
641
642 $is_header = 1;
643 }
644 else {
645 $is_header = 0;
646 }
647
648 return $is_header;
c07a80fd 649}
650
9c5c68c8 651
b82fa0b7 652sub _open_test {
653 my($test) = shift;
654
655 my $s = _set_switches($test);
656
657 # XXX This is WAY too core specific!
658 my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
659 ? "./perl -I../lib ../utils/perlcc $test "
660 . "-r 2>> ./compilelog |"
661 : "$^X $s $test|";
662 $cmd = "MCR $cmd" if $^O eq 'VMS';
663
664 if( open(PERL, $cmd) ) {
665 return \*PERL;
666 }
667 else {
668 print "can't run $test. $!\n";
669 return;
670 }
671}
672
673sub _run_one_test {
674 my($test) = @_;
675
676
677}
678
679
9c5c68c8 680sub _parse_test_line {
681 my($line, $test, $tot) = @_;
682
683 if ($line =~ /^(not\s+)?ok\b/i) {
b82fa0b7 684 my $this = $test->{'next'} || 1;
9c5c68c8 685 # "not ok 23"
37ce32a7 686 if ($line =~ /^(not )?ok\s*(\d*)(\s*#.*)?/) {
687 my($not, $tnum, $extra) = ($1, $2, $3);
688
689 $this = $tnum if $tnum;
690
 691 my($type, $reason) = defined $extra
 692 ? $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/ : ();
693
694 my($istodo, $isskip);
695 if( defined $type ) {
696 $istodo = $type =~ /TODO/;
697 $isskip = $type =~ /skip/i;
698 }
699
700 $test->{todo}{$tnum} = 1 if $istodo;
701
702 if( $not ) {
703 print "$test->{ml}NOK $this" if $test->{ml};
704 if (!$test->{todo}{$this}) {
705 push @{$test->{failed}}, $this;
706 } else {
707 $test->{ok}++;
708 $tot->{ok}++;
709 }
710 }
711 else {
712 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
713 $test->{ok}++;
714 $tot->{ok}++;
715 $test->{skipped}++ if $isskip;
716
723ca54d 717 $reason = '[no reason given]'
718 if $isskip and not defined $reason;
37ce32a7 719 if (defined $reason and defined $test->{skip_reason}) {
720 # print "was: '$skip_reason' new '$reason'\n";
721 $test->{skip_reason} = 'various reasons'
722 if $test->{skip_reason} ne $reason;
723 } elsif (defined $reason) {
724 $test->{skip_reason} = $reason;
725 }
726
727 $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
728 }
9c5c68c8 729 }
730 # XXX ummm... dunno
731 elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
732 $this = $1 if $1 > 0;
733 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
734 $test->{ok}++;
735 $tot->{ok}++;
736 }
737 else {
738 # an ok or not ok not matching the 3 cases above...
739 # just ignore it for compatibility with TEST
 740 return;
741 }
742
b82fa0b7 743 if ($this > $test->{'next'}) {
9c5c68c8 744 # print "Test output counter mismatch [test $this]\n";
745 # no need to warn probably
b82fa0b7 746 push @{$test->{failed}}, $test->{'next'}..$this-1;
9c5c68c8 747 }
b82fa0b7 748 elsif ($this < $test->{'next'}) {
9c5c68c8 749 #we have seen more "ok" lines than the number suggests
750 print "Confused test output: test $this answered after ".
b82fa0b7 751 "test ", $test->{'next'}-1, "\n";
752 $test->{'next'} = $this;
9c5c68c8 753 }
b82fa0b7 754 $test->{'next'} = $this + 1;
9c5c68c8 755
756 }
757 elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
758 die "FAILED--Further testing stopped" .
759 ($1 ? ": $1\n" : ".\n");
760 }
761}
762
763
764sub _bonusmsg {
765 my($tot) = @_;
766
767 my $bonusmsg = '';
768 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
769 " UNEXPECTEDLY SUCCEEDED)")
770 if $tot->{bonus};
771
772 if ($tot->{skipped}) {
773 $bonusmsg .= ", $tot->{skipped} test"
774 . ($tot->{skipped} != 1 ? 's' : '');
775 if ($tot->{sub_skipped}) {
776 $bonusmsg .= " and $tot->{sub_skipped} subtest"
777 . ($tot->{sub_skipped} != 1 ? 's' : '');
778 }
779 $bonusmsg .= ' skipped';
780 }
781 elsif ($tot->{sub_skipped}) {
782 $bonusmsg .= ", $tot->{sub_skipped} subtest"
783 . ($tot->{sub_skipped} != 1 ? 's' : '')
784 . " skipped";
785 }
786
787 return $bonusmsg;
788}
789
790# VMS has some subtle nastiness with closing the test files.
791sub _close_fh {
792 my($fh) = shift;
793
794 close($fh); # must close to reap child resource values
795
796 my $wstatus = $Ignore_Exitcode ? 0 : $?; # Can trust $? ?
797 my $estatus;
798 $estatus = ($^O eq 'VMS'
799 ? eval 'use vmsish "status"; $estatus = $?'
800 : $wstatus >> 8);
801
802 return($estatus, $wstatus);
803}
804
805
806# Set up the command-line switches to run perl as.
807sub _set_switches {
808 my($test) = shift;
809
b82fa0b7 810 local *TEST;
811 open(TEST, $test) or print "can't open $test. $!\n";
812 my $first = <TEST>;
9c5c68c8 813 my $s = $Switches;
814 $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
815 if exists $ENV{'HARNESS_PERL_SWITCHES'};
816 $s .= join " ", q[ "-T"], map {qq["-I$_"]} @INC
817 if $first =~ /^#!.*\bperl.*-\w*T/;
818
b82fa0b7 819 close(TEST) or print "can't close $test. $!\n";
9c5c68c8 820
821 return $s;
822}
823
824
825# Test program go boom.
826sub _dubious_return {
827 my($test, $tot, $estatus, $wstatus) = @_;
828 my ($failed, $canon, $percent) = ('??', '??');
829
830 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
831 "(wstat %d, 0x%x)\n",
832 $wstatus,$wstatus;
833 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
834
835 if (corestatus($wstatus)) { # until we have a wait module
836 if ($Have_Devel_Corestack) {
837 Devel::CoreStack::stack($^X);
838 } else {
839 print "\ttest program seems to have generated a core\n";
840 }
841 }
842
843 $tot->{bad}++;
844
845 if ($test->{max}) {
b82fa0b7 846 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
9c5c68c8 847 print "\tafter all the subtests completed successfully\n";
848 $percent = 0;
849 $failed = 0; # But we do not set $canon!
850 }
851 else {
b82fa0b7 852 push @{$test->{failed}}, $test->{'next'}..$test->{max};
9c5c68c8 853 $failed = @{$test->{failed}};
854 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
855 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
856 print "DIED. ",$txt;
857 }
858 }
859
860 return { canon => $canon, max => $test->{max} || '??',
861 failed => $failed,
66fd8cb9 862 percent => $percent,
9c5c68c8 863 estat => $estatus, wstat => $wstatus,
864 };
865}
866
867
868sub _garbled_output {
869 my($gibberish) = shift;
870 warn "Confusing test output: '$gibberish'\n";
871}
872
873
874sub _create_fmts {
875 my($failedtests) = @_;
876
b82fa0b7 877 my $failed_str = "Failed Test";
878 my $middle_str = " Stat Wstat Total Fail Failed ";
9c5c68c8 879 my $list_str = "List of Failed";
880
881 # Figure out our longest name string for formatting purposes.
882 my $max_namelen = length($failed_str);
883 foreach my $script (keys %$failedtests) {
884 my $namelen = length $failedtests->{$script}->{name};
885 $max_namelen = $namelen if $namelen > $max_namelen;
886 }
887
888 my $list_len = $Columns - length($middle_str) - $max_namelen;
889 if ($list_len < length($list_str)) {
890 $list_len = length($list_str);
891 $max_namelen = $Columns - length($middle_str) - $list_len;
892 if ($max_namelen < length($failed_str)) {
893 $max_namelen = length($failed_str);
894 $Columns = $max_namelen + length($middle_str) + $list_len;
895 }
896 }
897
898 my $fmt_top = "format STDOUT_TOP =\n"
b82fa0b7 899 . sprintf("%-${max_namelen}s", $failed_str)
9c5c68c8 900 . $middle_str
901 . $list_str . "\n"
902 . "-" x $Columns
903 . "\n.\n";
904
905 my $fmt = "format STDOUT =\n"
906 . "@" . "<" x ($max_namelen - 1)
b82fa0b7 907 . " @>> @>>>> @>>>> @>>> ^##.##% "
9c5c68c8 908 . "^" . "<" x ($list_len - 1) . "\n"
909 . '{ $Curtest->{name}, $Curtest->{estat},'
910 . ' $Curtest->{wstat}, $Curtest->{max},'
911 . ' $Curtest->{failed}, $Curtest->{percent},'
912 . ' $Curtest->{canon}'
913 . "\n}\n"
914 . "~~" . " " x ($Columns - $list_len - 2) . "^"
915 . "<" x ($list_len - 1) . "\n"
916 . '$Curtest->{canon}'
917 . "\n.\n";
918
919 eval $fmt_top;
920 die $@ if $@;
921 eval $fmt;
922 die $@ if $@;
923
924 return($fmt_top, $fmt);
925}
926
b82fa0b7 927{
928 my $tried_devel_corestack;
9c5c68c8 929
b82fa0b7 930 sub corestatus {
931 my($st) = @_;
c0ee6f5c 932
b82fa0b7 933 eval {require 'wait.ph'};
934 my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
c0ee6f5c 935
b82fa0b7 936 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
937 unless $tried_devel_corestack++;
c0ee6f5c 938
b82fa0b7 939 $ret;
940 }
c0ee6f5c 941}
942
c07a80fd 943sub canonfailed ($@) {
89d3b7e2 944 my($max,$skipped,@failed) = @_;
6c31b336 945 my %seen;
946 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
c07a80fd 947 my $failed = @failed;
948 my @result = ();
949 my @canon = ();
950 my $min;
951 my $last = $min = shift @failed;
760ac839 952 my $canon;
c07a80fd 953 if (@failed) {
954 for (@failed, $failed[-1]) { # don't forget the last one
955 if ($_ > $last+1 || $_ == $last) {
956 if ($min == $last) {
957 push @canon, $last;
958 } else {
959 push @canon, "$min-$last";
960 }
961 $min = $_;
962 }
963 $last = $_;
964 }
965 local $" = ", ";
966 push @result, "FAILED tests @canon\n";
b82fa0b7 967 $canon = join ' ', @canon;
a0d0e21e 968 } else {
c07a80fd 969 push @result, "FAILED test $last\n";
760ac839 970 $canon = $last;
a0d0e21e 971 }
c07a80fd 972
973 push @result, "\tFailed $failed/$max tests, ";
89d3b7e2 974 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
975 my $ender = 's' x ($skipped > 1);
976 my $good = $max - $failed - $skipped;
977 my $goodper = sprintf("%.2f",100*($good/$max));
9c5c68c8 978 push @result, " (-$skipped skipped test$ender: $good okay, ".
979 "$goodper%)"
980 if $skipped;
89d3b7e2 981 push @result, "\n";
760ac839 982 my $txt = join "", @result;
983 ($txt, $canon);
a0d0e21e 984}
985
b82fa0b7 986=end _private
9c5c68c8 987
b82fa0b7 988=back
d667a7e6 989
b82fa0b7 990=cut
9c5c68c8 991
9c5c68c8 992
b82fa0b7 9931;
994__END__
9c5c68c8 995
996
cb1a09d0 997=head1 EXPORT
998
c0ee6f5c 999C<&runtests> is exported by Test::Harness by default.
cb1a09d0 1000
9c5c68c8 1001C<$verbose> and C<$switches> are exported upon request.
1002
1003
cb1a09d0 1004=head1 DIAGNOSTICS
1005
1006=over 4
1007
1008=item C<All tests successful.\nFiles=%d, Tests=%d, %s>
1009
 1010If all tests are successful, some statistics about the performance are
 1011printed.
1012
6c31b336 1013=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>
1014
 1015For any single script that has failing subtests, statistics like the
 1016above are printed.
1017
1018=item C<Test returned status %d (wstat %d)>
1019
9c5c68c8 1020For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
 1021and C<$?> are printed in a message similar to the above.
6c31b336 1022
1023=item C<Failed 1 test, %.2f%% okay. %s>
cb1a09d0 1024
6c31b336 1025=item C<Failed %d/%d tests, %.2f%% okay. %s>
cb1a09d0 1026
1027If not all tests were successful, the script dies with one of the
1028above messages.
1029
d667a7e6 1030=item C<FAILED--Further testing stopped%s>
1031
1032If a single subtest decides that further testing will not make sense,
1033the script dies with this message.
1034
cb1a09d0 1035=back
1036
9b0ceca9 1037=head1 ENVIRONMENT
1038
37ce32a7 1039=over 4
1040
b82fa0b7 1041=item C<HARNESS_IGNORE_EXITCODE>
37ce32a7 1042
 1043When defined, makes harness ignore the exit status of child processes.
1044
b82fa0b7 1045=item C<HARNESS_NOTTY>
9b0ceca9 1046
37ce32a7 1047When set to a true value, forces Test::Harness to behave as though
 1048STDOUT were not a console. You may need to set this if you don't want
 1049harness to output more frequent progress messages using carriage
 1050returns. Some consoles may not handle carriage returns properly (which
 1051results in somewhat messy output).
0d0c0d42 1052
b82fa0b7 1053=item C<HARNESS_COMPILE_TEST>
9636a016 1054
37ce32a7 1055When true, harness will attempt to compile the test using
 1056C<perlcc> before running it.
1057
b82fa0b7 1058B<NOTE> This currently only works when sitting in the perl source
1059directory!
1060
1061=item C<HARNESS_FILELEAK_IN_DIR>
37ce32a7 1062
1063When set to the name of a directory, harness will check after each
1064test whether new files appeared in that directory, and report them as
17a79f5b 1065
1066 LEAKED FILES: scr.tmp 0 my.db
1067
 1068If relative, the directory name is interpreted with respect to the current
 1069directory at the moment runtests() was called. Putting an absolute path
 1070into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.
1071
b82fa0b7 1072=item C<HARNESS_PERL_SWITCHES>
37ce32a7 1073
1074Its value will be prepended to the switches used to invoke perl on
b82fa0b7 1075each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
37ce32a7 1076run all tests with all warnings enabled.
1077
b82fa0b7 1078=item C<HARNESS_COLUMNS>
37ce32a7 1079
1080This value will be used for the width of the terminal. If it is not
1081set then it will default to C<COLUMNS>. If this is not set, it will
1082default to 80. Note that users of Bourne-sh based shells will need to
1083C<export COLUMNS> for this module to use that variable.
2b32313b 1084
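For example (a sketch), to force a wider summary from a Bourne-style
shell:

    $ HARNESS_COLUMNS=120 perl -MTest::Harness -e 'runtests @ARGV' t/*.t
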
b82fa0b7 1085=item C<HARNESS_ACTIVE>
37ce32a7 1086
1087Harness sets this before executing the individual tests. This allows
1088the tests to determine if they are being executed through the harness
1089or by any other means.
1090
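A test script can use this to adjust its behavior, e.g. (a sketch):

    # emit extra diagnostics as comments only under the harness
    print "# running under Test::Harness\n" if $ENV{HARNESS_ACTIVE};
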
1091=back
0a931e4a 1092
b82fa0b7 1093=head1 EXAMPLE
1094
1095Here's how Test::Harness tests itself
1096
1097 $ cd ~/src/devel/Test-Harness
1098 $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
1099 $verbose=0; runtests @ARGV;' t/*.t
1100 Using /home/schwern/src/devel/Test-Harness/blib
1101 t/base..............ok
1102 t/nonumbers.........ok
1103 t/ok................ok
1104 t/test-harness......ok
1105 All tests successful.
1106 Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
f19ae7a7 1107
cb1a09d0 1108=head1 SEE ALSO
1109
b82fa0b7 1110L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
1111the underlying timing routines, L<Devel::CoreStack> to generate core
1112dumps from failed tests and L<Devel::Cover> for test coverage
1113analysis.
c07a80fd 1114
1115=head1 AUTHORS
1116
1117Either Tim Bunce or Andreas Koenig, we don't know. What we know for
 1118sure is that it was inspired by Larry Wall's TEST script that came
b876d4a6 1119with perl distributions for ages. Numerous anonymous contributors
b82fa0b7 1120exist. Andreas Koenig held the torch for many years.
1121
1122Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>
1123
1124=head1 TODO
1125
 1126Provide a way of running tests quietly (i.e. no printing) for automated
1127validation of tests. This will probably take the form of a version
1128of runtests() which rather than printing its output returns raw data
1129on the state of the tests.
1130
1131Fix HARNESS_COMPILE_TEST without breaking its core usage.
1132
1133Figure a way to report test names in the failure summary.
37ce32a7 1134
b82fa0b7 1135Rework the test summary so long test names are not truncated as badly.
1136
1137Merge back into bleadperl.
1138
1139Deal with VMS's "not \nok 4\n" mistake.
1140
1141Add option for coverage analysis.
1142
1143=for _private
 1144Keep whittling away at _run_all_tests()
1145
1146=for _private
1147Clean up how the summary is printed. Get rid of those damned formats.
cb1a09d0 1148
1149=head1 BUGS
1150
1151Test::Harness uses $^X to determine the perl binary to run the tests
6c31b336 1152with. Test scripts running via the shebang (C<#!>) line may not be
1153portable because $^X is not consistent for shebang scripts across
cb1a09d0 1154platforms. This is no problem when Test::Harness is run with an
6c31b336 1155absolute path to the perl binary or when $^X can be found in the path.
cb1a09d0 1156
b82fa0b7 1157HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
1158directory.
1159
cb1a09d0 1160=cut