# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.17 2001/09/07 06:20:29 schwern Exp $

package Test::Harness;

require 5.004;
use Exporter;
use Benchmark;
use Config;
use strict;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches
            @ISA @EXPORT @EXPORT_OK
           );

# Backwards compatibility for exportable variable names.
*verbose  = \$Verbose;
*switches = \$Switches;

$Have_Devel_Corestack = 0;

$VERSION = 1.26;

$ENV{HARNESS_ACTIVE} = 1;

# Some experimental versions of the OS/2 build have a broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};


@ISA = ('Exporter');
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose  = 0;
$Switches = "-w";
$Columns  = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;            # Some shells have trouble with a full line of text.

=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

  use Test::Harness;

  runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.

=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

=over 4

=item B<'1..M'>

This header tells how many tests there will be.  It should be the
first line output by your test program (but it's okay if it's preceded
by comments).

In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted (but not
encouraged) for the 1..M header to appear as the B<last> line output
by your test (again, it can be followed by further comments).  But we
strongly encourage you to put it first.

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.

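The permitted trailing-plan form looks like this in practice.  A
minimal sketch of a script that only discovers its test count at
runtime (the C<run_all_tests()> helper is hypothetical):

    my @results = run_all_tests();   # each element true or false
    my $i = 1;
    for my $passed (@results) {
        print +($passed ? "ok " : "not ok "), $i++, "\n";
    }
    print "1..", scalar @results, "\n";
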

=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user.  Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.

=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  It is tolerated if the test numbers after 'ok' are
omitted.  In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again.  So the following
test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay

=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without otherwise altering its behavior.

=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set the perl command line options used for running the test
script(s).  The default value is C<-w>.

=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as the reason
for skipping.

  ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include an explanation in a C<1..0> line
emitted if the test script is skipped completely:

  1..0 # Skipped: no leverage found

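A common way to produce such a whole-script skip is to check for a
prerequisite before printing anything else.  A minimal sketch
(C<Some::Module> is a placeholder for whatever your tests require):

    unless (eval { require Some::Module }) {
        print "1..0 # Skipped: Some::Module not installed\n";
        exit 0;
    }
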
=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
that follows is the thing that has to be done before this test will
succeed.

  not ok 13 # TODO harness the power of the atom

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list.  They are
B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.  This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately.  In that case the test script prints the magic words

  Bail out!

to standard output.  Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.

=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

  ok 1
  # Life is good, the sun is shining, RAM is cheap.
  not ok 2
  # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output Test::Harness sees, it will silently ignore B<BUT WE
PLAN TO CHANGE THIS!>  If you wish to place additional output in your
test script, please use a comment.

=back


=head2 Failure

It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  t/waterloo..........dubious
          Test returned status 3 (wstat 768, 0x300)
  DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
          Failed 10/20 tests, 50.00% okay
  Failed Test  Stat Wstat Total Fail  Failed  List of Failed
  -----------------------------------------------------------------------
  t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
  Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test I<umm, I need a better explanation here>.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back


=head2 Functions

Test::Harness currently has only one function; here it is.

=over 4

=item B<runtests>

  my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok, false otherwise.

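A typical driver script is just a few lines.  This sketch assumes the
conventional F<t/*.t> layout; adjust the glob to taste:

    use Test::Harness;

    runtests(glob "t/*.t");
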
=for _private
This is just _run_all_tests() plus _show_results()

=cut

sub runtests {
    my(@tests) = @_;

    local ($\, $,);

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    die q{Assert '$ok xor keys %$failedtests' failed!}
      unless $ok xor keys %$failedtests;

    return $ok;
}

=begin _private

=item B<_all_ok>

  my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

=cut

sub _all_ok {
    my($tot) = shift;

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
}

=item B<_globdir>

  my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=cut

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}

=item B<_run_all_tests>

  my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report).  $total is a hash ref summary of all the tests run.  Its keys
and values are these:

  bonus           Number of individual todo tests unexpectedly passed
  max             Number of individual tests run
  ok              Number of individual tests passed
  sub_skipped     Number of individual tests skipped
  todo            Number of individual todo tests

  files           Number of test files run
  good            Number of test files passed
  bad             Number of test files failed
  tests           Number of test files originally given
  skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script; each value is another hash representing
how that script failed.  Its keys are these:

  name        Name of the test which failed
  estat       Script's exit value
  wstat       Script's wait status
  max         Number of individual tests
  failed      Number which failed
  percent     Percentage of tests which failed
  canon       List of tests which failed (as string).

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.

=cut

sub _run_all_tests {
    my(@tests) = @_;
    local($|) = 1;
    my(%failedtests);

    # Test-wide totals.
    my(%tot) = (
                bonus       => 0,
                max         => 0,
                ok          => 0,
                files       => 0,
                bad         => 0,
                good        => 0,
                tests       => scalar @tests,
                sub_skipped => 0,
                todo        => 0,
                skipped     => 0,
                bench       => 0,
               );

    # pass -I flags to children
    my $old5lib = $ENV{PERL5LIB};

    # VMS has a 255-byte limit on the length of %ENV entries, so
    # toss the ones that involve perl_root, the install location
    # for VMS
    my $new5lib;
    if ($^O eq 'VMS') {
        $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
        $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g;
    }
    else {
        $new5lib = join($Config{path_sep}, @INC);
    }

    local($ENV{'PERL5LIB'}) = $new5lib;

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $maxlen = 0;
    my $maxsuflen = 0;
    foreach (@tests) { # The same code in t/TEST
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $len    = length;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    my $width = $maxlen + 3 - $maxsuflen;

    foreach my $tfile (@tests) {
        my($leader, $ml) = _mk_leader($tfile, $width);
        print $leader;

        my $fh = _open_test($tfile);

        # state of the current test.
        my %test = (
                    ok          => 0,
                    'next'      => 0,
                    max         => 0,
                    failed      => [],
                    todo        => {},
                    bonus       => 0,
                    skipped     => 0,
                    skip_reason => undef,
                    ml          => $ml,
                   );

        my($seen_header, $tests_seen) = (0,0);
        while (<$fh>) {
            if( _parse_header($_, \%test, \%tot) ) {
                warn "Test header seen twice!\n" if $seen_header;

                $seen_header = 1;

                warn "1..M can only appear at the beginning or end of tests\n"
                  if $tests_seen && $test{max} < $tests_seen;
            }
            elsif( _parse_test_line($_, \%test, \%tot) ) {
                $tests_seen++;
            }
            # else, ignore it.
        }

        my($estatus, $wstatus) = _close_fh($fh);

        my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;

        if ($wstatus) {
            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                   $estatus, $wstatus);
            $failedtests{$tfile}{name} = $tfile;
        }
        elsif ($allok) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                  if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                  if $test{bonus};
                print "$test{ml}ok, ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_reason}) {
                print "skipped: $test{skip_reason}\n";
                $tot{skipped}++;
            } else {
                print "skipped test on this platform\n";
                $tot{skipped}++;
            }
            $tot{good}++;
        }
        else {
            if ($test{max}) {
                if ($test{'next'} <= $test{max}) {
                    push @{$test{failed}}, $test{'next'}..$test{max};
                }
                if (@{$test{failed}}) {
                    my ($txt, $canon) = canonfailed($test{max},$test{skipped},
                                                    @{$test{failed}});
                    print "$test{ml}$txt";
                    $failedtests{$tfile} = { canon   => $canon,
                                             max     => $test{max},
                                             failed  => scalar @{$test{failed}},
                                             name    => $tfile,
                                             percent => 100*(scalar @{$test{failed}})/$test{max},
                                             estat   => '',
                                             wstat   => '',
                                           };
                } else {
                    print "Don't know which tests failed: got $test{ok} ok, ".
                          "expected $test{max}\n";
                    $failedtests{$tfile} = { canon   => '??',
                                             max     => $test{max},
                                             failed  => '??',
                                             name    => $tfile,
                                             percent => undef,
                                             estat   => '',
                                             wstat   => '',
                                           };
                }
                $tot{bad}++;
            } elsif ($test{'next'} == 0) {
                print "FAILED before any test output arrived\n";
                $tot{bad}++;
                $failedtests{$tfile} = { canon   => '??',
                                         max     => '??',
                                         failed  => '??',
                                         name    => $tfile,
                                         percent => undef,
                                         estat   => '',
                                         wstat   => '',
                                       };
            }
        }

        $tot{sub_skipped} += $test{skipped};

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }
    $tot{bench} = timediff(new Benchmark, $t_start);

    if ($^O eq 'VMS') {
        if (defined $old5lib) {
            $ENV{PERL5LIB} = $old5lib;
        } else {
            delete $ENV{PERL5LIB};
        }
    }

    return(\%tot, \%failedtests);
}

=item B<_mk_leader>

  my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such).  $ml may be empty if Test::Harness doesn't think you're
on a TTY.

The $width is the width of the "yada/blah.." string.

=cut

sub _mk_leader {
    my($te, $width) = @_;
    chomp($te);
    $te =~ s/\.\w+$/./;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}


sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if (_all_ok($tot)) {
        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}){
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},
                             $percent_ok;

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }
        if ($tot->{bad}) {
            $bonusmsg =~ s/^,\s*//;
            print "$bonusmsg.\n" if $bonusmsg;
            die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
                "$subpct\n";
        }
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}


sub _parse_header {
    my($line, $test, $tot) = @_;

    my $is_header = 0;

    print $line if $Verbose;

    # 1..10 todo 4 7 10;
    if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
        $test->{max} = $1;
        for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }

        $tot->{max} += $test->{max};
        $tot->{files}++;

        $is_header = 1;
    }
    # 1..10
    # 1..0 # skip Why? Because I said so!
    elsif ($line =~ /^1\.\.([0-9]+)
                     (\s*\#\s*[Ss]kip\S*\s* (.+))?
                    /x
          )
    {
        $test->{max} = $1;
        $tot->{max} += $test->{max};
        $tot->{files}++;
        $test->{'next'} = 1 unless $test->{'next'};
        $test->{skip_reason} = $3 if not $test->{max} and defined $3;

        $is_header = 1;
    }
    else {
        $is_header = 0;
    }

    return $is_header;
}


sub _open_test {
    my($test) = shift;

    my $s = _set_switches($test);

    # XXX This is WAY too core specific!
    my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
                ? "./perl -I../lib ../utils/perlcc $test "
                  . "-r 2>> ./compilelog |"
                : "$^X $s $test|";
    $cmd = "MCR $cmd" if $^O eq 'VMS';

    if( open(PERL, $cmd) ) {
        return \*PERL;
    }
    else {
        print "can't run $test. $!\n";
        return;
    }
}

sub _run_one_test {
    my($test) = @_;


}

sub _parse_test_line {
    my($line, $test, $tot) = @_;

    if ($line =~ /^(not\s+)?ok\b/i) {
        $test->{'next'} ||= 1;
        my $this = $test->{'next'};
        # "not ok 23"
        if ($line =~ /^(not )?ok\s*(\d*)[^#]*(\s*#.*)?/) {
            my($not, $tnum, $extra) = ($1, $2, $3);

            $this = $tnum if $tnum;

            my($type, $reason) = $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/
              if defined $extra;

            my($istodo, $isskip);
            if( defined $type ) {
                $istodo = 1 if $type =~ /TODO/;
                $isskip = 1 if $type =~ /skip/i;
            }

            $test->{todo}{$this} = 1 if $istodo;

            $tot->{todo}++ if $test->{todo}{$this};

            if( $not ) {
                print "$test->{ml}NOK $this" if $test->{ml};
                if (!$test->{todo}{$this}) {
                    push @{$test->{failed}}, $this;
                } else {
                    $test->{ok}++;
                    $tot->{ok}++;
                }
            }
            else {
                print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
                $test->{ok}++;
                $tot->{ok}++;
                $test->{skipped}++ if $isskip;

                $reason = '[no reason given]'
                  if $isskip and not defined $reason;
                if (defined $reason and defined $test->{skip_reason}) {
                    # print "was: '$skip_reason' new '$reason'\n";
                    $test->{skip_reason} = 'various reasons'
                      if $test->{skip_reason} ne $reason;
                } elsif (defined $reason) {
                    $test->{skip_reason} = $reason;
                }

                $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
            }
        }
        # XXX ummm... dunno
        elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
            $this = $1 if $1 > 0;
            print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
            $test->{ok}++;
            $tot->{ok}++;
        }
        else {
            # an ok or not ok not matching the 3 cases above...
            # just ignore it for compatibility with TEST
            next;
        }

        if ($this > $test->{'next'}) {
            print "Test output counter mismatch [test $this]\n";
            push @{$test->{failed}}, $test->{'next'}..$this-1;
        }
        elsif ($this < $test->{'next'}) {
            # we have seen more "ok" lines than the number suggests
            print "Confused test output: test $this answered after ".
                  "test ", $test->{'next'}-1, "\n";
            $test->{'next'} = $this;
        }
        $test->{'next'} = $this + 1;

    }
    elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
        die "FAILED--Further testing stopped" .
            ($1 ? ": $1\n" : ".\n");
    }
}


sub _bonusmsg {
    my($tot) = @_;

    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
      if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
                     . " skipped";
    }

    return $bonusmsg;
}

# VMS has some subtle nastiness with closing the test files.
sub _close_fh {
    my($fh) = shift;

    close($fh); # must close to reap child resource values

    my $wstatus = $Ignore_Exitcode ? 0 : $?;    # Can trust $? ?
    my $estatus;
    $estatus = ($^O eq 'VMS'
                  ? eval 'use vmsish "status"; $estatus = $?'
                  : $wstatus >> 8);

    return($estatus, $wstatus);
}


# Set up the command-line switches to run perl as.
sub _set_switches {
    my($test) = shift;

    local *TEST;
    open(TEST, $test) or print "can't open $test. $!\n";
    my $first = <TEST>;
    my $s = $Switches;
    $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
      if exists $ENV{'HARNESS_PERL_SWITCHES'};
    $s .= join " ", qq[ "-$1"], map {qq["-I$_"]} @INC
      if $first =~ /^#!.*\bperl.*-\w*([tT])/;

    close(TEST) or print "can't close $test. $!\n";

    return $s;
}


# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus,$wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    $tot->{bad}++;

    if ($test->{max}) {
        if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
            print "\tafter all the subtests completed successfully\n";
            $percent = 0;
            $failed = 0;        # But we do not set $canon!
        }
        else {
            push @{$test->{failed}}, $test->{'next'}..$test->{max};
            $failed = @{$test->{failed}};
            (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
            $percent = 100*(scalar @{$test->{failed}})/$test->{max};
            print "DIED. ",$txt;
        }
    }

    return { canon   => $canon,  max => $test->{max} || '??',
             failed  => $failed,
             percent => $percent,
             estat   => $estatus, wstat => $wstatus,
           };
}


sub _garbled_output {
    my($gibberish) = shift;
    warn "Confusing test output: '$gibberish'\n";
}


sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str   = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)
                  . $middle_str
                  . $list_str . "\n"
                  . "-" x $Columns
                  . "\n.\n";

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . "  @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "\n}\n"
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'
              . "\n.\n";

    eval $fmt_top;
    die $@ if $@;
    eval $fmt;
    die $@ if $@;

    return($fmt_top, $fmt);
}

{
    my $tried_devel_corestack;

    sub corestatus {
        my($st) = @_;

        eval {require 'wait.ph'};
        my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;

        eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
          unless $tried_devel_corestack++;

        $ret;
    }
}

sub canonfailed ($@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;
    my @result = ();
    my @canon  = ();
    my $min;
    my $last = $min = shift @failed;
    my $canon;
    if (@failed) {
        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                if ($min == $last) {
                    push @canon, $last;
                } else {
                    push @canon, "$min-$last";
                }
                $min = $_;
            }
            $last = $_;
        }
        local $" = ", ";
        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;
    } else {
        push @result, "FAILED test $last\n";
        $canon = $last;
    }

    push @result, "\tFailed $failed/$max tests, ";
    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    my $ender = 's' x ($skipped > 1);
    my $good = $max - $failed - $skipped;
    my $goodper = sprintf("%.2f",100*($good/$max));
    push @result, " (-$skipped skipped test$ender: $good okay, ".
                  "$goodper%)"
      if $skipped;
    push @result, "\n";
    my $txt = join "", @result;
    ($txt, $canon);
}
1006 | |
b82fa0b7 |
1007 | =end _private |
9c5c68c8 |
1008 | |
b82fa0b7 |
1009 | =back |
d667a7e6 |
1010 | |
b82fa0b7 |
1011 | =cut |
9c5c68c8 |
1012 | |
9c5c68c8 |
1013 | |
b82fa0b7 |
1014 | 1; |
1015 | __END__ |
9c5c68c8 |
1016 | |
1017 | |
cb1a09d0 |
1018 | =head1 EXPORT |
1019 | |
c0ee6f5c |
1020 | C<&runtests> is exported by Test::Harness by default. |
cb1a09d0 |
1021 | |
9c5c68c8 |
1022 | C<$verbose> and C<$switches> are exported upon request. |
1023 | |
1024 | |
cb1a09d0 |
1025 | =head1 DIAGNOSTICS |
1026 | |
1027 | =over 4 |
1028 | |
1029 | =item C<All tests successful.\nFiles=%d, Tests=%d, %s> |
1030 | |
1031 | If all tests are successful, some statistics about the performance are |
1032 | printed. |
1033 | |
6c31b336 |
1034 | =item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.> |
1035 | |
1036 | For any single script that has failing subtests, statistics like the |
1037 | above are printed. |
1038 | |
1039 | =item C<Test returned status %d (wstat %d)> |
1040 | |
9c5c68c8 |
1041 | For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8> |
1042 | and C<$?> are printed in a message similar to the above. |
6c31b336 |
1043 | |
1044 | =item C<Failed 1 test, %.2f%% okay. %s> |
cb1a09d0 |
1045 | |
6c31b336 |
1046 | =item C<Failed %d/%d tests, %.2f%% okay. %s> |
cb1a09d0 |
1047 | |
1048 | If not all tests were successful, the script dies with one of the |
1049 | above messages. |
1050 | |
d667a7e6 |
1051 | =item C<FAILED--Further testing stopped%s> |
1052 | |
1053 | If a single subtest decides that further testing will not make sense, |
1054 | the script dies with this message. |
1055 | |
cb1a09d0 |
1056 | =back |
1057 | |
9b0ceca9 |
1058 | =head1 ENVIRONMENT |
1059 | |
37ce32a7 |
1060 | =over 4 |
1061 | |
b82fa0b7 |
1062 | =item C<HARNESS_IGNORE_EXITCODE> |
37ce32a7 |
1063 | |
1064 | Makes harness ignore the exit status of child processes when defined. |
1065 | |
b82fa0b7 |
1066 | =item C<HARNESS_NOTTY> |
9b0ceca9 |
1067 | |
37ce32a7 |
1068 | When set to a true value, forces harness to behave as though STDOUT |
1069 | were not a console. Set this if you don't want harness to output |
1070 | frequent progress messages using carriage returns; some consoles may |
1071 | not handle carriage returns properly, which results in somewhat messy |
1072 | output. |
0d0c0d42 |
1073 | |
b82fa0b7 |
1074 | =item C<HARNESS_COMPILE_TEST> |
9636a016 |
1075 | |
37ce32a7 |
1076 | When true, harness will attempt to compile the test with C<perlcc> |
1077 | before running it. |
1078 | |
b82fa0b7 |
1079 | B<NOTE> This currently only works when sitting in the perl source |
1080 | directory! |
1081 | |
1082 | =item C<HARNESS_FILELEAK_IN_DIR> |
37ce32a7 |
1083 | |
1084 | When set to the name of a directory, harness will check after each |
1085 | test whether new files appeared in that directory, and report them as |
17a79f5b |
1086 | |
1087 | LEAKED FILES: scr.tmp 0 my.db |
1088 | |
1089 | If relative, the directory name is interpreted with respect to the |
1090 | current directory at the moment runtests() was called. Putting an |
1091 | absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more |
1092 | predictable results. |
1092 | |
b82fa0b7 |
1093 | =item C<HARNESS_PERL_SWITCHES> |
37ce32a7 |
1094 | |
1095 | Its value will be prepended to the switches used to invoke perl on |
b82fa0b7 |
1096 | each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will |
37ce32a7 |
1097 | run all tests with all warnings enabled. |
1098 | |
b82fa0b7 |
1099 | =item C<HARNESS_COLUMNS> |
37ce32a7 |
1100 | |
1101 | This value will be used for the width of the terminal. If it is not |
1102 | set then it will default to C<COLUMNS>. If this is not set, it will |
1103 | default to 80. Note that users of Bourne-sh based shells will need to |
1104 | C<export COLUMNS> for this module to use that variable. |
2b32313b |
1105 | |
b82fa0b7 |
1106 | =item C<HARNESS_ACTIVE> |
37ce32a7 |
1107 | |
1108 | Harness sets this before executing the individual tests. This allows |
1109 | the tests to determine if they are being executed through the harness |
1110 | or by any other means. |
1111 | |
1112 | =back |
0a931e4a |
1113 | |
b82fa0b7 |
1114 | =head1 EXAMPLE |
1115 | |
1116 | Here's how Test::Harness tests itself: |
1117 | |
1118 | $ cd ~/src/devel/Test-Harness |
1119 | $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose); |
1120 | $verbose=0; runtests @ARGV;' t/*.t |
1121 | Using /home/schwern/src/devel/Test-Harness/blib |
1122 | t/base..............ok |
1123 | t/nonumbers.........ok |
1124 | t/ok................ok |
1125 | t/test-harness......ok |
1126 | All tests successful. |
1127 | Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU) |
f19ae7a7 |
1128 | |
cb1a09d0 |
1129 | =head1 SEE ALSO |
1130 | |
b82fa0b7 |
1131 | L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for |
1132 | the underlying timing routines, L<Devel::CoreStack> to generate core |
1133 | dumps from failed tests and L<Devel::Cover> for test coverage |
1134 | analysis. |
c07a80fd |
1135 | |
1136 | =head1 AUTHORS |
1137 | |
1138 | Either Tim Bunce or Andreas Koenig, we don't know. What we know for |
1139 | sure is that it was inspired by Larry Wall's TEST script that came |
b876d4a6 |
1140 | with perl distributions for ages. Numerous anonymous contributors |
b82fa0b7 |
1141 | exist. Andreas Koenig held the torch for many years. |
1142 | |
1143 | Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt> |
1144 | |
1145 | =head1 TODO |
1146 | |
1147 | Provide a way of running tests quietly (i.e. no printing) for automated |
1148 | validation of tests. This will probably take the form of a version |
1149 | of runtests() which rather than printing its output returns raw data |
1150 | on the state of the tests. |
1151 | |
1152 | Fix HARNESS_COMPILE_TEST without breaking its core usage. |
1153 | |
1154 | Figure a way to report test names in the failure summary. |
37ce32a7 |
1155 | |
b82fa0b7 |
1156 | Rework the test summary so long test names are not truncated as badly. |
1157 | |
1158 | Merge back into bleadperl. |
1159 | |
1160 | Deal with VMS's "not \nok 4\n" mistake. |
1161 | |
1162 | Add option for coverage analysis. |
1163 | |
1164 | =for _private |
1165 | Keep whittling away at _run_all_tests() |
1166 | |
1167 | =for _private |
1168 | Clean up how the summary is printed. Get rid of those damned formats. |
cb1a09d0 |
1169 | |
1170 | =head1 BUGS |
1171 | |
1172 | Test::Harness uses $^X to determine the perl binary to run the tests |
6c31b336 |
1173 | with. Test scripts running via the shebang (C<#!>) line may not be |
1174 | portable because $^X is not consistent for shebang scripts across |
cb1a09d0 |
1175 | platforms. This is no problem when Test::Harness is run with an |
6c31b336 |
1176 | absolute path to the perl binary or when $^X can be found in the path. |
cb1a09d0 |
1177 | |
b82fa0b7 |
1178 | HARNESS_COMPILE_TEST currently assumes it's run from the Perl source |
1179 | directory. |
1180 | |
cb1a09d0 |
1181 | =cut |