From: H.Merijn Brand
Date: Mon, 4 Nov 2002 11:04:45 +0000 (+0000)
Subject: Tru64, gcc -O3, datasize
X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=commitdiff_plain;h=532eb83898dc393c9f89b475b9f77385c39eff97;p=p5sagit%2Fp5-mst-13.2.git

Tru64, gcc -O3, datasize

Sun, 3 Nov 2002; Jarkko Hietaniemi

p4raw-id: //depot/perl@18084
---

diff --git a/README.tru64 b/README.tru64
index 297cab8..e852a5c 100644
--- a/README.tru64
+++ b/README.tru64
@@ -26,6 +26,14 @@ of the op/regexp and op/pat, or ext/Storable tests dumping core (the
 exact pattern of failures depending on the GCC release and optimization
 flags).
 
+gcc 3.2.1 is known to work okay with Perl 5.8.0. However, when
+optimizing toke.c, gcc likes to have a lot of memory; 256 megabytes
+seems to be enough. The default process data section limit in Tru64
+should be one gigabyte, but some sites or setups might have lowered
+that. Perl's configuration process checks for too-low process
+limits, lowers the optimization level for toke.c if necessary, and
+gives advice on how to raise the process limits.
+
 =head2 Using Large Files with Perl on Tru64
 
 In Tru64 Perl is automatically able to use large files, that is,
diff --git a/hints/dec_osf.sh b/hints/dec_osf.sh
index 8ef151e..8cf54b1 100644
--- a/hints/dec_osf.sh
+++ b/hints/dec_osf.sh
@@ -148,6 +148,43 @@ case "$optimize" in
 	;;
 esac
 
+## Optimization limits
+case "$isgcc" in
+gcc)	# gcc 3.2.1 wants a lot of memory for -O3'ing toke.c
+cat >try.c <<EOF
+#include <stdio.h>
+#include <sys/resource.h>
+
+int main ()
+{
+  struct rlimit rl;
+  int i = getrlimit (RLIMIT_DATA, &rl);
+  printf ("%d\n", (int)(rl.rlim_cur / (1024 * 1024)));
+} /* main */
+EOF
+$cc -o try $ccflags $ldflags try.c
+maxdsiz=`./try`
+rm -f try try.c core
+if [ $maxdsiz -lt 256 ]; then
+    # less than 256 MB is probably not enough to optimize toke.c with gcc -O3
+    cat <<EOM >&4
+
+Your process datasize is limited to $maxdsiz MB, which is (sadly) not
+always enough to fully optimize some source code files of Perl;
+at least 256 MB seems to be necessary as of Perl 5.8.0. I'll try to
+use a lower optimization level for those parts. You could either try
+using your shell's ulimit/limit/limits command to raise your datasize
+(assuming the system-wide hard resource limits allow you to go higher),
+or, if you can't go higher and you are a sysadmin and you *do* want
+the full optimization, you can tune the 'max_per_proc_data_size'
+kernel parameter: see man sysconfigtab and man sys_attrs_proc.
+
+EOM
+    toke_cflags='optimize=-O2'
+fi
+;;
+esac
+
 # we want dynamic fp rounding mode, and we want ieee exception semantics
 case "$isgcc" in
 gcc)	;;
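
For reference, the EOM notice in the hunk above points at the shell's
ulimit/limit command and the max_per_proc_data_size kernel attribute without
spelling out the syntax. A rough sketch of the commands it alludes to,
assuming a ksh/bash-style ulimit that works in kilobytes, a csh-style limit
builtin, and Tru64's sysconfig utility (adjust names and units to your setup):

    # sh/ksh/bash: show and raise the per-process data segment limit (kilobytes)
    ulimit -d               # current soft limit
    ulimit -d 262144        # 256 MB; only works if the hard limit allows it

    # csh/tcsh equivalent (also kilobytes)
    limit datasize
    limit datasize 262144

    # query the kernel-wide ceiling a sysadmin would tune via sysconfigtab
    sysconfig -q proc max_per_proc_data_size

Raising the soft limit in the shell that runs Configure is enough for the
-O3 build of toke.c; the kernel attribute only needs tuning if the hard
limit itself is below 256 MB.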