From: Andy Dougherty
Date: Mon, 24 Feb 1997 22:09:09 +0000 (-0500)
Subject: Post-28 INSTALL updates
X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=commitdiff_plain;h=55479bb671272502420f9e5f7617b1b0be8544af;p=p5sagit%2Fp5-mst-13.2.git

Post-28 INSTALL updates

Here are some more updates to the INSTALL file. Specifically, I
revised the malloc section a bit to give more of an overall
perspective (and, admittedly, less detail).

p5p-msgid:
private-msgid:
---

 config.sh variable
+to change its behavior in potentially useful ways. You can find out
+more about these flags by reading the F<malloc.c> source.
+In a future version of perl, these might be enabled by default.

 =over 4

@@ -563,38 +591,30 @@ following macros to change its behavior in potentially useful ways.

 If this macro is defined, running out of memory need not be a fatal
 error: a memory pool can allocated by assigning to the special
-variable C<$^M>. See L<"$^M">.
+variable C<$^M>.

 =item -DPACK_MALLOC

-Perl memory allocation is by bucket with sizes close to powers of two.
-Because of these malloc overhead may be big, especially for data of
-size exactly a power of two. If C<PACK_MALLOC> is defined, perl uses
-a slightly different algorithm for small allocations (up to 64 bytes
-long), which makes it possible to have overhead down to 1 byte for
-allocations which are powers of two (and appear quite often).
+If C<PACK_MALLOC> is defined, malloc.c uses a slightly different
+algorithm for small allocations (up to 64 bytes long). Such small
+allocations are quite common in typical Perl scripts.

-Expected memory savings (with 8-byte alignment in C<alignbytes>) is
-about 20% for typical Perl usage. Expected slowdown due to additional
-malloc overhead is in fractions of a percent (hard to measure, because
-of the effect of saved memory on speed).
+The expected memory savings (with 8-byte alignment in C<alignbytes>) is
+about 20% for typical Perl usage. The expected slowdown due to the
+additional malloc overhead is in fractions of a percent. (It is hard
+to measure because of the effect of the saved memory on speed).

 =item -DTWO_POT_OPTIMIZE

-Similarly to C<PACK_MALLOC>, this macro improves allocations of data
-with size close to a power of two; but this works for big allocations
-(starting with 16K by default). Such allocations are typical for big
-hashes and special-purpose scripts, especially image processing.
-
-On recent systems, the fact that perl requires 2M from system for 1M
-allocation will not affect speed of execution, since the tail of such
-a chunk is not going to be touched (and thus will not require real
-memory). However, it may result in a premature out-of-memory error.
-So if you will be manipulating very large blocks with sizes close to
-powers of two, it would be wise to define this macro.
+If C<TWO_POT_OPTIMIZE> is defined, malloc.c uses a slightly different
+algorithm for large allocations that are close to a power of two
+(starting with 16K). Such allocations are typical for big hashes and
+special-purpose scripts, especially image processing. If you will be
+manipulating very large blocks with sizes close to powers of two, it
+might be wise to define this macro.

-Expected saving of memory is 0-100% (100% in applications which
-require most memory in such 2**n chunks); expected slowdown is
+The expected saving of memory is 0-100% (100% in applications which
+require most memory in such 2**n chunks). The expected slowdown is
 negligible.

 =back
@@ -1224,4 +1244,4 @@ from the original README by Larry Wall.

 =head1 LAST MODIFIED

-18 February 1997
+24 February 1997
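
As an aside (not part of the patch): the macros discussed above would
typically be enabled at build time by adding them to the C compiler
flags that Configure uses. The invocation below is an illustrative
sketch only; the exact Configure options (and whether Perl's own
malloc is selected via C<usemymalloc>) vary between Perl versions.

```shell
# Hypothetical sketch: build perl with its own malloc and the optional
# malloc.c tuning macros from the INSTALL text above enabled.
# The -Dusemymalloc and -Dccflags settings here are assumptions about
# a typical Configure run, not taken from the patch itself.
sh Configure -des \
    -Dusemymalloc=y \
    -Dccflags='-DPACK_MALLOC -DTWO_POT_OPTIMIZE'
make
```

One could verify afterwards which settings were recorded by inspecting
config.sh or querying the Config module from the built perl.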