conditions when taint checks are turned on. (Taint checks are used
in setuid or setgid scripts, or when explicitly turned on with the
C<-T> invocation option.) Although it's unlikely, this may cause a
-previously-working script to now fail -- which should be construed
-as a blessing, since that indicates a potentially-serious security
+previously-working script to now fail, which should be construed
+as a blessing since that indicates a potentially-serious security
hole was just plugged.
The new restrictions when tainting include:
File handles are now stored internally as type IO::Handle. The
FileHandle module is still supported for backwards compatibility, but
-it is now merely a front end to the IO::* modules -- specifically,
+it is now merely a front end to the IO::* modules, specifically
IO::Handle, IO::Seekable, and IO::File. We suggest, but do not
require, that you use the IO::* modules in new code.
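For illustration, a minimal sketch of the suggested IO::* style (the
file name here is only an example):

    use IO::File;

    my $fh = IO::File->new("/etc/motd", "r")
        or die "cannot open /etc/motd: $!";
    while (defined(my $line = $fh->getline)) {
        print $line;
    }
    $fh->close;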
(W) The pattern match (//), substitution (s///), and transliteration (tr///)
operators work on scalar values. If you apply one of them to an array
-or a hash, it will convert the array or hash to a scalar value -- the
-length of an array, or the population info of a hash -- and then work on
+or a hash, it will convert the array or hash to a scalar value (the
+length of an array or the population info of a hash) and then work on
that scalar value. This is probably not what you meant to do. See
L<perlfunc/grep> and L<perlfunc/map> for alternatives.
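For illustration, a small sketch of the difference (the data is made up):

    my @lines = ("foo", "bar", "foobar");

    my $bad  = @lines =~ /foo/;          # matches /foo/ against "3", the
                                         # scalar value of @lines, so it fails
    my @hits = grep { /foo/ } @lines;    # selects "foo" and "foobar"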
(F) You said something like C<< local $ar->{'key'} >>, where $ar is
a reference to a pseudo-hash. That hasn't been implemented yet, but
you can get a similar effect by localizing the corresponding array
-element directly -- C<< local $ar->[$ar->[0]{'key'}] >>.
+element directly: C<< local $ar->[$ar->[0]{'key'}] >>.
=item Can't use %%! because Errno.pm is not available
Many new tests have been added. The most notable is probably the
lib/1_compile: running it takes quite a
-long time -- it test compiles all the Perl modules in the distribution.
+long time because it test-compiles all the Perl modules in the distribution.
Please be patient.
=head1 Known Problems
Now, what about data?
-=head2 A horse is a horse, of course of course -- or is it?
+=head2 A horse is a horse, of course of course, or is it?
Let's start with the code for the C<Animal> class
and the C<Horse> class:
created and destroyed once, and the sub can be called
arbitrarily many times in between.
-It is usual to pass parameters using global variables -- typically
-$_ for one parameter, or $a and $b for two parameters -- rather
+It is usual to pass parameters using global variables (typically
+$_ for one parameter, or $a and $b for two parameters) rather
than via @_. (It is possible to use the @_ mechanism if you know
what you're doing, though there is as yet no supported API for
it. It's also inherently slower.)
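For illustration, a short sketch of these conventions as they appear
with the built-in sort, grep, and map:

    my @sorted  = sort { $a <=> $b } 10, 2, 33;              # $a and $b
    my @short   = grep { length($_) < 3 } qw(to be or not);  # $_
    my @squares = map  { $_ * $_ } 1 .. 5;                   # $_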
It is often more readable to use the C<< => >> operator between key/value
pairs. The C<< => >> operator is mostly just a more visually distinctive
synonym for a comma, but it also arranges for its left-hand operand to be
-interpreted as a string -- if it's a bareword that would be a legal simple
-identifier (C<< => >> doesn't quote compound identifiers, that contain
-double colons). This makes it nice for initializing hashes:
+interpreted as a string if it's a bareword that would be a legal simple
+identifier. C<< => >> doesn't quote compound identifiers, which contain
+double colons. This makes it nice for initializing hashes:
%map = (
red => 0x00f,
the key or value to be filtered. Filtering is achieved by modifying
the contents of C<$_>. The return code from the filter is ignored.
-=head2 An Example -- the NULL termination problem.
+=head2 An Example: the NULL termination problem.
DBM Filters are useful for a class of problems where you I<always>
want to make the same transformation to all keys, all values or both.
and both "store" filters add a terminating NULL.
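For example, a sketch along those lines (the file name is illustrative,
and DB_File stands in for whichever DBM module you use):

    use DB_File;
    use Fcntl;

    my %hash;
    my $db = tie %hash, 'DB_File', 'mydata.db', O_CREAT|O_RDWR, 0666, $DB_HASH
        or die "cannot tie: $!";

    $db->filter_store_key  ( sub { $_ .= "\0" } );  # add the NUL the C side expects
    $db->filter_store_value( sub { $_ .= "\0" } );
    $db->filter_fetch_key  ( sub { s/\0$//    } );  # strip it again for Perl code
    $db->filter_fetch_value( sub { s/\0$//    } );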
-=head2 Another Example -- Key is a C int.
+=head2 Another Example: Key is a C int.
Here is another real-life example. By default, whenever Perl writes to
a DBM database it always writes the key and value as strings. So when
The code above uses DB_File, but again it will work with any of the
DBM modules.
-This time only two filters have been used -- we only need to manipulate
+This time only two filters have been used; we only need to manipulate
the contents of the key, so it wasn't necessary to install any value
filters.
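A sketch of such a pair of key filters, assuming a tied handle C<$db>
as in the previous sketch:

    $db->filter_store_key( sub { $_ = pack   "i", $_ } );  # Perl string -> native int
    $db->filter_fetch_key( sub { $_ = unpack "i", $_ } );  # native int  -> Perl string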
(W misc) The pattern match (C<//>), substitution (C<s///>), and
transliteration (C<tr///>) operators work on scalar values. If you apply
one of them to an array or a hash, it will convert the array or hash to
-a scalar value -- the length of an array, or the population info of a
-hash -- and then work on that scalar value. This is probably not what
+a scalar value (the length of an array, or the population info of a
+hash) and then work on that scalar value. This is probably not what
you meant to do. See L<perlfunc/grep> and L<perlfunc/map> for
alternatives.
=item B<-X>
-Use an index if it is present -- the B<-X> option looks for an entry
+Use an index if it is present. The B<-X> option looks for an entry
whose basename matches the name given on the command line in the file
C<$Config{archlib}/pod.idx>. The F<pod.idx> file should contain fully
qualified filenames, one per line.
One useful value for C<PERLDOC_PAGER> is C<less -+C -E>.
Having PERLDOCDEBUG set to a positive integer will make perldoc emit
-even more descriptive output than the C<-v> switch does -- the higher the
+even more descriptive output than the C<-v> switch does; the higher the
number, the more it emits.
If you are not sure then doing an C<SvREFCNT_inc> and C<sv_2mortal>, or
making a C<sv_mortalcopy> is safer.
-The mortal routines are not just for SVs -- AVs and HVs can be
+The mortal routines are not just for SVs; AVs and HVs can be
made mortal by passing their address (type-casted to C<SV*>) to the
C<sv_2mortal> or C<sv_mortalcopy> routines.
=head2 Scratchpads
The question remains on when the SVs which are I<target>s for opcodes
-are created. The answer is that they are created when the current unit --
-a subroutine or a file (for opcodes for statements outside of
-subroutines) -- is compiled. During this time a special anonymous Perl
-array is created, which is called a scratchpad for the current
-unit.
+are created. The answer is that they are created when the current unit
+(a subroutine or a file, for opcodes for statements outside of
+subroutines) is compiled. During this time a special anonymous Perl
+array is created, which is called a scratchpad for the current unit.
A scratchpad keeps SVs which are lexicals for the current unit and are
targets for opcodes. One can deduce that an SV lives on a scratchpad
=back
The C<-Wtraditional> is another example of the annoying tendency of
-gcc to bundle a lot of warnings under one switch -- it would be
-impossible to deploy in practice because it would complain a lot -- but
+gcc to bundle a lot of warnings under one switch (it would be
+impossible to deploy in practice because it would complain a lot), but
it does contain some warnings that would be beneficial to have available
on their own, such as the warning about string constants inside macros
containing the macro arguments: this behaved differently pre-ANSI
The following are common causes of compilation and/or execution
failures, not common to Perl as such. The C FAQ is good bedtime
reading. Please test your changes with as many C compilers and
-platforms as possible -- we will, anyway, and it's nice to save
+platforms as possible; we will, anyway, and it's nice to save
oneself from public embarrassment.
If using gcc, you can add the C<-std=c89> option which will hopefully
minutes become hours. For example, as of Perl 5.8.1, the
ext/Encode/t/Unicode.t takes extraordinarily long to complete under
e.g. Purify, Third Degree, and valgrind. Under valgrind it takes more
-than six hours, even on a snappy computer-- the said test must be
+than six hours, even on a snappy computer. That test must be
doing something that is quite unfriendly to memory debuggers. If you
don't feel like waiting, you can simply kill the perl
process.
B<NOTE 2>: To minimize the number of memory leak false alarms (see
-L</PERL_DESTRUCT_LEVEL> for more information), you have to have
-environment variable PERL_DESTRUCT_LEVEL set to 2. The F<TEST>
-and harness scripts do that automatically. But if you are running
-some of the tests manually-- for csh-like shells:
+L</PERL_DESTRUCT_LEVEL> for more information), you have to set the
+environment variable PERL_DESTRUCT_LEVEL to 2. The F<TEST> and harness
+scripts do that automatically, but if you are running some of the
+tests manually, set it yourself.
+
+For csh-like shells:
setenv PERL_DESTRUCT_LEVEL 2
-and for Bourne-type shells:
+For Bourne-type shells:
PERL_DESTRUCT_LEVEL=2
export PERL_DESTRUCT_LEVEL
-or in Unixy environments you can also use the C<env> command:
+In Unixy environments you can also use the C<env> command:
env PERL_DESTRUCT_LEVEL=2 valgrind ./perl -Ilib ...
if (@animals < 5) { ... }
The elements we're getting from the array start with a C<$> because
-we're getting just a single value out of the array -- you ask for a scalar,
+we're getting just a single value out of the array; you ask for a scalar,
you get a scalar.
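For example, a small sketch (the element values are only illustrative):

    my @animals = ("camel", "llama", "owl");
    print $animals[0];              # "camel": one element, so a scalar, so $
    print $animals[$#animals];      # "owl": the last element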
To get multiple values from an array:
! not
(C<and>, C<or> and C<not> aren't just in the above table as descriptions
-of the operators -- they're also supported as operators in their own
+of the operators. They're also supported as operators in their own
right. They're more readable than the C-style operators, but have
different precedence to C<&&> and friends. Check L<perlop> for more
detail.)
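A small sketch of that precedence difference (the variable names are
made up):

    my @words = ();

    # '||' binds more tightly than '=', so $count gets the fallback value:
    my $count = scalar(@words) || 1;                    # $count is 1

    # 'or' binds more loosely than '=', so the assignment happens first:
    my $total = scalar(@words) or warn "empty list\n";  # $total is 0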
convenience", and to do anything you wanted in your signal handler,
and be prepared to clean up core dumps now and again.
-In Perl 5.7.3 and later to avoid these problems signals are
-"deferred"-- that is when the signal is delivered to the process by
+Perl 5.7.3 and later avoid these problems by "deferring" signals.
+That is, when the signal is delivered to the process by
the system (to the C code that implements Perl) a flag is set, and the
handler returns immediately. Then at strategic "safe" points in the
Perl interpreter (e.g. when it is about to execute a new opcode) the
}
When this code is run with the B<-w> flag, a warning will be produced
-for the C<$a> line -- C<"Reversed += operator">.
+for the C<$a> line: C<"Reversed += operator">.
The problem is that Perl has both compile-time and run-time warnings. To
disable compile-time warnings you need to rewrite the code like this:
=item 2.
-The B<-w> flag just sets the global C<$^W> variable as in 5.005 -- this
+The B<-w> flag just sets the global C<$^W> variable as in 5.005. This
means that any legacy code that currently relies on manipulating C<$^W>
to control warning behavior will still work as is.
}
Would print '1', because C<$foo> holds a reference to the I<original>
-C<$bar> -- the one that was stuffed away by C<local()> and which will be
+C<$bar>, the one that was stuffed away by C<local()> and which will be
restored when the block ends. Because variables are accessed through the
typeglob, you can use C<*foo = *bar> to create an alias which can be
localized. (But be aware that this means you can't have a separate
A C<BEGIN> code block is executed as soon as possible, that is, the moment
it is completely defined, even before the rest of the containing file (or
string) is parsed. You may have multiple C<BEGIN> blocks within a file (or
-eval'ed string) -- they will execute in order of definition. Because a C<BEGIN>
+eval'ed string); they will execute in order of definition. Because a C<BEGIN>
code block executes immediately, it can pull in definitions of subroutines
and such from other files in time to be visible to the rest of the compile
and run time. Once a C<BEGIN> has run, it is immediately undefined and any
might have come with your module!
Also note that these instructions are tailored for installing the
-module into your system's repository of Perl modules -- but you can
+module into your system's repository of Perl modules, but you can
install modules into any directory you wish. For instance, where I
say C<perl Makefile.PL>, you can substitute C<perl Makefile.PL
PREFIX=/my/perl_directory> to install the modules into
in your Perl 5 library directory. Often, you'll need to be root.
That's all you need to do on Unix systems with dynamic linking.
-Most Unix systems have dynamic linking -- if yours doesn't, or if for
+Most Unix systems have dynamic linking. If yours doesn't, or if for
another reason you have a statically-linked perl, B<and> the
module requires compilation, you'll need to build a new Perl binary
that includes the module. Again, you'll probably need to be root.
Does the module require compilation (i.e. does it have files that end
in .xs, .c, .h, .y, .cc, .cxx, or .C)? If it does, life is now
officially tough for you, because you have to compile the module
-yourself -- no easy feat on Windows. You'll need a compiler such as
+yourself (no easy feat on Windows). You'll need a compiler such as
Visual C++. Alternatively, you can download a pre-built PPM package
from ActiveState.
http://aspn.activestate.com/ASPN/Downloads/ActivePerl/PPM/
For Module::Build you would use the C<make test> equivalent C<perl Build test>.
The importance of these tests is proportional to the alleged stability of a
-module -- a module which purports to be stable or which hopes to achieve wide
+module. A module which purports to be stable or which hopes to achieve wide
use should adhere to as strict a testing regime as possible.
Useful modules to help you write tests (with minimum impact on your
call compiled as a method, or vice versa. This can introduce subtle bugs
that are hard to detect.
-For example, a call to a method C<new> in indirect notation -- as C++
-programmers are wont to make -- can be miscompiled into a subroutine
+For example, a call to a method C<new> in indirect notation (as C++
+programmers are wont to make) can be miscompiled into a subroutine
call if there's already a C<new> function in scope. You'd end up
calling the current package's C<new> as a subroutine, rather than the
desired class's method. The compiler tries to cheat by remembering
Notice that the final match matched C<q> instead of C<p>, which a match
without the C<\G> anchor would have done. Also note that the final match
-did not update C<pos> -- C<pos> is only updated on a C</g> match. If the
+did not update C<pos>. C<pos> is only updated on a C</g> match. If the
final match did indeed match C<p>, it's a good bet that you're running an
older (pre-5.6.0) Perl.
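A sketch of that behaviour:

    my $str = "ppqq";
    $str =~ /p/g;               # a /g match advances pos()
    print pos($str), "\n";      # 1
    $str =~ /q/;                # a match without /g ...
    print pos($str), "\n";      # ... leaves pos() at 1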
warning B<Can't find string terminator "END" anywhere before EOF...>.
Additionally, the quoting rules for the end of string identifier are not
-related to Perl's quoting rules -- C<q()>, C<qq()>, and the like are not
+related to Perl's quoting rules. C<q()>, C<qq()>, and the like are not
supported in place of C<''> and C<"">, and the only interpolation is for
backslashing the quoting character:
X<regexp, parse>
Previous steps were performed during the compilation of Perl code,
-but this one happens at run time--although it may be optimized to
+but this one happens at run time, although it may be optimized to
be calculated at compile time if appropriate. After preprocessing
described above, and possibly after evaluation if concatenation,
joining, casing translation, or metaquoting are involved, the
except that it isn't so cumbersome to say, and will actually work.
It really does shift the @ARGV array and put the current filename
into the $ARGV variable. It also uses filehandle I<ARGV>
-internally--<> is just a synonym for <ARGV>, which
+internally. <> is just a synonym for <ARGV>, which
is magical. (The pseudo code above doesn't work because it treats
<ARGV> as non-magical.)
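For illustration:

    while (<>) {
        print "$ARGV, line $.: $_";   # $ARGV holds the file currently being read
    }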
And perhaps most importantly, keep the items consistent: either use
"=item *" for all of them, to produce bullets; or use "=item 1.",
"=item 2.", etc., to produce numbered lists; or use "=item foo",
-"=item bar", etc. -- namely, things that look nothing like bullets or
+"=item bar", etc.--namely, things that look nothing like bullets or
numbers.
If you start with bullets or numbers, stick with them, as
Older translators might add wording around an LE<lt>E<gt> link, so that
C<LE<lt>Foo::BarE<gt>> may become "the Foo::Bar manpage", for example.
So you shouldn't write things like C<the LE<lt>fooE<gt>
-documentation>, if you want the translated document to read sensibly
--- instead write C<the LE<lt>Foo::Bar|Foo::BarE<gt> documentation> or
+documentation>, if you want the translated document to read sensibly.
+Instead, write C<the LE<lt>Foo::Bar|Foo::BarE<gt> documentation> or
C<LE<lt>the Foo::Bar documentation|Foo::BarE<gt>>, to control how the
link comes out.
=head1 Pod Definitions
-Pod is embedded in files, typically Perl source files -- although you
+Pod is embedded in files, typically Perl source files, although you
can write a file that's nothing but Pod.
A B<line> in a file consists of zero or more non-newline characters,
than space or tab (and terminated by a newline or end-of-file).
(I<Note:> Many older Pod parsers did not accept a line consisting of
-spaces/tabs and then a newline as a blank line -- the only lines they
+spaces/tabs and then a newline as a blank line. The only lines they
considered blank were lines consisting of I<no characters at all>,
terminated by a newline.)
Pod content is contained in B<Pod blocks>. A Pod block starts with a
line that matches C<m/\A=[a-zA-Z]/>, and continues up to the next line
-that matches C<m/\A=cut/> -- or up to the end of the file, if there is
+that matches C<m/\A=cut/>, or up to the end of the file if there is
no C<m/\A=cut/> line.
=for comment
In other words, the Pod processing handler for "head1" will apply the
same processing to "Did You Remember to CE<lt>use strict;>?" that it
-would to an ordinary paragraph -- i.e., formatting codes (like
+would to an ordinary paragraph (i.e., formatting codes like
"CE<lt>...>") are parsed and presumably formatted appropriately, and
whitespace in the form of literal spaces and/or tabs is not
significant.
B<< $foo->bar(); >>
With this syntax, the whitespace character(s) after the "CE<lt><<"
-and before the ">>" (or whatever letter) are I<not> renderable -- they
+and before the ">>" (or whatever letter) are I<not> renderable. They
do not signify whitespace and are merely part of the formatting codes
themselves. That is, these are all synonymous:
=item Second:
-The possibly inferred link-text -- i.e., if there was no real link
+The possibly inferred link-text; i.e., if there was no real link
text, then this is the text that we'll infer in its place. (E.g., for
"LE<lt>Getopt::Std>", the inferred link text is "Getopt::Std".)
=item Third:
The name or URL, or undef if none. (E.g., in "LE<lt>Perl
-Functions|perlfunc>", the name -- also sometimes called the page --
+Functions|perlfunc>", the name (also sometimes called the page)
is "perlfunc". In "LE<lt>/CAVEATS>", the name is undef.)
=item Fourth:
L<B<ummE<234>stuff>|...>
For C<LE<lt>...E<gt>> codes without a "name|" part, only
-C<EE<lt>...E<gt>> and C<ZE<lt>E<gt>> codes may occur -- no
-other formatting codes. That is, authors should not use
-"C<LE<lt>BE<lt>Foo::BarE<gt>E<gt>>".
+C<EE<lt>...E<gt>> and C<ZE<lt>E<gt>> codes may occur. That is,
+authors should not use "C<LE<lt>BE<lt>Foo::BarE<gt>E<gt>>".
Note, however, that formatting codes and ZE<lt>>'s can occur in any
and all parts of an LE<lt>...> (i.e., in I<name>, I<section>, I<text>,
happens that "outer" is the format name of a higher-up region.) This is
an error. Processors must by default report this as an error, and may halt
processing the document containing that error. A corollary of this is that
-regions cannot "overlap" -- i.e., the latter block above does not represent
+regions cannot "overlap". That is, the latter block above does not represent
a region called "outer" which contains X and Y, overlapping a region called
"inner" which contains Y and Z. But because it is invalid (as all
apparently overlapping regions would be), it doesn't represent that, or
directories.
Don't count on specific values of C<$!>, neither numeric nor
-especially the strings values-- users may switch their locales causing
+especially the string values. Users may switch their locales, causing
error messages to be translated into their languages. If you can
trust a POSIXish environment, you can portably use the symbols defined
by the Errno module, like ENOENT. And don't rely on the values of C<$!>
Don't assume that any particular port (service) will respond.
-Don't assume that Sys::Hostname (or any other API or command)
-returns either a fully qualified hostname or a non-qualified hostname:
-it all depends on how the system had been configured. Also remember
-things like DHCP and NAT-- the hostname you get back might not be very
-useful.
+Don't assume that Sys::Hostname (or any other API or command) returns
+either a fully qualified hostname or a non-qualified hostname: it all
+depends on how the system has been configured. Also remember that with
+things such as DHCP and NAT in use, the hostname you get back might not
+be very useful.
-All the above "don't":s may look daunting, and they are -- but the key
+All the above "don't"s may look daunting, and they are, but the key
is to degrade gracefully if one cannot reach the particular network
service one wants. Croaking or hanging do not look very professional.
=head2 Security
Most multi-user platforms provide basic levels of security, usually
-implemented at the filesystem level. Some, however, do
-not-- unfortunately. Thus the notion of user id, or "home" directory,
+implemented at the filesystem level. Some, however, unfortunately do
+not. Thus the notion of user id, or "home" directory,
or even the state of being logged-in, may be unrecognizable on many
platforms. If you write programs that are security-conscious, it
is usually best to know what type of system you will be running
(From security viewpoint testing for permissions before attempting to
do something is silly anyway: if one tries this, there is potential
-for race conditions-- someone or something might change the
+for race conditions. Someone or something might change the
permissions between the permissions check and the actual operation.
Just try the operation.)
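For example, a sketch of the direct approach (the file name is made up):

    my $file = "/var/tmp/app.log";

    # Racy: permissions can change between a -w test and the open.
    # Better: just attempt the operation and check the result.
    open my $fh, '>>', $file or warn "cannot append to $file: $!";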
=head2 Assertions
-Assertions are conditions that have to be true -- they don't actually
+Assertions are conditions that have to be true; they don't actually
match parts of the substring. There are six assertions that are written as
backslash sequences.
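For illustration, one of them, the word boundary C<\b>:

    print "yes\n" if "the cat sat" =~ /\bcat\b/;   # "cat" as a whole word
    print "no\n"  if "concatenate" =~ /\bcat\b/;   # never printed: no word
                                                   # boundary around "cat" here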
bytes to represent characters from the ASCII character set, and sequences
of two or more bytes for all other characters. (See L<perlunitut>
for more information about the relationship between UTF-8 and perl's
-encoding, utf8 -- the difference isn't important for this discussion.)
+encoding, utf8. The difference isn't important for this discussion.)
No matter how you look at it, Unicode support is going to be a pain in a
regex engine. Tricks that might be fine when you have 256 possible
=head1 GRAFTS
The perl history contains one mistake which was not caught in the
-conversion -- a merge was recorded in the history between blead and
+conversion: a merge was recorded in the history between blead and
maint-5.10 where no merge actually occurred. Due to the nature of git,
this is now impossible to fix in the public repository. You can remove
this mis-merge locally by adding the following line to your
=head2 QUANTIFIERS
-Quantifiers are greedy by default -- match the B<longest> leftmost.
+Quantifiers are greedy by default and match the B<longest> leftmost.
Maximal Minimal Possessive Allowed range
------- ------- ---------- -------------
matched by a pattern with a possessive quantifier will not be backtracked
into, even if that causes the whole match to fail.
-There is no quantifier {,n} -- that gets understood as a literal string.
+There is no quantifier C<{,n}>. That's interpreted as a literal string.
=head2 EXTENDED CONSTRUCTS
Closely associated with the matching variables C<$1>, C<$2>, ... are
the I<backreferences> C<\1>, C<\2>,... Backreferences are simply
matching variables that can be used I<inside> a regexp. This is a
-really nice feature -- what matches later in a regexp is made to depend on
+really nice feature; what matches later in a regexp is made to depend on
what matched earlier in the regexp. Suppose we wanted to look
for doubled words in a text, like 'the the'. The following regexp finds
all 3-letter doubles with a space in between:
print "bad line: '$line'\n";
}
-But this doesn't match -- at least not the way one might expect. Only
+But this doesn't match, at least not the way one might expect. Only
after inserting the interpolated C<$a99a> and looking at the resulting
full text of the regexp is it obvious that the backreferences have
-backfired -- the subexpression C<(\w+)> has snatched number 1 and
+backfired. The subexpression C<(\w+)> has snatched number 1 and
demoted the groups in C<$a99a> by one rank. This can be avoided by
using relative backreferences:
=back
-As we have seen above, Principle 0 overrides the others -- the regexp
+As we have seen above, Principle 0 overrides the others. The regexp
will be matched as early as possible, with the other principles
determining how the regexp matches at that earliest character
position.
# but _does_ print
Hmm. What happened here? If you've been following along, you know that
-the above pattern should be effectively (almost) the same as the last one --
-enclosing the d in a character class isn't going to change what it
+the above pattern should be effectively (almost) the same as the last one;
+enclosing the C<d> in a character class isn't going to change what it
matches. So why does the first not print while the second one does?
The answer lies in the optimizations the regex engine makes. In the first
X<PERL_UNICODE>
Equivalent to the B<-C> command-line switch. Note that this is not
-a boolean variable-- setting this to C<"1"> is not the right way to
+a boolean variable. Setting this to C<"1"> is not the right way to
"enable Unicode" (whatever that would mean). You can use C<"0"> to
"disable Unicode", though (or alternatively unset PERL_UNICODE in
your shell before starting Perl). See the description of the C<-C>
disappointed or confused. Possibly both.
This is not to say that Perl threads are completely different from
-everything that's ever come before -- they're not. Perl's threading
+everything that's ever come before. They're not. Perl's threading
model owes a lot to other thread models, especially POSIX. Just as
Perl is not C, though, Perl threads are not POSIX threads. So if you
find yourself looking for mutexes, or thread priorities, it's time to
=head2 Basic Thread Support
-Thread support is a Perl compile-time option -- it's something that's
+Thread support is a Perl compile-time option. It's something that's
turned on or off when Perl is built at your site, rather than when
your programs are compiled. If your Perl wasn't compiled with thread
support enabled, then any attempt to use threads will fail.
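One way to check for this at run time is via the Config module; a sketch:

    use Config;
    BEGIN {
        $Config{useithreads}
            or die "This Perl was not built with thread support\n";
    }
    use threads;    # loaded only if the check above passed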
thread creation can be quite expensive, both in terms of memory usage and
time spent in creation. The ideal way to reduce these costs is to have a
relatively short number of long-lived threads, all created fairly early
-on -- before the base thread has accumulated too much data. Of course, this
+on (before the base thread has accumulated too much data). Of course, this
may not always be possible, so compromises have to be made. However, after
a thread has been created, its performance and extra memory usage should
be little different than ordinary code.
of Perl. Calls often suffering from not being thread-safe include:
C<localtime()>, C<gmtime()>, functions fetching user, group and
network information (such as C<getgrent()>, C<gethostent()>,
-C<getnetent()> and so on), C<readdir()>,
-C<rand()>, and C<srand()> -- in general, calls that depend on some global
-external state.
+C<getnetent()> and so on), C<readdir()>, C<rand()>, and C<srand()>. In
+general, these are calls that depend on some global external state.
If the system Perl is compiled in has thread-safe variants of such
calls, they will be used. Beyond that, Perl is at the mercy of
This method will be triggered every time the tied variable is set
(assigned). Beyond its self reference, it also expects one (and only one)
-argument--the new value the user is trying to assign. Don't worry about
-returning a value from STORE -- the semantic of assignment returning the
+argument: the new value the user is trying to assign. Don't worry about
+returning a value from STORE; the semantics of assignment returning the
assigned value is implemented with FETCH.
sub STORE {
So far so good. Those of you who have been paying attention will have
spotted that the tied object hasn't been used so far. So let's add an
extra method to the Remember class to allow comments to be included in
-the file -- say, something like this:
+the file, say, something like this:
sub comment {
my $self = shift;
but the reference to this is stored on the object itself and all other
methods access package data via that reference, so we should be ok.
-What do we mean by the Person::new() function -- isn't that actually
+What do we mean by the Person::new() function? Isn't that actually
a method? Well, in principle, yes. A method is just a function that
expects as its first argument a class name (package) or object
(blessed reference). Person::new() is the function that both the
The regular expression compiler produces polymorphic opcodes. That is,
the pattern adapts to the data and automatically switches to the Unicode
character scheme when presented with data that is internally encoded in
-UTF-8 -- or instead uses a traditional byte scheme when presented with
+UTF-8, or instead uses a traditional byte scheme when presented with
byte data.
=item C<use utf8> still needed to enable UTF-8/UTF-EBCDIC in scripts
END
}
-It's important to remember not to use "&" for the first set -- that
+It's important to remember not to use "&" for the first set; that
would be intersecting with nothing (resulting in an empty set).
=head2 User-Defined Case Mappings
A user of Perl does not normally need to know nor care how Perl
happens to encode its internal strings, but it becomes relevant when
-outputting Unicode strings to a stream without a PerlIO layer -- one with
-the "default" encoding. In such a case, the raw bytes used internally
+outputting Unicode strings to a stream without a PerlIO layer (one with
+the "default" encoding). In such a case, the raw bytes used internally
(the native character set or UTF-8, as appropriate for each string)
will be used, and a "Wide character" warning will be issued if those
strings contain a character beyond 0x00FF.
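A sketch of the difference, using STDOUT as the stream:

    binmode STDOUT, ':encoding(UTF-8)';   # declare a layer for the stream
    print "\x{263A}\n";                   # encoded through the layer; without
                                          # the binmode this would issue a
                                          # "Wide character in print" warning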
=item typedef my_cxt_t
-This struct typedef I<must> always be called C<my_cxt_t> -- the other
+This struct typedef I<must> always be called C<my_cxt_t>. The other
C<CXT*> macros assume the existence of the C<my_cxt_t> typedef name.
Declare a typedef named C<my_cxt_t> that is a structure that contains
The MY_CXT_INIT macro initialises storage for the C<my_cxt_t> struct.
-It I<must> be called exactly once -- typically in a BOOT: section. If you
+It I<must> be called exactly once, typically in a BOOT: section. If you
are maintaining multiple interpreters, it should be called once in each
interpreter instance, except for interpreters cloned from existing ones.
(But see C<MY_CXT_CLONE> below.)
next to the variable name and away from the variable type), and place a
"*" near the variable type, but away from the variable name (as in the
call to foo above). By doing so, it is easy to understand exactly what
-will be passed to the C function -- it will be whatever is in the "last
+will be passed to the C function; it will be whatever is in the "last
column".
You should take great pains to try to pass the function the type of variable