X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=blobdiff_plain;f=pod%2Fperluniintro.pod;h=870926ea1fcb999842e3e8e8dcf216f554e1dd64;hb=6d822dc4045278fb03135b2683bac92dba061369;hp=0d840d11d817ca790e902f97c6495f1f1691a38b;hpb=1bfb14c4b8e2fdc2db05afcae1e2a6aee6eb3290;p=p5sagit%2Fp5-mst-13.2.git

diff --git a/pod/perluniintro.pod b/pod/perluniintro.pod
index 0d840d1..870926e 100644
--- a/pod/perluniintro.pod
+++ b/pod/perluniintro.pod
@@ -132,7 +132,7 @@ operations.
 Only one case remains where an explicit C<use utf8> is needed: if
 your Perl script itself is encoded in UTF-8, you can use UTF-8 in
 your identifier names, and in string and regular expression literals,
 by saying C<use utf8>.  This is not the default because
-scripts with legacy 8-bit data in them would break.
+scripts with legacy 8-bit data in them would break.  See L<utf8>.
 
 =head2 Perl's Unicode Model
 
@@ -150,7 +150,7 @@ character set.  Otherwise, it uses UTF-8.
 
 A user of Perl does not normally need to know nor care how Perl
 happens to encode its internal strings, but it becomes relevant when
-outputting Unicode strings to a stream without a discipline--one with
+outputting Unicode strings to a stream without a PerlIO layer -- one with
 the "default" encoding.  In such a case, the raw bytes used internally
 (the native character set or UTF-8, as appropriate for each string)
 will be used, and a "Wide character" warning will be issued if those
@@ -165,7 +165,7 @@ as a warning:
 
     Wide character in print at ...
 
-To output UTF-8, use the C<:utf8> output discipline.  Prepending
+To output UTF-8, use the C<:utf8> output layer.  Prepending
 
     binmode(STDOUT, ":utf8");
 
@@ -181,6 +181,11 @@ been led to believe that STDIN should be UTF-8, but then STDIN coming
 in from another command is not UTF-8, Perl will complain about the
 malformed UTF-8.
 
+All features that combine Unicode and I/O also require using the new
+PerlIO feature.
+Almost all Perl 5.8 platforms do use PerlIO, though:
+you can see whether yours is by running "perl -V" and looking for
+C<useperlio=define>.
+
 =head2 Unicode and EBCDIC
 
 Perl 5.8.0 also supports Unicode on EBCDIC platforms.  There,
@@ -323,7 +328,7 @@ and on already open streams, use C<binmode()>:
 
     binmode(STDOUT, ":encoding(shift_jis)");
 
 The matching of encoding names is loose: case does not matter, and
-many encodings have several aliases.  Note that C<:utf8> discipline
+many encodings have several aliases.  Note that the C<:utf8> layer
 must always be specified exactly like that; it is I<not> subject
 to the loose matching of encoding names.
 
@@ -335,7 +340,7 @@ module.
 
 Reading in a file that you know happens to be encoded in one of the
 Unicode or legacy encodings does not magically turn the data into
 Unicode in Perl's eyes.  To do that, specify the appropriate
-discipline when opening files
+layer when opening files
 
     open(my $fh,'<:utf8', 'anything');
     my $line_of_unicode = <$fh>;
 
@@ -343,10 +348,10 @@ discipline when opening files
     open(my $fh,'<:encoding(Big5)', 'anything');
     my $line_of_unicode = <$fh>;
 
-The I/O disciplines can also be specified more flexibly with
+The I/O layers can also be specified more flexibly with
 the C<open> pragma.  See L<open>, or look at the following example.
 
-    use open ':utf8'; # input and output default discipline will be UTF-8
+    use open ':utf8'; # input and output default layer will be UTF-8
     open X, ">file";
     print X chr(0x100), "\n";
     close X;
@@ -354,7 +359,7 @@ the C<open> pragma.  See L<open>, or look at the following example.
     printf "%#x\n", ord(<Y>); # this should print 0x100
     close Y;
 
-With the C<open> pragma you can use the C<:locale> discipline
+With the C<open> pragma you can use the C<:locale> layer
 
     $ENV{LC_ALL} = $ENV{LANG} = 'ru_RU.KOI8-R';
     # the :locale will probe the locale environment variables like LC_ALL
@@ -366,7 +371,7 @@ With the C<open> pragma you can use the C<:locale> discipline
     printf "%#x\n", ord(<I>), "\n"; # this should print 0xc1
     close I;
 
-or you can also use the C<':encoding(...)'> discipline
+or you can also use the C<':encoding(...)'> layer
 
     open(my $epic,'<:encoding(iso-8859-7)','iliad.greek');
     my $line_of_unicode = <$epic>;
 
@@ -376,8 +381,8 @@ converts data from the specified encoding when it is read in from the
 stream.  The result is always Unicode.
 
 The L<open> pragma affects all the C<open()> calls after the pragma by
-setting default disciplines.  If you want to affect only certain
-streams, use explicit disciplines directly in the C<open()> call.
+setting default layers.  If you want to affect only certain
+streams, use explicit layers directly in the C<open()> call.
 
 You can switch encodings on an already opened stream by using
 C<binmode()>; see L<perlfunc/binmode>.
 
@@ -387,7 +392,7 @@ C<open()> and C<binmode()>, only with the C<open> pragma.
 
 The C<:utf8> and C<:encoding(...)> methods do work with all of
 C<open()>, C<binmode()>, and the C<open> pragma.
 
-Similarly, you may use these I/O disciplines on output streams to
+Similarly, you may use these I/O layers on output streams to
 automatically convert Unicode to the specified encoding when it is
 written to the stream.  For example, the following snippet copies the
 contents of the file "text.jis" (encoded as ISO-2022-JP, aka JIS) to
@@ -410,7 +415,7 @@ C<seek()> and C<tell()> operate on byte counts, as do C<sysread()> and C<sysseek()>.
 
 Notice that because of the default behaviour of not doing any
-conversion upon input if there is no default discipline,
+conversion upon input if there is no default layer,
 it is easy to mistakenly write code that keeps on expanding a file
 by repeatedly encoding the data:
 
@@ -428,9 +433,7 @@ UTF-8 encoded.
 A C<use open ':utf8'> would have avoided the bug, or explicitly opening
 also the F<file> for input as UTF-8.
 
 B<Note>: the C<:utf8> and C<:encoding> features work only if your
-Perl has been built with the new "perlio" feature.  Almost all
-Perl 5.8 platforms do use "perlio", though: you can see whether
-yours is by running "perl -V" and looking for C<useperlio=define>.
+Perl has been built with the new PerlIO feature.
 
 =head2 Displaying Unicode As Text
 
@@ -481,7 +484,7 @@ Peeking At Perl's Internal Encoding
 
 Normal users of Perl should never care how Perl encodes any particular
 Unicode string (because the normal ways to get at the contents of a
 string with Unicode--via input and output--should always be via
-explicitly-defined I/O disciplines).  But if you must, there are two
+explicitly-defined I/O layers).  But if you must, there are two
 ways of looking behind the scenes.
 
 One way of peeking inside the internal encoding of Unicode characters
@@ -588,7 +591,7 @@ than ASCII 0 to 9 (and ASCII a to f for hexadecimal).
 
 =over 4
 
-=item
+=item *
 
 Will My Old Scripts Break?
 
@@ -600,7 +603,7 @@ produced a character modulo 255.
 C<chr(300)>, for example, was equal to C<chr(45)> or "-" (in ASCII),
 now it is LATIN CAPITAL LETTER I WITH BREVE.
 
-=item
+=item *
 
 How Do I Make My Scripts Work With Unicode?
 
@@ -608,7 +611,7 @@ Very little work should be needed since nothing changes until you
 generate Unicode data.  The most important thing is getting input as
 Unicode; for that, see the earlier I/O discussion.
 
-=item
+=item *
 
 How Do I Know Whether My String Is In Unicode?
 
@@ -655,7 +658,7 @@ defined function C:
 
     print length($unicode), "\n"; # will also print 2
                                   # (the 0xC4 0x80 of the UTF-8)
 
-=item
+=item *
 
 How Do I Detect Data That's Not Valid In a Particular Encoding?
 
@@ -679,7 +682,7 @@ warning is produced.  The "U0" means "expect strictly UTF-8 encoded
 Unicode".  Without that the C<unpack()> would accept also data like
 C<chr(0xFF)>, similarly to the C<pack()> as we saw earlier.
 
-=item
+=item *
 
 How Do I Convert Binary Data Into a Particular Encoding, Or Vice Versa?
@@ -734,14 +737,14 @@ B.
 You can use C<unpack("C*", ...)> for the former, and you can create
 well-formed Unicode data by C<pack("U*", ...)>.
 
-=item
+=item *
 
 How Do I Display Unicode?  How Do I Input Unicode?
 
 See http://www.alanwood.net/unicode/ and
 http://www.cl.cam.ac.uk/~mgk25/unicode.html
 
-=item
+=item *
 
 How Does Unicode Work With Traditional Locales?