From: Jarkko Hietaniemi
Date: Sun, 11 Nov 2001 18:00:03 +0000 (+0000)
Subject: Doc updates; make the Unicode discussions a little
X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=commitdiff_plain;h=c349b1b99a4118ee8249d2cf5e493d2ae1367758;p=p5sagit%2Fp5-mst-13.2.git

Doc updates; make the Unicode discussions a little bit less alarming,
and add information about encodings, surrogates, and BOMs.

p4raw-id: //depot/perl@12943
---

diff --git a/pod/perlunicode.pod b/pod/perlunicode.pod
index 106a4bf..1839649 100644
--- a/pod/perlunicode.pod
+++ b/pod/perlunicode.pod
@@ -6,19 +6,9 @@ perlunicode - Unicode support in Perl
 
 =head2 Important Caveats
 
-WARNING: While the implementation of Unicode support in Perl is now
-fairly complete it is still evolving to some extent.
-
-In particular the way Unicode is handled on EBCDIC platforms is still
-rather experimental. On such a platform references to UTF-8 encoding
-in this document and elsewhere should be read as meaning UTF-EBCDIC as
-specified in Unicode Technical Report 16 unless ASCII vs EBCDIC issues
-are specifically discussed. There is no C<utfebcdic> pragma or
-":utfebcdic" layer, rather "utf8" and ":utf8" are re-used to mean
-platform's "natural" 8-bit encoding of Unicode. See L<perlebcdic> for
-more discussion of the issues.
-
-The following areas are still under development.
+Unicode support is an extensive requirement. While perl does not
+implement the Unicode standard or the accompanying technical reports
+from cover to cover, Perl does support many Unicode features.
 
 =over 4
 
@@ -27,30 +17,30 @@ The following areas are still under development.
 =item Input and Output Disciplines
 
 A filehandle can be marked as containing perl's internal Unicode
 encoding (UTF-8 or UTF-EBCDIC) by opening it with the ":utf8" layer.
 Other encodings can be converted to perl's encoding on input, or from
-perl's encoding on output by use of the ":encoding()" layer. There is
-not yet a clean way to mark the Perl source itself as being in an
-particular encoding.
+perl's encoding on output by use of the ":encoding(...)" layer.
+See L<open>.
+
+To mark the Perl source itself as being in a particular encoding,
+see L<encoding>.
 
 =item Regular Expressions
 
-The regular expression compiler does now attempt to produce
-polymorphic opcodes. That is the pattern should now adapt to the data
-and automatically switch to the Unicode character scheme when
-presented with Unicode data, or a traditional byte scheme when
-presented with byte data. The implementation is still new and
-(particularly on EBCDIC platforms) may need further work.
+The regular expression compiler produces polymorphic opcodes. That is,
+the pattern adapts to the data and automatically switches to the
+Unicode character scheme when presented with Unicode data, or to a
+traditional byte scheme when presented with byte data.
 
 =item C<use utf8> still needed to enable UTF-8/UTF-EBCDIC in scripts
 
 The C<utf8> pragma implements the tables used for Unicode support.
-These tables are automatically loaded on demand, so the C<utf8> pragma
-need not normally be used.
+However, these tables are automatically loaded on demand, so the
+C<utf8> pragma need not normally be used explicitly.
 
-However, as a compatibility measure, this pragma must be explicitly
-used to enable recognition of UTF-8 in the Perl scripts themselves on
-ASCII based machines or recognize UTF-EBCDIC on EBCDIC based machines.
-B<NOTE: this should be the only place where an explicit C<use utf8> is
-needed>.
+As a compatibility measure, this pragma must be explicitly used to
+enable recognition of UTF-8 in the Perl scripts themselves on ASCII
+based machines, or to recognize UTF-EBCDIC on EBCDIC based machines.
+B<NOTE: this should be the only place where an explicit C<use utf8>
+is needed>.
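+
+To make the ":encoding(...)" layer and the C<use utf8> pragma
+discussed above a little more concrete, here is a minimal sketch
+(the file names and the Latin-1 source encoding are only examples,
+and a perl built with PerlIO layer support is assumed):
+
+    use utf8;          # this script itself contains UTF-8 literals
+
+    my $smiley = "☺";  # one character, not three bytes, under use utf8
+
+    # read ISO 8859-1 bytes, converting to perl's internal encoding,
+    # and write perl's internal encoding back out as UTF-8
+    open my $in,  '<:encoding(iso-8859-1)', 'input.txt'  or die $!;
+    open my $out, '>:utf8',                 'output.txt' or die $!;
+
+    print $out $_ while <$in>;
+    print $out $smiley, "\n";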
 
 You can also use the C<encoding> pragma to change the default encoding
 of the data in your script; see L<encoding>.
 
@@ -81,11 +71,11 @@ character data. Such data may come from filehandles, from calls to
 external programs, from information provided by the system (such as
 %ENV), or from literals and constants in the source text.
 
-If the C<-C> command line switch is used, (or the
+On Windows platforms, if the C<-C> command line switch is used (or the
 ${^WIDE_SYSTEM_CALLS} global flag is set to C<1>), all system calls
 will use the corresponding wide character APIs. Note that this is
-currently only implemented on Windows since other platforms API
-standard on this area.
+currently only implemented on Windows, since other platforms lack an
+API standard in this area.
 
 Regardless of the above, the C<bytes> pragma can always be used to force
 byte semantics in a particular lexical scope. See L<bytes>.
@@ -677,8 +667,87 @@ Level 3 - Locale-Sensitive Support
 
 =back
 
+=head2 Unicode Encodings
+
+Unicode characters are assigned to I<code points>, which are abstract
+numbers. To use these numbers, various encodings are needed.
+
+=over 4
+
+=item UTF-8
+
+UTF-8 is the encoding used internally by Perl. UTF-8 is a variable
+length (1 to 6 bytes; current character allocations require at most
+4 bytes), byte-order independent encoding. For ASCII, UTF-8 is
+transparent (and we really mean 7-bit ASCII, not any 8-bit encoding).
+
+=item UTF-16, UTF-16BE, UTF-16LE, Surrogates, and BOMs (Byte Order Marks)
+
+UTF-16 is a 2 or 4 byte encoding. The Unicode code points
+0x0000..0xFFFF are stored in a single 16-bit unit, and the code points
+0x010000..0x10FFFF in two 16-bit units. The latter case is using
+I<surrogates>, the first 16-bit unit being the I<high surrogate>, and
+the second being the I<low surrogate>.
+
+Surrogates are code points set aside to encode the 0x010000..0x10FFFF
+range of Unicode code points in pairs of 16-bit units. The I<high
+surrogates> are the range 0xD800..0xDBFF, and the I<low surrogates>
+are the range 0xDC00..0xDFFF. The surrogate encoding is
+
+    $hi = int(($uni - 0x10000) / 0x400) + 0xD800;
+    $lo =      ($uni - 0x10000) % 0x400 + 0xDC00;
+
+and the decoding is
+
+    $uni = 0x10000 + ($hi - 0xD800) * 0x400 + ($lo - 0xDC00);
+
+(A worked example is shown at the end of this list.)
+
+Because of the 16-bitness, UTF-16 is byte-order dependent. UTF-16
+itself can be used for in-memory computations, but if storage or
+transfer is required, either UTF-16BE (Big Endian) or UTF-16LE
+(Little Endian) must be chosen.
+
+This introduces another problem: what if you just know that your data
+is UTF-16, but you don't know which endianness? Byte Order Marks
+(BOMs) are a solution to this. A special character has been reserved
+in Unicode to function as a byte order marker: the character with the
+code point 0xFEFF is the BOM. The trick is that if you read a BOM,
+you will know the byte order: if the data was written on a big endian
+platform, you will read the bytes 0xFE 0xFF, but if it was written on
+a little endian platform, you will read the bytes 0xFF 0xFE. (And if
+the originating platform was writing in UTF-8, you will read the
+bytes 0xEF 0xBB 0xBF.) This works because the byte-swapped value,
+0xFFFE, is guaranteed not to be a valid Unicode character, so the
+byte order can be deduced unambiguously. (A sketch of reading a BOM
+is shown at the end of this list.)
+
+=item UTF-32, UTF-32BE, UTF-32LE
+
+The UTF-32 family is pretty much like the UTF-16 family, except that
+the units are 32-bit, and therefore the surrogate scheme is not needed.
+
+=item UCS-2, UCS-4
+
+Encodings defined by the ISO 10646 standard. UCS-2 is a 16-bit
+encoding, UCS-4 is a 32-bit encoding. Unlike UTF-16, UCS-2 is not
+extensible beyond 0xFFFF.
+
+=item UTF-7
+
+A seven-bit safe (non-eight-bit) encoding, useful if the
+transport/storage is not eight-bit safe. Defined by RFC 2152.
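+
+To make the surrogate arithmetic above concrete, here is the pair for
+the code point 0x10437, worked through the formulas given earlier
+(the particular code point is just an example):
+
+    my $uni = 0x10437;
+    my $hi  = int(($uni - 0x10000) / 0x400) + 0xD800;   # 0xD801
+    my $lo  =      ($uni - 0x10000) % 0x400 + 0xDC00;   # 0xDC37
+
+And here is a rough sketch of reading a BOM to guess the encoding and
+byte order of a data file. The file name is only an example, the
+sketch distinguishes just UTF-8, UTF-16BE, and UTF-16LE, and it is an
+illustration, not something Perl does for you automatically:
+
+    open my $fh, '<', 'data.txt' or die $!;
+    binmode $fh;                   # look at the raw bytes
+    my $bom;
+    read $fh, $bom, 3;
+
+    if    ($bom =~ /^\xEF\xBB\xBF/) { print "UTF-8\n"    }
+    elsif ($bom =~ /^\xFE\xFF/)     { print "UTF-16BE\n" }
+    elsif ($bom =~ /^\xFF\xFE/)     { print "UTF-16LE\n" }
+    else                            { print "no BOM found\n" }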
+
+=back
+
+=head2 Unicode in Perl on EBCDIC
+
+The way Unicode is handled on EBCDIC platforms is still rather
+experimental. On such a platform, references to UTF-8 encoding in this
+document and elsewhere should be read as meaning UTF-EBCDIC, as
+specified in Unicode Technical Report 16, unless ASCII vs EBCDIC issues
+are specifically discussed. There is no C<utfebcdic> pragma or
+":utfebcdic" layer; rather, "utf8" and ":utf8" are re-used to mean the
+platform's "natural" 8-bit encoding of Unicode. See L<perlebcdic> for
+more discussion of the issues.
+
 =head1 SEE ALSO
 
-L<bytes>, L<utf8>, L<perlretut>, L<perlvar/"${^WIDE_SYSTEM_CALLS}">
+L<encoding>, L<Encode>, L<open>, L<utf8>, L<bytes>, L<perlretut>,
+L<perlvar/"${^WIDE_SYSTEM_CALLS}">
 
 =cut