X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=blobdiff_plain;f=pod%2Fperlunicode.pod;h=d629cabe9f1c120a3db78d1ab4bdff3cb111c4cf;hb=7237d65751f248e676243bc1e148084f323f4838;hp=bc880364d38924d38938f91f50266cf2347a3b40;hpb=46487f74b15c77c6f040c8b818f810a5255b1078;p=p5sagit%2Fp5-mst-13.2.git

diff --git a/pod/perlunicode.pod b/pod/perlunicode.pod
index bc88036..d629cab 100644
--- a/pod/perlunicode.pod
+++ b/pod/perlunicode.pod
@@ -4,14 +4,61 @@ perlunicode - Unicode support in Perl

=head1 DESCRIPTION

-WARNING: The implementation of Unicode support in Perl is incomplete.
-Expect sudden and unannounced changes!
+=head2 Important Caveats
+
+WARNING: While the implementation of Unicode support in Perl is now fairly
+complete, it is still evolving to some extent.
+
+In particular, the way Unicode is handled on EBCDIC platforms is still rather
+experimental.  On such a platform, references to UTF-8 encoding in this
+document and elsewhere should be read as meaning UTF-EBCDIC as specified
+in Unicode Technical Report 16, unless ASCII vs EBCDIC issues are specifically
+discussed.  There is no C<utfebcdic> pragma or ":utfebcdic" layer; rather,
+"utf8" and ":utf8" are reused to mean the platform's "natural" 8-bit encoding
+of Unicode.  See L<perlebcdic> for more discussion of the issues.
+
+The following areas are still under development.
+
+=over 4
+
+=item Input and Output Disciplines
+
+A filehandle can be marked as containing perl's internal Unicode encoding
+(UTF-8 or UTF-EBCDIC) by opening it with the ":utf8" layer.
+Other encodings can be converted to perl's encoding on input, or from
+perl's encoding on output, by use of the ":encoding()" layer.
+There is not yet a clean way to mark the perl source itself as being
+in a particular encoding.
+
+=item Regular Expressions
+
+The regular expression compiler now attempts to produce
+polymorphic opcodes.  That is, the pattern should now adapt to the data
+and automatically switch to the Unicode character scheme when presented
+with Unicode data, or to a traditional byte scheme when presented with
+byte data.  The implementation is still new and (particularly on
+EBCDIC platforms) may need further work.
+
+=item C<use utf8> still needed to enable a few features
+
+The C<utf8> pragma implements the tables used for Unicode support.  These
+tables are automatically loaded on demand, so the C<utf8> pragma need not
+normally be used.
+
+However, as a compatibility measure, this pragma must be explicitly used
+to enable recognition of UTF-8 encoded literals and identifiers in the
+source text on ASCII-based machines, or to recognize UTF-EBCDIC encoded
+literals and identifiers on EBCDIC-based machines.
+
+=back
+
+=head2 Byte and Character semantics

Beginning with version 5.6, Perl uses logically wide characters to
represent strings internally.  This internal representation of strings
-uses the UTF-8 encoding.
+uses either the UTF-8 or the UTF-EBCDIC encoding.

-In future, Perl-level operations will expect to work with characters
+In future, Perl-level operations can be expected to work with characters
rather than bytes, in general.

However, as strictly an interim compatibility measure, Perl v5.6 aims to
@@ -27,21 +74,19 @@ which allowed byte semantics in Perl operations, but only as long as
none of the program's inputs are marked as being a source of Unicode
character data.  Such data may come from filehandles, from calls to
external programs, from information provided by the system (such as %ENV),
-or from literals and constants in the source text.  Later, in
-L, we'll see how such
-inputs may be marked as being Unicode character data sources.
+or from literals and constants in the source text.

If the C<-C> command line switch is used (or the
${^WIDE_SYSTEM_CALLS} global flag is set to C<1>), all system calls
will use the corresponding wide character APIs.  This is currently
only implemented
-on Windows.
+on Windows, since UNIXes lack a standard API in this area.

-Regardless of the above, the C pragma can always be used to force
-byte semantics in a particular lexical scope.  See L.
+Regardless of the above, the C pragma can always be used to force
+byte semantics in a particular lexical scope.  See L.

The C<utf8> pragma is primarily a compatibility device that enables
-recognition of UTF-8 in literals encountered by the parser.  It is also
-used for enabling some of the more experimental Unicode support features.
+recognition of UTF-(8|EBCDIC) in literals encountered by the parser.  It may also
+be used for enabling some of the more experimental Unicode support features.
Note that this pragma is only required until a future version of Perl
in which character semantics will become the default.  This pragma may
then become a no-op.  See L.
@@ -53,13 +98,15 @@ the input data came from a Unicode source (for example, by adding a
character encoding discipline to the filehandle whence it came, or a
literal UTF-8 string constant in the program), character semantics
apply; otherwise, byte semantics are in effect.  To force byte semantics
-on Unicode data, the C pragma should be used.
+on Unicode data, the C pragma should be used.

Under character semantics, many operations that formerly operated on
bytes change to operating on characters.  For ASCII data this makes
no difference, because UTF-8 stores ASCII in single bytes, but for
-any character greater than C<chr(127)>, the character is stored in
+any character greater than C<chr(127)>, the character may be stored in
a sequence of two or more bytes, all of which have the high bit set.
+For C1 controls or Latin-1 characters on an EBCDIC platform, the character
+may be stored in a UTF-EBCDIC multi-byte sequence.
But by and large, the user need not worry about this, because Perl
hides it from the user.  A character in Perl is logically just a number
ranging from 0 to 2**32 or so.  Larger characters encode to longer
@@ -75,21 +122,14 @@ Character semantics have the following effects:

=item *

Strings and patterns may contain characters that have an ordinal value
-larger than 255.  In Perl v5.6, this is only enabled if the lexical
-scope has a C<use utf8> declaration (due to compatibility needs) but
-future versions may enable this by default.
+larger than 255.

Presuming you use a Unicode editor to edit your program, such characters
-will typically occur directly within the literal strings as UTF-8
+will typically occur directly within the literal strings as UTF-(8|EBCDIC)
characters, but you can also specify a particular character with an
-extension of the C<\x> notation.  UTF-8 characters are specified by
+extension of the C<\x> notation.  UTF-X characters are specified by
putting the hexadecimal code within curlies after the C<\x>.  For instance,
-a Unicode smiley face is C<\x{263A}>.  A character in the Latin-1 range
-(128..255) should be written C<\x{ab}> rather than C<\xab>, since the
-former will turn into a two-byte UTF-8 code, while the latter will
-continue to be interpreted as generating an 8-bit byte rather than a
-character.  In fact, if C<-w> is turned on, it will produce a warning
-that you might be generating invalid UTF-8.
+a Unicode smiley face is C<\x{263A}>.

=item *

@@ -98,10 +138,6 @@ characters, including ideographs.  (You are currently on your own when it
comes to using the canonical forms of characters--Perl doesn't (yet)
attempt to canonicalize variable names for you.)

-This also needs C<use utf8> currently.  [XXX: Why?!? High-bit chars were
-syntax errors when they occurred within identifiers in previous versions,
-so this should probably be enabled by default.]
-
=item *

Regular expressions match characters instead of bytes.  For instance,
@@ -109,11 +145,6 @@ Regular expressions match characters instead of bytes.  For instance,
is provided to force a match of a single byte ("C<char>" in C, hence
C<\C>).)

-Unicode support in regular expressions needs C<use utf8> currently.
-[XXX: Because the SWASH routines need to be loaded.  And the RE engine
-appears to need an overhaul to dynamically match Unicode anyway--the
-current RE compiler creates different nodes with and without C<use utf8>.]
-
=item *

Character classes in regular expressions match characters instead of
@@ -121,19 +152,185 @@ bytes, and match against the character properties specified in the
Unicode properties database.  So C<\w> can be used to match an
ideograph, for instance.

-C<use utf8> is needed to enable this.  See above.
-
=item *

Named Unicode properties and block ranges may be used as character
classes via the new C<\p{}> (matches property) and C<\P{}> (doesn't
match property) constructs.  For instance, C<\p{Lu}> matches any
character with the Unicode uppercase property, while C<\p{M}> matches
-any mark character.  Single letter properties may omit the brackets, so
-that can be written C<\pM> also.  Many predefined character classes are
-available, such as C<\p{IsMirrored}> and C<\p{InTibetan}>.
-
-C<use utf8> is needed to enable this.  See above.
+any mark character.  Single letter properties may omit the brackets,
+so that can be written C<\pM> also.  Many predefined character classes
+are available, such as C<\p{IsMirrored}> and C<\p{InTibetan}>.  The
+names of the C classes are the official Unicode block names but
+with all non-alphanumeric characters removed, for example the block
+name C<"Latin-1 Supplement"> becomes C<\p{InLatin1Supplement}>.
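As a quick illustration of the property and block classes just described (a
sketch only, not part of the patch; the sample string and the particular
properties tested are arbitrary, and older perls may still want C<use utf8>
as the surrounding text notes):

    my $str = "\x{C4}bc\x{263A}";   # A-with-diaeresis, "bc", WHITE SMILING FACE

    print "has an uppercase letter\n"       if $str =~ /\p{Lu}/;
    print "has a symbol\n"                  if $str =~ /\pS/;   # one-letter property, brackets omitted
    print "has a Latin-1 Supplement char\n" if $str =~ /\p{InLatin1Supplement}/;
    print "has no mark characters\n"    unless $str =~ /\pM/;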
+
+Here is the list as of Unicode 3.1.0 (the two-letter classes) and
+Perl 5.8.0 (the one-letter classes):
+
+   L  Letter
+   Lu Letter, Uppercase
+   Ll Letter, Lowercase
+   Lt Letter, Titlecase
+   Lm Letter, Modifier
+   Lo Letter, Other
+   M  Mark
+   Mn Mark, Non-Spacing
+   Mc Mark, Spacing Combining
+   Me Mark, Enclosing
+   N  Number
+   Nd Number, Decimal Digit
+   Nl Number, Letter
+   No Number, Other
+   P  Punctuation
+   Pc Punctuation, Connector
+   Pd Punctuation, Dash
+   Ps Punctuation, Open
+   Pe Punctuation, Close
+   Pi Punctuation, Initial quote
+      (may behave like Ps or Pe depending on usage)
+   Pf Punctuation, Final quote
+      (may behave like Ps or Pe depending on usage)
+   Po Punctuation, Other
+   S  Symbol
+   Sm Symbol, Math
+   Sc Symbol, Currency
+   Sk Symbol, Modifier
+   So Symbol, Other
+   Z  Separator
+   Zs Separator, Space
+   Zl Separator, Line
+   Zp Separator, Paragraph
+   C  Other
+   Cc Other, Control
+   Cf Other, Format
+   Cs Other, Surrogate
+   Co Other, Private Use
+   Cn Other, Not Assigned (Unicode defines no Cn characters)
+
+Additionally, because scripts differ in their directionality
+(for example Hebrew is written right to left), all characters
+have their directionality defined:
+
+   BidiL   Left-to-Right
+   BidiLRE Left-to-Right Embedding
+   BidiLRO Left-to-Right Override
+   BidiR   Right-to-Left
+   BidiAL  Right-to-Left Arabic
+   BidiRLE Right-to-Left Embedding
+   BidiRLO Right-to-Left Override
+   BidiPDF Pop Directional Format
+   BidiEN  European Number
+   BidiES  European Number Separator
+   BidiET  European Number Terminator
+   BidiAN  Arabic Number
+   BidiCS  Common Number Separator
+   BidiNSM Non-Spacing Mark
+   BidiBN  Boundary Neutral
+   BidiB   Paragraph Separator
+   BidiS   Segment Separator
+   BidiWS  Whitespace
+   BidiON  Other Neutrals
+
+The blocks available for C<\p{InBlock}> and C<\P{InBlock}>, for
+example C<\p{InCyrillic}>, are as follows:
+
+   BasicLatin
+   Latin1Supplement
+   LatinExtendedA
+   LatinExtendedB
+   IPAExtensions
+   SpacingModifierLetters
+   CombiningDiacriticalMarks
+   Greek
+   Cyrillic
+   Armenian
+   Hebrew
+   Arabic
+   Syriac
+   Thaana
+   Devanagari
+   Bengali
+   Gurmukhi
+   Gujarati
+   Oriya
+   Tamil
+   Telugu
+   Kannada
+   Malayalam
+   Sinhala
+   Thai
+   Lao
+   Tibetan
+   Myanmar
+   Georgian
+   HangulJamo
+   Ethiopic
+   Cherokee
+   UnifiedCanadianAboriginalSyllabics
+   Ogham
+   Runic
+   Khmer
+   Mongolian
+   LatinExtendedAdditional
+   GreekExtended
+   GeneralPunctuation
+   SuperscriptsandSubscripts
+   CurrencySymbols
+   CombiningMarksforSymbols
+   LetterlikeSymbols
+   NumberForms
+   Arrows
+   MathematicalOperators
+   MiscellaneousTechnical
+   ControlPictures
+   OpticalCharacterRecognition
+   EnclosedAlphanumerics
+   BoxDrawing
+   BlockElements
+   GeometricShapes
+   MiscellaneousSymbols
+   Dingbats
+   BraillePatterns
+   CJKRadicalsSupplement
+   KangxiRadicals
+   IdeographicDescriptionCharacters
+   CJKSymbolsandPunctuation
+   Hiragana
+   Katakana
+   Bopomofo
+   HangulCompatibilityJamo
+   Kanbun
+   BopomofoExtended
+   EnclosedCJKLettersandMonths
+   CJKCompatibility
+   CJKUnifiedIdeographsExtensionA
+   CJKUnifiedIdeographs
+   YiSyllables
+   YiRadicals
+   HangulSyllables
+   HighSurrogates
+   HighPrivateUseSurrogates
+   LowSurrogates
+   PrivateUse
+   CJKCompatibilityIdeographs
+   AlphabeticPresentationForms
+   ArabicPresentationFormsA
+   CombiningHalfMarks
+   CJKCompatibilityForms
+   SmallFormVariants
+   ArabicPresentationFormsB
+   Specials
+   HalfwidthandFullwidthForms
+   OldItalic
+   Gothic
+   Deseret
+   ByzantineMusicalSymbols
+   MusicalSymbols
+   MathematicalAlphanumericSymbols
+   CJKUnifiedIdeographsExtensionB
+   CJKCompatibilityIdeographsSupplement
+   Tags

=item *

@@ -143,28 +340,12 @@ character is a
base character and subsequent characters are mark
characters that apply to the base character.  It is equivalent to
C<(?:\PM\pM*)>.

-C<use utf8> is needed to enable this.  See above.
-
=item *

-The C<tr///> operator translates characters instead of bytes.  It can also
-be forced to translate between 8-bit codes and UTF-8 regardless of the
-surrounding utf8 state.  For instance, if you know your input is in Latin-1,
-you can say:
-
-    use utf8;
-    while (<>) {
-        tr/\0-\xff//CU;         # latin1 char to utf8
-        ...
-    }
-
-Similarly you could translate your output with
-
-    tr/\0-\x{ff}//UC;   # utf8 to latin1 char
-
-No, C<s///> doesn't take /U or /C (yet?).
-
-C<use utf8> is needed to enable this.  See above.
+The C<tr///> operator translates characters instead of bytes.  Note
+that the C<tr///CU> functionality has been removed, as the interface
+was a mistake.  For similar functionality see pack('U0', ...) and
+pack('C0', ...).

=item *

@@ -202,19 +383,31 @@ byte-oriented C and C under utf8.

=item *

+The bit string operators C<& | ^ ~> can operate on character data.
+However, for backward compatibility reasons (bit string operations
+when the characters are all less than 256 in ordinal value) one cannot
+mix C<~> (the bit complement) with characters both less than 256 and
+equal to or greater than 256.  Most importantly, De Morgan's laws
+(C<~($x|$y) eq ~$x&~$y>, C<~($x&$y) eq ~$x|~$y>) won't hold.
+Another way to look at this is that the complement cannot return
+B<both> the 8-bit (byte-wide) bit complement and the full character-wide
+bit complement.
+
+=item *
+
And finally, C<scalar reverse()> reverses by character rather than by byte.

=back

=head2 Character encodings for input and output

-[XXX: This feature is not yet implemented.]
+See L.

=head1 CAVEATS

As of yet, there is no method for automatically coercing input and
-output to some encoding other than UTF-8.  This is planned in the near
-future, however.
+output to some encoding other than UTF-8 or UTF-EBCDIC.  This is planned
+in the near future, however.

Whether an arbitrary piece of data will be treated as "characters" or
"bytes" by internal operations cannot be divined at the current time.

@@ -227,6 +420,6 @@ tend to run slower.  Avoidance of locales is strongly encouraged.

=head1 SEE ALSO

-L, L, L
+L, L, L, L

=cut
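For the removed C<tr///CU> interface mentioned in the hunk above, a rough
sketch of the byte/character conversion idiom that took its place (this uses
the pack/unpack pairing rather than the exact pack('U0', ...) form the text
points at; the variables are illustrative only):

    my $bytes = "\xE4\xF6";                        # two native (Latin-1) bytes
    my $chars = pack("U*", unpack("C*", $bytes));  # native 8-bit bytes --> Perl characters
    my $back  = pack("C*", unpack("U*", $chars));  # and back; characters above 255 will not fit in a byte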