3 perlunicode - Unicode support in Perl
7 =head2 Important Caveats
9 Unicode support is an extensive requirement. While Perl does not
10 implement the Unicode standard or the accompanying technical reports
11 from cover to cover, Perl does support many Unicode features.
13 People who want to learn to use Unicode in Perl, should probably read
14 L<the Perl Unicode tutorial, perlunitut|perlunitut>, before reading
15 this reference document.
19 =item Input and Output Layers
21 Perl knows when a filehandle uses Perl's internal Unicode encodings
22 (UTF-8, or UTF-EBCDIC if in EBCDIC) if the filehandle is opened with
23 the ":utf8" layer. Other encodings can be converted to Perl's
24 encoding on input or from Perl's encoding on output by use of the
25 ":encoding(...)" layer. See L<open>.
27 To indicate that Perl source itself is in UTF-8, use C<use utf8;>.
29 =item Regular Expressions
31 The regular expression compiler produces polymorphic opcodes. That is,
32 the pattern adapts to the data and automatically switches to the Unicode
33 character scheme when presented with data that is internally encoded in
34 UTF-8, or instead uses a traditional byte scheme when presented with
37 =item C<use utf8> still needed to enable UTF-8/UTF-EBCDIC in scripts
39 As a compatibility measure, the C<use utf8> pragma must be explicitly
40 included to enable recognition of UTF-8 in the Perl scripts themselves
41 (in string or regular expression literals, or in identifier names) on
42 ASCII-based machines or to recognize UTF-EBCDIC on EBCDIC-based
43 machines. B<These are the only times when an explicit C<use utf8>
44 is needed.> See L<utf8>.
46 =item BOM-marked scripts and UTF-16 scripts autodetected
48 If a Perl script begins marked with the Unicode BOM (UTF-16LE, UTF16-BE,
49 or UTF-8), or if the script looks like non-BOM-marked UTF-16 of either
50 endianness, Perl will correctly read in the script as Unicode.
51 (BOMless UTF-8 cannot be effectively recognized or differentiated from
52 ISO 8859-1 or other eight-bit encodings.)
54 =item C<use encoding> needed to upgrade non-Latin-1 byte strings
56 By default, there is a fundamental asymmetry in Perl's Unicode model:
57 implicit upgrading from byte strings to Unicode strings assumes that
58 they were encoded in I<ISO 8859-1 (Latin-1)>, but Unicode strings are
59 downgraded with UTF-8 encoding. This happens because the first 256
60 codepoints in Unicode happens to agree with Latin-1.
62 See L</"Byte and Character Semantics"> for more details.
66 =head2 Byte and Character Semantics
68 Beginning with version 5.6, Perl uses logically-wide characters to
69 represent strings internally.
71 In future, Perl-level operations will be expected to work with
72 characters rather than bytes.
74 However, as an interim compatibility measure, Perl aims to
75 provide a safe migration path from byte semantics to character
76 semantics for programs. For operations where Perl can unambiguously
77 decide that the input data are characters, Perl switches to
78 character semantics. For operations where this determination cannot
79 be made without additional information from the user, Perl decides in
80 favor of compatibility and chooses to use byte semantics.
82 Under byte semantics, when C<use locale> is in effect, Perl uses the
83 semantics associated with the current locale. Absent a C<use locale>, and
84 absent a C<use feature 'unicode_strings'> pragma, Perl currently uses US-ASCII
85 (or Basic Latin in Unicode terminology) byte semantics, meaning that characters
86 whose ordinal numbers are in the range 128 - 255 are undefined except for their
87 ordinal numbers. This means that none have case (upper and lower), nor are any
88 a member of character classes, like C<[:alpha:]> or C<\w>. (But all do belong
89 to the C<\W> class or the Perl regular expression extension C<[:^alpha:]>.)
91 This behavior preserves compatibility with earlier versions of Perl,
92 which allowed byte semantics in Perl operations only if
93 none of the program's inputs were marked as being a source of Unicode
94 character data. Such data may come from filehandles, from calls to
95 external programs, from information provided by the system (such as %ENV),
96 or from literals and constants in the source text.
98 The C<bytes> pragma will always, regardless of platform, force byte
99 semantics in a particular lexical scope. See L<bytes>.
101 The C<use feature 'unicode_strings'> pragma is intended to always, regardless
102 of platform, force Unicode semantics in a particular lexical scope. In
103 release 5.12, it is partially implemented, applying only to case changes.
104 See L</The "Unicode Bug"> below.
106 The C<utf8> pragma is primarily a compatibility device that enables
107 recognition of UTF-(8|EBCDIC) in literals encountered by the parser.
108 Note that this pragma is only required while Perl defaults to byte
109 semantics; when character semantics become the default, this pragma
110 may become a no-op. See L<utf8>.
112 Unless explicitly stated, Perl operators use character semantics
113 for Unicode data and byte semantics for non-Unicode data.
114 The decision to use character semantics is made transparently. If
115 input data comes from a Unicode source--for example, if a character
116 encoding layer is added to a filehandle or a literal Unicode
117 string constant appears in a program--character semantics apply.
118 Otherwise, byte semantics are in effect. The C<bytes> pragma should
119 be used to force byte semantics on Unicode data, and the C<use feature
120 'unicode_strings'> pragma to force Unicode semantics on byte data (though in
121 5.12 it isn't fully implemented).
123 If strings operating under byte semantics and strings with Unicode
124 character data are concatenated, the new string will have
125 character semantics. This can cause surprises: See L</BUGS>, below
127 Under character semantics, many operations that formerly operated on
128 bytes now operate on characters. A character in Perl is
129 logically just a number ranging from 0 to 2**31 or so. Larger
130 characters may encode into longer sequences of bytes internally, but
131 this internal detail is mostly hidden for Perl code.
132 See L<perluniintro> for more.
134 =head2 Effects of Character Semantics
136 Character semantics have the following effects:
142 Strings--including hash keys--and regular expression patterns may
143 contain characters that have an ordinal value larger than 255.
145 If you use a Unicode editor to edit your program, Unicode characters may
146 occur directly within the literal strings in UTF-8 encoding, or UTF-16.
147 (The former requires a BOM or C<use utf8>, the latter requires a BOM.)
149 Unicode characters can also be added to a string by using the C<\x{...}>
150 notation. The Unicode code for the desired character, in hexadecimal,
151 should be placed in the braces. For instance, a smiley face is
152 C<\x{263A}>. This encoding scheme works for all characters, but
153 for characters under 0x100, note that Perl may use an 8 bit encoding
154 internally, for optimization and/or backward compatibility.
158 use charnames ':full';
160 you can use the C<\N{...}> notation and put the official Unicode
161 character name within the braces, such as C<\N{WHITE SMILING FACE}>.
165 If an appropriate L<encoding> is specified, identifiers within the
166 Perl script may contain Unicode alphanumeric characters, including
167 ideographs. Perl does not currently attempt to canonicalize variable
172 Regular expressions match characters instead of bytes. "." matches
173 a character instead of a byte.
177 Character classes in regular expressions match characters instead of
178 bytes and match against the character properties specified in the
179 Unicode properties database. C<\w> can be used to match a Japanese
180 ideograph, for instance.
184 Named Unicode properties, scripts, and block ranges may be used like
185 character classes via the C<\p{}> "matches property" construct and
186 the C<\P{}> negation, "doesn't match property".
187 See L</"Unicode Character Properties"> for more details.
189 You can define your own character properties and use them
190 in the regular expression with the C<\p{}> or C<\P{}> construct.
191 See L</"User-Defined Character Properties"> for more details.
195 The special pattern C<\X> matches a logical character, an "extended grapheme
196 cluster" in Standardese. In Unicode what appears to the user to be a single
197 character, for example an accented C<G>, may in fact be composed of a sequence
198 of characters, in this case a C<G> followed by an accent character. C<\X>
199 will match the entire sequence.
203 The C<tr///> operator translates characters instead of bytes. Note
204 that the C<tr///CU> functionality has been removed. For similar
205 functionality see pack('U0', ...) and pack('C0', ...).
209 Case translation operators use the Unicode case translation tables
210 when character input is provided. Note that C<uc()>, or C<\U> in
211 interpolated strings, translates to uppercase, while C<ucfirst>,
212 or C<\u> in interpolated strings, translates to titlecase in languages
213 that make the distinction (which is equivalent to uppercase in languages
214 without the distinction).
218 Most operators that deal with positions or lengths in a string will
219 automatically switch to using character positions, including
220 C<chop()>, C<chomp()>, C<substr()>, C<pos()>, C<index()>, C<rindex()>,
221 C<sprintf()>, C<write()>, and C<length()>. An operator that
222 specifically does not switch is C<vec()>. Operators that really don't
223 care include operators that treat strings as a bucket of bits such as
224 C<sort()>, and operators dealing with filenames.
228 The C<pack()>/C<unpack()> letter C<C> does I<not> change, since it is often
229 used for byte-oriented formats. Again, think C<char> in the C language.
231 There is a new C<U> specifier that converts between Unicode characters
232 and code points. There is also a C<W> specifier that is the equivalent of
233 C<chr>/C<ord> and properly handles character values even if they are above 255.
237 The C<chr()> and C<ord()> functions work on characters, similar to
238 C<pack("W")> and C<unpack("W")>, I<not> C<pack("C")> and
239 C<unpack("C")>. C<pack("C")> and C<unpack("C")> are methods for
240 emulating byte-oriented C<chr()> and C<ord()> on Unicode strings.
241 While these methods reveal the internal encoding of Unicode strings,
242 that is not something one normally needs to care about at all.
246 The bit string operators, C<& | ^ ~>, can operate on character data.
247 However, for backward compatibility, such as when using bit string
248 operations when characters are all less than 256 in ordinal value, one
249 should not use C<~> (the bit complement) with characters of both
250 values less than 256 and values greater than 256. Most importantly,
251 DeMorgan's laws (C<~($x|$y) eq ~$x&~$y> and C<~($x&$y) eq ~$x|~$y>)
252 will not hold. The reason for this mathematical I<faux pas> is that
253 the complement cannot return B<both> the 8-bit (byte-wide) bit
254 complement B<and> the full character-wide bit complement.
258 You can define your own mappings to be used in lc(),
259 lcfirst(), uc(), and ucfirst() (or their string-inlined versions).
260 See L</"User-Defined Case Mappings"> for more details.
268 And finally, C<scalar reverse()> reverses by character rather than by byte.
272 =head2 Unicode Character Properties
274 Most Unicode character properties are accessible by using regular expressions.
275 They are used like character classes via the C<\p{}> "matches property"
276 construct and the C<\P{}> negation, "doesn't match property".
278 For instance, C<\p{Uppercase}> matches any character with the Unicode
279 "Uppercase" property, while C<\p{L}> matches any character with a
280 General_Category of "L" (letter) property. Brackets are not
281 required for single letter properties, so C<\p{L}> is equivalent to C<\pL>.
283 More formally, C<\p{Uppercase}> matches any character whose Unicode Uppercase
284 property value is True, and C<\P{Uppercase}> matches any character whose
285 Uppercase property value is False, and they could have been written as
286 C<\p{Uppercase=True}> and C<\p{Uppercase=False}>, respectively
288 This formality is needed when properties are not binary, that is if they can
289 take on more values than just True and False. For example, the Bidi_Class (see
290 L</"Bidirectional Character Types"> below), can take on a number of different
291 values, such as Left, Right, Whitespace, and others. To match these, one needs
292 to specify the property name (Bidi_Class), and the value being matched against
293 (Left, Right, I<etc.>). This is done, as in the examples above, by having the
294 two components separated by an equal sign (or interchangeably, a colon), like
295 C<\p{Bidi_Class: Left}>.
297 All Unicode-defined character properties may be written in these compound forms
298 of C<\p{property=value}> or C<\p{property:value}>, but Perl provides some
299 additional properties that are written only in the single form, as well as
300 single-form short-cuts for all binary properties and certain others described
301 below, in which you may omit the property name and the equals or colon
304 Most Unicode character properties have at least two synonyms (or aliases if you
305 prefer), a short one that is easier to type, and a longer one which is more
306 descriptive and hence it is easier to understand what it means. Thus the "L"
307 and "Letter" above are equivalent and can be used interchangeably. Likewise,
308 "Upper" is a synonym for "Uppercase", and we could have written
309 C<\p{Uppercase}> equivalently as C<\p{Upper}>. Also, there are typically
310 various synonyms for the values the property can be. For binary properties,
311 "True" has 3 synonyms: "T", "Yes", and "Y"; and "False has correspondingly "F",
312 "No", and "N". But be careful. A short form of a value for one property may
313 not mean the same thing as the same short form for another. Thus, for the
314 General_Category property, "L" means "Letter", but for the Bidi_Class property,
315 "L" means "Left". A complete list of properties and synonyms is in
318 Upper/lower case differences in the property names and values are irrelevant,
319 thus C<\p{Upper}> means the same thing as C<\p{upper}> or even C<\p{UpPeR}>.
320 Similarly, you can add or subtract underscores anywhere in the middle of a
321 word, so that these are also equivalent to C<\p{U_p_p_e_r}>. And white space
322 is irrelevant adjacent to non-word characters, such as the braces and the equals
323 or colon separators so C<\p{ Upper }> and C<\p{ Upper_case : Y }> are
324 equivalent to these as well. In fact, in most cases, white space and even
325 hyphens can be added or deleted anywhere. So even C<\p{ Up-per case = Yes}> is
326 equivalent. All this is called "loose-matching" by Unicode. The few places
327 where stricter matching is employed is in the middle of numbers, and the Perl
328 extension properties that begin or end with an underscore. Stricter matching
329 cares about white space (except adjacent to the non-word characters) and
330 hyphens, and non-interior underscores.
332 You can also use negation in both C<\p{}> and C<\P{}> by introducing a caret
333 (^) between the first brace and the property name: C<\p{^Tamil}> is
334 equal to C<\P{Tamil}>.
336 =head3 B<General_Category>
338 Every Unicode character is assigned a general category, which is the "most
339 usual categorization of a character" (from
340 L<http://www.unicode.org/reports/tr44>).
342 The compound way of writing these is like C<\p{General_Category=Number}>
343 (short, C<\p{gc:n}>). But Perl furnishes shortcuts in which everything up
344 through the equal or colon separator is omitted. So you can instead just write
347 Here are the short and long forms of the General Category properties:
352 LC, L& Cased_Letter (that is: [\p{Ll}\p{Lu}\p{Lt}])
365 Nd Decimal_Number (also Digit)
369 P Punctuation (also Punct)
370 Pc Connector_Punctuation
374 Pi Initial_Punctuation
375 (may behave like Ps or Pe depending on usage)
377 (may behave like Ps or Pe depending on usage)
389 Zp Paragraph_Separator
392 Cc Control (also Cntrl)
394 Cs Surrogate (not usable)
398 Single-letter properties match all characters in any of the
399 two-letter sub-properties starting with the same letter.
400 C<LC> and C<L&> are special cases, which are aliases for the set of
401 C<Ll>, C<Lu>, and C<Lt>.
403 Because Perl hides the need for the user to understand the internal
404 representation of Unicode characters, there is no need to implement
405 the somewhat messy concept of surrogates. C<Cs> is therefore not
408 =head3 B<Bidirectional Character Types>
410 Because scripts differ in their directionality--Hebrew is
411 written right to left, for example--Unicode supplies these properties in
412 the Bidi_Class class:
417 LRE Left-to-Right Embedding
418 LRO Left-to-Right Override
421 RLE Right-to-Left Embedding
422 RLO Right-to-Left Override
423 PDF Pop Directional Format
425 ES European Separator
426 ET European Terminator
431 B Paragraph Separator
436 This property is always written in the compound form.
437 For example, C<\p{Bidi_Class:R}> matches characters that are normally
438 written right to left.
442 The world's languages are written in a number of scripts. This sentence
443 (unless you're reading it in translation) is written in Latin, while Russian is
444 written in Cyrllic, and Greek is written in, well, Greek; Japanese mainly in
445 Hiragana or Katakana. There are many more.
447 The Unicode Script property gives what script a given character is in,
448 and can be matched with the compound form like C<\p{Script=Hebrew}> (short:
449 C<\p{sc=hebr}>). Perl furnishes shortcuts for all script names. You can omit
450 everything up through the equals (or colon), and simply write C<\p{Latin}> or
453 A complete list of scripts and their shortcuts is in L<perluniprops>.
455 =head3 B<Use of "Is" Prefix>
457 For backward compatibility (with Perl 5.6), all properties mentioned
458 so far may have C<Is> or C<Is_> prepended to their name, so C<\P{Is_Lu}>, for
459 example, is equal to C<\P{Lu}>, and C<\p{IsScript:Arabic}> is equal to
464 In addition to B<scripts>, Unicode also defines B<blocks> of
465 characters. The difference between scripts and blocks is that the
466 concept of scripts is closer to natural languages, while the concept
467 of blocks is more of an artificial grouping based on groups of Unicode
468 characters with consecutive ordinal values. For example, the "Basic Latin"
469 block is all characters whose ordinals are between 0 and 127, inclusive, in
470 other words, the ASCII characters. The "Latin" script contains some letters
471 from this block as well as several more, like "Latin-1 Supplement",
472 "Latin Extended-A", I<etc.>, but it does not contain all the characters from
473 those blocks. It does not, for example, contain digits, because digits are
474 shared across many scripts. Digits and similar groups, like punctuation, are in
475 the script called C<Common>. There is also a script called C<Inherited> for
476 characters that modify other characters, and inherit the script value of the
477 controlling character.
479 For more about scripts versus blocks, see UAX#24 "Unicode Script Property":
480 L<http://www.unicode.org/reports/tr24>
482 The Script property is likely to be the one you want to use when processing
483 natural language; the Block property may be useful in working with the nuts and
486 Block names are matched in the compound form, like C<\p{Block: Arrows}> or
487 C<\p{Blk=Hebrew}>. Unlike most other properties only a few block names have a
488 Unicode-defined short name. But Perl does provide a (slight) shortcut: You
489 can say, for example C<\p{In_Arrows}> or C<\p{In_Hebrew}>. For backwards
490 compatibility, the C<In> prefix may be omitted if there is no naming conflict
491 with a script or any other property, and you can even use an C<Is> prefix
492 instead in those cases. But it is not a good idea to do this, for a couple
499 It is confusing. There are many naming conflicts, and you may forget some.
500 For example, C<\p{Hebrew}> means the I<script> Hebrew, and NOT the I<block>
501 Hebrew. But would you remember that 6 months from now?
505 It is unstable. A new version of Unicode may pre-empt the current meaning by
506 creating a property with the same name. There was a time in very early Unicode
507 releases when C<\p{Hebrew}> would have matched the I<block> Hebrew; now it
512 Some people just prefer to always use C<\p{Block: foo}> and C<\p{Script: bar}>
513 instead of the shortcuts, for clarity, and because they can't remember the
514 difference between 'In' and 'Is' anyway (or aren't confident that those who
515 eventually will read their code will know).
517 A complete list of blocks and their shortcuts is in L<perluniprops>.
519 =head3 B<Other Properties>
521 There are many more properties than the very basic ones described here.
522 A complete list is in L<perluniprops>.
524 Unicode defines all its properties in the compound form, so all single-form
525 properties are Perl extensions. A number of these are just synonyms for the
526 Unicode ones, but some are genunine extensions, including a couple that are in
527 the compound form. And quite a few of these are actually recommended by Unicode
528 (in L<http://www.unicode.org/reports/tr18>).
530 This section gives some details on all the extensions that aren't synonyms for
531 compound-form Unicode properties (for those, you'll have to refer to the
532 L<Unicode Standard|http://www.unicode.org/reports/tr44>.
538 This matches any of the 1_114_112 Unicode code points. It is a synonym for
541 =item B<C<\p{Alnum}>>
543 This matches any C<\p{Alphabetic}> or C<\p{Decimal_Number}> character.
547 This matches any of the 1_114_112 Unicode code points. It is a synonym for
550 =item B<C<\p{Assigned}>>
552 This matches any assigned code point; that is, any code point whose general
553 category is not Unassigned (or equivalently, not Cn).
555 =item B<C<\p{Blank}>>
557 This is the same as C<\h> and C<\p{HorizSpace}>: A character that changes the
558 spacing horizontally.
560 =item B<C<\p{Decomposition_Type: Non_Canonical}>> (Short: C<\p{Dt=NonCanon}>)
562 Matches a character that has a non-canonical decomposition.
564 To understand the use of this rarely used property=value combination, it is
565 necessary to know some basics about decomposition.
566 Consider a character, say H. It could appear with various marks around it,
567 such as an acute accent, or a circumflex, or various hooks, circles, arrows,
568 I<etc.>, above, below, to one side and/or the other, I<etc.> There are many
569 possibilities among the world's languages. The number of combinations is
570 astronomical, and if there were a character for each combination, it would
571 soon exhaust Unicode's more than a million possible characters. So Unicode
572 took a different approach: there is a character for the base H, and a
573 character for each of the possible marks, and they can be combined variously
574 to get a final logical character. So a logical character--what appears to be a
575 single character--can be a sequence of more than one individual characters.
576 This is called an "extended grapheme cluster". (Perl furnishes the C<\X>
577 construct to match such sequences.)
579 But Unicode's intent is to unify the existing character set standards and
580 practices, and a number of pre-existing standards have single characters that
581 mean the same thing as some of these combinations. An example is ISO-8859-1,
582 which has quite a few of these in the Latin-1 range, an example being "LATIN
583 CAPITAL LETTER E WITH ACUTE". Because this character was in this pre-existing
584 standard, Unicode added it to its repertoire. But this character is considered
585 by Unicode to be equivalent to the sequence consisting of first the character
586 "LATIN CAPITAL LETTER E", then the character "COMBINING ACUTE ACCENT".
588 "LATIN CAPITAL LETTER E WITH ACUTE" is called a "pre-composed" character, and
589 the equivalence with the sequence is called canonical equivalence. All
590 pre-composed characters are said to have a decomposition (into the equivalent
591 sequence) and the decomposition type is also called canonical.
593 However, many more characters have a different type of decomposition, a
594 "compatible" or "non-canonical" decomposition. The sequences that form these
595 decompositions are not considered canonically equivalent to the pre-composed
596 character. An example, again in the Latin-1 range, is the "SUPERSCRIPT ONE".
597 It is kind of like a regular digit 1, but not exactly; its decomposition
598 into the digit 1 is called a "compatible" decomposition, specifically a
599 "super" decomposition. There are several such compatibility
600 decompositions (see L<http://www.unicode.org/reports/tr44>), including one
601 called "compat" which means some miscellaneous type of decomposition
602 that doesn't fit into the decomposition categories that Unicode has chosen.
604 Note that most Unicode characters don't have a decomposition, so their
605 decomposition type is "None".
607 Perl has added the C<Non_Canonical> type, for your convenience, to mean any of
608 the compatibility decompositions.
610 =item B<C<\p{Graph}>>
612 Matches any character that is graphic. Theoretically, this means a character
613 that on a printer would cause ink to be used.
615 =item B<C<\p{HorizSpace}>>
617 This is the same as C<\h> and C<\p{Blank}>: A character that changes the
618 spacing horizontally.
622 This is a synonym for C<\p{Present_In=*}>
624 =item B<C<\p{PerlSpace}>>
626 This is the same as C<\s>, restricted to ASCII, namely C<S<[ \f\n\r\t]>>.
628 Mnemonic: Perl's (original) space
630 =item B<C<\p{PerlWord}>>
632 This is the same as C<\w>, restricted to ASCII, namely C<[A-Za-z0-9_]>
634 Mnemonic: Perl's (original) word.
636 =item B<C<\p{PosixAlnum}>>
638 This matches any alphanumeric character in the ASCII range, namely
641 =item B<C<\p{PosixAlpha}>>
643 This matches any alphabetic character in the ASCII range, namely C<[A-Za-z]>.
645 =item B<C<\p{PosixBlank}>>
647 This matches any blank character in the ASCII range, namely C<S<[ \t]>>.
649 =item B<C<\p{PosixCntrl}>>
651 This matches any control character in the ASCII range, namely C<[\x00-\x1F\x7F]>
653 =item B<C<\p{PosixDigit}>>
655 This matches any digit character in the ASCII range, namely C<[0-9]>.
657 =item B<C<\p{PosixGraph}>>
659 This matches any graphical character in the ASCII range, namely C<[\x21-\x7E]>.
661 =item B<C<\p{PosixLower}>>
663 This matches any lowercase character in the ASCII range, namely C<[a-z]>.
665 =item B<C<\p{PosixPrint}>>
667 This matches any printable character in the ASCII range, namely C<[\x20-\x7E]>.
668 These are the graphical characters plus SPACE.
670 =item B<C<\p{PosixPunct}>>
672 This matches any punctuation character in the ASCII range, namely
673 C<[\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E]>. These are the
674 graphical characters that aren't word characters. Note that the Posix standard
675 includes in its definition of punctuation, those characters that Unicode calls
678 =item B<C<\p{PosixSpace}>>
680 This matches any space character in the ASCII range, namely
681 C<S<[ \f\n\r\t\x0B]>> (the last being a vertical tab).
683 =item B<C<\p{PosixUpper}>>
685 This matches any uppercase character in the ASCII range, namely C<[A-Z]>.
687 =item B<C<\p{Present_In: *}>> (Short: C<\p{In=*}>)
689 This property is used when you need to know in what Unicode version(s) a
692 The "*" above stands for some two digit Unicode version number, such as
693 C<1.1> or C<4.0>; or the "*" can also be C<Unassigned>. This property will
694 match the code points whose final disposition has been settled as of the
695 Unicode release given by the version number; C<\p{Present_In: Unassigned}>
696 will match those code points whose meaning has yet to be assigned.
698 For example, C<U+0041> "LATIN CAPITAL LETTER A" was present in the very first
699 Unicode release available, which is C<1.1>, so this property is true for all
700 valid "*" versions. On the other hand, C<U+1EFF> was not assigned until version
701 5.1 when it became "LATIN SMALL LETTER Y WITH LOOP", so the only "*" that
702 would match it are 5.1, 5.2, and later.
704 Unicode furnishes the C<Age> property from which this is derived. The problem
705 with Age is that a strict interpretation of it (which Perl takes) has it
706 matching the precise release a code point's meaning is introduced in. Thus
707 C<U+0041> would match only 1.1; and C<U+1EFF> only 5.1. This is not usually what
710 Some non-Perl implementations of the Age property may change its meaning to be
711 the same as the Perl Present_In property; just be aware of that.
713 Another confusion with both these properties is that the definition is not
714 that the code point has been assigned, but that the meaning of the code point
715 has been determined. This is because 66 code points will always be
716 unassigned, and, so the Age for them is the Unicode version the decision to
717 make them so was made in. For example, C<U+FDD0> is to be permanently
718 unassigned to a character, and the decision to do that was made in version 3.1,
719 so C<\p{Age=3.1}> matches this character and C<\p{Present_In: 3.1}> and up
722 =item B<C<\p{Print}>>
724 This matches any character that is graphical or blank, except controls.
726 =item B<C<\p{SpacePerl}>>
728 This is the same as C<\s>, including beyond ASCII.
730 Mnemonic: Space, as modified by Perl. (It doesn't include the vertical tab
731 which both the Posix standard and Unicode consider to be space.)
733 =item B<C<\p{VertSpace}>>
735 This is the same as C<\v>: A character that changes the spacing vertically.
739 This is the same as C<\w>, including beyond ASCII.
743 =head2 User-Defined Character Properties
745 You can define your own binary character properties by defining subroutines
746 whose names begin with "In" or "Is". The subroutines can be defined in any
747 package. The user-defined properties can be used in the regular expression
748 C<\p> and C<\P> constructs; if you are using a user-defined property from a
749 package other than the one you are in, you must specify its package in the
750 C<\p> or C<\P> construct.
752 # assuming property Is_Foreign defined in Lang::
753 package main; # property package name required
754 if ($txt =~ /\p{Lang::IsForeign}+/) { ... }
756 package Lang; # property package name not required
757 if ($txt =~ /\p{IsForeign}+/) { ... }
760 Note that the effect is compile-time and immutable once defined.
762 The subroutines must return a specially-formatted string, with one
763 or more newline-separated lines. Each line must be one of the following:
769 A single hexadecimal number denoting a Unicode code point to include.
773 Two hexadecimal numbers separated by horizontal whitespace (space or
774 tabular characters) denoting a range of Unicode code points to include.
778 Something to include, prefixed by "+": a built-in character
779 property (prefixed by "utf8::") or a user-defined character property,
780 to represent all the characters in that property; two hexadecimal code
781 points for a range; or a single hexadecimal code point.
785 Something to exclude, prefixed by "-": an existing character
786 property (prefixed by "utf8::") or a user-defined character property,
787 to represent all the characters in that property; two hexadecimal code
788 points for a range; or a single hexadecimal code point.
792 Something to negate, prefixed "!": an existing character
793 property (prefixed by "utf8::") or a user-defined character property,
794 to represent all the characters in that property; two hexadecimal code
795 points for a range; or a single hexadecimal code point.
799 Something to intersect with, prefixed by "&": an existing character
800 property (prefixed by "utf8::") or a user-defined character property,
801 for all the characters except the characters in the property; two
802 hexadecimal code points for a range; or a single hexadecimal code point.
806 For example, to define a property that covers both the Japanese
807 syllabaries (hiragana and katakana), you can define
816 Imagine that the here-doc end marker is at the beginning of the line.
817 Now you can use C<\p{InKana}> and C<\P{InKana}>.
819 You could also have used the existing block property names:
828 Suppose you wanted to match only the allocated characters,
829 not the raw block ranges: in other words, you want to remove
840 The negation is useful for defining (surprise!) negated classes.
850 Intersection is useful for getting the common characters matched by
851 two (or more) classes.
860 It's important to remember not to use "&" for the first set; that
861 would be intersecting with nothing (resulting in an empty set).
863 =head2 User-Defined Case Mappings
865 You can also define your own mappings to be used in the lc(),
866 lcfirst(), uc(), and ucfirst() (or their string-inlined versions).
867 The principle is similar to that of user-defined character
868 properties: to define subroutines
869 with names like C<ToLower> (for lc() and lcfirst()), C<ToTitle> (for
870 the first character in ucfirst()), and C<ToUpper> (for uc(), and the
871 rest of the characters in ucfirst()).
873 The string returned by the subroutines needs to be two hexadecimal numbers
874 separated by two tabulators: the two numbers being, respectively, the source
875 code point and the destination code point. For example:
883 defines an uc() mapping that causes only the character "a"
884 to be mapped to "A"; all other characters will remain unchanged.
886 (For serious hackers only) The above means you have to furnish a complete
887 mapping; you can't just override a couple of characters and leave the rest
888 unchanged. You can find all the mappings in the directory
889 C<$Config{privlib}>/F<unicore/To/>. The mapping data is returned as the
890 here-document, and the C<utf8::ToSpecFoo> are special exception mappings
891 derived from <$Config{privlib}>/F<unicore/SpecialCasing.txt>. The "Digit" and
892 "Fold" mappings that one can see in the directory are not directly
893 user-accessible, one can use either the C<Unicode::UCD> module, or just match
894 case-insensitively (that's when the "Fold" mapping is used).
896 The mappings will only take effect on scalars that have been marked as having
897 Unicode characters, for example by using C<utf8::upgrade()>.
898 Old byte-style strings are not affected.
900 The mappings are in effect for the package they are defined in.
902 =head2 Character Encodings for Input and Output
906 =head2 Unicode Regular Expression Support Level
908 The following list of Unicode support for regular expressions describes
909 all the features currently supported. The references to "Level N"
910 and the section numbers refer to the Unicode Technical Standard #18,
911 "Unicode Regular Expressions", version 11, in May 2005.
917 Level 1 - Basic Unicode Support
919 RL1.1 Hex Notation - done [1]
920 RL1.2 Properties - done [2][3]
921 RL1.2a Compatibility Properties - done [4]
922 RL1.3 Subtraction and Intersection - MISSING [5]
923 RL1.4 Simple Word Boundaries - done [6]
924 RL1.5 Simple Loose Matches - done [7]
925 RL1.6 Line Boundaries - MISSING [8]
926 RL1.7 Supplementary Code Points - done [9]
930 [3] supports not only minimal list, but all Unicode character
931 properties (see L</Unicode Character Properties>)
932 [4] \d \D \s \S \w \W \X [:prop:] [:^prop:]
933 [5] can use regular expression look-ahead [a] or
934 user-defined character properties [b] to emulate set operations
936 [7] note that Perl does Full case-folding in matching (but with bugs),
937 not Simple: for example U+1F88 is equivalent to U+1F00 U+03B9,
938 not with 1F80. This difference matters mainly for certain Greek
939 capital letters with certain modifiers: the Full case-folding
940 decomposes the letter, while the Simple case-folding would map
941 it to a single character.
942 [8] should do ^ and $ also on U+000B (\v in C), FF (\f), CR (\r),
943 CRLF (\r\n), NEL (U+0085), LS (U+2028), and PS (U+2029);
944 should also affect <>, $., and script line numbers;
945 should not split lines within CRLF [c] (i.e. there is no empty
946 line between \r and \n)
947 [9] UTF-8/UTF-EBDDIC used in perl allows not only U+10000 to U+10FFFF
948 but also beyond U+10FFFF [d]
950 [a] You can mimic class subtraction using lookahead.
951 For example, what UTS#18 might write as
953 [{Greek}-[{UNASSIGNED}]]
955 in Perl can be written as:
957 (?!\p{Unassigned})\p{InGreekAndCoptic}
958 (?=\p{Assigned})\p{InGreekAndCoptic}
960 But in this particular example, you probably really want
964 which will match assigned characters known to be part of the Greek script.
966 Also see the Unicode::Regex::Set module, it does implement the full
967 UTS#18 grouping, intersection, union, and removal (subtraction) syntax.
969 [b] '+' for union, '-' for removal (set-difference), '&' for intersection
970 (see L</"User-Defined Character Properties">)
972 [c] Try the C<:crlf> layer (see L<PerlIO>).
974 [d] U+FFFF will currently generate a warning message if 'utf8' warnings are
979 Level 2 - Extended Unicode Support
981 RL2.1 Canonical Equivalents - MISSING [10][11]
982 RL2.2 Default Grapheme Clusters - MISSING [12]
983 RL2.3 Default Word Boundaries - MISSING [14]
984 RL2.4 Default Loose Matches - MISSING [15]
985 RL2.5 Name Properties - MISSING [16]
986 RL2.6 Wildcard Properties - MISSING
988 [10] see UAX#15 "Unicode Normalization Forms"
989 [11] have Unicode::Normalize but not integrated to regexes
990 [12] have \X but we don't have a "Grapheme Cluster Mode"
991 [14] see UAX#29, Word Boundaries
992 [15] see UAX#21 "Case Mappings"
993 [16] have \N{...} but neither compute names of CJK Ideographs
994 and Hangul Syllables nor use a loose match [e]
996 [e] C<\N{...}> allows namespaces (see L<charnames>).
1000 Level 3 - Tailored Support
1002 RL3.1 Tailored Punctuation - MISSING
1003 RL3.2 Tailored Grapheme Clusters - MISSING [17][18]
1004 RL3.3 Tailored Word Boundaries - MISSING
1005 RL3.4 Tailored Loose Matches - MISSING
1006 RL3.5 Tailored Ranges - MISSING
1007 RL3.6 Context Matching - MISSING [19]
1008 RL3.7 Incremental Matches - MISSING
1009 ( RL3.8 Unicode Set Sharing )
1010 RL3.9 Possible Match Sets - MISSING
1011 RL3.10 Folded Matching - MISSING [20]
1012 RL3.11 Submatchers - MISSING
1014 [17] see UAX#10 "Unicode Collation Algorithms"
1015 [18] have Unicode::Collate but not integrated to regexes
1016 [19] have (?<=x) and (?=x), but look-aheads or look-behinds should see
1017 outside of the target substring
1018 [20] need insensitive matching for linguistic features other than case;
1019 for example, hiragana to katakana, wide and narrow, simplified Han
1020 to traditional Han (see UTR#30 "Character Foldings")
1024 =head2 Unicode Encodings
1026 Unicode characters are assigned to I<code points>, which are abstract
1027 numbers. To use these numbers, various encodings are needed.
1035 UTF-8 is a variable-length (1 to 6 bytes, current character allocations
1036 require 4 bytes), byte-order independent encoding. For ASCII (and we
1037 really do mean 7-bit ASCII, not another 8-bit encoding), UTF-8 is
1040 The following table is from Unicode 3.2.
1042 Code Points 1st Byte 2nd Byte 3rd Byte 4th Byte
1044 U+0000..U+007F 00..7F
1045 U+0080..U+07FF * C2..DF 80..BF
1046 U+0800..U+0FFF E0 * A0..BF 80..BF
1047 U+1000..U+CFFF E1..EC 80..BF 80..BF
1048 U+D000..U+D7FF ED 80..9F 80..BF
1049 U+D800..U+DFFF +++++++ utf16 surrogates, not legal utf8 +++++++
1050 U+E000..U+FFFF EE..EF 80..BF 80..BF
1051 U+10000..U+3FFFF F0 * 90..BF 80..BF 80..BF
1052 U+40000..U+FFFFF F1..F3 80..BF 80..BF 80..BF
1053 U+100000..U+10FFFF F4 80..8F 80..BF 80..BF
1055 Note the gaps before several of the byte entries above marked by '*'. These are
1056 caused by legal UTF-8 avoiding non-shortest encodings: it is technically
1057 possible to UTF-8-encode a single code point in different ways, but that is
1058 explicitly forbidden, and the shortest possible encoding should always be used
1059 (and that is what Perl does).
1061 Another way to look at it is via bits:
1063 Code Points 1st Byte 2nd Byte 3rd Byte 4th Byte
1066 00000bbbbbaaaaaa 110bbbbb 10aaaaaa
1067 ccccbbbbbbaaaaaa 1110cccc 10bbbbbb 10aaaaaa
1068 00000dddccccccbbbbbbaaaaaa 11110ddd 10cccccc 10bbbbbb 10aaaaaa
1070 As you can see, the continuation bytes all begin with "10", and the
1071 leading bits of the start byte tell how many bytes there are in the
1078 Like UTF-8 but EBCDIC-safe, in the way that UTF-8 is ASCII-safe.
1082 UTF-16, UTF-16BE, UTF-16LE, Surrogates, and BOMs (Byte Order Marks)
1084 The followings items are mostly for reference and general Unicode
1085 knowledge, Perl doesn't use these constructs internally.
1087 UTF-16 is a 2 or 4 byte encoding. The Unicode code points
1088 C<U+0000..U+FFFF> are stored in a single 16-bit unit, and the code
1089 points C<U+10000..U+10FFFF> in two 16-bit units. The latter case is
1090 using I<surrogates>, the first 16-bit unit being the I<high
1091 surrogate>, and the second being the I<low surrogate>.
1093 Surrogates are code points set aside to encode the C<U+10000..U+10FFFF>
1094 range of Unicode code points in pairs of 16-bit units. The I<high
1095 surrogates> are the range C<U+D800..U+DBFF> and the I<low surrogates>
1096 are the range C<U+DC00..U+DFFF>. The surrogate encoding is
1098 $hi = ($uni - 0x10000) / 0x400 + 0xD800;
1099 $lo = ($uni - 0x10000) % 0x400 + 0xDC00;
1103 $uni = 0x10000 + ($hi - 0xD800) * 0x400 + ($lo - 0xDC00);
1105 If you try to generate surrogates (for example by using chr()), you
1106 will get a warning, if warnings are turned on, because those code
1107 points are not valid for a Unicode character.
1109 Because of the 16-bitness, UTF-16 is byte-order dependent. UTF-16
1110 itself can be used for in-memory computations, but if storage or
1111 transfer is required either UTF-16BE (big-endian) or UTF-16LE
1112 (little-endian) encodings must be chosen.
1114 This introduces another problem: what if you just know that your data
1115 is UTF-16, but you don't know which endianness? Byte Order Marks, or
1116 BOMs, are a solution to this. A special character has been reserved
1117 in Unicode to function as a byte order marker: the character with the
1118 code point C<U+FEFF> is the BOM.
1120 The trick is that if you read a BOM, you will know the byte order,
1121 since if it was written on a big-endian platform, you will read the
1122 bytes C<0xFE 0xFF>, but if it was written on a little-endian platform,
1123 you will read the bytes C<0xFF 0xFE>. (And if the originating platform
1124 was writing in UTF-8, you will read the bytes C<0xEF 0xBB 0xBF>.)
1126 The way this trick works is that the character with the code point
1127 C<U+FFFE> is guaranteed not to be a valid Unicode character, so the
1128 sequence of bytes C<0xFF 0xFE> is unambiguously "BOM, represented in
1129 little-endian format" and cannot be C<U+FFFE>, represented in big-endian
1130 format". (Actually, C<U+FFFE> is legal for use by your program, even for
1131 input/output, but better not use it if you need a BOM. But it is "illegal for
1132 interchange", so that an unsuspecting program won't get confused.)
1136 UTF-32, UTF-32BE, UTF-32LE
1138 The UTF-32 family is pretty much like the UTF-16 family, expect that
1139 the units are 32-bit, and therefore the surrogate scheme is not
1140 needed. The BOM signatures will be C<0x00 0x00 0xFE 0xFF> for BE and
1141 C<0xFF 0xFE 0x00 0x00> for LE.
1147 Encodings defined by the ISO 10646 standard. UCS-2 is a 16-bit
1148 encoding. Unlike UTF-16, UCS-2 is not extensible beyond C<U+FFFF>,
1149 because it does not use surrogates. UCS-4 is a 32-bit encoding,
1150 functionally identical to UTF-32.
1156 A seven-bit safe (non-eight-bit) encoding, which is useful if the
1157 transport or storage is not eight-bit safe. Defined by RFC 2152.
1161 =head2 Security Implications of Unicode
1163 Read L<Unicode Security Considerations|http://www.unicode.org/reports/tr36>.
1164 Also, note the following:
1172 Unfortunately, the specification of UTF-8 leaves some room for
1173 interpretation of how many bytes of encoded output one should generate
1174 from one input Unicode character. Strictly speaking, the shortest
1175 possible sequence of UTF-8 bytes should be generated,
1176 because otherwise there is potential for an input buffer overflow at
1177 the receiving end of a UTF-8 connection. Perl always generates the
1178 shortest length UTF-8, and with warnings on, Perl will warn about
1179 non-shortest length UTF-8 along with other malformations, such as the
1180 surrogates, which are not real Unicode code points.
1184 Regular expressions behave slightly differently between byte data and
1185 character (Unicode) data. For example, the "word character" character
1186 class C<\w> will work differently depending on if data is eight-bit bytes
1189 In the first case, the set of C<\w> characters is either small--the
1190 default set of alphabetic characters, digits, and the "_"--or, if you
1191 are using a locale (see L<perllocale>), the C<\w> might contain a few
1192 more letters according to your language and country.
1194 In the second case, the C<\w> set of characters is much, much larger.
1195 Most importantly, even in the set of the first 256 characters, it will
1196 probably match different characters: unlike most locales, which are
1197 specific to a language and country pair, Unicode classifies all the
1198 characters that are letters I<somewhere> as C<\w>. For example, your
1199 locale might not think that LATIN SMALL LETTER ETH is a letter (unless
1200 you happen to speak Icelandic), but Unicode does.
1202 As discussed elsewhere, Perl has one foot (two hooves?) planted in
1203 each of two worlds: the old world of bytes and the new world of
1204 characters, upgrading from bytes to characters when necessary.
1205 If your legacy code does not explicitly use Unicode, no automatic
1206 switch-over to characters should happen. Characters shouldn't get
1207 downgraded to bytes, either. It is possible to accidentally mix bytes
1208 and characters, however (see L<perluniintro>), in which case C<\w> in
1209 regular expressions might start behaving differently. Review your
1210 code. Use warnings and the C<strict> pragma.
1214 =head2 Unicode in Perl on EBCDIC
1216 The way Unicode is handled on EBCDIC platforms is still
1217 experimental. On such platforms, references to UTF-8 encoding in this
1218 document and elsewhere should be read as meaning the UTF-EBCDIC
1219 specified in Unicode Technical Report 16, unless ASCII vs. EBCDIC issues
1220 are specifically discussed. There is no C<utfebcdic> pragma or
1221 ":utfebcdic" layer; rather, "utf8" and ":utf8" are reused to mean
1222 the platform's "natural" 8-bit encoding of Unicode. See L<perlebcdic>
1223 for more discussion of the issues.
1227 Usually locale settings and Unicode do not affect each other, but
1228 there are a couple of exceptions:
1234 You can enable automatic UTF-8-ification of your standard file
1235 handles, default C<open()> layer, and C<@ARGV> by using either
1236 the C<-C> command line switch or the C<PERL_UNICODE> environment
1237 variable, see L<perlrun> for the documentation of the C<-C> switch.
1241 Perl tries really hard to work both with Unicode and the old
1242 byte-oriented world. Most often this is nice, but sometimes Perl's
1243 straddling of the proverbial fence causes problems.
1247 =head2 When Unicode Does Not Happen
1249 While Perl does have extensive ways to input and output in Unicode,
1250 and few other 'entry points' like the @ARGV which can be interpreted
1251 as Unicode (UTF-8), there still are many places where Unicode (in some
1252 encoding or another) could be given as arguments or received as
1253 results, or both, but it is not.
1255 The following are such interfaces. Also, see L</The "Unicode Bug">.
1256 For all of these interfaces Perl
1257 currently (as of 5.8.3) simply assumes byte strings both as arguments
1258 and results, or UTF-8 strings if the C<encoding> pragma has been used.
1260 One reason why Perl does not attempt to resolve the role of Unicode in
1261 these cases is that the answers are highly dependent on the operating
1262 system and the file system(s). For example, whether filenames can be
1263 in Unicode, and in exactly what kind of encoding, is not exactly a
1264 portable concept. Similarly for the qx and system: how well will the
1265 'command line interface' (and which of them?) handle Unicode?
1271 chdir, chmod, chown, chroot, exec, link, lstat, mkdir,
1272 rename, rmdir, stat, symlink, truncate, unlink, utime, -X
1284 open, opendir, sysopen
1288 qx (aka the backtick operator), system
1296 =head2 The "Unicode Bug"
1298 The term, the "Unicode bug" has been applied to an inconsistency with the
1299 Unicode characters whose code points are in the Latin-1 Supplement block, that
1300 is, between 128 and 255. Without a locale specified, unlike all other
1301 characters or code points, these characters have very different semantics in
1302 byte semantics versus character semantics.
1304 In character semantics they are interpreted as Unicode code points, which means
1305 they have the same semantics as Latin-1 (ISO-8859-1).
1307 In byte semantics, they are considered to be unassigned characters, meaning
1308 that the only semantics they have is their ordinal numbers, and that they are
1309 not members of various character classes. None are considered to match C<\w>
1310 for example, but all match C<\W>. (On EBCDIC platforms, the behavior may
1311 be different from this, depending on the underlying C language library
1314 The behavior is known to have effects on these areas:
1320 Changing the case of a scalar, that is, using C<uc()>, C<ucfirst()>, C<lc()>,
1321 and C<lcfirst()>, or C<\L>, C<\U>, C<\u> and C<\l> in regular expression
1326 Using caseless (C</i>) regular expression matching
1330 Matching a number of properties in regular expressions, such as C<\w>
1334 User-defined case change mappings. You can create a C<ToUpper()> function, for
1335 example, which overrides Perl's built-in case mappings. The scalar must be
1336 encoded in utf8 for your function to actually be invoked.
1340 This behavior can lead to unexpected results in which a string's semantics
1341 suddenly change if a code point above 255 is appended to or removed from it,
1342 which changes the string's semantics from byte to character or vice versa. As
1343 an example, consider the following program and its output:
1348 for ($s1, $s2, $s1.$s2) {
1356 If there's no C<\w> in C<s1> or in C<s2>, why does their concatenation have one?
1358 This anomaly stems from Perl's attempt to not disturb older programs that
1359 didn't use Unicode, and hence had no semantics for characters outside of the
1360 ASCII range (except in a locale), along with Perl's desire to add Unicode
1361 support seamlessly. The result wasn't seamless: these characters were
1364 Work is being done to correct this, but only some of it was complete in time
1365 for the 5.12 release. What has been finished is the important part of the case
1366 changing component. Due to concerns, and some evidence, that older code might
1367 have come to rely on the existing behavior, the new behavior must be explicitly
1368 enabled by the feature C<unicode_strings> in the L<feature> pragma, even though
1369 no new syntax is involved.
1371 See L<perlfunc/lc> for details on how this pragma works in combination with
1372 various others for casing. Even though the pragma only affects casing
1373 operations in the 5.12 release, it is planned to have it affect all the
1374 problematic behaviors in later releases: you can't have one without them all.
1376 In the meantime, a workaround is to always call utf8::upgrade($string), or to
1377 use the standard modules L<Encode> or L<charnames>.
1379 =head2 Forcing Unicode in Perl (Or Unforcing Unicode in Perl)
1381 Sometimes (see L</"When Unicode Does Not Happen"> or L</The "Unicode Bug">)
1382 there are situations where you simply need to force a byte
1383 string into UTF-8, or vice versa. The low-level calls
1384 utf8::upgrade($bytestring) and utf8::downgrade($utf8string[, FAIL_OK]) are
1387 Note that utf8::downgrade() can fail if the string contains characters
1388 that don't fit into a byte.
1390 Calling either function on a string that already is in the desired state is a
1393 =head2 Using Unicode in XS
1395 If you want to handle Perl Unicode in XS extensions, you may find the
1396 following C APIs useful. See also L<perlguts/"Unicode Support"> for an
1397 explanation about Unicode at the XS level, and L<perlapi> for the API
1404 C<DO_UTF8(sv)> returns true if the C<UTF8> flag is on and the bytes
1405 pragma is not in effect. C<SvUTF8(sv)> returns true if the C<UTF8>
1406 flag is on; the bytes pragma is ignored. The C<UTF8> flag being on
1407 does B<not> mean that there are any characters of code points greater
1408 than 255 (or 127) in the scalar or that there are even any characters
1409 in the scalar. What the C<UTF8> flag means is that the sequence of
1410 octets in the representation of the scalar is the sequence of UTF-8
1411 encoded code points of the characters of a string. The C<UTF8> flag
1412 being off means that each octet in this representation encodes a
1413 single character with code point 0..255 within the string. Perl's
1414 Unicode model is not to use UTF-8 until it is absolutely necessary.
1418 C<uvchr_to_utf8(buf, chr)> writes a Unicode character code point into
1419 a buffer encoding the code point as UTF-8, and returns a pointer
1420 pointing after the UTF-8 bytes. It works appropriately on EBCDIC machines.
1424 C<utf8_to_uvchr(buf, lenp)> reads UTF-8 encoded bytes from a buffer and
1425 returns the Unicode character code point and, optionally, the length of
1426 the UTF-8 byte sequence. It works appropriately on EBCDIC machines.
1430 C<utf8_length(start, end)> returns the length of the UTF-8 encoded buffer
1431 in characters. C<sv_len_utf8(sv)> returns the length of the UTF-8 encoded
1436 C<sv_utf8_upgrade(sv)> converts the string of the scalar to its UTF-8
1437 encoded form. C<sv_utf8_downgrade(sv)> does the opposite, if
1438 possible. C<sv_utf8_encode(sv)> is like sv_utf8_upgrade except that
1439 it does not set the C<UTF8> flag. C<sv_utf8_decode()> does the
1440 opposite of C<sv_utf8_encode()>. Note that none of these are to be
1441 used as general-purpose encoding or decoding interfaces: C<use Encode>
1442 for that. C<sv_utf8_upgrade()> is affected by the encoding pragma
1443 but C<sv_utf8_downgrade()> is not (since the encoding pragma is
1444 designed to be a one-way street).
1448 C<is_utf8_char(s)> returns true if the pointer points to a valid UTF-8
1453 C<is_utf8_string(buf, len)> returns true if C<len> bytes of the buffer
1458 C<UTF8SKIP(buf)> will return the number of bytes in the UTF-8 encoded
1459 character in the buffer. C<UNISKIP(chr)> will return the number of bytes
1460 required to UTF-8-encode the Unicode character code point. C<UTF8SKIP()>
1461 is useful for example for iterating over the characters of a UTF-8
1462 encoded buffer; C<UNISKIP()> is useful, for example, in computing
1463 the size required for a UTF-8 encoded buffer.
1467 C<utf8_distance(a, b)> will tell the distance in characters between the
1468 two pointers pointing to the same UTF-8 encoded buffer.
1472 C<utf8_hop(s, off)> will return a pointer to a UTF-8 encoded buffer
1473 that is C<off> (positive or negative) Unicode characters displaced
1474 from the UTF-8 buffer C<s>. Be careful not to overstep the buffer:
1475 C<utf8_hop()> will merrily run off the end or the beginning of the
1476 buffer if told to do so.
1480 C<pv_uni_display(dsv, spv, len, pvlim, flags)> and
1481 C<sv_uni_display(dsv, ssv, pvlim, flags)> are useful for debugging the
1482 output of Unicode strings and scalars. By default they are useful
1483 only for debugging--they display B<all> characters as hexadecimal code
1484 points--but with the flags C<UNI_DISPLAY_ISPRINT>,
1485 C<UNI_DISPLAY_BACKSLASH>, and C<UNI_DISPLAY_QQ> you can make the
1486 output more readable.
1490 C<ibcmp_utf8(s1, pe1, l1, u1, s2, pe2, l2, u2)> can be used to
1491 compare two strings case-insensitively in Unicode. For case-sensitive
1492 comparisons you can just use C<memEQ()> and C<memNE()> as usual.
1496 For more information, see L<perlapi>, and F<utf8.c> and F<utf8.h>
1497 in the Perl source code distribution.
1499 =head2 Hacking Perl to work on earlier Unicode versions (for very serious hackers only)
1501 Perl by default comes with the latest supported Unicode version built in, but
1502 you can change to use any earlier one.
1504 Download the files in the version of Unicode that you want from the Unicode web
1505 site L<http://www.unicode.org>). These should replace the existing files in
1506 C<\$Config{privlib}>/F<unicore>. (C<\%Config> is available from the Config
1507 module.) Follow the instructions in F<README.perl> in that directory to change
1508 some of their names, and then run F<make>.
1510 It is even possible to download them to a different directory, and then change
1511 F<utf8_heavy.pl> in the directory C<\$Config{privlib}> to point to the new
1512 directory, or maybe make a copy of that directory before making the change, and
1513 using C<@INC> or the C<-I> run-time flag to switch between versions at will
1514 (but because of caching, not in the middle of a process), but all this is
1515 beyond the scope of these instructions.
1519 =head2 Interaction with Locales
1521 Use of locales with Unicode data may lead to odd results. Currently,
1522 Perl attempts to attach 8-bit locale info to characters in the range
1523 0..255, but this technique is demonstrably incorrect for locales that
1524 use characters above that range when mapped into Unicode. Perl's
1525 Unicode support will also tend to run slower. Use of locales with
1526 Unicode is discouraged.
1528 =head2 Problems with characters in the Latin-1 Supplement range
1530 See L</The "Unicode Bug">
1532 =head2 Problems with case-insensitive regular expression matching
1534 There are problems with case-insensitive matches, including those involving
1535 character classes (enclosed in [square brackets]), characters whose fold
1536 is to multiple characters (such as the single character LATIN SMALL LIGATURE
1537 FFL matches case-insensitively with the 3-character string C<ffl>), and
1538 characters in the Latin-1 Supplement.
1540 =head2 Interaction with Extensions
1542 When Perl exchanges data with an extension, the extension should be
1543 able to understand the UTF8 flag and act accordingly. If the
1544 extension doesn't know about the flag, it's likely that the extension
1545 will return incorrectly-flagged data.
1547 So if you're working with Unicode data, consult the documentation of
1548 every module you're using if there are any issues with Unicode data
1549 exchange. If the documentation does not talk about Unicode at all,
1550 suspect the worst and probably look at the source to learn how the
1551 module is implemented. Modules written completely in Perl shouldn't
1552 cause problems. Modules that directly or indirectly access code written
1553 in other programming languages are at risk.
1555 For affected functions, the simple strategy to avoid data corruption is
1556 to always make the encoding of the exchanged data explicit. Choose an
1557 encoding that you know the extension can handle. Convert arguments passed
1558 to the extensions to that encoding and convert results back from that
1559 encoding. Write wrapper functions that do the conversions for you, so
1560 you can later change the functions when the extension catches up.
1562 To provide an example, let's say the popular Foo::Bar::escape_html
1563 function doesn't deal with Unicode data yet. The wrapper function
1564 would convert the argument to raw UTF-8 and convert the result back to
1565 Perl's internal representation like so:
1567 sub my_escape_html ($) {
1569 return unless defined $what;
1570 Encode::decode_utf8(Foo::Bar::escape_html(Encode::encode_utf8($what)));
1573 Sometimes, when the extension does not convert data but just stores
1574 and retrieves them, you will be in a position to use the otherwise
1575 dangerous Encode::_utf8_on() function. Let's say the popular
1576 C<Foo::Bar> extension, written in C, provides a C<param> method that
1577 lets you store and retrieve data according to these prototypes:
1579 $self->param($name, $value); # set a scalar
1580 $value = $self->param($name); # retrieve a scalar
1582 If it does not yet provide support for any encoding, one could write a
1583 derived class with such a C<param> method:
1586 my($self,$name,$value) = @_;
1587 utf8::upgrade($name); # make sure it is UTF-8 encoded
1588 if (defined $value) {
1589 utf8::upgrade($value); # make sure it is UTF-8 encoded
1590 return $self->SUPER::param($name,$value);
1592 my $ret = $self->SUPER::param($name);
1593 Encode::_utf8_on($ret); # we know, it is UTF-8 encoded
1598 Some extensions provide filters on data entry/exit points, such as
1599 DB_File::filter_store_key and family. Look out for such filters in
1600 the documentation of your extensions, they can make the transition to
1601 Unicode data much easier.
1605 Some functions are slower when working on UTF-8 encoded strings than
1606 on byte encoded strings. All functions that need to hop over
1607 characters such as length(), substr() or index(), or matching regular
1608 expressions can work B<much> faster when the underlying data are
1611 In Perl 5.8.0 the slowness was often quite spectacular; in Perl 5.8.1
1612 a caching scheme was introduced which will hopefully make the slowness
1613 somewhat less spectacular, at least for some operations. In general,
1614 operations with UTF-8 encoded strings are still slower. As an example,
1615 the Unicode properties (character classes) like C<\p{Nd}> are known to
1616 be quite a bit slower (5-20 times) than their simpler counterparts
1617 like C<\d> (then again, there 268 Unicode characters matching C<Nd>
1618 compared with the 10 ASCII characters matching C<d>).
1620 =head2 Problems on EBCDIC platforms
1622 There are a number of known problems with Perl on EBCDIC platforms. If you
1623 want to use Perl there, send email to perlbug@perl.org.
1625 In earlier versions, when byte and character data were concatenated,
1626 the new string was sometimes created by
1627 decoding the byte strings as I<ISO 8859-1 (Latin-1)>, even if the
1628 old Unicode string used EBCDIC.
1630 If you find any of these, please report them as bugs.
1632 =head2 Porting code from perl-5.6.X
1634 Perl 5.8 has a different Unicode model from 5.6. In 5.6 the programmer
1635 was required to use the C<utf8> pragma to declare that a given scope
1636 expected to deal with Unicode data and had to make sure that only
1637 Unicode data were reaching that scope. If you have code that is
1638 working with 5.6, you will need some of the following adjustments to
1639 your code. The examples are written such that the code will continue
1640 to work under 5.6, so you should be safe to try them out.
1646 A filehandle that should read or write UTF-8
1649 binmode $fh, ":encoding(utf8)";
1654 A scalar that is going to be passed to some extension
1656 Be it Compress::Zlib, Apache::Request or any extension that has no
1657 mention of Unicode in the manpage, you need to make sure that the
1658 UTF8 flag is stripped off. Note that at the time of this writing
1659 (October 2002) the mentioned modules are not UTF-8-aware. Please
1660 check the documentation to verify if this is still true.
1664 $val = Encode::encode_utf8($val); # make octets
1669 A scalar we got back from an extension
1671 If you believe the scalar comes back as UTF-8, you will most likely
1672 want the UTF8 flag restored:
1676 $val = Encode::decode_utf8($val);
1681 Same thing, if you are really sure it is UTF-8
1685 Encode::_utf8_on($val);
1690 A wrapper for fetchrow_array and fetchrow_hashref
1692 When the database contains only UTF-8, a wrapper function or method is
1693 a convenient way to replace all your fetchrow_array and
1694 fetchrow_hashref calls. A wrapper function will also make it easier to
1695 adapt to future enhancements in your database driver. Note that at the
1696 time of this writing (October 2002), the DBI has no standardized way
1697 to deal with UTF-8 data. Please check the documentation to verify if
1701 my($self, $sth, $what) = @_; # $what is one of fetchrow_{array,hashref}
1707 my @arr = $sth->$what;
1709 defined && /[^\000-\177]/ && Encode::_utf8_on($_);
1713 my $ret = $sth->$what;
1715 for my $k (keys %$ret) {
1716 defined && /[^\000-\177]/ && Encode::_utf8_on($_) for $ret->{$k};
1720 defined && /[^\000-\177]/ && Encode::_utf8_on($_) for $ret;
1730 A large scalar that you know can only contain ASCII
1732 Scalars that contain only ASCII and are marked as UTF-8 are sometimes
1733 a drag to your program. If you recognize such a situation, just remove
1736 utf8::downgrade($val) if $] > 5.007;
1742 L<perlunitut>, L<perluniintro>, L<perluniprops>, L<Encode>, L<open>, L<utf8>, L<bytes>,
1743 L<perlretut>, L<perlvar/"${^UNICODE}">
1744 L<http://www.unicode.org/reports/tr44>).