=head1 NAME

perluniintro - Perl Unicode introduction

=head1 DESCRIPTION

This document gives a general idea of Unicode and how to use Unicode
in Perl.

=head2 Unicode
Unicode is a character set standard with plans to cover all of the
writing systems of the world, plus many other symbols.
Unicode and ISO/IEC 10646 are coordinated standards that provide code
points for the characters in almost all modern character set standards,
covering more than 30 writing systems and hundreds of languages,
including all commercially important modern languages. All characters
in the largest Chinese, Japanese, and Korean dictionaries are also
encoded. The standards will eventually cover almost all characters in
more than 250 writing systems and thousands of languages.
A Unicode I<character> is an abstract entity. It is not bound to any
particular integer width, and especially not to the C language C<char>.
Unicode is language neutral and display neutral: it doesn't encode the
language of the text, and it doesn't define fonts or other graphical
layout details. Unicode operates on characters and on text built from
those characters.
Unicode defines characters like C<LATIN CAPITAL LETTER A> or C<GREEK
SMALL LETTER ALPHA>, and then unique numbers for those characters:
hexadecimal 0x0041 and 0x03B1, respectively. Such unique
numbers are called I<code points>.
The Unicode standard prefers using hexadecimal notation for the code
points. (In case this notation, numbers like 0x0041, is unfamiliar to
you, take a peek at a later section, L</"Hexadecimal Notation">.)
The Unicode standard uses the notation C<U+0041 LATIN CAPITAL LETTER A>,
which gives the hexadecimal code point and the normative name of
the character.
Unicode also defines various I<properties> for the characters, like
"uppercase" or "lowercase", "decimal digit", or "punctuation":
these properties are independent of the names of the characters.
Furthermore, various operations on the characters, like uppercasing,
lowercasing, and collating (sorting), are defined.
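In Perl these properties and operations are visible, for example, through C<uc()> and the C<\p{...}> regular expression escapes; a minimal sketch (see L<perlunicode> for the full list of properties):

```perl
use charnames ':full';

my $alpha = "\N{GREEK SMALL LETTER ALPHA}";  # U+03B1
my $upper = uc($alpha);                      # uppercasing is defined by Unicode

printf "%04X\n", ord($upper);                # 0391, GREEK CAPITAL LETTER ALPHA

# the "uppercase" property is independent of the character's name
print "uppercase\n" if $upper =~ /\p{IsUpper}/;
```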
A Unicode character consists either of a single code point, or of a
I<base character> (like C<LATIN CAPITAL LETTER A>), followed by one or
more I<modifiers> (like C<COMBINING ACUTE ACCENT>). This sequence of
a base character and modifiers is called a I<combining character
sequence>.
Whether to call these combining character sequences, as a whole,
"characters" depends on your point of view. If you are a programmer,
you probably tend to see each element in the sequence as one unit, one
"character", but from the user's viewpoint, the whole sequence is
probably considered one "character", since that's what it looks like
in the context of the user's language.
With this "as a whole" view of characters, the number of characters is
open-ended. But in the programmer's "one unit is one character" point of
view, the concept of "characters" is more deterministic, and so we take
that point of view in this document: one "character" is one Unicode
code point, be it a base character or a combining character.
For some of the combinations there are I<precomposed> characters:
C<LATIN CAPITAL LETTER A WITH ACUTE>, for example, is defined as
a single code point. These precomposed characters are, however,
available only for some combinations, and they are mainly
meant to support round-trip conversions between Unicode and legacy
standards (like ISO 8859); in the general case the composing
method is more extensible. To support conversion between the
different compositions of the characters, various I<normalization
forms> are also defined.
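The normalization forms are available in Perl through the C<Unicode::Normalize> module (shipped with Perl 5.8.0); a minimal sketch:

```perl
use Unicode::Normalize qw(NFC NFD);
use charnames ':full';

my $precomposed = "\N{LATIN CAPITAL LETTER A WITH ACUTE}";                # U+00C1
my $combining   = "\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}"; # U+0041 U+0301

# NFC composes, NFD decomposes; both spellings normalize to the same form
print NFC($combining)   eq $precomposed ? "NFC: same\n" : "NFC: different\n";
print NFD($precomposed) eq $combining   ? "NFD: same\n" : "NFD: different\n";
```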
Because of backward compatibility with legacy encodings, the "a unique
number for every character" idea breaks down a bit: "at least one
number for every character" is closer to the truth. (This happens when
the same character has been encoded in several legacy encodings.) The
converse is also not true: not every code point has an assigned
character. Firstly, there are unallocated code points within otherwise
used blocks. Secondly, there are special Unicode control characters
that do not represent true characters.
A common myth about Unicode is that it is "16-bit", that is, has only
0x10000 (or 65536) characters, from 0x0000 to 0xFFFF. B<This is untrue.>
Since Unicode 2.0, Unicode has been defined all the way up to 21 bits
(0x10FFFF), and since Unicode 3.1 characters have been defined beyond
0xFFFF. The first 0x10000 characters are called I<Plane 0>, or the
I<Basic Multilingual Plane> (BMP). With Unicode 3.1, 17 planes in all
are defined (but they are nowhere near full of defined characters yet).
Another myth is that the 256-character blocks have something to do
with languages: a block per language. B<This, too, is untrue.>
The division into blocks exists, but it is almost completely
accidental, an artifact of how the characters have historically been
allocated. Instead, there is a concept called I<scripts>, which may
be more useful: there is a C<Latin> script, a C<Greek> script, and so
on. Scripts usually span several parts of several blocks. For further
information see L<Unicode::UCD>.
The Unicode code points are just abstract numbers. To input and
output these abstract numbers, the numbers must be I<encoded> somehow.
Unicode defines several I<character encoding forms>, of which I<UTF-8>
is perhaps the most popular. UTF-8 is a variable-length encoding that
encodes Unicode characters as 1 to 6 bytes (only 4 with the currently
defined characters). Other encodings include UTF-16 and UTF-32 and their
big- and little-endian variants (UTF-8 is byte order independent).
ISO/IEC 10646 defines the UCS-2 and UCS-4 encoding forms.
For more information about encodings, for example to learn what
I<surrogates> and I<byte order marks> (BOMs) are, see L<perlunicode>.
=head2 Perl's Unicode Support
Starting from Perl 5.6.0, Perl has had the capability of handling
Unicode natively. The first recommended release for serious Unicode
work, however, is Perl 5.8.0. The maintenance release 5.6.1 fixed many
of the problems of the initial Unicode implementation, but for
example regular expressions still didn't really work with Unicode.
B<Starting from Perl 5.8.0, the use of C<use utf8> is no longer
necessary.> In earlier releases the C<utf8> pragma was used to declare
that operations in the current block or file would be Unicode-aware.
This model was found to be wrong, or at least clumsy: the Unicodeness
is now carried with the data, not attached to the operations. (There
is one remaining case where an explicit C<use utf8> is needed: if your
Perl script itself is encoded in UTF-8, you can use UTF-8 in your
variable and subroutine names, and in your string and regular
expression literals, by saying C<use utf8>. This is not the default
because that would break existing scripts having legacy 8-bit data in
them.)
=head2 Perl's Unicode Model
Perl supports both the old, pre-5.6, model of strings of eight-bit
native bytes, and strings of Unicode characters. The principle is
that Perl tries to keep its data as eight-bit bytes for as long as
possible, but as soon as Unicodeness cannot be avoided, the data is
transparently upgraded to Unicode.
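A small sketch of that principle: concatenating an eight-bit string with a Unicode string transparently upgrades the eight-bit data, and the result is simply characters:

```perl
my $bytes = "caf\xE9";       # four eight-bit (Latin-1) characters
my $wide  = chr(0x263A);     # WHITE SMILING FACE, forces Unicode
my $both  = $bytes . $wide;  # $bytes is upgraded behind the scenes

print length($both), "\n";   # 5: five characters, whatever the internal encoding
```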
The internal encoding of Unicode in Perl is UTF-8. The internal
encoding is normally hidden, however, and one need not and should not
worry about the internal encoding at all: it is all just characters.
Perl 5.8.0 will also support Unicode on EBCDIC platforms. There the
Unicode support is somewhat harder to implement, since additional
conversions are needed at every step. Because of these difficulties,
the Unicode support won't be quite as full as on other, mainly
ASCII-based, platforms (though it will be better than in the 5.6
series, which didn't work much at all on EBCDIC platforms). On EBCDIC
platforms the internal encoding form used is UTF-EBCDIC.
=head2 Creating Unicode

To create Unicode literals, use the C<\x{...}> notation in
double-quoted strings:

    my $smiley = "\x{263a}";

Similarly, for regular expression literals:

    $smiley =~ /\x{263a}/;

At run-time you can use C<chr()>:

    my $hebrew_alef = chr(0x05d0);

(See L</"Further Resources"> for how to find all these numeric codes.)

Naturally, C<ord()> will do the reverse: turn a character into a code
point.
Note that C<\x..>, C<\x{..}>, and C<chr(...)> for arguments less than
0x100 (decimal 256) will generate an eight-bit character for backward
compatibility with older Perls. For arguments of 0x100 or more,
Unicode will always be produced. If you want UTF-8 always, use
C<pack("U", ...)> instead of C<\x..>, C<\x{..}>, or C<chr()>.
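The difference is only in Perl's internal bookkeeping; the code points themselves compare equal either way:

```perl
my $byte = chr(0xFF);        # eight-bit character, for backward compatibility
my $uni  = pack("U", 0xFF);  # the same code point, but always Unicode (UTF-8)

print ord($byte) == ord($uni) ? "same code point\n" : "different\n";
```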
You can also use the C<charnames> pragma to invoke characters
by name in double-quoted strings:

    use charnames ':full';
    my $arabic_alef = "\N{ARABIC LETTER ALEF}";

And, as mentioned above, you can also C<pack()> numbers into Unicode
characters:

    my $georgian_an = pack("U", 0x10a0);
=head2 Handling Unicode

Handling Unicode is for the most part transparent: just use the
strings as usual. Functions like C<index()>, C<length()>, and
C<substr()> will work on Unicode characters, as will regular
expressions (see L<perlunicode> and L<perlretut>).

Note that Perl does B<not> consider combining character sequences
to be characters; for example

    use charnames ':full';
    print length("\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}"), "\n";

will print 2, not 1. The only exception is that regular expressions
have C<\X> for matching a combining character sequence.
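For example, C<\X> sees the two code points above as a single unit:

```perl
use charnames ':full';

my $seq = "\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}";

print length($seq), "\n";      # 2: two code points
my @units = $seq =~ /\X/g;
print scalar(@units), "\n";    # 1: one combining character sequence
```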
Life is not quite so transparent, however, when working with legacy
encodings, with I/O, and with certain special cases.
=head2 Legacy Encodings

When you combine legacy data and Unicode, the legacy data needs
to be upgraded to Unicode. Normally ISO 8859-1 (or EBCDIC, if
applicable) is assumed. You can override this assumption by
using the C<encoding> pragma, for example

    use encoding 'latin2'; # ISO 8859-2

in which case literals (string or regular expression) and chr/ord
in your whole script are assumed to produce Unicode characters from
ISO 8859-2 code points. Note that the matching of encoding
names is forgiving: instead of C<latin2> you could have said
C<Latin 2>, or C<iso8859-2>, and so forth. With just

    use encoding;
first the environment variable C<PERL_ENCODING> will be consulted,
and if that doesn't exist, ISO 8859-1 (Latin 1) will be assumed.
The C<Encode> module knows about many encodings and it has interfaces
for doing conversions between those encodings:

    use Encode 'from_to';
    from_to($data, "iso-8859-3", "utf-8"); # from legacy to utf-8
=head2 Unicode I/O

Normally, writing out Unicode data

    print FH chr(0x100), "\n";

will print out the raw UTF-8 bytes, but you will get a warning
if you use C<-w> or C<use warnings>. To avoid the
warning, open the stream explicitly in UTF-8:

    open FH, ">:utf8", "file";

and on already open streams, use C<binmode()>:

    binmode(STDOUT, ":utf8");
Reading in correctly formed UTF-8 data will not magically turn
the data into Unicode in Perl's eyes.

You can use either the C<':utf8'> I/O discipline when opening files

    open(my $fh, '<:utf8', 'anything');
    my $line_of_utf8 = <$fh>;
The I/O disciplines can also be specified more flexibly with
the C<open> pragma; see L<open>:

    use open ':utf8'; # input and output default discipline will be UTF-8
    open X, ">file";
    print X chr(0x100), "\n";
    close X;
    open Y, "<file";
    printf "%#x\n", ord(<Y>); # this should print 0x100
    close Y;
With the C<open> pragma you can use the C<:locale> discipline:

    $ENV{LANG} = 'ru_RU.KOI8-R';
    # the :locale will probe the locale environment variables like LANG
    use open OUT => ':locale'; # russki parusski
    open O, ">koi8";
    print O chr(0x430); # Unicode CYRILLIC SMALL LETTER A = KOI8-R 0xc1
    close O;
    open I, "<koi8";
    printf "%#x\n", ord(<I>); # this should print 0xc1
    close I;
or you can use the C<':encoding(...)'> discipline:

    open(my $epic, '<:encoding(iso-8859-7)', 'iliad.greek');
    my $line_of_iliad = <$epic>;
Both of these methods install a transparent filter on the I/O stream
that converts data from the specified encoding when it is read in from
the stream. In the first example the F<anything> file is assumed to be
UTF-8 encoded Unicode, in the second example the F<iliad.greek> file is
assumed to be ISO-8859-7 encoded Greek, but in both cases the lines
read in will be Unicode.

The L<open> pragma affects all the C<open()> calls after the pragma by
setting default disciplines. If you want to affect only certain
streams, use explicit disciplines directly in the C<open()> call.
You can switch encodings on an already opened stream by using
C<binmode()>; see L<perlfunc/binmode>.

The C<:locale> does not currently work with C<open()> and
C<binmode()>, only with the C<open> pragma. The C<:utf8> and
C<:encoding(...)> do work with all of C<open()>, C<binmode()>,
and the C<open> pragma.
Similarly, you may use these I/O disciplines on output streams to
automatically convert data into the specified encoding when it is
written to the stream:

    open(my $unicode, '<:utf8', 'japanese.uni');
    open(my $nihongo, '>:encoding(iso2022-jp)', 'japanese.jp');
    while (<$unicode>) { print $nihongo $_ }
The naming of encodings, both by the C<open()> and by the C<open>
pragma, is as forgiving as with the C<encoding> pragma:
C<koi8-r> and C<KOI8R> will both be understood.
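If you are curious what canonical name a given spelling resolves to, the C<Encode> module provides C<resolve_alias()>; a small sketch:

```perl
use Encode ();

# different spellings resolve to the same canonical encoding name
my $a = Encode::resolve_alias("KOI8R");
my $b = Encode::resolve_alias("koi8-r");
print $a eq $b ? "same encoding: $a\n" : "different\n";
```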
Common encodings recognized by ISO, MIME, IANA, and various other
standardisation organisations are recognised; for a more detailed
list, see L<Encode>.
C<read()> reads characters and returns the number of characters.
C<seek()> and C<tell()> operate on byte counts, as does C<sysread()>.
Notice that because of the default behaviour of "input is not UTF-8"
it is easy to mistakenly write code that keeps on expanding a file
by repeatedly encoding it in UTF-8:

    # BAD CODE WARNING
    open F, "file";
    local $/; # read in the whole file
    $t = <F>;
    close F;
    open F, ">:utf8", "file";
    print F $t;
    close F;

If you run this code twice, the contents of F<file> will be twice
UTF-8 encoded. A C<use open ':utf8'> would have avoided the bug, or
explicitly opening also F<file> for input as UTF-8.
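A version of the loop that also opens F<file> for input as UTF-8 is safe to re-run any number of times, since the re-encoding can no longer pile up (a sketch, under the same assumptions as the example above):

```perl
open F, "<:utf8", "file" or die "can't read file: $!";
local $/; # read in the whole file
my $t = <F>;
close F;

open F, ">:utf8", "file" or die "can't write file: $!";
print F $t;
close F;
```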
Bit Complement Operator ~ And vec()
The bit complement operator C<~> will produce surprising results if
used on strings containing Unicode characters. The results are
consistent with the internal UTF-8 encoding of the characters, but not
with much else, so don't do that. The same goes for C<vec()>: you will
be operating on the UTF-8 bit patterns of the Unicode characters, not
on the bytes, which is very probably not what you want.
One way of peeking inside the internal encoding of Unicode characters
is to use C<unpack("C*", ...)> to get the bytes, or C<unpack("H*", ...)>
to display them in hexadecimal:

    # this prints  c480  for the UTF-8 bytes 0xc4 0x80
    print unpack("H*", pack("U", 0x100)), "\n";
Yet another way would be to use the Devel::Peek module:

    perl -MDevel::Peek -e 'Dump(chr(0x100))'

That will show the UTF8 flag in FLAGS and both the UTF-8 bytes
and Unicode characters in PV. See also later in this document
the discussion about the C<is_utf8> function of the C<Encode> module.
=head2 Advanced Topics
The question of string equivalence turns somewhat complicated
in Unicode: what do you mean by "equal"?

(Is C<LATIN CAPITAL LETTER A WITH ACUTE> equal to
C<LATIN CAPITAL LETTER A>?)

The short answer is that by default Perl compares equivalence
(C<eq>, C<ne>) based only on the code points of the characters.
In the above case, the answer is no (because 0x00C1 != 0x0041). But
sometimes, any CAPITAL LETTER A should be considered equal, or even
any A of any case.
The long answer is that you need to consider character normalization
and casing issues: see L<Unicode::Normalize>, and Unicode Technical
Reports #15 and #21, I<Unicode Normalization Forms> and I<Case
Mappings>, http://www.unicode.org/unicode/reports/tr15/ and
http://www.unicode.org/unicode/reports/tr21/
As of Perl 5.8.0, Perl's regular expression case-ignoring matching
implements only 1:1 semantics: one character matches one character.
In I<Case Mappings> both 1:N and N:1 matches are defined.
People like to see their strings nicely sorted, or, as Unicode
parlance goes, collated. But again, what do you mean by "collate"?

(Does C<LATIN CAPITAL LETTER A WITH ACUTE> come before or after
C<LATIN CAPITAL LETTER A WITH GRAVE>?)

The short answer is that by default, Perl compares strings (C<lt>,
C<le>, C<cmp>, C<ge>, C<gt>) based only on the code points of the
characters. In the above case, the answer is "after", since
0x00C1 > 0x00C0.

The long answer is that "it depends", and a good answer cannot be
given without knowing (at the very least) the language context.
See L<Unicode::Collate>, and the I<Unicode Collation Algorithm>,
http://www.unicode.org/unicode/reports/tr10/
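The default, language-neutral Unicode Collation Algorithm is available through C<Unicode::Collate> (shipped with Perl 5.8.0); a minimal sketch:

```perl
use Unicode::Collate;

my $collator = Unicode::Collate->new();

# plain code-point order would sort "Banana" and "Cherry" before "apple";
# the UCA compares letters first and case only at a lower level
my @sorted = $collator->sort("Banana", "apple", "Cherry");
print "@sorted\n";   # apple Banana Cherry
```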
Character ranges in regular expression character classes (C</[a-z]/>)
and in the C<tr///> (also known as C<y///>) operator are not magically
Unicode-aware. What this means is that C<[a-z]> will not magically start
to mean "all alphabetic letters" (not that it means that even for
8-bit characters; you should be using C</[[:alpha:]]/> for that).

For specifying things like that in regular expressions, you can use
the various Unicode properties, C<\pL> in this particular case. You
can use Unicode code points as the end points of character ranges, but
that means just that particular code point range, nothing more. For
further information, see L<perlunicode>.
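A short illustration of the difference:

```perl
use charnames ':full';

my $str = "x\N{GREEK SMALL LETTER ALPHA}3";

my @letters = $str =~ /\p{L}/g;   # letters of any script
print scalar(@letters), "\n";     # 2: "x" and the alpha

my @ascii = $str =~ /[a-z]/g;     # just the literal range a..z
print scalar(@ascii), "\n";       # 1: only "x"
```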
String-To-Number Conversions

Unicode defines several decimal (and other numeric) characters
besides the familiar 0 to 9, such as the Arabic and Indic digits.
Perl does not support string-to-number conversion for digits other
than 0 to 9 (and a to f for hexadecimal).
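For example, ARABIC-INDIC DIGIT ONE has the decimal digit property (it even matches C<\d>), yet Perl's numeric conversion ignores it:

```perl
use charnames ':full';

my $one = "\N{ARABIC-INDIC DIGIT ONE}";   # U+0661

print "digit\n" if $one =~ /\d/;          # it does match \d

no warnings 'numeric';
print $one + 0, "\n";                     # 0: not converted, unlike "1" + 0
```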
=head2 Questions With Answers
=item Will My Old Scripts Break?

Very probably not. Unless you are generating Unicode characters
somehow, old behaviour should be preserved. About the only
behaviour that has changed and which could start generating Unicode
is the old behaviour of C<chr()>, where supplying an argument greater
than 255 produced a character modulo 255 (for example, C<chr(300)>
was equal to C<chr(45)>).
=item How Do I Make My Scripts Work With Unicode?

Very little work should be needed since nothing changes until you
somehow generate Unicode data. The greatest trick will be getting
input as Unicode, and for that see the earlier I/O discussion.
=item How Do I Know Whether My String Is In Unicode?

You shouldn't care. No, you really shouldn't. If you have
to care (beyond the cases described above), it means that we
didn't get the transparency of Unicode quite right.
If you do need to know, you can use the C<is_utf8()> function of the
C<Encode> module:

    use Encode 'is_utf8';
    print is_utf8($string) ? 1 : 0, "\n";
But note that this doesn't mean that any of the characters in the
string are necessarily UTF-8 encoded, or that any of the characters
have code points greater than 0xFF (255) or even 0x80 (128), or that
the string has any characters at all. All C<is_utf8()> does is to
return the value of the internal "utf8ness" flag attached to the
$string. If the flag is on, characters added to that string will be
automatically upgraded to UTF-8 (and even then only if they really
need to be upgraded, that is, if their code point is greater than 0xFF).
Sometimes you might really need to know the byte length of a string
instead of the character length. For that, use the C<bytes> pragma
and its only defined function, C<length()>:

    my $unicode = chr(0x100);
    print length($unicode), "\n"; # will print 1
    use bytes;
    print length($unicode), "\n"; # will print 2 (the 0xC4 0x80 of the UTF-8)
=item How Do I Detect Invalid UTF-8?

Either

    use Encode 'encode_utf8';
    if (encode_utf8($string)) {
        # valid
    } else {
        # invalid
    }

or

    use warnings;
    @chars = unpack("U0U*", "\xFF"); # will warn

The warning will be C<Malformed UTF-8 character (byte 0xff) in
unpack>. The "U0" means "expect strictly UTF-8 encoded Unicode".
Without that, the C<unpack("U*", ...)> would also accept data like
C<chr(0xFF)>.
=item How Do I Convert Data Into UTF-8? Or Vice Versa?

This probably isn't as useful (or as simple) as you might think.
Also, normally you shouldn't need to.

In one sense, what you are asking doesn't make much sense: UTF-8 is
(intended as) a Unicode encoding, so converting "data" into UTF-8
isn't meaningful unless you know what character set and encoding
the binary data is in, in which case you can use C<Encode>:

    use Encode 'from_to';
    from_to($data, "iso-8859-1", "utf-8"); # from latin-1 to utf-8
If you have ASCII (really 7-bit US-ASCII), you already have valid
UTF-8: the lowest 128 characters of UTF-8 encoded Unicode and
US-ASCII are identical.

If you have Latin-1 (or want Latin-1), you can just use pack/unpack:

    $latin1 = pack("C*", unpack("U*", $utf8));
    $utf8 = pack("U*", unpack("C*", $latin1));

(The same works for EBCDIC.)
If you have a sequence of bytes you B<know> is valid UTF-8,
but Perl doesn't know it yet, you can make Perl a believer, too:

    use Encode 'decode_utf8';
    $utf8 = decode_utf8($bytes);
You can convert well-formed UTF-8 into a sequence of bytes, but if
you just want to convert random binary data into UTF-8, you can't:
a random collection of bytes isn't well-formed UTF-8. You can
use C<unpack("C*", $string)> for the former, and you can create
well-formed Unicode/UTF-8 data with C<pack("U*", 0xff, ...)>.
=item How Do I Display Unicode? How Do I Input Unicode?

See http://www.hclrss.demon.co.uk/unicode/ and
http://www.cl.cam.ac.uk/~mgk25/unicode.html
=item How Does Unicode Work With Traditional Locales?

In Perl, not very well. Avoid using locales through the C<locale>
pragma. Use only one or the other.
=head2 Hexadecimal Notation

The Unicode standard prefers hexadecimal notation because that
better shows the division of Unicode into blocks of 256 characters.
Hexadecimal is also simply shorter than decimal. You can use decimal
notation, too, but learning to use hexadecimal just makes life easier
with the Unicode standard.
The C<0x> prefix means a hexadecimal number, whose digits are 0-9 I<and>
a-f (or A-F, case doesn't matter). Each hexadecimal digit represents
four bits, or half a byte. C<print 0x..., "\n"> will show a
hexadecimal number in decimal, and C<printf "%x\n", $decimal> will
show a decimal number in hexadecimal. If you have just the
"hex digits" of a hexadecimal number, you can use the C<hex()>
function:
    print 0x0009, "\n"; # 9
    print 0x000a, "\n"; # 10
    print 0x000f, "\n"; # 15
    print 0x0010, "\n"; # 16
    print 0x0011, "\n"; # 17
    print 0x0100, "\n"; # 256

    print 0x0041, "\n"; # 65

    printf "%x\n", 65; # 41
    printf "%#x\n", 65; # 0x41

    print hex("41"), "\n"; # 65
=head2 Further Resources

http://www.unicode.org/

http://www.unicode.org/unicode/faq/

http://www.unicode.org/glossary/

Unicode Useful Resources

http://www.unicode.org/unicode/onlinedat/resources.html

Unicode and Multilingual Support in HTML, Fonts, Web Browsers and Other Applications

http://www.hclrss.demon.co.uk/unicode/

UTF-8 and Unicode FAQ for Unix/Linux

http://www.cl.cam.ac.uk/~mgk25/unicode.html

Legacy Character Sets

http://www.czyborra.com/
http://www.eki.ee/letter/
The Unicode support files live within the Perl installation in the
directory

    $Config{installprivlib}/unicore

in Perl 5.8.0 or newer, and

    $Config{installprivlib}/unicode

in the Perl 5.6 series. (The renaming to F<lib/unicore> was done to
avoid naming conflicts with F<lib/Unicode> in case-insensitive
filesystems.) The main Unicode data file is F<Unicode.txt> (or
F<Unicode.301> in Perl 5.6.1.) You can find the
C<$Config{installprivlib}> by

    perl "-V:installprivlib"
Note that some of the files have been renamed from the Unicode
standard, since the Perl installation tries to live by the "8.3"
filenaming restrictions. The renamings are shown in the
accompanying F<rename> file.
You can explore various information from the Unicode data files using
the C<Unicode::UCD> module.
=head1 SEE ALSO

L<perlunicode>, L<Encode>, L<encoding>, L<open>, L<utf8>, L<bytes>,
L<perlretut>, L<Unicode::Collate>, L<Unicode::Normalize>, L<Unicode::UCD>
=head1 ACKNOWLEDGEMENTS

Thanks to the kind readers of the perl5-porters@perl.org,
perl-unicode@perl.org, linux-utf8@nl.linux.org, and unicore@unicode.org
mailing lists for their valuable feedback.
=head1 AUTHOR, COPYRIGHT, AND LICENSE

Copyright 2001 Jarkko Hietaniemi <jhi@iki.fi>

This document may be distributed under the same terms as Perl itself.