=head1 NAME

perluniintro - Perl Unicode introduction

=head1 DESCRIPTION

This document gives a general idea of Unicode and how to use Unicode
in Perl.

=head2 Unicode

Unicode is a character set standard with plans to cover all of the
writing systems of the world, plus many other symbols.

Unicode and ISO/IEC 10646 are coordinated standards that provide code
points for the characters in almost all modern character set standards,
covering more than 30 writing systems and hundreds of languages,
including all commercially important modern languages.  All characters
in the largest Chinese, Japanese, and Korean dictionaries are also
encoded.  The standards will eventually cover almost all characters in
more than 250 writing systems and thousands of languages.

A Unicode I<character> is an abstract entity.  It is not bound to any
particular integer width, and especially not to the C language C<char>.
Unicode is language neutral and display neutral: it doesn't encode the
language of the text, and it doesn't define fonts or other graphical
layout details.  Unicode operates on characters and on text built from
those characters.

Unicode defines characters like C<LATIN CAPITAL LETTER A> or C<GREEK
SMALL LETTER ALPHA>, and then assigns unique numbers to them:
hexadecimal 0x0041 and 0x03B1 for those particular characters.  Such
unique numbers are called I<code points>.

The Unicode standard prefers hexadecimal notation for the code points.
(If this notation, numbers like 0x0041, is unfamiliar to you, take a
peek at a later section, L</"Hexadecimal Notation">.)  The Unicode
standard uses the notation C<U+0041 LATIN CAPITAL LETTER A>, which
gives the hexadecimal code point and the normative name of the
character.

Unicode also defines various I<properties> for the characters, like
"uppercase" or "lowercase", "decimal digit", or "punctuation":
these properties are independent of the names of the characters.
Furthermore, various operations on the characters like uppercasing,
lowercasing, and collating (sorting) are defined.

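For instance, Perl exposes the casing operations through C<uc()> and
C<lc()>, and the properties through regular expression escapes such as
C<\p{...}>.  A minimal sketch (the Greek letter is an arbitrary
example):

```perl
my $alpha = "\x{03B1}";           # GREEK SMALL LETTER ALPHA
my $Alpha = uc($alpha);           # GREEK CAPITAL LETTER ALPHA
printf "U+%04X\n", ord($Alpha);   # prints "U+0391"
print "uppercase\n" if $Alpha =~ /\p{IsUpper}/;
```
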
A Unicode character consists either of a single code point, or a
I<base character> (like C<LATIN CAPITAL LETTER A>), followed by one or
more I<modifiers> (like C<COMBINING ACUTE ACCENT>).  This sequence of
a base character and modifiers is called a I<combining character
sequence>.

Whether to call these combining character sequences, as a whole,
"characters" depends on your point of view.  If you are a programmer,
you probably would tend towards seeing each element in the sequences
as one unit, one "character", but from the user viewpoint, the
sequence as a whole is probably considered one "character", since
that's probably what it looks like in the context of the user's
language.

With this "as a whole" view of characters, the number of characters is
open-ended.  But in the programmer's "one unit is one character" point
of view, the concept of "characters" is more deterministic, and so we
take that point of view in this document: one "character" is one
Unicode code point, be it a base character or a combining character.

For some of the combinations there are I<precomposed> characters.
For example C<LATIN CAPITAL LETTER A WITH ACUTE> is defined as
a single code point.  These precomposed characters are, however,
often available only for some combinations, and mainly they are
meant to support round-trip conversions between Unicode and legacy
standards (like ISO 8859); in the general case the composing
method is more extensible.  To support conversion between the
different compositions of the characters, various I<normalization
forms> are also defined.

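The L<Unicode::Normalize> module, bundled with Perl 5.8.0, implements
these normalization forms.  A small sketch, decomposing a precomposed
character and recomposing the equivalent sequence:

```perl
use Unicode::Normalize qw(NFD NFC);

my $precomposed = "\x{00C1}";          # LATIN CAPITAL LETTER A WITH ACUTE
my $decomposed  = NFD($precomposed);   # "A" followed by COMBINING ACUTE ACCENT
printf "%d characters after NFD\n", length($decomposed);   # prints "2 ..."
print "round trip ok\n" if NFC($decomposed) eq $precomposed;
```
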
Because of backward compatibility with legacy encodings, the "a unique
number for every character" idea breaks down a bit: "at least one
number for every character" is closer to the truth.  (This happens when
the same character has been encoded in several legacy encodings.)  The
converse is also not true: not every code point has an assigned
character.  Firstly, there are unallocated code points within
otherwise used blocks.  Secondly, there are special Unicode control
characters that do not represent true characters.

A common myth about Unicode is that it is "16-bit", that is, only
0x10000 (or 65536) characters from 0x0000 to 0xFFFF.  B<This is
untrue.>  Since Unicode 2.0, Unicode has been defined all the way up
to 21 bits (0x10FFFF), and since Unicode 3.1, characters have been
defined beyond 0xFFFF.  The first 0x10000 characters are called
I<Plane 0>, or the I<Basic Multilingual Plane> (BMP).  As of Unicode
3.1, 17 planes in all are defined (but they are nowhere near full of
defined characters yet).

Another myth is that the 256-character blocks have something to do
with languages: a block per language.  B<This too is untrue.>
The division into blocks exists, but it is almost completely
accidental, an artifact of how the characters have been historically
allocated.  Instead, there is a concept called I<scripts>, which may
be more useful: there is the C<Latin> script, the C<Greek> script, and
so on.  Scripts usually span several parts of several blocks.  For
further information see L<Unicode::UCD>.

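The L<Unicode::UCD> module can report, among other things, which
script a character belongs to.  A quick sketch using its C<charinfo()>
function (the exact capitalisation of the script name may vary between
Unicode::UCD versions):

```perl
use Unicode::UCD 'charinfo';

my $info = charinfo(0x03B1);    # GREEK SMALL LETTER ALPHA
print $info->{name},   "\n";    # prints "GREEK SMALL LETTER ALPHA"
print $info->{script}, "\n";    # its script, e.g. "Greek"
```
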
The Unicode code points are just abstract numbers.  To input and
output these abstract numbers, the numbers must be I<encoded> somehow.
Unicode defines several I<character encoding forms>, of which I<UTF-8>
is perhaps the most popular.  UTF-8 is a variable-length encoding that
encodes Unicode characters as 1 to 6 bytes (only 4 with the currently
defined characters).  Other encodings include UTF-16 and UTF-32 and
their big- and little-endian variants (UTF-8 is byte order
independent).  ISO/IEC 10646 defines the UCS-2 and UCS-4 encoding
forms.

For more information about encodings, for example to learn what
I<surrogates> and I<byte order marks> (BOMs) are, see L<perlunicode>.

=head2 Perl's Unicode Support

Starting from Perl 5.6.0, Perl has had the capability of handling
Unicode natively.  The first recommended release for serious Unicode
work is Perl 5.8.0, however.  The maintenance release 5.6.1 fixed many
of the problems of the initial implementation of Unicode, but for
example regular expressions still didn't really work with Unicode.

B<Starting from Perl 5.8.0, the use of C<use utf8> is no longer
necessary.>  In earlier releases the C<utf8> pragma was used to declare
that operations in the current block or file would be Unicode-aware.
This model was found to be wrong, or at least clumsy: the "Unicodeness"
is now carried with the data, not attached to the operations.  (There
is one remaining case where an explicit C<use utf8> is needed: if your
Perl script itself is encoded in UTF-8, you can use UTF-8 in your
variable and subroutine names, and in your string and regular
expression literals, by saying C<use utf8>.  This is not the default
because that would break existing scripts having legacy 8-bit data in
them.)

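A minimal sketch of that remaining case: a script saved in the UTF-8
encoding can declare so and then use non-ASCII characters directly in
its literals (the sample string here is, of course, just an invented
example):

```perl
use utf8;                       # this source file is encoded in UTF-8

my $greeting = "naïve café";    # non-ASCII literal, read as characters
print length($greeting), "\n";  # prints 10 (characters, not the 12 bytes)
```
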
=head2 Perl's Unicode Model

Perl supports both the old, pre-5.6, model of strings of eight-bit
native bytes, and strings of Unicode characters.  The principle is
that Perl tries to keep its data as eight-bit bytes for as long as
possible, but as soon as Unicodeness cannot be avoided, the data is
transparently upgraded to Unicode.

Internally, Perl currently uses either whatever native eight-bit
character set the platform uses (for example Latin-1), or UTF-8, to
encode Unicode strings.  Specifically, if all code points in the
string are 0xFF or less, Perl uses the native eight-bit character set.
Otherwise, it uses UTF-8.

A user of Perl does not normally need to know nor care how Perl
happens to encode its internal strings, but it becomes relevant when
outputting Unicode strings to a stream without a discipline (one with
the "default default").  In such a case, the raw bytes used internally
(the native character set or UTF-8, as appropriate for each string)
will be used, and if warnings are turned on, a "Wide character"
warning will be issued if those strings contain a character beyond
0x00FF.

For example,

    perl -w -e 'print "\x{DF}\n", "\x{0100}\x{DF}\n"'

produces a fairly useless mixture of native bytes and UTF-8, as well
as a warning.

To output UTF-8 always, use the C<:utf8> output discipline.  Prepending

    binmode(STDOUT, ":utf8");

to this sample program ensures that the output is completely UTF-8,
and of course, removes the warning.

Perl 5.8.0 also supports Unicode on EBCDIC platforms.  There, the
Unicode support is somewhat harder to implement since additional
conversions are needed at every step.  Because of these difficulties,
the Unicode support isn't quite as full as on other, mainly
ASCII-based, platforms (though the Unicode support is better than in
the 5.6 series, which didn't work much at all for EBCDIC platforms).
On EBCDIC platforms, the internal Unicode encoding form is UTF-EBCDIC
instead of UTF-8 (the difference is that, as UTF-8 is "ASCII-safe" in
that ASCII characters encode to UTF-8 as-is, UTF-EBCDIC is
"EBCDIC-safe").

=head2 Creating Unicode

To create Unicode characters in literals for code points above 0xFF,
use the C<\x{...}> notation in double-quoted strings:

    my $smiley = "\x{263a}";

Similarly, in regular expression literals:

    $smiley =~ /\x{263a}/;

At run-time you can use C<chr()>:

    my $hebrew_alef = chr(0x05d0);

(See L</"Further Resources"> for how to find all these numeric codes.)

Naturally, C<ord()> will do the reverse: turn a character into a code
point.

Note that C<\x..> (no C<{}> and only two hexadecimal digits),
C<\x{...}>, and C<chr(...)> for arguments less than 0x100 (decimal
256) generate an eight-bit character for backward compatibility with
older Perls.  For arguments of 0x100 or more, Unicode characters are
always produced.  If you want to force the production of Unicode
characters regardless of the numeric value, use C<pack("U", ...)>
instead of C<\x..>, C<\x{...}>, or C<chr()>.

You can also use the C<charnames> pragma to invoke characters
by name in double-quoted strings:

    use charnames ':full';
    my $arabic_alef = "\N{ARABIC LETTER ALEF}";

And, as mentioned above, you can also C<pack()> numbers into Unicode
characters:

    my $georgian_an = pack("U", 0x10a0);

Note that both C<\x{...}> and C<\N{...}> are compile-time string
constants: you cannot use variables in them.  If you want similar
run-time functionality, use C<chr()> and C<charnames::vianame()>.

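A run-time lookup by name might look like the following small sketch;
C<charnames::vianame()> returns the code point as a number:

```perl
use charnames ':full';

my $name  = "GREEK SMALL LETTER ALPHA";    # could come from user input
my $code  = charnames::vianame($name);     # 0x03B1
my $alpha = chr($code);
printf "%s is U+%04X\n", $name, $code;     # prints "... is U+03B1"
```
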
=head2 Handling Unicode

Handling Unicode is for the most part transparent: just use the
strings as usual.  Functions like C<index()>, C<length()>, and
C<substr()> will work on the Unicode characters; regular expressions
will work on the Unicode characters (see L<perlunicode> and
L<perlretut>).

Note that Perl does B<not> consider combining character sequences
to be single characters, so for example

    use charnames ':full';
    print length("\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}"), "\n";

will print 2, not 1.  The only exception is that regular expressions
have C<\X> for matching a combining character sequence.

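A short sketch of C<\X> in action, matching the two-code-point
sequence as one user-visible character:

```perl
use charnames ':full';

my $acute_A = "\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}";
my @units = $acute_A =~ /(\X)/g;    # each \X is one "user character"
print scalar(@units), "\n";         # prints 1, even though length() is 2
```
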
Life is not quite so transparent when working with legacy encodings,
with I/O, and in certain special cases.

=head2 Legacy Encodings

When you combine legacy data and Unicode, the legacy data needs
to be upgraded to Unicode.  Normally ISO 8859-1 (or EBCDIC, if
applicable) is assumed.  You can override this assumption by
using the C<encoding> pragma, for example

    use encoding 'latin2'; # ISO 8859-2

in which case literals (string or regular expression) and C<chr>/C<ord>
in your whole script are assumed to produce Unicode characters from
ISO 8859-2 code points.  Note that the matching for the encoding
names is forgiving: instead of C<latin2> you could have said
C<Latin 2>, or C<iso8859-2>, and so forth.  With just

    use encoding;

first the environment variable C<PERL_ENCODING> will be consulted,
and if that doesn't exist, ISO 8859-1 (Latin 1) will be assumed.

The C<Encode> module knows about many encodings and has interfaces
for doing conversions between those encodings:

    use Encode 'from_to';
    from_to($data, "iso-8859-3", "utf-8"); # from legacy to utf-8

=head2 Unicode I/O

Normally, writing out Unicode data

    print FH $some_string_with_unicode, "\n";

produces raw bytes that Perl happens to use to internally encode the
Unicode string (which depends on the system, as well as on what
characters happen to be in the string at the time).  If any of the
characters are at code points 0x100 or above, you will get a warning
if you use C<-w> or C<use warnings>.  To ensure that the output is
explicitly rendered in the encoding you desire (and to avoid the
warning), open the stream with the desired encoding.  Some examples:

    open FH, ">:ucs2",      "file";
    open FH, ">:utf8",      "file";
    open FH, ">:Shift-JIS", "file";

and on already open streams use C<binmode()>:

    binmode(STDOUT, ":ucs2");
    binmode(STDOUT, ":utf8");
    binmode(STDOUT, ":Shift-JIS");

See the documentation of the C<Encode> module for the many supported
encodings.

Reading in a file that you know happens to be encoded in one of the
Unicode encodings does not magically turn the data into Unicode in
Perl's eyes.  To do that, specify the appropriate discipline when
opening files:

    open(my $fh, '<:utf8', 'anything');
    my $line_of_unicode = <$fh>;

    open(my $fh, '<:Big5', 'anything');
    my $line_of_unicode = <$fh>;

The I/O disciplines can also be specified more flexibly with
the C<open> pragma; see L<open>:

    use open ':utf8'; # input and output default discipline will be UTF-8
    open X, ">file";
    print X chr(0x100), "\n";
    close X;
    open Y, "<file";
    printf "%#x\n", ord(<Y>); # this should print 0x100
    close Y;

With the C<open> pragma you can use the C<:locale> discipline:

    $ENV{LC_ALL} = $ENV{LANG} = 'ru_RU.KOI8-R';
    # the :locale will probe the locale environment variables like LC_ALL
    use open OUT => ':locale'; # russki parusski
    open(O, ">koi8");
    print O chr(0x430); # Unicode CYRILLIC SMALL LETTER A = KOI8-R 0xc1
    close O;
    open(I, "<koi8");
    printf "%#x\n", ord(<I>); # this should print 0xc1
    close I;

or you can also use the C<':encoding(...)'> discipline:

    open(my $epic, '<:encoding(iso-8859-7)', 'iliad.greek');
    my $line_of_unicode = <$epic>;

These methods install a transparent filter on the I/O stream that
converts data from the specified encoding when it is read in from the
stream.  The result is always Unicode.

The L<open> pragma affects all the C<open()> calls after the pragma by
setting default disciplines.  If you want to affect only certain
streams, use explicit disciplines directly in the C<open()> call.

You can switch encodings on an already opened stream by using
C<binmode()>; see L<perlfunc/binmode>.

The C<:locale> discipline does not currently (as of Perl 5.8.0) work
with C<open()> and C<binmode()>, only with the C<open> pragma.  The
C<:utf8> and C<:encoding(...)> disciplines do work with all of
C<open()>, C<binmode()>, and the C<open> pragma.

Similarly, you may use these I/O disciplines on output streams to
automatically convert Unicode to the specified encoding when it is
written to the stream.  For example, the following snippet copies the
contents of the file "text.jis" (encoded as ISO-2022-JP, aka JIS) to
the file "text.utf8", encoded as UTF-8:

    open(my $nihongo, '<:encoding(iso2022-jp)', 'text.jis');
    open(my $unicode, '>:utf8',                 'text.utf8');
    while (<$nihongo>) { print $unicode }

The naming of encodings, both by the C<open()> and by the C<open>
pragma, is similarly forgiving as with the C<encoding> pragma:
C<koi8-r> and C<KOI8R> will both be understood.

Common encoding names recognized by ISO, MIME, IANA, and various other
standardisation organisations are accepted; for a more detailed
list see L<Encode>.

C<read()> reads characters and returns the number of characters.
C<seek()> and C<tell()> operate on byte counts, as do C<sysread()>
and C<sysseek()>.

Notice that because the default behaviour is not to do any conversion
upon input if there is no default discipline, it is easy to mistakenly
write code that keeps on expanding a file by repeatedly encoding it:

    # BAD CODE WARNING
    open F, "file";
    local $/; ## read in the whole file of 8-bit characters
    $t = <F>;
    close F;
    open F, ">:utf8", "file";
    print F $t; ## convert to UTF-8 on output
    close F;

If you run this code twice, the contents of the F<file> will be UTF-8
encoded twice.  A C<use open ':utf8'> would have avoided the bug, as
would have explicitly opening the F<file> also for input as UTF-8.

B<NOTE>: the C<:utf8> and C<:encoding> features work only if your
Perl has been built with the new "perlio" feature.  Almost all
Perl 5.8 platforms do use "perlio", though: you can see whether
yours is one of them by running "perl -V" and looking for
C<useperlio=define>.

=head2 Displaying Unicode As Text

Sometimes you might want to display Perl scalars containing Unicode as
simple ASCII (or EBCDIC) text.  The following subroutine converts
its argument so that Unicode characters with code points greater than
255 are displayed as "\x{...}", control characters (like "\n") are
displayed as "\x..", and the rest of the characters as themselves:

    sub nice_string {
        join("",
            map { $_ > 255 ?                  # if wide character...
                  sprintf("\\x{%04X}", $_) :  # \x{...}
                  chr($_) =~ /[[:cntrl:]]/ ?  # else if control character ...
                  sprintf("\\x%02X", $_) :    # \x..
                  chr($_)                     # else as themselves
                } unpack("U*", $_[0]));       # unpack Unicode characters
    }

For example,

    nice_string("foo\x{100}bar\n")

returns:

    "foo\x{0100}bar\x0A"

=head2 Special Cases

=over 4

=item *

Bit Complement Operator ~ And vec()

The bit complement operator C<~> may produce surprising results if
used on strings containing Unicode characters.  The results are
consistent with the internal encoding of the characters, but not with
much else.  So don't do that.  Similarly for C<vec()>: you will be
operating on the internally encoded bit patterns of the Unicode
characters, not on the code point values, which is very probably not
what you want.

=item *

Peeking At Perl's Internal Encoding

Normal users of Perl should never care how Perl encodes any particular
Unicode string (because the normal ways to get at the contents of a
string with Unicode -- via input and output -- should always be via
explicitly-defined I/O disciplines).  But if you must, there are two
ways of looking behind the scenes.

One way of peeking inside the internal encoding of Unicode characters
is to use C<unpack("C*", ...)> to get the bytes, or
C<unpack("H*", ...)> to display the bytes:

    # this prints  c480  for the UTF-8 bytes 0xc4 0x80
    print join(" ", unpack("H*", pack("U", 0x100))), "\n";

Another way is to use the Devel::Peek module:

    perl -MDevel::Peek -e 'Dump(chr(0x100))'

That shows the UTF8 flag in FLAGS and both the UTF-8 bytes
and Unicode characters in PV.  See also later in this document
the discussion about the C<is_utf8> function of the C<Encode> module.

=back

=head2 Advanced Topics

=over 4

=item *

String Equivalence

The question of string equivalence becomes somewhat complicated
in Unicode: what do you mean by "equal"?

(Is C<LATIN CAPITAL LETTER A WITH ACUTE> equal to
C<LATIN CAPITAL LETTER A>?)

The short answer is that by default Perl compares equivalence (C<eq>,
C<ne>) based only on the code points of the characters.  In the above
case, the answer is no (because 0x00C1 != 0x0041).  But sometimes
considering any CAPITAL LETTER A to be equal, or even any A of any
case, would be desirable.

The long answer is that you need to consider character normalization
and casing issues: see L<Unicode::Normalize>, and the Unicode
Technical Reports #15 and #21, I<Unicode Normalization Forms> and
I<Case Mappings>, http://www.unicode.org/unicode/reports/tr15/ and
http://www.unicode.org/unicode/reports/tr21/

As of Perl 5.8.0, regular expression case-insensitive matching
implements only 1:1 semantics: one character matches one character.
In I<Case Mappings> both 1:N and N:1 matches are defined.

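For example, one reasonable notion of "equal up to composition" is to
compare the strings' normalization forms; a small sketch using
L<Unicode::Normalize>:

```perl
use Unicode::Normalize 'NFD';
use charnames ':full';

my $precomposed = "\N{LATIN CAPITAL LETTER A WITH ACUTE}";
my $combining   = "A\N{COMBINING ACUTE ACCENT}";

print "not eq\n" unless $precomposed eq $combining;   # code points differ
print "canonically equivalent\n"
    if NFD($precomposed) eq NFD($combining);          # same after normalization
```
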
=item *

String Collation

People like to see their strings nicely sorted, or, as the Unicode
parlance goes, collated.  But again, what do you mean by "collate"?

(Does C<LATIN CAPITAL LETTER A WITH ACUTE> come before or after
C<LATIN CAPITAL LETTER A WITH GRAVE>?)

The short answer is that by default, Perl compares strings (C<lt>,
C<le>, C<cmp>, C<ge>, C<gt>) based only on the code points of the
characters.  In the above case, the answer is "after", since
0x00C1 > 0x00C0.

The long answer is that "it depends", and a good answer cannot be
given without knowing (at the very least) the language context.
See L<Unicode::Collate>, and the I<Unicode Collation Algorithm>,
http://www.unicode.org/unicode/reports/tr10/

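L<Unicode::Collate>, bundled with Perl 5.8.0, implements the Unicode
Collation Algorithm.  A minimal sketch using the default,
language-neutral collation table:

```perl
use Unicode::Collate;

my $collator = Unicode::Collate->new();

# By code points, "a\x{E9}" ("a" + e-acute) would sort after "az";
# the Unicode Collation Algorithm puts it between "aa" and "az",
# because the accented letter sorts next to its base letter.
my @sorted = $collator->sort("az", "a\x{E9}", "aa");
```
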
=back

=head2 Miscellaneous

=over 4

=item *

Character Ranges

Character ranges in regular expression character classes (C</[a-z]/>)
and in the C<tr///> (also known as C<y///>) operator are not magically
Unicode-aware.  What this means is that C<[A-Za-z]> will not magically
start to mean "all alphabetic letters" (not that it means that even
for 8-bit characters; you should be using C</[[:alpha:]]/> for that).

For specifying things like that in regular expressions, you can use
the various Unicode properties, C<\pL> or perhaps C<\p{Alphabetic}>
in this particular case.  You can use Unicode code points as the end
points of character ranges, but that means just that particular code
point range, nothing more.  For further information, see
L<perlunicode>.

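A small sketch contrasting a literal range with the Unicode properties
(the Greek letter is an arbitrary example):

```perl
my $alpha = "\x{03B1}";    # GREEK SMALL LETTER ALPHA

print "in [a-z]\n"    if $alpha =~ /[a-z]/;        # does not match
print "a letter\n"    if $alpha =~ /\pL/;          # matches: a letter
print "alphabetic\n"  if $alpha =~ /\p{IsAlpha}/;  # matches
```
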
=item *

String-To-Number Conversions

Unicode does define several other decimal (and numeric) characters
besides the familiar 0 to 9, such as the Arabic and Indic digits.
Perl does not support string-to-number conversion for digits other
than ASCII 0 to 9 (and ASCII a to f for hexadecimal).

=back

=head2 Questions With Answers

=over 4

=item Will My Old Scripts Break?

Very probably not.  Unless you are generating Unicode characters
somehow, any old behaviour should be preserved.  About the only
behaviour that has changed and which could start generating Unicode
is the old behaviour of C<chr()> where supplying an argument greater
than 255 produced a character modulo 255 (for example, C<chr(300)>
was equal to C<chr(45)>).

=item How Do I Make My Scripts Work With Unicode?

Very little work should be needed since nothing changes until you
somehow generate Unicode data.  The greatest trick will be getting
input as Unicode, and for that see the earlier I/O discussion.

=item How Do I Know Whether My String Is In Unicode?

You shouldn't care.  No, you really shouldn't.  If you have
to care (beyond the cases described above), it means that we
didn't get the transparency of Unicode quite right.

Okay, if you insist:

    use Encode 'is_utf8';
    print is_utf8($string) ? 1 : 0, "\n";

But note that this doesn't mean that any of the characters in the
string are necessarily UTF-8 encoded, or that any of the characters
have code points greater than 0xFF (255) or even 0x80 (128), or that
the string has any characters at all.  All that C<is_utf8()> does is
to return the value of the internal "utf8ness" flag attached to
C<$string>.  If the flag is on, characters added to that string will
be automatically upgraded to UTF-8 (and even then only if they really
need to be upgraded, that is, if their code point is greater than
0xFF).

Sometimes you might really need to know the byte length of a string
instead of the character length.  For that use the C<bytes> pragma
and its only defined function C<length()>:

    my $unicode = chr(0x100);
    print length($unicode), "\n"; # will print 1
    use bytes;
    print length($unicode), "\n"; # will print 2 (the 0xC4 0x80 of the UTF-8)

=item How Do I Detect Data That's Not Valid In a Particular Encoding?

Use the C<Encode> package to try decoding it.
For example,

    use Encode 'decode_utf8';
    if (decode_utf8($string_of_bytes_that_I_think_is_utf8)) {
        # valid
    } else {
        # invalid
    }

For UTF-8 only, you can use:

    use warnings;
    @chars = unpack("U0U*", $string_of_bytes_that_I_think_is_utf8);

If invalid, a C<Malformed UTF-8 character (byte 0x##) in
unpack> warning is produced.  The "U0" means "expect strictly
UTF-8 encoded Unicode".  Without that, the C<unpack("U*", ...)>
would also accept data like C<chr(0xFF)>.

8baee566 |
616 | =item How Do I Convert Binary Data Into a Particular Encoding, Or Vice Versa? |
ba62762e |
617 | |
8baee566 |
618 | This probably isn't as useful as you might think. |
619 | Normally, you shouldn't need to. |
ba62762e |
620 | |
a5f0baef |
621 | In one sense, what you are asking doesn't make much sense: Encodings |
622 | are for characters, and binary data is not "characters", so converting |
623 | "data" into some encoding isn't meaningful unless you know in what |
624 | character set and encoding the binary data is in, in which case it's |
625 | not binary data, now is it? |
8baee566 |
626 | |
627 | If you have a raw sequence of bytes that you know should be interpreted via |
628 | a particular encoding, you can use C<Encode>: |
ba62762e |
629 | |
630 | use Encode 'from_to'; |
631 | from_to($data, "iso-8859-1", "utf-8"); # from latin-1 to utf-8 |
632 | |
8baee566 |
633 | The call to from_to() changes the bytes in $data, but nothing material |
634 | about the nature of the string has changed as far as Perl is concerned. |
635 | Both before and after the call, the string $data contains just a bunch of |
636 | 8-bit bytes. As far as Perl is concerned, the encoding of the string (as |
637 | Perl sees it) remains as "system-native 8-bit bytes". |
638 | |
639 | You might relate this to a fictional 'Translate' module: |
640 | |
641 | use Translate; |
642 | my $phrase = "Yes"; |
643 | Translate::from_to($phrase, 'english', 'deutsch'); |
644 | ## phrase now contains "Ja" |
ba62762e |
645 | |
8baee566 |
646 | The contents of the string changes, but not the nature of the string. |
647 | Perl doesn't know any more after the call than before that the contents |
648 | of the string indicates the affirmative. |
ba62762e |
649 | |
a5f0baef |
Back to converting data: if you have (or want) data in your system's
native 8-bit encoding (e.g. Latin-1, EBCDIC, etc.), you can use
pack/unpack to convert to/from Unicode.

    $native_string  = pack("C*", unpack("U*", $Unicode_string));
    $Unicode_string = pack("U*", unpack("C*", $native_string));

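A round trip with these can be sketched as follows, under the
assumption that every code point fits in eight bits (C<pack "C*">
cannot represent anything above 0xFF):

    my $Unicode_string = pack("U*", 0x00E9);   # one character, é
    my $native_string  = pack("C*", unpack("U*", $Unicode_string));
    my $back           = pack("U*", unpack("C*", $native_string));
    print $back eq $Unicode_string ? "round trip ok" : "mismatch", "\n";
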
If you have a sequence of bytes you B<know> is valid UTF-8,
but Perl doesn't know it yet, you can make Perl a believer, too:

    use Encode 'decode_utf8';

    $Unicode = decode_utf8($bytes);

You can convert well-formed UTF-8 to a sequence of bytes, but if
you just want to convert random binary data into UTF-8, you can't:
an arbitrary collection of bytes isn't well-formed UTF-8.  You can
use C<unpack("C*", $string)> for the former, and you can create
well-formed Unicode data by C<pack("U*", 0xff, ...)>.

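The counterpart of C<decode_utf8()> is C<encode_utf8()>; a small
sketch of the two together:

    use Encode qw(decode_utf8 encode_utf8);

    my $bytes = "\xc3\xa9";           # the UTF-8 encoding of é
    my $chars = decode_utf8($bytes);  # now one character, U+00E9
    print length($chars), "\n";                # 1
    print length(encode_utf8($chars)), "\n";   # 2: back to two bytes
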
=item How Do I Display Unicode?  How Do I Input Unicode?

See http://www.hclrss.demon.co.uk/unicode/ and
http://www.cl.cam.ac.uk/~mgk25/unicode.html

=item How Does Unicode Work With Traditional Locales?

In Perl, not very well.  Avoid using locales through the C<locale>
pragma: use either Unicode or locales, but not both.

=back

=head2 Hexadecimal Notation

The Unicode standard prefers hexadecimal notation because it shows
the division of Unicode into blocks of 256 characters more clearly.
Hexadecimal is also simply shorter than decimal.  You can use decimal
notation, too, but learning to use hexadecimal makes life easier
with the Unicode standard.

The C<0x> prefix means a hexadecimal number, and the digits are 0-9
I<and> a-f (or A-F; case doesn't matter).  Each hexadecimal digit
represents four bits, or half a byte.  C<print 0x..., "\n"> will show
a hexadecimal number in decimal, and C<printf "%x\n", $decimal> will
show a decimal number in hexadecimal.  If you have just the
"hexdigits" of a hexadecimal number as a string, you can use the
C<hex()> function.

    print 0x0009, "\n";    # 9
    print 0x000a, "\n";    # 10
    print 0x000f, "\n";    # 15
    print 0x0010, "\n";    # 16
    print 0x0011, "\n";    # 17
    print 0x0100, "\n";    # 256

    print 0x0041, "\n";    # 65

    printf "%x\n",  65;    # 41
    printf "%#x\n", 65;    # 0x41

    print hex("41"), "\n"; # 65

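Relatedly (these lines are illustrative additions, not from the
examples above), C<chr()> and C<ord()> map between code points and
characters, and C<hex()> also accepts a leading "0x":

    print chr(0x41), "\n";        # A
    printf "U+%04X\n", ord("A");  # U+0041
    print hex("0x41"), "\n";      # 65
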
=head2 Further Resources

=over 4

=item *

Unicode Consortium

http://www.unicode.org/

=item *

Unicode FAQ

http://www.unicode.org/unicode/faq/

=item *

Unicode Glossary

http://www.unicode.org/glossary/

=item *

Unicode Useful Resources

http://www.unicode.org/unicode/onlinedat/resources.html

=item *

Unicode and Multilingual Support in HTML, Fonts, Web Browsers and Other Applications

http://www.hclrss.demon.co.uk/unicode/

=item *

UTF-8 and Unicode FAQ for Unix/Linux

http://www.cl.cam.ac.uk/~mgk25/unicode.html

=item *

Legacy Character Sets

http://www.czyborra.com/
http://www.eki.ee/letter/

=item *

758 | |
759 | The Unicode support files live within the Perl installation in the |
760 | directory |
761 | |
762 | $Config{installprivlib}/unicore |
763 | |
764 | in Perl 5.8.0 or newer, and |
765 | |
766 | $Config{installprivlib}/unicode |
767 | |
768 | in the Perl 5.6 series. (The renaming to F<lib/unicore> was done to |
769 | avoid naming conflicts with lib/Unicode in case-insensitive filesystems.) |
770 | The main Unicode data file is F<Unicode.txt> (or F<Unicode.301> in |
771 | Perl 5.6.1.) You can find the C<$Config{installprivlib}> by |
772 | |
773 | perl "-V:installprivlib" |
774 | |
775 | Note that some of the files have been renamed from the Unicode |
776 | standard since the Perl installation tries to live by the "8.3" |
777 | filenaming restrictions. The renamings are shown in the |
778 | accompanying F<rename> file. |
779 | |
780 | You can explore various information from the Unicode data files using |
781 | the C<Unicode::UCD> module. |
782 | |
783 | =back |
784 | |
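As a quick illustrative sketch of the C<Unicode::UCD> module
mentioned above, using its C<charinfo()> interface:

    use Unicode::UCD 'charinfo';

    my $charinfo = charinfo(0x41);
    print $charinfo->{name}, "\n";   # LATIN CAPITAL LETTER A
    print $charinfo->{block}, "\n";  # Basic Latin
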
=head1 UNICODE IN OLDER PERLS

If you cannot upgrade your Perl to 5.8.0 or later, you can still
do some Unicode processing by using the modules C<Unicode::String>,
C<Unicode::Map8>, and C<Unicode::Map>, available from CPAN.
If you have the GNU recode installed, you can also use the
Perl frontend C<Convert::Recode> for character conversions.

=head1 SEE ALSO

L<perlunicode>, L<Encode>, L<encoding>, L<open>, L<utf8>, L<bytes>,
L<perlretut>, L<Unicode::Collate>, L<Unicode::Normalize>, L<Unicode::UCD>

=head1 ACKNOWLEDGEMENTS

Thanks to the kind readers of the perl5-porters@perl.org,
perl-unicode@perl.org, linux-utf8@nl.linux.org, and unicore@unicode.org
mailing lists for their valuable feedback.

=head1 AUTHOR, COPYRIGHT, AND LICENSE

Copyright 2001 Jarkko Hietaniemi <jhi@iki.fi>

This document may be distributed under the same terms as Perl itself.