=head1 NAME

DBM::Deep - A pure perl multi-level hash/array DBM that supports transactions

=head1 SYNOPSIS

  use DBM::Deep;
  my $db = DBM::Deep->new( "foo.db" );

  $db->{key} = 'value';
  print $db->{key};

  $db->put('key' => 'value');
  print $db->get('key');

  # true multi-level support
  $db->{my_complex} = [
      'hello', { perl => 'rules' },
      42, 99,
  ];

  $db->begin_work;

  # Do stuff here

  $db->rollback;
  $db->commit;

  tie my %db, 'DBM::Deep', 'foo.db';
  $db{key} = 'value';
  print $db{key};

  tied(%db)->put('key' => 'value');
  print tied(%db)->get('key');
=head1 DESCRIPTION

A unique flat-file database module, written in pure perl. True multi-level
hash/array support (unlike MLDBM, which is faked), hybrid OO / tie()
interface, cross-platform FTPable files, ACID transactions, and it is quite
fast. It can handle millions of keys and unlimited levels without significant
slow-down. Written from the ground up in pure perl -- this is NOT a wrapper
around a C-based DBM. Out-of-the-box compatibility with Unix, Mac OS X and
Windows.

=head1 VERSION DIFFERENCES

B<NOTE>: 1.0020 introduces different engines which are backed by different types
of storage. There is the original storage (called 'File') and a database storage
(called 'DBI'). q.v. L</PLUGINS> for more information.

B<NOTE>: 1.0000 has significant file format differences from prior versions.
There is a backwards-compatibility layer at C<utils/upgrade_db.pl>. Files
created by 1.0000 or higher are B<NOT> compatible with scripts using prior
versions.

=head1 PLUGINS

DBM::Deep is a wrapper around different storage engines. These are:

=head2 File

This is the traditional storage engine, storing the data to a custom file
format. The parameters accepted are:

=over 4

=item * file

Filename of the DB file to link the handle to. You can pass a full absolute
filesystem path, partial path, or a plain filename if the file is in the
current working directory. This is a required parameter (though q.v. C<fh>).

=item * fh

If you want, you can pass in the fh instead of the file. This is most useful for
doing something like:

  my $db = DBM::Deep->new( { fh => \*DATA } );

You are responsible for making sure that the fh has been opened appropriately
for your needs. If you open it read-only and attempt to write, an exception will
be thrown. If you open it write-only or append-only, an exception will be thrown
immediately as DBM::Deep needs to read from the fh.

=item * file_offset

This is the offset within the file that the DBM::Deep db starts. Most of the
time, you will not need to set this. However, it's there if you want it.

If you pass in fh and do not set this, it will be set appropriately.
=item * locking

Specifies whether locking is to be enabled. DBM::Deep uses Perl's flock()
function to lock the database in exclusive mode for writes, and shared mode
for reads. Pass any true value to enable. This affects the base DB handle
I<and any child hashes or arrays> that use the same DB file. This is an
optional parameter, and defaults to 1 (enabled). See L</LOCKING> below for
more.

=back

=head2 DBI

This is a storage engine that stores the data in a relational database. Funnily
enough, this engine doesn't work with transactions (yet) as InnoDB doesn't do
what DBM::Deep needs it to do.

The parameters accepted are:

=over 4

=item * dbh

This is a DBH that's already been opened with L<DBI/connect>.

=item * dbi

This is a hashref containing:

=over 4

=item * dsn

=item * username

=item * password

=item * connect_args

=back

These correspond to the four parameters L<DBI/connect> takes.

=back

B<NOTE>: This has only been tested with MySQL (with disappointing results). I
plan on extending this to work with SQLite and PostgreSQL in the next release.
Oracle, Sybase, and other engines will come later.

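Putting the pieces together, construction might look like this (a sketch; the
DSN, credentials and connect_args shown are placeholders for your own):

  my $db = DBM::Deep->new(
      dbi => {
          dsn          => 'dbi:mysql:database=test',
          username     => 'user',
          password     => 'password',
          connect_args => { RaiseError => 1 },
      },
  );

  # or, with a handle you opened yourself via DBI->connect:
  my $db2 = DBM::Deep->new( dbh => $dbh );
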
=head2 Planned engines

There are plans to extend this functionality to (at least) the following:

=over 4

=item * BDB (and other hash engines like memcached)

=item * NoSQL engines (such as Tokyo Cabinet)

=item * DBIx::Class (and other ORMs)

=back

=head1 SETUP

Construction can be done OO-style (which is the recommended way), or using
Perl's tie() function. Both are examined here.

=head2 OO Construction

The recommended way to construct a DBM::Deep object is to use the new()
method, which gets you a blessed I<and> tied hash (or array) reference.

  my $db = DBM::Deep->new( "foo.db" );

This opens a new database handle, mapped to the file "foo.db". If this
file does not exist, it will automatically be created. DB files are
opened in "r+" (read/write) mode, and the type of object returned is a
hash, unless otherwise specified (see L</OPTIONS> below).

You can pass a number of options to the constructor to specify things like
locking, autoflush, etc. This is done by passing an inline hash (or hashref):

  my $db = DBM::Deep->new(
      file      => "foo.db",
      locking   => 1,
      autoflush => 1
  );

Notice that the filename is now specified I<inside> the hash with
the "file" parameter, as opposed to being the sole argument to the
constructor. This is required if any options are specified.
See L</OPTIONS> below for the complete list.

You can also start with an array instead of a hash. For this, you must
specify the C<type> parameter:

  my $db = DBM::Deep->new(
      file => "foo.db",
      type => DBM::Deep->TYPE_ARRAY
  );

B<Note:> Specifying the C<type> parameter only takes effect when beginning
a new DB file. If you create a DBM::Deep object with an existing file, the
C<type> will be loaded from the file header, and an error will be thrown if
the wrong type is passed in.

=head2 Tie Construction

Alternately, you can create a DBM::Deep handle by using Perl's built-in
tie() function. The object returned from tie() can be used to call methods,
such as lock() and unlock(). (That object can be retrieved from the tied
variable at any time using tied() - please see L<perltie> for more info.)

  my %hash;
  my $db = tie %hash, "DBM::Deep", "foo.db";

  my @array;
  my $db = tie @array, "DBM::Deep", "bar.db";

As with the OO constructor, you can replace the DB filename parameter with
a hash containing one or more options (see L</OPTIONS> just below for the
complete list).

  tie %hash, "DBM::Deep", {
      file      => "foo.db",
      locking   => 1,
      autoflush => 1
  };

=head2 Options

There are a number of options that can be passed in when constructing your
DBM::Deep objects. These apply to both the OO- and tie- based approaches.

=over

=item * type

This parameter specifies what type of object to create, a hash or array. Use
one of these two constants:

=over 4

=item * C<< DBM::Deep->TYPE_HASH >>

=item * C<< DBM::Deep->TYPE_ARRAY >>

=back

This only takes effect when beginning a new file. This is an optional
parameter, and defaults to C<< DBM::Deep->TYPE_HASH >>.

=item * autoflush

Specifies whether autoflush is to be enabled on the underlying filehandle.
This obviously slows down write operations, but is required if you may have
multiple processes accessing the same DB file (also consider enabling
I<locking>). Pass any true value to enable. This is an optional parameter,
and defaults to 1 (enabled).

=item * filter_*

See L</FILTERS> below.

=back

The following parameters may be specified in the constructor the first time the
datafile is created. However, they will be stored in the header of the file and
cannot be overridden by subsequent openings of the file - the values will be set
from the values stored in the datafile's header.

=over 4

=item * num_txns

This is the number of transactions that can be running at one time. The
default is one - the HEAD. The minimum is one and the maximum is 255. The more
transactions you allow, the larger the datafile grows, and the more quickly.

See L</TRANSACTIONS> below.

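For example (a sketch; as described above, the count includes the HEAD, so
this allows up to 9 concurrent transactions on top of it):

  my $db = DBM::Deep->new(
      file     => "foo.db",
      num_txns => 10,
  );
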
=item * max_buckets

This is the number of entries that can be added before a reindexing. The larger
this number, the larger the file gets, but the better the performance. The
default and minimum number this can be is 16. The maximum is 256, but more
than 64 isn't recommended.

=item * data_sector_size

This is the size in bytes of a given data sector. Data sectors will chain, so
a value of any size can be stored. However, chaining is expensive in terms of
time. Setting this value to something close to the expected common length of
your scalars will improve your performance. If it is too small, your file will
have a lot of chaining. If it is too large, your file will have a lot of dead
space in it.

The default for this is 64 bytes. The minimum value is 32 and the maximum is
256 bytes.

B<Note:> There are between 6 and 10 bytes taken up in each data sector for
bookkeeping. (It's 4 + the number of bytes in your L</pack_size>.) This is
included within the data_sector_size, thus the effective value is 6-10 bytes
less than what you specified.
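
So, for a database of mostly short strings, something like this sketch could
reduce dead space (the value shown is illustrative):

  # With the default 'medium' pack_size, 4 + 4 = 8 of these 32 bytes
  # are bookkeeping, leaving 24 bytes of payload per sector.
  my $db = DBM::Deep->new(
      file             => "foo.db",
      data_sector_size => 32,
  );
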

=item * pack_size

This is the size of the file pointer used throughout the file. The valid values
are:

=over 4

=item * small

This uses 2-byte offsets, allowing for a maximum file size of 65 KB.

=item * medium (default)

This uses 4-byte offsets, allowing for a maximum file size of 4 GB.

=item * large

This uses 8-byte offsets, allowing for a maximum file size of 16 EB
(exabytes). This can only be enabled if your Perl is compiled for 64-bit.

=back

See L</LARGEFILE SUPPORT> for more information.

=back

=head1 TIE INTERFACE

With DBM::Deep you can access your databases using Perl's standard hash/array
syntax. Because all DBM::Deep objects are I<tied> to hashes or arrays, you can
treat them as such. DBM::Deep will intercept all reads/writes and direct them
to the right place -- the DB file. This has nothing to do with the
L</TIE CONSTRUCTION> section above. This simply tells you how to use DBM::Deep
using regular hashes and arrays, rather than calling functions like C<get()>
and C<put()> (although those work too). It is entirely up to you how you want
to access your databases.

=head2 Hashes

You can treat any DBM::Deep object like a normal Perl hash reference. Add keys,
or even nested hashes (or arrays) using standard Perl syntax:

  my $db = DBM::Deep->new( "foo.db" );

  $db->{mykey} = "myvalue";
  $db->{myhash} = {};
  $db->{myhash}->{subkey} = "subvalue";

  print $db->{myhash}->{subkey} . "\n";

You can even step through hash keys using the normal Perl C<keys()> function:

  foreach my $key (keys %$db) {
      print "$key: " . $db->{$key} . "\n";
  }

Remember that Perl's C<keys()> function extracts I<every> key from the hash and
pushes them onto an array, all before the loop even begins. If you have an
extremely large hash, this may exhaust Perl's memory. Instead, consider using
Perl's C<each()> function, which pulls keys/values one at a time, using very
little memory:

  while (my ($key, $value) = each %$db) {
      print "$key: $value\n";
  }

Please note that when using C<each()>, you should always pass a direct
hash reference, not a lookup. Meaning, you should B<never> do this:

  # NEVER DO THIS
  while (my ($key, $value) = each %{$db->{foo}}) { # BAD

This causes an infinite loop, because for each iteration, Perl is calling
FETCH() on the $db handle, resulting in a "new" hash for foo every time, so
it effectively keeps returning the first key over and over again. Instead,
assign a temporary variable to C<< $db->{foo} >>, then pass that to each().

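The safe idiom keeps a single reference to the child hash alive for the
duration of the loop:

  # Fetch the child reference once, outside the loop.
  my $foo = $db->{foo};
  while (my ($key, $value) = each %$foo) {
      print "$key: $value\n";
  }
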
=head2 Arrays

As with hashes, you can treat any DBM::Deep object like a normal Perl array
reference. This includes inserting, removing and manipulating elements,
and the C<push()>, C<pop()>, C<shift()>, C<unshift()> and C<splice()> functions.
The object must have first been created using type C<< DBM::Deep->TYPE_ARRAY >>,
or simply be a nested array reference inside a hash. Example:

  my $db = DBM::Deep->new(
      file => "foo-array.db",
      type => DBM::Deep->TYPE_ARRAY
  );

  $db->[0] = "foo";
  push @$db, "bar", "baz";
  unshift @$db, "bah";

  my $last_elem   = pop @$db;   # baz
  my $first_elem  = shift @$db; # bah
  my $second_elem = $db->[1];   # bar

  my $num_elements = scalar @$db;

=head1 OO INTERFACE

In addition to the I<tie()> interface, you can also use a standard OO interface
to manipulate all aspects of DBM::Deep databases. Each type of object (hash or
array) has its own methods, but both types share the following common methods:
C<put()>, C<get()>, C<exists()>, C<delete()> and C<clear()>. C<store()> and
C<fetch()> are aliases to C<put()> and C<get()>, respectively.

=over

=item * new() / clone()

These are the constructor and copy-functions.

=item * put() / store()

Stores a new hash key/value pair, or sets an array element value. Takes two
arguments, the hash key or array index, and the new value. The value can be
a scalar, hash ref or array ref. Returns true on success, false on failure.

  $db->put("foo", "bar"); # for hashes
  $db->put(1, "bar");     # for arrays

=item * get() / fetch()

Fetches the value of a hash key or array element. Takes one argument: the hash
key or array index. Returns a scalar, hash ref or array ref, depending on the
data type stored.

  my $value = $db->get("foo"); # for hashes
  my $value = $db->get(1);     # for arrays

=item * exists()

Checks if a hash key or array index exists. Takes one argument: the hash key
or array index. Returns true if it exists, false if not.

  if ($db->exists("foo")) { print "yay!\n"; } # for hashes
  if ($db->exists(1)) { print "yay!\n"; }     # for arrays

=item * delete()

Deletes one hash key/value pair or array element. Takes one argument: the hash
key or array index. Returns true on success, false if not found. For arrays,
the remaining elements located after the deleted element are NOT moved over.
The deleted element is essentially just undefined, which is exactly how Perl's
internal arrays work.

  $db->delete("foo"); # for hashes
  $db->delete(1);     # for arrays

=item * clear()

Deletes B<all> hash keys or array elements. Takes no arguments. No return
value.

  $db->clear(); # hashes or arrays

=item * lock() / unlock() / lock_exclusive() / lock_shared()

q.v. L</LOCKING> for more info.

=item * optimize()

This will compress the datafile so that it takes up as little space as possible.
There is a freespace manager so that when space is freed up, it is used before
extending the size of the datafile. But, that freespace just sits in the
datafile unless C<optimize()> is called.

=item * import()

Unlike simple assignment, C<import()> does not tie the right-hand side. Instead,
a copy of your data is put into the DB. C<import()> takes either an arrayref (if
your DB is an array) or a hashref (if your DB is a hash). C<import()> will die
if anything else is passed in.

=item * export()

This returns a complete copy of the data structure at the point you do the
export. This copy is in RAM, not on disk like the DB is.

=item * begin_work() / commit() / rollback()

These are the transactional functions. See L</TRANSACTIONS> for more
information.

=item * supports( $option )

This returns a boolean depending on whether this instance of DBM::Deep supports
that feature. C<$option> can be one of:

=over 4

=item * transactions

=back

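For instance, code that wants transactions but must tolerate engines without
them (such as the DBI engine described above) can guard itself:

  if ( $db->supports( 'transactions' ) ) {
      $db->begin_work;
      $db->{counter}++;
      $db->commit;
  }
  else {
      $db->{counter}++;
  }
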
=back

=head2 Hashes

For hashes, DBM::Deep supports all the common methods described above, and the
following additional methods: C<first_key()> and C<next_key()>.

=over

=item * first_key()

Returns the "first" key in the hash. As with built-in Perl hashes, keys are
fetched in an undefined order (which appears random). Takes no arguments,
returns the key as a scalar value.

  my $key = $db->first_key();

=item * next_key()

Returns the "next" key in the hash, given the previous one as the sole argument.
Returns undef if there are no more keys to be fetched.

  $key = $db->next_key($key);

=back

Here are some examples of using hashes:

  my $db = DBM::Deep->new( "foo.db" );

  $db->put("foo", "bar");
  print "foo: " . $db->get("foo") . "\n";

  $db->put("baz", {}); # new child hash ref
  $db->get("baz")->put("buz", "biz");
  print "buz: " . $db->get("baz")->get("buz") . "\n";

  my $key = $db->first_key();
  while (defined $key) {
      print "$key: " . $db->get($key) . "\n";
      $key = $db->next_key($key);
  }

  if ($db->exists("foo")) { $db->delete("foo"); }

=head2 Arrays

For arrays, DBM::Deep supports all the common methods described above, and the
following additional methods: C<length()>, C<push()>, C<pop()>, C<shift()>,
C<unshift()> and C<splice()>.

=over

=item * length()

Returns the number of elements in the array. Takes no arguments.

  my $len = $db->length();

=item * push()

Adds one or more elements onto the end of the array. Accepts scalars, hash
refs or array refs. No return value.

  $db->push("foo", "bar", {});

=item * pop()

Fetches the last element in the array, and deletes it. Takes no arguments.
Returns the element value, or undef if the array is empty.

  my $elem = $db->pop();

=item * shift()

Fetches the first element in the array, deletes it, then shifts all the
remaining elements over to take up the space. Returns the element value. This
method is not recommended with large arrays -- see L</LARGE ARRAYS> below for
details.

  my $elem = $db->shift();

=item * unshift()

Inserts one or more elements onto the beginning of the array, shifting all
existing elements over to make room. Accepts scalars, hash refs or array refs.
No return value. This method is not recommended with large arrays -- see
L</LARGE ARRAYS> below for details.

  $db->unshift("foo", "bar", {});

=item * splice()

Performs exactly like Perl's built-in function of the same name. See
L<perlfunc/splice> for usage -- it is too complicated to document here. This
method is not recommended with large arrays -- see L</LARGE ARRAYS> below for
details.

=back

Here are some examples of using arrays:

  my $db = DBM::Deep->new(
      file => "foo.db",
      type => DBM::Deep->TYPE_ARRAY
  );

  $db->push("bar", "baz");
  $db->unshift("foo");
  $db->put(3, "buz");

  my $len = $db->length();
  print "length: $len\n"; # 4

  for (my $k=0; $k<$len; $k++) {
      print "$k: " . $db->get($k) . "\n";
  }

  $db->splice(1, 2, "biz", "baf");

  while (my $elem = shift @$db) {
      print "shifted: $elem\n";
  }

=head1 LOCKING

Enable or disable automatic file locking by passing a boolean value to the
C<locking> parameter when constructing your DBM::Deep object (see L</SETUP>
above).

  my $db = DBM::Deep->new(
      file    => "foo.db",
      locking => 1
  );

This causes DBM::Deep to C<flock()> the underlying filehandle with exclusive
mode for writes, and shared mode for reads. This is required if you have
multiple processes accessing the same database file, to avoid file corruption.
Please note that C<flock()> does NOT work for files over NFS. See L</DB OVER
NFS> below for more.

=head2 Explicit Locking

You can explicitly lock a database, so it remains locked for multiple
actions. This is done by calling the C<lock_exclusive()> method (for when you
want to write) or the C<lock_shared()> method (for when you want to read).
This is particularly useful for things like counters, where the current value
needs to be fetched, then incremented, then stored again.

  $db->lock_exclusive();
  my $counter = $db->get("counter");
  $counter++;
  $db->put("counter", $counter);
  $db->unlock();

  # or...

  $db->lock_exclusive();
  $db->{counter}++;
  $db->unlock();

=head2 Win32/Cygwin

Due to Win32 actually enforcing the read-only status of a shared lock, all
locks on Win32 and Cygwin are exclusive. This is because of how autovivification
currently works. Hopefully, this will go away in a future release.

=head1 IMPORTING/EXPORTING

You can import existing complex structures by calling the C<import()> method,
and export an entire database into an in-memory structure using the C<export()>
method. Both are examined here.

=head2 Importing

Say you have an existing hash with nested hashes/arrays inside it. Instead of
walking the structure and adding keys/elements to the database as you go,
simply pass a reference to the C<import()> method. This recursively adds
everything to an existing DBM::Deep object for you. Here is an example:

  my $struct = {
      key1 => "value1",
      key2 => "value2",
      array1 => [ "elem0", "elem1", "elem2" ],
      hash1 => {
          subkey1 => "subvalue1",
          subkey2 => "subvalue2"
      }
  };

  my $db = DBM::Deep->new( "foo.db" );
  $db->import( $struct );

  print $db->{key1} . "\n"; # prints "value1"

This recursively imports the entire C<$struct> object into C<$db>, including
all nested hashes and arrays. If the DBM::Deep object contains existing data,
keys are merged with the existing ones, replacing if they already exist.
The C<import()> method can be called on any database level (not just the base
level), and works with both hash and array DB types.

B<Note:> Make sure your existing structure has no circular references in it.
These will cause an infinite loop when importing. There are plans to fix this
in a later release.

=head2 Exporting

Calling the C<export()> method on an existing DBM::Deep object will return
a reference to a new in-memory copy of the database. The export is done
recursively, so all nested hashes/arrays are exported to standard Perl
objects. Here is an example:

  my $db = DBM::Deep->new( "foo.db" );

  $db->{key1} = "value1";
  $db->{key2} = "value2";
  $db->{hash1} = {};
  $db->{hash1}->{subkey1} = "subvalue1";
  $db->{hash1}->{subkey2} = "subvalue2";

  my $struct = $db->export();

  print $struct->{key1} . "\n"; # prints "value1"

This makes a complete copy of the database in memory, and returns a reference
to it. The C<export()> method can be called on any database level (not just
the base level), and works with both hash and array DB types. Be careful of
large databases -- you can store a lot more data in a DBM::Deep object than an
in-memory Perl structure.

B<Note:> Make sure your database has no circular references in it.
These will cause an infinite loop when exporting. There are plans to fix this
in a later release.

=head1 FILTERS

DBM::Deep has a number of hooks where you can specify your own Perl function
to perform filtering on incoming or outgoing data. This is a perfect
way to extend the engine, and implement things like real-time compression or
encryption. Filtering applies to the base DB level, and all child hashes /
arrays. Filter hooks can be specified when your DBM::Deep object is first
constructed, or by calling the C<set_filter()> method at any time. There are
four available filter hooks.

=head2 set_filter()

This method takes two parameters - the filter type and the filter subreference.
The four types are:

=over

=item * filter_store_key

This filter is called whenever a hash key is stored. It
is passed the incoming key, and expected to return a transformed key.

=item * filter_store_value

This filter is called whenever a hash key or array element is stored. It
is passed the incoming value, and expected to return a transformed value.

=item * filter_fetch_key

This filter is called whenever a hash key is fetched (i.e. via
C<first_key()> or C<next_key()>). It is passed the transformed key,
and expected to return the plain key.

=item * filter_fetch_value

This filter is called whenever a hash key or array element is fetched.
It is passed the transformed value, and expected to return the plain value.

=back

Here are the two ways to set up a filter hook:

  my $db = DBM::Deep->new(
      file => "foo.db",
      filter_store_value => \&my_filter_store,
      filter_fetch_value => \&my_filter_fetch
  );

  # or...

  $db->set_filter( "filter_store_value", \&my_filter_store );
  $db->set_filter( "filter_fetch_value", \&my_filter_fetch );

Your filter function will be called only when dealing with SCALAR keys or
values. When nested hashes and arrays are being stored/fetched, filtering
is bypassed. Filters are called as static functions, passed a single SCALAR
argument, and expected to return a single SCALAR value. If you want to
remove a filter, set the function reference to C<undef>:

  $db->set_filter( "filter_store_value", undef );

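As a toy illustration of the store/fetch pairing (a sketch only -- a real
filter would more likely compress or encrypt), here is a matched pair that
reverses values on the way in and restores them on the way out:

  # Store and fetch filters must be inverses of each other.
  $db->set_filter( "filter_store_value", sub { scalar reverse shift } );
  $db->set_filter( "filter_fetch_value", sub { scalar reverse shift } );

  $db->{greeting} = "hello";   # stored on disk as "olleh"
  print $db->{greeting};       # prints "hello"
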
=head2 Examples

Please read L<DBM::Deep::Manual> for examples of filters.

=head1 ERROR HANDLING

Most DBM::Deep methods return a true value for success, and call die() on
failure. You can wrap calls in an eval block to catch the die.

  my $db = DBM::Deep->new( "foo.db" ); # create hash
  eval { $db->push("foo"); };          # ILLEGAL -- push is array-only call

  print $@; # prints error message

=head1 LARGEFILE SUPPORT

If you have a 64-bit system, and your Perl is compiled with both LARGEFILE
and 64-bit support, you I<may> be able to create databases larger than 4 GB.
DBM::Deep by default uses 32-bit file offset tags, but these can be changed
by specifying the 'pack_size' parameter when constructing the file.

  DBM::Deep->new(
      file      => $filename,
      pack_size => 'large',
  );

This tells DBM::Deep to pack all file offsets with 8-byte (64-bit) quad words
instead of 32-bit longs. After setting this value, your DB files have a
theoretical maximum size of 16 EB (exabytes).

You can also use C<< pack_size => 'small' >> in order to use 16-bit file
offsets.

B<Note:> Changing these values will B<NOT> work for existing database files.
Only change this for new files. Once the value has been set, it is stored in
the file's header and cannot be changed for the life of the file. These
parameters are per-file, meaning you can access 32-bit and 64-bit files, as
you choose.

B<Note:> We have not personally tested files larger than 4 GB -- all our
systems have only a 32-bit Perl. However, we have received user reports that
this does indeed work.

831 | |
=head1 LOW-LEVEL ACCESS

If you require low-level access to the underlying filehandle that DBM::Deep
uses, you can call the C<_fh()> method, which returns the handle:

    my $fh = $db->_fh();

This method can be called on the root level of the database, or any child
hashes or arrays. All levels share a I<root> structure, which contains things
like the filehandle, a reference counter, and all the options specified
when you created the object. You can get access to this file object by
calling the C<_storage()> method.

    my $file_obj = $db->_storage();

This is useful for changing options after the object has already been created,
such as enabling/disabling locking. You can also store your own temporary user
data in this structure (be wary of name collisions), which is then accessible
from any child hash or array.

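As one hedged example of what C<_fh()> enables, you could layer your own
advisory C<flock()> lock on top of the raw handle (a sketch only: DBM::Deep
also performs its own flock()-based locking internally, so coordinate your
scheme with the module's C<locking> option rather than fighting it):

```perl
use strict;
use warnings;
use DBM::Deep;
use Fcntl qw( :flock );

my $db = DBM::Deep->new( "foo.db" );

# Grab the raw filehandle and take an exclusive advisory lock.
my $fh = $db->_fh();
flock( $fh, LOCK_EX ) or die "Cannot lock: $!";

$db->{key} = 'value';    # writes happen while we hold the lock

flock( $fh, LOCK_UN );   # release the lock
```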
=head1 CIRCULAR REFERENCES

DBM::Deep has full support for circular references. This means you
can have a nested hash key or array element that points to a parent object.
This relationship is stored in the DB file, and is preserved between sessions.
Here is an example:

    my $db = DBM::Deep->new( "foo.db" );

    $db->{foo} = "bar";
    $db->{circle} = $db; # ref to self

    print $db->{foo} . "\n";           # prints "bar"
    print $db->{circle}->{foo} . "\n"; # prints "bar" again

This also works as expected with array and hash references. So, the following
works as expected:

    $db->{foo} = [ 1 .. 3 ];
    $db->{bar} = $db->{foo};

    push @{$db->{foo}}, 42;
    is( $db->{bar}[-1], 42 ); # Passes

This, however, does I<not> extend to assignments from one DB file to another.
So, the following will throw an error:

    my $db1 = DBM::Deep->new( "foo.db" );
    my $db2 = DBM::Deep->new( "bar.db" );

    $db1->{foo} = [];
    $db2->{foo} = $db1->{foo}; # dies

B<Note>: Passing the object to a function that recursively walks the
object tree (such as I<Data::Dumper> or even the built-in C<optimize()> or
C<export()> methods) will result in an infinite loop. This will be fixed in
a future release by adding singleton support.

=head1 TRANSACTIONS

As of 1.0000, DBM::Deep has ACID transactions. Every DBM::Deep object is
completely transaction-ready - it is not an option you have to turn on. You do
have to specify how many transactions may run simultaneously (q.v.
L</num_txns>).

Three new methods have been added to support them. They are:

=over 4

=item * begin_work()

This starts a transaction.

=item * commit()

This applies the changes done within the transaction to the mainline and ends
the transaction.

=item * rollback()

This discards the changes done within the transaction to the mainline and ends
the transaction.

=back

Transactions in DBM::Deep are done using a variant of the MVCC method, the
same method used by the InnoDB MySQL engine.

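The three methods above can be sketched together as follows (C<num_txns> is
set here because transactions need a slot reserved at construction time; the
value 2 is an illustrative choice, q.v. L</num_txns>):

```perl
use strict;
use warnings;
use DBM::Deep;

my $db = DBM::Deep->new(
    file     => "foo.db",
    num_txns => 2,   # reserve room for one concurrent transaction
);

$db->{balance} = 100;

$db->begin_work;        # start a transaction
$db->{balance} = 50;    # change is visible only inside the transaction
$db->rollback;          # discard the change

print $db->{balance}, "\n";   # prints 100

$db->begin_work;
$db->{balance} = 75;
$db->commit;            # apply the change to the mainline

print $db->{balance}, "\n";   # prints 75
```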
=head1 MIGRATION

As of 1.0000, the file format has changed. Furthermore, DBM::Deep is now
designed to potentially change file format between point-releases, if needed
to support a requested feature. To aid in this, a migration script is provided
within the CPAN distribution called C<utils/upgrade_db.pl>.

B<NOTE:> This script is not installed onto your system because it carries a
copy of every version prior to the current version.

=head1 TODO

The following are items that are planned to be added in future releases. These
are separate from the L</CAVEATS, ISSUES & BUGS> below.

=head2 Sub-Transactions

Right now, you cannot run a transaction within a transaction. Removing this
restriction is technically straightforward, but the combinatorial explosion of
possible use cases hurts my head. If this is something you want to see
immediately, please submit many test cases.

=head2 Caching

If a client is willing to assert upon opening the file that this process will
be the only consumer of that datafile, then there are a number of caching
possibilities that can be taken advantage of. This does, however, mean that
DBM::Deep is more vulnerable to losing data due to unflushed changes. It also
means a much larger in-memory footprint. As such, it's not clear exactly how
this should be done. Suggestions are welcome.

=head2 RAM-only

The techniques used in DBM::Deep simply require a seekable contiguous
datastore. This could just as easily be a large string as a file. By using
substr, the STM capabilities of DBM::Deep could be used within a single
process. I have no idea how I'd specify this, though. Suggestions are
welcome.

=head2 Different contention resolution mechanisms

Currently, the only contention resolution mechanism is last-write-wins. This
is the mechanism used by most RDBMSes and should be good enough for most uses.
For advanced uses of STM, other contention mechanisms will be needed. If you
have an idea of how you'd like to see contention resolution in DBM::Deep,
please let me know.

=head1 CAVEATS, ISSUES & BUGS

This section describes all the known issues with DBM::Deep. These are issues
that are either intractable or depend on some feature within Perl working
exactly right. If you have found something that is not listed below, please
send an e-mail to L<rkinyon@cpan.org>. Likewise, if you think you know of a
way around one of these issues, please let me know.

=head2 References

(The following assumes a high level of Perl understanding, specifically of
references. Most users can safely skip this section.)

Currently, the only references supported are HASH and ARRAY. The other
reference types (SCALAR, CODE, GLOB, and REF) cannot be supported for various
reasons.

=over 4

=item * GLOB

These are things like filehandles and sockets. They can't be supported
because it's completely unclear how DBM::Deep should serialize them.

=item * SCALAR / REF

The discussion here refers to the following type of example:

    my $x = 25;
    $db->{key1} = \$x;

    $x = 50;

    # In some other process ...

    my $val = ${ $db->{key1} };

    is( $val, 50, "What actually gets stored in the DB file?" );

The problem is one of synchronization. When the variable being referred to
changes value, the reference isn't notified, which is kind of the point of
references. This means that the new value won't be stored in the datafile for
other processes to read. There is no TIEREF.

It is theoretically possible to store references to values already within a
DBM::Deep object because everything already is synchronized, but the change to
the internals would be quite large. Specifically, DBM::Deep would have to tie
every single value that is stored. This would bloat the RAM footprint of
DBM::Deep at least twofold (if not more) and be a significant performance
drain, all to support a feature that has never been requested.

=item * CODE

L<Data::Dump::Streamer> provides a mechanism for serializing coderefs,
including saving off all closure state. This would allow DBM::Deep to
store the code for a subroutine. Then, whenever the subroutine is read, the
code could be C<eval()>'ed into being. However, just as for SCALAR and REF,
that closure state may change without notifying the DBM::Deep object storing
the reference. Again, this would generally be considered a feature.

=back

=head2 External references and transactions

If you do C<< my $x = $db->{foo}; >>, then start a transaction, $x will be
referencing the database from outside the transaction. A fix for this (and
other issues with how external references into the database behave) is being
looked into. This is the skipped set of tests in t/39_singletons.t, and a
related issue is the focus of t/37_delete_edge_cases.t.

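A sketch of the situation described above (this illustrates the caveat, not a
recommended pattern; the exact behavior of C<$x> inside the transaction may
vary by version):

```perl
use strict;
use warnings;
use DBM::Deep;

my $db = DBM::Deep->new( file => "foo.db", num_txns => 2 );
$db->{foo} = { a => 1 };

my $x = $db->{foo};   # external reference taken before the transaction

$db->begin_work;
# $x still refers to the database from outside the transaction, so reads
# and writes through $x may not see or respect the transaction's view.
$db->rollback;
```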
=head2 File corruption

The current level of error handling in DBM::Deep is minimal. Files I<are>
checked for a 32-bit signature when opened, but any other form of corruption
in the datafile can cause segmentation faults. DBM::Deep may try to C<seek()>
past the end of a file, or get stuck in an infinite loop depending on the
level and type of corruption. File write operations are not checked for
failure (for speed), so if you happen to run out of disk space, DBM::Deep
will probably fail in a bad way. These things will be addressed in a later
version of DBM::Deep.

=head2 DB over NFS

Beware of using DBM::Deep files over NFS. DBM::Deep uses flock(), which works
well on local filesystems, but will NOT protect you from file corruption over
NFS. I've heard about setting up your NFS server with a locking daemon, then
using C<lockf()> to lock your files, but your mileage may vary there as well.
From what I understand, there is no real way to do it. However, if you need
access to the underlying filehandle in DBM::Deep for using some other kind of
locking scheme like C<lockf()>, see the L</LOW-LEVEL ACCESS> section above.

=head2 Copying Objects

Beware of copying tied objects in Perl. Very strange things can happen.
Instead, use DBM::Deep's C<clone()> method, which safely copies the object
and returns a new, blessed and tied hash or array to the same level in the DB.

    my $copy = $db->clone();

B<Note>: Since clone() here is cloning the object, not the database location,
any modifications to either $db or $copy will be visible to both.

=head2 Large Arrays

Beware of using C<shift()>, C<unshift()> or C<splice()> with large arrays.
These functions cause every element in the array to move, which can be murder
on DBM::Deep, as every element has to be fetched from disk, then stored again
in a different location. This will be addressed in a future version.

This has been somewhat addressed so that the cost is constant, regardless of
what is stored at those locations. So, small arrays with huge data structures
in them are faster. But, large arrays are still large.

=head2 Writeonly Files

If you pass in a filehandle to new(), you may have opened it in either a
read-only or write-only mode. STORE will verify that the filehandle is
writable. However, there doesn't seem to be a good way to determine if a
filehandle is readable. And, if the filehandle isn't readable, it's not clear
what will happen. So, don't do that.

=head2 Assignments Within Transactions

The following will I<not> work as one might expect:

    my $x = { a => 1 };

    $db->begin_work;
    $db->{foo} = $x;
    $db->rollback;

    is( $x->{a}, 1 ); # This will fail!

The problem is that the moment a reference is used as the rvalue to a
DBM::Deep object's lvalue, it becomes tied itself. This is so that future
changes to C<$x> can be tracked within the DBM::Deep file, and is considered
to be a feature. By the time the rollback occurs, there is no knowledge that
there had been an C<$x> or what memory location to assign an C<export()> to.

B<NOTE:> This does not affect importing, because imports do a walk over the
reference to be imported in order to explicitly leave it untied.

=head1 CODE COVERAGE

L<Devel::Cover> is used to test the code coverage of the tests. Below is the
L<Devel::Cover> report on this distribution's test suite.

  ------------------------------------------ ------ ------ ------ ------ ------
  File                                         stmt   bran   cond    sub  total
  ------------------------------------------ ------ ------ ------ ------ ------
  blib/lib/DBM/Deep.pm                         97.2   90.9   83.3  100.0   95.4
  blib/lib/DBM/Deep/Array.pm                  100.0   95.7  100.0  100.0   99.0
  blib/lib/DBM/Deep/Engine.pm                  95.6   84.7   81.6   98.4   92.5
  blib/lib/DBM/Deep/File.pm                    97.2   81.6   66.7  100.0   91.9
  blib/lib/DBM/Deep/Hash.pm                   100.0  100.0  100.0  100.0  100.0
  Total                                        96.7   87.5   82.2   99.2   94.1
  ------------------------------------------ ------ ------ ------ ------ ------

=head1 MORE INFORMATION

Check out the DBM::Deep Google Group at
L<http://groups.google.com/group/DBM-Deep> or send email to
L<DBM-Deep@googlegroups.com>. You can also visit #dbm-deep on irc.perl.org.

The source code repository is at L<http://github.com/robkinyon/dbm-deep>.

=head1 MAINTAINERS

Rob Kinyon, L<rkinyon@cpan.org>

Originally written by Joseph Huckaby, L<jhuckaby@cpan.org>

=head1 SPONSORS

Stonehenge Consulting (L<http://www.stonehenge.com/>) sponsored the
development of transactions and freespace management, leading to the 1.0000
release. A great debt of gratitude goes out to them for their continuing
leadership in and support of the Perl community.

=head1 CONTRIBUTORS

The following have contributed greatly to make DBM::Deep what it is today:

=over 4

=item * Adam Sah and Rich Gaushell for innumerable contributions early on.

=item * Dan Golden and others at YAPC::NA 2006 for helping me design through
transactions.

=back

=head1 SEE ALSO

perltie(1), Tie::Hash(3), Digest::MD5(3), Fcntl(3), flock(2), lockf(3),
nfs(5), Digest::SHA256(3), Crypt::Blowfish(3), Compress::Zlib(3)

=head1 LICENSE

Copyright (c) 2007 Rob Kinyon. All Rights Reserved.
This is free software; you may use it and distribute it under the same terms
as Perl itself.

=cut