DBM::Deep - A pure perl multi-level hash/array DBM that supports transactions
    my $db = DBM::Deep->new( "foo.db" );

    $db->put('key' => 'value');
    print $db->get('key');
    # true multi-level support
    $db->{my_complex} = [
        'hello', { perl => 'rules' },
    ];
    tie my %db, 'DBM::Deep', 'foo.db';

    tied(%db)->put('key' => 'value');
    print tied(%db)->get('key');
A unique flat-file database module, written in pure perl. True
multi-level hash/array support (unlike MLDBM, which is faked), hybrid
OO / tie() interface, cross-platform FTPable files, ACID transactions,
and it is quite fast. Can handle millions of keys and unlimited levels
without significant slow-down. Written from the ground up in pure perl
-- this is NOT a wrapper around a C-based DBM. Out-of-the-box
compatibility with Unix, Mac OS X and Windows.

NOTE: 0.99_03 has significant file format differences from prior
versions. There will be a backwards-compatibility layer in 1.00, but
that is slated for a later 0.99_x release. This version is NOT backwards
compatible with any other release of DBM::Deep.
NOTE: 0.99_01 and above have significant file format differences from
0.983 and before. There will be a backwards-compatibility layer in 1.00,
but that is slated for a later 0.99_x release. This version is NOT
backwards compatible with 0.983 and before.
Construction can be done OO-style (which is the recommended way), or
using Perl's tie() function. Both are examined here.

The recommended way to construct a DBM::Deep object is to use the new()
method, which gets you a blessed *and* tied hash (or array) reference.
    my $db = DBM::Deep->new( "foo.db" );

This opens a new database handle, mapped to the file "foo.db". If this
file does not exist, it will automatically be created. DB files are
opened in "r+" (read/write) mode, and the type of object returned is a
hash, unless otherwise specified (see OPTIONS below).
You can pass a number of options to the constructor to specify things
like locking, autoflush, etc. This is done by passing an inline hash (or
hashref):

    my $db = DBM::Deep->new(
        file      => "foo.db",
        locking   => 1,
        autoflush => 1
    );
Notice that the filename is now specified *inside* the hash with the
"file" parameter, as opposed to being the sole argument to the
constructor. This is required if any options are specified. See OPTIONS
below for the complete list.

You can also start with an array instead of a hash. For this, you must
specify the "type" parameter:
    my $db = DBM::Deep->new(
        file => "foo.db",
        type => DBM::Deep->TYPE_ARRAY
    );

Note: Specifying the "type" parameter only takes effect when beginning a
new DB file. If you create a DBM::Deep object with an existing file, the
"type" will be loaded from the file header, and an error will be thrown
if the wrong type is passed in.
Alternately, you can create a DBM::Deep handle by using Perl's built-in
tie() function. The object returned from tie() can be used to call
methods, such as lock() and unlock(). (That object can be retrieved from
the tied variable at any time using tied() - please see perltie for more
info.)

    my $db = tie %hash, "DBM::Deep", "foo.db";

    my $db = tie @array, "DBM::Deep", "bar.db";
As with the OO constructor, you can replace the DB filename parameter
with a hash containing one or more options (see OPTIONS just below for
the complete list):

    tie %hash, "DBM::Deep", {
        file      => "foo.db",
        locking   => 1,
        autoflush => 1
    };
There are a number of options that can be passed in when constructing
your DBM::Deep objects. These apply to both the OO- and tie()-based
approaches.
* file
Filename of the DB file to link the handle to. You can pass a full
absolute filesystem path, partial path, or a plain filename if the
file is in the current working directory. This is a required
parameter (unless you pass in fh instead; see below).
* fh
If you want, you can pass in the fh instead of the file. This is
most useful for doing something like:

    my $db = DBM::Deep->new( { fh => \*DATA } );

You are responsible for making sure that the fh has been opened
appropriately for your needs. If you open it read-only and attempt
to write, an exception will be thrown. If you open it write-only or
append-only, an exception will be thrown immediately as DBM::Deep
needs to read from the fh.
* file_offset
This is the offset within the file that the DBM::Deep db starts.
Most of the time, you will not need to set this. However, it's there
if you need it.

If you pass in fh and do not set this, it will be set appropriately.
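For example, here is a sketch of opening a database that has been
appended to another file. The file name and the 1024-byte offset are
made up for illustration:

    # Hypothetical layout: the first 1024 bytes of combined.dat hold
    # unrelated data, and the DBM::Deep database begins right after them.
    open my $fh, '+<', 'combined.dat' or die "Cannot open: $!";

    my $db = DBM::Deep->new(
        fh          => $fh,
        file_offset => 1024,
    );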
* type
This parameter specifies what type of object to create, a hash or
array. Use one of these two constants:

* "DBM::Deep->TYPE_HASH"
* "DBM::Deep->TYPE_ARRAY"

This only takes effect when beginning a new file. This is an
optional parameter, and defaults to "DBM::Deep->TYPE_HASH".
* locking
Specifies whether locking is to be enabled. DBM::Deep uses Perl's
flock() function to lock the database in exclusive mode for writes,
and shared mode for reads. Pass any true value to enable. This
affects the base DB handle *and any child hashes or arrays* that use
the same DB file. This is an optional parameter, and defaults to 1
(enabled). See LOCKING below for more.
* autoflush
Specifies whether autoflush is to be enabled on the underlying
filehandle. This obviously slows down write operations, but is
required if you may have multiple processes accessing the same DB
file (also consider enabling *locking*). Pass any true value to
enable. This is an optional parameter, and defaults to 1 (enabled).
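As a sketch, both locking and autoflush can be turned off explicitly
when you know a single process is the only user of the file (the
filename below is illustrative):

    # Single-process use: skip flock() calls and per-write flushes.
    my $db = DBM::Deep->new(
        file      => "single_process.db",
        locking   => 0,
        autoflush => 0,
    );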
The following parameters may be specified in the constructor the first
time the datafile is created. However, they will be stored in the header
of the file and cannot be overridden by subsequent openings of the file
- the values will be set from the values stored in the datafile's
header.
* num_txns
This is the number of transactions that can be running at one time.
The default is one - the HEAD. The minimum is one and the maximum is
255. The more transactions, the larger and quicker the datafile grows.

See "TRANSACTIONS" below.
* max_buckets
This is the number of entries that can be added before a reindexing.
The larger this number is made, the larger a file gets, but the
better performance you will have. The default and minimum number
this can be is 16. The maximum is 256, but more than 64 isn't
recommended.
* data_sector_size
This is the size in bytes of a given data sector. Data sectors will
chain, so a value of any size can be stored. However, chaining is
expensive in terms of time. Setting this value to something close to
the expected common length of your scalars will improve your
performance. If it is too small, your file will have a lot of
chaining. If it is too large, your file will have a lot of dead
space.

The default for this is 64 bytes. The minimum value is 32 and the
maximum is 256 bytes.
Note: There are between 6 and 10 bytes taken up in each data sector
for bookkeeping. (It's 4 + the number of bytes in your "pack_size".)
This is included within the data_sector_size, thus the effective
value is 6-10 bytes less than what you specified.
* pack_size
This is the size of the file pointer used throughout the file. The
choices are:

* small
This uses 2-byte offsets, allowing for a maximum file size of 65 KB.

* medium (the default)
This uses 4-byte offsets, allowing for a maximum file size of 4 GB.

* large
This uses 8-byte offsets, allowing for a maximum file size of 16
XB (exabytes). This can only be enabled if your Perl is compiled for
64-bit.

See "LARGEFILE SUPPORT" for more information.
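Putting these together, here is a sketch of creating a new file with
tuned header settings; the values below are illustrative only:

    # These settings are written into the file header on creation and
    # are ignored when the same file is opened again later.
    my $db = DBM::Deep->new(
        file             => "tuned.db",
        num_txns         => 4,        # allow up to 4 simultaneous transactions
        data_sector_size => 128,      # bytes per data sector
        pack_size        => 'medium', # 4-byte file offsets (the default)
    );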
With DBM::Deep you can access your databases using Perl's standard
hash/array syntax. Because all DBM::Deep objects are *tied* to hashes or
arrays, you can treat them as such. DBM::Deep will intercept all
reads/writes and direct them to the right place -- the DB file. This has
nothing to do with the "TIE CONSTRUCTION" section above. This simply
tells you how to use DBM::Deep using regular hashes and arrays, rather
than calling functions like "get()" and "put()" (although those work
too). It is entirely up to you how you want to access your databases.
You can treat any DBM::Deep object like a normal Perl hash reference.
Add keys, or even nested hashes (or arrays) using standard Perl syntax:

    my $db = DBM::Deep->new( "foo.db" );

    $db->{mykey} = "myvalue";
    $db->{myhash}->{subkey} = "subvalue";

    print $db->{myhash}->{subkey} . "\n";
You can even step through hash keys using the normal Perl "keys()"
function:

    foreach my $key (keys %$db) {
        print "$key: " . $db->{$key} . "\n";
    }
Remember that Perl's "keys()" function extracts *every* key from the
hash and pushes them onto an array, all before the loop even begins. If
you have an extremely large hash, this may exhaust Perl's memory.
Instead, consider using Perl's "each()" function, which pulls
keys/values one at a time, using very little memory:

    while (my ($key, $value) = each %$db) {
        print "$key: $value\n";
    }
Please note that when using "each()", you should always pass a direct
hash reference, not a lookup. Meaning, you should never do this:

    while (my ($key, $value) = each %{$db->{foo}}) { # BAD

This causes an infinite loop, because for each iteration, Perl is
calling FETCH() on the $db handle, resulting in a "new" hash for foo
every time, so it effectively keeps returning the first key over and
over again. Instead, assign a temporary variable to $db->{foo}, then
pass that to each().
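Here is a short sketch of the safe pattern just described (the key name
"foo" is only an example):

    # Fetch the nested hash once, then iterate over the temporary reference.
    my $foo = $db->{foo};
    while (my ($key, $value) = each %$foo) {
        print "$key: $value\n";
    }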
As with hashes, you can treat any DBM::Deep object like a normal Perl
array reference. This includes inserting, removing and manipulating
elements, and the "push()", "pop()", "shift()", "unshift()" and
"splice()" functions. The object must have first been created using type
"DBM::Deep->TYPE_ARRAY", or simply be a nested array reference inside a
hash. Example:
    my $db = DBM::Deep->new(
        file => "foo-array.db",
        type => DBM::Deep->TYPE_ARRAY
    );

    push @$db, "bar", "baz";
    unshift @$db, "bah";

    my $last_elem = pop @$db;    # baz
    my $first_elem = shift @$db; # bah
    my $second_elem = $db->[1];  # bar

    my $num_elements = scalar @$db;
In addition to the *tie()* interface, you can also use a standard OO
interface to manipulate all aspects of DBM::Deep databases. Each type of
object (hash or array) has its own methods, but both types share the
following common methods: "put()", "get()", "exists()", "delete()" and
"clear()". "store()" and "fetch()" are aliases of "put()" and "get()",
respectively.

* new() / clone()
These are the constructor and copy-functions.
* put() / store()
Stores a new hash key/value pair, or sets an array element value.
Takes two arguments, the hash key or array index, and the new value.
The value can be a scalar, hash ref or array ref. Returns true on
success, false on failure.

    $db->put("foo", "bar"); # for hashes
    $db->put(1, "bar");     # for arrays
* get() / fetch()
Fetches the value of a hash key or array element. Takes one
argument: the hash key or array index. Returns a scalar, hash ref or
array ref, depending on the data type stored.

    my $value = $db->get("foo"); # for hashes
    my $value = $db->get(1);     # for arrays
* exists()
Checks if a hash key or array index exists. Takes one argument: the
hash key or array index. Returns true if it exists, false if not.

    if ($db->exists("foo")) { print "yay!\n"; } # for hashes
    if ($db->exists(1)) { print "yay!\n"; }     # for arrays
* delete()
Deletes one hash key/value pair or array element. Takes one
argument: the hash key or array index. Returns true on success,
false if not found. For arrays, the remaining elements located after
the deleted element are NOT moved over. The deleted element is
essentially just undefined, which is exactly how Perl's internal
arrays work.

    $db->delete("foo"); # for hashes
    $db->delete(1);     # for arrays
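For example, here is a sketch of the non-moving behavior, assuming an
array-type DB that already holds three elements:

    # $db was created with TYPE_ARRAY and holds ( "a", "b", "c" )
    $db->delete(1);                                       # remove "b"
    print defined $db->get(1) ? "defined\n" : "undef\n";  # prints "undef"
    print $db->length(), "\n";                            # still 3 -- nothing moved over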
* clear()
Deletes all hash keys or array elements. Takes no arguments. No
return value.

    $db->clear(); # hashes or arrays

* optimize()
Recover lost disk space. This is important to do, especially if you
use transactions.
* import() / export()
Data going in and out.

* begin_work() / commit() / rollback()
These are the transactional functions. See "TRANSACTIONS" below for
more information.
For hashes, DBM::Deep supports all the common methods described above,
and the following additional methods: "first_key()" and "next_key()".

* first_key()
Returns the "first" key in the hash. As with built-in Perl hashes,
keys are fetched in an undefined order (which appears random). Takes
no arguments, returns the key as a scalar value.

    my $key = $db->first_key();

* next_key()
Returns the "next" key in the hash, given the previous one as the
sole argument. Returns undef if there are no more keys to be
fetched.

    $key = $db->next_key($key);
Here are some examples of using hashes:

    my $db = DBM::Deep->new( "foo.db" );

    $db->put("foo", "bar");
    print "foo: " . $db->get("foo") . "\n";

    $db->put("baz", {}); # new child hash ref
    $db->get("baz")->put("buz", "biz");
    print "buz: " . $db->get("baz")->get("buz") . "\n";

    my $key = $db->first_key();
    while ($key) {
        print "$key: " . $db->get($key) . "\n";
        $key = $db->next_key($key);
    }

    if ($db->exists("foo")) { $db->delete("foo"); }
For arrays, DBM::Deep supports all the common methods described above,
and the following additional methods: "length()", "push()", "pop()",
"shift()", "unshift()" and "splice()".

* length()
Returns the number of elements in the array. Takes no arguments.

    my $len = $db->length();

* push()
Adds one or more elements onto the end of the array. Accepts
scalars, hash refs or array refs. No return value.

    $db->push("foo", "bar", {});

* pop()
Fetches the last element in the array, and deletes it. Takes no
arguments. Returns undef if array is empty. Returns the element
value.

    my $elem = $db->pop();
* shift()
Fetches the first element in the array, deletes it, then shifts all
the remaining elements over to take up the space. Returns the
element value. This method is not recommended with large arrays --
see "LARGE ARRAYS" below for details.

    my $elem = $db->shift();

* unshift()
Inserts one or more elements onto the beginning of the array,
shifting all existing elements over to make room. Accepts scalars,
hash refs or array refs. No return value. This method is not
recommended with large arrays -- see "LARGE ARRAYS" below for
details.

    $db->unshift("foo", "bar", {});

* splice()
Performs exactly like Perl's built-in function of the same name. See
"perldoc -f splice" for usage -- it is too complicated to document
here. This method is not recommended with large arrays -- see "LARGE
ARRAYS" below for details.
Here are some examples of using arrays:

    my $db = DBM::Deep->new(
        file => "foo-array.db",
        type => DBM::Deep->TYPE_ARRAY
    );

    $db->push("bar", "baz");
    $db->unshift("foo");
    $db->put(3, "buz");

    my $len = $db->length();
    print "length: $len\n"; # 4

    for (my $k=0; $k<$len; $k++) {
        print "$k: " . $db->get($k) . "\n";
    }

    $db->splice(1, 2, "biz", "baf");

    while (my $elem = shift @$db) {
        print "shifted: $elem\n";
    }
Enable or disable automatic file locking by passing a boolean value to
the "locking" parameter when constructing your DBM::Deep object (see
SETUP above).

    my $db = DBM::Deep->new(
        file    => "foo.db",
        locking => 1
    );

This causes DBM::Deep to "flock()" the underlying filehandle with
exclusive mode for writes, and shared mode for reads. This is required
if you have multiple processes accessing the same database file, to
avoid file corruption. Please note that "flock()" does NOT work for
files over NFS. See "DB OVER NFS" below for more.
You can explicitly lock a database, so it remains locked for multiple
actions. This is done by calling the "lock()" method, and passing an
optional lock mode argument (defaults to exclusive mode). This is
particularly useful for things like counters, where the current value
needs to be fetched, then incremented, then stored again.

    $db->lock();
    my $counter = $db->get("counter");
    $counter++;
    $db->put("counter", $counter);
    $db->unlock();
You can pass "lock()" an optional argument, which specifies which mode
to use (exclusive or shared). Use one of these two constants:
"DBM::Deep->LOCK_EX" or "DBM::Deep->LOCK_SH". These are passed directly
to "flock()", and are the same as the constants defined in Perl's Fcntl
module:

    $db->lock( $db->LOCK_SH );
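For example, a read can take a shared lock so that concurrent readers
are not serialized (a sketch; the key name is illustrative):

    $db->lock( $db->LOCK_SH );   # shared lock: other readers may proceed
    my $value = $db->get("some_key");
    $db->unlock();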
You can import existing complex structures by calling the "import()"
method, and export an entire database into an in-memory structure using
the "export()" method. Both are examined here.

Say you have an existing hash with nested hashes/arrays inside it.
Instead of walking the structure and adding keys/elements to the
database as you go, simply pass a reference to the "import()" method.
This recursively adds everything to an existing DBM::Deep object for
you. Here is an example:
    my $struct = {
        key1 => "value1",
        array1 => [ "elem0", "elem1", "elem2" ],
        hash1 => {
            subkey1 => "subvalue1",
            subkey2 => "subvalue2"
        }
    };
    my $db = DBM::Deep->new( "foo.db" );
    $db->import( $struct );

    print $db->{key1} . "\n"; # prints "value1"
This recursively imports the entire $struct object into $db, including
all nested hashes and arrays. If the DBM::Deep object contains existing
data, keys are merged with the existing ones, replacing if they already
exist. The "import()" method can be called on any database level (not
just the base level), and works with both hash and array DB types.

Note: Make sure your existing structure has no circular references in
it. These will cause an infinite loop when importing. There are plans to
fix this in a later release.
Calling the "export()" method on an existing DBM::Deep object will
return a reference to a new in-memory copy of the database. The export
is done recursively, so all nested hashes/arrays are exported to
standard Perl objects. Here is an example:
    my $db = DBM::Deep->new( "foo.db" );

    $db->{key1} = "value1";
    $db->{key2} = "value2";

    $db->{hash1}->{subkey1} = "subvalue1";
    $db->{hash1}->{subkey2} = "subvalue2";

    my $struct = $db->export();

    print $struct->{key1} . "\n"; # prints "value1"
This makes a complete copy of the database in memory, and returns a
reference to it. The "export()" method can be called on any database
level (not just the base level), and works with both hash and array DB
types. Be careful of large databases -- you can store a lot more data in
a DBM::Deep object than an in-memory Perl structure.

Note: Make sure your database has no circular references in it. These
will cause an infinite loop when exporting. There are plans to fix this
in a later release.
DBM::Deep has a number of hooks where you can specify your own Perl
function to perform filtering on incoming or outgoing data. This is a
perfect way to extend the engine, and implement things like real-time
compression or encryption. Filtering applies to the base DB level, and
all child hashes / arrays. Filter hooks can be specified when your
DBM::Deep object is first constructed, or by calling the "set_filter()"
method at any time. There are four available filter hooks, described
below:
* filter_store_key
This filter is called whenever a hash key is stored. It is passed
the incoming key, and expected to return a transformed key.

* filter_store_value
This filter is called whenever a hash key or array element is
stored. It is passed the incoming value, and expected to return a
transformed value.

* filter_fetch_key
This filter is called whenever a hash key is fetched (i.e. via
"first_key()" or "next_key()"). It is passed the transformed key,
and expected to return the plain key.

* filter_fetch_value
This filter is called whenever a hash key or array element is
fetched. It is passed the transformed value, and expected to return
the plain value.
Here are the two ways to set up a filter hook:

    my $db = DBM::Deep->new(
        file => "foo.db",
        filter_store_value => \&my_filter_store,
        filter_fetch_value => \&my_filter_fetch
    );

    # or...

    $db->set_filter( "filter_store_value", \&my_filter_store );
    $db->set_filter( "filter_fetch_value", \&my_filter_fetch );
Your filter function will be called only when dealing with SCALAR keys
or values. When nested hashes and arrays are being stored/fetched,
filtering is bypassed. Filters are called as static functions, passed a
single SCALAR argument, and expected to return a single SCALAR value. If
you want to remove a filter, set the function reference to "undef":

    $db->set_filter( "filter_store_value", undef );
Real-time Encryption Example
Here is a working example that uses the *Crypt::Blowfish* module to do
real-time encryption / decryption of keys & values with DBM::Deep
Filters. Please visit
<http://search.cpan.org/search?module=Crypt::Blowfish> for more on
*Crypt::Blowfish*. You'll also need the *Crypt::CBC* module.
    my $cipher = Crypt::CBC->new({
        'key'            => 'my secret key',
        'cipher'         => 'Blowfish',
        'regenerate_key' => 0,
        'padding'        => 'space',
    });

    my $db = DBM::Deep->new(
        file => "foo-encrypt.db",
        filter_store_key => \&my_encrypt,
        filter_store_value => \&my_encrypt,
        filter_fetch_key => \&my_decrypt,
        filter_fetch_value => \&my_decrypt,
    );
    $db->{key1} = "value1";
    $db->{key2} = "value2";
    print "key1: " . $db->{key1} . "\n";
    print "key2: " . $db->{key2} . "\n";
    sub my_encrypt {
        return $cipher->encrypt( $_[0] );
    }

    sub my_decrypt {
        return $cipher->decrypt( $_[0] );
    }
Real-time Compression Example
Here is a working example that uses the *Compress::Zlib* module to do
real-time compression / decompression of keys & values with DBM::Deep
Filters. Please visit
<http://search.cpan.org/search?module=Compress::Zlib> for more on
*Compress::Zlib*.
    my $db = DBM::Deep->new(
        file => "foo-compress.db",
        filter_store_key => \&my_compress,
        filter_store_value => \&my_compress,
        filter_fetch_key => \&my_decompress,
        filter_fetch_value => \&my_decompress,
    );
    $db->{key1} = "value1";
    $db->{key2} = "value2";
    print "key1: " . $db->{key1} . "\n";
    print "key2: " . $db->{key2} . "\n";
    sub my_compress {
        return Compress::Zlib::memGzip( $_[0] );
    }

    sub my_decompress {
        return Compress::Zlib::memGunzip( $_[0] );
    }

Note: Filtering of keys only applies to hashes. Array "keys" are
actually numerical index numbers, and are not filtered.
Most DBM::Deep methods return a true value for success, and call die()
on failure. You can wrap calls in an eval block to catch the die.

    my $db = DBM::Deep->new( "foo.db" ); # create hash
    eval { $db->push("foo"); };          # ILLEGAL -- push is array-only call

    print $@; # prints error message
If you have a 64-bit system, and your Perl is compiled with both
LARGEFILE and 64-bit support, you *may* be able to create databases
larger than 4 GB. DBM::Deep by default uses 32-bit file offset tags, but
these can be changed by specifying the 'pack_size' parameter when
constructing the file.
    DBM::Deep->new(
        filename  => $filename,
        pack_size => 'large',
    );

This tells DBM::Deep to pack all file offsets with 8-byte (64-bit) quad
words instead of 32-bit longs. After setting these values your DB files
have a theoretical maximum size of 16 XB (exabytes).

You can also use "pack_size => 'small'" in order to use 16-bit file
offsets.
Note: Changing these values will NOT work for existing database files.
Only change this for new files. Once the value has been set, it is
stored in the file's header and cannot be changed for the life of the
file. These parameters are per-file, meaning you can access 32-bit and
64-bit files, as you choose.

Note: We have not personally tested files larger than 4 GB -- all my
systems have only a 32-bit Perl. However, I have received user reports
that this does indeed work.
If you require low-level access to the underlying filehandle that
DBM::Deep uses, you can call the "_fh()" method, which returns the
handle:

    my $fh = $db->_fh();

This method can be called on the root level of the database, or any
child hashes or arrays. All levels share a *root* structure, which
contains things like the filehandle, a reference counter, and all the
options specified when you created the object. You can get access to
this file object by calling the "_storage()" method.

    my $file_obj = $db->_storage();
This is useful for changing options after the object has already been
created, such as enabling/disabling locking. You can also store your own
temporary user data in this structure (be wary of name collision), which
is then accessible from any child hash or array.
CUSTOM DIGEST ALGORITHM
DBM::Deep by default uses the *Message Digest 5* (MD5) algorithm for
hashing keys. However you can override this, and use another algorithm
(such as SHA-256) or even write your own. But please note that DBM::Deep
currently expects zero collisions, so your algorithm has to be
*perfect*, so to speak. Collision detection may be introduced in a later
version.

You can specify a custom digest algorithm by passing it into the
parameter list for new(), passing a reference to a subroutine as the
'digest' parameter, and the length of the algorithm's hashes (in bytes)
as the 'hash_size' parameter. Here is a working example that uses a
256-bit hash from the *Digest::SHA256* module. Please see
<http://search.cpan.org/search?module=Digest::SHA256> for more
information.
    my $context = Digest::SHA256::new(256);

    my $db = DBM::Deep->new(
        filename  => "foo-sha.db",
        digest    => \&my_digest,
        hash_size => 32,
    );

    $db->{key1} = "value1";
    $db->{key2} = "value2";
    print "key1: " . $db->{key1} . "\n";
    print "key2: " . $db->{key2} . "\n";

    sub my_digest {
        return substr( $context->hash($_[0]), 0, 32 );
    }
Note: Your returned digest strings must be EXACTLY the number of bytes
you specify in the hash_size parameter (in this case 32).

Note: If you do choose to use a custom digest algorithm, you must set it
every time you access this file. Otherwise, the default (MD5) will be
used.
NOTE: DBM::Deep 0.99_03 has turned off circular references pending
evaluation of some edge cases. I hope to be able to re-enable circular
references in a future version after 1.00. This means that circular
references are NO LONGER available.

DBM::Deep has experimental support for circular references. Meaning you
can have a nested hash key or array element that points to a parent
object. This relationship is stored in the DB file, and is preserved
between sessions. Here is an example:
    my $db = DBM::Deep->new( "foo.db" );

    $db->{foo} = "bar";
    $db->{circle} = $db; # ref to self

    print $db->{foo} . "\n";           # prints "bar"
    print $db->{circle}->{foo} . "\n"; # prints "bar" again

Note: Passing the object to a function that recursively walks the object
tree (such as *Data::Dumper* or even the built-in "optimize()" or
"export()" methods) will result in an infinite loop. This will be fixed
in a future release.
New in 0.99_01 are ACID transactions. Every DBM::Deep object is
completely transaction-ready - it is not an option you have to turn on.
You do have to specify how many transactions may run simultaneously
(see OPTIONS above).

Three new methods have been added to support them. They are:

* begin_work()
This starts a transaction.

* commit()
This applies the changes done within the transaction to the mainline
and ends the transaction.

* rollback()
This discards the changes done within the transaction to the
mainline and ends the transaction.
Transactions in DBM::Deep are done using a variant of the MVCC method,
the same method used by the InnoDB MySQL engine.
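Here is a short sketch of the three methods in use; the key name and the
decision to commit are illustrative:

    $db->begin_work;           # start a transaction

    $db->{balance} = 100;      # visible only inside this transaction

    if ( $db->{balance} >= 0 ) {
        $db->commit;           # publish the change to the mainline (HEAD)
    }
    else {
        $db->rollback;         # discard the change
    }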
Software-Transactional Memory
The addition of transactions to this module provides the basis for STM
within Perl 5. Contention is resolved using a default of last-write-wins.
Currently, this default cannot be changed, but it will be addressed in a
future version.
Because DBM::Deep is a concurrent datastore, every change is flushed to
disk immediately and every read goes to disk. This means that DBM::Deep
functions at the speed of disk (generally 10-20ms) vs. the speed of RAM
(generally 50-70ns), or at least 150-200x slower than the comparable
in-memory datastructure in Perl.

There are several techniques you can use to speed up how DBM::Deep
functions:
* Put it on a ramdisk
The easiest and quickest mechanism for making DBM::Deep run faster is
to create a ramdisk and locate the DBM::Deep file there. Doing this
as an option may become a feature of DBM::Deep, assuming there is a
good ramdisk wrapper on CPAN.
* Work at the tightest level possible
It is much faster to assign the level of your db that you are
working with to an intermediate variable than to re-look it up every
time. Thus, instead of this:

    while ( my ($k, $v) = each %{$db->{foo}{bar}{baz}} ) {
        ...
    }

do this:

    my $x = $db->{foo}{bar}{baz};
    while ( my ($k, $v) = each %$x ) {
        ...
    }
* Make your file as tight as possible
If you know that you are not going to use more than 65K in your
database, consider using the "pack_size => 'small'" option. This
will instruct DBM::Deep to use 16-bit addresses, meaning that the
seek times will be less.
The following are items that are planned to be added in future releases.
These are separate from the "CAVEATS, ISSUES & BUGS" below.

Right now, you cannot run a transaction within a transaction. Removing
this restriction is technically straightforward, but the combinatorial
explosion of possible use cases hurts my head. If this is something you
want to see immediately, please submit many test cases.
If a user is willing to assert upon opening the file that this process
will be the only consumer of that datafile, then there are a number of
caching possibilities that can be taken advantage of. This does,
however, mean that DBM::Deep is more vulnerable to losing data due to
unflushed changes. It also means a much larger in-memory footprint. As
such, it's not clear exactly how this should be done. Suggestions are
welcome.
The techniques used in DBM::Deep simply require a seekable contiguous
datastore. This could just as easily be a large string as a file. By
using substr, the STM capabilities of DBM::Deep could be used within a
single process. I have no idea how I'd specify this, though. Suggestions
are welcome.
Importing using Data::Walker
Right now, importing is done using "Clone::clone()" to make a complete
copy in memory, then tying that copy. It would be much better to use
Data::Walker to walk the data structure instead, particularly in the
case of large data structures.
Different contention resolution mechanisms
Currently, the only contention resolution mechanism is last-write-wins.
This is the mechanism used by most RDBMSes and should be good enough for
most uses. For advanced uses of STM, other contention mechanisms will be
needed. If you have an idea of how you'd like to see contention
resolution in DBM::Deep, please let me know.
CAVEATS, ISSUES & BUGS
This section describes all the known issues with DBM::Deep. These are
issues that are either intractable or depend on some feature within Perl
working exactly right. If you have found something that is not listed
below, please send an e-mail to rkinyon@cpan.org. Likewise, if you think
you know of a way around one of these issues, please let me know.
(The following assumes a high level of Perl understanding, specifically
of references. Most users can safely skip this section.)

Currently, the only references supported are HASH and ARRAY. The other
reference types (SCALAR, CODE, GLOB, and REF) cannot be supported for
various reasons.
* GLOB
These are things like filehandles and other sockets. They can't be
supported because it's completely unclear how DBM::Deep should
serialize them.
* SCALAR / REF
The discussion here refers to the following type of example:

    my $x = 25;
    $db->{key1} = \$x;

    $x = 50;

    # In some other process ...

    my $val = ${ $db->{key1} };

    is( $val, 50, "What actually gets stored in the DB file?" );
The problem is one of synchronization. When the variable being
referred to changes value, the reference isn't notified, which is
kind of the point of references. This means that the new value won't
be stored in the datafile for other processes to read. There is no
TIEREF in Perl, so there is no way for DBM::Deep to be notified of
the change.
It is theoretically possible to store references to values already
within a DBM::Deep object because everything already is
synchronized, but the change to the internals would be quite large.
Specifically, DBM::Deep would have to tie every single value that is
stored. This would bloat the RAM footprint of DBM::Deep at least
twofold (if not more) and be a significant performance drain, all to
support a feature that has never been requested.
* CODE
Data::Dump::Streamer provides a mechanism for serializing coderefs,
including saving off all closure state. This would allow for
DBM::Deep to store the code for a subroutine. Then, whenever the
subroutine is read, the code could be "eval()"'ed into being.
However, just as for SCALAR and REF, that closure state may change
without notifying the DBM::Deep object storing the reference. Again,
this would generally be considered a feature.
The current level of error handling in DBM::Deep is minimal. Files *are*
checked for a 32-bit signature when opened, but any other form of
corruption in the datafile can cause segmentation faults. DBM::Deep may
try to "seek()" past the end of a file, or get stuck in an infinite loop
depending on the level and type of corruption. File write operations are
not checked for failure (for speed), so if you happen to run out of disk
space, DBM::Deep will probably fail in a bad way. These things will be
addressed in a later version of DBM::Deep.
Beware of using DBM::Deep files over NFS. DBM::Deep uses flock(), which
works well on local filesystems, but will NOT protect you from file
corruption over NFS. I've heard about setting up your NFS server with a
locking daemon, then using "lockf()" to lock your files, but your
mileage may vary there as well. From what I understand, there is no real
way to do it. However, if you need access to the underlying filehandle
in DBM::Deep for using some other kind of locking scheme like "lockf()",
see the "LOW-LEVEL ACCESS" section above.
Beware of copying tied objects in Perl. Very strange things can happen.
Instead, use DBM::Deep's "clone()" method which safely copies the object
and returns a new, blessed and tied hash or array to the same level in
the DB.

    my $copy = $db->clone();

Note: Since clone() here is cloning the object, not the database
location, any modifications to either $db or $copy will be visible to
both.
Beware of using "shift()", "unshift()" or "splice()" with large arrays.
These functions cause every element in the array to move, which can be
murder on DBM::Deep, as every element has to be fetched from disk, then
stored again in a different location. This will be addressed in a future
version.
If you pass in a filehandle to new(), you may have opened it in either a
readonly or writeonly mode. STORE will verify that the filehandle is
writable. However, there doesn't seem to be a good way to determine if a
filehandle is readable. And, if the filehandle isn't readable, it's not
clear what will happen. So, don't do that.
Assignments Within Transactions
The following will *not* work as one might expect:

    my $x = { a => 1 };

    $db->begin_work;
    $db->{foo} = $x;
    $db->rollback;

    is( $x->{a}, 1 ); # This will fail!
The problem is that the moment a reference is used as the rvalue to a
DBM::Deep object's lvalue, it becomes tied itself. This is so that
future changes to $x can be tracked within the DBM::Deep file and is
considered to be a feature. By the time the rollback occurs, there is no
knowledge that there had been an $x or what memory location to restore
the old value to.
NOTE: This does not affect importing because imports do a walk over the
reference to be imported in order to explicitly leave it untied.
Devel::Cover is used to measure the code coverage of the tests. Below is
the Devel::Cover report on this distribution's test suite.
    ---------------------------- ------ ------ ------ ------ ------ ------ ------
    File                           stmt   bran   cond    sub    pod   time  total
    ---------------------------- ------ ------ ------ ------ ------ ------ ------
    blib/lib/DBM/Deep.pm           96.8   87.9   90.5  100.0   89.5    4.5   95.2
    blib/lib/DBM/Deep/Array.pm    100.0   94.3  100.0  100.0  100.0    4.8   98.7
    blib/lib/DBM/Deep/Engine.pm    97.2   86.4   86.0  100.0    0.0   56.8   91.0
    blib/lib/DBM/Deep/File.pm      98.1   83.3   66.7  100.0    0.0   31.4   88.0
    blib/lib/DBM/Deep/Hash.pm     100.0  100.0  100.0  100.0  100.0    2.5  100.0
    Total                          97.7   88.1   86.6  100.0   31.6  100.0   93.0
    ---------------------------- ------ ------ ------ ------ ------ ------ ------
Check out the DBM::Deep Google Group at
<http://groups.google.com/group/DBM-Deep> or send email to
DBM-Deep@googlegroups.com. You can also visit #dbm-deep on irc.perl.org.

The source code repository is at <http://svn.perl.org/modules/DBM-Deep>.
Rob Kinyon, rkinyon@cpan.org

Originally written by Joseph Huckaby, jhuckaby@cpan.org
The following have contributed greatly to make DBM::Deep what it is
today:

* Adam Sah and Rich Gaushell
* Stonehenge for sponsoring the 1.00 release
* Dan Golden and others at YAPC::NA 2006 for helping me design through
transactions
perltie(1), Tie::Hash(3), Digest::MD5(3), Fcntl(3), flock(2), lockf(3),
nfs(5), Digest::SHA256(3), Crypt::Blowfish(3), Compress::Zlib(3)
Copyright (c) 2007 Rob Kinyon. All Rights Reserved. This is free
software, you may use it and distribute it under the same terms as Perl
itself.