package DBM::Deep::Engine;

use warnings FATAL => 'all';

use DBM::Deep::Iterator ();
# * Every method in here assumes that the storage has been appropriately
# safeguarded. This can be anything from flock() to some sort of manual
# mutex. But, it's the caller's responsibility to make sure that this has
# been done.
# Setup file and tag signatures. These should never change.
sub SIG_FILE   () { 'DPDB' }
sub SIG_HEADER () { 'h'    }
sub SIG_HASH   () { 'H'    }
sub SIG_ARRAY  () { 'A'    }
sub SIG_NULL   () { 'N'    }
sub SIG_DATA   () { 'D'    }
sub SIG_INDEX  () { 'I'    }
sub SIG_BLIST  () { 'B'    }
sub SIG_FREE   () { 'F'    }
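# An illustrative sketch (not part of the module) of how the file
# signature above can be checked when opening a database file:
#
#     open my $fh, '<:raw', $filename or die "open failed: $!";
#     read( $fh, my $sig, length(SIG_FILE) ) == length(SIG_FILE)
#         or die "Short read on '$filename'";
#     $sig eq SIG_FILE or die "'$filename' is not a DBM::Deep file";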
This is an internal-use-only object for L<DBM::Deep>. It mediates the low-level
mapping between the L<DBM::Deep> objects and the storage medium.

The purpose of this documentation is to provide low-level documentation for
developers. It is B<not> intended to be used by the general public. This
documentation and what it documents can and will change without notice.
The engine exposes an API to the DBM::Deep objects (DBM::Deep, DBM::Deep::Array,
and DBM::Deep::Hash), which they use to access the actual stored values. This API

=item * make_reference

=item * lock_exclusive

They are explained in their own sections below. These methods, in turn, may
provide some bounds-checking, but primarily act to instantiate objects in the
Engine::Sector::* hierarchy and dispatch to them.
Transactions in DBM::Deep are implemented using a variant of MVCC. This attempts
to keep the amount of actual work done against the file low while still providing
Atomicity, Consistency, and Isolation. Durability, unfortunately, cannot be done
If another process uses a transaction slot and writes stuff to it, then
terminates, the data that process wrote is still within the file. In order to
address this, there is also a transaction staleness counter associated with
every write. Each time a transaction is started, that process increments that
transaction's staleness counter. If, when it reads a value, the staleness
counters aren't identical, DBM::Deep will consider the value on disk to be stale
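The staleness mechanism can be sketched as follows (the variable and
function names here are illustrative, not the engine's actual layout):

    # Each transaction slot carries a counter that is bumped whenever the
    # slot is (re)used. A value recorded under an older counter is stale.
    my %slot_staleness;

    sub begin_work_on_slot {
        my ($slot) = @_;
        return ++$slot_staleness{$slot};
    }

    sub value_is_stale {
        my ($slot, $recorded_counter) = @_;
        return $recorded_counter != $slot_staleness{$slot};
    }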
The fourth leg of ACID is Durability, the guarantee that when a commit returns,
the data will be there the next time you read from it. This should be regardless
of any crashes or powerdowns in between the commit and subsequent read.
DBM::Deep does provide that guarantee; once the commit returns, all of the data
has been transferred from the transaction shadow to the HEAD. The issue arises
with partial commits - a commit that is interrupted in some fashion. In keeping
with DBM::Deep's "tradition" of very light error-checking and non-existent
error-handling, there is no way to recover from a partial commit. (This is
probably a failure in Consistency as well as Durability.)
Other DBMSes use transaction logs (a separate file, generally) to achieve
Durability. As DBM::Deep is a single file, we would have to do something
similar to what SQLite and BDB do in terms of committing using synchronized
writes. To do this, we would have to use a much higher RAM footprint and some
serious programming that makes my head hurt just to think about it.
=head2 get_next_key( $obj, $prev_key )

This takes an object that provides _base_offset() and an optional string
representing the prior key returned via a prior invocation of this method.

This method delegates to C<< DBM::Deep::Iterator->get_next_key() >>.
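At the DBM::Deep level, this engine method is what backs key iteration
(C<first_key> / C<next_key>, and C<keys>/C<each> on tied hashes). A sketch
of a typical caller-side loop (C<foo.db> is a placeholder filename):

    use DBM::Deep;

    my $db  = DBM::Deep->new( "foo.db" );
    my $key = $db->first_key;
    while ( defined $key ) {
        print "$key => $db->{$key}\n";
        $key = $db->next_key( $key );
    }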
=cut

# XXX Add staleness here
sub get_next_key {
    my $self = shift;
    my ($obj, $prev_key) = @_;

    # XXX Need to add logic about resetting the iterator if any key in the reference has changed
    unless ( $prev_key ) {
        $obj->{iterator} = DBM::Deep::Iterator->new({
            base_offset => $obj->_base_offset,
            engine      => $self,
        });
    }

    return $obj->{iterator}->get_next_key( $obj );
}