This is a document describing the internal workings of L<DBM::Deep/>. It is
not necessary to read this document if you only intend to be a user. This
document is intended for people who either want a deeper understanding of the
specifics of how L<DBM::Deep/> works or who wish to help program
L<DBM::Deep/>.
L<DBM::Deep/> is broken up into five classes in three inheritance hierarchies.

L<DBM::Deep/> is the parent of L<DBM::Deep::Array/> and L<DBM::Deep::Hash/>.
These classes form the immediate interface to the outside world. They are the
classes that provide the TIE mechanisms as well as the OO methods.
L<DBM::Deep::Engine/> is the layer that deals with the mechanics of reading
and writing to the file. This is where the logic of the file layout is
implemented.

L<DBM::Deep::File/> is the layer that deals with the physical file. As a
singleton that every other object has a reference to, it also provides a place
to handle datastructure-wide items, such as transactions.
DBM::Deep uses a tagged file layout. Every section has a tag, a size, and then
the data.

=item * File Signature

The first four bytes are 'DPDB' in network byte order, signifying that this is
a DBM::Deep file.

=item * File tag/size

This is the tagging of the file header. The file used by versions prior to
1.00 had a different fifth byte, allowing the difference to be determined.

=item * Version

This is four bytes containing the file version. This lets the file format
change over time.

=item * Constants

These are the file-wide constants that determine how the file is laid out.
They can only be set upon file creation.

=item * Transaction information

The current running transactions are stored here, as is the next transaction
ID.

=item * Freespace information

Pointers into the next free sectors of the various sector sizes (Index,
Bucketlist, and Data) are stored here.
The Index parts can be tagged either as Hash, Array, or Index. The latter
occurs when a bucketlist grows too large and must be reindexed. The others
are the root index for their respective datatypes. The index consists of a
tag, a size, and then 256 sections containing file locations. Each section
corresponds to each value representable in a byte.

The index is used as follows - whenever a hashed key is being looked up, the
first byte is used to determine which location to go to from the root index.
Then, if that's also an index, the second byte is used, and so forth until a
bucketlist is reached.
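The traversal just described can be sketched as follows. The in-memory dicts here stand in for on-disk index sectors and are purely illustrative; the real engine follows file locations instead.

```python
# Sketch of the index walk: successive bytes of the key's MD5 select one of
# 256 slots at each index level until a non-index node (a bucketlist) is hit.
import hashlib

def find_bucketlist(root_index, key):
    digest = hashlib.md5(key).digest()
    node = root_index
    for byte in digest:
        if node['type'] != 'index':
            break                      # reached a bucketlist
        node = node['slots'][byte]     # one of 256 child locations
    return node
```

Because each level consumes one byte of the 16-byte MD5, the tree can be at most 16 levels deep.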
This is the part that contains the link to the data section. A bucketlist
defaults to being 16 buckets long (modifiable by the I<max_buckets>
parameter used when creating a new file). Each bucket contains an MD5 and a
location of the appropriate key section.
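A lookup within a bucketlist is then a linear scan of those buckets, comparing the full MD5. A minimal sketch, with buckets modeled as (md5, location) tuples rather than the on-disk layout:

```python
# Sketch of scanning a bucketlist: each of the (default 16) buckets holds an
# MD5 and a file location; the key's MD5 is compared against each in turn.
import hashlib

def lookup_in_bucketlist(buckets, key):
    want = hashlib.md5(key).digest()
    for md5, location in buckets:
        if md5 == want:
            return location
    return None  # key not present in this bucketlist
```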
This is the part that handles transactional awareness. There are
I<max_buckets> sections. Each section contains the location of the data
section, a transaction ID, and whether that transaction considers this key to
be deleted or not.
This is the part that actually stores the key, value, and class (if
appropriate). The layout is:

=item * length of the value

=item * the actual value

=item * length of the key

=item * the actual key

=item * a byte indicating if this value has a classname

=item * the classname (if one is there)

The key is stored after the value because the value is requested more often
than the key.
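The record layout above can be sketched as a simple serializer. This is an illustrative Python sketch, not the engine's code: the width of the length fields (four bytes here) and the flag byte values are assumptions, since the document does not specify them.

```python
# Sketch of the data-section record layout: value length + value, key length
# + key, a has-classname flag byte, then the classname if the flag is set.
# Four-byte big-endian lengths and the 0/1 flag values are assumed.
import struct

def pack_record(key, value, classname=None):
    out = struct.pack('>I', len(value)) + value   # value first: fetched more often
    out += struct.pack('>I', len(key)) + key
    if classname is None:
        out += b'\x00'                            # flag byte: no classname
    else:
        out += b'\x01' + classname                # flag byte + classname
    return out
```

Putting the value first means an ordinary fetch can stop reading as soon as the value bytes are in hand.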
L<DBM::Deep/> is written completely in Perl. It is also a multi-process DBM
that uses the datafile as a method of synchronizing between multiple
processes. This is unlike most RDBMSes, such as MySQL and Oracle. Furthermore,
unlike all RDBMSes, L<DBM::Deep/> stores both the data and the structure of
that data as it would appear in a Perl program.
DBM::Deep attempts to be CPU-light. As it stores all the data on disk,
DBM::Deep is I/O-bound, not CPU-bound.
DBM::Deep uses extremely little RAM relative to the amount of data you can
access. You can iterate through a million keys (using C<each()>) without
increasing your memory usage at all.
DBM::Deep is I/O-bound, pure and simple. The faster your disk, the faster
DBM::Deep will be. Currently, when performing C<my $x = $db-E<gt>{foo}>, there
are a minimum of 4 seeks and 1332 + N bytes read (where N is the length of your
data). (All values assume a medium filesize.) The actions taken are:
=item 1 Lock the file

=item 2 Perform a stat() to determine if the inode has changed

=item 3 Go to the primary index for the $db (1 seek)

=item 4 Read the tag/size of the primary index (5 bytes)

=item 5 Read the body of the primary index (1024 bytes)

=item 6 Go to the bucketlist for this MD5 (1 seek)

=item 7 Read the tag/size of the bucketlist (5 bytes)

=item 8 Read the body of the bucketlist (144 bytes)

=item 9 Go to the keys location for this MD5 (1 seek)

=item 10 Read the tag/size of the keys section (5 bytes)

=item 11 Read the body of the keys location (144 bytes)

=item 12 Go to the data section that corresponds to this transaction ID. (1 seek)

=item 13 Read the tag/size of the data section (5 bytes)

=item 14 Read the value for this data (N bytes)

=item 15 Unlock the file
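The byte count quoted above can be checked directly from the reads in that list: four 5-byte tag/size reads, a 1024-byte index body, two 144-byte bodies, and the N-byte value.

```python
# Tallying the fixed-size reads from the fetch sequence above:
# they sum to 1332 bytes, plus the N bytes of the value itself.
TAG_SIZE = 5
reads = [
    TAG_SIZE, 1024,   # primary index: tag/size + body
    TAG_SIZE, 144,    # bucketlist: tag/size + body
    TAG_SIZE, 144,    # keys section: tag/size + body
    TAG_SIZE,         # data section tag/size; the value adds N more bytes
]
assert sum(reads) == 1332
```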
Every additional level of indexing (if there are enough keys) requires an
additional seek and the reading of 1029 additional bytes. If the value is
blessed, an additional 1 seek and 9 + M bytes are read (where M is the length
of the classname).

Arrays are (currently) even worse because they're considered "funny hashes"
with the length stored as just another key. This means that if you do any sort
of lookup with a negative index, this entire process is performed twice - once
for the length and once for the value.
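The "funny hash" cost can be sketched as follows; the dict stands in for the on-disk array, and the 'length' key name is illustrative, not the actual stored key.

```python
# Sketch of negative-index array access over a hash-like store: resolving a
# negative index costs one full lookup for the length, then another for the
# translated element key.
def array_fetch(funny_hash, index):
    if index < 0:
        length = funny_hash['length']   # first full lookup
        index += length                 # translate -1 -> length - 1, etc.
    return funny_hash[str(index)]       # second lookup for the element

arr = {'length': 3, '0': 'a', '1': 'b', '2': 'c'}
```

A non-negative index skips the first lookup entirely, which is why only negative indices pay double.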
Obviously, DBM::Deep isn't going to be as fast as some C-based DBMs, such as
the almighty I<BerkeleyDB>. But it makes up for it in features like true
multi-level hash/array support and cross-platform FTPable files. Even so,
DBM::Deep is still pretty fast, and the speed stays fairly consistent, even
with huge databases. Here is some test data:
  Adding 1,000,000 keys to new DB file...

  At 100 keys, avg. speed is 2,703 keys/sec
  At 200 keys, avg. speed is 2,642 keys/sec
  At 300 keys, avg. speed is 2,598 keys/sec
  At 400 keys, avg. speed is 2,578 keys/sec
  At 500 keys, avg. speed is 2,722 keys/sec
  At 600 keys, avg. speed is 2,628 keys/sec
  At 700 keys, avg. speed is 2,700 keys/sec
  At 800 keys, avg. speed is 2,607 keys/sec
  At 900 keys, avg. speed is 2,190 keys/sec
  At 1,000 keys, avg. speed is 2,570 keys/sec
  At 2,000 keys, avg. speed is 2,417 keys/sec
  At 3,000 keys, avg. speed is 1,982 keys/sec
  At 4,000 keys, avg. speed is 1,568 keys/sec
  At 5,000 keys, avg. speed is 1,533 keys/sec
  At 6,000 keys, avg. speed is 1,787 keys/sec
  At 7,000 keys, avg. speed is 1,977 keys/sec
  At 8,000 keys, avg. speed is 2,028 keys/sec
  At 9,000 keys, avg. speed is 2,077 keys/sec
  At 10,000 keys, avg. speed is 2,031 keys/sec
  At 20,000 keys, avg. speed is 1,970 keys/sec
  At 30,000 keys, avg. speed is 2,050 keys/sec
  At 40,000 keys, avg. speed is 2,073 keys/sec
  At 50,000 keys, avg. speed is 1,973 keys/sec
  At 60,000 keys, avg. speed is 1,914 keys/sec
  At 70,000 keys, avg. speed is 2,091 keys/sec
  At 80,000 keys, avg. speed is 2,103 keys/sec
  At 90,000 keys, avg. speed is 1,886 keys/sec
  At 100,000 keys, avg. speed is 1,970 keys/sec
  At 200,000 keys, avg. speed is 2,053 keys/sec
  At 300,000 keys, avg. speed is 1,697 keys/sec
  At 400,000 keys, avg. speed is 1,838 keys/sec
  At 500,000 keys, avg. speed is 1,941 keys/sec
  At 600,000 keys, avg. speed is 1,930 keys/sec
  At 700,000 keys, avg. speed is 1,735 keys/sec
  At 800,000 keys, avg. speed is 1,795 keys/sec
  At 900,000 keys, avg. speed is 1,221 keys/sec
  At 1,000,000 keys, avg. speed is 1,077 keys/sec
This test was performed on a PowerMac G4 1GHz running Mac OS X 10.3.2 & Perl
5.8.1, with an 80GB Ultra ATA/100 HD spinning at 7200RPM. The hash keys and
values were between 6 and 12 characters in length. The DB file ended up at
210MB. Run time was 12 min 3 sec.
One of the great things about L<DBM::Deep/> is that it uses very little memory.
Even with huge databases (1,000,000+ keys) you will not see much increased
memory on your process. L<DBM::Deep/> relies solely on the filesystem for storing
and fetching data. Here is output from I<top> before even opening a database
file:
  PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
  22831 root 11 0 2716 2716 1296 R 0.0 0.2 0:07 perl
Basically the process is taking 2,716K of memory. And here is the same
process after storing and fetching 1,000,000 keys:
  PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
  22831 root 14 0 2772 2772 1328 R 0.0 0.2 13:32 perl
Notice the memory usage increased by only 56K. The test was performed on a
700MHz x86 box running RedHat Linux 7.2 & Perl 5.6.1.