From: rkinyon
Date: Wed, 24 Jan 2007 02:49:27 +0000 (+0000)
Subject: r14927@rob-kinyons-computer: rob | 2007-01-23 20:49:11 -0500
X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=commitdiff_plain;h=4a38586b10edcfce849afe6042788567b4e680a4;p=dbsrgits%2FDBM-Deep.git

r14927@rob-kinyons-computer: rob | 2007-01-23 20:49:11 -0500
POD and article updates
---

diff --git a/article.pod b/article.pod
index 7d45695..5441cef 100644
--- a/article.pod
+++ b/article.pod
@@ -2,7 +2,7 @@
 
 =head1 What is DBM::Deep?
 
-L<DBM::Deep> is a module written completely in Perl that provides a way of
+L<DBM::Deep> is a module written completely in Perl that provides a way of
 storing Perl datastructures (scalars, hashes, and arrays) on disk instead of
 in memory. The datafile produced can be ftp'ed from one machine to
 another, regardless of OS or Perl version. There are several reasons why
@@ -20,7 +20,7 @@ set marshalling periods.
 
 =item * Huge datastructures
 
 Normally, datastructures are limited by the size of RAM the server has.
-L<DBM::Deep> allows for the size of a given datastructure to be limited by disk
+L<DBM::Deep> allows for the size of a given datastructure to be limited by disk
 instead.
 
 =item * IPC
@@ -66,12 +66,12 @@ Either every change happens or none of the changes happen.
 
 =item * Consistent
 
 When the transaction begins and when it is committed, the database must be in
-a legal state. This restriction doesn't apply to L<DBM::Deep> very much.
+a legal state. This restriction doesn't apply to L<DBM::Deep> very much.
 
 =item * Isolated
 
 As far as a transaction is concerned, it is the only thing running against the
-database while it is running. Unlike most RDBMSes, L<DBM::Deep> provides the
+database while it is running. Unlike most RDBMSes, L<DBM::Deep> provides the
 strongest isolation level possible.
 
 =item * Durable
@@ -101,7 +101,7 @@ way.
 
 =head2 The backstory
 
-The addition of transactions to L<DBM::Deep> has easily been the single most
+The addition of transactions to L<DBM::Deep> has easily been the single most
 complex software endeavor I've ever undertaken.
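The isolation and atomicity described above can be made concrete with a toy example. The following is a minimal, self-contained Perl sketch, not DBM::Deep's actual implementation (the C<TinyTxn> package and its methods are invented for illustration): writes inside a transaction land in a private overlay, so the base hash, playing the role of the HEAD, never sees them until C<commit()>, and C<rollback()> discards them all at once.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative only: a copy-on-write transaction overlay. Writes go to a
# private layer; the base hash (the "HEAD") is untouched until commit().
package TinyTxn;

sub new {
    my ($class, $base) = @_;
    return bless { base => $base, overlay => {}, deleted => {} }, $class;
}

sub set { my ($self, $k, $v) = @_; delete $self->{deleted}{$k}; $self->{overlay}{$k} = $v }
sub del { my ($self, $k) = @_; delete $self->{overlay}{$k}; $self->{deleted}{$k} = 1 }

sub get {
    my ($self, $k) = @_;
    return undef if $self->{deleted}{$k};
    return exists $self->{overlay}{$k} ? $self->{overlay}{$k} : $self->{base}{$k};
}

# Atomicity: either every overlay change lands in the base, or none does.
sub commit {
    my $self = shift;
    delete @{ $self->{base} }{ keys %{ $self->{deleted} } };
    @{ $self->{base} }{ keys %{ $self->{overlay} } } = values %{ $self->{overlay} };
    %{ $self->{overlay} } = ();
    %{ $self->{deleted} } = ();
}

sub rollback { my $self = shift; %{ $self->{overlay} } = (); %{ $self->{deleted} } = () }

package main;

my %head = ( foo => 1 );
my $txn  = TinyTxn->new( \%head );
$txn->set( foo => 2 );
print "isolated: HEAD still sees ",  $head{foo}, "\n";    # 1
$txn->commit;
print "committed: HEAD now sees ", $head{foo}, "\n";      # 2
```

Readers of the base hash see the old value right up until C<commit()> — the overlay is what gives the transaction the illusion of being the only thing running.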
 The first step was to figure out exactly how transactions were going to
 work. After several spikes, the best design seemed to look to SVN
@@ -124,7 +124,7 @@ for the key's data structure within the bucket.
 
 =head2 DBM::Deep's file structure
 
-L<DBM::Deep>'s file structure is a record-based structure. The key (or array
+L<DBM::Deep>'s file structure is a record-based structure. The key (or array
 index - arrays are currently just funny hashes internally) is hashed using MD5
 and then stored in a cascade of Index and Bucketlist records. The bucketlist
 record stores the actual key string and pointers to where the data records are
@@ -178,7 +178,7 @@ revision checked into the repository. When you do a local
 modification, you're doing a modification to the HEAD. Then, you choose to
 either check in your code (commit()) or revert (rollback()).
 
-In L<DBM::Deep>, I chose to make the HEAD transaction ID 0. This has several
+In L<DBM::Deep>, I chose to make the HEAD transaction ID 0. This has several
 benefits:
 
 =over 4
@@ -239,14 +239,14 @@ transactions which don't have 'foo' don't find something in the HEAD.
 
 =head2 Freespace management
 
 The second major piece to the 1.00 release was freespace management. In
-pre-1.00 versions of L<DBM::Deep>, the space used by deleted keys would not be
+pre-1.00 versions of L<DBM::Deep>, the space used by deleted keys would not be
 recycled. While always a requested feature, the complexity required to
 implement freespace meant that it needed to wait for a complete rewrite of
 several pieces, such as for transactions.
 
-Freespace is implemented by regularizing all the records so that L<DBM::Deep>
+Freespace is implemented by regularizing all the records so that L<DBM::Deep>
 only has three different record sizes - Index, BucketList, and Data. Each
-record type has a fixed length based on various parameters the L<DBM::Deep>
+record type has a fixed length based on various parameters the L<DBM::Deep>
 datafile is created with. (In order to accommodate values of various sizes,
 Data records chain.)
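The Index/Bucketlist cascade described above can be sketched with core Perl's Digest::MD5. This is an in-memory toy, not DBM::Deep's on-disk record format (the two-level depth and the C<store>/C<fetch> helpers are assumptions for illustration): each level of the cascade consumes one byte of the key's MD5 digest, and the leaf "bucketlist" stores the full key string, which is needed to disambiguate keys whose digests share the consumed prefix.

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5);

# Toy in-memory index cascade: each level consumes one byte of the 16-byte
# MD5 digest; leaves are "bucketlists" that keep the real key string.
my %root;

sub store {
    my ($key, $value, $depth) = @_;
    $depth //= 2;                          # levels of cascade (assumption)
    my @bytes = unpack 'C*', md5($key);    # 16 digest bytes, 0..255 each
    my $node  = \%root;
    $node = $node->{ $bytes[$_] } //= {} for 0 .. $depth - 1;
    $node->{buckets}{$key} = $value;       # bucketlist keeps full key string
}

sub fetch {
    my ($key, $depth) = @_;
    $depth //= 2;
    my @bytes = unpack 'C*', md5($key);
    my $node  = \%root;
    for (0 .. $depth - 1) {
        $node = $node->{ $bytes[$_] } or return undef;
    }
    return $node->{buckets}{$key};
}

store( apple => 'red' );
store( pear  => 'green' );
print fetch('apple'), "\n";    # red
```

Because MD5 spreads keys uniformly, each index level fans out over up to 256 children, which is what keeps lookups shallow regardless of how many keys the store holds.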
 Whenever a sector is freed, it's added to a freelist of that sector's
 size. Whenever a new sector is requested, the freelist is checked
@@ -257,11 +257,25 @@
 Freespace management did bring up another issue - staleness. It is possible
 to have a pointer to a record in memory. If that record is deleted, then
 reused, the pointer in memory has no way of determining that it was deleted
 and re-added vs. modified. So, a staleness counter was added which is incremented
-every time the sector is reused through the freelist.
+every time the sector is reused through the freelist. If you then attempt to
+access that stale record, L<DBM::Deep> returns undef because, at some point,
+the entry was deleted.
 
 =head2 Staleness counters
 
-The measures taken for isolation can cause staleness issues, as well.
+Once it was implemented for freespace management, staleness counters proved to
+be a very powerful concept for transactions themselves. Earlier, I mentioned
+that other processes modifying the HEAD will protect all running transactions
+from their effects. This provides I<isolation>. But, the running transaction
+doesn't know about these entries. If they're not cleaned up, they will be seen
+the next time a transaction uses that transaction ID.
+
+By providing a staleness counter for transactions, the cost of cleaning up
+finished transactions is deferred until the space is actually used again. This
+is at the cost of having less-than-optimal space utilization. Changing this in
+the future would be completely transparent to users, so I felt it was an
+acceptable tradeoff for delivering working code quickly.
 
 =head1 Conclusion

diff --git a/lib/DBM/Deep.pod b/lib/DBM/Deep.pod
index f690084..d1dbe4e 100644
--- a/lib/DBM/Deep.pod
+++ b/lib/DBM/Deep.pod
@@ -176,7 +176,7 @@ Specifies whether locking is to be enabled. DBM::Deep uses Perl's flock()
 function to lock the database in exclusive mode for writes, and shared mode
 for reads. Pass any true value to enable.
 This affects the base DB handle I<and any child hashes or arrays> that use
 the same DB file. This is an
-optional parameter, and defaults to 0 (disabled). See L<LOCKING> below for
+optional parameter, and defaults to 1 (enabled). See L<LOCKING> below for
 more.
 
 =item * autoflush
@@ -184,8 +184,8 @@ more.
 
 Specifies whether autoflush is to be enabled on the underlying filehandle.
 This obviously slows down write operations, but is required if you may have
 multiple processes accessing the same DB file (also consider enabling
 I<locking>).
-Pass any true value to enable. This is an optional parameter, and defaults to 0
-(disabled).
+Pass any true value to enable. This is an optional parameter, and defaults to 1
+(enabled).
 
 =item * filter_*
 
@@ -514,8 +514,9 @@ Here are some examples of using arrays:
 
 =head1 LOCKING
 
-Enable automatic file locking by passing a true value to the C<locking>
-parameter when constructing your DBM::Deep object (see L<SETUP> above).
+Enable or disable automatic file locking by passing a boolean value to the
+C<locking> parameter when constructing your DBM::Deep object (see L<SETUP>
+above).
 
   my $db = DBM::Deep->new(
       file => "foo.db",
@@ -596,6 +597,12 @@ B<Note:> Make sure your existing structure has no circular references in it.
 These will cause an infinite loop when importing. There are plans to fix this
 in a later release.
 
+B<Note:> With the addition of transactions, importing is performed within a
+transaction, then immediately committed upon success (and rolled back upon
+failure). As a result, you cannot call C<import()> from within a transaction.
+This restriction will be lifted when subtransactions are added in a future
+release.
+
 =head2 EXPORTING
 
 Calling the C<export()> method on an existing DBM::Deep object will return
@@ -867,7 +874,8 @@ every time you access this file. Otherwise, the default (MD5) will be used.
 
 B<Note:> DBM::Deep 0.99_03 has turned off circular references pending
 evaluation of some edge cases. I hope to be able to re-enable circular
-references in a future version prior to 1.00.
+references in a future version after 1.00.
+This means that circular references are B<not> available.
 
 DBM::Deep has B<full> support for circular references. Meaning you can have
 a nested hash key or array element that points to a parent object.
 
@@ -953,18 +961,47 @@ an intermediate variable than to re-look it up every time. Thus
 
 =item * Make your file as tight as possible
 
 If you know that you are not going to use more than 65K in your database,
-consider using the C<pack_size =E<gt> 'small'> option. This will instruct
+consider using the C<pack_size =E<gt> 'small'> option. This will instruct
 DBM::Deep to use 16bit addresses, meaning that the seek times will be less.
-The same goes with the number of transactions. num_txns defaults to 16. If you
-can set that to 1 or 2, that will reduce the file-size considerably, thus
-reducing seek times.
 
 =back
 
-=head1 CAVEATS / ISSUES / BUGS
+=head1 TODO
+
+The following are items that are planned to be added in future releases. These
+are separate from the caveats and bugs listed below.
+
+=head2 SUB-TRANSACTIONS
+
+Right now, you cannot run a transaction within a transaction. Removing this
+restriction is technically straightforward, but the combinatorial explosion of
+possible use cases hurts my head. If this is something you want to see
+immediately, please submit many testcases.
+
+=head2 CACHING
+
+If a user is willing to assert upon opening the file that this process will be
+the only consumer of that datafile, then there are a number of caching
+possibilities that can be taken advantage of. This does, however, mean that
+DBM::Deep is more vulnerable to losing data due to unflushed changes. It also
+means a much larger in-memory footprint. As such, it's not clear exactly how
+this should be done. Suggestions are welcome.
+
+=head2 RAM-ONLY
+
+The techniques used in DBM::Deep simply require a seekable contiguous
+datastore. This could just as easily be a large string as a file. By using
+substr, the STM capabilities of DBM::Deep could be used within a
+single process. I have no idea how I'd specify this, though.
+Suggestions are welcome.
+
+=head1 CAVEATS, ISSUES & BUGS
 
-This section describes all the known issues with DBM::Deep. It you have found
-something that is not listed here, please send e-mail to the author.
+This section describes all the known issues with DBM::Deep. These are issues
+that are either intractable or depend on some feature within Perl working
+exactly right. If you have found something that is not listed below, please
+send me an e-mail. Likewise, if you think you know of a way around one of
+these issues, please let me know.
 
 =head2 REFERENCES
 
@@ -1020,7 +1057,7 @@ the reference.
 
 The current level of error handling in DBM::Deep is minimal. Files I<are>
 checked for a 32-bit signature when opened, but other corruption in files can cause
-segmentation faults. DBM::Deep may try to seek() past the end of a file, or get
+segmentation faults. DBM::Deep may try to C<seek()> past the end of a file, or get
 stuck in an infinite loop depending on the level of corruption. File write
 operations are not checked for failure (for speed), so if you happen to run
 out of disk space, DBM::Deep will probably fail in a bad way. These things will
@@ -1031,16 +1068,16 @@ be addressed in a later version of DBM::Deep.
 
 Beware of using DBM::Deep files over NFS. DBM::Deep uses flock(), which works
 well on local filesystems, but will NOT protect you from file corruption over
 NFS. I've heard about setting up your NFS server with a locking daemon, then
-using lockf() to lock your files, but your mileage may vary there as well.
+using C<lockf()> to lock your files, but your mileage may vary there as well.
 From what I understand, there is no real way to do it. However, if you need
 access to the underlying filehandle in DBM::Deep for using some other kind of
-locking scheme like lockf(), see the L<LOW-LEVEL ACCESS> section above.
+locking scheme like C<lockf()>, see the L<LOW-LEVEL ACCESS> section above.
 
 =head2 COPYING OBJECTS
 
 Beware of copying tied objects in Perl. Very strange things can happen.
 Instead, use DBM::Deep's C<clone()> method which safely copies the object and
-returns a new, blessed, tied hash or array to the same level in the DB.
+returns a new, blessed and tied hash or array to the same level in the DB.
 
   my $copy = $db->clone();
 
diff --git a/lib/DBM/Deep/File.pm b/lib/DBM/Deep/File.pm
index 174033d..4fda95c 100644
--- a/lib/DBM/Deep/File.pm
+++ b/lib/DBM/Deep/File.pm
@@ -15,12 +15,12 @@ sub new {
     my $self = bless {
         autobless        => 1,
-        autoflush        => undef,
+        autoflush        => 1,
         end              => 0,
         fh               => undef,
         file             => undef,
         file_offset      => 0,
-        locking          => undef,
+        locking          => 1,
         locked           => 0,
         #XXX Migrate this to the engine, where it really belongs.
         filter_store_key => undef,