=head0 Adding transactions to DBM::Deep

=head1 What is DBM::Deep?

L<DBM::Deep> is a module written completely in Perl that provides a way of storing Perl datastructures (scalars, hashes, and arrays) on disk instead of in memory. The datafile produced can be ftp'ed from one machine to another, regardless of OS or Perl version. There are several reasons why someone would want to do this.

=over 4

=item * Transparent Persistence

This is the ability to save a set of data structures to disk and retrieve them later without the vast majority of the program even knowing that the data is persisted. Furthermore, the datastructure is persisted immediately and not at set marshalling periods.

=item * Huge datastructures

Normally, datastructures are limited by the size of the server's RAM. L<DBM::Deep> allows the size of a given datastructure to be limited by disk space instead.

=item * IPC

While not a common use, this allows for inter-process communication without worrying about the specifics of how a given OS handles IPC.

=back

And, with the release of 1.00, there is now a fourth reason - software-transactional memory, or STM.

=head1 What are transactions?

Originally from the database world, a transaction is a way of isolating the effects of a given set of actions, then applying them all at once. It's a way of saying "I'm going to try the following steps, see if I like the result, and if I do, I want everyone else looking at this datastore to see the results immediately."

The most common example is taken from banking. Let's say that an application receives a request to have Joe pay Bob five zorkmids. Without transactions, the application would take the money from Joe's account, then add the money to Bob's account. But what happens if the application crashes after debiting Joe, but before crediting Bob? The application has made money disappear. Or, vice versa, if Bob is credited before Joe is debited, the application has created money.
With a transaction wrapping the money transfer, if the application crashes in the middle, it's as if the action never happened. So, when the application recovers from the crash, Joe and Bob still have the same amounts of money in their accounts as they did before, the transaction can restart, and Bob can finally receive his zorkmids.

More formally, transactions are generally considered to be proper when they are ACID-compliant. ACID is an acronym that stands for the following:

=over 4

=item * Atomic

Either every change happens or none of the changes happen.

=item * Consistent

When the transaction begins and when it is committed, the database must be in a legal state. This restriction doesn't apply to L<DBM::Deep> very much.

=item * Isolated

As far as a transaction is concerned, it is the only thing running against the database while it is running. Unlike most RDBMSes, L<DBM::Deep> provides the strongest isolation level possible.

=item * Durable

Once the database says that a commit has happened, the commit is guaranteed, regardless of whatever happens afterwards.

=back

=head1 Why add them to DBM::Deep?

The ability to have actions occur either I<atomically> (as in the previous example) or in I<isolation> from the rest of the users of the data is a powerful thing. This allows for a certain amount of safety and predictability in how data transformations occur. Imagine, for example, that you have a set of calculations that will update various variables. However, there are some situations that will cause you to throw away all results and start over with a different seed. Without transactions, you would have to put everything into temporary variables, then transfer the values when the calculations were found to be successful. With STM, you start a transaction and do your thing within it. If the calculations succeed, you commit. If they fail, you rollback and try again.

If you're thinking that this is very similar to how SVN or CVS works, you're absolutely correct - they are transactional in the exact same way.
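To make that workflow concrete, here is a minimal sketch using the transaction methods described later in this article (C<begin_work()>, C<commit()>, and C<rollback()>); the C<run_calculations()> routine is a hypothetical stand-in for your own code:

    use DBM::Deep;

    my $db = DBM::Deep->new( 'calculations.db' );

    $db->begin_work();

    eval {
        # Hypothetical: updates values in $db, and dies if the results
        # must be thrown away and restarted with a different seed.
        run_calculations( $db );
    };

    if ( $@ ) {
        $db->rollback();    # as if the updates never happened
    }
    else {
        $db->commit();      # everyone else sees all the updates at once
    }

Note that the temporary-variable juggling disappears: the calculations write directly into C<$db>, and the transaction machinery takes care of either publishing or discarding the results.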
=head1 How it happened

=head2 The backstory

The addition of transactions to L<DBM::Deep> has easily been the single most complex software endeavor I've ever undertaken. The first step was to figure out exactly how transactions were going to work. After several spikes, the best design seemed to look to SVN instead of relational databases. The more I investigated, the more I ran up against the object-relational impedance mismatch, this time in terms of being able to translate designs.

In the relational world, transactions are generally implemented either as row-level locks or using MVCC (multi-version concurrency control). Both of these assume that there is a I<row>, or singular object, that can be locked transparently to everything else. This doesn't translate to a fractally repeating structure like a hash or an array. However, the design used by SVN deals with directories and files, which correspond very closely to hashes and hashkeys. In SVN, the modifications are stored within the file's structure. Translating this to hashes and hashkeys, this means that transactional information should be stored in the keys. The entire datafile is then unaware of anything to do with transactions, except for the key's data structure within the bucket.

=head2 DBM::Deep's file structure

L<DBM::Deep>'s file structure is record-based. The key (or array index - arrays are currently just funny hashes internally) is hashed using MD5 and then stored in a cascade of Index and Bucketlist records. The Bucketlist record stores the actual key string and pointers to where the data records are stored. The data records themselves are one of Null, Scalar, or Reference. A Null represents an I<undef>, a Scalar represents a string (numbers are stringified for simplicity) and is allocated in 256-byte chunks, and a Reference represents an array or hash reference and contains a pointer to an Index and Bucketlist cascade of its own.
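As a toy model of this lookup path - not L<DBM::Deep>'s actual code - one can picture each level of the Index cascade fanning out on successive bytes of the key's MD5 digest until a Bucketlist is reached:

    use Digest::MD5 qw( md5 );

    # Hash the key; MD5 always yields exactly 16 bytes,
    # regardless of key length.
    my $digest = md5( 'some_key' );

    # Hypothetical cascade walk: each Index level selects a slot using
    # one byte of the digest (the real fan-out is an engine detail).
    my @slots = unpack( 'C*', $digest );
    printf "depth %d uses slot %d\n", $_, $slots[$_] for 0 .. 2;

Because the digest is fixed-length, every key occupies the same amount of space in the Index/Bucketlist cascade no matter how long the key string is.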
=head2 Transactions in the keys

The first pass was to expand the Bucketlist sector from a simple key / datapointer mapping to a more complex key / transaction / datapointer mapping. Initially, I interposed a Transaction record that the Bucketlist pointed to; that record then contained the transaction / datapointer mapping. This had the advantage of changing nothing except adding one new sector type and the handling for it. It was very quickly merged into the Bucketlist record to simplify the resulting code.

This first step got me to the point where I could pass the following test:

    my $db1 = DBM::Deep->new( $filename );
    my $db2 = DBM::Deep->new( $filename );

    $db1->{abc} = 'foo';

    is( $db1->{abc}, 'foo' );
    is( $db2->{abc}, 'foo' );

    $db1->begin_work();

    is( $db1->{abc}, 'foo' );
    is( $db2->{abc}, 'foo' );

    $db1->{abc} = 'floober';

    is( $db1->{abc}, 'floober' );
    is( $db2->{abc}, 'foo' );

Just that much was a major accomplishment. The first pass only looked in the transaction's spot in the bucket for that key. And, that passed my first tests because I didn't check that C<$db1-E<gt>{abc}> was still 'foo' I<before> modifying it in the transaction. To pass that test, the retrieval code needed to look first in the transaction's spot and, if that spot had never been assigned to, look at the spot for the HEAD.

=head2 The concept of the HEAD

This is a concept borrowed from SVN. In SVN, the HEAD revision is the latest revision checked into the repository. When you make a local modification, you're making a modification to the HEAD. Then, you choose to either check in your code (commit()) or revert (rollback()).

In L<DBM::Deep>, I chose to make the HEAD transaction ID 0. This has several benefits:

=over 4

=item * Easy identification of a transaction

A simple boolean test of the transaction ID runs code if and only if we are in a running transaction, because the HEAD's ID of 0 is false.
=item * The HEAD is the first bucket entry

In a given bucket, the HEAD is the first datapointer because we multiply the size of the transactional bookkeeping by the transaction ID to find the offset to seek into the file. Since the HEAD's ID is 0, its entry is always at offset 0.

=back

=head2 Protection from changes

Let's assume that a transaction is running in one process and another process is modifying the same area of the data. The only way process B can notify process A that a change has occurred is through their common point - the DBM file. Because of this, every process that changes the HEAD needs to protect all currently running transactions by copying the pointer to the original value into every transaction which hasn't already modified this entry. (If it has, then the new value shouldn't overwrite the transaction's modification.) This is the key piece for providing I<isolation>.

=head2 Tracking modified buckets

Rolling back changes is very simple - just don't apply them to the HEAD. The next time that transaction ID is reused, the changes will be ignored (q.v. L</"Staleness counters">). Committing, however, requires that all the changes be transferred from the bucket entry for the given transaction ID to the entry for the HEAD.

=head2 Deleted marker

Transactions are performed copy-on-write. This means that if there isn't an entry for a given transaction, the HEAD is looked at. This doesn't work if a key has been deleted within the transaction. So, the entry must be marked as deleted within the transaction so that the HEAD isn't checked.

Likewise, when a new key is created in a transaction, the HEAD doesn't have an entry for that key. Consider the following situation:

    ok( !exists $db1->{foo} );
    ok( !exists $db2->{foo} );

    $db1->begin_work();

    $db1->{foo} = 'bar';

    ok( !exists $db2->{foo} );

The entry in the HEAD for 'foo' needs to be marked as deleted so that transactions which don't have 'foo' don't find something in the HEAD.

=head2 Freespace management

The second major piece of the 1.00 release was freespace management.
In pre-1.00 versions of L<DBM::Deep>, the space used by deleted keys was not recycled. While this was always a requested feature, the complexity required to implement freespace meant that it needed to wait for a complete rewrite of several pieces - the same rewrite transactions required.

Freespace is implemented by regularizing all the records so that L<DBM::Deep> only has three different record sizes - Index, BucketList, and Data. Each record type has a fixed length based on various parameters the L<DBM::Deep> datafile is created with. (In order to accommodate values of various sizes, Data records chain.) Whenever a sector is freed, it's added to a freelist for that sector's size. Whenever a new sector is requested, the freelist is checked first. If the freelist has a sector of the right size, it's reused; otherwise, a new sector is added to the end of the datafile.

Freespace management did bring up another issue - staleness. It is possible to have a pointer to a record in memory. If that record is deleted, then reused, the pointer in memory has no way of determining that it was deleted and re-added rather than modified. So, a staleness counter was added which is incremented every time the sector is reused through the freelist. If you then attempt to access that stale record, L<DBM::Deep> returns undef because, at some point, the entry was deleted.

=head2 Staleness counters

Once they were implemented for freespace management, staleness counters proved to be a very powerful concept for transactions themselves. Back in L</"Protection from changes">, I mentioned that other processes modifying the HEAD will protect all running transactions from their effects. This provides I<isolation>. But the running transaction doesn't know about these entries. If they're not cleaned up, they will be seen the next time a transaction reuses that transaction ID.

By providing a staleness counter for transactions, the cost of cleaning up finished transactions is deferred until the space is actually used again. This comes at the cost of less-than-optimal space utilization.
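The staleness check can be pictured with a toy in-memory model - this is an illustration of the idea, not L<DBM::Deep>'s actual code. A sector carries a counter that is bumped each time it is reused through the freelist, and a saved pointer remembers the counter value it last saw:

    # A "sector" with a staleness counter, and a pointer into it that
    # records the counter value at the time the pointer was taken.
    my $sector  = { staleness => 0, value => 'foo' };
    my $pointer = { sector => $sector, seen => $sector->{staleness} };

    # The sector is freed, then reused from the freelist for new data;
    # reuse bumps the staleness counter.
    $sector->{staleness}++;
    $sector->{value} = 'bar';

    # A read through the saved pointer compares counters. A mismatch
    # means the record was deleted at some point, so we return undef
    # rather than someone else's data.
    my $read = $pointer->{seen} == $pointer->{sector}{staleness}
             ? $pointer->{sector}{value}
             : undef;

    print defined $read ? "live: $read\n" : "stale: undef\n";

The same trick applies to transaction IDs: a reused ID with a newer counter simply ignores the leftover entries from the previous transaction, which is what makes the deferred cleanup safe.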
Changing this in the future would be completely transparent to users, so I felt it was an acceptable tradeoff for delivering working code quickly.

=head1 Conclusion

In the end, transactions came to L<DBM::Deep> not from the row-locking or MVCC designs of the relational world, but from SVN's model of copy-on-write modifications against a HEAD, stored alongside each key. Supporting that design required regularized record sizes, freespace management, and staleness counters - easily the most complex work I've ever done - but the result is ACID-style software-transactional memory for plain Perl datastructures.

=cut