=item * Database

Most programmers hear the word "database" and think "relational database
management system" (or RDBMS). A database is a more general term meaning
"place one stores data." This can be relational, object, or something else.
The software used to manage and query a database is a "database management
system" (DBMS).

L<DBM::Deep|DBM::Deep> provides one half of a DBMS - the data storage part.
Once the data structures are on disk, L<DBM::Deep|DBM::Deep> provides the
capability to allow multiple processes to access the data. N<Presto is a start
at providing the other half of the DBMS using DBM::Deep as the engine.> An
example of using L<DBM::Deep|DBM::Deep> this way appears just after this list.

=back
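
Here is that usage in its most basic form - a minimal sketch, with an
illustrative filename:

  use strict;
  use warnings;
  use DBM::Deep;

  # The single datafile is the entire "database".
  my $db = DBM::Deep->new( "example.db" );

  # Reads and writes go to disk as they happen, so another process
  # opening the same file sees the same data.
  $db->{users} = { alice => { visits => 0 } };
  $db->{users}{alice}{visits}++;

  print $db->{users}{alice}{visits}, "\n";    # 1
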
Each bucketlist record stores the actual key string and pointers to where the
data records are stored. The data records themselves are one of Null, Scalar,
or Reference. Null represents an I<undef>, Scalar represents a string (numbers
are stringified internally for simplicity) and is allocated in fixed-size
chunks. Reference represents an array or hash reference and contains a pointer
to an Index and bucketlist cascade of its own. A Reference will also store the
class the hash or array reference is blessed into, meaning that almost all
objects can be stored and retrieved transparently.
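
As a rough illustration - not the actual on-disk layout; the type tags,
pointer widths, and chunk size here are invented for the example - a chained,
fixed-size Scalar record could be built like this:

  use strict;
  use warnings;

  use constant {
      TYPE_NULL      => 0,    # undef
      TYPE_SCALAR    => 1,    # stringified value, chunked
      TYPE_REFERENCE => 2,    # pointer to an Index/bucketlist cascade
      CHUNK_SIZE     => 256,  # illustrative chunk size
  };

  # Split a value into fixed-size chunks, each carrying a type byte and
  # a "next chunk" link (0 = end of chain), so a value of any length
  # fits into uniform records.
  sub scalar_record_chunks {
      my ($value) = @_;
      my @pieces = unpack '(a' . CHUNK_SIZE . ')*', $value;
      my @chunks;
      for my $i ( 0 .. $#pieces ) {
          my $next = $i == $#pieces ? 0 : $i + 1;    # stand-in pointer
          push @chunks,
              pack 'C N a' . CHUNK_SIZE, TYPE_SCALAR, $next, $pieces[$i];
      }
      return @chunks;
  }

  my @chunks = scalar_record_chunks( 'x' x 600 );
  print scalar(@chunks), " chunks\n";    # 600 bytes -> 3 chunks
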
=head2 Ending the spike

At this point, I had learned everything I needed from the spike. Yes, the
SVN idea looked like it was going to work. Yes, there were a lot of squibbly
details. No, it wasn't going to be finished before I left YAPC::NA. *sigh*

The biggest lessons learned from the spike were:

=over 4

=item 1 Tests are good

I seem to have to relearn this every project I work on. It's pretty sad, if
you ask me.

=item 2 The transaction sector is superfluous

As I worked with it, the transaction sector didn't add any information over
extending the actual bucket to have the transaction-to-datapointer mapping
within it. (A sketch of what that looks like follows this list.)

=back
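
A minimal sketch of that idea, using an in-memory hash to stand in for the
on-disk bucket (the field names are invented for illustration):

  use strict;
  use warnings;

  # One datapointer slot per transaction ID; slot 0 is the HEAD.
  my $bucket = {
      key          => 'foo',
      datapointers => [
          4096,    # slot 0: the HEAD's data sector
          0,       # slot 1: transaction 1 has no entry of its own
          8192,    # slot 2: transaction 2 has written its own value
      ],
  };

  # Copy-on-write read: a transaction sees its own entry if it has
  # one, and falls through to the HEAD otherwise.
  sub datapointer_for {
      my ($bucket, $txn_id) = @_;
      return $bucket->{datapointers}[$txn_id]
          || $bucket->{datapointers}[0];
  }

  print datapointer_for( $bucket, 1 ), "\n";    # 4096 - the HEAD's value
  print datapointer_for( $bucket, 2 ), "\n";    # 8192 - its own value
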
=head2 Protection from changes

After the missed test for checking that starting a transaction didn't lose the
connection to the HEAD, I started writing more and more tests, being very anal
about what I was checking. I wrote tests to check every piece of state I could
think of before and after every change in state, regardless of where the
change was made. Writing these tests immediately raised an issue with changing
the HEAD while a transaction is running. If the transaction has already edited
that key, it already has its new value. However, if it hasn't, it needs to be
protected from the change to the HEAD. This is the key piece for providing
I<Isolation>.

My first attempt to solve this problem focused on having the transaction
itself detect changes. But the primary use case for transactions is that each
transaction is going to be running in a separate process. Without implementing
IPC, the only common point between the various processes is the datafile
itself. The only process aware of the change is the process making the change.
Even though it seemed counter-intuitive, the only sane mechanism was that each
process modifying the HEAD would also protect all running transactions from
its change, if needed.
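
In sketch form, continuing the in-memory bucket illustration from earlier
(the names are still invented):

  # Before a process overwrites the HEAD's datapointer, it copies the
  # old pointer into the slot of every running transaction that hasn't
  # already modified this entry, so each transaction keeps seeing the
  # value that existed when it started.
  sub write_head {
      my ($bucket, $new_pointer, @running_txn_ids) = @_;

      my $old_pointer = $bucket->{datapointers}[0];
      for my $txn_id (@running_txn_ids) {
          # A non-zero entry is the transaction's own change and must
          # not be overwritten.
          $bucket->{datapointers}[$txn_id] ||= $old_pointer;
      }

      $bucket->{datapointers}[0] = $new_pointer;
      return;
  }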

=head2 Committing and rolling back

Now that changes can be made within a transaction, and the transaction, the
HEAD, and other transactions are all protected from one another, the next step
was to provide the ability to both commit and roll back these changes.

=head3 Rollback

Conceptually, rolling back should be the simpler of the two to implement -
just discard the changes that have been made within the transaction and
continue onward with the HEAD. And, for the first attempt, that is exactly
what I did. This meant that the following test would pass:

  $db->{foo} = 'bar';

  $db->begin_work;

  is( $db->{foo}, 'bar' );

  $db->{foo} = 'baz';

  is( $db->{foo}, 'baz' );

  $db->rollback;

  is( $db->{foo}, 'bar' );

But this didn't work out very well for repeated use of that transaction slot.
I threw a number of solutions at the problem, but none of them addressed the
real issue - knowing which use of a transaction ID made the change vs. which
use of a transaction ID was accessing the value. The eventual solution was to
reuse the staleness counter concept (q.v. L</Transactional staleness
counters>).
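
A test along these lines (hypothetical, but in the same style as the ones
above) captures the failure mode:

  $db->{foo} = 'bar';

  $db->begin_work;           # this process is given transaction slot 1
  $db->{foo} = 'baz';
  $db->rollback;             # naively: just stop using the slot

  $db->begin_work;           # the next user of slot 1...
  is( $db->{foo}, 'bar' );   # ...fails: it sees the leftover 'baz'
  $db->rollback;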

=head3 Committing

Committing is much harder than rolling back. The primary difficulty lies in
tracking exactly what this transaction has changed in order to copy those
changed bucket entries over to the HEAD. The good news is that only the actual
datapointers for that transaction need to be copied over - the actual data
sectors are left untouched.

The key to the solution lay in the decoupled nature of the code I was writing,
along with the fact that every piece of the code had access to the engine
object if needed. Committing (and rolling back) are both handled by the
Engine object. To get that information into the engine, each bucket modified
by the transaction would inform the engine object that it had been modified by
that transaction. When a commit occurs, the engine object iterates over the
modified buckets, transfers over the new datapointer, and discards the old
one.
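
As a sketch, continuing the invented in-memory structures from earlier (this
is not the Engine's real interface):

  # Each modified bucket registers itself under its transaction ID.
  sub register_modified {
      my ($engine, $txn_id, $bucket) = @_;
      push @{ $engine->{modified}{$txn_id} }, $bucket;
  }

  # Commit: move each registered bucket's transaction datapointer into
  # the HEAD slot. The data sectors themselves are never touched. (A
  # real implementation would also free the old HEAD's data sector.)
  sub commit {
      my ($engine, $txn_id) = @_;
      for my $bucket ( @{ delete( $engine->{modified}{$txn_id} ) || [] } ) {
          $bucket->{datapointers}[0]       = $bucket->{datapointers}[$txn_id];
          $bucket->{datapointers}[$txn_id] = 0;    # the slot is clear again
      }
      return;
  }
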
=head2 Deleted marker

After some more tests, a final edge-case was found. Transactions are performed
copy-on-write. This means that if there isn't an entry for that transaction,
the HEAD is looked at. This doesn't work if a key has been deleted within a
transaction. So, the entry must be marked as deleted within the transaction so
that the HEAD isn't checked.

Likewise, when a new key is created in a transaction, the HEAD doesn't have an
entry for that key. Consider the following situation:

  $db1->begin_work;

  $db1->{foo} = 'bar';

  ok( !exists $db2->{foo} );

The entry for the HEAD for 'foo' needs to be marked as deleted so that
transactions which don't have 'foo' don't find something in the HEAD. To add
this, I originally used a separate flag for each datapointer to indicate if it
had been marked as deleted or not. I quickly recognized that a datapointer
can never have a value of 0 or 1, as those would point to the first and second
bytes of the datafile, respectively. As these are part of the header, those
are nonsensical values for a datapointer, so they can be re-used for metadata.
0 now means "This slot has never been written to" and 1 means "This slot has
been explicitly deleted."
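
In sketch form (invented names; the real lookup lives deep in the engine):

  use constant {
      SLOT_EMPTY   => 0,  # never written - fall through to the HEAD
      SLOT_DELETED => 1,  # explicitly deleted - do NOT check the HEAD
  };

  # Resolve a transaction's view of a bucket to a real file offset,
  # or undef if the key is deleted from that point of view.
  sub resolve_datapointer {
      my ($bucket, $txn_id) = @_;
      my $ptr = $bucket->{datapointers}[$txn_id] || SLOT_EMPTY;

      return undef if $ptr == SLOT_DELETED;       # deleted: stop here
      return $ptr  if $ptr != SLOT_EMPTY;         # a real file offset
      return undef if $txn_id == 0;               # HEAD has no entry
      return resolve_datapointer( $bucket, 0 );   # fall through to HEAD
  }
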
=head2 Freespace management

Pre-1.0000 versions of L<DBM::Deep|DBM::Deep> didn't have any form of
freespace management. This meant that whenever a value was deleted, the old
value just sat around taking up space, even though it would never be accessed
again. While barely acceptable for non-transactional uses, this made
transactions unusable because transactions, as I've implemented them, are
predicated on the concept of parallel values that are (presumably) cleaned up
after the transaction is done with them.

Freespace had never been added before because it requires a different file
format than the one used in the pre-1.0000 versions. Because I had to change
the file format anyway B<and> I needed the feature, adding freespace now
seemed like a good plan.

Freespace was implemented by regularizing all the records so that
L<DBM::Deep|DBM::Deep> only has three different record sizes - Index,
BucketList, and Data. Each record type has a fixed length based on various
parameters the L<DBM::Deep|DBM::Deep> datafile is created with. (In order to
accommodate values of various sizes, Data records chain.) Whenever the engine
is finished with a sector, it is freed and added to a list of free sectors of
that sector's size. Whenever a new sector is requested, the freelist is
checked first. If the freelist has a sector, it's reused, otherwise a new
sector is added to the end of the datafile.
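
In sketch form (a simplified in-memory model of the freelists, not the
on-disk format):

  use strict;
  use warnings;

  my %freelist;          # record size => [ offsets of freed sectors ]
  my $end_of_file = 0;   # where the datafile currently ends

  sub free_sector {
      my ($offset, $size) = @_;
      push @{ $freelist{$size} }, $offset;
  }

  sub request_sector {
      my ($size) = @_;

      # Reuse a freed sector of the right size if one is available...
      if ( $freelist{$size} and @{ $freelist{$size} } ) {
          return pop @{ $freelist{$size} };
      }

      # ...otherwise grow the file by one sector.
      my $offset = $end_of_file;
      $end_of_file += $size;
      return $offset;
  }

  my $first = request_sector(256);    # 0 - grows the file
  free_sector( $first, 256 );
  my $again = request_sector(256);    # 0 - reused from the freelist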

Just like everything else, I wrote a mess of tests for adding freespace
management. One of the tests I thought up was the following:

  $db->{foo} = [ 1 .. 3 ];
  my $arr = $db->{foo};

  is( $arr->[1], 2 ); # This always has worked.

  delete $db->{foo};

  isnt( $arr->[1], 2 );

If C<$db> were a plain Perl hash, that last test would fail, because C<$arr>
would hold a reference keeping the anonymous array alive. In the past,
L<DBM::Deep|DBM::Deep> behaved the same way, but for the wrong reason - the
deleted value just sat on disk and C<$arr> happily kept reading it. With
freespace management, those sectors can be reused for something else entirely,
so C<$arr> must not read them. The key concept I realized was that the C<$arr>
variable is pointing to a stale area of the datafile. So, by adding a
staleness counter that is incremented whenever the sector is deleted and
reused, I am able to determine if the variable in question is looking at a
stale version of the sector. In that case, L<DBM::Deep|DBM::Deep> returns
undef because, at some point, the entry was deleted.
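
In sketch form (invented structure; in reality the counter lives with the
sector on disk):

  # Every sector carries a staleness count, and every in-memory
  # reference (like $arr above) remembers the count from when it was
  # taken.
  my $sector = { offset => 4096, staleness => 0 };
  my $handle = { sector => $sector, staleness => $sector->{staleness} };

  # Deleting the sector bumps the counter...
  $sector->{staleness}++;

  # ...so a later read through the old handle detects that it is stale
  # and returns undef instead of whatever now occupies the sector.
  sub sector_for {
      my ($handle) = @_;
      return undef
          if $handle->{staleness} != $handle->{sector}{staleness};
      return $handle->{sector};
  }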

=head2 Transactional staleness counters

Once they were implemented for freespace management, staleness counters proved
to be a very powerful concept for transactions themselves. Back in
L</Protection from changes>, I mentioned that other processes modifying the
HEAD will protect all running transactions from their effects. This provides
I<Isolation>. But, the running transaction doesn't know about these entries.
This is both a benefit and a drawback. It's a benefit that it makes tracking
modified buckets very simple (q.v. L</Committing>). But it means that changes
made to protect the transaction are not tracked. If they're not cleaned up,
they will be seen the next time a transaction uses that transaction ID.

By providing a staleness counter for transactions, the cost of cleaning up
finished transactions is deferred until the space is actually used again.
Deferring this cleanup is completely transparent to users, so I felt it was
an acceptable tradeoff for quick delivery of a functional product.

=head2 Fiddly bits

At this point, all the major pieces were in place. All that was left was to
get all the fiddly bits into place. This included handling the behavior of
C<keys()>, simultaneous transactions with commits and rollbacks in various
orders, and making sure that transactions played nicely when a new Index
sector needed to be created due to reindexing a full BucketList sector. Of
these, C<keys()> was the hardest. This is when I actually implemented the
Iterator classes to handle walking the index/bucketlist chain.

=head1 The future

Basic transactions are only the first step. There are several features that
can be added on top of what's been provided. Whether, and in which order, any
of these are implemented is completely up to user feedback. (Note: these are
advanced topics - I cannot be held responsible for any bleeding ears.)

=head2 Conflict resolution

Conflict, in this context, is what happens when two transactions run
simultaneously and change the same piece of data. Like most relational
databases, L<DBM::Deep|DBM::Deep> uses a very simplistic form of conflict
resolution - last commit wins. This works quite well for a row-based RDBMS,
but doesn't work as well for fractal structures like hashes.

Contrast this with how Subversion handles conflict. It tracks when each
transaction was started. If the HEAD was changed after the transaction
started, the commit is rejected. It is up to the developer to pull in the
latest changes, mediate any conflicts, and then recommit. There are several
other ways to handle conflict resolution, many of which can be pulled from
Haskell's use of Software Transactional Memory (STM).
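
A Subversion-style check could be sketched like this (speculative - nothing
like it exists in L<DBM::Deep|DBM::Deep> today, and all names are invented):

  # Reject a commit if the HEAD has moved since the transaction began.
  sub try_commit {
      my ($engine, $txn) = @_;

      if ( $engine->{head_version} != $txn->{head_version_at_start} ) {
          die "Conflict: the HEAD changed after this transaction started\n";
      }

      # ... transfer the transaction's datapointers to the HEAD here ...
      $engine->{head_version}++;
      return 1;
  }
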
=head2 Checkpoints

A transaction may have several steps within it. The first three may succeed,
but the fourth might fail. Instead of rolling back all the way to the
beginning, you might want to roll back to the last successful step and try
again. This is a checkpoint. Most RDBMSes provide them and they aren't very
difficult, conceptually, but I've seen just how "easy" some features can be
once you start really exploring the problem space.
=head2 Sub-transactions

A sub-transaction is a transaction begun while another transaction is already
active; it can be committed or rolled back without ending the parent
transaction. Sub-transactions can also work better with libraries that may
want to use transactions themselves.

This, of all the features listed, is the one I'm most interested in
implementing next.

=head2 Durability

As mentioned in L</What are transactions?>, the 'D' in ACID stands for
I<Durable>. L<DBM::Deep|DBM::Deep> does not satisfy that criterion because
durability almost always requires another file (or files) for a commit log. I
deemed this unacceptable for this release because one of
L<DBM::Deep|DBM::Deep>'s features is the single datafile. To be honest, I
don't anticipate this being an issue for most users, because the niche that
L<DBM::Deep|DBM::Deep> occupies is one that is tolerant to failure and a small
chance of potential data loss.

However, Berkeley DB does provide durability with only a single file. If it
becomes necessary, cribbing from them could be an option.

=cut