From: rkinyon
Date: Sun, 23 Sep 2007 01:02:54 +0000 (+0000)
Subject: Updates to POD and added a test for POD compliance
X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=commitdiff_plain;h=eadd538f8d94e6eee3dc5db9ad640d4870243aad;p=dbsrgits%2FDBM-Deep.git

Updates to POD and added a test for POD compliance
---
diff --git a/MANIFEST b/MANIFEST
index 95ba4b1..8ff737b 100644
--- a/MANIFEST
+++ b/MANIFEST
@@ -57,6 +57,7 @@ t/41_transaction_multilevel.t
 t/42_transaction_indexsector.t
 t/43_transaction_maximum.t
 t/44_upgrade_db.t
+t/98_pod.t
 t/common.pm
 t/etc/db-0-983
 t/etc/db-0-99_04
diff --git a/lib/DBM/Deep.pod b/lib/DBM/Deep.pod
index 57a6151..4bf937f 100644
--- a/lib/DBM/Deep.pod
+++ b/lib/DBM/Deep.pod
@@ -695,84 +695,9 @@ remove a filter, set the function reference to C<undef>:
 
     $db->set_filter( "filter_store_value", undef );
 
-=head2 Real-time Encryption Example
+=head2 Examples
 
-Here is a working example that uses the I<Crypt::Blowfish> module to
-do real-time encryption / decryption of keys & values with DBM::Deep Filters.
-Please visit L for more
-on I<Crypt::Blowfish>. You'll also need the I<Crypt::CBC> module.
-
-    use DBM::Deep;
-    use Crypt::Blowfish;
-    use Crypt::CBC;
-
-    my $cipher = Crypt::CBC->new({
-        'key' => 'my secret key',
-        'cipher' => 'Blowfish',
-        'iv' => '$KJh#(}q',
-        'regenerate_key' => 0,
-        'padding' => 'space',
-        'prepend_iv' => 0
-    });
-
-    my $db = DBM::Deep->new(
-        file => "foo-encrypt.db",
-        filter_store_key => \&my_encrypt,
-        filter_store_value => \&my_encrypt,
-        filter_fetch_key => \&my_decrypt,
-        filter_fetch_value => \&my_decrypt,
-    );
-
-    $db->{key1} = "value1";
-    $db->{key2} = "value2";
-    print "key1: " . $db->{key1} . "\n";
-    print "key2: " . $db->{key2} . "\n";
-
-    undef $db;
-    exit;
-
-    sub my_encrypt {
-        return $cipher->encrypt( $_[0] );
-    }
-    sub my_decrypt {
-        return $cipher->decrypt( $_[0] );
-    }
-
-=head2 Real-time Compression Example
-
-Here is a working example that uses the I<Compress::Zlib> module to do real-time
-compression / decompression of keys & values with DBM::Deep Filters.
-Please visit L for
-more on I<Compress::Zlib>.
-
-    use DBM::Deep;
-    use Compress::Zlib;
-
-    my $db = DBM::Deep->new(
-        file => "foo-compress.db",
-        filter_store_key => \&my_compress,
-        filter_store_value => \&my_compress,
-        filter_fetch_key => \&my_decompress,
-        filter_fetch_value => \&my_decompress,
-    );
-
-    $db->{key1} = "value1";
-    $db->{key2} = "value2";
-    print "key1: " . $db->{key1} . "\n";
-    print "key2: " . $db->{key2} . "\n";
-
-    undef $db;
-    exit;
-
-    sub my_compress {
-        return Compress::Zlib::memGzip( $_[0] ) ;
-    }
-    sub my_decompress {
-        return Compress::Zlib::memGunzip( $_[0] ) ;
-    }
-
-B<Note:> Filtering of keys only applies to hashes. Array "keys" are
-actually numerical index numbers, and are not filtered.
+Please read L<DBM::Deep::Cookbook> for examples of filters.
 
 =head1 ERROR HANDLING
 
@@ -810,7 +735,7 @@ parameters are per-file, meaning you can access 32-bit and 64-bit files, as
 you choose.
 
 B<Note:> We have not personally tested files larger than 4 GB -- all my
-systems have only a 32-bit Perl. However, I have received user reports that
+systems have only a 32-bit Perl. However, we have received user reports that
 this does indeed work.
 
 =head1 LOW-LEVEL ACCESS
 
@@ -833,55 +758,11 @@ such as enabling/disabling locking. You can also store your own temporary user
 data in this structure (be wary of name collision), which is then accessible from
 any child hash or array.
 
-=head1 CUSTOM DIGEST ALGORITHM
-
-DBM::Deep by default uses the I<Message Digest 5> (MD5) algorithm for hashing
-keys. However you can override this, and use another algorithm (such as SHA-256)
-or even write your own. But please note that DBM::Deep currently expects zero
-collisions, so your algorithm has to be I<perfect>, so to speak. Collision
-detection may be introduced in a later version.
-
-You can specify a custom digest algorithm by passing it into the parameter
-list for new(), passing a reference to a subroutine as the 'digest' parameter,
-and the length of the algorithm's hashes (in bytes) as the 'hash_size'
-parameter. Here is a working example that uses a 256-bit hash from the
-I<Digest::SHA256> module. Please see
-L for more information.
-
-    use DBM::Deep;
-    use Digest::SHA256;
-
-    my $context = Digest::SHA256::new(256);
-
-    my $db = DBM::Deep->new(
-        filename => "foo-sha.db",
-        digest => \&my_digest,
-        hash_size => 32,
-    );
-
-    $db->{key1} = "value1";
-    $db->{key2} = "value2";
-    print "key1: " . $db->{key1} . "\n";
-    print "key2: " . $db->{key2} . "\n";
-
-    undef $db;
-    exit;
-
-    sub my_digest {
-        return substr( $context->hash($_[0]), 0, 32 );
-    }
-
-B<Note:> Your returned digest strings must be B<EXACTLY> the number
-of bytes you specify in the hash_size parameter (in this case 32).
-
-B<Important:> If you do choose to use a custom digest algorithm, you must set it
-every time you access this file. Otherwise, the default (MD5) will be used.
-
 =head1 CIRCULAR REFERENCES
 
-B<NOTE>: DBM::Deep 0.99_03 has turned off circular references pending
+B<NOTE>: DBM::Deep 1.0000 has turned off circular references pending
 evaluation of some edge cases. I hope to be able to re-enable circular
-references in a future version after 1.00. This means that circular references
+references in a future version after 1.0000. This means that circular references
 are B<NOT> available.
 
 DBM::Deep has B support for circular references. Meaning you
@@ -904,7 +785,7 @@ a future release.
 
 =head1 TRANSACTIONS
 
-New in 0.99_01 is ACID transactions. Every DBM::Deep object is completely
+New in 1.0000 is ACID transactions. Every DBM::Deep object is completely
 transaction-ready - it is not an option you have to turn on. You do have to
 specify how many transactions may run simultaneously (q.v. L).
 
diff --git a/lib/DBM/Deep/Cookbook.pod b/lib/DBM/Deep/Cookbook.pod
index 61b8163..b82ad8a 100644
--- a/lib/DBM/Deep/Cookbook.pod
+++ b/lib/DBM/Deep/Cookbook.pod
@@ -26,4 +26,128 @@ In 5.6, you will have to do the following:
 In a future version, you will be able to specify C<utf8 =E<gt> 1> and
 L<DBM::Deep> will do these things for you.
 
-=end
+=head2 Real-time Encryption Example
+
+Here is a working example that uses the I<Crypt::Blowfish> module to
+do real-time encryption / decryption of keys & values with DBM::Deep Filters.
+Please visit L for more
+on I<Crypt::Blowfish>. You'll also need the I<Crypt::CBC> module.
+
+    use DBM::Deep;
+    use Crypt::Blowfish;
+    use Crypt::CBC;
+
+    my $cipher = Crypt::CBC->new({
+        'key' => 'my secret key',
+        'cipher' => 'Blowfish',
+        'iv' => '$KJh#(}q',
+        'regenerate_key' => 0,
+        'padding' => 'space',
+        'prepend_iv' => 0
+    });
+
+    my $db = DBM::Deep->new(
+        file => "foo-encrypt.db",
+        filter_store_key => \&my_encrypt,
+        filter_store_value => \&my_encrypt,
+        filter_fetch_key => \&my_decrypt,
+        filter_fetch_value => \&my_decrypt,
+    );
+
+    $db->{key1} = "value1";
+    $db->{key2} = "value2";
+    print "key1: " . $db->{key1} . "\n";
+    print "key2: " . $db->{key2} . "\n";
+
+    undef $db;
+    exit;
+
+    sub my_encrypt {
+        return $cipher->encrypt( $_[0] );
+    }
+    sub my_decrypt {
+        return $cipher->decrypt( $_[0] );
+    }
+
+=head2 Real-time Compression Example
+
+Here is a working example that uses the I<Compress::Zlib> module to do real-time
+compression / decompression of keys & values with DBM::Deep Filters.
+Please visit L for
+more on I<Compress::Zlib>.
+
+    use DBM::Deep;
+    use Compress::Zlib;
+
+    my $db = DBM::Deep->new(
+        file => "foo-compress.db",
+        filter_store_key => \&my_compress,
+        filter_store_value => \&my_compress,
+        filter_fetch_key => \&my_decompress,
+        filter_fetch_value => \&my_decompress,
+    );
+
+    $db->{key1} = "value1";
+    $db->{key2} = "value2";
+    print "key1: " . $db->{key1} . "\n";
+    print "key2: " . $db->{key2} . "\n";
+
+    undef $db;
+    exit;
+
+    sub my_compress {
+        return Compress::Zlib::memGzip( $_[0] ) ;
+    }
+    sub my_decompress {
+        return Compress::Zlib::memGunzip( $_[0] ) ;
+    }
+
+B<Note:> Filtering of keys only applies to hashes. Array "keys" are
+actually numerical index numbers, and are not filtered.
+
+=head1 Custom Digest Algorithm
+
+DBM::Deep by default uses the I<Message Digest 5> (MD5) algorithm for hashing
+keys. However you can override this, and use another algorithm (such as SHA-256)
+or even write your own. But please note that DBM::Deep currently expects zero
+collisions, so your algorithm has to be I<perfect>, so to speak. Collision
+detection may be introduced in a later version.
+
+You can specify a custom digest algorithm by passing it into the parameter
+list for new(), passing a reference to a subroutine as the 'digest' parameter,
+and the length of the algorithm's hashes (in bytes) as the 'hash_size'
+parameter. Here is a working example that uses a 256-bit hash from the
+I<Digest::SHA256> module. Please see
+L for more information.
+
+    use DBM::Deep;
+    use Digest::SHA256;
+
+    my $context = Digest::SHA256::new(256);
+
+    my $db = DBM::Deep->new(
+        filename => "foo-sha.db",
+        digest => \&my_digest,
+        hash_size => 32,
+    );
+
+    $db->{key1} = "value1";
+    $db->{key2} = "value2";
+    print "key1: " . $db->{key1} . "\n";
+    print "key2: " . $db->{key2} . "\n";
+
+    undef $db;
+    exit;
+
+    sub my_digest {
+        return substr( $context->hash($_[0]), 0, 32 );
+    }
+
+B<Note:> Your returned digest strings must be B<EXACTLY> the number
+of bytes you specify in the hash_size parameter (in this case 32). Undefined
+behavior will occur otherwise.
+
+B<Important:> If you do choose to use a custom digest algorithm, you must set it
+every time you access this file. Otherwise, the default (MD5) will be used.
+
+=cut
diff --git a/t/98_pod.t b/t/98_pod.t
new file mode 100644
index 0000000..82b971a
--- /dev/null
+++ b/t/98_pod.t
@@ -0,0 +1,8 @@
+use strict;
+
+use Test::More;
+
+eval "use Test::Pod 1.14";
+plan skip_all => "Test::Pod 1.14 required for testing POD" if $@;
+
+all_pod_files_ok();
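As a usage note for the TRANSACTIONS paragraph touched in the DBM/Deep.pod hunk above: the sketch below is not part of this patch. It assumes the transaction API documented elsewhere in DBM::Deep's POD (the C<num_txns> constructor option and the C<begin_work()>/C<commit()>/C<rollback()> methods); the file name is made up for illustration.

    use strict;
    use warnings;
    use DBM::Deep;

    # num_txns is assumed (per the POD) to include the HEAD, so 2 leaves
    # room for one concurrent transaction besides the HEAD.
    my $db = DBM::Deep->new(
        file     => "foo-txn.db",    # hypothetical example file
        num_txns => 2,
    );

    $db->begin_work;                 # start a transaction
    $db->{key1} = "value1";
    $db->rollback;                   # discard the change ...

    $db->begin_work;
    $db->{key1} = "value1";
    $db->commit;                     # ... or make it permanent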