X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=blobdiff_plain;f=lib%2FDBIx%2FClass%2FStorage%2FDBI%2FReplication.pm;h=cd13b935f1b373ddd3d17412ef40787afa1fecde;hb=cd6d847fc1c74901aa534def86cf10ac9b3adbe3;hp=bc3a8818c867087822f5ff1c9e8deef72eec6240;hpb=9b21c682fe10df23f310baed4a5817b2de2c24ba;p=dbsrgits%2FDBIx-Class-Historic.git

diff --git a/lib/DBIx/Class/Storage/DBI/Replication.pm b/lib/DBIx/Class/Storage/DBI/Replication.pm
index bc3a881..cd13b93 100644
--- a/lib/DBIx/Class/Storage/DBI/Replication.pm
+++ b/lib/DBIx/Class/Storage/DBI/Replication.pm
@@ -11,7 +11,7 @@ __PACKAGE__->mk_accessors( qw/read_source write_source/ );

 =head1 NAME

-DBIx::Class::Storage::DBI::Replication - Replicated database support
+DBIx::Class::Storage::DBI::Replication - EXPERIMENTAL Replicated database support

 =head1 SYNOPSIS

@@ -27,20 +27,28 @@ DBIx::Class::Storage::DBI::Replication - Replicated database support

 =head1 DESCRIPTION

-This class implements replicated data store for DBI. Currently you can define one master and numerous slave database
-connections. All write-type queries (INSERT, UPDATE, DELETE and even LAST_INSERT_ID) are routed to master database,
-all read-type queries (SELECTs) go to the slave database.
+Warning: This class is marked EXPERIMENTAL. It works for the authors but does
+not currently have automated tests so your mileage may vary.

-For every slave database you can define a priority value, which controls data source usage pattern. It uses
-L<DBD::Multi>, so first the lower priority data sources used (if they have the same priority, the are used
-randomized), than if all low priority data sources fail, higher ones tried in order.
+This class implements replicated data store for DBI. Currently you can define
+one master and numerous slave database connections. All write-type queries
+(INSERT, UPDATE, DELETE and even LAST_INSERT_ID) are routed to master
+database, all read-type queries (SELECTs) go to the slave database.
+
+For every slave database you can define a priority value, which controls data
+source usage pattern. It uses L<DBD::Multi>, so first the lower priority data
+sources used (if they have the same priority, the are used randomized), than
+if all low priority data sources fail, higher ones tried in order.

 =head1 CONFIGURATION

 =head2 Limit dialect

-If you use LIMIT in your queries (effectively, if you use SQL::Abstract::Limit), do not forget to set up limit_dialect (perldoc SQL::Abstract::Limit) by passing it as an option in the (optional) hash reference to connect_info.
-DBIC can not set it up automatically, since it can not guess DBD::Multi connection types.
+If you use LIMIT in your queries (effectively, if you use
+SQL::Abstract::Limit), do not forget to set up limit_dialect (perldoc
+SQL::Abstract::Limit) by passing it as an option in the (optional) hash
+reference to connect_info. DBIC can not set it up automatically, since it can
+not guess DBD::Multi connection types.

 =cut

@@ -79,7 +87,8 @@ sub connect_info {
     pop @{$info->[0]};
   }

-  # We need to copy-pass $global_options, since connect_info clears it while processing options
+  # We need to copy-pass $global_options, since connect_info clears it while
+  # processing options
   $self->write_source->connect_info( [ @{$info->[0]}, { %$global_options } ] );

   @dsns = map { ($_->[3]->{priority} || 10) => $_ } @{$info}[1..@$info-1];
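
The DESCRIPTION hunk above explains the routing model (all writes go to a single master, all SELECTs go to slaves chosen by DBD::Multi priority), but none of the hunks show a complete connection setup. The sketch below is illustrative only and is not part of the patch: the schema class name, DSNs and credentials are placeholders, and the argument layout (master entry first, per-slave options hashrefs carrying a priority key that defaults to 10) is inferred from the connect_info hunk at the end of the diff.

  use strict;
  use warnings;

  use My::Schema;   # placeholder for your DBIx::Class::Schema subclass

  # Select the replicated storage before connecting.
  My::Schema->storage_type( '::DBI::Replication' );

  my $schema = My::Schema->connect(
    # First entry is the master: INSERT/UPDATE/DELETE (and last_insert_id)
    # are routed here.
    [ 'dbi:mysql:database=app;host=db-master', 'writer', 'secret', { AutoCommit => 1 } ],

    # Remaining entries are slaves: SELECTs are dispatched via DBD::Multi.
    # The 'priority' key corresponds to $_->[3]->{priority} in the
    # connect_info hunk (default 10); lower values are tried first.
    [ 'dbi:mysql:database=app;host=db-slave1', 'reader', 'secret', { priority => 10 } ],
    [ 'dbi:mysql:database=app;host=db-slave2', 'reader', 'secret', { priority => 10 } ],
    [ 'dbi:mysql:database=app;host=db-backup', 'reader', 'secret', { priority => 20 } ],
  );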
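
The Limit dialect section warns that limit_dialect cannot be detected automatically through DBD::Multi, so it has to be supplied by hand in the optional hash reference given to connect_info. A small illustrative sketch, again not taken from the patch, follows; 'LimitXY' is the SQL::Abstract::Limit dialect for MySQL-style LIMIT x,y syntax, and placing the hashref at the end of the master entry is an assumption based on the pop @{$info->[0]} / $global_options lines in the connect_info hunk.

  # Sketch only: supply limit_dialect yourself so SQL::Abstract::Limit emits
  # the right LIMIT syntax for your database; the storage cannot guess it
  # through DBD::Multi. Placement of the options hashref is an assumption,
  # see the note above.
  my $schema = My::Schema->connect(
    [ 'dbi:mysql:database=app;host=db-master', 'writer', 'secret',
      { AutoCommit => 1, limit_dialect => 'LimitXY' } ],             # shared options
    [ 'dbi:mysql:database=app;host=db-slave1', 'reader', 'secret', { priority => 10 } ],
  );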