1 package DBIx::Class::Storage::DBI::Replicated;
4 use Carp::Clan qw/^DBIx::Class/;
6 ## Modules required for Replication support are not required for general DBIC
7 ## use, so we explicitly test for these.
9 my %replication_required = (
11 'MooseX::Types' => '0.21',
12 'namespace::clean' => '0.11',
13 'Hash::Merge' => '0.11'
18 for my $module (keys %replication_required) {
19 eval "use $module $replication_required{$module}";
20 push @didnt_load, "$module $replication_required{$module}"
24 croak("@{[ join ', ', @didnt_load ]} are missing and are required for Replication")
29 use DBIx::Class::Storage::DBI;
30 use DBIx::Class::Storage::DBI::Replicated::Pool;
31 use DBIx::Class::Storage::DBI::Replicated::Balancer;
32 use DBIx::Class::Storage::DBI::Replicated::Types qw/BalancerClassNamePart DBICSchema DBICStorageDBI/;
33 use MooseX::Types::Moose qw/ClassName HashRef Object/;
34 use Scalar::Util 'reftype';
35 use Hash::Merge 'merge';
36 use List::Util qw/min max/;
38 use namespace::clean -except => 'meta';
42 DBIx::Class::Storage::DBI::Replicated - BETA Replicated database support
46 The following example shows how to change an existing $schema to a replicated
47 storage type, add some replicated (readonly) databases, and perform reporting tasks.
50 You should set the storage_type attribute to a replicated type. You should
51 also define your arguments, such as which balancer you want and any arguments
52 that the Pool object should get.
54 my $schema = Schema::Class->clone;
55 $schema->storage_type( ['::DBI::Replicated', {balancer=>'::Random'}] );
56 $schema->connection(...);
58 Next, you need to add in the Replicants. Basically, this is an array of
59 arrayrefs, where each arrayref is database connect information. Think of these
60 arguments as what you'd pass to the 'normal' $schema->connect method.
62 $schema->storage->connect_replicants(
63 [$dsn1, $user, $pass, \%opts],
64 [$dsn2, $user, $pass, \%opts],
65 [$dsn3, $user, $pass, \%opts],
68 Now, just use the $schema as you normally would. All reads will automatically
69 be delegated to the replicants, while writes go to the master.
71 $schema->resultset('Source')->search({name=>'etc'});
73 You can force a given query to use a particular storage using the search
74 attribute 'force_pool'. For example:
76 my $RS = $schema->resultset('Source')->search(undef, {force_pool=>'master'});
78 Now $RS will force everything (both reads and writes) to use whatever was set up
79 as the master storage. 'master' is hardcoded to always point to the Master,
80 but you can also use any Replicant name. Please see:
81 L<DBIx::Class::Storage::DBI::Replicated::Pool> and the replicants attribute for more.
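As a sketch (replicant naming is an assumption to verify against your L<DBIx::Class::Storage::DBI::Replicated::Pool> configuration; replicants are typically keyed by their dsn):

```perl
# Hypothetical sketch: pin a resultset to a single replicant by name.
# Replicant names are assumed to default to their dsn; check your
# Pool configuration before relying on this.
my $rs = $schema->resultset('Source')
  ->search(undef, { force_pool => $dsn1 });
```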
83 Also see transactions and L</execute_reliably> for alternative ways to
84 force read traffic to the master. In general, you should wrap your statements
85 in a transaction when you are reading and writing to the same tables at the
86 same time, since your replicants will often lag a bit behind the master.
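For example, a minimal sketch (a hypothetical C<User> source is assumed for illustration) of wrapping a write and an immediate re-read in a transaction so that both run on the master:

```perl
# Inside txn_do both statements run against the master, so the
# immediate re-read cannot be bitten by replication lag.
# 'User' and its 'name' column are assumptions for illustration only.
my $user = $schema->txn_do(sub {
  $schema->resultset('User')->create({ name => 'john' });
  return $schema->resultset('User')->find({ name => 'john' });
});
```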
88 See L<DBIx::Class::Storage::DBI::Replicated::Instructions> for more help and
93 Warning: This class is marked BETA. It has been running a production
94 website using MySQL native replication as its backend and we have some decent
95 test coverage, but the code hasn't yet been stressed by a variety of databases.
96 Individual databases may have quirks we are not aware of. Please use this first
97 in development and pass along your experiences/bug fixes.
99 This class implements a replicated data store for DBI. Currently you can define
100 one master and numerous slave database connections. All write-type queries
101 (INSERT, UPDATE, DELETE and even LAST_INSERT_ID) are routed to the master
102 database, while all read-type queries (SELECTs) go to the slave databases.
104 Basically, any method request that L<DBIx::Class::Storage::DBI> would normally
105 handle gets delegated to one of the two attributes: L</read_handler> or
106 L</write_handler>. Additionally, some methods need to be distributed
107 to all existing storages. This way our storage class is a drop-in replacement
108 for L<DBIx::Class::Storage::DBI>.
110 Read traffic is spread across the replicants (slaves) according to a
111 user-selected algorithm. The default algorithm is random weighted.
115 The consistency between master and replicants is database specific. The Pool
116 gives you a method to validate its replicants, removing and replacing them
117 when they fail/pass predefined criteria. Please make careful use of the ways
118 to force a query to run against the master when needed.
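For instance, you might re-validate the pool yourself on a schedule (a sketch; see L<DBIx::Class::Storage::DBI::Replicated::Pool> for the exact method and validation criteria):

```perl
# Sketch: re-check every replicant against the pool's criteria;
# replicants that fail are marked inactive until they recover.
$schema->storage->pool->validate_replicants;
```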
122 Replicated Storage has additional requirements not currently part of L<DBIx::Class>
125 MooseX::Types => '0.21',
126 namespace::clean => '0.11',
127 Hash::Merge => '0.11'
129 You will need to install these modules manually via CPAN or make them part of the
130 Makefile for your distribution.
134 This class defines the following attributes.
138 The underlying L<DBIx::Class::Schema> object this storage is attached to.
151 Contains the classname which will instantiate the L</pool> object. Defaults
152 to: L<DBIx::Class::Storage::DBI::Replicated::Pool>.
159 default=>'DBIx::Class::Storage::DBI::Replicated::Pool',
161 'create_pool' => 'new',
167 Contains a hashref of initialized information to pass to the Pool object.
168 See L<DBIx::Class::Storage::DBI::Replicated::Pool> for available arguments.
182 The replication pool requires a balancer class to provide the methods for
183 choosing how to spread the query load across each replicant in the pool.
187 has 'balancer_type' => (
189 isa=>BalancerClassNamePart,
192 default=> 'DBIx::Class::Storage::DBI::Replicated::Balancer::First',
194 'create_balancer' => 'new',
200 Contains a hashref of initialized information to pass to the Balancer object.
201 See L<DBIx::Class::Storage::DBI::Replicated::Balancer> for available arguments.
205 has 'balancer_args' => (
215 Is a L<DBIx::Class::Storage::DBI::Replicated::Pool> or derived class. This is a
216 container class for one or more replicated databases.
222 isa=>'DBIx::Class::Storage::DBI::Replicated::Pool',
233 Is a L<DBIx::Class::Storage::DBI::Replicated::Balancer> or derived class. This
234 is a class that takes a pool (L<DBIx::Class::Storage::DBI::Replicated::Pool>)
240 isa=>'DBIx::Class::Storage::DBI::Replicated::Balancer',
242 handles=>[qw/auto_validate_every/],
247 The master defines the canonical state for a pool of connected databases. All
248 the replicants are expected to match this database's state. Thus, in a classic
249 Master / Slaves distributed system, all the slaves are expected to replicate
250 the Master's state as quickly as possible. This is the only database in the
251 pool of databases that is allowed to handle write traffic.
261 =head1 ATTRIBUTES IMPLEMENTING THE DBIx::Class::Storage::DBI INTERFACE
263 The following attributes are delegated all the methods required for the
264 L<DBIx::Class::Storage::DBI> interface.
268 Defines an object that implements the read side of L<DBIx::Class::Storage::DBI>.
272 has 'read_handler' => (
280 _dbh_columns_info_for
287 Defines an object that implements the write side of L<DBIx::Class::Storage::DBI>,
288 as well as methods that don't write or read that can be called on only one
289 storage, methods that return a C<$dbh>, and any methods that don't make sense to run on a replicant.
294 has 'write_handler' => (
309 deployment_statements
312 build_datetime_parser
326 with_deferred_fk_checks
329 with_deferred_fk_checks
340 relname_to_table_alias
341 _straight_join_to_node
344 _default_dbi_connect_attributes
349 bind_attribute_by_data_type
358 _per_row_update_delete
360 _dbh_execute_inserts_with_no_binds
361 _select_args_to_query
363 _multipk_update_delete
364 source_bind_attributes
365 _normalize_connect_info
369 _placeholders_supported
372 _sqlt_minimum_version
375 _typeless_placeholders_supported
382 _adjust_select_args_for_complex_prefetch
383 _resolve_ident_sources
386 _strip_cond_qualifiers
388 _resolve_aliastypes_from_select_args
396 has _master_connect_info_opts =>
397 (is => 'rw', isa => HashRef, default => sub { {} });
399 =head2 around: connect_info
401 Preserve the master's C<connect_info> options (for merging with replicants).
402 Also set any Replicated related options from connect_info, such as
403 C<pool_type>, C<pool_args>, C<balancer_type> and C<balancer_args>.
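For example, a sketch of passing these options through C<connect_info> (the C<maximum_lag> pool argument is an assumption; see the Pool documentation for its actual attributes):

```perl
# Sketch: replication options ride along in the connect_info
# options hashref and are stripped out by this wrapper before the
# remaining options reach the master's connect_info.
$schema->connection(
  $dsn, $user, $pass,
  {
    balancer_type => '::Random',
    balancer_args => { auto_validate_every => 500 },
    pool_args     => { maximum_lag => 2 },   # assumed attribute name
  },
);
```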
407 around connect_info => sub {
408 my ($next, $self, $info, @extra) = @_;
410 my $wantarray = wantarray;
413 for my $arg (@$info) {
414 next unless (reftype($arg)||'') eq 'HASH';
415 %opts = %{ merge($arg, \%opts) };
419 if (@opts{qw/pool_type pool_args/}) {
420 $self->pool_type(delete $opts{pool_type})
424 merge((delete $opts{pool_args} || {}), $self->pool_args)
427 $self->pool($self->_build_pool)
431 if (@opts{qw/balancer_type balancer_args/}) {
432 $self->balancer_type(delete $opts{balancer_type})
433 if $opts{balancer_type};
435 $self->balancer_args(
436 merge((delete $opts{balancer_args} || {}), $self->balancer_args)
439 $self->balancer($self->_build_balancer)
443 $self->_master_connect_info_opts(\%opts);
447 @res = $self->$next($info, @extra);
449 $res = $self->$next($info, @extra);
452 # Make sure master is blessed into the correct class and apply role to it.
453 my $master = $self->master;
454 $master->_determine_driver;
455 Moose::Meta::Class->initialize(ref $master);
457 DBIx::Class::Storage::DBI::Replicated::WithDSN->meta->apply($master);
459 # link pool back to master
460 $self->pool->master($master);
462 $wantarray ? @res : $res;
467 This class defines the following methods.
471 L<DBIx::Class::Schema> when instantiating its storage passed itself as the
472 first argument. So we need to massage the arguments a bit so that all the
473 bits get put into the correct places.
478 my ($class, $schema, $storage_type_args, @args) = @_;
489 Lazy builder for the L</master> attribute.
495 my $master = DBIx::Class::Storage::DBI->new($self->schema);
501 Lazy builder for the L</pool> attribute.
507 $self->create_pool(%{$self->pool_args});
510 =head2 _build_balancer
512 Lazy builder for the L</balancer> attribute. This takes a Pool object so that
513 the balancer knows which pool it's balancing.
517 sub _build_balancer {
519 $self->create_balancer(
521 master=>$self->master,
522 %{$self->balancer_args},
526 =head2 _build_write_handler
528 Lazy builder for the L</write_handler> attribute. The default is to set this to
533 sub _build_write_handler {
534 return shift->master;
537 =head2 _build_read_handler
539 Lazy builder for the L</read_handler> attribute. The default is to set this to
544 sub _build_read_handler {
545 return shift->balancer;
548 =head2 around: connect_replicants
550 All calls to connect_replicants need to have an existing $schema tacked onto
551 the top of the args, since L<DBIx::Class::Storage::DBI> needs it, and any C<connect_info>
552 options merged with the master's, with replicant opts having higher priority.
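A sketch of the merge behavior (using the standard DBI C<PrintError> attribute for illustration):

```perl
# Sketch: options given per replicant are merged over the master's
# connect_info options, with the replicant's keys winning.
$schema->storage->connect_replicants(
  [ $dsn1, $user, $pass, { PrintError => 0 } ],  # overrides master's PrintError
);
```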
556 around connect_replicants => sub {
557 my ($next, $self, @args) = @_;
560 $r = [ $r ] unless reftype $r eq 'ARRAY';
562 $self->throw_exception('coderef replicant connect_info not supported')
563 if ref $r->[0] && reftype $r->[0] eq 'CODE';
565 # any connect_info options?
567 $i++ while $i < @$r && (reftype($r->[$i])||'') ne 'HASH';
570 $r->[$i] = {} unless $r->[$i];
572 # merge if two hashes
573 my @hashes = @$r[$i .. $#{$r}];
575 $self->throw_exception('invalid connect_info options')
576 if (grep { reftype($_) eq 'HASH' } @hashes) != @hashes;
578 $self->throw_exception('too many hashrefs in connect_info')
581 my %opts = %{ merge(reverse @hashes) };
584 splice @$r, $i+1, ($#{$r} - $i), ();
586 # make sure master/replicants opts don't clash
587 my %master_opts = %{ $self->_master_connect_info_opts };
588 if (exists $opts{dbh_maker}) {
589 delete @master_opts{qw/dsn user password/};
591 delete $master_opts{dbh_maker};
594 %opts = %{ merge(\%opts, \%master_opts) };
600 $self->$next($self->schema, @args);
605 Returns an array of all the connected storage backends. The first element
606 in the returned array is the master, and the rest are the replicants.
613 return grep {defined $_ && blessed $_} (
615 values %{ $self->replicants },
619 =head2 execute_reliably ($coderef, ?@args)
621 Given a coderef, saves the current state of the L</read_handler>, forces it to
622 use reliable storage (i.e. sets it to the master), executes the coderef and then
623 restores the original state.
629 $schema->resultset('User')->create({name=>$name});
630 my $user_rs = $schema->resultset('User')->find({name=>$name});
634 my $user_rs = $schema->storage->execute_reliably($reliably, 'John');
636 Use this when you must be certain of your database state, such as when you just
637 inserted something and need to get a resultset including it, etc.
641 sub execute_reliably {
642 my ($self, $coderef, @args) = @_;
644 unless( ref $coderef eq 'CODE') {
645 $self->throw_exception('Second argument must be a coderef');
648 ##Get copy of master storage
649 my $master = $self->master;
651 ##Get whatever the current read handler is
652 my $current = $self->read_handler;
654 ##Set the read handler to master
655 $self->read_handler($master);
657 ## do whatever the caller needs
659 my $want_array = wantarray;
663 @result = $coderef->(@args);
664 } elsif(defined $want_array) {
665 ($result[0]) = ($coderef->(@args));
671 ##Reset to the original state
672 $self->read_handler($current);
674 ##Exception testing has to come last, otherwise you might leave the
675 ##read_handler set to master.
678 $self->throw_exception("coderef returned an error: $@");
680 return $want_array ? @result : $result[0];
684 =head2 set_reliable_storage
686 Sets the current $schema to be 'reliable'; that is, all queries, both read and
687 write, are sent to the master.
691 sub set_reliable_storage {
693 my $schema = $self->schema;
694 my $write_handler = $self->schema->storage->write_handler;
696 $schema->storage->read_handler($write_handler);
699 =head2 set_balanced_storage
701 Sets the current $schema to use the L</balancer> for all reads, while all
702 writes are sent to the master only.
706 sub set_balanced_storage {
708 my $schema = $self->schema;
709 my $balanced_handler = $self->schema->storage->balancer;
711 $schema->storage->read_handler($balanced_handler);
716 Checks that the master and at least one of the replicants are connected.
723 $self->master->connected &&
724 $self->pool->connected_replicants;
727 =head2 ensure_connected
729 Make sure all the storages are connected.
733 sub ensure_connected {
735 foreach my $source ($self->all_storages) {
736 $source->ensure_connected(@_);
742 Set the limit_dialect for all existing storages
748 foreach my $source ($self->all_storages) {
749 $source->limit_dialect(@_);
751 return $self->master->limit_dialect;
756 Set the quote_char for all existing storages
762 foreach my $source ($self->all_storages) {
763 $source->quote_char(@_);
765 return $self->master->quote_char;
770 Set the name_sep for all existing storages
776 foreach my $source ($self->all_storages) {
777 $source->name_sep(@_);
779 return $self->master->name_sep;
784 Set the schema object for all existing storages
790 foreach my $source ($self->all_storages) {
791 $source->set_schema(@_);
797 set a debug flag across all storages
804 foreach my $source ($self->all_storages) {
808 return $self->master->debug;
819 return $self->master->debugobj(@_);
830 return $self->master->debugfh(@_);
841 return $self->master->debugcb(@_);
846 disconnect everything
852 foreach my $source ($self->all_storages) {
853 $source->disconnect(@_);
859 set cursor class on all storages, or return master's
864 my ($self, $cursor_class) = @_;
867 $_->cursor_class($cursor_class) for $self->all_storages;
869 $self->master->cursor_class;
874 set cursor class on all storages, or return master's, alias for L</cursor_class>
880 my ($self, $cursor_class) = @_;
883 $_->cursor($cursor_class) for $self->all_storages;
885 $self->master->cursor;
890 sets the L<DBIx::Class::Storage::DBI/unsafe> option on all storages or returns
891 master's current setting
899 $_->unsafe(@_) for $self->all_storages;
902 return $self->master->unsafe;
905 =head2 disable_sth_caching
907 sets the L<DBIx::Class::Storage::DBI/disable_sth_caching> option on all storages
908 or returns master's current setting
912 sub disable_sth_caching {
916 $_->disable_sth_caching(@_) for $self->all_storages;
919 return $self->master->disable_sth_caching;
922 =head2 lag_behind_master
924 returns the highest Replicant L<DBIx::Class::Storage::DBI/lag_behind_master>
929 sub lag_behind_master {
932 return max map $_->lag_behind_master, $self->replicants;
935 =head2 is_replicating
937 returns true if all replicants return true for
938 L<DBIx::Class::Storage::DBI/is_replicating>
945 return (grep $_->is_replicating, $self->replicants) == ($self->replicants);
948 =head2 connect_call_datetime_setup
950 calls L<DBIx::Class::Storage::DBI/connect_call_datetime_setup> for all storages
954 sub connect_call_datetime_setup {
956 $_->connect_call_datetime_setup for $self->all_storages;
961 $_->_populate_dbh for $self->all_storages;
966 $_->_connect for $self->all_storages;
971 $_->_rebless for $self->all_storages;
974 sub _determine_driver {
976 $_->_determine_driver for $self->all_storages;
979 sub _driver_determined {
983 $_->_driver_determined(@_) for $self->all_storages;
986 return $self->master->_driver_determined;
992 $_->_init for $self->all_storages;
995 sub _run_connection_actions {
998 $_->_run_connection_actions for $self->all_storages;
1001 sub _do_connection_actions {
1005 $_->_do_connection_actions(@_) for $self->all_storages;
1009 sub connect_call_do_sql {
1011 $_->connect_call_do_sql(@_) for $self->all_storages;
1014 sub disconnect_call_do_sql {
1016 $_->disconnect_call_do_sql(@_) for $self->all_storages;
1019 sub _seems_connected {
1022 return min map $_->_seems_connected, $self->all_storages;
1028 return min map $_->_ping, $self->all_storages;
1033 Because replicants can lag behind the master, you must take care to use
1034 one of the methods that force read queries to the master should you need
1035 realtime data integrity. For example, if you insert a row, and then
1036 immediately re-read it from the database (say, by doing $row->discard_changes)
1037 or you insert a row and then immediately build a query that expects that row
1038 to be an item, you should force the master to handle reads. Otherwise, due to
1039 the lag, there is no certainty your data will be in the expected state.
1041 For data integrity, all transactions automatically use the master storage for
1042 all read and write queries. Using a transaction is the preferred and recommended
1043 method to force the master to handle all read queries.
1045 Otherwise, you can force a single query to use the master with the 'force_pool' attribute:
1048 my $row = $resultset->search(undef, {force_pool=>'master'})->find($pk);
1050 This attribute will safely be ignored by non-replicated storages, so you can use
1051 the same code for both types of systems.
1053 Lastly, you can use the L</execute_reliably> method, which works very much like a transaction.
1056 For debugging, you can turn replication on/off with the methods L</set_reliable_storage>
1057 and L</set_balanced_storage>, however this operates at a global level and is not
1058 suitable if you have a shared Schema object being used by multiple processes,
1059 such as on a web application server. You can get around this limitation by
1060 using the Schema clone method.
1062 my $new_schema = $schema->clone;
1063 $new_schema->storage->set_reliable_storage;
1065 ## $new_schema will use only the Master storage for all reads/writes while
1066 ## the $schema object will use replicated storage.
1070 John Napiorkowski <john.napiorkowski@takkle.com>
1072 Based on code originated by:
1074 Norbert Csongrádi <bert@cpan.org>
1075 Peter Siklósi <einon@einon.hu>
1079 You may distribute this code under the same terms as Perl itself.
1083 __PACKAGE__->meta->make_immutable;