1 package DBIx::Class::Storage::DBI::Replicated;
4 use Carp::Clan qw/^DBIx::Class/;
6 croak('The following modules are required for Replication ' . DBIx::Class::Optional::Dependencies->req_missing_for ('replicated') )
7 unless DBIx::Class::Optional::Dependencies->req_ok_for ('replicated');
11 use DBIx::Class::Storage::DBI;
12 use DBIx::Class::Storage::DBI::Replicated::Pool;
13 use DBIx::Class::Storage::DBI::Replicated::Balancer;
14 use DBIx::Class::Storage::DBI::Replicated::Types qw/BalancerClassNamePart DBICSchema DBICStorageDBI/;
15 use MooseX::Types::Moose qw/ClassName HashRef Object/;
16 use Scalar::Util 'reftype';
18 use List::Util qw/min max/;
20 use namespace::clean -except => 'meta';
24 DBIx::Class::Storage::DBI::Replicated - BETA Replicated database support
The following example shows how to change an existing $schema to a replicated
29 storage type, add some replicated (read-only) databases, and perform reporting
You should set the 'storage_type' attribute to a replicated type. You should
33 also define your arguments, such as which balancer you want and any arguments
34 that the Pool object should get.
36 my $schema = Schema::Class->clone;
37 $schema->storage_type( ['::DBI::Replicated', {balancer=>'::Random'}] );
38 $schema->connection(...);
40 Next, you need to add in the Replicants. Basically this is an array of
41 arrayrefs, where each arrayref is database connect information. Think of these
42 arguments as what you'd pass to the 'normal' $schema->connect method.
44 $schema->storage->connect_replicants(
45 [$dsn1, $user, $pass, \%opts],
46 [$dsn2, $user, $pass, \%opts],
47 [$dsn3, $user, $pass, \%opts],
Now, just use the $schema as you normally would. All reads will automatically
be delegated to the replicants, while writes will go to the master.
53 $schema->resultset('Source')->search({name=>'etc'});
55 You can force a given query to use a particular storage using the search
56 attribute 'force_pool'. For example:
58 my $RS = $schema->resultset('Source')->search(undef, {force_pool=>'master'});
Now $RS will force everything (both reads and writes) to use whatever was set up
61 as the master storage. 'master' is hardcoded to always point to the Master,
62 but you can also use any Replicant name. Please see:
63 L<DBIx::Class::Storage::DBI::Replicated::Pool> and the replicants attribute for more.
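A replicant's name in the pool is the C<dsn> it was connected with, so (reusing
the C<$dsn1> replicant added above, as a sketch) you can pin a resultset to one
particular replicant:

  my $rs = $schema->resultset('Source')
    ->search(undef, {force_pool=>$dsn1});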
65 Also see transactions and L</execute_reliably> for alternative ways to
66 force read traffic to the master. In general, you should wrap your statements
67 in a transaction when you are reading and writing to the same tables at the
68 same time, since your replicants will often lag a bit behind the master.
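For example (a sketch reusing the $schema above), wrapping a write and the
follow-up read in a transaction sends both to the master, so the read is
guaranteed to see the new row:

  $schema->txn_do(sub {
    $schema->resultset('Source')->create({name=>'etc'});
    return $schema->resultset('Source')->find({name=>'etc'});
  });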
70 See L<DBIx::Class::Storage::DBI::Replicated::Instructions> for more help and
Warning: This class is marked BETA. This code has been running on a production
website using MySQL native replication as its backend and we have some decent
test coverage, but the code hasn't yet been stressed by a variety of databases.
Individual DBs may have quirks we are not aware of. Please use this first in
development and pass along your experiences/bug fixes.
This class implements a replicated data store for DBI. Currently you can define
one master and numerous slave database connections. All write-type queries
(INSERT, UPDATE, DELETE and even LAST_INSERT_ID) are routed to the master
database, while all read-type queries (SELECTs) go to the slave databases.
86 Basically, any method request that L<DBIx::Class::Storage::DBI> would normally
87 handle gets delegated to one of the two attributes: L</read_handler> or to
88 L</write_handler>. Additionally, some methods need to be distributed
to all existing storages. This way our storage class is a drop-in replacement
90 for L<DBIx::Class::Storage::DBI>.
Read traffic is spread across the replicants (slaves) according to a
user-selected algorithm. The default algorithm is random weighted.
97 The consistency between master and replicants is database specific. The Pool
98 gives you a method to validate its replicants, removing and replacing them
99 when they fail/pass predefined criteria. Please make careful use of the ways
100 to force a query to run against Master when needed.
104 Replicated Storage has additional requirements not currently part of
105 L<DBIx::Class>. See L<DBIx::Class::Optional::Dependencies> for more details.
109 This class defines the following attributes.
113 The underlying L<DBIx::Class::Schema> object this storage is attaching
126 Contains the classname which will instantiate the L</pool> object. Defaults
127 to: L<DBIx::Class::Storage::DBI::Replicated::Pool>.
134 default=>'DBIx::Class::Storage::DBI::Replicated::Pool',
136 'create_pool' => 'new',
Contains a hashref of initialization information to pass to the Pool object.
143 See L<DBIx::Class::Storage::DBI::Replicated::Pool> for available arguments.
The replication pool requires a balancer class to provide the methods for
choosing how to spread the query load across each replicant in the pool.
162 has 'balancer_type' => (
164 isa=>BalancerClassNamePart,
167 default=> 'DBIx::Class::Storage::DBI::Replicated::Balancer::First',
169 'create_balancer' => 'new',
Contains a hashref of initialization information to pass to the Balancer object.
176 See L<DBIx::Class::Storage::DBI::Replicated::Balancer> for available arguments.
180 has 'balancer_args' => (
Is a L<DBIx::Class::Storage::DBI::Replicated::Pool> or derived class. This is a
191 container class for one or more replicated databases.
197 isa=>'DBIx::Class::Storage::DBI::Replicated::Pool',
Is a L<DBIx::Class::Storage::DBI::Replicated::Balancer> or derived class. This
is a class that takes a pool (L<DBIx::Class::Storage::DBI::Replicated::Pool>)
215 isa=>'DBIx::Class::Storage::DBI::Replicated::Balancer',
217 handles=>[qw/auto_validate_every/],
222 The master defines the canonical state for a pool of connected databases. All
the replicants are expected to match this database's state. Thus, in a classic
224 Master / Slaves distributed system, all the slaves are expected to replicate
the Master's state as quickly as possible. This is the only database in the
226 pool of databases that is allowed to handle write traffic.
=head1 ATTRIBUTES IMPLEMENTING THE DBIx::Class::Storage::DBI INTERFACE
The following attributes are delegated all the methods required for the
239 L<DBIx::Class::Storage::DBI> interface.
Defines an object that implements the read side of L<DBIx::Class::Storage::DBI>.
247 has 'read_handler' => (
255 _dbh_columns_info_for
Defines an object that implements the write side of L<DBIx::Class::Storage::DBI>,
263 as well as methods that don't write or read that can be called on only one
264 storage, methods that return a C<$dbh>, and any methods that don't make sense to
269 has 'write_handler' => (
284 deployment_statements
287 build_datetime_parser
301 with_deferred_fk_checks
316 relname_to_table_alias
317 _straight_join_to_node
320 _default_dbi_connect_attributes
325 bind_attribute_by_data_type
334 _per_row_update_delete
336 _dbh_execute_inserts_with_no_binds
337 _select_args_to_query
339 _multipk_update_delete
340 source_bind_attributes
341 _normalize_connect_info
345 _placeholders_supported
348 _sqlt_minimum_version
351 _typeless_placeholders_supported
358 _adjust_select_args_for_complex_prefetch
359 _resolve_ident_sources
362 _strip_cond_qualifiers
364 _resolve_aliastypes_from_select_args
369 _prefetch_insert_auto_nextvals
373 has _master_connect_info_opts =>
374 (is => 'rw', isa => HashRef, default => sub { {} });
376 =head2 around: connect_info
Preserves master's C<connect_info> options (for merging with replicants).
379 Also sets any Replicated-related options from connect_info, such as
380 C<pool_type>, C<pool_args>, C<balancer_type> and C<balancer_args>.
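For example (a sketch; each key is optional and defaults apply when omitted):

  $schema->connection($dsn, $user, $pass, {
    balancer_type=>'::Random',
    balancer_args=>{auto_validate_every=>5},
    pool_args=>{maximum_lag=>2},
  });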
384 around connect_info => sub {
385 my ($next, $self, $info, @extra) = @_;
387 my $wantarray = wantarray;
389 my $merge = Hash::Merge->new('LEFT_PRECEDENT');
392 for my $arg (@$info) {
393 next unless (reftype($arg)||'') eq 'HASH';
394 %opts = %{ $merge->merge($arg, \%opts) };
398 if (@opts{qw/pool_type pool_args/}) {
399 $self->pool_type(delete $opts{pool_type})
403 $merge->merge((delete $opts{pool_args} || {}), $self->pool_args)
406 $self->pool($self->_build_pool)
410 if (@opts{qw/balancer_type balancer_args/}) {
411 $self->balancer_type(delete $opts{balancer_type})
412 if $opts{balancer_type};
414 $self->balancer_args(
415 $merge->merge((delete $opts{balancer_args} || {}), $self->balancer_args)
418 $self->balancer($self->_build_balancer)
422 $self->_master_connect_info_opts(\%opts);
426 @res = $self->$next($info, @extra);
428 $res = $self->$next($info, @extra);
431 # Make sure master is blessed into the correct class and apply role to it.
432 my $master = $self->master;
433 $master->_determine_driver;
434 Moose::Meta::Class->initialize(ref $master);
436 DBIx::Class::Storage::DBI::Replicated::WithDSN->meta->apply($master);
438 # link pool back to master
439 $self->pool->master($master);
441 $wantarray ? @res : $res;
446 This class defines the following methods.
When instantiating its storage, L<DBIx::Class::Schema> passes itself as the
first argument. So we need to massage the arguments a bit so that all the
bits get put into the correct places.
457 my ($class, $schema, $storage_type_args, @args) = @_;
468 Lazy builder for the L</master> attribute.
474 my $master = DBIx::Class::Storage::DBI->new($self->schema);
480 Lazy builder for the L</pool> attribute.
486 $self->create_pool(%{$self->pool_args});
489 =head2 _build_balancer
491 Lazy builder for the L</balancer> attribute. This takes a Pool object so that
492 the balancer knows which pool it's balancing.
496 sub _build_balancer {
498 $self->create_balancer(
500 master=>$self->master,
501 %{$self->balancer_args},
505 =head2 _build_write_handler
507 Lazy builder for the L</write_handler> attribute. The default is to set this to
512 sub _build_write_handler {
513 return shift->master;
516 =head2 _build_read_handler
518 Lazy builder for the L</read_handler> attribute. The default is to set this to
523 sub _build_read_handler {
524 return shift->balancer;
527 =head2 around: connect_replicants
All calls to connect_replicants need to have an existing $schema tacked onto
the top of the args, since L<DBIx::Class::Storage::DBI> needs it, and any
C<connect_info> options merged with the master's, with replicant opts having
higher priority.
535 around connect_replicants => sub {
536 my ($next, $self, @args) = @_;
539 $r = [ $r ] unless reftype $r eq 'ARRAY';
541 $self->throw_exception('coderef replicant connect_info not supported')
542 if ref $r->[0] && reftype $r->[0] eq 'CODE';
544 # any connect_info options?
546 $i++ while $i < @$r && (reftype($r->[$i])||'') ne 'HASH';
549 $r->[$i] = {} unless $r->[$i];
551 # merge if two hashes
552 my @hashes = @$r[$i .. $#{$r}];
554 $self->throw_exception('invalid connect_info options')
555 if (grep { reftype($_) eq 'HASH' } @hashes) != @hashes;
557 $self->throw_exception('too many hashrefs in connect_info')
560 my $merge = Hash::Merge->new('LEFT_PRECEDENT');
561 my %opts = %{ $merge->merge(reverse @hashes) };
564 splice @$r, $i+1, ($#{$r} - $i), ();
566 # make sure master/replicants opts don't clash
567 my %master_opts = %{ $self->_master_connect_info_opts };
568 if (exists $opts{dbh_maker}) {
569 delete @master_opts{qw/dsn user password/};
571 delete $master_opts{dbh_maker};
574 %opts = %{ $merge->merge(\%opts, \%master_opts) };
580 $self->$next($self->schema, @args);
Returns an array of all the connected storage backends. The first element
in the returned array is the master, and the rest are each of the
593 return grep {defined $_ && blessed $_} (
595 values %{ $self->replicants },
599 =head2 execute_reliably ($coderef, ?@args)
601 Given a coderef, saves the current state of the L</read_handler>, forces it to
use reliable storage (e.g. sets it to the master), executes the coderef and then
603 restores the original state.
609 $schema->resultset('User')->create({name=>$name});
610 my $user_rs = $schema->resultset('User')->find({name=>$name});
614 my $user_rs = $schema->storage->execute_reliably($reliably, 'John');
616 Use this when you must be certain of your database state, such as when you just
617 inserted something and need to get a resultset including it, etc.
621 sub execute_reliably {
622 my ($self, $coderef, @args) = @_;
624 unless( ref $coderef eq 'CODE') {
625 $self->throw_exception('Second argument must be a coderef');
628 ##Get copy of master storage
629 my $master = $self->master;
##Get whatever the current read handler is
632 my $current = $self->read_handler;
634 ##Set the read handler to master
635 $self->read_handler($master);
637 ## do whatever the caller needs
639 my $want_array = wantarray;
643 @result = $coderef->(@args);
644 } elsif(defined $want_array) {
645 ($result[0]) = ($coderef->(@args));
651 ##Reset to the original state
652 $self->read_handler($current);
654 ##Exception testing has to come last, otherwise you might leave the
655 ##read_handler set to master.
658 $self->throw_exception("coderef returned an error: $@");
660 return $want_array ? @result : $result[0];
664 =head2 set_reliable_storage
Sets the current $schema to be 'reliable', that is, all queries, both read and
write, are sent to the master
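For example, you can temporarily route everything to the master and then
restore balanced reads:

  $schema->storage->set_reliable_storage;
  ## ... all queries now go to the master ...
  $schema->storage->set_balanced_storage;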
671 sub set_reliable_storage {
673 my $schema = $self->schema;
674 my $write_handler = $self->schema->storage->write_handler;
676 $schema->storage->read_handler($write_handler);
679 =head2 set_balanced_storage
Sets the current $schema to use the L</balancer> for all reads, while all
writes are sent to the master only
686 sub set_balanced_storage {
688 my $schema = $self->schema;
689 my $balanced_handler = $self->schema->storage->balancer;
691 $schema->storage->read_handler($balanced_handler);
Checks that the master and at least one of the replicants are connected.
703 $self->master->connected &&
704 $self->pool->connected_replicants;
707 =head2 ensure_connected
709 Make sure all the storages are connected.
713 sub ensure_connected {
715 foreach my $source ($self->all_storages) {
716 $source->ensure_connected(@_);
722 Set the limit_dialect for all existing storages
728 foreach my $source ($self->all_storages) {
729 $source->limit_dialect(@_);
  return $self->master->limit_dialect;
736 Set the quote_char for all existing storages
742 foreach my $source ($self->all_storages) {
743 $source->quote_char(@_);
745 return $self->master->quote_char;
750 Set the name_sep for all existing storages
756 foreach my $source ($self->all_storages) {
757 $source->name_sep(@_);
759 return $self->master->name_sep;
764 Set the schema object for all existing storages
770 foreach my $source ($self->all_storages) {
771 $source->set_schema(@_);
777 set a debug flag across all storages
784 foreach my $source ($self->all_storages) {
788 return $self->master->debug;
799 return $self->master->debugobj(@_);
810 return $self->master->debugfh(@_);
821 return $self->master->debugcb(@_);
826 disconnect everything
832 foreach my $source ($self->all_storages) {
833 $source->disconnect(@_);
839 set cursor class on all storages, or return master's
844 my ($self, $cursor_class) = @_;
847 $_->cursor_class($cursor_class) for $self->all_storages;
849 $self->master->cursor_class;
854 set cursor class on all storages, or return master's, alias for L</cursor_class>
860 my ($self, $cursor_class) = @_;
863 $_->cursor($cursor_class) for $self->all_storages;
865 $self->master->cursor;
870 sets the L<DBIx::Class::Storage::DBI/unsafe> option on all storages or returns
871 master's current setting
879 $_->unsafe(@_) for $self->all_storages;
882 return $self->master->unsafe;
885 =head2 disable_sth_caching
887 sets the L<DBIx::Class::Storage::DBI/disable_sth_caching> option on all storages
888 or returns master's current setting
892 sub disable_sth_caching {
896 $_->disable_sth_caching(@_) for $self->all_storages;
899 return $self->master->disable_sth_caching;
902 =head2 lag_behind_master
904 returns the highest Replicant L<DBIx::Class::Storage::DBI/lag_behind_master>
909 sub lag_behind_master {
912 return max map $_->lag_behind_master, $self->replicants;
915 =head2 is_replicating
917 returns true if all replicants return true for
918 L<DBIx::Class::Storage::DBI/is_replicating>
925 return (grep $_->is_replicating, $self->replicants) == ($self->replicants);
928 =head2 connect_call_datetime_setup
930 calls L<DBIx::Class::Storage::DBI/connect_call_datetime_setup> for all storages
934 sub connect_call_datetime_setup {
936 $_->connect_call_datetime_setup for $self->all_storages;
941 $_->_populate_dbh for $self->all_storages;
946 $_->_connect for $self->all_storages;
951 $_->_rebless for $self->all_storages;
954 sub _determine_driver {
956 $_->_determine_driver for $self->all_storages;
959 sub _driver_determined {
963 $_->_driver_determined(@_) for $self->all_storages;
966 return $self->master->_driver_determined;
972 $_->_init for $self->all_storages;
975 sub _run_connection_actions {
978 $_->_run_connection_actions for $self->all_storages;
981 sub _do_connection_actions {
985 $_->_do_connection_actions(@_) for $self->all_storages;
989 sub connect_call_do_sql {
991 $_->connect_call_do_sql(@_) for $self->all_storages;
994 sub disconnect_call_do_sql {
996 $_->disconnect_call_do_sql(@_) for $self->all_storages;
999 sub _seems_connected {
1002 return min map $_->_seems_connected, $self->all_storages;
1008 return min map $_->_ping, $self->all_storages;
Due to the fact that replicants can lag behind a master, you must take care to
use one of the methods to force read queries to the master should you need
real-time data integrity. For example, if you insert a row and then immediately
re-read it from the database (say, by doing $row->discard_changes), or you
insert a row and then immediately build a query that expects that row to be in
the result set, you should force the master to handle reads. Otherwise, due to
the lag, there is no certainty your data will be in the expected state.
1021 For data integrity, all transactions automatically use the master storage for
1022 all read and write queries. Using a transaction is the preferred and recommended
1023 method to force the master to handle all read queries.
1025 Otherwise, you can force a single query to use the master with the 'force_pool'
1028 my $row = $resultset->search(undef, {force_pool=>'master'})->find($pk);
This attribute will safely be ignored by non-replicated storages, so you can
use the same code for both types of systems.
1033 Lastly, you can use the L</execute_reliably> method, which works very much like
1036 For debugging, you can turn replication on/off with the methods L</set_reliable_storage>
1037 and L</set_balanced_storage>, however this operates at a global level and is not
1038 suitable if you have a shared Schema object being used by multiple processes,
1039 such as on a web application server. You can get around this limitation by
1040 using the Schema clone method.
1042 my $new_schema = $schema->clone;
1043 $new_schema->set_reliable_storage;
1045 ## $new_schema will use only the Master storage for all reads/writes while
1046 ## the $schema object will use replicated storage.
1050 John Napiorkowski <john.napiorkowski@takkle.com>
1052 Based on code originated by:
1054 Norbert Csongrádi <bert@cpan.org>
1055 Peter Siklósi <einon@einon.hu>
1059 You may distribute this code under the same terms as Perl itself.
1063 __PACKAGE__->meta->make_immutable;