package DBIx::Class::Storage::DBI::Replicated;

use Carp::Clan qw/^DBIx::Class/;

croak('The following modules are required for Replication ' . DBIx::Class::Optional::Dependencies->req_missing_for ('replicated') )
  unless DBIx::Class::Optional::Dependencies->req_ok_for ('replicated');

use Moose;
use DBIx::Class::Storage::DBI;
use DBIx::Class::Storage::DBI::Replicated::Pool;
use DBIx::Class::Storage::DBI::Replicated::Balancer;
use DBIx::Class::Storage::DBI::Replicated::Types qw/BalancerClassNamePart DBICSchema DBICStorageDBI/;
use MooseX::Types::Moose qw/ClassName HashRef Object/;
use Scalar::Util qw/reftype blessed/;
use Hash::Merge;
use List::Util qw/min max reduce/;
use Try::Tiny;

use namespace::clean -except => 'meta';
DBIx::Class::Storage::DBI::Replicated - BETA Replicated database support
The following example shows how to change an existing $schema to a replicated
storage type, add some replicated (read-only) databases, and perform reporting
tasks.

You should set the 'storage_type' attribute to a replicated type. You should
also define your arguments, such as which balancer you want and any arguments
that the Pool object should get.
  my $schema = Schema::Class->clone;
  $schema->storage_type( ['::DBI::Replicated', {balancer=>'::Random'}] );
  $schema->connection(...);
Next, you need to add in the Replicants. Basically this is an array of
arrayrefs, where each arrayref is database connect information. Think of these
arguments as what you'd pass to the 'normal' $schema->connect method.
  $schema->storage->connect_replicants(
    [$dsn1, $user, $pass, \%opts],
    [$dsn2, $user, $pass, \%opts],
    [$dsn3, $user, $pass, \%opts],
  );
Now, just use the $schema as you normally would. All reads will automatically
be delegated to the replicants, while writes go to the master.

  $schema->resultset('Source')->search({name=>'etc'});
You can force a given query to use a particular storage using the search
attribute 'force_pool'. For example:

  my $RS = $schema->resultset('Source')->search(undef, {force_pool=>'master'});
Now $RS will force everything (both reads and writes) to use whatever was set
up as the master storage. 'master' is hardcoded to always point to the Master,
but you can also use any Replicant name. Please see
L<DBIx::Class::Storage::DBI::Replicated::Pool> and the replicants attribute for more.
Also see transactions and L</execute_reliably> for alternative ways to
force read traffic to the master. In general, you should wrap your statements
in a transaction when you are reading and writing to the same tables at the
same time, since your replicants will often lag a bit behind the master.
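For example, a read immediately following a write can be kept on the master by
wrapping both in a transaction (a sketch; the resultset and column names are
illustrative only):

  $schema->txn_do(sub {
    my $row = $schema->resultset('Source')->create({ name => 'etc' });
    ## this find() also runs against the master, so it sees the new row
    ## even if the replicants have not caught up yet
    $schema->resultset('Source')->find($row->id);
  });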
See L<DBIx::Class::Storage::DBI::Replicated::Instructions> for more help and
walkthroughs.
Warning: This class is marked BETA. It has been running a production
website using MySQL native replication as its backend and we have some decent
test coverage, but the code hasn't yet been stressed by a variety of databases.
Individual DBs may have quirks we are not aware of. Please use this in
development first and pass along your experiences/bug fixes.
This class implements a replicated data store for DBI. Currently you can define
one master and numerous slave database connections. All write-type queries
(INSERT, UPDATE, DELETE and even LAST_INSERT_ID) are routed to the master
database, while all read-type queries (SELECTs) go to the slaves.
Basically, any method request that L<DBIx::Class::Storage::DBI> would normally
handle gets delegated to one of the two attributes: L</read_handler> or
L</write_handler>. Additionally, some methods need to be distributed
to all existing storages. This way our storage class is a drop-in replacement
for L<DBIx::Class::Storage::DBI>.
Read traffic is spread across the replicants (slaves) according to a
user-selected algorithm. The default algorithm is random weighted.
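For example, to select the random balancer and give the master a chance to
serve some reads as well, arguments like the following could be passed when
setting the storage type (C<master_read_weight> is an option of the ::Random
balancer; treat this sketch as illustrative, not exhaustive):

  $schema->storage_type([ '::DBI::Replicated', {
    balancer_type => '::Random',
    balancer_args => {
      ## relative chance that a read is sent to the master (0 disables)
      master_read_weight => 1,
    },
  }]);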
The consistency between master and replicants is database-specific. The Pool
gives you a method to validate its replicants, removing and replacing them
when they fail/pass predefined criteria. Please make careful use of the ways
to force a query to run against Master when needed.
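For example, the pool can be told to re-validate its replicants periodically,
or validated by hand (a sketch; see
L<DBIx::Class::Storage::DBI::Replicated::Pool> for the authoritative options):

  $schema->storage_type([ '::DBI::Replicated', {
    balancer_args => {
      ## re-check replicant health at most once every 5 seconds
      auto_validate_every => 5,
    },
  }]);

  ## or on demand; replicants that fail validation leave the rotation
  $schema->storage->pool->validate_replicants;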
Replicated Storage has additional requirements not currently part of
L<DBIx::Class>. See L<DBIx::Class::Optional::Dependencies> for more details.

This class defines the following attributes.

The underlying L<DBIx::Class::Schema> object this storage is attaching
Contains the classname which will instantiate the L</pool> object. Defaults
to: L<DBIx::Class::Storage::DBI::Replicated::Pool>.

  default=>'DBIx::Class::Storage::DBI::Replicated::Pool',
    'create_pool' => 'new',
Contains a hashref of initialized information to pass to the Pool object.
See L<DBIx::Class::Storage::DBI::Replicated::Pool> for available arguments.
The replication pool requires a balancer class to provide the methods for
choosing how to spread the query load across each replicant in the pool.
has 'balancer_type' => (
  isa=>BalancerClassNamePart,
  default=> 'DBIx::Class::Storage::DBI::Replicated::Balancer::First',
    'create_balancer' => 'new',

Contains a hashref of initialized information to pass to the Balancer object.
See L<DBIx::Class::Storage::DBI::Replicated::Balancer> for available arguments.

has 'balancer_args' => (
Is a L<DBIx::Class::Storage::DBI::Replicated::Pool> or derived class. This is a
container class for one or more replicated databases.

  isa=>'DBIx::Class::Storage::DBI::Replicated::Pool',

Is a L<DBIx::Class::Storage::DBI::Replicated::Balancer> or derived class. This
is a class that takes a pool (L<DBIx::Class::Storage::DBI::Replicated::Pool>)

  isa=>'DBIx::Class::Storage::DBI::Replicated::Balancer',
  handles=>[qw/auto_validate_every/],
The master defines the canonical state for a pool of connected databases. All
the replicants are expected to match this database's state. Thus, in a classic
Master / Slaves distributed system, all the slaves are expected to replicate
the Master's state as quickly as possible. This is the only database in the
pool of databases that is allowed to handle write traffic.
=head1 ATTRIBUTES IMPLEMENTING THE DBIx::Class::Storage::DBI INTERFACE
The following attributes are delegated all the methods required for the
L<DBIx::Class::Storage::DBI> interface.
Defines an object that implements the read side of L<DBIx::Class::Storage::DBI>.

has 'read_handler' => (

    _dbh_columns_info_for
Defines an object that implements the write side of L<DBIx::Class::Storage::DBI>,
as well as methods that don't write or read that can be called on only one
storage, methods that return a C<$dbh>, and any methods that don't make sense to
run on a replicant.
has 'write_handler' => (

    deployment_statements
    build_datetime_parser
    with_deferred_fk_checks
    with_deferred_fk_checks
    relname_to_table_alias
    _default_dbi_connect_attributes
    _dbic_connect_attributes
    bind_attribute_by_data_type
    _per_row_update_delete
    _dbh_execute_inserts_with_no_binds
    _select_args_to_query
    _multipk_update_delete
    source_bind_attributes
    _normalize_connect_info
    _sqlt_minimum_version
    _adjust_select_args_for_complex_prefetch
    _resolve_ident_sources
    _strip_cond_qualifiers
    _resolve_aliastypes_from_select_args
my @unimplemented = qw(
  _arm_global_destructor
  get_use_dbms_capability
  set_use_dbms_capability
  _group_over_selection
  _extract_order_criteria
);

# the capability framework
# not sure if CMOP->initialize does evil things to DBIC::S::DBI, fix if a problem
push @unimplemented, ( grep
  { $_ =~ /^ _ (?: use | supports | determine_supports ) _ /x }
  ( Class::MOP::Class->initialize('DBIx::Class::Storage::DBI')->get_all_method_names )
);
for my $method (@unimplemented) {
  __PACKAGE__->meta->add_method($method, sub {
    croak "$method must not be called on ".(blessed shift).' objects';
  });
}
has _master_connect_info_opts =>
  (is => 'rw', isa => HashRef, default => sub { {} });
=head2 around: connect_info

Preserves master's C<connect_info> options (for merging with replicants).
Also sets any Replicated-related options from connect_info, such as
C<pool_type>, C<pool_args>, C<balancer_type> and C<balancer_args>.
around connect_info => sub {
  my ($next, $self, $info, @extra) = @_;

  my $merge = Hash::Merge->new('LEFT_PRECEDENT');

  my %opts;
  for my $arg (@$info) {
    next unless (reftype($arg)||'') eq 'HASH';
    %opts = %{ $merge->merge($arg, \%opts) };
  }

  if (@opts{qw/pool_type pool_args/}) {
    $self->pool_type(delete $opts{pool_type})
      if $opts{pool_type};

    $self->pool_args(
      $merge->merge((delete $opts{pool_args} || {}), $self->pool_args)
    );

    ## Since we possibly changed the pool_args, we need to clear the current
    ## pool object so that next time it is used it will be rebuilt.
    $self->clear_pool;
  }

  if (@opts{qw/balancer_type balancer_args/}) {
    $self->balancer_type(delete $opts{balancer_type})
      if $opts{balancer_type};

    $self->balancer_args(
      $merge->merge((delete $opts{balancer_args} || {}), $self->balancer_args)
    );

    $self->balancer($self->_build_balancer)
      if $self->balancer;
  }

  $self->_master_connect_info_opts(\%opts);

  my @res;
  if (wantarray) {
    @res = $self->$next($info, @extra);
  }
  else {
    $res[0] = $self->$next($info, @extra);
  }

  # Make sure master is blessed into the correct class and apply role to it.
  my $master = $self->master;
  $master->_determine_driver;
  Moose::Meta::Class->initialize(ref $master);

  DBIx::Class::Storage::DBI::Replicated::WithDSN->meta->apply($master);

  # link pool back to master
  $self->pool->master($master);

  wantarray ? @res : $res[0];
};
This class defines the following methods.

L<DBIx::Class::Schema>, when instantiating its storage, passes itself as the
first argument, so we need to massage the arguments a bit so that all the
bits get put into the correct places.

  my ($class, $schema, $storage_type_args, @args) = @_;
Lazy builder for the L</master> attribute.

  my $master = DBIx::Class::Storage::DBI->new($self->schema);

Lazy builder for the L</pool> attribute.

  $self->create_pool(%{$self->pool_args});
=head2 _build_balancer

Lazy builder for the L</balancer> attribute. This takes a Pool object so that
the balancer knows which pool it's balancing.
sub _build_balancer {
  my $self = shift;
  $self->create_balancer(
    pool=>$self->pool,
    master=>$self->master,
    %{$self->balancer_args},
  );
}
=head2 _build_write_handler

Lazy builder for the L</write_handler> attribute. The default is to set this to
the master.

sub _build_write_handler {
  return shift->master;
}
=head2 _build_read_handler

Lazy builder for the L</read_handler> attribute. The default is to set this to
the balancer.

sub _build_read_handler {
  return shift->balancer;
}
=head2 around: connect_replicants

All calls to connect_replicants need to have an existing $schema tacked onto
the top of the args, since L<DBIx::Class::Storage::DBI> needs it, and any
C<connect_info> options merged with the master, with replicant opts having
higher priority.
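For example, if the master was connected with C<< PrintError => 0 >>, a
replicant may override that option while still inheriting the rest of the
master's connect_info options (a sketch; the option values are illustrative):

  $schema->storage->connect_replicants(
    [$dsn1, $user, $pass, { PrintError => 1 }],  ## overrides master's PrintError
    [$dsn2, $user, $pass],                       ## inherits master's options
  );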
around connect_replicants => sub {
  my ($next, $self, @args) = @_;

  for my $r (@args) {
    $r = [ $r ] unless reftype $r eq 'ARRAY';

    $self->throw_exception('coderef replicant connect_info not supported')
      if ref $r->[0] && reftype $r->[0] eq 'CODE';

    # any connect_info options?
    my $i = 0;
    $i++ while $i < @$r && (reftype($r->[$i])||'') ne 'HASH';

    # make one if none
    $r->[$i] = {} unless $r->[$i];

    # merge if two hashes
    my @hashes = @$r[$i .. $#{$r}];

    $self->throw_exception('invalid connect_info options')
      if (grep { reftype($_) eq 'HASH' } @hashes) != @hashes;

    $self->throw_exception('too many hashrefs in connect_info')
      if @hashes > 2;

    my $merge = Hash::Merge->new('LEFT_PRECEDENT');
    my %opts = %{ $merge->merge(reverse @hashes) };

    # delete them
    splice @$r, $i+1, ($#{$r} - $i), ();

    # make sure master/replicants opts don't clash
    my %master_opts = %{ $self->_master_connect_info_opts };
    if (exists $opts{dbh_maker}) {
      delete @master_opts{qw/dsn user password/};
    }
    delete $master_opts{dbh_maker};

    # merge with master
    %opts = %{ $merge->merge(\%opts, \%master_opts) };

    # update
    $r->[$i] = \%opts;
  }

  $self->$next($self->schema, @args);
};
Returns an array of all the connected storage backends. The first element
in the returned array is the master, and the rest are the replicants.

sub all_storages {
  my $self = shift;

  return grep {defined $_ && blessed $_} (
    $self->master,
    values %{ $self->replicants },
  );
}
=head2 execute_reliably ($coderef, ?@args)

Given a coderef, saves the current state of the L</read_handler>, forces it to
use reliable storage (e.g. sets it to the master), executes the coderef and then
restores the original state.

Example:

  my $reliably = sub {
    my $name = shift;
    $schema->resultset('User')->create({name=>$name});
    my $user_rs = $schema->resultset('User')->find({name=>$name});
    return $user_rs;
  };

  my $user_rs = $schema->storage->execute_reliably($reliably, 'John');

Use this when you must be certain of your database state, such as when you just
inserted something and need to get a resultset including it, etc.
sub execute_reliably {
  my ($self, $coderef, @args) = @_;

  unless( ref $coderef eq 'CODE') {
    $self->throw_exception('Second argument must be a coderef');
  }

  ## Get copy of master storage
  my $master = $self->master;

  ## Get whatever the current read handler is
  my $current = $self->read_handler;

  ## Set the read handler to master
  $self->read_handler($master);

  ## do whatever the caller needs
  my @result;
  my $want_array = wantarray;

  try {
    if($want_array) {
      @result = $coderef->(@args);
    } elsif(defined $want_array) {
      ($result[0]) = ($coderef->(@args));
    } else {
      $coderef->(@args);
    }
  } catch {
    $self->throw_exception("coderef returned an error: $_");
  } finally {
    ## Reset to the original state
    $self->read_handler($current);
  };

  return wantarray ? @result : $result[0];
}
=head2 set_reliable_storage

Sets the current $schema to be 'reliable', that is, all queries, both read and
write, are sent to the master.
sub set_reliable_storage {
  my $self = shift;
  my $schema = $self->schema;
  my $write_handler = $self->schema->storage->write_handler;

  $schema->storage->read_handler($write_handler);
}
=head2 set_balanced_storage

Sets the current $schema to use the L</balancer> for all reads, while all
writes are sent to the master only.
sub set_balanced_storage {
  my $self = shift;
  my $schema = $self->schema;
  my $balanced_handler = $self->schema->storage->balancer;

  $schema->storage->read_handler($balanced_handler);
}
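Taken together, L</set_reliable_storage> and L</set_balanced_storage> can
bracket a block of reads that must see the latest writes (a sketch; remember
the caveat above that this switch is global to the storage object):

  $schema->storage->set_reliable_storage;
  ## ... reads here are guaranteed to run against the master ...
  $schema->storage->set_balanced_storage;  ## back to the replicants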
Check that the master and at least one of the replicants are connected.

sub connected {
  my $self = shift;

  return
    $self->master->connected &&
    $self->pool->connected_replicants;
}
=head2 ensure_connected

Make sure all the storages are connected.

sub ensure_connected {
  my $self = shift @_;
  foreach my $source ($self->all_storages) {
    $source->ensure_connected(@_);
  }
}
Set the limit_dialect for all existing storages.

sub limit_dialect {
  my $self = shift @_;
  foreach my $source ($self->all_storages) {
    $source->limit_dialect(@_);
  }
  return $self->master->limit_dialect;
}
Set the quote_char for all existing storages.

sub quote_char {
  my $self = shift @_;
  foreach my $source ($self->all_storages) {
    $source->quote_char(@_);
  }
  return $self->master->quote_char;
}
Set the name_sep for all existing storages.

sub name_sep {
  my $self = shift @_;
  foreach my $source ($self->all_storages) {
    $source->name_sep(@_);
  }
  return $self->master->name_sep;
}
Set the schema object for all existing storages.

sub set_schema {
  my $self = shift @_;
  foreach my $source ($self->all_storages) {
    $source->set_schema(@_);
  }
}
set a debug flag across all storages

sub debug {
  my $self = shift @_;
  if(@_) {
    foreach my $source ($self->all_storages) {
      $source->debug(@_);
    }
  }
  return $self->master->debug;
}
  return $self->master->debugobj(@_);

  return $self->master->debugfh(@_);

  return $self->master->debugcb(@_);
disconnect everything

sub disconnect {
  my $self = shift @_;
  foreach my $source ($self->all_storages) {
    $source->disconnect(@_);
  }
}
set cursor class on all storages, or return master's

sub cursor_class {
  my ($self, $cursor_class) = @_;

  if ($cursor_class) {
    $_->cursor_class($cursor_class) for $self->all_storages;
  }
  $self->master->cursor_class;
}
set cursor class on all storages, or return master's, alias for L</cursor_class>

sub cursor {
  my ($self, $cursor_class) = @_;

  if ($cursor_class) {
    $_->cursor($cursor_class) for $self->all_storages;
  }
  $self->master->cursor;
}
sets the L<DBIx::Class::Storage::DBI/unsafe> option on all storages or returns
master's current setting

sub unsafe {
  my $self = shift;

  if (@_) {
    $_->unsafe(@_) for $self->all_storages;
  }

  return $self->master->unsafe;
}
=head2 disable_sth_caching

sets the L<DBIx::Class::Storage::DBI/disable_sth_caching> option on all storages
or returns master's current setting

sub disable_sth_caching {
  my $self = shift;

  if (@_) {
    $_->disable_sth_caching(@_) for $self->all_storages;
  }

  return $self->master->disable_sth_caching;
}
=head2 lag_behind_master

returns the highest Replicant L<DBIx::Class::Storage::DBI/lag_behind_master>
value across the pool

sub lag_behind_master {
  my $self = shift;

  return max map $_->lag_behind_master, $self->replicants;
}
=head2 is_replicating

returns true if all replicants return true for
L<DBIx::Class::Storage::DBI/is_replicating>

sub is_replicating {
  my $self = shift;

  return (grep $_->is_replicating, $self->replicants) == ($self->replicants);
}
=head2 connect_call_datetime_setup

calls L<DBIx::Class::Storage::DBI/connect_call_datetime_setup> for all storages

sub connect_call_datetime_setup {
  my $self = shift;
  $_->connect_call_datetime_setup for $self->all_storages;
}
  $_->_populate_dbh for $self->all_storages;

  $_->_connect for $self->all_storages;

  $_->_rebless for $self->all_storages;
sub _determine_driver {
  my $self = shift;
  $_->_determine_driver for $self->all_storages;
}
sub _driver_determined {
  my $self = shift;

  if (@_) {
    $_->_driver_determined(@_) for $self->all_storages;
  }

  return $self->master->_driver_determined;
}

  $_->_init for $self->all_storages;
sub _run_connection_actions {
  my $self = shift;
  $_->_run_connection_actions for $self->all_storages;
}

sub _do_connection_actions {
  my $self = shift;
  $_->_do_connection_actions(@_) for $self->all_storages;
}

sub connect_call_do_sql {
  my $self = shift;
  $_->connect_call_do_sql(@_) for $self->all_storages;
}

sub disconnect_call_do_sql {
  my $self = shift;
  $_->disconnect_call_do_sql(@_) for $self->all_storages;
}
sub _seems_connected {
  my $self = shift;

  return min map $_->_seems_connected, $self->all_storages;
}

sub _ping {
  my $self = shift;

  return min map $_->_ping, $self->all_storages;
}
# not using the normalized_version, because we want to preserve
# version numbers much longer than the conventional xxx.yyyzzz
my $numify_ver = sub {
  my $ver = shift;
  my @numparts = split /\D+/, $ver;
  my $format = '%d.' . (join '', ('%06d') x (@numparts - 1));

  return sprintf $format, @numparts;
};
sub _server_info {
  my $self = shift;

  if (not $self->_dbh_details->{info}) {
    $self->_dbh_details->{info} = (
      reduce { $a->[0] < $b->[0] ? $a : $b }
      map [ $numify_ver->($_->{dbms_version}), $_ ],
      map $_->_server_info, $self->all_storages
    )->[1];
  }

  return $self->next::method;
}
sub _get_server_version {
  my $self = shift;

  return $self->_server_info->{dbms_version};
}
Due to the fact that replicants can lag behind a master, you must take care to
make sure you use one of the methods to force read queries to a master should
you need realtime data integrity. For example, if you insert a row, and then
immediately re-read it from the database (say, by doing $row->discard_changes)
or you insert a row and then immediately build a query that expects that row
to be an item, you should force the master to handle reads. Otherwise, due to
the lag, there is no certainty your data will be in the expected state.
For data integrity, all transactions automatically use the master storage for
all read and write queries. Using a transaction is the preferred and recommended
method to force the master to handle all read queries.

Otherwise, you can force a single query to use the master with the 'force_pool'
attribute:

  my $row = $resultset->search(undef, {force_pool=>'master'})->find($pk);
This attribute will safely be ignored by non-replicated storages, so you can use
the same code for both types of systems.

Lastly, you can use the L</execute_reliably> method, which works very much like
a transaction.
For debugging, you can turn replication on/off with the methods L</set_reliable_storage>
and L</set_balanced_storage>; however, this operates at a global level and is not
suitable if you have a shared Schema object being used by multiple processes,
such as on a web application server. You can get around this limitation by
using the Schema clone method.
  my $new_schema = $schema->clone;
  $new_schema->set_reliable_storage;

  ## $new_schema will use only the Master storage for all reads/writes while
  ## the $schema object will use replicated storage.
John Napiorkowski <john.napiorkowski@takkle.com>

Based on code originated by:

Norbert Csongrádi <bert@cpan.org>
Peter Siklósi <einon@einon.hu>

You may distribute this code under the same terms as Perl itself.

__PACKAGE__->meta->make_immutable;