1 package DBIx::Class::Storage::DBI::Replicated;
4 use Carp::Clan qw/^DBIx::Class/;
6 croak('The following modules are required for Replication ' . DBIx::Class::Optional::Dependencies->req_missing_for ('replicated') )
7 unless DBIx::Class::Optional::Dependencies->req_ok_for ('replicated');
11 use DBIx::Class::Storage::DBI;
12 use DBIx::Class::Storage::DBI::Replicated::Pool;
13 use DBIx::Class::Storage::DBI::Replicated::Balancer;
14 use DBIx::Class::Storage::DBI::Replicated::Types qw/BalancerClassNamePart DBICSchema DBICStorageDBI/;
15 use MooseX::Types::Moose qw/ClassName HashRef Object/;
16 use Scalar::Util 'reftype';
18 use List::Util qw/min max reduce/;
22 use namespace::clean -except => 'meta';
26 DBIx::Class::Storage::DBI::Replicated - BETA Replicated database support
The following example shows how to change an existing $schema to a replicated
31 storage type, add some replicated (read-only) databases, and perform reporting
You should set the 'storage_type' attribute to a replicated type. You should
35 also define your arguments, such as which balancer you want and any arguments
36 that the Pool object should get.
38 my $schema = Schema::Class->clone;
39 $schema->storage_type( ['::DBI::Replicated', {balancer=>'::Random'}] );
40 $schema->connection(...);
42 Next, you need to add in the Replicants. Basically this is an array of
43 arrayrefs, where each arrayref is database connect information. Think of these
44 arguments as what you'd pass to the 'normal' $schema->connect method.
46 $schema->storage->connect_replicants(
47 [$dsn1, $user, $pass, \%opts],
48 [$dsn2, $user, $pass, \%opts],
49 [$dsn3, $user, $pass, \%opts],
Now, just use the $schema as you normally would. All reads will automatically
be delegated to the replicants, while writes go to the master.
55 $schema->resultset('Source')->search({name=>'etc'});
57 You can force a given query to use a particular storage using the search
58 attribute 'force_pool'. For example:
60 my $RS = $schema->resultset('Source')->search(undef, {force_pool=>'master'});
Now $RS will force everything (both reads and writes) to use whatever was set up
as the master storage. 'master' is hardcoded to always point to the Master,
64 but you can also use any Replicant name. Please see:
65 L<DBIx::Class::Storage::DBI::Replicated::Pool> and the replicants attribute for more.
67 Also see transactions and L</execute_reliably> for alternative ways to
68 force read traffic to the master. In general, you should wrap your statements
69 in a transaction when you are reading and writing to the same tables at the
70 same time, since your replicants will often lag a bit behind the master.
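For instance, a read-after-write wrapped in a transaction is pinned to the
master (a sketch; the 'Artist' resultset and its column are illustrative, not
part of your schema):

```perl
# Inside txn_do every statement, reads included, runs against the
# master, so replication lag cannot hand back a stale or missing row.
$schema->txn_do(sub {
  $schema->resultset('Artist')->create({ name => 'Dead Salmon' });

  # Routed to the master because we are inside a transaction:
  my $artist = $schema->resultset('Artist')->find({ name => 'Dead Salmon' });
});
```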
72 See L<DBIx::Class::Storage::DBI::Replicated::Instructions> for more help and
Warning: This class is marked BETA. It has been running a production
website using MySQL native replication as its backend and we have some decent
test coverage, but the code hasn't yet been stressed by a variety of databases.
Individual DBs may have quirks we are not aware of. Please try this first in
development and pass along your experiences/bug fixes.
This class implements a replicated data store for DBI. Currently you can define
one master and numerous slave database connections. All write-type queries
(INSERT, UPDATE, DELETE and even LAST_INSERT_ID) are routed to the master
database, while all read-type queries (SELECTs) go to the slave databases.
Basically, any method request that L<DBIx::Class::Storage::DBI> would normally
handle gets delegated to one of the two attributes: L</read_handler> or
L</write_handler>. Additionally, some methods need to be distributed
to all existing storages. This way our storage class is a drop-in replacement
for L<DBIx::Class::Storage::DBI>.
Read traffic is spread across the replicants (slaves) according to a user-selected
algorithm. The default algorithm is random weighted.
99 The consistency between master and replicants is database specific. The Pool
100 gives you a method to validate its replicants, removing and replacing them
101 when they fail/pass predefined criteria. Please make careful use of the ways
102 to force a query to run against Master when needed.
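You can trigger that validation by hand, or let it run automatically (a
sketch; C<validate_replicants> and C<auto_validate_every> are provided by the
Pool and Balancer classes respectively):

```perl
# Ask the pool to re-check every replicant right now; replicants that
# fail the connected / is_replicating checks are set inactive until
# they recover.
$schema->storage->pool->validate_replicants;

# Or have the balancer re-validate automatically, at most every 500 seconds.
$schema->storage->balancer->auto_validate_every(500);
```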
106 Replicated Storage has additional requirements not currently part of
107 L<DBIx::Class>. See L<DBIx::Class::Optional::Dependencies> for more details.
111 This class defines the following attributes.
The underlying L<DBIx::Class::Schema> object this storage is attaching to.
128 Contains the classname which will instantiate the L</pool> object. Defaults
129 to: L<DBIx::Class::Storage::DBI::Replicated::Pool>.
136 default=>'DBIx::Class::Storage::DBI::Replicated::Pool',
138 'create_pool' => 'new',
Contains a hashref of initialization arguments to pass to the Pool object.
See L<DBIx::Class::Storage::DBI::Replicated::Pool> for available arguments.
The replication pool requires a balancer class to provide the methods for
choosing how to spread the query load across each replicant in the pool.
164 has 'balancer_type' => (
166 isa=>BalancerClassNamePart,
169 default=> 'DBIx::Class::Storage::DBI::Replicated::Balancer::First',
171 'create_balancer' => 'new',
Contains a hashref of initialization arguments to pass to the Balancer object.
178 See L<DBIx::Class::Storage::DBI::Replicated::Balancer> for available arguments.
182 has 'balancer_args' => (
Is a L<DBIx::Class::Storage::DBI::Replicated::Pool> or derived class. This is a
193 container class for one or more replicated databases.
199 isa=>'DBIx::Class::Storage::DBI::Replicated::Pool',
Is a L<DBIx::Class::Storage::DBI::Replicated::Balancer> or derived class. This
is a class that takes a pool (L<DBIx::Class::Storage::DBI::Replicated::Pool>)
217 isa=>'DBIx::Class::Storage::DBI::Replicated::Balancer',
219 handles=>[qw/auto_validate_every/],
224 The master defines the canonical state for a pool of connected databases. All
the replicants are expected to match this database's state. Thus, in a classic
226 Master / Slaves distributed system, all the slaves are expected to replicate
the Master's state as quickly as possible. This is the only database in the
228 pool of databases that is allowed to handle write traffic.
=head1 ATTRIBUTES IMPLEMENTING THE DBIx::Class::Storage::DBI INTERFACE
The following attributes are delegated all the methods required for the
L<DBIx::Class::Storage::DBI> interface.
Defines an object that implements the read side of L<DBIx::Class::Storage::DBI>.
249 has 'read_handler' => (
257 _dbh_columns_info_for
Defines an object that implements the write side of L<DBIx::Class::Storage::DBI>,
265 as well as methods that don't write or read that can be called on only one
266 storage, methods that return a C<$dbh>, and any methods that don't make sense to
271 has 'write_handler' => (
286 deployment_statements
289 build_datetime_parser
303 with_deferred_fk_checks
306 with_deferred_fk_checks
311 _supports_insert_returning
317 relname_to_table_alias
318 _straight_join_to_node
321 _default_dbi_connect_attributes
323 _dbic_connect_attributes
327 bind_attribute_by_data_type
336 _per_row_update_delete
338 _dbh_execute_inserts_with_no_binds
339 _select_args_to_query
341 _multipk_update_delete
342 source_bind_attributes
343 _normalize_connect_info
347 _placeholders_supported
349 _sqlt_minimum_version
352 _typeless_placeholders_supported
359 _adjust_select_args_for_complex_prefetch
360 _resolve_ident_sources
363 _strip_cond_qualifiers
365 _resolve_aliastypes_from_select_args
370 _prefetch_insert_auto_nextvals
375 my @unimplemented = qw(
376 _arm_global_destructor
377 _preserve_foreign_dbh
382 for my $method (@unimplemented) {
383 __PACKAGE__->meta->add_method($method, sub {
384 croak "$method must not be called on ".(blessed shift).' objects';
388 has _master_connect_info_opts =>
389 (is => 'rw', isa => HashRef, default => sub { {} });
391 =head2 around: connect_info
Preserves the master's C<connect_info> options (for merging with replicants).
394 Also sets any Replicated-related options from connect_info, such as
395 C<pool_type>, C<pool_args>, C<balancer_type> and C<balancer_args>.
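A sketch of setting those options through C<connect_info> (the DSN and
credentials are placeholders; C<master_read_weight> assumes the ::Random
balancer):

```perl
$schema->connection(
  $dsn, $user, $password,
  {
    balancer_type => '::Random',     # short form, expanded for you
    balancer_args => {
      auto_validate_every => 500,    # seconds between replicant checks
      master_read_weight  => 1,      # let the master serve some reads too
    },
  },
);
```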
399 around connect_info => sub {
400 my ($next, $self, $info, @extra) = @_;
402 my $wantarray = wantarray;
404 my $merge = Hash::Merge->new('LEFT_PRECEDENT');
407 for my $arg (@$info) {
408 next unless (reftype($arg)||'') eq 'HASH';
409 %opts = %{ $merge->merge($arg, \%opts) };
413 if (@opts{qw/pool_type pool_args/}) {
414 $self->pool_type(delete $opts{pool_type})
418 $merge->merge((delete $opts{pool_args} || {}), $self->pool_args)
421 $self->pool($self->_build_pool)
425 if (@opts{qw/balancer_type balancer_args/}) {
426 $self->balancer_type(delete $opts{balancer_type})
427 if $opts{balancer_type};
429 $self->balancer_args(
430 $merge->merge((delete $opts{balancer_args} || {}), $self->balancer_args)
433 $self->balancer($self->_build_balancer)
437 $self->_master_connect_info_opts(\%opts);
441 @res = $self->$next($info, @extra);
443 $res = $self->$next($info, @extra);
446 # Make sure master is blessed into the correct class and apply role to it.
447 my $master = $self->master;
448 $master->_determine_driver;
449 Moose::Meta::Class->initialize(ref $master);
451 DBIx::Class::Storage::DBI::Replicated::WithDSN->meta->apply($master);
453 # link pool back to master
454 $self->pool->master($master);
456 $wantarray ? @res : $res;
461 This class defines the following methods.
When instantiating its storage, L<DBIx::Class::Schema> passes itself as the
first argument. So we need to massage the arguments a bit so that all the
bits get put into the correct places.
472 my ($class, $schema, $storage_type_args, @args) = @_;
483 Lazy builder for the L</master> attribute.
489 my $master = DBIx::Class::Storage::DBI->new($self->schema);
495 Lazy builder for the L</pool> attribute.
501 $self->create_pool(%{$self->pool_args});
504 =head2 _build_balancer
506 Lazy builder for the L</balancer> attribute. This takes a Pool object so that
507 the balancer knows which pool it's balancing.
511 sub _build_balancer {
513 $self->create_balancer(
515 master=>$self->master,
516 %{$self->balancer_args},
520 =head2 _build_write_handler
Lazy builder for the L</write_handler> attribute. The default is to set this to the L</master>.
527 sub _build_write_handler {
528 return shift->master;
531 =head2 _build_read_handler
Lazy builder for the L</read_handler> attribute. The default is to set this to the L</balancer>.
538 sub _build_read_handler {
539 return shift->balancer;
542 =head2 around: connect_replicants
All calls to connect_replicants need to have an existing $schema tacked onto
the top of the args, since L<DBIx::Class::Storage::DBI> needs it, and any
C<connect_info> options merged with the master's, with replicant opts having
higher priority.
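A sketch of the merge in practice (the DSNs are placeholders): a replicant's
own options hashref wins over the same keys inherited from the master's
C<connect_info>.

```perl
$schema->storage->connect_replicants(
  # Inherits the master's connect_info options unchanged:
  [ $dsn1, $user, $pass ],

  # Overrides PrintError for this replicant only; the master's other
  # options still merge in underneath:
  [ $dsn2, $user, $pass, { PrintError => 0 } ],
);
```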
550 around connect_replicants => sub {
551 my ($next, $self, @args) = @_;
554 $r = [ $r ] unless reftype $r eq 'ARRAY';
556 $self->throw_exception('coderef replicant connect_info not supported')
557 if ref $r->[0] && reftype $r->[0] eq 'CODE';
559 # any connect_info options?
561 $i++ while $i < @$r && (reftype($r->[$i])||'') ne 'HASH';
564 $r->[$i] = {} unless $r->[$i];
566 # merge if two hashes
567 my @hashes = @$r[$i .. $#{$r}];
569 $self->throw_exception('invalid connect_info options')
570 if (grep { reftype($_) eq 'HASH' } @hashes) != @hashes;
572 $self->throw_exception('too many hashrefs in connect_info')
575 my $merge = Hash::Merge->new('LEFT_PRECEDENT');
576 my %opts = %{ $merge->merge(reverse @hashes) };
579 splice @$r, $i+1, ($#{$r} - $i), ();
581 # make sure master/replicants opts don't clash
582 my %master_opts = %{ $self->_master_connect_info_opts };
583 if (exists $opts{dbh_maker}) {
584 delete @master_opts{qw/dsn user password/};
586 delete $master_opts{dbh_maker};
589 %opts = %{ $merge->merge(\%opts, \%master_opts) };
595 $self->$next($self->schema, @args);
Returns an array of all the connected storage backends. The first element
in the returned array is the master, and the remaining elements are each of the
608 return grep {defined $_ && blessed $_} (
610 values %{ $self->replicants },
614 =head2 execute_reliably ($coderef, ?@args)
616 Given a coderef, saves the current state of the L</read_handler>, forces it to
617 use reliable storage (e.g. sets it to the master), executes a coderef and then
618 restores the original state.
  my $reliably = sub {
    my $name = shift;
    $schema->resultset('User')->create({name=>$name});
    my $user_rs = $schema->resultset('User')->find({name=>$name});
    return $user_rs;
  };

  my $user_rs = $schema->storage->execute_reliably($reliably, 'John');
631 Use this when you must be certain of your database state, such as when you just
632 inserted something and need to get a resultset including it, etc.
636 sub execute_reliably {
637 my ($self, $coderef, @args) = @_;
639 unless( ref $coderef eq 'CODE') {
640 $self->throw_exception('Second argument must be a coderef');
643 ##Get copy of master storage
644 my $master = $self->master;
##Get whatever the current read handler is
647 my $current = $self->read_handler;
649 ##Set the read handler to master
650 $self->read_handler($master);
652 ## do whatever the caller needs
654 my $want_array = wantarray;
658 @result = $coderef->(@args);
659 } elsif(defined $want_array) {
660 ($result[0]) = ($coderef->(@args));
665 $self->throw_exception("coderef returned an error: $_");
667 ##Reset to the original state
668 $self->read_handler($current);
671 return $want_array ? @result : $result[0];
674 =head2 set_reliable_storage
Sets the current $schema to be 'reliable', that is, all queries, both read and
write, are sent to the master
681 sub set_reliable_storage {
683 my $schema = $self->schema;
684 my $write_handler = $self->schema->storage->write_handler;
686 $schema->storage->read_handler($write_handler);
689 =head2 set_balanced_storage
Sets the current $schema to use the L</balancer> for all reads, while all
692 writes are sent to the master only
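A sketch of toggling between the two modes around a lag-sensitive read (the
resultset name is illustrative). Remember this flips the whole storage, so
prefer L</execute_reliably> or a transaction when the schema is shared:

```perl
# Route everything, reads included, to the master...
$schema->storage->set_reliable_storage;
my $fresh = $schema->resultset('User')->find($id);

# ...then hand reads back to the balancer.
$schema->storage->set_balanced_storage;
```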
696 sub set_balanced_storage {
698 my $schema = $self->schema;
699 my $balanced_handler = $self->schema->storage->balancer;
701 $schema->storage->read_handler($balanced_handler);
Checks that the master and at least one of the replicants are connected.
713 $self->master->connected &&
714 $self->pool->connected_replicants;
717 =head2 ensure_connected
719 Make sure all the storages are connected.
723 sub ensure_connected {
725 foreach my $source ($self->all_storages) {
726 $source->ensure_connected(@_);
732 Set the limit_dialect for all existing storages
738 foreach my $source ($self->all_storages) {
739 $source->limit_dialect(@_);
return $self->master->limit_dialect;
746 Set the quote_char for all existing storages
752 foreach my $source ($self->all_storages) {
753 $source->quote_char(@_);
755 return $self->master->quote_char;
760 Set the name_sep for all existing storages
766 foreach my $source ($self->all_storages) {
767 $source->name_sep(@_);
769 return $self->master->name_sep;
774 Set the schema object for all existing storages
780 foreach my $source ($self->all_storages) {
781 $source->set_schema(@_);
787 set a debug flag across all storages
794 foreach my $source ($self->all_storages) {
798 return $self->master->debug;
809 return $self->master->debugobj(@_);
820 return $self->master->debugfh(@_);
831 return $self->master->debugcb(@_);
836 disconnect everything
842 foreach my $source ($self->all_storages) {
843 $source->disconnect(@_);
849 set cursor class on all storages, or return master's
854 my ($self, $cursor_class) = @_;
857 $_->cursor_class($cursor_class) for $self->all_storages;
859 $self->master->cursor_class;
864 set cursor class on all storages, or return master's, alias for L</cursor_class>
870 my ($self, $cursor_class) = @_;
873 $_->cursor($cursor_class) for $self->all_storages;
875 $self->master->cursor;
880 sets the L<DBIx::Class::Storage::DBI/unsafe> option on all storages or returns
881 master's current setting
889 $_->unsafe(@_) for $self->all_storages;
892 return $self->master->unsafe;
895 =head2 disable_sth_caching
897 sets the L<DBIx::Class::Storage::DBI/disable_sth_caching> option on all storages
898 or returns master's current setting
902 sub disable_sth_caching {
906 $_->disable_sth_caching(@_) for $self->all_storages;
909 return $self->master->disable_sth_caching;
912 =head2 lag_behind_master
914 returns the highest Replicant L<DBIx::Class::Storage::DBI/lag_behind_master>
919 sub lag_behind_master {
922 return max map $_->lag_behind_master, $self->replicants;
925 =head2 is_replicating
927 returns true if all replicants return true for
928 L<DBIx::Class::Storage::DBI/is_replicating>
935 return (grep $_->is_replicating, $self->replicants) == ($self->replicants);
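Together these support a simple health probe (a sketch; the 30-second
threshold is illustrative, and assumes a connected replicated C<$schema>):

```perl
# Coarse health check: are all replicants replicating, and how far
# behind is the slowest one?
if ( $schema->storage->is_replicating ) {
  my $lag = $schema->storage->lag_behind_master;
  warn "replicants are up to ${lag}s behind the master\n"
    if defined $lag && $lag > 30;
}
```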
938 =head2 connect_call_datetime_setup
940 calls L<DBIx::Class::Storage::DBI/connect_call_datetime_setup> for all storages
944 sub connect_call_datetime_setup {
946 $_->connect_call_datetime_setup for $self->all_storages;
951 $_->_populate_dbh for $self->all_storages;
956 $_->_connect for $self->all_storages;
961 $_->_rebless for $self->all_storages;
964 sub _determine_driver {
966 $_->_determine_driver for $self->all_storages;
969 sub _driver_determined {
973 $_->_driver_determined(@_) for $self->all_storages;
976 return $self->master->_driver_determined;
982 $_->_init for $self->all_storages;
985 sub _run_connection_actions {
988 $_->_run_connection_actions for $self->all_storages;
991 sub _do_connection_actions {
995 $_->_do_connection_actions(@_) for $self->all_storages;
999 sub connect_call_do_sql {
1001 $_->connect_call_do_sql(@_) for $self->all_storages;
1004 sub disconnect_call_do_sql {
1006 $_->disconnect_call_do_sql(@_) for $self->all_storages;
1009 sub _seems_connected {
1012 return min map $_->_seems_connected, $self->all_storages;
1018 return min map $_->_ping, $self->all_storages;
1021 my $numify_ver = sub {
1023 my @numparts = split /\D+/, $ver;
1024 my $format = '%d.' . (join '', ('%05d') x (@numparts - 1));
1026 return sprintf $format, @numparts;
1032 if (not $self->_server_info_hash) {
1033 my $min_version_info = (
1034 reduce { $a->[0] < $b->[0] ? $a : $b }
1035 map [ $numify_ver->($_->{dbms_version}), $_ ],
1036 map $_->_server_info, $self->all_storages
1039 $self->_server_info_hash($min_version_info); # on master
1042 return $self->_server_info_hash;
1045 sub _get_server_version {
1048 return $self->_server_info->{dbms_version};
1053 Due to the fact that replicants can lag behind a master, you must take care to
1054 make sure you use one of the methods to force read queries to a master should
1055 you need realtime data integrity. For example, if you insert a row, and then
1056 immediately re-read it from the database (say, by doing $row->discard_changes)
or you insert a row and then immediately build a query that expects that row
to be in the results, you should force the master to handle reads. Otherwise, due to
1059 the lag, there is no certainty your data will be in the expected state.
1061 For data integrity, all transactions automatically use the master storage for
1062 all read and write queries. Using a transaction is the preferred and recommended
1063 method to force the master to handle all read queries.
Otherwise, you can force a single query to use the master with the 'force_pool' attribute:
1068 my $row = $resultset->search(undef, {force_pool=>'master'})->find($pk);
This attribute will safely be ignored by non-replicated storages, so you can use
the same code for both types of systems.
Lastly, you can use the L</execute_reliably> method, which works very much like a transaction.
1076 For debugging, you can turn replication on/off with the methods L</set_reliable_storage>
1077 and L</set_balanced_storage>, however this operates at a global level and is not
1078 suitable if you have a shared Schema object being used by multiple processes,
1079 such as on a web application server. You can get around this limitation by
1080 using the Schema clone method.
1082 my $new_schema = $schema->clone;
1083 $new_schema->set_reliable_storage;
1085 ## $new_schema will use only the Master storage for all reads/writes while
1086 ## the $schema object will use replicated storage.
1090 John Napiorkowski <john.napiorkowski@takkle.com>
1092 Based on code originated by:
1094 Norbert Csongrádi <bert@cpan.org>
1095 Peter Siklósi <einon@einon.hu>
1099 You may distribute this code under the same terms as Perl itself.
1103 __PACKAGE__->meta->make_immutable;