1 package DBIx::Class::Storage::DBI;
2 # -*- mode: cperl; cperl-indent-level: 2 -*-
4 use base 'DBIx::Class::Storage';
8 use Carp::Clan qw/^DBIx::Class/;
10 use DBIx::Class::SQLAHacks;
11 use DBIx::Class::Storage::DBI::Cursor;
12 use DBIx::Class::Storage::Statistics;
13 use Scalar::Util qw/blessed weaken/;
15 __PACKAGE__->mk_group_accessors('simple' =>
16 qw/_connect_info _dbi_connect_info _dbh _sql_maker _sql_maker_opts
17 _conn_pid _conn_tid transaction_depth _dbh_autocommit savepoints/
20 # the values for these accessors are picked out (and deleted) from
21 # the attribute hashref passed to connect_info
22 my @storage_options = qw/
23 on_connect_do on_disconnect_do disable_sth_caching unsafe auto_savepoint
25 __PACKAGE__->mk_group_accessors('simple' => @storage_options);
28 # default cursor class, overridable in connect_info attributes
29 __PACKAGE__->cursor_class('DBIx::Class::Storage::DBI::Cursor');
31 __PACKAGE__->mk_group_accessors('inherited' => qw/sql_maker_class/);
32 __PACKAGE__->sql_maker_class('DBIx::Class::SQLAHacks');
37 DBIx::Class::Storage::DBI - DBI storage handler
41 my $schema = MySchema->connect('dbi:SQLite:my.db');
43 $schema->storage->debug(1);
44 $schema->dbh_do("DROP TABLE authors");
46 $schema->resultset('Book')->search({
47 written_on => $schema->storage->datetime_parser->format_datetime(DateTime->now)
52 This class represents the connection to an RDBMS via L<DBI>. See
53 L<DBIx::Class::Storage> for general information. This pod only
54 documents DBI-specific methods and behaviors.
61 my $new = shift->next::method(@_);
63 $new->transaction_depth(0);
64 $new->_sql_maker_opts({});
65 $new->{savepoints} = [];
66 $new->{_in_dbh_do} = 0;
74 This method is normally called by L<DBIx::Class::Schema/connection>, which
75 encapsulates its argument list in an arrayref before passing it here.
77 The argument list may contain:
83 The same 4-element argument set one would normally pass to
84 L<DBI/connect>, optionally followed by
85 L<extra attributes|/DBIx::Class specific connection attributes>
86 recognized by DBIx::Class:
88 $connect_info_args = [ $dsn, $user, $password, \%dbi_attributes?, \%extra_attributes? ];
92 A single code reference which returns a connected
93 L<DBI database handle|DBI/connect> optionally followed by
94 L<extra attributes|/DBIx::Class specific connection attributes> recognized
97 $connect_info_args = [ sub { DBI->connect (...) }, \%extra_attributes? ];
101 A single hashref with all the attributes and the dsn/user/password
104 $connect_info_args = [{
112 This is particularly useful for L<Catalyst> based applications, allowing the
113 following config (L<Config::General> style):
118 dsn dbi:mysql:database=test
127 Please note that the L<DBI> docs recommend that you always explicitly
128 set C<AutoCommit> to either I<0> or I<1>. L<DBIx::Class> further
129 recommends that it be set to I<1>, and that you perform transactions
130 via our L<DBIx::Class::Schema/txn_do> method. L<DBIx::Class> will set it
131 to I<1> if you do not explicitly set it to zero. This is the default
132 for most DBDs. See L</DBIx::Class and AutoCommit> for details.
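For example, a minimal sketch of a DSN-style call that sets C<AutoCommit>
explicitly (the SQLite DSN below is just a placeholder):

  ->connect_info([ 'dbi:SQLite:./foo.db', undef, undef, { AutoCommit => 1 } ]);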
134 =head3 DBIx::Class specific connection attributes
136 In addition to the standard L<DBI|DBI/ATTRIBUTES_COMMON_TO_ALL_HANDLES>
137 L<connection|DBI/Database_Handle_Attributes> attributes, DBIx::Class recognizes
138 the following connection options. These options can be mixed in with your other
139 L<DBI> connection attributes, or placed in a separate hashref
140 (C<\%extra_attributes>) as shown above.
142 Every time C<connect_info> is invoked, any previous settings for
143 these options will be cleared before setting the new ones, regardless of
144 whether any options are specified in the new C<connect_info>.
151 Specifies things to do immediately after connecting or re-connecting to
152 the database. Its value may contain:
156 =item an array reference
158 This contains SQL statements to execute in order. Each element contains
159 a string or a code reference that returns a string.
161 =item a code reference
163 This contains some code to execute. Unlike code references within an
164 array reference, its return value is ignored.
168 =item on_disconnect_do
170 Takes arguments in the same form as L</on_connect_do> and executes them
171 immediately before disconnecting from the database.
173 Note, this only runs if you explicitly call L</disconnect> on the
176 =item disable_sth_caching
178 If set to a true value, this option will disable the caching of
179 statement handles via L<DBI/prepare_cached>.
183 Sets the limit dialect. This is useful for JDBC bridges and other cases
184 where the remote SQL dialect cannot be determined from the name of the
185 driver alone. See also L<SQL::Abstract::Limit>.
189 Specifies what characters to use to quote table and column names. If
190 you use this you will want to specify L</name_sep> as well.
192 C<quote_char> expects either a single character, in which case it is
193 placed on either side of the table/column name, or an arrayref of length
194 2, in which case the table/column name is placed between the elements.
196 For example under MySQL you should use C<< quote_char => '`' >>, and for
197 SQL Server you should use C<< quote_char => [qw/[ ]/] >>.
201 This only needs to be used in conjunction with C<quote_char>, and is used to
202 specify the character that separates elements (schemas, tables, columns) from
203 each other. In most cases this is simply a C<.>.
205 The consequence of not supplying this value is that L<SQL::Abstract>
206 will assume DBIx::Class' uses of aliases to be complete column
207 names. The output will look like I<"me.name"> when it should actually be I<"me"."name">.
212 This Storage driver normally installs its own C<HandleError>, sets
213 C<RaiseError> and C<ShowErrorStatement> on, and sets C<PrintError> off on
214 all database handles, including those supplied by a coderef. It does this
215 so that it can have consistent and useful error behavior.
217 If you set this option to a true value, Storage will not do its usual
218 modifications to the database handle's attributes, and instead relies on
219 the settings in your connect_info DBI options (or the values you set in
220 your connection coderef, in the case that you are connecting via coderef).
222 Note that your custom settings can cause Storage to malfunction,
223 especially if you set a C<HandleError> handler that suppresses exceptions
224 and/or disable C<RaiseError>.
228 If this option is true, L<DBIx::Class> will use savepoints when nesting
229 transactions, making it possible to recover from failure in the inner
230 transaction without having to abort all outer transactions.
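A sketch of how this plays out with nested L<DBIx::Class::Schema/txn_do>
calls; the C<Book> resultset and its columns are illustrative, and
C<auto_savepoint> is assumed to be passed in the extra attributes hashref:

  my $schema = MySchema->connect(
    $dsn, $user, $password,
    { AutoCommit => 1, auto_savepoint => 1 },
  );

  $schema->txn_do(sub {
    $schema->resultset('Book')->create({ title => 'Outer' });

    eval {
      # guarded by a savepoint - a failure here rolls back only
      # this inner block, not the whole outer transaction
      $schema->txn_do(sub {
        $schema->resultset('Book')->create({ title => 'Inner' });
        die "inner failure\n";
      });
    };
  });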
234 Use this argument to supply a cursor class other than the default
235 L<DBIx::Class::Storage::DBI::Cursor>.
239 Some real-life examples of arguments to L</connect_info> and
240 L<DBIx::Class::Schema/connect>
242 # Simple SQLite connection
243 ->connect_info([ 'dbi:SQLite:./foo.db' ]);
246 ->connect_info([ sub { DBI->connect(...) } ]);
248 # A bit more complicated
255 { quote_char => q{"}, name_sep => q{.} },
259 # Equivalent to the previous example
265 { AutoCommit => 1, quote_char => q{"}, name_sep => q{.} },
269 # Same, but with hashref as argument
270 # See connect_info above for explanation
273 dsn => 'dbi:Pg:dbname=foo',
275 password => 'my_pg_password',
282 # Subref + DBIx::Class-specific connection options
285 sub { DBI->connect(...) },
289 on_connect_do => ['SET search_path TO myschema,otherschema,public'],
290 disable_sth_caching => 1,
300 my ($self, $info_arg) = @_;
302 return $self->_connect_info if !$info_arg;
304 my @args = @$info_arg; # take a shallow copy for further mutilation
305 $self->_connect_info([@args]); # copy for _connect_info
308 # combine/pre-parse arguments depending on invocation style
311 if (ref $args[0] eq 'CODE') { # coderef with optional \%extra_attributes
312 %attrs = %{ $args[1] || {} };
315 elsif (ref $args[0] eq 'HASH') { # single hashref (i.e. Catalyst config)
316 %attrs = %{$args[0]};
318 for (qw/password user dsn/) {
319 unshift @args, delete $attrs{$_};
322 else { # otherwise assume dsn/user/password + \%attrs + \%extra_attrs
324 % { $args[3] || {} },
325 % { $args[4] || {} },
327 @args = @args[0,1,2];
330 # Kill sql_maker/_sql_maker_opts, so we get a fresh one with only
331 # the new set of options
332 $self->_sql_maker(undef);
333 $self->_sql_maker_opts({});
336 for my $storage_opt (@storage_options, 'cursor_class') { # @storage_options is declared at the top of the module
337 if(my $value = delete $attrs{$storage_opt}) {
338 $self->$storage_opt($value);
341 for my $sql_maker_opt (qw/limit_dialect quote_char name_sep/) {
342 if(my $opt_val = delete $attrs{$sql_maker_opt}) {
343 $self->_sql_maker_opts->{$sql_maker_opt} = $opt_val;
348 %attrs = () if (ref $args[0] eq 'CODE'); # _connect() never looks past $args[0] in this case
350 $self->_dbi_connect_info([@args, keys %attrs ? \%attrs : ()]);
351 $self->_connect_info;
356 This method is deprecated in favour of setting via L</connect_info>.
361 Arguments: ($subref | $method_name), @extra_coderef_args?
363 Execute the given $subref or $method_name using the new exception-based
364 connection management.
366 The first two arguments will be the storage object that C<dbh_do> was called
367 on and a database handle to use. Any additional arguments will be passed
368 verbatim to the called subref as arguments 2 and onwards.
370 Using this (instead of $self->_dbh or $self->dbh) ensures correct
371 exception handling and reconnection (or failover in future subclasses).
373 Your subref should have no side-effects outside of the database, as
374 there is the potential for your subref to be partially double-executed
375 if the database connection was stale/dysfunctional.
379 my @stuff = $schema->storage->dbh_do(
381 my ($storage, $dbh, @cols) = @_;
382 my $cols = join(q{, }, @cols);
383 $dbh->selectrow_array("SELECT $cols FROM foo");
394 my $dbh = $self->_dbh;
396 return $self->$code($dbh, @_) if $self->{_in_dbh_do}
397 || $self->{transaction_depth};
399 local $self->{_in_dbh_do} = 1;
402 my $want_array = wantarray;
405 $self->_verify_pid if $dbh;
407 $self->_populate_dbh;
412 @result = $self->$code($dbh, @_);
414 elsif(defined $want_array) {
415 $result[0] = $self->$code($dbh, @_);
418 $self->$code($dbh, @_);
423 if(!$exception) { return $want_array ? @result : $result[0] }
425 $self->throw_exception($exception) if $self->connected;
427 # We were not connected - reconnect and retry, but let any
428 # exception fall right through this time
429 $self->_populate_dbh;
430 $self->$code($self->_dbh, @_);
433 # This is basically a blend of dbh_do above and DBIx::Class::Storage::txn_do.
434 # It also informs dbh_do to bypass itself while under the direction of txn_do,
435 # via $self->{_in_dbh_do} (this saves some redundant eval and errorcheck, etc)
440 ref $coderef eq 'CODE' or $self->throw_exception
441 ('$coderef must be a CODE reference');
443 return $coderef->(@_) if $self->{transaction_depth} && ! $self->auto_savepoint;
445 local $self->{_in_dbh_do} = 1;
448 my $want_array = wantarray;
453 $self->_verify_pid if $self->_dbh;
454 $self->_populate_dbh if !$self->_dbh;
458 @result = $coderef->(@_);
460 elsif(defined $want_array) {
461 $result[0] = $coderef->(@_);
470 if(!$exception) { return $want_array ? @result : $result[0] }
472 if($tried++ > 0 || $self->connected) {
473 eval { $self->txn_rollback };
474 my $rollback_exception = $@;
475 if($rollback_exception) {
476 my $exception_class = "DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION";
477 $self->throw_exception($exception) # propagate nested rollback
478 if $rollback_exception =~ /$exception_class/;
480 $self->throw_exception(
481 "Transaction aborted: ${exception}. "
482 . "Rollback failed: ${rollback_exception}"
485 $self->throw_exception($exception)
488 # We were not connected and this was the first try - reconnect and retry
490 $self->_populate_dbh;
496 Our C<disconnect> method also performs a rollback first if the
497 database is not in C<AutoCommit> mode.
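A minimal usage sketch; note that L</on_disconnect_do> actions only run when
C<disconnect> is called explicitly like this:

  $schema->storage->disconnect;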
504 if( $self->connected ) {
505 my $connection_do = $self->on_disconnect_do;
506 $self->_do_connection_actions($connection_do) if ref($connection_do);
508 $self->_dbh->rollback unless $self->_dbh_autocommit;
509 $self->_dbh->disconnect;
515 =head2 with_deferred_fk_checks
519 =item Arguments: C<$coderef>
521 =item Return Value: The return value of $coderef
525 Storage-specific method to run the coderef with FK checks deferred, or,
526 in MySQL's case, disabled entirely.
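A usage sketch; the C<Artist> and C<CD> resultsets and their columns are
illustrative placeholders:

  $schema->storage->with_deferred_fk_checks(sub {
    # insert mutually referencing rows in whatever order is convenient
    $schema->resultset('Artist')->create({ artistid => 1, name => 'JMJ' });
    $schema->resultset('CD')->create({ artist => 1, title => 'Oxygene' });
  });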
530 # Storage subclasses should override this
531 sub with_deferred_fk_checks {
532 my ($self, $sub) = @_;
540 if(my $dbh = $self->_dbh) {
541 if(defined $self->_conn_tid && $self->_conn_tid != threads->tid) {
548 return 0 if !$self->_dbh;
550 return ($dbh->FETCH('Active') && $dbh->ping);
556 # handle pid changes correctly
557 # NOTE: assumes $self->_dbh is a valid $dbh
561 return if defined $self->_conn_pid && $self->_conn_pid == $$;
563 $self->_dbh->{InactiveDestroy} = 1;
570 sub ensure_connected {
573 unless ($self->connected) {
574 $self->_populate_dbh;
580 Returns the dbh - a database handle of class L<DBI>.
587 $self->ensure_connected;
591 sub _sql_maker_args {
594 return ( bindtype=>'columns', array_datatypes => 1, limit_dialect => $self->dbh, %{$self->_sql_maker_opts} );
599 unless ($self->_sql_maker) {
600 my $sql_maker_class = $self->sql_maker_class;
601 $self->_sql_maker($sql_maker_class->new( $self->_sql_maker_args ));
603 return $self->_sql_maker;
610 my @info = @{$self->_dbi_connect_info || []};
611 $self->_dbh($self->_connect(@info));
613 # Always set the transaction depth on connect, since
614 # there is no transaction in progress by definition
615 $self->{transaction_depth} = $self->_dbh_autocommit ? 0 : 1;
617 if(ref $self eq 'DBIx::Class::Storage::DBI') {
618 my $driver = $self->_dbh->{Driver}->{Name};
619 if ($self->load_optional_class("DBIx::Class::Storage::DBI::${driver}")) {
620 bless $self, "DBIx::Class::Storage::DBI::${driver}";
625 $self->_conn_pid($$);
626 $self->_conn_tid(threads->tid) if $INC{'threads.pm'};
628 my $connection_do = $self->on_connect_do;
629 $self->_do_connection_actions($connection_do) if ref($connection_do);
632 sub _do_connection_actions {
634 my $connection_do = shift;
636 if (ref $connection_do eq 'ARRAY') {
637 $self->_do_query($_) foreach @$connection_do;
639 elsif (ref $connection_do eq 'CODE') {
640 $connection_do->($self);
647 my ($self, $action) = @_;
649 if (ref $action eq 'CODE') {
650 $action = $action->($self);
651 $self->_do_query($_) foreach @$action;
654 # Most debuggers expect ($sql, @bind), so we need to exclude
655 # the attribute hash which is the second argument to $dbh->do
656 # furthermore the bind values are usually to be presented
657 # as named arrayref pairs, so wrap those here too
658 my @do_args = (ref $action eq 'ARRAY') ? (@$action) : ($action);
659 my $sql = shift @do_args;
660 my $attrs = shift @do_args;
661 my @bind = map { [ undef, $_ ] } @do_args;
663 $self->_query_start($sql, @bind);
664 $self->_dbh->do($sql, $attrs, @do_args);
665 $self->_query_end($sql, @bind);
672 my ($self, @info) = @_;
674 $self->throw_exception("You failed to provide any connection info")
677 my ($old_connect_via, $dbh);
679 if ($INC{'Apache/DBI.pm'} && $ENV{MOD_PERL}) {
680 $old_connect_via = $DBI::connect_via;
681 $DBI::connect_via = 'connect';
685 if(ref $info[0] eq 'CODE') {
689 $dbh = DBI->connect(@info);
692 if($dbh && !$self->unsafe) {
693 my $weak_self = $self;
695 $dbh->{HandleError} = sub {
697 $weak_self->throw_exception("DBI Exception: $_[0]");
700 croak ("DBI Exception: $_[0]");
703 $dbh->{ShowErrorStatement} = 1;
704 $dbh->{RaiseError} = 1;
705 $dbh->{PrintError} = 0;
709 $DBI::connect_via = $old_connect_via if $old_connect_via;
711 $self->throw_exception("DBI Connection failed: " . ($@||$DBI::errstr))
714 $self->_dbh_autocommit($dbh->{AutoCommit});
720 my ($self, $name) = @_;
722 $name = $self->_svp_generate_name
723 unless defined $name;
725 $self->throw_exception ("You can't use savepoints outside a transaction")
726 if $self->{transaction_depth} == 0;
728 $self->throw_exception ("Your Storage implementation doesn't support savepoints")
729 unless $self->can('_svp_begin');
731 push @{ $self->{savepoints} }, $name;
733 $self->debugobj->svp_begin($name) if $self->debug;
735 return $self->_svp_begin($name);
739 my ($self, $name) = @_;
741 $self->throw_exception ("You can't use savepoints outside a transaction")
742 if $self->{transaction_depth} == 0;
744 $self->throw_exception ("Your Storage implementation doesn't support savepoints")
745 unless $self->can('_svp_release');
748 $self->throw_exception ("Savepoint '$name' does not exist")
749 unless grep { $_ eq $name } @{ $self->{savepoints} };
751 # Dig through the stack until we find the one we are releasing. This keeps
752 # the stack up to date.
755 do { $svp = pop @{ $self->{savepoints} } } while $svp ne $name;
757 $name = pop @{ $self->{savepoints} };
760 $self->debugobj->svp_release($name) if $self->debug;
762 return $self->_svp_release($name);
766 my ($self, $name) = @_;
768 $self->throw_exception ("You can't use savepoints outside a transaction")
769 if $self->{transaction_depth} == 0;
771 $self->throw_exception ("Your Storage implementation doesn't support savepoints")
772 unless $self->can('_svp_rollback');
775 # If they passed us a name, verify that it exists in the stack
776 unless(grep({ $_ eq $name } @{ $self->{savepoints} })) {
777 $self->throw_exception("Savepoint '$name' does not exist!");
780 # Dig through the stack until we find the one we are rolling back to.
781 # This keeps the stack up to date.
782 while(my $s = pop(@{ $self->{savepoints} })) {
783 last if($s eq $name);
785 # Add the savepoint back to the stack, as a rollback doesn't remove the
786 # named savepoint, only everything after it.
787 push(@{ $self->{savepoints} }, $name);
789 # We'll assume they want to rollback to the last savepoint
790 $name = $self->{savepoints}->[-1];
793 $self->debugobj->svp_rollback($name) if $self->debug;
795 return $self->_svp_rollback($name);
798 sub _svp_generate_name {
801 return 'savepoint_'.scalar(@{ $self->{'savepoints'} });
806 $self->ensure_connected();
807 if($self->{transaction_depth} == 0) {
808 $self->debugobj->txn_begin()
810 # this isn't ->_dbh-> because
811 # we should reconnect on begin_work
812 # for AutoCommit users
813 $self->dbh->begin_work;
814 } elsif ($self->auto_savepoint) {
817 $self->{transaction_depth}++;
822 if ($self->{transaction_depth} == 1) {
823 my $dbh = $self->_dbh;
824 $self->debugobj->txn_commit()
827 $self->{transaction_depth} = 0
828 if $self->_dbh_autocommit;
830 elsif($self->{transaction_depth} > 1) {
831 $self->{transaction_depth}--;
833 if $self->auto_savepoint;
839 my $dbh = $self->_dbh;
841 if ($self->{transaction_depth} == 1) {
842 $self->debugobj->txn_rollback()
844 $self->{transaction_depth} = 0
845 if $self->_dbh_autocommit;
848 elsif($self->{transaction_depth} > 1) {
849 $self->{transaction_depth}--;
850 if ($self->auto_savepoint) {
856 die DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION->new;
861 my $exception_class = "DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION";
862 $error =~ /$exception_class/ and $self->throw_exception($error);
863 # ensure that a failed rollback resets the transaction depth
864 $self->{transaction_depth} = $self->_dbh_autocommit ? 0 : 1;
865 $self->throw_exception($error);
869 # This used to be the top-half of _execute. It was split out to make it
870 # easier to override in NoBindVars without duping the rest. It takes up
871 # all of _execute's args, and emits $sql, @bind.
872 sub _prep_for_execute {
873 my ($self, $op, $extra_bind, $ident, $args) = @_;
875 if( blessed($ident) && $ident->isa("DBIx::Class::ResultSource") ) {
876 $ident = $ident->from();
879 my ($sql, @bind) = $self->sql_maker->$op($ident, @$args);
882 map { ref $_ eq 'ARRAY' ? $_ : [ '!!dummy', $_ ] } @$extra_bind)
884 return ($sql, \@bind);
887 sub _fix_bind_params {
888 my ($self, @bind) = @_;
890 ### Turn @bind from something like this:
891 ### ( [ "artist", 1 ], [ "cdid", 1, 3 ] )
893 ### ( "'1'", "'1'", "'3'" )
896 if ( defined( $_ && $_->[1] ) ) {
897 map { qq{'$_'}; } @{$_}[ 1 .. $#$_ ];
904 my ( $self, $sql, @bind ) = @_;
906 if ( $self->debug ) {
907 @bind = $self->_fix_bind_params(@bind);
909 $self->debugobj->query_start( $sql, @bind );
914 my ( $self, $sql, @bind ) = @_;
916 if ( $self->debug ) {
917 @bind = $self->_fix_bind_params(@bind);
918 $self->debugobj->query_end( $sql, @bind );
923 my ($self, $dbh, $op, $extra_bind, $ident, $bind_attributes, @args) = @_;
925 my ($sql, $bind) = $self->_prep_for_execute($op, $extra_bind, $ident, \@args);
927 $self->_query_start( $sql, @$bind );
929 my $sth = $self->sth($sql,$op);
931 my $placeholder_index = 1;
933 foreach my $bound (@$bind) {
935 my($column_name, @data) = @$bound;
937 if ($bind_attributes) {
938 $attributes = $bind_attributes->{$column_name}
939 if defined $bind_attributes->{$column_name};
942 foreach my $data (@data) {
944 $data = $ref && $ref ne 'ARRAY' ? ''.$data : $data; # stringify args (except arrayrefs)
946 $sth->bind_param($placeholder_index, $data, $attributes);
947 $placeholder_index++;
951 # Can this fail without throwing an exception anyways???
952 my $rv = $sth->execute();
953 $self->throw_exception($sth->errstr) if !$rv;
955 $self->_query_end( $sql, @$bind );
957 return (wantarray ? ($rv, $sth, @$bind) : $rv);
962 $self->dbh_do('_dbh_execute', @_)
966 my ($self, $source, $to_insert) = @_;
968 my $ident = $source->from;
969 my $bind_attributes = $self->source_bind_attributes($source);
971 my $updated_cols = {};
973 $self->ensure_connected;
974 foreach my $col ( $source->columns ) {
975 if ( !defined $to_insert->{$col} ) {
976 my $col_info = $source->column_info($col);
978 if ( $col_info->{auto_nextval} ) {
979 $updated_cols->{$col} = $to_insert->{$col} = $self->_sequence_fetch( 'nextval', $col_info->{sequence} || $self->_dbh_get_autoinc_seq($self->dbh, $source) );
984 $self->_execute('insert' => [], $source, $bind_attributes, $to_insert);
986 return $updated_cols;
989 ## Still not quite perfect, and EXPERIMENTAL
990 ## Currently it is assumed that all values passed will be "normal", i.e. not
991 ## scalar refs, or at least all the same type as the first set; the statement is
992 ## only prepped once.
994 my ($self, $source, $cols, $data) = @_;
996 my $table = $source->from;
997 @colvalues{@$cols} = (0..$#$cols);
998 my ($sql, @bind) = $self->sql_maker->insert($table, \%colvalues);
1000 $self->_query_start( $sql, @bind );
1001 my $sth = $self->sth($sql);
1003 # @bind = map { ref $_ ? ''.$_ : $_ } @bind; # stringify args
1005 ## This must be an arrayref, else nothing works!
1006 my $tuple_status = [];
1008 ## Get the bind_attributes, if any exist
1009 my $bind_attributes = $self->source_bind_attributes($source);
1011 ## Bind the values and execute
1012 my $placeholder_index = 1;
1014 foreach my $bound (@bind) {
1016 my $attributes = {};
1017 my ($column_name, $data_index) = @$bound;
1019 if( $bind_attributes ) {
1020 $attributes = $bind_attributes->{$column_name}
1021 if defined $bind_attributes->{$column_name};
1024 my @data = map { $_->[$data_index] } @$data;
1026 $sth->bind_param_array( $placeholder_index, [@data], $attributes );
1027 $placeholder_index++;
1029 my $rv = $sth->execute_array({ArrayTupleStatus => $tuple_status});
1030 $self->throw_exception($sth->errstr) if !$rv;
1032 $self->_query_end( $sql, @bind );
1033 return (wantarray ? ($rv, $sth, @bind) : $rv);
1037 my $self = shift @_;
1038 my $source = shift @_;
1039 my $bind_attributes = $self->source_bind_attributes($source);
1041 return $self->_execute('update' => [], $source, $bind_attributes, @_);
1046 my $self = shift @_;
1047 my $source = shift @_;
1049 my $bind_attrs = {}; ## If ever it's needed...
1051 return $self->_execute('delete' => [], $source, $bind_attrs, @_);
1054 # We were sent here because the $rs contains a complex search
1055 # which will require a subquery to select the correct rows
1056 # (i.e. joined or limited resultsets)
1058 # Generating a single PK column subquery is trivial and supported
1059 # by all RDBMS. However if we have a multicolumn PK, things get ugly.
1060 # Look at _multipk_update_delete()
1061 sub subq_update_delete {
1063 my ($rs, $op, $values) = @_;
1065 if ($rs->result_source->primary_columns == 1) {
1066 return $self->_onepk_update_delete (@_);
1069 return $self->_multipk_update_delete (@_);
1073 # Generally a single PK resultset operation is trivially expressed
1074 # with PK IN (subquery). However some databases (mysql) do not support
1075 # modification of a table mentioned in the subselect. This method
1076 # should be overridden in the appropriate storage class to be smarter
1077 # in such situations
1078 sub _onepk_update_delete {
1081 my ($rs, $op, $values) = @_;
1083 my $rsrc = $rs->result_source;
1084 my $attrs = $rs->_resolved_attrs;
1085 my @pcols = $rsrc->primary_columns;
1087 $self->throw_exception ('_onepk_update_delete can not be called on resultsets selecting multiple columns')
1088 if (ref $attrs->{select} eq 'ARRAY' and @{$attrs->{select}} > 1);
1092 $op eq 'update' ? $values : (),
1093 { $pcols[0] => { -in => $rs->as_query } },
1097 # ANSI SQL does not provide a reliable way to perform a multicol-PK
1098 # resultset update/delete involving subqueries. So resort to simple
1099 # (and inefficient) delete_all style per-row operations, while allowing
1100 # specific storages to override this with a faster implementation.
1102 # We do not use $row->$op style queries, because resultset update/delete
1103 # is not expected to cascade (this is what delete_all/update_all is for).
1105 # There should be no race conditions as the entire operation is rolled
1107 sub _multipk_update_delete {
1109 my ($rs, $op, $values) = @_;
1111 my $rsrc = $rs->result_source;
1112 my @pcols = $rsrc->primary_columns;
1113 my $attrs = $rs->_resolved_attrs;
1115 $self->throw_exception ('Number of columns selected by supplied resultset does not match number of primary keys')
1116 if ( ref $attrs->{select} ne 'ARRAY' or @{$attrs->{select}} != @pcols );
1118 my $guard = $self->txn_scope_guard;
1120 my $subrs_cur = $rs->cursor;
1121 while (my @pks = $subrs_cur->next) {
1124 for my $i (0.. $#pcols) {
1125 $cond->{$pcols[$i]} = $pks[$i];
1130 $op eq 'update' ? $values : (),
1143 my $sql_maker = $self->sql_maker;
1144 local $sql_maker->{for};
1145 return $self->_execute($self->_select_args(@_));
1149 my ($self, $ident, $select, $condition, $attrs) = @_;
1150 my $order = $attrs->{order_by};
1152 my $for = delete $attrs->{for};
1153 my $sql_maker = $self->sql_maker;
1154 $sql_maker->{for} = $for;
1156 if (exists $attrs->{group_by} || $attrs->{having}) {
1158 group_by => $attrs->{group_by},
1159 having => $attrs->{having},
1160 ($order ? (order_by => $order) : ())
1163 my $bind_attrs = {}; ## Future support
1164 my @args = ('select', $attrs->{bind}, $ident, $bind_attrs, $select, $condition, $order);
1165 if ($attrs->{software_limit} ||
1166 $self->sql_maker->_default_limit_syntax eq "GenericSubQ") {
1167 $attrs->{software_limit} = 1;
1169 $self->throw_exception("rows attribute must be positive if present")
1170 if (defined($attrs->{rows}) && !($attrs->{rows} > 0));
1172 # MySQL actually recommends this approach. I cringe.
1173 $attrs->{rows} = 2**48 if not defined $attrs->{rows} and defined $attrs->{offset};
1174 push @args, $attrs->{rows}, $attrs->{offset};
1179 sub source_bind_attributes {
1180 my ($self, $source) = @_;
1182 my $bind_attributes;
1183 foreach my $column ($source->columns) {
1185 my $data_type = $source->column_info($column)->{data_type} || '';
1186 $bind_attributes->{$column} = $self->bind_attribute_by_data_type($data_type)
1190 return $bind_attributes;
1197 =item Arguments: $ident, $select, $condition, $attrs
1201 Handle a SQL select statement.
1207 my ($ident, $select, $condition, $attrs) = @_;
1208 return $self->cursor_class->new($self, \@_, $attrs);
1213 my ($rv, $sth, @bind) = $self->_select(@_);
1214 my @row = $sth->fetchrow_array;
1215 my @nextrow = $sth->fetchrow_array if @row;
1216 if(@row && @nextrow) {
1217 carp "Query returned more than one row. SQL that returns multiple rows is DEPRECATED for ->find and ->single";
1219 # Need to call finish() to work round broken DBDs
1228 =item Arguments: $sql
1232 Returns a L<DBI> sth (statement handle) for the supplied SQL.
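For example (the SQL and table below are illustrative placeholders):

  my $sth = $schema->storage->sth('SELECT name FROM artist WHERE artistid = ?');
  $sth->execute(1);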
1237 my ($self, $dbh, $sql) = @_;
1239 # 3 is the if_active parameter which avoids active sth re-use
1240 my $sth = $self->disable_sth_caching
1241 ? $dbh->prepare($sql)
1242 : $dbh->prepare_cached($sql, {}, 3);
1244 # XXX You would think RaiseError would make this impossible,
1245 # but apparently that's not true :(
1246 $self->throw_exception($dbh->errstr) if !$sth;
1252 my ($self, $sql) = @_;
1253 $self->dbh_do('_dbh_sth', $sql);
1256 sub _dbh_columns_info_for {
1257 my ($self, $dbh, $table) = @_;
1259 if ($dbh->can('column_info')) {
1262 my ($schema,$tab) = $table =~ /^(.+?)\.(.+)$/ ? ($1,$2) : (undef,$table);
1263 my $sth = $dbh->column_info( undef,$schema, $tab, '%' );
1265 while ( my $info = $sth->fetchrow_hashref() ){
1267 $column_info{data_type} = $info->{TYPE_NAME};
1268 $column_info{size} = $info->{COLUMN_SIZE};
1269 $column_info{is_nullable} = $info->{NULLABLE} ? 1 : 0;
1270 $column_info{default_value} = $info->{COLUMN_DEF};
1271 my $col_name = $info->{COLUMN_NAME};
1272 $col_name =~ s/^\"(.*)\"$/$1/;
1274 $result{$col_name} = \%column_info;
1277 return \%result if !$@ && scalar keys %result;
1281 my $sth = $dbh->prepare($self->sql_maker->select($table, undef, \'1 = 0'));
1283 my @columns = @{$sth->{NAME_lc}};
1284 for my $i ( 0 .. $#columns ){
1286 $column_info{data_type} = $sth->{TYPE}->[$i];
1287 $column_info{size} = $sth->{PRECISION}->[$i];
1288 $column_info{is_nullable} = $sth->{NULLABLE}->[$i] ? 1 : 0;
1290 if ($column_info{data_type} =~ m/^(.*?)\((.*?)\)$/) {
1291 $column_info{data_type} = $1;
1292 $column_info{size} = $2;
1295 $result{$columns[$i]} = \%column_info;
1299 foreach my $col (keys %result) {
1300 my $colinfo = $result{$col};
1301 my $type_num = $colinfo->{data_type};
1303 if(defined $type_num && $dbh->can('type_info')) {
1304 my $type_info = $dbh->type_info($type_num);
1305 $type_name = $type_info->{TYPE_NAME} if $type_info;
1306 $colinfo->{data_type} = $type_name if $type_name;
1313 sub columns_info_for {
1314 my ($self, $table) = @_;
1315 $self->dbh_do('_dbh_columns_info_for', $table);
1318 =head2 last_insert_id
1320 Return the row id of the last insert.
1324 sub _dbh_last_insert_id {
1325 # All Storages need to register their own _dbh_last_insert_id
1326 # the old SQLite-based method was highly inappropriate
1329 my $class = ref $self;
1330 $self->throw_exception (<<EOE);
1332 No _dbh_last_insert_id() method found in $class.
1333 Since the method of obtaining the autoincrement id of the last insert
1334 operation varies greatly between different databases, this method must be
1335 individually implemented for every storage class.
1339 sub last_insert_id {
1341 $self->dbh_do('_dbh_last_insert_id', @_);
1346 Returns the database driver name.
1350 sub sqlt_type { shift->dbh->{Driver}->{Name} }
1352 =head2 bind_attribute_by_data_type
1354 Given a datatype from column info, returns a database specific bind
1355 attribute for C<< $dbh->bind_param($val,$attribute) >> or nothing if we will
1356 let the database planner just handle it.
1358 Generally only needed for special case column types, like bytea in postgres.
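A hedged sketch of how a storage subclass might override this, loosely
modelled on what a PostgreSQL storage needs for C<bytea> columns. It assumes
L<DBD::Pg> is installed and is not the exact implementation shipped with
DBIx::Class:

  package My::Storage::Pg;
  use base 'DBIx::Class::Storage::DBI';

  sub bind_attribute_by_data_type {
    my ($self, $data_type) = @_;

    if ($data_type =~ /^bytea$/i) {
      require DBD::Pg;
      return { pg_type => DBD::Pg::PG_BYTEA() };
    }

    return;   # let the database handle everything else
  }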
1362 sub bind_attribute_by_data_type {
1366 =head2 create_ddl_dir
1370 =item Arguments: $schema, \@databases, $version, $directory, $preversion, \%sqlt_args
1374 Creates a SQL file based on the Schema, for each of the specified
1375 database types, in the given directory.
1377 By default, C<\%sqlt_args> will have
1379 { add_drop_table => 1, ignore_constraint_names => 1, ignore_index_names => 1 }
1381 merged with the hash passed in. To disable any of those features, pass in a
1382 hashref like the following
1384 { ignore_constraint_names => 0, # ... other options }
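A usage sketch at the storage level, following the argument list above (the
versions and output directory are placeholders); this is usually reached via
L<DBIx::Class::Schema/create_ddl_dir>, which supplies the schema for you:

  $schema->storage->create_ddl_dir(
    $schema,
    [qw/SQLite PostgreSQL/],
    '2.0',                      # version to generate DDL for
    './sql/',                   # output directory
    '1.0',                      # also produce a diff from this previous version
    { add_drop_table => 0 },
  );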
1388 sub create_ddl_dir {
1389 my ($self, $schema, $databases, $version, $dir, $preversion, $sqltargs) = @_;
1391 if(!$dir || !-d $dir) {
1392 carp "No directory given, using ./\n";
1395 $databases ||= ['MySQL', 'SQLite', 'PostgreSQL'];
1396 $databases = [ $databases ] if(ref($databases) ne 'ARRAY');
1398 my $schema_version = $schema->schema_version || '1.x';
1399 $version ||= $schema_version;
1402 add_drop_table => 1,
1403 ignore_constraint_names => 1,
1404 ignore_index_names => 1,
1408 $self->throw_exception(q{Can't create a ddl file without SQL::Translator 0.09003: '}
1409 . $self->_check_sqlt_message . q{'})
1410 if !$self->_check_sqlt_version;
1412 my $sqlt = SQL::Translator->new( $sqltargs );
1414 $sqlt->parser('SQL::Translator::Parser::DBIx::Class');
1415 my $sqlt_schema = $sqlt->translate({ data => $schema })
1416 or $self->throw_exception ($sqlt->error);
1418 foreach my $db (@$databases) {
1420 $sqlt->{schema} = $sqlt_schema;
1421 $sqlt->producer($db);
1424 my $filename = $schema->ddl_filename($db, $version, $dir);
1425 if (-e $filename && ($version eq $schema_version )) {
1426 # if we are dumping the current version, overwrite the DDL
1427 carp "Overwriting existing DDL file - $filename";
1431 my $output = $sqlt->translate;
1433 carp("Failed to translate to $db, skipping. (" . $sqlt->error . ")");
1436 if(!open($file, ">$filename")) {
1437 $self->throw_exception("Can't open $filename for writing ($!)");
1440 print $file $output;
1443 next unless ($preversion);
1445 require SQL::Translator::Diff;
1447 my $prefilename = $schema->ddl_filename($db, $preversion, $dir);
1448 if(!-e $prefilename) {
1449 carp("No previous schema file found ($prefilename)");
1453 my $difffile = $schema->ddl_filename($db, $version, $dir, $preversion);
1455 carp("Overwriting existing diff file - $difffile");
1461 my $t = SQL::Translator->new($sqltargs);
1466 or $self->throw_exception ($t->error);
1468 my $out = $t->translate( $prefilename )
1469 or $self->throw_exception ($t->error);
1471 $source_schema = $t->schema;
1473 $source_schema->name( $prefilename )
1474 unless ( $source_schema->name );
1477 # The "new" style of producers have sane normalization and can support
1478 # diffing a SQL file against a DBIC->SQLT schema. Old style ones don't
1479 # And we have to diff parsed SQL against parsed SQL.
1480 my $dest_schema = $sqlt_schema;
1482 unless ( "SQL::Translator::Producer::$db"->can('preprocess_schema') ) {
1483 my $t = SQL::Translator->new($sqltargs);
1488 or $self->throw_exception ($t->error);
1490 my $out = $t->translate( $filename )
1491 or $self->throw_exception ($t->error);
1493 $dest_schema = $t->schema;
1495 $dest_schema->name( $filename )
1496 unless $dest_schema->name;
1499 my $diff = SQL::Translator::Diff::schema_diff($source_schema, $db,
1503 if(!open $file, ">$difffile") {
1504 $self->throw_exception("Can't write to $difffile ($!)");
1512 =head2 deployment_statements
1516 =item Arguments: $schema, $type, $version, $directory, $sqlt_args
1520 Returns the statements used by L</deploy> and L<DBIx::Class::Schema/deploy>.
1521 The database driver name is given by C<$type>, though the value from
1522 L</sqlt_type> is used if it is not specified.
1524 C<$directory> is used to return statements from files in a previously created
1525 L</create_ddl_dir> directory and is optional. The filenames are constructed
1526 from L<DBIx::Class::Schema/ddl_filename>, the schema name and the C<$version>.
1528 If no C<$directory> is specified then the statements are constructed on the
1529 fly using L<SQL::Translator> and C<$version> is ignored.
1531 See L<SQL::Translator/METHODS> for a list of values for C<$sqlt_args>.
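A usage sketch (the type, version and directory are placeholders; C<$dir> may
be omitted to generate the statements on the fly):

  my $ddl = $schema->storage->deployment_statements(
    $schema, 'SQLite', '1.0', './sql/',
    { no_comments => 1 },
  );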
1535 sub deployment_statements {
1536 my ($self, $schema, $type, $version, $dir, $sqltargs) = @_;
1537 # Need to be connected to get the correct sqlt_type
1538 $self->ensure_connected() unless $type;
1539 $type ||= $self->sqlt_type;
1540 $version ||= $schema->schema_version || '1.x';
1542 my $filename = $schema->ddl_filename($type, $version, $dir);
1546 open($file, "<$filename")
1547 or $self->throw_exception("Can't open $filename ($!)");
1550 return join('', @rows);
1553 $self->throw_exception(q{Can't deploy without SQL::Translator 0.09003: '}
1554 . $self->_check_sqlt_message . q{'})
1555 if !$self->_check_sqlt_version;
1557 require SQL::Translator::Parser::DBIx::Class;
1558 eval qq{use SQL::Translator::Producer::${type}};
1559 $self->throw_exception($@) if $@;
1561 # sources needs to be a parser arg, but for simplicity allow at top level
1563 $sqltargs->{parser_args}{sources} = delete $sqltargs->{sources}
1564 if exists $sqltargs->{sources};
1566 my $tr = SQL::Translator->new(%$sqltargs);
1567 SQL::Translator::Parser::DBIx::Class::parse( $tr, $schema );
1568 return "SQL::Translator::Producer::${type}"->can('produce')->($tr);
1572 my ($self, $schema, $type, $sqltargs, $dir) = @_;
1575 return if($line =~ /^--/);
1577 # next if($line =~ /^DROP/m);
1578 return if($line =~ /^BEGIN TRANSACTION/m);
1579 return if($line =~ /^COMMIT/m);
1580 return if $line =~ /^\s+$/; # skip whitespace only
1581 $self->_query_start($line);
1583 $self->dbh->do($line); # shouldn't be using ->dbh ?
1586 carp qq{$@ (running "${line}")};
1588 $self->_query_end($line);
1590 my @statements = $self->deployment_statements($schema, $type, undef, $dir, { no_comments => 1, %{ $sqltargs || {} } } );
1591 if (@statements > 1) {
1592 foreach my $statement (@statements) {
1593 $deploy->( $statement );
1596 elsif (@statements == 1) {
1597 foreach my $line ( split(";\n", $statements[0])) {
1603 =head2 datetime_parser
1605 Returns the datetime parser class
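For example, to format a L<DateTime> object for use in a search condition (a
sketch assuming the parser class provides C<format_datetime>, as the
L<DateTime::Format> family does, and reusing the C<Book> resultset from the
SYNOPSIS):

  my $dtf = $schema->storage->datetime_parser;

  my $rs = $schema->resultset('Book')->search({
    written_on => { '<=' => $dtf->format_datetime(DateTime->now) },
  });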
1609 sub datetime_parser {
1611 return $self->{datetime_parser} ||= do {
1612 $self->ensure_connected;
1613 $self->build_datetime_parser(@_);
1617 =head2 datetime_parser_type
1619 Defines (returns) the datetime parser class - currently hardwired to
1620 L<DateTime::Format::MySQL>
1624 sub datetime_parser_type { "DateTime::Format::MySQL"; }
1626 =head2 build_datetime_parser
1628 See L</datetime_parser>
1632 sub build_datetime_parser {
1634 my $type = $self->datetime_parser_type(@_);
1636 $self->throw_exception("Couldn't load ${type}: $@") if $@;
1641 my $_check_sqlt_version; # private
1642 my $_check_sqlt_message; # private
1643 sub _check_sqlt_version {
1644 return $_check_sqlt_version if defined $_check_sqlt_version;
1645 eval 'use SQL::Translator "0.09003"';
1646 $_check_sqlt_message = $@ || '';
1647 $_check_sqlt_version = !$@;
1650 sub _check_sqlt_message {
1651 _check_sqlt_version if !defined $_check_sqlt_message;
1652 $_check_sqlt_message;
1656 =head2 is_replicating
1658 A boolean that reports if a particular L<DBIx::Class::Storage::DBI> is set to
1659 replicate from a master database. Default is undef, which is the result
1660 returned by databases that don't support replication.
1664 sub is_replicating {
1669 =head2 lag_behind_master
1671 Returns a number that represents a certain amount of lag behind a master db
1672 when a given storage is replicating. The number is database dependent, but
1673 starts at zero and increases with the amount of lag. Default is undef
1677 sub lag_behind_master {
1683 return if !$self->_dbh;
1692 =head2 DBIx::Class and AutoCommit
1694 DBIx::Class can do some wonderful magic with handling exceptions,
1695 disconnections, and transactions when you use C<< AutoCommit => 1 >>
1696 combined with C<txn_do> for transaction support.
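A sketch of the recommended pattern; the resultset and columns are
illustrative, and the connection details are placeholders:

  my $schema = MySchema->connect($dsn, $user, $password, { AutoCommit => 1 });

  $schema->txn_do(sub {
    $schema->resultset('Book')->create({ title => 'Vol. 1' });
    $schema->resultset('Book')->create({ title => 'Vol. 2' });
  });  # commits on success, rolls back and rethrows on error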
1698 If you set C<< AutoCommit => 0 >> in your connect info, then you are always
1699 in an assumed transaction between commits, and you're telling us you'd
1700 like to manage that manually. A lot of the magic protections offered by
1701 this module will go away. We can't protect you from exceptions due to database
1702 disconnects because we don't know anything about how to restart your
1703 transactions. You're on your own for handling all sorts of exceptional
1704 cases if you choose the C<< AutoCommit => 0 >> path, just as you would be with raw DBI.
1711 Matt S. Trout <mst@shadowcatsystems.co.uk>
1713 Andy Grundman <andy@hybridized.org>
1717 You may distribute this code under the same terms as Perl itself.