From: Peter Rabbitson
Date: Tue, 6 Apr 2010 09:32:52 +0000 (+0000)
Subject: Merge 'trunk' into 'oracle_hierarchical_queries_rt39121'
X-Git-Tag: v0.08122~34^2~24
X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=commitdiff_plain;h=3895349e454e6b3fd36188b27e565acfa7da601e;hp=24e4bacb969c8b6f31708363d776ca27f8cf7b4a;p=dbsrgits%2FDBIx-Class.git

Merge 'trunk' into 'oracle_hierarchical_queries_rt39121'

r8965@Thesaurus (orig r8952):  hobbs | 2010-03-09 20:29:50 +0100
 Support add_columns('+colname'=>{...}) syntax to augment column definitions.
r8966@Thesaurus (orig r8953):  rabbit | 2010-03-10 09:34:38 +0100
 docpatch close RT52681
r8974@Thesaurus (orig r8961):  rabbit | 2010-03-11 08:08:57 +0100
 Where is my spellchecker (not that it would catch this)
r9005@Thesaurus (orig r8992):  caelum | 2010-03-13 00:47:40 +0100
 update Firebird docs
r9006@Thesaurus (orig r8993):  mo | 2010-03-13 10:03:24 +0100
 test the dynamic subclassing example
r9008@Thesaurus (orig r8995):  mo | 2010-03-13 13:09:59 +0100
 call inflate_result on new_result, but not from the CDBI compat layer
r9009@Thesaurus (orig r8996):  mo | 2010-03-13 13:37:40 +0100
 reverting 8995, was supposed to go to a branch
r9010@Thesaurus (orig r8997):  nigel | 2010-03-14 18:09:26 +0100
 Corrected a link to connect_info in Manual::Intro
r9018@Thesaurus (orig r9005):  rabbit | 2010-03-15 14:55:17 +0100
 Proper fix for RETURNING with default insert
r9026@Thesaurus (orig r9013):  nigel | 2010-03-15 18:36:44 +0100
 Documentation on Unicode use with DBIC
r9027@Thesaurus (orig r9014):  rabbit | 2010-03-16 02:55:27 +0100
 Horrible horrible rewrite of the aliastype scanner, but folks are starting
 to complain that their unqualified columns are making joins go away (this
 was the initial idea). Hopefully this code will silently die some day.
 /me can haz shame
r9028@Thesaurus (orig r9015):  rabbit | 2010-03-16 16:49:45 +0100
 Regenerate test DDL
r9029@Thesaurus (orig r9016):  caelum | 2010-03-16 22:01:21 +0100
 _ping for MSSQL
r9030@Thesaurus (orig r9017):  caelum | 2010-03-17 11:49:51 +0100
 add connect_call_use_foreign_keys for SQLite
r9031@Thesaurus (orig r9018):  abraxxa | 2010-03-17 16:36:13 +0100
 fixed Alexander Hartmaier's mail address
r9039@Thesaurus (orig r9026):  frew | 2010-03-18 15:59:55 +0100
 use update instead of set_columns in update_all
r9040@Thesaurus (orig r9027):  frew | 2010-03-18 20:53:28 +0100
 Ch Ch Ch Ch Changes!
r9041@Thesaurus (orig r9028):  caelum | 2010-03-19 16:03:41 +0100
 POD fixups
r9042@Thesaurus (orig r9029):  rabbit | 2010-03-19 18:39:02 +0100
 Fix UTF8Column out of order loading warning
r9043@Thesaurus (orig r9030):  rabbit | 2010-03-20 09:00:00 +0100
 Something is wrong with HRI inflation - too slow
r9044@Thesaurus (orig r9031):  rabbit | 2010-03-20 09:26:12 +0100
 Extend benchmark
r9045@Thesaurus (orig r9032):  rabbit | 2010-03-20 09:41:30 +0100
 MOAR bench
r9048@Thesaurus (orig r9035):  caelum | 2010-03-22 16:10:38 +0100
 redo Pg auto-columns using INSERT RETURNING
r9049@Thesaurus (orig r9036):  caelum | 2010-03-22 16:45:55 +0100
 move INSERT ... RETURNING code into ::DBI::InsertReturning component for Pg
 and Firebird
r9050@Thesaurus (orig r9037):  rabbit | 2010-03-22 18:03:13 +0100
 Even cleaner way of handling returning (no column interrogation in storage)
r9051@Thesaurus (orig r9038):  caelum | 2010-03-22 23:43:19 +0100
 update proxied methods for DBI::Replicated
r9052@Thesaurus (orig r9039):  caelum | 2010-03-23 06:56:12 +0100
 fix sort
r9056@Thesaurus (orig r9043):  rabbit | 2010-03-24 11:27:37 +0100
 A better illustration how to add relationships at runtime
r9057@Thesaurus (orig r9044):  rabbit | 2010-03-24 11:33:04 +0100
 Clearer 'no such rel' errors, correct exception on pkless prefetch
r9058@Thesaurus (orig r9045):  rabbit | 2010-03-24 11:44:50 +0100
 One missed step
r9059@Thesaurus (orig r9046):  ribasushi | 2010-03-24 12:11:12 +0100
 Straight_join support RT55579
r9060@Thesaurus (orig r9047):  rabbit | 2010-03-24 12:43:02 +0100
 bump SQLA dep
r9061@Thesaurus (orig r9048):  ribasushi | 2010-03-24 14:10:33 +0100
 Really fix INSERT RETURNING - simply make it a flag on the storage and keep
 the machinery in core
r9062@Thesaurus (orig r9049):  rabbit | 2010-03-24 14:30:17 +0100
 Cosmetics + changes
r9063@Thesaurus (orig r9050):  caelum | 2010-03-24 20:44:15 +0100
 Pg version check for can_insert_returning
r9064@Thesaurus (orig r9051):  caelum | 2010-03-24 21:25:24 +0100
 collect _server_info on connection
r9065@Thesaurus (orig r9052):  caelum | 2010-03-24 21:49:38 +0100
 s/_get_server_info/_populate_server_info/
r9066@Thesaurus (orig r9053):  caelum | 2010-03-25 01:24:09 +0100
 remove _get_mssql_version
r9067@Thesaurus (orig r9054):  caelum | 2010-03-25 06:32:51 +0100
 minor fix for SQLite version check
r9068@Thesaurus (orig r9055):  caelum | 2010-03-25 07:37:36 +0100
 add storage->_server_info->{dbms_ver_normalized}
r9069@Thesaurus (orig r9056):  caelum | 2010-03-26 09:55:46 +0100
 a couple minor Informix fixes
r9070@Thesaurus (orig r9057):  caelum | 2010-03-26 10:55:55 +0100
 savepoints for Informix
r9071@Thesaurus (orig r9058):  caelum
| 2010-03-26 12:23:26 +0100 InflateColumn::DateTime support for Informix r9072@Thesaurus (orig r9059): caelum | 2010-03-26 15:08:16 +0100 with_deferred_fk_checks for Informix r9073@Thesaurus (orig r9060): caelum | 2010-03-26 15:28:24 +0100 minor cleanups r9074@Thesaurus (orig r9061): castaway | 2010-03-26 21:16:44 +0100 Added clarification of quoting to cookbook pod for sql funcs, from metaperl r9075@Thesaurus (orig r9062): caelum | 2010-03-27 00:12:37 +0100 missing local r9076@Thesaurus (orig r9063): caelum | 2010-03-27 00:19:56 +0100 move warning suppression into ::DBI::InterBase r9077@Thesaurus (orig r9064): caelum | 2010-03-27 00:30:02 +0100 a bit cleaner warning suppression for DBD::InterBase only r9083@Thesaurus (orig r9070): rabbit | 2010-03-29 10:12:44 +0200 pod error r9092@Thesaurus (orig r9079): boghead | 2010-04-02 22:44:32 +0200 - Minor language cleanup in some of the Cookbook documentation (thanks metaperl and jester) - Fix the synopsis for DBIC::Storage::DBI. ->datetime_parser returns a class, so you need to call a method on it in order to transform a DateTime object r9096@Thesaurus (orig r9083): ribasushi | 2010-04-05 21:53:13 +0200 Minor test cleanups r9097@Thesaurus (orig r9084): caelum | 2010-04-05 22:08:48 +0200 fix test count r9098@Thesaurus (orig r9085): ribasushi | 2010-04-06 05:36:04 +0200 Fix embarassing join optimizer bug --- diff --git a/Changes b/Changes index 08728f8..d8659c5 100644 --- a/Changes +++ b/Changes @@ -1,9 +1,12 @@ Revision history for DBIx::Class - Support for Firebird RDBMS with DBD::InterBase and ODBC + - Add core support for INSERT RETURNING (for storages that + supports this syntax, currently PostgreSQL and Firebird) - DBIx::Class::InflateColumn::File entered deprecated state - DBIx::Class::Optional::Dependencies left experimental state - Add req_group_list to Opt::Deps (RT#55211) + - Add support for mysql-specific STRAIGHT_JOIN (RT#55579) - Cascading delete/update are now wrapped in a transaction for atomicity - Fix 
multiple deficiencies when using MultiCreate with @@ -12,13 +15,23 @@ Revision history for DBIx::Class handled properly by ::Schema::Versioned. - Fix regression on not properly throwing when $obj->relationship is unresolvable + - Fix the join-optimiser to consider unqualified column names + whenever possible + - Fix an issue with multiple same-table joins confusing the join + optimizier - Add has_relationship method to row objects - Fix regression in set_column on PK-less objects + - Better error text on malformed/missing relationships - Add POD about the significance of PK columns - Fix for SQLite to ignore the (unsupported) { for => ... } attribute - Fix ambiguity in default directory handling of create_ddl_dir (RT#54063) + - Fix update_all and delete_all to be wrapped in a transaction + - Support add_columns('+colname' => { ... }) to augment column + definitions. + - Fix spurious warnings on multiple UTF8Columns component loads + - Unicode support documentation in Cookbook and UTF8Columns 0.08120 2010-02-24 08:58:00 (UTC) - Make sure possibly overwritten deployment_statements methods in diff --git a/Makefile.PL b/Makefile.PL index 2889e84..8f05bbb 100644 --- a/Makefile.PL +++ b/Makefile.PL @@ -45,7 +45,7 @@ my $runtime_requires = { 'MRO::Compat' => '0.09', 'Module::Find' => '0.06', 'Path::Class' => '0.18', - 'SQL::Abstract' => '1.62', + 'SQL::Abstract' => '1.63', 'SQL::Abstract::Limit' => '0.13', 'Sub::Name' => '0.04', 'Data::Dumper::Concise' => '1.000', diff --git a/lib/DBIx/Class.pm b/lib/DBIx/Class.pm index 56f94dc..99ad205 100644 --- a/lib/DBIx/Class.pm +++ b/lib/DBIx/Class.pm @@ -218,7 +218,7 @@ is traditional :) =head1 CONTRIBUTORS -abraxxa: Alexander Hartmaier +abraxxa: Alexander Hartmaier aherzog: Adam Herzog @@ -276,6 +276,8 @@ gphat: Cory G Watson groditi: Guillermo Roditi +hobbs: Andrew Rodland + ilmari: Dagfinn Ilmari MannsEker jasonmay: Jason May diff --git a/lib/DBIx/Class/Componentised.pm b/lib/DBIx/Class/Componentised.pm index 5a59238..fb0bd28 
100644 --- a/lib/DBIx/Class/Componentised.pm +++ b/lib/DBIx/Class/Componentised.pm @@ -14,18 +14,23 @@ sub inject_base { my $target = shift; my @present_components = (@{mro::get_linear_isa ($target)||[]}); + shift @present_components; # don't need to interrogate myself no strict 'refs'; for my $comp (reverse @_) { - if ($comp->isa ('DBIx::Class::UTF8Columns') ) { + # if we are trying add a UTF8Columns component *for the first time* + if ($comp->isa ('DBIx::Class::UTF8Columns') && ! $target->isa ('DBIx::Class::UTF8Columns') ) { require B; my @broken; for (@present_components) { + last if $_ eq 'DBIx::Class::Row'; # don't care about anything further down the chain + my $cref = $_->can ('store_column') or next; - push @broken, $_ if B::svref_2object($cref)->STASH->NAME ne 'DBIx::Class::Row'; + + push @broken, $_ if B::svref_2object($cref)->STASH->NAME eq $_; } carp "Incorrect loading order of $comp by ${target} will affect other components overriding store_column (" diff --git a/lib/DBIx/Class/Manual/Cookbook.pod b/lib/DBIx/Class/Manual/Cookbook.pod index 16fa647..b4d52da 100644 --- a/lib/DBIx/Class/Manual/Cookbook.pod +++ b/lib/DBIx/Class/Manual/Cookbook.pod @@ -367,8 +367,8 @@ That creates the following SQL: =head2 Predefined searches -You can write your own L class by inheriting from it -and defining often used searches as methods: +You can define frequently used searches as methods by subclassing +L: package My::DBIC::ResultSet::CD; use strict; @@ -415,6 +415,12 @@ supports indexes on expressions - including return values of functions - and you create an index on the return value of the function in question.) However, it can be accomplished with C when necessary. +Your approach for doing so will depend on whether you have turned +quoting on via the C and C attributes. If you +explicitly defined C and C in your +C (see L) then +you are using quoting, otherwise not. 
+ If you do not have quoting on, simply include the function in your search specification as you would any column: @@ -1217,6 +1223,8 @@ callback routine. =head1 TRANSACTIONS +=head2 Transactions with txn_do + As of version 0.04001, there is improved transaction support in L and L. Here is an example of the recommended way to use it: @@ -1248,11 +1256,16 @@ example of the recommended way to use it: deal_with_failed_transaction(); } +Note: by default C will re-run the coderef one more time if an +error occurs due to client disconnection (e.g. the server is bounced). +You need to make sure that your coderef can be invoked multiple times +without terrible side effects. + Nested transactions will work as expected. That is, only the outermost transaction will actually issue a commit to the $dbh, and a rollback at any level of any transaction will cause the entire nested transaction to fail. - + =head2 Nested transactions and auto-savepoints If savepoints are supported by your RDBMS, it is possible to achieve true @@ -1344,9 +1357,19 @@ commits are happening, but it works just the same as for plain L<>: If the C-block around C fails, a rollback is issued. If the C succeeds, the transaction is committed (or the savepoint released). -While you can get more fine-grained controll using C, C +While you can get more fine-grained control using C, C and C, it is strongly recommended to use C with coderefs. +=head2 Simple Transactions with DBIx::Class::Storage::TxnScopeGuard + +An easy way to use transactions is with +L. See L for an example. + +Note that unlike txn_do, TxnScopeGuard will only make sure the connection is +alive when issuing the C statement. It will not (and really can not) +retry if the server goes away mid-operations, unlike C. 
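[Editor's note] The Cookbook hunks above document the C<txn_do> retry caveat and introduce C<DBIx::Class::Storage::TxnScopeGuard>: commit on an explicit call, roll back if the guard leaves scope uncommitted. That is the classic scope-guard pattern; the sketch below is a hypothetical pure-Perl C<MiniGuard> class illustrating the semantics, not the actual DBIC implementation.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical illustration of the scope-guard pattern behind
# DBIx::Class::Storage::TxnScopeGuard: commit() marks the guard as
# resolved; if the guard is destroyed unresolved (e.g. the scope was
# left via an exception), the rollback callback fires instead.
package MiniGuard;

sub new {
    my ($class, %cb) = @_;
    return bless { %cb, committed => 0 }, $class;
}

sub commit {
    my $self = shift;
    $self->{on_commit}->();
    $self->{committed} = 1;
}

sub DESTROY {
    my $self = shift;
    $self->{on_rollback}->() unless $self->{committed};
}

package main;

my @log;
my %cb = (
    on_commit   => sub { push @log, 'commit' },
    on_rollback => sub { push @log, 'rollback' },
);

{   # happy path: commit before the guard leaves scope
    my $guard = MiniGuard->new(%cb);
    $guard->commit;
}

{   # failure path: scope exits without commit, so the guard rolls back
    my $guard = MiniGuard->new(%cb);
}

print "@log\n";    # commit rollback
```

Unlike C<txn_do>, nothing here retries on a lost connection: the guard only guarantees that an abandoned scope triggers the rollback path, which matches the caveat stated in the POD above.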
+ =head1 SQL =head2 Creating Schemas From An Existing Database @@ -1724,6 +1747,75 @@ the bind values (the C<[1, 2, 3]> arrayref in the above example) wrapped in arrayrefs together with the column name, like this: C<< [column_name => value] >>. +=head2 Using Unicode + +When using unicode character data there are two alternatives - +either your database supports unicode characters (including setting +the utf8 flag on the returned string), or you need to encode/decode +data appropriately each time a string field is inserted into or +retrieved from the database. It is better to avoid +encoding/decoding data and to use your database's own unicode +capabilities if at all possible. + +The L component handles storing selected +unicode columns in a database that does not directly support +unicode. If used with a database that does correctly handle unicode +then strange and unexpected data corrupt B occur. + +The Catalyst Wiki Unicode page at +L +has additional information on the use of Unicode with Catalyst and +DBIx::Class. + +The following databases do correctly handle unicode data:- + +=head3 MySQL + +MySQL supports unicode, and will correctly flag utf8 data from the +database if the C is set in the connect options. + + my $schema = My::Schema->connection('dbi:mysql:dbname=test', + $user, $pass, + { mysql_enable_utf8 => 1} ); + + +When set, a data retrieved from a textual column type (char, +varchar, etc) will have the UTF-8 flag turned on if necessary. This +enables character semantics on that string. You will also need to +ensure that your database / table / column is configured to use +UTF8. See Chapter 10 of the mysql manual for details. + +See L for further details. + +=head3 Oracle + +Information about Oracle support for unicode can be found in +L. + +=head3 PostgreSQL + +PostgreSQL supports unicode if the character set is correctly set +at database creation time. Additionally the C +should be set to ensure unicode data is correctly marked. 
+ + my $schema = My::Schema->connection('dbi:Pg:dbname=test', + $user, $pass, + { pg_enable_utf8 => 1} ); + +Further information can be found in L. + +=head3 SQLite + +SQLite version 3 and above natively use unicode internally. To +correctly mark unicode strings taken from the database, the +C flag should be set at connect time (in versions +of L prior to 1.27 this attribute was named +C). + + my $schema = My::Schema->connection('dbi:SQLite:/tmp/test.db', + '', '', + { sqlite_unicode => 1} ); + =head1 BOOTSTRAPPING/MIGRATING =head2 Easy migration from class-based to schema-based setup diff --git a/lib/DBIx/Class/Manual/FAQ.pod b/lib/DBIx/Class/Manual/FAQ.pod index 464040d..9281a99 100644 --- a/lib/DBIx/Class/Manual/FAQ.pod +++ b/lib/DBIx/Class/Manual/FAQ.pod @@ -56,6 +56,12 @@ Create your classes manually, as above. Write a script that calls L. See there for details, or the L. +=item .. store/retrieve Unicode data in my database? + +Make sure you database supports Unicode and set the connect +attributes appropriately - see +L + =item .. connect to my database? Once you have created all the appropriate table/source classes, and an diff --git a/lib/DBIx/Class/Manual/Intro.pod b/lib/DBIx/Class/Manual/Intro.pod index 18347d8..cd8b091 100644 --- a/lib/DBIx/Class/Manual/Intro.pod +++ b/lib/DBIx/Class/Manual/Intro.pod @@ -240,7 +240,7 @@ a special fifth argument to connect: { on_connect_do => \@on_connect_sql_statments } ); -See L for more information about +See L for more information about this and other special C-time options. 
=head3 Via a database handle diff --git a/lib/DBIx/Class/ResultSet.pm b/lib/DBIx/Class/ResultSet.pm index 4247459..46f2d29 100644 --- a/lib/DBIx/Class/ResultSet.pm +++ b/lib/DBIx/Class/ResultSet.pm @@ -1007,7 +1007,7 @@ sub _collapse_result { # without having to contruct the full hash if (keys %collapse) { - my %pri = map { ($_ => 1) } $self->result_source->primary_columns; + my %pri = map { ($_ => 1) } $self->result_source->_pri_cols; foreach my $i (0 .. $#construct_as) { next if defined($construct_as[$i][0]); # only self table if (delete $pri{$construct_as[$i][1]}) { @@ -1513,9 +1513,10 @@ sub update_all { my ($self, $values) = @_; $self->throw_exception('Values for update_all must be a hash') unless ref $values eq 'HASH'; - foreach my $obj ($self->all) { - $obj->set_columns($values)->update; - } + + my $guard = $self->result_source->schema->txn_scope_guard; + $_->update($values) for $self->all; + $guard->commit; return 1; } @@ -1566,7 +1567,9 @@ sub delete_all { $self->throw_exception('delete_all does not accept any arguments') if @_; + my $guard = $self->result_source->schema->txn_scope_guard; $_->delete for $self->all; + $guard->commit; return 1; } diff --git a/lib/DBIx/Class/ResultSource.pm b/lib/DBIx/Class/ResultSource.pm index ebbf960..1329fe1 100644 --- a/lib/DBIx/Class/ResultSource.pm +++ b/lib/DBIx/Class/ResultSource.pm @@ -139,6 +139,13 @@ The column names given will be created as accessor methods on your L objects. You can change the name of the accessor by supplying an L in the column_info hash. +If a column name beginning with a plus sign ('+col1') is provided, the +attributes provided will be merged with any existing attributes for the +column, with the new attributes taking precedence in the case that an +attribute already exists. Using this without a hashref +(C<< $source->add_columns(qw/+col1 +col2/) >>) is legal, but useless -- +it does the same thing it would do without the plus. + The contents of the column_info are not set in stone. 
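[Editor's note] The ResultSource POD above describes the new C<< add_columns('+colname' => { ... }) >> augment syntax: the supplied attributes are merged with any existing attributes for the column, new keys winning. The merge is a plain hash overlay, mirroring the C<< %$column_info = (%$column_info, %$new_info) >> line in the add_columns implementation later in this diff; a small self-contained sketch (the attribute values are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Existing column_info for 'name', as if declared by an earlier
# add_columns call.
my %existing = ( data_type => 'varchar', size => 100, is_nullable => 0 );

# Attributes supplied later via add_columns('+name' => { ... }).
my %augment = ( size => 255 );

# Hash overlay: keys from %augment take precedence, every other key
# from %existing survives untouched.
my %merged = ( %existing, %augment );

printf "%s(%d), nullable=%d\n",
    @merged{qw/data_type size is_nullable/};    # varchar(255), nullable=0
```

This is also why C<< add_columns(qw/+col1 +col2/) >> without hashrefs is legal but useless, as the POD notes: overlaying an empty hash changes nothing.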
The following keys are currently recognised/used by DBIx::Class: @@ -288,9 +295,17 @@ sub add_columns { my @added; my $columns = $self->_columns; while (my $col = shift @cols) { + my $column_info = {}; + if ($col =~ s/^\+//) { + $column_info = $self->column_info($col); + } + # If next entry is { ... } use that for the column info, if not # use an empty hashref - my $column_info = ref $cols[0] ? shift(@cols) : {}; + if (ref $cols[0]) { + my $new_info = shift(@cols); + %$column_info = (%$column_info, %$new_info); + } push(@added, $col) unless exists $columns->{$col}; $columns->{$col} = $column_info; } @@ -504,12 +519,15 @@ sub primary_columns { return @{shift->_primaries||[]}; } +# a helper method that will automatically die with a descriptive message if +# no pk is defined on the source in question. For internal use to save +# on if @pks... boilerplate sub _pri_cols { my $self = shift; my @pcols = $self->primary_columns or $self->throw_exception (sprintf( - 'Operation requires a primary key to be declared on %s via set_primary_key', - ref $self, + "Operation requires a primary key to be declared on '%s' via set_primary_key", + $self->source_name, )); return @pcols; } @@ -1227,7 +1245,7 @@ sub _resolve_join { for my $rel (keys %$join) { my $rel_info = $self->relationship_info($rel) - or $self->throw_exception("No such relationship ${rel}"); + or $self->throw_exception("No such relationship '$rel' on " . $self->source_name); my $force_left = $parent_force_left; $force_left ||= lc($rel_info->{attrs}{join_type}||'') eq 'left'; @@ -1257,7 +1275,7 @@ sub _resolve_join { ); my $rel_info = $self->relationship_info($join) - or $self->throw_exception("No such relationship ${join}"); + or $self->throw_exception("No such relationship $join on " . 
$self->source_name); my $rel_src = $self->related_source($join); return [ { $as => $rel_src->from, @@ -1425,7 +1443,7 @@ sub _resolve_prefetch { my $as = shift @{$p->{-join_aliases}}; my $rel_info = $self->relationship_info( $pre ); - $self->throw_exception( $self->name . " has no such relationship '$pre'" ) + $self->throw_exception( $self->source_name . " has no such relationship '$pre'" ) unless $rel_info; my $as_prefix = ($alias =~ /^.*?\.(.+)$/ ? $1.'.' : ''); my $rel_source = $self->related_source($pre); @@ -1450,7 +1468,7 @@ sub _resolve_prefetch { } #my @col = map { (/^self\.(.+)$/ ? ("${as_prefix}.$1") : ()); } # values %{$rel_info->{cond}}; - $collapse->{".${as_prefix}${pre}"} = [ $rel_source->primary_columns ]; + $collapse->{".${as_prefix}${pre}"} = [ $rel_source->_pri_cols ]; # action at a distance. prepending the '.' allows simpler code # in ResultSet->_collapse_result my @key = map { (/^foreign\.(.+)$/ ? ($1) : ()); } @@ -1486,7 +1504,7 @@ Returns the result source object for the given relationship. sub related_source { my ($self, $rel) = @_; if( !$self->has_relationship( $rel ) ) { - $self->throw_exception("No such relationship '$rel'"); + $self->throw_exception("No such relationship '$rel' on " . $self->source_name); } return $self->schema->source($self->relationship_info($rel)->{source}); } @@ -1508,7 +1526,7 @@ Returns the class name for objects in the given relationship. sub related_class { my ($self, $rel) = @_; if( !$self->has_relationship( $rel ) ) { - $self->throw_exception("No such relationship '$rel'"); + $self->throw_exception("No such relationship '$rel' on " . 
$self->source_name); } return $self->schema->class($self->relationship_info($rel)->{source}); } diff --git a/lib/DBIx/Class/ResultSourceProxy.pm b/lib/DBIx/Class/ResultSourceProxy.pm index 6af0202..feb0a59 100644 --- a/lib/DBIx/Class/ResultSourceProxy.pm +++ b/lib/DBIx/Class/ResultSourceProxy.pm @@ -37,6 +37,9 @@ sub add_columns { my $source = $class->result_source_instance; $source->add_columns(@cols); foreach my $c (grep { !ref } @cols) { + # If this is an augment definition get the real colname. + $c =~ s/^\+//; + $class->register_column($c => $source->column_info($c)); } } diff --git a/lib/DBIx/Class/Row.pm b/lib/DBIx/Class/Row.pm index 2777114..787df86 100644 --- a/lib/DBIx/Class/Row.pm +++ b/lib/DBIx/Class/Row.pm @@ -342,31 +342,48 @@ sub insert { $rollback_guard ||= $source->storage->txn_scope_guard } + ## PK::Auto + my %auto_pri; + my $auto_idx = 0; + for ($self->primary_columns) { + if ( + not defined $self->get_column($_) + || + (ref($self->get_column($_)) eq 'SCALAR') + ) { + my $col_info = $source->column_info($_); + $auto_pri{$_} = $auto_idx++ unless $col_info->{auto_nextval}; # auto_nextval's are pre-fetched in the storage + } + } + MULTICREATE_DEBUG and do { no warnings 'uninitialized'; warn "MC $self inserting (".join(', ', $self->get_columns).")\n"; }; - my $updated_cols = $source->storage->insert($source, { $self->get_columns }); + my $updated_cols = $source->storage->insert( + $source, + { $self->get_columns }, + (keys %auto_pri) && $source->storage->can_insert_returning + ? 
{ returning => [ sort { $auto_pri{$a} <=> $auto_pri{$b} } keys %auto_pri ] } + : () + , + ); + foreach my $col (keys %$updated_cols) { $self->store_column($col, $updated_cols->{$col}); + delete $auto_pri{$col}; } - ## PK::Auto - my @auto_pri = grep { - (not defined $self->get_column($_)) - || - (ref($self->get_column($_)) eq 'SCALAR') - } $self->primary_columns; - - if (@auto_pri) { - MULTICREATE_DEBUG and warn "MC $self fetching missing PKs ".join(', ', @auto_pri)."\n"; + if (keys %auto_pri) { + my @missing = sort { $auto_pri{$a} <=> $auto_pri{$b} } keys %auto_pri; + MULTICREATE_DEBUG and warn "MC $self fetching missing PKs ".join(', ', @missing )."\n"; my $storage = $self->result_source->storage; $self->throw_exception( "Missing primary key but Storage doesn't support last_insert_id" ) unless $storage->can('last_insert_id'); - my @ids = $storage->last_insert_id($self->result_source,@auto_pri); + my @ids = $storage->last_insert_id($self->result_source, @missing); $self->throw_exception( "Can't get last insert id" ) - unless (@ids == @auto_pri); - $self->store_column($auto_pri[$_] => $ids[$_]) for 0 .. $#ids; + unless (@ids == @missing); + $self->store_column($missing[$_] => $ids[$_]) for 0 .. $#missing; } $self->{_dirty_columns} = {}; @@ -467,7 +484,7 @@ the database row can not be constructed (see L for more details). -Also takes an optional hashref of C<< column_name => value> >> pairs +Also takes an optional hashref of C<< column_name => value >> pairs to update on the object first. Be aware that the hashref will be passed to C, which might edit it in place, so don't rely on it being the same after a call to C. If you diff --git a/lib/DBIx/Class/SQLAHacks.pm b/lib/DBIx/Class/SQLAHacks.pm index d2cd569..4da469e 100644 --- a/lib/DBIx/Class/SQLAHacks.pm +++ b/lib/DBIx/Class/SQLAHacks.pm @@ -362,9 +362,8 @@ sub insert { if (! 
$_[0] or (ref $_[0] eq 'HASH' and !keys %{$_[0]} ) ) { my $sql = "INSERT INTO ${table} DEFAULT VALUES"; - if (my @returning = @{ ($_[1]||{})->{returning} || [] }) { - $sql .= ' RETURNING (' . (join ', ' => map $self->_quote($_), @returning) - . ')'; + if (my $ret = ($_[1]||{})->{returning} ) { + $sql .= $self->_insert_returning ($ret); } return $sql; @@ -510,6 +509,14 @@ sub _table { } } +sub _generate_join_clause { + my ($self, $join_type) = @_; + + return sprintf ('%s JOIN ', + $join_type ? ' ' . uc($join_type) : '' + ); +} + sub _recurse_from { my ($self, $from, @join) = @_; my @sqlf; @@ -528,10 +535,7 @@ sub _recurse_from { $join_type = $self->{_default_jointype} if not defined $join_type; - my $join_clause = sprintf ('%s JOIN ', - $join_type ? ' ' . uc($join_type) : '' - ); - push @sqlf, $join_clause; + push @sqlf, $self->_generate_join_clause( $join_type ); if (ref $to eq 'ARRAY') { push(@sqlf, '(', $self->_recurse_from(@$to), ')'); diff --git a/lib/DBIx/Class/SQLAHacks/MySQL.pm b/lib/DBIx/Class/SQLAHacks/MySQL.pm index 687a793..cc177f1 100644 --- a/lib/DBIx/Class/SQLAHacks/MySQL.pm +++ b/lib/DBIx/Class/SQLAHacks/MySQL.pm @@ -21,4 +21,14 @@ sub insert { return $self->SUPER::insert (@_); } +# Allow STRAIGHT_JOIN's +sub _generate_join_clause { + my ($self, $join_type) = @_; + + if( $join_type && $join_type =~ /^STRAIGHT\z/i ) { + return ' STRAIGHT_JOIN ' + } + + return $self->SUPER::_generate_join_clause( $join_type ); +} 1; diff --git a/lib/DBIx/Class/Storage/DBI.pm b/lib/DBIx/Class/Storage/DBI.pm index e746385..1030c4c 100644 --- a/lib/DBIx/Class/Storage/DBI.pm +++ b/lib/DBIx/Class/Storage/DBI.pm @@ -18,7 +18,8 @@ use Sub::Name (); __PACKAGE__->mk_group_accessors('simple' => qw/_connect_info _dbi_connect_info _dbh _sql_maker _sql_maker_opts _conn_pid - _conn_tid transaction_depth _dbh_autocommit _driver_determined savepoints/ + _conn_tid transaction_depth _dbh_autocommit _driver_determined savepoints + __server_info/ ); # the values for these accessors are 
picked out (and deleted) from @@ -33,7 +34,10 @@ __PACKAGE__->mk_group_accessors('simple' => @storage_options); # default cursor class, overridable in connect_info attributes __PACKAGE__->cursor_class('DBIx::Class::Storage::DBI::Cursor'); -__PACKAGE__->mk_group_accessors('inherited' => qw/sql_maker_class/); +__PACKAGE__->mk_group_accessors('inherited' => qw/ + sql_maker_class + can_insert_returning +/); __PACKAGE__->sql_maker_class('DBIx::Class::SQLAHacks'); @@ -89,7 +93,7 @@ DBIx::Class::Storage::DBI - DBI storage handler ); $schema->resultset('Book')->search({ - written_on => $schema->storage->datetime_parser(DateTime->now) + written_on => $schema->storage->datetime_parser->format_datetime(DateTime->now) }); =head1 DESCRIPTION @@ -916,6 +920,8 @@ sub _populate_dbh { $self->{transaction_depth} = $self->_dbh_autocommit ? 0 : 1; $self->_run_connection_actions unless $self->{_in_determine_driver}; + + $self->_populate_server_info; } sub _run_connection_actions { @@ -928,6 +934,37 @@ sub _run_connection_actions { $self->_do_connection_actions(connect_call_ => $_) for @actions; } +sub _populate_server_info { + my $self = shift; + my %info; + + my $dbms_ver = eval { + local $@; + $self->_get_dbh->get_info(18) + }; + + if (defined $dbms_ver) { + $info{dbms_ver} = $dbms_ver; + + ($dbms_ver) = $dbms_ver =~ /^(\S+)/; + + my @verparts = split /\./, $dbms_ver; + $info{dbms_ver_normalized} = sprintf "%d.%03d%03d", @verparts; + } + + $self->__server_info(\%info); + + return \%info; +} + +sub _server_info { + my $self = shift; + + $self->_get_dbh; + + return $self->__server_info(@_); +} + sub _determine_driver { my ($self) = @_; @@ -1363,20 +1400,17 @@ sub _execute { $self->dbh_do('_dbh_execute', @_); # retry over disconnects } -sub insert { +sub _prefetch_insert_auto_nextvals { my ($self, $source, $to_insert) = @_; - my $ident = $source->from; - my $bind_attributes = $self->source_bind_attributes($source); - - my $updated_cols = {}; + my $upd = {}; foreach my $col ( 
$source->columns ) { if ( !defined $to_insert->{$col} ) { my $col_info = $source->column_info($col); if ( $col_info->{auto_nextval} ) { - $updated_cols->{$col} = $to_insert->{$col} = $self->_sequence_fetch( + $upd->{$col} = $to_insert->{$col} = $self->_sequence_fetch( 'nextval', $col_info->{sequence} ||= $self->_dbh_get_autoinc_seq($self->_get_dbh, $source, $col) @@ -1385,7 +1419,37 @@ sub insert { } } - $self->_execute('insert' => [], $source, $bind_attributes, $to_insert); + return $upd; +} + +sub insert { + my $self = shift; + my ($source, $to_insert, $opts) = @_; + + my $updated_cols = $self->_prefetch_insert_auto_nextvals (@_); + + my $bind_attributes = $self->source_bind_attributes($source); + + my ($rv, $sth) = $self->_execute('insert' => [], $source, $bind_attributes, $to_insert, $opts); + + if ($opts->{returning}) { + my @ret_cols = @{$opts->{returning}}; + + my @ret_vals = eval { + local $SIG{__WARN__} = sub {}; + my @r = $sth->fetchrow_array; + $sth->finish; + @r; + }; + + my %ret; + @ret{@ret_cols} = @ret_vals if (@ret_vals); + + $updated_cols = { + %$updated_cols, + %ret, + }; + } return $updated_cols; } diff --git a/lib/DBIx/Class/Storage/DBI/AutoCast.pm b/lib/DBIx/Class/Storage/DBI/AutoCast.pm index e5ac27a..f099bc5 100644 --- a/lib/DBIx/Class/Storage/DBI/AutoCast.pm +++ b/lib/DBIx/Class/Storage/DBI/AutoCast.pm @@ -29,7 +29,8 @@ converted to: CAST(? as $mapped_type) -This option can also be enabled in L as: +This option can also be enabled in +L as: on_connect_call => ['set_auto_cast'] @@ -76,7 +77,7 @@ Used as: on_connect_call => ['set_auto_cast'] -in L. +in L. 
=cut

diff --git a/lib/DBIx/Class/Storage/DBI/Informix.pm b/lib/DBIx/Class/Storage/DBI/Informix.pm
index c08cb9a..8f23077 100644
--- a/lib/DBIx/Class/Storage/DBI/Informix.pm
+++ b/lib/DBIx/Class/Storage/DBI/Informix.pm
@@ -3,11 +3,25 @@ use strict;
 use warnings;

 use base qw/DBIx::Class::Storage::DBI/;
-
 use mro 'c3';

+use Scope::Guard ();
+use Context::Preserve ();
+
 __PACKAGE__->mk_group_accessors('simple' => '__last_insert_id');

+=head1 NAME
+
+DBIx::Class::Storage::DBI::Informix - Base Storage Class for Informix Support
+
+=head1 DESCRIPTION
+
+This class implements storage-specific support for the Informix RDBMS
+
+=head1 METHODS
+
+=cut
+
 sub _execute {
   my $self = shift;
   my ($op) = @_;
@@ -32,23 +46,141 @@ sub _sql_maker_opts {
   return { limit_dialect => 'SkipFirst', %{$self->{_sql_maker_opts}||{}} };
 }

-1;
+sub _svp_begin {
+  my ($self, $name) = @_;

-__END__
+  $self->_get_dbh->do("SAVEPOINT $name");
+}

-=head1 NAME
+# can't release savepoints
+sub _svp_release { 1 }

-DBIx::Class::Storage::DBI::Informix - Base Storage Class for INFORMIX Support
+sub _svp_rollback {
+  my ($self, $name) = @_;

-=head1 SYNOPSIS
+  $self->_get_dbh->do("ROLLBACK TO SAVEPOINT $name")
+}

-=head1 DESCRIPTION
+sub with_deferred_fk_checks {
+  my ($self, $sub) = @_;
+
+  my $txn_scope_guard = $self->txn_scope_guard;
+
+  $self->_do_query('SET CONSTRAINTS ALL DEFERRED');
+
+  my $sg = Scope::Guard->new(sub {
+    $self->_do_query('SET CONSTRAINTS ALL IMMEDIATE');
+  });
+
+  return Context::Preserve::preserve_context(sub { $sub->() },
+    after => sub { $txn_scope_guard->commit });
+}
+
+=head2 connect_call_datetime_setup
+
+Used as:
+
+  on_connect_call => 'datetime_setup'
+
+In L to set the C and
+C formats.
+
+Sets the following environment variables:
+
+  GL_DATE="%m/%d/%Y"
+  GL_DATETIME="%Y-%m-%d %H:%M:%S%F5"
+
+The C and C environment variables are cleared.
+
+B setting the C environment variable seems to have no effect
+after the process has started, so the default format is used. The C
+setting does take effect however.

-This class implements storage-specific support for Informix
+The C data type supports up to 5 digits after the decimal point for
+second precision, depending on how you have declared your column. The full
+possible precision is used.
+
+The column declaration for a C with maximum precision is:
+
+  column_name DATETIME YEAR TO FRACTION(5)
+
+The C data type stores the date portion only, and it B be declared
+with:
+
+  data_type => 'date'
+
+in your Result class.
+
+You will need the L module for inflation to work.
+
+=cut
+
+sub connect_call_datetime_setup {
+  my $self = shift;
+
+  delete @ENV{qw/DBDATE DBCENTURY/};
+
+  $ENV{GL_DATE}     = "%m/%d/%Y";
+  $ENV{GL_DATETIME} = "%Y-%m-%d %H:%M:%S%F5";
+}
+
+sub datetime_parser_type {
+  'DBIx::Class::Storage::DBI::Informix::DateTime::Format'
+}
+
+package # hide from PAUSE
+  DBIx::Class::Storage::DBI::Informix::DateTime::Format;
+
+my $timestamp_format = '%Y-%m-%d %H:%M:%S.%5N'; # %F %T
+my $date_format      = '%m/%d/%Y';
+
+my ($timestamp_parser, $date_parser);
+
+sub parse_datetime {
+  shift;
+  require DateTime::Format::Strptime;
+  $timestamp_parser ||= DateTime::Format::Strptime->new(
+    pattern  => $timestamp_format,
+    on_error => 'croak',
+  );
+  return $timestamp_parser->parse_datetime(shift);
+}
+
+sub format_datetime {
+  shift;
+  require DateTime::Format::Strptime;
+  $timestamp_parser ||= DateTime::Format::Strptime->new(
+    pattern  => $timestamp_format,
+    on_error => 'croak',
+  );
+  return $timestamp_parser->format_datetime(shift);
+}
+
+sub parse_date {
+  shift;
+  require DateTime::Format::Strptime;
+  $date_parser ||= DateTime::Format::Strptime->new(
+    pattern  => $date_format,
+    on_error => 'croak',
+  );
+  return $date_parser->parse_datetime(shift);
+}
+
+sub format_date {
+  shift;
+  require DateTime::Format::Strptime;
+  $date_parser ||= DateTime::Format::Strptime->new(
+    pattern  => $date_format,
+    on_error => 'croak',
+  );
+  return $date_parser->format_datetime(shift);
+}
+
+1;
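The `DateTime::Format` shim above lazily builds a pair of strptime parsers for the two Informix formats. The same round-trip can be sketched with Python's standard `datetime` (note Informix's `%F5` fractional-seconds directive has no direct `strptime` equivalent, so the fraction is omitted here):

```python
from datetime import datetime

DATE_FORMAT = "%m/%d/%Y"                 # GL_DATE from the patch above
TIMESTAMP_FORMAT = "%Y-%m-%d %H:%M:%S"   # GL_DATETIME, minus the %F5 fraction

def parse_date(s):
    # string -> date, same role as parse_date in the shim above
    return datetime.strptime(s, DATE_FORMAT).date()

def format_date(d):
    # date -> string, same role as format_date in the shim above
    return d.strftime(DATE_FORMAT)

d = parse_date("03/13/2010")
print(format_date(d))  # -> 03/13/2010
ts = datetime.strptime("2010-03-13 10:03:24", TIMESTAMP_FORMAT)
print(ts.hour)         # -> 10
```

Caching the parser objects (as the Perl `||=` does) matters only for construction cost; the formats themselves are fixed per connection.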
-=head1 AUTHORS
+=head1 AUTHOR

-See L
+See L and L.

 =head1 LICENSE

diff --git a/lib/DBIx/Class/Storage/DBI/InterBase.pm b/lib/DBIx/Class/Storage/DBI/InterBase.pm
index a416fa7..352dcc5 100644
--- a/lib/DBIx/Class/Storage/DBI/InterBase.pm
+++ b/lib/DBIx/Class/Storage/DBI/InterBase.pm
@@ -1,17 +1,11 @@
 package DBIx::Class::Storage::DBI::InterBase;

-# partly stolen from DBIx::Class::Storage::DBI::MSSQL
-
 use strict;
 use warnings;

 use base qw/DBIx::Class::Storage::DBI/;
 use mro 'c3';
 use List::Util();

-__PACKAGE__->mk_group_accessors(simple => qw/
-  _auto_incs
-/);
-
 =head1 NAME

 DBIx::Class::Storage::DBI::InterBase - Driver for the Firebird RDBMS

@@ -26,65 +20,16 @@ You need to use either the L option or
 L (see L) for your code to function
 correctly with this driver. Otherwise you will likely get bizarre error messages
-such as C.
-
-For ODBC support, see L.
+such as C. The alternative is to use the
+L driver, which is more suitable
+for long running processes such as under L.

 To turn on L support, see
 L.

 =cut

-sub _prep_for_execute {
-  my $self = shift;
-  my ($op, $extra_bind, $ident, $args) = @_;
-
-  if ($op eq 'insert') {
-    $self->_auto_incs([]);
-
-    my %pk;
-    @pk{$ident->primary_columns} = ();
-
-    my @auto_inc_cols = grep {
-      my $inserting = $args->[0]{$_};
-
-      ($ident->column_info($_)->{is_auto_increment}
-        || exists $pk{$_})
-      && (
-        (not defined $inserting)
-        ||
-        (ref $inserting eq 'SCALAR' && $$inserting =~ /^null\z/i)
-      )
-    } $ident->columns;
-
-    if (@auto_inc_cols) {
-      $args->[1]{returning} = \@auto_inc_cols;
-
-      $self->_auto_incs->[0] = \@auto_inc_cols;
-    }
-  }
-
-  return $self->next::method(@_);
-}
-
-sub _execute {
-  my $self = shift;
-  my ($op) = @_;
-
-  my ($rv, $sth, @bind) = $self->dbh_do($self->can('_dbh_execute'), @_);
-
-  if ($op eq 'insert' && $self->_auto_incs) {
-    local $@;
-    my (@auto_incs) = eval {
-      local $SIG{__WARN__} = sub {};
-      $sth->fetchrow_array
-    };
-    $self->_auto_incs->[1] = \@auto_incs;
-    $sth->finish;
-  }
-
-  return wantarray ? ($rv, $sth, @bind) : $rv;
-}
+sub can_insert_returning { 1 }

 sub _sequence_fetch {
   my ($self, $nextval, $sequence) = @_;
@@ -142,34 +87,6 @@ EOF
   return undef;
 }

-sub last_insert_id {
-  my ($self, $source, @cols) = @_;
-  my @result;
-
-  my %auto_incs;
-  @auto_incs{ @{ $self->_auto_incs->[0] } } =
-    @{ $self->_auto_incs->[1] };
-
-  push @result, $auto_incs{$_} for @cols;
-
-  return @result;
-}
-
-sub insert {
-  my $self = shift;
-
-  my $updated_cols = $self->next::method(@_);
-
-  if ($self->_auto_incs->[0]) {
-    my %auto_incs;
-    @auto_incs{ @{ $self->_auto_incs->[0] } } = @{ $self->_auto_incs->[1] };
-
-    $updated_cols = { %$updated_cols, %auto_incs };
-  }
-
-  return $updated_cols;
-}
-
 # this sub stolen from DB2

 sub _sql_maker_opts {
@@ -206,6 +123,7 @@ sub _ping {
   my $dbh = $self->_dbh or return 0;

   local $dbh->{RaiseError} = 1;
+  local $dbh->{PrintError} = 0;

   eval {
     $dbh->do('select 1 from rdb$database');
@@ -237,6 +155,16 @@ sub _set_sql_dialect {
   }
 }

+sub _populate_server_info {
+  my $self = shift;
+
+  return $self->next::method(@_) if ref $self ne __PACKAGE__;
+
+  local $SIG{__WARN__} = sub {}; # silence warning due to bug in DBD::InterBase
+
+  return $self->next::method(@_);
+}
+
 =head2 connect_call_use_softcommit

 Used as:
@@ -248,7 +176,8 @@ L C option.

 You need either this option or C<< disable_sth_caching => 1 >> for
 L code to function correctly (otherwise you may get C
-executing> errors.)
+executing> errors.) Or use the L
+driver.

 The downside of using this option is that your process will B see UPDATEs,
 INSERTs and DELETEs from other processes for already open statements.

@@ -365,6 +294,8 @@ L as a workaround for the C
 errors, this of course adversely affects performance.

+Alternately, use the L driver.
+
 =item *

 C support by default only works for Firebird versions 2 or
 work with earlier versions.

 =item *

-Sub-second precision for TIMESTAMPs is not currently available with ODBC.
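Several storages in this merge gain or adjust a `_ping` that probes the connection with a trivial SELECT, raising errors internally (`RaiseError => 1`) while keeping the probe quiet (`PrintError => 0`). The same shape in Python, with `sqlite3` standing in for the real backends:

```python
import sqlite3

def ping(conn):
    """Return True if the connection still answers a trivial query.

    Mirrors the _ping pattern in the patch: issue a cheap 'select 1',
    treat any exception as a dead connection, and stay quiet about it.
    """
    if conn is None:
        return False
    try:
        conn.execute("select 1")
        return True
    except sqlite3.Error:
        return False

conn = sqlite3.connect(":memory:")
print(ping(conn))  # -> True
conn.close()
print(ping(conn))  # -> False
```

The probe statement is backend-specific in the patch (`select 1 from rdb$database` on Firebird, `select 1 from dual` on Oracle, plain `select 1` on MSSQL); only the error-swallowing wrapper is shared.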
+Sub-second precision for TIMESTAMPs is not currently available when using the
+L driver.

 =back

diff --git a/lib/DBIx/Class/Storage/DBI/MSSQL.pm b/lib/DBIx/Class/Storage/DBI/MSSQL.pm
index 6779e86..5f17153 100644
--- a/lib/DBIx/Class/Storage/DBI/MSSQL.pm
+++ b/lib/DBIx/Class/Storage/DBI/MSSQL.pm
@@ -232,24 +232,14 @@ sub build_datetime_parser {

 sub sqlt_type { 'SQLServer' }

-sub _get_mssql_version {
-  my $self = shift;
-
-  my $data = $self->_get_dbh->selectrow_hashref('xp_msver ProductVersion');
-
-  if ($data->{Character_Value} =~ /^(\d+)\./) {
-    return $1;
-  } else {
-    $self->throw_exception(q{Your ProductVersion's Character_Value is missing or malformed!});
-  }
-}
-
 sub sql_maker {
   my $self = shift;

   unless ($self->_sql_maker) {
     unless ($self->{_sql_maker_opts}{limit_dialect}) {
-      my $version = eval { $self->_get_mssql_version; } || 0;
+
+      my ($version) = $self->_server_info->{dbms_ver} =~ /^(\d+)/;
+      $version ||= 0;

       $self->{_sql_maker_opts} = {
         limit_dialect => ($version >= 9 ? 'RowNumberOver' : 'Top'),
@@ -263,6 +253,21 @@ sub sql_maker {
   return $self->_sql_maker;
 }

+sub _ping {
+  my $self = shift;
+
+  my $dbh = $self->_dbh or return 0;
+
+  local $dbh->{RaiseError} = 1;
+  local $dbh->{PrintError} = 0;
+
+  eval {
+    $dbh->do('select 1');
+  };
+
+  return $@ ? 0 : 1;
+}
+
 1;

 =head1 NAME

diff --git a/lib/DBIx/Class/Storage/DBI/ODBC/Firebird.pm b/lib/DBIx/Class/Storage/DBI/ODBC/Firebird.pm
index cb36879..6f6acdf 100644
--- a/lib/DBIx/Class/Storage/DBI/ODBC/Firebird.pm
+++ b/lib/DBIx/Class/Storage/DBI/ODBC/Firebird.pm
@@ -10,7 +10,7 @@ use mro 'c3';

 DBIx::Class::Storage::DBI::ODBC::Firebird - Driver for using the Firebird RDBMS
 through ODBC

-=head1 SYNOPSIS
+=head1 DESCRIPTION

 Most functionality is provided by L, see that module
 for details.

@@ -19,6 +19,11 @@ To build the ODBC driver for Firebird on Linux for unixODBC, see:

 L

+This driver does not suffer from the nested statement handles across commits
+issue that the L based
+driver does. This makes it more suitable for long running processes such as
+under L.
+
 =cut

 # XXX seemingly no equivalent to ib_time_all from DBD::InterBase via ODBC

diff --git a/lib/DBIx/Class/Storage/DBI/ODBC/Microsoft_SQL_Server.pm b/lib/DBIx/Class/Storage/DBI/ODBC/Microsoft_SQL_Server.pm
index 1b51b57..f8cfdfc 100644
--- a/lib/DBIx/Class/Storage/DBI/ODBC/Microsoft_SQL_Server.pm
+++ b/lib/DBIx/Class/Storage/DBI/ODBC/Microsoft_SQL_Server.pm
@@ -37,7 +37,7 @@ Use as:

   on_connect_call => 'use_dynamic_cursors'

-in your L as one way to enable multiple
+in your L as one way to enable multiple
 concurrent statements.

 Will add C<< odbc_cursortype => 2 >> to your DBI connection attributes. See
@@ -175,14 +175,6 @@ sub connect_call_use_MARS {
   }
 }

-sub _get_mssql_version {
-  my $self = shift;
-
-  my ($version) = $self->_get_dbh->get_info(18) =~ /^(\d+)/;
-
-  return $version;
-}
-
 1;

 =head1 AUTHOR

diff --git a/lib/DBIx/Class/Storage/DBI/Oracle/Generic.pm b/lib/DBIx/Class/Storage/DBI/Oracle/Generic.pm
index 37ddf7a..e6bf5ae 100644
--- a/lib/DBIx/Class/Storage/DBI/Oracle/Generic.pm
+++ b/lib/DBIx/Class/Storage/DBI/Oracle/Generic.pm
@@ -157,9 +157,10 @@ sub _ping {
   my $dbh = $self->_dbh or return 0;

   local $dbh->{RaiseError} = 1;
+  local $dbh->{PrintError} = 0;

   eval {
-    $dbh->do("select 1 from dual");
+    $dbh->do('select 1 from dual');
   };

   return $@ ? 0 : 1;
@@ -239,10 +240,10 @@ Used as:

   on_connect_call => 'datetime_setup'

-In L to set the session nls date, and
-timestamp values for use with L and the
-necessary environment variables for L, which is used
-by it.
+In L to set the session nls
+date, and timestamp values for use with L
+and the necessary environment variables for L, which
+is used by it.

 Maximum allowable precision is used, unless the environment variables have
 already been set.
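The Oracle `datetime_setup` doc above notes that maximum precision is applied "unless the environment variables have already been set" — i.e. a user's pre-existing NLS settings win over the defaults. A Python sketch of that set-if-unset behaviour (the variable names are the usual Oracle NLS ones and are an assumption here, not copied from the patch):

```python
import os

def datetime_setup(env=os.environ):
    """Apply maximum-precision NLS formats unless already set by the user.

    NOTE: variable names and formats below are illustrative assumptions,
    chosen to demonstrate the setdefault pattern, not the exact values
    used by DBIx::Class.
    """
    defaults = {
        "NLS_DATE_FORMAT": "YYYY-MM-DD HH24:MI:SS",
        "NLS_TIMESTAMP_FORMAT": "YYYY-MM-DD HH24:MI:SS.FF",
    }
    for var, fmt in defaults.items():
        env.setdefault(var, fmt)  # no-op when the user already set it
    return env

env = {"NLS_DATE_FORMAT": "DD-MON-RR"}  # pre-existing user setting survives
datetime_setup(env)
print(env["NLS_DATE_FORMAT"])          # -> DD-MON-RR
print("NLS_TIMESTAMP_FORMAT" in env)   # -> True
```

Contrast this with the Informix `connect_call_datetime_setup` earlier in the diff, which unconditionally overwrites `GL_DATE`/`GL_DATETIME` and clears `DBDATE`/`DBCENTURY`.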
diff --git a/lib/DBIx/Class/Storage/DBI/Pg.pm b/lib/DBIx/Class/Storage/DBI/Pg.pm
index 92153ec..250707b 100644
--- a/lib/DBIx/Class/Storage/DBI/Pg.pm
+++ b/lib/DBIx/Class/Storage/DBI/Pg.pm
@@ -3,22 +3,44 @@ package DBIx::Class::Storage::DBI::Pg;
 use strict;
 use warnings;

-use base qw/DBIx::Class::Storage::DBI::MultiColumnIn/;
+use base qw/
+  DBIx::Class::Storage::DBI::MultiColumnIn
+/;
 use mro 'c3';

 use DBD::Pg qw(:pg_types);
+use Scope::Guard ();
+use Context::Preserve ();

 # Ask for a DBD::Pg with array support
 warn __PACKAGE__.": DBD::Pg 2.9.2 or greater is strongly recommended\n"
   if ($DBD::Pg::VERSION < 2.009002); # pg uses (used?) version::qv()

+sub can_insert_returning {
+  my $self = shift;
+
+  return 1
+    if $self->_server_info->{dbms_ver_normalized} >= 8.002;
+
+  return 0;
+}
+
 sub with_deferred_fk_checks {
   my ($self, $sub) = @_;

-  $self->_get_dbh->do('SET CONSTRAINTS ALL DEFERRED');
-  $sub->();
+  my $txn_scope_guard = $self->txn_scope_guard;
+
+  $self->_do_query('SET CONSTRAINTS ALL DEFERRED');
+
+  my $sg = Scope::Guard->new(sub {
+    $self->_do_query('SET CONSTRAINTS ALL IMMEDIATE');
+  });
+
+  return Context::Preserve::preserve_context(sub { $sub->() },
+    after => sub { $txn_scope_guard->commit });
 }

+# only used when INSERT ... RETURNING is disabled
 sub last_insert_id {
   my ($self,$source,@cols) = @_;

@@ -32,19 +54,23 @@ sub last_insert_id {
       $col,
     ));

-    push @values, $self->_dbh_last_insert_id ($self->_dbh, $seq);
+    push @values, $self->_dbh->last_insert_id(undef, undef, undef, undef, {sequence => $seq});
   }

   return @values;
 }

-# there seems to be absolutely no reason to have this as a separate method,
-# but leaving intact in case someone is already overriding it
-sub _dbh_last_insert_id {
-  my ($self, $dbh, $seq) = @_;
-  $dbh->last_insert_id(undef, undef, undef, undef, {sequence => $seq});
-}
+sub _sequence_fetch {
+  my ($self, $function, $sequence) = @_;
+
+  $self->throw_exception('No sequence to fetch') unless $sequence;
+
+  my ($val) = $self->_get_dbh->selectrow_array(
+    sprintf ("select %s('%s')", $function, $sequence)
+  );
+  return $val;
+}

 sub _dbh_get_autoinc_seq {
   my ($self, $dbh, $source, $col) = @_;
@@ -155,12 +181,6 @@ sub bind_attribute_by_data_type {
   }
 }

-sub _sequence_fetch {
-  my ( $self, $type, $seq ) = @_;
-  my ($id) = $self->_get_dbh->selectrow_array("SELECT nextval('${seq}')");
-  return $id;
-}
-
 sub _svp_begin {
   my ($self, $name) = @_;

diff --git a/lib/DBIx/Class/Storage/DBI/Replicated.pm b/lib/DBIx/Class/Storage/DBI/Replicated.pm
index ab0a499..c21fe8a 100644
--- a/lib/DBIx/Class/Storage/DBI/Replicated.pm
+++ b/lib/DBIx/Class/Storage/DBI/Replicated.pm
@@ -306,6 +306,7 @@ has 'write_handler' => (
     backup
     is_datatype_numeric
+    can_insert_returning
     _count_select
     _subq_count_select
     _subq_update_delete
@@ -365,6 +366,7 @@ has 'write_handler' => (
     _do_query
     _dbh_sth
     _dbh_execute
+    _prefetch_insert_auto_nextvals
   /],
 );

diff --git a/lib/DBIx/Class/Storage/DBI/SQLite.pm b/lib/DBIx/Class/Storage/DBI/SQLite.pm
index d096d80..30e7b2b 100644
--- a/lib/DBIx/Class/Storage/DBI/SQLite.pm
+++ b/lib/DBIx/Class/Storage/DBI/SQLite.pm
@@ -50,18 +50,44 @@ sub deployment_statements {

   $sqltargs ||= {};

-  my $sqlite_version = $self->_get_dbh->{sqlite_version};
+  my $sqlite_version = eval { $self->_server_info->{dbms_ver} };
+  $sqlite_version ||= '';

   # numify, SQLT does a numeric comparison
   $sqlite_version =~ s/^(\d+) \. (\d+) (?: \. (\d+))? .*/${1}.${2}/x;

-  $sqltargs->{producer_args}{sqlite_version} = $sqlite_version;
+  $sqltargs->{producer_args}{sqlite_version} = $sqlite_version if $sqlite_version;

   $self->next::method($schema, $type, $version, $dir, $sqltargs, @rest);
 }

 sub datetime_parser_type { return "DateTime::Format::SQLite"; }

+=head2 connect_call_use_foreign_keys
+
+Used as:
+
+  on_connect_call => 'use_foreign_keys'
+
+In L to turn on foreign key
+(including cascading) support for recent versions of SQLite and L.
+
+Executes:
+
+  PRAGMA foreign_keys = ON
+
+See L for more information.
+
+=cut
+
+sub connect_call_use_foreign_keys {
+  my $self = shift;
+
+  $self->_do_query(
+    'PRAGMA foreign_keys = ON'
+  );
+}
+
 1;

 =head1 NAME

diff --git a/lib/DBIx/Class/Storage/DBI/Sybase.pm b/lib/DBIx/Class/Storage/DBI/Sybase.pm
index 77b77e2..8c5f988 100644
--- a/lib/DBIx/Class/Storage/DBI/Sybase.pm
+++ b/lib/DBIx/Class/Storage/DBI/Sybase.pm
@@ -103,8 +103,8 @@ use this function instead. It does:

   $dbh->do("SET TEXTSIZE $bytes");

 Takes the number of bytes, or uses the C value from your
-L if omitted, lastly falls back to the C<32768> which
-is the L default.
+L if omitted, lastly falls
+back to the C<32768> which is the L default.

 =cut

diff --git a/lib/DBIx/Class/Storage/DBI/Sybase/ASE.pm b/lib/DBIx/Class/Storage/DBI/Sybase/ASE.pm
index ddc2339..914b75f 100644
--- a/lib/DBIx/Class/Storage/DBI/Sybase/ASE.pm
+++ b/lib/DBIx/Class/Storage/DBI/Sybase/ASE.pm
@@ -49,7 +49,7 @@ With this driver there is unfortunately no way to get the C
 without doing a C
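The new `connect_call_use_foreign_keys` above simply issues `PRAGMA foreign_keys = ON`, because SQLite parses but does not enforce foreign key constraints unless that per-connection pragma is enabled. A self-contained Python demonstration of the difference it makes (table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# equivalent of connect_call_use_foreign_keys: must be run per connection
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("CREATE TABLE artist (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE cd (id INTEGER PRIMARY KEY,"
    " artist INTEGER REFERENCES artist(id))"
)

try:
    # points at a nonexistent artist; rejected only because the pragma is on
    conn.execute("INSERT INTO cd (artist) VALUES (42)")
    outcome = "inserted"
except sqlite3.IntegrityError:
    outcome = "rejected"

print(outcome)  # -> rejected
```

Without the pragma the dangling INSERT would silently succeed, which is why the hook is exposed as an `on_connect_call` — it has to run on every new connection, not once per schema.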