From: Peter Rabbitson
Date: Tue, 17 Nov 2009 23:42:13 +0000 (+0000)
Subject: Merge 'trunk' into 'void_populate_resultset_cond'
X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=commitdiff_plain;h=fc7cabbcfe26164d6d7fa2fa7901ef855c8eb38c;hp=fa238f1f276239f96b30a77baad4f0b31530a8a4;p=dbsrgits%2FDBIx-Class-Historic.git

Merge 'trunk' into 'void_populate_resultset_cond'

r7765@Thesaurus (orig r7753): ribasushi | 2009-10-03 15:49:14 +0200
 Test reorg (no changes)
r7766@Thesaurus (orig r7754): ribasushi | 2009-10-03 15:55:25 +0200
 Add failing tests for RT#50003
r7767@Thesaurus (orig r7755): caelum | 2009-10-03 16:09:45 +0200
 fix on_connect_ with coderef connect_info
r7771@Thesaurus (orig r7759): ribasushi | 2009-10-04 13:17:53 +0200
 Fix AutoCast's POD
r7782@Thesaurus (orig r7770): ribasushi | 2009-10-09 06:57:20 +0200
 r7777@Thesaurus (orig r7765): frew | 2009-10-07 20:05:05 +0200
  add method to check if an rs is paginated
 r7778@Thesaurus (orig r7766): frew | 2009-10-07 20:31:02 +0200
  is_paginated method and test
 r7780@Thesaurus (orig r7768): frew | 2009-10-09 06:45:36 +0200
  change name of method
 r7781@Thesaurus (orig r7769): frew | 2009-10-09 06:47:31 +0200
  add message to changelog for is_paged
r7785@Thesaurus (orig r7773): ribasushi | 2009-10-09 11:00:36 +0200
 Ugh CRLF
r7786@Thesaurus (orig r7774): ribasushi | 2009-10-09 11:04:35 +0200
 Skip versioning test on really old perls lacking Time::HiRes
r7787@Thesaurus (orig r7775): ribasushi | 2009-10-09 11:04:50 +0200
 Changes
r7788@Thesaurus (orig r7776): triode | 2009-10-09 22:32:04 +0200
 added troubleshooting case of excessive memory allocation involving TEXT/BLOB/etc columns and large LongReadLen
r7789@Thesaurus (orig r7777): triode | 2009-10-09 22:44:21 +0200
 added my name to contributors list
r7790@Thesaurus (orig r7778): ribasushi | 2009-10-10 18:49:15 +0200
 Whoops, this isn't right
r7791@Thesaurus (orig r7779): ribasushi | 2009-10-11 15:44:18 +0200
 More ordered fixes
r7793@Thesaurus (orig r7781): norbi | 2009-10-13 11:27:18 +0200
 r7982@vger: mendel | 2009-10-13 11:26:11 +0200
  Fixed a typo and a POD error.
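The is_paged method introduced in r7777-r7781 above is a simple predicate over the resultset's page attribute. A minimal usage sketch follows; the 'Artist' resultset is hypothetical, while is_paged and pager are public DBIx::Class::ResultSet methods:

    # is_paged is true only when the resultset was built with a
    # {page} attribute; pager returns the underlying Data::Page object
    my $rs = $schema->resultset('Artist')->search(undef, {
        rows => 10,
        page => 2,
    });

    if ($rs->is_paged) {
        my $pager = $rs->pager;
        printf "showing page %d of %d\n",
            $pager->current_page, $pager->last_page;
    }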
r7805@Thesaurus (orig r7793): ribasushi | 2009-10-16 14:28:35 +0200
 Fix test to stop failing when DT-support is not present
r7811@Thesaurus (orig r7799): caelum | 2009-10-18 11:13:29 +0200
 r20728@hlagh (orig r7703): ribasushi | 2009-09-20 18:51:16 -0400
  Another try at a clean sybase branch
 r20730@hlagh (orig r7705): ribasushi | 2009-09-20 18:58:09 -0400
  Part one of the sybase work by Caelum (mostly reviewed)
 r20731@hlagh (orig r7706): ribasushi | 2009-09-20 19:18:40 -0400
  main sybase branch ready
 r21051@hlagh (orig r7797): caelum | 2009-10-18 04:57:43 -0400
  r20732@hlagh (orig r7707): ribasushi | 2009-09-20 19:20:00 -0400
   Branch for bulk insert
  r20733@hlagh (orig r7708): ribasushi | 2009-09-20 20:06:21 -0400
   All sybase bulk-insert code by Caelum
  r20750@hlagh (orig r7725): caelum | 2009-09-24 02:47:39 -0400
   clean up set_identity stuff
  r20751@hlagh (orig r7726): caelum | 2009-09-24 05:21:18 -0400
   minor cleanups, test update of blob to NULL
  r20752@hlagh (orig r7727): caelum | 2009-09-24 08:45:04 -0400
   remove some duplicate code
  r20753@hlagh (orig r7728): caelum | 2009-09-24 09:57:58 -0400
   fix insert with all defaults
  r20786@hlagh (orig r7732): caelum | 2009-09-25 21:17:16 -0400
   some cleanups
  r20804@hlagh (orig r7736): caelum | 2009-09-28 05:31:38 -0400
   minor changes
  r20805@hlagh (orig r7737): caelum | 2009-09-28 06:25:48 -0400
   fix DT stuff
  r20809@hlagh (orig r7741): caelum | 2009-09-28 22:25:55 -0400
   removed some dead code, added fix and test for _execute_array_empty
  r20811@hlagh (orig r7743): caelum | 2009-09-29 13:36:20 -0400
   minor changes after review
  r20812@hlagh (orig r7744): caelum | 2009-09-29 14:16:03 -0400
   do not clobber $rv from execute_array
  r20813@hlagh (orig r7745): caelum | 2009-09-29 14:38:14 -0400
   make insert_bulk atomic
  r20815@hlagh (orig r7747): caelum | 2009-09-29 20:35:26 -0400
   remove _exhaaust_statements
  r20816@hlagh (orig r7748): caelum | 2009-09-29 21:48:38 -0400
   fix insert_bulk when not using bulk api inside a txn
  r20831@hlagh (orig r7749): caelum | 2009-09-30 02:53:42 -0400
   added test for populate being atomic
  r20832@hlagh (orig r7750): caelum | 2009-09-30 03:00:59 -0400
   factor out subclass-specific _execute_array callback
  r20833@hlagh (orig r7751): caelum | 2009-10-01 11:59:30 -0400
   remove a piece of dead code
  r20840@hlagh (orig r7758): caelum | 2009-10-03 15:46:56 -0400
   remove _pretty_print
  r20842@hlagh (orig r7760): caelum | 2009-10-04 16:19:56 -0400
   minor optimization for insert_bulk
 r21050@hlagh (orig r7796): caelum | 2009-10-18 04:56:54 -0400
  error checking related to literal SQL for insert_bulk
r7820@Thesaurus (orig r7808): caelum | 2009-10-21 03:10:39 +0200
 add test for populate with literal sql mixed with binds, improve error messages
r7823@Thesaurus (orig r7811): ribasushi | 2009-10-21 16:33:45 +0200
 Show what's wrong with the current populate code
r7824@Thesaurus (orig r7812): caelum | 2009-10-22 11:10:38 +0200
 stringify values passed to populate/insert_bulk
r7825@Thesaurus (orig r7813): ribasushi | 2009-10-22 13:17:41 +0200
 Some smoker run the suite for 30 *minutes* - the timeout seems to be too short for them (boggle)
r7826@Thesaurus (orig r7814): caelum | 2009-10-22 14:41:37 +0200
 a few extra tests can never hurt, right? :)
r7827@Thesaurus (orig r7815): ribasushi | 2009-10-23 10:51:05 +0200
 Prevent sqlt from failing silently
r7828@Thesaurus (orig r7816): ribasushi | 2009-10-23 10:52:49 +0200
 { is_foreign_key_constraint => 0, on_delete => undef } is a valid construct - no need to carp
r7832@Thesaurus (orig r7820): robkinyon | 2009-10-26 20:11:22 +0100
 Fixed bad if-check in columns()
r7840@Thesaurus (orig r7828): caelum | 2009-10-31 14:01:56 +0100
 change repository in meta to point to real svn url rather than svnweb
r7842@Thesaurus (orig r7830): caelum | 2009-10-31 21:04:39 +0100
 pass sqlite_version to SQLT
r7843@Thesaurus (orig r7831): caelum | 2009-10-31 21:22:37 +0100
 fix regex to numify sqlite_version
r7844@Thesaurus (orig r7832): caelum | 2009-10-31 23:59:19 +0100
 work-around disconnect bug with DBD::Pg 2.15.1
r7855@Thesaurus (orig r7843): ribasushi | 2009-11-04 10:55:51 +0100
 r7817@Thesaurus (orig r7805): rbuels | 2009-10-21 02:37:28 +0200
  making a branch, here we go again with the pg_unqualified_schema
 r7818@Thesaurus (orig r7806): rbuels | 2009-10-21 02:38:59 +0200
  more pg unqualified schema tests, which expose a gap in the coverage
 r7819@Thesaurus (orig r7807): rbuels | 2009-10-21 03:10:38 +0200
  gutted Pg storage driver's sequence discovery to just rely on DBD::Pg's last_insert_id. this needs testing with older versions of DBD::Pg
 r7821@Thesaurus (orig r7809): rbuels | 2009-10-21 04:00:39 +0200
  more coverage in Pg sequence-discovery tests. i think this shows why last_insert_id cannot be used.
 r7822@Thesaurus (orig r7810): rbuels | 2009-10-21 04:07:05 +0200
  reverted [7807], and just changed code to use the custom pg_catalog query, which is the only thing that works in the pathological case where DBIC is told a different primary key from the primary key that is set on the table in the DB ([7809] added testing for this)
 r7852@Thesaurus (orig r7840): rbuels | 2009-11-03 18:47:05 +0100
  added Changes line mentioning tweak to Pg auto-inc fix
 r7854@Thesaurus (orig r7842): ribasushi | 2009-11-04 10:55:35 +0100
  Cleanup exceptions
r7858@Thesaurus (orig r7846): caelum | 2009-11-06 16:01:30 +0100
 transactions for MSSQL over DBD::Sybase
r7861@Thesaurus (orig r7849): caelum | 2009-11-10 13:16:18 +0100
 made commit/rollback when disconnected an exception
r7862@Thesaurus (orig r7850): robkinyon | 2009-11-10 17:19:57 +0100
 Added a note about select
r7863@Thesaurus (orig r7851): ribasushi | 2009-11-10 18:23:10 +0100
 Changes
r7867@Thesaurus (orig r7855): frew | 2009-11-11 21:56:37 +0100
 RT50874
r7868@Thesaurus (orig r7856): frew | 2009-11-11 23:50:43 +0100
 RT50828
r7869@Thesaurus (orig r7857): frew | 2009-11-11 23:54:15 +0100
 clearer test message
r7870@Thesaurus (orig r7858): frew | 2009-11-12 00:37:27 +0100
 some cleanup for $rs->populate
r7872@Thesaurus (orig r7860): ribasushi | 2009-11-12 01:35:36 +0100
 Fix find on resultset with custom result_class
r7873@Thesaurus (orig r7861): ribasushi | 2009-11-12 01:40:14 +0100
 Fix return value of in_storage
r7874@Thesaurus (orig r7862): ribasushi | 2009-11-12 01:43:48 +0100
 Extra FAQ entry
r7875@Thesaurus (orig r7863): ribasushi | 2009-11-12 02:11:25 +0100
 Sanify _determine_driver handling in ::Storage::DBI
r7876@Thesaurus (orig r7864): ribasushi | 2009-11-12 02:14:37 +0100
 Add mysql determine_driver test by Pedro Melo
r7881@Thesaurus (orig r7869): ribasushi | 2009-11-12 11:10:04 +0100
 _cond_for_update_delete is hopelessly broken attempting to introspect SQLA1. Replace with a horrific but effective hack
r7882@Thesaurus (orig r7870): ribasushi | 2009-11-12 11:15:12 +0100
 Clarifying comment
r7884@Thesaurus (orig r7872): ribasushi | 2009-11-13 00:13:40 +0100
 The real fix for the non-introspectable condition bug, mst++
r7885@Thesaurus (orig r7873): ribasushi | 2009-11-13 00:24:56 +0100
 Some cleanup
r7887@Thesaurus (orig r7875): frew | 2009-11-13 10:01:37 +0100
 fix subtle bug with Sybase database type determination
r7892@Thesaurus (orig r7880): frew | 2009-11-14 00:53:29 +0100
 release woo!
r7894@Thesaurus (orig r7882): caelum | 2009-11-14 03:57:52 +0100
 fix oracle dep in Makefile.PL
r7895@Thesaurus (orig r7883): caelum | 2009-11-14 04:20:53 +0100
 skip Oracle BLOB tests on DBD::Oracle == 1.23
r7897@Thesaurus (orig r7885): caelum | 2009-11-14 09:40:01 +0100
 r7357@pentium (orig r7355): caelum | 2009-08-20 17:58:23 -0400
  branch to support MSSQL over ADO
 r7358@pentium (orig r7356): caelum | 2009-08-21 00:32:14 -0400
  something apparently working
 r7359@pentium (orig r7357): caelum | 2009-08-21 00:53:53 -0400
  slightly better mars test, still passes
r7899@Thesaurus (orig r7887): caelum | 2009-11-14 09:41:54 +0100
 r7888@pentium (orig r7886): caelum | 2009-11-14 03:41:25 -0500
  add TODO test for large column list in select
r7901@Thesaurus (orig r7889): caelum | 2009-11-14 09:47:16 +0100
 add ADO/MSSQL to Changes
r7902@Thesaurus (orig r7890): caelum | 2009-11-14 10:27:29 +0100
 fix the large column list test for ADO/MSSQL, now passes
r7904@Thesaurus (orig r7892): caelum | 2009-11-14 12:20:58 +0100
 fix Changes (ADO change in wrong release)
r7905@Thesaurus (orig r7893): ribasushi | 2009-11-14 19:23:23 +0100
 Release 0.08114
r7907@Thesaurus (orig r7895): ribasushi | 2009-11-15 12:09:17 +0100
 Failing test to highlight mssql autoconnect regression
r7908@Thesaurus (orig r7896): ribasushi | 2009-11-15 12:20:25 +0100
 Fix plan
r7913@Thesaurus (orig r7901): ribasushi | 2009-11-15 13:11:38 +0100
 r7773@Thesaurus (orig r7761): norbi | 2009-10-05 14:49:06 +0200
  Created branch 'prefetch_bug-unqualified_column_in_search_related_cond': A bug that manifests when a prefetched table's column is referenced without the table name in the condition of a search_related() on an M:N relationship.
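The branch description above is easier to picture with a concrete query. The sketch below uses hypothetical CD/Tag sources joined through a cd_to_tags bridge; the tests added on that branch exercise exactly this shape of query:

    # Hypothetical M:N chain: CD has_many cd_to_tags, each cd_to_tag
    # belongs_to a tag. Prefetch pulls in the bridge and the far table:
    my $cds = $schema->resultset('CD')->search(
        {},
        { prefetch => { cd_to_tags => 'tag' } },
    );

    # 'name' below is an unqualified column name (as opposed to
    # 'tag.name') inside a search_related() condition - the situation
    # the branch describes as triggering the bug:
    my @links = $cds->search_related('cd_to_tags')
                    ->search_related('tag', { name => 'rock' });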
r7878@Thesaurus (orig r7866): ribasushi | 2009-11-12 02:36:08 +0100
 Factor some code out
r7879@Thesaurus (orig r7867): ribasushi | 2009-11-12 09:11:03 +0100
 Factor out more stuff
r7880@Thesaurus (orig r7868): ribasushi | 2009-11-12 09:21:04 +0100
 Saner naming/comments
r7910@Thesaurus (orig r7898): ribasushi | 2009-11-15 12:39:29 +0100
 Move more code to DBIHacks, put back the update/delete rs check, just in case
r7911@Thesaurus (orig r7899): ribasushi | 2009-11-15 13:01:34 +0100
 TODOify test until we get an AST
r7912@Thesaurus (orig r7900): ribasushi | 2009-11-15 13:10:15 +0100
 Hide from pause
r7921@Thesaurus (orig r7909): ribasushi | 2009-11-15 14:17:48 +0100
 r7871@Thesaurus (orig r7859): ribasushi | 2009-11-12 00:46:07 +0100
  Branches to test some ideas
 r7889@Thesaurus (orig r7877): abraxxa | 2009-11-13 12:05:50 +0100
  added rels to view result classes in test schema
 r7890@Thesaurus (orig r7878): abraxxa | 2009-11-13 13:05:45 +0100
  seems I found the bugger
 r7917@Thesaurus (orig r7905): ribasushi | 2009-11-15 13:29:23 +0100
  FK constraints towards a view don't quite work
 r7918@Thesaurus (orig r7906): ribasushi | 2009-11-15 14:10:10 +0100
  Turn into a straight-inheritance view class
 r7919@Thesaurus (orig r7907): ribasushi | 2009-11-15 14:11:03 +0100
  Extensive test of virtual and classic view relationships
 r7920@Thesaurus (orig r7908): ribasushi | 2009-11-15 14:17:23 +0100
  Fix non-sqlt schema file
r7923@Thesaurus (orig r7911): caelum | 2009-11-15 18:31:37 +0100
 fix MSSQL via DBD::Sybase regression
r7930@Thesaurus (orig r7918): ribasushi | 2009-11-16 19:15:45 +0100
 r7864@Thesaurus (orig r7852): edenc | 2009-11-10 20:15:15 +0100
  branching for fixes related to prefetch, distinct and group by
 r7865@Thesaurus (orig r7853): edenc | 2009-11-10 20:21:38 +0100
  added test case for ensuring a column mentioned in the order by clause is also included in the group by clause
 r7926@Thesaurus (orig r7914): ribasushi | 2009-11-16 08:09:30 +0100
  Make _resolve_column_info function without supplying column names
 r7927@Thesaurus (orig r7915): ribasushi | 2009-11-16 08:11:17 +0100
  Fix order_by/distinct bug
---
diff --git a/Changes b/Changes index 672f272..dafb2d5 100644 --- a/Changes +++ b/Changes @@ -1,5 +1,52 @@ Revision history for DBIx::Class + - Fix distinct => 1 with non-selecting order_by (the columns + in order_by also need to be added to the resulting group_by) + - Do not attempt to deploy FK constraints pointing to a View + +0.08114 2009-11-14 17:45:00 (UTC) + - Preliminary support for MSSQL via DBD::ADO + - Fix botched 0.08113 release (invalid tarball) + +0.08113 2009-11-13 23:13:00 (UTC) + - Fix populate with has_many bug + (RT #50828) + - Fix Oracle autoincrement broken for Resultsets with scalar refs + (RT #50874) + - Complete Sybase RDBMS support including: + - Support for TEXT/IMAGE columns + - Support for the 'money' datatype + - Transaction savepoints support + - DateTime inflation support + - Support for bind variables when connecting to a newer Sybase with + OpenClient libraries + - Support for connections via FreeTDS with CASTs for bind variables + when needed + - Support for interpolated variables with proper quoting when + connecting to an older Sybase and/or via FreeTDS + - bulk API support for populate() + - Transaction support for MSSQL via DBD::Sybase + - Add is_paged method to DBIx::Class::ResultSet so that we can + check that if we want a pager + - Skip versioning test on really old perls lacking Time::HiRes + (RT #50209) + - Fixed on_connect_do/call regression when used with a coderef 
connector (RT #50003) + - A couple of fixes to Ordered to remedy subclassing issues + - Fixed another lingering problem with PostgreSQL + auto-increment support and its interaction with multiple + schemas + - Remove some IN workarounds, and require a recent version of + SQLA instead + - Improvements to populate's handling of mixed scalarref values + - Fixed regression losing result_class after $rs->find (introduced + in 0.08108) + - Fix in_storage() to return 1|0 as per existing documentation + - Centralize handling of _determine_driver calls prior to certain + ::Storage::DBI methods + - Fix update/delete arbitrary condition handling (RT#51409) + - POD improvements + 0.08112 2009-09-21 10:57:00 (UTC) - Remove the recommends from Makefile.PL, DBIx::Class is not supposed to have optional dependencies. ever. @@ -102,7 +149,7 @@ Revision history for DBIx::Class nonexisting prefetch - make_column_dirty() now overwrites the deflated value with an inflated one if such exists - - Fixed set_$rel with where restriction deleting rows outside + - Fixed set_$rel with where restriction deleting rows outside the restriction - populate() returns the created objects or an arrayref of the created objects depending on scalar vs. list context @@ -154,7 +201,7 @@ Revision history for DBIx::Class side of the relation, to avoid duplicates - DBIC now properly handles empty inserts (invoking all default values from the DB, normally via INSERT INTO tbl DEFAULT VALUES - - Fix find_or_new/create to stop returning random rows when + - Fix find_or_new/create to stop returning random rows when default value insert is requested (RT#28875) - Make IC::DT extra warning state the column name too - It is now possible to transparrently search() on columns @@ -176,9 +223,9 @@ Revision history for DBIx::Class - Change ->count code to work correctly with DISTINCT (distinct => 1) via GROUP BY - Removed interpolation of bind vars for as_query - placeholders - are preserved and nested query bind variables are properly + are preserved and nested query bind variables are properly merged in the correct order - - Refactor DBIx::Class::Storage::DBI::Sybase to automatically + - Refactor DBIx::Class::Storage::DBI::Sybase to automatically load a subclass, namely Microsoft_SQL_Server.pm (similar to DBIx::Class::Storage::DBI::ODBC) - Refactor InflateColumn::DateTime to allow components to @@ -241,7 +288,7 @@ Revision history for DBIx::Class - not try and insert things tagged on via new_related unless required - Possible to set locale in IC::DateTime extra => {} config - Calling the accessor of a belongs_to when the foreign_key - was NULL and the row was not stored would unexpectedly fail + was NULL and the row was not stored would unexpectedly fail - Split sql statements for deploy only if SQLT::Producer returned a scalar containing all statements to be executed - Add as_query() for ResultSet and ResultSetColumn. This makes subqueries @@ -269,8 +316,8 @@ Revision history for DBIx::Class - new order_by => { -desc => 'colname' } syntax supported - PG array datatype supported - insert should use store_column, not set_column to avoid marking - clean just-stored values as dirty. New test for this - - regression test for source_name + clean just-stored values as dirty. 
New test for this + - regression test for source_name 0.08099_05 2008-10-30 21:30:00 (UTC) - Rewrite of Storage::DBI::connect_info(), extended with an @@ -284,7 +331,7 @@ Revision history for DBIx::Class - Fixed up related resultsets and multi-create - Fixed superfluous connection in ODBC::_rebless - Fixed undef PK for first insert in ODBC::Microsoft_SQL_Server - - Added virtual method to Versioned so a user can create upgrade + - Added virtual method to Versioned so a user can create upgrade path across multiple versions (jgoulah) - Better (and marginally faster) implementation of the HashRefInflator hash construction algorithm @@ -293,7 +340,7 @@ Revision history for DBIx::Class 0.08099_04 2008-07-24 01:00:00 - Functionality to storage to enable a sub to be run without FK checks - - Fixed $schema->clone bug which caused clone and source to share + - Fixed $schema->clone bug which caused clone and source to share internal hash refs - Added register_extra_source methods for additional sources - Added datetime_undef_if_invalid for InflateColumn::DateTime to @@ -319,11 +366,11 @@ Revision history for DBIx::Class - Add warnings for non-unique ResultSet::find queries - Changed Storage::DBI::Replication to Storage::DBI::Replicated and refactored support. - - By default now deploy/diff et al. will ignore constraint and index + - By default now deploy/diff et al. will ignore constraint and index names - Add ResultSet::_is_deterministic_value, make new_result filter the values passed to new to drop values that would generate invalid SQL. - - Use Sub::Name to name closures before installing them. Fixes + - Use Sub::Name to name closures before installing them. Fixes incompatibility with Moose method modifiers on generated methods. 0.08010 2008-03-01 10:30 @@ -332,7 +379,7 @@ Revision history for DBIx::Class 0.08009 2008-01-20 13:30 - Made search_rs smarter about when to preserve the cache to fix mm prefetch usage - - Added Storage::DBI subclass for MSSQL over ODBC. + - Added Storage::DBI subclass for MSSQL over ODBC. - Added freeze, thaw and dclone methods to Schema so that thawed objects will get re-attached to the schema. - Moved dbicadmin to JSON::Any wrapped JSON.pm for a sane API @@ -346,20 +393,20 @@ Revision history for DBIx::Class foreign and self parts the wrong way round in the condition - ResultSetColumn::func() now returns all results if called in list context; this makes things like func('DISTINCT') work as expected - - Many-to-many relationships now warn if the utility methods would + - Many-to-many relationships now warn if the utility methods would clash - InflateColumn::DateTime now accepts an extra parameter of timezone to set timezone on the DT object (thanks Sergio Salvi) - - Added sqlt_deploy_hook to result classes so that indexes can be + - Added sqlt_deploy_hook to result classes so that indexes can be added. - - Added startup checks to warn loudly if we appear to be running on + - Added startup checks to warn loudly if we appear to be running on RedHat systems from perl-5.8.8-10 and up that have the bless/overload patch applied (badly) which causes 2x -> 100x performance penalty. 
(Jon Schutz) - - ResultSource::reverse_relationship_info can distinguish between + - ResultSource::reverse_relationship_info can distinguish between sources using the same table - Row::insert will now not fall over if passed duplicate related objects - - Row::copy will not fall over if you have two relationships to the + - Row::copy will not fall over if you have two relationships to the same source with a unique constraint on it 0.08007 2007-09-04 19:36:00 @@ -371,7 +418,7 @@ Revision history for DBIx::Class - Move to using Class::C3::Componentised - Remove warn statement from DBIx::Class::Row -0.08005 2007-08-06 +0.08005 2007-08-06 - add timestamp fix re rt.cpan 26978 - no test yet but change clearly should cause no regressions - provide alias for related_resultset via local() so it's set @@ -386,7 +433,7 @@ Revision history for DBIx::Class (original fix from diz) 0.08004 2007-08-06 19:00:00 - - fix storage connect code to not trigger bug via auto-viv + - fix storage connect code to not trigger bug via auto-viv (test from aherzog) - fixup cursor_class to be an 'inherited' attr for per-package defaults - add default_resultset_attributes entry to Schema diff --git a/Makefile.PL b/Makefile.PL index e3885d2..2332153 100644 --- a/Makefile.PL +++ b/Makefile.PL @@ -45,6 +45,7 @@ requires 'Scope::Guard' => '0.03'; requires 'SQL::Abstract' => '1.60'; requires 'SQL::Abstract::Limit' => '0.13'; requires 'Sub::Name' => '0.04'; +requires 'Data::Dumper::Concise' => '1.000'; my %replication_requires = ( 'Moose', => '0.87', @@ -109,11 +110,17 @@ my %force_requires_if_author = ( ) : () , - $ENV{DBICTEST_ORACLE_DSN} + $ENV{DBICTEST_ORA_DSN} ? ( 'DateTime::Format::Oracle' => '0', ) : () , + + $ENV{DBICTEST_SYBASE_DSN} + ? ( + 'DateTime::Format::Sybase' => 0, + ) : () + , ); #************************************************************************# # Make ABSOLUTELY SURE that nothing on the list above is a real require, # @@ -132,15 +139,16 @@ tests_recursive (qw| resources 'IRC' => 'irc://irc.perl.org/#dbix-class'; resources 'license' => 'http://dev.perl.org/licenses/'; -resources 'repository' => 'http://dev.catalyst.perl.org/svnweb/bast/browse/DBIx-Class/'; +resources 'repository' => 'http://dev.catalyst.perl.org/repos/bast/DBIx-Class/'; resources 'MailingList' => 'http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbix-class'; -no_index 'DBIx::Class::Storage::DBI::Sybase::Base'; +no_index 'DBIx::Class::Storage::DBI::Sybase::Common'; no_index 'DBIx::Class::SQLAHacks'; no_index 'DBIx::Class::SQLAHacks::MSSQL'; no_index 'DBIx::Class::Storage::DBI::AmbiguousGlob'; no_index 'DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server'; no_index 'DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server::NoBindVars'; +no_index 'DBIx::Class::Storage::DBIHacks'; # re-build README and require extra modules for testing if we're in a checkout diff --git a/lib/DBIx/Class.pm b/lib/DBIx/Class.pm index 3c36722..07b678c 100644 --- a/lib/DBIx/Class.pm +++ b/lib/DBIx/Class.pm @@ -24,7 +24,7 @@ sub component_base_class { 'DBIx::Class' } # Always remember to do all digits for the version even if they're 0 # i.e. first release of 0.XX *must* be 0.XX000. 
This avoids fBSD ports # brain damage and presumably various other packaging systems too -$VERSION = '0.08112'; +$VERSION = '0.08114'; $VERSION = eval $VERSION; # numify for warning-free dev releases @@ -115,7 +115,7 @@ Then you can use these classes in your application's code: my $all_artists_rs = $schema->resultset('Artist'); # Output all artists names - # $artist here is a DBIx::Class::Row, which has accessors + # $artist here is a DBIx::Class::Row, which has accessors # for all its columns. Rows are also subclasses of your Result class. foreach $artist (@artists) { print $artist->name, "\n"; @@ -341,6 +341,8 @@ Todd Lipcon Tom Hukins +triode: Pete Gamache + typester: Daisuke Murase victori: Victor Igumnov diff --git a/lib/DBIx/Class/Manual/FAQ.pod b/lib/DBIx/Class/Manual/FAQ.pod index 4a5d7ba..6d35ae6 100644 --- a/lib/DBIx/Class/Manual/FAQ.pod +++ b/lib/DBIx/Class/Manual/FAQ.pod @@ -371,6 +371,9 @@ C supplied with C. =item .. insert many rows of data efficiently? +The C method in L provides +efficient bulk inserts. + =item .. update a collection of rows at the same time? Create a resultset using a search, to filter the rows of data you diff --git a/lib/DBIx/Class/Manual/Troubleshooting.pod b/lib/DBIx/Class/Manual/Troubleshooting.pod index e3b1ab3..56bcc01 100644 --- a/lib/DBIx/Class/Manual/Troubleshooting.pod +++ b/lib/DBIx/Class/Manual/Troubleshooting.pod @@ -156,5 +156,16 @@ L, L and L +=head2 Excessive Memory Allocation with TEXT/BLOB/etc. Columns and Large LongReadLen +It has been observed, using L, that creating a L +object which includes a column of data type TEXT/BLOB/etc. will allocate +LongReadLen bytes. This allocation does not leak, but if LongReadLen +is large in size, and many such row objects are created, e.g. as the +output of a ResultSet query, the memory footprint of the Perl interpreter +can grow very large. + +The solution is to use the smallest practical value for LongReadLen. + =cut diff --git a/lib/DBIx/Class/Ordered.pm b/lib/DBIx/Class/Ordered.pm index 6c7446f..5f17790 100644 --- a/lib/DBIx/Class/Ordered.pm +++ b/lib/DBIx/Class/Ordered.pm @@ -501,7 +501,7 @@ sub move_to_group { } else { my $bumped_pos_val = $self->_position_value ($to_position); - my @between = ($to_position, $new_group_last_position); + my @between = map { $self->_position_value ($_) } ($to_position, $new_group_last_position); $self->_shift_siblings (1, @between); #shift right $self->set_column( $position_column => $bumped_pos_val ); } @@ -682,27 +682,9 @@ You would want to override the methods below if you use sparse if you are working with preexisting non-normalised position data, or if you need to work with materialized path columns. -=head2 _position - - my $num_pos = $item->_position; - -Returns the B of the current object, with the -first object being at position 1, its sibling at position 2 and so on. -By default simply returns the value of L. - -=cut -sub _position { - my $self = shift; - -# #the right way to do this -# return $self->previous_siblings->count + 1; - - return $self->get_column ($self->position_column); -} - =head2 _position_from_value - my $num_pos = $item->_position_of_value ( $pos_value ) + my $num_pos = $item->_position_from_value ( $pos_value ) Returns the B of an object with a B set to C<$pos_value>. By default simply returns C<$pos_value>. @@ -864,6 +846,19 @@ sub _siblings { ); } +=head2 _position + + my $num_pos = $item->_position; + +Returns the B of the current object, with the +first object being at position 1, its sibling at position 2 and so on. 
+ +=cut +sub _position { + my $self = shift; + return $self->_position_from_value ($self->get_column ($self->position_column) ); +} + =head2 _grouping_clause This method returns one or more name=>value pairs for limiting a search diff --git a/lib/DBIx/Class/ResultSet.pm b/lib/DBIx/Class/ResultSet.pm index 851d4e2..ea0f296 100644 --- a/lib/DBIx/Class/ResultSet.pm +++ b/lib/DBIx/Class/ResultSet.pm @@ -357,9 +357,9 @@ sub search_rs { } my $rs = (ref $self)->new($self->result_source, $new_attrs); - if ($rows) { - $rs->set_cache($rows); - } + + $rs->set_cache($rows) if ($rows); + return $rs; } @@ -519,7 +519,7 @@ sub find { # in ::Relationship::Base::search_related (the row method), and furthermore # the relationship is of the 'single' type. This means that the condition # provided by the relationship (already attached to $self) is sufficient, - # as there can be only one row in the databse that would satisfy the + # as there can be only one row in the database that would satisfy the # relationship } else { @@ -530,7 +530,7 @@ sub find { } # Run the query - my $rs = $self->search ($query, $attrs); + my $rs = $self->search ($query, {result_class => $self->result_class, %$attrs}); if (keys %{$rs->_resolved_attrs->{collapse}}) { my $row = $rs->next; carp "Query returned more than one row" if $rs->next; @@ -1240,7 +1240,7 @@ sub _count_rs { my $tmp_attrs = { %$attrs }; - # take off any limits, record_filter is cdbi, and no point of ordering a count + # take off any limits, record_filter is cdbi, and no point of ordering a count delete $tmp_attrs->{$_} for (qw/select as rows offset order_by record_filter/); # overwrite the selector (supplied by the storage) @@ -1248,7 +1248,7 @@ sub _count_rs { $tmp_attrs->{as} = 'count'; # read the comment on top of the actual function to see what this does - $tmp_attrs->{from} = $self->_switch_to_inner_join_if_needed ( + $tmp_attrs->{from} = $self->result_source->schema->storage->_straight_join_to_node ( $tmp_attrs->{from}, $tmp_attrs->{alias} ); @@ -1280,11 +1280,13 @@ sub _count_subq_rs { $sub_attrs->{select} = $rsrc->storage->_subq_count_select ($rsrc, $sub_attrs); # read the comment on top of the actual function to see what this does - $sub_attrs->{from} = $self->_switch_to_inner_join_if_needed ( + $sub_attrs->{from} = $self->result_source->schema->storage->_straight_join_to_node ( $sub_attrs->{from}, $sub_attrs->{alias} ); - # this is so that ordering can be thrown away in things like Top limit + # this is so that the query can be simplified e.g. + # * non-limiting joins can be pruned + # * ordering can be thrown away in things like Top limit $sub_attrs->{-for_count_only} = 1; my $sub_rs = $rsrc->resultset_class->new ($rsrc, $sub_attrs); @@ -1301,77 +1303,6 @@ sub _count_subq_rs { return $self->_count_rs ($attrs); } - -# The DBIC relationship chaining implementation is pretty simple - every -# new related_relationship is pushed onto the {from} stack, and the {select} -# window simply slides further in. This means that when we count somewhere -# in the middle, we got to make sure that everything in the join chain is an -# actual inner join, otherwise the count will come back with unpredictable -# results (a resultset may be generated with _some_ rows regardless of if -# the relation which the $rs currently selects has rows or not). E.g. 
-# $artist_rs->cds->count - normally generates: -# SELECT COUNT( * ) FROM artist me LEFT JOIN cd cds ON cds.artist = me.artistid -# which actually returns the number of artists * (number of cds || 1) -# -# So what we do here is crawl {from}, determine if the current alias is at -# the top of the stack, and if not - make sure the chain is inner-joined down -# to the root. -# -sub _switch_to_inner_join_if_needed { - my ($self, $from, $alias) = @_; - - # subqueries and other oddness is naturally not supported - return $from if ( - ref $from ne 'ARRAY' - || - @$from <= 1 - || - ref $from->[0] ne 'HASH' - || - ! $from->[0]{-alias} - || - $from->[0]{-alias} eq $alias - ); - - my $switch_branch; - JOINSCAN: - for my $j (@{$from}[1 .. $#$from]) { - if ($j->[0]{-alias} eq $alias) { - $switch_branch = $j->[0]{-join_path}; - last JOINSCAN; - } - } - - # something else went wrong - return $from unless $switch_branch; - - # So it looks like we will have to switch some stuff around. - # local() is useless here as we will be leaving the scope - # anyway, and deep cloning is just too fucking expensive - # So replace the inner hashref manually - my @new_from = ($from->[0]); - my $sw_idx = { map { $_ => 1 } @$switch_branch }; - - for my $j (@{$from}[1 .. $#$from]) { - my $jalias = $j->[0]{-alias}; - - if ($sw_idx->{$jalias}) { - my %attrs = %{$j->[0]}; - delete $attrs{-join_type}; - push @new_from, [ - \%attrs, - @{$j}[ 1 .. $#$j ], - ]; - } - else { - push @new_from, $j; - } - } - - return \@new_from; -} - - sub _bool { return 1; } @@ -1495,8 +1426,12 @@ sub _rs_update_delete { my $rsrc = $self->result_source; + # if a condition exists we need to strip all table qualifiers + # if this is not possible we'll force a subquery below + my $cond = $rsrc->schema->storage->_strip_cond_qualifiers ($self->{cond}); + my $needs_group_by_subq = $self->_has_resolved_attr (qw/collapse group_by -join/); - my $needs_subq = $self->_has_resolved_attr (qw/row offset/); + my $needs_subq = (not defined $cond) || $self->_has_resolved_attr(qw/row offset/); if ($needs_group_by_subq or $needs_subq) { @@ -1544,70 +1479,11 @@ sub _rs_update_delete { return $rsrc->storage->$op( $rsrc, $op eq 'update' ? $values : (), - $self->_cond_for_update_delete, + $cond, ); } } - -# _cond_for_update_delete -# -# update/delete require the condition to be modified to handle -# the differing SQL syntax available. This transforms the $self->{cond} -# appropriately, returning the new condition. - -sub _cond_for_update_delete { - my ($self, $full_cond) = @_; - my $cond = {}; - - $full_cond ||= $self->{cond}; - # No-op. 
No condition, we're updating/deleting everything - return $cond unless ref $full_cond; - - if (ref $full_cond eq 'ARRAY') { - $cond = [ - map { - my %hash; - foreach my $key (keys %{$_}) { - $key =~ /([^.]+)$/; - $hash{$1} = $_->{$key}; - } - \%hash; - } @{$full_cond} - ]; - } - elsif (ref $full_cond eq 'HASH') { - if ((keys %{$full_cond})[0] eq '-and') { - $cond->{-and} = []; - my @cond = @{$full_cond->{-and}}; - for (my $i = 0; $i < @cond; $i++) { - my $entry = $cond[$i]; - my $hash; - if (ref $entry eq 'HASH') { - $hash = $self->_cond_for_update_delete($entry); - } - else { - $entry =~ /([^.]+)$/; - $hash->{$1} = $cond[++$i]; - } - push @{$cond->{-and}}, $hash; - } - } - else { - foreach my $key (keys %{$full_cond}) { - $key =~ /([^.]+)$/; - $cond->{$1} = $full_cond->{$key}; - } - } - } - else { - $self->throw_exception("Can't update/delete on resultset with condition unless hash or array"); - } - - return $cond; -} - - =head2 update =over 4 @@ -1794,10 +1670,19 @@ sub populate { } return wantarray ? @created : \@created; } else { - my ($first, @rest) = @$data; + my $first = $data->[0]; + + # if a column is a registered relationship, and is a non-blessed hash/array, consider + # it relationship data + my (@rels, @columns); + for (keys %$first) { + my $ref = ref $first->{$_}; + $self->result_source->has_relationship($_) && ($ref eq 'ARRAY' or $ref eq 'HASH') + ? push @rels, $_ + : push @columns, $_ + ; + } - my @names = grep {!ref $first->{$_}} keys %$first; - my @rels = grep { $self->result_source->has_relationship($_) } keys %$first; my @pks = $self->result_source->primary_columns; ## do the belongs_to relationships @@ -1826,7 +1711,7 @@ sub populate { delete $data->[$index]->{$rel}; $data->[$index] = {%{$data->[$index]}, %$related}; - push @names, keys %$related if $index == 0; + push @columns, keys %$related if $index == 0; } } @@ -1846,8 +1731,8 @@ sub populate { ## do bulk insert on current row $self->result_source->storage->insert_bulk( $self->result_source, - \@names, - \@values, + \@columns, + [ map { [ @$_{@columns} ] } @$data ], ); ## do the has_many relationships @@ -1856,7 +1741,7 @@ sub populate { foreach my $rel (@rels) { next unless $item->{$rel} && ref $item->{$rel} eq "ARRAY"; - my $parent = $self->find(map {{$_=>$item->{$_}} } @pks) + my $parent = $self->find({map { $_ => $item->{$_} } @pks}) || $self->throw_exception('Cannot find the relating object.'); my $child = $parent->$rel; @@ -2597,6 +2482,23 @@ sub clear_cache { shift->set_cache(undef); } +=head2 is_paged + +=over 4 + +=item Arguments: none + +=item Return Value: true, if the resultset has been paginated + +=back + +=cut + +sub is_paged { + my ($self) = @_; + return !!$self->{attrs}{page}; +} + =head2 related_resultset =over 4 @@ -2744,8 +2646,8 @@ sub _chain_relationship { }]; my $seen = { %{$attrs->{seen_join} || {} } }; - my $jpath = ($attrs->{seen_join} && keys %{$attrs->{seen_join}}) - ? $from->[-1][0]{-join_path} + my $jpath = ($attrs->{seen_join} && keys %{$attrs->{seen_join}}) + ? $from->[-1][0]{-join_path} : []; @@ -2953,6 +2855,22 @@ sub _resolved_attrs { } else { $attrs->{group_by} = [ grep { !ref($_) || (ref($_) ne 'HASH') } @{$attrs->{select}} ]; + + # add any order_by parts that are not already present in the group_by + # we need to be careful not to add any named functions/aggregates + # i.e. select => [ ... { count => 'foo', -as 'foocount' } ... 
] + my %already_grouped = map { $_ => 1 } (@{$attrs->{group_by}}); + + my $storage = $self->result_source->schema->storage; + my $rs_column_list = $storage->_resolve_column_info ($attrs->{from}); + my @chunks = $storage->sql_maker->_order_by_chunks ($attrs->{order_by}); + + for my $chunk (map { ref $_ ? @$_ : $_ } (@chunks) ) { + $chunk =~ s/\s+ (?: ASC|DESC ) \s* $//ix; + if ($rs_column_list->{$chunk} && not $already_grouped{$chunk}++) { + push @{$attrs->{group_by}}, $chunk; + } + } } } @@ -2981,7 +2899,7 @@ sub _resolved_attrs { # even though it doesn't make much sense, this is what pre 081xx has # been doing if (my $page = delete $attrs->{page}) { - $attrs->{offset} = + $attrs->{offset} = ($attrs->{rows} * ($page - 1)) + ($attrs->{offset} || 0) @@ -3176,7 +3094,7 @@ These are in no particular order: =back -Which column(s) to order the results by. +Which column(s) to order the results by. [The full list of suitable values is documented in L; the following is a summary of @@ -3271,6 +3189,9 @@ When you use function/stored procedure names and do not supply an C attribute, the column names returned are storage-dependent. E.g. MySQL would return a column named C in the above example. +B You will almost always need a corresponding 'as' entry when you use +'select'. + =head2 +select =over 4 @@ -3468,12 +3389,12 @@ exactly as you might expect. =over 4 -=item * +=item * Prefetch uses the L to populate the prefetched relationships. This may or may not be what you want. -=item * +=item * If you specify a condition on a prefetched relationship, ONLY those rows that match the prefetched condition will be fetched into that relationship. @@ -3585,8 +3506,8 @@ Adds to the WHERE clause. # only return rows WHERE deleted IS NULL for all searches __PACKAGE__->resultset_attributes({ where => { deleted => undef } }); ) -Can be overridden by passing C<{ where => undef }> as an attribute -to a resulset. +Can be overridden by passing C<< { where => undef } >> as an attribute +to a resultset. =back diff --git a/lib/DBIx/Class/ResultSource.pm b/lib/DBIx/Class/ResultSource.pm index fa08fae..b3bb934 100644 --- a/lib/DBIx/Class/ResultSource.pm +++ b/lib/DBIx/Class/ResultSource.pm @@ -389,7 +389,7 @@ sub columns { my $self = shift; $self->throw_exception( "columns() is a read-only accessor, did you mean add_columns()?" - ) if (@_ > 1); + ) if @_; return @{$self->{_ordered_columns}||[]}; } diff --git a/lib/DBIx/Class/Row.pm b/lib/DBIx/Class/Row.pm index f708d21..75b64db 100644 --- a/lib/DBIx/Class/Row.pm +++ b/lib/DBIx/Class/Row.pm @@ -424,7 +424,7 @@ L on one, sets it to false. sub in_storage { my ($self, $val) = @_; $self->{_in_storage} = $val if @_ > 1; - return $self->{_in_storage}; + return $self->{_in_storage} ? 
1 : 0; } =head2 update diff --git a/lib/DBIx/Class/Storage/DBI.pm b/lib/DBIx/Class/Storage/DBI.pm index 89cf5a3..60c6277 100644 --- a/lib/DBIx/Class/Storage/DBI.pm +++ b/lib/DBIx/Class/Storage/DBI.pm @@ -4,7 +4,7 @@ package DBIx::Class::Storage::DBI; use strict; use warnings; -use base 'DBIx::Class::Storage'; +use base qw/DBIx::Class::Storage::DBIHacks DBIx::Class::Storage/; use mro 'c3'; use Carp::Clan qw/^DBIx::Class/; @@ -13,6 +13,7 @@ use DBIx::Class::Storage::DBI::Cursor; use DBIx::Class::Storage::Statistics; use Scalar::Util(); use List::Util(); +use Data::Dumper::Concise(); # what version of sqlt do we require if deploy() without a ddl_dir is invoked # when changing also adjust the corresponding author_require in Makefile.PL @@ -40,6 +41,38 @@ __PACKAGE__->mk_group_accessors('inherited' => qw/sql_maker_class/); __PACKAGE__->sql_maker_class('DBIx::Class::SQLAHacks'); +# Each of these methods need _determine_driver called before itself +# in order to function reliably. This is a purely DRY optimization +my @rdbms_specific_methods = qw/ + sqlt_type + build_datetime_parser + datetime_parser_type + + insert + insert_bulk + update + delete + select + select_single +/; + +for my $meth (@rdbms_specific_methods) { + + my $orig = __PACKAGE__->can ($meth) + or next; + + no strict qw/refs/; + no warnings qw/redefine/; + *{__PACKAGE__ ."::$meth"} = sub { + if (not $_[0]->_driver_determined) { + $_[0]->_determine_driver; + goto $_[0]->can($meth); + } + $orig->(@_); + }; +} + + =head1 NAME DBIx::Class::Storage::DBI - DBI storage handler @@ -712,7 +745,6 @@ in MySQL's case disabled entirely. # Storage subclasses should override this sub with_deferred_fk_checks { my ($self, $sub) = @_; - $sub->(); } @@ -878,13 +910,14 @@ sub _determine_driver { my ($self) = @_; if ((not $self->_driver_determined) && (not $self->{_in_determine_driver})) { - my $started_unconnected = 0; + my $started_connected = 0; local $self->{_in_determine_driver} = 1; if (ref($self) eq __PACKAGE__) { my $driver; if ($self->_dbh) { # we are connected $driver = $self->_dbh->{Driver}{Name}; + $started_connected = 1; } else { # if connect_info is a CODEREF, we have no choice but to connect if (ref $self->_dbi_connect_info->[0] && @@ -896,7 +929,6 @@ sub _determine_driver { # try to use dsn to not require being connected, the driver may still # force a connection in _rebless to determine version ($driver) = $self->_dbi_connect_info->[0] =~ /dbi:([^:]+):/i; - $started_unconnected = 1; } } @@ -913,7 +945,7 @@ sub _determine_driver { $self->_init; # run driver-specific initializations $self->_run_connection_actions - if $started_unconnected && defined $self->_dbh; + if !$started_connected && defined $self->_dbh; } } @@ -1144,7 +1176,6 @@ sub _dbh_begin_work { sub txn_commit { my $self = shift; if ($self->{transaction_depth} == 1) { - my $dbh = $self->_dbh; $self->debugobj->txn_commit() if ($self->debug); $self->_dbh_commit; @@ -1160,7 +1191,9 @@ sub txn_commit { sub _dbh_commit { my $self = shift; - $self->_dbh->commit; + my $dbh = $self->_dbh + or $self->throw_exception('cannot COMMIT on a disconnected handle'); + $dbh->commit; } sub txn_rollback { @@ -1197,7 +1230,9 @@ sub txn_rollback { sub _dbh_rollback { my $self = shift; - $self->_dbh->rollback; + my $dbh = $self->_dbh + or $self->throw_exception('cannot ROLLBACK on a disconnected handle'); + $dbh->rollback; } # This used to be the top-half of _execute. 
It was split out to make it @@ -1300,12 +1335,6 @@ sub _execute { sub insert { my ($self, $source, $to_insert) = @_; -# redispatch to insert method of storage we reblessed into, if necessary - if (not $self->_driver_determined) { - $self->_determine_driver; - goto $self->can('insert'); - } - my $ident = $source->from; my $bind_attributes = $self->source_bind_attributes($source); @@ -1337,21 +1366,102 @@ sub insert { sub insert_bulk { my ($self, $source, $cols, $data) = @_; -# redispatch to insert_bulk method of storage we reblessed into, if necessary - if (not $self->_driver_determined) { - $self->_determine_driver; - goto $self->can('insert_bulk'); - } - my %colvalues; - my $table = $source->from; @colvalues{@$cols} = (0..$#$cols); - my ($sql, @bind) = $self->sql_maker->insert($table, \%colvalues); - $self->_query_start( $sql, @bind ); + for my $i (0..$#$cols) { + my $first_val = $data->[0][$i]; + next unless ref $first_val eq 'SCALAR'; + + $colvalues{ $cols->[$i] } = $first_val; + } + + # check for bad data and stringify stringifiable objects + my $bad_slice = sub { + my ($msg, $col_idx, $slice_idx) = @_; + $self->throw_exception(sprintf "%s for column '%s' in populate slice:\n%s", + $msg, + $cols->[$col_idx], + do { + local $Data::Dumper::Maxdepth = 1; # don't dump objects, if any + Data::Dumper::Concise::Dumper({ + map { $cols->[$_] => $data->[$slice_idx][$_] } (0 .. $#$cols) + }), + } + ); + }; + + for my $datum_idx (0..$#$data) { + my $datum = $data->[$datum_idx]; + + for my $col_idx (0..$#$cols) { + my $val = $datum->[$col_idx]; + my $sqla_bind = $colvalues{ $cols->[$col_idx] }; + my $is_literal_sql = (ref $sqla_bind) eq 'SCALAR'; + + if ($is_literal_sql) { + if (not ref $val) { + $bad_slice->('bind found where literal SQL expected', $col_idx, $datum_idx); + } + elsif ((my $reftype = ref $val) ne 'SCALAR') { + $bad_slice->("$reftype reference found where literal SQL expected", + $col_idx, $datum_idx); + } + elsif ($$val ne $$sqla_bind){ + $bad_slice->("inconsistent literal SQL value, expecting: '$$sqla_bind'", + $col_idx, $datum_idx); + } + } + elsif (my $reftype = ref $val) { + require overload; + if (overload::Method($val, '""')) { + $datum->[$col_idx] = "".$val; + } + else { + $bad_slice->("$reftype reference found where bind expected", + $col_idx, $datum_idx); + } + } + } + } + + my ($sql, $bind) = $self->_prep_for_execute ( + 'insert', undef, $source, [\%colvalues] + ); + my @bind = @$bind; + + my $empty_bind = 1 if (not @bind) && + (grep { ref $_ eq 'SCALAR' } values %colvalues) == @$cols; + + if ((not @bind) && (not $empty_bind)) { + $self->throw_exception( + 'Cannot insert_bulk without support for placeholders' + ); + } + + $self->_query_start( $sql, ['__BULK__'] ); my $sth = $self->sth($sql); -# @bind = map { ref $_ ? ''.$_ : $_ } @bind; # stringify args + my $rv = do { + if ($empty_bind) { + # bind_param_array doesn't work if there are no binds + $self->_dbh_execute_inserts_with_no_binds( $sth, scalar @$data ); + } + else { +# @bind = map { ref $_ ? ''.$_ : $_ } @bind; # stringify args + $self->_execute_array( $source, $sth, \@bind, $cols, $data ); + } + }; + + $self->_query_end( $sql, ['__BULK__'] ); + + return (wantarray ? ($rv, $sth, @bind) : $rv); +} + +sub _execute_array { + my ($self, $source, $sth, $bind, $cols, $data, @extra) = @_; + + my $guard = $self->txn_scope_guard unless $self->{transaction_depth} != 0; ## This must be an arrayref, else nothing works! 
my $tuple_status = []; @@ -1362,7 +1472,7 @@ sub insert_bulk { ## Bind the values and execute my $placeholder_index = 1; - foreach my $bound (@bind) { + foreach my $bound (@$bind) { my $attributes = {}; my ($column_name, $data_index) = @$bound; @@ -1377,63 +1487,89 @@ sub insert_bulk { $sth->bind_param_array( $placeholder_index, [@data], $attributes ); $placeholder_index++; } - my $rv = eval { $sth->execute_array({ArrayTupleStatus => $tuple_status}) }; - if (my $err = $@) { + + my $rv = eval { + $self->_dbh_execute_array($sth, $tuple_status, @extra); + }; + my $err = $@ || $sth->errstr; + +# Statement must finish even if there was an exception. + eval { $sth->finish }; + $err = $@ unless $err; + + if ($err) { my $i = 0; ++$i while $i <= $#$tuple_status && !ref $tuple_status->[$i]; - $self->throw_exception($sth->errstr || "Unexpected populate error: $err") + $self->throw_exception("Unexpected populate error: $err") if ($i > $#$tuple_status); - require Data::Dumper; - local $Data::Dumper::Terse = 1; - local $Data::Dumper::Indent = 1; - local $Data::Dumper::Useqq = 1; - local $Data::Dumper::Quotekeys = 0; - local $Data::Dumper::Sortkeys = 1; - $self->throw_exception(sprintf "%s for populate slice:\n%s", - $tuple_status->[$i][1], - Data::Dumper::Dumper( - { map { $cols->[$_] => $data->[$i][$_] } (0 .. $#$cols) } - ), + ($tuple_status->[$i][1] || $err), + Data::Dumper::Concise::Dumper({ + map { $cols->[$_] => $data->[$i][$_] } (0 .. $#$cols) + }), ); } - $self->throw_exception($sth->errstr) if !$rv; - $self->_query_end( $sql, @bind ); - return (wantarray ? ($rv, $sth, @bind) : $rv); + $guard->commit if $guard; + + return $rv; +} + +sub _dbh_execute_array { + my ($self, $sth, $tuple_status, @extra) = @_; + + return $sth->execute_array({ArrayTupleStatus => $tuple_status}); +} + +sub _dbh_execute_inserts_with_no_binds { + my ($self, $sth, $count) = @_; + + my $guard = $self->txn_scope_guard unless $self->{transaction_depth} != 0; + + eval { + my $dbh = $self->_get_dbh; + local $dbh->{RaiseError} = 1; + local $dbh->{PrintError} = 0; + + $sth->execute foreach 1..$count; + }; + my $exception = $@; + +# Make sure statement is finished even if there was an exception. + eval { $sth->finish }; + $exception = $@ unless $exception; + + $self->throw_exception($exception) if $exception; + + $guard->commit if $guard; + + return $count; } sub update { my ($self, $source, @args) = @_; -# redispatch to update method of storage we reblessed into, if necessary - if (not $self->_driver_determined) { - $self->_determine_driver; - goto $self->can('update'); - } - - my $bind_attributes = $self->source_bind_attributes($source); + my $bind_attrs = $self->source_bind_attributes($source); - return $self->_execute('update' => [], $source, $bind_attributes, @args); + return $self->_execute('update' => [], $source, $bind_attrs, @args); } sub delete { - my $self = shift @_; - my $source = shift @_; - $self->_determine_driver; + my ($self, $source, @args) = @_; + my $bind_attrs = $self->source_bind_attributes($source); - return $self->_execute('delete' => [], $source, $bind_attrs, @_); + return $self->_execute('delete' => [], $source, $bind_attrs, @args); } # We were sent here because the $rs contains a complex search # which will require a subquery to select the correct rows -# (i.e. joined or limited resultsets) +# (i.e. 
joined or limited resultsets, or non-introspectable conditions) # -# Genarating a single PK column subquery is trivial and supported +# Generating a single PK column subquery is trivial and supported # by all RDBMS. However if we have a multicolumn PK, things get ugly. # Look at _multipk_update_delete() sub _subq_update_delete { @@ -1442,14 +1578,19 @@ sub _subq_update_delete { my $rsrc = $rs->result_source; - # we already check this, but double check naively just in case. Should be removed soon + # quick check if we got a sane rs on our hands + my @pcols = $rsrc->primary_columns; + my $sel = $rs->_resolved_attrs->{select}; $sel = [ $sel ] unless ref $sel eq 'ARRAY'; - my @pcols = $rsrc->primary_columns; - if (@$sel != @pcols) { + + if ( + join ("\x00", map { join '.', $rs->{attrs}{alias}, $_ } sort @pcols) + ne + join ("\x00", sort @$sel ) + ) { $self->throw_exception ( - 'Subquery update/delete can not be called on resultsets selecting a' - .' number of columns different than the number of primary keys' + '_subq_update_delete can not be called on resultsets selecting columns other than the primary keys' ); } @@ -1651,324 +1792,6 @@ sub _select_args { return ('select', $attrs->{bind}, $ident, $bind_attrs, $select, $where, $order, @limit); } -# -# This is the code producing joined subqueries like: -# SELECT me.*, other.* FROM ( SELECT me.* FROM ... ) JOIN other ON ... -# -sub _adjust_select_args_for_complex_prefetch { - my ($self, $from, $select, $where, $attrs) = @_; - - $self->throw_exception ('Nothing to prefetch... how did we get here?!') - if not @{$attrs->{_prefetch_select}}; - - $self->throw_exception ('Complex prefetches are not supported on resultsets with a custom from attribute') - if (ref $from ne 'ARRAY' || ref $from->[0] ne 'HASH' || ref $from->[1] ne 'ARRAY'); - - - # generate inner/outer attribute lists, remove stuff that doesn't apply - my $outer_attrs = { %$attrs }; - delete $outer_attrs->{$_} for qw/where bind rows offset group_by having/; - - my $inner_attrs = { %$attrs }; - delete $inner_attrs->{$_} for qw/for collapse _prefetch_select _collapse_order_by select as/; - - - # bring over all non-collapse-induced order_by into the inner query (if any) - # the outer one will have to keep them all - delete $inner_attrs->{order_by}; - if (my $ord_cnt = @{$outer_attrs->{order_by}} - @{$outer_attrs->{_collapse_order_by}} ) { - $inner_attrs->{order_by} = [ - @{$outer_attrs->{order_by}}[ 0 .. $ord_cnt - 1] - ]; - } - - - # generate the inner/outer select lists - # for inside we consider only stuff *not* brought in by the prefetch - # on the outside we substitute any function for its alias - my $outer_select = [ @$select ]; - my $inner_select = []; - for my $i (0 .. ( @$outer_select - @{$outer_attrs->{_prefetch_select}} - 1) ) { - my $sel = $outer_select->[$i]; - - if (ref $sel eq 'HASH' ) { - $sel->{-as} ||= $attrs->{as}[$i]; - $outer_select->[$i] = join ('.', $attrs->{alias}, ($sel->{-as} || "inner_column_$i") ); - } - - push @$inner_select, $sel; - } - - # normalize a copy of $from, so it will be easier to work with further - # down (i.e. promote the initial hashref to an AoH) - $from = [ @$from ]; - $from->[0] = [ $from->[0] ]; - my %original_join_info = map { $_->[0]{-alias} => $_->[0] } (@$from); - - - # decide which parts of the join will remain in either part of - # the outer/inner query - - # First we compose a list of which aliases are used in restrictions - # (i.e. conditions/order/grouping/etc). 
Since we do not have - # introspectable SQLA, we fall back to ugly scanning of raw SQL for - # WHERE, and for pieces of ORDER BY in order to determine which aliases - # need to appear in the resulting sql. - # It may not be very efficient, but it's a reasonable stop-gap - # Also unqualified column names will not be considered, but more often - # than not this is actually ok - # - # In the same loop we enumerate part of the selection aliases, as - # it requires the same sqla hack for the time being - my ($restrict_aliases, $select_aliases, $prefetch_aliases); - { - # produce stuff unquoted, so it can be scanned - my $sql_maker = $self->sql_maker; - local $sql_maker->{quote_char}; - my $sep = $self->_sql_maker_opts->{name_sep} || '.'; - $sep = "\Q$sep\E"; - - my $non_prefetch_select_sql = $sql_maker->_recurse_fields ($inner_select); - my $prefetch_select_sql = $sql_maker->_recurse_fields ($outer_attrs->{_prefetch_select}); - my $where_sql = $sql_maker->where ($where); - my $group_by_sql = $sql_maker->_order_by({ - map { $_ => $inner_attrs->{$_} } qw/group_by having/ - }); - my @non_prefetch_order_by_chunks = (map - { ref $_ ? $_->[0] : $_ } - $sql_maker->_order_by_chunks ($inner_attrs->{order_by}) - ); - - - for my $alias (keys %original_join_info) { - my $seen_re = qr/\b $alias $sep/x; - - for my $piece ($where_sql, $group_by_sql, @non_prefetch_order_by_chunks ) { - if ($piece =~ $seen_re) { - $restrict_aliases->{$alias} = 1; - } - } - - if ($non_prefetch_select_sql =~ $seen_re) { - $select_aliases->{$alias} = 1; - } - - if ($prefetch_select_sql =~ $seen_re) { - $prefetch_aliases->{$alias} = 1; - } - - } - } - - # Add any non-left joins to the restriction list (such joins are indeed restrictions) - for my $j (values %original_join_info) { - my $alias = $j->{-alias} or next; - $restrict_aliases->{$alias} = 1 if ( - (not $j->{-join_type}) - or - ($j->{-join_type} !~ /^left (?: \s+ outer)? $/xi) - ); - } - - # mark all join parents as mentioned - # (e.g. join => { cds => 'tracks' } - tracks will need to bring cds too ) - for my $collection ($restrict_aliases, $select_aliases) { - for my $alias (keys %$collection) { - $collection->{$_} = 1 - for (@{ $original_join_info{$alias}{-join_path} || [] }); - } - } - - # construct the inner $from for the subquery - my %inner_joins = (map { %{$_ || {}} } ($restrict_aliases, $select_aliases) ); - my @inner_from; - for my $j (@$from) { - push @inner_from, $j if $inner_joins{$j->[0]{-alias}}; - } - - # if a multi-type join was needed in the subquery ("multi" is indicated by - # presence in {collapse}) - add a group_by to simulate the collapse in the subq - unless ($inner_attrs->{group_by}) { - for my $alias (keys %inner_joins) { - - # the dot comes from some weirdness in collapse - # remove after the rewrite - if ($attrs->{collapse}{".$alias"}) { - $inner_attrs->{group_by} ||= $inner_select; - last; - } - } - } - - # demote the inner_from head - $inner_from[0] = $inner_from[0][0]; - - # generate the subquery - my $subq = $self->_select_args_to_query ( - \@inner_from, - $inner_select, - $where, - $inner_attrs, - ); - - my $subq_joinspec = { - -alias => $attrs->{alias}, - -source_handle => $inner_from[0]{-source_handle}, - $attrs->{alias} => $subq, - }; - - # Generate the outer from - this is relatively easy (really just replace - # the join slot with the subquery), with a major caveat - we can not - # join anything that is non-selecting (not part of the prefetch), but at - # the same time is a multi-type relationship, as it will explode the result. 
-  #
-  # There are two possibilities here
-  # - either the join is non-restricting, in which case we simply throw it away
-  # - it is part of the restrictions, in which case we need to collapse the outer
-  #   result by tacking yet another group_by to the outside of the query
-
-  # so first generate the outer_from, up to the substitution point
-  my @outer_from;
-  while (my $j = shift @$from) {
-    if ($j->[0]{-alias} eq $attrs->{alias}) { # time to swap
-      push @outer_from, [
-        $subq_joinspec,
-        @{$j}[1 .. $#$j],
-      ];
-      last; # we'll take care of what's left in $from below
-    }
-    else {
-      push @outer_from, $j;
-    }
-  }
-
-  # see what's left - throw away if not selecting/restricting
-  # also throw in a group_by if restricting to guard against
-  # cross-join explosions
-  #
-  while (my $j = shift @$from) {
-    my $alias = $j->[0]{-alias};
-
-    if ($select_aliases->{$alias} || $prefetch_aliases->{$alias}) {
-      push @outer_from, $j;
-    }
-    elsif ($restrict_aliases->{$alias}) {
-      push @outer_from, $j;
-
-      # FIXME - this should be obviated by SQLA2, as I'll be able to
-      # have restrict_inner and restrict_outer... or something to that
-      # effect... I think...
-
-      # FIXME2 - I can't find a clean way to determine if a particular join
-      # is a multi - instead I am just treating everything as a potential
-      # explosive join (ribasushi)
-      #
-      # if (my $handle = $j->[0]{-source_handle}) {
-      #   my $rsrc = $handle->resolve;
-      #   ... need to bail out of the following if this is not a multi,
-      #   as it will be much easier on the db ...
-
-      $outer_attrs->{group_by} ||= $outer_select;
-      # }
-    }
-  }
-
-  # demote the outer_from head
-  $outer_from[0] = $outer_from[0][0];
-
-  # This is totally horrific - the $where ends up in both the inner and outer query
-  # Unfortunately not much can be done until SQLA2 introspection arrives, and even
-  # then if where conditions apply to the *right* side of the prefetch, you may have
-  # to both filter the inner select (e.g. to apply a limit) and then have to re-filter
-  # the outer select to exclude joins you didn't want in the first place
-  #
-  # OTOH it can be seen as a plus: (note that this query would make a DBA cry ;)
-  return (\@outer_from, $outer_select, $where, $outer_attrs);
-}
-
-sub _resolve_ident_sources {
-  my ($self, $ident) = @_;
-
-  my $alias2source = {};
-  my $rs_alias;
-
-  # the reason this is so contrived is that $ident may be a {from}
-  # structure, specifying multiple tables to join
-  if ( Scalar::Util::blessed($ident) && $ident->isa("DBIx::Class::ResultSource") ) {
-    # this is compat mode for insert/update/delete which do not deal with aliases
-    $alias2source->{me} = $ident;
-    $rs_alias = 'me';
-  }
-  elsif (ref $ident eq 'ARRAY') {
-
-    for (@$ident) {
-      my $tabinfo;
-      if (ref $_ eq 'HASH') {
-        $tabinfo = $_;
-        $rs_alias = $tabinfo->{-alias};
-      }
-      if (ref $_ eq 'ARRAY' and ref $_->[0] eq 'HASH') {
-        $tabinfo = $_->[0];
-      }
-
-      $alias2source->{$tabinfo->{-alias}} = $tabinfo->{-source_handle}->resolve
-        if ($tabinfo->{-source_handle});
-    }
-  }
-
-  return ($alias2source, $rs_alias);
-}
-
-# Takes $ident, \@column_names
-#
-# returns { $column_name => \%column_info, ... }
-# also note: this adds -result_source => $rsrc to the column info
-#
-# usage:
-#   my $col_sources = $self->_resolve_column_info($ident, @column_names);
-sub _resolve_column_info {
-  my ($self, $ident, $colnames) = @_;
-  my ($alias2src, $root_alias) = $self->_resolve_ident_sources($ident);
-
-  my $sep = $self->_sql_maker_opts->{name_sep} || '.';
-  $sep = "\Q$sep\E";
-
-  my (%return, %seen_cols);
-
-  # compile a global list of column names, to be able to properly
-  # disambiguate unqualified column names (if at all possible)
-  for my $alias (keys %$alias2src) {
-    my $rsrc = $alias2src->{$alias};
-    for my $colname ($rsrc->columns) {
-      push @{$seen_cols{$colname}}, $alias;
-    }
-  }
-
-  COLUMN:
-  foreach my $col (@$colnames) {
-    my ($alias, $colname) = $col =~ m/^ (?: ([^$sep]+) $sep)? (.+) $/x;
-
-    unless ($alias) {
-      # see if the column was seen exactly once (so we know which rsrc it came from)
-      if ($seen_cols{$colname} and @{$seen_cols{$colname}} == 1) {
-        $alias = $seen_cols{$colname}[0];
-      }
-      else {
-        next COLUMN;
-      }
-    }
-
-    my $rsrc = $alias2src->{$alias};
-    $return{$col} = $rsrc && {
-      %{$rsrc->column_info($colname)},
-      -result_source => $rsrc,
-      -source_alias => $alias,
-    };
-  }
-
-  return \%return;
-}
-
 # Returns a counting SELECT for a simple count
 # query. Abstracted so that a storage could override
 # this to { count => 'firstcol' } or whatever makes
@@ -1993,7 +1816,6 @@ sub _subq_count_select {
   return @pcols ? \@pcols : [ 1 ];
 }
 
-
 sub source_bind_attributes {
   my ($self, $source) = @_;
 
@@ -2227,14 +2049,7 @@ Returns the database driver name.
 
 =cut
 
 sub sqlt_type {
-  my ($self) = @_;
-
-  if (not $self->_driver_determined) {
-    $self->_determine_driver;
-    goto $self->can ('sqlt_type');
-  }
-
-  $self->_get_dbh->{Driver}->{Name};
+  shift->_get_dbh->{Driver}->{Name};
 }
 
 =head2 bind_attribute_by_data_type
 
@@ -2508,7 +2323,11 @@ sub deployment_statements {
     parser => 'SQL::Translator::Parser::DBIx::Class',
     data => $schema,
   );
-  return $tr->translate;
+
+  my $ret = $tr->translate
+    or $self->throw_exception( 'Unable to produce deployment statements: ' . $tr->error);
+
+  return $ret;
 }
 
 sub deploy {
@@ -2574,11 +2393,6 @@ See L<DBIx::Class::InflateColumn::DateTime>
 
 =cut
 
 sub build_datetime_parser {
-  if (not $_[0]->_driver_determined) {
-    $_[0]->_determine_driver;
-    goto $_[0]->can('build_datetime_parser');
-  }
-
   my $self = shift;
   my $type = $self->datetime_parser_type(@_);
   $self->ensure_class_loaded ($type);
@@ -2611,10 +2425,10 @@ sub lag_behind_master {
   return;
 }
 
-# SQLT version handling
+# SQLT version handling
 {
-  my $_sqlt_version_ok; # private
-  my $_sqlt_version_error; # private
+  my $_sqlt_version_ok;     # private
+  my $_sqlt_version_error;  # private
 
   sub _sqlt_version_ok {
     if (!defined $_sqlt_version_ok) {
diff --git a/lib/DBIx/Class/Storage/DBI/ADO.pm b/lib/DBIx/Class/Storage/DBI/ADO.pm
new file mode 100644
index 0000000..8a0fa68
--- /dev/null
+++ b/lib/DBIx/Class/Storage/DBI/ADO.pm
@@ -0,0 +1,42 @@
+package # hide from PAUSE
+    DBIx::Class::Storage::DBI::ADO;
+
+use base 'DBIx::Class::Storage::DBI';
+
+sub _rebless {
+  my $self = shift;
+
+# check for MSSQL
+# XXX This should be using an OpenSchema method of some sort, but I don't know
+# how.
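[Editor's note: a standalone sketch of the detection used below, assuming an already-connected DBD::ADO C<$dbh>. On SQL Server, C<sp_server_info @attribute_id=1> is expected to return an (attribute_id, attribute_name, attribute_value) row whose third element is the product name, which is why the code indexes element C<[2]>:

  my $row = $dbh->selectrow_arrayref(q{sp_server_info @attribute_id=1});
  (my $dbtype = $row->[2]) =~ s/\W/_/g;   # e.g. 'Microsoft SQL Server' -> 'Microsoft_SQL_Server'
  my $subclass = "DBIx::Class::Storage::DBI::ADO::$dbtype";
]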
+# Current version is stolen from Sybase.pm
+  my $dbtype = eval {
+    @{$self->_get_dbh
+      ->selectrow_arrayref(qq{sp_server_info \@attribute_id=1})
+    }[2]
+  };
+
+  unless ($@) {
+    $dbtype =~ s/\W/_/gi;
+    my $subclass = "DBIx::Class::Storage::DBI::ADO::${dbtype}";
+    if ($self->load_optional_class($subclass) && !$self->isa($subclass)) {
+      bless $self, $subclass;
+      $self->_rebless;
+    }
+  }
+}
+
+# set cursor type here, if necessary
+#sub _dbh_sth {
+#  my ($self, $dbh, $sql) = @_;
+#
+#  my $sth = $self->disable_sth_caching
+#    ? $dbh->prepare($sql, { CursorType => 'adOpenStatic' })
+#    : $dbh->prepare_cached($sql, { CursorType => 'adOpenStatic' }, 3);
+#
+#  $self->throw_exception($dbh->errstr) if !$sth;
+#
+#  $sth;
+#}
+
+1;
diff --git a/lib/DBIx/Class/Storage/DBI/ADO/Microsoft_SQL_Server.pm b/lib/DBIx/Class/Storage/DBI/ADO/Microsoft_SQL_Server.pm
new file mode 100644
index 0000000..4082a93
--- /dev/null
+++ b/lib/DBIx/Class/Storage/DBI/ADO/Microsoft_SQL_Server.pm
@@ -0,0 +1,45 @@
+package DBIx::Class::Storage::DBI::ADO::Microsoft_SQL_Server;
+
+use strict;
+use warnings;
+
+use base qw/
+  DBIx::Class::Storage::DBI::ADO
+  DBIx::Class::Storage::DBI::MSSQL
+/;
+use mro 'c3';
+
+sub _rebless {
+  my $self = shift;
+  $self->_identity_method('@@identity');
+}
+
+1;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::ADO::Microsoft_SQL_Server - Support for Microsoft
+SQL Server via DBD::ADO
+
+=head1 SYNOPSIS
+
+This subclass supports MSSQL server connections via L<DBD::ADO>.
+
+=head1 DESCRIPTION
+
+The MSSQL specific functionality is provided by
+L<DBIx::Class::Storage::DBI::MSSQL>.
+
+C<_identity_method> is set to C<@@identity>, as C<SCOPE_IDENTITY()> doesn't work
+with L<DBD::ADO>. See L<DBIx::Class::Storage::DBI::MSSQL>
+for caveats regarding this.
+
+=head1 AUTHOR
+
+See L<DBIx::Class/CONTRIBUTORS>.
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
diff --git a/lib/DBIx/Class/Storage/DBI/AutoCast.pm b/lib/DBIx/Class/Storage/DBI/AutoCast.pm
index 850015b..c887a86 100644
--- a/lib/DBIx/Class/Storage/DBI/AutoCast.pm
+++ b/lib/DBIx/Class/Storage/DBI/AutoCast.pm
@@ -10,7 +10,7 @@ __PACKAGE__->mk_group_accessors('simple' => 'auto_cast' );
 
 =head1 NAME
 
-DBIx::Class::Storage::DBI::AutoCast
+DBIx::Class::Storage::DBI::AutoCast - Storage component for RDBMS requiring explicit placeholder typing
 
 =head1 SYNOPSIS
 
@@ -29,6 +29,10 @@ converted to:
 
   CAST(? as $mapped_type)
 
+This option can also be enabled in L<connect_info|DBIx::Class::Storage::DBI/connect_info> as:
+
+  on_connect_call => ['set_auto_cast']
+
 =cut
 
 sub _prep_for_execute {
@@ -60,6 +64,26 @@ sub _prep_for_execute {
   return ($sql, $bind);
 }
 
+=head2 connect_call_set_auto_cast
+
+Executes:
+
+  $schema->storage->auto_cast(1);
+
+on connection.
+
+Used as:
+
+  on_connect_call => ['set_auto_cast']
+
+in L<connect_info|DBIx::Class::Storage::DBI/connect_info>.
+
+=cut
+
+sub connect_call_set_auto_cast {
+  my $self = shift;
+  $self->auto_cast(1);
+}
 
 =head1 AUTHOR
diff --git a/lib/DBIx/Class/Storage/DBI/MSSQL.pm b/lib/DBIx/Class/Storage/DBI/MSSQL.pm
index 3189a3c..2db2af7 100644
--- a/lib/DBIx/Class/Storage/DBI/MSSQL.pm
+++ b/lib/DBIx/Class/Storage/DBI/MSSQL.pm
@@ -160,7 +160,7 @@ sub _execute {
 
   # this should bring back the result of SELECT SCOPE_IDENTITY() we tacked
   # on in _prep_for_execute above
-  my ($identity) = $sth->fetchrow_array;
+  my ($identity) = eval { $sth->fetchrow_array };
 
   # SCOPE_IDENTITY failed, but we can do something else
   if ( (! $identity) && $self->_identity_method) {
diff --git a/lib/DBIx/Class/Storage/DBI/ODBC/Microsoft_SQL_Server.pm b/lib/DBIx/Class/Storage/DBI/ODBC/Microsoft_SQL_Server.pm
index bf01131..d0a0133 100644
--- a/lib/DBIx/Class/Storage/DBI/ODBC/Microsoft_SQL_Server.pm
+++ b/lib/DBIx/Class/Storage/DBI/ODBC/Microsoft_SQL_Server.pm
@@ -61,7 +61,7 @@ sub connect_call_use_dynamic_cursors {
   my $self = shift;
 
   if (ref($self->_dbi_connect_info->[0]) eq 'CODE') {
-    $self->throw_exception ('cannot set DBI attributes on a CODE ref connect_info');
+    $self->throw_exception ('Cannot set DBI attributes on a CODE ref connect_info');
   }
 
   my $dbi_attrs = $self->_dbi_connect_info->[-1];
diff --git a/lib/DBIx/Class/Storage/DBI/Oracle/Generic.pm b/lib/DBIx/Class/Storage/DBI/Oracle/Generic.pm
index 88cf72d..b1f3ddf 100644
--- a/lib/DBIx/Class/Storage/DBI/Oracle/Generic.pm
+++ b/lib/DBIx/Class/Storage/DBI/Oracle/Generic.pm
@@ -53,8 +53,16 @@ sub _dbh_get_autoinc_seq {
 
   my $sth;
 
+  my $source_name;
+  if ( ref $source->name ne 'SCALAR' ) {
+    $source_name = $source->name;
+  }
+  else {
+    $source_name = ${$source->name};
+  }
+
   # check for fully-qualified name (eg. SCHEMA.TABLENAME)
-  if ( my ( $schema, $table ) = $source->name =~ /(\w+)\.(\w+)/ ) {
+  if ( my ( $schema, $table ) = $source_name =~ /(\w+)\.(\w+)/ ) {
     $sql = q{
       SELECT trigger_body FROM ALL_TRIGGERS t
       WHERE t.owner = ? AND t.table_name = ?
@@ -66,7 +74,7 @@ sub _dbh_get_autoinc_seq {
   }
   else {
     $sth = $dbh->prepare($sql);
-    $sth->execute( uc( $source->name ) );
+    $sth->execute( uc( $source_name ) );
   }
   while (my ($insert_trigger) = $sth->fetchrow_array) {
     return uc($1) if $insert_trigger =~ m!(\w+)\.nextval!i; # col name goes here???
@@ -223,7 +231,7 @@ table with more than one LOB column.
 
 =cut
 
-sub source_bind_attributes 
+sub source_bind_attributes
 {
   require DBD::Oracle;
   my $self = shift;
diff --git a/lib/DBIx/Class/Storage/DBI/Pg.pm b/lib/DBIx/Class/Storage/DBI/Pg.pm
index 3d25b83..d0e8d73 100644
--- a/lib/DBIx/Class/Storage/DBI/Pg.pm
+++ b/lib/DBIx/Class/Storage/DBI/Pg.pm
@@ -26,11 +26,11 @@ sub last_insert_id {
   for my $col (@cols) {
     my $seq = ( $source->column_info($col)->{sequence} ||= $self->dbh_do('_dbh_get_autoinc_seq', $source, $col) )
-      or $self->throw_exception( "could not determine sequence for "
-                                 . $source->name
-                                 . ".$col, please consider adding a "
-                                 . "schema-qualified sequence to its column info"
-                               );
+      or $self->throw_exception( sprintf(
+        'could not determine sequence for column %s.%s, please consider adding a schema-qualified sequence to its column info',
+          $source->name,
+          $col,
+      ));
 
     push @values, $self->_dbh_last_insert_id ($self->_dbh, $seq);
   }
@@ -61,22 +61,22 @@ sub _dbh_get_autoinc_seq {
     ( $schema, $table ) = ( $1, $2 );
   }
 
-  # use DBD::Pg to fetch the column info if it is recent enough to
-  # work. otherwise, use custom SQL
-  my $seq_expr = $DBD::Pg::VERSION >= 2.015001
-    ? eval{ $dbh->column_info(undef,$schema,$table,$col)->fetchrow_hashref->{COLUMN_DEF} }
-    : $self->_dbh_get_column_default( $dbh, $schema, $table, $col );
+  # get the column default using a Postgres-specific pg_catalog query
+  my $seq_expr = $self->_dbh_get_column_default( $dbh, $schema, $table, $col );
 
   # if no default value is set on the column, or if we can't parse the
   # default value as a sequence, throw.
-  unless ( defined $seq_expr and $seq_expr =~ /^nextval\(+'([^']+)'::(?:text|regclass)\)/i ){
+  unless ( defined $seq_expr and $seq_expr =~ /^nextval\(+'([^']+)'::(?:text|regclass)\)/i ) {
     $seq_expr = '' unless defined $seq_expr;
     $schema = "$schema." if defined $schema && length $schema;
-    $self->throw_exception( "no sequence found for $schema$table.$col, check table definition, "
-                            . "or explicitly set the 'sequence' for this column in the "
-                            . $source->source_name
-                            . " class"
-                          );
+    $self->throw_exception( sprintf (
+      'no sequence found for %s%s.%s, check the RDBMS table definition or explicitly set the '.
+      "'sequence' for this column in %s",
+        $schema ? "$schema." : '',
+        $table,
+        $col,
+        $source->source_name,
+    ));
   }
 
   return $1;
diff --git a/lib/DBIx/Class/Storage/DBI/SQLite.pm b/lib/DBIx/Class/Storage/DBI/SQLite.pm
index 1e5f298..c119f4e 100644
--- a/lib/DBIx/Class/Storage/DBI/SQLite.pm
+++ b/lib/DBIx/Class/Storage/DBI/SQLite.pm
@@ -47,6 +47,22 @@ sub backup
   return $backupfile;
 }
 
+sub deployment_statements {
+  my $self = shift;
+  my ($schema, $type, $version, $dir, $sqltargs, @rest) = @_;
+
+  $sqltargs ||= {};
+
+  my $sqlite_version = $self->_get_dbh->{sqlite_version};
+
+  # numify, SQLT does a numeric comparison
+  $sqlite_version =~ s/^(\d+) \. (\d+) (?: \. (\d+))? .*/${1}.${2}/x;
+
+  $sqltargs->{producer_args}{sqlite_version} = $sqlite_version;
+
+  $self->next::method($schema, $type, $version, $dir, $sqltargs, @rest);
+}
+
 sub datetime_parser_type { return "DateTime::Format::SQLite"; }
 
 1;
diff --git a/lib/DBIx/Class/Storage/DBI/Sybase.pm b/lib/DBIx/Class/Storage/DBI/Sybase.pm
index 41b0c81..5ade896 100644
--- a/lib/DBIx/Class/Storage/DBI/Sybase.pm
+++ b/lib/DBIx/Class/Storage/DBI/Sybase.pm
@@ -4,63 +4,1137 @@ use strict;
 use warnings;
 
 use base qw/
-    DBIx::Class::Storage::DBI::Sybase::Base
-    DBIx::Class::Storage::DBI::NoBindVars
+    DBIx::Class::Storage::DBI::Sybase::Common
+    DBIx::Class::Storage::DBI::AutoCast
 /;
 use mro 'c3';
+use Carp::Clan qw/^DBIx::Class/;
+use List::Util();
+use Sub::Name();
+use Data::Dumper::Concise();
+
+__PACKAGE__->mk_group_accessors('simple' =>
+    qw/_identity _blob_log_on_update _writer_storage _is_extra_storage
+       _bulk_storage _is_bulk_storage _began_bulk_work
+       _bulk_disabled_due_to_coderef_connect_info_warned
+       _identity_method/
+);
+
+my @also_proxy_to_extra_storages = qw/
+  connect_call_set_auto_cast auto_cast connect_call_blob_setup
+  connect_call_datetime_setup
+
+  disconnect _connect_info _sql_maker _sql_maker_opts disable_sth_caching
+  auto_savepoint unsafe cursor_class debug debugobj schema
+/;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::Sybase - Sybase support for DBIx::Class
+
+=head1 SYNOPSIS
+
+This subclass supports L<DBD::Sybase> for real Sybase databases.  If you are
+using an MSSQL database via L<DBD::Sybase>, your storage will be reblessed to
+L<DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server>.
+
+=head1 DESCRIPTION
+
+If your version of Sybase does not support placeholders, then your storage
+will be reblessed to L<DBIx::Class::Storage::DBI::Sybase::NoBindVars>. You can
+also enable that driver explicitly, see the documentation for more details.
+
+With this driver there is unfortunately no way to get the C<last_insert_id>
+without doing a C<SELECT MAX(col)>, which is the only way to get the
+C<IDENTITY> value in this mode.
+
+In addition, inserts are done on a separate connection so that it's possible to
+have active cursors when doing an insert.
+
+When using C<DBIx::Class::Storage::DBI::Sybase::NoBindVars>, transactions are
+disabled, as there are no concurrency issues with C<SELECT @@IDENTITY>, which
+will work for obtaining the last insert id of an C<IDENTITY> column, instead
+of having to do C<SELECT MAX(col)>
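[Editor's note: an end-to-end sketch of the identity behaviour described above. C<My::Schema>, the C<Artist> source and its C<IDENTITY> primary key are hypothetical stand-ins, not part of the patch; the point is that C<create> returns a row whose id was fetched by whichever strategy the storage selected:

  my $schema = My::Schema->connect('dbi:Sybase:server=myserver', $user, $pass);
  my $artist = $schema->resultset('Artist')->create({ name => 'Dirq' });
  print $artist->id, "\n";   # last insert id, regardless of retrieval strategy
]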