From: Peter Rabbitson
Date: Wed, 10 Mar 2010 09:21:12 +0000 (+0000)
Subject: Merge 'trunk' into 'oracle_hierarchical_queries_rt39121'
X-Git-Tag: v0.08122~34^2~41
X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?a=commitdiff_plain;h=54161a15147b7e8c1fe3251595cf9d5dae78b59b;hp=43426175a56c02bf2ab64a902df2b317ca585fa3;p=dbsrgits%2FDBIx-Class.git

Merge 'trunk' into 'oracle_hierarchical_queries_rt39121'

r7937@Thesaurus (orig r7925): ribasushi | 2009-11-19 12:04:21 +0100 Bail out early in Versioned if no versioning checks are requested
r7938@Thesaurus (orig r7926): ribasushi | 2009-11-19 12:06:13 +0100 POD fixes
r7940@Thesaurus (orig r7928): caelum | 2009-11-22 11:03:33 +0100 fix connection setup for Sybase
r7943@Thesaurus (orig r7931): caelum | 2009-11-22 13:27:43 +0100 override _run_connection_actions for internal connection setup in sybase stuff, much cleaner this way
r7947@Thesaurus (orig r7935): ribasushi | 2009-11-23 01:18:28 +0100 Whoops
r7948@Thesaurus (orig r7936): ribasushi | 2009-11-23 01:28:50 +0100 Fix ::Versioned regression introduced in r7925
r7951@Thesaurus (orig r7939): caelum | 2009-11-23 12:32:10 +0100 add subname to rdbms_specific_methods wrapper
r7953@Thesaurus (orig r7941): caelum | 2009-11-23 13:23:14 +0100
  r21187@hlagh (orig r7933): ribasushi | 2009-11-22 18:38:34 -0500 New sybase refactor branch
  r21188@hlagh (orig r7934): ribasushi | 2009-11-22 19:06:48 -0500 refactor part 1
  r21192@hlagh (orig r7938): ribasushi | 2009-11-22 19:30:05 -0500 refactor part 2
  r21194@hlagh (orig r7940): caelum | 2009-11-23 07:06:46 -0500 fix test
r7955@Thesaurus (orig r7943): ribasushi | 2009-11-23 16:30:13 +0100 Add missing Sub::Name invocations and improve the SQLA Carp overrides
r7957@Thesaurus (orig r7945): ribasushi | 2009-11-24 10:12:49 +0100
  r7749@Thesaurus (orig r7738): norbi | 2009-09-28 22:01:39 +0200 Created branch 'void_populate_resultset_cond': Fixing a bug: $rs->populate in void context does not use the conditions from $rs.
  r7751@Thesaurus (orig r7740): norbi | 2009-09-28 23:26:06 +0200
    r7935@vger: mendel | 2009-09-28 23:25:52 +0200 Undid the previous tweaks to the already existing tests and added new tests instead.
  r7928@Thesaurus (orig r7916): ribasushi | 2009-11-16 08:48:42 +0100 Change plan
  r7956@Thesaurus (orig r7944): ribasushi | 2009-11-24 10:10:49 +0100 Better naming and a bit leaner implementation. Main idea remains the same
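The 'void_populate_resultset_cond' branch merged above fixes $rs->populate ignoring the conditions of $rs when called in void context. A minimal sketch of the pattern involved; the 'CD' source and its columns are illustrative names, not taken from the branch:

  my $rs = $schema->resultset('CD')->search({ artistid => 1 });

  # void context uses the fast bulk-insert path; after the fix the
  # artistid => 1 condition of $rs is applied to the inserted rows,
  # just as it already was in list context
  $rs->populate([
    { title => 'First'  },
    { title => 'Second' },
  ]);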
r7959@Thesaurus (orig r7947): ribasushi | 2009-11-24 10:39:52 +0100 Changes and prevent a spurious todo-pass
r7962@Thesaurus (orig r7950): ribasushi | 2009-11-24 19:43:42 +0100 Extra sqla quoting test
r7963@Thesaurus (orig r7951): ribasushi | 2009-11-24 19:48:01 +0100 Extra sqla quoting test(2)
r7964@Thesaurus (orig r7952): ribasushi | 2009-11-25 21:24:10 +0100 wtf
r7967@Thesaurus (orig r7955): ribasushi | 2009-11-26 11:07:06 +0100 cleanups
r7968@Thesaurus (orig r7956): ribasushi | 2009-11-26 12:11:21 +0100 Sanify search_related chaining code (no functional changes)
r7969@Thesaurus (orig r7957): ribasushi | 2009-11-26 12:52:05 +0100 Another count() quirk down
r7970@Thesaurus (orig r7958): ribasushi | 2009-11-26 14:23:28 +0100 Add a no-accessor column to generally test handling
r7972@Thesaurus (orig r7960): ribasushi | 2009-11-26 15:32:17 +0100 Whoops, wrong accessor (things still work though)
r7977@Thesaurus (orig r7965): ribasushi | 2009-11-26 16:43:21 +0100
  r7971@Thesaurus (orig r7959): ribasushi | 2009-11-26 14:54:17 +0100 New branch for get_inflated_column bugfix
  r7974@Thesaurus (orig r7962): ribasushi | 2009-11-26 15:56:20 +0100 Fix for rt46953
  r7975@Thesaurus (orig r7963): ribasushi | 2009-11-26 16:05:17 +0100 Make Test::More happy
  r7976@Thesaurus (orig r7964): ribasushi | 2009-11-26 16:43:09 +0100 Changes
r7980@Thesaurus (orig r7968): ribasushi | 2009-11-27 01:38:11 +0100 Fix search_related wrt grouped resultsets (distinct is currently passed to the new resultset, this is probably wrong)
r7987@Thesaurus (orig r7975): ribasushi | 2009-11-28 16:54:23 +0100 Cleanup the s.c.o. index
r7988@Thesaurus (orig r7976): ribasushi | 2009-11-28 16:57:04 +0100 Test based on http://lists.scsys.co.uk/pipermail/dbix-class/2009-November/008599.html
r8007@Thesaurus (orig r7995): castaway | 2009-11-30 16:20:19 +0100 Remove over-emphasis on +select/+as. Add docs on prefetch and other ways to get related data, with caveats etc.
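The prefetch documentation work above (castaway's r7995) centers on this usage pattern; the source and relationship names ('Artist', 'cds') are illustrative:

  # one JOINed query instead of 1 + N separate ones
  my @artists = $schema->resultset('Artist')->search(
    {},
    { prefetch => 'cds' },
  )->all;

  # the related rows are already cached - no further SQL is issued here
  my @cds = $artists[0]->cds;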
r8009@Thesaurus (orig r7997): dew | 2009-11-30 19:37:00 +0100 Alter the docs for has_many relationships to make them a little easier to grok
r8021@Thesaurus (orig r8009): castaway | 2009-12-02 14:19:40 +0100 Added note about prefetch and has_many related objects
r8029@Thesaurus (orig r8017): ribasushi | 2009-12-03 13:24:04 +0100 Source sanity check on subqueried update/delete
r8030@Thesaurus (orig r8018): ribasushi | 2009-12-03 14:39:37 +0100 Sanify populate arg handling
r8040@Thesaurus (orig r8028): ribasushi | 2009-12-04 02:46:20 +0100
  r7935@Thesaurus (orig r7923): ribasushi | 2009-11-19 11:05:04 +0100 Branches for RTs
  r7965@Thesaurus (orig r7953): ribasushi | 2009-11-26 00:19:21 +0100 Test and fix scalarref in an inflatable slot corner-case
  r7966@Thesaurus (orig r7954): ribasushi | 2009-11-26 00:24:23 +0100 Looks like we nailed a todo
  r8038@Thesaurus (orig r8026): ribasushi | 2009-12-04 02:45:40 +0100 Changes
  r8039@Thesaurus (orig r8027): ribasushi | 2009-12-04 02:46:08 +0100 Changes(2)
r8055@Thesaurus (orig r8043): ribasushi | 2009-12-07 15:11:25 +0100 Forgotten auto-savepoint example patch
r8057@Thesaurus (orig r8045): ribasushi | 2009-12-08 14:13:38 +0100 Weird test case
r8058@Thesaurus (orig r8046): ribasushi | 2009-12-08 14:23:31 +0100 Fix the test - code is correct
r8063@Thesaurus (orig r8051): ribasushi | 2009-12-09 02:33:30 +0100 It's almost 2010 - load_components('Core') is like ewwww
r8067@Thesaurus (orig r8055): caelum | 2009-12-09 18:13:33 +0100 workaround for evil ADO bug
r8068@Thesaurus (orig r8056): ribasushi | 2009-12-09 23:13:59 +0100
  r8022@Thesaurus (orig r8010): frew | 2009-12-02 17:57:17 +0100 branch for replacing TOP with RNO in MSSQL
  r8027@Thesaurus (orig r8015): frew | 2009-12-03 02:48:36 +0100 Switch to RowNumberOver for MSSQL
  r8028@Thesaurus (orig r8016): ribasushi | 2009-12-03 10:03:18 +0100 The correct top100 mssql solution and test
  r8031@Thesaurus (orig r8019): frew | 2009-12-03 15:56:35 +0100 fix RNO for MSSQL to not use a kludgy regexp
  r8032@Thesaurus (orig r8020): frew | 2009-12-04 01:33:28 +0100 initial (broken) version of 42rno.t
  r8033@Thesaurus (orig r8021): frew | 2009-12-04 01:37:06 +0100 first shot at moving stuff around
  r8034@Thesaurus (orig r8022): frew | 2009-12-04 01:45:42 +0100 rename files to get rid of numbers and use folders
  r8035@Thesaurus (orig r8023): frew | 2009-12-04 01:48:00 +0100 missed toplimit
  r8036@Thesaurus (orig r8024): frew | 2009-12-04 01:52:44 +0100 still broken rno test, but now it actually tests mssql
  r8042@Thesaurus (orig r8030): ribasushi | 2009-12-04 09:34:56 +0100 Variable clash
  r8043@Thesaurus (orig r8031): ribasushi | 2009-12-04 11:44:47 +0100 The complex prefetch rewrite actually takes care of this as cleanly as possible
  r8044@Thesaurus (orig r8032): ribasushi | 2009-12-04 11:47:22 +0100 Smarter implementation of the select top 100pct subselect handling
  r8045@Thesaurus (orig r8033): ribasushi | 2009-12-04 12:07:05 +0100 Add support for unordered limited resultsets; rename the limit helper to signify it is MS specific; make sure we don't lose group_by/having clauses
  r8046@Thesaurus (orig r8034): ribasushi | 2009-12-04 12:07:56 +0100 Un-todoify mssql limit tests - no changes necessary (throw away the obsolete generated sql checks)
  r8047@Thesaurus (orig r8035): ribasushi | 2009-12-04 12:24:13 +0100 Tests for bindvar propagation and Changes
  r8049@Thesaurus (orig r8037): ribasushi | 2009-12-04 15:01:32 +0100 KISS - a select(1) makes perfect ordering criteria
  r8050@Thesaurus (orig r8038): ribasushi | 2009-12-04 15:06:11 +0100 Unify the MSSQL and DB2 RNO implementations - they are the same
  r8051@Thesaurus (orig r8039): ribasushi | 2009-12-05 10:29:50 +0100 Wrap mssql selects in yet another subquery to make limited right-ordered join resultsets possible
  r8052@Thesaurus (orig r8040): ribasushi | 2009-12-05 10:46:41 +0100 Better not touch Top - it's too complex at this point
  r8053@Thesaurus (orig r8041): ribasushi | 2009-12-05 11:03:00 +0100 Extend test just a bit more
  r8054@Thesaurus (orig r8042): ribasushi | 2009-12-05 11:44:25 +0100 DB2 and MSSQL have different default order syntaxes
  r8056@Thesaurus (orig r8044): frew | 2009-12-08 02:10:06 +0100 add version check for mssql 2005 and greater
  r8059@Thesaurus (orig r8047): frew | 2009-12-08 16:15:50 +0100 real exception instead of die
  r8061@Thesaurus (orig r8049): ribasushi | 2009-12-09 00:19:49 +0100 Test for immediate connection with known storage type
  r8062@Thesaurus (orig r8050): frew | 2009-12-09 01:24:45 +0100 fix mssql version check so it's lazier
  r8064@Thesaurus (orig r8052): ribasushi | 2009-12-09 02:40:51 +0100 Fix comment
  r8066@Thesaurus (orig r8054): caelum | 2009-12-09 16:12:56 +0100 fix _get_mssql_version for ODBC
r8071@Thesaurus (orig r8059): frew | 2009-12-10 00:32:55 +0100 fail nicely if user doesn't have perms for xp_msver
r8073@Thesaurus (orig r8061): ribasushi | 2009-12-10 09:36:21 +0100 Changes
r8074@Thesaurus (orig r8062): ribasushi | 2009-12-10 09:53:38 +0100 First half of distinct cleanup
r8075@Thesaurus (orig r8063): frew | 2009-12-10 16:04:37 +0100 release 0.08115
r8076@Thesaurus (orig r8064): ribasushi | 2009-12-12 12:31:12 +0100 Even clearer unloaded FK exception
r8078@Thesaurus (orig r8066): ribasushi | 2009-12-12 14:27:18 +0100 As clear as it gets
r8141@Thesaurus (orig r8129): ovid | 2009-12-16 17:40:50 +0100 Have has_one/might_have warn if set on nullable columns.
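The warning ovid adds in r8129 fires when has_one/might_have is declared over nullable join columns, since those relationship types assume the related row is reliably joinable. A sketch with illustrative class and column names:

  # in MySchema::Result::Author
  __PACKAGE__->has_one(bio => 'MySchema::Result::Bio', 'author_id');

  # if bio.author_id is declared with is_nullable => 1 DBIC now warns;
  # declare the column NOT NULL (is_nullable => 0) or pick a
  # relationship type that tolerates missing rows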
r8143@Thesaurus (orig r8131): caelum | 2009-12-17 13:30:10 +0100 somewhat better fix for ADO
r8144@Thesaurus (orig r8132): caelum | 2009-12-17 13:34:20 +0100 minor changes
r8146@Thesaurus (orig r8134): caelum | 2009-12-17 17:44:34 +0100 cleanup source_bind_attributes for ADO
r8147@Thesaurus (orig r8135): caelum | 2009-12-17 18:09:55 +0100 more types for ADO fix, and documentation
r8148@Thesaurus (orig r8136): abraxxa | 2009-12-17 19:54:55 +0100 Cookbook POD fix for add_drop_table instead of add_drop_tables
r8158@Thesaurus (orig r8146): ribasushi | 2009-12-18 14:55:53 +0100
  r8150@Thesaurus (orig r8138): abraxxa | 2009-12-17 23:22:07 +0100 Views without a view_definition won't be added to the SQL::Translator::Schema by the parser + tests
  r8151@Thesaurus (orig r8139): abraxxa | 2009-12-17 23:23:33 +0100 test cleanups
  r8153@Thesaurus (orig r8141): abraxxa | 2009-12-18 14:34:14 +0100 throw_exception if view_definition is missing instead of silent skipping + test changes
  r8154@Thesaurus (orig r8142): abraxxa | 2009-12-18 14:40:32 +0100 use Test::Exception
  r8155@Thesaurus (orig r8143): abraxxa | 2009-12-18 14:42:00 +0100 fixed Changes
  r8156@Thesaurus (orig r8144): abraxxa | 2009-12-18 14:44:52 +0100 test cleanups
  r8157@Thesaurus (orig r8145): ribasushi | 2009-12-18 14:46:26 +0100 Another bit
r8160@Thesaurus (orig r8148): ribasushi | 2009-12-18 15:04:34 +0100 Fix no_index entries
r8162@Thesaurus (orig r8150): abraxxa | 2009-12-18 15:59:58 +0100 Schema POD improvement for dclone
r8163@Thesaurus (orig r8151): abraxxa | 2009-12-18 16:07:27 +0100 link to DBIx::Class::Row
r8164@Thesaurus (orig r8152): abraxxa | 2009-12-18 16:08:56 +0100 fixed typo in Changes
r8165@Thesaurus (orig r8153): abraxxa | 2009-12-18 16:14:47 +0100 dclone pod take #2
r8169@Thesaurus (orig r8157): ribasushi | 2009-12-19 18:47:42 +0100 detabify
r8170@Thesaurus (orig r8158): ribasushi | 2009-12-19 19:41:42 +0100 Fix RT52812
r8171@Thesaurus (orig r8159): caelum | 2009-12-23 07:16:29 +0100 minor POD fixes
r8175@Thesaurus (orig r8163): ribasushi | 2009-12-24 09:59:52 +0100 Fix deployment_statements context sensitivity regression
r8176@Thesaurus (orig r8164): ribasushi | 2009-12-24 10:13:37 +0100 Don't call the PK setter if no PK
r8204@Thesaurus (orig r8192): caelum | 2009-12-30 22:58:47 +0100 bump CAG dep
r8231@Thesaurus (orig r8219): matthewt | 2010-01-02 01:41:12 +0100 fix typo in variable name
r8238@Thesaurus (orig r8226): rafl | 2010-01-02 18:46:40 +0100
  Merge branch 'native_traits'
  * native_traits:
    Port replicated storage from MXAH to native traits.
    Create branch native_traits
r8244@Thesaurus (orig r8232): caelum | 2010-01-04 00:30:51 +0100 fix _rebless into sybase/mssql/nobindvars
r8247@Thesaurus (orig r8235): caelum | 2010-01-05 13:54:56 +0100
  r22328@hlagh (orig r8201): caelum | 2009-12-31 12:29:51 -0500 new branch to fix table aliases in queries over the 30char limit
  r22329@hlagh (orig r8202): caelum | 2009-12-31 12:55:50 -0500 failing test
  r22330@hlagh (orig r8203): caelum | 2009-12-31 13:00:35 -0500 switch oracle tests to done_testing()
  r22331@hlagh (orig r8204): caelum | 2009-12-31 15:02:50 -0500 got something working
  r22332@hlagh (orig r8205): caelum | 2009-12-31 15:08:30 -0500 POD touchups
  r22343@hlagh (orig r8216): caelum | 2010-01-01 07:42:03 -0500 fix uninitialized warning and a bug in ResultSet
  r22419@hlagh (orig r8234): caelum | 2010-01-05 07:53:18 -0500 append half of a base64 MD5 to shortened table aliases for Oracle
r8249@Thesaurus (orig r8237): caelum | 2010-01-05 15:27:40 +0100 minor change: use more of the hash if possible for oracle table alias shortening
r8251@Thesaurus (orig r8239): caelum | 2010-01-06 02:20:17 +0100 bump perl_version to 5.8.1
r8252@Thesaurus (orig r8240): caelum | 2010-01-06 02:21:41 +0100 remove alignment mark on base64 md5
r8260@Thesaurus (orig r8248): ribasushi | 2010-01-07 11:21:55 +0100 5.8.1 is minimum required perl
r8261@Thesaurus (orig r8249): ribasushi | 2010-01-07 11:22:42 +0100 Minor optimization
r8262@Thesaurus (orig r8250): ribasushi | 2010-01-07 11:23:35 +0100 Wrong title
r8265@Thesaurus (orig r8253): ribasushi | 2010-01-08 17:48:50 +0100 Resolve problem reported by http://lists.scsys.co.uk/pipermail/dbix-class/2009-December/008699.html
r8266@Thesaurus (orig r8254): ribasushi | 2010-01-08 17:52:01 +0100 Put utf8columns in line with the store_column fix
r8267@Thesaurus (orig r8255): ribasushi | 2010-01-08 19:03:26 +0100 Tests while hunting for something else
r8268@Thesaurus (orig r8256): ribasushi | 2010-01-08 19:14:42 +0100 Make test look even more like http://lists.scsys.co.uk/pipermail/dbix-class/2009-November/008599.html
r8277@Thesaurus (orig r8265): ribasushi | 2010-01-09 02:16:14 +0100
  r8263@Thesaurus (orig r8251): ribasushi | 2010-01-08 15:43:38 +0100 New branch to find a leak
  r8264@Thesaurus (orig r8252): ribasushi | 2010-01-08 15:52:46 +0100 Weird test failures
  r8272@Thesaurus (orig r8260): ribasushi | 2010-01-09 01:24:56 +0100 Proper invocation
  r8273@Thesaurus (orig r8261): ribasushi | 2010-01-09 01:35:34 +0100 Test for the real leak reason
  r8274@Thesaurus (orig r8262): ribasushi | 2010-01-09 01:37:33 +0100 Void ctx as it should be
  r8275@Thesaurus (orig r8263): ribasushi | 2010-01-09 02:10:13 +0100 A "fix" for sqlt-related schema leaks
  r8276@Thesaurus (orig r8264): ribasushi | 2010-01-09 02:15:53 +0100 Changes
r8287@Thesaurus (orig r8275): caelum | 2010-01-10 11:29:06 +0100
  r22483@hlagh (orig r8272): caelum | 2010-01-09 05:52:15 -0500 new branch to add "normalize_connect_info" class method to Storage::DBI
  r22495@hlagh (orig r8274): caelum | 2010-01-10 05:27:42 -0500 split connect_info parser out into private _normalize_connect_info
r8289@Thesaurus (orig r8277): caelum | 2010-01-10 12:04:52 +0100 fix connection details in ::DBI::Replicated docs
r8291@Thesaurus (orig r8279): ribasushi | 2010-01-11 09:50:21 +0100
  r8077@Thesaurus (orig r8065): ribasushi | 2009-12-12 14:24:30 +0100 Branch for yet another mssql ordered prefetch problem
  r8079@Thesaurus (orig r8067): ribasushi | 2009-12-12 14:37:48 +0100 prefetch does not get disassembled properly
  r8112@Thesaurus (orig r8100): ribasushi | 2009-12-13 00:07:00 +0100 Extra test to highlight search_related inefficiency
  r8113@Thesaurus (orig r8101): ribasushi | 2009-12-13 00:17:44 +0100 Real test for search_related and prefetch
  r8114@Thesaurus (orig r8102): ribasushi | 2009-12-13 00:19:57 +0100 Fix corner case regression on search_related on a prefetching rs
  r8115@Thesaurus (orig r8103): ribasushi | 2009-12-13 00:21:05 +0100 Isolate prefetch heads using RNO with a subquery
  r8116@Thesaurus (orig r8104): ribasushi | 2009-12-13 00:23:46 +0100 Changes
  r8125@Thesaurus (orig r8113): ribasushi | 2009-12-15 13:06:26 +0100 Extend mssql limited prefetch tests
  r8126@Thesaurus (orig r8114): ribasushi | 2009-12-15 13:08:56 +0100 Add extra test to prove Alan wrong :)
  r8132@Thesaurus (orig r8120): ribasushi | 2009-12-16 00:38:04 +0100 Do not realias tables in the RNO subqueries
  r8133@Thesaurus (orig r8121): ribasushi | 2009-12-16 00:50:52 +0100 Deliberately disturb alphabetical order
  r8134@Thesaurus (orig r8122): ribasushi | 2009-12-16 10:26:43 +0100 Got a failing test
  r8135@Thesaurus (orig r8123): ribasushi | 2009-12-16 10:49:10 +0100 Cleanup
  r8136@Thesaurus (orig r8124): ribasushi | 2009-12-16 10:51:58 +0100 More moving around
  r8137@Thesaurus (orig r8125): ribasushi | 2009-12-16 11:25:37 +0100 The real mssql problem - it's... bad
  r8138@Thesaurus (orig r8126): ribasushi | 2009-12-16 11:29:20 +0100 Clearer debug
  r8139@Thesaurus (orig r8127): ribasushi | 2009-12-16 11:47:48 +0100 This is horrific but the tests pass... maybe someone will figure out something better
  r8140@Thesaurus (orig r8128): ribasushi | 2009-12-16 16:45:47 +0100 cleanup tests
  r8187@Thesaurus (orig r8175): ribasushi | 2009-12-24 16:22:30 +0100 Ordered subqueries do not work in mssql after all
  r8271@Thesaurus (orig r8259): ribasushi | 2010-01-08 23:58:13 +0100 Cleaner RNO sql
  r8279@Thesaurus (orig r8267): ribasushi | 2010-01-09 10:13:16 +0100 Subqueries no longer experimental
  r8280@Thesaurus (orig r8268): ribasushi | 2010-01-09 11:26:46 +0100 Close the book on mssql ordered subqueries
  r8281@Thesaurus (orig r8269): ribasushi | 2010-01-09 11:36:36 +0100 Changes and typos
  r8283@Thesaurus (orig r8271): ribasushi | 2010-01-09 11:42:21 +0100 Highlight the real problem
  r8285@Thesaurus (orig r8273): ribasushi | 2010-01-10 10:07:10 +0100 Rename subquery to subselect and rewrite POD (per castaway)
  r8290@Thesaurus (orig r8278): ribasushi | 2010-01-10 17:01:24 +0100 rename as per mst
r8295@Thesaurus (orig r8283): caelum | 2010-01-11 23:42:30 +0100 make a public ::Schema::unregister_source
r8298@Thesaurus (orig r8286): abraxxa | 2010-01-12 18:04:18 +0100 fixed a typo in Changes; more detailed explanation for the warning about has_one/might_have rels on nullable columns
r8307@Thesaurus (orig r8295): abraxxa | 2010-01-13 17:28:05 +0100 added the sources parser arg to the example code
r8327@Thesaurus (orig r8315): ribasushi | 2010-01-15 01:25:39 +0100
  r8167@Thesaurus (orig r8155): ribasushi | 2009-12-19 12:50:13 +0100 New branch for null-only-result fix
  r8168@Thesaurus (orig r8156): ribasushi | 2009-12-19 12:51:21 +0100 Failing test
  r8322@Thesaurus (orig r8310): ribasushi | 2010-01-15 00:48:09 +0100 Correct test order
  r8323@Thesaurus (orig r8311): ribasushi | 2010-01-15 01:15:33 +0100 Generalize the to-node inner-join-er to apply to all related_resultset calls, not just counts
  r8324@Thesaurus (orig r8312): ribasushi | 2010-01-15 01:16:05 +0100 Adjust sql-emitter tests
  r8326@Thesaurus (orig r8314): ribasushi | 2010-01-15 01:25:10 +0100 One more sql-test fix and changes
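r8283 above promotes unregister_source to public API on ::Schema, the counterpart of register_source. Usage, with 'Artist' as an illustrative moniker:

  # detach a result source from the schema at runtime
  $schema->unregister_source('Artist');

  # the moniker no longer resolves afterwards:
  # $schema->resultset('Artist') would now throw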
r8328@Thesaurus (orig r8316): ribasushi | 2010-01-15 01:31:58 +0100 Strict mysql bugfix
r8329@Thesaurus (orig r8317): ribasushi | 2010-01-15 01:38:53 +0100 Better description of mysql strict option
r8331@Thesaurus (orig r8319): ribasushi | 2010-01-15 03:12:13 +0100 Update troubleshooting doc
r8337@Thesaurus (orig r8325): ribasushi | 2010-01-15 17:13:28 +0100 RT52674
r8346@Thesaurus (orig r8334): ribasushi | 2010-01-17 09:41:49 +0100 No method aliasing in OO code, *ever*
r8373@Thesaurus (orig r8360): ribasushi | 2010-01-18 11:54:51 +0100 Adjust my email
r8387@Thesaurus (orig r8374): ribasushi | 2010-01-19 13:07:07 +0100
  r8340@Thesaurus (orig r8328): abraxxa | 2010-01-15 19:21:20 +0100 added branch no_duplicate_indexes_for_pk_cols with test and fix
  r8343@Thesaurus (orig r8331): abraxxa | 2010-01-15 19:32:16 +0100 don't use eq_set in test
  r8344@Thesaurus (orig r8332): abraxxa | 2010-01-15 19:44:04 +0100 don't sort the primary columns because order matters for indexes
  r8345@Thesaurus (orig r8333): abraxxa | 2010-01-15 19:56:46 +0100 don't sort the key columns because the order of columns is important for indexes
  r8372@Thesaurus (orig r8359): abraxxa | 2010-01-18 10:22:09 +0100 don't sort the columns in the tests either
  r8378@Thesaurus (orig r8365): abraxxa | 2010-01-18 15:39:28 +0100 added pod section for parser args
  r8379@Thesaurus (orig r8366): abraxxa | 2010-01-18 15:53:08 +0100 better pod thanks to ribasushi
  r8380@Thesaurus (orig r8367): abraxxa | 2010-01-18 16:04:34 +0100 test and pod fixes
  r8383@Thesaurus (orig r8370): abraxxa | 2010-01-19 12:38:44 +0100 fixed Authors section; added License section; fixed t/86sqlt.t tests
  r8384@Thesaurus (orig r8371): ribasushi | 2010-01-19 12:59:52 +0100 Regenerated under new parser
  r8385@Thesaurus (orig r8372): ribasushi | 2010-01-19 13:03:51 +0100 Minor style change and white space trim
  r8386@Thesaurus (orig r8373): ribasushi | 2010-01-19 13:06:54 +0100 Changes abraxxa++
r8390@Thesaurus (orig r8377): ribasushi | 2010-01-19 13:41:03 +0100 Some minor test refactor and tab cleanups
r8394@Thesaurus (orig r8381): frew | 2010-01-19 17:34:10 +0100 add test to ensure no tabs in perl files
r8397@Thesaurus (orig r8384): frew | 2010-01-19 18:00:12 +0100 fix test to be an author dep
r8398@Thesaurus (orig r8385): ribasushi | 2010-01-19 18:19:40 +0100 First round of detabification
r8399@Thesaurus (orig r8386): frew | 2010-01-19 23:42:50 +0100 Add EOL test
r8401@Thesaurus (orig r8388): ribasushi | 2010-01-20 08:32:39 +0100 Fix minor RSC bug
r8402@Thesaurus (orig r8389): roman | 2010-01-20 15:47:26 +0100 Added a FAQ entry titled: How do I override a run time method (e.g. a relationship accessor)?
r8403@Thesaurus (orig r8390): roman | 2010-01-20 16:31:41 +0100 Added myself as a contributor.
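Context for roman's FAQ entry: relationship accessors are installed into the result class at setup time, so a plain `sub artist {...}` in the same class fights with the generated method. One generic way to wrap a generated accessor (a sketch, not necessarily the exact recipe the FAQ settled on) is a method modifier:

  package MySchema::Result::CD;   # illustrative class
  use Class::Method::Modifiers 'around';

  __PACKAGE__->belongs_to(artist => 'MySchema::Result::Artist', 'artistid');

  # wrap the generated relationship accessor instead of redefining it
  around artist => sub {
    my ($orig, $self, @args) = @_;
    my $artist = $self->$orig(@args);
    # ... custom post-processing here ...
    return $artist;
  };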
r8408@Thesaurus (orig r8395): jhannah | 2010-01-21 06:48:14 +0100 Added FAQ: Custom methods in Result classes
r8413@Thesaurus (orig r8400): frew | 2010-01-22 04:17:20 +0100 add _is_numeric to ::Row
r8418@Thesaurus (orig r8405): ribasushi | 2010-01-22 11:00:05 +0100 Generalize autoinc/count test
r8420@Thesaurus (orig r8407): ribasushi | 2010-01-22 11:11:49 +0100 Final round of detabify
r8421@Thesaurus (orig r8408): ribasushi | 2010-01-22 11:12:54 +0100 Temporarily disable whitespace checkers
r8426@Thesaurus (orig r8413): ribasushi | 2010-01-22 11:35:15 +0100 Move failing regression test away from trunk
r8431@Thesaurus (orig r8418): frew | 2010-01-22 17:05:12 +0100 fix name of _is_numeric to _is_column_numeric
r8437@Thesaurus (orig r8424): ribasushi | 2010-01-26 09:33:42 +0100 Switch to Test::Exception
r8438@Thesaurus (orig r8425): ribasushi | 2010-01-26 09:48:30 +0100 Test txn_scope_guard regression
r8439@Thesaurus (orig r8426): ribasushi | 2010-01-26 10:10:11 +0100 Fix txn_begin on external non-AC coderef regression
r8443@Thesaurus (orig r8430): ribasushi | 2010-01-26 14:19:50 +0100
  r8304@Thesaurus (orig r8292): nigel | 2010-01-13 16:05:48 +0100 Branch to extend ::Schema::Versioned to handle series of upgrades
  r8320@Thesaurus (orig r8308): nigel | 2010-01-14 16:52:50 +0100 Changes to support multiple step schema version updates
  r8321@Thesaurus (orig r8309): nigel | 2010-01-14 17:05:21 +0100 Changelog for Changes to support multiple step schema version updates
  r8393@Thesaurus (orig r8380): ribasushi | 2010-01-19 13:59:51 +0100 Botched merge (tests still fail)
  r8395@Thesaurus (orig r8382): ribasushi | 2010-01-19 17:37:07 +0100 More cleanup
  r8396@Thesaurus (orig r8383): ribasushi | 2010-01-19 17:48:09 +0100 Fix last pieces of retardation and UNtodo the quick cycle
  r8442@Thesaurus (orig r8429): ribasushi | 2010-01-26 14:18:53 +0100 No need for 2 statements to get the version
r8445@Thesaurus (orig r8432): ribasushi | 2010-01-26 14:22:16 +0100
  r8161@Thesaurus (orig r8149): ovid | 2009-12-18 15:59:56 +0100 Prefetch queries make inefficient SQL when combined with a pager. This branch is to try to isolate some of the join conditions and figure out if we can fix this.
  r8166@Thesaurus (orig r8154): ovid | 2009-12-18 18:17:55 +0100 Refactor internals to expose some join logic. Awful method and args :(
  r8319@Thesaurus (orig r8307): ovid | 2010-01-14 15:37:35 +0100 Attempt to factor our alias handling has mostly failed.
  r8330@Thesaurus (orig r8318): ribasushi | 2010-01-15 03:02:21 +0100 Better refactor
  r8332@Thesaurus (orig r8320): ribasushi | 2010-01-15 03:14:39 +0100 Better varnames
  r8347@Thesaurus (orig r8335): ribasushi | 2010-01-17 11:33:55 +0100 More mangling
  r8348@Thesaurus (orig r8336): ribasushi | 2010-01-17 13:44:00 +0100 Getting warmer
  r8349@Thesaurus (orig r8337): ribasushi | 2010-01-17 14:00:20 +0100 That was tricky :)
  r8352@Thesaurus (orig r8340): ribasushi | 2010-01-17 15:57:06 +0100 Turned out to be much trickier
  r8354@Thesaurus (orig r8342): ribasushi | 2010-01-17 16:29:20 +0100 This is made out of awesome
  r8355@Thesaurus (orig r8343): ribasushi | 2010-01-17 16:46:02 +0100 Changes
  r8400@Thesaurus (orig r8387): ribasushi | 2010-01-20 08:17:44 +0100 Whoops - need to disable quoting
r8459@Thesaurus (orig r8446): ribasushi | 2010-01-27 11:56:15 +0100 Clean up some stuff
r8463@Thesaurus (orig r8450): ribasushi | 2010-01-27 12:08:04 +0100 Merge some cleanups from the prefetch branch
r8466@Thesaurus (orig r8453): ribasushi | 2010-01-27 12:33:33 +0100 DSNs cannot be empty
r8471@Thesaurus (orig r8458): frew | 2010-01-27 21:38:42 +0100 fix silly multipk bug
r8472@Thesaurus (orig r8459): ribasushi | 2010-01-28 11:13:16 +0100 Consolidate insert_bulk guards (and make them show up correctly in the trace)
r8473@Thesaurus (orig r8460): ribasushi | 2010-01-28 11:28:30 +0100 Fix bogus test DDL
r8480@Thesaurus (orig r8467): ribasushi | 2010-01-28 22:11:59 +0100
  r8381@Thesaurus (orig r8368): moses | 2010-01-18 16:41:38 +0100 Test commit
  r8425@Thesaurus (orig r8412): ribasushi | 2010-01-22 11:25:01 +0100 Informix test + cleanups
  r8428@Thesaurus (orig r8415): ribasushi | 2010-01-22 11:59:25 +0100 Initial informix support
r8482@Thesaurus (orig r8469): ribasushi | 2010-01-28 22:19:23 +0100 Informix changes
r8483@Thesaurus (orig r8470): ribasushi | 2010-01-29 12:01:41 +0100 Require non-warning-spewing MooseX::Types
r8484@Thesaurus (orig r8471): ribasushi | 2010-01-29 12:15:15 +0100 Enhance warning test a bit (seems to fail on 5.8)
r8485@Thesaurus (orig r8472): ribasushi | 2010-01-29 13:00:54 +0100 Fugly 5.8 workaround
r8494@Thesaurus (orig r8481): frew | 2010-01-31 06:47:42 +0100 cleanup (3 arg open, 1 grep instead of 3)
r8496@Thesaurus (orig r8483): ribasushi | 2010-01-31 10:04:43 +0100 better skip message
r8510@Thesaurus (orig r8497): caelum | 2010-02-01 12:07:13 +0100 throw exception on attempt to insert a blob with DBD::Oracle == 1.23
r8511@Thesaurus (orig r8498): caelum | 2010-02-01 12:12:48 +0100 add RT link for Oracle blob bug in DBD::Oracle == 1.23
r8527@Thesaurus (orig r8514): caelum | 2010-02-02 23:20:17 +0100
  r22968@hlagh (orig r8502): caelum | 2010-02-02 05:30:47 -0500 branch to support Sybase SQL Anywhere
  r22971@hlagh (orig r8505): caelum | 2010-02-02 07:21:13 -0500 ASA last_insert_id and limit support, still needs BLOB support
  r22972@hlagh (orig r8506): caelum | 2010-02-02 08:33:57 -0500 deref table name if needed, check all columns for identity column not just PK
  r22973@hlagh (orig r8507): caelum | 2010-02-02 08:48:11 -0500 test blobs, they work, didn't have to do anything
  r22974@hlagh (orig r8508): caelum | 2010-02-02 09:15:44 -0500 fix stupid identity bug, test empty insert (works), test DTs (not working yet)
  r22976@hlagh (orig r8510): caelum | 2010-02-02 14:31:00 -0500 rename ::Sybase::ASA to ::SQLAnywhere, per mst
  r22978@hlagh (orig r8512): caelum | 2010-02-02 17:02:29 -0500 DT inflation now works
  r22979@hlagh (orig r8513): caelum | 2010-02-02 17:18:06 -0500 minor POD update
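The branch merged above adds a dedicated SQL Anywhere storage subclass, selected from the driver at connect time. A sketch; the DSN options are elided rather than guessed (see DBD::SQLAnywhere for the exact syntax):

  my $schema = MyApp::Schema->connect(
    'dbi:SQLAnywhere:...',    # driver-specific options elided
    $user,
    $password,
  );

  # after connection the storage is reblessed into the subclass
  # renamed above: DBIx::Class::Storage::DBI::SQLAnywhere
  print ref $schema->storage;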
r8528@Thesaurus (orig r8515): caelum | 2010-02-02 23:23:26 +0100
  r22895@hlagh (orig r8473): caelum | 2010-01-30 03:57:26 -0500 branch to fix computed columns in Sybase ASE
  r22911@hlagh (orig r8489): caelum | 2010-01-31 07:18:33 -0500 empty insert into a Sybase table with computed columns and either data_type => undef or default_value => SCALARREF works now
  r22912@hlagh (orig r8490): caelum | 2010-01-31 07:39:32 -0500 add POD about computed columns and timestamps for Sybase
  r22918@hlagh (orig r8496): caelum | 2010-02-01 05:09:07 -0500 update POD about Schema::Loader for Sybase
r8531@Thesaurus (orig r8518): ribasushi | 2010-02-02 23:57:27 +0100
  r8512@Thesaurus (orig r8499): boghead | 2010-02-01 23:38:13 +0100 Creating a branch for adding _post_inflate_datetime and _pre_deflate_datetime to InflateColumn::DateTime
  r8513@Thesaurus (orig r8500): boghead | 2010-02-01 23:42:14 +0100 Add _post_inflate_datetime and _pre_deflate_datetime to InflateColumn::DateTime to allow for modifying DateTime objects after inflation or before deflation.
  r8524@Thesaurus (orig r8511): boghead | 2010-02-02 22:59:28 +0100 Simplify by moving the deprecated column_info {extra}{timezone} data to {timezone} (and the same with locale)
r8533@Thesaurus (orig r8520): caelum | 2010-02-03 05:19:59 +0100 support for Sybase SQL Anywhere through ODBC
r8536@Thesaurus (orig r8523): ribasushi | 2010-02-03 08:27:54 +0100 Changes
r8537@Thesaurus (orig r8524): ribasushi | 2010-02-03 08:31:20 +0100 Quote fail
r8538@Thesaurus (orig r8525): caelum | 2010-02-03 13:21:37 +0100 test DT inflation for Sybase SQL Anywhere over ODBC too
r8539@Thesaurus (orig r8526): caelum | 2010-02-03 17:36:39 +0100 minor code cleanup for SQL Anywhere last_insert_id
r8540@Thesaurus (orig r8527): ribasushi | 2010-02-04 11:28:33 +0100 Fix bug reported by tommyt
r8548@Thesaurus (orig r8535): ribasushi | 2010-02-04 14:34:45 +0100 Prepare for new SQLA release
r8560@Thesaurus (orig r8547): ribasushi | 2010-02-05 08:59:04 +0100 Refactor some evil code
r8565@Thesaurus (orig r8552): ribasushi | 2010-02-05 17:00:12 +0100 Looks like RSC is finally (halfway) fixed
r8566@Thesaurus (orig r8553): ribasushi | 2010-02-05 17:07:13 +0100 RSC subquery cannot include the prefetch
r8567@Thesaurus (orig r8554): ribasushi | 2010-02-05 17:10:29 +0100 Fix typo and borked test
r8569@Thesaurus (orig r8556): ribasushi | 2010-02-05 17:33:12 +0100 Release 0.08116
r8571@Thesaurus (orig r8558): ribasushi | 2010-02-05 18:01:33 +0100 No idea how I missed all these fails...
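boghead's branch above adds _post_inflate_datetime/_pre_deflate_datetime hooks to InflateColumn::DateTime. A sketch of overriding the inflation-side hook in a result class, assuming (per the log entries) the hook receives and returns the DateTime object along with the column info:

  package MySchema::Result::Event;   # illustrative class
  use base 'DBIx::Class::Core';

  __PACKAGE__->load_components('InflateColumn::DateTime');

  sub _post_inflate_datetime {
    my ($self, $dt, $info) = @_;
    $dt = $self->next::method($dt, $info);
    $dt->set_time_zone('UTC');   # adjust every value after inflation
    return $dt;
  }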
r8572@Thesaurus (orig r8559): ribasushi | 2010-02-05 18:13:34 +0100 Release 0.08117
r8574@Thesaurus (orig r8561): ribasushi | 2010-02-05 18:51:12 +0100 Try to distinguish trunk from official versions
r8580@Thesaurus (orig r8567): gshank | 2010-02-05 22:29:24 +0100 add doc on 'where' attribute
r8587@Thesaurus (orig r8574): frew | 2010-02-07 21:07:03 +0100 add as_subselect_rs
r8588@Thesaurus (orig r8575): frew | 2010-02-07 21:13:04 +0100 fix longstanding unmentioned bug ("me")
r8589@Thesaurus (orig r8576): frew | 2010-02-08 06:17:43 +0100 another example of as_subselect_rs
r8590@Thesaurus (orig r8577): frew | 2010-02-08 06:23:58 +0100 fix bug in UTF8Columns
r8591@Thesaurus (orig r8578): ribasushi | 2010-02-08 09:31:01 +0100 Extend utf8columns test to trap fixed bug
r8592@Thesaurus (orig r8579): ribasushi | 2010-02-08 12:03:23 +0100 Cleanup rel accessor type handling
r8593@Thesaurus (orig r8580): ribasushi | 2010-02-08 12:20:47 +0100 Fix some fallout
r8595@Thesaurus (orig r8582): ribasushi | 2010-02-08 12:38:19 +0100 Merge some obsolete code cleanup from the prefetch branch
r8596@Thesaurus (orig r8583): ribasushi | 2010-02-08 12:42:09 +0100 Merge fix of RT54039 from prefetch branch
r8598@Thesaurus (orig r8585): ribasushi | 2010-02-08 12:48:31 +0100 Release 0.08118
r8600@Thesaurus (orig r8587): ribasushi | 2010-02-08 12:52:33 +0100 Bump trunk version
r8606@Thesaurus (orig r8593): ribasushi | 2010-02-08 16:16:44 +0100 cheaper lookup
r8609@Thesaurus (orig r8596): ribasushi | 2010-02-10 12:40:37 +0100 Consolidate last_insert_id handling with a fallback-attempt on DBI::last_insert_id
r8614@Thesaurus (orig r8601): caelum | 2010-02-10 21:29:51 +0100 workaround for Moose bug affecting Replicated storage
r8615@Thesaurus (orig r8602): caelum | 2010-02-10 21:40:07 +0100 revert Moose bug workaround, bump Moose dep for Replicated to 0.98
r8616@Thesaurus (orig r8603): caelum | 2010-02-10 22:48:34 +0100 add a couple proxy methods to Replicated so it can run
r8628@Thesaurus (orig r8615): caelum | 2010-02-11 11:35:01 +0100
  r21090@hlagh (orig r7836): caelum | 2009-11-02 06:40:52 -0500 new branch to fix unhandled methods in Storage::DBI::Replicated
  r21091@hlagh (orig r7837): caelum | 2009-11-02 06:42:00 -0500 add test to display unhandled methods
  r21092@hlagh (orig r7838): caelum | 2009-11-02 06:55:34 -0500 minor fix to last committed test
  r21093@hlagh (orig r7839): caelum | 2009-11-02 09:26:00 -0500 minor test code cleanup
  r23125@hlagh (orig r8607): caelum | 2010-02-10 19:25:51 -0500 add unimplemented Storage::DBI methods to ::DBI::Replicated
  r23130@hlagh (orig r8612): ribasushi | 2010-02-11 05:12:48 -0500 Podtesting exclusion
r8630@Thesaurus (orig r8617): frew | 2010-02-11 11:45:54 +0100 Changes (from a while ago)
r8631@Thesaurus (orig r8618): caelum | 2010-02-11 11:46:58 +0100 savepoints for SQLAnywhere
r8640@Thesaurus (orig r8627): ribasushi | 2010-02-11 12:33:19 +0100
  r8424@Thesaurus (orig r8411): ribasushi | 2010-01-22 11:19:40 +0100 Chaining POC test
r8641@Thesaurus (orig r8628): ribasushi | 2010-02-11 12:34:19 +0100
  r8426@Thesaurus (orig r8413): ribasushi | 2010-01-22 11:35:15 +0100 Move failing regression test away from trunk
r8642@Thesaurus (orig r8629): ribasushi | 2010-02-11 12:34:56 +0100
r8643@Thesaurus (orig r8630): ribasushi | 2010-02-11 12:35:03 +0100
  r8507@Thesaurus (orig r8494): frew | 2010-02-01 04:33:08 +0100 small refactor to put select/as/+select/+as etc merging in its own function
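frew's as_subselect_rs (r8574 above) wraps the current resultset in a subselect so that further chaining cannot clash with its internals, notably join aliases. An illustrative sketch:

  my $inner = $schema->resultset('CD')->search(
    { 'artist.name' => { '!=' => undef } },
    { join => 'artist' },
  );

  # the inner resultset is now opaque - the outer search does not need
  # to know anything about the 'artist' join alias
  my $rs = $inner->as_subselect_rs->search({ title => { -like => 'A%' } });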
r8644@Thesaurus (orig r8631): ribasushi | 2010-02-11 12:35:11 +0100
  r8514@Thesaurus (orig r8501): frew | 2010-02-02 05:12:29 +0100 revert actual changes from yesterday as per ribasushi's advice
r8645@Thesaurus (orig r8632): ribasushi | 2010-02-11 12:35:16 +0100
  r8522@Thesaurus (orig r8509): frew | 2010-02-02 19:39:33 +0100 delete +stuff if stuff exists
r8646@Thesaurus (orig r8633): ribasushi | 2010-02-11 12:35:23 +0100
  r8534@Thesaurus (orig r8521): frew | 2010-02-03 06:14:44 +0100 change deletion/overriding to fix t/76
r8647@Thesaurus (orig r8634): ribasushi | 2010-02-11 12:35:30 +0100
  r8535@Thesaurus (orig r8522): frew | 2010-02-03 06:57:15 +0100 some basic readability factorings (aka, fewer nested ternaries and long maps)
r8648@Thesaurus (orig r8635): ribasushi | 2010-02-11 12:36:01 +0100
  r8558@Thesaurus (orig r8545): frew | 2010-02-04 20:32:54 +0100 fix incorrect test in t/76select.t and posit an incorrect solution
r8649@Thesaurus (orig r8636): ribasushi | 2010-02-11 12:38:47 +0100
r8650@Thesaurus (orig r8637): ribasushi | 2010-02-11 12:38:57 +0100
  r8578@Thesaurus (orig r8565): ribasushi | 2010-02-05 19:11:09 +0100 Should not be needed
r8651@Thesaurus (orig r8638): ribasushi | 2010-02-11 12:39:03 +0100
  r8579@Thesaurus (orig r8566): ribasushi | 2010-02-05 19:13:24 +0100 SQLA now fixed
r8652@Thesaurus (orig r8639): ribasushi | 2010-02-11 12:39:10 +0100
  r8624@Thesaurus (orig r8611): ribasushi | 2010-02-11 10:31:08 +0100 MOAR testing
r8653@Thesaurus (orig r8640): ribasushi | 2010-02-11 12:39:17 +0100
  r8626@Thesaurus (orig r8613): frew | 2010-02-11 11:16:30 +0100 fix bad test
r8654@Thesaurus (orig r8641): ribasushi | 2010-02-11 12:39:23 +0100
  r8627@Thesaurus (orig r8614): frew | 2010-02-11 11:21:52 +0100 fix t/76, break rsc tests
r8655@Thesaurus (orig r8642): ribasushi | 2010-02-11 12:39:30 +0100
  r8632@Thesaurus (orig r8619): frew | 2010-02-11 11:53:50 +0100 fix incorrect test
r8656@Thesaurus (orig r8643): ribasushi | 2010-02-11 12:39:35 +0100
  r8633@Thesaurus (orig r8620): frew | 2010-02-11 11:54:49 +0100 make t/76s and t/88 pass by deleting from the correct attr hash
r8657@Thesaurus (orig r8644): ribasushi | 2010-02-11 12:39:40 +0100
  r8634@Thesaurus (orig r8621): frew | 2010-02-11 11:55:41 +0100 fix a test due to ordering issues
r8658@Thesaurus (orig r8645): ribasushi | 2010-02-11 12:39:45 +0100
  r8635@Thesaurus (orig r8622): frew | 2010-02-11 11:58:23 +0100 this is why you run tests before you commit them.
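The t/76select.t churn above exercises how select/as merge with their additive '+select'/'+as' counterparts. The attributes in question look like this (the function and alias names are illustrative):

  my $rs = $schema->resultset('CD')->search(
    {},
    {
      '+select' => [ { length => 'title' } ],  # appended to the default columns
      '+as'     => [ 'title_length' ],         # alias for the extra value
    },
  );

  my $len = $rs->first->get_column('title_length');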
r8659@Thesaurus (orig r8646): ribasushi | 2010-02-11 12:39:51 +0100
  r8636@Thesaurus (orig r8623): frew | 2010-02-11 12:00:59 +0100 fix another ordering issue
r8660@Thesaurus (orig r8647): ribasushi | 2010-02-11 12:39:57 +0100
  r8637@Thesaurus (orig r8624): frew | 2010-02-11 12:11:31 +0100 fix for search/select_chains
r8661@Thesaurus (orig r8648): ribasushi | 2010-02-11 12:40:03 +0100
r8662@Thesaurus (orig r8649): caelum | 2010-02-11 12:40:07 +0100 test nanosecond precision for SQLAnywhere
r8663@Thesaurus (orig r8650): ribasushi | 2010-02-11 12:40:09 +0100
  r8639@Thesaurus (orig r8626): ribasushi | 2010-02-11 12:33:03 +0100 Changes and small omission
r8666@Thesaurus (orig r8653): ribasushi | 2010-02-11 18:16:45 +0100 Changes
r8674@Thesaurus (orig r8661): ribasushi | 2010-02-12 09:12:45 +0100 Fix moose dep
r8680@Thesaurus (orig r8667): dew | 2010-02-12 18:05:11 +0100 Add is_ordered to DBIC::ResultSet
r8688@Thesaurus (orig r8675): ribasushi | 2010-02-13 09:36:29 +0100
  r8667@Thesaurus (orig r8654): ribasushi | 2010-02-11 18:17:35 +0100 Try a dep-handling idea
  r8675@Thesaurus (orig r8662): ribasushi | 2010-02-12 12:46:11 +0100 Move optional deps out of the Makefile
  r8676@Thesaurus (orig r8663): ribasushi | 2010-02-12 13:40:53 +0100 Support methods to verify group dependencies
  r8677@Thesaurus (orig r8664): ribasushi | 2010-02-12 13:45:18 +0100 Move sqlt dephandling to Optional::Deps
  r8679@Thesaurus (orig r8666): ribasushi | 2010-02-12 14:03:17 +0100 Move replicated to Opt::Deps
  r8684@Thesaurus (orig r8671): ribasushi | 2010-02-13 02:47:52 +0100 Auto-POD for Optional Deps
  r8685@Thesaurus (orig r8672): ribasushi | 2010-02-13 02:53:20 +0100 Privatize the full list method
  r8686@Thesaurus (orig r8673): ribasushi | 2010-02-13 02:59:51 +0100 Scary warning
  r8687@Thesaurus (orig r8674): ribasushi | 2010-02-13 09:35:01 +0100 Changes
r8691@Thesaurus (orig r8678): ribasushi | 2010-02-13 10:07:15 +0100 Autogen comment for Dependencies.pod
r8692@Thesaurus (orig r8679): ribasushi | 2010-02-13 10:11:24 +0100 Ask for newer M::I
r8698@Thesaurus (orig r8685): ribasushi | 2010-02-13 11:11:10 +0100 Add author/license to pod
r8699@Thesaurus (orig r8686): arcanez | 2010-02-13 13:43:22 +0100 fix typo per nuba on irc
r8705@Thesaurus (orig r8692): ribasushi | 2010-02-13 15:15:33 +0100
  r8001@Thesaurus (orig r7989): goraxe | 2009-11-30 01:14:47 +0100 Branch for dbicadmin script refactor
  r8003@Thesaurus (orig r7991): goraxe | 2009-11-30 01:26:39 +0100 add DBIx::Class::Admin
  r8024@Thesaurus (orig r8012): goraxe | 2009-12-02 22:49:27 +0100 get deployment tests to pass
  r8025@Thesaurus (orig r8013): goraxe | 2009-12-02 22:50:42 +0100 get deployment tests to pass
  r8026@Thesaurus (orig r8014): goraxe | 2009-12-02 23:52:40 +0100 all ddl tests now pass
  r8083@Thesaurus (orig r8071): goraxe | 2009-12-12 17:01:11 +0100 add quiet attribute to DBIx::Class::Admin
  r8086@Thesaurus (orig r8074): goraxe | 2009-12-12 17:36:58 +0100 add tests for data manipulation ported from 89dbicadmin.t
  r8088@Thesaurus (orig r8076): goraxe | 2009-12-12 17:38:07 +0100 add sleep 1 to t/admin/02ddl.t so insert into upgrade table does not happen too quickly
  r8089@Thesaurus (orig r8077): goraxe | 2009-12-12 17:40:33 +0100 update DBIx::Class::Admin data manip functions to pass the test
  r8095@Thesaurus (orig r8083): goraxe | 2009-12-12 19:36:22 +0100 change passing of preversion to be a parameter
  r8096@Thesaurus (orig r8084): goraxe | 2009-12-12 19:38:26 +0100 add some pod to DBIx::Class::Admin
  r8103@Thesaurus (orig r8091): goraxe | 2009-12-12 22:08:55 +0100 some changes to make DBIx::Class::Admin more compatible with dbicadmin interface
  r8104@Thesaurus (orig r8092): goraxe | 2009-12-12 22:09:39 +0100 commit refactored dbicadmin script and very minor changes to its existing test suite
  r8107@Thesaurus (orig r8095): goraxe | 2009-12-12 22:34:35 +0100 add compatibility for --op for dbicadmin, revert test suite
  r8127@Thesaurus (orig r8115): goraxe | 2009-12-15 22:14:20 +0100 dep check to end of module
  r8128@Thesaurus (orig r8116): goraxe | 2009-12-15 23:15:25 +0100 add namespace::autoclean to DBIx::Class::Admin
  r8129@Thesaurus (orig r8117): goraxe | 2009-12-15 23:16:00 +0100 update test suite to skip if cannot load DBIx::Class::Admin
  r8130@Thesaurus (orig r8118): goraxe | 2009-12-15 23:18:35 +0100 add deps check for 89dbicadmin.t
  r8131@Thesaurus (orig r8119): goraxe | 2009-12-15 23:19:01 +0100 include deps for dbicadmin/DBIx::Class::Admin in Makefile.PL
  r8149@Thesaurus (orig r8137): goraxe | 2009-12-17 23:21:50 +0100 use DBICTest::_database over creating a schema object to steal conn info
  r8338@Thesaurus (orig r8326): goraxe | 2010-01-15 19:00:17 +0100 change white space to not be tabs
  r8339@Thesaurus (orig r8327): goraxe | 2010-01-15 19:10:42 +0100 remove Module::Load from test suite
  r8358@Thesaurus (orig r8346): ribasushi | 2010-01-17 17:52:10 +0100 Real detabify
  r8359@Thesaurus (orig r8347): ribasushi | 2010-01-17 18:01:53 +0100 Fix POD (spacing matters)
  r8360@Thesaurus (orig r8348): ribasushi | 2010-01-17 21:57:53 +0100 More detabification
  r8361@Thesaurus (orig r8349): ribasushi | 2010-01-17 22:33:12 +0100 Test cleanup
  r8362@Thesaurus (orig r8350): ribasushi | 2010-01-17 22:41:11 +0100 More test cleanup
  r8363@Thesaurus (orig r8351): ribasushi | 2010-01-17 22:43:57 +0100 And more cleanup
  r8364@Thesaurus (orig r8352): ribasushi | 2010-01-17 22:51:21 +0100 Disallow mucking with INC
  r8365@Thesaurus (orig r8353): ribasushi | 2010-01-17 23:23:15 +0100 More cleanup
  r8366@Thesaurus (orig r8354): ribasushi | 2010-01-17 23:27:49 +0100 Add lib path to ENV so that $^X can see it
  r8367@Thesaurus (orig r8355): ribasushi | 2010-01-17 23:33:10 +0100 Move script-test
  r8368@Thesaurus (orig r8356): goraxe | 2010-01-17 23:35:03 +0100 change warns/dies -> carp/throw_exception
  r8369@Thesaurus (orig r8357): goraxe | 2010-01-17 23:53:54 +0100 add goraxe to contributors
  r8370@Thesaurus (orig r8358): goraxe | 2010-01-17 23:54:15 +0100 remove comment headers
  r8404@Thesaurus (orig r8391): caelum | 2010-01-20 20:54:29 +0100 minor fixups
  r8405@Thesaurus (orig r8392): goraxe | 2010-01-20 21:13:24 +0100 add private types to coerce
  r8406@Thesaurus (orig r8393): goraxe | 2010-01-20 21:17:19 +0100 remove un-needed coerce from schema_class of type Str
  r8411@Thesaurus (orig r8398): caelum | 2010-01-21 23:36:25 +0100 minor documentation updates
  r8436@Thesaurus (orig r8423): caelum | 2010-01-25 02:56:30 +0100 this code never runs anyway
  r8440@Thesaurus (orig r8427): caelum | 2010-01-26 14:05:53 +0100 prefer JSON::DWIW for barekey support
  r8693@Thesaurus (orig r8680): ribasushi | 2010-02-13 10:27:18 +0100 dbicadmin dependencies
  r8694@Thesaurus (orig r8681): ribasushi | 2010-02-13 10:28:04 +0100 Some cleanup, make use of Text::CSV
  r8695@Thesaurus (orig r8682): ribasushi | 2010-02-13 10:34:19 +0100 We use Try::Tiny in a single spot, not grounds for inclusion in deps
  r8696@Thesaurus (orig r8683): ribasushi | 2010-02-13 10:37:30 +0100 POD section
  r8697@Thesaurus (orig r8684): ribasushi | 2010-02-13 11:05:17 +0100 Switch tests to Optional::Deps
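goraxe's refactor above moves dbicadmin's logic into DBIx::Class::Admin. A sketch of the programmatic interface, built from the attribute names the log mentions (schema_class, preversion, connection info); treat the exact argument set as an assumption:

  use DBIx::Class::Admin;

  my $admin = DBIx::Class::Admin->new(
    schema_class => 'MyApp::Schema',                     # illustrative
    connect_info => [ 'dbi:SQLite:myapp.db', '', '' ],
  );

  $admin->deploy;   # same DDL handling the dbicadmin script drives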
r8700@Thesaurus (orig r8687): ribasushi | 2010-02-13 14:32:50 +0100 Switch Admin/dbicadmin to Opt::Deps
r8702@Thesaurus (orig r8689): ribasushi | 2010-02-13 14:39:24 +0100 JSON dep is needed for Admin.pm itself
r8703@Thesaurus (orig r8690): ribasushi | 2010-02-13 15:06:28 +0100 Test fixes
r8704@Thesaurus (orig r8691): ribasushi | 2010-02-13 15:13:31 +0100 Changes
r8707@Thesaurus (orig r8694): ribasushi | 2010-02-13 16:37:57 +0100 Test for optional deps manager
r8710@Thesaurus (orig r8697): caelum | 2010-02-14 05:22:03 +0100 add doc on maximum cursors for SQLAnywhere
r8711@Thesaurus (orig r8698): ribasushi | 2010-02-14 09:23:09 +0100 Cleanup dependencies / Admin inheritance
r8712@Thesaurus (orig r8699): ribasushi | 2010-02-14 09:28:29 +0100 Some formatting
r8715@Thesaurus (orig r8702): ribasushi | 2010-02-14 10:46:51 +0100 This is Moose, so use CMOP
r8720@Thesaurus (orig r8707): ribasushi | 2010-02-15 10:28:22 +0100 Final POD touches
r8721@Thesaurus (orig r8708): ribasushi | 2010-02-15 10:31:38 +0100 Spellcheck (jawnsy++)
r8722@Thesaurus (orig r8709): ribasushi | 2010-02-15 10:32:24 +0100 One more
r8723@Thesaurus (orig r8710): ribasushi | 2010-02-15 14:49:26 +0100 Release 0.08119
r8725@Thesaurus (orig r8712): ribasushi | 2010-02-15 14:50:56 +0100 Bump trunk version
r8726@Thesaurus (orig r8713): rafl | 2010-02-15 15:49:55 +0100 Make sure we actually run all tests, given we're using done_testing.
r8727@Thesaurus (orig r8714): rafl | 2010-02-15 15:50:01 +0100 Make sure overriding deployment_statements is possible from within schemas.
r8728@Thesaurus (orig r8715): rafl | 2010-02-15 15:56:06 +0100 Changelogging.
r8729@Thesaurus (orig r8716): rafl | 2010-02-15 15:58:09 +0100 Make some cookbook code compile.
r8730@Thesaurus (orig r8717): nuba | 2010-02-15 16:11:52 +0100 spelling fixes in the documaentation, sholud be gud now ;)
r8732@Thesaurus (orig r8719): caelum | 2010-02-16 11:09:58 +0100 use OO interface of Hash::Merge for ::DBI::Replicated
r8734@Thesaurus (orig r8721): ribasushi | 2010-02-16 11:41:06 +0100 Augment did-author-run-makefile check to include OptDeps
r8735@Thesaurus (orig r8722): ribasushi | 2010-02-16 12:16:06 +0100 Reorg support section, add live-chat link
r8739@Thesaurus (orig r8726): caelum | 2010-02-16 14:51:58 +0100 set behavior for Hash::Merge in ::DBI::Replicated, otherwise it uses the global setting
r8740@Thesaurus (orig r8727): caelum | 2010-02-16 15:43:25 +0100 POD touchups
r8759@Thesaurus (orig r8746): ribasushi | 2010-02-19 00:30:37 +0100 Fix bogus test
r8760@Thesaurus (orig r8747): ribasushi | 2010-02-19 00:34:22 +0100 Retire useless abstraction (all rdbms need this anyway)
r8761@Thesaurus (orig r8748): ribasushi | 2010-02-19 00:35:01 +0100 Fix count of group_by over aliased function
r8765@Thesaurus (orig r8752): ribasushi | 2010-02-19 10:11:20 +0100
  r8497@Thesaurus (orig r8484): ribasushi | 2010-01-31 10:06:29 +0100 Branch to unify mandatory PK handling
  r8498@Thesaurus (orig r8485): ribasushi | 2010-01-31 10:20:36 +0100 This is not really used for anything (same code in DBI)
  r8499@Thesaurus (orig r8486): ribasushi | 2010-01-31 10:25:55 +0100 Helper primary_columns wrapper to throw if a PK is not defined
  r8500@Thesaurus (orig r8487): ribasushi | 2010-01-31 11:07:25 +0100 Stupid errors
  r8501@Thesaurus (orig r8488): ribasushi | 2010-01-31 12:18:57 +0100 Saner handling of nonexistent/partial conditions
  r8762@Thesaurus (orig r8749): ribasushi | 2010-02-19 10:07:40 +0100 trap unresolvable conditions due to incomplete relationship specification
  r8764@Thesaurus (orig r8751): ribasushi | 2010-02-19 10:11:09 +0100 Changes
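The Optional::Deps work above centralizes checking for optional feature groups. Extension and test code queries it roughly like this ('deploy' is the SQL::Translator-backed group):

  use DBIx::Class::Optional::Dependencies;

  if ( DBIx::Class::Optional::Dependencies->req_ok_for('deploy') ) {
    $schema->deploy;
  }
  else {
    # human-readable summary of what is missing for the group
    warn DBIx::Class::Optional::Dependencies->req_missing_for('deploy');
  }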
r8767@Thesaurus (orig r8754): ribasushi | 2010-02-19 11:14:30 +0100 Fix for RT54697
r8769@Thesaurus (orig r8756): caelum | 2010-02-19 12:21:53 +0100 bump Test::Pod dep
r8770@Thesaurus (orig r8757): caelum | 2010-02-19 12:23:07 +0100 bump Test::Pod dep in Optional::Dependencies too
r8773@Thesaurus (orig r8760): rabbit | 2010-02-19 16:41:24 +0100 Fix stupid sqlt parser regression
r8774@Thesaurus (orig r8761): rabbit | 2010-02-19 16:42:40 +0100 Port remaining tests to the Opt::Dep repository
r8775@Thesaurus (orig r8762): rabbit | 2010-02-19 16:43:36 +0100 Some test cleanups
r8780@Thesaurus (orig r8767): rabbit | 2010-02-20 20:59:20 +0100 Test::Deep actually isn't required
r8786@Thesaurus (orig r8773): rabbit | 2010-02-20 22:21:41 +0100 These are core for perl 5.8
r8787@Thesaurus (orig r8774): rabbit | 2010-02-21 10:52:40 +0100 Shuffle tests a bit
r8788@Thesaurus (orig r8775): rabbit | 2010-02-21 12:09:25 +0100 Bogus require
r8789@Thesaurus (orig r8776): rabbit | 2010-02-21 12:09:48 +0100 Bogus unnecessary dep
r8800@Thesaurus (orig r8787): rabbit | 2010-02-21 13:39:21 +0100
  r8748@Thesaurus (orig r8735): goraxe | 2010-02-17 23:17:15 +0100 branch for dbicadmin pod fixes
  r8778@Thesaurus (orig r8765): goraxe | 2010-02-20 20:35:00 +0100 add G:L:D sub classes to generate pod
  r8779@Thesaurus (orig r8766): goraxe | 2010-02-20 20:56:16 +0100 dbicadmin: use subclassed G:L:D to generate some pod
  r8782@Thesaurus (orig r8769): goraxe | 2010-02-20 21:48:29 +0100 adjust Makefile.pl to generate dbicadmin.pod
  r8783@Thesaurus (orig r8770): goraxe | 2010-02-20 21:50:55 +0100 add svn-ignore for dbicadmin.pod
  r8784@Thesaurus (orig r8771): goraxe | 2010-02-20 22:01:41 +0100 change Options to Arguments
  r8785@Thesaurus (orig r8772): goraxe | 2010-02-20 22:10:29 +0100 add DBIx::Class::Admin::{Descriptive,Usage} to podcover ignore list
  r8790@Thesaurus (orig r8777): rabbit | 2010-02-21 12:35:38 +0100 Cleanup the makefile regen a bit
  r8792@Thesaurus (orig r8779): rabbit | 2010-02-21 12:53:01 +0100 Bah humbug
  r8793@Thesaurus (orig r8780): rabbit | 2010-02-21 12:55:18 +0100 And another one
  r8797@Thesaurus (orig r8784): rabbit | 2010-02-21 13:32:03 +0100 The minimal pod seems to confuse the manpage generator, commenting out for now
  r8798@Thesaurus (orig r8785): rabbit | 2010-02-21 13:38:03 +0100 Add license/author to dbicadmin autogen POD
  r8799@Thesaurus (orig r8786): rabbit | 2010-02-21 13:38:58 +0100 Reorder makefile author actions to make output more readable
r8803@Thesaurus (orig r8790): ribasushi | 2010-02-21 14:24:15 +0100 Fix exception text
r8804@Thesaurus (orig r8791): ribasushi | 2010-02-21 15:14:58 +0100 Extra testdep
r8808@Thesaurus (orig r8795): caelum | 2010-02-22 20:16:07 +0100 with_deferred_fk_checks for Oracle
r8809@Thesaurus (orig r8796): rabbit | 2010-02-22 21:26:20 +0100 Add a hidden option to dbicadmin to self-inject autogenerated POD
r8810@Thesaurus (orig r8797): caelum | 2010-02-22 21:48:43 +0100 improve with_deferred_fk_checks for Oracle, add tests
r8812@Thesaurus (orig r8799): rbuels | 2010-02-22 23:09:40 +0100 added package name to DBD::Pg warning in Pg storage driver to make it explicit where the warning is coming from
r8815@Thesaurus (orig r8802): rabbit | 2010-02-23 11:21:10 +0100 Looks like the distdir wrapping is finally taken care of
r8818@Thesaurus (orig r8805): rabbit | 2010-02-23 14:05:14 +0100 remove POD
r8819@Thesaurus (orig r8806): rabbit | 2010-02-23 14:05:32 +0100 More index exclusions
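caelum's r8795/r8797 above implement the storage-level with_deferred_fk_checks hook for Oracle. It takes a coderef and suspends foreign key enforcement around it; the resultset and column names below are illustrative:

  $schema->storage->with_deferred_fk_checks(sub {
    # insertion order inside the block need not satisfy FK constraints
    $schema->resultset('CD')->create({ cdid => 1, artist => 7, title => 'x' });
    $schema->resultset('Artist')->create({ artistid => 7, name => 'y' });
  });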
r8821@Thesaurus (orig r8808): goraxe | 2010-02-23 15:00:38 +0100 remove short options from dbicadmin
r8822@Thesaurus (orig r8809): rabbit | 2010-02-23 15:15:00 +0100 Whitespace
r8826@Thesaurus (orig r8813): rabbit | 2010-02-24 09:28:43 +0100
  r8585@Thesaurus (orig r8572): faxm0dem | 2010-02-06 23:01:04 +0100 sqlt::producer::oracle is now able to handle quotes correctly. Now we need to take advantage of that as currently the oracle producer capitalises everything
  r8586@Thesaurus (orig r8573): faxm0dem | 2010-02-06 23:03:31 +0100 the way I thought. ribasushi suggested to override deploy(ment_statements)
  r8607@Thesaurus (orig r8594): faxm0dem | 2010-02-09 21:53:48 +0100 should work now
  r8714@Thesaurus (orig r8701): faxm0dem | 2010-02-14 09:49:44 +0100 oracle_version
  r8747@Thesaurus (orig r8734): faxm0dem | 2010-02-17 18:54:45 +0100 still need to uc source_name if quotes off
  r8817@Thesaurus (orig r8804): rabbit | 2010-02-23 12:03:23 +0100 Cleanup code (hopefully no functional changes)
  r8820@Thesaurus (orig r8807): rabbit | 2010-02-23 14:14:19 +0100 Proper error message
  r8823@Thesaurus (orig r8810): faxm0dem | 2010-02-23 15:46:11 +0100 Schema Object Naming Rules: [...] However, database names, global database names, and database link names are always case insensitive and are stored as uppercase. # source: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements008.htm
  r8824@Thesaurus (orig r8811): rabbit | 2010-02-23 16:09:36 +0100 Changes and dep-bump
r8828@Thesaurus (orig r8815): rabbit | 2010-02-24 09:32:53 +0100 Changelogging
r8829@Thesaurus (orig r8816): rabbit | 2010-02-24 09:37:14 +0100 Protect dbicadmin from self-injection when not in make
r8830@Thesaurus (orig r8817): rabbit | 2010-02-24 10:00:43 +0100 Release 0.08120
r8832@Thesaurus (orig r8819): rabbit | 2010-02-24 10:02:36 +0100 Bump trunk version
r8833@Thesaurus (orig r8820): goraxe | 2010-02-24 14:21:23 +0100 do not include hidden opts in generated pod
r8834@Thesaurus (orig r8821): rabbit | 2010-02-24 15:50:34 +0100 small tool to query cpan deps
r8835@Thesaurus (orig r8822): rabbit | 2010-02-26 00:22:51 +0100 Typo
r8849@Thesaurus (orig r8836): rabbit | 2010-03-01 01:32:03 +0100 Cleanup logic in RSC
r8850@Thesaurus (orig r8837): rabbit | 2010-03-01 01:36:24 +0100 Fix incorrect placement of condition resolution failure trap
r8851@Thesaurus (orig r8838): rabbit | 2010-03-01 01:37:53 +0100 Changes
r8855@Thesaurus (orig r8842): rabbit | 2010-03-01 18:04:23 +0100 Add has_relationship proxy to row
r8856@Thesaurus (orig r8843): rabbit | 2010-03-02 10:29:18 +0100 Do not autoviv empty ENV vars
r8857@Thesaurus (orig r8844): rabbit | 2010-03-02 11:09:06 +0100 This test is identical to the one above it
r8858@Thesaurus (orig r8845): rabbit | 2010-03-02 11:13:55 +0100 Test belongs_to accessor in-memory tie
r8859@Thesaurus (orig r8846): rabbit | 2010-03-02 11:35:19 +0100 proving rafl wrong
r8865@Thesaurus (orig r8852): mo | 2010-03-03 12:05:51 +0100 Fix for SQLite to ignore the { for => ... } attribute
r8866@Thesaurus (orig r8853): rabbit | 2010-03-03 12:15:22 +0100 Fix whitespace
r8869@Thesaurus (orig r8856): castaway | 2010-03-03 23:07:40 +0100 Minor doc tweaks
r8870@Thesaurus (orig r8857): castaway | 2010-03-03 23:33:07 +0100 Added note+warning about how Ordered works, from steveo_aa
r8900@Thesaurus (orig r8887): rabbit | 2010-03-04 19:11:32 +0100 Fix identity fiasco
r8901@Thesaurus (orig r8888): rjbs | 2010-03-04 19:39:54 +0100 fix a typo in FAQ
r8904@Thesaurus (orig r8891): acmoore | 2010-03-05 22:37:55 +0100 Fix regression where SQL files with comments were not handled properly by ::Schema::Versioned.
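r8842 above adds a has_relationship proxy to row objects, mirroring the long-standing ResultSource method of the same name. A sketch, with 'liner_notes' as an illustrative relationship name:

  # previously spelled $row->result_source->has_relationship('liner_notes')
  if ( $row->has_relationship('liner_notes') ) {
    my $notes = $row->liner_notes;   # safe to traverse
  }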
r8913@Thesaurus (orig r8900): ribasushi | 2010-03-06 12:26:10 +0100 More checks for weird usage of _determine_driver (maint/gen-schema)
r8916@Thesaurus (orig r8903): rabbit | 2010-03-06 12:30:56 +0100
  r8422@Thesaurus (orig r8409): ribasushi | 2010-01-22 11:14:57 +0100 Branches for some stuff
  r8477@Thesaurus (orig r8464): ribasushi | 2010-01-28 12:26:40 +0100 RT#52681
  r8478@Thesaurus (orig r8465): ribasushi | 2010-01-28 12:41:25 +0100 Deprecate IC::File
  r8479@Thesaurus (orig r8466): ribasushi | 2010-01-28 12:41:56 +0100 Deprecate IC::File(2)
  r8487@Thesaurus (orig r8474): ribasushi | 2010-01-30 13:11:18 +0100 Draft PK explanation
  r8488@Thesaurus (orig r8475): frew | 2010-01-30 21:19:30 +0100 clarify PK stuff in intro just a bit
  r8489@Thesaurus (orig r8476): frew | 2010-01-30 21:24:21 +0100 no first in POD
  r8910@Thesaurus (orig r8897): rabbit | 2010-03-06 11:37:12 +0100 Improve POD about PKs and why they matter
  r8912@Thesaurus (orig r8899): rabbit | 2010-03-06 11:42:41 +0100 One more PODlink
  r8915@Thesaurus (orig r8902): rabbit | 2010-03-06 12:27:29 +0100 Fully deprecate IC::File
r8919@Thesaurus (orig r8906): rabbit | 2010-03-06 12:44:59 +0100 Fix RT54063
r8920@Thesaurus (orig r8907): rabbit | 2010-03-06 13:18:02 +0100 me-- not thinking
r8925@Thesaurus (orig r8912): wreis | 2010-03-06 18:51:59 +0100 improvements for HasOne relationship validation
r8934@Thesaurus (orig r8921): rabbit | 2010-03-07 00:52:51 +0100 Cascading delete needs a guard to remain atomic
r8936@Thesaurus (orig r8923): rabbit | 2010-03-07 02:35:51 +0100 Fix the docs for select/as
r8937@Thesaurus (orig r8924): rabbit | 2010-03-07 02:58:09 +0100 Unmark Opt::Deps experimental and add extra method as per RT55211
r8938@Thesaurus (orig r8925): rabbit | 2010-03-07 03:22:08 +0100 Switch NoTab/EOL checks to Opt::Deps; enable NoTab checks; disable EOL checks
r8939@Thesaurus (orig r8926): rabbit | 2010-03-07 10:23:24 +0100 Cleanup a bit
r8941@Thesaurus (orig r8928): rabbit | 2010-03-07 11:38:35 +0100 Fix MC bug reported by felix
r8944@Thesaurus (orig r8931): caelum | 2010-03-07 11:55:06 +0100
  r23004@hlagh (orig r8530): moritz | 2010-02-04 07:41:29 -0500 create branch for Storage::DBI::InterBase
  r23005@hlagh (orig r8531): moritz | 2010-02-04 07:44:02 -0500 primitive, non-working and very specific Storage::DBI::InterBase
  r23006@hlagh (orig r8532): moritz | 2010-02-04 08:00:05 -0500 [Storage::DBI::InterBase] remove cruft copied from MSSQL
  r23008@hlagh (orig r8534): moritz | 2010-02-04 08:34:22 -0500 [Storage::DBI::InterBase] remove more cruft
  r23014@hlagh (orig r8540): caelum | 2010-02-04 10:08:27 -0500 test file for firebird, not passing yet
  r23015@hlagh (orig r8541): caelum | 2010-02-04 11:24:51 -0500 Firebird: fix test cleanup, add ODBC wrapper
  r23016@hlagh (orig r8542): caelum | 2010-02-04 13:18:48 -0500 limit and better autoinc for Firebird
  r23018@hlagh (orig r8544): caelum | 2010-02-04 14:19:51 -0500 override quoting columns for RETURNING in Firebird ODBC (where it doesn't work) and generate a RETURNING clause only when necessary
  r23022@hlagh (orig r8548): caelum | 2010-02-05 03:55:43 -0500 fix up my Row code for non-pk autoincs, add pretty crappy DT inflation for Firebird
  r23023@hlagh (orig r8549): caelum | 2010-02-05 04:26:03 -0500 rename a couple of variables
  r23024@hlagh (orig r8550): caelum | 2010-02-05 04:46:31 -0500 check for both NULL and null, rename _fb_auto_incs to _auto_incs
  r23025@hlagh (orig r8551): caelum | 2010-02-05 05:07:14 -0500 support autoinc PKs without is_auto_increment set
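Several Firebird commits above concern autoincrement detection, and the PK-documentation block stresses why a declared primary key matters: identity-dependent operations (find, update, delete, relationship traversal) need it. The column-level declarations involved, with illustrative names:

  __PACKAGE__->add_columns(
    artistid => {
      data_type         => 'integer',
      is_auto_increment => 1,   # the flag the branch learns to cope without
    },
    name => { data_type => 'varchar', size => 100 },
  );

  # without this, row-identity operations cannot work reliably
  __PACKAGE__->set_primary_key('artistid');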
2010-02-06 07:35:31 -0500 move Firebird ODBC override for RETURNING to a SQLAHacks class r23048@hlagh (orig r8571): caelum | 2010-02-06 08:06:44 -0500 Firebird: add POD, fix BLOB tests r23085@hlagh (orig r8588): caelum | 2010-02-08 08:26:41 -0500 better DT inflation for Firebird and _ping r23087@hlagh (orig r8590): moritz | 2010-02-08 08:32:26 -0500 test ->update({...}) for firebird r23088@hlagh (orig r8591): caelum | 2010-02-08 08:33:09 -0500 test update r23089@hlagh (orig r8592): moritz | 2010-02-08 08:43:50 -0500 use quoting in firebird tests r23115@hlagh (orig r8597): caelum | 2010-02-10 07:05:21 -0500 default to sql dialect 3 unless overridden r23116@hlagh (orig r8598): caelum | 2010-02-10 07:42:17 -0500 turn on ib_softcommit, savepoint tests now pass for DBD::InterBase r23123@hlagh (orig r8605): caelum | 2010-02-10 17:38:24 -0500 fix savepoints for Firebird ODBC r23170@hlagh (orig r8652): caelum | 2010-02-11 07:27:19 -0500 support the DATE data type for Firebird r23186@hlagh (orig r8668): caelum | 2010-02-12 14:43:20 -0500 special bind_param_array move to make DBD::InterBase happy (RT#54561) r23213@hlagh (orig r8695): caelum | 2010-02-13 15:15:46 -0500 fix fail in t/72pg.t related to new autoinc retrieval code in ::Row r23214@hlagh (orig r8696): caelum | 2010-02-13 15:18:27 -0500 fix multiple cursor test r23246@hlagh (orig r8728): caelum | 2010-02-16 09:47:43 -0500 POD fix r23358@hlagh (orig r8758): caelum | 2010-02-19 06:25:27 -0500 s/primary_columns/_pri_cols/ for Firebird r23420@hlagh (orig r8800): rkitover | 2010-02-22 19:33:13 -0500 don't use ib_softcommit by default r23496@hlagh (orig r8841): rkitover | 2010-03-01 04:22:19 -0500 update POD r23545@hlagh (orig r8855): rkitover | 2010-03-03 12:59:41 -0500 destroy cached statements in $storage->disconnect too r23582@hlagh (orig r8892): rkitover | 2010-03-05 18:06:33 -0500 auto_nextval support for Firebird r23598@hlagh (orig r8908): rkitover | 2010-03-06 11:48:41 -0500 remove that code for non-pk autoincs from Row, move to ::DBI::InterBase r23599@hlagh (orig r8909): rkitover | 2010-03-06 12:00:15 -0500 remove BindType2 test class r23601@hlagh (orig r8911): rkitover | 2010-03-06 12:12:55 -0500 cache autoinc sequence in column_info r23609@hlagh (orig r8919): rkitover | 2010-03-06 18:05:24 -0500 remove connect_info from maint/gen-schema.pl r23610@hlagh (orig r8920): rkitover | 2010-03-06 18:15:13 -0500 don't die on insert in firebird with no pk r23612@hlagh (orig r8922): ribasushi | 2010-03-06 19:18:46 -0500 What I really meant r23619@hlagh (orig r8929): rkitover | 2010-03-07 05:46:04 -0500 fix RETURNING for empty INSERT r8946@Thesaurus (orig r8933): caelum | 2010-03-07 12:08:00 +0100 remove unnecessary transaction_depth check in DBI::insert_bulk r8963@Thesaurus (orig r8950): ilmari | 2010-03-09 15:06:48 +0100 Fix POD link --- diff --git a/.gitignore b/.gitignore index ebae942..5aa3840 100644 --- a/.gitignore +++ b/.gitignore @@ -9,5 +9,6 @@ README _build/ blib/ inc/ +lib/DBIx/Class/Optional/Dependencies.pod pm_to_blib t/var/ diff --git a/Changes b/Changes index dafb2d5..08728f8 100644 --- a/Changes +++ b/Changes @@ -1,8 +1,114 @@ Revision history for DBIx::Class + - Support for Firebird RDBMS with DBD::InterBase and ODBC + - DBIx::Class::InflateColumn::File entered deprecated state + - DBIx::Class::Optional::Dependencies left experimental state + - Add req_group_list to Opt::Deps (RT#55211) + - Cascading delete/update are now wrapped in a transaction + for atomicity + - Fix multiple deficiencies when using MultiCreate with + 
data-encoder components (e.g. ::EncodedColumn) + - Fix regression where SQL files with comments were not + handled properly by ::Schema::Versioned. + - Fix regression on not properly throwing when $obj->relationship + is unresolvable + - Add has_relationship method to row objects + - Fix regression in set_column on PK-less objects + - Add POD about the significance of PK columns + - Fix for SQLite to ignore the (unsupported) { for => ... } + attribute + - Fix ambiguity in default directory handling of create_ddl_dir + (RT#54063) + +0.08120 2010-02-24 08:58:00 (UTC) + - Make sure possibly overwritten deployment_statements methods in + schemas get called on $schema->deploy + - Fix count() with group_by aliased-function resultsets + - with_deferred_fk_checks() Oracle support + - Massive refactor and cleanup of primary key handling + - Fixed regression losing custom result_class (really this time) + (RT#54697) + - Fixed regression in DBIC SQLT::Parser failing with a classname + (as opposed to a schema object) + - Changes to Storage::DBI::Oracle to accommodate changes in latest + SQL::Translator (quote handling) + - Make sure deployment_statements is per-storage overridable + - Fix dbicadmin's (lack of) POD + +0.08119 2010-02-15 09:36:00 (UTC) + - Add $rs->is_ordered to test for existing order_by on a resultset + - Add as_subselect_rs to DBIC::ResultSet from + DBIC::Helper::ResultSet::VirtualView::as_virtual_view + - Refactor dbicadmin adding DDL manipulation capabilities + - New optional dependency manager to aid extension writers + - Depend on newest bugfixed Moose + - Make resultset chaining consistent wrt selection specification + - Storage::DBI::Replicated cleanup + - Fix autoinc PKs without an autoinc flag on Sybase ASA + +0.08118 2010-02-08 11:53:00 (UTC) + - Fix a bug causing UTF8 columns not to be decoded (RT#54395) + - Fix bug in One->Many->One prefetch-collapse handling (RT#54039) + - Cleanup handling of relationship accessor types + +0.08117 2010-02-05 17:10:00 (UTC) + - Perl 5.8.1 is now the minimum supported version + - Massive optimization of the join resolution code - now joins + will be removed from the resulting SQL if DBIC can prove they + are not referenced by anything + - Subqueries no longer marked experimental + - Support for Informix RDBMS (limit/offset and auto-inc columns) + - Support for Sybase SQLAnywhere, both native and via ODBC + - might_have/has_one now warn if the calling class's column + has is_nullable set to true.
+ - Fixed regression in deploy() with a {sources} table limit applied + (RT#52812) + - Views without a view_definition will throw an exception when + parsed by SQL::Translator::Parser::DBIx::Class + - Stop the SQLT parser from auto-adding indexes identical to the + Primary Key + - InflateColumn::DateTime refactoring to allow fine grained method + overloads + - Fix ResultSetColumn improperly selecting more than the requested + column when +columns/+select is present + - Fix failure when update/delete of resultsets with complex WHERE + SQLA structures + - Fix regression in context sensitiveness of deployment_statements + - Fix regression resulting in overcomplicated query on + search_related from prefetching resultsets + - Fix regression on all-null returning searches (properly switch + LEFT JOIN to JOIN in order to distinguish between both cases) + - Fix regression in grouped resultset count() used on strict-mode + MySQL connections + - Better isolation of RNO-limited queries from the rest of a + prefetching resultset + - New MSSQL specific resultset attribute to allow hacky ordered + subquery support + - Fix nasty schema/dbhandle leak due to SQL::Translator + - Initial implementation of a mechanism for Schema::Version to + apply multiple step upgrades + - Fix regression on externally supplied $dbh with AutoCommit=0 + - FAQ "Custom methods in Result classes" + - Cookbook POD fix for add_drop_table instead of add_drop_tables + - Schema POD improvement for dclone + +0.08115 2009-12-10 09:02:00 (CST) + - Real limit/offset support for MSSQL server (via Row_Number) - Fix distinct => 1 with non-selecting order_by (the columns in order_by also need to be added to the resulting group_by) - Do not attempt to deploy FK constraints pointing to a View + - Fix count/objects from search_related on limited resultset + - Stop propagating distinct => 1 over search_related chains + - Make sure populate() inherits the resultset conditions just + like create() does + - Make get_inflated_columns behave identically to get_columns + wrt +select/+as (RT#46953) + - Fix problems with scalarrefs under InflateColumn (RT#51559) + - Throw exception on delete/update of PK-less resultsets + - Refactored Sybase storage driver into a central ::DBI::Sybase + dispatcher, and a sybase-specific ::DBI::Sybase::ASE + - Fixed an atrocious DBD::ADO bind-value bug + - Cookbook/Intro POD improvements 0.08114 2009-11-14 17:45:00 (UTC) - Preliminary support for MSSQL via DBD::ADO diff --git a/Makefile.PL b/Makefile.PL index 2332153..9d087b2 100644 --- a/Makefile.PL +++ b/Makefile.PL @@ -1,157 +1,68 @@ -use inc::Module::Install 0.89; +use inc::Module::Install 0.93; use strict; use warnings; use POSIX (); -use 5.006001; # delete this line if you want to send patches for earlier. +use 5.008001; -# ****** DO NOT ADD OPTIONAL DEPENDENCIES. EVER.
--mst ****** +use FindBin; +use lib "$FindBin::Bin/lib"; -name 'DBIx-Class'; -perl_version '5.006001'; -all_from 'lib/DBIx/Class.pm'; - - -test_requires 'Test::Builder' => '0.33'; -test_requires 'Test::Deep' => '0'; -test_requires 'Test::Exception' => '0'; -test_requires 'Test::More' => '0.92'; -test_requires 'Test::Warn' => '0.21'; - -test_requires 'File::Temp' => '0.22'; - - -# Core -requires 'List::Util' => '0'; -requires 'Scalar::Util' => '0'; -requires 'Storable' => '0'; - -# Perl 5.8.0 doesn't have utf8::is_utf8() -requires 'Encode' => '0' if ($] <= 5.008000); - -# Dependencies (keep in alphabetical order) -requires 'Carp::Clan' => '6.0'; -requires 'Class::Accessor::Grouped' => '0.09000'; -requires 'Class::C3::Componentised' => '1.0005'; -requires 'Class::Inspector' => '1.24'; -requires 'Data::Page' => '2.00'; -requires 'DBD::SQLite' => '1.25'; -requires 'DBI' => '1.605'; -requires 'JSON::Any' => '1.18'; -requires 'MRO::Compat' => '0.09'; -requires 'Module::Find' => '0.06'; -requires 'Path::Class' => '0.16'; -requires 'Scope::Guard' => '0.03'; -requires 'SQL::Abstract' => '1.60'; -requires 'SQL::Abstract::Limit' => '0.13'; -requires 'Sub::Name' => '0.04'; -requires 'Data::Dumper::Concise' => '1.000'; - -my %replication_requires = ( - 'Moose', => '0.87', - 'MooseX::AttributeHelpers' => '0.21', - 'MooseX::Types', => '0.16', - 'namespace::clean' => '0.11', - 'Hash::Merge', => '0.11', -); - -#************************************************************************# -# Make *ABSOLUTELY SURE* that nothing on this list is a real require, # -# since every module listed in %force_requires_if_author is deleted # -# from the final META.yml (thus will never make it as a CPAN dependency) # -#************************************************************************# -my %force_requires_if_author = ( - %replication_requires, - - # when changing also adjust $DBIx::Class::Storage::DBI::minimum_sqlt_version - 'SQL::Translator' => '0.11002', - -# 'Module::Install::Pod::Inherit' => '0.01', - - # when changing also adjust version in t/02pod.t - 'Test::Pod' => '1.26', - - # when changing also adjust version in t/03podcoverage.t - 'Test::Pod::Coverage' => '1.08', - 'Pod::Coverage' => '0.20', - - # CDBI-compat related - 'DBIx::ContextualFetch' => '0', - 'Class::DBI::Plugin::DeepAbstractSearch' => '0', - 'Class::Trigger' => '0', - 'Time::Piece::MySQL' => '0', - 'Clone' => '0', - 'Date::Simple' => '3.03', - - # t/52cycle.t - 'Test::Memory::Cycle' => '0', - 'Devel::Cycle' => '1.10', - - # t/36datetime.t - # t/60core.t - 'DateTime::Format::SQLite' => '0', - - # t/96_is_deteministic_value.t - 'DateTime::Format::Strptime'=> '0', - - # database-dependent reqs - # - $ENV{DBICTEST_PG_DSN} - ? ( - 'Sys::SigAction' => '0', - 'DBD::Pg' => '2.009002', - 'DateTime::Format::Pg' => '0', - ) : () - , - - $ENV{DBICTEST_MYSQL_DSN} - ? ( - 'DateTime::Format::MySQL' => '0', - ) : () - , - - $ENV{DBICTEST_ORA_DSN} - ? ( - 'DateTime::Format::Oracle' => '0', - ) : () - , - - $ENV{DBICTEST_SYBASE_DSN} - ? 
( - 'DateTime::Format::Sybase' => 0, - ) : () - , -); -#************************************************************************# -# Make ABSOLUTELY SURE that nothing on the list above is a real require, # -# since every module listed in %force_requires_if_author is deleted # -# from the final META.yml (thus will never make it as a CPAN dependency) # -#************************************************************************# - - -install_script (qw| - script/dbicadmin -|); +# adjust ENV for $AUTHOR system() calls +use Config; +$ENV{PERL5LIB} = join ($Config{path_sep}, @INC); -tests_recursive (qw| - t -|); -resources 'IRC' => 'irc://irc.perl.org/#dbix-class'; -resources 'license' => 'http://dev.perl.org/licenses/'; -resources 'repository' => 'http://dev.catalyst.perl.org/repos/bast/DBIx-Class/'; -resources 'MailingList' => 'http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbix-class'; +### +### DO NOT ADD OPTIONAL DEPENDENCIES HERE, EVEN AS recommends() +### All of them should go to DBIx::Class::Optional::Dependencies +### -no_index 'DBIx::Class::Storage::DBI::Sybase::Common'; -no_index 'DBIx::Class::SQLAHacks'; -no_index 'DBIx::Class::SQLAHacks::MSSQL'; -no_index 'DBIx::Class::Storage::DBI::AmbiguousGlob'; -no_index 'DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server'; -no_index 'DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server::NoBindVars'; -no_index 'DBIx::Class::Storage::DBIHacks'; -# re-build README and require extra modules for testing if we're in a checkout +name 'DBIx-Class'; +perl_version '5.008001'; +all_from 'lib/DBIx/Class.pm'; +my $build_requires = { + 'DBD::SQLite' => '1.25', +}; + +my $test_requires = { + 'File::Temp' => '0.22', + 'Test::Builder' => '0.33', + 'Test::Exception' => '0', + 'Test::More' => '0.92', + 'Test::Warn' => '0.21', +}; + +my $runtime_requires = { + 'Carp::Clan' => '6.0', + 'Class::Accessor::Grouped' => '0.09002', + 'Class::C3::Componentised' => '1.0005', + 'Class::Inspector' => '1.24', + 'Data::Page' => '2.00', + 'DBI' => '1.609', + 'MRO::Compat' => '0.09', + 'Module::Find' => '0.06', + 'Path::Class' => '0.18', + 'SQL::Abstract' => '1.61', + 'SQL::Abstract::Limit' => '0.13', + 'Sub::Name' => '0.04', + 'Data::Dumper::Concise' => '1.000', + 'Scope::Guard' => '0.03', + 'Context::Preserve' => '0.01', +}; + +# this is so we can order requires alphabetically +# copies are needed for author requires injection +my $reqs = { + build_requires => { %$build_requires }, + requires => { %$runtime_requires }, + test_requires => { %$test_requires }, +}; + + +# require extra modules for testing if we're in a checkout if ($Module::Install::AUTHOR) { warn <<'EOW'; ****************************************************************************** @@ -164,9 +75,38 @@ if ($Module::Install::AUTHOR) { EOW - foreach my $module (sort keys %force_requires_if_author) { - build_requires ($module => $force_requires_if_author{$module}); + require DBIx::Class::Optional::Dependencies; + $reqs->{test_requires} = { + %{$reqs->{test_requires}}, + map { %$_ } (values %{DBIx::Class::Optional::Dependencies->req_group_list}), + }; +} + +# compose final req list, for alphabetical ordering +my %final_req; +for my $rtype (keys %$reqs) { + for my $mod (keys %{$reqs->{$rtype}} ) { + + # sanity check req duplications + if ($final_req{$mod}) { + die "$mod specified as both a '$rtype' and a '$final_req{$mod}[0]'\n"; + } + + $final_req{$mod} = [ $rtype, $reqs->{$rtype}{$mod}||0 ], } +} + +# actual require +for my $mod (sort keys %final_req) { + my ($rtype, $ver) = @{$final_req{$mod}}; + no 
strict 'refs'; + $rtype->($mod, $ver); +} + +auto_install(); + +# re-create various autogenerated documentation bits +if ($Module::Install::AUTHOR) { print "Regenerating README\n"; system('pod2text lib/DBIx/Class.pm > README'); @@ -176,20 +116,78 @@ EOW unlink 'MANIFEST'; } -# require Module::Install::Pod::Inherit; -# PodInherit(); + print "Regenerating Optional/Dependencies.pod\n"; + require DBIx::Class::Optional::Dependencies; + DBIx::Class::Optional::Dependencies->_gen_pod; + + # FIXME Disabled due to unsolved issues, ask theorbtwo + # require Module::Install::Pod::Inherit; + # PodInherit(); } -auto_install(); +tests_recursive (qw| + t +|); + +install_script (qw| + script/dbicadmin +|); + + +### Mangle makefile - read the comments for more info +# +postamble <<"EOP"; + +# This will add an extra dep-spec for the distdir target, +# which `make` will fold together in a first-come first-serve +# fashion. What we do here is essentially adding extra +# commands to execute once the distdir is assembled (via +# create_distdir), but before control is returned to a higher +# calling rule. +distdir : dbicadmin_pod_inject + +# The pod self-injection code is in fact a hidden option in +# dbicadmin itself +dbicadmin_pod_inject : +\tcd \$(DISTVNAME) && \$(ABSPERL) -Ilib script/dbicadmin --selfinject-pod + +# Regenerate manifest before running create_distdir. +create_distdir : manifest + +EOP + + + +resources 'IRC' => 'irc://irc.perl.org/#dbix-class'; +resources 'license' => 'http://dev.perl.org/licenses/'; +resources 'repository' => 'http://dev.catalyst.perl.org/repos/bast/DBIx-Class/'; +resources 'MailingList' => 'http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbix-class'; + +# Deprecated/internal modules need no exposure +no_index directory => $_ for (qw| + lib/DBIx/Class/Admin + lib/DBIx/Class/SQLAHacks + lib/DBIx/Class/PK/Auto + lib/DBIx/Class/CDBICompat +|); +no_index package => $_ for (qw/ + DBIx::Class::SQLAHacks DBIx::Class::Storage::DBIHacks +/); + WriteAll(); + # Re-write META.yml to _exclude_ all forced requires (we do not want to ship this) if ($Module::Install::AUTHOR) { - Meta->{values}{build_requires} = [ grep - { not exists $force_requires_if_author{$_->[0]} } - ( @{Meta->{values}{build_requires}} ) + # FIXME test_requires is not yet part of META + my %original_build_requires = ( %$build_requires, %$test_requires ); + + print "Regenerating META with author requires excluded\n"; + Meta->{values}{build_requires} = [ grep + { exists $original_build_requires{$_->[0]} } + ( @{Meta->{values}{build_requires}} ) ]; Meta->write; diff --git a/examples/Schema/MyDatabase/Main/Result/Artist.pm b/examples/Schema/MyDatabase/Main/Result/Artist.pm index ec78501..0571dae 100644 --- a/examples/Schema/MyDatabase/Main/Result/Artist.pm +++ b/examples/Schema/MyDatabase/Main/Result/Artist.pm @@ -1,9 +1,16 @@ package MyDatabase::Main::Result::Artist; -use base qw/DBIx::Class/; -__PACKAGE__->load_components(qw/PK::Auto Core/); + +use warnings; +use strict; + +use base qw/DBIx::Class::Core/; + __PACKAGE__->table('artist'); + __PACKAGE__->add_columns(qw/ artistid name /); + __PACKAGE__->set_primary_key('artistid'); + __PACKAGE__->has_many('cds' => 'MyDatabase::Main::Result::Cd'); 1; diff --git a/examples/Schema/MyDatabase/Main/Result/Cd.pm b/examples/Schema/MyDatabase/Main/Result/Cd.pm index 83fd21e..6a465a1 100644 --- a/examples/Schema/MyDatabase/Main/Result/Cd.pm +++ b/examples/Schema/MyDatabase/Main/Result/Cd.pm @@ -1,9 +1,16 @@ package MyDatabase::Main::Result::Cd; -use base qw/DBIx::Class/; 
-__PACKAGE__->load_components(qw/PK::Auto Core/); + +use warnings; +use strict; + +use base qw/DBIx::Class::Core/; + __PACKAGE__->table('cd'); + __PACKAGE__->add_columns(qw/ cdid artist title/); + __PACKAGE__->set_primary_key('cdid'); + __PACKAGE__->belongs_to('artist' => 'MyDatabase::Main::Result::Artist'); __PACKAGE__->has_many('tracks' => 'MyDatabase::Main::Result::Track'); diff --git a/examples/Schema/MyDatabase/Main/Result/Track.pm b/examples/Schema/MyDatabase/Main/Result/Track.pm index 23877bb..961018b 100644 --- a/examples/Schema/MyDatabase/Main/Result/Track.pm +++ b/examples/Schema/MyDatabase/Main/Result/Track.pm @@ -1,9 +1,16 @@ package MyDatabase::Main::Result::Track; -use base qw/DBIx::Class/; -__PACKAGE__->load_components(qw/PK::Auto Core/); + +use warnings; +use strict; + +use base qw/DBIx::Class::Core/; + __PACKAGE__->table('track'); + __PACKAGE__->add_columns(qw/ trackid cd title/); + __PACKAGE__->set_primary_key('trackid'); + __PACKAGE__->belongs_to('cd' => 'MyDatabase::Main::Result::Cd'); 1; diff --git a/lib/DBIx/Class.pm b/lib/DBIx/Class.pm index 07b678c..56f94dc 100644 --- a/lib/DBIx/Class.pm +++ b/lib/DBIx/Class.pm @@ -4,9 +4,12 @@ use strict; use warnings; use MRO::Compat; +use mro 'c3'; + +use DBIx::Class::Optional::Dependencies; use vars qw($VERSION); -use base qw/Class::C3::Componentised Class::Accessor::Grouped/; +use base qw/DBIx::Class::Componentised Class::Accessor::Grouped/; use DBIx::Class::StartupCheck; sub mk_classdata { @@ -24,9 +27,9 @@ sub component_base_class { 'DBIx::Class' } # Always remember to do all digits for the version even if they're 0 # i.e. first release of 0.XX *must* be 0.XX000. This avoids fBSD ports # brain damage and presumably various other packaging systems too -$VERSION = '0.08114'; +$VERSION = '0.08120_1'; -$VERSION = eval $VERSION; # numify for warning-free dev releases +$VERSION = eval $VERSION if $VERSION =~ /_/; # numify for warning-free dev releases sub MODIFY_CODE_ATTRIBUTES { my ($class,$code,@attrs) = @_; @@ -53,13 +56,20 @@ DBIx::Class - Extensible and flexible object <-> relational mapper. The community can be found via: - Mailing list: http://lists.scsys.co.uk/mailman/listinfo/dbix-class/ +=over + +=item * IRC: L + +=item * Mailing list: L + +=item * RT Bug Tracker: L - SVN: http://dev.catalyst.perl.org/repos/bast/DBIx-Class/ +=item * SVNWeb: L - SVNWeb: http://dev.catalyst.perl.org/svnweb/bast/browse/DBIx-Class/ +=item * SVN: L - IRC: irc.perl.org#dbix-class +=back =head1 SYNOPSIS @@ -78,9 +88,8 @@ MyDB/Schema/Result/Artist.pm: See L for docs on defining result classes. package MyDB::Schema::Result::Artist; - use base qw/DBIx::Class/; + use base qw/DBIx::Class::Core/; - __PACKAGE__->load_components(qw/Core/); __PACKAGE__->table('artist'); __PACKAGE__->add_columns(qw/ artistid name /); __PACKAGE__->set_primary_key('artistid'); @@ -92,9 +101,9 @@ A result class to represent a CD, which belongs to an artist, in MyDB/Schema/Result/CD.pm: package MyDB::Schema::Result::CD; - use base qw/DBIx::Class/; + use base qw/DBIx::Class::Core/; - __PACKAGE__->load_components(qw/Core/); + __PACKAGE__->load_components(qw/InflateColumn::DateTime/); __PACKAGE__->table('cd'); __PACKAGE__->add_columns(qw/ cdid artistid title year /); __PACKAGE__->set_primary_key('cdid'); @@ -117,7 +126,7 @@ Then you can use these classes in your application's code: # Output all artists names # $artist here is a DBIx::Class::Row, which has accessors # for all its columns. Rows are also subclasses of your Result class. 
- foreach $artist (@artists) { + foreach $artist (@all_artists) { print $artist->name, "\n"; } @@ -213,6 +222,8 @@ abraxxa: Alexander Hartmaier aherzog: Adam Herzog +amoore: Andrew Moore + andyg: Andy Grundman ank: Andres Kievsky @@ -227,6 +238,8 @@ blblack: Brandon L. Black bluefeet: Aran Deltac +boghead: Bryan Beeley + bricas: Brian Cassidy brunov: Bruno Vecchi @@ -243,16 +256,22 @@ da5id: David Jack Olrik debolaz: Anders Nor Berle +dew: Dan Thomas + dkubb: Dan Kubb dnm: Justin Wheeler +dpetrov: Dimitar Petrov + dwc: Daniel Westermann-Clark dyfrgi: Michael Leuchtenburg frew: Arthur Axel "fREW" Schmidt +goraxe: Gordon Irving + gphat: Cory G Watson groditi: Guillermo Roditi @@ -267,6 +286,8 @@ jgoulah: John Goulah jguenther: Justin Guenther +jhannah: Jay Hannah + jnapiorkowski: John Napiorkowski jon: Jon Schutz @@ -293,8 +314,12 @@ Nniuq: Ron "Quinn" Straight" norbi: Norbert Buchmuller +nuba: Nuba Princigalli + Numa: Dan Sully +ovid: Curtis "Ovid" Poe + oyse: Øystein Torget paulm: Paul Makepeace @@ -317,12 +342,14 @@ rbuels: Robert Buels rdj: Ryan D Johnson -ribasushi: Peter Rabbitson +ribasushi: Peter Rabbitson rjbs: Ricardo Signes robkinyon: Rob Kinyon +Roman: Roman Filippov + sc_: Just Another Perl Hacker scotty: Scotty Allen @@ -357,7 +384,7 @@ zamolxes: Bogdan Lucaciu =head1 COPYRIGHT -Copyright (c) 2005 - 2009 the DBIx::Class L and L +Copyright (c) 2005 - 2010 the DBIx::Class L and L as listed above. =head1 LICENSE diff --git a/lib/DBIx/Class/AccessorGroup.pm b/lib/DBIx/Class/AccessorGroup.pm index ae4d490..4d7e046 100644 --- a/lib/DBIx/Class/AccessorGroup.pm +++ b/lib/DBIx/Class/AccessorGroup.pm @@ -17,8 +17,6 @@ DBIx::Class::AccessorGroup - See Class::Accessor::Grouped This class now exists in its own right on CPAN as Class::Accessor::Grouped -1; - =head1 AUTHORS Matt S. Trout diff --git a/lib/DBIx/Class/Admin.pm b/lib/DBIx/Class/Admin.pm new file mode 100644 index 0000000..284f72d --- /dev/null +++ b/lib/DBIx/Class/Admin.pm @@ -0,0 +1,568 @@ +package DBIx::Class::Admin; + +# check deps +BEGIN { + use Carp::Clan qw/^DBIx::Class/; + use DBIx::Class; + croak('The following modules are required for DBIx::Class::Admin ' . DBIx::Class::Optional::Dependencies->req_missing_for ('admin') ) + unless DBIx::Class::Optional::Dependencies->req_ok_for ('admin'); +} + +use Moose; +use MooseX::Types::Moose qw/Int Str Any Bool/; +use DBIx::Class::Admin::Types qw/DBICConnectInfo DBICHashRef/; +use MooseX::Types::JSON qw(JSON); +use MooseX::Types::Path::Class qw(Dir File); +use Try::Tiny; +use JSON::Any qw(DWIW XS JSON); +use namespace::autoclean; + +=head1 NAME + +DBIx::Class::Admin - Administration object for schemas + +=head1 SYNOPSIS + + $ dbicadmin --help + + $ dbicadmin --schema=MyApp::Schema \ + --connect='["dbi:SQLite:my.db", "", ""]' \ + --deploy + + $ dbicadmin --schema=MyApp::Schema --class=Employee \ + --connect='["dbi:SQLite:my.db", "", ""]' \ + --op=update --set='{ "name": "New_Employee" }' + + use DBIx::Class::Admin; + + # ddl manipulation + my $admin = DBIx::Class::Admin->new( + schema_class=> 'MY::Schema', + sql_dir=> $sql_dir, + connect_info => { dsn => $dsn, user => $user, password => $pass }, + ); + + # create SQLite sql + $admin->create('SQLite'); + + # create SQL diff for an upgrade + $admin->create('SQLite', {} , "1.0"); + + # upgrade a database + $admin->upgrade(); + + # install a version for an unversioned schema + $admin->install("3.0"); + +=head1 REQUIREMENTS + +The Admin interface has additional requirements not currently part of +L. See L for more details. 
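The BEGIN block above gates compilation on the optional 'admin' dependency group. A caller can run the same probe before attempting to load this class; a minimal sketch, reusing only the req_ok_for/req_missing_for calls and the 'admin' group name shown in the BEGIN block (the schema class name is a placeholder taken from the SYNOPSIS):

    use DBIx::Class::Optional::Dependencies;

    if ( DBIx::Class::Optional::Dependencies->req_ok_for('admin') ) {
        require DBIx::Class::Admin;    # safe to load now
        my $admin = DBIx::Class::Admin->new( schema_class => 'MY::Schema' );
    }
    else {
        # report which of the optional modules are missing
        warn 'DBIx::Class::Admin unavailable: '
           . DBIx::Class::Optional::Dependencies->req_missing_for('admin');
    }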
+ +=head1 ATTRIBUTES + +=head2 schema_class + +the class of the schema to load + +=cut + +has 'schema_class' => ( + is => 'ro', + isa => Str, +); + + +=head2 schema + +A pre-connected schema object can be provided for manipulation + +=cut + +has 'schema' => ( + is => 'ro', + isa => 'DBIx::Class::Schema', + lazy_build => 1, +); + +sub _build_schema { + my ($self) = @_; + require Class::MOP; + Class::MOP::load_class($self->schema_class); + + $self->connect_info->[3]->{ignore_version} =1; + return $self->schema_class->connect(@{$self->connect_info()} ); # , $self->connect_info->[3], { ignore_version => 1} ); +} + + +=head2 resultset + +a resultset from the schema to operate on + +=cut + +has 'resultset' => ( + is => 'rw', + isa => Str, +); + + +=head2 where + +a hash ref or json string to be used for identifying data to manipulate + +=cut + +has 'where' => ( + is => 'rw', + isa => DBICHashRef, + coerce => 1, +); + + +=head2 set + +a hash ref or json string to be used for inserting or updating data + +=cut + +has 'set' => ( + is => 'rw', + isa => DBICHashRef, + coerce => 1, +); + + +=head2 attrs + +a hash ref or json string to be used for passing additional info to the ->search call + +=cut + +has 'attrs' => ( + is => 'rw', + isa => DBICHashRef, + coerce => 1, +); + + +=head2 connect_info + +connect_info: the arguments to provide to the connect call of the schema_class + +=cut + +has 'connect_info' => ( + is => 'ro', + isa => DBICConnectInfo, + lazy_build => 1, + coerce => 1, +); + +sub _build_connect_info { + my ($self) = @_; + return $self->_find_stanza($self->config, $self->config_stanza); +} + + +=head2 config_file + +config_file provides a config_file to read connect_info from; if this is provided, +config_stanza should also be provided to locate where the connect_info is in the config. +The config file should be in a format readable by Config::General + +=cut + +has config_file => ( + is => 'ro', + isa => File, + coerce => 1, +); + + +=head2 config_stanza + +config_stanza for use with config_file should be a '::' delimited 'path' to the connection information, +designed for use with Catalyst config files + +=cut + +has 'config_stanza' => ( + is => 'ro', + isa => Str, +); + + +=head2 config + +Instead of loading from a file the configuration can be provided directly as a hash ref. Please note +config_stanza will still be required. + +=cut + +has config => ( + is => 'ro', + isa => DBICHashRef, + lazy_build => 1, +); + +sub _build_config { + my ($self) = @_; + + eval { require Config::Any } + or die ("Config::Any is required to parse the config file.\n"); + + my $cfg = Config::Any->load_files ( {files => [$self->config_file], use_ext =>1, flatten_to_hash=>1}); + + # just grab the config from the config file + $cfg = $cfg->{$self->config_file}; + return $cfg; +} + + +=head2 sql_dir + +The location where SQL DDL files should be created or found for an upgrade. + +=cut + +has 'sql_dir' => ( + is => 'ro', + isa => Dir, + coerce => 1, +); + + +=head2 version + +Used for install, the version which will be 'installed' in the schema + +=cut + +has version => ( + is => 'rw', + isa => Str, +); + + +=head2 preversion + +Previous version of the schema to create an upgrade diff for; the full SQL for that version of the schema must be in the sql_dir + +=cut + +has preversion => ( + is => 'rw', + isa => Str, +); + + +=head2 force + +Try and force certain operations.
+ +=cut + +has force => ( + is => 'rw', + isa => Bool, +); + + +=head2 quiet + +Be less verbose about actions + +=cut + +has quiet => ( + is => 'rw', + isa => Bool, +); + +has '_confirm' => ( + is => 'bare', + isa => Bool, +); + + +=head1 METHODS + +=head2 create + +=over 4 + +=item Arguments: $sqlt_type, \%sqlt_args, $preversion + +=back + +L will generate SQL for the supplied schema_class in sql_dir. The flavour of SQL to +generate can be controlled by supplying a sqlt_type which should be a L name. + +Arguments for L can be supplied in the sqlt_args hashref. + +Optional preversion can be supplied to generate a diff to be used by upgrade. + +=cut + +sub create { + my ($self, $sqlt_type, $sqlt_args, $preversion) = @_; + + $preversion ||= $self->preversion(); + + my $schema = $self->schema(); + # create the dir if it does not exist + $self->sql_dir->mkpath() if ( ! -d $self->sql_dir); + + $schema->create_ddl_dir( $sqlt_type, (defined $schema->schema_version ? $schema->schema_version : ""), $self->sql_dir->stringify, $preversion, $sqlt_args ); +} + + +=head2 upgrade + +=over 4 + +=item Arguments: + +=back + +upgrade will attempt to upgrade the connected database to the same version as the schema_class. +B + +=cut + +sub upgrade { + my ($self) = @_; + my $schema = $self->schema(); + if (!$schema->get_db_version()) { + # schema is unversioned + $schema->throw_exception ("Could not determine current schema version, please either install() or deploy().\n"); + } else { + my $ret = $schema->upgrade(); + return $ret; + } +} + + +=head2 install + +=over 4 + +=item Arguments: $version + +=back + +install is here to help when you want to move to L and have an existing +database. install will take a version and add the version tracking tables and 'install' the version. No +further DDL modification takes place. Setting the force attribute to a true value will allow overriding of +already versioned databases. + +=cut + +sub install { + my ($self, $version) = @_; + + my $schema = $self->schema(); + $version ||= $self->version(); + if (!$schema->get_db_version() ) { + # schema is unversioned + print "Going to install schema version\n"; + my $ret = $schema->install($version); + print "return is $ret\n"; + } + elsif ($schema->get_db_version() and $self->force ) { + carp "Forcing install may not be a good idea"; + if($self->_confirm() ) { + $self->schema->_set_db_version({ version => $version}); + } + } + else { + $schema->throw_exception ("Schema already has a version. Try upgrade instead.\n"); + } + +} + + +=head2 deploy + +=over 4 + +=item Arguments: $args + +=back + +deploy will create the schema at the connected database. C<$args> are passed straight to +L. + +=cut + +sub deploy { + my ($self, $args) = @_; + my $schema = $self->schema(); + if (!$schema->get_db_version() ) { + # schema is unversioned + $schema->deploy( $args, $self->sql_dir) + or $schema->throw_exception ("Could not deploy schema.\n"); # FIXME deploy() does not return 1/0 on success/fail + } else { + $schema->throw_exception("A versioned schema has already been deployed, try upgrade instead.\n"); + } +} + +=head2 insert + +=over 4 + +=item Arguments: $rs, $set + +=back + +insert takes the name of a resultset from the schema_class and a hashref of data to insert +into that resultset + +=cut + +sub insert { + my ($self, $rs, $set) = @_; + + $rs ||= $self->resultset(); + $set ||= $self->set(); + my $resultset = $self->schema->resultset($rs); + my $obj = $resultset->create( $set ); + print ''.ref($resultset).' 
ID: '.join(',',$obj->id())."\n" if (!$self->quiet); +} + + +=head2 update + +=over 4 + +=item Arguments: $rs, $set, $where + +=back + +update takes the name of a resultset from the schema_class, a hashref of data to update and +a where hash used to form the search for the rows to update. + +=cut + +sub update { + my ($self, $rs, $set, $where) = @_; + + $rs ||= $self->resultset(); + $where ||= $self->where(); + $set ||= $self->set(); + my $resultset = $self->schema->resultset($rs); + $resultset = $resultset->search( ($where||{}) ); + + my $count = $resultset->count(); + print "This action will modify $count ".ref($resultset)." records.\n" if (!$self->quiet); + + if ( $self->force || $self->_confirm() ) { + $resultset->update_all( $set ); + } +} + + +=head2 delete + +=over 4 + +=item Arguments: $rs, $where, $attrs + +=back + +delete takes the name of a resultset from the schema_class, a where hashref and an attrs hashref to pass to ->search. +The found data is deleted and cannot be recovered. + +=cut + +sub delete { + my ($self, $rs, $where, $attrs) = @_; + + $rs ||= $self->resultset(); + $where ||= $self->where(); + $attrs ||= $self->attrs(); + my $resultset = $self->schema->resultset($rs); + $resultset = $resultset->search( ($where||{}), ($attrs||()) ); + + my $count = $resultset->count(); + print "This action will delete $count ".ref($resultset)." records.\n" if (!$self->quiet); + + if ( $self->force || $self->_confirm() ) { + $resultset->delete_all(); + } +} + + +=head2 select + +=over 4 + +=item Arguments: $rs, $where, $attrs + +=back + +select takes the name of a resultset from the schema_class, a where hashref and an attrs hashref to pass to ->search. +The found data is returned in an array ref where the first row will be the columns list. + +=cut + +sub select { + my ($self, $rs, $where, $attrs) = @_; + + $rs ||= $self->resultset(); + $where ||= $self->where(); + $attrs ||= $self->attrs(); + my $resultset = $self->schema->resultset($rs); + $resultset = $resultset->search( ($where||{}), ($attrs||()) ); + + my @data; + my @columns = $resultset->result_source->columns(); + push @data, [@columns]; # header row: the column names + + while (my $row = $resultset->next()) { + my @fields; + foreach my $column (@columns) { + push( @fields, $row->get_column($column) ); + } + push @data, [@fields]; + } + + return \@data; +} + +sub _confirm { + my ($self) = @_; + print "Are you sure you want to do this? (type YES to confirm) \n"; + # mainly here for testing + return 1 if ($self->meta->get_attribute('_confirm')->get_value($self)); + my $response = <STDIN>; + return 1 if ($response=~/^YES/); + return; +} + +sub _find_stanza { + my ($self, $cfg, $stanza) = @_; + my @path = split /::/, $stanza; + while (my $path = shift @path) { + if (exists $cfg->{$path}) { + $cfg = $cfg->{$path}; + } + else { + die ("Could not find $stanza in config, $path does not seem to exist.\n"); + } + } + return $cfg; +} + +=head1 AUTHOR + +See L.
+ +=head1 LICENSE + +You may distribute this code under the same terms as Perl itself + +=cut + +1; diff --git a/lib/DBIx/Class/Admin/Descriptive.pm b/lib/DBIx/Class/Admin/Descriptive.pm new file mode 100644 index 0000000..45fcb19 --- /dev/null +++ b/lib/DBIx/Class/Admin/Descriptive.pm @@ -0,0 +1,10 @@ +package # hide from PAUSE + DBIx::Class::Admin::Descriptive; + +use DBIx::Class::Admin::Usage; + +use base 'Getopt::Long::Descriptive'; + +sub usage_class { 'DBIx::Class::Admin::Usage'; } + +1; diff --git a/lib/DBIx/Class/Admin/Types.pm b/lib/DBIx/Class/Admin/Types.pm new file mode 100644 index 0000000..23af292 --- /dev/null +++ b/lib/DBIx/Class/Admin/Types.pm @@ -0,0 +1,48 @@ +package # hide from PAUSE + DBIx::Class::Admin::Types; + +use MooseX::Types -declare => [qw( + DBICConnectInfo + DBICArrayRef + DBICHashRef +)]; +use MooseX::Types::Moose qw/Int HashRef ArrayRef Str Any Bool/; +use MooseX::Types::JSON qw(JSON); + +subtype DBICArrayRef, + as ArrayRef; + +subtype DBICHashRef, + as HashRef; + +coerce DBICArrayRef, + from JSON, + via { _json_to_data ($_) }; + +coerce DBICHashRef, + from JSON, + via { _json_to_data($_) }; + +subtype DBICConnectInfo, + as ArrayRef; + +coerce DBICConnectInfo, + from JSON, + via { return _json_to_data($_) } ; + +coerce DBICConnectInfo, + from Str, + via { return _json_to_data($_) }; + +coerce DBICConnectInfo, + from HashRef, + via { [ $_ ] }; + +sub _json_to_data { + my ($json_str) = @_; + my $json = JSON::Any->new(allow_barekey => 1, allow_singlequote => 1, relaxed=>1); + my $ret = $json->jsonToObj($json_str); + return $ret; +} + +1; diff --git a/lib/DBIx/Class/Admin/Usage.pm b/lib/DBIx/Class/Admin/Usage.pm new file mode 100644 index 0000000..ddd925a --- /dev/null +++ b/lib/DBIx/Class/Admin/Usage.pm @@ -0,0 +1,79 @@ +package # hide from PAUSE + DBIx::Class::Admin::Usage; + + +use base 'Getopt::Long::Descriptive::Usage'; + +use base 'Class::Accessor::Grouped'; + +use Class::C3; + +__PACKAGE__->mk_group_accessors('simple', 'synopsis', 'short_description'); + +sub prog_name { + Getopt::Long::Descriptive::prog_name(); +} + +sub set_simple { + my ($self,$field, $value) = @_; + my $prog_name = prog_name(); + $value =~ s/%c/$prog_name/g; + $self->next::method($field, $value); +} + + + +# This returns the usage formatted as a pod document +sub pod { + my ($self) = @_; + return join qq{\n}, $self->pod_leader_text, $self->pod_option_text, $self->pod_authorlic_text; +} + +sub pod_leader_text { + my ($self) = @_; + + return qq{=head1 NAME\n\n}.prog_name()." - ".$self->short_description().qq{\n\n}. + qq{=head1 SYNOPSIS\n\n}.$self->leader_text().qq{\n}.$self->synopsis().qq{\n\n}; + +} + +sub pod_authorlic_text { + + return join ("\n\n", + '=head1 AUTHORS', + 'See L', + '=head1 LICENSE', + 'You may distribute this code under the same terms as Perl itself', + '=cut', + ); +} + + +sub pod_option_text { + my ($self) = @_; + my @options = @{ $self->{options} || [] }; + my $string = q{}; + return $string unless @options; + + $string .= "=head1 OPTIONS\n\n=over\n\n"; + + foreach my $opt (@options) { + my $spec = $opt->{spec}; + my $desc = $opt->{desc}; + next if ($desc eq 'hidden'); + if ($desc eq 'spacer') { + $string .= "=back\n\n=head2 $spec\n\n=cut\n\n=over\n\n"; + next; + } + + $spec = Getopt::Long::Descriptive->_strip_assignment($spec); + $string .= "=item " . join " or ", map { length > 1 ?
"B<--$_>" : "B<-$_>" } + split /\|/, $spec; + $string .= "\n\n$desc\n\n=cut\n\n"; + + } + $string .= "=back\n\n"; + return $string; +} + +1; diff --git a/lib/DBIx/Class/CDBICompat.pm b/lib/DBIx/Class/CDBICompat.pm index 835adfe..41160c0 100644 --- a/lib/DBIx/Class/CDBICompat.pm +++ b/lib/DBIx/Class/CDBICompat.pm @@ -91,7 +91,7 @@ This plugin will work, but it is more efficiently done using DBIC's native searc =head2 Choosing Features -In fact, this class is just a receipe containing all the features emulated. +In fact, this class is just a recipe containing all the features emulated. If you like, you can choose which features to emulate by building your own class and loading it like this: @@ -145,7 +145,7 @@ The semi-documented Class::DBI::Relationship objects returned by C sub { my $self = shift; $self->sth_to_objects($self->sql_Retrieve($fragment), \@_); }; diff --git a/lib/DBIx/Class/CDBICompat/Copy.pm b/lib/DBIx/Class/CDBICompat/Copy.pm index ffd5381..0ab6092 100644 --- a/lib/DBIx/Class/CDBICompat/Copy.pm +++ b/lib/DBIx/Class/CDBICompat/Copy.pm @@ -12,7 +12,7 @@ DBIx::Class::CDBICompat::Copy - Emulates Class::DBI->copy($new_id) =head1 SYNOPSIS -See DBIx::Class::CDBICompat for directions for use. +See DBIx::Class::CDBICompat for usage directions. =head1 DESCRIPTION diff --git a/lib/DBIx/Class/CDBICompat/Iterator.pm b/lib/DBIx/Class/CDBICompat/Iterator.pm index 3e93154..847b10b 100644 --- a/lib/DBIx/Class/CDBICompat/Iterator.pm +++ b/lib/DBIx/Class/CDBICompat/Iterator.pm @@ -10,7 +10,7 @@ DBIx::Class::CDBICompat::Iterator - Emulates the extra behaviors of the Class::D =head1 SYNOPSIS -See DBIx::Class::CDBICompat for directions for use. +See DBIx::Class::CDBICompat for usage directions. =head1 DESCRIPTION diff --git a/lib/DBIx/Class/Componentised.pm b/lib/DBIx/Class/Componentised.pm index 7cb5d54..5a59238 100644 --- a/lib/DBIx/Class/Componentised.pm +++ b/lib/DBIx/Class/Componentised.pm @@ -4,10 +4,40 @@ package # hide from PAUSE use strict; use warnings; -### -# Keep this class for backwards compatibility -### - use base 'Class::C3::Componentised'; +use Carp::Clan qw/^DBIx::Class|^Class::C3::Componentised/; +use mro 'c3'; + +# this warns of subtle bugs introduced by UTF8Columns hacky handling of store_column +sub inject_base { + my $class = shift; + my $target = shift; + + my @present_components = (@{mro::get_linear_isa ($target)||[]}); + + no strict 'refs'; + for my $comp (reverse @_) { + + if ($comp->isa ('DBIx::Class::UTF8Columns') ) { + require B; + my @broken; + + for (@present_components) { + my $cref = $_->can ('store_column') + or next; + push @broken, $_ if B::svref_2object($cref)->STASH->NAME ne 'DBIx::Class::Row'; + } + + carp "Incorrect loading order of $comp by ${target} will affect other components overriding store_column (" + . join (', ', @broken) + .'). 
Refer to the documentation of DBIx::Class::UTF8Columns for more info' + if @broken; + } + + unshift @present_components, $comp; + } + + $class->next::method($target, @_); +} 1; diff --git a/lib/DBIx/Class/Core.pm b/lib/DBIx/Class/Core.pm index d4d980a..a7e5f59 100644 --- a/lib/DBIx/Class/Core.pm +++ b/lib/DBIx/Class/Core.pm @@ -2,7 +2,6 @@ package DBIx::Class::Core; use strict; use warnings; -no warnings 'qw'; use base qw/DBIx::Class/; @@ -12,7 +11,8 @@ __PACKAGE__->load_components(qw/ PK::Auto PK Row - ResultSourceProxy::Table/); + ResultSourceProxy::Table +/); 1; @@ -22,8 +22,8 @@ DBIx::Class::Core - Core set of DBIx::Class modules =head1 SYNOPSIS - # In your table classes - __PACKAGE__->load_components(qw/Core/); + # In your result (table) classes + use base 'DBIx::Class::Core'; =head1 DESCRIPTION diff --git a/lib/DBIx/Class/InflateColumn.pm b/lib/DBIx/Class/InflateColumn.pm index ee3081c..f5c2f8f 100644 --- a/lib/DBIx/Class/InflateColumn.pm +++ b/lib/DBIx/Class/InflateColumn.pm @@ -26,7 +26,7 @@ for the database. It can be used, for example, to automatically convert to and from L objects for your date and time fields. There's a -conveniece component to actually do that though, try +convenience component to actually do that though, try L. It will handle all types of references except scalar references. It @@ -79,7 +79,8 @@ sub inflate_column { $self->throw_exception("inflate_column needs attr hashref") unless ref $attrs eq 'HASH'; $self->column_info($col)->{_inflate_info} = $attrs; - $self->mk_group_accessors('inflated_column' => [$self->column_info($col)->{accessor} || $col, $col]); + my $acc = $self->column_info($col)->{accessor}; + $self->mk_group_accessors('inflated_column' => [ (defined $acc ? $acc : $col), $col]); return 1; } @@ -113,7 +114,7 @@ sub _deflated_column { Fetch a column value in its inflated state. This is directly analogous to L in that it only fetches a -column already retreived from the database, and then inflates it. +column already retrieved from the database, and then inflates it. Throws an exception if the column requested is not an inflated column. =cut @@ -124,8 +125,11 @@ sub get_inflated_column { unless exists $self->column_info($col)->{_inflate_info}; return $self->{_inflated_column}{$col} if exists $self->{_inflated_column}{$col}; - return $self->{_inflated_column}{$col} = - $self->_inflated_column($col, $self->get_column($col)); + + my $val = $self->get_column($col); + return $val if ref $val eq 'SCALAR'; # that would be a not-yet-reloaded scalarref update + + return $self->{_inflated_column}{$col} = $self->_inflated_column($col, $val); } =head2 set_inflated_column @@ -175,7 +179,7 @@ sub store_inflated_column { =over 4 =item L - This component is loaded as part of the - "core" L components; generally there is no need to + C L components; generally there is no need to load it directly =back diff --git a/lib/DBIx/Class/InflateColumn/DateTime.pm b/lib/DBIx/Class/InflateColumn/DateTime.pm index 2b40608..ad3da46 100644 --- a/lib/DBIx/Class/InflateColumn/DateTime.pm +++ b/lib/DBIx/Class/InflateColumn/DateTime.pm @@ -15,15 +15,14 @@ Load this component and then declare one or more columns to be of the datetime, timestamp or date datatype. package Event; - __PACKAGE__->load_components(qw/InflateColumn::DateTime Core/); + use base 'DBIx::Class::Core'; + + __PACKAGE__->load_components(qw/InflateColumn::DateTime/); __PACKAGE__->add_columns( starts_when => { data_type => 'datetime' } create_date => { data_type => 'date' } ); -NOTE: You B load C B C. 
See -L for details. - Then you can treat the specified column as a L object. print "This event starts the month of ". @@ -137,23 +136,18 @@ sub register_column { } } - my $timezone; if ( defined $info->{extra}{timezone} ) { carp "Putting timezone into extra => { timezone => '...' } has been deprecated, ". "please put it directly into the '$column' column definition."; - $timezone = $info->{extra}{timezone}; + $info->{timezone} = $info->{extra}{timezone} unless defined $info->{timezone}; } - my $locale; if ( defined $info->{extra}{locale} ) { carp "Putting locale into extra => { locale => '...' } has been deprecated, ". "please put it directly into the '$column' column definition."; - $locale = $info->{extra}{locale}; + $info->{locale} = $info->{extra}{locale} unless defined $info->{locale}; } - $locale = $info->{locale} if defined $info->{locale}; - $timezone = $info->{timezone} if defined $info->{timezone}; - my $undef_if_invalid = $info->{datetime_undef_if_invalid}; if ($type eq 'datetime' || $type eq 'date' || $type eq 'timestamp') { @@ -179,21 +173,12 @@ sub register_column { $self->throw_exception ("Error while inflating ${value} for ${column} on ${self}: $err"); } - $dt->set_time_zone($timezone) if $timezone; - $dt->set_locale($locale) if $locale; - return $dt; + return $obj->_post_inflate_datetime( $dt, \%info ); }, deflate => sub { my ($value, $obj) = @_; - if ($timezone) { - carp "You're using a floating timezone, please see the documentation of" - . " DBIx::Class::InflateColumn::DateTime for an explanation" - if ref( $value->time_zone ) eq 'DateTime::TimeZone::Floating' - and not $info{floating_tz_ok} - and not $ENV{DBIC_FLOATING_TZ_OK}; - $value->set_time_zone($timezone); - $value->set_locale($locale) if $locale; - } + + $value = $obj->_pre_deflate_datetime( $value, \%info ); $obj->_deflate_from_datetime( $value, \%info ); }, } @@ -225,6 +210,33 @@ sub _datetime_parser { shift->result_source->storage->datetime_parser (@_); } +sub _post_inflate_datetime { + my( $self, $dt, $info ) = @_; + + $dt->set_time_zone($info->{timezone}) if defined $info->{timezone}; + $dt->set_locale($info->{locale}) if defined $info->{locale}; + + return $dt; +} + +sub _pre_deflate_datetime { + my( $self, $dt, $info ) = @_; + + if (defined $info->{timezone}) { + carp "You're using a floating timezone, please see the documentation of" + . " DBIx::Class::InflateColumn::DateTime for an explanation" + if ref( $dt->time_zone ) eq 'DateTime::TimeZone::Floating' + and not $info->{floating_tz_ok} + and not $ENV{DBIC_FLOATING_TZ_OK}; + + $dt->set_time_zone($info->{timezone}); + } + + $dt->set_locale($info->{locale}) if defined $info->{locale}; + + return $dt; +} + 1; __END__ diff --git a/lib/DBIx/Class/InflateColumn/File.pm b/lib/DBIx/Class/InflateColumn/File.pm index 1901187..951b76e 100644 --- a/lib/DBIx/Class/InflateColumn/File.pm +++ b/lib/DBIx/Class/InflateColumn/File.pm @@ -7,6 +7,17 @@ use File::Path; use File::Copy; use Path::Class; +use Carp::Clan qw/^DBIx::Class/; +carp 'InflateColumn::File has entered a deprecation cycle. This component ' + .'has a number of architectural deficiencies that can quickly drive ' + .'your filesystem and database out of sync and is not recommended ' + .'for further use. It will be retained for backwards ' + .'compatibility, but no new functionality patches will be accepted. ' + .'Please consider using the much more mature and actively maintained ' + .'DBIx::Class::InflateColumn::FS. 
You can set the environment variable ' + .'DBIC_IC_FILE_NOWARN to a true value to disable this warning.' +unless $ENV{DBIC_IC_FILE_NOWARN}; + __PACKAGE__->load_components(qw/InflateColumn/); sub register_column { @@ -107,13 +118,25 @@ sub _save_file_column { =head1 NAME -DBIx::Class::InflateColumn::File - map files from the Database to the filesystem. +DBIx::Class::InflateColumn::File - DEPRECATED (superseded by DBIx::Class::InflateColumn::FS) + +=head2 Deprecation Notice + + This component has a number of architectural deficiencies that can quickly + drive your filesystem and database out of sync and is not recommended for + further use. It will be retained for backwards compatibility, but no new + functionality patches will be accepted. Please consider using the much more + mature and actively supported DBIx::Class::InflateColumn::FS. You can set + the environment variable DBIC_IC_FILE_NOWARN to a true value to disable + this warning. =head1 SYNOPSIS In your L table class: - __PACKAGE__->load_components( "PK::Auto", "InflateColumn::File", "Core" ); + use base 'DBIx::Class::Core'; + + __PACKAGE__->load_components(qw/InflateColumn::File/); # define your columns __PACKAGE__->add_columns( @@ -174,7 +197,7 @@ InflateColumn::File =head2 _file_column_callback ($file,$ret,$target) -method made to be overridden for callback purposes. +Method made to be overridden for callback purposes. =cut diff --git a/lib/DBIx/Class/Manual/Component.pod b/lib/DBIx/Class/Manual/Component.pod index b8da6f7..398ef2e 100644 --- a/lib/DBIx/Class/Manual/Component.pod +++ b/lib/DBIx/Class/Manual/Component.pod @@ -12,31 +12,29 @@ itself creates, after the insert has happened. =head1 USING -Components are loaded using the load_components() method within your +Components are loaded using the load_components() method within your DBIx::Class classes. package My::Thing; - use base qw( DBIx::Class ); - __PACKAGE__->load_components(qw/ PK::Auto Core /); + use base qw( DBIx::Class::Core ); + __PACKAGE__->load_components(qw/InflateColumn::DateTime TimeStamp/); -Generally you do not want to specify the full package name -of a component, instead take off the DBIx::Class:: part of -it and just include the rest. If you do want to load a -component outside of the normal namespace you can do so +Generally you do not want to specify the full package name +of a component, instead take off the DBIx::Class:: part of +it and just include the rest. If you do want to load a +component outside of the normal namespace you can do so by prepending the component name with a +. __PACKAGE__->load_components(qw/ +My::Component /); -Once a component is loaded all of it's methods, or otherwise, +Once a component is loaded all of its methods, or otherwise, that it provides will be available in your class. -The order in which is you load the components may be -very important, depending on the component. The general -rule of thumb is to first load extra components and then -load core ones last. If you are not sure, then read the -docs for the components you are using and see if they -mention anything about the order in which you should load -them. +The order in which you load the components may be very +important, depending on the component. If you are not sure, +then read the docs for the components you are using and see +if they mention anything about the order in which you should +load them. =head1 CREATING COMPONENTS @@ -47,11 +45,11 @@ Making your own component is very easy. # Create methods, accessors, load other components, etc.
1; -When a component is loaded it is included in the calling -class' inheritance chain using L. As well as -providing custom utility methods, a component may also -override methods provided by other core components, like -L and others. For example, you +When a component is loaded it is included in the calling +class' inheritance chain using L. As well as +providing custom utility methods, a component may also +override methods provided by other core components, like +L and others. For example, you could override the insert and delete methods. sub insert { @@ -108,22 +106,22 @@ L - CRUD methods. =head2 Experimental -These components are under development, there interfaces may -change, they may not work, etc. So, use them if you want, but +These components are under development, their interfaces may +change, they may not work, etc. So, use them if you want, but be warned. L - Validate all data before submitting to your database. =head2 Core -These are the components that all, or nearly all, people will use -without even knowing it. These components provide most of +These are the components that all, or nearly all, people will use +without even knowing it. These components provide most of DBIx::Class' functionality. -L - Lets you build groups of accessors. - L - Loads various components that "most people" would want. +L - Lets you build groups of accessors. + L - Non-recommended classdata schema component. L - Automatically create objects from column data. diff --git a/lib/DBIx/Class/Manual/Cookbook.pod b/lib/DBIx/Class/Manual/Cookbook.pod index 2769ded..16fa647 100644 --- a/lib/DBIx/Class/Manual/Cookbook.pod +++ b/lib/DBIx/Class/Manual/Cookbook.pod @@ -113,9 +113,8 @@ almost like you would define a regular ResultSource. package My::Schema::Result::UserFriendsComplex; use strict; use warnings; - use base qw/DBIx::Class/; + use base qw/DBIx::Class::Core/; - __PACKAGE__->load_components('Core'); __PACKAGE__->table_class('DBIx::Class::ResultSource::View'); # ->table, ->add_columns, etc. @@ -142,7 +141,7 @@ Next, you can execute your complex query using bind parameters like this: ); ... and you'll get back a perfect L (except, of course, -that you cannot modify the rows it contains, ie. cannot call L, +that you cannot modify the rows it contains, e.g. cannot call L, L, ... on it). Note that you cannot have bind parameters unless is_virtual is set to true. @@ -202,7 +201,7 @@ to access the returned value: # SELECT name name, LENGTH( name ) # FROM artist -Note that the C attribute B with the sql +Note that the C attribute B with the SQL syntax C< SELECT foo AS bar > (see the documentation in L). You can control the C part of the generated SQL via the C<-as> field attribute as follows: @@ -318,7 +317,7 @@ Please see L documentation if you are in any way unsure about the use of the attributes above (C< join >, C< select >, C< as > and C< group_by >). -=head2 Subqueries (EXPERIMENTAL) +=head2 Subqueries You can write subqueries relatively easily in DBIC. @@ -330,7 +329,7 @@ You can write subqueries relatively easily in DBIC. artist_id => { 'IN' => $inside_rs->get_column('id')->as_query }, }); -The usual operators ( =, !=, IN, NOT IN, etc) are supported. +The usual operators ( =, !=, IN, NOT IN, etc.) are supported. B: You have to explicitly use '=' when doing an equality comparison. The following will B work: @@ -366,10 +365,6 @@ That creates the following SQL: WHERE artist_id = me.artist_id ) -=head3 EXPERIMENTAL - -Please note that subqueries are considered an experimental feature. 
- =head2 Predefined searches You can write your own L class by inheriting from it @@ -391,11 +386,16 @@ and defining often used searches as methods: 1; -To use your resultset, first tell DBIx::Class to create an instance of it -for you, in your My::DBIC::Schema::CD class: +If you're using L, simply place the file +into the C directory next to your C directory, and it will +be automatically loaded. + +If however you are still using L, first tell +DBIx::Class to create an instance of the ResultSet class for you, in your +My::DBIC::Schema::CD class: # class definition as normal - __PACKAGE__->load_components(qw/ Core /); + use base 'DBIx::Class::Core'; __PACKAGE__->table('cd'); # tell DBIC to use the custom ResultSet class @@ -411,7 +411,7 @@ Then call your new method in your code: Using SQL functions on the left hand side of a comparison is generally not a good idea since it requires a scan of the entire table. (Unless your RDBMS -supports indexes on expressions - including return values of functions -, and +supports indexes on expressions - including return values of functions - and you create an index on the return value of the function in question.) However, it can be accomplished with C when necessary. @@ -771,7 +771,7 @@ B package My::App::Schema; - use base DBIx::Class::Schema; + use base 'DBIx::Class::Schema'; # load subclassed classes from My::App::Schema::Result/ResultSet __PACKAGE__->load_namespaces; @@ -791,7 +791,7 @@ B use strict; use warnings; - use base My::Shared::Model::Result::Baz; + use base 'My::Shared::Model::Result::Baz'; # WARNING: Make sure you call table() again in your subclass, # otherwise DBIx::Class::ResultSourceProxy::Table will not be called @@ -814,7 +814,7 @@ this example we have a single user table that carries a boolean bit for admin. We would like to give the admin users objects (L) the same methods as a regular user but also special admin only methods. It doesn't make sense to create two -seperate proxy-class files for this. We would be copying all the user +separate proxy-class files for this. We would be copying all the user methods into the Admin class. There is a cleaner way to accomplish this. @@ -842,13 +842,11 @@ B use strict; use warnings; - use base qw/DBIx::Class/; + use base qw/DBIx::Class::Core/; ### Define what our admin class is, for ensure_class_loaded() my $admin_class = __PACKAGE__ . 
'::Admin'; - __PACKAGE__->load_components(qw/Core/); - __PACKAGE__->table('users'); __PACKAGE__->add_columns(qw/user_id email password @@ -1090,8 +1088,7 @@ If you want to get a filtered result set, you can just add add to $attr as follo This is straightforward using L: package My::User; - use base 'DBIx::Class'; - __PACKAGE__->load_components('Core'); + use base 'DBIx::Class::Core'; __PACKAGE__->table('user'); __PACKAGE__->add_columns(qw/id name/); __PACKAGE__->set_primary_key('id'); @@ -1099,8 +1096,7 @@ This is straightforward using Lmany_to_many('addresses' => 'user_address', 'address'); package My::UserAddress; - use base 'DBIx::Class'; - __PACKAGE__->load_components('Core'); + use base 'DBIx::Class::Core'; __PACKAGE__->table('user_address'); __PACKAGE__->add_columns(qw/user address/); __PACKAGE__->set_primary_key(qw/user address/); @@ -1108,8 +1104,7 @@ This is straightforward using Lbelongs_to('address' => 'My::Address'); package My::Address; - use base 'DBIx::Class'; - __PACKAGE__->load_components('Core'); + use base 'DBIx::Class::Core'; __PACKAGE__->table('address'); __PACKAGE__->add_columns(qw/id street town area_code country/); __PACKAGE__->set_primary_key('id'); @@ -1140,8 +1135,7 @@ To accomplish this one only needs to specify the DB schema name in the table declaration, like so... package MyDatabase::Main::Artist; - use base qw/DBIx::Class/; - __PACKAGE__->load_components(qw/PK::Auto Core/); + use base qw/DBIx::Class::Core/; __PACKAGE__->table('database1.artist'); # will use "database1.artist" in FROM clause @@ -1257,9 +1251,101 @@ example of the recommended way to use it: Nested transactions will work as expected. That is, only the outermost transaction will actually issue a commit to the $dbh, and a rollback at any level of any transaction will cause the entire nested -transaction to fail. Support for savepoints and for true nested -transactions (for databases that support them) will hopefully be added -in the future. +transaction to fail. + +=head2 Nested transactions and auto-savepoints + +If savepoints are supported by your RDBMS, it is possible to achieve true +nested transactions with minimal effort. To enable auto-savepoints via nested +transactions, supply the C<< auto_savepoint = 1 >> connection attribute. + +Here is an example of true nested transactions. In the example, we start a big +task which will create several rows. Generation of data for each row is a +fragile operation and might fail. If we fail creating something, depending on +the type of failure, we want to abort the whole task, or only skip the failed +row. + + my $schema = MySchema->connect("dbi:Pg:dbname=my_db"); + + # Start a transaction. Every database change from here on will only be + # committed into the database if the eval block succeeds. + eval { + $schema->txn_do(sub { + # SQL: BEGIN WORK; + + my $job = $schema->resultset('Job')->create({ name=> 'big job' }); + # SQL: INSERT INTO job ( name) VALUES ( 'big job' ); + + for (1..10) { + + # Start a nested transaction, which in fact sets a savepoint. + eval { + $schema->txn_do(sub { + # SQL: SAVEPOINT savepoint_0; + + my $thing = $schema->resultset('Thing')->create({ job=>$job->id }); + # SQL: INSERT INTO thing ( job) VALUES ( 1 ); + + if (rand > 0.8) { + # This will generate an error, thus setting $@ + + $thing->update({force_fail=>'foo'}); + # SQL: UPDATE thing SET force_fail = 'foo' + # WHERE ( id = 42 ); + } + }); + }; + if ($@) { + # SQL: ROLLBACK TO SAVEPOINT savepoint_0; + + # There was an error while creating a $thing. 
Depending on the error + # we want to abort the whole transaction, or only rollback the + # changes related to the creation of this $thing + + # Abort the whole job + if ($@ =~ /horrible_problem/) { + print "something horrible happened, aborting job!"; + die $@; # rethrow error + } + + # Ignore this $thing, report the error, and continue with the + # next $thing + print "Cannot create thing: $@"; + } + # There was no error, so save all changes since the last + # savepoint. + + # SQL: RELEASE SAVEPOINT savepoint_0; + } + }); + }; + if ($@) { + # There was an error while handling the $job. Rollback all changes + # since the transaction started, including the already committed + # ('released') savepoints. There will be neither a new $job nor any + # $thing entry in the database. + + # SQL: ROLLBACK; + + print "ERROR: $@\n"; + } + else { + # There was no error while handling the $job. Commit all changes. + # Only now other connections can see the newly created $job and + # @things. + + # SQL: COMMIT; + + print "Ok\n"; + } + +In this example it might be hard to see where the rollbacks, releases and +commits are happening, but it works just the same as for plain L<>: If +the C-block around C fails, a rollback is issued. If the C +succeeds, the transaction is committed (or the savepoint released). + +While you can get more fine-grained control using C, C +and C, it is strongly recommended to use C with coderefs. =head1 SQL @@ -1296,7 +1382,7 @@ MySQL, SQLite and PostgreSQL, using the $VERSION from your Schema.pm. To create a new database using the schema: my $schema = My::Schema->connect($dsn); - $schema->deploy({ add_drop_tables => 1}); + $schema->deploy({ add_drop_table => 1}); To import created .sql files using the mysql client: @@ -1334,8 +1420,7 @@ Make a table class as you would for any other table package MyAppDB::Dual; use strict; use warnings; - use base 'DBIx::Class'; - __PACKAGE__->load_components("Core"); + use base 'DBIx::Class::Core'; __PACKAGE__->table("Dual"); __PACKAGE__->add_columns( "dummy", @@ -1536,10 +1621,10 @@ B Add the L schema component to your Schema class. This will add a new table to your database called C which will keep track of which version is installed -and warn if the user trys to run a newer schema version than the +and warn if the user tries to run a newer schema version than the database thinks it has. -Alternatively, you can send the conversion sql scripts to your +Alternatively, you can send the conversion SQL scripts to your customers as above. =head2 Setting quoting for the generated SQL @@ -1621,7 +1706,7 @@ methods: } ); -In conditions (eg. C<\%cond> in the L family of +In conditions (e.g. C<\%cond> in the L family of methods) you cannot directly use array references (since this is interpreted as a list of values to be Ced), but you can use the following syntax to force passing them as bind values: @@ -1736,12 +1821,27 @@ You can accomplish this by overriding C on your objects: sub insert { my ( $self, @args ) = @_; $self->next::method(@args); - $self->cds->new({})->fill_from_artist($self)->insert; + $self->create_related ('cds', \%initial_cd_data ); return $self; } -where C is a method you specify in C which sets -values in C based on the data in the C object you pass in.
+If you want to wrap the two inserts in a transaction (for consistency, +an excellent idea), you can use the awesome +L: + + sub insert { + my ( $self, @args ) = @_; + + my $guard = $self->result_source->schema->txn_scope_guard; + + $self->next::method(@args); + $self->create_related ('cds', \%initial_cd_data ); + + $guard->commit; + + return $self + } + =head2 Wrapping/overloading a column accessor @@ -1920,15 +2020,15 @@ details on creating static schemas from a database). Typically L result classes start off with - use base qw/DBIx::Class/; - __PACKAGE__->load_components(qw/InflateColumn::DateTime Core/); + use base qw/DBIx::Class::Core/; + __PACKAGE__->load_components(qw/InflateColumn::DateTime/); If this preamble is moved into a common base class:- package MyDBICbase; - use base qw/DBIx::Class/; - __PACKAGE__->load_components(qw/InflateColumn::DateTime Core/); + use base qw/DBIx::Class::Core/; + __PACKAGE__->load_components(qw/InflateColumn::DateTime/); 1; and each result class then uses this as a base:- diff --git a/lib/DBIx/Class/Manual/Example.pod b/lib/DBIx/Class/Manual/Example.pod index 5d8980f..fe2cf9e 100644 --- a/lib/DBIx/Class/Manual/Example.pod +++ b/lib/DBIx/Class/Manual/Example.pod @@ -58,7 +58,7 @@ Save the following into a example.sql in the directory db title TEXT NOT NULL ); -and create the sqlite database file: +and create the SQLite database file: sqlite3 example.db < example.sql @@ -89,8 +89,7 @@ MyDatabase/Main.pm: MyDatabase/Main/Result/Artist.pm: package MyDatabase::Main::Result::Artist; - use base qw/DBIx::Class/; - __PACKAGE__->load_components(qw/Core/); + use base qw/DBIx::Class::Core/; __PACKAGE__->table('artist'); __PACKAGE__->add_columns(qw/ artistid name /); __PACKAGE__->set_primary_key('artistid'); @@ -102,8 +101,8 @@ MyDatabase/Main/Result/Artist.pm: MyDatabase/Main/Result/Cd.pm: package MyDatabase::Main::Result::Cd; - use base qw/DBIx::Class/; - __PACKAGE__->load_components(qw/Core/); + use base qw/DBIx::Class::Core/; + __PACKAGE__->load_components(qw/InflateColumn::DateTime/); __PACKAGE__->table('cd'); __PACKAGE__->add_columns(qw/ cdid artist title/); __PACKAGE__->set_primary_key('cdid'); @@ -116,10 +115,9 @@ MyDatabase/Main/Result/Cd.pm: MyDatabase/Main/Result/Track.pm: package MyDatabase::Main::Result::Track; - use base qw/DBIx::Class/; - __PACKAGE__->load_components(qw/Core/); + use base qw/DBIx::Class::Core/; __PACKAGE__->table('track'); - __PACKAGE__->add_columns(qw/ trackid cd title/); + __PACKAGE__->add_columns(qw/ trackid cd title /); __PACKAGE__->set_primary_key('trackid'); __PACKAGE__->belongs_to('cd' => 'MyDatabase::Main::Result::Cd'); @@ -200,7 +198,7 @@ testdb.pl: use strict; my $schema = MyDatabase::Main->connect('dbi:SQLite:db/example.db'); - # for other DSNs, e.g. MySql, see the perldoc for the relevant dbd + # for other DSNs, e.g. MySQL, see the perldoc for the relevant dbd # driver, e.g perldoc L. get_tracks_by_cd('Bad'); @@ -347,7 +345,7 @@ It should output: =head1 Notes -A reference implentation of the database and scripts in this example +A reference implementation of the database and scripts in this example are available in the main distribution for DBIx::Class under the directory F. diff --git a/lib/DBIx/Class/Manual/FAQ.pod b/lib/DBIx/Class/Manual/FAQ.pod index 6d35ae6..464040d 100644 --- a/lib/DBIx/Class/Manual/FAQ.pod +++ b/lib/DBIx/Class/Manual/FAQ.pod @@ -126,7 +126,7 @@ allow you to supply a hashref containing the condition across which the tables are to be joined. The condition may contain as many fields as you like. 
See L. -=item .. define a relatiopnship across an intermediate table? (many-to-many) +=item .. define a relationship across an intermediate table? (many-to-many) Read the documentation on L. @@ -182,15 +182,9 @@ attribute. See L. =item .. sort my results based on fields I've aliased using C? -You don't. You'll need to supply the same functions/expressions to -C, as you did to C attribute, such as: - - ->search({}, { select => [ \'now() AS currenttime'] }) - -Then you can use the alias in your C attribute. +You didn't alias anything, since L +B with the produced SQL. See +L for details. =item .. group the results of my search? @@ -199,15 +193,7 @@ attribute, see L. =item .. group my results based on fields I've aliased using C? -You don't. You'll need to supply the same functions/expressions to -C, as you did to C attribute, such as: - - ->search({}, { select => [ \'now() AS currenttime'] }) - -Then you can use the alias in your C attribute. +You don't. See the explanation on ordering by an alias above. =item .. filter the results of my search? @@ -433,6 +419,38 @@ data out. =back +=head2 Custom methods in Result classes +
+You can add custom methods that do arbitrary things, even to unrelated tables.
+For example, to provide a C<< $book->foo() >> method which searches the
+cd table, you could add this to Book.pm:
+
+  sub foo {
+    my ($self, $col_data) = @_;
+    return $self->result_source->schema->resultset('cd')->search($col_data);
+  }
+
+And invoke that on any Book Result object like so:
+
+  my $rs = $book->foo({ title => 'Down to Earth' });
+
+When two tables ARE related, L provides many
+methods to find or create data in related tables for you. But if you want to
+write your own methods, you can.
+
+For example, to provide a C<< $book->foo() >> method to manually implement
+what create_related() from L does, you could
+add this to Book.pm:
+
+  sub foo {
+    my ($self, $relname, $col_data) = @_;
+    return $self->related_resultset($relname)->create($col_data);
+  }
+
+Invoked like this:
+
+  my $author = $book->foo('author', { name => 'Fred' });
+ =head2 Misc =over 4 @@ -520,6 +538,65 @@ You can reduce the overhead of object creation within L using the tips in L and L +=item How do I override a run time method (e.g. a relationship accessor)? +
+If you need access to the original accessor, then you must "wrap around" the original method.
+You can do that either with L or L.
+The code example works for both modules:
+
+    package Your::Schema::Group;
+    use Class::Method::Modifiers;
+    
+    # ... declare columns ...
+    
+    __PACKAGE__->has_many('group_servers', 'Your::Schema::GroupServer', 'group_id');
+    __PACKAGE__->many_to_many('servers', 'group_servers', 'server');
+    
+    # if the server group is a "super group", then return all servers
+    # otherwise return only servers that belong to the given group
+    around 'servers' => sub {
+      my $orig = shift;
+      my $self = shift;
+
+      return $self->$orig(@_) unless $self->is_super_group;
+      return $self->result_source->schema->resultset('Server')->all;
+    };
+
+If you just want to override the original method, and don't care about the data
+from the original accessor, then you have two options. Either use
+L that does most of the work for you, or do
+it the "dirty way".
+
+L way:
+
+    package Your::Schema::Group;
+    use Method::Signatures::Simple;
+    
+    # ... declare columns ...
+ + __PACKAGE__->has_many('group_servers', 'Your::Schema::GroupServer', 'group_id'); + __PACKAGE__->many_to_many('servers', 'group_servers', 'server'); + + # The method keyword automatically injects the annoying my $self = shift; for you. + method servers { + return $self->result_source->schema->resultset('Server')->search({ ... }); + } + +The dirty way: + + package Your::Schema::Group; + use Sub::Name; + + # ... declare columns ... + + __PACKAGE__->has_many('group_servers', 'Your::Schema::GroupServer', 'group_id'); + __PACKAGE__->many_to_many('servers', 'group_servers', 'server'); + + *servers = subname servers => sub { + my $self = shift; + return $self->result_source->schema->resultset('Server')->search({ ... }); + }; + =back =head2 Notes for CDBI users @@ -550,7 +627,7 @@ Likely you have/had two copies of postgresql installed simultaneously, the second one will use a default port of 5433, while L is compiled with a default port of 5432. -You can chance the port setting in C. +You can change the port setting in C. =item I've lost or forgotten my mysql password diff --git a/lib/DBIx/Class/Manual/Intro.pod b/lib/DBIx/Class/Manual/Intro.pod index fa33614..18347d8 100644 --- a/lib/DBIx/Class/Manual/Intro.pod +++ b/lib/DBIx/Class/Manual/Intro.pod @@ -105,13 +105,18 @@ required resultset classes. Next, create each of the classes you want to load as specified above: package My::Schema::Result::Album; - use base qw/DBIx::Class/; + use base qw/DBIx::Class::Core/; -Load any components required by each class with the load_components() method. -This should consist of "Core" plus any additional components you want to use. -For example, if you want to force columns to use UTF-8 encoding: +Load any additional components you may need with the load_components() method, +and provide component configuration if required. For example, if you want +automatic row ordering: - __PACKAGE__->load_components(qw/ ForceUTF8 Core /); + __PACKAGE__->load_components(qw/ Ordered /); + __PACKAGE__->position_column('rank'); + +Ordered will refer to a field called 'position' unless otherwise directed. Here you are defining +the ordering field to be named 'rank'. (NOTE: Insert errors may occur if you use the Ordered +component but have neither defined a position column nor have a 'position' field in your row.) Set the table for your class: @@ -119,7 +124,7 @@ Set the table for your class: Add columns to your class: - __PACKAGE__->add_columns(qw/ albumid artist title /); + __PACKAGE__->add_columns(qw/ albumid artist title rank /); Each column can also be set up with its own accessor, data_type and other pieces of information that it may be useful to have -- just pass C a hash: @@ -145,13 +150,20 @@ of information that it may be useful to have -- just pass C a hash: is_nullable => 0, is_auto_increment => 0, default_value => '', + }, + rank => + { data_type => 'integer', + size => 16, + is_nullable => 0, + is_auto_increment => 0, + default_value => '', + } ); DBIx::Class doesn't directly use most of this data yet, but various related modules such as L make use of it. Also it allows you to create your database tables from your Schema, instead of the other way around. -See L for details. +See L for details. See L for more details of the possible column attributes. @@ -217,7 +229,7 @@ second database you want to access: Note that L does not cache connections for you. If you use multiple connections, you need to do this manually.
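As an aside, here is a minimal sketch of such manual connection caching (the helper name, package name and DSNs are hypothetical, not part of the original manual):

    my %schema_cache;

    sub cached_connect {
        my ($dsn, @args) = @_;
        # reuse the schema object already built for this DSN, else connect anew
        return $schema_cache{$dsn} ||= My::Schema->connect($dsn, @args);
    }

    my $schema         = cached_connect('dbi:SQLite:db/main.db');
    my $another_schema = cached_connect('dbi:SQLite:db/other.db'); # independent connection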
-To execute some sql statements on every connect you can add them as an option in +To execute some SQL statements on every connect you can add them as an option in a special fifth argument to connect: my $another_schema = My::Schema->connect( @@ -389,6 +401,53 @@ L. =head1 NOTES +=head2 The Significance and Importance of Primary Keys + +The concept of a L in +DBIx::Class warrants special discussion. The formal definition (which somewhat +resembles that of a classic RDBMS) is I. However this is where the +similarity ends. Any time you call a CRUD operation on a row (e.g. +L, +L, +L, +etc.) DBIx::Class will use the values of the +L columns to populate +the C clause necessary to accomplish the operation. This is why it is +important to declare a L +on all your result sources B. +In a pinch one can always declare each row identifiable by all its columns: + + __PACKAGE__->set_primary_key (__PACKAGE__->columns); + +Note that DBIx::Class is smart enough to store a copy of the PK values before +any row-object changes take place, so even if you change the values of PK +columns the C clause will remain correct. + +If you elect not to declare a C, DBIx::Class will behave correctly +by throwing exceptions on any row operation that relies on unique identifiable +rows. If you inherited datasets with multiple identical rows in them, you can +still operate with such sets provided you only utilize +L CRUD methods: +L, +L, +L + +For example, the following would not work (assuming C does not have +a declared PK): + + my $row = $schema->resultset('People') + ->search({ last_name => 'Dantes' }) + ->next; + $row->update({ children => 2 }); # <-- exception thrown because $row isn't + # necessarily unique + +So instead the following should be done: + + $schema->resultset('People') + ->search({ last_name => 'Dantes' }) + ->update({ children => 2 }); # <-- updates ALL Dantes to have children of 2 + =head2 Problems on RHEL5/CentOS5 There used to be an issue with the system perl on Red Hat Enterprise diff --git a/lib/DBIx/Class/Manual/Joining.pod b/lib/DBIx/Class/Manual/Joining.pod index 2a03c1a..4bf3331 100644 --- a/lib/DBIx/Class/Manual/Joining.pod +++ b/lib/DBIx/Class/Manual/Joining.pod @@ -17,7 +17,7 @@ instead. Skip this part if you know what joins are.. But I'll explain anyway. Assuming you have created your database in a more or less sensible way, you will end up with several tables that contain C information. For example, you may have a table -containing information about C, containing the CD title and it's +containing information about Cs, containing the CD title and its year of publication, and another table containing all the Cs for the CDs, one track per row. @@ -34,7 +34,8 @@ to fetch the tracks, or you can use a join. Compare: So, joins are a way of extending simple select statements to include fields from other, related, tables. There are various types of joins, depending on which combination of the data you wish to retrieve, see -MySQL's doc on JOINs: L. +MySQL's doc on JOINs: +L. =head1 DEFINING JOINS AND RELATIONSHIPS @@ -42,7 +43,7 @@ In L each relationship between two tables needs to first be defined in the L for the table. If the relationship needs to be accessed in both directions (i.e. Fetch all tracks of a CD, and fetch the CD data for a Track), -then it needs to be defined in both tables. +then it needs to be defined for both tables. For the CDs/Tracks example, that means writing, in C: @@ -68,14 +69,15 @@ L docs.
When performing either a L or a L operation, you can specify which -C to also fetch data from (or sort by), using the +C to also refine your results based on, using the L attribute, like this: $schema->resultset('CD')->search( - { 'Title' => 'Funky CD' }, + { 'Title' => 'Funky CD', + 'tracks.Name' => { like => 'T%' } + }, { join => 'tracks', - '+select' => [ 'tracks.Name', 'tracks.Artist' ], - '+as' => [ 'TrackName', 'ArtistName' ] + order_by => ['tracks.id'], } ); @@ -84,17 +86,124 @@ read L and L, but here's a quick break down: The first argument to search is a hashref of the WHERE attributes, in -this case a simple restriction on the Title column. The second -argument is a hashref of attributes to the search, '+select' adds -extra columns to the select (from the joined table(s) or from -calculations), and '+as' gives aliases to those fields. +this case a restriction on the Title column in the CD table, and a +restriction on the name of the track in the Tracks table, but ONLY for +tracks actually related to the chosen CD(s). The second argument is a +hashref of attributes to the search; the results will be returned +sorted by the C of the related tracks. + +The special 'join' attribute specifies which C to +include in the query. The distinction between C and +C is important here, only the C names are valid. + +This slightly nonsensical example will produce SQL similar to: + + SELECT cd.ID, cd.Title, cd.Year FROM CD cd JOIN Tracks tracks ON cd.ID = tracks.CDID WHERE cd.Title = 'Funky CD' AND tracks.Name LIKE 'T%' ORDER BY 'tracks.id'; + +=head1 FETCHING RELATED DATA + +Another common use for joining to related tables is to fetch the data +from both tables in one query, preventing extra round-trips to the +database. See the example above in L. + +Three techniques are described here. Of the three, only the +C technique will deal sanely with fetching related objects +over a C relation. The others work fine for 1 to 1 type +relationships. + +=head2 Whole related objects + +To fetch entire related objects, e.g. CDs and all Track data, use the +'prefetch' attribute: + + $schema->resultset('CD')->search( + { 'Title' => 'Funky CD', + }, + { prefetch => 'tracks', + order_by => ['tracks.id'], + } + ); + +This will produce SQL similar to the following: + + SELECT cd.ID, cd.Title, cd.Year, tracks.id, tracks.Name, tracks.Artist FROM CD JOIN Tracks ON CD.ID = tracks.CDID WHERE cd.Title = 'Funky CD' ORDER BY 'tracks.id'; + +The syntax of 'prefetch' is the same as 'join' and implies the +joining, so there is no need to use both together. + +=head2 Subset of related fields + +To fetch a subset of the related fields, the '+select' and '+as' +attributes can be used. For example, if the CD data is required and +just the track name from the Tracks table: + + $schema->resultset('CD')->search( + { 'Title' => 'Funky CD', + }, + { join => 'tracks', + '+select' => ['tracks.Name'], + '+as' => ['track_name'], + order_by => ['tracks.id'], + } + ); + +Which will produce the query: + + SELECT cd.ID, cd.Title, cd.Year, tracks.Name FROM CD JOIN Tracks ON CD.ID = tracks.CDID WHERE cd.Title = 'Funky CD' ORDER BY 'tracks.id'; + +Note that the '+as' does not produce an SQL 'AS' keyword in the +output, see the L for an explanation.
+ +This type of column restriction has a downside: the resulting $row +object will have no 'track_name' accessor: + + while(my $row = $search_rs->next) { + print $row->track_name; ## ERROR + } + +Instead C must be used: + + while(my $row = $search_rs->next) { + print $row->get_column('track_name'); ## WORKS + } + +=head2 Incomplete related objects + +In rare circumstances, you may also wish to fetch related data as +incomplete objects. The usual reason to do this is when the related table +has a very large field you don't need for the current data +output. This is better solved by storing that field in a separate +table which you only join to when needed. + +To fetch an incomplete related object, supply the dotted notation to the '+as' attribute: + + $schema->resultset('CD')->search( + { 'Title' => 'Funky CD', + }, + { join => 'tracks', + '+select' => ['tracks.Name'], + '+as' => ['tracks.Name'], + order_by => ['tracks.id'], + } + ); + +Which will produce the same query as above: + + SELECT cd.ID, cd.Title, cd.Year, tracks.Name FROM CD JOIN Tracks ON CD.ID = tracks.CDID WHERE cd.Title = 'Funky CD' ORDER BY 'tracks.id'; + +Now you can access the result using the relationship accessor: + + while(my $row = $search_rs->next) { + print $row->tracks->name; ## WORKS + } -'join' specifies which C to include in the query. The -distinction between C and C is important here, -only the C names are valid. +However, this will produce broken objects. If the tracks id column is +not fetched, the object will not be usable for any operation other +than reading its data. Use the L method as +much as possible to avoid confusion in your code later. -This example should magically produce SQL like the second select in -L above. +Broken means: Update will not work. Fetching other related objects +will not work. Deleting the object will not work. =head1 COMPLEX JOINS AND STUFF @@ -114,18 +223,16 @@ The search: $schema->resultset('CD')->search( { 'Title' => 'Funky CD' }, { join => { 'tracks' => 'artist' }, - '+select' => [ 'tracks.Name', 'artist.Artist' ], - '+as' => [ 'TrackName', 'ArtistName' ] } ); Which is: - SELECT me.ID, me.Title, me.Year, tracks.Name, artist.Artist FROM CD me JOIN Tracks tracks ON CD.ID = tracks.CDID JOIN Artists artist ON tracks.ArtistID = artist.ID WHERE me.Title = 'Funky CD'; + SELECT me.ID, me.Title, me.Year FROM CD me JOIN Tracks tracks ON CD.ID = tracks.CDID JOIN Artists artist ON tracks.ArtistID = artist.ID WHERE me.Title = 'Funky CD'; To perform joins using relations of the tables you are joining to, use a hashref to indicate the join depth. This can theoretically go as -deep as you like (warning, contrived examples!): +deep as you like (warning: contrived examples!): join => { room => { table => 'leg' } } @@ -147,12 +254,10 @@ you need to add grouping or ordering to your queries: { 'Title' => 'Funky CD' }, { join => { 'tracks' => 'artist' }, order_by => [ 'tracks.Name', 'artist.Artist' ], - '+select' => [ 'tracks.Name', 'artist.Artist' ], - '+as' => [ 'TrackName', 'ArtistName' ] } ); - SELECT me.ID, me.Title, me.Year, tracks.Name, artist.Artist FROM CD me JOIN Tracks tracks ON CD.ID = tracks.CDID JOIN Artists artist ON tracks.ArtistID = artist.ID WHERE me.Title = 'Funky CD' ORDER BY tracks.Name, artist.Artist; + SELECT me.ID, me.Title, me.Year FROM CD me JOIN Tracks tracks ON CD.ID = tracks.CDID JOIN Artists artist ON tracks.ArtistID = artist.ID WHERE me.Title = 'Funky CD' ORDER BY tracks.Name, artist.Artist; This is essential if any of your tables have columns with the same names.
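Pulling the techniques above together, here is a brief sketch (assuming the CD/Tracks classes used throughout this document) of the prefetch approach, which avoids both the missing-accessor and the broken-object pitfalls:

    my $rs = $schema->resultset('CD')->search(
      { 'me.Title' => 'Funky CD' },
      { prefetch => 'tracks' },  # implies the join and fetches whole Track objects
    );

    while (my $cd = $rs->next) {
      # no extra SELECT is issued per CD here - the tracks arrived
      # in the single prefetched query
      print $_->Name, "\n" for $cd->tracks;
    }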
diff --git a/lib/DBIx/Class/Manual/Reading.pod b/lib/DBIx/Class/Manual/Reading.pod index 02b6dcd..bcd4610 100644 --- a/lib/DBIx/Class/Manual/Reading.pod +++ b/lib/DBIx/Class/Manual/Reading.pod @@ -17,14 +17,14 @@ additions are consistent with the rest of the documentation. Methods should be documented in the files which also contain the code for the method, or that file should be hidden from PAUSE completely, in which case the methods are documented in the file which loads -it. Methods may also be documented and refered to in files +it. Methods may also be documented and referred to in files representing the major objects or components on which they can be called. For example, L documents the methods actually coded in the helper relationship classes like DBIx::Class::Relationship::BelongsTo. The BelongsTo file itself is -hidden from pause as it has no documentation. The accessors created by +hidden from PAUSE as it has no documentation. The accessors created by relationships should be mentioned in L, the major object that they will be called on. @@ -46,7 +46,7 @@ of the arguments the method is expected to take, and an indication of what the method returns. The first item provides a list of all possible values for the -arguments of the method in order, separated by C<, >, preceeded by the +arguments of the method in order, separated by C<, >, preceded by the text "Arguments: " Example (for the belongs_to relationship): @@ -145,10 +145,10 @@ self-explanatory enough to not require it. Use best judgement. =item * The argument list is followed by some examples of how to use the -method, using it's various types of arguments. +method, using its various types of arguments. The examples can also include ways to use the results if -applicable. For instance if the documentation is for a relationship +applicable. For instance, if the documentation is for a relationship type, the examples can include how to call the resulting relation accessor, how to use the relation name in a search and so on. diff --git a/lib/DBIx/Class/Manual/Troubleshooting.pod b/lib/DBIx/Class/Manual/Troubleshooting.pod index 56bcc01..820359d 100644 --- a/lib/DBIx/Class/Manual/Troubleshooting.pod +++ b/lib/DBIx/Class/Manual/Troubleshooting.pod @@ -23,7 +23,7 @@ To send the output somewhere else set debugfh:- $schema->storage->debugfh(IO::File->new('/tmp/trace.out', 'w')); -Alternatively you can do this with the environment variable too:- +Alternatively you can do this with the environment variable, too:- export DBIC_TRACE="1=/tmp/trace.out" @@ -51,9 +51,8 @@ L version 1.50 and L 1.43 are known to work. There's likely a syntax error in the table class referred to elsewhere in this error message. In particular make sure that the package -declaration is correct, so for a schema C< MySchema > you need to -specify a fully qualified namespace: C< package MySchema::MyTable; > -for example. +declaration is correct. For example, for a schema C< MySchema > +you need to specify a fully qualified namespace: C< package MySchema::MyTable; >. =head2 syntax error at or near "" ... @@ -100,28 +99,20 @@ The solution is to enable quoting - see L for details. -Note that quoting may lead to problems with C clauses, see -L<... column "foo DESC" does not exist ...> for info on avoiding those. - =head2 column "foo DESC" does not exist ... -This can happen if you've turned on quoting and then done something like -this: +This can happen if you are still using the obsolete order hack, and also +happen to turn on SQL-quoting.
$rs->search( {}, { order_by => [ 'name DESC' ] } ); -This results in SQL like this: - - ... ORDER BY "name DESC" - -The solution is to pass your order_by items as scalar references to avoid -quoting: - - $rs->search( {}, { order_by => [ \'name DESC' ] } ); +Since L >= 0.08100 and L >= 1.50 the above +should be written as: -Now you'll get SQL like this: + $rs->search( {}, { order_by => { -desc => 'name' } } ); - ... ORDER BY name DESC +For more ways to express order clauses refer to +L =head2 Perl Performance Issues on Red Hat Systems @@ -141,15 +132,15 @@ with full current updates will not be subject to this problem):- Fedora 8 - perl-5.8.8-41.fc8 RHEL5 - perl-5.8.8-15.el5_2.1 -The issue is due to perl doing an exhaustive search of blessed objects +This issue is due to perl doing an exhaustive search of blessed objects under certain circumstances. The problem shows up as performance -degredation exponential to the number of L row objects in -memory, so can be unoticeable with certain data sets, but with huge +degradation exponential to the number of L row objects in +memory, so can be unnoticeable with certain data sets, but with huge performance impacts on other datasets. -A pair of tests for susceptability to the issue, and performance effects +A pair of tests for susceptibility to the issue and performance effects of the bless/overload problem can be found in the L test -suite in the file C +suite, in the C file. Further information on this issue can be found in L, @@ -158,7 +149,7 @@ L =head2 Excessive Memory Allocation with TEXT/BLOB/etc. Columns and Large LongReadLen -It has been observed, using L, that a creating a L +It has been observed, using L, that creating a L object which includes a column of data type TEXT/BLOB/etc. will allocate LongReadLen bytes. This allocation does not leak, but if LongReadLen is large in size, and many such row objects are created, e.g. as the diff --git a/lib/DBIx/Class/Optional/Dependencies.pm b/lib/DBIx/Class/Optional/Dependencies.pm new file mode 100644 index 0000000..bc262eb --- /dev/null +++ b/lib/DBIx/Class/Optional/Dependencies.pm @@ -0,0 +1,442 @@ +package DBIx::Class::Optional::Dependencies; + +use warnings; +use strict; + +use Carp; + +# NO EXTERNAL NON-5.8.1 CORE DEPENDENCIES EVER (e.g. 
C::A::G) +# This module is to be loaded by Makefile.PM on a pristine system + +# POD is generated automatically by calling _gen_pod from the +# Makefile.PL in $AUTHOR mode + +my $moose_basic = { + 'Moose' => '0.98', + 'MooseX::Types' => '0.21', +}; + +my $admin_basic = { + %$moose_basic, + 'MooseX::Types::Path::Class' => '0.05', + 'MooseX::Types::JSON' => '0.02', + 'JSON::Any' => '1.22', + 'namespace::autoclean' => '0.09', +}; + +my $reqs = { + dist => { + #'Module::Install::Pod::Inherit' => '0.01', + }, + + replicated => { + req => { + %$moose_basic, + 'namespace::clean' => '0.11', + 'Hash::Merge' => '0.12', + }, + pod => { + title => 'Storage::Replicated', + desc => 'Modules required for L', + }, + }, + + admin => { + req => { + %$admin_basic, + }, + pod => { + title => 'DBIx::Class::Admin', + desc => 'Modules required for the DBIx::Class administrative library', + }, + }, + + admin_script => { + req => { + %$moose_basic, + %$admin_basic, + 'Getopt::Long::Descriptive' => '0.081', + 'Text::CSV' => '1.16', + }, + pod => { + title => 'dbicadmin', + desc => 'Modules required for the CLI DBIx::Class interface dbicadmin', + }, + }, + + deploy => { + req => { + 'SQL::Translator' => '0.11005', + }, + pod => { + title => 'Storage::DBI::deploy()', + desc => 'Modules required for L and L', + }, + }, + + + test_pod => { + req => { + 'Test::Pod' => '1.41', + }, + }, + + test_podcoverage => { + req => { + 'Test::Pod::Coverage' => '1.08', + 'Pod::Coverage' => '0.20', + }, + }, + + test_notabs => { + req => { + 'Test::NoTabs' => '0.9', + }, + }, + + test_eol => { + req => { + 'Test::EOL' => '0.6', + }, + }, + + test_cycle => { + req => { + 'Test::Memory::Cycle' => '0', + 'Devel::Cycle' => '1.10', + }, + }, + + test_dtrelated => { + req => { + # t/36datetime.t + # t/60core.t + 'DateTime::Format::SQLite' => '0', + + # t/96_is_deteministic_value.t + 'DateTime::Format::Strptime'=> '0', + + # t/inflate/datetime_mysql.t + # (doesn't need Mysql itself) + 'DateTime::Format::MySQL' => '0', + + # t/inflate/datetime_pg.t + # (doesn't need PG itself) + 'DateTime::Format::Pg' => '0', + }, + }, + + cdbicompat => { + req => { + 'DBIx::ContextualFetch' => '0', + 'Class::DBI::Plugin::DeepAbstractSearch' => '0', + 'Class::Trigger' => '0', + 'Time::Piece::MySQL' => '0', + 'Clone' => '0', + 'Date::Simple' => '3.03', + }, + }, + + rdbms_pg => { + req => { + $ENV{DBICTEST_PG_DSN} + ? ( + 'Sys::SigAction' => '0', + 'DBD::Pg' => '2.009002', + ) : () + }, + }, + + rdbms_mysql => { + req => { + $ENV{DBICTEST_MYSQL_DSN} + ? ( + 'DBD::mysql' => '0', + ) : () + }, + }, + + rdbms_oracle => { + req => { + $ENV{DBICTEST_ORA_DSN} + ? ( + 'DateTime::Format::Oracle' => '0', + ) : () + }, + }, + + rdbms_ase => { + req => { + $ENV{DBICTEST_SYBASE_DSN} + ? ( + 'DateTime::Format::Sybase' => 0, + ) : () + }, + }, + + rdbms_asa => { + req => { + (scalar grep { $ENV{$_} } (qw/DBICTEST_SYBASE_ASA_DSN DBICTEST_SYBASE_ASA_ODBC_DSN/) ) + ? ( + 'DateTime::Format::Strptime' => 0, + ) : () + }, + }, + + rdbms_db2 => { + req => { + $ENV{DBICTEST_DB2_DSN} + ? 
( + 'DBD::DB2' => 0, + ) : () + }, + }, + +}; + + +sub req_list_for { + my ($class, $group) = @_; + + croak "req_list_for() expects a requirement group name" + unless $group; + + my $deps = $reqs->{$group}{req} + or croak "Requirement group '$group' does not exist"; + + return { %$deps }; +} + + +our %req_availability_cache; +sub req_ok_for { + my ($class, $group) = @_; + + croak "req_ok_for() expects a requirement group name" + unless $group; + + $class->_check_deps ($group) unless $req_availability_cache{$group}; + + return $req_availability_cache{$group}{status}; +} + +sub req_missing_for { + my ($class, $group) = @_; + + croak "req_missing_for() expects a requirement group name" + unless $group; + + $class->_check_deps ($group) unless $req_availability_cache{$group}; + + return $req_availability_cache{$group}{missing}; +} + +sub req_errorlist_for { + my ($class, $group) = @_; + + croak "req_errorlist_for() expects a requirement group name" + unless $group; + + $class->_check_deps ($group) unless $req_availability_cache{$group}; + + return $req_availability_cache{$group}{errorlist}; +} + +sub _check_deps { + my ($class, $group) = @_; + + my $deps = $class->req_list_for ($group); + + my %errors; + for my $mod (keys %$deps) { + if (my $ver = $deps->{$mod}) { + eval "use $mod $ver ()"; + } + else { + eval "require $mod"; + } + + $errors{$mod} = $@ if $@; + } + + if (keys %errors) { + my $missing = join (', ', map { $deps->{$_} ? "$_ >= $deps->{$_}" : $_ } (sort keys %errors) ); + $missing .= " (see $class for details)" if $reqs->{$group}{pod}; + $req_availability_cache{$group} = { + status => 0, + errorlist => { %errors }, + missing => $missing, + }; + } + else { + $req_availability_cache{$group} = { + status => 1, + errorlist => {}, + missing => '', + }; + } +} + +sub req_group_list { + return { map { $_ => { %{ $reqs->{$_}{req} || {} } } } (keys %$reqs) }; +} + +# This is to be called by the author only (automatically in Makefile.PL) +sub _gen_pod { + my $class = shift; + my $modfn = __PACKAGE__ . '.pm'; + $modfn =~ s/\:\:/\//g; + + require DBIx::Class; + my $distver = DBIx::Class->VERSION; + my $sqltver = $class->req_list_for ('deploy')->{'SQL::Translator'} + or die "Hrmm? No sqlt dep?"; + + my @chunks = ( + <<"EOC", +######################################################################### +##################### A U T O G E N E R A T E D ######################## +######################################################################### +# +# The contents of this POD file are auto-generated. Any changes you make +# will be lost. If you need to change the generated text edit _gen_pod() +# at the end of $modfn +# +EOC + '=head1 NAME', + "$class - Optional module dependency specifications (for module authors)", + '=head1 SYNOPSIS', + <'s Makefile.PL): + + ... + + configure_requires 'DBIx::Class' => '$distver'; + + require $class; + + my \$deploy_deps = $class->req_list_for ('deploy'); + + for (keys %\$deploy_deps) { + requires \$_ => \$deploy_deps->{\$_}; + } + + ... + +Note that there are some caveats regarding C, more info +can be found at L +EOS + '=head1 DESCRIPTION', + <<'EOD', +Some of the less-frequently used features of L have external +module dependencies on their own. In order not to burden the average user +with modules he will never use, these optional dependencies are not included +in the base Makefile.PL. Instead an exception with a descriptive message is +thrown when a specific feature is missing one or several modules required for +its operation. 
This module is the central holding place for the current list +of such dependencies, for DBIx::Class core authors, and DBIx::Class extension +authors alike. +EOD + '=head1 CURRENT REQUIREMENT GROUPS', + <<'EOD', +Dependencies are organized in C and each group can list one or more +required modules, with an optional minimum version (or 0 for any version). +The group name can be used in the +EOD + ); + + for my $group (sort keys %$reqs) { + my $p = $reqs->{$group}{pod} + or next; + + my $modlist = $reqs->{$group}{req} + or next; + + next unless keys %$modlist; + + push @chunks, ( + "=head2 $p->{title}", + "$p->{desc}", + '=over', + ( map { "=item * $_" . ($modlist->{$_} ? " >= $modlist->{$_}" : '') } (sort keys %$modlist) ), + '=back', + "Requirement group: B<$group>", + ); + } + + push @chunks, ( + '=head1 METHODS', + '=head2 req_group_list', + '=over', + '=item Arguments: $none', + '=item Returns: \%list_of_requirement_groups', + '=back', + < version of +DBIx::Class. See the L for a real-world +example. +EOD + + '=head2 req_ok_for', + '=over', + '=item Arguments: $group_name', + '=item Returns: 1|0', + '=back', + 'Returns true or false depending on whether all modules required by C<$group_name> are present on the system and loadable', + + '=head2 req_missing_for', + '=over', + '=item Arguments: $group_name', + '=item Returns: $error_message_string', + '=back', + < are not available, +the returned string could look like: + + SQL::Translator >= $sqltver (see $class for details) + +The author is expected to prepend the necessary text to this message before +returning the actual error seen by the user. +EOD + + '=head2 req_errorlist_for', + '=over', + '=item Arguments: $group_name', + '=item Returns: \%list_of_loaderrors_per_module', + '=back', + <<'EOD', +Returns a hashref containing the actual errors that occured while attempting +to load each module in the requirement group. +EOD + '=head1 AUTHOR', + 'See L.', + '=head1 LICENSE', + 'You may distribute this code under the same terms as Perl itself', + ); + + my $fn = __FILE__; + $fn =~ s/\.pm$/\.pod/; + + open (my $fh, '>', $fn) or croak "Unable to write to $fn: $!"; + print $fh join ("\n\n", @chunks); + close ($fh); +} + +1; diff --git a/lib/DBIx/Class/Ordered.pm b/lib/DBIx/Class/Ordered.pm index 5f17790..c9579a8 100644 --- a/lib/DBIx/Class/Ordered.pm +++ b/lib/DBIx/Class/Ordered.pm @@ -127,7 +127,7 @@ __PACKAGE__->mk_classdata( 'grouping_column' ); This method specifies a value of L which B during normal operation. When a row is moved, its position is set to this value temporarily, so -that any unique constrainst can not be violated. This value defaults +that any unique constraints can not be violated. This value defaults to 0, which should work for all cases except when your positions do indeed start from 0. @@ -797,15 +797,15 @@ sub _shift_siblings { if (grep { $_ eq $position_column } ( map { @$_ } (values %{{ $rsrc->unique_constraints }} ) ) ) { - my @pcols = $rsrc->primary_columns; + my @pcols = $rsrc->_pri_cols; my $cursor = $shift_rs->search ({}, { order_by => { "-$ord", $position_column }, columns => \@pcols } )->cursor; my $rs = $self->result_source->resultset; - while (my @pks = $cursor->next ) { - + my @all_pks = $cursor->all; + while (my $pks = shift @all_pks) { my $cond; for my $i (0.. $#pcols) { - $cond->{$pcols[$i]} = $pks[$i]; + $cond->{$pcols[$i]} = $pks->[$i]; } $rs->search($cond)->update ({ $position_column => \ "$position_column $op 1" } ); @@ -921,7 +921,7 @@ module to update positioning values in isolation (i.e. 
without triggering any of the positioning integrity code). Some day you might get confronted by datasets that have ambiguous -positioning data (i.e. duplicate position values within the same group, +positioning data (e.g. duplicate position values within the same group, in a table without unique constraints). When manually fixing such data keep in mind that you can not invoke L like you normally would, as it will get confused by the wrong data before @@ -956,14 +956,14 @@ will prevent such race conditions going undetected. =head2 Multiple Moves -Be careful when issueing move_* methods to multiple objects. If +Be careful when issuing move_* methods to multiple objects. If you've pre-loaded the objects then when you move one of the objects the position of the other object will not reflect their new value until you reload them from the database - see L. There are times when you will want to move objects as groups, such -as changeing the parent of several objects at once - this directly +as changing the parent of several objects at once - this directly conflicts with this problem. One solution is for us to write a ResultSet class that supports a parent() method, for example. Another solution is to somehow automagically modify the objects that exist diff --git a/lib/DBIx/Class/PK.pm b/lib/DBIx/Class/PK.pm index cf8a194..c4d3b93 100644 --- a/lib/DBIx/Class/PK.pm +++ b/lib/DBIx/Class/PK.pm @@ -31,13 +31,28 @@ sub id { my ($self) = @_; $self->throw_exception( "Can't call id() as a class method" ) unless ref $self; - my @pk = $self->_ident_values; - return (wantarray ? @pk : $pk[0]); + my @id_vals = $self->_ident_values; + return (wantarray ? @id_vals : $id_vals[0]); } sub _ident_values { my ($self) = @_; - return (map { $self->{_column_data}{$_} } $self->primary_columns); + + my (@ids, @missing); + + for ($self->_pri_cols) { + push @ids, $self->get_column($_); + push @missing, $_ if (! defined $ids[-1] and ! $self->has_column_loaded ($_) ); + } + + if (@missing && $self->in_storage) { + $self->throw_exception ( + 'Unable to uniquely identify row object with missing PK columns: ' + . join (', ', @missing ) + ); + } + + return @ids; } =head2 ID @@ -64,12 +79,11 @@ sub ID { $self->throw_exception( "Can't call ID() as a class method" ) unless ref $self; return undef unless $self->in_storage; - return $self->_create_ID(map { $_ => $self->{_column_data}{$_} } - $self->primary_columns); + return $self->_create_ID(%{$self->ident_condition}); } sub _create_ID { - my ($self,%vals) = @_; + my ($self, %vals) = @_; return undef unless 0 == grep { !defined } values %vals; return join '|', ref $self || $self, $self->result_source->name, map { $_ . '=' . $vals{$_} } sort keys %vals; @@ -87,9 +101,25 @@ Produces a condition hash to locate a row based on the primary key(s). sub ident_condition { my ($self, $alias) = @_; - my %cond; + + my @pks = $self->_pri_cols; + my @vals = $self->_ident_values; + + my (%cond, @undef); my $prefix = defined $alias ? $alias.'.' : ''; - $cond{$prefix.$_} = $self->get_column($_) for $self->primary_columns; + for my $col (@pks) { + if (! defined ($cond{$prefix.$col} = shift @vals) ) { + push @undef, $col; + } + } + + if (@undef && $self->in_storage) { + $self->throw_exception ( + 'Unable to construct row object identity condition due to NULL PK columns: ' + . 
join (', ', @undef) + ); + } + return \%cond; } diff --git a/lib/DBIx/Class/PK/Auto.pm b/lib/DBIx/Class/PK/Auto.pm index 04f211b..e2f717f 100644 --- a/lib/DBIx/Class/PK/Auto.pm +++ b/lib/DBIx/Class/PK/Auto.pm @@ -11,7 +11,7 @@ DBIx::Class::PK::Auto - Automatic primary key class =head1 SYNOPSIS -__PACKAGE__->load_components(qw/Core/); +use base 'DBIx::Class::Core'; __PACKAGE__->set_primary_key('id'); =head1 DESCRIPTION @@ -19,8 +19,6 @@ __PACKAGE__->set_primary_key('id'); This class overrides the insert method to get automatically incremented primary keys. - __PACKAGE__->load_components(qw/Core/); - PK::Auto is now part of Core. See L for details of component interactions. diff --git a/lib/DBIx/Class/Relationship.pm b/lib/DBIx/Class/Relationship.pm index e3b812b..d4926d1 100644 --- a/lib/DBIx/Class/Relationship.pm +++ b/lib/DBIx/Class/Relationship.pm @@ -111,7 +111,7 @@ Both C<$cond> and C<$attrs> are optional. Pass C for C<$cond> if you want to use the default value for it, but still want to set C<\%attrs>. See L for documentation on the -attrubutes that are allowed in the C<\%attrs> argument. +attributes that are allowed in the C<\%attrs> argument. =head2 belongs_to @@ -232,13 +232,13 @@ which can be assigned to relationships as well. =back -Creates a one-to-many relationship, where the corresponding elements -of the foreign class store the calling class's primary key in one (or -more) of the foreign class columns. This relationship defaults to using -the end of this classes namespace as the foreign key in C<$related_class> -to resolve the join, unless C<$their_fk_column> specifies the foreign -key column in C<$related_class> or C specifies a reference to a -join condition hash. +Creates a one-to-many relationship where the foreign class refers to +this class's primary key. This relationship refers to zero or more +records in the foreign table (e.g. a C). This relationship +defaults to using the end of this classes namespace as the foreign key +in C<$related_class> to resolve the join, unless C<$their_fk_column> +specifies the foreign key column in C<$related_class> or C +specifies a reference to a join condition hash. =over @@ -441,6 +441,17 @@ methods and valid relationship attributes. Also see L for a L which can be assigned to relationships as well. +Note that if you supply a condition on which to join, and the column in the +current table allows nulls (i.e., has the C attribute set to a +true value), then C will warn about this because it's naughty and +you shouldn't do that. + + "might_have/has_one" must not be on columns with is_nullable set to true (MySchema::SomeClass/key) + +If you must be naughty, you can suppress the warning by setting +the C environment variable to a true value. Otherwise, +you probably just want to use C. + =head2 has_one =over 4 @@ -528,6 +539,11 @@ methods and valid relationship attributes. Also see L for a L which can be assigned to relationships as well. +Note that if you supply a condition on which to join, and the column in the +current table allows nulls (i.e., has the C attribute set to a +true value), then warnings might apply just as with +L.
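To make the above concrete, here is a hedged sketch (the class, relationship and column names are hypothetical; the environment variable name comes from the validation code later in this patch):

    # 'notes_id' is declared with is_nullable => 1 on the local class, so
    # keying a might_have on 'self.notes_id' would carp at class-load time.
    # Since the foreign key lives in the *local* table, belongs_to is almost
    # certainly the helper that was meant:
    __PACKAGE__->belongs_to(
      liner_notes => 'MySchema::Result::LinerNotes', 'notes_id',
      { join_type => 'left' },  # LEFT JOIN, as the column may be NULL
    );

    # If the might_have really is intentional, setting the environment
    # variable DBIC_DONT_VALIDATE_RELS to a true value suppresses the warning.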
+ =head2 many_to_many =over 4 diff --git a/lib/DBIx/Class/Relationship/Accessor.pm b/lib/DBIx/Class/Relationship/Accessor.pm index eb03a3d..03700f4 100644 --- a/lib/DBIx/Class/Relationship/Accessor.pm +++ b/lib/DBIx/Class/Relationship/Accessor.pm @@ -4,7 +4,6 @@ package # hide from PAUSE use strict; use warnings; use Sub::Name (); -use Class::Inspector (); our %_pod_inherit_config = ( diff --git a/lib/DBIx/Class/Relationship/Base.pm b/lib/DBIx/Class/Relationship/Base.pm index 7cd3214..62133a8 100644 --- a/lib/DBIx/Class/Relationship/Base.pm +++ b/lib/DBIx/Class/Relationship/Base.pm @@ -30,6 +30,8 @@ methods, for predefined ones, look in L. __PACKAGE__->add_relationship('relname', 'Foreign::Class', $cond, $attrs); +=head3 condition + The condition needs to be an L-style representation of the join between the tables. When resolving the condition for use in a C, keys using the pseudo-table C are resolved to mean "the Table on the @@ -67,9 +69,18 @@ Each key-value pair provided in a hashref will be used as Ced conditions. To add an Ced condition, use an arrayref of hashrefs. See the L documentation for more details. -In addition to the -L, -the following attributes are also valid: +=head3 attributes + +The L may +be used as relationship attributes. In particular, the 'where' attribute is +useful for filtering relationships: + + __PACKAGE__->has_many( 'valid_users', 'MyApp::Schema::User', + { 'foreign.user_id' => 'self.user_id' }, + { where => { valid => 1 } } + ); + +The following attributes are also valid: =over 4 @@ -189,13 +200,23 @@ sub related_resultset { my $query = ((@_ > 1) ? {@_} : shift); my $source = $self->result_source; - my $cond = $source->_resolve_condition( - $rel_info->{cond}, $rel, $self - ); + + # condition resolution may fail if an incomplete master-object prefetch + # is encountered - that is ok during prefetch construction (not yet in_storage) + my $cond = eval { $source->_resolve_condition( $rel_info->{cond}, $rel, $self ) }; + if (my $err = $@) { + if ($self->in_storage) { + $self->throw_exception ($err); + } + else { + $cond = $DBIx::Class::ResultSource::UNRESOLVABLE_CONDITION; + } + } + if ($cond eq $DBIx::Class::ResultSource::UNRESOLVABLE_CONDITION) { my $reverse = $source->reverse_relationship_info($rel); foreach my $rev_rel (keys %$reverse) { - if ($reverse->{$rev_rel}{attrs}{accessor} eq 'multi') { + if ($reverse->{$rev_rel}{attrs}{accessor} && $reverse->{$rev_rel}{attrs}{accessor} eq 'multi') { $attrs->{related_objects}{$rev_rel} = [ $self ]; Scalar::Util::weaken($attrs->{related_object}{$rev_rel}[0]); } else { @@ -249,7 +270,7 @@ sub search_related { ( $objects_rs ) = $rs->search_related_rs('relname', $cond, $attrs); This method works exactly the same as search_related, except that -it guarantees a restultset, even in list context. +it guarantees a resultset, even in list context. =cut @@ -381,7 +402,7 @@ example, to set the correct author for a book, find the Author object, then call set_from_related on the book. This is called internally when you pass existing objects as values to -L, or pass an object to a belongs_to acessor. +L, or pass an object to a belongs_to accessor. The columns are only set in the local copy of the object, call L to set them in the storage. 
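For illustration, a short sketch of the set_from_related() flow just described (the book/author classes are hypothetical):

    my $author = $schema->resultset('Author')->find({ name => 'Fred Bloggs' });

    # copies the author's primary key value(s) into the book's foreign
    # key column(s) - in memory only
    $book->set_from_related('author', $author);

    # nothing is written to the database until update() is called
    $book->update;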
diff --git a/lib/DBIx/Class/Relationship/BelongsTo.pm b/lib/DBIx/Class/Relationship/BelongsTo.pm index af68b7b..471a417 100644 --- a/lib/DBIx/Class/Relationship/BelongsTo.pm +++ b/lib/DBIx/Class/Relationship/BelongsTo.pm @@ -24,19 +24,14 @@ sub belongs_to { # no join condition or just a column name if (!ref $cond) { $class->ensure_class_loaded($f_class); - my %f_primaries = map { $_ => 1 } eval { $f_class->primary_columns }; + my %f_primaries = map { $_ => 1 } eval { $f_class->_pri_cols }; $class->throw_exception( - "Can't infer join condition for ${rel} on ${class}; ". - "unable to load ${f_class}: $@" + "Can't infer join condition for ${rel} on ${class}: $@" ) if $@; my ($pri, $too_many) = keys %f_primaries; $class->throw_exception( "Can't infer join condition for ${rel} on ${class}; ". - "${f_class} has no primary keys" - ) unless defined $pri; - $class->throw_exception( - "Can't infer join condition for ${rel} on ${class}; ". "${f_class} has multiple primary keys" ) if $too_many; diff --git a/lib/DBIx/Class/Relationship/CascadeActions.pm b/lib/DBIx/Class/Relationship/CascadeActions.pm index e5afd35..fde8f5d 100644 --- a/lib/DBIx/Class/Relationship/CascadeActions.pm +++ b/lib/DBIx/Class/Relationship/CascadeActions.pm @@ -16,15 +16,24 @@ sub delete { # be handling this anyway. Assuming we have joins we probably actually # *could* do them, but I'd rather not. - my $ret = $self->next::method(@rest); - my $source = $self->result_source; my %rels = map { $_ => $source->relationship_info($_) } $source->relationships; my @cascade = grep { $rels{$_}{attrs}{cascade_delete} } keys %rels; - foreach my $rel (@cascade) { - $self->search_related($rel)->delete_all; + + if (@cascade) { + my $guard = $source->schema->txn_scope_guard; + + my $ret = $self->next::method(@rest); + + foreach my $rel (@cascade) { + $self->search_related($rel)->delete_all; + } + + $guard->commit; + return $ret; } - return $ret; + + $self->next::method(@rest); } sub update { @@ -32,19 +41,31 @@ sub update { return $self->next::method(@rest) unless ref $self; # Because update cascades on a class *really* don't make sense! - my $ret = $self->next::method(@rest); - my $source = $self->result_source; my %rels = map { $_ => $source->relationship_info($_) } $source->relationships; my @cascade = grep { $rels{$_}{attrs}{cascade_update} } keys %rels; - foreach my $rel (@cascade) { - next if ( - $rels{$rel}{attrs}{accessor} eq 'single' - && !exists($self->{_relationship_data}{$rel}) - ); - $_->update for grep defined, $self->$rel; + + if (@cascade) { + my $guard = $source->schema->txn_scope_guard; + + my $ret = $self->next::method(@rest); + + foreach my $rel (@cascade) { + next if ( + $rels{$rel}{attrs}{accessor} + && + $rels{$rel}{attrs}{accessor} eq 'single' + && + !exists($self->{_relationship_data}{$rel}) + ); + $_->update for grep defined, $self->$rel; + } + + $guard->commit; + return $ret; } - return $ret; + + $self->next::method(@rest); } 1; diff --git a/lib/DBIx/Class/Relationship/HasMany.pm b/lib/DBIx/Class/Relationship/HasMany.pm index d74a9a4..7690af8 100644 --- a/lib/DBIx/Class/Relationship/HasMany.pm +++ b/lib/DBIx/Class/Relationship/HasMany.pm @@ -14,7 +14,10 @@ sub has_many { unless (ref $cond) { $class->ensure_class_loaded($f_class); - my ($pri, $too_many) = $class->primary_columns; + my ($pri, $too_many) = eval { $class->_pri_cols }; + $class->throw_exception( + "Can't infer join condition for ${rel} on ${class}: $@" + ) if $@; $class->throw_exception( "has_many can only infer join for a single primary key; ". 
diff --git a/lib/DBIx/Class/Relationship/HasOne.pm b/lib/DBIx/Class/Relationship/HasOne.pm index 4c910b8..33a0641 100644 --- a/lib/DBIx/Class/Relationship/HasOne.pm +++ b/lib/DBIx/Class/Relationship/HasOne.pm @@ -3,6 +3,7 @@ package # hide from PAUSE use strict; use warnings; +use Carp::Clan qw/^DBIx::Class/; our %_pod_inherit_config = ( @@ -21,12 +22,8 @@ sub _has_one { my ($class, $join_type, $rel, $f_class, $cond, $attrs) = @_; unless (ref $cond) { $class->ensure_class_loaded($f_class); - my ($pri, $too_many) = $class->primary_columns; - $class->throw_exception( - "might_have/has_one can only infer join for a single primary key; ". - "${class} has more" - ) if $too_many; + my $pri = $class->_get_primary_key; $class->throw_exception( "might_have/has_one needs a primary key to infer a join; ". @@ -34,7 +31,7 @@ sub _has_one { ) if !defined $pri && (!defined $cond || !length $cond); my $f_class_loaded = eval { $f_class->columns }; - my ($f_key,$guess); + my ($f_key,$too_many,$guess); if (defined $cond && length $cond) { $f_key = $cond; $guess = "caller specified foreign key '$f_key'"; @@ -42,11 +39,7 @@ sub _has_one { $f_key = $rel; $guess = "using given relationship '$rel' for foreign key"; } else { - ($f_key, $too_many) = $f_class->primary_columns; - $class->throw_exception( - "might_have/has_one can only infer join for a single primary key; ". - "${f_class} has more" - ) if $too_many; + $f_key = $class->_get_primary_key($f_class); $guess = "using primary key of foreign class for foreign key"; } $class->throw_exception( @@ -54,6 +47,7 @@ sub _has_one { ) if $f_class_loaded && !$f_class->has_column($f_key); $cond = { "foreign.${f_key}" => "self.${pri}" }; } + $class->_validate_has_one_condition($cond); $class->add_relationship($rel, $f_class, $cond, { accessor => 'single', @@ -63,4 +57,40 @@ sub _has_one { 1; } +sub _get_primary_key { + my ( $class, $target_class ) = @_; + $target_class ||= $class; + my ($pri, $too_many) = eval { $target_class->_pri_cols }; + $class->throw_exception( + "Can't infer join condition on ${target_class}: $@" + ) if $@; + + $class->throw_exception( + "might_have/has_one can only infer join for a single primary key; ". + "${class} has more" + ) if $too_many; + return $pri; +} + +sub _validate_has_one_condition { + my ($class, $cond ) = @_; + + return if $ENV{DBIC_DONT_VALIDATE_RELS}; + return unless 'HASH' eq ref $cond; + foreach my $foreign_id ( keys %$cond ) { + my $self_id = $cond->{$foreign_id}; + + # we can ignore a bad $self_id because add_relationship handles this + # warning + return unless $self_id =~ /^self\.(.*)$/; + my $key = $1; + $class->throw_exception("Defining rel on ${class} that includes ${key} but no such column defined here yet") + unless $class->has_column($key); + my $column_info = $class->column_info($key); + if ( $column_info->{is_nullable} ) { + carp(qq'"might_have/has_one" must not be on columns with is_nullable set to true ($class/$key). This might indicate an incorrect use of those relationship helpers instead of belongs_to.'); + } + } +} + 1; diff --git a/lib/DBIx/Class/Relationship/ManyToMany.pm b/lib/DBIx/Class/Relationship/ManyToMany.pm index 07a244a..137fb30 100644 --- a/lib/DBIx/Class/Relationship/ManyToMany.pm +++ b/lib/DBIx/Class/Relationship/ManyToMany.pm @@ -64,15 +64,15 @@ EOW my $rs = $self->search_related($rel)->search_related( $f_rel, @_ > 0 ? 
@_ : undef, { %{$rel_attrs||{}}, %$attrs } ); - return $rs; + return $rs; }; my $meth_name = join '::', $class, $meth; *$meth_name = Sub::Name::subname $meth_name, sub { - my $self = shift; - my $rs = $self->$rs_meth( @_ ); - return (wantarray ? $rs->all : $rs); - }; + my $self = shift; + my $rs = $self->$rs_meth( @_ ); + return (wantarray ? $rs->all : $rs); + }; my $add_meth_name = join '::', $class, $add_meth; *$add_meth_name = Sub::Name::subname $add_meth_name, sub { @@ -102,7 +102,7 @@ EOW my $link = $self->search_related($rel)->new_result($link_vals); $link->set_from_related($f_rel, $obj); $link->insert(); - return $obj; + return $obj; }; my $set_meth_name = join '::', $class, $set_meth; diff --git a/lib/DBIx/Class/ResultSet.pm b/lib/DBIx/Class/ResultSet.pm index 54f1cac..4247459 100644 --- a/lib/DBIx/Class/ResultSet.pm +++ b/lib/DBIx/Class/ResultSet.pm @@ -25,6 +25,10 @@ DBIx::Class::ResultSet - Represents a query used for fetching a set of results. =head1 SYNOPSIS my $users_rs = $schema->resultset('User'); + while( $user = $users_rs->next) { + print $user->username; + } + my $registered_users_rs = $schema->resultset('User')->search({ registered => 1 }); my @cds_in_2005 = $schema->resultset('CD')->search({ year => 2005 })->all(); @@ -141,7 +145,7 @@ See: L, L, L, L, L. =head1 OVERLOADING If a resultset is used in a numeric context it returns the L. -However, if it is used in a booleand context it is always true. So if +However, if it is used in a boolean context it is always true. So if you want to check if a resultset has any results use C. C will always be true. @@ -291,10 +295,15 @@ sub search_rs { $rows = $self->get_cache; } + # reset the selector list + if (List::Util::first { exists $attrs->{$_} } qw{columns select as}) { + delete @{$our_attrs}{qw{select as columns +select +as +columns include_columns}}; + } + my $new_attrs = { %{$our_attrs}, %{$attrs} }; # merge new attrs into inherited - foreach my $key (qw/join prefetch +select +as bind/) { + foreach my $key (qw/join prefetch +select +as +columns include_columns bind/) { next unless exists $attrs->{$key}; $new_attrs->{$key} = $self->_merge_attr($our_attrs->{$key}, $attrs->{$key}); } @@ -519,7 +528,7 @@ sub find { # in ::Relationship::Base::search_related (the row method), and furthermore # the relationship is of the 'single' type. This means that the condition # provided by the relationship (already attached to $self) is sufficient, - # as there can be only one row in the databse that would satisfy the + # as there can be only one row in the database that would satisfy the # relationship } else { @@ -530,7 +539,7 @@ sub find { } # Run the query - my $rs = $self->search ($query, {result_class => $self->result_class, %$attrs}); + my $rs = $self->search ($query, $attrs); if (keys %{$rs->_resolved_attrs->{collapse}}) { my $row = $rs->next; carp "Query returned more than one row" if $rs->next; @@ -634,7 +643,7 @@ sub search_related { =head2 search_related_rs This method works exactly the same as search_related, except that -it guarantees a restultset, even in list context. +it guarantees a resultset, even in list context. =cut @@ -692,7 +701,7 @@ L returned. =item B -As of 0.08100, this method enforces the assumption that the preceeding +As of 0.08100, this method enforces the assumption that the preceding query returns only one row. 
If more than one row is returned, you will receive a warning: @@ -974,19 +983,6 @@ sub _construct_object { sub _collapse_result { my ($self, $as_proto, $row) = @_; - # if the first row that ever came in is totally empty - this means we got - # hit by a smooth^Wempty left-joined resultset. Just noop in that case - # instead of producing a {} - # - my $has_def; - for (@$row) { - if (defined $_) { - $has_def++; - last; - } - } - return undef unless $has_def; - my @copy = @$row; # 'foo' => [ undef, 'foo' ] @@ -1144,6 +1140,7 @@ sub result_class { if ($result_class) { $self->ensure_class_loaded($result_class); $self->_result_class($result_class); + $self->{attrs}{result_class} = $result_class if ref $self; } $self->_result_class; } @@ -1247,11 +1244,6 @@ sub _count_rs { $tmp_attrs->{select} = $rsrc->storage->_count_select ($rsrc, $tmp_attrs); $tmp_attrs->{as} = 'count'; - # read the comment on top of the actual function to see what this does - $tmp_attrs->{from} = $self->result_source->schema->storage->_straight_join_to_node ( - $tmp_attrs->{from}, $tmp_attrs->{alias} - ); - my $tmp_rs = $rsrc->resultset_class->new($rsrc, $tmp_attrs)->get_column ('count'); return $tmp_rs; @@ -1271,21 +1263,15 @@ sub _count_subq_rs { # extra selectors do not go in the subquery and there is no point of ordering it delete $sub_attrs->{$_} for qw/collapse select _prefetch_select as order_by/; - # if we prefetch, we group_by primary keys only as this is what we would get out - # of the rs via ->next/->all. We DO WANT to clobber old group_by regardless - if ( keys %{$attrs->{collapse}} ) { - $sub_attrs->{group_by} = [ map { "$attrs->{alias}.$_" } ($rsrc->primary_columns) ] + # if we multi-prefetch we group_by primary keys only as this is what we would + # get out of the rs via ->next/->all. We *DO WANT* to clobber old group_by regardless + if ( keys %{$attrs->{collapse}} ) { + $sub_attrs->{group_by} = [ map { "$attrs->{alias}.$_" } ($rsrc->_pri_cols) ] } - $sub_attrs->{select} = $rsrc->storage->_subq_count_select ($rsrc, $sub_attrs); - - # read the comment on top of the actual function to see what this does - $sub_attrs->{from} = $self->result_source->schema->storage->_straight_join_to_node ( - $sub_attrs->{from}, $sub_attrs->{alias} - ); + $sub_attrs->{select} = $rsrc->storage->_subq_count_select ($rsrc, $attrs); # this is so that the query can be simplified e.g. - # * non-limiting joins can be pruned # * ordering can be thrown away in things like Top limit $sub_attrs->{-for_count_only} = 1; @@ -1431,7 +1417,7 @@ sub _rs_update_delete { my $cond = $rsrc->schema->storage->_strip_cond_qualifiers ($self->{cond}); my $needs_group_by_subq = $self->_has_resolved_attr (qw/collapse group_by -join/); - my $needs_subq = (not defined $cond) || $self->_has_resolved_attr(qw/row offset/); + my $needs_subq = $needs_group_by_subq || (not defined $cond) || $self->_has_resolved_attr(qw/row offset/); if ($needs_group_by_subq or $needs_subq) { @@ -1439,7 +1425,7 @@ sub _rs_update_delete { my $attrs = $self->_resolved_attrs_copy; delete $attrs->{$_} for qw/collapse select as/; - $attrs->{columns} = [ map { "$attrs->{alias}.$_" } ($self->result_source->primary_columns) ]; + $attrs->{columns} = [ map { "$attrs->{alias}.$_" } ($self->result_source->_pri_cols) ]; if ($needs_group_by_subq) { # make sure no group_by was supplied, or if there is one - make sure it matches @@ -1547,7 +1533,7 @@ Deletes the contents of the resultset from its result source. Note that this will not run DBIC cascade triggers. See L if you need triggers to run. 
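For example, a minimal sketch (a CD source is assumed):

  # one bulk DELETE statement - fast, but no per-row cascades/triggers
  $schema->resultset('CD')->search({ year => 1999 })->delete;

  # inflates each row and deletes it individually, so triggers run
  $schema->resultset('CD')->search({ year => 1999 })->delete_all;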
See also L. -Return value will be the amount of rows deleted; exact type of return value +Return value will be the number of rows deleted; exact type of return value is storage-dependent. =cut @@ -1616,7 +1602,7 @@ Example: Assuming an Artist Class that has many CDs Classes relating: ], }, { artistid => 5, name => 'Angsty-Whiny Girl', cds => [ - { title => 'My parents sold me to a record company' ,year => 2005 }, + { title => 'My parents sold me to a record company', year => 2005 }, { title => 'Why Am I So Ugly?', year => 2006 }, { title => 'I Got Surgery and am now Popular', year => 2007 } ], @@ -1644,7 +1630,7 @@ example: [qw/artistid name/], [100, 'A Formally Unknown Singer'], [101, 'A singer that jumped the shark two albums ago'], - [102, 'An actually cool singer.'], + [102, 'An actually cool singer'], ]); Please note an important effect on your data when choosing between void and @@ -1658,10 +1644,10 @@ values. =cut sub populate { - my $self = shift @_; - my $data = ref $_[0][0] eq 'HASH' - ? $_[0] : ref $_[0][0] eq 'ARRAY' ? $self->_normalize_populate_args($_[0]) : - $self->throw_exception('Populate expects an arrayref of hashes or arrayref of arrayrefs'); + my $self = shift; + + # cruft placed in standalone method + my $data = $self->_normalize_populate_args(@_); if(defined wantarray) { my @created; @@ -1715,11 +1701,17 @@ sub populate { } } + ## inherit the data locked in the conditions of the resultset + my ($rs_data) = $self->_merge_cond_with_data({}); + delete @{$rs_data}{@columns}; + my @inherit_cols = keys %$rs_data; + my @inherit_data = values %$rs_data; + ## do bulk insert on current row $self->result_source->storage->insert_bulk( $self->result_source, - \@columns, - [ map { [ @$_{@columns} ] } @$data ], + [@columns, @inherit_cols], + [ map { [ @$_{@columns}, @inherit_data ] } @$data ], ); ## do the has_many relationships @@ -1748,26 +1740,27 @@ sub populate { } } -=head2 _normalize_populate_args ($args) - -Private method used by L to normalize its incoming arguments. Factored -out in case you want to subclass and accept new argument structures to the -L method. - -=cut +# populate() argumnets went over several incarnations +# What we ultimately support is AoH sub _normalize_populate_args { - my ($self, $data) = @_; - my @names = @{shift(@$data)}; - my @results_to_create; - foreach my $datum (@$data) { - my %result_to_create; - foreach my $index (0..$#names) { - $result_to_create{$names[$index]} = $$datum[$index]; + my ($self, $arg) = @_; + + if (ref $arg eq 'ARRAY') { + if (ref $arg->[0] eq 'HASH') { + return $arg; + } + elsif (ref $arg->[0] eq 'ARRAY') { + my @ret; + my @colnames = @{$arg->[0]}; + foreach my $values (@{$arg}[1 .. $#$arg]) { + push @ret, { map { $colnames[$_] => $values->[$_] } (0 .. $#colnames) }; + } + return \@ret; } - push @results_to_create, \%result_to_create; } - return \@results_to_create; + + $self->throw_exception('Populate expects an arrayref of hashrefs or arrayref of arrayrefs'); } =head2 pager @@ -1856,46 +1849,66 @@ sub new_result { $self->throw_exception( "new_result needs a hash" ) unless (ref $values eq 'HASH'); - my %new; + my ($merged_cond, $cols_from_relations) = $self->_merge_cond_with_data($values); + + my %new = ( + %$merged_cond, + @$cols_from_relations + ? 
(-cols_from_relations => $cols_from_relations) + : (), + -source_handle => $self->_source_handle, + -result_source => $self->result_source, # DO NOT REMOVE THIS, REQUIRED + ); + + return $self->result_class->new(\%new); +} + +# _merge_cond_with_data +# +# Takes a simple hash of K/V data and returns its copy merged with the +# condition already present on the resultset. Additionally returns an +# arrayref of value/condition names, which were inferred from related +# objects (this is needed for in-memory related objects) +sub _merge_cond_with_data { + my ($self, $data) = @_; + + my (%new_data, @cols_from_relations); + my $alias = $self->{attrs}{alias}; - if ( - defined $self->{cond} - && $self->{cond} eq $DBIx::Class::ResultSource::UNRESOLVABLE_CONDITION - ) { - %new = %{ $self->{attrs}{related_objects} || {} }; # nothing might have been inserted yet - $new{-from_resultset} = [ keys %new ] if keys %new; - } else { + if (! defined $self->{cond}) { + # just massage $data below + } + elsif ($self->{cond} eq $DBIx::Class::ResultSource::UNRESOLVABLE_CONDITION) { + %new_data = %{ $self->{attrs}{related_objects} || {} }; # nothing might have been inserted yet + @cols_from_relations = keys %new_data; + } + elsif (ref $self->{cond} ne 'HASH') { $self->throw_exception( - "Can't abstract implicit construct, condition not a hash" - ) if ($self->{cond} && !(ref $self->{cond} eq 'HASH')); - - my $collapsed_cond = ( - $self->{cond} - ? $self->_collapse_cond($self->{cond}) - : {} + "Can't abstract implicit construct, resultset condition not a hash" ); - + } + else { # precendence must be given to passed values over values inherited from # the cond, so the order here is important. - my %implied = %{$self->_remove_alias($collapsed_cond, $alias)}; - while( my($col,$value) = each %implied ){ - if(ref($value) eq 'HASH' && keys(%$value) && (keys %$value)[0] eq '='){ - $new{$col} = $value->{'='}; + my $collapsed_cond = $self->_collapse_cond($self->{cond}); + my %implied = %{$self->_remove_alias($collapsed_cond, $alias)}; + + while ( my($col, $value) = each %implied ) { + if (ref($value) eq 'HASH' && keys(%$value) && (keys %$value)[0] eq '=') { + $new_data{$col} = $value->{'='}; next; } - $new{$col} = $value if $self->_is_deterministic_value($value); + $new_data{$col} = $value if $self->_is_deterministic_value($value); } } - %new = ( - %new, - %{ $self->_remove_alias($values, $alias) }, - -source_handle => $self->_source_handle, - -result_source => $self->result_source, # DO NOT REMOVE THIS, REQUIRED + %new_data = ( + %new_data, + %{ $self->_remove_alias($data, $alias) }, ); - return $self->result_class->new(\%new); + return (\%new_data, \@cols_from_relations); } # _is_deterministic_value @@ -2020,7 +2033,7 @@ sub _remove_alias { return \%unaliased; } -=head2 as_query (EXPERIMENTAL) +=head2 as_query =over 4 @@ -2034,8 +2047,6 @@ Returns the SQL query and bind vars associated with the invocant. This is generally used as the RHS for a subquery. -B: This feature is still experimental. - =cut sub as_query { @@ -2126,7 +2137,7 @@ To create related objects, pass a hashref of related-object column values B. If the relationship is of type C (L) - pass an arrayref of hashrefs. The process will correctly identify columns holding foreign keys, and will -transparrently populate them from the keys of the corresponding relation. +transparently populate them from the keys of the corresponding relation. 
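For instance, a short sketch reusing the Artist/CD classes from the populate examples above:

  my $artist = $schema->resultset('Artist')->create({
    name => 'Brand New Band',                  # hypothetical data
    cds  => [                                  # has_many: arrayref of hashrefs
      { title => 'First Album', year => 2009 },
    ],
  });
  # the artist foreign key of the new CD row is filled in automatically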
This can be applied recursively, and will work correctly for a structure with an arbitrary depth and width, as long as the relationships actually exists and the correct column data has been supplied. @@ -2464,6 +2475,23 @@ sub is_paged { return !!$self->{attrs}{page}; } +=head2 is_ordered + +=over 4 + +=item Arguments: none + +=item Return Value: true, if the resultset has been ordered with C. + +=back + +=cut + +sub is_ordered { + my ($self) = @_; + return scalar $self->result_source->storage->_parse_order_by($self->{attrs}{order_by}); +} + =head2 related_resultset =over 4 @@ -2485,21 +2513,30 @@ sub related_resultset { $self->{related_resultsets} ||= {}; return $self->{related_resultsets}{$rel} ||= do { - my $rel_info = $self->result_source->relationship_info($rel); + my $rsrc = $self->result_source; + my $rel_info = $rsrc->relationship_info($rel); $self->throw_exception( - "search_related: result source '" . $self->result_source->source_name . + "search_related: result source '" . $rsrc->source_name . "' has no such relationship $rel") unless $rel_info; - my ($from,$seen) = $self->_chain_relationship($rel); + my $attrs = $self->_chain_relationship($rel); + + my $join_count = $attrs->{seen_join}{$rel}; + + my $alias = $self->result_source->storage + ->relname_to_table_alias($rel, $join_count); + + # since this is search_related, and we already slid the select window inwards + # (the select/as attrs were deleted in the beginning), we need to flip all + # left joins to inner, so we get the expected results + # read the comment on top of the actual function to see what this does + $attrs->{from} = $rsrc->schema->storage->_straight_join_to_node ($attrs->{from}, $alias); - my $join_count = $seen->{$rel}; - my $alias = ($join_count > 1 ? join('_', $rel, $join_count) : $rel); #XXX - temp fix for result_class bug. There likely is a more elegant fix -groditi - my %attrs = %{$self->{attrs}||{}}; - delete @attrs{qw(result_class alias)}; + delete @{$attrs}{qw(result_class alias)}; my $new_cache; @@ -2510,7 +2547,7 @@ sub related_resultset { } } - my $rel_source = $self->result_source->related_source($rel); + my $rel_source = $rsrc->related_source($rel); my $new = do { @@ -2520,20 +2557,14 @@ sub related_resultset { # to work sanely (e.g. RestrictWithObject wants to be able to add # extra query restrictions, and these may need to be $alias.) - my $attrs = $rel_source->resultset_attributes; - local $attrs->{alias} = $alias; + my $rel_attrs = $rel_source->resultset_attributes; + local $rel_attrs->{alias} = $alias; $rel_source->resultset ->search_rs( undef, { - %attrs, - join => undef, - prefetch => undef, - select => undef, - as => undef, - where => $self->{cond}, - seen_join => $seen, - from => $from, + %$attrs, + where => $attrs->{where}, }); }; $new->set_cache($new_cache) if $new_cache; @@ -2584,6 +2615,68 @@ sub current_source_alias { return ($self->{attrs} || {})->{alias} || 'me'; } +=head2 as_subselect_rs + +=over 4 + +=item Arguments: none + +=item Return Value: $resultset + +=back + +Act as a barrier to SQL symbols. The resultset provided will be made into a +"virtual view" by including it as a subquery within the from clause. From this +point on, any joined tables are inaccessible to ->search on the resultset (as if +it were simply where-filtered without joins). 
For example: + + my $rs = $schema->resultset('Bar')->search({'x.name' => 'abc'},{ join => 'x' }); + + # 'x' now pollutes the query namespace + + # So the following works as expected + my $ok_rs = $rs->search({'x.other' => 1}); + + # But this doesn't: instead of finding a 'Bar' related to two x rows (abc and + # def) we look for one row with contradictory terms and join in another table + # (aliased 'x_2') which we never use + my $broken_rs = $rs->search({'x.name' => 'def'}); + + my $rs2 = $rs->as_subselect_rs; + + # doesn't work - 'x' is no longer accessible in $rs2, having been sealed away + my $not_joined_rs = $rs2->search({'x.other' => 1}); + + # works as expected: finds a 'table' row related to two x rows (abc and def) + my $correctly_joined_rs = $rs2->search({'x.name' => 'def'}); + +Another example of when one might use this would be to select a subset of +columns in a group by clause: + + my $rs = $schema->resultset('Bar')->search(undef, { + group_by => [qw{ id foo_id baz_id }], + })->as_subselect_rs->search(undef, { + columns => [qw{ id foo_id }] + }); + +In the above example normally columns would have to be equal to the group by, +but because we isolated the group by into a subselect the above works. + +=cut + +sub as_subselect_rs { + my $self = shift; + + return $self->result_source->resultset->search( undef, { + alias => $self->current_source_alias, + from => [{ + $self->current_source_alias => $self->as_query, + -alias => $self->current_source_alias, + -source_handle => $self->result_source->handle, + }] + }); +} + # This code is called by search_related, and makes sure there # is clear separation between the joins before, during, and # after the relationship. This information is needed later @@ -2591,37 +2684,67 @@ sub current_source_alias { # with a relation_chain_depth less than the depth of the # current prefetch is not considered) # -# The increments happen in 1/2s to make it easier to correlate the -# join depth with the join path. An integer means a relationship -# specified via a search_related, whereas a fraction means an added -# join/prefetch via attributes +# The increments happen twice per join. An even number means a +# relationship specified via a search_related, whereas an odd +# number indicates a join/prefetch added via attributes +# +# Also this code will wrap the current resultset (the one we +# chain to) in a subselect IFF it contains limiting attributes sub _chain_relationship { my ($self, $rel) = @_; my $source = $self->result_source; - my $attrs = $self->{attrs}; + my $attrs = { %{$self->{attrs}||{}} }; - my $from = [ @{ - $attrs->{from} - || - [{ - -source_handle => $source->handle, - -alias => $attrs->{alias}, - $attrs->{alias} => $source->from, - }] - }]; + # we need to take the prefetch the attrs into account before we + # ->_resolve_join as otherwise they get lost - captainL + my $join = $self->_merge_attr( $attrs->{join}, $attrs->{prefetch} ); - my $seen = { %{$attrs->{seen_join} || {} } }; - my $jpath = ($attrs->{seen_join} && keys %{$attrs->{seen_join}}) - ? 
$from->[-1][0]{-join_path} - : []; + delete @{$attrs}{qw/join prefetch collapse distinct select as columns +select +as +columns/}; + my $seen = { %{ (delete $attrs->{seen_join}) || {} } }; - # we need to take the prefetch the attrs into account before we - # ->_resolve_join as otherwise they get lost - captainL - my $merged = $self->_merge_attr( $attrs->{join}, $attrs->{prefetch} ); + my $from; + my @force_subq_attrs = qw/offset rows group_by having/; + + if ( + ($attrs->{from} && ref $attrs->{from} ne 'ARRAY') + || + $self->_has_resolved_attr (@force_subq_attrs) + ) { + # Nuke the prefetch (if any) before the new $rs attrs + # are resolved (prefetch is useless - we are wrapping + # a subquery anyway). + my $rs_copy = $self->search; + $rs_copy->{attrs}{join} = $self->_merge_attr ( + $rs_copy->{attrs}{join}, + delete $rs_copy->{attrs}{prefetch}, + ); + + $from = [{ + -source_handle => $source->handle, + -alias => $attrs->{alias}, + $attrs->{alias} => $rs_copy->as_query, + }]; + delete @{$attrs}{@force_subq_attrs, 'where'}; + $seen->{-relation_chain_depth} = 0; + } + elsif ($attrs->{from}) { #shallow copy suffices + $from = [ @{$attrs->{from}} ]; + } + else { + $from = [{ + -source_handle => $source->handle, + -alias => $attrs->{alias}, + $attrs->{alias} => $source->from, + }]; + } + + my $jpath = ($seen->{-relation_chain_depth}) + ? $from->[-1][0]{-join_path} + : []; my @requested_joins = $source->_resolve_join( - $merged, + $join, $attrs->{alias}, $seen, $jpath, @@ -2629,7 +2752,7 @@ sub _chain_relationship { push @$from, @requested_joins; - $seen->{-relation_chain_depth} += 0.5; + $seen->{-relation_chain_depth}++; # if $self already had a join/prefetch specified on it, the requested # $rel might very well be already included. What we do in this case @@ -2637,26 +2760,16 @@ sub _chain_relationship { # the join in question so we could tell it *is* the search_related) my $already_joined; - # we consider the last one thus reverse for my $j (reverse @requested_joins) { - if ($rel eq $j->[0]{-join_path}[-1]) { - $j->[0]{-relation_chain_depth} += 0.5; + my ($last_j) = keys %{$j->[0]{-join_path}[-1]}; + if ($rel eq $last_j) { + $j->[0]{-relation_chain_depth}++; $already_joined++; last; } } -# alternative way to scan the entire chain - not backwards compatible -# for my $j (reverse @$from) { -# next unless ref $j eq 'ARRAY'; -# if ($j->[0]{-join_path} && $j->[0]{-join_path}[-1] eq $rel) { -# $j->[0]{-relation_chain_depth} += 0.5; -# $already_joined++; -# last; -# } -# } - unless ($already_joined) { push @$from, $source->_resolve_join( $rel, @@ -2666,9 +2779,9 @@ sub _chain_relationship { ); } - $seen->{-relation_chain_depth} += 0.5; + $seen->{-relation_chain_depth}++; - return ($from,$seen); + return {%$attrs, from => $from, seen_join => $seen}; } # too many times we have to do $attrs = { %{$self->_resolved_attrs} } @@ -2691,41 +2804,46 @@ sub _resolved_attrs { # build columns (as long as select isn't set) into a set of as/select hashes unless ( $attrs->{select} ) { - my @cols = ( ref($attrs->{columns}) eq 'ARRAY' ) - ? @{ delete $attrs->{columns}} - : ( - ( delete $attrs->{columns} ) - || - $source->columns - ) - ; + my @cols; + if ( ref $attrs->{columns} eq 'ARRAY' ) { + @cols = @{ delete $attrs->{columns}} + } elsif ( defined $attrs->{columns} ) { + @cols = delete $attrs->{columns} + } else { + @cols = $source->columns + } - @colbits = map { - ( ref($_) eq 'HASH' ) - ? $_ - : { - ( - /^\Q${alias}.\E(.+)$/ - ? "$1" - : "$_" - ) - => - ( - /\./ - ? 
"$_" - : "${alias}.$_" - ) - } - } @cols; + for (@cols) { + if ( ref $_ eq 'HASH' ) { + push @colbits, $_ + } else { + my $key = /^\Q${alias}.\E(.+)$/ + ? "$1" + : "$_"; + my $value = /\./ + ? "$_" + : "${alias}.$_"; + push @colbits, { $key => $value }; + } + } } # add the additional columns on - foreach ( 'include_columns', '+columns' ) { - push @colbits, map { - ( ref($_) eq 'HASH' ) - ? $_ - : { ( split( /\./, $_ ) )[-1] => ( /\./ ? $_ : "${alias}.$_" ) } - } ( ref($attrs->{$_}) eq 'ARRAY' ) ? @{ delete $attrs->{$_} } : delete $attrs->{$_} if ( $attrs->{$_} ); + foreach (qw{include_columns +columns}) { + if ( $attrs->{$_} ) { + my @list = ( ref($attrs->{$_}) eq 'ARRAY' ) + ? @{ delete $attrs->{$_} } + : delete $attrs->{$_}; + for (@list) { + if ( ref($_) eq 'HASH' ) { + push @colbits, $_ + } else { + my $key = ( split /\./, $_ )[-1]; + my $value = ( /\./ ? $_ : "$alias.$_" ); + push @colbits, { $key => $value }; + } + } + } } # start with initial select items @@ -2734,15 +2852,22 @@ sub _resolved_attrs { ( ref $attrs->{select} eq 'ARRAY' ) ? [ @{ $attrs->{select} } ] : [ $attrs->{select} ]; - $attrs->{as} = ( - $attrs->{as} - ? ( - ref $attrs->{as} eq 'ARRAY' - ? [ @{ $attrs->{as} } ] - : [ $attrs->{as} ] + + if ( $attrs->{as} ) { + $attrs->{as} = + ( + ref $attrs->{as} eq 'ARRAY' + ? [ @{ $attrs->{as} } ] + : [ $attrs->{as} ] ) - : [ map { m/^\Q${alias}.\E(.+)$/ ? $1 : $_ } @{ $attrs->{select} } ] - ); + } else { + $attrs->{as} = [ map { + m/^\Q${alias}.\E(.+)$/ + ? $1 + : $_ + } @{ $attrs->{select} } + ] + } } else { @@ -2752,27 +2877,24 @@ sub _resolved_attrs { } # now add colbits to select/as - push( @{ $attrs->{select} }, map { values( %{$_} ) } @colbits ); - push( @{ $attrs->{as} }, map { keys( %{$_} ) } @colbits ); + push @{ $attrs->{select} }, map values %{$_}, @colbits; + push @{ $attrs->{as} }, map keys %{$_}, @colbits; - my $adds; - if ( $adds = delete $attrs->{'+select'} ) { + if ( my $adds = delete $attrs->{'+select'} ) { $adds = [$adds] unless ref $adds eq 'ARRAY'; - push( - @{ $attrs->{select} }, - map { /\./ || ref $_ ? $_ : "${alias}.$_" } @$adds - ); + push @{ $attrs->{select} }, + map { /\./ || ref $_ ? $_ : "$alias.$_" } @$adds; } - if ( $adds = delete $attrs->{'+as'} ) { + if ( my $adds = delete $attrs->{'+as'} ) { $adds = [$adds] unless ref $adds eq 'ARRAY'; - push( @{ $attrs->{as} }, @$adds ); + push @{ $attrs->{as} }, @$adds; } - $attrs->{from} ||= [ { + $attrs->{from} ||= [{ -source_handle => $source->handle, -alias => $self->{attrs}{alias}, $self->{attrs}{alias} => $source->from, - } ]; + }]; if ( $attrs->{join} || $attrs->{prefetch} ) { @@ -2792,7 +2914,7 @@ sub _resolved_attrs { $join, $alias, { %{ $attrs->{seen_join} || {} } }, - ($attrs->{seen_join} && keys %{$attrs->{seen_join}}) + ( $attrs->{seen_join} && keys %{$attrs->{seen_join}}) ? $attrs->{from}[-1][0]{-join_path} : [] , @@ -2827,11 +2949,10 @@ sub _resolved_attrs { my %already_grouped = map { $_ => 1 } (@{$attrs->{group_by}}); my $storage = $self->result_source->schema->storage; + my $rs_column_list = $storage->_resolve_column_info ($attrs->{from}); - my @chunks = $storage->sql_maker->_order_by_chunks ($attrs->{order_by}); - for my $chunk (map { ref $_ ? 
@$_ : $_ } (@chunks) ) { - $chunk =~ s/\s+ (?: ASC|DESC ) \s* $//ix; + for my $chunk ($storage->_parse_order_by($attrs->{order_by})) { if ($rs_column_list->{$chunk} && not $already_grouped{$chunk}++) { push @{$attrs->{group_by}}, $chunk; } @@ -2845,7 +2966,26 @@ sub _resolved_attrs { my $prefetch_ordering = []; - my $join_map = $self->_joinpath_aliases ($attrs->{from}, $attrs->{seen_join}); + # this is a separate structure (we don't look in {from} directly) + # as the resolver needs to shift things off the lists to work + # properly (identical-prefetches on different branches) + my $join_map = {}; + if (ref $attrs->{from} eq 'ARRAY') { + + my $start_depth = $attrs->{seen_join}{-relation_chain_depth} || 0; + + for my $j ( @{$attrs->{from}}[1 .. $#{$attrs->{from}} ] ) { + next unless $j->[0]{-alias}; + next unless $j->[0]{-join_path}; + next if ($j->[0]{-relation_chain_depth} || 0) < $start_depth; + + my @jpath = map { keys %$_ } @{$j->[0]{-join_path}}; + + my $p = $join_map; + $p = $p->{$_} ||= {} for @jpath[ ($start_depth/2) .. $#jpath]; #only even depths are actual jpath boundaries + push @{$p->{-join_aliases} }, $j->[0]{-alias}; + } + } my @prefetch = $source->_resolve_prefetch( $prefetch, $alias, $join_map, $prefetch_ordering, $attrs->{collapse} ); @@ -2874,33 +3014,6 @@ sub _resolved_attrs { return $self->{_attrs} = $attrs; } -sub _joinpath_aliases { - my ($self, $fromspec, $seen) = @_; - - my $paths = {}; - return $paths unless ref $fromspec eq 'ARRAY'; - - my $cur_depth = $seen->{-relation_chain_depth} || 0; - - if (int ($cur_depth) != $cur_depth) { - $self->throw_exception ("-relation_chain_depth is not an integer, something went horribly wrong ($cur_depth)"); - } - - for my $j (@$fromspec) { - - next if ref $j ne 'ARRAY'; - next if ($j->[0]{-relation_chain_depth} || 0) < $cur_depth; - - my $jpath = $j->[0]{-join_path}; - - my $p = $paths; - $p = $p->{$_} ||= {} for @{$jpath}[$cur_depth .. $#$jpath]; - push @{$p->{-join_aliases} }, $j->[0]{-alias}; - } - - return $paths; -} - sub _rollout_attr { my ($self, $attr) = @_; @@ -3146,23 +3259,27 @@ names: select => [ 'name', { count => 'employeeid' }, - { sum => 'salary' } + { max => { length => 'name' }, -as => 'longest_name' } ] }); -When you use function/stored procedure names and do not supply an C -attribute, the column names returned are storage-dependent. E.g. MySQL would -return a column named C in the above example. + # Equivalent SQL + SELECT name, COUNT( employeeid ), MAX( LENGTH( name ) ) AS longest_name FROM employee -B You will almost always need a corresponding 'as' entry when you use -'select'. +B You will almost always need a corresponding L attribute when you +use L, to instruct DBIx::Class how to store the result of the column. +Also note that the L attribute has nothing to do with the SQL-side 'AS' +identifier aliasing. You can however alias a function, so you can use it in +e.g. an C clause. This is done via the C<-as> B but adds columns to the selection. +L but adds columns to the default selection, instead of specifying +an explicit list. =back @@ -3182,25 +3299,26 @@ Indicates additional column names for those added via L. See L. =back -Indicates column names for object inflation. That is, C -indicates the name that the column can be accessed as via the -C method (or via the object accessor, B). It has nothing to do with the SQL code C, -usually when C for details. 
$rs = $schema->resultset('Employee')->search(undef, { select => [ 'name', - { count => 'employeeid' } + { count => 'employeeid' }, + { max => { length => 'name' }, -as => 'longest_name' } ], - as => ['name', 'employee_count'], + as => [qw/ + name + employee_count + max_name_length + /], }); - my $employee = $rs->first(); # get the first Employee - If the object against which the search is performed already has an accessor matching a column name specified in C, the value can be retrieved using the accessor as normal: @@ -3215,16 +3333,6 @@ use C instead: You can create your own accessors if required - see L for details. -Please note: This will NOT insert an C into the SQL -statement produced, it is used for internal access only. Thus -attempting to use the accessor in an C clause or similar -will fail miserably. - -To get around this limitation, you can supply literal SQL to your -C. This is done safely in a transaction -(locking the table.) See L. - -A recommended L setting: - - on_connect_call => [['datetime_setup'], ['blob_setup', log_on_update => 0]] +This is the base class/dispatcher for Storage's designed to work with +L =head1 METHODS @@ -59,1076 +22,106 @@ A recommended L setting: sub _rebless { my $self = shift; - if (ref($self) eq 'DBIx::Class::Storage::DBI::Sybase') { - my $dbtype = eval { - @{$self->_get_dbh->selectrow_arrayref(qq{sp_server_info \@attribute_id=1})}[2] - } || ''; - $self->throw_exception("Unable to estable connection to determine database type: $@") - if $@; + my $dbtype = eval { + @{$self->_get_dbh->selectrow_arrayref(qq{sp_server_info \@attribute_id=1})}[2] + }; + $self->throw_exception("Unable to estable connection to determine database type: $@") + if $@; + + if ($dbtype) { $dbtype =~ s/\W/_/gi; - my $subclass = "DBIx::Class::Storage::DBI::Sybase::${dbtype}"; - if ($dbtype && $self->load_optional_class($subclass)) { + # saner class name + $dbtype = 'ASE' if $dbtype eq 'SQL_Server'; + + my $subclass = __PACKAGE__ . "::$dbtype"; + if ($self->load_optional_class($subclass)) { bless $self, $subclass; $self->_rebless; - } else { # real Sybase - my $no_bind_vars = 'DBIx::Class::Storage::DBI::Sybase::NoBindVars'; - - if ($self->using_freetds) { - carp <<'EOF' unless $ENV{DBIC_SYBASE_FREETDS_NOWARN}; - -You are using FreeTDS with Sybase. - -We will do our best to support this configuration, but please consider this -support experimental. - -TEXT/IMAGE columns will definitely not work. - -You are encouraged to recompile DBD::Sybase with the Sybase Open Client libraries -instead. - -See perldoc DBIx::Class::Storage::DBI::Sybase for more details. - -To turn off this warning set the DBIC_SYBASE_FREETDS_NOWARN environment -variable. 
-EOF - if (not $self->_typeless_placeholders_supported) { - if ($self->_placeholders_supported) { - $self->auto_cast(1); - } else { - $self->ensure_class_loaded($no_bind_vars); - bless $self, $no_bind_vars; - $self->_rebless; - } - } - } - elsif (not $self->_get_dbh->{syb_dynamic_supported}) { - # not necessarily FreeTDS, but no placeholders nevertheless - $self->ensure_class_loaded($no_bind_vars); - bless $self, $no_bind_vars; - $self->_rebless; - } elsif (not $self->_typeless_placeholders_supported) { - # this is highly unlikely, but we check just in case - $self->auto_cast(1); - } } } } -sub _init { +sub _ping { my $self = shift; - $self->_set_max_connect(256); - - # based on LongReadLen in connect_info - $self->set_textsize if $self->using_freetds; - -# create storage for insert/(update blob) transactions, -# unless this is that storage - return if $self->_is_extra_storage; - - my $writer_storage = (ref $self)->new; - $writer_storage->_is_extra_storage(1); - $writer_storage->connect_info($self->connect_info); - $writer_storage->auto_cast($self->auto_cast); + my $dbh = $self->_dbh or return 0; - $self->_writer_storage($writer_storage); + local $dbh->{RaiseError} = 1; + local $dbh->{PrintError} = 0; -# create a bulk storage unless connect_info is a coderef - return - if (Scalar::Util::reftype($self->_dbi_connect_info->[0])||'') eq 'CODE'; - - my $bulk_storage = (ref $self)->new; - - $bulk_storage->_is_extra_storage(1); - $bulk_storage->_is_bulk_storage(1); # for special ->disconnect acrobatics - $bulk_storage->connect_info($self->connect_info); - -# this is why - $bulk_storage->_dbi_connect_info->[0] .= ';bulkLogin=1'; - - $self->_bulk_storage($bulk_storage); -} - -for my $method (@also_proxy_to_extra_storages) { - no strict 'refs'; - no warnings 'redefine'; - - my $replaced = __PACKAGE__->can($method); + if ($dbh->{syb_no_child_con}) { +# if extra connections are not allowed, then ->ping is reliable + my $ping = eval { $dbh->ping }; + return $@ ? 0 : $ping; + } - *{$method} = Sub::Name::subname $method => sub { - my $self = shift; - $self->_writer_storage->$replaced(@_) if $self->_writer_storage; - $self->_bulk_storage->$replaced(@_) if $self->_bulk_storage; - return $self->$replaced(@_); + eval { +# XXX if the main connection goes stale, does opening another for this statement +# really determine anything? + $dbh->do('select 1'); }; -} -sub disconnect { - my $self = shift; - -# Even though we call $sth->finish for uses off the bulk API, there's still an -# "active statement" warning on disconnect, which we throw away here. -# This is due to the bug described in insert_bulk. -# Currently a noop because 'prepare' is used instead of 'prepare_cached'. - local $SIG{__WARN__} = sub { - warn $_[0] unless $_[0] =~ /active statement/i; - } if $self->_is_bulk_storage; - -# so that next transaction gets a dbh - $self->_began_bulk_work(0) if $self->_is_bulk_storage; - - $self->next::method; + return $@ ? 0 : 1; } -# Make sure we have CHAINED mode turned on if AutoCommit is off in non-FreeTDS -# DBD::Sybase (since we don't know how DBD::Sybase was compiled.) If however -# we're using FreeTDS, CHAINED mode turns on an implicit transaction which we -# only want when AutoCommit is off. 
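# (illustrative summary, not part of the original patch) the effective
# settings applied by the code below are:
#   plain DBD::Sybase:            $dbh->{syb_chained_txn} = 1;
#   FreeTDS with AutoCommit on:   $dbh->do('SET CHAINED OFF');
#   FreeTDS with AutoCommit off:  $dbh->do('SET CHAINED ON');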
-sub _populate_dbh { +sub _set_max_connect { my $self = shift; + my $val = shift || 256; - $self->next::method(@_); + my $dsn = $self->_dbi_connect_info->[0]; - return unless $self->_driver_determined; # otherwise we screw up MSSQL - - if ($self->_is_bulk_storage) { -# this should be cleared on every reconnect - $self->_began_bulk_work(0); - return; - } + return if ref($dsn) eq 'CODE'; - if (not $self->using_freetds) { - $self->_dbh->{syb_chained_txn} = 1; - } else { - if ($self->_dbh_autocommit) { - $self->_dbh->do('SET CHAINED OFF'); - } else { - $self->_dbh->do('SET CHAINED ON'); - } + if ($dsn !~ /maxConnect=/) { + $self->_dbi_connect_info->[0] = "$dsn;maxConnect=$val"; + my $connected = defined $self->_dbh; + $self->disconnect; + $self->ensure_connected if $connected; } } -=head2 connect_call_blob_setup - -Used as: - - on_connect_call => [ [ 'blob_setup', log_on_update => 0 ] ] - -Does C<< $dbh->{syb_binary_images} = 1; >> to return C data as raw binary -instead of as a hex string. - -Recommended. +=head2 using_freetds -Also sets the C value for blob write operations. The default is -C<1>, but C<0> is better if your database is configured for it. - -See -L. +Whether or not L was compiled against FreeTDS. If false, it means +the Sybase OpenClient libraries were used. =cut -sub connect_call_blob_setup { - my $self = shift; - my %args = @_; - my $dbh = $self->_dbh; - $dbh->{syb_binary_images} = 1; - - $self->_blob_log_on_update($args{log_on_update}) - if exists $args{log_on_update}; -} - -sub _is_lob_type { - my $self = shift; - my $type = shift; - $type && $type =~ /(?:text|image|lob|bytea|binary|memo)/i; -} - -sub _is_lob_column { - my ($self, $source, $column) = @_; - - return $self->_is_lob_type($source->column_info($column)->{data_type}); -} - -sub _prep_for_execute { - my $self = shift; - my ($op, $extra_bind, $ident, $args) = @_; - - my ($sql, $bind) = $self->next::method (@_); - - my $table = Scalar::Util::blessed($ident) ? $ident->from : $ident; - - my $bind_info = $self->_resolve_column_info( - $ident, [map $_->[0], @{$bind}] - ); - my $bound_identity_col = List::Util::first - { $bind_info->{$_}{is_auto_increment} } - (keys %$bind_info) - ; - my $identity_col = Scalar::Util::blessed($ident) && - List::Util::first - { $ident->column_info($_)->{is_auto_increment} } - $ident->columns - ; - - if (($op eq 'insert' && $bound_identity_col) || - ($op eq 'update' && exists $args->[0]{$identity_col})) { - $sql = join ("\n", - $self->_set_table_identity_sql($op => $table, 'on'), - $sql, - $self->_set_table_identity_sql($op => $table, 'off'), - ); - } - - if ($op eq 'insert' && (not $bound_identity_col) && $identity_col && - (not $self->{insert_bulk})) { - $sql = - "$sql\n" . - $self->_fetch_identity_sql($ident, $identity_col); - } - - return ($sql, $bind); -} - -sub _set_table_identity_sql { - my ($self, $op, $table, $on_off) = @_; - - return sprintf 'SET IDENTITY_%s %s %s', - uc($op), $self->sql_maker->_quote($table), uc($on_off); -} - -# Stolen from SQLT, with some modifications. This is a makeshift -# solution before a sane type-mapping library is available, thus -# the 'our' for easy overrides. 
-our %TYPE_MAPPING = ( - number => 'numeric', - money => 'money', - varchar => 'varchar', - varchar2 => 'varchar', - timestamp => 'datetime', - text => 'varchar', - real => 'double precision', - comment => 'text', - bit => 'bit', - tinyint => 'smallint', - float => 'double precision', - serial => 'numeric', - bigserial => 'numeric', - boolean => 'varchar', - long => 'varchar', -); - -sub _native_data_type { - my ($self, $type) = @_; - - $type = lc $type; - $type =~ s/\s* identity//x; - - return uc($TYPE_MAPPING{$type} || $type); -} - -sub _fetch_identity_sql { - my ($self, $source, $col) = @_; - - return sprintf ("SELECT MAX(%s) FROM %s", - map { $self->sql_maker->_quote ($_) } ($col, $source->from) - ); -} - -sub _execute { - my $self = shift; - my ($op) = @_; - - my ($rv, $sth, @bind) = $self->dbh_do($self->can('_dbh_execute'), @_); - - if ($op eq 'insert') { - $self->_identity($sth->fetchrow_array); - $sth->finish; - } - - return wantarray ? ($rv, $sth, @bind) : $rv; -} - -sub last_insert_id { shift->_identity } - -# handles TEXT/IMAGE and transaction for last_insert_id -sub insert { +sub using_freetds { my $self = shift; - my ($source, $to_insert) = @_; - - my $identity_col = (List::Util::first - { $source->column_info($_)->{is_auto_increment} } - $source->columns) || ''; - - # check for empty insert - # INSERT INTO foo DEFAULT VALUES -- does not work with Sybase - # try to insert explicit 'DEFAULT's instead (except for identity) - if (not %$to_insert) { - for my $col ($source->columns) { - next if $col eq $identity_col; - $to_insert->{$col} = \'DEFAULT'; - } - } - - my $blob_cols = $self->_remove_blob_cols($source, $to_insert); - - # do we need the horrific SELECT MAX(COL) hack? - my $dumb_last_insert_id = - $identity_col - && (not exists $to_insert->{$identity_col}) - && ($self->_identity_method||'') ne '@@IDENTITY'; - - my $next = $self->next::can; - - # we are already in a transaction, or there are no blobs - # and we don't need the PK - just (try to) do it - if ($self->{transaction_depth} - || (!$blob_cols && !$dumb_last_insert_id) - ) { - return $self->_insert ( - $next, $source, $to_insert, $blob_cols, $identity_col - ); - } - - # otherwise use the _writer_storage to do the insert+transaction on another - # connection - my $guard = $self->_writer_storage->txn_scope_guard; - - my $updated_cols = $self->_writer_storage->_insert ( - $next, $source, $to_insert, $blob_cols, $identity_col - ); - - $self->_identity($self->_writer_storage->_identity); - - $guard->commit; - - return $updated_cols; -} - -sub _insert { - my ($self, $next, $source, $to_insert, $blob_cols, $identity_col) = @_; - - my $updated_cols = $self->$next ($source, $to_insert); - - my $final_row = { - ($identity_col ? - ($identity_col => $self->last_insert_id($source, $identity_col)) : ()), - %$to_insert, - %$updated_cols, - }; - - $self->_insert_blobs ($source, $blob_cols, $final_row) if $blob_cols; - - return $updated_cols; -} - -sub update { - my $self = shift; - my ($source, $fields, $where, @rest) = @_; - - my $wantarray = wantarray; - - my $blob_cols = $self->_remove_blob_cols($source, $fields); - - my $table = $source->name; - - my $identity_col = List::Util::first - { $source->column_info($_)->{is_auto_increment} } - $source->columns; - - my $is_identity_update = $identity_col && defined $fields->{$identity_col}; - - return $self->next::method(@_) unless $blob_cols; - -# If there are any blobs in $where, Sybase will return a descriptive error -# message. 
-# XXX blobs can still be used with a LIKE query, and this should be handled. - -# update+blob update(s) done atomically on separate connection - $self = $self->_writer_storage; - - my $guard = $self->txn_scope_guard; - -# First update the blob columns to be updated to '' (taken from $fields, where -# it is originally put by _remove_blob_cols .) - my %blobs_to_empty = map { ($_ => delete $fields->{$_}) } keys %$blob_cols; - -# We can't only update NULL blobs, because blobs cannot be in the WHERE clause. - - $self->next::method($source, \%blobs_to_empty, $where, @rest); - -# Now update the blobs before the other columns in case the update of other -# columns makes the search condition invalid. - $self->_update_blobs($source, $blob_cols, $where); - - my @res; - if (%$fields) { - if ($wantarray) { - @res = $self->next::method(@_); - } - elsif (defined $wantarray) { - $res[0] = $self->next::method(@_); - } - else { - $self->next::method(@_); - } - } - - $guard->commit; - - return $wantarray ? @res : $res[0]; -} - -sub insert_bulk { - my $self = shift; - my ($source, $cols, $data) = @_; - - my $identity_col = List::Util::first - { $source->column_info($_)->{is_auto_increment} } - $source->columns; - - my $is_identity_insert = (List::Util::first - { $_ eq $identity_col } - @{$cols} - ) ? 1 : 0; - - my @source_columns = $source->columns; - - my $use_bulk_api = - $self->_bulk_storage && - $self->_get_dbh->{syb_has_blk}; - - if ((not $use_bulk_api) && - (Scalar::Util::reftype($self->_dbi_connect_info->[0])||'') eq 'CODE' && - (not $self->_bulk_disabled_due_to_coderef_connect_info_warned)) { - carp <<'EOF'; -Bulk API support disabled due to use of a CODEREF connect_info. Reverting to -regular array inserts. -EOF - $self->_bulk_disabled_due_to_coderef_connect_info_warned(1); - } - - if (not $use_bulk_api) { - my $blob_cols = $self->_remove_blob_cols_array($source, $cols, $data); - -# _execute_array uses a txn anyway, but it ends too early in case we need to -# select max(col) to get the identity for inserting blobs. - ($self, my $guard) = $self->{transaction_depth} == 0 ? - ($self->_writer_storage, $self->_writer_storage->txn_scope_guard) - : - ($self, undef); - - local $self->{insert_bulk} = 1; - - $self->next::method(@_); - - if ($blob_cols) { - if ($is_identity_insert) { - $self->_insert_blobs_array ($source, $blob_cols, $cols, $data); - } - else { - my @cols_with_identities = (@$cols, $identity_col); - - ## calculate identities - # XXX This assumes identities always increase by 1, which may or may not - # be true. - my ($last_identity) = - $self->_dbh->selectrow_array ( - $self->_fetch_identity_sql($source, $identity_col) - ); - my @identities = (($last_identity - @$data + 1) .. $last_identity); - - my @data_with_identities = map [@$_, shift @identities], @$data; - - $self->_insert_blobs_array ( - $source, $blob_cols, \@cols_with_identities, \@data_with_identities - ); - } - } - - $guard->commit if $guard; - - return; - } - -# otherwise, use the bulk API - -# rearrange @$data so that columns are in database order - my %orig_idx; - @orig_idx{@$cols} = 0..$#$cols; - - my %new_idx; - @new_idx{@source_columns} = 0..$#source_columns; - - my @new_data; - for my $datum (@$data) { - my $new_datum = []; - for my $col (@source_columns) { -# identity data will be 'undef' if not $is_identity_insert -# columns with defaults will also be 'undef' - $new_datum->[ $new_idx{$col} ] = - exists $orig_idx{$col} ? 
$datum->[ $orig_idx{$col} ] : undef; - } - push @new_data, $new_datum; - } - -# bcp identity index is 1-based - my $identity_idx = exists $new_idx{$identity_col} ? - $new_idx{$identity_col} + 1 : 0; - -## Set a client-side conversion error handler, straight from DBD::Sybase docs. -# This ignores any data conversion errors detected by the client side libs, as -# they are usually harmless. - my $orig_cslib_cb = DBD::Sybase::set_cslib_cb( - Sub::Name::subname insert_bulk => sub { - my ($layer, $origin, $severity, $errno, $errmsg, $osmsg, $blkmsg) = @_; - - return 1 if $errno == 36; - - carp - "Layer: $layer, Origin: $origin, Severity: $severity, Error: $errno" . - ($errmsg ? "\n$errmsg" : '') . - ($osmsg ? "\n$osmsg" : '') . - ($blkmsg ? "\n$blkmsg" : ''); - - return 0; - }); - - eval { - my $bulk = $self->_bulk_storage; - - my $guard = $bulk->txn_scope_guard; - -## XXX get this to work instead of our own $sth -## will require SQLA or *Hacks changes for ordered columns -# $bulk->next::method($source, \@source_columns, \@new_data, { -# syb_bcp_attribs => { -# identity_flag => $is_identity_insert, -# identity_column => $identity_idx, -# } -# }); - my $sql = 'INSERT INTO ' . - $bulk->sql_maker->_quote($source->name) . ' (' . -# colname list is ignored for BCP, but does no harm - (join ', ', map $bulk->sql_maker->_quote($_), @source_columns) . ') '. - ' VALUES ('. (join ', ', ('?') x @source_columns) . ')'; - -## XXX there's a bug in the DBD::Sybase bulk support that makes $sth->finish for -## a prepare_cached statement ineffective. Replace with ->sth when fixed, or -## better yet the version above. Should be fixed in DBD::Sybase . - my $sth = $bulk->_get_dbh->prepare($sql, -# 'insert', # op - { - syb_bcp_attribs => { - identity_flag => $is_identity_insert, - identity_column => $identity_idx, - } - } - ); - - my @bind = do { - my $idx = 0; - map [ $_, $idx++ ], @source_columns; - }; - $self->_execute_array( - $source, $sth, \@bind, \@source_columns, \@new_data, sub { - $guard->commit - } - ); - - $bulk->_query_end($sql); - }; - - my $exception = $@; - DBD::Sybase::set_cslib_cb($orig_cslib_cb); - - if ($exception =~ /-Y option/) { - carp <<"EOF"; - -Sybase bulk API operation failed due to character set incompatibility, reverting -to regular array inserts: - -*** Try unsetting the LANG environment variable. - -$exception -EOF - $self->_bulk_storage(undef); - unshift @_, $self; - goto \&insert_bulk; - } - elsif ($exception) { -# rollback makes the bulkLogin connection unusable - $self->_bulk_storage->disconnect; - $self->throw_exception($exception); - } -} - -sub _dbh_execute_array { - my ($self, $sth, $tuple_status, $cb) = @_; - - my $rv = $self->next::method($sth, $tuple_status); - $cb->() if $cb; - - return $rv; -} - -# Make sure blobs are not bound as placeholders, and return any non-empty ones -# as a hash. -sub _remove_blob_cols { - my ($self, $source, $fields) = @_; - - my %blob_cols; - - for my $col (keys %$fields) { - if ($self->_is_lob_column($source, $col)) { - my $blob_val = delete $fields->{$col}; - if (not defined $blob_val) { - $fields->{$col} = \'NULL'; - } - else { - $fields->{$col} = \"''"; - $blob_cols{$col} = $blob_val unless $blob_val eq ''; - } - } - } - - return %blob_cols ? 
\%blob_cols : undef; -} - -# same for insert_bulk -sub _remove_blob_cols_array { - my ($self, $source, $cols, $data) = @_; - - my @blob_cols; - - for my $i (0..$#$cols) { - my $col = $cols->[$i]; - - if ($self->_is_lob_column($source, $col)) { - for my $j (0..$#$data) { - my $blob_val = delete $data->[$j][$i]; - if (not defined $blob_val) { - $data->[$j][$i] = \'NULL'; - } - else { - $data->[$j][$i] = \"''"; - $blob_cols[$j][$i] = $blob_val - unless $blob_val eq ''; - } - } - } - } - - return @blob_cols ? \@blob_cols : undef; + return $self->_get_dbh->{syb_oc_version} =~ /freetds/i; } -sub _update_blobs { - my ($self, $source, $blob_cols, $where) = @_; - - my (@primary_cols) = $source->primary_columns; - - $self->throw_exception('Cannot update TEXT/IMAGE column(s) without a primary key') - unless @primary_cols; - -# check if we're updating a single row by PK - my $pk_cols_in_where = 0; - for my $col (@primary_cols) { - $pk_cols_in_where++ if defined $where->{$col}; - } - my @rows; - - if ($pk_cols_in_where == @primary_cols) { - my %row_to_update; - @row_to_update{@primary_cols} = @{$where}{@primary_cols}; - @rows = \%row_to_update; - } else { - my $cursor = $self->select ($source, \@primary_cols, $where, {}); - @rows = map { - my %row; @row{@primary_cols} = @$_; \%row - } $cursor->all; - } - - for my $row (@rows) { - $self->_insert_blobs($source, $blob_cols, $row); - } -} - -sub _insert_blobs { - my ($self, $source, $blob_cols, $row) = @_; - my $dbh = $self->_get_dbh; - - my $table = $source->name; - - my %row = %$row; - my (@primary_cols) = $source->primary_columns; - - $self->throw_exception('Cannot update TEXT/IMAGE column(s) without a primary key') - unless @primary_cols; - - $self->throw_exception('Cannot update TEXT/IMAGE column(s) without primary key values') - if ((grep { defined $row{$_} } @primary_cols) != @primary_cols); +=head2 set_textsize - for my $col (keys %$blob_cols) { - my $blob = $blob_cols->{$col}; +When using FreeTDS and/or MSSQL, C<< $dbh->{LongReadLen} >> is not available, +use this function instead. It does: - my %where = map { ($_, $row{$_}) } @primary_cols; + $dbh->do("SET TEXTSIZE $bytes"); - my $cursor = $self->select ($source, [$col], \%where, {}); - $cursor->next; - my $sth = $cursor->sth; - - if (not $sth) { - - $self->throw_exception( - "Could not find row in table '$table' for blob update:\n" - . Data::Dumper::Concise::Dumper (\%where) - ); - } - - eval { - do { - $sth->func('CS_GET', 1, 'ct_data_info') or die $sth->errstr; - } while $sth->fetch; - - $sth->func('ct_prepare_send') or die $sth->errstr; - - my $log_on_update = $self->_blob_log_on_update; - $log_on_update = 1 if not defined $log_on_update; - - $sth->func('CS_SET', 1, { - total_txtlen => length($blob), - log_on_update => $log_on_update - }, 'ct_data_info') or die $sth->errstr; - - $sth->func($blob, length($blob), 'ct_send_data') or die $sth->errstr; - - $sth->func('ct_finish_send') or die $sth->errstr; - }; - my $exception = $@; - $sth->finish if $sth; - if ($exception) { - if ($self->using_freetds) { - $self->throw_exception ( - 'TEXT/IMAGE operation failed, probably because you are using FreeTDS: ' - . 
$exception - ); - } else { - $self->throw_exception($exception); - } - } - } -} - -sub _insert_blobs_array { - my ($self, $source, $blob_cols, $cols, $data) = @_; - - for my $i (0..$#$data) { - my $datum = $data->[$i]; - - my %row; - @row{ @$cols } = @$datum; - - my %blob_vals; - for my $j (0..$#$cols) { - if (exists $blob_cols->[$i][$j]) { - $blob_vals{ $cols->[$j] } = $blob_cols->[$i][$j]; - } - } - - $self->_insert_blobs ($source, \%blob_vals, \%row); - } -} - -=head2 connect_call_datetime_setup - -Used as: - - on_connect_call => 'datetime_setup' - -In L to set: - - $dbh->syb_date_fmt('ISO_strict'); # output fmt: 2004-08-21T14:36:48.080Z - $dbh->do('set dateformat mdy'); # input fmt: 08/13/1979 18:08:55.080 - -On connection for use with L, using -L, which you will need to install. - -This works for both C and C columns, although -C columns only have minute precision. +Takes the number of bytes, or uses the C value from your +L if omitted, lastly falls back to the C<32768> which +is the L default. =cut -{ - my $old_dbd_warned = 0; - - sub connect_call_datetime_setup { - my $self = shift; - my $dbh = $self->_get_dbh; - - if ($dbh->can('syb_date_fmt')) { - # amazingly, this works with FreeTDS - $dbh->syb_date_fmt('ISO_strict'); - } elsif (not $old_dbd_warned) { - carp "Your DBD::Sybase is too old to support ". - "DBIx::Class::InflateColumn::DateTime, please upgrade!"; - $old_dbd_warned = 1; - } - - $dbh->do('SET DATEFORMAT mdy'); - - 1; - } -} - -sub datetime_parser_type { "DateTime::Format::Sybase" } - -# ->begin_work and such have no effect with FreeTDS but we run them anyway to -# let the DBD keep any state it needs to. -# -# If they ever do start working, the extra statements will do no harm (because -# Sybase supports nested transactions.) - -sub _dbh_begin_work { +sub set_textsize { my $self = shift; + my $text_size = shift || + eval { $self->_dbi_connect_info->[-1]->{LongReadLen} } || + 32768; # the DBD::Sybase default -# bulkLogin=1 connections are always in a transaction, and can only call BEGIN -# TRAN once. However, we need to make sure there's a $dbh. - return if $self->_is_bulk_storage && $self->_dbh && $self->_began_bulk_work; - - $self->next::method(@_); + return unless defined $text_size; - if ($self->using_freetds) { - $self->_get_dbh->do('BEGIN TRAN'); - } - - $self->_began_bulk_work(1) if $self->_is_bulk_storage; -} - -sub _dbh_commit { - my $self = shift; - if ($self->using_freetds) { - $self->_dbh->do('COMMIT'); - } - return $self->next::method(@_); -} - -sub _dbh_rollback { - my $self = shift; - if ($self->using_freetds) { - $self->_dbh->do('ROLLBACK'); - } - return $self->next::method(@_); -} - -# savepoint support using ASE syntax - -sub _svp_begin { - my ($self, $name) = @_; - - $self->_get_dbh->do("SAVE TRANSACTION $name"); -} - -# A new SAVE TRANSACTION with the same name releases the previous one. -sub _svp_release { 1 } - -sub _svp_rollback { - my ($self, $name) = @_; - - $self->_get_dbh->do("ROLLBACK TRANSACTION $name"); + $self->_dbh->do("SET TEXTSIZE $text_size"); } 1; -=head1 Schema::Loader Support - -There is an experimental branch of L that will -allow you to dump a schema from most (if not all) versions of Sybase. - -It is available via subversion from: - - http://dev.catalyst.perl.org/repos/bast/branches/DBIx-Class-Schema-Loader/current/ - -=head1 FreeTDS - -This driver supports L compiled against FreeTDS -(L) to the best of our ability, however it is -recommended that you recompile L against the Sybase Open Client -libraries. 
They are a part of the Sybase ASE distribution:

The Open Client FAQ is here: L.

Sybase ASE for Linux (which comes with the Open Client libraries) may be
downloaded here: L.

To see if you're using FreeTDS check C<< $schema->storage->using_freetds >>, or run:

  perl -MDBI -le 'my $dbh = DBI->connect($dsn, $user, $pass); print $dbh->{syb_oc_version}'

Some versions of the libraries involved will not support placeholders, in which
case the storage will be reblessed to L.

In some configurations, placeholders will work but will throw implicit type
conversion errors for anything that's not expecting a string. In such a case,
the C option from L is automatically set, which you may enable on connection
with L. The type info for the Cs is taken from the L definitions in your
Result classes, and is mapped to a Sybase type (if it isn't already) using a
mapping based on L.

In other configurations, placeholders will work just as they do with the Sybase
Open Client libraries.

Inserts or updates of TEXT/IMAGE columns will B<not> work with FreeTDS.

=head1 INSERTS WITH PLACEHOLDERS

With placeholders enabled, inserts are done in a transaction so that there are
no concurrency issues with getting the inserted identity value using C as it's
a session variable.

=head1 TRANSACTIONS

Due to limitations of the TDS protocol, L<DBD::Sybase>, or both, you cannot
begin a transaction while there are active cursors; nor can you use multiple
active cursors within a transaction. An active cursor is, for example, a
L that has been executed using C or C but has not been exhausted or L.

For example, this will not work:

  $schema->txn_do(sub {
    my $rs = $schema->resultset('Book');
    while (my $row = $rs->next) {
      $schema->resultset('MetaData')->create({
        book_id => $row->id,
        ...
      });
    }
  });

This won't either:

  my $first_row = $large_rs->first;
  $schema->txn_do(sub { ... });

Transactions done for inserts in C mode when placeholders are in use are not
affected, as they are done on an extra database handle.

Some workarounds:

=over 4

=item * use L

=item * L another L

=item * load the data from your cursor with L

=back

=head1 MAXIMUM CONNECTIONS

The TDS protocol makes separate connections to the server for active statements
in the background. By default the number of such connections is limited to 25,
on both the client side and the server side.

This is a bit too low for a complex L application, so on connection the client
side setting is set to C<256> (see L). You can override it to whatever setting
you like in the DSN.

See L for information on changing the setting on the server side.

=head1 DATES

See L to set up date formats for L.

=head1 TEXT/IMAGE COLUMNS

L compiled with FreeTDS will B<not> allow you to insert or update C columns.

Setting C<< $dbh->{LongReadLen} >> will also not work with FreeTDS; use either:

  $schema->storage->dbh->do("SET TEXTSIZE $bytes");

or

  $schema->storage->set_textsize($bytes);

instead.

However, the C you pass in L is used to execute the equivalent C command on
connection.

See L for a L setting you need to work with C columns.

=head1 BULK API

The experimental L Bulk API support is used for L in B<void> context, in a
transaction on a separate connection.
- -To use this feature effectively, use a large number of rows for each -L call, eg.: - - while (my $rows = $data_source->get_100_rows()) { - $rs->populate($rows); - } - -B the L -calls in your C classes B list columns in database order for this -to work. Also, you may have to unset the C environment variable before -loading your app, if it doesn't match the character set of your database. - -When inserting IMAGE columns using this method, you'll need to use -L as well. - -=head1 TODO - -=over - -=item * - -Transitions to AutoCommit=0 (starting a transaction) mode by exhausting -any active cursors, using eager cursors. - -=item * - -Real limits and limited counts using stored procedures deployed on startup. - -=item * - -Adaptive Server Anywhere (ASA) support, with possible SQLA::Limit support. - -=item * - -Blob update with a LIKE query on a blob, without invalidating the WHERE condition. - -=item * - -bulk_insert using prepare_cached (see comments.) - -=back - -=head1 AUTHOR +=head1 AUTHORS See L. @@ -1137,4 +130,3 @@ See L. You may distribute this code under the same terms as Perl itself. =cut -# vim:sts=2 sw=2: diff --git a/lib/DBIx/Class/Storage/DBI/Sybase/ASE.pm b/lib/DBIx/Class/Storage/DBI/Sybase/ASE.pm new file mode 100644 index 0000000..ddc2339 --- /dev/null +++ b/lib/DBIx/Class/Storage/DBI/Sybase/ASE.pm @@ -0,0 +1,1171 @@ +package DBIx::Class::Storage::DBI::Sybase::ASE; + +use strict; +use warnings; + +use base qw/ + DBIx::Class::Storage::DBI::Sybase + DBIx::Class::Storage::DBI::AutoCast +/; +use mro 'c3'; +use Carp::Clan qw/^DBIx::Class/; +use Scalar::Util(); +use List::Util(); +use Sub::Name(); +use Data::Dumper::Concise(); + +__PACKAGE__->mk_group_accessors('simple' => + qw/_identity _blob_log_on_update _writer_storage _is_extra_storage + _bulk_storage _is_bulk_storage _began_bulk_work + _bulk_disabled_due_to_coderef_connect_info_warned + _identity_method/ +); + +my @also_proxy_to_extra_storages = qw/ + connect_call_set_auto_cast auto_cast connect_call_blob_setup + connect_call_datetime_setup + + disconnect _connect_info _sql_maker _sql_maker_opts disable_sth_caching + auto_savepoint unsafe cursor_class debug debugobj schema +/; + +=head1 NAME + +DBIx::Class::Storage::DBI::Sybase::ASE - Sybase ASE SQL Server support for +DBIx::Class + +=head1 SYNOPSIS + +This subclass supports L for real (non-Microsoft) Sybase databases. + +=head1 DESCRIPTION + +If your version of Sybase does not support placeholders, then your storage will +be reblessed to L. +You can also enable that driver explicitly, see the documentation for more +details. + +With this driver there is unfortunately no way to get the C +without doing a C, which is the only way to get the C value in this +mode. + +In addition, they are done on a separate connection so that it's possible to +have active cursors when doing an insert. + +When using C transactions +are disabled, as there are no concurrency issues with C will work -for obtainging the last insert id of an C column, instead of having to +for obtaining the last insert id of an C column, instead of having to do C