r20551@brandon-blacks-computer (orig r3005): castaway | 2007-01-05 03:25:11 -0600
add inflatecolumn fixes to Changes
r20553@brandon-blacks-computer (orig r3007): castaway | 2007-01-09 15:20:55 -0600
todoify collapse test
r20554@brandon-blacks-computer (orig r3008): castaway | 2007-01-09 15:42:59 -0600
Up version to 0.07004
r20555@brandon-blacks-computer (orig r3009): castaway | 2007-01-09 15:44:39 -0600
0.07004
r20556@brandon-blacks-computer (orig r3010): castaway | 2007-01-10 11:54:19 -0600
Skip .orig files in dist
r20557@brandon-blacks-computer (orig r3011): castaway | 2007-01-10 13:37:23 -0600
oops, remove .orig files etc
r20599@brandon-blacks-computer (orig r3053): zarquon | 2007-01-23 04:15:05 -0600
minor doc improvement to clarify how to do joins
r20604@brandon-blacks-computer (orig r3058): zarquon | 2007-01-24 15:48:07 -0600
explained a cryptic error message
r20610@brandon-blacks-computer (orig r3064): castaway | 2007-01-29 10:28:12 -0600
Make POD a little more obvious about component order
r20616@brandon-blacks-computer (orig r3070): jester | 2007-01-31 17:59:06 -0600
Doc cleanup in Ordered.pm
r20618@brandon-blacks-computer (orig r3072): castaway | 2007-02-02 13:48:53 -0600
POD updates
r20619@brandon-blacks-computer (orig r3073): ash | 2007-02-03 15:33:19 -0600
Made ->update not change the hashref passed in (with test.)
r20621@brandon-blacks-computer (orig r3075): ash | 2007-02-05 06:29:52 -0600
Backed out my r3073, and doc'd the fact that it takes a hashref that might be changed
r20622@brandon-blacks-computer (orig r3076): castaway | 2007-02-05 07:56:35 -0600
Add _accessor example, thanks to grinktt3n
r20623@brandon-blacks-computer (orig r3077): castaway | 2007-02-05 08:23:22 -0600
Fix overload example
r20625@brandon-blacks-computer (orig r3079): grink1tt3n | 2007-02-06 13:40:00 -0600
Added:
FAQ-fetch-a-formatted-column.txt
FAQ-store-JSON-in-a-column.txt
r20640@brandon-blacks-computer (orig r3094): castaway | 2007-02-08 02:25:54 -0600
Argh! Fix spurious example
r20643@brandon-blacks-computer (orig r3097): castaway | 2007-02-13 08:00:21 -0600
Improve resultset_attributes docs
r20644@brandon-blacks-computer (orig r3098): grink1tt3n | 2007-02-14 01:16:40 -0600
Fleshed out the JSON inflated column example with a corresponding YAML example. Added a cautionary warning against overuse. Added a link to InflateColumn.
r20645@brandon-blacks-computer (orig r3099): grink1tt3n | 2007-02-14 02:39:26 -0600
Added how to '.. fetch a single (or topmost) row?' to the FAQ
r20646@brandon-blacks-computer (orig r3100): zarquon | 2007-02-14 02:44:18 -0600
improved docs for as attrib
r20647@brandon-blacks-computer (orig r3101): zarquon | 2007-02-14 04:05:51 -0600
pointed out ambiguity of as attribs
r26909@brandon-blacks-computer (orig r3107): castaway | 2007-03-03 06:15:03 -0600
Fix to $filename from Carl Vincent
r27114@brandon-blacks-computer (orig r3126): jshirley | 2007-03-12 16:27:08 -0500
Updating Manual/Intro to reflect better -in => [] usage, adding myself to contributors (mst put me in dbic-devel, too)
r27116@brandon-blacks-computer (orig r3128): castaway | 2007-03-14 10:02:44 -0500
Added patch from Schwern to allow cdbi compat to infer the has_many from a has_a
r27119@brandon-blacks-computer (orig r3129): castaway | 2007-03-16 11:04:07 -0500
Initial version of Manual::Joining
r27120@brandon-blacks-computer (orig r3130): castaway | 2007-03-16 11:06:21 -0500
Add FAQ and Joining links to Manual.pod
r27319@brandon-blacks-computer (orig r3148): blblack | 2007-03-29 08:42:27 -0500
fix for rt.cpan.org #25683 (DBD::Sybase/FreeTDS/MSSQL cannot have two open sths during column_info)
r27324@brandon-blacks-computer (orig r3153): blblack | 2007-03-29 09:13:17 -0500
backport Ash's quoting fix from Loader to columns_info_for
r29666@brandon-blacks-computer (orig r3173): castaway | 2007-04-02 17:19:05 -0500
Add patch from dec to allow us to pick which sources to deploy
r30330@brandon-blacks-computer (orig r3185): matthewt | 2007-04-09 15:41:34 -0500
add SQL::Abstract to search docs (patch from zamolxes)
r30510@brandon-blacks-computer (orig r3189): blblack | 2007-04-13 19:09:32 -0500
fixed regex in t/76joins (was relying on a 5.8.8 bug that is fixed in bleadperl, so this test was failing on bleadperl)
r30560@brandon-blacks-computer (orig r3196): marcus | 2007-04-16 07:55:44 -0500
Updated cookbook example
r30589@brandon-blacks-computer (orig r3199): blblack | 2007-04-17 18:45:11 -0500
Changes list synced up through now
r30590@brandon-blacks-computer (orig r3200): blblack | 2007-04-17 18:47:03 -0500
cleared up a Changes entry
r30591@brandon-blacks-computer (orig r3201): blblack | 2007-04-17 19:21:01 -0500
0.07006 changes, version update
r30607@brandon-blacks-computer (orig r3202): matthewt | 2007-04-18 21:29:05 -0500
reverting r3173, already implemented in -current
r30709@brandon-blacks-computer (orig r3208): matthewt | 2007-04-24 17:04:02 -0500
oops. props to chapman for the spot
r30762@brandon-blacks-computer (orig r3220): blblack | 2007-05-01 00:25:33 -0500
load-time performance improvements
Revision history for DBIx::Class
+ - Added Oracle/WhereJoins.pm for Oracle >= 8 to support
+ Oracle <= 9i, and provide Oracle with a better join method for
+ later versions. (I use the term better loosely.)
+ - select et al weren't properly detecting when the server connection
+ had timed out when not in a transaction
+ - The SQL::Translator parser class now respects a relationship attribute
+ of is_foreign_key_constraint to allow explicit control over whether
+ or not a foreign key constraint is needed
+
+ 0.07006 2007-04-17 23:18:00
+ - Lots of documentation updates
+ - deploy now takes an optional 'source_names' parameter (dec)
+ - Quoting fix for columns_info_for
+ - RT#25683 fixed (multiple open sths on DBD::Sybase)
+ - CDBI compat infers has_many from has_a (Schwern)
+ - Fix ddl_filename transformation (Carl Vincent)
+
+0.07999_02 2007-01-25 20:11:00
+ - add support for binding BYTEA and similar parameters (w/Pg impl)
+ - add support to Ordered for multiple ordering columns
+ - mark DB.pm and compose_connection as deprecated
+ - switch tests to compose_namespace
+ - ResultClass::HashRefInflator added
+ - Changed row and rs objects to not have direct handle to a source,
+ instead a (schema,source_name) tuple of type ResultSourceHandle
+
+ 0.07005 2007-01-10 18:36:00
+ - fixup changes file
+ - remove erroneous .orig files - oops
+
+ 0.07004 2007-01-09 21:52:00
+ - fix find_related-based queries to correctly grep the unique key
+ - fix InflateColumn to inflate/deflate all refs but scalar refs
+
+ 0.07003 2006-11-16 11:52:00
+ - fix for rt.cpan.org #22740 (use $^X instead of hardcoded "perl")
+ - Tweaks to resultset to allow inflate_result to return an array
+ - Fix UTF8Columns to work under Perl <= 5.8.0
+ - Fix up new_result in ResultSet to avoid alias-related bugs
+ - Made new/update/find handle 'single' rel accessor correctly
+ - Fix NoBindVars to be safer and handle non-true bind values
+ - Don't blow up if columns_info_for returns useless results
+ - Documentation updates
+
+0.07999_01 2006-10-05 21:00:00
+ - add connect_info option "disable_statement_caching"
+ - create insert_bulk using execute_array, populate uses it
+ - added DBIx::Class::Schema::load_namespaces, alternative to
+ load_classes
+ - added source_info method for source-level metadata (kinda like
+ column_info)
+ - Some of ::Storage::DBI's code/docs moved to ::Storage
+ - DBIx::Class::Schema::txn_do code moved to ::Storage
+ - Storage::DBI now uses exceptions instead of ->ping/->{Active} checks
+ - Storage exceptions are thrown via the schema class's throw_exception
+ - DBIx::Class::Schema::throw_exception's behavior can be modified via
+ ->exception_action
+ - columns_info_for is deprecated, and no longer runs automatically.
+ You can make it work like before via
+ __PACKAGE__->column_info_from_storage(1) for now
+ - Replaced DBIx::Class::AccessorGroup and Class::Data::Accessor with
+ Class::Accessor::Grouped. Only user noticeable change is to
+ table_class on ResultSourceProxy::Table (i.e. table objects in
+ schemas), and resultset_class and result_class in ResultSource.
+ These accessors no longer automatically require the classes when
+ set.
+
- 0.07004
- - fix find_related-based queries to correctly grep the unique key
-
- 0.07003 2006-11-16 11:52:00
- - fix for rt.cpan.org #22740 (use $^X instead of hardcoded "perl")
- - Tweaks to resultset to allow inflate_result to return an array
- - Fix UTF8Columns to work under Perl <= 5.8.0
- - Fix up new_result in ResultSet to avoid alias-related bugs
- - Made new/update/find handle 'single' rel accessor correctly
- - Fix NoBindVars to be safer and handle non-true bind values
- - Don't blow up if columns_info_for returns useless results
- - Documentation updates
-
0.07002 2006-09-14 21:17:32
- fix quote tests for recent versions of SQLite
- added reference implementation of Manual::Example
- backported column_info_from_storage accessor from -current, but
- defaults true instead of false in 0.07xxx
- fixed inflate_datetime.t tests/stringify under older Test::More
- minor fixes for many-to-many relationship helpers
- cleared up Relationship docs, and fixed some typos
- fixes to pass test suite on Windows
- rewrote and cleaned up SQL::Translator tests
- changed relationship helpers to only call ensure_class_loaded when the
- join condition is inferred
+ join condition is inferred
- rewrote many_to_many implementation, now provides helpers for adding
and deleting objects without dealing with the link table
- reworked InflateColumn implementation to lazily deflate where
- changed join merging to not create a rel_2 alias when adding a join
that already exists in a parent resultset
- Storage::DBI::deployment_statements now calls ensure_connected
- if it isn't passed a type
+ if it isn't passed a type
- fixed Componentized::ensure_class_loaded
- InflateColumn::DateTime supports date as well as datetime
- split Storage::DBI::MSSQL into MSSQL and Sybase::MSSQL
- - fixed wrong debugging hook call in Storage::DBI
- - set connect_info properly before setting any ->sql_maker things
+ - fixed wrong debugging hook call in Storage::DBI
+ - set connect_info properly before setting any ->sql_maker things
0.06999_02 2006-06-09 23:58:33
- Fixed up POD::Coverage tests, filled in some POD holes
\.tmp$
\.old$
\.bak$
+\..*?\.sw[po]$
\#$
\b\.#
# Skip maint stuff
^maint/
+ # Avoid copies to .orig
+ \.orig$
++
+# Don't use Module::Build anymore
+# Build.PL
use warnings;
use vars qw($VERSION);
-use base qw/DBIx::Class::Componentised Class::Data::Accessor/;
+use base qw/DBIx::Class::Componentised Class::Accessor::Grouped/;
+
+sub mk_classdata {
+ my $self = shift;
+ $self->mk_group_accessors('inherited', $_[0]);
+ $self->set_inherited(@_) if @_ > 1;
+}
-sub mk_classdata { shift->mk_classaccessor(@_); }
sub component_base_class { 'DBIx::Class' }
# Always remember to do all digits for the version even if they're 0
# i.e. first release of 0.XX *must* be 0.XX000. This avoids fBSD ports
# brain damage and presumably various other packaging systems too
-$VERSION = '0.07006';
+$VERSION = '0.07999_02';
sub MODIFY_CODE_ATTRIBUTES {
my ($class,$code,@attrs) = @_;
ank: Andres Kievsky
+ash: Ash Berlin <ash@cpan.org>
+
blblack: Brandon L. Black <blblack@gmail.com>
bluefeet: Aran Deltac <bluefeet@cpan.org>
clkao: CL Kao
+da5id: David Jack Olrik <djo@cpan.org>
+
dkubb: Dan Kubb <dan.kubb-cpan@onautopilot.com>
+dnm: Justin Wheeler <jwheeler@datademons.com>
+
draven: Marcus Ramberg <mramberg@cpan.org>
dwc: Daniel Westermann-Clark <danieltwc@cpan.org>
jguenther: Justin Guenther <jguenther@cpan.org>
+jshirley: J. Shirley <jshirley@gmail.com>
+
konobi: Scott McWhirter
LTJake: Brian Cassidy <bricas@cpan.org>
+ned: Neil de Carteret
+
nigel: Nigel Metheringham <nigelm@cpan.org>
ningu: David Kamholz <dkamholz@cpan.org>
typester: Daisuke Murase <typester@cpan.org>
+victori: Victor Igumnov <victori@cpan.org>
+
wdh: Will Hawes
willert: Sebastian Willert <willert@cpan.org>
use Class::C3;
use Class::Inspector;
-use Carp::Clan qw/DBIx::Class/;
+use Carp::Clan qw/^DBIx::Class/;
sub inject_base {
my ($class, $target, @to_inject) = @_;
# it on the basis of the comments in Class::C3, the author was on #dbix-class
# while I was implementing this.
- my $table = { Class::C3::_dump_MRO_table };
- eval "package $target; import Class::C3;" unless exists $table->{$target};
+ eval "package $target; import Class::C3;" unless exists $Class::C3::MRO{$target};
}
sub load_components {
);
# Equivalent SQL:
- # SELECT name name, LENGTH( name ) name_length
+ # SELECT name name, LENGTH( name )
# FROM artist
- If your alias exists as a column in your base class (i.e. it was added
- with C<add_columns>), you just access it as normal. Our C<Artist>
- class has a C<name> column, so we just use the C<name> accessor:
+ Note that the C< as > attribute has absolutely nothing to do with the
+ SQL syntax C< SELECT foo AS bar > (see the documentation in
+ L<DBIx::Class::ResultSet/ATTRIBUTES>). If your alias exists as a
+ column in your base class (i.e. it was added with C<add_columns>), you
+ just access it as normal. Our C<Artist> class has a C<name> column, so
+ we just use the C<name> accessor:
my $artist = $rs->first();
my $name = $artist->name();
select => [
{ distinct => [ $source->columns ] }
],
- as => [ $source->columns ]
+ as => [ $source->columns ] # remember 'as' is not the same as SQL AS :-)
}
);
# LEFT JOIN cd cds ON ( cds.artist = me.artistid )
# GROUP BY name
+ Please see L<DBIx::Class::ResultSet/ATTRIBUTES> documentation if you
+ are in any way unsure about the use of the attributes above (C< join
+ >, C< select >, C< as > and C< group_by >).
+
=head3 Predefined searches
You can write your own L<DBIx::Class::ResultSet> class by inheriting from it
{},
{
select => [ { sum => 'Cost' } ],
- as => [ 'total_cost' ],
+ as => [ 'total_cost' ], # remember this 'as' is for DBIx::Class::ResultSet not SQL
}
);
my $tc = $rs->first->get_column('total_cost');
=head2 Transactions
As of version 0.04001, there is improved transaction support in
-L<DBIx::Class::Storage::DBI> and L<DBIx::Class::Schema>. Here is an
+L<DBIx::Class::Storage> and L<DBIx::Class::Schema>. Here is an
example of the recommended way to use it:
my $genus = $schema->resultset('Genus')->find(12);
=head2 Many-to-many relationships
- This is straightforward using L<DBIx::Class::Relationship::ManyToMany>:
+ This is straightforward using L<ManyToMany|DBIx::Class::Relationship/many_to_many>:
package My::DB;
# ... set up connection ...
$attrs->{foo} = 'bar' unless defined $attrs->{foo};
- $class->next::method($attrs);
+ my $new = $class->next::method($attrs);
+
+ return $new;
}
For more information about C<next::method>, look in the L<Class::C3>
To make an object stringify itself as a single column, use something
like this (replace C<foo> with the column/method of your choice):
- use overload '""' => 'foo', fallback => 1;
+ use overload '""' => sub { shift->name }, fallback => 1;
For more complex stringification, you can use an anonymous subroutine:
=head2 Profiling
-When you enable L<DBIx::Class::Storage::DBI>'s debugging it prints the SQL
+When you enable L<DBIx::Class::Storage>'s debugging it prints the SQL
executed as well as notifications of query completion and transaction
begin/commit. If you'd like to profile the SQL you can subclass the
L<DBIx::Class::Storage::Statistics> class and write your own profiling
DBIx::Class is not built for speed, it's built for convenience and
ease of use, but sometimes you just need to get the data, and skip the
-fancy objects. Luckily this is also fairly easy using
-C<inflate_result>:
-
- # Define a class which just returns the results as a hashref:
- package My::HashRefInflator;
-
- ## $me is the hashref of cols/data from the immediate resultsource
- ## $prefetch is a deep hashref of all the data from the prefetched
- ## related sources.
-
- sub mk_hash {
- my ($me, $rest) = @_;
-
- return { %$me,
- map { ($_ => mk_hash(@{$rest->{$_}})) } keys %$rest
- };
- }
-
- sub inflate_result {
- my ($self, $source, $me, $prefetch) = @_;
- return mk_hash($me, $prefetch);
- }
-
- # Change the object inflation to a hashref for just this resultset:
- $rs->result_class('My::HashRefInflator');
-
- my $datahashref = $rs->next;
- foreach my $col (keys %$datahashref) {
- if(!ref($datahashref->{$col})) {
- # It's a plain value
- }
- elsif(ref($datahashref->{$col} eq 'HASH')) {
- # It's a related value in a hashref
- }
- }
-
+fancy objects.
+
+To do this simply use L<DBIx::Class::ResultClass::HashRefInflator>.
+
+ my $rs = $schema->resultset('CD');
+
+ $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
+
+ my $hash_ref = $rs->find(1);
+
+Wasn't that easy?
+
=head2 Want to know if find_or_create found or created a row?
Just use C<find_or_new> instead, then check C<in_storage>:
# do whatever else you wanted if it was a new row
}
+ =head3 Wrapping/overloading a column accessor
+
+ Problem: Say you have a table "Camera" and want to associate a description
+ with each camera. For most cameras, you'll be able to generate the description from
+ the other columns. However, in a few special cases you may want to associate a
+ custom description with a camera.
+
+ Solution:
+
+ In your database schema, define a description field in the "Camera" table that
+ can contain text and null values.
+
+ In DBIC, we'll overload the column accessor to provide a sane default if no
+ custom description is defined. The accessor will either return or generate the
+ description, depending on whether the field is null or not.
+
+ First, in your "Camera" schema class, define the description field as follows:
+
+ __PACKAGE__->add_columns(description => { accessor => '_description' });
+
+ Next, we'll define the accessor-wrapper subroutine:
+
+ sub description {
+ my $self = shift;
+
+ # If there is an update to the column, we'll let the original accessor
+ # deal with it.
+ return $self->_description(@_) if @_;
+
+ # Fetch the column value.
+ my $description = $self->_description;
+
+ # If there's something in the description field, then just return that.
+ return $description if defined $description && length $description;
+
+ # Otherwise, generate a description.
+ return $self->generate_description;
+ }
+
=cut
L<DBIx::Class::ResultSetColumn>, see its documentation and the
L<Cookbook|DBIx::Class::Manual::Cookbook> for details.
+ =item .. fetch a formatted column?
+
+ In your table schema class, create a "private" column accessor with:
+
+ __PACKAGE__->add_columns(my_column => { accessor => '_hidden_my_column' });
+
+ Then, in the same class, implement a subroutine called "my_column" that
+ fetches the real value and does the formatting you want.
+
+ See the Cookbook for more details.
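A sketch of what that wrapper subroutine can look like, modelled on the accessor-wrapping recipe in the Cookbook (the column name `my_column` and the `sprintf` formatting here are placeholders, not part of any DBIx::Class API):

```perl
# In the table schema class: reads and writes of the raw value go
# through the '_hidden_my_column' accessor declared via add_columns.
sub my_column {
    my $self = shift;

    # Pass writes straight through to the real (hidden) accessor.
    return $self->_hidden_my_column(@_) if @_;

    # Format the stored value on the way out, e.g. as a 2-decimal number.
    return sprintf '%.2f', $self->_hidden_my_column;
}
```

Because the wrapper carries the column's public name, callers simply use `$row->my_column` as if it were a plain column accessor.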
+
+ =item .. fetch a single (or topmost) row?
+
+ Sometimes you may only want a single record back from a search. A quick
+ way to get that single row is to first run your search as usual:
+
+ ->search(undef, { order_by => "id DESC" })
+
+ Then call L<DBIx::Class::ResultSet/slice> and ask it only to return 1 row:
+
+ ->slice(0)
+
+ These two calls can be combined into a single statement:
+
+ ->search(undef, { order_by => "id DESC" })->slice(0)
+
+ Why slice instead of L<DBIx::Class::ResultSet/first> or L<DBIx::Class::ResultSet/single>?
+ If supported by the database, slice will use LIMIT/OFFSET to hint to the database that we
+ really only need one row. This can result in a significant speed improvement.
+
=back
=head2 Inserting and updating data
->update({ somecolumn => \'othercolumn' })
+ =item .. store JSON/YAML in a column and have it deflate/inflate automatically?
+
+ You can use L<DBIx::Class::InflateColumn> to accomplish YAML/JSON storage transparently.
+
+ If you want to use JSON, then in your table schema class, do the following:
+
+ use JSON;
+
+ __PACKAGE__->add_columns(qw/ ... my_column ... /);
+ __PACKAGE__->inflate_column('my_column', {
+ inflate => sub { jsonToObj(shift) },
+ deflate => sub { objToJson(shift) },
+ });
+
+ For YAML, in your table schema class, do the following:
+
+ use YAML;
+
+ __PACKAGE__->add_columns(qw/ ... my_column ... /);
+ __PACKAGE__->inflate_column('my_column', {
+ inflate => sub { YAML::Load(shift) },
+ deflate => sub { YAML::Dump(shift) },
+ });
+
+ This technique is an easy way to store supplemental unstructured data in a table. Be
+ careful not to overuse this capability, however. If you find yourself depending more
+ and more on some data within the inflated column, then it may be time to factor that
+ data out.
+
=back
=head2 Misc
=item See the SQL statements my code is producing?
-Turn on debugging! See L<DBIx::Class::Storage::DBI> for details of how
+Turn on debugging! See L<DBIx::Class::Storage> for details of how
to turn on debugging in the environment, pass your own filehandle to
save debug to, or create your own callback.
name TEXT NOT NULL,
position INTEGER NOT NULL
);
- # Optional: group_id INTEGER NOT NULL
+
+Optionally, add one or more columns to specify groupings, allowing you
+to maintain independent ordered lists within one table:
+
+ CREATE TABLE items (
+ item_id INTEGER PRIMARY KEY AUTOINCREMENT,
+ name TEXT NOT NULL,
+ position INTEGER NOT NULL,
+ group_id INTEGER NOT NULL
+ );
+
+Or even
+
+ CREATE TABLE items (
+ item_id INTEGER PRIMARY KEY AUTOINCREMENT,
+ name TEXT NOT NULL,
+ position INTEGER NOT NULL,
+ group_id INTEGER NOT NULL,
+ other_group_id INTEGER NOT NULL
+ );
- In your Schema or DB class add Ordered to the top
+ In your Schema or DB class add "Ordered" to the top
of the component list.
__PACKAGE__->load_components(qw( Ordered ... ));
package My::Item;
__PACKAGE__->position_column('position');
- __PACKAGE__->grouping_column('group_id'); # optional
+
+If you are using one grouping column, specify it as follows:
+
+ __PACKAGE__->grouping_column('group_id');
+
+Or if you have multiple grouping columns:
+
+ __PACKAGE__->grouping_column(['group_id', 'other_group_id']);
- Thats it, now you can change the position of your objects.
+ That's it, now you can change the position of your objects.
#!/usr/bin/perl
use My::Item;
$item->move_first();
$item->move_last();
$item->move_to( $position );
+ $item->move_to_group( 'groupname' );
+ $item->move_to_group( 'groupname', $position );
+ $item->move_to_group( {group_id=>'groupname', other_group_id=>'othergroupname'} );
+ $item->move_to_group( {group_id=>'groupname', other_group_id=>'othergroupname'}, $position );
=head1 DESCRIPTION
__PACKAGE__->position_column('position');
Sets and retrieves the name of the column that stores the
- positional value of each record. Default to "position".
+ positional value of each record. Defaults to "position".
=cut
__PACKAGE__->grouping_column('group_id');
- This method specified a column to limit all queries in
+ This method specifies a column to limit all queries in
this module by. This effectively allows you to have multiple
ordered lists within the same table.
my $rs = $item->siblings();
my @siblings = $item->siblings();
- Returns either a result set or an array of all other objects
+ Returns either a resultset or an array of all other objects
excluding the one you called it on.
=cut
my $sibling = $item->first_sibling();
Returns the first sibling object, or 0 if the first sibling
- is this sibliing.
+ is this sibling.
=cut
sub first_sibling {
my( $self ) = @_;
return 0 if ($self->get_column($self->position_column())==1);
+
return ($self->result_source->resultset->search(
{
$self->position_column => 1,
my $sibling = $item->last_sibling();
- Return the last sibling, or 0 if the last sibling is this
+ Returns the last sibling, or 0 if the last sibling is this
sibling.
=cut
my $sibling = $item->previous_sibling();
- Returns the sibling that resides one position back. Undef
- is returned if the current object is the first one.
+ Returns the sibling that resides one position back. Returns undef
+ if the current object is the first one.
=cut
my $sibling = $item->next_sibling();
- Returns the sibling that resides one position foward. Undef
- is returned if the current object is the last one.
+ Returns the sibling that resides one position forward. Returns undef
+ if the current object is the last one.
=cut
$item->move_previous();
- Swaps position with the sibling on position previous in the list.
- 1 is returned on success, and 0 is returned if the objects is already
- the first one.
+ Swaps position with the sibling in the previous position in
+ the list. Returns 1 on success, and 0 if the object is
+ already the first one.
=cut
$item->move_next();
- Swaps position with the sibling in the next position. 1 is returned on
- success, and 0 is returned if the object is already the last in the list.
+ Swaps position with the sibling in the next position in the
+ list. Returns 1 on success, and 0 if the object is already
+ the last in the list.
=cut
$item->move_first();
- Moves the object to the first position. 1 is returned on
- success, and 0 is returned if the object is already the first.
+ Moves the object to the first position in the list. Returns 1
+ on success, and 0 if the object is already the first.
=cut
$item->move_last();
- Moves the object to the very last position. 1 is returned on
- success, and 0 is returned if the object is already the last one.
+ Moves the object to the last position in the list. Returns 1
+ on success, and 0 if the object is already the last one.
=cut
$item->move_to( $position );
- Moves the object to the specified position. 1 is returned on
- success, and 0 is returned if the object is already at the
- specified position.
+ Moves the object to the specified position. Returns 1 on
+ success, and 0 if the object is already at the specified
+ position.
=cut
$self->_grouping_clause(),
});
my $op = ($from_position>$to_position) ? '+' : '-';
- $rs->update({ $position_column => \"$position_column $op 1" });
+ $rs->update({ $position_column => \"$position_column $op 1" }); #" Sorry, GEdit bug
+ $self->{_ORDERED_INTERNAL_UPDATE} = 1;
$self->update({ $position_column => $to_position });
return 1;
}
+
+
+=head2 move_to_group
+
+ $item->move_to_group( $group, $position );
+
+Moves the object to the specified position of the specified
+group, or to the end of the group if $position is undef.
+1 is returned on success, and 0 is returned if the object is
+already at the specified position of the specified group.
+
+$group may be specified as a single scalar if only one
+grouping column is in use, or as a hashref of column => value pairs
+if multiple grouping columns are in use.
+
+=cut
+
+sub move_to_group {
+ my( $self, $to_group, $to_position ) = @_;
+
+ # if we're given a string, turn it into a hashref
+ unless (ref $to_group eq 'HASH') {
+ $to_group = {($self->_grouping_columns)[0] => $to_group};
+ }
+
+ my $position_column = $self->position_column;
+ #my @grouping_columns = $self->_grouping_columns;
+
+ return 0 if ( ! defined($to_group) );
+ return 0 if ( defined($to_position) and $to_position < 1 );
+ return 0 if ( $self->_is_in_group($to_group)
+ and ((not defined($to_position))
+ or (defined($to_position) and $self->$position_column==$to_position)
+ )
+ );
+
+ # Move to end of current group and adjust siblings
+ $self->move_last;
+
+ $self->set_columns($to_group);
+ my $new_group_count = $self->result_source->resultset->search({$self->_grouping_clause()})->count();
+ if (!defined($to_position) or $to_position > $new_group_count) {
+ $self->{_ORDERED_INTERNAL_UPDATE} = 1;
+ $self->update({ $position_column => $new_group_count + 1 });
+ }
+ else {
+ my @between = ($to_position, $new_group_count);
+
+ my $rs = $self->result_source->resultset->search({
+ $position_column => { -between => [ @between ] },
+ $self->_grouping_clause(),
+ });
+ $rs->update({ $position_column => \"$position_column + 1" }); #"
+ $self->{_ORDERED_INTERNAL_UPDATE} = 1;
+ $self->update({ $position_column => $to_position });
+ }
+
+ return 1;
+}
+
=head2 insert
Overrides the DBIC insert() method by providing a default
return $self->next::method( @_ );
}
+=head2 update
+
+Overrides the DBIC update() method by checking for a change
+to the position and/or group columns. Movement within a
+group or to another group is handled by repositioning
+the appropriate siblings. Position defaults to the end
+of a new group if it has been changed to undef.
+
+=cut
+
+sub update {
+ my $self = shift;
+
+ if ($self->{_ORDERED_INTERNAL_UPDATE}) {
+ delete $self->{_ORDERED_INTERNAL_UPDATE};
+ return $self->next::method( @_ );
+ }
+
+ $self->set_columns($_[0]) if @_ > 0;
+ my %changes = $self->get_dirty_columns;
+ $self->discard_changes;
+
+ my $pos_col = $self->position_column;
+
+ # if any of our grouping columns have been changed
+ if (grep {$_} map {exists $changes{$_}} $self->_grouping_columns ) {
+
+ # create new_group by taking the current group and inserting changes
+ my $new_group = {$self->_grouping_clause};
+ foreach my $col (keys %$new_group) {
+ if (exists $changes{$col}) {
+ $new_group->{$col} = $changes{$col};
+ delete $changes{$col}; # don't want to pass this on to next::method
+ }
+ }
+
+ $self->move_to_group(
+ $new_group,
+ exists($changes{$pos_col}) ? delete($changes{$pos_col}) : $self->$pos_col
+ );
+ }
+ elsif (exists $changes{$pos_col}) {
+ $self->move_to(delete $changes{$pos_col});
+ }
+ return $self->next::method( \%changes );
+}
+
=head2 delete
Overrides the DBIC delete() method by first moving the object
=head2 _grouping_clause
-This method returns a name=>value pair for limiting a search
-by the collection column. If the collection column is not
+This method returns one or more name=>value pairs for limiting a search
+by the grouping column(s). If the grouping column is not
defined then this will return an empty list.
=cut
-
sub _grouping_clause {
my( $self ) = @_;
+ return map { $_ => $self->get_column($_) } $self->_grouping_columns();
+}
+
+
+
+=head2 _grouping_columns
+
+Returns a list of the column names used for grouping, regardless of whether
+they were specified as an arrayref or a single string, and returns ()
+if there is no grouping.
+
+=cut
+sub _grouping_columns {
+ my( $self ) = @_;
my $col = $self->grouping_column();
- if ($col) {
- return ( $col => $self->get_column($col) );
+ if (ref $col eq 'ARRAY') {
+ return @$col;
+ } elsif ($col) {
+ return ( $col );
+ } else {
+ return ();
}
- return ();
}
+
+
+=head2 _is_in_group($other)
+
+ $item->_is_in_group( {user => 'fred', list => 'work'} )
+
+Returns true if the object is in the group represented by hashref $other.
+
+=cut
+sub _is_in_group {
+ my ($self, $other) = @_;
+ my $current = {$self->_grouping_clause};
+ return 0 unless (ref $other eq 'HASH') and (keys %$current == keys %$other);
+ for my $key (keys %$current) {
+ return 0 unless exists $other->{$key};
+ return 0 if $current->{$key} ne $other->{$key};
+ }
+ return 1;
+}
+
+
1;
__END__
If a position is not specified for an insert then a position
will be chosen based on COUNT(*)+1. But, it first selects the
- count then inserts the record. The space of time between select
+ count, and then inserts the record. The space of time between select
and insert introduces a race condition. To fix this we need the
ability to lock tables in DBIC. I've added an entry in the TODO
about this.
use Data::Page;
use Storable;
use DBIx::Class::ResultSetColumn;
+use DBIx::Class::ResultSourceHandle;
use base qw/DBIx::Class/;
-__PACKAGE__->load_components(qw/AccessorGroup/);
-__PACKAGE__->mk_group_accessors('simple' => qw/result_source result_class/);
+__PACKAGE__->mk_group_accessors('simple' => qw/result_class _source_handle/);
=head1 NAME
return $class->new_result(@_) if ref $class;
my ($source, $attrs) = @_;
- #weaken $source;
+ $source = $source->handle
+ unless $source->isa('DBIx::Class::ResultSourceHandle');
$attrs = { %{$attrs||{}} };
if ($attrs->{page}) {
$attrs->{alias} ||= 'me';
my $self = {
- result_source => $source,
- result_class => $attrs->{result_class} || $source->result_class,
+ _source_handle => $source,
+ result_class => $attrs->{result_class} || $source->resolve->result_class,
cond => $attrs->{where},
count => undef,
pager => undef,
columns => [qw/name artistid/],
});
- For a list of attributes that can be passed to C<search>, see L</ATTRIBUTES>. For more examples of using this function, see L<Searching|DBIx::Class::Manual::Cookbook/Searching>.
+ For a list of attributes that can be passed to C<search>, see
+ L</ATTRIBUTES>. For more examples of using this function, see
+ L<Searching|DBIx::Class::Manual::Cookbook/Searching>. For complete
+ documentation of the first argument, see L<SQL::Abstract>.
=cut
: $having);
}
- my $rs = (ref $self)->new($self->result_source, $new_attrs);
+ my $rs = (ref $self)->new($self->_source_handle, $new_attrs);
if ($rows) {
$rs->set_cache($rows);
}
sub _construct_object {
my ($self, @row) = @_;
my $info = $self->_collapse_result($self->{_attrs}{as}, \@row);
- my @new = $self->result_class->inflate_result($self->result_source, @$info);
+ my @new = $self->result_class->inflate_result($self->_source_handle, @$info);
@new = $self->{_attrs}{record_filter}->(@new)
if exists $self->{_attrs}{record_filter};
return @new;
# offset, order by and page are not needed to count. record_filter is cdbi
delete $attrs->{$_} for qw/rows offset order_by page pager record_filter/;
- my $tmp_rs = (ref $self)->new($self->result_source, $attrs);
+ my $tmp_rs = (ref $self)->new($self->_source_handle, $attrs);
my ($count) = $tmp_rs->cursor->next;
return $count;
}
unless ref $values eq 'HASH';
my $cond = $self->_cond_for_update_delete;
-
+
return $self->result_source->storage->update(
- $self->result_source->from, $values, $cond
+ $self->result_source, $values, $cond
);
}
my $cond = $self->_cond_for_update_delete;
- $self->result_source->storage->delete($self->result_source->from, $cond);
+ $self->result_source->storage->delete($self->result_source, $cond);
return 1;
}
sub page {
my ($self, $page) = @_;
- return (ref $self)->new($self->result_source, { %{$self->{attrs}}, page => $page });
+ return (ref $self)->new($self->_source_handle, { %{$self->{attrs}}, page => $page });
}
=head2 new_result
my %new = (
%{ $self->_remove_alias($values, $alias) },
%{ $self->_remove_alias($collapsed_cond, $alias) },
- -result_source => $self->result_source,
);
- my $obj = $self->result_class->new(\%new);
- return $obj;
+ return $self->result_class->new(\%new,$self->_source_handle);
}
# _collapse_cond
my $rel_obj = $self->result_source->relationship_info($rel);
$self->throw_exception(
- "search_related: result source '" . $self->result_source->name .
+ "search_related: result source '" . $self->_source_handle->source_moniker .
"' has no such relationship $rel")
unless $rel_obj;
my $join_count = $seen->{$rel};
my $alias = ($join_count > 1 ? join('_', $rel, $join_count) : $rel);
- $self->result_source->schema->resultset($rel_obj->{class})->search_rs(
+ $self->_source_handle->schema->resultset($rel_obj->{class})->search_rs(
undef, {
%{$self->{attrs}||{}},
join => undef,
return $self->{_attrs} if $self->{_attrs};
my $attrs = { %{$self->{attrs}||{}} };
- my $source = $self->{result_source};
+ my $source = $self->result_source;
my $alias = $attrs->{alias};
$attrs->{columns} ||= delete $attrs->{cols} if exists $attrs->{cols};
}
}
+sub result_source {
+ my $self = shift;
+
+ if (@_) {
+ $self->_source_handle($_[0]->handle);
+ } else {
+ $self->_source_handle->resolve;
+ }
+}
+
=head2 throw_exception
See L<DBIx::Class::Schema/throw_exception> for details.
sub throw_exception {
my $self=shift;
- $self->result_source->schema->throw_exception(@_);
+ $self->_source_handle->schema->throw_exception(@_);
}
# XXX: FIXME: Attributes docs need clearing up
through directly to SQL, so you can give e.g. C<year DESC> for a
descending order on the column `year'.
-Please note that if you have quoting enabled (see
-L<DBIx::Class::Storage/quote_char>) you will need to do C<\'year DESC' > to
+Please note that if you have C<quote_char> enabled (see
+L<DBIx::Class::Storage::DBI/connect_info>) you will need to do C<\'year DESC' > to
specify an order. (The scalar ref causes it to be passed as raw sql to the DB,
so you will need to manually quote things as appropriate.)
});
would return all CDs and include a 'name' column to the information
- passed to object inflation
+ passed to object inflation. Note that 'artist' here is the name of the
+ column (or relationship) accessor, and 'name' is the name of the column
+ accessor in the related table.
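As a sketch (assuming the search above was stored in C<$rs>), the joined
artist name would then be read back via C<get_column> rather than a normal
CD accessor:

```perl
# hypothetical usage of the joined 'name' column from the search above
while (my $cd = $rs->next) {
    # 'name' is the included column alias, not a column of the CD class
    print $cd->get_column('name'), "\n";
}
```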
=head2 select
=back
- Indicates column names for object inflation. This is used in conjunction with
- C<select>, usually when C<select> contains one or more function or stored
+ Indicates column names for object inflation. That is, C<as> indicates
+ the name that the column can be accessed as via the C<get_column>
+ method (or via the object accessor, B<if one already exists>). It has
+ nothing to do with the SQL code C<SELECT foo AS bar>.
+
+ The C<as> attribute is used in conjunction with C<select>,
+ usually when C<select> contains one or more function or stored
procedure names:
$rs = $schema->resultset('Employee')->search(undef, {
use warnings;
use DBIx::Class::ResultSet;
+use DBIx::Class::ResultSourceHandle;
use Carp::Clan qw/^DBIx::Class/;
use Storable;
use base qw/DBIx::Class/;
__PACKAGE__->mk_group_accessors('simple' => qw/_ordered_columns
_columns _primaries _unique_constraints name resultset_attributes
- schema from _relationships column_info_from_storage source_name/);
+ schema from _relationships column_info_from_storage source_info/);
-__PACKAGE__->mk_group_accessors('component_class' => qw/resultset_class
+__PACKAGE__->mk_group_accessors('inherited' => qw/resultset_class
result_class/);
+__PACKAGE__->mk_group_ro_accessors('simple' => qw/source_name/);
+
=head1 NAME
DBIx::Class::ResultSource - Result source object
my ($class, $attrs) = @_;
$class = ref $class if ref $class;
- my $new = { %{$attrs || {}}, _resultset => undef };
- bless $new, $class;
-
+ my $new = bless { %{$attrs || {}} }, $class;
$new->{resultset_class} ||= 'DBIx::Class::ResultSet';
$new->{resultset_attributes} = { %{$new->{resultset_attributes} || {}} };
$new->{_ordered_columns} = [ @{$new->{_ordered_columns}||[]}];
$new->{_relationships} = { %{$new->{_relationships}||{}} };
$new->{name} ||= "!!NAME NOT SET!!";
$new->{_columns_info_loaded} ||= 0;
- if(!defined $new->column_info_from_storage) {
- $new->{column_info_from_storage} = 1
- }
return $new;
}
=pod
+=head2 source_info
+
+Stores a hashref of per-source metadata. No specific key names
+have yet been standardized; the examples below are purely hypothetical
+and don't actually accomplish anything on their own:
+
+ __PACKAGE__->source_info({
+ "_tablespace" => 'fast_disk_array_3',
+ "_engine" => 'InnoDB',
+ });
+
=head2 add_columns
$table->add_columns(qw/col1 col2 col3/);
=head2 column_info_from_storage
-Enables or disables the on-demand automatic loading of the above
-column metadata from storage as neccesary. Defaults to true in the
-current release, but will default to false in future releases starting
-with 0.08000. This is *deprecated*, and should not be used. It will
-be removed before 1.0.
+Enables the on-demand automatic loading of the above column
+metadata from storage as necessary. This is *deprecated*, and
+should not be used. It will be removed before 1.0.
- __PACKAGE__->column_info_from_storage(0);
__PACKAGE__->column_info_from_storage(1);
=head2 columns
$source->resultset_attributes({ order_by => [ 'id' ] });
- Specify here any attributes you wish to pass to your specialised resultset.
+ Specify here any attributes you wish to pass to your specialised
+ resultset. For a full list of these, please see
+ L<DBIx::Class::ResultSet/ATTRIBUTES>.
=cut
'call it on the schema instead.'
) if scalar @_;
- # disabled until we can figure out a way to do it without consistency issues
- #
- #return $self->{_resultset}
- # if ref $self->{_resultset} eq $self->resultset_class;
- #return $self->{_resultset} =
-
return $self->resultset_class->new(
$self, $self->{resultset_attributes}
);
# from your schema...
$schema->resultset('Books')->find(1);
+=head2 handle
+
+Obtain a new handle to this source. Returns an instance of a
+L<DBIx::Class::ResultSourceHandle>.
+
+=cut
+
+sub handle {
+ return DBIx::Class::ResultSourceHandle->new({
+ schema => $_[0]->schema,
+ source_moniker => $_[0]->source_name
+ });
+}
+
=head2 throw_exception
See L<DBIx::Class::Schema/"throw_exception">.
use base qw/DBIx::Class/;
use Carp::Clan qw/^DBIx::Class/;
-__PACKAGE__->load_components(qw/AccessorGroup/);
-
-__PACKAGE__->mk_group_accessors('simple' => 'result_source');
+__PACKAGE__->mk_group_accessors('simple' => qw/_source_handle/);
=head1 NAME
=cut
sub new {
- my ($class, $attrs) = @_;
+ my ($class, $attrs, $source) = @_;
$class = ref $class if ref $class;
my $new = { _column_data => {} };
bless $new, $class;
+ $new->_source_handle($source) if $source;
+
if ($attrs) {
$new->throw_exception("attrs must be a hashref")
unless ref($attrs) eq 'HASH';
+
+ my ($related,$inflated);
+ foreach my $key (keys %$attrs) {
+ if (ref $attrs->{$key}) {
+ my $info = $class->relationship_info($key);
+ if ($info && $info->{attrs}{accessor}
+ && $info->{attrs}{accessor} eq 'single')
+ {
+ $new->set_from_related($key, $attrs->{$key});
+ $related->{$key} = $attrs->{$key};
+ next;
+ }
+ elsif ($class->has_column($key)
+ && exists $class->column_info($key)->{_inflate_info})
+ {
+ $inflated->{$key} = $attrs->{$key};
+ next;
+ }
+ }
+ $new->throw_exception("No such column $key on $class")
+ unless $class->has_column($key);
+ $new->store_column($key => $attrs->{$key});
+ }
if (my $source = delete $attrs->{-result_source}) {
$new->result_source($source);
}
- foreach my $k (keys %$attrs) {
- $new->throw_exception("No such column $k on $class")
- unless $class->has_column($k);
- $new->store_column($k => $attrs->{$k});
- }
+
+ $new->{_relationship_data} = $related if $related;
+ $new->{_inflated_column} = $inflated if $inflated;
}
return $new;
sub insert {
my ($self) = @_;
return $self if $self->in_storage;
- $self->{result_source} ||= $self->result_source_instance
+ my $source = $self->result_source;
+ $source ||= $self->result_source($self->result_source_instance)
if $self->can('result_source_instance');
- my $source = $self->{result_source};
$self->throw_exception("No result_source set on this object; can't insert")
unless $source;
- #use Data::Dumper; warn Dumper($self);
- $source->storage->insert($source->from, { $self->get_columns });
+
+ $source->storage->insert($source, { $self->get_columns });
$self->in_storage(1);
$self->{_dirty_columns} = {};
$self->{related_resultsets} = {};
=head2 update
- $obj->update;
+ $obj->update \%columns?;
Must be run on an object that is already in the database; issues an SQL
UPDATE query to commit any changes to the object to the database if
required.
+ Also takes an optional hashref of C<< column_name => value >> pairs to
+ update first. But be aware that this hashref might be edited in place, so
+ don't rely on it being the same after a call to C<update>.
+
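A minimal sketch of the hashref form (column names are illustrative):

```perl
# hypothetical CD row object with 'title' and 'year' columns
my %changes = ( title => 'Greatest Hits', year => 2007 );
$cd->update(\%changes);
# %changes may have been modified in place by update(), so do not
# reuse it expecting the original contents
```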
=cut
sub update {
my $ident_cond = $self->ident_condition;
$self->throw_exception("Cannot safely update a row in a PK-less table")
if ! keys %$ident_cond;
- $self->set_columns($upd) if $upd;
+
+ if ($upd) {
+ foreach my $key (keys %$upd) {
+ if (ref $upd->{$key}) {
+ my $info = $self->relationship_info($key);
+ if ($info && $info->{attrs}{accessor}
+ && $info->{attrs}{accessor} eq 'single')
+ {
+ my $rel = delete $upd->{$key};
+ $self->set_from_related($key => $rel);
+ $self->{_relationship_data}{$key} = $rel;
+ }
+ elsif ($self->has_column($key)
+ && exists $self->column_info($key)->{_inflate_info})
+ {
+ $self->set_inflated_column($key, delete $upd->{$key});
+ }
+ }
+ }
+ $self->set_columns($upd);
+ }
my %to_update = $self->get_dirty_columns;
return $self unless keys %to_update;
my $rows = $self->result_source->storage->update(
- $self->result_source->from, \%to_update, $self->{_orig_ident} || $ident_cond);
+ $self->result_source, \%to_update,
+ $self->{_orig_ident} || $ident_cond
+ );
if ($rows == 0) {
$self->throw_exception( "Can't update ${self}: row not found" );
} elsif ($rows > 1) {
$obj->delete
Deletes the object from the database. The object is still perfectly
-usable, but C<-E<gt>in_storage()> will now return 0 and the object must
-reinserted using C<-E<gt>insert()> before C<-E(<gt>update()> can be used
+usable, but C<< ->in_storage() >> will now return 0 and the object must
+be reinserted using C<< ->insert() >> before C<< ->update() >> can be used
on it. If you delete an object in a class with a C<has_many>
relationship, all the related objects will be deleted as well. To turn
this behavior off, pass C<< cascade_delete => 0 >> in the C<$attr>
unless exists $self->{_column_data}{$column};
}
$self->result_source->storage->delete(
- $self->result_source->from, $ident_cond);
+ $self->result_source, $ident_cond);
$self->in_storage(undef);
} else {
$self->throw_exception("Can't do class delete without a ResultSource instance")
my $val = $obj->get_column($col);
-Gets a column value from a row object. Currently, does not do
-any queries; the column must have already been fetched from
-the database and stored in the object.
+Gets a column value from a row object. Does not do any queries; the column
+must have already been fetched from the database and stored in the object. If
+there is an inflated value stored that has not yet been deflated, it is deflated
+when the method is invoked.
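For example, a hypothetical sketch assuming a C<created> column set up for
inflation via L<DBIx::Class::InflateColumn::DateTime>:

```perl
# storing an inflated DateTime object in the column
$cd->created(DateTime->now);

# get_column deflates the stored object on demand, returning
# the raw (database-formatted) value
my $raw = $cd->get_column('created');
```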
=cut
my ($self, $column) = @_;
$self->throw_exception( "Can't fetch data as class method" ) unless ref $self;
return $self->{_column_data}{$column} if exists $self->{_column_data}{$column};
+ if (exists $self->{_inflated_column}{$column}) {
+ return $self->store_column($column,
+ $self->_deflated_column($column, $self->{_inflated_column}{$column}));
+ }
$self->throw_exception( "No such column '${column}'" ) unless $self->has_column($column);
return undef;
}
sub has_column_loaded {
my ($self, $column) = @_;
$self->throw_exception( "Can't call has_column data as class method" ) unless ref $self;
+ return 1 if exists $self->{_inflated_column}{$column};
return exists $self->{_column_data}{$column};
}
sub get_columns {
my $self = shift;
+ if (exists $self->{_inflated_column}) {
+ foreach my $col (keys %{$self->{_inflated_column}}) {
+ $self->store_column($col, $self->_deflated_column($col, $self->{_inflated_column}{$col}))
+ unless exists $self->{_column_data}{$col};
+ }
+ }
return %{$self->{_column_data}};
}
sub inflate_result {
my ($class, $source, $me, $prefetch) = @_;
- #use Data::Dumper; print Dumper(@_);
+
+ my ($source_handle) = $source;
+
+ if ($source->isa('DBIx::Class::ResultSourceHandle')) {
+ $source = $source_handle->resolve
+ } else {
+ $source_handle = $source->handle
+ }
+
my $new = {
- result_source => $source,
+ _source_handle => $source_handle,
_column_data => $me,
_in_storage => 1
};
Accessor to the ResultSource this object was created from
+=cut
+
+sub result_source {
+ my $self = shift;
+
+ if (@_) {
+ $self->_source_handle($_[0]->handle);
+ } else {
+ $self->_source_handle->resolve;
+ }
+}
+
=head2 register_column
$column_info = { .... };
use Carp::Clan qw/^DBIx::Class/;
use Scalar::Util qw/weaken/;
+use File::Spec;
+require Module::Find;
use base qw/DBIx::Class/;
__PACKAGE__->mk_classdata('source_registrations' => {});
__PACKAGE__->mk_classdata('storage_type' => '::DBI');
__PACKAGE__->mk_classdata('storage');
+__PACKAGE__->mk_classdata('exception_action');
=head1 NAME
sub register_source {
my ($self, $moniker, $source) = @_;
+
+ %$source = %{ $source->new( { %$source, source_name => $moniker }) };
+
- my %reg = %{$self->source_registrations};
- $reg{$moniker} = $source;
- $self->source_registrations(\%reg);
+ $self->source_registrations->{$moniker} = $source;
+
$source->schema($self);
+
weaken($source->{schema}) if ref($self);
if ($source->result_class) {
- my %map = %{$self->class_mappings};
- $map{$source->result_class} = $moniker;
- $self->class_mappings(\%map);
+ $self->class_mappings->{$source->result_class} = $moniker;
}
}
+sub _unregister_source {
+ my ($self, $moniker) = @_;
+ my %reg = %{$self->source_registrations};
+
+ my $source = delete $reg{$moniker};
+ $self->source_registrations(\%reg);
+ if ($source->result_class) {
+ my %map = %{$self->class_mappings};
+ delete $map{$source->result_class};
+ $self->class_mappings(\%map);
+ }
+}
+
=head2 class
=over 4
}
}
} else {
- eval "require Module::Find;";
- $class->throw_exception(
- "No arguments to load_classes and couldn't load Module::Find ($@)"
- ) if $@;
my @comp = map { substr $_, length "${class}::" }
Module::Find::findallmod($class);
$comps_for{$class} = \@comp;
}
}
$class->ensure_class_loaded($comp_class);
- $comp_class->source_name($comp) unless $comp_class->source_name;
- push(@to_register, [ $comp_class->source_name, $comp_class ]);
+ $comp = $comp_class->source_name || $comp;
+ push(@to_register, [ $comp, $comp_class ]);
}
}
}
}
}
-=head2 compose_connection
+=head2 load_namespaces
+
+=over 4
+
+=item Arguments: %options?
+
+=back
+
+This is an alternative to L</load_classes> above which assumes an alternative
+layout for automatic class loading. It assumes that all result
+classes are underneath a sub-namespace of the schema called C<Result>, and any
+corresponding ResultSet classes are underneath a sub-namespace of the schema
+called C<ResultSet>.
+
+Both of the sub-namespaces are configurable if you don't like the defaults,
+via the options C<result_namespace> and C<resultset_namespace>.
+
+If (and only if) you specify the option C<default_resultset_class>, any found
+Result classes for which we do not find a corresponding
+ResultSet class will have their C<resultset_class> set to
+C<default_resultset_class>.
+
+C<load_namespaces> takes care of calling C<resultset_class> for you where
+necessary if you didn't do it for yourself.
+
+All of the namespace and classname options to this method are relative to
+the schema classname by default. To specify a fully-qualified name, prefix
+it with a literal C<+>.
+
+Examples:
+
+ # load My::Schema::Result::CD, My::Schema::Result::Artist,
+ # My::Schema::ResultSet::CD, etc...
+ My::Schema->load_namespaces;
+
+ # Override everything to use ugly names.
+ # In this example, if there is a My::Schema::Res::Foo, but no matching
+ # My::Schema::RSets::Foo, then Foo will have its
+ # resultset_class set to My::Schema::RSetBase
+ My::Schema->load_namespaces(
+ result_namespace => 'Res',
+ resultset_namespace => 'RSets',
+ default_resultset_class => 'RSetBase',
+ );
+
+ # Put things in other namespaces
+ My::Schema->load_namespaces(
+ result_namespace => '+Some::Place::Results',
+ resultset_namespace => '+Another::Place::RSets',
+ );
+
+If you'd like to use multiple namespaces of each type, simply use an arrayref
+of namespaces for that option. In the case that the same result
+(or resultset) class exists in multiple namespaces, the latter entries in
+your list of namespaces will override earlier ones.
+
+ My::Schema->load_namespaces(
+ # My::Schema::Results_C::Foo takes precedence over My::Schema::Results_B::Foo :
+ result_namespace => [ 'Results_A', 'Results_B', 'Results_C' ],
+ resultset_namespace => [ '+Some::Place::RSets', 'RSets' ],
+ );
+
+=cut
+
+# Prepends our classname to the given relative classname or
+# class namespace, unless there is a '+' prefix, which will
+# be stripped.
+sub _expand_relative_name {
+ my ($class, $name) = @_;
+ return if !$name;
+ $name = $class . '::' . $name if ! ($name =~ s/^\+//);
+ return $name;
+}
+
+# returns a hash of $shortname => $fullname for every package
+# found in the given namespaces ($shortname is with the $fullname's
+# namespace stripped off)
+sub _map_namespaces {
+ my ($class, @namespaces) = @_;
+
+ my @results_hash;
+ foreach my $namespace (@namespaces) {
+ push(
+ @results_hash,
+ map { (substr($_, length "${namespace}::"), $_) }
+ Module::Find::findallmod($namespace)
+ );
+ }
+
+ @results_hash;
+}
+
+sub load_namespaces {
+ my ($class, %args) = @_;
+
+ my $result_namespace = delete $args{result_namespace} || 'Result';
+ my $resultset_namespace = delete $args{resultset_namespace} || 'ResultSet';
+ my $default_resultset_class = delete $args{default_resultset_class};
+
+ $class->throw_exception('load_namespaces: unknown option(s): '
+ . join(q{,}, map { qq{'$_'} } keys %args))
+ if scalar keys %args;
+
+ $default_resultset_class
+ = $class->_expand_relative_name($default_resultset_class);
+
+ for my $arg ($result_namespace, $resultset_namespace) {
+ $arg = [ $arg ] if !ref($arg) && $arg;
+
+ $class->throw_exception('load_namespaces: namespace arguments must be '
+ . 'a simple string or an arrayref')
+ if ref($arg) ne 'ARRAY';
+
+ $_ = $class->_expand_relative_name($_) for (@$arg);
+ }
+
+ my %results = $class->_map_namespaces(@$result_namespace);
+ my %resultsets = $class->_map_namespaces(@$resultset_namespace);
+
+ my @to_register;
+ {
+ no warnings 'redefine';
+ local *Class::C3::reinitialize = sub { };
+ use warnings 'redefine';
+
+ foreach my $result (keys %results) {
+ my $result_class = $results{$result};
+ $class->ensure_class_loaded($result_class);
+ $result_class->source_name($result) unless $result_class->source_name;
+
+ my $rs_class = delete $resultsets{$result};
+ my $rs_set = $result_class->resultset_class;
+ if($rs_set && $rs_set ne 'DBIx::Class::ResultSet') {
+ if($rs_class && $rs_class ne $rs_set) {
+ warn "We found ResultSet class '$rs_class' for '$result', but it seems "
+ . "that you had already set '$result' to use '$rs_set' instead";
+ }
+ }
+ elsif($rs_class ||= $default_resultset_class) {
+ $class->ensure_class_loaded($rs_class);
+ $result_class->resultset_class($rs_class);
+ }
+
+ push(@to_register, [ $result_class->source_name, $result_class ]);
+ }
+ }
+
+ foreach (sort keys %resultsets) {
+ warn "load_namespaces found ResultSet class $_ with no "
+ . 'corresponding Result class';
+ }
+
+ Class::C3->reinitialize;
+ $class->register_class(@$_) for (@to_register);
+
+ return;
+}
+
+=head2 compose_connection (DEPRECATED)
=over 4
=back
+DEPRECATED. You probably wanted compose_namespace.
+
+Actually, you probably just wanted to call connect.
+
+=for hidden due to deprecation
+
Calls L<DBIx::Class::Schema/"compose_namespace"> to the target namespace,
calls L<DBIx::Class::Schema/connection> with @db_info on the new schema,
then injects the L<DBIx::Class::ResultSetProxy> component and a
=cut
-sub compose_connection {
- my ($self, $target, @info) = @_;
- my $base = 'DBIx::Class::ResultSetProxy';
- eval "require ${base};";
- $self->throw_exception
- ("No arguments to load_classes and couldn't load ${base} ($@)")
- if $@;
-
- if ($self eq $target) {
- # Pathological case, largely caused by the docs on early C::M::DBIC::Plain
- foreach my $moniker ($self->sources) {
- my $source = $self->source($moniker);
+{
+ my $warn;
+
+ sub compose_connection {
+ my ($self, $target, @info) = @_;
+
+ warn "compose_connection deprecated as of 0.08000" unless $warn++;
+
+ my $base = 'DBIx::Class::ResultSetProxy';
+ eval "require ${base};";
+ $self->throw_exception
+ ("compose_connection: couldn't load ${base} ($@)")
+ if $@;
+
+ if ($self eq $target) {
+ # Pathological case, largely caused by the docs on early C::M::DBIC::Plain
+ foreach my $moniker ($self->sources) {
+ my $source = $self->source($moniker);
+ my $class = $source->result_class;
+ $self->inject_base($class, $base);
+ $class->mk_classdata(resultset_instance => $source->resultset);
+ $class->mk_classdata(class_resolver => $self);
+ }
+ $self->connection(@info);
+ return $self;
+ }
+
+ my $schema = $self->compose_namespace($target, $base);
+ {
+ no strict 'refs';
+ *{"${target}::schema"} = sub { $schema };
+ }
+
+ $schema->connection(@info);
+ foreach my $moniker ($schema->sources) {
+ my $source = $schema->source($moniker);
my $class = $source->result_class;
- $self->inject_base($class, $base);
+ #warn "$moniker $class $source ".$source->storage;
+ $class->mk_classdata(result_source_instance => $source);
$class->mk_classdata(resultset_instance => $source->resultset);
- $class->mk_classdata(class_resolver => $self);
+ $class->mk_classdata(class_resolver => $schema);
}
- $self->connection(@info);
- return $self;
+ return $schema;
}
-
- my $schema = $self->compose_namespace($target, $base);
- {
- no strict 'refs';
- *{"${target}::schema"} = sub { $schema };
- }
-
- $schema->connection(@info);
- foreach my $moniker ($schema->sources) {
- my $source = $schema->source($moniker);
- my $class = $source->result_class;
- #warn "$moniker $class $source ".$source->storage;
- $class->mk_classdata(result_source_instance => $source);
- $class->mk_classdata(resultset_instance => $source->resultset);
- $class->mk_classdata(class_resolver => $schema);
- }
- return $schema;
}
=head2 compose_namespace
sub compose_namespace {
my ($self, $target, $base) = @_;
- my %reg = %{ $self->source_registrations };
- my %target;
- my %map;
my $schema = $self->clone;
{
no warnings qw/redefine/;
Instantiates a new Storage object of type
L<DBIx::Class::Schema/"storage_type"> and passes the arguments to
-$storage->connect_info. Sets the connection in-place on the schema. See
-L<DBIx::Class::Storage::DBI/"connect_info"> for more information.
+$storage->connect_info. Sets the connection in-place on the schema.
+
+See L<DBIx::Class::Storage::DBI/"connect_info"> for DBI-specific syntax,
+or L<DBIx::Class::Storage> in general.
=cut
$self->throw_exception(
"No arguments to load_classes and couldn't load ${storage_class} ($@)"
) if $@;
- my $storage = $storage_class->new;
+ my $storage = $storage_class->new($self);
$storage->connect_info(\@info);
$self->storage($storage);
+ $self->on_connect() if($self->can('on_connect'));
return $self;
}
sub connect { shift->clone->connection(@_) }
-=head2 txn_begin
-
-Begins a transaction (does nothing if AutoCommit is off). Equivalent to
-calling $schema->storage->txn_begin. See
-L<DBIx::Class::Storage::DBI/"txn_begin"> for more information.
+=head2 txn_do
-=cut
+=over 4
-sub txn_begin { shift->storage->txn_begin }
+=item Arguments: C<$coderef>, @coderef_args?
-=head2 txn_commit
+=item Return Value: The return value of $coderef
-Commits the current transaction. Equivalent to calling
-$schema->storage->txn_commit. See L<DBIx::Class::Storage::DBI/"txn_commit">
-for more information.
+=back
-=cut
+Executes C<$coderef> with (optional) arguments C<@coderef_args> atomically,
+returning its result (if any). Equivalent to calling $schema->storage->txn_do.
+See L<DBIx::Class::Storage/"txn_do"> for more information.
-sub txn_commit { shift->storage->txn_commit }
+This interface is preferred over using the individual methods L</txn_begin>,
+L</txn_commit>, and L</txn_rollback> below.
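A minimal sketch of the preferred interface (resultset, relationship, and
column names are illustrative):

```perl
my $coderef = sub {
    my ($artist, @titles) = @_;
    # if any create fails, the whole transaction is rolled back
    $artist->create_related('cds', { title => $_ }) for @titles;
    return $artist->cds;
};

my $rs = eval { $schema->txn_do($coderef, $artist, 'Night', 'Day') };
if ($@) {
    warn "Transaction failed and was rolled back: $@";
}
```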
-=head2 txn_rollback
+=cut
-Rolls back the current transaction. Equivalent to calling
-$schema->storage->txn_rollback. See
-L<DBIx::Class::Storage::DBI/"txn_rollback"> for more information.
+sub txn_do {
+ my $self = shift;
-=cut
+ $self->storage or $self->throw_exception
+ ('txn_do called on $schema without storage');
-sub txn_rollback { shift->storage->txn_rollback }
+ $self->storage->txn_do(@_);
+}
-=head2 txn_do
+=head2 txn_begin
-=over 4
+Begins a transaction (does nothing if AutoCommit is off). Equivalent to
+calling $schema->storage->txn_begin. See
+L<DBIx::Class::Storage::DBI/"txn_begin"> for more information.
-=item Arguments: C<$coderef>, @coderef_args?
+=cut
-=item Return Value: The return value of $coderef
+sub txn_begin {
+ my $self = shift;
-=back
+ $self->storage or $self->throw_exception
+ ('txn_begin called on $schema without storage');
-Executes C<$coderef> with (optional) arguments C<@coderef_args> atomically,
-returning its result (if any). If an exception is caught, a rollback is issued
-and the exception is rethrown. If the rollback fails, (i.e. throws an
-exception) an exception is thrown that includes a "Rollback failed" message.
+ $self->storage->txn_begin;
+}
-For example,
+=head2 txn_commit
- my $author_rs = $schema->resultset('Author')->find(1);
- my @titles = qw/Night Day It/;
+Commits the current transaction. Equivalent to calling
+$schema->storage->txn_commit. See L<DBIx::Class::Storage::DBI/"txn_commit">
+for more information.
- my $coderef = sub {
- # If any one of these fails, the entire transaction fails
- $author_rs->create_related('books', {
- title => $_
- }) foreach (@titles);
+=cut
- return $author->books;
- };
+sub txn_commit {
+ my $self = shift;
- my $rs;
- eval {
- $rs = $schema->txn_do($coderef);
- };
+ $self->storage or $self->throw_exception
+ ('txn_commit called on $schema without storage');
- if ($@) { # Transaction failed
- die "something terrible has happened!" #
- if ($@ =~ /Rollback failed/); # Rollback failed
+ $self->storage->txn_commit;
+}
- deal_with_failed_transaction();
- }
+=head2 txn_rollback
-In a nested transaction (calling txn_do() from within a txn_do() coderef) only
-the outermost transaction will issue a L<DBIx::Class::Schema/"txn_commit"> on
-the Schema's storage, and txn_do() can be called in void, scalar and list
-context and it will behave as expected.
+Rolls back the current transaction. Equivalent to calling
+$schema->storage->txn_rollback. See
+L<DBIx::Class::Storage::DBI/"txn_rollback"> for more information.
=cut
-sub txn_do {
- my ($self, $coderef, @args) = @_;
+sub txn_rollback {
+ my $self = shift;
$self->storage or $self->throw_exception
- ('txn_do called on $schema without storage');
- ref $coderef eq 'CODE' or $self->throw_exception
- ('$coderef must be a CODE reference');
-
- my (@return_values, $return_value);
-
- $self->txn_begin; # If this throws an exception, no rollback is needed
-
- my $wantarray = wantarray; # Need to save this since the context
- # inside the eval{} block is independent
- # of the context that called txn_do()
- eval {
-
- # Need to differentiate between scalar/list context to allow for
- # returning a list in scalar context to get the size of the list
- if ($wantarray) {
- # list context
- @return_values = $coderef->(@args);
- } elsif (defined $wantarray) {
- # scalar context
- $return_value = $coderef->(@args);
- } else {
- # void context
- $coderef->(@args);
- }
- $self->txn_commit;
- };
+ ('txn_rollback called on $schema without storage');
- if ($@) {
- my $error = $@;
-
- eval {
- $self->txn_rollback;
- };
-
- if ($@) {
- my $rollback_error = $@;
- my $exception_class = "DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION";
- $self->throw_exception($error) # propagate nested rollback
- if $rollback_error =~ /$exception_class/;
-
- $self->throw_exception(
- "Transaction aborted: $error. Rollback failed: ${rollback_error}"
- );
- } else {
- $self->throw_exception($error); # txn failed but rollback succeeded
- }
- }
-
- return $wantarray ? @return_values : $return_value;
+ $self->storage->txn_rollback;
}
=head2 clone
my $new = $source->new($source);
$clone->register_source($moniker => $new);
}
+ $clone->storage->set_schema($clone) if $clone->storage;
return $clone;
}
arrayrefs. The arrayrefs should contain a list of column names,
followed by one or many sets of matching data for the given columns.
-Each set of data is inserted into the database using
+In void context, C<insert_bulk> in L<DBIx::Class::Storage::DBI> is used
+to insert the data, as this is a fast method. However, insert_bulk currently
+assumes that your datasets all contain the same type of values; using scalar
+references in a column in one row but not in another will probably not work.
+
+Otherwise, each set of data is inserted into the database using
L<DBIx::Class::ResultSet/create>, and an arrayref of the resulting row
objects is returned.
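A sketch of the two calling contexts (source and column names are
illustrative):

```perl
# list context: rows are created one at a time via create(),
# and the resulting row objects are returned
my @artists = $schema->populate('Artist', [
    [ qw/artistid name/ ],
    [ 1, 'Fred'   ],
    [ 2, 'Barney' ],
]);

# void context: the fast insert_bulk path is used instead
$schema->populate('Artist', [ [ qw/artistid name/ ], [ 3, 'Wilma' ] ]);
```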
my ($self, $name, $data) = @_;
my $rs = $self->resultset($name);
my @names = @{shift(@$data)};
- my @created;
- foreach my $item (@$data) {
- my %create;
- @create{@names} = @$item;
- push(@created, $rs->create(\%create));
+ if(defined wantarray) {
+ my @created;
+ foreach my $item (@$data) {
+ my %create;
+ @create{@names} = @$item;
+ push(@created, $rs->create(\%create));
+ }
+ return @created;
}
- return @created;
+ $self->storage->insert_bulk($self->source($name), \@names, $data);
}
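The C<@create{@names} = @$item> hash slice in the hunk above pairs the column names from the first arrayref with each data row. A self-contained sketch of that pairing (the names and values here are illustrative, not from the source):

```perl
use strict;
use warnings;

# Column names come from the first arrayref passed to populate();
# each subsequent data row is paired with them via a hash slice.
my @names = qw/artistid name/;
my @item  = (1, 'Fred');

my %create;
@create{@names} = @item;

# %create is now (artistid => 1, name => 'Fred')
print "$_ => $create{$_}\n" for sort keys %create;
```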
+=head2 exception_action
+
+=over 4
+
+=item Arguments: $code_reference
+
+=back
+
+If C<exception_action> is set for this class/object, L</throw_exception>
+will prefer to call this code reference with the exception as an argument,
+rather than its normal C<croak> action.
+
+Your subroutine should probably just wrap the error in the exception
+object/class of your choosing and rethrow. If, against all sage advice,
+you'd like your C<exception_action> to suppress a particular exception
+completely, simply have it return true.
+
+Example:
+
+ package My::Schema;
+ use base qw/DBIx::Class::Schema/;
+ use My::ExceptionClass;
+ __PACKAGE__->exception_action(sub { My::ExceptionClass->throw(@_) });
+ __PACKAGE__->load_classes;
+
+ # or:
+ my $schema_obj = My::Schema->connect( .... );
+ $schema_obj->exception_action(sub { My::ExceptionClass->throw(@_) });
+
+ # suppress all exceptions, like a moron:
+ $schema_obj->exception_action(sub { 1 });
+
=head2 throw_exception
=over 4
=back
Throws an exception. Defaults to using L<Carp::Clan> to report errors from
-user's perspective.
+user's perspective. See L</exception_action> for details on overriding
+this method's behavior.
=cut
sub throw_exception {
- my ($self) = shift;
- croak @_;
+ my $self = shift;
+ croak @_ if !$self->exception_action || !$self->exception_action->(@_);
}
=head2 deploy (EXPERIMENTAL)
Attempts to deploy the schema to the current storage using L<SQL::Translator>.
Note that this feature is currently EXPERIMENTAL and may not work correctly
-across all databases, or fully handle complex relationships.
+across all databases, or fully handle complex relationships. That said, it
+has been used successfully by many people, including the core dev team.
See L<SQL::Translator/METHODS> for a list of values for C<$sqlt_args>. The most
common value for this would be C<< { add_drop_table => 1, } >> to have the SQL
produced include a DROP TABLE statement for each table created.
+Additionally, the DBIx::Class parser accepts a C<sources> parameter as a
+hashref or an arrayref containing a list of sources to deploy. If present,
+only the listed sources will be deployed.
+
=cut
sub deploy {
=over 4
-=item Arguments: \@databases, $version, $directory, $sqlt_args
+=item Arguments: \@databases, $version, $directory, $preversion, $sqlt_args
=back
Creates an SQL file based on the Schema, for each of the specified
-database types, in the given directory.
+database types, in the given directory. Given a previous version number,
+this will also create a file containing the ALTER TABLE statements to
+transform the previous schema into the current one. Note that these
+statements may contain DROP TABLE or DROP COLUMN statements that can
+potentially destroy data.
+
+The file names are created using the C<ddl_filename> method below; please
+override this method in your schema if you would like a different file
+name format. For the ALTER file, the same format is used, replacing
+$version in the name with "$preversion-$version".
+
+If no arguments are passed, then the following default values are used:
+
+=over 4
+
+=item databases - ['MySQL', 'SQLite', 'PostgreSQL']
+
+=item version - $schema->VERSION
+
+=item directory - './'
+
+=item preversion - <none>
+
+=back
Note that this feature is currently EXPERIMENTAL and may not work correctly
across all databases, or fully handle complex relationships.
+WARNING: Please check all SQL files created, before applying them.
+
=cut
sub create_ddl_dir {
=head2 ddl_filename (EXPERIMENTAL)
- my $filename = $table->ddl_filename($type, $dir, $version)
+=over 4
+
+=item Arguments: $directory, $database-type, $version, $preversion
+
+=back
+
+ my $filename = $table->ddl_filename($type, $dir, $version, $preversion)
+
+This method is called by C<create_ddl_dir> to compose a file name out of
+the supplied directory, database type and version number. The default file
+name format is: C<$dir$schema-$version-$type.sql>.
-Creates a filename for a SQL file based on the table class name. Not
-intended for direct end user use.
+You may override this method in your schema if you wish to use a different
+format.
=cut
sub ddl_filename {
- my ($self, $type, $dir, $version) = @_;
+ my ($self, $type, $dir, $version, $pversion) = @_;
my $filename = ref($self);
- $filename =~ s/::/-/;
+ $filename =~ s/::/-/g;
- $filename = "$dir$filename-$version-$type.sql";
+ $filename = File::Spec->catfile($dir, "$filename-$version-$type.sql");
+ $filename =~ s/$version/$pversion-$version/ if($pversion);
return $filename;
}
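As a sketch of what the default format produces, here is a standalone reimplementation of the naming logic above (simplified: no File::Spec, the schema class name is passed in directly, and the version substitution is quoted with C<\Q> for safety):

```perl
use strict;
use warnings;

# Simplified stand-in for ddl_filename: class name with '::' replaced
# by '-', then "$dir$name-$version-$type.sql"; given a preversion, the
# version part becomes "$preversion-$version".
sub default_ddl_filename {
    my ($class, $type, $dir, $version, $preversion) = @_;
    (my $name = $class) =~ s/::/-/g;
    my $filename = "$dir$name-$version-$type.sql";
    $filename =~ s/\Q$version\E/$preversion-$version/ if $preversion;
    return $filename;
}

print default_ddl_filename('My::Schema', 'MySQL', './', '0.2'), "\n";
# ./My-Schema-0.2-MySQL.sql
print default_ddl_filename('My::Schema', 'MySQL', './', '0.2', '0.1'), "\n";
# ./My-Schema-0.1-0.2-MySQL.sql (the ALTER file name)
```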
use base 'DBIx::Class::Storage';
-use strict;
+use strict;
use warnings;
use DBI;
use SQL::Abstract::Limit;
use DBIx::Class::Storage::DBI::Cursor;
use DBIx::Class::Storage::Statistics;
use IO::File;
-use Carp::Clan qw/DBIx::Class/;
+use Scalar::Util 'blessed';
+
+__PACKAGE__->mk_group_accessors(
+ 'simple' =>
+ qw/_connect_info _dbh _sql_maker _sql_maker_opts _conn_pid _conn_tid
+ disable_sth_caching cursor on_connect_do transaction_depth/
+);
+
BEGIN {
package DBIC::SQL::Abstract; # Would merge upstream, but nate doesn't reply :(
sub _find_syntax {
my ($self, $syntax) = @_;
my $dbhname = blessed($syntax) ? $syntax->{Driver}{Name} : $syntax;
-# print STDERR "Found DBH $syntax >$dbhname< ", $syntax->{Driver}->{Name}, "\n";
if(ref($self) && $dbhname && $dbhname eq 'DB2') {
return 'RowNumberOver';
}
} # End of BEGIN block
-use base qw/DBIx::Class/;
-
-__PACKAGE__->load_components(qw/AccessorGroup/);
-
-__PACKAGE__->mk_group_accessors('simple' =>
- qw/_connect_info _dbh _sql_maker _sql_maker_opts _conn_pid _conn_tid
- debug debugobj cursor on_connect_do transaction_depth/);
-
=head1 NAME
DBIx::Class::Storage::DBI - DBI storage handler
=head1 DESCRIPTION
-This class represents the connection to the database
+This class represents the connection to an RDBMS via L<DBI>. See
+L<DBIx::Class::Storage> for general information. This pod only
+documents DBI-specific methods and behaviors.
=head1 METHODS
=cut
sub new {
- my $new = {};
- bless $new, (ref $_[0] || $_[0]);
+ my $new = shift->next::method(@_);
$new->cursor("DBIx::Class::Storage::DBI::Cursor");
$new->transaction_depth(0);
-
- $new->debugobj(new DBIx::Class::Storage::Statistics());
-
- my $fh;
-
- my $debug_env = $ENV{DBIX_CLASS_STORAGE_DBI_DEBUG}
- || $ENV{DBIC_TRACE};
-
- if (defined($debug_env) && ($debug_env =~ /=(.+)$/)) {
- $fh = IO::File->new($1, 'w')
- or $new->throw_exception("Cannot open trace file $1");
- } else {
- $fh = IO::File->new('>&STDERR');
- }
- $new->debugfh($fh);
- $new->debug(1) if $debug_env;
$new->_sql_maker_opts({});
- return $new;
-}
-
-=head2 throw_exception
+ $new->{_in_dbh_do} = 0;
+ $new->{_dbh_gen} = 0;
-Throws an exception - croaks.
-
-=cut
-
-sub throw_exception {
- my ($self, $msg) = @_;
- croak($msg);
+ $new;
}
=head2 connect_info
be executed immediately after making the connection to the database
every time we [re-]connect.
+=item disable_sth_caching
+
+If set to a true value, this option will disable the caching of
+statement handles via L<DBI/prepare_cached>.
+
=item limit_dialect
Sets the limit dialect. This is useful for JDBC-bridge among others
these options will be cleared before setting the new ones, regardless of
whether any options are specified in the new C<connect_info>.
+Important note: DBIC expects the database handle returned by a subref
+argument to have RaiseError set on it. If it doesn't, things might not
+work very well, YMMV. If you don't use a subref, DBIC will force this
+setting for you anyway. Setting HandleError to anything other than a
+simple exception object wrapper might cause problems too.
+
Examples:
# Simple SQLite connection
quote_char => q{`},
name_sep => q{@},
on_connect_do => ['SET search_path TO myschema,otherschema,public'],
+ disable_sth_caching => 1,
},
]
);
+=cut
+
+sub connect_info {
+ my ($self, $info_arg) = @_;
+
+ return $self->_connect_info if !$info_arg;
+
+ # Kill sql_maker/_sql_maker_opts, so we get a fresh one with only
+ # the new set of options
+ $self->_sql_maker(undef);
+ $self->_sql_maker_opts({});
+
+ my $info = [ @$info_arg ]; # copy because we can alter it
+ my $last_info = $info->[-1];
+ if(ref $last_info eq 'HASH') {
+ for my $storage_opt (qw/on_connect_do disable_sth_caching/) {
+ if(my $value = delete $last_info->{$storage_opt}) {
+ $self->$storage_opt($value);
+ }
+ }
+ for my $sql_maker_opt (qw/limit_dialect quote_char name_sep/) {
+ if(my $opt_val = delete $last_info->{$sql_maker_opt}) {
+ $self->_sql_maker_opts->{$sql_maker_opt} = $opt_val;
+ }
+ }
+
+ # Get rid of any trailing empty hashref
+ pop(@$info) if !keys %$last_info;
+ }
+
+ $self->_connect_info($info);
+}
+
=head2 on_connect_do
This method is deprecated in favor of setting via L</connect_info>.
-=head2 debug
+=head2 dbh_do
+
+Arguments: $subref, @extra_coderef_args?
-Causes SQL trace information to be emitted on the C<debugobj> object.
-(or C<STDERR> if C<debugobj> has not specifically been set).
+Execute the given subref using the new exception-based connection management.
-This is the equivalent to setting L</DBIC_TRACE> in your
-shell environment.
+The first two arguments will be the storage object that C<dbh_do> was called
+on and a database handle to use. Any additional arguments will be passed
+verbatim to the called subref as arguments 2 and onwards.
-=head2 debugfh
+Using this (instead of $self->_dbh or $self->dbh) ensures correct
+exception handling and reconnection (or failover in future subclasses).
-Set or retrieve the filehandle used for trace/debug output. This should be
-an IO::Handle compatible ojbect (only the C<print> method is used. Initially
-set to be STDERR - although see information on the
-L<DBIC_TRACE> environment variable.
+Your subref should have no side-effects outside of the database, as
+there is the potential for your subref to be partially double-executed
+if the database connection was stale/dysfunctional.
+
+Example:
+
+ my @stuff = $schema->storage->dbh_do(
+ sub {
+ my ($storage, $dbh, @cols) = @_;
+ my $cols = join(q{, }, @cols);
+ $dbh->selectrow_array("SELECT $cols FROM foo");
+ },
+ @column_list
+ );
=cut
-sub debugfh {
- my $self = shift;
+sub dbh_do {
+ my $self = shift;
+ my $coderef = shift;
- if ($self->debugobj->can('debugfh')) {
- return $self->debugobj->debugfh(@_);
+ ref $coderef eq 'CODE' or $self->throw_exception
+ ('$coderef must be a CODE reference');
+
+ return $coderef->($self, $self->_dbh, @_) if $self->{_in_dbh_do};
+ local $self->{_in_dbh_do} = 1;
+
+ my @result;
+ my $want_array = wantarray;
+
+ eval {
+ $self->_verify_pid if $self->_dbh;
+ $self->_populate_dbh if !$self->_dbh;
+ if($want_array) {
+ @result = $coderef->($self, $self->_dbh, @_);
}
-}
+ elsif(defined $want_array) {
+ $result[0] = $coderef->($self, $self->_dbh, @_);
+ }
+ else {
+ $coderef->($self, $self->_dbh, @_);
+ }
+ };
-=head2 debugobj
+ my $exception = $@;
+ if(!$exception) { return $want_array ? @result : $result[0] }
-Sets or retrieves the object used for metric collection. Defaults to an instance
-of L<DBIx::Class::Storage::Statistics> that is campatible with the original
-method of using a coderef as a callback. See the aforementioned Statistics
-class for more information.
+ $self->throw_exception($exception) if $self->connected;
-=head2 debugcb
+ # We were not connected - reconnect and retry, but let any
+ # exception fall right through this time
+ $self->_populate_dbh;
+ $coderef->($self, $self->_dbh, @_);
+}
-Sets a callback to be executed each time a statement is run; takes a sub
-reference. Callback is executed as $sub->($op, $info) where $op is
-SELECT/INSERT/UPDATE/DELETE and $info is what would normally be printed.
+# This is basically a blend of dbh_do above and DBIx::Class::Storage::txn_do.
+# It also informs dbh_do to bypass itself while under the direction of txn_do,
+# via $self->{_in_dbh_do} (this saves some redundant eval and error checking, etc.)
+sub txn_do {
+ my $self = shift;
+ my $coderef = shift;
-See L<debugobj> for a better way.
+ ref $coderef eq 'CODE' or $self->throw_exception
+ ('$coderef must be a CODE reference');
-=cut
+ local $self->{_in_dbh_do} = 1;
-sub debugcb {
- my $self = shift;
+ my @result;
+ my $want_array = wantarray;
- if ($self->debugobj->can('callback')) {
- return $self->debugobj->callback(@_);
+ my $tried = 0;
+ while(1) {
+ eval {
+ $self->_verify_pid if $self->_dbh;
+ $self->_populate_dbh if !$self->_dbh;
+
+ $self->txn_begin;
+ if($want_array) {
+ @result = $coderef->(@_);
+ }
+ elsif(defined $want_array) {
+ $result[0] = $coderef->(@_);
+ }
+ else {
+ $coderef->(@_);
+ }
+ $self->txn_commit;
+ };
+
+ my $exception = $@;
+ if(!$exception) { return $want_array ? @result : $result[0] }
+
+ if($tried++ > 0 || $self->connected) {
+ eval { $self->txn_rollback };
+ my $rollback_exception = $@;
+ if($rollback_exception) {
+ my $exception_class = "DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION";
+ $self->throw_exception($exception) # propagate nested rollback
+ if $rollback_exception =~ /$exception_class/;
+
+ $self->throw_exception(
+ "Transaction aborted: ${exception}. "
+ . "Rollback failed: ${rollback_exception}"
+ );
+ }
+ $self->throw_exception($exception)
}
+
+  # We were not connected, and this was the first try - reconnect
+  # and retry via the while loop
+ $self->_populate_dbh;
+ }
}
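Both C<dbh_do> and C<txn_do> above use the same C<wantarray>-based dispatch to preserve the caller's context across the eval/retry machinery. A self-contained sketch of that pattern:

```perl
use strict;
use warnings;

# Call $coderef in the same context the wrapper was called in, and
# return its results accordingly (the pattern used by dbh_do/txn_do).
sub run_in_context {
    my ($coderef, @args) = @_;
    my @result;
    my $want_array = wantarray;
    if ($want_array) {
        @result = $coderef->(@args);          # list context
    }
    elsif (defined $want_array) {
        $result[0] = $coderef->(@args);       # scalar context
    }
    else {
        $coderef->(@args);                    # void context
    }
    return $want_array ? @result : $result[0];
}

my @list   = run_in_context(sub { (1, 2, 3) });  # (1, 2, 3)
my $scalar = run_in_context(sub { (1, 2, 3) });  # 3 (comma list in scalar context)
```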
=head2 disconnect
-Disconnect the L<DBI> handle, performing a rollback first if the
+Our C<disconnect> method also performs a rollback first if the
database is not in C<AutoCommit> mode.
=cut
$self->_dbh->rollback unless $self->_dbh->{AutoCommit};
$self->_dbh->disconnect;
$self->_dbh(undef);
+ $self->{_dbh_gen}++;
}
}
-=head2 connected
-
-Check if the L<DBI> handle is connected. Returns true if the handle
-is connected.
-
-=cut
-
-sub connected { my ($self) = @_;
+sub connected {
+ my ($self) = @_;
if(my $dbh = $self->_dbh) {
if(defined $self->_conn_tid && $self->_conn_tid != threads->tid) {
- return $self->_dbh(undef);
+ $self->_dbh(undef);
+ $self->{_dbh_gen}++;
+ return;
}
- elsif($self->_conn_pid != $$) {
- $self->_dbh->{InactiveDestroy} = 1;
- return $self->_dbh(undef);
+ else {
+ $self->_verify_pid;
}
return ($dbh->FETCH('Active') && $dbh->ping);
}
return 0;
}
-=head2 ensure_connected
+# handle pid changes correctly
+# NOTE: assumes $self->_dbh is a valid $dbh
+sub _verify_pid {
+ my ($self) = @_;
-Check whether the database handle is connected - if not then make a
-connection.
+ return if $self->_conn_pid == $$;
-=cut
+ $self->_dbh->{InactiveDestroy} = 1;
+ $self->_dbh(undef);
+ $self->{_dbh_gen}++;
+
+ return;
+}
sub ensure_connected {
my ($self) = @_;
sub _sql_maker_args {
my ($self) = @_;
- return ( limit_dialect => $self->dbh, %{$self->_sql_maker_opts} );
+ return ( bindtype=>'columns', limit_dialect => $self->dbh, %{$self->_sql_maker_opts} );
}
-=head2 sql_maker
-
-Returns a C<sql_maker> object - normally an object of class
-C<DBIC::SQL::Abstract>.
-
-=cut
-
sub sql_maker {
my ($self) = @_;
unless ($self->_sql_maker) {
return $self->_sql_maker;
}
-sub connect_info {
- my ($self, $info_arg) = @_;
-
- if($info_arg) {
- # Kill sql_maker/_sql_maker_opts, so we get a fresh one with only
- # the new set of options
- $self->_sql_maker(undef);
- $self->_sql_maker_opts({});
-
- my $info = [ @$info_arg ]; # copy because we can alter it
- my $last_info = $info->[-1];
- if(ref $last_info eq 'HASH') {
- if(my $on_connect_do = delete $last_info->{on_connect_do}) {
- $self->on_connect_do($on_connect_do);
- }
- for my $sql_maker_opt (qw/limit_dialect quote_char name_sep/) {
- if(my $opt_val = delete $last_info->{$sql_maker_opt}) {
- $self->_sql_maker_opts->{$sql_maker_opt} = $opt_val;
- }
- }
-
- # Get rid of any trailing empty hashref
- pop(@$info) if !keys %$last_info;
- }
-
- $self->_connect_info($info);
- }
-
- $self->_connect_info;
-}
-
sub _populate_dbh {
my ($self) = @_;
my @info = @{$self->_connect_info || []};
}
eval {
- $dbh = ref $info[0] eq 'CODE'
- ? &{$info[0]}
- : DBI->connect(@info);
+ if(ref $info[0] eq 'CODE') {
+ $dbh = &{$info[0]}
+ }
+ else {
+ $dbh = DBI->connect(@info);
+ $dbh->{RaiseError} = 1;
+ $dbh->{PrintError} = 0;
+ $dbh->{PrintWarn} = 0;
+ }
};
$DBI::connect_via = $old_connect_via if $old_connect_via;
$dbh;
}
-=head2 txn_begin
-
-Calls begin_work on the current dbh.
-
-See L<DBIx::Class::Schema> for the txn_do() method, which allows for
-an entire code block to be executed transactionally.
-
-=cut
+sub _dbh_txn_begin {
+ my ($self, $dbh) = @_;
+ if ($dbh->{AutoCommit}) {
+ $self->debugobj->txn_begin()
+ if ($self->debug);
+ $dbh->begin_work;
+ }
+}
sub txn_begin {
my $self = shift;
- if ($self->{transaction_depth}++ == 0) {
- my $dbh = $self->dbh;
- if ($dbh->{AutoCommit}) {
- $self->debugobj->txn_begin()
- if ($self->debug);
- $dbh->begin_work;
- }
- }
+ $self->dbh_do($self->can('_dbh_txn_begin'))
+ if $self->{transaction_depth}++ == 0;
}
-=head2 txn_commit
-
-Issues a commit against the current dbh.
-
-=cut
-
-sub txn_commit {
- my $self = shift;
- my $dbh = $self->dbh;
+sub _dbh_txn_commit {
+ my ($self, $dbh) = @_;
if ($self->{transaction_depth} == 0) {
unless ($dbh->{AutoCommit}) {
$self->debugobj->txn_commit()
}
}
-=head2 txn_rollback
-
-Issues a rollback against the current dbh. A nested rollback will
-throw a L<DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION> exception,
-which allows the rollback to propagate to the outermost transaction.
-
-=cut
-
-sub txn_rollback {
+sub txn_commit {
my $self = shift;
+ $self->dbh_do($self->can('_dbh_txn_commit'));
+}
- eval {
- my $dbh = $self->dbh;
- if ($self->{transaction_depth} == 0) {
- unless ($dbh->{AutoCommit}) {
- $self->debugobj->txn_rollback()
- if ($self->debug);
- $dbh->rollback;
- }
+sub _dbh_txn_rollback {
+ my ($self, $dbh) = @_;
+ if ($self->{transaction_depth} == 0) {
+ unless ($dbh->{AutoCommit}) {
+ $self->debugobj->txn_rollback()
+ if ($self->debug);
+ $dbh->rollback;
+ }
+ }
+ else {
+ if (--$self->{transaction_depth} == 0) {
+ $self->debugobj->txn_rollback()
+ if ($self->debug);
+ $dbh->rollback;
}
else {
- if (--$self->{transaction_depth} == 0) {
- $self->debugobj->txn_rollback()
- if ($self->debug);
- $dbh->rollback;
- }
- else {
- die DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION->new;
- }
+ die DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION->new;
}
- };
+ }
+}
+
+sub txn_rollback {
+ my $self = shift;
+ eval { $self->dbh_do($self->can('_dbh_txn_rollback')) };
if ($@) {
my $error = $@;
my $exception_class = "DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION";
}
}
-sub _execute {
+# This used to be the top half of _execute. It was split out to make it
+# easier to override in NoBindVars without duplicating the rest. It takes
+# all of _execute's args, and emits $sql, @bind.
+sub _prep_for_execute {
my ($self, $op, $extra_bind, $ident, @args) = @_;
+
my ($sql, @bind) = $self->sql_maker->$op($ident, @args);
- unshift(@bind, @$extra_bind) if $extra_bind;
+ unshift(@bind,
+ map { ref $_ eq 'ARRAY' ? $_ : [ '!!dummy', $_ ] } @$extra_bind)
+ if $extra_bind;
+ @bind = map { ref $_ ? ''.$_ : $_ } @bind; # stringify args
+
+ return ($sql, @bind);
+}
+
+sub _execute {
+ my ($self, $op, $extra_bind, $ident, $bind_attributes, @args) = @_;
+
+ if( blessed($ident) && $ident->isa("DBIx::Class::ResultSource") ) {
+ $ident = $ident->from();
+ }
+
+ my ($sql, @bind) = $self->sql_maker->$op($ident, @args);
+ unshift(@bind,
+ map { ref $_ eq 'ARRAY' ? $_ : [ '!!dummy', $_ ] } @$extra_bind)
+ if $extra_bind;
if ($self->debug) {
- my @debug_bind = map { defined $_ ? qq{'$_'} : q{'NULL'} } @bind;
+ my @debug_bind =
+ map { defined ($_ && $_->[1]) ? qq{'$_->[1]'} : q{'NULL'} } @bind;
$self->debugobj->query_start($sql, @debug_bind);
}
- my $sth = eval { $self->sth($sql,$op) };
- if (!$sth || $@) {
- $self->throw_exception(
- 'no sth generated via sql (' . ($@ || $self->_dbh->errstr) . "): $sql"
- );
- }
- @bind = map { ref $_ ? ''.$_ : $_ } @bind; # stringify args
- my $rv;
- if ($sth) {
- my $time = time();
- $rv = eval { $sth->execute(@bind) };
+ my ($rv, $sth);
+ RETRY: while (1) {
+ $sth = eval { $self->sth($sql,$op) };
- if ($@ || !$rv) {
- $self->throw_exception("Error executing '$sql': ".($@ || $sth->errstr));
+ if (!$sth || $@) {
+ $self->throw_exception(
+ 'no sth generated via sql (' . ($@ || $self->_dbh->errstr) . "): $sql"
+ );
}
- } else {
- $self->throw_exception("'$sql' did not generate a statement.");
- }
+
+ if ($sth) {
+ my $time = time();
+ $rv = eval {
+ my $placeholder_index = 1;
+
+ foreach my $bound (@bind) {
+
+ my $attributes = {};
+ my($column_name, @data) = @$bound;
+
+ if( $bind_attributes ) {
+ $attributes = $bind_attributes->{$column_name}
+ if defined $bind_attributes->{$column_name};
+ }
+
+ foreach my $data (@data)
+ {
+ $data = ref $data ? ''.$data : $data; # stringify args
+
+ $sth->bind_param($placeholder_index, $data, $attributes);
+ $placeholder_index++;
+ }
+ }
+ $sth->execute();
+ };
+
+ if ($@ || !$rv) {
+ $self->throw_exception("Error executing '$sql': ".($@ || $sth->errstr))
+ if $self->connected;
+ $self->_populate_dbh;
+ } else {
+ last RETRY;
+ }
+ } else {
+ $self->throw_exception("'$sql' did not generate a statement.");
+ }
+ } # While(1) to retry if disconnected
+
if ($self->debug) {
- my @debug_bind = map { defined $_ ? qq{`$_'} : q{`NULL'} } @bind;
- $self->debugobj->query_end($sql, @debug_bind);
+ my @debug_bind =
+ map { defined ($_ && $_->[1]) ? qq{'$_->[1]'} : q{'NULL'} } @bind;
+ $self->debugobj->query_end($sql, @debug_bind);
}
return (wantarray ? ($rv, $sth, @bind) : $rv);
}
sub insert {
- my ($self, $ident, $to_insert) = @_;
+ my ($self, $source, $to_insert) = @_;
+
+ my $ident = $source->from;
+ my $bind_attributes = $self->source_bind_attributes($source);
+
$self->throw_exception(
"Couldn't insert ".join(', ',
map "$_ => $to_insert->{$_}", keys %$to_insert
)." into ${ident}"
- ) unless ($self->_execute('insert' => [], $ident, $to_insert));
+ ) unless ($self->_execute('insert' => [], $source, $bind_attributes, $to_insert));
return $to_insert;
}
+## Still not quite perfect, and EXPERIMENTAL
+## Currently it is assumed that all values passed will be "normal", i.e. not
+## scalar refs, or at least all the same type as the first set; the statement
+## is only prepped once.
+sub insert_bulk {
+ my ($self, $source, $cols, $data) = @_;
+ my %colvalues;
+ my $table = $source->from;
+ @colvalues{@$cols} = (0..$#$cols);
+ my ($sql, @bind) = $self->sql_maker->insert($table, \%colvalues);
+
+ if ($self->debug) {
+ my @debug_bind = map { defined $_->[1] ? qq{$_->[1]} : q{'NULL'} } @bind;
+ $self->debugobj->query_start($sql, @debug_bind);
+ }
+ my $sth = $self->sth($sql);
+
+# @bind = map { ref $_ ? ''.$_ : $_ } @bind; # stringify args
+
+ my $rv;
+
+ ## This must be an arrayref, else nothing works!
+
+ my $tuple_status = [];
+
+ ##use Data::Dumper;
+ ##print STDERR Dumper( $data, $sql, [@bind] );
+
+ if ($sth) {
+
+ my $time = time();
+
+ ## Get the bind_attributes, if any exist
+ my $bind_attributes = $self->source_bind_attributes($source);
+
+ ## Bind the values and execute
+ $rv = eval {
+
+ my $placeholder_index = 1;
+
+ foreach my $bound (@bind) {
+
+ my $attributes = {};
+ my ($column_name, $data_index) = @$bound;
+
+ if( $bind_attributes ) {
+ $attributes = $bind_attributes->{$column_name}
+ if defined $bind_attributes->{$column_name};
+ }
+
+ my @data = map { $_->[$data_index] } @$data;
+
+ $sth->bind_param_array( $placeholder_index, [@data], $attributes );
+ $placeholder_index++;
+ }
+ $sth->execute_array( {ArrayTupleStatus => $tuple_status} );
+
+ };
+
+ if ($@ || !defined $rv) {
+ my $errors = '';
+ foreach my $tuple (@$tuple_status) {
+ $errors .= "\n" . $tuple->[1] if(ref $tuple);
+ }
+ $self->throw_exception("Error executing '$sql': ".($@ || $errors));
+ }
+ } else {
+ $self->throw_exception("'$sql' did not generate a statement.");
+ }
+ if ($self->debug) {
+ my @debug_bind = map { defined $_ ? qq{`$_'} : q{`NULL'} } @bind;
+ $self->debugobj->query_end($sql, @debug_bind);
+ }
+ return (wantarray ? ($rv, $sth, @bind) : $rv);
+}
+
sub update {
- return shift->_execute('update' => [], @_);
+ my $self = shift @_;
+ my $source = shift @_;
+ my $bind_attributes = $self->source_bind_attributes($source);
+
+ return $self->_execute('update' => [], $source, $bind_attributes, @_);
}
+
sub delete {
- return shift->_execute('delete' => [], @_);
+ my $self = shift @_;
+ my $source = shift @_;
+
+ my $bind_attrs = {}; ## If ever it's needed...
+
+ return $self->_execute('delete' => [], $source, $bind_attrs, @_);
}
sub _select {
($order ? (order_by => $order) : ())
};
}
- my @args = ('select', $attrs->{bind}, $ident, $select, $condition, $order);
+ my $bind_attrs = {}; ## Future support
+ my @args = ('select', $attrs->{bind}, $ident, $bind_attrs, $select, $condition, $order);
if ($attrs->{software_limit} ||
$self->sql_maker->_default_limit_syntax eq "GenericSubQ") {
$attrs->{software_limit} = 1;
return $self->_execute(@args);
}
+sub source_bind_attributes {
+ my ($self, $source) = @_;
+
+ my $bind_attributes;
+ foreach my $column ($source->columns) {
+
+ my $data_type = $source->column_info($column)->{data_type} || '';
+ $bind_attributes->{$column} = $self->bind_attribute_by_data_type($data_type)
+ if $data_type;
+ }
+
+ return $bind_attributes;
+}
+
=head2 select
=over 4
return $self->cursor->new($self, \@_, $attrs);
}
-=head2 select_single
-
-Performs a select, fetch and return of data - handles a single row
-only.
-
-=cut
-
-# Need to call finish() to work round broken DBDs
-
sub select_single {
my $self = shift;
my ($rv, $sth, @bind) = $self->_select(@_);
my @row = $sth->fetchrow_array;
+ # Need to call finish() to work round broken DBDs
$sth->finish();
return @row;
}
=cut
-sub sth {
- my ($self, $sql) = @_;
- # 3 is the if_active parameter which avoids active sth re-use
- return $self->dbh->prepare_cached($sql, {}, 3);
-}
+sub _dbh_sth {
+ my ($self, $dbh, $sql) = @_;
-=head2 columns_info_for
+ # 3 is the if_active parameter which avoids active sth re-use
+ my $sth = $self->disable_sth_caching
+ ? $dbh->prepare($sql)
+ : $dbh->prepare_cached($sql, {}, 3);
-Returns database type info for a given table column.
+ $self->throw_exception(
+ 'no sth generated via sql (' . ($@ || $dbh->errstr) . "): $sql"
+ ) if !$sth;
-=cut
+ $sth;
+}
-sub columns_info_for {
- my ($self, $table) = @_;
+sub sth {
+ my ($self, $sql) = @_;
+ $self->dbh_do($self->can('_dbh_sth'), $sql);
+}
- my $dbh = $self->dbh;
+sub _dbh_columns_info_for {
+ my ($self, $dbh, $table) = @_;
if ($dbh->can('column_info')) {
my %result;
- local $dbh->{RaiseError} = 1;
- local $dbh->{PrintError} = 0;
eval {
my ($schema,$tab) = $table =~ /^(.+?)\.(.+)$/ ? ($1,$2) : (undef,$table);
my $sth = $dbh->column_info( undef,$schema, $tab, '%' );
$sth->execute();
-
while ( my $info = $sth->fetchrow_hashref() ){
my %column_info;
$column_info{data_type} = $info->{TYPE_NAME};
}
my %result;
- my $sth = $dbh->prepare("SELECT * FROM $table WHERE 1=0");
+ my $sth = $dbh->prepare($self->sql_maker->select($table, undef, \'1 = 0'));
$sth->execute;
my @columns = @{$sth->{NAME_lc}};
for my $i ( 0 .. $#columns ){
my %column_info;
- my $type_num = $sth->{TYPE}->[$i];
- my $type_name;
- if(defined $type_num && $dbh->can('type_info')) {
- my $type_info = $dbh->type_info($type_num);
- $type_name = $type_info->{TYPE_NAME} if $type_info;
- }
- $column_info{data_type} = $type_name ? $type_name : $type_num;
+ $column_info{data_type} = $sth->{TYPE}->[$i];
$column_info{size} = $sth->{PRECISION}->[$i];
$column_info{is_nullable} = $sth->{NULLABLE}->[$i] ? 1 : 0;
$result{$columns[$i]} = \%column_info;
}
+ $sth->finish;
+
+ foreach my $col (keys %result) {
+ my $colinfo = $result{$col};
+ my $type_num = $colinfo->{data_type};
+ my $type_name;
+ if(defined $type_num && $dbh->can('type_info')) {
+ my $type_info = $dbh->type_info($type_num);
+ $type_name = $type_info->{TYPE_NAME} if $type_info;
+ $colinfo->{data_type} = $type_name if $type_name;
+ }
+ }
return \%result;
}
+sub columns_info_for {
+ my ($self, $table) = @_;
+ $self->dbh_do($self->can('_dbh_columns_info_for'), $table);
+}
+
=head2 last_insert_id
Return the row id of the last insert.
=cut
-sub last_insert_id {
- my ($self, $row) = @_;
-
- return $self->dbh->func('last_insert_rowid');
+sub _dbh_last_insert_id {
+ my ($self, $dbh, $source, $col) = @_;
+ # XXX This is a SQLite-ism as a default... is there a DBI-generic way?
+ $dbh->func('last_insert_rowid');
+}
+sub last_insert_id {
+ my $self = shift;
+ $self->dbh_do($self->can('_dbh_last_insert_id'), @_);
}
=head2 sqlt_type
sub sqlt_type { shift->dbh->{Driver}->{Name} }
+=head2 bind_attribute_by_data_type
+
+Given a datatype from column info, returns a database-specific bind attribute
+for C<< $dbh->bind_param($val, $attribute) >>, or nothing if we will let the
+database planner just handle it.
+
+Generally only needed for special-case column types, like C<bytea> in
+PostgreSQL.
+
+=cut
+
+sub bind_attribute_by_data_type {
+ return;
+}
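A storage subclass can override this hook to map column datatypes to bind attributes. A hedged, self-contained sketch of the dispatch (the C<pg_type> key and C<PLACEHOLDER_BYTEA> value are illustrative stand-ins, not the real DBD::Pg constants):

```perl
use strict;
use warnings;

# Hypothetical override: return a bind attribute hashref for 'bytea'
# columns, and nothing (undef) for types the database handles itself.
# The attribute contents below are placeholders, not real DBD::Pg values.
my %attr_for_type = (
    bytea => { pg_type => 'PLACEHOLDER_BYTEA' },
);

sub bind_attribute_by_data_type {
    my ($data_type) = @_;
    return $attr_for_type{$data_type};   # undef => database default
}

my $attr = bind_attribute_by_data_type('bytea');   # hashref
my $none = bind_attribute_by_data_type('varchar'); # undef
```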
+
=head2 create_ddl_dir (EXPERIMENTAL)
=over 4
-=item Arguments: $schema \@databases, $version, $directory, $sqlt_args
+=item Arguments: $schema \@databases, $version, $directory, $preversion, $sqlt_args
=back
sub create_ddl_dir
{
- my ($self, $schema, $databases, $version, $dir, $sqltargs) = @_;
+ my ($self, $schema, $databases, $version, $dir, $preversion, $sqltargs) = @_;
if(!$dir || !-d $dir)
{
$sqltargs = { ( add_drop_table => 1 ), %{$sqltargs || {}} };
eval "use SQL::Translator";
- $self->throw_exception("Can't deploy without SQL::Translator: $@") if $@;
+ $self->throw_exception("Can't create a ddl file without SQL::Translator: $@") if $@;
- my $sqlt = SQL::Translator->new($sqltargs);
+ my $sqlt = SQL::Translator->new({
+# debug => 1,
+ add_drop_table => 1,
+ });
foreach my $db (@$databases)
{
$sqlt->reset();
$sqlt->parser('SQL::Translator::Parser::DBIx::Class');
# $sqlt->parser_args({'DBIx::Class' => $schema);
+ $sqlt = $self->configure_sqlt($sqlt, $db);
$sqlt->data($schema);
$sqlt->producer($db);
my $filename = $schema->ddl_filename($db, $dir, $version);
if(-e $filename)
{
- $self->throw_exception("$filename already exists, skipping $db");
+ warn("$filename already exists, skipping $db");
next;
}
- open($file, ">$filename")
- or $self->throw_exception("Can't open $filename for writing ($!)");
+
my $output = $sqlt->translate;
-#use Data::Dumper;
-# print join(":", keys %{$schema->source_registrations});
-# print Dumper($sqlt->schema);
if(!$output)
{
- $self->throw_exception("Failed to translate to $db. (" . $sqlt->error . ")");
+ warn("Failed to translate to $db, skipping. (" . $sqlt->error . ")");
next;
}
+ if(!open($file, ">$filename"))
+ {
+ $self->throw_exception("Can't open $filename for writing ($!)");
+ next;
+ }
print $file $output;
close($file);
+
+ if($preversion)
+ {
+ eval "use SQL::Translator::Diff";
+ if($@)
+ {
+ warn("Can't diff versions without SQL::Translator::Diff: $@");
+ next;
+ }
+
+ my $prefilename = $schema->ddl_filename($db, $dir, $preversion);
+# print "Previous version $prefilename\n";
+ if(!-e $prefilename)
+ {
+ warn("No previous schema file found ($prefilename)");
+ next;
+ }
+ #### We need to reparse the SQLite file we just wrote, so that
+ ## Diff doesn't get all confused, and Diff is *very* easily confused.
+ ## FIXME: rip Diff to pieces!
+# my $target_schema = $sqlt->schema;
+# unless ( $target_schema->name ) {
+# $target_schema->name( $filename );
+# }
+ my @input;
+ push @input, {file => $prefilename, parser => $db};
+ push @input, {file => $filename, parser => $db};
+ my ( $source_schema, $source_db, $target_schema, $target_db ) = map {
+ my $file = $_->{'file'};
+ my $parser = $_->{'parser'};
+
+ my $t = SQL::Translator->new;
+ $t->debug( 0 );
+ $t->trace( 0 );
+ $t->parser( $parser ) or die $t->error;
+ my $out = $t->translate( $file ) or die $t->error;
+ my $schema = $t->schema;
+ unless ( $schema->name ) {
+ $schema->name( $file );
+ }
+ ($schema, $parser);
+ } @input;
+
+ my $diff = SQL::Translator::Diff::schema_diff($source_schema, $db,
+ $target_schema, $db,
+ {}
+ );
+ my $difffile = $schema->ddl_filename($db, $dir, $version, $preversion);
+ if(-e $difffile)
+ {
+ warn("$difffile already exists, skipping");
+ next;
+ }
+ if(!open $file, ">$difffile")
+ {
+ $self->throw_exception("Can't write to $difffile ($!)");
+ }
+ print $file $diff;
+ close($file);
+ }
}
+}
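The diff step above loads SQL::Translator::Diff lazily via a string eval and degrades to a warning when the module is missing, rather than aborting the whole DDL run. A minimal sketch of that guard pattern, using a deliberately bogus module name so the failure branch is taken:

```perl
use strict;
use warnings;

# Attempt the optional load; a string eval turns a failed require
# into $@ instead of a fatal compile-time error.
eval "use No::Such::Module::Here";
if ($@) {
    # In create_ddl_dir this is a warn + next: the full DDL file is
    # still written, only the version diff is skipped.
    warn "Can't diff versions without No::Such::Module::Here: $@";
}
print $@ ? "skipped\n" : "loaded\n";  # skipped
```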
+sub configure_sqlt {
+ my $self = shift;
+ my $tr = shift;
+ my $db = shift || $self->sqlt_type;
+ if ($db eq 'PostgreSQL') {
+ $tr->quote_table_names(0);
+ $tr->quote_field_names(0);
+ }
+ return $tr;
}
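The per-database hook above can be exercised in isolation. A sketch with a mock translator (Mock::Translator is invented here; the real method receives an SQL::Translator instance, whose quoting accessors work the same way):

```perl
use strict;
use warnings;

# Minimal stand-in for SQL::Translator: just the two quoting
# accessors that configure_sqlt touches.
package Mock::Translator;
sub new { bless { quote_table_names => 1, quote_field_names => 1 }, shift }
sub quote_table_names {
    my $self = shift;
    $self->{quote_table_names} = shift if @_;
    return $self->{quote_table_names};
}
sub quote_field_names {
    my $self = shift;
    $self->{quote_field_names} = shift if @_;
    return $self->{quote_field_names};
}

package main;

# Same logic as configure_sqlt above: PostgreSQL DDL is generated
# with identifier quoting turned off; other databases are untouched.
sub configure_sqlt {
    my ($tr, $db) = @_;
    if ($db eq 'PostgreSQL') {
        $tr->quote_table_names(0);
        $tr->quote_field_names(0);
    }
    return $tr;
}

my $tr = configure_sqlt(Mock::Translator->new, 'PostgreSQL');
print "table quoting: ", $tr->quote_table_names, "\n";  # table quoting: 0
```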
=head2 deployment_statements
$type ||= $self->sqlt_type;
$version ||= $schema->VERSION || '1.x';
$dir ||= './';
+ my $filename = $schema->ddl_filename($type, $dir, $version);
+ if(-f $filename)
+ {
+ my $file;
+ open($file, "<$filename")
+ or $self->throw_exception("Can't open $filename ($!)");
+ my @rows = <$file>;
+ close($file);
+ return join('', @rows);
+ }
+
eval "use SQL::Translator";
if(!$@)
{
$self->throw_exception($@) if $@;
eval "use SQL::Translator::Producer::${type};";
$self->throw_exception($@) if $@;
+
+ # 'sources' needs to be a parser arg, but for simplicity we also
+ # accept it at the top level and move it into parser_args here
+ $sqltargs->{parser_args}{sources} = delete $sqltargs->{sources}
+ if exists $sqltargs->{sources};
+
my $tr = SQL::Translator->new(%$sqltargs);
SQL::Translator::Parser::DBIx::Class::parse( $tr, $schema );
return "SQL::Translator::Producer::${type}"->can('produce')->($tr);
}
- my $filename = $schema->ddl_filename($type, $dir, $version);
- if(!-f $filename)
- {
-# $schema->create_ddl_dir([ $type ], $version, $dir, $sqltargs);
- $self->throw_exception("No SQL::Translator, and no Schema file found, aborting deploy");
- return;
- }
- my $file;
- open($file, "<$filename")
- or $self->throw_exception("Can't open $filename ($!)");
- my @rows = <$file>;
- close($file);
+ $self->throw_exception("No SQL::Translator, and no Schema file found, aborting deploy");
- return join('', @rows);
-
}
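With the change above, deployment_statements now prefers a pre-generated DDL file on disk and only falls back to SQL::Translator when no file exists. The file-first branch can be sketched standalone (the filename below imitates ddl_filename's naming; the schema name is hypothetical):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir      = tempdir(CLEANUP => 1);
my $filename = "$dir/My-Schema-1.0-SQLite.sql";  # ddl_filename-style name

# Pretend create_ddl_dir already wrote the DDL for this type/version.
open my $out, '>', $filename or die "Can't write $filename: $!";
print $out "CREATE TABLE artist (id INTEGER PRIMARY KEY);\n";
close $out;

# The file-first branch: if the DDL file exists, return its contents
# verbatim and never touch SQL::Translator.
my $ddl;
if (-f $filename) {
    open my $in, '<', $filename or die "Can't open $filename: $!";
    $ddl = join '', <$in>;
    close $in;
}
print $ddl;
```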
-=head2 deploy
-
-Sends the appropriate statements to create or modify tables to the
-db. This would normally be called through
-L<DBIx::Class::Schema/deploy>.
-
-=cut
-
sub deploy {
my ($self, $schema, $type, $sqltargs, $dir) = @_;
foreach my $statement ( $self->deployment_statements($schema, $type, undef, $dir, { no_comments => 1, %{ $sqltargs || {} } } ) ) {
next if($statement =~ /^COMMIT/m);
next if $statement =~ /^\s+$/; # skip whitespace only
$self->debugobj->query_start($statement) if $self->debug;
- $self->dbh->do($statement) or warn "SQL was:\n $statement";
+ $self->dbh->do($statement) or warn "SQL was:\n $statement"; # XXX exceptions?
$self->debugobj->query_end($statement) if $self->debug;
}
}
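deploy()'s filtering of the generated statements can be shown on its own: COMMIT lines and whitespace-only statements are dropped before anything is sent to the database. The statements below are made-up stand-ins for deployment_statements output:

```perl
use strict;
use warnings;

my @statements = (
    "CREATE TABLE artist (id INTEGER PRIMARY KEY)",
    "COMMIT",
    "   ",
    "CREATE TABLE cd (id INTEGER PRIMARY KEY)",
);

my @to_run;
for my $statement (@statements) {
    next if $statement =~ /^COMMIT/m;  # the producer emits its own COMMITs
    next if $statement =~ /^\s+$/;     # skip whitespace-only statements
    push @to_run, $statement;          # real code: $self->dbh->do($statement)
}
print scalar(@to_run), "\n";  # 2
```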
}
sub DESTROY {
- # NOTE: if there's a merge conflict here when -current is pushed
- # back to trunk, take -current's version and ignore this trunk one :)
my $self = shift;
-
- if($self->_dbh && $self->_conn_pid != $$) {
- $self->_dbh->{InactiveDestroy} = 1;
- }
-
+ return if !$self->_dbh;
+ $self->_verify_pid;
$self->_dbh(undef);
}
=back
-=head1 ENVIRONMENT VARIABLES
-
-=head2 DBIC_TRACE
-
-If C<DBIC_TRACE> is set then SQL trace information
-is produced (as when the L<debug> method is set).
-
-If the value is of the form C<1=/path/name> then the trace output is
-written to the file C</path/name>.
-
-This environment variable is checked when the storage object is first
-created (when you call connect on your schema). So, run-time changes
-to this environment variable will not take effect unless you also
-re-connect on your schema.
-
-=head2 DBIX_CLASS_STORAGE_DBI_DEBUG
-
-Old name for DBIC_TRACE
-
=head1 AUTHORS
Matt S. Trout <mst@shadowcatsystems.co.uk>
[ { father => 'person' }, { 'father.person_id' => { '!=', '42' } }, ],
[ { mother => 'person' }, { 'mother.person_id' => 'child.mother_id' } ],
);
- $match = qr/^\QHASH reference arguments are not supported in JOINS - try using \"..." instead\E/;
+ $match = qr/^HASH reference arguments are not supported in JOINS - try using "\.\.\." instead/;
eval { $sa->_recurse_from(@j6) };
like( $@, $match, 'join 6 (HASH reference for ON statement dies) ok' );
is_deeply( $prefetch_result, $nonpre_result,
'Compare 2 level prefetch result to non-prefetch result' );
}
+