Revision history for DBIx::Class
+0.05999_04
+ - Fix for delete on full-table resultsets
+ - Removed caching on count() and added _count for pager()
+ - ->connection does nothing if ->storage defined and no args
+ (and hence ->connect acts like ->clone under the same conditions)
+ - Storage::DBI throws better exception if no connect info
+ - columns_info_for made more robust / informative
+ - ithreads compat added, fork compat improved
+ - weaken result_source in all resultsets
+        - Make pg seq extractor less sensitive
+
+0.05999_03 2006-03-14 01:58:10
+ - has_many prefetch fixes
+ - deploy now adds drop statements before creates
+ - deploy outputs debugging statements if DBIX_CLASS_STORAGE_DBI_DEBUG
+ is set
+
+0.05999_02 2006-03-10 13:31:37
+ - remove test dep on YAML
+ - additional speed tweaks for C3
+ - allow scalarefs passed to order_by to go straight through to SQL
+ - renamed insert_or_update to update_or_insert (with compat alias)
+        - hid lots of packages from the PAUSE Indexer
+
+0.05999_01 2006-03-09 18:31:44
+ - renamed cols attribute to columns (cols still supported)
+ - added has_column_loaded to Row
+ - Storage::DBI connect_info supports coderef returning dbh as 1st arg
+ - load_components() doesn't prepend base when comp. prefixed with +
- $schema->deploy
- HAVING support
- prefetch for has_many
(sponsored by Airspace Software, http://www.airspace.co.uk/)
- clean up set_from_related
- made copy() automatically null out auto-inc columns
+ - added txn_do() method to Schema, which allows a coderef to be
+ executed atomically
0.05007 2006-02-24 00:59:00
- tweak to Componentised for Class::C3 0.11
- count will now work for grouped resultsets
- added accessor => option to column_info to specify accessor name
- added $schema->populate to load test data (similar to AR fixtures)
- - removed cdbi-t dependencies, only run tests if installed
- - Removed DBIx::Class::Exception
- - unified throw_exception stuff, using Carp::Clan
- - report query when sth generation fails.
+ - removed cdbi-t dependencies, only run tests if installed
+ - Removed DBIx::Class::Exception
+ - unified throw_exception stuff, using Carp::Clan
+ - report query when sth generation fails.
- multi-step prefetch!
- inheritance fixes
- test tweaks
- made Storage::DBI use prepare_cached safely (thanks to Tim Bunce)
- many documentation improvements (thanks guys!)
- added ->connection, ->connect, ->register_source and ->clone schema methods
- - Use croak instead of die for user errors.
+ - Use croak instead of die for user errors.
0.04999_02 2006-01-14 07:17:35
- Schema is now self-contained; no requirement for co-operation
0.03004
- Added an || '' to the CDBICompat stringify to avoid null warnings
- - Updated name section for manual pods
+ - Updated name section for manual pods
0.03003 2005-11-03 17:00:00
- POD fixes.
- Changed use to require in Relationship/Base to avoid import.
+++ /dev/null
-NAME
- DBIx::Class - Extensible and flexible object <-> relational mapper.
-
-SYNOPSIS
-DESCRIPTION
- This is an SQL to OO mapper, inspired by the Class::DBI framework, and
-    meant to support compatibility with it, while restructuring the internals
- and making it possible to support some new features like self-joins,
- distinct, group bys and more.
-
- This project is still at an early stage, so the maintainers don't make
- any absolute promise that full backwards-compatibility will be
- supported; however, if we can without compromising the improvements
- we're trying to make, we will, and any non-compatible changes will merit
- a full justification on the mailing list and a CPAN developer release
- for people to test against.
-
- The community can be found via -
-
- Mailing list: http://lists.rawmode.org/mailman/listinfo/dbix-class/
-
- SVN: http://dev.catalyst.perl.org/repos/bast/trunk/DBIx-Class/
-
- Wiki: http://dbix-class.shadowcatsystems.co.uk/
-
- IRC: irc.perl.org#dbix-class
-
-QUICKSTART
- If you're using Class::DBI, and want an easy and fast way of migrating
- to DBIx::Class, take a look at DBIx::Class::CDBICompat.
-
- There are two ways of using DBIx::Class, the "simple" way and the
-    "schema" way. The "simple" way of using DBIx::Class needs fewer classes
-    than the "schema" way but doesn't give you the ability to easily use
- different database connections.
-
- Some examples where different database connections are useful are:
-
-    different users with different rights; different databases with the
-    same schema.
-
- Simple
- First you need to create a base class which all other classes will
- inherit from. See DBIx::Class::DB for information on how to do this.
-
- Then you need to create a class for every table you want to use with
- DBIx::Class. See DBIx::Class::Table for information on how to do this.
-
- Schema
- With this approach, the table classes inherit directly from
- DBIx::Class::Core, although it might be a good idea to create a "parent"
- class for all table classes that inherits from DBIx::Class::Core and
- adds additional methods needed by all table classes, e.g. reading a
- config file or loading auto primary key support.
-
- Look at DBIx::Class::Schema for information on how to do this.
-
- If you need more help, check out the introduction in the manual below.
-
-SEE ALSO
- DBIx::Class::Core - DBIC Core Classes
- DBIx::Class::Manual - User's manual
- DBIx::Class::CDBICompat - Class::DBI Compat layer
- DBIx::Class::Schema
- DBIx::Class::ResultSet
- DBIx::Class::ResultSource
- DBIx::Class::Row - row-level methods
- DBIx::Class::PK - primary key methods
- DBIx::Class::Relationship - relationships between tables
-
-AUTHOR
- Matt S. Trout <mst@shadowcatsystems.co.uk>
-
-CONTRIBUTORS
- Andy Grundman <andy@hybridized.org>
-
- Brian Cassidy <bricas@cpan.org>
-
- Dan Kubb <dan.kubb-cpan@onautopilot.com>
-
- Dan Sully <daniel@cpan.org>
-
- David Kamholz <dkamholz@cpan.org>
-
- Jules Bean
-
- Marcus Ramberg <mramberg@cpan.org>
-
- Paul Makepeace
-
- CL Kao
-
- Jess Robinson
-
- Marcus Ramberg
-
- Will Hawes
-
- Todd Lipcon
-
- Daniel Westermann-Clark <danieltwc@cpan.org>
-
- Alexander Hartmaier <alex_hartmaier@hotmail.com>
-
- Zbigniew Lukasiak
-
- Nigel Metheringham <nigelm@cpan.org>
-
- Jesper Krogh
-
- Brandon Black
-
- Scotty Allen <scotty@scottyallen.com>
-
- Justin Guenther <jguenther@gmail.com>
-
-LICENSE
- You may distribute this code under the same terms as Perl itself.
-
-Added 2006-02-07:
-JR - Extract DBIC::SQL::Abstract into a separate module for CPAN
- - Chop PK::Auto::Foo up to have PK::Auto refer to an appropriate
+
+2006-01-31 by bluefeet
+ - Create a DBIx::Class::FilterColumn to replace inflate/deflate. This
+ component would provide a new syntax for filtering column update and
+ retrieval through a simple syntax. The syntax would be:
+ __PACKAGE__->add_columns(phone => { set=>sub{ ... }, get=>sub{ ... } });
+ We should still support the old inflate/deflate syntax, but this new
+ way should be recommended.
+
+2006-02-07 by JR
+ - Extract DBIC::SQL::Abstract into a separate module for CPAN
+ - Chop PK::Auto::Foo up to have PK::Auto refer to an appropriate
DBIx::Storage::DBI::Foo, which will be loaded on connect from Driver info?
- - Add deploy method to Schema, which will create DB tables from Schema, via
+ - Add deploy method to Schema, which will create DB tables from Schema, via
SQLT
+
+2006-03-18 by bluefeet
+ - Support table locking.
+
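The proposed get/set filter pair can be exercised outside DBIC as plain coderefs. The sketch below is a hypothetical illustration of the suggested behaviour only — the column name, formats, and helper subs are invented, not the eventual FilterColumn API:

```perl
use strict;
use warnings;

# Hypothetical sketch of the proposed column filters: a set coderef runs on
# store, a get coderef runs on retrieval. Names and formats are invented.
my %filters = (
    phone => {
        set => sub { (my $v = shift) =~ s/\D//g; $v },   # store digits only
        get => sub { sprintf '(%s) %s-%s', shift =~ /^(\d{3})(\d{3})(\d{4})$/ },
    },
);

my %row;
sub store_column { my ($col, $val) = @_; $row{$col} = $filters{$col}{set}->($val) }
sub fetch_column { my ($col)       = @_; $filters{$col}{get}->($row{$col}) }

store_column(phone => '555-867-5309');
print fetch_column('phone'), "\n";   # (555) 867-5309
```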
# i.e. first release of 0.XX *must* be 0.XX000. This avoids fBSD ports
# brain damage and presumably various other packaging systems too
-$VERSION = '0.05007';
+$VERSION = '0.05999_04';
sub MODIFY_CODE_ATTRIBUTES {
my ($class,$code,@attrs) = @_;
- unless ($class->can('_attr_cache')) {
- $class->mk_classdata('_attr_cache');
- $class->_attr_cache({});
- }
- my $cache = $class->_attr_cache;
- $class->_attr_cache->{$code} = [@attrs];
+ $class->mk_classdata('__attr_cache' => {}) unless $class->can('__attr_cache');
+ $class->__attr_cache->{$code} = [@attrs];
return ();
}
+sub _attr_cache {
+ my $self = shift;
+ my $cache = $self->can('__attr_cache') ? $self->__attr_cache : {};
+ my $rest = eval { $self->next::method };
+ return $@ ? $cache : { %$cache, %$rest };
+}
+
1;
=head1 NAME
=head1 CONTRIBUTORS
+Alexander Hartmaier <alex_hartmaier@hotmail.com>
+
Andy Grundman <andy@hybridized.org>
-Brian Cassidy <bricas@cpan.org>
+Andres Kievsky
-Dan Kubb <dan.kubb-cpan@onautopilot.com>
+Brandon Black
-Dan Sully <daniel@cpan.org>
+Brian Cassidy <bricas@cpan.org>
-David Kamholz <dkamholz@cpan.org>
+Christopher H. Laco
-Jules Bean
+CL Kao
-Marcus Ramberg <mramberg@cpan.org>
+Daisuke Murase <typester@cpan.org>
-Paul Makepeace
+Dan Kubb <dan.kubb-cpan@onautopilot.com>
-CL Kao
+Dan Sully <daniel@cpan.org>
-Jess Robinson
+Daniel Westermann-Clark <danieltwc@cpan.org>
-Marcus Ramberg
+David Kamholz <dkamholz@cpan.org>
-Will Hawes
+Jesper Krogh
-Todd Lipcon
+Jess Robinson
-Daniel Westermann-Clark <danieltwc@cpan.org>
+Jules Bean
-Alexander Hartmaier <alex_hartmaier@hotmail.com>
+Justin Guenther <guentherj@agr.gc.ca>
-Zbigniew Lukasiak
+Marcus Ramberg <mramberg@cpan.org>
Nigel Metheringham <nigelm@cpan.org>
-Jesper Krogh
+Paul Makepeace
-Brandon Black
+Robert Sedlacek <phaylon@dunkelheit.at>
-Christopher H. Laco
+sc_ of irc.perl.org#dbix-class
+
+Scott McWhirter (konobi)
Scotty Allen <scotty@scottyallen.com>
-sc_
+Todd Lipcon
+
+Will Hawes
=head1 LICENSE
-package DBIx::Class::CDBICompat::AccessorMapping;
+package # hide from PAUSE Indexer
+ DBIx::Class::CDBICompat::AccessorMapping;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::AttributeAPI;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::AttributeAPI;
sub _attrs {
my ($self, @atts) = @_;
-package DBIx::Class::CDBICompat::AutoUpdate;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::AutoUpdate;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::ColumnCase;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::ColumnCase;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::ColumnGroups;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::ColumnGroups;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::Constraints;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::Constraints;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::Constructor;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::Constructor;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::DestroyWarning;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::DestroyWarning;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::GetSet;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::GetSet;
#use base qw/Class::Accessor/;
-package DBIx::Class::CDBICompat::HasA;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::HasA;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::HasMany;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::HasMany;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::ImaDBI;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::ImaDBI;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::LazyLoading;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::LazyLoading;
use strict;
use warnings;
sub resultset_instance {
my $self = shift;
my $rs = $self->next::method(@_);
- $rs = $rs->search(undef, { cols => [ $self->columns('Essential') ] });
+ $rs = $rs->search(undef, { columns => [ $self->columns('Essential') ] });
return $rs;
}
-package DBIx::Class::CDBICompat::LiveObjectIndex;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::LiveObjectIndex;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::MightHave;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::MightHave;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::ObjIndexStubs;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::ObjIndexStubs;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::Pager;
+package # hide from PAUSE
+  DBIx::Class::CDBICompat::Pager;

use strict;
use warnings FATAL => 'all';
-package DBIx::Class::CDBICompat::ReadOnly;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::ReadOnly;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::Retrieve;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::Retrieve;
use strict;
use warnings FATAL => 'all';
-package DBIx::Class::CDBICompat::Stringify;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::Stringify;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::TempColumns;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::TempColumns;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::Triggers;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::Triggers;
use strict;
use warnings;
-package DBIx::Class::ClassResolver::PassThrough;
+package # hide from PAUSE
+ DBIx::Class::ClassResolver::PassThrough;
sub class {
shift;
-package DBIx::Class::Componentised;
+package # hide from PAUSE
+ DBIx::Class::Componentised;
use Class::C3;
sub load_components {
my $class = shift;
my $base = $class->component_base_class;
- my @comp = map { "${base}::$_" } grep { $_ !~ /^#/ } @_;
+ my @comp = map { /^\+(.*)$/ ? $1 : "${base}::$_" } grep { $_ !~ /^#/ } @_;
$class->_load_components(@comp);
Class::C3::reinitialize();
}
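The new name-expansion rule in load_components() can be checked in isolation: a leading C<+> means "use this class name verbatim", anything else gets the component base class prepended, and names starting with C<#> are skipped. The component names below are made up for illustration:

```perl
use strict;
use warnings;

# The same map/grep used by the patched load_components(), applied to a
# hand-rolled list of component names (the names are illustrative only).
my $base = 'DBIx::Class';
my @comp = map  { /^\+(.*)$/ ? $1 : "${base}::$_" }
           grep { $_ !~ /^#/ }
           ('Core', 'PK::Auto', '+My::Private::Component', '#Disabled');

print join(', ', @comp), "\n";
# DBIx::Class::Core, DBIx::Class::PK::Auto, My::Private::Component
```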
-package DBIx::Class::Cursor;
+package # hide from PAUSE
+ DBIx::Class::Cursor;
use strict;
use warnings;
foreach my $key (keys %$attrs) {
if (ref $attrs->{$key}
&& exists $class->column_info($key)->{_inflate_info}) {
- $attrs->{$key} = $class->_deflated_column($key, $attrs->{$key});
+# $attrs->{$key} = $class->_deflated_column($key, $attrs->{$key});
+ $class->set_inflated_column ($key, delete $attrs->{$key});
}
}
return $class->next::method($attrs, @rest);
paged resultset, which will fetch only a small number of records at a time:
my $rs = $schema->resultset('Artist')->search(
- {},
+ undef,
{
page => 1, # page to return (defaults to 1)
rows => 10, # number of results per page
The C<page> attribute does not have to be specified in your search:
my $rs = $schema->resultset('Artist')->search(
- {},
+ undef,
{
rows => 10,
}
specify which ones you need:
my $rs = $schema->resultset('Artist')->search(
- {},
+ undef,
{
- cols => [qw/ name /]
+ columns => [qw/ name /]
}
);
to access the returned value:
my $rs = $schema->resultset('Artist')->search(
- {},
+ undef,
{
select => [ 'name', { LENGTH => 'name' } ],
as => [qw/ name name_length /],
=head3 SELECT DISTINCT with multiple columns
my $rs = $schema->resultset('Foo')->search(
- {},
+ undef,
{
select => [
{ distinct => [ $source->columns ] }
=head3 SELECT COUNT(DISTINCT colname)
my $rs = $schema->resultset('Foo')->search(
- {},
+ undef,
{
select => [
{ count => { distinct => 'colname' } }
L<DBIx::Class> supports C<GROUP BY> as follows:
my $rs = $schema->resultset('Artist')->search(
- {},
+ undef,
{
join => [qw/ cds /],
select => [ 'name', { count => 'cds.cdid' } ],
deep using the same syntax as a multi-step join:
my $rs = $schema->resultset('Tag')->search(
- {},
+ undef,
{
prefetch => {
cd => 'artist'
__PACKAGE__->set_primary_key( qw/ albumid artistid / );
+=begin hide
+
You can define relationships for any of your classes. L<DBIx::Class> will
automatically fill in the correct namespace, so if you want to say
"a My::Schema::Album object belongs to a My::Schema::Artist object" you do not
__PACKAGE__->belongs_to('artist' => 'Artist');
+=end hide
+
That's all you need in terms of setup.
=head2 Usage
$new_album->delete;
-You can also remove records without or retrieving first. This
-operation takes the same kind of arguments as a search.
+You can also remove records without retrieving them first, by calling
+delete directly on a ResultSet object.
# Delete all of Falco's albums
- $schema->resultset('Album')->delete({ artist => 'Falco' });
+ $schema->resultset('Album')->search({ artist => 'Falco' })->delete;
=head2 Finding your objects
-package DBIx::Class::Relationship::Accessor;
+package # hide from PAUSE
+ DBIx::Class::Relationship::Accessor;
use strict;
use warnings;
if ($@) {
$class->throw_exception($@) unless $@ =~ /Can't locate/;
}
-
- my %f_primaries;
- $f_primaries{$_} = 1 for eval { $f_class->primary_columns };
- my $f_loaded = !$@;
- # single key relationship
+ # no join condition or just a column name
if (!ref $cond) {
+ my %f_primaries = map { $_ => 1 } eval { $f_class->primary_columns };
$class->throw_exception("Can't infer join condition for ${rel} on ${class}; unable to load ${f_class}")
- unless $f_loaded;
+ if $@;
my ($pri, $too_many) = keys %f_primaries;
$class->throw_exception("Can't infer join condition for ${rel} on ${class}; ${f_class} has no primary keys")
unless defined $pri;
- $class->throw_exception("Can't infer join condition for ${rel} on ${class}; ${f_class} has multiple primary key")
+ $class->throw_exception("Can't infer join condition for ${rel} on ${class}; ${f_class} has multiple primary keys")
if $too_many;
my $fk = defined $cond ? $cond : $rel;
{ accessor => $acc_type, %{$attrs || {}} }
);
}
- # multiple key relationship
+ # explicit join condition
elsif (ref $cond eq 'HASH') {
my $cond_rel;
for (keys %$cond) {
-package DBIx::Class::Relationship::CascadeActions;
+package # hide from PAUSE
+ DBIx::Class::Relationship::CascadeActions;
sub delete {
my ($self, @rest) = @_;
-package DBIx::Class::Relationship::HasMany;
+package # hide from PAUSE
+ DBIx::Class::Relationship::HasMany;
use strict;
use warnings;
my ($pri, $too_many) = $class->primary_columns;
$class->throw_exception( "has_many can only infer join for a single primary key; ${class} has more" )
if $too_many;
- my $f_key;
- my $f_class_loaded = eval { $f_class->columns };
- my $guess;
+
+ my ($f_key,$guess);
if (defined $cond && length $cond) {
$f_key = $cond;
$guess = "caller specified foreign key '$f_key'";
$f_key = lc $1; # go ahead and guess; best we can do
$guess = "using our class name '$class' as foreign key";
}
+
+ my $f_class_loaded = eval { $f_class->columns };
$class->throw_exception("No such column ${f_key} on foreign class ${f_class} ($guess)")
if $f_class_loaded && !$f_class->has_column($f_key);
- $cond = { "foreign.${f_key}" => "self.${pri}" },
+
+ $cond = { "foreign.${f_key}" => "self.${pri}" };
}
$class->add_relationship($rel, $f_class, $cond,
-package DBIx::Class::Relationship::HasOne;
+package # hide from PAUSE
+ DBIx::Class::Relationship::HasOne;
use strict;
use warnings;
my ($pri, $too_many) = $class->primary_columns;
$class->throw_exception( "might_have/has_one can only infer join for a single primary key; ${class} has more" )
if $too_many;
- my $f_key;
my $f_class_loaded = eval { $f_class->columns };
- my $guess;
+ my ($f_key,$guess);
if (defined $cond && length $cond) {
$f_key = $cond;
$guess = "caller specified foreign key '$f_key'";
-package DBIx::Class::Relationship::Helpers;
+package # hide from PAUSE
+ DBIx::Class::Relationship::Helpers;
use strict;
use warnings;
-package DBIx::Class::Relationship::ManyToMany;
+package # hide from PAUSE
+ DBIx::Class::Relationship::ManyToMany;
use strict;
use warnings;
sub many_to_many {
my ($class, $meth, $rel, $f_rel, $rel_attrs) = @_;
- $rel_attrs ||= {};
-
{
no strict 'refs';
no warnings 'redefine';
*{"${class}::${meth}"} = sub {
my $self = shift;
my $attrs = @_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {};
- $self->search_related($rel)->search_related($f_rel, @_ > 0 ? @_ : undef, { %$rel_attrs, %$attrs });
+ $self->search_related($rel)->search_related($f_rel, @_ > 0 ? @_ : undef, { %{$rel_attrs||{}}, %$attrs });
};
}
}
-package DBIx::Class::Relationship::ProxyMethods;
+package # hide from PAUSE
+ DBIx::Class::Relationship::ProxyMethods;
use strict;
use warnings;
fallback => 1;
use Data::Page;
use Storable;
+use Scalar::Util qw/weaken/;
use base qw/DBIx::Class/;
__PACKAGE__->load_components(qw/AccessorGroup/);
-__PACKAGE__->mk_group_accessors('simple' => 'result_source');
+__PACKAGE__->mk_group_accessors('simple' => qw/result_source result_class/);
=head1 NAME
sub new {
my $class = shift;
return $class->new_result(@_) if ref $class;
+
my ($source, $attrs) = @_;
- #use Data::Dumper; warn Dumper($attrs);
+ weaken $source;
$attrs = Storable::dclone($attrs || {}); # { %{ $attrs || {} } };
- my %seen;
+ #use Data::Dumper; warn Dumper($attrs);
my $alias = ($attrs->{alias} ||= 'me');
- if ($attrs->{cols} || !$attrs->{select}) {
- delete $attrs->{as} if $attrs->{cols};
- my @cols = ($attrs->{cols}
- ? @{delete $attrs->{cols}}
- : $source->columns);
- $attrs->{select} = [ map { m/\./ ? $_ : "${alias}.$_" } @cols ];
- }
- $attrs->{as} ||= [ map { m/^$alias\.(.*)$/ ? $1 : $_ } @{$attrs->{select}} ];
+
+ $attrs->{columns} ||= delete $attrs->{cols} if $attrs->{cols};
+ delete $attrs->{as} if $attrs->{columns};
+ $attrs->{columns} ||= [ $source->columns ] unless $attrs->{select};
+ $attrs->{select} = [ map { m/\./ ? $_ : "${alias}.$_" } @{delete $attrs->{columns}} ]
+ if $attrs->{columns};
+ $attrs->{as} ||= [ map { m/^\Q$alias.\E(.+)$/ ? $1 : $_ } @{$attrs->{select}} ];
if (my $include = delete $attrs->{include_columns}) {
push(@{$attrs->{select}}, @$include);
- push(@{$attrs->{as}}, map { m/([^\.]+)$/; $1; } @$include);
+ push(@{$attrs->{as}}, map { m/([^.]+)$/; $1; } @$include);
}
#use Data::Dumper; warn Dumper(@{$attrs}{qw/select as/});
+
$attrs->{from} ||= [ { $alias => $source->from } ];
$attrs->{seen_join} ||= {};
+ my %seen;
if (my $join = delete $attrs->{join}) {
- foreach my $j (ref $join eq 'ARRAY'
- ? (@{$join}) : ($join)) {
+ foreach my $j (ref $join eq 'ARRAY' ? @$join : ($join)) {
if (ref $j eq 'HASH') {
$seen{$_} = 1 foreach keys %$j;
} else {
}
push(@{$attrs->{from}}, $source->resolve_join($join, $attrs->{alias}, $attrs->{seen_join}));
}
+
$attrs->{group_by} ||= $attrs->{select} if delete $attrs->{distinct};
+ $attrs->{order_by} = [ $attrs->{order_by} ] if $attrs->{order_by} and !ref($attrs->{order_by});
+ $attrs->{order_by} ||= [];
+ my $collapse = $attrs->{collapse} || {};
if (my $prefetch = delete $attrs->{prefetch}) {
- foreach my $p (ref $prefetch eq 'ARRAY'
- ? (@{$prefetch}) : ($prefetch)) {
- if( ref $p eq 'HASH' ) {
+ my @pre_order;
+ foreach my $p (ref $prefetch eq 'ARRAY' ? @$prefetch : ($prefetch)) {
+ if ( ref $p eq 'HASH' ) {
foreach my $key (keys %$p) {
push(@{$attrs->{from}}, $source->resolve_join($p, $attrs->{alias}))
unless $seen{$key};
}
- }
- else {
+ } else {
push(@{$attrs->{from}}, $source->resolve_join($p, $attrs->{alias}))
unless $seen{$p};
}
- my @prefetch = $source->resolve_prefetch($p, $attrs->{alias});
- #die Dumper \@cols;
+ my @prefetch = $source->resolve_prefetch(
+ $p, $attrs->{alias}, {}, \@pre_order, $collapse);
push(@{$attrs->{select}}, map { $_->[0] } @prefetch);
push(@{$attrs->{as}}, map { $_->[1] } @prefetch);
}
+ push(@{$attrs->{order_by}}, @pre_order);
}
+ $attrs->{collapse} = $collapse;
+# use Data::Dumper; warn Dumper($collapse) if keys %{$collapse};
if ($attrs->{page}) {
$attrs->{rows} ||= 10;
$attrs->{offset} ||= 0;
$attrs->{offset} += ($attrs->{rows} * ($attrs->{page} - 1));
}
- my $new = {
+
+ bless {
result_source => $source,
+ result_class => $attrs->{result_class} || $source->result_class,
cond => $attrs->{where},
from => $attrs->{from},
+ collapse => $collapse,
count => undef,
page => delete $attrs->{page},
pager => undef,
- attrs => $attrs };
- bless ($new, $class);
- return $new;
+ attrs => $attrs
+ }, $class;
}
=head2 search
my $new_rs = $rs->search({ foo => 3 });
If you need to pass in additional attributes but no additional condition,
-call it as C<search({}, \%attrs);>.
+call it as C<search(undef, \%attrs);>.
# "SELECT foo, bar FROM $class_table"
- my @all = $class->search({}, { cols => [qw/foo bar/] });
+ my @all = $class->search(undef, { columns => [qw/foo bar/] });
=cut
my $attrs = { %{$self->{attrs}} };
my $having = delete $attrs->{having};
- if (@_ > 1 && ref $_[$#_] eq 'HASH') {
- $attrs = { %$attrs, %{ pop(@_) } };
- }
+ $attrs = { %$attrs, %{ pop(@_) } } if @_ > 1 and ref $_[$#_] eq 'HASH';
  my $where = (@_
                ? ((@_ == 1 || ref $_[0] eq "HASH")
                    ? shift
                    : {@_})
                : undef());
if (defined $where) {
- $where = (defined $attrs->{where}
+ $attrs->{where} = (defined $attrs->{where}
? { '-and' =>
[ map { ref $_ eq 'ARRAY' ? [ -or => $_ ] : $_ }
$where, $attrs->{where} ] }
: $where);
- $attrs->{where} = $where;
}
if (defined $having) {
- $having = (defined $attrs->{having}
+ $attrs->{having} = (defined $attrs->{having}
? { '-and' =>
[ map { ref $_ eq 'ARRAY' ? [ -or => $_ ] : $_ }
$having, $attrs->{having} ] }
: $having);
- $attrs->{having} = $having;
}
$rs = (ref $self)->new($self->result_source, $attrs);
}
else {
$rs = $self;
- $rs->reset();
+ $rs->reset;
}
return (wantarray ? $rs->all : $rs);
}
my @cols = $self->result_source->primary_columns;
if (exists $attrs->{key}) {
my %uniq = $self->result_source->unique_constraints;
- $self->( "Unknown key " . $attrs->{key} . " on " . $self->name )
+      $self->throw_exception( "Unknown key $attrs->{key} on " . $self->name )
unless exists $uniq{$attrs->{key}};
@cols = @{ $uniq{$attrs->{key}} };
}
} else {
$query = {@vals};
}
- foreach (keys %$query) {
- next if m/\./;
- $query->{$self->{attrs}{alias}.'.'.$_} = delete $query->{$_};
+ foreach my $key (grep { ! m/\./ } keys %$query) {
+ $query->{"$self->{attrs}{alias}.$key"} = delete $query->{$key};
}
#warn Dumper($query);
- return (keys %$attrs
- ? $self->search($query,$attrs)->single
- : $self->single($query));
+
+ if (keys %$attrs) {
+ my $rs = $self->search($query,$attrs);
+ return keys %{$rs->{collapse}} ? $rs->next : $rs->single;
+ } else {
+ return keys %{$self->{collapse}} ? $self->search($query)->next : $self->single($query);
+ }
}
=head2 search_related
sub cursor {
my ($self) = @_;
- my ($attrs) = $self->{attrs};
- $attrs = { %$attrs };
+ my $attrs = { %{$self->{attrs}} };
return $self->{cursor}
||= $self->result_source->storage->select($self->{from}, $attrs->{select},
$attrs->{where},$attrs);
=cut
sub single {
- my ($self, $extra) = @_;
- my ($attrs) = $self->{attrs};
- $attrs = { %$attrs };
- if ($extra) {
+ my ($self, $where) = @_;
+ my $attrs = { %{$self->{attrs}} };
+ if ($where) {
if (defined $attrs->{where}) {
$attrs->{where} = {
- '-and'
- => [ map { ref $_ eq 'ARRAY' ? [ -or => $_ ] : $_ }
- delete $attrs->{where}, $extra ]
+ '-and' =>
+ [ map { ref $_ eq 'ARRAY' ? [ -or => $_ ] : $_ }
+ $where, delete $attrs->{where} ]
};
} else {
- $attrs->{where} = $extra;
+ $attrs->{where} = $where;
}
}
my @data = $self->result_source->storage->select_single(
=cut
sub search_like {
- my $class = shift;
- my $attrs = { };
- if (@_ > 1 && ref $_[$#_] eq 'HASH') {
- $attrs = pop(@_);
- }
- my $query = ref $_[0] eq "HASH" ? { %{shift()} }: {@_};
+ my $class = shift;
+ my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
+ my $query = ref $_[0] eq 'HASH' ? { %{shift()} }: {@_};
$query->{$_} = { 'like' => $query->{$_} } for keys %$query;
return $class->search($query, { %$attrs });
}
Can be used to efficiently iterate over records in the resultset:
- my $rs = $schema->resultset('CD')->search({});
+ my $rs = $schema->resultset('CD')->search;
while (my $cd = $rs->next) {
print $cd->title;
}
sub next {
my ($self) = @_;
- my $cache;
- if( @{$cache = $self->{all_cache} || []}) {
+ if (@{$self->{all_cache} || []}) {
$self->{all_cache_position} ||= 0;
- my $obj = $cache->[$self->{all_cache_position}];
- $self->{all_cache_position}++;
- return $obj;
+ return $self->{all_cache}->[$self->{all_cache_position}++];
}
if ($self->{attrs}{cache}) {
- $self->{all_cache_position} = 0;
+ $self->{all_cache_position} = 1;
return ($self->all)[0];
}
- my @row = $self->cursor->next;
+ my @row = (exists $self->{stashed_row}
+ ? @{delete $self->{stashed_row}}
+ : $self->cursor->next);
# warn Dumper(\@row); use Data::Dumper;
return unless (@row);
return $self->_construct_object(@row);
sub _construct_object {
my ($self, @row) = @_;
- my @row_orig = @row; # copy @row for key comparison later, because @row will change
my @as = @{ $self->{attrs}{as} };
-#use Data::Dumper; warn Dumper \@as;
- #warn "@cols -> @row";
- my $info = [ {}, {} ];
- foreach my $as (@as) {
- my $rs = $self;
- my $target = $info;
- my @parts = split(/\./, $as);
- my $col = pop(@parts);
- foreach my $p (@parts) {
- $target = $target->[1]->{$p} ||= [];
-
- $rs = $rs->related_resultset($p) if $rs->{attrs}->{cache};
- }
-
- $target->[0]->{$col} = shift @row
- if ref($target->[0]) ne 'ARRAY'; # arrayref is pre-inflated objects, do not overwrite
- }
- #use Data::Dumper; warn Dumper(\@as, $info);
- my $new = $self->result_source->result_class->inflate_result(
- $self->result_source, @$info);
+
+ my $info = $self->_collapse_result(\@as, \@row);
+
+ my $new = $self->result_class->inflate_result($self->result_source, @$info);
+
$new = $self->{attrs}{record_filter}->($new)
if exists $self->{attrs}{record_filter};
-
- if( $self->{attrs}->{cache} ) {
- while( my( $rel, $rs ) = each( %{$self->{related_resultsets}} ) ) {
- $rs->all;
- #warn "$rel:", @{$rs->get_cache};
- }
- $self->build_rr( $self, $new );
- }
-
return $new;
}
-
-sub build_rr {
- # build related resultsets for supplied object
- my ( $self, $context, $obj ) = @_;
-
- my $re = qr/^\w+\./;
- while( my ($rel, $rs) = each( %{$context->{related_resultsets}} ) ) {
- #warn "context:", $context->result_source->name, ", rel:$rel, rs:", $rs->result_source->name;
- my @objs = ();
- my $map = {};
- my $cond = $context->result_source->relationship_info($rel)->{cond};
- keys %$cond;
- while( my( $rel_key, $pk ) = each(%$cond) ) {
- $rel_key =~ s/$re//;
- $pk =~ s/$re//;
- $map->{$rel_key} = $pk;
+
+sub _collapse_result {
+ my ($self, $as, $row, $prefix) = @_;
+
+ my %const;
+
+ my @copy = @$row;
+ foreach my $this_as (@$as) {
+ my $val = shift @copy;
+ if (defined $prefix) {
+ if ($this_as =~ m/^\Q${prefix}.\E(.+)$/) {
+ my $remain = $1;
+ $remain =~ /^(?:(.*)\.)?([^.]+)$/;
+ $const{$1||''}{$2} = $val;
+ }
+ } else {
+ $this_as =~ /^(?:(.*)\.)?([^.]+)$/;
+ $const{$1||''}{$2} = $val;
}
-
- $rs->reset();
- while( my $rel_obj = $rs->next ) {
- while( my( $rel_key, $pk ) = each(%$map) ) {
- if( $rel_obj->get_column($rel_key) eq $obj->get_column($pk) ) {
- push @objs, $rel_obj;
- }
+ }
+
+ my $info = [ {}, {} ];
+ foreach my $key (keys %const) {
+ if (length $key) {
+ my $target = $info;
+ my @parts = split(/\./, $key);
+ foreach my $p (@parts) {
+ $target = $target->[1]->{$p} ||= [];
}
+ $target->[0] = $const{$key};
+ } else {
+ $info->[0] = $const{$key};
}
+ }
- my $rel_rs = $obj->related_resultset($rel);
- $rel_rs->{attrs}->{cache} = 1;
- $rel_rs->set_cache( \@objs );
-
- while( my $rel_obj = $rel_rs->next ) {
- $self->build_rr( $rs, $rel_obj );
+ my @collapse = (defined($prefix)
+ ? (map { (m/^\Q${prefix}.\E(.+)$/ ? ($1) : ()); }
+ keys %{$self->{collapse}})
+ : keys %{$self->{collapse}});
+ if (@collapse) {
+ my ($c) = sort { length $a <=> length $b } @collapse;
+ my $target = $info;
+ foreach my $p (split(/\./, $c)) {
+ $target = $target->[1]->{$p} ||= [];
}
-
+ my $c_prefix = (defined($prefix) ? "${prefix}.${c}" : $c);
+ my @co_key = @{$self->{collapse}{$c_prefix}};
+ my %co_check = map { ($_, $target->[0]->{$_}); } @co_key;
+ my $tree = $self->_collapse_result($as, $row, $c_prefix);
+ my (@final, @raw);
+ while ( !(grep {
+ !defined($tree->[0]->{$_})
+ || $co_check{$_} ne $tree->[0]->{$_}
+ } @co_key) ) {
+ push(@final, $tree);
+ last unless (@raw = $self->cursor->next);
+ $row = $self->{stashed_row} = \@raw;
+ $tree = $self->_collapse_result($as, $row, $c_prefix);
+ #warn Data::Dumper::Dumper($tree, $row);
+ }
+ @$target = @final;
}
-
+
+ return $info;
}
=head2 result_source
sub count {
my $self = shift;
- return $self->search(@_)->count if @_ && defined $_[0];
- unless (defined $self->{count}) {
- return scalar @{ $self->get_cache }
- if @{ $self->get_cache };
- my $group_by;
- my $select = { 'count' => '*' };
- my $attrs = { %{ $self->{attrs} } };
- if( $group_by = delete $attrs->{group_by} ) {
- delete $attrs->{having};
- my @distinct = (ref $group_by ? @$group_by : ($group_by));
- # todo: try CONCAT for multi-column pk
- my @pk = $self->result_source->primary_columns;
- if( scalar(@pk) == 1 ) {
- my $pk = shift(@pk);
- my $alias = $attrs->{alias};
- my $re = qr/^($alias\.)?$pk$/;
- foreach my $column ( @distinct) {
- if( $column =~ $re ) {
- @distinct = ( $column );
- last;
- }
- }
- }
+ return $self->search(@_)->count if @_ and defined $_[0];
+ return scalar @{ $self->get_cache } if @{ $self->get_cache };
- $select = { count => { 'distinct' => \@distinct } };
- #use Data::Dumper; die Dumper $select;
- }
+ my $count = $self->_count;
+ return 0 unless $count;
- $attrs->{select} = $select;
- $attrs->{as} = [ 'count' ];
- # offset, order by and page are not needed to count. record_filter is cdbi
- delete $attrs->{$_} for qw/rows offset order_by page pager record_filter/;
-
- ($self->{count}) = (ref $self)->new($self->result_source, $attrs)->cursor->next;
- }
- return 0 unless $self->{count};
- my $count = $self->{count};
$count -= $self->{attrs}{offset} if $self->{attrs}{offset};
$count = $self->{attrs}{rows} if
- ($self->{attrs}{rows} && $self->{attrs}{rows} < $count);
+ $self->{attrs}{rows} and $self->{attrs}{rows} < $count;
+ return $count;
+}
+
+sub _count { # Separated out so pager can get the full count
+ my $self = shift;
+ my $select = { count => '*' };
+ my $attrs = { %{ $self->{attrs} } };
+ if (my $group_by = delete $attrs->{group_by}) {
+ delete $attrs->{having};
+ my @distinct = (ref $group_by ? @$group_by : ($group_by));
+ # todo: try CONCAT for multi-column pk
+ my @pk = $self->result_source->primary_columns;
+ if (@pk == 1) {
+ foreach my $column (@distinct) {
+ if ($column =~ qr/^(?:\Q$attrs->{alias}.\E)?$pk[0]$/) {
+ @distinct = ($column);
+ last;
+ }
+ }
+ }
+
+ $select = { count => { distinct => \@distinct } };
+ #use Data::Dumper; die Dumper $select;
+ }
+
+ $attrs->{select} = $select;
+ $attrs->{as} = [qw/count/];
+
+ # offset, order by and page are not needed to count. record_filter is cdbi
+ delete $attrs->{$_} for qw/rows offset order_by page pager record_filter/;
+
+ my ($count) = (ref $self)->new($self->result_source, $attrs)->cursor->next;
return $count;
}
sub all {
my ($self) = @_;
- return @{ $self->get_cache }
- if @{ $self->get_cache };
- if( $self->{attrs}->{cache} ) {
- my @obj = map { $self->_construct_object(@$_); }
- $self->cursor->all;
- $self->set_cache( \@obj );
- return @obj;
+ return @{ $self->get_cache } if @{ $self->get_cache };
+
+ my @obj;
+
+ if (keys %{$self->{collapse}}) {
+ # Using $self->cursor->all is really just an optimisation.
+ # If we're collapsing has_many prefetches it probably makes
+ # very little difference, and this is cleaner than hacking
+ # _construct_object to survive the approach
+ $self->cursor->reset;
+ my @row = $self->cursor->next;
+ while (@row) {
+ push(@obj, $self->_construct_object(@row));
+ @row = (exists $self->{stashed_row}
+ ? @{delete $self->{stashed_row}}
+ : $self->cursor->next);
+ }
+ } else {
+ @obj = map { $self->_construct_object(@$_) } $self->cursor->all;
}
- return map { $self->_construct_object(@$_); }
- $self->cursor->all;
+
+ $self->set_cache(\@obj) if $self->{attrs}{cache};
+ return @obj;
}
=head2 reset
sub delete {
my ($self) = @_;
my $del = {};
- $self->throw_exception("Can't delete on resultset with condition unless hash or array")
- unless (ref($self->{cond}) eq 'HASH' || ref($self->{cond}) eq 'ARRAY');
- if (ref $self->{cond} eq 'ARRAY') {
+
+ if (!ref($self->{cond})) {
+
+ # No-op. No condition, we're deleting everything
+
+ } elsif (ref $self->{cond} eq 'ARRAY') {
+
$del = [ map { my %hash;
foreach my $key (keys %{$_}) {
- $key =~ /([^\.]+)$/;
+ $key =~ /([^.]+)$/;
$hash{$1} = $_->{$key};
}; \%hash; } @{$self->{cond}} ];
- } elsif ((keys %{$self->{cond}})[0] eq '-and') {
- $del->{-and} = [ map { my %hash;
- foreach my $key (keys %{$_}) {
- $key =~ /([^\.]+)$/;
- $hash{$1} = $_->{$key};
- }; \%hash; } @{$self->{cond}{-and}} ];
- } else {
- foreach my $key (keys %{$self->{cond}}) {
- $key =~ /([^\.]+)$/;
- $del->{$1} = $self->{cond}{$key};
+
+ } elsif (ref $self->{cond} eq 'HASH') {
+
+ if ((keys %{$self->{cond}})[0] eq '-and') {
+
+ $del->{-and} = [ map { my %hash;
+ foreach my $key (keys %{$_}) {
+ $key =~ /([^.]+)$/;
+ $hash{$1} = $_->{$key};
+ }; \%hash; } @{$self->{cond}{-and}} ];
+
+ } else {
+
+ foreach my $key (keys %{$self->{cond}}) {
+ $key =~ /([^.]+)$/;
+ $del->{$1} = $self->{cond}{$key};
+ }
}
+ } else {
+ $self->throw_exception(
+ "Can't delete on resultset with condition unless hash or array");
}
+
$self->result_source->storage->delete($self->result_source->from, $del);
return 1;
}
my $attrs = $self->{attrs};
$self->throw_exception("Can't create pager for non-paged rs") unless $self->{page};
$attrs->{rows} ||= 10;
- $self->count;
return $self->{pager} ||= Data::Page->new(
- $self->{count}, $attrs->{rows}, $self->{page});
+ $self->_count, $attrs->{rows}, $self->{page});
}
=head2 page
my %new = %$values;
my $alias = $self->{attrs}{alias};
foreach my $key (keys %{$self->{cond}||{}}) {
- $new{$1} = $self->{cond}{$key} if ($key =~ m/^(?:$alias\.)?([^\.]+)$/);
+ $new{$1} = $self->{cond}{$key} if ($key =~ m/^(?:\Q${alias}.\E)?([^.]+)$/);
}
- my $obj = $self->result_source->result_class->new(\%new);
+ my $obj = $self->result_class->new(\%new);
$obj->result_source($self->result_source) if $obj->can('result_source');
- $obj;
+ return $obj;
}
=head2 create
sub find_or_create {
my $self = shift;
my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
- my $hash = ref $_[0] eq "HASH" ? shift : {@_};
+ my $hash = ref $_[0] eq 'HASH' ? shift : {@_};
my $exists = $self->find($hash, $attrs);
- return defined($exists) ? $exists : $self->create($hash);
+ return defined $exists ? $exists : $self->create($hash);
}
=head2 update_or_create
sub update_or_create {
my $self = shift;
-
my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
- my $hash = ref $_[0] eq "HASH" ? shift : {@_};
+ my $hash = ref $_[0] eq 'HASH' ? shift : {@_};
my %unique_constraints = $self->result_source->unique_constraints;
my @constraint_names = (exists $attrs->{key}
if (scalar keys %unique_hash == scalar @unique_cols);
}
- my $row;
if (@unique_hashes) {
- $row = $self->search(\@unique_hashes, { rows => 1 })->first;
- if ($row) {
+ my $row = $self->single(\@unique_hashes);
+ if (defined $row) {
$row->set_columns($hash);
$row->update;
+ return $row;
}
}
- unless ($row) {
- $row = $self->create($hash);
- }
-
- return $row;
+ return $self->create($hash);
}
=head2 get_cache
=cut
sub get_cache {
- my $self = shift;
- return $self->{all_cache} || [];
+ shift->{all_cache} || [];
}
=head2 set_cache
my ( $self, $data ) = @_;
$self->throw_exception("set_cache requires an arrayref")
if ref $data ne 'ARRAY';
- my $result_class = $self->result_source->result_class;
+ my $result_class = $self->result_class;
foreach( @$data ) {
$self->throw_exception("cannot cache object of type '$_', expected '$result_class'")
if ref $_ ne $result_class;
=cut
sub clear_cache {
- my $self = shift;
- $self->set_cache([]);
+ shift->set_cache([]);
}
=head2 related_resultset
sub related_resultset {
my ( $self, $rel, @rest ) = @_;
$self->{related_resultsets} ||= {};
- my $resultsets = $self->{related_resultsets};
- if( !exists $resultsets->{$rel} ) {
- #warn "fetching related resultset for rel '$rel'";
- my $rel_obj = $self->result_source->relationship_info($rel);
- $self->throw_exception(
- "search_related: result source '" . $self->result_source->name .
- "' has no such relationship ${rel}")
- unless $rel_obj; #die Dumper $self->{attrs};
- my $rs;
- if( $self->{attrs}->{cache} ) {
- $rs = $self->search(undef);
- }
- else {
- $rs = $self->search(undef, { join => $rel });
- }
- #use Data::Dumper; die Dumper $rs->{attrs};#$rs = $self->search( undef );
- #use Data::Dumper; warn Dumper $self->{attrs}, Dumper $rs->{attrs};
- my $alias = (defined $rs->{attrs}{seen_join}{$rel}
- && $rs->{attrs}{seen_join}{$rel} > 1
- ? join('_', $rel, $rs->{attrs}{seen_join}{$rel})
- : $rel);
- $resultsets->{$rel} =
+ return $self->{related_resultsets}{$rel} ||= do {
+ #warn "fetching related resultset for rel '$rel'";
+ my $rel_obj = $self->result_source->relationship_info($rel);
+ $self->throw_exception(
+ "search_related: result source '" . $self->result_source->name .
+ "' has no such relationship ${rel}")
+ unless $rel_obj; #die Dumper $self->{attrs};
+
+ my $rs = $self->search(undef, { join => $rel });
+ my $alias = defined $rs->{attrs}{seen_join}{$rel}
+ && $rs->{attrs}{seen_join}{$rel} > 1
+ ? join('_', $rel, $rs->{attrs}{seen_join}{$rel})
+ : $rel;
+
$self->result_source->schema->resultset($rel_obj->{class}
)->search( undef,
{ %{$rs->{attrs}},
alias => $alias,
- select => undef(),
- as => undef() }
- )->search(@rest);
- }
- return $resultsets->{$rel};
+ select => undef,
+ as => undef }
+ )->search(@rest);
+ };
}
=head2 throw_exception
Which column(s) to order the results by. This is currently passed through
directly to SQL, so you can give e.g. C<foo DESC> for a descending order.
-=head2 cols
+=head2 columns
=head3 Arguments: (arrayref)
Shortcut to request a particular set of columns to be retrieved. Adds
C<me.> onto the start of any column without a C<.> in it and sets C<select>
-from that, then auto-populates C<as> from C<select> as normal.
+from that, then auto-populates C<as> from C<select> as normal. (You may also
+use the C<cols> attribute, as in earlier versions of DBIC.)
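For illustration, a search combining the renamed C<columns> attribute with a scalar ref passed to C<order_by> (which, per this release, goes straight through to SQL) might look like the sketch below. The C<CD> source and its column names are hypothetical.

```perl
# Hedged sketch: assumes a 'CD' source with 'title' and 'year' columns.
my $rs = $schema->resultset('CD')->search(
  undef,
  {
    columns  => [qw/title year/],  # expands to select => ['me.title', 'me.year']
    order_by => \'year DESC',      # scalar ref is passed through to SQL verbatim
  }
);
```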
=head2 include_columns
names:
$rs = $schema->resultset('Foo')->search(
- {},
+ undef,
{
select => [
'column_name',
procedure names:
$rs = $schema->resultset('Foo')->search(
- {},
+ undef,
{
select => [
'column1',
objects, because it saves at least one query:
my $rs = $schema->resultset('Tag')->search(
- {},
+ undef,
{
prefetch => {
cd => 'artist'
then search against all mothers of those children:
$rs = $schema->resultset('Person')->search(
- {},
+ undef,
{
alias => 'mother', # alias columns in accordance with "from"
from => [
with a father in the person table, we could explicitly use C<INNER JOIN>:
$rs = $schema->resultset('Person')->search(
- {},
+ undef,
{
alias => 'child', # alias columns in accordance with "from"
from => [
sub _register_attributes {
my $self = shift;
- return unless $self->can('_attr_cache');
-
my $cache = $self->_attr_cache;
+ return if keys %$cache == 0;
+
foreach my $meth (@{Class::Inspector->methods($self) || []}) {
my $attrs = $cache->{$self->can($meth)};
next unless $attrs;
=head1 SYNOPSIS
# in a table class
- __PACKAGE__->load_components(qw/ResultSetManager/);
+ __PACKAGE__->load_components(qw/ResultSetManager Core/); # note order!
__PACKAGE__->load_resultset_components(qw/AlwaysRS/);
# will be removed from the table class and inserted into a table-specific resultset class
C<load_resultset_components> loads components in addition to C<DBIx::Class::ResultSet>
(or whatever you set as C<base_resultset_class>). Any methods tagged with the C<ResultSet>
attribute will be moved into a table-specific resultset class (by default called
-C<Class::_resultset>).
+C<Class::_resultset>, but configurable via C<table_resultset_class_suffix>).
+Most of the magic is done when you call C<< __PACKAGE__->table >>.
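A sketch of how a tagged method might be declared (the class, table, and column names here are hypothetical, not from this distribution):

```perl
package My::Schema::Foo;
use base qw/DBIx::Class/;

__PACKAGE__->load_components(qw/ResultSetManager Core/); # note order!
__PACKAGE__->table('foo');  # the move into the resultset class happens here

# The :ResultSet attribute moves this method out of the table class and
# into the table-specific resultset class (My::Schema::Foo::_resultset
# by default), so $self below is a resultset, not a row.
sub search_active : ResultSet {
    my $self = shift;
    return $self->search({ active => 1 });
}

1;

# later, in application code:
#   my $active = $schema->resultset('Foo')->search_active;
```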
=head1 AUTHORS
-package DBIx::Class::ResultSetProxy;
+package # hide from PAUSE
+ DBIx::Class::ResultSetProxy;
use base qw/DBIx::Class/;
use DBIx::Class::ResultSet;
use Carp::Clan qw/^DBIx::Class/;
-
use Storable;
-use Scalar::Util qw/weaken/;
use base qw/DBIx::Class/;
__PACKAGE__->load_components(qw/AccessorGroup/);
sub add_columns {
my ($self, @cols) = @_;
- $self->_ordered_columns( \@cols )
- if !$self->_ordered_columns;
+ $self->_ordered_columns(\@cols) unless $self->_ordered_columns;
+
my @added;
my $columns = $self->_columns;
while (my $col = shift @cols) {
-
+ # If next entry is { ... } use that for the column info, if not
+ # use an empty hashref
my $column_info = ref $cols[0] ? shift(@cols) : {};
- # If next entry is { ... } use that for the column info, if not
- # use an empty hashref
-
push(@added, $col) unless exists $columns->{$col};
-
$columns->{$col} = $column_info;
}
push @{ $self->_ordered_columns }, @added;
$self->throw_exception("No such column $column")
unless exists $self->_columns->{$column};
#warn $self->{_columns_info_loaded}, "\n";
- if ( ! $self->_columns->{$column}->{data_type}
- && ! $self->{_columns_info_loaded}
- && $self->schema && $self->storage() ){
- $self->{_columns_info_loaded}++;
- my $info;
-############ eval for the case of storage without table
- eval{
- $info = $self->storage->columns_info_for ( $self->from() );
- };
- if ( ! $@ ){
- for my $col ( keys %{$self->_columns} ){
- for my $i ( keys %{$info->{$col}} ){
- $self->_columns()->{$col}->{$i} = $info->{$col}->{$i};
- }
- }
+ if ( ! $self->_columns->{$column}{data_type}
+ and ! $self->{_columns_info_loaded}
+ and $self->schema and $self->storage )
+ {
+ $self->{_columns_info_loaded}++;
+ my $info;
+ # eval for the case of storage without table
+ eval { $info = $self->storage->columns_info_for($self->from) };
+ unless ($@) {
+ foreach my $col ( keys %{$self->_columns} ) {
+ foreach my $i ( keys %{$info->{$col}} ) {
+ $self->_columns->{$col}{$i} = $info->{$col}{$i};
+ }
}
+ }
}
return $self->_columns->{$column};
}
=cut
sub columns {
- my $self=shift;
+ my $self = shift;
$self->throw_exception("columns() is a read-only accessor, did you mean add_columns()?") if (@_ > 1);
return @{$self->{_ordered_columns}||[]};
}
sub set_primary_key {
my ($self, @cols) = @_;
# check if primary key columns are valid columns
- for (@cols) {
- $self->throw_exception("No such column $_ on table ".$self->name)
- unless $self->has_column($_);
+ foreach my $col (@cols) {
+ $self->throw_exception("No such column $col on table " . $self->name)
+ unless $self->has_column($col);
}
$self->_primaries(\@cols);
sub add_unique_constraint {
my ($self, $name, $cols) = @_;
- for (@$cols) {
- $self->throw_exception("No such column $_ on table ".$self->name)
- unless $self->has_column($_);
+ foreach my $col (@$cols) {
+ $self->throw_exception("No such column $col on table " . $self->name)
+ unless $self->has_column($col);
}
my %unique_constraints = $self->unique_constraints;
=cut
sub resolve_prefetch {
- my ($self, $pre, $alias, $seen) = @_;
+ my ($self, $pre, $alias, $seen, $order, $collapse) = @_;
$seen ||= {};
- use Data::Dumper;
#$alias ||= $self->name;
#warn $alias, Dumper $pre;
if( ref $pre eq 'ARRAY' ) {
- return map { $self->resolve_prefetch( $_, $alias, $seen ) } @$pre;
+ return
+ map { $self->resolve_prefetch( $_, $alias, $seen, $order, $collapse ) }
+ @$pre;
}
elsif( ref $pre eq 'HASH' ) {
my @ret =
map {
- $self->resolve_prefetch($_, $alias, $seen),
+ $self->resolve_prefetch($_, $alias, $seen, $order, $collapse),
$self->related_source($_)->resolve_prefetch(
- $pre->{$_}, "${alias}.$_", $seen)
- } keys %$pre;
+ $pre->{$_}, "${alias}.$_", $seen, $order, $collapse)
+ } keys %$pre;
#die Dumper \@ret;
return @ret;
}
elsif( ref $pre ) {
- $self->throw_exception( "don't know how to resolve prefetch reftype " . ref $pre);
+ $self->throw_exception(
+ "don't know how to resolve prefetch reftype ".ref($pre));
}
else {
my $count = ++$seen->{$pre};
my $as = ($count > 1 ? "${pre}_${count}" : $pre);
my $rel_info = $self->relationship_info( $pre );
- $self->throw_exception( $self->name . " has no such relationship '$pre'" ) unless $rel_info;
- my $as_prefix = ($alias =~ /^.*?\.(.*)$/ ? $1.'.' : '');
+ $self->throw_exception( $self->name . " has no such relationship '$pre'" )
+ unless $rel_info;
+ my $as_prefix = ($alias =~ /^.*?\.(.+)$/ ? $1.'.' : '');
+ my $rel_source = $self->related_source($pre);
+
+ if (exists $rel_info->{attrs}{accessor}
+ && $rel_info->{attrs}{accessor} eq 'multi') {
+ $self->throw_exception(
+ "Can't prefetch has_many ${pre} (join cond too complex)")
+ unless ref($rel_info->{cond}) eq 'HASH';
+ my @key = map { (/^foreign\.(.+)$/ ? ($1) : ()); }
+ keys %{$rel_info->{cond}};
+ $collapse->{"${as_prefix}${pre}"} = \@key;
+ my @ord = (ref($rel_info->{attrs}{order_by}) eq 'ARRAY'
+ ? @{$rel_info->{attrs}{order_by}}
+ : (defined $rel_info->{attrs}{order_by}
+ ? ($rel_info->{attrs}{order_by})
+ : ()));
+ push(@$order, map { "${as}.$_" } (@key, @ord));
+ }
+
return map { [ "${as}.$_", "${as_prefix}${pre}.$_", ] }
- $self->related_source($pre)->columns;
+ $rel_source->columns;
#warn $alias, Dumper (\@ret);
#return @ret;
}
sub resultset {
my $self = shift;
+ $self->throw_exception('resultset does not take any arguments. If you want another resultset, call it on the schema instead.') if scalar @_;
return $self->{_resultset} if ref $self->{_resultset} eq $self->resultset_class;
- return $self->{_resultset} = do {
- my $rs = $self->resultset_class->new($self, $self->{resultset_attributes});
- weaken $rs->result_source;
- $rs;
- };
+ return $self->{_resultset} = $self->resultset_class->new($self, $self->{resultset_attributes});
}
=head2 throw_exception
-package DBIx::Class::ResultSourceProxy;
+package # hide from PAUSE
+ DBIx::Class::ResultSourceProxy;
use strict;
use warnings;
sub new {
my ($class, $attrs) = @_;
$class = ref $class if ref $class;
- my $new = bless({ _column_data => { } }, $class);
+ my $new = bless { _column_data => {} }, $class;
if ($attrs) {
- $new->throw_exception("attrs must be a hashref" ) unless ref($attrs) eq 'HASH';
- while (my ($k, $v) = each %{$attrs}) {
+ $new->throw_exception("attrs must be a hashref") unless ref($attrs) eq 'HASH';
+ while (my ($k, $v) = each %$attrs) {
$new->throw_exception("No such column $k on $class") unless $class->has_column($k);
$new->store_column($k => $v);
}
$self->{result_source} ||= $self->result_source_instance
if $self->can('result_source_instance');
my $source = $self->{result_source};
- $self->throw_exception("No result_source set on this object; can't insert") unless $source;
+ $self->throw_exception("No result_source set on this object; can't insert")
+ unless $source;
#use Data::Dumper; warn Dumper($self);
$source->storage->insert($source->from, { $self->get_columns });
$self->in_storage(1);
my $ident_cond = $self->ident_condition;
$self->throw_exception("Cannot safely delete a row in a PK-less table")
if ! keys %$ident_cond;
+ foreach my $column (keys %$ident_cond) {
+ $self->throw_exception("Can't delete the object unless it has loaded the primary keys")
+ unless exists $self->{_column_data}{$column};
+ }
$self->result_source->storage->delete(
$self->result_source->from, $ident_cond);
$self->in_storage(undef);
} else {
$self->throw_exception("Can't do class delete without a ResultSource instance")
unless $self->can('result_source_instance');
- my $attrs = { };
- if (@_ > 1 && ref $_[$#_] eq 'HASH') {
- $attrs = { %{ pop(@_) } };
- }
- my $query = (ref $_[0] eq 'HASH' ? $_[0] : {@_});
+ my $attrs = @_ > 1 && ref $_[$#_] eq 'HASH' ? { %{pop(@_)} } : {};
+ my $query = ref $_[0] eq 'HASH' ? $_[0] : {@_};
$self->result_source_instance->resultset->search(@_)->delete;
}
return $self;
sub get_column {
my ($self, $column) = @_;
$self->throw_exception( "Can't fetch data as class method" ) unless ref $self;
- return $self->{_column_data}{$column}
- if exists $self->{_column_data}{$column};
+ return $self->{_column_data}{$column} if exists $self->{_column_data}{$column};
$self->throw_exception( "No such column '${column}'" ) unless $self->has_column($column);
return undef;
}
+sub has_column_loaded {
+ my ($self, $column) = @_;
+ $self->throw_exception( "Can't call has_column_loaded as class method" ) unless ref $self;
+ return exists $self->{_column_data}{$column};
+}
+
=head2 get_columns
my %data = $obj->get_columns;
delete $col_data->{$col}
if $self->result_source->column_info($col)->{is_auto_increment};
}
- my $new = bless({ _column_data => $col_data }, ref $self);
+ my $new = bless { _column_data => $col_data }, ref $self;
$new->set_columns($changes);
$new->insert;
foreach my $rel ($self->result_source->relationships) {
foreach my $pre (keys %{$prefetch||{}}) {
my $pre_val = $prefetch->{$pre};
my $pre_source = $source->related_source($pre);
- $class->throw_exception("Can't prefetch non-existent relationship ${pre}") unless $pre_source;
- my $fetched;
- unless ($pre_source->primary_columns == grep { exists $prefetch->{$pre}[0]{$_}
- and !defined $prefetch->{$pre}[0]{$_} } $pre_source->primary_columns)
- {
- $fetched = $pre_source->result_class->inflate_result(
- $pre_source, @{$prefetch->{$pre}});
- }
- my $accessor = $source->relationship_info($pre)->{attrs}{accessor};
- $class->throw_exception("No accessor for prefetched $pre")
- unless defined $accessor;
- if ($accessor eq 'single') {
- $new->{_relationship_data}{$pre} = $fetched;
- } elsif ($accessor eq 'filter') {
- $new->{_inflated_column}{$pre} = $fetched;
- } elsif ($accessor eq 'multi') {
-
- } else {
- $class->throw_exception("Prefetch not supported with accessor '$accessor'");
+ $class->throw_exception("Can't prefetch non-existent relationship ${pre}")
+ unless $pre_source;
+ if (ref($pre_val->[0]) eq 'ARRAY') { # multi
+ my @pre_objects;
+ foreach my $pre_rec (@$pre_val) {
+ unless ($pre_source->primary_columns == grep { exists $pre_rec->[0]{$_}
+ and defined $pre_rec->[0]{$_} } $pre_source->primary_columns) {
+ next;
+ }
+ push(@pre_objects, $pre_source->result_class->inflate_result(
+ $pre_source, @{$pre_rec}));
+ }
+ $new->related_resultset($pre)->set_cache(\@pre_objects);
+ } elsif (defined $pre_val->[0]) {
+ my $fetched;
+ unless ($pre_source->primary_columns == grep { exists $pre_val->[0]{$_}
+ and !defined $pre_val->[0]{$_} } $pre_source->primary_columns)
+ {
+ $fetched = $pre_source->result_class->inflate_result(
+ $pre_source, @{$pre_val});
+ }
+ my $accessor = $source->relationship_info($pre)->{attrs}{accessor};
+ $class->throw_exception("No accessor for prefetched $pre")
+ unless defined $accessor;
+ if ($accessor eq 'single') {
+ $new->{_relationship_data}{$pre} = $fetched;
+ } elsif ($accessor eq 'filter') {
+ $new->{_inflated_column}{$pre} = $fetched;
+ } else {
+ $class->throw_exception("Prefetch not supported with accessor '$accessor'");
+ }
}
}
return $new;
}
-=head2 insert_or_update
+=head2 update_or_insert
- $obj->insert_or_update
+ $obj->update_or_insert
Updates the object if it's already in the db, else inserts it.
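A short usage sketch (the C<CD> source and its columns are hypothetical):

```perl
# Hedged sketch: $cd starts out not in storage, so the first call inserts.
my $cd = $schema->resultset('CD')->new({ title => 'Spoon' });
$cd->update_or_insert;   # in_storage is false: performs an insert

$cd->title('Spoonful');
$cd->update_or_insert;   # in_storage is now true: performs an update

# insert_or_update remains available as a compatibility alias.
```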
=cut
-sub insert_or_update {
+*insert_or_update = \&update_or_insert;
+sub update_or_insert {
my $self = shift;
return ($self->in_storage ? $self->update : $self->insert);
}
$password,
$attrs
);
-
- my $schema2 = My::Schema->connect( ... );
+
+ my $schema2 = My::Schema->connect($coderef_returning_dbh);
# fetch objects using My::Schema::Foo
my $resultset = $schema1->resultset('Foo')->search( ... );
$comps_for{$class} = \@comp;
}
- foreach my $prefix (keys %comps_for) {
- foreach my $comp (@{$comps_for{$prefix}||[]}) {
- my $comp_class = "${prefix}::${comp}";
- eval "use $comp_class"; # If it fails, assume the user fixed it
- if ($@) {
- die $@ unless $@ =~ /Can't locate/;
+ my @to_register;
+ {
+ no warnings qw/redefine/;
+ local *Class::C3::reinitialize = sub { };
+ foreach my $prefix (keys %comps_for) {
+ foreach my $comp (@{$comps_for{$prefix}||[]}) {
+ my $comp_class = "${prefix}::${comp}";
+ eval "use $comp_class"; # If it fails, assume the user fixed it
+ if ($@) {
+ die $@ unless $@ =~ /Can't locate/;
+ }
+ push(@to_register, [ $comp, $comp_class ]);
}
- $class->register_class($comp => $comp_class);
- # if $class->can('result_source_instance');
}
}
+ Class::C3->reinitialize;
+
+ foreach my $to (@to_register) {
+ $class->register_class(@$to);
+ # if $class->can('result_source_instance');
+ }
}
=head2 compose_connection
my %target;
my %map;
my $schema = $self->clone;
- foreach my $moniker ($schema->sources) {
- my $source = $schema->source($moniker);
- my $target_class = "${target}::${moniker}";
- $self->inject_base(
- $target_class => $source->result_class, ($base ? $base : ())
- );
- $source->result_class($target_class);
+ {
+ no warnings qw/redefine/;
+ local *Class::C3::reinitialize = sub { };
+ foreach my $moniker ($schema->sources) {
+ my $source = $schema->source($moniker);
+ my $target_class = "${target}::${moniker}";
+ $self->inject_base(
+ $target_class => $source->result_class, ($base ? $base : ())
+ );
+ $source->result_class($target_class);
+ }
}
+ Class::C3->reinitialize();
{
no strict 'refs';
foreach my $meth (qw/class source resultset/) {
sub connection {
my ($self, @info) = @_;
+ return $self if !@info && $self->storage;
my $storage_class = $self->storage_type;
$storage_class = 'DBIx::Class::Storage'.$storage_class
if $storage_class =~ m/^::/;
my ($self, $name, $data) = @_;
my $rs = $self->resultset($name);
my @names = @{shift(@$data)};
+ my @created;
foreach my $item (@$data) {
my %create;
@create{@names} = @$item;
- $rs->create(\%create);
+ push(@created, $rs->create(\%create));
}
+ return @created;
}
=head2 throw_exception
=cut
sub deploy {
- my ($self) = shift;
+ my ($self, $sqltargs) = @_;
$self->throw_exception("Can't deploy without storage") unless $self->storage;
- $self->storage->deploy($self);
+ $self->storage->deploy($self, undef, $sqltargs);
}
1;
-package DBIx::Class::Storage;
+package # hide from PAUSE
+ DBIx::Class::Storage;
use strict;
use warnings;
sub columns_info_for { die "Virtual method!" }
-
-
package DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION;
use overload '"' => sub {
sub new {
my $class = shift;
-
return bless {}, $class;
}
if (defined $_[0]->{order_by}) {
$ret .= $self->SUPER::_order_by($_[0]->{order_by});
}
+ } elsif(ref $_[0] eq 'SCALAR') {
+ $ret = $self->_sqlcase(' order by ').${ $_[0] };
} else {
$ret = $self->SUPER::_order_by(@_);
}
__PACKAGE__->load_components(qw/AccessorGroup/);
__PACKAGE__->mk_group_accessors('simple' =>
- qw/connect_info _dbh _sql_maker _connection_pid debug debugfh cursor
- on_connect_do transaction_depth/);
+ qw/connect_info _dbh _sql_maker _conn_pid _conn_tid debug debugfh
+ cursor on_connect_do transaction_depth/);
sub new {
my $new = bless({}, ref $_[0] || $_[0]);
sub connected {
my ($self) = @_;
- my $dbh;
- (($dbh = $self->_dbh) && $dbh->FETCH('Active') && $dbh->ping)
+ if(my $dbh = $self->_dbh) {
+ if(defined $self->_conn_tid && $self->_conn_tid != threads->tid) {
+ $self->_sql_maker(undef);
+ return $self->_dbh(undef);
+ }
+ elsif($self->_conn_pid != $$) {
+ $self->_dbh->{InactiveDestroy} = 1;
+ $self->_sql_maker(undef);
+ return $self->_dbh(undef)
+ }
+ return ($dbh->FETCH('Active') && $dbh->ping);
+ }
+
+ return 0;
}
sub ensure_connected {
sub dbh {
my ($self) = @_;
- if($self->_connection_pid && $self->_connection_pid != $$) {
- $self->_dbh->{InactiveDestroy} = 1;
- $self->_dbh(undef)
- }
$self->ensure_connected;
return $self->_dbh;
}
$self->_dbh->do($sql_statement);
}
- $self->_connection_pid($$);
+ $self->_conn_pid($$);
+ $self->_conn_tid(threads->tid) if $INC{'threads.pm'};
}
sub _connect {
my ($self, @info) = @_;
+ $self->throw_exception("You failed to provide any connection info")
+ if !@info;
+
+ my ($old_connect_via, $dbh);
+
if ($INC{'Apache/DBI.pm'} && $ENV{MOD_PERL}) {
- my $old_connect_via = $DBI::connect_via;
+ $old_connect_via = $DBI::connect_via;
$DBI::connect_via = 'connect';
- my $dbh = DBI->connect(@info);
- $DBI::connect_via = $old_connect_via;
- return $dbh;
}
- my $dbh = DBI->connect(@info);
+ if(ref $info[0] eq 'CODE') {
+ $dbh = &{$info[0]};
+ }
+ else {
+ $dbh = DBI->connect(@info);
+ }
+
+ $DBI::connect_via = $old_connect_via if $old_connect_via;
+
$self->throw_exception("DBI Connection failed: $DBI::errstr")
unless $dbh;
+
$dbh;
}
else {
--$self->{transaction_depth} == 0 ?
$self->dbh->rollback :
- die DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION->new;
+ die DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION->new;
}
};
@bind = map { ref $_ ? ''.$_ : $_ } @bind; # stringify args
my $rv;
if ($sth) {
- $rv = $sth->execute(@bind);
+ $rv = $sth->execute(@bind) or $self->throw_exception("Error executing '$sql': " . $sth->errstr);
} else {
$self->throw_exception("'$sql' did not generate a statement.");
}
sub columns_info_for {
my ($self, $table) = @_;
+
+ if ($self->dbh->can('column_info')) {
+ my %result;
+ my $old_raise_err = $self->dbh->{RaiseError};
+ my $old_print_err = $self->dbh->{PrintError};
+ $self->dbh->{RaiseError} = 1;
+ $self->dbh->{PrintError} = 0;
+ eval {
+ my $sth = $self->dbh->column_info( undef, undef, $table, '%' );
+ $sth->execute();
+ while ( my $info = $sth->fetchrow_hashref() ){
+ my %column_info;
+ $column_info{data_type} = $info->{TYPE_NAME};
+ $column_info{size} = $info->{COLUMN_SIZE};
+ $column_info{is_nullable} = $info->{NULLABLE} ? 1 : 0;
+ $column_info{default_value} = $info->{COLUMN_DEF};
+ $result{$info->{COLUMN_NAME}} = \%column_info;
+ }
+ };
+ $self->dbh->{RaiseError} = $old_raise_err;
+ $self->dbh->{PrintError} = $old_print_err;
+ return \%result if !$@;
+ }
+
my %result;
- if ( $self->dbh->can( 'column_info' ) ){
- my $sth = $self->dbh->column_info( undef, undef, $table, '%' );
- $sth->execute();
- while ( my $info = $sth->fetchrow_hashref() ){
- my %column_info;
- $column_info{data_type} = $info->{TYPE_NAME};
- $column_info{size} = $info->{COLUMN_SIZE};
- $column_info{is_nullable} = $info->{NULLABLE};
- $result{$info->{COLUMN_NAME}} = \%column_info;
- }
- } else {
- my $sth = $self->dbh->prepare("SELECT * FROM $table WHERE 1=0");
- $sth->execute;
- my @columns = @{$sth->{NAME}};
- for my $i ( 0 .. $#columns ){
- $result{$columns[$i]}{data_type} = $sth->{TYPE}->[$i];
+ my $sth = $self->dbh->prepare("SELECT * FROM $table WHERE 1=0");
+ $sth->execute;
+ my @columns = @{$sth->{NAME_lc}};
+ for my $i ( 0 .. $#columns ){
+ my %column_info;
+ my $type_num = $sth->{TYPE}->[$i];
+ my $type_name;
+ if(defined $type_num && $self->dbh->can('type_info')) {
+ my $type_info = $self->dbh->type_info($type_num);
+ $type_name = $type_info->{TYPE_NAME} if $type_info;
}
+ $column_info{data_type} = $type_name ? $type_name : $type_num;
+ $column_info{size} = $sth->{PRECISION}->[$i];
+ $column_info{is_nullable} = $sth->{NULLABLE}->[$i] ? 1 : 0;
+ $result{$columns[$i]} = \%column_info;
}
+
return \%result;
}
}
-sub sqlt_type {
- my ($self) = @_;
- my $dsn = $self->connect_info->[0];
- $dsn =~ /^dbi:(.*?)\d*:/;
- return $1;
-}
+sub sqlt_type { shift->dbh->{Driver}->{Name} }
sub deployment_statements {
- my ($self, $schema, $type) = @_;
+ my ($self, $schema, $type, $sqltargs) = @_;
$type ||= $self->sqlt_type;
eval "use SQL::Translator";
$self->throw_exception("Can't deploy without SQL::Translator: $@") if $@;
$self->throw_exception($@) if $@;
eval "use SQL::Translator::Producer::${type};";
$self->throw_exception($@) if $@;
- my $tr = SQL::Translator->new();
+ my $tr = SQL::Translator->new(%$sqltargs);
SQL::Translator::Parser::DBIx::Class::parse( $tr, $schema );
return "SQL::Translator::Producer::${type}"->can('produce')->($tr);
}
sub deploy {
- my ($self, $schema, $type) = @_;
- foreach(split(";\n", $self->deployment_statements($schema, $type))) {
- $self->dbh->do($_) or warn "SQL was:\n $_";
+ my ($self, $schema, $type, $sqltargs) = @_;
+ foreach(split(";\n", $self->deployment_statements($schema, $type, $sqltargs))) {
+ $self->debugfh->print("$_\n") if $self->debug;
+ $self->dbh->do($_) or warn "SQL was:\n $_";
}
}
-package DBIx::Class::Storage::DBI::Cursor;
+package # hide from PAUSE
+ DBIx::Class::Storage::DBI::Cursor;
use base qw/DBIx::Class::Cursor/;
storage => $storage,
args => $args,
pos => 0,
- attrs => $attrs };
+ attrs => $attrs,
+ pid => $$,
+ };
+
+ $new->{tid} = threads->tid if $INC{'threads.pm'};
+
return bless ($new, $class);
}
sub next {
my ($self) = @_;
+
+ $self->_check_forks_threads;
if ($self->{attrs}{rows} && $self->{pos} >= $self->{attrs}{rows}) {
$self->{sth}->finish if $self->{sth}->{Active};
delete $self->{sth};
sub all {
my ($self) = @_;
+
+ $self->_check_forks_threads;
return $self->SUPER::all if $self->{attrs}{rows};
$self->{sth}->finish if $self->{sth}->{Active};
delete $self->{sth};
sub reset {
my ($self) = @_;
+
+ $self->_check_forks_threads;
$self->{sth}->finish if $self->{sth}->{Active};
+ $self->_soft_reset;
+}
+
+sub _soft_reset {
+ my ($self) = @_;
+
delete $self->{sth};
$self->{pos} = 0;
delete $self->{done};
return $self;
}
+sub _check_forks_threads {
+ my ($self) = @_;
+
+ if($INC{'threads.pm'} && $self->{tid} != threads->tid) {
+ $self->_soft_reset;
+ $self->{tid} = threads->tid;
+ }
+
+ if($self->{pid} != $$) {
+ $self->_soft_reset;
+ $self->{pid} = $$;
+ }
+}
+
sub DESTROY {
my ($self) = @_;
+
+ $self->_check_forks_threads;
$self->{sth}->finish if $self->{sth}->{Active};
}
while (my $col = shift @pri) {
my $info = $dbh->column_info(undef,$schema,$table,$col)->fetchrow_arrayref;
if (defined $info->[12] and $info->[12] =~
- /^nextval\('"?([^"']+)"?'::(?:text|regclass)\)/)
+ /^nextval\(+'([^']+)'::(?:text|regclass)\)/)
{
- return $1;
+ return $1; # may need to strip quotes -- see if this works
}
}
}
+sub sqlt_type {
+ return 'PostgreSQL';
+}
+
1;
=head1 NAME
--- /dev/null
+package DBIx::Class::UTF8Columns;
+use strict;
+use warnings;
+use base qw/DBIx::Class/;
+
+use Encode;
+
+__PACKAGE__->mk_classdata( force_utf8_columns => [] );
+
+=head1 NAME
+
+DBIx::Class::UTF8Columns - Force UTF8 (Unicode) flag on columns
+
+=head1 SYNOPSIS
+
+ package Artist;
+ __PACKAGE__->load_components(qw/UTF8Columns Core/);
+ __PACKAGE__->utf8_columns(qw/name description/);
+
+    # the following return strings with the utf8 flag set
+ $artist->name;
+ $artist->get_column('description');
+
+=head1 DESCRIPTION
+
+This module forces the utf8 (Unicode) flag on data read from the
+configured columns, and removes it before the data is stored.
+
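+For example, assuming the C<Artist> class from the SYNOPSIS, the flag can
+be inspected with L<Encode> (an illustrative snippet, not part of this
+module):
+
+    use Encode;
+
+    my $name = $artist->name;    # 'name' is listed in utf8_columns
+    print Encode::is_utf8($name) ? "utf8 flag set\n" : "no utf8 flag\n";
+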
+=head1 SEE ALSO
+
+L<Template::Stash::ForceUTF8>, L<DBIx::Class::UUIDColumns>.
+
+=head1 METHODS
+
+=head2 utf8_columns
+
+=cut
+
+sub utf8_columns {
+ my $self = shift;
+ for (@_) {
+ $self->throw_exception("column $_ doesn't exist")
+ unless $self->has_column($_);
+ }
+ $self->force_utf8_columns( \@_ );
+}
+
+=head1 EXTENDED METHODS
+
+=head2 get_column
+
+=cut
+
+sub get_column {
+ my ( $self, $column ) = @_;
+ my $value = $self->next::method($column);
+
+ if ( { map { $_ => 1 } @{ $self->force_utf8_columns } }->{$column} ) {
+ Encode::_utf8_on($value) unless Encode::is_utf8($value);
+ }
+
+ $value;
+}
+
+=head2 get_columns
+
+=cut
+
+sub get_columns {
+ my $self = shift;
+ my %data = $self->next::method(@_);
+
+ for (@{ $self->force_utf8_columns }) {
+    Encode::_utf8_on($data{$_}) if $data{$_} and !Encode::is_utf8($data{$_});
+ }
+
+ %data;
+}
+
+=head2 store_column
+
+=cut
+
+sub store_column {
+ my ( $self, $column, $value ) = @_;
+
+ if ( { map { $_ => 1 } @{ $self->force_utf8_columns } }->{$column} ) {
+ Encode::_utf8_off($value) if Encode::is_utf8($value);
+ }
+
+ $self->next::method( $column, $value );
+}
+
+=head1 AUTHOR
+
+Daisuke Murase <typester@cpan.org>
+
+=head1 COPYRIGHT
+
+This program is free software; you can redistribute
+it and/or modify it under the same terms as Perl itself.
+
+The full text of the license can be found in the
+LICENSE file included with this module.
+
+=cut
+
+1;
+
__PACKAGE__->uuid_columns('id');
+sub uuid_class {
+ my ($self, $class) = @_;
+
+ if ($class) {
+ $class = "DBIx::Class::UUIDMaker$class" if $class =~ /^::/;
+
+ if (!eval "require $class") {
+ $self->throw_exception("$class could not be loaded: $@");
+ } elsif (!$class->isa('DBIx::Class::UUIDMaker')) {
+ $self->throw_exception("$class is not a UUIDMaker subclass");
+ } else {
+ $self->uuid_maker($class->new);
+ };
+ };
+
+ return ref $self->uuid_maker;
+};
+
+sub insert {
+ my $self = shift;
+ for my $column (@{$self->uuid_auto_columns}) {
+ $self->store_column( $column, $self->get_uuid )
+ unless defined $self->get_column( $column );
+ }
+ $self->next::method(@_);
+}
+
+sub get_uuid {
+ return shift->uuid_maker->as_string;
+}
+
+sub _find_uuid_module {
+ if (eval{require Data::UUID}) {
+ return '::Data::UUID';
+ } elsif ($^O ne 'openbsd' && eval{require APR::UUID}) {
+    # APR::UUID on OpenBSD causes as yet undiagnosed XS breakage
+ return '::APR::UUID';
+ } elsif (eval{require UUID}) {
+ return '::UUID';
+ } elsif (eval{
+ # squelch the 'too late for INIT' warning in Win32::API::Type
+ local $^W = 0;
+ require Win32::Guidgen;
+ }) {
+ return '::Win32::Guidgen';
+ } elsif (eval{require Win32API::GUID}) {
+ return '::Win32API::GUID';
+ } else {
+ shift->throw_exception('no suitable uuid module could be found')
+ };
+};
+
+1;
+__END__
+
+=head1 NAME
+
+DBIx::Class::UUIDColumns - Implicit uuid columns
+
+=head1 SYNOPSIS
+
+ package Artist;
+ __PACKAGE__->load_components(qw/UUIDColumns Core DB/);
+ __PACKAGE__->uuid_columns( 'artist_id' );
+
+=head1 DESCRIPTION
+
+This L<DBIx::Class> component resembles the behaviour of
+L<Class::DBI::UUID>: it automatically populates the specified columns
+with uuids on insert.
+
+When loaded, C<UUIDColumns> will search for a suitable uuid generation module
+from the following list of supported modules:
+
+ Data::UUID
+ APR::UUID*
+ UUID
+ Win32::Guidgen
+ Win32API::GUID
+
+If no supporting module can be found, an exception will be thrown.
+
+*APR::UUID will not be loaded under OpenBSD due to an as yet unidentified XS
+issue.
+
+If you would like to use a specific module, you can set C<uuid_class>:
+
+ __PACKAGE__->uuid_class('::Data::UUID');
+ __PACKAGE__->uuid_class('MyUUIDGenerator');
+
+Note that the component needs to be loaded before Core.
+
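+For example, a result class combining these settings might look like the
+following (C<My::Schema::Artist> is a hypothetical class name):
+
+    package My::Schema::Artist;
+    use base 'DBIx::Class';
+
+    # UUIDColumns must be loaded before Core
+    __PACKAGE__->load_components(qw/UUIDColumns Core DB/);
+    __PACKAGE__->uuid_columns('artist_id');
+    __PACKAGE__->uuid_class('::Data::UUID');
+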
+=head1 METHODS
+
+=head2 uuid_columns(@columns)
+
+Takes a list of columns to be filled with uuids during insert.
+
+ __PACKAGE__->uuid_columns('id');
+
=head2 uuid_class($classname)
Takes the name of a UUIDMaker subclass to be used for uuid value generation.
-package SQL::Translator::Parser::DBIx::Class;
+package # hide from PAUSE
+ SQL::Translator::Parser::DBIx::Class;
# AUTHOR: Jess Robinson
--- /dev/null
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+use Test::More;
+
+use lib qw(t/lib);
+use DBICTest::ForeignComponent;
+
+plan tests => 1;
+
+# Tests if foreign component was loaded by calling foreign's method
+ok( DBICTest::ForeignComponent->foreign_test_method, 'foreign component' );
+
eval "use DBD::SQLite";
plan $@
? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 4 );
+ : ( tests => 6 );
}
use lib qw(t/lib);
cmp_ok( $rs->count, '==', 1, "join with fields quoted");
+$rs = DBICTest::CD->search({},
+ { 'order_by' => 'year DESC'});
+{
+ my $warnings;
+ local $SIG{__WARN__} = sub { $warnings .= $_[0] };
+ my $first = eval{ $rs->first() };
+ ok( $warnings =~ /ORDER BY terms/, "Problem with ORDER BY quotes" );
+}
+
+my $order = 'year DESC';
+$rs = DBICTest::CD->search({},
+ { 'order_by' => \$order });
+{
+ my $warnings;
+ local $SIG{__WARN__} = sub { $warnings .= $_[0] };
+ my $first = $rs->first();
+ ok( $warnings !~ /ORDER BY terms/,
+ "No problem handling ORDER by scalaref" );
+}
+
DBICTest->schema->storage->sql_maker->quote_char([qw/[ ]/]);
DBICTest->schema->storage->sql_maker->name_sep('.');
-use Class::C3;
use strict;
-use Test::More;
use warnings;
+use Test::More;
-# This test passes no matter what in most cases. However, prior to the recent
-# fork-related fixes, it would spew lots of warnings. I have not quite gotten
-# it to where it actually fails in those cases.
+# README: If you set the env var to a number greater than 10,
+# we will use that many children
my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_PG_${_}" } qw/DSN USER PASS/};
+my $num_children = $ENV{DBICTEST_FORK_STRESS};
plan skip_all => 'Set $ENV{DBICTEST_FORK_STRESS} to run this test'
- unless $ENV{DBICTEST_FORK_STRESS};
+ unless $num_children;
plan skip_all => 'Set $ENV{DBICTEST_PG_DSN}, _USER and _PASS to run this test'
. ' (note: creates and drops a table named artist!)' unless ($dsn && $user);
-plan tests => 15;
+if($num_children !~ /^[0-9]+$/ || $num_children < 10) {
+ $num_children = 10;
+}
+
+plan tests => $num_children + 5;
use lib qw(t/lib);
use_ok('DBICTest::Schema');
-DBICTest::Schema->compose_connection('PgTest' => $dsn, $user, $pass, { AutoCommit => 1 });
+my $schema = DBICTest::Schema->connection($dsn, $user, $pass, { AutoCommit => 1 });
-my ($first_rs, $joe_record);
+my $parent_rs;
eval {
- my $dbh = PgTest->schema->storage->dbh;
+ my $dbh = $schema->storage->dbh;
- eval {
- $dbh->do("DROP TABLE cd");
+ {
+ local $SIG{__WARN__} = sub {};
+ eval { $dbh->do("DROP TABLE cd") };
$dbh->do("CREATE TABLE cd (cdid serial PRIMARY KEY, artist INTEGER NOT NULL UNIQUE, title VARCHAR(255) NOT NULL UNIQUE, year VARCHAR(255));");
- };
+ }
- PgTest->resultset('CD')->create({ title => 'vacation in antarctica', artist => 123, year => 1901 });
- PgTest->resultset('CD')->create({ title => 'vacation in antarctica part 2', artist => 456, year => 1901 });
+ $schema->resultset('CD')->create({ title => 'vacation in antarctica', artist => 123, year => 1901 });
+ $schema->resultset('CD')->create({ title => 'vacation in antarctica part 2', artist => 456, year => 1901 });
- $first_rs = PgTest->resultset('CD')->search({ year => 1901 });
- $joe_record = $first_rs->next;
+ $parent_rs = $schema->resultset('CD')->search({ year => 1901 });
+ $parent_rs->next;
};
ok(!$@) or diag "Creation eval failed: $@";
-my $num_children = 10;
my @pids;
while(@pids < $num_children) {
}
elsif($pid) {
push(@pids, $pid);
- next;
+ next;
}
$pid = $$;
- my ($forked_rs, $joe_forked);
- $forked_rs = PgTest->resultset('CD')->search({ year => 1901 });
- $joe_forked = $first_rs->next;
- if($joe_forked && $joe_forked->get_column('artist') =~ /^(?:123|456)$/) {
- PgTest->resultset('CD')->create({ title => "test success $pid", artist => $pid, year => scalar(@pids) });
+ my $child_rs = $schema->resultset('CD')->search({ year => 1901 });
+ my $row = $parent_rs->next;
+ if($row && $row->get_column('artist') =~ /^(?:123|456)$/) {
+ $schema->resultset('CD')->create({ title => "test success $pid", artist => $pid, year => scalar(@pids) });
}
sleep(3);
exit;
while(@pids) {
my $pid = pop(@pids);
- my $rs = PgTest->resultset('CD')->search({ title => "test success $pid", artist => $pid, year => scalar(@pids) });
+ my $rs = $schema->resultset('CD')->search({ title => "test success $pid", artist => $pid, year => scalar(@pids) });
is($rs->next->get_column('artist'), $pid, "Child $pid successful");
}
ok(1, "Made it to the end");
-PgTest->schema->storage->dbh->do("DROP TABLE cd");
+$schema->storage->dbh->do("DROP TABLE cd");
--- /dev/null
+use strict;
+use warnings;
+use Test::More;
+use Config;
+
+# README: If you set the env var to a number greater than 10,
+# we will use that many children
+
+BEGIN {
+ plan skip_all => 'Your perl does not support ithreads'
+ if !$Config{useithreads} || $] < 5.008;
+}
+
+use threads;
+
+my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_PG_${_}" } qw/DSN USER PASS/};
+my $num_children = $ENV{DBICTEST_THREAD_STRESS};
+
+plan skip_all => 'Set $ENV{DBICTEST_THREAD_STRESS} to run this test'
+ unless $num_children;
+
+plan skip_all => 'Set $ENV{DBICTEST_PG_DSN}, _USER and _PASS to run this test'
+ . ' (note: creates and drops a table named artist!)' unless ($dsn && $user);
+
+diag 'It is normal to see a series of "Scalars leaked: ..." messages during this test';
+
+if($num_children !~ /^[0-9]+$/ || $num_children < 10) {
+ $num_children = 10;
+}
+
+plan tests => $num_children + 5;
+
+use lib qw(t/lib);
+
+use_ok('DBICTest::Schema');
+
+my $schema = DBICTest::Schema->connection($dsn, $user, $pass, { AutoCommit => 1, RaiseError => 1, PrintError => 0 });
+
+my $parent_rs;
+
+eval {
+ my $dbh = $schema->storage->dbh;
+
+ {
+ local $SIG{__WARN__} = sub {};
+ eval { $dbh->do("DROP TABLE cd") };
+ $dbh->do("CREATE TABLE cd (cdid serial PRIMARY KEY, artist INTEGER NOT NULL UNIQUE, title VARCHAR(255) NOT NULL UNIQUE, year VARCHAR(255));");
+ }
+
+ $schema->resultset('CD')->create({ title => 'vacation in antarctica', artist => 123, year => 1901 });
+ $schema->resultset('CD')->create({ title => 'vacation in antarctica part 2', artist => 456, year => 1901 });
+
+ $parent_rs = $schema->resultset('CD')->search({ year => 1901 });
+ $parent_rs->next;
+};
+ok(!$@) or diag "Creation eval failed: $@";
+
+my @children;
+while(@children < $num_children) {
+
+ my $newthread = async {
+ my $tid = threads->tid;
+ my $dbh = $schema->storage->dbh;
+
+ my $child_rs = $schema->resultset('CD')->search({ year => 1901 });
+ my $row = $parent_rs->next;
+ if($row && $row->get_column('artist') =~ /^(?:123|456)$/) {
+ $schema->resultset('CD')->create({ title => "test success $tid", artist => $tid, year => scalar(@children) });
+ }
+ sleep(3);
+ };
+ die "Thread creation failed: $! $@" if !defined $newthread;
+ push(@children, $newthread);
+}
+
+ok(1, "past spawning");
+
+{
+ $_->join for(@children);
+}
+
+ok(1, "past joining");
+
+while(@children) {
+ my $child = pop(@children);
+ my $tid = $child->tid;
+ my $rs = $schema->resultset('CD')->search({ title => "test success $tid", artist => $tid, year => scalar(@children) });
+ is($rs->next->get_column('artist'), $tid, "Child $tid successful");
+}
+
+ok(1, "Made it to the end");
+
+$schema->storage->dbh->do("DROP TABLE cd");
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::BasicRels;
+
+require "t/run/08inflate_serialize.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::BasicRels;
+
+require "t/run/25utf8.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::HelperRels;
+
+require "t/run/08inflate_serialize.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::HelperRels;
+
+require "t/run/25utf8.tl";
+run_tests(DBICTest->schema);
-package DBICTest;
+package # hide from PAUSE
+ DBICTest;
use strict;
use warnings;
-package DBICTest::BasicRels;
+package # hide from PAUSE
+ DBICTest::BasicRels;
use DBICTest::Schema;
use DBICTest::Schema::BasicRels;
-package DBICTest::Extra;
+package # hide from PAUSE
+ DBICTest::Extra;
use base 'DBIx::Class::Schema';
__PACKAGE__->load_classes("Foo");
-package DBICTest::Extra::Foo;
+package # hide from PAUSE
+ DBICTest::Extra::Foo;
use base 'DBIx::Class';
__PACKAGE__->load_components(qw/ ResultSetManager Core /);
sub bar : ResultSet { 'good' }
-1;
\ No newline at end of file
+1;
--- /dev/null
+# belongs to t/05components.t
+package # hide from PAUSE
+ DBICTest::ForeignComponent;
+use warnings;
+use strict;
+
+use base qw/ DBIx::Class /;
+
+__PACKAGE__->load_components( qw/ +DBICTest::ForeignComponent::TestComp / );
+
+1;
--- /dev/null
+# belongs to t/05components.t
+package # hide from PAUSE
+ DBICTest::ForeignComponent::TestComp;
+use warnings;
+use strict;
+
+sub foreign_test_method { 1 }
+
+1;
-package DBICTest::HelperRels;
+package # hide from PAUSE
+ DBICTest::HelperRels;
use DBICTest::Schema;
use DBICTest::Schema::HelperRels;
-package DBICTest::Plain;
+package # hide from PAUSE
+ DBICTest::Plain;
use strict;
use warnings;
-package DBICTest::Plain::Test;
+package # hide from PAUSE
+ DBICTest::Plain::Test;
use base 'DBIx::Class::Core';
-package DBICTest::Schema;
+package # hide from PAUSE
+ DBICTest::Schema;
use base qw/DBIx::Class::Schema/;
-package DBICTest::Schema::Artist;
+package # hide from PAUSE
+ DBICTest::Schema::Artist;
use base 'DBIx::Class::Core';
},
'name' => {
data_type => 'varchar',
+ size => 100,
is_nullable => 1,
},
);
-package DBICTest::Schema::ArtistUndirectedMap;
+package # hide from PAUSE
+ DBICTest::Schema::ArtistUndirectedMap;
use base 'DBIx::Class::Core';
-package DBICTest::Schema::BasicRels;
+package # hide from PAUSE
+ DBICTest::Schema::BasicRels;
use base 'DBIx::Class::Core';
-package DBICTest::Schema::CD;
+package # hide from PAUSE
+ DBICTest::Schema::CD;
use base 'DBIx::Class::Core';
},
'title' => {
data_type => 'varchar',
+ size => 100,
},
'year' => {
data_type => 'varchar',
+ size => 100,
},
);
DBICTest::Schema::CD->set_primary_key('cdid');
-package DBICTest::Schema::CD_to_Producer;
+package # hide from PAUSE
+ DBICTest::Schema::CD_to_Producer;
use base 'DBIx::Class::Core';
-package DBICTest::Schema::FourKeys;
+package # hide from PAUSE
+ DBICTest::Schema::FourKeys;
use base 'DBIx::Class::Core';
-package DBICTest::Schema::HelperRels;
+package # hide from PAUSE
+ DBICTest::Schema::HelperRels;
use base 'DBIx::Class::Core';
DBICTest::Schema::CD->belongs_to('artist', 'DBICTest::Schema::Artist');
DBICTest::Schema::CD->has_many(tracks => 'DBICTest::Schema::Track');
-DBICTest::Schema::CD->has_many(tags => 'DBICTest::Schema::Tag');
+DBICTest::Schema::CD->has_many(tags => 'DBICTest::Schema::Tag', undef,
+ { order_by => 'tag' });
DBICTest::Schema::CD->has_many(cd_to_producer => 'DBICTest::Schema::CD_to_Producer' => 'cd');
DBICTest::Schema::CD->might_have(liner_notes => 'DBICTest::Schema::LinerNotes',
-package DBICTest::Schema::LinerNotes;
+package # hide from PAUSE
+ DBICTest::Schema::LinerNotes;
use base qw/DBIx::Class::Core/;
},
'notes' => {
data_type => 'varchar',
+ size => 100,
},
);
DBICTest::Schema::LinerNotes->set_primary_key('liner_id');
-package DBICTest::Schema::OneKey;
+package # hide from PAUSE
+ DBICTest::Schema::OneKey;
use base 'DBIx::Class::Core';
-package DBICTest::Schema::Producer;
+package # hide from PAUSE
+ DBICTest::Schema::Producer;
use base 'DBIx::Class::Core';
},
'name' => {
data_type => 'varchar',
+ size => 100,
},
);
__PACKAGE__->set_primary_key('producerid');
-package DBICTest::Schema::SelfRef;\r
+package # hide from PAUSE \r
+ DBICTest::Schema::SelfRef;\r
\r
use base 'DBIx::Class::Core';\r
\r
},\r
'name' => {\r
data_type => 'varchar',\r
+ size => 100,\r
},\r
);\r
__PACKAGE__->set_primary_key('id');\r
-package DBICTest::Schema::SelfRefAlias;\r
+package # hide from PAUSE \r
+ DBICTest::Schema::SelfRefAlias;\r
\r
use base 'DBIx::Class::Core';\r
\r
-package DBICTest::Schema::Serialized;
+package # hide from PAUSE
+ DBICTest::Schema::Serialized;
use base 'DBIx::Class::Core';
-package DBICTest::Schema::Tag;
+package # hide from PAUSE
+ DBICTest::Schema::Tag;
use base qw/DBIx::Class::Core/;
data_type => 'integer',
},
'tag' => {
- data_type => 'varchar'
+ data_type => 'varchar',
+ size => 100,
},
);
DBICTest::Schema::Tag->set_primary_key('tagid');
-package DBICTest::Schema::Track;
+package # hide from PAUSE
+ DBICTest::Schema::Track;
use base 'DBIx::Class::Core';
},
'title' => {
data_type => 'varchar',
+ size => 100,
},
);
DBICTest::Schema::Track->set_primary_key('trackid');
-package DBICTest::Schema::TreeLike;
+package # hide from PAUSE
+ DBICTest::Schema::TreeLike;
use base qw/DBIx::Class/;
__PACKAGE__->add_columns(
'id' => { data_type => 'integer', is_auto_increment => 1 },
'parent' => { data_type => 'integer' },
- 'name' => { data_type => 'varchar' },
+ 'name' => { data_type => 'varchar',
+ size => 100,
+ },
);
__PACKAGE__->set_primary_key(qw/id/);
__PACKAGE__->belongs_to('parent', 'TreeLike',
-package DBICTest::Schema::TwoKeys;
+package # hide from PAUSE
+ DBICTest::Schema::TwoKeys;
use base 'DBIx::Class::Core';
sub run_tests {
my $schema = shift;
-plan tests => 39;
+plan tests => 41;
my @art = $schema->resultset("Artist")->search({ }, { order_by => 'name DESC'});
is_deeply( \@cd, [qw/cdid artist title year/], 'column order');
-$cd = $schema->resultset("CD")->search({ title => 'Spoonful of bees' }, { cols => ['title'] })->next;
+$cd = $schema->resultset("CD")->search({ title => 'Spoonful of bees' }, { columns => ['title'] })->next;
is($cd->title, 'Spoonful of bees', 'subset of columns returned correctly');
$cd = $schema->resultset("CD")->search(undef, { include_columns => [ 'artist.name' ], join => [ 'artist' ] })->find(1);
is($cd->title, 'Spoonful of bees', 'Correct CD returned with include');
is($cd->get_column('name'), 'Caterwauler McCrae', 'Additional column returned');
-# insert_or_update
+# update_or_insert
$new = $schema->resultset("Track")->new( {
trackid => 100,
cd => 1,
position => 1,
title => 'Insert or Update',
} );
-$new->insert_or_update;
-ok($new->in_storage, 'insert_or_update insert ok');
+$new->update_or_insert;
+ok($new->in_storage, 'update_or_insert insert ok');
# test in update mode
$new->pos(5);
-$new->insert_or_update;
-is( $schema->resultset("Track")->find(100)->pos, 5, 'insert_or_update update ok');
+$new->update_or_insert;
+is( $schema->resultset("Track")->find(100)->pos, 5, 'update_or_insert update ok');
eval { $schema->class("Track")->load_components('DoesNotExist'); };
cmp_ok($or_rs->next->cdid, '==', $rel_rs->next->cdid, 'Related object ok');
+my $tag = $schema->resultset('Tag')->search(
+ [ { 'me.tag' => 'Blue' } ], { cols=>[qw/tagid/] } )->next;
+
+cmp_ok($tag->has_column_loaded('tagid'), '==', 1, 'Has tagid loaded');
+cmp_ok($tag->has_column_loaded('tag'), '==', 0, 'Has tag not loaded');
+
ok($schema->storage(), 'Storage available');
$schema->source("Artist")->{_columns}{'artistid'} = {};
my $type_info = $schema->storage->columns_info_for('artist');
my $test_type_info = {
'artistid' => {
- 'data_type' => 'INTEGER'
+ 'data_type' => 'INTEGER',
+ 'is_nullable' => 0,
+ 'size' => undef,
},
'name' => {
- 'data_type' => 'varchar'
+ 'data_type' => 'varchar',
+ 'is_nullable' => 0,
+ 'size' => undef,
}
};
is_deeply($type_info, $test_type_info, 'columns_info_for - column data types');
eval { require DateTime };
plan skip_all => "Need DateTime for inflation tests" if $@;
-plan tests => 5;
+plan tests => 3;
DBICTest::Schema::CD->inflate_column( 'year',
{ inflate => sub { DateTime->new( year => shift ) },
($cd) = $schema->resultset("CD")->search( year => $now->year );
is( $cd->year->year, $now->year, 'deflate ok' );
-use YAML;
-DBICTest::Schema::Serialized->inflate_column( 'serialized',
- { inflate => sub { Load (shift) },
- deflate => sub { die "Expecting a reference" unless (ref $_[0]); Dump (shift) } }
-);
-Class::C3->reinitialize;
-
-my $complex1 = {
- id => 1,
- serialized => {
- a => 1,
- b => 2,
- },
-};
-
-my $complex2 = {
- id => 1,
- serialized => [qw/a b 1 2/],
-};
-
-my $rs = $schema->resultset('Serialized');
-
-my $entry = $rs->create($complex2);
-
-ok($entry->update ($complex1), "update with hashref deflating ok");
-
-ok($entry->update ($complex2), "update with arrayref deflating ok");
-
}
1;
--- /dev/null
+sub run_tests {
+my $schema = shift;
+
+use Data::Dumper;
+
+my @serializers = (
+ { module => 'YAML.pm',
+ inflater => sub { YAML::Load (shift) },
+ deflater => sub { die "Expecting a reference" unless (ref $_[0]); YAML::Dump (shift) },
+ },
+ { module => 'Storable.pm',
+ inflater => sub { Storable::thaw (shift) },
+ deflater => sub { die "Expecting a reference" unless (ref $_[0]); Storable::nfreeze (shift) },
+ },
+);
+
+
+my $selected;
+foreach my $serializer (@serializers) {
+ eval { require $serializer->{module} };
+ unless ($@) {
+ $selected = $serializer;
+ last;
+ }
+}
+
+plan (skip_all => "No suitable serializer found") unless $selected;
+
+plan (tests => 6);
+DBICTest::Schema::Serialized->inflate_column( 'serialized',
+ { inflate => $selected->{inflater},
+ deflate => $selected->{deflater},
+ },
+);
+Class::C3->reinitialize;
+
+my $complex1 = {
+ id => 1,
+ serialized => {
+ a => 1,
+ b => [
+ { c => 2 },
+ ],
+ d => 3,
+ },
+};
+
+my $complex2 = {
+ id => 1,
+ serialized => [
+ 'a',
+ { b => 1, c => 2},
+ 'd',
+ ],
+};
+
+my $rs = $schema->resultset('Serialized');
+my $entry = $rs->create({ id => 1, serialized => ''});
+
+my $inflated;
+
+ok($entry->update ({ %{$complex1} }), 'hashref deflation ok');
+ok($inflated = $entry->serialized, 'hashref inflation ok');
+is_deeply($inflated, $complex1->{serialized}, 'inflated hash matches original');
+
+ok($entry->update ({ %{$complex2} }), 'arrayref deflation ok');
+ok($inflated = $entry->serialized, 'arrayref inflation ok');
+is_deeply($inflated, $complex2->{serialized}, 'inflated array matches original');
+
+}
+
+1;
'artistid' => {
'data_type' => 'INT',
'is_nullable' => 0,
- 'size' => 11
+ 'size' => 11,
+ 'default_value' => undef,
},
'name' => {
'data_type' => 'VARCHAR',
'is_nullable' => 1,
- 'size' => 255
+ 'size' => 255,
+ 'default_value' => undef,
},
'charfield' => {
'data_type' => 'VARCHAR',
'is_nullable' => 1,
- 'size' => 10
+ 'size' => 10,
+ 'default_value' => undef,
},
};
plan skip_all, 'Set $ENV{DBICTEST_PG_DSN}, _USER and _PASS to run this test'
. ' (note: creates and drops a table named artist!)' unless ($dsn && $user);
-plan tests => 3;
+plan tests => 4;
DBICTest::Schema->compose_connection('PgTest' => $dsn, $user, $pass);
'artistid' => {
'data_type' => 'integer',
'is_nullable' => 0,
- 'size' => 4
+ 'size' => 4,
},
'name' => {
'data_type' => 'character varying',
'is_nullable' => 1,
- 'size' => 255
+ 'size' => 255,
+ 'default_value' => undef,
},
'charfield' => {
'data_type' => 'character',
'is_nullable' => 1,
- 'size' => 10
+ 'size' => 10,
+ 'default_value' => undef,
},
};
+
my $type_info = PgTest->schema->storage->columns_info_for('artist');
-is_deeply($type_info, $test_type_info, 'columns_info_for - column data types');
+my $artistid_defval = delete $type_info->{artistid}->{default_value};
+like($artistid_defval,
+ qr/^nextval\('public\.artist_artistid_seq'::(?:text|regclass)\)/,
+ 'columns_info_for - sequence matches Pg get_autoinc_seq expectations');
+is_deeply($type_info, $test_type_info,
+ 'columns_info_for - column data types');
$dbh->do("DROP TABLE artist;");
plan skip_all, 'Set $ENV{DBICTEST_DB2_DSN}, _USER and _PASS to run this test'
unless ($dsn && $user);
-plan tests => 5;
+plan tests => 6;
DBICTest::Schema->compose_connection('DB2Test' => $dsn, $user, $pass);
my $dbh = DB2Test->schema->storage->dbh;
-$dbh->do("DROP TABLE artist;");
+{
+ local $SIG{__WARN__} = sub {};
+ $dbh->do("DROP TABLE artist;");
+}
$dbh->do("CREATE TABLE artist (artistid INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1), name VARCHAR(255), charfield CHAR(10));");
}
);
is( $it->count, 3, "LIMIT count ok" );
-is( $it->next->name, "Artist 2", "iterator->next ok" );
-$it->next;
+is( $it->next->name, "foo", "iterator->next ok" );
$it->next;
+is( $it->next->name, "Artist 2", "iterator->next ok" );
is( $it->next, undef, "next past end of resultset ok" );
my $test_type_info = {
'artistid' => {
'data_type' => 'INTEGER',
'is_nullable' => 0,
- 'size' => 11
+ 'size' => 10
},
'name' => {
'data_type' => 'VARCHAR',
'size' => 255
},
'charfield' => {
- 'data_type' => 'VARCHAR',
+ 'data_type' => 'CHAR',
'is_nullable' => 1,
'size' => 10
},
unlink 't/var/dbic.trace';
is($selects, 1, 'prefetch ran only 1 select statement');
-# test for partial prefetch via cols attr
+# test for partial prefetch via columns attr
my $cd = $schema->resultset('CD')->find(1,
{
- cols => [qw/title artist.name/],
+ columns => [qw/title artist.name/],
join => { 'artist' => {} }
}
);
eval "use DBD::SQLite";
plan skip_all => 'needs DBD::SQLite for testing' if $@;
-plan tests => 12;
+plan tests => 17;
my $rs = $schema->resultset("Artist")->search(
{ artistid => 1 }
is( scalar @{ $rs->get_cache }, 0, 'cache is not populated without cache attribute' );
+my @a = $schema->resultset("Artist")->search(
+ { },
+ {
+ join => [ qw/ cds /],
+ prefetch => [qw/ cds /],
+ }
+);
+
+is(scalar @a, 3, 'artist with cds: count parent objects');
+
$rs = $schema->resultset("Artist")->search(
{ 'artistid' => 1 },
{
join => [ qw/ cds /],
prefetch => [qw/ cds /],
- cache => 1,
}
);
}
$trace->close;
unlink 't/var/dbic.trace';
-is($selects, 2, 'only one SQL statement for each cached table');
+is($selects, 1, 'only one SQL statement executed');
# make sure related_resultset is deleted after object is updated
$artist->set_column('name', 'New Name');
prefetch => {
cds => 'tags'
},
- cache => 1
}
);
+{
+my $artist_count_before = $schema->resultset('Artist')->count;
+$schema->resultset("Artist")->create({artistid=>4,name=>qq{Humongous Hamsters}});
+is($schema->resultset('Artist')->count, $artist_count_before + 1, 'count() reflects new artist');
+my $artist = $schema->resultset("Artist")->search(
+ { artistid => 4 },{prefetch=>[qw/cds/]}
+)->first;
+
+is($artist->cds, 0, 'No cds for this artist');
+}
# SELECT count for nested has_many prefetch
unlink 't/var/dbic.trace' if -e 't/var/dbic.trace';
DBI->trace(1, 't/var/dbic.trace');
-$artist = $rs->first;
+$artist = ($rs->all)[0];
# count the SELECTs
DBI->trace(0, undef);
}
$trace->close;
unlink 't/var/dbic.trace';
-is($selects, 3, 'one SQL statement for each cached table with nested prefetch');
+is($selects, 1, 'only one SQL statement executed');
my @objs;
-$artist = $rs->find(1);
+#$artist = $rs->find(1);
unlink 't/var/dbic.trace' if -e 't/var/dbic.trace';
DBI->trace(1, 't/var/dbic.trace');
my $cds = $artist->cds;
my $tags = $cds->next->tags;
while( my $tag = $tags->next ) {
- push @objs, $tag->tagid; #warn "tag:", $tag->ID;
+ push @objs, $tag->tagid; #warn "tag:", $tag->ID, " => ", $tag->tag;
}
-is_deeply( \@objs, [ 1 ], 'first cd has correct tags' );
+is_deeply( \@objs, [ 3 ], 'first cd has correct tags' );
$tags = $cds->next->tags;
@objs = ();
is( $selects, 0, 'no additional SQL statements while checking nested data' );
+# start test for prefetch SELECT count
+unlink 't/var/dbic.trace' if -e 't/var/dbic.trace';
+DBI->trace(1, 't/var/dbic.trace');
+
+$artist = $schema->resultset('Artist')->find(1, { prefetch => [qw/cds/] });
+
+# count the SELECTs
+DBI->trace(0, undef);
+$selects = 0;
+$trace = IO::File->new('t/var/dbic.trace', '<')
+ or die "Unable to read trace file";
+while (<$trace>) {
+ $selects++ if /SELECT/;
+}
+$trace->close;
+unlink 't/var/dbic.trace';
+
+is( $selects, 1, 'only one select statement on find with inline has_many prefetch' );
+
+# start test for prefetch SELECT count
+unlink 't/var/dbic.trace' if -e 't/var/dbic.trace';
+DBI->trace(1, 't/var/dbic.trace');
+
+$rs = $schema->resultset('Artist')->search(undef, { prefetch => [qw/cds/] });
+$artist = $rs->find(1);
+
+# count the SELECTs
+DBI->trace(0, undef);
+$selects = 0;
+$trace = IO::File->new('t/var/dbic.trace', '<')
+ or die "Unable to read trace file";
+while (<$trace>) {
+ $selects++ if /SELECT/;
+}
+$trace->close;
+unlink 't/var/dbic.trace';
+
+is( $selects, 1, 'only one select statement on find with has_many prefetch on resultset' );
+
}
1;
--- /dev/null
+sub run_tests {
+ my $schema = shift;
+
+ eval 'use Encode ; 1'
+    or plan skip_all, 'Install Encode to run this test';
+
+ plan tests => 2;
+
+ DBICTest::Schema::Artist->load_components('UTF8Columns');
+ DBICTest::Schema::Artist->utf8_columns('name');
+ Class::C3->reinitialize();
+
+ my $artist = $schema->resultset("Artist")->create( { name => 'uni' } );
+ ok( Encode::is_utf8( $artist->name ), 'got name with utf8 flag' );
+
+ my $utf8_char = 'uniuni';
+ Encode::_utf8_on($utf8_char);
+ $artist->name($utf8_char);
+ ok( !Encode::is_utf8( $artist->{_column_data}->{name} ),
+ 'store utf8 less chars' );
+}
+
+1;
-package Actor;
+package # hide from PAUSE
+ Actor;
BEGIN { unshift @INC, './t/testlib'; }
-package ActorAlias;\r
+package # hide from PAUSE \r
+ ActorAlias;\r
\r
BEGIN { unshift @INC, './t/testlib'; }\r
\r
-package Binary;
+package # hide from PAUSE
+ Binary;
BEGIN { unshift @INC, './t/testlib'; }
-package Blurb;
+package # hide from PAUSE
+ Blurb;
BEGIN { unshift @INC, './t/testlib'; }
-package CDBase;
+package # hide from PAUSE
+ CDBase;
use strict;
use base qw(DBIx::Class::Test::SQLite);
-package Director;
+package # hide from PAUSE
+ Director;
BEGIN { unshift @INC, './t/testlib'; }
-package Film;
+package # hide from PAUSE
+ Film;
BEGIN { unshift @INC, './t/testlib'; }
use base 'DBIx::Class::Test::SQLite';
-package Lazy;
+package # hide from PAUSE
+ Lazy;
BEGIN { unshift @INC, './t/testlib'; }
use base 'DBIx::Class::Test::SQLite';
-package Log;
+package # hide from PAUSE
+ Log;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package MyBase;
+package # hide from PAUSE
+ MyBase;
use strict;
use base qw(DBIx::Class);
-package MyFilm;
+package # hide from PAUSE
+ MyFilm;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package MyFoo;
+package # hide from PAUSE
+ MyFoo;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package MyStar;
+package # hide from PAUSE
+ MyStar;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package MyStarLink;
+package # hide from PAUSE
+ MyStarLink;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package MyStarLinkMCPK;
+package # hide from PAUSE
+ MyStarLinkMCPK;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package Order;
+package # hide from PAUSE
+ Order;
BEGIN { unshift @INC, './t/testlib'; }
-package OtherFilm;
+package # hide from PAUSE
+ OtherFilm;
use strict;
use base 'Film';
-package PgBase;
+package # hide from PAUSE
+ PgBase;
use strict;
use base 'DBIx::Class';