Revision history for DBIx::Class
+0.05999_03
+ - deploy now adds drop statements before creates
+ - deploy outputs debugging statements if DBIX_CLASS_STORAGE_DBI_DEBUG
+ is set
+
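The debug switch in the entry above is an environment variable checked by Storage::DBI; a minimal sketch of turning it on from Perl before connecting (everything here is illustrative usage, not part of the patch):

```perl
# enable deploy/SQL debug output for DBIx::Class::Storage::DBI
BEGIN { $ENV{DBIX_CLASS_STORAGE_DBI_DEBUG} = 1 }

use My::Schema;   # hypothetical schema class
my $schema = My::Schema->connect($dsn, $user, $pass);
$schema->deploy;  # now prints the generated statements as it runs them
```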
+0.05999_02 2006-03-10 13:31:37
+ - remove test dep on YAML
+ - additional speed tweaks for C3
+ - allow scalarefs passed to order_by to go straight through to SQL
+ - renamed insert_or_update to update_or_insert (with compat alias)
+ - hidden lots of packages from the PAUSE Indexer
+
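The order_by and update_or_insert changes above can be sketched as follows (resultset and column names are made up for illustration):

```perl
# a scalarref in order_by is passed through to SQL unquoted
my $rs = $schema->resultset('Artist')->search(
  undef,
  { order_by => \'LENGTH(name) DESC' },   # literal SQL fragment
);

# update_or_insert is the new name; insert_or_update remains as an alias
my $row = $schema->resultset('Artist')->new({ name => 'Falco' });
$row->update_or_insert;   # inserts if not yet in storage, updates otherwise
```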
+0.05999_01 2006-03-09 18:31:44
+ - renamed cols attribute to columns (cols still supported)
+ - added has_column_loaded to Row
+ - Storage::DBI connect_info supports coderef returning dbh as 1st arg
+ - load_components() doesn't prepend base when comp. prefixed with +
+ - $schema->deploy
+ - HAVING support
+ - prefetch for has_many
+ - PK::Auto::* no longer required since Storage::DBI::* handle auto-inc
+ - minor tweak to tests for join edge case
+ - added cascade_copy relationship attribute
+ (sponsored by Airspace Software, http://www.airspace.co.uk/)
+ - clean up set_from_related
+ - made copy() automatically null out auto-inc columns
+
0.05007 2006-02-24 00:59:00
- tweak to Componentised for Class::C3 0.11
- fixes for auto-inc under MSSQL
0.05003 2006-02-08 17:50:20
- add component_class accessors and use them for *_class
- small fixes to Serialize and ResultSetManager
- - prevent accidental table-wide update/delete on row-object
- from PK-less table
- rollback on disconnect, and disconnect on DESTROY
- - fixes to deep search and search_related
0.05002 2006-02-06 12:12:03
- Added recommends for Class::Inspector
- count will now work for grouped resultsets
- added accessor => option to column_info to specify accessor name
- added $schema->populate to load test data (similar to AR fixtures)
- - removed cdbi-t dependencies, only run tests if installed
- - Removed DBIx::Class::Exception
- - unified throw_exception stuff, using Carp::Clan
- - report query when sth generation fails.
+ - removed cdbi-t dependencies, only run tests if installed
+ - Removed DBIx::Class::Exception
+ - unified throw_exception stuff, using Carp::Clan
+ - report query when sth generation fails.
- multi-step prefetch!
- inheritance fixes
- test tweaks
- made Storage::DBI use prepare_cached safely (thanks to Tim Bunce)
- many documentation improvements (thanks guys!)
- added ->connection, ->connect, ->register_source and ->clone schema methods
- - Use croak instead of die for user errors.
+ - Use croak instead of die for user errors.
0.04999_02 2006-01-14 07:17:35
- Schema is now self-contained; no requirement for co-operation
0.03004
- Added an || '' to the CDBICompat stringify to avoid null warnings
- - Updated name section for manual pods
-
+ - Updated name section for manual pods
0.03003 2005-11-03 17:00:00
- POD fixes.
- Changed use to require in Relationship/Base to avoid import.
+++ /dev/null
-NAME
- DBIx::Class - Extensible and flexible object <-> relational mapper.
-
-SYNOPSIS
-DESCRIPTION
- This is an SQL to OO mapper, inspired by the Class::DBI framework, and
- meant to support compability with it, while restructuring the internals
- and making it possible to support some new features like self-joins,
- distinct, group bys and more.
-
- This project is still at an early stage, so the maintainers don't make
- any absolute promise that full backwards-compatibility will be
- supported; however, if we can without compromising the improvements
- we're trying to make, we will, and any non-compatible changes will merit
- a full justification on the mailing list and a CPAN developer release
- for people to test against.
-
- The community can be found via -
-
- Mailing list: http://lists.rawmode.org/mailman/listinfo/dbix-class/
-
- SVN: http://dev.catalyst.perl.org/repos/bast/trunk/DBIx-Class/
-
- Wiki: http://dbix-class.shadowcatsystems.co.uk/
-
- IRC: irc.perl.org#dbix-class
-
-QUICKSTART
- If you're using Class::DBI, and want an easy and fast way of migrating
- to DBIx::Class, take a look at DBIx::Class::CDBICompat.
-
- There are two ways of using DBIx::Class, the "simple" way and the
- "schema" way. The "simple" way of using DBIx::Class needs less classes
- than the "schema" way but doesn't give you the ability to easily use
- different database connections.
-
- Some examples where different database connections are useful are:
-
- different users with different rights different databases with the same
- schema.
-
- Simple
- First you need to create a base class which all other classes will
- inherit from. See DBIx::Class::DB for information on how to do this.
-
- Then you need to create a class for every table you want to use with
- DBIx::Class. See DBIx::Class::Table for information on how to do this.
-
- Schema
- With this approach, the table classes inherit directly from
- DBIx::Class::Core, although it might be a good idea to create a "parent"
- class for all table classes that inherits from DBIx::Class::Core and
- adds additional methods needed by all table classes, e.g. reading a
- config file or loading auto primary key support.
-
- Look at DBIx::Class::Schema for information on how to do this.
-
- If you need more help, check out the introduction in the manual below.
-
-SEE ALSO
- DBIx::Class::Core - DBIC Core Classes
- DBIx::Class::Manual - User's manual
- DBIx::Class::CDBICompat - Class::DBI Compat layer
- DBIx::Class::Schema
- DBIx::Class::ResultSet
- DBIx::Class::ResultSource
- DBIx::Class::Row - row-level methods
- DBIx::Class::PK - primary key methods
- DBIx::Class::Relationship - relationships between tables
-
-AUTHOR
- Matt S. Trout <mst@shadowcatsystems.co.uk>
-
-CONTRIBUTORS
- Andy Grundman <andy@hybridized.org>
-
- Brian Cassidy <bricas@cpan.org>
-
- Dan Kubb <dan.kubb-cpan@onautopilot.com>
-
- Dan Sully <daniel@cpan.org>
-
- David Kamholz <dkamholz@cpan.org>
-
- Jules Bean
-
- Marcus Ramberg <mramberg@cpan.org>
-
- Paul Makepeace
-
- CL Kao
-
- Jess Robinson
-
- Marcus Ramberg
-
- Will Hawes
-
- Todd Lipcon
-
- Daniel Westermann-Clark <danieltwc@cpan.org>
-
- Alexander Hartmaier <alex_hartmaier@hotmail.com>
-
- Zbigniew Lukasiak
-
- Nigel Metheringham <nigelm@cpan.org>
-
- Jesper Krogh
-
- Brandon Black
-
- Scotty Allen <scotty@scottyallen.com>
-
- Justin Guenther <guentherj@agr.gc.ca>
-
-LICENSE
- You may distribute this code under the same terms as Perl itself.
-
# i.e. first release of 0.XX *must* be 0.XX000. This avoids fBSD ports
# brain damage and presumably various other packaging systems too
-$VERSION = '0.05007';
+$VERSION = '0.05999_02';
sub MODIFY_CODE_ATTRIBUTES {
my ($class,$code,@attrs) = @_;
Brandon Black
+Christopher H. Laco
+
Scotty Allen <scotty@scottyallen.com>
+Robert Sedlacek <phaylon@dunkelheit.at>
+
Justin Guenther <jguenther@agr.gc.ca>
+Daisuke Murase <typester@cpan.org>
+
+Scott McWhirter (konobi)
+
=head1 LICENSE
You may distribute this code under the same terms as Perl itself.
-package DBIx::Class::CDBICompat::AccessorMapping;
+package # hide from PAUSE Indexer
+ DBIx::Class::CDBICompat::AccessorMapping;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::AttributeAPI;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::AttributeAPI;
sub _attrs {
my ($self, @atts) = @_;
-package DBIx::Class::CDBICompat::AutoUpdate;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::AutoUpdate;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::ColumnCase;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::ColumnCase;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::ColumnGroups;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::ColumnGroups;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::Constraints;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::Constraints;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::Constructor;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::Constructor;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::DestroyWarning;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::DestroyWarning;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::GetSet;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::GetSet;
#use base qw/Class::Accessor/;
-package DBIx::Class::CDBICompat::HasA;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::HasA;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::HasMany;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::HasMany;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::ImaDBI;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::ImaDBI;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::LazyLoading;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::LazyLoading;
use strict;
use warnings;
sub resultset_instance {
my $self = shift;
my $rs = $self->next::method(@_);
- $rs = $rs->search(undef, { cols => [ $self->columns('Essential') ] });
+ $rs = $rs->search(undef, { columns => [ $self->columns('Essential') ] });
return $rs;
}
-package DBIx::Class::CDBICompat::LiveObjectIndex;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::LiveObjectIndex;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::MightHave;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::MightHave;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::ObjIndexStubs;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::ObjIndexStubs;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::Pager;\r
+package # hide from PAUSE\r
+ DBIx::Class::CDBICompat::Pager;\r
\r
use strict;\r
use warnings FATAL => 'all';\r
-package DBIx::Class::CDBICompat::ReadOnly;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::ReadOnly;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::Retrieve;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::Retrieve;
use strict;
use warnings FATAL => 'all';
-package DBIx::Class::CDBICompat::Stringify;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::Stringify;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::TempColumns;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::TempColumns;
use strict;
use warnings;
-package DBIx::Class::CDBICompat::Triggers;
+package # hide from PAUSE
+ DBIx::Class::CDBICompat::Triggers;
use strict;
use warnings;
-package DBIx::Class::ClassResolver::PassThrough;
+package # hide from PAUSE
+ DBIx::Class::ClassResolver::PassThrough;
sub class {
shift;
-package DBIx::Class::Componentised;
+package # hide from PAUSE
+ DBIx::Class::Componentised;
use Class::C3;
sub load_components {
my $class = shift;
my $base = $class->component_base_class;
- my @comp = map { "${base}::$_" } grep { $_ !~ /^#/ } @_;
+ my @comp = map { /^\+(.*)$/ ? $1 : "${base}::$_" } grep { $_ !~ /^#/ } @_;
$class->_load_components(@comp);
Class::C3::reinitialize();
}
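With the map change above, a leading C<+> on a component name bypasses the component base class prefix; a sketch (class names below are illustrative):

```perl
# in a table class; '+My::Component' is a made-up example name
__PACKAGE__->load_components(
  'PK::Auto',         # expands to DBIx::Class::PK::Auto
  '+My::Component',   # '+' stripped, loaded verbatim as My::Component
);
```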
use base qw/DBIx::Class/;
__PACKAGE__->load_components(qw/
+ Serialize::Storable
InflateColumn
Relationship
PK
-package DBIx::Class::Cursor;
+package # hide from PAUSE
+ DBIx::Class::Cursor;
use strict;
use warnings;
sub storage { shift->schema_instance(@_)->storage; }
sub resultset_instance {
- my $class = shift;
+ my $class = ref $_[0] || $_[0];
my $source = $class->result_source_instance;
if ($source->result_class ne $class) {
$source = $source->new($source);
foreach my $key (keys %$attrs) {
if (ref $attrs->{$key}
&& exists $class->column_info($key)->{_inflate_info}) {
- $attrs->{$key} = $class->_deflated_column($key, $attrs->{$key});
+# $attrs->{$key} = $class->_deflated_column($key, $attrs->{$key});
+ $class->set_inflated_column ($key, delete $attrs->{$key});
}
}
return $class->next::method($attrs, @rest);
paged resultset, which will fetch only a small number of records at a time:
my $rs = $schema->resultset('Artist')->search(
- {},
+ undef,
{
page => 1, # page to return (defaults to 1)
rows => 10, # number of results per page
The C<page> attribute does not have to be specified in your search:
my $rs = $schema->resultset('Artist')->search(
- {},
+ undef,
{
rows => 10,
}
specify which ones you need:
my $rs = $schema->resultset('Artist')->search(
- {},
+ undef,
{
- cols => [qw/ name /]
+ columns => [qw/ name /]
}
);
to access the returned value:
my $rs = $schema->resultset('Artist')->search(
- {},
+ undef,
{
select => [ 'name', { LENGTH => 'name' } ],
as => [qw/ name name_length /],
=head3 SELECT DISTINCT with multiple columns
my $rs = $schema->resultset('Foo')->search(
- {},
+ undef,
{
select => [
{ distinct => [ $source->columns ] }
=head3 SELECT COUNT(DISTINCT colname)
my $rs = $schema->resultset('Foo')->search(
- {},
+ undef,
{
select => [
{ count => { distinct => 'colname' } }
L<DBIx::Class> supports C<GROUP BY> as follows:
my $rs = $schema->resultset('Artist')->search(
- {},
+ undef,
{
join => [qw/ cds /],
select => [ 'name', { count => 'cds.cdid' } ],
deep using the same syntax as a multi-step join:
my $rs = $schema->resultset('Tag')->search(
- {},
+ undef,
{
prefetch => {
cd => 'artist'
__PACKAGE__->set_primary_key( qw/ albumid artistid / );
+=begin hide
+
You can define relationships for any of your classes. L<DBIx::Class> will
automatically fill in the correct namespace, so if you want to say
"a My::Schema::Album object belongs to a My::Schema::Artist object" you do not
__PACKAGE__->belongs_to('artist' => 'Artist');
+=end hide
+
That's all you need in terms of setup.
=head2 Usage
$new_album->delete;
-You can also remove records without or retrieving first. This
-operation takes the same kind of arguments as a search.
+You can also remove records without retrieving them first, by calling
+delete directly on a ResultSet object.
# Delete all of Falco's albums
- $schema->resultset('Album')->delete({ artist => 'Falco' });
+ $schema->resultset('Album')->search({ artist => 'Falco' })->delete;
=head2 Finding your objects
my ($self, @rest) = @_;
my $ret = $self->next::method(@rest);
- # if all primaries are already populated, skip auto-inc
- my $populated = 0;
- map { $populated++ if defined $self->get_column($_) } $self->primary_columns;
- return $ret if ( $populated == scalar $self->primary_columns );
-
- my ($pri, $too_many) =
- (grep { $self->column_info($_)->{'auto_increment'} }
- $self->primary_columns)
- || $self->primary_columns;
+ my ($pri, $too_many) = grep { !defined $self->get_column($_) } $self->primary_columns;
+ return $ret unless defined $pri; # if all primaries are already populated, skip auto-inc
$self->throw_exception( "More than one possible key found for auto-inc on ".ref $self )
- if $too_many;
- unless (defined $self->get_column($pri)) {
- $self->throw_exception( "Can't auto-inc for $pri on ".ref $self.": no _last_insert_id method" )
- unless $self->can('last_insert_id');
- my $id = $self->last_insert_id;
- $self->throw_exception( "Can't get last insert id" ) unless $id;
- $self->store_column($pri => $id);
- }
+ if defined $too_many;
+
+ my $storage = $self->result_source->storage;
+ $self->throw_exception( "Missing primary key but Storage doesn't support last_insert_id" ) unless $storage->can('last_insert_id');
+ my $id = $storage->last_insert_id($self->result_source,$pri);
+ $self->throw_exception( "Can't get last insert id" ) unless $id;
+ $self->store_column($pri => $id);
+
return $ret;
}
=cut
-__PACKAGE__->mk_classdata('sequence');
+sub sequence {
+ my ($self,$seq) = @_;
+ foreach my $pri ($self->primary_columns) {
+ $self->column_info($pri)->{sequence} = $seq;
+ }
+}
1;
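The reworked sequence() above is now a mutator that stores the sequence name in column_info for every primary key column, where Storage picks it up at insert time; a usage sketch (the sequence name is made up):

```perl
# in a table class with a sequence-backed primary key (Oracle/Pg style)
__PACKAGE__->sequence('artist_id_seq');
# equivalent to setting column_info($pk)->{sequence} for each PK column
```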
__PACKAGE__->load_components(qw/PK::Auto/);
-sub last_insert_id
-{
- my ($self) = @_;
-
- my $dbh = $self->result_source->storage->dbh;
- my $sth = $dbh->prepare_cached("VALUES(IDENTITY_VAL_LOCAL())", {}, 3);
- $sth->execute();
-
- my @res = $sth->fetchrow_array();
-
- return @res ? $res[0] : undef;
-
-}
-
1;
=head1 NAME
-DBIx::Class::PK::Auto::DB2 - Automatic primary key class for DB2
+DBIx::Class::PK::Auto::DB2 - (DEPRECATED) Automatic primary key class for DB2
=head1 SYNOPSIS
- # In your table classes
- __PACKAGE__->load_components(qw/PK::Auto::DB2 Core/);
- __PACKAGE__->set_primary_key('id');
-
-=head1 DESCRIPTION
-
-This class implements autoincrements for DB2.
+Just load PK::Auto instead; auto-inc is now handled by Storage.
=head1 AUTHORS
-Jess Robinson
+Matt S Trout <mst@shadowcatsystems.co.uk>
=head1 LICENSE
-package DBIx::Class::PK::Auto::MSSQL;\r
-\r
-use strict;\r
-use warnings;\r
-\r
-use base qw/DBIx::Class/;\r
-\r
-__PACKAGE__->load_components(qw/PK::Auto/);\r
-\r
-sub last_insert_id {\r
- my( $id ) = $_[0]->result_source->storage->dbh->selectrow_array(\r
- 'SELECT @@IDENTITY' );\r
- return $id;\r
-}\r
-\r
-1;\r
-\r
-=head1 NAME \r
-\r
-DBIx::Class::PK::Auto::MSSQL - Automatic primary key class for MSSQL\r
-\r
-=head1 SYNOPSIS\r
-\r
- # In your table classes\r
- __PACKAGE__->load_components(qw/PK::Auto::MSSQL Core/);\r
- __PACKAGE__->set_primary_key('id');\r
-\r
-=head1 DESCRIPTION\r
-\r
-This class implements autoincrements for MSSQL.\r
-\r
-=head1 AUTHORS\r
-\r
-Brian Cassidy <bricas@cpan.org>\r
-\r
-=head1 LICENSE\r
-\r
-You may distribute this code under the same terms as Perl itself.\r
-\r
-=cut\r
+package DBIx::Class::PK::Auto::MSSQL;
+
+use strict;
+use warnings;
+
+use base qw/DBIx::Class/;
+
+__PACKAGE__->load_components(qw/PK::Auto/);
+
+1;
+
+=head1 NAME
+
+DBIx::Class::PK::Auto::MSSQL - (DEPRECATED) Automatic primary key class for MSSQL
+
+=head1 SYNOPSIS
+
+Just load PK::Auto instead; auto-inc is now handled by Storage.
+
+=head1 AUTHORS
+
+Matt S Trout <mst@shadowcatsystems.co.uk>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
__PACKAGE__->load_components(qw/PK::Auto/);
-sub last_insert_id {
- return $_[0]->result_source->storage->dbh->{mysql_insertid};
-}
-
1;
=head1 NAME
-DBIx::Class::PK::Auto::MySQL - Automatic primary key class for MySQL
+DBIx::Class::PK::Auto::MySQL - (DEPRECATED) Automatic primary key class for MySQL
=head1 SYNOPSIS
- # In your table classes
- __PACKAGE__->load_components(qw/PK::Auto::MySQL Core/);
- __PACKAGE__->set_primary_key('id');
-
-=head1 DESCRIPTION
-
-This class implements autoincrements for MySQL.
+Just load PK::Auto instead; auto-inc is now handled by Storage.
=head1 AUTHORS
-Matt S. Trout <mst@shadowcatsystems.co.uk>
+Matt S Trout <mst@shadowcatsystems.co.uk>
=head1 LICENSE
use strict;
use warnings;
-use Carp qw/croak/;
-
use base qw/DBIx::Class/;
__PACKAGE__->load_components(qw/PK::Auto/);
-sub last_insert_id {
- my $self = shift;
- $self->get_autoinc_seq unless $self->{_autoinc_seq};
- my $sql = "SELECT " . $self->{_autoinc_seq} . ".currval FROM DUAL";
- my ($id) = $self->result_source->storage->dbh->selectrow_array($sql);
- return $id;
-}
-
-sub get_autoinc_seq {
- my $self = shift;
-
- # return the user-defined sequence if known
- if ($self->sequence) {
- return $self->{_autoinc_seq} = $self->sequence;
- }
-
- # look up the correct sequence automatically
- my $dbh = $self->result_source->storage->dbh;
- my $sql = qq{
- SELECT trigger_body FROM ALL_TRIGGERS t
- WHERE t.table_name = ?
- AND t.triggering_event = 'INSERT'
- AND t.status = 'ENABLED'
- };
- # trigger_body is a LONG
- $dbh->{LongReadLen} = 64 * 1024 if ($dbh->{LongReadLen} < 64 * 1024);
- my $sth = $dbh->prepare($sql);
- $sth->execute( uc($self->result_source->name) );
- while (my ($insert_trigger) = $sth->fetchrow_array) {
- if ($insert_trigger =~ m!(\w+)\.nextval!i ) {
- $self->{_autoinc_seq} = uc($1);
- }
- }
- unless ($self->{_autoinc_seq}) {
- croak "Unable to find a sequence INSERT trigger on table '" . $self->_table_name . "'.";
- }
-}
-
1;
=head1 NAME
-DBIx::Class::PK::Auto::Oracle - Automatic primary key class for Oracle
+DBIx::Class::PK::Auto::Oracle - (DEPRECATED) Automatic primary key class for Oracle
=head1 SYNOPSIS
- # In your table classes
- __PACKAGE__->load_components(qw/PK::Auto::Oracle Core/);
- __PACKAGE__->set_primary_key('id');
-
-=head1 DESCRIPTION
-
-This class implements autoincrements for Oracle.
+Just load PK::Auto instead; auto-inc is now handled by Storage.
=head1 AUTHORS
-Andy Grundman <andy@hybridized.org>
-
-Scott Connelly <scottsweep@yahoo.com>
+Matt S Trout <mst@shadowcatsystems.co.uk>
=head1 LICENSE
__PACKAGE__->load_components(qw/PK::Auto/);
-sub last_insert_id {
- my $self = shift;
- $self->get_autoinc_seq unless $self->{_autoinc_seq};
- $self->result_source->storage->dbh->last_insert_id(undef,undef,undef,undef,
- {sequence=>$self->{_autoinc_seq}});
-}
-
-sub get_autoinc_seq {
- my $self = shift;
-
- # return the user-defined sequence if known
- if ($self->sequence) {
- return $self->{_autoinc_seq} = $self->sequence;
- }
-
- my @pri = $self->primary_columns;
- my $dbh = $self->result_source->storage->dbh;
- my ($schema,$table) = $self->table =~ /^(.+)\.(.+)$/ ? ($1,$2) : (undef,$self->table);
- while (my $col = shift @pri) {
- my $info = $dbh->column_info(undef,$schema,$table,$col)->fetchrow_arrayref;
- if (defined $info->[12] and $info->[12] =~
- /^nextval\('([^']+)'::(?:text|regclass)\)/)
- {
- $self->{_autoinc_seq} = $1;
- #$self->{_autoinc_seq} =~ s/"//g;
- last;
- }
- }
-}
-
1;
=head1 NAME
-DBIx::Class::PK::Auto::Pg - Automatic primary key class for PostgreSQL
+DBIx::Class::PK::Auto::Pg - (DEPRECATED) Automatic primary key class for Pg
=head1 SYNOPSIS
- # In your table classes
- __PACKAGE__->load_components(qw/PK::Auto::Pg Core/);
- __PACKAGE__->set_primary_key('id');
-
-=head1 DESCRIPTION
-
-This class implements autoincrements for PostgreSQL.
+Just load PK::Auto instead; auto-inc is now handled by Storage.
=head1 AUTHORS
-Marcus Ramberg <m.ramberg@cpan.org>
+Matt S Trout <mst@shadowcatsystems.co.uk>
=head1 LICENSE
__PACKAGE__->load_components(qw/PK::Auto/);
-sub last_insert_id {
- return $_[0]->result_source->storage->dbh->func('last_insert_rowid');
-}
-
1;
=head1 NAME
-DBIx::Class::PK::Auto::SQLite - Automatic primary key class for SQLite
+DBIx::Class::PK::Auto::SQLite - (DEPRECATED) Automatic primary key class for SQLite
=head1 SYNOPSIS
- # In your table classes
- __PACKAGE__->load_components(qw/PK::Auto::SQLite Core/);
- __PACKAGE__->set_primary_key('id');
-
-=head1 DESCRIPTION
-
-This class implements autoincrements for SQLite.
+Just load PK::Auto instead; auto-inc is now handled by Storage.
=head1 AUTHORS
-Matt S. Trout <mst@shadowcatsystems.co.uk>
+Matt S Trout <mst@shadowcatsystems.co.uk>
=head1 LICENSE
=head2 many_to_many
- __PACKAGE__->many_to_many( 'accessorname' => 'a_to_b', 'table_b' );
- my @f_objs = $obj_a->accessorname;
+ __PACKAGE__->many_to_many( 'accessorname' => 'a_to_b', 'table_b' );
+ my @f_objs = $obj_a->accessorname;
+
+Creates an accessor bridging two relationships; not strictly a relationship
+in its own right, although the accessor will return a resultset or collection
+of objects just as a has_many would.
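The bridge described above rests on two ordinary relationships; a sketch of the full setup (all class, relationship, and column names are illustrative):

```perl
# the two relationships being bridged
My::Schema::A->has_many( a_to_b => 'My::Schema::AToB', 'a_id' );
My::Schema::AToB->belongs_to( table_b => 'My::Schema::B', 'b_id' );

# the many_to_many accessor composes them
My::Schema::A->many_to_many( 'accessorname' => 'a_to_b', 'table_b' );

my @f_objs = $obj_a->accessorname;   # My::Schema::B objects on the far side
```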
=cut
-package DBIx::Class::Relationship::Accessor;
+package # hide from PAUSE
+ DBIx::Class::Relationship::Accessor;
use strict;
use warnings;
=cut
sub search_related {
- my $self = shift;
- die "Can't call *_related as class methods" unless ref $self;
- my $rel = shift;
- my $attrs = { };
- if (@_ > 1 && ref $_[$#_] eq 'HASH') {
- $attrs = { %{ pop(@_) } };
- }
- my $rel_obj = $self->relationship_info($rel);
- $self->throw_exception( "No such relationship ${rel}" ) unless $rel_obj;
- $attrs = { %{$rel_obj->{attrs} || {}}, %{$attrs || {}} };
-
- $self->throw_exception( "Invalid query: @_" ) if (@_ > 1 && (@_ % 2 == 1));
- my $query = ((@_ > 1) ? {@_} : shift);
-
- my ($cond) = $self->result_source->resolve_condition($rel_obj->{cond}, $rel, $self);
- if (ref $cond eq 'ARRAY') {
- $cond = [ map { my %hash;
- foreach my $key (keys %{$_}) {
- unless ($key =~ m/\./) {
- $hash{"me.$key"} = $_->{$key};
- } else {
- $hash{$key} = $_->{$key};
- }
- }; \%hash; } @$cond ];
- } else {
- foreach my $key (keys %$cond) {
- unless ($key =~ m/\./) {
- $cond->{"me.$key"} = delete $cond->{$key};
- }
- }
- }
- $query = ($query ? { '-and' => [ $cond, $query ] } : $cond);
- #use Data::Dumper; warn Dumper($cond);
- #warn $rel_obj->{class}." $meth $cond ".join(', ', @{$attrs->{bind}||[]});
- return $self->result_source->related_source($rel
- )->resultset->search($query, $attrs);
+ return shift->related_resultset(shift)->search(@_);
}
=head2 count_related
sub create_related {
my $self = shift;
my $rel = shift;
- return $self->search_related($rel)->create(@_);
+ my $obj = $self->search_related($rel)->create(@_);
+ delete $self->{related_resultsets}->{$rel};
+ return $obj;
}
=head2 new_related
my $f_class = $self->result_source->schema->class($rel_obj->{class});
$self->throw_exception( "Object $f_obj isn't a ".$f_class )
unless $f_obj->isa($f_class);
- foreach my $key (keys %$cond) {
- next if ref $cond->{$key}; # Skip literals and complex conditions
- $self->throw_exception("set_from_related can't handle $key as key")
- unless $key =~ m/^foreign\.([^\.]+)$/;
- my $val = $f_obj->get_column($1);
- $self->throw_exception("set_from_related can't handle ".$cond->{$key}." as value")
- unless $cond->{$key} =~ m/^self\.([^\.]+)$/;
- $self->set_column($1 => $val);
- }
+ $self->set_columns(
+ $self->result_source->resolve_condition(
+ $rel_obj->{cond}, $f_obj, $rel));
return 1;
}
sub delete_related {
my $self = shift;
- return $self->search_related(@_)->delete;
+ my $obj = $self->search_related(@_)->delete;
+ delete $self->{related_resultsets}->{$_[0]};
+ return $obj;
}
1;
+=head2 related_resultset($name)
+
+Returns a L<DBIx::Class::ResultSet> for the relationship named $name.
+
+ $rs = $obj->related_resultset('related_table');
+
+=cut
+
+sub related_resultset {
+ my $self = shift;
+ $self->throw_exception("Can't call *_related as class methods") unless ref $self;
+ my $rel = shift;
+ my $rel_obj = $self->relationship_info($rel);
+ $self->throw_exception( "No such relationship ${rel}" ) unless $rel_obj;
+
+ return $self->{related_resultsets}{$rel} ||= do {
+ my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
+ $attrs = { %{$rel_obj->{attrs} || {}}, %$attrs };
+
+ $self->throw_exception( "Invalid query: @_" ) if (@_ > 1 && (@_ % 2 == 1));
+ my $query = ((@_ > 1) ? {@_} : shift);
+
+ my $cond = $self->result_source->resolve_condition($rel_obj->{cond}, $rel, $self);
+ if (ref $cond eq 'ARRAY') {
+ $cond = [ map { my $hash;
+ foreach my $key (keys %$_) {
+ my $newkey = $key =~ /\./ ? $key : "me.$key";
+ $hash->{$newkey} = $_->{$key};
+ }; $hash } @$cond ];
+ } else {
+ foreach my $key (grep { ! /\./ } keys %$cond) {
+ $cond->{"me.$key"} = delete $cond->{$key};
+ }
+ }
+ $query = ($query ? { '-and' => [ $cond, $query ] } : $cond);
+ $self->result_source->related_source($rel)->resultset->search($query, $attrs);
+ };
+}
+
=head1 AUTHORS
Matt S. Trout <mst@shadowcatsystems.co.uk>
if ($@) {
$class->throw_exception($@) unless $@ =~ /Can't locate/;
}
-
- my %f_primaries;
- $f_primaries{$_} = 1 for eval { $f_class->primary_columns };
- my $f_loaded = !$@;
- # single key relationship
+ # no join condition or just a column name
if (!ref $cond) {
+ my %f_primaries = map { $_ => 1 } eval { $f_class->primary_columns };
$class->throw_exception("Can't infer join condition for ${rel} on ${class}; unable to load ${f_class}")
- unless $f_loaded;
+ if $@;
my ($pri, $too_many) = keys %f_primaries;
$class->throw_exception("Can't infer join condition for ${rel} on ${class}; ${f_class} has no primary keys")
unless defined $pri;
- $class->throw_exception("Can't infer join condition for ${rel} on ${class}; ${f_class} has multiple primary key")
+ $class->throw_exception("Can't infer join condition for ${rel} on ${class}; ${f_class} has multiple primary keys")
if $too_many;
my $fk = defined $cond ? $cond : $rel;
{ accessor => $acc_type, %{$attrs || {}} }
);
}
- # multiple key relationship
+ # explicit join condition
elsif (ref $cond eq 'HASH') {
my $cond_rel;
for (keys %$cond) {
-package DBIx::Class::Relationship::CascadeActions;
+package # hide from PAUSE
+ DBIx::Class::Relationship::CascadeActions;
sub delete {
my ($self, @rest) = @_;
-package DBIx::Class::Relationship::HasMany;
+package # hide from PAUSE
+ DBIx::Class::Relationship::HasMany;
use strict;
use warnings;
my ($pri, $too_many) = $class->primary_columns;
$class->throw_exception( "has_many can only infer join for a single primary key; ${class} has more" )
if $too_many;
- my $f_key;
- my $f_class_loaded = eval { $f_class->columns };
- my $guess;
+
+ my ($f_key,$guess);
if (defined $cond && length $cond) {
$f_key = $cond;
$guess = "caller specified foreign key '$f_key'";
$f_key = lc $1; # go ahead and guess; best we can do
$guess = "using our class name '$class' as foreign key";
}
+
+ my $f_class_loaded = eval { $f_class->columns };
$class->throw_exception("No such column ${f_key} on foreign class ${f_class} ($guess)")
if $f_class_loaded && !$f_class->has_column($f_key);
- $cond = { "foreign.${f_key}" => "self.${pri}" },
+
+ $cond = { "foreign.${f_key}" => "self.${pri}" };
}
$class->add_relationship($rel, $f_class, $cond,
{ accessor => 'multi',
join_type => 'LEFT',
cascade_delete => 1,
+ cascade_copy => 1,
%{$attrs||{}} } );
}
-package DBIx::Class::Relationship::HasOne;
+package # hide from PAUSE
+ DBIx::Class::Relationship::HasOne;
use strict;
use warnings;
my ($pri, $too_many) = $class->primary_columns;
$class->throw_exception( "might_have/has_one can only infer join for a single primary key; ${class} has more" )
if $too_many;
- my $f_key;
my $f_class_loaded = eval { $f_class->columns };
- my $guess;
+ my ($f_key,$guess);
if (defined $cond && length $cond) {
$f_key = $cond;
$guess = "caller specified foreign key '$f_key'";
-package DBIx::Class::Relationship::Helpers;
+package # hide from PAUSE
+ DBIx::Class::Relationship::Helpers;
use strict;
use warnings;
-package DBIx::Class::Relationship::ManyToMany;
+package # hide from PAUSE
+ DBIx::Class::Relationship::ManyToMany;
use strict;
use warnings;
sub many_to_many {
my ($class, $meth, $rel, $f_rel, $rel_attrs) = @_;
- $rel_attrs ||= {};
-
{
no strict 'refs';
no warnings 'redefine';
*{"${class}::${meth}"} = sub {
my $self = shift;
my $attrs = @_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {};
- $self->search_related($rel)->search_related($f_rel, @_ > 0 ? @_ : undef, { %$rel_attrs, %$attrs });
+ $self->search_related($rel)->search_related($f_rel, @_ > 0 ? @_ : undef, { %{$rel_attrs||{}}, %$attrs });
};
}
}
-package DBIx::Class::Relationship::ProxyMethods;
+package # hide from PAUSE
+ DBIx::Class::Relationship::ProxyMethods;
use strict;
use warnings;
use base qw/DBIx::Class/;
__PACKAGE__->load_components(qw/AccessorGroup/);
-__PACKAGE__->mk_group_accessors('simple' => 'result_source');
+__PACKAGE__->mk_group_accessors('simple' => qw/result_source result_class/);
=head1 NAME
sub new {
my $class = shift;
return $class->new_result(@_) if ref $class;
+
my ($source, $attrs) = @_;
#use Data::Dumper; warn Dumper($attrs);
$attrs = Storable::dclone($attrs || {}); # { %{ $attrs || {} } };
- my %seen;
my $alias = ($attrs->{alias} ||= 'me');
- if ($attrs->{cols} || !$attrs->{select}) {
- delete $attrs->{as} if $attrs->{cols};
- my @cols = ($attrs->{cols}
- ? @{delete $attrs->{cols}}
- : $source->columns);
- $attrs->{select} = [ map { m/\./ ? $_ : "${alias}.$_" } @cols ];
- }
- $attrs->{as} ||= [ map { m/^$alias\.(.*)$/ ? $1 : $_ } @{$attrs->{select}} ];
+
+ $attrs->{columns} ||= delete $attrs->{cols} if $attrs->{cols};
+ delete $attrs->{as} if $attrs->{columns};
+ $attrs->{columns} ||= [ $source->columns ] unless $attrs->{select};
+ $attrs->{select} = [ map { m/\./ ? $_ : "${alias}.$_" } @{delete $attrs->{columns}} ]
+ if $attrs->{columns};
+ $attrs->{as} ||= [ map { m/^\Q$alias.\E(.+)$/ ? $1 : $_ } @{$attrs->{select}} ];
if (my $include = delete $attrs->{include_columns}) {
push(@{$attrs->{select}}, @$include);
- push(@{$attrs->{as}}, map { m/([^\.]+)$/; $1; } @$include);
+ push(@{$attrs->{as}}, map { m/([^.]+)$/; $1; } @$include);
}
#use Data::Dumper; warn Dumper(@{$attrs}{qw/select as/});
+
$attrs->{from} ||= [ { $alias => $source->from } ];
$attrs->{seen_join} ||= {};
+ my %seen;
if (my $join = delete $attrs->{join}) {
- foreach my $j (ref $join eq 'ARRAY'
- ? (@{$join}) : ($join)) {
+ foreach my $j (ref $join eq 'ARRAY' ? @$join : ($join)) {
if (ref $j eq 'HASH') {
$seen{$_} = 1 foreach keys %$j;
      } else {
        $seen{$j} = 1;
      }
push(@{$attrs->{from}}, $source->resolve_join($join, $attrs->{alias}, $attrs->{seen_join}));
}
+
$attrs->{group_by} ||= $attrs->{select} if delete $attrs->{distinct};
+ $attrs->{order_by} = [ $attrs->{order_by} ] if $attrs->{order_by} and !ref($attrs->{order_by});
+ $attrs->{order_by} ||= [];
+ my $collapse = $attrs->{collapse} || {};
if (my $prefetch = delete $attrs->{prefetch}) {
- foreach my $p (ref $prefetch eq 'ARRAY'
- ? (@{$prefetch}) : ($prefetch)) {
- if( ref $p eq 'HASH' ) {
+ my @pre_order;
+ foreach my $p (ref $prefetch eq 'ARRAY' ? @$prefetch : ($prefetch)) {
+ if ( ref $p eq 'HASH' ) {
foreach my $key (keys %$p) {
push(@{$attrs->{from}}, $source->resolve_join($p, $attrs->{alias}))
unless $seen{$key};
}
- }
- else {
+ } else {
push(@{$attrs->{from}}, $source->resolve_join($p, $attrs->{alias}))
unless $seen{$p};
}
- my @prefetch = $source->resolve_prefetch($p, $attrs->{alias});
- #die Dumper \@cols;
+ my @prefetch = $source->resolve_prefetch(
+ $p, $attrs->{alias}, {}, \@pre_order, $collapse);
push(@{$attrs->{select}}, map { $_->[0] } @prefetch);
push(@{$attrs->{as}}, map { $_->[1] } @prefetch);
}
+ push(@{$attrs->{order_by}}, @pre_order);
}
+ $attrs->{collapse} = $collapse;
+# use Data::Dumper; warn Dumper($collapse) if keys %{$collapse};
if ($attrs->{page}) {
$attrs->{rows} ||= 10;
$attrs->{offset} ||= 0;
$attrs->{offset} += ($attrs->{rows} * ($attrs->{page} - 1));
}
- my $new = {
+
+ bless {
result_source => $source,
+ result_class => $attrs->{result_class} || $source->result_class,
cond => $attrs->{where},
from => $attrs->{from},
+ collapse => $collapse,
count => undef,
page => delete $attrs->{page},
pager => undef,
- attrs => $attrs };
- bless ($new, $class);
- return $new;
+ attrs => $attrs
+ }, $class;
}
=head2 search
my $new_rs = $rs->search({ foo => 3 });
If you need to pass in additional attributes but no additional condition,
-call it as C<search({}, \%attrs);>.
+call it as C<search(undef, \%attrs);>.
# "SELECT foo, bar FROM $class_table"
- my @all = $class->search({}, { cols => [qw/foo bar/] });
+ my @all = $class->search(undef, { columns => [qw/foo bar/] });
=cut
sub search {
my $self = shift;
- #use Data::Dumper;warn Dumper(@_);
-
- my $attrs = { %{$self->{attrs}} };
- if (@_ > 1 && ref $_[$#_] eq 'HASH') {
- $attrs = { %$attrs, %{ pop(@_) } };
- }
-
- my $where = (@_ ? ((@_ == 1 || ref $_[0] eq "HASH") ? shift : {@_}) : undef());
- if (defined $where) {
- $where = (defined $attrs->{where}
+ my $rs;
+ if( @_ ) {
+
+ my $attrs = { %{$self->{attrs}} };
+ my $having = delete $attrs->{having};
+ $attrs = { %$attrs, %{ pop(@_) } } if @_ > 1 and ref $_[$#_] eq 'HASH';
+
+ my $where = (@_
+ ? ((@_ == 1 || ref $_[0] eq "HASH")
+ ? shift
+ : ((@_ % 2)
+ ? $self->throw_exception(
+ "Odd number of arguments to search")
+ : {@_}))
+ : undef());
+ if (defined $where) {
+ $attrs->{where} = (defined $attrs->{where}
? { '-and' =>
[ map { ref $_ eq 'ARRAY' ? [ -or => $_ ] : $_ }
$where, $attrs->{where} ] }
: $where);
- $attrs->{where} = $where;
- }
+ }
- my $rs = (ref $self)->new($self->result_source, $attrs);
+ if (defined $having) {
+ $attrs->{having} = (defined $attrs->{having}
+ ? { '-and' =>
+ [ map { ref $_ eq 'ARRAY' ? [ -or => $_ ] : $_ }
+ $having, $attrs->{having} ] }
+ : $having);
+ }
+ $rs = (ref $self)->new($self->result_source, $attrs);
+ }
+ else {
+ $rs = $self;
+ $rs->reset;
+ }
return (wantarray ? $rs->all : $rs);
}
my @cols = $self->result_source->primary_columns;
if (exists $attrs->{key}) {
my %uniq = $self->result_source->unique_constraints;
- $self->( "Unknown key " . $attrs->{key} . " on " . $self->name )
+ $self->throw_exception( "Unknown key $attrs->{key} on " . $self->result_source->name )
unless exists $uniq{$attrs->{key}};
@cols = @{ $uniq{$attrs->{key}} };
}
} else {
$query = {@vals};
}
- foreach (keys %$query) {
- next if m/\./;
- $query->{$self->{attrs}{alias}.'.'.$_} = delete $query->{$_};
+ foreach my $key (grep { ! m/\./ } keys %$query) {
+ $query->{"$self->{attrs}{alias}.$key"} = delete $query->{$key};
}
#warn Dumper($query);
- return $self->search($query,$attrs)->next;
+
+ if (keys %$attrs) {
+ my $rs = $self->search($query,$attrs);
+ return keys %{$rs->{collapse}} ? $rs->next : $rs->single;
+ } else {
+ return keys %{$self->{collapse}} ? $self->search($query)->next : $self->single($query);
+ }
}
=head2 search_related
=cut
sub search_related {
- my ($self, $rel, @rest) = @_;
- my $rel_obj = $self->result_source->relationship_info($rel);
- $self->throw_exception(
- "No such relationship ${rel} in search_related")
- unless $rel_obj;
- my $rs = $self->search(undef, { join => $rel });
- my $alias = ($rs->{attrs}{seen_join}{$rel} > 1
- ? join('_', $rel, $rs->{attrs}{seen_join}{$rel})
- : $rel);
- return $self->result_source->schema->resultset($rel_obj->{class}
- )->search( undef,
- { %{$rs->{attrs}},
- alias => $alias,
- select => undef(),
- as => undef() }
- )->search(@rest);
+ return shift->related_resultset(shift)->search(@_);
}
=head2 cursor
sub cursor {
my ($self) = @_;
- my ($attrs) = $self->{attrs};
- $attrs = { %$attrs };
+ my $attrs = { %{$self->{attrs}} };
return $self->{cursor}
||= $self->result_source->storage->select($self->{from}, $attrs->{select},
$attrs->{where},$attrs);
}
+=head2 single
+
+Inflates the first result without creating a cursor.
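+
+For example (C<cdid> is an assumed primary-key column here):
+
+  my $cd = $rs->single({ cdid => 1 });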
+
+=cut
+
+sub single {
+ my ($self, $where) = @_;
+ my $attrs = { %{$self->{attrs}} };
+ if ($where) {
+ if (defined $attrs->{where}) {
+ $attrs->{where} = {
+ '-and' =>
+ [ map { ref $_ eq 'ARRAY' ? [ -or => $_ ] : $_ }
+ $where, delete $attrs->{where} ]
+ };
+ } else {
+ $attrs->{where} = $where;
+ }
+ }
+ my @data = $self->result_source->storage->select_single(
+ $self->{from}, $attrs->{select},
+ $attrs->{where},$attrs);
+ return (@data ? $self->_construct_object(@data) : ());
+}
+
+
=head2 search_like
Perform a search, but use C<LIKE> instead of equality as the condition. Note
=cut
sub search_like {
- my $class = shift;
- my $attrs = { };
- if (@_ > 1 && ref $_[$#_] eq 'HASH') {
- $attrs = pop(@_);
- }
- my $query = ref $_[0] eq "HASH" ? { %{shift()} }: {@_};
+ my $class = shift;
+ my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
+ my $query = ref $_[0] eq 'HASH' ? { %{shift()} }: {@_};
$query->{$_} = { 'like' => $query->{$_} } for keys %$query;
return $class->search($query, { %$attrs });
}
Can be used to efficiently iterate over records in the resultset:
- my $rs = $schema->resultset('CD')->search({});
+ my $rs = $schema->resultset('CD')->search;
while (my $cd = $rs->next) {
print $cd->title;
}
sub next {
my ($self) = @_;
- my @row = $self->cursor->next;
+ if (@{$self->{all_cache} || []}) {
+ $self->{all_cache_position} ||= 0;
+ return $self->{all_cache}->[$self->{all_cache_position}++];
+ }
+ if ($self->{attrs}{cache}) {
+ $self->{all_cache_position} = 1;
+ return ($self->all)[0];
+ }
+ my @row = (exists $self->{stashed_row}
+ ? @{delete $self->{stashed_row}}
+ : $self->cursor->next);
# warn Dumper(\@row); use Data::Dumper;
return unless (@row);
return $self->_construct_object(@row);
sub _construct_object {
my ($self, @row) = @_;
my @as = @{ $self->{attrs}{as} };
- #warn "@cols -> @row";
+
+ my $info = $self->_collapse_result(\@as, \@row);
+
+ my $new = $self->result_class->inflate_result($self->result_source, @$info);
+
+ $new = $self->{attrs}{record_filter}->($new)
+ if exists $self->{attrs}{record_filter};
+ return $new;
+}
+
+sub _collapse_result {
+ my ($self, $as, $row, $prefix) = @_;
+
+ my %const;
+
+ my @copy = @$row;
+ foreach my $this_as (@$as) {
+ my $val = shift @copy;
+ if (defined $prefix) {
+ if ($this_as =~ m/^\Q${prefix}.\E(.+)$/) {
+ my $remain = $1;
+ $remain =~ /^(?:(.*)\.)?([^.]+)$/;
+ $const{$1||''}{$2} = $val;
+ }
+ } else {
+ $this_as =~ /^(?:(.*)\.)?([^.]+)$/;
+ $const{$1||''}{$2} = $val;
+ }
+ }
+
my $info = [ {}, {} ];
- foreach my $as (@as) {
+ foreach my $key (keys %const) {
+ if (length $key) {
+ my $target = $info;
+ my @parts = split(/\./, $key);
+ foreach my $p (@parts) {
+ $target = $target->[1]->{$p} ||= [];
+ }
+ $target->[0] = $const{$key};
+ } else {
+ $info->[0] = $const{$key};
+ }
+ }
+
+ my @collapse = (defined($prefix)
+ ? (map { (m/^\Q${prefix}.\E(.+)$/ ? ($1) : ()); }
+ keys %{$self->{collapse}})
+ : keys %{$self->{collapse}});
+ if (@collapse) {
+ my ($c) = sort { length $a <=> length $b } @collapse;
my $target = $info;
- my @parts = split(/\./, $as);
- my $col = pop(@parts);
- foreach my $p (@parts) {
+ foreach my $p (split(/\./, $c)) {
$target = $target->[1]->{$p} ||= [];
}
- $target->[0]->{$col} = shift @row;
+ my $c_prefix = (defined($prefix) ? "${prefix}.${c}" : $c);
+ my @co_key = @{$self->{collapse}{$c_prefix}};
+ my %co_check = map { ($_, $target->[0]->{$_}); } @co_key;
+ my $tree = $self->_collapse_result($as, $row, $c_prefix);
+ my (@final, @raw);
+ while ( !(grep {
+ !defined($tree->[0]->{$_})
+ || $co_check{$_} ne $tree->[0]->{$_}
+ } @co_key) ) {
+ push(@final, $tree);
+ last unless (@raw = $self->cursor->next);
+ $row = $self->{stashed_row} = \@raw;
+ $tree = $self->_collapse_result($as, $row, $c_prefix);
+ #warn Data::Dumper::Dumper($tree, $row);
+ }
+ @$target = @final;
}
- #use Data::Dumper; warn Dumper(\@as, $info);
- my $new = $self->result_source->result_class->inflate_result(
- $self->result_source, @$info);
- $new = $self->{attrs}{record_filter}->($new)
- if exists $self->{attrs}{record_filter};
- return $new;
+
+ return $info;
}
=head2 result_source
sub count {
my $self = shift;
- return $self->search(@_)->count if @_ && defined $_[0];
+ return $self->search(@_)->count if @_ and defined $_[0];
unless (defined $self->{count}) {
- my $group_by;
- my $select = { 'count' => '*' };
- if( $group_by = delete $self->{attrs}{group_by} ) {
+ return scalar @{ $self->get_cache } if @{ $self->get_cache };
+ my $select = { count => '*' };
+ my $attrs = { %{ $self->{attrs} } };
+ if (my $group_by = delete $attrs->{group_by}) {
+ delete $attrs->{having};
my @distinct = (ref $group_by ? @$group_by : ($group_by));
# todo: try CONCAT for multi-column pk
my @pk = $self->result_source->primary_columns;
- if( scalar(@pk) == 1 ) {
- my $pk = shift(@pk);
- my $alias = $self->{attrs}{alias};
- my $re = qr/^($alias\.)?$pk$/;
- foreach my $column ( @distinct) {
- if( $column =~ $re ) {
- @distinct = ( $column );
+ if (@pk == 1) {
+ foreach my $column (@distinct) {
+ if ($column =~ qr/^(?:\Q$attrs->{alias}.\E)?$pk[0]$/) {
+ @distinct = ($column);
last;
}
}
}
- $select = { count => { 'distinct' => \@distinct } };
+ $select = { count => { distinct => \@distinct } };
#use Data::Dumper; die Dumper $select;
}
- my $attrs = { %{ $self->{attrs} },
- select => $select,
- as => [ 'count' ] };
+ $attrs->{select} = $select;
+ $attrs->{as} = [qw/count/];
# offset, order by and page are not needed to count. record_filter is cdbi
delete $attrs->{$_} for qw/rows offset order_by page pager record_filter/;
($self->{count}) = (ref $self)->new($self->result_source, $attrs)->cursor->next;
- $self->{attrs}{group_by} = $group_by;
}
return 0 unless $self->{count};
my $count = $self->{count};
$count -= $self->{attrs}{offset} if $self->{attrs}{offset};
$count = $self->{attrs}{rows} if
- ($self->{attrs}{rows} && $self->{attrs}{rows} < $count);
+ $self->{attrs}{rows} and $self->{attrs}{rows} < $count;
return $count;
}
sub all {
my ($self) = @_;
- return map { $self->_construct_object(@$_); }
- $self->cursor->all;
+ return @{ $self->get_cache } if @{ $self->get_cache };
+
+ my @obj;
+
+ if (keys %{$self->{collapse}}) {
+ # Using $self->cursor->all is really just an optimisation.
+ # If we're collapsing has_many prefetches it probably makes
+ # very little difference, and this is cleaner than hacking
+ # _construct_object to survive the approach
+ my @row;
+ $self->cursor->reset;
+ while (@row = $self->cursor->next) {
+ push(@obj, $self->_construct_object(@row));
+ }
+ } else {
+ @obj = map { $self->_construct_object(@$_) } $self->cursor->all;
+ }
+
+ $self->set_cache(\@obj) if $self->{attrs}{cache};
+ return @obj;
}
=head2 reset
sub reset {
my ($self) = @_;
+ $self->{all_cache_position} = 0;
$self->cursor->reset;
return $self;
}
if (ref $self->{cond} eq 'ARRAY') {
$del = [ map { my %hash;
foreach my $key (keys %{$_}) {
- $key =~ /([^\.]+)$/;
+ $key =~ /([^.]+)$/;
$hash{$1} = $_->{$key};
}; \%hash; } @{$self->{cond}} ];
} elsif ((keys %{$self->{cond}})[0] eq '-and') {
$del->{-and} = [ map { my %hash;
foreach my $key (keys %{$_}) {
- $key =~ /([^\.]+)$/;
+ $key =~ /([^.]+)$/;
$hash{$1} = $_->{$key};
}; \%hash; } @{$self->{cond}{-and}} ];
} else {
foreach my $key (keys %{$self->{cond}}) {
- $key =~ /([^\.]+)$/;
+ $key =~ /([^.]+)$/;
$del->{$1} = $self->{cond}{$key};
}
}
my %new = %$values;
my $alias = $self->{attrs}{alias};
foreach my $key (keys %{$self->{cond}||{}}) {
- $new{$1} = $self->{cond}{$key} if ($key =~ m/^(?:$alias\.)?([^\.]+)$/);
+ $new{$1} = $self->{cond}{$key} if ($key =~ m/^(?:\Q${alias}.\E)?([^.]+)$/);
}
- my $obj = $self->result_source->result_class->new(\%new);
+ my $obj = $self->result_class->new(\%new);
$obj->result_source($self->result_source) if $obj->can('result_source');
- $obj;
+ return $obj;
}
=head2 create
sub find_or_create {
my $self = shift;
my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
- my $hash = ref $_[0] eq "HASH" ? shift : {@_};
+ my $hash = ref $_[0] eq 'HASH' ? shift : {@_};
my $exists = $self->find($hash, $attrs);
- return defined($exists) ? $exists : $self->create($hash);
+ return defined $exists ? $exists : $self->create($hash);
}
=head2 update_or_create
sub update_or_create {
my $self = shift;
-
my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
- my $hash = ref $_[0] eq "HASH" ? shift : {@_};
+ my $hash = ref $_[0] eq 'HASH' ? shift : {@_};
my %unique_constraints = $self->result_source->unique_constraints;
my @constraint_names = (exists $attrs->{key}
if (scalar keys %unique_hash == scalar @unique_cols);
}
- my $row;
if (@unique_hashes) {
- $row = $self->search(\@unique_hashes, { rows => 1 })->first;
- if ($row) {
+ my $row = $self->single(\@unique_hashes);
+ if (defined $row) {
$row->set_columns($hash);
$row->update;
+ return $row;
}
}
- unless ($row) {
- $row = $self->create($hash);
+ return $self->create($hash);
+}
+
+=head2 get_cache
+
+Gets the contents of the cache for the resultset.
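+
+Returns an arrayref, which is empty if nothing has been cached:
+
+  my @objs = @{ $rs->get_cache };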
+
+=cut
+
+sub get_cache {
+ shift->{all_cache} || [];
+}
+
+=head2 set_cache
+
+Sets the contents of the cache for the resultset. Expects an arrayref of
+objects of the same class as those produced by the resultset.
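+
+For example, to populate the cache with the current results:
+
+  $rs->set_cache([ $rs->all ]);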
+
+=cut
+
+sub set_cache {
+ my ( $self, $data ) = @_;
+ $self->throw_exception("set_cache requires an arrayref")
+ if ref $data ne 'ARRAY';
+ my $result_class = $self->result_class;
+ foreach( @$data ) {
+ $self->throw_exception("cannot cache object of type '$_', expected '$result_class'")
+ if ref $_ ne $result_class;
}
+ $self->{all_cache} = $data;
+}
- return $row;
+=head2 clear_cache
+
+Clears the cache for the resultset.
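+
+For example:
+
+  $rs->clear_cache; # equivalent to $rs->set_cache([])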
+
+=cut
+
+sub clear_cache {
+ shift->set_cache([]);
+}
+
+=head2 related_resultset
+
+Returns a related resultset for the supplied relationship name.
+
+ $rs = $rs->related_resultset('foo');
+
+=cut
+
+sub related_resultset {
+ my ( $self, $rel, @rest ) = @_;
+ $self->{related_resultsets} ||= {};
+ return $self->{related_resultsets}{$rel} ||= do {
+ #warn "fetching related resultset for rel '$rel'";
+ my $rel_obj = $self->result_source->relationship_info($rel);
+ $self->throw_exception(
+ "search_related: result source '" . $self->result_source->name .
+ "' has no such relationship ${rel}")
+ unless $rel_obj; #die Dumper $self->{attrs};
+
+ my $rs = $self->search(undef, { join => $rel });
+ my $alias = defined $rs->{attrs}{seen_join}{$rel}
+ && $rs->{attrs}{seen_join}{$rel} > 1
+ ? join('_', $rel, $rs->{attrs}{seen_join}{$rel})
+ : $rel;
+
+ $self->result_source->schema->resultset($rel_obj->{class}
+ )->search( undef,
+ { %{$rs->{attrs}},
+ alias => $alias,
+ select => undef,
+ as => undef }
+ )->search(@rest);
+ };
}
=head2 throw_exception
Which column(s) to order the results by. This is currently passed through
directly to SQL, so you can give e.g. C<foo DESC> for a descending order.
-=head2 cols
+=head2 columns
=head3 Arguments: (arrayref)
Shortcut to request a particular set of columns to be retrieved. Adds
C<me.> onto the start of any column without a C<.> in it and sets C<select>
-from that, then auto-populates C<as> from C<select> as normal.
+from that, then auto-populates C<as> from C<select> as normal. (You may also
+use the C<cols> attribute, as in earlier versions of DBIC.)
=head2 include_columns
names:
$rs = $schema->resultset('Foo')->search(
- {},
+ undef,
{
select => [
'column_name',
procedure names:
$rs = $schema->resultset('Foo')->search(
- {},
+ undef,
{
select => [
'column1',
objects, because it saves at least one query:
my $rs = $schema->resultset('Tag')->search(
- {},
+ undef,
{
prefetch => {
cd => 'artist'
then search against all mothers of those children:
$rs = $schema->resultset('Person')->search(
- {},
+ undef,
{
alias => 'mother', # alias columns in accordance with "from"
from => [
with a father in the person table, we could explicitly use C<INNER JOIN>:
$rs = $schema->resultset('Person')->search(
- {},
+ undef,
{
alias => 'child', # alias columns in accordance with "from"
from => [
-package DBIx::Class::ResultSetProxy;
+package # hide from PAUSE
+ DBIx::Class::ResultSetProxy;
use base qw/DBIx::Class/;
use Carp::Clan qw/^DBIx::Class/;
use Storable;
+use Scalar::Util qw/weaken/;
use base qw/DBIx::Class/;
__PACKAGE__->load_components(qw/AccessorGroup/);
sub new {
my ($class, $attrs) = @_;
$class = ref $class if ref $class;
- my $new = bless({ %{$attrs || {}} }, $class);
+ my $new = bless({ %{$attrs || {}}, _resultset => undef }, $class);
$new->{resultset_class} ||= 'DBIx::Class::ResultSet';
$new->{resultset_attributes} = { %{$new->{resultset_attributes} || {}} };
$new->{_ordered_columns} = [ @{$new->{_ordered_columns}||[]}];
sub add_columns {
my ($self, @cols) = @_;
- $self->_ordered_columns( \@cols )
- if !$self->_ordered_columns;
+ $self->_ordered_columns(\@cols) unless $self->_ordered_columns;
+
my @added;
my $columns = $self->_columns;
while (my $col = shift @cols) {
-
+ # If next entry is { ... } use that for the column info, if not
+ # use an empty hashref
my $column_info = ref $cols[0] ? shift(@cols) : {};
- # If next entry is { ... } use that for the column info, if not
- # use an empty hashref
-
push(@added, $col) unless exists $columns->{$col};
-
$columns->{$col} = $column_info;
}
push @{ $self->_ordered_columns }, @added;
$self->throw_exception("No such column $column")
unless exists $self->_columns->{$column};
#warn $self->{_columns_info_loaded}, "\n";
- if ( ! $self->_columns->{$column}->{data_type}
- && ! $self->{_columns_info_loaded}
- && $self->schema && $self->storage() ){
- $self->{_columns_info_loaded}++;
- my $info;
-############ eval for the case of storage without table
- eval{
- $info = $self->storage->columns_info_for ( $self->from() );
- };
- if ( ! $@ ){
- for my $col ( keys %{$self->_columns} ){
- for my $i ( keys %{$info->{$col}} ){
- $self->_columns()->{$col}->{$i} = $info->{$col}->{$i};
- }
- }
+ if ( ! $self->_columns->{$column}{data_type}
+ and ! $self->{_columns_info_loaded}
+ and $self->schema and $self->storage )
+ {
+ $self->{_columns_info_loaded}++;
+ my $info;
+ # eval for the case of storage without table
+ eval { $info = $self->storage->columns_info_for($self->from) };
+ unless ($@) {
+ foreach my $col ( keys %{$self->_columns} ) {
+ foreach my $i ( keys %{$info->{$col}} ) {
+ $self->_columns->{$col}{$i} = $info->{$col}{$i};
+ }
}
+ }
}
return $self->_columns->{$column};
}
=cut
sub columns {
- my $self=shift;
+ my $self = shift;
  $self->throw_exception("columns() is a read-only accessor, did you mean add_columns()?") if @_;
return @{$self->{_ordered_columns}||[]};
}
sub set_primary_key {
my ($self, @cols) = @_;
# check if primary key columns are valid columns
- for (@cols) {
- $self->throw_exception("No such column $_ on table ".$self->name)
- unless $self->has_column($_);
+ foreach my $col (@cols) {
+ $self->throw_exception("No such column $col on table " . $self->name)
+ unless $self->has_column($col);
}
$self->_primaries(\@cols);
sub add_unique_constraint {
my ($self, $name, $cols) = @_;
- for (@$cols) {
- $self->throw_exception("No such column $_ on table ".$self->name)
- unless $self->has_column($_);
+ foreach my $col (@$cols) {
+ $self->throw_exception("No such column $col on table " . $self->name)
+ unless $self->has_column($col);
}
my %unique_constraints = $self->unique_constraints;
#warn "$self $k $for $v";
$ret{$k} = $for->get_column($v);
#warn %ret;
+ } elsif (ref $as) { # reverse object
+ $ret{$v} = $as->get_column($k);
} else {
$ret{"${as}.${k}"} = "${for}.${v}";
}
=cut
sub resolve_prefetch {
- my ($self, $pre, $alias, $seen) = @_;
+ my ($self, $pre, $alias, $seen, $order, $collapse) = @_;
$seen ||= {};
- use Data::Dumper;
#$alias ||= $self->name;
#warn $alias, Dumper $pre;
if( ref $pre eq 'ARRAY' ) {
- return map { $self->resolve_prefetch( $_, $alias, $seen ) } @$pre;
+ return
+ map { $self->resolve_prefetch( $_, $alias, $seen, $order, $collapse ) }
+ @$pre;
}
elsif( ref $pre eq 'HASH' ) {
my @ret =
map {
- $self->resolve_prefetch($_, $alias, $seen),
+ $self->resolve_prefetch($_, $alias, $seen, $order, $collapse),
$self->related_source($_)->resolve_prefetch(
- $pre->{$_}, "${alias}.$_", $seen)
- } keys %$pre;
+ $pre->{$_}, "${alias}.$_", $seen, $order, $collapse)
+ } keys %$pre;
#die Dumper \@ret;
return @ret;
}
elsif( ref $pre ) {
- $self->throw_exception( "don't know how to resolve prefetch reftype " . ref $pre);
+ $self->throw_exception(
+ "don't know how to resolve prefetch reftype ".ref($pre));
}
else {
my $count = ++$seen->{$pre};
my $as = ($count > 1 ? "${pre}_${count}" : $pre);
my $rel_info = $self->relationship_info( $pre );
- $self->throw_exception( $self->name . " has no such relationship '$pre'" ) unless $rel_info;
- my $as_prefix = ($alias =~ /^.*?\.(.*)$/ ? $1.'.' : '');
+ $self->throw_exception( $self->name . " has no such relationship '$pre'" )
+ unless $rel_info;
+ my $as_prefix = ($alias =~ /^.*?\.(.+)$/ ? $1.'.' : '');
+ my $rel_source = $self->related_source($pre);
+
+ if (exists $rel_info->{attrs}{accessor}
+ && $rel_info->{attrs}{accessor} eq 'multi') {
+ $self->throw_exception(
+ "Can't prefetch has_many ${pre} (join cond too complex)")
+ unless ref($rel_info->{cond}) eq 'HASH';
+ my @key = map { (/^foreign\.(.+)$/ ? ($1) : ()); }
+ keys %{$rel_info->{cond}};
+ $collapse->{"${as_prefix}${pre}"} = \@key;
+ my @ord = (ref($rel_info->{attrs}{order_by}) eq 'ARRAY'
+ ? @{$rel_info->{attrs}{order_by}}
+ : (defined $rel_info->{attrs}{order_by}
+ ? ($rel_info->{attrs}{order_by})
+ : ()));
+ push(@$order, map { "${as}.$_" } (@key, @ord));
+ }
+
return map { [ "${as}.$_", "${as_prefix}${pre}.$_", ] }
- $self->related_source($pre)->columns;
+ $rel_source->columns;
#warn $alias, Dumper (\@ret);
#return @ret;
}
sub resultset {
my $self = shift;
- return $self->resultset_class->new($self, $self->{resultset_attributes});
+ $self->throw_exception('resultset does not take any arguments. If you want another resultset, call it on the schema instead.') if scalar @_;
+ return $self->{_resultset} if ref $self->{_resultset} eq $self->resultset_class;
+ return $self->{_resultset} = do {
+ my $rs = $self->resultset_class->new($self, $self->{resultset_attributes});
+ weaken $rs->{result_source}; # weaken the stored slot, not a method-return copy
+ $rs;
+ };
}
=head2 throw_exception
-package DBIx::Class::ResultSourceProxy;
+package # hide from PAUSE
+ DBIx::Class::ResultSourceProxy;
use strict;
use warnings;
sub new {
my ($class, $attrs) = @_;
$class = ref $class if ref $class;
- my $new = bless({ _column_data => { } }, $class);
+ my $new = bless { _column_data => {} }, $class;
if ($attrs) {
- $new->throw_exception("attrs must be a hashref" ) unless ref($attrs) eq 'HASH';
- while (my ($k, $v) = each %{$attrs}) {
+ $new->throw_exception("attrs must be a hashref") unless ref($attrs) eq 'HASH';
+ while (my ($k, $v) = each %$attrs) {
$new->throw_exception("No such column $k on $class") unless $class->has_column($k);
$new->store_column($k => $v);
}
$self->{result_source} ||= $self->result_source_instance
if $self->can('result_source_instance');
my $source = $self->{result_source};
- $self->throw_exception("No result_source set on this object; can't insert") unless $source;
+ $self->throw_exception("No result_source set on this object; can't insert")
+ unless $source;
#use Data::Dumper; warn Dumper($self);
$source->storage->insert($source->from, { $self->get_columns });
$self->in_storage(1);
$self->{_dirty_columns} = {};
+ $self->{related_resultsets} = {};
return $self;
}
$self->throw_exception("Can't update ${self}: updated more than one row");
}
$self->{_dirty_columns} = {};
+ $self->{related_resultsets} = {};
return $self;
}
my $ident_cond = $self->ident_condition;
$self->throw_exception("Cannot safely delete a row in a PK-less table")
if ! keys %$ident_cond;
+ foreach my $column (keys %$ident_cond) {
+ $self->throw_exception("Can't delete the object unless it has loaded the primary keys")
+ unless exists $self->{_column_data}{$column};
+ }
$self->result_source->storage->delete(
$self->result_source->from, $ident_cond);
$self->in_storage(undef);
} else {
$self->throw_exception("Can't do class delete without a ResultSource instance")
unless $self->can('result_source_instance');
- my $attrs = { };
- if (@_ > 1 && ref $_[$#_] eq 'HASH') {
- $attrs = { %{ pop(@_) } };
- }
- my $query = (ref $_[0] eq 'HASH' ? $_[0] : {@_});
+ my $attrs = @_ > 1 && ref $_[$#_] eq 'HASH' ? { %{pop(@_)} } : {};
+ my $query = ref $_[0] eq 'HASH' ? $_[0] : {@_};
$self->result_source_instance->resultset->search(@_)->delete;
}
return $self;
sub get_column {
my ($self, $column) = @_;
$self->throw_exception( "Can't fetch data as class method" ) unless ref $self;
- return $self->{_column_data}{$column}
- if exists $self->{_column_data}{$column};
+ return $self->{_column_data}{$column} if exists $self->{_column_data}{$column};
$self->throw_exception( "No such column '${column}'" ) unless $self->has_column($column);
return undef;
}
+sub has_column_loaded {
+ my ($self, $column) = @_;
+ $self->throw_exception( "Can't call has_column_loaded as a class method" ) unless ref $self;
+ return exists $self->{_column_data}{$column};
+}
+
=head2 get_columns
my %data = $obj->get_columns;
sub copy {
my ($self, $changes) = @_;
- my $new = bless({ _column_data => { %{$self->{_column_data}}} }, ref $self);
- $new->set_column($_ => $changes->{$_}) for keys %$changes;
- return $new->insert;
+ $changes ||= {};
+ my $col_data = { %{$self->{_column_data}} };
+ foreach my $col (keys %$col_data) {
+ delete $col_data->{$col}
+ if $self->result_source->column_info($col)->{is_auto_increment};
+ }
+ my $new = bless { _column_data => $col_data }, ref $self;
+ $new->set_columns($changes);
+ $new->insert;
+ foreach my $rel ($self->result_source->relationships) {
+ my $rel_info = $self->result_source->relationship_info($rel);
+ if ($rel_info->{attrs}{cascade_copy}) {
+ my $resolved = $self->result_source->resolve_condition(
+ $rel_info->{cond}, $rel, $new);
+ foreach my $related ($self->search_related($rel)) {
+ $related->copy($resolved);
+ }
+ }
+ }
+ return $new;
}
=head2 store_column
},
ref $class || $class);
my $schema;
- PRE: foreach my $pre (keys %{$prefetch||{}}) {
+ foreach my $pre (keys %{$prefetch||{}}) {
+ my $pre_val = $prefetch->{$pre};
my $pre_source = $source->related_source($pre);
- $class->throw_exception("Can't prefetch non-existant relationship ${pre}") unless $pre_source;
- my $fetched;
- unless ($pre_source->primary_columns == grep { exists $prefetch->{$pre}[0]{$_}
- and !defined $prefetch->{$pre}[0]{$_} } $pre_source->primary_columns)
- {
- $fetched = $pre_source->result_class->inflate_result(
- $pre_source, @{$prefetch->{$pre}});
- }
- my $accessor = $source->relationship_info($pre)->{attrs}{accessor};
- $class->throw_exception("No accessor for prefetched $pre")
- unless defined $accessor;
- if ($accessor eq 'single') {
- $new->{_relationship_data}{$pre} = $fetched;
- } elsif ($accessor eq 'filter') {
- $new->{_inflated_column}{$pre} = $fetched;
- } else {
- $class->throw_exception("Don't know how to store prefetched $pre");
+ $class->throw_exception("Can't prefetch non-existent relationship ${pre}")
+ unless $pre_source;
+ if (ref($pre_val->[0]) eq 'ARRAY') { # multi
+ my @pre_objects;
+ foreach my $pre_rec (@$pre_val) {
+ unless ($pre_source->primary_columns == grep { exists $pre_rec->[0]{$_}
+ and defined $pre_rec->[0]{$_} } $pre_source->primary_columns) {
+ next;
+ }
+ push(@pre_objects, $pre_source->result_class->inflate_result(
+ $pre_source, @{$pre_rec}));
+ }
+ $new->related_resultset($pre)->set_cache(\@pre_objects);
+ } elsif (defined $pre_val->[0]) {
+ my $fetched;
+ unless ($pre_source->primary_columns == grep { exists $pre_val->[0]{$_}
+ and !defined $pre_val->[0]{$_} } $pre_source->primary_columns)
+ {
+ $fetched = $pre_source->result_class->inflate_result(
+ $pre_source, @{$pre_val});
+ }
+ my $accessor = $source->relationship_info($pre)->{attrs}{accessor};
+ $class->throw_exception("No accessor for prefetched $pre")
+ unless defined $accessor;
+ if ($accessor eq 'single') {
+ $new->{_relationship_data}{$pre} = $fetched;
+ } elsif ($accessor eq 'filter') {
+ $new->{_inflated_column}{$pre} = $fetched;
+ } else {
+ $class->throw_exception("Prefetch not supported with accessor '$accessor'");
+ }
}
}
return $new;
}
-=head2 insert_or_update
+=head2 update_or_insert
- $obj->insert_or_update
+ $obj->update_or_insert
Updates the object if it's already in the db, else inserts it.
=cut
-sub insert_or_update {
+*insert_or_update = \&update_or_insert;
+sub update_or_insert {
my $self = shift;
return ($self->in_storage ? $self->update : $self->insert);
}
$password,
$attrs
);
-
- my $schema2 = My::Schema->connect( ... );
+
+ my $schema2 = My::Schema->connect($coderef_returning_dbh);
# fetch objects using My::Schema::Foo
my $resultset = $schema1->resultset('Foo')->search( ... );
$comps_for{$class} = \@comp;
}
- foreach my $prefix (keys %comps_for) {
- foreach my $comp (@{$comps_for{$prefix}||[]}) {
- my $comp_class = "${prefix}::${comp}";
- eval "use $comp_class"; # If it fails, assume the user fixed it
- if ($@) {
- die $@ unless $@ =~ /Can't locate/;
+ my @to_register;
+ {
+ no warnings qw/redefine/;
+ local *Class::C3::reinitialize = sub { };
+ foreach my $prefix (keys %comps_for) {
+ foreach my $comp (@{$comps_for{$prefix}||[]}) {
+ my $comp_class = "${prefix}::${comp}";
+ eval "use $comp_class"; # If it fails, assume the user fixed it
+ if ($@) {
+ die $@ unless $@ =~ /Can't locate/;
+ }
+ push(@to_register, [ $comp, $comp_class ]);
}
- $class->register_class($comp => $comp_class);
- # if $class->can('result_source_instance');
}
}
+ Class::C3->reinitialize;
+
+ foreach my $to (@to_register) {
+ $class->register_class(@$to);
+ # if $class->can('result_source_instance');
+ }
}
=head2 compose_connection
my %target;
my %map;
my $schema = $self->clone;
- foreach my $moniker ($schema->sources) {
- my $source = $schema->source($moniker);
- my $target_class = "${target}::${moniker}";
- $self->inject_base(
- $target_class => $source->result_class, ($base ? $base : ())
- );
- $source->result_class($target_class);
+ {
+ no warnings qw/redefine/;
+ local *Class::C3::reinitialize = sub { };
+ foreach my $moniker ($schema->sources) {
+ my $source = $schema->source($moniker);
+ my $target_class = "${target}::${moniker}";
+ $self->inject_base(
+ $target_class => $source->result_class, ($base ? $base : ())
+ );
+ $source->result_class($target_class);
+ }
}
+ Class::C3->reinitialize();
{
no strict 'refs';
foreach my $meth (qw/class source resultset/) {
croak @_;
}
+=head2 deploy
+
+Attempts to deploy the schema to the current storage.
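+
+For example:
+
+  $schema->deploy;            # uses $schema->storage
+  $schema->deploy($sqltargs); # optional hashref passed through to the storage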
+
+=cut
+
+sub deploy {
+ my ($self, $sqltargs) = @_;
+ $self->throw_exception("Can't deploy without storage") unless $self->storage;
+ $self->storage->deploy($self, undef, $sqltargs);
+}
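
A minimal usage sketch for the new C<deploy> method (the schema class and DSN here are placeholders, not part of this patch; C<add_drop_table> is a standard SQL::Translator argument):

```perl
use My::Schema;  # hypothetical DBIx::Class::Schema subclass

# connect() returns a schema instance bound to a storage
my $schema = My::Schema->connect('dbi:SQLite:example.db');

# create tables for every registered source via SQL::Translator;
# the hashref is passed through to SQL::Translator->new
$schema->deploy({ add_drop_table => 1 });
```
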
+
1;
=head1 AUTHORS
-package DBIx::Class::Serialize;
+package DBIx::Class::Serialize::Storable;
use strict;
-use Storable qw/freeze thaw/;
+use Storable;
sub STORABLE_freeze {
my ($self,$cloning) = @_;
- #return if $cloning;
my $to_serialize = { %$self };
delete $to_serialize->{result_source};
- return (freeze($to_serialize));
+ return (Storable::freeze($to_serialize));
}
sub STORABLE_thaw {
my ($self,$cloning,$serialized) = @_;
- %$self = %{ thaw($serialized) };
- $self->result_source($self->result_source_instance);
+ %$self = %{ Storable::thaw($serialized) };
+ $self->result_source($self->result_source_instance) if $self->can('result_source_instance');
}
1;
=head1 NAME
- DBIx::Class::Serialize - hooks for Storable freeze/thaw (EXPERIMENTAL)
+ DBIx::Class::Serialize::Storable - hooks for Storable freeze/thaw (EXPERIMENTAL)
=head1 SYNOPSIS
# in a table class definition
- __PACKAGE__->load_components(qw/Serialize/);
+ __PACKAGE__->load_components(qw/Serialize::Storable/);
# meanwhile, in a nearby piece of code
my $obj = $schema->resultset('Foo')->find(12);
-package DBIx::Class::Storage;
+package # hide from PAUSE
+ DBIx::Class::Storage;
use strict;
use warnings;
sub columns_info_for { die "Virtual method!" }
-
-
package DBIx::Class::Storage::NESTED_ROLLBACK_EXCEPTION;
use overload '""' => sub {
sub new {
my $class = shift;
-
return bless {}, $class;
}
sub select {
my ($self, $table, $fields, $where, $order, @rest) = @_;
@rest = (-1) unless defined $rest[0];
- $self->SUPER::select($table, $self->_recurse_fields($fields),
- $where, $order, @rest);
+ local $self->{having_bind} = [];
+ my ($sql, @ret) = $self->SUPER::select($table,
+ $self->_recurse_fields($fields), $where, $order, @rest);
+ return wantarray ? ($sql, @ret, @{$self->{having_bind}}) : $sql;
}
sub _emulate_limit {
sub _order_by {
my $self = shift;
my $ret = '';
+ my @extra;
if (ref $_[0] eq 'HASH') {
if (defined $_[0]->{group_by}) {
$ret = $self->_sqlcase(' group by ')
.$self->_recurse_fields($_[0]->{group_by});
}
+ if (defined $_[0]->{having}) {
+ my $frag;
+ ($frag, @extra) = $self->_recurse_where($_[0]->{having});
+ push(@{$self->{having_bind}}, @extra);
+ $ret .= $self->_sqlcase(' having ').$frag;
+ }
if (defined $_[0]->{order_by}) {
$ret .= $self->SUPER::_order_by($_[0]->{order_by});
}
+ } elsif(ref $_[0] eq 'SCALAR') {
+ $ret = $self->_sqlcase(' order by ').${ $_[0] };
} else {
$ret = $self->SUPER::_order_by(@_);
}
$new->transaction_depth(0);
if (defined($ENV{DBIX_CLASS_STORAGE_DBI_DEBUG}) &&
($ENV{DBIX_CLASS_STORAGE_DBI_DEBUG} =~ /=(.+)$/)) {
- $new->debugfh(IO::File->new($1, 'w')||croak "Cannot open trace file $1");
+ $new->debugfh(IO::File->new($1, 'w')) || $new->throw_exception("Cannot open trace file $1");
} else {
$new->debugfh(IO::File->new('>&STDERR'));
}
return $new;
}
+sub throw_exception {
+ my ($self, $msg) = @_;
+ croak($msg);
+}
+
=head1 NAME
DBIx::Class::Storage::DBI - DBI storage handler
my ($self) = @_;
my @info = @{$self->connect_info || []};
$self->_dbh($self->_connect(@info));
-
+ my $driver = $self->_dbh->{Driver}->{Name};
+ eval "require DBIx::Class::Storage::DBI::${driver}";
+ unless ($@) {
+ bless $self, "DBIx::Class::Storage::DBI::${driver}";
+ }
# if on-connect sql statements are given execute them
foreach my $sql_statement (@{$self->on_connect_do || []}) {
$self->_dbh->do($sql_statement);
sub _connect {
my ($self, @info) = @_;
+ my ($old_connect_via, $dbh);
+
if ($INC{'Apache/DBI.pm'} && $ENV{MOD_PERL}) {
- my $old_connect_via = $DBI::connect_via;
+ $old_connect_via = $DBI::connect_via;
$DBI::connect_via = 'connect';
- my $dbh = DBI->connect(@info);
- $DBI::connect_via = $old_connect_via;
- return $dbh;
}
- DBI->connect(@info);
+ if(ref $info[0] eq 'CODE') {
+ $dbh = &{$info[0]};
+ }
+ else {
+ $dbh = DBI->connect(@info);
+ }
+
+ $DBI::connect_via = $old_connect_via if $old_connect_via;
+
+ $self->throw_exception("DBI Connection failed: $DBI::errstr")
+ unless $dbh;
+
+ $dbh;
}
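
The coderef form of connect_info added above lets the caller supply a ready-made database handle; a sketch (DSN and schema class are illustrative):

```perl
use DBI;
use My::Schema;  # hypothetical DBIx::Class::Schema subclass

# instead of a DSN/user/pass list, pass a coderef that returns a $dbh;
# _connect() detects the coderef and simply invokes it
my $schema = My::Schema->connect(sub {
    DBI->connect('dbi:SQLite:example.db', '', '', { RaiseError => 1 });
});
```
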
=head2 txn_begin
$self->debugfh->print("$sql: " . join(', ', @debug_bind) . "\n");
}
my $sth = $self->sth($sql,$op);
- croak "no sth generated via sql: $sql" unless $sth;
+ $self->throw_exception("no sth generated via sql: $sql") unless $sth;
@bind = map { ref $_ ? ''.$_ : $_ } @bind; # stringify args
my $rv;
if ($sth) {
- $rv = $sth->execute(@bind);
+ $rv = $sth->execute(@bind) or $self->throw_exception("Error executing '$sql': " . $sth->errstr);
} else {
- croak "'$sql' did not generate a statement.";
+ $self->throw_exception("'$sql' did not generate a statement.");
}
return (wantarray ? ($rv, $sth, @bind) : $rv);
}
sub insert {
my ($self, $ident, $to_insert) = @_;
- croak( "Couldn't insert ".join(', ', map "$_ => $to_insert->{$_}", keys %$to_insert)." into ${ident}" )
+ $self->throw_exception( "Couldn't insert ".join(', ', map "$_ => $to_insert->{$_}", keys %$to_insert)." into ${ident}" )
unless ($self->_execute('insert' => [], $ident, $to_insert));
return $to_insert;
}
if (ref $condition eq 'SCALAR') {
$order = $1 if $$condition =~ s/ORDER BY (.*)$//i;
}
- if (exists $attrs->{group_by}) {
+ if (exists $attrs->{group_by} || $attrs->{having}) {
$order = { group_by => $attrs->{group_by},
+ having => $attrs->{having},
($order ? (order_by => $order) : ()) };
}
my @args = ('select', $attrs->{bind}, $ident, $select, $condition, $order);
$column_info{is_nullable} = $info->{NULLABLE};
$result{$info->{COLUMN_NAME}} = \%column_info;
}
- }else{
+ } else {
my $sth = $self->dbh->prepare("SELECT * FROM $table WHERE 1=0");
$sth->execute;
my @columns = @{$sth->{NAME}};
return \%result;
}
+sub last_insert_id {
+ my ($self, $row) = @_;
+
+ return $self->dbh->func('last_insert_rowid');
+
+}
+
+sub sqlt_type { shift->dbh->{Driver}->{Name} }
+
+sub deployment_statements {
+ my ($self, $schema, $type, $sqltargs) = @_;
+ $type ||= $self->sqlt_type;
+ eval "use SQL::Translator";
+ $self->throw_exception("Can't deploy without SQL::Translator: $@") if $@;
+ eval "use SQL::Translator::Parser::DBIx::Class;";
+ $self->throw_exception($@) if $@;
+ eval "use SQL::Translator::Producer::${type};";
+ $self->throw_exception($@) if $@;
+ my $tr = SQL::Translator->new(%$sqltargs);
+ SQL::Translator::Parser::DBIx::Class::parse( $tr, $schema );
+ return "SQL::Translator::Producer::${type}"->can('produce')->($tr);
+}
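
C<deployment_statements> can also be used on its own to inspect the DDL that C<deploy> would run, without touching the database. A sketch (the producer name follows SQL::Translator's conventions; C<$schema> is assumed connected):

```perl
# dump the SQLite DDL for the schema instead of executing it
my $ddl = $schema->storage->deployment_statements($schema, 'SQLite');
print $ddl;
```
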
+
+sub deploy {
+ my ($self, $schema, $type, $sqltargs) = @_;
+ foreach(split(";\n", $self->deployment_statements($schema, $type, $sqltargs))) {
+ $self->debugfh->print("$_\n") if $self->debug;
+ $self->dbh->do($_) or warn "SQL was:\n $_";
+ }
+}
+
sub DESTROY { shift->disconnect }
1;
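
The group_by/having handling added to C<select> and C<_order_by> above surfaces at the resultset level roughly like so (source and column names are illustrative; HAVING bind values are collected in C<having_bind> and appended after the WHERE binds):

```perl
# artists with more than one CD
my $rs = $schema->resultset('Artist')->search({}, {
    select   => [ 'name', { count => 'cds.cdid' } ],
    as       => [ qw/name cd_count/ ],
    join     => 'cds',
    group_by => [ 'name' ],
    having   => { 'COUNT(cds.cdid)' => { '>' => 1 } },
});
```
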
-package DBIx::Class::Storage::DBI::Cursor;
+package # hide from PAUSE
+ DBIx::Class::Storage::DBI::Cursor;
use base qw/DBIx::Class::Cursor/;
--- /dev/null
+package DBIx::Class::Storage::DBI::DB2;
+
+use strict;
+use warnings;
+
+use base qw/DBIx::Class::Storage::DBI/;
+
+# __PACKAGE__->load_components(qw/PK::Auto/);
+
+sub last_insert_id
+{
+ my ($self) = @_;
+
+ my $dbh = $self->_dbh;
+ my $sth = $dbh->prepare_cached("VALUES(IDENTITY_VAL_LOCAL())", {}, 3);
+ $sth->execute();
+
+ my @res = $sth->fetchrow_array();
+
+ return @res ? $res[0] : undef;
+
+}
+
+1;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::DB2 - Automatic primary key class for DB2
+
+=head1 SYNOPSIS
+
+ # In your table classes
+ __PACKAGE__->load_components(qw/PK::Auto Core/);
+ __PACKAGE__->set_primary_key('id');
+
+=head1 DESCRIPTION
+
+This class implements autoincrements for DB2.
+
+=head1 AUTHORS
+
+Jess Robinson
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
--- /dev/null
+package DBIx::Class::Storage::DBI::MSSQL;
+
+use strict;
+use warnings;
+
+use base qw/DBIx::Class::Storage::DBI/;
+
+# __PACKAGE__->load_components(qw/PK::Auto/);
+
+sub last_insert_id {
+  my ($id) = $_[0]->_dbh->selectrow_array('SELECT @@IDENTITY');
+  return $id;
+}
+
+1;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::MSSQL - Automatic primary key class for MSSQL
+
+=head1 SYNOPSIS
+
+  # In your table classes
+  __PACKAGE__->load_components(qw/PK::Auto Core/);
+  __PACKAGE__->set_primary_key('id');
+
+=head1 DESCRIPTION
+
+This class implements autoincrements for MSSQL.
+
+=head1 AUTHORS
+
+Brian Cassidy <bricas@cpan.org>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
--- /dev/null
+package DBIx::Class::Storage::DBI::Oracle;
+
+use strict;
+use warnings;
+
+use Carp qw/croak/;
+
+use base qw/DBIx::Class::Storage::DBI/;
+
+# __PACKAGE__->load_components(qw/PK::Auto/);
+
+sub last_insert_id {
+ my ($self,$source,$col) = @_;
+ my $seq = ($source->column_info($col)->{sequence} ||= $self->get_autoinc_seq($source,$col));
+ my $sql = "SELECT " . $seq . ".currval FROM DUAL";
+ my ($id) = $self->_dbh->selectrow_array($sql);
+ return $id;
+}
+
+sub get_autoinc_seq {
+ my ($self,$source,$col) = @_;
+
+ # look up the correct sequence automatically
+ my $dbh = $self->_dbh;
+ my $sql = q{
+ SELECT trigger_body FROM ALL_TRIGGERS t
+ WHERE t.table_name = ?
+ AND t.triggering_event = 'INSERT'
+ AND t.status = 'ENABLED'
+ };
+ # trigger_body is a LONG
+ $dbh->{LongReadLen} = 64 * 1024 if ($dbh->{LongReadLen} < 64 * 1024);
+ my $sth = $dbh->prepare($sql);
+ $sth->execute( uc($source->name) );
+ while (my ($insert_trigger) = $sth->fetchrow_array) {
+ return uc($1) if $insert_trigger =~ m!(\w+)\.nextval!i; # col name goes here???
+ }
+ croak "Unable to find a sequence INSERT trigger on table '" . $source->name . "'.";
+}
+
+1;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::Oracle - Automatic primary key class for Oracle
+
+=head1 SYNOPSIS
+
+ # In your table classes
+ __PACKAGE__->load_components(qw/PK::Auto Core/);
+ __PACKAGE__->set_primary_key('id');
+ __PACKAGE__->sequence('mysequence');
+
+=head1 DESCRIPTION
+
+This class implements autoincrements for Oracle.
+
+=head1 AUTHORS
+
+Andy Grundman <andy@hybridized.org>
+
+Scott Connelly <scottsweep@yahoo.com>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
--- /dev/null
+package DBIx::Class::Storage::DBI::Pg;
+
+use strict;
+use warnings;
+
+use base qw/DBIx::Class::Storage::DBI/;
+
+# __PACKAGE__->load_components(qw/PK::Auto/);
+
+sub last_insert_id {
+ my ($self,$source,$col) = @_;
+ my $seq = ($source->column_info($col)->{sequence} ||= $self->get_autoinc_seq($source,$col));
+ $self->_dbh->last_insert_id(undef,undef,undef,undef, {sequence => $seq});
+}
+
+sub get_autoinc_seq {
+ my ($self,$source,$col) = @_;
+
+ my @pri = $source->primary_columns;
+ my $dbh = $self->_dbh;
+ my ($schema,$table) = $source->name =~ /^(.+)\.(.+)$/ ? ($1,$2)
+ : (undef,$source->name);
+ while (my $col = shift @pri) {
+ my $info = $dbh->column_info(undef,$schema,$table,$col)->fetchrow_arrayref;
+ if (defined $info->[12] and $info->[12] =~
+ /^nextval\('([^']+)'::(?:text|regclass)\)/)
+ {
+ return $1; # may need to strip quotes -- see if this works
+ }
+ }
+}
+
+sub sqlt_type {
+ return 'PostgreSQL';
+}
+
+1;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::Pg - Automatic primary key class for PostgreSQL
+
+=head1 SYNOPSIS
+
+ # In your table classes
+ __PACKAGE__->load_components(qw/PK::Auto Core/);
+ __PACKAGE__->set_primary_key('id');
+ __PACKAGE__->sequence('mysequence');
+
+=head1 DESCRIPTION
+
+This class implements autoincrements for PostgreSQL.
+
+=head1 AUTHORS
+
+Marcus Ramberg <m.ramberg@cpan.org>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
--- /dev/null
+package DBIx::Class::Storage::DBI::SQLite;
+
+use strict;
+use warnings;
+
+use base qw/DBIx::Class::Storage::DBI/;
+
+sub last_insert_id {
+ return $_[0]->dbh->func('last_insert_rowid');
+}
+
+1;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::SQLite - Automatic primary key class for SQLite
+
+=head1 SYNOPSIS
+
+ # In your table classes
+  __PACKAGE__->load_components(qw/PK::Auto Core/);
+ __PACKAGE__->set_primary_key('id');
+
+=head1 DESCRIPTION
+
+This class implements autoincrements for SQLite.
+
+=head1 AUTHORS
+
+Matt S. Trout <mst@shadowcatsystems.co.uk>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
--- /dev/null
+package DBIx::Class::Storage::DBI::mysql;
+
+use strict;
+use warnings;
+
+use base qw/DBIx::Class::Storage::DBI/;
+
+# __PACKAGE__->load_components(qw/PK::Auto/);
+
+sub last_insert_id {
+ return $_[0]->_dbh->{mysql_insertid};
+}
+
+sub sqlt_type {
+ return 'MySQL';
+}
+
+1;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::mysql - Automatic primary key class for MySQL
+
+=head1 SYNOPSIS
+
+ # In your table classes
+ __PACKAGE__->load_components(qw/PK::Auto Core/);
+ __PACKAGE__->set_primary_key('id');
+
+=head1 DESCRIPTION
+
+This class implements autoincrements for MySQL.
+
+=head1 AUTHORS
+
+Matt S. Trout <mst@shadowcatsystems.co.uk>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
--- /dev/null
+package DBIx::Class::UTF8Columns;
+use strict;
+use warnings;
+use base qw/DBIx::Class/;
+
+use Encode;
+
+__PACKAGE__->mk_classdata( force_utf8_columns => [] );
+
+=head1 NAME
+
+DBIx::Class::UTF8Columns - Force UTF8 (Unicode) flag on columns
+
+=head1 SYNOPSIS
+
+ package Artist;
+ __PACKAGE__->load_components(qw/UTF8Columns Core/);
+ __PACKAGE__->utf8_columns(qw/name description/);
+
+  # the following then return strings with the utf8 flag
+ $artist->name;
+ $artist->get_column('description');
+
+=head1 DESCRIPTION
+
+This module allows you to get column data with the utf8 (Unicode) flag set.
+
+=head1 SEE ALSO
+
+L<Template::Stash::ForceUTF8>, L<DBIx::Class::UUIDColumns>.
+
+=head1 METHODS
+
+=head2 utf8_columns
+
+=cut
+
+sub utf8_columns {
+ my $self = shift;
+ for (@_) {
+ $self->throw_exception("column $_ doesn't exist")
+ unless $self->has_column($_);
+ }
+ $self->force_utf8_columns( \@_ );
+}
+
+=head1 EXTENDED METHODS
+
+=head2 get_column
+
+=cut
+
+sub get_column {
+ my ( $self, $column ) = @_;
+ my $value = $self->next::method($column);
+
+ if ( { map { $_ => 1 } @{ $self->force_utf8_columns } }->{$column} ) {
+ Encode::_utf8_on($value) unless Encode::is_utf8($value);
+ }
+
+ $value;
+}
+
+=head2 get_columns
+
+=cut
+
+sub get_columns {
+ my $self = shift;
+ my %data = $self->next::method(@_);
+
+ for (@{ $self->force_utf8_columns }) {
+    Encode::_utf8_on($data{$_}) if $data{$_} and !Encode::is_utf8($data{$_});
+ }
+
+ %data;
+}
+
+=head2 store_column
+
+=cut
+
+sub store_column {
+ my ( $self, $column, $value ) = @_;
+
+ if ( { map { $_ => 1 } @{ $self->force_utf8_columns } }->{$column} ) {
+ Encode::_utf8_off($value) if Encode::is_utf8($value);
+ }
+
+ $self->next::method( $column, $value );
+}
+
+=head1 AUTHOR
+
+Daisuke Murase <typester@cpan.org>
+
+=head1 COPYRIGHT
+
+This program is free software; you can redistribute
+it and/or modify it under the same terms as Perl itself.
+
+The full text of the license can be found in the
+LICENSE file included with this module.
+
+=cut
+
+1;
+
package DBIx::Class::UUIDColumns;
use base qw/DBIx::Class/;
-use Data::UUID;
-
__PACKAGE__->mk_classdata( 'uuid_auto_columns' => [] );
+__PACKAGE__->mk_classdata( 'uuid_maker' );
+__PACKAGE__->uuid_class( __PACKAGE__->_find_uuid_module );
+
+# be compatible with Class::DBI::UUID
+sub uuid_columns {
+ my $self = shift;
+ for (@_) {
+ $self->throw_exception("column $_ doesn't exist") unless $self->has_column($_);
+ }
+ $self->uuid_auto_columns(\@_);
+}
+
+sub uuid_class {
+ my ($self, $class) = @_;
+
+ if ($class) {
+ $class = "DBIx::Class::UUIDMaker$class" if $class =~ /^::/;
+
+ if (!eval "require $class") {
+ $self->throw_exception("$class could not be loaded: $@");
+ } elsif (!$class->isa('DBIx::Class::UUIDMaker')) {
+ $self->throw_exception("$class is not a UUIDMaker subclass");
+ } else {
+ $self->uuid_maker($class->new);
+ };
+ };
+
+ return ref $self->uuid_maker;
+};
+
+sub insert {
+ my $self = shift;
+ for my $column (@{$self->uuid_auto_columns}) {
+ $self->store_column( $column, $self->get_uuid )
+ unless defined $self->get_column( $column );
+ }
+ $self->next::method(@_);
+}
+
+sub get_uuid {
+ return shift->uuid_maker->as_string;
+}
+
+sub _find_uuid_module {
+ if (eval{require Data::UUID}) {
+ return '::Data::UUID';
+ } elsif ($^O ne 'openbsd' && eval{require APR::UUID}) {
+ # APR::UUID on openbsd causes some as yet unfound nastyness for XS
+ return '::APR::UUID';
+ } elsif (eval{require UUID}) {
+ return '::UUID';
+ } elsif (eval{
+ # squelch the 'too late for INIT' warning in Win32::API::Type
+ local $^W = 0;
+ require Win32::Guidgen;
+ }) {
+ return '::Win32::Guidgen';
+ } elsif (eval{require Win32API::GUID}) {
+ return '::Win32API::GUID';
+ } else {
+ shift->throw_exception('no suitable uuid module could be found')
+ };
+};
+
+1;
+__END__
=head1 NAME
=head1 SYNOPSIS
- pacakge Artist;
+ package Artist;
__PACKAGE__->load_components(qw/UUIDColumns Core DB/);
__PACKAGE__->uuid_columns( 'artist_id' );
This L<DBIx::Class> component resembles the behaviour of
L<Class::DBI::UUID>, to make some columns implicitly created as uuid.
+When loaded, C<UUIDColumns> will search for a suitable uuid generation module
+from the following list of supported modules:
+
+ Data::UUID
+ APR::UUID*
+ UUID
+ Win32::Guidgen
+ Win32API::GUID
+
+If no supported module can be found, an exception will be thrown.
+
+*APR::UUID will not be loaded under OpenBSD due to an as yet unidentified XS
+issue.
+
+If you would like to use a specific module, you can set C<uuid_class>:
+
+ __PACKAGE__->uuid_class('::Data::UUID');
+ __PACKAGE__->uuid_class('MyUUIDGenerator');
+
Note that the component needs to be loaded before Core.
=head1 METHODS
-=head2 uuid_columns
+=head2 uuid_columns(@columns)
-=cut
+Takes a list of columns to be filled with uuids during insert.
-# be compatible with Class::DBI::UUID
-sub uuid_columns {
- my $self = shift;
- for (@_) {
- $self->throw_exception("column $_ doesn't exist") unless $self->has_column($_);
- }
- $self->uuid_auto_columns(\@_);
-}
+ __PACKAGE__->uuid_columns('id');
-sub insert {
- my $self = shift;
- for my $column (@{$self->uuid_auto_columns}) {
- $self->store_column( $column, $self->get_uuid )
- unless defined $self->get_column( $column );
- }
- $self->next::method(@_);
-}
+=head2 uuid_class($classname)
-sub get_uuid {
- return Data::UUID->new->to_string(Data::UUID->new->create),
-}
+Takes the name of a UUIDMaker subclass to be used for uuid value generation.
+This can be a fully qualified class name, or a shortcut name starting with ::
+that matches one of the available DBIx::Class::UUIDMaker subclasses:
+
+ __PACKAGE__->uuid_class('CustomUUIDGenerator');
+  # loads CustomUUIDGenerator
+
+  __PACKAGE__->uuid_class('::Data::UUID');
+  # loads DBIx::Class::UUIDMaker::Data::UUID;
+
+Note that C<uuid_class> checks to see that the specified class is a
+DBIx::Class::UUIDMaker subclass and throws an exception if it isn't.
+
+=head2 uuid_maker
+
+Returns the current UUIDMaker instance for the given module.
+
+ my $uuid = __PACKAGE__->uuid_maker->as_string;
+
+=head1 SEE ALSO
+
+L<DBIx::Class::UUIDMaker>
=head1 AUTHORS
Chia-liang Kao <clkao@clkao.org>
+Chris Laco <claco@chrislaco.com>
=head1 LICENSE
You may distribute this code under the same terms as Perl itself.
-
-=cut
-
-1;
--- /dev/null
+package DBIx::Class::UUIDMaker;
+
+sub new {
+ return bless {}, shift;
+};
+
+sub as_string {
+ return undef;
+};
+
+1;
+__END__
+
+=head1 NAME
+
+DBIx::Class::UUIDMaker - UUID wrapper module
+
+=head1 SYNOPSIS
+
+ package CustomUUIDMaker;
+  use base qw/DBIx::Class::UUIDMaker/;
+
+ sub as_string {
+ my $uuid;
+    ...magic incantations...
+ return $uuid;
+ };
+
+=head1 DESCRIPTION
+
+DBIx::Class::UUIDMaker is a base class used by the various uuid generation
+subclasses.
+
+=head1 METHODS
+
+=head2 as_string
+
+Returns the new uuid as a string.
+
+=head1 SEE ALSO
+
+L<DBIx::Class::UUIDMaker>,
+L<DBIx::Class::UUIDMaker::UUID>,
+L<DBIx::Class::UUIDMaker::APR::UUID>,
+L<DBIx::Class::UUIDMaker::Data::UUID>,
+L<DBIx::Class::UUIDMaker::Win32::Guidgen>,
+L<DBIx::Class::UUIDMaker::Win32API::GUID>,
+L<DBIx::Class::UUIDMaker::Data::Uniqid>
+
+=head1 AUTHOR
+
+Chris Laco <claco@chrislaco.com>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
--- /dev/null
+package DBIx::Class::UUIDMaker::APR::UUID;
+use base qw/DBIx::Class::UUIDMaker/;
+use APR::UUID ();
+
+sub as_string {
+ return APR::UUID->new->format;
+};
+
+1;
+__END__
+
+=head1 NAME
+
+DBIx::Class::UUIDMaker::APR::UUID - Create uuids using APR::UUID
+
+=head1 SYNOPSIS
+
+ package Artist;
+ __PACKAGE__->load_components(qw/UUIDColumns Core DB/);
+ __PACKAGE__->uuid_columns( 'artist_id' );
+ __PACKAGE__->uuid_class('::APR::UUID');
+
+=head1 DESCRIPTION
+
+This DBIx::Class::UUIDMaker subclass uses APR::UUID to generate uuid
+strings in the following format:
+
+ 098f2470-bae0-11cd-b579-08002b30bfeb
+
+=head1 METHODS
+
+=head2 as_string
+
+Returns the new uuid as a string.
+
+=head1 SEE ALSO
+
+L<APR::UUID>
+
+=head1 AUTHOR
+
+Chris Laco <claco@chrislaco.com>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
--- /dev/null
+package DBIx::Class::UUIDMaker::Data::UUID;
+use base qw/DBIx::Class::UUIDMaker/;
+use Data::UUID ();
+
+sub as_string {
+ return Data::UUID->new->to_string(Data::UUID->new->create);
+};
+
+1;
+__END__
+
+=head1 NAME
+
+DBIx::Class::UUIDMaker::Data::UUID - Create uuids using Data::UUID
+
+=head1 SYNOPSIS
+
+ package Artist;
+ __PACKAGE__->load_components(qw/UUIDColumns Core DB/);
+ __PACKAGE__->uuid_columns( 'artist_id' );
+ __PACKAGE__->uuid_class('::Data::UUID');
+
+=head1 DESCRIPTION
+
+This DBIx::Class::UUIDMaker subclass uses Data::UUID to generate uuid
+strings in the following format:
+
+ 098f2470-bae0-11cd-b579-08002b30bfeb
+
+=head1 METHODS
+
+=head2 as_string
+
+Returns the new uuid as a string.
+
+=head1 SEE ALSO
+
+L<Data::UUID>
+
+=head1 AUTHOR
+
+Chris Laco <claco@chrislaco.com>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
--- /dev/null
+package DBIx::Class::UUIDMaker::Data::Uniqid;
+use base qw/DBIx::Class::UUIDMaker/;
+use Data::Uniqid ();
+
+sub as_string {
+ return Data::Uniqid->luniqid;
+};
+
+1;
+__END__
+
+=head1 NAME
+
+DBIx::Class::UUIDMaker::Data::Uniqid - Create uuids using Data::Uniqid
+
+=head1 SYNOPSIS
+
+ package Artist;
+ __PACKAGE__->load_components(qw/UUIDColumns Core DB/);
+ __PACKAGE__->uuid_columns( 'artist_id' );
+ __PACKAGE__->uuid_class('::Data::Uniqid');
+
+=head1 DESCRIPTION
+
+This DBIx::Class::UUIDMaker subclass uses Data::Uniqid to generate uuid
+strings using Data::Uniqid::luniqid.
+
+=head1 METHODS
+
+=head2 as_string
+
+Returns the new uuid as a string.
+
+=head1 SEE ALSO
+
+L<Data::Uniqid>
+
+=head1 AUTHOR
+
+Chris Laco <claco@chrislaco.com>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
--- /dev/null
+package DBIx::Class::UUIDMaker::UUID;
+use base qw/DBIx::Class::UUIDMaker/;
+use UUID ();
+
+sub as_string {
+ my ($uuid, $uuidstring);
+ UUID::generate($uuid);
+ UUID::unparse($uuid, $uuidstring);
+
+ return $uuidstring;
+};
+
+1;
+__END__
+
+=head1 NAME
+
+DBIx::Class::UUIDMaker::UUID - Create uuids using UUID
+
+=head1 SYNOPSIS
+
+ package Artist;
+ __PACKAGE__->load_components(qw/UUIDColumns Core DB/);
+ __PACKAGE__->uuid_columns( 'artist_id' );
+ __PACKAGE__->uuid_class('::UUID');
+
+=head1 DESCRIPTION
+
+This DBIx::Class::UUIDMaker subclass uses UUID to generate uuid
+strings in the following format:
+
+ 098f2470-bae0-11cd-b579-08002b30bfeb
+
+=head1 METHODS
+
+=head2 as_string
+
+Returns the new uuid as a string.
+
+=head1 SEE ALSO
+
+L<UUID>
+
+=head1 AUTHOR
+
+Chris Laco <claco@chrislaco.com>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
--- /dev/null
+package DBIx::Class::UUIDMaker::Win32::Guidgen;
+use base qw/DBIx::Class::UUIDMaker/;
+use Win32::Guidgen ();
+
+sub as_string {
+ my $uuid = Win32::Guidgen::create();
+ $uuid =~ s/(^\{|\}$)//g;
+
+ return $uuid;
+};
+
+1;
+__END__
+
+=head1 NAME
+
+DBIx::Class::UUIDMaker::Win32::Guidgen - Create uuids using Win32::Guidgen
+
+=head1 SYNOPSIS
+
+ package Artist;
+ __PACKAGE__->load_components(qw/UUIDColumns Core DB/);
+ __PACKAGE__->uuid_columns( 'artist_id' );
+ __PACKAGE__->uuid_class('::Win32::Guidgen');
+
+=head1 DESCRIPTION
+
+This DBIx::Class::UUIDMaker subclass uses Win32::Guidgen to generate uuid
+strings in the following format:
+
+ 098f2470-bae0-11cd-b579-08002b30bfeb
+
+=head1 METHODS
+
+=head2 as_string
+
+Returns the new uuid as a string.
+
+=head1 SEE ALSO
+
+L<Win32::Guidgen>
+
+=head1 AUTHOR
+
+Chris Laco <claco@chrislaco.com>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
--- /dev/null
+package DBIx::Class::UUIDMaker::Win32API::GUID;
+use base qw/DBIx::Class::UUIDMaker/;
+use Win32API::GUID ();
+
+sub as_string {
+ return Win32API::GUID::CreateGuid();
+};
+
+1;
+__END__
+
+=head1 NAME
+
+DBIx::Class::UUIDMaker::Win32API::GUID - Create uuids using Win32API::GUID
+
+=head1 SYNOPSIS
+
+ package Artist;
+ __PACKAGE__->load_components(qw/UUIDColumns Core DB/);
+ __PACKAGE__->uuid_columns( 'artist_id' );
+ __PACKAGE__->uuid_class('::Win32API::GUID');
+
+=head1 DESCRIPTION
+
+This DBIx::Class::UUIDMaker subclass uses Win32API::GUID to generate uuid
+strings in the following format:
+
+ 098f2470-bae0-11cd-b579-08002b30bfeb
+
+=head1 METHODS
+
+=head2 as_string
+
+Returns the new uuid as a string.
+
+=head1 SEE ALSO
+
+L<Win32API::GUID>
+
+=head1 AUTHOR
+
+Chris Laco <claco@chrislaco.com>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
-package SQL::Translator::Parser::DBIx::Class;
+package # hide from PAUSE
+ SQL::Translator::Parser::DBIx::Class;
# AUTHOR: Jess Robinson
}
$table->primary_key($source->primary_columns);
-
my @rels = $source->relationships();
foreach my $rel (@rels)
{
my $rel_table = $source->related_source($rel)->name;
my $cond = (keys (%{$rel_info->{cond}}))[0];
my ($refkey) = $cond =~ /^\w+\.(\w+)$/;
+ my ($key) = $rel_info->{cond}->{$cond} =~ /^\w+\.(\w+)$/;
if($rel_table && $refkey)
{
$table->add_constraint(
type => 'foreign_key',
- name => "fk_${rel}_id",
- fields => $rel,
+ name => "fk_${key}",
+ fields => $key,
reference_fields => $refkey,
reference_table => $rel_table,
- );
+ );
}
}
}
use warnings;
use lib qw(lib t/lib);
-use UNIVERSAL::require;
+use DBICTest;
+use DBICTest::HelperRels;
-my $from = 'SQL::Translator::Parser::DBIx::Class';
-my $to = 'SQL::Translator::Producer::SQLite';
-my $sqlt = 'SQL::Translator';
-my $schema = 'DBICTest::Schema';
+my $schema = DBICTest->initialise;
-$from->require;
-$to->require;
-$sqlt->require;
-$schema->require;
-
-my $tr = $sqlt->new;
-
-$from->can("parse")->($tr, $schema);
-print $to->can("produce")->($tr);
+print $schema->storage->deployment_statements($schema);
--- /dev/null
+#!/usr/bin/perl
+
+die "must be run from DBIx::Class root dir" unless -d 't/run';
+
+gen_tests($_) for qw/BasicRels HelperRels/;
+
+sub gen_tests {
+ my $variant = shift;
+ my $dir = lc $variant;
+ system("rm -f t/$dir/*.t");
+
+ foreach my $test (map { m[^t/run/(.+)\.tl$]; $1 } split(/\n/, `ls t/run/*.tl`)) {
+ open(my $fh, '>', "t/$dir/${test}.t") or die $!;
+ print $fh <<"EOF";
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::$variant;
+
+require "t/run/${test}.tl";
+run_tests(DBICTest->schema);
+EOF
+ close $fh;
+ }
+}
\ No newline at end of file
--- /dev/null
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+use Test::More;
+
+use lib qw(t/lib);
+use DBICTest::ForeignComponent;
+
+plan tests => 1;
+
+# Tests if foreign component was loaded by calling foreign's method
+ok( DBICTest::ForeignComponent->foreign_test_method, 'foreign component' );
+
eval "use DBD::SQLite";
plan $@
? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 4 );
+ : ( tests => 6 );
}
use lib qw(t/lib);
cmp_ok( $rs->count, '==', 1, "join with fields quoted");
+$rs = DBICTest::CD->search({},
+ { 'order_by' => 'year DESC'});
+{
+ my $warnings;
+ local $SIG{__WARN__} = sub { $warnings .= $_[0] };
+ my $first = eval{ $rs->first() };
+ ok( $warnings =~ /ORDER BY terms/, "Problem with ORDER BY quotes" );
+}
+
+my $order = 'year DESC';
+$rs = DBICTest::CD->search({},
+ { 'order_by' => \$order });
+{
+ my $warnings;
+ local $SIG{__WARN__} = sub { $warnings .= $_[0] };
+ my $first = $rs->first();
+ ok( $warnings !~ /ORDER BY terms/,
+ "No problem handling ORDER by scalaref" );
+}
+
DBICTest->schema->storage->sql_maker->quote_char([qw/[ ]/]);
DBICTest->schema->storage->sql_maker->name_sep('.');
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::BasicRels;
+
+require "t/run/08inflate_serialize.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::BasicRels;
+
+require "t/run/145db2.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::BasicRels;
+
+require "t/run/20unique.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::BasicRels;
+
+require "t/run/21transactions.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::BasicRels;
+
+require "t/run/22cascade_copy.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::BasicRels;
+
+require "t/run/23cache.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::BasicRels;
+
+require "t/run/24serialize.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::BasicRels;
+
+require "t/run/25utf8.tl";
+run_tests(DBICTest->schema);
Actor->iterator_class('Class::DBI::My::Iterator');
+delete $film->{related_resultsets};
+
{
my @acts = $film->actors->slice(1, 2);
is @acts, 2, "Slice gives 2 results";
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::HelperRels;
+
+require "t/run/08inflate_serialize.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::HelperRels;
+
+require "t/run/145db2.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::HelperRels;
+
+require "t/run/22cascade_copy.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::HelperRels;
+
+require "t/run/23cache.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::HelperRels;
+
+require "t/run/24serialize.tl";
+run_tests(DBICTest->schema);
--- /dev/null
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::HelperRels;
+
+require "t/run/25utf8.tl";
+run_tests(DBICTest->schema);
+package # hide from PAUSE
+ DBICTest;
+
+use strict;
+use warnings;
+use DBICTest::Schema;
+
+sub initialise {
+
+ my $db_file = "t/var/DBIxClass.db";
+
+ unlink($db_file) if -e $db_file;
+ unlink($db_file . "-journal") if -e $db_file . "-journal";
+ mkdir("t/var") unless -d "t/var";
+
+ my $dsn = "dbi:SQLite:${db_file}";
+
+ return DBICTest::Schema->compose_connection('DBICTest' => $dsn);
+}
+
1;
-package DBICTest::BasicRels;
+package # hide from PAUSE
+ DBICTest::BasicRels;
use DBICTest::Schema;
use DBICTest::Schema::BasicRels;
-package DBICTest::Extra;
+package # hide from PAUSE
+ DBICTest::Extra;
use base 'DBIx::Class::Schema';
__PACKAGE__->load_classes("Foo");
-package DBICTest::Extra::Foo;
+package # hide from PAUSE
+ DBICTest::Extra::Foo;
use base 'DBIx::Class';
__PACKAGE__->load_components(qw/ ResultSetManager Core /);
sub bar : ResultSet { 'good' }
-1;
\ No newline at end of file
+1;
--- /dev/null
+# belongs to t/05components.t
+package # hide from PAUSE
+ DBICTest::ForeignComponent;
+use warnings;
+use strict;
+
+use base qw/ DBIx::Class /;
+
+__PACKAGE__->load_components( qw/ +DBICTest::ForeignComponent::TestComp / );
+
+1;
--- /dev/null
+# belongs to t/05components.t
+package # hide from PAUSE
+ DBICTest::ForeignComponent::TestComp;
+use warnings;
+use strict;
+
+sub foreign_test_method { 1 }
+
+1;
-package DBICTest::HelperRels;
+package # hide from PAUSE
+ DBICTest::HelperRels;
use DBICTest::Schema;
use DBICTest::Schema::HelperRels;
-package DBICTest::Plain;
+package # hide from PAUSE
+ DBICTest::Plain;
use strict;
use warnings;
-package DBICTest::Plain::Test;
+package # hide from PAUSE
+ DBICTest::Plain::Test;
use base 'DBIx::Class::Core';
-package DBICTest::Schema;
+package # hide from PAUSE
+ DBICTest::Schema;
use base qw/DBIx::Class::Schema/;
-package DBICTest::Schema::Artist;
+package # hide from PAUSE
+ DBICTest::Schema::Artist;
use base 'DBIx::Class::Core';
+__PACKAGE__->load_components('PK::Auto');
+
DBICTest::Schema::Artist->table('artist');
DBICTest::Schema::Artist->add_columns(
'artistid' => {
},
'name' => {
data_type => 'varchar',
+ size => 100,
is_nullable => 1,
},
);
-package DBICTest::Schema::ArtistUndirectedMap;
+package # hide from PAUSE
+ DBICTest::Schema::ArtistUndirectedMap;
use base 'DBIx::Class::Core';
-package DBICTest::Schema::BasicRels;
+package # hide from PAUSE
+ DBICTest::Schema::BasicRels;
use base 'DBIx::Class::Core';
DBICTest::Schema::Artist->add_relationship(
cds => 'DBICTest::Schema::CD',
{ 'foreign.artist' => 'self.artistid' },
- { order_by => 'year', join_type => 'LEFT', cascade_delete => 1 }
+ { order_by => 'year', join_type => 'LEFT', cascade_delete => 1, cascade_copy => 1, accessor => 'multi' }
);
DBICTest::Schema::Artist->add_relationship(
twokeys => 'DBICTest::Schema::TwoKeys',
- { 'foreign.artist' => 'self.artistid' }
+ { 'foreign.artist' => 'self.artistid' },
+ { cascade_copy => 1 }
);
DBICTest::Schema::Artist->add_relationship(
onekeys => 'DBICTest::Schema::OneKey',
DBICTest::Schema::CD->add_relationship(
tags => 'DBICTest::Schema::Tag',
{ 'foreign.cd' => 'self.cdid' },
- { join_type => 'LEFT', cascade_delete => 1 }
+ { join_type => 'LEFT', cascade_delete => 1, cascade_copy => 1, accessor => 'multi' }
);
#DBICTest::Schema::CD->might_have(liner_notes => 'DBICTest::Schema::LinerNotes' => qw/notes/);
DBICTest::Schema::CD->add_relationship(
-package DBICTest::Schema::CD;
+package # hide from PAUSE
+ DBICTest::Schema::CD;
use base 'DBIx::Class::Core';
+__PACKAGE__->load_components('PK::Auto');
+
DBICTest::Schema::CD->table('cd');
DBICTest::Schema::CD->add_columns(
'cdid' => {
},
'title' => {
data_type => 'varchar',
+ size => 100,
},
'year' => {
data_type => 'varchar',
+ size => 100,
},
);
DBICTest::Schema::CD->set_primary_key('cdid');
-package DBICTest::Schema::CD_to_Producer;
+package # hide from PAUSE
+ DBICTest::Schema::CD_to_Producer;
use base 'DBIx::Class::Core';
-package DBICTest::Schema::FourKeys;
+package # hide from PAUSE
+ DBICTest::Schema::FourKeys;
use base 'DBIx::Class::Core';
-package DBICTest::Schema::HelperRels;
+package # hide from PAUSE
+ DBICTest::Schema::HelperRels;
use base 'DBIx::Class::Core';
DBICTest::Schema::CD->belongs_to('artist', 'DBICTest::Schema::Artist');
DBICTest::Schema::CD->has_many(tracks => 'DBICTest::Schema::Track');
-DBICTest::Schema::CD->has_many(tags => 'DBICTest::Schema::Tag');
+DBICTest::Schema::CD->has_many(tags => 'DBICTest::Schema::Tag', undef,
+ { order_by => 'tag' });
DBICTest::Schema::CD->has_many(cd_to_producer => 'DBICTest::Schema::CD_to_Producer' => 'cd');
DBICTest::Schema::CD->might_have(liner_notes => 'DBICTest::Schema::LinerNotes',
);
DBICTest::Schema::Artist->has_many(
'artist_undirected_maps', 'DBICTest::Schema::ArtistUndirectedMap',
- [{'foreign.id1' => 'self.artistid'}, {'foreign.id2' => 'self.artistid'}]
+ [{'foreign.id1' => 'self.artistid'}, {'foreign.id2' => 'self.artistid'}],
+ { cascade_copy => 0 } # this would *so* not make sense
);
DBICTest::Schema::ArtistUndirectedMap->belongs_to(
'artist1', 'DBICTest::Schema::Artist', 'id1');
-package DBICTest::Schema::LinerNotes;
+package # hide from PAUSE
+ DBICTest::Schema::LinerNotes;
use base qw/DBIx::Class::Core/;
},
'notes' => {
data_type => 'varchar',
+ size => 100,
},
);
DBICTest::Schema::LinerNotes->set_primary_key('liner_id');
-package DBICTest::Schema::OneKey;
+package # hide from PAUSE
+ DBICTest::Schema::OneKey;
use base 'DBIx::Class::Core';
+__PACKAGE__->load_components('PK::Auto');
+
DBICTest::Schema::OneKey->table('onekey');
DBICTest::Schema::OneKey->add_columns(
'id' => {
-package DBICTest::Schema::Producer;
+package # hide from PAUSE
+ DBICTest::Schema::Producer;
use base 'DBIx::Class::Core';
},
'name' => {
data_type => 'varchar',
+ size => 100,
},
);
__PACKAGE__->set_primary_key('producerid');
-package DBICTest::Schema::SelfRef;\r
+package # hide from PAUSE \r
+ DBICTest::Schema::SelfRef;\r
\r
use base 'DBIx::Class::Core';\r
\r
},\r
'name' => {\r
data_type => 'varchar',\r
+ size => 100,\r
},\r
);\r
__PACKAGE__->set_primary_key('id');\r
-package DBICTest::Schema::SelfRefAlias;\r
+package # hide from PAUSE \r
+ DBICTest::Schema::SelfRefAlias;\r
\r
use base 'DBIx::Class::Core';\r
\r
-package DBICTest::Schema::Serialized;
+package # hide from PAUSE
+ DBICTest::Schema::Serialized;
use base 'DBIx::Class::Core';
-package DBICTest::Schema::Tag;
+package # hide from PAUSE
+ DBICTest::Schema::Tag;
use base qw/DBIx::Class::Core/;
+__PACKAGE__->load_components('PK::Auto');
+
DBICTest::Schema::Tag->table('tags');
DBICTest::Schema::Tag->add_columns(
'tagid' => {
- data_type => 'varchar',
+ data_type => 'integer',
is_auto_increment => 1,
},
'cd' => {
data_type => 'integer',
},
'tag' => {
- data_type => 'varchar'
+ data_type => 'varchar',
+ size => 100,
},
);
DBICTest::Schema::Tag->set_primary_key('tagid');
-package DBICTest::Schema::Track;
+package # hide from PAUSE
+ DBICTest::Schema::Track;
use base 'DBIx::Class::Core';
},
'title' => {
data_type => 'varchar',
+ size => 100,
},
);
DBICTest::Schema::Track->set_primary_key('trackid');
-package DBICTest::Schema::TreeLike;
+package # hide from PAUSE
+ DBICTest::Schema::TreeLike;
use base qw/DBIx::Class/;
__PACKAGE__->add_columns(
'id' => { data_type => 'integer', is_auto_increment => 1 },
'parent' => { data_type => 'integer' },
- 'name' => { data_type => 'varchar' },
+ 'name' => { data_type => 'varchar',
+ size => 100,
+ },
);
__PACKAGE__->set_primary_key(qw/id/);
__PACKAGE__->belongs_to('parent', 'TreeLike',
-package DBICTest::Schema::TwoKeys;
+package # hide from PAUSE
+ DBICTest::Schema::TwoKeys;
use base 'DBIx::Class::Core';
use strict;
use warnings;
-use DBICTest::Schema;
+use DBICTest;
-my $db_file = "t/var/DBIxClass.db";
-
-unlink($db_file) if -e $db_file;
-unlink($db_file . "-journal") if -e $db_file . "-journal";
-mkdir("t/var") unless -d "t/var";
-
-my $dsn = "dbi:SQLite:${db_file}";
-
-my $schema = DBICTest::Schema->compose_connection('DBICTest' => $dsn);
+my $schema = DBICTest->initialise;
$schema->storage->on_connect_do([ "PRAGMA synchronous = OFF" ]);
my $dbh = $schema->storage->dbh;
-open IN, "t/lib/sqlite.sql";
+if ($ENV{"DBICTEST_SQLT_DEPLOY"}) {
+ $schema->deploy;
+} else {
+ open IN, "t/lib/sqlite.sql";
-my $sql;
+ my $sql;
-{ local $/ = undef; $sql = <IN>; }
+ { local $/ = undef; $sql = <IN>; }
-close IN;
+ close IN;
-$dbh->do($_) for split(/\n\n/, $sql);
+ $dbh->do($_) for split(/\n\n/, $sql);
+}
$schema->storage->dbh->do("PRAGMA synchronous = OFF");
--
-- Created by SQL::Translator::Producer::SQLite
--- Created on Mon Feb 6 01:07:16 2006
+-- Created on Fri Feb 24 15:13:57 2006
--
BEGIN TRANSACTION;
);
--
--- Table: producer
---
-CREATE TABLE producer (
- producerid INTEGER PRIMARY KEY NOT NULL,
- name varchar NOT NULL
-);
-
---
-- Table: onekey
--
CREATE TABLE onekey (
);
--
+-- Table: producer
+--
+CREATE TABLE producer (
+ producerid INTEGER PRIMARY KEY NOT NULL,
+ name varchar NOT NULL
+);
+
+--
-- Table: treelike
--
CREATE TABLE treelike (
-- Table: tags
--
CREATE TABLE tags (
- tagid varchar NOT NULL,
+ tagid INTEGER PRIMARY KEY NOT NULL,
cd integer NOT NULL,
- tag varchar NOT NULL,
- PRIMARY KEY (tagid)
+ tag varchar NOT NULL
);
--
sub run_tests {
my $schema = shift;
-plan tests => 39;
+plan tests => 41;
my @art = $schema->resultset("Artist")->search({ }, { order_by => 'name DESC'});
is_deeply( \@cd, [qw/cdid artist title year/], 'column order');
-$cd = $schema->resultset("CD")->search({ title => 'Spoonful of bees' }, { cols => ['title'] })->next;
+$cd = $schema->resultset("CD")->search({ title => 'Spoonful of bees' }, { columns => ['title'] })->next;
is($cd->title, 'Spoonful of bees', 'subset of columns returned correctly');
$cd = $schema->resultset("CD")->search(undef, { include_columns => [ 'artist.name' ], join => [ 'artist' ] })->find(1);
is($cd->title, 'Spoonful of bees', 'Correct CD returned with include');
is($cd->get_column('name'), 'Caterwauler McCrae', 'Additional column returned');
-# insert_or_update
+# update_or_insert
$new = $schema->resultset("Track")->new( {
trackid => 100,
cd => 1,
position => 1,
title => 'Insert or Update',
} );
-$new->insert_or_update;
-ok($new->in_storage, 'insert_or_update insert ok');
+$new->update_or_insert;
+ok($new->in_storage, 'update_or_insert insert ok');
# test in update mode
$new->pos(5);
-$new->insert_or_update;
-is( $schema->resultset("Track")->find(100)->pos, 5, 'insert_or_update update ok');
+$new->update_or_insert;
+is( $schema->resultset("Track")->find(100)->pos, 5, 'update_or_insert update ok');
eval { $schema->class("Track")->load_components('DoesNotExist'); };
cmp_ok($or_rs->next->cdid, '==', $rel_rs->next->cdid, 'Related object ok');
+my $tag = $schema->resultset('Tag')->search(
+ [ { 'me.tag' => 'Blue' } ], { cols=>[qw/tagid/] } )->next;
+
+cmp_ok($tag->has_column_loaded('tagid'), '==', 1, 'Has tagid loaded');
+cmp_ok($tag->has_column_loaded('tag'), '==', 0, 'Does not have tag loaded');
+
ok($schema->storage(), 'Storage available');
$schema->source("Artist")->{_columns}{'artistid'} = {};
eval { require DateTime };
plan skip_all => "Need DateTime for inflation tests" if $@;
-plan tests => 5;
+plan tests => 3;
DBICTest::Schema::CD->inflate_column( 'year',
{ inflate => sub { DateTime->new( year => shift ) },
($cd) = $schema->resultset("CD")->search( year => $now->year );
is( $cd->year->year, $now->year, 'deflate ok' );
-use YAML;
-DBICTest::Schema::Serialized->inflate_column( 'serialized',
- { inflate => sub { Load (shift) },
- deflate => sub { die "Expecting a reference" unless (ref $_[0]); Dump (shift) } }
-);
-Class::C3->reinitialize;
-
-my $complex1 = {
- id => 1,
- serialized => {
- a => 1,
- b => 2,
- },
-};
-
-my $complex2 = {
- id => 1,
- serialized => [qw/a b 1 2/],
-};
-
-my $rs = $schema->resultset('Serialized');
-
-my $entry = $rs->create($complex2);
-
-ok($entry->update ($complex1), "update with hashref deflating ok");
-
-ok($entry->update ($complex2), "update with arrayref deflating ok");
-
}
1;
--- /dev/null
+sub run_tests {
+my $schema = shift;
+
+use Data::Dumper;
+
+my @serializers = (
+ { module => 'YAML.pm',
+ inflater => sub { YAML::Load (shift) },
+ deflater => sub { die "Expecting a reference" unless (ref $_[0]); YAML::Dump (shift) },
+ },
+ { module => 'Storable.pm',
+ inflater => sub { Storable::thaw (shift) },
+ deflater => sub { die "Expecting a reference" unless (ref $_[0]); Storable::nfreeze (shift) },
+ },
+);
+
+
+my $selected;
+foreach my $serializer (@serializers) {
+ eval { require $serializer->{module} };
+ unless ($@) {
+ $selected = $serializer;
+ last;
+ }
+}
+
+plan (skip_all => "No suitable serializer found") unless $selected;
+
+plan (tests => 6);
+DBICTest::Schema::Serialized->inflate_column( 'serialized',
+ { inflate => $selected->{inflater},
+ deflate => $selected->{deflater},
+ },
+);
+Class::C3->reinitialize;
+
+my $complex1 = {
+ id => 1,
+ serialized => {
+ a => 1,
+ b => [
+ { c => 2 },
+ ],
+ d => 3,
+ },
+};
+
+my $complex2 = {
+ id => 1,
+ serialized => [
+ 'a',
+ { b => 1, c => 2},
+ 'd',
+ ],
+};
+
+my $rs = $schema->resultset('Serialized');
+my $entry = $rs->create({ id => 1, serialized => ''});
+
+my $inflated;
+
+ok($entry->update ({ %{$complex1} }), 'hashref deflation ok');
+ok($inflated = $entry->serialized, 'hashref inflation ok');
+is_deeply($inflated, $complex1->{serialized}, 'inflated hash matches original');
+
+ok($entry->update ({ %{$complex2} }), 'arrayref deflation ok');
+ok($inflated = $entry->serialized, 'arrayref inflation ok');
+is_deeply($inflated, $complex2->{serialized}, 'inflated array matches original');
+
+}
+
+1;
plan tests => 2;
$schema->class("Artist")->load_components(qw/PK::Auto::SQLite/);
+ # Should just be PK::Auto but this ensures the compat shim works
# add an artist without primary key to test Auto
my $artist = $schema->resultset("Artist")->create( { name => 'Auto' } );
#'dbi:mysql:host=localhost;database=dbic_test', 'dbic_test', '');
-MySQLTest::Artist->load_components('PK::Auto::MySQL');
+MySQLTest::Artist->load_components('PK::Auto');
# test primary key handling
my $new = MySQLTest::Artist->create({ name => 'foo' });
$dbh->do("CREATE TABLE artist (artistid serial PRIMARY KEY, name VARCHAR(255), charfield CHAR(10));");
-PgTest::Artist->load_components('PK::Auto::Pg');
+PgTest::Artist->load_components('PK::Auto');
my $new = PgTest::Artist->create({ name => 'foo' });
END;
});
-OraTest::Artist->load_components('PK::Auto::Oracle');
+OraTest::Artist->load_components('PK::Auto');
OraTest::CD->load_components('PK::Auto::Oracle');
OraTest::Track->load_components('PK::Auto::Oracle');
--- /dev/null
+sub run_tests {
+my $schema = shift;
+
+my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_DB2_${_}" } qw/DSN USER PASS/};
+
+#warn "$dsn $user $pass";
+
+plan skip_all => 'Set $ENV{DBICTEST_DB2_DSN}, _USER and _PASS to run this test'
+ unless ($dsn && $user);
+
+plan tests => 5;
+
+DBICTest::Schema->compose_connection('DB2Test' => $dsn, $user, $pass);
+
+my $dbh = DB2Test->schema->storage->dbh;
+
+$dbh->do("DROP TABLE artist;");
+
+$dbh->do("CREATE TABLE artist (artistid INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1), name VARCHAR(255), charfield CHAR(10));");
+
+#'dbi:mysql:host=localhost;database=dbic_test', 'dbic_test', '');
+
+DB2Test::Artist->load_components('PK::Auto');
+
+# test primary key handling
+my $new = DB2Test::Artist->create({ name => 'foo' });
+ok($new->artistid, "Auto-PK worked");
+
+# test LIMIT support
+for (1..6) {
+ DB2Test::Artist->create({ name => 'Artist ' . $_ });
+}
+my $it = DB2Test::Artist->search( {},
+ { rows => 3,
+ order_by => 'artistid'
+ }
+);
+is( $it->count, 3, "LIMIT count ok" );
+is( $it->next->name, "Artist 2", "iterator->next ok" );
+$it->next;
+$it->next;
+is( $it->next, undef, "next past end of resultset ok" );
+
+my $test_type_info = {
+ 'artistid' => {
+ 'data_type' => 'INTEGER',
+ 'is_nullable' => 0,
+ 'size' => 11
+ },
+ 'name' => {
+ 'data_type' => 'VARCHAR',
+ 'is_nullable' => 1,
+ 'size' => 255
+ },
+ 'charfield' => {
+ 'data_type' => 'VARCHAR',
+ 'is_nullable' => 1,
+ 'size' => 10
+ },
+};
+
+
+my $type_info = DB2Test->schema->storage->columns_info_for('artist');
+is_deeply($type_info, $test_type_info, 'columns_info_for - column data types');
+
+
+
+# clean up our mess
+$dbh->do("DROP TABLE artist");
+
+}
+
+1;
eval "use DBD::SQLite";
plan $@
? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 41 );
+ : ( tests => 42 );
}
# figure out if we've got a version of sqlite that is older than 3.2.6, in
unlink 't/var/dbic.trace';
is($selects, 1, 'prefetch ran only 1 select statement');
-# test for partial prefetch via cols attr
+# test for partial prefetch via columns attr
my $cd = $schema->resultset('CD')->find(1,
{
- cols => [qw/title artist.name/],
- join => 'artist'
+ columns => [qw/title artist.name/],
+ join => { 'artist' => {} }
}
);
ok(eval { $cd->artist->name eq 'Caterwauler McCrae' }, 'single related column prefetched');
cmp_ok( $rs->count, '==', 3, "count() ok after group_by on related column" );
}
-cmp_ok( scalar $rs->all, '==', 3, "all() returns same count as count() after group_by on related column" );
+$rs = $schema->resultset("Artist")->search(
+ {},
+ { join => [qw/ cds /], group_by => [qw/ me.name /], having =>{ 'MAX(cds.cdid)'=> \'< 5' } }
+);
+
+cmp_ok( $rs->all, '==', 2, "results ok after group_by on related column with a having" );
+
+$rs = $rs->search( undef, { having =>{ 'count(*)'=> \'> 2' }});
+
+cmp_ok( $rs->all, '==', 1, "count() ok after group_by on related column with a having" );
$rs = $schema->resultset("Artist")->search(
{ 'cds.title' => 'Spoonful of bees',
--- /dev/null
+use strict;
+use warnings;
+
+sub run_tests {
+my $schema = shift;
+
+plan tests => 4;
+my $artist = $schema->resultset('Artist')->find(1);
+my $artist_cds = $artist->search_related('cds');
+my $cover_band = $artist->copy;
+my $cover_cds = $cover_band->search_related('cds');
+cmp_ok($cover_band->id, '!=', $artist->id, 'copy assigned a new id');
+is($cover_cds->count, $artist_cds->count, 'duplicated rows count ok');
+
+#check multi-keyed
+cmp_ok($cover_band->search_related('twokeys')->count, '>', 0, 'duplicated multiPK ok');
+
+#and check copying a few relations away
+cmp_ok($cover_cds->search_related('tags')->count, '==',
+ $artist_cds->search_related('tags')->count , 'duplicated count ok');
+
+}
+1;
--- /dev/null
+sub run_tests {
+my $schema = shift;
+
+eval "use DBD::SQLite";
+plan skip_all => 'needs DBD::SQLite for testing' if $@;
+plan tests => 15;
+
+my $rs = $schema->resultset("Artist")->search(
+ { artistid => 1 }
+);
+
+my $artist = $rs->first;
+
+is( scalar @{ $rs->get_cache }, 0, 'cache is not populated without cache attribute' );
+
+my @a = $schema->resultset("Artist")->search(
+ { },
+ {
+ join => [ qw/ cds /],
+ prefetch => [qw/ cds /],
+ }
+);
+
+is(scalar @a, 3, 'artist with cds: count parent objects');
+
+$rs = $schema->resultset("Artist")->search(
+ { 'artistid' => 1 },
+ {
+ join => [ qw/ cds /],
+ prefetch => [qw/ cds /],
+ }
+);
+
+use Data::Dumper; $Data::Dumper::Deparse = 1;
+
+# start test for prefetch SELECT count
+unlink 't/var/dbic.trace' if -e 't/var/dbic.trace';
+DBI->trace(1, 't/var/dbic.trace');
+
+$artist = $rs->first;
+$rs->reset();
+
+# make sure artist contains a related resultset for cds
+is( ref $artist->{related_resultsets}->{cds}, 'DBIx::Class::ResultSet', 'artist has a related_resultset for cds' );
+
+# check if $artist->cds->get_cache is populated
+is( scalar @{$artist->cds->get_cache}, 3, 'cache for artist->cds contains correct number of records');
+
+# ensure that $artist->cds returns correct number of objects
+is( scalar ($artist->cds), 3, 'artist->cds returns correct number of objects' );
+
+# ensure that $artist->cds->count returns correct value
+is( $artist->cds->count, 3, 'artist->cds->count returns correct value' );
+
+# ensure that $artist->count_related('cds') returns correct value
+is( $artist->count_related('cds'), 3, 'artist->count_related returns correct value' );
+
+# count the SELECTs
+DBI->trace(0, undef);
+my $selects = 0;
+my $trace = IO::File->new('t/var/dbic.trace', '<')
+ or die "Unable to read trace file";
+while (<$trace>) {
+ $selects++ if /SELECT/;
+}
+$trace->close;
+unlink 't/var/dbic.trace';
+is($selects, 1, 'only one SQL statement executed');
+
+# make sure related_resultset is deleted after object is updated
+$artist->set_column('name', 'New Name');
+$artist->update();
+
+is( scalar keys %{$artist->{related_resultsets}}, 0, 'related resultsets deleted after update' );
+
+# todo: make sure caching works with nested prefetch e.g. $artist->cds->tracks
+$rs = $schema->resultset("Artist")->search(
+ { artistid => 1 },
+ {
+ join => { cds => 'tags' },
+ prefetch => {
+ cds => 'tags'
+ },
+ }
+);
+{
+$schema->resultset("Artist")->create({artistid=>4,name=>qq{Humoungous Hamsters}});
+my $artist = $schema->resultset("Artist")->search(
+ { artistid => 4 },{prefetch=>[qw/cds/]}
+)->first;
+
+is($artist->cds, 0, 'No cds for this artist');
+}
+
+# SELECT count for nested has_many prefetch
+unlink 't/var/dbic.trace' if -e 't/var/dbic.trace';
+DBI->trace(1, 't/var/dbic.trace');
+
+$artist = ($rs->all)[0];
+
+# count the SELECTs
+DBI->trace(0, undef);
+$selects = 0;
+$trace = IO::File->new('t/var/dbic.trace', '<')
+ or die "Unable to read trace file";
+while (<$trace>) {
+ $selects++ if /SELECT/;
+}
+$trace->close;
+unlink 't/var/dbic.trace';
+is($selects, 1, 'only one SQL statement executed');
+
+my @objs;
+#$artist = $rs->find(1);
+
+unlink 't/var/dbic.trace' if -e 't/var/dbic.trace';
+DBI->trace(1, 't/var/dbic.trace');
+
+my $cds = $artist->cds;
+my $tags = $cds->next->tags;
+while( my $tag = $tags->next ) {
+ push @objs, $tag->tagid; #warn "tag:", $tag->ID, " => ", $tag->tag;
+}
+
+is_deeply( \@objs, [ 3 ], 'first cd has correct tags' );
+
+$tags = $cds->next->tags;
+@objs = ();
+while( my $tag = $tags->next ) {
+ push @objs, $tag->id; #warn "tag: ", $tag->ID;
+}
+
+is_deeply( \@objs, [ 2, 5, 8 ], 'second cd has correct tags' );
+
+# count the SELECTs
+DBI->trace(0, undef);
+$selects = 0;
+$trace = IO::File->new('t/var/dbic.trace', '<')
+ or die "Unable to read trace file";
+while (<$trace>) {
+ $selects++ if /SELECT/;
+}
+$trace->close;
+unlink 't/var/dbic.trace';
+
+is( $selects, 0, 'no additional SQL statements while checking nested data' );
+
+# start test for prefetch SELECT count
+unlink 't/var/dbic.trace' if -e 't/var/dbic.trace';
+DBI->trace(1, 't/var/dbic.trace');
+
+$artist = $schema->resultset('Artist')->find(1, { prefetch => [qw/cds/] });
+
+# count the SELECTs
+DBI->trace(0, undef);
+$selects = 0;
+$trace = IO::File->new('t/var/dbic.trace', '<')
+ or die "Unable to read trace file";
+while (<$trace>) {
+ $selects++ if /SELECT/;
+}
+$trace->close;
+unlink 't/var/dbic.trace';
+
+is( $selects, 1, 'only one select statement on find with inline has_many prefetch' );
+
+# start test for prefetch SELECT count
+unlink 't/var/dbic.trace' if -e 't/var/dbic.trace';
+DBI->trace(1, 't/var/dbic.trace');
+
+$rs = $schema->resultset('Artist')->search(undef, { prefetch => [qw/cds/] });
+$artist = $rs->find(1);
+
+# count the SELECTs
+DBI->trace(0, undef);
+$selects = 0;
+$trace = IO::File->new('t/var/dbic.trace', '<')
+ or die "Unable to read trace file";
+while (<$trace>) {
+ $selects++ if /SELECT/;
+}
+$trace->close;
+unlink 't/var/dbic.trace';
+
+is( $selects, 1, 'only one select statement on find with has_many prefetch on resultset' );
+
+}
+
+1;
--- /dev/null
+use Storable;
+
+sub run_tests {
+my $schema = shift;
+
+plan tests => 1;
+
+my $artist = $schema->resultset('Artist')->find(1);
+my $copy = eval { Storable::dclone($artist) };
+is_deeply($copy, $artist, 'serialize row object works');
+
+}
+
+1;
--- /dev/null
+sub run_tests {
+ my $schema = shift;
+
+  eval 'use Encode; 1'
+    or plan skip_all => 'Install Encode to run this test';
+
+ plan tests => 2;
+
+ DBICTest::Schema::Artist->load_components('UTF8Columns');
+ DBICTest::Schema::Artist->utf8_columns('name');
+ Class::C3->reinitialize();
+
+ my $artist = $schema->resultset("Artist")->create( { name => 'uni' } );
+ ok( Encode::is_utf8( $artist->name ), 'got name with utf8 flag' );
+
+ my $utf8_char = 'uniuni';
+ Encode::_utf8_on($utf8_char);
+ $artist->name($utf8_char);
+  ok( !Encode::is_utf8( $artist->{_column_data}->{name} ),
+    'stored column data is not utf8-flagged' );
+}
+
+1;
-package Actor;
+package # hide from PAUSE
+ Actor;
BEGIN { unshift @INC, './t/testlib'; }
-package ActorAlias;\r
+package # hide from PAUSE \r
+ ActorAlias;\r
\r
BEGIN { unshift @INC, './t/testlib'; }\r
\r
-package Binary;
+package # hide from PAUSE
+ Binary;
BEGIN { unshift @INC, './t/testlib'; }
-package Blurb;
+package # hide from PAUSE
+ Blurb;
BEGIN { unshift @INC, './t/testlib'; }
-package CDBase;
+package # hide from PAUSE
+ CDBase;
use strict;
use base qw(DBIx::Class::Test::SQLite);
-package Director;
+package # hide from PAUSE
+ Director;
BEGIN { unshift @INC, './t/testlib'; }
-package Film;
+package # hide from PAUSE
+ Film;
BEGIN { unshift @INC, './t/testlib'; }
use base 'DBIx::Class::Test::SQLite';
-package Lazy;
+package # hide from PAUSE
+ Lazy;
BEGIN { unshift @INC, './t/testlib'; }
use base 'DBIx::Class::Test::SQLite';
-package Log;
+package # hide from PAUSE
+ Log;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package MyBase;
+package # hide from PAUSE
+ MyBase;
use strict;
use base qw(DBIx::Class);
-package MyFilm;
+package # hide from PAUSE
+ MyFilm;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package MyFoo;
+package # hide from PAUSE
+ MyFoo;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package MyStar;
+package # hide from PAUSE
+ MyStar;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package MyStarLink;
+package # hide from PAUSE
+ MyStarLink;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package MyStarLinkMCPK;
+package # hide from PAUSE
+ MyStarLinkMCPK;
BEGIN { unshift @INC, './t/testlib'; }
use base 'MyBase';
-package Order;
+package # hide from PAUSE
+ Order;
BEGIN { unshift @INC, './t/testlib'; }
-package OtherFilm;
+package # hide from PAUSE
+ OtherFilm;
use strict;
use base 'Film';
-package PgBase;
+package # hide from PAUSE
+ PgBase;
use strict;
use base 'DBIx::Class';