Revision history for DBIx::Class
- Support for Firebird RDBMS with DBD::InterBase and ODBC
+ - Add core support for INSERT RETURNING (for storages that
+          support this syntax, currently PostgreSQL and Firebird)
- DBIx::Class::InflateColumn::File entered deprecated state
- DBIx::Class::Optional::Dependencies left experimental state
- Add req_group_list to Opt::Deps (RT#55211)
+ - Add support for mysql-specific STRAIGHT_JOIN (RT#55579)
- Cascading delete/update are now wrapped in a transaction
for atomicity
- Fix multiple deficiencies when using MultiCreate with
handled properly by ::Schema::Versioned.
- Fix regression on not properly throwing when $obj->relationship
is unresolvable
+ - Fix the join-optimiser to consider unqualified column names
+ whenever possible
+ - Fix an issue with multiple same-table joins confusing the join
+          optimiser
- Add has_relationship method to row objects
- Fix regression in set_column on PK-less objects
+ - Better error text on malformed/missing relationships
- Add POD about the significance of PK columns
- Fix for SQLite to ignore the (unsupported) { for => ... }
attribute
- Fix ambiguity in default directory handling of create_ddl_dir
(RT#54063)
+ - Fix update_all and delete_all to be wrapped in a transaction
+ - Support add_columns('+colname' => { ... }) to augment column
+ definitions.
+ - Fix spurious warnings on multiple UTF8Columns component loads
+ - Unicode support documentation in Cookbook and UTF8Columns
0.08120 2010-02-24 08:58:00 (UTC)
- Make sure possibly overwritten deployment_statements methods in
'MRO::Compat' => '0.09',
'Module::Find' => '0.06',
'Path::Class' => '0.18',
- 'SQL::Abstract' => '1.62',
+ 'SQL::Abstract' => '1.63',
'SQL::Abstract::Limit' => '0.13',
'Sub::Name' => '0.04',
'Data::Dumper::Concise' => '1.000',
=head1 CONTRIBUTORS
-abraxxa: Alexander Hartmaier <alex_hartmaier@hotmail.com>
+abraxxa: Alexander Hartmaier <abraxxa@cpan.org>
aherzog: Adam Herzog <adam@herzogdesigns.com>
groditi: Guillermo Roditi <groditi@cpan.org>
+hobbs: Andrew Rodland <arodland@cpan.org>
+
ilmari: Dagfinn Ilmari MannsE<aring>ker <ilmari@ilmari.org>
jasonmay: Jason May <jason.a.may@gmail.com>
my $target = shift;
my @present_components = (@{mro::get_linear_isa ($target)||[]});
+ shift @present_components; # don't need to interrogate myself
no strict 'refs';
for my $comp (reverse @_) {
- if ($comp->isa ('DBIx::Class::UTF8Columns') ) {
+ # if we are trying add a UTF8Columns component *for the first time*
+ if ($comp->isa ('DBIx::Class::UTF8Columns') && ! $target->isa ('DBIx::Class::UTF8Columns') ) {
require B;
my @broken;
for (@present_components) {
+ last if $_ eq 'DBIx::Class::Row'; # don't care about anything further down the chain
+
my $cref = $_->can ('store_column')
or next;
- push @broken, $_ if B::svref_2object($cref)->STASH->NAME ne 'DBIx::Class::Row';
+
+ push @broken, $_ if B::svref_2object($cref)->STASH->NAME eq $_;
}
carp "Incorrect loading order of $comp by ${target} will affect other components overriding store_column ("
=head2 Predefined searches
-You can write your own L<DBIx::Class::ResultSet> class by inheriting from it
-and defining often used searches as methods:
+You can define frequently used searches as methods by subclassing
+L<DBIx::Class::ResultSet>:
package My::DBIC::ResultSet::CD;
use strict;
you create an index on the return value of the function in question.) However,
it can be accomplished with C<DBIx::Class> when necessary.
+Your approach for doing so will depend on whether you have turned
+quoting on via the C<quote_char> and C<name_sep> attributes. If you
+explicitly defined C<quote_char> and C<name_sep> in your
+C<connect_info> (see L<DBIx::Class::Storage::DBI/"connect_info">) then
+you are using quoting, otherwise not.
+
If you do not have quoting on, simply include the function in your search
specification as you would any column:
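
A minimal sketch of such a search (the resultset, column and function
names here are illustrative, not from the original example):

  my $rs = $schema->resultset('Artist')->search({
    'LOWER(name)' => 'velvet underground',
  });
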
=head1 TRANSACTIONS
+=head2 Transactions with txn_do
+
As of version 0.04001, there is improved transaction support in
L<DBIx::Class::Storage> and L<DBIx::Class::Schema>. Here is an
example of the recommended way to use it:
deal_with_failed_transaction();
}
+Note: by default C<txn_do> will re-run the coderef one more time if an
+error occurs due to client disconnection (e.g. the server is bounced).
+You need to make sure that your coderef can be invoked multiple times
+without terrible side effects.
+
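+For example, a coderef that only issues idempotent updates is safe to
+hand to C<txn_do> (a minimal sketch; the resultset, column and
+C<$order_id> names are illustrative):
+
+  $schema->txn_do(sub {
+    # setting a column to a fixed value can safely be re-run
+    my $order = $schema->resultset('Order')->find($order_id);
+    $order->update({ status => 'shipped' });
+  });
+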
Nested transactions will work as expected. That is, only the outermost
transaction will actually issue a commit to the $dbh, and a rollback
at any level of any transaction will cause the entire nested
transaction to fail.
-
+
=head2 Nested transactions and auto-savepoints
If savepoints are supported by your RDBMS, it is possible to achieve true
the C<eval>-block around C<txn_do> fails, a rollback is issued. If the C<eval>
succeeds, the transaction is committed (or the savepoint released).
-While you can get more fine-grained controll using C<svp_begin>, C<svp_release>
+While you can get more fine-grained control using C<svp_begin>, C<svp_release>
and C<svp_rollback>, it is strongly recommended to use C<txn_do> with coderefs.
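
For example, with C<< auto_savepoint => 1 >> set in your
L<connect_info|DBIx::Class::Storage::DBI/connect_info>, nested C<txn_do>
calls use savepoints automatically (a minimal sketch; the resultset and
column names are illustrative):

  $schema->txn_do(sub {
    my $artist = $schema->resultset('Artist')->create({ name => 'Mayhem' });

    # the inner txn_do becomes a savepoint - if it dies, only its own
    # changes are rolled back, and the outer transaction may proceed
    eval {
      $schema->txn_do(sub {
        $artist->create_related(cds => { title => 'Deathcrush', year => 1987 });
        die "changed my mind\n";
      });
    };
  });
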
+=head2 Simple Transactions with DBIx::Class::Storage::TxnScopeGuard
+
+An easy way to use transactions is with
+L<DBIx::Class::Storage::TxnScopeGuard>. See L</Automatically creating
+related objects> for an example.
+
+Note that unlike C<txn_do>, C<TxnScopeGuard> will only make sure the
+connection is alive when issuing the C<BEGIN> statement. It will not
+(and really cannot) retry if the server goes away mid-operation.
+
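+A minimal sketch (the resultset and column names are illustrative):
+
+  my $guard = $schema->txn_scope_guard;
+
+  $schema->resultset('Artist')->create({ name => 'Ween' });
+
+  # if we die (or simply never reach this point) before calling
+  # commit, the guard rolls the transaction back on destruction
+  $guard->commit;
+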
=head1 SQL
=head2 Creating Schemas From An Existing Database
arrayrefs together with the column name, like this: C<< [column_name => value]
>>.
+=head2 Using Unicode
+
+When using unicode character data there are two alternatives -
+either your database supports unicode characters (including setting
+the utf8 flag on the returned string), or you need to encode/decode
+data appropriately each time a string field is inserted into or
+retrieved from the database. It is better to avoid
+encoding/decoding data and to use your database's own unicode
+capabilities if at all possible.
+
+The L<DBIx::Class::UTF8Columns> component handles storing selected
+unicode columns in a database that does not directly support
+unicode. If used with a database that does correctly handle unicode
+then strange and unexpected data corruption B<will> occur.
+
+The Catalyst Wiki Unicode page at
+L<http://wiki.catalystframework.org/wiki/tutorialsandhowtos/using_unicode>
+has additional information on the use of Unicode with Catalyst and
+DBIx::Class.
+
+The following databases do correctly handle unicode data:
+
+=head3 MySQL
+
+MySQL supports unicode, and will correctly flag utf8 data from the
+database if the C<mysql_enable_utf8> flag is set in the connect options.
+
+ my $schema = My::Schema->connection('dbi:mysql:dbname=test',
+ $user, $pass,
+ { mysql_enable_utf8 => 1} );
+
+
+When set, data retrieved from a textual column type (char, varchar,
+etc.) will have the UTF-8 flag turned on if necessary. This
+enables character semantics on that string. You will also need to
+ensure that your database / table / column is configured to use
+UTF8. See Chapter 10 of the mysql manual for details.
+
+See L<DBD::mysql> for further details.
+
+=head3 Oracle
+
+Information about Oracle support for unicode can be found in
+L<DBD::Oracle/Unicode>.
+
+=head3 PostgreSQL
+
+PostgreSQL supports unicode if the character set is correctly set
+at database creation time. Additionally, the C<pg_enable_utf8> flag
+should be set to ensure unicode data is correctly marked.
+
+ my $schema = My::Schema->connection('dbi:Pg:dbname=test',
+ $user, $pass,
+ { pg_enable_utf8 => 1} );
+
+Further information can be found in L<DBD::Pg>.
+
+=head3 SQLite
+
+SQLite version 3 and above natively use unicode internally. To
+correctly mark unicode strings taken from the database, the
+C<sqlite_unicode> flag should be set at connect time (in versions
+of L<DBD::SQLite> prior to 1.27 this attribute was named
+C<unicode>).
+
+ my $schema = My::Schema->connection('dbi:SQLite:/tmp/test.db',
+ '', '',
+ { sqlite_unicode => 1} );
+
=head1 BOOTSTRAPPING/MIGRATING
=head2 Easy migration from class-based to schema-based setup
L<DBIx::Class::Schema/deploy>. See there for details, or the
L<DBIx::Class::Manual::Cookbook>.
+=item .. store/retrieve Unicode data in my database?
+
+Make sure your database supports Unicode and set the connect
+attributes appropriately - see
+L<DBIx::Class::Manual::Cookbook/Using Unicode>.
+
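+For example, with MySQL this amounts to passing the appropriate driver
+flag at connect time (a minimal sketch; DSN and credentials are
+illustrative):
+
+  my $schema = My::Schema->connection(
+    'dbi:mysql:dbname=test', $user, $pass,
+    { mysql_enable_utf8 => 1 },
+  );
+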
=item .. connect to my database?
Once you have created all the appropriate table/source classes, and an
{ on_connect_do => \@on_connect_sql_statments }
);
-See L<DBIx::Class::Schema::Storage::DBI/connect_info> for more information about
+See L<DBIx::Class::Storage::DBI/connect_info> for more information about
this and other special C<connect>-time options.
=head3 Via a database handle
  # without having to construct the full hash
if (keys %collapse) {
- my %pri = map { ($_ => 1) } $self->result_source->primary_columns;
+ my %pri = map { ($_ => 1) } $self->result_source->_pri_cols;
foreach my $i (0 .. $#construct_as) {
next if defined($construct_as[$i][0]); # only self table
if (delete $pri{$construct_as[$i][1]}) {
my ($self, $values) = @_;
$self->throw_exception('Values for update_all must be a hash')
unless ref $values eq 'HASH';
- foreach my $obj ($self->all) {
- $obj->set_columns($values)->update;
- }
+
+ my $guard = $self->result_source->schema->txn_scope_guard;
+ $_->update($values) for $self->all;
+ $guard->commit;
return 1;
}
$self->throw_exception('delete_all does not accept any arguments')
if @_;
+ my $guard = $self->result_source->schema->txn_scope_guard;
$_->delete for $self->all;
+ $guard->commit;
return 1;
}
L<DBIx::Class::Row> objects. You can change the name of the accessor
by supplying an L</accessor> in the column_info hash.
+If a column name beginning with a plus sign ('+col1') is provided, the
+attributes given will be merged with any existing attributes for the
+column, with the new attributes taking precedence when an attribute is
+defined in both places. Using this without a hashref
+(C<< $source->add_columns(qw/+col1 +col2/) >>) is legal, but useless --
+it does the same thing it would do without the plus.
+
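+For example, a minimal sketch of augmenting an already-declared column
+(the column name and attribute are illustrative):
+
+  $source->add_columns('+name' => { is_nullable => 1 });
+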
The contents of the column_info are not set in stone. The following
keys are currently recognised/used by DBIx::Class:
my @added;
my $columns = $self->_columns;
while (my $col = shift @cols) {
+ my $column_info = {};
+ if ($col =~ s/^\+//) {
+ $column_info = $self->column_info($col);
+ }
+
# If next entry is { ... } use that for the column info, if not
# use an empty hashref
- my $column_info = ref $cols[0] ? shift(@cols) : {};
+ if (ref $cols[0]) {
+ my $new_info = shift(@cols);
+ %$column_info = (%$column_info, %$new_info);
+ }
push(@added, $col) unless exists $columns->{$col};
$columns->{$col} = $column_info;
}
return @{shift->_primaries||[]};
}
+# a helper method that will automatically die with a descriptive message if
+# no pk is defined on the source in question. For internal use to save
+# on if @pks... boilerplate
sub _pri_cols {
my $self = shift;
my @pcols = $self->primary_columns
or $self->throw_exception (sprintf(
- 'Operation requires a primary key to be declared on %s via set_primary_key',
- ref $self,
+ "Operation requires a primary key to be declared on '%s' via set_primary_key",
+ $self->source_name,
));
return @pcols;
}
for my $rel (keys %$join) {
my $rel_info = $self->relationship_info($rel)
- or $self->throw_exception("No such relationship ${rel}");
+ or $self->throw_exception("No such relationship '$rel' on " . $self->source_name);
my $force_left = $parent_force_left;
$force_left ||= lc($rel_info->{attrs}{join_type}||'') eq 'left';
);
my $rel_info = $self->relationship_info($join)
- or $self->throw_exception("No such relationship ${join}");
+ or $self->throw_exception("No such relationship $join on " . $self->source_name);
my $rel_src = $self->related_source($join);
return [ { $as => $rel_src->from,
my $as = shift @{$p->{-join_aliases}};
my $rel_info = $self->relationship_info( $pre );
- $self->throw_exception( $self->name . " has no such relationship '$pre'" )
+ $self->throw_exception( $self->source_name . " has no such relationship '$pre'" )
unless $rel_info;
my $as_prefix = ($alias =~ /^.*?\.(.+)$/ ? $1.'.' : '');
my $rel_source = $self->related_source($pre);
}
#my @col = map { (/^self\.(.+)$/ ? ("${as_prefix}.$1") : ()); }
# values %{$rel_info->{cond}};
- $collapse->{".${as_prefix}${pre}"} = [ $rel_source->primary_columns ];
+ $collapse->{".${as_prefix}${pre}"} = [ $rel_source->_pri_cols ];
# action at a distance. prepending the '.' allows simpler code
# in ResultSet->_collapse_result
my @key = map { (/^foreign\.(.+)$/ ? ($1) : ()); }
sub related_source {
my ($self, $rel) = @_;
if( !$self->has_relationship( $rel ) ) {
- $self->throw_exception("No such relationship '$rel'");
+ $self->throw_exception("No such relationship '$rel' on " . $self->source_name);
}
return $self->schema->source($self->relationship_info($rel)->{source});
}
sub related_class {
my ($self, $rel) = @_;
if( !$self->has_relationship( $rel ) ) {
- $self->throw_exception("No such relationship '$rel'");
+ $self->throw_exception("No such relationship '$rel' on " . $self->source_name);
}
return $self->schema->class($self->relationship_info($rel)->{source});
}
my $source = $class->result_source_instance;
$source->add_columns(@cols);
foreach my $c (grep { !ref } @cols) {
+ # If this is an augment definition get the real colname.
+ $c =~ s/^\+//;
+
$class->register_column($c => $source->column_info($c));
}
}
$rollback_guard ||= $source->storage->txn_scope_guard
}
+ ## PK::Auto
+ my %auto_pri;
+ my $auto_idx = 0;
+ for ($self->primary_columns) {
+ if (
+ not defined $self->get_column($_)
+ ||
+ (ref($self->get_column($_)) eq 'SCALAR')
+ ) {
+ my $col_info = $source->column_info($_);
+ $auto_pri{$_} = $auto_idx++ unless $col_info->{auto_nextval}; # auto_nextval's are pre-fetched in the storage
+ }
+ }
+
MULTICREATE_DEBUG and do {
no warnings 'uninitialized';
warn "MC $self inserting (".join(', ', $self->get_columns).")\n";
};
- my $updated_cols = $source->storage->insert($source, { $self->get_columns });
+ my $updated_cols = $source->storage->insert(
+ $source,
+ { $self->get_columns },
+ (keys %auto_pri) && $source->storage->can_insert_returning
+ ? { returning => [ sort { $auto_pri{$a} <=> $auto_pri{$b} } keys %auto_pri ] }
+ : ()
+ ,
+ );
+
foreach my $col (keys %$updated_cols) {
$self->store_column($col, $updated_cols->{$col});
+ delete $auto_pri{$col};
}
- ## PK::Auto
- my @auto_pri = grep {
- (not defined $self->get_column($_))
- ||
- (ref($self->get_column($_)) eq 'SCALAR')
- } $self->primary_columns;
-
- if (@auto_pri) {
- MULTICREATE_DEBUG and warn "MC $self fetching missing PKs ".join(', ', @auto_pri)."\n";
+ if (keys %auto_pri) {
+ my @missing = sort { $auto_pri{$a} <=> $auto_pri{$b} } keys %auto_pri;
+ MULTICREATE_DEBUG and warn "MC $self fetching missing PKs ".join(', ', @missing )."\n";
my $storage = $self->result_source->storage;
$self->throw_exception( "Missing primary key but Storage doesn't support last_insert_id" )
unless $storage->can('last_insert_id');
- my @ids = $storage->last_insert_id($self->result_source,@auto_pri);
+ my @ids = $storage->last_insert_id($self->result_source, @missing);
$self->throw_exception( "Can't get last insert id" )
- unless (@ids == @auto_pri);
- $self->store_column($auto_pri[$_] => $ids[$_]) for 0 .. $#ids;
+ unless (@ids == @missing);
+ $self->store_column($missing[$_] => $ids[$_]) for 0 .. $#missing;
}
$self->{_dirty_columns} = {};
L<significance of primary keys|DBIx::Class::Manual::Intro/The Significance and Importance of Primary Keys>
for more details).
-Also takes an optional hashref of C<< column_name => value> >> pairs
+Also takes an optional hashref of C<< column_name => value >> pairs
to update on the object first. Be aware that the hashref will be
passed to C<set_inflated_columns>, which might edit it in place, so
don't rely on it being the same after a call to C<update>. If you
if (! $_[0] or (ref $_[0] eq 'HASH' and !keys %{$_[0]} ) ) {
my $sql = "INSERT INTO ${table} DEFAULT VALUES";
- if (my @returning = @{ ($_[1]||{})->{returning} || [] }) {
- $sql .= ' RETURNING (' . (join ', ' => map $self->_quote($_), @returning)
- . ')';
+ if (my $ret = ($_[1]||{})->{returning} ) {
+ $sql .= $self->_insert_returning ($ret);
}
return $sql;
}
}
+sub _generate_join_clause {
+ my ($self, $join_type) = @_;
+
+ return sprintf ('%s JOIN ',
+ $join_type ? ' ' . uc($join_type) : ''
+ );
+}
+
sub _recurse_from {
my ($self, $from, @join) = @_;
my @sqlf;
$join_type = $self->{_default_jointype} if not defined $join_type;
- my $join_clause = sprintf ('%s JOIN ',
- $join_type ? ' ' . uc($join_type) : ''
- );
- push @sqlf, $join_clause;
+ push @sqlf, $self->_generate_join_clause( $join_type );
if (ref $to eq 'ARRAY') {
push(@sqlf, '(', $self->_recurse_from(@$to), ')');
return $self->SUPER::insert (@_);
}
+# Allow STRAIGHT_JOINs
+sub _generate_join_clause {
+ my ($self, $join_type) = @_;
+
+ if( $join_type && $join_type =~ /^STRAIGHT\z/i ) {
+ return ' STRAIGHT_JOIN '
+ }
+
+ return $self->SUPER::_generate_join_clause( $join_type );
+}
1;
__PACKAGE__->mk_group_accessors('simple' =>
qw/_connect_info _dbi_connect_info _dbh _sql_maker _sql_maker_opts _conn_pid
- _conn_tid transaction_depth _dbh_autocommit _driver_determined savepoints/
+ _conn_tid transaction_depth _dbh_autocommit _driver_determined savepoints
+ __server_info/
);
# the values for these accessors are picked out (and deleted) from
# default cursor class, overridable in connect_info attributes
__PACKAGE__->cursor_class('DBIx::Class::Storage::DBI::Cursor');
-__PACKAGE__->mk_group_accessors('inherited' => qw/sql_maker_class/);
+__PACKAGE__->mk_group_accessors('inherited' => qw/
+ sql_maker_class
+ can_insert_returning
+/);
__PACKAGE__->sql_maker_class('DBIx::Class::SQLAHacks');
);
$schema->resultset('Book')->search({
- written_on => $schema->storage->datetime_parser(DateTime->now)
+ written_on => $schema->storage->datetime_parser->format_datetime(DateTime->now)
});
=head1 DESCRIPTION
$self->{transaction_depth} = $self->_dbh_autocommit ? 0 : 1;
$self->_run_connection_actions unless $self->{_in_determine_driver};
+
+ $self->_populate_server_info;
}
sub _run_connection_actions {
$self->_do_connection_actions(connect_call_ => $_) for @actions;
}
+sub _populate_server_info {
+ my $self = shift;
+ my %info;
+
+ my $dbms_ver = eval {
+ local $@;
+ $self->_get_dbh->get_info(18)
+ };
+
+ if (defined $dbms_ver) {
+ $info{dbms_ver} = $dbms_ver;
+
+ ($dbms_ver) = $dbms_ver =~ /^(\S+)/;
+
+ my @verparts = split /\./, $dbms_ver;
+ $info{dbms_ver_normalized} = sprintf "%d.%03d%03d", @verparts;
+ }
+
+ $self->__server_info(\%info);
+
+ return \%info;
+}
+
+sub _server_info {
+ my $self = shift;
+
+ $self->_get_dbh;
+
+ return $self->__server_info(@_);
+}
+
sub _determine_driver {
my ($self) = @_;
$self->dbh_do('_dbh_execute', @_); # retry over disconnects
}
-sub insert {
+sub _prefetch_insert_auto_nextvals {
my ($self, $source, $to_insert) = @_;
- my $ident = $source->from;
- my $bind_attributes = $self->source_bind_attributes($source);
-
- my $updated_cols = {};
+ my $upd = {};
foreach my $col ( $source->columns ) {
if ( !defined $to_insert->{$col} ) {
my $col_info = $source->column_info($col);
if ( $col_info->{auto_nextval} ) {
- $updated_cols->{$col} = $to_insert->{$col} = $self->_sequence_fetch(
+ $upd->{$col} = $to_insert->{$col} = $self->_sequence_fetch(
'nextval',
$col_info->{sequence} ||=
$self->_dbh_get_autoinc_seq($self->_get_dbh, $source, $col)
}
}
- $self->_execute('insert' => [], $source, $bind_attributes, $to_insert);
+ return $upd;
+}
+
+sub insert {
+ my $self = shift;
+ my ($source, $to_insert, $opts) = @_;
+
+ my $updated_cols = $self->_prefetch_insert_auto_nextvals (@_);
+
+ my $bind_attributes = $self->source_bind_attributes($source);
+
+ my ($rv, $sth) = $self->_execute('insert' => [], $source, $bind_attributes, $to_insert, $opts);
+
+ if ($opts->{returning}) {
+ my @ret_cols = @{$opts->{returning}};
+
+ my @ret_vals = eval {
+ local $SIG{__WARN__} = sub {};
+ my @r = $sth->fetchrow_array;
+ $sth->finish;
+ @r;
+ };
+
+ my %ret;
+ @ret{@ret_cols} = @ret_vals if (@ret_vals);
+
+ $updated_cols = {
+ %$updated_cols,
+ %ret,
+ };
+ }
return $updated_cols;
}
CAST(? as $mapped_type)
-This option can also be enabled in L<DBIx::Class::Storage::DBI/connect_info> as:
+This option can also be enabled in
+L<connect_info|DBIx::Class::Storage::DBI/connect_info> as:
on_connect_call => ['set_auto_cast']
on_connect_call => ['set_auto_cast']
-in L<DBIx::Class::Storage::DBI/connect_info>.
+in L<connect_info|DBIx::Class::Storage::DBI/connect_info>.
=cut
use warnings;
use base qw/DBIx::Class::Storage::DBI/;
-
use mro 'c3';
+use Scope::Guard ();
+use Context::Preserve ();
+
__PACKAGE__->mk_group_accessors('simple' => '__last_insert_id');
+=head1 NAME
+
+DBIx::Class::Storage::DBI::Informix - Base Storage Class for Informix Support
+
+=head1 DESCRIPTION
+
+This class implements storage-specific support for the Informix RDBMS.
+
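+A minimal connection sketch (the DSN, database name and credentials are
+illustrative):
+
+  my $schema = My::Schema->connect(
+    'dbi:Informix:mydb@myserver', $user, $pass,
+    { on_connect_call => 'datetime_setup' },
+  );
+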
+=head1 METHODS
+
+=cut
+
sub _execute {
my $self = shift;
my ($op) = @_;
return { limit_dialect => 'SkipFirst', %{$self->{_sql_maker_opts}||{}} };
}
-1;
+sub _svp_begin {
+ my ($self, $name) = @_;
-__END__
+ $self->_get_dbh->do("SAVEPOINT $name");
+}
-=head1 NAME
+# can't release savepoints
+sub _svp_release { 1 }
-DBIx::Class::Storage::DBI::Informix - Base Storage Class for INFORMIX Support
+sub _svp_rollback {
+ my ($self, $name) = @_;
-=head1 SYNOPSIS
+ $self->_get_dbh->do("ROLLBACK TO SAVEPOINT $name")
+}
-=head1 DESCRIPTION
+sub with_deferred_fk_checks {
+ my ($self, $sub) = @_;
+
+ my $txn_scope_guard = $self->txn_scope_guard;
+
+ $self->_do_query('SET CONSTRAINTS ALL DEFERRED');
+
+ my $sg = Scope::Guard->new(sub {
+ $self->_do_query('SET CONSTRAINTS ALL IMMEDIATE');
+ });
+
+ return Context::Preserve::preserve_context(sub { $sub->() },
+ after => sub { $txn_scope_guard->commit });
+}
+
+=head2 connect_call_datetime_setup
+
+Used as:
+
+ on_connect_call => 'datetime_setup'
+
+In L<connect_info|DBIx::Class::Storage::DBI/connect_info> to set the C<DATE> and
+C<DATETIME> formats.
+
+Sets the following environment variables:
+
+ GL_DATE="%m/%d/%Y"
+ GL_DATETIME="%Y-%m-%d %H:%M:%S%F5"
+
+The C<DBDATE> and C<DBCENTURY> environment variables are cleared.
+
+B<NOTE:> setting the C<GL_DATE> environment variable seems to have no effect
+after the process has started, so the default format is used. The C<GL_DATETIME>
+setting does take effect, however.
-This class implements storage-specific support for Informix
+The C<DATETIME> data type supports up to 5 digits after the decimal point for
+second precision, depending on how you have declared your column. The full
+possible precision is used.
+
+The column declaration for a C<DATETIME> with maximum precision is:
+
+ column_name DATETIME YEAR TO FRACTION(5)
+
+The C<DATE> data type stores the date portion only, and it B<MUST> be declared
+with:
+
+ data_type => 'date'
+
+in your Result class.
+
+You will need the L<DateTime::Format::Strptime> module for inflation to work.
+
+=cut
+
+sub connect_call_datetime_setup {
+ my $self = shift;
+
+ delete @ENV{qw/DBDATE DBCENTURY/};
+
+ $ENV{GL_DATE} = "%m/%d/%Y";
+ $ENV{GL_DATETIME} = "%Y-%m-%d %H:%M:%S%F5";
+}
+
+sub datetime_parser_type {
+ 'DBIx::Class::Storage::DBI::Informix::DateTime::Format'
+}
+
+package # hide from PAUSE
+ DBIx::Class::Storage::DBI::Informix::DateTime::Format;
+
+my $timestamp_format = '%Y-%m-%d %H:%M:%S.%5N'; # %F %T
+my $date_format = '%m/%d/%Y';
+
+my ($timestamp_parser, $date_parser);
+
+sub parse_datetime {
+ shift;
+ require DateTime::Format::Strptime;
+ $timestamp_parser ||= DateTime::Format::Strptime->new(
+ pattern => $timestamp_format,
+ on_error => 'croak',
+ );
+ return $timestamp_parser->parse_datetime(shift);
+}
+
+sub format_datetime {
+ shift;
+ require DateTime::Format::Strptime;
+ $timestamp_parser ||= DateTime::Format::Strptime->new(
+ pattern => $timestamp_format,
+ on_error => 'croak',
+ );
+ return $timestamp_parser->format_datetime(shift);
+}
+
+sub parse_date {
+ shift;
+ require DateTime::Format::Strptime;
+ $date_parser ||= DateTime::Format::Strptime->new(
+ pattern => $date_format,
+ on_error => 'croak',
+ );
+ return $date_parser->parse_datetime(shift);
+}
+
+sub format_date {
+ shift;
+ require DateTime::Format::Strptime;
+ $date_parser ||= DateTime::Format::Strptime->new(
+ pattern => $date_format,
+ on_error => 'croak',
+ );
+ return $date_parser->format_datetime(shift);
+}
+
+1;
-=head1 AUTHORS
+=head1 AUTHOR
-See L<DBIx::Class/CONTRIBUTORS>
+See L<DBIx::Class/AUTHOR> and L<DBIx::Class/CONTRIBUTORS>.
=head1 LICENSE
package DBIx::Class::Storage::DBI::InterBase;
-# partly stolen from DBIx::Class::Storage::DBI::MSSQL
-
use strict;
use warnings;
use base qw/DBIx::Class::Storage::DBI/;
use mro 'c3';
use List::Util();
-__PACKAGE__->mk_group_accessors(simple => qw/
- _auto_incs
-/);
-
=head1 NAME
DBIx::Class::Storage::DBI::InterBase - Driver for the Firebird RDBMS
L<disable_sth_caching|DBIx::Class::Storage::DBI/disable_sth_caching> option or
L</connect_call_use_softcommit> (see L</CAVEATS>) for your code to function
correctly with this driver. Otherwise you will likely get bizarre error messages
-such as C<no statement executing>.
-
-For ODBC support, see L<DBIx::Class::Storage::DBI::ODBC::Firebird>.
+such as C<no statement executing>. The alternative is to use the
+L<ODBC|DBIx::Class::Storage::DBI::ODBC::Firebird> driver, which is more suitable
+for long running processes such as under L<Catalyst>.
To turn on L<DBIx::Class::InflateColumn::DateTime> support, see
L</connect_call_datetime_setup>.
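
For example, a minimal connection sketch enabling soft commits (the DSN
and credentials are illustrative):

  my $schema = My::Schema->connect(
    'dbi:InterBase:dbname=/path/to/my.fdb', $user, $pass,
    { on_connect_call => 'use_softcommit' },
  );
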
=cut
-sub _prep_for_execute {
- my $self = shift;
- my ($op, $extra_bind, $ident, $args) = @_;
-
- if ($op eq 'insert') {
- $self->_auto_incs([]);
-
- my %pk;
- @pk{$ident->primary_columns} = ();
-
- my @auto_inc_cols = grep {
- my $inserting = $args->[0]{$_};
-
- ($ident->column_info($_)->{is_auto_increment}
- || exists $pk{$_})
- && (
- (not defined $inserting)
- ||
- (ref $inserting eq 'SCALAR' && $$inserting =~ /^null\z/i)
- )
- } $ident->columns;
-
- if (@auto_inc_cols) {
- $args->[1]{returning} = \@auto_inc_cols;
-
- $self->_auto_incs->[0] = \@auto_inc_cols;
- }
- }
-
- return $self->next::method(@_);
-}
-
-sub _execute {
- my $self = shift;
- my ($op) = @_;
-
- my ($rv, $sth, @bind) = $self->dbh_do($self->can('_dbh_execute'), @_);
-
- if ($op eq 'insert' && $self->_auto_incs) {
- local $@;
- my (@auto_incs) = eval {
- local $SIG{__WARN__} = sub {};
- $sth->fetchrow_array
- };
- $self->_auto_incs->[1] = \@auto_incs;
- $sth->finish;
- }
-
- return wantarray ? ($rv, $sth, @bind) : $rv;
-}
+sub can_insert_returning { 1 }
sub _sequence_fetch {
my ($self, $nextval, $sequence) = @_;
return undef;
}
-sub last_insert_id {
- my ($self, $source, @cols) = @_;
- my @result;
-
- my %auto_incs;
- @auto_incs{ @{ $self->_auto_incs->[0] } } =
- @{ $self->_auto_incs->[1] };
-
- push @result, $auto_incs{$_} for @cols;
-
- return @result;
-}
-
-sub insert {
- my $self = shift;
-
- my $updated_cols = $self->next::method(@_);
-
- if ($self->_auto_incs->[0]) {
- my %auto_incs;
- @auto_incs{ @{ $self->_auto_incs->[0] } } = @{ $self->_auto_incs->[1] };
-
- $updated_cols = { %$updated_cols, %auto_incs };
- }
-
- return $updated_cols;
-}
-
# this sub stolen from DB2
sub _sql_maker_opts {
my $dbh = $self->_dbh or return 0;
local $dbh->{RaiseError} = 1;
+ local $dbh->{PrintError} = 0;
eval {
$dbh->do('select 1 from rdb$database');
}
}
+sub _populate_server_info {
+ my $self = shift;
+
+ return $self->next::method(@_) if ref $self ne __PACKAGE__;
+
+ local $SIG{__WARN__} = sub {}; # silence warning due to bug in DBD::InterBase
+
+ return $self->next::method(@_);
+}
+
=head2 connect_call_use_softcommit
Used as:
You need either this option or C<< disable_sth_caching => 1 >> for
L<DBIx::Class> code to function correctly (otherwise you may get C<no statement
-executing> errors.)
+executing> errors.) Alternatively, use the
+L<ODBC|DBIx::Class::Storage::DBI::ODBC::Firebird> driver.
The downside of using this option is that your process will B<NOT> see UPDATEs,
INSERTs and DELETEs from other processes for already open statements.
workaround for the C<no statement executing> errors, this of course adversely
affects performance.
+Alternately, use the L<ODBC|DBIx::Class::Storage::DBI::ODBC::Firebird> driver.
+
=item *
C<last_insert_id> support by default only works for Firebird versions 2 or
=item *
-Sub-second precision for TIMESTAMPs is not currently available with ODBC.
+Sub-second precision for TIMESTAMPs is not currently available when using the
+L<ODBC|DBIx::Class::Storage::DBI::ODBC::Firebird> driver.
=back
sub sqlt_type { 'SQLServer' }
-sub _get_mssql_version {
- my $self = shift;
-
- my $data = $self->_get_dbh->selectrow_hashref('xp_msver ProductVersion');
-
- if ($data->{Character_Value} =~ /^(\d+)\./) {
- return $1;
- } else {
- $self->throw_exception(q{Your ProductVersion's Character_Value is missing or malformed!});
- }
-}
-
sub sql_maker {
my $self = shift;
unless ($self->_sql_maker) {
unless ($self->{_sql_maker_opts}{limit_dialect}) {
- my $version = eval { $self->_get_mssql_version; } || 0;
+
+ my ($version) = $self->_server_info->{dbms_ver} =~ /^(\d+)/;
+ $version ||= 0;
$self->{_sql_maker_opts} = {
limit_dialect => ($version >= 9 ? 'RowNumberOver' : 'Top'),
return $self->_sql_maker;
}
+sub _ping {
+ my $self = shift;
+
+ my $dbh = $self->_dbh or return 0;
+
+ local $dbh->{RaiseError} = 1;
+ local $dbh->{PrintError} = 0;
+
+ eval {
+ $dbh->do('select 1');
+ };
+
+ return $@ ? 0 : 1;
+}
+
1;
=head1 NAME
DBIx::Class::Storage::DBI::ODBC::Firebird - Driver for using the Firebird RDBMS
through ODBC
-=head1 SYNOPSIS
+=head1 DESCRIPTION
Most functionality is provided by L<DBIx::Class::Storage::DBI::InterBase>; see
that module for details.
L<http://www.firebirdnews.org/?p=1324>
+This driver does not suffer from the nested statement handles across commits
+issue that the L<DBD::InterBase|DBIx::Class::Storage::DBI::InterBase> based
+driver does. This makes it more suitable for long running processes such as
+under L<Catalyst>.
+
=cut
# XXX seemingly no equivalent to ib_time_all from DBD::InterBase via ODBC
on_connect_call => 'use_dynamic_cursors'
-in your L<DBIx::Class::Storage::DBI/connect_info> as one way to enable multiple
+in your L<connect_info|DBIx::Class::Storage::DBI/connect_info> as one way to enable multiple
concurrent statements.
Will add C<< odbc_cursortype => 2 >> to your DBI connection attributes. See
}
}
-sub _get_mssql_version {
- my $self = shift;
-
- my ($version) = $self->_get_dbh->get_info(18) =~ /^(\d+)/;
-
- return $version;
-}
-
1;
=head1 AUTHOR
my $dbh = $self->_dbh or return 0;
local $dbh->{RaiseError} = 1;
+ local $dbh->{PrintError} = 0;
eval {
- $dbh->do("select 1 from dual");
+ $dbh->do('select 1 from dual');
};
return $@ ? 0 : 1;
on_connect_call => 'datetime_setup'
-In L<DBIx::Class::Storage::DBI/connect_info> to set the session nls date, and
-timestamp values for use with L<DBIx::Class::InflateColumn::DateTime> and the
-necessary environment variables for L<DateTime::Format::Oracle>, which is used
-by it.
+In L<connect_info|DBIx::Class::Storage::DBI/connect_info> to set the session nls
+date, and timestamp values for use with L<DBIx::Class::InflateColumn::DateTime>
+and the necessary environment variables for L<DateTime::Format::Oracle>, which
+is used by it.
Maximum allowable precision is used, unless the environment variables have
already been set.
use strict;
use warnings;
-use base qw/DBIx::Class::Storage::DBI::MultiColumnIn/;
+use base qw/
+ DBIx::Class::Storage::DBI::MultiColumnIn
+/;
use mro 'c3';
use DBD::Pg qw(:pg_types);
+use Scope::Guard ();
+use Context::Preserve ();
# Ask for a DBD::Pg with array support
warn __PACKAGE__.": DBD::Pg 2.9.2 or greater is strongly recommended\n"
if ($DBD::Pg::VERSION < 2.009002); # pg uses (used?) version::qv()
+sub can_insert_returning {
+ my $self = shift;
+
+ return 1
+ if $self->_server_info->{dbms_ver_normalized} >= 8.002;
+
+ return 0;
+}
+
sub with_deferred_fk_checks {
my ($self, $sub) = @_;
- $self->_get_dbh->do('SET CONSTRAINTS ALL DEFERRED');
- $sub->();
+ my $txn_scope_guard = $self->txn_scope_guard;
+
+ $self->_do_query('SET CONSTRAINTS ALL DEFERRED');
+
+ my $sg = Scope::Guard->new(sub {
+ $self->_do_query('SET CONSTRAINTS ALL IMMEDIATE');
+ });
+
+ return Context::Preserve::preserve_context(sub { $sub->() },
+ after => sub { $txn_scope_guard->commit });
}
+# only used when INSERT ... RETURNING is disabled
sub last_insert_id {
my ($self,$source,@cols) = @_;
$col,
));
- push @values, $self->_dbh_last_insert_id ($self->_dbh, $seq);
+ push @values, $self->_dbh->last_insert_id(undef, undef, undef, undef, {sequence => $seq});
}
return @values;
}
-# there seems to be absolutely no reason to have this as a separate method,
-# but leaving intact in case someone is already overriding it
-sub _dbh_last_insert_id {
- my ($self, $dbh, $seq) = @_;
- $dbh->last_insert_id(undef, undef, undef, undef, {sequence => $seq});
-}
+sub _sequence_fetch {
+ my ($self, $function, $sequence) = @_;
+
+ $self->throw_exception('No sequence to fetch') unless $sequence;
+
+ my ($val) = $self->_get_dbh->selectrow_array(
+ sprintf ("select %s('%s')", $function, $sequence)
+ );
+ return $val;
+}
sub _dbh_get_autoinc_seq {
my ($self, $dbh, $source, $col) = @_;
}
}
-sub _sequence_fetch {
- my ( $self, $type, $seq ) = @_;
- my ($id) = $self->_get_dbh->selectrow_array("SELECT nextval('${seq}')");
- return $id;
-}
-
sub _svp_begin {
my ($self, $name) = @_;
backup
is_datatype_numeric
+ can_insert_returning
_count_select
_subq_count_select
_subq_update_delete
_do_query
_dbh_sth
_dbh_execute
+ _prefetch_insert_auto_nextvals
/],
);
$sqltargs ||= {};
- my $sqlite_version = $self->_get_dbh->{sqlite_version};
+ my $sqlite_version = eval { $self->_server_info->{dbms_ver} };
+ $sqlite_version ||= '';
# numify, SQLT does a numeric comparison
$sqlite_version =~ s/^(\d+) \. (\d+) (?: \. (\d+))? .*/${1}.${2}/x;
- $sqltargs->{producer_args}{sqlite_version} = $sqlite_version;
+ $sqltargs->{producer_args}{sqlite_version} = $sqlite_version if $sqlite_version;
$self->next::method($schema, $type, $version, $dir, $sqltargs, @rest);
}
sub datetime_parser_type { return "DateTime::Format::SQLite"; }
+=head2 connect_call_use_foreign_keys
+
+Used as:
+
+ on_connect_call => 'use_foreign_keys'
+
+In L<connect_info|DBIx::Class::Storage::DBI/connect_info> to turn on foreign key
+(including cascading) support for recent versions of SQLite and L<DBD::SQLite>.
+
+Executes:
+
+ PRAGMA foreign_keys = ON
+
+See L<http://www.sqlite.org/foreignkeys.html> for more information.
+
+=cut
+
+sub connect_call_use_foreign_keys {
+ my $self = shift;
+
+ $self->_do_query(
+ 'PRAGMA foreign_keys = ON'
+ );
+}
+
1;
=head1 NAME
$dbh->do("SET TEXTSIZE $bytes");
Takes the number of bytes, or uses the C<LongReadLen> value from your
-L<DBIx::Class/connect_info> if omitted, lastly falls back to the C<32768> which
-is the L<DBD::Sybase> default.
+L<connect_info|DBIx::Class::Storage::DBI/connect_info> if omitted, lastly falls
+back to the C<32768> which is the L<DBD::Sybase> default.
=cut
without doing a C<SELECT MAX(col)>. This is done safely in a transaction
(locking the table.) See L</INSERTS WITH PLACEHOLDERS>.
-A recommended L<DBIx::Class::Storage::DBI/connect_info> setting:
+A recommended L<connect_info|DBIx::Class::Storage::DBI/connect_info> setting:
on_connect_call => [['datetime_setup'], ['blob_setup', log_on_update => 0]]
on_connect_call => 'datetime_setup'
-In L<DBIx::Class::Storage::DBI/connect_info> to set:
+In L<connect_info|DBIx::Class::Storage::DBI/connect_info> to set:
$dbh->syb_date_fmt('ISO_strict'); # output fmt: 2004-08-21T14:36:48.080Z
$dbh->do('set dateformat mdy'); # input fmt: 08/13/1979 18:08:55.080
instead.
However, the C<LongReadLen> you pass in
-L<DBIx::Class::Storage::DBI/connect_info> is used to execute the equivalent
-C<SET TEXTSIZE> command on connection.
+L<connect_info|DBIx::Class::Storage::DBI/connect_info> is used to execute the
+equivalent C<SET TEXTSIZE> command on connection.
-See L</connect_call_blob_setup> for a L<DBIx::Class::Storage::DBI/connect_info>
-setting you need to work with C<IMAGE> columns.
+See L</connect_call_blob_setup> for a
+L<connect_info|DBIx::Class::Storage::DBI/connect_info> setting you need to work
+with C<IMAGE> columns.
=head1 BULK API
$dbh->do('ROLLBACK');
}
+sub _populate_server_info {
+ my $self = shift;
+
+ my $info = $self->next::method(@_);
+
+ my $product_version = $self->_get_dbh->selectrow_hashref('xp_msver ProductVersion');
+
+  if ((my $version = $product_version->{Character_Value}) =~ /^(\d+)\./) {
+ $info->{dbms_ver} = $version;
+ } else {
+ $self->throw_exception(q{
+MSSQL Version Retrieval Failed, Your ProductVersion's Character_Value is missing
+or malformed!
+ });
+ }
+
+ return $info;
+}
+
1;
=head1 NAME
=head1 DESCRIPTION
-This class implements MySQL specific bits of L<DBIx::Class::Storage::DBI>.
+This class implements MySQL-specific bits of L<DBIx::Class::Storage::DBI>,
+such as AutoIncrement column support and savepoints. It also augments the
+SQL maker to support the MySQL-specific C<STRAIGHT_JOIN> join type, which
+you can use by specifying C<< join_type => 'straight' >> in the
+L<relationship attributes|DBIx::Class::Relationship::Base/join_type>.
+
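+For example, a relationship declared with the straight join type (the
+class and column names are illustrative):
+
+  __PACKAGE__->belongs_to(
+    artist => 'MyApp::Schema::Result::Artist',
+    'artist_id',
+    { join_type => 'straight' },
+  );
+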
It also provides a one-stop on-connect macro C<set_strict_mode> which sets
session variables such that MySQL behaves more predictably as far as the
while (my $j = shift @$from) {
my $alias = $j->[0]{-alias};
- if ($outer_aliastypes->{select}{$alias}) {
+ if ($outer_aliastypes->{selecting}{$alias}) {
push @outer_from, $j;
}
- elsif ($outer_aliastypes->{restrict}{$alias}) {
+ elsif ($outer_aliastypes->{restricting}{$alias}) {
push @outer_from, $j;
$outer_attrs->{group_by} ||= $outer_select unless $j->[0]{-is_single};
}
return (\@outer_from, $outer_select, $where, $outer_attrs);
}
+#
+# I KNOW THIS SUCKS! GET SQLA2 OUT THE DOOR SO THIS CAN DIE!
+#
# Due to a lack of SQLA2 we fall back to crude scans of all the
# select/where/order/group attributes, in order to determine what
# aliases are needed to fulfill the query. This information is used
# throughout the code to prune unnecessary JOINs from the queries
# in an attempt to reduce the execution time.
# Although the method is pretty horrific, the worst thing that can
-# happen is for it to fail due to an unqualified column, which in
-# turn will result in a vocal exception. Qualifying the column will
-# invariably solve the problem.
+# happen is for it to fail due to some scalar SQL, which in turn will
+# result in a vocal exception.
sub _resolve_aliastypes_from_select_args {
my ( $self, $from, $select, $where, $attrs ) = @_;
unless $j->{-is_single};
}
+ # get a column to source/alias map (including unqualified ones)
+ my $colinfo = $self->_resolve_column_info ($from);
+
# set up a botched SQLA
my $sql_maker = $self->sql_maker;
my $sep = quotemeta ($self->_sql_maker_opts->{name_sep} || '.');
- local $sql_maker->{quote_char}; # so that we can regex away
-
- my $select_sql = $sql_maker->_recurse_fields ($select);
- my $where_sql = $sql_maker->where ($where);
- my $group_by_sql = $sql_maker->_order_by({
- map { $_ => $attrs->{$_} } qw/group_by having/
+ my ($orig_lquote, $orig_rquote) = map { quotemeta $_ } (do {
+ if (ref $sql_maker->{quote_char} eq 'ARRAY') {
+ @{$sql_maker->{quote_char}}
+ }
+ else {
+ ($sql_maker->{quote_char} || '') x 2;
+ }
});
- my @order_by_chunks = ($self->_parse_order_by ($attrs->{order_by}) );
- # match every alias to the sql chunks above
+ local $sql_maker->{quote_char} = "\x00"; # so that we can regex away
+
+ # generate sql chunks
+ my $to_scan = {
+ restricting => [
+ $sql_maker->_recurse_where ($where),
+ $sql_maker->_order_by({
+ map { $_ => $attrs->{$_} } (qw/group_by having/)
+ }),
+ ],
+ selecting => [
+ $self->_parse_order_by ($attrs->{order_by}, $sql_maker),
+ $sql_maker->_recurse_fields ($select),
+ ],
+ };
+
+ # throw away empty chunks
+ $_ = [ map { $_ || () } @$_ ] for values %$to_scan;
+
+ # first loop through all fully qualified columns and get the corresponding
+ # alias (should work even if they are in scalarrefs)
for my $alias (keys %$alias_list) {
- my $al_re = qr/\b $alias $sep/x;
+ my $al_re = qr/
+ \x00 $alias \x00 $sep
+ |
+ \b $alias $sep
+ /x;
+
+ # add matching for possible quoted literal sql
+ $al_re = qr/ $al_re | $orig_lquote $alias $orig_rquote /x
+ if ($orig_lquote && $orig_rquote);
- for my $piece ($where_sql, $group_by_sql) {
- $aliases_by_type->{restrict}{$alias} = 1 if ($piece =~ $al_re);
+
+ for my $type (keys %$to_scan) {
+ for my $piece (@{$to_scan->{$type}}) {
+ $aliases_by_type->{$type}{$alias} = 1 if ($piece =~ $al_re);
+ }
}
+ }
- for my $piece ($select_sql, @order_by_chunks ) {
- $aliases_by_type->{select}{$alias} = 1 if ($piece =~ $al_re);
+ # now loop through unqualified column names, and try to locate them within
+ # the chunks
+ for my $col (keys %$colinfo) {
+ next if $col =~ $sep; # if column is qualified it was caught by the above
+
+ my $col_re = qr/ \x00 $col \x00 /x;
+
+ $col_re = qr/ $col_re | $orig_lquote $col $orig_rquote /x
+ if ($orig_lquote && $orig_rquote);
+
+ for my $type (keys %$to_scan) {
+ for my $piece (@{$to_scan->{$type}}) {
+ $aliases_by_type->{$type}{$colinfo->{$col}{-source_alias}} = 1 if ($piece =~ $col_re);
+ }
}
}
# Add any non-left joins to the restriction list (such joins are indeed restrictions)
for my $j (values %$alias_list) {
my $alias = $j->{-alias} or next;
- $aliases_by_type->{restrict}{$alias} = 1 if (
+ $aliases_by_type->{restricting}{$alias} = 1 if (
(not $j->{-join_type})
or
($j->{-join_type} !~ /^left (?: \s+ outer)? $/xi)
for my $type (keys %$aliases_by_type) {
for my $alias (keys %{$aliases_by_type->{$type}}) {
$aliases_by_type->{$type}{$_} = 1
- for (map { keys %$_ } @{ $alias_list->{$alias}{-join_path} || [] });
+ for (map { values %$_ } @{ $alias_list->{$alias}{-join_path} || [] });
}
}
# anyway, and deep cloning is just too fucking expensive
# So replace the first hashref in the node arrayref manually
my @new_from = ($from->[0]);
- my $sw_idx = { map { values %$_ => 1 } @$switch_branch };
+ my $sw_idx = { map { (values %$_), 1 } @$switch_branch }; #there's one k/v per join-path
for my $j (@{$from}[1 .. $#$from]) {
my $jalias = $j->[0]{-alias};
}
sub _parse_order_by {
- my ($self, $order_by) = @_;
+ my ($self, $order_by, $sql_maker) = @_;
- return scalar $self->sql_maker->_order_by_chunks ($order_by)
- unless wantarray;
+ my $parser = sub {
+ my ($sql_maker, $order_by) = @_;
- my $sql_maker = $self->sql_maker;
- local $sql_maker->{quote_char}; #disable quoting
- my @chunks;
- for my $chunk (map { ref $_ ? @$_ : $_ } ($sql_maker->_order_by_chunks ($order_by) ) ) {
- $chunk =~ s/\s+ (?: ASC|DESC ) \s* $//ix;
- push @chunks, $chunk;
- }
+ return scalar $sql_maker->_order_by_chunks ($order_by)
+ unless wantarray;
- return @chunks;
+ my @chunks;
+ for my $chunk (map { ref $_ ? @$_ : $_ } ($sql_maker->_order_by_chunks ($order_by) ) ) {
+ $chunk =~ s/\s+ (?: ASC|DESC ) \s* $//ix;
+ push @chunks, $chunk;
+ }
+
+ return @chunks;
+ };
+
+ if ($sql_maker) {
+ return $parser->($sql_maker, $order_by);
+ }
+ else {
+ $sql_maker = $self->sql_maker;
+ local $sql_maker->{quote_char};
+ return $parser->($sql_maker, $order_by);
+ }
}
1;
DBIx::Class::UTF8Columns - Force UTF8 (Unicode) flag on columns
+ Please ensure you understand the purpose of this module before use.
+ Read the warnings below to prevent data corruption through misuse.
+
=head1 SYNOPSIS
package Artist;
=head1 DESCRIPTION
-This module allows you to get columns data that have utf8 (Unicode) flag.
+This module allows you to get and store utf8 (unicode) column data
+in a database that does not natively support unicode. It ensures
+that column data is correctly serialised as a byte stream when
+stored and de-serialised to unicode strings on retrieval.
+
+=head2 Warning - Native Database Unicode Support
+
+If your database natively supports Unicode (as does SQLite with the
+C<sqlite_unicode> connect flag, MySQL with the C<mysql_enable_utf8>
+connect flag, or Postgres with the C<pg_enable_utf8> connect flag),
+then this component should B<not> be used, and will corrupt unicode
+data in a subtle and unexpected manner.
+
+It is far better to do Unicode support within the database if
+possible, rather than converting data into and out of the database on
+every round trip.
-=head2 Warning
+=head2 Warning - Component Overloading
Note that this module overloads L<DBIx::Class::Row/store_column> in a way
that may prevent other components overloading the same method from working
--- /dev/null
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use Benchmark qw/cmpthese/;
+use FindBin;
+use lib "$FindBin::Bin/../t/lib";
+use lib "$FindBin::Bin/../lib";
+use DBICTest::Schema;
+use DBIx::Class::ResultClass::HashRefInflator; # older dbic didn't load it
+
+printf "Benchmarking DBIC version %s\n", DBIx::Class->VERSION;
+
+my $schema = DBICTest::Schema->connect ('dbi:SQLite::memory:');
+$schema->deploy;
+
+my $rs = $schema->resultset ('Artist');
+$rs->populate ([ map { { name => "Art_$_"} } (1 .. 10000) ]);
+
+my $dbh = $schema->storage->dbh;
+my $sql = sprintf ('SELECT %s FROM %s %s',
+ join (',', @{$rs->_resolved_attrs->{select}} ),
+ $rs->result_source->name,
+ $rs->_resolved_attrs->{alias},
+);
+
+my $compdbi = sub {
+ my @r = $schema->storage->dbh->selectall_arrayref ('SELECT * FROM ' . ${$rs->as_query}->[0] )
+} if $rs->can ('as_query');
+
+cmpthese(-3, {
+ Cursor => sub { $rs->reset; my @r = $rs->cursor->all },
+ HRI => sub { $rs->reset; my @r = $rs->search ({}, { result_class => 'DBIx::Class::ResultClass::HashRefInflator' } )->all },
+ RowObj => sub { $rs->reset; my @r = $rs->all },
+ RawDBI => sub { my @r = $dbh->selectall_arrayref ($sql) },
+ $compdbi ? (CompDBI => $compdbi) : (),
+});
my $s = CPANDB::Schema->connect (sub { CPANDB->dbh } );
# reference names are unstable - just create rels manually
-# is there a saner way to do that?
-my $distclass = $s->class('Distribution');
-$distclass->has_many (
+my $distrsrc = $s->source('Distribution');
+
+# the has_many helper is a class-only method (why?), thus
+# manual add_rel
+$distrsrc->add_relationship (
'deps',
$s->class('Dependency'),
- 'distribution',
+ { 'foreign.distribution' => 'self.' . ($distrsrc->primary_columns)[0] },
+ { accessor => 'multi', join_type => 'left' },
);
-$s->unregister_source ('Distribution');
-$s->register_class ('Distribution', $distclass);
+
+# here is how one could use the helper currently:
+#
+#my $distresult = $s->class('Distribution');
+#$distresult->has_many (
+# 'deps',
+# $s->class('Dependency'),
+# 'distribution',
+#);
+#$s->unregister_source ('Distribution');
+#$s->register_class ('Distribution', $distresult);
# a proof of concept how to find out who uses us *AND* SQLT
use strict;
use warnings;
use Test::More;
+use Test::Exception;
+use Test::Warn;
use lib qw(t/lib);
use DBICTest; # do not remove even though it is not used
-plan tests => 7;
-
-my $warnings;
-eval {
- local $SIG{__WARN__} = sub { $warnings .= shift };
- package DBICNSTestOther;
- use base qw/DBIx::Class::Schema/;
- __PACKAGE__->load_namespaces(
- result_namespace => [ '+DBICNSTest::Rslt', '+DBICNSTest::OtherRslt' ],
- resultset_namespace => '+DBICNSTest::RSet',
- );
-};
-ok(!$@) or diag $@;
-like($warnings, qr/load_namespaces found ResultSet class C with no corresponding Result class/);
+lives_ok (sub {
+ warnings_exist ( sub {
+ package DBICNSTestOther;
+ use base qw/DBIx::Class::Schema/;
+ __PACKAGE__->load_namespaces(
+ result_namespace => [ '+DBICNSTest::Rslt', '+DBICNSTest::OtherRslt' ],
+ resultset_namespace => '+DBICNSTest::RSet',
+ );
+ },
+ qr/load_namespaces found ResultSet class C with no corresponding Result class/,
+ );
+});
my $source_a = DBICNSTestOther->source('A');
isa_ok($source_a, 'DBIx::Class::ResultSource::Table');
my $source_d = DBICNSTestOther->source('D');
isa_ok($source_d, 'DBIx::Class::ResultSource::Table');
+
+done_testing;
use strict;
use warnings;
+
use Test::More;
+use Test::Exception;
+
use Config;
# README: If you set the env var to a number greater than 10,
my $parent_rs;
-eval {
+lives_ok (sub {
my $dbh = $schema->storage->dbh;
{
$parent_rs = $schema->resultset('CD')->search({ year => 1901 });
$parent_rs->next;
-};
-ok(!$@) or diag "Creation eval failed: $@";
+}, 'populate successful');
my @children;
while(@children < $num_children) {
);
}
+{
+ # Test support for straight joins
+ my $cdsrc = $schema->source('CD');
+ my $artrel_info = $cdsrc->relationship_info ('artist');
+ $cdsrc->add_relationship(
+ 'straight_artist',
+ $artrel_info->{class},
+ $artrel_info->{cond},
+ { %{$artrel_info->{attrs}}, join_type => 'straight' },
+ );
+ is_same_sql_bind (
+ $cdsrc->resultset->search({}, { prefetch => 'straight_artist' })->as_query,
+ '(
+ SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track,
+ straight_artist.artistid, straight_artist.name, straight_artist.rank, straight_artist.charfield
+ FROM cd me
+ STRAIGHT_JOIN artist straight_artist ON straight_artist.artistid = me.artist
+ )',
+ [],
+ 'straight joins correctly supported for mysql'
+ );
+}
+
## Can we properly deal with the null search problem?
##
## Only way is to do a SET SQL_AUTO_IS_NULL = 0; on connect
plan skip_all => <<EOM unless $dsn && $user;
Set \$ENV{DBICTEST_PG_DSN}, _USER and _PASS to run this test
-( NOTE: This test drops and creates tables called 'artist', 'casecheck',
- 'array_test' and 'sequence_test' as well as following sequences:
- 'pkid1_seq', 'pkid2_seq' and 'nonpkid_seq''. as well as following
- schemas: 'dbic_t_schema', 'dbic_t_schema_2', 'dbic_t_schema_3',
- 'dbic_t_schema_4', and 'dbic_t_schema_5'
-)
+( NOTE: This test drops and creates tables called 'artist', 'cd',
+'timestamp_primary_key_test', 'track', 'casecheck', 'array_test' and
+'sequence_test', as well as the following sequences: 'pkid1_seq',
+'pkid2_seq' and 'nonpkid_seq', and the following schemas: 'dbic_t_schema',
+'dbic_t_schema_2', 'dbic_t_schema_3', 'dbic_t_schema_4', and 'dbic_t_schema_5')
EOM
### load any test classes that are defined further down in the file via BEGIN blocks
our @test_classes; #< array that will be pushed into by test classes defined in this file
DBICTest::Schema->load_classes( map {s/.+:://;$_} @test_classes ) if @test_classes;
+my $schema;
+
+require DBIx::Class::Storage::DBI::Pg;
+
+my $can_insert_returning =
+ DBIx::Class::Storage::DBI::Pg->can('can_insert_returning');
+
+for my $use_insert_returning (0..1) {
+ no warnings qw/redefine once/;
+ local *DBIx::Class::Storage::DBI::Pg::can_insert_returning = sub {
+ $use_insert_returning
+ };
### pre-connect tests (keep each test separate as to make sure rebless() runs)
-{
- my $s = DBICTest::Schema->connect($dsn, $user, $pass);
+ {
+ my $s = DBICTest::Schema->connect($dsn, $user, $pass);
- ok (!$s->storage->_dbh, 'definitely not connected');
+ ok (!$s->storage->_dbh, 'definitely not connected');
- # Check that datetime_parser returns correctly before we explicitly connect.
- SKIP: {
- eval { require DateTime::Format::Pg };
- skip "DateTime::Format::Pg required", 2 if $@;
+ # Check that datetime_parser returns correctly before we explicitly connect.
+ SKIP: {
+ eval { require DateTime::Format::Pg };
+ skip "DateTime::Format::Pg required", 2 if $@;
- my $store = ref $s->storage;
- is($store, 'DBIx::Class::Storage::DBI', 'Started with generic storage');
+ my $store = ref $s->storage;
+ is($store, 'DBIx::Class::Storage::DBI', 'Started with generic storage');
- my $parser = $s->storage->datetime_parser;
- is( $parser, 'DateTime::Format::Pg', 'datetime_parser is as expected');
- }
+ my $parser = $s->storage->datetime_parser;
+ is( $parser, 'DateTime::Format::Pg', 'datetime_parser is as expected');
+ }
- ok (!$s->storage->_dbh, 'still not connected');
-}
-{
- my $s = DBICTest::Schema->connect($dsn, $user, $pass);
- # make sure sqlt_type overrides work (::Storage::DBI::Pg does this)
- ok (!$s->storage->_dbh, 'definitely not connected');
- is ($s->storage->sqlt_type, 'PostgreSQL', 'sqlt_type correct pre-connection');
- ok (!$s->storage->_dbh, 'still not connected');
-}
+ ok (!$s->storage->_dbh, 'still not connected');
+ }
+ {
+ my $s = DBICTest::Schema->connect($dsn, $user, $pass);
+ # make sure sqlt_type overrides work (::Storage::DBI::Pg does this)
+ ok (!$s->storage->_dbh, 'definitely not connected');
+ is ($s->storage->sqlt_type, 'PostgreSQL', 'sqlt_type correct pre-connection');
+ ok (!$s->storage->_dbh, 'still not connected');
+ }
### connect, create postgres-specific test schema
-my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
+ $schema = DBICTest::Schema->connect($dsn, $user, $pass);
+ $schema->storage->ensure_connected;
+
+ if ($use_insert_returning && (not $can_insert_returning->($schema->storage)))
+ {
+ diag "Your version of PostgreSQL does not support INSERT ... RETURNING.";
+ diag "*** SKIPPING FURTHER TESTS";
+ last;
+ }
-drop_test_schema($schema);
-create_test_schema($schema);
+ drop_test_schema($schema);
+ create_test_schema($schema);
### begin main tests
-
# run a BIG bunch of tests for last-insert-id / Auto-PK / sequence
# discovery
-run_apk_tests($schema); #< older set of auto-pk tests
-run_extended_apk_tests($schema); #< new extended set of auto-pk tests
+ run_apk_tests($schema); #< older set of auto-pk tests
+ run_extended_apk_tests($schema); #< new extended set of auto-pk tests
+### type_info tests
+ my $test_type_info = {
+ 'artistid' => {
+ 'data_type' => 'integer',
+ 'is_nullable' => 0,
+ 'size' => 4,
+ },
+ 'name' => {
+ 'data_type' => 'character varying',
+ 'is_nullable' => 1,
+ 'size' => 100,
+ 'default_value' => undef,
+ },
+ 'rank' => {
+ 'data_type' => 'integer',
+ 'is_nullable' => 0,
+ 'size' => 4,
+ 'default_value' => 13,
+ },
+ 'charfield' => {
+ 'data_type' => 'character',
+ 'is_nullable' => 1,
+ 'size' => 10,
+ 'default_value' => undef,
+ },
+ 'arrayfield' => {
+ 'data_type' => 'integer[]',
+ 'is_nullable' => 1,
+ 'size' => undef,
+ 'default_value' => undef,
+ },
+ };
+ my $type_info = $schema->storage->columns_info_for('dbic_t_schema.artist');
+ my $artistid_defval = delete $type_info->{artistid}->{default_value};
+ like($artistid_defval,
+ qr/^nextval\('([^\.]*\.){0,1}artist_artistid_seq'::(?:text|regclass)\)/,
+ 'columns_info_for - sequence matches Pg get_autoinc_seq expectations');
+ is_deeply($type_info, $test_type_info,
+ 'columns_info_for - column data types');
-### type_info tests
-my $test_type_info = {
- 'artistid' => {
- 'data_type' => 'integer',
- 'is_nullable' => 0,
- 'size' => 4,
- },
- 'name' => {
- 'data_type' => 'character varying',
- 'is_nullable' => 1,
- 'size' => 100,
- 'default_value' => undef,
- },
- 'rank' => {
- 'data_type' => 'integer',
- 'is_nullable' => 0,
- 'size' => 4,
- 'default_value' => 13,
-
- },
- 'charfield' => {
- 'data_type' => 'character',
- 'is_nullable' => 1,
- 'size' => 10,
- 'default_value' => undef,
- },
- 'arrayfield' => {
- 'data_type' => 'integer[]',
- 'is_nullable' => 1,
- 'size' => undef,
- 'default_value' => undef,
- },
-};
-my $type_info = $schema->storage->columns_info_for('dbic_t_schema.artist');
-my $artistid_defval = delete $type_info->{artistid}->{default_value};
-like($artistid_defval,
- qr/^nextval\('([^\.]*\.){0,1}artist_artistid_seq'::(?:text|regclass)\)/,
- 'columns_info_for - sequence matches Pg get_autoinc_seq expectations');
-is_deeply($type_info, $test_type_info,
- 'columns_info_for - column data types');
+####### Array tests
+ BEGIN {
+ package DBICTest::Schema::ArrayTest;
+ push @main::test_classes, __PACKAGE__;
+ use strict;
+ use warnings;
+ use base 'DBIx::Class::Core';
-####### Array tests
+ __PACKAGE__->table('dbic_t_schema.array_test');
+ __PACKAGE__->add_columns(qw/id arrayfield/);
+ __PACKAGE__->column_info_from_storage(1);
+ __PACKAGE__->set_primary_key('id');
-BEGIN {
- package DBICTest::Schema::ArrayTest;
- push @main::test_classes, __PACKAGE__;
-
- use strict;
- use warnings;
- use base 'DBIx::Class::Core';
+ }
+ SKIP: {
+ skip "Need DBD::Pg 2.9.2 or newer for array tests", 4 if $DBD::Pg::VERSION < 2.009002;
- __PACKAGE__->table('dbic_t_schema.array_test');
- __PACKAGE__->add_columns(qw/id arrayfield/);
- __PACKAGE__->column_info_from_storage(1);
- __PACKAGE__->set_primary_key('id');
+ lives_ok {
+ $schema->resultset('ArrayTest')->create({
+ arrayfield => [1, 2],
+ });
+ } 'inserting arrayref as pg array data';
-}
-SKIP: {
- skip "Need DBD::Pg 2.9.2 or newer for array tests", 4 if $DBD::Pg::VERSION < 2.009002;
+ lives_ok {
+ $schema->resultset('ArrayTest')->update({
+ arrayfield => [3, 4],
+ });
+ } 'updating arrayref as pg array data';
- lives_ok {
$schema->resultset('ArrayTest')->create({
- arrayfield => [1, 2],
- });
- } 'inserting arrayref as pg array data';
-
- lives_ok {
- $schema->resultset('ArrayTest')->update({
- arrayfield => [3, 4],
+ arrayfield => [5, 6],
});
- } 'updating arrayref as pg array data';
- $schema->resultset('ArrayTest')->create({
- arrayfield => [5, 6],
- });
-
- my $count;
- lives_ok {
- $count = $schema->resultset('ArrayTest')->search({
- arrayfield => \[ '= ?' => [arrayfield => [3, 4]] ], #Todo anything less ugly than this?
- })->count;
- } 'comparing arrayref to pg array data does not blow up';
- is($count, 1, 'comparing arrayref to pg array data gives correct result');
-}
+ my $count;
+ lives_ok {
+ $count = $schema->resultset('ArrayTest')->search({
+ arrayfield => \[ '= ?' => [arrayfield => [3, 4]] ], #Todo anything less ugly than this?
+ })->count;
+ } 'comparing arrayref to pg array data does not blow up';
+ is($count, 1, 'comparing arrayref to pg array data gives correct result');
+ }
########## Case check
-BEGIN {
- package DBICTest::Schema::Casecheck;
- push @main::test_classes, __PACKAGE__;
+ BEGIN {
+ package DBICTest::Schema::Casecheck;
+ push @main::test_classes, __PACKAGE__;
- use strict;
- use warnings;
- use base 'DBIx::Class::Core';
+ use strict;
+ use warnings;
+ use base 'DBIx::Class::Core';
- __PACKAGE__->table('dbic_t_schema.casecheck');
- __PACKAGE__->add_columns(qw/id name NAME uc_name/);
- __PACKAGE__->column_info_from_storage(1);
- __PACKAGE__->set_primary_key('id');
-}
+ __PACKAGE__->table('dbic_t_schema.casecheck');
+ __PACKAGE__->add_columns(qw/id name NAME uc_name/);
+ __PACKAGE__->column_info_from_storage(1);
+ __PACKAGE__->set_primary_key('id');
+ }
-my $name_info = $schema->source('Casecheck')->column_info( 'name' );
-is( $name_info->{size}, 1, "Case sensitive matching info for 'name'" );
+ my $name_info = $schema->source('Casecheck')->column_info( 'name' );
+ is( $name_info->{size}, 1, "Case sensitive matching info for 'name'" );
-my $NAME_info = $schema->source('Casecheck')->column_info( 'NAME' );
-is( $NAME_info->{size}, 2, "Case sensitive matching info for 'NAME'" );
+ my $NAME_info = $schema->source('Casecheck')->column_info( 'NAME' );
+ is( $NAME_info->{size}, 2, "Case sensitive matching info for 'NAME'" );
-my $uc_name_info = $schema->source('Casecheck')->column_info( 'uc_name' );
-is( $uc_name_info->{size}, 3, "Case insensitive matching info for 'uc_name'" );
+ my $uc_name_info = $schema->source('Casecheck')->column_info( 'uc_name' );
+ is( $uc_name_info->{size}, 3, "Case insensitive matching info for 'uc_name'" );
## Test SELECT ... FOR UPDATE
-SKIP: {
- if(eval "require Sys::SigAction" && !$@) {
- Sys::SigAction->import( 'set_sig_handler' );
- }
- else {
- skip "Sys::SigAction is not available", 6;
- }
+ SKIP: {
+ if(eval "require Sys::SigAction" && !$@) {
+ Sys::SigAction->import( 'set_sig_handler' );
+ }
+ else {
+ skip "Sys::SigAction is not available", 6;
+ }
- my ($timed_out, $artist2);
+ my ($timed_out, $artist2);
- for my $t (
- {
- # Make sure that an error was raised, and that the update failed
- update_lock => 1,
- test_sub => sub {
- ok($timed_out, "update from second schema times out");
- ok($artist2->is_column_changed('name'), "'name' column is still dirty from second schema");
+ for my $t (
+ {
+ # Make sure that an error was raised, and that the update failed
+ update_lock => 1,
+ test_sub => sub {
+ ok($timed_out, "update from second schema times out");
+ ok($artist2->is_column_changed('name'), "'name' column is still dirty from second schema");
+ },
},
- },
- {
- # Make sure that an error was NOT raised, and that the update succeeded
- update_lock => 0,
- test_sub => sub {
- ok(! $timed_out, "update from second schema DOES NOT timeout");
- ok(! $artist2->is_column_changed('name'), "'name' column is NOT dirty from second schema");
+ {
+ # Make sure that an error was NOT raised, and that the update succeeded
+ update_lock => 0,
+ test_sub => sub {
+ ok(! $timed_out, "update from second schema DOES NOT timeout");
+ ok(! $artist2->is_column_changed('name'), "'name' column is NOT dirty from second schema");
+ },
},
- },
- ) {
- # create a new schema
- my $schema2 = DBICTest::Schema->connect($dsn, $user, $pass);
- $schema2->source("Artist")->name("dbic_t_schema.artist");
-
- $schema->txn_do( sub {
- my $artist = $schema->resultset('Artist')->search(
- {
- artistid => 1
- },
- $t->{update_lock} ? { for => 'update' } : {}
- )->first;
- is($artist->artistid, 1, "select returns artistid = 1");
-
- $timed_out = 0;
- eval {
- my $h = set_sig_handler( 'ALRM', sub { die "DBICTestTimeout" } );
- alarm(2);
- $artist2 = $schema2->resultset('Artist')->find(1);
- $artist2->name('fooey');
- $artist2->update;
- alarm(0);
- };
- $timed_out = $@ =~ /DBICTestTimeout/;
- });
+ ) {
+ # create a new schema
+ my $schema2 = DBICTest::Schema->connect($dsn, $user, $pass);
+ $schema2->source("Artist")->name("dbic_t_schema.artist");
+
+ $schema->txn_do( sub {
+ my $artist = $schema->resultset('Artist')->search(
+ {
+ artistid => 1
+ },
+ $t->{update_lock} ? { for => 'update' } : {}
+ )->first;
+ is($artist->artistid, 1, "select returns artistid = 1");
+
+ $timed_out = 0;
+ eval {
+ my $h = set_sig_handler( 'ALRM', sub { die "DBICTestTimeout" } );
+ alarm(2);
+ $artist2 = $schema2->resultset('Artist')->find(1);
+ $artist2->name('fooey');
+ $artist2->update;
+ alarm(0);
+ };
+ $timed_out = $@ =~ /DBICTestTimeout/;
+ });
- $t->{test_sub}->();
- }
-}
+ $t->{test_sub}->();
+ }
+ }
######## other older Auto-pk tests
-$schema->source("SequenceTest")->name("dbic_t_schema.sequence_test");
-for (1..5) {
- my $st = $schema->resultset('SequenceTest')->create({ name => 'foo' });
- is($st->pkid1, $_, "Oracle Auto-PK without trigger: First primary key");
- is($st->pkid2, $_ + 9, "Oracle Auto-PK without trigger: Second primary key");
- is($st->nonpkid, $_ + 19, "Oracle Auto-PK without trigger: Non-primary key");
+ $schema->source("SequenceTest")->name("dbic_t_schema.sequence_test");
+ for (1..5) {
+ my $st = $schema->resultset('SequenceTest')->create({ name => 'foo' });
+ is($st->pkid1, $_, "Auto-PK for sequence without default: First primary key");
+ is($st->pkid2, $_ + 9, "Auto-PK for sequence without default: Second primary key");
+ is($st->nonpkid, $_ + 19, "Auto-PK for sequence without default: Non-primary key");
+ }
+ my $st = $schema->resultset('SequenceTest')->create({ name => 'foo', pkid1 => 55 });
+ is($st->pkid1, 55, "Auto-PK for sequence without default: First primary key set manually");
+
+
+######## test non-serial auto-pk
+
+ if ($schema->storage->can_insert_returning) {
+ $schema->source('TimestampPrimaryKey')->name('dbic_t_schema.timestamp_primary_key_test');
+ my $row = $schema->resultset('TimestampPrimaryKey')->create({});
+ ok $row->id;
+ }
+
+######## test with_deferred_fk_checks
+
+ $schema->source('CD')->name('dbic_t_schema.cd');
+ $schema->source('Track')->name('dbic_t_schema.track');
+ lives_ok {
+ $schema->storage->with_deferred_fk_checks(sub {
+ $schema->resultset('Track')->create({
+ trackid => 999, cd => 999, position => 1, title => 'deferred FK track'
+ });
+ $schema->resultset('CD')->create({
+ artist => 1, cdid => 999, year => '2003', title => 'deferred FK cd'
+ });
+ });
+ } 'with_deferred_fk_checks code survived';
+
+ is eval { $schema->resultset('Track')->find(999)->title }, 'deferred FK track',
+ 'code in with_deferred_fk_checks worked';
+
+ throws_ok {
+ $schema->resultset('Track')->create({
+ trackid => 1, cd => 9999, position => 1, title => 'Track1'
+ });
+ } qr/constraint/i, 'with_deferred_fk_checks is off';
}
-my $st = $schema->resultset('SequenceTest')->create({ name => 'foo', pkid1 => 55 });
-is($st->pkid1, 55, "Oracle Auto-PK without trigger: First primary key set manually");
done_testing;
-exit;
-
END {
return unless $schema;
drop_test_schema($schema);
$dbh->do("CREATE SCHEMA dbic_t_schema");
$dbh->do("CREATE TABLE dbic_t_schema.artist $std_artist_table");
+
+ $dbh->do(<<EOS);
+CREATE TABLE dbic_t_schema.timestamp_primary_key_test (
+ id timestamp default current_timestamp
+)
+EOS
+ $dbh->do(<<EOS);
+CREATE TABLE dbic_t_schema.cd (
+ cdid int PRIMARY KEY,
+ artist int,
+ title varchar(255),
+ year varchar(4),
+ genreid int,
+ single_track int
+)
+EOS
+ $dbh->do(<<EOS);
+CREATE TABLE dbic_t_schema.track (
+ trackid int,
+ cd int REFERENCES dbic_t_schema.cd(cdid) DEFERRABLE,
+ position int,
+ title varchar(255),
+ last_updated_on date,
+ last_updated_at date,
+ small_dt date
+)
+EOS
+
$dbh->do(<<EOS);
CREATE TABLE dbic_t_schema.sequence_test (
pkid1 integer
my $search_path_save = eapk_get_search_path($schema);
eapk_drop_all($schema);
+ %seqs = ();
# make the test schemas and sequences
$schema->storage->dbh_do(sub {
ok (! $schema2->storage->connected, 'a re-connected cloned schema starts unconnected');
}
+$schema->storage->_dbh->disconnect;
+
+lives_ok {
+ $schema->storage->dbh_do(sub { $_[1]->do('select 1') })
+} '_ping works';
+
$schema->storage->dbh_do (sub {
my ($storage, $dbh) = @_;
eval { $dbh->do("DROP TABLE artist") };
$row->update({ amount => undef });
} 'updated a money value to NULL';
- my $null_amount = eval { $rs->find($row->id)->amount };
- ok(
- (($null_amount == undef) && (not $@)),
- 'updated money value to NULL round-trip'
- );
- diag $@ if $@;
+ lives_and {
+ my $null_amount = $rs->find($row->id)->amount;
+ is $null_amount, undef;
+ } 'updated money value to NULL round-trip';
# Test computed columns and timestamps
$schema->storage->dbh_do (sub {
#warn "$dsn $user $pass";
plan skip_all => 'Set $ENV{DBICTEST_INFORMIX_DSN}, _USER and _PASS to run this test'
- unless ($dsn && $user);
+ unless $dsn;
-my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
+my $schema = DBICTest::Schema->connect($dsn, $user, $pass, {
+ auto_savepoint => 1
+});
my $dbh = $schema->storage->dbh;
eval { $dbh->do("DROP TABLE artist") };
-
$dbh->do("CREATE TABLE artist (artistid SERIAL, name VARCHAR(255), charfield CHAR(10), rank INTEGER DEFAULT 13);");
+eval { $dbh->do("DROP TABLE cd") };
+$dbh->do(<<EOS);
+CREATE TABLE cd (
+ cdid int PRIMARY KEY,
+ artist int,
+ title varchar(255),
+ year varchar(4),
+ genreid int,
+ single_track int
+)
+EOS
+eval { $dbh->do("DROP TABLE track") };
+$dbh->do(<<EOS);
+CREATE TABLE track (
+ trackid int,
+ cd int REFERENCES cd(cdid),
+ position int,
+ title varchar(255),
+ last_updated_on date,
+ last_updated_at date,
+ small_dt date
+)
+EOS
my $ars = $schema->resultset('Artist');
is ( $ars->count, 0, 'No rows at first' );
is( $lim->next->artistid, 102, "iterator->next ok" );
is( $lim->next, undef, "next past end of resultset ok" );
+# test savepoints
+throws_ok {
+ $schema->txn_do(sub {
+ eval {
+ $schema->txn_do(sub {
+ $ars->create({ name => 'in_savepoint' });
+ die "rolling back savepoint";
+ });
+ };
+ ok ((not $ars->search({ name => 'in_savepoint' })->first),
+ 'savepoint rolled back');
+ $ars->create({ name => 'in_outer_txn' });
+ die "rolling back outer txn";
+ });
+} qr/rolling back outer txn/,
+ 'correct exception for rollback';
+
+ok ((not $ars->search({ name => 'in_outer_txn' })->first),
+ 'outer txn rolled back');
+
+######## test with_deferred_fk_checks
+lives_ok {
+ $schema->storage->with_deferred_fk_checks(sub {
+ $schema->resultset('Track')->create({
+ trackid => 999, cd => 999, position => 1, title => 'deferred FK track'
+ });
+ $schema->resultset('CD')->create({
+ artist => 1, cdid => 999, year => '2003', title => 'deferred FK cd'
+ });
+ });
+} 'with_deferred_fk_checks code survived';
+
+is eval { $schema->resultset('Track')->find(999)->title }, 'deferred FK track',
+ 'code in with_deferred_fk_checks worked';
+
+throws_ok {
+ $schema->resultset('Track')->create({
+ trackid => 1, cd => 9999, position => 1, title => 'Track1'
+ });
+} qr/constraint/i, 'with_deferred_fk_checks is off';
done_testing;
is($new->artistid, 66, 'Explicit PK assigned');
# test savepoints
- eval {
+ throws_ok {
$schema->txn_do(sub {
eval {
$schema->txn_do(sub {
$ars->create({ name => 'in_outer_txn' });
die "rolling back outer txn";
});
- };
-
- like $@, qr/rolling back outer txn/,
+ } qr/rolling back outer txn/,
'correct exception for rollback';
ok ((not $ars->search({ name => 'in_outer_txn' })->first),
isa_ok($schema->storage, "DBIx::Class::Storage::$storage_type");
-# start disconnected to test reconnection
+# start disconnected to test _ping
$schema->storage->_dbh->disconnect;
- my $dbh;
- lives_ok (sub {
- $dbh = $schema->storage->dbh;
- }, 'reconnect works');
+ lives_ok {
+ $schema->storage->dbh_do(sub { $_[1]->do('select 1') })
+ } '_ping works';
+
+ my $dbh = $schema->storage->dbh;
$dbh->do("IF OBJECT_ID('artist', 'U') IS NOT NULL
DROP TABLE artist");
is($st->pkid1, 55, "Firebird Auto-PK without trigger: First primary key set manually");
# test savepoints
- eval {
+ throws_ok {
$schema->txn_do(sub {
eval {
$schema->txn_do(sub {
$ars->create({ name => 'in_outer_txn' });
die "rolling back outer txn";
});
- };
-
- like $@, qr/rolling back outer txn/,
+ } qr/rolling back outer txn/,
'correct exception for rollback';
ok ((not $ars->search({ name => 'in_outer_txn' })->first),
is( eval { $lim->next->artistid }, 102, "iterator->next ok" );
is( $lim->next, undef, "next past end of resultset ok" );
-# test multiple executing cursors
+# test nested cursors
{
my $rs1 = $ars->search({}, { order_by => { -asc => 'artistid' }});
- my $rs2 = $ars->search({}, { order_by => { -desc => 'artistid' }});
- is $rs1->next->artistid, 1, 'multiple cursors';
- is $rs2->next->artistid, 102, 'multiple cursors';
+ my $rs2 = $ars->search({ artistid => $rs1->next->artistid }, {
+ order_by => { -desc => 'artistid' }
+ });
+
+ is $rs2->next->artistid, 1, 'nested cursors';
}
# test empty insert
use lib qw(t/lib);
use DBICTest;
-warning_like (
- sub {
- package A::Comp;
- use base 'DBIx::Class';
- sub store_column { shift->next::method (@_) };
- 1;
+{
+ package A::Comp;
+ use base 'DBIx::Class';
+ sub store_column { shift->next::method (@_) };
+ 1;
+}
+
+{
+ package A::SubComp;
+ use base 'A::Comp';
+ 1;
+}
+warnings_like (
+ sub {
package A::Test;
use base 'DBIx::Class::Core';
- __PACKAGE__->load_components(qw(UTF8Columns +A::Comp));
+ __PACKAGE__->load_components(qw(UTF8Columns +A::SubComp +A::Comp));
1;
},
- qr/Incorrect loading order of DBIx::Class::UTF8Columns.+affect other components overriding store_column \(A::Comp\)/,
+ [qr/Incorrect loading order of DBIx::Class::UTF8Columns.+affect other components overriding store_column \(A::Comp\)/],
'incorrect order warning issued',
);
+warnings_are (
+ sub {
+ package A::Test2;
+ use base 'DBIx::Class::Core';
+ __PACKAGE__->load_components(qw(Core +A::Comp Ordered UTF8Columns));
+ __PACKAGE__->load_components(qw(Ordered +A::Comp Row UTF8Columns Core));
+ 1;
+ },
+ [],
+ 'no spurious warnings issued',
+);
+
+my $test2_mro;
+my $idx = 0;
+for (@{mro::get_linear_isa ('A::Test2')} ) {
+ $test2_mro->{$_} = $idx++;
+}
+
+cmp_ok ($test2_mro->{'A::Comp'}, '<', $test2_mro->{'DBIx::Class::UTF8Columns'}, 'mro of Test2 correct (A::Comp before UTF8Col)' );
+cmp_ok ($test2_mro->{'DBIx::Class::UTF8Columns'}, '<', $test2_mro->{'DBIx::Class::Core'}, 'mro of Test2 correct (UTF8Col before Core)' );
+cmp_ok ($test2_mro->{'DBIx::Class::Core'}, '<', $test2_mro->{'DBIx::Class::Row'}, 'mro of Test2 correct (Core before Row)' );
+
my $schema = DBICTest->init_schema();
DBICTest::Schema::CD->load_components('UTF8Columns');
DBICTest::Schema::CD->utf8_columns('title');
use strict;
-use warnings;
+use warnings;
use Test::More;
+use Test::Exception;
use lib qw(t/lib);
use DBICTest;
+use DBIC::SqlMakerTest;
my $schema = DBICTest->init_schema();
-plan tests => 22;
-
{
my $rs = $schema->resultset( 'CD' )->search(
{
],
}
);
-
- eval {
+
+ lives_ok {
my @rows = $rs->all();
};
- is( $@, '' );
}
is(scalar(@{$merge_rs_2->{attrs}->{join}}), 1, 'only one join kept when inherited');
my $merge_rs_2_cd = $merge_rs_2->next;
-eval {
+lives_ok (sub {
my @rs_with_prefetch = $schema->resultset('TreeLike')
->search(
prefetch => [ 'parent', { 'children' => 'parent' } ],
});
-};
-
-ok(!$@, "pathological prefetch ok");
+}, 'pathological prefetch ok');
my $rs = $schema->resultset("Artist")->search({}, { join => 'twokeys' });
my $second_search_rs = $rs->search({ 'cds_2.cdid' => '2' }, { join =>
is(scalar(@{$second_search_rs->{attrs}->{join}}), 3, 'both joins kept');
ok($second_search_rs->next, 'query on double joined rel runs okay');
-1;
+# test joinmap pruner
+lives_ok ( sub {
+ my $rs = $schema->resultset('Artwork')->search (
+ {
+ },
+ {
+ distinct => 1,
+ join => [
+ { artwork_to_artist => 'artist' },
+ { cd => 'artist' },
+ ],
+ },
+ );
+
+ is_same_sql_bind (
+ $rs->count_rs->as_query,
+ '(
+ SELECT COUNT( * )
+ FROM (
+ SELECT me.cd_id
+ FROM cd_artwork me
+ JOIN cd cd ON cd.cdid = me.cd_id
+ JOIN artist artist_2 ON artist_2.artistid = cd.artist
+ GROUP BY me.cd_id
+ ) count_subq
+ )',
+ [],
+ );
+
+ ok (defined $rs->count);
+});
+
+done_testing;
use DBICTest;
use Scope::Guard ();
-# XXX we're only testing TIMESTAMP here
-
my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_FIREBIRD_${_}" } qw/DSN USER PASS/};
my ($dsn2, $user2, $pass2) = @ENV{map { "DBICTEST_FIREBIRD_ODBC_${_}" } qw/DSN USER PASS/};
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+use Scope::Guard ();
+
+my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_INFORMIX_${_}" } qw/DSN USER PASS/};
+
+if (not $dsn) {
+ plan skip_all => <<'EOF';
+Set $ENV{DBICTEST_INFORMIX_DSN}, _USER and _PASS to run this test.
+Warning: This test drops and creates a table called 'event'.
+EOF
+} else {
+ eval "use DateTime; use DateTime::Format::Strptime;";
+ if ($@) {
+ plan skip_all => 'needs DateTime and DateTime::Format::Strptime for testing';
+ }
+}
+
+my $schema;
+
+{
+ $schema = DBICTest::Schema->connect($dsn, $user, $pass, {
+ on_connect_call => [ 'datetime_setup' ],
+ });
+
+ my $sg = Scope::Guard->new(\&cleanup);
+
+ eval { $schema->storage->dbh->do('DROP TABLE event') };
+ $schema->storage->dbh->do(<<'SQL');
+ CREATE TABLE event (
+ id INT PRIMARY KEY,
+ starts_at DATE,
+ created_on DATETIME YEAR TO FRACTION(5)
+ );
+SQL
+ my $rs = $schema->resultset('Event');
+
+ my $dt = DateTime->now;
+ $dt->set_nanosecond(555640000);
+
+ my $date_only = DateTime->new(
+ year => $dt->year, month => $dt->month, day => $dt->day
+ );
+
+ my $row;
+ ok( $row = $rs->create({
+ id => 1,
+ starts_at => $date_only,
+ created_on => $dt,
+ }));
+ ok( $row = $rs->search({ id => 1 }, { select => [qw/starts_at created_on/] })
+ ->first
+ );
+ is $row->created_on, $dt, 'TIMESTAMP as DateTime roundtrip';
+
+ cmp_ok $row->created_on->nanosecond, '==', $dt->nanosecond,
+ 'fractional part of a second survived';
+
+ is $row->starts_at, $date_only, 'DATE as DateTime roundtrip';
+}
+
+done_testing;
+
+# clean up our mess
+sub cleanup {
+ my $dbh;
+ eval {
+ $dbh = $schema->storage->dbh;
+ };
+ return unless $dbh;
+
+ eval { $dbh->do(qq{DROP TABLE $_}) } for qw/event/;
+}
Year1999CDs
CustomSql
Money
+ TimestampPrimaryKey
/,
{ 'DBICTest::Schema' => [qw/
LinerNotes
starts_at => { data_type => 'date' },
created_on => { data_type => 'timestamp' },
- varchar_date => { data_type => 'varchar', inflate_date => 1, size => 20, is_nullable => 1 },
- varchar_datetime => { data_type => 'varchar', inflate_datetime => 1, size => 20, is_nullable => 1 },
+ varchar_date => { data_type => 'varchar', size => 20, is_nullable => 1 },
+ varchar_datetime => { data_type => 'varchar', size => 20, is_nullable => 1 },
skip_inflation => { data_type => 'datetime', inflate_datetime => 0, is_nullable => 1 },
ts_without_tz => { data_type => 'datetime', is_nullable => 1 }, # used in EventTZPg
);
__PACKAGE__->set_primary_key('id');
+# Test add_columns '+colname' to augment a column definition.
+__PACKAGE__->add_columns(
+ '+varchar_date' => {
+ inflate_date => 1,
+ },
+ '+varchar_datetime' => {
+ inflate_datetime => 1,
+ },
+);
+
1;
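
The augmented definitions above exercise the add_columns('+colname' => { ... })
form, which merges the supplied attributes into a column that has already been
declared rather than replacing its definition. A minimal sketch of the same
idea outside the test schema (the class and column names here are illustrative
only, not part of the change set):

  package My::Schema::Result::Example;

  use strict;
  use warnings;
  use base 'DBIx::Class::Core';

  __PACKAGE__->table('example');

  # initial declaration carries the storage-level attributes
  __PACKAGE__->add_columns(
    created => { data_type => 'varchar', size => 20, is_nullable => 1 },
  );

  # later augmentation adds inflation behaviour without repeating
  # data_type, size or is_nullable from the declaration above
  __PACKAGE__->add_columns(
    '+created' => { inflate_datetime => 1 },
  );

  1;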
--- /dev/null
+package # hide from PAUSE
+ DBICTest::Schema::TimestampPrimaryKey;
+
+use base qw/DBICTest::BaseResult/;
+
+__PACKAGE__->table('timestamp_primary_key_test');
+
+__PACKAGE__->add_columns(
+ 'id' => {
+ data_type => 'timestamp',
+ default_value => \'current_timestamp',
+ },
+);
+
+__PACKAGE__->set_primary_key('id');
+
+1;
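
This result class has no serial column; its only primary key carries a
database-side default, so the generated value must come back from the storage
at insert time. A hedged usage sketch, mirroring the Pg test earlier in this
change set (can_insert_returning is the storage capability check already used
there; the test-style assertion is illustrative):

  # Only meaningful on storages that can return values from INSERT
  # (PostgreSQL and Firebird at the time of this change set).
  if ($schema->storage->can_insert_returning) {
    my $row = $schema->resultset('TimestampPrimaryKey')->create({});

    # id was populated from the column's database default via
    # INSERT ... RETURNING rather than a client-supplied value
    ok $row->id, 'database-generated timestamp PK retrieved';
  }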
--
-- Created by SQL::Translator::Producer::SQLite
--- Created on Sat Mar 6 18:04:27 2010
+-- Created on Mon Mar 22 11:08:33 2010
--
;
);
--
+-- Table: timestamp_primary_key_test
+--
+CREATE TABLE timestamp_primary_key_test (
+ id timestamp NOT NULL DEFAULT current_timestamp,
+ PRIMARY KEY (id)
+);
+
+--
-- Table: treelike
--
CREATE TABLE treelike (
-- View: year2000cds
--
CREATE VIEW year2000cds AS
- SELECT cdid, artist, title, year, genreid, single_track FROM cd WHERE year = "2000"
+ SELECT cdid, artist, title, year, genreid, single_track FROM cd WHERE year = "2000"
\ No newline at end of file
}, 'search_related prefetch with order_by works');
-TODO: { local $TODO = 'Unqualified columns in where clauses can not be fixed without an SQLA rewrite' if SQL::Abstract->VERSION < 2;
lives_ok ( sub {
my $no_prefetch = $schema->resultset('Track')->search_related(cd =>
{
is($use_prefetch->count, $no_prefetch->count, 'counts with and without prefetch match');
}, 'search_related prefetch with condition referencing unqualified column of a joined table works');
-}
-
lives_ok (sub {
my $rs = $schema->resultset("Artwork")->search(undef, {distinct => 1})
--- /dev/null
+package My::Schema::Result::User;
+
+use strict;
+use warnings;
+use base qw/DBIx::Class::Core/;
+
+### Define what our admin class is, for ensure_class_loaded()
+my $admin_class = __PACKAGE__ . '::Admin';
+
+__PACKAGE__->table('users');
+
+__PACKAGE__->add_columns(
+ qw/user_id email password
+ firstname lastname active
+ admin/
+);
+
+__PACKAGE__->set_primary_key('user_id');
+
+sub inflate_result {
+ my $self = shift;
+ my $ret = $self->next::method(@_);
+ if ( $ret->admin ) { ### If this is an admin, rebless for extra functions
+ $self->ensure_class_loaded($admin_class);
+ bless $ret, $admin_class;
+ }
+ return $ret;
+}
+
+sub hello {
+ return "I am a regular user.";
+}
+
+package My::Schema::Result::User::Admin;
+
+use strict;
+use warnings;
+use base qw/My::Schema::Result::User/;
+
+# This line is important: the subclass must declare the same table as its parent
+__PACKAGE__->table('users');
+
+sub hello {
+ return "I am an admin.";
+}
+
+sub do_admin_stuff {
+ return "I am doing admin stuff";
+}
+
+package My::Schema;
+
+use base qw/DBIx::Class::Schema/;
+
+My::Schema->register_class( Admin => 'My::Schema::Result::User::Admin' );
+My::Schema->register_class( User => 'My::Schema::Result::User' );
+
+1;
+
+package main;
+
+use lib qw(t/lib);
+use DBICTest;
+
+use Test::More;
+
+my $user_data = {
+ email => 'someguy@place.com',
+ password => 'pass1',
+ admin => 0
+};
+
+my $admin_data = {
+ email => 'someadmin@adminplace.com',
+ password => 'pass2',
+ admin => 1
+};
+
+ok( my $schema = My::Schema->connection('dbi:SQLite:dbname=:memory:') );
+
+ok(
+ $schema->storage->dbh->do(
+"create table users (user_id, email, password, firstname, lastname, active, admin)"
+ )
+);
+
+TODO: {
+ local $TODO = 'New objects should also be inflated';
+ my $user = $schema->resultset('User')->create($user_data);
+ my $admin = $schema->resultset('User')->create($admin_data);
+
+ is( ref $user, 'My::Schema::Result::User' );
+ is( ref $admin, 'My::Schema::Result::User::Admin' );
+
+}
+
+my $user = $schema->resultset('User')->single($user_data);
+my $admin = $schema->resultset('User')->single($admin_data);
+
+is( ref $user, 'My::Schema::Result::User' );
+is( ref $admin, 'My::Schema::Result::User::Admin' );
+
+is( $user->password, 'pass1' );
+is( $admin->password, 'pass2' );
+is( $user->hello, 'I am a regular user.' );
+is( $admin->hello, 'I am an admin.' );
+
+ok( !$user->can('do_admin_stuff') );
+ok( $admin->can('do_admin_stuff') );
+
+done_testing;