=head3 Paged results
When you expect a large number of results, you can ask L<DBIx::Class> for a
- paged resultset, which will fetch only a small number of records at a time:
+ paged resultset, which will fetch only a defined number of records at a time:
my $rs = $schema->resultset('Artist')->search(
undef,
return $rs->page(1); # DBIx::Class::ResultSet containing first 10 records
- In either of the above cases, you can return a L<Data::Page> object for the
+ In either of the above cases, you can get a L<Data::Page> object for the
resultset (suitable for use in e.g. a template) using the C<pager> method:
return $rs->pager();
join => [qw/ artist /],
order_by => [qw/ artist.name /]
}
- };
+ );
# Equivalent SQL:
# SELECT cd.* FROM cd
C<next::method>.
sub new {
- my ( $class, $attrs ) = @_;
+ my ( $self, $attrs ) = @_;
$attrs->{foo} = 'bar' unless defined $attrs->{foo};
=head2 Schema import/export
- This functionality requires you to have L<SQL::Translator> (also known as
- "SQL Fairy") installed.
+ To create a DBIx::Class schema from an existing database, use
+ L<DBIx::Class::Schema::Loader>'s C<make_schema_at>:
- To create a DBIx::Class schema from an existing database:
+ perl -MDBIx::Class::Schema::Loader=make_schema_at,dump_to_dir:./lib -e 'make_schema_at("My::Schema", { debug => 1 }, [ "dbi:Pg:dbname=foo","postgres" ])'
- sqlt --from DBI
- --to DBIx::Class::File
- --prefix "MySchema" > MySchema.pm
+ The following functionality requires you to have L<SQL::Translator>
+ (also known as "SQL Fairy") installed.
- To create a MySQL database from an existing L<DBIx::Class> schema, convert the
- schema to MySQL's dialect of SQL:
+ To create a set of database-specific .sql files for the above schema:
- sqlt --from SQL::Translator::Parser::DBIx::Class
- --to MySQL
- --DBIx::Class "MySchema.pm" > Schema1.sql
-
- And import using the mysql client:
+ my $schema = My::Schema->connect($dsn);
+ $schema->create_ddl_dir(['MySQL', 'SQLite', 'PostgreSQL'],
+ '0.1',
+ '/dbscriptdir/'
+ );
+
+ This will create schema files in the given directory (the current
+ directory by default), for MySQL, SQLite and PostgreSQL, using the
+ C<$VERSION> from your Schema.pm.
+
+ To create a new database using the schema:
+
+ my $schema = My::Schema->connect($dsn);
+ $schema->deploy({ add_drop_tables => 1});
+
+ To import created .sql files using the mysql client:
+
+ mysql -h "host" -D "database" -u "user" -p < My_Schema_1.0_MySQL.sql
+
+ To create C<ALTER TABLE> conversion scripts to update a database to a
+ newer version of your schema at a later point, first set a new
+ C<$VERSION> in your Schema file, then:
+
+ my $schema = My::Schema->connect($dsn);
+ $schema->create_ddl_dir(['MySQL', 'SQLite', 'PostgreSQL'],
+ '0.2',
+ '/dbscriptdir/',
+ '0.1'
+ );
+
+ This will produce new database-specific .sql files for the new version
+ of the schema, plus scripts to convert from version 0.1 to 0.2. This
+ requires that the files for 0.1 as created above are available in the
+ given directory to diff against.
- mysql -h "host" -D "database" -u "user" -p < Schema1.sql
=head2 Easy migration from class-based to schema-based setup
=head3 Save the schema
- Use C<sqlt> to transform your schema into an SQL script suitable for your
- customer's database. E.g. for MySQL:
-
- sqlt --from SQL::Translator::Parser::DBIx::Class
- --to MySQL
- --DBIx::Class "MySchema.pm" > Schema1.mysql.sql
-
- If you need to target databases from multiple vendors, just generate an SQL
- script suitable for each. To support PostgreSQL too:
-
- sqlt --from SQL::Translator::DBIx::Class
- --to PostgreSQL
- --DBIx::Class "MySchema.pm" > Schema1.pgsql.sql
+ Call L<DBIx::Class::Schema/create_ddl_dir> as above under
+ L</"Schema import/export">.
=head3 Deploy to customers
=head3 Modify the schema to change functionality
- As your application evolves, it may be necessary to modify your schema to
- change functionality. Once the changes are made to your schema in DBIx::Class,
- export the modified schema as before, taking care not to overwrite the original:
-
- sqlt --from SQL::Translator::DBIx::Class
- --to MySQL
- --DBIx::Class "Anything.pm" > Schema2.mysql.sql
-
- Next, use sqlt-diff to create an SQL script that will update the customer's
- database schema:
-
- sqlt-diff --to MySQL Schema1=MySQL Schema2=MySQL > SchemaUpdate.mysql.sql
+ As your application evolves, it may be necessary to modify your schema
+ to change functionality. Once the changes are made to your schema in
+ DBIx::Class, export the modified schema and the conversion scripts as
+ in L</"Schema import/export">.
=head3 Deploy update to customers
- The schema update can be deployed to customers using the same method as before.
+ Add the L<DBIx::Class::Schema::Versioned> schema component to your
+ Schema class. This will add a new table to your database called
+ C<SchemaVersions>, which will keep track of which version is installed
+ and warn if the user tries to run a newer schema version than the
+ database thinks it has.
+
+ Alternatively, you can send the conversion sql scripts to your
+ customers as above.
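+
+ A minimal sketch of wiring the component into a schema class
+ (C<My::Schema> and the directory are placeholders; this assumes the
+ component's C<upgrade_directory> accessor):
+
+   package My::Schema;
+   use base qw/DBIx::Class::Schema/;
+
+   our $VERSION = '0.2';
+
+   __PACKAGE__->load_components(qw/+DBIx::Class::Schema::Versioned/);
+   __PACKAGE__->upgrade_directory('/dbscriptdir/');
+   __PACKAGE__->load_classes();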
=head2 Setting limit dialect for SQL::Abstract::Limit
my $sql = shift();
my @params = @_;
- print "Executing $sql: ".join(', ', @params)."\n";
+ $self->print("Executing $sql: ".join(', ', @params)."\n");
$start = time();
}
my $sql = shift();
my @params = @_;
- printf("Execution took %0.4f seconds.\n", time() - $start);
+ my $elapsed = sprintf("%0.4f", time() - $start);
+ $self->print("Execution took $elapsed seconds.\n");
$start = undef;
}
You can then install that class as the debugging object:
- __PACKAGE__->storage()->debugobj(new My::Profiler());
- __PACKAGE__->storage()->debug(1);
+ __PACKAGE__->storage->debugobj(My::Profiler->new());
+ __PACKAGE__->storage->debug(1);
A more complicated example might involve storing each execution of SQL in an
array:
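One possible shape for such a collector, subclassing
L<DBIx::Class::Storage::Statistics> (the package name and the storage
details are invented for illustration):

  package My::QueryTimes;
  use base 'DBIx::Class::Storage::Statistics';

  my %times;

  sub query_start {
    my ($self, $sql, @params) = @_;
    $self->{start} = time();
  }

  sub query_end {
    my ($self, $sql, @params) = @_;
    push @{ $times{$sql} }, time() - $self->{start};
  }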
You could then create average, high and low execution times for an SQL
statement and dig down to see if certain parameters cause aberrant behavior.
+ You might want to check out L<DBIx::Class::QueryLog> as well.
=head2 Getting the value of the primary key for the last database insert
use strict;
use warnings;
use overload
- '0+' => \&count,
- 'bool' => sub { 1; },
+ '0+' => "count",
+ 'bool' => "_bool",
fallback => 1;
use Carp::Clan qw/^DBIx::Class/;
use Data::Page;
__PACKAGE__->belongs_to(artist => 'MyApp::Schema::Artist');
1;
+=head1 OVERLOADING
+
+If a resultset is used as a number, it returns the C<count()>. However, if it is used as a boolean it is always true. So if you want to check whether a resultset has any results, use C<if $rs != 0>; C<if $rs> will always be true.
+
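+For illustration, assuming a schema with an C<Artist> source (the names
+are made up for this example), the difference looks like this:
+
+  my $rs = $schema->resultset('Artist')->search({ name => 'No Such Band' });
+
+  if ($rs != 0) {   # numeric context: compares count()
+    # reached only when there are matching rows
+  }
+
+  if ($rs) {        # boolean context: always true
+    # reached even when the resultset is empty
+  }
+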
=head1 METHODS
=head2 new
# offset, order by and page are not needed to count. record_filter is cdbi
delete $attrs->{$_} for qw/rows offset order_by page pager record_filter/;
- my $tmp_rs = (ref $self)->new($self->_source_handle, $attrs);
+ my $tmp_rs = (ref $self)->new($self->result_source, $attrs);
my ($count) = $tmp_rs->cursor->next;
return $count;
}
+sub _bool {
+ return 1;
+}
+
=head2 count_literal
=over 4
submitting to a $resultset->create(...) method.
In void context, C<insert_bulk> in L<DBIx::Class::Storage::DBI> is used
- to insert the data, as this is a faster method.
+ to insert the data, as this is a faster method.
Otherwise, each set of data is inserted into the database using
L<DBIx::Class::ResultSet/create>, and an arrayref of the resulting row
print $ArtistOne->name; ## response is 'Artist One'
print $ArtistThree->cds->count; ## response is '2'
+
+ Please note an important effect on your data when choosing between void and
+ wantarray context. Since void context goes straight to C<insert_bulk> in
+ L<DBIx::Class::Storage::DBI>, this will skip any component that is overriding
+ C<insert>. So if you are using something like L<DBIx::Class::UUIDColumns> to
+ create primary keys for you, you will find that your PKs are empty. In this
+ case you will have to use the wantarray context in order to create those
+ values.
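+
+ A short sketch of the two calling styles (the source and column names
+ are illustrative):
+
+   ## void context: fast bulk insert via insert_bulk, skips any
+   ## component that overrides insert()
+   $schema->resultset('Artist')->populate([
+     { name => 'Artist One' },
+     { name => 'Artist Two' },
+   ]);
+
+   ## list context: rows go through create(), so components that hook
+   ## insert() (e.g. to generate primary keys) still run
+   my @artists = $schema->resultset('Artist')->populate([
+     { name => 'Artist Three' },
+   ]);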
=cut
sub page {
my ($self, $page) = @_;
- return (ref $self)->new($self->_source_handle, { %{$self->{attrs}}, page => $page });
+ return (ref $self)->new($self->result_source, { %{$self->{attrs}}, page => $page });
}
=head2 new_result
=back
- Creates an object in the resultset's result class and returns it.
+ Creates a new row object in the resultset's result class and returns
+ it. The row is not inserted into the database at this point; call
+ L<DBIx::Class::Row/insert> to do that. Calling L<DBIx::Class::Row/in_storage>
+ will tell you whether the row object has been inserted or not.
+
+ Passes the hashref of input on to L<DBIx::Class::Row/new>.
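+
+ For example (assuming an C<Artist> source):
+
+   my $artist = $schema->resultset('Artist')
+                       ->new_result({ name => 'Shiny New Band' });
+
+   print "not yet stored\n" unless $artist->in_storage;
+
+   $artist->insert;   # now the row exists in the database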
=cut
=back
- Inserts a record into the resultset and returns the object representing it.
+ Attempt to create a single new row or a row with multiple related rows
+ in the table represented by the resultset (and related tables). This
+ will not check for duplicate rows before inserting, use
+ L</find_or_create> to do that.
+
+ To create one row for this resultset, pass a hashref of key/value
+ pairs representing the columns of the table and the values you wish to
+ store. If the appropriate relationships are set up, foreign key fields
+ can also be passed an object representing the foreign row, and the
+ value will be set to its primary key.
+
+ To create related objects, pass a hashref for the value if the related
+ item is a foreign key relationship (L<DBIx::Class::Relationship/belongs_to>),
+ and use the name of the relationship as the key (not necessarily the
+ name of the field). For C<has_many> and C<has_one> relationships, pass
+ an arrayref of hashrefs containing the data for each of the rows to
+ create in the foreign tables, again using the relationship name as the key.
+
+ Instead of hashrefs of plain related data (key/value pairs), you may
+ also pass new or inserted objects. New objects (not inserted yet, see
+ L</new>) will be inserted into their appropriate tables.
Effectively a shortcut for C<< ->new_result(\%vals)->insert >>.
+ Example of creating a new row:
+
+   $person_rs->create({
+     name  => 'Some Person',
+     email => 'somebody@someplace.com',
+   });
+
+ Example of creating a new row and also creating rows in a related C<has_many>
+ or C<has_one> resultset. Note the arrayref:
+
+   $artist_rs->create({
+     artistid => 4,
+     name     => 'Manufactured Crap',
+     cds      => [
+       { title => 'My First CD', year => 2006 },
+       { title => 'Yet More Tweeny-Pop crap', year => 2007 },
+     ],
+   });
+
+ Example of creating a new row and also creating a row in a related
+ C<belongs_to> resultset. Note the hashref:
+
+   $cd_rs->create({
+     title  => 'Music for Silly Walks',
+     year   => 2000,
+     artist => {
+       name => 'Silly Musician',
+     },
+   });
+
=cut
sub create {
my $rel_obj = $self->result_source->relationship_info($rel);
$self->throw_exception(
- "search_related: result source '" . $self->_source_handle->source_moniker .
+ "search_related: result source '" . $self->result_source->source_name .
"' has no such relationship $rel")
unless $rel_obj;
#XXX - temp fix for result_class bug. There likely is a more elegant fix -groditi
my %attrs = %{$self->{attrs}||{}};
- delete $attrs{result_class};
+ delete @attrs{qw(result_class alias)};
my $new_cache;
}
}
- my $new = $self->_source_handle
- ->schema
- ->resultset($rel_obj->{class})
- ->search_rs(
- undef, {
- %attrs,
- join => undef,
- prefetch => undef,
- select => undef,
- as => undef,
- alias => $alias,
- where => $self->{cond},
- seen_join => $seen,
- from => $from,
- });
+ my $rel_source = $self->result_source->related_source($rel);
+
+ my $new = do {
+
+ # The reason we do this now instead of passing the alias to the
+ # search_rs below is that if you wrap/overload resultset on the
+ # source you need to know what alias it's -going- to have for things
+ # to work sanely (e.g. RestrictWithObject wants to be able to add
+ # extra query restrictions, and these may need to be $alias.)
+
+ my $attrs = $rel_source->resultset_attributes;
+ local $attrs->{alias} = $alias;
+
+ $rel_source->resultset
+ ->search_rs(
+ undef, {
+ %attrs,
+ join => undef,
+ prefetch => undef,
+ select => undef,
+ as => undef,
+ where => $self->{cond},
+ seen_join => $seen,
+ from => $from,
+ });
+ };
$new->set_cache($new_cache) if $new_cache;
$new;
};
exists>). It has nothing to do with the SQL code C< SELECT foo AS bar
>.
- The C< as > attribute is used in conjunction with C<select>,
+ The C<as> attribute is used in conjunction with C<select>,
usually when C<select> contains one or more function or stored
procedure names:
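For example (the source and column names are illustrative):

  my $rs = $schema->resultset('Employee')->search(undef, {
    select   => [ 'name', { count => 'employeeid' } ],
    as       => [qw/ name employee_count /],
    group_by => [qw/ name /],
  });

  my $row   = $rs->first;
  my $name  = $row->name;
  my $count = $row->get_column('employee_count');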
=back
- Contains one or more relationships that should be fetched along with the main
- query (when they are accessed afterwards they will have already been
- "prefetched"). This is useful for when you know you will need the related
- objects, because it saves at least one query:
+ Contains one or more relationships that should be fetched along with
+ the main query (when they are accessed afterwards the data will
+ already be available, without extra queries to the database). This is
+ useful for when you know you will need the related objects, because it
+ saves at least one query:
my $rs = $schema->resultset('Tag')->search(
undef,
be turned into objects via new_related, and treated as if you had
passed objects.
+ For a more involved explanation, see L<DBIx::Class::ResultSet/create>.
+
=cut
## It needs to store the new objects somewhere, and call insert on that list later when insert is called on this object. We may need an accessor for these so the user can retrieve them, if just doing ->new().
my ($class, $attrs) = @_;
$class = ref $class if ref $class;
- my $new = { _column_data => {} };
+ my $new = {
+ _column_data => {},
+ };
bless $new, $class;
if (my $handle = delete $attrs->{-source_handle}) {
an entirely new object into the database, use C<create> (see
L<DBIx::Class::ResultSet/create>).
+ This will also insert any uninserted, related objects held inside this
+ one, see L<DBIx::Class::ResultSet/create> for more details.
+
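+ A sketch (assuming a C<CD> source that C<belongs_to> an C<Artist>):
+
+   my $cd = $schema->resultset('CD')->new({
+     title  => 'A New Album',
+     year   => 2008,
+     artist => { name => 'A Brand New Artist' },
+   });
+
+   $cd->insert;   # inserts the related artist row as well
+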
=cut
sub insert {
foreach my $p (@pri) {
if (exists $keyhash->{$p}) {
- warn $keyhash->{$p};
unless (defined($rel_obj->get_column($keyhash->{$p}))
|| $rel_obj->column_info($keyhash->{$p})
->{is_auto_increment}) {
$obj->in_storage; # Get value
$obj->in_storage(1); # Set value
- Indicated whether the object exists as a row in the database or not
+ Indicates whether the object exists as a row in the database or not
=cut
my $ret = $self->store_column(@_);
$self->{_dirty_columns}{$column} = 1
if (defined $old ^ defined $ret) || (defined $old && $old ne $ret);
+
+ # XXX clear out the relation cache for this column
+ delete $self->{related_resultsets}{$column};
+
return $ret;
}
__PACKAGE__->mk_group_accessors('simple' =>
qw/_connect_info _dbi_connect_info _dbh _sql_maker _sql_maker_opts
- _conn_pid _conn_tid disable_sth_caching cursor on_connect_do
+ _conn_pid _conn_tid disable_sth_caching on_connect_do
transaction_depth unsafe _dbh_autocommit/
);
+ __PACKAGE__->cursor_class('DBIx::Class::Storage::DBI::Cursor');
+
BEGIN {
package DBIC::SQL::Abstract; # Would merge upstream, but nate doesn't reply :(
sub new {
my $new = shift->next::method(@_);
- $new->cursor("DBIx::Class::Storage::DBI::Cursor");
$new->transaction_depth(0);
$new->_sql_maker_opts({});
$new->{_in_dbh_do} = 0;
my $last_info = $dbi_info->[-1];
if(ref $last_info eq 'HASH') {
$last_info = { %$last_info }; # so delete is non-destructive
- for my $storage_opt (qw/on_connect_do disable_sth_caching unsafe/) {
+ for my $storage_opt (
+ qw/on_connect_do disable_sth_caching unsafe cursor_class/
+ ) {
if(my $value = delete $last_info->{$storage_opt}) {
$self->$storage_opt($value);
}
}
else {
$self->_verify_pid;
+ return 0 if !$self->_dbh;
}
return ($dbh->FETCH('Active') && $dbh->ping);
}
# if on-connect sql statements are given execute them
foreach my $sql_statement (@{$self->on_connect_do || []}) {
- $self->debugobj->query_start($sql_statement) if $self->debug();
+ $self->_query_start($sql_statement);
$self->_dbh->do($sql_statement);
- $self->debugobj->query_end($sql_statement) if $self->debug();
+ $self->_query_end($sql_statement);
}
$self->_conn_pid($$);
$dbh = DBI->connect(@info);
}
- if(!$self->unsafe) {
+ if($dbh && !$self->unsafe) {
my $weak_self = $self;
weaken($weak_self);
$dbh->{HandleError} = sub {
return ($sql, \@bind);
}
+ sub _fix_bind_params {
+ my ($self, @bind) = @_;
+
+ ### Turn @bind from something like this:
+ ### ( [ "artist", 1 ], [ "cdid", 1, 3 ] )
+ ### to this:
+ ### ( "'1'", "'1'", "'3'" )
+ return
+ map {
+ if ( defined $_ && defined $_->[1] ) {
+ map { qq{'$_'}; } @{$_}[ 1 .. $#$_ ];
+ }
+ else { q{'NULL'}; }
+ } @bind;
+ }
+
+ sub _query_start {
+ my ( $self, $sql, @bind ) = @_;
+
+ if ( $self->debug ) {
+ @bind = $self->_fix_bind_params(@bind);
+ $self->debugobj->query_start( $sql, @bind );
+ }
+ }
+
+ sub _query_end {
+ my ( $self, $sql, @bind ) = @_;
+
+ if ( $self->debug ) {
+ @bind = $self->_fix_bind_params(@bind);
+ $self->debugobj->query_end( $sql, @bind );
+ }
+ }
+
sub _dbh_execute {
my ($self, $dbh, $op, $extra_bind, $ident, $bind_attributes, @args) = @_;
my ($sql, $bind) = $self->_prep_for_execute($op, $extra_bind, $ident, \@args);
- if ($self->debug) {
- my @debug_bind =
- map { defined ($_ && $_->[1]) ? qq{'$_->[1]'} : q{'NULL'} } @$bind;
- $self->debugobj->query_start($sql, @debug_bind);
- }
+ $self->_query_start( $sql, @$bind );
my $sth = $self->sth($sql,$op);
my $rv = $sth->execute();
$self->throw_exception($sth->errstr) if !$rv;
- if ($self->debug) {
- my @debug_bind =
- map { defined ($_ && $_->[1]) ? qq{'$_->[1]'} : q{'NULL'} } @$bind;
- $self->debugobj->query_end($sql, @debug_bind);
- }
+ $self->_query_end( $sql, @$bind );
return (wantarray ? ($rv, $sth, @$bind) : $rv);
}
@colvalues{@$cols} = (0..$#$cols);
my ($sql, @bind) = $self->sql_maker->insert($table, \%colvalues);
- if ($self->debug) {
- my @debug_bind = map { defined $_->[1] ? qq{$_->[1]} : q{'NULL'} } @bind;
- $self->debugobj->query_start($sql, @debug_bind);
- }
+ $self->_query_start( $sql, @bind );
my $sth = $self->sth($sql);
# @bind = map { ref $_ ? ''.$_ : $_ } @bind; # stringify args
my $rv = $sth->execute_array({ArrayTupleStatus => $tuple_status});
$self->throw_exception($sth->errstr) if !$rv;
- if ($self->debug) {
- my @debug_bind = map { defined $_ ? qq{`$_'} : q{`NULL'} } @bind;
- $self->debugobj->query_end($sql, @debug_bind);
- }
+ $self->_query_end( $sql, @bind );
return (wantarray ? ($rv, $sth, @bind) : $rv);
}
} else {
$self->throw_exception("rows attribute must be positive if present")
if (defined($attrs->{rows}) && !($attrs->{rows} > 0));
+
+ # MySQL actually recommends this approach. I cringe.
+ $attrs->{rows} = 2**48 if not defined $attrs->{rows} and defined $attrs->{offset};
push @args, $attrs->{rows}, $attrs->{offset};
}
return $self->_execute(@args);
sub select {
my $self = shift;
my ($ident, $select, $condition, $attrs) = @_;
- return $self->cursor->new($self, \@_, $attrs);
+ return $self->cursor_class->new($self, \@_, $attrs);
}
sub select_single {
next if($line =~ /^BEGIN TRANSACTION/m);
next if($line =~ /^COMMIT/m);
next if $line =~ /^\s+$/; # skip whitespace only
- $self->debugobj->query_start($line) if $self->debug;
+ $self->_query_start($line);
eval {
$self->dbh->do($line); # shouldn't be using ->dbh ?
};
if ($@) {
warn qq{$@ (running "${line}")};
}
- $self->debugobj->query_end($line) if $self->debug;
+ $self->_query_end($line);
}
}
}