Build
Build.bat
MANIFEST
+MANIFEST.bak
META.yml
Makefile
Makefile.old
Revision history for DBIx::Class
+0.08109 2009-08-18 08:35:00 (UTC)
+ - Replication updates:
+ - Improved the replication tests so that they are more reliable
+ and accurate, and hopefully solve some cross platform issues.
+ - Bugfixes related to naming particular replicants in a
+ 'force_pool' attribute.
+ - Lots of documentation updates, including a new Introduction.pod
+ file.
+ - Fixed the way we detect transactions to make this more reliable
+ and forward-looking.
+ - Fixed some trouble with the way Moose Types are used.
+ - Made discard_changes/get_from_storage replication aware (they
+ now read from the master storage by default)
+ - Refactor of MSSQL storage drivers, with some new features:
+ - Support for placeholders for MSSQL via DBD::Sybase with proper
+ autodetection
+ - 'uniqueidentifier' support with auto newid()
+ - Dynamic cursor support and other MARS options for ODBC
+ - savepoints with auto_savepoint => 1
+ - Support for MSSQL 'money' type
+ - Support for 'smalldatetime' type used in MSSQL and Sybase for
+ InflateColumn::DateTime
+ - Support for Postgres 'timestamp without timezone' type in
+ InflateColumn::DateTime (RT#48389)
+ - Added new MySQL specific on_connect_call macro 'set_strict_mode'
+ (also known as make_mysql_not_suck_as_much)
+ - Multiple prefetch-related fixes:
+ - Adjust overly aggressive subquery join-chain pruning
+ - Always preserve the outer join-chain - fixes numerous
+ problems with search_related chaining
+ - Deal with the distinct => 1 attribute properly when using
+ prefetch
+ - An extension of the select-hashref syntax, allowing labeling
+ SQL-side aliasing: select => [ { max => 'foo', -as => 'bar' } ]
+ - Massive optimization of the DBI storage layer - reduced the
+ number of connected() ping-calls
+ - Some fixes of multi-create corner cases
+ - Multiple POD improvements
+ - Added exception when resultset is called without an argument
+ - Improved support for non-schema-qualified tables under
+ Postgres (fixed last_insert_id sequence name auto-detection)
+
+0.08108 2009-07-05 23:15:00 (UTC)
+ - Fixed the has_many prefetch with limit/group deficiency -
+ it is now possible to select "top 5 commenters" while
+ prefetching all their comments
+ - New resultset method count_rs, returns a ::ResultSetColumn
+ which in turn returns a single count value
+ - Even better support of count with limit
+ - New on_connect_call/on_disconnect_call functionality (check
+ POD of Storage::DBI)
+ - Automatic datetime handling environment/session setup for
+ Oracle via connect_call_datetime_setup()
+ - count/all on related left-joined empty resultsets now correctly
+ return 0/()
+ - Fixed regression when both page and offset are specified on
+ a resultset
+ - Fixed HRI returning too many empty results on multi-level
+ nonexistent prefetch
+ - make_column_dirty() now overwrites the deflated value with an
+ inflated one if such exists
+ - Fixed set_$rel with where restriction deleting rows outside
+ the restriction
+ - populate() returns the created objects or an arrayref of the
+ created objects depending on scalar vs. list context
+ - Fixed find_related on 'single' relationships - the former
+ implementation would overspecify the WHERE condition, reporting
+ no related objects when in fact there is one
+ - SQL::Translator::Parser::DBIx::Class now attaches tables to the
+ central schema object in relationship dependency order
+ - Fixed regression in set_column() preventing sourceless object
+ manipulations
+ - Fixed a bug in search_related doubling a join if the original
+ $rs already joins/prefetches the same relation
+ - Storage::DBI::connected() improvements for Oracle and Sybase
+ - Fixed prefetch+incomplete select regression introduced in
+ 0.08100
+ - MSSQL limit (TOP emulation) fixes and improvements
+
+0.08107 2009-06-14 08:21:00 (UTC)
+ - Fix serialization regression introduced in 0.08103 (affects
+ Cursor::Cached)
+ - POD fixes
+ - Fixed incomplete ::Replicated debug output
+
+0.08106 2009-06-11 21:42:00 (UTC)
+ - Switched SQLite storage driver to DateTime::Format::SQLite
+ (proper timezone handling)
+ - Fix more test problems
+
+0.08105 2009-06-11 19:04:00 (UTC)
+ - Update of numeric columns now properly uses != to determine
+ dirtiness instead of the usual eq
+ - Fixes to IC::DT tests
+ - Fixed exception when undef_if_invalid and timezone are both set
+ on an invalid datetime column
+
+0.08104 2009-06-10 13:38:00 (UTC)
+ - order_by now can take \[$sql, @bind] as in
+ order_by => { -desc => \['colA LIKE ?', 'somestring'] }
+ - SQL::Abstract errors are now properly croak()ed with the
+ correct trace
+ - populate() now properly reports the dataset slice in case of
+ an exception
+ - Fixed corner case when populate() erroneously falls back to
+ create()
+ - Work around braindead mysql when doing subquery counts on
+ resultsets containing identically named columns from several
+ tables
+ - Fixed m2m add_to_$rel to invoke find_or_create on the far
+ side of the relation, to avoid duplicates
+ - DBIC now properly handles empty inserts (invoking all default
+ values from the DB, normally via INSERT INTO tbl DEFAULT VALUES)
+ - Fix find_or_new/create to stop returning random rows when
+ default value insert is requested (RT#28875)
+ - Make IC::DT extra warning state the column name too
+ - It is now possible to transparently search() on columns
+ requiring DBI bind (e.g. PostgreSQL BLOB)
+ - as_query is now a Storage::DBI method, so custom cursors can
+ be seamlessly used
+ - Fix search_related regression introduced in 0.08103
+
+0.08103 2009-05-26 19:50:00 (UTC)
+ - Multiple $resultset -> count/update/delete fixes. Now any
+ of these operations will succeed, regardless of the complexity
+ of $resultset. distinct, group_by, join, prefetch are all
+ supported with expected results
+ - Return value of $rs->delete is now the storage return value
+ and not 1 as it used to be
+ - Don't pass SQL functions into GROUP BY
+ - Remove MultiDistinctEmulation.pm, effectively deprecating
+ { select => { distinct => [ qw/col1 col2/ ] } }
+ - Change ->count code to work correctly with DISTINCT (distinct => 1)
+ via GROUP BY
+ - Removed interpolation of bind vars for as_query - placeholders
+ are preserved and nested query bind variables are properly
+ merged in the correct order
+ - Refactor DBIx::Class::Storage::DBI::Sybase to automatically
+ load a subclass, namely Microsoft_SQL_Server.pm
+ (similar to DBIx::Class::Storage::DBI::ODBC)
+ - Refactor InflateColumn::DateTime to allow components to
+ circumvent DateTime parsing
+ - Support inflation of timestamp datatype
+ - Support BLOB and CLOB datatypes on Oracle
+ - Storage::DBI::Replicated::Balancer::Random:
+ added master_read_weight
+ - Storage::DBI::Replicated: storage opts from connect_info,
+ connect_info merging to replicants, hashref connect_info support,
+ improved trace output, other bug fixes/cleanups
+ - distinct => 1 with prefetch now groups by all columns
+ - on_connect_do accepts a single string equivalent to a one
+ element arrayref (RT#45159)
+ - DB2 limit + offset now works correctly
+ - Sybase now supports autoinc PKs (RT#40265)
+ - Prefetch on joins over duplicate relations now works
+ correctly (RT#28451)
+ - "timestamp with time zone" columns (for Pg) now get inflated with a
+ time zone information preserved
+ - MSSQL Top limit-emulation improvements (GROUP BY and subquery support)
+ - ResultSetColumn will not lose the joins inferred from a parent
+ resultset prefetch
+
+0.08102 2009-04-30 08:29:00 (UTC)
+ - Fixed two subtle bugs when using columns or select/as
+ paired with a join (limited prefetch)
+ - Fixed breakage of cdbi tests (RT#45551)
+ - Some POD improvements
+
+0.08101 2009-04-27 09:45:00 (UTC)
+ - Fix +select, +as, +columns and include_columns being stripped
+ by $rs->get_column
+ - move load_optional_class from DBIx::Class::Componentised to
+ Class::C3::Componentised, bump dependency
+ - register_extra_source() now *really* fixed wrt subclassing
+ - Added missing POD descriptions (RT#45195)
+ - Fix insert() to not store_column() every present object column
+ - Multiple Makefile.PL fixes
+
+0.08100 2009-04-19 11:39:35 (UTC)
+ - Todo out the register_extra_source test until after shipping
+
+0.08099_08 2009-03-30 00:00:00 (UTC)
+ - Fixed taint mode with load_namespaces
+ - Putting IC::DateTime locale, timezone or floating_tz_ok attributes into
+ extra => {} has been deprecated. The new way is to put these things
+ directly into the column definitions
+ - Switched MI code to MRO::Compat
+ - Document db-side default_value caveats
+ - search_like() now warns to indicate deprecation in 0.09.
+ - TxnScopeGuard left experimental state
+
0.08099_07 2009-02-27 02:00:00 (UTC)
- multi-create using find_or_create rather than _related for post-insert
- fix get_inflated_columns to check has_column_loaded
- not try and insert things tagged on via new_related unless required
- Possible to set locale in IC::DateTime extra => {} config
- Calling the accessor of a belongs_to when the foreign_key
- was NULL and the row was not stored would unexpectedly fail (groditi)
+ was NULL and the row was not stored would unexpectedly fail
- Split sql statements for deploy only if SQLT::Producer returned a scalar
containing all statements to be executed
- Add as_query() for ResultSet and ResultSetColumn. This makes subqueries
- possible. See the Cookbook for details. (robkinyon, michaelr)
+ possible. See the Cookbook for details.
- Massive rewrite of Ordered to properly handle position constraints and
to make it more matpath-friendly
- deploy_statements called ddl_filename with the $version and $dir arguments
- in the wrong order. (groditi)
+ in the wrong order.
- columns/+columns attributes now support { as => select } hashrefs
- support for views both in DBIC and via deploy() in SQLT
- new order_by => { -desc => 'colname' } syntax supported
- PG array datatype supported
- insert should use store_column, not set_column to avoid marking
- clean just-stored values as dirty. New test for this (groditi)
- - regression test for source_name (groditi)
+ clean just-stored values as dirty. New test for this
+ - regression test for source_name
0.08099_05 2008-10-30 21:30:00 (UTC)
- - Rewritte of Storage::DBI::connect_info(), extended with an
+ - Rewrite of Storage::DBI::connect_info(), extended with an
additional argument format type
- InflateColumn::DateTime: add warning about floating timezone
- InflateColumn::DateTime: possible to enforce/skip inflation
- - delete throws exception if passed arguments to prevent drunken mishaps. (purge)
+ - delete throws exception if passed arguments to prevent drunken mishaps.
- Fix storage to copy scalar conds before regexping to avoid
trying to modify a constant in odd edge cases
- Related resultsets on uninserted objects are now empty
- "belongs_to" to "contains/refers/something"
Using inflated objects/references as values in searches
- - Goes together with subselects above
- should deflate then run search
-FilterColumn - like Inflate, only for changing scalar values
- - This seems to be vaporware atm..
-
SQL/API feature complete?
- UNION
- proper join conditions!
Moosification - ouch
+Metamodel stuff - introspection
+
Prefetch improvements
- slow on mysql, speedup?
- multi has_many prefetch
- - paging working with prefetch
Magically "discover" needed joins/prefetches and add them
- eg $books->search({ 'author.name' => 'Fred'}), autoadds: join => 'author'
- also guess aliases when supplying column names that are on joined/related tables
-Metamodel stuff - introspection
-
Storage API/restructure
- call update/insert etc on the ResultSource, which then calls to storage
- handle different storages/db-specific code better
Documentation - improvements
- better indexing for finding of stuff in general
- more cross-referencing of docs
-
+^(?!script/|examples/|lib/|inc/|t/|Makefile.PL$|README$|MANIFEST$|Changes$|META.yml$)
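+# (The lookahead above means: skip anything that does not live under script/,
+# examples/, lib/, inc/ or t/ and is not one of the listed top-level files.)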
+
+
# Avoid version control files.
\bRCS\b
\bCVS\b
-use inc::Module::Install 0.67;
+use inc::Module::Install 0.89;
use strict;
use warnings;
use POSIX ();
perl_version '5.006001';
all_from 'lib/DBIx/Class.pm';
-requires 'Data::Page' => 2.00;
+
+test_requires 'Test::Builder' => 0.33;
+test_requires 'Test::Deep' => 0;
+test_requires 'Test::Exception' => 0;
+test_requires 'Test::More' => 0.92;
+test_requires 'Test::Warn' => 0.11;
+
+test_requires 'File::Temp' => 0.22;
+
+
+# Core
+requires 'List::Util' => 0;
requires 'Scalar::Util' => 0;
-requires 'SQL::Abstract' => 1.49;
-requires 'SQL::Abstract::Limit' => 0.13;
-requires 'Class::C3' => 0.20;
-requires 'Class::C3::Componentised' => 0;
requires 'Storable' => 0;
-requires 'Carp::Clan' => 0;
-requires 'DBI' => 1.40;
-requires 'Module::Find' => 0;
-requires 'Class::Inspector' => 0;
-requires 'Class::Accessor::Grouped' => 0.08002;
-requires 'JSON::Any' => 1.17;
-requires 'Scope::Guard' => 0.03;
-requires 'Path::Class' => 0;
-requires 'List::Util' => 1.19;
-requires 'Sub::Name' => 0.04;
# Perl 5.8.0 doesn't have utf8::is_utf8()
-requires 'Encode' => 0 if ($] <= 5.008000);
-
-# configure_requires so the sanity check below can run
-configure_requires 'DBD::SQLite' => 1.14;
+requires 'Encode' => 0 if ($] <= 5.008000);
-test_requires 'Test::Builder' => 0.33;
-test_requires 'Test::Warn' => 0.11;
-test_requires 'Test::Exception' => 0;
-test_requires 'Test::Deep' => 0;
+# Dependencies (keep in alphabetical order)
+requires 'Carp::Clan' => 6.0;
+requires 'Class::Accessor::Grouped' => 0.08003;
+requires 'Class::C3::Componentised' => 1.0005;
+requires 'Class::Inspector' => 1.24;
+requires 'Data::Page' => 2.00;
+requires 'DBD::SQLite' => 1.25;
+requires 'DBI' => 1.605;
+requires 'JSON::Any' => 1.18;
+requires 'MRO::Compat' => 0.09;
+requires 'Module::Find' => 0.06;
+requires 'Path::Class' => 0.16;
+requires 'Scope::Guard' => 0.03;
+requires 'SQL::Abstract' => 1.56;
+requires 'SQL::Abstract::Limit' => 0.13;
+requires 'Sub::Name' => 0.04;
recommends 'SQL::Translator' => 0.09004;
-install_script 'script/dbicadmin';
-
-tests_recursive 't';
-
-# re-build README and require extra modules for testing if we're in a checkout
+my %replication_requires = (
+ 'Moose' => 0.87,
+ 'MooseX::AttributeHelpers' => 0.21,
+ 'MooseX::Types' => 0.16,
+ 'namespace::clean' => 0.11,
+ 'Hash::Merge' => 0.11,
+);
my %force_requires_if_author = (
+ %replication_requires,
+
+# 'Module::Install::Pod::Inherit' => 0.01,
'Test::Pod::Coverage' => 1.04,
- 'SQL::Translator' => 0.09004,
+ 'SQL::Translator' => 0.09007,
# CDBI-compat related
'DBIx::ContextualFetch' => 0,
+ 'Class::DBI::Plugin::DeepAbstractSearch' => 0,
'Class::Trigger' => 0,
- 'Time::Piece' => 0,
+ 'Time::Piece::MySQL' => 0,
'Clone' => 0,
+ 'Date::Simple' => 3.03,
# t/52cycle.t
'Test::Memory::Cycle' => 0,
+ 'Devel::Cycle' => 1.10,
+ # t/36datetime.t
# t/60core.t
- 'DateTime::Format::MySQL' => 0,
-
- # t/93storage_replication.t
- 'Moose', => 0,
- 'MooseX::AttributeHelpers' => 0.12,
+ 'DateTime::Format::SQLite' => 0,
# t/96_is_deteministic_value.t
- 'DateTime::Format::Strptime' => 0,
+ 'DateTime::Format::Strptime'=> 0,
+
+ # database-dependent reqs
+ #
+ $ENV{DBICTEST_PG_DSN}
+ ? (
+ 'Sys::SigAction' => 0,
+ 'DBD::Pg' => 2.009002,
+ 'DateTime::Format::Pg' => 0,
+ ) : ()
+ ,
+
+ $ENV{DBICTEST_MYSQL_DSN}
+ ? (
+ 'DateTime::Format::MySQL' => 0,
+ ) : ()
+ ,
+
+ $ENV{DBICTEST_ORACLE_DSN}
+ ? (
+ 'DateTime::Format::Oracle' => 0,
+ ) : ()
+ ,
);
-if ($Module::Install::AUTHOR) {
- foreach my $module (keys %force_requires_if_author) {
- requires ($module => $force_requires_if_author{$module});
- }
+install_script (qw|
+ script/dbicadmin
+|);
- system('pod2text lib/DBIx/Class.pm > README');
-}
+tests_recursive (qw|
+ t
+|);
-auto_provides;
+resources 'IRC' => 'irc://irc.perl.org/#dbix-class';
+resources 'license' => 'http://dev.perl.org/licenses/';
+resources 'repository' => 'http://dev.catalyst.perl.org/svnweb/bast/browse/DBIx-Class/';
+resources 'MailingList' => 'http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbix-class';
-auto_install;
+no_index 'DBIx::Class::Storage::DBI::Sybase::Base';
+no_index 'DBIx::Class::SQLAHacks';
+no_index 'DBIx::Class::SQLAHacks::MSSQL';
+no_index 'DBIx::Class::Storage::DBI::AmbiguousGlob';
+no_index 'DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server';
+no_index 'DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server::NoBindVars';
-# Have all prerequisites, check DBD::SQLite sanity
-if (! $ENV{DBICTEST_NO_SQLITE_CHECK} ) {
+# re-build README and require extra modules for testing if we're in a checkout
- my $pid = fork();
- if (not defined $pid) {
- die "Unable to fork(): $!";
- }
- elsif (! $pid) {
-
- # Win32 does not have real fork()s so a segfault will bring
- # everything down. Warn about it.
- if ($^O eq 'MSWin32') {
- print <<'EOW';
-
-######################################################################
-# #
-# A short stress-testing of DBD::SQLite will follow. If you have a #
-# buggy library this might very well be the last text you will see #
-# before the installation silently terminates. If this happens it #
-# would mean that you are running a buggy version of DBD::SQLite #
-# known to randomly segfault on errors. Even if you have the latest #
-# CPAN module version, the system sqlite3 dynamic library might have #
-# been compiled against an older buggy sqlite3 dev library (oddly #
-# DBD::SQLite will prefer the system library against the one bundled #
-# with it). You are strongly advised to resolve this issue before #
-# proceeding. #
-# #
-# If this happens to you (this text is the last thing you see), and #
-# you just want to install this module without worrying about the #
-# tests (which will almost certainly fail) - set the environment #
-# variable DBICTEST_NO_SQLITE_CHECK to a true value and try again. #
-# #
-######################################################################
+if ($Module::Install::AUTHOR) {
+ warn <<'EOW';
+******************************************************************************
+******************************************************************************
+*** ***
+*** AUTHOR MODE: all optional test dependencies converted to hard requires ***
+*** ***
+******************************************************************************
+******************************************************************************
EOW
- }
-
- require DBI;
- for (1 .. 100) {
- my $dbh;
- $dbh = DBI->connect ('dbi:SQLite::memory:', undef, undef, {
- AutoCommit => 1,
- RaiseError => 0,
- PrintError => 0,
- })
- or die "Unable to connect to database: $@";
- $dbh->do ('CREATE TABLE name_with_no_columns'); # a subtle syntax error
- $dbh->do ('COMMIT'); # followed by commit
- $dbh->disconnect;
- }
-
- exit 0;
+
+ foreach my $module (sort keys %force_requires_if_author) {
+ build_requires ($module => $force_requires_if_author{$module});
}
- else {
- eval {
- local $SIG{ALRM} = sub { die "timeout\n" };
- alarm 5;
- wait();
- alarm 0;
- };
- my $exception = $@;
-
- my $sig = $? & 127;
-
-# make sure process actually dies
- $exception && kill POSIX::SIGKILL(), $pid;
-
- if ($exception || $sig == POSIX::SIGSEGV() || $sig == POSIX::SIGABRT()
- || $sig == 7) { # 7 == SIGBUS, haven't seen it but just in case
- warn (<<EOE);
-
-############################### WARNING #################################
-# #
-# You are running a buggy version of DBD::SQLite known to randomly #
-# segfault on errors. Even if you have the latest CPAN module version, #
-# the sqlite3 dynamic library on this system might have been compiled #
-# against an older buggy sqlite3 dev library (oddly DBD::SQLite will #
-# prefer the system library against the one bundled with it). You are #
-# strongly advised to resolve this issue before proceeding. #
-# #
-#########################################################################
-
-EOE
- my $ans = prompt (
- "The test suite of this module is almost certain to fail.\n"
- . 'Do you really want to continue?',
- 'no',
- );
- exit 0 unless ($ans =~ /^y(es)?$/i);
- }
+
+ print "Regenerating README\n";
+ system('pod2text lib/DBIx/Class.pm > README');
+
+ if (-f 'MANIFEST') {
+ print "Removing MANIFEST\n";
+ unlink 'MANIFEST';
}
+
+# require Module::Install::Pod::Inherit;
+# PodInherit();
}
+auto_install();
WriteAll();
-
+# Re-write META.yml to _exclude_ all forced requires (we do not want to ship this)
if ($Module::Install::AUTHOR) {
- # Need to do this _after_ WriteAll else it looses track of them
- Meta->{values}{build_requires} = [ grep {
- my $ok = 1;
- foreach my $module (keys %force_requires_if_author) {
- if ($_->[0] =~ /$module/) {
- $ok = 0;
- last;
- }
- }
- $ok;
- } @{Meta->{values}{build_requires}} ];
-
- my @scalar_keys = Module::Install::Metadata::Meta_TupleKeys();
- my $cr = Module::Install::Metadata->can("Meta_TupleKeys");
- {
- no warnings 'redefine';
- *Module::Install::Metadata::Meta_TupleKeys = sub {
- return $cr->(@_), 'resources';
- };
- }
- Meta->{values}{resources} = [
- [ 'MailingList', 'http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbix-class' ],
- [ 'IRC', 'irc://irc.perl.org/#dbix-class' ],
- [ 'license', 'http://dev.perl.org/licenses/' ],
- [ 'repository', 'http://dev.catalyst.perl.org/svnweb/bast/browse/DBIx-Class/' ],
+
+ Meta->{values}{build_requires} = [ grep
+ { not exists $force_requires_if_author{$_->[0]} }
+ ( @{Meta->{values}{build_requires}} )
];
+
Meta->write;
}
- ResultSource objects caching ->resultset causes interesting problems
- find why XSUB dumper kills schema in Catalyst (may be Pg only?)
-2006-04-11 by castaway
- - docs of copy() should say that is_auto_increment is essential for auto_incrementing keys
-
2006-03-25 by mst
- find a way to un-wantarray search without breaking compat
- - audit logging component
- delay relationship setup if done via ->load_classes
- double-sided relationships
- make short form of class specifier in relationships work
We should still support the old inflate/deflate syntax, but this new
way should be recommended.
-2006-02-07 by castaway
- - Extract DBIC::SQL::Abstract into a separate module for CPAN
-
2006-03-18 by bluefeet
- Support table locking.
if you haven't specified one of the others
2008-10-30 by ribasushi
- Leftovers for next dev-release
- Rewrite the test suite to rely on $schema->deploy, allowing for seamless
testing of various RDBMS using the same tests
- - Proper support of default create (i.e. create({}) ), with proper workarounds
- for different Storage's
- Automatically infer quote_char/name_sep from $schema->storage
- - Finally incorporate View support (needs real tests)
- Fix and properly test chained search attribute merging
-
-2008-11-07 by ribasushi
- - Be loud when a relationship resolution fails because we did not select/as
- a neccessary pk
- Recursive update() (all code seems to be already available)
- - $rs->populate changes its syntax depending on wantarray context (BAD)
- Also the interface differs from $schema->populate (not so good)
use strict;
use warnings;
+use MRO::Compat;
+
use vars qw($VERSION);
use base qw/DBIx::Class::Componentised Class::Accessor::Grouped/;
use DBIx::Class::StartupCheck;
-
-sub mk_classdata {
+sub mk_classdata {
shift->mk_classaccessor(@_);
}
sub mk_classaccessor {
my $self = shift;
- $self->mk_group_accessors('inherited', $_[0]);
+ $self->mk_group_accessors('inherited', $_[0]);
$self->set_inherited(@_) if @_ > 1;
}
# i.e. first release of 0.XX *must* be 0.XX000. This avoids fBSD ports
# brain damage and presumably various other packaging systems too
-$VERSION = '0.08099_07';
+$VERSION = '0.08109';
$VERSION = eval $VERSION; # numify for warning-free dev releases
1;
-Create a table class to represent artists, who have many CDs, in
+Create a result class to represent artists, who have many CDs, in
MyDB/Schema/Result/Artist.pm:
+See L<DBIx::Class::ResultSource> for docs on defining result classes.
+
package MyDB::Schema::Result::Artist;
use base qw/DBIx::Class/;
1;
-A table class to represent a CD, which belongs to an artist, in
+A result class to represent a CD, which belongs to an artist, in
MyDB/Schema/Result/CD.pm:
package MyDB::Schema::Result::CD;
# Query for all artists and put them in an array,
# or retrieve them as a result set object.
+ # $schema->resultset returns a DBIx::Class::ResultSet
my @all_artists = $schema->resultset('Artist')->all;
my $all_artists_rs = $schema->resultset('Artist');
+ # Output all artists' names.
+ # $artist here is a DBIx::Class::Row, which has accessors
+ # for all its columns. Rows are also subclasses of your Result class.
+ foreach my $artist (@all_artists) {
+   print $artist->name, "\n";
+ }
+
# Create a result set to search for artists.
# This does not query the DB.
my $johns_rs = $schema->resultset('Artist')->search(
ank: Andres Kievsky
+arcanez: Justin Hunter <justin.d.hunter@gmail.com>
+
ash: Ash Berlin <ash@cpan.org>
bert: Norbert Csongradi <bert@cpan.org>
caelum: Rafael Kitover <rkitover@cpan.org>
-captainL: Luke Saunders <luke.saunders@gmail.com>
-
castaway: Jess Robinson
claco: Christopher H. Laco
dyfrgi: Michael Leuchtenburg <michael@slashhome.org>
+frew: Arthur Axel "fREW" Schmidt <frioux@gmail.com>
+
gphat: Cory G Watson <gphat@cpan.org>
groditi: Guillermo Roditi <groditi@cpan.org>
+ilmari: Dagfinn Ilmari MannsE<aring>ker <ilmari@ilmari.org>
+
+jasonmay: Jason May <jason.a.may@gmail.com>
+
jesper: Jesper Krogh
jgoulah: John Goulah <jgoulah@cpan.org>
konobi: Scott McWhirter
+lukes: Luke Saunders <luke.saunders@gmail.com>
+
marcus: Marcus Ramberg <mramberg@cpan.org>
mattlaw: Matt Lawrence
ningu: David Kamholz <dkamholz@cpan.org>
+Nniuq: Ron "Quinn" Straight" <quinnfazigu@gmail.org>
+
+norbi: Norbert Buchmuller <norbi@nix.hu>
+
Numa: Dan Sully <daniel@cpan.org>
oyse: Øystein Torget <oystein.torget@dnv.com>
perigrin: Chris Prather <chris@prather.org>
+peter: Peter Collingbourne <peter@pcc.me.uk>
+
phaylon: Robert Sedlacek <phaylon@dunkelheit.at>
plu: Johannes Plunien <plu@cpan.org>
rafl: Florian Ragwitz <rafl@debian.org>
+rbuels: Robert Buels <rmb32@cornell.edu>
+
rdj: Ryan D Johnson <ryan@innerfence.com>
ribasushi: Peter Rabbitson <rabbit+dbic@rabbit.us>
semifor: Marc Mims <marc@questright.com>
+solomon: Jared Johnson <jaredj@nmgi.com>
+
+spb: Stephen Bennett <stephen@freenode.net>
+
sszabo: Stephan Szabo <sszabo@bigpanda.com>
teejay: Aaron Trevena <teejay@cpan.org>
willert: Sebastian Willert <willert@cpan.org>
-zamolxes: Bogdan Lucaciu <bogdan@wiz.ro>
+wreis: Wallace Reis <wreis@cpan.org>
-norbi: Norbert Buchmuller <norbi@nix.hu>
+zamolxes: Bogdan Lucaciu <bogdan@wiz.ro>
=head1 LICENSE
DBIx::ContextualFetch
Clone
);
-
+
my @didnt_load;
for my $module (@Extra_Modules) {
push @didnt_load, $module unless eval qq{require $module};
package Foo;
use base qw(Class::DBI);
-
+
Foo->table("foo");
Foo->columns( All => qw(this that bar) );
package Bar;
use base qw(Class::DBI);
-
+
Bar->table("bar");
Bar->columns( All => qw(up down) );
=head1 NAME
-DBIx::Class::CDBICompat::AbstractSearch
+DBIx::Class::CDBICompat::AbstractSearch - Emulates Class::DBI::AbstractSearch
=head1 SYNOPSIS
sub has_a {
my($self, $col, @rest) = @_;
-
+
$self->_declare_has_a(lc $col, @rest);
$self->_mk_inflated_column_accessor($col);
-
+
return 1;
}
sub _has_custom_accessor {
my($class, $name) = @_;
-
+
no strict 'refs';
my $existing_accessor = *{$class .'::'. $name}{CODE};
return $existing_accessor && !$our_accessors{$existing_accessor};
my $fullname = join '::', $class, $name;
*$fullname = Sub::Name::subname $fullname, $accessor;
}
-
+
$our_accessors{$accessor}++;
return 1;
# warn " $field $alias\n";
{
no strict 'refs';
-
+
$class->_deploy_accessor($name, $accessor);
$class->_deploy_accessor($alias, $accessor);
}
return map { $class->find_column($_) } @col;
}
-package DBIx::Class::CDBICompat::ColumnGroups::GrouperShim;
+package # hide from PAUSE (should be harmless, no POD no Version)
+ DBIx::Class::CDBICompat::ColumnGroups::GrouperShim;
sub groups_for {
my ($self, @cols) = @_;
}
return keys %groups;
}
-
1;
=head1 NAME
-DBIx::Class::CDBICompat::ColumnsAsHash
+DBIx::Class::CDBICompat::ColumnsAsHash - Emulates the behavior of Class::DBI where the object can be accessed as a hash of columns.
=head1 SYNOPSIS
my $class = shift;
my $new = $class->next::method(@_);
-
+
$new->_make_columns_as_hash;
-
+
return $new;
}
sub _make_columns_as_hash {
my $self = shift;
-
+
for my $col ($self->columns) {
if( exists $self->{$col} ) {
warn "Skipping mapping $col to a hash key because it exists";
=head1 NAME
-DBIx::Class::CDBICompat::Copy
+DBIx::Class::CDBICompat::Copy - Emulates Class::DBI->copy($new_id)
=head1 SYNOPSIS
sub copy {
my($self, $arg) = @_;
return $self->next::method($arg) if ref $arg;
-
+
my @primary_columns = $self->primary_columns;
croak("Need hash-ref to edit copied column values")
if @primary_columns > 1;
$self->throw_exception( "No relationship to JOIN from ${from_class} to ${to_class}" )
unless $rel_obj;
my $join = $from_class->storage->sql_maker->_join_condition(
- $from_class->result_source_instance->resolve_condition(
+ $from_class->result_source_instance->_resolve_condition(
$rel_obj->{cond}, $to, $from) );
return $join;
}
-
+
} );
sub db_Main {
sub transform_sql {
my ($class, $sql, @args) = @_;
-
+
my $tclass = $class->sql_transformer_class;
$class->ensure_class_loaded($tclass);
my $t = $tclass->new($class, $sql, @args);
=head1 NAME
-DBIx::Class::CDBICompat::Iterator
+DBIx::Class::CDBICompat::Iterator - Emulates the extra behaviors of the Class::DBI search iterator.
=head1 SYNOPSIS
sub _init_result_source_instance {
my $class = shift;
-
+
my $table = $class->next::method(@_);
$table->resultset_class("DBIx::Class::CDBICompat::Iterator::ResultSet");
# request in case the database modifies the new value (say, via a trigger)
sub update {
my $self = shift;
-
+
my @dirty_columns = keys %{$self->{_dirty_columns}};
-
+
my $ret = $self->next::method(@_);
$self->_clear_column_data(@dirty_columns);
-
+
return $ret;
}
sub create {
my $class = shift;
my($data) = @_;
-
+
my @columns = keys %$data;
-
+
my $obj = $class->next::method(@_);
return $obj unless defined $obj;
-
+
my %primary_cols = map { $_ => 1 } $class->primary_columns;
my @data_cols = grep !$primary_cols{$_}, @columns;
$obj->_clear_column_data(@data_cols);
sub _clear_column_data {
my $self = shift;
-
+
delete $self->{_column_data}{$_} for @_;
delete $self->{_inflated_column}{$_} for @_;
}
for my $col ($self->primary_columns) {
$changes->{$col} = undef unless exists $changes->{$col};
}
-
+
return $self->next::method($changes);
}
sub nocache {
my $class = shift;
-
+
return $class->__nocache(@_) if @_;
-
+
return 1 if $Class::DBI::Weaken_Is_Available == 0;
return $class->__nocache;
}
sub insert {
my ($self, @rest) = @_;
$self->next::method(@rest);
-
+
return $self if $self->nocache;
- # Because the insert will die() if it can't insert into the db (or should)
- # we can be sure the object *was* inserted if we got this far. In which
- # case, given primary keys are unique and ID only returns a
- # value if the object has all its primary keys, we can be sure there
- # isn't a real one in the object index already because such a record
- # cannot have existed without the insert failing.
+ # Because the insert will die() if it can't insert into the db (or should)
+ # we can be sure the object *was* inserted if we got this far. In which
+ # case, given primary keys are unique and ID only returns a
+ # value if the object has all its primary keys, we can be sure there
+ # isn't a real one in the object index already because such a record
+ # cannot have existed without the insert failing.
if (my $key = $self->ID) {
my $live = $self->live_object_index;
weaken($live->{$key} = $self);
if ++$self->live_object_init_count->{count}
% $self->purge_object_index_every == 0;
}
- #use Data::Dumper; warn Dumper($self);
+
return $self;
}
sub inflate_result {
my ($class, @rest) = @_;
my $new = $class->next::method(@rest);
-
+
return $new if $new->nocache;
-
+
if (my $key = $new->ID) {
#warn "Key $key";
my $live = $class->live_object_index;
=head1 NAME
-DBIx::Class::CDBICompat::NoObjectIndex
+DBIx::Class::CDBICompat::NoObjectIndex - Defines empty methods for object indexing. They do nothing
=head1 SYNOPSIS
package # hide from PAUSE
DBIx::Class::CDBICompat::Pager;
-\r
+
use strict;
use warnings FATAL => 'all';
-\r
+
*pager = \&page;
-\r
+
sub page {
my $class = shift;
-\r
+
my $rs = $class->search(@_);
unless ($rs->{attrs}{page}) {
$rs = $rs->page(1);
}
return ( $rs->pager, $rs );
}
-\r
+
1;
=head1 NAME
-DBIx::Class::CDBICompat::Relationship
+DBIx::Class::CDBICompat::Relationship - Emulate the Class::DBI::Relationship object returned from meta_info()
=head1 DESCRIPTION
sub new {
my($class, $args) = @_;
-
+
return bless $args, $class;
}
my $code = sub {
$_[0]->{$key};
};
-
+
no strict 'refs';
*{$method} = Sub::Name::subname $method, $code;
}
=head1 NAME
-DBIx::Class::CDBICompat::Relationships
+DBIx::Class::CDBICompat::Relationships - Emulate has_a(), has_many(), might_have() and meta_info()
=head1 DESCRIPTION
sub has_a {
my($self, $col, @rest) = @_;
-
+
$self->_declare_has_a($col, @rest);
$self->_mk_inflated_column_accessor($col);
-
+
return 1;
}
$self->throw_exception( "No such column ${col}" )
unless $self->has_column($col);
$self->ensure_class_loaded($f_class);
-
+
my $rel_info;
if ($args{'inflate'} || $args{'deflate'}) { # Non-database has_a
$args{'deflate'} = sub { shift->$meth; };
}
$self->inflate_column($col, \%args);
-
+
$rel_info = {
class => $f_class
};
$self->belongs_to($col, $f_class);
$rel_info = $self->result_source_instance->relationship_info($col);
}
-
+
$rel_info->{args} = \%args;
-
+
$self->_extend_meta(
has_a => $col,
$rel_info
sub _mk_inflated_column_accessor {
my($class, $col) = @_;
-
+
return $class->mk_group_accessors('inflated_column' => $col);
}
sub might_have {
my ($class, $rel, $f_class, @columns) = @_;
-
+
my $ret;
if (ref $columns[0] || !defined $columns[0]) {
$ret = $class->next::method($rel, $f_class, @columns);
might_have => $rel,
$rel_info
);
-
+
return $ret;
}
$cond =~ s/^\s*WHERE//i;
- if( $cond =~ s/\bLIMIT (\d+)\s*$//i ) {
- push @rest, { rows => $1 };
+ # Need to parse the SQL clauses after WHERE in reverse
+ # order of appearance.
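+ # For example (illustrative only), a condition string such as
+ #   "foo = ? ORDER BY bar DESC LIMIT 5"
+ # ends up roughly as
+ #   search_literal('foo = ? ', @bind, { order_by => 'bar DESC', rows => 5 })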
+
+ my %attrs;
+
+ if( $cond =~ s/\bLIMIT\s+(\d+)\s*$//i ) {
+ $attrs{rows} = $1;
+ }
+
+ if ( $cond =~ s/\bORDER\s+BY\s+(.*)\s*$//i ) {
+ $attrs{order_by} = $1;
}
- return $class->search_literal($cond, @rest);
+ if( $cond =~ s/\bGROUP\s+BY\s+(.*)\s*$//i ) {
+ $attrs{group_by} = $1;
+ }
+
+ return $class->search_literal($cond, @rest, ( %attrs ? \%attrs : () ) );
}
sub construct {
my $class = shift;
my $obj = $class->resultset_instance->new_result(@_);
$obj->in_storage(1);
-
+
return $obj;
}
sub _add_column_group {
my ($class, $group, @cols) = @_;
-
+
return $class->next::method($group, @cols) unless $group eq 'TEMP';
my %new_cols = map { $_ => 1 } @cols;
sub set {
my($self, %data) = @_;
-
+
my $temp_data = $self->_extract_temp_data(\%data);
-
+
$self->set_temp($_, $temp_data->{$_}) for keys %$temp_data;
-
+
return $self->next::method(%data);
}
foreach my $first_comp (@comps) {
if ($to eq 'DBIx::Class::Core' &&
$target->isa("DBIx::Class::${first_comp}")) {
- warn "Possible incorrect order of components in ".
+ carp "Possible incorrect order of components in ".
"${target}::load_components($first_comp) call: Core loaded ".
"before $first_comp. See the documentation for ".
"DBIx::Class::$first_comp for more information";
$class->next::method($target, @to_inject);
}
-# Returns a true value if the specified class is installed and loaded
-# successfully, throws an exception if the class is found but not loaded
-# successfully, and false if the class is not installed
-sub load_optional_class {
- my ($class, $f_class) = @_;
- eval { $class->ensure_class_loaded($f_class) };
- my $err = $@; # so we don't lose it
- if (! $err) {
- return 1;
- }
- else {
- my $fn = (join ('/', split ('::', $f_class) ) ) . '.pm';
- if ($err =~ /Can't locate ${fn} in \@INC/ ) {
- return 0;
- }
- else {
- die $err;
- }
- }
-}
-
1;
sub _maybe_attach_source_to_schema {
my ($class, $source) = @_;
if (my $meth = $class->can('schema_instance')) {
- my $schema = $class->$meth;
- $schema->register_class($class, $class);
- my $new_source = $schema->source($class);
- %$source = %$new_source;
- $schema->source_registrations->{$class} = $source;
+ if (my $schema = $class->$meth) {
+ $schema->register_class($class, $class);
+ my $new_source = $schema->source($class);
+ %$source = %$new_source;
+ $schema->source_registrations->{$class} = $source;
+ }
}
}
sub result_source_instance {
my $class = shift;
$class = ref $class || $class;
-
+
if (@_) {
my $source = $_[0];
$class->_result_source_instance([$source, $class]);
return unless Scalar::Util::blessed($source);
if ($result_class ne $class) { # new class
- # Give this new class it's own source and register it.
+ # Give this new class its own source and register it.
$source = $source->new({
%$source,
source_name => $class,
else {
$msg = Carp::longmess($msg);
}
-
+
my $self = { msg => $msg };
bless $self => $class;
use strict;
use warnings;
use base qw/DBIx::Class/;
+use Carp::Clan qw/^DBIx::Class/;
=head1 NAME
__PACKAGE__->load_components(qw/InflateColumn::DateTime Core/);
__PACKAGE__->add_columns(
-    starts_when => { data_type => 'datetime' }
+    starts_when => { data_type => 'datetime' },
+    create_date => { data_type => 'date' }
);
NOTE: You B<must> load C<InflateColumn::DateTime> B<before> C<Core>. See
If you want to set a specific timezone and locale for that field, use:
__PACKAGE__->add_columns(
- starts_when => { data_type => 'datetime', extra => { timezone => "America/Chicago", locale => "de_DE" } }
+ starts_when => { data_type => 'datetime', timezone => "America/Chicago", locale => "de_DE" }
);
If you want to inflate no matter what data_type your column is,
__PACKAGE__->add_columns(
starts_when => { data_type => 'varchar', inflate_datetime => 1 }
);
-
+
__PACKAGE__->add_columns(
starts_when => { data_type => 'varchar', inflate_date => 1 }
);
It's also possible to explicitly skip inflation:
-
+
__PACKAGE__->add_columns(
starts_when => { data_type => 'datetime', inflate_datetime => 0 }
);
+NOTE: Don't rely on C<InflateColumn::DateTime> to parse date strings for you.
+The column is set directly for any non-references and C<InflateColumn::DateTime>
+is completely bypassed. Instead, use an input parser to create a DateTime
+object. For instance, if your user input comes as a 'YYYY-MM-DD' string, you can
+use C<DateTime::Format::ISO8601> thusly:
+
+ use DateTime::Format::ISO8601;
+ my $dt = DateTime::Format::ISO8601->parse_datetime('YYYY-MM-DD');
+
=head1 DESCRIPTION
This module figures out the type of DateTime::Format::* class to
that this feature is new as of 0.07, so it may not be perfect yet - bug
reports to the list very much welcome).
+If the data_type of a field is C<date>, C<datetime> or C<timestamp> (or
+a derivative of these datatypes, e.g. C<timestamp with timezone>), this
+module will automatically call the appropriate parse/format method for
+deflation/inflation as defined in the storage class. For instance, for
+a C<datetime> field the methods C<parse_datetime> and C<format_datetime>
+would be called on deflation/inflation. If the storage class does not
+provide a specialized inflator/deflator, C<[parse|format]_datetime> will
+be used as a fallback. See L<DateTime::Format> for more information on
+date formatting.
+
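+For example (a minimal sketch - the column name below is illustrative only),
+given a column declared as:
+
+  __PACKAGE__->add_columns(
+    created_at => { data_type => 'timestamp' },
+  );
+
+inflation would prefer the storage's C<parse_timestamp> method and fall back
+to C<parse_datetime>, while deflation would prefer C<format_timestamp> and
+fall back to C<format_datetime>.
+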
For more help with using components, see L<DBIx::Class::Manual::Component/USING>.
=cut
In the case of an invalid date, L<DateTime> will throw an exception. To
bypass these exceptions and just have the inflation return undef, use
the C<datetime_undef_if_invalid> option in the column info:
-
+
"broken_date",
{
data_type => "datetime",
my $type;
- for (qw/date datetime/) {
+ for (qw/date datetime timestamp/) {
my $key = "inflate_${_}";
next unless exists $info->{$key};
unless ($type) {
$type = lc($info->{data_type});
- $type = 'datetime' if ($type =~ /^timestamp/);
+ if ($type eq "timestamp with time zone" || $type eq "timestamptz") {
+ $type = "timestamp";
+ $info->{_ic_dt_method} ||= "timestamp_with_timezone";
+ } elsif ($type eq "timestamp without time zone") {
+ $type = "timestamp";
+ $info->{_ic_dt_method} ||= "timestamp_without_timezone";
+ } elsif ($type eq "smalldatetime") {
+ $type = "datetime";
+ $info->{_ic_dt_method} ||= "datetime";
+ }
}
my $timezone;
if ( defined $info->{extra}{timezone} ) {
+ carp "Putting timezone into extra => { timezone => '...' } has been deprecated, ".
+ "please put it directly into the '$column' column definition.";
$timezone = $info->{extra}{timezone};
}
my $locale;
if ( defined $info->{extra}{locale} ) {
+ carp "Putting locale into extra => { locale => '...' } has been deprecated, ".
+ "please put it directly into the '$column' column definition.";
$locale = $info->{extra}{locale};
}
+ $locale = $info->{locale} if defined $info->{locale};
+ $timezone = $info->{timezone} if defined $info->{timezone};
+
my $undef_if_invalid = $info->{datetime_undef_if_invalid};
- if ($type eq 'datetime' || $type eq 'date') {
- my ($parse, $format) = ("parse_${type}", "format_${type}");
-
- # This assignment must happen here, otherwise Devel::Cycle treats
- # the resulting deflator as a circular reference (go figure):
- #
- # Cycle #1
- # DBICTest::Schema A->{source_registrations} => %B
- # %B->{Event} => DBIx::Class::ResultSource::Table C
- # DBIx::Class::ResultSource::Table C->{_columns} => %D
- # %D->{created_on} => %E
- # %E->{_inflate_info} => %F
- # %F->{deflate} => &G
- # closure &G, $info => $H
- # $H => %E
- #
- my $floating_tz_ok = $info->{extra}{floating_tz_ok};
+ if ($type eq 'datetime' || $type eq 'date' || $type eq 'timestamp') {
+ # This shallow copy of %info avoids t/52_cycle.t treating
+ # the resulting deflator as a circular reference.
+ my %info = ( '_ic_dt_method' => $type , %{ $info } );
+
+ if (defined $info->{extra}{floating_tz_ok}) {
+ carp "Putting floating_tz_ok into extra => { floating_tz_ok => 1 } has been deprecated, ".
+ "please put it directly into the '$column' column definition.";
+ $info{floating_tz_ok} = $info->{extra}{floating_tz_ok};
+ }
$self->inflate_column(
$column =>
{
inflate => sub {
my ($value, $obj) = @_;
- my $dt = eval { $obj->_datetime_parser->$parse($value); };
- die "Error while inflating ${value} for ${column} on ${self}: $@"
- if $@ and not $undef_if_invalid;
+
+ my $dt = eval { $obj->_inflate_to_datetime( $value, \%info ) };
+ if (my $err = $@ ) {
+ return undef if ($undef_if_invalid);
+ $self->throw_exception ("Error while inflating ${value} for ${column} on ${self}: $err");
+ }
+
$dt->set_time_zone($timezone) if $timezone;
$dt->set_locale($locale) if $locale;
return $dt;
deflate => sub {
my ($value, $obj) = @_;
if ($timezone) {
- warn "You're using a floating timezone, please see the documentation of"
+ carp "You're using a floating timezone, please see the documentation of"
. " DBIx::Class::InflateColumn::DateTime for an explanation"
if ref( $value->time_zone ) eq 'DateTime::TimeZone::Floating'
- and not $floating_tz_ok
+ and not $info{floating_tz_ok}
and not $ENV{DBIC_FLOATING_TZ_OK};
$value->set_time_zone($timezone);
$value->set_locale($locale) if $locale;
}
- $obj->_datetime_parser->$format($value);
+ $obj->_deflate_from_datetime( $value, \%info );
},
}
);
}
}
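+# Pick the parser/formatter method matching the column's _ic_dt_method
+# (e.g. parse_timestamp/format_timestamp) and fall back to the generic
+# parse_datetime/format_datetime when the parser class does not provide it.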
+sub _flate_or_fallback
+{
+ my( $self, $value, $info, $method_fmt ) = @_;
+
+ my $parser = $self->_datetime_parser;
+ my $preferred_method = sprintf($method_fmt, $info->{ _ic_dt_method });
+ my $method = $parser->can($preferred_method) ? $preferred_method : sprintf($method_fmt, 'datetime');
+ return $parser->$method($value);
+}
+
+sub _inflate_to_datetime {
+ my( $self, $value, $info ) = @_;
+ return $self->_flate_or_fallback( $value, $info, 'parse_%s' );
+}
+
+sub _deflate_from_datetime {
+ my( $self, $value, $info ) = @_;
+ return $self->_flate_or_fallback( $value, $info, 'format_%s' );
+}
+
sub _datetime_parser {
my $self = shift;
if (my $parser = $self->__datetime_parser) {
=head1 USAGE NOTES
-If you have a datetime column with the C<timezone> extra setting, and subsenquently
+If you have a datetime column with an associated C<timezone>, and subsequently
create/update this column with a DateTime object in the L<DateTime::TimeZone::Floating>
timezone, you will get a warning (as there is a very good chance this will not have the
result you expect). For example:
__PACKAGE__->add_columns(
- starts_when => { data_type => 'datetime', extra => { timezone => "America/Chicago" } }
+ starts_when => { data_type => 'datetime', timezone => "America/Chicago" }
);
my $event = $schema->resultset('EventTZ')->create({
=item Suppress the check on per-column basis
__PACKAGE__->add_columns(
- starts_when => { data_type => 'datetime', extra => { timezone => "America/Chicago", floating_tz_ok => 1 } }
+ starts_when => { data_type => 'datetime', timezone => "America/Chicago", floating_tz_ok => 1 }
);
=item Suppress the check globally
=back

+Putting extra attributes like timezone, locale or floating_tz_ok into C<< extra => {} >> has been
+B<DEPRECATED> because it causes problems with L<DBIx::Class::Schema::Versioned>.
+Instead, put them directly into the column definition, as in the examples above. If you still
+use the old way you will see a warning - please update your code.
+
=head1 SEE ALSO
sub insert {
my $self = shift;
-
+
# cache our file columns so we can write them to the fs
# -after- we have a PK
my %file_column;
In your L<DBIx::Class> table class:
__PACKAGE__->load_components( "PK::Auto", "InflateColumn::File", "Core" );
-
+
# define your columns
__PACKAGE__->add_columns(
"id",
size => 255,
},
);
-
+
In your L<Catalyst::Controller> class:
body => '....'
});
$c->stash->{entry}=$entry;
-
+
And Place the following in your TT template
-
+
Article Subject: [% entry.subject %]
Uploaded File:
<a href="/static/files/[% entry.id %]/[% entry.filename.filename %]">File</a>
Body: [% entry.body %]
-
+
The file will be stored on the filesystem for later retrieval. Calling delete
on your resultset will delete the file from the filesystem. Retrieval of the
record automatically inflates the column back to the set hash with the
-=head1 NAME
+=head1 NAME
DBIx::Class::Manual::Cookbook - Miscellaneous recipes
return $rs->all(); # all records for page 1
-The C<page> attribute does not have to be specified in your search:
+ return $rs->page(2); # records for page 2
- my $rs = $schema->resultset('Artist')->search(
- undef,
- {
- rows => 10,
- }
- );
-
- return $rs->page(1); # DBIx::Class::ResultSet containing first 10 records
-
-In either of the above cases, you can get a L<Data::Page> object for the
-resultset (suitable for use in e.g. a template) using the C<pager> method:
+You can get a L<Data::Page> object for the resultset (suitable for use
+in e.g. a template) using the C<pager> method:
return $rs->pager();
=head2 Retrieve one and only one row from a resultset
-Sometimes you need only the first "top" row of a resultset. While this can be
-easily done with L<< $rs->first|DBIx::Class::ResultSet/first >>, it is suboptimal,
-as a full blown cursor for the resultset will be created and then immediately
-destroyed after fetching the first row object.
-L<< $rs->single|DBIx::Class::ResultSet/single >> is
-designed specifically for this case - it will grab the first returned result
-without even instantiating a cursor.
+Sometimes you need only the first "top" row of a resultset. While this
+can be easily done with L<< $rs->first|DBIx::Class::ResultSet/first
+>>, it is suboptimal, as a full blown cursor for the resultset will be
+created and then immediately destroyed after fetching the first row
+object. L<< $rs->single|DBIx::Class::ResultSet/single >> is designed
+specifically for this case - it will grab the first returned result
+without even instantiating a cursor.
-Before replacing all your calls to C<first()> with C<single()> please observe the
+Before replacing all your calls to C<first()> with C<single()> please observe the
following CAVEATS:
=over
=item *
+
While single() takes a search condition just like search() does, it does
_not_ accept search attributes. However one can always chain a single() to
a search():
- my $top_cd = $cd_rs -> search({}, { order_by => 'rating' }) -> single;
+ my $top_cd = $cd_rs->search({}, { order_by => 'rating' })->single;
=item *
+
Since single() is the engine behind find(), it is designed to fetch a
single row per database query. Thus a warning will be issued when the
underlying SELECT returns more than one row. Sometimes however this usage
at the top of the charts at any given time. If you know what you are doing,
you can silence the warning by explicitly limiting the resultset size:
- my $top_cd = $cd_rs -> search ({}, { order_by => 'rating', rows => 1 }) -> single;
+ my $top_cd = $cd_rs->search ({}, { order_by => 'rating', rows => 1 })->single;
=back
Sometimes you have to run arbitrary SQL because your query is too complex
(e.g. it contains Unions, Sub-Selects, Stored Procedures, etc.) or has to
-be optimized for your database in a special way, but you still want to
-get the results as a L<DBIx::Class::ResultSet>.
-The recommended way to accomplish this is by defining a separate ResultSource
-for your query. You can then inject complete SQL statements using a scalar
-reference (this is a feature of L<SQL::Abstract>).
+be optimized for your database in a special way, but you still want to
+get the results as a L<DBIx::Class::ResultSet>.
-Say you want to run a complex custom query on your user data, here's what
-you have to add to your User class:
+This is accomplished by defining a
+L<ResultSource::View|DBIx::Class::ResultSource::View> for your query,
+almost like you would define a regular ResultSource.
- package My::Schema::User;
-
+ package My::Schema::Result::UserFriendsComplex;
+ use strict;
+ use warnings;
use base qw/DBIx::Class/;
-
- # ->load_components, ->table, ->add_columns, etc.
-
- # Make a new ResultSource based on the User class
- my $source = __PACKAGE__->result_source_instance();
- my $new_source = $source->new( $source );
- $new_source->source_name( 'UserFriendsComplex' );
-
- # Hand in your query as a scalar reference
- # It will be added as a sub-select after FROM,
- # so pay attention to the surrounding brackets!
- $new_source->name( \<<SQL );
- ( SELECT u.* FROM user u
- INNER JOIN user_friends f ON u.id = f.user_id
- WHERE f.friend_user_id = ?
- UNION
- SELECT u.* FROM user u
- INNER JOIN user_friends f ON u.id = f.friend_user_id
- WHERE f.user_id = ? )
- SQL
-
- # Finally, register your new ResultSource with your Schema
- My::Schema->register_extra_source( 'UserFriendsComplex' => $new_source );
+
+ __PACKAGE__->load_components('Core');
+ __PACKAGE__->table_class('DBIx::Class::ResultSource::View');
+
+ # ->table, ->add_columns, etc.
+
+ # do not attempt to deploy() this view
+ __PACKAGE__->result_source_instance->is_virtual(1);
+
+ __PACKAGE__->result_source_instance->view_definition(q[
+ SELECT u.* FROM user u
+ INNER JOIN user_friends f ON u.id = f.user_id
+ WHERE f.friend_user_id = ?
+ UNION
+ SELECT u.* FROM user u
+ INNER JOIN user_friends f ON u.id = f.friend_user_id
+ WHERE f.user_id = ?
+ ]);
Next, you can execute your complex query using bind parameters like this:
- my $friends = [ $schema->resultset( 'UserFriendsComplex' )->search( {},
+ my $friends = $schema->resultset( 'UserFriendsComplex' )->search( {},
{
bind => [ 12345, 12345 ]
}
- ) ];
-
+ );
+
... and you'll get back a perfect L<DBIx::Class::ResultSet> (except, of course,
that you cannot modify the rows it contains, ie. cannot call L</update>,
L</delete>, ... on it).
-If you prefer to have the definitions of these custom ResultSources in separate
-files (instead of stuffing all of them into the same resultset class), you can
-achieve the same with subclassing the resultset class and defining the
-ResultSource there:
+Note that you cannot have bind parameters unless is_virtual is set to true.
- package My::Schema::UserFriendsComplex;
+=over
- use My::Schema::User;
- use base qw/My::Schema::User/;
+=item * NOTE
- __PACKAGE__->table('dummy'); # currently must be called before anything else
+If you're using the old deprecated C<< $rsrc_instance->name(\'( SELECT ...') >>
+method for custom SQL execution, you are highly encouraged to update your code
+to use a virtual view as above. If you do not want to change your code, and just
+want to suppress the deprecation warning when you call
+L<DBIx::Class::Schema/deploy>, add this line to your source definition, so that
+C<deploy> will exclude this "table":
- # Hand in your query as a scalar reference
- # It will be added as a sub-select after FROM,
- # so pay attention to the surrounding brackets!
- __PACKAGE__->name( \<<SQL );
- ( SELECT u.* FROM user u
- INNER JOIN user_friends f ON u.id = f.user_id
- WHERE f.friend_user_id = ?
- UNION
- SELECT u.* FROM user u
- INNER JOIN user_friends f ON u.id = f.friend_user_id
- WHERE f.user_id = ? )
- SQL
+ sub sqlt_deploy_hook { $_[1]->schema->drop_table ($_[1]) }
-TIMTOWDI.
+=back
=head2 Using specific columns
# SELECT name name, LENGTH( name )
# FROM artist
-Note that the C< as > attribute has absolutely nothing to with the sql
-syntax C< SELECT foo AS bar > (see the documentation in
-L<DBIx::Class::ResultSet/ATTRIBUTES>). If your alias exists as a
-column in your base class (i.e. it was added with C<add_columns>), you
-just access it as normal. Our C<Artist> class has a C<name> column, so
-we just use the C<name> accessor:
+Note that the C<as> attribute B<has absolutely nothing to do> with the sql
+syntax C< SELECT foo AS bar > (see the documentation in
+L<DBIx::Class::ResultSet/ATTRIBUTES>). You can control the C<AS> part of the
+generated SQL via the C<-as> field attribute as follows:
+
+ my $rs = $schema->resultset('Artist')->search(
+ {},
+ {
+ join => 'cds',
+ distinct => 1,
+ '+select' => [ { count => 'cds.cdid', -as => 'amount_of_cds' } ],
+ '+as' => [qw/num_cds/],
+ order_by => { -desc => 'amount_of_cds' },
+ }
+ );
+
+ # Equivalent SQL
+ # SELECT me.artistid, me.name, me.rank, me.charfield, COUNT( cds.cdid ) AS amount_of_cds
+ # FROM artist me LEFT JOIN cd cds ON cds.artist = me.artistid
+ # GROUP BY me.artistid, me.name, me.rank, me.charfield
+ # ORDER BY amount_of_cds DESC
+
+
+If your alias exists as a column in your base class (i.e. it was added with
+L<add_columns|DBIx::Class::ResultSource/add_columns>), you just access it as
+normal. Our C<Artist> class has a C<name> column, so we just use the C<name>
+accessor:
my $artist = $rs->first();
my $name = $artist->name();
# Define accessor manually:
sub name_length { shift->get_column('name_length'); }
-
+
# Or use DBIx::Class::AccessorGroup:
__PACKAGE__->mk_group_accessors('column' => 'name_length');
=head2 SELECT DISTINCT with multiple columns
- my $rs = $schema->resultset('Foo')->search(
+ my $rs = $schema->resultset('Artist')->search(
{},
{
- select => [
- { distinct => [ $source->columns ] }
- ],
- as => [ $source->columns ] # remember 'as' is not the same as SQL AS :-)
+ columns => [ qw/artist_id name rank/ ],
+ distinct => 1
}
);
+ my $rs = $schema->resultset('Artist')->search(
+ {},
+ {
+ columns => [ qw/artist_id name rank/ ],
+ group_by => [ qw/artist_id name rank/ ],
+ }
+ );
+
+ # Equivalent SQL:
+ # SELECT me.artist_id, me.name, me.rank
+ # FROM artist me
+ # GROUP BY artist_id, name, rank
+
=head2 SELECT COUNT(DISTINCT colname)
- my $rs = $schema->resultset('Foo')->search(
+ my $rs = $schema->resultset('Artist')->search(
{},
{
- select => [
- { count => { distinct => 'colname' } }
- ],
- as => [ 'count' ]
+ columns => [ qw/name/ ],
+ distinct => 1
+ }
+ );
+
+ my $rs = $schema->resultset('Artist')->search(
+ {},
+ {
+ columns => [ qw/name/ ],
+ group_by => [ qw/name/ ],
}
);
- my $count = $rs->next->get_column('count');
+ my $count = $rs->count;
+
+ # Equivalent SQL:
+ # SELECT COUNT( * ) FROM (SELECT me.name FROM artist me GROUP BY me.name) count_subq
=head2 Grouping results
my $rs = $cdrs->search({
year => {
'=' => $cdrs->search(
- { artistid => { '=' => \'me.artistid' } },
+ { artist_id => { '=' => \'me.artist_id' } },
{ alias => 'inner' }
)->get_column('year')->max_rs->as_query,
},
WHERE year = (
SELECT MAX(inner.year)
FROM cd inner
- WHERE artistid = me.artistid
+ WHERE artist_id = me.artist_id
)
=head3 EXPERIMENTAL
=head2 Predefined searches
You can write your own L<DBIx::Class::ResultSet> class by inheriting from it
-and define often used searches as methods:
+and defining often used searches as methods:
package My::DBIC::ResultSet::CD;
use strict;
=head2 Using joins and prefetch
You can use the C<join> attribute to allow searching on, or sorting your
-results by, one or more columns in a related table. To return all CDs matching
-a particular artist name:
+results by, one or more columns in a related table.
+
+This requires that you have defined the relationship (see L<DBIx::Class::Relationship>). For example:
+
+ My::Schema::CD->has_many( artists => 'My::Schema::Artist', 'artist_id');
+
+To return all CDs matching a particular artist name, you specify the name of the relationship ('artists'):
my $rs = $schema->resultset('CD')->search(
{
- 'artist.name' => 'Bob Marley'
+ 'artists.name' => 'Bob Marley'
},
{
- join => 'artist', # join the artist table
+ join => 'artists', # join the artist table
}
);
# JOIN artist ON cd.artist = artist.id
# WHERE artist.name = 'Bob Marley'
+In that example both the join and the condition use the relationship name
+rather than the table name (see L<DBIx::Class::Manual::Joining> for more details on aliasing).
+
If required, you can now sort on any column in the related tables by including
-it in your C<order_by> attribute:
+it in your C<order_by> attribute (again using the aliased relationship name rather than the table name):
my $rs = $schema->resultset('CD')->search(
{
- 'artist.name' => 'Bob Marley'
+ 'artists.name' => 'Bob Marley'
},
{
- join => 'artist',
- order_by => [qw/ artist.name /]
+ join => 'artists',
+ order_by => [qw/ artists.name /]
}
);
my $rs = $schema->resultset('CD')->search(
{
- 'artist.name' => 'Bob Marley'
+ 'artists.name' => 'Bob Marley'
},
{
- join => 'artist',
- order_by => [qw/ artist.name /],
- prefetch => 'artist' # return artist data too!
+ join => 'artists',
+ order_by => [qw/ artists.name /],
+ prefetch => 'artists' # return artist data too!
}
);
so no additional SQL statements are executed. You now have a much more
efficient query.
-Note that as of L<DBIx::Class> 0.05999_01, C<prefetch> I<can> be used with
-C<has_many> relationships.
-
Also note that C<prefetch> should only be used when you know you will
definitely use data from a related table. Pre-fetching related tables when you
only need columns from the main table will make performance worse!
=head2 Multi-step prefetch
-From 0.04999_05 onwards, C<prefetch> can be nested more than one relationship
+C<prefetch> can be nested more than one relationship
deep using the same syntax as a multi-step join:
my $rs = $schema->resultset('Tag')->search(
my $schema = $cd->result_source->schema;
# use the schema as normal:
- my $artist_rs = $schema->resultset('Artist');
+ my $artist_rs = $schema->resultset('Artist');
This can be useful when you don't want to pass around a Schema object to every
method.
AKA getting last_insert_id
-If you are using PK::Auto (which is a core component as of 0.07), this is
-straightforward:
+Thanks to the core component PK::Auto, this is straightforward:
my $foo = $rs->create(\%blah);
# do more stuff
=head2 Stringification
-Employ the standard stringification technique by using the C<overload>
+Employ the standard stringification technique by using the L<overload>
module.
To make an object stringify itself as a single column, use something
-like this (replace C<foo> with the column/method of your choice):
+like this (replace C<name> with the column/method of your choice):
use overload '""' => sub { shift->name}, fallback => 1;
# do whatever else you wanted if it was a new row
}
-=head2 Dynamic Sub-classing DBIx::Class proxy classes
+=head2 Static sub-classing DBIx::Class result classes
+
+AKA adding additional relationships/methods/etc. to a model for a
+specific usage of the (shared) model.
+
+B<Schema definition>
+
+ package My::App::Schema;
+
+ use base qw/DBIx::Class::Schema/;
+
+ # load subclassed classes from My::App::Schema::Result/ResultSet
+ __PACKAGE__->load_namespaces;
+
+ # load classes from shared model
+ __PACKAGE__->load_classes({
+ 'My::Shared::Model::Result' => [qw/
+ Foo
+ Bar
+ /]});
+
+ 1;
+
+B<Result-Subclass definition>
+
+ package My::App::Schema::Result::Baz;
+
+ use strict;
+ use warnings;
+ use base qw/My::Shared::Model::Result::Baz/;
+
+ # WARNING: Make sure you call table() again in your subclass,
+ # otherwise DBIx::Class::ResultSourceProxy::Table will not be called
+ # and the class name is not correctly registered as a source
+ __PACKAGE__->table('baz');
+
+ sub additional_method {
+ return "I'm an additional method only needed by this app";
+ }
+
+ 1;
+
+=head2 Dynamic Sub-classing DBIx::Class proxy classes
AKA multi-class object inflation from one table
-
+
L<DBIx::Class> classes are proxy classes, therefore some different
techniques need to be employed for more than basic subclassing. In
this example we have a single user table that carries a boolean bit
for admin. We would like to give the admin users
-objects(L<DBIx::Class::Row>) the same methods as a regular user but
+objects (L<DBIx::Class::Row>) the same methods as a regular user but
also special admin only methods. It doesn't make sense to create two
separate proxy-class files for this. We would be copying all the user
methods into the Admin class. There is a cleaner way to accomplish
grab the object being returned, inspect the values we are looking for,
bless it if it's an admin object, and then return it. See the example
below:
-
-B<Schema Definition>
-
- package DB::Schema;
-
- use base qw/DBIx::Class::Schema/;
-
- __PACKAGE__->load_classes(qw/User/);
-
-
-B<Proxy-Class definitions>
-
- package DB::Schema::User;
-
- use strict;
- use warnings;
- use base qw/DBIx::Class/;
-
- ### Defined what our admin class is for ensure_class_loaded
- my $admin_class = __PACKAGE__ . '::Admin';
-
- __PACKAGE__->load_components(qw/Core/);
-
- __PACKAGE__->table('users');
-
- __PACKAGE__->add_columns(qw/user_id email password
- firstname lastname active
- admin/);
-
- __PACKAGE__->set_primary_key('user_id');
-
- sub inflate_result {
- my $self = shift;
- my $ret = $self->next::method(@_);
- if( $ret->admin ) {### If this is an admin rebless for extra functions
- $self->ensure_class_loaded( $admin_class );
- bless $ret, $admin_class;
- }
- return $ret;
- }
-
- sub hello {
- print "I am a regular user.\n";
- return ;
- }
-
-
- package DB::Schema::User::Admin;
-
- use strict;
- use warnings;
- use base qw/DB::Schema::User/;
-
- sub hello
- {
- print "I am an admin.\n";
- return;
- }
-
- sub do_admin_stuff
- {
- print "I am doing admin stuff\n";
- return ;
- }
-
-B<Test File> test.pl
-
- use warnings;
- use strict;
- use DB::Schema;
-
- my $user_data = { email => 'someguy@place.com',
- password => 'pass1',
- admin => 0 };
-
- my $admin_data = { email => 'someadmin@adminplace.com',
- password => 'pass2',
- admin => 1 };
-
- my $schema = DB::Schema->connection('dbi:Pg:dbname=test');
-
- $schema->resultset('User')->create( $user_data );
- $schema->resultset('User')->create( $admin_data );
-
- ### Now we search for them
- my $user = $schema->resultset('User')->single( $user_data );
- my $admin = $schema->resultset('User')->single( $admin_data );
-
- print ref $user, "\n";
- print ref $admin, "\n";
-
- print $user->password , "\n"; # pass1
- print $admin->password , "\n";# pass2; inherited from User
- print $user->hello , "\n";# I am a regular user.
- print $admin->hello, "\n";# I am an admin.
-
- ### The statement below will NOT print
- print "I can do admin stuff\n" if $user->can('do_admin_stuff');
- ### The statement below will print
- print "I can do admin stuff\n" if $admin->can('do_admin_stuff');
+
+B<Schema Definition>
+
+ package My::Schema;
+
+ use base qw/DBIx::Class::Schema/;
+
+ __PACKAGE__->load_namespaces;
+
+ 1;
+
+
+B<Proxy-Class definitions>
+
+ package My::Schema::Result::User;
+
+ use strict;
+ use warnings;
+ use base qw/DBIx::Class/;
+
+ ### Define what our admin class is, for ensure_class_loaded()
+ my $admin_class = __PACKAGE__ . '::Admin';
+
+ __PACKAGE__->load_components(qw/Core/);
+
+ __PACKAGE__->table('users');
+
+ __PACKAGE__->add_columns(qw/user_id email password
+ firstname lastname active
+ admin/);
+
+ __PACKAGE__->set_primary_key('user_id');
+
+ sub inflate_result {
+ my $self = shift;
+ my $ret = $self->next::method(@_);
+ if( $ret->admin ) {### If this is an admin, rebless for extra functions
+ $self->ensure_class_loaded( $admin_class );
+ bless $ret, $admin_class;
+ }
+ return $ret;
+ }
+
+ sub hello {
+ print "I am a regular user.\n";
+ return ;
+ }
+
+ 1;
+
+
+ package My::Schema::Result::User::Admin;
+
+ use strict;
+ use warnings;
+ use base qw/My::Schema::Result::User/;
+
+ # This line is important
+ __PACKAGE__->table('users');
+
+ sub hello
+ {
+ print "I am an admin.\n";
+ return;
+ }
+
+ sub do_admin_stuff
+ {
+ print "I am doing admin stuff\n";
+ return ;
+ }
+
+ 1;
+
+B<Test File> test.pl
+
+ use warnings;
+ use strict;
+ use My::Schema;
+
+ my $user_data = { email => 'someguy@place.com',
+ password => 'pass1',
+ admin => 0 };
+
+ my $admin_data = { email => 'someadmin@adminplace.com',
+ password => 'pass2',
+ admin => 1 };
+
+ my $schema = My::Schema->connection('dbi:Pg:dbname=test');
+
+ $schema->resultset('User')->create( $user_data );
+ $schema->resultset('User')->create( $admin_data );
+
+ ### Now we search for them
+ my $user = $schema->resultset('User')->single( $user_data );
+ my $admin = $schema->resultset('User')->single( $admin_data );
+
+ print ref $user, "\n";
+ print ref $admin, "\n";
+
+ print $user->password , "\n"; # pass1
+ print $admin->password , "\n";# pass2; inherited from User
+ print $user->hello , "\n";# I am a regular user.
+ print $admin->hello, "\n";# I am an admin.
+
+ ### The statement below will NOT print
+ print "I can do admin stuff\n" if $user->can('do_admin_stuff');
+ ### The statement below will print
+ print "I can do admin stuff\n" if $admin->can('do_admin_stuff');
=head2 Skip row object creation for faster results
DBIx::Class is not built for speed, it's built for convenience and
ease of use, but sometimes you just need to get the data, and skip the
fancy objects.
-
+
To do this simply use L<DBIx::Class::ResultClass::HashRefInflator>.
-
+
my $rs = $schema->resultset('CD');
-
+
$rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
-
+
my $hash_ref = $rs->find(1);
Wasn't that easy?
+Beware, changing the Result class using
+L<DBIx::Class::ResultSet/result_class> will replace any existing class
+completely, including any special components loaded using
+load_components, e.g. L<DBIx::Class::InflateColumn::DateTime>.
+
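+For example, a minimal sketch (assuming a C<CD> source that loads
+L<DBIx::Class::InflateColumn::DateTime> for its C<created_on> column):
+
+  my $rs = $schema->resultset('CD');
+  $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
+
+  my $hashref = $rs->find(1);
+
+  # $hashref->{created_on} is now the raw value from the database (a plain
+  # string), not a DateTime object - the inflation component is bypassed
+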
=head2 Get raw data for blindingly fast results
If the L<HashRefInflator|DBIx::Class::ResultClass::HashRefInflator> solution
above is not fast enough for you, you can use a DBIx::Class to return values
-exactly as they come out of the data base with none of the convenience methods
+exactly as they come out of the database with none of the convenience methods
wrapped round them.
This is used like so:
}
You will need to map the array offsets to particular columns (you can
-use the I<select> attribute of C<search()> to force ordering).
+use the L<DBIx::Class::ResultSet/select> attribute of L<DBIx::Class::ResultSet/search> to force ordering).
=head1 RESULTSET OPERATIONS
=head2 Getting Schema from a ResultSet
-To get the schema object from a result set, do the following:
+To get the L<DBIx::Class::Schema> object from a ResultSet, do the following:
$rs->result_source->schema
my $rs = $schema->resultset('Items')->search(
{},
- {
+ {
select => [ { sum => 'Cost' } ],
as => [ 'total_cost' ], # remember this 'as' is for DBIx::Class::ResultSet not SQL
}
print $c;
}
-C<ResultSetColumn> only has a limited number of built-in functions, if
+C<ResultSetColumn> only has a limited number of built-in functions. If
you need one that it doesn't have, then you can use the C<func> method
instead:
=head2 Creating a result set from a set of rows
-Sometimes you have a (set of) row objects that you want to put into a
+Sometimes you have a (set of) row objects that you want to put into a
resultset without the need to hit the DB again. You can do that by using the
L<set_cache|DBIx::Class::Resultset/set_cache> method:
=head2 Ordering a relationship result set
-If you always want a relation to be ordered, you can specify this when you
+If you always want a relation to be ordered, you can specify this when you
create the relationship.
To order C<< $book->pages >> by descending page_number, create the relation
$rs = $user->addresses(); # get all addresses for a user
$rs = $address->users(); # get all users for an address
+=head2 Relationships across DB schemas
+
+Mapping relationships across L<DB schemas|DBIx::Class::Manual::Glossary/DB schema>
+is easy as long as the schemas themselves are all accessible via the same DBI
+connection. In most cases, this means that they are on the same database host
+as each other and your connecting database user has the proper permissions to them.
+
+To accomplish this, one only needs to specify the DB schema name in the table
+declaration, like so...
+
+ package MyDatabase::Main::Artist;
+ use base qw/DBIx::Class/;
+ __PACKAGE__->load_components(qw/PK::Auto Core/);
+
+ __PACKAGE__->table('database1.artist'); # will use "database1.artist" in FROM clause
+
+ __PACKAGE__->add_columns(qw/ artist_id name /);
+ __PACKAGE__->set_primary_key('artist_id');
+ __PACKAGE__->has_many('cds' => 'MyDatabase::Main::Cd');
+
+ 1;
+
+Whatever string you specify there will be used to build the "FROM" clause in SQL
+queries.
+
+The big drawback to this is that you now have DB schema names hardcoded in your
+class files. This becomes especially troublesome if you have multiple instances
+of your application to support a change lifecycle (e.g. DEV, TEST, PROD) and
+the DB schemas are named based on the environment (e.g. database1_dev).
+
+However, one can dynamically "map" to the proper DB schema by overriding the
+L<connection|DBIx::Class::Schema/connection> method in your Schema class and
+building a renaming facility, like so:
+
+ package MyDatabase::Schema;
+ use Moose;
+
+ extends 'DBIx::Class::Schema';
+
+ around connection => sub {
+ my ( $inner, $self, $dsn, $username, $pass, $attr ) = ( shift, @_ );
+
+ my $postfix = delete $attr->{schema_name_postfix};
+
+ $inner->(@_);
+
+ if ( $postfix ) {
+ $self->append_db_name($postfix);
+ }
+ };
+
+ sub append_db_name {
+ my ( $self, $postfix ) = @_;
+
+ my @sources_with_db
+ = grep
+ { $_->name =~ /^\w+\./mx }
+ map
+ { $self->source($_) }
+ $self->sources;
+
+ foreach my $source (@sources_with_db) {
+ my $name = $source->name;
+ $name =~ s{^(\w+)\.}{${1}${postfix}\.}mx;
+
+ $source->name($name);
+ }
+ }
+
+ 1;
+
+By overriding the L<connection|DBIx::Class::Schema/connection>
+method and extracting a custom option from the provided \%attr hashref one can
+then simply iterate over all the Schema's ResultSources, renaming them as
+needed.
+
+To use this facility, simply add or modify the \%attr hashref that is passed to
+L<connect|DBIx::Class::Schema/connect>, as follows:
+
+ my $schema
+ = MyDatabase::Schema->connect(
+ $dsn,
+ $user,
+ $pass,
+ {
+ schema_name_postfix => '_dev'
+ # ... Other options as desired ...
+ });
+
+Obviously, one could accomplish even more advanced mapping via a hash map or a
+callback routine.
+
=head1 TRANSACTIONS
As of version 0.04001, there is improved transaction support in
transactions (for databases that support them) will hopefully be added
in the future.
-=head1 SQL
+=head1 SQL
=head2 Creating Schemas From An Existing Database
-L<DBIx::Class::Schema::Loader> will connect to a database and create a
+L<DBIx::Class::Schema::Loader> will connect to a database and create a
L<DBIx::Class::Schema> and associated sources by examining the database.
-The recommend way of achieving this is to use the
+The recommended way of achieving this is to use the
L<make_schema_at|DBIx::Class::Schema::Loader/make_schema_at> method:
perl -MDBIx::Class::Schema::Loader=make_schema_at,dump_to_dir:./lib \
your database.
Make a table class as you would for any other table
-
+
package MyAppDB::Dual;
use strict;
use warnings;
"dummy",
{ data_type => "VARCHAR2", is_nullable => 0, size => 1 },
);
-
+
Once you've loaded your table class, select from it using C<select>
and C<as> instead of C<columns>
-
+
my $rs = $schema->resultset('Dual')->search(undef,
{ select => [ 'sydate' ],
as => [ 'now' ]
},
);
-
+
All you have to do now is be careful how you access your resultset; the below
will not work because there is no column called 'now' in the Dual table class
-
+
while (my $dual = $rs->next) {
print $dual->now."\n";
}
# Can't locate object method "now" via package "MyAppDB::Dual" at headshot.pl line 23.
-
+
You could of course use 'dummy' in C<as> instead of 'now', or C<add_columns> to
your Dual class for whatever you wanted to select from dual, but that's just
silly; instead use C<get_column>
-
+
while (my $dual = $rs->next) {
print $dual->get_column('now')."\n";
}
-
+
Or use C<cursor>
-
+
my $cursor = $rs->cursor;
while (my @vals = $cursor->next) {
print $vals[0]."\n";
}
-
+
+If you are going to use this "trick" together with L<DBIx::Class::Schema/deploy>
+or L<DBIx::Class::Schema/create_ddl_dir>, a table called "dual" will be created
+in your current schema. This would overlap "sys.dual", and you could no longer
+fetch "sysdate" or "sequence.nextval" from dual. To avoid this problem, just
+tell L<SQL::Translator> not to create the table dual:
+
+ my $sqlt_args = {
+ add_drop_table => 1,
+ parser_args => { sources => [ grep $_ ne 'Dual', $schema->sources ] },
+ };
+ $schema->create_ddl_dir( [qw/Oracle/], undef, './sql', undef, $sqlt_args );
+
Or use L<DBIx::Class::ResultClass::HashRefInflator>
-
+
$rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
while ( my $dual = $rs->next ) {
print $dual->{now}."\n";
}
-
+
Here are some example C<select> conditions to illustrate the different syntax
-you could use for doing stuff like
+you could use for doing stuff like
C<oracles.heavily(nested(functions_can('take', 'lots'), OF), 'args')>
-
+
# get a sequence value
select => [ 'A_SEQ.nextval' ],
-
+
# get create table sql
select => [ { 'dbms_metadata.get_ddl' => [ "'TABLE'", "'ARTIST'" ]} ],
-
+
# get a random num between 0 and 100
select => [ { "trunc" => [ { "dbms_random.value" => [0,100] } ]} ],
-
+
# what year is it?
select => [ { 'extract' => [ \'year from sysdate' ] } ],
-
+
# do some math
select => [ {'round' => [{'cos' => [ \'180 * 3.14159265359/180' ]}]}],
-
+
# which day of the week were you born on?
select => [{'to_char' => [{'to_date' => [ "'25-DEC-1980'", "'dd-mon-yyyy'" ]}, "'day'"]}],
-
+
# select 16 rows from dual
select => [ "'hello'" ],
as => [ 'world' ],
group_by => [ 'cube( 1, 2, 3, 4 )' ],
-
-
+
+
=head2 Adding Indexes And Functions To Your SQL
Often you will want indexes on columns on your table to speed up searching. To
-do this, create a method called C<sqlt_deploy_hook> in the relevant source
-class (refer to the advanced
+do this, create a method called C<sqlt_deploy_hook> in the relevant source
+class (refer to the advanced
L<callback system|DBIx::Class::ResultSource/sqlt_deploy_callback> if you wish
to share a hook between multiple sources):
- package My::Schema::Artist;
+ package My::Schema::Result::Artist;
__PACKAGE__->table('artist');
__PACKAGE__->add_columns(id => { ... }, name => { ... })
1;
-Sometimes you might want to change the index depending on the type of the
+Sometimes you might want to change the index depending on the type of the
database for which SQL is being generated:
my ($db_type = $sqlt_table->schema->translator->producer_type)
=~ s/^SQL::Translator::Producer:://;
-You can also add hooks to the schema level to stop certain tables being
+You can also add hooks to the schema level to stop certain tables being
created:
package My::Schema;
Alternatively, you can send the conversion sql scripts to your
customers as above.
-=head2 Setting quoting for the generated SQL.
+=head2 Setting quoting for the generated SQL.
If the database contains column names with spaces and/or reserved words, they
need to be quoted in the SQL queries. This is done using:
- __PACKAGE__->storage->sql_maker->quote_char([ qw/[ ]/] );
- __PACKAGE__->storage->sql_maker->name_sep('.');
+ $schema->storage->sql_maker->quote_char([ qw/[ ]/] );
+ $schema->storage->sql_maker->name_sep('.');
The first sets the quote characters. Either a pair of matching
brackets, or a C<"> or C<'>:
-
- __PACKAGE__->storage->sql_maker->quote_char('"');
+
+ $schema->storage->sql_maker->quote_char('"');
Check the documentation of your database for the correct quote
characters to use. C<name_sep> needs to be set to allow the SQL
generator to put the quotes the correct place.
-In most cases you should set these as part of the arguments passed to
+In most cases you should set these as part of the arguments passed to
L<DBIx::Class::Schema/connect>:
my $schema = My::Schema->connect(
}
)
+In some cases, quoting will be required for all users of a schema. To enforce
+this, you can also overload the C<connection> method for your schema class:
+
+ sub connection {
+ my $self = shift;
+ my $rv = $self->next::method( @_ );
+ $rv->storage->sql_maker->quote_char([ qw/[ ]/ ]);
+ $rv->storage->sql_maker->name_sep('.');
+ return $rv;
+ }
+
=head2 Setting limit dialect for SQL::Abstract::Limit
In some cases, SQL::Abstract::Limit cannot determine the dialect of
The JDBC bridge is one way of getting access to a MSSQL server from a platform
that Microsoft doesn't deliver native client libraries for. (e.g. Linux)
-The limit dialect can also be set at connect time by specifying a
+The limit dialect can also be set at connect time by specifying a
C<limit_dialect> key in the final hash as shown above.
=head2 Working with PostgreSQL array types
arrayrefs together with the column name, like this: C<< [column_name => value]
>>.
-=head1 BOOTSTRAPPING/MIGRATING
+=head1 BOOTSTRAPPING/MIGRATING
=head2 Easy migration from class-based to schema-based setup
use MyDB;
use SQL::Translator;
-
+
my $schema = MyDB->schema_instance;
-
- my $translator = SQL::Translator->new(
+
+ my $translator = SQL::Translator->new(
debug => $debug || 0,
trace => $trace || 0,
no_comments => $no_comments || 0,
'prefix' => 'My::Schema',
},
);
-
+
$translator->parser('SQL::Translator::Parser::DBIx::Class');
$translator->producer('SQL::Translator::Producer::DBIx::Class::File');
-
+
my $output = $translator->translate(@args) or die
"Error: " . $translator->error;
-
+
print $output;
You could use L<Module::Find> to search for all subclasses in the MyDB::*
return $new;
}
-For more information about C<next::method>, look in the L<Class::C3>
+For more information about C<next::method>, look in the L<Class::C3>
documentation. See also L<DBIx::Class::Manual::Component> for more
ways to write your own base classes to do this.
People looking for ways to do "triggers" with DBIx::Class are probably
-just looking for this.
+just looking for this.
=head2 Changing one field whenever another changes
-For example, say that you have three columns, C<id>, C<number>, and
+For example, say that you have three columns, C<id>, C<number>, and
C<squared>. You would like to make changes to C<number> and have
C<squared> be automagically set to the value of C<number> squared.
You can accomplish this by overriding C<store_column>:
=head2 Automatically creating related objects
-You might have a class C<Artist> which has many C<CD>s. Further, if you
+You might have a class C<Artist> which has many C<CD>s. Further, you
want to create a C<CD> object every time you insert an C<Artist> object.
You can accomplish this by overriding C<insert> on your objects:
If this preamble is moved into a common base class:-
package MyDBICbase;
-
+
use base qw/DBIx::Class/;
__PACKAGE__->load_components(qw/InflateColumn::DateTime Core/);
1;
to load the result classes. This will use L<Module::Find|Module::Find>
to find and load the appropriate modules. Explicitly defining the
classes you wish to load will remove the overhead of
-L<Module::Find|Module::Find> and the related directory operations:-
+L<Module::Find|Module::Find> and the related directory operations:
__PACKAGE__->load_classes(qw/ CD Artist Track /);
syntax to load the appropriate classes there is not a direct alternative
avoiding L<Module::Find|Module::Find>.
+=head1 MEMORY USAGE
+
+=head2 Cached statements
+
+L<DBIx::Class> normally caches all statements with L<< prepare_cached()|DBI/prepare_cached >>.
+This is normally a good idea, but if too many statements are cached, the database may use too much
+memory and may eventually run out and fail entirely. If you suspect this may be the case, you may want
+to examine DBI's L<< CachedKids|DBI/CachedKids_(hash_ref) >> hash:
+
+ # print all currently cached prepared statements
+ print for keys %{$schema->storage->dbh->{CachedKids}};
+ # get a count of currently cached prepared statements
+ my $count = scalar keys %{$schema->storage->dbh->{CachedKids}};
+
+If it's appropriate, you can simply clear these statements, automatically deallocating them in the
+database:
+
+ my $kids = $schema->storage->dbh->{CachedKids};
+ delete @{$kids}{keys %$kids} if scalar keys %$kids > 100;
+
+But what you probably want is to expire unused statements and not those that are used frequently.
+You can accomplish this with L<Tie::Cache> or L<Tie::Cache::LRU>:
+
+ use Tie::Cache;
+ use DB::Main;
+ my $schema = DB::Main->connect($dbi_dsn, $user, $pass, {
+ on_connect_do => sub { tie %{shift->_dbh->{CachedKids}}, 'Tie::Cache', 100 },
+ });
+
=cut
CREATE TABLE artist (
artistid INTEGER PRIMARY KEY,
- name TEXT NOT NULL
+ name TEXT NOT NULL
);
CREATE TABLE cd (
and create the sqlite database file:
-sqlite3 example.db < example.sql
+ sqlite3 example.db < example.sql
=head3 Set up DBIx::Class::Schema
Then, create the following DBIx::Class::Schema classes:
MyDatabase/Main.pm:
-
+
package MyDatabase::Main;
use base qw/DBIx::Class::Schema/;
__PACKAGE__->load_namespaces;
package MyDatabase::Main::Result::Artist;
use base qw/DBIx::Class/;
- __PACKAGE__->load_components(qw/PK::Auto Core/);
+ __PACKAGE__->load_components(qw/Core/);
__PACKAGE__->table('artist');
__PACKAGE__->add_columns(qw/ artistid name /);
__PACKAGE__->set_primary_key('artistid');
package MyDatabase::Main::Result::Cd;
use base qw/DBIx::Class/;
- __PACKAGE__->load_components(qw/PK::Auto Core/);
+ __PACKAGE__->load_components(qw/Core/);
__PACKAGE__->table('cd');
__PACKAGE__->add_columns(qw/ cdid artist title/);
__PACKAGE__->set_primary_key('cdid');
package MyDatabase::Main::Result::Track;
use base qw/DBIx::Class/;
- __PACKAGE__->load_components(qw/PK::Auto Core/);
+ __PACKAGE__->load_components(qw/Core/);
__PACKAGE__->table('track');
__PACKAGE__->add_columns(qw/ trackid cd title/);
__PACKAGE__->set_primary_key('trackid');
my $schema = MyDatabase::Main->connect('dbi:SQLite:db/example.db');
- # here's some of the sql that is going to be generated by the schema
+ # here's some of the SQL that is going to be generated by the schema
# INSERT INTO artist VALUES (NULL,'Michael Jackson');
# INSERT INTO artist VALUES (NULL,'Eminem');
}
print "\n";
}
-
-
+
+
sub get_cd_by_track {
my $tracktitle = shift;
print "get_cd_by_track($tracktitle):\n";
my $cd = $rs->first;
print $cd->title . "\n\n";
}
-
+
sub get_cds_by_artist {
my $artistname = shift;
print "get_cds_by_artist($artistname):\n";
A reference implementation of the database and scripts in this example
is available in the main distribution for DBIx::Class under the
-directory t/examples/Schema
+directory F<t/examples/Schema>.
With these scripts we're relying on @INC looking in the current
working directory. You may want to add the MyDatabase namespaces to
@INC in a different way when it comes to deployment.
-The testdb.pl script is an excellent start for testing your database
+The F<testdb.pl> script is an excellent start for testing your database
model.
-This example uses load_namespaces to load in the appropriate Row classes
-from the MyDatabase::Main::Result namespace, and any required resultset
-classes from the MyDatabase::Main::ResultSet namespace (although we
-created the directory in the directions above we did not add, or need to
-add, any resultset classes).
+This example uses L<DBIx::Class::Schema/load_namespaces> to load in the
+appropriate L<Row|DBIx::Class::Row> classes from the MyDatabase::Main::Result namespace,
+and any required resultset classes from the MyDatabase::Main::ResultSet
+namespace (although we created the directory in the directions above we
+did not add, or need to add, any resultset classes).
=head1 TODO
to connect with rights to read/write all the schemas/tables as
necessary.
-=back
+=back
=head2 Relationships
Create a C<belongs_to> relationship for the field containing the
foreign key. See L<DBIx::Class::Relationship/belongs_to>.
-=item .. define a foreign key relationship where the key field may contain NULL?
+=item .. define a foreign key relationship where the key field may contain NULL?
Just create a C<belongs_to> relationship, as above. If the column is
NULL then the inflation to the foreign object will not happen. This
Or, if you have quoting off:
- ->search({ 'YEAR(date_of_birth' => 1979 });
+ ->search({ 'YEAR(date_of_birth)' => 1979 });
=item .. find more help on constructing searches?
=item .. fetch a whole column of data instead of a row?
-Call C<get_column> on a L<DBIx::Class::ResultSet>, this returns a
-L<DBIx::Class::ResultSetColumn>, see it's documentation and the
+Call C<get_column> on a L<DBIx::Class::ResultSet>. This returns a
+L<DBIx::Class::ResultSetColumn>. See its documentation and the
L<Cookbook|DBIx::Class::Manual::Cookbook> for details.
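+
+For example, a quick sketch (assuming an C<Artist> source with a C<name> column):
+
+  my @names = $schema->resultset('Artist')->get_column('name')->all;
+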
=item .. fetch a formatted column?
=item .. fetch a single (or topmost) row?
-Sometimes you many only want a single record back from a search. A quick
-way to get that single row is to first run your search as usual:
-
- ->search->(undef, { order_by => "id DESC" })
-
-Then call L<DBIx::Class::ResultSet/slice> and ask it only to return 1 row:
-
- ->slice(0)
+See L<DBIx::Class::Manual::Cookbook/Retrieve_one_and_only_one_row_from_a_resultset>.
-These two calls can be combined into a single statement:
+A less readable way is to ask a regular search to return 1 row, using
+L<DBIx::Class::ResultSet/slice>:
->search(undef, { order_by => "id DESC" })->slice(0)
-Why slice instead of L<DBIx::Class::ResultSet/first> or L<DBIx::Class::ResultSet/single>?
-If supported by the database, slice will use LIMIT/OFFSET to hint to the database that we
-really only need one row. This can result in a significant speed improvement.
+which (if supported by the database) will use LIMIT/OFFSET to hint to the
+database that we really only need one row. This can result in a significant
+speed improvement. The method using L<DBIx::Class::ResultSet/single> mentioned
+in the cookbook can do the same if you pass a C<rows> attribute to the search.
=item .. refresh a row from storage?
L<DBIx::Class::PK/discard_changes> does just that by re-fetching the row from storage
using the row's primary key.
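+
+For example:
+
+  $row->discard_changes; # re-reads all column values from storage
+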
+=item .. fetch my data a "page" at a time?
+
+Pass the C<rows> and C<page> attributes to your search, e.g.:
+
+ ->search({}, { rows => 10, page => 1});
+
+=item .. get a count of all rows even when paging?
+
+Call C<pager> on the paged resultset; it will return a L<Data::Page>
+object. Calling C<total_entries> on the pager will return the correct
+total.
+
+C<count> on the resultset will only return the number of rows in the current page.
+
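+For example, a minimal sketch (using the C<Artist> source from earlier examples):
+
+  my $rs = $schema->resultset('Artist')->search({}, { rows => 10, page => 2 });
+
+  my $rows_in_page = $rs->count;                 # at most 10 - just this page
+  my $total_rows   = $rs->pager->total_entries;  # count across all pages
+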
=back
=head2 Inserting and updating data
But note that when using a scalar reference the column in the database
will be updated but when you read the value from the object with e.g.
-
+
->somecolumn()
-
+
you still get back the scalar reference to the string, B<not> the new
value in the database. To get that you must refresh the row from storage
using C<discard_changes()>. Or chain your function calls like this:
->update->discard_changes
-
- to update the database and refresh the object in one step.
-
+
+to update the database and refresh the object in one step.
+
=item .. store JSON/YAML in a column and have it deflate/inflate automatically?
You can use L<DBIx::Class::InflateColumn> to accomplish YAML/JSON storage transparently.
package MyTable;
use Moose; # import Moose
- use Moose::Util::TypeConstraint; # import Moose accessor type constraints
+ use Moose::Util::TypeConstraint; # import Moose accessor type constraints
extends 'DBIx::Class'; # Moose changes the way we define our parent (base) package
my $row;
- # assume that some where in here $row will get assigned to a MyTable row
+ # assume that somewhere in here $row will get assigned to a MyTable row
$row->non_column_data('some string'); # would set the non_column_data accessor
$row->update(); # would not inline the non_column_data accessor into the update
-
+
=item How do I use DBIx::Class objects in my TT templates?
Like normal objects, mostly. However you need to watch out for TT
=item How do I reduce the overhead of database queries?
You can reduce the overhead of object creation within L<DBIx::Class>
-using the tips in L<DBIx::Class::Manual::Cookbook/"Skip row object creation for faster results">
+using the tips in L<DBIx::Class::Manual::Cookbook/"Skip row object creation for faster results">
and L<DBIx::Class::Manual::Cookbook/"Get raw data for blindingly fast results">
=back
See L<DBIx::Class::Manual::Cookbook/Stringification>
=back
+
+=head2 Troubleshooting
+
+=over 4
+
+=item Help, I can't connect to postgresql!
+
+If you get an error such as:
+
+ DBI connect('dbname=dbic','user',...) failed: could not connect to server:
+ No such file or directory Is the server running locally and accepting
+ connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
+
+Likely you have (or had) two copies of PostgreSQL installed simultaneously; the
+second one will use a default port of 5433, while L<DBD::Pg> is compiled with a
+default port of 5432.
+
+You can change the port setting in C<postgresql.conf>.
+
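+Alternatively, a quick sketch (assuming the second server really is listening
+on port 5433) - adjust the port in your DSN to match the running server:
+
+  my $schema = My::Schema->connect(
+    'dbi:Pg:dbname=dbic;host=localhost;port=5433', $user, $pass,
+  );
+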
+=item I've lost or forgotten my mysql password
+
+Stop mysqld and restart it with the C<--skip-grant-tables> option.
+
+Issue the following statements in the mysql client.
+
+ UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root';
+ FLUSH PRIVILEGES;
+
+Restart mysql.
+
+Taken from:
+
+L<http://dev.mysql.com/doc/refman/5.1/en/resetting-permissions.html>.
+
+=back
=head1 TERMS
+=head2 DB schema
+
+Refers to a single physical schema within an RDBMS. Synonymous with the term
+'database' for MySQL, and with 'schema' for most other RDBMSes.
+
+In other words, it's the 'xyz' I<thing> you're connecting to when using any of
+the following L<DSNs|DBI/connect>:
+
+ dbi:DriverName:xyz@hostname:port
+ dbi:DriverName:database=xyz;host=hostname;port=port
+
=head2 Inflation
The act of turning database row data into objects in
=head1 THE DBIx::Class WAY
Here are a few simple tips that will help you get your bearings with
-DBIx::Class.
+DBIx::Class.
=head2 Tables become Result classes
=head2 It's all about the ResultSet
So, we've got some ResultSources defined. Now, we want to actually use those
-definitions to help us translate the queries we need into handy perl objects!
+definitions to help us translate the queries we need into handy perl objects!
Let's say we defined a ResultSource for an "album" table with three columns:
"albumid", "artist", and "title". Any time we want to query this table, we'll
SELECT albumid, artist, title FROM album;
Would be retrieved by creating a ResultSet object from the album table's
-ResultSource, likely by using the "search" method.
+ResultSource, likely by using the "search" method.
DBIx::Class doesn't limit you to creating only simple ResultSets -- if you
wanted to do something like:
SELECT title FROM album GROUP BY title;
-You could easily achieve it.
+You could easily achieve it.
-The important thing to understand:
+The important thing to understand:
- Any time you would reach for a SQL query in DBI, you are
+ Any time you would reach for a SQL query in DBI, you are
creating a DBIx::Class::ResultSet.
=head2 Search is like "prepare"
Load any components required by each class with the load_components() method.
This should consist of "Core" plus any additional components you want to use.
-For example, if you want serial/auto-incrementing primary keys:
-
- __PACKAGE__->load_components(qw/ PK::Auto Core /);
+For example, if you want to force columns to use UTF-8 encoding:
-C<PK::Auto> is supported for many databases; see L<DBIx::Class::Storage::DBI>
-for more information.
+ __PACKAGE__->load_components(qw/ ForceUTF8 Core /);
Set the table for your class:
is_auto_increment => 0,
default_value => '',
},
- title =>
+ title =>
{ data_type => 'varchar',
size => 256,
is_nullable => 0,
make a predefined accessor for fetching objects that contain this Table's
foreign key:
- __PACKAGE__->has_many('albums', 'My::Schema::Result::Artist', 'album_id');
+ # in My::Schema::Result::Artist
+ __PACKAGE__->has_many('albums', 'My::Schema::Result::Album', 'artist');
See L<DBIx::Class::Relationship> for more information about the various types of
available relationships and how you can design your own.
=head2 Connecting
-To connect to your Schema, you need to provide the connection details. The
-arguments are the same as for L<DBI/connect>:
+To connect to your Schema, you need to provide the connection details or a
+database handle.
+
+=head3 Via connection details
+
+The arguments are the same as for L<DBI/connect>:
my $schema = My::Schema->connect('dbi:SQLite:/home/me/myapp/my.db');
See L<DBIx::Class::Schema::Storage::DBI/connect_info> for more information about
this and other special C<connect>-time options.
+=head3 Via a database handle
+
+The supplied coderef is expected to return a single connected database handle
+(e.g. a L<DBI> C<$dbh>).
+
+ my $schema = My::Schema->connect (
+ sub { Some::DBH::Factory->connect },
+ \%extra_attrs,
+ );
+
=head2 Basic usage
Once you've defined the basic classes, either manually or using
$album->set_column('title', 'Presence');
$title = $album->get_column('title');
-Just like with L<Class::DBI>, you call C<update> to commit your changes to the
-database:
+Just like with L<Class::DBI>, you call C<update> to save your changes to the
+database (by executing the actual C<UPDATE> statement):
$album->update;
returns an instance of C<My::Schema::Result::Album> that can be used to access the data
in the new record:
- my $new_album = $schema->resultset('Album')->create({
+ my $new_album = $schema->resultset('Album')->create({
title => 'Wish You Were Here',
artist => 'Pink Floyd'
});
So, joins are a way of extending simple select statements to include
fields from other, related, tables. There are various types of joins,
depending on which combination of the data you wish to retrieve, see
-L<MySQL's doc on JOINs|http://dev.mysql.com/doc/refman/5.0/en/join.html>.
+MySQL's doc on JOINs: L<http://dev.mysql.com/doc/refman/5.0/en/join.html>.
=head1 DEFINING JOINS AND RELATIONSHIPS
=item *
-Each method starts with a "head2" statement of it's name.
+Each method starts with a "head2" statement of its name.
+
+Just the plain method name, not an example of how to call it, or a link.
+This is to ensure easy linking to method documentation from other POD.
=item *
-The header is followed by a one-item list.
+The header is followed by a two-item list. This contains a description
+of the arguments the method is expected to take, and an indication of
+what the method returns.
-The single item provides a list of all possible values for the
+The first item provides a list of all possible values for the
arguments of the method in order, separated by C<, >, preceded by the
text "Arguments: "
=item *
+%var - A hash variable (list of key/value pairs) - rarely used in DBIx::Class.
+
+Reading an argument as a hash variable will consume all subsequent
+method arguments; use with caution.
+
+=item *
+
+@var - An array variable (list of values).
+
+Reading an argument as an array variable will consume all subsequent
+method arguments; use with caution.
+
+=item *
+
? - Optional, should be placed after the argument type and name.
+ ## Correct
+ \%myhashref|\@myarrayref?
+
+ ## Wrong
+ \%myhashref?|\@myarrayref
+
+Applies to the entire argument.
+
+Optional arguments can be left out of method calls, unless the caller
+needs to pass in any of the following arguments, in which case the
+caller should pass C<undef> in place of the missing argument.
+
=item *
-| - Alternate argument types.
+| - Alternate argument content types.
+
+At least one of these must be supplied unless the argument is also
+marked optional.
=back
-NOTES:
+The second item starts with the text "Return value:". The remainder of
+the line is either the text "undefined", text describing the result of
+the method, or a variable with a descriptive name.
-If several arguments are optional, it is always possible to pass
-C<undef> as one optional argument in order to skip it and provide a
-value for the following ones. This does not need to be indicated in
-the Arguments line, it is assumed.
+ ## Good examples
+ =item Return value: undefined
+ =item Return value: A schema object
+ =item Return value: $classname
-The C<?> for optional arguments always applies to the entire argument
-value, not a particular type or argument.
+ ## Bad examples
+ =item Return value: The names
+
+"undefined" means the method does not deliberately return a value, and
+the caller should not use or rely on anything it does return. (Perl
+functions always return something, usually the result of the last code
+statement, if there is no explicit return statement.)
=item *
The description paragraph is followed by another list. Each item in
the list explains one of the possible argument/type combinations.
+This list may be omitted if the author feels that the variable names are
+self-explanatory enough to not require it. Use best judgement.
+
=item *
The argument list is followed by some examples of how to use the
Alternatively use the C<< storage->debug >> class method:-
- $class->storage->debug(1);
+ $schema->storage->debug(1);
To send the output somewhere else set debugfh:-
- $class->storage->debugfh(IO::File->new('/tmp/trace.out', 'w');
+ $schema->storage->debugfh(IO::File->new('/tmp/trace.out', 'w'));
Alternatively you can do this with the environment variable too:-
L<DBI> version 1.50 and L<DBD::Pg> 1.43 are known to work.
-=head2 ... Can't locate object method "source_name" via package ...
+=head2 Can't locate object method "source_name" via package
There's likely a syntax error in the table class referred to elsewhere
in this error message. In particular make sure that the package
#!/use/bin/perl
use My::Item;
-
+
my $item = My::Item->create({ name=>'Matt S. Trout' });
# If using grouping_column:
my $item = My::Item->create({ name=>'Matt S. Trout', group_id=>1 });
-
+
my $rs = $item->siblings();
my @siblings = $item->siblings();
-
+
my $sibling;
$sibling = $item->first_sibling();
$sibling = $item->last_sibling();
$sibling = $item->previous_sibling();
$sibling = $item->next_sibling();
-
+
$item->move_previous();
$item->move_next();
$item->move_first();
return defined $lsib ? $lsib : 0;
}
+# an optimized method to get the last sibling position value without inflating a row object
+sub _last_sibling_posval {
+ my $self = shift;
+ my $position_column = $self->position_column;
+
+ my $cursor = $self->next_siblings->search(
+ {},
+ { rows => 1, order_by => { '-desc' => $position_column }, select => $position_column },
+ )->cursor;
+
+ my ($pos) = $cursor->next;
+ return $pos;
+}
+
=head2 move_previous
$item->move_previous();
sub move_next {
my $self = shift;
- return 0 unless $self->next_siblings->count;
+ return 0 unless defined $self->_last_sibling_posval; # quick way to check for no more siblings
return $self->move_to ($self->_position + 1);
}
sub move_last {
my $self = shift;
- return $self->move_to( $self->_group_rs->count );
+ my $last_posval = $self->_last_sibling_posval;
+
+ return 0 unless defined $last_posval;
+
+ return $self->move_to( $self->_position_from_value ($last_posval) );
}
=head2 move_to
my( $self, $to_position ) = @_;
return 0 if ( $to_position < 1 );
- my $from_position = $self->_position;
- return 0 if ( $from_position == $to_position );
-
my $position_column = $self->position_column;
- {
- my $guard = $self->result_source->schema->txn_scope_guard;
+ my $guard;
- my ($direction, @between);
- if ( $from_position < $to_position ) {
- $direction = -1;
- @between = map { $self->_position_value ($_) } ( $from_position + 1, $to_position );
- }
- else {
- $direction = 1;
- @between = map { $self->_position_value ($_) } ( $to_position, $from_position - 1 );
- }
+ if ($self->is_column_changed ($position_column) ) {
+ # something changed our position, we have no idea where we
+ # used to be - requery without using discard_changes
+ # (we need only a specific column back)
- my $new_pos_val = $self->_position_value ($to_position); # record this before the shift
+ $guard = $self->result_source->schema->txn_scope_guard;
- # we need to null-position the moved row if the position column is part of a constraint
- if (grep { $_ eq $position_column } ( map { @$_ } (values %{{ $self->result_source->unique_constraints }} ) ) ) {
- $self->_ordered_internal_update({ $position_column => $self->null_position_value });
- }
+ my $cursor = $self->result_source->resultset->search(
+ $self->ident_condition,
+ { select => $position_column },
+ )->cursor;
- $self->_shift_siblings ($direction, @between);
- $self->_ordered_internal_update({ $position_column => $new_pos_val });
+ my ($pos) = $cursor->next;
+ $self->$position_column ($pos);
+ delete $self->{_dirty_columns}{$position_column};
+ }
- $guard->commit;
+ my $from_position = $self->_position;
+
+ if ( $from_position == $to_position ) { # FIXME this will not work for non-numeric order
+ $guard->commit if $guard;
+ return 0;
+ }
+
+ $guard ||= $self->result_source->schema->txn_scope_guard;
+
+ my ($direction, @between);
+ if ( $from_position < $to_position ) {
+ $direction = -1;
+ @between = map { $self->_position_value ($_) } ( $from_position + 1, $to_position );
+ }
+ else {
+ $direction = 1;
+ @between = map { $self->_position_value ($_) } ( $to_position, $from_position - 1 );
+ }
+
+ my $new_pos_val = $self->_position_value ($to_position); # record this before the shift
- return 1;
+ # we need to null-position the moved row if the position column is part of a constraint
+ if (grep { $_ eq $position_column } ( map { @$_ } (values %{{ $self->result_source->unique_constraints }} ) ) ) {
+ $self->_ordered_internal_update({ $position_column => $self->null_position_value });
}
+
+ $self->_shift_siblings ($direction, @between);
+ $self->_ordered_internal_update({ $position_column => $new_pos_val });
+
+ $guard->commit;
+ return 1;
}
=head2 move_to_group
my $position_column = $self->position_column;
return 0 if ( defined($to_position) and $to_position < 1 );
- if ($self->_is_in_group ($to_group) ) {
- return 0 if not defined $to_position;
- return $self->move_to ($to_position);
+
+ # check if someone changed the _grouping_columns - this will
+ # prevent _is_in_group working, so we need to requery the db
+ # for the original values
+ my (@dirty_cols, %values, $guard);
+ for ($self->_grouping_columns) {
+ $values{$_} = $self->get_column ($_);
+ push @dirty_cols, $_ if $self->is_column_changed ($_);
}
- {
- my $guard = $self->result_source->schema->txn_scope_guard;
+ # re-query only the dirty columns, and restore them on the
+ # object (subsequent code will update them to the correct
+ # after-move values)
+ if (@dirty_cols) {
+ $guard = $self->result_source->schema->txn_scope_guard;
- # Move to end of current group to adjust siblings
- $self->move_last;
+ my $cursor = $self->result_source->resultset->search(
+ $self->ident_condition,
+ { select => \@dirty_cols },
+ )->cursor;
- $self->set_inflated_columns({ %$to_group, $position_column => undef });
- my $new_group_count = $self->_group_rs->count;
+ my @original_values = $cursor->next;
+ $self->set_inflated_columns ({ %values, map { $_ => shift @original_values } (@dirty_cols) });
+ delete $self->{_dirty_columns}{$_} for (@dirty_cols);
+ }
- if ( not defined($to_position) or $to_position > $new_group_count) {
- $self->set_column(
- $position_column => $new_group_count
- ? $self->_next_position_value ( $self->last_sibling->get_column ($position_column) ) # FIXME - no need to inflate last_sibling
- : $self->_initial_position_value
- );
- }
- else {
- my $bumped_pos_val = $self->_position_value ($to_position);
- my @between = ($to_position, $new_group_count);
- $self->_shift_siblings (1, @between); #shift right
- $self->set_column( $position_column => $bumped_pos_val );
- }
+ if ($self->_is_in_group ($to_group) ) {
+ my $ret;
+ if (defined $to_position) {
+ $ret = $self->move_to ($to_position);
+ }
- $self->_ordered_internal_update;
+ $guard->commit if $guard;
+ return $ret||0;
+ }
- $guard->commit;
+ $guard ||= $self->result_source->schema->txn_scope_guard;
- return 1;
+ # Move to end of current group to adjust siblings
+ $self->move_last;
+
+ $self->set_inflated_columns({ %$to_group, $position_column => undef });
+ my $new_group_last_posval = $self->_last_sibling_posval;
+ my $new_group_last_position = $self->_position_from_value (
+ $new_group_last_posval
+ );
+
+ if ( not defined($to_position) or $to_position > $new_group_last_position) {
+ $self->set_column(
+ $position_column => $new_group_last_position
+ ? $self->_next_position_value ( $new_group_last_posval )
+ : $self->_initial_position_value
+ );
+ }
+ else {
+ my $bumped_pos_val = $self->_position_value ($to_position);
+ my @between = ($to_position, $new_group_last_position);
+ $self->_shift_siblings (1, @between); #shift right
+ $self->set_column( $position_column => $bumped_pos_val );
}
+
+ $self->_ordered_internal_update;
+
+ $guard->commit;
+
+ return 1;
}
=head2 insert
my $position_column = $self->position_column;
unless ($self->get_column($position_column)) {
- my $lsib = $self->last_sibling; # FIXME - no need to inflate last_sibling
+ my $lsib_posval = $self->_last_sibling_posval;
$self->set_column(
- $position_column => ($lsib
- ? $self->_next_position_value ( $lsib->get_column ($position_column) )
+ $position_column => (defined $lsib_posval
+ ? $self->_next_position_value ( $lsib_posval )
: $self->_initial_position_value
)
);
# this is set by _ordered_internal_update()
return $self->next::method(@_) if $self->{_ORDERED_INTERNAL_UPDATE};
- my $upd = shift;
- $self->set_inflated_columns($upd) if $upd;
- my %changes = $self->get_dirty_columns;
- $self->discard_changes;
-
my $position_column = $self->position_column;
+ my @ordering_columns = ($self->_grouping_columns, $position_column);
+
+
+ # these steps are necessary to keep the external appearance of
+ # ->update($upd) so that other things overloading update() will
+ # work properly
+ my %original_values = $self->get_columns;
+ my %existing_changes = $self->get_dirty_columns;
+
+ # See if any of the *supplied* changes would affect the ordering
+ # The reason this is so contrived, is that we want to leverage
+ # the datatype aware value comparing, while at the same time
+ # keep the original value intact (it will be updated later by the
+ # corresponding routine)
+
+ my %upd = %{shift || {}};
+ my %changes = %existing_changes;
+
+ for (@ordering_columns) {
+ next unless exists $upd{$_};
+
+ # we do not want to keep propagating this to next::method
+ # as it will be a done deal by the time get there
+ my $value = delete $upd{$_};
+ $self->set_inflated_columns ({ $_ => $value });
+
+ # see if an update resulted in a dirty column
+ # it is important to preserve the old value, as it
+ # will be needed to carry on a successful move()
+ # operation without re-querying the database
+ if ($self->is_column_changed ($_) && not exists $existing_changes{$_}) {
+ $changes{$_} = $value;
+ $self->set_inflated_columns ({ $_ => $original_values{$_} });
+ delete $self->{_dirty_columns}{$_};
+ }
+ }
# if nothing group/position related changed - short circuit
- if (not grep { exists $changes{$_} } ($self->_grouping_columns, $position_column) ) {
- return $self->next::method( \%changes, @_ );
+ if (not grep { exists $changes{$_} } ( @ordering_columns ) ) {
+ return $self->next::method( \%upd, @_ );
}
{
# create new_group by taking the current group and inserting changes
my $new_group = {$self->_grouping_clause};
foreach my $col (keys %$new_group) {
- if (exists $changes{$col}) {
- $new_group->{$col} = delete $changes{$col}; # don't want to pass this on to next::method
- }
+ $new_group->{$col} = $changes{$col} if exists $changes{$col};
}
$self->move_to_group(
$new_group,
(exists $changes{$position_column}
- # The FIXME bit contradicts the documentation: when changing groups without supplying explicit
- # positions in move_to_group(), we push the item to the end of the group.
- # However when I was rewriting this, the position from the old group was clearly passed to the new one
+ # The FIXME bit contradicts the documentation: POD states that
+ # when changing groups without supplying explicit positions in
+ # move_to_group(), we push the item to the end of the group.
+ # However when I was rewriting this, the position from the old
+ # group was clearly passed to the new one
# Probably needs to go away (by ribasushi)
- ? delete $changes{$position_column} # means there was a position change supplied with the update too
- : $self->_position # FIXME!
+ ? $changes{$position_column} # means there was a position change supplied with the update too
+ : $self->_position # FIXME! (replace with undef)
),
);
}
elsif (exists $changes{$position_column}) {
- $self->move_to(delete $changes{$position_column});
+ $self->move_to($changes{$position_column});
}
my @res;
my $want = wantarray();
if (not defined $want) {
- $self->next::method( \%changes, @_ );
+ $self->next::method( \%upd, @_ );
}
elsif ($want) {
- @res = $self->next::method( \%changes, @_ );
+ @res = $self->next::method( \%upd, @_ );
}
else {
- $res[0] = $self->next::method( \%changes, @_ );
+ $res[0] = $self->next::method( \%upd, @_ );
}
$guard->commit;
return $self->get_column ($self->position_column);
}
+=head2 _position_from_value
+
+ my $num_pos = $item->_position_from_value ( $pos_value )
+
+Returns the B<absolute numeric position> of an object with a B<position
+value> set to C<$pos_value>. By default simply returns C<$pos_value>.
+
+=cut
+sub _position_from_value {
+ my ($self, $val) = @_;
+
+ return 0 unless defined $val;
+
+# #the right way to do this
+# return $self -> _group_rs
+# -> search({ $self->position_column => { '<=', $val } })
+# -> count
+
+ return $val;
+}
+
=head2 _position_value
my $pos_value = $item->_position_value ( $pos )
# position column is part of a unique constraint, and do a
# one-by-one update if this is the case
- if (grep { $_ eq $position_column } ( map { @$_ } (values %{{ $self->result_source->unique_constraints }} ) ) ) {
+ my $rsrc = $self->result_source;
+
+ if (grep { $_ eq $position_column } ( map { @$_ } (values %{{ $rsrc->unique_constraints }} ) ) ) {
+
+ my @pcols = $rsrc->primary_columns;
+ my $cursor = $shift_rs->search ({}, { order_by => { "-$ord", $position_column }, columns => \@pcols } )->cursor;
+ my $rs = $self->result_source->resultset;
+
+ while (my @pks = $cursor->next ) {
+
+ my $cond;
+ for my $i (0.. $#pcols) {
+ $cond->{$pcols[$i]} = $pks[$i];
+ }
- my $rs = $shift_rs->search ({}, { order_by => { "-$ord", $position_column } } );
- # FIXME - no need to inflate each row
- while (my $r = $rs->next) {
- $r->_ordered_internal_update ({ $position_column => \ "$position_column $op 1" } );
+ $rs->search($cond)->update ({ $position_column => \ "$position_column $op 1" } );
}
}
else {
=head2 _grouping_clause
This method returns one or more name=>value pairs for limiting a search
-by the grouping column(s). If the grouping column is not
-defined then this will return an empty list.
+by the grouping column(s). If the grouping column is not defined then
+this will return an empty list.
=cut
sub _grouping_clause {
=cut
-sub _ident_values {
- my ($self) = @_;
- return (map { $self->{_column_data}{$_} } $self->primary_columns);
-}
-
-=head2 discard_changes ($attrs)
-
-Re-selects the row from the database, losing any changes that had
-been made.
-
-This method can also be used to refresh from storage, retrieving any
-changes made since the row was last read from storage.
-
-$attrs is expected to be a hashref of attributes suitable for passing as the
-second argument to $resultset->search($cond, $attrs);
-
-=cut
-
-sub discard_changes {
- my ($self, $attrs) = @_;
- delete $self->{_dirty_columns};
- return unless $self->in_storage; # Don't reload if we aren't real!
-
- if( my $current_storage = $self->get_from_storage($attrs)) {
-
- # Set $self to the current.
- %$self = %$current_storage;
-
- # Avoid a possible infinite loop with
- # sub DESTROY { $_[0]->discard_changes }
- bless $current_storage, 'Do::Not::Exist';
-
- return $self;
- } else {
- $self->in_storage(0);
- return $self;
- }
-}
-
=head2 id
Returns the primary key(s) for a row. Can't be called as
return (wantarray ? @pk : $pk[0]);
}
+sub _ident_values {
+ my ($self) = @_;
+ return (map { $self->{_column_data}{$_} } $self->primary_columns);
+}
+
=head2 ID
Returns a unique id string identifying a row object by primary key.
Used by L<DBIx::Class::CDBICompat::LiveObjectIndex> and
L<DBIx::Class::ObjectCache>.
+=over
+
+=item WARNING
+
+The default C<_create_ID> method used by this function orders the returned
+values by the alphabetical order of the primary column names, B<unlike>
+the L</id> method, which follows the same order in which columns were fed
+to L<DBIx::Class::ResultSource/set_primary_key> (see the example below).
+
+=back
+
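+For instance (a sketch, assuming a source with a two-column primary key):
+
+  __PACKAGE__->set_primary_key(qw/year title/);
+
+  $row->id;  # returns the values in the declared (year, title) order
+  $row->ID;  # stringifies the columns in alphabetical (title, year) order
+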
=cut
sub ID {
All helper methods are called similar to the following template:
__PACKAGE__->$method_name('relname', 'Foreign::Class', \%cond | \@cond, \%attrs);
-
+
Both C<$cond> and C<$attrs> are optional. Pass C<undef> for C<$cond> if
you want to use the default value for it, but still want to set C<\%attrs>.
'My::DBIC::Schema::Book',
{ 'foreign.author_id' => 'self.id' },
);
-
+
# OR (similar result, assuming related_class is storing our PK, in "author")
# (the "author" is guessed at from "Author" in the class namespace)
My::DBIC::Schema::Author->has_many(
use Sub::Name ();
use Class::Inspector ();
+our %_pod_inherit_config =
+ (
+ class_map => { 'DBIx::Class::Relationship::Accessor' => 'DBIx::Class::Relationship' }
+ );
+
sub register_relationship {
my ($class, $rel, $info) = @_;
if (my $acc_type = $info->{attrs}{accessor}) {
} elsif (exists $self->{_relationship_data}{$rel}) {
return $self->{_relationship_data}{$rel};
} else {
- my $cond = $self->result_source->resolve_condition(
+ my $cond = $self->result_source->_resolve_condition(
$rel_info->{cond}, $rel, $self
);
if ($rel_info->{attrs}->{undef_on_null_fk}){
- return unless ref($cond) eq 'HASH';
- return if grep { not defined } values %$cond;
+ return undef unless ref($cond) eq 'HASH';
+ return undef if grep { not defined $_ } values %$cond;
}
my $val = $self->find_related($rel, {}, {});
- return unless $val;
+ return $val unless $val; # $val instead of undef so that null-objects can go through
+
return $self->{_relationship_data}{$rel} = $val;
}
};
An arrayref containing a list of accessors in the foreign class to create in
the main class. If, for example, you do the following:
-
+
MyDB::Schema::CD->might_have(liner_notes => 'MyDB::Schema::LinerNotes',
undef, {
proxy => [ qw/notes/ ],
});
-
+
Then, assuming MyDB::Schema::LinerNotes has an accessor named notes, you can do:
my $cd = MyDB::Schema::CD->find(1);
$cd->notes('Notes go here'); # set notes -- LinerNotes object is
# created if it doesn't exist
-
+
=item accessor
Specifies the type of accessor that should be created for the relationship.
$self->throw_exception("Can't call *_related as class methods")
unless ref $self;
my $rel = shift;
- my $rel_obj = $self->relationship_info($rel);
+ my $rel_info = $self->relationship_info($rel);
$self->throw_exception( "No such relationship ${rel}" )
- unless $rel_obj;
-
+ unless $rel_info;
+
return $self->{related_resultsets}{$rel} ||= do {
my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
- $attrs = { %{$rel_obj->{attrs} || {}}, %$attrs };
+ $attrs = { %{$rel_info->{attrs} || {}}, %$attrs };
$self->throw_exception( "Invalid query: @_" )
if (@_ > 1 && (@_ % 2 == 1));
my $query = ((@_ > 1) ? {@_} : shift);
my $source = $self->result_source;
- my $cond = $source->resolve_condition(
- $rel_obj->{cond}, $rel, $self
+ my $cond = $source->_resolve_condition(
+ $rel_info->{cond}, $rel, $self
);
if ($cond eq $DBIx::Class::ResultSource::UNRESOLVABLE_CONDITION) {
my $reverse = $source->reverse_relationship_info($rel);
sub set_from_related {
my ($self, $rel, $f_obj) = @_;
- my $rel_obj = $self->relationship_info($rel);
- $self->throw_exception( "No such relationship ${rel}" ) unless $rel_obj;
- my $cond = $rel_obj->{cond};
+ my $rel_info = $self->relationship_info($rel);
+ $self->throw_exception( "No such relationship ${rel}" ) unless $rel_info;
+ my $cond = $rel_info->{cond};
$self->throw_exception(
"set_from_related can only handle a hash condition; the ".
"condition for $rel is of type ".
(ref $cond ? ref $cond : 'plain scalar')
) unless ref $cond eq 'HASH';
if (defined $f_obj) {
- my $f_class = $rel_obj->{class};
+ my $f_class = $rel_info->{class};
$self->throw_exception( "Object $f_obj isn't a ".$f_class )
unless Scalar::Util::blessed($f_obj) and $f_obj->isa($f_class);
}
$self->set_columns(
- $self->result_source->resolve_condition(
- $rel_obj->{cond}, $f_obj, $rel));
+ $self->result_source->_resolve_condition(
+ $rel_info->{cond}, $f_obj, $rel));
return 1;
}
=over 4
-=item Arguments: (\@hashrefs | \@objs)
+=item Arguments: (\@hashrefs | \@objs), $link_vals?
=back
$actor->set_roles(\@roles);
# Replaces all of $actor's previous roles with the two named
+ $actor->set_roles(\@roles, { salary => 15_000_000 });
+ # Sets a column in the link table for all roles
+
+
Replace all the related objects with the given reference to a list of
objects. This does a C<delete> B<on the link table resultset> to remove the
association between the current object and all related objects, then calls
use strict;
use warnings;
+our %_pod_inherit_config =
+ (
+ class_map => { 'DBIx::Class::Relationship::BelongsTo' => 'DBIx::Class::Relationship' }
+ );
+
sub belongs_to {
my ($class, $rel, $f_class, $cond, $attrs) = @_;
return 1;
}
-=head1 AUTHORS
-
-Alexander Hartmaier <Alexander.Hartmaier@t-systems.at>
-
-Matt S. Trout <mst@shadowcatsystems.co.uk>
+# Attempt to remove the POD so it (maybe) falls off the indexer
-=cut
+#=head1 AUTHORS
+#
+#Alexander Hartmaier <Alexander.Hartmaier@t-systems.at>
+#
+#Matt S. Trout <mst@shadowcatsystems.co.uk>
+#
+#=cut
1;
use strict;
use warnings;
+our %_pod_inherit_config =
+ (
+ class_map => { 'DBIx::Class::Relationship::CascadeActions' => 'DBIx::Class::Relationship' }
+ );
+
sub delete {
my ($self, @rest) = @_;
return $self->next::method(@rest) unless ref $self;
use strict;
use warnings;
+our %_pod_inherit_config =
+ (
+ class_map => { 'DBIx::Class::Relationship::HasMany' => 'DBIx::Class::Relationship' }
+ );
+
sub has_many {
my ($class, $rel, $f_class, $cond, $attrs) = @_;
$class->throw_exception(
"No such column ${f_key} on foreign class ${f_class} ($guess)"
) if $f_class_loaded && !$f_class->has_column($f_key);
-
+
$cond = { "foreign.${f_key}" => "self.${pri}" };
}
use strict;
use warnings;
+our %_pod_inherit_config =
+ (
+ class_map => { 'DBIx::Class::Relationship::HasOne' => 'DBIx::Class::Relationship' }
+ );
+
sub might_have {
shift->_has_one('LEFT' => @_);
}
use strict;
use warnings;
-use warnings::register;
+
+use Carp::Clan qw/^DBIx::Class/;
use Sub::Name ();
+our %_pod_inherit_config =
+ (
+ class_map => { 'DBIx::Class::Relationship::ManyToMany' => 'DBIx::Class::Relationship' }
+ );
+
sub many_to_many {
my ($class, $meth, $rel, $f_rel, $rel_attrs) = @_;
for ($add_meth, $remove_meth, $set_meth, $rs_meth) {
if ( $class->can ($_) ) {
- warnings::warnif(<<"EOW")
+ carp (<<"EOW") unless $ENV{DBIC_OVERWRITE_HELPER_METHODS_OK};
+
***************************************************************************
-The many-to-many relationship $meth is trying to create a utility method called
-$_. This will overwrite the existing method on $class. You almost certainly
-want to rename your method or the many-to-many relationship, as your method
-will not be callable (it will use the one from the relationship instead.)
+The many-to-many relationship '$meth' is trying to create a utility method
+called $_.
+This will completely overwrite an existing method of the same name on class
+$class.
-To disable this warning add the following to $class
+You almost certainly want to rename your method or the many-to-many
+relationship, as the functionality of the original method will not be
+accessible anymore.
- no warnings 'DBIx::Class::Relationship::ManyToMany';
+To disable this warning, set the environment variable
+DBIC_OVERWRITE_HELPER_METHODS_OK to a true value.
***************************************************************************
EOW
my $obj;
if (ref $_[0]) {
if (ref $_[0] eq 'HASH') {
- $obj = $f_rel_rs->create($_[0]);
+ $obj = $f_rel_rs->find_or_create($_[0]);
} else {
$obj = $_[0];
}
} else {
- $obj = $f_rel_rs->create({@_});
+ $obj = $f_rel_rs->find_or_create({@_});
}
my $link_vals = @_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {};
"{$set_meth} needs a list of objects or hashrefs"
);
my @to_set = (ref($_[0]) eq 'ARRAY' ? @{ $_[0] } : @_);
- $self->search_related($rel, {})->delete;
- $self->$add_meth($_) for (@to_set);
+ # if there is a where clause in the attributes, ensure we only delete
+ # rows that are within the where restriction
+ if ($rel_attrs && $rel_attrs->{where}) {
+ $self->search_related( $rel, $rel_attrs->{where},{join => $f_rel})->delete;
+ } else {
+ $self->search_related( $rel, {} )->delete;
+ }
+ # add in the set rel objects
+ $self->$add_meth($_, ref($_[1]) ? $_[1] : {}) for (@to_set);
};
my $remove_meth_name = join '::', $class, $remove_meth;
my $obj = shift;
my $rel_source = $self->search_related($rel)->result_source;
my $cond = $rel_source->relationship_info($f_rel)->{cond};
- my $link_cond = $rel_source->resolve_condition(
+ my $link_cond = $rel_source->_resolve_condition(
$cond, $obj, $f_rel
);
$self->search_related($rel, $link_cond)->delete;
use Sub::Name ();
use base qw/DBIx::Class/;
+our %_pod_inherit_config =
+ (
+ class_map => { 'DBIx::Class::Relationship::ProxyMethods' => 'DBIx::Class::Relationship' }
+ );
+
sub register_relationship {
my ($class, $rel, $info) = @_;
if (my $proxy_list = $info->{attrs}{proxy}) {
=head1 NAME
-DBIx::Class::ResultClass::HashRefInflator
+DBIx::Class::ResultClass::HashRefInflator - Get raw hashrefs from a resultset
=head1 SYNOPSIS
# if there is at least one defined column consider the resultset real
# (and not an empty has_many rel containing one empty hashref)
+ # an empty arrayref is an empty multi-sub-prefetch - don't consider
+ # those either
for (values %$hash) {
- return $hash if defined $_;
+ if (ref $_ eq 'ARRAY') {
+ return $hash if @$_;
+ }
+ elsif (defined $_) {
+ return $hash;
+ }
}
return undef;
HashRefInflator only affects resultsets at inflation time, and prefetch causes
relations to be inflated when the master C<$artist> row is inflated.
+=item *
+
+Column value inflation, e.g., using modules like
+L<DBIx::Class::InflateColumn::DateTime>, is not performed.
+The returned hash contains the raw database values (see the example below).
+
=back
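+
+For example (a sketch; the C<created_on> datetime column is purely
+illustrative):
+
+  my $rs = $schema->resultset('CD');
+  $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
+
+  my $hashref = $rs->first;
+
+  # the raw value as stored in the database, not a DateTime object,
+  # even if InflateColumn::DateTime is set up for this column
+  print $hashref->{created_on};
+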
=cut
ResultSet. The new one will contain all the conditions of the
original, plus any new conditions added in the C<search> call.
-A ResultSet is also an iterator. L</next> is used to return all the
-L<DBIx::Class::Row>s the ResultSet represents.
+A ResultSet also incorporates an implicit iterator. L</next> and L</reset>
+can be used to walk through all the L<DBIx::Class::Row>s the ResultSet
+represents.
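+
+For example, a minimal iteration sketch over a hypothetical C<$cd_rs>
+resultset:
+
+  while (my $cd = $cd_rs->next) {
+    print $cd->title, "\n";
+  }
+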
The query that the ResultSet represents is B<only> executed against
the database when these methods are called:
+L</find>, L</next>, L</all>, L</first>, L</single> and L</count>.
-=over
-
-=item L</find>
-
-=item L</next>
-
-=item L</all>
-
-=item L</count>
-
-=item L</single>
-
-=item L</first>
-
-=back
-
-=head1 EXAMPLES
+=head1 EXAMPLES
=head2 Chaining resultsets
});
}
+=head3 Resolving conditions and attributes
+
+When a resultset is chained from another resultset, conditions and
+attributes with the same keys need resolving.
+
+The L</join>, L</prefetch>, L</+select> and L</+as> attributes are
+merged into the existing ones from the original resultset.
+
+The L</where> and L</having> attributes, and any search conditions, are
+merged with an SQL C<AND> into the existing condition from the original
+resultset.
+
+All other attributes are overridden by any new ones supplied in the
+search attributes.
+
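+For example (a sketch using the usual C<Artist>/C<CD> example classes):
+
+  my $cds_2001 = $schema->resultset('CD')->search(
+    { 'me.year' => 2001 },
+    { join => 'artist', rows => 10 },
+  );
+
+  # the new condition is ANDed with 'me.year' => 2001, the 'artist'
+  # join is inherited from the original resultset, and rows => 2
+  # overrides the original rows => 10
+  my $fred_cds_2001 = $cds_2001->search(
+    { 'artist.name' => 'Fred Bloggs' },
+    { rows => 2 },
+  );
+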
=head2 Multiple queries
Since a resultset just defines a query, you can do all sorts of
return $class->new_result(@_) if ref $class;
my ($source, $attrs) = @_;
- $source = $source->handle
+ $source = $source->handle
unless $source->isa('DBIx::Class::ResultSourceHandle');
$attrs = { %{$attrs||{}} };
sub search_rs {
my $self = shift;
+ # Special-case handling for (undef, undef).
+ if ( @_ == 2 && !defined $_[1] && !defined $_[0] ) {
+ pop(@_); pop(@_);
+ }
+
my $attrs = {};
$attrs = pop(@_) if @_ > 1 and ref $_[$#_] eq 'HASH';
my $our_attrs = { %{$self->{attrs}} };
unless (
(@_ && defined($_[0])) # @_ == () or (undef)
- ||
+ ||
(keys %$attrs # empty attrs or only 'safe' attrs
&& List::Util::first { !$safe{$_} } keys %$attrs)
) {
my $new_attrs = { %{$our_attrs}, %{$attrs} };
# merge new attrs into inherited
- foreach my $key (qw/join prefetch +select +as/) {
+ foreach my $key (qw/join prefetch +select +as bind/) {
next unless exists $attrs->{$key};
$new_attrs->{$key} = $self->_merge_attr($our_attrs->{$key}, $attrs->{$key});
}
resultset query.
CAVEAT: C<search_literal> is provided for Class::DBI compatibility and should
-only be used in that context. There are known problems using C<search_literal>
-in chained queries; it can result in bind values in the wrong order. See
-L<DBIx::Class::Manual::Cookbook/Searching> and
+only be used in that context. C<search_literal> is a convenience method.
+It is equivalent to calling C<< $rs->search(\[ ... ]) >>, but if you want to
+ensure columns are bound correctly, use C<search>.
+
+Example of how to use C<search> instead of C<search_literal>:
+
+ my @cds = $cd_rs->search_literal('cdid = ? AND (artist = ? OR artist = ?)', (2, 1, 2));
+ my @cds = $cd_rs->search(\[ 'cdid = ? AND (artist = ? OR artist = ?)', [ 'cdid', 2 ], [ 'artist', 1 ], [ 'artist', 2 ] ]);
+
+
+See L<DBIx::Class::Manual::Cookbook/Searching> and
L<DBIx::Class::Manual::FAQ/Searching> for searching techniques that do not
require C<search_literal>.
=cut
sub search_literal {
- my ($self, $cond, @vals) = @_;
- my $attrs = (ref $vals[$#vals] eq 'HASH' ? { %{ pop(@vals) } } : {});
- $attrs->{bind} = [ @{$self->{attrs}{bind}||[]}, @vals ];
- return $self->search(\$cond, $attrs);
+ my ($self, $sql, @bind) = @_;
+ my $attr;
+ if ( @bind && ref($bind[-1]) eq 'HASH' ) {
+ $attr = pop @bind;
+ }
+ return $self->search(\[ $sql, map [ __DUMMY__ => $_ ], @bind ], ($attr || () ));
}
=head2 find
&& ($info = $self->result_source->relationship_info($key))) {
my $val = delete $input_query->{$key};
next KEY if (ref($val) eq 'ARRAY'); # has_many for multi_create
- my $rel_q = $self->result_source->resolve_condition(
+ my $rel_q = $self->result_source->_resolve_condition(
$info->{cond}, $val, $key
);
die "Can't handle OR join condition in find" if ref($rel_q) eq 'ARRAY';
my $unique_query = $self->_build_unique_query($input_query, \@unique_cols);
$query = $self->_add_alias($unique_query, $alias);
}
+ elsif ($self->{attrs}{accessor} and $self->{attrs}{accessor} eq 'single') {
+ # This means that we got here after a merger of relationship conditions
+ # in ::Relationship::Base::search_related (the row method), and furthermore
+ # the relationship is of the 'single' type. This means that the condition
+ # provided by the relationship (already attached to $self) is sufficient,
+  # as there can be only one row in the database that would satisfy the
+ # relationship
+ }
else {
my @unique_queries = $self->_unique_queries($input_query, $attrs);
$query = @unique_queries
}
# Run the query
- if (keys %$attrs) {
- my $rs = $self->search($query, $attrs);
- if (keys %{$rs->_resolved_attrs->{collapse}}) {
- my $row = $rs->next;
- carp "Query returned more than one row" if $rs->next;
- return $row;
- }
- else {
- return $rs->single;
- }
+ my $rs = $self->search ($query, $attrs);
+ if (keys %{$rs->_resolved_attrs->{collapse}}) {
+ my $row = $rs->next;
+ carp "Query returned more than one row" if $rs->next;
+ return $row;
}
else {
- if (keys %{$self->_resolved_attrs->{collapse}}) {
- my $rs = $self->search($query);
- my $row = $rs->next;
- carp "Query returned more than one row" if $rs->next;
- return $row;
- }
- else {
- return $self->single($query);
- }
+ return $rs->single;
}
}
sub cursor {
my ($self) = @_;
- my $attrs = { %{$self->_resolved_attrs} };
+ my $attrs = $self->_resolved_attrs_copy;
+
return $self->{cursor}
||= $self->result_source->storage->select($attrs->{from}, $attrs->{select},
$attrs->{where},$attrs);
Query returned more than one row
-In this case, you should be using L</first> or L</find> instead, or if you really
-know what you are doing, use the L</rows> attribute to explicitly limit the size
+In this case, you should be using L</next> or L</find> instead, or if you really
+know what you are doing, use the L</rows> attribute to explicitly limit the size
of the resultset.
+This method will also throw an exception if it is called on a resultset prefetching
+has_many, as such a prefetch implies fetching multiple rows from the database in
+order to assemble the resulting object.
+
=back
=cut
$self->throw_exception('single() only takes search conditions, no attributes. You want ->search( $cond, $attrs )->single()');
}
- my $attrs = { %{$self->_resolved_attrs} };
+ my $attrs = $self->_resolved_attrs_copy;
+
+ if (keys %{$attrs->{collapse}}) {
+ $self->throw_exception(
+ 'single() can not be used on resultsets prefetching has_many. Use find( \%cond ) or next() instead'
+ );
+ }
+
if ($where) {
if (defined $attrs->{where}) {
$attrs->{where} = {
return (@data ? ($self->_construct_object(@data))[0] : undef);
}
+
# _is_unique_query
#
# Try to determine if the specified query is guaranteed to be unique, based on
if (ref $query eq 'ARRAY') {
foreach my $subquery (@$query) {
next unless ref $subquery; # -or
-# warn "ARRAY: " . Dumper $subquery;
$collapsed = $self->_collapse_query($subquery, $collapsed);
}
}
elsif (ref $query eq 'HASH') {
if (keys %$query and (keys %$query)[0] eq '-and') {
foreach my $subquery (@{$query->{-and}}) {
-# warn "HASH: " . Dumper $subquery;
$collapsed = $self->_collapse_query($subquery, $collapsed);
}
}
else {
-# warn "LEAF: " . Dumper $query;
foreach my $col (keys %$query) {
my $value = $query->{$col};
$collapsed->{$col}{$value}++;
For more information, see L<DBIx::Class::Manual::Cookbook>.
+This method is deprecated and will be removed in 0.09. Use L</search()>
+instead. An example conversion is:
+
+ ->search_like({ foo => 'bar' });
+
+ # Becomes
+
+ ->search({ foo => { like => 'bar' } });
+
=cut
sub search_like {
my $class = shift;
+ carp (
+ 'search_like() is deprecated and will be removed in DBIC version 0.09.'
+ .' Instead use ->search({ x => { -like => "y%" } })'
+ .' (note the outer pair of {}s - they are important!)'
+ );
my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
my $query = ref $_[0] eq 'HASH' ? { %{shift()} }: {@_};
$query->{$_} = { 'like' => $query->{$_} } for keys %$query;
sub _construct_object {
my ($self, @row) = @_;
- my $info = $self->_collapse_result($self->{_attrs}{as}, \@row);
+
+ my $info = $self->_collapse_result($self->{_attrs}{as}, \@row)
+ or return ();
my @new = $self->result_class->inflate_result($self->result_source, @$info);
@new = $self->{_attrs}{record_filter}->(@new)
if exists $self->{_attrs}{record_filter};
sub _collapse_result {
my ($self, $as_proto, $row) = @_;
+ # if the first row that ever came in is totally empty - this means we got
+ # hit by a smooth^Wempty left-joined resultset. Just noop in that case
+ # instead of producing a {}
+ #
+ my $has_def;
+ for (@$row) {
+ if (defined $_) {
+ $has_def++;
+ last;
+ }
+ }
+ return undef unless $has_def;
+
my @copy = @$row;
# 'foo' => [ undef, 'foo' ]
do { # no need to check anything at the front, we always want the first row
my %const;
-
+
foreach my $this_as (@construct_as) {
$const{$this_as->[0]||''}{$this_as->[1]} = shift(@copy);
}
foreach my $p (@parts) {
$target = $target->[1]->{$p} ||= [];
$cur .= ".${p}";
- if ($cur eq ".${key}" && (my @ckey = @{$collapse{$cur}||[]})) {
+ if ($cur eq ".${key}" && (my @ckey = @{$collapse{$cur}||[]})) {
# collapsing at this point and on final part
my $pos = $collapse_pos{$cur};
CK: foreach my $ck (@ckey) {
=back
-An accessor for the class to use when creating row objects. Defaults to
-C<< result_source->result_class >> - which in most cases is the name of the
+An accessor for the class to use when creating row objects. Defaults to
+C<< result_source->result_class >> - which in most cases is the name of the
L<"table"|DBIx::Class::Manual::Glossary/"ResultSource"> class.
+Note that changing the result_class will also remove any components
+that were originally loaded in the source class via
+L<DBIx::Class::ResultSource/load_components>. Any overloaded methods
+in the original source class will not run.
+
=cut
sub result_class {
=back
Performs an SQL C<COUNT> with the same query as the resultset was built
-with to find the number of elements. If passed arguments, does a search
-on the resultset and counts the results of that.
-
-Note: When using C<count> with C<group_by>, L<DBIx::Class> emulates C<GROUP BY>
-using C<COUNT( DISTINCT( columns ) )>. Some databases (notably SQLite) do
-not support C<DISTINCT> with multiple columns. If you are using such a
-database, you should only use columns from the main table in your C<group_by>
-clause.
+with to find the number of elements. Passing arguments is equivalent to
+C<< $rs->search ($cond, \%attrs)->count >>
=cut
my $self = shift;
return $self->search(@_)->count if @_ and defined $_[0];
return scalar @{ $self->get_cache } if $self->get_cache;
- my $count = $self->_count;
- return 0 unless $count;
- # need to take offset from resolved attrs
+ my $attrs = $self->_resolved_attrs_copy;
- $count -= $self->{_attrs}{offset} if $self->{_attrs}{offset};
- $count = $self->{attrs}{rows} if
- $self->{attrs}{rows} and $self->{attrs}{rows} < $count;
+ # this is a little optimization - it is faster to do the limit
+ # adjustments in software, instead of a subquery
+ my $rows = delete $attrs->{rows};
+ my $offset = delete $attrs->{offset};
+
+ my $crs;
+ if ($self->_has_resolved_attr (qw/collapse group_by/)) {
+ $crs = $self->_count_subq_rs ($attrs);
+ }
+ else {
+ $crs = $self->_count_rs ($attrs);
+ }
+ my $count = $crs->next;
+
+ $count -= $offset if $offset;
+ $count = $rows if $rows and $rows < $count;
$count = 0 if ($count < 0);
+
return $count;
}
-sub _count { # Separated out so pager can get the full count
+=head2 count_rs
+
+=over 4
+
+=item Arguments: $cond, \%attrs?
+
+=item Return Value: $count_rs
+
+=back
+
+Same as L</count> but returns a L<DBIx::Class::ResultSetColumn> object.
+This can be very handy for subqueries:
+
+ ->search( { amount => $some_rs->count_rs->as_query } )
+
+As with regular resultsets, the SQL query will only be executed once the
+resultset is accessed via L</next> or L</all>. Either returns the same
+single value obtainable via L</count>.
+
+=cut
+
+sub count_rs {
my $self = shift;
- my $select = { count => '*' };
-
- my $attrs = { %{$self->_resolved_attrs} };
- if (my $group_by = delete $attrs->{group_by}) {
- delete $attrs->{having};
- my @distinct = (ref $group_by ? @$group_by : ($group_by));
- # todo: try CONCAT for multi-column pk
- my @pk = $self->result_source->primary_columns;
- if (@pk == 1) {
- my $alias = $attrs->{alias};
- foreach my $column (@distinct) {
- if ($column =~ qr/^(?:\Q${alias}.\E)?$pk[0]$/) {
- @distinct = ($column);
- last;
- }
- }
- }
+ return $self->search(@_)->count_rs if @_;
- $select = { count => { distinct => \@distinct } };
+ # this may look like a lack of abstraction (count() does about the same)
+ # but in fact an _rs *must* use a subquery for the limits, as the
+ # software based limiting can not be ported if this $rs is to be used
+ # in a subquery itself (i.e. ->as_query)
+ if ($self->_has_resolved_attr (qw/collapse group_by offset rows/)) {
+ return $self->_count_subq_rs;
}
+ else {
+ return $self->_count_rs;
+ }
+}
+
+#
+# returns a ResultSetColumn object tied to the count query
+#
+sub _count_rs {
+ my ($self, $attrs) = @_;
- $attrs->{select} = $select;
- $attrs->{as} = [qw/count/];
+ my $rsrc = $self->result_source;
+ $attrs ||= $self->_resolved_attrs;
- # offset, order by and page are not needed to count. record_filter is cdbi
- delete $attrs->{$_} for qw/rows offset order_by page pager record_filter/;
+ my $tmp_attrs = { %$attrs };
- my $tmp_rs = (ref $self)->new($self->result_source, $attrs);
- my ($count) = $tmp_rs->cursor->next;
- return $count;
+ # take off any limits, record_filter is cdbi, and no point of ordering a count
+ delete $tmp_attrs->{$_} for (qw/select as rows offset order_by record_filter/);
+
+ # overwrite the selector (supplied by the storage)
+ $tmp_attrs->{select} = $rsrc->storage->_count_select ($rsrc, $tmp_attrs);
+ $tmp_attrs->{as} = 'count';
+
+ # read the comment on top of the actual function to see what this does
+ $tmp_attrs->{from} = $self->_switch_to_inner_join_if_needed (
+ $tmp_attrs->{from}, $tmp_attrs->{alias}
+ );
+
+ my $tmp_rs = $rsrc->resultset_class->new($rsrc, $tmp_attrs)->get_column ('count');
+
+ return $tmp_rs;
+}
+
+#
+# same as above but uses a subquery
+#
+sub _count_subq_rs {
+ my ($self, $attrs) = @_;
+
+ my $rsrc = $self->result_source;
+ $attrs ||= $self->_resolved_attrs_copy;
+
+ my $sub_attrs = { %$attrs };
+
+ # extra selectors do not go in the subquery and there is no point of ordering it
+ delete $sub_attrs->{$_} for qw/collapse select _prefetch_select as order_by/;
+
+ # if we prefetch, we group_by primary keys only as this is what we would get out
+ # of the rs via ->next/->all. We DO WANT to clobber old group_by regardless
+ if ( keys %{$attrs->{collapse}} ) {
+ $sub_attrs->{group_by} = [ map { "$attrs->{alias}.$_" } ($rsrc->primary_columns) ]
+ }
+
+ $sub_attrs->{select} = $rsrc->storage->_subq_count_select ($rsrc, $sub_attrs);
+
+ # read the comment on top of the actual function to see what this does
+ $sub_attrs->{from} = $self->_switch_to_inner_join_if_needed (
+ $sub_attrs->{from}, $sub_attrs->{alias}
+ );
+
+ # this is so that ordering can be thrown away in things like Top limit
+ $sub_attrs->{-for_count_only} = 1;
+
+ my $sub_rs = $rsrc->resultset_class->new ($rsrc, $sub_attrs);
+
+ $attrs->{from} = [{
+ -alias => 'count_subq',
+ -source_handle => $rsrc->handle,
+ count_subq => $sub_rs->as_query,
+ }];
+
+ # the subquery replaces this
+ delete $attrs->{$_} for qw/where bind collapse group_by having having_bind rows offset/;
+
+ return $self->_count_rs ($attrs);
}
+
+# The DBIC relationship chaining implementation is pretty simple - every
+# new related_relationship is pushed onto the {from} stack, and the {select}
+# window simply slides further in. This means that when we count somewhere
+# in the middle, we have to make sure that everything in the join chain is an
+# actual inner join, otherwise the count will come back with unpredictable
+# results (a resultset may be generated with _some_ rows regardless of whether
+# the relation which the $rs currently selects has rows or not). E.g.
+# $artist_rs->cds->count - normally generates:
+# SELECT COUNT( * ) FROM artist me LEFT JOIN cd cds ON cds.artist = me.artistid
+# which actually returns the number of artists * (number of cds || 1)
+#
+# So what we do here is crawl {from}, determine if the current alias is at
+# the top of the stack, and if not - make sure the chain is inner-joined down
+# to the root.
+#
+sub _switch_to_inner_join_if_needed {
+ my ($self, $from, $alias) = @_;
+
+ # subqueries and other oddness is naturally not supported
+ return $from if (
+ ref $from ne 'ARRAY'
+ ||
+ @$from <= 1
+ ||
+ ref $from->[0] ne 'HASH'
+ ||
+ ! $from->[0]{-alias}
+ ||
+ $from->[0]{-alias} eq $alias
+ );
+
+ my $switch_branch;
+ JOINSCAN:
+ for my $j (@{$from}[1 .. $#$from]) {
+ if ($j->[0]{-alias} eq $alias) {
+ $switch_branch = $j->[0]{-join_path};
+ last JOINSCAN;
+ }
+ }
+
+ # something else went wrong
+ return $from unless $switch_branch;
+
+ # So it looks like we will have to switch some stuff around.
+ # local() is useless here as we will be leaving the scope
+ # anyway, and deep cloning is just too fucking expensive
+ # So replace the inner hashref manually
+ my @new_from = ($from->[0]);
+ my $sw_idx = { map { $_ => 1 } @$switch_branch };
+
+ for my $j (@{$from}[1 .. $#$from]) {
+ my $jalias = $j->[0]{-alias};
+
+ if ($sw_idx->{$jalias}) {
+ my %attrs = %{$j->[0]};
+ delete $attrs{-join_type};
+ push @new_from, [
+ \%attrs,
+ @{$j}[ 1 .. $#$j ],
+ ];
+ }
+ else {
+ push @new_from, $j;
+ }
+ }
+
+ return \@new_from;
+}
+
+
sub _bool {
return 1;
}
my @obj;
- # TODO: don't call resolve here
if (keys %{$self->_resolved_attrs->{collapse}}) {
-# if ($self->{attrs}{prefetch}) {
- # Using $self->cursor->all is really just an optimisation.
- # If we're collapsing has_many prefetches it probably makes
- # very little difference, and this is cleaner than hacking
- # _construct_object to survive the approach
+ # Using $self->cursor->all is really just an optimisation.
+ # If we're collapsing has_many prefetches it probably makes
+ # very little difference, and this is cleaner than hacking
+ # _construct_object to survive the approach
+ $self->cursor->reset;
my @row = $self->cursor->next;
while (@row) {
push(@obj, $self->_construct_object(@row));
}
$self->set_cache(\@obj) if $self->{attrs}{cache};
+
return @obj;
}
=back
Resets the resultset's cursor, so you can iterate through the elements again.
+Implicitly resets the storage cursor, so a subsequent L</next> will trigger
+another query.
=cut
return $_[0]->reset->next;
}
+
+# _rs_update_delete
+#
+# Determines whether and what type of subquery is required for the $rs operation.
+# If grouping is necessary either supplies its own, or verifies the current one
+# After all is done delegates to the proper storage method.
+
+sub _rs_update_delete {
+ my ($self, $op, $values) = @_;
+
+ my $rsrc = $self->result_source;
+
+ my $needs_group_by_subq = $self->_has_resolved_attr (qw/collapse group_by -join/);
+  my $needs_subq = $self->_has_resolved_attr (qw/rows offset/);
+
+ if ($needs_group_by_subq or $needs_subq) {
+
+ # make a new $rs selecting only the PKs (that's all we really need)
+ my $attrs = $self->_resolved_attrs_copy;
+
+ delete $attrs->{$_} for qw/collapse select as/;
+ $attrs->{columns} = [ map { "$attrs->{alias}.$_" } ($self->result_source->primary_columns) ];
+
+ if ($needs_group_by_subq) {
+ # make sure no group_by was supplied, or if there is one - make sure it matches
+ # the columns compiled above perfectly. Anything else can not be sanely executed
+ # on most databases so croak right then and there
+
+ if (my $g = $attrs->{group_by}) {
+ my @current_group_by = map
+ { $_ =~ /\./ ? $_ : "$attrs->{alias}.$_" }
+ @$g
+ ;
+
+ if (
+ join ("\x00", sort @current_group_by)
+ ne
+ join ("\x00", sort @{$attrs->{columns}} )
+ ) {
+ $self->throw_exception (
+ "You have just attempted a $op operation on a resultset which does group_by"
+ . ' on columns other than the primary keys, while DBIC internally needs to retrieve'
+            . ' the primary keys in a subselect. No sane RDBMS engine supports this'
+            . ' kind of query. Please retry the operation with a modified group_by or'
+ . ' without using one at all.'
+ );
+ }
+ }
+ else {
+ $attrs->{group_by} = $attrs->{columns};
+ }
+ }
+
+ my $subrs = (ref $self)->new($rsrc, $attrs);
+
+ return $self->result_source->storage->_subq_update_delete($subrs, $op, $values);
+ }
+ else {
+ return $rsrc->storage->$op(
+ $rsrc,
+ $op eq 'update' ? $values : (),
+ $self->_cond_for_update_delete,
+ );
+ }
+}
+
+
# _cond_for_update_delete
#
# update/delete require the condition to be modified to handle
elsif (ref $full_cond eq 'HASH') {
if ((keys %{$full_cond})[0] eq '-and') {
$cond->{-and} = [];
-
my @cond = @{$full_cond->{-and}};
- for (my $i = 0; $i < @cond; $i++) {
+ for (my $i = 0; $i < @cond; $i++) {
my $entry = $cond[$i];
-
my $hash;
if (ref $entry eq 'HASH') {
$hash = $self->_cond_for_update_delete($entry);
$entry =~ /([^.]+)$/;
$hash->{$1} = $cond[++$i];
}
-
push @{$cond->{-and}}, $hash;
}
}
}
}
else {
- $self->throw_exception(
- "Can't update/delete on resultset with condition unless hash or array"
- );
+ $self->throw_exception("Can't update/delete on resultset with condition unless hash or array");
}
return $cond;
sub update {
my ($self, $values) = @_;
- $self->throw_exception("Values for update must be a hash")
+ $self->throw_exception('Values for update must be a hash')
unless ref $values eq 'HASH';
- carp( 'WARNING! Currently $rs->update() does not generate proper SQL'
- . ' on joined resultsets, and may affect rows well outside of the'
- . ' contents of $rs. Use at your own risk' )
- if ( $self->{attrs}{seen_join} );
-
- my $cond = $self->_cond_for_update_delete;
-
- return $self->result_source->storage->update(
- $self->result_source, $values, $cond
- );
+ return $self->_rs_update_delete ('update', $values);
}
=head2 update_all
sub update_all {
my ($self, $values) = @_;
- $self->throw_exception("Values for update must be a hash")
+ $self->throw_exception('Values for update_all must be a hash')
unless ref $values eq 'HASH';
foreach my $obj ($self->all) {
$obj->set_columns($values)->update;
=item Arguments: none
-=item Return Value: 1
+=item Return Value: $storage_rv
=back
will not run DBIC cascade triggers. See L</delete_all> if you need triggers
to run. See also L<DBIx::Class::Row/delete>.
-delete may not generate correct SQL for a query with joins or a resultset
-chained from a related resultset. In this case it will generate a warning:-
-
- WARNING! Currently $rs->delete() does not generate proper SQL on
- joined resultsets, and may delete rows well outside of the contents
- of $rs. Use at your own risk
-
-In these cases you may find that delete_all is more appropriate, or you
-need to respecify your query in a way that can be expressed without a join.
+Return value will be the number of rows deleted; the exact type of the
+return value is storage-dependent.
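+
+For example (a sketch assuming the example C<CD> source used elsewhere in
+these docs):
+
+  # delete all CDs from before 1990; $rv is the number of rows deleted,
+  # its exact type depends on the storage
+  my $rv = $schema->resultset('CD')->search({ year => { '<' => 1990 } })->delete;
+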
=cut
sub delete {
- my ($self) = @_;
- $self->throw_exception("Delete should not be passed any arguments")
- if $_[1];
- carp( 'WARNING! Currently $rs->delete() does not generate proper SQL'
- . ' on joined resultsets, and may delete rows well outside of the'
- . ' contents of $rs. Use at your own risk' )
- if ( $self->{attrs}{seen_join} );
- my $cond = $self->_cond_for_update_delete;
-
- $self->result_source->storage->delete($self->result_source, $cond);
- return 1;
+ my $self = shift;
+ $self->throw_exception('delete does not accept any arguments')
+ if @_;
+
+ return $self->_rs_update_delete ('delete');
}
=head2 delete_all
=cut
sub delete_all {
- my ($self) = @_;
+ my $self = shift;
+ $self->throw_exception('delete_all does not accept any arguments')
+ if @_;
+
$_->delete for $self->all;
return 1;
}
for submitting to a $resultset->create(...) method.
In void context, C<insert_bulk> in L<DBIx::Class::Storage::DBI> is used
-to insert the data, as this is a faster method.
+to insert the data, as this is a faster method.
Otherwise, each set of data is inserted into the database using
-L<DBIx::Class::ResultSet/create>, and a arrayref of the resulting row
-objects is returned.
+L<DBIx::Class::ResultSet/create>, and the resulting objects are
+accumulated into an array. In list context the array itself is returned;
+in scalar context an arrayref of the created objects is returned instead.
Example: Assuming an Artist Class that has many CDs Classes relating:
my $Artist_rs = $schema->resultset("Artist");
-
- ## Void Context Example
+
+ ## Void Context Example
$Artist_rs->populate([
- { artistid => 4, name => 'Manufactured Crap', cds => [
+ { artistid => 4, name => 'Manufactured Crap', cds => [
{ title => 'My First CD', year => 2006 },
{ title => 'Yet More Tweeny-Pop crap', year => 2007 },
],
],
},
]);
-
+
## Array Context Example
my ($ArtistOne, $ArtistTwo, $ArtistThree) = $Artist_rs->populate([
{ name => "Artist One"},
{ title => "Second CD", year => 2008},
]}
]);
-
+
print $ArtistOne->name; ## response is 'Artist One'
print $ArtistThree->cds->count; ## response is '2'
]);
Please note an important effect on your data when choosing between void and
-wantarray context. Since void context goes straight to C<insert_bulk> in
+wantarray context. Since void context goes straight to C<insert_bulk> in
L<DBIx::Class::Storage::DBI> this will skip any component that is overriding
-c<insert>. So if you are using something like L<DBIx-Class-UUIDColumns> to
-create primary keys for you, you will find that your PKs are empty. In this
-case you will have to use the wantarray context in order to create those
+C<insert>. So if you are using something like L<DBIx-Class-UUIDColumns> to
+create primary keys for you, you will find that your PKs are empty. In this
+case you will have to use the wantarray context in order to create those
values.
=cut
my $data = ref $_[0][0] eq 'HASH'
? $_[0] : ref $_[0][0] eq 'ARRAY' ? $self->_normalize_populate_args($_[0]) :
$self->throw_exception('Populate expects an arrayref of hashes or arrayref of arrayrefs');
-
+
if(defined wantarray) {
my @created;
foreach my $item (@$data) {
push(@created, $self->create($item));
}
- return @created;
+ return wantarray ? @created : \@created;
} else {
my ($first, @rest) = @$data;
my @names = grep {!ref $first->{$_}} keys %$first;
my @rels = grep { $self->result_source->has_relationship($_) } keys %$first;
- my @pks = $self->result_source->primary_columns;
+ my @pks = $self->result_source->primary_columns;
- ## do the belongs_to relationships
+ ## do the belongs_to relationships
foreach my $index (0..$#$data) {
- if( grep { !defined $data->[$index]->{$_} } @pks ) {
- my @ret = $self->populate($data);
- return;
+
+      # delegate to create() for any dataset missing primary key values but specifying relationships
+ if (grep { !defined $data->[$index]->{$_} } @pks ) {
+ for my $r (@rels) {
+ if (grep { ref $data->[$index]{$r} eq $_ } qw/HASH ARRAY/) { # a related set must be a HASH or AoH
+ my @ret = $self->populate($data);
+ return;
+ }
+ }
}
-
+
foreach my $rel (@rels) {
- next unless $data->[$index]->{$rel} && ref $data->[$index]->{$rel} eq "HASH";
+ next unless ref $data->[$index]->{$rel} eq "HASH";
my $result = $self->related_resultset($rel)->create($data->[$index]->{$rel});
my ($reverse) = keys %{$self->result_source->reverse_relationship_info($rel)};
- my $related = $result->result_source->resolve_condition(
+ my $related = $result->result_source->_resolve_condition(
$result->result_source->relationship_info($reverse)->{cond},
- $self,
- $result,
+ $self,
+ $result,
);
delete $data->[$index]->{$rel};
$data->[$index] = {%{$data->[$index]}, %$related};
-
+
push @names, keys %$related if $index == 0;
}
}
my @values = map { [ @$_{@names} ] } @$data;
$self->result_source->storage->insert_bulk(
- $self->result_source,
- \@names,
+ $self->result_source,
+ \@names,
\@values,
);
foreach my $rel (@rels) {
next unless $item->{$rel} && ref $item->{$rel} eq "ARRAY";
- my $parent = $self->find(map {{$_=>$item->{$_}} } @pks)
+ my $parent = $self->find(map {{$_=>$item->{$_}} } @pks)
|| $self->throw_exception('Cannot find the relating object.');
-
+
my $child = $parent->$rel;
-
- my $related = $child->result_source->resolve_condition(
+
+ my $related = $child->result_source->_resolve_condition(
$parent->result_source->relationship_info($rel)->{cond},
$child,
$parent,
foreach my $index (0..$#names) {
$result_to_create{$names[$index]} = $$datum[$index];
}
- push @results_to_create, \%result_to_create;
+ push @results_to_create, \%result_to_create;
}
return \@results_to_create;
}
Return Value a L<Data::Page> object for the current resultset. Only makes
sense for queries with a C<page> attribute.
+To get the full count of entries for a paged resultset, call
+C<total_entries> on the L<Data::Page> object.
+
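+For example (again a sketch using the example C<CD> source):
+
+  my $rs = $schema->resultset('CD')->search({}, { page => 2, rows => 10 });
+
+  my $on_this_page = $rs->count;                 # at most 10
+  my $total        = $rs->pager->total_entries;  # count of all matching rows
+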
=cut
sub pager {
my ($self) = @_;
+
+ return $self->{pager} if $self->{pager};
+
my $attrs = $self->{attrs};
$self->throw_exception("Can't create pager for non-paged rs")
unless $self->{attrs}{page};
$attrs->{rows} ||= 10;
- return $self->{pager} ||= Data::Page->new(
- $self->_count, $attrs->{rows}, $self->{attrs}{page});
+
+ # throw away the paging flags and re-run the count (possibly
+ # with a subselect) to get the real total count
+ my $count_attrs = { %$attrs };
+ delete $count_attrs->{$_} for qw/rows offset page pager/;
+ my $total_count = (ref $self)->new($self->result_source, $count_attrs)->count;
+
+ return $self->{pager} = Data::Page->new(
+ $total_count,
+ $attrs->{rows},
+ $self->{attrs}{page}
+ );
}
=head2 page
$self->throw_exception(
"Can't abstract implicit construct, condition not a hash"
) if ($self->{cond} && !(ref $self->{cond} eq 'HASH'));
-
+
my $collapsed_cond = (
$self->{cond}
? $self->_collapse_cond($self->{cond})
: {}
);
-
+
# precendence must be given to passed values over values inherited from
# the cond, so the order here is important.
my %implied = %{$self->_remove_alias($collapsed_cond, $alias)};
# _is_deterministic_value
#
-# Make an effor to strip non-deterministic values from the condition,
+# Make an effort to strip non-deterministic values from the condition,
# to make sure new_result chokes less
sub _is_deterministic_value {
return 0;
}
+# _has_resolved_attr
+#
+# determines if the resultset defines at least one
+# of the attributes supplied
+#
+# used to determine if a subquery is necessary
+#
+# supports some virtual attributes:
+# -join
+# This will scan for any joins being present on the resultset.
+# It is not a mere key-search but a deep inspection of {from}
+#
+
+sub _has_resolved_attr {
+ my ($self, @attr_names) = @_;
+
+ my $attrs = $self->_resolved_attrs;
+
+ my %extra_checks;
+
+ for my $n (@attr_names) {
+ if (grep { $n eq $_ } (qw/-join/) ) {
+ $extra_checks{$n}++;
+ next;
+ }
+
+ my $attr = $attrs->{$n};
+
+ next if not defined $attr;
+
+ if (ref $attr eq 'HASH') {
+ return 1 if keys %$attr;
+ }
+ elsif (ref $attr eq 'ARRAY') {
+ return 1 if @$attr;
+ }
+ else {
+ return 1 if $attr;
+ }
+ }
+
+ # a resolved join is expressed as a multi-level from
+ return 1 if (
+ $extra_checks{-join}
+ and
+ ref $attrs->{from} eq 'ARRAY'
+ and
+ @{$attrs->{from}} > 1
+ );
+
+ return 0;
+}
+
# _collapse_cond
#
# Recursively collapse the condition.
if (ref $cond eq 'ARRAY') {
foreach my $subcond (@$cond) {
next unless ref $subcond; # -or
-# warn "ARRAY: " . Dumper $subcond;
$collapsed = $self->_collapse_cond($subcond, $collapsed);
}
}
elsif (ref $cond eq 'HASH') {
if (keys %$cond and (keys %$cond)[0] eq '-and') {
foreach my $subcond (@{$cond->{-and}}) {
-# warn "HASH: " . Dumper $subcond;
$collapsed = $self->_collapse_cond($subcond, $collapsed);
}
}
else {
-# warn "LEAF: " . Dumper $cond;
foreach my $col (keys %$cond) {
my $value = $cond->{$col};
$collapsed->{$col} = $value;
=cut
-sub as_query { return shift->cursor->as_query(@_) }
+sub as_query {
+ my $self = shift;
+
+ my $attrs = $self->_resolved_attrs_copy;
+
+ # For future use:
+ #
+ # in list ctx:
+ # my ($sql, \@bind, \%dbi_bind_attrs) = _select_args_to_query (...)
+ # $sql also has no wrapping parenthesis in list ctx
+ #
+ my $sqlbind = $self->result_source->storage
+ ->_select_args_to_query ($attrs->{from}, $attrs->{select}, $attrs->{where}, $attrs);
+
+ return $sqlbind;
+}
=head2 find_or_new
my $self = shift;
my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
my $hash = ref $_[0] eq 'HASH' ? shift : {@_};
- my $exists = $self->find($hash, $attrs);
- return defined $exists ? $exists : $self->new_result($hash);
+ if (keys %$hash and my $row = $self->find($hash, $attrs) ) {
+ return $row;
+ }
+ return $self->new_result($hash);
}
=head2 create
can also be passed an object representing the foreign row, and the
value will be set to its primary key.
-To create related objects, pass a hashref for the value if the related
-item is a foreign key relationship (L<DBIx::Class::Relationship/belongs_to>),
-and use the name of the relationship as the key. (NOT the name of the field,
-necessarily). For C<has_many> and C<has_one> relationships, pass an arrayref
-of hashrefs containing the data for each of the rows to create in the foreign
-tables, again using the relationship name as the key.
+To create related objects, pass a hashref of related-object column values
+B<keyed on the relationship name>. If the relationship is of type C<multi>
+(L<DBIx::Class::Relationship/has_many>), pass an arrayref of hashrefs.
+The process will correctly identify columns holding foreign keys, and will
+transparently populate them from the keys of the corresponding relation.
+This can be applied recursively, and will work correctly for a structure
+with an arbitrary depth and width, as long as the relationships actually
+exist and the correct column data has been supplied.
+
Instead of hashrefs of plain related data (key/value pairs), you may
also pass new or inserted objects. New objects (not inserted yet, see
name=>"Some Person",
email=>"somebody@someplace.com"
});
-
+
Example of creating a new row and also creating rows in a related C<has_many>
or C<has_one> resultset. Note Arrayref.
$artist_rs->create(
- { artistid => 4, name => 'Manufactured Crap', cds => [
+ { artistid => 4, name => 'Manufactured Crap', cds => [
{ title => 'My First CD', year => 2006 },
{ title => 'Yet More Tweeny-Pop crap', year => 2007 },
],
=back
$cd->cd_to_producer->find_or_create({ producer => $producer },
- { key => 'primary });
+ { key => 'primary' });
Tries to find a record based on its primary key or unique constraints; if none
is found, creates one and returns that instead.
my $self = shift;
my $attrs = (@_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {});
my $hash = ref $_[0] eq 'HASH' ? shift : {@_};
- my $exists = $self->find($hash, $attrs);
- return defined $exists ? $exists : $self->create($hash);
+ if (keys %$hash and my $row = $self->find($hash, $attrs) ) {
+ return $row;
+ }
+ return $self->create($hash);
}
=head2 update_or_create
{ key => 'cd_artist_title' }
);
- $cd->cd_to_producer->update_or_create({
- producer => $producer,
+ $cd->cd_to_producer->update_or_create({
+ producer => $producer,
name => 'harry',
- }, {
+ }, {
key => 'primary',
});
return $self->create($cond);
}
+=head2 update_or_new
+
+=over 4
+
+=item Arguments: \%col_values, { key => $unique_constraint }?
+
+=item Return Value: $rowobject
+
+=back
+
+ $resultset->update_or_new({ col => $val, ... });
+
+First, searches for an existing row matching one of the unique constraints
+(including the primary key) on the source of this resultset. If a row is
+found, updates it with the other given column values. Otherwise, instantiate
+a new result object and return it. The object will not be saved into your storage
+until you call L<DBIx::Class::Row/insert> on it.
+
+Takes an optional C<key> attribute to search on a specific unique constraint.
+For example:
+
+ # In your application
+ my $cd = $schema->resultset('CD')->update_or_new(
+ {
+ artist => 'Massive Attack',
+ title => 'Mezzanine',
+ year => 1998,
+ },
+ { key => 'cd_artist_title' }
+ );
+
+ if ($cd->in_storage) {
+ # the cd was updated
+ }
+ else {
+ # the cd is not yet in the database, let's insert it
+ $cd->insert;
+ }
+
+See also L</find>, L</find_or_create> and L</find_or_new>.
+
+=cut
+
+sub update_or_new {
+ my $self = shift;
+ my $attrs = ( @_ > 1 && ref $_[$#_] eq 'HASH' ? pop(@_) : {} );
+ my $cond = ref $_[0] eq 'HASH' ? shift : {@_};
+
+ my $row = $self->find( $cond, $attrs );
+ if ( defined $row ) {
+ $row->update($cond);
+ return $row;
+ }
+
+ return $self->new_result($cond);
+}
+
=head2 get_cache
=over 4
$self->{related_resultsets} ||= {};
return $self->{related_resultsets}{$rel} ||= do {
- my $rel_obj = $self->result_source->relationship_info($rel);
+ my $rel_info = $self->result_source->relationship_info($rel);
$self->throw_exception(
"search_related: result source '" . $self->result_source->source_name .
"' has no such relationship $rel")
- unless $rel_obj;
-
- my ($from,$seen) = $self->_resolve_from($rel);
+ unless $rel_info;
+
+ my ($from,$seen) = $self->_chain_relationship($rel);
my $join_count = $seen->{$rel};
my $alias = ($join_count > 1 ? join('_', $rel, $join_count) : $rel);
return ($self->{attrs} || {})->{alias} || 'me';
}
-sub _resolve_from {
- my ($self, $extra_join) = @_;
+# This code is called by search_related, and makes sure there
+# is clear separation between the joins before, during, and
+# after the relationship. This information is needed later
+# in order to properly resolve prefetch aliases (any alias
+# with a relation_chain_depth less than the depth of the
+# current prefetch is not considered)
+#
+# The increments happen in 1/2s to make it easier to correlate the
+# join depth with the join path. An integer means a relationship
+# specified via a search_related, whereas a fraction means an added
+# join/prefetch via attributes
+sub _chain_relationship {
+ my ($self, $rel) = @_;
my $source = $self->result_source;
my $attrs = $self->{attrs};
-
- my $from = $attrs->{from}
- || [ { $attrs->{alias} => $source->from } ];
-
- my $seen = { %{$attrs->{seen_join}||{}} };
- my $join = ($attrs->{join}
- ? [ $attrs->{join}, $extra_join ]
- : $extra_join);
+ my $from = [ @{
+ $attrs->{from}
+ ||
+ [{
+ -source_handle => $source->handle,
+ -alias => $attrs->{alias},
+ $attrs->{alias} => $source->from,
+ }]
+ }];
+
+ my $seen = { %{$attrs->{seen_join} || {} } };
+ my $jpath = ($attrs->{seen_join} && keys %{$attrs->{seen_join}})
+ ? $from->[-1][0]{-join_path}
+ : [];
+
+
+  # we need to take the prefetch attrs into account before we
+ # ->_resolve_join as otherwise they get lost - captainL
+ my $merged = $self->_merge_attr( $attrs->{join}, $attrs->{prefetch} );
+
+ my @requested_joins = $source->_resolve_join(
+ $merged,
+ $attrs->{alias},
+ $seen,
+ $jpath,
+ );
+
+ push @$from, @requested_joins;
- # we need to take the prefetch the attrs into account before we
- # ->resolve_join as otherwise they get lost - captainL
- my $merged = $self->_merge_attr( $join, $attrs->{prefetch} );
+ $seen->{-relation_chain_depth} += 0.5;
- $from = [
- @$from,
- ($join ? $source->resolve_join($merged, $attrs->{alias}, $seen) : ()),
- ];
+ # if $self already had a join/prefetch specified on it, the requested
+ # $rel might very well be already included. What we do in this case
+ # is effectively a no-op (except that we bump up the chain_depth on
+ # the join in question so we could tell it *is* the search_related)
+ my $already_joined;
+
+
+ # we consider the last one thus reverse
+ for my $j (reverse @requested_joins) {
+ if ($rel eq $j->[0]{-join_path}[-1]) {
+ $j->[0]{-relation_chain_depth} += 0.5;
+ $already_joined++;
+ last;
+ }
+ }
+
+# alternative way to scan the entire chain - not backwards compatible
+# for my $j (reverse @$from) {
+# next unless ref $j eq 'ARRAY';
+# if ($j->[0]{-join_path} && $j->[0]{-join_path}[-1] eq $rel) {
+# $j->[0]{-relation_chain_depth} += 0.5;
+# $already_joined++;
+# last;
+# }
+# }
+
+ unless ($already_joined) {
+ push @$from, $source->_resolve_join(
+ $rel,
+ $attrs->{alias},
+ $seen,
+ $jpath,
+ );
+ }
+
+ $seen->{-relation_chain_depth} += 0.5;
return ($from,$seen);
}
+# too many times we have to do $attrs = { %{$self->_resolved_attrs} }
+sub _resolved_attrs_copy {
+ my $self = shift;
+ return { %{$self->_resolved_attrs (@_)} };
+}
+
sub _resolved_attrs {
my $self = shift;
return $self->{_attrs} if $self->{_attrs};
# build columns (as long as select isn't set) into a set of as/select hashes
unless ( $attrs->{select} ) {
@colbits = map {
- ( ref($_) eq 'HASH' ) ? $_
- : {
- (
- /^\Q${alias}.\E(.+)$/ ? $1
- : $_
- ) => ( /\./ ? $_ : "${alias}.$_" )
+ ( ref($_) eq 'HASH' )
+ ? $_
+ : {
+ (
+ /^\Q${alias}.\E(.+)$/
+ ? "$1"
+ : "$_"
+ )
+ =>
+ (
+ /\./
+ ? "$_"
+ : "${alias}.$_"
+ )
}
} ( ref($attrs->{columns}) eq 'ARRAY' ) ? @{ delete $attrs->{columns}} : (delete $attrs->{columns} || $source->columns );
}
push( @{ $attrs->{as} }, @$adds );
}
- $attrs->{from} ||= [ { $self->{attrs}{alias} => $source->from } ];
+ $attrs->{from} ||= [ {
+ -source_handle => $source->handle,
+ -alias => $self->{attrs}{alias},
+ $self->{attrs}{alias} => $source->from,
+ } ];
+
+ if ( $attrs->{join} || $attrs->{prefetch} ) {
+
+ $self->throw_exception ('join/prefetch can not be used with a literal scalarref {from}')
+ if ref $attrs->{from} ne 'ARRAY';
- if ( exists $attrs->{join} || exists $attrs->{prefetch} ) {
my $join = delete $attrs->{join} || {};
if ( defined $attrs->{prefetch} ) {
$join = $self->_merge_attr( $join, $attrs->{prefetch} );
-
}
$attrs->{from} = # have to copy here to avoid corrupting the original
[
- @{ $attrs->{from} },
- $source->resolve_join(
- $join, $alias, { %{ $attrs->{seen_join} || {} } }
- )
+ @{ $attrs->{from} },
+ $source->_resolve_join(
+ $join,
+ $alias,
+ { %{ $attrs->{seen_join} || {} } },
+ ($attrs->{seen_join} && keys %{$attrs->{seen_join}})
+ ? $attrs->{from}[-1][0]{-join_path}
+ : []
+ ,
+ )
];
-
}
- $attrs->{group_by} ||= $attrs->{select}
- if delete $attrs->{distinct};
- if ( $attrs->{order_by} ) {
+ if ( defined $attrs->{order_by} ) {
$attrs->{order_by} = (
ref( $attrs->{order_by} ) eq 'ARRAY'
? [ @{ $attrs->{order_by} } ]
- : [ $attrs->{order_by} ]
+ : [ $attrs->{order_by} || () ]
);
}
- else {
- $attrs->{order_by} = [];
+
+ if ($attrs->{group_by} and ref $attrs->{group_by} ne 'ARRAY') {
+ $attrs->{group_by} = [ $attrs->{group_by} ];
}
- my $collapse = $attrs->{collapse} || {};
+ # generate the distinct induced group_by early, as prefetch will be carried via a
+ # subquery (since a group_by is present)
+ if (delete $attrs->{distinct}) {
+ $attrs->{group_by} ||= [ grep { !ref($_) || (ref($_) ne 'HASH') } @{$attrs->{select}} ];
+ }
+
+ $attrs->{collapse} ||= {};
if ( my $prefetch = delete $attrs->{prefetch} ) {
$prefetch = $self->_merge_attr( {}, $prefetch );
- my @pre_order;
- my $seen = { %{ $attrs->{seen_join} || {} } };
- foreach my $p ( ref $prefetch eq 'ARRAY' ? @$prefetch : ($prefetch) ) {
-
- # bring joins back to level of current class
- my @prefetch =
- $source->resolve_prefetch( $p, $alias, $seen, \@pre_order, $collapse );
- push( @{ $attrs->{select} }, map { $_->[0] } @prefetch );
- push( @{ $attrs->{as} }, map { $_->[1] } @prefetch );
- }
- push( @{ $attrs->{order_by} }, @pre_order );
+
+ my $prefetch_ordering = [];
+
+ my $join_map = $self->_joinpath_aliases ($attrs->{from}, $attrs->{seen_join});
+
+ my @prefetch =
+ $source->_resolve_prefetch( $prefetch, $alias, $join_map, $prefetch_ordering, $attrs->{collapse} );
+
+ # we need to somehow mark which columns came from prefetch
+ $attrs->{_prefetch_select} = [ map { $_->[0] } @prefetch ];
+
+ push @{ $attrs->{select} }, @{$attrs->{_prefetch_select}};
+ push @{ $attrs->{as} }, (map { $_->[1] } @prefetch);
+
+ push( @{$attrs->{order_by}}, @$prefetch_ordering );
+ $attrs->{_collapse_order_by} = \@$prefetch_ordering;
}
- $attrs->{collapse} = $collapse;
- if ( $attrs->{page} ) {
- $attrs->{offset} ||= 0;
- $attrs->{offset} += ( $attrs->{rows} * ( $attrs->{page} - 1 ) );
+ # if both page and offset are specified, produce a combined offset
+ # even though it doesn't make much sense, this is what pre 081xx has
+ # been doing
+ if (my $page = delete $attrs->{page}) {
+ $attrs->{offset} =
+ ($attrs->{rows} * ($page - 1))
+ +
+ ($attrs->{offset} || 0)
+ ;
}
return $self->{_attrs} = $attrs;
}
+sub _joinpath_aliases {
+ my ($self, $fromspec, $seen) = @_;
+
+ my $paths = {};
+ return $paths unless ref $fromspec eq 'ARRAY';
+
+ my $cur_depth = $seen->{-relation_chain_depth} || 0;
+
+ if (int ($cur_depth) != $cur_depth) {
+ $self->throw_exception ("-relation_chain_depth is not an integer, something went horribly wrong ($cur_depth)");
+ }
+
+ for my $j (@$fromspec) {
+
+ next if ref $j ne 'ARRAY';
+ next if ($j->[0]{-relation_chain_depth} || 0) < $cur_depth;
+
+ my $jpath = $j->[0]{-join_path};
+
+ my $p = $paths;
+ $p = $p->{$_} ||= {} for @{$jpath}[$cur_depth .. $#$jpath];
+ push @{$p->{-join_aliases} }, $j->[0]{-alias};
+ }
+
+ return $paths;
+}
+
sub _rollout_attr {
my ($self, $attr) = @_;
-
+
if (ref $attr eq 'HASH') {
return $self->_rollout_hash($attr);
} elsif (ref $attr eq 'ARRAY') {
}
} else {
return ($a eq $b_key) ? 1 : 0;
- }
+ }
} else {
if (ref $a eq 'HASH') {
my ($a_key) = keys %{$a};
return $import unless defined($orig);
return $orig unless defined($import);
-
+
$orig = $self->_rollout_attr($orig);
$import = $self->_rollout_attr($import);
=back
-Which column(s) to order the results by. If a single column name, or
-an arrayref of names is supplied, the argument is passed through
-directly to SQL. The hashref syntax allows for connection-agnostic
-specification of ordering direction:
+Which column(s) to order the results by.
+
+[The full list of suitable values is documented in
+L<SQL::Abstract/"ORDER BY CLAUSES">; the following is a summary of
+common options.]
+
+If a single column name, or an arrayref of names is supplied, the
+argument is passed through directly to SQL. The hashref syntax allows
+for connection-agnostic specification of ordering direction:
For descending order:
}
);
-You need to use the relationship (not the table) name in conditions,
-because they are aliased as such. The current table is aliased as "me", so
+You need to use the relationship (not the table) name in conditions,
+because they are aliased as such. The current table is aliased as "me", so
you need to use me.column_name in order to avoid ambiguity. For example:
- # Get CDs from 1984 with a 'Foo' track
+ # Get CDs from 1984 with a 'Foo' track
my $rs = $schema->resultset('CD')->search(
- {
+ {
'me.year' => 1984,
'tracks.name' => 'Foo'
},
{ join => 'tracks' }
);
-
+
If the same join is supplied twice, it will be aliased to <rel>_2 (and
similarly for a third time). For e.g.
case.
Simple prefetches will be joined automatically, so there is no need
-for a C<join> attribute in the above search.
+for a C<join> attribute in the above search.
C<prefetch> can be used with the following relationship types: C<belongs_to>,
C<has_one> (or if you're using C<add_relationship>, any relationship declared
with an accessor type of 'single' or 'filter'). A more complex example that
-prefetches an artists cds, the tracks on those cds, and the tags associted
+prefetches an artist's cds, the tracks on those cds, and the tags associated
with that artist is given below (assuming many-to-many from artists to tags):
my $rs = $schema->resultset('Artist')->search(
]
}
);
-
+
B<NOTE:> If you specify a C<prefetch> attribute, the C<join> and C<select>
attributes will be ignored.
+B<CAVEATs>: Prefetch does a lot of deep magic. As such, it may not behave
+exactly as you might expect.
+
+=over 4
+
+=item *
+
+Prefetch uses the L</cache> to populate the prefetched relationships. This
+may or may not be what you want.
+
+=item *
+
+If you specify a condition on a prefetched relationship, ONLY those
+rows that match the prefetched condition will be fetched into that relationship.
+This means that adding prefetch to a search() B<may alter> what is returned by
+traversing a relationship. So, if you have C<< Artist->has_many(CDs) >> and you do
+
+ my $artist_rs = $schema->resultset('Artist')->search({
+ 'cds.year' => 2008,
+ }, {
+ join => 'cds',
+ });
+
+ my $count = $artist_rs->first->cds->count;
+
+ my $artist_rs_prefetch = $artist_rs->search( {}, { prefetch => 'cds' } );
+
+ my $prefetch_count = $artist_rs_prefetch->first->cds->count;
+
+ cmp_ok( $count, '==', $prefetch_count, "Counts should be the same" );
+
+that cmp_ok() may or may not pass depending on the datasets involved. This
+behavior may or may not survive the 0.09 transition.
+
+=back
+
=head2 page
=over 4
identical to creating a non-paged resultset and then calling ->page($page)
on it.
-If L<rows> attribute is not specified it defualts to 10 rows per page.
+If the L<rows> attribute is not specified, it defaults to 10 rows per page.
+
+When you have a paged resultset, L</count> will only return the number
+of rows in the page. To get the total, use the L</pager> and call
+C<total_entries> on it.
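+
+For example (a minimal sketch; the C<CD> source name is illustrative):
+
+  my $rs = $schema->resultset('CD')->search({}, { rows => 10, page => 2 });
+
+  my $in_page = $rs->count;                  # rows on page 2, at most 10
+  my $total   = $rs->pager->total_entries;   # rows in the entire set
+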
=head2 rows
# SELECT child.* FROM person child
# INNER JOIN person father ON child.father_id = father.id
-If you need to express really complex joins or you need a subselect, you
+You can select from a subquery by passing a resultset to C<from> as follows.
+
+ $schema->resultset('Artist')->search(
+ undef,
+ { alias => 'artist2',
+ from => [ { artist2 => $artist_rs->as_query } ],
+ } );
+
+ # and you'll get sql like this..
+ # SELECT artist2.artistid, artist2.name, artist2.rank, artist2.charfield FROM
+ # ( SELECT me.artistid, me.name, me.rank, me.charfield FROM artists me ) artist2
+
+If you need to express really complex joins, you
can supply literal SQL to C<from> via a scalar reference. In this case
-the contents of the scalar will replace the table name asscoiated with the
+the contents of the scalar will replace the table name associated with the
resultsource.
WARNING: This technique might very well not work as expected on chained
$table = $rs->result_source->name;
$latest = $rs->search (
undef,
- { from => \ "
- (SELECT e1.* FROM $table e1
- JOIN $table e2
- ON e1.location = e2.location
- AND e1.sequence < e2.sequence
- WHERE e2.sequence is NULL
+ { from => \ "
+ (SELECT e1.* FROM $table e1
+ JOIN $table e2
+ ON e1.location = e2.location
+ AND e1.sequence < e2.sequence
+ WHERE e2.sequence is NULL
) me",
},
);
sub new {
my ($class, $rs, $column) = @_;
$class = ref $class if ref $class;
- my $new_parent_rs = $rs->search_rs; # we don't want to mess up the original, so clone it
- my $attrs = $new_parent_rs->_resolved_attrs;
- $new_parent_rs->{attrs}->{$_} = undef for qw(prefetch include_columns +select +as); # prefetch, include_columns, +select, +as cause additional columns to be fetched
+
+ $rs->throw_exception("column must be supplied") unless $column;
+
+ my $orig_attrs = $rs->_resolved_attrs;
+ my $new_parent_rs = $rs->search_rs;
+
+ # prefetch causes additional columns to be fetched, but we can not just make a new
+ # rs via the _resolved_attrs trick - we need to retain the separation between
+ # +select/+as and select/as. At the same time we want to preserve any joins that the
+ # prefetch would otherwise generate.
+
+ my $new_attrs = $new_parent_rs->{attrs} ||= {};
+ $new_attrs->{join} = $rs->_merge_attr( delete $new_attrs->{join}, delete $new_attrs->{prefetch} );
# If $column can be found in the 'as' list of the parent resultset, use the
# corresponding element of its 'select' list (to keep any custom column
# definition set up with 'select' or '+select' attrs), otherwise use $column
# (to create a new column definition on-the-fly).
- my $as_list = $attrs->{as} || [];
- my $select_list = $attrs->{select} || [];
+
+ my $as_list = $orig_attrs->{as} || [];
+ my $select_list = $orig_attrs->{select} || [];
my $as_index = List::Util::first { ($as_list->[$_] || "") eq $column } 0..$#$as_list;
my $select = defined $as_index ? $select_list->[$as_index] : $column;
+ # {collapse} would mean a has_many join was injected, which in turn means
+ # we need to group IF WE CAN (only if the column in question is unique)
+ if (!$new_attrs->{group_by} && keys %{$orig_attrs->{collapse}}) {
+
+ # scan for a constraint that would contain our column only - that'd be proof
+ # enough it is unique
+ my $constraints = { $rs->result_source->unique_constraints };
+ for my $constraint_columns ( values %$constraints ) {
+
+ next unless @$constraint_columns == 1;
+
+ my $col = $constraint_columns->[0];
+ my $fqcol = join ('.', $new_attrs->{alias}, $col);
+
+ if ($col eq $select or $fqcol eq $select) {
+ $new_attrs->{group_by} = [ $select ];
+ last;
+ }
+ }
+ }
+
my $new = bless { _select => $select, _as => $column, _parent_resultset => $new_parent_rs }, $class;
- $new->throw_exception("column must be supplied") unless $column;
return $new;
}
=cut
-sub as_query { return shift->_resultset->as_query }
+sub as_query { return shift->_resultset->as_query(@_) }
=head2 next
sub func {
my ($self,$function) = @_;
my $cursor = $self->func_rs($function)->cursor;
-
+
if( wantarray ) {
return map { $_->[ 0 ] } $cursor->all;
}
=head2 throw_exception
See L<DBIx::Class::Schema/throw_exception> for details.
-
+
=cut
-
+
sub throw_exception {
my $self=shift;
if (ref $self && $self->{_parent_resultset}) {
=head1 SYNOPSIS
+ # Create a table based result source, in a result class.
+
+ package MyDB::Schema::Result::Artist;
+ use base qw/DBIx::Class/;
+
+ __PACKAGE__->load_components(qw/Core/);
+ __PACKAGE__->table('artist');
+ __PACKAGE__->add_columns(qw/ artistid name /);
+ __PACKAGE__->set_primary_key('artistid');
+ __PACKAGE__->has_many(cds => 'MyDB::Schema::Result::CD');
+
+ 1;
+
+ # Create a query (view) based result source, in a result class
+ package MyDB::Schema::Result::Year2000CDs;
+
+ __PACKAGE__->load_components('Core');
+ __PACKAGE__->table_class('DBIx::Class::ResultSource::View');
+
+ __PACKAGE__->table('year2000cds');
+ __PACKAGE__->result_source_instance->is_virtual(1);
+ __PACKAGE__->result_source_instance->view_definition(
+ "SELECT cdid, artist, title FROM cd WHERE year ='2000'"
+ );
+
+
=head1 DESCRIPTION
-A ResultSource is a component of a schema from which results can be directly
-retrieved, most usually a table (see L<DBIx::Class::ResultSource::Table>)
+A ResultSource is an object that represents a source of data for querying.
+
+This class is a base class for various specialised types of result
+sources, for example L<DBIx::Class::ResultSource::Table>. Table is the
+default result source type, so one is created for you when defining a
+result class as described in the synopsis above.
+
+More specifically, the L<DBIx::Class::Core> component pulls in the
+L<DBIx::Class::ResultSourceProxy::Table> as a base class, which
+defines the L<table|DBIx::Class::ResultSourceProxy::Table/table>
+method. When called, C<table> creates and stores an instance of
+L<DBIx::Class::ResultSource::Table>. Luckily, to use tables as result
+sources, you don't need to remember any of this.
+
+Result sources representing select queries, or views, can also be
+created, see L<DBIx::Class::ResultSource::View> for full details.
+
+=head2 Finding result source objects
+
+As mentioned above, a result source instance is created and stored for
+you when you define a L<Result Class|DBIx::Class::Manual::Glossary/Result Class>.
+
+You can retrieve the result source at runtime in the following ways:
+
+=over
+
+=item From a Schema object:
+
+ $schema->source($source_name);
+
+=item From a Row object:
+
+ $row->result_source;
+
+=item From a ResultSet object:
-Basic view support also exists, see L<<DBIx::Class::ResultSource::View>.
+ $rs->result_source;
+
+=back
=head1 METHODS
$source->add_columns('col1' => \%col1_info, 'col2' => \%col2_info, ...);
-Adds columns to the result source. If supplied key => hashref pairs, uses
-the hashref as the column_info for that column. Repeated calls of this
-method will add more columns, not replace them.
+Adds columns to the result source. If supplied colname => hashref
+pairs, uses the hashref as the L</column_info> for that column. Repeated
+calls of this method will add more columns, not replace them.
The column names given will be created as accessor methods on your
L<DBIx::Class::Row> objects. You can change the name of the accessor
=item accessor
+ { accessor => '_name' }
+
+ # example use, replace standard accessor with one of your own:
+ sub name {
+ my ($self, $value) = @_;
+
+ die "Name cannot contain digits!" if($value =~ /\d/);
+ $self->_name($value);
+
+ return $self->_name();
+ }
+
Use this to set the name of the accessor method for this column. If unset,
the name of the column will be used.
=item data_type
-This contains the column type. It is automatically filled by the
-L<SQL::Translator::Producer::DBIx::Class::File> producer, and the
-L<DBIx::Class::Schema::Loader> module. If you do not enter a
-data_type, DBIx::Class will attempt to retrieve it from the
-database for you, using L<DBI>'s column_info method. The values of this
-key are typically upper-cased.
+ { data_type => 'integer' }
+
+This contains the column type. It is automatically filled if you use the
+L<SQL::Translator::Producer::DBIx::Class::File> producer, or the
+L<DBIx::Class::Schema::Loader> module.
Currently there is no standard set of values for the data_type. Use
whatever your database supports.
=item size
+ { size => 20 }
+
The length of your column, if it is a column type that can have a size
-restriction. This is currently only used by L<DBIx::Class::Schema/deploy>.
+restriction. This is currently only used to create tables from your
+schema, see L<DBIx::Class::Schema/deploy>.
=item is_nullable
-Set this to a true value for a columns that is allowed to contain
-NULL values. This is currently only used by L<DBIx::Class::Schema/deploy>.
+ { is_nullable => 1 }
+
+Set this to a true value for a column that is allowed to contain NULL
+values, default is false. This is currently only used to create tables
+from your schema, see L<DBIx::Class::Schema/deploy>.
=item is_auto_increment
+ { is_auto_increment => 1 }
+
Set this to a true value for a column whose value is somehow
-automatically set. This is used to determine which columns to empty
-when cloning objects using C<copy>. It is also used by
+automatically set, defaults to false. This is used to determine which
+columns to empty when cloning objects using
+L<DBIx::Class::Row/copy>. It is also used by
L<DBIx::Class::Schema/deploy>.
+=item is_numeric
+
+ { is_numeric => 1 }
+
+Set this to a true or false value (not C<undef>) to explicitly specify
+if this column contains numeric data. This controls how set_column
+decides whether to consider a column dirty after an update: if
+C<is_numeric> is true a numeric comparison C<< != >> will take place
+instead of the usual C<eq>.
+
+If not specified the storage class will attempt to figure this out on
+first access to the column, based on the column C<data_type>. The
+result will be cached in this attribute.
+
=item is_foreign_key
+ { is_foreign_key => 1 }
+
Set this to a true value for a column that contains a key from a
-foreign table. This is currently only used by
-L<DBIx::Class::Schema/deploy>.
+foreign table, defaults to false. This is currently only used to
+create tables from your schema, see L<DBIx::Class::Schema/deploy>.
=item default_value
-Set this to the default value which will be inserted into a column
-by the database. Can contain either a value or a function. This is
-currently only used by L<DBIx::Class::Schema/deploy>.
+ { default_value => \'now()' }
+
+Set this to the default value which will be inserted into a column by
+the database. Can contain either a value or a function (use a
+reference to a scalar e.g. C<\'now()'> if you want a function). This
+is currently only used to create tables from your schema, see
+L<DBIx::Class::Schema/deploy>.
+
+See the note on L<DBIx::Class::Row/new> for more information about possible
+issues related to db-side default values.
=item sequence
+ { sequence => 'my_table_seq' }
+
Set this on a primary key column to the name of the sequence used to
generate a new key value. If not specified, L<DBIx::Class::PK::Auto>
will attempt to retrieve the name of the sequence from the database
=item auto_nextval
-Set this to a true value for a column whose value is retrieved
-automatically from an oracle sequence. If you do not use an Oracle
-trigger to get the nextval, you have to set sequence as well.
+Set this to a true value for a column whose value is retrieved automatically
+from a sequence or function (if supported by your Storage driver.) For a
+sequence, if you do not use a trigger to get the nextval, you have to set the
+L</sequence> value as well.
+
+Also set this for MSSQL columns with the 'uniqueidentifier'
+L<DBIx::Class::ResultSource/data_type> whose values you want to automatically
+generate using C<NEWID()>, unless they are a primary key in which case this will
+be done anyway.
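+
+For example (a sketch only; the column and sequence names are illustrative),
+a sequence-driven column without a database trigger could be declared as:
+
+  __PACKAGE__->add_columns(
+    'id' => {
+      data_type    => 'integer',
+      auto_nextval => 1,
+      sequence     => 'my_table_id_seq',
+    },
+  );
+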
=item extra
=over
-=item Arguments: $colname, [ \%columninfo ]
+=item Arguments: $colname, \%columninfo?
=item Return value: 1/0 (true/false)
=back
- $source->add_column('col' => \%info?);
+ $source->add_column('col' => \%info);
Add a single column and optional column info. Uses the same column
info keys as L</add_columns>.
my $info = $source->column_info($col);
Returns the column metadata hashref for a column, as originally passed
-to L</add_columns>. See the description of L</add_columns> for information
-on the contents of the hashref.
+to L</add_columns>. See L</add_columns> above for information on the
+contents of the hashref.
=cut
=back
-Defines one or more columns as primary key for this source. Should be
+Defines one or more columns as primary key for this source. Must be
called after L</add_columns>.
Additionally, defines a L<unique constraint|add_unique_constraint>
named C<primary>.
The primary key columns are used by L<DBIx::Class::PK::Auto> to
-retrieve automatically created values from the database.
+retrieve automatically created values from the database. They are also
+used as default joining columns when specifying relationships, see
+L<DBIx::Class::Relationship>.
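+
+For example (the column names are illustrative):
+
+  __PACKAGE__->set_primary_key('artistid');
+
+  # or, for a composite primary key
+  __PACKAGE__->set_primary_key(qw/ artist cd /);
+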
=cut
=over 4
-=item Arguments: [ $name ], \@colnames
+=item Arguments: $name?, \@colnames
=item Return value: undefined
__PACKAGE__->add_unique_constraint([ qw/column1 column2/ ]);
-This will result in a unique constraint named C<table_column1_column2>, where
-C<table> is replaced with the table name.
+This will result in a unique constraint named
+C<table_column1_column2>, where C<table> is replaced with the table
+name.
-Unique constraints are used, for example, when you call
-L<DBIx::Class::ResultSet/find>. Only columns in the constraint are searched.
+Unique constraints are used, for example, when you pass the constraint
+name as the C<key> attribute to L<DBIx::Class::ResultSet/find>. Then
+only columns in the constraint are searched.
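+
+For example (a sketch, using the constraint declared above and C<$rs> as a
+resultset for this source):
+
+  my $row = $rs->find(
+    { column1 => 'foo', column2 => 'bar' },
+    { key => 'table_column1_column2' },
+  );
+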
Throws an error if any of the given column names do not yet exist on
the result source.
sub name_unique_constraint {
my ($self, $cols) = @_;
- return join '_', $self->name, @$cols;
+ my $name = $self->name;
+ $name = $$name if (ref $name eq 'SCALAR');
+
+ return join '_', $name, @$cols;
}
=head2 unique_constraints
$source->unique_constraints();
-Read-only accessor which returns a hash of unique constraints on this source.
+Read-only accessor which returns a hash of unique constraints on this
+source.
The hash is keyed by constraint name, and contains an arrayref of
column names as values.
=back
- package My::ResultSetClass;
+ package My::Schema::ResultSet::Artist;
use base 'DBIx::Class::ResultSet';
...
- $source->resultset_class('My::ResultSet::Class');
+ # In the result class
+ __PACKAGE__->resultset_class('My::Schema::ResultSet::Artist');
+
+ # Or in code
+ $source->resultset_class('My::Schema::ResultSet::Artist');
Set the class of the resultset. This is useful if you want to create your
own resultset methods. Create your own class derived from
=back
+ # In the result class
+ __PACKAGE__->resultset_attributes({ order_by => [ 'id' ] });
+
+ # Or in code
$source->resultset_attributes({ order_by => [ 'id' ] });
Store a collection of resultset attributes, that will be set on every
=back
Throws an exception if the condition is improperly supplied, or cannot
-be resolved using L</resolve_join>.
+be resolved.
=cut
}
return unless $f_source; # Can't test rel without f_source
- eval { $self->resolve_join($rel, 'me') };
+ eval { $self->_resolve_join($rel, 'me', {}, []) };
if ($@) { # If the resolve failed, back out and re-throw the error
delete $rels{$rel}; #
L<DBIx::Class::Relationship>.
The returned hashref is keyed by the name of the opposing
-relationship, and contains it's data in the same manner as
+relationship, and contains its data in the same manner as
L</relationship_info>.
=cut
my @other_cond = keys(%$othercond);
my @other_refkeys = map {/^\w+\.(\w+)$/} @other_cond;
my @other_keys = map {$othercond->{$_} =~ /^\w+\.(\w+)$/} @other_cond;
- next if (!$self->compare_relationship_keys(\@refkeys, \@other_keys) ||
- !$self->compare_relationship_keys(\@other_refkeys, \@keys));
+ next if (!$self->_compare_relationship_keys(\@refkeys, \@other_keys) ||
+ !$self->_compare_relationship_keys(\@other_refkeys, \@keys));
$ret->{$otherrel} = $otherrel_info;
}
}
return $ret;
}
-=head2 compare_relationship_keys
-
-=over 4
-
-=item Arguments: \@keys1, \@keys2
-
-=item Return value: 1/0 (true/false)
-
-=back
-
-Returns true if both sets of keynames are the same, false otherwise.
-
-=cut
-
sub compare_relationship_keys {
+ carp 'compare_relationship_keys is a private method, stop calling it';
+ my $self = shift;
+ $self->_compare_relationship_keys (@_);
+}
+
+# Returns true if both sets of keynames are the same, false otherwise.
+sub _compare_relationship_keys {
my ($self, $keys1, $keys2) = @_;
# Make sure every keys1 is in keys2
return $found;
}
-=head2 resolve_join
-
-=over 4
+sub resolve_join {
+ carp 'resolve_join is a private method, stop calling it';
+ my $self = shift;
+ $self->_resolve_join (@_);
+}
-=item Arguments: $relation
+# Returns the {from} structure used to express JOIN conditions
+sub _resolve_join {
+ my ($self, $join, $alias, $seen, $jpath, $force_left) = @_;
-=item Return value: Join condition arrayref
+ # we need a supplied one, because we do in-place modifications, no returns
+ $self->throw_exception ('You must supply a seen hashref as the 3rd argument to _resolve_join')
+ unless ref $seen eq 'HASH';
-=back
+ $self->throw_exception ('You must supply a joinpath arrayref as the 4th argument to _resolve_join')
+ unless ref $jpath eq 'ARRAY';
-Returns the join structure required for the related result source.
+ $jpath = [@$jpath];
-=cut
-
-sub resolve_join {
- my ($self, $join, $alias, $seen, $force_left) = @_;
- $seen ||= {};
- $force_left ||= { force => 0 };
if (ref $join eq 'ARRAY') {
- return map { $self->resolve_join($_, $alias, $seen) } @$join;
+ return
+ map {
+ $self->_resolve_join($_, $alias, $seen, $jpath, $force_left);
+ } @$join;
} elsif (ref $join eq 'HASH') {
return
map {
- my $as = ($seen->{$_} ? $_.'_'.($seen->{$_}+1) : $_);
- local $force_left->{force};
+ my $as = ($seen->{$_} ? join ('_', $_, $seen->{$_} + 1) : $_); # the actual seen value will be incremented below
+ local $force_left->{force} = $force_left->{force};
(
- $self->resolve_join($_, $alias, $seen, $force_left),
- $self->related_source($_)->resolve_join(
- $join->{$_}, $as, $seen, $force_left
+ $self->_resolve_join($_, $alias, $seen, [@$jpath], $force_left),
+ $self->related_source($_)->_resolve_join(
+ $join->{$_}, $as, $seen, [@$jpath, $_], $force_left
)
);
} keys %$join;
} elsif (ref $join) {
$self->throw_exception("No idea how to resolve join reftype ".ref $join);
} else {
+
+ return() unless defined $join;
+
my $count = ++$seen->{$join};
- #use Data::Dumper; warn Dumper($seen);
my $as = ($count > 1 ? "${join}_${count}" : $join);
+
my $rel_info = $self->relationship_info($join);
$self->throw_exception("No such relationship ${join}") unless $rel_info;
my $type;
- if ($force_left->{force}) {
+ if ($force_left) {
$type = 'left';
} else {
$type = $rel_info->{attrs}{join_type} || '';
- $force_left->{force} = 1 if lc($type) eq 'left';
+ $force_left = 1 if lc($type) eq 'left';
}
- return [ { $as => $self->related_source($join)->from,
- -join_type => $type },
- $self->resolve_condition($rel_info->{cond}, $as, $alias) ];
+
+ my $rel_src = $self->related_source($join);
+ return [ { $as => $rel_src->from,
+ -source_handle => $rel_src->handle,
+ -join_type => $type,
+ -join_path => [@$jpath, $join],
+ -alias => $as,
+ -relation_chain_depth => $seen->{-relation_chain_depth} || 0,
+ },
+ $self->_resolve_condition($rel_info->{cond}, $as, $alias) ];
}
}
-=head2 pk_depends_on
-
-=over 4
-
-=item Arguments: $relname, $rel_data
-
-=item Return value: 1/0 (true/false)
-
-=back
+sub pk_depends_on {
+ carp 'pk_depends_on is a private method, stop calling it';
+ my $self = shift;
+ $self->_pk_depends_on (@_);
+}
-Determines whether a relation is dependent on an object from this source
-having already been inserted. Takes the name of the relationship and a
-hashref of columns of the related object.
+# Determines whether a relation is dependent on an object from this source
+# having already been inserted. Takes the name of the relationship and a
+# hashref of columns of the related object.
+sub _pk_depends_on {
+ my ($self, $relname, $rel_data) = @_;
-=cut
+ my $relinfo = $self->relationship_info($relname);
-sub pk_depends_on {
- my ($self, $relname, $rel_data) = @_;
- my $cond = $self->relationship_info($relname)->{cond};
+ # don't assume things if the relationship direction is specified
+ return $relinfo->{attrs}{is_foreign_key_constraint}
+ if exists ($relinfo->{attrs}{is_foreign_key_constraint});
+ my $cond = $relinfo->{cond};
return 0 unless ref($cond) eq 'HASH';
# map { foreign.foo => 'self.bar' } to { bar => 'foo' }
-
my $keyhash = { map { my $x = $_; $x =~ s/.*\.//; $x; } reverse %$cond };
# assume anything that references our PK probably is dependent on us
# rather than vice versa, unless the far side is (a) defined or (b)
# auto-increment
-
my $rel_source = $self->related_source($relname);
foreach my $p ($self->primary_columns) {
return 1;
}
-=head2 resolve_condition
-
-=over 4
-
-=item Arguments: $cond, $as, $alias|$object
-
-=back
-
-Resolves the passed condition to a concrete query fragment. If given an alias,
-returns a join condition; if given an object, inverts that object to produce
-a related conditional from that object.
-
-=cut
+sub resolve_condition {
+ carp 'resolve_condition is a private method, stop calling it';
+ my $self = shift;
+ $self->_resolve_condition (@_);
+}
+# Resolves the passed condition to a concrete query fragment. If given an alias,
+# returns a join condition; if given an object, inverts that object to produce
+# a related conditional from that object.
our $UNRESOLVABLE_CONDITION = \'1 = 0';
-sub resolve_condition {
+sub _resolve_condition {
my ($self, $cond, $as, $for) = @_;
- #warn %$cond;
if (ref $cond eq 'HASH') {
my %ret;
foreach my $k (keys %{$cond}) {
#warn "$self $k $for $v";
unless ($for->has_column_loaded($v)) {
if ($for->in_storage) {
- $self->throw_exception("Column ${v} not loaded on ${for} trying to resolve relationship");
+ $self->throw_exception(
+ "Column ${v} not loaded or not passed to new() prior to insert()"
+ ." on ${for} trying to resolve relationship (maybe you forgot "
+ ."to call ->discard_changes to get defaults from the db)"
+ );
}
return $UNRESOLVABLE_CONDITION;
}
}
return \%ret;
} elsif (ref $cond eq 'ARRAY') {
- return [ map { $self->resolve_condition($_, $as, $for) } @$cond ];
+ return [ map { $self->_resolve_condition($_, $as, $for) } @$cond ];
} else {
- die("Can't handle this yet :(");
+ die("Can't handle condition $cond yet :(");
}
}
-=head2 resolve_prefetch
-
-=over 4
-
-=item Arguments: hashref/arrayref/scalar
-
-=back
-
-Accepts one or more relationships for the current source and returns an
-array of column names for each of those relationships. Column names are
-prefixed relative to the current source, in accordance with where they appear
-in the supplied relationships. Examples:
-
- my $source = $schema->resultset('Tag')->source;
- @columns = $source->resolve_prefetch( { cd => 'artist' } );
-
- # @columns =
- #(
- # 'cd.cdid',
- # 'cd.artist',
- # 'cd.title',
- # 'cd.year',
- # 'cd.artist.artistid',
- # 'cd.artist.name'
- #)
-
- @columns = $source->resolve_prefetch( qw[/ cd /] );
-
- # @columns =
- #(
- # 'cd.cdid',
- # 'cd.artist',
- # 'cd.title',
- # 'cd.year'
- #)
-
- $source = $schema->resultset('CD')->source;
- @columns = $source->resolve_prefetch( qw[/ artist producer /] );
-
- # @columns =
- #(
- # 'artist.artistid',
- # 'artist.name',
- # 'producer.producerid',
- # 'producer.name'
- #)
-
-=cut
-
+# Legacy code, needs to go entirely away (fully replaced by _resolve_prefetch)
sub resolve_prefetch {
+ carp 'resolve_prefetch is a private method, stop calling it';
+
my ($self, $pre, $alias, $seen, $order, $collapse) = @_;
$seen ||= {};
- #$alias ||= $self->name;
- #warn $alias, Dumper $pre;
if( ref $pre eq 'ARRAY' ) {
return
map { $self->resolve_prefetch( $_, $alias, $seen, $order, $collapse ) }
$self->related_source($_)->resolve_prefetch(
$pre->{$_}, "${alias}.$_", $seen, $order, $collapse)
} keys %$pre;
- #die Dumper \@ret;
return @ret;
}
elsif( ref $pre ) {
? "at the same level (${as_prefix}) "
: "at top level "
)
- . 'will currently disrupt both the functionality of $rs->count(), '
- . 'and the amount of objects retrievable via $rs->next(). '
+ . 'will explode the number of row objects retrievable via ->next or ->all. '
+ . 'Use at your own risk.'
+ );
+ }
+ #my @col = map { (/^self\.(.+)$/ ? ("${as_prefix}.$1") : ()); }
+ # values %{$rel_info->{cond}};
+ $collapse->{".${as_prefix}${pre}"} = [ $rel_source->primary_columns ];
+ # action at a distance. prepending the '.' allows simpler code
+ # in ResultSet->_collapse_result
+ my @key = map { (/^foreign\.(.+)$/ ? ($1) : ()); }
+ keys %{$rel_info->{cond}};
+ my @ord = (ref($rel_info->{attrs}{order_by}) eq 'ARRAY'
+ ? @{$rel_info->{attrs}{order_by}}
+ : (defined $rel_info->{attrs}{order_by}
+ ? ($rel_info->{attrs}{order_by})
+ : ()));
+ push(@$order, map { "${as}.$_" } (@key, @ord));
+ }
+
+ return map { [ "${as}.$_", "${as_prefix}${pre}.$_", ] }
+ $rel_source->columns;
+ }
+}
+
+# Accepts one or more relationships for the current source and returns an
+# array of column names for each of those relationships. Column names are
+# prefixed relative to the current source, in accordance with where they appear
+# in the supplied relationships. Needs an alias_map generated by
+# $rs->_joinpath_aliases
+
+sub _resolve_prefetch {
+ my ($self, $pre, $alias, $alias_map, $order, $collapse, $pref_path) = @_;
+ $pref_path ||= [];
+
+ if( ref $pre eq 'ARRAY' ) {
+ return
+ map { $self->_resolve_prefetch( $_, $alias, $alias_map, $order, $collapse, [ @$pref_path ] ) }
+ @$pre;
+ }
+ elsif( ref $pre eq 'HASH' ) {
+ my @ret =
+ map {
+ $self->_resolve_prefetch($_, $alias, $alias_map, $order, $collapse, [ @$pref_path ] ),
+ $self->related_source($_)->_resolve_prefetch(
+ $pre->{$_}, "${alias}.$_", $alias_map, $order, $collapse, [ @$pref_path, $_] )
+ } keys %$pre;
+ return @ret;
+ }
+ elsif( ref $pre ) {
+ $self->throw_exception(
+ "don't know how to resolve prefetch reftype ".ref($pre));
+ }
+ else {
+ my $p = $alias_map;
+ $p = $p->{$_} for (@$pref_path, $pre);
+
+ $self->throw_exception (
+ "Unable to resolve prefetch $pre - join alias map does not contain an entry for path: "
+ . join (' -> ', @$pref_path, $pre)
+ ) if (ref $p->{-join_aliases} ne 'ARRAY' or not @{$p->{-join_aliases}} );
+
+ my $as = shift @{$p->{-join_aliases}};
+
+ my $rel_info = $self->relationship_info( $pre );
+ $self->throw_exception( $self->name . " has no such relationship '$pre'" )
+ unless $rel_info;
+ my $as_prefix = ($alias =~ /^.*?\.(.+)$/ ? $1.'.' : '');
+ my $rel_source = $self->related_source($pre);
+
+ if (exists $rel_info->{attrs}{accessor}
+ && $rel_info->{attrs}{accessor} eq 'multi') {
+ $self->throw_exception(
+ "Can't prefetch has_many ${pre} (join cond too complex)")
+ unless ref($rel_info->{cond}) eq 'HASH';
+ my $dots = @{[$as_prefix =~ m/\./g]} + 1; # +1 to match the ".${as_prefix}"
+ if (my ($fail) = grep { @{[$_ =~ m/\./g]} == $dots }
+ keys %{$collapse}) {
+ my ($last) = ($fail =~ /([^\.]+)$/);
+ carp (
+ "Prefetching multiple has_many rels ${last} and ${pre} "
+ .(length($as_prefix)
+ ? "at the same level (${as_prefix}) "
+ : "at top level "
+ )
+ . 'will explode the number of row objects retrievable via ->next or ->all. '
. 'Use at your own risk.'
);
}
return map { [ "${as}.$_", "${as_prefix}${pre}.$_", ] }
$rel_source->columns;
- #warn $alias, Dumper (\@ret);
- #return @ret;
}
}
=head1 SYNOPSIS
- package MyDB::Schema::Year2000CDs;
+ package MyDB::Schema::Result::Year2000CDs;
- use DBIx::Class::ResultSource::View;
+ use base qw/DBIx::Class/;
__PACKAGE__->load_components('Core');
__PACKAGE__->table_class('DBIx::Class::ResultSource::View');
__PACKAGE__->result_source_instance->is_virtual(1);
__PACKAGE__->result_source_instance->view_definition(
"SELECT cdid, artist, title FROM cd WHERE year ='2000'"
- );
+ );
+ __PACKAGE__->add_columns(
+ 'cdid' => {
+ data_type => 'integer',
+ is_auto_increment => 1,
+ },
+ 'artist' => {
+ data_type => 'integer',
+ },
+ 'title' => {
+ data_type => 'varchar',
+ size => 100,
+ },
+ );
=head1 DESCRIPTION
View object that inherits from L<DBIx::Class::ResultSource>
-This class extends ResultSource to add basic view support.
+This class extends ResultSource to add basic view support.
-A view has a L</view_definition>, which contains an SQL query. The
-query cannot have parameters. It may contain JOINs, sub selects and
-any other SQL your database supports.
+A view has a L</view_definition>, which contains a SQL query. The query can
+only have parameters if L</is_virtual> is set to true. It may contain JOINs,
+sub selects and any other SQL your database supports.
View definition SQL is deployed to your database on
L<DBIx::Class::Schema/deploy> unless you set L</is_virtual> to true.
Deploying the view does B<not> translate it between different database
syntaxes, so be careful what you write in your view SQL.
-Virtual views (L</is_virtual> unset or false), are assumed to not
+Virtual views (L</is_virtual> true) are assumed to not
exist in your database as a real view. The L</view_definition> in this
case replaces the view name in a FROM clause in a subselect.
+=head1 EXAMPLES
+
+Having created the MyDB::Schema::Result::Year2000CDs class as shown in the SYNOPSIS
+above, you can then:
+
+  $cds_2000 = $schema->resultset('Year2000CDs')
+ ->search()
+ ->all();
+ $count = $schema->resultset('Year2000CDs')
+ ->search()
+ ->count();
+
+If you modify the schema to include a placeholder
+
+  __PACKAGE__->result_source_instance->view_definition(
+      "SELECT cdid, artist, title FROM cd WHERE year = ?"
+  );
+
+and ensure you have is_virtual set to true:
+
+ __PACKAGE__->result_source_instance->is_virtual(1);
+
+You could now say:
+
+  $cds_2001 = $schema->resultset('Year2000CDs')
+ ->search({}, { bind => [2001] })
+ ->all();
+ $count = $schema->resultset('Year2000CDs')
+ ->search({}, { bind => [2001] })
+ ->count();
+
=head1 SQL EXAMPLES
=over
-=item is_virtual set to true
+=item is_virtual set to false
$schema->resultset('Year2000CDs')->all();
SELECT cdid, artist, title FROM year2000cds me
-=item is_virtual set to false
+=item is_virtual set to true
$schema->resultset('Year2000CDs')->all();
Jess Robinson <castaway@desert-island.me.uk>
+Wallace Reis <wreis@cpan.org>
+
=head1 LICENSE
You may distribute this code under the same terms as Perl itself.
=head1 NAME
-DBIx::Class::ResultSourceHandle
+DBIx::Class::ResultSourceHandle - Decouple Row/ResultSet objects from their Source objects
=head1 DESCRIPTION
my ($self, $cloning) = @_;
my $to_serialize = { %$self };
-
+
my $class = $self->schema->class($self->source_moniker);
$to_serialize->{schema} = $class;
return (Storable::freeze($to_serialize));
Thaws frozen handle. Resets the internal schema reference to the package
variable C<$thaw_schema>. The recommended way of setting this is to use
-C<$schema->thaw($ice)> which handles this for you.
+C<< $schema->thaw($ice) >> which handles this for you.
=cut
use base qw/DBIx::Class::ResultSourceProxy/;
use DBIx::Class::ResultSource::Table;
+use Scalar::Util ();
__PACKAGE__->mk_classdata(table_class => 'DBIx::Class::ResultSource::Table');
my $class_has_table_instance = ($table and $table->result_class eq $class);
return $table if $class_has_table_instance;
+ my $table_class = $class->table_class;
+ $class->ensure_class_loaded($table_class);
+
if( $table ) {
- $table = $class->table_class->new({
+ $table = $table_class->new({
%$table,
result_class => $class,
source_name => undef,
});
}
else {
- $table = $class->table_class->new({
+ $table = $table_class->new({
name => undef,
result_class => $class,
source_name => undef,
=head2 table
__PACKAGE__->table('tbl_name');
-
+
Gets or sets the table name.
=cut
sub table {
my ($class, $table) = @_;
return $class->result_source_instance->name unless $table;
- unless (ref $table) {
- $table = $class->table_class->new({
+
+ unless (Scalar::Util::blessed($table) && $table->isa($class->table_class)) {
+
+ my $table_class = $class->table_class;
+ $class->ensure_class_loaded($table_class);
+
+ $table = $table_class->new({
$class->can('result_source_instance') ?
%{$class->result_source_instance||{}} : (),
name => $table,
For a more involved explanation, see L<DBIx::Class::ResultSet/create>.
+Please note that if a value is not passed to new, no value will be sent
+in the SQL INSERT call, and the column will therefore assume whatever
+default value was specified in your database. While DBIC will retrieve the
+value of autoincrement columns, it will never make an explicit database
+trip to retrieve default values assigned by the RDBMS. You can explicitly
+request that all values be fetched back from the database by calling
+L</discard_changes>, or you can supply an explicit C<undef> to columns
+with NULL as the default, and save yourself a SELECT.
+
+ CAVEAT:
+
+ The behavior described above will backfire if you use a foreign key column
+ with a database-defined default. If you call the relationship accessor on
+ an object that doesn't have a set value for the FK column, DBIC will throw
+ an exception, as it has no way of knowing the PK of the related object (if
+ there is one).
+
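+For example (a minimal sketch; the C<Artist> source and its columns are
+illustrative):
+
+  my $artist = $schema->resultset('Artist')->create({ name => 'Shiny' });
+
+  # pick up any database-assigned defaults with one extra SELECT
+  $artist->discard_changes;
+
+  # or, for a column whose database default is NULL, pass an explicit undef
+  # up front and skip the extra SELECT
+  my $other = $schema->resultset('Artist')->create({
+    name     => 'Dull',
+    homepage => undef,
+  });
+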
=cut
## It needs to store the new objects somewhere, and call insert on that list later when insert is called on this object. We may need an accessor for these so the user can retrieve them, if just doing ->new().
->resultset
->new_result($data);
}
- if ($self->result_source->pk_depends_on($relname, $data)) {
+ if ($self->result_source->_pk_depends_on($relname, $data)) {
MULTICREATE_DEBUG and warn "MC $self constructing $relname via find_or_new";
return $self->result_source
->related_source($relname)
foreach my $key (keys %$reverse) {
# if their primary key depends on us, then we have to
# just create a result and we'll fill it out afterwards
- return 1 if $rel_source->pk_depends_on($key, $us);
+ return 1 if $rel_source->_pk_depends_on($key, $us);
}
return 0;
}
if ($attrs) {
$new->throw_exception("attrs must be a hashref")
unless ref($attrs) eq 'HASH';
-
+
my ($related,$inflated);
- ## Pretend all the rels are actual objects, unset below if not, for insert() to fix
- $new->{_rel_in_storage} = 1;
foreach my $key (keys %$attrs) {
if (ref $attrs->{$key}) {
}
if ($rel_obj->in_storage) {
+ $new->{_rel_in_storage}{$key} = 1;
$new->set_from_related($key, $rel_obj);
} else {
- $new->{_rel_in_storage} = 0;
MULTICREATE_DEBUG and warn "MC $new uninserted $key $rel_obj\n";
}
}
if ($rel_obj->in_storage) {
- $new->set_from_related($key, $rel_obj);
+ $rel_obj->throw_exception ('A multi relationship can not be pre-existing when doing multicreate. Something went wrong');
} else {
- $new->{_rel_in_storage} = 0;
MULTICREATE_DEBUG and
warn "MC $new uninserted $key $rel_obj (${\($idx+1)} of $total)\n";
}
- $new->set_from_related($key, $rel_obj) if $rel_obj->in_storage;
push(@objects, $rel_obj);
}
$related->{$key} = \@objects;
if(!Scalar::Util::blessed($rel_obj)) {
$rel_obj = $new->__new_related_find_or_new_helper($key, $rel_obj);
}
- unless ($rel_obj->in_storage) {
- $new->{_rel_in_storage} = 0;
+ if ($rel_obj->in_storage) {
+ $new->{_rel_in_storage}{$key} = 1;
+ }
+ else {
MULTICREATE_DEBUG and warn "MC $new uninserted $key $rel_obj";
}
$inflated->{$key} = $rel_obj;
}
$new->throw_exception("No such column $key on $class")
unless $class->has_column($key);
- $new->store_column($key => $attrs->{$key});
+ $new->store_column($key => $attrs->{$key});
}
$new->{_relationship_data} = $related if $related;
my $rollback_guard;
# Check if we stored uninserted relobjs here in new()
- my %related_stuff = (%{$self->{_relationship_data} || {}},
+ my %related_stuff = (%{$self->{_relationship_data} || {}},
%{$self->{_inflated_column} || {}});
- if(!$self->{_rel_in_storage}) {
-
- # The guard will save us if we blow out of this scope via die
- $rollback_guard = $source->storage->txn_scope_guard;
+ # insert what needs to be inserted before us
+ my %pre_insert;
+ for my $relname (keys %related_stuff) {
+ my $rel_obj = $related_stuff{$relname};
- ## Should all be in relationship_data, but we need to get rid of the
- ## 'filter' reltype..
- ## These are the FK rels, need their IDs for the insert.
+ if (! $self->{_rel_in_storage}{$relname}) {
+ next unless (Scalar::Util::blessed($rel_obj)
+ && $rel_obj->isa('DBIx::Class::Row'));
- my @pri = $self->primary_columns;
+ next unless $source->_pk_depends_on(
+ $relname, { $rel_obj->get_columns }
+ );
- REL: foreach my $relname (keys %related_stuff) {
-
- my $rel_obj = $related_stuff{$relname};
-
- next REL unless (Scalar::Util::blessed($rel_obj)
- && $rel_obj->isa('DBIx::Class::Row'));
-
- next REL unless $source->pk_depends_on(
- $relname, { $rel_obj->get_columns }
- );
+ # The guard will save us if we blow out of this scope via die
+ $rollback_guard ||= $source->storage->txn_scope_guard;
MULTICREATE_DEBUG and warn "MC $self pre-reconstructing $relname $rel_obj\n";
->related_source($relname)
->resultset
->find_or_create($them);
+
%{$rel_obj} = %{$re};
- $self->set_from_related($relname, $rel_obj);
- delete $related_stuff{$relname};
+ $self->{_rel_in_storage}{$relname} = 1;
}
+
+ $self->set_from_related($relname, $rel_obj);
+ delete $related_stuff{$relname};
+ }
+
+ # start a transaction here if not started yet and there is more stuff
+ # to insert after us
+ if (keys %related_stuff) {
+ $rollback_guard ||= $source->storage->txn_scope_guard
}
MULTICREATE_DEBUG and do {
## PK::Auto
my @auto_pri = grep {
- !defined $self->get_column($_) ||
- ref($self->get_column($_)) eq 'SCALAR'
+ (not defined $self->get_column($_))
+ ||
+ (ref($self->get_column($_)) eq 'SCALAR')
} $self->primary_columns;
if (@auto_pri) {
- #$self->throw_exception( "More than one possible key found for auto-inc on ".ref $self )
- # if defined $too_many;
MULTICREATE_DEBUG and warn "MC $self fetching missing PKs ".join(', ', @auto_pri)."\n";
my $storage = $self->result_source->storage;
$self->throw_exception( "Missing primary key but Storage doesn't support last_insert_id" )
$self->throw_exception( "Can't get last insert id" )
unless (@ids == @auto_pri);
$self->store_column($auto_pri[$_] => $ids[$_]) for 0 .. $#ids;
-#use Data::Dumper; warn Dumper($self);
}
$self->{_dirty_columns} = {};
$self->{related_resultsets} = {};
- if(!$self->{_rel_in_storage}) {
- ## Now do the relationships that need our ID (has_many etc.)
- foreach my $relname (keys %related_stuff) {
- my $rel_obj = $related_stuff{$relname};
- my @cands;
- if (Scalar::Util::blessed($rel_obj)
- && $rel_obj->isa('DBIx::Class::Row')) {
- @cands = ($rel_obj);
- } elsif (ref $rel_obj eq 'ARRAY') {
- @cands = @$rel_obj;
- }
- if (@cands) {
- my $reverse = $source->reverse_relationship_info($relname);
- foreach my $obj (@cands) {
- $obj->set_from_related($_, $self) for keys %$reverse;
- my $them = { %{$obj->{_relationship_data} || {} }, $obj->get_inflated_columns };
- if ($self->__their_pk_needs_us($relname, $them)) {
- if (exists $self->{_ignore_at_insert}{$relname}) {
- MULTICREATE_DEBUG and warn "MC $self skipping post-insert on $relname";
- } else {
- MULTICREATE_DEBUG and warn "MC $self re-creating $relname $obj";
- my $re = $self->result_source
- ->related_source($relname)
- ->resultset
- ->find_or_create($them);
- %{$obj} = %{$re};
- MULTICREATE_DEBUG and warn "MC $self new $relname $obj";
- }
+ foreach my $relname (keys %related_stuff) {
+ next unless $source->has_relationship ($relname);
+
+ my @cands = ref $related_stuff{$relname} eq 'ARRAY'
+ ? @{$related_stuff{$relname}}
+ : $related_stuff{$relname}
+ ;
+
+ if (@cands
+ && Scalar::Util::blessed($cands[0])
+ && $cands[0]->isa('DBIx::Class::Row')
+ ) {
+ my $reverse = $source->reverse_relationship_info($relname);
+ foreach my $obj (@cands) {
+ $obj->set_from_related($_, $self) for keys %$reverse;
+ my $them = { %{$obj->{_relationship_data} || {} }, $obj->get_inflated_columns };
+ if ($self->__their_pk_needs_us($relname, $them)) {
+ if (exists $self->{_ignore_at_insert}{$relname}) {
+ MULTICREATE_DEBUG and warn "MC $self skipping post-insert on $relname";
} else {
- MULTICREATE_DEBUG and warn "MC $self post-inserting $obj";
- $obj->insert();
+ MULTICREATE_DEBUG and warn "MC $self re-creating $relname $obj";
+ my $re = $self->result_source
+ ->related_source($relname)
+ ->resultset
+ ->create($them);
+ %{$obj} = %{$re};
+ MULTICREATE_DEBUG and warn "MC $self new $relname $obj";
}
+ } else {
+ MULTICREATE_DEBUG and warn "MC $self post-inserting $obj";
+ $obj->insert();
}
}
}
- delete $self->{_ignore_at_insert};
- $rollback_guard->commit;
}
$self->in_storage(1);
- undef $self->{_orig_ident};
+ delete $self->{_orig_ident};
+ delete $self->{_ignore_at_insert};
+ $rollback_guard->commit if $rollback_guard;
+
return $self;
}
Indicates whether the object exists as a row in the database or
not. This is set to true when L<DBIx::Class::ResultSet/find>,
L<DBIx::Class::ResultSet/create> or L<DBIx::Class::ResultSet/insert>
-are used.
+are used.
Creating a row object using L<DBIx::Class::ResultSet/new>, or calling
L</delete> on one, sets it to false.
The object is still perfectly usable, but L</in_storage> will
now return 0 and the object must be reinserted using L</insert>
-before it can be used to L</update> the row again.
+before it can be used to L</update> the row again.
If you delete an object in a class with a C<has_many> relationship, an
attempt is made to delete all the related objects as well. To turn
this behaviour off, pass C<< cascade_delete => 0 >> in the C<$attr>
hashref of the relationship, see L<DBIx::Class::Relationship>. Any
database-level cascade or restrict will take precedence over a
-DBIx-Class-based cascading delete.
+DBIx-Class-based cascading delete.
If you delete an object within a txn_do() (see L<DBIx::Class::Storage/txn_do>)
and the transaction subsequently fails, the row object will remain marked as
return $self->{_column_data}{$column} if exists $self->{_column_data}{$column};
if (exists $self->{_inflated_column}{$column}) {
return $self->store_column($column,
- $self->_deflated_column($column, $self->{_inflated_column}{$column}));
+ $self->_deflated_column($column, $self->{_inflated_column}{$column}));
}
$self->throw_exception( "No such column '${column}'" ) unless $self->has_column($column);
return undef;
Returns all loaded column data as a hash, containing raw values. To
get just one value for a particular column, use L</get_column>.
+See L</get_inflated_columns> to get the inflated values.
+
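+For example (C<$artist> being any row object):
+
+  my %raw      = $artist->get_columns;            # raw values as stored
+  my %inflated = $artist->get_inflated_columns;   # e.g. DateTime objects
+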
=cut
sub get_columns {
Throws an exception if the column does not exist.
Marks a column as having been changed regardless of whether it has
-really changed.
+really changed.
=cut
sub make_column_dirty {
$self->throw_exception( "No such column '${column}'" )
unless exists $self->{_column_data}{$column} || $self->has_column($column);
+
+ # the entire clean/dirty code relies on exists, not on true/false
+ return 1 if exists $self->{_dirty_columns}{$column};
+
$self->{_dirty_columns}{$column} = 1;
+
+ # if we are just now making the column dirty, and if there is an inflated
+ # value, force it over the deflated one
+ if (exists $self->{_inflated_column}{$column}) {
+ $self->store_column($column,
+ $self->_deflated_column(
+ $column, $self->{_inflated_column}{$column}
+ )
+ );
+ }
}
=head2 get_inflated_columns
my $old_value = $self->get_column($column);
$self->store_column($column, $new_value);
- $self->{_dirty_columns}{$column} = 1
- if (defined $old_value xor defined $new_value) || (defined $old_value && $old_value ne $new_value);
+
+ my $dirty;
+  if (!$self->in_storage) { # no point tracking dirtiness on uninserted data
+ $dirty = 1;
+ }
+ elsif (defined $old_value xor defined $new_value) {
+ $dirty = 1;
+ }
+ elsif (not defined $old_value) { # both undef
+ $dirty = 0;
+ }
+ elsif ($old_value eq $new_value) {
+ $dirty = 0;
+ }
+ else { # do a numeric comparison if datatype allows it
+ my $colinfo = $self->column_info ($column);
+
+ # cache for speed (the object may *not* have a resultsource instance)
+ if (not defined $colinfo->{is_numeric} && $self->_source_handle) {
+ $colinfo->{is_numeric} =
+ $self->result_source->schema->storage->is_datatype_numeric ($colinfo->{data_type})
+ ? 1
+ : 0
+ ;
+ }
+
+ if ($colinfo->{is_numeric}) {
+ $dirty = $old_value != $new_value;
+ }
+ else {
+ $dirty = 1;
+ }
+ }
+
+ # sadly the update code just checks for keys, not for their value
+ $self->{_dirty_columns}{$column} = 1 if $dirty;
# XXX clear out the relation cache for this column
delete $self->{related_resultsets}{$column};
$row->set_columns({ $col => $val, ... });
-=over
+=over
=item Arguments: \%columndata
=back
Sets more than one column value at once. Any inflated values are
-deflated and the raw values stored.
+deflated and the raw values stored.
Any related values passed as Row objects, using the relation name as a
key, are reduced to the appropriate foreign key values and stored. If
}
}
}
- $self->set_columns($upd);
+ $self->set_columns($upd);
}
=head2 copy
Inserts a new row into the database, as a copy of the original
object. If a hashref of replacement data is supplied, these will take
-precedence over data in the original.
+precedence over data in the original. Also any columns which have
+the L<column info attribute|DBIx::Class::ResultSource/add_columns>
+C<< is_auto_increment => 1 >> are explicitly removed before the copy,
+so that the database can insert its own autoincremented values into
+the new object.
-If the row has related objects in a
-L<DBIx::Class::Relationship/has_many> then those objects may be copied
-too depending on the L<cascade_copy|DBIx::Class::Relationship>
-relationship attribute.
+Relationships will be followed by the copy procedure B<only> if the
+relationship specifies a true value for its
+L<cascade_copy|DBIx::Class::Relationship::Base> attribute. C<cascade_copy>
+is set by default on C<has_many> relationships and unset on all others.
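+
+For example (a sketch, assuming a C<$cd> row with a C<tracks> has_many
+relationship):
+
+  # copies the CD row and, because cascade_copy is true by default for
+  # has_many, its related tracks; the supplied title overrides the original
+  my $cd_copy = $cd->copy({ title => 'Re-release' });
+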
=cut
$new->set_inflated_columns($changes);
$new->insert;
- # Its possible we'll have 2 relations to the same Source. We need to make
+  # It's possible we'll have 2 relations to the same Source. We need to make
  # sure we don't try to insert the same row twice else we'll violate unique
# constraints
my $rels_copied = {};
my $rel_info = $self->result_source->relationship_info($rel);
next unless $rel_info->{attrs}{cascade_copy};
-
- my $resolved = $self->result_source->resolve_condition(
+
+ my $resolved = $self->result_source->_resolve_condition(
$rel_info->{cond}, $rel, $new
);
$copied->{$id_str} = 1;
my $rel_copy = $related->copy($resolved);
}
-
+
}
return $new;
}
Reblessing can also be done more easily by setting C<result_class> in
your Result class. See L<DBIx::Class::ResultSource/result_class>.
+Different types of results can also be created from a particular
+L<DBIx::Class::ResultSet>, see L<DBIx::Class::ResultSet/result_class>.
+
=cut
sub inflate_result {
my $new = {
_source_handle => $source_handle,
_column_data => $me,
- _in_storage => 1
};
bless $new, (ref $class || $class);
unless $pre_source;
if (ref($pre_val->[0]) eq 'ARRAY') { # multi
my @pre_objects;
- foreach my $pre_rec (@$pre_val) {
- unless ($pre_source->primary_columns == grep { exists $pre_rec->[0]{$_}
- and defined $pre_rec->[0]{$_} } $pre_source->primary_columns) {
- next;
+
+ for my $me_pref (@$pre_val) {
+
+ # the collapser currently *could* return bogus elements with all
+ # columns set to undef
+ my $has_def;
+ for (values %{$me_pref->[0]}) {
+ if (defined $_) {
+ $has_def++;
+ last;
+ }
}
- push(@pre_objects, $pre_source->result_class->inflate_result(
- $pre_source, @{$pre_rec}));
+ next unless $has_def;
+
+ push @pre_objects, $pre_source->result_class->inflate_result(
+ $pre_source, @$me_pref
+ );
}
+
$new->related_resultset($pre)->set_cache(\@pre_objects);
} elsif (defined $pre_val->[0]) {
my $fetched;
$fetched = $pre_source->result_class->inflate_result(
$pre_source, @{$pre_val});
}
- $new->related_resultset($pre)->set_cache([ $fetched ]);
my $accessor = $source->relationship_info($pre)->{attrs}{accessor};
$class->throw_exception("No accessor for prefetched $pre")
unless defined $accessor;
} elsif ($accessor eq 'filter') {
$new->{_inflated_column}{$pre} = $fetched;
} else {
- $class->throw_exception("Prefetch not supported with accessor '$accessor'");
+ $class->throw_exception("Implicit prefetch (via select/columns) not supported with accessor '$accessor'");
}
+ $new->related_resultset($pre)->set_cache([ $fetched ]);
}
}
+
+ $new->in_storage (1);
return $new;
}
my $self = shift @_;
my $attrs = shift @_;
my $resultset = $self->result_source->resultset;
-
+
if(defined $attrs) {
- $resultset = $resultset->search(undef, $attrs);
+ $resultset = $resultset->search(undef, $attrs);
}
-
+
return $resultset->find($self->{_orig_ident} || $self->ident_condition);
}
+=head2 discard_changes ($attrs)
+
+Re-selects the row from the database, losing any changes that had
+been made.
+
+This method can also be used to refresh from storage, retrieving any
+changes made since the row was last read from storage.
+
+$attrs is expected to be a hashref of attributes suitable for passing as the
+second argument to $resultset->search($cond, $attrs);
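+
+For example:
+
+  # plain refresh from storage
+  $row->discard_changes;
+
+  # the same, passing resultset attributes through (force_pool is only
+  # meaningful under replicated storage, where it is already the default)
+  $row->discard_changes({ force_pool => 'master' });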
+
+=cut
+
+sub discard_changes {
+ my ($self, $attrs) = @_;
+ delete $self->{_dirty_columns};
+ return unless $self->in_storage; # Don't reload if we aren't real!
+
+ # add a replication default to read from the master only
+ $attrs = { force_pool => 'master', %{$attrs||{}} };
+
+ if( my $current_storage = $self->get_from_storage($attrs)) {
+
+ # Set $self to the current.
+ %$self = %$current_storage;
+
+ # Avoid a possible infinite loop with
+ # sub DESTROY { $_[0]->discard_changes }
+ bless $current_storage, 'Do::Not::Exist';
+
+ return $self;
+ }
+ else {
+ $self->in_storage(0);
+ return $self;
+ }
+}
+
+
=head2 throw_exception
See L<DBIx::Class::Schema/throw_exception>.
changes made since the row was last read from storage. Actually
implemented in L<DBIx::Class::PK>
+Note: If you are using L<DBIx::Class::Storage::DBI::Replicated> as your
+storage, please keep in mind that if you L</discard_changes> on a row that you
+just updated or created, you should wrap the entire bit inside a transaction.
+Otherwise you run the risk that you insert or update to the master database
+but read from a replicant database that has not yet been updated from the
+master. This can lead to unexpected results.
+
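+A minimal sketch of such a wrapper (the column name is illustrative):
+
+  $schema->txn_do(sub {
+    $row->update({ rank => 42 });
+
+    # within the transaction this read is serviced by the master
+    $row->discard_changes;
+  });
+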
=cut
1;
--- /dev/null
+package # Hide from PAUSE
+ DBIx::Class::SQLAHacks;
+
+# This module is a subclass of SQL::Abstract::Limit and includes a number
+# of DBIC-specific workarounds, not yet suitable for inclusion into the
+# SQLA core
+
+use base qw/SQL::Abstract::Limit/;
+use strict;
+use warnings;
+use Carp::Clan qw/^DBIx::Class|^SQL::Abstract/;
+
+BEGIN {
+ # reinstall the carp()/croak() functions imported into SQL::Abstract
+ # as Carp and Carp::Clan do not like each other much
+ no warnings qw/redefine/;
+ no strict qw/refs/;
+ for my $f (qw/carp croak/) {
+
+ my $orig = \&{"SQL::Abstract::$f"};
+ *{"SQL::Abstract::$f"} = sub {
+
+ local $Carp::CarpLevel = 1; # even though Carp::Clan ignores this, $orig will not
+
+ if (Carp::longmess() =~ /DBIx::Class::SQLAHacks::[\w]+ .+? called \s at/x) {
+ __PACKAGE__->can($f)->(@_);
+ }
+ else {
+ $orig->(@_);
+ }
+ }
+ }
+}
+
+
+# Tries to determine limit dialect.
+#
+sub new {
+ my $self = shift->SUPER::new(@_);
+
+ # This prevents the caching of $dbh in S::A::L, I believe
+ # If limit_dialect is a ref (like a $dbh), go ahead and replace
+ # it with what it resolves to:
+ $self->{limit_dialect} = $self->_find_syntax($self->{limit_dialect})
+ if ref $self->{limit_dialect};
+
+ $self;
+}
+
+# Some databases (sqlite) do not handle multiple parenthesis
+# around in/between arguments. A tentative x IN ( (1, 2 ,3) )
+# is interpreted as x IN 1 or something similar.
+#
+# Since we currently do not have access to the SQLA AST, resort
+# to barbaric mutilation of any SQL supplied in literal form
+sub _strip_outer_paren {
+ my ($self, $arg) = @_;
+
+ return $self->_SWITCH_refkind ($arg, {
+ ARRAYREFREF => sub {
+ $$arg->[0] = __strip_outer_paren ($$arg->[0]);
+ return $arg;
+ },
+ SCALARREF => sub {
+ return \__strip_outer_paren( $$arg );
+ },
+ FALLBACK => sub {
+ return $arg
+ },
+ });
+}
+
+sub __strip_outer_paren {
+ my $sql = shift;
+
+ if ($sql and not ref $sql) {
+ while ($sql =~ /^ \s* \( (.*) \) \s* $/x ) {
+ $sql = $1;
+ }
+ }
+
+ return $sql;
+}
+
+sub _where_field_IN {
+ my ($self, $lhs, $op, $rhs) = @_;
+ $rhs = $self->_strip_outer_paren ($rhs);
+ return $self->SUPER::_where_field_IN ($lhs, $op, $rhs);
+}
+
+sub _where_field_BETWEEN {
+ my ($self, $lhs, $op, $rhs) = @_;
+ $rhs = $self->_strip_outer_paren ($rhs);
+ return $self->SUPER::_where_field_BETWEEN ($lhs, $op, $rhs);
+}
+
+# Slow but ANSI standard Limit/Offset support. DB2 uses this
+sub _RowNumberOver {
+ my ($self, $sql, $order, $rows, $offset ) = @_;
+
+ $offset += 1;
+ my $last = $rows + $offset - 1;
+ my ( $order_by ) = $self->_order_by( $order );
+
+ $sql = <<"SQL";
+SELECT * FROM
+(
+ SELECT Q1.*, ROW_NUMBER() OVER( ) AS ROW_NUM FROM (
+ $sql
+ $order_by
+ ) Q1
+) Q2
+WHERE ROW_NUM BETWEEN $offset AND $last
+
+SQL
+
+ return $sql;
+}
+
+# Crappy Top based Limit/Offset support. MSSQL uses this currently,
+# but may have to switch to RowNumberOver one day
+sub _Top {
+ my ( $self, $sql, $order, $rows, $offset ) = @_;
+
+ # mangle the input sql so it can be properly aliased in the outer queries
+ $sql =~ s/^ \s* SELECT \s+ (.+?) \s+ (?=FROM)//ix
+ or croak "Unrecognizable SELECT: $sql";
+ my $sql_select = $1;
+ my @sql_select = split (/\s*,\s*/, $sql_select);
+
+ # we can't support subqueries (in fact MSSQL can't) - croak
+ if (@sql_select != @{$self->{_dbic_rs_attrs}{select}}) {
+ croak (sprintf (
+ 'SQL SELECT did not parse cleanly - retrieved %d comma separated elements, while '
+      . 'the resultset select attribute contains %d elements: %s',
+ scalar @sql_select,
+ scalar @{$self->{_dbic_rs_attrs}{select}},
+ $sql_select,
+ ));
+ }
+
+ my $name_sep = $self->name_sep || '.';
+ my $esc_name_sep = "\Q$name_sep\E";
+ my $col_re = qr/ ^ (?: (.+) $esc_name_sep )? ([^$esc_name_sep]+) $ /x;
+
+ my $rs_alias = $self->{_dbic_rs_attrs}{alias};
+ my $quoted_rs_alias = $self->_quote ($rs_alias);
+
+ # construct the new select lists, rename(alias) some columns if necessary
+ my (@outer_select, @inner_select, %seen_names, %col_aliases, %outer_col_aliases);
+
+ for (@{$self->{_dbic_rs_attrs}{select}}) {
+ next if ref $_;
+ my ($table, $orig_colname) = ( $_ =~ $col_re );
+ next unless $table;
+ $seen_names{$orig_colname}++;
+ }
+
+ for my $i (0 .. $#sql_select) {
+
+ my $colsel_arg = $self->{_dbic_rs_attrs}{select}[$i];
+ my $colsel_sql = $sql_select[$i];
+
+ # this may or may not work (in case of a scalarref or something)
+ my ($table, $orig_colname) = ( $colsel_arg =~ $col_re );
+
+ my $quoted_alias;
+ # do not attempt to understand non-scalar selects - alias numerically
+ if (ref $colsel_arg) {
+ $quoted_alias = $self->_quote ('column_' . (@inner_select + 1) );
+ }
+ # column name seen more than once - alias it
+ elsif ($orig_colname &&
+ ($seen_names{$orig_colname} && $seen_names{$orig_colname} > 1) ) {
+ $quoted_alias = $self->_quote ("${table}__${orig_colname}");
+ }
+
+ # we did rename - make a record and adjust
+ if ($quoted_alias) {
+ # alias inner
+ push @inner_select, "$colsel_sql AS $quoted_alias";
+
+ # push alias to outer
+ push @outer_select, $quoted_alias;
+
+ # Any aliasing accumulated here will be considered
+ # both for inner and outer adjustments of ORDER BY
+ $self->__record_alias (
+ \%col_aliases,
+ $quoted_alias,
+ $colsel_arg,
+ $table ? $orig_colname : undef,
+ );
+ }
+
+ # otherwise just leave things intact inside, and use the abbreviated one outside
+ # (as we do not have table names anymore)
+ else {
+ push @inner_select, $colsel_sql;
+
+ my $outer_quoted = $self->_quote ($orig_colname); # it was not a duplicate so should just work
+ push @outer_select, $outer_quoted;
+ $self->__record_alias (
+ \%outer_col_aliases,
+ $outer_quoted,
+ $colsel_arg,
+ $table ? $orig_colname : undef,
+ );
+ }
+ }
+
+ my $outer_select = join (', ', @outer_select );
+ my $inner_select = join (', ', @inner_select );
+
+ %outer_col_aliases = (%outer_col_aliases, %col_aliases);
+
+ # deal with order
+ croak '$order supplied to SQLAHacks limit emulators must be a hash'
+ if (ref $order ne 'HASH');
+
+ $order = { %$order }; #copy
+
+ my $req_order = $order->{order_by};
+
+ # examine normalized version, collapses nesting
+ my $limit_order;
+ if (scalar $self->_order_by_chunks ($req_order)) {
+ $limit_order = $req_order;
+ }
+ else {
+ $limit_order = [ map
+ { join ('', $rs_alias, $name_sep, $_ ) }
+ ( $self->{_dbic_rs_attrs}{_source_handle}->resolve->primary_columns )
+ ];
+ }
+
+ my ( $order_by_inner, $order_by_outer ) = $self->_order_directions($limit_order);
+ my $order_by_requested = $self->_order_by ($req_order);
+
+ # generate the rest
+ delete $order->{order_by};
+ my $grpby_having = $self->_order_by ($order);
+
+ # short circuit for counts - the ordering complexity is needless
+ if ($self->{_dbic_rs_attrs}{-for_count_only}) {
+ return "SELECT TOP $rows $inner_select $sql $grpby_having $order_by_outer";
+ }
+
+ # we can't really adjust the order_by columns, as introspection is lacking
+ # resort to simple substitution
+ for my $col (keys %outer_col_aliases) {
+ for ($order_by_requested, $order_by_outer) {
+ $_ =~ s/\s+$col\s+/ $outer_col_aliases{$col} /g;
+ }
+ }
+ for my $col (keys %col_aliases) {
+ $order_by_inner =~ s/\s+$col\s+/ $col_aliases{$col} /g;
+ }
+
+
+ my $inner_lim = $rows + $offset;
+
+ $sql = "SELECT TOP $inner_lim $inner_select $sql $grpby_having $order_by_inner";
+
+ if ($offset) {
+ $sql = <<"SQL";
+
+ SELECT TOP $rows $outer_select FROM
+ (
+ $sql
+ ) $quoted_rs_alias
+ $order_by_outer
+SQL
+
+ }
+
+ if ($order_by_requested) {
+ $sql = <<"SQL";
+
+ SELECT $outer_select FROM
+ ( $sql ) $quoted_rs_alias
+ $order_by_requested
+SQL
+
+ }
+
+ $sql =~ s/\s*\n\s*/ /g; # parsing out multiline statements is harder than a single line
+ return $sql;
+}
+
+# action at a distance to shorten Top code above
+sub __record_alias {
+ my ($self, $register, $alias, $fqcol, $col) = @_;
+
+ # record qualified name
+ $register->{$fqcol} = $alias;
+ $register->{$self->_quote($fqcol)} = $alias;
+
+ return unless $col;
+
+ # record unqualified name, undef (no adjustment) if a duplicate is found
+ if (exists $register->{$col}) {
+ $register->{$col} = undef;
+ }
+ else {
+ $register->{$col} = $alias;
+ }
+
+ $register->{$self->_quote($col)} = $register->{$col};
+}
+
+
+
+# While we're at it, this should make LIMIT queries more efficient,
+# without digging into things too deeply
+sub _find_syntax {
+ my ($self, $syntax) = @_;
+ return $self->{_cached_syntax} ||= $self->SUPER::_find_syntax($syntax);
+}
+
+my $for_syntax = {
+ update => 'FOR UPDATE',
+ shared => 'FOR SHARE',
+};
+# Quotes table names, handles "limit" dialects (e.g. where rownum between x and
+# y), supports SELECT ... FOR UPDATE and SELECT ... FOR SHARE.
+sub select {
+ my ($self, $table, $fields, $where, $order, @rest) = @_;
+
+ $self->{"${_}_bind"} = [] for (qw/having from order/);
+
+ if (not ref($table) or ref($table) eq 'SCALAR') {
+ $table = $self->_quote($table);
+ }
+
+ local $self->{rownum_hack_count} = 1
+ if (defined $rest[0] && $self->{limit_dialect} eq 'RowNum');
+ @rest = (-1) unless defined $rest[0];
+ croak "LIMIT 0 Does Not Compute" if $rest[0] == 0;
+ # and anyway, SQL::Abstract::Limit will cause a barf if we don't first
+ my ($sql, @where_bind) = $self->SUPER::select(
+ $table, $self->_recurse_fields($fields), $where, $order, @rest
+ );
+ if (my $for = delete $self->{_dbic_rs_attrs}{for}) {
+ $sql .= " $for_syntax->{$for}" if $for_syntax->{$for};
+ }
+
+ return wantarray ? ($sql, @{$self->{from_bind}}, @where_bind, @{$self->{having_bind}}, @{$self->{order_bind}} ) : $sql;
+}
+
+# Quotes table names, and handles default inserts
+sub insert {
+ my $self = shift;
+ my $table = shift;
+ $table = $self->_quote($table);
+
+ # SQLA will emit INSERT INTO $table ( ) VALUES ( )
+ # which is sadly understood only by MySQL. Change default behavior here,
+ # until SQLA2 comes with proper dialect support
+ if (! $_[0] or (ref $_[0] eq 'HASH' and !keys %{$_[0]} ) ) {
+ return "INSERT INTO ${table} DEFAULT VALUES"
+ }
+
+ $self->SUPER::insert($table, @_);
+}
+
+# Just quotes table names.
+sub update {
+ my $self = shift;
+ my $table = shift;
+ $table = $self->_quote($table);
+ $self->SUPER::update($table, @_);
+}
+
+# Just quotes table names.
+sub delete {
+ my $self = shift;
+ my $table = shift;
+ $table = $self->_quote($table);
+ $self->SUPER::delete($table, @_);
+}
+
+sub _emulate_limit {
+ my $self = shift;
+ if ($_[3] == -1) {
+ return $_[1].$self->_order_by($_[2]);
+ } else {
+ return $self->SUPER::_emulate_limit(@_);
+ }
+}
+
+sub _recurse_fields {
+ my ($self, $fields, $params) = @_;
+ my $ref = ref $fields;
+ return $self->_quote($fields) unless $ref;
+ return $$fields if $ref eq 'SCALAR';
+
+ if ($ref eq 'ARRAY') {
+ return join(', ', map {
+ $self->_recurse_fields($_)
+ .(exists $self->{rownum_hack_count} && !($params && $params->{no_rownum_hack})
+ ? ' AS col'.$self->{rownum_hack_count}++
+ : '')
+ } @$fields);
+ }
+ elsif ($ref eq 'HASH') {
+ my %hash = %$fields;
+
+ my $as = delete $hash{-as}; # if supplied
+
+ my ($func, $args) = each %hash;
+ delete $hash{$func};
+
+ if (lc ($func) eq 'distinct' && ref $args eq 'ARRAY' && @$args > 1) {
+ croak (
+ 'The select => { distinct => ... } syntax is not supported for multiple columns.'
+ .' Instead please use { group_by => [ qw/' . (join ' ', @$args) . '/ ] }'
+ .' or { select => [ qw/' . (join ' ', @$args) . '/ ], distinct => 1 }'
+ );
+ }
+
+ my $select = sprintf ('%s( %s )%s',
+ $self->_sqlcase($func),
+ $self->_recurse_fields($args),
+ $as
+ ? sprintf (' %s %s', $self->_sqlcase('as'), $as)
+ : ''
+ );
+
+ # there should be nothing left
+ if (keys %hash) {
+ croak "Malformed select argument - too many keys in hash: " . join (',', keys %$fields );
+ }
+
+ return $select;
+ }
+ # Is the second check absolutely necessary?
+ elsif ( $ref eq 'REF' and ref($$fields) eq 'ARRAY' ) {
+ return $self->_fold_sqlbind( $fields );
+ }
+ else {
+ croak($ref . qq{ unexpected in _recurse_fields()})
+ }
+}
+
+sub _order_by {
+ my ($self, $arg) = @_;
+
+ if (ref $arg eq 'HASH' and keys %$arg and not grep { $_ =~ /^-(?:desc|asc)/i } keys %$arg ) {
+
+ my $ret = '';
+
+ if (my $g = $self->_recurse_fields($arg->{group_by}, { no_rownum_hack => 1 }) ) {
+ $ret = $self->_sqlcase(' group by ') . $g;
+ }
+
+ if (defined $arg->{having}) {
+ my ($frag, @bind) = $self->_recurse_where($arg->{having});
+ push(@{$self->{having_bind}}, @bind);
+ $ret .= $self->_sqlcase(' having ').$frag;
+ }
+
+ if (defined $arg->{order_by}) {
+ my ($frag, @bind) = $self->SUPER::_order_by($arg->{order_by});
+ push(@{$self->{order_bind}}, @bind);
+ $ret .= $frag;
+ }
+
+ return $ret;
+ }
+ else {
+ my ($sql, @bind) = $self->SUPER::_order_by ($arg);
+ push(@{$self->{order_bind}}, @bind);
+ return $sql;
+ }
+}
+
+sub _order_directions {
+ my ($self, $order) = @_;
+
+ # strip bind values - none of the current _order_directions users support them
+ return $self->SUPER::_order_directions( [ map
+ { ref $_ ? $_->[0] : $_ }
+ $self->_order_by_chunks ($order)
+ ]);
+}
+
+sub _table {
+ my ($self, $from) = @_;
+ if (ref $from eq 'ARRAY') {
+ return $self->_recurse_from(@$from);
+ } elsif (ref $from eq 'HASH') {
+ return $self->_make_as($from);
+ } else {
+ return $from; # would love to quote here but _table ends up getting called
+ # twice during an ->select without a limit clause due to
+ # the way S::A::Limit->select works. should maybe consider
+ # bypassing this and doing S::A::select($self, ...) in
+ # our select method above. meantime, quoting shims have
+ # been added to select/insert/update/delete here
+ }
+}
+
+sub _recurse_from {
+ my ($self, $from, @join) = @_;
+ my @sqlf;
+ push(@sqlf, $self->_make_as($from));
+ foreach my $j (@join) {
+ my ($to, $on) = @$j;
+
+ # check whether a join type exists
+ my $join_clause = '';
+ my $to_jt = ref($to) eq 'ARRAY' ? $to->[0] : $to;
+ if (ref($to_jt) eq 'HASH' and exists($to_jt->{-join_type})) {
+ $join_clause = ' '.uc($to_jt->{-join_type}).' JOIN ';
+ } else {
+ $join_clause = ' JOIN ';
+ }
+ push(@sqlf, $join_clause);
+
+ if (ref $to eq 'ARRAY') {
+ push(@sqlf, '(', $self->_recurse_from(@$to), ')');
+ } else {
+ push(@sqlf, $self->_make_as($to));
+ }
+ push(@sqlf, ' ON ', $self->_join_condition($on));
+ }
+ return join('', @sqlf);
+}
+
+sub _fold_sqlbind {
+ my ($self, $sqlbind) = @_;
+
+ my @sqlbind = @$$sqlbind; # copy
+ my $sql = shift @sqlbind;
+ push @{$self->{from_bind}}, @sqlbind;
+
+ return $sql;
+}
+
+sub _make_as {
+ my ($self, $from) = @_;
+ return join(' ', map { (ref $_ eq 'SCALAR' ? $$_
+ : ref $_ eq 'REF' ? $self->_fold_sqlbind($_)
+ : $self->_quote($_))
+ } reverse each %{$self->_skip_options($from)});
+}
+
+sub _skip_options {
+ my ($self, $hash) = @_;
+ my $clean_hash = {};
+ $clean_hash->{$_} = $hash->{$_}
+ for grep {!/^-/} keys %$hash;
+ return $clean_hash;
+}
+
+sub _join_condition {
+ my ($self, $cond) = @_;
+ if (ref $cond eq 'HASH') {
+ my %j;
+ for (keys %$cond) {
+ my $v = $cond->{$_};
+ if (ref $v) {
+ croak (ref($v) . qq{ reference arguments are not supported in JOINS - try using \"..." instead'})
+ if ref($v) ne 'SCALAR';
+ $j{$_} = $v;
+ }
+ else {
+ my $x = '= '.$self->_quote($v); $j{$_} = \$x;
+ }
+ };
+ return scalar($self->_recurse_where(\%j));
+ } elsif (ref $cond eq 'ARRAY') {
+ return join(' OR ', map { $self->_join_condition($_) } @$cond);
+ } else {
+ die "Can't handle this yet!";
+ }
+}
+
+sub _quote {
+ my ($self, $label) = @_;
+ return '' unless defined $label;
+ return $$label if ref($label) eq 'SCALAR';
+ return "*" if $label eq '*';
+ return $label unless $self->{quote_char};
+ if(ref $self->{quote_char} eq "ARRAY"){
+ return $self->{quote_char}->[0] . $label . $self->{quote_char}->[1]
+ if !defined $self->{name_sep};
+ my $sep = $self->{name_sep};
+ return join($self->{name_sep},
+ map { $self->{quote_char}->[0] . $_ . $self->{quote_char}->[1] }
+ split(/\Q$sep\E/,$label));
+ }
+ return $self->SUPER::_quote($label);
+}
+
+sub limit_dialect {
+ my $self = shift;
+ $self->{limit_dialect} = shift if @_;
+ return $self->{limit_dialect};
+}
+
+# Set to an array-ref to specify separate left and right quotes for table names.
+# A single scalar is equivalent to [ $char, $char ]
+sub quote_char {
+ my $self = shift;
+ $self->{quote_char} = shift if @_;
+ return $self->{quote_char};
+}
+
+# Character separating quoted table names.
+sub name_sep {
+ my $self = shift;
+ $self->{name_sep} = shift if @_;
+ return $self->{name_sep};
+}
+
+1;
--- /dev/null
+package # Hide from PAUSE
+ DBIx::Class::SQLAHacks::MSSQL;
+
+use base qw( DBIx::Class::SQLAHacks );
+use Carp::Clan qw/^DBIx::Class|^SQL::Abstract/;
+
+#
+# MSSQL is retarded wrt TOP (crappy limit) and ordering.
+# One needs to add a TOP to *all* ordered subqueries, if
+# TOP has been used in the statement at least once.
+# Do it here.
+#
+sub select {
+ my $self = shift;
+
+ my ($sql, @bind) = $self->SUPER::select (@_);
+
+ # ordering was requested and there are at least 2 SELECT/FROM pairs
+ # (thus subquery), and there is no TOP specified
+ if (
+ $sql =~ /\bSELECT\b .+? \bFROM\b .+? \bSELECT\b .+? \bFROM\b/isx
+ &&
+ $sql !~ /^ \s* SELECT \s+ TOP \s+ \d+ /xi
+ &&
+ scalar $self->_order_by_chunks ($_[3]->{order_by})
+ ) {
+ $sql =~ s/^ \s* SELECT \s/SELECT TOP 100 PERCENT /xi;
+ }
+
+ return wantarray ? ($sql, @bind) : $sql;
+}
+
+1;
--- /dev/null
+package # Hide from PAUSE
+ DBIx::Class::SQLAHacks::MySQL;
+
+use base qw( DBIx::Class::SQLAHacks );
+use Carp::Clan qw/^DBIx::Class|^SQL::Abstract/;
+
+#
+# MySQL does not understand the standard INSERT INTO $table DEFAULT VALUES
+# Adjust SQL here instead
+#
+sub insert {
+ my $self = shift;
+
+ my $table = $_[0];
+ $table = $self->_quote($table);
+
+ if (! $_[1] or (ref $_[1] eq 'HASH' and !keys %{$_[1]} ) ) {
+ return "INSERT INTO ${table} () VALUES ()"
+ }
+
+ return $self->SUPER::insert (@_);
+}
+
+1;
--- /dev/null
+package # Hide from PAUSE
+ DBIx::Class::SQLAHacks::OracleJoins;
+
+use base qw( DBIx::Class::SQLAHacks );
+use Carp::Clan qw/^DBIx::Class|^SQL::Abstract/;
+
+sub select {
+ my ($self, $table, $fields, $where, $order, @rest) = @_;
+
+ if (ref($table) eq 'ARRAY') {
+ $where = $self->_oracle_joins($where, @{ $table });
+ }
+
+ return $self->SUPER::select($table, $fields, $where, $order, @rest);
+}
+
+sub _recurse_from {
+ my ($self, $from, @join) = @_;
+
+ my @sqlf = $self->_make_as($from);
+
+ foreach my $j (@join) {
+ my ($to, $on) = @{ $j };
+
+ if (ref $to eq 'ARRAY') {
+ push (@sqlf, $self->_recurse_from(@{ $to }));
+ }
+ else {
+ push (@sqlf, $self->_make_as($to));
+ }
+ }
+
+ return join q{, }, @sqlf;
+}
+
+sub _oracle_joins {
+ my ($self, $where, $from, @join) = @_;
+ my $join_where = {};
+ $self->_recurse_oracle_joins($join_where, $from, @join);
+ if (keys %$join_where) {
+ if (!defined($where)) {
+ $where = $join_where;
+ } else {
+ if (ref($where) eq 'ARRAY') {
+ $where = { -or => $where };
+ }
+ $where = { -and => [ $join_where, $where ] };
+ }
+ }
+ return $where;
+}
+
+sub _recurse_oracle_joins {
+ my ($self, $where, $from, @join) = @_;
+
+ foreach my $j (@join) {
+ my ($to, $on) = @{ $j };
+
+ if (ref $to eq 'ARRAY') {
+ $self->_recurse_oracle_joins($where, @{ $to });
+ }
+
+ my $to_jt = ref $to eq 'ARRAY' ? $to->[0] : $to;
+ my $left_join = q{};
+ my $right_join = q{};
+
+ if (ref $to_jt eq 'HASH' and exists $to_jt->{-join_type}) {
+ #TODO: Support full outer joins -- this would happen much earlier in
+ #the sequence since oracle 8's full outer join syntax is best
+ #described as INSANE.
+ croak "Can't handle full outer joins in Oracle 8 yet!\n"
+ if $to_jt->{-join_type} =~ /full/i;
+
+ $left_join = q{(+)} if $to_jt->{-join_type} =~ /left/i
+ && $to_jt->{-join_type} !~ /inner/i;
+
+ $right_join = q{(+)} if $to_jt->{-join_type} =~ /right/i
+ && $to_jt->{-join_type} !~ /inner/i;
+ }
+
+ foreach my $lhs (keys %{ $on }) {
+ $where->{$lhs . $left_join} = \"= $on->{ $lhs }$right_join";
+ }
+ }
+}
+
+1;
+
+=pod
+
+=head1 NAME
+
+DBIx::Class::SQLAHacks::OracleJoins - Pre-ANSI Joins-via-Where-Clause Syntax
+
+=head1 PURPOSE
+
+This module was originally written to support Oracle < 9i where ANSI joins
+weren't supported at all, but became the module for Oracle >= 8 because
+Oracle's optimising of ANSI joins is horrible. (See:
+http://scsys.co.uk:8001/7495)
+
+=head1 SYNOPSIS
+
+Not intended for use directly; used as the sql_maker_class for schemas and components.
+
+=head1 DESCRIPTION
+
+Implements pre-ANSI joins specified in the where clause. Instead of:
+
+ SELECT x FROM y JOIN z ON y.id = z.id
+
+It will write:
+
+ SELECT x FROM y, z WHERE y.id = z.id
+
+It should properly support left and right joins. Full outer joins are not
+possible, because Oracle requires the entire query to be written as a union
+of a left and a right join, and by the time this module is called to create
+the where clause and table list of the SQL query, it is already too late.
+
+=head1 METHODS
+
+=over
+
+=item select ($\@$;$$@)
+
+Replaces DBIx::Class::SQLAHacks's select() method, calling _oracle_joins()
+to fold the join conditions into the where clause before calling
+SUPER::select().
+
+=item _recurse_from ($$\@)
+
+Recursive subroutine that builds the table list.
+
+=item _oracle_joins ($$$@)
+
+Creates the left/right relationship in the where query.
+
+=back
+
+=head1 BUGS
+
+Does not support full outer joins.
+Probably lots more.
+
+=head1 SEE ALSO
+
+=over
+
+=item L<DBIx::Class::Storage::DBI::Oracle::WhereJoins> - Storage class using this
+
+=item L<DBIx::Class::SQLAHacks> - Parent module
+
+=item L<DBIx::Class> - Duh
+
+=back
+
+=head1 AUTHOR
+
+Justin Wheeler C<< <jwheeler@datademons.com> >>
+
+=head1 CONTRIBUTORS
+
+David Jack Olrik C<< <djo@cpan.org> >>
+
+=head1 LICENSE
+
+This module is licensed under the same terms as Perl itself.
+
+=cut
+
use Scalar::Util qw/weaken/;
use File::Spec;
use Sub::Name ();
-require Module::Find;
+use Module::Find();
use base qw/DBIx::Class/;
$dsn,
$user,
$password,
- { AutoCommit => 0 },
+ { AutoCommit => 1 },
);
my $schema2 = Library::Schema->connect($coderef_returning_dbh);
}
# returns a hash of $shortname => $fullname for every package
-# found in the given namespaces ($shortname is with the $fullname's
-# namespace stripped off)
+# found in the given namespaces ($shortname is $fullname with the
+# namespace stripped off)
sub _map_namespaces {
my ($class, @namespaces) = @_;
@results_hash;
}
+# returns the result_source_instance for the passed class/object,
+# or dies with an informative message (used by load_namespaces)
+sub _ns_get_rsrc_instance {
+ my $class = shift;
+ my $rs = ref ($_[0]) || $_[0];
+
+ if ($rs->can ('result_source_instance') ) {
+ return $rs->result_source_instance;
+ }
+ else {
+ $class->throw_exception (
+ "Attempt to load_namespaces() class $rs failed - are you sure this is a real Result Class?"
+ );
+ }
+}
+
sub load_namespaces {
my ($class, %args) = @_;
local *Class::C3::reinitialize = sub { };
use warnings 'redefine';
- # ensure classes are loaded and fetch properly sorted classes
+ # ensure classes are loaded and attached in inheritance order
$class->ensure_class_loaded($_) foreach(values %results);
- my @subclass_last = sort { $results{$a}->isa($results{$b}) } keys(%results);
-
+ my %inh_idx;
+ my @subclass_last = sort {
+
+ ($inh_idx{$a} ||=
+ scalar @{mro::get_linear_isa( $results{$a} )}
+ )
+
+ <=>
+
+ ($inh_idx{$b} ||=
+ scalar @{mro::get_linear_isa( $results{$b} )}
+ )
+
+ } keys(%results);
+
foreach my $result (@subclass_last) {
my $result_class = $results{$result};
my $rs_class = delete $resultsets{$result};
- my $rs_set = $result_class->resultset_class;
-
+ my $rs_set = $class->_ns_get_rsrc_instance ($result_class)->resultset_class;
+
if($rs_set && $rs_set ne 'DBIx::Class::ResultSet') {
if($rs_class && $rs_class ne $rs_set) {
- warn "We found ResultSet class '$rs_class' for '$result', but it seems "
+ carp "We found ResultSet class '$rs_class' for '$result', but it seems "
. "that you had already set '$result' to use '$rs_set' instead";
}
}
elsif($rs_class ||= $default_resultset_class) {
$class->ensure_class_loaded($rs_class);
- $result_class->resultset_class($rs_class);
+ $class->_ns_get_rsrc_instance ($result_class)->resultset_class($rs_class);
}
- my $source_name = $result_class->source_name || $result;
+ my $source_name = $class->_ns_get_rsrc_instance ($result_class)->source_name || $result;
push(@to_register, [ $source_name, $result_class ]);
}
}
foreach (sort keys %resultsets) {
- warn "load_namespaces found ResultSet class $_ with no "
+ carp "load_namespaces found ResultSet class $_ with no "
. 'corresponding Result class';
}
=back
-Alternative method to L</load_namespaces> which you should look at
-using if you can.
+L</load_classes> is an alternative method to L</load_namespaces>, both of
+which serve similar purposes, each with different advantages and disadvantages.
+In the general case you should use L</load_namespaces>, unless you need to
+be able to specify that only specific classes are loaded at runtime.
With no arguments, this method uses L<Module::Find> to find all classes under
the schema's namespace. Otherwise, this method loads the classes you specify
my $snsub = $comp_class->can('source_name');
if(! $snsub ) {
- warn "Failed to load $comp_class. Can't find source_name method. Is $comp_class really a full DBIC result class? Fix it, move it elsewhere, or make your load_classes call more specific.";
+ carp "Failed to load $comp_class. Can't find source_name method. Is $comp_class really a full DBIC result class? Fix it, move it elsewhere, or make your load_classes call more specific.";
next;
}
$comp = $snsub->($comp_class) || $comp;
general.
Note that C<connect_info> expects an arrayref of arguments, but
-C<connect> does not. C<connect> wraps it's arguments in an arrayref
+C<connect> does not. C<connect> wraps its arguments in an arrayref
before passing them to C<connect_info>.
+=head3 Overloading
+
+C<connect> is a convenience method. It is equivalent to calling
+$schema->clone->connection(@connectinfo). To write your own overloaded
+version, overload L</connection> instead.
+
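+A minimal sketch of such an override in a schema subclass (the logging call
+is purely illustrative):
+
+  package MyApp::Schema;
+  use base qw/DBIx::Class::Schema/;
+
+  sub connection {
+    my ($self, @info) = @_;
+    warn "Connecting with DSN: $info[0]\n" if @info;
+    return $self->next::method(@info);
+  }
+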
=cut
sub connect { shift->clone->connection(@_) }
sub resultset {
my ($self, $moniker) = @_;
+ $self->throw_exception('resultset() expects a source name')
+ unless defined $moniker;
return $self->source($moniker)->resultset;
}
$self->storage->txn_do(@_);
}
-=head2 txn_scope_guard (EXPERIMENTAL)
+=head2 txn_scope_guard
Runs C<txn_scope_guard> on the schema's storage. See
L<DBIx::Class::Storage/txn_scope_guard>.
[ 2, 'Indie Band' ],
...
]);
-
+
Since wantarray context is basically the same as looping over $rs->create(...)
you won't see any performance benefits and in this case the method is more for
convenience. Void context sends the column information directly to storage
data in-place on the Schema class. You should probably be calling
L</connect> to get a proper Schema object instead.
+=head3 Overloading
+
+Overload C<connection> to change the behaviour of C<connect>.
=cut
sub connection {
my ($self, @info) = @_;
return $self if !@info && $self->storage;
-
+
my ($storage_class, $args) = ref $self->storage_type ?
($self->_normalize_storage_type($self->storage_type),{}) : ($self->storage_type, {});
-
+
$storage_class = 'DBIx::Class::Storage'.$storage_class
if $storage_class =~ m/^::/;
eval "require ${storage_class};";
=over 4
-=item Arguments: $sqlt_args, $dir
+=item Arguments: \%sqlt_args, $dir
=back
Attempts to deploy the schema to the current storage using L<SQL::Translator>.
-See L<SQL::Translator/METHODS> for a list of values for C<$sqlt_args>. The most
-common value for this would be C<< { add_drop_table => 1, } >> to have the SQL
-produced include a DROP TABLE statement for each table created.
+See L<SQL::Translator/METHODS> for a list of values for C<\%sqlt_args>.
+The most common value for this would be C<< { add_drop_table => 1 } >>
+to have the SQL produced include a C<DROP TABLE> statement for each table
+created. For quoting purposes supply C<quote_table_names> and
+C<quote_field_names>.
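+
+For example, a typical invocation (the optional directory argument is shown
+purely for illustration):
+
+  $schema->deploy({ add_drop_table => 1 }, './sql/');
+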
Additionally, the DBIx::Class parser accepts a C<sources> parameter as a hash
ref or an array ref, containing a list of source to deploy. If present, then
=over 4
-=item Arguments: $rdbms_type, $sqlt_args, $dir
+=item Arguments: See L<DBIx::Class::Storage::DBI/deployment_statements>
=item Return value: $listofstatements
=back
-A convenient shortcut to storage->deployment_statements(). Returns the
-SQL statements used by L</deploy> and
-L<DBIx::Class::Schema::Storage/deploy>. C<$rdbms_type> provides the
-(optional) SQLT (not DBI) database driver name for which the SQL
-statements are produced. If not supplied, the type is determined by
-interrogating the current connection. The other two arguments are
-identical to those of L</deploy>.
+A convenient shortcut to
+C<< $self->storage->deployment_statements($self, @args) >>.
+Returns the SQL statements used by L</deploy> and
+L<DBIx::Class::Schema::Storage/deploy>.
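+
+For example, to obtain the DDL for a given database type without deploying
+it (the type name below is illustrative):
+
+  my @statements = $schema->deployment_statements('SQLite');
+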
=cut
=over 4
-=item Arguments: \@databases, $version, $directory, $preversion, $sqlt_args
+=item Arguments: See L<DBIx::Class::Storage::DBI/create_ddl_dir>
=back
-Creates an SQL file based on the Schema, for each of the specified
-database types, in the given directory. Given a previous version number,
-this will also create a file containing the ALTER TABLE statements to
-transform the previous schema into the current one. Note that these
-statements may contain DROP TABLE or DROP COLUMN statements that can
-potentially destroy data.
-
-The file names are created using the C<ddl_filename> method below, please
-override this method in your schema if you would like a different file
-name format. For the ALTER file, the same format is used, replacing
-$version in the name with "$preversion-$version".
-
-See L<DBIx::Class::Schema/deploy> for details of $sqlt_args.
-
-If no arguments are passed, then the following default values are used:
-
-=over 4
-
-=item databases - ['MySQL', 'SQLite', 'PostgreSQL']
+A convenient shortcut to
+C<< $self->storage->create_ddl_dir($self, @args) >>.
-=item version - $schema->schema_version
-
-=item directory - './'
-
-=item preversion - <none>
-
-=back
-
-Note that this feature is currently EXPERIMENTAL and may not work correctly
-across all databases, or fully handle complex relationships.
-
-WARNING: Please check all SQL files created, before applying them.
+Creates an SQL file based on the Schema, for each of the specified
+database types, in the given directory.
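+
+For example (argument order as documented in
+L<DBIx::Class::Storage::DBI/create_ddl_dir>; the values are illustrative):
+
+  $schema->create_ddl_dir(['MySQL', 'SQLite'], '0.1', './sql/');
+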
=cut
You may override this method in your schema if you wish to use a different
format.
+ WARNING
+
+ Prior to DBIx::Class version 0.08100 this method had a different signature:
+
+ my $filename = $table->ddl_filename($type, $dir, $version, $preversion)
+
+ In recent versions variables $dir and $version were reversed in order to
+ bring the signature in line with other Schema/Storage methods. If you
+ really need to maintain backward compatibility, you can do the following
+ in any overriding methods:
+
+ ($dir, $version) = ($version, $dir) if ($DBIx::Class::VERSION < 0.08100);
+
=cut
sub ddl_filename {
$filename =~ s/::/-/g;
$filename = File::Spec->catfile($dir, "$filename-$version-$type.sql");
$filename =~ s/$version/$preversion-$version/ if($preversion);
-
+
return $filename;
}
sub _register_source {
my ($self, $moniker, $source, $params) = @_;
+ my $orig_source = $source;
+
$source = $source->new({ %$source, source_name => $moniker });
+ $source->schema($self);
+ weaken($source->{schema}) if ref($self);
+
+ my $rs_class = $source->result_class;
my %reg = %{$self->source_registrations};
$reg{$moniker} = $source;
$self->source_registrations(\%reg);
- $source->schema($self);
- weaken($source->{schema}) if ref($self);
return if ($params->{extra});
-
- if ($source->result_class) {
- my %map = %{$self->class_mappings};
- if (exists $map{$source->result_class}) {
- warn $source->result_class . ' already has a source, use register_extra_source for additional sources';
- }
- $map{$source->result_class} = $moniker;
- $self->class_mappings(\%map);
+ return unless defined($rs_class) && $rs_class->can('result_source_instance');
+
+ my %map = %{$self->class_mappings};
+ if (
+ exists $map{$rs_class}
+ and
+ $map{$rs_class} ne $moniker
+ and
+ $rs_class->result_source_instance ne $orig_source
+ ) {
+ carp "$rs_class already has a source, use register_extra_source for additional sources";
}
+ $map{$rs_class} = $moniker;
+ $self->class_mappings(\%map);
}
sub _unregister_source {
sub compose_connection {
my ($self, $target, @info) = @_;
- warn "compose_connection deprecated as of 0.08000"
+ carp "compose_connection deprecated as of 0.08000"
unless ($INC{"DBIx/Class/CDBICompat.pm"} || $warn++);
my $base = 'DBIx::Class::ResultSetProxy';
$self->throw_exception
("No arguments to load_classes and couldn't load ${base} ($@)")
if $@;
-
+
if ($self eq $target) {
# Pathological case, largely caused by the docs on early C::M::DBIC::Plain
foreach my $moniker ($self->sources) {
$self->connection(@info);
return $self;
}
-
+
my $schema = $self->compose_namespace($target, $base);
{
no strict 'refs';
my $name = join '::', $target, 'schema';
*$name = Sub::Name::subname $name, sub { $schema };
}
-
+
$schema->connection(@info);
foreach my $moniker ($schema->sources) {
my $source = $schema->source($moniker);
=head1 SYNOPSIS
- package Library::Schema;
+ package MyApp::Schema;
use base qw/DBIx::Class::Schema/;
our $VERSION = 0.001;
- # load Library::Schema::CD, Library::Schema::Book, Library::Schema::DVD
+ # load MyApp::Schema::CD, MyApp::Schema::Book, MyApp::Schema::DVD
__PACKAGE__->load_classes(qw/CD Book DVD/);
__PACKAGE__->load_components(qw/Schema::Versioned/);
use strict;
use warnings;
use base 'DBIx::Class';
+
+use Carp::Clan qw/^DBIx::Class/;
use POSIX 'strftime';
-use Data::Dumper;
__PACKAGE__->mk_classdata('_filedata');
__PACKAGE__->mk_classdata('upgrade_directory');
# must be called on a fresh database
if ($self->get_db_version()) {
- warn 'Install not possible as versions table already exists in database';
+ carp 'Install not possible as versions table already exists in database';
}
# default to current version if none passed
# db unversioned
unless ($db_version) {
- warn 'Upgrade not possible as database is unversioned. Please call install first.';
+ carp 'Upgrade not possible as database is unversioned. Please call install first.';
return;
}
# db and schema at same version. do nothing
if ($db_version eq $self->schema_version) {
- print "Upgrade not necessary\n";
+ carp "Upgrade not necessary\n";
return;
}
# here to be sure.
# XXX - just fix it
$self->storage->sqlt_type;
-
+
my $upgrade_file = $self->ddl_filename(
$self->storage->sqlt_type,
$self->schema_version,
$self->create_upgrade_path({ upgrade_file => $upgrade_file });
unless (-f $upgrade_file) {
- warn "Upgrade not possible, no upgrade file found ($upgrade_file), please create one\n";
+ carp "Upgrade not possible, no upgrade file found ($upgrade_file), please create one\n";
return;
}
- warn "\nDB version ($db_version) is lower than the schema version (".$self->schema_version."). Attempting upgrade.\n";
+ carp "\nDB version ($db_version) is lower than the schema version (".$self->schema_version."). Attempting upgrade.\n";
# backup if necessary then apply upgrade
$self->_filedata($self->_read_sql_file($upgrade_file));
sub apply_statement {
my ($self, $statement) = @_;
- $self->storage->dbh->do($_) or warn "SQL was:\n $_";
+ $self->storage->dbh->do($_) or carp "SQL was:\n $_";
}
=head2 get_db_version
if($pversion eq $self->schema_version)
{
-# warn "This version is already installed\n";
+# carp "This version is already installed\n";
return 1;
}
if(!$pversion)
{
- warn "Your DB is currently unversioned. Please call upgrade on your schema to sync the DB.\n";
+ carp "Your DB is currently unversioned. Please call upgrade on your schema to sync the DB.\n";
return 1;
}
- warn "Versions out of sync. This is " . $self->schema_version .
+ carp "Versions out of sync. This is " . $self->schema_version .
", your database contains version $pversion, please call upgrade on your Schema.\n";
}
$db_tr->producer($db);
my $dbic_tr = SQL::Translator->new;
$dbic_tr->parser('SQL::Translator::Parser::DBIx::Class');
- $dbic_tr = $self->storage->configure_sqlt($dbic_tr, $db);
$dbic_tr->data($self);
$dbic_tr->producer($db);
print $file $diff;
close($file);
- print "WARNING: There may be differences between your DB and your DBIC schema. Please review and if necessary run the SQL in $filename to sync your DB.\n";
+ carp "WARNING: There may be differences between your DB and your DBIC schema. Please review and if necessary run the SQL in $filename to sync your DB.\n";
}
my $file = shift || return;
my $fh;
- open $fh, "<$file" or warn("Can't open upgrade file, $file ($!)");
+ open $fh, "<$file" or carp("Can't open upgrade file, $file ($!)");
my @data = split(/\n/, join('', <$fh>));
@data = grep(!/^--/, @data);
@data = split(/;/, join('', @data));
=head1 AUTHORS
-Jess Robinson <castaway@desert-island.demon.co.uk>
+Jess Robinson <castaway@desert-island.me.uk>
Luke Saunders <luke@shadowcatsystems.co.uk>
=head1 LICENSE
=head1 SYNOPSIS
use DBIx::Class::StartupCheck;
-
+
=head1 DESCRIPTION
This module used to check for, and if necessary issue a warning for, a
use warnings;
use base qw/DBIx::Class/;
+use mro 'c3';
use Scalar::Util qw/weaken/;
use Carp::Clan qw/^DBIx::Class/;
Issues a commit of the current transaction.
+It does I<not> perform an actual storage commit unless there's a DBIx::Class
+transaction currently in effect (i.e. you called L</txn_begin>).
+
=cut
sub txn_commit { die "Virtual method!" }
=for comment
-=head2 txn_scope_guard (EXPERIMENTAL)
+=head2 txn_scope_guard
-An alternative way of using transactions to C<txn_do>:
+An alternative way of transaction handling based on
+L<DBIx::Class::Storage::TxnScopeGuard>:
- my $txn = $storage->txn_scope_guard;
+ my $txn_guard = $storage->txn_scope_guard;
$row->col1("val1");
$row->update;
- $txn->commit;
+ $txn_guard->commit;
-If a exception occurs, the transaction will be rolled back. This is still very
-experiemental, and we are not 100% sure it is working right when nested. The
-onus is on you as the user to make sure you dont forget to call
-$C<$txn->commit>.
+If an exception occurs, or the guard object otherwise leaves the scope
+before C<< $txn_guard->commit >> is called, the transaction will be rolled
+back by an explicit L</txn_rollback> call. In essence this is akin to
+using a L</txn_begin>/L</txn_commit> pair, without having to worry
+about calling L</txn_rollback> at the right places. Note that since there
+is no defined code closure, there will be no retries or other magic upon
+database disconnection. If you need such functionality see L</txn_do>.
=cut
=head2 sql_maker
Returns a C<sql_maker> object - normally an object of class
-C<DBIC::SQL::Abstract>.
+C<DBIx::Class::SQLAHacks>.
=cut
package DBIx::Class::Storage::DBI;
# -*- mode: cperl; cperl-indent-level: 2 -*-
+use strict;
+use warnings;
+
use base 'DBIx::Class::Storage';
+use mro 'c3';
-use strict;
-use warnings;
use Carp::Clan qw/^DBIx::Class/;
use DBI;
-use SQL::Abstract::Limit;
use DBIx::Class::Storage::DBI::Cursor;
use DBIx::Class::Storage::Statistics;
-use Scalar::Util qw/blessed weaken/;
+use Scalar::Util();
+use List::Util();
__PACKAGE__->mk_group_accessors('simple' =>
- qw/_connect_info _dbi_connect_info _dbh _sql_maker _sql_maker_opts
- _conn_pid _conn_tid transaction_depth _dbh_autocommit savepoints/
+ qw/_connect_info _dbi_connect_info _dbh _sql_maker _sql_maker_opts _conn_pid
+ _conn_tid transaction_depth _dbh_autocommit _driver_determined savepoints/
);
# the values for these accessors are picked out (and deleted) from
# the attribute hashref passed to connect_info
my @storage_options = qw/
- on_connect_do on_disconnect_do disable_sth_caching unsafe auto_savepoint
+ on_connect_call on_disconnect_call on_connect_do on_disconnect_do
+ disable_sth_caching unsafe auto_savepoint
/;
__PACKAGE__->mk_group_accessors('simple' => @storage_options);
__PACKAGE__->cursor_class('DBIx::Class::Storage::DBI::Cursor');
__PACKAGE__->mk_group_accessors('inherited' => qw/sql_maker_class/);
-__PACKAGE__->sql_maker_class('DBIC::SQL::Abstract');
-
-BEGIN {
-
-package # Hide from PAUSE
- DBIC::SQL::Abstract; # Would merge upstream, but nate doesn't reply :(
-
-use base qw/SQL::Abstract::Limit/;
-
-# This prevents the caching of $dbh in S::A::L, I believe
-sub new {
- my $self = shift->SUPER::new(@_);
-
- # If limit_dialect is a ref (like a $dbh), go ahead and replace
- # it with what it resolves to:
- $self->{limit_dialect} = $self->_find_syntax($self->{limit_dialect})
- if ref $self->{limit_dialect};
-
- $self;
-}
-
-# DB2 is the only remaining DB using this. Even though we are not sure if
-# RowNumberOver is still needed here (should be part of SQLA) leave the
-# code in place
-sub _RowNumberOver {
- my ($self, $sql, $order, $rows, $offset ) = @_;
-
- $offset += 1;
- my $last = $rows + $offset;
- my ( $order_by ) = $self->_order_by( $order );
-
- $sql = <<"SQL";
-SELECT * FROM
-(
- SELECT Q1.*, ROW_NUMBER() OVER( ) AS ROW_NUM FROM (
- $sql
- $order_by
- ) Q1
-) Q2
-WHERE ROW_NUM BETWEEN $offset AND $last
-
-SQL
-
- return $sql;
-}
-
-
-# While we're at it, this should make LIMIT queries more efficient,
-# without digging into things too deeply
-use Scalar::Util 'blessed';
-sub _find_syntax {
- my ($self, $syntax) = @_;
-
- # DB2 is the only remaining DB using this. Even though we are not sure if
- # RowNumberOver is still needed here (should be part of SQLA) leave the
- # code in place
- my $dbhname = blessed($syntax) ? $syntax->{Driver}{Name} : $syntax;
- if(ref($self) && $dbhname && $dbhname eq 'DB2') {
- return 'RowNumberOver';
- }
-
- $self->{_cached_syntax} ||= $self->SUPER::_find_syntax($syntax);
-}
-
-sub select {
- my ($self, $table, $fields, $where, $order, @rest) = @_;
- if (ref $table eq 'SCALAR') {
- $table = $$table;
- }
- elsif (not ref $table) {
- $table = $self->_quote($table);
- }
- local $self->{rownum_hack_count} = 1
- if (defined $rest[0] && $self->{limit_dialect} eq 'RowNum');
- @rest = (-1) unless defined $rest[0];
- die "LIMIT 0 Does Not Compute" if $rest[0] == 0;
- # and anyway, SQL::Abstract::Limit will cause a barf if we don't first
- local $self->{having_bind} = [];
- my ($sql, @ret) = $self->SUPER::select(
- $table, $self->_recurse_fields($fields), $where, $order, @rest
- );
- $sql .=
- $self->{for} ?
- (
- $self->{for} eq 'update' ? ' FOR UPDATE' :
- $self->{for} eq 'shared' ? ' FOR SHARE' :
- ''
- ) :
- ''
- ;
- return wantarray ? ($sql, @ret, @{$self->{having_bind}}) : $sql;
-}
-
-sub insert {
- my $self = shift;
- my $table = shift;
- $table = $self->_quote($table) unless ref($table);
- $self->SUPER::insert($table, @_);
-}
-
-sub update {
- my $self = shift;
- my $table = shift;
- $table = $self->_quote($table) unless ref($table);
- $self->SUPER::update($table, @_);
-}
-
-sub delete {
- my $self = shift;
- my $table = shift;
- $table = $self->_quote($table) unless ref($table);
- $self->SUPER::delete($table, @_);
-}
-
-sub _emulate_limit {
- my $self = shift;
- if ($_[3] == -1) {
- return $_[1].$self->_order_by($_[2]);
- } else {
- return $self->SUPER::_emulate_limit(@_);
- }
-}
-
-sub _recurse_fields {
- my ($self, $fields, $params) = @_;
- my $ref = ref $fields;
- return $self->_quote($fields) unless $ref;
- return $$fields if $ref eq 'SCALAR';
-
- if ($ref eq 'ARRAY') {
- return join(', ', map {
- $self->_recurse_fields($_)
- .(exists $self->{rownum_hack_count} && !($params && $params->{no_rownum_hack})
- ? ' AS col'.$self->{rownum_hack_count}++
- : '')
- } @$fields);
- } elsif ($ref eq 'HASH') {
- foreach my $func (keys %$fields) {
- return $self->_sqlcase($func)
- .'( '.$self->_recurse_fields($fields->{$func}).' )';
- }
- }
- # Is the second check absolutely necessary?
- elsif ( $ref eq 'REF' and ref($$fields) eq 'ARRAY' ) {
- return $self->_bind_to_sql( $fields );
- }
- else {
- Carp::croak($ref . qq{ unexpected in _recurse_fields()})
- }
-}
-
-sub _order_by {
- my $self = shift;
- my $ret = '';
- my @extra;
- if (ref $_[0] eq 'HASH') {
- if (defined $_[0]->{group_by}) {
- $ret = $self->_sqlcase(' group by ')
- .$self->_recurse_fields($_[0]->{group_by}, { no_rownum_hack => 1 });
- }
- if (defined $_[0]->{having}) {
- my $frag;
- ($frag, @extra) = $self->_recurse_where($_[0]->{having});
- push(@{$self->{having_bind}}, @extra);
- $ret .= $self->_sqlcase(' having ').$frag;
- }
- if (defined $_[0]->{order_by}) {
- $ret .= $self->_order_by($_[0]->{order_by});
- }
- if (grep { $_ =~ /^-(desc|asc)/i } keys %{$_[0]}) {
- return $self->SUPER::_order_by($_[0]);
- }
- } elsif (ref $_[0] eq 'SCALAR') {
- $ret = $self->_sqlcase(' order by ').${ $_[0] };
- } elsif (ref $_[0] eq 'ARRAY' && @{$_[0]}) {
- my @order = @{+shift};
- $ret = $self->_sqlcase(' order by ')
- .join(', ', map {
- my $r = $self->_order_by($_, @_);
- $r =~ s/^ ?ORDER BY //i;
- $r;
- } @order);
- } else {
- $ret = $self->SUPER::_order_by(@_);
- }
- return $ret;
-}
-
-sub _order_directions {
- my ($self, $order) = @_;
- $order = $order->{order_by} if ref $order eq 'HASH';
- return $self->SUPER::_order_directions($order);
-}
-
-sub _table {
- my ($self, $from) = @_;
- if (ref $from eq 'ARRAY') {
- return $self->_recurse_from(@$from);
- } elsif (ref $from eq 'HASH') {
- return $self->_make_as($from);
- } else {
- return $from; # would love to quote here but _table ends up getting called
- # twice during an ->select without a limit clause due to
- # the way S::A::Limit->select works. should maybe consider
- # bypassing this and doing S::A::select($self, ...) in
- # our select method above. meantime, quoting shims have
- # been added to select/insert/update/delete here
- }
-}
-
-sub _recurse_from {
- my ($self, $from, @join) = @_;
- my @sqlf;
- push(@sqlf, $self->_make_as($from));
- foreach my $j (@join) {
- my ($to, $on) = @$j;
-
- # check whether a join type exists
- my $join_clause = '';
- my $to_jt = ref($to) eq 'ARRAY' ? $to->[0] : $to;
- if (ref($to_jt) eq 'HASH' and exists($to_jt->{-join_type})) {
- $join_clause = ' '.uc($to_jt->{-join_type}).' JOIN ';
- } else {
- $join_clause = ' JOIN ';
- }
- push(@sqlf, $join_clause);
-
- if (ref $to eq 'ARRAY') {
- push(@sqlf, '(', $self->_recurse_from(@$to), ')');
- } else {
- push(@sqlf, $self->_make_as($to));
- }
- push(@sqlf, ' ON ', $self->_join_condition($on));
- }
- return join('', @sqlf);
-}
+__PACKAGE__->sql_maker_class('DBIx::Class::SQLAHacks');
-sub _bind_to_sql {
- my $self = shift;
- my $arr = shift;
- my $sql = shift @$$arr;
- $sql =~ s/\?/$self->_quote((shift @$$arr)->[1])/eg;
- return $sql
-}
-
-sub _make_as {
- my ($self, $from) = @_;
- return join(' ', map { (ref $_ eq 'SCALAR' ? $$_
- : ref $_ eq 'REF' ? $self->_bind_to_sql($_)
- : $self->_quote($_))
- } reverse each %{$self->_skip_options($from)});
-}
-
-sub _skip_options {
- my ($self, $hash) = @_;
- my $clean_hash = {};
- $clean_hash->{$_} = $hash->{$_}
- for grep {!/^-/} keys %$hash;
- return $clean_hash;
-}
-
-sub _join_condition {
- my ($self, $cond) = @_;
- if (ref $cond eq 'HASH') {
- my %j;
- for (keys %$cond) {
- my $v = $cond->{$_};
- if (ref $v) {
- # XXX no throw_exception() in this package and croak() fails with strange results
- Carp::croak(ref($v) . qq{ reference arguments are not supported in JOINS - try using \"..." instead'})
- if ref($v) ne 'SCALAR';
- $j{$_} = $v;
- }
- else {
- my $x = '= '.$self->_quote($v); $j{$_} = \$x;
- }
- };
- return scalar($self->_recurse_where(\%j));
- } elsif (ref $cond eq 'ARRAY') {
- return join(' OR ', map { $self->_join_condition($_) } @$cond);
- } else {
- die "Can't handle this yet!";
- }
-}
-
-sub _quote {
- my ($self, $label) = @_;
- return '' unless defined $label;
- return "*" if $label eq '*';
- return $label unless $self->{quote_char};
- if(ref $self->{quote_char} eq "ARRAY"){
- return $self->{quote_char}->[0] . $label . $self->{quote_char}->[1]
- if !defined $self->{name_sep};
- my $sep = $self->{name_sep};
- return join($self->{name_sep},
- map { $self->{quote_char}->[0] . $_ . $self->{quote_char}->[1] }
- split(/\Q$sep\E/,$label));
- }
- return $self->SUPER::_quote($label);
-}
-
-sub limit_dialect {
- my $self = shift;
- $self->{limit_dialect} = shift if @_;
- return $self->{limit_dialect};
-}
-
-sub quote_char {
- my $self = shift;
- $self->{quote_char} = shift if @_;
- return $self->{quote_char};
-}
-
-sub name_sep {
- my $self = shift;
- $self->{name_sep} = shift if @_;
- return $self->{name_sep};
-}
-
-} # End of BEGIN block
=head1 NAME
=item *
-A single code reference which returns a connected
-L<DBI database handle|DBI/connect> optionally followed by
+A single code reference which returns a connected
+L<DBI database handle|DBI/connect> optionally followed by
L<extra attributes|/DBIx::Class specific connection attributes> recognized
by DBIx::Class:
%extra_attributes,
}];
-This is particularly useful for L<Catalyst> based applications, allowing the
+This is particularly useful for L<Catalyst> based applications, allowing the
following config (L<Config::General> style):
<Model::DB>
set C<AutoCommit> to either I<0> or I<1>. L<DBIx::Class> further
recommends that it be set to I<1>, and that you perform transactions
via our L<DBIx::Class::Schema/txn_do> method. L<DBIx::Class> will set it
-to I<1> if you do not do explicitly set it to zero. This is the default
+to I<1> if you do not explicitly set it to zero. This is the default
for most DBDs. See L</DBIx::Class and AutoCommit> for details.
=head3 DBIx::Class specific connection attributes
=over
+=item a scalar
+
+This contains one SQL statement to execute.
+
=item an array reference
This contains SQL statements to execute in order. Each element contains
Note, this only runs if you explicitly call L</disconnect> on the
storage object.
+=item on_connect_call
+
+A more generalized form of L</on_connect_do> that calls the specified
+C<connect_call_METHOD> methods in your storage driver.
+
+ on_connect_do => 'select 1'
+
+is equivalent to:
+
+ on_connect_call => [ [ do_sql => 'select 1' ] ]
+
+Its values may contain:
+
+=over
+
+=item a scalar
+
+Will call the C<connect_call_METHOD> method.
+
+=item a code reference
+
+Will execute C<< $code->($storage) >>
+
+=item an array reference
+
+Each value can be a method name or code reference.
+
+=item an array of arrays
+
+For each array, the first item is taken to be the C<connect_call_> method name
+or code reference, and the rest are parameters to it.
+
+=back
+
+Some predefined storage methods you may use:
+
+=over
+
+=item do_sql
+
+Executes a SQL string or a code reference that returns a SQL string. This is
+what L</on_connect_do> and L</on_disconnect_do> use.
+
+It can take:
+
+=over
+
+=item a scalar
+
+Will execute the scalar as SQL.
+
+=item an arrayref
+
+Taken to be arguments to L<DBI/do>, the SQL string optionally followed by the
+attributes hashref and bind values.
+
+=item a code reference
+
+Will execute C<< $code->($storage) >> and execute the returned array refs as
+above.
+
+=back
+
+=item datetime_setup
+
+Executes any statements necessary to initialize the database session to return
+and accept datetime/timestamp values used with
+L<DBIx::Class::InflateColumn::DateTime>.
+
+Only necessary for some databases, see your specific storage driver for
+implementation details.
+
+=back
+
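+A combined example of the above (the SQL statements are illustrative):
+
+  on_connect_call => [
+    'datetime_setup',                    # predefined method
+    [ do_sql => 'SET NAMES utf8' ],      # method with an argument
+    sub { shift->dbh->do('select 1') },  # plain code reference
+  ]
+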
+=item on_disconnect_call
+
+Takes arguments in the same form as L</on_connect_call> and executes them
+immediately before disconnecting from the database.
+
+Calls the C<disconnect_call_METHOD> methods as opposed to the
+C<connect_call_METHOD> methods called by L</on_connect_call>.
+
+Note, this only runs if you explicitly call L</disconnect> on the
+storage object.
+
=item disable_sth_caching
If set to a true value, this option will disable the caching of
statement handles via L<DBI/prepare_cached>.
-=item limit_dialect
+=item limit_dialect
Sets the limit dialect. This is useful for JDBC-bridge among others
where the remote SQL-dialect cannot be determined by the name of the
=item quote_char
-Specifies what characters to use to quote table and column names. If
+Specifies what characters to use to quote table and column names. If
you use this you will want to specify L</name_sep> as well.
C<quote_char> expects either a single character, in which case it
=item name_sep
-This only needs to be used in conjunction with C<quote_char>, and is used to
-specify the charecter that seperates elements (schemas, tables, columns) from
+This only needs to be used in conjunction with C<quote_char>, and is used to
+specify the character that separates elements (schemas, tables, columns) from
each other. In most cases this is simply a C<.>.
The consequence of not supplying this value is that L<SQL::Abstract>
}
}
- %attrs = () if (ref $args[0] eq 'CODE'); # _connect() never looks past $args[0] in this case
+ if (ref $args[0] eq 'CODE') {
+ # _connect() never looks past $args[0] in this case
+ %attrs = ()
+ } else {
+ %attrs = (
+ %{ $self->_default_dbi_connect_attributes || {} },
+ %attrs,
+ );
+ }
$self->_dbi_connect_info([@args, keys %attrs ? \%attrs : ()]);
$self->_connect_info;
}
+sub _default_dbi_connect_attributes {
+ return {
+ AutoCommit => 1,
+ RaiseError => 1,
+ PrintError => 0,
+ };
+}
+
=head2 on_connect_do
This method is deprecated in favour of setting via L</connect_info>.
+=cut
+
+=head2 on_disconnect_do
+
+This method is deprecated in favour of setting via L</connect_info>.
+
+=cut
+
+sub _parse_connect_do {
+ my ($self, $type) = @_;
+
+ my $val = $self->$type;
+ return () if not defined $val;
+
+ my @res;
+
+ if (not ref($val)) {
+ push @res, [ 'do_sql', $val ];
+ } elsif (ref($val) eq 'CODE') {
+ push @res, $val;
+ } elsif (ref($val) eq 'ARRAY') {
+ push @res, map { [ 'do_sql', $_ ] } @$val;
+ } else {
+ $self->throw_exception("Invalid type for $type: ".ref($val));
+ }
+
+ return \@res;
+}
=head2 dbh_do
}
};
+ # ->connected might unset $@ - copy
my $exception = $@;
if(!$exception) { return $want_array ? @result : $result[0] }
# We were not connected - reconnect and retry, but let any
# exception fall right through this time
+ carp "Retrying $code after catching disconnected exception: $exception"
+ if $ENV{DBIC_DBIRETRY_DEBUG};
$self->_populate_dbh;
$self->$code($self->_dbh, @_);
}
$self->txn_commit;
};
+ # ->connected might unset $@ - copy
my $exception = $@;
if(!$exception) { return $want_array ? @result : $result[0] }
- if($tried++ > 0 || $self->connected) {
+ if($tried++ || $self->connected) {
eval { $self->txn_rollback };
my $rollback_exception = $@;
if($rollback_exception) {
# We were not connected, and was first try - reconnect and retry
# via the while loop
+ carp "Retrying $coderef after catching disconnected exception: $exception"
+ if $ENV{DBIC_DBIRETRY_DEBUG};
$self->_populate_dbh;
}
}
sub disconnect {
my ($self) = @_;
- if( $self->connected ) {
- my $connection_do = $self->on_disconnect_do;
- $self->_do_connection_actions($connection_do) if ref($connection_do);
+ if( $self->_dbh ) {
+ my @actions;
+
+ push @actions, ( $self->on_disconnect_call || () );
+ push @actions, $self->_parse_connect_do ('on_disconnect_do');
+
+ $self->_do_connection_actions(disconnect_call_ => $_) for @actions;
$self->_dbh->rollback unless $self->_dbh_autocommit;
$self->_dbh->disconnect;
$sub->();
}
+=head2 connected
+
+=over
+
+=item Arguments: none
+
+=item Return Value: 1|0
+
+=back
+
+Verifies that the current database handle is active and ready to execute
+an SQL statement (i.e. the connection did not get stale, the server is still
+answering, etc.). This method is used internally by L</dbh>.
+
+=cut
+
sub connected {
- my ($self) = @_;
+ my $self = shift;
+ return 0 unless $self->_seems_connected;
- if(my $dbh = $self->_dbh) {
- if(defined $self->_conn_tid && $self->_conn_tid != threads->tid) {
- $self->_dbh(undef);
- $self->{_dbh_gen}++;
- return;
- }
- else {
- $self->_verify_pid;
- return 0 if !$self->_dbh;
- }
- return ($dbh->FETCH('Active') && $dbh->ping);
+ #be on the safe side
+ local $self->_dbh->{RaiseError} = 1;
+
+ return $self->_ping;
+}
+
+sub _seems_connected {
+ my $self = shift;
+
+ my $dbh = $self->_dbh
+ or return 0;
+
+ if(defined $self->_conn_tid && $self->_conn_tid != threads->tid) {
+ $self->_dbh(undef);
+ $self->{_dbh_gen}++;
+ return 0;
+ }
+ else {
+ $self->_verify_pid;
+ return 0 if !$self->_dbh;
}
- return 0;
+ return $dbh->FETCH('Active');
+}
+
+sub _ping {
+ my $self = shift;
+
+ my $dbh = $self->_dbh or return 0;
+
+ return $dbh->ping;
}
# handle pid changes correctly
=head2 dbh
-Returns the dbh - a data base handle of class L<DBI>.
+Returns a C<$dbh> - a database handle of class L<DBI>. The returned handle
+is guaranteed to be healthy by implicitly calling L</connected>, and if
+necessary performing a reconnection before returning. Keep in mind that this
+is very B<expensive> on some database engines. Consider using L</dbh_do>
+instead.
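+
+For example, a cheaper alternative when all you need is to run a statement
+on a guaranteed-live handle (the SQL below is illustrative):
+
+  $schema->storage->dbh_do(sub {
+    my ($storage, $dbh) = @_;
+    $dbh->do('UPDATE artist SET rank = rank + 1');
+  });
+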
=cut
sub dbh {
my ($self) = @_;
- $self->ensure_connected;
+ if (not $self->_dbh) {
+ $self->_populate_dbh;
+ } else {
+ $self->ensure_connected;
+ }
+ return $self->_dbh;
+}
+
+# this is the internal "get dbh or connect (don't check)" method
+sub _get_dbh {
+ my $self = shift;
+ $self->_populate_dbh unless $self->_dbh;
return $self->_dbh;
}
sub _sql_maker_args {
my ($self) = @_;
-
- return ( bindtype=>'columns', array_datatypes => 1, limit_dialect => $self->dbh, %{$self->_sql_maker_opts} );
+
+ return (
+ bindtype=>'columns',
+ array_datatypes => 1,
+ limit_dialect => $self->_get_dbh,
+ %{$self->_sql_maker_opts}
+ );
}
sub sql_maker {
my ($self) = @_;
unless ($self->_sql_maker) {
my $sql_maker_class = $self->sql_maker_class;
+ $self->ensure_class_loaded ($sql_maker_class);
$self->_sql_maker($sql_maker_class->new( $self->_sql_maker_args ));
}
return $self->_sql_maker;
sub _populate_dbh {
my ($self) = @_;
+
my @info = @{$self->_dbi_connect_info || []};
+ $self->_dbh(undef); # in case ->connected failed we might get sent here
$self->_dbh($self->_connect(@info));
+ $self->_conn_pid($$);
+ $self->_conn_tid(threads->tid) if $INC{'threads.pm'};
+
+ $self->_determine_driver;
+
# Always set the transaction depth on connect, since
# there is no transaction in progress by definition
$self->{transaction_depth} = $self->_dbh_autocommit ? 0 : 1;
- if(ref $self eq 'DBIx::Class::Storage::DBI') {
- my $driver = $self->_dbh->{Driver}->{Name};
- if ($self->load_optional_class("DBIx::Class::Storage::DBI::${driver}")) {
- bless $self, "DBIx::Class::Storage::DBI::${driver}";
- $self->_rebless();
- }
- }
+ $self->_run_connection_actions unless $self->{_in_determine_driver};
+}
- $self->_conn_pid($$);
- $self->_conn_tid(threads->tid) if $INC{'threads.pm'};
+sub _run_connection_actions {
+ my $self = shift;
+ my @actions;
+
+ push @actions, ( $self->on_connect_call || () );
+ push @actions, $self->_parse_connect_do ('on_connect_do');
- my $connection_do = $self->on_connect_do;
- $self->_do_connection_actions($connection_do) if ref($connection_do);
+ $self->_do_connection_actions(connect_call_ => $_) for @actions;
}
-sub _do_connection_actions {
- my $self = shift;
- my $connection_do = shift;
+sub _determine_driver {
+ my ($self) = @_;
+
+ if ((not $self->_driver_determined) && (not $self->{_in_determine_driver})) {
+ my $started_unconnected = 0;
+ local $self->{_in_determine_driver} = 1;
+
+ if (ref($self) eq __PACKAGE__) {
+ my $driver;
+ if ($self->_dbh) { # we are connected
+ $driver = $self->_dbh->{Driver}{Name};
+ } else {
+ # try to use dsn to not require being connected, the driver may still
+ # force a connection in _rebless to determine version
+ ($driver) = $self->_dbi_connect_info->[0] =~ /dbi:([^:]+):/i;
+ $started_unconnected = 1;
+ }
- if (ref $connection_do eq 'ARRAY') {
- $self->_do_query($_) foreach @$connection_do;
+ my $storage_class = "DBIx::Class::Storage::DBI::${driver}";
+ if ($self->load_optional_class($storage_class)) {
+ mro::set_mro($storage_class, 'c3');
+ bless $self, $storage_class;
+ $self->_rebless();
+ }
+ }
+
+ $self->_driver_determined(1);
+
+ $self->_run_connection_actions
+ if $started_unconnected && defined $self->_dbh;
}
- elsif (ref $connection_do eq 'CODE') {
- $connection_do->($self);
+}
+
+sub _do_connection_actions {
+ my $self = shift;
+ my $method_prefix = shift;
+ my $call = shift;
+
+ if (not ref($call)) {
+ my $method = $method_prefix . $call;
+ $self->$method(@_);
+ } elsif (ref($call) eq 'CODE') {
+ $self->$call(@_);
+ } elsif (ref($call) eq 'ARRAY') {
+ if (ref($call->[0]) ne 'ARRAY') {
+ $self->_do_connection_actions($method_prefix, $_) for @$call;
+ } else {
+ $self->_do_connection_actions($method_prefix, @$_) for @$call;
+ }
+ } else {
+ $self->throw_exception (sprintf ("Don't know how to process connection actions of type '%s'", ref($call)) );
}
return $self;
}
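+
+# A quick sketch of the on_connect_call value shapes the dispatcher above
+# accepts (the connect_call_* helpers referenced are defined below; the SQL
+# statement itself is only a made-up example):
+#
+#   on_connect_call => 'datetime_setup',      # calls connect_call_datetime_setup()
+#   on_connect_call => sub { my $storage = shift; ... },    # coderef, invoked as a method
+#   on_connect_call => [ [ do_sql => 'SET NAMES utf8' ] ],  # connect_call_do_sql('SET NAMES utf8')
+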
+sub connect_call_do_sql {
+ my $self = shift;
+ $self->_do_query(@_);
+}
+
+sub disconnect_call_do_sql {
+ my $self = shift;
+ $self->_do_query(@_);
+}
+
+# override in db-specific backend when necessary
+sub connect_call_datetime_setup { 1 }
+
sub _do_query {
my ($self, $action) = @_;
if($dbh && !$self->unsafe) {
my $weak_self = $self;
- weaken($weak_self);
+ Scalar::Util::weaken($weak_self);
$dbh->{HandleError} = sub {
if ($weak_self) {
$weak_self->throw_exception("DBI Exception: $_[0]");
$self->throw_exception ("Your Storage implementation doesn't support savepoints")
unless $self->can('_svp_begin');
-
+
push @{ $self->{savepoints} }, $name;
$self->debugobj->svp_begin($name) if $self->debug;
-
+
return $self->_svp_begin($name);
}
}
$self->debugobj->svp_rollback($name) if $self->debug;
-
+
return $self->_svp_rollback($name);
}
sub txn_begin {
my $self = shift;
- $self->ensure_connected();
if($self->{transaction_depth} == 0) {
$self->debugobj->txn_begin()
if $self->debug;
- # this isn't ->_dbh-> because
- # we should reconnect on begin_work
- # for AutoCommit users
- $self->dbh->begin_work;
+
+ # being here implies we have AutoCommit => 1
+ # if the user is utilizing txn_do - good for
+ # them; otherwise we need to ensure that the
+ # $dbh is healthy on BEGIN
+ my $dbh_method = $self->{_in_dbh_do} ? '_dbh' : 'dbh';
+ $self->$dbh_method->begin_work;
+
} elsif ($self->auto_savepoint) {
$self->svp_begin;
}
sub _prep_for_execute {
my ($self, $op, $extra_bind, $ident, $args) = @_;
- if( blessed($ident) && $ident->isa("DBIx::Class::ResultSource") ) {
+ if( Scalar::Util::blessed($ident) && $ident->isa("DBIx::Class::ResultSource") ) {
$ident = $ident->from();
}
return ($sql, \@bind);
}
+
sub _fix_bind_params {
my ($self, @bind) = @_;
if ( $self->debug ) {
@bind = $self->_fix_bind_params(@bind);
-
+
$self->debugobj->query_start( $sql, @bind );
}
}
my $sth = $self->sth($sql,$op);
- my $placeholder_index = 1;
+ my $placeholder_index = 1;
foreach my $bound (@$bind) {
my $attributes = {};
sub insert {
my ($self, $source, $to_insert) = @_;
-
- my $ident = $source->from;
+
+# redispatch to insert method of storage we reblessed into, if necessary
+ if (not $self->_driver_determined) {
+ $self->_determine_driver;
+ goto $self->can('insert');
+ }
+
+ my $ident = $source->from;
my $bind_attributes = $self->source_bind_attributes($source);
- $self->ensure_connected;
+ my $updated_cols = {};
+
foreach my $col ( $source->columns ) {
if ( !defined $to_insert->{$col} ) {
my $col_info = $source->column_info($col);
if ( $col_info->{auto_nextval} ) {
- $to_insert->{$col} = $self->_sequence_fetch( 'nextval', $col_info->{sequence} || $self->_dbh_get_autoinc_seq($self->dbh, $source) );
+ $updated_cols->{$col} = $to_insert->{$col} = $self->_sequence_fetch(
+ 'nextval',
+ $col_info->{sequence} ||
+ $self->_dbh_get_autoinc_seq($self->_get_dbh, $source)
+ );
}
}
}
$self->_execute('insert' => [], $source, $bind_attributes, $to_insert);
- return $to_insert;
+ return $updated_cols;
}
## Still not quite perfect, and EXPERIMENTAL
-## Currently it is assumed that all values passed will be "normal", i.e. not
+## Currently it is assumed that all values passed will be "normal", i.e. not
## scalar refs, or at least, all the same type as the first set, the statement is
## only prepped once.
sub insert_bulk {
my $table = $source->from;
@colvalues{@$cols} = (0..$#$cols);
my ($sql, @bind) = $self->sql_maker->insert($table, \%colvalues);
-
+
+ $self->_determine_driver;
+
$self->_query_start( $sql, @bind );
my $sth = $self->sth($sql);
# @bind = map { ref $_ ? ''.$_ : $_ } @bind; # stringify args
## This must be an arrayref, else nothing works!
-
my $tuple_status = [];
-
- ##use Data::Dumper;
- ##print STDERR Dumper( $data, $sql, [@bind] );
-
- my $time = time();
## Get the bind_attributes, if any exist
my $bind_attributes = $self->source_bind_attributes($source);
## Bind the values and execute
- my $placeholder_index = 1;
+ my $placeholder_index = 1;
foreach my $bound (@bind) {
$sth->bind_param_array( $placeholder_index, [@data], $attributes );
$placeholder_index++;
}
- my $rv = $sth->execute_array({ArrayTupleStatus => $tuple_status});
+ my $rv = eval { $sth->execute_array({ArrayTupleStatus => $tuple_status}) };
+ if (my $err = $@) {
+ my $i = 0;
+ ++$i while $i <= $#$tuple_status && !ref $tuple_status->[$i];
+
+ $self->throw_exception($sth->errstr || "Unexpected populate error: $err")
+ if ($i > $#$tuple_status);
+
+ require Data::Dumper;
+ local $Data::Dumper::Terse = 1;
+ local $Data::Dumper::Indent = 1;
+ local $Data::Dumper::Useqq = 1;
+ local $Data::Dumper::Quotekeys = 0;
+
+ $self->throw_exception(sprintf "%s for populate slice:\n%s",
+ $tuple_status->[$i][1],
+ Data::Dumper::Dumper(
+ { map { $cols->[$_] => $data->[$i][$_] } (0 .. $#$cols) }
+ ),
+ );
+ }
$self->throw_exception($sth->errstr) if !$rv;
$self->_query_end( $sql, @bind );
sub update {
my $self = shift @_;
my $source = shift @_;
+ $self->_determine_driver;
my $bind_attributes = $self->source_bind_attributes($source);
-
+
return $self->_execute('update' => [], $source, $bind_attributes, @_);
}
sub delete {
my $self = shift @_;
my $source = shift @_;
-
- my $bind_attrs = {}; ## If ever it's needed...
-
+ $self->_determine_driver;
+ my $bind_attrs = $self->source_bind_attributes($source);
+
return $self->_execute('delete' => [], $source, $bind_attrs, @_);
}
+# We were sent here because the $rs contains a complex search
+# which will require a subquery to select the correct rows
+# (i.e. joined or limited resultsets)
+#
+# Generating a single PK column subquery is trivial and supported
+# by all RDBMS. However if we have a multicolumn PK, things get ugly.
+# Look at _multipk_update_delete()
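+#
+# With a single-column PK the generated statement is roughly of this shape
+# (table and column names below are made up, purely for illustration):
+#
+#   DELETE FROM artist WHERE artistid IN ( SELECT me.artistid FROM ... )
+#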
+sub _subq_update_delete {
+ my $self = shift;
+ my ($rs, $op, $values) = @_;
+
+ my $rsrc = $rs->result_source;
+
+ # we already check this, but double check naively just in case. Should be removed soon
+ my $sel = $rs->_resolved_attrs->{select};
+ $sel = [ $sel ] unless ref $sel eq 'ARRAY';
+ my @pcols = $rsrc->primary_columns;
+ if (@$sel != @pcols) {
+ $self->throw_exception (
+ 'Subquery update/delete can not be called on resultsets selecting a'
+ .' number of columns different than the number of primary keys'
+ );
+ }
+
+ if (@pcols == 1) {
+ return $self->$op (
+ $rsrc,
+ $op eq 'update' ? $values : (),
+ { $pcols[0] => { -in => $rs->as_query } },
+ );
+ }
+
+ else {
+ return $self->_multipk_update_delete (@_);
+ }
+}
+
+# ANSI SQL does not provide a reliable way to perform a multicol-PK
+# resultset update/delete involving subqueries. So by default resort
+# to simple (and inefficient) delete_all style per-row operations,
+# while allowing specific storages to override this with a faster
+# implementation.
+#
+sub _multipk_update_delete {
+ return shift->_per_row_update_delete (@_);
+}
+
+# This is the default loop used to delete/update rows for multi PK
+# resultsets, and used by mysql exclusively (because it can't do anything
+# else).
+#
+# We do not use $row->$op style queries, because resultset update/delete
+# is not expected to cascade (this is what delete_all/update_all is for).
+#
+# There should be no race conditions as the entire operation is wrapped
+# in a transaction.
+#
+sub _per_row_update_delete {
+ my $self = shift;
+ my ($rs, $op, $values) = @_;
+
+ my $rsrc = $rs->result_source;
+ my @pcols = $rsrc->primary_columns;
+
+ my $guard = $self->txn_scope_guard;
+
+ # emulate the return value of $sth->execute for non-selects
+ my $row_cnt = '0E0';
+
+ my $subrs_cur = $rs->cursor;
+ while (my @pks = $subrs_cur->next) {
+
+ my $cond;
+ for my $i (0.. $#pcols) {
+ $cond->{$pcols[$i]} = $pks[$i];
+ }
+
+ $self->$op (
+ $rsrc,
+ $op eq 'update' ? $values : (),
+ $cond,
+ );
+
+ $row_cnt++;
+ }
+
+ $guard->commit;
+
+ return $row_cnt;
+}
+
sub _select {
my $self = shift;
+
+ # localization is necessary as
+ # 1) there is no infrastructure to pass this around before SQLA2
+ # 2) _select_args sets it and _prep_for_execute consumes it
my $sql_maker = $self->sql_maker;
- local $sql_maker->{for};
+ local $sql_maker->{_dbic_rs_attrs};
+
return $self->_execute($self->_select_args(@_));
}
+sub _select_args_to_query {
+ my $self = shift;
+
+ # localization is necessary as
+ # 1) there is no infrastructure to pass this around before SQLA2
+ # 2) _select_args sets it and _prep_for_execute consumes it
+ my $sql_maker = $self->sql_maker;
+ local $sql_maker->{_dbic_rs_attrs};
+
+ # my ($op, $bind, $ident, $bind_attrs, $select, $cond, $order, $rows, $offset)
+ # = $self->_select_args($ident, $select, $cond, $attrs);
+ my ($op, $bind, $ident, $bind_attrs, @args) =
+ $self->_select_args(@_);
+
+ # my ($sql, $prepared_bind) = $self->_prep_for_execute($op, $bind, $ident, [ $select, $cond, $order, $rows, $offset ]);
+ my ($sql, $prepared_bind) = $self->_prep_for_execute($op, $bind, $ident, \@args);
+ $prepared_bind ||= [];
+
+ return wantarray
+ ? ($sql, $prepared_bind, $bind_attrs)
+ : \[ "($sql)", @$prepared_bind ]
+ ;
+}
+
sub _select_args {
- my ($self, $ident, $select, $condition, $attrs) = @_;
- my $order = $attrs->{order_by};
-
- if (ref $condition eq 'SCALAR') {
- my $unwrap = ${$condition};
- if ($unwrap =~ s/ORDER BY (.*)$//i) {
- $order = $1;
- $condition = \$unwrap;
- }
- }
+ my ($self, $ident, $select, $where, $attrs) = @_;
+
+ my ($alias2source, $rs_alias) = $self->_resolve_ident_sources ($ident);
- my $for = delete $attrs->{for};
my $sql_maker = $self->sql_maker;
- $sql_maker->{for} = $for;
+ $sql_maker->{_dbic_rs_attrs} = {
+ %$attrs,
+ select => $select,
+ from => $ident,
+ where => $where,
+ $rs_alias
+ ? ( _source_handle => $alias2source->{$rs_alias}->handle )
+ : ()
+ ,
+ };
- if (exists $attrs->{group_by} || $attrs->{having}) {
- $order = {
- group_by => $attrs->{group_by},
- having => $attrs->{having},
- ($order ? (order_by => $order) : ())
- };
+ # calculate bind_attrs before possible $ident mangling
+ my $bind_attrs = {};
+ for my $alias (keys %$alias2source) {
+ my $bindtypes = $self->source_bind_attributes ($alias2source->{$alias}) || {};
+ for my $col (keys %$bindtypes) {
+
+ my $fqcn = join ('.', $alias, $col);
+ $bind_attrs->{$fqcn} = $bindtypes->{$col} if $bindtypes->{$col};
+
+ # Unqualified column names are nice, but at the same time can be
+ # rather ambiguous. What we do here is basically go along with
+ # the loop, adding an unqualified column slot to $bind_attrs,
+ # alongside the fully qualified name. As soon as we encounter
+ # another column by that name (which would imply another table)
+ # we unset the unqualified slot and never add any info to it
+ # to avoid erroneous type binding. If this happens the user's
+ # only choice will be to fully qualify the column name
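+ # e.g. if both 'me.title' and 'cd.title' are in play (hypothetical aliases),
+ # the bare 'title' slot ends up blanked, so no attributes are guessed for
+ # the ambiguous name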
+
+ if (exists $bind_attrs->{$col}) {
+ $bind_attrs->{$col} = {};
+ }
+ else {
+ $bind_attrs->{$col} = $bind_attrs->{$fqcn};
+ }
+ }
}
- my $bind_attrs = {}; ## Future support
- my @args = ('select', $attrs->{bind}, $ident, $bind_attrs, $select, $condition, $order);
- if ($attrs->{software_limit} ||
- $self->sql_maker->_default_limit_syntax eq "GenericSubQ") {
- $attrs->{software_limit} = 1;
- } else {
+
+ # adjust limits
+ if (
+ $attrs->{software_limit}
+ ||
+ $sql_maker->_default_limit_syntax eq "GenericSubQ"
+ ) {
+ $attrs->{software_limit} = 1;
+ }
+ else {
$self->throw_exception("rows attribute must be positive if present")
if (defined($attrs->{rows}) && !($attrs->{rows} > 0));
# MySQL actually recommends this approach. I cringe.
$attrs->{rows} = 2**48 if not defined $attrs->{rows} and defined $attrs->{offset};
- push @args, $attrs->{rows}, $attrs->{offset};
}
- return @args;
+
+ my @limit;
+
+ # see if we need to tear the prefetch apart (either limited has_many or grouped prefetch)
+ # otherwise delegate the limiting to the storage, unless software limit was requested
+ if (
+ ( $attrs->{rows} && keys %{$attrs->{collapse}} )
+ ||
+ ( $attrs->{group_by} && @{$attrs->{group_by}} &&
+ $attrs->{_prefetch_select} && @{$attrs->{_prefetch_select}} )
+ ) {
+ ($ident, $select, $where, $attrs)
+ = $self->_adjust_select_args_for_complex_prefetch ($ident, $select, $where, $attrs);
+ }
+ elsif (! $attrs->{software_limit} ) {
+ push @limit, $attrs->{rows}, $attrs->{offset};
+ }
+
+###
+ # This would be the point to deflate anything found in $where
+ # (and leave $attrs->{bind} intact). Problem is - inflators historically
+ # expect a row object. And all we have is a resultsource (it is trivial
+ # to extract deflator coderefs via $alias2source above).
+ #
+ # I don't see a way forward other than changing the way deflators are
+ # invoked, and that's just bad...
+###
+
+ my $order = { map
+ { $attrs->{$_} ? ( $_ => $attrs->{$_} ) : () }
+ (qw/order_by group_by having/ )
+ };
+
+ return ('select', $attrs->{bind}, $ident, $bind_attrs, $select, $where, $order, @limit);
+}
+
+#
+# This is the code producing joined subqueries like:
+# SELECT me.*, other.* FROM ( SELECT me.* FROM ... ) JOIN other ON ...
+#
+sub _adjust_select_args_for_complex_prefetch {
+ my ($self, $from, $select, $where, $attrs) = @_;
+
+ $self->throw_exception ('Complex prefetches are not supported on resultsets with a custom from attribute')
+ if (ref $from ne 'ARRAY');
+
+ # copies for mangling
+ $from = [ @$from ];
+ $select = [ @$select ];
+ $attrs = { %$attrs };
+
+ # separate attributes
+ my $sub_attrs = { %$attrs };
+ delete $attrs->{$_} for qw/where bind rows offset group_by having/;
+ delete $sub_attrs->{$_} for qw/for collapse _prefetch_select _collapse_order_by select as/;
+
+ my $select_root_alias = $attrs->{alias};
+ my $sql_maker = $self->sql_maker;
+
+ # create subquery select list - consider only stuff *not* brought in by the prefetch
+ my $sub_select = [];
+ my $sub_group_by;
+ for my $i (0 .. @{$attrs->{select}} - @{$attrs->{_prefetch_select}} - 1) {
+ my $sel = $attrs->{select}[$i];
+
+ # alias any functions to the dbic-side 'as' label
+ # adjust the outer select accordingly
+ if (ref $sel eq 'HASH' ) {
+ $sel->{-as} ||= $attrs->{as}[$i];
+ $select->[$i] = join ('.', $attrs->{alias}, ($sel->{-as} || "select_$i") );
+ }
+
+ push @$sub_select, $sel;
+ }
+
+ # bring over all non-collapse-induced order_by into the inner query (if any)
+ # the outer one will have to keep them all
+ delete $sub_attrs->{order_by};
+ if (my $ord_cnt = @{$attrs->{order_by}} - @{$attrs->{_collapse_order_by}} ) {
+ $sub_attrs->{order_by} = [
+ @{$attrs->{order_by}}[ 0 .. $ord_cnt - 1]
+ ];
+ }
+
+ # mangle {from}, keep in mind that $from is "headless" from here on
+ my $join_root = shift @$from;
+
+ my %inner_joins;
+ my %join_info = map { $_->[0]{-alias} => $_->[0] } (@$from);
+
+ # in complex search_related chains $select_root_alias may *not* be
+ # 'me' so always include it in the inner join
+ $inner_joins{$select_root_alias} = 1 if ($join_root->{-alias} ne $select_root_alias);
+
+
+ # decide which parts of the join will remain on the inside
+ #
+ # this is not a very viable optimisation, but it was written
+ # before I realised this, so might as well remain. We can throw
+ # away _any_ branches of the join tree that are:
+ # 1) not mentioned in the condition/order
+ # 2) left-join leaves (or left-join leaf chains)
+ # Most of the join conditions will not satisfy this, but for real
+ # complex queries some might, and we might make some RDBMS happy.
+ #
+ #
+ # since we do not have introspectable SQLA, we fall back to ugly
+ # scanning of raw SQL for WHERE, and for pieces of ORDER BY
+ # in order to determine what goes into %inner_joins
+ # It may not be very efficient, but it's a reasonable stop-gap
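+ # e.g. a WHERE clause mentioning 'tracks.title' (a hypothetical alias and
+ # column) marks the 'tracks' join, and further below its ancestors, as
+ # required inside the subquery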
+ {
+ # produce stuff unquoted, so it can be scanned
+ local $sql_maker->{quote_char};
+ my $sep = $self->_sql_maker_opts->{name_sep} || '.';
+ $sep = "\Q$sep\E";
+
+ my @order_by = (map
+ { ref $_ ? $_->[0] : $_ }
+ $sql_maker->_order_by_chunks ($sub_attrs->{order_by})
+ );
+
+ my $where_sql = $sql_maker->where ($where);
+ my $select_sql = $sql_maker->_recurse_fields ($sub_select);
+
+ # sort needed joins
+ for my $alias (keys %join_info) {
+
+ # any table alias found on a column name in where or order_by
+ # gets included in %inner_joins
+ # Also any parent joins that are needed to reach this particular alias
+ for my $piece ($select_sql, $where_sql, @order_by ) {
+ if ($piece =~ /\b $alias $sep/x) {
+ $inner_joins{$alias} = 1;
+ }
+ }
+ }
+ }
+
+ # scan for non-leaf/non-left joins and mark as needed
+ # also mark all ancestor joins that are needed to reach this particular alias
+ # (e.g. join => { cds => 'tracks' } - tracks will bring cds too )
+ #
+ # traverse by the size of the -join_path i.e. reverse depth first
+ for my $alias (sort { @{$join_info{$b}{-join_path}} <=> @{$join_info{$a}{-join_path}} } (keys %join_info) ) {
+
+ my $j = $join_info{$alias};
+ $inner_joins{$alias} = 1 if (! $j->{-join_type} || ($j->{-join_type} !~ /^left$/i) );
+
+ if ($inner_joins{$alias}) {
+ $inner_joins{$_} = 1 for (@{$j->{-join_path}});
+ }
+ }
+
+ # construct the inner $from for the subquery
+ my $inner_from = [ $join_root ];
+ for my $j (@$from) {
+ push @$inner_from, $j if $inner_joins{$j->[0]{-alias}};
+ }
+
+ # if a multi-type join was needed in the subquery ("multi" is indicated by
+ # presence in {collapse}) - add a group_by to simulate the collapse in the subq
+ unless ($sub_attrs->{group_by}) {
+ for my $alias (keys %inner_joins) {
+
+ # the dot comes from some weirdness in collapse
+ # remove after the rewrite
+ if ($attrs->{collapse}{".$alias"}) {
+ $sub_attrs->{group_by} ||= $sub_select;
+ last;
+ }
+ }
+ }
+
+ # generate the subquery
+ my $subq = $self->_select_args_to_query (
+ $inner_from,
+ $sub_select,
+ $where,
+ $sub_attrs
+ );
+ my $subq_joinspec = {
+ -alias => $select_root_alias,
+ -source_handle => $join_root->{-source_handle},
+ $select_root_alias => $subq,
+ };
+
+ # Generate a new from (really just replace the join slot with the subquery)
+ # Before we would start the outer chain from the subquery itself (i.e.
+ # SELECT ... FROM (SELECT ... ) alias JOIN ..., but this turned out to be
+ # a bad idea for search_related, as the root of the chain was effectively
+ # lost (i.e. $artist_rs->search_related ('cds'... ) would result in alias
+ # of 'cds', which would prevent from doing things like order_by artist.*)
+ # See t/prefetch/via_search_related.t for a better idea
+ my @outer_from;
+ if ($join_root->{-alias} eq $select_root_alias) { # just swap the root part and we're done
+ @outer_from = (
+ $subq_joinspec,
+ @$from,
+ )
+ }
+ else { # this is trickier
+ @outer_from = ($join_root);
+
+ for my $j (@$from) {
+ if ($j->[0]{-alias} eq $select_root_alias) {
+ push @outer_from, [
+ $subq_joinspec,
+ @{$j}[1 .. $#$j],
+ ];
+ }
+ else {
+ push @outer_from, $j;
+ }
+ }
+ }
+
+ # This is totally horrific - the $where ends up in both the inner and outer query
+ # Unfortunately not much can be done until SQLA2 introspection arrives, and even
+ # then if where conditions apply to the *right* side of the prefetch, you may have
+ # to both filter the inner select (e.g. to apply a limit) and then have to re-filter
+ # the outer select to exclude joins you didn't want in the first place
+ #
+ # OTOH it can be seen as a plus: <ash> (notes that this query would make a DBA cry ;)
+ return (\@outer_from, $select, $where, $attrs);
+}
+
+sub _resolve_ident_sources {
+ my ($self, $ident) = @_;
+
+ my $alias2source = {};
+ my $rs_alias;
+
+ # the reason this is so contrived is that $ident may be a {from}
+ # structure, specifying multiple tables to join
+ if ( Scalar::Util::blessed($ident) && $ident->isa("DBIx::Class::ResultSource") ) {
+ # this is compat mode for insert/update/delete which do not deal with aliases
+ $alias2source->{me} = $ident;
+ $rs_alias = 'me';
+ }
+ elsif (ref $ident eq 'ARRAY') {
+
+ for (@$ident) {
+ my $tabinfo;
+ if (ref $_ eq 'HASH') {
+ $tabinfo = $_;
+ $rs_alias = $tabinfo->{-alias};
+ }
+ if (ref $_ eq 'ARRAY' and ref $_->[0] eq 'HASH') {
+ $tabinfo = $_->[0];
+ }
+
+ $alias2source->{$tabinfo->{-alias}} = $tabinfo->{-source_handle}->resolve
+ if ($tabinfo->{-source_handle});
+ }
+ }
+
+ return ($alias2source, $rs_alias);
+}
+
+# Takes $ident, \@column_names
+#
+# returns { $column_name => \%column_info, ... }
+# also note: this adds -result_source => $rsrc to the column info
+#
+# usage:
+# my $col_sources = $self->_resolve_column_info($ident, \@column_names);
+sub _resolve_column_info {
+ my ($self, $ident, $colnames) = @_;
+ my ($alias2src, $root_alias) = $self->_resolve_ident_sources($ident);
+
+ my $sep = $self->_sql_maker_opts->{name_sep} || '.';
+ $sep = "\Q$sep\E";
+
+ my (%return, %seen_cols);
+
+ # compile a global list of column names, to be able to properly
+ # disambiguate unqualified column names (if at all possible)
+ for my $alias (keys %$alias2src) {
+ my $rsrc = $alias2src->{$alias};
+ for my $colname ($rsrc->columns) {
+ push @{$seen_cols{$colname}}, $alias;
+ }
+ }
+
+ COLUMN:
+ foreach my $col (@$colnames) {
+ my ($alias, $colname) = $col =~ m/^ (?: ([^$sep]+) $sep)? (.+) $/x;
+
+ unless ($alias) {
+ # see if the column was seen exactly once (so we know which rsrc it came from)
+ if ($seen_cols{$colname} and @{$seen_cols{$colname}} == 1) {
+ $alias = $seen_cols{$colname}[0];
+ }
+ else {
+ next COLUMN;
+ }
+ }
+
+ my $rsrc = $alias2src->{$alias};
+ $return{$col} = $rsrc && {
+ %{$rsrc->column_info($colname)},
+ -result_source => $rsrc,
+ -source_alias => $alias,
+ };
+ }
+
+ return \%return;
}
+# Returns a counting SELECT for a simple count
+# query. Abstracted so that a storage could override
+# this to { count => 'firstcol' } or whatever makes
+# sense as a performance optimization
+sub _count_select {
+ #my ($self, $source, $rs_attrs) = @_;
+ return { count => '*' };
+}
+
+# Returns a SELECT which will end up in the subselect
+# There may or may not be a group_by, as the subquery
+# might have been called to accommodate a limit
+#
+# Most databases would be happy with whatever ends up
+# here, but some choke in various ways.
+#
+sub _subq_count_select {
+ my ($self, $source, $rs_attrs) = @_;
+ return $rs_attrs->{group_by} if $rs_attrs->{group_by};
+
+ my @pcols = map { join '.', $rs_attrs->{alias}, $_ } ($source->primary_columns);
+ return @pcols ? \@pcols : [ 1 ];
+}
+
+
sub source_bind_attributes {
my ($self, $source) = @_;
-
+
my $bind_attributes;
foreach my $column ($source->columns) {
-
+
my $data_type = $source->column_info($column)->{data_type} || '';
$bind_attributes->{$column} = $self->bind_attribute_by_data_type($data_type)
if $data_type;
=cut
sub _dbh_last_insert_id {
- my ($self, $dbh, $source, $col) = @_;
- # XXX This is a SQLite-ism as a default... is there a DBI-generic way?
- $dbh->func('last_insert_rowid');
+ # All Storages need to register their own _dbh_last_insert_id
+ # the old SQLite-based method was highly inappropriate
+
+ my $self = shift;
+ my $class = ref $self;
+ $self->throw_exception (<<EOE);
+
+No _dbh_last_insert_id() method found in $class.
+Since the method of obtaining the autoincrement id of the last insert
+operation varies greatly between different databases, this method must be
+individually implemented for every storage class.
+EOE
}
sub last_insert_id {
=cut
-sub sqlt_type { shift->dbh->{Driver}->{Name} }
+sub sqlt_type { shift->_get_dbh->{Driver}->{Name} }
=head2 bind_attribute_by_data_type
return;
}
-=head2 create_ddl_dir
+=head2 is_datatype_numeric
+
+Given a datatype from column_info, returns a boolean value indicating if
+the current RDBMS considers it a numeric value. This controls how
+L<DBIx::Class::Row/set_column> decides whether to mark the column as
+dirty - when the datatype is deemed numeric a C<< != >> comparison will
+be performed instead of the usual C<eq>.
+
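+For instance (illustrative values only):
+
+  $storage->is_datatype_numeric('integer');  # true
+  $storage->is_datatype_numeric('varchar');  # false
+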
+=cut
+
+sub is_datatype_numeric {
+ my ($self, $dt) = @_;
+
+ return 0 unless $dt;
+
+ return $dt =~ /^ (?:
+ numeric | int(?:eger)? | (?:tiny|small|medium|big)int | dec(?:imal)? | real | float | double (?: \s+ precision)? | (?:big)?serial
+ ) $/ix;
+}
+
+
+=head2 create_ddl_dir (EXPERIMENTAL)
=over 4
=back
Creates a SQL file based on the Schema, for each of the specified
-database types, in the given directory.
+database engines in C<\@databases> in the given directory.
+(note: specify L<SQL::Translator> names, not L<DBI> driver names).
+
+Given a previous version number, this will also create a file containing
+the ALTER TABLE statements to transform the previous schema into the
+current one. Note that these statements may contain C<DROP TABLE> or
+C<DROP COLUMN> statements that can potentially destroy data.
+
+The file names are created using the C<ddl_filename> method below; please
+override this method in your schema if you would like a different file
+name format. For the ALTER file, the same format is used, replacing
+$version in the name with "$preversion-$version".
+
+See L<SQL::Translator/METHODS> for a list of values for C<\%sqlt_args>.
+The most common value for this would be C<< { add_drop_table => 1 } >>
+to have the SQL produced include a C<DROP TABLE> statement for each table
+created. For quoting purposes supply C<quote_table_names> and
+C<quote_field_names>.
+
+If no arguments are passed, then the following default values are assumed:
+
+=over 4
+
+=item databases - ['MySQL', 'SQLite', 'PostgreSQL']
+
+=item version - $schema->schema_version
+
+=item directory - './'
+
+=item preversion - <none>
+
+=back
By default, C<\%sqlt_args> will have
{ add_drop_table => 1, ignore_constraint_names => 1, ignore_index_names => 1 }
-merged with the hash passed in. To disable any of those features, pass in a
+merged with the hash passed in. To disable any of those features, pass in a
hashref like the following
{ ignore_constraint_names => 0, # ... other options }
+
+Note that this feature is currently EXPERIMENTAL and may not work correctly
+across all databases, or fully handle complex relationships.
+
+WARNING: Please check all SQL files created before applying them.
+
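+A hypothetical invocation via the schema-level convenience method might look
+like this (all values below are placeholders):
+
+  $schema->create_ddl_dir(
+    ['MySQL', 'PostgreSQL'],   # databases
+    '0.2',                     # version
+    './sql/',                  # directory
+    '0.1',                     # preversion
+    { add_drop_table => 0 },   # sqlt args
+  );
+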
=cut
sub create_ddl_dir {
my ($self, $schema, $databases, $version, $dir, $preversion, $sqltargs) = @_;
if(!$dir || !-d $dir) {
- warn "No directory given, using ./\n";
+ carp "No directory given, using ./\n";
$dir = "./";
}
$databases ||= ['MySQL', 'SQLite', 'PostgreSQL'];
$version ||= $schema_version;
$sqltargs = {
- add_drop_table => 1,
+ add_drop_table => 1,
ignore_constraint_names => 1,
ignore_index_names => 1,
%{$sqltargs || {}}
my $sqlt = SQL::Translator->new( $sqltargs );
$sqlt->parser('SQL::Translator::Parser::DBIx::Class');
- my $sqlt_schema = $sqlt->translate({ data => $schema }) or die $sqlt->error;
+ my $sqlt_schema = $sqlt->translate({ data => $schema })
+ or $self->throw_exception ($sqlt->error);
foreach my $db (@$databases) {
$sqlt->reset();
- $sqlt = $self->configure_sqlt($sqlt, $db);
$sqlt->{schema} = $sqlt_schema;
$sqlt->producer($db);
my $filename = $schema->ddl_filename($db, $version, $dir);
if (-e $filename && ($version eq $schema_version )) {
# if we are dumping the current version, overwrite the DDL
- warn "Overwriting existing DDL file - $filename";
+ carp "Overwriting existing DDL file - $filename";
unlink($filename);
}
my $output = $sqlt->translate;
if(!$output) {
- warn("Failed to translate to $db, skipping. (" . $sqlt->error . ")");
+ carp("Failed to translate to $db, skipping. (" . $sqlt->error . ")");
next;
}
if(!open($file, ">$filename")) {
}
print $file $output;
close($file);
-
+
next unless ($preversion);
require SQL::Translator::Diff;
my $prefilename = $schema->ddl_filename($db, $preversion, $dir);
if(!-e $prefilename) {
- warn("No previous schema file found ($prefilename)");
+ carp("No previous schema file found ($prefilename)");
next;
}
my $difffile = $schema->ddl_filename($db, $version, $dir, $preversion);
if(-e $difffile) {
- warn("Overwriting existing diff file - $difffile");
+ carp("Overwriting existing diff file - $difffile");
unlink($difffile);
}
-
+
my $source_schema;
{
my $t = SQL::Translator->new($sqltargs);
$t->debug( 0 );
$t->trace( 0 );
- $t->parser( $db ) or die $t->error;
- $t = $self->configure_sqlt($t, $db);
- my $out = $t->translate( $prefilename ) or die $t->error;
+
+ $t->parser( $db )
+ or $self->throw_exception ($t->error);
+
+ my $out = $t->translate( $prefilename )
+ or $self->throw_exception ($t->error);
+
$source_schema = $t->schema;
- unless ( $source_schema->name ) {
- $source_schema->name( $prefilename );
- }
+
+ $source_schema->name( $prefilename )
+ unless ( $source_schema->name );
}
- # The "new" style of producers have sane normalization and can support
+ # The "new" style of producers have sane normalization and can support
# diffing a SQL file against a DBIC->SQLT schema. Old style ones don't
# And we have to diff parsed SQL against parsed SQL.
my $dest_schema = $sqlt_schema;
-
+
unless ( "SQL::Translator::Producer::$db"->can('preprocess_schema') ) {
my $t = SQL::Translator->new($sqltargs);
$t->debug( 0 );
$t->trace( 0 );
- $t->parser( $db ) or die $t->error;
- $t = $self->configure_sqlt($t, $db);
- my $out = $t->translate( $filename ) or die $t->error;
+
+ $t->parser( $db )
+ or $self->throw_exception ($t->error);
+
+ my $out = $t->translate( $filename )
+ or $self->throw_exception ($t->error);
+
$dest_schema = $t->schema;
+
$dest_schema->name( $filename )
unless $dest_schema->name;
}
-
+
my $diff = SQL::Translator::Diff::schema_diff($source_schema, $db,
$dest_schema, $db,
$sqltargs
);
- if(!open $file, ">$difffile") {
+ if(!open $file, ">$difffile") {
$self->throw_exception("Can't write to $difffile ($!)");
next;
}
}
}
-sub configure_sqlt() {
- my $self = shift;
- my $tr = shift;
- my $db = shift || $self->sqlt_type;
- if ($db eq 'PostgreSQL') {
- $tr->quote_table_names(0);
- $tr->quote_field_names(0);
- }
- return $tr;
-}
-
=head2 deployment_statements
=over 4
=back
Returns the statements used by L</deploy> and L<DBIx::Class::Schema/deploy>.
-The database driver name is given by C<$type>, though the value from
-L</sqlt_type> is used if it is not specified.
+
+The L<SQL::Translator> (not L<DBI>) database driver name can be explicitly
+provided in C<$type>, otherwise the result of L</sqlt_type> is used as default.
C<$directory> is used to return statements from files in a previously created
L</create_ddl_dir> directory and is optional. The filenames are constructed
sub deployment_statements {
my ($self, $schema, $type, $version, $dir, $sqltargs) = @_;
- # Need to be connected to get the correct sqlt_type
- $self->ensure_connected() unless $type;
$type ||= $self->sqlt_type;
$version ||= $schema->schema_version || '1.x';
$dir ||= './';
if(-f $filename)
{
my $file;
- open($file, "<$filename")
+ open($file, "<$filename")
or $self->throw_exception("Can't open $filename ($!)");
my @rows = <$file>;
close($file);
. $self->_check_sqlt_message . q{'})
if !$self->_check_sqlt_version;
- require SQL::Translator::Parser::DBIx::Class;
- eval qq{use SQL::Translator::Producer::${type}};
- $self->throw_exception($@) if $@;
-
- # sources needs to be a parser arg, but for simplicty allow at top level
+ # sources needs to be a parser arg, but for simplicity allow at top level
# coming in
$sqltargs->{parser_args}{sources} = delete $sqltargs->{sources}
if exists $sqltargs->{sources};
- my $tr = SQL::Translator->new(%$sqltargs);
- SQL::Translator::Parser::DBIx::Class::parse( $tr, $schema );
- return "SQL::Translator::Producer::${type}"->can('produce')->($tr);
+ my $tr = SQL::Translator->new(
+ producer => "SQL::Translator::Producer::${type}",
+ %$sqltargs,
+ parser => 'SQL::Translator::Parser::DBIx::Class',
+ data => $schema,
+ );
+ return $tr->translate;
}
sub deploy {
return if $line =~ /^\s+$/; # skip whitespace only
$self->_query_start($line);
eval {
- $self->dbh->do($line); # shouldn't be using ->dbh ?
+ # do a dbh_do cycle here, as we need some error checking in
+ # place (even though we will ignore errors)
+ $self->dbh_do (sub { $_[1]->do($line) });
};
if ($@) {
- warn qq{$@ (running "${line}")};
+ carp qq{$@ (running "${line}")};
}
$self->_query_end($line);
};
- my @statements = $self->deployment_statements($schema, $type, undef, $dir, { no_comments => 1, %{ $sqltargs || {} } } );
+ my @statements = $self->deployment_statements($schema, $type, undef, $dir, { %{ $sqltargs || {} }, no_comments => 1 } );
if (@statements > 1) {
foreach my $statement (@statements) {
$deploy->( $statement );
sub datetime_parser {
my $self = shift;
return $self->{datetime_parser} ||= do {
- $self->ensure_connected;
+ $self->_populate_dbh unless $self->_dbh;
$self->build_datetime_parser(@_);
};
}
sub is_replicating {
return;
-
+
}
=head2 lag_behind_master
sub DESTROY {
my $self = shift;
- return if !$self->_dbh;
- $self->_verify_pid;
+ $self->_verify_pid if $self->_dbh;
+
+ # some databases need this to stop spewing warnings
+ if (my $dbh = $self->_dbh) {
+ eval { $dbh->disconnect };
+ }
+
$self->_dbh(undef);
}
DBIx::Class can do some wonderful magic with handling exceptions,
disconnections, and transactions when you use C<< AutoCommit => 1 >>
-combined with C<txn_do> for transaction support.
+(the default) combined with C<txn_do> for transaction support.
If you set C<< AutoCommit => 0 >> in your connect info, then you are always
in an assumed transaction between commits, and you're telling us you'd
be with raw DBI.
-=head1 SQL METHODS
-
-The module defines a set of methods within the DBIC::SQL::Abstract
-namespace. These build on L<SQL::Abstract::Limit> to provide the
-SQL query functions.
-
-The following methods are extended:-
-
-=over 4
-
-=item delete
-
-=item insert
-
-=item select
-
-=item update
-
-=item limit_dialect
-
-See L</connect_info> for details.
-
-=item quote_char
-
-See L</connect_info> for details.
-
-=item name_sep
-
-See L</connect_info> for details.
-
-=back
-
=head1 AUTHORS
Matt S. Trout <mst@shadowcatsystems.co.uk>
--- /dev/null
+package DBIx::Class::Storage::DBI::AmbiguousGlob;
+
+use strict;
+use warnings;
+
+use base 'DBIx::Class::Storage::DBI';
+use mro 'c3';
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::AmbiguousGlob - Storage component for RDBMS choking on duplicate column names in subquery counts
+
+=head1 DESCRIPTION
+
+Some servers choke on things like:
+
+ COUNT(*) FROM (SELECT tab1.col, tab2.col FROM tab1 JOIN tab2 ... )
+
+claiming that col is a duplicate column (it loses the table specifiers by
+the time it gets to the *). Thus for any subquery count we select only the
+primary keys of the main table in the inner query. This hopefully still
+hits the indexes and keeps the server happy.
+
+At this point the only overridden method is C<_subq_count_select()>
+
+=cut
+
+sub _subq_count_select {
+ my ($self, $source, $rs_attrs) = @_;
+ my @pcols = map { join '.', $rs_attrs->{alias}, $_ } ($source->primary_columns);
+ return @pcols ? \@pcols : [ 1 ];
+}
+
+=head1 AUTHORS
+
+See L<DBIx::Class/CONTRIBUTORS>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
+
+1;
package DBIx::Class::Storage::DBI::Cursor;
-use base qw/DBIx::Class::Cursor/;
-
use strict;
use warnings;
+use base qw/DBIx::Class::Cursor/;
+
=head1 NAME
DBIx::Class::Storage::DBI::Cursor - Object representing a query cursor on a
sub new {
my ($class, $storage, $args, $attrs) = @_;
- #use Data::Dumper; warn Dumper(@_);
$class = ref $class if ref $class;
+
my $new = {
storage => $storage,
args => $args,
return bless ($new, $class);
}
-=head2 as_query
-
-=over 4
-
-=item Arguments: none
-
-=item Return Value: \[ $sql, @bind ]
-
-=back
-
-Returns the SQL statement and bind vars associated with the invocant.
-
-=cut
-
-sub as_query {
- my $self = shift;
-
- my $storage = $self->{storage};
- my $sql_maker = $storage->sql_maker;
- local $sql_maker->{for};
-
- my @args = $storage->_select_args(@{$self->{args}});
- my ($sql, $bind) = $storage->_prep_for_execute(@args[0 .. 2], [@args[4 .. $#args]]);
- return \[ "($sql)", @$bind ];
-}
-
=head2 next
=over 4
my ($storage, $dbh, $self) = @_;
$self->_check_dbh_gen;
- if ($self->{attrs}{rows} && $self->{pos} >= $self->{attrs}{rows}) {
+ if (
+ $self->{attrs}{software_limit}
+ && $self->{attrs}{rows}
+ && $self->{pos} >= $self->{attrs}{rows}
+ ) {
$self->{sth}->finish if $self->{sth}->{Active};
delete $self->{sth};
$self->{done} = 1;
my ($self) = @_;
if ($self->{attrs}{software_limit}
&& ($self->{attrs}{offset} || $self->{attrs}{rows})) {
- return $self->SUPER::all;
+ return $self->next::method;
}
+
$self->{storage}->dbh_do($self->can('_dbh_all'), $self);
}
use warnings;
use base qw/DBIx::Class::Storage::DBI/;
-
-# __PACKAGE__->load_components(qw/PK::Auto/);
+use mro 'c3';
sub _dbh_last_insert_id {
my ($self, $dbh, $source, $col) = @_;
sub datetime_parser_type { "DateTime::Format::DB2"; }
+sub _sql_maker_opts {
+ my ( $self, $opts ) = @_;
+
+ if ( $opts ) {
+ $self->{_sql_maker_opts} = { %$opts };
+ }
+
+ return { limit_dialect => 'RowNumberOver', %{$self->{_sql_maker_opts}||{}} };
+}
+
1;
=head1 NAME
use strict;
use warnings;
-use base qw/DBIx::Class::Storage::DBI/;
+use base qw/DBIx::Class::Storage::DBI::AmbiguousGlob DBIx::Class::Storage::DBI/;
+use mro 'c3';
-sub _dbh_last_insert_id {
- my ($self, $dbh, $source, $col) = @_;
- my ($id) = $dbh->selectrow_array('SELECT SCOPE_IDENTITY()');
- return $id;
+use List::Util();
+
+__PACKAGE__->mk_group_accessors(simple => qw/
+ _identity _identity_method
+/);
+
+__PACKAGE__->sql_maker_class('DBIx::Class::SQLAHacks::MSSQL');
+
+sub insert_bulk {
+ my $self = shift;
+ my ($source, $cols, $data) = @_;
+
+ my $identity_insert = 0;
+
+ COLUMNS:
+ foreach my $col (@{$cols}) {
+ if ($source->column_info($col)->{is_auto_increment}) {
+ $identity_insert = 1;
+ last COLUMNS;
+ }
+ }
+
+ if ($identity_insert) {
+ my $table = $source->from;
+ $self->_get_dbh->do("SET IDENTITY_INSERT $table ON");
+ }
+
+ $self->next::method(@_);
+
+ if ($identity_insert) {
+ my $table = $source->from;
+ $self->_get_dbh->do("SET IDENTITY_INSERT $table OFF");
+ }
+}
+
+# support MSSQL GUID column types
+
+sub insert {
+ my $self = shift;
+ my ($source, $to_insert) = @_;
+
+ my $updated_cols = {};
+
+ my %guid_cols;
+ my @pk_cols = $source->primary_columns;
+ my %pk_cols;
+ @pk_cols{@pk_cols} = ();
+
+ my @pk_guids = grep {
+ $source->column_info($_)->{data_type}
+ &&
+ $source->column_info($_)->{data_type} =~ /^uniqueidentifier/i
+ } @pk_cols;
+
+ my @auto_guids = grep {
+ $source->column_info($_)->{data_type}
+ &&
+ $source->column_info($_)->{data_type} =~ /^uniqueidentifier/i
+ &&
+ $source->column_info($_)->{auto_nextval}
+ } grep { not exists $pk_cols{$_} } $source->columns;
+
+ my @get_guids_for =
+ grep { not exists $to_insert->{$_} } (@pk_guids, @auto_guids);
+
+ for my $guid_col (@get_guids_for) {
+ my ($new_guid) = $self->_get_dbh->selectrow_array('SELECT NEWID()');
+ $updated_cols->{$guid_col} = $to_insert->{$guid_col} = $new_guid;
+ }
+
+ $updated_cols = { %$updated_cols, %{ $self->next::method(@_) } };
+
+ return $updated_cols;
+}
+
+sub _prep_for_execute {
+ my $self = shift;
+ my ($op, $extra_bind, $ident, $args) = @_;
+
+# cast MONEY values properly
+ if ($op eq 'insert' || $op eq 'update') {
+ my $fields = $args->[0];
+
+ for my $col (keys %$fields) {
+ # $ident is a result source object with INSERT/UPDATE ops
+ if ($ident->column_info ($col)->{data_type}
+ &&
+ $ident->column_info ($col)->{data_type} =~ /^money\z/i) {
+ my $val = $fields->{$col};
+ $fields->{$col} = \['CAST(? AS MONEY)', [ $col => $val ]];
+ }
+ }
+ }
+
+ my ($sql, $bind) = $self->next::method (@_);
+
+ if ($op eq 'insert') {
+ $sql .= ';SELECT SCOPE_IDENTITY()';
+
+ my $col_info = $self->_resolve_column_info($ident, [map $_->[0], @{$bind}]);
+ if (List::Util::first { $_->{is_auto_increment} } (values %$col_info) ) {
+
+ my $table = $ident->from;
+ my $identity_insert_on = "SET IDENTITY_INSERT $table ON";
+ my $identity_insert_off = "SET IDENTITY_INSERT $table OFF";
+ $sql = "$identity_insert_on; $sql; $identity_insert_off";
+ }
+ }
+
+ return ($sql, $bind);
+}
+
+sub _execute {
+ my $self = shift;
+ my ($op) = @_;
+
+ my ($rv, $sth, @bind) = $self->dbh_do($self->can('_dbh_execute'), @_);
+
+ if ($op eq 'insert') {
+
+ # this should bring back the result of SELECT SCOPE_IDENTITY() we tacked
+ # on in _prep_for_execute above
+ my ($identity) = $sth->fetchrow_array;
+
+ # SCOPE_IDENTITY failed, but we can do something else
+ if ( (! $identity) && $self->_identity_method) {
+ ($identity) = $self->_dbh->selectrow_array(
+ 'select ' . $self->_identity_method
+ );
+ }
+
+ $self->_identity($identity);
+ $sth->finish;
+ }
+
+ return wantarray ? ($rv, $sth, @bind) : $rv;
+}
+
+sub last_insert_id { shift->_identity }
+
+# savepoint syntax is the same as in Sybase ASE
+
+sub _svp_begin {
+ my ($self, $name) = @_;
+
+ $self->_get_dbh->do("SAVE TRANSACTION $name");
+}
+
+# A new SAVE TRANSACTION with the same name releases the previous one.
+sub _svp_release { 1 }
+
+sub _svp_rollback {
+ my ($self, $name) = @_;
+
+ $self->_get_dbh->do("ROLLBACK TRANSACTION $name");
}
sub build_datetime_parser {
my $type = "DateTime::Format::Strptime";
eval "use ${type}";
$self->throw_exception("Couldn't load ${type}: $@") if $@;
- return $type->new( pattern => '%m/%d/%Y %H:%M:%S' );
+ return $type->new( pattern => '%Y-%m-%d %H:%M:%S' ); # %F %T
+}
+
+sub sqlt_type { 'SQLServer' }
+
+sub _sql_maker_opts {
+ my ( $self, $opts ) = @_;
+
+ if ( $opts ) {
+ $self->{_sql_maker_opts} = { %$opts };
+ }
+
+ return { limit_dialect => 'Top', %{$self->{_sql_maker_opts}||{}} };
}
1;
=head1 NAME
-DBIx::Class::Storage::DBI::MSSQL - Storage::DBI subclass for MSSQL
+DBIx::Class::Storage::DBI::MSSQL - Base Class for Microsoft SQL Server support
+in DBIx::Class
=head1 SYNOPSIS
-This subclass supports MSSQL, and can in theory be used directly
-via the C<storage_type> mechanism:
+This is the base class for Microsoft SQL Server support, used by
+L<DBIx::Class::Storage::DBI::ODBC::Microsoft_SQL_Server> and
+L<DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server>.
+
+=head1 IMPLEMENTATION NOTES
+
+Microsoft SQL Server supports three methods of retrieving the IDENTITY
+value for an inserted row: IDENT_CURRENT, @@IDENTITY, and SCOPE_IDENTITY().
+SCOPE_IDENTITY() is used here because it is the safest. However, it must
+be called in the same execute statement, not just the same connection.
+
+So, this implementation appends a SELECT SCOPE_IDENTITY() statement
+onto each INSERT to accommodate that requirement.
+
+C<SELECT @@IDENTITY> can also be used by issuing:
+
+ $self->_identity_method('@@identity');
- $schema->storage_type('::DBI::MSSQL');
- $schema->connect_info('dbi:....', ...);
+It will only be used if SCOPE_IDENTITY() fails.
-However, as there is no L<DBD::MSSQL>, you will probably want to use
-one of the other DBD-specific MSSQL classes, such as
-L<DBIx::Class::Storage::DBI::Sybase::MSSQL>. These classes will
-merge this class with a DBD-specific class to obtain fully
-correct behavior for your scenario.
+This is more dangerous, as inserting into a table with an on insert trigger that
+inserts into another table with an identity will give erroneous results on
+recent versions of SQL Server.
-=head1 AUTHORS
+=head1 AUTHOR
-Brian Cassidy <bricas@cpan.org>
+See L<DBIx::Class/CONTRIBUTORS>.
=head1 LICENSE
--- /dev/null
+package DBIx::Class::Storage::DBI::MultiColumnIn;
+
+use strict;
+use warnings;
+
+use base 'DBIx::Class::Storage::DBI';
+use mro 'c3';
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::MultiColumnIn - Storage component for RDBMS supporting multicolumn in clauses
+
+=head1 DESCRIPTION
+
+While ANSI SQL does not define a multicolumn in operator, many databases can
+in fact understand WHERE (cola, colb) IN ( SELECT subcol_a, subcol_b ... )
+The storage class for any such RDBMS should inherit from this class, in order
+to dramatically speed up update/delete operations on joined multipk resultsets.
+
+At this point the only overridden method is C<_multipk_update_delete()>
+
+=cut
+
+sub _multipk_update_delete {
+ my $self = shift;
+ my ($rs, $op, $values) = @_;
+
+ my $rsrc = $rs->result_source;
+ my @pcols = $rsrc->primary_columns;
+ my $attrs = $rs->_resolved_attrs;
+
+ # naive check - this is an internal method after all, we should know what we are doing
+ $self->throw_exception ('Number of columns selected by supplied resultset does not match number of primary keys')
+ if ( ref $attrs->{select} ne 'ARRAY' or @{$attrs->{select}} != @pcols );
+
+ # This is hideously ugly, but SQLA does not understand multicol IN expressions
+ my $sqla = $self->_sql_maker;
+ my ($sql, @bind) = @${$rs->as_query};
+ $sql = sprintf ('(%s) IN %s', # the as_query stuff is already enclosed in ()s
+ join (', ', map { $sqla->_quote ($_) } @pcols),
+ $sql,
+ );
+
+ return $self->$op (
+ $rsrc,
+ $op eq 'update' ? $values : (),
+ \[$sql, @bind],
+ );
+
+}
+
+=head1 AUTHORS
+
+See L<DBIx::Class/CONTRIBUTORS>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
+
+1;
+++ /dev/null
-package DBIx::Class::Storage::DBI::MultiDistinctEmulation;
-
-use strict;
-use warnings;
-
-use base qw/DBIx::Class::Storage::DBI/;
-
-sub _select {
- my ($self, $ident, $select, $condition, $attrs) = @_;
-
- # hack to make count distincts with multiple columns work in SQLite and Oracle
- if (ref $select eq 'ARRAY') {
- @{$select} = map {$self->replace_distincts($_)} @{$select};
- } else {
- $select = $self->replace_distincts($select);
- }
-
- return $self->next::method($ident, $select, $condition, $attrs);
-}
-
-sub replace_distincts {
- my ($self, $select) = @_;
-
- $select->{count}->{distinct} = join("||", @{$select->{count}->{distinct}})
- if (ref $select eq 'HASH' && $select->{count} && ref $select->{count} eq 'HASH' &&
- $select->{count}->{distinct} && ref $select->{count}->{distinct} eq 'ARRAY');
-
- return $select;
-}
-
-1;
-
-=head1 NAME
-
-DBIx::Class::Storage::DBI::MultiDistinctEmulation - Some databases can't handle count distincts with multiple cols. They should use base on this.
-
-=head1 SYNOPSIS
-
-=head1 DESCRIPTION
-
-This class allows count distincts with multiple columns for retarded databases (Oracle and SQLite)
-
-=head1 AUTHORS
-
-Luke Saunders <luke.saunders@gmail.com>
-
-=head1 LICENSE
-
-You may distribute this code under the same terms as Perl itself.
-
-=cut
use warnings;
use base 'DBIx::Class::Storage::DBI';
+use mro 'c3';
=head1 NAME
}
$new_sql .= join '', @sql_part;
- return ($new_sql);
+ return ($new_sql, []);
}
=head1 AUTHORS
use warnings;
use base qw/DBIx::Class::Storage::DBI/;
+use mro 'c3';
sub _rebless {
my ($self) = @_;
- my $dbtype = eval { $self->_dbh->get_info(17) };
+ my $dbtype = eval { $self->_get_dbh->get_info(17) };
+
unless ( $@ ) {
# Translate the backend name into a perl identifier
$dbtype =~ s/\W/_/gi;
- my $class = "DBIx::Class::Storage::DBI::ODBC::${dbtype}";
- eval "require $class";
- bless $self, $class unless $@;
+ my $subclass = "DBIx::Class::Storage::DBI::ODBC::${dbtype}";
+ if ($self->load_optional_class($subclass) && !$self->isa($subclass)) {
+ bless $self, $subclass;
+ $self->_rebless;
+ }
}
}
-package DBIx::Class::Storage::DBI::ODBC::ACCESS;\r
-use strict;\r
-use warnings;\r
-\r
-use Data::Dump qw( dump );\r
-\r
-use DBI;\r
-use base qw/DBIx::Class::Storage::DBI/;\r
-\r
-my $ERR_MSG_START = __PACKAGE__ . ' failed: ';\r
-\r
-sub insert {\r
- my $self = shift;\r
- my ( $source, $to_insert ) = @_;\r
-\r
- my $bind_attributes = $self->source_bind_attributes( $source );\r
- my ( undef, $sth ) = $self->_execute( 'insert' => [], $source, $bind_attributes, $to_insert );\r
-\r
- #store the identity here since @@IDENTITY is connection global and this prevents\r
- #possibility that another insert to a different table overwrites it for this resultsource\r
- my $identity = 'SELECT @@IDENTITY';\r
- my $max_sth = $self->{ _dbh }->prepare( $identity )\r
- or $self->throw_exception( $ERR_MSG_START . $self->{ _dbh }->errstr() );\r
- $max_sth->execute() or $self->throw_exception( $ERR_MSG_START . $max_sth->errstr );\r
-\r
- my $row = $max_sth->fetchrow_arrayref()\r
- or $self->throw_exception( $ERR_MSG_START . "$identity did not return any result." );\r
-\r
- $self->{ last_pk }->{ $source->name() } = $row;\r
-\r
- return $to_insert;\r
-}\r
-\r
-sub last_insert_id {\r
- my $self = shift;\r
- my ( $result_source ) = @_;\r
-\r
- return @{ $self->{ last_pk }->{ $result_source->name() } };\r
-}\r
-\r
-sub bind_attribute_by_data_type {\r
- my $self = shift;\r
- \r
- my ( $data_type ) = @_;\r
- \r
- return { TYPE => $data_type } if $data_type == DBI::SQL_LONGVARCHAR;\r
- \r
- return;\r
-}\r
-\r
-sub sqlt_type { 'ACCESS' }\r
-\r
-1;\r
-\r
-=head1 NAME\r
-\r
-DBIx::Class::Storage::DBI::ODBC::ACCESS - Support specific to MS Access over ODBC\r
-\r
-=head1 WARNING\r
-\r
-I am not a DBI, DBIx::Class or MS Access guru. Use this module with that in\r
-mind.\r
-\r
-This module is currently considered alpha software and can change without notice.\r
-\r
-=head1 DESCRIPTION\r
-\r
-This class implements support specific to Microsoft Access over ODBC.\r
-\r
-It is loaded automatically by by DBIx::Class::Storage::DBI::ODBC when it\r
-detects a MS Access back-end.\r
-\r
-=head1 SUPPORTED VERSIONS\r
-\r
-This module have currently only been tested on MS Access 2003 using the Jet 4.0 engine.\r
-\r
-As far as my knowledge it should work on MS Access 2000 or later, but that have not been tested.\r
-Information about support for different version of MS Access is welcome.\r
-\r
-=head1 IMPLEMENTATION NOTES\r
-\r
-MS Access supports the @@IDENTITY function for retriving the id of the latest inserted row.\r
-@@IDENTITY is global to the connection, so to support the possibility of getting the last inserted\r
-id for different tables, the insert() function stores the inserted id on a per table basis.\r
-last_insert_id() then just returns the stored value.\r
-\r
-=head1 KNOWN ACCESS PROBLEMS\r
-\r
-=over\r
-\r
-=item Invalid precision value\r
-\r
-This error message is received when trying to store more than 255 characters in a MEMO field.\r
-The problem is (to my knowledge) an error in the MS Access ODBC driver. The problem is fixed\r
-by setting the C<data_type> of the column to C<SQL_LONGVARCHAR> in C<add_columns>. \r
-C<SQL_LONGVARCHAR> is a constant in the C<DBI> module.\r
-\r
-=back\r
-\r
-=head1 IMPLEMENTED FUNCTIONS\r
-\r
-=head2 bind_attribute_by_data_type\r
-\r
-This function currently supports the SQL_LONGVARCHAR column type.\r
-\r
-=head2 insert\r
-\r
-=head2 last_insert_id\r
-\r
-=head2 sqlt_type\r
-\r
-=head1 BUGS\r
-\r
-Most likely. Bug reports are welcome.\r
-\r
-=head1 AUTHORS\r
-\r
-Øystein Torget C<< <oystein.torget@dnv.com> >>\r
-\r
-=head1 COPYRIGHT\r
-\r
-You may distribute this code under the same terms as Perl itself.\r
-\r
-Det Norske Veritas AS (DNV)\r
-\r
-http://www.dnv.com\r
-\r
-=cut\r
-\r
+package DBIx::Class::Storage::DBI::ODBC::ACCESS;
+use strict;
+use warnings;
+
+use base qw/DBIx::Class::Storage::DBI/;
+use mro 'c3';
+
+use DBI;
+
+my $ERR_MSG_START = __PACKAGE__ . ' failed: ';
+
+sub insert {
+ my $self = shift;
+ my ( $source, $to_insert ) = @_;
+
+ my $bind_attributes = $self->source_bind_attributes( $source );
+ my ( undef, $sth ) = $self->_execute( 'insert' => [], $source, $bind_attributes, $to_insert );
+
+ #store the identity here since @@IDENTITY is connection global and this prevents
+ #possibility that another insert to a different table overwrites it for this resultsource
+ my $identity = 'SELECT @@IDENTITY';
+ my $max_sth = $self->{ _dbh }->prepare( $identity )
+ or $self->throw_exception( $ERR_MSG_START . $self->{ _dbh }->errstr() );
+ $max_sth->execute() or $self->throw_exception( $ERR_MSG_START . $max_sth->errstr );
+
+ my $row = $max_sth->fetchrow_arrayref()
+ or $self->throw_exception( $ERR_MSG_START . "$identity did not return any result." );
+
+ $self->{ last_pk }->{ $source->name() } = $row;
+
+ return $to_insert;
+}
+
+sub last_insert_id {
+ my $self = shift;
+ my ( $result_source ) = @_;
+
+ return @{ $self->{ last_pk }->{ $result_source->name() } };
+}
+
+sub bind_attribute_by_data_type {
+ my $self = shift;
+
+ my ( $data_type ) = @_;
+
+ return { TYPE => $data_type } if $data_type == DBI::SQL_LONGVARCHAR;
+
+ return;
+}
+
+sub sqlt_type { 'ACCESS' }
+
+1;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::ODBC::ACCESS - Support specific to MS Access over ODBC
+
+=head1 WARNING
+
+I am not a DBI, DBIx::Class or MS Access guru. Use this module with that in
+mind.
+
+This module is currently considered alpha software and can change without notice.
+
+=head1 DESCRIPTION
+
+This class implements support specific to Microsoft Access over ODBC.
+
+It is loaded automatically by by DBIx::Class::Storage::DBI::ODBC when it
+detects a MS Access back-end.
+
+=head1 SUPPORTED VERSIONS
+
+This module has currently only been tested on MS Access 2003 using the Jet 4.0 engine.
+
+As far as I know it should work on MS Access 2000 or later, but that has not been tested.
+Information about support for different versions of MS Access is welcome.
+
+=head1 IMPLEMENTATION NOTES
+
+MS Access supports the @@IDENTITY function for retrieving the id of the latest inserted row.
+@@IDENTITY is global to the connection, so to support the possibility of getting the last inserted
+id for different tables, the insert() function stores the inserted id on a per table basis.
+last_insert_id() then just returns the stored value.
+
+=head1 KNOWN ACCESS PROBLEMS
+
+=over
+
+=item Invalid precision value
+
+This error message is received when trying to store more than 255 characters in a MEMO field.
+The problem is (to my knowledge) an error in the MS Access ODBC driver. It can be worked around
+by setting the C<data_type> of the column to C<SQL_LONGVARCHAR> in C<add_columns>.
+C<SQL_LONGVARCHAR> is a constant in the C<DBI> module.
+
+=back
+
+=head1 IMPLEMENTED FUNCTIONS
+
+=head2 bind_attribute_by_data_type
+
+This function currently supports the SQL_LONGVARCHAR column type.
+
+=head2 insert
+
+=head2 last_insert_id
+
+=head2 sqlt_type
+
+=head1 BUGS
+
+Most likely. Bug reports are welcome.
+
+=head1 AUTHORS
+
+Øystein Torget C<< <oystein.torget@dnv.com> >>
+
+=head1 COPYRIGHT
+
+You may distribute this code under the same terms as Perl itself.
+
+Det Norske Veritas AS (DNV)
+
+http://www.dnv.com
+
+=cut
+
use warnings;
use base qw/DBIx::Class::Storage::DBI::ODBC/;
+use mro 'c3';
sub _dbh_last_insert_id {
my ($self, $dbh, $source, $col) = @_;
sub _sql_maker_opts {
my ($self) = @_;
-
+
$self->dbh_do(sub {
my ($self, $dbh) = @_;
use strict;
use warnings;
-use base qw/DBIx::Class::Storage::DBI/;
+use base qw/DBIx::Class::Storage::DBI::MSSQL/;
+use mro 'c3';
-sub _prep_for_execute {
- my $self = shift;
- my ($op, $extra_bind, $ident, $args) = @_;
+use Carp::Clan qw/^DBIx::Class/;
+use List::Util();
+use Scalar::Util ();
- my ($sql, $bind) = $self->SUPER::_prep_for_execute(@_);
- $sql .= ';SELECT SCOPE_IDENTITY()' if $op eq 'insert';
+__PACKAGE__->mk_group_accessors(simple => qw/
+ _using_dynamic_cursors
+/);
- return ($sql, $bind);
-}
+=head1 NAME
-sub _execute {
- my $self = shift;
- my ($op) = @_;
+DBIx::Class::Storage::DBI::ODBC::Microsoft_SQL_Server - Support specific
+to Microsoft SQL Server over ODBC
- my ($rv, $sth, @bind) = $self->dbh_do($self->can('_dbh_execute'), @_);
- $self->{_scope_identity} = $sth->fetchrow_array if $op eq 'insert';
+=head1 DESCRIPTION
- return wantarray ? ($rv, $sth, @bind) : $rv;
-}
+This class implements support specific to Microsoft SQL Server over ODBC. It is
+loaded automatically by by DBIx::Class::Storage::DBI::ODBC when it detects a
+MSSQL back-end.
+
+Most of the functionality is provided from the superclass
+L<DBIx::Class::Storage::DBI::MSSQL>.
+
+=head1 MULTIPLE ACTIVE STATEMENTS
+
+The following options are alternative ways to enable concurrent executing
+statement support. Each has its own advantages and drawbacks.
+
+=head2 connect_call_use_dynamic_cursors
+
+Use as:
-sub last_insert_id { shift->{_scope_identity} }
+ on_connect_call => 'use_dynamic_cursors'
-sub sqlt_type { 'SQLServer' }
+in your L<DBIx::Class::Storage::DBI/connect_info> as one way to enable multiple
+concurrent statements.
-sub _sql_maker_opts {
- my ( $self, $opts ) = @_;
+Will add C<< odbc_cursortype => 2 >> to your DBI connection attributes. See
+L<DBD::ODBC/odbc_cursortype> for more information.
- if ( $opts ) {
- $self->{_sql_maker_opts} = { %$opts };
- }
+Alternatively, you can add it yourself and dynamic cursor support will be
+automatically enabled.
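+
+For example (DSN and credentials below are placeholders):
+
+  $schema->connect(
+    'dbi:ODBC:YourDSN', 'username', 'password',
+    { odbc_cursortype => 2 },
+  );
+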
- return { limit_dialect => 'Top', %{$self->{_sql_maker_opts}||{}} };
+If you're using FreeTDS, C<tds_version> must be set to at least C<8.0>.
+
+This will not work with CODE ref connect_info's.
+
+B<WARNING:> this will break C<SCOPE_IDENTITY()>, and C<SELECT @@IDENTITY> will
+be used instead, which on SQL Server 2005 and later will return erroneous
+results on tables which have an on insert trigger that inserts into another
+table with an C<IDENTITY> column.
+
+=cut
+
+sub connect_call_use_dynamic_cursors {
+ my $self = shift;
+
+ if (ref($self->_dbi_connect_info->[0]) eq 'CODE') {
+ croak 'cannot set DBI attributes on a CODE ref connect_info';
+ }
+
+ my $dbi_attrs = $self->_dbi_connect_info->[-1];
+
+ unless (ref($dbi_attrs) && Scalar::Util::reftype($dbi_attrs) eq 'HASH') {
+ $dbi_attrs = {};
+ push @{ $self->_dbi_connect_info }, $dbi_attrs;
+ }
+
+ if (not exists $dbi_attrs->{odbc_cursortype}) {
+ # turn on support for multiple concurrent statements, unless overridden
+ $dbi_attrs->{odbc_cursortype} = 2;
+ $self->disconnect; # resetting dbi attrs, so have to reconnect
+ $self->ensure_connected;
+ $self->_set_dynamic_cursors;
+ }
}
-sub build_datetime_parser {
+sub _set_dynamic_cursors {
my $self = shift;
- my $type = "DateTime::Format::Strptime";
- eval "use ${type}";
- $self->throw_exception("Couldn't load ${type}: $@") if $@;
- return $type->new( pattern => '%F %T' );
+ my $dbh = $self->_get_dbh;
+
+ eval {
+ local $dbh->{RaiseError} = 1;
+ local $dbh->{PrintError} = 0;
+ $dbh->do('SELECT @@IDENTITY');
+ };
+ if ($@) {
+ croak <<'EOF';
+
+Your drivers do not seem to support dynamic cursors (odbc_cursortype => 2).
+If you're using FreeTDS, make sure to set tds_version to 8.0 or greater.
+EOF
+ }
+
+ $self->_using_dynamic_cursors(1);
+ $self->_identity_method('@@identity');
}
-1;
+sub _rebless {
+ no warnings 'uninitialized';
+ my $self = shift;
-__END__
+ if (ref($self->_dbi_connect_info->[0]) ne 'CODE' &&
+ eval { $self->_dbi_connect_info->[-1]{odbc_cursortype} } == 2) {
+ $self->_set_dynamic_cursors;
+ return;
+ }
-=head1 NAME
+ $self->_using_dynamic_cursors(0);
+}
-DBIx::Class::Storage::DBI::ODBC::Microsoft_SQL_Server - Support specific
-to Microsoft SQL Server over ODBC
+=head2 connect_call_use_server_cursors
-=head1 DESCRIPTION
+Use as:
-This class implements support specific to Microsoft SQL Server over ODBC,
-including auto-increment primary keys and SQL::Abstract::Limit dialect. It
-is loaded automatically by by DBIx::Class::Storage::DBI::ODBC when it
-detects a MSSQL back-end.
+ on_connect_call => 'use_server_cursors'
-=head1 IMPLEMENTATION NOTES
+May allow multiple active select statements. See
+L<DBD::ODBC/odbc_SQL_ROWSET_SIZE> for more information.
-Microsoft SQL Server supports three methods of retrieving the IDENTITY
-value for inserted row: IDENT_CURRENT, @@IDENTITY, and SCOPE_IDENTITY().
-SCOPE_IDENTITY is used here because it is the safest. However, it must
-be called is the same execute statement, not just the same connection.
+Takes an optional parameter for the value to set the attribute to, default is
+C<2>.
-So, this implementation appends a SELECT SCOPE_IDENTITY() statement
-onto each INSERT to accommodate that requirement.
+B<WARNING>: this does not work on all versions of SQL Server, and may lock up
+your database!
+
+=cut
+
+sub connect_call_use_server_cursors {
+ my $self = shift;
+ my $sql_rowset_size = shift || 2;
+
+ $self->_get_dbh->{odbc_SQL_ROWSET_SIZE} = $sql_rowset_size;
+}
-=head1 METHODS
+=head2 connect_call_use_MARS
-=head2 last_insert_id
+Use as:
-=head2 sqlt_type
+ on_connect_call => 'use_MARS'
-=head2 build_datetime_parser
+Use this to enable a feature of SQL Server 2005 and later, "Multiple Active Result
+Sets". See L<DBD::ODBC::FAQ/Does DBD::ODBC support Multiple Active Statements?>
+for more information.
-The resulting parser handles the MSSQL C<DATETIME> type, but is almost
-certainly not sufficient for the other MSSQL 2008 date/time types.
+B<WARNING>: This has implications for the way transactions are handled.
+
+=cut
+
+sub connect_call_use_MARS {
+ my $self = shift;
+
+ my $dsn = $self->_dbi_connect_info->[0];
+
+ if (ref($dsn) eq 'CODE') {
+ croak 'cannot change the DBI DSN on a CODE ref connect_info';
+ }
+
+ if ($dsn !~ /MARS_Connection=/) {
+ $self->_dbi_connect_info->[0] = "$dsn;MARS_Connection=Yes";
+ my $was_connected = defined $self->_dbh;
+ $self->disconnect;
+ $self->ensure_connected if $was_connected;
+ }
+}
+
+1;
-=head1 AUTHORS
+=head1 AUTHOR
-Marc Mims C<< <marc@questright.com> >>
+See L<DBIx::Class/CONTRIBUTORS>.
=head1 LICENSE
You may distribute this code under the same terms as Perl itself.
=cut
+# vim: sw=2 sts=2
use warnings;
use base qw/DBIx::Class::Storage::DBI/;
+use mro 'c3';
sub _rebless {
my ($self) = @_;
- my $version = eval { $self->_dbh->get_info(18); };
+ my $version = eval { $self->_get_dbh->get_info(18); };
if ( !$@ ) {
my ($major, $minor, $patchlevel) = split(/\./, $version);
package DBIx::Class::Storage::DBI::Oracle::Generic;
-# -*- mode: cperl; cperl-indent-level: 2 -*-
use strict;
use warnings;
=head1 NAME
-DBIx::Class::Storage::DBI::Oracle::Generic - Automatic primary key class for Oracle
+DBIx::Class::Storage::DBI::Oracle::Generic - Oracle Support for DBIx::Class
=head1 SYNOPSIS
=cut
-use Carp::Clan qw/^DBIx::Class/;
-
-use base qw/DBIx::Class::Storage::DBI::MultiDistinctEmulation/;
-
-# __PACKAGE__->load_components(qw/PK::Auto/);
+use base qw/DBIx::Class::Storage::DBI/;
+use mro 'c3';
sub _dbh_last_insert_id {
my ($self, $dbh, $source, @columns) = @_;
};
# trigger_body is a LONG
- $dbh->{LongReadLen} = 64 * 1024 if ($dbh->{LongReadLen} < 64 * 1024);
+ local $dbh->{LongReadLen} = 64 * 1024 if ($dbh->{LongReadLen} < 64 * 1024);
my $sth;
sub _sequence_fetch {
my ( $self, $type, $seq ) = @_;
- my ($id) = $self->dbh->selectrow_array("SELECT ${seq}.${type} FROM DUAL");
+ my ($id) = $self->_get_dbh->selectrow_array("SELECT ${seq}.${type} FROM DUAL");
return $id;
}
-=head2 connected
-
-Returns true if we have an open (and working) database connection, false if it is not (yet)
-open (or does not work). (Executes a simple SELECT to make sure it works.)
-
-The reason this is needed is that L<DBD::Oracle>'s ping() does not do a real
-OCIPing but just gets the server version, which doesn't help if someone killed
-your session.
-
-=cut
-
-sub connected {
+sub _ping {
my $self = shift;
- if (not $self->SUPER::connected(@_)) {
- return 0;
- }
- else {
- my $dbh = $self->_dbh;
+ my $dbh = $self->_dbh or return 0;
- local $dbh->{RaiseError} = 1;
+ local $dbh->{RaiseError} = 1;
- eval {
- my $ping_sth = $dbh->prepare_cached("select 1 from dual");
- $ping_sth->execute;
- $ping_sth->finish;
- };
+ eval {
+ $dbh->do("select 1 from dual");
+ };
- return $@ ? 0 : 1;
- }
+ return $@ ? 0 : 1;
}
sub _dbh_execute {
do {
eval {
if ($wantarray) {
- @res = $self->SUPER::_dbh_execute(@_);
+ @res = $self->next::method(@_);
} else {
- $res[0] = $self->SUPER::_dbh_execute(@_);
+ $res[0] = $self->next::method(@_);
}
};
$exception = $@;
sub get_autoinc_seq {
my ($self, $source, $col) = @_;
-
+
$self->dbh_do('_dbh_get_autoinc_seq', $source, $col);
}
sub datetime_parser_type { return "DateTime::Format::Oracle"; }
+=head2 connect_call_datetime_setup
+
+Used as:
+
+ on_connect_call => 'datetime_setup'
+
+In L<DBIx::Class::Storage::DBI/connect_info> to set the session nls date, and
+timestamp values for use with L<DBIx::Class::InflateColumn::DateTime> and the
+necessary environment variables for L<DateTime::Format::Oracle>, which is used
+by it.
+
+Maximum allowable precision is used, unless the environment variables have
+already been set.
+
+These are the defaults used:
+
+ $ENV{NLS_DATE_FORMAT} ||= 'YYYY-MM-DD HH24:MI:SS';
+ $ENV{NLS_TIMESTAMP_FORMAT} ||= 'YYYY-MM-DD HH24:MI:SS.FF';
+ $ENV{NLS_TIMESTAMP_TZ_FORMAT} ||= 'YYYY-MM-DD HH24:MI:SS.FF TZHTZM';
+
+To get more than second precision with L<DBIx::Class::InflateColumn::DateTime>
+for your timestamps, use something like this:
+
+ use Time::HiRes 'time';
+ my $ts = DateTime->from_epoch(epoch => time);
+
+=cut
+
+sub connect_call_datetime_setup {
+ my $self = shift;
+
+ my $date_format = $ENV{NLS_DATE_FORMAT} ||= 'YYYY-MM-DD HH24:MI:SS';
+ my $timestamp_format = $ENV{NLS_TIMESTAMP_FORMAT} ||=
+ 'YYYY-MM-DD HH24:MI:SS.FF';
+ my $timestamp_tz_format = $ENV{NLS_TIMESTAMP_TZ_FORMAT} ||=
+ 'YYYY-MM-DD HH24:MI:SS.FF TZHTZM';
+
+ $self->_do_query("alter session set nls_date_format = '$date_format'");
+ $self->_do_query(
+"alter session set nls_timestamp_format = '$timestamp_format'");
+ $self->_do_query(
+"alter session set nls_timestamp_tz_format='$timestamp_tz_format'");
+}
+
sub _svp_begin {
my ($self, $name) = @_;
-
- $self->dbh->do("SAVEPOINT $name");
+
+ $self->_get_dbh->do("SAVEPOINT $name");
+}
+
+=head2 source_bind_attributes
+
+Handle LOB types in Oracle. Under a certain size (4k?), you can get away
+with the driver assuming your input is the deprecated LONG type if you
+encode it as a hex string. That ain't gonna fly at larger values, where
+you'll discover you have to do what this does.
+
+This method had to be overridden because we need to set ora_field to the
+actual column, and that isn't passed to the call (provided by Storage) to
+bind_attribute_by_data_type.
+
+According to L<DBD::Oracle>, the ora_field isn't always necessary, but
+adding it doesn't hurt, and will save your bacon if you're modifying a
+table with more than one LOB column.
+
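+As a sketch, a result class column declared like this (names are hypothetical)
+will have its CLOB bind attributes set up automatically by this method:
+
+  __PACKAGE__->add_columns(
+    body => {
+      data_type   => 'clob',
+      is_nullable => 1,
+    },
+  );
+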
+=cut
+
+sub source_bind_attributes
+{
+ require DBD::Oracle;
+ my $self = shift;
+ my($source) = @_;
+
+ my %bind_attributes;
+
+ foreach my $column ($source->columns) {
+ my $data_type = $source->column_info($column)->{data_type} || '';
+ next unless $data_type;
+
+ my %column_bind_attrs = $self->bind_attribute_by_data_type($data_type);
+
+ if ($data_type =~ /^[BC]LOB$/i) {
+ $column_bind_attrs{'ora_type'} = uc($data_type) eq 'CLOB' ?
+ DBD::Oracle::ORA_CLOB() :
+ DBD::Oracle::ORA_BLOB();
+ $column_bind_attrs{'ora_field'} = $column;
+ }
+
+ $bind_attributes{$column} = \%column_bind_attrs;
+ }
+
+ return \%bind_attributes;
}
# Oracle automatically releases a savepoint when you start another one with the
sub _svp_rollback {
my ($self, $name) = @_;
- $self->dbh->do("ROLLBACK TO SAVEPOINT $name")
+ $self->_get_dbh->do("ROLLBACK TO SAVEPOINT $name")
}
-=head1 AUTHORS
-
-Andy Grundman <andy@hybridized.org>
+=head1 AUTHOR
-Scott Connelly <scottsweep@yahoo.com>
+See L<DBIx::Class/CONTRIBUTORS>.
=head1 LICENSE
package DBIx::Class::Storage::DBI::Oracle::WhereJoins;
-use base qw( DBIx::Class::Storage::DBI::Oracle::Generic );
-
use strict;
use warnings;
-__PACKAGE__->sql_maker_class('DBIC::SQL::Abstract::Oracle');
-
-BEGIN {
- package # Hide from PAUSE
- DBIC::SQL::Abstract::Oracle;
-
- use base qw( DBIC::SQL::Abstract );
-
- sub select {
- my ($self, $table, $fields, $where, $order, @rest) = @_;
-
- if (ref($table) eq 'ARRAY') {
- $where = $self->_oracle_joins($where, @{ $table });
- }
-
- return $self->SUPER::select($table, $fields, $where, $order, @rest);
- }
-
- sub _recurse_from {
- my ($self, $from, @join) = @_;
-
- my @sqlf = $self->_make_as($from);
-
- foreach my $j (@join) {
- my ($to, $on) = @{ $j };
-
- if (ref $to eq 'ARRAY') {
- push (@sqlf, $self->_recurse_from(@{ $to }));
- }
- else {
- push (@sqlf, $self->_make_as($to));
- }
- }
-
- return join q{, }, @sqlf;
- }
-
- sub _oracle_joins {
- my ($self, $where, $from, @join) = @_;
- my $join_where = {};
- $self->_recurse_oracle_joins($join_where, $from, @join);
- if (keys %$join_where) {
- if (!defined($where)) {
- $where = $join_where;
- } else {
- if (ref($where) eq 'ARRAY') {
- $where = { -or => $where };
- }
- $where = { -and => [ $join_where, $where ] };
- }
- }
- return $where;
- }
-
- sub _recurse_oracle_joins {
- my ($self, $where, $from, @join) = @_;
-
- foreach my $j (@join) {
- my ($to, $on) = @{ $j };
-
- if (ref $to eq 'ARRAY') {
- $self->_recurse_oracle_joins($where, @{ $to });
- }
-
- my $to_jt = ref $to eq 'ARRAY' ? $to->[0] : $to;
- my $left_join = q{};
- my $right_join = q{};
-
- if (ref $to_jt eq 'HASH' and exists $to_jt->{-join_type}) {
- #TODO: Support full outer joins -- this would happen much earlier in
- #the sequence since oracle 8's full outer join syntax is best
- #described as INSANE.
- die "Can't handle full outer joins in Oracle 8 yet!\n"
- if $to_jt->{-join_type} =~ /full/i;
-
- $left_join = q{(+)} if $to_jt->{-join_type} =~ /left/i
- && $to_jt->{-join_type} !~ /inner/i;
-
- $right_join = q{(+)} if $to_jt->{-join_type} =~ /right/i
- && $to_jt->{-join_type} !~ /inner/i;
- }
-
- foreach my $lhs (keys %{ $on }) {
- $where->{$lhs . $left_join} = \"= $on->{ $lhs }$right_join";
- }
- }
- }
-}
+use base qw( DBIx::Class::Storage::DBI::Oracle::Generic );
+use mro 'c3';
+
+__PACKAGE__->sql_maker_class('DBIx::Class::SQLAHacks::OracleJoins');
1;
=head1 METHODS
-This module replaces a subroutine contained in DBIC::SQL::Abstract:
-
-=over
-
-=item sql_maker
-
-=back
-
-It also creates a new module in its BEGIN { } block called
-DBIC::SQL::Abstract::Oracle which has the following methods:
-
-=over
-
-=item select ($\@$;$$@)
-
-Replaces DBIC::SQL::Abstract's select() method, which calls _oracle_joins()
-to modify the column and table list before calling SUPER::select().
-
-=item _recurse_from ($$\@)
-
-Recursive subroutine that builds the table list.
-
-=item _oracle_joins ($$$@)
-
-Creates the left/right relationship in the where query.
-
-=back
+See L<DBIx::Class::SQLAHacks::OracleJoins> for implementation details.
=head1 BUGS
=over
-=item L<DBIC::SQL::Abstract>
+=item L<DBIx::Class::SQLAHacks>
+
+=item L<DBIx::Class::SQLAHacks::OracleJoins>
=item L<DBIx::Class::Storage::DBI::Oracle::Generic>
use strict;
use warnings;
-use DBD::Pg qw(:pg_types);
-
-use base qw/DBIx::Class::Storage::DBI/;
+use base qw/DBIx::Class::Storage::DBI::MultiColumnIn/;
+use mro 'c3';
-# __PACKAGE__->load_components(qw/PK::Auto/);
+use DBD::Pg qw(:pg_types);
-# Warn about problematic versions of DBD::Pg
-warn "DBD::Pg 1.49 is strongly recommended"
- if ($DBD::Pg::VERSION < 1.49);
+# Ask for a DBD::Pg with array support
+warn "DBD::Pg 2.9.2 or greater is strongly recommended\n"
+ if ($DBD::Pg::VERSION < 2.009002); # pg uses (used?) version::qv()
sub with_deferred_fk_checks {
my ($self, $sub) = @_;
- $self->dbh->do('SET CONSTRAINTS ALL DEFERRED');
+ $self->_get_dbh->do('SET CONSTRAINTS ALL DEFERRED');
$sub->();
}
$self->dbh_do('_dbh_last_insert_id', $seq);
}
+sub _get_pg_search_path {
+ my ($self,$dbh) = @_;
+ # cache the search path as ['schema','schema',...] in the storage
+ # obj
+ $self->{_pg_search_path} ||= do {
+ my @search_path;
+ my ($sp_string) = $dbh->selectrow_array('SHOW search_path');
+ while( $sp_string =~ s/("[^"]+"|[^,]+),?// ) {
+ unless( defined $1 and length $1 ) {
+ $self->throw_exception("search path sanity check failed: '$1'")
+ }
+ push @search_path, $1;
+ }
+ \@search_path
+ };
+}
+
sub _dbh_get_autoinc_seq {
my ($self, $dbh, $schema, $table, @pri) = @_;
- while (my $col = shift @pri) {
- my $info = $dbh->column_info(undef,$schema,$table,$col)->fetchrow_hashref;
- if(defined $info->{COLUMN_DEF} and
- $info->{COLUMN_DEF} =~ /^nextval\(+'([^']+)'::(?:text|regclass)\)/) {
- my $seq = $1;
- # may need to strip quotes -- see if this works
- return $seq =~ /\./ ? $seq : $info->{TABLE_SCHEM} . "." . $seq;
- }
+ # get the list of postgres schemas to search. if we have a schema
+ # specified, use that. otherwise, use the search path
+ my @search_path;
+ if( defined $schema and length $schema ) {
+ @search_path = ( $schema );
+ } else {
+ @search_path = @{ $self->_get_pg_search_path($dbh) };
+ }
+
+ foreach my $search_schema (@search_path) {
+ foreach my $col (@pri) {
+ my $info = $dbh->column_info(undef,$search_schema,$table,$col)->fetchrow_hashref;
+ if($info) {
+ # if we get here, we have definitely found the right
+ # column.
+ if( defined $info->{COLUMN_DEF} and
+ $info->{COLUMN_DEF}
+ =~ /^nextval\(+'([^']+)'::(?:text|regclass)\)/i
+ ) {
+ my $seq = $1;
+ return $seq =~ /\./ ? $seq : $info->{TABLE_SCHEM} . "." . $seq;
+ } else {
+ # we have found the column, but cannot figure out
+ # the nextval seq
+ return;
+ }
+ }
+ }
}
return;
}
sub get_autoinc_seq {
my ($self,$source,$col) = @_;
-
+
my @pri = $source->primary_columns;
- my ($schema,$table) = $source->name =~ /^(.+)\.(.+)$/ ? ($1,$2)
- : (undef,$source->name);
+
+ my $schema;
+ my $table = $source->name;
+
+ if (ref $table eq 'SCALAR') {
+ $table = $$table;
+ }
+ elsif ($table =~ /^(.+)\.(.+)$/) {
+ ($schema, $table) = ($1, $2);
+ }
$self->dbh_do('_dbh_get_autoinc_seq', $schema, $table, @pri);
}
bytea => { pg_type => DBD::Pg::PG_BYTEA },
blob => { pg_type => DBD::Pg::PG_BYTEA },
};
-
+
if( defined $bind_attributes->{$data_type} ) {
return $bind_attributes->{$data_type};
}
sub _sequence_fetch {
my ( $self, $type, $seq ) = @_;
- my ($id) = $self->dbh->selectrow_array("SELECT nextval('${seq}')");
+ my ($id) = $self->_get_dbh->selectrow_array("SELECT nextval('${seq}')");
return $id;
}
sub _svp_begin {
my ($self, $name) = @_;
- $self->dbh->pg_savepoint($name);
+ $self->_get_dbh->pg_savepoint($name);
}
sub _svp_release {
my ($self, $name) = @_;
- $self->dbh->pg_release($name);
+ $self->_get_dbh->pg_release($name);
}
sub _svp_rollback {
my ($self, $name) = @_;
- $self->dbh->pg_rollback_to($name);
+ $self->_get_dbh->pg_rollback_to($name);
}
1;
This class implements autoincrements for PostgreSQL.
+=head1 POSTGRESQL SCHEMA SUPPORT
+
+This supports multiple PostgreSQL schemas, with one caveat: for
+performance reasons, the schema search path is queried the first time it is
+needed and CACHED for subsequent uses.
+
+For this reason, you should do any necessary manipulation of the
+PostgreSQL search path BEFORE instantiating your schema object, or as
+part of the on_connect_do option to connect(), for example:
+
+ my $schema = My::Schema->connect
+ ( $dsn,$user,$pass,
+ { on_connect_do =>
+ [ 'SET search_path TO myschema, foo, public' ],
+ },
+ );
+
=head1 AUTHORS
-Marcus Ramberg <m.ramberg@cpan.org>
+See L<DBIx::Class/CONTRIBUTORS>
=head1 LICENSE
BEGIN {
use Carp::Clan qw/^DBIx::Class/;
-
+
## Modules required for Replication support not required for general DBIC
## use, so we explicitly test for these.
-
+
my %replication_required = (
- Moose => '0.54',
- MooseX::AttributeHelpers => '0.12',
- Moose::Util::TypeConstraints => '0.54',
- Class::MOP => '0.63',
+ 'Moose' => '0.87',
+ 'MooseX::AttributeHelpers' => '0.21',
+ 'MooseX::Types' => '0.16',
+ 'namespace::clean' => '0.11',
+ 'Hash::Merge' => '0.11'
);
-
+
my @didnt_load;
-
+
for my $module (keys %replication_required) {
eval "use $module $replication_required{$module}";
push @didnt_load, "$module $replication_required{$module}"
if $@;
}
-
+
croak("@{[ join ', ', @didnt_load ]} are missing and are required for Replication")
- if @didnt_load;
+ if @didnt_load;
}
+use Moose;
use DBIx::Class::Storage::DBI;
use DBIx::Class::Storage::DBI::Replicated::Pool;
use DBIx::Class::Storage::DBI::Replicated::Balancer;
+use DBIx::Class::Storage::DBI::Replicated::Types qw/BalancerClassNamePart DBICSchema DBICStorageDBI/;
+use MooseX::Types::Moose qw/ClassName HashRef Object/;
+use Scalar::Util 'reftype';
+use Carp::Clan qw/^DBIx::Class/;
+use Hash::Merge 'merge';
+
+use namespace::clean -except => 'meta';
=head1 NAME
storage type, add some replicated (readonly) databases, and perform reporting
tasks.
- ## Change storage_type in your schema class
+You should set the 'storage_type' attribute to a replicated type. You should
+also define your arguments, such as which balancer you want and any arguments
+that the Pool object should get.
+
$schema->storage_type( ['::DBI::Replicated', {balancer=>'::Random'}] );
-
- ## Add some slaves. Basically this is an array of arrayrefs, where each
- ## arrayref is database connect information
-
+
+Next, you need to add in the Replicants. Basically this is an array of
+arrayrefs, where each arrayref is database connect information. Think of these
+arguments as what you'd pass to the 'normal' $schema->connect method.
+
$schema->storage->connect_replicants(
[$dsn1, $user, $pass, \%opts],
[$dsn2, $user, $pass, \%opts],
[$dsn3, $user, $pass, \%opts],
);
-
- ## Now, just use the $schema as normal
+
+Now, just use the $schema as you normally would. Automatically all reads will
+be delegated to the replicants, while writes go to the master.
+
$schema->resultset('Source')->search({name=>'etc'});
-
- ## You can force a given query to use a particular storage using the search
- ### attribute 'force_pool'. For example:
-
+
+You can force a given query to use a particular storage using the search
+attribute 'force_pool'. For example:
+
my $RS = $schema->resultset('Source')->search(undef, {force_pool=>'master'});
-
- ## Now $RS will force everything (both reads and writes) to use whatever was
- ## setup as the master storage. 'master' is hardcoded to always point to the
- ## Master, but you can also use any Replicant name. Please see:
- ## L<DBIx::Class::Storage::Replicated::Pool> and the replicants attribute for
- ## More. Also see transactions and L</execute_reliably> for alternative ways
- ## to force read traffic to the master.
-
+
+Now $RS will force everything (both reads and writes) to use whatever was setup
+as the master storage. 'master' is hardcoded to always point to the Master,
+but you can also use any Replicant name. Please see:
+L<DBIx::Class::Storage::DBI::Replicated::Pool> and the replicants attribute for more.
+
+Also see transactions and L</execute_reliably> for alternative ways to
+force read traffic to the master. In general, you should wrap your statements
+in a transaction when you are reading and writing to the same tables at the
+same time, since your replicants will often lag a bit behind the master.
+
+See L<DBIx::Class::Storage::DBI::Replicated::Instructions> for more help and
+walkthroughs.
+
=head1 DESCRIPTION
Warning: This class is marked BETA. This has been running a production
=head1 NOTES
The consistency between master and replicants is database specific. The Pool
-gives you a method to validate it's replicants, removing and replacing them
+gives you a method to validate its replicants, removing and replacing them
when they fail/pass predefined criteria. Please make careful use of the ways
to force a query to run against Master when needed.
Replicated Storage has additional requirements not currently part of L<DBIx::Class>
- Moose => 0.54
- MooseX::AttributeHelpers => 0.12
- Moose::Util::TypeConstraints => 0.54
- Class::MOP => 0.63
-
+ Moose => '0.87',
+    MooseX::AttributeHelpers => '0.21',
+ MooseX::Types => '0.16',
+ namespace::clean => '0.11',
+ Hash::Merge => '0.11'
+
You will need to install these modules manually via CPAN or make them part of the
Makefile for your distribution.
has 'schema' => (
is=>'rw',
- isa=>'DBIx::Class::Schema',
+ isa=>DBICSchema,
weak_ref=>1,
required=>1,
);
=cut
has 'pool_type' => (
- is=>'ro',
- isa=>'ClassName',
- required=>1,
+ is=>'rw',
+ isa=>ClassName,
default=>'DBIx::Class::Storage::DBI::Replicated::Pool',
handles=>{
'create_pool' => 'new',
=head2 pool_args
Contains a hashref of initialized information to pass to the Balancer object.
-See L<DBIx::Class::Storage::Replicated::Pool> for available arguments.
+See L<DBIx::Class::Storage::DBI::Replicated::Pool> for available arguments.
=cut
has 'pool_args' => (
- is=>'ro',
- isa=>'HashRef',
+ is=>'rw',
+ isa=>HashRef,
lazy=>1,
- required=>1,
default=>sub { {} },
);
=cut
-subtype 'DBIx::Class::Storage::DBI::Replicated::BalancerClassNamePart',
- as 'ClassName';
-
-coerce 'DBIx::Class::Storage::DBI::Replicated::BalancerClassNamePart',
- from 'Str',
- via {
- my $type = $_;
- if($type=~m/^::/) {
- $type = 'DBIx::Class::Storage::DBI::Replicated::Balancer'.$type;
- }
- Class::MOP::load_class($type);
- $type;
- };
-
has 'balancer_type' => (
- is=>'ro',
- isa=>'DBIx::Class::Storage::DBI::Replicated::BalancerClassNamePart',
+ is=>'rw',
+ isa=>BalancerClassNamePart,
coerce=>1,
required=>1,
default=> 'DBIx::Class::Storage::DBI::Replicated::Balancer::First',
=head2 balancer_args
Contains a hashref of initialized information to pass to the Balancer object.
-See L<DBIx::Class::Storage::Replicated::Balancer> for available arguments.
+See L<DBIx::Class::Storage::DBI::Replicated::Balancer> for available arguments.
=cut
has 'balancer_args' => (
- is=>'ro',
- isa=>'HashRef',
+ is=>'rw',
+ isa=>HashRef,
lazy=>1,
required=>1,
default=>sub { {} },
=cut
has 'balancer' => (
- is=>'ro',
+ is=>'rw',
isa=>'DBIx::Class::Storage::DBI::Replicated::Balancer',
lazy_build=>1,
handles=>[qw/auto_validate_every/],
has 'master' => (
is=> 'ro',
- isa=>'DBIx::Class::Storage::DBI',
+ isa=>DBICStorageDBI,
lazy_build=>1,
);
has 'read_handler' => (
is=>'rw',
- isa=>'Object',
+ isa=>Object,
lazy_build=>1,
handles=>[qw/
select
has 'write_handler' => (
is=>'ro',
- isa=>'Object',
- lazy_build=>1,
+ isa=>Object,
lazy_build=>1,
handles=>[qw/
on_connect_do
create_ddl_dir
deployment_statements
datetime_parser
- datetime_parser_type
+ datetime_parser_type
+ build_datetime_parser
last_insert_id
insert
insert_bulk
sth
deploy
with_deferred_fk_checks
-
+ dbh_do
reload_row
+ with_deferred_fk_checks
_prep_for_execute
- configure_sqlt
-
+
+ backup
+ is_datatype_numeric
+ _count_select
+ _subq_count_select
+ _subq_update_delete
+ svp_rollback
+ svp_begin
+ svp_release
/],
);
+has _master_connect_info_opts =>
+ (is => 'rw', isa => HashRef, default => sub { {} });
+
+=head2 around: connect_info
+
+Preserve master's C<connect_info> options (for merging with replicants.)
+Also set any Replicated related options from connect_info, such as
+C<pool_type>, C<pool_args>, C<balancer_type> and C<balancer_args>.
+
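+A sketch of passing such options through C<connect_info> (all values are
+illustrative):
+
+  my $schema = My::Schema->connect(
+    $dsn, $user, $pass,
+    { AutoCommit => 1 },
+    {
+      balancer_type => '::Random',
+      balancer_args => { auto_validate_every => 5 },
+      pool_args     => { maximum_lag => 2 },
+    },
+  );
+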
+=cut
+
+around connect_info => sub {
+ my ($next, $self, $info, @extra) = @_;
+
+ my $wantarray = wantarray;
+
+ my %opts;
+ for my $arg (@$info) {
+ next unless (reftype($arg)||'') eq 'HASH';
+ %opts = %{ merge($arg, \%opts) };
+ }
+ delete $opts{dsn};
+
+ if (@opts{qw/pool_type pool_args/}) {
+ $self->pool_type(delete $opts{pool_type})
+ if $opts{pool_type};
+
+ $self->pool_args(
+ merge((delete $opts{pool_args} || {}), $self->pool_args)
+ );
+
+ $self->pool($self->_build_pool)
+ if $self->pool;
+ }
+
+ if (@opts{qw/balancer_type balancer_args/}) {
+ $self->balancer_type(delete $opts{balancer_type})
+ if $opts{balancer_type};
+
+ $self->balancer_args(
+ merge((delete $opts{balancer_args} || {}), $self->balancer_args)
+ );
+
+ $self->balancer($self->_build_balancer)
+ if $self->balancer;
+ }
+
+ $self->_master_connect_info_opts(\%opts);
+
+ my (@res, $res);
+ if ($wantarray) {
+ @res = $self->$next($info, @extra);
+ } else {
+ $res = $self->$next($info, @extra);
+ }
+
+ # Make sure master is blessed into the correct class and apply role to it.
+ my $master = $self->master;
+ $master->_determine_driver;
+ Moose::Meta::Class->initialize(ref $master);
+ DBIx::Class::Storage::DBI::Replicated::WithDSN->meta->apply($master);
+
+ $wantarray ? @res : $res;
+};
+
=head1 METHODS
This class defines the following methods.
=head2 BUILDARGS
-L<DBIx::Class::Schema> when instantiating it's storage passed itself as the
+L<DBIx::Class::Schema> when instantiating its storage passes itself as the
first argument. So we need to massage the arguments a bit so that all the
bits get put into the correct places.
sub BUILDARGS {
my ($class, $schema, $storage_type_args, @args) = @_;
-
+
return {
schema=>$schema,
%$storage_type_args,
sub _build_master {
my $self = shift @_;
- DBIx::Class::Storage::DBI->new($self->schema);
+ my $master = DBIx::Class::Storage::DBI->new($self->schema);
+ $master
}
=head2 _build_pool
=head2 around: connect_replicants
All calls to connect_replicants need to have an existing $schema tacked onto
-top of the args, since L<DBIx::Storage::DBI> needs it.
+top of the args, since L<DBIx::Class::Storage::DBI> needs it, and any C<connect_info>
+options merged with the master, with replicant opts having higher priority.
=cut
-around 'connect_replicants' => sub {
- my ($method, $self, @args) = @_;
- $self->$method($self->schema, @args);
+around connect_replicants => sub {
+ my ($next, $self, @args) = @_;
+
+ for my $r (@args) {
+ $r = [ $r ] unless reftype $r eq 'ARRAY';
+
+ croak "coderef replicant connect_info not supported"
+ if ref $r->[0] && reftype $r->[0] eq 'CODE';
+
+# any connect_info options?
+ my $i = 0;
+ $i++ while $i < @$r && (reftype($r->[$i])||'') ne 'HASH';
+
+# make one if none
+ $r->[$i] = {} unless $r->[$i];
+
+# merge if two hashes
+ my @hashes = @$r[$i .. $#{$r}];
+
+ croak "invalid connect_info options"
+ if (grep { reftype($_) eq 'HASH' } @hashes) != @hashes;
+
+ croak "too many hashrefs in connect_info"
+ if @hashes > 2;
+
+ my %opts = %{ merge(reverse @hashes) };
+
+# delete them
+ splice @$r, $i+1, ($#{$r} - $i), ();
+
+# merge with master
+ %opts = %{ merge(\%opts, $self->_master_connect_info_opts) };
+
+# update
+ $r->[$i] = \%opts;
+ }
+
+ $self->$next($self->schema, @args);
};
=head2 all_storages
my $self = shift @_;
return grep {defined $_ && blessed $_} (
$self->master,
- $self->replicants,
+ values %{ $self->replicants },
);
}
sub execute_reliably {
my ($self, $coderef, @args) = @_;
-
+
unless( ref $coderef eq 'CODE') {
$self->throw_exception('Second argument must be a coderef');
}
-
+
##Get copy of master storage
my $master = $self->master;
-
+
##Get whatever the current read hander is
my $current = $self->read_handler;
-
+
##Set the read handler to master
$self->read_handler($master);
-
+
## do whatever the caller needs
my @result;
my $want_array = wantarray;
-
+
eval {
if($want_array) {
@result = $coderef->(@args);
$coderef->(@args);
}
};
-
+
##Reset to the original state
$self->read_handler($current);
-
+
##Exception testing has to come last, otherwise you might leave the
##read_handler set to master.
-
+
if($@) {
$self->throw_exception("coderef returned an error: $@");
} else {
Sets the current $schema to be 'reliable', that is all queries, both read and
write are sent to the master
-
+
=cut
sub set_reliable_storage {
my $self = shift @_;
my $schema = $self->schema;
my $write_handler = $self->schema->storage->write_handler;
-
+
$schema->storage->read_handler($write_handler);
}
Sets the current $schema to use the L</balancer> for all reads, while all
writes are sent to the master only
-
+
=cut
sub set_balanced_storage {
my $self = shift @_;
my $schema = $self->schema;
- my $write_handler = $self->schema->storage->balancer;
-
- $schema->storage->read_handler($write_handler);
-}
+ my $balanced_handler = $self->schema->storage->balancer;
-=head2 around: txn_do ($coderef)
-
-Overload to the txn_do method, which is delegated to whatever the
-L<write_handler> is set to. We overload this in order to wrap in inside a
-L</execute_reliably> method.
-
-=cut
-
-around 'txn_do' => sub {
- my($txn_do, $self, $coderef, @args) = @_;
- $self->execute_reliably(sub {$self->$txn_do($coderef, @args)});
-};
+ $schema->storage->read_handler($balanced_handler);
+}
=head2 connected
}
}
+=head2 cursor_class
+
+set cursor class on all storages, or return master's
+
+=cut
+
+sub cursor_class {
+ my ($self, $cursor_class) = @_;
+
+ if ($cursor_class) {
+ $_->cursor_class($cursor_class) for $self->all_storages;
+ }
+ $self->master->cursor_class;
+}
+
=head1 GOTCHAS
Due to the fact that replicants can lag behind a master, you must take care to
my $new_schema = $schema->clone;
$new_schema->set_reliable_storage;
-
+
## $new_schema will use only the Master storage for all reads/writes while
## the $schema object will use replicated storage.
use Moose::Role;
requires 'next_storage';
+use MooseX::Types::Moose qw/Int/;
+use DBIx::Class::Storage::DBI::Replicated::Pool;
+use DBIx::Class::Storage::DBI::Replicated::Types qw/DBICStorageDBI/;
+use namespace::clean -except => 'meta';
=head1 NAME
=head1 SYNOPSIS
This role is used internally by L<DBIx::Class::Storage::DBI::Replicated>.
-
+
=head1 DESCRIPTION
Given a pool (L<DBIx::Class::Storage::DBI::Replicated::Pool>) of replicated
has 'auto_validate_every' => (
is=>'rw',
- isa=>'Int',
+ isa=>Int,
predicate=>'has_auto_validate_every',
);
has 'master' => (
is=>'ro',
- isa=>'DBIx::Class::Storage::DBI',
+ isa=>DBICStorageDBI,
required=>1,
);
This attribute returns the next slave to handle a read request. Your L</pool>
attribute has methods to help you shuffle through all the available replicants
-via it's balancer object.
+via its balancer object.
=cut
has 'current_replicant' => (
is=> 'rw',
- isa=>'DBIx::Class::Storage::DBI',
+ isa=>DBICStorageDBI,
lazy_build=>1,
handles=>[qw/
select
around 'select' => sub {
my ($select, $self, @args) = @_;
-
+
if (my $forced_pool = $args[-1]->{force_pool}) {
delete $args[-1]->{force_pool};
return $self->_get_forced_pool($forced_pool)->select(@args);
+ } elsif($self->master->{transaction_depth}) {
+ return $self->master->select(@args);
} else {
$self->increment_storage;
return $self->$select(@args);
around 'select_single' => sub {
my ($select_single, $self, @args) = @_;
-
+
if (my $forced_pool = $args[-1]->{force_pool}) {
delete $args[-1]->{force_pool};
return $self->_get_forced_pool($forced_pool)->select_single(@args);
+ } elsif($self->master->{transaction_depth}) {
+ return $self->master->select_single(@args);
} else {
$self->increment_storage;
return $self->$select_single(@args);
return $forced_pool;
} elsif($forced_pool eq 'master') {
return $self->master;
- } elsif(my $replicant = $self->pool->replicants($forced_pool)) {
+ } elsif(my $replicant = $self->pool->replicants->{$forced_pool}) {
return $replicant;
} else {
$self->master->throw_exception("$forced_pool is not a named replicant.");
=head1 AUTHOR
-John Napiorkowski <john.napiorkowski@takkle.com>
+John Napiorkowski <jjnapiork@cpan.org>
=head1 LICENSE
use Moose;
with 'DBIx::Class::Storage::DBI::Replicated::Balancer';
+use namespace::clean -except => 'meta';
=head1 NAME
This class is used internally by L<DBIx::Class::Storage::DBI::Replicated>. You
shouldn't need to create instances of this class.
-
+
=head1 DESCRIPTION
Given a pool (L<DBIx::Class::Storage::DBI::Replicated::Pool>) of replicated
__PACKAGE__->meta->make_immutable;
-1;
\ No newline at end of file
+1;
use Moose;
with 'DBIx::Class::Storage::DBI::Replicated::Balancer';
+use DBIx::Class::Storage::DBI::Replicated::Types 'Weight';
+use namespace::clean -except => 'meta';
=head1 NAME
This class is used internally by L<DBIx::Class::Storage::DBI::Replicated>. You
shouldn't need to create instances of this class.
-
+
=head1 DESCRIPTION
Given a pool (L<DBIx::Class::Storage::DBI::Replicated::Pool>) of replicated
This class defines the following attributes.
+=head2 master_read_weight
+
+A non-negative number that specifies what weight to give the master when
+choosing which backend to execute a read query on. A value of 0, which is the
+default, does no reads from master, while a value of 1 gives it the same
+priority as any single replicant.
+
+For example: if you have 2 replicants, and a L</master_read_weight> of C<0.5>,
+the chance of reading from master will be C<20%>.
+
+You can set it to a value higher than 1, making master have higher weight than
+any single replicant, if for example you have a very powerful master.
+
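+A sketch of setting it via the balancer arguments when configuring the
+replicated storage (values are illustrative):
+
+  $schema->storage_type(['::DBI::Replicated', {
+    balancer_type => '::Random',
+    balancer_args => { master_read_weight => 0.5 },
+  }]);
+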
+=cut
+
+has master_read_weight => (is => 'rw', isa => Weight, default => sub { 0 });
+
=head1 METHODS
This class defines the following methods.
sub next_storage {
my $self = shift @_;
- my @active_replicants = $self->pool->active_replicants;
- my $count_active_replicants = $#active_replicants +1;
- my $random_replicant = int(rand($count_active_replicants));
-
- return $active_replicants[$random_replicant];
+
+ my @replicants = $self->pool->active_replicants;
+
+ if (not @replicants) {
+ # will fall back to master anyway
+ return;
+ }
+
+ my $master = $self->master;
+
+ my $rnd = $self->_random_number(@replicants + $self->master_read_weight);
+
+ return $rnd >= @replicants ? $master : $replicants[int $rnd];
+}
+
+sub _random_number {
+ rand($_[1])
}
=head1 AUTHOR
__PACKAGE__->meta->make_immutable;
-1;
\ No newline at end of file
+1;
--- /dev/null
+package DBIx::Class::Storage::DBI::Replicated::Introduction;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::Replicated::Introduction - Minimum Need to Know
+
+=head1 SYNOPSIS
+
+This is an introductory document for L<DBIx::Class::Storage::Replication>.
+
+This document is not an overview of what replication is or why you should be
+using it. It is not a document explaining how to set up MySQL native replication
+either. Copious external resources are available for both. This document
+presumes you have the basics down.
+
+=head1 DESCRIPTION
+
+L<DBIx::Class> supports a framework for using database replication. This system
+is integrated completely, which means once it is set up you should be able to
+automatically just start using a replication cluster without additional work or
+changes to your code. Some caveats apply, primarily related to the proper use
+of transactions (you are wrapping all your database modifying statements inside
+a transaction, right ;) ); however, in our experience properly written DBIC will
+work transparently with Replicated storage.
+
+Currently we have support for MySQL native replication, which is relatively
+easy to install and configure. We also currently support single master to one
+or more replicants (also called 'slaves' in some documentation). However, the
+framework is not specifically tied to MySQL, and supporting other
+replication systems or topologies should be possible. Please bring your
+patches and ideas to the #dbix-class IRC channel or the mailing list.
+
+For an easy way to start playing with MySQL native replication, see:
+L<MySQL::Sandbox>.
+
+If you are using this with a L<Catalyst> based application, you may also wish
+to see more recent updates to L<Catalyst::Model::DBIC::Schema>, which has
+support for replication configuration options as well.
+
+=head1 REPLICATED STORAGE
+
+By default, when you start L<DBIx::Class>, your Schema (L<DBIx::Class::Schema>)
+is assigned a storage_type, which when fully connected will reflect your
+underlying storage engine as defined by your chosen database driver. For
+example, if you connect to a MySQL database, your storage_type will be
+L<DBIx::Class::Storage::DBI::mysql>. Your storage type class will contain
+database specific code to help smooth over the differences between databases
+and let L<DBIx::Class> do its thing.
+
+If you want to use replication, you will override this setting so that the
+replicated storage engine will 'wrap' your underlying storages and present to
+the end programmer a unified interface. This wrapper storage class will
+delegate method calls to either a master database or one or more replicated
+databases based on if they are read only (by default sent to the replicants)
+or write (reserved for the master). Additionally, the Replicated storage
+will monitor the health of your replicants and automatically drop them should
+one exceed configurable parameters. Later, it can automatically restore a
+replicant when its health is restored.
+
+This gives you a very robust system, since you can add or drop replicants
+and DBIC will automatically adjust itself accordingly.
+
+Additionally, if you need high data integrity, such as when you are executing
+a transaction, replicated storage will automatically delegate all database
+traffic to the master storage. There are several ways to enable this high
+integrity mode, but wrapping your statements inside a transaction is the easy
+and canonical option.
+
+=head1 PARTS OF REPLICATED STORAGE
+
+A replicated storage contains several parts. First, there is the replicated
+storage itself (L<DBIx::Class::Storage::DBI::Replicated>). A replicated storage
+takes a pool of replicants (L<DBIx::Class::Storage::DBI::Replicated::Pool>)
+and a software balancer (L<DBIx::Class::Storage::DBI::Replicated::Balancer>). The
+balancer does the job of splitting up all the read traffic amongst each
+replicant in the Pool. Currently there are two types of balancers, a Random one
+which chooses a Replicant in the Pool using a naive randomizer algorithm, and a
+First balancer, which just uses the first replicant in the Pool (and obviously is
+only of value when you have a single replicant).
+
+=head1 REPLICATED STORAGE CONFIGURATION
+
+All the parts of replication can be altered dynamically at runtime, which makes
+it possible to create a system that automatically scales under load by creating
+more replicants as needed, perhaps using a cloud system such as Amazon EC2.
+However, for common use you can setup your replicated storage to be enabled at
+the time you connect the databases. The following is a breakdown of how you
+may wish to do this. Again, if you are using L<Catalyst>, I strongly recommend
+you use (or upgrade to) the latest L<Catalyst::Model::DBIC::Schema>, which makes
+this job even easier.
+
+First, you need to connect your L<DBIx::Class::Schema>. Let's assume you have
+such a schema called "MyApp::Schema".
+
+ use MyApp::Schema;
+ my $schema = MyApp::Schema->connect($dsn, $user, $pass);
+
+Next, you need to set the storage_type.
+
+ $schema->storage_type(
+    '::DBI::Replicated' => {
+ balancer_type => '::Random',
+ balancer_args => {
+ auto_validate_every => 5,
+ master_read_weight => 1
+ },
+ pool_args => {
+ maximum_lag =>2,
+ },
+ }
+ );
+
+Let's break down the settings. The method L<DBIx::Class::Schema/storage_type>
+takes one mandatory parameter, a scalar value, and an optional second value which
+is a Hash Reference of configuration options for that storage. In this case,
+we are setting the Replicated storage type using '::DBI::Replicated' as the
+first value. You will only use a different value if you are subclassing the
+replicated storage, so for now just copy that first parameter.
+
+The second parameter contains a hash reference of stuff that gets passed to the
+replicated storage. L<DBIx::Class::Storage::DBI::Replicated/balancer_type> is
+the type of software load balancer you will use to split up traffic among all
+your replicants. Right now we have two options, "::Random" and "::First". You
+can review documentation for both at:
+
+L<DBIx::Class::Storage::DBI::Replicated::Balancer::First>,
+L<DBIx::Class::Storage::DBI::Replicated::Balancer::Random>.
+
+In this case we will have three replicants, so the ::Random option is the only
+one that makes sense.
+
+'balancer_args' get passed to the balancer when it's instantiated. All
+balancers have the 'auto_validate_every' option. This is the number of seconds
+we allow to pass between validation checks on a load balanced replicant. So
+the higher the number, the more possibility that your reads to the replicant
+may be inconsistent with what's on the master. Setting this number too low
+will result in increased database loads, so choose a number with care. Our
+experience is that setting the number around 5 seconds results in a good
+performance / integrity balance.
+
+'master_read_weight' is an option associated with the ::Random balancer. It
+allows you to let the master be read from. I usually leave this off (default
+is off).
+
+The 'pool_args' are configuration options associated with the replicant pool.
+This object (L<DBIx::Class::Storage::DBI::Replicated::Pool>) manages all the
+declared replicants. 'maximum_lag' is the number of seconds a replicant is
+allowed to lag behind the master before being temporarily removed from the pool.
+Keep in mind that the Balancer option 'auto_validate_every' determines how often
+a replicant is tested against this condition, so the true possible lag can be
+higher than the number you set. The default is zero.
+
+No matter how low you set the maximum_lag or the auto_validate_every settings,
+there is always the chance that your replicants will lag a bit behind the
+master for the supported replication system built into MySQL. You can ensure
+reliable reads by using a transaction, which will force both read and write
+activity to the master; however, this will increase the load on your master
+database.
+
+After you've configured the replicated storage, you need to add the connection
+information for the replicants:
+
+ $schema->storage->connect_replicants(
+ [$dsn1, $user, $pass, \%opts],
+ [$dsn2, $user, $pass, \%opts],
+ [$dsn3, $user, $pass, \%opts],
+ );
+
+These replicants should be configured as slaves to the master using the
+instructions for MySQL native replication, or if you are just learning, you
+will find L<MySQL::Sandbox> an easy way to set up a replication cluster.
+
+And now your $schema object is properly configured! Enjoy!
+
+=head1 AUTHOR
+
+John Napiorkowski <jjnapiork@cpan.org>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
+
+1;
use Moose;
use MooseX::AttributeHelpers;
use DBIx::Class::Storage::DBI::Replicated::Replicant;
-use List::Util qw(sum);
+use List::Util 'sum';
+use Scalar::Util 'reftype';
+use Carp::Clan qw/^DBIx::Class/;
+use MooseX::Types::Moose qw/Num Int ClassName HashRef/;
+
+use namespace::clean -except => 'meta';
=head1 NAME
This class is used internally by L<DBIx::Class::Storage::DBI::Replicated>. You
shouldn't need to create instances of this class.
-
+
=head1 DESCRIPTION
In a replicated storage type, there is at least one replicant to handle the
This is a number which defines the maximum allowed lag returned by the
L<DBIx::Class::Storage::DBI/lag_behind_master> method. The default is 0. In
general, this should return a larger number when the replicant is lagging
-behind it's master, however the implementation of this is database specific, so
+behind its master, however the implementation of this is database specific, so
don't count on this number having a fixed meaning. For example, MySQL will
return a number of seconds that the replicating database is lagging.
has 'maximum_lag' => (
is=>'rw',
- isa=>'Num',
+ isa=>Num,
required=>1,
lazy=>1,
default=>0,
=head2 last_validated
This is an integer representing a time since the last time the replicants were
-validated. It's nothing fancy, just an integer provided via the perl time
+validated. It's nothing fancy, just an integer provided via the perl L<time|perlfunc/time>
builtin.
=cut
has 'last_validated' => (
is=>'rw',
- isa=>'Int',
+ isa=>Int,
reader=>'last_validated',
writer=>'_last_validated',
lazy=>1,
has 'replicant_type' => (
is=>'ro',
- isa=>'ClassName',
+ isa=>ClassName,
required=>1,
default=>'DBIx::Class::Storage::DBI',
handles=>{
actual replicant storage. For example if the $dsn element is something like:
"dbi:SQLite:dbname=dbfile"
-
+
You could access the specific replicant via:
$schema->storage->replicants->{'dbname=dbfile'}
-
+
This attribute also supports the following helper methods:
=over 4
has 'replicants' => (
is=>'rw',
metaclass => 'Collection::Hash',
- isa=>'HashRef[DBIx::Class::Storage::DBI]',
+ isa=>HashRef['Object'],
default=>sub {{}},
provides => {
'set' => 'set_replicant',
- 'get' => 'get_replicant',
+ 'get' => 'get_replicant',
'empty' => 'has_replicants',
'count' => 'num_replicants',
'delete' => 'delete_replicant',
+ 'values' => 'all_replicant_storages',
},
);
=head2 connect_replicants ($schema, Array[$connect_info])
-Given an array of $dsn suitable for connected to a database, create an
-L<DBIx::Class::Storage::DBI::Replicated::Replicant> object and store it in the
-L</replicants> attribute.
+Given an array of $dsn or connect_info structures suitable for connecting to a
+database, create an L<DBIx::Class::Storage::DBI::Replicated::Replicant> object
+and store it in the L</replicants> attribute.
=cut
sub connect_replicants {
my $self = shift @_;
my $schema = shift @_;
-
+
my @newly_created = ();
foreach my $connect_info (@_) {
+ $connect_info = [ $connect_info ]
+ if reftype $connect_info ne 'ARRAY';
+
+ croak "coderef replicant connect_info not supported"
+ if ref $connect_info->[0] && reftype $connect_info->[0] eq 'CODE';
+
my $replicant = $self->connect_replicant($schema, $connect_info);
- my ($key) = ($connect_info->[0]=~m/^dbi\:.+\:(.+)$/);
+
+ my $key = $connect_info->[0];
+ $key = $key->{dsn} if ref $key && reftype $key eq 'HASH';
+ ($key) = ($key =~ m/^dbi\:.+\:(.+)$/);
+
$self->set_replicant( $key => $replicant);
push @newly_created, $replicant;
}
-
+
return @newly_created;
}
my ($self, $schema, $connect_info) = @_;
my $replicant = $self->create_replicant($schema);
$replicant->connect_info($connect_info);
- $self->_safely_ensure_connected($replicant);
+
+## It is undesirable for Catalyst to connect at ->connect_replicants time, as
+## connections should only happen on the first request that uses the database.
+## So we try to set the driver without connecting, however this doesn't always
+## work, as a driver may need to connect to determine the DB version, and this
+## may fail.
+##
+## Why this is necessary at all, is that we need to have the final storage
+## class to apply the Replicant role.
+
+ $self->_safely($replicant, '->_determine_driver', sub {
+ $replicant->_determine_driver
+ });
+
DBIx::Class::Storage::DBI::Replicated::Replicant->meta->apply($replicant);
return $replicant;
}
connect. For the master database this is desirable, but since replicants are
allowed to fail, this behavior is not desirable. This method wraps the call
to ensure_connected in an eval in order to catch any generated errors. That
-way a slave to go completely offline (ie, the box itself can die) without
+way a slave can go completely offline (ie, the box itself can die) without
bringing down your entire pool of databases.
=cut
sub _safely_ensure_connected {
my ($self, $replicant, @args) = @_;
+
+ return $self->_safely($replicant, '->ensure_connected', sub {
+ $replicant->ensure_connected(@args)
+ });
+}
+
+=head2 _safely ($replicant, $name, $code)
+
+Execute C<$code> for operation C<$name> catching any exceptions and printing an
+error message to the C<< $replicant->debugobj >>.
+
+Returns 1 on success and undef on failure.
+
+=cut
+
+sub _safely {
+ my ($self, $replicant, $name, $code) = @_;
+
eval {
- $replicant->ensure_connected(@args);
+ $code->()
};
if ($@) {
$replicant
->debugobj
->print(
- sprintf( "Exception trying to ->ensure_connected for replicant %s, error is %s",
+ sprintf( "Exception trying to $name for replicant %s, error is %s",
$replicant->_dbi_connect_info->[0], $@)
);
return;
if($self->_safely_ensure_connected($replicant)) {
my $is_replicating = $replicant->is_replicating;
unless(defined $is_replicating) {
- $replicant->debugobj->print("Storage Driver ".ref $self." Does not support the 'is_replicating' method. Assuming you are manually managing.");
+ $replicant->debugobj->print("Storage Driver ".ref($self)." Does not support the 'is_replicating' method. Assuming you are manually managing.\n");
next;
} else {
if($is_replicating) {
my $lag_behind_master = $replicant->lag_behind_master;
unless(defined $lag_behind_master) {
- $replicant->debugobj->print("Storage Driver ".ref $self." Does not support the 'lag_behind_master' method. Assuming you are manually managing.");
+ $replicant->debugobj->print("Storage Driver ".ref($self)." Does not support the 'lag_behind_master' method. Assuming you are manually managing.\n");
next;
} else {
if($lag_behind_master <= $self->maximum_lag) {
use Moose::Role;
requires qw/_query_start/;
+with 'DBIx::Class::Storage::DBI::Replicated::WithDSN';
+use MooseX::Types::Moose 'Bool';
+
+use namespace::clean -except => 'meta';
=head1 NAME
=head1 SYNOPSIS
This class is used internally by L<DBIx::Class::Storage::DBI::Replicated>.
-
+
=head1 DESCRIPTION
Replicants are DBI Storages that follow a master DBI Storage. Typically this
has 'active' => (
is=>'rw',
- isa=>'Bool',
+ isa=>Bool,
lazy=>1,
required=>1,
default=>1,
This class defines the following methods.
-=head2 around: _query_start
-
-advice iof the _query_start method to add more debuggin
-
-=cut
-
-around '_query_start' => sub {
- my ($method, $self, $sql, @bind) = @_;
- my $dsn = $self->_dbi_connect_info->[0];
- $self->$method("DSN: $dsn SQL: $sql", @bind);
-};
-
=head2 debugobj
Override the debugobj method to redirect this method call back to the master.
=head1 ALSO SEE
-L<<a href="http://en.wikipedia.org/wiki/Replicant">http://en.wikipedia.org/wiki/Replicant</a>>
+L<http://en.wikipedia.org/wiki/Replicant>,
+L<DBIx::Class::Storage::DBI::Replicated>
=head1 AUTHOR
=cut
-1;
\ No newline at end of file
+1;
--- /dev/null
+package # hide from PAUSE
+ DBIx::Class::Storage::DBI::Replicated::Types;
+
+# DBIx::Class::Storage::DBI::Replicated::Types - Types used internally by
+# L<DBIx::Class::Storage::DBI::Replicated>
+
+use MooseX::Types
+ -declare => [qw/BalancerClassNamePart Weight DBICSchema DBICStorageDBI/];
+use MooseX::Types::Moose qw/ClassName Str Num/;
+
+class_type 'DBIx::Class::Storage::DBI';
+class_type 'DBIx::Class::Schema';
+
+subtype DBICSchema, as 'DBIx::Class::Schema';
+subtype DBICStorageDBI, as 'DBIx::Class::Storage::DBI';
+
+subtype BalancerClassNamePart,
+ as ClassName;
+
+coerce BalancerClassNamePart,
+ from Str,
+ via {
+ my $type = $_;
+ if($type=~m/^::/) {
+ $type = 'DBIx::Class::Storage::DBI::Replicated::Balancer'.$type;
+ }
+ Class::MOP::load_class($type);
+ $type;
+ };
+
+subtype Weight,
+ as Num,
+ where { $_ >= 0 },
+  message { 'weight must be a decimal greater than or equal to 0' };
+
+# AUTHOR
+#
+# John Napiorkowski <john.napiorkowski@takkle.com>
+#
+# LICENSE
+#
+# You may distribute this code under the same terms as Perl itself.
+
+1;
--- /dev/null
+package DBIx::Class::Storage::DBI::Replicated::WithDSN;
+
+use Moose::Role;
+requires qw/_query_start/;
+
+use namespace::clean -except => 'meta';
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::Replicated::WithDSN - A DBI Storage Role with DSN
+information in trace output
+
+=head1 SYNOPSIS
+
+This class is used internally by L<DBIx::Class::Storage::DBI::Replicated>.
+
+=head1 DESCRIPTION
+
+This role adds C<DSN: > info to storage debugging output.
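+
+With tracing enabled, a traced query then looks roughly like this (the DSN and
+SQL shown are placeholders):
+
+  SELECT [DSN_REPLICANT=dbi:mysql:database=mydb;host=slave1] me.id, me.name FROM artist me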
+
+=head1 METHODS
+
+This class defines the following methods.
+
+=head2 around: _query_start
+
+Add C<DSN: > to debugging output.
+
+=cut
+
+around '_query_start' => sub {
+ my ($method, $self, $sql, @bind) = @_;
+ my $dsn = $self->_dbi_connect_info->[0];
+ my($op, $rest) = (($sql=~m/^(\w+)(.+)$/),'NOP', 'NO SQL');
+ my $storage_type = $self->can('active') ? 'REPLICANT' : 'MASTER';
+
+ $self->$method("$op [DSN_$storage_type=$dsn]$rest", @bind);
+};
+
+=head1 ALSO SEE
+
+L<DBIx::Class::Storage::DBI>
+
+=head1 AUTHOR
+
+John Napiorkowski <john.napiorkowski@takkle.com>
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
+
+1;
use strict;
use warnings;
+
+use base qw/DBIx::Class::Storage::DBI/;
+use mro 'c3';
+
use POSIX 'strftime';
use File::Copy;
use File::Spec;
-use base qw/DBIx::Class::Storage::DBI::MultiDistinctEmulation/;
-
sub _dbh_last_insert_id {
my ($self, $dbh, $source, $col) = @_;
$dbh->func('last_insert_rowid');
return $backupfile;
}
-sub disconnect {
-
- # As described in this node http://www.perlmonks.org/?node_id=666210
- # there seems to be no sane way to ->disconnect a SQLite database with
- # cached statement handles. As per mst we just zap the cache and
- # proceed as normal.
-
- my $self = shift;
- if ($self->connected) {
- $self->_dbh->{CachedKids} = {};
- $self->next::method (@_);
- }
-}
-
+sub datetime_parser_type { return "DateTime::Format::SQLite"; }
1;
use strict;
use warnings;
-use base qw/DBIx::Class::Storage::DBI::NoBindVars/;
+use base qw/
+ DBIx::Class::Storage::DBI::Sybase::Base
+ DBIx::Class::Storage::DBI::NoBindVars
+/;
+use mro 'c3';
+
+sub _rebless {
+ my $self = shift;
+
+ my $dbtype = eval {
+ @{$self->_get_dbh
+ ->selectrow_arrayref(qq{sp_server_info \@attribute_id=1})
+ }[2]
+ };
+ unless ( $@ ) {
+ $dbtype =~ s/\W/_/gi;
+ my $subclass = "DBIx::Class::Storage::DBI::Sybase::${dbtype}";
+ if ($self->load_optional_class($subclass) && !$self->isa($subclass)) {
+ bless $self, $subclass;
+ $self->_rebless;
+ }
+ }
+}
+
+sub _dbh_last_insert_id {
+ my ($self, $dbh, $source, $col) = @_;
+ return ($dbh->selectrow_array('select @@identity'))[0];
+}
1;
you are using an MSSQL database via L<DBD::Sybase>, see
L<DBIx::Class::Storage::DBI::Sybase::MSSQL>.
+=head1 CAVEATS
+
+This storage driver uses L<DBIx::Class::Storage::DBI::NoBindVars> as a base.
+This means that bind variables will be interpolated (properly quoted of course)
+into the SQL query itself, without using bind placeholders.
+
+More importantly this means that caching of prepared statements is explicitly
+disabled, as the interpolation renders it useless.
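+
+As an illustration (the result class and generated SQL are schematic), executing
+a search such as:
+
+  $schema->resultset('Artist')->search({ name => "O'Brien" });
+
+produces SQL with the value quoted and interpolated in place, roughly
+
+  SELECT me.artistid, me.name FROM artist me WHERE ( me.name = 'O''Brien' )
+
+rather than a C<?> placeholder with a separate bind value.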
+
=head1 AUTHORS
Brandon L Black <blblack@gmail.com>
+Justin Hunter <justin.d.hunter@gmail.com>
+
=head1 LICENSE
You may distribute this code under the same terms as Perl itself.
--- /dev/null
+package # hide from PAUSE
+ DBIx::Class::Storage::DBI::Sybase::Base;
+
+use strict;
+use warnings;
+
+use base qw/DBIx::Class::Storage::DBI/;
+use mro 'c3';
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::Sybase::Base - Common functionality for drivers using
+DBD::Sybase
+
+=cut
+
+sub _ping {
+ my $self = shift;
+
+ my $dbh = $self->_dbh or return 0;
+
+ local $dbh->{RaiseError} = 1;
+ eval {
+ $dbh->do('select 1');
+ };
+
+ return $@ ? 0 : 1;
+}
+
+sub _placeholders_supported {
+ my $self = shift;
+ my $dbh = $self->_get_dbh;
+
+ return eval {
+# There's also $dbh->{syb_dynamic_supported} but it can be inaccurate for this
+# purpose.
+ local $dbh->{PrintError} = 0;
+ local $dbh->{RaiseError} = 1;
+# this specifically tests a bind that is NOT a string
+ $dbh->selectrow_array('select 1 where 1 = ?', {}, 1);
+ };
+}
+
+1;
+
+=head1 AUTHORS
+
+See L<DBIx::Class/CONTRIBUTORS>.
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
use strict;
use warnings;
-use Class::C3;
-use base qw/DBIx::Class::Storage::DBI::MSSQL DBIx::Class::Storage::DBI::Sybase/;
+use Carp::Clan qw/^DBIx::Class/;
+
+carp 'Setting of storage_type is redundant as connections through DBD::Sybase'
+ .' are now properly recognized and reblessed into the appropriate subclass'
+ .' (DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server in the'
+ .' case of MSSQL). Please remove the explicit call to'
+ .q/ $schema->storage_type('::DBI::Sybase::MSSQL')/
+ .', as this storage class has been deprecated in favor of the autodetected'
+ .' ::DBI::Sybase::Microsoft_SQL_Server';
+
+
+use base qw/DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server/;
+use mro 'c3';
1;
=head1 NAME
-DBIx::Class::Storage::DBI::Sybase::MSSQL - Storage::DBI subclass for MSSQL via
-DBD::Sybase
+DBIx::Class::Storage::DBI::Sybase::MSSQL - (DEPRECATED) Legacy storage class for MSSQL via DBD::Sybase
+
+=head1 NOTE
+
+Connections through DBD::Sybase are now correctly recognized and reblessed
+into the appropriate subclass (L<DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server>
+in the case of MSSQL). Please remove the explicit storage_type setting from your
+schema.
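
A minimal sketch of the change (C<MyApp::Schema> is a placeholder name):

  # no longer needed:
  # $schema->storage_type('::DBI::Sybase::MSSQL');

  my $schema = MyApp::Schema->connect($dsn, $user, $pass);
  # the storage is autodetected and reblessed into
  # ::DBI::Sybase::Microsoft_SQL_Server on connection
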
=head1 SYNOPSIS
Brandon L Black <blblack@gmail.com>
+Justin Hunter <justin.d.hunter@gmail.com>
+
=head1 LICENSE
You may distribute this code under the same terms as Perl itself.
--- /dev/null
+package DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server;
+
+use strict;
+use warnings;
+
+use base qw/
+ DBIx::Class::Storage::DBI::Sybase::Base
+ DBIx::Class::Storage::DBI::MSSQL
+/;
+use mro 'c3';
+
+sub _rebless {
+ my $self = shift;
+ my $dbh = $self->_get_dbh;
+
+ if (not $self->_placeholders_supported) {
+ bless $self,
+ 'DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server::NoBindVars';
+ $self->_rebless;
+ }
+
+# LongReadLen doesn't work with MSSQL through DBD::Sybase, and the default is
+# huge on some versions of SQL server and can cause memory problems, so we
+# fix it up here.
+ my $text_size = eval { $self->_dbi_connect_info->[-1]->{LongReadLen} } ||
+ 32768; # the DBD::Sybase default
+
+ $dbh->do("set textsize $text_size");
+}
+
+1;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server - Support for Microsoft
+SQL Server via DBD::Sybase
+
+=head1 SYNOPSIS
+
+This subclass supports MSSQL server connections via L<DBD::Sybase>.
+
+=head1 DESCRIPTION
+
+This driver tries to determine whether your version of L<DBD::Sybase> and
+supporting libraries (usually FreeTDS) support using placeholders, if not the
+storage will be reblessed to
+L<DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server::NoBindVars>.
+
+The MSSQL specific functionality is provided by
+L<DBIx::Class::Storage::DBI::MSSQL>.
+
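As the C<_rebless> code above suggests, the C<textsize> applied on connection
can be influenced by passing C<LongReadLen> in the connect attributes
(a sketch; the value and schema name are illustrative):

  my $schema = MyApp::Schema->connect(
    $dsn, $user, $pass,
    { LongReadLen => 65536 },  # picked up and applied via 'set textsize'
  );
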
+=head1 AUTHOR
+
+See L<DBIx::Class/CONTRIBUTORS>.
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
--- /dev/null
+package DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server::NoBindVars;
+
+use strict;
+use warnings;
+
+use base qw/
+ DBIx::Class::Storage::DBI::NoBindVars
+ DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server
+/;
+use mro 'c3';
+
+sub _rebless {
+ my $self = shift;
+
+ $self->disable_sth_caching(1);
+}
+
+1;
+
+=head1 NAME
+
+DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server::NoBindVars - Support for Microsoft
+SQL Server via DBD::Sybase without placeholders
+
+=head1 SYNOPSIS
+
+This subclass supports MSSQL server connections via DBD::Sybase when ? style
+placeholders are not available.
+
+=head1 DESCRIPTION
+
+If you are using this driver then your combination of L<DBD::Sybase> and
+libraries (most likely FreeTDS) does not support ? style placeholders.
+
+This storage driver uses L<DBIx::Class::Storage::DBI::NoBindVars> as a base.
+This means that bind variables will be interpolated (properly quoted of course)
+into the SQL query itself, without using bind placeholders.
+
+More importantly this means that caching of prepared statements is explicitly
+disabled, as the interpolation renders it useless.
+
+In all other respects, it is a subclass of
+L<DBIx::Class::Storage::DBI::Sybase::Microsoft_SQL_Server>.
+
+=head1 AUTHOR
+
+See L<DBIx::Class/CONTRIBUTORS>.
+
+=head1 LICENSE
+
+You may distribute this code under the same terms as Perl itself.
+
+=cut
use strict;
use warnings;
-use base qw/DBIx::Class::Storage::DBI/;
+use base qw/
+ DBIx::Class::Storage::DBI::MultiColumnIn
+ DBIx::Class::Storage::DBI::AmbiguousGlob
+ DBIx::Class::Storage::DBI
+/;
+use mro 'c3';
-# __PACKAGE__->load_components(qw/PK::Auto/);
+__PACKAGE__->sql_maker_class('DBIx::Class::SQLAHacks::MySQL');
sub with_deferred_fk_checks {
my ($self, $sub) = @_;
- $self->dbh->do('SET foreign_key_checks=0');
+ $self->_do_query('SET FOREIGN_KEY_CHECKS = 0');
$sub->();
- $self->dbh->do('SET foreign_key_checks=1');
+ $self->_do_query('SET FOREIGN_KEY_CHECKS = 1');
+}
+
+sub connect_call_set_strict_mode {
+ my $self = shift;
+
+ # the @@sql_mode puts back what was previously set on the session handle
+ $self->_do_query(q|SET SQL_MODE = CONCAT('ANSI,TRADITIONAL,ONLY_FULL_GROUP_BY,', @@sql_mode)|);
+ $self->_do_query(q|SET SQL_AUTO_IS_NULL = 0|);
}
sub _dbh_last_insert_id {
sub _svp_begin {
my ($self, $name) = @_;
- $self->dbh->do("SAVEPOINT $name");
+ $self->_get_dbh->do("SAVEPOINT $name");
}
sub _svp_release {
my ($self, $name) = @_;
- $self->dbh->do("RELEASE SAVEPOINT $name");
+ $self->_get_dbh->do("RELEASE SAVEPOINT $name");
}
sub _svp_rollback {
my ($self, $name) = @_;
- $self->dbh->do("ROLLBACK TO SAVEPOINT $name")
+ $self->_get_dbh->do("ROLLBACK TO SAVEPOINT $name")
}
-
+
sub is_replicating {
- my $status = shift->dbh->selectrow_hashref('show slave status');
+ my $status = shift->_get_dbh->selectrow_hashref('show slave status');
return ($status->{Slave_IO_Running} eq 'Yes') && ($status->{Slave_SQL_Running} eq 'Yes');
}
sub lag_behind_master {
- return shift->dbh->selectrow_hashref('show slave status')->{Seconds_Behind_Master};
+ return shift->_get_dbh->selectrow_hashref('show slave status')->{Seconds_Behind_Master};
+}
+
+# MySQL can not do subquery update/deletes - the only way is slow per-row operations.
+# This assumes you have set proper transaction isolation and use InnoDB.
+sub _subq_update_delete {
+ return shift->_per_row_update_delete (@_);
}
1;
=head1 NAME
-DBIx::Class::Storage::DBI::mysql - Automatic primary key class for MySQL
+DBIx::Class::Storage::DBI::mysql - Storage::DBI class implementing MySQL specifics
=head1 SYNOPSIS
- # In your table classes
- __PACKAGE__->load_components(qw/PK::Auto Core/);
- __PACKAGE__->set_primary_key('id');
+Storage::DBI autodetects the underlying MySQL database, and re-blesses the
+C<$storage> object into this class.
+
+ my $schema = MyDb::Schema->connect( $dsn, $user, $pass, { on_connect_call => 'set_strict_mode' } );
=head1 DESCRIPTION
-This class implements autoincrements for MySQL.
+This class implements MySQL specific bits of L<DBIx::Class::Storage::DBI>.
+
+It also provides a one-stop on-connect macro C<set_strict_mode> which sets
+session variables such that MySQL behaves more predictably as far as the
+SQL standard is concerned.
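
As seen in C<connect_call_set_strict_mode> above, enabling the macro issues
the following statements on every connect:

  SET SQL_MODE = CONCAT('ANSI,TRADITIONAL,ONLY_FULL_GROUP_BY,', @@sql_mode);
  SET SQL_AUTO_IS_NULL = 0;
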
=head1 AUTHORS
-Matt S. Trout <mst@shadowcatsystems.co.uk>
+See L<DBIx::Class/CONTRIBUTORS>
=head1 LICENSE
use base qw/Class::Accessor::Grouped/;
use IO::File;
-__PACKAGE__->mk_group_accessors(simple => qw/callback debugfh/);
+__PACKAGE__->mk_group_accessors(simple => qw/callback debugfh silence/);
=head1 NAME
=head1 DESCRIPTION
This class is called by DBIx::Class::Storage::DBI as a means of collecting
-statistics on it's actions. Using this class alone merely prints the SQL
+statistics on its actions. Using this class alone merely prints the SQL
executed, the fact that it completes and begin/end notification for
transactions.
sub print {
my ($self, $msg) = @_;
+ return if $self->silence;
+
if(!defined($self->debugfh())) {
my $fh;
my $debug_env = $ENV{DBIX_CLASS_STORAGE_DBI_DEBUG}
$self->debugfh->print($msg);
}
+=head2 silence
+
+Turn off all output if set to true.
+
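For example, to temporarily suppress trace output on a storage that already
has debugging enabled (a sketch using the standard C<debugobj> accessor):

  $schema->storage->debugobj->silence(1);
  # ... noisy operations ...
  $schema->storage->debugobj->silence(0);
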
=head2 txn_begin
Called when a transaction begins.
sub txn_begin {
my $self = shift;
+ return if $self->callback;
+
$self->print("BEGIN WORK\n");
}
sub txn_rollback {
my $self = shift;
+ return if $self->callback;
+
$self->print("ROLLBACK\n");
}
sub txn_commit {
my $self = shift;
+ return if $self->callback;
+
$self->print("COMMIT\n");
}
sub svp_begin {
my ($self, $name) = @_;
+ return if $self->callback;
+
$self->print("SAVEPOINT $name\n");
}
sub svp_release {
my ($self, $name) = @_;
- $self->print("RELEASE SAVEPOINT $name\n");
+ return if $self->callback;
+
+ $self->print("RELEASE SAVEPOINT $name\n");
}
=head2 svp_rollback
sub svp_rollback {
my ($self, $name) = @_;
- $self->print("ROLLBACK TO SAVEPOINT $name\n");
+ return if $self->callback;
+
+ $self->print("ROLLBACK TO SAVEPOINT $name\n");
}
=head2 query_start
-package # Hide from pause for now - till we get it working
- DBIx::Class::Storage::TxnScopeGuard;
+package DBIx::Class::Storage::TxnScopeGuard;
use strict;
use warnings;
+use Carp ();
sub new {
my ($class, $storage) = @_;
=head1 NAME
-DBIx::Class::Storage::TxnScopeGuard - Experimental
+DBIx::Class::Storage::TxnScopeGuard - Scope-based transaction handling
=head1 SYNOPSIS
=head2 new
-Creating an instance of this class will start a new transaction. Expects a
+Creating an instance of this class will start a new transaction (by
+implicitly calling L<DBIx::Class::Storage/txn_begin>). Expects a
L<DBIx::Class::Storage> object as its only argument.
=head2 commit
Commit the transaction, and stop guarding the scope. If this method is not
-called (i.e. an exception is thrown) and this object goes out of scope then
-the transaction is rolled back.
+called and this object goes out of scope (e.g. because an exception is thrown),
+then the transaction is rolled back via L<DBIx::Class::Storage/txn_rollback>.
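
A typical usage sketch, via L<DBIx::Class::Schema/txn_scope_guard> (the
C<Artist> source is a placeholder):

  my $guard = $schema->txn_scope_guard;

  $schema->resultset('Artist')->create({ name => 'Hypothetical Artist' });

  $guard->commit;  # omit this and the transaction is rolled back
                   # when $guard goes out of scope
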
=cut
package Artist;
__PACKAGE__->load_components(qw/UTF8Columns Core/);
__PACKAGE__->utf8_columns(qw/name description/);
-
+
 # the following accessors then return strings with the utf8 flag set
$artist->name;
$artist->get_column('description');
$DEBUG = 0 unless defined $DEBUG;
use Exporter;
-use Data::Dumper;
use SQL::Translator::Utils qw(debug normalize_name);
+use Carp::Clan qw/^SQL::Translator|^DBIx::Class/;
use base qw(Exporter);
my $dbicschema = $args->{'DBIx::Class::Schema'} || $args->{"DBIx::Schema"} ||$data;
$dbicschema ||= $args->{'package'};
my $limit_sources = $args->{'sources'};
-
- die 'No DBIx::Class::Schema' unless ($dbicschema);
+
+ croak 'No DBIx::Class::Schema' unless ($dbicschema);
if (!ref $dbicschema) {
eval "use $dbicschema;";
- die "Can't load $dbicschema ($@)" if($@);
+ croak "Can't load $dbicschema ($@)" if($@);
}
my $schema = $tr->schema;
$schema->name( ref($dbicschema) . " v" . ($dbicschema->schema_version || '1.x'))
unless ($schema->name);
- my %seen_tables;
-
my @monikers = sort $dbicschema->sources;
if ($limit_sources) {
my $ref = ref $limit_sources || '';
- die "'sources' parameter must be an array or hash ref" unless $ref eq 'ARRAY' || ref eq 'HASH';
+ $dbicschema->throw_exception ("'sources' parameter must be an array or hash ref")
+ unless( $ref eq 'ARRAY' || ref eq 'HASH' );
# limit monikers to those specified in
my $sources;
}
}
+ my %tables;
foreach my $moniker (sort @table_monikers)
{
my $source = $dbicschema->source($moniker);
-
- # Skip custom query sources
- next if ref($source->name);
+ my $table_name = $source->name;
- # Its possible to have multiple DBIC source using same table
- next if $seen_tables{$source->name}++;
+ # FIXME - this isn't the right way to do it, but sqlt does not
+ # support quoting properly to be signaled about this
+ $table_name = $$table_name if ref $table_name eq 'SCALAR';
+
+ # It's possible to have multiple DBIC sources using the same table
+ next if $tables{$table_name};
- my $table = $schema->add_table(
- name => $source->name,
+ $tables{$table_name}{source} = $source;
+ my $table = $tables{$table_name}{object} = SQL::Translator::Schema::Table->new(
+ name => $table_name,
type => 'TABLE',
- ) || die $schema->error;
- my $colcount = 0;
+ );
foreach my $col ($source->columns)
{
# assuming column_info in dbic is the same as DBI (?)
if ($colinfo{is_nullable}) {
$colinfo{default} = '' unless exists $colinfo{default};
}
- my $f = $table->add_field(%colinfo) || die $table->error;
+ my $f = $table->add_field(%colinfo)
+ || $dbicschema->throw_exception ($table->error);
}
$table->primary_key($source->primary_columns);
my @primary = $source->primary_columns;
my %unique_constraints = $source->unique_constraints;
foreach my $uniq (sort keys %unique_constraints) {
- if (!$source->compare_relationship_keys($unique_constraints{$uniq}, \@primary)) {
+ if (!$source->_compare_relationship_keys($unique_constraints{$uniq}, \@primary)) {
$table->add_constraint(
type => 'unique',
name => $uniq,
my @rels = $source->relationships();
my %created_FK_rels;
-
+
# global add_fk_index set in parser_args
my $add_fk_index = (exists $args->{add_fk_index} && ($args->{add_fk_index} == 0)) ? 0 : 1;
my $othertable = $source->related_source($rel);
my $rel_table = $othertable->name;
+ # FIXME - this isn't the right way to do it, but sqlt does not
+ # support quoting properly to be signaled about this
+ $rel_table = $$rel_table if ref $rel_table eq 'SCALAR';
+
my $reverse_rels = $source->reverse_relationship_info($rel);
my ($otherrelname, $otherrelationship) = each %{$reverse_rels};
my $idx;
my %other_columns_idx = map {'foreign.'.$_ => ++$idx } $othertable->columns;
my @cond = sort { $other_columns_idx{$a} cmp $other_columns_idx{$b} } keys(%{$rel_info->{cond}});
-
+
# Get the key information, mapping off the foreign/self markers
my @refkeys = map {/^\w+\.(\w+)$/} @cond;
my @keys = map {$rel_info->{cond}->{$_} =~ /^\w+\.(\w+)$/} @cond;
# this is supposed to indicate a has_one/might_have...
# where's the introspection!!?? :)
else {
- $fk_constraint = not $source->compare_relationship_keys(\@keys, \@primary);
+ $fk_constraint = not $source->_compare_relationship_keys(\@keys, \@primary);
}
my $cascade;
$cascade->{$c} = $rel_info->{attrs}{"on_$c"};
}
else {
- warn "SQLT attribute 'on_$c' was supplied for relationship '$moniker/$rel', which does not appear to be a foreign constraint. "
+ carp "SQLT attribute 'on_$c' was supplied for relationship '$moniker/$rel', which does not appear to be a foreign constraint. "
. "If you are sure that SQLT must generate a constraint for this relationship, add 'is_foreign_key_constraint => 1' to the attributes.\n";
}
}
my $key_test = join("\x00", @keys);
next if $created_FK_rels{$rel_table}->{$key_test};
- my $is_deferrable = $rel_info->{attrs}{is_deferrable};
-
- # global parser_args add_fk_index param can be overridden on the rel def
- my $add_fk_index_rel = (exists $rel_info->{attrs}{add_fk_index}) ? $rel_info->{attrs}{add_fk_index} : $add_fk_index;
+ if (scalar(@keys)) {
+ $created_FK_rels{$rel_table}->{$key_test} = 1;
+
+ my $is_deferrable = $rel_info->{attrs}{is_deferrable};
+
+ # do not consider deferrable constraints and self-references
+ # for dependency calculations
+ if (! $is_deferrable and $rel_table ne $table_name) {
+ $tables{$table_name}{foreign_table_deps}{$rel_table}++;
+ }
- $created_FK_rels{$rel_table}->{$key_test} = 1;
- if (scalar(@keys)) {
$table->add_constraint(
type => 'foreign_key',
- name => join('_', $table->name, 'fk', @keys),
+ name => join('_', $table_name, 'fk', @keys),
fields => \@keys,
reference_fields => \@refkeys,
reference_table => $rel_table,
on_update => uc ($cascade->{update} || ''),
(defined $is_deferrable ? ( deferrable => $is_deferrable ) : ()),
);
-
+
+ # global parser_args add_fk_index param can be overridden on the rel def
+ my $add_fk_index_rel = (exists $rel_info->{attrs}{add_fk_index}) ? $rel_info->{attrs}{add_fk_index} : $add_fk_index;
+
if ($add_fk_index_rel) {
my $index = $table->add_index(
- name => join('_', $table->name, 'idx', @keys),
+ name => join('_', $table_name, 'idx', @keys),
fields => \@keys,
type => 'NORMAL',
);
}
}
}
-
- $source->_invoke_sqlt_deploy_hook($table);
+
}
+ # attach the tables to the schema in dependency order
+ my $dependencies = {
+ map { $_ => _resolve_deps ($_, \%tables) } (keys %tables)
+ };
+ for my $table (sort
+ {
+ keys %{$dependencies->{$a} || {} } <=> keys %{ $dependencies->{$b} || {} }
+ ||
+ $a cmp $b
+ }
+ (keys %tables)
+ ) {
+ $schema->add_table ($tables{$table}{object});
+ $tables{$table}{source} -> _invoke_sqlt_deploy_hook( $tables{$table}{object} );
+
+ # the hook might have already removed the table
+ if ($schema->get_table($table) && $table =~ /^ \s* \( \s* SELECT \s+/ix) {
+ warn <<'EOW';
+
+Custom SQL through ->name(\'( SELECT ...') is DEPRECATED, for more details see
+"Arbitrary SQL through a custom ResultSource" in DBIx::Class::Manual::Cookbook
+or http://search.cpan.org/dist/DBIx-Class/lib/DBIx/Class/Manual/Cookbook.pod
+
+EOW
+
+ # remove the table - there is no way anyone would actually
+ # want to deploy a source defined by custom SQL
+ $schema->drop_table ($table);
+ }
+ }
+
+ my %views;
foreach my $moniker (sort @view_monikers)
{
my $source = $dbicschema->source($moniker);
+ my $view_name = $source->name;
+
+ # FIXME - this isn't the right way to do it, but sqlt does not
+ # support quoting properly to be signaled about this
+ $view_name = $$view_name if ref $view_name eq 'SCALAR';
+
# Skip custom query sources
- next if ref($source->name);
+ next if ref $view_name;
# Its possible to have multiple DBIC source using same table
- next if $seen_tables{$source->name}++;
+ next if $views{$view_name}++;
- my $view = $schema->add_view(
- name => $source->name,
+ my $view = $schema->add_view (
+ name => $view_name,
fields => [ $source->columns ],
$source->view_definition ? ( 'sql' => $source->view_definition ) : ()
- );
- if ($source->result_class->can('sqlt_deploy_hook')) {
- $source->result_class->sqlt_deploy_hook($view);
- }
+ ) || $dbicschema->throw_exception ($schema->error);
$source->_invoke_sqlt_deploy_hook($view);
}
+
if ($dbicschema->can('sqlt_deploy_hook')) {
$dbicschema->sqlt_deploy_hook($schema);
}
return 1;
}
+#
+# Quick and dirty dependency graph calculator
+#
+sub _resolve_deps {
+ my ($table, $tables, $seen) = @_;
+
+ my $ret = {};
+ $seen ||= {};
+
+ # copy and bump all deps by one (so we can reconstruct the chain)
+ my %seen = map { $_ => $seen->{$_} + 1 } (keys %$seen);
+ $seen{$table} = 1;
+
+ for my $dep (keys %{$tables->{$table}{foreign_table_deps}} ) {
+
+ if ($seen->{$dep}) {
+
+ # warn and remove the circular constraint so we don't get flooded with the same warning over and over
+ #carp sprintf ("Circular dependency detected, schema may not be deployable:\n%s\n",
+ # join (' -> ', (sort { $seen->{$b} <=> $seen->{$a} } (keys %$seen) ), $table, $dep )
+ #);
+ #delete $tables->{$table}{foreign_table_deps}{$dep};
+
+ return {};
+ }
+
+ my $subdeps = _resolve_deps ($dep, $tables, \%seen);
+ $ret->{$_} += $subdeps->{$_} for ( keys %$subdeps );
+
+ ++$ret->{$dep};
+ }
+
+ return $ret;
+}
+
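# Illustration (hypothetical schema): with foreign keys cd -> artist and
# track -> cd, _resolve_deps produces roughly
#   artist => {},
#   cd     => { artist => 1 },
#   track  => { artist => 1, cd => 1 },
# so tables with fewer dependencies (artist first) are attached to the
# schema before the tables that reference them.
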
1;
=head1 NAME
## Standalone
use MyApp::Schema;
use SQL::Translator;
-
+
my $schema = MyApp::Schema->connect;
my $trans = SQL::Translator->new (
parser => 'SQL::Translator::Parser::DBIx::Class',
C<SQL::Translator::Parser::DBIx::Class> reads a DBIx::Class schema,
interrogates the columns, and stuffs it all in an $sqlt_schema object.
-It's primary use is in deploying database layouts described as a set
-of L<DBIx::Class> classes, to a database. To do this, see the
-L<DBIx::Class::Schema/deploy> method.
+Its primary use is in deploying database layouts described as a set
+of L<DBIx::Class> classes, to a database. To do this, see
+L<DBIx::Class::Schema/deploy>.
This can also be achieved by having DBIx::Class export the schema as a
set of SQL files ready for import into your database, or passed to
other machines that need to have your application installed but don't
-have SQL::Translator installed. To do this see the
-L<DBIx::Class::Schema/create_ddl_dir> method.
+have SQL::Translator installed. To do this see
+L<DBIx::Class::Schema/create_ddl_dir>.
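
A hedged sketch of the latter approach (producer names, version and output
directory are illustrative):

  $schema->create_ddl_dir(
    [qw/SQLite MySQL PostgreSQL/],
    '0.1',      # schema version embedded in the generated filenames
    './sql/',   # output directory
  );
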
=head1 SEE ALSO
use SQL::Translator::Schema::Constants;
use SQL::Translator::Utils qw(header_comment);
+use Data::Dumper ();
## Skip all column type translation, as we want to use whatever the parser got.
$tableextras{$table->name} .= "\n__PACKAGE__->belongs_to('" .
$cont->fields->[0]->name . "', '" .
"${dbixschema}::" . $cont->reference_table . "');\n";
-
+
my $other = "\n__PACKAGE__->has_many('" .
"get_" . $table->name. "', '" .
"${dbixschema}::" . $table->name. "', '" .
qw( MULTICREATE_DEBUG )
],
},
+ 'DBIx::Class::ResultSource' => {
+ ignore => [qw/
+ compare_relationship_keys
+ pk_depends_on
+ resolve_condition
+ resolve_join
+ resolve_prefetch
+ /],
+ },
'DBIx::Class::Storage' => {
ignore => [
qw(cursor)
'DBIx::Class::ResultSetManager' => { skip => 1 },
'DBIx::Class::ResultSourceProxy' => { skip => 1 },
'DBIx::Class::Storage::DBI' => { skip => 1 },
+ 'DBIx::Class::Storage::DBI::Replicated::Types' => { skip => 1 },
'DBIx::Class::Storage::DBI::DB2' => { skip => 1 },
'DBIx::Class::Storage::DBI::MSSQL' => { skip => 1 },
- 'DBIx::Class::Storage::DBI::MultiDistinctEmulation' => { skip => 1 },
+ 'DBIx::Class::Storage::DBI::Sybase::MSSQL' => { skip => 1 },
'DBIx::Class::Storage::DBI::ODBC400' => { skip => 1 },
'DBIx::Class::Storage::DBI::ODBC::DB2_400_SQL' => { skip => 1 },
+ 'DBIx::Class::Storage::DBI::ODBC::Microsoft_SQL_Server' => { skip => 1 },
'DBIx::Class::Storage::DBI::Oracle' => { skip => 1 },
'DBIx::Class::Storage::DBI::Pg' => { skip => 1 },
'DBIx::Class::Storage::DBI::SQLite' => { skip => 1 },
'DBIx::Class::Storage::DBI::mysql' => { skip => 1 },
+ 'DBIx::Class::SQLAHacks' => { skip => 1 },
+ 'DBIx::Class::SQLAHacks::MySQL' => { skip => 1 },
+ 'DBIx::Class::SQLAHacks::MSSQL' => { skip => 1 },
'SQL::Translator::Parser::DBIx::Class' => { skip => 1 },
'SQL::Translator::Producer::DBIx::Class::File' => { skip => 1 },
-#!/usr/bin/perl -w
-#Simon Ilyushchenko, 12/05/05
-#Testing the case when we try to inject into @ISA a class that's already a parent of the target class.
use strict;
use Test::More tests => 2;
+use MRO::Compat;
+
+use lib qw(t/lib);
+use DBICTest; # do not remove even though it is not used
{
package AAA;
__PACKAGE__->inject_base( __PACKAGE__, 'DBIx::Class::Core' );
}
-eval { Class::C3::calculateMRO('BBB'); };
+eval { mro::get_linear_isa('BBB'); };
ok (! $@, "Correctly skipped injecting a direct parent of class BBB");
-eval { Class::C3::calculateMRO('CCC'); };
+eval { mro::get_linear_isa('CCC'); };
ok (! $@, "Correctly skipped injecting an indirect parent of class BBB");
use base qw/DBIx::Class::ResultSource::Table/;
}
-plan tests => 3;
+plan tests => 4;
my $schema = DBICTest->init_schema();
my $artist_source = $schema->source('Artist');
}
{
+ my $source = $schema->source('DBICTest::Artist');
+ $schema->register_source($source->source_name, $source);
+ is($warn, '', "re-registering an existing source under the same name causes no errors");
+}
+
+{
my $new_source_name = 'Artist->preview(artist_preview)';
$schema->register_source( $new_source_name => $new_source );
use strict;
-use warnings;
+use warnings;
use Test::More;
+use Test::Exception;
use lib qw(t/lib);
use DBICTest;
-plan tests => 21;
-
-# perl -le'my $letter = 'a'; for my $i (4..10000) { $letter++; print "[ $i, \"$letter\" ]," }' > tests.txt
+plan tests => 23;
my $schema = DBICTest->init_schema();
-$schema->populate('Artist', [
-[ qw/artistid name/ ],
-[ 4, "b" ],
-[ 5, "c" ],
-[ 6, "d" ],
-[ 7, "e" ],
-[ 8, "f" ],
-[ 9, "g" ],
-[ 10, "h" ],
-[ 11, "i" ],
-[ 12, "j" ],
-[ 13, "k" ],
-[ 14, "l" ],
-[ 15, "m" ],
-[ 16, "n" ],
-[ 17, "o" ],
-[ 18, "p" ],
-[ 19, "q" ],
-[ 20, "r" ],
-[ 21, "s" ],
-[ 22, "t" ],
-[ 23, "u" ],
-[ 24, "v" ],
-[ 25, "w" ],
-[ 26, "x" ],
-[ 27, "y" ],
-[ 28, "z" ],
-[ 29, "aa" ],
-[ 30, "ab" ],
-[ 31, "ac" ],
-[ 32, "ad" ],
-[ 33, "ae" ],
-[ 34, "af" ],
-[ 35, "ag" ],
-[ 36, "ah" ],
-[ 37, "ai" ],
-[ 38, "aj" ],
-[ 39, "ak" ],
-[ 40, "al" ],
-[ 41, "am" ],
-[ 42, "an" ],
-[ 43, "ao" ],
-[ 44, "ap" ],
-[ 45, "aq" ],
-[ 46, "ar" ],
-[ 47, "as" ],
-[ 48, "at" ],
-[ 49, "au" ],
-[ 50, "av" ],
-[ 51, "aw" ],
-[ 52, "ax" ],
-[ 53, "ay" ],
-[ 54, "az" ],
-[ 55, "ba" ],
-[ 56, "bb" ],
-[ 57, "bc" ],
-[ 58, "bd" ],
-[ 59, "be" ],
-[ 60, "bf" ],
-[ 61, "bg" ],
-[ 62, "bh" ],
-[ 63, "bi" ],
-[ 64, "bj" ],
-[ 65, "bk" ],
-[ 66, "bl" ],
-[ 67, "bm" ],
-[ 68, "bn" ],
-[ 69, "bo" ],
-[ 70, "bp" ],
-[ 71, "bq" ],
-[ 72, "br" ],
-[ 73, "bs" ],
-[ 74, "bt" ],
-[ 75, "bu" ],
-[ 76, "bv" ],
-[ 77, "bw" ],
-[ 78, "bx" ],
-[ 79, "by" ],
-[ 80, "bz" ],
-[ 81, "ca" ],
-[ 82, "cb" ],
-[ 83, "cc" ],
-[ 84, "cd" ],
-[ 85, "ce" ],
-[ 86, "cf" ],
-[ 87, "cg" ],
-[ 88, "ch" ],
-[ 89, "ci" ],
-[ 90, "cj" ],
-[ 91, "ck" ],
-[ 92, "cl" ],
-[ 93, "cm" ],
-[ 94, "cn" ],
-[ 95, "co" ],
-[ 96, "cp" ],
-[ 97, "cq" ],
-[ 98, "cr" ],
-[ 99, "cs" ],
-[ 100, "ct" ],
-[ 101, "cu" ],
-[ 102, "cv" ],
-[ 103, "cw" ],
-[ 104, "cx" ],
-[ 105, "cy" ],
-[ 106, "cz" ],
-[ 107, "da" ],
-[ 108, "db" ],
-[ 109, "dc" ],
-[ 110, "dd" ],
-[ 111, "de" ],
-[ 112, "df" ],
-[ 113, "dg" ],
-[ 114, "dh" ],
-[ 115, "di" ],
-[ 116, "dj" ],
-[ 117, "dk" ],
-[ 118, "dl" ],
-[ 119, "dm" ],
-[ 120, "dn" ],
-[ 121, "do" ],
-[ 122, "dp" ],
-[ 123, "dq" ],
-[ 124, "dr" ],
-[ 125, "ds" ],
-[ 126, "dt" ],
-[ 127, "du" ],
-[ 128, "dv" ],
-[ 129, "dw" ],
-[ 130, "dx" ],
-[ 131, "dy" ],
-[ 132, "dz" ],
-[ 133, "ea" ],
-[ 134, "eb" ],
-[ 135, "ec" ],
-[ 136, "ed" ],
-[ 137, "ee" ],
-[ 138, "ef" ],
-[ 139, "eg" ],
-[ 140, "eh" ],
-[ 141, "ei" ],
-[ 142, "ej" ],
-[ 143, "ek" ],
-[ 144, "el" ],
-[ 145, "em" ],
-[ 146, "en" ],
-[ 147, "eo" ],
-[ 148, "ep" ],
-[ 149, "eq" ],
-[ 150, "er" ],
-[ 151, "es" ],
-[ 152, "et" ],
-[ 153, "eu" ],
-[ 154, "ev" ],
-[ 155, "ew" ],
-[ 156, "ex" ],
-[ 157, "ey" ],
-[ 158, "ez" ],
-[ 159, "fa" ],
-[ 160, "fb" ],
-[ 161, "fc" ],
-[ 162, "fd" ],
-[ 163, "fe" ],
-[ 164, "ff" ],
-[ 165, "fg" ],
-[ 166, "fh" ],
-[ 167, "fi" ],
-[ 168, "fj" ],
-[ 169, "fk" ],
-[ 170, "fl" ],
-[ 171, "fm" ],
-[ 172, "fn" ],
-[ 173, "fo" ],
-[ 174, "fp" ],
-[ 175, "fq" ],
-[ 176, "fr" ],
-[ 177, "fs" ],
-[ 178, "ft" ],
-[ 179, "fu" ],
-[ 180, "fv" ],
-[ 181, "fw" ],
-[ 182, "fx" ],
-[ 183, "fy" ],
-[ 184, "fz" ],
-[ 185, "ga" ],
-[ 186, "gb" ],
-[ 187, "gc" ],
-[ 188, "gd" ],
-[ 189, "ge" ],
-[ 190, "gf" ],
-[ 191, "gg" ],
-[ 192, "gh" ],
-[ 193, "gi" ],
-[ 194, "gj" ],
-[ 195, "gk" ],
-[ 196, "gl" ],
-[ 197, "gm" ],
-[ 198, "gn" ],
-[ 199, "go" ],
-[ 200, "gp" ],
-[ 201, "gq" ],
-[ 202, "gr" ],
-[ 203, "gs" ],
-[ 204, "gt" ],
-[ 205, "gu" ],
-[ 206, "gv" ],
-[ 207, "gw" ],
-[ 208, "gx" ],
-[ 209, "gy" ],
-[ 210, "gz" ],
-[ 211, "ha" ],
-[ 212, "hb" ],
-[ 213, "hc" ],
-[ 214, "hd" ],
-[ 215, "he" ],
-[ 216, "hf" ],
-[ 217, "hg" ],
-[ 218, "hh" ],
-[ 219, "hi" ],
-[ 220, "hj" ],
-[ 221, "hk" ],
-[ 222, "hl" ],
-[ 223, "hm" ],
-[ 224, "hn" ],
-[ 225, "ho" ],
-[ 226, "hp" ],
-[ 227, "hq" ],
-[ 228, "hr" ],
-[ 229, "hs" ],
-[ 230, "ht" ],
-[ 231, "hu" ],
-[ 232, "hv" ],
-[ 233, "hw" ],
-[ 234, "hx" ],
-[ 235, "hy" ],
-[ 236, "hz" ],
-[ 237, "ia" ],
-[ 238, "ib" ],
-[ 239, "ic" ],
-[ 240, "id" ],
-[ 241, "ie" ],
-[ 242, "if" ],
-[ 243, "ig" ],
-[ 244, "ih" ],
-[ 245, "ii" ],
-[ 246, "ij" ],
-[ 247, "ik" ],
-[ 248, "il" ],
-[ 249, "im" ],
-[ 250, "in" ],
-[ 251, "io" ],
-[ 252, "ip" ],
-[ 253, "iq" ],
-[ 254, "ir" ],
-[ 255, "is" ],
-[ 256, "it" ],
-[ 257, "iu" ],
-[ 258, "iv" ],
-[ 259, "iw" ],
-[ 260, "ix" ],
-[ 261, "iy" ],
-[ 262, "iz" ],
-[ 263, "ja" ],
-[ 264, "jb" ],
-[ 265, "jc" ],
-[ 266, "jd" ],
-[ 267, "je" ],
-[ 268, "jf" ],
-[ 269, "jg" ],
-[ 270, "jh" ],
-[ 271, "ji" ],
-[ 272, "jj" ],
-[ 273, "jk" ],
-[ 274, "jl" ],
-[ 275, "jm" ],
-[ 276, "jn" ],
-[ 277, "jo" ],
-[ 278, "jp" ],
-[ 279, "jq" ],
-[ 280, "jr" ],
-[ 281, "js" ],
-[ 282, "jt" ],
-[ 283, "ju" ],
-[ 284, "jv" ],
-[ 285, "jw" ],
-[ 286, "jx" ],
-[ 287, "jy" ],
-[ 288, "jz" ],
-[ 289, "ka" ],
-[ 290, "kb" ],
-[ 291, "kc" ],
-[ 292, "kd" ],
-[ 293, "ke" ],
-[ 294, "kf" ],
-[ 295, "kg" ],
-[ 296, "kh" ],
-[ 297, "ki" ],
-[ 298, "kj" ],
-[ 299, "kk" ],
-[ 300, "kl" ],
-[ 301, "km" ],
-[ 302, "kn" ],
-[ 303, "ko" ],
-[ 304, "kp" ],
-[ 305, "kq" ],
-[ 306, "kr" ],
-[ 307, "ks" ],
-[ 308, "kt" ],
-[ 309, "ku" ],
-[ 310, "kv" ],
-[ 311, "kw" ],
-[ 312, "kx" ],
-[ 313, "ky" ],
-[ 314, "kz" ],
-[ 315, "la" ],
-[ 316, "lb" ],
-[ 317, "lc" ],
-[ 318, "ld" ],
-[ 319, "le" ],
-[ 320, "lf" ],
-[ 321, "lg" ],
-[ 322, "lh" ],
-[ 323, "li" ],
-[ 324, "lj" ],
-[ 325, "lk" ],
-[ 326, "ll" ],
-[ 327, "lm" ],
-[ 328, "ln" ],
-[ 329, "lo" ],
-[ 330, "lp" ],
-[ 331, "lq" ],
-[ 332, "lr" ],
-[ 333, "ls" ],
-[ 334, "lt" ],
-[ 335, "lu" ],
-[ 336, "lv" ],
-[ 337, "lw" ],
-[ 338, "lx" ],
-[ 339, "ly" ],
-[ 340, "lz" ],
-[ 341, "ma" ],
-[ 342, "mb" ],
-[ 343, "mc" ],
-[ 344, "md" ],
-[ 345, "me" ],
-[ 346, "mf" ],
-[ 347, "mg" ],
-[ 348, "mh" ],
-[ 349, "mi" ],
-[ 350, "mj" ],
-[ 351, "mk" ],
-[ 352, "ml" ],
-[ 353, "mm" ],
-[ 354, "mn" ],
-[ 355, "mo" ],
-[ 356, "mp" ],
-[ 357, "mq" ],
-[ 358, "mr" ],
-[ 359, "ms" ],
-[ 360, "mt" ],
-[ 361, "mu" ],
-[ 362, "mv" ],
-[ 363, "mw" ],
-[ 364, "mx" ],
-[ 365, "my" ],
-[ 366, "mz" ],
-[ 367, "na" ],
-[ 368, "nb" ],
-[ 369, "nc" ],
-[ 370, "nd" ],
-[ 371, "ne" ],
-[ 372, "nf" ],
-[ 373, "ng" ],
-[ 374, "nh" ],
-[ 375, "ni" ],
-[ 376, "nj" ],
-[ 377, "nk" ],
-[ 378, "nl" ],
-[ 379, "nm" ],
-[ 380, "nn" ],
-[ 381, "no" ],
-[ 382, "np" ],
-[ 383, "nq" ],
-[ 384, "nr" ],
-[ 385, "ns" ],
-[ 386, "nt" ],
-[ 387, "nu" ],
-[ 388, "nv" ],
-[ 389, "nw" ],
-[ 390, "nx" ],
-[ 391, "ny" ],
-[ 392, "nz" ],
-[ 393, "oa" ],
-[ 394, "ob" ],
-[ 395, "oc" ],
-[ 396, "od" ],
-[ 397, "oe" ],
-[ 398, "of" ],
-[ 399, "og" ],
-[ 400, "oh" ],
-[ 401, "oi" ],
-[ 402, "oj" ],
-[ 403, "ok" ],
-[ 404, "ol" ],
-[ 405, "om" ],
-[ 406, "on" ],
-[ 407, "oo" ],
-[ 408, "op" ],
-[ 409, "oq" ],
-[ 410, "or" ],
-[ 411, "os" ],
-[ 412, "ot" ],
-[ 413, "ou" ],
-[ 414, "ov" ],
-[ 415, "ow" ],
-[ 416, "ox" ],
-[ 417, "oy" ],
-[ 418, "oz" ],
-[ 419, "pa" ],
-[ 420, "pb" ],
-[ 421, "pc" ],
-[ 422, "pd" ],
-[ 423, "pe" ],
-[ 424, "pf" ],
-[ 425, "pg" ],
-[ 426, "ph" ],
-[ 427, "pi" ],
-[ 428, "pj" ],
-[ 429, "pk" ],
-[ 430, "pl" ],
-[ 431, "pm" ],
-[ 432, "pn" ],
-[ 433, "po" ],
-[ 434, "pp" ],
-[ 435, "pq" ],
-[ 436, "pr" ],
-[ 437, "ps" ],
-[ 438, "pt" ],
-[ 439, "pu" ],
-[ 440, "pv" ],
-[ 441, "pw" ],
-[ 442, "px" ],
-[ 443, "py" ],
-[ 444, "pz" ],
-[ 445, "qa" ],
-[ 446, "qb" ],
-[ 447, "qc" ],
-[ 448, "qd" ],
-[ 449, "qe" ],
-[ 450, "qf" ],
-[ 451, "qg" ],
-[ 452, "qh" ],
-[ 453, "qi" ],
-[ 454, "qj" ],
-[ 455, "qk" ],
-[ 456, "ql" ],
-[ 457, "qm" ],
-[ 458, "qn" ],
-[ 459, "qo" ],
-[ 460, "qp" ],
-[ 461, "qq" ],
-[ 462, "qr" ],
-[ 463, "qs" ],
-[ 464, "qt" ],
-[ 465, "qu" ],
-[ 466, "qv" ],
-[ 467, "qw" ],
-[ 468, "qx" ],
-[ 469, "qy" ],
-[ 470, "qz" ],
-[ 471, "ra" ],
-[ 472, "rb" ],
-[ 473, "rc" ],
-[ 474, "rd" ],
-[ 475, "re" ],
-[ 476, "rf" ],
-[ 477, "rg" ],
-[ 478, "rh" ],
-[ 479, "ri" ],
-[ 480, "rj" ],
-[ 481, "rk" ],
-[ 482, "rl" ],
-[ 483, "rm" ],
-[ 484, "rn" ],
-[ 485, "ro" ],
-[ 486, "rp" ],
-[ 487, "rq" ],
-[ 488, "rr" ],
-[ 489, "rs" ],
-[ 490, "rt" ],
-[ 491, "ru" ],
-[ 492, "rv" ],
-[ 493, "rw" ],
-[ 494, "rx" ],
-[ 495, "ry" ],
-[ 496, "rz" ],
-[ 497, "sa" ],
-[ 498, "sb" ],
-[ 499, "sc" ],
-[ 500, "sd" ],
-[ 501, "se" ],
-[ 502, "sf" ],
-[ 503, "sg" ],
-[ 504, "sh" ],
-[ 505, "si" ],
-[ 506, "sj" ],
-[ 507, "sk" ],
-[ 508, "sl" ],
-[ 509, "sm" ],
-[ 510, "sn" ],
-[ 511, "so" ],
-[ 512, "sp" ],
-[ 513, "sq" ],
-[ 514, "sr" ],
-[ 515, "ss" ],
-[ 516, "st" ],
-[ 517, "su" ],
-[ 518, "sv" ],
-[ 519, "sw" ],
-[ 520, "sx" ],
-[ 521, "sy" ],
-[ 522, "sz" ],
-[ 523, "ta" ],
-[ 524, "tb" ],
-[ 525, "tc" ],
-[ 526, "td" ],
-[ 527, "te" ],
-[ 528, "tf" ],
-[ 529, "tg" ],
-[ 530, "th" ],
-[ 531, "ti" ],
-[ 532, "tj" ],
-[ 533, "tk" ],
-[ 534, "tl" ],
-[ 535, "tm" ],
-[ 536, "tn" ],
-[ 537, "to" ],
-[ 538, "tp" ],
-[ 539, "tq" ],
-[ 540, "tr" ],
-[ 541, "ts" ],
-[ 542, "tt" ],
-[ 543, "tu" ],
-[ 544, "tv" ],
-[ 545, "tw" ],
-[ 546, "tx" ],
-[ 547, "ty" ],
-[ 548, "tz" ],
-[ 549, "ua" ],
-[ 550, "ub" ],
-[ 551, "uc" ],
-[ 552, "ud" ],
-[ 553, "ue" ],
-[ 554, "uf" ],
-[ 555, "ug" ],
-[ 556, "uh" ],
-[ 557, "ui" ],
-[ 558, "uj" ],
-[ 559, "uk" ],
-[ 560, "ul" ],
-[ 561, "um" ],
-[ 562, "un" ],
-[ 563, "uo" ],
-[ 564, "up" ],
-[ 565, "uq" ],
-[ 566, "ur" ],
-[ 567, "us" ],
-[ 568, "ut" ],
-[ 569, "uu" ],
-[ 570, "uv" ],
-[ 571, "uw" ],
-[ 572, "ux" ],
-[ 573, "uy" ],
-[ 574, "uz" ],
-[ 575, "va" ],
-[ 576, "vb" ],
-[ 577, "vc" ],
-[ 578, "vd" ],
-[ 579, "ve" ],
-[ 580, "vf" ],
-[ 581, "vg" ],
-[ 582, "vh" ],
-[ 583, "vi" ],
-[ 584, "vj" ],
-[ 585, "vk" ],
-[ 586, "vl" ],
-[ 587, "vm" ],
-[ 588, "vn" ],
-[ 589, "vo" ],
-[ 590, "vp" ],
-[ 591, "vq" ],
-[ 592, "vr" ],
-[ 593, "vs" ],
-[ 594, "vt" ],
-[ 595, "vu" ],
-[ 596, "vv" ],
-[ 597, "vw" ],
-[ 598, "vx" ],
-[ 599, "vy" ],
-[ 600, "vz" ],
-[ 601, "wa" ],
-[ 602, "wb" ],
-[ 603, "wc" ],
-[ 604, "wd" ],
-[ 605, "we" ],
-[ 606, "wf" ],
-[ 607, "wg" ],
-[ 608, "wh" ],
-[ 609, "wi" ],
-[ 610, "wj" ],
-[ 611, "wk" ],
-[ 612, "wl" ],
-[ 613, "wm" ],
-[ 614, "wn" ],
-[ 615, "wo" ],
-[ 616, "wp" ],
-[ 617, "wq" ],
-[ 618, "wr" ],
-[ 619, "ws" ],
-[ 620, "wt" ],
-[ 621, "wu" ],
-[ 622, "wv" ],
-[ 623, "ww" ],
-[ 624, "wx" ],
-[ 625, "wy" ],
-[ 626, "wz" ],
-[ 627, "xa" ],
-[ 628, "xb" ],
-[ 629, "xc" ],
-[ 630, "xd" ],
-[ 631, "xe" ],
-[ 632, "xf" ],
-[ 633, "xg" ],
-[ 634, "xh" ],
-[ 635, "xi" ],
-[ 636, "xj" ],
-[ 637, "xk" ],
-[ 638, "xl" ],
-[ 639, "xm" ],
-[ 640, "xn" ],
-[ 641, "xo" ],
-[ 642, "xp" ],
-[ 643, "xq" ],
-[ 644, "xr" ],
-[ 645, "xs" ],
-[ 646, "xt" ],
-[ 647, "xu" ],
-[ 648, "xv" ],
-[ 649, "xw" ],
-[ 650, "xx" ],
-[ 651, "xy" ],
-[ 652, "xz" ],
-[ 653, "ya" ],
-[ 654, "yb" ],
-[ 655, "yc" ],
-[ 656, "yd" ],
-[ 657, "ye" ],
-[ 658, "yf" ],
-[ 659, "yg" ],
-[ 660, "yh" ],
-[ 661, "yi" ],
-[ 662, "yj" ],
-[ 663, "yk" ],
-[ 664, "yl" ],
-[ 665, "ym" ],
-[ 666, "yn" ],
-[ 667, "yo" ],
-[ 668, "yp" ],
-[ 669, "yq" ],
-[ 670, "yr" ],
-[ 671, "ys" ],
-[ 672, "yt" ],
-[ 673, "yu" ],
-[ 674, "yv" ],
-[ 675, "yw" ],
-[ 676, "yx" ],
-[ 677, "yy" ],
-[ 678, "yz" ],
-[ 679, "za" ],
-[ 680, "zb" ],
-[ 681, "zc" ],
-[ 682, "zd" ],
-[ 683, "ze" ],
-[ 684, "zf" ],
-[ 685, "zg" ],
-[ 686, "zh" ],
-[ 687, "zi" ],
-[ 688, "zj" ],
-[ 689, "zk" ],
-[ 690, "zl" ],
-[ 691, "zm" ],
-[ 692, "zn" ],
-[ 693, "zo" ],
-[ 694, "zp" ],
-[ 695, "zq" ],
-[ 696, "zr" ],
-[ 697, "zs" ],
-[ 698, "zt" ],
-[ 699, "zu" ],
-[ 700, "zv" ],
-[ 701, "zw" ],
-[ 702, "zx" ],
-[ 703, "zy" ],
-[ 704, "zz" ],
-[ 705, "aaa" ],
-[ 706, "aab" ],
-[ 707, "aac" ],
-[ 708, "aad" ],
-[ 709, "aae" ],
-[ 710, "aaf" ],
-[ 711, "aag" ],
-[ 712, "aah" ],
-[ 713, "aai" ],
-[ 714, "aaj" ],
-[ 715, "aak" ],
-[ 716, "aal" ],
-[ 717, "aam" ],
-[ 718, "aan" ],
-[ 719, "aao" ],
-[ 720, "aap" ],
-[ 721, "aaq" ],
-[ 722, "aar" ],
-[ 723, "aas" ],
-[ 724, "aat" ],
-[ 725, "aau" ],
-[ 726, "aav" ],
-[ 727, "aaw" ],
-[ 728, "aax" ],
-[ 729, "aay" ],
-[ 730, "aaz" ],
-[ 731, "aba" ],
-[ 732, "abb" ],
-[ 733, "abc" ],
-[ 734, "abd" ],
-[ 735, "abe" ],
-[ 736, "abf" ],
-[ 737, "abg" ],
-[ 738, "abh" ],
-[ 739, "abi" ],
-[ 740, "abj" ],
-[ 741, "abk" ],
-[ 742, "abl" ],
-[ 743, "abm" ],
-[ 744, "abn" ],
-[ 745, "abo" ],
-[ 746, "abp" ],
-[ 747, "abq" ],
-[ 748, "abr" ],
-[ 749, "abs" ],
-[ 750, "abt" ],
-[ 751, "abu" ],
-[ 752, "abv" ],
-[ 753, "abw" ],
-[ 754, "abx" ],
-[ 755, "aby" ],
-[ 756, "abz" ],
-[ 757, "aca" ],
-[ 758, "acb" ],
-[ 759, "acc" ],
-[ 760, "acd" ],
-[ 761, "ace" ],
-[ 762, "acf" ],
-[ 763, "acg" ],
-[ 764, "ach" ],
-[ 765, "aci" ],
-[ 766, "acj" ],
-[ 767, "ack" ],
-[ 768, "acl" ],
-[ 769, "acm" ],
-[ 770, "acn" ],
-[ 771, "aco" ],
-[ 772, "acp" ],
-[ 773, "acq" ],
-[ 774, "acr" ],
-[ 775, "acs" ],
-[ 776, "act" ],
-[ 777, "acu" ],
-[ 778, "acv" ],
-[ 779, "acw" ],
-[ 780, "acx" ],
-[ 781, "acy" ],
-[ 782, "acz" ],
-[ 783, "ada" ],
-[ 784, "adb" ],
-[ 785, "adc" ],
-[ 786, "add" ],
-[ 787, "ade" ],
-[ 788, "adf" ],
-[ 789, "adg" ],
-[ 790, "adh" ],
-[ 791, "adi" ],
-[ 792, "adj" ],
-[ 793, "adk" ],
-[ 794, "adl" ],
-[ 795, "adm" ],
-[ 796, "adn" ],
-[ 797, "ado" ],
-[ 798, "adp" ],
-[ 799, "adq" ],
-[ 800, "adr" ],
-[ 801, "ads" ],
-[ 802, "adt" ],
-[ 803, "adu" ],
-[ 804, "adv" ],
-[ 805, "adw" ],
-[ 806, "adx" ],
-[ 807, "ady" ],
-[ 808, "adz" ],
-[ 809, "aea" ],
-[ 810, "aeb" ],
-[ 811, "aec" ],
-[ 812, "aed" ],
-[ 813, "aee" ],
-[ 814, "aef" ],
-[ 815, "aeg" ],
-[ 816, "aeh" ],
-[ 817, "aei" ],
-[ 818, "aej" ],
-[ 819, "aek" ],
-[ 820, "ael" ],
-[ 821, "aem" ],
-[ 822, "aen" ],
-[ 823, "aeo" ],
-[ 824, "aep" ],
-[ 825, "aeq" ],
-[ 826, "aer" ],
-[ 827, "aes" ],
-[ 828, "aet" ],
-[ 829, "aeu" ],
-[ 830, "aev" ],
-[ 831, "aew" ],
-[ 832, "aex" ],
-[ 833, "aey" ],
-[ 834, "aez" ],
-[ 835, "afa" ],
-[ 836, "afb" ],
-[ 837, "afc" ],
-[ 838, "afd" ],
-[ 839, "afe" ],
-[ 840, "aff" ],
-[ 841, "afg" ],
-[ 842, "afh" ],
-[ 843, "afi" ],
-[ 844, "afj" ],
-[ 845, "afk" ],
-[ 846, "afl" ],
-[ 847, "afm" ],
-[ 848, "afn" ],
-[ 849, "afo" ],
-[ 850, "afp" ],
-[ 851, "afq" ],
-[ 852, "afr" ],
-[ 853, "afs" ],
-[ 854, "aft" ],
-[ 855, "afu" ],
-[ 856, "afv" ],
-[ 857, "afw" ],
-[ 858, "afx" ],
-[ 859, "afy" ],
-[ 860, "afz" ],
-[ 861, "aga" ],
-[ 862, "agb" ],
-[ 863, "agc" ],
-[ 864, "agd" ],
-[ 865, "age" ],
-[ 866, "agf" ],
-[ 867, "agg" ],
-[ 868, "agh" ],
-[ 869, "agi" ],
-[ 870, "agj" ],
-[ 871, "agk" ],
-[ 872, "agl" ],
-[ 873, "agm" ],
-[ 874, "agn" ],
-[ 875, "ago" ],
-[ 876, "agp" ],
-[ 877, "agq" ],
-[ 878, "agr" ],
-[ 879, "ags" ],
-[ 880, "agt" ],
-[ 881, "agu" ],
-[ 882, "agv" ],
-[ 883, "agw" ],
-[ 884, "agx" ],
-[ 885, "agy" ],
-[ 886, "agz" ],
-[ 887, "aha" ],
-[ 888, "ahb" ],
-[ 889, "ahc" ],
-[ 890, "ahd" ],
-[ 891, "ahe" ],
-[ 892, "ahf" ],
-[ 893, "ahg" ],
-[ 894, "ahh" ],
-[ 895, "ahi" ],
-[ 896, "ahj" ],
-[ 897, "ahk" ],
-[ 898, "ahl" ],
-[ 899, "ahm" ],
-[ 900, "ahn" ],
-[ 901, "aho" ],
-[ 902, "ahp" ],
-[ 903, "ahq" ],
-[ 904, "ahr" ],
-[ 905, "ahs" ],
-[ 906, "aht" ],
-[ 907, "ahu" ],
-[ 908, "ahv" ],
-[ 909, "ahw" ],
-[ 910, "ahx" ],
-[ 911, "ahy" ],
-[ 912, "ahz" ],
-[ 913, "aia" ],
-[ 914, "aib" ],
-[ 915, "aic" ],
-[ 916, "aid" ],
-[ 917, "aie" ],
-[ 918, "aif" ],
-[ 919, "aig" ],
-[ 920, "aih" ],
-[ 921, "aii" ],
-[ 922, "aij" ],
-[ 923, "aik" ],
-[ 924, "ail" ],
-[ 925, "aim" ],
-[ 926, "ain" ],
-[ 927, "aio" ],
-[ 928, "aip" ],
-[ 929, "aiq" ],
-[ 930, "air" ],
-[ 931, "ais" ],
-[ 932, "ait" ],
-[ 933, "aiu" ],
-[ 934, "aiv" ],
-[ 935, "aiw" ],
-[ 936, "aix" ],
-[ 937, "aiy" ],
-[ 938, "aiz" ],
-[ 939, "aja" ],
-[ 940, "ajb" ],
-[ 941, "ajc" ],
-[ 942, "ajd" ],
-[ 943, "aje" ],
-[ 944, "ajf" ],
-[ 945, "ajg" ],
-[ 946, "ajh" ],
-[ 947, "aji" ],
-[ 948, "ajj" ],
-[ 949, "ajk" ],
-[ 950, "ajl" ],
-[ 951, "ajm" ],
-[ 952, "ajn" ],
-[ 953, "ajo" ],
-[ 954, "ajp" ],
-[ 955, "ajq" ],
-[ 956, "ajr" ],
-[ 957, "ajs" ],
-[ 958, "ajt" ],
-[ 959, "aju" ],
-[ 960, "ajv" ],
-[ 961, "ajw" ],
-[ 962, "ajx" ],
-[ 963, "ajy" ],
-[ 964, "ajz" ],
-[ 965, "aka" ],
-[ 966, "akb" ],
-[ 967, "akc" ],
-[ 968, "akd" ],
-[ 969, "ake" ],
-[ 970, "akf" ],
-[ 971, "akg" ],
-[ 972, "akh" ],
-[ 973, "aki" ],
-[ 974, "akj" ],
-[ 975, "akk" ],
-[ 976, "akl" ],
-[ 977, "akm" ],
-[ 978, "akn" ],
-[ 979, "ako" ],
-[ 980, "akp" ],
-[ 981, "akq" ],
-[ 982, "akr" ],
-[ 983, "aks" ],
-[ 984, "akt" ],
-[ 985, "aku" ],
-[ 986, "akv" ],
-[ 987, "akw" ],
-[ 988, "akx" ],
-[ 989, "aky" ],
-[ 990, "akz" ],
-[ 991, "ala" ],
-[ 992, "alb" ],
-[ 993, "alc" ],
-[ 994, "ald" ],
-[ 995, "ale" ],
-[ 996, "alf" ],
-[ 997, "alg" ],
-[ 998, "alh" ],
-[ 999, "ali" ],
-[ 1000, "alj" ],
-[ 1001, "alk" ],
-[ 1002, "all" ],
-[ 1003, "alm" ],
-[ 1004, "aln" ],
-[ 1005, "alo" ],
-[ 1006, "alp" ],
-[ 1007, "alq" ],
-[ 1008, "alr" ],
-[ 1009, "als" ],
-[ 1010, "alt" ],
-[ 1011, "alu" ],
-[ 1012, "alv" ],
-[ 1013, "alw" ],
-[ 1014, "alx" ],
-[ 1015, "aly" ],
-[ 1016, "alz" ],
-[ 1017, "ama" ],
-[ 1018, "amb" ],
-[ 1019, "amc" ],
-[ 1020, "amd" ],
-[ 1021, "ame" ],
-[ 1022, "amf" ],
-[ 1023, "amg" ],
-[ 1024, "amh" ],
-[ 1025, "ami" ],
-[ 1026, "amj" ],
-[ 1027, "amk" ],
-[ 1028, "aml" ],
-[ 1029, "amm" ],
-[ 1030, "amn" ],
-[ 1031, "amo" ],
-[ 1032, "amp" ],
-[ 1033, "amq" ],
-[ 1034, "amr" ],
-[ 1035, "ams" ],
-[ 1036, "amt" ],
-[ 1037, "amu" ],
-[ 1038, "amv" ],
-[ 1039, "amw" ],
-[ 1040, "amx" ],
-[ 1041, "amy" ],
-[ 1042, "amz" ],
-[ 1043, "ana" ],
-[ 1044, "anb" ],
-[ 1045, "anc" ],
-[ 1046, "and" ],
-[ 1047, "ane" ],
-[ 1048, "anf" ],
-[ 1049, "ang" ],
-[ 1050, "anh" ],
-[ 1051, "ani" ],
-[ 1052, "anj" ],
-[ 1053, "ank" ],
-[ 1054, "anl" ],
-[ 1055, "anm" ],
-[ 1056, "ann" ],
-[ 1057, "ano" ],
-[ 1058, "anp" ],
-[ 1059, "anq" ],
-[ 1060, "anr" ],
-[ 1061, "ans" ],
-[ 1062, "ant" ],
-[ 1063, "anu" ],
-[ 1064, "anv" ],
-[ 1065, "anw" ],
-[ 1066, "anx" ],
-[ 1067, "any" ],
-[ 1068, "anz" ],
-[ 1069, "aoa" ],
-[ 1070, "aob" ],
-[ 1071, "aoc" ],
-[ 1072, "aod" ],
-[ 1073, "aoe" ],
-[ 1074, "aof" ],
-[ 1075, "aog" ],
-[ 1076, "aoh" ],
-[ 1077, "aoi" ],
-[ 1078, "aoj" ],
-[ 1079, "aok" ],
-[ 1080, "aol" ],
-[ 1081, "aom" ],
-[ 1082, "aon" ],
-[ 1083, "aoo" ],
-[ 1084, "aop" ],
-[ 1085, "aoq" ],
-[ 1086, "aor" ],
-[ 1087, "aos" ],
-[ 1088, "aot" ],
-[ 1089, "aou" ],
-[ 1090, "aov" ],
-[ 1091, "aow" ],
-[ 1092, "aox" ],
-[ 1093, "aoy" ],
-[ 1094, "aoz" ],
-[ 1095, "apa" ],
-[ 1096, "apb" ],
-[ 1097, "apc" ],
-[ 1098, "apd" ],
-[ 1099, "ape" ],
-[ 1100, "apf" ],
-[ 1101, "apg" ],
-[ 1102, "aph" ],
-[ 1103, "api" ],
-[ 1104, "apj" ],
-[ 1105, "apk" ],
-[ 1106, "apl" ],
-[ 1107, "apm" ],
-[ 1108, "apn" ],
-[ 1109, "apo" ],
-[ 1110, "app" ],
-[ 1111, "apq" ],
-[ 1112, "apr" ],
-[ 1113, "aps" ],
-[ 1114, "apt" ],
-[ 1115, "apu" ],
-[ 1116, "apv" ],
-[ 1117, "apw" ],
-[ 1118, "apx" ],
-[ 1119, "apy" ],
-[ 1120, "apz" ],
-[ 1121, "aqa" ],
-[ 1122, "aqb" ],
-[ 1123, "aqc" ],
-[ 1124, "aqd" ],
-[ 1125, "aqe" ],
-[ 1126, "aqf" ],
-[ 1127, "aqg" ],
-[ 1128, "aqh" ],
-[ 1129, "aqi" ],
-[ 1130, "aqj" ],
-[ 1131, "aqk" ],
-[ 1132, "aql" ],
-[ 1133, "aqm" ],
-[ 1134, "aqn" ],
-[ 1135, "aqo" ],
-[ 1136, "aqp" ],
-[ 1137, "aqq" ],
-[ 1138, "aqr" ],
-[ 1139, "aqs" ],
-[ 1140, "aqt" ],
-[ 1141, "aqu" ],
-[ 1142, "aqv" ],
-[ 1143, "aqw" ],
-[ 1144, "aqx" ],
-[ 1145, "aqy" ],
-[ 1146, "aqz" ],
-[ 1147, "ara" ],
-[ 1148, "arb" ],
-[ 1149, "arc" ],
-[ 1150, "ard" ],
-[ 1151, "are" ],
-[ 1152, "arf" ],
-[ 1153, "arg" ],
-[ 1154, "arh" ],
-[ 1155, "ari" ],
-[ 1156, "arj" ],
-[ 1157, "ark" ],
-[ 1158, "arl" ],
-[ 1159, "arm" ],
-[ 1160, "arn" ],
-[ 1161, "aro" ],
-[ 1162, "arp" ],
-[ 1163, "arq" ],
-[ 1164, "arr" ],
-[ 1165, "ars" ],
-[ 1166, "art" ],
-[ 1167, "aru" ],
-[ 1168, "arv" ],
-[ 1169, "arw" ],
-[ 1170, "arx" ],
-[ 1171, "ary" ],
-[ 1172, "arz" ],
-[ 1173, "asa" ],
-[ 1174, "asb" ],
-[ 1175, "asc" ],
-[ 1176, "asd" ],
-[ 1177, "ase" ],
-[ 1178, "asf" ],
-[ 1179, "asg" ],
-[ 1180, "ash" ],
-[ 1181, "asi" ],
-[ 1182, "asj" ],
-[ 1183, "ask" ],
-[ 1184, "asl" ],
-[ 1185, "asm" ],
-[ 1186, "asn" ],
-[ 1187, "aso" ],
-[ 1188, "asp" ],
-[ 1189, "asq" ],
-[ 1190, "asr" ],
-[ 1191, "ass" ],
-[ 1192, "ast" ],
-[ 1193, "asu" ],
-[ 1194, "asv" ],
-[ 1195, "asw" ],
-[ 1196, "asx" ],
-[ 1197, "asy" ],
-[ 1198, "asz" ],
-[ 1199, "ata" ],
-[ 1200, "atb" ],
-[ 1201, "atc" ],
-[ 1202, "atd" ],
-[ 1203, "ate" ],
-[ 1204, "atf" ],
-[ 1205, "atg" ],
-[ 1206, "ath" ],
-[ 1207, "ati" ],
-[ 1208, "atj" ],
-[ 1209, "atk" ],
-[ 1210, "atl" ],
-[ 1211, "atm" ],
-[ 1212, "atn" ],
-[ 1213, "ato" ],
-[ 1214, "atp" ],
-[ 1215, "atq" ],
-[ 1216, "atr" ],
-[ 1217, "ats" ],
-[ 1218, "att" ],
-[ 1219, "atu" ],
-[ 1220, "atv" ],
-[ 1221, "atw" ],
-[ 1222, "atx" ],
-[ 1223, "aty" ],
-[ 1224, "atz" ],
-[ 1225, "aua" ],
-[ 1226, "aub" ],
-[ 1227, "auc" ],
-[ 1228, "aud" ],
-[ 1229, "aue" ],
-[ 1230, "auf" ],
-[ 1231, "aug" ],
-[ 1232, "auh" ],
-[ 1233, "aui" ],
-[ 1234, "auj" ],
-[ 1235, "auk" ],
-[ 1236, "aul" ],
-[ 1237, "aum" ],
-[ 1238, "aun" ],
-[ 1239, "auo" ],
-[ 1240, "aup" ],
-[ 1241, "auq" ],
-[ 1242, "aur" ],
-[ 1243, "aus" ],
-[ 1244, "aut" ],
-[ 1245, "auu" ],
-[ 1246, "auv" ],
-[ 1247, "auw" ],
-[ 1248, "aux" ],
-[ 1249, "auy" ],
-[ 1250, "auz" ],
-[ 1251, "ava" ],
-[ 1252, "avb" ],
-[ 1253, "avc" ],
-[ 1254, "avd" ],
-[ 1255, "ave" ],
-[ 1256, "avf" ],
-[ 1257, "avg" ],
-[ 1258, "avh" ],
-[ 1259, "avi" ],
-[ 1260, "avj" ],
-[ 1261, "avk" ],
-[ 1262, "avl" ],
-[ 1263, "avm" ],
-[ 1264, "avn" ],
-[ 1265, "avo" ],
-[ 1266, "avp" ],
-[ 1267, "avq" ],
-[ 1268, "avr" ],
-[ 1269, "avs" ],
-[ 1270, "avt" ],
-[ 1271, "avu" ],
-[ 1272, "avv" ],
-[ 1273, "avw" ],
-[ 1274, "avx" ],
-[ 1275, "avy" ],
-[ 1276, "avz" ],
-[ 1277, "awa" ],
-[ 1278, "awb" ],
-[ 1279, "awc" ],
-[ 1280, "awd" ],
-[ 1281, "awe" ],
-[ 1282, "awf" ],
-[ 1283, "awg" ],
-[ 1284, "awh" ],
-[ 1285, "awi" ],
-[ 1286, "awj" ],
-[ 1287, "awk" ],
-[ 1288, "awl" ],
-[ 1289, "awm" ],
-[ 1290, "awn" ],
-[ 1291, "awo" ],
-[ 1292, "awp" ],
-[ 1293, "awq" ],
-[ 1294, "awr" ],
-[ 1295, "aws" ],
-[ 1296, "awt" ],
-[ 1297, "awu" ],
-[ 1298, "awv" ],
-[ 1299, "aww" ],
-[ 1300, "awx" ],
-[ 1301, "awy" ],
-[ 1302, "awz" ],
-[ 1303, "axa" ],
-[ 1304, "axb" ],
-[ 1305, "axc" ],
-[ 1306, "axd" ],
-[ 1307, "axe" ],
-[ 1308, "axf" ],
-[ 1309, "axg" ],
-[ 1310, "axh" ],
-[ 1311, "axi" ],
-[ 1312, "axj" ],
-[ 1313, "axk" ],
-[ 1314, "axl" ],
-[ 1315, "axm" ],
-[ 1316, "axn" ],
-[ 1317, "axo" ],
-[ 1318, "axp" ],
-[ 1319, "axq" ],
-[ 1320, "axr" ],
-[ 1321, "axs" ],
-[ 1322, "axt" ],
-[ 1323, "axu" ],
-[ 1324, "axv" ],
-[ 1325, "axw" ],
-[ 1326, "axx" ],
-[ 1327, "axy" ],
-[ 1328, "axz" ],
-[ 1329, "aya" ],
-[ 1330, "ayb" ],
-[ 1331, "ayc" ],
-[ 1332, "ayd" ],
-[ 1333, "aye" ],
-[ 1334, "ayf" ],
-[ 1335, "ayg" ],
-[ 1336, "ayh" ],
-[ 1337, "ayi" ],
-[ 1338, "ayj" ],
-[ 1339, "ayk" ],
-[ 1340, "ayl" ],
-[ 1341, "aym" ],
-[ 1342, "ayn" ],
-[ 1343, "ayo" ],
-[ 1344, "ayp" ],
-[ 1345, "ayq" ],
-[ 1346, "ayr" ],
-[ 1347, "ays" ],
-[ 1348, "ayt" ],
-[ 1349, "ayu" ],
-[ 1350, "ayv" ],
-[ 1351, "ayw" ],
-[ 1352, "ayx" ],
-[ 1353, "ayy" ],
-[ 1354, "ayz" ],
-[ 1355, "aza" ],
-[ 1356, "azb" ],
-[ 1357, "azc" ],
-[ 1358, "azd" ],
-[ 1359, "aze" ],
-[ 1360, "azf" ],
-[ 1361, "azg" ],
-[ 1362, "azh" ],
-[ 1363, "azi" ],
-[ 1364, "azj" ],
-[ 1365, "azk" ],
-[ 1366, "azl" ],
-[ 1367, "azm" ],
-[ 1368, "azn" ],
-[ 1369, "azo" ],
-[ 1370, "azp" ],
-[ 1371, "azq" ],
-[ 1372, "azr" ],
-[ 1373, "azs" ],
-[ 1374, "azt" ],
-[ 1375, "azu" ],
-[ 1376, "azv" ],
-[ 1377, "azw" ],
-[ 1378, "azx" ],
-[ 1379, "azy" ],
-[ 1380, "azz" ],
-[ 1381, "baa" ],
-[ 1382, "bab" ],
-[ 1383, "bac" ],
-[ 1384, "bad" ],
-[ 1385, "bae" ],
-[ 1386, "baf" ],
-[ 1387, "bag" ],
-[ 1388, "bah" ],
-[ 1389, "bai" ],
-[ 1390, "baj" ],
-[ 1391, "bak" ],
-[ 1392, "bal" ],
-[ 1393, "bam" ],
-[ 1394, "ban" ],
-[ 1395, "bao" ],
-[ 1396, "bap" ],
-[ 1397, "baq" ],
-[ 1398, "bar" ],
-[ 1399, "bas" ],
-[ 1400, "bat" ],
-[ 1401, "bau" ],
-[ 1402, "bav" ],
-[ 1403, "baw" ],
-[ 1404, "bax" ],
-[ 1405, "bay" ],
-[ 1406, "baz" ],
-[ 1407, "bba" ],
-[ 1408, "bbb" ],
-[ 1409, "bbc" ],
-[ 1410, "bbd" ],
-[ 1411, "bbe" ],
-[ 1412, "bbf" ],
-[ 1413, "bbg" ],
-[ 1414, "bbh" ],
-[ 1415, "bbi" ],
-[ 1416, "bbj" ],
-[ 1417, "bbk" ],
-[ 1418, "bbl" ],
-[ 1419, "bbm" ],
-[ 1420, "bbn" ],
-[ 1421, "bbo" ],
-[ 1422, "bbp" ],
-[ 1423, "bbq" ],
-[ 1424, "bbr" ],
-[ 1425, "bbs" ],
-[ 1426, "bbt" ],
-[ 1427, "bbu" ],
-[ 1428, "bbv" ],
-[ 1429, "bbw" ],
-[ 1430, "bbx" ],
-[ 1431, "bby" ],
-[ 1432, "bbz" ],
-[ 1433, "bca" ],
-[ 1434, "bcb" ],
-[ 1435, "bcc" ],
-[ 1436, "bcd" ],
-[ 1437, "bce" ],
-[ 1438, "bcf" ],
-[ 1439, "bcg" ],
-[ 1440, "bch" ],
-[ 1441, "bci" ],
-[ 1442, "bcj" ],
-[ 1443, "bck" ],
-[ 1444, "bcl" ],
-[ 1445, "bcm" ],
-[ 1446, "bcn" ],
-[ 1447, "bco" ],
-[ 1448, "bcp" ],
-[ 1449, "bcq" ],
-[ 1450, "bcr" ],
-[ 1451, "bcs" ],
-[ 1452, "bct" ],
-[ 1453, "bcu" ],
-[ 1454, "bcv" ],
-[ 1455, "bcw" ],
-[ 1456, "bcx" ],
-[ 1457, "bcy" ],
-[ 1458, "bcz" ],
-[ 1459, "bda" ],
-[ 1460, "bdb" ],
-[ 1461, "bdc" ],
-[ 1462, "bdd" ],
-[ 1463, "bde" ],
-[ 1464, "bdf" ],
-[ 1465, "bdg" ],
-[ 1466, "bdh" ],
-[ 1467, "bdi" ],
-[ 1468, "bdj" ],
-[ 1469, "bdk" ],
-[ 1470, "bdl" ],
-[ 1471, "bdm" ],
-[ 1472, "bdn" ],
-[ 1473, "bdo" ],
-[ 1474, "bdp" ],
-[ 1475, "bdq" ],
-[ 1476, "bdr" ],
-[ 1477, "bds" ],
-[ 1478, "bdt" ],
-[ 1479, "bdu" ],
-[ 1480, "bdv" ],
-[ 1481, "bdw" ],
-[ 1482, "bdx" ],
-[ 1483, "bdy" ],
-[ 1484, "bdz" ],
-[ 1485, "bea" ],
-[ 1486, "beb" ],
-[ 1487, "bec" ],
-[ 1488, "bed" ],
-[ 1489, "bee" ],
-[ 1490, "bef" ],
-[ 1491, "beg" ],
-[ 1492, "beh" ],
-[ 1493, "bei" ],
-[ 1494, "bej" ],
-[ 1495, "bek" ],
-[ 1496, "bel" ],
-[ 1497, "bem" ],
-[ 1498, "ben" ],
-[ 1499, "beo" ],
-[ 1500, "bep" ],
-[ 1501, "beq" ],
-[ 1502, "ber" ],
-[ 1503, "bes" ],
-[ 1504, "bet" ],
-[ 1505, "beu" ],
-[ 1506, "bev" ],
-[ 1507, "bew" ],
-[ 1508, "bex" ],
-[ 1509, "bey" ],
-[ 1510, "bez" ],
-[ 1511, "bfa" ],
-[ 1512, "bfb" ],
-[ 1513, "bfc" ],
-[ 1514, "bfd" ],
-[ 1515, "bfe" ],
-[ 1516, "bff" ],
-[ 1517, "bfg" ],
-[ 1518, "bfh" ],
-[ 1519, "bfi" ],
-[ 1520, "bfj" ],
-[ 1521, "bfk" ],
-[ 1522, "bfl" ],
-[ 1523, "bfm" ],
-[ 1524, "bfn" ],
-[ 1525, "bfo" ],
-[ 1526, "bfp" ],
-[ 1527, "bfq" ],
-[ 1528, "bfr" ],
-[ 1529, "bfs" ],
-[ 1530, "bft" ],
-[ 1531, "bfu" ],
-[ 1532, "bfv" ],
-[ 1533, "bfw" ],
-[ 1534, "bfx" ],
-[ 1535, "bfy" ],
-[ 1536, "bfz" ],
-[ 1537, "bga" ],
-[ 1538, "bgb" ],
-[ 1539, "bgc" ],
-[ 1540, "bgd" ],
-[ 1541, "bge" ],
-[ 1542, "bgf" ],
-[ 1543, "bgg" ],
-[ 1544, "bgh" ],
-[ 1545, "bgi" ],
-[ 1546, "bgj" ],
-[ 1547, "bgk" ],
-[ 1548, "bgl" ],
-[ 1549, "bgm" ],
-[ 1550, "bgn" ],
-[ 1551, "bgo" ],
-[ 1552, "bgp" ],
-[ 1553, "bgq" ],
-[ 1554, "bgr" ],
-[ 1555, "bgs" ],
-[ 1556, "bgt" ],
-[ 1557, "bgu" ],
-[ 1558, "bgv" ],
-[ 1559, "bgw" ],
-[ 1560, "bgx" ],
-[ 1561, "bgy" ],
-[ 1562, "bgz" ],
-[ 1563, "bha" ],
-[ 1564, "bhb" ],
-[ 1565, "bhc" ],
-[ 1566, "bhd" ],
-[ 1567, "bhe" ],
-[ 1568, "bhf" ],
-[ 1569, "bhg" ],
-[ 1570, "bhh" ],
-[ 1571, "bhi" ],
-[ 1572, "bhj" ],
-[ 1573, "bhk" ],
-[ 1574, "bhl" ],
-[ 1575, "bhm" ],
-[ 1576, "bhn" ],
-[ 1577, "bho" ],
-[ 1578, "bhp" ],
-[ 1579, "bhq" ],
-[ 1580, "bhr" ],
-[ 1581, "bhs" ],
-[ 1582, "bht" ],
-[ 1583, "bhu" ],
-[ 1584, "bhv" ],
-[ 1585, "bhw" ],
-[ 1586, "bhx" ],
-[ 1587, "bhy" ],
-[ 1588, "bhz" ],
-[ 1589, "bia" ],
-[ 1590, "bib" ],
-[ 1591, "bic" ],
-[ 1592, "bid" ],
-[ 1593, "bie" ],
-[ 1594, "bif" ],
-[ 1595, "big" ],
-[ 1596, "bih" ],
-[ 1597, "bii" ],
-[ 1598, "bij" ],
-[ 1599, "bik" ],
-[ 1600, "bil" ],
-[ 1601, "bim" ],
-[ 1602, "bin" ],
-[ 1603, "bio" ],
-[ 1604, "bip" ],
-[ 1605, "biq" ],
-[ 1606, "bir" ],
-[ 1607, "bis" ],
-[ 1608, "bit" ],
-[ 1609, "biu" ],
-[ 1610, "biv" ],
-[ 1611, "biw" ],
-[ 1612, "bix" ],
-[ 1613, "biy" ],
-[ 1614, "biz" ],
-[ 1615, "bja" ],
-[ 1616, "bjb" ],
-[ 1617, "bjc" ],
-[ 1618, "bjd" ],
-[ 1619, "bje" ],
-[ 1620, "bjf" ],
-[ 1621, "bjg" ],
-[ 1622, "bjh" ],
-[ 1623, "bji" ],
-[ 1624, "bjj" ],
-[ 1625, "bjk" ],
-[ 1626, "bjl" ],
-[ 1627, "bjm" ],
-[ 1628, "bjn" ],
-[ 1629, "bjo" ],
-[ 1630, "bjp" ],
-[ 1631, "bjq" ],
-[ 1632, "bjr" ],
-[ 1633, "bjs" ],
-[ 1634, "bjt" ],
-[ 1635, "bju" ],
-[ 1636, "bjv" ],
-[ 1637, "bjw" ],
-[ 1638, "bjx" ],
-[ 1639, "bjy" ],
-[ 1640, "bjz" ],
-[ 1641, "bka" ],
-[ 1642, "bkb" ],
-[ 1643, "bkc" ],
-[ 1644, "bkd" ],
-[ 1645, "bke" ],
-[ 1646, "bkf" ],
-[ 1647, "bkg" ],
-[ 1648, "bkh" ],
-[ 1649, "bki" ],
-[ 1650, "bkj" ],
-[ 1651, "bkk" ],
-[ 1652, "bkl" ],
-[ 1653, "bkm" ],
-[ 1654, "bkn" ],
-[ 1655, "bko" ],
-[ 1656, "bkp" ],
-[ 1657, "bkq" ],
-[ 1658, "bkr" ],
-[ 1659, "bks" ],
-[ 1660, "bkt" ],
-[ 1661, "bku" ],
-[ 1662, "bkv" ],
-[ 1663, "bkw" ],
-[ 1664, "bkx" ],
-[ 1665, "bky" ],
-[ 1666, "bkz" ],
-[ 1667, "bla" ],
-[ 1668, "blb" ],
-[ 1669, "blc" ],
-[ 1670, "bld" ],
-[ 1671, "ble" ],
-[ 1672, "blf" ],
-[ 1673, "blg" ],
-[ 1674, "blh" ],
-[ 1675, "bli" ],
-[ 1676, "blj" ],
-[ 1677, "blk" ],
-[ 1678, "bll" ],
-[ 1679, "blm" ],
-[ 1680, "bln" ],
-[ 1681, "blo" ],
-[ 1682, "blp" ],
-[ 1683, "blq" ],
-[ 1684, "blr" ],
-[ 1685, "bls" ],
-[ 1686, "blt" ],
-[ 1687, "blu" ],
-[ 1688, "blv" ],
-[ 1689, "blw" ],
-[ 1690, "blx" ],
-[ 1691, "bly" ],
-[ 1692, "blz" ],
-[ 1693, "bma" ],
-[ 1694, "bmb" ],
-[ 1695, "bmc" ],
-[ 1696, "bmd" ],
-[ 1697, "bme" ],
-[ 1698, "bmf" ],
-[ 1699, "bmg" ],
-[ 1700, "bmh" ],
-[ 1701, "bmi" ],
-[ 1702, "bmj" ],
-[ 1703, "bmk" ],
-[ 1704, "bml" ],
-[ 1705, "bmm" ],
-[ 1706, "bmn" ],
-[ 1707, "bmo" ],
-[ 1708, "bmp" ],
-[ 1709, "bmq" ],
-[ 1710, "bmr" ],
-[ 1711, "bms" ],
-[ 1712, "bmt" ],
-[ 1713, "bmu" ],
-[ 1714, "bmv" ],
-[ 1715, "bmw" ],
-[ 1716, "bmx" ],
-[ 1717, "bmy" ],
-[ 1718, "bmz" ],
-[ 1719, "bna" ],
-[ 1720, "bnb" ],
-[ 1721, "bnc" ],
-[ 1722, "bnd" ],
-[ 1723, "bne" ],
-[ 1724, "bnf" ],
-[ 1725, "bng" ],
-[ 1726, "bnh" ],
-[ 1727, "bni" ],
-[ 1728, "bnj" ],
-[ 1729, "bnk" ],
-[ 1730, "bnl" ],
-[ 1731, "bnm" ],
-[ 1732, "bnn" ],
-[ 1733, "bno" ],
-[ 1734, "bnp" ],
-[ 1735, "bnq" ],
-[ 1736, "bnr" ],
-[ 1737, "bns" ],
-[ 1738, "bnt" ],
-[ 1739, "bnu" ],
-[ 1740, "bnv" ],
-[ 1741, "bnw" ],
-[ 1742, "bnx" ],
-[ 1743, "bny" ],
-[ 1744, "bnz" ],
-[ 1745, "boa" ],
-[ 1746, "bob" ],
-[ 1747, "boc" ],
-[ 1748, "bod" ],
-[ 1749, "boe" ],
-[ 1750, "bof" ],
-[ 1751, "bog" ],
-[ 1752, "boh" ],
-[ 1753, "boi" ],
-[ 1754, "boj" ],
-[ 1755, "bok" ],
-[ 1756, "bol" ],
-[ 1757, "bom" ],
-[ 1758, "bon" ],
-[ 1759, "boo" ],
-[ 1760, "bop" ],
-[ 1761, "boq" ],
-[ 1762, "bor" ],
-[ 1763, "bos" ],
-[ 1764, "bot" ],
-[ 1765, "bou" ],
-[ 1766, "bov" ],
-[ 1767, "bow" ],
-[ 1768, "box" ],
-[ 1769, "boy" ],
-[ 1770, "boz" ],
-[ 1771, "bpa" ],
-[ 1772, "bpb" ],
-[ 1773, "bpc" ],
-[ 1774, "bpd" ],
-[ 1775, "bpe" ],
-[ 1776, "bpf" ],
-[ 1777, "bpg" ],
-[ 1778, "bph" ],
-[ 1779, "bpi" ],
-[ 1780, "bpj" ],
-[ 1781, "bpk" ],
-[ 1782, "bpl" ],
-[ 1783, "bpm" ],
-[ 1784, "bpn" ],
-[ 1785, "bpo" ],
-[ 1786, "bpp" ],
-[ 1787, "bpq" ],
-[ 1788, "bpr" ],
-[ 1789, "bps" ],
-[ 1790, "bpt" ],
-[ 1791, "bpu" ],
-[ 1792, "bpv" ],
-[ 1793, "bpw" ],
-[ 1794, "bpx" ],
-[ 1795, "bpy" ],
-[ 1796, "bpz" ],
-[ 1797, "bqa" ],
-[ 1798, "bqb" ],
-[ 1799, "bqc" ],
-[ 1800, "bqd" ],
-[ 1801, "bqe" ],
-[ 1802, "bqf" ],
-[ 1803, "bqg" ],
-[ 1804, "bqh" ],
-[ 1805, "bqi" ],
-[ 1806, "bqj" ],
-[ 1807, "bqk" ],
-[ 1808, "bql" ],
-[ 1809, "bqm" ],
-[ 1810, "bqn" ],
-[ 1811, "bqo" ],
-[ 1812, "bqp" ],
-[ 1813, "bqq" ],
-[ 1814, "bqr" ],
-[ 1815, "bqs" ],
-[ 1816, "bqt" ],
-[ 1817, "bqu" ],
-[ 1818, "bqv" ],
-[ 1819, "bqw" ],
-[ 1820, "bqx" ],
-[ 1821, "bqy" ],
-[ 1822, "bqz" ],
-[ 1823, "bra" ],
-[ 1824, "brb" ],
-[ 1825, "brc" ],
-[ 1826, "brd" ],
-[ 1827, "bre" ],
-[ 1828, "brf" ],
-[ 1829, "brg" ],
-[ 1830, "brh" ],
-[ 1831, "bri" ],
-[ 1832, "brj" ],
-[ 1833, "brk" ],
-[ 1834, "brl" ],
-[ 1835, "brm" ],
-[ 1836, "brn" ],
-[ 1837, "bro" ],
-[ 1838, "brp" ],
-[ 1839, "brq" ],
-[ 1840, "brr" ],
-[ 1841, "brs" ],
-[ 1842, "brt" ],
-[ 1843, "bru" ],
-[ 1844, "brv" ],
-[ 1845, "brw" ],
-[ 1846, "brx" ],
-[ 1847, "bry" ],
-[ 1848, "brz" ],
-[ 1849, "bsa" ],
-[ 1850, "bsb" ],
-[ 1851, "bsc" ],
-[ 1852, "bsd" ],
-[ 1853, "bse" ],
-[ 1854, "bsf" ],
-[ 1855, "bsg" ],
-[ 1856, "bsh" ],
-[ 1857, "bsi" ],
-[ 1858, "bsj" ],
-[ 1859, "bsk" ],
-[ 1860, "bsl" ],
-[ 1861, "bsm" ],
-[ 1862, "bsn" ],
-[ 1863, "bso" ],
-[ 1864, "bsp" ],
-[ 1865, "bsq" ],
-[ 1866, "bsr" ],
-[ 1867, "bss" ],
-[ 1868, "bst" ],
-[ 1869, "bsu" ],
-[ 1870, "bsv" ],
-[ 1871, "bsw" ],
-[ 1872, "bsx" ],
-[ 1873, "bsy" ],
-[ 1874, "bsz" ],
-[ 1875, "bta" ],
-[ 1876, "btb" ],
-[ 1877, "btc" ],
-[ 1878, "btd" ],
-[ 1879, "bte" ],
-[ 1880, "btf" ],
-[ 1881, "btg" ],
-[ 1882, "bth" ],
-[ 1883, "bti" ],
-[ 1884, "btj" ],
-[ 1885, "btk" ],
-[ 1886, "btl" ],
-[ 1887, "btm" ],
-[ 1888, "btn" ],
-[ 1889, "bto" ],
-[ 1890, "btp" ],
-[ 1891, "btq" ],
-[ 1892, "btr" ],
-[ 1893, "bts" ],
-[ 1894, "btt" ],
-[ 1895, "btu" ],
-[ 1896, "btv" ],
-[ 1897, "btw" ],
-[ 1898, "btx" ],
-[ 1899, "bty" ],
-[ 1900, "btz" ],
-[ 1901, "bua" ],
-[ 1902, "bub" ],
-[ 1903, "buc" ],
-[ 1904, "bud" ],
-[ 1905, "bue" ],
-[ 1906, "buf" ],
-[ 1907, "bug" ],
-[ 1908, "buh" ],
-[ 1909, "bui" ],
-[ 1910, "buj" ],
-[ 1911, "buk" ],
-[ 1912, "bul" ],
-[ 1913, "bum" ],
-[ 1914, "bun" ],
-[ 1915, "buo" ],
-[ 1916, "bup" ],
-[ 1917, "buq" ],
-[ 1918, "bur" ],
-[ 1919, "bus" ],
-[ 1920, "but" ],
-[ 1921, "buu" ],
-[ 1922, "buv" ],
-[ 1923, "buw" ],
-[ 1924, "bux" ],
-[ 1925, "buy" ],
-[ 1926, "buz" ],
-[ 1927, "bva" ],
-[ 1928, "bvb" ],
-[ 1929, "bvc" ],
-[ 1930, "bvd" ],
-[ 1931, "bve" ],
-[ 1932, "bvf" ],
-[ 1933, "bvg" ],
-[ 1934, "bvh" ],
-[ 1935, "bvi" ],
-[ 1936, "bvj" ],
-[ 1937, "bvk" ],
-[ 1938, "bvl" ],
-[ 1939, "bvm" ],
-[ 1940, "bvn" ],
-[ 1941, "bvo" ],
-[ 1942, "bvp" ],
-[ 1943, "bvq" ],
-[ 1944, "bvr" ],
-[ 1945, "bvs" ],
-[ 1946, "bvt" ],
-[ 1947, "bvu" ],
-[ 1948, "bvv" ],
-[ 1949, "bvw" ],
-[ 1950, "bvx" ],
-[ 1951, "bvy" ],
-[ 1952, "bvz" ],
-[ 1953, "bwa" ],
-[ 1954, "bwb" ],
-[ 1955, "bwc" ],
-[ 1956, "bwd" ],
-[ 1957, "bwe" ],
-[ 1958, "bwf" ],
-[ 1959, "bwg" ],
-[ 1960, "bwh" ],
-[ 1961, "bwi" ],
-[ 1962, "bwj" ],
-[ 1963, "bwk" ],
-[ 1964, "bwl" ],
-[ 1965, "bwm" ],
-[ 1966, "bwn" ],
-[ 1967, "bwo" ],
-[ 1968, "bwp" ],
-[ 1969, "bwq" ],
-[ 1970, "bwr" ],
-[ 1971, "bws" ],
-[ 1972, "bwt" ],
-[ 1973, "bwu" ],
-[ 1974, "bwv" ],
-[ 1975, "bww" ],
-[ 1976, "bwx" ],
-[ 1977, "bwy" ],
-[ 1978, "bwz" ],
-[ 1979, "bxa" ],
-[ 1980, "bxb" ],
-[ 1981, "bxc" ],
-[ 1982, "bxd" ],
-[ 1983, "bxe" ],
-[ 1984, "bxf" ],
-[ 1985, "bxg" ],
-[ 1986, "bxh" ],
-[ 1987, "bxi" ],
-[ 1988, "bxj" ],
-[ 1989, "bxk" ],
-[ 1990, "bxl" ],
-[ 1991, "bxm" ],
-[ 1992, "bxn" ],
-[ 1993, "bxo" ],
-[ 1994, "bxp" ],
-[ 1995, "bxq" ],
-[ 1996, "bxr" ],
-[ 1997, "bxs" ],
-[ 1998, "bxt" ],
-[ 1999, "bxu" ],
-[ 2000, "bxv" ],
-[ 2001, "bxw" ],
-[ 2002, "bxx" ],
-[ 2003, "bxy" ],
-[ 2004, "bxz" ],
-[ 2005, "bya" ],
-[ 2006, "byb" ],
-[ 2007, "byc" ],
-[ 2008, "byd" ],
-[ 2009, "bye" ],
-[ 2010, "byf" ],
-[ 2011, "byg" ],
-[ 2012, "byh" ],
-[ 2013, "byi" ],
-[ 2014, "byj" ],
-[ 2015, "byk" ],
-[ 2016, "byl" ],
-[ 2017, "bym" ],
-[ 2018, "byn" ],
-[ 2019, "byo" ],
-[ 2020, "byp" ],
-[ 2021, "byq" ],
-[ 2022, "byr" ],
-[ 2023, "bys" ],
-[ 2024, "byt" ],
-[ 2025, "byu" ],
-[ 2026, "byv" ],
-[ 2027, "byw" ],
-[ 2028, "byx" ],
-[ 2029, "byy" ],
-[ 2030, "byz" ],
-[ 2031, "bza" ],
-[ 2032, "bzb" ],
-[ 2033, "bzc" ],
-[ 2034, "bzd" ],
-[ 2035, "bze" ],
-[ 2036, "bzf" ],
-[ 2037, "bzg" ],
-[ 2038, "bzh" ],
-[ 2039, "bzi" ],
-[ 2040, "bzj" ],
-[ 2041, "bzk" ],
-[ 2042, "bzl" ],
-[ 2043, "bzm" ],
-[ 2044, "bzn" ],
-[ 2045, "bzo" ],
-[ 2046, "bzp" ],
-[ 2047, "bzq" ],
-[ 2048, "bzr" ],
-[ 2049, "bzs" ],
-[ 2050, "bzt" ],
-[ 2051, "bzu" ],
-[ 2052, "bzv" ],
-[ 2053, "bzw" ],
-[ 2054, "bzx" ],
-[ 2055, "bzy" ],
-[ 2056, "bzz" ],
-[ 2057, "caa" ],
-[ 2058, "cab" ],
-[ 2059, "cac" ],
-[ 2060, "cad" ],
-[ 2061, "cae" ],
-[ 2062, "caf" ],
-[ 2063, "cag" ],
-[ 2064, "cah" ],
-[ 2065, "cai" ],
-[ 2066, "caj" ],
-[ 2067, "cak" ],
-[ 2068, "cal" ],
-[ 2069, "cam" ],
-[ 2070, "can" ],
-[ 2071, "cao" ],
-[ 2072, "cap" ],
-[ 2073, "caq" ],
-[ 2074, "car" ],
-[ 2075, "cas" ],
-[ 2076, "cat" ],
-[ 2077, "cau" ],
-[ 2078, "cav" ],
-[ 2079, "caw" ],
-[ 2080, "cax" ],
-[ 2081, "cay" ],
-[ 2082, "caz" ],
-[ 2083, "cba" ],
-[ 2084, "cbb" ],
-[ 2085, "cbc" ],
-[ 2086, "cbd" ],
-[ 2087, "cbe" ],
-[ 2088, "cbf" ],
-[ 2089, "cbg" ],
-[ 2090, "cbh" ],
-[ 2091, "cbi" ],
-[ 2092, "cbj" ],
-[ 2093, "cbk" ],
-[ 2094, "cbl" ],
-[ 2095, "cbm" ],
-[ 2096, "cbn" ],
-[ 2097, "cbo" ],
-[ 2098, "cbp" ],
-[ 2099, "cbq" ],
-[ 2100, "cbr" ],
-[ 2101, "cbs" ],
-[ 2102, "cbt" ],
-[ 2103, "cbu" ],
-[ 2104, "cbv" ],
-[ 2105, "cbw" ],
-[ 2106, "cbx" ],
-[ 2107, "cby" ],
-[ 2108, "cbz" ],
-[ 2109, "cca" ],
-[ 2110, "ccb" ],
-[ 2111, "ccc" ],
-[ 2112, "ccd" ],
-[ 2113, "cce" ],
-[ 2114, "ccf" ],
-[ 2115, "ccg" ],
-[ 2116, "cch" ],
-[ 2117, "cci" ],
-[ 2118, "ccj" ],
-[ 2119, "cck" ],
-[ 2120, "ccl" ],
-[ 2121, "ccm" ],
-[ 2122, "ccn" ],
-[ 2123, "cco" ],
-[ 2124, "ccp" ],
-[ 2125, "ccq" ],
-[ 2126, "ccr" ],
-[ 2127, "ccs" ],
-[ 2128, "cct" ],
-[ 2129, "ccu" ],
-[ 2130, "ccv" ],
-[ 2131, "ccw" ],
-[ 2132, "ccx" ],
-[ 2133, "ccy" ],
-[ 2134, "ccz" ],
-[ 2135, "cda" ],
-[ 2136, "cdb" ],
-[ 2137, "cdc" ],
-[ 2138, "cdd" ],
-[ 2139, "cde" ],
-[ 2140, "cdf" ],
-[ 2141, "cdg" ],
-[ 2142, "cdh" ],
-[ 2143, "cdi" ],
-[ 2144, "cdj" ],
-[ 2145, "cdk" ],
-[ 2146, "cdl" ],
-[ 2147, "cdm" ],
-[ 2148, "cdn" ],
-[ 2149, "cdo" ],
-[ 2150, "cdp" ],
-[ 2151, "cdq" ],
-[ 2152, "cdr" ],
-[ 2153, "cds" ],
-[ 2154, "cdt" ],
-[ 2155, "cdu" ],
-[ 2156, "cdv" ],
-[ 2157, "cdw" ],
-[ 2158, "cdx" ],
-[ 2159, "cdy" ],
-[ 2160, "cdz" ],
-[ 2161, "cea" ],
-[ 2162, "ceb" ],
-[ 2163, "cec" ],
-[ 2164, "ced" ],
-[ 2165, "cee" ],
-[ 2166, "cef" ],
-[ 2167, "ceg" ],
-[ 2168, "ceh" ],
-[ 2169, "cei" ],
-[ 2170, "cej" ],
-[ 2171, "cek" ],
-[ 2172, "cel" ],
-[ 2173, "cem" ],
-[ 2174, "cen" ],
-[ 2175, "ceo" ],
-[ 2176, "cep" ],
-[ 2177, "ceq" ],
-[ 2178, "cer" ],
-[ 2179, "ces" ],
-[ 2180, "cet" ],
-[ 2181, "ceu" ],
-[ 2182, "cev" ],
-[ 2183, "cew" ],
-[ 2184, "cex" ],
-[ 2185, "cey" ],
-[ 2186, "cez" ],
-[ 2187, "cfa" ],
-[ 2188, "cfb" ],
-[ 2189, "cfc" ],
-[ 2190, "cfd" ],
-[ 2191, "cfe" ],
-[ 2192, "cff" ],
-[ 2193, "cfg" ],
-[ 2194, "cfh" ],
-[ 2195, "cfi" ],
-[ 2196, "cfj" ],
-[ 2197, "cfk" ],
-[ 2198, "cfl" ],
-[ 2199, "cfm" ],
-[ 2200, "cfn" ],
-[ 2201, "cfo" ],
-[ 2202, "cfp" ],
-[ 2203, "cfq" ],
-[ 2204, "cfr" ],
-[ 2205, "cfs" ],
-[ 2206, "cft" ],
-[ 2207, "cfu" ],
-[ 2208, "cfv" ],
-[ 2209, "cfw" ],
-[ 2210, "cfx" ],
-[ 2211, "cfy" ],
-[ 2212, "cfz" ],
-[ 2213, "cga" ],
-[ 2214, "cgb" ],
-[ 2215, "cgc" ],
-[ 2216, "cgd" ],
-[ 2217, "cge" ],
-[ 2218, "cgf" ],
-[ 2219, "cgg" ],
-[ 2220, "cgh" ],
-[ 2221, "cgi" ],
-[ 2222, "cgj" ],
-[ 2223, "cgk" ],
-[ 2224, "cgl" ],
-[ 2225, "cgm" ],
-[ 2226, "cgn" ],
-[ 2227, "cgo" ],
-[ 2228, "cgp" ],
-[ 2229, "cgq" ],
-[ 2230, "cgr" ],
-[ 2231, "cgs" ],
-[ 2232, "cgt" ],
-[ 2233, "cgu" ],
-[ 2234, "cgv" ],
-[ 2235, "cgw" ],
-[ 2236, "cgx" ],
-[ 2237, "cgy" ],
-[ 2238, "cgz" ],
-[ 2239, "cha" ],
-[ 2240, "chb" ],
-[ 2241, "chc" ],
-[ 2242, "chd" ],
-[ 2243, "che" ],
-[ 2244, "chf" ],
-[ 2245, "chg" ],
-[ 2246, "chh" ],
-[ 2247, "chi" ],
-[ 2248, "chj" ],
-[ 2249, "chk" ],
-[ 2250, "chl" ],
-[ 2251, "chm" ],
-[ 2252, "chn" ],
-[ 2253, "cho" ],
-[ 2254, "chp" ],
-[ 2255, "chq" ],
-[ 2256, "chr" ],
-[ 2257, "chs" ],
-[ 2258, "cht" ],
-[ 2259, "chu" ],
-[ 2260, "chv" ],
-[ 2261, "chw" ],
-[ 2262, "chx" ],
-[ 2263, "chy" ],
-[ 2264, "chz" ],
-[ 2265, "cia" ],
-[ 2266, "cib" ],
-[ 2267, "cic" ],
-[ 2268, "cid" ],
-[ 2269, "cie" ],
-[ 2270, "cif" ],
-[ 2271, "cig" ],
-[ 2272, "cih" ],
-[ 2273, "cii" ],
-[ 2274, "cij" ],
-[ 2275, "cik" ],
-[ 2276, "cil" ],
-[ 2277, "cim" ],
-[ 2278, "cin" ],
-[ 2279, "cio" ],
-[ 2280, "cip" ],
-[ 2281, "ciq" ],
-[ 2282, "cir" ],
-[ 2283, "cis" ],
-[ 2284, "cit" ],
-[ 2285, "ciu" ],
-[ 2286, "civ" ],
-[ 2287, "ciw" ],
-[ 2288, "cix" ],
-[ 2289, "ciy" ],
-[ 2290, "ciz" ],
-[ 2291, "cja" ],
-[ 2292, "cjb" ],
-[ 2293, "cjc" ],
-[ 2294, "cjd" ],
-[ 2295, "cje" ],
-[ 2296, "cjf" ],
-[ 2297, "cjg" ],
-[ 2298, "cjh" ],
-[ 2299, "cji" ],
-[ 2300, "cjj" ],
-[ 2301, "cjk" ],
-[ 2302, "cjl" ],
-[ 2303, "cjm" ],
-[ 2304, "cjn" ],
-[ 2305, "cjo" ],
-[ 2306, "cjp" ],
-[ 2307, "cjq" ],
-[ 2308, "cjr" ],
-[ 2309, "cjs" ],
-[ 2310, "cjt" ],
-[ 2311, "cju" ],
-[ 2312, "cjv" ],
-[ 2313, "cjw" ],
-[ 2314, "cjx" ],
-[ 2315, "cjy" ],
-[ 2316, "cjz" ],
-[ 2317, "cka" ],
-[ 2318, "ckb" ],
-[ 2319, "ckc" ],
-[ 2320, "ckd" ],
-[ 2321, "cke" ],
-[ 2322, "ckf" ],
-[ 2323, "ckg" ],
-[ 2324, "ckh" ],
-[ 2325, "cki" ],
-[ 2326, "ckj" ],
-[ 2327, "ckk" ],
-[ 2328, "ckl" ],
-[ 2329, "ckm" ],
-[ 2330, "ckn" ],
-[ 2331, "cko" ],
-[ 2332, "ckp" ],
-[ 2333, "ckq" ],
-[ 2334, "ckr" ],
-[ 2335, "cks" ],
-[ 2336, "ckt" ],
-[ 2337, "cku" ],
-[ 2338, "ckv" ],
-[ 2339, "ckw" ],
-[ 2340, "ckx" ],
-[ 2341, "cky" ],
-[ 2342, "ckz" ],
-[ 2343, "cla" ],
-[ 2344, "clb" ],
-[ 2345, "clc" ],
-[ 2346, "cld" ],
-[ 2347, "cle" ],
-[ 2348, "clf" ],
-[ 2349, "clg" ],
-[ 2350, "clh" ],
-[ 2351, "cli" ],
-[ 2352, "clj" ],
-[ 2353, "clk" ],
-[ 2354, "cll" ],
-[ 2355, "clm" ],
-[ 2356, "cln" ],
-[ 2357, "clo" ],
-[ 2358, "clp" ],
-[ 2359, "clq" ],
-[ 2360, "clr" ],
-[ 2361, "cls" ],
-[ 2362, "clt" ],
-[ 2363, "clu" ],
-[ 2364, "clv" ],
-[ 2365, "clw" ],
-[ 2366, "clx" ],
-[ 2367, "cly" ],
-[ 2368, "clz" ],
-[ 2369, "cma" ],
-[ 2370, "cmb" ],
-[ 2371, "cmc" ],
-[ 2372, "cmd" ],
-[ 2373, "cme" ],
-[ 2374, "cmf" ],
-[ 2375, "cmg" ],
-[ 2376, "cmh" ],
-[ 2377, "cmi" ],
-[ 2378, "cmj" ],
-[ 2379, "cmk" ],
-[ 2380, "cml" ],
-[ 2381, "cmm" ],
-[ 2382, "cmn" ],
-[ 2383, "cmo" ],
-[ 2384, "cmp" ],
-[ 2385, "cmq" ],
-[ 2386, "cmr" ],
-[ 2387, "cms" ],
-[ 2388, "cmt" ],
-[ 2389, "cmu" ],
-[ 2390, "cmv" ],
-[ 2391, "cmw" ],
-[ 2392, "cmx" ],
-[ 2393, "cmy" ],
-[ 2394, "cmz" ],
-[ 2395, "cna" ],
-[ 2396, "cnb" ],
-[ 2397, "cnc" ],
-[ 2398, "cnd" ],
-[ 2399, "cne" ],
-[ 2400, "cnf" ],
-[ 2401, "cng" ],
-[ 2402, "cnh" ],
-[ 2403, "cni" ],
-[ 2404, "cnj" ],
-[ 2405, "cnk" ],
-[ 2406, "cnl" ],
-[ 2407, "cnm" ],
-[ 2408, "cnn" ],
-[ 2409, "cno" ],
-[ 2410, "cnp" ],
-[ 2411, "cnq" ],
-[ 2412, "cnr" ],
-[ 2413, "cns" ],
-[ 2414, "cnt" ],
-[ 2415, "cnu" ],
-[ 2416, "cnv" ],
-[ 2417, "cnw" ],
-[ 2418, "cnx" ],
-[ 2419, "cny" ],
-[ 2420, "cnz" ],
-[ 2421, "coa" ],
-[ 2422, "cob" ],
-[ 2423, "coc" ],
-[ 2424, "cod" ],
-[ 2425, "coe" ],
-[ 2426, "cof" ],
-[ 2427, "cog" ],
-[ 2428, "coh" ],
-[ 2429, "coi" ],
-[ 2430, "coj" ],
-[ 2431, "cok" ],
-[ 2432, "col" ],
-[ 2433, "com" ],
-[ 2434, "con" ],
-[ 2435, "coo" ],
-[ 2436, "cop" ],
-[ 2437, "coq" ],
-[ 2438, "cor" ],
-[ 2439, "cos" ],
-[ 2440, "cot" ],
-[ 2441, "cou" ],
-[ 2442, "cov" ],
-[ 2443, "cow" ],
-[ 2444, "cox" ],
-[ 2445, "coy" ],
-[ 2446, "coz" ],
-[ 2447, "cpa" ],
-[ 2448, "cpb" ],
-[ 2449, "cpc" ],
-[ 2450, "cpd" ],
-[ 2451, "cpe" ],
-[ 2452, "cpf" ],
-[ 2453, "cpg" ],
-[ 2454, "cph" ],
-[ 2455, "cpi" ],
-[ 2456, "cpj" ],
-[ 2457, "cpk" ],
-[ 2458, "cpl" ],
-[ 2459, "cpm" ],
-[ 2460, "cpn" ],
-[ 2461, "cpo" ],
-[ 2462, "cpp" ],
-[ 2463, "cpq" ],
-[ 2464, "cpr" ],
-[ 2465, "cps" ],
-[ 2466, "cpt" ],
-[ 2467, "cpu" ],
-[ 2468, "cpv" ],
-[ 2469, "cpw" ],
-[ 2470, "cpx" ],
-[ 2471, "cpy" ],
-[ 2472, "cpz" ],
-[ 2473, "cqa" ],
-[ 2474, "cqb" ],
-[ 2475, "cqc" ],
-[ 2476, "cqd" ],
-[ 2477, "cqe" ],
-[ 2478, "cqf" ],
-[ 2479, "cqg" ],
-[ 2480, "cqh" ],
-[ 2481, "cqi" ],
-[ 2482, "cqj" ],
-[ 2483, "cqk" ],
-[ 2484, "cql" ],
-[ 2485, "cqm" ],
-[ 2486, "cqn" ],
-[ 2487, "cqo" ],
-[ 2488, "cqp" ],
-[ 2489, "cqq" ],
-[ 2490, "cqr" ],
-[ 2491, "cqs" ],
-[ 2492, "cqt" ],
-[ 2493, "cqu" ],
-[ 2494, "cqv" ],
-[ 2495, "cqw" ],
-[ 2496, "cqx" ],
-[ 2497, "cqy" ],
-[ 2498, "cqz" ],
-[ 2499, "cra" ],
-[ 2500, "crb" ],
-[ 2501, "crc" ],
-[ 2502, "crd" ],
-[ 2503, "cre" ],
-[ 2504, "crf" ],
-[ 2505, "crg" ],
-[ 2506, "crh" ],
-[ 2507, "cri" ],
-[ 2508, "crj" ],
-[ 2509, "crk" ],
-[ 2510, "crl" ],
-[ 2511, "crm" ],
-[ 2512, "crn" ],
-[ 2513, "cro" ],
-[ 2514, "crp" ],
-[ 2515, "crq" ],
-[ 2516, "crr" ],
-[ 2517, "crs" ],
-[ 2518, "crt" ],
-[ 2519, "cru" ],
-[ 2520, "crv" ],
-[ 2521, "crw" ],
-[ 2522, "crx" ],
-[ 2523, "cry" ],
-[ 2524, "crz" ],
-[ 2525, "csa" ],
-[ 2526, "csb" ],
-[ 2527, "csc" ],
-[ 2528, "csd" ],
-[ 2529, "cse" ],
-[ 2530, "csf" ],
-[ 2531, "csg" ],
-[ 2532, "csh" ],
-[ 2533, "csi" ],
-[ 2534, "csj" ],
-[ 2535, "csk" ],
-[ 2536, "csl" ],
-[ 2537, "csm" ],
-[ 2538, "csn" ],
-[ 2539, "cso" ],
-[ 2540, "csp" ],
-[ 2541, "csq" ],
-[ 2542, "csr" ],
-[ 2543, "css" ],
-[ 2544, "cst" ],
-[ 2545, "csu" ],
-[ 2546, "csv" ],
-[ 2547, "csw" ],
-[ 2548, "csx" ],
-[ 2549, "csy" ],
-[ 2550, "csz" ],
-[ 2551, "cta" ],
-[ 2552, "ctb" ],
-[ 2553, "ctc" ],
-[ 2554, "ctd" ],
-[ 2555, "cte" ],
-[ 2556, "ctf" ],
-[ 2557, "ctg" ],
-[ 2558, "cth" ],
-[ 2559, "cti" ],
-[ 2560, "ctj" ],
-[ 2561, "ctk" ],
-[ 2562, "ctl" ],
-[ 2563, "ctm" ],
-[ 2564, "ctn" ],
-[ 2565, "cto" ],
-[ 2566, "ctp" ],
-[ 2567, "ctq" ],
-[ 2568, "ctr" ],
-[ 2569, "cts" ],
-[ 2570, "ctt" ],
-[ 2571, "ctu" ],
-[ 2572, "ctv" ],
-[ 2573, "ctw" ],
-[ 2574, "ctx" ],
-[ 2575, "cty" ],
-[ 2576, "ctz" ],
-[ 2577, "cua" ],
-[ 2578, "cub" ],
-[ 2579, "cuc" ],
-[ 2580, "cud" ],
-[ 2581, "cue" ],
-[ 2582, "cuf" ],
-[ 2583, "cug" ],
-[ 2584, "cuh" ],
-[ 2585, "cui" ],
-[ 2586, "cuj" ],
-[ 2587, "cuk" ],
-[ 2588, "cul" ],
-[ 2589, "cum" ],
-[ 2590, "cun" ],
-[ 2591, "cuo" ],
-[ 2592, "cup" ],
-[ 2593, "cuq" ],
-[ 2594, "cur" ],
-[ 2595, "cus" ],
-[ 2596, "cut" ],
-[ 2597, "cuu" ],
-[ 2598, "cuv" ],
-[ 2599, "cuw" ],
-[ 2600, "cux" ],
-[ 2601, "cuy" ],
-[ 2602, "cuz" ],
-[ 2603, "cva" ],
-[ 2604, "cvb" ],
-[ 2605, "cvc" ],
-[ 2606, "cvd" ],
-[ 2607, "cve" ],
-[ 2608, "cvf" ],
-[ 2609, "cvg" ],
-[ 2610, "cvh" ],
-[ 2611, "cvi" ],
-[ 2612, "cvj" ],
-[ 2613, "cvk" ],
-[ 2614, "cvl" ],
-[ 2615, "cvm" ],
-[ 2616, "cvn" ],
-[ 2617, "cvo" ],
-[ 2618, "cvp" ],
-[ 2619, "cvq" ],
-[ 2620, "cvr" ],
-[ 2621, "cvs" ],
-[ 2622, "cvt" ],
-[ 2623, "cvu" ],
-[ 2624, "cvv" ],
-[ 2625, "cvw" ],
-[ 2626, "cvx" ],
-[ 2627, "cvy" ],
-[ 2628, "cvz" ],
-[ 2629, "cwa" ],
-[ 2630, "cwb" ],
-[ 2631, "cwc" ],
-[ 2632, "cwd" ],
-[ 2633, "cwe" ],
-[ 2634, "cwf" ],
-[ 2635, "cwg" ],
-[ 2636, "cwh" ],
-[ 2637, "cwi" ],
-[ 2638, "cwj" ],
-[ 2639, "cwk" ],
-[ 2640, "cwl" ],
-[ 2641, "cwm" ],
-[ 2642, "cwn" ],
-[ 2643, "cwo" ],
-[ 2644, "cwp" ],
-[ 2645, "cwq" ],
-[ 2646, "cwr" ],
-[ 2647, "cws" ],
-[ 2648, "cwt" ],
-[ 2649, "cwu" ],
-[ 2650, "cwv" ],
-[ 2651, "cww" ],
-[ 2652, "cwx" ],
-[ 2653, "cwy" ],
-[ 2654, "cwz" ],
-[ 2655, "cxa" ],
-[ 2656, "cxb" ],
-[ 2657, "cxc" ],
-[ 2658, "cxd" ],
-[ 2659, "cxe" ],
-[ 2660, "cxf" ],
-[ 2661, "cxg" ],
-[ 2662, "cxh" ],
-[ 2663, "cxi" ],
-[ 2664, "cxj" ],
-[ 2665, "cxk" ],
-[ 2666, "cxl" ],
-[ 2667, "cxm" ],
-[ 2668, "cxn" ],
-[ 2669, "cxo" ],
-[ 2670, "cxp" ],
-[ 2671, "cxq" ],
-[ 2672, "cxr" ],
-[ 2673, "cxs" ],
-[ 2674, "cxt" ],
-[ 2675, "cxu" ],
-[ 2676, "cxv" ],
-[ 2677, "cxw" ],
-[ 2678, "cxx" ],
-[ 2679, "cxy" ],
-[ 2680, "cxz" ],
-[ 2681, "cya" ],
-[ 2682, "cyb" ],
-[ 2683, "cyc" ],
-[ 2684, "cyd" ],
-[ 2685, "cye" ],
-[ 2686, "cyf" ],
-[ 2687, "cyg" ],
-[ 2688, "cyh" ],
-[ 2689, "cyi" ],
-[ 2690, "cyj" ],
-[ 2691, "cyk" ],
-[ 2692, "cyl" ],
-[ 2693, "cym" ],
-[ 2694, "cyn" ],
-[ 2695, "cyo" ],
-[ 2696, "cyp" ],
-[ 2697, "cyq" ],
-[ 2698, "cyr" ],
-[ 2699, "cys" ],
-[ 2700, "cyt" ],
-[ 2701, "cyu" ],
-[ 2702, "cyv" ],
-[ 2703, "cyw" ],
-[ 2704, "cyx" ],
-[ 2705, "cyy" ],
-[ 2706, "cyz" ],
-[ 2707, "cza" ],
-[ 2708, "czb" ],
-[ 2709, "czc" ],
-[ 2710, "czd" ],
-[ 2711, "cze" ],
-[ 2712, "czf" ],
-[ 2713, "czg" ],
-[ 2714, "czh" ],
-[ 2715, "czi" ],
-[ 2716, "czj" ],
-[ 2717, "czk" ],
-[ 2718, "czl" ],
-[ 2719, "czm" ],
-[ 2720, "czn" ],
-[ 2721, "czo" ],
-[ 2722, "czp" ],
-[ 2723, "czq" ],
-[ 2724, "czr" ],
-[ 2725, "czs" ],
-[ 2726, "czt" ],
-[ 2727, "czu" ],
-[ 2728, "czv" ],
-[ 2729, "czw" ],
-[ 2730, "czx" ],
-[ 2731, "czy" ],
-[ 2732, "czz" ],
-[ 2733, "daa" ],
-[ 2734, "dab" ],
-[ 2735, "dac" ],
-[ 2736, "dad" ],
-[ 2737, "dae" ],
-[ 2738, "daf" ],
-[ 2739, "dag" ],
-[ 2740, "dah" ],
-[ 2741, "dai" ],
-[ 2742, "daj" ],
-[ 2743, "dak" ],
-[ 2744, "dal" ],
-[ 2745, "dam" ],
-[ 2746, "dan" ],
-[ 2747, "dao" ],
-[ 2748, "dap" ],
-[ 2749, "daq" ],
-[ 2750, "dar" ],
-[ 2751, "das" ],
-[ 2752, "dat" ],
-[ 2753, "dau" ],
-[ 2754, "dav" ],
-[ 2755, "daw" ],
-[ 2756, "dax" ],
-[ 2757, "day" ],
-[ 2758, "daz" ],
-[ 2759, "dba" ],
-[ 2760, "dbb" ],
-[ 2761, "dbc" ],
-[ 2762, "dbd" ],
-[ 2763, "dbe" ],
-[ 2764, "dbf" ],
-[ 2765, "dbg" ],
-[ 2766, "dbh" ],
-[ 2767, "dbi" ],
-[ 2768, "dbj" ],
-[ 2769, "dbk" ],
-[ 2770, "dbl" ],
-[ 2771, "dbm" ],
-[ 2772, "dbn" ],
-[ 2773, "dbo" ],
-[ 2774, "dbp" ],
-[ 2775, "dbq" ],
-[ 2776, "dbr" ],
-[ 2777, "dbs" ],
-[ 2778, "dbt" ],
-[ 2779, "dbu" ],
-[ 2780, "dbv" ],
-[ 2781, "dbw" ],
-[ 2782, "dbx" ],
-[ 2783, "dby" ],
-[ 2784, "dbz" ],
-[ 2785, "dca" ],
-[ 2786, "dcb" ],
-[ 2787, "dcc" ],
-[ 2788, "dcd" ],
-[ 2789, "dce" ],
-[ 2790, "dcf" ],
-[ 2791, "dcg" ],
-[ 2792, "dch" ],
-[ 2793, "dci" ],
-[ 2794, "dcj" ],
-[ 2795, "dck" ],
-[ 2796, "dcl" ],
-[ 2797, "dcm" ],
-[ 2798, "dcn" ],
-[ 2799, "dco" ],
-[ 2800, "dcp" ],
-[ 2801, "dcq" ],
-[ 2802, "dcr" ],
-[ 2803, "dcs" ],
-[ 2804, "dct" ],
-[ 2805, "dcu" ],
-[ 2806, "dcv" ],
-[ 2807, "dcw" ],
-[ 2808, "dcx" ],
-[ 2809, "dcy" ],
-[ 2810, "dcz" ],
-[ 2811, "dda" ],
-[ 2812, "ddb" ],
-[ 2813, "ddc" ],
-[ 2814, "ddd" ],
-[ 2815, "dde" ],
-[ 2816, "ddf" ],
-[ 2817, "ddg" ],
-[ 2818, "ddh" ],
-[ 2819, "ddi" ],
-[ 2820, "ddj" ],
-[ 2821, "ddk" ],
-[ 2822, "ddl" ],
-[ 2823, "ddm" ],
-[ 2824, "ddn" ],
-[ 2825, "ddo" ],
-[ 2826, "ddp" ],
-[ 2827, "ddq" ],
-[ 2828, "ddr" ],
-[ 2829, "dds" ],
-[ 2830, "ddt" ],
-[ 2831, "ddu" ],
-[ 2832, "ddv" ],
-[ 2833, "ddw" ],
-[ 2834, "ddx" ],
-[ 2835, "ddy" ],
-[ 2836, "ddz" ],
-[ 2837, "dea" ],
-[ 2838, "deb" ],
-[ 2839, "dec" ],
-[ 2840, "ded" ],
-[ 2841, "dee" ],
-[ 2842, "def" ],
-[ 2843, "deg" ],
-[ 2844, "deh" ],
-[ 2845, "dei" ],
-[ 2846, "dej" ],
-[ 2847, "dek" ],
-[ 2848, "del" ],
-[ 2849, "dem" ],
-[ 2850, "den" ],
-[ 2851, "deo" ],
-[ 2852, "dep" ],
-[ 2853, "deq" ],
-[ 2854, "der" ],
-[ 2855, "des" ],
-[ 2856, "det" ],
-[ 2857, "deu" ],
-[ 2858, "dev" ],
-[ 2859, "dew" ],
-[ 2860, "dex" ],
-[ 2861, "dey" ],
-[ 2862, "dez" ],
-[ 2863, "dfa" ],
-[ 2864, "dfb" ],
-[ 2865, "dfc" ],
-[ 2866, "dfd" ],
-[ 2867, "dfe" ],
-[ 2868, "dff" ],
-[ 2869, "dfg" ],
-[ 2870, "dfh" ],
-[ 2871, "dfi" ],
-[ 2872, "dfj" ],
-[ 2873, "dfk" ],
-[ 2874, "dfl" ],
-[ 2875, "dfm" ],
-[ 2876, "dfn" ],
-[ 2877, "dfo" ],
-[ 2878, "dfp" ],
-[ 2879, "dfq" ],
-[ 2880, "dfr" ],
-[ 2881, "dfs" ],
-[ 2882, "dft" ],
-[ 2883, "dfu" ],
-[ 2884, "dfv" ],
-[ 2885, "dfw" ],
-[ 2886, "dfx" ],
-[ 2887, "dfy" ],
-[ 2888, "dfz" ],
-[ 2889, "dga" ],
-[ 2890, "dgb" ],
-[ 2891, "dgc" ],
-[ 2892, "dgd" ],
-[ 2893, "dge" ],
-[ 2894, "dgf" ],
-[ 2895, "dgg" ],
-[ 2896, "dgh" ],
-[ 2897, "dgi" ],
-[ 2898, "dgj" ],
-[ 2899, "dgk" ],
-[ 2900, "dgl" ],
-[ 2901, "dgm" ],
-[ 2902, "dgn" ],
-[ 2903, "dgo" ],
-[ 2904, "dgp" ],
-[ 2905, "dgq" ],
-[ 2906, "dgr" ],
-[ 2907, "dgs" ],
-[ 2908, "dgt" ],
-[ 2909, "dgu" ],
-[ 2910, "dgv" ],
-[ 2911, "dgw" ],
-[ 2912, "dgx" ],
-[ 2913, "dgy" ],
-[ 2914, "dgz" ],
-[ 2915, "dha" ],
-[ 2916, "dhb" ],
-[ 2917, "dhc" ],
-[ 2918, "dhd" ],
-[ 2919, "dhe" ],
-[ 2920, "dhf" ],
-[ 2921, "dhg" ],
-[ 2922, "dhh" ],
-[ 2923, "dhi" ],
-[ 2924, "dhj" ],
-[ 2925, "dhk" ],
-[ 2926, "dhl" ],
-[ 2927, "dhm" ],
-[ 2928, "dhn" ],
-[ 2929, "dho" ],
-[ 2930, "dhp" ],
-[ 2931, "dhq" ],
-[ 2932, "dhr" ],
-[ 2933, "dhs" ],
-[ 2934, "dht" ],
-[ 2935, "dhu" ],
-[ 2936, "dhv" ],
-[ 2937, "dhw" ],
-[ 2938, "dhx" ],
-[ 2939, "dhy" ],
-[ 2940, "dhz" ],
-[ 2941, "dia" ],
-[ 2942, "dib" ],
-[ 2943, "dic" ],
-[ 2944, "did" ],
-[ 2945, "die" ],
-[ 2946, "dif" ],
-[ 2947, "dig" ],
-[ 2948, "dih" ],
-[ 2949, "dii" ],
-[ 2950, "dij" ],
-[ 2951, "dik" ],
-[ 2952, "dil" ],
-[ 2953, "dim" ],
-[ 2954, "din" ],
-[ 2955, "dio" ],
-[ 2956, "dip" ],
-[ 2957, "diq" ],
-[ 2958, "dir" ],
-[ 2959, "dis" ],
-[ 2960, "dit" ],
-[ 2961, "diu" ],
-[ 2962, "div" ],
-[ 2963, "diw" ],
-[ 2964, "dix" ],
-[ 2965, "diy" ],
-[ 2966, "diz" ],
-[ 2967, "dja" ],
-[ 2968, "djb" ],
-[ 2969, "djc" ],
-[ 2970, "djd" ],
-[ 2971, "dje" ],
-[ 2972, "djf" ],
-[ 2973, "djg" ],
-[ 2974, "djh" ],
-[ 2975, "dji" ],
-[ 2976, "djj" ],
-[ 2977, "djk" ],
-[ 2978, "djl" ],
-[ 2979, "djm" ],
-[ 2980, "djn" ],
-[ 2981, "djo" ],
-[ 2982, "djp" ],
-[ 2983, "djq" ],
-[ 2984, "djr" ],
-[ 2985, "djs" ],
-[ 2986, "djt" ],
-[ 2987, "dju" ],
-[ 2988, "djv" ],
-[ 2989, "djw" ],
-[ 2990, "djx" ],
-[ 2991, "djy" ],
-[ 2992, "djz" ],
-[ 2993, "dka" ],
-[ 2994, "dkb" ],
-[ 2995, "dkc" ],
-[ 2996, "dkd" ],
-[ 2997, "dke" ],
-[ 2998, "dkf" ],
-[ 2999, "dkg" ],
-[ 3000, "dkh" ],
-[ 3001, "dki" ],
-[ 3002, "dkj" ],
-[ 3003, "dkk" ],
-[ 3004, "dkl" ],
-[ 3005, "dkm" ],
-[ 3006, "dkn" ],
-[ 3007, "dko" ],
-[ 3008, "dkp" ],
-[ 3009, "dkq" ],
-[ 3010, "dkr" ],
-[ 3011, "dks" ],
-[ 3012, "dkt" ],
-[ 3013, "dku" ],
-[ 3014, "dkv" ],
-[ 3015, "dkw" ],
-[ 3016, "dkx" ],
-[ 3017, "dky" ],
-[ 3018, "dkz" ],
-[ 3019, "dla" ],
-[ 3020, "dlb" ],
-[ 3021, "dlc" ],
-[ 3022, "dld" ],
-[ 3023, "dle" ],
-[ 3024, "dlf" ],
-[ 3025, "dlg" ],
-[ 3026, "dlh" ],
-[ 3027, "dli" ],
-[ 3028, "dlj" ],
-[ 3029, "dlk" ],
-[ 3030, "dll" ],
-[ 3031, "dlm" ],
-[ 3032, "dln" ],
-[ 3033, "dlo" ],
-[ 3034, "dlp" ],
-[ 3035, "dlq" ],
-[ 3036, "dlr" ],
-[ 3037, "dls" ],
-[ 3038, "dlt" ],
-[ 3039, "dlu" ],
-[ 3040, "dlv" ],
-[ 3041, "dlw" ],
-[ 3042, "dlx" ],
-[ 3043, "dly" ],
-[ 3044, "dlz" ],
-[ 3045, "dma" ],
-[ 3046, "dmb" ],
-[ 3047, "dmc" ],
-[ 3048, "dmd" ],
-[ 3049, "dme" ],
-[ 3050, "dmf" ],
-[ 3051, "dmg" ],
-[ 3052, "dmh" ],
-[ 3053, "dmi" ],
-[ 3054, "dmj" ],
-[ 3055, "dmk" ],
-[ 3056, "dml" ],
-[ 3057, "dmm" ],
-[ 3058, "dmn" ],
-[ 3059, "dmo" ],
-[ 3060, "dmp" ],
-[ 3061, "dmq" ],
-[ 3062, "dmr" ],
-[ 3063, "dms" ],
-[ 3064, "dmt" ],
-[ 3065, "dmu" ],
-[ 3066, "dmv" ],
-[ 3067, "dmw" ],
-[ 3068, "dmx" ],
-[ 3069, "dmy" ],
-[ 3070, "dmz" ],
-[ 3071, "dna" ],
-[ 3072, "dnb" ],
-[ 3073, "dnc" ],
-[ 3074, "dnd" ],
-[ 3075, "dne" ],
-[ 3076, "dnf" ],
-[ 3077, "dng" ],
-[ 3078, "dnh" ],
-[ 3079, "dni" ],
-[ 3080, "dnj" ],
-[ 3081, "dnk" ],
-[ 3082, "dnl" ],
-[ 3083, "dnm" ],
-[ 3084, "dnn" ],
-[ 3085, "dno" ],
-[ 3086, "dnp" ],
-[ 3087, "dnq" ],
-[ 3088, "dnr" ],
-[ 3089, "dns" ],
-[ 3090, "dnt" ],
-[ 3091, "dnu" ],
-[ 3092, "dnv" ],
-[ 3093, "dnw" ],
-[ 3094, "dnx" ],
-[ 3095, "dny" ],
-[ 3096, "dnz" ],
-[ 3097, "doa" ],
-[ 3098, "dob" ],
-[ 3099, "doc" ],
-[ 3100, "dod" ],
-[ 3101, "doe" ],
-[ 3102, "dof" ],
-[ 3103, "dog" ],
-[ 3104, "doh" ],
-[ 3105, "doi" ],
-[ 3106, "doj" ],
-[ 3107, "dok" ],
-[ 3108, "dol" ],
-[ 3109, "dom" ],
-[ 3110, "don" ],
-[ 3111, "doo" ],
-[ 3112, "dop" ],
-[ 3113, "doq" ],
-[ 3114, "dor" ],
-[ 3115, "dos" ],
-[ 3116, "dot" ],
-[ 3117, "dou" ],
-[ 3118, "dov" ],
-[ 3119, "dow" ],
-[ 3120, "dox" ],
-[ 3121, "doy" ],
-[ 3122, "doz" ],
-[ 3123, "dpa" ],
-[ 3124, "dpb" ],
-[ 3125, "dpc" ],
-[ 3126, "dpd" ],
-[ 3127, "dpe" ],
-[ 3128, "dpf" ],
-[ 3129, "dpg" ],
-[ 3130, "dph" ],
-[ 3131, "dpi" ],
-[ 3132, "dpj" ],
-[ 3133, "dpk" ],
-[ 3134, "dpl" ],
-[ 3135, "dpm" ],
-[ 3136, "dpn" ],
-[ 3137, "dpo" ],
-[ 3138, "dpp" ],
-[ 3139, "dpq" ],
-[ 3140, "dpr" ],
-[ 3141, "dps" ],
-[ 3142, "dpt" ],
-[ 3143, "dpu" ],
-[ 3144, "dpv" ],
-[ 3145, "dpw" ],
-[ 3146, "dpx" ],
-[ 3147, "dpy" ],
-[ 3148, "dpz" ],
-[ 3149, "dqa" ],
-[ 3150, "dqb" ],
-[ 3151, "dqc" ],
-[ 3152, "dqd" ],
-[ 3153, "dqe" ],
-[ 3154, "dqf" ],
-[ 3155, "dqg" ],
-[ 3156, "dqh" ],
-[ 3157, "dqi" ],
-[ 3158, "dqj" ],
-[ 3159, "dqk" ],
-[ 3160, "dql" ],
-[ 3161, "dqm" ],
-[ 3162, "dqn" ],
-[ 3163, "dqo" ],
-[ 3164, "dqp" ],
-[ 3165, "dqq" ],
-[ 3166, "dqr" ],
-[ 3167, "dqs" ],
-[ 3168, "dqt" ],
-[ 3169, "dqu" ],
-[ 3170, "dqv" ],
-[ 3171, "dqw" ],
-[ 3172, "dqx" ],
-[ 3173, "dqy" ],
-[ 3174, "dqz" ],
-[ 3175, "dra" ],
-[ 3176, "drb" ],
-[ 3177, "drc" ],
-[ 3178, "drd" ],
-[ 3179, "dre" ],
-[ 3180, "drf" ],
-[ 3181, "drg" ],
-[ 3182, "drh" ],
-[ 3183, "dri" ],
-[ 3184, "drj" ],
-[ 3185, "drk" ],
-[ 3186, "drl" ],
-[ 3187, "drm" ],
-[ 3188, "drn" ],
-[ 3189, "dro" ],
-[ 3190, "drp" ],
-[ 3191, "drq" ],
-[ 3192, "drr" ],
-[ 3193, "drs" ],
-[ 3194, "drt" ],
-[ 3195, "dru" ],
-[ 3196, "drv" ],
-[ 3197, "drw" ],
-[ 3198, "drx" ],
-[ 3199, "dry" ],
-[ 3200, "drz" ],
-[ 3201, "dsa" ],
-[ 3202, "dsb" ],
-[ 3203, "dsc" ],
-[ 3204, "dsd" ],
-[ 3205, "dse" ],
-[ 3206, "dsf" ],
-[ 3207, "dsg" ],
-[ 3208, "dsh" ],
-[ 3209, "dsi" ],
-[ 3210, "dsj" ],
-[ 3211, "dsk" ],
-[ 3212, "dsl" ],
-[ 3213, "dsm" ],
-[ 3214, "dsn" ],
-[ 3215, "dso" ],
-[ 3216, "dsp" ],
-[ 3217, "dsq" ],
-[ 3218, "dsr" ],
-[ 3219, "dss" ],
-[ 3220, "dst" ],
-[ 3221, "dsu" ],
-[ 3222, "dsv" ],
-[ 3223, "dsw" ],
-[ 3224, "dsx" ],
-[ 3225, "dsy" ],
-[ 3226, "dsz" ],
-[ 3227, "dta" ],
-[ 3228, "dtb" ],
-[ 3229, "dtc" ],
-[ 3230, "dtd" ],
-[ 3231, "dte" ],
-[ 3232, "dtf" ],
-[ 3233, "dtg" ],
-[ 3234, "dth" ],
-[ 3235, "dti" ],
-[ 3236, "dtj" ],
-[ 3237, "dtk" ],
-[ 3238, "dtl" ],
-[ 3239, "dtm" ],
-[ 3240, "dtn" ],
-[ 3241, "dto" ],
-[ 3242, "dtp" ],
-[ 3243, "dtq" ],
-[ 3244, "dtr" ],
-[ 3245, "dts" ],
-[ 3246, "dtt" ],
-[ 3247, "dtu" ],
-[ 3248, "dtv" ],
-[ 3249, "dtw" ],
-[ 3250, "dtx" ],
-[ 3251, "dty" ],
-[ 3252, "dtz" ],
-[ 3253, "dua" ],
-[ 3254, "dub" ],
-[ 3255, "duc" ],
-[ 3256, "dud" ],
-[ 3257, "due" ],
-[ 3258, "duf" ],
-[ 3259, "dug" ],
-[ 3260, "duh" ],
-[ 3261, "dui" ],
-[ 3262, "duj" ],
-[ 3263, "duk" ],
-[ 3264, "dul" ],
-[ 3265, "dum" ],
-[ 3266, "dun" ],
-[ 3267, "duo" ],
-[ 3268, "dup" ],
-[ 3269, "duq" ],
-[ 3270, "dur" ],
-[ 3271, "dus" ],
-[ 3272, "dut" ],
-[ 3273, "duu" ],
-[ 3274, "duv" ],
-[ 3275, "duw" ],
-[ 3276, "dux" ],
-[ 3277, "duy" ],
-[ 3278, "duz" ],
-[ 3279, "dva" ],
-[ 3280, "dvb" ],
-[ 3281, "dvc" ],
-[ 3282, "dvd" ],
-[ 3283, "dve" ],
-[ 3284, "dvf" ],
-[ 3285, "dvg" ],
-[ 3286, "dvh" ],
-[ 3287, "dvi" ],
-[ 3288, "dvj" ],
-[ 3289, "dvk" ],
-[ 3290, "dvl" ],
-[ 3291, "dvm" ],
-[ 3292, "dvn" ],
-[ 3293, "dvo" ],
-[ 3294, "dvp" ],
-[ 3295, "dvq" ],
-[ 3296, "dvr" ],
-[ 3297, "dvs" ],
-[ 3298, "dvt" ],
-[ 3299, "dvu" ],
-[ 3300, "dvv" ],
-[ 3301, "dvw" ],
-[ 3302, "dvx" ],
-[ 3303, "dvy" ],
-[ 3304, "dvz" ],
-[ 3305, "dwa" ],
-[ 3306, "dwb" ],
-[ 3307, "dwc" ],
-[ 3308, "dwd" ],
-[ 3309, "dwe" ],
-[ 3310, "dwf" ],
-[ 3311, "dwg" ],
-[ 3312, "dwh" ],
-[ 3313, "dwi" ],
-[ 3314, "dwj" ],
-[ 3315, "dwk" ],
-[ 3316, "dwl" ],
-[ 3317, "dwm" ],
-[ 3318, "dwn" ],
-[ 3319, "dwo" ],
-[ 3320, "dwp" ],
-[ 3321, "dwq" ],
-[ 3322, "dwr" ],
-[ 3323, "dws" ],
-[ 3324, "dwt" ],
-[ 3325, "dwu" ],
-[ 3326, "dwv" ],
-[ 3327, "dww" ],
-[ 3328, "dwx" ],
-[ 3329, "dwy" ],
-[ 3330, "dwz" ],
-[ 3331, "dxa" ],
-[ 3332, "dxb" ],
-[ 3333, "dxc" ],
-[ 3334, "dxd" ],
-[ 3335, "dxe" ],
-[ 3336, "dxf" ],
-[ 3337, "dxg" ],
-[ 3338, "dxh" ],
-[ 3339, "dxi" ],
-[ 3340, "dxj" ],
-[ 3341, "dxk" ],
-[ 3342, "dxl" ],
-[ 3343, "dxm" ],
-[ 3344, "dxn" ],
-[ 3345, "dxo" ],
-[ 3346, "dxp" ],
-[ 3347, "dxq" ],
-[ 3348, "dxr" ],
-[ 3349, "dxs" ],
-[ 3350, "dxt" ],
-[ 3351, "dxu" ],
-[ 3352, "dxv" ],
-[ 3353, "dxw" ],
-[ 3354, "dxx" ],
-[ 3355, "dxy" ],
-[ 3356, "dxz" ],
-[ 3357, "dya" ],
-[ 3358, "dyb" ],
-[ 3359, "dyc" ],
-[ 3360, "dyd" ],
-[ 3361, "dye" ],
-[ 3362, "dyf" ],
-[ 3363, "dyg" ],
-[ 3364, "dyh" ],
-[ 3365, "dyi" ],
-[ 3366, "dyj" ],
-[ 3367, "dyk" ],
-[ 3368, "dyl" ],
-[ 3369, "dym" ],
-[ 3370, "dyn" ],
-[ 3371, "dyo" ],
-[ 3372, "dyp" ],
-[ 3373, "dyq" ],
-[ 3374, "dyr" ],
-[ 3375, "dys" ],
-[ 3376, "dyt" ],
-[ 3377, "dyu" ],
-[ 3378, "dyv" ],
-[ 3379, "dyw" ],
-[ 3380, "dyx" ],
-[ 3381, "dyy" ],
-[ 3382, "dyz" ],
-[ 3383, "dza" ],
-[ 3384, "dzb" ],
-[ 3385, "dzc" ],
-[ 3386, "dzd" ],
-[ 3387, "dze" ],
-[ 3388, "dzf" ],
-[ 3389, "dzg" ],
-[ 3390, "dzh" ],
-[ 3391, "dzi" ],
-[ 3392, "dzj" ],
-[ 3393, "dzk" ],
-[ 3394, "dzl" ],
-[ 3395, "dzm" ],
-[ 3396, "dzn" ],
-[ 3397, "dzo" ],
-[ 3398, "dzp" ],
-[ 3399, "dzq" ],
-[ 3400, "dzr" ],
-[ 3401, "dzs" ],
-[ 3402, "dzt" ],
-[ 3403, "dzu" ],
-[ 3404, "dzv" ],
-[ 3405, "dzw" ],
-[ 3406, "dzx" ],
-[ 3407, "dzy" ],
-[ 3408, "dzz" ],
-[ 3409, "eaa" ],
-[ 3410, "eab" ],
-[ 3411, "eac" ],
-[ 3412, "ead" ],
-[ 3413, "eae" ],
-[ 3414, "eaf" ],
-[ 3415, "eag" ],
-[ 3416, "eah" ],
-[ 3417, "eai" ],
-[ 3418, "eaj" ],
-[ 3419, "eak" ],
-[ 3420, "eal" ],
-[ 3421, "eam" ],
-[ 3422, "ean" ],
-[ 3423, "eao" ],
-[ 3424, "eap" ],
-[ 3425, "eaq" ],
-[ 3426, "ear" ],
-[ 3427, "eas" ],
-[ 3428, "eat" ],
-[ 3429, "eau" ],
-[ 3430, "eav" ],
-[ 3431, "eaw" ],
-[ 3432, "eax" ],
-[ 3433, "eay" ],
-[ 3434, "eaz" ],
-[ 3435, "eba" ],
-[ 3436, "ebb" ],
-[ 3437, "ebc" ],
-[ 3438, "ebd" ],
-[ 3439, "ebe" ],
-[ 3440, "ebf" ],
-[ 3441, "ebg" ],
-[ 3442, "ebh" ],
-[ 3443, "ebi" ],
-[ 3444, "ebj" ],
-[ 3445, "ebk" ],
-[ 3446, "ebl" ],
-[ 3447, "ebm" ],
-[ 3448, "ebn" ],
-[ 3449, "ebo" ],
-[ 3450, "ebp" ],
-[ 3451, "ebq" ],
-[ 3452, "ebr" ],
-[ 3453, "ebs" ],
-[ 3454, "ebt" ],
-[ 3455, "ebu" ],
-[ 3456, "ebv" ],
-[ 3457, "ebw" ],
-[ 3458, "ebx" ],
-[ 3459, "eby" ],
-[ 3460, "ebz" ],
-[ 3461, "eca" ],
-[ 3462, "ecb" ],
-[ 3463, "ecc" ],
-[ 3464, "ecd" ],
-[ 3465, "ece" ],
-[ 3466, "ecf" ],
-[ 3467, "ecg" ],
-[ 3468, "ech" ],
-[ 3469, "eci" ],
-[ 3470, "ecj" ],
-[ 3471, "eck" ],
-[ 3472, "ecl" ],
-[ 3473, "ecm" ],
-[ 3474, "ecn" ],
-[ 3475, "eco" ],
-[ 3476, "ecp" ],
-[ 3477, "ecq" ],
-[ 3478, "ecr" ],
-[ 3479, "ecs" ],
-[ 3480, "ect" ],
-[ 3481, "ecu" ],
-[ 3482, "ecv" ],
-[ 3483, "ecw" ],
-[ 3484, "ecx" ],
-[ 3485, "ecy" ],
-[ 3486, "ecz" ],
-[ 3487, "eda" ],
-[ 3488, "edb" ],
-[ 3489, "edc" ],
-[ 3490, "edd" ],
-[ 3491, "ede" ],
-[ 3492, "edf" ],
-[ 3493, "edg" ],
-[ 3494, "edh" ],
-[ 3495, "edi" ],
-[ 3496, "edj" ],
-[ 3497, "edk" ],
-[ 3498, "edl" ],
-[ 3499, "edm" ],
-[ 3500, "edn" ],
-[ 3501, "edo" ],
-[ 3502, "edp" ],
-[ 3503, "edq" ],
-[ 3504, "edr" ],
-[ 3505, "eds" ],
-[ 3506, "edt" ],
-[ 3507, "edu" ],
-[ 3508, "edv" ],
-[ 3509, "edw" ],
-[ 3510, "edx" ],
-[ 3511, "edy" ],
-[ 3512, "edz" ],
-[ 3513, "eea" ],
-[ 3514, "eeb" ],
-[ 3515, "eec" ],
-[ 3516, "eed" ],
-[ 3517, "eee" ],
-[ 3518, "eef" ],
-[ 3519, "eeg" ],
-[ 3520, "eeh" ],
-[ 3521, "eei" ],
-[ 3522, "eej" ],
-[ 3523, "eek" ],
-[ 3524, "eel" ],
-[ 3525, "eem" ],
-[ 3526, "een" ],
-[ 3527, "eeo" ],
-[ 3528, "eep" ],
-[ 3529, "eeq" ],
-[ 3530, "eer" ],
-[ 3531, "ees" ],
-[ 3532, "eet" ],
-[ 3533, "eeu" ],
-[ 3534, "eev" ],
-[ 3535, "eew" ],
-[ 3536, "eex" ],
-[ 3537, "eey" ],
-[ 3538, "eez" ],
-[ 3539, "efa" ],
-[ 3540, "efb" ],
-[ 3541, "efc" ],
-[ 3542, "efd" ],
-[ 3543, "efe" ],
-[ 3544, "eff" ],
-[ 3545, "efg" ],
-[ 3546, "efh" ],
-[ 3547, "efi" ],
-[ 3548, "efj" ],
-[ 3549, "efk" ],
-[ 3550, "efl" ],
-[ 3551, "efm" ],
-[ 3552, "efn" ],
-[ 3553, "efo" ],
-[ 3554, "efp" ],
-[ 3555, "efq" ],
-[ 3556, "efr" ],
-[ 3557, "efs" ],
-[ 3558, "eft" ],
-[ 3559, "efu" ],
-[ 3560, "efv" ],
-[ 3561, "efw" ],
-[ 3562, "efx" ],
-[ 3563, "efy" ],
-[ 3564, "efz" ],
-[ 3565, "ega" ],
-[ 3566, "egb" ],
-[ 3567, "egc" ],
-[ 3568, "egd" ],
-[ 3569, "ege" ],
-[ 3570, "egf" ],
-[ 3571, "egg" ],
-[ 3572, "egh" ],
-[ 3573, "egi" ],
-[ 3574, "egj" ],
-[ 3575, "egk" ],
-[ 3576, "egl" ],
-[ 3577, "egm" ],
-[ 3578, "egn" ],
-[ 3579, "ego" ],
-[ 3580, "egp" ],
-[ 3581, "egq" ],
-[ 3582, "egr" ],
-[ 3583, "egs" ],
-[ 3584, "egt" ],
-[ 3585, "egu" ],
-[ 3586, "egv" ],
-[ 3587, "egw" ],
-[ 3588, "egx" ],
-[ 3589, "egy" ],
-[ 3590, "egz" ],
-[ 3591, "eha" ],
-[ 3592, "ehb" ],
-[ 3593, "ehc" ],
-[ 3594, "ehd" ],
-[ 3595, "ehe" ],
-[ 3596, "ehf" ],
-[ 3597, "ehg" ],
-[ 3598, "ehh" ],
-[ 3599, "ehi" ],
-[ 3600, "ehj" ],
-[ 3601, "ehk" ],
-[ 3602, "ehl" ],
-[ 3603, "ehm" ],
-[ 3604, "ehn" ],
-[ 3605, "eho" ],
-[ 3606, "ehp" ],
-[ 3607, "ehq" ],
-[ 3608, "ehr" ],
-[ 3609, "ehs" ],
-[ 3610, "eht" ],
-[ 3611, "ehu" ],
-[ 3612, "ehv" ],
-[ 3613, "ehw" ],
-[ 3614, "ehx" ],
-[ 3615, "ehy" ],
-[ 3616, "ehz" ],
-[ 3617, "eia" ],
-[ 3618, "eib" ],
-[ 3619, "eic" ],
-[ 3620, "eid" ],
-[ 3621, "eie" ],
-[ 3622, "eif" ],
-[ 3623, "eig" ],
-[ 3624, "eih" ],
-[ 3625, "eii" ],
-[ 3626, "eij" ],
-[ 3627, "eik" ],
-[ 3628, "eil" ],
-[ 3629, "eim" ],
-[ 3630, "ein" ],
-[ 3631, "eio" ],
-[ 3632, "eip" ],
-[ 3633, "eiq" ],
-[ 3634, "eir" ],
-[ 3635, "eis" ],
-[ 3636, "eit" ],
-[ 3637, "eiu" ],
-[ 3638, "eiv" ],
-[ 3639, "eiw" ],
-[ 3640, "eix" ],
-[ 3641, "eiy" ],
-[ 3642, "eiz" ],
-[ 3643, "eja" ],
-[ 3644, "ejb" ],
-[ 3645, "ejc" ],
-[ 3646, "ejd" ],
-[ 3647, "eje" ],
-[ 3648, "ejf" ],
-[ 3649, "ejg" ],
-[ 3650, "ejh" ],
-[ 3651, "eji" ],
-[ 3652, "ejj" ],
-[ 3653, "ejk" ],
-[ 3654, "ejl" ],
-[ 3655, "ejm" ],
-[ 3656, "ejn" ],
-[ 3657, "ejo" ],
-[ 3658, "ejp" ],
-[ 3659, "ejq" ],
-[ 3660, "ejr" ],
-[ 3661, "ejs" ],
-[ 3662, "ejt" ],
-[ 3663, "eju" ],
-[ 3664, "ejv" ],
-[ 3665, "ejw" ],
-[ 3666, "ejx" ],
-[ 3667, "ejy" ],
-[ 3668, "ejz" ],
-[ 3669, "eka" ],
-[ 3670, "ekb" ],
-[ 3671, "ekc" ],
-[ 3672, "ekd" ],
-[ 3673, "eke" ],
-[ 3674, "ekf" ],
-[ 3675, "ekg" ],
-[ 3676, "ekh" ],
-[ 3677, "eki" ],
-[ 3678, "ekj" ],
-[ 3679, "ekk" ],
-[ 3680, "ekl" ],
-[ 3681, "ekm" ],
-[ 3682, "ekn" ],
-[ 3683, "eko" ],
-[ 3684, "ekp" ],
-[ 3685, "ekq" ],
-[ 3686, "ekr" ],
-[ 3687, "eks" ],
-[ 3688, "ekt" ],
-[ 3689, "eku" ],
-[ 3690, "ekv" ],
-[ 3691, "ekw" ],
-[ 3692, "ekx" ],
-[ 3693, "eky" ],
-[ 3694, "ekz" ],
-[ 3695, "ela" ],
-[ 3696, "elb" ],
-[ 3697, "elc" ],
-[ 3698, "eld" ],
-[ 3699, "ele" ],
-[ 3700, "elf" ],
-[ 3701, "elg" ],
-[ 3702, "elh" ],
-[ 3703, "eli" ],
-[ 3704, "elj" ],
-[ 3705, "elk" ],
-[ 3706, "ell" ],
-[ 3707, "elm" ],
-[ 3708, "eln" ],
-[ 3709, "elo" ],
-[ 3710, "elp" ],
-[ 3711, "elq" ],
-[ 3712, "elr" ],
-[ 3713, "els" ],
-[ 3714, "elt" ],
-[ 3715, "elu" ],
-[ 3716, "elv" ],
-[ 3717, "elw" ],
-[ 3718, "elx" ],
-[ 3719, "ely" ],
-[ 3720, "elz" ],
-[ 3721, "ema" ],
-[ 3722, "emb" ],
-[ 3723, "emc" ],
-[ 3724, "emd" ],
-[ 3725, "eme" ],
-[ 3726, "emf" ],
-[ 3727, "emg" ],
-[ 3728, "emh" ],
-[ 3729, "emi" ],
-[ 3730, "emj" ],
-[ 3731, "emk" ],
-[ 3732, "eml" ],
-[ 3733, "emm" ],
-[ 3734, "emn" ],
-[ 3735, "emo" ],
-[ 3736, "emp" ],
-[ 3737, "emq" ],
-[ 3738, "emr" ],
-[ 3739, "ems" ],
-[ 3740, "emt" ],
-[ 3741, "emu" ],
-[ 3742, "emv" ],
-[ 3743, "emw" ],
-[ 3744, "emx" ],
-[ 3745, "emy" ],
-[ 3746, "emz" ],
-[ 3747, "ena" ],
-[ 3748, "enb" ],
-[ 3749, "enc" ],
-[ 3750, "end" ],
-[ 3751, "ene" ],
-[ 3752, "enf" ],
-[ 3753, "eng" ],
-[ 3754, "enh" ],
-[ 3755, "eni" ],
-[ 3756, "enj" ],
-[ 3757, "enk" ],
-[ 3758, "enl" ],
-[ 3759, "enm" ],
-[ 3760, "enn" ],
-[ 3761, "eno" ],
-[ 3762, "enp" ],
-[ 3763, "enq" ],
-[ 3764, "enr" ],
-[ 3765, "ens" ],
-[ 3766, "ent" ],
-[ 3767, "enu" ],
-[ 3768, "env" ],
-[ 3769, "enw" ],
-[ 3770, "enx" ],
-[ 3771, "eny" ],
-[ 3772, "enz" ],
-[ 3773, "eoa" ],
-[ 3774, "eob" ],
-[ 3775, "eoc" ],
-[ 3776, "eod" ],
-[ 3777, "eoe" ],
-[ 3778, "eof" ],
-[ 3779, "eog" ],
-[ 3780, "eoh" ],
-[ 3781, "eoi" ],
-[ 3782, "eoj" ],
-[ 3783, "eok" ],
-[ 3784, "eol" ],
-[ 3785, "eom" ],
-[ 3786, "eon" ],
-[ 3787, "eoo" ],
-[ 3788, "eop" ],
-[ 3789, "eoq" ],
-[ 3790, "eor" ],
-[ 3791, "eos" ],
-[ 3792, "eot" ],
-[ 3793, "eou" ],
-[ 3794, "eov" ],
-[ 3795, "eow" ],
-[ 3796, "eox" ],
-[ 3797, "eoy" ],
-[ 3798, "eoz" ],
-[ 3799, "epa" ],
-[ 3800, "epb" ],
-[ 3801, "epc" ],
-[ 3802, "epd" ],
-[ 3803, "epe" ],
-[ 3804, "epf" ],
-[ 3805, "epg" ],
-[ 3806, "eph" ],
-[ 3807, "epi" ],
-[ 3808, "epj" ],
-[ 3809, "epk" ],
-[ 3810, "epl" ],
-[ 3811, "epm" ],
-[ 3812, "epn" ],
-[ 3813, "epo" ],
-[ 3814, "epp" ],
-[ 3815, "epq" ],
-[ 3816, "epr" ],
-[ 3817, "eps" ],
-[ 3818, "ept" ],
-[ 3819, "epu" ],
-[ 3820, "epv" ],
-[ 3821, "epw" ],
-[ 3822, "epx" ],
-[ 3823, "epy" ],
-[ 3824, "epz" ],
-[ 3825, "eqa" ],
-[ 3826, "eqb" ],
-[ 3827, "eqc" ],
-[ 3828, "eqd" ],
-[ 3829, "eqe" ],
-[ 3830, "eqf" ],
-[ 3831, "eqg" ],
-[ 3832, "eqh" ],
-[ 3833, "eqi" ],
-[ 3834, "eqj" ],
-[ 3835, "eqk" ],
-[ 3836, "eql" ],
-[ 3837, "eqm" ],
-[ 3838, "eqn" ],
-[ 3839, "eqo" ],
-[ 3840, "eqp" ],
-[ 3841, "eqq" ],
-[ 3842, "eqr" ],
-[ 3843, "eqs" ],
-[ 3844, "eqt" ],
-[ 3845, "equ" ],
-[ 3846, "eqv" ],
-[ 3847, "eqw" ],
-[ 3848, "eqx" ],
-[ 3849, "eqy" ],
-[ 3850, "eqz" ],
-[ 3851, "era" ],
-[ 3852, "erb" ],
-[ 3853, "erc" ],
-[ 3854, "erd" ],
-[ 3855, "ere" ],
-[ 3856, "erf" ],
-[ 3857, "erg" ],
-[ 3858, "erh" ],
-[ 3859, "eri" ],
-[ 3860, "erj" ],
-[ 3861, "erk" ],
-[ 3862, "erl" ],
-[ 3863, "erm" ],
-[ 3864, "ern" ],
-[ 3865, "ero" ],
-[ 3866, "erp" ],
-[ 3867, "erq" ],
-[ 3868, "err" ],
-[ 3869, "ers" ],
-[ 3870, "ert" ],
-[ 3871, "eru" ],
-[ 3872, "erv" ],
-[ 3873, "erw" ],
-[ 3874, "erx" ],
-[ 3875, "ery" ],
-[ 3876, "erz" ],
-[ 3877, "esa" ],
-[ 3878, "esb" ],
-[ 3879, "esc" ],
-[ 3880, "esd" ],
-[ 3881, "ese" ],
-[ 3882, "esf" ],
-[ 3883, "esg" ],
-[ 3884, "esh" ],
-[ 3885, "esi" ],
-[ 3886, "esj" ],
-[ 3887, "esk" ],
-[ 3888, "esl" ],
-[ 3889, "esm" ],
-[ 3890, "esn" ],
-[ 3891, "eso" ],
-[ 3892, "esp" ],
-[ 3893, "esq" ],
-[ 3894, "esr" ],
-[ 3895, "ess" ],
-[ 3896, "est" ],
-[ 3897, "esu" ],
-[ 3898, "esv" ],
-[ 3899, "esw" ],
-[ 3900, "esx" ],
-[ 3901, "esy" ],
-[ 3902, "esz" ],
-[ 3903, "eta" ],
-[ 3904, "etb" ],
-[ 3905, "etc" ],
-[ 3906, "etd" ],
-[ 3907, "ete" ],
-[ 3908, "etf" ],
-[ 3909, "etg" ],
-[ 3910, "eth" ],
-[ 3911, "eti" ],
-[ 3912, "etj" ],
-[ 3913, "etk" ],
-[ 3914, "etl" ],
-[ 3915, "etm" ],
-[ 3916, "etn" ],
-[ 3917, "eto" ],
-[ 3918, "etp" ],
-[ 3919, "etq" ],
-[ 3920, "etr" ],
-[ 3921, "ets" ],
-[ 3922, "ett" ],
-[ 3923, "etu" ],
-[ 3924, "etv" ],
-[ 3925, "etw" ],
-[ 3926, "etx" ],
-[ 3927, "ety" ],
-[ 3928, "etz" ],
-[ 3929, "eua" ],
-[ 3930, "eub" ],
-[ 3931, "euc" ],
-[ 3932, "eud" ],
-[ 3933, "eue" ],
-[ 3934, "euf" ],
-[ 3935, "eug" ],
-[ 3936, "euh" ],
-[ 3937, "eui" ],
-[ 3938, "euj" ],
-[ 3939, "euk" ],
-[ 3940, "eul" ],
-[ 3941, "eum" ],
-[ 3942, "eun" ],
-[ 3943, "euo" ],
-[ 3944, "eup" ],
-[ 3945, "euq" ],
-[ 3946, "eur" ],
-[ 3947, "eus" ],
-[ 3948, "eut" ],
-[ 3949, "euu" ],
-[ 3950, "euv" ],
-[ 3951, "euw" ],
-[ 3952, "eux" ],
-[ 3953, "euy" ],
-[ 3954, "euz" ],
-[ 3955, "eva" ],
-[ 3956, "evb" ],
-[ 3957, "evc" ],
-[ 3958, "evd" ],
-[ 3959, "eve" ],
-[ 3960, "evf" ],
-[ 3961, "evg" ],
-[ 3962, "evh" ],
-[ 3963, "evi" ],
-[ 3964, "evj" ],
-[ 3965, "evk" ],
-[ 3966, "evl" ],
-[ 3967, "evm" ],
-[ 3968, "evn" ],
-[ 3969, "evo" ],
-[ 3970, "evp" ],
-[ 3971, "evq" ],
-[ 3972, "evr" ],
-[ 3973, "evs" ],
-[ 3974, "evt" ],
-[ 3975, "evu" ],
-[ 3976, "evv" ],
-[ 3977, "evw" ],
-[ 3978, "evx" ],
-[ 3979, "evy" ],
-[ 3980, "evz" ],
-[ 3981, "ewa" ],
-[ 3982, "ewb" ],
-[ 3983, "ewc" ],
-[ 3984, "ewd" ],
-[ 3985, "ewe" ],
-[ 3986, "ewf" ],
-[ 3987, "ewg" ],
-[ 3988, "ewh" ],
-[ 3989, "ewi" ],
-[ 3990, "ewj" ],
-[ 3991, "ewk" ],
-[ 3992, "ewl" ],
-[ 3993, "ewm" ],
-[ 3994, "ewn" ],
-[ 3995, "ewo" ],
-[ 3996, "ewp" ],
-[ 3997, "ewq" ],
-[ 3998, "ewr" ],
-[ 3999, "ews" ],
-[ 4000, "ewt" ],
-[ 4001, "ewu" ],
-[ 4002, "ewv" ],
-[ 4003, "eww" ],
-[ 4004, "ewx" ],
-[ 4005, "ewy" ],
-[ 4006, "ewz" ],
-[ 4007, "exa" ],
-[ 4008, "exb" ],
-[ 4009, "exc" ],
-[ 4010, "exd" ],
-[ 4011, "exe" ],
-[ 4012, "exf" ],
-[ 4013, "exg" ],
-[ 4014, "exh" ],
-[ 4015, "exi" ],
-[ 4016, "exj" ],
-[ 4017, "exk" ],
-[ 4018, "exl" ],
-[ 4019, "exm" ],
-[ 4020, "exn" ],
-[ 4021, "exo" ],
-[ 4022, "exp" ],
-[ 4023, "exq" ],
-[ 4024, "exr" ],
-[ 4025, "exs" ],
-[ 4026, "ext" ],
-[ 4027, "exu" ],
-[ 4028, "exv" ],
-[ 4029, "exw" ],
-[ 4030, "exx" ],
-[ 4031, "exy" ],
-[ 4032, "exz" ],
-[ 4033, "eya" ],
-[ 4034, "eyb" ],
-[ 4035, "eyc" ],
-[ 4036, "eyd" ],
-[ 4037, "eye" ],
-[ 4038, "eyf" ],
-[ 4039, "eyg" ],
-[ 4040, "eyh" ],
-[ 4041, "eyi" ],
-[ 4042, "eyj" ],
-[ 4043, "eyk" ],
-[ 4044, "eyl" ],
-[ 4045, "eym" ],
-[ 4046, "eyn" ],
-[ 4047, "eyo" ],
-[ 4048, "eyp" ],
-[ 4049, "eyq" ],
-[ 4050, "eyr" ],
-[ 4051, "eys" ],
-[ 4052, "eyt" ],
-[ 4053, "eyu" ],
-[ 4054, "eyv" ],
-[ 4055, "eyw" ],
-[ 4056, "eyx" ],
-[ 4057, "eyy" ],
-[ 4058, "eyz" ],
-[ 4059, "eza" ],
-[ 4060, "ezb" ],
-[ 4061, "ezc" ],
-[ 4062, "ezd" ],
-[ 4063, "eze" ],
-[ 4064, "ezf" ],
-[ 4065, "ezg" ],
-[ 4066, "ezh" ],
-[ 4067, "ezi" ],
-[ 4068, "ezj" ],
-[ 4069, "ezk" ],
-[ 4070, "ezl" ],
-[ 4071, "ezm" ],
-[ 4072, "ezn" ],
-[ 4073, "ezo" ],
-[ 4074, "ezp" ],
-[ 4075, "ezq" ],
-[ 4076, "ezr" ],
-[ 4077, "ezs" ],
-[ 4078, "ezt" ],
-[ 4079, "ezu" ],
-[ 4080, "ezv" ],
-[ 4081, "ezw" ],
-[ 4082, "ezx" ],
-[ 4083, "ezy" ],
-[ 4084, "ezz" ],
-[ 4085, "faa" ],
-[ 4086, "fab" ],
-[ 4087, "fac" ],
-[ 4088, "fad" ],
-[ 4089, "fae" ],
-[ 4090, "faf" ],
-[ 4091, "fag" ],
-[ 4092, "fah" ],
-[ 4093, "fai" ],
-[ 4094, "faj" ],
-[ 4095, "fak" ],
-[ 4096, "fal" ],
-[ 4097, "fam" ],
-[ 4098, "fan" ],
-[ 4099, "fao" ],
-[ 4100, "fap" ],
-[ 4101, "faq" ],
-[ 4102, "far" ],
-[ 4103, "fas" ],
-[ 4104, "fat" ],
-[ 4105, "fau" ],
-[ 4106, "fav" ],
-[ 4107, "faw" ],
-[ 4108, "fax" ],
-[ 4109, "fay" ],
-[ 4110, "faz" ],
-[ 4111, "fba" ],
-[ 4112, "fbb" ],
-[ 4113, "fbc" ],
-[ 4114, "fbd" ],
-[ 4115, "fbe" ],
-[ 4116, "fbf" ],
-[ 4117, "fbg" ],
-[ 4118, "fbh" ],
-[ 4119, "fbi" ],
-[ 4120, "fbj" ],
-[ 4121, "fbk" ],
-[ 4122, "fbl" ],
-[ 4123, "fbm" ],
-[ 4124, "fbn" ],
-[ 4125, "fbo" ],
-[ 4126, "fbp" ],
-[ 4127, "fbq" ],
-[ 4128, "fbr" ],
-[ 4129, "fbs" ],
-[ 4130, "fbt" ],
-[ 4131, "fbu" ],
-[ 4132, "fbv" ],
-[ 4133, "fbw" ],
-[ 4134, "fbx" ],
-[ 4135, "fby" ],
-[ 4136, "fbz" ],
-[ 4137, "fca" ],
-[ 4138, "fcb" ],
-[ 4139, "fcc" ],
-[ 4140, "fcd" ],
-[ 4141, "fce" ],
-[ 4142, "fcf" ],
-[ 4143, "fcg" ],
-[ 4144, "fch" ],
-[ 4145, "fci" ],
-[ 4146, "fcj" ],
-[ 4147, "fck" ],
-[ 4148, "fcl" ],
-[ 4149, "fcm" ],
-[ 4150, "fcn" ],
-[ 4151, "fco" ],
-[ 4152, "fcp" ],
-[ 4153, "fcq" ],
-[ 4154, "fcr" ],
-[ 4155, "fcs" ],
-[ 4156, "fct" ],
-[ 4157, "fcu" ],
-[ 4158, "fcv" ],
-[ 4159, "fcw" ],
-[ 4160, "fcx" ],
-[ 4161, "fcy" ],
-[ 4162, "fcz" ],
-[ 4163, "fda" ],
-[ 4164, "fdb" ],
-[ 4165, "fdc" ],
-[ 4166, "fdd" ],
-[ 4167, "fde" ],
-[ 4168, "fdf" ],
-[ 4169, "fdg" ],
-[ 4170, "fdh" ],
-[ 4171, "fdi" ],
-[ 4172, "fdj" ],
-[ 4173, "fdk" ],
-[ 4174, "fdl" ],
-[ 4175, "fdm" ],
-[ 4176, "fdn" ],
-[ 4177, "fdo" ],
-[ 4178, "fdp" ],
-[ 4179, "fdq" ],
-[ 4180, "fdr" ],
-[ 4181, "fds" ],
-[ 4182, "fdt" ],
-[ 4183, "fdu" ],
-[ 4184, "fdv" ],
-[ 4185, "fdw" ],
-[ 4186, "fdx" ],
-[ 4187, "fdy" ],
-[ 4188, "fdz" ],
-[ 4189, "fea" ],
-[ 4190, "feb" ],
-[ 4191, "fec" ],
-[ 4192, "fed" ],
-[ 4193, "fee" ],
-[ 4194, "fef" ],
-[ 4195, "feg" ],
-[ 4196, "feh" ],
-[ 4197, "fei" ],
-[ 4198, "fej" ],
-[ 4199, "fek" ],
-[ 4200, "fel" ],
-[ 4201, "fem" ],
-[ 4202, "fen" ],
-[ 4203, "feo" ],
-[ 4204, "fep" ],
-[ 4205, "feq" ],
-[ 4206, "fer" ],
-[ 4207, "fes" ],
-[ 4208, "fet" ],
-[ 4209, "feu" ],
-[ 4210, "fev" ],
-[ 4211, "few" ],
-[ 4212, "fex" ],
-[ 4213, "fey" ],
-[ 4214, "fez" ],
-[ 4215, "ffa" ],
-[ 4216, "ffb" ],
-[ 4217, "ffc" ],
-[ 4218, "ffd" ],
-[ 4219, "ffe" ],
-[ 4220, "fff" ],
-[ 4221, "ffg" ],
-[ 4222, "ffh" ],
-[ 4223, "ffi" ],
-[ 4224, "ffj" ],
-[ 4225, "ffk" ],
-[ 4226, "ffl" ],
-[ 4227, "ffm" ],
-[ 4228, "ffn" ],
-[ 4229, "ffo" ],
-[ 4230, "ffp" ],
-[ 4231, "ffq" ],
-[ 4232, "ffr" ],
-[ 4233, "ffs" ],
-[ 4234, "fft" ],
-[ 4235, "ffu" ],
-[ 4236, "ffv" ],
-[ 4237, "ffw" ],
-[ 4238, "ffx" ],
-[ 4239, "ffy" ],
-[ 4240, "ffz" ],
-[ 4241, "fga" ],
-[ 4242, "fgb" ],
-[ 4243, "fgc" ],
-[ 4244, "fgd" ],
-[ 4245, "fge" ],
-[ 4246, "fgf" ],
-[ 4247, "fgg" ],
-[ 4248, "fgh" ],
-[ 4249, "fgi" ],
-[ 4250, "fgj" ],
-[ 4251, "fgk" ],
-[ 4252, "fgl" ],
-[ 4253, "fgm" ],
-[ 4254, "fgn" ],
-[ 4255, "fgo" ],
-[ 4256, "fgp" ],
-[ 4257, "fgq" ],
-[ 4258, "fgr" ],
-[ 4259, "fgs" ],
-[ 4260, "fgt" ],
-[ 4261, "fgu" ],
-[ 4262, "fgv" ],
-[ 4263, "fgw" ],
-[ 4264, "fgx" ],
-[ 4265, "fgy" ],
-[ 4266, "fgz" ],
-[ 4267, "fha" ],
-[ 4268, "fhb" ],
-[ 4269, "fhc" ],
-[ 4270, "fhd" ],
-[ 4271, "fhe" ],
-[ 4272, "fhf" ],
-[ 4273, "fhg" ],
-[ 4274, "fhh" ],
-[ 4275, "fhi" ],
-[ 4276, "fhj" ],
-[ 4277, "fhk" ],
-[ 4278, "fhl" ],
-[ 4279, "fhm" ],
-[ 4280, "fhn" ],
-[ 4281, "fho" ],
-[ 4282, "fhp" ],
-[ 4283, "fhq" ],
-[ 4284, "fhr" ],
-[ 4285, "fhs" ],
-[ 4286, "fht" ],
-[ 4287, "fhu" ],
-[ 4288, "fhv" ],
-[ 4289, "fhw" ],
-[ 4290, "fhx" ],
-[ 4291, "fhy" ],
-[ 4292, "fhz" ],
-[ 4293, "fia" ],
-[ 4294, "fib" ],
-[ 4295, "fic" ],
-[ 4296, "fid" ],
-[ 4297, "fie" ],
-[ 4298, "fif" ],
-[ 4299, "fig" ],
-[ 4300, "fih" ],
-[ 4301, "fii" ],
-[ 4302, "fij" ],
-[ 4303, "fik" ],
-[ 4304, "fil" ],
-[ 4305, "fim" ],
-[ 4306, "fin" ],
-[ 4307, "fio" ],
-[ 4308, "fip" ],
-[ 4309, "fiq" ],
-[ 4310, "fir" ],
-[ 4311, "fis" ],
-[ 4312, "fit" ],
-[ 4313, "fiu" ],
-[ 4314, "fiv" ],
-[ 4315, "fiw" ],
-[ 4316, "fix" ],
-[ 4317, "fiy" ],
-[ 4318, "fiz" ],
-[ 4319, "fja" ],
-[ 4320, "fjb" ],
-[ 4321, "fjc" ],
-[ 4322, "fjd" ],
-[ 4323, "fje" ],
-[ 4324, "fjf" ],
-[ 4325, "fjg" ],
-[ 4326, "fjh" ],
-[ 4327, "fji" ],
-[ 4328, "fjj" ],
-[ 4329, "fjk" ],
-[ 4330, "fjl" ],
-[ 4331, "fjm" ],
-[ 4332, "fjn" ],
-[ 4333, "fjo" ],
-[ 4334, "fjp" ],
-[ 4335, "fjq" ],
-[ 4336, "fjr" ],
-[ 4337, "fjs" ],
-[ 4338, "fjt" ],
-[ 4339, "fju" ],
-[ 4340, "fjv" ],
-[ 4341, "fjw" ],
-[ 4342, "fjx" ],
-[ 4343, "fjy" ],
-[ 4344, "fjz" ],
-[ 4345, "fka" ],
-[ 4346, "fkb" ],
-[ 4347, "fkc" ],
-[ 4348, "fkd" ],
-[ 4349, "fke" ],
-[ 4350, "fkf" ],
-[ 4351, "fkg" ],
-[ 4352, "fkh" ],
-[ 4353, "fki" ],
-[ 4354, "fkj" ],
-[ 4355, "fkk" ],
-[ 4356, "fkl" ],
-[ 4357, "fkm" ],
-[ 4358, "fkn" ],
-[ 4359, "fko" ],
-[ 4360, "fkp" ],
-[ 4361, "fkq" ],
-[ 4362, "fkr" ],
-[ 4363, "fks" ],
-[ 4364, "fkt" ],
-[ 4365, "fku" ],
-[ 4366, "fkv" ],
-[ 4367, "fkw" ],
-[ 4368, "fkx" ],
-[ 4369, "fky" ],
-[ 4370, "fkz" ],
-[ 4371, "fla" ],
-[ 4372, "flb" ],
-[ 4373, "flc" ],
-[ 4374, "fld" ],
-[ 4375, "fle" ],
-[ 4376, "flf" ],
-[ 4377, "flg" ],
-[ 4378, "flh" ],
-[ 4379, "fli" ],
-[ 4380, "flj" ],
-[ 4381, "flk" ],
-[ 4382, "fll" ],
-[ 4383, "flm" ],
-[ 4384, "fln" ],
-[ 4385, "flo" ],
-[ 4386, "flp" ],
-[ 4387, "flq" ],
-[ 4388, "flr" ],
-[ 4389, "fls" ],
-[ 4390, "flt" ],
-[ 4391, "flu" ],
-[ 4392, "flv" ],
-[ 4393, "flw" ],
-[ 4394, "flx" ],
-[ 4395, "fly" ],
-[ 4396, "flz" ],
-[ 4397, "fma" ],
-[ 4398, "fmb" ],
-[ 4399, "fmc" ],
-[ 4400, "fmd" ],
-[ 4401, "fme" ],
-[ 4402, "fmf" ],
-[ 4403, "fmg" ],
-[ 4404, "fmh" ],
-[ 4405, "fmi" ],
-[ 4406, "fmj" ],
-[ 4407, "fmk" ],
-[ 4408, "fml" ],
-[ 4409, "fmm" ],
-[ 4410, "fmn" ],
-[ 4411, "fmo" ],
-[ 4412, "fmp" ],
-[ 4413, "fmq" ],
-[ 4414, "fmr" ],
-[ 4415, "fms" ],
-[ 4416, "fmt" ],
-[ 4417, "fmu" ],
-[ 4418, "fmv" ],
-[ 4419, "fmw" ],
-[ 4420, "fmx" ],
-[ 4421, "fmy" ],
-[ 4422, "fmz" ],
-[ 4423, "fna" ],
-[ 4424, "fnb" ],
-[ 4425, "fnc" ],
-[ 4426, "fnd" ],
-[ 4427, "fne" ],
-[ 4428, "fnf" ],
-[ 4429, "fng" ],
-[ 4430, "fnh" ],
-[ 4431, "fni" ],
-[ 4432, "fnj" ],
-[ 4433, "fnk" ],
-[ 4434, "fnl" ],
-[ 4435, "fnm" ],
-[ 4436, "fnn" ],
-[ 4437, "fno" ],
-[ 4438, "fnp" ],
-[ 4439, "fnq" ],
-[ 4440, "fnr" ],
-[ 4441, "fns" ],
-[ 4442, "fnt" ],
-[ 4443, "fnu" ],
-[ 4444, "fnv" ],
-[ 4445, "fnw" ],
-[ 4446, "fnx" ],
-[ 4447, "fny" ],
-[ 4448, "fnz" ],
-[ 4449, "foa" ],
-[ 4450, "fob" ],
-[ 4451, "foc" ],
-[ 4452, "fod" ],
-[ 4453, "foe" ],
-[ 4454, "fof" ],
-[ 4455, "fog" ],
-[ 4456, "foh" ],
-[ 4457, "foi" ],
-[ 4458, "foj" ],
-[ 4459, "fok" ],
-[ 4460, "fol" ],
-[ 4461, "fom" ],
-[ 4462, "fon" ],
-[ 4463, "foo" ],
-[ 4464, "fop" ],
-[ 4465, "foq" ],
-[ 4466, "for" ],
-[ 4467, "fos" ],
-[ 4468, "fot" ],
-[ 4469, "fou" ],
-[ 4470, "fov" ],
-[ 4471, "fow" ],
-[ 4472, "fox" ],
-[ 4473, "foy" ],
-[ 4474, "foz" ],
-[ 4475, "fpa" ],
-[ 4476, "fpb" ],
-[ 4477, "fpc" ],
-[ 4478, "fpd" ],
-[ 4479, "fpe" ],
-[ 4480, "fpf" ],
-[ 4481, "fpg" ],
-[ 4482, "fph" ],
-[ 4483, "fpi" ],
-[ 4484, "fpj" ],
-[ 4485, "fpk" ],
-[ 4486, "fpl" ],
-[ 4487, "fpm" ],
-[ 4488, "fpn" ],
-[ 4489, "fpo" ],
-[ 4490, "fpp" ],
-[ 4491, "fpq" ],
-[ 4492, "fpr" ],
-[ 4493, "fps" ],
-[ 4494, "fpt" ],
-[ 4495, "fpu" ],
-[ 4496, "fpv" ],
-[ 4497, "fpw" ],
-[ 4498, "fpx" ],
-[ 4499, "fpy" ],
-[ 4500, "fpz" ],
-[ 4501, "fqa" ],
-[ 4502, "fqb" ],
-[ 4503, "fqc" ],
-[ 4504, "fqd" ],
-[ 4505, "fqe" ],
-[ 4506, "fqf" ],
-[ 4507, "fqg" ],
-[ 4508, "fqh" ],
-[ 4509, "fqi" ],
-[ 4510, "fqj" ],
-[ 4511, "fqk" ],
-[ 4512, "fql" ],
-[ 4513, "fqm" ],
-[ 4514, "fqn" ],
-[ 4515, "fqo" ],
-[ 4516, "fqp" ],
-[ 4517, "fqq" ],
-[ 4518, "fqr" ],
-[ 4519, "fqs" ],
-[ 4520, "fqt" ],
-[ 4521, "fqu" ],
-[ 4522, "fqv" ],
-[ 4523, "fqw" ],
-[ 4524, "fqx" ],
-[ 4525, "fqy" ],
-[ 4526, "fqz" ],
-[ 4527, "fra" ],
-[ 4528, "frb" ],
-[ 4529, "frc" ],
-[ 4530, "frd" ],
-[ 4531, "fre" ],
-[ 4532, "frf" ],
-[ 4533, "frg" ],
-[ 4534, "frh" ],
-[ 4535, "fri" ],
-[ 4536, "frj" ],
-[ 4537, "frk" ],
-[ 4538, "frl" ],
-[ 4539, "frm" ],
-[ 4540, "frn" ],
-[ 4541, "fro" ],
-[ 4542, "frp" ],
-[ 4543, "frq" ],
-[ 4544, "frr" ],
-[ 4545, "frs" ],
-[ 4546, "frt" ],
-[ 4547, "fru" ],
-[ 4548, "frv" ],
-[ 4549, "frw" ],
-[ 4550, "frx" ],
-[ 4551, "fry" ],
-[ 4552, "frz" ],
-[ 4553, "fsa" ],
-[ 4554, "fsb" ],
-[ 4555, "fsc" ],
-[ 4556, "fsd" ],
-[ 4557, "fse" ],
-[ 4558, "fsf" ],
-[ 4559, "fsg" ],
-[ 4560, "fsh" ],
-[ 4561, "fsi" ],
-[ 4562, "fsj" ],
-[ 4563, "fsk" ],
-[ 4564, "fsl" ],
-[ 4565, "fsm" ],
-[ 4566, "fsn" ],
-[ 4567, "fso" ],
-[ 4568, "fsp" ],
-[ 4569, "fsq" ],
-[ 4570, "fsr" ],
-[ 4571, "fss" ],
-[ 4572, "fst" ],
-[ 4573, "fsu" ],
-[ 4574, "fsv" ],
-[ 4575, "fsw" ],
-[ 4576, "fsx" ],
-[ 4577, "fsy" ],
-[ 4578, "fsz" ],
-[ 4579, "fta" ],
-[ 4580, "ftb" ],
-[ 4581, "ftc" ],
-[ 4582, "ftd" ],
-[ 4583, "fte" ],
-[ 4584, "ftf" ],
-[ 4585, "ftg" ],
-[ 4586, "fth" ],
-[ 4587, "fti" ],
-[ 4588, "ftj" ],
-[ 4589, "ftk" ],
-[ 4590, "ftl" ],
-[ 4591, "ftm" ],
-[ 4592, "ftn" ],
-[ 4593, "fto" ],
-[ 4594, "ftp" ],
-[ 4595, "ftq" ],
-[ 4596, "ftr" ],
-[ 4597, "fts" ],
-[ 4598, "ftt" ],
-[ 4599, "ftu" ],
-[ 4600, "ftv" ],
-[ 4601, "ftw" ],
-[ 4602, "ftx" ],
-[ 4603, "fty" ],
-[ 4604, "ftz" ],
-[ 4605, "fua" ],
-[ 4606, "fub" ],
-[ 4607, "fuc" ],
-[ 4608, "fud" ],
-[ 4609, "fue" ],
-[ 4610, "fuf" ],
-[ 4611, "fug" ],
-[ 4612, "fuh" ],
-[ 4613, "fui" ],
-[ 4614, "fuj" ],
-[ 4615, "fuk" ],
-[ 4616, "ful" ],
-[ 4617, "fum" ],
-[ 4618, "fun" ],
-[ 4619, "fuo" ],
-[ 4620, "fup" ],
-[ 4621, "fuq" ],
-[ 4622, "fur" ],
-[ 4623, "fus" ],
-[ 4624, "fut" ],
-[ 4625, "fuu" ],
-[ 4626, "fuv" ],
-[ 4627, "fuw" ],
-[ 4628, "fux" ],
-[ 4629, "fuy" ],
-[ 4630, "fuz" ],
-[ 4631, "fva" ],
-[ 4632, "fvb" ],
-[ 4633, "fvc" ],
-[ 4634, "fvd" ],
-[ 4635, "fve" ],
-[ 4636, "fvf" ],
-[ 4637, "fvg" ],
-[ 4638, "fvh" ],
-[ 4639, "fvi" ],
-[ 4640, "fvj" ],
-[ 4641, "fvk" ],
-[ 4642, "fvl" ],
-[ 4643, "fvm" ],
-[ 4644, "fvn" ],
-[ 4645, "fvo" ],
-[ 4646, "fvp" ],
-[ 4647, "fvq" ],
-[ 4648, "fvr" ],
-[ 4649, "fvs" ],
-[ 4650, "fvt" ],
-[ 4651, "fvu" ],
-[ 4652, "fvv" ],
-[ 4653, "fvw" ],
-[ 4654, "fvx" ],
-[ 4655, "fvy" ],
-[ 4656, "fvz" ],
-[ 4657, "fwa" ],
-[ 4658, "fwb" ],
-[ 4659, "fwc" ],
-[ 4660, "fwd" ],
-[ 4661, "fwe" ],
-[ 4662, "fwf" ],
-[ 4663, "fwg" ],
-[ 4664, "fwh" ],
-[ 4665, "fwi" ],
-[ 4666, "fwj" ],
-[ 4667, "fwk" ],
-[ 4668, "fwl" ],
-[ 4669, "fwm" ],
-[ 4670, "fwn" ],
-[ 4671, "fwo" ],
-[ 4672, "fwp" ],
-[ 4673, "fwq" ],
-[ 4674, "fwr" ],
-[ 4675, "fws" ],
-[ 4676, "fwt" ],
-[ 4677, "fwu" ],
-[ 4678, "fwv" ],
-[ 4679, "fww" ],
-[ 4680, "fwx" ],
-[ 4681, "fwy" ],
-[ 4682, "fwz" ],
-[ 4683, "fxa" ],
-[ 4684, "fxb" ],
-[ 4685, "fxc" ],
-[ 4686, "fxd" ],
-[ 4687, "fxe" ],
-[ 4688, "fxf" ],
-[ 4689, "fxg" ],
-[ 4690, "fxh" ],
-[ 4691, "fxi" ],
-[ 4692, "fxj" ],
-[ 4693, "fxk" ],
-[ 4694, "fxl" ],
-[ 4695, "fxm" ],
-[ 4696, "fxn" ],
-[ 4697, "fxo" ],
-[ 4698, "fxp" ],
-[ 4699, "fxq" ],
-[ 4700, "fxr" ],
-[ 4701, "fxs" ],
-[ 4702, "fxt" ],
-[ 4703, "fxu" ],
-[ 4704, "fxv" ],
-[ 4705, "fxw" ],
-[ 4706, "fxx" ],
-[ 4707, "fxy" ],
-[ 4708, "fxz" ],
-[ 4709, "fya" ],
-[ 4710, "fyb" ],
-[ 4711, "fyc" ],
-[ 4712, "fyd" ],
-[ 4713, "fye" ],
-[ 4714, "fyf" ],
-[ 4715, "fyg" ],
-[ 4716, "fyh" ],
-[ 4717, "fyi" ],
-[ 4718, "fyj" ],
-[ 4719, "fyk" ],
-[ 4720, "fyl" ],
-[ 4721, "fym" ],
-[ 4722, "fyn" ],
-[ 4723, "fyo" ],
-[ 4724, "fyp" ],
-[ 4725, "fyq" ],
-[ 4726, "fyr" ],
-[ 4727, "fys" ],
-[ 4728, "fyt" ],
-[ 4729, "fyu" ],
-[ 4730, "fyv" ],
-[ 4731, "fyw" ],
-[ 4732, "fyx" ],
-[ 4733, "fyy" ],
-[ 4734, "fyz" ],
-[ 4735, "fza" ],
-[ 4736, "fzb" ],
-[ 4737, "fzc" ],
-[ 4738, "fzd" ],
-[ 4739, "fze" ],
-[ 4740, "fzf" ],
-[ 4741, "fzg" ],
-[ 4742, "fzh" ],
-[ 4743, "fzi" ],
-[ 4744, "fzj" ],
-[ 4745, "fzk" ],
-[ 4746, "fzl" ],
-[ 4747, "fzm" ],
-[ 4748, "fzn" ],
-[ 4749, "fzo" ],
-[ 4750, "fzp" ],
-[ 4751, "fzq" ],
-[ 4752, "fzr" ],
-[ 4753, "fzs" ],
-[ 4754, "fzt" ],
-[ 4755, "fzu" ],
-[ 4756, "fzv" ],
-[ 4757, "fzw" ],
-[ 4758, "fzx" ],
-[ 4759, "fzy" ],
-[ 4760, "fzz" ],
-[ 4761, "gaa" ],
-[ 4762, "gab" ],
-[ 4763, "gac" ],
-[ 4764, "gad" ],
-[ 4765, "gae" ],
-[ 4766, "gaf" ],
-[ 4767, "gag" ],
-[ 4768, "gah" ],
-[ 4769, "gai" ],
-[ 4770, "gaj" ],
-[ 4771, "gak" ],
-[ 4772, "gal" ],
-[ 4773, "gam" ],
-[ 4774, "gan" ],
-[ 4775, "gao" ],
-[ 4776, "gap" ],
-[ 4777, "gaq" ],
-[ 4778, "gar" ],
-[ 4779, "gas" ],
-[ 4780, "gat" ],
-[ 4781, "gau" ],
-[ 4782, "gav" ],
-[ 4783, "gaw" ],
-[ 4784, "gax" ],
-[ 4785, "gay" ],
-[ 4786, "gaz" ],
-[ 4787, "gba" ],
-[ 4788, "gbb" ],
-[ 4789, "gbc" ],
-[ 4790, "gbd" ],
-[ 4791, "gbe" ],
-[ 4792, "gbf" ],
-[ 4793, "gbg" ],
-[ 4794, "gbh" ],
-[ 4795, "gbi" ],
-[ 4796, "gbj" ],
-[ 4797, "gbk" ],
-[ 4798, "gbl" ],
-[ 4799, "gbm" ],
-[ 4800, "gbn" ],
-[ 4801, "gbo" ],
-[ 4802, "gbp" ],
-[ 4803, "gbq" ],
-[ 4804, "gbr" ],
-[ 4805, "gbs" ],
-[ 4806, "gbt" ],
-[ 4807, "gbu" ],
-[ 4808, "gbv" ],
-[ 4809, "gbw" ],
-[ 4810, "gbx" ],
-[ 4811, "gby" ],
-[ 4812, "gbz" ],
-[ 4813, "gca" ],
-[ 4814, "gcb" ],
-[ 4815, "gcc" ],
-[ 4816, "gcd" ],
-[ 4817, "gce" ],
-[ 4818, "gcf" ],
-[ 4819, "gcg" ],
-[ 4820, "gch" ],
-[ 4821, "gci" ],
-[ 4822, "gcj" ],
-[ 4823, "gck" ],
-[ 4824, "gcl" ],
-[ 4825, "gcm" ],
-[ 4826, "gcn" ],
-[ 4827, "gco" ],
-[ 4828, "gcp" ],
-[ 4829, "gcq" ],
-[ 4830, "gcr" ],
-[ 4831, "gcs" ],
-[ 4832, "gct" ],
-[ 4833, "gcu" ],
-[ 4834, "gcv" ],
-[ 4835, "gcw" ],
-[ 4836, "gcx" ],
-[ 4837, "gcy" ],
-[ 4838, "gcz" ],
-[ 4839, "gda" ],
-[ 4840, "gdb" ],
-[ 4841, "gdc" ],
-[ 4842, "gdd" ],
-[ 4843, "gde" ],
-[ 4844, "gdf" ],
-[ 4845, "gdg" ],
-[ 4846, "gdh" ],
-[ 4847, "gdi" ],
-[ 4848, "gdj" ],
-[ 4849, "gdk" ],
-[ 4850, "gdl" ],
-[ 4851, "gdm" ],
-[ 4852, "gdn" ],
-[ 4853, "gdo" ],
-[ 4854, "gdp" ],
-[ 4855, "gdq" ],
-[ 4856, "gdr" ],
-[ 4857, "gds" ],
-[ 4858, "gdt" ],
-[ 4859, "gdu" ],
-[ 4860, "gdv" ],
-[ 4861, "gdw" ],
-[ 4862, "gdx" ],
-[ 4863, "gdy" ],
-[ 4864, "gdz" ],
-[ 4865, "gea" ],
-[ 4866, "geb" ],
-[ 4867, "gec" ],
-[ 4868, "ged" ],
-[ 4869, "gee" ],
-[ 4870, "gef" ],
-[ 4871, "geg" ],
-[ 4872, "geh" ],
-[ 4873, "gei" ],
-[ 4874, "gej" ],
-[ 4875, "gek" ],
-[ 4876, "gel" ],
-[ 4877, "gem" ],
-[ 4878, "gen" ],
-[ 4879, "geo" ],
-[ 4880, "gep" ],
-[ 4881, "geq" ],
-[ 4882, "ger" ],
-[ 4883, "ges" ],
-[ 4884, "get" ],
-[ 4885, "geu" ],
-[ 4886, "gev" ],
-[ 4887, "gew" ],
-[ 4888, "gex" ],
-[ 4889, "gey" ],
-[ 4890, "gez" ],
-[ 4891, "gfa" ],
-[ 4892, "gfb" ],
-[ 4893, "gfc" ],
-[ 4894, "gfd" ],
-[ 4895, "gfe" ],
-[ 4896, "gff" ],
-[ 4897, "gfg" ],
-[ 4898, "gfh" ],
-[ 4899, "gfi" ],
-[ 4900, "gfj" ],
-[ 4901, "gfk" ],
-[ 4902, "gfl" ],
-[ 4903, "gfm" ],
-[ 4904, "gfn" ],
-[ 4905, "gfo" ],
-[ 4906, "gfp" ],
-[ 4907, "gfq" ],
-[ 4908, "gfr" ],
-[ 4909, "gfs" ],
-[ 4910, "gft" ],
-[ 4911, "gfu" ],
-[ 4912, "gfv" ],
-[ 4913, "gfw" ],
-[ 4914, "gfx" ],
-[ 4915, "gfy" ],
-[ 4916, "gfz" ],
-[ 4917, "gga" ],
-[ 4918, "ggb" ],
-[ 4919, "ggc" ],
-[ 4920, "ggd" ],
-[ 4921, "gge" ],
-[ 4922, "ggf" ],
-[ 4923, "ggg" ],
-[ 4924, "ggh" ],
-[ 4925, "ggi" ],
-[ 4926, "ggj" ],
-[ 4927, "ggk" ],
-[ 4928, "ggl" ],
-[ 4929, "ggm" ],
-[ 4930, "ggn" ],
-[ 4931, "ggo" ],
-[ 4932, "ggp" ],
-[ 4933, "ggq" ],
-[ 4934, "ggr" ],
-[ 4935, "ggs" ],
-[ 4936, "ggt" ],
-[ 4937, "ggu" ],
-[ 4938, "ggv" ],
-[ 4939, "ggw" ],
-[ 4940, "ggx" ],
-[ 4941, "ggy" ],
-[ 4942, "ggz" ],
-[ 4943, "gha" ],
-[ 4944, "ghb" ],
-[ 4945, "ghc" ],
-[ 4946, "ghd" ],
-[ 4947, "ghe" ],
-[ 4948, "ghf" ],
-[ 4949, "ghg" ],
-[ 4950, "ghh" ],
-[ 4951, "ghi" ],
-[ 4952, "ghj" ],
-[ 4953, "ghk" ],
-[ 4954, "ghl" ],
-[ 4955, "ghm" ],
-[ 4956, "ghn" ],
-[ 4957, "gho" ],
-[ 4958, "ghp" ],
-[ 4959, "ghq" ],
-[ 4960, "ghr" ],
-[ 4961, "ghs" ],
-[ 4962, "ght" ],
-[ 4963, "ghu" ],
-[ 4964, "ghv" ],
-[ 4965, "ghw" ],
-[ 4966, "ghx" ],
-[ 4967, "ghy" ],
-[ 4968, "ghz" ],
-[ 4969, "gia" ],
-[ 4970, "gib" ],
-[ 4971, "gic" ],
-[ 4972, "gid" ],
-[ 4973, "gie" ],
-[ 4974, "gif" ],
-[ 4975, "gig" ],
-[ 4976, "gih" ],
-[ 4977, "gii" ],
-[ 4978, "gij" ],
-[ 4979, "gik" ],
-[ 4980, "gil" ],
-[ 4981, "gim" ],
-[ 4982, "gin" ],
-[ 4983, "gio" ],
-[ 4984, "gip" ],
-[ 4985, "giq" ],
-[ 4986, "gir" ],
-[ 4987, "gis" ],
-[ 4988, "git" ],
-[ 4989, "giu" ],
-[ 4990, "giv" ],
-[ 4991, "giw" ],
-[ 4992, "gix" ],
-[ 4993, "giy" ],
-[ 4994, "giz" ],
-[ 4995, "gja" ],
-[ 4996, "gjb" ],
-[ 4997, "gjc" ],
-[ 4998, "gjd" ],
-[ 4999, "gje" ],
-[ 5000, "gjf" ],
-[ 5001, "gjg" ],
-[ 5002, "gjh" ],
-[ 5003, "gji" ],
-[ 5004, "gjj" ],
-[ 5005, "gjk" ],
-[ 5006, "gjl" ],
-[ 5007, "gjm" ],
-[ 5008, "gjn" ],
-[ 5009, "gjo" ],
-[ 5010, "gjp" ],
-[ 5011, "gjq" ],
-[ 5012, "gjr" ],
-[ 5013, "gjs" ],
-[ 5014, "gjt" ],
-[ 5015, "gju" ],
-[ 5016, "gjv" ],
-[ 5017, "gjw" ],
-[ 5018, "gjx" ],
-[ 5019, "gjy" ],
-[ 5020, "gjz" ],
-[ 5021, "gka" ],
-[ 5022, "gkb" ],
-[ 5023, "gkc" ],
-[ 5024, "gkd" ],
-[ 5025, "gke" ],
-[ 5026, "gkf" ],
-[ 5027, "gkg" ],
-[ 5028, "gkh" ],
-[ 5029, "gki" ],
-[ 5030, "gkj" ],
-[ 5031, "gkk" ],
-[ 5032, "gkl" ],
-[ 5033, "gkm" ],
-[ 5034, "gkn" ],
-[ 5035, "gko" ],
-[ 5036, "gkp" ],
-[ 5037, "gkq" ],
-[ 5038, "gkr" ],
-[ 5039, "gks" ],
-[ 5040, "gkt" ],
-[ 5041, "gku" ],
-[ 5042, "gkv" ],
-[ 5043, "gkw" ],
-[ 5044, "gkx" ],
-[ 5045, "gky" ],
-[ 5046, "gkz" ],
-[ 5047, "gla" ],
-[ 5048, "glb" ],
-[ 5049, "glc" ],
-[ 5050, "gld" ],
-[ 5051, "gle" ],
-[ 5052, "glf" ],
-[ 5053, "glg" ],
-[ 5054, "glh" ],
-[ 5055, "gli" ],
-[ 5056, "glj" ],
-[ 5057, "glk" ],
-[ 5058, "gll" ],
-[ 5059, "glm" ],
-[ 5060, "gln" ],
-[ 5061, "glo" ],
-[ 5062, "glp" ],
-[ 5063, "glq" ],
-[ 5064, "glr" ],
-[ 5065, "gls" ],
-[ 5066, "glt" ],
-[ 5067, "glu" ],
-[ 5068, "glv" ],
-[ 5069, "glw" ],
-[ 5070, "glx" ],
-[ 5071, "gly" ],
-[ 5072, "glz" ],
-[ 5073, "gma" ],
-[ 5074, "gmb" ],
-[ 5075, "gmc" ],
-[ 5076, "gmd" ],
-[ 5077, "gme" ],
-[ 5078, "gmf" ],
-[ 5079, "gmg" ],
-[ 5080, "gmh" ],
-[ 5081, "gmi" ],
-[ 5082, "gmj" ],
-[ 5083, "gmk" ],
-[ 5084, "gml" ],
-[ 5085, "gmm" ],
-[ 5086, "gmn" ],
-[ 5087, "gmo" ],
-[ 5088, "gmp" ],
-[ 5089, "gmq" ],
-[ 5090, "gmr" ],
-[ 5091, "gms" ],
-[ 5092, "gmt" ],
-[ 5093, "gmu" ],
-[ 5094, "gmv" ],
-[ 5095, "gmw" ],
-[ 5096, "gmx" ],
-[ 5097, "gmy" ],
-[ 5098, "gmz" ],
-[ 5099, "gna" ],
-[ 5100, "gnb" ],
-[ 5101, "gnc" ],
-[ 5102, "gnd" ],
-[ 5103, "gne" ],
-[ 5104, "gnf" ],
-[ 5105, "gng" ],
-[ 5106, "gnh" ],
-[ 5107, "gni" ],
-[ 5108, "gnj" ],
-[ 5109, "gnk" ],
-[ 5110, "gnl" ],
-[ 5111, "gnm" ],
-[ 5112, "gnn" ],
-[ 5113, "gno" ],
-[ 5114, "gnp" ],
-[ 5115, "gnq" ],
-[ 5116, "gnr" ],
-[ 5117, "gns" ],
-[ 5118, "gnt" ],
-[ 5119, "gnu" ],
-[ 5120, "gnv" ],
-[ 5121, "gnw" ],
-[ 5122, "gnx" ],
-[ 5123, "gny" ],
-[ 5124, "gnz" ],
-[ 5125, "goa" ],
-[ 5126, "gob" ],
-[ 5127, "goc" ],
-[ 5128, "god" ],
-[ 5129, "goe" ],
-[ 5130, "gof" ],
-[ 5131, "gog" ],
-[ 5132, "goh" ],
-[ 5133, "goi" ],
-[ 5134, "goj" ],
-[ 5135, "gok" ],
-[ 5136, "gol" ],
-[ 5137, "gom" ],
-[ 5138, "gon" ],
-[ 5139, "goo" ],
-[ 5140, "gop" ],
-[ 5141, "goq" ],
-[ 5142, "gor" ],
-[ 5143, "gos" ],
-[ 5144, "got" ],
-[ 5145, "gou" ],
-[ 5146, "gov" ],
-[ 5147, "gow" ],
-[ 5148, "gox" ],
-[ 5149, "goy" ],
-[ 5150, "goz" ],
-[ 5151, "gpa" ],
-[ 5152, "gpb" ],
-[ 5153, "gpc" ],
-[ 5154, "gpd" ],
-[ 5155, "gpe" ],
-[ 5156, "gpf" ],
-[ 5157, "gpg" ],
-[ 5158, "gph" ],
-[ 5159, "gpi" ],
-[ 5160, "gpj" ],
-[ 5161, "gpk" ],
-[ 5162, "gpl" ],
-[ 5163, "gpm" ],
-[ 5164, "gpn" ],
-[ 5165, "gpo" ],
-[ 5166, "gpp" ],
-[ 5167, "gpq" ],
-[ 5168, "gpr" ],
-[ 5169, "gps" ],
-[ 5170, "gpt" ],
-[ 5171, "gpu" ],
-[ 5172, "gpv" ],
-[ 5173, "gpw" ],
-[ 5174, "gpx" ],
-[ 5175, "gpy" ],
-[ 5176, "gpz" ],
-[ 5177, "gqa" ],
-[ 5178, "gqb" ],
-[ 5179, "gqc" ],
-[ 5180, "gqd" ],
-[ 5181, "gqe" ],
-[ 5182, "gqf" ],
-[ 5183, "gqg" ],
-[ 5184, "gqh" ],
-[ 5185, "gqi" ],
-[ 5186, "gqj" ],
-[ 5187, "gqk" ],
-[ 5188, "gql" ],
-[ 5189, "gqm" ],
-[ 5190, "gqn" ],
-[ 5191, "gqo" ],
-[ 5192, "gqp" ],
-[ 5193, "gqq" ],
-[ 5194, "gqr" ],
-[ 5195, "gqs" ],
-[ 5196, "gqt" ],
-[ 5197, "gqu" ],
-[ 5198, "gqv" ],
-[ 5199, "gqw" ],
-[ 5200, "gqx" ],
-[ 5201, "gqy" ],
-[ 5202, "gqz" ],
-[ 5203, "gra" ],
-[ 5204, "grb" ],
-[ 5205, "grc" ],
-[ 5206, "grd" ],
-[ 5207, "gre" ],
-[ 5208, "grf" ],
-[ 5209, "grg" ],
-[ 5210, "grh" ],
-[ 5211, "gri" ],
-[ 5212, "grj" ],
-[ 5213, "grk" ],
-[ 5214, "grl" ],
-[ 5215, "grm" ],
-[ 5216, "grn" ],
-[ 5217, "gro" ],
-[ 5218, "grp" ],
-[ 5219, "grq" ],
-[ 5220, "grr" ],
-[ 5221, "grs" ],
-[ 5222, "grt" ],
-[ 5223, "gru" ],
-[ 5224, "grv" ],
-[ 5225, "grw" ],
-[ 5226, "grx" ],
-[ 5227, "gry" ],
-[ 5228, "grz" ],
-[ 5229, "gsa" ],
-[ 5230, "gsb" ],
-[ 5231, "gsc" ],
-[ 5232, "gsd" ],
-[ 5233, "gse" ],
-[ 5234, "gsf" ],
-[ 5235, "gsg" ],
-[ 5236, "gsh" ],
-[ 5237, "gsi" ],
-[ 5238, "gsj" ],
-[ 5239, "gsk" ],
-[ 5240, "gsl" ],
-[ 5241, "gsm" ],
-[ 5242, "gsn" ],
-[ 5243, "gso" ],
-[ 5244, "gsp" ],
-[ 5245, "gsq" ],
-[ 5246, "gsr" ],
-[ 5247, "gss" ],
-[ 5248, "gst" ],
-[ 5249, "gsu" ],
-[ 5250, "gsv" ],
-[ 5251, "gsw" ],
-[ 5252, "gsx" ],
-[ 5253, "gsy" ],
-[ 5254, "gsz" ],
-[ 5255, "gta" ],
-[ 5256, "gtb" ],
-[ 5257, "gtc" ],
-[ 5258, "gtd" ],
-[ 5259, "gte" ],
-[ 5260, "gtf" ],
-[ 5261, "gtg" ],
-[ 5262, "gth" ],
-[ 5263, "gti" ],
-[ 5264, "gtj" ],
-[ 5265, "gtk" ],
-[ 5266, "gtl" ],
-[ 5267, "gtm" ],
-[ 5268, "gtn" ],
-[ 5269, "gto" ],
-[ 5270, "gtp" ],
-[ 5271, "gtq" ],
-[ 5272, "gtr" ],
-[ 5273, "gts" ],
-[ 5274, "gtt" ],
-[ 5275, "gtu" ],
-[ 5276, "gtv" ],
-[ 5277, "gtw" ],
-[ 5278, "gtx" ],
-[ 5279, "gty" ],
-[ 5280, "gtz" ],
-[ 5281, "gua" ],
-[ 5282, "gub" ],
-[ 5283, "guc" ],
-[ 5284, "gud" ],
-[ 5285, "gue" ],
-[ 5286, "guf" ],
-[ 5287, "gug" ],
-[ 5288, "guh" ],
-[ 5289, "gui" ],
-[ 5290, "guj" ],
-[ 5291, "guk" ],
-[ 5292, "gul" ],
-[ 5293, "gum" ],
-[ 5294, "gun" ],
-[ 5295, "guo" ],
-[ 5296, "gup" ],
-[ 5297, "guq" ],
-[ 5298, "gur" ],
-[ 5299, "gus" ],
-[ 5300, "gut" ],
-[ 5301, "guu" ],
-[ 5302, "guv" ],
-[ 5303, "guw" ],
-[ 5304, "gux" ],
-[ 5305, "guy" ],
-[ 5306, "guz" ],
-[ 5307, "gva" ],
-[ 5308, "gvb" ],
-[ 5309, "gvc" ],
-[ 5310, "gvd" ],
-[ 5311, "gve" ],
-[ 5312, "gvf" ],
-[ 5313, "gvg" ],
-[ 5314, "gvh" ],
-[ 5315, "gvi" ],
-[ 5316, "gvj" ],
-[ 5317, "gvk" ],
-[ 5318, "gvl" ],
-[ 5319, "gvm" ],
-[ 5320, "gvn" ],
-[ 5321, "gvo" ],
-[ 5322, "gvp" ],
-[ 5323, "gvq" ],
-[ 5324, "gvr" ],
-[ 5325, "gvs" ],
-[ 5326, "gvt" ],
-[ 5327, "gvu" ],
-[ 5328, "gvv" ],
-[ 5329, "gvw" ],
-[ 5330, "gvx" ],
-[ 5331, "gvy" ],
-[ 5332, "gvz" ],
-[ 5333, "gwa" ],
-[ 5334, "gwb" ],
-[ 5335, "gwc" ],
-[ 5336, "gwd" ],
-[ 5337, "gwe" ],
-[ 5338, "gwf" ],
-[ 5339, "gwg" ],
-[ 5340, "gwh" ],
-[ 5341, "gwi" ],
-[ 5342, "gwj" ],
-[ 5343, "gwk" ],
-[ 5344, "gwl" ],
-[ 5345, "gwm" ],
-[ 5346, "gwn" ],
-[ 5347, "gwo" ],
-[ 5348, "gwp" ],
-[ 5349, "gwq" ],
-[ 5350, "gwr" ],
-[ 5351, "gws" ],
-[ 5352, "gwt" ],
-[ 5353, "gwu" ],
-[ 5354, "gwv" ],
-[ 5355, "gww" ],
-[ 5356, "gwx" ],
-[ 5357, "gwy" ],
-[ 5358, "gwz" ],
-[ 5359, "gxa" ],
-[ 5360, "gxb" ],
-[ 5361, "gxc" ],
-[ 5362, "gxd" ],
-[ 5363, "gxe" ],
-[ 5364, "gxf" ],
-[ 5365, "gxg" ],
-[ 5366, "gxh" ],
-[ 5367, "gxi" ],
-[ 5368, "gxj" ],
-[ 5369, "gxk" ],
-[ 5370, "gxl" ],
-[ 5371, "gxm" ],
-[ 5372, "gxn" ],
-[ 5373, "gxo" ],
-[ 5374, "gxp" ],
-[ 5375, "gxq" ],
-[ 5376, "gxr" ],
-[ 5377, "gxs" ],
-[ 5378, "gxt" ],
-[ 5379, "gxu" ],
-[ 5380, "gxv" ],
-[ 5381, "gxw" ],
-[ 5382, "gxx" ],
-[ 5383, "gxy" ],
-[ 5384, "gxz" ],
-[ 5385, "gya" ],
-[ 5386, "gyb" ],
-[ 5387, "gyc" ],
-[ 5388, "gyd" ],
-[ 5389, "gye" ],
-[ 5390, "gyf" ],
-[ 5391, "gyg" ],
-[ 5392, "gyh" ],
-[ 5393, "gyi" ],
-[ 5394, "gyj" ],
-[ 5395, "gyk" ],
-[ 5396, "gyl" ],
-[ 5397, "gym" ],
-[ 5398, "gyn" ],
-[ 5399, "gyo" ],
-[ 5400, "gyp" ],
-[ 5401, "gyq" ],
-[ 5402, "gyr" ],
-[ 5403, "gys" ],
-[ 5404, "gyt" ],
-[ 5405, "gyu" ],
-[ 5406, "gyv" ],
-[ 5407, "gyw" ],
-[ 5408, "gyx" ],
-[ 5409, "gyy" ],
-[ 5410, "gyz" ],
-[ 5411, "gza" ],
-[ 5412, "gzb" ],
-[ 5413, "gzc" ],
-[ 5414, "gzd" ],
-[ 5415, "gze" ],
-[ 5416, "gzf" ],
-[ 5417, "gzg" ],
-[ 5418, "gzh" ],
-[ 5419, "gzi" ],
-[ 5420, "gzj" ],
-[ 5421, "gzk" ],
-[ 5422, "gzl" ],
-[ 5423, "gzm" ],
-[ 5424, "gzn" ],
-[ 5425, "gzo" ],
-[ 5426, "gzp" ],
-[ 5427, "gzq" ],
-[ 5428, "gzr" ],
-[ 5429, "gzs" ],
-[ 5430, "gzt" ],
-[ 5431, "gzu" ],
-[ 5432, "gzv" ],
-[ 5433, "gzw" ],
-[ 5434, "gzx" ],
-[ 5435, "gzy" ],
-[ 5436, "gzz" ],
-[ 5437, "haa" ],
-[ 5438, "hab" ],
-[ 5439, "hac" ],
-[ 5440, "had" ],
-[ 5441, "hae" ],
-[ 5442, "haf" ],
-[ 5443, "hag" ],
-[ 5444, "hah" ],
-[ 5445, "hai" ],
-[ 5446, "haj" ],
-[ 5447, "hak" ],
-[ 5448, "hal" ],
-[ 5449, "ham" ],
-[ 5450, "han" ],
-[ 5451, "hao" ],
-[ 5452, "hap" ],
-[ 5453, "haq" ],
-[ 5454, "har" ],
-[ 5455, "has" ],
-[ 5456, "hat" ],
-[ 5457, "hau" ],
-[ 5458, "hav" ],
-[ 5459, "haw" ],
-[ 5460, "hax" ],
-[ 5461, "hay" ],
-[ 5462, "haz" ],
-[ 5463, "hba" ],
-[ 5464, "hbb" ],
-[ 5465, "hbc" ],
-[ 5466, "hbd" ],
-[ 5467, "hbe" ],
-[ 5468, "hbf" ],
-[ 5469, "hbg" ],
-[ 5470, "hbh" ],
-[ 5471, "hbi" ],
-[ 5472, "hbj" ],
-[ 5473, "hbk" ],
-[ 5474, "hbl" ],
-[ 5475, "hbm" ],
-[ 5476, "hbn" ],
-[ 5477, "hbo" ],
-[ 5478, "hbp" ],
-[ 5479, "hbq" ],
-[ 5480, "hbr" ],
-[ 5481, "hbs" ],
-[ 5482, "hbt" ],
-[ 5483, "hbu" ],
-[ 5484, "hbv" ],
-[ 5485, "hbw" ],
-[ 5486, "hbx" ],
-[ 5487, "hby" ],
-[ 5488, "hbz" ],
-[ 5489, "hca" ],
-[ 5490, "hcb" ],
-[ 5491, "hcc" ],
-[ 5492, "hcd" ],
-[ 5493, "hce" ],
-[ 5494, "hcf" ],
-[ 5495, "hcg" ],
-[ 5496, "hch" ],
-[ 5497, "hci" ],
-[ 5498, "hcj" ],
-[ 5499, "hck" ],
-[ 5500, "hcl" ],
-[ 5501, "hcm" ],
-[ 5502, "hcn" ],
-[ 5503, "hco" ],
-[ 5504, "hcp" ],
-[ 5505, "hcq" ],
-[ 5506, "hcr" ],
-[ 5507, "hcs" ],
-[ 5508, "hct" ],
-[ 5509, "hcu" ],
-[ 5510, "hcv" ],
-[ 5511, "hcw" ],
-[ 5512, "hcx" ],
-[ 5513, "hcy" ],
-[ 5514, "hcz" ],
-[ 5515, "hda" ],
-[ 5516, "hdb" ],
-[ 5517, "hdc" ],
-[ 5518, "hdd" ],
-[ 5519, "hde" ],
-[ 5520, "hdf" ],
-[ 5521, "hdg" ],
-[ 5522, "hdh" ],
-[ 5523, "hdi" ],
-[ 5524, "hdj" ],
-[ 5525, "hdk" ],
-[ 5526, "hdl" ],
-[ 5527, "hdm" ],
-[ 5528, "hdn" ],
-[ 5529, "hdo" ],
-[ 5530, "hdp" ],
-[ 5531, "hdq" ],
-[ 5532, "hdr" ],
-[ 5533, "hds" ],
-[ 5534, "hdt" ],
-[ 5535, "hdu" ],
-[ 5536, "hdv" ],
-[ 5537, "hdw" ],
-[ 5538, "hdx" ],
-[ 5539, "hdy" ],
-[ 5540, "hdz" ],
-[ 5541, "hea" ],
-[ 5542, "heb" ],
-[ 5543, "hec" ],
-[ 5544, "hed" ],
-[ 5545, "hee" ],
-[ 5546, "hef" ],
-[ 5547, "heg" ],
-[ 5548, "heh" ],
-[ 5549, "hei" ],
-[ 5550, "hej" ],
-[ 5551, "hek" ],
-[ 5552, "hel" ],
-[ 5553, "hem" ],
-[ 5554, "hen" ],
-[ 5555, "heo" ],
-[ 5556, "hep" ],
-[ 5557, "heq" ],
-[ 5558, "her" ],
-[ 5559, "hes" ],
-[ 5560, "het" ],
-[ 5561, "heu" ],
-[ 5562, "hev" ],
-[ 5563, "hew" ],
-[ 5564, "hex" ],
-[ 5565, "hey" ],
-[ 5566, "hez" ],
-[ 5567, "hfa" ],
-[ 5568, "hfb" ],
-[ 5569, "hfc" ],
-[ 5570, "hfd" ],
-[ 5571, "hfe" ],
-[ 5572, "hff" ],
-[ 5573, "hfg" ],
-[ 5574, "hfh" ],
-[ 5575, "hfi" ],
-[ 5576, "hfj" ],
-[ 5577, "hfk" ],
-[ 5578, "hfl" ],
-[ 5579, "hfm" ],
-[ 5580, "hfn" ],
-[ 5581, "hfo" ],
-[ 5582, "hfp" ],
-[ 5583, "hfq" ],
-[ 5584, "hfr" ],
-[ 5585, "hfs" ],
-[ 5586, "hft" ],
-[ 5587, "hfu" ],
-[ 5588, "hfv" ],
-[ 5589, "hfw" ],
-[ 5590, "hfx" ],
-[ 5591, "hfy" ],
-[ 5592, "hfz" ],
-[ 5593, "hga" ],
-[ 5594, "hgb" ],
-[ 5595, "hgc" ],
-[ 5596, "hgd" ],
-[ 5597, "hge" ],
-[ 5598, "hgf" ],
-[ 5599, "hgg" ],
-[ 5600, "hgh" ],
-[ 5601, "hgi" ],
-[ 5602, "hgj" ],
-[ 5603, "hgk" ],
-[ 5604, "hgl" ],
-[ 5605, "hgm" ],
-[ 5606, "hgn" ],
-[ 5607, "hgo" ],
-[ 5608, "hgp" ],
-[ 5609, "hgq" ],
-[ 5610, "hgr" ],
-[ 5611, "hgs" ],
-[ 5612, "hgt" ],
-[ 5613, "hgu" ],
-[ 5614, "hgv" ],
-[ 5615, "hgw" ],
-[ 5616, "hgx" ],
-[ 5617, "hgy" ],
-[ 5618, "hgz" ],
-[ 5619, "hha" ],
-[ 5620, "hhb" ],
-[ 5621, "hhc" ],
-[ 5622, "hhd" ],
-[ 5623, "hhe" ],
-[ 5624, "hhf" ],
-[ 5625, "hhg" ],
-[ 5626, "hhh" ],
-[ 5627, "hhi" ],
-[ 5628, "hhj" ],
-[ 5629, "hhk" ],
-[ 5630, "hhl" ],
-[ 5631, "hhm" ],
-[ 5632, "hhn" ],
-[ 5633, "hho" ],
-[ 5634, "hhp" ],
-[ 5635, "hhq" ],
-[ 5636, "hhr" ],
-[ 5637, "hhs" ],
-[ 5638, "hht" ],
-[ 5639, "hhu" ],
-[ 5640, "hhv" ],
-[ 5641, "hhw" ],
-[ 5642, "hhx" ],
-[ 5643, "hhy" ],
-[ 5644, "hhz" ],
-[ 5645, "hia" ],
-[ 5646, "hib" ],
-[ 5647, "hic" ],
-[ 5648, "hid" ],
-[ 5649, "hie" ],
-[ 5650, "hif" ],
-[ 5651, "hig" ],
-[ 5652, "hih" ],
-[ 5653, "hii" ],
-[ 5654, "hij" ],
-[ 5655, "hik" ],
-[ 5656, "hil" ],
-[ 5657, "him" ],
-[ 5658, "hin" ],
-[ 5659, "hio" ],
-[ 5660, "hip" ],
-[ 5661, "hiq" ],
-[ 5662, "hir" ],
-[ 5663, "his" ],
-[ 5664, "hit" ],
-[ 5665, "hiu" ],
-[ 5666, "hiv" ],
-[ 5667, "hiw" ],
-[ 5668, "hix" ],
-[ 5669, "hiy" ],
-[ 5670, "hiz" ],
-[ 5671, "hja" ],
-[ 5672, "hjb" ],
-[ 5673, "hjc" ],
-[ 5674, "hjd" ],
-[ 5675, "hje" ],
-[ 5676, "hjf" ],
-[ 5677, "hjg" ],
-[ 5678, "hjh" ],
-[ 5679, "hji" ],
-[ 5680, "hjj" ],
-[ 5681, "hjk" ],
-[ 5682, "hjl" ],
-[ 5683, "hjm" ],
-[ 5684, "hjn" ],
-[ 5685, "hjo" ],
-[ 5686, "hjp" ],
-[ 5687, "hjq" ],
-[ 5688, "hjr" ],
-[ 5689, "hjs" ],
-[ 5690, "hjt" ],
-[ 5691, "hju" ],
-[ 5692, "hjv" ],
-[ 5693, "hjw" ],
-[ 5694, "hjx" ],
-[ 5695, "hjy" ],
-[ 5696, "hjz" ],
-[ 5697, "hka" ],
-[ 5698, "hkb" ],
-[ 5699, "hkc" ],
-[ 5700, "hkd" ],
-[ 5701, "hke" ],
-[ 5702, "hkf" ],
-[ 5703, "hkg" ],
-[ 5704, "hkh" ],
-[ 5705, "hki" ],
-[ 5706, "hkj" ],
-[ 5707, "hkk" ],
-[ 5708, "hkl" ],
-[ 5709, "hkm" ],
-[ 5710, "hkn" ],
-[ 5711, "hko" ],
-[ 5712, "hkp" ],
-[ 5713, "hkq" ],
-[ 5714, "hkr" ],
-[ 5715, "hks" ],
-[ 5716, "hkt" ],
-[ 5717, "hku" ],
-[ 5718, "hkv" ],
-[ 5719, "hkw" ],
-[ 5720, "hkx" ],
-[ 5721, "hky" ],
-[ 5722, "hkz" ],
-[ 5723, "hla" ],
-[ 5724, "hlb" ],
-[ 5725, "hlc" ],
-[ 5726, "hld" ],
-[ 5727, "hle" ],
-[ 5728, "hlf" ],
-[ 5729, "hlg" ],
-[ 5730, "hlh" ],
-[ 5731, "hli" ],
-[ 5732, "hlj" ],
-[ 5733, "hlk" ],
-[ 5734, "hll" ],
-[ 5735, "hlm" ],
-[ 5736, "hln" ],
-[ 5737, "hlo" ],
-[ 5738, "hlp" ],
-[ 5739, "hlq" ],
-[ 5740, "hlr" ],
-[ 5741, "hls" ],
-[ 5742, "hlt" ],
-[ 5743, "hlu" ],
-[ 5744, "hlv" ],
-[ 5745, "hlw" ],
-[ 5746, "hlx" ],
-[ 5747, "hly" ],
-[ 5748, "hlz" ],
-[ 5749, "hma" ],
-[ 5750, "hmb" ],
-[ 5751, "hmc" ],
-[ 5752, "hmd" ],
-[ 5753, "hme" ],
-[ 5754, "hmf" ],
-[ 5755, "hmg" ],
-[ 5756, "hmh" ],
-[ 5757, "hmi" ],
-[ 5758, "hmj" ],
-[ 5759, "hmk" ],
-[ 5760, "hml" ],
-[ 5761, "hmm" ],
-[ 5762, "hmn" ],
-[ 5763, "hmo" ],
-[ 5764, "hmp" ],
-[ 5765, "hmq" ],
-[ 5766, "hmr" ],
-[ 5767, "hms" ],
-[ 5768, "hmt" ],
-[ 5769, "hmu" ],
-[ 5770, "hmv" ],
-[ 5771, "hmw" ],
-[ 5772, "hmx" ],
-[ 5773, "hmy" ],
-[ 5774, "hmz" ],
-[ 5775, "hna" ],
-[ 5776, "hnb" ],
-[ 5777, "hnc" ],
-[ 5778, "hnd" ],
-[ 5779, "hne" ],
-[ 5780, "hnf" ],
-[ 5781, "hng" ],
-[ 5782, "hnh" ],
-[ 5783, "hni" ],
-[ 5784, "hnj" ],
-[ 5785, "hnk" ],
-[ 5786, "hnl" ],
-[ 5787, "hnm" ],
-[ 5788, "hnn" ],
-[ 5789, "hno" ],
-[ 5790, "hnp" ],
-[ 5791, "hnq" ],
-[ 5792, "hnr" ],
-[ 5793, "hns" ],
-[ 5794, "hnt" ],
-[ 5795, "hnu" ],
-[ 5796, "hnv" ],
-[ 5797, "hnw" ],
-[ 5798, "hnx" ],
-[ 5799, "hny" ],
-[ 5800, "hnz" ],
-[ 5801, "hoa" ],
-[ 5802, "hob" ],
-[ 5803, "hoc" ],
-[ 5804, "hod" ],
-[ 5805, "hoe" ],
-[ 5806, "hof" ],
-[ 5807, "hog" ],
-[ 5808, "hoh" ],
-[ 5809, "hoi" ],
-[ 5810, "hoj" ],
-[ 5811, "hok" ],
-[ 5812, "hol" ],
-[ 5813, "hom" ],
-[ 5814, "hon" ],
-[ 5815, "hoo" ],
-[ 5816, "hop" ],
-[ 5817, "hoq" ],
-[ 5818, "hor" ],
-[ 5819, "hos" ],
-[ 5820, "hot" ],
-[ 5821, "hou" ],
-[ 5822, "hov" ],
-[ 5823, "how" ],
-[ 5824, "hox" ],
-[ 5825, "hoy" ],
-[ 5826, "hoz" ],
-[ 5827, "hpa" ],
-[ 5828, "hpb" ],
-[ 5829, "hpc" ],
-[ 5830, "hpd" ],
-[ 5831, "hpe" ],
-[ 5832, "hpf" ],
-[ 5833, "hpg" ],
-[ 5834, "hph" ],
-[ 5835, "hpi" ],
-[ 5836, "hpj" ],
-[ 5837, "hpk" ],
-[ 5838, "hpl" ],
-[ 5839, "hpm" ],
-[ 5840, "hpn" ],
-[ 5841, "hpo" ],
-[ 5842, "hpp" ],
-[ 5843, "hpq" ],
-[ 5844, "hpr" ],
-[ 5845, "hps" ],
-[ 5846, "hpt" ],
-[ 5847, "hpu" ],
-[ 5848, "hpv" ],
-[ 5849, "hpw" ],
-[ 5850, "hpx" ],
-[ 5851, "hpy" ],
-[ 5852, "hpz" ],
-[ 5853, "hqa" ],
-[ 5854, "hqb" ],
-[ 5855, "hqc" ],
-[ 5856, "hqd" ],
-[ 5857, "hqe" ],
-[ 5858, "hqf" ],
-[ 5859, "hqg" ],
-[ 5860, "hqh" ],
-[ 5861, "hqi" ],
-[ 5862, "hqj" ],
-[ 5863, "hqk" ],
-[ 5864, "hql" ],
-[ 5865, "hqm" ],
-[ 5866, "hqn" ],
-[ 5867, "hqo" ],
-[ 5868, "hqp" ],
-[ 5869, "hqq" ],
-[ 5870, "hqr" ],
-[ 5871, "hqs" ],
-[ 5872, "hqt" ],
-[ 5873, "hqu" ],
-[ 5874, "hqv" ],
-[ 5875, "hqw" ],
-[ 5876, "hqx" ],
-[ 5877, "hqy" ],
-[ 5878, "hqz" ],
-[ 5879, "hra" ],
-[ 5880, "hrb" ],
-[ 5881, "hrc" ],
-[ 5882, "hrd" ],
-[ 5883, "hre" ],
-[ 5884, "hrf" ],
-[ 5885, "hrg" ],
-[ 5886, "hrh" ],
-[ 5887, "hri" ],
-[ 5888, "hrj" ],
-[ 5889, "hrk" ],
-[ 5890, "hrl" ],
-[ 5891, "hrm" ],
-[ 5892, "hrn" ],
-[ 5893, "hro" ],
-[ 5894, "hrp" ],
-[ 5895, "hrq" ],
-[ 5896, "hrr" ],
-[ 5897, "hrs" ],
-[ 5898, "hrt" ],
-[ 5899, "hru" ],
-[ 5900, "hrv" ],
-[ 5901, "hrw" ],
-[ 5902, "hrx" ],
-[ 5903, "hry" ],
-[ 5904, "hrz" ],
-[ 5905, "hsa" ],
-[ 5906, "hsb" ],
-[ 5907, "hsc" ],
-[ 5908, "hsd" ],
-[ 5909, "hse" ],
-[ 5910, "hsf" ],
-[ 5911, "hsg" ],
-[ 5912, "hsh" ],
-[ 5913, "hsi" ],
-[ 5914, "hsj" ],
-[ 5915, "hsk" ],
-[ 5916, "hsl" ],
-[ 5917, "hsm" ],
-[ 5918, "hsn" ],
-[ 5919, "hso" ],
-[ 5920, "hsp" ],
-[ 5921, "hsq" ],
-[ 5922, "hsr" ],
-[ 5923, "hss" ],
-[ 5924, "hst" ],
-[ 5925, "hsu" ],
-[ 5926, "hsv" ],
-[ 5927, "hsw" ],
-[ 5928, "hsx" ],
-[ 5929, "hsy" ],
-[ 5930, "hsz" ],
-[ 5931, "hta" ],
-[ 5932, "htb" ],
-[ 5933, "htc" ],
-[ 5934, "htd" ],
-[ 5935, "hte" ],
-[ 5936, "htf" ],
-[ 5937, "htg" ],
-[ 5938, "hth" ],
-[ 5939, "hti" ],
-[ 5940, "htj" ],
-[ 5941, "htk" ],
-[ 5942, "htl" ],
-[ 5943, "htm" ],
-[ 5944, "htn" ],
-[ 5945, "hto" ],
-[ 5946, "htp" ],
-[ 5947, "htq" ],
-[ 5948, "htr" ],
-[ 5949, "hts" ],
-[ 5950, "htt" ],
-[ 5951, "htu" ],
-[ 5952, "htv" ],
-[ 5953, "htw" ],
-[ 5954, "htx" ],
-[ 5955, "hty" ],
-[ 5956, "htz" ],
-[ 5957, "hua" ],
-[ 5958, "hub" ],
-[ 5959, "huc" ],
-[ 5960, "hud" ],
-[ 5961, "hue" ],
-[ 5962, "huf" ],
-[ 5963, "hug" ],
-[ 5964, "huh" ],
-[ 5965, "hui" ],
-[ 5966, "huj" ],
-[ 5967, "huk" ],
-[ 5968, "hul" ],
-[ 5969, "hum" ],
-[ 5970, "hun" ],
-[ 5971, "huo" ],
-[ 5972, "hup" ],
-[ 5973, "huq" ],
-[ 5974, "hur" ],
-[ 5975, "hus" ],
-[ 5976, "hut" ],
-[ 5977, "huu" ],
-[ 5978, "huv" ],
-[ 5979, "huw" ],
-[ 5980, "hux" ],
-[ 5981, "huy" ],
-[ 5982, "huz" ],
-[ 5983, "hva" ],
-[ 5984, "hvb" ],
-[ 5985, "hvc" ],
-[ 5986, "hvd" ],
-[ 5987, "hve" ],
-[ 5988, "hvf" ],
-[ 5989, "hvg" ],
-[ 5990, "hvh" ],
-[ 5991, "hvi" ],
-[ 5992, "hvj" ],
-[ 5993, "hvk" ],
-[ 5994, "hvl" ],
-[ 5995, "hvm" ],
-[ 5996, "hvn" ],
-[ 5997, "hvo" ],
-[ 5998, "hvp" ],
-[ 5999, "hvq" ],
-[ 6000, "hvr" ],
-[ 6001, "hvs" ],
-[ 6002, "hvt" ],
-[ 6003, "hvu" ],
-[ 6004, "hvv" ],
-[ 6005, "hvw" ],
-[ 6006, "hvx" ],
-[ 6007, "hvy" ],
-[ 6008, "hvz" ],
-[ 6009, "hwa" ],
-[ 6010, "hwb" ],
-[ 6011, "hwc" ],
-[ 6012, "hwd" ],
-[ 6013, "hwe" ],
-[ 6014, "hwf" ],
-[ 6015, "hwg" ],
-[ 6016, "hwh" ],
-[ 6017, "hwi" ],
-[ 6018, "hwj" ],
-[ 6019, "hwk" ],
-[ 6020, "hwl" ],
-[ 6021, "hwm" ],
-[ 6022, "hwn" ],
-[ 6023, "hwo" ],
-[ 6024, "hwp" ],
-[ 6025, "hwq" ],
-[ 6026, "hwr" ],
-[ 6027, "hws" ],
-[ 6028, "hwt" ],
-[ 6029, "hwu" ],
-[ 6030, "hwv" ],
-[ 6031, "hww" ],
-[ 6032, "hwx" ],
-[ 6033, "hwy" ],
-[ 6034, "hwz" ],
-[ 6035, "hxa" ],
-[ 6036, "hxb" ],
-[ 6037, "hxc" ],
-[ 6038, "hxd" ],
-[ 6039, "hxe" ],
-[ 6040, "hxf" ],
-[ 6041, "hxg" ],
-[ 6042, "hxh" ],
-[ 6043, "hxi" ],
-[ 6044, "hxj" ],
-[ 6045, "hxk" ],
-[ 6046, "hxl" ],
-[ 6047, "hxm" ],
-[ 6048, "hxn" ],
-[ 6049, "hxo" ],
-[ 6050, "hxp" ],
-[ 6051, "hxq" ],
-[ 6052, "hxr" ],
-[ 6053, "hxs" ],
-[ 6054, "hxt" ],
-[ 6055, "hxu" ],
-[ 6056, "hxv" ],
-[ 6057, "hxw" ],
-[ 6058, "hxx" ],
-[ 6059, "hxy" ],
-[ 6060, "hxz" ],
-[ 6061, "hya" ],
-[ 6062, "hyb" ],
-[ 6063, "hyc" ],
-[ 6064, "hyd" ],
-[ 6065, "hye" ],
-[ 6066, "hyf" ],
-[ 6067, "hyg" ],
-[ 6068, "hyh" ],
-[ 6069, "hyi" ],
-[ 6070, "hyj" ],
-[ 6071, "hyk" ],
-[ 6072, "hyl" ],
-[ 6073, "hym" ],
-[ 6074, "hyn" ],
-[ 6075, "hyo" ],
-[ 6076, "hyp" ],
-[ 6077, "hyq" ],
-[ 6078, "hyr" ],
-[ 6079, "hys" ],
-[ 6080, "hyt" ],
-[ 6081, "hyu" ],
-[ 6082, "hyv" ],
-[ 6083, "hyw" ],
-[ 6084, "hyx" ],
-[ 6085, "hyy" ],
-[ 6086, "hyz" ],
-[ 6087, "hza" ],
-[ 6088, "hzb" ],
-[ 6089, "hzc" ],
-[ 6090, "hzd" ],
-[ 6091, "hze" ],
-[ 6092, "hzf" ],
-[ 6093, "hzg" ],
-[ 6094, "hzh" ],
-[ 6095, "hzi" ],
-[ 6096, "hzj" ],
-[ 6097, "hzk" ],
-[ 6098, "hzl" ],
-[ 6099, "hzm" ],
-[ 6100, "hzn" ],
-[ 6101, "hzo" ],
-[ 6102, "hzp" ],
-[ 6103, "hzq" ],
-[ 6104, "hzr" ],
-[ 6105, "hzs" ],
-[ 6106, "hzt" ],
-[ 6107, "hzu" ],
-[ 6108, "hzv" ],
-[ 6109, "hzw" ],
-[ 6110, "hzx" ],
-[ 6111, "hzy" ],
-[ 6112, "hzz" ],
-[ 6113, "iaa" ],
-[ 6114, "iab" ],
-[ 6115, "iac" ],
-[ 6116, "iad" ],
-[ 6117, "iae" ],
-[ 6118, "iaf" ],
-[ 6119, "iag" ],
-[ 6120, "iah" ],
-[ 6121, "iai" ],
-[ 6122, "iaj" ],
-[ 6123, "iak" ],
-[ 6124, "ial" ],
-[ 6125, "iam" ],
-[ 6126, "ian" ],
-[ 6127, "iao" ],
-[ 6128, "iap" ],
-[ 6129, "iaq" ],
-[ 6130, "iar" ],
-[ 6131, "ias" ],
-[ 6132, "iat" ],
-[ 6133, "iau" ],
-[ 6134, "iav" ],
-[ 6135, "iaw" ],
-[ 6136, "iax" ],
-[ 6137, "iay" ],
-[ 6138, "iaz" ],
-[ 6139, "iba" ],
-[ 6140, "ibb" ],
-[ 6141, "ibc" ],
-[ 6142, "ibd" ],
-[ 6143, "ibe" ],
-[ 6144, "ibf" ],
-[ 6145, "ibg" ],
-[ 6146, "ibh" ],
-[ 6147, "ibi" ],
-[ 6148, "ibj" ],
-[ 6149, "ibk" ],
-[ 6150, "ibl" ],
-[ 6151, "ibm" ],
-[ 6152, "ibn" ],
-[ 6153, "ibo" ],
-[ 6154, "ibp" ],
-[ 6155, "ibq" ],
-[ 6156, "ibr" ],
-[ 6157, "ibs" ],
-[ 6158, "ibt" ],
-[ 6159, "ibu" ],
-[ 6160, "ibv" ],
-[ 6161, "ibw" ],
-[ 6162, "ibx" ],
-[ 6163, "iby" ],
-[ 6164, "ibz" ],
-[ 6165, "ica" ],
-[ 6166, "icb" ],
-[ 6167, "icc" ],
-[ 6168, "icd" ],
-[ 6169, "ice" ],
-[ 6170, "icf" ],
-[ 6171, "icg" ],
-[ 6172, "ich" ],
-[ 6173, "ici" ],
-[ 6174, "icj" ],
-[ 6175, "ick" ],
-[ 6176, "icl" ],
-[ 6177, "icm" ],
-[ 6178, "icn" ],
-[ 6179, "ico" ],
-[ 6180, "icp" ],
-[ 6181, "icq" ],
-[ 6182, "icr" ],
-[ 6183, "ics" ],
-[ 6184, "ict" ],
-[ 6185, "icu" ],
-[ 6186, "icv" ],
-[ 6187, "icw" ],
-[ 6188, "icx" ],
-[ 6189, "icy" ],
-[ 6190, "icz" ],
-[ 6191, "ida" ],
-[ 6192, "idb" ],
-[ 6193, "idc" ],
-[ 6194, "idd" ],
-[ 6195, "ide" ],
-[ 6196, "idf" ],
-[ 6197, "idg" ],
-[ 6198, "idh" ],
-[ 6199, "idi" ],
-[ 6200, "idj" ],
-[ 6201, "idk" ],
-[ 6202, "idl" ],
-[ 6203, "idm" ],
-[ 6204, "idn" ],
-[ 6205, "ido" ],
-[ 6206, "idp" ],
-[ 6207, "idq" ],
-[ 6208, "idr" ],
-[ 6209, "ids" ],
-[ 6210, "idt" ],
-[ 6211, "idu" ],
-[ 6212, "idv" ],
-[ 6213, "idw" ],
-[ 6214, "idx" ],
-[ 6215, "idy" ],
-[ 6216, "idz" ],
-[ 6217, "iea" ],
-[ 6218, "ieb" ],
-[ 6219, "iec" ],
-[ 6220, "ied" ],
-[ 6221, "iee" ],
-[ 6222, "ief" ],
-[ 6223, "ieg" ],
-[ 6224, "ieh" ],
-[ 6225, "iei" ],
-[ 6226, "iej" ],
-[ 6227, "iek" ],
-[ 6228, "iel" ],
-[ 6229, "iem" ],
-[ 6230, "ien" ],
-[ 6231, "ieo" ],
-[ 6232, "iep" ],
-[ 6233, "ieq" ],
-[ 6234, "ier" ],
-[ 6235, "ies" ],
-[ 6236, "iet" ],
-[ 6237, "ieu" ],
-[ 6238, "iev" ],
-[ 6239, "iew" ],
-[ 6240, "iex" ],
-[ 6241, "iey" ],
-[ 6242, "iez" ],
-[ 6243, "ifa" ],
-[ 6244, "ifb" ],
-[ 6245, "ifc" ],
-[ 6246, "ifd" ],
-[ 6247, "ife" ],
-[ 6248, "iff" ],
-[ 6249, "ifg" ],
-[ 6250, "ifh" ],
-[ 6251, "ifi" ],
-[ 6252, "ifj" ],
-[ 6253, "ifk" ],
-[ 6254, "ifl" ],
-[ 6255, "ifm" ],
-[ 6256, "ifn" ],
-[ 6257, "ifo" ],
-[ 6258, "ifp" ],
-[ 6259, "ifq" ],
-[ 6260, "ifr" ],
-[ 6261, "ifs" ],
-[ 6262, "ift" ],
-[ 6263, "ifu" ],
-[ 6264, "ifv" ],
-[ 6265, "ifw" ],
-[ 6266, "ifx" ],
-[ 6267, "ify" ],
-[ 6268, "ifz" ],
-[ 6269, "iga" ],
-[ 6270, "igb" ],
-[ 6271, "igc" ],
-[ 6272, "igd" ],
-[ 6273, "ige" ],
-[ 6274, "igf" ],
-[ 6275, "igg" ],
-[ 6276, "igh" ],
-[ 6277, "igi" ],
-[ 6278, "igj" ],
-[ 6279, "igk" ],
-[ 6280, "igl" ],
-[ 6281, "igm" ],
-[ 6282, "ign" ],
-[ 6283, "igo" ],
-[ 6284, "igp" ],
-[ 6285, "igq" ],
-[ 6286, "igr" ],
-[ 6287, "igs" ],
-[ 6288, "igt" ],
-[ 6289, "igu" ],
-[ 6290, "igv" ],
-[ 6291, "igw" ],
-[ 6292, "igx" ],
-[ 6293, "igy" ],
-[ 6294, "igz" ],
-[ 6295, "iha" ],
-[ 6296, "ihb" ],
-[ 6297, "ihc" ],
-[ 6298, "ihd" ],
-[ 6299, "ihe" ],
-[ 6300, "ihf" ],
-[ 6301, "ihg" ],
-[ 6302, "ihh" ],
-[ 6303, "ihi" ],
-[ 6304, "ihj" ],
-[ 6305, "ihk" ],
-[ 6306, "ihl" ],
-[ 6307, "ihm" ],
-[ 6308, "ihn" ],
-[ 6309, "iho" ],
-[ 6310, "ihp" ],
-[ 6311, "ihq" ],
-[ 6312, "ihr" ],
-[ 6313, "ihs" ],
-[ 6314, "iht" ],
-[ 6315, "ihu" ],
-[ 6316, "ihv" ],
-[ 6317, "ihw" ],
-[ 6318, "ihx" ],
-[ 6319, "ihy" ],
-[ 6320, "ihz" ],
-[ 6321, "iia" ],
-[ 6322, "iib" ],
-[ 6323, "iic" ],
-[ 6324, "iid" ],
-[ 6325, "iie" ],
-[ 6326, "iif" ],
-[ 6327, "iig" ],
-[ 6328, "iih" ],
-[ 6329, "iii" ],
-[ 6330, "iij" ],
-[ 6331, "iik" ],
-[ 6332, "iil" ],
-[ 6333, "iim" ],
-[ 6334, "iin" ],
-[ 6335, "iio" ],
-[ 6336, "iip" ],
-[ 6337, "iiq" ],
-[ 6338, "iir" ],
-[ 6339, "iis" ],
-[ 6340, "iit" ],
-[ 6341, "iiu" ],
-[ 6342, "iiv" ],
-[ 6343, "iiw" ],
-[ 6344, "iix" ],
-[ 6345, "iiy" ],
-[ 6346, "iiz" ],
-[ 6347, "ija" ],
-[ 6348, "ijb" ],
-[ 6349, "ijc" ],
-[ 6350, "ijd" ],
-[ 6351, "ije" ],
-[ 6352, "ijf" ],
-[ 6353, "ijg" ],
-[ 6354, "ijh" ],
-[ 6355, "iji" ],
-[ 6356, "ijj" ],
-[ 6357, "ijk" ],
-[ 6358, "ijl" ],
-[ 6359, "ijm" ],
-[ 6360, "ijn" ],
-[ 6361, "ijo" ],
-[ 6362, "ijp" ],
-[ 6363, "ijq" ],
-[ 6364, "ijr" ],
-[ 6365, "ijs" ],
-[ 6366, "ijt" ],
-[ 6367, "iju" ],
-[ 6368, "ijv" ],
-[ 6369, "ijw" ],
-[ 6370, "ijx" ],
-[ 6371, "ijy" ],
-[ 6372, "ijz" ],
-[ 6373, "ika" ],
-[ 6374, "ikb" ],
-[ 6375, "ikc" ],
-[ 6376, "ikd" ],
-[ 6377, "ike" ],
-[ 6378, "ikf" ],
-[ 6379, "ikg" ],
-[ 6380, "ikh" ],
-[ 6381, "iki" ],
-[ 6382, "ikj" ],
-[ 6383, "ikk" ],
-[ 6384, "ikl" ],
-[ 6385, "ikm" ],
-[ 6386, "ikn" ],
-[ 6387, "iko" ],
-[ 6388, "ikp" ],
-[ 6389, "ikq" ],
-[ 6390, "ikr" ],
-[ 6391, "iks" ],
-[ 6392, "ikt" ],
-[ 6393, "iku" ],
-[ 6394, "ikv" ],
-[ 6395, "ikw" ],
-[ 6396, "ikx" ],
-[ 6397, "iky" ],
-[ 6398, "ikz" ],
-[ 6399, "ila" ],
-[ 6400, "ilb" ],
-[ 6401, "ilc" ],
-[ 6402, "ild" ],
-[ 6403, "ile" ],
-[ 6404, "ilf" ],
-[ 6405, "ilg" ],
-[ 6406, "ilh" ],
-[ 6407, "ili" ],
-[ 6408, "ilj" ],
-[ 6409, "ilk" ],
-[ 6410, "ill" ],
-[ 6411, "ilm" ],
-[ 6412, "iln" ],
-[ 6413, "ilo" ],
-[ 6414, "ilp" ],
-[ 6415, "ilq" ],
-[ 6416, "ilr" ],
-[ 6417, "ils" ],
-[ 6418, "ilt" ],
-[ 6419, "ilu" ],
-[ 6420, "ilv" ],
-[ 6421, "ilw" ],
-[ 6422, "ilx" ],
-[ 6423, "ily" ],
-[ 6424, "ilz" ],
-[ 6425, "ima" ],
-[ 6426, "imb" ],
-[ 6427, "imc" ],
-[ 6428, "imd" ],
-[ 6429, "ime" ],
-[ 6430, "imf" ],
-[ 6431, "img" ],
-[ 6432, "imh" ],
-[ 6433, "imi" ],
-[ 6434, "imj" ],
-[ 6435, "imk" ],
-[ 6436, "iml" ],
-[ 6437, "imm" ],
-[ 6438, "imn" ],
-[ 6439, "imo" ],
-[ 6440, "imp" ],
-[ 6441, "imq" ],
-[ 6442, "imr" ],
-[ 6443, "ims" ],
-[ 6444, "imt" ],
-[ 6445, "imu" ],
-[ 6446, "imv" ],
-[ 6447, "imw" ],
-[ 6448, "imx" ],
-[ 6449, "imy" ],
-[ 6450, "imz" ],
-[ 6451, "ina" ],
-[ 6452, "inb" ],
-[ 6453, "inc" ],
-[ 6454, "ind" ],
-[ 6455, "ine" ],
-[ 6456, "inf" ],
-[ 6457, "ing" ],
-[ 6458, "inh" ],
-[ 6459, "ini" ],
-[ 6460, "inj" ],
-[ 6461, "ink" ],
-[ 6462, "inl" ],
-[ 6463, "inm" ],
-[ 6464, "inn" ],
-[ 6465, "ino" ],
-[ 6466, "inp" ],
-[ 6467, "inq" ],
-[ 6468, "inr" ],
-[ 6469, "ins" ],
-[ 6470, "int" ],
-[ 6471, "inu" ],
-[ 6472, "inv" ],
-[ 6473, "inw" ],
-[ 6474, "inx" ],
-[ 6475, "iny" ],
-[ 6476, "inz" ],
-[ 6477, "ioa" ],
-[ 6478, "iob" ],
-[ 6479, "ioc" ],
-[ 6480, "iod" ],
-[ 6481, "ioe" ],
-[ 6482, "iof" ],
-[ 6483, "iog" ],
-[ 6484, "ioh" ],
-[ 6485, "ioi" ],
-[ 6486, "ioj" ],
-[ 6487, "iok" ],
-[ 6488, "iol" ],
-[ 6489, "iom" ],
-[ 6490, "ion" ],
-[ 6491, "ioo" ],
-[ 6492, "iop" ],
-[ 6493, "ioq" ],
-[ 6494, "ior" ],
-[ 6495, "ios" ],
-[ 6496, "iot" ],
-[ 6497, "iou" ],
-[ 6498, "iov" ],
-[ 6499, "iow" ],
-[ 6500, "iox" ],
-[ 6501, "ioy" ],
-[ 6502, "ioz" ],
-[ 6503, "ipa" ],
-[ 6504, "ipb" ],
-[ 6505, "ipc" ],
-[ 6506, "ipd" ],
-[ 6507, "ipe" ],
-[ 6508, "ipf" ],
-[ 6509, "ipg" ],
-[ 6510, "iph" ],
-[ 6511, "ipi" ],
-[ 6512, "ipj" ],
-[ 6513, "ipk" ],
-[ 6514, "ipl" ],
-[ 6515, "ipm" ],
-[ 6516, "ipn" ],
-[ 6517, "ipo" ],
-[ 6518, "ipp" ],
-[ 6519, "ipq" ],
-[ 6520, "ipr" ],
-[ 6521, "ips" ],
-[ 6522, "ipt" ],
-[ 6523, "ipu" ],
-[ 6524, "ipv" ],
-[ 6525, "ipw" ],
-[ 6526, "ipx" ],
-[ 6527, "ipy" ],
-[ 6528, "ipz" ],
-[ 6529, "iqa" ],
-[ 6530, "iqb" ],
-[ 6531, "iqc" ],
-[ 6532, "iqd" ],
-[ 6533, "iqe" ],
-[ 6534, "iqf" ],
-[ 6535, "iqg" ],
-[ 6536, "iqh" ],
-[ 6537, "iqi" ],
-[ 6538, "iqj" ],
-[ 6539, "iqk" ],
-[ 6540, "iql" ],
-[ 6541, "iqm" ],
-[ 6542, "iqn" ],
-[ 6543, "iqo" ],
-[ 6544, "iqp" ],
-[ 6545, "iqq" ],
-[ 6546, "iqr" ],
-[ 6547, "iqs" ],
-[ 6548, "iqt" ],
-[ 6549, "iqu" ],
-[ 6550, "iqv" ],
-[ 6551, "iqw" ],
-[ 6552, "iqx" ],
-[ 6553, "iqy" ],
-[ 6554, "iqz" ],
-[ 6555, "ira" ],
-[ 6556, "irb" ],
-[ 6557, "irc" ],
-[ 6558, "ird" ],
-[ 6559, "ire" ],
-[ 6560, "irf" ],
-[ 6561, "irg" ],
-[ 6562, "irh" ],
-[ 6563, "iri" ],
-[ 6564, "irj" ],
-[ 6565, "irk" ],
-[ 6566, "irl" ],
-[ 6567, "irm" ],
-[ 6568, "irn" ],
-[ 6569, "iro" ],
-[ 6570, "irp" ],
-[ 6571, "irq" ],
-[ 6572, "irr" ],
-[ 6573, "irs" ],
-[ 6574, "irt" ],
-[ 6575, "iru" ],
-[ 6576, "irv" ],
-[ 6577, "irw" ],
-[ 6578, "irx" ],
-[ 6579, "iry" ],
-[ 6580, "irz" ],
-[ 6581, "isa" ],
-[ 6582, "isb" ],
-[ 6583, "isc" ],
-[ 6584, "isd" ],
-[ 6585, "ise" ],
-[ 6586, "isf" ],
-[ 6587, "isg" ],
-[ 6588, "ish" ],
-[ 6589, "isi" ],
-[ 6590, "isj" ],
-[ 6591, "isk" ],
-[ 6592, "isl" ],
-[ 6593, "ism" ],
-[ 6594, "isn" ],
-[ 6595, "iso" ],
-[ 6596, "isp" ],
-[ 6597, "isq" ],
-[ 6598, "isr" ],
-[ 6599, "iss" ],
-[ 6600, "ist" ],
-[ 6601, "isu" ],
-[ 6602, "isv" ],
-[ 6603, "isw" ],
-[ 6604, "isx" ],
-[ 6605, "isy" ],
-[ 6606, "isz" ],
-[ 6607, "ita" ],
-[ 6608, "itb" ],
-[ 6609, "itc" ],
-[ 6610, "itd" ],
-[ 6611, "ite" ],
-[ 6612, "itf" ],
-[ 6613, "itg" ],
-[ 6614, "ith" ],
-[ 6615, "iti" ],
-[ 6616, "itj" ],
-[ 6617, "itk" ],
-[ 6618, "itl" ],
-[ 6619, "itm" ],
-[ 6620, "itn" ],
-[ 6621, "ito" ],
-[ 6622, "itp" ],
-[ 6623, "itq" ],
-[ 6624, "itr" ],
-[ 6625, "its" ],
-[ 6626, "itt" ],
-[ 6627, "itu" ],
-[ 6628, "itv" ],
-[ 6629, "itw" ],
-[ 6630, "itx" ],
-[ 6631, "ity" ],
-[ 6632, "itz" ],
-[ 6633, "iua" ],
-[ 6634, "iub" ],
-[ 6635, "iuc" ],
-[ 6636, "iud" ],
-[ 6637, "iue" ],
-[ 6638, "iuf" ],
-[ 6639, "iug" ],
-[ 6640, "iuh" ],
-[ 6641, "iui" ],
-[ 6642, "iuj" ],
-[ 6643, "iuk" ],
-[ 6644, "iul" ],
-[ 6645, "ium" ],
-[ 6646, "iun" ],
-[ 6647, "iuo" ],
-[ 6648, "iup" ],
-[ 6649, "iuq" ],
-[ 6650, "iur" ],
-[ 6651, "ius" ],
-[ 6652, "iut" ],
-[ 6653, "iuu" ],
-[ 6654, "iuv" ],
-[ 6655, "iuw" ],
-[ 6656, "iux" ],
-[ 6657, "iuy" ],
-[ 6658, "iuz" ],
-[ 6659, "iva" ],
-[ 6660, "ivb" ],
-[ 6661, "ivc" ],
-[ 6662, "ivd" ],
-[ 6663, "ive" ],
-[ 6664, "ivf" ],
-[ 6665, "ivg" ],
-[ 6666, "ivh" ],
-[ 6667, "ivi" ],
-[ 6668, "ivj" ],
-[ 6669, "ivk" ],
-[ 6670, "ivl" ],
-[ 6671, "ivm" ],
-[ 6672, "ivn" ],
-[ 6673, "ivo" ],
-[ 6674, "ivp" ],
-[ 6675, "ivq" ],
-[ 6676, "ivr" ],
-[ 6677, "ivs" ],
-[ 6678, "ivt" ],
-[ 6679, "ivu" ],
-[ 6680, "ivv" ],
-[ 6681, "ivw" ],
-[ 6682, "ivx" ],
-[ 6683, "ivy" ],
-[ 6684, "ivz" ],
-[ 6685, "iwa" ],
-[ 6686, "iwb" ],
-[ 6687, "iwc" ],
-[ 6688, "iwd" ],
-[ 6689, "iwe" ],
-[ 6690, "iwf" ],
-[ 6691, "iwg" ],
-[ 6692, "iwh" ],
-[ 6693, "iwi" ],
-[ 6694, "iwj" ],
-[ 6695, "iwk" ],
-[ 6696, "iwl" ],
-[ 6697, "iwm" ],
-[ 6698, "iwn" ],
-[ 6699, "iwo" ],
-[ 6700, "iwp" ],
-[ 6701, "iwq" ],
-[ 6702, "iwr" ],
-[ 6703, "iws" ],
-[ 6704, "iwt" ],
-[ 6705, "iwu" ],
-[ 6706, "iwv" ],
-[ 6707, "iww" ],
-[ 6708, "iwx" ],
-[ 6709, "iwy" ],
-[ 6710, "iwz" ],
-[ 6711, "ixa" ],
-[ 6712, "ixb" ],
-[ 6713, "ixc" ],
-[ 6714, "ixd" ],
-[ 6715, "ixe" ],
-[ 6716, "ixf" ],
-[ 6717, "ixg" ],
-[ 6718, "ixh" ],
-[ 6719, "ixi" ],
-[ 6720, "ixj" ],
-[ 6721, "ixk" ],
-[ 6722, "ixl" ],
-[ 6723, "ixm" ],
-[ 6724, "ixn" ],
-[ 6725, "ixo" ],
-[ 6726, "ixp" ],
-[ 6727, "ixq" ],
-[ 6728, "ixr" ],
-[ 6729, "ixs" ],
-[ 6730, "ixt" ],
-[ 6731, "ixu" ],
-[ 6732, "ixv" ],
-[ 6733, "ixw" ],
-[ 6734, "ixx" ],
-[ 6735, "ixy" ],
-[ 6736, "ixz" ],
-[ 6737, "iya" ],
-[ 6738, "iyb" ],
-[ 6739, "iyc" ],
-[ 6740, "iyd" ],
-[ 6741, "iye" ],
-[ 6742, "iyf" ],
-[ 6743, "iyg" ],
-[ 6744, "iyh" ],
-[ 6745, "iyi" ],
-[ 6746, "iyj" ],
-[ 6747, "iyk" ],
-[ 6748, "iyl" ],
-[ 6749, "iym" ],
-[ 6750, "iyn" ],
-[ 6751, "iyo" ],
-[ 6752, "iyp" ],
-[ 6753, "iyq" ],
-[ 6754, "iyr" ],
-[ 6755, "iys" ],
-[ 6756, "iyt" ],
-[ 6757, "iyu" ],
-[ 6758, "iyv" ],
-[ 6759, "iyw" ],
-[ 6760, "iyx" ],
-[ 6761, "iyy" ],
-[ 6762, "iyz" ],
-[ 6763, "iza" ],
-[ 6764, "izb" ],
-[ 6765, "izc" ],
-[ 6766, "izd" ],
-[ 6767, "ize" ],
-[ 6768, "izf" ],
-[ 6769, "izg" ],
-[ 6770, "izh" ],
-[ 6771, "izi" ],
-[ 6772, "izj" ],
-[ 6773, "izk" ],
-[ 6774, "izl" ],
-[ 6775, "izm" ],
-[ 6776, "izn" ],
-[ 6777, "izo" ],
-[ 6778, "izp" ],
-[ 6779, "izq" ],
-[ 6780, "izr" ],
-[ 6781, "izs" ],
-[ 6782, "izt" ],
-[ 6783, "izu" ],
-[ 6784, "izv" ],
-[ 6785, "izw" ],
-[ 6786, "izx" ],
-[ 6787, "izy" ],
-[ 6788, "izz" ],
-[ 6789, "jaa" ],
-[ 6790, "jab" ],
-[ 6791, "jac" ],
-[ 6792, "jad" ],
-[ 6793, "jae" ],
-[ 6794, "jaf" ],
-[ 6795, "jag" ],
-[ 6796, "jah" ],
-[ 6797, "jai" ],
-[ 6798, "jaj" ],
-[ 6799, "jak" ],
-[ 6800, "jal" ],
-[ 6801, "jam" ],
-[ 6802, "jan" ],
-[ 6803, "jao" ],
-[ 6804, "jap" ],
-[ 6805, "jaq" ],
-[ 6806, "jar" ],
-[ 6807, "jas" ],
-[ 6808, "jat" ],
-[ 6809, "jau" ],
-[ 6810, "jav" ],
-[ 6811, "jaw" ],
-[ 6812, "jax" ],
-[ 6813, "jay" ],
-[ 6814, "jaz" ],
-[ 6815, "jba" ],
-[ 6816, "jbb" ],
-[ 6817, "jbc" ],
-[ 6818, "jbd" ],
-[ 6819, "jbe" ],
-[ 6820, "jbf" ],
-[ 6821, "jbg" ],
-[ 6822, "jbh" ],
-[ 6823, "jbi" ],
-[ 6824, "jbj" ],
-[ 6825, "jbk" ],
-[ 6826, "jbl" ],
-[ 6827, "jbm" ],
-[ 6828, "jbn" ],
-[ 6829, "jbo" ],
-[ 6830, "jbp" ],
-[ 6831, "jbq" ],
-[ 6832, "jbr" ],
-[ 6833, "jbs" ],
-[ 6834, "jbt" ],
-[ 6835, "jbu" ],
-[ 6836, "jbv" ],
-[ 6837, "jbw" ],
-[ 6838, "jbx" ],
-[ 6839, "jby" ],
-[ 6840, "jbz" ],
-[ 6841, "jca" ],
-[ 6842, "jcb" ],
-[ 6843, "jcc" ],
-[ 6844, "jcd" ],
-[ 6845, "jce" ],
-[ 6846, "jcf" ],
-[ 6847, "jcg" ],
-[ 6848, "jch" ],
-[ 6849, "jci" ],
-[ 6850, "jcj" ],
-[ 6851, "jck" ],
-[ 6852, "jcl" ],
-[ 6853, "jcm" ],
-[ 6854, "jcn" ],
-[ 6855, "jco" ],
-[ 6856, "jcp" ],
-[ 6857, "jcq" ],
-[ 6858, "jcr" ],
-[ 6859, "jcs" ],
-[ 6860, "jct" ],
-[ 6861, "jcu" ],
-[ 6862, "jcv" ],
-[ 6863, "jcw" ],
-[ 6864, "jcx" ],
-[ 6865, "jcy" ],
-[ 6866, "jcz" ],
-[ 6867, "jda" ],
-[ 6868, "jdb" ],
-[ 6869, "jdc" ],
-[ 6870, "jdd" ],
-[ 6871, "jde" ],
-[ 6872, "jdf" ],
-[ 6873, "jdg" ],
-[ 6874, "jdh" ],
-[ 6875, "jdi" ],
-[ 6876, "jdj" ],
-[ 6877, "jdk" ],
-[ 6878, "jdl" ],
-[ 6879, "jdm" ],
-[ 6880, "jdn" ],
-[ 6881, "jdo" ],
-[ 6882, "jdp" ],
-[ 6883, "jdq" ],
-[ 6884, "jdr" ],
-[ 6885, "jds" ],
-[ 6886, "jdt" ],
-[ 6887, "jdu" ],
-[ 6888, "jdv" ],
-[ 6889, "jdw" ],
-[ 6890, "jdx" ],
-[ 6891, "jdy" ],
-[ 6892, "jdz" ],
-[ 6893, "jea" ],
-[ 6894, "jeb" ],
-[ 6895, "jec" ],
-[ 6896, "jed" ],
-[ 6897, "jee" ],
-[ 6898, "jef" ],
-[ 6899, "jeg" ],
-[ 6900, "jeh" ],
-[ 6901, "jei" ],
-[ 6902, "jej" ],
-[ 6903, "jek" ],
-[ 6904, "jel" ],
-[ 6905, "jem" ],
-[ 6906, "jen" ],
-[ 6907, "jeo" ],
-[ 6908, "jep" ],
-[ 6909, "jeq" ],
-[ 6910, "jer" ],
-[ 6911, "jes" ],
-[ 6912, "jet" ],
-[ 6913, "jeu" ],
-[ 6914, "jev" ],
-[ 6915, "jew" ],
-[ 6916, "jex" ],
-[ 6917, "jey" ],
-[ 6918, "jez" ],
-[ 6919, "jfa" ],
-[ 6920, "jfb" ],
-[ 6921, "jfc" ],
-[ 6922, "jfd" ],
-[ 6923, "jfe" ],
-[ 6924, "jff" ],
-[ 6925, "jfg" ],
-[ 6926, "jfh" ],
-[ 6927, "jfi" ],
-[ 6928, "jfj" ],
-[ 6929, "jfk" ],
-[ 6930, "jfl" ],
-[ 6931, "jfm" ],
-[ 6932, "jfn" ],
-[ 6933, "jfo" ],
-[ 6934, "jfp" ],
-[ 6935, "jfq" ],
-[ 6936, "jfr" ],
-[ 6937, "jfs" ],
-[ 6938, "jft" ],
-[ 6939, "jfu" ],
-[ 6940, "jfv" ],
-[ 6941, "jfw" ],
-[ 6942, "jfx" ],
-[ 6943, "jfy" ],
-[ 6944, "jfz" ],
-[ 6945, "jga" ],
-[ 6946, "jgb" ],
-[ 6947, "jgc" ],
-[ 6948, "jgd" ],
-[ 6949, "jge" ],
-[ 6950, "jgf" ],
-[ 6951, "jgg" ],
-[ 6952, "jgh" ],
-[ 6953, "jgi" ],
-[ 6954, "jgj" ],
-[ 6955, "jgk" ],
-[ 6956, "jgl" ],
-[ 6957, "jgm" ],
-[ 6958, "jgn" ],
-[ 6959, "jgo" ],
-[ 6960, "jgp" ],
-[ 6961, "jgq" ],
-[ 6962, "jgr" ],
-[ 6963, "jgs" ],
-[ 6964, "jgt" ],
-[ 6965, "jgu" ],
-[ 6966, "jgv" ],
-[ 6967, "jgw" ],
-[ 6968, "jgx" ],
-[ 6969, "jgy" ],
-[ 6970, "jgz" ],
-[ 6971, "jha" ],
-[ 6972, "jhb" ],
-[ 6973, "jhc" ],
-[ 6974, "jhd" ],
-[ 6975, "jhe" ],
-[ 6976, "jhf" ],
-[ 6977, "jhg" ],
-[ 6978, "jhh" ],
-[ 6979, "jhi" ],
-[ 6980, "jhj" ],
-[ 6981, "jhk" ],
-[ 6982, "jhl" ],
-[ 6983, "jhm" ],
-[ 6984, "jhn" ],
-[ 6985, "jho" ],
-[ 6986, "jhp" ],
-[ 6987, "jhq" ],
-[ 6988, "jhr" ],
-[ 6989, "jhs" ],
-[ 6990, "jht" ],
-[ 6991, "jhu" ],
-[ 6992, "jhv" ],
-[ 6993, "jhw" ],
-[ 6994, "jhx" ],
-[ 6995, "jhy" ],
-[ 6996, "jhz" ],
-[ 6997, "jia" ],
-[ 6998, "jib" ],
-[ 6999, "jic" ],
-[ 7000, "jid" ],
-[ 7001, "jie" ],
-[ 7002, "jif" ],
-[ 7003, "jig" ],
-[ 7004, "jih" ],
-[ 7005, "jii" ],
-[ 7006, "jij" ],
-[ 7007, "jik" ],
-[ 7008, "jil" ],
-[ 7009, "jim" ],
-[ 7010, "jin" ],
-[ 7011, "jio" ],
-[ 7012, "jip" ],
-[ 7013, "jiq" ],
-[ 7014, "jir" ],
-[ 7015, "jis" ],
-[ 7016, "jit" ],
-[ 7017, "jiu" ],
-[ 7018, "jiv" ],
-[ 7019, "jiw" ],
-[ 7020, "jix" ],
-[ 7021, "jiy" ],
-[ 7022, "jiz" ],
-[ 7023, "jja" ],
-[ 7024, "jjb" ],
-[ 7025, "jjc" ],
-[ 7026, "jjd" ],
-[ 7027, "jje" ],
-[ 7028, "jjf" ],
-[ 7029, "jjg" ],
-[ 7030, "jjh" ],
-[ 7031, "jji" ],
-[ 7032, "jjj" ],
-[ 7033, "jjk" ],
-[ 7034, "jjl" ],
-[ 7035, "jjm" ],
-[ 7036, "jjn" ],
-[ 7037, "jjo" ],
-[ 7038, "jjp" ],
-[ 7039, "jjq" ],
-[ 7040, "jjr" ],
-[ 7041, "jjs" ],
-[ 7042, "jjt" ],
-[ 7043, "jju" ],
-[ 7044, "jjv" ],
-[ 7045, "jjw" ],
-[ 7046, "jjx" ],
-[ 7047, "jjy" ],
-[ 7048, "jjz" ],
-[ 7049, "jka" ],
-[ 7050, "jkb" ],
-[ 7051, "jkc" ],
-[ 7052, "jkd" ],
-[ 7053, "jke" ],
-[ 7054, "jkf" ],
-[ 7055, "jkg" ],
-[ 7056, "jkh" ],
-[ 7057, "jki" ],
-[ 7058, "jkj" ],
-[ 7059, "jkk" ],
-[ 7060, "jkl" ],
-[ 7061, "jkm" ],
-[ 7062, "jkn" ],
-[ 7063, "jko" ],
-[ 7064, "jkp" ],
-[ 7065, "jkq" ],
-[ 7066, "jkr" ],
-[ 7067, "jks" ],
-[ 7068, "jkt" ],
-[ 7069, "jku" ],
-[ 7070, "jkv" ],
-[ 7071, "jkw" ],
-[ 7072, "jkx" ],
-[ 7073, "jky" ],
-[ 7074, "jkz" ],
-[ 7075, "jla" ],
-[ 7076, "jlb" ],
-[ 7077, "jlc" ],
-[ 7078, "jld" ],
-[ 7079, "jle" ],
-[ 7080, "jlf" ],
-[ 7081, "jlg" ],
-[ 7082, "jlh" ],
-[ 7083, "jli" ],
-[ 7084, "jlj" ],
-[ 7085, "jlk" ],
-[ 7086, "jll" ],
-[ 7087, "jlm" ],
-[ 7088, "jln" ],
-[ 7089, "jlo" ],
-[ 7090, "jlp" ],
-[ 7091, "jlq" ],
-[ 7092, "jlr" ],
-[ 7093, "jls" ],
-[ 7094, "jlt" ],
-[ 7095, "jlu" ],
-[ 7096, "jlv" ],
-[ 7097, "jlw" ],
-[ 7098, "jlx" ],
-[ 7099, "jly" ],
-[ 7100, "jlz" ],
-[ 7101, "jma" ],
-[ 7102, "jmb" ],
-[ 7103, "jmc" ],
-[ 7104, "jmd" ],
-[ 7105, "jme" ],
-[ 7106, "jmf" ],
-[ 7107, "jmg" ],
-[ 7108, "jmh" ],
-[ 7109, "jmi" ],
-[ 7110, "jmj" ],
-[ 7111, "jmk" ],
-[ 7112, "jml" ],
-[ 7113, "jmm" ],
-[ 7114, "jmn" ],
-[ 7115, "jmo" ],
-[ 7116, "jmp" ],
-[ 7117, "jmq" ],
-[ 7118, "jmr" ],
-[ 7119, "jms" ],
-[ 7120, "jmt" ],
-[ 7121, "jmu" ],
-[ 7122, "jmv" ],
-[ 7123, "jmw" ],
-[ 7124, "jmx" ],
-[ 7125, "jmy" ],
-[ 7126, "jmz" ],
-[ 7127, "jna" ],
-[ 7128, "jnb" ],
-[ 7129, "jnc" ],
-[ 7130, "jnd" ],
-[ 7131, "jne" ],
-[ 7132, "jnf" ],
-[ 7133, "jng" ],
-[ 7134, "jnh" ],
-[ 7135, "jni" ],
-[ 7136, "jnj" ],
-[ 7137, "jnk" ],
-[ 7138, "jnl" ],
-[ 7139, "jnm" ],
-[ 7140, "jnn" ],
-[ 7141, "jno" ],
-[ 7142, "jnp" ],
-[ 7143, "jnq" ],
-[ 7144, "jnr" ],
-[ 7145, "jns" ],
-[ 7146, "jnt" ],
-[ 7147, "jnu" ],
-[ 7148, "jnv" ],
-[ 7149, "jnw" ],
-[ 7150, "jnx" ],
-[ 7151, "jny" ],
-[ 7152, "jnz" ],
-[ 7153, "joa" ],
-[ 7154, "job" ],
-[ 7155, "joc" ],
-[ 7156, "jod" ],
-[ 7157, "joe" ],
-[ 7158, "jof" ],
-[ 7159, "jog" ],
-[ 7160, "joh" ],
-[ 7161, "joi" ],
-[ 7162, "joj" ],
-[ 7163, "jok" ],
-[ 7164, "jol" ],
-[ 7165, "jom" ],
-[ 7166, "jon" ],
-[ 7167, "joo" ],
-[ 7168, "jop" ],
-[ 7169, "joq" ],
-[ 7170, "jor" ],
-[ 7171, "jos" ],
-[ 7172, "jot" ],
-[ 7173, "jou" ],
-[ 7174, "jov" ],
-[ 7175, "jow" ],
-[ 7176, "jox" ],
-[ 7177, "joy" ],
-[ 7178, "joz" ],
-[ 7179, "jpa" ],
-[ 7180, "jpb" ],
-[ 7181, "jpc" ],
-[ 7182, "jpd" ],
-[ 7183, "jpe" ],
-[ 7184, "jpf" ],
-[ 7185, "jpg" ],
-[ 7186, "jph" ],
-[ 7187, "jpi" ],
-[ 7188, "jpj" ],
-[ 7189, "jpk" ],
-[ 7190, "jpl" ],
-[ 7191, "jpm" ],
-[ 7192, "jpn" ],
-[ 7193, "jpo" ],
-[ 7194, "jpp" ],
-[ 7195, "jpq" ],
-[ 7196, "jpr" ],
-[ 7197, "jps" ],
-[ 7198, "jpt" ],
-[ 7199, "jpu" ],
-[ 7200, "jpv" ],
-[ 7201, "jpw" ],
-[ 7202, "jpx" ],
-[ 7203, "jpy" ],
-[ 7204, "jpz" ],
-[ 7205, "jqa" ],
-[ 7206, "jqb" ],
-[ 7207, "jqc" ],
-[ 7208, "jqd" ],
-[ 7209, "jqe" ],
-[ 7210, "jqf" ],
-[ 7211, "jqg" ],
-[ 7212, "jqh" ],
-[ 7213, "jqi" ],
-[ 7214, "jqj" ],
-[ 7215, "jqk" ],
-[ 7216, "jql" ],
-[ 7217, "jqm" ],
-[ 7218, "jqn" ],
-[ 7219, "jqo" ],
-[ 7220, "jqp" ],
-[ 7221, "jqq" ],
-[ 7222, "jqr" ],
-[ 7223, "jqs" ],
-[ 7224, "jqt" ],
-[ 7225, "jqu" ],
-[ 7226, "jqv" ],
-[ 7227, "jqw" ],
-[ 7228, "jqx" ],
-[ 7229, "jqy" ],
-[ 7230, "jqz" ],
-[ 7231, "jra" ],
-[ 7232, "jrb" ],
-[ 7233, "jrc" ],
-[ 7234, "jrd" ],
-[ 7235, "jre" ],
-[ 7236, "jrf" ],
-[ 7237, "jrg" ],
-[ 7238, "jrh" ],
-[ 7239, "jri" ],
-[ 7240, "jrj" ],
-[ 7241, "jrk" ],
-[ 7242, "jrl" ],
-[ 7243, "jrm" ],
-[ 7244, "jrn" ],
-[ 7245, "jro" ],
-[ 7246, "jrp" ],
-[ 7247, "jrq" ],
-[ 7248, "jrr" ],
-[ 7249, "jrs" ],
-[ 7250, "jrt" ],
-[ 7251, "jru" ],
-[ 7252, "jrv" ],
-[ 7253, "jrw" ],
-[ 7254, "jrx" ],
-[ 7255, "jry" ],
-[ 7256, "jrz" ],
-[ 7257, "jsa" ],
-[ 7258, "jsb" ],
-[ 7259, "jsc" ],
-[ 7260, "jsd" ],
-[ 7261, "jse" ],
-[ 7262, "jsf" ],
-[ 7263, "jsg" ],
-[ 7264, "jsh" ],
-[ 7265, "jsi" ],
-[ 7266, "jsj" ],
-[ 7267, "jsk" ],
-[ 7268, "jsl" ],
-[ 7269, "jsm" ],
-[ 7270, "jsn" ],
-[ 7271, "jso" ],
-[ 7272, "jsp" ],
-[ 7273, "jsq" ],
-[ 7274, "jsr" ],
-[ 7275, "jss" ],
-[ 7276, "jst" ],
-[ 7277, "jsu" ],
-[ 7278, "jsv" ],
-[ 7279, "jsw" ],
-[ 7280, "jsx" ],
-[ 7281, "jsy" ],
-[ 7282, "jsz" ],
-[ 7283, "jta" ],
-[ 7284, "jtb" ],
-[ 7285, "jtc" ],
-[ 7286, "jtd" ],
-[ 7287, "jte" ],
-[ 7288, "jtf" ],
-[ 7289, "jtg" ],
-[ 7290, "jth" ],
-[ 7291, "jti" ],
-[ 7292, "jtj" ],
-[ 7293, "jtk" ],
-[ 7294, "jtl" ],
-[ 7295, "jtm" ],
-[ 7296, "jtn" ],
-[ 7297, "jto" ],
-[ 7298, "jtp" ],
-[ 7299, "jtq" ],
-[ 7300, "jtr" ],
-[ 7301, "jts" ],
-[ 7302, "jtt" ],
-[ 7303, "jtu" ],
-[ 7304, "jtv" ],
-[ 7305, "jtw" ],
-[ 7306, "jtx" ],
-[ 7307, "jty" ],
-[ 7308, "jtz" ],
-[ 7309, "jua" ],
-[ 7310, "jub" ],
-[ 7311, "juc" ],
-[ 7312, "jud" ],
-[ 7313, "jue" ],
-[ 7314, "juf" ],
-[ 7315, "jug" ],
-[ 7316, "juh" ],
-[ 7317, "jui" ],
-[ 7318, "juj" ],
-[ 7319, "juk" ],
-[ 7320, "jul" ],
-[ 7321, "jum" ],
-[ 7322, "jun" ],
-[ 7323, "juo" ],
-[ 7324, "jup" ],
-[ 7325, "juq" ],
-[ 7326, "jur" ],
-[ 7327, "jus" ],
-[ 7328, "jut" ],
-[ 7329, "juu" ],
-[ 7330, "juv" ],
-[ 7331, "juw" ],
-[ 7332, "jux" ],
-[ 7333, "juy" ],
-[ 7334, "juz" ],
-[ 7335, "jva" ],
-[ 7336, "jvb" ],
-[ 7337, "jvc" ],
-[ 7338, "jvd" ],
-[ 7339, "jve" ],
-[ 7340, "jvf" ],
-[ 7341, "jvg" ],
-[ 7342, "jvh" ],
-[ 7343, "jvi" ],
-[ 7344, "jvj" ],
-[ 7345, "jvk" ],
-[ 7346, "jvl" ],
-[ 7347, "jvm" ],
-[ 7348, "jvn" ],
-[ 7349, "jvo" ],
-[ 7350, "jvp" ],
-[ 7351, "jvq" ],
-[ 7352, "jvr" ],
-[ 7353, "jvs" ],
-[ 7354, "jvt" ],
-[ 7355, "jvu" ],
-[ 7356, "jvv" ],
-[ 7357, "jvw" ],
-[ 7358, "jvx" ],
-[ 7359, "jvy" ],
-[ 7360, "jvz" ],
-[ 7361, "jwa" ],
-[ 7362, "jwb" ],
-[ 7363, "jwc" ],
-[ 7364, "jwd" ],
-[ 7365, "jwe" ],
-[ 7366, "jwf" ],
-[ 7367, "jwg" ],
-[ 7368, "jwh" ],
-[ 7369, "jwi" ],
-[ 7370, "jwj" ],
-[ 7371, "jwk" ],
-[ 7372, "jwl" ],
-[ 7373, "jwm" ],
-[ 7374, "jwn" ],
-[ 7375, "jwo" ],
-[ 7376, "jwp" ],
-[ 7377, "jwq" ],
-[ 7378, "jwr" ],
-[ 7379, "jws" ],
-[ 7380, "jwt" ],
-[ 7381, "jwu" ],
-[ 7382, "jwv" ],
-[ 7383, "jww" ],
-[ 7384, "jwx" ],
-[ 7385, "jwy" ],
-[ 7386, "jwz" ],
-[ 7387, "jxa" ],
-[ 7388, "jxb" ],
-[ 7389, "jxc" ],
-[ 7390, "jxd" ],
-[ 7391, "jxe" ],
-[ 7392, "jxf" ],
-[ 7393, "jxg" ],
-[ 7394, "jxh" ],
-[ 7395, "jxi" ],
-[ 7396, "jxj" ],
-[ 7397, "jxk" ],
-[ 7398, "jxl" ],
-[ 7399, "jxm" ],
-[ 7400, "jxn" ],
-[ 7401, "jxo" ],
-[ 7402, "jxp" ],
-[ 7403, "jxq" ],
-[ 7404, "jxr" ],
-[ 7405, "jxs" ],
-[ 7406, "jxt" ],
-[ 7407, "jxu" ],
-[ 7408, "jxv" ],
-[ 7409, "jxw" ],
-[ 7410, "jxx" ],
-[ 7411, "jxy" ],
-[ 7412, "jxz" ],
-[ 7413, "jya" ],
-[ 7414, "jyb" ],
-[ 7415, "jyc" ],
-[ 7416, "jyd" ],
-[ 7417, "jye" ],
-[ 7418, "jyf" ],
-[ 7419, "jyg" ],
-[ 7420, "jyh" ],
-[ 7421, "jyi" ],
-[ 7422, "jyj" ],
-[ 7423, "jyk" ],
-[ 7424, "jyl" ],
-[ 7425, "jym" ],
-[ 7426, "jyn" ],
-[ 7427, "jyo" ],
-[ 7428, "jyp" ],
-[ 7429, "jyq" ],
-[ 7430, "jyr" ],
-[ 7431, "jys" ],
-[ 7432, "jyt" ],
-[ 7433, "jyu" ],
-[ 7434, "jyv" ],
-[ 7435, "jyw" ],
-[ 7436, "jyx" ],
-[ 7437, "jyy" ],
-[ 7438, "jyz" ],
-[ 7439, "jza" ],
-[ 7440, "jzb" ],
-[ 7441, "jzc" ],
-[ 7442, "jzd" ],
-[ 7443, "jze" ],
-[ 7444, "jzf" ],
-[ 7445, "jzg" ],
-[ 7446, "jzh" ],
-[ 7447, "jzi" ],
-[ 7448, "jzj" ],
-[ 7449, "jzk" ],
-[ 7450, "jzl" ],
-[ 7451, "jzm" ],
-[ 7452, "jzn" ],
-[ 7453, "jzo" ],
-[ 7454, "jzp" ],
-[ 7455, "jzq" ],
-[ 7456, "jzr" ],
-[ 7457, "jzs" ],
-[ 7458, "jzt" ],
-[ 7459, "jzu" ],
-[ 7460, "jzv" ],
-[ 7461, "jzw" ],
-[ 7462, "jzx" ],
-[ 7463, "jzy" ],
-[ 7464, "jzz" ],
-[ 7465, "kaa" ],
-[ 7466, "kab" ],
-[ 7467, "kac" ],
-[ 7468, "kad" ],
-[ 7469, "kae" ],
-[ 7470, "kaf" ],
-[ 7471, "kag" ],
-[ 7472, "kah" ],
-[ 7473, "kai" ],
-[ 7474, "kaj" ],
-[ 7475, "kak" ],
-[ 7476, "kal" ],
-[ 7477, "kam" ],
-[ 7478, "kan" ],
-[ 7479, "kao" ],
-[ 7480, "kap" ],
-[ 7481, "kaq" ],
-[ 7482, "kar" ],
-[ 7483, "kas" ],
-[ 7484, "kat" ],
-[ 7485, "kau" ],
-[ 7486, "kav" ],
-[ 7487, "kaw" ],
-[ 7488, "kax" ],
-[ 7489, "kay" ],
-[ 7490, "kaz" ],
-[ 7491, "kba" ],
-[ 7492, "kbb" ],
-[ 7493, "kbc" ],
-[ 7494, "kbd" ],
-[ 7495, "kbe" ],
-[ 7496, "kbf" ],
-[ 7497, "kbg" ],
-[ 7498, "kbh" ],
-[ 7499, "kbi" ],
-[ 7500, "kbj" ],
-[ 7501, "kbk" ],
-[ 7502, "kbl" ],
-[ 7503, "kbm" ],
-[ 7504, "kbn" ],
-[ 7505, "kbo" ],
-[ 7506, "kbp" ],
-[ 7507, "kbq" ],
-[ 7508, "kbr" ],
-[ 7509, "kbs" ],
-[ 7510, "kbt" ],
-[ 7511, "kbu" ],
-[ 7512, "kbv" ],
-[ 7513, "kbw" ],
-[ 7514, "kbx" ],
-[ 7515, "kby" ],
-[ 7516, "kbz" ],
-[ 7517, "kca" ],
-[ 7518, "kcb" ],
-[ 7519, "kcc" ],
-[ 7520, "kcd" ],
-[ 7521, "kce" ],
-[ 7522, "kcf" ],
-[ 7523, "kcg" ],
-[ 7524, "kch" ],
-[ 7525, "kci" ],
-[ 7526, "kcj" ],
-[ 7527, "kck" ],
-[ 7528, "kcl" ],
-[ 7529, "kcm" ],
-[ 7530, "kcn" ],
-[ 7531, "kco" ],
-[ 7532, "kcp" ],
-[ 7533, "kcq" ],
-[ 7534, "kcr" ],
-[ 7535, "kcs" ],
-[ 7536, "kct" ],
-[ 7537, "kcu" ],
-[ 7538, "kcv" ],
-[ 7539, "kcw" ],
-[ 7540, "kcx" ],
-[ 7541, "kcy" ],
-[ 7542, "kcz" ],
-[ 7543, "kda" ],
-[ 7544, "kdb" ],
-[ 7545, "kdc" ],
-[ 7546, "kdd" ],
-[ 7547, "kde" ],
-[ 7548, "kdf" ],
-[ 7549, "kdg" ],
-[ 7550, "kdh" ],
-[ 7551, "kdi" ],
-[ 7552, "kdj" ],
-[ 7553, "kdk" ],
-[ 7554, "kdl" ],
-[ 7555, "kdm" ],
-[ 7556, "kdn" ],
-[ 7557, "kdo" ],
-[ 7558, "kdp" ],
-[ 7559, "kdq" ],
-[ 7560, "kdr" ],
-[ 7561, "kds" ],
-[ 7562, "kdt" ],
-[ 7563, "kdu" ],
-[ 7564, "kdv" ],
-[ 7565, "kdw" ],
-[ 7566, "kdx" ],
-[ 7567, "kdy" ],
-[ 7568, "kdz" ],
-[ 7569, "kea" ],
-[ 7570, "keb" ],
-[ 7571, "kec" ],
-[ 7572, "ked" ],
-[ 7573, "kee" ],
-[ 7574, "kef" ],
-[ 7575, "keg" ],
-[ 7576, "keh" ],
-[ 7577, "kei" ],
-[ 7578, "kej" ],
-[ 7579, "kek" ],
-[ 7580, "kel" ],
-[ 7581, "kem" ],
-[ 7582, "ken" ],
-[ 7583, "keo" ],
-[ 7584, "kep" ],
-[ 7585, "keq" ],
-[ 7586, "ker" ],
-[ 7587, "kes" ],
-[ 7588, "ket" ],
-[ 7589, "keu" ],
-[ 7590, "kev" ],
-[ 7591, "kew" ],
-[ 7592, "kex" ],
-[ 7593, "key" ],
-[ 7594, "kez" ],
-[ 7595, "kfa" ],
-[ 7596, "kfb" ],
-[ 7597, "kfc" ],
-[ 7598, "kfd" ],
-[ 7599, "kfe" ],
-[ 7600, "kff" ],
-[ 7601, "kfg" ],
-[ 7602, "kfh" ],
-[ 7603, "kfi" ],
-[ 7604, "kfj" ],
-[ 7605, "kfk" ],
-[ 7606, "kfl" ],
-[ 7607, "kfm" ],
-[ 7608, "kfn" ],
-[ 7609, "kfo" ],
-[ 7610, "kfp" ],
-[ 7611, "kfq" ],
-[ 7612, "kfr" ],
-[ 7613, "kfs" ],
-[ 7614, "kft" ],
-[ 7615, "kfu" ],
-[ 7616, "kfv" ],
-[ 7617, "kfw" ],
-[ 7618, "kfx" ],
-[ 7619, "kfy" ],
-[ 7620, "kfz" ],
-[ 7621, "kga" ],
-[ 7622, "kgb" ],
-[ 7623, "kgc" ],
-[ 7624, "kgd" ],
-[ 7625, "kge" ],
-[ 7626, "kgf" ],
-[ 7627, "kgg" ],
-[ 7628, "kgh" ],
-[ 7629, "kgi" ],
-[ 7630, "kgj" ],
-[ 7631, "kgk" ],
-[ 7632, "kgl" ],
-[ 7633, "kgm" ],
-[ 7634, "kgn" ],
-[ 7635, "kgo" ],
-[ 7636, "kgp" ],
-[ 7637, "kgq" ],
-[ 7638, "kgr" ],
-[ 7639, "kgs" ],
-[ 7640, "kgt" ],
-[ 7641, "kgu" ],
-[ 7642, "kgv" ],
-[ 7643, "kgw" ],
-[ 7644, "kgx" ],
-[ 7645, "kgy" ],
-[ 7646, "kgz" ],
-[ 7647, "kha" ],
-[ 7648, "khb" ],
-[ 7649, "khc" ],
-[ 7650, "khd" ],
-[ 7651, "khe" ],
-[ 7652, "khf" ],
-[ 7653, "khg" ],
-[ 7654, "khh" ],
-[ 7655, "khi" ],
-[ 7656, "khj" ],
-[ 7657, "khk" ],
-[ 7658, "khl" ],
-[ 7659, "khm" ],
-[ 7660, "khn" ],
-[ 7661, "kho" ],
-[ 7662, "khp" ],
-[ 7663, "khq" ],
-[ 7664, "khr" ],
-[ 7665, "khs" ],
-[ 7666, "kht" ],
-[ 7667, "khu" ],
-[ 7668, "khv" ],
-[ 7669, "khw" ],
-[ 7670, "khx" ],
-[ 7671, "khy" ],
-[ 7672, "khz" ],
-[ 7673, "kia" ],
-[ 7674, "kib" ],
-[ 7675, "kic" ],
-[ 7676, "kid" ],
-[ 7677, "kie" ],
-[ 7678, "kif" ],
-[ 7679, "kig" ],
-[ 7680, "kih" ],
-[ 7681, "kii" ],
-[ 7682, "kij" ],
-[ 7683, "kik" ],
-[ 7684, "kil" ],
-[ 7685, "kim" ],
-[ 7686, "kin" ],
-[ 7687, "kio" ],
-[ 7688, "kip" ],
-[ 7689, "kiq" ],
-[ 7690, "kir" ],
-[ 7691, "kis" ],
-[ 7692, "kit" ],
-[ 7693, "kiu" ],
-[ 7694, "kiv" ],
-[ 7695, "kiw" ],
-[ 7696, "kix" ],
-[ 7697, "kiy" ],
-[ 7698, "kiz" ],
-[ 7699, "kja" ],
-[ 7700, "kjb" ],
-[ 7701, "kjc" ],
-[ 7702, "kjd" ],
-[ 7703, "kje" ],
-[ 7704, "kjf" ],
-[ 7705, "kjg" ],
-[ 7706, "kjh" ],
-[ 7707, "kji" ],
-[ 7708, "kjj" ],
-[ 7709, "kjk" ],
-[ 7710, "kjl" ],
-[ 7711, "kjm" ],
-[ 7712, "kjn" ],
-[ 7713, "kjo" ],
-[ 7714, "kjp" ],
-[ 7715, "kjq" ],
-[ 7716, "kjr" ],
-[ 7717, "kjs" ],
-[ 7718, "kjt" ],
-[ 7719, "kju" ],
-[ 7720, "kjv" ],
-[ 7721, "kjw" ],
-[ 7722, "kjx" ],
-[ 7723, "kjy" ],
-[ 7724, "kjz" ],
-[ 7725, "kka" ],
-[ 7726, "kkb" ],
-[ 7727, "kkc" ],
-[ 7728, "kkd" ],
-[ 7729, "kke" ],
-[ 7730, "kkf" ],
-[ 7731, "kkg" ],
-[ 7732, "kkh" ],
-[ 7733, "kki" ],
-[ 7734, "kkj" ],
-[ 7735, "kkk" ],
-[ 7736, "kkl" ],
-[ 7737, "kkm" ],
-[ 7738, "kkn" ],
-[ 7739, "kko" ],
-[ 7740, "kkp" ],
-[ 7741, "kkq" ],
-[ 7742, "kkr" ],
-[ 7743, "kks" ],
-[ 7744, "kkt" ],
-[ 7745, "kku" ],
-[ 7746, "kkv" ],
-[ 7747, "kkw" ],
-[ 7748, "kkx" ],
-[ 7749, "kky" ],
-[ 7750, "kkz" ],
-[ 7751, "kla" ],
-[ 7752, "klb" ],
-[ 7753, "klc" ],
-[ 7754, "kld" ],
-[ 7755, "kle" ],
-[ 7756, "klf" ],
-[ 7757, "klg" ],
-[ 7758, "klh" ],
-[ 7759, "kli" ],
-[ 7760, "klj" ],
-[ 7761, "klk" ],
-[ 7762, "kll" ],
-[ 7763, "klm" ],
-[ 7764, "kln" ],
-[ 7765, "klo" ],
-[ 7766, "klp" ],
-[ 7767, "klq" ],
-[ 7768, "klr" ],
-[ 7769, "kls" ],
-[ 7770, "klt" ],
-[ 7771, "klu" ],
-[ 7772, "klv" ],
-[ 7773, "klw" ],
-[ 7774, "klx" ],
-[ 7775, "kly" ],
-[ 7776, "klz" ],
-[ 7777, "kma" ],
-[ 7778, "kmb" ],
-[ 7779, "kmc" ],
-[ 7780, "kmd" ],
-[ 7781, "kme" ],
-[ 7782, "kmf" ],
-[ 7783, "kmg" ],
-[ 7784, "kmh" ],
-[ 7785, "kmi" ],
-[ 7786, "kmj" ],
-[ 7787, "kmk" ],
-[ 7788, "kml" ],
-[ 7789, "kmm" ],
-[ 7790, "kmn" ],
-[ 7791, "kmo" ],
-[ 7792, "kmp" ],
-[ 7793, "kmq" ],
-[ 7794, "kmr" ],
-[ 7795, "kms" ],
-[ 7796, "kmt" ],
-[ 7797, "kmu" ],
-[ 7798, "kmv" ],
-[ 7799, "kmw" ],
-[ 7800, "kmx" ],
-[ 7801, "kmy" ],
-[ 7802, "kmz" ],
-[ 7803, "kna" ],
-[ 7804, "knb" ],
-[ 7805, "knc" ],
-[ 7806, "knd" ],
-[ 7807, "kne" ],
-[ 7808, "knf" ],
-[ 7809, "kng" ],
-[ 7810, "knh" ],
-[ 7811, "kni" ],
-[ 7812, "knj" ],
-[ 7813, "knk" ],
-[ 7814, "knl" ],
-[ 7815, "knm" ],
-[ 7816, "knn" ],
-[ 7817, "kno" ],
-[ 7818, "knp" ],
-[ 7819, "knq" ],
-[ 7820, "knr" ],
-[ 7821, "kns" ],
-[ 7822, "knt" ],
-[ 7823, "knu" ],
-[ 7824, "knv" ],
-[ 7825, "knw" ],
-[ 7826, "knx" ],
-[ 7827, "kny" ],
-[ 7828, "knz" ],
-[ 7829, "koa" ],
-[ 7830, "kob" ],
-[ 7831, "koc" ],
-[ 7832, "kod" ],
-[ 7833, "koe" ],
-[ 7834, "kof" ],
-[ 7835, "kog" ],
-[ 7836, "koh" ],
-[ 7837, "koi" ],
-[ 7838, "koj" ],
-[ 7839, "kok" ],
-[ 7840, "kol" ],
-[ 7841, "kom" ],
-[ 7842, "kon" ],
-[ 7843, "koo" ],
-[ 7844, "kop" ],
-[ 7845, "koq" ],
-[ 7846, "kor" ],
-[ 7847, "kos" ],
-[ 7848, "kot" ],
-[ 7849, "kou" ],
-[ 7850, "kov" ],
-[ 7851, "kow" ],
-[ 7852, "kox" ],
-[ 7853, "koy" ],
-[ 7854, "koz" ],
-[ 7855, "kpa" ],
-[ 7856, "kpb" ],
-[ 7857, "kpc" ],
-[ 7858, "kpd" ],
-[ 7859, "kpe" ],
-[ 7860, "kpf" ],
-[ 7861, "kpg" ],
-[ 7862, "kph" ],
-[ 7863, "kpi" ],
-[ 7864, "kpj" ],
-[ 7865, "kpk" ],
-[ 7866, "kpl" ],
-[ 7867, "kpm" ],
-[ 7868, "kpn" ],
-[ 7869, "kpo" ],
-[ 7870, "kpp" ],
-[ 7871, "kpq" ],
-[ 7872, "kpr" ],
-[ 7873, "kps" ],
-[ 7874, "kpt" ],
-[ 7875, "kpu" ],
-[ 7876, "kpv" ],
-[ 7877, "kpw" ],
-[ 7878, "kpx" ],
-[ 7879, "kpy" ],
-[ 7880, "kpz" ],
-[ 7881, "kqa" ],
-[ 7882, "kqb" ],
-[ 7883, "kqc" ],
-[ 7884, "kqd" ],
-[ 7885, "kqe" ],
-[ 7886, "kqf" ],
-[ 7887, "kqg" ],
-[ 7888, "kqh" ],
-[ 7889, "kqi" ],
-[ 7890, "kqj" ],
-[ 7891, "kqk" ],
-[ 7892, "kql" ],
-[ 7893, "kqm" ],
-[ 7894, "kqn" ],
-[ 7895, "kqo" ],
-[ 7896, "kqp" ],
-[ 7897, "kqq" ],
-[ 7898, "kqr" ],
-[ 7899, "kqs" ],
-[ 7900, "kqt" ],
-[ 7901, "kqu" ],
-[ 7902, "kqv" ],
-[ 7903, "kqw" ],
-[ 7904, "kqx" ],
-[ 7905, "kqy" ],
-[ 7906, "kqz" ],
-[ 7907, "kra" ],
-[ 7908, "krb" ],
-[ 7909, "krc" ],
-[ 7910, "krd" ],
-[ 7911, "kre" ],
-[ 7912, "krf" ],
-[ 7913, "krg" ],
-[ 7914, "krh" ],
-[ 7915, "kri" ],
-[ 7916, "krj" ],
-[ 7917, "krk" ],
-[ 7918, "krl" ],
-[ 7919, "krm" ],
-[ 7920, "krn" ],
-[ 7921, "kro" ],
-[ 7922, "krp" ],
-[ 7923, "krq" ],
-[ 7924, "krr" ],
-[ 7925, "krs" ],
-[ 7926, "krt" ],
-[ 7927, "kru" ],
-[ 7928, "krv" ],
-[ 7929, "krw" ],
-[ 7930, "krx" ],
-[ 7931, "kry" ],
-[ 7932, "krz" ],
-[ 7933, "ksa" ],
-[ 7934, "ksb" ],
-[ 7935, "ksc" ],
-[ 7936, "ksd" ],
-[ 7937, "kse" ],
-[ 7938, "ksf" ],
-[ 7939, "ksg" ],
-[ 7940, "ksh" ],
-[ 7941, "ksi" ],
-[ 7942, "ksj" ],
-[ 7943, "ksk" ],
-[ 7944, "ksl" ],
-[ 7945, "ksm" ],
-[ 7946, "ksn" ],
-[ 7947, "kso" ],
-[ 7948, "ksp" ],
-[ 7949, "ksq" ],
-[ 7950, "ksr" ],
-[ 7951, "kss" ],
-[ 7952, "kst" ],
-[ 7953, "ksu" ],
-[ 7954, "ksv" ],
-[ 7955, "ksw" ],
-[ 7956, "ksx" ],
-[ 7957, "ksy" ],
-[ 7958, "ksz" ],
-[ 7959, "kta" ],
-[ 7960, "ktb" ],
-[ 7961, "ktc" ],
-[ 7962, "ktd" ],
-[ 7963, "kte" ],
-[ 7964, "ktf" ],
-[ 7965, "ktg" ],
-[ 7966, "kth" ],
-[ 7967, "kti" ],
-[ 7968, "ktj" ],
-[ 7969, "ktk" ],
-[ 7970, "ktl" ],
-[ 7971, "ktm" ],
-[ 7972, "ktn" ],
-[ 7973, "kto" ],
-[ 7974, "ktp" ],
-[ 7975, "ktq" ],
-[ 7976, "ktr" ],
-[ 7977, "kts" ],
-[ 7978, "ktt" ],
-[ 7979, "ktu" ],
-[ 7980, "ktv" ],
-[ 7981, "ktw" ],
-[ 7982, "ktx" ],
-[ 7983, "kty" ],
-[ 7984, "ktz" ],
-[ 7985, "kua" ],
-[ 7986, "kub" ],
-[ 7987, "kuc" ],
-[ 7988, "kud" ],
-[ 7989, "kue" ],
-[ 7990, "kuf" ],
-[ 7991, "kug" ],
-[ 7992, "kuh" ],
-[ 7993, "kui" ],
-[ 7994, "kuj" ],
-[ 7995, "kuk" ],
-[ 7996, "kul" ],
-[ 7997, "kum" ],
-[ 7998, "kun" ],
-[ 7999, "kuo" ],
-[ 8000, "kup" ],
-[ 8001, "kuq" ],
-[ 8002, "kur" ],
-[ 8003, "kus" ],
-[ 8004, "kut" ],
-[ 8005, "kuu" ],
-[ 8006, "kuv" ],
-[ 8007, "kuw" ],
-[ 8008, "kux" ],
-[ 8009, "kuy" ],
-[ 8010, "kuz" ],
-[ 8011, "kva" ],
-[ 8012, "kvb" ],
-[ 8013, "kvc" ],
-[ 8014, "kvd" ],
-[ 8015, "kve" ],
-[ 8016, "kvf" ],
-[ 8017, "kvg" ],
-[ 8018, "kvh" ],
-[ 8019, "kvi" ],
-[ 8020, "kvj" ],
-[ 8021, "kvk" ],
-[ 8022, "kvl" ],
-[ 8023, "kvm" ],
-[ 8024, "kvn" ],
-[ 8025, "kvo" ],
-[ 8026, "kvp" ],
-[ 8027, "kvq" ],
-[ 8028, "kvr" ],
-[ 8029, "kvs" ],
-[ 8030, "kvt" ],
-[ 8031, "kvu" ],
-[ 8032, "kvv" ],
-[ 8033, "kvw" ],
-[ 8034, "kvx" ],
-[ 8035, "kvy" ],
-[ 8036, "kvz" ],
-[ 8037, "kwa" ],
-[ 8038, "kwb" ],
-[ 8039, "kwc" ],
-[ 8040, "kwd" ],
-[ 8041, "kwe" ],
-[ 8042, "kwf" ],
-[ 8043, "kwg" ],
-[ 8044, "kwh" ],
-[ 8045, "kwi" ],
-[ 8046, "kwj" ],
-[ 8047, "kwk" ],
-[ 8048, "kwl" ],
-[ 8049, "kwm" ],
-[ 8050, "kwn" ],
-[ 8051, "kwo" ],
-[ 8052, "kwp" ],
-[ 8053, "kwq" ],
-[ 8054, "kwr" ],
-[ 8055, "kws" ],
-[ 8056, "kwt" ],
-[ 8057, "kwu" ],
-[ 8058, "kwv" ],
-[ 8059, "kww" ],
-[ 8060, "kwx" ],
-[ 8061, "kwy" ],
-[ 8062, "kwz" ],
-[ 8063, "kxa" ],
-[ 8064, "kxb" ],
-[ 8065, "kxc" ],
-[ 8066, "kxd" ],
-[ 8067, "kxe" ],
-[ 8068, "kxf" ],
-[ 8069, "kxg" ],
-[ 8070, "kxh" ],
-[ 8071, "kxi" ],
-[ 8072, "kxj" ],
-[ 8073, "kxk" ],
-[ 8074, "kxl" ],
-[ 8075, "kxm" ],
-[ 8076, "kxn" ],
-[ 8077, "kxo" ],
-[ 8078, "kxp" ],
-[ 8079, "kxq" ],
-[ 8080, "kxr" ],
-[ 8081, "kxs" ],
-[ 8082, "kxt" ],
-[ 8083, "kxu" ],
-[ 8084, "kxv" ],
-[ 8085, "kxw" ],
-[ 8086, "kxx" ],
-[ 8087, "kxy" ],
-[ 8088, "kxz" ],
-[ 8089, "kya" ],
-[ 8090, "kyb" ],
-[ 8091, "kyc" ],
-[ 8092, "kyd" ],
-[ 8093, "kye" ],
-[ 8094, "kyf" ],
-[ 8095, "kyg" ],
-[ 8096, "kyh" ],
-[ 8097, "kyi" ],
-[ 8098, "kyj" ],
-[ 8099, "kyk" ],
-[ 8100, "kyl" ],
-[ 8101, "kym" ],
-[ 8102, "kyn" ],
-[ 8103, "kyo" ],
-[ 8104, "kyp" ],
-[ 8105, "kyq" ],
-[ 8106, "kyr" ],
-[ 8107, "kys" ],
-[ 8108, "kyt" ],
-[ 8109, "kyu" ],
-[ 8110, "kyv" ],
-[ 8111, "kyw" ],
-[ 8112, "kyx" ],
-[ 8113, "kyy" ],
-[ 8114, "kyz" ],
-[ 8115, "kza" ],
-[ 8116, "kzb" ],
-[ 8117, "kzc" ],
-[ 8118, "kzd" ],
-[ 8119, "kze" ],
-[ 8120, "kzf" ],
-[ 8121, "kzg" ],
-[ 8122, "kzh" ],
-[ 8123, "kzi" ],
-[ 8124, "kzj" ],
-[ 8125, "kzk" ],
-[ 8126, "kzl" ],
-[ 8127, "kzm" ],
-[ 8128, "kzn" ],
-[ 8129, "kzo" ],
-[ 8130, "kzp" ],
-[ 8131, "kzq" ],
-[ 8132, "kzr" ],
-[ 8133, "kzs" ],
-[ 8134, "kzt" ],
-[ 8135, "kzu" ],
-[ 8136, "kzv" ],
-[ 8137, "kzw" ],
-[ 8138, "kzx" ],
-[ 8139, "kzy" ],
-[ 8140, "kzz" ],
-[ 8141, "laa" ],
-[ 8142, "lab" ],
-[ 8143, "lac" ],
-[ 8144, "lad" ],
-[ 8145, "lae" ],
-[ 8146, "laf" ],
-[ 8147, "lag" ],
-[ 8148, "lah" ],
-[ 8149, "lai" ],
-[ 8150, "laj" ],
-[ 8151, "lak" ],
-[ 8152, "lal" ],
-[ 8153, "lam" ],
-[ 8154, "lan" ],
-[ 8155, "lao" ],
-[ 8156, "lap" ],
-[ 8157, "laq" ],
-[ 8158, "lar" ],
-[ 8159, "las" ],
-[ 8160, "lat" ],
-[ 8161, "lau" ],
-[ 8162, "lav" ],
-[ 8163, "law" ],
-[ 8164, "lax" ],
-[ 8165, "lay" ],
-[ 8166, "laz" ],
-[ 8167, "lba" ],
-[ 8168, "lbb" ],
-[ 8169, "lbc" ],
-[ 8170, "lbd" ],
-[ 8171, "lbe" ],
-[ 8172, "lbf" ],
-[ 8173, "lbg" ],
-[ 8174, "lbh" ],
-[ 8175, "lbi" ],
-[ 8176, "lbj" ],
-[ 8177, "lbk" ],
-[ 8178, "lbl" ],
-[ 8179, "lbm" ],
-[ 8180, "lbn" ],
-[ 8181, "lbo" ],
-[ 8182, "lbp" ],
-[ 8183, "lbq" ],
-[ 8184, "lbr" ],
-[ 8185, "lbs" ],
-[ 8186, "lbt" ],
-[ 8187, "lbu" ],
-[ 8188, "lbv" ],
-[ 8189, "lbw" ],
-[ 8190, "lbx" ],
-[ 8191, "lby" ],
-[ 8192, "lbz" ],
-[ 8193, "lca" ],
-[ 8194, "lcb" ],
-[ 8195, "lcc" ],
-[ 8196, "lcd" ],
-[ 8197, "lce" ],
-[ 8198, "lcf" ],
-[ 8199, "lcg" ],
-[ 8200, "lch" ],
-[ 8201, "lci" ],
-[ 8202, "lcj" ],
-[ 8203, "lck" ],
-[ 8204, "lcl" ],
-[ 8205, "lcm" ],
-[ 8206, "lcn" ],
-[ 8207, "lco" ],
-[ 8208, "lcp" ],
-[ 8209, "lcq" ],
-[ 8210, "lcr" ],
-[ 8211, "lcs" ],
-[ 8212, "lct" ],
-[ 8213, "lcu" ],
-[ 8214, "lcv" ],
-[ 8215, "lcw" ],
-[ 8216, "lcx" ],
-[ 8217, "lcy" ],
-[ 8218, "lcz" ],
-[ 8219, "lda" ],
-[ 8220, "ldb" ],
-[ 8221, "ldc" ],
-[ 8222, "ldd" ],
-[ 8223, "lde" ],
-[ 8224, "ldf" ],
-[ 8225, "ldg" ],
-[ 8226, "ldh" ],
-[ 8227, "ldi" ],
-[ 8228, "ldj" ],
-[ 8229, "ldk" ],
-[ 8230, "ldl" ],
-[ 8231, "ldm" ],
-[ 8232, "ldn" ],
-[ 8233, "ldo" ],
-[ 8234, "ldp" ],
-[ 8235, "ldq" ],
-[ 8236, "ldr" ],
-[ 8237, "lds" ],
-[ 8238, "ldt" ],
-[ 8239, "ldu" ],
-[ 8240, "ldv" ],
-[ 8241, "ldw" ],
-[ 8242, "ldx" ],
-[ 8243, "ldy" ],
-[ 8244, "ldz" ],
-[ 8245, "lea" ],
-[ 8246, "leb" ],
-[ 8247, "lec" ],
-[ 8248, "led" ],
-[ 8249, "lee" ],
-[ 8250, "lef" ],
-[ 8251, "leg" ],
-[ 8252, "leh" ],
-[ 8253, "lei" ],
-[ 8254, "lej" ],
-[ 8255, "lek" ],
-[ 8256, "lel" ],
-[ 8257, "lem" ],
-[ 8258, "len" ],
-[ 8259, "leo" ],
-[ 8260, "lep" ],
-[ 8261, "leq" ],
-[ 8262, "ler" ],
-[ 8263, "les" ],
-[ 8264, "let" ],
-[ 8265, "leu" ],
-[ 8266, "lev" ],
-[ 8267, "lew" ],
-[ 8268, "lex" ],
-[ 8269, "ley" ],
-[ 8270, "lez" ],
-[ 8271, "lfa" ],
-[ 8272, "lfb" ],
-[ 8273, "lfc" ],
-[ 8274, "lfd" ],
-[ 8275, "lfe" ],
-[ 8276, "lff" ],
-[ 8277, "lfg" ],
-[ 8278, "lfh" ],
-[ 8279, "lfi" ],
-[ 8280, "lfj" ],
-[ 8281, "lfk" ],
-[ 8282, "lfl" ],
-[ 8283, "lfm" ],
-[ 8284, "lfn" ],
-[ 8285, "lfo" ],
-[ 8286, "lfp" ],
-[ 8287, "lfq" ],
-[ 8288, "lfr" ],
-[ 8289, "lfs" ],
-[ 8290, "lft" ],
-[ 8291, "lfu" ],
-[ 8292, "lfv" ],
-[ 8293, "lfw" ],
-[ 8294, "lfx" ],
-[ 8295, "lfy" ],
-[ 8296, "lfz" ],
-[ 8297, "lga" ],
-[ 8298, "lgb" ],
-[ 8299, "lgc" ],
-[ 8300, "lgd" ],
-[ 8301, "lge" ],
-[ 8302, "lgf" ],
-[ 8303, "lgg" ],
-[ 8304, "lgh" ],
-[ 8305, "lgi" ],
-[ 8306, "lgj" ],
-[ 8307, "lgk" ],
-[ 8308, "lgl" ],
-[ 8309, "lgm" ],
-[ 8310, "lgn" ],
-[ 8311, "lgo" ],
-[ 8312, "lgp" ],
-[ 8313, "lgq" ],
-[ 8314, "lgr" ],
-[ 8315, "lgs" ],
-[ 8316, "lgt" ],
-[ 8317, "lgu" ],
-[ 8318, "lgv" ],
-[ 8319, "lgw" ],
-[ 8320, "lgx" ],
-[ 8321, "lgy" ],
-[ 8322, "lgz" ],
-[ 8323, "lha" ],
-[ 8324, "lhb" ],
-[ 8325, "lhc" ],
-[ 8326, "lhd" ],
-[ 8327, "lhe" ],
-[ 8328, "lhf" ],
-[ 8329, "lhg" ],
-[ 8330, "lhh" ],
-[ 8331, "lhi" ],
-[ 8332, "lhj" ],
-[ 8333, "lhk" ],
-[ 8334, "lhl" ],
-[ 8335, "lhm" ],
-[ 8336, "lhn" ],
-[ 8337, "lho" ],
-[ 8338, "lhp" ],
-[ 8339, "lhq" ],
-[ 8340, "lhr" ],
-[ 8341, "lhs" ],
-[ 8342, "lht" ],
-[ 8343, "lhu" ],
-[ 8344, "lhv" ],
-[ 8345, "lhw" ],
-[ 8346, "lhx" ],
-[ 8347, "lhy" ],
-[ 8348, "lhz" ],
-[ 8349, "lia" ],
-[ 8350, "lib" ],
-[ 8351, "lic" ],
-[ 8352, "lid" ],
-[ 8353, "lie" ],
-[ 8354, "lif" ],
-[ 8355, "lig" ],
-[ 8356, "lih" ],
-[ 8357, "lii" ],
-[ 8358, "lij" ],
-[ 8359, "lik" ],
-[ 8360, "lil" ],
-[ 8361, "lim" ],
-[ 8362, "lin" ],
-[ 8363, "lio" ],
-[ 8364, "lip" ],
-[ 8365, "liq" ],
-[ 8366, "lir" ],
-[ 8367, "lis" ],
-[ 8368, "lit" ],
-[ 8369, "liu" ],
-[ 8370, "liv" ],
-[ 8371, "liw" ],
-[ 8372, "lix" ],
-[ 8373, "liy" ],
-[ 8374, "liz" ],
-[ 8375, "lja" ],
-[ 8376, "ljb" ],
-[ 8377, "ljc" ],
-[ 8378, "ljd" ],
-[ 8379, "lje" ],
-[ 8380, "ljf" ],
-[ 8381, "ljg" ],
-[ 8382, "ljh" ],
-[ 8383, "lji" ],
-[ 8384, "ljj" ],
-[ 8385, "ljk" ],
-[ 8386, "ljl" ],
-[ 8387, "ljm" ],
-[ 8388, "ljn" ],
-[ 8389, "ljo" ],
-[ 8390, "ljp" ],
-[ 8391, "ljq" ],
-[ 8392, "ljr" ],
-[ 8393, "ljs" ],
-[ 8394, "ljt" ],
-[ 8395, "lju" ],
-[ 8396, "ljv" ],
-[ 8397, "ljw" ],
-[ 8398, "ljx" ],
-[ 8399, "ljy" ],
-[ 8400, "ljz" ],
-[ 8401, "lka" ],
-[ 8402, "lkb" ],
-[ 8403, "lkc" ],
-[ 8404, "lkd" ],
-[ 8405, "lke" ],
-[ 8406, "lkf" ],
-[ 8407, "lkg" ],
-[ 8408, "lkh" ],
-[ 8409, "lki" ],
-[ 8410, "lkj" ],
-[ 8411, "lkk" ],
-[ 8412, "lkl" ],
-[ 8413, "lkm" ],
-[ 8414, "lkn" ],
-[ 8415, "lko" ],
-[ 8416, "lkp" ],
-[ 8417, "lkq" ],
-[ 8418, "lkr" ],
-[ 8419, "lks" ],
-[ 8420, "lkt" ],
-[ 8421, "lku" ],
-[ 8422, "lkv" ],
-[ 8423, "lkw" ],
-[ 8424, "lkx" ],
-[ 8425, "lky" ],
-[ 8426, "lkz" ],
-[ 8427, "lla" ],
-[ 8428, "llb" ],
-[ 8429, "llc" ],
-[ 8430, "lld" ],
-[ 8431, "lle" ],
-[ 8432, "llf" ],
-[ 8433, "llg" ],
-[ 8434, "llh" ],
-[ 8435, "lli" ],
-[ 8436, "llj" ],
-[ 8437, "llk" ],
-[ 8438, "lll" ],
-[ 8439, "llm" ],
-[ 8440, "lln" ],
-[ 8441, "llo" ],
-[ 8442, "llp" ],
-[ 8443, "llq" ],
-[ 8444, "llr" ],
-[ 8445, "lls" ],
-[ 8446, "llt" ],
-[ 8447, "llu" ],
-[ 8448, "llv" ],
-[ 8449, "llw" ],
-[ 8450, "llx" ],
-[ 8451, "lly" ],
-[ 8452, "llz" ],
-[ 8453, "lma" ],
-[ 8454, "lmb" ],
-[ 8455, "lmc" ],
-[ 8456, "lmd" ],
-[ 8457, "lme" ],
-[ 8458, "lmf" ],
-[ 8459, "lmg" ],
-[ 8460, "lmh" ],
-[ 8461, "lmi" ],
-[ 8462, "lmj" ],
-[ 8463, "lmk" ],
-[ 8464, "lml" ],
-[ 8465, "lmm" ],
-[ 8466, "lmn" ],
-[ 8467, "lmo" ],
-[ 8468, "lmp" ],
-[ 8469, "lmq" ],
-[ 8470, "lmr" ],
-[ 8471, "lms" ],
-[ 8472, "lmt" ],
-[ 8473, "lmu" ],
-[ 8474, "lmv" ],
-[ 8475, "lmw" ],
-[ 8476, "lmx" ],
-[ 8477, "lmy" ],
-[ 8478, "lmz" ],
-[ 8479, "lna" ],
-[ 8480, "lnb" ],
-[ 8481, "lnc" ],
-[ 8482, "lnd" ],
-[ 8483, "lne" ],
-[ 8484, "lnf" ],
-[ 8485, "lng" ],
-[ 8486, "lnh" ],
-[ 8487, "lni" ],
-[ 8488, "lnj" ],
-[ 8489, "lnk" ],
-[ 8490, "lnl" ],
-[ 8491, "lnm" ],
-[ 8492, "lnn" ],
-[ 8493, "lno" ],
-[ 8494, "lnp" ],
-[ 8495, "lnq" ],
-[ 8496, "lnr" ],
-[ 8497, "lns" ],
-[ 8498, "lnt" ],
-[ 8499, "lnu" ],
-[ 8500, "lnv" ],
-[ 8501, "lnw" ],
-[ 8502, "lnx" ],
-[ 8503, "lny" ],
-[ 8504, "lnz" ],
-[ 8505, "loa" ],
-[ 8506, "lob" ],
-[ 8507, "loc" ],
-[ 8508, "lod" ],
-[ 8509, "loe" ],
-[ 8510, "lof" ],
-[ 8511, "log" ],
-[ 8512, "loh" ],
-[ 8513, "loi" ],
-[ 8514, "loj" ],
-[ 8515, "lok" ],
-[ 8516, "lol" ],
-[ 8517, "lom" ],
-[ 8518, "lon" ],
-[ 8519, "loo" ],
-[ 8520, "lop" ],
-[ 8521, "loq" ],
-[ 8522, "lor" ],
-[ 8523, "los" ],
-[ 8524, "lot" ],
-[ 8525, "lou" ],
-[ 8526, "lov" ],
-[ 8527, "low" ],
-[ 8528, "lox" ],
-[ 8529, "loy" ],
-[ 8530, "loz" ],
-[ 8531, "lpa" ],
-[ 8532, "lpb" ],
-[ 8533, "lpc" ],
-[ 8534, "lpd" ],
-[ 8535, "lpe" ],
-[ 8536, "lpf" ],
-[ 8537, "lpg" ],
-[ 8538, "lph" ],
-[ 8539, "lpi" ],
-[ 8540, "lpj" ],
-[ 8541, "lpk" ],
-[ 8542, "lpl" ],
-[ 8543, "lpm" ],
-[ 8544, "lpn" ],
-[ 8545, "lpo" ],
-[ 8546, "lpp" ],
-[ 8547, "lpq" ],
-[ 8548, "lpr" ],
-[ 8549, "lps" ],
-[ 8550, "lpt" ],
-[ 8551, "lpu" ],
-[ 8552, "lpv" ],
-[ 8553, "lpw" ],
-[ 8554, "lpx" ],
-[ 8555, "lpy" ],
-[ 8556, "lpz" ],
-[ 8557, "lqa" ],
-[ 8558, "lqb" ],
-[ 8559, "lqc" ],
-[ 8560, "lqd" ],
-[ 8561, "lqe" ],
-[ 8562, "lqf" ],
-[ 8563, "lqg" ],
-[ 8564, "lqh" ],
-[ 8565, "lqi" ],
-[ 8566, "lqj" ],
-[ 8567, "lqk" ],
-[ 8568, "lql" ],
-[ 8569, "lqm" ],
-[ 8570, "lqn" ],
-[ 8571, "lqo" ],
-[ 8572, "lqp" ],
-[ 8573, "lqq" ],
-[ 8574, "lqr" ],
-[ 8575, "lqs" ],
-[ 8576, "lqt" ],
-[ 8577, "lqu" ],
-[ 8578, "lqv" ],
-[ 8579, "lqw" ],
-[ 8580, "lqx" ],
-[ 8581, "lqy" ],
-[ 8582, "lqz" ],
-[ 8583, "lra" ],
-[ 8584, "lrb" ],
-[ 8585, "lrc" ],
-[ 8586, "lrd" ],
-[ 8587, "lre" ],
-[ 8588, "lrf" ],
-[ 8589, "lrg" ],
-[ 8590, "lrh" ],
-[ 8591, "lri" ],
-[ 8592, "lrj" ],
-[ 8593, "lrk" ],
-[ 8594, "lrl" ],
-[ 8595, "lrm" ],
-[ 8596, "lrn" ],
-[ 8597, "lro" ],
-[ 8598, "lrp" ],
-[ 8599, "lrq" ],
-[ 8600, "lrr" ],
-[ 8601, "lrs" ],
-[ 8602, "lrt" ],
-[ 8603, "lru" ],
-[ 8604, "lrv" ],
-[ 8605, "lrw" ],
-[ 8606, "lrx" ],
-[ 8607, "lry" ],
-[ 8608, "lrz" ],
-[ 8609, "lsa" ],
-[ 8610, "lsb" ],
-[ 8611, "lsc" ],
-[ 8612, "lsd" ],
-[ 8613, "lse" ],
-[ 8614, "lsf" ],
-[ 8615, "lsg" ],
-[ 8616, "lsh" ],
-[ 8617, "lsi" ],
-[ 8618, "lsj" ],
-[ 8619, "lsk" ],
-[ 8620, "lsl" ],
-[ 8621, "lsm" ],
-[ 8622, "lsn" ],
-[ 8623, "lso" ],
-[ 8624, "lsp" ],
-[ 8625, "lsq" ],
-[ 8626, "lsr" ],
-[ 8627, "lss" ],
-[ 8628, "lst" ],
-[ 8629, "lsu" ],
-[ 8630, "lsv" ],
-[ 8631, "lsw" ],
-[ 8632, "lsx" ],
-[ 8633, "lsy" ],
-[ 8634, "lsz" ],
-[ 8635, "lta" ],
-[ 8636, "ltb" ],
-[ 8637, "ltc" ],
-[ 8638, "ltd" ],
-[ 8639, "lte" ],
-[ 8640, "ltf" ],
-[ 8641, "ltg" ],
-[ 8642, "lth" ],
-[ 8643, "lti" ],
-[ 8644, "ltj" ],
-[ 8645, "ltk" ],
-[ 8646, "ltl" ],
-[ 8647, "ltm" ],
-[ 8648, "ltn" ],
-[ 8649, "lto" ],
-[ 8650, "ltp" ],
-[ 8651, "ltq" ],
-[ 8652, "ltr" ],
-[ 8653, "lts" ],
-[ 8654, "ltt" ],
-[ 8655, "ltu" ],
-[ 8656, "ltv" ],
-[ 8657, "ltw" ],
-[ 8658, "ltx" ],
-[ 8659, "lty" ],
-[ 8660, "ltz" ],
-[ 8661, "lua" ],
-[ 8662, "lub" ],
-[ 8663, "luc" ],
-[ 8664, "lud" ],
-[ 8665, "lue" ],
-[ 8666, "luf" ],
-[ 8667, "lug" ],
-[ 8668, "luh" ],
-[ 8669, "lui" ],
-[ 8670, "luj" ],
-[ 8671, "luk" ],
-[ 8672, "lul" ],
-[ 8673, "lum" ],
-[ 8674, "lun" ],
-[ 8675, "luo" ],
-[ 8676, "lup" ],
-[ 8677, "luq" ],
-[ 8678, "lur" ],
-[ 8679, "lus" ],
-[ 8680, "lut" ],
-[ 8681, "luu" ],
-[ 8682, "luv" ],
-[ 8683, "luw" ],
-[ 8684, "lux" ],
-[ 8685, "luy" ],
-[ 8686, "luz" ],
-[ 8687, "lva" ],
-[ 8688, "lvb" ],
-[ 8689, "lvc" ],
-[ 8690, "lvd" ],
-[ 8691, "lve" ],
-[ 8692, "lvf" ],
-[ 8693, "lvg" ],
-[ 8694, "lvh" ],
-[ 8695, "lvi" ],
-[ 8696, "lvj" ],
-[ 8697, "lvk" ],
-[ 8698, "lvl" ],
-[ 8699, "lvm" ],
-[ 8700, "lvn" ],
-[ 8701, "lvo" ],
-[ 8702, "lvp" ],
-[ 8703, "lvq" ],
-[ 8704, "lvr" ],
-[ 8705, "lvs" ],
-[ 8706, "lvt" ],
-[ 8707, "lvu" ],
-[ 8708, "lvv" ],
-[ 8709, "lvw" ],
-[ 8710, "lvx" ],
-[ 8711, "lvy" ],
-[ 8712, "lvz" ],
-[ 8713, "lwa" ],
-[ 8714, "lwb" ],
-[ 8715, "lwc" ],
-[ 8716, "lwd" ],
-[ 8717, "lwe" ],
-[ 8718, "lwf" ],
-[ 8719, "lwg" ],
-[ 8720, "lwh" ],
-[ 8721, "lwi" ],
-[ 8722, "lwj" ],
-[ 8723, "lwk" ],
-[ 8724, "lwl" ],
-[ 8725, "lwm" ],
-[ 8726, "lwn" ],
-[ 8727, "lwo" ],
-[ 8728, "lwp" ],
-[ 8729, "lwq" ],
-[ 8730, "lwr" ],
-[ 8731, "lws" ],
-[ 8732, "lwt" ],
-[ 8733, "lwu" ],
-[ 8734, "lwv" ],
-[ 8735, "lww" ],
-[ 8736, "lwx" ],
-[ 8737, "lwy" ],
-[ 8738, "lwz" ],
-[ 8739, "lxa" ],
-[ 8740, "lxb" ],
-[ 8741, "lxc" ],
-[ 8742, "lxd" ],
-[ 8743, "lxe" ],
-[ 8744, "lxf" ],
-[ 8745, "lxg" ],
-[ 8746, "lxh" ],
-[ 8747, "lxi" ],
-[ 8748, "lxj" ],
-[ 8749, "lxk" ],
-[ 8750, "lxl" ],
-[ 8751, "lxm" ],
-[ 8752, "lxn" ],
-[ 8753, "lxo" ],
-[ 8754, "lxp" ],
-[ 8755, "lxq" ],
-[ 8756, "lxr" ],
-[ 8757, "lxs" ],
-[ 8758, "lxt" ],
-[ 8759, "lxu" ],
-[ 8760, "lxv" ],
-[ 8761, "lxw" ],
-[ 8762, "lxx" ],
-[ 8763, "lxy" ],
-[ 8764, "lxz" ],
-[ 8765, "lya" ],
-[ 8766, "lyb" ],
-[ 8767, "lyc" ],
-[ 8768, "lyd" ],
-[ 8769, "lye" ],
-[ 8770, "lyf" ],
-[ 8771, "lyg" ],
-[ 8772, "lyh" ],
-[ 8773, "lyi" ],
-[ 8774, "lyj" ],
-[ 8775, "lyk" ],
-[ 8776, "lyl" ],
-[ 8777, "lym" ],
-[ 8778, "lyn" ],
-[ 8779, "lyo" ],
-[ 8780, "lyp" ],
-[ 8781, "lyq" ],
-[ 8782, "lyr" ],
-[ 8783, "lys" ],
-[ 8784, "lyt" ],
-[ 8785, "lyu" ],
-[ 8786, "lyv" ],
-[ 8787, "lyw" ],
-[ 8788, "lyx" ],
-[ 8789, "lyy" ],
-[ 8790, "lyz" ],
-[ 8791, "lza" ],
-[ 8792, "lzb" ],
-[ 8793, "lzc" ],
-[ 8794, "lzd" ],
-[ 8795, "lze" ],
-[ 8796, "lzf" ],
-[ 8797, "lzg" ],
-[ 8798, "lzh" ],
-[ 8799, "lzi" ],
-[ 8800, "lzj" ],
-[ 8801, "lzk" ],
-[ 8802, "lzl" ],
-[ 8803, "lzm" ],
-[ 8804, "lzn" ],
-[ 8805, "lzo" ],
-[ 8806, "lzp" ],
-[ 8807, "lzq" ],
-[ 8808, "lzr" ],
-[ 8809, "lzs" ],
-[ 8810, "lzt" ],
-[ 8811, "lzu" ],
-[ 8812, "lzv" ],
-[ 8813, "lzw" ],
-[ 8814, "lzx" ],
-[ 8815, "lzy" ],
-[ 8816, "lzz" ],
-[ 8817, "maa" ],
-[ 8818, "mab" ],
-[ 8819, "mac" ],
-[ 8820, "mad" ],
-[ 8821, "mae" ],
-[ 8822, "maf" ],
-[ 8823, "mag" ],
-[ 8824, "mah" ],
-[ 8825, "mai" ],
-[ 8826, "maj" ],
-[ 8827, "mak" ],
-[ 8828, "mal" ],
-[ 8829, "mam" ],
-[ 8830, "man" ],
-[ 8831, "mao" ],
-[ 8832, "map" ],
-[ 8833, "maq" ],
-[ 8834, "mar" ],
-[ 8835, "mas" ],
-[ 8836, "mat" ],
-[ 8837, "mau" ],
-[ 8838, "mav" ],
-[ 8839, "maw" ],
-[ 8840, "max" ],
-[ 8841, "may" ],
-[ 8842, "maz" ],
-[ 8843, "mba" ],
-[ 8844, "mbb" ],
-[ 8845, "mbc" ],
-[ 8846, "mbd" ],
-[ 8847, "mbe" ],
-[ 8848, "mbf" ],
-[ 8849, "mbg" ],
-[ 8850, "mbh" ],
-[ 8851, "mbi" ],
-[ 8852, "mbj" ],
-[ 8853, "mbk" ],
-[ 8854, "mbl" ],
-[ 8855, "mbm" ],
-[ 8856, "mbn" ],
-[ 8857, "mbo" ],
-[ 8858, "mbp" ],
-[ 8859, "mbq" ],
-[ 8860, "mbr" ],
-[ 8861, "mbs" ],
-[ 8862, "mbt" ],
-[ 8863, "mbu" ],
-[ 8864, "mbv" ],
-[ 8865, "mbw" ],
-[ 8866, "mbx" ],
-[ 8867, "mby" ],
-[ 8868, "mbz" ],
-[ 8869, "mca" ],
-[ 8870, "mcb" ],
-[ 8871, "mcc" ],
-[ 8872, "mcd" ],
-[ 8873, "mce" ],
-[ 8874, "mcf" ],
-[ 8875, "mcg" ],
-[ 8876, "mch" ],
-[ 8877, "mci" ],
-[ 8878, "mcj" ],
-[ 8879, "mck" ],
-[ 8880, "mcl" ],
-[ 8881, "mcm" ],
-[ 8882, "mcn" ],
-[ 8883, "mco" ],
-[ 8884, "mcp" ],
-[ 8885, "mcq" ],
-[ 8886, "mcr" ],
-[ 8887, "mcs" ],
-[ 8888, "mct" ],
-[ 8889, "mcu" ],
-[ 8890, "mcv" ],
-[ 8891, "mcw" ],
-[ 8892, "mcx" ],
-[ 8893, "mcy" ],
-[ 8894, "mcz" ],
-[ 8895, "mda" ],
-[ 8896, "mdb" ],
-[ 8897, "mdc" ],
-[ 8898, "mdd" ],
-[ 8899, "mde" ],
-[ 8900, "mdf" ],
-[ 8901, "mdg" ],
-[ 8902, "mdh" ],
-[ 8903, "mdi" ],
-[ 8904, "mdj" ],
-[ 8905, "mdk" ],
-[ 8906, "mdl" ],
-[ 8907, "mdm" ],
-[ 8908, "mdn" ],
-[ 8909, "mdo" ],
-[ 8910, "mdp" ],
-[ 8911, "mdq" ],
-[ 8912, "mdr" ],
-[ 8913, "mds" ],
-[ 8914, "mdt" ],
-[ 8915, "mdu" ],
-[ 8916, "mdv" ],
-[ 8917, "mdw" ],
-[ 8918, "mdx" ],
-[ 8919, "mdy" ],
-[ 8920, "mdz" ],
-[ 8921, "mea" ],
-[ 8922, "meb" ],
-[ 8923, "mec" ],
-[ 8924, "med" ],
-[ 8925, "mee" ],
-[ 8926, "mef" ],
-[ 8927, "meg" ],
-[ 8928, "meh" ],
-[ 8929, "mei" ],
-[ 8930, "mej" ],
-[ 8931, "mek" ],
-[ 8932, "mel" ],
-[ 8933, "mem" ],
-[ 8934, "men" ],
-[ 8935, "meo" ],
-[ 8936, "mep" ],
-[ 8937, "meq" ],
-[ 8938, "mer" ],
-[ 8939, "mes" ],
-[ 8940, "met" ],
-[ 8941, "meu" ],
-[ 8942, "mev" ],
-[ 8943, "mew" ],
-[ 8944, "mex" ],
-[ 8945, "mey" ],
-[ 8946, "mez" ],
-[ 8947, "mfa" ],
-[ 8948, "mfb" ],
-[ 8949, "mfc" ],
-[ 8950, "mfd" ],
-[ 8951, "mfe" ],
-[ 8952, "mff" ],
-[ 8953, "mfg" ],
-[ 8954, "mfh" ],
-[ 8955, "mfi" ],
-[ 8956, "mfj" ],
-[ 8957, "mfk" ],
-[ 8958, "mfl" ],
-[ 8959, "mfm" ],
-[ 8960, "mfn" ],
-[ 8961, "mfo" ],
-[ 8962, "mfp" ],
-[ 8963, "mfq" ],
-[ 8964, "mfr" ],
-[ 8965, "mfs" ],
-[ 8966, "mft" ],
-[ 8967, "mfu" ],
-[ 8968, "mfv" ],
-[ 8969, "mfw" ],
-[ 8970, "mfx" ],
-[ 8971, "mfy" ],
-[ 8972, "mfz" ],
-[ 8973, "mga" ],
-[ 8974, "mgb" ],
-[ 8975, "mgc" ],
-[ 8976, "mgd" ],
-[ 8977, "mge" ],
-[ 8978, "mgf" ],
-[ 8979, "mgg" ],
-[ 8980, "mgh" ],
-[ 8981, "mgi" ],
-[ 8982, "mgj" ],
-[ 8983, "mgk" ],
-[ 8984, "mgl" ],
-[ 8985, "mgm" ],
-[ 8986, "mgn" ],
-[ 8987, "mgo" ],
-[ 8988, "mgp" ],
-[ 8989, "mgq" ],
-[ 8990, "mgr" ],
-[ 8991, "mgs" ],
-[ 8992, "mgt" ],
-[ 8993, "mgu" ],
-[ 8994, "mgv" ],
-[ 8995, "mgw" ],
-[ 8996, "mgx" ],
-[ 8997, "mgy" ],
-[ 8998, "mgz" ],
-[ 8999, "mha" ],
-[ 9000, "mhb" ],
-[ 9001, "mhc" ],
-[ 9002, "mhd" ],
-[ 9003, "mhe" ],
-[ 9004, "mhf" ],
-[ 9005, "mhg" ],
-[ 9006, "mhh" ],
-[ 9007, "mhi" ],
-[ 9008, "mhj" ],
-[ 9009, "mhk" ],
-[ 9010, "mhl" ],
-[ 9011, "mhm" ],
-[ 9012, "mhn" ],
-[ 9013, "mho" ],
-[ 9014, "mhp" ],
-[ 9015, "mhq" ],
-[ 9016, "mhr" ],
-[ 9017, "mhs" ],
-[ 9018, "mht" ],
-[ 9019, "mhu" ],
-[ 9020, "mhv" ],
-[ 9021, "mhw" ],
-[ 9022, "mhx" ],
-[ 9023, "mhy" ],
-[ 9024, "mhz" ],
-[ 9025, "mia" ],
-[ 9026, "mib" ],
-[ 9027, "mic" ],
-[ 9028, "mid" ],
-[ 9029, "mie" ],
-[ 9030, "mif" ],
-[ 9031, "mig" ],
-[ 9032, "mih" ],
-[ 9033, "mii" ],
-[ 9034, "mij" ],
-[ 9035, "mik" ],
-[ 9036, "mil" ],
-[ 9037, "mim" ],
-[ 9038, "min" ],
-[ 9039, "mio" ],
-[ 9040, "mip" ],
-[ 9041, "miq" ],
-[ 9042, "mir" ],
-[ 9043, "mis" ],
-[ 9044, "mit" ],
-[ 9045, "miu" ],
-[ 9046, "miv" ],
-[ 9047, "miw" ],
-[ 9048, "mix" ],
-[ 9049, "miy" ],
-[ 9050, "miz" ],
-[ 9051, "mja" ],
-[ 9052, "mjb" ],
-[ 9053, "mjc" ],
-[ 9054, "mjd" ],
-[ 9055, "mje" ],
-[ 9056, "mjf" ],
-[ 9057, "mjg" ],
-[ 9058, "mjh" ],
-[ 9059, "mji" ],
-[ 9060, "mjj" ],
-[ 9061, "mjk" ],
-[ 9062, "mjl" ],
-[ 9063, "mjm" ],
-[ 9064, "mjn" ],
-[ 9065, "mjo" ],
-[ 9066, "mjp" ],
-[ 9067, "mjq" ],
-[ 9068, "mjr" ],
-[ 9069, "mjs" ],
-[ 9070, "mjt" ],
-[ 9071, "mju" ],
-[ 9072, "mjv" ],
-[ 9073, "mjw" ],
-[ 9074, "mjx" ],
-[ 9075, "mjy" ],
-[ 9076, "mjz" ],
-[ 9077, "mka" ],
-[ 9078, "mkb" ],
-[ 9079, "mkc" ],
-[ 9080, "mkd" ],
-[ 9081, "mke" ],
-[ 9082, "mkf" ],
-[ 9083, "mkg" ],
-[ 9084, "mkh" ],
-[ 9085, "mki" ],
-[ 9086, "mkj" ],
-[ 9087, "mkk" ],
-[ 9088, "mkl" ],
-[ 9089, "mkm" ],
-[ 9090, "mkn" ],
-[ 9091, "mko" ],
-[ 9092, "mkp" ],
-[ 9093, "mkq" ],
-[ 9094, "mkr" ],
-[ 9095, "mks" ],
-[ 9096, "mkt" ],
-[ 9097, "mku" ],
-[ 9098, "mkv" ],
-[ 9099, "mkw" ],
-[ 9100, "mkx" ],
-[ 9101, "mky" ],
-[ 9102, "mkz" ],
-[ 9103, "mla" ],
-[ 9104, "mlb" ],
-[ 9105, "mlc" ],
-[ 9106, "mld" ],
-[ 9107, "mle" ],
-[ 9108, "mlf" ],
-[ 9109, "mlg" ],
-[ 9110, "mlh" ],
-[ 9111, "mli" ],
-[ 9112, "mlj" ],
-[ 9113, "mlk" ],
-[ 9114, "mll" ],
-[ 9115, "mlm" ],
-[ 9116, "mln" ],
-[ 9117, "mlo" ],
-[ 9118, "mlp" ],
-[ 9119, "mlq" ],
-[ 9120, "mlr" ],
-[ 9121, "mls" ],
-[ 9122, "mlt" ],
-[ 9123, "mlu" ],
-[ 9124, "mlv" ],
-[ 9125, "mlw" ],
-[ 9126, "mlx" ],
-[ 9127, "mly" ],
-[ 9128, "mlz" ],
-[ 9129, "mma" ],
-[ 9130, "mmb" ],
-[ 9131, "mmc" ],
-[ 9132, "mmd" ],
-[ 9133, "mme" ],
-[ 9134, "mmf" ],
-[ 9135, "mmg" ],
-[ 9136, "mmh" ],
-[ 9137, "mmi" ],
-[ 9138, "mmj" ],
-[ 9139, "mmk" ],
-[ 9140, "mml" ],
-[ 9141, "mmm" ],
-[ 9142, "mmn" ],
-[ 9143, "mmo" ],
-[ 9144, "mmp" ],
-[ 9145, "mmq" ],
-[ 9146, "mmr" ],
-[ 9147, "mms" ],
-[ 9148, "mmt" ],
-[ 9149, "mmu" ],
-[ 9150, "mmv" ],
-[ 9151, "mmw" ],
-[ 9152, "mmx" ],
-[ 9153, "mmy" ],
-[ 9154, "mmz" ],
-[ 9155, "mna" ],
-[ 9156, "mnb" ],
-[ 9157, "mnc" ],
-[ 9158, "mnd" ],
-[ 9159, "mne" ],
-[ 9160, "mnf" ],
-[ 9161, "mng" ],
-[ 9162, "mnh" ],
-[ 9163, "mni" ],
-[ 9164, "mnj" ],
-[ 9165, "mnk" ],
-[ 9166, "mnl" ],
-[ 9167, "mnm" ],
-[ 9168, "mnn" ],
-[ 9169, "mno" ],
-[ 9170, "mnp" ],
-[ 9171, "mnq" ],
-[ 9172, "mnr" ],
-[ 9173, "mns" ],
-[ 9174, "mnt" ],
-[ 9175, "mnu" ],
-[ 9176, "mnv" ],
-[ 9177, "mnw" ],
-[ 9178, "mnx" ],
-[ 9179, "mny" ],
-[ 9180, "mnz" ],
-[ 9181, "moa" ],
-[ 9182, "mob" ],
-[ 9183, "moc" ],
-[ 9184, "mod" ],
-[ 9185, "moe" ],
-[ 9186, "mof" ],
-[ 9187, "mog" ],
-[ 9188, "moh" ],
-[ 9189, "moi" ],
-[ 9190, "moj" ],
-[ 9191, "mok" ],
-[ 9192, "mol" ],
-[ 9193, "mom" ],
-[ 9194, "mon" ],
-[ 9195, "moo" ],
-[ 9196, "mop" ],
-[ 9197, "moq" ],
-[ 9198, "mor" ],
-[ 9199, "mos" ],
-[ 9200, "mot" ],
-[ 9201, "mou" ],
-[ 9202, "mov" ],
-[ 9203, "mow" ],
-[ 9204, "mox" ],
-[ 9205, "moy" ],
-[ 9206, "moz" ],
-[ 9207, "mpa" ],
-[ 9208, "mpb" ],
-[ 9209, "mpc" ],
-[ 9210, "mpd" ],
-[ 9211, "mpe" ],
-[ 9212, "mpf" ],
-[ 9213, "mpg" ],
-[ 9214, "mph" ],
-[ 9215, "mpi" ],
-[ 9216, "mpj" ],
-[ 9217, "mpk" ],
-[ 9218, "mpl" ],
-[ 9219, "mpm" ],
-[ 9220, "mpn" ],
-[ 9221, "mpo" ],
-[ 9222, "mpp" ],
-[ 9223, "mpq" ],
-[ 9224, "mpr" ],
-[ 9225, "mps" ],
-[ 9226, "mpt" ],
-[ 9227, "mpu" ],
-[ 9228, "mpv" ],
-[ 9229, "mpw" ],
-[ 9230, "mpx" ],
-[ 9231, "mpy" ],
-[ 9232, "mpz" ],
-[ 9233, "mqa" ],
-[ 9234, "mqb" ],
-[ 9235, "mqc" ],
-[ 9236, "mqd" ],
-[ 9237, "mqe" ],
-[ 9238, "mqf" ],
-[ 9239, "mqg" ],
-[ 9240, "mqh" ],
-[ 9241, "mqi" ],
-[ 9242, "mqj" ],
-[ 9243, "mqk" ],
-[ 9244, "mql" ],
-[ 9245, "mqm" ],
-[ 9246, "mqn" ],
-[ 9247, "mqo" ],
-[ 9248, "mqp" ],
-[ 9249, "mqq" ],
-[ 9250, "mqr" ],
-[ 9251, "mqs" ],
-[ 9252, "mqt" ],
-[ 9253, "mqu" ],
-[ 9254, "mqv" ],
-[ 9255, "mqw" ],
-[ 9256, "mqx" ],
-[ 9257, "mqy" ],
-[ 9258, "mqz" ],
-[ 9259, "mra" ],
-[ 9260, "mrb" ],
-[ 9261, "mrc" ],
-[ 9262, "mrd" ],
-[ 9263, "mre" ],
-[ 9264, "mrf" ],
-[ 9265, "mrg" ],
-[ 9266, "mrh" ],
-[ 9267, "mri" ],
-[ 9268, "mrj" ],
-[ 9269, "mrk" ],
-[ 9270, "mrl" ],
-[ 9271, "mrm" ],
-[ 9272, "mrn" ],
-[ 9273, "mro" ],
-[ 9274, "mrp" ],
-[ 9275, "mrq" ],
-[ 9276, "mrr" ],
-[ 9277, "mrs" ],
-[ 9278, "mrt" ],
-[ 9279, "mru" ],
-[ 9280, "mrv" ],
-[ 9281, "mrw" ],
-[ 9282, "mrx" ],
-[ 9283, "mry" ],
-[ 9284, "mrz" ],
-[ 9285, "msa" ],
-[ 9286, "msb" ],
-[ 9287, "msc" ],
-[ 9288, "msd" ],
-[ 9289, "mse" ],
-[ 9290, "msf" ],
-[ 9291, "msg" ],
-[ 9292, "msh" ],
-[ 9293, "msi" ],
-[ 9294, "msj" ],
-[ 9295, "msk" ],
-[ 9296, "msl" ],
-[ 9297, "msm" ],
-[ 9298, "msn" ],
-[ 9299, "mso" ],
-[ 9300, "msp" ],
-[ 9301, "msq" ],
-[ 9302, "msr" ],
-[ 9303, "mss" ],
-[ 9304, "mst" ],
-[ 9305, "msu" ],
-[ 9306, "msv" ],
-[ 9307, "msw" ],
-[ 9308, "msx" ],
-[ 9309, "msy" ],
-[ 9310, "msz" ],
-[ 9311, "mta" ],
-[ 9312, "mtb" ],
-[ 9313, "mtc" ],
-[ 9314, "mtd" ],
-[ 9315, "mte" ],
-[ 9316, "mtf" ],
-[ 9317, "mtg" ],
-[ 9318, "mth" ],
-[ 9319, "mti" ],
-[ 9320, "mtj" ],
-[ 9321, "mtk" ],
-[ 9322, "mtl" ],
-[ 9323, "mtm" ],
-[ 9324, "mtn" ],
-[ 9325, "mto" ],
-[ 9326, "mtp" ],
-[ 9327, "mtq" ],
-[ 9328, "mtr" ],
-[ 9329, "mts" ],
-[ 9330, "mtt" ],
-[ 9331, "mtu" ],
-[ 9332, "mtv" ],
-[ 9333, "mtw" ],
-[ 9334, "mtx" ],
-[ 9335, "mty" ],
-[ 9336, "mtz" ],
-[ 9337, "mua" ],
-[ 9338, "mub" ],
-[ 9339, "muc" ],
-[ 9340, "mud" ],
-[ 9341, "mue" ],
-[ 9342, "muf" ],
-[ 9343, "mug" ],
-[ 9344, "muh" ],
-[ 9345, "mui" ],
-[ 9346, "muj" ],
-[ 9347, "muk" ],
-[ 9348, "mul" ],
-[ 9349, "mum" ],
-[ 9350, "mun" ],
-[ 9351, "muo" ],
-[ 9352, "mup" ],
-[ 9353, "muq" ],
-[ 9354, "mur" ],
-[ 9355, "mus" ],
-[ 9356, "mut" ],
-[ 9357, "muu" ],
-[ 9358, "muv" ],
-[ 9359, "muw" ],
-[ 9360, "mux" ],
-[ 9361, "muy" ],
-[ 9362, "muz" ],
-[ 9363, "mva" ],
-[ 9364, "mvb" ],
-[ 9365, "mvc" ],
-[ 9366, "mvd" ],
-[ 9367, "mve" ],
-[ 9368, "mvf" ],
-[ 9369, "mvg" ],
-[ 9370, "mvh" ],
-[ 9371, "mvi" ],
-[ 9372, "mvj" ],
-[ 9373, "mvk" ],
-[ 9374, "mvl" ],
-[ 9375, "mvm" ],
-[ 9376, "mvn" ],
-[ 9377, "mvo" ],
-[ 9378, "mvp" ],
-[ 9379, "mvq" ],
-[ 9380, "mvr" ],
-[ 9381, "mvs" ],
-[ 9382, "mvt" ],
-[ 9383, "mvu" ],
-[ 9384, "mvv" ],
-[ 9385, "mvw" ],
-[ 9386, "mvx" ],
-[ 9387, "mvy" ],
-[ 9388, "mvz" ],
-[ 9389, "mwa" ],
-[ 9390, "mwb" ],
-[ 9391, "mwc" ],
-[ 9392, "mwd" ],
-[ 9393, "mwe" ],
-[ 9394, "mwf" ],
-[ 9395, "mwg" ],
-[ 9396, "mwh" ],
-[ 9397, "mwi" ],
-[ 9398, "mwj" ],
-[ 9399, "mwk" ],
-[ 9400, "mwl" ],
-[ 9401, "mwm" ],
-[ 9402, "mwn" ],
-[ 9403, "mwo" ],
-[ 9404, "mwp" ],
-[ 9405, "mwq" ],
-[ 9406, "mwr" ],
-[ 9407, "mws" ],
-[ 9408, "mwt" ],
-[ 9409, "mwu" ],
-[ 9410, "mwv" ],
-[ 9411, "mww" ],
-[ 9412, "mwx" ],
-[ 9413, "mwy" ],
-[ 9414, "mwz" ],
-[ 9415, "mxa" ],
-[ 9416, "mxb" ],
-[ 9417, "mxc" ],
-[ 9418, "mxd" ],
-[ 9419, "mxe" ],
-[ 9420, "mxf" ],
-[ 9421, "mxg" ],
-[ 9422, "mxh" ],
-[ 9423, "mxi" ],
-[ 9424, "mxj" ],
-[ 9425, "mxk" ],
-[ 9426, "mxl" ],
-[ 9427, "mxm" ],
-[ 9428, "mxn" ],
-[ 9429, "mxo" ],
-[ 9430, "mxp" ],
-[ 9431, "mxq" ],
-[ 9432, "mxr" ],
-[ 9433, "mxs" ],
-[ 9434, "mxt" ],
-[ 9435, "mxu" ],
-[ 9436, "mxv" ],
-[ 9437, "mxw" ],
-[ 9438, "mxx" ],
-[ 9439, "mxy" ],
-[ 9440, "mxz" ],
-[ 9441, "mya" ],
-[ 9442, "myb" ],
-[ 9443, "myc" ],
-[ 9444, "myd" ],
-[ 9445, "mye" ],
-[ 9446, "myf" ],
-[ 9447, "myg" ],
-[ 9448, "myh" ],
-[ 9449, "myi" ],
-[ 9450, "myj" ],
-[ 9451, "myk" ],
-[ 9452, "myl" ],
-[ 9453, "mym" ],
-[ 9454, "myn" ],
-[ 9455, "myo" ],
-[ 9456, "myp" ],
-[ 9457, "myq" ],
-[ 9458, "myr" ],
-[ 9459, "mys" ],
-[ 9460, "myt" ],
-[ 9461, "myu" ],
-[ 9462, "myv" ],
-[ 9463, "myw" ],
-[ 9464, "myx" ],
-[ 9465, "myy" ],
-[ 9466, "myz" ],
-[ 9467, "mza" ],
-[ 9468, "mzb" ],
-[ 9469, "mzc" ],
-[ 9470, "mzd" ],
-[ 9471, "mze" ],
-[ 9472, "mzf" ],
-[ 9473, "mzg" ],
-[ 9474, "mzh" ],
-[ 9475, "mzi" ],
-[ 9476, "mzj" ],
-[ 9477, "mzk" ],
-[ 9478, "mzl" ],
-[ 9479, "mzm" ],
-[ 9480, "mzn" ],
-[ 9481, "mzo" ],
-[ 9482, "mzp" ],
-[ 9483, "mzq" ],
-[ 9484, "mzr" ],
-[ 9485, "mzs" ],
-[ 9486, "mzt" ],
-[ 9487, "mzu" ],
-[ 9488, "mzv" ],
-[ 9489, "mzw" ],
-[ 9490, "mzx" ],
-[ 9491, "mzy" ],
-[ 9492, "mzz" ],
-[ 9493, "naa" ],
-[ 9494, "nab" ],
-[ 9495, "nac" ],
-[ 9496, "nad" ],
-[ 9497, "nae" ],
-[ 9498, "naf" ],
-[ 9499, "nag" ],
-[ 9500, "nah" ],
-[ 9501, "nai" ],
-[ 9502, "naj" ],
-[ 9503, "nak" ],
-[ 9504, "nal" ],
-[ 9505, "nam" ],
-[ 9506, "nan" ],
-[ 9507, "nao" ],
-[ 9508, "nap" ],
-[ 9509, "naq" ],
-[ 9510, "nar" ],
-[ 9511, "nas" ],
-[ 9512, "nat" ],
-[ 9513, "nau" ],
-[ 9514, "nav" ],
-[ 9515, "naw" ],
-[ 9516, "nax" ],
-[ 9517, "nay" ],
-[ 9518, "naz" ],
-[ 9519, "nba" ],
-[ 9520, "nbb" ],
-[ 9521, "nbc" ],
-[ 9522, "nbd" ],
-[ 9523, "nbe" ],
-[ 9524, "nbf" ],
-[ 9525, "nbg" ],
-[ 9526, "nbh" ],
-[ 9527, "nbi" ],
-[ 9528, "nbj" ],
-[ 9529, "nbk" ],
-[ 9530, "nbl" ],
-[ 9531, "nbm" ],
-[ 9532, "nbn" ],
-[ 9533, "nbo" ],
-[ 9534, "nbp" ],
-[ 9535, "nbq" ],
-[ 9536, "nbr" ],
-[ 9537, "nbs" ],
-[ 9538, "nbt" ],
-[ 9539, "nbu" ],
-[ 9540, "nbv" ],
-[ 9541, "nbw" ],
-[ 9542, "nbx" ],
-[ 9543, "nby" ],
-[ 9544, "nbz" ],
-[ 9545, "nca" ],
-[ 9546, "ncb" ],
-[ 9547, "ncc" ],
-[ 9548, "ncd" ],
-[ 9549, "nce" ],
-[ 9550, "ncf" ],
-[ 9551, "ncg" ],
-[ 9552, "nch" ],
-[ 9553, "nci" ],
-[ 9554, "ncj" ],
-[ 9555, "nck" ],
-[ 9556, "ncl" ],
-[ 9557, "ncm" ],
-[ 9558, "ncn" ],
-[ 9559, "nco" ],
-[ 9560, "ncp" ],
-[ 9561, "ncq" ],
-[ 9562, "ncr" ],
-[ 9563, "ncs" ],
-[ 9564, "nct" ],
-[ 9565, "ncu" ],
-[ 9566, "ncv" ],
-[ 9567, "ncw" ],
-[ 9568, "ncx" ],
-[ 9569, "ncy" ],
-[ 9570, "ncz" ],
-[ 9571, "nda" ],
-[ 9572, "ndb" ],
-[ 9573, "ndc" ],
-[ 9574, "ndd" ],
-[ 9575, "nde" ],
-[ 9576, "ndf" ],
-[ 9577, "ndg" ],
-[ 9578, "ndh" ],
-[ 9579, "ndi" ],
-[ 9580, "ndj" ],
-[ 9581, "ndk" ],
-[ 9582, "ndl" ],
-[ 9583, "ndm" ],
-[ 9584, "ndn" ],
-[ 9585, "ndo" ],
-[ 9586, "ndp" ],
-[ 9587, "ndq" ],
-[ 9588, "ndr" ],
-[ 9589, "nds" ],
-[ 9590, "ndt" ],
-[ 9591, "ndu" ],
-[ 9592, "ndv" ],
-[ 9593, "ndw" ],
-[ 9594, "ndx" ],
-[ 9595, "ndy" ],
-[ 9596, "ndz" ],
-[ 9597, "nea" ],
-[ 9598, "neb" ],
-[ 9599, "nec" ],
-[ 9600, "ned" ],
-[ 9601, "nee" ],
-[ 9602, "nef" ],
-[ 9603, "neg" ],
-[ 9604, "neh" ],
-[ 9605, "nei" ],
-[ 9606, "nej" ],
-[ 9607, "nek" ],
-[ 9608, "nel" ],
-[ 9609, "nem" ],
-[ 9610, "nen" ],
-[ 9611, "neo" ],
-[ 9612, "nep" ],
-[ 9613, "neq" ],
-[ 9614, "ner" ],
-[ 9615, "nes" ],
-[ 9616, "net" ],
-[ 9617, "neu" ],
-[ 9618, "nev" ],
-[ 9619, "new" ],
-[ 9620, "nex" ],
-[ 9621, "ney" ],
-[ 9622, "nez" ],
-[ 9623, "nfa" ],
-[ 9624, "nfb" ],
-[ 9625, "nfc" ],
-[ 9626, "nfd" ],
-[ 9627, "nfe" ],
-[ 9628, "nff" ],
-[ 9629, "nfg" ],
-[ 9630, "nfh" ],
-[ 9631, "nfi" ],
-[ 9632, "nfj" ],
-[ 9633, "nfk" ],
-[ 9634, "nfl" ],
-[ 9635, "nfm" ],
-[ 9636, "nfn" ],
-[ 9637, "nfo" ],
-[ 9638, "nfp" ],
-[ 9639, "nfq" ],
-[ 9640, "nfr" ],
-[ 9641, "nfs" ],
-[ 9642, "nft" ],
-[ 9643, "nfu" ],
-[ 9644, "nfv" ],
-[ 9645, "nfw" ],
-[ 9646, "nfx" ],
-[ 9647, "nfy" ],
-[ 9648, "nfz" ],
-[ 9649, "nga" ],
-[ 9650, "ngb" ],
-[ 9651, "ngc" ],
-[ 9652, "ngd" ],
-[ 9653, "nge" ],
-[ 9654, "ngf" ],
-[ 9655, "ngg" ],
-[ 9656, "ngh" ],
-[ 9657, "ngi" ],
-[ 9658, "ngj" ],
-[ 9659, "ngk" ],
-[ 9660, "ngl" ],
-[ 9661, "ngm" ],
-[ 9662, "ngn" ],
-[ 9663, "ngo" ],
-[ 9664, "ngp" ],
-[ 9665, "ngq" ],
-[ 9666, "ngr" ],
-[ 9667, "ngs" ],
-[ 9668, "ngt" ],
-[ 9669, "ngu" ],
-[ 9670, "ngv" ],
-[ 9671, "ngw" ],
-[ 9672, "ngx" ],
-[ 9673, "ngy" ],
-[ 9674, "ngz" ],
-[ 9675, "nha" ],
-[ 9676, "nhb" ],
-[ 9677, "nhc" ],
-[ 9678, "nhd" ],
-[ 9679, "nhe" ],
-[ 9680, "nhf" ],
-[ 9681, "nhg" ],
-[ 9682, "nhh" ],
-[ 9683, "nhi" ],
-[ 9684, "nhj" ],
-[ 9685, "nhk" ],
-[ 9686, "nhl" ],
-[ 9687, "nhm" ],
-[ 9688, "nhn" ],
-[ 9689, "nho" ],
-[ 9690, "nhp" ],
-[ 9691, "nhq" ],
-[ 9692, "nhr" ],
-[ 9693, "nhs" ],
-[ 9694, "nht" ],
-[ 9695, "nhu" ],
-[ 9696, "nhv" ],
-[ 9697, "nhw" ],
-[ 9698, "nhx" ],
-[ 9699, "nhy" ],
-[ 9700, "nhz" ],
-[ 9701, "nia" ],
-[ 9702, "nib" ],
-[ 9703, "nic" ],
-[ 9704, "nid" ],
-[ 9705, "nie" ],
-[ 9706, "nif" ],
-[ 9707, "nig" ],
-[ 9708, "nih" ],
-[ 9709, "nii" ],
-[ 9710, "nij" ],
-[ 9711, "nik" ],
-[ 9712, "nil" ],
-[ 9713, "nim" ],
-[ 9714, "nin" ],
-[ 9715, "nio" ],
-[ 9716, "nip" ],
-[ 9717, "niq" ],
-[ 9718, "nir" ],
-[ 9719, "nis" ],
-[ 9720, "nit" ],
-[ 9721, "niu" ],
-[ 9722, "niv" ],
-[ 9723, "niw" ],
-[ 9724, "nix" ],
-[ 9725, "niy" ],
-[ 9726, "niz" ],
-[ 9727, "nja" ],
-[ 9728, "njb" ],
-[ 9729, "njc" ],
-[ 9730, "njd" ],
-[ 9731, "nje" ],
-[ 9732, "njf" ],
-[ 9733, "njg" ],
-[ 9734, "njh" ],
-[ 9735, "nji" ],
-[ 9736, "njj" ],
-[ 9737, "njk" ],
-[ 9738, "njl" ],
-[ 9739, "njm" ],
-[ 9740, "njn" ],
-[ 9741, "njo" ],
-[ 9742, "njp" ],
-[ 9743, "njq" ],
-[ 9744, "njr" ],
-[ 9745, "njs" ],
-[ 9746, "njt" ],
-[ 9747, "nju" ],
-[ 9748, "njv" ],
-[ 9749, "njw" ],
-[ 9750, "njx" ],
-[ 9751, "njy" ],
-[ 9752, "njz" ],
-[ 9753, "nka" ],
-[ 9754, "nkb" ],
-[ 9755, "nkc" ],
-[ 9756, "nkd" ],
-[ 9757, "nke" ],
-[ 9758, "nkf" ],
-[ 9759, "nkg" ],
-[ 9760, "nkh" ],
-[ 9761, "nki" ],
-[ 9762, "nkj" ],
-[ 9763, "nkk" ],
-[ 9764, "nkl" ],
-[ 9765, "nkm" ],
-[ 9766, "nkn" ],
-[ 9767, "nko" ],
-[ 9768, "nkp" ],
-[ 9769, "nkq" ],
-[ 9770, "nkr" ],
-[ 9771, "nks" ],
-[ 9772, "nkt" ],
-[ 9773, "nku" ],
-[ 9774, "nkv" ],
-[ 9775, "nkw" ],
-[ 9776, "nkx" ],
-[ 9777, "nky" ],
-[ 9778, "nkz" ],
-[ 9779, "nla" ],
-[ 9780, "nlb" ],
-[ 9781, "nlc" ],
-[ 9782, "nld" ],
-[ 9783, "nle" ],
-[ 9784, "nlf" ],
-[ 9785, "nlg" ],
-[ 9786, "nlh" ],
-[ 9787, "nli" ],
-[ 9788, "nlj" ],
-[ 9789, "nlk" ],
-[ 9790, "nll" ],
-[ 9791, "nlm" ],
-[ 9792, "nln" ],
-[ 9793, "nlo" ],
-[ 9794, "nlp" ],
-[ 9795, "nlq" ],
-[ 9796, "nlr" ],
-[ 9797, "nls" ],
-[ 9798, "nlt" ],
-[ 9799, "nlu" ],
-[ 9800, "nlv" ],
-[ 9801, "nlw" ],
-[ 9802, "nlx" ],
-[ 9803, "nly" ],
-[ 9804, "nlz" ],
-[ 9805, "nma" ],
-[ 9806, "nmb" ],
-[ 9807, "nmc" ],
-[ 9808, "nmd" ],
-[ 9809, "nme" ],
-[ 9810, "nmf" ],
-[ 9811, "nmg" ],
-[ 9812, "nmh" ],
-[ 9813, "nmi" ],
-[ 9814, "nmj" ],
-[ 9815, "nmk" ],
-[ 9816, "nml" ],
-[ 9817, "nmm" ],
-[ 9818, "nmn" ],
-[ 9819, "nmo" ],
-[ 9820, "nmp" ],
-[ 9821, "nmq" ],
-[ 9822, "nmr" ],
-[ 9823, "nms" ],
-[ 9824, "nmt" ],
-[ 9825, "nmu" ],
-[ 9826, "nmv" ],
-[ 9827, "nmw" ],
-[ 9828, "nmx" ],
-[ 9829, "nmy" ],
-[ 9830, "nmz" ],
-[ 9831, "nna" ],
-[ 9832, "nnb" ],
-[ 9833, "nnc" ],
-[ 9834, "nnd" ],
-[ 9835, "nne" ],
-[ 9836, "nnf" ],
-[ 9837, "nng" ],
-[ 9838, "nnh" ],
-[ 9839, "nni" ],
-[ 9840, "nnj" ],
-[ 9841, "nnk" ],
-[ 9842, "nnl" ],
-[ 9843, "nnm" ],
-[ 9844, "nnn" ],
-[ 9845, "nno" ],
-[ 9846, "nnp" ],
-[ 9847, "nnq" ],
-[ 9848, "nnr" ],
-[ 9849, "nns" ],
-[ 9850, "nnt" ],
-[ 9851, "nnu" ],
-[ 9852, "nnv" ],
-[ 9853, "nnw" ],
-[ 9854, "nnx" ],
-[ 9855, "nny" ],
-[ 9856, "nnz" ],
-[ 9857, "noa" ],
-[ 9858, "nob" ],
-[ 9859, "noc" ],
-[ 9860, "nod" ],
-[ 9861, "noe" ],
-[ 9862, "nof" ],
-[ 9863, "nog" ],
-[ 9864, "noh" ],
-[ 9865, "noi" ],
-[ 9866, "noj" ],
-[ 9867, "nok" ],
-[ 9868, "nol" ],
-[ 9869, "nom" ],
-[ 9870, "non" ],
-[ 9871, "noo" ],
-[ 9872, "nop" ],
-[ 9873, "noq" ],
-[ 9874, "nor" ],
-[ 9875, "nos" ],
-[ 9876, "not" ],
-[ 9877, "nou" ],
-[ 9878, "nov" ],
-[ 9879, "now" ],
-[ 9880, "nox" ],
-[ 9881, "noy" ],
-[ 9882, "noz" ],
-[ 9883, "npa" ],
-[ 9884, "npb" ],
-[ 9885, "npc" ],
-[ 9886, "npd" ],
-[ 9887, "npe" ],
-[ 9888, "npf" ],
-[ 9889, "npg" ],
-[ 9890, "nph" ],
-[ 9891, "npi" ],
-[ 9892, "npj" ],
-[ 9893, "npk" ],
-[ 9894, "npl" ],
-[ 9895, "npm" ],
-[ 9896, "npn" ],
-[ 9897, "npo" ],
-[ 9898, "npp" ],
-[ 9899, "npq" ],
-[ 9900, "npr" ],
-[ 9901, "nps" ],
-[ 9902, "npt" ],
-[ 9903, "npu" ],
-[ 9904, "npv" ],
-[ 9905, "npw" ],
-[ 9906, "npx" ],
-[ 9907, "npy" ],
-[ 9908, "npz" ],
-[ 9909, "nqa" ],
-[ 9910, "nqb" ],
-[ 9911, "nqc" ],
-[ 9912, "nqd" ],
-[ 9913, "nqe" ],
-[ 9914, "nqf" ],
-[ 9915, "nqg" ],
-[ 9916, "nqh" ],
-[ 9917, "nqi" ],
-[ 9918, "nqj" ],
-[ 9919, "nqk" ],
-[ 9920, "nql" ],
-[ 9921, "nqm" ],
-[ 9922, "nqn" ],
-[ 9923, "nqo" ],
-[ 9924, "nqp" ],
-[ 9925, "nqq" ],
-[ 9926, "nqr" ],
-[ 9927, "nqs" ],
-[ 9928, "nqt" ],
-[ 9929, "nqu" ],
-[ 9930, "nqv" ],
-[ 9931, "nqw" ],
-[ 9932, "nqx" ],
-[ 9933, "nqy" ],
-[ 9934, "nqz" ],
-[ 9935, "nra" ],
-[ 9936, "nrb" ],
-[ 9937, "nrc" ],
-[ 9938, "nrd" ],
-[ 9939, "nre" ],
-[ 9940, "nrf" ],
-[ 9941, "nrg" ],
-[ 9942, "nrh" ],
-[ 9943, "nri" ],
-[ 9944, "nrj" ],
-[ 9945, "nrk" ],
-[ 9946, "nrl" ],
-[ 9947, "nrm" ],
-[ 9948, "nrn" ],
-[ 9949, "nro" ],
-[ 9950, "nrp" ],
-[ 9951, "nrq" ],
-[ 9952, "nrr" ],
-[ 9953, "nrs" ],
-[ 9954, "nrt" ],
-[ 9955, "nru" ],
-[ 9956, "nrv" ],
-[ 9957, "nrw" ],
-[ 9958, "nrx" ],
-[ 9959, "nry" ],
-[ 9960, "nrz" ],
-[ 9961, "nsa" ],
-[ 9962, "nsb" ],
-[ 9963, "nsc" ],
-[ 9964, "nsd" ],
-[ 9965, "nse" ],
-[ 9966, "nsf" ],
-[ 9967, "nsg" ],
-[ 9968, "nsh" ],
-[ 9969, "nsi" ],
-[ 9970, "nsj" ],
-[ 9971, "nsk" ],
-[ 9972, "nsl" ],
-[ 9973, "nsm" ],
-[ 9974, "nsn" ],
-[ 9975, "nso" ],
-[ 9976, "nsp" ],
-[ 9977, "nsq" ],
-[ 9978, "nsr" ],
-[ 9979, "nss" ],
-[ 9980, "nst" ],
-[ 9981, "nsu" ],
-[ 9982, "nsv" ],
-[ 9983, "nsw" ],
-[ 9984, "nsx" ],
-[ 9985, "nsy" ],
-[ 9986, "nsz" ],
-[ 9987, "nta" ],
-[ 9988, "ntb" ],
-[ 9989, "ntc" ],
-[ 9990, "ntd" ],
-[ 9991, "nte" ],
-[ 9992, "ntf" ],
-[ 9993, "ntg" ],
-[ 9994, "nth" ],
-[ 9995, "nti" ],
-[ 9996, "ntj" ],
-[ 9997, "ntk" ],
-[ 9998, "ntl" ],
-[ 9999, "ntm" ],
-[ 10000, "ntn" ],
- ]);
+# The map below generates stuff like:
+# [ qw/artistid name/ ],
+# [ 4, "b" ],
+# [ 5, "c" ],
+# ...
+# [ 9999, "ntm" ],
+# [ 10000, "ntn" ],
+
+my $start_id = 'populateXaaaaaa';
+my $rows = 10;
+my $offset = 3;
+
+$schema->populate('Artist', [ [ qw/artistid name/ ], map { [ ($_ + $offset) => $start_id++ ] } ( 1 .. $rows ) ] );
+is (
+ $schema->resultset ('Artist')->search ({ name => { -like => 'populateX%' } })->count,
+ $rows,
+ 'populate created correct number of rows with massive AoA bulk insert',
+);
+
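# For reference (illustrative sketch, not part of the change): with the
# $start_id/$rows/$offset values above, Perl's magic string increment makes
# the map expand to the equivalent of hand-writing
#
#   $schema->populate('Artist', [
#     [ qw/artistid name/ ],
#     [  4, 'populateXaaaaaa' ],
#     [  5, 'populateXaaaaab' ],
#     # ...
#     [ 13, 'populateXaaaaaj' ],
#   ]);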
+my $artist = $schema->resultset ('Artist')
+ ->search ({ 'cds.title' => { '!=', undef } }, { join => 'cds' })
+ ->first;
+my $ex_title = $artist->cds->first->title;
+
+throws_ok ( sub {
+ my $i = 600;
+ $schema->populate('CD', [
+ map {
+ {
+ artist => $artist->id,
+ title => $_,
+ year => 2009,
+ }
+ } ('Huey', 'Dewey', $ex_title, 'Louie')
+ ])
+}, qr/columns .+ are not unique for populate slice.+$ex_title/ms, 'Readable exception thrown for failed populate');
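# Note on the helper used above: throws_ok() comes from Test::Exception,
# which the full test file is assumed to load next to Test::More. Minimal
# usage sketch:
#
#   use Test::Exception;
#   throws_ok { die "boom\n" } qr/boom/, 'exception text matched';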
## make sure populate honors fields/orders in list context
## schema order
use lib qw(t/lib);
use Data::Dumper;
-plan ( ($] >= 5.009000 and $] < 5.010001)
- ? (skip_all => 'warnings::register broken under 5.10: http://rt.perl.org/rt3/Public/Bug/Display.html?id=62522')
- : (tests => 4)
-);
+plan tests => 4;
+my $exp_warn = qr/The many-to-many relationship 'bars' is trying to create/;
{
my @w;
- local $SIG{__WARN__} = sub { push @w, @_ };
+ local $SIG{__WARN__} = sub { $_[0] =~ $exp_warn ? push @w, $_[0] : warn $_[0] };
my $code = gen_code ( suffix => 1 );
eval "$code";
ok (! $@, 'Eval code without warnings suppression')
|| diag $@;
- ok ( (grep { $_ =~ /The many-to-many relationship bars is trying to create/ } @w), "Warning triggered without relevant 'no warnings'");
+ ok (@w, "Warning triggered without DBIC_OVERWRITE_HELPER_METHODS_OK");
}
{
my @w;
- local $SIG{__WARN__} = sub { push @w, @_ };
+ local $SIG{__WARN__} = sub { $_[0] =~ $exp_warn ? push @w, $_[0] : warn $_[0] };
- my $code = gen_code ( suffix => 2, no_warn => 1 );
+ my $code = gen_code ( suffix => 2 );
+
+ local $ENV{DBIC_OVERWRITE_HELPER_METHODS_OK} = 1;
eval "$code";
ok (! $@, 'Eval code with warnings suppression')
|| diag $@;
- ok ( (not grep { $_ =~ /The many-to-many relationship bars is trying to create/ } @w), "No warning triggered with relevant 'no warnings'");
+ ok (! @w, "No warning triggered with DBIC_OVERWRITE_HELPER_METHODS_OK");
}
sub gen_code {
my $args = { @_ };
my $suffix = $args->{suffix};
- my $no_warn = ( $args->{no_warn}
- ? "no warnings 'DBIx::Class::Relationship::ManyToMany';"
- : '',
- );
return <<EOF;
use strict;
},
);
- ${no_warn}
__PACKAGE__->set_primary_key('barid');
__PACKAGE__->has_many('foo_to_bar' => 'DBICTest::Schema::FooToBar${suffix}' => 'foo');
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my $tests = 3;
+plan tests => $tests;
+
+my $schema = DBICTest->init_schema();
+my $rs = $schema->resultset ('Artist');
+my $last_obj = $rs->search ({}, { order_by => { -desc => 'artistid' }, rows => 1})->single;
+my $last_id = $last_obj ? $last_obj->artistid : 0;
+
+my $obj;
+eval { $obj = $rs->create ({}) };
+my $err = $@;
+
+ok ($obj, 'Insert defaults ( $rs->create ({}) )' );
+SKIP: {
+ skip "Default insert failed: $err", $tests-1 if $err;
+
+ # this should be picked up without calling the DB again
+ is ($obj->artistid, $last_id + 1, 'Autoinc PK works');
+
+ # for this we need to refresh
+ $obj->discard_changes;
+ is ($obj->rank, 13, 'Default value works');
+}
+
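# Sketch of the behaviour exercised above (assumes the test Artist source
# declares default_value => 13 for its 'rank' column, as the is() implies):
#
#   my $row = $schema->resultset('Artist')->create({});  # INSERT with all defaults
#   my $id  = $row->artistid;   # autoinc PK is known immediately via last_insert_id
#   $row->discard_changes;      # re-SELECTs this row from storage
#   my $rank = $row->rank;      # database-side default (13) is now visible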
use_ok('DBIC::DebugObj');
my $schema = DBICTest->init_schema();
-diag('Testing against ' . join(' ', map { $schema->storage->dbh->get_info($_) } qw/17 18/));
+#diag('Testing against ' . join(' ', map { $schema->storage->dbh->get_info($_) } qw/17 18/));
$schema->storage->sql_maker->quote_char('`');
$schema->storage->sql_maker->name_sep('.');
-my ($sql, @bind) = ('');
+my ($sql, @bind);
$schema->storage->debugobj(DBIC::DebugObj->new(\$sql, \@bind));
$schema->storage->debug(1);
eval { $rs->count };
is_same_sql_bind(
$sql, \@bind,
- "SELECT COUNT( * ) FROM `cd` `me` JOIN `artist` `artist` ON ( `artist`.`artistid` = `me`.`artist` ) WHERE ( `artist`.`name` = ? AND `me`.`year` = ? )", ["'Caterwauler McCrae'", "'2001'"],
+ "SELECT COUNT( * ) FROM cd `me` JOIN `artist` `artist` ON ( `artist`.`artistid` = `me`.`artist` ) WHERE ( `artist`.`name` = ? AND `me`.`year` = ? )", ["'Caterwauler McCrae'", "'2001'"],
'got correct SQL for count query with quoting'
);
eval { $rs->count };
is_same_sql_bind(
$sql, \@bind,
- "SELECT COUNT( * ) FROM [cd] [me] JOIN [artist] [artist] ON ( [artist].[artistid] = [me].[artist] ) WHERE ( [artist].[name] = ? AND [me].[year] = ? )", ["'Caterwauler McCrae'", "'2001'"],
+ "SELECT COUNT( * ) FROM cd [me] JOIN [artist] [artist] ON ( [artist].[artistid] = [me].[artist] ) WHERE ( [artist].[name] = ? AND [me].[year] = ? )", ["'Caterwauler McCrae'", "'2001'"],
'got correct SQL for count query with bracket quoting'
);
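# The two sql_maker knobs driving the expectations above (illustrative sketch):
#
#   $schema->storage->sql_maker->quote_char('`');           # single quote char ...
#   $schema->storage->sql_maker->quote_char([ '[', ']' ]);  # ... or an open/close pair
#   $schema->storage->sql_maker->name_sep('.');             # separator used in `me`.`artist`
#
# hence the same count query is asserted once with backtick and once with
# bracket quoting.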
my $schema = DBICTest->init_schema();
-diag('Testing against ' . join(' ', map { $schema->storage->dbh->get_info($_) } qw/17 18/));
+#diag('Testing against ' . join(' ', map { $schema->storage->dbh->get_info($_) } qw/17 18/));
my $dsn = $schema->storage->_dbi_connect_info->[0];
$schema->connection(
{ quote_char => '`', name_sep => '.' },
);
-my ($sql, @bind) = ('');
+my ($sql, @bind);
$schema->storage->debugobj(DBIC::DebugObj->new(\$sql, \@bind)),
$schema->storage->debug(1);
eval { $rs->count };
is_same_sql_bind(
$sql, \@bind,
- "SELECT COUNT( * ) FROM `cd` `me` JOIN `artist` `artist` ON ( `artist`.`artistid` = `me`.`artist` ) WHERE ( `artist`.`name` = ? AND `me`.`year` = ? )", ["'Caterwauler McCrae'", "'2001'"],
+ "SELECT COUNT( * ) FROM cd `me` JOIN `artist` `artist` ON ( `artist`.`artistid` = `me`.`artist` ) WHERE ( `artist`.`name` = ? AND `me`.`year` = ? )", ["'Caterwauler McCrae'", "'2001'"],
'got correct SQL for count query with quoting'
);
eval { $rs->count };
is_same_sql_bind(
$sql, \@bind,
- "SELECT COUNT( * ) FROM [cd] [me] JOIN [artist] [artist] ON ( [artist].[artistid] = [me].[artist] ) WHERE ( [artist].[name] = ? AND [me].[year] = ? )", ["'Caterwauler McCrae'", "'2001'"],
+ "SELECT COUNT( * ) FROM cd [me] JOIN [artist] [artist] ON ( [artist].[artistid] = [me].[artist] ) WHERE ( [artist].[name] = ? AND [me].[year] = ? )", ["'Caterwauler McCrae'", "'2001'"],
'got correct SQL for count query with bracket quoting'
);
use warnings;
use Test::More;
-BEGIN {
- eval "use DBD::SQLite";
- plan $@
- ? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 12 );
-}
+plan tests => 12;
use lib qw(t/lib);
# Disconnect the dbh, and be sneaky about it
# Also test if DBD::SQLite finally knows how to ->disconnect properly
-TODO: {
- local $TODO = 'SQLite is evil/braindead. Once this test starts passing, remove the related atrocity from DBIx::Class::Storage::DBI::SQLite';
- my $w;
- local $SIG{__WARN__} = sub { $w = shift };
- $schema->storage->_dbh->disconnect;
- ok ($w !~ /active statement handles/, 'SQLite can disconnect properly \o/');
+{
+ my $w;
+ local $SIG{__WARN__} = sub { $w = shift };
+ $schema->storage->_dbh->disconnect;
+ ok ($w !~ /active statement handles/, 'SQLite can disconnect properly');
}
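# Sketch of what the storage layer is expected to do after the sneaky
# disconnect above: DBIx::Class checks connected() before issuing the next
# statement and re-establishes the connection on demand, e.g.
#
#   $schema->storage->_dbh->disconnect;            # kill the raw handle
#   my $n = $schema->resultset('Artist')->count;   # storage notices the dead
#                                                  # handle and reconnects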
# Try the operation again - What should happen here is:
use lib qw(t/lib);
use DBICTest;
-eval { require DateTime::Format::MySQL };
-
-plan $@ ? ( skip_all => 'Requires DateTime::Format::MySQL' )
+eval { require DateTime::Format::SQLite };
+plan $@ ? ( skip_all => 'Requires DateTime::Format::SQLite' )
: ( tests => 3 );
my $schema = DBICTest->init_schema(
my $parser = $schema->storage->datetime_parser();
-# We're currently expecting a MySQL parser. May change in future.
-is($parser, 'DateTime::Format::MySQL', 'Got expected datetime_parser');
-
+is($parser, 'DateTime::Format::SQLite', 'Got expected storage-set datetime_parser');
isa_ok($schema->storage, 'DBIx::Class::Storage::DBI::SQLite', 'storage');
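# Sketch of how the parser class reported above is normally used (standard
# DateTime::Format API, which InflateColumn::DateTime relies on):
#
#   my $parser = $schema->storage->datetime_parser;   # 'DateTime::Format::SQLite'
#   my $dt  = $parser->parse_datetime('2009-08-18 08:35:00');
#   my $str = $parser->format_datetime($dt);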
use warnings;
use Test::More;
-unshift(@INC, './t/lib');
+use lib qw(t/lib);
+use DBICTest; # do not remove even though it is not used
plan tests => 8;
use warnings;
use Test::More;
-unshift(@INC, './t/lib');
+use lib qw(t/lib);
+use DBICTest; # do not remove even though it is not used
plan tests => 6;
use warnings;
use Test::More;
-unshift(@INC, './t/lib');
+use lib qw(t/lib);
+use DBICTest; # do not remove even though it is not used
plan tests => 7;
use warnings;
use Test::More;
-unshift(@INC, './t/lib');
+use lib qw(t/lib);
+use DBICTest; # do not remove even though it is not used
plan tests => 6;
--- /dev/null
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+use Test::More;
+
+use lib qw(t/lib);
+use DBICTest; # do not remove even though it is not used
+
+plan tests => 1;
+
+eval {
+ package DBICNSTest;
+ use base qw/DBIx::Class::Schema/;
+ __PACKAGE__->load_namespaces(
+ result_namespace => 'Bogus',
+ resultset_namespace => 'RSet',
+ );
+};
+
+like ($@, qr/are you sure this is a real Result Class/, 'Clear exception thrown');
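# For contrast, a sketch of the same API used correctly (package and
# namespace names here are illustrative, not from this distribution):
#
#   package MyApp::Schema;
#   use base 'DBIx::Class::Schema';
#   __PACKAGE__->load_namespaces(
#     result_namespace    => 'Result',     # MyApp::Schema::Result::*
#     resultset_namespace => 'ResultSet',  # MyApp::Schema::ResultSet::*
#   );
#   1;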
use strict;
use warnings;
-use Test::More;
use lib 't/lib';
-
-plan tests => 4;
+use DBICTest; # do not remove even though it is not used
+use Test::More tests => 8;
sub _chk_warning {
- defined $_[0]?
- $_[0] !~ qr/We found ResultSet class '([^']+)' for '([^']+)', but it seems that you had already set '([^']+)' to use '([^']+)' instead/ :
- 1
+ defined $_[0]?
+ $_[0] !~ qr/We found ResultSet class '([^']+)' for '([^']+)', but it seems that you had already set '([^']+)' to use '([^']+)' instead/ :
+ 1
+}
+
+sub _chk_extra_sources_warning {
+ my $p = qr/already has a source, use register_extra_source for additional sources/;
+ defined $_[0]? $_[0] !~ /$p/ : 1;
+}
+
+sub _verify_sources {
+ my @monikers = @_;
+ is_deeply (
+ [ sort DBICNSTest::RtBug41083->sources ],
+ \@monikers,
+ 'List of resultsource registrations',
+ );
}
-my $warnings;
-eval {
+{
+ my $warnings;
+ eval {
local $SIG{__WARN__} = sub { $warnings .= shift };
package DBICNSTest::RtBug41083;
use base 'DBIx::Class::Schema';
__PACKAGE__->load_namespaces(
- result_namespace => 'Schema_A',
- resultset_namespace => 'ResultSet_A',
- default_resultset_class => 'ResultSet'
+ result_namespace => 'Schema_A',
+ resultset_namespace => 'ResultSet_A',
+ default_resultset_class => 'ResultSet'
);
-};
-ok(!$@) or diag $@;
-ok(_chk_warning($warnings), 'expected no complaint');
+ };
-eval {
+ ok(!$@) or diag $@;
+ ok(_chk_warning($warnings), 'expected no resultset complaint');
+ ok(_chk_extra_sources_warning($warnings), 'expected no extra sources complaint') or diag($warnings);
+
+ _verify_sources (qw/A A::Sub/);
+}
+
+{
+ my $warnings;
+ eval {
local $SIG{__WARN__} = sub { $warnings .= shift };
package DBICNSTest::RtBug41083;
use base 'DBIx::Class::Schema';
__PACKAGE__->load_namespaces(
- result_namespace => 'Schema',
- resultset_namespace => 'ResultSet',
- default_resultset_class => 'ResultSet'
+ result_namespace => 'Schema',
+ resultset_namespace => 'ResultSet',
+ default_resultset_class => 'ResultSet'
);
-};
-ok(!$@) or diag $@;
-ok(_chk_warning($warnings), 'expected no complaint') or diag $warnings;
+ };
+ ok(!$@) or diag $@;
+ ok(_chk_warning($warnings), 'expected no resultset complaint') or diag $warnings;
+ ok(_chk_extra_sources_warning($warnings), 'expected no extra sources complaint') or diag($warnings);
+
+ _verify_sources (qw/A A::Sub Foo Foo::Sub/);
+}
use warnings;
use Test::More;
-use DBIx::Class::Storage::DBI::Oracle::WhereJoins;
+use DBIx::Class::SQLAHacks::OracleJoins;
use lib qw(t/lib);
+use DBICTest; # do not remove even though it is not used
use DBIC::SqlMakerTest;
plan tests => 4;
-my $sa = new DBIC::SQL::Abstract::Oracle;
+my $sa = new DBIx::Class::SQLAHacks::OracleJoins;
$sa->limit_dialect('RowNum');
-use strict;\r
-use warnings;\r
-\r
-use Test::More;\r
-use DBIx::Class::Storage::DBI;\r
-\r
-plan tests => 1;\r
-\r
-my $sa = new DBIC::SQL::Abstract;\r
-\r
-$sa->limit_dialect( 'Top' );\r
-\r
-is(\r
- $sa->select( 'rubbish', [ 'foo.id', 'bar.id' ], undef, { order_by => 'artistid' }, 1, 3 ),\r
- 'SELECT * FROM\r
-(\r
- SELECT TOP 1 * FROM\r
- (\r
- SELECT TOP 4 foo.id, bar.id FROM rubbish ORDER BY artistid ASC\r
- ) AS foo\r
- ORDER BY artistid DESC\r
-) AS bar\r
-ORDER BY artistid ASC\r
-',\r
- "make sure limit_dialect( 'Top' ) is working okay"\r
-);\r
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBIC::SqlMakerTest;
+
+my $schema = DBICTest->init_schema;
+
+# Trick the sqlite DB to use Top limit emulation
+# We could test all of this via $sq->$op directly,
+# but some conditions need a $rsrc
+delete $schema->storage->_sql_maker->{_cached_syntax};
+$schema->storage->_sql_maker->limit_dialect ('Top');
+
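# What limit_dialect('Top') buys us (sketch of the shape asserted by the
# tests below): engines without native LIMIT/OFFSET get rows/offset emulated
# through nested SELECT TOP subqueries, so rows => 1, offset => 3 becomes
# roughly
#
#   SELECT ... FROM (
#     SELECT TOP 1 ... FROM (
#       SELECT TOP 4 ... ORDER BY <cols> ASC     -- TOP (offset + rows)
#     ) me ORDER BY <cols> DESC                  -- reversed, keeps the last row
#   ) me ORDER BY <cols> ASC                     -- restore the requested order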
+my $rs = $schema->resultset ('BooksInLibrary')->search ({}, { prefetch => 'owner', rows => 1, offset => 3 });
+
+sub default_test_order {
+ my $order_by = shift;
+ is_same_sql_bind(
+ $rs->search ({}, {order_by => $order_by})->as_query,
+ "(SELECT
+ TOP 1 me__id, source, owner, title, price, owner__id, name FROM
+ (SELECT
+ TOP 4 me.id AS me__id, me.source, me.owner, me.title, me.price, owner.id AS owner__id, owner.name
+ FROM books me
+ JOIN owners owner ON
+ owner.id = me.owner
+ WHERE ( source = ? )
+ ORDER BY me__id ASC
+ ) me ORDER BY me__id DESC
+ )",
+ [ [ source => 'Library' ] ],
+ );
+}
+
+sub test_order {
+ my $args = shift;
+
+ my $req_order = $args->{order_req}
+ ? "ORDER BY $args->{order_req}"
+ : ''
+ ;
+
+ is_same_sql_bind(
+ $rs->search ({}, {order_by => $args->{order_by}})->as_query,
+ "(SELECT
+ me__id, source, owner, title, price, owner__id, name FROM
+ (SELECT
+ TOP 1 me__id, source, owner, title, price, owner__id, name FROM
+ (SELECT
+ TOP 4 me.id AS me__id, me.source, me.owner, me.title, me.price, owner.id AS owner__id, owner.name FROM
+ books me
+ JOIN owners owner ON owner.id = me.owner
+ WHERE ( source = ? )
+ ORDER BY $args->{order_inner}
+ ) me ORDER BY $args->{order_outer}
+ ) me $req_order
+ )",
+ [ [ source => 'Library' ] ],
+ );
+}
+
+my @tests = (
+ {
+ order_by => \'foo DESC',
+ order_req => 'foo DESC',
+ order_inner => 'foo DESC',
+ order_outer => 'foo ASC'
+ },
+ {
+ order_by => { -asc => 'foo' },
+ order_req => 'foo ASC',
+ order_inner => 'foo ASC',
+ order_outer => 'foo DESC',
+ },
+ {
+ order_by => 'foo',
+ order_req => 'foo',
+ order_inner => 'foo ASC',
+ order_outer => 'foo DESC',
+ },
+ {
+ order_by => [ qw{ foo bar} ],
+ order_req => 'foo, bar',
+ order_inner => 'foo ASC,bar ASC',
+ order_outer => 'foo DESC, bar DESC',
+ },
+ {
+ order_by => { -desc => 'foo' },
+ order_req => 'foo DESC',
+ order_inner => 'foo DESC',
+ order_outer => 'foo ASC',
+ },
+ {
+ order_by => ['foo', { -desc => 'bar' } ],
+ order_req => 'foo, bar DESC',
+ order_inner => 'foo ASC, bar DESC',
+ order_outer => 'foo DESC, bar ASC',
+ },
+ {
+ order_by => { -asc => [qw{ foo bar }] },
+ order_req => 'foo ASC, bar ASC',
+ order_inner => 'foo ASC, bar ASC',
+ order_outer => 'foo DESC, bar DESC',
+ },
+ {
+ order_by => [
+ { -asc => 'foo' },
+ { -desc => [qw{bar}] },
+ { -asc => [qw{hello sensors}]},
+ ],
+ order_req => 'foo ASC, bar DESC, hello ASC, sensors ASC',
+ order_inner => 'foo ASC, bar DESC, hello ASC, sensors ASC',
+ order_outer => 'foo DESC, bar ASC, hello DESC, sensors DESC',
+ },
+);
+
+my @default_tests = ( undef, '', {}, [] );
+
+plan (tests => scalar @tests + scalar @default_tests + 1);
+
+test_order ($_) for @tests;
+default_test_order ($_) for @default_tests;
+
+
+is_same_sql_bind (
+ $rs->search ({}, { group_by => 'title', order_by => 'title' })->as_query,
+'(SELECT
+me.id, me.source, me.owner, me.title, me.price, owner.id, owner.name FROM
+ ( SELECT
+ id, source, owner, title, price FROM
+ ( SELECT
+ TOP 1 id, source, owner, title, price FROM
+ ( SELECT
+ TOP 4 me.id, me.source, me.owner, me.title, me.price FROM
+ books me JOIN
+ owners owner ON owner.id = me.owner
+ WHERE ( source = ? )
+ GROUP BY title
+ ORDER BY title ASC
+ ) me
+ ORDER BY title DESC
+ ) me
+ ORDER BY title
+ ) me JOIN
+ owners owner ON owner.id = me.owner WHERE
+ ( source = ? )
+ ORDER BY title)' ,
+ [ [ source => 'Library' ], [ source => 'Library' ] ],
+);
-use strict;\r
-use warnings;\r
-\r
-use Test::More;\r
-use Data::Dumper;\r
-use lib qw(t/lib);\r
-use DBICTest;\r
-my $schema = DBICTest->init_schema();\r
-\r
-plan tests => 16;\r
-\r
-# select from a class with resultset_attributes\r
-my $resultset = $schema->resultset('BooksInLibrary');\r
-is($resultset, 3, "select from a class with resultset_attributes okay");\r
-\r
-# now test out selects through a resultset\r
-my $owner = $schema->resultset('Owners')->find({name => "Newton"});\r
-my $programming_perl = $owner->books->find_or_create({ title => "Programming Perl" });\r
-is($programming_perl->id, 1, 'select from a resultset with find_or_create for existing entry ok');\r
-\r
-# and inserts?\r
-my $see_spot;\r
-$see_spot = eval { $owner->books->find_or_create({ title => "See Spot Run" }) };\r
-if ($@) { print $@ }\r
-ok(!$@, 'find_or_create on resultset with attribute for non-existent entry did not throw');\r
-ok(defined $see_spot, 'successfully did insert on resultset with attribute for non-existent entry');\r
-\r
-my $see_spot_rs = $owner->books->search({ title => "See Spot Run" });\r
-eval { $see_spot_rs->delete(); };\r
-if ($@) { print $@ }\r
-ok(!$@, 'delete on resultset with attribute did not throw');\r
-is($see_spot_rs->count(), 0, 'delete on resultset with attributes succeeded');\r
-\r
-# many_to_many tests\r
-my $collection = $schema->resultset('Collection')->search({collectionid => 1});\r
-my $pointy_objects = $collection->search_related('collection_object')->search_related('object', { type => "pointy"});\r
-my $pointy_count = $pointy_objects->count();\r
-is($pointy_count, 2, 'many_to_many explicit query through linking table with query starting from resultset count correct');\r
-\r
-$collection = $schema->resultset('Collection')->find(1);\r
-$pointy_objects = $collection->search_related('collection_object')->search_related('object', { type => "pointy"});\r
-$pointy_count = $pointy_objects->count();\r
-is($pointy_count, 2, 'many_to_many explicit query through linking table with query starting from row count correct');\r
-\r
-# use where on many_to_many query\r
-$collection = $schema->resultset('Collection')->find(1);\r
-$pointy_objects = $collection->search_related('collection_object')->search_related('object', {}, { where => { 'object.type' => 'pointy' } });\r
-is($pointy_objects->count(), 2, 'many_to_many explicit query through linking table with where starting from row count correct');\r
-\r
-$collection = $schema->resultset('Collection')->find(1);\r
-$pointy_objects = $collection->pointy_objects();\r
-$pointy_count = $pointy_objects->count();\r
-is($pointy_count, 2, 'many_to_many resultset with where in resultset attrs count correct');\r
-\r
-# add_to_$rel on many_to_many with where containing a required field\r
-eval {$collection->add_to_pointy_objects({ value => "Nail" }) };\r
-if ($@) { print $@ }\r
-ok( !$@, 'many_to_many add_to_$rel($hash) with where in relationship attrs did not throw');\r
-is($pointy_objects->count, $pointy_count+1, 'many_to_many add_to_$rel($hash) with where in relationship attrs count correct');\r
-$pointy_count = $pointy_objects->count();\r
-\r
-my $pen = $schema->resultset('TypedObject')->create({ value => "Pen", type => "pointy"});\r
-eval {$collection->add_to_pointy_objects($pen)};\r
-if ($@) { print $@ }\r
-ok( !$@, 'many_to_many add_to_$rel($object) with where in relationship attrs did not throw');\r
-is($pointy_objects->count, $pointy_count+1, 'many_to_many add_to_$rel($object) with where in relationship attrs count correct');\r
-$pointy_count = $pointy_objects->count();\r
-\r
-my $round_objects = $collection->round_objects();\r
-my $round_count = $round_objects->count();\r
-eval {$collection->add_to_objects({ value => "Wheel", type => "round" })};\r
-if ($@) { print $@ }\r
-ok( !$@, 'many_to_many add_to_$rel($hash) did not throw');\r
-is($round_objects->count, $round_count+1, 'many_to_many add_to_$rel($hash) count correct');\r
+use strict;
+use warnings;
+
+use Test::More;
+use Data::Dumper;
+use lib qw(t/lib);
+use DBICTest;
+my $schema = DBICTest->init_schema();
+
+plan tests => 19;
+
+# select from a class with resultset_attributes
+my $resultset = $schema->resultset('BooksInLibrary');
+is($resultset, 3, "select from a class with resultset_attributes okay");
+
+# now test out selects through a resultset
+my $owner = $schema->resultset('Owners')->find({name => "Newton"});
+my $programming_perl = $owner->books->find_or_create({ title => "Programming Perl" });
+is($programming_perl->id, 1, 'select from a resultset with find_or_create for existing entry ok');
+
+# and inserts?
+my $see_spot;
+$see_spot = eval { $owner->books->find_or_create({ title => "See Spot Run" }) };
+if ($@) { print $@ }
+ok(!$@, 'find_or_create on resultset with attribute for non-existent entry did not throw');
+ok(defined $see_spot, 'successfully did insert on resultset with attribute for non-existent entry');
+
+my $see_spot_rs = $owner->books->search({ title => "See Spot Run" });
+eval { $see_spot_rs->delete(); };
+if ($@) { print $@ }
+ok(!$@, 'delete on resultset with attribute did not throw');
+is($see_spot_rs->count(), 0, 'delete on resultset with attributes succeeded');
+
+# many_to_many tests
+my $collection = $schema->resultset('Collection')->search({collectionid => 1});
+my $pointy_objects = $collection->search_related('collection_object')->search_related('object', { type => "pointy"});
+my $pointy_count = $pointy_objects->count();
+is($pointy_count, 2, 'many_to_many explicit query through linking table with query starting from resultset count correct');
+
+$collection = $schema->resultset('Collection')->find(1);
+$pointy_objects = $collection->search_related('collection_object')->search_related('object', { type => "pointy"});
+$pointy_count = $pointy_objects->count();
+is($pointy_count, 2, 'many_to_many explicit query through linking table with query starting from row count correct');
+
+# use where on many_to_many query
+$collection = $schema->resultset('Collection')->find(1);
+$pointy_objects = $collection->search_related('collection_object')->search_related('object', {}, { where => { 'object.type' => 'pointy' } });
+is($pointy_objects->count(), 2, 'many_to_many explicit query through linking table with where starting from row count correct');
+
+$collection = $schema->resultset('Collection')->find(1);
+$pointy_objects = $collection->pointy_objects();
+$pointy_count = $pointy_objects->count();
+is($pointy_count, 2, 'many_to_many resultset with where in resultset attrs count correct');
+
+# add_to_$rel on many_to_many with where containing a required field
+eval {$collection->add_to_pointy_objects({ value => "Nail" }) };
+if ($@) { print $@ }
+ok( !$@, 'many_to_many add_to_$rel($hash) with where in relationship attrs did not throw');
+is($pointy_objects->count, $pointy_count+1, 'many_to_many add_to_$rel($hash) with where in relationship attrs count correct');
+$pointy_count = $pointy_objects->count();
+
+my $pen = $schema->resultset('TypedObject')->create({ value => "Pen", type => "pointy"});
+eval {$collection->add_to_pointy_objects($pen)};
+if ($@) { print $@ }
+ok( !$@, 'many_to_many add_to_$rel($object) with where in relationship attrs did not throw');
+is($pointy_objects->count, $pointy_count+1, 'many_to_many add_to_$rel($object) with where in relationship attrs count correct');
+$pointy_count = $pointy_objects->count();
+
+my $round_objects = $collection->round_objects();
+my $round_count = $round_objects->count();
+eval {$collection->add_to_objects({ value => "Wheel", type => "round" })};
+if ($@) { print $@ }
+ok( !$@, 'many_to_many add_to_$rel($hash) did not throw');
+is($round_objects->count, $round_count+1, 'many_to_many add_to_$rel($hash) count correct');
+
+# test set_$rel
+$round_count = $round_objects->count();
+$pointy_count = $pointy_objects->count();
+my @all_pointy_objects = $pointy_objects->all;
+# doing a set on pointy objects with its current set should not change any counts
+eval {$collection->set_pointy_objects(\@all_pointy_objects)};
+if ($@) { print $@ }
+ok( !$@, 'many_to_many set_$rel(\@objects) did not throw');
+is($pointy_objects->count, $pointy_count, 'many_to_many set_$rel($hash) count correct');
+is($round_objects->count, $round_count, 'many_to_many set_$rel($hash) other rel count correct');
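# Sketch of the helper methods a many_to_many declaration generates (names
# match the 'pointy_objects' bridge exercised above):
#
#   my $coll = $schema->resultset('Collection')->find(1);
#   my @objs = $coll->pointy_objects->all;              # accessor / resultset
#   $coll->add_to_pointy_objects({ value => 'Nail' });  # create and link
#   $coll->set_pointy_objects(\@objs);                  # replace the whole link set
#   $coll->remove_from_pointy_objects($objs[0]);        # unlink a single row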
+++ /dev/null
-use strict;
-use warnings;
-
-use Test::More;
-use lib qw(t/lib);
-use DBICTest;
-
-my $schema = DBICTest->init_schema;
-
-BEGIN {
- eval "use DBD::SQLite";
- plan $@
- ? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 7 );
-}
-
-### $schema->storage->debug(1);
-
-my $where_bind = {
- where => \'name like ?',
- bind => [ 'Cat%' ],
-};
-
-my $rs;
-
-TODO: {
- local $TODO = 'bind args order needs fixing (semifor)';
-
- # First, the simple cases...
- $rs = $schema->resultset('Artist')->search(
- { artistid => 1 },
- $where_bind,
- );
-
- is ( $rs->count, 1, 'where/bind combined' );
-
- $rs= $schema->resultset('Artist')->search({}, $where_bind)
- ->search({ artistid => 1});
-
- is ( $rs->count, 1, 'where/bind first' );
-
- $rs = $schema->resultset('Artist')->search({ artistid => 1})
- ->search({}, $where_bind);
-
- is ( $rs->count, 1, 'where/bind last' );
-}
-
-# More complex cases, based primarily on the Cookbook
-# "Arbitrary SQL through a custom ResultSource" technique,
-# which seems to be the only place the bind attribute is
-# documented. Breaking this technique probably breaks existing
-# application code.
-my $source = DBICTest::Artist->result_source_instance;
-my $new_source = $source->new($source);
-$new_source->source_name('Complex');
-
-$new_source->name(\<<'');
-( select a.*, cd.cdid as cdid, cd.title as title, cd.year as year
- from artist a
- join cd on cd.artist=a.artistid
- where cd.year=?)
-
-$schema->register_extra_source('Complex' => $new_source);
-
-$rs = $schema->resultset('Complex')->search({}, { bind => [ 1999 ] });
-is ( $rs->count, 1, 'cookbook arbitrary sql example' );
-
-$rs = $schema->resultset('Complex')->search({ 'artistid' => 1 }, { bind => [ 1999 ] });
-is ( $rs->count, 1, '...coobook + search condition' );
-
-$rs = $schema->resultset('Complex')->search({}, { bind => [ 1999 ] })
- ->search({ 'artistid' => 1 });
-is ( $rs->count, 1, '...cookbook (bind first) + chained search' );
-
-TODO: {
- # not sure what causes an uninit warning here, please remove when the TODO starts to pass,
- # so the real reason for the warning can be found and fixed
- local $SIG{__WARN__} = sub { warn @_ unless $_[0] =~ /uninitialized/ };
-
- local $TODO = 'bind args order needs fixing (semifor)';
- $rs = $schema->resultset('Complex')->search({}, { bind => [ 1999 ] })
- ->search({ 'artistid' => 1 }, {
- where => \'title like ?',
- bind => [ 'Spoon%' ] });
- is ( $rs->count, 1, '...cookbook + chained search with extra bind' );
-}
use lib qw(t/lib);
BEGIN {
- eval { require Test::Memory::Cycle };
- if ($@) {
- plan skip_all => "leak test needs Test::Memory::Cycle";
+ eval { require Test::Memory::Cycle; require Devel::Cycle };
+ if ($@ or Devel::Cycle->VERSION < 1.10) {
+ plan skip_all => "leak test needs Test::Memory::Cycle and Devel::Cycle >= 1.10";
} else {
plan tests => 1;
}
+++ /dev/null
-use Test::More;
-use strict;
-use warnings;
-use lib qw(t/lib);
-use DBICTest;
-
-plan tests => 9;
-
-# This set of tests attempts to do a delete on a chained resultset, which
-# would lead to SQL DELETE with a JOIN, which is not supported by the
-# SQL generator right now.
-# So it currently checks that these operations fail with a warning.
-# When the SQL generator is fixed this test will need fixing up appropriately.
-
-my $schema = DBICTest->init_schema();
-my $total_tracks = $schema->resultset('Track')->count;
-cmp_ok($total_tracks, '>', 0, 'need track records');
-
-# test that delete_related w/o conditions deletes all related records only
-{
- my $w;
- local $SIG{__WARN__} = sub { $w = shift };
-
- my $artist = $schema->resultset("Artist")->find(3);
- my $artist_tracks = $artist->cds->search_related('tracks')->count;
- cmp_ok($artist_tracks, '<', $total_tracks, 'need more tracks than just related tracks');
-
- ok(!eval{$artist->cds->search_related('tracks')->delete});
- cmp_ok($schema->resultset('Track')->count, '==', $total_tracks, 'No tracks should be deleted');
- like ($w, qr/Currently \$rs->delete\(\) does not generate proper SQL/, 'Delete join warning');
-}
-
-# test that delete_related w/conditions deletes just the matched related records only
-{
- my $w;
- local $SIG{__WARN__} = sub { $w = shift };
-
- my $artist2 = $schema->resultset("Artist")->find(2);
- my $artist2_tracks = $artist2->search_related('cds')->search_related('tracks')->count;
- cmp_ok($artist2_tracks, '<', $total_tracks, 'need more tracks than related tracks');
-
- ok(!eval{$artist2->search_related('cds')->search_related('tracks')->delete});
- cmp_ok($schema->resultset('Track')->count, '==', $total_tracks, 'No tracks should be deleted');
- like ($w, qr/Currently \$rs->delete\(\) does not generate proper SQL/, 'Delete join warning');
-}
+++ /dev/null
-use Test::More;
-use strict;
-use warnings;
-use lib qw(t/lib);
-use DBICTest;
-
-plan tests => 7;
-
-my $schema = DBICTest->init_schema();
-my $total_cds = $schema->resultset('CD')->count;
-cmp_ok($total_cds, '>', 0, 'need cd records');
-
-# test that delete_related w/o conditions deletes all related records only
-my $artist = $schema->resultset("Artist")->find(3);
-my $artist_cds = $artist->cds->count;
-cmp_ok($artist_cds, '<', $total_cds, 'need more cds than just related cds');
-
-ok($artist->delete_related('cds'));
-cmp_ok($schema->resultset('CD')->count, '==', ($total_cds - $artist_cds), 'wrong number of cds were deleted');
-
-$total_cds -= $artist_cds;
-
-# test that delete_related w/conditions deletes just the matched related records only
-my $artist2 = $schema->resultset("Artist")->find(2);
-my $artist2_cds = $artist2->search_related('cds')->count;
-cmp_ok($artist2_cds, '<', $total_cds, 'need more cds than related cds');
-
-ok($artist2->delete_related('cds', {title => {like => '%'}}));
-cmp_ok($schema->resultset('CD')->count, '==', ($total_cds - $artist2_cds), 'wrong number of cds were deleted');
-
use Test::Exception;
use lib qw(t/lib);
use DBICTest;
+use DBIC::SqlMakerTest;
my $schema = DBICTest->init_schema();
-plan tests => 95;
-
-eval { require DateTime::Format::MySQL };
+eval { require DateTime::Format::SQLite };
my $NO_DTFM = $@ ? 1 : 0;
-# figure out if we've got a version of sqlite that is older than 3.2.6, in
-# which case COUNT(DISTINCT()) doesn't work
-my $is_broken_sqlite = 0;
-my ($sqlite_major_ver,$sqlite_minor_ver,$sqlite_patch_ver) =
- split /\./, $schema->storage->dbh->get_info(18);
-if( $schema->storage->dbh->get_info(17) eq 'SQLite' &&
- ( ($sqlite_major_ver < 3) ||
- ($sqlite_major_ver == 3 && $sqlite_minor_ver < 2) ||
- ($sqlite_major_ver == 3 && $sqlite_minor_ver == 2 && $sqlite_patch_ver < 6) ) ) {
- $is_broken_sqlite = 1;
-}
-
-
my @art = $schema->resultset("Artist")->search({ }, { order_by => 'name DESC'});
-cmp_ok(@art, '==', 3, "Three artists returned");
+is(@art, 3, "Three artists returned");
my $art = $art[0];
is($art->name, 'We Are In Rehab', "Accessor update ok");
my %dirty = $art->get_dirty_columns();
-cmp_ok(scalar(keys(%dirty)), '==', 1, '1 dirty column');
+is(scalar(keys(%dirty)), 1, '1 dirty column');
ok(grep($_ eq 'name', keys(%dirty)), 'name is dirty');
is($art->get_column("name"), 'We Are In Rehab', 'And via get_column');
ok($art->update, 'Update run');
my %not_dirty = $art->get_dirty_columns();
-cmp_ok(scalar(keys(%not_dirty)), '==', 0, 'Nothing is dirty');
+is(scalar(keys(%not_dirty)), 0, 'Nothing is dirty');
eval {
my $ret = $art->make_column_dirty('name2');
ok(defined($@), 'Failed to make non-existent column dirty');
$art->make_column_dirty('name');
my %fake_dirty = $art->get_dirty_columns();
-cmp_ok(scalar(keys(%fake_dirty)), '==', 1, '1 fake dirty column');
+is(scalar(keys(%fake_dirty)), 1, '1 fake dirty column');
ok(grep($_ eq 'name', keys(%fake_dirty)), 'name is fake dirty');
my $record_jp = $schema->resultset("Artist")->search(undef, { join => 'cds' })->search(undef, { prefetch => 'cds' })->next;
@art = $schema->resultset("Artist")->search({ name => 'We Are In Rehab' });
-cmp_ok(@art, '==', 1, "Changed artist returned by search");
+is(@art, 1, "Changed artist returned by search");
-cmp_ok($art[0]->artistid, '==', 3,'Correct artist too');
+is($art[0]->artistid, 3,'Correct artist too');
lives_ok (sub { $art->delete }, 'Cascading delete on Ordered has_many works' ); # real test in ordered.t
@art = $schema->resultset("Artist")->search({ });
-cmp_ok(@art, '==', 2, 'And then there were two');
+is(@art, 2, 'And then there were two');
ok(!$art->in_storage, "It knows it's dead");
@art = $schema->resultset("Artist")->search({ });
-cmp_ok(@art, '==', 3, 'And now there are three again');
+is(@art, 3, 'And now there are three again');
my $new = $schema->resultset("Artist")->create({ artistid => 4 });
-cmp_ok($new->artistid, '==', 4, 'Create produced record ok');
+is($new->artistid, 4, 'Create produced record ok');
@art = $schema->resultset("Artist")->search({ });
-cmp_ok(@art, '==', 4, "Oh my god! There's four of them!");
+is(@art, 4, "Oh my god! There's four of them!");
$new->set_column('name' => 'Man With A Fork');
my $cd = $schema->resultset("CD")->find(1);
my %cols = $cd->get_columns;
-cmp_ok(keys %cols, '==', 6, 'get_columns number of columns ok');
+is(keys %cols, 6, 'get_columns number of columns ok');
is($cols{title}, 'Spoonful of bees', 'get_columns values ok');
# get_inflated_columns w/relation and accessor alias
SKIP: {
- skip "This test requires DateTime::Format::MySQL", 8 if $NO_DTFM;
+ skip "This test requires DateTime::Format::SQLite", 8 if $NO_DTFM;
isa_ok($new->updated_date, 'DateTime', 'have inflated object via accessor');
my %tdata = $new->get_inflated_columns;
my( $or_rs ) = $schema->resultset("CD")->search_rs($search, { join => 'tags',
order_by => 'cdid' });
+is($or_rs->all, 5, 'Joined search with OR returned correct number of rows');
+is($or_rs->count, 5, 'Search count with OR ok');
-cmp_ok($or_rs->count, '==', 5, 'Search with OR ok');
-
-my $distinct_rs = $schema->resultset("CD")->search($search, { join => 'tags', distinct => 1 });
-cmp_ok($distinct_rs->all, '==', 4, 'DISTINCT search with OR ok');
+my $collapsed_or_rs = $or_rs->search ({}, { distinct => 1 }); # induce collapse
+is ($collapsed_or_rs->all, 4, 'Collapsed joined search with OR returned correct number of rows');
+is ($collapsed_or_rs->count, 4, 'Collapsed search count with OR ok');
-SKIP: {
- skip "SQLite < 3.2.6 doesn't understand COUNT(DISTINCT())", 2
- if $is_broken_sqlite;
+{
+ my $tcount = $schema->resultset('Track')->search(
+ {},
+ {
+ select => [ qw/position title/ ],
+ distinct => 1,
+ }
+ );
+ is($tcount->count, 13, 'multiple column COUNT DISTINCT ok');
- my $tcount = $schema->resultset("Track")->search(
+ $tcount = $schema->resultset('Track')->search(
{},
- {
- select => {count => {distinct => ['position', 'title']}},
- as => ['count']
+ {
+ columns => [ qw/position title/ ],
+ distinct => 1,
}
);
- cmp_ok($tcount->next->get_column('count'), '==', 13, 'multiple column COUNT DISTINCT ok');
+ is($tcount->count, 13, 'multiple column COUNT DISTINCT ok');
- $tcount = $schema->resultset("Track")->search(
+ $tcount = $schema->resultset('Track')->search(
{},
- {
- columns => {count => {count => {distinct => ['position', 'title']}}},
+ {
+ group_by => [ qw/position title/ ]
}
);
- cmp_ok($tcount->next->get_column('count'), '==', 13, 'multiple column COUNT DISTINCT using column syntax ok');
+  is($tcount->count, 13, 'multiple column COUNT DISTINCT via explicit group_by ok');
}
my $tag_rs = $schema->resultset('Tag')->search(
my $rel_rs = $tag_rs->search_related('cd');
-cmp_ok($rel_rs->count, '==', 5, 'Related search ok');
+is($rel_rs->count, 5, 'Related search ok');
-cmp_ok($or_rs->next->cdid, '==', $rel_rs->next->cdid, 'Related object ok');
+is($or_rs->next->cdid, $rel_rs->next->cdid, 'Related object ok');
$or_rs->reset;
$rel_rs->reset;
my $tag = $schema->resultset('Tag')->search(
[ { 'me.tag' => 'Blue' } ], { cols=>[qw/tagid/] } )->next;
-cmp_ok($tag->has_column_loaded('tagid'), '==', 1, 'Has tagid loaded');
-cmp_ok($tag->has_column_loaded('tag'), '==', 0, 'Has not tag loaded');
+ok($tag->has_column_loaded('tagid'), 'Has tagid loaded');
+ok(!$tag->has_column_loaded('tag'), 'Has not tag loaded');
ok($schema->storage(), 'Storage available');
ok($schema->source('SourceNameArtists'), 'SourceNameArtists result source exists');
my @artsn = $schema->resultset('SourceNameArtists')->search({}, { order_by => 'name DESC' });
- cmp_ok(@artsn, '==', 4, "Four artists returned");
+ is(@artsn, 4, "Four artists returned");
# make sure subclasses that don't set source_name are ok
ok($schema->source('ArtistSubclass'), 'ArtistSubclass exists');
{
my $art_del = $schema->resultset("Artist")->find({ artistid => 1 });
lives_ok (sub { $art_del->delete }, 'Cascading delete on Ordered has_many works' ); # real test in ordered.t
- cmp_ok( $schema->resultset("CD")->search({artist => 1}), '==', 0, 'Cascading through has_many top level.');
- cmp_ok( $schema->resultset("CD_to_Producer")->search({cd => 1}), '==', 0, 'Cascading through has_many children.');
+ is( $schema->resultset("CD")->search({artist => 1}), 0, 'Cascading through has_many top level.');
+ is( $schema->resultset("CD_to_Producer")->search({cd => 1}), 0, 'Cascading through has_many children.');
}
# test column_info
# test get_inflated_columns with objects
SKIP: {
- skip "This test requires DateTime::Format::MySQL", 5 if $NO_DTFM;
+ skip "This test requires DateTime::Format::SQLite", 5 if $NO_DTFM;
my $event = $schema->resultset('Event')->search->first;
my %edata = $event->get_inflated_columns;
is($edata{'id'}, $event->id, 'got id');
$en_row->insert;
is($en_row->encoded, 'amliw', 'insert does not encode again');
}
+
+# make sure we got rid of the compat shims
+SKIP: {
+ skip "Remove in 0.09", 5 if $DBIx::Class::VERSION < 0.09;
+
+ for (qw/compare_relationship_keys pk_depends_on resolve_condition resolve_join resolve_prefetch/) {
+ ok (! DBIx::Class::ResultSource->can ($_), "$_ no longer provided by DBIx::Class::ResultSource");
+ }
+}
+
+#------------------------------
+# READ THIS BEFORE "FIXING"
+#------------------------------
+#
+# make sure we got rid of the discard_changes mess - it is a constant source
+# of confusion. For now this test simply fails if the methods are still
+# available, which is wrong on its own (we *have* to provide some sort of
+# back-compat, even if with warnings). Here is how I envision things should
+# actually work. Also note that a lot of the deprecation can be started
+# today (i.e. the switch from get_from_storage to copy_from_storage). So:
+#
+# $row->discard_changes =>
+# warning, and delegation to reload_from_storage
+#
+# $row->reload_from_storage =>
+#   does what discard_changes did in 0.08 - issues a query to the db
+# and repopulates all column slots, regardless of dirty states etc.
+#
+# $row->revert_changes =>
+# does what discard_changes should have done initially (before it became
+# a dual-purpose call). In order to make this work we will have to
+#   augment $row to carry its own initial state, much like svn keeps a
+#   pristine copy of the checkout locally, in contrast to cvs.
+#
+# my $db_row = $row->get_from_storage =>
+# warns and delegates to an improved name copy_from_storage, with the
+# same semantics
+#
+# my $db_row = $row->copy_from_storage =>
+# a much better/descriptive name than get_from_storage
+#
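+# A rough, non-authoritative sketch of what the back-compat shims could look
+# like under the naming envisioned above (reload_from_storage and
+# copy_from_storage do not exist yet - those names are assumptions taken
+# from the notes above, and carp is assumed available via Carp):
+#
+#   sub discard_changes {
+#     my $self = shift;
+#     carp 'discard_changes() is deprecated, use reload_from_storage()';
+#     return $self->reload_from_storage(@_);
+#   }
+#
+#   sub get_from_storage {
+#     my $self = shift;
+#     carp 'get_from_storage() is deprecated, use copy_from_storage()';
+#     return $self->copy_from_storage(@_);
+#   }
+#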
+#------------------------------
+# READ THIS BEFORE "FIXING"
+#------------------------------
+#
+SKIP: {
+ skip "Something needs to be done before 0.09", 2 if $DBIx::Class::VERSION < 0.09;
+
+ my $row = $schema->resultset ('Artist')->next;
+
+ for (qw/discard_changes get_from_storage/) {
+ ok (! $row->can ($_), "$_ needs *some* sort of facelift before 0.09 ships - current state of affairs is unacceptable");
+ }
+}
+
+throws_ok { $schema->resultset} qr/resultset\(\) expects a source name/, 'resultset with no argument throws exception';
+
+done_testing;
use strict;
use warnings;
-use Test::More tests => 3;
+use Test::More tests => 2;
use lib qw(t/lib);
use DBICTest;
use DBICTest::Schema;
use DBICTest::Schema::Artist;
DBICTest::Schema::Artist->source_name('MyArtist');
-{
- my $w;
- local $SIG{__WARN__} = sub { $w = shift };
- DBICTest::Schema->register_class('FooA', 'DBICTest::Schema::Artist');
- like ($w, qr/use register_extra_source/, 'Complain about using register_class on an already-registered class');
-}
+DBICTest::Schema->register_class('FooA', 'DBICTest::Schema::Artist');
my $schema = DBICTest->init_schema();
--- /dev/null
+use strict;
+use warnings;
+
+use Test::Exception tests => 1;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::Schema;
+use DBIx::Class::ResultSource::Table;
+
+my $schema = DBICTest->init_schema();
+
+my $foo = DBIx::Class::ResultSource::Table->new({ name => "foo" });
+my $bar = DBIx::Class::ResultSource::Table->new({ name => "bar" });
+
+lives_ok {
+ $schema->register_source(foo => $foo);
+ $schema->register_source(bar => $bar);
+} 'multiple classless sources can be registered';
'rank' => {
'data_type' => 'integer',
'is_nullable' => 0,
+ 'default_value' => '13',
+ },
+ 'charfield' => {
+ 'data_type' => 'char',
+ 'is_nullable' => 1,
},
},
'Correctly retrieve column info (mixed null and non-null columns)'
is( $it->count, 3, "count on paged rs ok" );
+is( $it->pager->total_entries, 5, "total_entries ok" );
+
is( $it->next->title, "Caterwaulin' Blues", "iterator->next ok" );
$it->next;
$schema->default_resultset_attributes({ rows => 5 });
is($p->(), 5, 'default rows is 5');
+
+# test page with offset
+$it = $schema->resultset('CD')->search({}, {
+ rows => 2,
+ page => 2,
+ offset => 1,
+ order_by => 'cdid'
+});
+
+my $row = $schema->resultset('CD')->search({}, {
+ order_by => 'cdid',
+ offset => 3,
+ rows => 1
+})->single;
+
+is($row->cdid, $it->first->cdid, 'page with offset');
use warnings;
use Test::More;
+use Test::Exception;
use lib qw(t/lib);
use DBICTest;
use DBI::Const::GetInfoType;
plan skip_all => 'Set $ENV{DBICTEST_MYSQL_DSN}, _USER and _PASS to run this test'
unless ($dsn && $user);
-plan tests => 10;
+plan tests => 19;
my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
$dbh->do("CREATE TABLE artist (artistid INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100), rank INTEGER NOT NULL DEFAULT '13', charfield CHAR(10));");
+$dbh->do("DROP TABLE IF EXISTS cd;");
+
+$dbh->do("CREATE TABLE cd (cdid INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, artist INTEGER, title TEXT, year INTEGER, genreid INTEGER, single_track INTEGER);");
+
+$dbh->do("DROP TABLE IF EXISTS producer;");
+
+$dbh->do("CREATE TABLE producer (producerid INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, name TEXT);");
+
+$dbh->do("DROP TABLE IF EXISTS cd_to_producer;");
+
+$dbh->do("CREATE TABLE cd_to_producer (cd INTEGER,producer INTEGER);");
+
+$dbh->do("DROP TABLE IF EXISTS owners;");
+
+$dbh->do("CREATE TABLE owners (id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100) NOT NULL);");
+
+$dbh->do("DROP TABLE IF EXISTS books;");
+
+$dbh->do("CREATE TABLE books (id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, source VARCHAR(100) NOT NULL, owner integer NOT NULL, title varchar(100) NOT NULL, price integer);");
+
#'dbi:mysql:host=localhost;database=dbic_test', 'dbic_test', '');
# This is in Core now, but it's here just to test that it doesn't break
offset => 2,
order_by => 'artistid' }
);
-is( $it->count, 3, "LIMIT count ok" );
+is( $it->count, 3, "LIMIT count ok" ); # ask for 3 rows out of 7 artists
is( $it->next->name, "Artist 2", "iterator->next ok" );
$it->next;
$it->next;
},
};
+$schema->populate ('Owners', [
+ [qw/id name /],
+ [qw/1 wiggle/],
+ [qw/2 woggle/],
+ [qw/3 boggle/],
+]);
+
+$schema->populate ('BooksInLibrary', [
+ [qw/source owner title /],
+ [qw/Library 1 secrets1/],
+ [qw/Eatery 1 secrets2/],
+ [qw/Library 2 secrets3/],
+]);
+
+#
+# try a distinct + prefetch on tables with identically named columns
+# (mysql doesn't seem to like subqueries with equally named columns)
+#
+
+{
+ # try a ->has_many direction (due to a 'multi' accessor the select/group_by group is collapsed)
+ my $owners = $schema->resultset ('Owners')->search (
+ { 'books.id' => { '!=', undef }},
+ { prefetch => 'books', distinct => 1 }
+ );
+ my $owners2 = $schema->resultset ('Owners')->search ({ id => { -in => $owners->get_column ('me.id')->as_query }});
+ for ($owners, $owners2) {
+ is ($_->all, 2, 'Prefetched grouped search returns correct number of rows');
+ is ($_->count, 2, 'Prefetched grouped search returns correct count');
+ }
+
+ # try a ->belongs_to direction (no select collapse)
+ my $books = $schema->resultset ('BooksInLibrary')->search (
+ { 'owner.name' => 'wiggle' },
+ { prefetch => 'owner', distinct => 1 }
+ );
+ my $books2 = $schema->resultset ('BooksInLibrary')->search ({ id => { -in => $books->get_column ('me.id')->as_query }});
+ for ($books, $books2) {
+ is ($_->all, 1, 'Prefetched grouped search returns correct number of rows');
+ is ($_->count, 1, 'Prefetched grouped search returns correct count');
+ }
+}
+
SKIP: {
my $mysql_version = $dbh->get_info( $GetInfoType{SQL_DBMS_VER} );
skip "Cannot determine MySQL server version", 1 if !$mysql_version;
is_deeply($type_info, $test_type_info, 'columns_info_for - column data types');
}
+my $cd = $schema->resultset ('CD')->create ({});
+my $producer = $schema->resultset ('Producer')->create ({});
+lives_ok { $cd->set_producers ([ $producer ]) } 'set_relationship does not die';
+
+
## Can we properly deal with the null search problem?
##
## Only way is to do a SET SQL_AUTO_IS_NULL = 0; on connect
## But I'm not sure if we should do this or not (Ash, 2008/06/03)
+#
+# There is now a built-in function to do this, test that everything works
+# with it (ribasushi, 2009/07/03)
NULLINSEARCH: {
-
- ok my $artist1_rs = $schema->resultset('Artist')->search({artistid=>6666})
- => 'Created an artist resultset of 6666';
-
+ my $ansi_schema = DBICTest::Schema->connect ($dsn, $user, $pass, { on_connect_call => 'set_strict_mode' });
+
+ $ansi_schema->resultset('Artist')->create ({ name => 'last created artist' });
+
+ ok my $artist1_rs = $ansi_schema->resultset('Artist')->search({artistid=>6666})
+ => 'Created an artist resultset of 6666';
+
is $artist1_rs->count, 0
- => 'Got no returned rows';
-
- ok my $artist2_rs = $schema->resultset('Artist')->search({artistid=>undef})
- => 'Created an artist resultset of undef';
-
- TODO: {
- $TODO = "need to fix the row count =1 when select * from table where pk IS NULL problem";
- is $artist2_rs->count, 0
- => 'got no rows';
- }
+ => 'Got no returned rows';
+
+ ok my $artist2_rs = $ansi_schema->resultset('Artist')->search({artistid=>undef})
+ => 'Created an artist resultset of undef';
+
+ is $artist2_rs->count, 0
+ => 'got no rows';
my $artist = $artist2_rs->single;
-
+
is $artist => undef
- => 'Nothing Found!';
+ => 'Nothing Found!';
}
-
-
-# clean up our mess
-END {
- #$dbh->do("DROP TABLE artist") if $dbh;
-}
\ No newline at end of file
use strict;
-use warnings;
+use warnings;
use Test::More;
use Test::Exception;
__PACKAGE__->load_components(qw/Core/);
__PACKAGE__->table('testschema.casecheck');
- __PACKAGE__->add_columns(qw/id name NAME uc_name/);
+ __PACKAGE__->add_columns(qw/id name NAME uc_name storecolumn/);
__PACKAGE__->column_info_from_storage(1);
__PACKAGE__->set_primary_key('id');
+ sub store_column {
+ my ($self, $name, $value) = @_;
+ $value = '#'.$value if($name eq "storecolumn");
+ $self->maybe::next::method($name, $value);
+ }
}
{
plan skip_all => 'Set $ENV{DBICTEST_PG_DSN}, _USER and _PASS to run this test '.
'(note: This test drops and creates tables called \'artist\', \'casecheck\', \'array_test\' and \'sequence_test\''.
' as well as following sequences: \'pkid1_seq\', \'pkid2_seq\' and \'nonpkid_seq\''.
- ' as well as following schemas: \'testschema\'!)'
+  ' as well as following schemas: \'testschema\', \'anothertestschema\' and \'yetanothertestschema\'!)'
unless ($dsn && $user);
-
-plan tests => 37;
-
DBICTest::Schema->load_classes( 'Casecheck', 'ArrayTest' );
-my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
+my $schema = DBICTest::Schema->connect($dsn, $user, $pass,);
# Check that datetime_parser returns correctly before we explicitly connect.
SKIP: {
$schema->source("SequenceTest")->name("testschema.sequence_test");
{
local $SIG{__WARN__} = sub {};
+ _cleanup ($dbh);
+
+ my $artist_table_def = <<EOS;
+(
+ artistid serial PRIMARY KEY
+ , name VARCHAR(100)
+ , rank INTEGER NOT NULL DEFAULT '13'
+ , charfield CHAR(10)
+ , arrayfield INTEGER[]
+)
+EOS
$dbh->do("CREATE SCHEMA testschema;");
- $dbh->do("CREATE TABLE testschema.artist (artistid serial PRIMARY KEY, name VARCHAR(100), rank INTEGER NOT NULL DEFAULT '13', charfield CHAR(10), arrayfield INTEGER[]);");
+ $dbh->do("CREATE TABLE testschema.artist $artist_table_def;");
$dbh->do("CREATE TABLE testschema.sequence_test (pkid1 integer, pkid2 integer, nonpkid integer, name VARCHAR(100), CONSTRAINT pk PRIMARY KEY(pkid1, pkid2));");
$dbh->do("CREATE SEQUENCE pkid1_seq START 1 MAXVALUE 999999 MINVALUE 0");
$dbh->do("CREATE SEQUENCE pkid2_seq START 10 MAXVALUE 999999 MINVALUE 0");
$dbh->do("CREATE SEQUENCE nonpkid_seq START 20 MAXVALUE 999999 MINVALUE 0");
- ok ( $dbh->do('CREATE TABLE testschema.casecheck (id serial PRIMARY KEY, "name" VARCHAR(1), "NAME" VARCHAR(2), "UC_NAME" VARCHAR(3));'), 'Creation of casecheck table');
+ ok ( $dbh->do('CREATE TABLE testschema.casecheck (id serial PRIMARY KEY, "name" VARCHAR(1), "NAME" VARCHAR(2), "UC_NAME" VARCHAR(3), "storecolumn" VARCHAR(10));'), 'Creation of casecheck table');
ok ( $dbh->do('CREATE TABLE testschema.array_test (id serial PRIMARY KEY, arrayfield INTEGER[]);'), 'Creation of array_test table');
+ $dbh->do("CREATE SCHEMA anothertestschema;");
+ $dbh->do("CREATE TABLE anothertestschema.artist $artist_table_def;");
+ $dbh->do("CREATE SCHEMA yetanothertestschema;");
+ $dbh->do("CREATE TABLE yetanothertestschema.artist $artist_table_def;");
+ $dbh->do('set search_path=testschema,public');
}
+# store_column is called only once per column during create() for non-sequence columns
+
+ok(my $storecolumn = $schema->resultset('Casecheck')->create({'storecolumn' => 'a'}));
+
+is($storecolumn->storecolumn, '#a', 'store_column called only once on create'); # was '##a'
+
+
# This is in Core now, but it's here just to test that it doesn't break
$schema->class('Artist')->load_components('PK::Auto');
+cmp_ok( $schema->resultset('Artist')->count, '==', 0, 'this should start with an empty artist table');
+
+{ # test that auto-pk also works with the defined search path by
+ # un-schema-qualifying the table name
+ my $artist_name_save = $schema->source("Artist")->name;
+ $schema->source("Artist")->name("artist");
+
+ my $unq_new;
+ lives_ok {
+ $unq_new = $schema->resultset('Artist')->create({ name => 'baz' });
+ } 'insert into unqualified, shadowed table succeeds';
+
+ is($unq_new && $unq_new->artistid, 1, "and got correct artistid");
+
+ #test with anothertestschema
+ $schema->source('Artist')->name('anothertestschema.artist');
+ my $another_new = $schema->resultset('Artist')->create({ name => 'ribasushi'});
+  is( $another_new->artistid,1, 'got correct artistid for anothertestschema');
+
+ #test with yetanothertestschema
+ $schema->source('Artist')->name('yetanothertestschema.artist');
+ my $yetanother_new = $schema->resultset('Artist')->create({ name => 'ribasushi'});
+  is( $yetanother_new->artistid,1, 'got correct artistid for yetanothertestschema');
+
+ $schema->source("Artist")->name($artist_name_save);
+}
+
my $new = $schema->resultset('Artist')->create({ name => 'foo' });
-is($new->artistid, 1, "Auto-PK worked");
+is($new->artistid, 2, "Auto-PK worked");
$new = $schema->resultset('Artist')->create({ name => 'bar' });
-is($new->artistid, 2, "Auto-PK worked");
+is($new->artistid, 3, "Auto-PK worked");
+
my $test_type_info = {
'artistid' => {
is_deeply($type_info, $test_type_info,
'columns_info_for - column data types');
-{
+SKIP: {
+ skip "Need DBD::Pg 2.9.2 or newer for array tests", 4 if $DBD::Pg::VERSION < 2.009002;
+
lives_ok {
$schema->resultset('ArrayTest')->create({
arrayfield => [1, 2],
my $count;
lives_ok {
$count = $schema->resultset('ArrayTest')->search({
- arrayfield => \[ '= ?' => [arrayfield => [3, 4]] ], #TODO anything less ugly than this?
+ arrayfield => \[ '= ?' => [arrayfield => [3, 4]] ], #Todo anything less ugly than this?
})->count;
} 'comparing arrayref to pg array data does not blow up';
is($count, 1, 'comparing arrayref to pg array data gives correct result');
});
}
-SKIP: {
- skip "Oracle Auto-PK tests are broken", 16;
-
- # test auto increment using sequences WITHOUT triggers
- for (1..5) {
+for (1..5) {
my $st = $schema->resultset('SequenceTest')->create({ name => 'foo' });
is($st->pkid1, $_, "Oracle Auto-PK without trigger: First primary key");
is($st->pkid2, $_ + 9, "Oracle Auto-PK without trigger: Second primary key");
is($st->nonpkid, $_ + 19, "Oracle Auto-PK without trigger: Non-primary key");
+}
+my $st = $schema->resultset('SequenceTest')->create({ name => 'foo', pkid1 => 55 });
+is($st->pkid1, 55, "Oracle Auto-PK without trigger: First primary key set manually");
+
+sub _cleanup {
+ my $dbh = shift or return;
+
+ for my $stat (
+ 'DROP TABLE testschema.artist',
+ 'DROP TABLE testschema.casecheck',
+ 'DROP TABLE testschema.sequence_test',
+ 'DROP TABLE testschema.array_test',
+ 'DROP SEQUENCE pkid1_seq',
+ 'DROP SEQUENCE pkid2_seq',
+ 'DROP SEQUENCE nonpkid_seq',
+ 'DROP SCHEMA testschema',
+ 'DROP TABLE anothertestschema.artist',
+ 'DROP SCHEMA anothertestschema',
+ 'DROP TABLE yetanothertestschema.artist',
+ 'DROP SCHEMA yetanothertestschema',
+ ) {
+ eval { $dbh->do ($stat) };
}
- my $st = $schema->resultset('SequenceTest')->create({ name => 'foo', pkid1 => 55 });
- is($st->pkid1, 55, "Oracle Auto-PK without trigger: First primary key set manually");
}
-END {
- if($dbh) {
- $dbh->do("DROP TABLE testschema.artist;");
- $dbh->do("DROP TABLE testschema.casecheck;");
- $dbh->do("DROP TABLE testschema.sequence_test;");
- $dbh->do("DROP TABLE testschema.array_test;");
- $dbh->do("DROP SEQUENCE pkid1_seq");
- $dbh->do("DROP SEQUENCE pkid2_seq");
- $dbh->do("DROP SEQUENCE nonpkid_seq");
- $dbh->do("DROP SCHEMA testschema;");
- }
-}
+done_testing;
+END { _cleanup($dbh) }
use strict;
use warnings;
+use Test::Exception;
use Test::More;
use lib qw(t/lib);
use DBICTest;
' as well as following sequences: \'pkid1_seq\', \'pkid2_seq\' and \'nonpkid_seq\''
unless ($dsn && $user && $pass);
-plan tests => 24;
+plan tests => 35;
DBICTest::Schema->load_classes('ArtistFQN');
my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
$dbh->do("CREATE TABLE artist (artistid NUMBER(12), name VARCHAR(255), rank NUMBER(38), charfield VARCHAR2(10))");
$dbh->do("CREATE TABLE sequence_test (pkid1 NUMBER(12), pkid2 NUMBER(12), nonpkid NUMBER(12), name VARCHAR(255))");
$dbh->do("CREATE TABLE cd (cdid NUMBER(12), artist NUMBER(12), title VARCHAR(255), year VARCHAR(4))");
-$dbh->do("CREATE TABLE track (trackid NUMBER(12), cd NUMBER(12), position NUMBER(12), title VARCHAR(255), last_updated_on DATE)");
+$dbh->do("CREATE TABLE track (trackid NUMBER(12), cd NUMBER(12), position NUMBER(12), title VARCHAR(255), last_updated_on DATE, last_updated_at DATE, small_dt DATE)");
$dbh->do("ALTER TABLE artist ADD (CONSTRAINT artist_pk PRIMARY KEY (artistid))");
$dbh->do("ALTER TABLE sequence_test ADD (CONSTRAINT sequence_test_constraint PRIMARY KEY (pkid1, pkid2))");
END;
});
+{
+ # Swiped from t/bindtype_columns.t to avoid creating my own Resultset.
+
+ local $SIG{__WARN__} = sub {};
+ eval { $dbh->do('DROP TABLE bindtype_test') };
+
+ $dbh->do(qq[
+ CREATE TABLE bindtype_test
+ (
+ id integer NOT NULL PRIMARY KEY,
+ bytea integer NULL,
+ blob blob NULL,
+ clob clob NULL
+ )
+ ],{ RaiseError => 1, PrintError => 1 });
+}
+
# This is in Core now, but it's here just to test that it doesn't break
$schema->class('Artist')->load_components('PK::Auto');
# These are compat shims for PK::Auto...
is( $new->artistid, 2, "Oracle Auto-PK worked with fully-qualified tablename" );
# test join with row count ambiguity
+
my $cd = $schema->resultset('CD')->create({ cdid => 1, artist => 1, title => 'EP C', year => '2003' });
-my $track = $schema->resultset('Track')->create({ trackid => 1, cd => 1, position => 1, title => 'Track1' });
+my $track = $schema->resultset('Track')->create({ trackid => 1, cd => 1,
+ position => 1, title => 'Track1' });
my $tjoin = $schema->resultset('Track')->search({ 'me.title' => 'Track1'},
{ join => 'cd',
rows => 2 }
);
-is($tjoin->next->title, 'Track1', "ambiguous column ok");
+ok(my $row = $tjoin->next, 'got a row from the joined resultset');
+
+is($row->title, 'Track1', "ambiguous column ok");
# check count distinct with multiple columns
my $other_track = $schema->resultset('Track')->create({ trackid => 2, cd => 1, position => 1, title => 'Track2' });
+
my $tcount = $schema->resultset('Track')->search(
- {},
- {
- select => [{count => {distinct => ['position', 'title']}}],
- as => ['count']
- }
- );
+ {},
+ {
+ select => [ qw/position title/ ],
+ distinct => 1,
+ }
+);
+is($tcount->count, 2, 'multiple column COUNT DISTINCT ok');
+
+$tcount = $schema->resultset('Track')->search(
+ {},
+ {
+ columns => [ qw/position title/ ],
+ distinct => 1,
+ }
+);
+is($tcount->count, 2, 'multiple column COUNT DISTINCT ok');
-is($tcount->next->get_column('count'), 2, "multiple column select distinct ok");
+$tcount = $schema->resultset('Track')->search(
+ {},
+ {
+ group_by => [ qw/position title/ ]
+ }
+);
+is($tcount->count, 2, 'multiple column COUNT DISTINCT via explicit group_by ok');
# test LIMIT support
for (1..6) {
my $st = $schema->resultset('SequenceTest')->create({ name => 'foo', pkid1 => 55 });
is($st->pkid1, 55, "Oracle Auto-PK without trigger: First primary key set manually");
+{
+ my %binstr = ( 'small' => join('', map { chr($_) } ( 1 .. 127 )) );
+ $binstr{'large'} = $binstr{'small'} x 1024;
+
+ my $maxloblen = length $binstr{'large'};
+ note "Localizing LongReadLen to $maxloblen to avoid truncation of test data";
+ local $dbh->{'LongReadLen'} = $maxloblen;
+
+ my $rs = $schema->resultset('BindType');
+ my $id = 0;
+
+ foreach my $type (qw( blob clob )) {
+ foreach my $size (qw( small large )) {
+ $id++;
+
+ lives_ok { $rs->create( { 'id' => $id, $type => $binstr{$size} } ) }
+ "inserted $size $type without dying";
+ ok($rs->find($id)->$type eq $binstr{$size}, "verified inserted $size $type" );
+ }
+ }
+}
+
# clean up our mess
END {
if($schema && ($dbh = $schema->storage->dbh)) {
$dbh->do("DROP TABLE sequence_test");
$dbh->do("DROP TABLE cd");
$dbh->do("DROP TABLE track");
+ $dbh->do("DROP TABLE bindtype_test");
}
}
+++ /dev/null
-use strict;
-use warnings;
-
-use Test::More;
-use lib qw(t/lib);
-use DBICTest;
-
-my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_ORA_${_}" } qw/DSN USER PASS/};
-
-if (not ($dsn && $user && $pass)) {
- plan skip_all => 'Set $ENV{DBICTEST_ORA_DSN}, _USER and _PASS to run this test. ' .
- 'Warning: This test drops and creates a table called \'track\'';
-}
-else {
- eval "use DateTime; use DateTime::Format::Oracle;";
- if ($@) {
- plan skip_all => 'needs DateTime and DateTime::Format::Oracle for testing';
- }
- else {
- plan tests => 4;
- }
-}
-
-# DateTime::Format::Oracle needs this set
-$ENV{NLS_DATE_FORMAT} = 'DD-MON-YY';
-
-my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
-
-# Need to redefine the last_updated_on column
-my $col_metadata = $schema->class('Track')->column_info('last_updated_on');
-$schema->class('Track')->add_column( 'last_updated_on' => {
- data_type => 'date' });
-
-my $dbh = $schema->storage->dbh;
-
-eval {
- $dbh->do("DROP TABLE track");
-};
-$dbh->do("CREATE TABLE track (trackid NUMBER(12), cd NUMBER(12), position NUMBER(12), title VARCHAR(255), last_updated_on DATE)");
-
-# insert a row to play with
-my $new = $schema->resultset('Track')->create({ trackid => 1, cd => 1, position => 1, title => 'Track1', last_updated_on => '06-MAY-07' });
-is($new->trackid, 1, "insert sucessful");
-
-my $track = $schema->resultset('Track')->find( 1 );
-
-is( ref($track->last_updated_on), 'DateTime', "last_updated_on inflated ok");
-
-is( $track->last_updated_on->month, 5, "DateTime methods work on inflated column");
-
-my $dt = DateTime->now();
-$track->last_updated_on($dt);
-$track->update;
-
-is( $track->last_updated_on->month, $dt->month, "deflate ok");
-
-# clean up our mess
-END {
- if($dbh) {
- $dbh->do("DROP TABLE track");
- }
-}
-
plan skip_all => 'Set $ENV{DBICTEST_DB2_DSN}, _USER and _PASS to run this test'
unless ($dsn && $user);
-plan tests => 6;
+plan tests => 9;
my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
# This is in core, just testing that it still loads ok
$schema->class('Artist')->load_components('PK::Auto');
+my $ars = $schema->resultset('Artist');
+
# test primary key handling
-my $new = $schema->resultset('Artist')->create({ name => 'foo' });
+my $new = $ars->create({ name => 'foo' });
ok($new->artistid, "Auto-PK worked");
-# test LIMIT support
+my $init_count = $ars->count;
for (1..6) {
- $schema->resultset('Artist')->create({ name => 'Artist ' . $_ });
+ $ars->create({ name => 'Artist ' . $_ });
}
-my $it = $schema->resultset('Artist')->search( {},
- { rows => 3,
- order_by => 'artistid'
- }
+is ($ars->count, $init_count + 6, 'Simple count works');
+
+# test LIMIT support
+my $it = $ars->search( {},
+ {
+ rows => 3,
+ order_by => 'artistid'
+ }
);
is( $it->count, 3, "LIMIT count ok" );
+
+my @all = $it->all;
+is (@all, 3, 'Number of ->all objects matches count');
+
+$it->reset;
is( $it->next->name, "foo", "iterator->next ok" );
-$it->next;
+is( $it->next->name, "Artist 1", "iterator->next ok" );
is( $it->next->name, "Artist 2", "iterator->next ok" );
-is( $it->next, undef, "next past end of resultset ok" );
+is( $it->next, undef, "next past end of resultset ok" ); # this can not succeed if @all > 3
+
my $test_type_info = {
'artistid' => {
# clean up our mess
END {
+ my $dbh = eval { $schema->storage->_dbh };
$dbh->do("DROP TABLE artist") if $dbh;
}
# clean up our mess
END {
+ my $dbh = eval { $schema->storage->_dbh };
$dbh->do("DROP TABLE artist") if $dbh;
}
-
use strict;
-use warnings;
+use warnings;
use Test::More;
+use Test::Exception;
use lib qw(t/lib);
use DBICTest;
+use DBIC::SqlMakerTest;
my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_MSSQL_ODBC_${_}" } qw/DSN USER PASS/};
plan skip_all => 'Set $ENV{DBICTEST_MSSQL_ODBC_DSN}, _USER and _PASS to run this test'
unless ($dsn && $user);
-plan tests => 12;
+plan tests => 39;
-my $schema = DBICTest::Schema->connect($dsn, $user, $pass, {AutoCommit => 1});
+DBICTest::Schema->load_classes('ArtistGUID');
+my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
+
+{
+ no warnings 'redefine';
+ my $connect_count = 0;
+ my $orig_connect = \&DBI::connect;
+ local *DBI::connect = sub { $connect_count++; goto &$orig_connect };
+
+ $schema->storage->ensure_connected;
+
+ is( $connect_count, 1, 'only one connection made');
+}
-$schema->storage->ensure_connected;
isa_ok( $schema->storage, 'DBIx::Class::Storage::DBI::ODBC::Microsoft_SQL_Server' );
$schema->storage->dbh_do (sub {
my ($storage, $dbh) = @_;
eval { $dbh->do("DROP TABLE artist") };
$dbh->do(<<'SQL');
-
CREATE TABLE artist (
artistid INT IDENTITY NOT NULL,
name VARCHAR(100),
charfield CHAR(10) NULL,
primary key(artistid)
)
-
SQL
-
});
my %seen_id;
-# fresh $schema so we start unconnected
-$schema = DBICTest::Schema->connect($dsn, $user, $pass, {AutoCommit => 1});
+my @opts = (
+ { on_connect_call => 'use_dynamic_cursors' },
+ {},
+);
+my $new;
+
+# test Auto-PK with different options
+for my $opts (@opts) {
+ SKIP: {
+ $schema = DBICTest::Schema->connect($dsn, $user, $pass, $opts);
+
+ eval {
+ $schema->storage->ensure_connected
+ };
+ if ($@ =~ /dynamic cursors/) {
+      skip 'Dynamic Cursors not functional, tds_version 8.0 or greater'
+        . ' required if using FreeTDS', 1;
+ }
+
+ $schema->resultset('Artist')->search({ name => 'foo' })->delete;
-# test primary key handling
-my $new = $schema->resultset('Artist')->create({ name => 'foo' });
-ok($new->artistid > 0, "Auto-PK worked");
+ $new = $schema->resultset('Artist')->create({ name => 'foo' });
+
+ ok($new->artistid > 0, "Auto-PK worked");
+ }
+}
$seen_id{$new->artistid}++;
is( $it->next->name, "Artist 2", "iterator->next ok" );
is( $it->next, undef, "next past end of resultset ok" );
+# test GUID columns
+
+$schema->storage->dbh_do (sub {
+ my ($storage, $dbh) = @_;
+ eval { $dbh->do("DROP TABLE artist") };
+ $dbh->do(<<'SQL');
+CREATE TABLE artist (
+ artistid UNIQUEIDENTIFIER NOT NULL,
+ name VARCHAR(100),
+ rank INT NOT NULL DEFAULT '13',
+ charfield CHAR(10) NULL,
+ a_guid UNIQUEIDENTIFIER,
+ primary key(artistid)
+)
+SQL
+});
+
+# start disconnected to make sure insert works on an un-reblessed storage
+$schema = DBICTest::Schema->connect($dsn, $user, $pass);
+
+my $row;
+lives_ok {
+ $row = $schema->resultset('ArtistGUID')->create({ name => 'mtfnpy' })
+} 'created a row with a GUID';
+
+ok(
+ eval { $row->artistid },
+ 'row has GUID PK col populated',
+);
+diag $@ if $@;
+
+ok(
+ eval { $row->a_guid },
+ 'row has a GUID col with auto_nextval populated',
+);
+diag $@ if $@;
+
+my $row_from_db = $schema->resultset('ArtistGUID')
+ ->search({ name => 'mtfnpy' })->first;
+
+is $row_from_db->artistid, $row->artistid,
+ 'PK GUID round trip';
+
+is $row_from_db->a_guid, $row->a_guid,
+ 'NON-PK GUID round trip';
+
+# test MONEY type
+$schema->storage->dbh_do (sub {
+ my ($storage, $dbh) = @_;
+ eval { $dbh->do("DROP TABLE money_test") };
+ $dbh->do(<<'SQL');
+
+CREATE TABLE money_test (
+ id INT IDENTITY PRIMARY KEY,
+ amount MONEY NULL
+)
+
+SQL
+
+});
+
+my $rs = $schema->resultset('Money');
+
+lives_ok {
+ $row = $rs->create({ amount => 100 });
+} 'inserted a money value';
+
+cmp_ok $rs->find($row->id)->amount, '==', 100, 'money value round-trip';
+
+lives_ok {
+ $row->update({ amount => 200 });
+} 'updated a money value';
+
+cmp_ok $rs->find($row->id)->amount, '==', 200,
+ 'updated money value round-trip';
+
+lives_ok {
+ $row->update({ amount => undef });
+} 'updated a money value to NULL';
+
+is $rs->find($row->id)->amount, undef,'updated money value to NULL round-trip';
+
+$schema->storage->dbh_do (sub {
+ my ($storage, $dbh) = @_;
+ eval { $dbh->do("DROP TABLE Owners") };
+ eval { $dbh->do("DROP TABLE Books") };
+ $dbh->do(<<'SQL');
+CREATE TABLE Books (
+ id INT IDENTITY (1, 1) NOT NULL,
+ source VARCHAR(100),
+ owner INT,
+ title VARCHAR(10),
+ price INT NULL
+)
+
+CREATE TABLE Owners (
+ id INT IDENTITY (1, 1) NOT NULL,
+ name VARCHAR(100),
+)
+SQL
+
+});
+
+lives_ok ( sub {
+ $schema->populate ('Owners', [
+ [qw/id name /],
+ [qw/1 wiggle/],
+ [qw/2 woggle/],
+ [qw/3 boggle/],
+ [qw/4 fREW/],
+ [qw/5 fRIOUX/],
+ [qw/6 fROOH/],
+ [qw/7 fRUE/],
+ [qw/8 fISMBoC/],
+ [qw/9 station/],
+ [qw/10 mirror/],
+ [qw/11 dimly/],
+ [qw/12 face_to_face/],
+ [qw/13 icarus/],
+ [qw/14 dream/],
+ [qw/15 dyrstyggyr/],
+ ]);
+}, 'populate with PKs supplied ok' );
+
+lives_ok ( sub {
+ $schema->populate ('BooksInLibrary', [
+ [qw/source owner title /],
+ [qw/Library 1 secrets0/],
+ [qw/Library 1 secrets1/],
+ [qw/Eatery 1 secrets2/],
+ [qw/Library 2 secrets3/],
+ [qw/Library 3 secrets4/],
+ [qw/Eatery 3 secrets5/],
+ [qw/Library 4 secrets6/],
+ [qw/Library 5 secrets7/],
+ [qw/Eatery 5 secrets8/],
+ [qw/Library 6 secrets9/],
+ [qw/Library 7 secrets10/],
+ [qw/Eatery 7 secrets11/],
+ [qw/Library 8 secrets12/],
+ ]);
+}, 'populate without PKs supplied ok' );
+
+#
+# try a prefetch on tables with identically named columns
+#
+
+# set quote char - make sure things work while quoted
+$schema->storage->_sql_maker->{quote_char} = [qw/[ ]/];
+$schema->storage->_sql_maker->{name_sep} = '.';
+
+{
+ # try a ->has_many direction
+ my $owners = $schema->resultset ('Owners')->search ({
+ 'books.id' => { '!=', undef }
+ }, {
+ prefetch => 'books',
+ order_by => 'name',
+ rows => 3, # 8 results total
+ });
+
+ is ($owners->page(1)->all, 3, 'has_many prefetch returns correct number of rows');
+  is ($owners->page(1)->count, 3, 'has_many prefetch returns correct count');
+
+ TODO: {
+ local $TODO = 'limit past end of resultset problem';
+ is ($owners->page(3)->all, 2, 'has_many prefetch returns correct number of rows');
+    is ($owners->page(3)->count, 2, 'has_many prefetch returns correct count');
+    is ($owners->page(3)->count_rs->next, 2, 'has_many prefetch returns correct count_rs');
+
+ # make sure count does not become overly complex
+ is_same_sql_bind (
+ $owners->page(3)->count_rs->as_query,
+ '(
+ SELECT COUNT( * )
+ FROM (
+ SELECT TOP 3 [me].[id]
+ FROM [owners] [me]
+ LEFT JOIN [books] [books] ON [books].[owner] = [me].[id]
+ WHERE ( [books].[id] IS NOT NULL )
+ GROUP BY [me].[id]
+ ORDER BY [me].[id] DESC
+ ) [count_subq]
+ )',
+ [],
+ );
+ }
+
+ # try a ->belongs_to direction (no select collapse, group_by should work)
+ my $books = $schema->resultset ('BooksInLibrary')->search ({
+ 'owner.name' => [qw/wiggle woggle/],
+ }, {
+ distinct => 1,
+ prefetch => 'owner',
+ rows => 2, # 3 results total
+ order_by => { -desc => 'owner' },
+ # there is no sane way to order by the right side of a grouped prefetch currently :(
+ #order_by => { -desc => 'owner.name' },
+ });
+
+
+ is ($books->page(1)->all, 2, 'Prefetched grouped search returns correct number of rows');
+ is ($books->page(1)->count, 2, 'Prefetched grouped search returns correct count');
+
+ TODO: {
+ local $TODO = 'limit past end of resultset problem';
+ is ($books->page(2)->all, 1, 'Prefetched grouped search returns correct number of rows');
+ is ($books->page(2)->count, 1, 'Prefetched grouped search returns correct count');
+ is ($books->page(2)->count_rs->next, 1, 'Prefetched grouped search returns correct count_rs');
+
+ # make sure count does not become overly complex (FIXME - the distinct-induced group_by is incorrect)
+ is_same_sql_bind (
+ $books->page(2)->count_rs->as_query,
+ '(
+ SELECT COUNT( * )
+ FROM (
+ SELECT TOP 2 [me].[id]
+ FROM [books] [me]
+ JOIN [owners] [owner] ON [owner].[id] = [me].[owner]
+ WHERE ( ( ( [owner].[name] = ? OR [owner].[name] = ? ) AND [source] = ? ) )
+ GROUP BY [me].[id], [me].[source], [me].[owner], [me].[title], [me].[price]
+ ORDER BY [me].[id] DESC
+ ) [count_subq]
+ )',
+ [
+ [ 'owner.name' => 'wiggle' ],
+ [ 'owner.name' => 'woggle' ],
+ [ 'source' => 'Library' ],
+ ],
+ );
+ }
+
+}
# clean up our mess
END {
- my $dbh = eval { $schema->storage->_dbh };
- $dbh->do('DROP TABLE artist') if $dbh;
+ if (my $dbh = eval { $schema->storage->_dbh }) {
+ eval { $dbh->do("DROP TABLE $_") }
+ for qw/artist money_test Books Owners/;
+ }
}
-
+# vim:sw=2 sts=2
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_SYBASE_${_}" } qw/DSN USER PASS/};
+
+plan skip_all => 'Set $ENV{DBICTEST_SYBASE_DSN}, _USER and _PASS to run this test'
+ unless ($dsn && $user);
+
+plan tests => 13;
+
+my $schema = DBICTest::Schema->connect($dsn, $user, $pass, {AutoCommit => 1});
+
+# start disconnected to test reconnection
+$schema->storage->ensure_connected;
+$schema->storage->_dbh->disconnect;
+
+isa_ok( $schema->storage, 'DBIx::Class::Storage::DBI::Sybase' );
+
+my $dbh;
+lives_ok (sub {
+ $dbh = $schema->storage->dbh;
+}, 'reconnect works');
+
+$schema->storage->dbh_do (sub {
+ my ($storage, $dbh) = @_;
+ eval { $dbh->do("DROP TABLE artist") };
+ $dbh->do(<<'SQL');
+
+CREATE TABLE artist (
+ artistid INT IDENTITY NOT NULL,
+ name VARCHAR(100),
+ rank INT DEFAULT 13 NOT NULL,
+ charfield CHAR(10) NULL,
+ primary key(artistid)
+)
+
+SQL
+
+});
+
+my %seen_id;
+
+# fresh $schema so we start unconnected
+$schema = DBICTest::Schema->connect($dsn, $user, $pass, {AutoCommit => 1});
+
+# test primary key handling
+my $new = $schema->resultset('Artist')->create({ name => 'foo' });
+ok($new->artistid > 0, "Auto-PK worked");
+
+$seen_id{$new->artistid}++;
+
+# test LIMIT support
+for (1..6) {
+ $new = $schema->resultset('Artist')->create({ name => 'Artist ' . $_ });
+ is ( $seen_id{$new->artistid}, undef, "id for Artist $_ is unique" );
+ $seen_id{$new->artistid}++;
+}
+
+my $it;
+
+$it = $schema->resultset('Artist')->search( {}, {
+ rows => 3,
+ order_by => 'artistid',
+});
+
+TODO: {
+ local $TODO = 'Sybase is very very fucked in the limit department';
+
+ is( $it->count, 3, "LIMIT count ok" );
+}
+
+# The iterator still works correctly with rows => 3, even though the sql is
+# fucked, very interesting.
+
+is( $it->next->name, "foo", "iterator->next ok" );
+$it->next;
+is( $it->next->name, "Artist 2", "iterator->next ok" );
+is( $it->next, undef, "next past end of resultset ok" );
+
+
+# clean up our mess
+END {
+ my $dbh = eval { $schema->storage->_dbh };
+ $dbh->do('DROP TABLE artist') if $dbh;
+}
+
use strict;
use warnings;
+# use this if you keep a copy of DBD::Sybase linked to FreeTDS somewhere else
+BEGIN {
+ if (my $lib_dirs = $ENV{DBICTEST_MSSQL_PERL5LIB}) {
+ unshift @INC, $_ for split /:/, $lib_dirs;
+ }
+}
+
use Test::More;
+use Test::Exception;
use lib qw(t/lib);
use DBICTest;
my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_MSSQL_${_}" } qw/DSN USER PASS/};
-#warn "$dsn $user $pass";
-
plan skip_all => 'Set $ENV{DBICTEST_MSSQL_DSN}, _USER and _PASS to run this test'
unless ($dsn);
-plan tests => 5;
+my $TESTS = 13;
+
+plan tests => $TESTS * 2;
+
+my @storage_types = (
+ 'DBI::Sybase::Microsoft_SQL_Server',
+ 'DBI::Sybase::Microsoft_SQL_Server::NoBindVars',
+);
+my $storage_idx = -1;
+my $schema;
+
+for my $storage_type (@storage_types) {
+ $storage_idx++;
+
+ $schema = DBICTest::Schema->clone;
-my $storage_type = '::DBI::MSSQL';
-$storage_type = '::DBI::Sybase::MSSQL' if $dsn =~ /^dbi:Sybase:/;
-# Add more for others in the future when they exist (ODBC? ADO? JDBC?)
+  if ($storage_idx != 0) { # the first iteration relies on storage autodetection
+ $schema->storage_type("::$storage_type");
+ }
-my $schema = DBICTest::Schema->clone;
-$schema->storage_type($storage_type);
-$schema->connection($dsn, $user, $pass);
+ $schema->connection($dsn, $user, $pass);
-my $dbh = $schema->storage->dbh;
+ $schema->storage->ensure_connected;
-$dbh->do("IF OBJECT_ID('artist', 'U') IS NOT NULL
- DROP TABLE artist");
-$dbh->do("IF OBJECT_ID('cd', 'U') IS NOT NULL
- DROP TABLE cd");
+ if ($storage_idx == 0 && ref($schema->storage) =~ /NoBindVars\z/) {
+ my $tb = Test::More->builder;
+ $tb->skip('no placeholders') for 1..$TESTS;
+ next;
+ }
-$dbh->do("CREATE TABLE artist (artistid INT IDENTITY PRIMARY KEY, name VARCHAR(100), rank INT DEFAULT '13', charfield CHAR(10) NULL);");
-$dbh->do("CREATE TABLE cd (cdid INT IDENTITY PRIMARY KEY, artist INT, title VARCHAR(100), year VARCHAR(100), genreid INT NULL, single_track INT NULL);");
+ isa_ok($schema->storage, "DBIx::Class::Storage::$storage_type");
+
+# start disconnected to test reconnection
+ $schema->storage->_dbh->disconnect;
+
+ my $dbh;
+ lives_ok (sub {
+ $dbh = $schema->storage->dbh;
+ }, 'reconnect works');
+
+ $dbh->do("IF OBJECT_ID('artist', 'U') IS NOT NULL
+ DROP TABLE artist");
+ $dbh->do("IF OBJECT_ID('cd', 'U') IS NOT NULL
+ DROP TABLE cd");
+
+ $dbh->do("CREATE TABLE artist (artistid INT IDENTITY PRIMARY KEY, name VARCHAR(100), rank INT DEFAULT '13', charfield CHAR(10) NULL);");
+ $dbh->do("CREATE TABLE cd (cdid INT IDENTITY PRIMARY KEY, artist INT, title VARCHAR(100), year VARCHAR(100), genreid INT NULL, single_track INT NULL);");
# Just to test compat shim, Auto is in Core
-$schema->class('Artist')->load_components('PK::Auto::MSSQL');
+ $schema->class('Artist')->load_components('PK::Auto::MSSQL');
# Test PK
-my $new = $schema->resultset('Artist')->create( { name => 'foo' } );
-ok($new->artistid, "Auto-PK worked");
+ my $new = $schema->resultset('Artist')->create( { name => 'foo' } );
+ ok($new->artistid, "Auto-PK worked");
# Test LIMIT
-for (1..6) {
- $schema->resultset('Artist')->create( { name => 'Artist ' . $_, rank => $_ } );
-}
+ for (1..6) {
+ $schema->resultset('Artist')->create( { name => 'Artist ' . $_, rank => $_ } );
+ }
-my $it = $schema->resultset('Artist')->search( { },
- { rows => 3,
- offset => 2,
- order_by => 'artistid'
- }
-);
+ my $it = $schema->resultset('Artist')->search( { },
+ { rows => 3,
+ offset => 2,
+ order_by => 'artistid'
+ }
+ );
# Test ? in data don't get treated as placeholders
-my $cd = $schema->resultset('CD')->create( {
- artist => 1,
- title => 'Does this break things?',
- year => 2007,
-} );
-ok($cd->id, 'Not treating ? in data as placeholders');
-
-is( $it->count, 3, "LIMIT count ok" );
-ok( $it->next->name, "iterator->next ok" );
-$it->next;
-$it->next;
-is( $it->next, undef, "next past end of resultset ok" );
+ my $cd = $schema->resultset('CD')->create( {
+ artist => 1,
+ title => 'Does this break things?',
+ year => 2007,
+ } );
+ ok($cd->id, 'Not treating ? in data as placeholders');
+
+ is( $it->count, 3, "LIMIT count ok" );
+ ok( $it->next->name, "iterator->next ok" );
+ $it->next;
+ $it->next;
+ is( $it->next, undef, "next past end of resultset ok" );
+
+# test MONEY column support
+ $schema->storage->dbh_do (sub {
+ my ($storage, $dbh) = @_;
+ eval { $dbh->do("DROP TABLE money_test") };
+ $dbh->do(<<'SQL');
+ CREATE TABLE money_test (
+ id INT IDENTITY PRIMARY KEY,
+ amount MONEY NULL
+ )
+SQL
+
+ });
+
+ my $rs = $schema->resultset('Money');
+
+ my $row;
+ lives_ok {
+ $row = $rs->create({ amount => 100 });
+ } 'inserted a money value';
+
+ cmp_ok $rs->find($row->id)->amount, '==', 100, 'money value round-trip';
+
+ lives_ok {
+ $row->update({ amount => 200 });
+ } 'updated a money value';
+
+ cmp_ok $rs->find($row->id)->amount, '==', 200,
+ 'updated money value round-trip';
+
+ lives_ok {
+ $row->update({ amount => undef });
+ } 'updated a money value to NULL';
+
+ is $rs->find($row->id)->amount,
+ undef, 'updated money value to NULL round-trip';
+}
# clean up our mess
END {
- $dbh->do("IF OBJECT_ID('artist', 'U') IS NOT NULL DROP TABLE artist")
- if $dbh;
- $dbh->do("IF OBJECT_ID('cd', 'U') IS NOT NULL DROP TABLE cd")
- if $dbh;
+ if (my $dbh = eval { $schema->storage->dbh }) {
+ $dbh->do("IF OBJECT_ID('artist', 'U') IS NOT NULL DROP TABLE artist");
+ $dbh->do("IF OBJECT_ID('cd', 'U') IS NOT NULL DROP TABLE cd");
+ $dbh->do("IF OBJECT_ID('money_test', 'U') IS NOT NULL DROP TABLE money_test");
+ }
}
eval "use DBD::SQLite";
plan $@
? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 18 );
-}
-
-# figure out if we've got a version of sqlite that is older than 3.2.6, in
-# which case COUNT(DISTINCT()) doesn't work
-my $is_broken_sqlite = 0;
-my ($sqlite_major_ver,$sqlite_minor_ver,$sqlite_patch_ver) =
- split /\./, $schema->storage->dbh->get_info(18);
-if( $schema->storage->dbh->get_info(17) eq 'SQLite' &&
- ( ($sqlite_major_ver < 3) ||
- ($sqlite_major_ver == 3 && $sqlite_minor_ver < 2) ||
- ($sqlite_major_ver == 3 && $sqlite_minor_ver == 2 && $sqlite_patch_ver < 6) ) ) {
- $is_broken_sqlite = 1;
+ : ( tests => 33 );
}
# test the abstract join => SQL generator
-my $sa = new DBIC::SQL::Abstract;
+my $sa = new DBIx::Class::SQLAHacks;
my @j = (
{ child => 'person' },
. 'child.father_id ) JOIN person mother ON ( mother.person_id '
. '= child.mother_id )'
;
-is_same_sql_bind(
- $sa->_recurse_from(@j), [],
- $match, [],
+is_same_sql(
+ $sa->_recurse_from(@j),
+ $match,
'join 1 ok'
);
. ' father.person_id = child.father_id )) ON ( mother.person_id = '
. 'child.mother_id )'
;
-is_same_sql_bind(
- $sa->_recurse_from(@j2), [],
- $match, [],
+is_same_sql(
+ $sa->_recurse_from(@j2),
+ $match,
'join 2 ok'
);
. '= child.mother_id )'
;
-is_same_sql_bind(
- $sa->_recurse_from(@j3), [],
- $match, [],
+is_same_sql(
+ $sa->_recurse_from(@j3),
+ $match,
'join 3 (inner join) ok'
);
. ' father.person_id = child.father_id )) ON ( mother.person_id = '
. 'child.mother_id )'
;
-is_same_sql_bind(
- $sa->_recurse_from(@j4), [],
- $match, [],
+is_same_sql(
+ $sa->_recurse_from(@j4),
+ $match,
'join 4 (nested joins + join types) ok'
);
. 'child.father_id ) JOIN person mother ON ( mother.person_id '
. '= child.mother_id )'
;
-is_same_sql_bind(
- $sa->_recurse_from(@j5), [],
- $match, [],
+is_same_sql(
+ $sa->_recurse_from(@j5),
+ $match,
'join 5 (SCALAR reference for ON statement) ok'
);
[ { father => 'person' }, { 'father.person_id' => { '!=', '42' } }, ],
[ { mother => 'person' }, { 'mother.person_id' => 'child.mother_id' } ],
);
-$match = qr/^HASH reference arguments are not supported in JOINS - try using "\.\.\." instead/;
+$match = qr/HASH reference arguments are not supported in JOINS/;
eval { $sa->_recurse_from(@j6) };
like( $@, $match, 'join 6 (HASH reference for ON statement dies) ok' );
] ] }
);
-cmp_ok( $rs + 0, '==', 1, "Single record in resultset");
+is( $rs + 0, 1, "Single record in resultset");
is($rs->first->title, 'Forkful of bees', 'Correct record returned');
{ 'year' => 2001, 'artist.name' => 'Caterwauler McCrae' },
{ join => 'artist' });
-cmp_ok( $rs + 0, '==', 1, "Single record in resultset");
+is( $rs + 0, 1, "Single record in resultset");
is($rs->first->title, 'Forkful of bees', 'Correct record returned');
'liner_notes.notes' => 'Kill Yourself!' },
{ join => [ qw/artist liner_notes/ ] });
-cmp_ok( $rs + 0, '==', 1, "Single record in resultset");
+is( $rs + 0, 1, "Single record in resultset");
is($rs->first->title, 'Come Be Depressed With Us', 'Correct record returned');
{ 'artist' => 1 },
{ join => [qw/artist/], order_by => 'artist.name' }
);
-cmp_ok( scalar $rs->all, '==', scalar $rs->slice(0, $rs->count - 1), 'slice() with join has same count as all()' );
+is( scalar $rs->all, scalar $rs->slice(0, $rs->count - 1), 'slice() with join has same count as all()' );
ok(!$rs->slice($rs->count+1000, $rs->count+1002)->count,
'Slicing beyond end of rs returns a zero count');
{ 'liner_notes.notes' => 'Kill Yourself!' },
{ join => { 'cds' => 'liner_notes' } });
-cmp_ok( $rs->count, '==', 1, "Single record in resultset");
+is( $rs->count, 1, "Single record in resultset");
is($rs->first->name, 'We Are Goth', 'Correct record returned');
-# test for warnings on delete of joined resultset
-$rs = $schema->resultset("CD")->search(
- { 'artist.name' => 'Caterwauler McCrae' },
- { join => [qw/artist/]}
-);
-my $tst_delete_warning;
-eval {
- local $SIG{__WARN__} = sub { $tst_delete_warning = shift };
- $rs->delete();
-};
-
-ok( ($@ || $tst_delete_warning), 'fail/warning on attempt to delete a join-ed resultset');
-# test for warnings on update of joined resultset
-$rs = $schema->resultset("CD")->search(
- { 'artist.name' => 'Random Boy Band' },
- { join => [qw/artist/]}
-);
-my $tst_update_warning;
-eval {
- local $SIG{__WARN__} = sub { $tst_update_warning = shift };
- $rs->update({ 'artist' => 1 });
-};
-
-ok( ($@ || $tst_update_warning), 'fail/warning on attempt to update a join-ed resultset');
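+# Exercise delete() and update() on join-restricted resultsets with real data:
+# add a few extra Artist, CD and TwoKeys rows, then verify via row counts that
+# only the rows matched by the join condition are affected (including the
+# multi-column-pk TwoKeys case).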
+{
+ $schema->populate('Artist', [
+ [ qw/artistid name/ ],
+ [ 4, 'Another Boy Band' ],
+ ]);
+ $schema->populate('CD', [
+ [ qw/cdid artist title year/ ],
+ [ 6, 2, "Greatest Hits", 2001 ],
+ [ 7, 4, "Greatest Hits", 2005 ],
+ [ 8, 4, "BoyBandBlues", 2008 ],
+ ]);
+ $schema->populate('TwoKeys', [
+ [ qw/artist cd/ ],
+ [ 2, 4 ],
+ [ 2, 6 ],
+ [ 4, 7 ],
+ [ 4, 8 ],
+ ]);
+
+ sub cd_count {
+ return $schema->resultset("CD")->count;
+ }
+ sub tk_count {
+ return $schema->resultset("TwoKeys")->count;
+ }
+
+ is(cd_count(), 8, '8 rows in table cd');
+ is(tk_count(), 7, '7 rows in table twokeys');
+
+ sub artist1 {
+ return $schema->resultset("CD")->search(
+ { 'artist.name' => 'Caterwauler McCrae' },
+ { join => [qw/artist/]}
+ );
+ }
+ sub artist2 {
+ return $schema->resultset("CD")->search(
+ { 'artist.name' => 'Random Boy Band' },
+ { join => [qw/artist/]}
+ );
+ }
+
+ is( artist1()->count, 3, '3 Caterwauler McCrae CDs' );
+ ok( artist1()->delete, 'Successfully deleted 3 CDs' );
+ is( artist1()->count, 0, '0 Caterwauler McCrae CDs' );
+  is( artist2()->count, 2, '2 Random Boy Band CDs' );
+ ok( artist2()->update( { 'artist' => 1 } ) );
+ is( artist2()->count, 0, '0 Random Boy Band CDs' );
+ is( artist1()->count, 2, '2 Caterwauler McCrae CDs' );
+
+ # test update on multi-column-pk
+ sub tk1 {
+ return $schema->resultset("TwoKeys")->search(
+ {
+ 'artist.name' => { like => '%Boy Band' },
+ 'cd.title' => 'Greatest Hits',
+ },
+ { join => [qw/artist cd/] }
+ );
+ }
+ sub tk2 {
+ return $schema->resultset("TwoKeys")->search(
+ { 'artist.name' => 'Caterwauler McCrae' },
+ { join => [qw/artist/]}
+ );
+ }
+ is( tk2()->count, 2, 'TwoKeys count == 2' );
+ is( tk1()->count, 2, 'TwoKeys count == 2' );
+ ok( tk1()->update( { artist => 1 } ) );
+ is( tk1()->count, 0, 'TwoKeys count == 0' );
+  is( tk2()->count, 4, '4 Caterwauler McCrae CDs' );
+ ok( tk2()->delete, 'Successfully deleted 4 CDs' );
+ is(cd_count(), 5, '5 rows in table cd');
+ is(tk_count(), 3, '3 rows in table twokeys');
+}
use Test::Exception;
use lib qw(t/lib);
use DBICTest;
+use DBIC::SqlMakerTest;
my $schema = DBICTest->init_schema();
-plan tests => 12;
+plan tests => 24;
my $rs = $schema->resultset('CD')->search({},
{
lives_ok(sub { $rs->first->get_column('count') }, 'multiple +select/+as columns, 1st rscolumn present');
lives_ok(sub { $rs->first->get_column('addedtitle') }, 'multiple +select/+as columns, 2nd rscolumn present');
+# Tests a regression in ResultSetColumn wrt +select
+$rs = $schema->resultset('CD')->search(undef,
+ {
+ '+select' => [ \'COUNT(*) AS year_count' ],
+ order_by => 'year_count'
+ }
+);
+my @counts = $rs->get_column('cdid')->all;
+ok(scalar(@counts), 'got rows from ->all using +select');
+
$rs = $schema->resultset('CD')->search({},
{
'+select' => [ \ 'COUNT(*)', 'title' ],
cmp_ok ($cds->count, '>', 2, 'Initially populated with more than 2 CDs');
my $table = $cds->result_source->name;
+$table = $$table if ref $table eq 'SCALAR';
my $subsel = $cds->search ({}, {
columns => [qw/cdid title/],
from => \ "(SELECT cdid, title FROM $table LIMIT 2) me",
is ($subsel->next->title, $cds->next->title, 'Second CD title match');
is($schema->resultset('CD')->current_source_alias, "me", '$rs->current_source_alias returns "me"');
+
+
+
+$rs = $schema->resultset('CD')->search({},
+ {
+ 'join' => 'artist',
+ 'columns' => ['cdid', 'title', 'artist.name'],
+ }
+);
+
+is_same_sql_bind (
+ $rs->as_query,
+ '(SELECT me.cdid, me.title, artist.name FROM cd me JOIN artist artist ON artist.artistid = me.artist)',
+ [],
+ 'Use of columns attribute results in proper sql'
+);
+
+lives_ok(sub {
+ $rs->first->get_column('cdid')
+}, 'columns 1st rscolumn present');
+
+lives_ok(sub {
+ $rs->first->get_column('title')
+}, 'columns 2nd rscolumn present');
+
+lives_ok(sub {
+ $rs->first->artist->get_column('name')
+}, 'columns 3rd rscolumn present');
+
+
+
+$rs = $schema->resultset('CD')->search({},
+ {
+ 'join' => 'artist',
+ '+columns' => ['cdid', 'title', 'artist.name'],
+ }
+);
+
+is_same_sql_bind (
+ $rs->as_query,
+ '(SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track, me.cdid, me.title, artist.name FROM cd me JOIN artist artist ON artist.artistid = me.artist)',
+ [],
+ 'Use of columns attribute results in proper sql'
+);
+
+lives_ok(sub {
+ $rs->first->get_column('cdid')
+}, 'columns 1st rscolumn present');
+
+lives_ok(sub {
+ $rs->first->get_column('title')
+}, 'columns 2nd rscolumn present');
+
+lives_ok(sub {
+ $rs->first->artist->get_column('name')
+}, 'columns 3rd rscolumn present');
+
+
+$rs = $schema->resultset('CD')->search({'tracks.position' => { -in => [2] } },
+ {
+ join => 'tracks',
+ columns => [qw/me.cdid me.title/],
+ '+select' => ['tracks.position'],
+ '+as' => ['track_position'],
+
+ # get a hashref of CD1 only (the first with a second track)
+ result_class => 'DBIx::Class::ResultClass::HashRefInflator',
+ order_by => 'cdid',
+ rows => 1,
+ }
+);
+
+is_deeply (
+ $rs->single,
+ {
+ cdid => 1,
+ track_position => 2,
+ title => 'Spoonful of bees',
+ },
+ 'limited prefetch via column works on a multi-relationship',
+);
+
+my $sub_rs = $rs->search ({},
+ {
+ columns => [qw/artist tracks.trackid/], # columns should not be merged but override $rs columns
+ '+select' => ['tracks.title'],
+ '+as' => ['tracks.title'],
+ }
+);
+
+is_deeply (
+ $sub_rs->single,
+ {
+ artist => 1,
+ track_position => 2,
+ tracks =>
+ {
+ trackid => 17,
+ title => 'Apiary',
+ },
+ },
+ 'columns/select/as fold properly on sub-searches',
+);
+
+TODO: {
+ local $TODO = "Multi-collapsing still doesn't work right - HRI should be getting an arrayref, not an individual hash";
+ is_deeply (
+ $sub_rs->single,
+ {
+ artist => 1,
+ track_position => 2,
+ tracks => [
+ {
+ trackid => 17,
+ title => 'Apiary',
+ },
+ ],
+ },
+ 'columns/select/as fold properly on sub-searches',
+ );
+}
my $schema = DBICTest->init_schema();
-plan tests => 45;
+plan tests => 49;
# Check the defined unique constraints
is_deeply(
ok($cd->in_storage, 'find correctly grepped the key across a relationship');
is($cd->cdid, 1, 'cdid is correct');
}
+
+# Test update_or_new
+{
+ my $cd1 = $schema->resultset('CD')->update_or_new(
+ {
+ artist => $artistid,
+ title => "SuperHits $$",
+ year => 2007,
+ },
+ { key => 'cd_artist_title' }
+ );
+
+ ok(!$cd1->in_storage, 'CD is not in storage yet after update_or_new');
+ $cd1->insert;
+  ok($cd1->in_storage, 'CD got added to storage after update_or_new && insert');
+
+ my $cd2 = $schema->resultset('CD')->update_or_new(
+ {
+ artist => $artistid,
+ title => "SuperHits $$",
+ year => 2008,
+ },
+ { key => 'cd_artist_title' }
+ );
+ ok($cd2->in_storage, 'Updating year using update_or_new was successful');
+ is($cd2->id, $cd1->id, 'Got the same CD using update_or_new');
+}
\ No newline at end of file
ok(!$artist_rs->find({name => 'Death Cab for Cutie'}), "Artist not created");
- eval {
+ lives_ok (sub {
my $w;
local $SIG{__WARN__} = sub { $w = shift };
outer($schema, 0);
like ($w, qr/A DBIx::Class::Storage::TxnScopeGuard went out of scope without explicit commit or an error/, 'Out of scope warning detected');
- };
-
- local $TODO = "Work out how this should work";
- is($@, "Not sure what we want here, but something", "Rollback okay");
-
- ok(!$artist_rs->find({name => 'Death Cab for Cutie'}), "Artist not created");
+ ok(!$artist_rs->find({name => 'Death Cab for Cutie'}), "Artist not created");
+  }, 'rollback successful without exception');
sub outer {
my ($schema) = @_;
-
+
my $guard = $schema->txn_scope_guard;
$schema->resultset('Artist')->create({
name => 'Death Cab for Cutie',
});
inner(@_);
- $guard->commit;
}
sub inner {
my ($schema, $fatal) = @_;
- my $guard = $schema->txn_scope_guard;
+
+ my $inner_guard = $schema->txn_scope_guard;
+ is($schema->storage->transaction_depth, 2, "Correct transaction depth");
my $artist = $artist_rs->find({ name => 'Death Cab for Cutie' });
- is($schema->storage->transaction_depth, 2, "Correct transaction depth");
- undef $@;
eval {
$artist->cds->create({
title => 'Plans',
die $@;
}
- # See what happens if we dont $guard->commit;
+ # inner guard should commit without consequences
+ $inner_guard->commit;
}
}
my $artist = $schema->resultset('Artist')->find(1);
my $artist_cds = $artist->search_related('cds');
-my $cover_band;
-
-{
- no warnings qw(redefine once);
- local *DBICTest::Artist::result_source_instance = \&DBICTest::Schema::Artist::result_source_instance;
-
- $cover_band = $artist->copy;
-}
+my $cover_band = $artist->copy;
my $cover_cds = $cover_band->search_related('cds');
cmp_ok($cover_band->id, '!=', $artist->id, 'ok got new column id...');
my $schema = DBICTest->init_schema();
my $queries;
-$schema->storage->debugcb( sub{ $queries++ } );
+my $debugcb = sub{ $queries++ };
+my $sdebug = $schema->storage->debug;
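+# Stash the counting callback and the storage's original debug setting, so each
+# counted block below can install debug + debugcb, run its queries, and then
+# restore both instead of unconditionally forcing debug off.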
-eval "use DBD::SQLite";
-plan skip_all => 'needs DBD::SQLite for testing' if $@;
plan tests => 23;
my $rs = $schema->resultset("Artist")->search(
$queries = 0;
$schema->storage->debug(1);
+$schema->storage->debugcb ($debugcb);
$rs = $schema->resultset('Artist')->search( undef, { cache => 1 } );
while( $artist = $rs->next ) {}
is( $queries, 1, 'revisiting a row does not issue a query when cache => 1' );
-$schema->storage->debug(0);
+$schema->storage->debug($sdebug);
+$schema->storage->debugcb (undef);
my @a = $schema->resultset("Artist")->search(
{ },
# start test for prefetch SELECT count
$queries = 0;
$schema->storage->debug(1);
+$schema->storage->debugcb ($debugcb);
$artist = $rs->first;
$rs->reset();
# make sure artist contains a related resultset for cds
-is( ref $artist->{related_resultsets}->{cds}, 'DBIx::Class::ResultSet', 'artist has a related_resultset for cds' );
+isa_ok( $artist->{related_resultsets}{cds}, 'DBIx::Class::ResultSet', 'artist has a related_resultset for cds' );
# check if $artist->cds->get_cache is populated
is( scalar @{$artist->cds->get_cache}, 3, 'cache for artist->cds contains correct number of records');
is($queries, 1, 'only one SQL statement executed');
-$schema->storage->debug(0);
+$schema->storage->debug($sdebug);
+$schema->storage->debugcb (undef);
# make sure related_resultset is deleted after object is updated
$artist->set_column('name', 'New Name');
# SELECT count for nested has_many prefetch
$queries = 0;
$schema->storage->debug(1);
+$schema->storage->debugcb ($debugcb);
$artist = ($rs->all)[0];
is($queries, 1, 'only one SQL statement executed');
-$schema->storage->debug(0);
+$schema->storage->debug($sdebug);
+$schema->storage->debugcb (undef);
my @objs;
#$artist = $rs->find(1);
$queries = 0;
$schema->storage->debug(1);
+$schema->storage->debugcb ($debugcb);
my $cds = $artist->cds;
my $tags = $cds->next->tags;
is( $queries, 1, 'only one select statement on find with has_many prefetch on resultset' );
-$schema->storage->debug(0);
-
+$schema->storage->debug($sdebug);
+$schema->storage->debugcb (undef);
use warnings;
use Test::More;
+use Test::Exception;
use lib qw(t/lib);
use DBICTest;
use Storable qw(dclone freeze thaw);
},
);
-plan tests => (7 * keys %stores);
+plan tests => (11 * keys %stores);
for my $name (keys %stores) {
my $store = $stores{$name};
+ my $copy;
my $artist = $schema->resultset('Artist')->find(1);
DBICTest::CD->result_source_instance->schema(undef);
}
- my $copy = eval { $store->($artist) };
+ lives_ok { $copy = $store->($artist) } "serialize row object lives: $name";
is_deeply($copy, $artist, "serialize row object works: $name");
- # Test that an object with a related_resultset can be serialized.
- my @cds = $artist->related_resultset("cds");
+ my $cd_rs = $artist->search_related("cds");
+
+  # test that a resultset can be serialized as well
+
+ $cd_rs->_resolved_attrs; # this builds up the {from} attr
+ lives_ok {
+ $copy = $store->($cd_rs);
+ is_deeply (
+ [ $copy->all ],
+ [ $cd_rs->all ],
+ "serialize resultset works: $name",
+ );
+ } "serialize resultset lives: $name";
+
+ # Test that an object with a related_resultset can be serialized.
ok $artist->{related_resultsets}, 'has key: related_resultsets';
- $copy = eval { $store->($artist) };
+ lives_ok { $copy = $store->($artist) } "serialize row object with related_resultset lives: $name";
for my $key (keys %$artist) {
next if $key eq 'related_resultsets';
next if $key eq '_inflated_column';
is_deeply($copy->{$key}, $artist->{$key},
qq[serialize with related_resultset "$key"]);
}
-
+
ok eval { $copy->discard_changes; 1 } or diag $@;
is($copy->id, $artist->id, "IDs still match ");
}
eval 'use utf8; 1' or plan skip_all => 'Need utf8 run this test';
}
-plan tests => 5;
+plan tests => 6;
DBICTest::Schema::CD->load_components('UTF8Columns');
DBICTest::Schema::CD->utf8_columns('title');
Class::C3->reinitialize();
-my $cd = $schema->resultset('CD')->create( { artist => 1, title => 'øni', year => 'foo' } );
+my $cd = $schema->resultset('CD')->create( { artist => 1, title => 'øni', year => '2048' } );
my $utf8_char = 'uniuni';
-if ($] <= 5.008000) {
-
- ok( Encode::is_utf8( $cd->title ), 'got title with utf8 flag' );
- ok( !Encode::is_utf8( $cd->year ), 'got year without utf8 flag' );
-
- Encode::_utf8_on($utf8_char);
- $cd->title($utf8_char);
- ok( !Encode::is_utf8( $cd->{_column_data}{title} ), 'store utf8-less chars' );
-} else {
+ok( _is_utf8( $cd->title ), 'got title with utf8 flag' );
+ok(! _is_utf8( $cd->year ), 'got year without utf8 flag' );
- ok( utf8::is_utf8( $cd->title ), 'got title with utf8 flag' );
- ok( !utf8::is_utf8( $cd->year ), 'got year without utf8 flag' );
+_force_utf8($utf8_char);
+$cd->title($utf8_char);
+ok(! _is_utf8( $cd->{_column_data}{title} ), 'store utf8-less chars' );
- utf8::decode($utf8_char);
- $cd->title($utf8_char);
- ok( !utf8::is_utf8( $cd->{_column_data}{title} ), 'store utf8-less chars' );
-}
my $v_utf8 = "\x{219}";
$cd->update ({ title => $v_utf8 });
$cd->title('something_else');
ok( $cd->is_column_changed('title'), 'column is dirty after setting to something completely different');
+
+TODO: {
+ local $TODO = 'There is currently no way to propagate aliases to inflate_result()';
+ $cd = $schema->resultset('CD')->find ({ title => $v_utf8 }, { select => 'title', as => 'name' });
+ ok (_is_utf8( $cd->get_column ('name') ), 'utf8 flag propagates via as');
+}
+
+
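+# Helpers to keep the utf8-flag checks above portable across perl versions:
+# on 5.8.0 and older use Encode's flag functions, on newer perls the utf8::
+# builtins (the same cutoff the former per-version branches used).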
+sub _force_utf8 {
+ if ($] <= 5.008000) {
+ Encode::_utf8_on ($_[0]);
+ }
+ else {
+ utf8::decode ($_[0]);
+ }
+}
+
+sub _is_utf8 {
+ if ($] <= 5.008000) {
+ return Encode::is_utf8 (shift);
+ }
+ else {
+ return utf8::is_utf8 (shift);
+ }
+}
my $schema = DBICTest->init_schema();
my $queries;
-#$schema->storage->debugfh(IO::File->new('t/var/temp.trace', 'w'));
$schema->storage->debugcb( sub{ $queries++ } );
+my $sdebug = $schema->storage->debug;
-eval "use DBD::SQLite";
-plan skip_all => 'needs DBD::SQLite for testing' if $@;
plan tests => 2;
-
my $cd = $schema->resultset("CD")->find(1);
$cd->title('test');
is($queries, 1, 'liner_notes (might_have) not prefetched - do not load
liner_notes on update');
-$schema->storage->debug(0);
+$schema->storage->debug($sdebug);
my $cd2 = $schema->resultset("CD")->find(2, {prefetch => 'liner_notes'});
is($queries, 1, 'liner_notes (might_have) prefetched - do not load
liner_notes on update');
-$schema->storage->debug(0);
-
+$schema->storage->debug($sdebug);
eval "use SQL::Translator";
plan skip_all => 'SQL::Translator required' if $@;
-my $schema = DBICTest->init_schema;
+my $schema = DBICTest->init_schema (no_deploy => 1);
+
+# replace the sqlt callback with a custom version adding an index
+$schema->source('Track')->sqlt_deploy_callback(sub {
+ my ($self, $sqlt_table) = @_;
+
+ is (
+ $sqlt_table->schema->translator->producer_type,
+ join ('::', 'SQL::Translator::Producer', $schema->storage->sqlt_type),
+    'Producer type passed to translator object',
+ );
+
+ if ($schema->storage->sqlt_type eq 'SQLite' ) {
+ $sqlt_table->add_index( name => 'track_title', fields => ['title'] )
+ or die $sqlt_table->error;
+ }
+
+ $self->default_sqlt_deploy_hook($sqlt_table);
+});
+
+$schema->deploy; # do not remove, this fires the is() test in the callback above
+
-plan tests => 133;
my $translator = SQL::Translator->new(
parser_args => {
my $relinfo = $schema->source('Artist')->relationship_info ('cds');
local $relinfo->{attrs}{on_delete} = 'restrict';
- $schema->source('Track')->sqlt_deploy_callback(sub {
- my ($self, $sqlt_table) = @_;
-
- if ($schema->storage->sqlt_type eq 'SQLite' ) {
- $sqlt_table->add_index( name => 'track_title', fields => ['title'] )
- or die $sqlt_table->error;
- }
-
- $self->default_sqlt_deploy_hook($sqlt_table);
- });
$translator->parser('SQL::Translator::Parser::DBIx::Class');
$translator->producer('SQLite');
ok($output, "SQLT produced some output")
or diag($translator->error);
- like ($warn, qr/^SQLT attribute .+? was supplied for relationship/, 'Warn about dubious on_delete/on_update attributes');
+
+ like (
+ $warn,
+ qr/SQLT attribute .+? was supplied for relationship .+? which does not appear to be a foreign constraint/,
+ 'Warn about dubious on_delete/on_update attributes',
+ );
}
# Note that the constraints listed here are the only ones that are tested -- if
'name' => 'artist_undirected_map_fk_id2', 'index_name' => 'artist_undirected_map_idx_id2',
'selftable' => 'artist_undirected_map', 'foreigntable' => 'artist',
'selfcols' => ['id2'], 'foreigncols' => ['artistid'],
- on_delete => '', on_update => 'CASCADE', deferrable => 1,
+ on_delete => '', on_update => '', deferrable => 1,
},
],
'name' => 'bookmark_fk_link', 'index_name' => 'bookmark_idx_link',
'selftable' => 'bookmark', 'foreigntable' => 'link',
'selfcols' => ['link'], 'foreigncols' => ['id'],
- on_delete => '', on_update => '', deferrable => 1,
+ on_delete => 'SET NULL', on_update => 'CASCADE', deferrable => 1,
},
],
# ForceForeign
is( $got->name, $expected->{name},
"name parameter correct for `$desc'" );
}
+
+done_testing;
my $schema = DBICTest->init_schema();
-plan tests => 1269;
-
my $employees = $schema->resultset('Employee');
$employees->delete();
my $group_3 = $employees->search({group_id=>3});
my $to_group = 1;
my $to_pos = undef;
-# now that we have transactions we need to work around stupid sqlite
{
my @empl = $group_3->all;
while (my $employee = shift @empl) {
- $employee->discard_changes; # since we are effective shift()ing the $rs while doing this
$employee->move_to_group($to_group, $to_pos);
$to_pos++;
$to_group = $to_group==1 ? 2 : 1;
}
foreach my $group_id (1..4) {
my $group_employees = $employees->search({group_id=>$group_id});
- $group_employees->all();
ok( check_rs($group_employees), "group positions after move_to_group" );
}
my $to_group_2 = 1;
$to_pos = undef;
-# now that we have transactions we need to work around stupid sqlite
{
my @empl = $group_3->all;
while (my $employee = shift @empl) {
foreach my $group_id_2 (1..4) {
foreach my $group_id_3 (1..4) {
my $group_employees = $employees->search({group_id_2=>$group_id_2,group_id_3=>$group_id_3});
- $group_employees->all();
ok( check_rs($group_employees), "group positions after move_to_group" );
}
}
return 1;
}
+done_testing;
my $schema = DBICTest->init_schema();
-plan tests => 18;
+plan tests => 20;
-my $cd;
-my $rs = $cd = $schema->resultset("CD")->search({}, { order_by => 'cdid' });
+my $rs = $schema->resultset("CD")->search({}, { order_by => 'cdid' });
my $rs_title = $rs->get_column('title');
my $rs_year = $rs->get_column('year');
my $owner = $schema->resultset('Owners')->find ({ name => 'Newton' });
ok ($owner->books->count > 1, 'Owner Newton has multiple books');
is ($owner->search_related ('books')->get_column ('price')->sum, 60, 'Correctly calculated price of all owned books');
+
+
+# make sure joined/prefetched get_column of a PK dtrt
+
+$rs->reset;
+my $j_rs = $rs->search ({}, { join => 'tracks' })->get_column ('cdid');
+is_deeply (
+ [ $j_rs->all ],
+ [ map { my $c = $rs->next; ( ($c->id) x $c->tracks->count ) } (1 .. $rs->count) ],
+ 'join properly explodes amount of rows from get_column',
+);
+
+$rs->reset;
+my $p_rs = $rs->search ({}, { prefetch => 'tracks' })->get_column ('cdid');
+is_deeply (
+ [ $p_rs->all ],
+ [ $rs->get_column ('cdid')->all ],
+ 'prefetch properly collapses amount of rows from get_column',
+);
plan tests => $tests_per_run * @json_backends;
-use JSON::Any;
for my $js (@json_backends) {
eval {JSON::Any->import ($js) };
+++ /dev/null
-use strict;
-use warnings;
-
-use Test::More;
-use lib qw(t/lib);
-use DBICTest;
-
-my $schema = DBICTest->init_schema();
-
-eval { require DateTime::Format::MySQL };
-plan skip_all => "Need DateTime::Format::MySQL for inflation tests" if $@;
-
-plan tests => 32;
-
-# inflation test
-my $event = $schema->resultset("Event")->find(1);
-
-isa_ok($event->starts_at, 'DateTime', 'DateTime returned');
-
-# klunky, but makes older Test::More installs happy
-my $starts = $event->starts_at;
-is("$starts", '2006-04-25T22:24:33', 'Correct date/time');
-
-# create using DateTime
-my $created = $schema->resultset('Event')->create({
- starts_at => DateTime->new(year=>2006, month=>6, day=>18),
- created_on => DateTime->new(year=>2006, month=>6, day=>23)
-});
-my $created_start = $created->starts_at;
-
-isa_ok($created->starts_at, 'DateTime', 'DateTime returned');
-is("$created_start", '2006-06-18T00:00:00', 'Correct date/time');
-
-## timestamp field
-isa_ok($event->created_on, 'DateTime', 'DateTime returned');
-
-## varchar fields
-isa_ok($event->varchar_date, 'DateTime', 'DateTime returned');
-isa_ok($event->varchar_datetime, 'DateTime', 'DateTime returned');
-
-## skip inflation field
-isnt(ref($event->skip_inflation), 'DateTime', 'No DateTime returned for skip inflation column');
-
-# klunky, but makes older Test::More installs happy
-my $createo = $event->created_on;
-is("$createo", '2006-06-22T21:00:05', 'Correct date/time');
-
-my $created_cron = $created->created_on;
-
-isa_ok($created->created_on, 'DateTime', 'DateTime returned');
-is("$created_cron", '2006-06-23T00:00:00', 'Correct date/time');
-
-
-# Test "timezone" parameter
-my $event_tz = $schema->resultset('EventTZ')->create({
- starts_at => DateTime->new(year=>2007, month=>12, day=>31, time_zone => "America/Chicago" ),
- created_on => DateTime->new(year=>2006, month=>1, day=>31,
- hour => 13, minute => 34, second => 56, time_zone => "America/New_York" ),
-});
-
-is ($event_tz->starts_at->day_name, "Montag", 'Locale de_DE loaded: day_name');
-is ($event_tz->starts_at->month_name, "Dezember", 'Locale de_DE loaded: month_name');
-is ($event_tz->created_on->day_name, "Tuesday", 'Default locale loaded: day_name');
-is ($event_tz->created_on->month_name, "January", 'Default locale loaded: month_name');
-
-my $starts_at = $event_tz->starts_at;
-is("$starts_at", '2007-12-31T00:00:00', 'Correct date/time using timezone');
-
-my $created_on = $event_tz->created_on;
-is("$created_on", '2006-01-31T12:34:56', 'Correct timestamp using timezone');
-is($event_tz->created_on->time_zone->name, "America/Chicago", "Correct timezone");
-
-my $loaded_event = $schema->resultset('EventTZ')->find( $event_tz->id );
-
-isa_ok($loaded_event->starts_at, 'DateTime', 'DateTime returned');
-$starts_at = $loaded_event->starts_at;
-is("$starts_at", '2007-12-31T00:00:00', 'Loaded correct date/time using timezone');
-is($starts_at->time_zone->name, 'America/Chicago', 'Correct timezone');
-
-isa_ok($loaded_event->created_on, 'DateTime', 'DateTime returned');
-$created_on = $loaded_event->created_on;
-is("$created_on", '2006-01-31T12:34:56', 'Loaded correct timestamp using timezone');
-is($created_on->time_zone->name, 'America/Chicago', 'Correct timezone');
-
-# Test floating timezone warning
-# We expect one warning
-SKIP: {
- skip "ENV{DBIC_FLOATING_TZ_OK} was set, skipping", 1 if $ENV{DBIC_FLOATING_TZ_OK};
- local $SIG{__WARN__} = sub {
- like(
- shift,
- qr/You're using a floating timezone, please see the documentation of DBIx::Class::InflateColumn::DateTime for an explanation/,
- 'Floating timezone warning'
- );
- };
- my $event_tz_floating = $schema->resultset('EventTZ')->create({
- starts_at => DateTime->new(year=>2007, month=>12, day=>31, ),
- created_on => DateTime->new(year=>2006, month=>1, day=>31,
- hour => 13, minute => 34, second => 56, ),
- });
- delete $SIG{__WARN__};
-};
-
-# This should fail to set
-my $prev_str = "$created_on";
-$loaded_event->update({ created_on => '0000-00-00' });
-is("$created_on", $prev_str, "Don't update invalid dates");
-
-my $invalid = $schema->resultset('Event')->create({
- starts_at => '0000-00-00',
- created_on => $created_on
-});
-
-is( $invalid->get_column('starts_at'), '0000-00-00', "Invalid date stored" );
-is( $invalid->starts_at, undef, "Inflate to undef" );
-
-$invalid->created_on('0000-00-00');
-$invalid->update;
-
-{
- local $@;
- eval { $invalid->created_on };
- like( $@, qr/invalid date format/i, "Invalid date format exception");
-}
-
-## varchar field using inflate_date => 1
-my $varchar_date = $event->varchar_date;
-is("$varchar_date", '2006-07-23T00:00:00', 'Correct date/time');
-
-## varchar field using inflate_datetime => 1
-my $varchar_datetime = $event->varchar_datetime;
-is("$varchar_datetime", '2006-05-22T19:05:07', 'Correct date/time');
-
-## skip inflation field
-my $skip_inflation = $event->skip_inflation;
-is ("$skip_inflation", '2006-04-21 18:04:06', 'Correct date/time');
cmp_ok(scalar @cds, '==', 1, "condition based on inherited join okay");
my $rs3 = $rs2->search_related('cds');
-cmp_ok(scalar($rs3->all), '==', 45, "All cds for artist returned");
+cmp_ok(scalar($rs3->all), '==', 15, "All cds for artist returned");
-cmp_ok($rs3->count, '==', 45, "All cds for artist returned via count");
+cmp_ok($rs3->count, '==', 15, "All cds for artist returned via count");
my $rs4 = $schema->resultset("CD")->search({ 'artist.artistid' => '1' }, { join => ['tracks', 'artist'], prefetch => 'artist' });
my @rs4_results = $rs4->all;
# test trace output correctness for bind params
{
- my ($sql, @bind) = ('');
- $schema->storage->debugcb( sub { $sql = $_[1] } );
+ my ($sql, @bind);
+ $schema->storage->debugobj(DBIC::DebugObj->new(\$sql, \@bind));
my @cds = $schema->resultset('CD')->search( { artist => 1, cdid => { -between => [ 1, 3 ] }, } );
is_same_sql_bind(
- $sql, [],
- "SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track FROM cd me WHERE ( artist = ? AND cdid BETWEEN ? AND ? ): '1', '1', '3'", [],
+ $sql, \@bind,
+ "SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track FROM cd me WHERE ( artist = ? AND (cdid BETWEEN ? AND ?) ): '1', '1', '3'",
+ [qw/'1' '1' '3'/],
'got correct SQL with all bind parameters (debugcb)'
);
- $schema->storage->debugcb(undef);
- $schema->storage->debugobj(DBIC::DebugObj->new(\$sql, \@bind));
@cds = $schema->resultset('CD')->search( { artist => 1, cdid => { -between => [ 1, 3 ] }, } );
is_same_sql_bind(
$sql, \@bind,
- "SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track FROM cd me WHERE ( artist = ? AND cdid BETWEEN ? AND ? )", ["'1'", "'1'", "'3'"],
+ "SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track FROM cd me WHERE ( artist = ? AND (cdid BETWEEN ? AND ?) )", ["'1'", "'1'", "'3'"],
'got correct SQL with all bind parameters (debugobj)'
);
}
'bar',
undef,
{
+ %{$storage->_default_dbi_connect_attributes || {} },
PrintError => 0,
AutoCommit => 1,
},
args => [
{
on_connect_do => [qw/a b c/],
- PrintError => 0,
- AutoCommit => 1,
+ PrintError => 1,
+ AutoCommit => 0,
on_disconnect_do => [qw/d e f/],
user => 'bar',
dsn => 'foo',
'bar',
undef,
{
- PrintError => 0,
- AutoCommit => 1,
+ %{$storage->_default_dbi_connect_attributes || {} },
+ PrintError => 1,
+ AutoCommit => 0,
},
],
},
--- /dev/null
+use strict;
+use warnings;
+no warnings qw/once redefine/;
+
+use lib qw(t/lib);
+use DBICTest;
+
+use Test::More tests => 9;
+
+my $schema = DBICTest->init_schema(
+ no_connect => 1,
+ no_deploy => 1,
+);
+
+local *DBIx::Class::Storage::DBI::connect_call_foo = sub {
+ isa_ok $_[0], 'DBIx::Class::Storage::DBI',
+ 'got storage in connect_call method';
+ is $_[1], 'bar', 'got param in connect_call method';
+};
+
+local *DBIx::Class::Storage::DBI::disconnect_call_foo = sub {
+ isa_ok $_[0], 'DBIx::Class::Storage::DBI',
+ 'got storage in disconnect_call method';
+};
+
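+# The connection below exercises the accepted on_connect_call forms: a do_sql
+# entry with a plain SQL string, with an arrayref of ($sql, \%attrs, @bind),
+# and with a coderef returning the statement(s); a bare coderef that receives
+# the storage object; and a named method with an argument ([ foo => 'bar' ]),
+# which ends up invoking the connect_call_foo('bar') defined above.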
+ok $schema->connection(
+ DBICTest->_database,
+ {
+ on_connect_call => [
+ [ do_sql => 'create table test1 (id integer)' ],
+ [ do_sql => [ 'insert into test1 values (?)', {}, 1 ] ],
+ [ do_sql => sub { ['insert into test1 values (2)'] } ],
+ [ sub { $_[0]->dbh->do($_[1]) }, 'insert into test1 values (3)' ],
+ # this invokes $storage->connect_call_foo('bar') (above)
+ [ foo => 'bar' ],
+ ],
+ on_connect_do => 'insert into test1 values (4)',
+ on_disconnect_call => 'foo',
+ },
+), 'connection()';
+
+is_deeply (
+ $schema->storage->dbh->selectall_arrayref('select * from test1'),
+ [ [ 1 ], [ 2 ], [ 3 ], [ 4 ] ],
+ 'on_connect_call/do actions worked'
+);
+
+local *DBIx::Class::Storage::DBI::connect_call_foo = sub {
+ isa_ok $_[0], 'DBIx::Class::Storage::DBI',
+ 'got storage in connect_call method';
+};
+
+local *DBIx::Class::Storage::DBI::connect_call_bar = sub {
+ isa_ok $_[0], 'DBIx::Class::Storage::DBI',
+ 'got storage in connect_call method';
+};
+
+$schema->storage->disconnect;
+
+ok $schema->connection(
+ DBICTest->_database,
+ {
+ # method list form
+ on_connect_call => [ 'foo', sub { ok 1, "coderef in list form" }, 'bar' ],
+ },
+), 'connection()';
+
+$schema->storage->ensure_connected;
use strict;
use warnings;
-use Test::More tests => 10;
+use Test::More tests => 12;
use lib qw(t/lib);
use base 'DBICTest';
no_connect => 1,
no_deploy => 1,
);
+
+ok $schema->connection(
+ DBICTest->_database,
+ {
+ on_connect_do => 'CREATE TABLE TEST_empty (id INTEGER)',
+ },
+), 'connection()';
+
+is_deeply (
+ $schema->storage->dbh->selectall_arrayref('SELECT * FROM TEST_empty'),
+ [],
+ 'string version on_connect_do() worked'
+);
+
+$schema->storage->disconnect;
+
ok $schema->connection(
DBICTest->_database,
{
},
), 'connection()';
-is_deeply
+is_deeply (
$schema->storage->dbh->selectall_arrayref('SELECT * FROM TEST_empty'),
[ [ 2 ], [ 3 ], [ 7 ] ],
- 'on_connect_do() worked';
+ 'on_connect_do() worked'
+);
eval { $schema->storage->dbh->do('SELECT 1 FROM TEST_nonexistent'); };
ok $@, 'Searching for nonexistent table dies';
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use Data::Dumper;
+use DBIC::SqlMakerTest;
+
+my $ping_count = 0;
+
+{
+ local $SIG{__WARN__} = sub {};
+ require DBIx::Class::Storage::DBI;
+
+ my $ping = \&DBIx::Class::Storage::DBI::_ping;
+
+ *DBIx::Class::Storage::DBI::_ping = sub {
+ $ping_count++;
+ goto &$ping;
+ };
+}
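+# The wrapper above only counts calls and then tail-calls the original _ping()
+# via goto, so connectivity checks behave exactly as before while the tests
+# below assert how often the storage layer actually pings.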
+
+
+# measure pings around deploy() separately
+my $schema = DBICTest->init_schema( sqlite_use_file => 1, no_populate => 1 );
+
+is ($ping_count, 0, 'no _ping() calls during deploy');
+$ping_count = 0;
+
+
+
+DBICTest->populate_schema ($schema);
+
+# perform some operations and make sure they don't ping
+
+$schema->resultset('CD')->create({
+ cdid => 6, artist => 3, title => 'mtfnpy', year => 2009
+});
+
+$schema->resultset('CD')->create({
+ cdid => 7, artist => 3, title => 'mtfnpy2', year => 2009
+});
+
+$schema->storage->_dbh->disconnect;
+
+$schema->resultset('CD')->create({
+ cdid => 8, artist => 3, title => 'mtfnpy3', year => 2009
+});
+
+$schema->storage->_dbh->disconnect;
+
+$schema->txn_do(sub {
+ $schema->resultset('CD')->create({
+ cdid => 9, artist => 3, title => 'mtfnpy4', year => 2009
+ });
+});
+
+is $ping_count, 0, 'no _ping() calls';
+
+done_testing;
offset => 2,
order_by => 'artistid' }
);
-is( $it->count, 3, "LIMIT count ok" );
+is( $it->count, 3, "LIMIT count ok" ); # ask for 3 rows out of 7 artists
is( $it->next->name, "Artist 2", "iterator->next ok" );
$it->next;
$it->next;
# clean up our mess
END {
+ my $dbh = eval { $schema->storage->_dbh };
$dbh->do("DROP TABLE artist") if $dbh;
}
use Test::More;
use Test::Exception;
use DBICTest;
+use List::Util 'first';
+use Scalar::Util 'reftype';
+use File::Spec;
+use IO::Handle;
BEGIN {
eval "use DBIx::Class::Storage::DBI::Replicated; use Test::Moose";
- plan $@
- ? ( skip_all => "Deps not installed: $@" )
- : ( tests => 79 );
+ plan skip_all => "Deps not installed: $@" if $@;
}
use_ok 'DBIx::Class::Storage::DBI::Replicated::Pool';
use_ok 'DBIx::Class::Storage::DBI::Replicated::Replicant';
use_ok 'DBIx::Class::Storage::DBI::Replicated';
+use Moose();
+use MooseX::Types();
+diag "Using Moose version $Moose::VERSION and MooseX::Types version $MooseX::Types::VERSION";
+
=head1 HOW TO USE
This is a test of the replicated storage system. This will work in one of
will try to test those. If you do that, it will assume the setup is properly
replicating. Your results may vary, but I have demonstrated this to work with
mysql native replication.
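
For example, to point the test at a real master/slave setup (a hypothetical
configuration - the DSN value below is only a placeholder), the custom-DSN
variant of this test reads replicant connection details from environment
variables such as:

  DBICTEST_SLAVE0_DSN     (e.g. dbi:mysql:database=slave0;host=replica1)
  DBICTEST_SLAVE0_DBUSER
  DBICTEST_SLAVE0_DBPASS
  DBICTEST_SLAVE1_DSN
  DBICTEST_SLAVE1_DBUSER
  DBICTEST_SLAVE1_DBPASS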
-
+
=cut
## --------------------------------------------------------------------- ##
## Create an object to contain your replicated stuff.
## --------------------------------------------------------------------- ##
-
+
package DBIx::Class::DBI::Replicated::TestReplication;
-
+
use DBICTest;
use base qw/Class::Accessor::Fast/;
-
+
__PACKAGE__->mk_accessors( qw/schema/ );
## Initialize the object
-
- sub new {
- my $class = shift @_;
- my $self = $class->SUPER::new(@_);
-
- $self->schema( $self->init_schema );
- return $self;
- }
-
+
+ sub new {
+ my ($class, $schema_method) = (shift, shift);
+ my $self = $class->SUPER::new(@_);
+
+ $self->schema( $self->init_schema($schema_method) );
+ return $self;
+ }
+
## Get the Schema and set the replication storage type
-
+
sub init_schema {
# current SQLT SQLite producer does not handle DROP TABLE IF EXISTS, trap warnings here
local $SIG{__WARN__} = sub { warn @_ unless $_[0] =~ /no such table.+DROP TABLE/ };
- my $class = shift @_;
+ my ($class, $schema_method) = @_;
- my $schema = DBICTest->init_schema(
- sqlite_use_file => 1,
- storage_type=>{
- '::DBI::Replicated' => {
- balancer_type=>'::Random',
- balancer_args=>{
- auto_validate_every=>100,
- },
- }
- },
- deploy_args=>{
- add_drop_table => 1,
- },
- );
+ my $method = "get_schema_$schema_method";
+ my $schema = $class->$method;
return $schema;
}
-
+
+ sub get_schema_by_storage_type {
+ DBICTest->init_schema(
+ sqlite_use_file => 1,
+ storage_type=>{
+ '::DBI::Replicated' => {
+ balancer_type=>'::Random',
+ balancer_args=>{
+ auto_validate_every=>100,
+ master_read_weight => 1
+ },
+ }
+ },
+ deploy_args=>{
+ add_drop_table => 1,
+ },
+ );
+ }
+
+ sub get_schema_by_connect_info {
+ DBICTest->init_schema(
+ sqlite_use_file => 1,
+ storage_type=> '::DBI::Replicated',
+ balancer_type=>'::Random',
+ balancer_args=> {
+ auto_validate_every=>100,
+ master_read_weight => 1
+ },
+ deploy_args=>{
+ add_drop_table => 1,
+ },
+ );
+ }
+
sub generate_replicant_connect_info {}
sub replicate {}
sub cleanup {}
-
+ ## --------------------------------------------------------------------- ##
+ ## Add a connect_info option to test option merging.
+ ## --------------------------------------------------------------------- ##
+ {
+ package DBIx::Class::Storage::DBI::Replicated;
+
+ use Moose;
+
+ __PACKAGE__->meta->make_mutable;
+
+ around connect_info => sub {
+ my ($next, $self, $info) = @_;
+ $info->[3]{master_option} = 1;
+ $self->$next($info);
+ };
+
+ __PACKAGE__->meta->make_immutable;
+
+ no Moose;
+ }
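+  ## (The meta class is flipped to mutable and back around the modifier because
+  ## a method modifier cannot be installed on an already-immutable Moose class.)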
+
## --------------------------------------------------------------------- ##
## Subclass for when you are using SQLite for testing, this provides a fake
## replication support.
## --------------------------------------------------------------------- ##
-
+
package DBIx::Class::DBI::Replicated::TestReplication::SQLite;
use DBICTest;
- use File::Copy;
+ use File::Copy;
use base 'DBIx::Class::DBI::Replicated::TestReplication';
-
- __PACKAGE__->mk_accessors( qw/master_path slave_paths/ );
-
- ## Set the mastep path from DBICTest
-
- sub new {
- my $class = shift @_;
- my $self = $class->SUPER::new(@_);
-
- $self->master_path( DBICTest->_sqlite_dbfilename );
- $self->slave_paths([
- "t/var/DBIxClass_slave1.db",
- "t/var/DBIxClass_slave2.db",
+
+ __PACKAGE__->mk_accessors(qw/master_path slave_paths/);
+
+ ## Set the master path from DBICTest
+
+ sub new {
+ my $class = shift @_;
+ my $self = $class->SUPER::new(@_);
+
+ $self->master_path( DBICTest->_sqlite_dbfilename );
+ $self->slave_paths([
+ File::Spec->catfile(qw/t var DBIxClass_slave1.db/),
+ File::Spec->catfile(qw/t var DBIxClass_slave2.db/),
]);
-
- return $self;
- }
-
+
+ return $self;
+ }
+
## Return an Array of ArrayRefs where each ArrayRef is suitable to use for
## $storage->connect_info to be used for connecting replicants.
-
+
sub generate_replicant_connect_info {
my $self = shift @_;
my @dsn = map {
"dbi:SQLite:${_}";
} @{$self->slave_paths};
-
- return map { [$_,'','',{AutoCommit=>1}] } @dsn;
+
+ my @connect_infos = map { [$_,'','',{AutoCommit=>1}] } @dsn;
+
+ ## Make sure nothing is left over from a failed test
+ $self->cleanup;
+
+ ## try a hashref too
+ my $c = $connect_infos[0];
+ $connect_infos[0] = {
+ dsn => $c->[0],
+ user => $c->[1],
+ password => $c->[2],
+ %{ $c->[3] }
+ };
+
+ @connect_infos
}
-
+
## Do a 'good enough' replication by copying the master dbfile over each of
## the slave dbfiles. If the master is SQLite we do this, otherwise we
## just do a one second pause to let the slaves catch up.
-
+
sub replicate {
my $self = shift @_;
foreach my $slave (@{$self->slave_paths}) {
copy($self->master_path, $slave);
}
}
-
+
## Cleanup after ourselves. Unlink all gthe slave paths.
-
+
sub cleanup {
my $self = shift @_;
foreach my $slave (@{$self->slave_paths}) {
- unlink $slave;
- }
+ if(-e $slave) {
+ unlink $slave;
+ }
+ }
}
-
+
## --------------------------------------------------------------------- ##
## Subclass for when you are setting the databases via custom export vars
## This is for when you have a replicating database setup that you are
## two slave databases to test against, as well as a replication system
## that will replicate in less than 1 second.
## --------------------------------------------------------------------- ##
-
- package DBIx::Class::DBI::Replicated::TestReplication::Custom;
+
+ package DBIx::Class::DBI::Replicated::TestReplication::Custom;
use base 'DBIx::Class::DBI::Replicated::TestReplication';
-
+
## Return an Array of ArrayRefs where each ArrayRef is suitable to use for
## $storage->connect_info to be used for connecting replicants.
-
- sub generate_replicant_connect_info {
+
+ sub generate_replicant_connect_info {
return (
[$ENV{"DBICTEST_SLAVE0_DSN"}, $ENV{"DBICTEST_SLAVE0_DBUSER"}, $ENV{"DBICTEST_SLAVE0_DBPASS"}, {AutoCommit => 1}],
- [$ENV{"DBICTEST_SLAVE1_DSN"}, $ENV{"DBICTEST_SLAVE1_DBUSER"}, $ENV{"DBICTEST_SLAVE1_DBPASS"}, {AutoCommit => 1}],
+ [$ENV{"DBICTEST_SLAVE1_DSN"}, $ENV{"DBICTEST_SLAVE1_DBUSER"}, $ENV{"DBICTEST_SLAVE1_DBPASS"}, {AutoCommit => 1}],
);
}
-
- ## pause a bit to let the replication catch up
-
+
+ ## pause a bit to let the replication catch up
+
sub replicate {
- sleep 1;
- }
+ sleep 1;
+ }
}
## ----------------------------------------------------------------------------
'DBIx::Class::DBI::Replicated::TestReplication::Custom' :
'DBIx::Class::DBI::Replicated::TestReplication::SQLite';
-ok my $replicated = $replicated_class->new
- => 'Created a replication object';
-
-isa_ok $replicated->schema
- => 'DBIx::Class::Schema';
-
-isa_ok $replicated->schema->storage
- => 'DBIx::Class::Storage::DBI::Replicated';
+my $replicated;
+
+for my $method (qw/by_connect_info by_storage_type/) {
+ undef $replicated;
+ ok $replicated = $replicated_class->new($method)
+ => "Created a replication object $method";
+
+ isa_ok $replicated->schema
+ => 'DBIx::Class::Schema';
+
+ isa_ok $replicated->schema->storage
+ => 'DBIx::Class::Storage::DBI::Replicated';
+
+ isa_ok $replicated->schema->storage->balancer
+ => 'DBIx::Class::Storage::DBI::Replicated::Balancer::Random'
+ => 'configured balancer_type';
+}
ok $replicated->schema->storage->meta
=> 'has a meta object';
-
+
isa_ok $replicated->schema->storage->master
=> 'DBIx::Class::Storage::DBI';
-
+
isa_ok $replicated->schema->storage->pool
=> 'DBIx::Class::Storage::DBI::Replicated::Pool';
-
+
does_ok $replicated->schema->storage->balancer
- => 'DBIx::Class::Storage::DBI::Replicated::Balancer';
+ => 'DBIx::Class::Storage::DBI::Replicated::Balancer';
ok my @replicant_connects = $replicated->generate_replicant_connect_info
=> 'got replication connect information';
ok my @replicated_storages = $replicated->schema->storage->connect_replicants(@replicant_connects)
=> 'Created some storages suitable for replicants';
-
+
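+## Record, for every executed statement, which side of the pool ran it: the
+## debug callback below scrapes the dsn out of the bracketed part of the debug
+## line and classifies anything tagged REPLICANT as such (everything else is
+## the MASTER). %debug is then inspected after each query in the tests below.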
+our %debug;
+$replicated->schema->storage->debug(1);
+$replicated->schema->storage->debugcb(sub {
+ my ($op, $info) = @_;
+ ##warn "\n$op, $info\n";
+ %debug = (
+ op => $op,
+ info => $info,
+ dsn => ($info=~m/\[(.+)\]/)[0],
+ storage_type => $info=~m/REPLICANT/ ? 'REPLICANT' : 'MASTER',
+ );
+});
+
+ok my @all_storages = $replicated->schema->storage->all_storages
+ => '->all_storages';
+
+is scalar @all_storages,
+ 3
+ => 'correct number of ->all_storages';
+
+is ((grep $_->isa('DBIx::Class::Storage::DBI'), @all_storages),
+ 3
+ => '->all_storages are correct type');
+
+my @all_storage_opts =
+ grep { (reftype($_)||'') eq 'HASH' }
+ map @{ $_->_connect_info }, @all_storages;
+
+is ((grep $_->{master_option}, @all_storage_opts),
+ 3
+ => 'connect_info was merged from master to replicants');
+
+my @replicant_names = keys %{ $replicated->schema->storage->replicants };
+
+ok @replicant_names, "found replicant names @replicant_names";
+
+## Silence warning about not supporting the is_replicating method if using the
+## sqlite dbs.
+$replicated->schema->storage->debugobj->silence(1)
+ if first { m{^t/} } @replicant_names;
+
isa_ok $replicated->schema->storage->balancer->current_replicant
=> 'DBIx::Class::Storage::DBI';
-
+
+$replicated->schema->storage->debugobj->silence(0);
+
ok $replicated->schema->storage->pool->has_replicants
- => 'does have replicants';
+ => 'does have replicants';
is $replicated->schema->storage->pool->num_replicants => 2
=> 'has two replicants';
-
+
does_ok $replicated_storages[0]
=> 'DBIx::Class::Storage::DBI::Replicated::Replicant';
does_ok $replicated_storages[1]
=> 'DBIx::Class::Storage::DBI::Replicated::Replicant';
-
-my @replicant_names = keys %{$replicated->schema->storage->replicants};
does_ok $replicated->schema->storage->replicants->{$replicant_names[0]}
=> 'DBIx::Class::Storage::DBI::Replicated::Replicant';
does_ok $replicated->schema->storage->replicants->{$replicant_names[1]}
- => 'DBIx::Class::Storage::DBI::Replicated::Replicant';
+ => 'DBIx::Class::Storage::DBI::Replicated::Replicant';
## Add some info to the database
[ qw/artistid name/ ],
[ 4, "Ozric Tentacles"],
]);
-
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ like $debug{info}, qr/INSERT/, 'Last was an insert';
+
## Make sure all the slaves have the table definitions
$replicated->replicate;
$replicated->schema->storage->replicants->{$replicant_names[0]}->active(1);
$replicated->schema->storage->replicants->{$replicant_names[1]}->active(1);
+
+## Silence warning about not supporting the is_replicating method if using the
+## sqlite dbs.
+$replicated->schema->storage->debugobj->silence(1)
+ if first { m{^t/} } @replicant_names;
+
$replicated->schema->storage->pool->validate_replicants;
+$replicated->schema->storage->debugobj->silence(0);
+
## Make sure we can read the data.
ok my $artist1 = $replicated->schema->resultset('Artist')->find(4)
=> 'Created Result';
+## We removed testing here since master read weight is on, so we can't tell in
+## advance what storage to expect. We turn master read weight off a bit further down.
+## is $debug{storage_type}, 'REPLICANT'
+## => "got last query from a replicant: $debug{dsn}, $debug{info}";
+
isa_ok $artist1
=> 'DBICTest::Artist';
-
+
is $artist1->name, 'Ozric Tentacles'
=> 'Found expected name for first result';
+## Check that master_read_weight is honored
+{
+ no warnings qw/once redefine/;
+
+ local
+ *DBIx::Class::Storage::DBI::Replicated::Balancer::Random::_random_number =
+ sub { 999 };
+
+ $replicated->schema->storage->balancer->increment_storage;
+
+ is $replicated->schema->storage->balancer->current_replicant,
+ $replicated->schema->storage->master
+ => 'master_read_weight is honored';
+
+ ## turn it off for the duration of the test
+ $replicated->schema->storage->balancer->master_read_weight(0);
+ $replicated->schema->storage->balancer->increment_storage;
+}
+
## Add some new rows that only the master will have. This is because
## we overload any type of write operation so that it must hit the master
## database.
[ 7, "Watergate"],
]);
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ like $debug{info}, qr/INSERT/, 'Last was an insert';
+
## Make sure all the slaves have the table definitions
$replicated->replicate;
ok my $artist2 = $replicated->schema->resultset('Artist')->find(5)
=> 'Sync succeed';
-
+
+is $debug{storage_type}, 'REPLICANT'
+ => "got last query from a replicant: $debug{dsn}";
+
isa_ok $artist2
=> 'DBICTest::Artist';
-
+
is $artist2->name, "Doom's Children"
=> 'Found expected name for first result';
is $replicated->schema->storage->pool->connected_replicants => 2
=> "both replicants are connected";
-
+
$replicated->schema->storage->replicants->{$replicant_names[0]}->disconnect;
$replicated->schema->storage->replicants->{$replicant_names[1]}->disconnect;
ok my $artist3 = $replicated->schema->resultset('Artist')->find(6)
=> 'Still finding stuff.';
-
+
+is $debug{storage_type}, 'REPLICANT'
+ => "got last query from a replicant: $debug{dsn}";
+
isa_ok $artist3
=> 'DBICTest::Artist';
-
+
is $artist3->name, "Dead On Arrival"
=> 'Found expected name for first result';
is $replicated->schema->storage->pool->connected_replicants => 1
=> "At Least One replicant reconnected to handle the job";
-
+
## What happens when we try to select something that doesn't exist?
ok ! $replicated->schema->resultset('Artist')->find(666)
=> 'Correctly failed to find something.';
-
+
+is $debug{storage_type}, 'REPLICANT'
+ => "got last query from a replicant: $debug{dsn}";
+
## test the reliable option
TESTRELIABLE: {
-
- $replicated->schema->storage->set_reliable_storage;
-
- ok $replicated->schema->resultset('Artist')->find(2)
- => 'Read from master 1';
-
- ok $replicated->schema->resultset('Artist')->find(5)
- => 'Read from master 2';
-
- $replicated->schema->storage->set_balanced_storage;
-
- ok $replicated->schema->resultset('Artist')->find(3)
+
+ $replicated->schema->storage->set_reliable_storage;
+
+ ok $replicated->schema->resultset('Artist')->find(2)
+ => 'Read from master 1';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ ok $replicated->schema->resultset('Artist')->find(5)
+ => 'Read from master 2';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ $replicated->schema->storage->set_balanced_storage;
+
+ ok $replicated->schema->resultset('Artist')->find(3)
=> 'Read from replicant';
+
+ is $debug{storage_type}, 'REPLICANT',
+ "got last query from a replicant: $debug{dsn}";
}
## Make sure when reliable goes out of scope, we are using replicants again
ok $replicated->schema->resultset('Artist')->find(1)
=> 'back to replicant 1.';
-
+
+ is $debug{storage_type}, 'REPLICANT',
+ "got last query from a replicant: $debug{dsn}";
+
ok $replicated->schema->resultset('Artist')->find(2)
=> 'back to replicant 2.';
+ is $debug{storage_type}, 'REPLICANT',
+ "got last query from a replicant: $debug{dsn}";
+
## set all the replicants to inactive, and make sure the balancer falls back to
## the master.
$replicated->schema->storage->replicants->{$replicant_names[0]}->active(0);
$replicated->schema->storage->replicants->{$replicant_names[1]}->active(0);
-
-ok $replicated->schema->resultset('Artist')->find(2)
- => 'Fallback to master';
+
+{
+ ## catch the fallback to master warning
+ open my $debugfh, '>', \my $fallback_warning;
+ my $oldfh = $replicated->schema->storage->debugfh;
+ $replicated->schema->storage->debugfh($debugfh);
+
+ ok $replicated->schema->resultset('Artist')->find(2)
+ => 'Fallback to master';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ like $fallback_warning, qr/falling back to master/
+ => 'emits falling back to master warning';
+
+ $replicated->schema->storage->debugfh($oldfh);
+}
$replicated->schema->storage->replicants->{$replicant_names[0]}->active(1);
$replicated->schema->storage->replicants->{$replicant_names[1]}->active(1);
+
+## Silence warning about not supporting the is_replicating method if using the
+## sqlite dbs.
+$replicated->schema->storage->debugobj->silence(1)
+ if first { m{^t/} } @replicant_names;
+
$replicated->schema->storage->pool->validate_replicants;
+$replicated->schema->storage->debugobj->silence(0);
+
ok $replicated->schema->resultset('Artist')->find(2)
=> 'Returned to replicates';
-
+
+is $debug{storage_type}, 'REPLICANT',
+ "got last query from a replicant: $debug{dsn}";
+
## Getting slave status tests
SKIP: {
## We skip these tests unless you have custom replicants, since the default
## sqlite based replication tests don't support these functions.
-
- skip 'Cannot Test Replicant Status on Non Replicating Database', 9
+
+ skip 'Cannot Test Replicant Status on Non Replicating Database', 10
unless DBICTest->has_custom_dsn && $ENV{"DBICTEST_SLAVE0_DSN"};
$replicated->replicate; ## Give the slaves a chance to catchup.
- ok $replicated->schema->storage->replicants->{$replicant_names[0]}->is_replicating
- => 'Replicants are replicating';
-
- is $replicated->schema->storage->replicants->{$replicant_names[0]}->lag_behind_master, 0
- => 'Replicant is zero seconds behind master';
-
- ## Test the validate replicants
-
- $replicated->schema->storage->pool->validate_replicants;
-
- is $replicated->schema->storage->pool->active_replicants, 2
- => 'Still have 2 replicants after validation';
-
- ## Force the replicants to fail the validate test by required their lag to
- ## be negative (ie ahead of the master!)
-
+ ok $replicated->schema->storage->replicants->{$replicant_names[0]}->is_replicating
+ => 'Replicants are replicating';
+
+ is $replicated->schema->storage->replicants->{$replicant_names[0]}->lag_behind_master, 0
+ => 'Replicant is zero seconds behind master';
+
+ ## Test the validate replicants
+
+ $replicated->schema->storage->pool->validate_replicants;
+
+ is $replicated->schema->storage->pool->active_replicants, 2
+ => 'Still have 2 replicants after validation';
+
+  ## Force the replicants to fail the validate test by requiring their lag to
+ ## be negative (ie ahead of the master!)
+
$replicated->schema->storage->pool->maximum_lag(-10);
$replicated->schema->storage->pool->validate_replicants;
-
+
is $replicated->schema->storage->pool->active_replicants, 0
  => 'No way a replicant can be ahead of the master';
-
+
## Let's be fair to the replicants again. Let them lag up to 5
-
+
$replicated->schema->storage->pool->maximum_lag(5);
$replicated->schema->storage->pool->validate_replicants;
-
+
is $replicated->schema->storage->pool->active_replicants, 2
- => 'Both replicants in good standing again';
-
- ## Check auto validate
-
- is $replicated->schema->storage->balancer->auto_validate_every, 100
- => "Got the expected value for auto validate";
-
- ## This will make sure we auto validatge everytime
- $replicated->schema->storage->balancer->auto_validate_every(0);
-
- ## set all the replicants to inactive, and make sure the balancer falls back to
- ## the master.
-
- $replicated->schema->storage->replicants->{$replicant_names[0]}->active(0);
- $replicated->schema->storage->replicants->{$replicant_names[1]}->active(0);
-
- ## Ok, now when we go to run a query, autovalidate SHOULD reconnect
-
- is $replicated->schema->storage->pool->active_replicants => 0
- => "both replicants turned off";
-
- ok $replicated->schema->resultset('Artist')->find(5)
- => 'replicant reactivated';
-
- is $replicated->schema->storage->pool->active_replicants => 2
- => "both replicants reactivated";
+ => 'Both replicants in good standing again';
+
+ ## Check auto validate
+
+ is $replicated->schema->storage->balancer->auto_validate_every, 100
+ => "Got the expected value for auto validate";
+
+  ## This will make sure we auto-validate every time
+ $replicated->schema->storage->balancer->auto_validate_every(0);
+
+ ## set all the replicants to inactive, and make sure the balancer falls back to
+ ## the master.
+
+ $replicated->schema->storage->replicants->{$replicant_names[0]}->active(0);
+ $replicated->schema->storage->replicants->{$replicant_names[1]}->active(0);
+
+ ## Ok, now when we go to run a query, autovalidate SHOULD reconnect
+
+ is $replicated->schema->storage->pool->active_replicants => 0
+ => "both replicants turned off";
+
+ ok $replicated->schema->resultset('Artist')->find(5)
+ => 'replicant reactivated';
+
+ is $debug{storage_type}, 'REPLICANT',
+ "got last query from a replicant: $debug{dsn}";
+
+ is $replicated->schema->storage->pool->active_replicants => 2
+ => "both replicants reactivated";
}
## Test the reliably callback
ok my $reliably = sub {
-
+
ok $replicated->schema->resultset('Artist')->find(5)
- => 'replicant reactivated';
-
+ => 'replicant reactivated';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
} => 'created coderef properly';
$replicated->schema->storage->execute_reliably($reliably);
## Try something with an error
ok my $unreliably = sub {
-
+
ok $replicated->schema->resultset('ArtistXX')->find(5)
- => 'replicant reactivated';
-
+ => 'replicant reactivated';
+
} => 'created coderef properly';
-throws_ok {$replicated->schema->storage->execute_reliably($unreliably)}
+throws_ok {$replicated->schema->storage->execute_reliably($unreliably)}
qr/Can't find source for ArtistXX/
=> 'Bad coderef throws proper error';
-
+
## Make sure replication came back
ok $replicated->schema->resultset('Artist')->find(3)
=> 'replicant reactivated';
-
+
+is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
+
## make sure transactions are set to execute_reliably
ok my $transaction = sub {
-
- my $id = shift @_;
-
- $replicated
- ->schema
- ->populate('Artist', [
- [ qw/artistid name/ ],
- [ $id, "Children of the Grave"],
- ]);
-
+
+ my $id = shift @_;
+
+ $replicated
+ ->schema
+ ->populate('Artist', [
+ [ qw/artistid name/ ],
+ [ $id, "Children of the Grave"],
+ ]);
+
ok my $result = $replicated->schema->resultset('Artist')->find($id)
- => 'Found expected artist';
-
+ => "Found expected artist for $id";
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
ok my $more = $replicated->schema->resultset('Artist')->find(1)
- => 'Found expected artist again';
-
+ => 'Found expected artist again for 1';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
return ($result, $more);
-
+
} => 'Created a coderef properly';
## Test the transaction with multi return
{
- ok my @return = $replicated->schema->txn_do($transaction, 666)
- => 'did transaction';
-
- is $return[0]->id, 666
- => 'first returned value is correct';
-
- is $return[1]->id, 1
- => 'second returned value is correct';
+ ok my @return = $replicated->schema->txn_do($transaction, 666)
+ => 'did transaction';
+
+ is $return[0]->id, 666
+ => 'first returned value is correct';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
+ is $return[1]->id, 1
+ => 'second returned value is correct';
+
+ is $debug{storage_type}, 'MASTER',
+ "got last query from a master: $debug{dsn}";
+
}
## Test that asking for single return works
{
- ok my $return = $replicated->schema->txn_do($transaction, 777)
- => 'did transaction';
-
- is $return->id, 777
- => 'first returned value is correct';
+ ok my @return = $replicated->schema->txn_do($transaction, 777)
+ => 'did transaction';
+
+ is $return[0]->id, 777
+ => 'first returned value is correct';
+
+ is $return[1]->id, 1
+ => 'second returned value is correct';
}
## Test transaction returning a single value
{
- ok my $result = $replicated->schema->txn_do(sub {
- ok my $more = $replicated->schema->resultset('Artist')->find(1)
- => 'found inside a transaction';
- return $more;
- }) => 'successfully processed transaction';
-
- is $result->id, 1
- => 'Got expected single result from transaction';
+ ok my $result = $replicated->schema->txn_do(sub {
+ ok my $more = $replicated->schema->resultset('Artist')->find(1)
+ => 'found inside a transaction';
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+ return $more;
+ }) => 'successfully processed transaction';
+
+ is $result->id, 1
+ => 'Got expected single result from transaction';
}
## Make sure replication came back
ok $replicated->schema->resultset('Artist')->find(1)
=> 'replicant reactivated';
-
+
+is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
+
## Test Discard changes
{
- ok my $artist = $replicated->schema->resultset('Artist')->find(2)
- => 'got an artist to test discard changes';
-
- ok $artist->discard_changes
- => 'properly discard changes';
+ ok my $artist = $replicated->schema->resultset('Artist')->find(2)
+ => 'got an artist to test discard changes';
+
+ is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
+
+ ok $artist->get_from_storage({force_pool=>'master'})
+ => 'properly called get_from_storage against master';
+
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+
+ ok $artist->discard_changes({force_pool=>'master'})
+ => 'properly called discard_changes against master (manual attrs)';
+
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+
+ ok $artist->discard_changes()
+ => 'properly called discard_changes against master (default attrs)';
+
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+
+ ok $artist->discard_changes({force_pool=>$replicant_names[0]})
+ => 'properly able to override the default attributes';
+
+ is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}"
}
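## Note: as the assertions above demonstrate, get_from_storage and discard_changes
## read from the master storage by default, while an explicit force_pool attribute
## can redirect either call to a named replicant.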
## Test some edge cases, like trying to do a transaction inside a transaction, etc
{
ok my $result = $replicated->schema->txn_do(sub {
- return $replicated->schema->txn_do(sub {
- ok my $more = $replicated->schema->resultset('Artist')->find(1)
- => 'found inside a transaction inside a transaction';
- return $more;
- });
+ return $replicated->schema->txn_do(sub {
+ ok my $more = $replicated->schema->resultset('Artist')->find(1)
+ => 'found inside a transaction inside a transaction';
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+ return $more;
+ });
}) => 'successfully processed transaction';
-
+
is $result->id, 1
- => 'Got expected single result from transaction';
+ => 'Got expected single result from transaction';
}
{
ok my $result = $replicated->schema->txn_do(sub {
- return $replicated->schema->storage->execute_reliably(sub {
- return $replicated->schema->txn_do(sub {
- return $replicated->schema->storage->execute_reliably(sub {
- ok my $more = $replicated->schema->resultset('Artist')->find(1)
- => 'found inside crazy deep transactions and execute_reliably';
- return $more;
- });
- });
- });
+ return $replicated->schema->storage->execute_reliably(sub {
+ return $replicated->schema->txn_do(sub {
+ return $replicated->schema->storage->execute_reliably(sub {
+ ok my $more = $replicated->schema->resultset('Artist')->find(1)
+ => 'found inside crazy deep transactions and execute_reliably';
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+ return $more;
+ });
+ });
+ });
}) => 'successfully processed transaction';
-
+
is $result->id, 1
- => 'Got expected single result from transaction';
-}
+ => 'Got expected single result from transaction';
+}
## Test the force_pool resultset attribute.
{
- ok my $artist_rs = $replicated->schema->resultset('Artist')
+ ok my $artist_rs = $replicated->schema->resultset('Artist')
=> 'got artist resultset';
-
- ## Turn on Forced Pool Storage
- ok my $reliable_artist_rs = $artist_rs->search(undef, {force_pool=>'master'})
+
+ ## Turn on Forced Pool Storage
+ ok my $reliable_artist_rs = $artist_rs->search(undef, {force_pool=>'master'})
=> 'Created a resultset using force_pool storage';
-
- ok my $artist = $reliable_artist_rs->find(2)
+
+ ok my $artist = $reliable_artist_rs->find(2)
=> 'got an artist result via force_pool storage';
-}
-## Delete the old database files
-$replicated->cleanup;
+ is $debug{storage_type}, 'MASTER', "got last query from a master: $debug{dsn}";
+}
-use Data::Dump qw/dump/;
-#warn dump $replicated->schema->storage->read_handler;
+## Test the force_pool resultset attribute part two.
+{
+ ok my $artist_rs = $replicated->schema->resultset('Artist')
+ => 'got artist resultset';
+ ## Turn on Forced Pool Storage
+ ok my $reliable_artist_rs = $artist_rs->search(undef, {force_pool=>$replicant_names[0]})
+ => 'Created a resultset using force_pool storage';
+ ok my $artist = $reliable_artist_rs->find(2)
+ => 'got an artist result via force_pool storage';
+ is $debug{storage_type}, 'REPLICANT', "got last query from a replicant: $debug{dsn}";
+}
+## Delete the old database files
+$replicated->cleanup;
+done_testing;
+# vim: sw=4 sts=4 :
};
use lib qw(t/lib);
+use DBICTest; # do not remove even though it is not used
+
use_ok('DBICVersionOrig');
my $schema_orig = DBICVersion::Schema->connect($dsn, $user, $pass, { ignore_version => 1 });
# should overwrite files and warn about it
my @w;
local $SIG{__WARN__} = sub {
- if ($_[0] =~ /^Overwriting/) {
+ if ($_[0] =~ /Overwriting existing/) {
push @w, $_[0];
}
else {
$schema_upgrade->create_ddl_dir('MySQL', '2.0', $ddl_dir, '1.0');
is (2, @w, 'A warning generated for both the DDL and the diff');
- like ($w[0], qr/^Overwriting existing DDL file - $fn->{v2}/, 'New version DDL overwrite warning');
- like ($w[1], qr/^Overwriting existing diff file - $fn->{trans}/, 'Upgrade diff overwrite warning');
+ like ($w[0], qr/Overwriting existing DDL file - $fn->{v2}/, 'New version DDL overwrite warning');
+ like ($w[1], qr/Overwriting existing diff file - $fn->{trans}/, 'Upgrade diff overwrite warning');
}
{
use warnings;
use Test::More;
+use Test::Exception;
use lib qw(t/lib);
use DBIC::SqlMakerTest;
-BEGIN {
- eval "use DBD::SQLite";
- plan $@
- ? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 3 );
-}
+plan tests => 4;
use_ok('DBICTest');
'sql_maker passes arrayrefs in update'
);
}
+
+# Make sure the carp/croak override in SQLA works (via SQLAHacks)
+my $file = __FILE__;
+$file = "\Q$file\E";
+throws_ok (sub {
+ $schema->resultset ('Artist')->search ({}, { order_by => { -asc => 'stuff', -desc => 'staff' } } )->as_query;
+}, qr/$file/, 'Exception correctly croak()ed');
{
'artist.artistid' => 'me.artist'
}
- ]
+ ],
+ [
+ {
+ 'tracks' => 'tracks',
+ '-join_type' => 'left'
+ },
+ {
+ 'tracks.cd' => 'me.cdid'
+ }
+ ],
],
[
- {
- 'count' => '*'
- }
+ 'me.cdid',
+ { count => 'tracks.cd' },
+ { min => 'me.year', -as => 'me.minyear' },
],
{
'artist.name' => 'Caterwauler McCrae',
is_same_sql_bind(
$sql, \@bind,
- q/SELECT COUNT( * ) FROM `cd` `me` JOIN `artist` `artist` ON ( `artist`.`artistid` = `me`.`artist` ) WHERE ( `artist`.`name` = ? AND `me`.`year` = ? )/, [ ['artist.name' => 'Caterwauler McCrae'], ['me.year' => 2001] ],
- 'got correct SQL and bind parameters for count query with quoting'
+ q/
+ SELECT `me`.`cdid`, COUNT( `tracks`.`cd` ), MIN( `me`.`year` ) AS `me`.`minyear`
+ FROM `cd` `me`
+ JOIN `artist` `artist` ON ( `artist`.`artistid` = `me`.`artist` )
+ LEFT JOIN `tracks` `tracks` ON ( `tracks`.`cd` = `me`.`cdid` )
+ WHERE ( `artist`.`name` = ? AND `me`.`year` = ? )
+ /,
+ [ ['artist.name' => 'Caterwauler McCrae'], ['me.year' => 2001] ],
+ 'got correct SQL and bind parameters for complex select query with quoting'
);
],
[
{
- 'count' => '*'
+ max => 'rank',
+ -as => 'max_rank',
+ },
+ 'rank',
+ {
+ 'count' => '*',
+ -as => 'cnt',
}
],
{
is_same_sql_bind(
$sql, \@bind,
- q/SELECT COUNT( * ) FROM [cd] [me] JOIN [artist] [artist] ON ( [artist].[artistid] = [me].[artist] ) WHERE ( [artist].[name] = ? AND [me].[year] = ? )/, [ ['artist.name' => 'Caterwauler McCrae'], ['me.year' => 2001] ],
+ q/SELECT MAX ( [rank] ) AS [max_rank], [rank], COUNT( * ) AS [cnt] FROM [cd] [me] JOIN [artist] [artist] ON ( [artist].[artistid] = [me].[artist] ) WHERE ( [artist].[name] = ? AND [me].[year] = ? )/, [ ['artist.name' => 'Caterwauler McCrae'], ['me.year' => 2001] ],
'got correct SQL and bind parameters for count query with bracket quoting'
);
+++ /dev/null
-use strict;
-use warnings;
-
-use Test::More;
-use Test::Exception;
-use lib qw(t/lib);
-use DBICTest;
-
-sub mc_diag { diag (@_) if $ENV{DBIC_MULTICREATE_DEBUG} };
-
-plan tests => 8;
-
-my $schema = DBICTest->init_schema();
-
-mc_diag (<<'DG');
-* Test a multilevel might-have with a PK == FK in the might_have/has_many table
-
-CD -> might have -> Artwork
- \
- \-> has_many \
- --> Artwork_to_Artist
- /-> has_many /
- /
- Artist
-DG
-
-lives_ok (sub {
- my $someartist = $schema->resultset('Artist')->first;
- my $cd = $schema->resultset('CD')->create ({
- artist => $someartist,
- title => 'Music to code by until the cows come home',
- year => 2008,
- artwork => {
- artwork_to_artist => [
- { artist => { name => 'cowboy joe' } },
- { artist => { name => 'billy the kid' } },
- ],
- },
- });
-
- isa_ok ($cd, 'DBICTest::CD', 'Main CD object created');
- is ($cd->title, 'Music to code by until the cows come home', 'Correct CD title');
-
- my $art_obj = $cd->artwork;
- ok ($art_obj->has_column_loaded ('cd_id'), 'PK/FK present on artwork object');
- is ($art_obj->artists->count, 2, 'Correct artwork creator count via the new object');
- is_deeply (
- [ sort $art_obj->artists->get_column ('name')->all ],
- [ 'billy the kid', 'cowboy joe' ],
- 'Artists named correctly when queried via object',
- );
-
- my $artwork = $schema->resultset('Artwork')->search (
- { 'cd.title' => 'Music to code by until the cows come home' },
- { join => 'cd' },
- )->single;
- is ($artwork->artists->count, 2, 'Correct artwork creator count via a new search');
- is_deeply (
- [ sort $artwork->artists->get_column ('name')->all ],
- [ 'billy the kid', 'cowboy joe' ],
- 'Artists named correctly queried via a new search',
- );
-}, 'multilevel might-have with a PK == FK in the might_have/has_many table ok');
-
-1;
BEGIN {
- eval "use DBD::mysql; use SQL::Translator 0.09003;";
+ eval "use SQL::Translator 0.09003;";
if ($@) {
- plan skip_all => 'needs DBD::mysql and SQL::Translator 0.09003 for testing';
+ plan skip_all => 'needs SQL::Translator 0.09003 for testing';
}
}
my $schema = DBICTest->init_schema();
# Dummy was yanked out by the sqlt hook test
+# CustomSql tests the horrific/deprecated ->name(\$sql) hack
# YearXXXXCDs are views
-my @sources = grep { $_ ne 'Dummy' && $_ !~ /^Year\d{4}CDs$/ }
- $schema->sources;
+#
+my @sources = grep
+ { $_ !~ /^ (?: Dummy | CustomSql | Year\d{4}CDs ) $/x }
+ $schema->sources
+;
plan tests => ( @sources * 3);
my $sqlt_schema = create_schema({ schema => $schema, args => { parser_args => { } } });
foreach my $source (@sources) {
- my $table = $sqlt_schema->get_table($schema->source($source)->from);
+ my $table = get_table($sqlt_schema, $schema, $source);
my $fk_count = scalar(grep { $_->type eq 'FOREIGN KEY' } $table->get_constraints);
my @indices = $table->get_indices;
my $sqlt_schema = create_schema({ schema => $schema, args => { parser_args => { add_fk_index => 1 } } });
foreach my $source (@sources) {
- my $table = $sqlt_schema->get_table($schema->source($source)->from);
+ my $table = get_table($sqlt_schema, $schema, $source);
my $fk_count = scalar(grep { $_->type eq 'FOREIGN KEY' } $table->get_constraints);
my @indices = $table->get_indices;
my $sqlt_schema = create_schema({ schema => $schema, args => { parser_args => { add_fk_index => 0 } } });
foreach my $source (@sources) {
- my $table = $sqlt_schema->get_table($schema->source($source)->from);
+ my $table = get_table($sqlt_schema, $schema, $source);
my @indices = $table->get_indices;
my $index_count = scalar(@indices);
$sqlt->parser('SQL::Translator::Parser::DBIx::Class');
return $sqlt->translate({ data => $schema }) or die $sqlt->error;
}
+
+sub get_table {
+ my ($sqlt_schema, $schema, $source) = @_;
+
+ my $table_name = $schema->source($source)->from;
+ $table_name = $$table_name if ref $table_name;
+
+ return $sqlt_schema->get_table($table_name);
+}
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBIC::SqlMakerTest;
+
+use_ok('DBICTest');
+
+my $schema = DBICTest->init_schema;
+
+BEGIN {
+ eval "use DBD::SQLite";
+ plan $@
+ ? ( skip_all => 'needs DBD::SQLite for testing' )
+ : ( tests => 13 );
+}
+
+my $where_bind = {
+ where => \'name like ?',
+ bind => [ 'Cat%' ],
+};
+
+my $rs;
+
+TODO: {
+ local $TODO = 'bind args order needs fixing (semifor)';
+
+ # First, the simple cases...
+ $rs = $schema->resultset('Artist')->search(
+ { artistid => 1 },
+ $where_bind,
+ );
+
+ is ( $rs->count, 1, 'where/bind combined' );
+
+ $rs= $schema->resultset('Artist')->search({}, $where_bind)
+ ->search({ artistid => 1});
+
+ is ( $rs->count, 1, 'where/bind first' );
+
+ $rs = $schema->resultset('Artist')->search({ artistid => 1})
+ ->search({}, $where_bind);
+
+ is ( $rs->count, 1, 'where/bind last' );
+}
+
+{
+ # More complex cases, based primarily on the Cookbook
+ # "Arbitrary SQL through a custom ResultSource" technique,
+ # which seems to be the only place the bind attribute is
+ # documented. Breaking this technique probably breaks existing
+ # application code.
+ my $source = DBICTest::Artist->result_source_instance;
+ my $new_source = $source->new($source);
+ $new_source->source_name('Complex');
+
+ $new_source->name(\<<'');
+ ( SELECT a.*, cd.cdid AS cdid, cd.title AS title, cd.year AS year
+ FROM artist a
+ JOIN cd ON cd.artist = a.artistid
+ WHERE cd.year = ?)
+
+ $schema->register_extra_source('Complex' => $new_source);
+
+ $rs = $schema->resultset('Complex')->search({}, { bind => [ 1999 ] });
+ is ( $rs->count, 1, 'cookbook arbitrary sql example' );
+
+ $rs = $schema->resultset('Complex')->search({ 'artistid' => 1 }, { bind => [ 1999 ] });
+ is ( $rs->count, 1, '...cookbook + search condition' );
+
+ $rs = $schema->resultset('Complex')->search({}, { bind => [ 1999 ] })
+ ->search({ 'artistid' => 1 });
+ is ( $rs->count, 1, '...cookbook (bind first) + chained search' );
+
+ $rs = $schema->resultset('Complex')->search({}, { bind => [ 1999 ] })->search({}, { where => \"title LIKE ?", bind => [ 'Spoon%' ] });
+ is_same_sql_bind(
+ $rs->as_query,
+ "(SELECT me.artistid, me.name, me.rank, me.charfield FROM (SELECT a.*, cd.cdid AS cdid, cd.title AS title, cd.year AS year FROM artist a JOIN cd ON cd.artist = a.artistid WHERE cd.year = ?) WHERE title LIKE ?)",
+ [
+ [ '!!dummy' => '1999' ],
+ [ '!!dummy' => 'Spoon%' ]
+ ],
+ 'got correct SQL'
+ );
+}
+
+{
+ # More complex cases, based primarily on the Cookbook
+ # "Arbitrary SQL through a custom ResultSource" technique,
+ # which seems to be the only place the bind attribute is
+ # documented. Breaking this technique probably breaks existing
+ # application code.
+
+ $rs = $schema->resultset('CustomSql')->search({}, { bind => [ 1999 ] });
+ is ( $rs->count, 1, 'cookbook arbitrary sql example (in separate file)' );
+
+ $rs = $schema->resultset('CustomSql')->search({ 'artistid' => 1 }, { bind => [ 1999 ] });
+ is ( $rs->count, 1, '...cookbook (in separate file) + search condition' );
+
+ $rs = $schema->resultset('CustomSql')->search({}, { bind => [ 1999 ] })
+ ->search({ 'artistid' => 1 });
+ is ( $rs->count, 1, '...cookbook (bind first, in separate file) + chained search' );
+
+ $rs = $schema->resultset('CustomSql')->search({}, { bind => [ 1999 ] })->search({}, { where => \"title LIKE ?", bind => [ 'Spoon%' ] });
+ is_same_sql_bind(
+ $rs->as_query,
+ "(SELECT me.artistid, me.name, me.rank, me.charfield FROM (SELECT a.*, cd.cdid AS cdid, cd.title AS title, cd.year AS year FROM artist a JOIN cd ON cd.artist = a.artistid WHERE cd.year = ?) WHERE title LIKE ?)",
+ [
+ [ '!!dummy' => '1999' ],
+ [ '!!dummy' => 'Spoon%' ]
+ ],
+ 'got correct SQL (cookbook arbitrary SQL, in separate file)'
+ );
+}
+
+TODO: {
+ local $TODO = 'bind args order needs fixing (semifor)';
+ $rs = $schema->resultset('Complex')->search({}, { bind => [ 1999 ] })
+ ->search({ 'artistid' => 1 }, {
+ where => \'title like ?',
+ bind => [ 'Spoon%' ] });
+ is ( $rs->count, 1, '...cookbook + chained search with extra bind' );
+}
is($row->get_column('bytea'), $big_long_string, "Created the blob correctly.");
}
-TODO: {
- local $TODO =
- 'Passing bind attributes to $sth->bind_param() should be implemented (it only works in $storage->insert ATM)';
-
+{
my $rs = $schema->resultset('BindType')->search({ bytea => $big_long_string });
# search on the bytea column (select)
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+use DBIC::SqlMakerTest;
+
+my $schema = DBICTest->init_schema;
+
+my $rs = $schema->resultset('FourKeys');
+
+sub test_order {
+
+ TODO: {
+ my $args = shift;
+
+ local $TODO = "Not implemented" if $args->{todo};
+
+ lives_ok {
+ is_same_sql_bind(
+ $rs->search(
+ { foo => 'bar' },
+ {
+ order_by => $args->{order_by},
+ having =>
+ [ { read_count => { '>' => 5 } }, \[ 'read_count < ?', 8 ] ]
+ }
+ )->as_query,
+ "(
+ SELECT me.foo, me.bar, me.hello, me.goodbye, me.sensors, me.read_count
+ FROM fourkeys me
+ WHERE ( foo = ? )
+ HAVING read_count > ? OR read_count < ?
+ ORDER BY $args->{order_req}
+ )",
+ [
+ [qw(foo bar)],
+ [qw(read_count 5)],
+ 8,
+ $args->{bind}
+ ? @{ $args->{bind} }
+ : ()
+ ],
+ );
+ };
+ fail('Fail the unfinished is_same_sql_bind') if $@;
+ }
+}
+
+my @tests = (
+ {
+ order_by => \'foo DESC',
+ order_req => 'foo DESC',
+ bind => [],
+ },
+ {
+ order_by => { -asc => 'foo' },
+ order_req => 'foo ASC',
+ bind => [],
+ },
+ {
+ order_by => { -desc => \[ 'colA LIKE ?', 'test' ] },
+ order_req => 'colA LIKE ? DESC',
+ bind => [qw(test)],
+ },
+ {
+ order_by => \[ 'colA LIKE ? DESC', 'test' ],
+ order_req => 'colA LIKE ? DESC',
+ bind => [qw(test)],
+ },
+ {
+ order_by => [
+ { -asc => \['colA'] },
+ { -desc => \[ 'colB LIKE ?', 'test' ] },
+ { -asc => \[ 'colC LIKE ?', 'tost' ] }
+ ],
+ order_req => 'colA ASC, colB LIKE ? DESC, colC LIKE ? ASC',
+ bind => [qw(test tost)],
+ },
+
+ # (mo) this would be really really nice!
+ # (ribasushi) I don't think so, not writing it - patches welcome
+ {
+ order_by => [
+ { -asc => 'colA' },
+ { -desc => { colB => { 'LIKE' => 'test' } } },
+ { -asc => { colC => { 'LIKE' => 'tost' } } }
+ ],
+ order_req => 'colA ASC, colB LIKE ? DESC, colC LIKE ? ASC',
+ bind => [ [ colB => 'test' ], [ colC => 'tost' ] ], # ???
+ todo => 1,
+ },
+ {
+ order_by => { -desc => { colA => { LIKE => 'test' } } },
+ order_req => 'colA LIKE ? DESC',
+ bind => [qw(test)],
+ todo => 1,
+ },
+);
+
+plan( tests => scalar @tests * 2 );
+
+test_order($_) for @tests;
+
use strict;
use Test::More;
+use lib 't/cdbi/testlib';
BEGIN {
eval "use DBIx::Class::CDBICompat;";
#-----------------------------------------------------------------------
package State;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
State->table('State');
State->columns(Essential => qw/Abbreviation Name/);
package City;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
City->table('City');
City->columns(All => qw/Name State Population/);
#-------------------------------------------------------------------------
package CD;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
CD->table('CD');
CD->columns('All' => qw/artist title length/);
plan (skip_all => 'Class::Trigger and DBIx::ContextualFetch required');
next;
}
- eval "use DBD::SQLite";
- plan $@ ? (skip_all => 'needs DBD::SQLite for testing') : (tests => 98);
+ plan tests => 98;
}
INIT {
# Multi-column search
{
- my @films = $blrunner->search_like(title => "Bladerunner%", rating => '15');
+ my @films = $blrunner->search (title => { -like => "Bladerunner%"}, rating => '15');
is @films, 1, "Only one Bladerunner is a 15";
}
ok(!Film->retrieve('Ishtar'), 'Ishtar no longer there');
{
my $deprecated = 0;
- #local $SIG{__WARN__} = sub { $deprecated++ if $_[0] =~ /deprecated/ };
+ local $SIG{__WARN__} = sub { $deprecated++ if $_[0] =~ /deprecated/ };
ok(
Film->delete(Director => 'Elaine May'),
"In fact, delete all films by Elaine May"
);
cmp_ok(Film->search(Director => 'Elaine May'), '==',
0, "0 Films by Elaine May");
- SKIP: {
- skip "No deprecated warnings from compat layer", 1;
- is $deprecated, 1, "Got a deprecated warning";
- }
+ is $deprecated, 0, "No deprecated warnings from compat layer";
}
};
is $@, '', "No problems with deletes";
is($films[0]->id, $gone->id, ' ... the correct one');
# Find all films which were directed by Bob
-@films = Film->search_like('Director', 'Bob %');
+@films = Film->search ( { 'Director' => { -like => 'Bob %' } });
is(scalar @films, 3, 'search with -like returns 3 films');
ok(
eq_array(
BEGIN {
eval "use DBIx::Class::CDBICompat;";
- if ($@) {
- plan (skip_all => 'Class::Trigger and DBIx::ContextualFetch required');
- next;
- }
- eval "use DBD::SQLite";
- plan $@ ? (skip_all => 'needs DBD::SQLite for testing') : (tests => 36);
+ plan $@
+ ? (skip_all => 'Class::Trigger and DBIx::ContextualFetch required')
+ : (tests => 36)
+ ;
}
INIT {
ok($@, $@);
-warning_is {
+warning_like {
Lazy->columns( TEMP => qw(that) );
-} "Declaring column that as TEMP but it already exists";
+} qr/Declaring column that as TEMP but it already exists/;
# Test that create() and update() throws out columns that changed
{
# Now again for inflated values
SKIP: {
- skip "Requires Date::Simple", 5 unless eval "use Date::Simple; 1; ";
+ skip "Requires Date::Simple 3.03", 5 unless eval "use Date::Simple 3.03; 1; ";
Lazy->has_a(
orp => 'Date::Simple',
inflate => sub { Date::Simple->new($_[0] . '-01-01') },
sub Class::DBI::sheep { ok 0; }
}
+# Install the deprecation warning intercept here for the rest of the 08 dev cycle
+local $SIG{__WARN__} = sub {
+ warn @_ unless (DBIx::Class->VERSION < 0.09 and $_[0] =~ /Query returned more than one row/);
+};
+
sub Film::mutator_name {
my ($class, $col) = @_;
return "set_sheep" if lc $col eq "numexplodingsheep";
like $@, qr/film/, "no hasa film";
eval {
- local $SIG{__WARN__} = sub {
- warn @_ unless $_[0] =~ /Query returned more than one row/;
- };
ok my $f = $ac->movie, "hasa movie";
isa_ok $f, "Film";
is $f->id, $bt->id, " - Bad Taste";
my $abigail = eval { Film->create({ title => "Abigail's Party" }) };
like $@, qr/read only/, "Or create new films";
- $sandl->discard_changes;
+ $_->discard_changes for ($naked, $sandl);
}
eval { require Time::Piece::MySQL };
plan skip_all => "Need Time::Piece::MySQL for this test" if $@;
+use lib 't/cdbi/testlib';
eval { require 't/cdbi/testlib/Log.pm' };
plan skip_all => "Need MySQL for this test" if $@;
use strict;
use Test::More;
+use lib 't/cdbi/testlib';
BEGIN {
eval "use DBIx::Class::CDBICompat;";
{
package Thing;
- use base 'DBIx::Class::Test::SQLite';
+ use base 'DBIC::Test::SQLite';
Thing->columns(TEMP => qw[foo bar]);
Thing->columns(All => qw[thing_id yarrow flower]);
package # hide from PAUSE
MyFilm;
- use base 'DBIx::Class::Test::SQLite';
+ use base 'DBIC::Test::SQLite';
use strict;
__PACKAGE__->set_table('Movies');
use strict;
use Test::More;
+use lib 't/cdbi/testlib';
BEGIN {
eval "use DBIx::Class::CDBICompat;";
{
package Thing;
- use base 'DBIx::Class::Test::SQLite';
+ use base 'DBIC::Test::SQLite';
Thing->columns(TEMP => qw[foo bar baz]);
Thing->columns(All => qw[some real stuff]);
use strict;
use Test::More;
+use lib 't/cdbi/testlib';
BEGIN {
eval "use DBIx::Class::CDBICompat;";
{
package Thing;
- use base 'DBIx::Class::Test::SQLite';
+ use base 'DBIC::Test::SQLite';
Thing->columns(All => qw[thing_id this that date]);
}
use strict;
use Test::More;
use Test::Exception;
+use lib 't/cdbi/testlib';
BEGIN {
eval "use DBIx::Class::CDBICompat;";
{
package Thing;
- use base 'DBIx::Class::Test::SQLite';
+ use base 'DBIC::Test::SQLite';
Thing->columns(All => qw[thing_id this that date]);
}
use strict;
use warnings;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
__PACKAGE__->set_table('Actor');
use strict;
use warnings;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
__PACKAGE__->set_table( 'ActorAlias' );
Blurb;
use strict;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
__PACKAGE__->set_table('Blurbs');
__PACKAGE__->columns('Primary', 'Title');
CDBase;
use strict;
-use base qw(DBIx::Class::Test::SQLite);
+use base qw(DBIC::Test::SQLite);
1;
-package DBIx::Class::Test::SQLite;
+package # hide from PAUSE
+ DBIC::Test::SQLite;
=head1 NAME
Director;
use strict;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
__PACKAGE__->set_table('Directors');
__PACKAGE__->columns('All' => qw/ Name Birthday IsInsane /);
package # hide from PAUSE
Film;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
use strict;
__PACKAGE__->set_table('Movies');
package # hide from PAUSE
Lazy;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
use strict;
__PACKAGE__->set_table("Lazy");
use vars qw/$dbh/;
-# temporary, might get switched to the new test framework someday
-my @connect = ("dbi:mysql:test", "", "", { PrintError => 0});
-
+my @connect = (@ENV{map { "DBICTEST_MYSQL_${_}" } qw/DSN USER PASS/}, { PrintError => 0});
$dbh = DBI->connect(@connect) or die DBI->errstr;
my @table;
use base 'MyBase';
+use Date::Simple 3.03;
+
use strict;
__PACKAGE__->set_table();
Order;
use strict;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
__PACKAGE__->set_table('orders');
__PACKAGE__->table_alias('orders');
package OtherThing;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
OtherThing->set_table("other_thing");
OtherThing->columns(All => qw(id));
package Thing;
-use base 'DBIx::Class::Test::SQLite';
+use base 'DBIC::Test::SQLite';
Thing->set_table("thing");
Thing->columns(All => qw(id that_thing));
--- /dev/null
+use strict;
+use warnings;
+
+use lib qw(t/lib);
+
+use Test::More;
+use DBICTest;
+use DBIC::SqlMakerTest;
+use DBIC::DebugObj;
+
+plan tests => 10;
+
+my $schema = DBICTest->init_schema();
+
+# non-collapsing prefetch (no multi prefetches)
+{
+ my $rs = $schema->resultset("CD")
+ ->search_related('tracks',
+ { position => [1,2] },
+ { prefetch => [qw/disc lyrics/], rows => 3, offset => 8 },
+ );
+ is ($rs->all, 2, 'Correct number of objects');
+
+
+ my ($sql, @bind);
+ $schema->storage->debugobj(DBIC::DebugObj->new(\$sql, \@bind));
+ $schema->storage->debug(1);
+
+ is ($rs->count, 2, 'Correct count via count()');
+
+ is_same_sql_bind (
+ $sql,
+ \@bind,
+ 'SELECT COUNT( * )
+ FROM cd me
+ JOIN track tracks ON tracks.cd = me.cdid
+ JOIN cd disc ON disc.cdid = tracks.cd
+ LEFT JOIN lyrics lyrics ON lyrics.track_id = tracks.trackid
+ WHERE ( ( position = ? OR position = ? ) )
+ ',
+ [ qw/'1' '2'/ ],
+ 'count softlimit applied',
+ );
+
+ my $crs = $rs->count_rs;
+ is ($crs->next, 2, 'Correct count via count_rs()');
+
+ is_same_sql_bind (
+ $crs->as_query,
+ '(SELECT COUNT( * )
+ FROM (
+ SELECT tracks.trackid
+ FROM cd me
+ JOIN track tracks ON tracks.cd = me.cdid
+ JOIN cd disc ON disc.cdid = tracks.cd
+ LEFT JOIN lyrics lyrics ON lyrics.track_id = tracks.trackid
+ WHERE ( ( position = ? OR position = ? ) )
+ LIMIT 3 OFFSET 8
+ ) count_subq
+ )',
+ [ [ position => 1 ], [ position => 2 ] ],
+ 'count_rs db-side limit applied',
+ );
+}
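+# Note: per the SQL captured above, count() issues an un-limited COUNT(*) and applies
+# the rows/offset window to the result afterwards (the "softlimit"), while count_rs
+# wraps the limited query in a subselect so the LIMIT/OFFSET runs on the database side.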
+
+# has_many prefetch with limit
+{
+ my $rs = $schema->resultset("Artist")
+ ->search_related('cds',
+ { 'tracks.position' => [1,2] },
+ { prefetch => [qw/tracks artist/], rows => 3, offset => 4 },
+ );
+ is ($rs->all, 1, 'Correct number of objects');
+
+ my ($sql, @bind);
+ $schema->storage->debugobj(DBIC::DebugObj->new(\$sql, \@bind));
+ $schema->storage->debug(1);
+
+ is ($rs->count, 1, 'Correct count via count()');
+
+ is_same_sql_bind (
+ $sql,
+ \@bind,
+ 'SELECT COUNT( * )
+ FROM (
+ SELECT cds.cdid
+ FROM artist me
+ JOIN cd cds ON cds.artist = me.artistid
+ LEFT JOIN track tracks ON tracks.cd = cds.cdid
+ JOIN artist artist ON artist.artistid = cds.artist
+ WHERE tracks.position = ? OR tracks.position = ?
+ GROUP BY cds.cdid
+ ) count_subq
+ ',
+ [ qw/'1' '2'/ ],
+ 'count softlimit applied',
+ );
+
+ my $crs = $rs->count_rs;
+ is ($crs->next, 1, 'Correct count via count_rs()');
+
+ is_same_sql_bind (
+ $crs->as_query,
+ '(SELECT COUNT( * )
+ FROM (
+ SELECT cds.cdid
+ FROM artist me
+ JOIN cd cds ON cds.artist = me.artistid
+ LEFT JOIN track tracks ON tracks.cd = cds.cdid
+ JOIN artist artist ON artist.artistid = cds.artist
+ WHERE tracks.position = ? OR tracks.position = ?
+ GROUP BY cds.cdid
+ LIMIT 3 OFFSET 4
+ ) count_subq
+ )',
+ [ [ 'tracks.position' => 1 ], [ 'tracks.position' => 2 ] ],
+ 'count_rs db-side limit applied',
+ );
+}
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+
+use lib qw(t/lib);
+
+use DBICTest;
+use DBIC::SqlMakerTest;
+
+my $schema = DBICTest->init_schema();
+
+# The tag Blue is assigned to cds 1 2 3 and 5
+# The tag Cheesy is assigned to cds 2 4 and 5
+#
+# This combination should make some interesting group_by's
+#
+my $rs;
+my $in_rs = $schema->resultset('Tag')->search({ tag => [ 'Blue', 'Cheesy' ] });
+
+for my $get_count (
+ sub { shift->count },
+ sub { my $crs = shift->count_rs; isa_ok ($crs, 'DBIx::Class::ResultSetColumn'); $crs->next }
+) {
+ $rs = $schema->resultset('Tag')->search({ tag => 'Blue' });
+ is($get_count->($rs), 4, 'Count without DISTINCT');
+
+ $rs = $schema->resultset('Tag')->search({ tag => [ 'Blue', 'Cheesy' ] }, { group_by => 'tag' });
+ is($get_count->($rs), 2, 'Count with single column group_by');
+
+ $rs = $schema->resultset('Tag')->search({ tag => [ 'Blue', 'Cheesy' ] }, { group_by => 'cd' });
+ is($get_count->($rs), 5, 'Count with another single column group_by');
+
+ $rs = $schema->resultset('Tag')->search({ tag => 'Blue' }, { group_by => [ qw/tag cd/ ]});
+ is($get_count->($rs), 4, 'Count with multiple column group_by');
+
+ $rs = $schema->resultset('Tag')->search({ tag => 'Blue' }, { distinct => 1 });
+ is($get_count->($rs), 4, 'Count with single column distinct');
+
+ $rs = $schema->resultset('Tag')->search({ tag => { -in => $in_rs->get_column('tag')->as_query } });
+ is($get_count->($rs), 7, 'Count with IN subquery');
+
+ $rs = $schema->resultset('Tag')->search({ tag => { -in => $in_rs->get_column('tag')->as_query } }, { group_by => 'tag' });
+ is($get_count->($rs), 2, 'Count with IN subquery with outside group_by');
+
+ $rs = $schema->resultset('Tag')->search({ tag => { -in => $in_rs->get_column('tag')->as_query } }, { distinct => 1 });
+ is($get_count->($rs), 7, 'Count with IN subquery with outside distinct');
+
+ $rs = $schema->resultset('Tag')->search({ tag => { -in => $in_rs->get_column('tag')->as_query } }, { distinct => 1, select => 'tag' });
+ is($get_count->($rs), 2, 'Count with IN subquery with outside distinct on a single column');
+
+ $rs = $schema->resultset('Tag')->search({ tag => { -in => $in_rs->search({}, { group_by => 'tag' })->get_column('tag')->as_query } });
+ is($get_count->($rs), 7, 'Count with IN subquery with single group_by');
+
+ $rs = $schema->resultset('Tag')->search({ tag => { -in => $in_rs->search({}, { group_by => 'cd' })->get_column('tag')->as_query } });
+ is($get_count->($rs), 7, 'Count with IN subquery with another single group_by');
+
+ $rs = $schema->resultset('Tag')->search({ tag => { -in => $in_rs->search({}, { group_by => [ qw/tag cd/ ] })->get_column('tag')->as_query } });
+ is($get_count->($rs), 7, 'Count with IN subquery with multiple group_by');
+
+ $rs = $schema->resultset('Tag')->search({ tag => \"= 'Blue'" });
+ is($get_count->($rs), 4, 'Count without DISTINCT, using literal SQL');
+
+ $rs = $schema->resultset('Tag')->search({ tag => \" IN ('Blue', 'Cheesy')" }, { group_by => 'tag' });
+ is($get_count->($rs), 2, 'Count with literal SQL and single group_by');
+
+ $rs = $schema->resultset('Tag')->search({ tag => \" IN ('Blue', 'Cheesy')" }, { group_by => 'cd' });
+ is($get_count->($rs), 5, 'Count with literal SQL and another single group_by');
+
+ $rs = $schema->resultset('Tag')->search({ tag => \" IN ('Blue', 'Cheesy')" }, { group_by => [ qw/tag cd/ ] });
+ is($get_count->($rs), 7, 'Count with literal SQL and multiple group_by');
+
+ $rs = $schema->resultset('Tag')->search({ tag => 'Blue' }, { '+select' => { max => 'tagid' }, distinct => 1 });
+ is($get_count->($rs), 4, 'Count with +select aggregate');
+
+ $rs = $schema->resultset('Tag')->search({}, { select => 'length(me.tag)', distinct => 1 });
+ is($get_count->($rs), 3, 'Count by distinct function result as select literal');
+}
+
+throws_ok(
+ sub { my $row = $schema->resultset('Tag')->search({}, { select => { distinct => [qw/tag cd/] } })->first },
+ qr/select => { distinct => \.\.\. } syntax is not supported for multiple columns/,
+ 'throw on unsupported syntax'
+);
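+# Note: for multiple columns the supported alternatives are an explicit group_by or the
+# resultset-level distinct => 1 attribute, both of which are exercised in the loop above.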
+
+# make sure distinct+func works
+{
+ my $rs = $schema->resultset('Artist')->search(
+ {},
+ {
+ join => 'cds',
+ distinct => 1,
+ '+select' => [ { count => 'cds.cdid', -as => 'amount_of_cds' } ],
+ '+as' => [qw/num_cds/],
+ order_by => { -desc => 'amount_of_cds' },
+ }
+ );
+
+ is_same_sql_bind (
+ $rs->as_query,
+ '(
+ SELECT me.artistid, me.name, me.rank, me.charfield, COUNT( cds.cdid ) AS amount_of_cds
+ FROM artist me LEFT JOIN cd cds ON cds.artist = me.artistid
+ GROUP BY me.artistid, me.name, me.rank, me.charfield
+ ORDER BY amount_of_cds DESC
+ )',
+ [],
+ );
+
+ is ($rs->next->get_column ('num_cds'), 3, 'Function aliased correctly');
+}
+
+# These two rely on the database to throw an exception. This might not be the case one day. Please revise.
+dies_ok(sub { my $count = $schema->resultset('Tag')->search({}, { '+select' => \'tagid AS tag_id', distinct => 1 })->count }, 'expecting to die');
+
+done_testing;
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+
+use lib qw(t/lib);
+
+use DBICTest;
+
+plan tests => 7;
+
+my $schema = DBICTest->init_schema();
+
+use Data::Dumper;
+
+# add 2 extra artists
+$schema->populate ('Artist', [
+ [qw/name/],
+ [qw/ar_1/],
+ [qw/ar_2/],
+]);
+
+# add 3 extra cds to every artist
+for my $ar ($schema->resultset ('Artist')->all) {
+ for my $cdnum (1 .. 3) {
+ $ar->create_related ('cds', {
+ title => "bogon $cdnum",
+ year => 2000 + $cdnum,
+ });
+ }
+}
+
+my $cds = $schema->resultset ('CD')->search ({}, { group_by => 'artist' } );
+is ($cds->count, 5, 'Resultset collapses to 5 groups');
+
+my ($pg1, $pg2, $pg3) = map { $cds->search_rs ({}, {rows => 2, page => $_}) } (1..3);
+
+for ($pg1, $pg2, $pg3) {
+ is ($_->pager->total_entries, 5, 'Total count via pager correct');
+}
+
+is ($pg1->count, 2, 'First page has 2 groups');
+is ($pg2->count, 2, 'Second page has 2 groups');
+is ($pg3->count, 1, 'Third page has one group remaining');
--- /dev/null
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use Data::Dumper;
+
+use Test::More;
+
+plan ( tests => 1 );
+
+use lib qw(t/lib);
+use DBICTest;
+use DBIC::SqlMakerTest;
+
+my $schema = DBICTest->init_schema();
+
+{
+ my $rs = $schema->resultset("CD")->search(
+ { 'artist.name' => 'Caterwauler McCrae' },
+ { join => [qw/artist/]}
+ );
+ my $squery = $rs->get_column('cdid')->as_query;
+ my $subsel_rs = $schema->resultset("CD")->search( { cdid => { IN => $squery } } );
+ is($subsel_rs->count, $rs->count, 'Subselect on PK got the same row count');
+}
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+
+use lib qw(t/lib);
+
+use DBICTest;
+
+plan tests => 7;
+
+my $schema = DBICTest->init_schema();
+
+my $cds = $schema->resultset("CD")->search({ cdid => 1 }, { join => { cd_to_producer => 'producer' } });
+cmp_ok($cds->count, '>', 1, "extra joins explode entity count");
+
+is (
+ $cds->search({}, { prefetch => 'cd_to_producer' })->count,
+ 1,
+ "Count correct with extra joins collapsed by prefetch"
+);
+
+is (
+ $cds->search({}, { distinct => 1 })->count,
+ 1,
+ "Count correct with requested distinct collapse of main table"
+);
+
+# JOIN and LEFT JOIN issues have at times caused counted rows and fetched rows in the
+# related resultset to come back 1 higher than they should be.
+my $artist=$schema->resultset('Artist')->create({name => 'xxx'});
+is($artist->related_resultset('cds')->count(), 0, "No CDs found for a shiny new artist");
+is(scalar($artist->related_resultset('cds')->all()), 0, "No CDs fetched for a shiny new artist");
+
+my $artist_rs = $schema->resultset('Artist')->search({artistid => $artist->id});
+is($artist_rs->related_resultset('cds')->count(), 0, "No CDs counted for a shiny new artist using a resultset search");
+is(scalar($artist_rs->related_resultset('cds')->all), 0, "No CDs fetched for a shiny new artist using a resultset search");
--- /dev/null
+use strict;
+use warnings;
+
+use lib qw(t/lib);
+
+use Test::More;
+use DBICTest;
+use DBIC::SqlMakerTest;
+
+my $schema = DBICTest->init_schema();
+
+# collapsing prefetch
+{
+ my $rs = $schema->resultset("Artist")
+ ->search_related('cds',
+ { 'tracks.position' => [1,2] },
+ { prefetch => [qw/tracks artist/] },
+ );
+ is ($rs->all, 5, 'Correct number of objects');
+ is ($rs->count, 5, 'Correct count');
+
+ is_same_sql_bind (
+ $rs->count_rs->as_query,
+ '(
+ SELECT COUNT( * )
+ FROM (
+ SELECT cds.cdid
+ FROM artist me
+ JOIN cd cds ON cds.artist = me.artistid
+ LEFT JOIN track tracks ON tracks.cd = cds.cdid
+ JOIN artist artist ON artist.artistid = cds.artist
+ WHERE tracks.position = ? OR tracks.position = ?
+ GROUP BY cds.cdid
+ ) count_subq
+ )',
+ [ map { [ 'tracks.position' => $_ ] } (1, 2) ],
+ );
+}
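+# Note: as the count SQL above shows, a collapsing prefetch is counted by deduplicating
+# the joined rows via a GROUP BY subselect before applying COUNT(*), so the prefetched
+# has_many rows do not inflate the count.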
+
+# collapsing prefetch with distinct
+{
+ my $first_cd = $schema->resultset('Artist')->first->cds->first;
+ $first_cd->update ({
+ genreid => $first_cd->create_related (
+ genre => ({ name => 'vague genre' })
+ )->id
+ });
+
+ my $rs = $schema->resultset("Artist")->search(undef, {distinct => 1})
+ ->search_related('cds')->search_related('genre',
+ { 'genre.name' => { '!=', 'foo' } },
+ { prefetch => q(cds) },
+ );
+ is ($rs->all, 1, 'Correct number of objects');
+ is ($rs->count, 1, 'Correct count');
+
+ is_same_sql_bind (
+ $rs->count_rs->as_query,
+ '(
+ SELECT COUNT( * )
+ FROM (
+ SELECT genre.genreid
+ FROM artist me
+ JOIN cd cds ON cds.artist = me.artistid
+ JOIN genre genre ON genre.genreid = cds.genreid
+ LEFT JOIN cd cds_2 ON cds_2.genreid = genre.genreid
+ WHERE ( genre.name != ? )
+ GROUP BY genre.genreid
+ ) count_subq
+ )',
+ [ [ 'genre.name' => 'foo' ] ],
+ );
+}
+
+# non-collapsing prefetch (no multi prefetches)
+{
+ my $rs = $schema->resultset("CD")
+ ->search_related('tracks',
+ { position => [1,2] },
+ { prefetch => [qw/disc lyrics/] },
+ );
+ is ($rs->all, 10, 'Correct number of objects');
+
+
+ is ($rs->count, 10, 'Correct count');
+
+ is_same_sql_bind (
+ $rs->count_rs->as_query,
+ '(
+ SELECT COUNT( * )
+ FROM cd me
+ JOIN track tracks ON tracks.cd = me.cdid
+ JOIN cd disc ON disc.cdid = tracks.cd
+ LEFT JOIN lyrics lyrics ON lyrics.track_id = tracks.trackid
+ WHERE position = ? OR position = ?
+ )',
+ [ map { [ position => $_ ] } (1, 2) ],
+ );
+}
+
+done_testing;
--- /dev/null
+use Test::More;
+use strict;
+use warnings;
+use lib qw(t/lib);
+use DBICTest;
+
+plan tests => 4;
+
+my $schema = DBICTest->init_schema();
+
+my $ars = $schema->resultset('Artist');
+my $cdrs = $schema->resultset('CD');
+my $cd2pr_rs = $schema->resultset('CD_to_Producer');
+
+# create some custom entries
+$ars->populate ([
+ [qw/artistid name/],
+ [qw/71 a1/],
+ [qw/72 a2/],
+ [qw/73 a3/],
+]);
+
+$cdrs->populate ([
+ [qw/cdid artist title year/],
+ [qw/70 71 delete0 2005/],
+ [qw/71 72 delete1 2005/],
+ [qw/72 72 delete2 2005/],
+ [qw/73 72 delete3 2006/],
+ [qw/74 72 delete4 2007/],
+ [qw/75 73 delete5 2008/],
+]);
+
+my $prod = $schema->resultset('Producer')->create ({ name => 'deleter' });
+my $prod_cd = $cdrs->find (70);
+my $cd2pr = $cd2pr_rs->create ({
+ producer => $prod,
+ cd => $prod_cd,
+});
+
+my $total_cds = $cdrs->count;
+
+# test that delete_related w/o conditions deletes all related records only
+$ars->search ({name => 'a3' })->search_related ('cds')->delete;
+is ($cdrs->count, $total_cds -= 1, 'related delete ok');
+
+my $a2_cds = $ars->search ({ name => 'a2' })->search_related ('cds');
+
+# test that related deletion w/conditions deletes just the matched related records only
+$a2_cds->search ({ year => 2005 })->delete;
+is ($cdrs->count, $total_cds -= 2, 'related + condition delete ok');
+
+# test that related deletion with limit condition works
+$a2_cds->search ({}, { rows => 1})->delete;
+is ($cdrs->count, $total_cds -= 1, 'related + limit delete ok');
+
+TODO: {
+ local $TODO = 'delete_related is based on search_related which is based on search which does not understand object arguments';
+ my $cd2pr_count = $cd2pr_rs->count;
+ $prod_cd->delete_related('cd_to_producer', { producer => $prod } );
+ is ($cd2pr_rs->count, $cd2pr_count -= 1, 'm2m link deleted successfully');
+}
--- /dev/null
+use strict;
+use warnings FATAL => 'all';
+
+use Test::More;
+
+use lib qw(t/lib);
+use DBICTest;
+use DBIC::SqlMakerTest;
+
+plan tests => 8;
+
+my $schema = DBICTest->init_schema();
+my $art_rs = $schema->resultset('Artist');
+my $cdrs = $schema->resultset('CD');
+
+{
+ my $cdrs2 = $cdrs->search({
+ artist_id => { 'in' => $art_rs->search({}, { rows => 1 })->get_column( 'id' )->as_query },
+ });
+
+ is_same_sql_bind(
+ $cdrs2->as_query,
+ "(SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track FROM cd me WHERE artist_id IN ( SELECT id FROM artist me LIMIT 1 ))",
+ [],
+ );
+}
+
+{
+ my $rs = $art_rs->search(
+ {},
+ {
+ 'select' => [
+ $cdrs->search({}, { rows => 1 })->get_column('id')->as_query,
+ ],
+ },
+ );
+
+ is_same_sql_bind(
+ $rs->as_query,
+ "(SELECT (SELECT id FROM cd me LIMIT 1) FROM artist me)",
+ [],
+ );
+}
+
+{
+ my $rs = $art_rs->search(
+ {},
+ {
+ '+select' => [
+ $cdrs->search({}, { rows => 1 })->get_column('id')->as_query,
+ ],
+ },
+ );
+
+ is_same_sql_bind(
+ $rs->as_query,
+ "(SELECT me.artistid, me.name, me.rank, me.charfield, (SELECT id FROM cd me LIMIT 1) FROM artist me)",
+ [],
+ );
+}
+
+# simple from
+{
+ my $rs = $cdrs->search(
+ {},
+ {
+ alias => 'cd2',
+ from => [
+ { cd2 => $cdrs->search({ id => { '>' => 20 } })->as_query },
+ ],
+ },
+ );
+
+ is_same_sql_bind(
+ $rs->as_query,
+ "(SELECT cd2.cdid, cd2.artist, cd2.title, cd2.year, cd2.genreid, cd2.single_track FROM (SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track FROM cd me WHERE ( id > ? ) ) cd2)",
+ [
+ [ 'id', 20 ]
+ ],
+ );
+}
+
+# nested from
+{
+ my $art_rs2 = $schema->resultset('Artist')->search({},
+ {
+ from => [ { 'me' => 'artist' },
+ [ { 'cds' => $cdrs->search({},{ 'select' => [\'me.artist as cds_artist' ]})->as_query },
+ { 'me.artistid' => 'cds_artist' } ] ]
+ });
+
+ is_same_sql_bind(
+ $art_rs2->as_query,
+ "(SELECT me.artistid, me.name, me.rank, me.charfield FROM artist me JOIN (SELECT me.artist as cds_artist FROM cd me) cds ON me.artistid = cds_artist)",
+ []
+ );
+
+
+}
+
+# nested subquery in from
+{
+ my $rs = $cdrs->search(
+ {},
+ {
+ alias => 'cd2',
+ from => [
+ { cd2 => $cdrs->search(
+ { id => { '>' => 20 } },
+ {
+ alias => 'cd3',
+ from => [
+ { cd3 => $cdrs->search( { id => { '<' => 40 } } )->as_query }
+ ],
+ }, )->as_query },
+ ],
+ },
+ );
+
+ is_same_sql_bind(
+ $rs->as_query,
+ "(SELECT cd2.cdid, cd2.artist, cd2.title, cd2.year, cd2.genreid, cd2.single_track
+ FROM
+ (SELECT cd3.cdid,cd3.artist,cd3.title,cd3.year,cd3.genreid,cd3.single_track
+ FROM
+ (SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track
+ FROM cd me WHERE ( id < ? ) ) cd3
+ WHERE ( id > ? ) ) cd2)",
+ [
+ [ 'id', 40 ],
+ [ 'id', 20 ]
+ ],
+ );
+
+}
+
+{
+ my $rs = $cdrs->search({
+ year => {
+ '=' => $cdrs->search(
+ { artistid => { '=' => \'me.artistid' } },
+ { alias => 'inner' }
+ )->get_column('year')->max_rs->as_query,
+ },
+ });
+ is_same_sql_bind(
+ $rs->as_query,
+ "(SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track FROM cd me WHERE year = (SELECT MAX(inner.year) FROM cd inner WHERE artistid = me.artistid))",
+ [],
+ );
+}
+
+{
+ my $rs = $cdrs->search(
+ {},
+ {
+ alias => 'cd2',
+ from => [
+ { cd2 => $cdrs->search({ title => 'Thriller' })->as_query },
+ ],
+ },
+ );
+
+ is_same_sql_bind(
+ $rs->as_query,
+ "(SELECT cd2.cdid, cd2.artist, cd2.title, cd2.year, cd2.genreid, cd2.single_track FROM (SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track FROM cd me WHERE ( title = ? ) ) cd2)",
+ [ [ 'title', 'Thriller' ] ],
+ );
+}
use strict;
-use warnings;
+use warnings;
use Test::More;
+use Test::Exception;
use lib qw(t/lib);
use DBICTest;
eval { require DateTime };
plan skip_all => "Need DateTime for inflation tests" if $@;
-plan tests => 21;
-
-$schema->class('CD')
-#DBICTest::Schema::CD
-->inflate_column( 'year',
+$schema->class('CD') ->inflate_column( 'year',
{ inflate => sub { DateTime->new( year => shift ) },
deflate => sub { shift->year } }
);
-Class::C3->reinitialize;
# inflation test
my $cd = $schema->resultset("CD")->find(3);
ok(!$@, 'set_inflated_column with DateTime object');
$cd->update;
-$cd = $schema->resultset("CD")->find(3);
+$cd = $schema->resultset("CD")->find(3);
is( $cd->year->year, $now->year, 'deflate ok' );
-$cd = $schema->resultset("CD")->find(3);
+$cd = $schema->resultset("CD")->find(3);
my $before_year = $cd->year->year;
eval { $cd->set_inflated_column('year', \'year + 1') };
ok(!$@, 'set_inflated_column to "year + 1"');
$cd->update;
-$cd = $schema->resultset("CD")->find(3);
+TODO: {
+ local $TODO = 'this was left in without a TODO - should it work?';
+
+ lives_ok (sub {
+ $cd->store_inflated_column('year', \'year + 1');
+ is_deeply( $cd->year, \'year + 1', 'deflate ok' );
+ }, 'store_inflated_column to "year + 1"');
+}
+
+$cd = $schema->resultset("CD")->find(3);
is( $cd->year->year, $before_year+1, 'deflate ok' );
# store_inflated_column test
-$cd = $schema->resultset("CD")->find(3);
+$cd = $schema->resultset("CD")->find(3);
eval { $cd->store_inflated_column('year', $now) };
ok(!$@, 'store_inflated_column with DateTime object');
$cd->update;
is( $cd->year->year, $now->year, 'deflate ok' );
# update tests
-$cd = $schema->resultset("CD")->find(3);
+$cd = $schema->resultset("CD")->find(3);
eval { $cd->update({'year' => $now}) };
ok(!$@, 'update using DateTime object ok');
is($cd->year->year, $now->year, 'deflate ok');
-$cd = $schema->resultset("CD")->find(3);
+$cd = $schema->resultset("CD")->find(3);
$before_year = $cd->year->year;
eval { $cd->update({'year' => \'year + 1'}) };
ok(!$@, 'update using scalarref ok');
-$cd = $schema->resultset("CD")->find(3);
+$cd = $schema->resultset("CD")->find(3);
is($cd->year->year, $before_year + 1, 'deflate ok');
# discard_changes test
-$cd = $schema->resultset("CD")->find(3);
+$cd = $schema->resultset("CD")->find(3);
# inflate the year
$before_year = $cd->year->year;
$cd->update({ year => \'year + 1'});
my $copy = $cd->copy({ year => $now, title => "zemoose" });
isnt( $copy->year->year, $before_year, "copy" );
-
-# eval { $cd->store_inflated_column('year', \'year + 1') };
-# print STDERR "ERROR: $@" if($@);
-# ok(!$@, 'store_inflated_column to "year + 1"');
-
-# is_deeply( $cd->year, \'year + 1', 'deflate ok' );
+done_testing;
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+eval { require DateTime::Format::SQLite };
+plan $@
+ ? ( skip_all => "Need DateTime::Format::SQLite for DT inflation tests" )
+ : ( tests => 18 )
+;
+
+# inflation test
+my $event = $schema->resultset("Event")->find(1);
+
+isa_ok($event->starts_at, 'DateTime', 'DateTime returned');
+
+# klunky, but makes older Test::More installs happy
+my $starts = $event->starts_at;
+is("$starts", '2006-04-25T22:24:33', 'Correct date/time');
+
+TODO: {
+ local $TODO = "We can't do this yet before 0.09" if DBIx::Class->VERSION < 0.09;
+
+ ok(my $row =
+ $schema->resultset('Event')->search({ starts_at => $starts })->single);
+ is(eval { $row->id }, 1, 'DT in search');
+
+ ok($row =
+ $schema->resultset('Event')->search({ starts_at => { '>=' => $starts } })->single);
+ is(eval { $row->id }, 1, 'DT in search with condition');
+}
+
+# create using DateTime
+my $created = $schema->resultset('Event')->create({
+ starts_at => DateTime->new(year=>2006, month=>6, day=>18),
+ created_on => DateTime->new(year=>2006, month=>6, day=>23)
+});
+my $created_start = $created->starts_at;
+
+isa_ok($created->starts_at, 'DateTime', 'DateTime returned');
+is("$created_start", '2006-06-18T00:00:00', 'Correct date/time');
+
+## timestamp field
+isa_ok($event->created_on, 'DateTime', 'DateTime returned');
+
+## varchar fields
+isa_ok($event->varchar_date, 'DateTime', 'DateTime returned');
+isa_ok($event->varchar_datetime, 'DateTime', 'DateTime returned');
+
+## skip inflation field
+isnt(ref($event->skip_inflation), 'DateTime', 'No DateTime returned for skip inflation column');
+
+# klunky, but makes older Test::More installs happy
+my $createo = $event->created_on;
+is("$createo", '2006-06-22T21:00:05', 'Correct date/time');
+
+my $created_cron = $created->created_on;
+
+isa_ok($created->created_on, 'DateTime', 'DateTime returned');
+is("$created_cron", '2006-06-23T00:00:00', 'Correct date/time');
+
+## varchar field using inflate_date => 1
+my $varchar_date = $event->varchar_date;
+is("$varchar_date", '2006-07-23T00:00:00', 'Correct date/time');
+
+## varchar field using inflate_datetime => 1
+my $varchar_datetime = $event->varchar_datetime;
+is("$varchar_datetime", '2006-05-22T19:05:07', 'Correct date/time');
+
+## skip inflation field
+my $skip_inflation = $event->skip_inflation;
+is ("$skip_inflation", '2006-04-21 18:04:06', 'Correct date/time');
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_MSSQL_ODBC_${_}" } qw/DSN USER PASS/};
+
+if (not ($dsn && $user)) {
+ plan skip_all =>
+ 'Set $ENV{DBICTEST_MSSQL_ODBC_DSN}, _USER and _PASS to run this test' .
+ "\nWarning: This test drops and creates a table called 'track'";
+} else {
+ eval "use DateTime; use DateTime::Format::Strptime;";
+ if ($@) {
+ plan skip_all => 'needs DateTime and DateTime::Format::Strptime for testing';
+ }
+ else {
+ plan tests => 4 * 2; # (tests * dt_types)
+ }
+}
+
+my $schema = DBICTest::Schema->clone;
+
+$schema->connection($dsn, $user, $pass);
+$schema->storage->ensure_connected;
+
+# coltype, column, datehash
+my @dt_types = (
+ ['DATETIME',
+ 'last_updated_at',
+ {
+ year => 2004,
+ month => 8,
+ day => 21,
+ hour => 14,
+ minute => 36,
+ second => 48,
+ nanosecond => 500000000,
+ }],
+ ['SMALLDATETIME', # minute precision
+ 'small_dt',
+ {
+ year => 2004,
+ month => 8,
+ day => 21,
+ hour => 14,
+ minute => 36,
+ }],
+);
+
+for my $dt_type (@dt_types) {
+ my ($type, $col, $sample_dt) = @$dt_type;
+
+ eval { $schema->storage->dbh->do("DROP TABLE track") };
+ $schema->storage->dbh->do(<<"SQL");
+CREATE TABLE track (
+ trackid INT IDENTITY PRIMARY KEY,
+ cd INT,
+ position INT,
+ $col $type,
+)
+SQL
+ ok(my $dt = DateTime->new($sample_dt));
+
+ my $row;
+ ok( $row = $schema->resultset('Track')->create({
+ $col => $dt,
+ cd => 1,
+ }));
+ ok( $row = $schema->resultset('Track')
+ ->search({ trackid => $row->trackid }, { select => [$col] })
+ ->first
+ );
+ is( $row->$col, $dt, 'DateTime roundtrip' );
+}
+
+# clean up our mess
+END {
+ if (my $dbh = eval { $schema->storage->_dbh }) {
+ $dbh->do('DROP TABLE track');
+ }
+}
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+use DBICTest::Schema;
+
+{
+ local $SIG{__WARN__} = sub { warn @_ if $_[0] !~ /extra \=\> .+? has been deprecated/ };
+ DBICTest::Schema->load_classes('EventTZ');
+ DBICTest::Schema->load_classes('EventTZDeprecated');
+}
+
+eval { require DateTime::Format::MySQL };
+plan $@
+ ? ( skip_all => "Need DateTime::Format::MySQL for inflation tests")
+ : ( tests => 33 )
+;
+
+my $schema = DBICTest->init_schema();
+
+# Test "timezone" parameter
+foreach my $tbl (qw/EventTZ EventTZDeprecated/) {
+ my $event_tz = $schema->resultset($tbl)->create({
+ starts_at => DateTime->new(year=>2007, month=>12, day=>31, time_zone => "America/Chicago" ),
+ created_on => DateTime->new(year=>2006, month=>1, day=>31,
+ hour => 13, minute => 34, second => 56, time_zone => "America/New_York" ),
+ });
+
+ is ($event_tz->starts_at->day_name, "Montag", 'Locale de_DE loaded: day_name');
+ is ($event_tz->starts_at->month_name, "Dezember", 'Locale de_DE loaded: month_name');
+ is ($event_tz->created_on->day_name, "Tuesday", 'Default locale loaded: day_name');
+ is ($event_tz->created_on->month_name, "January", 'Default locale loaded: month_name');
+
+ my $starts_at = $event_tz->starts_at;
+ is("$starts_at", '2007-12-31T00:00:00', 'Correct date/time using timezone');
+
+ my $created_on = $event_tz->created_on;
+ is("$created_on", '2006-01-31T12:34:56', 'Correct timestamp using timezone');
+ is($event_tz->created_on->time_zone->name, "America/Chicago", "Correct timezone");
+
+ my $loaded_event = $schema->resultset($tbl)->find( $event_tz->id );
+
+ isa_ok($loaded_event->starts_at, 'DateTime', 'DateTime returned');
+ $starts_at = $loaded_event->starts_at;
+ is("$starts_at", '2007-12-31T00:00:00', 'Loaded correct date/time using timezone');
+ is($starts_at->time_zone->name, 'America/Chicago', 'Correct timezone');
+
+ isa_ok($loaded_event->created_on, 'DateTime', 'DateTime returned');
+ $created_on = $loaded_event->created_on;
+ is("$created_on", '2006-01-31T12:34:56', 'Loaded correct timestamp using timezone');
+ is($created_on->time_zone->name, 'America/Chicago', 'Correct timezone');
+
+ # Test floating timezone warning
+ # We expect one warning
+ SKIP: {
+ skip "ENV{DBIC_FLOATING_TZ_OK} was set, skipping", 1 if $ENV{DBIC_FLOATING_TZ_OK};
+ local $SIG{__WARN__} = sub {
+ like(
+ shift,
+ qr/You're using a floating timezone, please see the documentation of DBIx::Class::InflateColumn::DateTime for an explanation/,
+ 'Floating timezone warning'
+ );
+ };
+ my $event_tz_floating = $schema->resultset($tbl)->create({
+ starts_at => DateTime->new(year=>2007, month=>12, day=>31, ),
+ created_on => DateTime->new(year=>2006, month=>1, day=>31,
+ hour => 13, minute => 34, second => 56, ),
+ });
+ delete $SIG{__WARN__};
+ };
+
+ # This should fail to set
+ my $prev_str = "$created_on";
+ $loaded_event->update({ created_on => '0000-00-00' });
+ is("$created_on", $prev_str, "Don't update invalid dates");
+}
+
+# Test invalid DT
+my $invalid = $schema->resultset('EventTZ')->create({
+ starts_at => '0000-00-00',
+ created_on => DateTime->now,
+});
+
+is( $invalid->get_column('starts_at'), '0000-00-00', "Invalid date stored" );
+is( $invalid->starts_at, undef, "Inflate to undef" );
+
+$invalid->created_on('0000-00-00');
+$invalid->update;
+
+throws_ok (
+ sub { $invalid->created_on },
+ qr/invalid date format/i,
+ "Invalid date format exception"
+);
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my ($dsn, $user, $pass) = @ENV{map { "DBICTEST_ORA_${_}" } qw/DSN USER PASS/};
+
+if (not ($dsn && $user && $pass)) {
+ plan skip_all => 'Set $ENV{DBICTEST_ORA_DSN}, _USER and _PASS to run this test. ' .
+ 'Warning: This test drops and creates a table called \'track\'';
+}
+else {
+ eval "use DateTime; use DateTime::Format::Oracle;";
+ if ($@) {
+ plan skip_all => 'needs DateTime and DateTime::Format::Oracle for testing';
+ }
+ else {
+ plan tests => 10;
+ }
+}
+
+# DateTime::Format::Oracle needs this set
+$ENV{NLS_DATE_FORMAT} = 'DD-MON-YY';
+$ENV{NLS_TIMESTAMP_FORMAT} = 'YYYY-MM-DD HH24:MI:SSXFF';
+$ENV{NLS_LANG} = 'AMERICAN_AMERICA.WE8ISO8859P1';
+
+my $schema = DBICTest::Schema->connect($dsn, $user, $pass);
+
+# Need to redefine the last_updated_on column
+my $col_metadata = $schema->class('Track')->column_info('last_updated_on');
+$schema->class('Track')->add_column( 'last_updated_on' => {
+ data_type => 'date' });
+$schema->class('Track')->add_column( 'last_updated_at' => {
+ data_type => 'timestamp' });
+
+my $dbh = $schema->storage->dbh;
+
+#$dbh->do("alter session set nls_timestamp_format = 'YYYY-MM-DD HH24:MI:SSXFF'");
+
+eval {
+ $dbh->do("DROP TABLE track");
+};
+$dbh->do("CREATE TABLE track (trackid NUMBER(12), cd NUMBER(12), position NUMBER(12), title VARCHAR(255), last_updated_on DATE, last_updated_at TIMESTAMP, small_dt DATE)");
+
+# insert a row to play with
+my $new = $schema->resultset('Track')->create({ trackid => 1, cd => 1, position => 1, title => 'Track1', last_updated_on => '06-MAY-07', last_updated_at => '2009-05-03 21:17:18.5' });
+is($new->trackid, 1, "insert successful");
+
+my $track = $schema->resultset('Track')->find( 1 );
+
+is( ref($track->last_updated_on), 'DateTime', "last_updated_on inflated ok");
+
+is( $track->last_updated_on->month, 5, "DateTime methods work on inflated column");
+
+#note '$track->last_updated_at => ', $track->last_updated_at;
+is( ref($track->last_updated_at), 'DateTime', "last_updated_at inflated ok");
+
+is( $track->last_updated_at->nanosecond, 500_000_000, "DateTime methods work with nanosecond precision");
+
+my $dt = DateTime->now();
+$track->last_updated_on($dt);
+$track->last_updated_at($dt);
+$track->update;
+
+is( $track->last_updated_on->month, $dt->month, "deflate ok");
+is( int $track->last_updated_at->nanosecond, int $dt->nanosecond, "deflate ok with nanosecond precision");
+
+# test datetime_setup
+
+$schema->storage->disconnect;
+
+delete $ENV{NLS_DATE_FORMAT};
+delete $ENV{NLS_TIMESTAMP_FORMAT};
+
+$schema->connection($dsn, $user, $pass, {
+ on_connect_call => 'datetime_setup'
+});
+
+$dt = DateTime->now();
+
+my $timestamp = $dt->clone;
+$timestamp->set_nanosecond( int 500_000_000 );
+
+$track = $schema->resultset('Track')->find( 1 );
+$track->update({ last_updated_on => $dt, last_updated_at => $timestamp });
+
+$track = $schema->resultset('Track')->find(1);
+
+is( $track->last_updated_on, $dt, 'DateTime round-trip as DATE' );
+is( $track->last_updated_at, $timestamp, 'DateTime round-trip as TIMESTAMP' );
+
+is( int $track->last_updated_at->nanosecond, int 500_000_000,
+ 'TIMESTAMP nanoseconds survived' );
+
+# clean up our mess
+END {
+ if($schema && ($dbh = $schema->storage->dbh)) {
+ $dbh->do("DROP TABLE track");
+ }
+}
+
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+{
+ local $SIG{__WARN__} = sub { warn @_ if $_[0] !~ /extra \=\> .+? has been deprecated/ };
+ DBICTest::Schema->load_classes('EventTZPg');
+}
+
+eval { require DateTime::Format::Pg };
+plan $@
+ ? ( skip_all => 'Need DateTime::Format::Pg for timestamp inflation tests')
+ : ( tests => 6 )
+;
+
+
+my $schema = DBICTest->init_schema();
+
+{
+ my $event = $schema->resultset("EventTZPg")->find(1);
+ $event->update({created_on => '2009-01-15 17:00:00+00'});
+ $event->discard_changes;
+ isa_ok($event->created_on, "DateTime") or diag $event->created_on;
+ is($event->created_on->time_zone->name, "America/Chicago", "Timezone changed");
+  # Time zone difference -> -6 hours
+ is($event->created_on->iso8601, "2009-01-15T11:00:00", "Time with TZ correct");
+
+# test 'timestamp without time zone'
+ my $dt = DateTime->from_epoch(epoch => time);
+ $dt->set_nanosecond(int 500_000_000);
+ $event->update({ts_without_tz => $dt});
+ $event->discard_changes;
+ isa_ok($event->ts_without_tz, "DateTime") or diag $event->created_on;
+ is($event->ts_without_tz, $dt, 'timestamp without time zone inflation');
+ is($event->ts_without_tz->microsecond, $dt->microsecond,
+ 'timestamp without time zone microseconds survived');
+}
plan tests => 10;
my $rs = $schema->resultset('FileColumn');
-my $fname = '96file_column.t';
-my $source_file = file('t', $fname);
+my $source_file = file(__FILE__);
+my $fname = $source_file->basename;
my $fh = $source_file->open('r') or die "failed to open $source_file: $!\n";
my $fc = eval {
$rs->create({ file => { handle => $fh, filename => $fname } })
use DBICTest;
my $schema = DBICTest->init_schema();
-
# Under some versions of SQLite if the $rs is left hanging around it will lock
# So we create a scope here cos I'm lazy
{
});
$rs_hashrefinf->result_class('DBIx::Class::ResultClass::HashRefInflator');
is_deeply [$rs_hashrefinf->all], \@hashrefinf, 'Check query using extended columns syntax';
+
+# check nested prefetching of has_many relationships which return nothing
+my $artist = $schema->resultset ('Artist')->create ({ name => 'unsuccessful artist without CDs'});
+$artist->discard_changes;
+my $rs_artists = $schema->resultset ('Artist')->search ({ 'me.artistid' => $artist->id}, {
+ prefetch => { cds => 'tracks' }, result_class => 'DBIx::Class::ResultClass::HashRefInflator',
+});
+is_deeply(
+ [$rs_artists->all],
+ [{ $artist->get_columns, cds => [] }],
+ 'nested has_many prefetch without entries'
+);
use Data::Dumper;
my @serializers = (
- { module => 'YAML.pm',
- inflater => sub { YAML::Load (shift) },
- deflater => sub { die "Expecting a reference" unless (ref $_[0]); YAML::Dump (shift) },
+ { module => 'YAML.pm',
+ inflater => sub { YAML::Load (shift) },
+ deflater => sub { die "Expecting a reference" unless (ref $_[0]); YAML::Dump (shift) },
},
- { module => 'Storable.pm',
- inflater => sub { Storable::thaw (shift) },
- deflater => sub { die "Expecting a reference" unless (ref $_[0]); Storable::nfreeze (shift) },
+ { module => 'Storable.pm',
+ inflater => sub { Storable::thaw (shift) },
+ deflater => sub { die "Expecting a reference" unless (ref $_[0]); Storable::nfreeze (shift) },
},
);
foreach my $serializer (@serializers) {
eval { require $serializer->{module} };
unless ($@) {
- $selected = $serializer;
- last;
+ $selected = $serializer;
+ last;
}
}
plan (skip_all => "No suitable serializer found") unless $selected;
-plan (tests => 8);
DBICTest::Schema::Serialized->inflate_column( 'serialized',
{ inflate => $selected->{inflater},
deflate => $selected->{deflater},
my $struct_hash = {
a => 1,
- b => [
+ b => [
{ c => 2 },
],
d => 3,
};
my $struct_array = [
- 'a',
- {
- b => 1,
- c => 2
+ 'a',
+ {
+ b => 1,
+ c => 2,
},
'd',
];
#======= testing hashref serialization
my $object = $rs->create( {
- id => 1,
serialized => '',
} );
ok($object->update( { serialized => $struct_hash } ), 'hashref deflation');
is_deeply($inflated, $struct_hash, 'inflated hash matches original');
$object = $rs->create( {
- id => 2,
serialized => '',
} );
-eval { $object->set_inflated_column('serialized', $struct_hash) };
-ok(!$@, 'set_inflated_column to a hashref');
+$object->set_inflated_column('serialized', $struct_hash);
is_deeply($object->serialized, $struct_hash, 'inflated hash matches original');
+$object = $rs->new({});
+$object->serialized ($struct_hash);
+$object->insert;
+is_deeply (
+ $rs->find ({id => $object->id})->serialized,
+ $struct_hash,
+ 'new/insert works',
+);
#====== testing arrayref serialization
ok($object->update( { serialized => $struct_array } ), 'arrayref deflation');
ok($inflated = $object->serialized, 'arrayref inflation');
is_deeply($inflated, $struct_array, 'inflated array matches original');
+
+$object = $rs->new({});
+$object->serialized ($struct_array);
+$object->insert;
+is_deeply (
+ $rs->find ({id => $object->id})->serialized,
+ $struct_array,
+ 'new/insert works',
+);
+
+#===== make sure make_column_dirty interacts reasonably with inflation
+$object = $rs->first;
+$object->update ({serialized => { x => 'y'}});
+
+$object->serialized->{x} = 'z'; # change state without notifying $object
+ok (!$object->get_dirty_columns, 'no dirty columns yet');
+is_deeply ($object->serialized, { x => 'z' }, 'object data correct');
+
+$object->make_column_dirty('serialized');
+$object->update;
+
+is_deeply ($rs->first->serialized, { x => 'z' }, 'changes made it to the db' );
+
+done_testing;
use strict;
use warnings;
-use base qw/Test::Builder::Module Exporter/;
+use base qw/Exporter/;
+
+use Carp;
+use SQL::Abstract::Test;
our @EXPORT = qw/
- &is_same_sql_bind
- &is_same_sql
- &is_same_bind
- &eq_sql
- &eq_bind
- &eq_sql_bind
+ is_same_sql_bind
+ is_same_sql
+ is_same_bind
+/;
+our @EXPORT_OK = qw/
+ eq_sql
+ eq_bind
+ eq_sql_bind
/;
+sub is_same_sql_bind {
+ # unroll possible as_query arrayrefrefs
+ my @args;
-{
- package DBIC::SqlMakerTest::SQLATest;
-
- # replacement for SQL::Abstract::Test if not available
-
- use strict;
- use warnings;
-
- use base qw/Test::Builder::Module Exporter/;
-
- use Scalar::Util qw(looks_like_number blessed reftype);
- use Data::Dumper;
- use Test::Builder;
- use Test::Deep qw(eq_deeply);
-
- our $tb = __PACKAGE__->builder;
-
- sub is_same_sql_bind
- {
- my ($sql1, $bind_ref1, $sql2, $bind_ref2, $msg) = @_;
-
- my $same_sql = eq_sql($sql1, $sql2);
- my $same_bind = eq_bind($bind_ref1, $bind_ref2);
-
- $tb->ok($same_sql && $same_bind, $msg);
-
- if (!$same_sql) {
- _sql_differ_diag($sql1, $sql2);
- }
- if (!$same_bind) {
- _bind_differ_diag($bind_ref1, $bind_ref2);
- }
- }
-
- sub is_same_sql
- {
- my ($sql1, $sql2, $msg) = @_;
-
- my $same_sql = eq_sql($sql1, $sql2);
-
- $tb->ok($same_sql, $msg);
+ for (1,2) {
+ my $chunk = shift @_;
- if (!$same_sql) {
- _sql_differ_diag($sql1, $sql2);
+ if ( ref $chunk eq 'REF' and ref $$chunk eq 'ARRAY' ) {
+ my ($sql, @bind) = @$$chunk;
+ push @args, ($sql, \@bind);
}
- }
-
- sub is_same_bind
- {
- my ($bind_ref1, $bind_ref2, $msg) = @_;
-
- my $same_bind = eq_bind($bind_ref1, $bind_ref2);
-
- $tb->ok($same_bind, $msg);
-
- if (!$same_bind) {
- _bind_differ_diag($bind_ref1, $bind_ref2);
+ else {
+ push @args, $chunk, shift @_;
}
- }
-
- sub _sql_differ_diag
- {
- my ($sql1, $sql2) = @_;
- $tb->diag("SQL expressions differ\n"
- . " got: $sql1\n"
- . "expected: $sql2\n"
- );
}
- sub _bind_differ_diag
- {
- my ($bind_ref1, $bind_ref2) = @_;
+ push @args, shift @_;
- $tb->diag("BIND values differ\n"
- . " got: " . Dumper($bind_ref1)
- . "expected: " . Dumper($bind_ref2)
- );
- }
-
- sub eq_sql
- {
- my ($left, $right) = @_;
-
- $left =~ s/\s+//g;
- $right =~ s/\s+//g;
-
- return $left eq $right;
- }
-
- sub eq_bind
- {
- my ($bind_ref1, $bind_ref2) = @_;
-
- return eq_deeply($bind_ref1, $bind_ref2);
- }
-
- sub eq_sql_bind
- {
- my ($sql1, $bind_ref1, $sql2, $bind_ref2) = @_;
-
- return eq_sql($sql1, $sql2) && eq_bind($bind_ref1, $bind_ref2);
- }
-}
+ croak "Unexpected argument(s) supplied to is_same_sql_bind: " . join ('; ', @_)
+ if @_;
-eval "use SQL::Abstract::Test;";
-if ($@ eq '') {
- # SQL::Abstract::Test available
-
- *is_same_sql_bind = \&SQL::Abstract::Test::is_same_sql_bind;
- *is_same_sql = \&SQL::Abstract::Test::is_same_sql;
- *is_same_bind = \&SQL::Abstract::Test::is_same_bind;
- *eq_sql = \&SQL::Abstract::Test::eq_sql;
- *eq_bind = \&SQL::Abstract::Test::eq_bind;
- *eq_sql_bind = \&SQL::Abstract::Test::eq_sql_bind;
-} else {
- # old SQL::Abstract
-
- *is_same_sql_bind = \&DBIC::SqlMakerTest::SQLATest::is_same_sql_bind;
- *is_same_sql = \&DBIC::SqlMakerTest::SQLATest::is_same_sql;
- *is_same_bind = \&DBIC::SqlMakerTest::SQLATest::is_same_bind;
- *eq_sql = \&DBIC::SqlMakerTest::SQLATest::eq_sql;
- *eq_bind = \&DBIC::SqlMakerTest::SQLATest::eq_bind;
- *eq_sql_bind = \&DBIC::SqlMakerTest::SQLATest::eq_sql_bind;
+ @_ = @args;
+ goto &SQL::Abstract::Test::is_same_sql_bind;
}
+*is_same_sql = \&SQL::Abstract::Test::is_same_sql;
+*is_same_bind = \&SQL::Abstract::Test::is_same_bind;
+*eq_sql = \&SQL::Abstract::Test::eq_sql;
+*eq_bind = \&SQL::Abstract::Test::eq_bind;
+*eq_sql_bind = \&SQL::Abstract::Test::eq_sql_bind;
1;
Exports functions that can be used to compare generated SQL and bind values.
-If L<SQL::Abstract::Test> (packaged in L<SQL::Abstract> versions 1.50 and
-above) is available, then it is used to perform the comparisons (all functions
-are delegated to id). Otherwise uses simple string comparison for the SQL
-statements and simple L<Data::Dumper>-like recursive stringification for
-comparison of bind values.
-
+This is a thin wrapper around L<SQL::Abstract::Test>, which makes it easier
+to compare as_query sql/bind arrayrefrefs directly.
=head1 FUNCTIONS
=head2 is_same_sql_bind
is_same_sql_bind(
- $given_sql, \@given_bind,
+ $given_sql, \@given_bind,
+ $expected_sql, \@expected_bind,
+ $test_msg
+ );
+
+ is_same_sql_bind(
+    $rs->as_query,
+ $expected_sql, \@expected_bind,
+ $test_msg
+ );
+
+ is_same_sql_bind(
+ \[$given_sql, @given_bind],
$expected_sql, \@expected_bind,
$test_msg
);
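+
+For example (a minimal, illustrative sketch; the exact column list and bind
+structure shown here are assumptions rather than excerpts from the test
+suite), a resultset's C<as_query> can be compared directly against the
+expected SQL and bind values:
+
+  my $rs = $schema->resultset('Artist')->search({ rank => 13 });
+
+  is_same_sql_bind(
+    $rs->as_query,
+    '(SELECT me.artistid, me.name, me.rank, me.charfield FROM artist me WHERE ( rank = ? ))',
+    [ [ rank => 13 ] ],
+    'generated SQL and bind values match expectations',
+  );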
Copyright 2008 by Norbert Buchmuller.
This library is free software; you can redistribute it and/or modify
-it under the same terms as Perl itself.
+it under the same terms as Perl itself.
+++ /dev/null
-package # hide from PAUSE
- DBICNGTest::Schema;
-
- use Moose;
- use Path::Class::File;
- extends 'DBIx::Class::Schema', 'Moose::Object';
-
-
-=head1 NAME
-
-DBICNGTest::Schema; Schema Base For Testing Moose Roles, Traits, etc.
-
-=head1 SYNOPSIS
-
- my $schema = DBICNGTest::Schema->connect($dsn);
-
- ## Do anything you would as with a normal $schema object.
-
-=head1 DESCRIPTION
-
-Defines the base case for loading DBIC Schemas. We add in some additional
-helpful functions for administering you schemas. This namespace is dedicated
-to integration of Moose based development practices.
-
-=head1 PACKAGE METHODS
-
-The following is a list of package methods declared with this class.
-
-=head2 load_namespaces
-
-Automatically load the classes and resultsets from their default namespaces.
-
-=cut
-
-__PACKAGE__->load_namespaces(
- default_resultset_class => 'ResultSet',
-);
-
-
-=head1 ATTRIBUTES
-
-This class defines the following attributes.
-
-=head1 METHODS
-
-This module declares the following methods
-
-=head2 new
-
-overload new to make sure we get a good meta object and that the attributes all
-get properly setup. This is done so that our instances properly get a L<Moose>
-meta class.
-
-=cut
-
-sub new
-{
- my $class = shift @_;
- my $obj = $class->SUPER::new(@_);
-
- return $class->meta->new_object(
- __INSTANCE__ => $obj, @_
- );
-}
-
-
-=head2 connect_and_setup
-
-Creates a schema, deploys a database and sets the testing data.
-
-=cut
-
-sub connect_and_setup {
- my $class = shift @_;
- my $db_file = shift @_;
-
- my ($dsn, $user, $pass) = (
- $ENV{DBICNG_DSN} || "dbi:SQLite:${db_file}",
- $ENV{DBICNG_USER} || '',
- $ENV{DBICNG_PASS} || '',
- );
-
- return $class
- ->connect($dsn, $user, $pass, { AutoCommit => 1 })
- ->setup;
-}
-
-
-=head2 setup
-
-deploy a database and populate it with the initial data
-
-=cut
-
-sub setup {
- my $self = shift @_;
- $self->deploy();
- $self->initial_populate(@_);
-
- return $self;
-}
-
-
-=head2 initial_populate
-
-initializing the startup database information
-
-=cut
-
-sub initial_populate {
- my $self = shift @_;
-
- my @genders = $self->populate('Gender' => [
- [qw(gender_id label)],
- [qw(1 female)],
- [qw(2 male)],
- [qw(3 transgender)],
- ]);
-
- my @persons = $self->populate('Person' => [
- [ qw(person_id fk_gender_id name age) ],
- [ qw(1 1 john 25) ],
- [ qw(2 1 dan 35) ],
- [ qw(3 2 mary 15) ],
- [ qw(4 2 jane 95) ],
- [ qw(5 3 steve 40) ],
- ]);
-
- my @friends = $self->populate('FriendList' => [
- [ qw(fk_person_id fk_friend_id) ],
- [ qw(1 2) ],
- [ qw(1 3) ],
- [ qw(2 3) ],
- [ qw(3 2) ],
- ]);
-}
-
-
-=head2 job_handler_echo
-
-This is a method to test the job handler role.
-
-=cut
-
-sub job_handler_echo {
- my ($schema, $job, $alert) = @_;
- return $alert;
-}
-
-
-=head1 AUTHORS
-
-See L<DBIx::Class> for more information regarding authors.
-
-=head1 LICENSE
-
-You may distribute this code under the same terms as Perl itself.
-
-=cut
-
-
-1;
+++ /dev/null
-package # hide from PAUSE
- DBICNGTest::Schema::Result;
-
- use Moose;
- extends 'DBIx::Class', 'Moose::Object';
-
-=head1 NAME
-
-DBICNGTest::Schema::Result; Base Class for result and class objects
-
-=head1 SYNOPSIS
-
- package DBICNGTest::Schema::Result::Member;
-
- use Moose;
- extends 'DBICNGTest::Schema::Result';
-
- ## Rest of the class definition.
-
-=head1 DESCRIPTION
-
-Defines the base case for loading DBIC Schemas. We add in some additional
-helpful functions for administering you schemas. This namespace is dedicated
-to integration of Moose based development practices
-
-=head1 PACKAGE METHODS
-
-The following is a list of package methods declared with this class.
-
-=head2 load_components
-
-Components to preload.
-
-=cut
-
-__PACKAGE__->load_components(qw/
- PK::Auto
- InflateColumn::DateTime
- Core
-/);
-
-
-=head1 ATTRIBUTES
-
-This class defines the following attributes.
-
-=head1 METHODS
-
-This module declares the following methods
-
-=head2 new
-
-overload new to make sure we get a good meta object and that the attributes all
-get properly setup. This is done so that our instances properly get a L<Moose>
-meta class.
-
-=cut
-
-sub new
-{
- my $class = shift @_;
- my $attrs = shift @_;
-
- my $obj = $class->SUPER::new($attrs);
-
- return $class->meta->new_object(
- __INSTANCE__ => $obj, %$attrs
- );
-}
-
-
-=head1 AUTHORS
-
-See L<DBIx::Class> for more information regarding authors.
-
-=head1 LICENSE
-
-You may distribute this code under the same terms as Perl itself.
-
-=cut
-
-
-1;
\ No newline at end of file
+++ /dev/null
-package #hide from pause
- DBICNGTest::Schema::Result::FriendList;
-
- use Moose;
- extends 'DBICNGTest::Schema::Result';
-
-
-=head1 NAME
-
-Zoomwit::tlib::DBIC::Schema::Result::FriendList; An example Friends Class;
-
-=head1 VERSION
-
-0.01
-
-=cut
-
-our $VERSION = '0.01';
-
-
-=head1 DESCRIPTION
-
-A Person can have zero or more friends
-A Person can't be their own friend
-A Person over 18 can't be friends with Persons under 18 and vis versa.
-A Person can have friendships that are not mutual.
-
-=head1 ATTRIBUTES
-
-This class defines the following attributes.
-
-=head1 PACKAGE METHODS
-
-This module defines the following package methods
-
-=head2 table
-
-Name of the Physical table in the database
-
-=cut
-
-__PACKAGE__
- ->table('friend_list');
-
-
-=head2 add_columns
-
-Add columns and meta information
-
-=head3 fk_person_id
-
-ID of the person with friends
-
-=head3 fk_friend_id
-
-Who is the friend?
-
-=cut
-
-__PACKAGE__
- ->add_columns(
- fk_person_id => {
- data_type=>'integer',
- },
- fk_friend_id => {
- data_type=>'integer',
- },
-);
-
-
-=head2 primary_key
-
-Sets the Primary keys for this table
-
-=cut
-
-__PACKAGE__
- ->set_primary_key(qw/fk_person_id fk_friend_id/);
-
-
-=head2 befriender
-
-The person that 'owns' the friendship (list)
-
-=cut
-
-__PACKAGE__
- ->belongs_to( befriender => 'DBICNGTest::Schema::Result::Person', {
- 'foreign.person_id' => 'self.fk_person_id' });
-
-
-=head2 friendee
-
-The actual friend that befriender is listing
-
-=cut
-
-__PACKAGE__
- ->belongs_to( friendee => 'DBICNGTest::Schema::Result::Person', {
- 'foreign.person_id' => 'self.fk_friend_id' });
-
-
-=head1 METHODS
-
-This module defines the following methods.
-
-=head1 AUTHORS
-
-See L<DBIx::Class> for more information regarding authors.
-
-=head1 LICENSE
-
-You may distribute this code under the same terms as Perl itself.
-
-=cut
-
-
-1;
+++ /dev/null
-package #hide from pause
- DBICNGTest::Schema::Result::Gender;
-
- use Moose;
- extends 'DBICNGTest::Schema::Result';
-
-
-=head1 NAME
-
-DBICNGTest::Schema::Result::Gender; An example Gender Class;
-
-=head1 DESCRIPTION
-
-Tests for this type of FK relationship
-
-=head1 ATTRIBUTES
-
-This class defines the following attributes.
-
-=head2 label
-
-example of using an attribute to add constraints on a table insert
-
-=cut
-
-has 'label' =>(is=>'rw', required=>1, isa=>'Str');
-
-
-=head1 PACKAGE METHODS
-
-This module defines the following package methods
-
-=head2 table
-
-Name of the Physical table in the database
-
-=cut
-
-__PACKAGE__
- ->table('gender');
-
-
-=head2 add_columns
-
-Add columns and meta information
-
-=head3 gender_id
-
-Primary Key which is an auto generated UUID
-
-=head3 label
-
-Text label of the gender (ie, 'male', 'female', 'transgender', etc.).
-
-=cut
-
-__PACKAGE__
- ->add_columns(
- gender_id => {
- data_type=>'integer',
- },
- label => {
- data_type=>'varchar',
- size=>12,
- },
- );
-
-
-=head2 primary_key
-
-Sets the Primary keys for this table
-
-=cut
-
-__PACKAGE__
- ->set_primary_key(qw/gender_id/);
-
-
-=head2
-
-Marks the unique columns
-
-=cut
-
-__PACKAGE__
- ->add_unique_constraint('gender_label_unique' => [ qw/label/ ]);
-
-
-=head2 people
-
-A resultset of people with this gender
-
-=cut
-
-__PACKAGE__
- ->has_many(
- people => 'DBICNGTest::Schema::Result::Person',
- {'foreign.fk_gender_id' => 'self.gender_id'}
- );
-
-
-=head1 METHODS
-
-This module defines the following methods.
-
-=head1 AUTHORS
-
-See L<DBIx::Class> for more information regarding authors.
-
-=head1 LICENSE
-
-You may distribute this code under the same terms as Perl itself.
-
-=cut
-
-
-1;
+++ /dev/null
-package #hide from pause
- DBICNGTest::Schema::Result::Person;
-
- use Moose;
- use DateTime;
- extends 'DBICNGTest::Schema::Result';
-
-
-=head1 NAME
-
-DBICNGTest::Schema::Result::Person; An example Person Class;
-
-=head1 DESCRIPTION
-
-Tests for this type of FK relationship
-
-=head1 ATTRIBUTES
-
-This class defines the following attributes.
-
-=head2 created
-
-attribute for the created column
-
-=cut
-
-has 'created' => (
- is=>'ro',
- isa=>'DateTime',
- required=>1,
- default=>sub {
- DateTime->now;
- },
-);
-
-
-=head1 PACKAGE METHODS
-
-This module defines the following package methods
-
-=head2 table
-
-Name of the Physical table in the database
-
-=cut
-
-__PACKAGE__
- ->table('person');
-
-
-=head2 add_columns
-
-Add columns and meta information
-
-=head3 person_id
-
-Primary Key which is an auto generated autoinc
-
-=head3 fk_gender_id
-
-foreign key to the Gender table
-
-=head3 name
-
-Just an ordinary name
-
-=head3 age
-
-The person's age
-
-head3 created
-
-When the person was added to the database
-
-=cut
-
-__PACKAGE__
- ->add_columns(
- person_id => {
- data_type=>'integer',
- },
- fk_gender_id => {
- data_type=>'integer',
- },
- name => {
- data_type=>'varchar',
- size=>32,
- },
- age => {
- data_type=>'integer',
- default_value=>25,
- },
- created => {
- data_type=>'datetime',
- default_value=>'date("now")',
- });
-
-
-=head2 primary_key
-
-Sets the Primary keys for this table
-
-=cut
-
-__PACKAGE__
- ->set_primary_key(qw/person_id/);
-
-
-=head2 friendlist
-
-Each Person might have a resultset of friendlist
-
-=cut
-
-__PACKAGE__
- ->has_many(
- friendlist => 'DBICNGTest::Schema::Result::FriendList',
- {'foreign.fk_person_id' => 'self.person_id'});
-
-
-=head2 gender
-
-This person's gender
-
-=cut
-
-__PACKAGE__
- ->belongs_to( gender => 'DBICNGTest::Schema::Result::Gender', {
- 'foreign.gender_id' => 'self.fk_gender_id' });
-
-
-=head2 fanlist
-
-A resultset of the people listing me as a friend (if any)
-
-=cut
-
-__PACKAGE__
- ->belongs_to( fanlist => 'DBICNGTest::Schema::Result::FriendList', {
- 'foreign.fk_friend_id' => 'self.person_id' });
-
-
-=head2 friends
-
-A resultset of Persons who are in my FriendList
-
-=cut
-
-__PACKAGE__
- ->many_to_many( friends => 'friendlist', 'friend' );
-
-
-=head2 fans
-
-A resultset of people that have me in their friendlist
-
-=cut
-
-__PACKAGE__
- ->many_to_many( fans => 'fanlist', 'befriender' );
-
-
-=head1 METHODS
-
-This module defines the following methods.
-
-=head1 AUTHORS
-
-See L<DBIx::Class> for more information regarding authors.
-
-=head1 LICENSE
-
-You may distribute this code under the same terms as Perl itself.
-
-=cut
-
-
-1;
+++ /dev/null
-package # hide from PAUSE
- DBICNGTest::Schema::ResultSet;
-
- use Moose;
- extends 'DBIx::Class::ResultSet', 'Moose::Object';
-
-=head1 NAME
-
-DBICNGTest::Schema::ResultSet; A base ResultSet Class
-
-=head1 SYNOPSIS
-
- package DBICNGTest::Schema::ResultSet::Member;
-
- use Moose;
- extends 'DBICNGTest::Schema::ResultSet';
-
- ## Rest of the class definition.
-
-=head1 DESCRIPTION
-
-All ResultSet classes will inherit from this. This provides some base function
-for all your resultsets and it is also the default resultset if you don't
-bother to declare a custom resultset in the resultset namespace
-
-=head1 PACKAGE METHODS
-
-The following is a list of package methods declared with this class.
-
-=head1 ATTRIBUTES
-
-This class defines the following attributes.
-
-=head1 METHODS
-
-This module declares the following methods
-
-=head2 new
-
-overload new to make sure we get a good meta object and that the attributes all
-get properly setup. This is done so that our instances properly get a L<Moose>
-meta class.
-
-=cut
-
-sub new
-{
- my $class = shift @_;
- my $obj = $class->SUPER::new(@_);
-
- return $class->meta->new_object(
- __INSTANCE__ => $obj, @_
- );
-}
-
-
-=head1 AUTHORS
-
-See L<DBIx::Class> for more information regarding authors.
-
-=head1 LICENSE
-
-You may distribute this code under the same terms as Perl itself.
-
-=cut
-
-
-1;
\ No newline at end of file
+++ /dev/null
-package # hide from pause
- DBICNGTest::Schema::ResultSet::Person;
-
- use Moose;
- extends 'DBICNGTest::Schema::ResultSet';
-
-
-=head1 NAME
-
-DBICNGTest::Schema::ResultSet:Person; Example Resultset
-
-=head1 VERSION
-
-0.01
-
-=cut
-
-our $VERSION = '0.01';
-
-
-=head1 SYNOPSIS
-
- ##Example Usage
-
-See Tests for more example usage.
-
-=head1 DESCRIPTION
-
-Resultset Methods for the Person Source
-
-=head1 ATTRIBUTES
-
-This class defines the following attributes.
-
-=head2 literal
-
-a literal attribute for testing
-
-=cut
-
-has 'literal' => (is=>'ro', isa=>'Str', required=>1, lazy=>1, default=>'hi');
-
-
-=head2 available_genders
-
-A resultset of the genders people can have. Keep in mind this get's run once
-only at the first compile, so it's only good for stuff that doesn't change
-between reboots.
-
-=cut
-
-has 'available_genders' => (
- is=>'ro',
- isa=>'DBICNGTest::Schema::ResultSet',
- required=>1,
- lazy=>1,
- default=> sub {
- shift
- ->result_source
- ->schema
- ->resultset('Gender');
- }
-);
-
-
-=head1 METHODS
-
-This module defines the following methods.
-
-=head2 older_than($int)
-
-Only people over a given age
-
-=cut
-
-sub older_than
-{
- my ($self, $age) = @_;
-
- return $self->search({age=>{'>'=>$age}});
-}
-
-
-=head1 AUTHORS
-
-See L<DBIx::Class> for more information regarding authors.
-
-=head1 LICENSE
-
-You may distribute this code under the same terms as Perl itself.
-
-=cut
-
-
-1;
--- /dev/null
+package DBICNSTest::Bogus::A;
+use base qw/DBIx::Class/;
+__PACKAGE__->load_components(qw/PK::Auto Core/);
+__PACKAGE__->table('a');
+__PACKAGE__->add_columns('a');
+1;
--- /dev/null
+package DBICNSTest::Result::B;
+use base qw/DBIx::Class/;
+__PACKAGE__->load_components(qw/PK::Auto Core/);
+__PACKAGE__->table('b');
+__PACKAGE__->add_columns('b');
+1;
--- /dev/null
+package DBICNSTest::Bogus::Bigos;
+
+1;
use strict;
use warnings;
+use DBICTest::AuthorCheck;
use DBICTest::Schema;
=head1 NAME
my $dbuser = $ENV{"DBICTEST_DBUSER"} || '';
my $dbpass = $ENV{"DBICTEST_DBPASS"} || '';
- my @connect_info = ($dsn, $dbuser, $dbpass, { AutoCommit => 1 });
+ my @connect_info = ($dsn, $dbuser, $dbpass, { AutoCommit => 1, %args });
return @connect_info;
}
close IN;
for my $chunk ( split (/;\s*\n+/, $sql) ) {
if ( $chunk =~ / ^ (?! --\s* ) \S /xm ) { # there is some real sql in the chunk - a non-space at the start of the string which is not a comment
- $schema->storage->dbh->do($chunk) or print "Error on SQL: $chunk\n";
+ $schema->storage->dbh_do(sub { $_[1]->do($chunk) }) or print "Error on SQL: $chunk\n";
}
}
}
]);
$schema->populate('Owners', [
- [ qw/ownerid name/ ],
+ [ qw/id name/ ],
[ 1, "Newton" ],
[ 2, "Waltham" ],
]);
--- /dev/null
+package # hide from PAUSE
+ DBICTest::AuthorCheck;
+
+use strict;
+use warnings;
+
+use Path::Class qw/file dir/;
+
+_check_author_makefile() unless $ENV{DBICTEST_NO_MAKEFILE_VERIFICATION};
+
+# Die if the author did not update his makefile
+#
+# This is pretty heavy handed, so the check is pretty solid:
+#
+# 1) Assume that this particular module is loaded from -I <$root>/t/lib
+# 2) Make sure <$root>/Makefile.PL exists
+# 3) Make sure we can stat() <$root>/Makefile.PL
+#
+# If all of the above is satisfied
+#
+# *) die if <$root>/inc does not exist
+# *) die if no stat() results for <$root>/Makefile (covers no Makefile)
+# *) die if Makefile.PL mtime > Makefile mtime
+#
+sub _check_author_makefile {
+
+ my $root = _find_co_root()
+ or return;
+
+ # not using file->stat as it invokes File::stat which in turn breaks stat(_)
+ my ($mf_pl_mtime, $mf_mtime) = ( map
+ { (stat ($root->file ($_)) )[9] }
+ qw/Makefile.PL Makefile/
+ );
+
+ return unless $mf_pl_mtime; # something went wrong during co_root detection ?
+
+ if (
+ not -d $root->subdir ('inc')
+ or
+ not $mf_mtime
+ or
+ $mf_mtime < $mf_pl_mtime
+ ) {
+ print STDERR <<'EOE';
+
+
+
+
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+======================== FATAL ERROR ===========================
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+
+We have a number of reasons to believe that this is a development
+checkout and that you, the user, did not run `perl Makefile.PL`
+before using this code. You absolutely _must_ perform this step,
+as not doing so often results in a lot of wasted time for other
+contributors trying to assist you with "it broke!" problems.
+
+If you are seeing this message unexpectedly (i.e. you are in fact
+attempting a regular installation, be it through CPAN or manually),
+set the variable DBICTEST_NO_MAKEFILE_VERIFICATION to a true value
+so you can continue. Also _make_absolutely_sure_ to report this to
+either the mailing list or the irc channel as described in
+
+http://search.cpan.org/dist/DBIx-Class/lib/DBIx/Class.pm#GETTING_HELP/SUPPORT
+
+Failure to do this will make us believe that all these checks are
+indeed foolproof and we will remove the ability to override this
+entirely.
+
+The DBIC team
+
+
+
+EOE
+
+ exit 1;
+ }
+}
+
+# Try to determine the root of a checkout/untar if possible
+# or return undef
+sub _find_co_root {
+
+ my @mod_parts = split /::/, (__PACKAGE__ . '.pm');
+ my $rel_path = join ('/', @mod_parts); # %INC stores paths with / regardless of OS
+
+ return undef unless ($INC{$rel_path});
+
+ # a bit convoluted, but what we do here essentially is:
+ # - get the file name of this particular module
+ # - do 'cd ..' as many times as necessary to get to t/lib/../..
+
+ my $root = dir ($INC{$rel_path});
+ for (1 .. @mod_parts + 2) {
+ $root = $root->parent;
+ }
+
+ return (-f $root->file ('Makefile.PL') )
+ ? $root
+ : undef
+ ;
+}
+
+1;
--- /dev/null
+package #hide from pause
+ DBICTest::BaseResult;
+
+use strict;
+use warnings;
+
+use base qw/DBIx::Class/;
+use DBICTest::BaseResultSet;
+
+__PACKAGE__->load_components (qw/Core/);
+__PACKAGE__->table ('bogus');
+__PACKAGE__->resultset_class ('DBICTest::BaseResultSet');
+
+1;
--- /dev/null
+package #hide from pause
+ DBICTest::BaseResultSet;
+
+use strict;
+use warnings;
+
+use base qw/DBIx::Class::ResultSet/;
+
+sub hri_dump {
+ return shift->search ({}, { result_class => 'DBIx::Class::ResultClass::HashRefInflator' });
+}
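+
+# Hypothetical usage sketch (assuming a connected DBICTest schema); any
+# resultset built from this base class can be flipped to plain hashrefs:
+#
+#   my @rows = $schema->resultset('Artist')->hri_dump->all;
+#   # @rows now holds plain hashrefs instead of row objects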
+
+1;
Tag
Year2000CDs
Year1999CDs
+ CustomSql
+ Money
/,
{ 'DBICTest::Schema' => [qw/
LinerNotes
package # hide from PAUSE
DBICTest::Schema::Artist;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('artist');
__PACKAGE__->source_info({
);
__PACKAGE__->has_many(
- artist_to_artwork => 'DBICTest::Schema::Artwork_to_Artist' => 'artist_id'
+ artwork_to_artist => 'DBICTest::Schema::Artwork_to_Artist' => 'artist_id'
);
-__PACKAGE__->many_to_many('artworks', 'artist_to_artwork', 'artwork');
+__PACKAGE__->many_to_many('artworks', 'artwork_to_artist', 'artwork');
sub sqlt_deploy_hook {
--- /dev/null
+package # hide from PAUSE
+ DBICTest::Schema::ArtistGUID;
+
+use base qw/DBICTest::BaseResult/;
+
+# test MSSQL uniqueidentifier type
+
+__PACKAGE__->table('artist');
+__PACKAGE__->add_columns(
+ 'artistid' => {
+ data_type => 'uniqueidentifier' # auto_nextval not necessary for PK
+ },
+ 'name' => {
+ data_type => 'varchar',
+ size => 100,
+ is_nullable => 1,
+ },
+ rank => {
+ data_type => 'integer',
+ default_value => 13,
+ },
+ charfield => {
+ data_type => 'char',
+ size => 10,
+ is_nullable => 1,
+ },
+ a_guid => {
+ data_type => 'uniqueidentifier',
+ auto_nextval => 1, # necessary here, because not a PK
+ is_nullable => 1,
+ }
+);
+__PACKAGE__->set_primary_key('artistid');
+
+1;
package # hide from PAUSE
DBICTest::Schema::ArtistUndirectedMap;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('artist_undirected_map');
__PACKAGE__->add_columns(
__PACKAGE__->set_primary_key(qw/id1 id2/);
__PACKAGE__->belongs_to( 'artist1', 'DBICTest::Schema::Artist', 'id1', { on_delete => 'RESTRICT', on_update => 'CASCADE'} );
-__PACKAGE__->belongs_to( 'artist2', 'DBICTest::Schema::Artist', 'id2', { on_delete => undef, on_update => 'CASCADE'} );
+__PACKAGE__->belongs_to( 'artist2', 'DBICTest::Schema::Artist', 'id2', { on_delete => undef, on_update => undef} );
__PACKAGE__->has_many(
'mapped_artists', 'DBICTest::Schema::Artist',
[ {'foreign.artistid' => 'self.id1'}, {'foreign.artistid' => 'self.id2'} ],
package # hide from PAUSE
DBICTest::Schema::Artwork;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('cd_artwork');
__PACKAGE__->add_columns(
'cd_id' => {
data_type => 'integer',
+ is_nullable => 0,
},
);
__PACKAGE__->set_primary_key('cd_id');
package # hide from PAUSE
DBICTest::Schema::Artwork_to_Artist;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('artwork_to_artist');
__PACKAGE__->add_columns(
package # hide from PAUSE
DBICTest::Schema::BindType;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('bindtype_test');
package # hide from PAUSE
DBICTest::Schema::Bookmark;
- use base 'DBIx::Class::Core';
+ use base qw/DBICTest::BaseResult/;
use strict;
},
'link' => {
data_type => 'integer',
+ is_nullable => 1,
},
);
__PACKAGE__->set_primary_key('id');
-__PACKAGE__->belongs_to(link => 'DBICTest::Schema::Link' );
+__PACKAGE__->belongs_to(link => 'DBICTest::Schema::Link', 'link', { on_delete => 'SET NULL' } );
1;
-package # hide from PAUSE \r
- DBICTest::Schema::BooksInLibrary;\r
-\r
-use base qw/DBIx::Class::Core/;\r
-\r
-__PACKAGE__->table('books');\r
-__PACKAGE__->add_columns(\r
- 'id' => {\r
- data_type => 'integer',\r
- is_auto_increment => 1,\r
- },\r
- 'source' => {\r
- data_type => 'varchar',\r
- size => '100',\r
- },\r
- 'owner' => {\r
- data_type => 'integer',\r
- },\r
- 'title' => {\r
- data_type => 'varchar',\r
- size => '100',\r
- },\r
- 'price' => {\r
- data_type => 'integer',\r
- is_nullable => 1,\r
- },\r
-);\r
-__PACKAGE__->set_primary_key('id');\r
-\r
-__PACKAGE__->resultset_attributes({where => { source => "Library" } });\r
-\r
-1;\r
+package # hide from PAUSE
+ DBICTest::Schema::BooksInLibrary;
+
+use base qw/DBICTest::BaseResult/;
+
+__PACKAGE__->table('books');
+__PACKAGE__->add_columns(
+ 'id' => {
+ data_type => 'integer',
+ is_auto_increment => 1,
+ },
+ 'source' => {
+ data_type => 'varchar',
+ size => '100',
+ },
+ 'owner' => {
+ data_type => 'integer',
+ },
+ 'title' => {
+ data_type => 'varchar',
+ size => '100',
+ },
+ 'price' => {
+ data_type => 'integer',
+ is_nullable => 1,
+ },
+);
+__PACKAGE__->set_primary_key('id');
+
+__PACKAGE__->resultset_attributes({where => { source => "Library" } });
+
+__PACKAGE__->belongs_to ( owner => 'DBICTest::Schema::Owners', 'owner' );
+
+1;
package # hide from PAUSE
DBICTest::Schema::CD;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
+
+# this tests table name as scalar ref
+# DO NOT REMOVE THE \
+__PACKAGE__->table(\'cd');
-__PACKAGE__->table('cd');
__PACKAGE__->add_columns(
'cdid' => {
data_type => 'integer',
});
# in case this is a single-cd it promotes a track from another cd
-__PACKAGE__->belongs_to( single_track => 'DBICTest::Schema::Track' );
+__PACKAGE__->belongs_to( single_track => 'DBICTest::Schema::Track', 'single_track',
+ { join_type => 'left'}
+);
__PACKAGE__->has_many( tracks => 'DBICTest::Schema::Track' );
__PACKAGE__->has_many(
{ proxy => [ qw/notes/ ] },
);
__PACKAGE__->might_have(artwork => 'DBICTest::Schema::Artwork', 'cd_id');
+__PACKAGE__->has_one(mandatory_artwork => 'DBICTest::Schema::Artwork', 'cd_id');
__PACKAGE__->many_to_many( producers => cd_to_producer => 'producer' );
__PACKAGE__->many_to_many(
package # hide from PAUSE
DBICTest::Schema::CD_to_Producer;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('cd_to_producer');
__PACKAGE__->add_columns(
cd => { data_type => 'integer' },
producer => { data_type => 'integer' },
+ attribute => { data_type => 'integer', is_nullable => 1 },
);
__PACKAGE__->set_primary_key(qw/cd producer/);
-package # hide from PAUSE \r
- DBICTest::Schema::Collection;\r
-\r
-use base qw/DBIx::Class::Core/;\r
-\r
-__PACKAGE__->table('collection');\r
-__PACKAGE__->add_columns(\r
- 'collectionid' => {\r
- data_type => 'integer',\r
- is_auto_increment => 1,\r
- },\r
- 'name' => {\r
- data_type => 'varchar',\r
- size => 100,\r
- },\r
-);\r
-__PACKAGE__->set_primary_key('collectionid');\r
-\r
-__PACKAGE__->has_many( collection_object => "DBICTest::Schema::CollectionObject",\r
- { "foreign.collection" => "self.collectionid" }\r
- );\r
-__PACKAGE__->many_to_many( objects => collection_object => "object" );\r
-__PACKAGE__->many_to_many( pointy_objects => collection_object => "object",\r
- { where => { "object.type" => "pointy" } }\r
- );\r
-__PACKAGE__->many_to_many( round_objects => collection_object => "object",\r
- { where => { "object.type" => "round" } } \r
- );\r
-\r
-1;\r
+package # hide from PAUSE
+ DBICTest::Schema::Collection;
+
+use base qw/DBICTest::BaseResult/;
+
+__PACKAGE__->table('collection');
+__PACKAGE__->add_columns(
+ 'collectionid' => {
+ data_type => 'integer',
+ is_auto_increment => 1,
+ },
+ 'name' => {
+ data_type => 'varchar',
+ size => 100,
+ },
+);
+__PACKAGE__->set_primary_key('collectionid');
+
+__PACKAGE__->has_many( collection_object => "DBICTest::Schema::CollectionObject",
+ { "foreign.collection" => "self.collectionid" }
+ );
+__PACKAGE__->many_to_many( objects => collection_object => "object" );
+__PACKAGE__->many_to_many( pointy_objects => collection_object => "object",
+ { where => { "object.type" => "pointy" } }
+ );
+__PACKAGE__->many_to_many( round_objects => collection_object => "object",
+ { where => { "object.type" => "round" } }
+ );
+
+1;
-package # hide from PAUSE \r
- DBICTest::Schema::CollectionObject;\r
-\r
-use base qw/DBIx::Class::Core/;\r
-\r
-__PACKAGE__->table('collection_object');\r
-__PACKAGE__->add_columns(\r
- 'collection' => {\r
- data_type => 'integer',\r
- },\r
- 'object' => {\r
- data_type => 'integer',\r
- },\r
-);\r
-__PACKAGE__->set_primary_key(qw/collection object/);\r
-\r
-__PACKAGE__->belongs_to( collection => "DBICTest::Schema::Collection",\r
- { "foreign.collectionid" => "self.collection" }\r
- );\r
-__PACKAGE__->belongs_to( object => "DBICTest::Schema::TypedObject",\r
- { "foreign.objectid" => "self.object" }\r
- );\r
-\r
-1;\r
+package # hide from PAUSE
+ DBICTest::Schema::CollectionObject;
+
+use base qw/DBICTest::BaseResult/;
+
+__PACKAGE__->table('collection_object');
+__PACKAGE__->add_columns(
+ 'collection' => {
+ data_type => 'integer',
+ },
+ 'object' => {
+ data_type => 'integer',
+ },
+);
+__PACKAGE__->set_primary_key(qw/collection object/);
+
+__PACKAGE__->belongs_to( collection => "DBICTest::Schema::Collection",
+ { "foreign.collectionid" => "self.collection" }
+ );
+__PACKAGE__->belongs_to( object => "DBICTest::Schema::TypedObject",
+ { "foreign.objectid" => "self.object" }
+ );
+
+1;
--- /dev/null
+package # hide from PAUSE
+ DBICTest::Schema::CustomSql;
+
+use base qw/DBICTest::Schema::Artist/;
+
+__PACKAGE__->table('dummy');
+
+__PACKAGE__->result_source_instance->name(\<<SQL);
+ ( SELECT a.*, cd.cdid AS cdid, cd.title AS title, cd.year AS year
+ FROM artist a
+ JOIN cd ON cd.artist = a.artistid
+ WHERE cd.year = ?)
+SQL
+
+sub sqlt_deploy_hook { $_[1]->schema->drop_table($_[1]) }
+
+1;
package # hide from PAUSE
DBICTest::Schema::Dummy;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
use strict;
use warnings;
package # hide from PAUSE
DBICTest::Schema::Employee;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->load_components(qw( Ordered ));
package # hide from PAUSE
DBICTest::Schema::Encoded;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
use strict;
use warnings;
use strict;
use warnings;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->load_components(qw/InflateColumn::DateTime/);
__PACKAGE__->add_columns(
id => { data_type => 'integer', is_auto_increment => 1 },
- starts_at => { data_type => 'datetime', datetime_undef_if_invalid => 1 },
+ starts_at => { data_type => 'datetime' },
created_on => { data_type => 'timestamp' },
varchar_date => { data_type => 'varchar', inflate_date => 1, size => 20, is_nullable => 1 },
varchar_datetime => { data_type => 'varchar', inflate_datetime => 1, size => 20, is_nullable => 1 },
skip_inflation => { data_type => 'datetime', inflate_datetime => 0, is_nullable => 1 },
+ ts_without_tz => { data_type => 'datetime', is_nullable => 1 }, # used in EventTZPg
);
__PACKAGE__->set_primary_key('id');
use strict;
use warnings;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->load_components(qw/InflateColumn::DateTime/);
__PACKAGE__->add_columns(
id => { data_type => 'integer', is_auto_increment => 1 },
- starts_at => { data_type => 'datetime', extra => { timezone => "America/Chicago", locale => 'de_DE' } },
- created_on => { data_type => 'timestamp', extra => { timezone => "America/Chicago", floating_tz_ok => 1 } },
+ starts_at => { data_type => 'datetime', timezone => "America/Chicago", locale => 'de_DE', datetime_undef_if_invalid => 1 },
+ created_on => { data_type => 'timestamp', timezone => "America/Chicago", floating_tz_ok => 1 },
);
__PACKAGE__->set_primary_key('id');
+sub _datetime_parser {
+ require DateTime::Format::MySQL;
+ DateTime::Format::MySQL->new();
+}
+
1;
--- /dev/null
+package DBICTest::Schema::EventTZDeprecated;
+
+use strict;
+use warnings;
+use base qw/DBICTest::BaseResult/;
+
+__PACKAGE__->load_components(qw/InflateColumn::DateTime/);
+
+__PACKAGE__->table('event');
+
+__PACKAGE__->add_columns(
+ id => { data_type => 'integer', is_auto_increment => 1 },
+ starts_at => { data_type => 'datetime', extra => { timezone => "America/Chicago", locale => 'de_DE' } },
+ created_on => { data_type => 'timestamp', extra => { timezone => "America/Chicago", floating_tz_ok => 1 } },
+);
+
+__PACKAGE__->set_primary_key('id');
+
+sub _datetime_parser {
+ require DateTime::Format::MySQL;
+ DateTime::Format::MySQL->new();
+}
+
+
+1;
--- /dev/null
+package DBICTest::Schema::EventTZPg;
+
+use strict;
+use warnings;
+use base qw/DBICTest::BaseResult/;
+
+__PACKAGE__->load_components(qw/InflateColumn::DateTime/);
+
+__PACKAGE__->table('event');
+
+__PACKAGE__->add_columns(
+ id => { data_type => 'integer', is_auto_increment => 1 },
+ starts_at => { data_type => 'datetime', timezone => "America/Chicago", locale => 'de_DE' },
+ created_on => { data_type => 'timestamp with time zone', timezone => "America/Chicago" },
+ ts_without_tz => { data_type => 'timestamp without time zone' },
+);
+
+__PACKAGE__->set_primary_key('id');
+
+sub _datetime_parser {
+ require DateTime::Format::Pg;
+ DateTime::Format::Pg->new();
+}
+
+1;
use strict;
use warnings;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
use File::Temp qw/tempdir/;
__PACKAGE__->load_components(qw/InflateColumn::File/);
package # hide from PAUSE
DBICTest::Schema::ForceForeign;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('forceforeign');
__PACKAGE__->add_columns(
package # hide from PAUSE
DBICTest::Schema::FourKeys;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('fourkeys');
__PACKAGE__->add_columns(
'bar' => { data_type => 'integer' },
'hello' => { data_type => 'integer' },
'goodbye' => { data_type => 'integer' },
- 'sensors' => { data_type => 'character' },
+ 'sensors' => { data_type => 'character', size => 10 },
+ 'read_count' => { data_type => 'integer', is_nullable => 1 },
);
__PACKAGE__->set_primary_key(qw/foo bar hello goodbye/);
package # hide from PAUSE
DBICTest::Schema::FourKeys_to_TwoKeys;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('fourkeys_to_twokeys');
__PACKAGE__->add_columns(
't_artist' => { data_type => 'integer' },
't_cd' => { data_type => 'integer' },
'autopilot' => { data_type => 'character' },
+ 'pilot_sequence' => { data_type => 'integer', is_nullable => 1 },
);
__PACKAGE__->set_primary_key(
qw/f_foo f_bar f_hello f_goodbye t_artist t_cd/
use strict;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('genre');
__PACKAGE__->add_columns(
__PACKAGE__->has_many (cds => 'DBICTest::Schema::CD', 'genreid');
+__PACKAGE__->has_one (model_cd => 'DBICTest::Schema::CD', 'genreid');
+
1;
package # hide from PAUSE
DBICTest::Schema::Image;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('images');
__PACKAGE__->add_columns(
package # hide from PAUSE
DBICTest::Schema::LinerNotes;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('liner_notes');
__PACKAGE__->add_columns(
package # hide from PAUSE
DBICTest::Schema::Link;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
use strict;
use warnings;
);
__PACKAGE__->set_primary_key('id');
+__PACKAGE__->has_many ( bookmarks => 'DBICTest::Schema::Bookmark', 'link', { cascade_delete => 0 } );
+
use overload '""' => sub { shift->url }, fallback=> 1;
1;
package # hide from PAUSE
DBICTest::Schema::LyricVersion;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('lyric_versions');
__PACKAGE__->add_columns(
package # hide from PAUSE
DBICTest::Schema::Lyrics;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('lyrics');
__PACKAGE__->add_columns(
--- /dev/null
+package # hide from PAUSE
+ DBICTest::Schema::Money;
+
+use base qw/DBICTest::BaseResult/;
+
+__PACKAGE__->table('money_test');
+
+__PACKAGE__->add_columns(
+ 'id' => {
+ data_type => 'integer',
+ is_auto_increment => 1,
+ },
+ 'amount' => {
+ data_type => 'money',
+ is_nullable => 1,
+ },
+);
+
+__PACKAGE__->set_primary_key('id');
+
+1;
package # hide from PAUSE
DBICTest::Schema::NoPrimaryKey;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('noprimarykey');
__PACKAGE__->add_columns(
package # hide from PAUSE
DBICTest::Schema::OneKey;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('onekey');
__PACKAGE__->add_columns(
-package # hide from PAUSE \r
- DBICTest::Schema::Owners;\r
-\r
-use base qw/DBIx::Class::Core/;\r
-\r
-__PACKAGE__->table('owners');\r
-__PACKAGE__->add_columns(\r
- 'ownerid' => {\r
- data_type => 'integer',\r
- is_auto_increment => 1,\r
- },\r
- 'name' => {\r
- data_type => 'varchar',\r
- size => '100',\r
- },\r
-);\r
-__PACKAGE__->set_primary_key('ownerid');\r
-\r
-__PACKAGE__->has_many(books => "DBICTest::Schema::BooksInLibrary", "owner");\r
-\r
-1;\r
+package # hide from PAUSE
+ DBICTest::Schema::Owners;
+
+use base qw/DBICTest::BaseResult/;
+
+__PACKAGE__->table('owners');
+__PACKAGE__->add_columns(
+ 'id' => {
+ data_type => 'integer',
+ is_auto_increment => 1,
+ },
+ 'name' => {
+ data_type => 'varchar',
+ size => '100',
+ },
+);
+__PACKAGE__->set_primary_key('id');
+
+__PACKAGE__->has_many(books => "DBICTest::Schema::BooksInLibrary", "owner");
+
+1;
package # hide from PAUSE
DBICTest::Schema::Producer;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('producer');
__PACKAGE__->add_columns(
-package # hide from PAUSE \r
- DBICTest::Schema::SelfRef;\r
-\r
-use base 'DBIx::Class::Core';\r
-\r
-__PACKAGE__->table('self_ref');\r
-__PACKAGE__->add_columns(\r
- 'id' => {\r
- data_type => 'integer',\r
- is_auto_increment => 1,\r
- },\r
- 'name' => {\r
- data_type => 'varchar',\r
- size => 100,\r
- },\r
-);\r
-__PACKAGE__->set_primary_key('id');\r
-\r
-__PACKAGE__->has_many( aliases => 'DBICTest::Schema::SelfRefAlias' => 'self_ref' );\r
-\r
-1;\r
+package # hide from PAUSE
+ DBICTest::Schema::SelfRef;
+
+use base qw/DBICTest::BaseResult/;
+
+__PACKAGE__->table('self_ref');
+__PACKAGE__->add_columns(
+ 'id' => {
+ data_type => 'integer',
+ is_auto_increment => 1,
+ },
+ 'name' => {
+ data_type => 'varchar',
+ size => 100,
+ },
+);
+__PACKAGE__->set_primary_key('id');
+
+__PACKAGE__->has_many( aliases => 'DBICTest::Schema::SelfRefAlias' => 'self_ref' );
+
+1;
-package # hide from PAUSE \r
- DBICTest::Schema::SelfRefAlias;\r
-\r
-use base 'DBIx::Class::Core';\r
-\r
-__PACKAGE__->table('self_ref_alias');\r
-__PACKAGE__->add_columns(\r
- 'self_ref' => {\r
- data_type => 'integer',\r
- },\r
- 'alias' => {\r
- data_type => 'integer',\r
- },\r
-);\r
-__PACKAGE__->set_primary_key(qw/self_ref alias/);\r
-\r
-__PACKAGE__->belongs_to( self_ref => 'DBICTest::Schema::SelfRef' );\r
-__PACKAGE__->belongs_to( alias => 'DBICTest::Schema::SelfRef' );\r
-\r
-1;\r
+package # hide from PAUSE
+ DBICTest::Schema::SelfRefAlias;
+
+use base qw/DBICTest::BaseResult/;
+
+__PACKAGE__->table('self_ref_alias');
+__PACKAGE__->add_columns(
+ 'self_ref' => {
+ data_type => 'integer',
+ },
+ 'alias' => {
+ data_type => 'integer',
+ },
+);
+__PACKAGE__->set_primary_key(qw/self_ref alias/);
+
+__PACKAGE__->belongs_to( self_ref => 'DBICTest::Schema::SelfRef' );
+__PACKAGE__->belongs_to( alias => 'DBICTest::Schema::SelfRef' );
+
+1;
package # hide from PAUSE
DBICTest::Schema::SequenceTest;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('sequence_test');
__PACKAGE__->source_info({
package # hide from PAUSE
DBICTest::Schema::Serialized;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('serialized');
__PACKAGE__->add_columns(
- 'id' => { data_type => 'integer' },
+ 'id' => { data_type => 'integer', is_auto_increment => 1 },
'serialized' => { data_type => 'text' },
);
__PACKAGE__->set_primary_key('id');
package # hide from PAUSE
DBICTest::Schema::Tag;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('tags');
__PACKAGE__->add_columns(
package # hide from PAUSE
DBICTest::Schema::Track;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->load_components(qw/InflateColumn::DateTime Ordered/);
__PACKAGE__->table('track');
accessor => 'updated_date',
is_nullable => 1
},
+ last_updated_at => {
+ data_type => 'datetime',
+ is_nullable => 1
+ },
+ small_dt => { # for mssql and sybase DT tests
+ data_type => 'smalldatetime',
+ is_nullable => 1
+ },
);
__PACKAGE__->set_primary_key('trackid');
package # hide from PAUSE
DBICTest::Schema::TreeLike;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('treelike');
__PACKAGE__->add_columns(
package # hide from PAUSE
DBICTest::Schema::TwoKeyTreeLike;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('twokeytreelike');
__PACKAGE__->add_columns(
package # hide from PAUSE
DBICTest::Schema::TwoKeys;
-use base 'DBIx::Class::Core';
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('twokeys');
__PACKAGE__->add_columns(
package # hide from PAUSE
DBICTest::Schema::TypedObject;
-use base qw/DBIx::Class::Core/;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table('typed_object');
__PACKAGE__->add_columns(
DBICTest::Schema::Year1999CDs;
## Used in 104view.t
-use base 'DBIx::Class::Core';
-use DBIx::Class::ResultSource::View;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table_class('DBIx::Class::ResultSource::View');
DBICTest::Schema::Year2000CDs;
## Used in 104view.t
-use base 'DBIx::Class::Core';
-use DBIx::Class::ResultSource::View;
+use base qw/DBICTest::BaseResult/;
__PACKAGE__->table_class('DBIx::Class::ResultSource::View');
--
-- Created by SQL::Translator::Producer::SQLite
--- Created on Sun Feb 22 00:15:06 2009
+-- Created on Thu Aug 20 07:47:13 2009
--
charfield char(10)
);
---
--- Table: artist_undirected_map
---
-CREATE TABLE artist_undirected_map (
- id1 integer NOT NULL,
- id2 integer NOT NULL,
- PRIMARY KEY (id1, id2)
-);
-
-CREATE INDEX artist_undirected_map_idx_id1_ ON artist_undirected_map (id1);
-
-CREATE INDEX artist_undirected_map_idx_id2_ ON artist_undirected_map (id2);
-
---
--- Table: cd_artwork
---
-CREATE TABLE cd_artwork (
- cd_id INTEGER PRIMARY KEY NOT NULL
-);
-
-CREATE INDEX cd_artwork_idx_cd_id_cd_artwor ON cd_artwork (cd_id);
-
---
--- Table: artwork_to_artist
---
-CREATE TABLE artwork_to_artist (
- artwork_cd_id integer NOT NULL,
- artist_id integer NOT NULL,
- PRIMARY KEY (artwork_cd_id, artist_id)
-);
-
-CREATE INDEX artwork_to_artist_idx_artist_id_artwork_to_arti ON artwork_to_artist (artist_id);
-
-CREATE INDEX artwork_to_artist_idx_artwork_cd_id_artwork_to_ ON artwork_to_artist (artwork_cd_id);
+CREATE INDEX artist_name_hookidx ON artist (name);
--
-- Table: bindtype_test
);
--
--- Table: bookmark
---
-CREATE TABLE bookmark (
- id INTEGER PRIMARY KEY NOT NULL,
- link integer NOT NULL
-);
-
-CREATE INDEX bookmark_idx_link_bookmark ON bookmark (link);
-
---
--- Table: books
---
-CREATE TABLE books (
- id INTEGER PRIMARY KEY NOT NULL,
- source varchar(100) NOT NULL,
- owner integer NOT NULL,
- title varchar(100) NOT NULL,
- price integer
-);
-
---
--- Table: cd
---
-CREATE TABLE cd (
- cdid INTEGER PRIMARY KEY NOT NULL,
- artist integer NOT NULL,
- title varchar(100) NOT NULL,
- year varchar(100) NOT NULL,
- genreid integer,
- single_track integer
-);
-
-CREATE INDEX cd_idx_artist_cd ON cd (artist);
-
-CREATE INDEX cd_idx_genreid_cd ON cd (genreid);
-
-CREATE INDEX cd_idx_single_track_cd ON cd (single_track);
-
-CREATE UNIQUE INDEX cd_artist_title_cd ON cd (artist, title);
-
---
--- Table: cd_to_producer
---
-CREATE TABLE cd_to_producer (
- cd integer NOT NULL,
- producer integer NOT NULL,
- PRIMARY KEY (cd, producer)
-);
-
-CREATE INDEX cd_to_producer_idx_cd_cd_to_pr ON cd_to_producer (cd);
-
-CREATE INDEX cd_to_producer_idx_producer_cd ON cd_to_producer (producer);
-
---
-- Table: collection
--
CREATE TABLE collection (
);
--
--- Table: collection_object
---
-CREATE TABLE collection_object (
- collection integer NOT NULL,
- object integer NOT NULL,
- PRIMARY KEY (collection, object)
-);
-
-CREATE INDEX collection_object_idx_collection_collection_obj ON collection_object (collection);
-
-CREATE INDEX collection_object_idx_object_c ON collection_object (object);
-
---
-- Table: employee
--
CREATE TABLE employee (
created_on timestamp NOT NULL,
varchar_date varchar(20),
varchar_datetime varchar(20),
- skip_inflation datetime
+ skip_inflation datetime,
+ ts_without_tz datetime
);
--
);
--
--- Table: forceforeign
---
-CREATE TABLE forceforeign (
- artist INTEGER PRIMARY KEY NOT NULL,
- cd integer NOT NULL
-);
-
-CREATE INDEX forceforeign_idx_artist_forcef ON forceforeign (artist);
-
---
-- Table: fourkeys
--
CREATE TABLE fourkeys (
bar integer NOT NULL,
hello integer NOT NULL,
goodbye integer NOT NULL,
- sensors character NOT NULL,
+ sensors character(10) NOT NULL,
+ read_count integer,
PRIMARY KEY (foo, bar, hello, goodbye)
);
--
--- Table: fourkeys_to_twokeys
---
-CREATE TABLE fourkeys_to_twokeys (
- f_foo integer NOT NULL,
- f_bar integer NOT NULL,
- f_hello integer NOT NULL,
- f_goodbye integer NOT NULL,
- t_artist integer NOT NULL,
- t_cd integer NOT NULL,
- autopilot character NOT NULL,
- PRIMARY KEY (f_foo, f_bar, f_hello, f_goodbye, t_artist, t_cd)
-);
-
-CREATE INDEX fourkeys_to_twokeys_idx_f_foo_f_bar_f_hello_f_goodbye_ ON fourkeys_to_twokeys (f_foo, f_bar, f_hello, f_goodbye);
-
-CREATE INDEX fourkeys_to_twokeys_idx_t_artist_t_cd_fourkeys_to ON fourkeys_to_twokeys (t_artist, t_cd);
-
---
-- Table: genre
--
CREATE TABLE genre (
name varchar(100) NOT NULL
);
-CREATE UNIQUE INDEX genre_name_genre ON genre (name);
-
---
--- Table: images
---
-CREATE TABLE images (
- id INTEGER PRIMARY KEY NOT NULL,
- artwork_id integer NOT NULL,
- name varchar(100) NOT NULL,
- data blob
-);
-
-CREATE INDEX images_idx_artwork_id_images ON images (artwork_id);
-
---
--- Table: liner_notes
---
-CREATE TABLE liner_notes (
- liner_id INTEGER PRIMARY KEY NOT NULL,
- notes varchar(100) NOT NULL
-);
-
-CREATE INDEX liner_notes_idx_liner_id_liner ON liner_notes (liner_id);
+CREATE UNIQUE INDEX genre_name ON genre (name);
--
-- Table: link
);
--
--- Table: lyric_versions
+-- Table: money_test
--
-CREATE TABLE lyric_versions (
+CREATE TABLE money_test (
id INTEGER PRIMARY KEY NOT NULL,
- lyric_id integer NOT NULL,
- text varchar(100) NOT NULL
+ amount money
);
-CREATE INDEX lyric_versions_idx_lyric_id_ly ON lyric_versions (lyric_id);
-
---
--- Table: lyrics
---
-CREATE TABLE lyrics (
- lyric_id INTEGER PRIMARY KEY NOT NULL,
- track_id integer NOT NULL
-);
-
-CREATE INDEX lyrics_idx_track_id_lyrics ON lyrics (track_id);
-
--
-- Table: noprimarykey
--
baz integer NOT NULL
);
-CREATE UNIQUE INDEX foo_bar_noprimarykey ON noprimarykey (foo, bar);
+CREATE UNIQUE INDEX foo_bar ON noprimarykey (foo, bar);
--
-- Table: onekey
-- Table: owners
--
CREATE TABLE owners (
- ownerid INTEGER PRIMARY KEY NOT NULL,
+ id INTEGER PRIMARY KEY NOT NULL,
name varchar(100) NOT NULL
);
name varchar(100) NOT NULL
);
-CREATE UNIQUE INDEX prod_name_producer ON producer (name);
+CREATE UNIQUE INDEX prod_name ON producer (name);
--
-- Table: self_ref
);
--
--- Table: self_ref_alias
---
-CREATE TABLE self_ref_alias (
- self_ref integer NOT NULL,
- alias integer NOT NULL,
- PRIMARY KEY (self_ref, alias)
-);
-
-CREATE INDEX self_ref_alias_idx_alias_self_ ON self_ref_alias (alias);
-
-CREATE INDEX self_ref_alias_idx_self_ref_se ON self_ref_alias (self_ref);
-
---
-- Table: sequence_test
--
CREATE TABLE sequence_test (
);
--
--- Table: tags
+-- Table: treelike
--
-CREATE TABLE tags (
- tagid INTEGER PRIMARY KEY NOT NULL,
- cd integer NOT NULL,
- tag varchar(100) NOT NULL
+CREATE TABLE treelike (
+ id INTEGER PRIMARY KEY NOT NULL,
+ parent integer,
+ name varchar(100) NOT NULL
+);
+
+CREATE INDEX treelike_idx_parent ON treelike (parent);
+
+--
+-- Table: twokeytreelike
+--
+CREATE TABLE twokeytreelike (
+ id1 integer NOT NULL,
+ id2 integer NOT NULL,
+ parent1 integer NOT NULL,
+ parent2 integer NOT NULL,
+ name varchar(100) NOT NULL,
+ PRIMARY KEY (id1, id2)
+);
+
+CREATE INDEX twokeytreelike_idx_parent1_parent2 ON twokeytreelike (parent1, parent2);
+
+CREATE UNIQUE INDEX tktlnameunique ON twokeytreelike (name);
+
+--
+-- Table: typed_object
+--
+CREATE TABLE typed_object (
+ objectid INTEGER PRIMARY KEY NOT NULL,
+ type varchar(100) NOT NULL,
+ value varchar(100) NOT NULL
+);
+
+--
+-- Table: artist_undirected_map
+--
+CREATE TABLE artist_undirected_map (
+ id1 integer NOT NULL,
+ id2 integer NOT NULL,
+ PRIMARY KEY (id1, id2)
+);
+
+CREATE INDEX artist_undirected_map_idx_id1 ON artist_undirected_map (id1);
+
+CREATE INDEX artist_undirected_map_idx_id2 ON artist_undirected_map (id2);
+
+--
+-- Table: bookmark
+--
+CREATE TABLE bookmark (
+ id INTEGER PRIMARY KEY NOT NULL,
+ link integer
);
-CREATE INDEX tags_idx_cd_tags ON tags (cd);
+CREATE INDEX bookmark_idx_link ON bookmark (link);
+
+--
+-- Table: books
+--
+CREATE TABLE books (
+ id INTEGER PRIMARY KEY NOT NULL,
+ source varchar(100) NOT NULL,
+ owner integer NOT NULL,
+ title varchar(100) NOT NULL,
+ price integer
+);
+
+CREATE INDEX books_idx_owner ON books (owner);
+
+--
+-- Table: forceforeign
+--
+CREATE TABLE forceforeign (
+ artist INTEGER PRIMARY KEY NOT NULL,
+ cd integer NOT NULL
+);
+
+CREATE INDEX forceforeign_idx_artist ON forceforeign (artist);
+
+--
+-- Table: self_ref_alias
+--
+CREATE TABLE self_ref_alias (
+ self_ref integer NOT NULL,
+ alias integer NOT NULL,
+ PRIMARY KEY (self_ref, alias)
+);
+
+CREATE INDEX self_ref_alias_idx_alias ON self_ref_alias (alias);
+
+CREATE INDEX self_ref_alias_idx_self_ref ON self_ref_alias (self_ref);
--
-- Table: track
cd integer NOT NULL,
position integer NOT NULL,
title varchar(100) NOT NULL,
- last_updated_on datetime
+ last_updated_on datetime,
+ last_updated_at datetime,
+ small_dt smalldatetime
);
-CREATE INDEX track_idx_cd_track ON track (cd);
+CREATE INDEX track_idx_cd ON track (cd);
-CREATE UNIQUE INDEX track_cd_position_track ON track (cd, position);
+CREATE UNIQUE INDEX track_cd_position ON track (cd, position);
-CREATE UNIQUE INDEX track_cd_title_track ON track (cd, title);
+CREATE UNIQUE INDEX track_cd_title ON track (cd, title);
--
--- Table: treelike
+-- Table: cd
--
-CREATE TABLE treelike (
+CREATE TABLE cd (
+ cdid INTEGER PRIMARY KEY NOT NULL,
+ artist integer NOT NULL,
+ title varchar(100) NOT NULL,
+ year varchar(100) NOT NULL,
+ genreid integer,
+ single_track integer
+);
+
+CREATE INDEX cd_idx_artist ON cd (artist);
+
+CREATE INDEX cd_idx_genreid ON cd (genreid);
+
+CREATE INDEX cd_idx_single_track ON cd (single_track);
+
+CREATE UNIQUE INDEX cd_artist_title ON cd (artist, title);
+
+--
+-- Table: collection_object
+--
+CREATE TABLE collection_object (
+ collection integer NOT NULL,
+ object integer NOT NULL,
+ PRIMARY KEY (collection, object)
+);
+
+CREATE INDEX collection_object_idx_collection ON collection_object (collection);
+
+CREATE INDEX collection_object_idx_object ON collection_object (object);
+
+--
+-- Table: lyrics
+--
+CREATE TABLE lyrics (
+ lyric_id INTEGER PRIMARY KEY NOT NULL,
+ track_id integer NOT NULL
+);
+
+CREATE INDEX lyrics_idx_track_id ON lyrics (track_id);
+
+--
+-- Table: cd_artwork
+--
+CREATE TABLE cd_artwork (
+ cd_id INTEGER PRIMARY KEY NOT NULL
+);
+
+CREATE INDEX cd_artwork_idx_cd_id ON cd_artwork (cd_id);
+
+--
+-- Table: liner_notes
+--
+CREATE TABLE liner_notes (
+ liner_id INTEGER PRIMARY KEY NOT NULL,
+ notes varchar(100) NOT NULL
+);
+
+CREATE INDEX liner_notes_idx_liner_id ON liner_notes (liner_id);
+
+--
+-- Table: lyric_versions
+--
+CREATE TABLE lyric_versions (
id INTEGER PRIMARY KEY NOT NULL,
- parent integer,
- name varchar(100) NOT NULL
+ lyric_id integer NOT NULL,
+ text varchar(100) NOT NULL
);
-CREATE INDEX treelike_idx_parent_treelike ON treelike (parent);
+CREATE INDEX lyric_versions_idx_lyric_id ON lyric_versions (lyric_id);
--
--- Table: twokeytreelike
+-- Table: tags
--
-CREATE TABLE twokeytreelike (
- id1 integer NOT NULL,
- id2 integer NOT NULL,
- parent1 integer NOT NULL,
- parent2 integer NOT NULL,
- name varchar(100) NOT NULL,
- PRIMARY KEY (id1, id2)
+CREATE TABLE tags (
+ tagid INTEGER PRIMARY KEY NOT NULL,
+ cd integer NOT NULL,
+ tag varchar(100) NOT NULL
);
-CREATE INDEX twokeytreelike_idx_parent1_parent2_twokeytre ON twokeytreelike (parent1, parent2);
+CREATE INDEX tags_idx_cd ON tags (cd);
-CREATE UNIQUE INDEX tktlnameunique_twokeytreelike ON twokeytreelike (name);
+--
+-- Table: cd_to_producer
+--
+CREATE TABLE cd_to_producer (
+ cd integer NOT NULL,
+ producer integer NOT NULL,
+ attribute integer,
+ PRIMARY KEY (cd, producer)
+);
+
+CREATE INDEX cd_to_producer_idx_cd ON cd_to_producer (cd);
+
+CREATE INDEX cd_to_producer_idx_producer ON cd_to_producer (producer);
+
+--
+-- Table: images
+--
+CREATE TABLE images (
+ id INTEGER PRIMARY KEY NOT NULL,
+ artwork_id integer NOT NULL,
+ name varchar(100) NOT NULL,
+ data blob
+);
+
+CREATE INDEX images_idx_artwork_id ON images (artwork_id);
--
-- Table: twokeys
PRIMARY KEY (artist, cd)
);
-CREATE INDEX twokeys_idx_artist_twokeys ON twokeys (artist);
+CREATE INDEX twokeys_idx_artist ON twokeys (artist);
--
--- Table: typed_object
+-- Table: artwork_to_artist
--
-CREATE TABLE typed_object (
- objectid INTEGER PRIMARY KEY NOT NULL,
- type varchar(100) NOT NULL,
- value varchar(100) NOT NULL
+CREATE TABLE artwork_to_artist (
+ artwork_cd_id integer NOT NULL,
+ artist_id integer NOT NULL,
+ PRIMARY KEY (artwork_cd_id, artist_id)
+);
+
+CREATE INDEX artwork_to_artist_idx_artist_id ON artwork_to_artist (artist_id);
+
+CREATE INDEX artwork_to_artist_idx_artwork_cd_id ON artwork_to_artist (artwork_cd_id);
+
+--
+-- Table: fourkeys_to_twokeys
+--
+CREATE TABLE fourkeys_to_twokeys (
+ f_foo integer NOT NULL,
+ f_bar integer NOT NULL,
+ f_hello integer NOT NULL,
+ f_goodbye integer NOT NULL,
+ t_artist integer NOT NULL,
+ t_cd integer NOT NULL,
+ autopilot character NOT NULL,
+ pilot_sequence integer,
+ PRIMARY KEY (f_foo, f_bar, f_hello, f_goodbye, t_artist, t_cd)
);
+CREATE INDEX fourkeys_to_twokeys_idx_f_foo_f_bar_f_hello_f_goodbye ON fourkeys_to_twokeys (f_foo, f_bar, f_hello, f_goodbye);
+
+CREATE INDEX fourkeys_to_twokeys_idx_t_artist_t_cd ON fourkeys_to_twokeys (t_artist, t_cd);
+
--
-- View: year2000cds
--
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+sub mc_diag { diag (@_) if $ENV{DBIC_MULTICREATE_DEBUG} };
+
+my $schema = DBICTest->init_schema();
+
+mc_diag (<<'DG');
+* Try a diamond multicreate
+
+Artist -> has_many -> Artwork_to_Artist -> belongs_to
+ /
+ belongs_to <- CD <- belongs_to <- Artwork <-/
+ \
+ \-> Artist2
+
+DG
+
+lives_ok (sub {
+ $schema->resultset ('Artist')->create ({
+ name => 'The wooled wolf',
+ artwork_to_artist => [{
+ artwork => {
+ cd => {
+ title => 'Wool explosive',
+ year => 1999,
+ artist => { name => 'The black exploding sheep' },
+ }
+ }
+ }],
+ });
+
+ my $art2 = $schema->resultset ('Artist')->find ({ name => 'The black exploding sheep' });
+ ok ($art2, 'Second artist exists');
+
+ my $cd = $art2->cds->single;
+ is ($cd->title, 'Wool explosive', 'correctly created CD');
+
+ is_deeply (
+ [ $cd->artwork->artists->get_column ('name')->all ],
+ [ 'The wooled wolf' ],
+ 'Artist correctly attached to artwork',
+ );
+
+}, 'Diamond chain creation ok');
+
+done_testing;
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+# For fully intuitive multicreate any relationships in a chain
+# that do not exist for one reason or another should be created,
+# even if the preceeding relationship already exists.
+#
+# To get this to work a minor rewrite of find() is necessary, and
+# more importantly some sort of recursive_insert() call needs to
+# be available. The way things will work then is:
+# *) while traversing the hierarchy code calls find_or_create()
+# *) this in turn calls find(\%nested_dataset)
+# *) this should return not only the existing object, but must
+# also attach all non-existing (in fact maybe existing) related
+# bits of data to it, with in_storage => 0
+# *) then before returning the result of the successful find(), we
+# simply call $obj->recursive_insert and all is dandy
+#
+# Since this will not be a very clean solution, todoifying for the
+# time being until an actual need arises
+#
+# ribasushi
+
+TODO: { my $f = __FILE__; local $TODO = "See comment at top of $f for discussion of the TODO";
+
+{
+ my $counts;
+ $counts->{$_} = $schema->resultset($_)->count for qw/Track CD Genre/;
+
+ lives_ok (sub {
+ my $existing_nogen_cd = $schema->resultset('CD')->search (
+ { 'genre.genreid' => undef },
+ { join => 'genre' },
+ )->first;
+
+ $schema->resultset('Track')->create ({
+ title => 'Sugar-coated',
+ cd => {
+ title => $existing_nogen_cd->title,
+ genre => {
+ name => 'sugar genre',
+ }
+ }
+ });
+
+ is ($schema->resultset('Track')->count, $counts->{Track} + 1, '1 new track');
+ is ($schema->resultset('CD')->count, $counts->{CD}, 'No new cds');
+ is ($schema->resultset('Genre')->count, $counts->{Genre} + 1, '1 new genre');
+
+ is ($existing_nogen_cd->genre->title, 'sugar genre', 'Correct genre assigned to CD');
+ }, 'create() did not throw');
+}
+{
+ my $counts;
+ $counts->{$_} = $schema->resultset($_)->count for qw/Artist CD Producer/;
+
+ lives_ok (sub {
+ my $artist = $schema->resultset('Artist')->first;
+ my $producer = $schema->resultset('Producer')->create ({ name => 'the queen of england' });
+
+ $schema->resultset('CD')->create ({
+ artist => $artist,
+ title => 'queen1',
+ year => 2007,
+ cd_to_producer => [
+ {
+ producer => {
+ name => $producer->name,
+ producer_to_cd => [
+ {
+ cd => {
+ title => 'queen2',
+ year => 2008,
+ artist => $artist,
+ },
+ },
+ ],
+ },
+ },
+ ],
+ });
+
+ is ($schema->resultset('Artist')->count, $counts->{Artist}, 'No new artists');
+ is ($schema->resultset('Producer')->count, $counts->{Producer} + 1, '1 new producers');
+ is ($schema->resultset('CD')->count, $counts->{CD} + 2, '2 new cds');
+
+ is ($producer->cds->count, 2, 'CDs assigned to correct producer');
+ is_deeply (
+ [ $producer->cds->search ({}, { order_by => 'title' })->get_column('title')->all],
+ [ qw/queen1 queen2/ ],
+ 'Correct cd names',
+ );
+ }, 'create() did not throw');
+}
+
+}
+
+done_testing;
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+plan tests => 2;
+
+my $schema = DBICTest->init_schema();
+
+my $track_no_lyrics = $schema->resultset ('Track')
+ ->search ({ 'lyrics.lyric_id' => undef }, { join => 'lyrics' })
+ ->first;
+
+my $lyric = $track_no_lyrics->create_related ('lyrics', {
+ lyric_versions => [
+ { text => 'english doubled' },
+ { text => 'english doubled' },
+ ],
+});
+is ($lyric->lyric_versions->count, 2, "Two identical has_many's created");
+
+
+my $link = $schema->resultset ('Link')->create ({
+ url => 'lolcats!',
+ bookmarks => [
+ {},
+ {},
+ ]
+});
+is ($link->bookmarks->count, 2, "Two identical default-insert has_many's created");
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+plan tests => 8;
+
+my $schema = DBICTest->init_schema();
+
+# Attempt sequential nested find_or_create with autoinc
+# As a side effect re-test nested default create (both the main object and the relation are {})
+my $bookmark_rs = $schema->resultset('Bookmark');
+my $last_bookmark = $bookmark_rs->search ({}, { order_by => { -desc => 'id' }, rows => 1})->single;
+my $last_link = $bookmark_rs->search_related ('link', {}, { order_by => { -desc => 'link.id' }, rows => 1})->single;
+
+# find_or_create a bookmark-link combo with data for a non-existing link
+my $o1 = $bookmark_rs->find_or_create ({ link => { url => 'something-weird' } });
+is ($o1->id, $last_bookmark->id + 1, '1st bookmark ID');
+is ($o1->link->id, $last_link->id + 1, '1st related link ID');
+
+# find_or_create a bookmark-link combo without any data at all (default insert)
+# should extend this test to all available Storage's, and fix them accordingly
+my $o2 = $bookmark_rs->find_or_create ({ link => {} });
+is ($o2->id, $last_bookmark->id + 2, '2nd bookmark ID');
+is ($o2->link->id, $last_link->id + 2, '2nd related link ID');
+
+# make sure the pre-existing link has only one related bookmark
+is ($last_link->bookmarks->count, 1, 'Expecting only 1 bookmark and 1 link, someone mucked with the table!');
+
+# find_or_create a bookmark without any data, but supplying an existing link object
+# should return $last_bookmark
+my $o0 = $bookmark_rs->find_or_create ({ link => $last_link });
+is_deeply ({ $o0->columns}, {$last_bookmark->columns}, 'Correctly identify a row given a relationship');
+
+# inject an additional bookmark and repeat the test
+# should warn and return the first row
+my $o3 = $last_link->create_related ('bookmarks', {});
+is ($o3->id, $last_bookmark->id + 3, '3rd bookmark ID');
+
+local $SIG{__WARN__} = sub { warn @_ unless $_[0] =~ /Query returned more than one row/ };
+my $oX = $bookmark_rs->find_or_create ({ link => $last_link });
+is_deeply ({ $oX->columns}, {$last_bookmark->columns}, 'Correctly identify a row given a relationship');
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+plan tests => 4;
+
+my $schema = DBICTest->init_schema();
+
+lives_ok ( sub {
+
+ my $prod_rs = $schema->resultset ('Producer');
+ my $prod_count = $prod_rs->count;
+
+ my $cd = $schema->resultset('CD')->first;
+ $cd->add_to_producers ({name => 'new m2m producer'});
+
+ is ($prod_rs->count, $prod_count + 1, 'New producer created');
+ ok ($cd->producers->find ({name => 'new m2m producer'}), 'Producer created with correct name');
+
+ my $cd2 = $schema->resultset('CD')->search ( { cdid => { '!=', $cd->cdid } }, {rows => 1} )->single; # retrieve a cd different from the first
+ $cd2->add_to_producers ({name => 'new m2m producer'}); # attach to an existing producer
+ ok ($cd2->producers->find ({name => 'new m2m producer'}), 'Existing producer attached to existing cd');
+
+}, 'Test far-end find_or_create over many_to_many');
+
+1;
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+sub mc_diag { diag (@_) if $ENV{DBIC_MULTICREATE_DEBUG} };
+
+my $schema = DBICTest->init_schema();
+
+mc_diag (<<'DG');
+* Test a multilevel might-have/has_one with a PK == FK in the mid-table
+
+CD -> might have -> Artwork
+ \- has_one -/ \
+ \
+ \-> has_many \
+ --> Artwork_to_Artist
+ /-> has_many /
+ /
+ Artist
+DG
+
+my $rels = {
+ has_one => 'mandatory_artwork',
+ might_have => 'artwork',
+};
+
+for my $type (qw/has_one might_have/) {
+
+ lives_ok (sub {
+
+ my $rel = $rels->{$type};
+ my $cd_title = "Simple test $type cd";
+
+ my $cd = $schema->resultset('CD')->create ({
+ artist => 1,
+ title => $cd_title,
+ year => 2008,
+ $rel => {},
+ });
+
+ isa_ok ($cd, 'DBICTest::CD', 'Main CD object created');
+ is ($cd->title, $cd_title, 'Correct CD title');
+
+ isa_ok ($cd->$rel, 'DBICTest::Artwork', 'Related artwork present');
+ ok ($cd->$rel->in_storage, 'And in storage');
+
+ }, "Simple $type creation");
+}
+
+my $artist_rs = $schema->resultset('Artist');
+for my $type (qw/has_one might_have/) {
+
+ my $rel = $rels->{$type};
+
+ my $cd_title = "Test $type cd";
+ my $artist_names = [ map { "Artist via $type $_" } (1, 2) ];
+
+ my $someartist = $artist_rs->next;
+
+ lives_ok (sub {
+ my $cd = $schema->resultset('CD')->create ({
+ artist => $someartist,
+ title => $cd_title,
+ year => 2008,
+ $rel => {
+ artwork_to_artist => [ map {
+ { artist => { name => $_ } }
+ } (@$artist_names)
+ ]
+ },
+ });
+
+
+ isa_ok ($cd, 'DBICTest::CD', 'Main CD object created');
+ is ($cd->title, $cd_title, 'Correct CD title');
+
+ my $art_obj = $cd->$rel;
+ ok ($art_obj->has_column_loaded ('cd_id'), 'PK/FK present on artwork object');
+ is ($art_obj->artists->count, 2, 'Correct artwork creator count via the new object');
+ is_deeply (
+ [ sort $art_obj->artists->get_column ('name')->all ],
+ $artist_names,
+ 'Artists named correctly when queried via object',
+ );
+
+ my $artwork = $schema->resultset('Artwork')->search (
+ { 'cd.title' => $cd_title },
+ { join => 'cd' },
+ )->single;
+ is ($artwork->artists->count, 2, 'Correct artwork creator count via a new search');
+ is_deeply (
+ [ sort $artwork->artists->get_column ('name')->all ],
+ $artist_names,
+ 'Artists named correctly queried via a new search',
+ );
+ }, "multilevel $type with a PK == FK in the $type/has_many table ok");
+}
+
+done_testing;
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+plan 'no_plan';
+
+my $schema = DBICTest->init_schema();
+
+my $query_stats;
+$schema->storage->debugcb (sub { push @{$query_stats->{$_[0]}}, $_[1] });
+$schema->storage->debug (1);
+
+TODO: {
+ local $TODO = 'This is an optimization task, will wait... a while';
+
+lives_ok (sub {
+ undef $query_stats;
+ $schema->resultset('Artist')->create ({
+ name => 'poor artist',
+ cds => [
+ {
+ title => 'cd1',
+ year => 2001,
+ },
+ {
+ title => 'cd2',
+ year => 2002,
+ },
+ ],
+ });
+
+ is ( @{$query_stats->{INSERT} || []}, 3, 'number of inserts during creation of artist with 2 cds' );
+ is ( @{$query_stats->{SELECT} || []}, 0, 'number of selects during creation of artist with 2 cds' )
+ || $ENV{DBIC_MULTICREATE_DEBUG} && diag join "\n", @{$query_stats->{SELECT} || []};
+});
+
+
+lives_ok (sub {
+ undef $query_stats;
+ $schema->resultset('Artist')->create ({
+ name => 'poorer artist',
+ cds => [
+ {
+ title => 'cd3',
+ year => 2003,
+ genre => { name => 'vague genre' },
+ },
+ {
+ title => 'cd4',
+ year => 2004,
+ genre => { name => 'vague genre' },
+ },
+ ],
+ });
+
+ is ( @{$query_stats->{INSERT} || []}, 4, 'number of inserts during creation of artist with 2 cds, converging on the same genre' );
+ is ( @{$query_stats->{SELECT} || []}, 0, 'number of selects during creation of artist with 2 cds, converging on the same genre' )
+ || $ENV{DBIC_MULTICREATE_DEBUG} && diag join "\n", @{$query_stats->{SELECT} || []};
+});
+
+
+lives_ok (sub {
+ my $genre = $schema->resultset('Genre')->first;
+ undef $query_stats;
+ $schema->resultset('Artist')->create ({
+ name => 'poorest artist',
+ cds => [
+ {
+ title => 'cd5',
+ year => 2005,
+ genre => $genre,
+ },
+ {
+ title => 'cd6',
+ year => 2004,
+ genre => $genre,
+ },
+ ],
+ });
+
+ is ( @{$query_stats->{INSERT} || []}, 3, 'number of inserts during creation of artist with 2 cds, converging on the same existing genre' );
+ is ( @{$query_stats->{SELECT} || []}, 0, 'number of selects during creation of artist with 2 cds, converging on the same existing genre' )
+ || $ENV{DBIC_MULTICREATE_DEBUG} && diag join "\n", @{$query_stats->{SELECT} || []};
+});
+
+
+lives_ok (sub {
+ undef $query_stats;
+ $schema->resultset('Artist')->create ({
+ name => 'poorer than the poorest artist',
+ cds => [
+ {
+ title => 'cd7',
+ year => 2007,
+ cd_to_producer => [
+ {
+ producer => {
+ name => 'jolly producer',
+ producer_to_cd => [
+ {
+ cd => {
+ title => 'cd8',
+ year => 2008,
+ artist => {
+ name => 'poorer than the poorest artist',
+ },
+ },
+ },
+ ],
+ },
+ },
+ ],
+ },
+ ],
+ });
+
+ is ( @{$query_stats->{INSERT} || []}, 6, 'number of inserts during creation of artist->cd->producer->cd->same_artist' );
+ is ( @{$query_stats->{SELECT} || []}, 0, 'number of selects during creation of artist->cd->producer->cd->same_artist' )
+ || $ENV{DBIC_MULTICREATE_DEBUG} && diag join "\n", @{$query_stats->{SELECT} || []};
+});
+
+lives_ok (sub {
+ undef $query_stats;
+ $schema->resultset ('Artist')->find(1)->create_related (cds => {
+ title => 'cd9',
+ year => 2009,
+ cd_to_producer => [
+ {
+ producer => {
+ name => 'jolly producer',
+ producer_to_cd => [
+ {
+ cd => {
+ title => 'cd10',
+ year => 2010,
+ artist => {
+ name => 'poorer than the poorest artist',
+ },
+ },
+ },
+ ],
+ },
+ },
+ ],
+ });
+
+ is ( @{$query_stats->{INSERT} || []}, 4, 'number of inserts during creation of existing_artist->cd->existing_producer->cd->existing_artist2' );
+ is ( @{$query_stats->{SELECT} || []}, 0, 'number of selects during creation of existing_artist->cd->existing_producer->cd->existing_artist2' )
+ || $ENV{DBIC_MULTICREATE_DEBUG} && diag join "\n", @{$query_stats->{SELECT} || []};
+});
+
+lives_ok (sub {
+ undef $query_stats;
+
+ my $artist = $schema->resultset ('Artist')->first;
+ my $producer = $schema->resultset ('Producer')->first;
+
+ $schema->resultset ('CD')->create ({
+ title => 'cd11',
+ year => 2011,
+ artist => $artist,
+ cd_to_producer => [
+ {
+ producer => $producer,
+ },
+ ],
+ });
+
+ is ( @{$query_stats->{INSERT} || []}, 2, 'number of inserts during creation of artist_object->cd->producer_object' );
+ is ( @{$query_stats->{SELECT} || []}, 0, 'number of selects during creation of artist_object->cd->producer_object' )
+ || $ENV{DBIC_MULTICREATE_DEBUG} && diag join "\n", @{$query_stats->{SELECT} || []};
+});
+
+}
+
+1;
use lib qw(t/lib);
use DBICTest;
-plan tests => 93;
+plan tests => 91;
my $schema = DBICTest->init_schema();
}, 'Nested find_or_create');
lives_ok ( sub {
- my $artist2 = $schema->resultset('Artist')->create({
- name => 'Fred 4',
- cds => [
- {
- title => 'Music to code by',
- year => 2007,
- },
- ],
- cds_unordered => [
- {
- title => 'Music to code by',
- year => 2007,
- },
- ]
- });
-
- is($artist2->in_storage, 1, 'artist with duplicate rels inserted okay');
-}, 'Multiple same level has_many create');
-
-lives_ok ( sub {
my $artist = $schema->resultset('Artist')->first;
my $cd_result = $artist->create_related('cds', {
});
- ok( $cd_result && ref $cd_result eq 'DBICTest::CD', "Got Good CD Class");
+ isa_ok( $cd_result, 'DBICTest::CD', "Got Good CD Class");
ok( $cd_result->title eq "TestOneCD1", "Got Expected Title");
my $tracks = $cd_result->tracks;
- ok( ref $tracks eq "DBIx::Class::ResultSet", "Got Expected Tracks ResultSet");
+ isa_ok( $tracks, 'DBIx::Class::ResultSet', 'Got Expected Tracks ResultSet');
foreach my $track ($tracks->all)
{
- ok( $track && ref $track eq 'DBICTest::Track', 'Got Expected Track Class');
+ isa_ok( $track, 'DBICTest::Track', 'Got Expected Track Class');
}
}, 'First create_related pass');
});
- ok( $cd_result && ref $cd_result eq 'DBICTest::CD', "Got Good CD Class");
+ isa_ok( $cd_result, 'DBICTest::CD', "Got Good CD Class");
ok( $cd_result->title eq "TestOneCD2", "Got Expected Title");
ok( $cd_result->notes eq 'I can haz liner notes?', 'Liner notes');
my $tracks = $cd_result->tracks;
- ok( ref $tracks eq "DBIx::Class::ResultSet", "Got Expected Tracks ResultSet");
+ isa_ok( $tracks, 'DBIx::Class::ResultSet', "Got Expected Tracks ResultSet");
foreach my $track ($tracks->all)
{
- ok( $track && ref $track eq 'DBICTest::Track', 'Got Expected Track Class');
+ isa_ok( $track, 'DBICTest::Track', 'Got Expected Track Class');
}
}, 'second create_related with same arguments');
is (
$cds_2012->search(
{ 'tags.tag' => { -in => [qw/A B/] } },
- { join => 'tags', group_by => 'me.cdid' }
+ {
+ join => 'tags',
+ group_by => 'me.cdid',
+ having => 'count(me.cdid) = 2',
+ }
),
5,
'All 10 tags were pairwise distributed between 5 year-2012 CDs'
my $schema = DBICTest->init_schema();
-my $orig_debug = $schema->storage->debug;
-
-use IO::File;
-
-BEGIN {
- eval "use DBD::SQLite";
- plan $@
- ? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 3 );
-}
-
-# figure out if we've got a version of sqlite that is older than 3.2.6, in
-# which case COUNT(DISTINCT()) doesn't work
-my $is_broken_sqlite = 0;
-my ($sqlite_major_ver,$sqlite_minor_ver,$sqlite_patch_ver) =
- split /\./, $schema->storage->dbh->get_info(18);
-if( $schema->storage->dbh->get_info(17) eq 'SQLite' &&
- ( ($sqlite_major_ver < 3) ||
- ($sqlite_major_ver == 3 && $sqlite_minor_ver < 2) ||
- ($sqlite_major_ver == 3 && $sqlite_minor_ver == 2 && $sqlite_patch_ver < 6) ) ) {
- $is_broken_sqlite = 1;
-}
+plan tests => 3;
# bug in 0.07000 caused attr (join/prefetch) to be modifed by search
# so we check the search & attr arrays are not modified
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+use DBIC::SqlMakerTest;
+
+plan tests => 23;
+
+my $schema = DBICTest->init_schema();
+
+my $cd_rs = $schema->resultset('CD')->search (
+ { 'tracks.cd' => { '!=', undef } },
+ { prefetch => ['tracks', 'artist'] },
+);
+
+
+is($cd_rs->count, 5, 'CDs with tracks count');
+is($cd_rs->search_related('tracks')->count, 15, 'Tracks associated with CDs count (before SELECT()ing)');
+
+is($cd_rs->all, 5, 'Amount of CD objects with tracks');
+is($cd_rs->search_related('tracks')->count, 15, 'Tracks associated with CDs count (after SELECT()ing)');
+
+is($cd_rs->search_related ('tracks')->all, 15, 'Track objects associated with CDs (after SELECT()ing)');
+
+my $artist = $schema->resultset('Artist')->create({name => 'xxx'});
+
+my $artist_rs = $schema->resultset('Artist')->search(
+ {artistid => $artist->id},
+ {prefetch=>'cds', join => 'twokeys' }
+);
+
+is($artist_rs->count, 1, "New artist found with prefetch turned on");
+is(scalar($artist_rs->all), 1, "New artist fetched with prefetch turned on");
+is($artist_rs->related_resultset('cds')->count, 0, "No CDs counted on a brand new artist");
+is(scalar($artist_rs->related_resultset('cds')->all), 0, "No CDs fetched on a brand new artist (count == fetch)");
+
+# create a cd, and make sure the non-existing join does not skew the count
+$artist->create_related ('cds', { title => 'yyy', year => '1999' });
+is($artist_rs->related_resultset('cds')->count, 1, "1 CDs counted on a brand new artist");
+is(scalar($artist_rs->related_resultset('cds')->all), 1, "1 CDs prefetched on a brand new artist (count == fetch)");
+
+# Really fuck shit up with one more cd and some insanity
+# this doesn't quite work, as the prefetch gets lost
+# on search_related. This however is too esoteric to fix right
+# now
+
+my $cd2 = $artist->create_related ('cds', {
+ title => 'zzz',
+ year => '1999',
+ tracks => [{ title => 'ping' }, { title => 'pong' }],
+});
+
+my $cds = $cd2->search_related ('artist', {}, { join => 'twokeys' })
+ ->search_related ('cds');
+my $tracks = $cds->search_related ('tracks');
+
+is($tracks->count, 2, "2 Tracks counted on cd via artist via one of the cds");
+is(scalar($tracks->all), 2, "2 Track objects on cd via artist via one of the cds");
+
+is($cds->count, 2, "2 CDs counted on artist via one of the cds");
+is(scalar($cds->all), 2, "2 CD objects on artist via one of the cds");
+
+# make sure the join collapses all the way
+is_same_sql_bind (
+ $tracks->count_rs->as_query,
+ '(
+ SELECT COUNT( * )
+ FROM artist me
+ LEFT JOIN twokeys twokeys ON twokeys.artist = me.artistid
+ JOIN cd cds ON cds.artist = me.artistid
+ JOIN track tracks ON tracks.cd = cds.cdid
+ WHERE ( me.artistid = ? )
+ )',
+ [ [ 'me.artistid' => 4 ] ],
+);
+
+
+TODO: {
+ local $TODO = "Chaining with prefetch is fundamentally broken";
+
+ my $queries;
+ $schema->storage->debugcb ( sub { $queries++ } );
+ $schema->storage->debug (1);
+
+ my $cds = $cd2->search_related ('artist', {}, { prefetch => { cds => 'tracks' }, join => 'twokeys' })
+ ->search_related ('cds');
+
+ my $tracks = $cds->search_related ('tracks');
+
+ is($tracks->count, 2, "2 Tracks counted on cd via artist via one of the cds");
+ is(scalar($tracks->all), 2, "2 Tracks prefetched on cd via artist via one of the cds");
+ is($tracks->count, 2, "Cached 2 Tracks counted on cd via artist via one of the cds");
+
+ is($cds->count, 2, "2 CDs counted on artist via one of the cds");
+ is(scalar($cds->all), 2, "2 CDs prefetched on artist via one of the cds");
+ is($cds->count, 2, "Cached 2 CDs counted on artist via one of the cds");
+
+ is ($queries, 3, '2 counts + 1 prefetch?');
+}
--- /dev/null
+# Test whether prefetch and join in a diamond relationship fetch the correct rows
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+$schema->populate('Artwork', [
+ [ qw/cd_id/ ],
+ [ 1 ],
+]);
+
+$schema->populate('Artwork_to_Artist', [
+ [ qw/artwork_cd_id artist_id/ ],
+ [ 1, 2 ],
+]);
+
+my $ars = $schema->resultset ('Artwork');
+
+# The relationship diagram here is:
+#
+# $ars --> artwork_to_artist
+# | |
+# | |
+# V V
+# cd ------> artist
+#
+# The current artwork belongs to a cd by artist1
+# but the artwork itself is painted by artist2
+#
+# What we try is all possible permutations of join/prefetch
+# combinations in both directions, while always expecting to
+# arrive at the specific artist at the end of each path.
+
+
+my $cd_paths = {
+ 'no cd' => [],
+ 'cd' => ['cd'],
+ 'cd->artist1' => [{'cd' => 'artist'}]
+};
+my $a2a_paths = {
+ 'no a2a' => [],
+ 'a2a' => ['artwork_to_artist'],
+ 'a2a->artist2' => [{'artwork_to_artist' => 'artist'}]
+};
+
+my %tests;
+
+foreach my $cd_path (keys %$cd_paths) {
+
+ foreach my $a2a_path (keys %$a2a_paths) {
+
+
+ $tests{sprintf "join %s, %s", $cd_path, $a2a_path} = $ars->search({}, {
+ 'join' => [
+ @{ $cd_paths->{$cd_path} },
+ @{ $a2a_paths->{$a2a_path} },
+ ],
+ 'prefetch' => [
+ ],
+ });
+
+
+ $tests{sprintf "prefetch %s, %s", $cd_path, $a2a_path} = $ars->search({}, {
+ 'join' => [
+ ],
+ 'prefetch' => [
+ @{ $cd_paths->{$cd_path} },
+ @{ $a2a_paths->{$a2a_path} },
+ ],
+ });
+
+
+ $tests{sprintf "join %s, prefetch %s", $cd_path, $a2a_path} = $ars->search({}, {
+ 'join' => [
+ @{ $cd_paths->{$cd_path} },
+ ],
+ 'prefetch' => [
+ @{ $a2a_paths->{$a2a_path} },
+ ],
+ });
+
+
+ $tests{sprintf "join %s, prefetch %s", $a2a_path, $cd_path} = $ars->search({}, {
+ 'join' => [
+ @{ $a2a_paths->{$a2a_path} },
+ ],
+ 'prefetch' => [
+ @{ $cd_paths->{$cd_path} },
+ ],
+ });
+
+ }
+}
+
+plan tests => (scalar (keys %tests) * 3);
+
+foreach my $name (keys %tests) {
+ foreach my $artwork ($tests{$name}->all()) {
+ is($artwork->id, 1, $name . ', correct artwork');
+ is($artwork->cd->artist->artistid, 1, $name . ', correct artist_id over cd');
+ is($artwork->artwork_to_artist->first->artist->artistid, 2, $name . ', correct artist_id over A2A');
+ }
+}
\ No newline at end of file
--- /dev/null
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBIC::SqlMakerTest;
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+plan tests => 1;
+
+# While this is a rather GIGO case, make sure it behaves as pre-103,
+# as it may result in hard-to-track bugs
+my $cds = $schema->resultset('Artist')
+ ->search_related ('cds')
+ ->search ({}, {
+ prefetch => [ 'single_track', { single_track => 'cd' } ],
+ });
+
+is_same_sql(
+ ${$cds->as_query}->[0],
+ '(
+ SELECT
+ cds.cdid, cds.artist, cds.title, cds.year, cds.genreid, cds.single_track,
+ single_track.trackid, single_track.cd, single_track.position, single_track.title, single_track.last_updated_on, single_track.last_updated_at, single_track.small_dt,
+ single_track_2.trackid, single_track_2.cd, single_track_2.position, single_track_2.title, single_track_2.last_updated_on, single_track_2.last_updated_at, single_track_2.small_dt,
+ cd.cdid, cd.artist, cd.title, cd.year, cd.genreid, cd.single_track
+ FROM artist me
+ LEFT JOIN cd cds ON cds.artist = me.artistid
+ LEFT JOIN track single_track ON single_track.trackid = cds.single_track
+ LEFT JOIN track single_track_2 ON single_track_2.trackid = cds.single_track
+ LEFT JOIN cd cd ON cd.cdid = single_track_2.cd
+ )',
+);
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+
+use lib qw(t/lib);
+use DBICTest;
+use DBIC::SqlMakerTest;
+
+my $schema = DBICTest->init_schema();
+my $sdebug = $schema->storage->debug;
+
+my $cd_rs = $schema->resultset('CD')->search (
+ { 'tracks.cd' => { '!=', undef } },
+ { prefetch => 'tracks' },
+);
+
+# Database sanity check
+is($cd_rs->count, 5, 'CDs with tracks count');
+for ($cd_rs->all) {
+ is ($_->tracks->count, 3, '3 tracks for CD' . $_->id );
+}
+
+# Test a belongs_to prefetch of a has_many
+{
+ my $track_rs = $schema->resultset ('Track')->search (
+ { 'me.cd' => { -in => [ $cd_rs->get_column ('cdid')->all ] } },
+ {
+ select => [
+ 'me.cd',
+ { count => 'me.trackid' },
+ ],
+ as => [qw/
+ cd
+ track_count
+ /],
+ group_by => [qw/me.cd/],
+ prefetch => 'cd',
+ },
+ );
+
+ # this used to fuck up ->all, do not remove!
+ ok ($track_rs->first, 'There is stuff in the rs');
+
+ is($track_rs->count, 5, 'Prefetched count with groupby');
+ is($track_rs->all, 5, 'Prefetched objects with groupby');
+
+ {
+ my $query_cnt = 0;
+ $schema->storage->debugcb ( sub { $query_cnt++ } );
+ $schema->storage->debug (1);
+
+ while (my $collapsed_track = $track_rs->next) {
+ my $cdid = $collapsed_track->get_column('cd');
+ is($collapsed_track->get_column('track_count'), 3, "Correct count of tracks for CD $cdid" );
+ ok($collapsed_track->cd->title, "Prefetched title for CD $cdid" );
+ }
+
+ is ($query_cnt, 1, 'Single query on prefetched titles');
+ $schema->storage->debugcb (undef);
+ $schema->storage->debug ($sdebug);
+ }
+
+ # Test sql by hand, as the sqlite db will simply paper over
+ # improper group/select combinations
+ #
+ is_same_sql_bind (
+ $track_rs->count_rs->as_query,
+ '(
+ SELECT COUNT( * )
+ FROM (
+ SELECT me.cd
+ FROM track me
+ JOIN cd cd ON cd.cdid = me.cd
+ WHERE ( me.cd IN ( ?, ?, ?, ?, ? ) )
+ GROUP BY me.cd
+ )
+ count_subq
+ )',
+ [ map { [ 'me.cd' => $_] } ($cd_rs->get_column ('cdid')->all) ],
+ 'count() query generated expected SQL',
+ );
+
+ is_same_sql_bind (
+ $track_rs->as_query,
+ '(
+ SELECT me.cd, me.track_count, cd.cdid, cd.artist, cd.title, cd.year, cd.genreid, cd.single_track
+ FROM (
+ SELECT me.cd, COUNT (me.trackid) AS track_count
+ FROM track me
+ JOIN cd cd ON cd.cdid = me.cd
+ WHERE ( me.cd IN ( ?, ?, ?, ?, ? ) )
+ GROUP BY me.cd
+ ) me
+ JOIN cd cd ON cd.cdid = me.cd
+ WHERE ( me.cd IN ( ?, ?, ?, ?, ? ) )
+ )',
+ [ map { [ 'me.cd' => $_] } ( ($cd_rs->get_column ('cdid')->all) x 2 ) ],
+ 'next() query generated expected SQL',
+ );
+
+
+ # add an extra track to one of the cds, and then make sure we can get it on top
+ # (check if limit works)
+ my $top_cd = $cd_rs->slice (1,1)->next;
+ $top_cd->create_related ('tracks', {
+ title => 'over the top',
+ });
+
+ my $top_cd_collapsed_track = $track_rs->search ({}, {
+ rows => 2,
+ order_by => [
+ { -desc => 'track_count' },
+ ],
+ });
+
+ is ($top_cd_collapsed_track->count, 2);
+
+ is (
+ $top_cd->title,
+ $top_cd_collapsed_track->first->cd->title,
+ 'Correct collapsed track with prefetched CD returned on top'
+ );
+}
+
+# test a has_many/might_have prefetch at the same level
+# Note that one of the CDs now has 4 tracks instead of 3
+{
+ my $most_tracks_rs = $schema->resultset ('CD')->search (
+ {
+ 'me.cdid' => { '!=' => undef }, # duh - this is just to test WHERE
+ },
+ {
+ prefetch => [qw/tracks liner_notes/],
+ select => ['me.cdid', { count => 'tracks.trackid' }, { max => 'tracks.trackid', -as => 'maxtr'} ],
+ as => [qw/cdid track_count max_track_id/],
+ group_by => 'me.cdid',
+ order_by => [ { -desc => 'track_count' }, { -asc => 'maxtr' } ],
+ rows => 2,
+ }
+ );
+
+ is_same_sql_bind (
+ $most_tracks_rs->count_rs->as_query,
+ '(
+ SELECT COUNT( * )
+ FROM (
+ SELECT me.cdid
+ FROM cd me
+ LEFT JOIN track tracks ON tracks.cd = me.cdid
+ LEFT JOIN liner_notes liner_notes ON liner_notes.liner_id = me.cdid
+ WHERE ( me.cdid IS NOT NULL )
+ GROUP BY me.cdid
+ LIMIT 2
+ ) count_subq
+ )',
+ [],
+ 'count() query generated expected SQL',
+ );
+
+ is_same_sql_bind (
+ $most_tracks_rs->as_query,
+ '(
+ SELECT me.cdid, me.track_count, me.maxtr,
+ tracks.trackid, tracks.cd, tracks.position, tracks.title, tracks.last_updated_on, tracks.last_updated_at, tracks.small_dt,
+ liner_notes.liner_id, liner_notes.notes
+ FROM (
+ SELECT me.cdid, COUNT( tracks.trackid ) AS track_count, MAX( tracks.trackid ) AS maxtr
+ FROM cd me
+ LEFT JOIN track tracks ON tracks.cd = me.cdid
+ WHERE ( me.cdid IS NOT NULL )
+ GROUP BY me.cdid
+ ORDER BY track_count DESC, maxtr ASC
+ LIMIT 2
+ ) me
+ LEFT JOIN track tracks ON tracks.cd = me.cdid
+ LEFT JOIN liner_notes liner_notes ON liner_notes.liner_id = me.cdid
+ WHERE ( me.cdid IS NOT NULL )
+ ORDER BY track_count DESC, maxtr ASC, tracks.cd
+ )',
+ [],
+ 'next() query generated expected SQL',
+ );
+
+ is ($most_tracks_rs->count, 2, 'Limit works');
+ my $top_cd = $most_tracks_rs->first;
+ is ($top_cd->id, 2, 'Correct cd fetched on top'); # 2 because of the slice(1,1) earlier
+
+ my $query_cnt = 0;
+ $schema->storage->debugcb ( sub { $query_cnt++ } );
+ $schema->storage->debug (1);
+
+ is ($top_cd->get_column ('track_count'), 4, 'Track count fetched correctly');
+ is ($top_cd->tracks->count, 4, 'Count of prefetched tracks rs still correct');
+ is ($top_cd->tracks->all, 4, 'Number of prefetched track objects still correct');
+ is (
+ $top_cd->liner_notes->notes,
+ 'Buy Whiskey!',
+ 'Correct liner pre-fetched with top cd',
+ );
+
+ is ($query_cnt, 0, 'No queries executed during prefetched data access');
+ $schema->storage->debugcb (undef);
+ $schema->storage->debug ($sdebug);
+}
+
+# make sure that distinct still works
+{
+ my $rs = $schema->resultset("CD")->search({}, {
+ prefetch => 'tags',
+ order_by => 'cdid',
+ distinct => 1,
+ });
+
+ is_same_sql_bind (
+ $rs->as_query,
+ '(
+ SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track,
+ tags.tagid, tags.cd, tags.tag
+ FROM (
+ SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track
+ FROM cd me
+ GROUP BY me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track
+ ORDER BY cdid
+ ) me
+ LEFT JOIN tags tags ON tags.cd = me.cdid
+ ORDER BY cdid, tags.cd, tags.tag
+ )',
+ [],
+ 'Prefetch + distinct resulted in correct group_by',
+ );
+
+ is ($rs->all, 5, 'Correct number of CD objects');
+ is ($rs->count, 5, 'Correct count of CDs');
+}
+
+# RT 47779, test group_by as a scalar ref
+{
+ my $track_rs = $schema->resultset ('Track')->search (
+ { 'me.cd' => { -in => [ $cd_rs->get_column ('cdid')->all ] } },
+ {
+ select => [
+ 'me.cd',
+ { count => 'me.trackid' },
+ ],
+ as => [qw/
+ cd
+ track_count
+ /],
+ group_by => \'SUBSTR(me.cd, 1, 1)',
+ prefetch => 'cd',
+ },
+ );
+
+ is_same_sql_bind (
+ $track_rs->count_rs->as_query,
+ '(
+ SELECT COUNT( * )
+ FROM (
+ SELECT SUBSTR(me.cd, 1, 1)
+ FROM track me
+ JOIN cd cd ON cd.cdid = me.cd
+ WHERE ( me.cd IN ( ?, ?, ?, ?, ? ) )
+ GROUP BY SUBSTR(me.cd, 1, 1)
+ )
+ count_subq
+ )',
+ [ map { [ 'me.cd' => $_] } ($cd_rs->get_column ('cdid')->all) ],
+ 'count() query generated expected SQL',
+ );
+}
+
+done_testing;
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+plan tests => 9;
+
+my $schema = DBICTest->init_schema();
+
+lives_ok(sub {
+ # while cds.* will be selected anyway (prefetch currently forces the result of _resolve_prefetch)
+ # only the requested me.name column will be fetched.
+
+ # reference sql with select => [...]
+ # SELECT me.name, cds.title, cds.cdid, cds.artist, cds.title, cds.year, cds.genreid, cds.single_track FROM ...
+
+ my $rs = $schema->resultset('Artist')->search(
+ { 'cds.title' => { '!=', 'Generic Manufactured Singles' } },
+ {
+ prefetch => [ qw/ cds / ],
+ order_by => [ { -desc => 'me.name' }, 'cds.title' ],
+ select => [qw/ me.name cds.title / ],
+ }
+ );
+
+ is ($rs->count, 2, 'Correct number of collapsed artists');
+ my $we_are_goth = $rs->first;
+ is ($we_are_goth->name, 'We Are Goth', 'Correct first artist');
+ is ($we_are_goth->cds->count, 1, 'Correct number of CDs for first artist');
+ is ($we_are_goth->cds->first->title, 'Come Be Depressed With Us', 'Correct cd for artist');
+}, 'explicit prefetch on a keyless object works');
+
+
+lives_ok(sub {
+ # test implicit prefetch as well
+
+ my $rs = $schema->resultset('CD')->search(
+ { title => 'Generic Manufactured Singles' },
+ {
+ join=> 'artist',
+ select => [qw/ me.title artist.name / ],
+ }
+ );
+
+ my $cd = $rs->next;
+ is ($cd->title, 'Generic Manufactured Singles', 'CD title prefetched correctly');
+ isa_ok ($cd->artist, 'DBICTest::Artist');
+ is ($cd->artist->name, 'Random Boy Band', 'Artist object has correct name');
+
+}, 'implicit keyless prefetch works');
use strict;
-use warnings;
+use warnings;
use Test::More;
use Test::Exception;
use lib qw(t/lib);
use DBICTest;
-use Data::Dumper;
-
-my $schema = DBICTest->init_schema();
-
-my $orig_debug = $schema->storage->debug;
-
use IO::File;
-BEGIN {
- eval "use DBD::SQLite";
- plan $@
- ? ( skip_all => 'needs DBD::SQLite for testing' )
-# : ( tests => 16 );
- : 'no_plan';
-}
+plan tests => 10;
+
+my $schema = DBICTest->init_schema();
+my $sdebug = $schema->storage->debug;
# once the following TODO is complete, remove the 2 warning tests immediately
# after the TODO block
ok(! $o_mm_warn, 'no warning on attempt to prefetch several same level has_many\'s (1 -> M + M)');
is($queries, 1, 'prefetch one->(has_many,has_many) ran exactly 1 query');
- is($pr_tracks_count, $tracks_count, 'equal count of prefetched relations over several same level has_many\'s (1 -> M + M)');
-
- for ($pr_tracks_rs, $tracks_rs) {
- $_->result_class ('DBIx::Class::ResultClass::HashRefInflator');
- }
+ $schema->storage->debugcb (undef);
+ $schema->storage->debug ($sdebug);
- is_deeply ([$pr_tracks_rs->all], [$tracks_rs->all], 'same structure returned with and without prefetch over several same level has_many\'s (1 -> M + M)');
+ is($pr_tracks_count, $tracks_count, 'equal count of prefetched relations over several same level has_many\'s (1 -> M + M)');
+ is ($pr_tracks_rs->all, $tracks_rs->all, 'equal amount of objects returned with and without prefetch over several same level has_many\'s (1 -> M + M)');
#( M -> 1 -> M + M )
my $note_rs = $schema->resultset('LinerNotes')->search ({ notes => 'Buy Whiskey!' });
my $pr_note_rs = $note_rs->search ({}, {
prefetch => {
- cd => [qw/tags tracks/]
+ cd => [qw/tracks tags/]
},
});
ok(! $m_o_mm_warn, 'no warning on attempt to prefetch several same level has_many\'s (M -> 1 -> M + M)');
is($queries, 1, 'prefetch one->(has_many,has_many) ran exactly 1 query');
+ $schema->storage->debugcb (undef);
+ $schema->storage->debug ($sdebug);
is($pr_tags_count, $tags_count, 'equal count of prefetched relations over several same level has_many\'s (M -> 1 -> M + M)');
-
- for ($pr_tags_rs, $tags_rs) {
- $_->result_class ('DBIx::Class::ResultClass::HashRefInflator');
- }
-
- is_deeply ([$pr_tags_rs->all], [$tags_rs->all], 'same structure returned with and without prefetch over several same level has_many\'s (M -> 1 -> M + M)');
+ is($pr_tags_rs->all, $tags_rs->all, 'equal amount of objects with and without prefetch over several same level has_many\'s (M -> 1 -> M + M)');
}
# remove this closure once the TODO above is working
-my $w;
{
- local $SIG{__WARN__} = sub { $w = shift };
+ my $warn_re = qr/will explode the number of row objects retrievable via/;
+
+ my (@w, @dummy);
+ local $SIG{__WARN__} = sub { $_[0] =~ $warn_re ? push @w, @_ : warn @_ };
my $rs = $schema->resultset('CD')->search ({ 'me.title' => 'Forkful of bees' }, { prefetch => [qw/tracks tags/] });
- for (qw/all count next first/) {
- undef $w;
- my @stuff = $rs->search()->$_;
- like ($w, qr/will currently disrupt both the functionality of .rs->count\(\), and the amount of objects retrievable via .rs->next\(\)/,
- "warning on ->$_ attempt prefetching several same level has_manys (1 -> M + M)");
- }
+ @w = ();
+ @dummy = $rs->first;
+ is (@w, 1, 'warning on attempt prefetching several same level has_manys (1 -> M + M)');
+
my $rs2 = $schema->resultset('LinerNotes')->search ({ notes => 'Buy Whiskey!' }, { prefetch => { cd => [qw/tags tracks/] } });
- for (qw/all count next first/) {
- undef $w;
- my @stuff = $rs2->search()->$_;
- like ($w, qr/will currently disrupt both the functionality of .rs->count\(\), and the amount of objects retrievable via .rs->next\(\)/,
- "warning on ->$_ attempt prefetching several same level has_manys (M -> 1 -> M + M)");
- }
+ @w = ();
+ @dummy = $rs2->first;
+ is (@w, 1, 'warning on attempt prefetching several same level has_manys (M -> 1 -> M + M)');
}
}
$rs->reset;
+use Data::Dumper;
note Dumper [
"\n$query",
"\n$tb",
t3 => { col1 => 5, col2 => 6 },
};
At this point, find the stuff that's different is easy enough to do and slotting
-things into the right spot is, likewise, pretty straightforward.
+things into the right spot is, likewise, pretty straightforward. Instead of
+storing things in an AoH, store them in a HoH keyed on the PKs of the table,
+then convert to an AoH after all collapsing is done.
This implies that the collapse attribute can probably disappear or, at the
least, be turned into a boolean (which is how it's used in every other place).
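For illustration only, here is a minimal Perl sketch of the HoH-keyed collapse
described above. The row layout, the 'cdid'/'track' keys and the collapse_rows()
helper are hypothetical stand-ins used to show the idea, not DBIx::Class API:

  use strict;
  use warnings;

  # Collapse duplicate main rows produced by a has_many join: key the rows
  # on the primary key value, merge each joined sub-row into its parent,
  # then flatten the HoH back into an AoH once all collapsing is done.
  sub collapse_rows {
      my ($pk, @rows) = @_;

      my %by_pk;
      for my $row (@rows) {
          my $track = delete $row->{track};          # joined sub-row, if any

          # first occurrence of this PK becomes the collapsed main row
          my $main = $by_pk{ $row->{$pk} } ||= { %$row, tracks => [] };

          # every occurrence contributes its joined sub-row
          push @{ $main->{tracks} }, $track if $track;
      }

      # convert the HoH back to an AoH once all collapsing is done
      return [ values %by_pk ];
  }

  my $collapsed = collapse_rows( 'cdid',
      { cdid => 1, title => 'cd1', track => { title => 'ping' } },
      { cdid => 1, title => 'cd1', track => { title => 'pong' } },
  );
  # yields a single cd row whose 'tracks' key holds both track sub-rows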
+++ /dev/null
-# Test to ensure we get a consistent result set wether or not we use the
-# prefetch option in combination rows (LIMIT).
-use strict;
-use warnings;
-
-use Test::More;
-use lib qw(t/lib);
-use DBICTest;
-
-plan $@ ? (skip_all => 'needs DBD::SQLite for testing') : (tests => 2);
-
-my $schema = DBICTest->init_schema();
-my $no_prefetch = $schema->resultset('Artist')->search(
- undef,
- { rows => 3 }
-);
-
-my $use_prefetch = $schema->resultset('Artist')->search(
- undef,
- {
- prefetch => 'cds',
- rows => 3
- }
-);
-
-my $no_prefetch_count = 0;
-my $use_prefetch_count = 0;
-
-is($no_prefetch->count, $use_prefetch->count, '$no_prefetch->count == $use_prefetch->count');
-
-TODO: {
- local $TODO = "This is a difficult bug to fix, workaround is not to use prefetch with rows";
- $no_prefetch_count++ while $no_prefetch->next;
- $use_prefetch_count++ while $use_prefetch->next;
- is(
- $no_prefetch_count,
- $use_prefetch_count,
- "manual row count confirms consistency"
- . " (\$no_prefetch_count == $no_prefetch_count, "
- . " \$use_prefetch_count == $use_prefetch_count)"
- );
-}
-
-__END__
-The fix is to, when using prefetch, take the query and put it into a subquery
-joined to the tables we're prefetching from. This might result in the same
-table being joined once in the main subquery and once in the main query. This
-may actually resolve other, unknown edgecase bugs. It is also the right way
-to do prefetching. Optimizations can come later.
-
-This means that:
- $foo_rs->search(
- { ... },
- {
- prefetch => 'bar',
- ...
- },
- );
-
-becomes:
- my $temp = $foo_rs->search(
- { ... },
- {
- join => 'bar',
- ...
- },
- );
- $foo_rs->storage->schema->resultset('foo')->search(
- undef,
- {
- from => [
- { me => $temp->as_query },
- ],
- prefetch => 'bar',
- },
- );
-
-Problem:
- * The prefetch->join change needs to happen ONLY IF there are conditions
- that depend on bar being joined.
- * How will this work when the $rs is further searched on? Those clauses
- need to be added to the subquery, not the outer one. This is particularly
- true if rows is added in the attribute later per the Pager.
use strict;
-use warnings;
+use warnings;
use Test::More;
use Test::Exception;
use lib qw(t/lib);
use DBICTest;
use Data::Dumper;
+use IO::File;
my $schema = DBICTest->init_schema();
-
my $orig_debug = $schema->storage->debug;
-use IO::File;
-
-BEGIN {
- eval "use DBD::SQLite";
- plan $@
- ? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 45 );
-}
-
-# figure out if we've got a version of sqlite that is older than 3.2.6, in
-# which case COUNT(DISTINCT()) doesn't work
-my $is_broken_sqlite = 0;
-my ($sqlite_major_ver,$sqlite_minor_ver,$sqlite_patch_ver) =
- split /\./, $schema->storage->dbh->get_info(18);
-if( $schema->storage->dbh->get_info(17) eq 'SQLite' &&
- ( ($sqlite_major_ver < 3) ||
- ($sqlite_major_ver == 3 && $sqlite_minor_ver < 2) ||
- ($sqlite_major_ver == 3 && $sqlite_minor_ver == 2 && $sqlite_patch_ver < 6) ) ) {
- $is_broken_sqlite = 1;
-}
+plan tests => 44;
my $queries = 0;
$schema->storage->debugcb(sub { $queries++; });
{ group_by => [qw/ title me.cdid /] }
);
-SKIP: {
- skip "SQLite < 3.2.6 doesn't understand COUNT(DISTINCT())", 1
- if $is_broken_sqlite;
- cmp_ok( $rs->count, '==', 5, "count() ok after group_by on main pk" );
-}
+cmp_ok( $rs->count, '==', 5, "count() ok after group_by on main pk" );
cmp_ok( scalar $rs->all, '==', 5, "all() returns same count as count() after group_by on main pk" );
{ join => [qw/ artist /], group_by => [qw/ artist.name /] }
);
-SKIP: {
- skip "SQLite < 3.2.6 doesn't understand COUNT(DISTINCT())", 1
- if $is_broken_sqlite;
- cmp_ok( $rs->count, '==', 3, "count() ok after group_by on related column" );
-}
+cmp_ok( $rs->count, '==', 3, "count() ok after group_by on related column" );
$rs = $schema->resultset("Artist")->search(
{},
'cds_2.title' => 'Forkful of bees' },
{ join => [ 'cds', 'cds' ] });
-SKIP: {
- skip "SQLite < 3.2.6 doesn't understand COUNT(DISTINCT())", 1
- if $is_broken_sqlite;
- cmp_ok($rs->count, '==', 1, "single artist returned from multi-join");
-}
+cmp_ok($rs->count, '==', 1, "single artist returned from multi-join");
is($rs->next->name, 'Caterwauler McCrae', "Correct artist returned");
$tree_like = eval { $schema->resultset('TreeLike')->search(
{ 'children.id' => 3, 'children_2.id' => 6 },
- { join => [qw/children children/] }
+ { join => [qw/children children children/] }
)->search_related('children', { 'children_4.id' => 7 }, { prefetch => 'children' }
)->first->children->first; };
is(eval { $tree_like->name }, 'fong', 'Tree with multiple has_many joins ok');
-# test that collapsed joins don't get a _2 appended to the alias
-
-my $sql = '';
-$schema->storage->debugcb(sub { $sql = $_[1] });
-$schema->storage->debug(1);
-
-eval {
- my $row = $schema->resultset('Artist')->search_related('cds', undef, {
- join => 'tracks',
- prefetch => 'tracks',
- })->search_related('tracks')->first;
-};
-
-like( $sql, qr/^SELECT tracks_2\.trackid/, "join not collapsed for search_related" );
-
-$schema->storage->debug($orig_debug);
-$schema->storage->debugobj->callback(undef);
-
$rs = $schema->resultset('Artist');
$rs->create({ artistid => 4, name => 'Unknown singer-songwriter' });
$rs->create({ artistid => 5, name => 'Emo 4ever' });
is($queries, 0, 'chained search_related after has_many->has_many prefetch ran no queries');
+$schema->storage->debug($orig_debug);
+$schema->storage->debugobj->callback(undef);
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+
+use lib qw(t/lib);
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+lives_ok ( sub {
+ my $no_prefetch = $schema->resultset('Track')->search_related(cd =>
+ {
+ 'cd.year' => "2000",
+ },
+ {
+ join => 'tags',
+ order_by => 'me.trackid',
+ rows => 1,
+ }
+ );
+
+ my $use_prefetch = $no_prefetch->search(
+ {},
+ {
+ prefetch => 'tags',
+ }
+ );
+
+ is($use_prefetch->count, $no_prefetch->count, 'counts with and without prefetch match');
+ is(
+ scalar ($use_prefetch->all),
+ scalar ($no_prefetch->all),
+ "Amount of returned rows is right"
+ );
+
+}, 'search_related prefetch with order_by works');
+
+
+lives_ok (sub {
+ my $rs = $schema->resultset("Artwork")->search(undef, {distinct => 1})
+ ->search_related('artwork_to_artist')->search_related('artist',
+ undef,
+ { prefetch => 'cds' },
+ );
+ is($rs->all, 0, 'prefetch without WHERE (objects)');
+ is($rs->count, 0, 'prefetch without WHERE (count)');
+
+ $rs = $schema->resultset("Artwork")->search(undef, {distinct => 1})
+ ->search_related('artwork_to_artist')->search_related('artist',
+ { 'cds.title' => 'foo' },
+ { prefetch => 'cds' },
+ );
+ is($rs->all, 0, 'prefetch with WHERE (objects)');
+ is($rs->count, 0, 'prefetch with WHERE (count)');
+
+
+# test where conditions at the root of the related chain
+ my $artist_rs = $schema->resultset("Artist")->search({artistid => 11});
+
+
+ $rs = $artist_rs->search_related('cds')->search_related('genre',
+ { 'genre.name' => 'foo' },
+ { prefetch => 'cds' },
+ );
+ is($rs->all, 0, 'prefetch without distinct (objects)');
+ is($rs->count, 0, 'prefetch without distinct (count)');
+
+
+
+ $rs = $artist_rs->search(undef, {distinct => 1})
+ ->search_related('cds')->search_related('genre',
+ { 'genre.name' => 'foo' },
+ );
+ is($rs->all, 0, 'distinct without prefetch (objects)');
+ is($rs->count, 0, 'distinct without prefetch (count)');
+
+
+
+ $rs = $artist_rs->search({}, {distinct => 1})
+ ->search_related('cds')->search_related('genre',
+ { 'genre.name' => 'foo' },
+ { prefetch => 'cds' },
+ );
+ is($rs->all, 0, 'distinct with prefetch (objects)');
+ is($rs->count, 0, 'distinct with prefetch (count)');
+
+
+
+}, 'distinct generally works with prefetch on deep search_related chains');
+
+done_testing;
--- /dev/null
+# Test to ensure we get a consistent result set whether or not we use the
+# prefetch option in combination with rows (LIMIT).
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+
+plan tests => 9;
+
+my $schema = DBICTest->init_schema();
+
+
+my $no_prefetch = $schema->resultset('Artist')->search(
+ [ # search deliberately contrived
+ { 'artwork.cd_id' => undef },
+ { 'tracks.title' => { '!=' => 'blah-blah-1234568' }}
+ ],
+ { rows => 3, join => { cds => [qw/artwork tracks/] },
+ }
+);
+
+my $use_prefetch = $no_prefetch->search(
+ {},
+ {
+ prefetch => 'cds',
+ order_by => { -desc => 'name' },
+ }
+);
+
+is($no_prefetch->count, $use_prefetch->count, '$no_prefetch->count == $use_prefetch->count');
+is(
+ scalar ($no_prefetch->all),
+ scalar ($use_prefetch->all),
+ "Amount of returned rows is right"
+);
+
+my $artist_many_cds = $schema->resultset('Artist')->search ( {}, {
+ join => 'cds',
+ group_by => 'me.artistid',
+ having => \ 'count(cds.cdid) > 1',
+})->first;
+
+
+$no_prefetch = $schema->resultset('Artist')->search(
+ { artistid => $artist_many_cds->id },
+ { rows => 1 }
+);
+
+$use_prefetch = $no_prefetch->search ({}, { prefetch => 'cds' });
+
+my $normal_artist = $no_prefetch->single;
+my $prefetch_artist = $use_prefetch->find({ name => $artist_many_cds->name });
+my $prefetch2_artist = $use_prefetch->first;
+
+is(
+ $prefetch_artist->cds->count,
+ $normal_artist->cds->count,
+ "Count of child rel with prefetch + rows => 1 is right (find)"
+);
+is(
+ $prefetch2_artist->cds->count,
+ $normal_artist->cds->count,
+ "Count of child rel with prefetch + rows => 1 is right (first)"
+);
+
+is (
+ scalar ($prefetch_artist->cds->all),
+ scalar ($normal_artist->cds->all),
+ "Amount of child rel rows with prefetch + rows => 1 is right (find)"
+);
+is (
+ scalar ($prefetch2_artist->cds->all),
+ scalar ($normal_artist->cds->all),
+ "Amount of child rel rows with prefetch + rows => 1 is right (first)"
+);
+
+throws_ok (
+ sub { $use_prefetch->single },
+ qr/resultsets prefetching has_many/,
+ 'single() with multiprefetch is illegal',
+);
+
+my $artist = $use_prefetch->search({'cds.title' => $artist_many_cds->cds->first->title })->next;
+is($artist->cds->count, 1, "count on search limiting prefetched has_many");
+
+# try with double limit
+my $artist2 = $use_prefetch->search({'cds.title' => { '!=' => $artist_many_cds->cds->first->title } })->slice (0,0)->next;
+is($artist2->cds->count, 2, "count on search limiting prefetched has_many");
+
use DBICTest;
my $schema = DBICTest->init_schema();
+my $sdebug = $schema->storage->debug;
-plan tests => 70;
+plan tests => 79;
# has_a test
my $cd = $schema->resultset("CD")->find(4);
if ($INC{'DBICTest/HelperRels.pm'}) {
$artist->add_to_cds({ title => 'Big Flop', year => 2005 });
} else {
- $artist->create_related( 'cds', {
+ my $big_flop = $artist->create_related( 'cds', {
title => 'Big Flop',
year => 2005,
} );
+
+ TODO: {
+ local $TODO = "Can't fix right now" if $DBIx::Class::VERSION < 0.09;
+ lives_ok { $big_flop->genre} "Don't throw exception when col is not loaded after insert";
+ };
}
my $big_flop_cd = ($artist->search_related('cds'))[3];
is($queries, 0, 'No SELECT made for belongs_to if key IS NULL');
$big_flop_cd->genre_inefficient; #should trigger a select query
is($queries, 1, 'SELECT made for belongs_to if key IS NULL when undef_on_null_fk disabled');
- $schema->storage->debug(0);
+ $schema->storage->debug($sdebug);
$schema->storage->debugcb(undef);
}
my( $rs_from_list ) = $artist->search_related_rs('cds');
-is( ref($rs_from_list), 'DBIx::Class::ResultSet', 'search_related_rs in list context returns rs' );
+isa_ok( $rs_from_list, 'DBIx::Class::ResultSet', 'search_related_rs in list context returns rs' );
( $rs_from_list ) = $artist->cds_rs();
-is( ref($rs_from_list), 'DBIx::Class::ResultSet', 'relation_rs in list context returns rs' );
+isa_ok( $rs_from_list, 'DBIx::Class::ResultSet', 'relation_rs in list context returns rs' );
# count_related
is( $artist->count_related('cds'), 4, 'count_related ok' );
is( $prod_rs->first->name, 'Matt S Trout',
'many_to_many add_to_$rel($obj) ok' );
$cd->remove_from_producers($prod);
+$cd->add_to_producers($prod, {attribute => 1});
+is( $prod_rs->count(), 1, 'many_to_many add_to_$rel($obj, $link_vals) count ok' );
+is( $cd->cd_to_producer->first->attribute, 1, 'many_to_many $link_vals ok');
+$cd->remove_from_producers($prod);
+$cd->set_producers([$prod], {attribute => 2});
+is( $prod_rs->count(), 1, 'many_to_many set_$rel($obj, $link_vals) count ok' );
+is( $cd->cd_to_producer->first->attribute, 2, 'many_to_many $link_vals ok');
+$cd->remove_from_producers($prod);
is( $schema->resultset('Producer')->find(1)->name, 'Matt S Trout',
"producer object exists after remove of link" );
is( $prod_rs->count, 0, 'many_to_many remove_from_$rel($obj) ok' );
is( $twokey->fourkeys_to_twokeys->count, 0,
'twokey has no links to fourkey' );
+
my $undef_artist_cd = $schema->resultset("CD")->new_result({ 'title' => 'badgers', 'year' => 2007 });
is($undef_artist_cd->has_column_loaded('artist'), '', 'FK not loaded');
is($undef_artist_cd->search_related('artist')->count, 0, '0=1 search when FK does not exist and object not yet in db');
cmp_ok($searched->count, '==', 2, "Both artist returned from map after adding another condition");
-# check join through cascaded has_many relationships
+# check join through cascaded has_many relationships (also empty has_many rels)
$artist = $schema->resultset("Artist")->find(1);
my $trackset = $artist->cds->search_related('tracks');
-# LEFT join means we also see the trackless additional album...
-cmp_ok($trackset->count, '==', 11, "Correct number of tracks for artist");
+is($trackset->count, 10, "Correct number of tracks for artist");
+is($trackset->all, 10, "Correct number of track objects for artist");
# now see about updating everything that belongs to artist 2 to artist 3
$artist = $schema->resultset("Artist")->find(2);
my $rs_overridden = $schema->source('ForceForeign');
my $relinfo_with_attr = $rs_overridden->relationship_info ('cd_3');
cmp_ok($relinfo_with_attr->{attrs}{is_foreign_key_constraint}, '==', 0, "is_foreign_key_constraint defined for belongs_to relationships with attr.");
+
+# check that relationships below left join relationships are forced to left joins
+# when traversing multiple belongs_to
+my $cds = $schema->resultset("CD")->search({ 'me.cdid' => 5 }, { join => { single_track => 'cd' } });
+is($cds->count, 1, "subjoins under left joins force_left (string)");
+
+$cds = $schema->resultset("CD")->search({ 'me.cdid' => 5 }, { join => { single_track => ['cd'] } });
+is($cds->count, 1, "subjoins under left joins force_left (arrayref)");
+
+$cds = $schema->resultset("CD")->search({ 'me.cdid' => 5 }, { join => { single_track => { cd => {} } } });
+is($cds->count, 1, "subjoins under left joins force_left (hashref)");
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib qw(t/lib);
+use DBICTest;
+use DBIC::SqlMakerTest;
+
+my $schema = DBICTest->init_schema();
+my $sdebug = $schema->storage->debug;
+
+plan tests => 6;
+
+my $artist = $schema->resultset ('Artist')->first;
+
+my $genre = $schema->resultset ('Genre')
+ ->create ({ name => 'par excellence' });
+
+is ($genre->search_related( 'cds' )->count, 0, 'No cds yet');
+
+# expect a create
+$genre->update_or_create_related ('cds', {
+ artist => $artist,
+ year => 2009,
+ title => 'the best thing since sliced bread',
+});
+
+# verify cd was inserted ok
+is ($genre->search_related( 'cds' )->count, 1, 'One cd');
+my $cd = $genre->find_related ('cds', {});
+is_deeply (
+ { map { $_, $cd->get_column ($_) } qw/artist year title/ },
+ {
+ artist => $artist->id,
+ year => 2009,
+ title => 'the best thing since sliced bread',
+ },
+ 'CD created correctly',
+);
+
+# expect a year update on the only related row
+# (non-unique column + unique column as disambiguator)
+$genre->update_or_create_related ('cds', {
+ year => 2010,
+ title => 'the best thing since sliced bread',
+});
+
+# re-fetch the cd, verify update
+is ($genre->search_related( 'cds' )->count, 1, 'Still one cd');
+$cd = $genre->find_related ('cds', {});
+is_deeply (
+ { map { $_, $cd->get_column ($_) } qw/artist year title/ },
+ {
+ artist => $artist->id,
+ year => 2010,
+ title => 'the best thing since sliced bread',
+ },
+ 'CD year column updated correctly',
+);
+
+
+# expect a create, after a failed search using *only* the
+# *current* relationship and the unique column constraints
+# (so no year)
+my @sql;
+$schema->storage->debugcb(sub { push @sql, $_[1] });
+$schema->storage->debug (1);
+
+$genre->update_or_create_related ('cds', {
+ title => 'the best thing since vertical toasters',
+ artist => $artist,
+ year => 2012,
+});
+
+$schema->storage->debugcb(undef);
+$schema->storage->debug ($sdebug);
+
+is_same_sql (
+ $sql[0],
+ 'SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track
+ FROM cd me
+ WHERE ( me.artist = ? AND me.title = ? AND me.genreid = ? )
+ ',
+ 'expected select issued',
+);
+
+# a has_many search without a unique constraint makes no sense
+# but I am not sure what to test for - leaving open
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+plan tests => 9;
+
+my $artist = $schema->resultset ('Artist')->first;
+
+my $genre = $schema->resultset ('Genre')
+ ->create ({ name => 'par excellence' });
+
+is ($genre->search_related( 'model_cd' )->count, 0, 'No cds yet');
+
+# expect a create
+$genre->update_or_create_related ('model_cd', {
+ artist => $artist,
+ year => 2009,
+ title => 'the best thing since sliced bread',
+});
+
+# verify cd was inserted ok
+is ($genre->search_related( 'model_cd' )->count, 1, 'One cd');
+my $cd = $genre->find_related ('model_cd', {});
+is_deeply (
+ { map { $_, $cd->get_column ($_) } qw/artist year title/ },
+ {
+ artist => $artist->id,
+ year => 2009,
+ title => 'the best thing since sliced bread',
+ },
+ 'CD created correctly',
+);
+
+# expect a year update on the only related row
+# (non-unique column + unique column as disambiguator)
+$genre->update_or_create_related ('model_cd', {
+ year => 2010,
+ title => 'the best thing since sliced bread',
+});
+
+# re-fetch the cd, verify update
+is ($genre->search_related( 'model_cd' )->count, 1, 'Still one cd');
+$cd = $genre->find_related ('model_cd', {});
+is_deeply (
+ { map { $_, $cd->get_column ($_) } qw/artist year title/ },
+ {
+ artist => $artist->id,
+ year => 2010,
+ title => 'the best thing since sliced bread',
+ },
+ 'CD year column updated correctly',
+);
+
+
+# expect an update of the only related row
+# (update a unique column)
+$genre->update_or_create_related ('model_cd', {
+ title => 'the best thing since vertical toasters',
+});
+
+# re-fetch the cd, verify update
+is ($genre->search_related( 'model_cd' )->count, 1, 'Still one cd');
+$cd = $genre->find_related ('model_cd', {});
+is_deeply (
+ { map { $_, $cd->get_column ($_) } qw/artist year title/ },
+ {
+ artist => $artist->id,
+ year => 2010,
+ title => 'the best thing since vertical toasters',
+ },
+ 'CD title column updated correctly',
+);
+
+
+# expect a year update on the only related row
+# (non-unique column only)
+$genre->update_or_create_related ('model_cd', {
+ year => 2011,
+});
+
+# re-fetch the cd, verify update
+is ($genre->search_related( 'model_cd' )->count, 1, 'Still one cd');
+$cd = $genre->find_related ('model_cd', {});
+is_deeply (
+ { map { $_, $cd->get_column ($_) } qw/artist year title/ },
+ {
+ artist => $artist->id,
+ year => 2011,
+ title => 'the best thing since vertical toasters',
+ },
+ 'CD year column updated correctly without a disambiguator',
+);
+
+
use Test::More;
-plan ( tests => 4 );
+plan ( tests => 5 );
use lib qw(t/lib);
use DBICTest;
my $cdrs = $schema->resultset('CD');
{
- my $arr = $art_rs->as_query;
- my ($query, @bind) = @{$$arr};
-
is_same_sql_bind(
- $query, \@bind,
+ $art_rs->as_query,
"(SELECT me.artistid, me.name, me.rank, me.charfield FROM artist me)", [],
);
}
$art_rs = $art_rs->search({ name => 'Billy Joel' });
{
- my $arr = $art_rs->as_query;
- my ($query, @bind) = @{$$arr};
-
is_same_sql_bind(
- $query, \@bind,
+ $art_rs->as_query,
"(SELECT me.artistid, me.name, me.rank, me.charfield FROM artist me WHERE ( name = ? ))",
[ [ name => 'Billy Joel' ] ],
);
$art_rs = $art_rs->search({ rank => 2 });
{
- my $arr = $art_rs->as_query;
- my ($query, @bind) = @{$$arr};
-
is_same_sql_bind(
- $query, \@bind,
+ $art_rs->as_query,
"(SELECT me.artistid, me.name, me.rank, me.charfield FROM artist me WHERE ( ( ( rank = ? ) AND ( name = ? ) ) ) )",
[ [ rank => 2 ], [ name => 'Billy Joel' ] ],
);
my $rscol = $art_rs->get_column( 'charfield' );
{
- my $arr = $rscol->as_query;
- my ($query, @bind) = @{$$arr};
-
is_same_sql_bind(
- $query, \@bind,
+ $rscol->as_query,
"(SELECT me.charfield FROM artist me WHERE ( ( ( rank = ? ) AND ( name = ? ) ) ) )",
[ [ rank => 2 ], [ name => 'Billy Joel' ] ],
);
}
-__END__
+{
+ my $rs = $schema->resultset("CD")->search(
+ { 'artist.name' => 'Caterwauler McCrae' },
+ { join => [qw/artist/]}
+ );
+ my $subsel_rs = $schema->resultset("CD")->search( { cdid => { IN => $rs->get_column('cdid')->as_query } } );
+ is($subsel_rs->count, $rs->count, 'Subselect on PK got the same row count');
+}
--- /dev/null
+use strict;
+use warnings;
+
+use lib qw(t/lib);
+use Test::More;
+use Test::Exception;
+use DBICTest;
+
+#plan tests => 5;
+plan 'no_plan';
+
+my $schema = DBICTest->init_schema();
+
+my $tkfks = $schema->resultset('FourKeys_to_TwoKeys');
+
+my ($fa, $fb) = $tkfks->related_resultset ('fourkeys')->populate ([
+ [qw/foo bar hello goodbye sensors read_count/],
+ [qw/1 1 1 1 a 10 /],
+ [qw/2 2 2 2 b 20 /],
+]);
+
+# This is already provided by DBICTest
+#my ($ta, $tb) = $tkfk->related_resultset ('twokeys')->populate ([
+# [qw/artist cd /],
+# [qw/1 1 /],
+# [qw/2 2 /],
+#]);
+my ($ta, $tb) = $schema->resultset ('TwoKeys')
+ ->search ( [ { artist => 1, cd => 1 }, { artist => 2, cd => 2 } ])
+ ->all;
+
+my $tkfk_cnt = $tkfks->count;
+
+my $non_void_ctx = $tkfks->populate ([
+ { autopilot => 'a', fourkeys => $fa, twokeys => $ta, pilot_sequence => 10 },
+ { autopilot => 'b', fourkeys => $fb, twokeys => $tb, pilot_sequence => 20 },
+ { autopilot => 'x', fourkeys => $fa, twokeys => $tb, pilot_sequence => 30 },
+ { autopilot => 'y', fourkeys => $fb, twokeys => $ta, pilot_sequence => 40 },
+]);
+is ($tkfks->count, $tkfk_cnt += 4, 'FourKeys_to_TwoKeys populated successfully');
+
+#
+# Make sure the forced group by works (i.e. the joining does not cause double-updates)
+#
+
+# create a resultset matching $fa and $fb only
+my $fks = $schema->resultset ('FourKeys')
+ ->search ({ map { $_ => [1, 2] } qw/foo bar hello goodbye/}, { join => 'fourkeys_to_twokeys' });
+
+is ($fks->count, 4, 'Joined FourKey count correct (2x2)');
+$fks->update ({ read_count => \ 'read_count + 1' });
+$_->discard_changes for ($fa, $fb);
+
+is ($fa->read_count, 11, 'Update ran only once on joined resultset');
+is ($fb->read_count, 21, 'Update ran only once on joined resultset');
+
+
+#
+# Make sure multicolumn IN (or the equivalent) functions correctly
+#
+
+my $sub_rs = $tkfks->search (
+ [
+ { map { $_ => 1 } qw/artist.artistid cd.cdid fourkeys.foo fourkeys.bar fourkeys.hello fourkeys.goodbye/ },
+ { map { $_ => 2 } qw/artist.artistid cd.cdid fourkeys.foo fourkeys.bar fourkeys.hello fourkeys.goodbye/ },
+ ],
+ {
+ join => [ 'fourkeys', { twokeys => [qw/artist cd/] } ],
+ },
+);
+
+is ($sub_rs->count, 2, 'Only two rows from fourkeys match');
+
+# attempts to delete a grouped rs should fail miserably
+throws_ok (
+ sub { $sub_rs->search ({}, { distinct => 1 })->delete },
+ qr/attempted a delete operation on a resultset which does group_by/,
+ 'Grouped rs update/delete not allowed',
+);
+
+# grouping on PKs only should pass
+$sub_rs->search ({}, { group_by => [ reverse $sub_rs->result_source->primary_columns ] }) # reverse to make sure the comparison works
+ ->update ({ pilot_sequence => \ 'pilot_sequence + 1' });
+
+is_deeply (
+ [ $tkfks->search ({ autopilot => [qw/a b x y/]}, { order_by => 'autopilot' })
+ ->get_column ('pilot_sequence')->all
+ ],
+ [qw/11 21 30 40/],
+ 'Only two rows incremented',
+);
+
+$sub_rs->delete;
+
+is ($tkfks->count, $tkfk_cnt -= 2, 'Only two rows deleted');
use DBICTest;
-is(DBICTest::Schema->source('Artist')->resultset_class, 'DBIx::Class::ResultSet', 'default resultset class');
+is(DBICTest::Schema->source('Artist')->resultset_class, 'DBICTest::BaseResultSet', 'default resultset class');
ok(!Class::Inspector->loaded('DBICNSTest::ResultSet::A'), 'custom resultset class not loaded');
DBICTest::Schema->source('Artist')->resultset_class('DBICNSTest::ResultSet::A');
ok(Class::Inspector->loaded('DBICNSTest::ResultSet::A'), 'custom resultset class loaded automatically');
use strict;
-use warnings;
+use warnings;
use Test::More;
use Test::Exception;
+
use lib qw(t/lib);
+use DBIC::SqlMakerTest;
+use DBIC::DebugObj;
use DBICTest;
use Data::Dumper;
my $schema = DBICTest->init_schema();
-my $orig_debug = $schema->storage->debug;
-
-use IO::File;
-
-BEGIN {
- eval "use DBD::SQLite";
- plan $@
- ? ( skip_all => 'needs DBD::SQLite for testing' )
- : ( tests => 10 );
-}
-
-# figure out if we've got a version of sqlite that is older than 3.2.6, in
-# which case COUNT(DISTINCT()) doesn't work
-my $is_broken_sqlite = 0;
-my ($sqlite_major_ver,$sqlite_minor_ver,$sqlite_patch_ver) =
- split /\./, $schema->storage->dbh->get_info(18);
-if( $schema->storage->dbh->get_info(17) eq 'SQLite' &&
- ( ($sqlite_major_ver < 3) ||
- ($sqlite_major_ver == 3 && $sqlite_minor_ver < 2) ||
- ($sqlite_major_ver == 3 && $sqlite_minor_ver == 2 && $sqlite_patch_ver < 6) ) ) {
- $is_broken_sqlite = 1;
-}
+plan tests => 22;
# A search() with prefetch seems to pollute an already joined resultset
# in a way that offsets future joins (adapted from a test case by Debolaz)
is (Dumper ($cd_rs->{attrs}), $attrs, 'Resultset attributes preserved after another search with prefetch')
}, 'second prefetching search ok');
}
+
+# Also test search_related, but now that we have as_query simply compare before and after
+my $artist = $schema->resultset ('Artist')->first;
+my %q;
+
+$q{a2a}{rs} = $artist->search_related ('artwork_to_artist');
+$q{a2a}{query} = $q{a2a}{rs}->as_query;
+
+$q{artw}{rs} = $q{a2a}{rs}->search_related ('artwork',
+ { },
+ { join => ['cd', 'artwork_to_artist'] },
+);
+$q{artw}{query} = $q{artw}{rs}->as_query;
+
+$q{cd}{rs} = $q{artw}{rs}->search_related ('cd', {}, { join => [ 'artist', 'tracks' ] } );
+$q{cd}{query} = $q{cd}{rs}->as_query;
+
+$q{artw_back}{rs} = $q{cd}{rs}->search_related ('artwork',
+ {}, { join => { artwork_to_artist => 'artist' } }
+)->search_related ('artwork_to_artist', {}, { join => 'artist' });
+$q{artw_back}{query} = $q{artw_back}{rs}->as_query;
+
+for my $s (qw/a2a artw cd artw_back/) {
+ my $rs = $q{$s}{rs};
+
+ lives_ok ( sub { $rs->first }, "first() on $s does not throw an exception" );
+
+ lives_ok ( sub { $rs->count }, "count() on $s does not throw an exception" );
+
+ is_same_sql_bind ($rs->as_query, $q{$s}{query}, "$s resultset unmodified (as_query matches)" );
+}
+
#!/usr/bin/perl
use strict;
-use warnings FATAL => 'all';
+use warnings;
use Data::Dumper;
use Test::More;
-plan ( tests => 7 );
use lib qw(t/lib);
use DBICTest;
my $art_rs = $schema->resultset('Artist');
my $cdrs = $schema->resultset('CD');
-{
- my $cdrs2 = $cdrs->search({
- artist_id => { 'in' => $art_rs->search({}, { rows => 1 })->get_column( 'id' )->as_query },
- });
-
- my $arr = $cdrs2->as_query;
- my ($query, @bind) = @{$$arr};
- is_same_sql_bind(
- $query, \@bind,
- "SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track FROM cd me WHERE artist_id IN ( SELECT id FROM artist me LIMIT 1 )",
- [],
- );
-}
+my @tests = (
+ {
+ rs => $cdrs,
+ search => {
+ artist_id => { 'in' => $art_rs->search({}, { rows => 1 })->get_column( 'id' )->as_query },
+ },
+ sqlbind => \[
+ "( SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track FROM cd me WHERE artist_id IN ( SELECT id FROM artist me LIMIT 1 ) )",
+ ],
+ },
-{
- my $rs = $art_rs->search(
- {},
- {
+ {
+ rs => $art_rs,
+ attrs => {
'select' => [
$cdrs->search({}, { rows => 1 })->get_column('id')->as_query,
],
},
- );
-
- my $arr = $rs->as_query;
- my ($query, @bind) = @{$$arr};
- is_same_sql_bind(
- $query, \@bind,
- "SELECT (SELECT id FROM cd me LIMIT 1) FROM artist me",
- [],
- );
-}
+ sqlbind => \[
+ "( SELECT (SELECT id FROM cd me LIMIT 1) FROM artist me )",
+ ],
+ },
-{
- my $rs = $art_rs->search(
- {},
- {
+ {
+ rs => $art_rs,
+ attrs => {
'+select' => [
$cdrs->search({}, { rows => 1 })->get_column('id')->as_query,
],
},
- );
-
- my $arr = $rs->as_query;
- my ($query, @bind) = @{$$arr};
- is_same_sql_bind(
- $query, \@bind,
- "SELECT me.artistid, me.name, me.rank, me.charfield, (SELECT id FROM cd me LIMIT 1) FROM artist me",
- [],
- );
-}
+ sqlbind => \[
+ "( SELECT me.artistid, me.name, me.rank, me.charfield, (SELECT id FROM cd me LIMIT 1) FROM artist me )",
+ ],
+ },
-# simple from
-{
- my $rs = $cdrs->search(
- {},
- {
+ {
+ rs => $cdrs,
+ attrs => {
alias => 'cd2',
from => [
{ cd2 => $cdrs->search({ id => { '>' => 20 } })->as_query },
],
},
- );
-
- my $arr = $rs->as_query;
- my ($query, @bind) = @{$$arr};
- is_same_sql_bind(
- $query, \@bind,
- "SELECT cd2.cdid, cd2.artist, cd2.title, cd2.year, cd2.genreid, cd2.single_track FROM (SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track FROM cd me WHERE id > 20) cd2",
- [],
- );
-}
+ sqlbind => \[
+ "( SELECT cd2.cdid, cd2.artist, cd2.title, cd2.year, cd2.genreid, cd2.single_track FROM (SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track FROM cd me WHERE id > ?) cd2 )",
+ [ 'id', 20 ]
+ ],
+ },
-# nested from
-{
- my $art_rs2 = $schema->resultset('Artist')->search({},
{
- from => [ { 'me' => 'artist' },
- [ { 'cds' => $cdrs->search({},{ 'select' => [\'me.artist as cds_artist' ]})->as_query },
- { 'me.artistid' => 'cds_artist' } ] ]
- });
-
- my $arr = $art_rs2->as_query;
- my ($query, @bind) = @{$$arr};
- is_same_sql_bind(
- $query, \@bind,
- "SELECT me.artistid, me.name, me.rank, me.charfield FROM artist me JOIN (SELECT me.artist as cds_artist FROM cd me) cds ON me.artistid = cds_artist", []
- );
-
-
-}
+ rs => $art_rs,
+ attrs => {
+ from => [ { 'me' => 'artist' },
+ [ { 'cds' => $cdrs->search({},{ 'select' => [\'me.artist as cds_artist' ]})->as_query },
+ { 'me.artistid' => 'cds_artist' } ] ]
+ },
+ sqlbind => \[
+ "( SELECT me.artistid, me.name, me.rank, me.charfield FROM artist me JOIN (SELECT me.artist as cds_artist FROM cd me) cds ON me.artistid = cds_artist )"
+ ],
+ },
-# nested subquery in from
-{
- my $rs = $cdrs->search(
- {},
- {
+ {
+ rs => $cdrs,
+ attrs => {
alias => 'cd2',
from => [
{ cd2 => $cdrs->search(
}, )->as_query },
],
},
- );
-
- my $arr = $rs->as_query;
- my ($query, @bind) = @{$$arr};
- is_same_sql_bind(
- $query, \@bind,
- "SELECT cd2.cdid, cd2.artist, cd2.title, cd2.year, cd2.genreid, cd2.single_track
- FROM
- (SELECT cd3.cdid,cd3.artist,cd3.title,cd3.year,cd3.genreid,cd3.single_track
- FROM
- (SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track
- FROM cd me WHERE id < 40) cd3
- WHERE id > 20) cd2",
- [],
- );
+ sqlbind => \[
+ "( SELECT cd2.cdid, cd2.artist, cd2.title, cd2.year, cd2.genreid, cd2.single_track
+ FROM
+ (SELECT cd3.cdid,cd3.artist,cd3.title,cd3.year,cd3.genreid,cd3.single_track
+ FROM
+ (SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track
+ FROM cd me WHERE id < ?) cd3
+ WHERE id > ?) cd2
+ )",
+ [ 'id', 40 ],
+ [ 'id', 20 ]
+ ],
+ },
-}
+ {
+ rs => $cdrs,
+ search => {
+ year => {
+ '=' => $cdrs->search(
+ { artistid => { '=' => \'me.artistid' } },
+ { alias => 'inner' }
+ )->get_column('year')->max_rs->as_query,
+ },
+ },
+ sqlbind => \[
+ "( SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track FROM cd me WHERE year = (SELECT MAX(inner.year) FROM cd inner WHERE artistid = me.artistid) )",
+ ],
+ },
-{
- my $rs = $cdrs->search({
- year => {
- '=' => $cdrs->search(
- { artistid => { '=' => \'me.artistid' } },
- { alias => 'inner' }
- )->get_column('year')->max_rs->as_query,
+ {
+ rs => $cdrs,
+ attrs => {
+ alias => 'cd2',
+ from => [
+ { cd2 => $cdrs->search({ title => 'Thriller' })->as_query },
+ ],
},
- });
- my $arr = $rs->as_query;
- my ($query, @bind) = @{$$arr};
- is_same_sql_bind(
- $query, \@bind,
- "SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track FROM cd me WHERE year = (SELECT MAX(inner.year) FROM cd inner WHERE artistid = me.artistid)",
- [],
- );
+ sqlbind => \[
+ "(SELECT cd2.cdid, cd2.artist, cd2.title, cd2.year, cd2.genreid, cd2.single_track FROM (SELECT me.cdid,me.artist,me.title,me.year,me.genreid,me.single_track FROM cd me WHERE title = ?) cd2)",
+ [ 'title',
+ 'Thriller'
+ ]
+ ],
+ },
+);
+
+
+plan tests => @tests * 2;
+
+for my $i (0 .. $#tests) {
+ my $t = $tests[$i];
+ for my $p (1, 2) { # repeat everything twice, make sure we do not clobber search arguments
+ is_same_sql_bind (
+ $t->{rs}->search ($t->{search}, $t->{attrs})->as_query,
+ $t->{sqlbind},
+ sprintf 'Testcase %d, pass %d', $i+1, $p,
+ );
+ }
}
-
-__END__
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use lib qw(t/lib);
+use DBICTest;
+
+my $schema = DBICTest->init_schema();
+
+plan tests => 4;
+
+my $artist = $schema->resultset ('Artist')->first;
+ok (!$artist->get_dirty_columns, 'Artist is clean' );
+
+$artist->rank (13);
+ok (!$artist->get_dirty_columns, 'Artist is clean after num value update' );
+$artist->discard_changes;
+
+$artist->rank ('13.00');
+ok (!$artist->get_dirty_columns, 'Artist is clean after string value update' );
+$artist->discard_changes;
+
+# override column info
+$artist->result_source->column_info ('rank')->{is_numeric} = 0;
+$artist->rank ('13.00');
+ok ($artist->get_dirty_columns, 'Artist is updated after is_numeric override' );
+$artist->discard_changes;
-#!/usr/bin/perl
use strict;
use warnings;
use Test::More;
use lib qw(t/lib);
+use DBICTest; # do not remove even though it is not used
# This is a rather unusual test.
# It does not test any aspect of DBIx::Class, but instead tests the
"in the Troubleshooting POD documentation entitled\n",
"'Perl Performance Issues on Red Hat Systems'\n",
"As this is an extremely serious condition, the only way to skip\n",
- "over this test is to --force the installation, or to edit the test\n",
+ "over this test is to --force the installation, or to look in the test\n",
"file " . __FILE__ . "\n",
);
"Please read the section in the Troubleshooting POD documentation\n",
"entitled 'Perl Performance Issues on Red Hat Systems'\n",
"As this is an extremely serious condition, the only way to skip\n",
- "over this test is to --force the installation, or to edit the test\n",
+ "over this test is to --force the installation, or to look in the test\n",
"file " . __FILE__ . "\n",
);
}
--- /dev/null
+use strict;
+use warnings;
+
+use Test::More;
+use Test::Exception;
+use lib 't/lib';
+
+use File::Temp ();
+use DBICTest;
+use DBICTest::Schema;
+
+plan tests => 2;
+my $wait_for = 10; # how many seconds to wait
+
+for my $close (0,1) {
+
+ my $tmp = File::Temp->new(
+ UNLINK => 1,
+ TMPDIR => 1,
+ SUFFIX => '.sqlite',
+ EXLOCK => 0, # important for BSD and derivatives
+ );
+
+ my $tmp_fn = $tmp->filename;
+ close $tmp if $close;
+
+ local $SIG{ALRM} = sub { die sprintf (
+ "Timeout of %d seconds reached (tempfile still open: %s)",
+ $wait_for, $close ? 'No' : 'Yes'
+ )};
+
+ alarm $wait_for;
+
+ lives_ok (sub {
+ my $schema = DBICTest::Schema->connect ("DBI:SQLite:$tmp_fn");
+ DBICTest->deploy_schema ($schema);
+ #DBICTest->populate_schema ($schema);
+ });
+
+ alarm 0;
+}