- find why XSUB dumper kills schema in Catalyst (may be Pg only?)
2006-04-11 by castaway
- - using PK::Auto should set is_auto_increment for the PK columns, so that copy() "just works"
- docs of copy() should say that is_auto_increment is essential for auto-incrementing keys
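A minimal sketch of the item above (the class and column names are illustrative, not from the source): declaring `is_auto_increment` in the column info is what would let `copy()` leave the PK out of the INSERT so the database assigns a fresh value.

```perl
package My::Schema::Artist;
use base 'DBIx::Class';

__PACKAGE__->load_components(qw/PK::Auto Core/);
__PACKAGE__->table('artist');
__PACKAGE__->add_columns(
    id => {
        data_type         => 'integer',
        is_auto_increment => 1,   # lets copy() skip the PK on insert
    },
    name => { data_type => 'varchar', size => 255 },
);
__PACKAGE__->set_primary_key('id');
```

With that flag set, `$row->copy` can omit `id` and let the auto-increment sequence supply it.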
2006-03-25 by mst
- - Refactor ResultSet::new to be less hairy
- - we should move the setup of select, as, and from out of here
- - these should be local rs attrs, not main attrs, and extra joins
- provided on search should be merged
- find a way to un-wantarray search without breaking compat
- audit logging component
- delay relationship setup if done via ->load_classes
- double-sided relationships
- - incremental deploy
- make short form of class specifier in relationships work
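The short-form wish in the last item presumably means something like the following (hypothetical; currently the fully-qualified class name is required):

```perl
# desired: resolve 'Artist' relative to the schema's namespace
__PACKAGE__->belongs_to(artist => 'Artist', 'artist_id');

# instead of the current full form:
__PACKAGE__->belongs_to(artist => 'My::Schema::Artist', 'artist_id');
```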
2006-01-31 by bluefeet
2006-02-07 by castaway
- Extract DBIC::SQL::Abstract into a separate module for CPAN
- - Chop PK::Auto::Foo up to have PK::Auto refer to an appropriate
- DBIx::Storage::DBI::Foo, which will be loaded on connect from Driver info?
-(done -> 0.06001!)
- - Add deploy method to Schema, which will create DB tables from Schema, via
- SQLT
-(sorta done)
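From the caller's side, the deploy method described above reads roughly like this (connection details are placeholders; `deploy` hands the schema to SQL::Translator to produce and run the CREATE TABLE statements):

```perl
use My::Schema;

# connect as usual, then create the tables described by the schema classes
my $schema = My::Schema->connect('dbi:SQLite:app.db');
$schema->deploy;   # generates DDL via SQLT and executes it
```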
2006-03-18 by bluefeet
- Support table locking.
+++ /dev/null
-Schema versioning/deployment ideas from Jess (with input from theorbtwo and mst):
-1) Add a method to storage to:
- - take args of DB type, version, and optional file/pathname
- - create an SQL file, via SQLT, for the current schema
- - passing the previous version as well as the current one will create an
- sqlt-diff'ed upgrade file, such as $preversion->$currentversion-$dbtype.sql,
- which contains the ALTER statements.
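One way the storage method in 1) might name and produce the upgrade file (a sketch only; `schema_diff` comes from SQL::Translator::Diff, and the variable names simply mirror the note above):

```perl
use SQL::Translator::Diff;

# hypothetical helper inside storage: diff two schema versions for one DB type
my $filename  = "${preversion}-${currentversion}-${dbtype}.sql";
my $alter_sql = SQL::Translator::Diff::schema_diff(
    $old_sqlt_schema, $dbtype,    # schema as of $preversion
    $new_sqlt_schema, $dbtype,    # current schema
);

open my $fh, '>', $filename or die "can't write $filename: $!";
print $fh $alter_sql;
close $fh;
```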
-2) Make deploy/deploy_statements able to load from the appropriate file for the current DB, or generate the SQL on the fly? Compare against the current schema version.
-3) Add an on_connect_cb (callback) thingy to storage.
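The hook in 3) might look something like this from the user's side (an entirely hypothetical API; `on_connect_cb` does not exist yet, which is the point of the note):

```perl
# hypothetical: register a callback that storage fires after each connect
$schema->storage->on_connect_cb(sub {
    my ($storage) = @_;
    # e.g. the versioning component in 4) would hook its upgrade check in here
    warn "connected via " . ref($storage) . "\n";
});
```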
-4) create a component to deploy version/updates:
- - it hooks itself into on_connect_cb ?
- - when run it:
- - Attempts, or prompts for, a backup of the database (per-RDBMS commands for this could be stored in storage::dbi::<dbtype>?)
- - Checks the version of the current schema being used
- - Compares it to some schema table containing the installed version
- - If no such table exists, we can attempt to sqlt-diff the live DB structure against the schema
- - If a version does exist, we use an array of user-defined upgrade paths,
- eg: version = '3.x'; schema = '1.x'; upgrade paths = ('1.x->2.x', '2.x->3.x')
- - Find the appropriate upgrade-path file, parse into two chunks:
- a) the commands which do not contain "DROP"
- b) the ones that do
- - Calls user callbacks for "pre-upgrade"
- - Runs the first set of commands on the DB
- - Calls user callbacks for "post-alter"
- - Runs drop commands
- - Calls user callbacks for "post-drop"
- - The user will need to define (or ignore) the following callbacks:
- - "pre-upgrade", any code to be run before the upgrade, called with schema object, version-from, version-to, db-type .. bear in mind that here any new fields in the schema will not work, but can be used via scalarrefs.
- - "post-alter", this is the main callback, at this stage, all old and new fields will be available, to allow data migration.
- - "post-drop", this is the clean-up stage, now only new fields are available.
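The split-and-run flow in 4) could be sketched roughly as follows (pure illustration; splitting statements on ';' is naive, `%callbacks` is a hypothetical dispatch table, and the callback names come straight from the note above):

```perl
# naively split the upgrade file into statements, then partition on DROP
my @statements = grep { /\S/ } split /;\s*\n/, $upgrade_sql;
my @non_drop   = grep { !/^\s*DROP\b/i } @statements;
my @drops      = grep {  /^\s*DROP\b/i } @statements;

$callbacks{'pre-upgrade'}->($schema, $from_version, $to_version, $dbtype);
$schema->storage->dbh->do($_) for @non_drop;   # ALTER/CREATE etc.
$callbacks{'post-alter'}->($schema);           # data migration happens here
$schema->storage->dbh->do($_) for @drops;      # now drop the old fields
$callbacks{'post-drop'}->($schema);            # clean-up stage
```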
-