I'll be addressing this in detail over the course of three Oracle Magazine issues starting with the Jan/Feb issue. It is a *large* topic and needs a bit of space to address completely.
I'll try to put it in a nutshell here. The forthcoming articles will be more comprehensive and example-laden. For technical details right now:
http://www.oracle.com/pls/db112/search?remark=quick_search&word=edition+based+redefinition will cover it.
In short, Edition-Based Redefinition (EBR) is a new capability in 11g Release 2 that allows us to stage all of the changes to our application in the database - put in and compile new versions of stored procedures, triggers, views, grants and so on - without disturbing the existing code base, without contention, and without blocking anyone. Instead of taking a one-hour outage to install, compile and verify 100 new pieces of code out of the 1,000 units in an application, the application administrator can now install, compile and verify those 100 units in the production database without disturbing the existing application. When they are ready, they can cut over to the newly installed code in one command (sort of like being able to transactionally install 100 things all at once).
So, in its simplest form, EBR allows you to remove the downtime involved in patching your application - you can safely put a new copy of code into the database in your production or development environment without blocking anyone and without having to take an outage.
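Just to make the mechanics concrete - and bear in mind the names here (the edition VERSION2, the schema SCOTT, the procedure P) are made up purely for illustration - staging a new copy of a piece of code looks something like this:

    alter user scott enable editions;               -- one time: make the schema's objects editionable
    create edition version2 as child of ora$base;   -- a new edition to stage the changes in
    grant use on edition version2 to scott;

    alter session set edition = version2;           -- this session now 'sees' version2
    create or replace procedure p
    as
    begin
       null;   -- the new copy of P - invisible to sessions still running in ora$base
    end;
    /

Sessions running in the base edition keep executing the old copy of P; only sessions that switch to VERSION2 see the new one.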
Going further, you can even use its capabilities to perform complete application upgrades that involve physical schema changes - adding a column, redefining a table, whatever. Here you can use EBR to reduce the amount of downtime necessary for the upgrade, or, if you want to go all the way, you can use EBR to eliminate the downtime entirely.
This is where crossedition triggers come into play - as well as a thing called an editioning view.
Suppose you wanted to make a change to a column T.C1 - modify it from the datatype NUMBER to the datatype VARCHAR2(50) for whatever reason. We cannot just modify the column, because that would affect the existing application - it binds number types in queries like "select * from t where c1 = :x", and if we converted C1 to a varchar2 we'd lose the ability to use the index on C1, for example, as well as have other strange things happen. So, we cannot modify the column type while the current application is running.
But - we know we could add a column C1_NEW VARCHAR2(50) to the table and, with 11g Release 1's ability to 'fast add' a column, do it without affecting the current application. Then we would want to update C1_NEW to contain the string value of C1. But we would not want to just "update t set c1_new = to_char(c1)", for two reasons:
a) that would lock the entire table, every row. That would not be an online change
b) once the update finished and the current version of the application began working again, newly created rows in T would not have the necessary data in c1_new.
So, this is where editioning views and crossedition triggers come into play. These are the steps we would take:
a) every 'release' of our code would access an editioning view, NOT the base table. An editioning view is a view that acts just like a table - it is more like a synonym than a view: you can put triggers on it and do to it the things you would do to a table. The application thinks this IS the table. An editioning view can only select and rename columns from a single table - no where clause, no joins, just a simple "select ... from table".
b) the current version of our application would have an editioning view T that selects C1 and the other columns (the physical table itself is renamed - to T_REAL_TABLE, say - and the editioning view takes over the name T)
c) we would alter table T_real_table add c1_new varchar2(50) - the current version does not see this column at all and is unaffected by this change
d) we would create a new editioning view in "version 2" - it would not overwrite the view in "current version"; it would be a new copy of the view in the same schema, visible only in the new edition. This new editioning view would select C1_NEW AS C1 instead of selecting C1 itself. So, in the new release, we have C1 as a VARCHAR2(50).
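A minimal sketch of steps a) through d) - assuming the physical table is renamed to T_REAL_TABLE, that it has one other column called DATA, and that the new edition is called VERSION2 (all made-up names) - might look like:

    -- one time: the physical table gets a new name, the editioning view takes over the old name
    rename t to t_real_table;
    create or replace editioning view t as
      select c1, data from t_real_table;            -- 'current version' sees C1 as a NUMBER

    -- step c: fast-add the new column; the current application never sees it
    alter table t_real_table add ( c1_new varchar2(50) );

    -- step d: in the new edition, a new copy of the editioning view exposes C1_NEW as C1
    alter session set edition = version2;
    create or replace editioning view t as
      select c1_new as c1, data from t_real_table;  -- 'version 2' sees C1 as a VARCHAR2(50)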
e) we would create a crossedition trigger - a trigger that fires ONLY IN ONE OF THE EDITIONS. We would create this crossedition trigger as a "forwarding" trigger - it would fire only if the code executing the DML was in "current version"; its goal would be to 'forward' the changes from 'current version' to 'version 2'. In this case, the trigger body would look something like:
begin
:new.c1_new := to_char(:new.c1);
end;
It would fill in the data for version 2.
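The complete DDL for that forwarding trigger - using the made-up names from the sketches above, and created while the session is using the new edition - would be along these lines:

    create or replace trigger t_forward_xed
      before insert or update of c1 on t_real_table
      for each row
      forward crossedition
      disable            -- create it disabled, enable it once it compiles cleanly
    begin
      :new.c1_new := to_char(:new.c1);
    end;
    /

    alter trigger t_forward_xed enable;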
f) now, any new rows added in 'current version' would forward the new data to version 2, and any rows that had c1 updated would do the same - forward the change to version 2. All we need to do now is, in 'current version', execute "update t set c1=c1" - that would fire this trigger and set the c1_new column properly. BUT - we don't want to do that as one big statement - we need to do this in a way that will not lock the entire table... so...
g) we use the new package dbms_parallel_execute (if you want to see a quick example, grab my slides from Oracle OpenWorld 2009 - available on the files tab above). Using that package, we can execute that update in the background in 1,000 pieces - that is, we break the table into 1,000 little pieces (each about 1/10th of a percent of the table) and have the update work against just that little bit of the table. At most we would have 1/10th of a percent of the table locked at a time, committing after each piece. This update, done in 'current version', would fire the trigger to move the data from c1 to c1_new.
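A rough sketch of what that looks like with dbms_parallel_execute - the task name, chunk size and degree of parallelism below are arbitrary, and the update is issued while 'current version' is still the default edition, so the forwarding trigger fires:

    begin
      dbms_parallel_execute.create_task( task_name => 'upgrade t.c1' );

      -- carve the table up into many small rowid ranges
      dbms_parallel_execute.create_chunks_by_rowid(
          task_name   => 'upgrade t.c1',
          table_owner => user,
          table_name  => 'T_REAL_TABLE',
          by_row      => true,
          chunk_size  => 10000 );

      -- run the 'null' update chunk by chunk, committing after each chunk
      dbms_parallel_execute.run_task(
          task_name      => 'upgrade t.c1',
          sql_stmt       => 'update t_real_table set c1 = c1 ' ||
                            'where rowid between :start_id and :end_id',
          language_flag  => dbms_sql.native,
          parallel_level => 4 );

      dbms_parallel_execute.drop_task( task_name => 'upgrade t.c1' );
    end;
    /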
Now, we can stage version 2 of our application (remember, 'current version' is still running, has been running, is not blocked, has not stopped so far). We put in place the new code, synonyms, etc as described above. The new data in c1_new is being maintained by the cross edition trigger - forwarding the change from 'current version' to version 2.
When all of the changes have taken place - all of the code is installed, verified, compiled, validated - we can 'cut over'.
We have significantly reduced the down time for this upgrade to just a short "cutover".
Once we cut over, we drop the forwarding trigger - 'current version' is dead, we don't need it anymore. We can then drop c1 from the table as well if we want (or not - up to you).
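The 'cutover' and cleanup, using the made-up names from above, amounts to something like:

    -- make 'version 2' the edition every new session gets by default
    alter database default edition = version2;

    -- once nothing is running 'current version' anymore, clean up
    drop trigger t_forward_xed;
    alter table t_real_table set unused ( c1 );    -- or drop the column outright, if you prefer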
We could go a step further and do this upgrade entirely online - we would run 'current version' AND 'version 2' at the same time, removing the downtime for the upgrade altogether. In order to accomplish this, we'd need another trigger - a reverse crossedition trigger - one that fires in 'version 2' and maintains the data in 'current version'. This may or may not be possible. If you look at this example, we are converting a number type to a varchar2 type - that always works, a number can always become a string. The reverse is not necessarily true. If 'version 2' will immediately start adding values to C1 that cannot be converted into numbers, we cannot run both of these editions (versions) at the same time - they are incompatible. But let's assume that 'version 2' will continue, for some period of time, to generate values that are numbers and can be converted.
Then, all we would do is add the column, install the forwarding trigger, update the table piece by piece, add the code for 'version 2' (this is all online, remember) and then install a reverse crossedition trigger for 'version 2' - its body would be something like:
begin
:new.c1 := to_number(:new.c1_new);
end;
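Again using the made-up names from above, the full reverse crossedition trigger - created in the new edition, firing for DML issued by 'version 2' - would look something like:

    create or replace trigger t_reverse_xed
      before insert or update of c1_new on t_real_table
      for each row
      reverse crossedition
      disable
    begin
      :new.c1 := to_number(:new.c1_new);
    end;
    /

    alter trigger t_reverse_xed enable;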
Now we can release 'version 2' - allow new sessions to use 'version 2', but permit existing connections to continue to use 'current version'. Both versions will be running. Over time, as connections in connection pools close and reopen themselves, 'current version' will cease being used.
We can then drop the crossedition triggers and clean up.
Again, as I wrote above, I'll be covering this in more detail in the magazine over the course of three issues - about 12-15 pages of stuff... But you can read the documentation and get the gist of the concept of EBR here.