A reader, July 07, 2002 - 9:09 am UTC
Hi tom,
I came across a new term, "Workspace Manager", in the Oracle database.
Could you just explain in a few words what it is, and what we can do with this new feature?
Thanks
July 07, 2002 - 9:47 am UTC
http://206.204.21.139/otndoc/oracle9i/901_doc/appdev.901/a88806/long_int.htm#1002069
<quote>
Workspace management refers to the ability of the database to hold different versions of the same record (that is, row) in one or more workspaces. Users of the database can then change these versions independently. There are two fundamental benefits of versioning in a database system:
* Versioning improves concurrent access of data in the database. In a database without the versioning ability, users wanting to change the same record are serialized by means of locks. Relaxing the locking strictness to improve concurrency leads to undesirable side effects, such as cascading rollbacks.
* Multiple what-if analyses can be run against the data simultaneously. Each analysis works on a separate version of the data. After the analyses are complete, the results can be stored in the database for quick lookup.
The unit of versioning in the product is a database table. A table in the database can be version-enabled, which means that all rows in the table can now support multiple versions of data. The versioning infrastructure is not visible to the users of the database. After a table has been version-enabled, users automatically see the correct version of the record in which they are interested.
</quote>
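As a rough sketch of the mechanics (not part of the quoted doc; the table and workspace names here are made up for illustration), version-enabling a table and working in a private workspace looks roughly like this:

create table dept_demo ( deptno number primary key, dname varchar2(30) );

exec dbms_wm.enableversioning( 'DEPT_DEMO' );      -- DEPT_DEMO becomes a view over DEPT_DEMO_LT
exec dbms_wm.createworkspace( 'WHAT_IF' );
exec dbms_wm.gotoworkspace( 'WHAT_IF' );
update dept_demo set dname = 'RESEARCH_NEW' where deptno = 20;   -- only visible in WHAT_IF
commit;
exec dbms_wm.gotoworkspace( 'LIVE' );               -- back to the live data, unchanged
exec dbms_wm.mergeworkspace( 'WHAT_IF' );           -- publish the change (or rollbackworkspace to discard it)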
Excellent
Sanjay, November 06, 2002 - 3:32 pm UTC
Wonder when my company will start using 9iR2 - we are still on 8i.
Thanks Tom
Performance Impact & Audit for Specific Columns
Tony, January 23, 2003 - 12:31 am UTC
1.Is there any performance impact in using workspace management?
2.Is it possible to audit specific columns of a table?
January 23, 2003 - 7:53 am UTC
1) of course there is. benchmark it. nothing but nothing is "free"
2) all or nothing, it does rows
Limitation?
Doug C, February 04, 2003 - 5:27 pm UTC
I just got this error - no hits on Metalink - self-explanatory:
ORA-20171: WM error: version enabled tables cannot have non-primary key unique index
Is this a hard limitation? Do you know of a workaround?
February 04, 2003 - 7:46 pm UTC
looks like a hard limitation - still there in 9ir2
Best way to 'implement' this functionality without moving to 9i
A Reader, February 05, 2003 - 6:02 am UTC
Tom,
Moving to 9i is being discussed here, but until that is definite I have a requirement to keep histories of tables. Unfortunately the tables belong to a third-party application, so I would like at all costs to avoid adding triggers to that application.
I had envisaged some generic package (so we don't have to customise it for every table, maintain it when tables change, etc.), so my current thinking is that it might have to be dynamic SQL.
I was playing around with SET operations to see if this might do what I want. For example: if I have a table DEPT and a DEPT_YESTERDAY, and I do a SELECT * FROM DEPT MINUS SELECT * FROM DEPT_YESTERDAY, then do them the other way around, I get the before and after records. I could then process these using dynamic SQL (?) to end up with a result like:
DEPT_CHANGES (table)
CHANGE_DATE CHANGED_BY OLD_VALUE NEW_VALUE
I'd need to have a strategy for handling changes to the 3rd party app - as my _YESTERDAY table could potentially be different from my "today" table then - and that strategy would need to involve minimum maintenance or down-time.
Any thoughts ? Is this heading in the right direction ?
Regards,
Paul
February 05, 2003 - 8:58 am UTC
I don't see how you can keep a history without a trigger.
the minus could help to get a delta -- but if someone updated a record 50 times in a day and deleted it, you would miss the 50 updates?
You would miss the "who" as well...
If you just need the delta, don't care about the who -- you could consider creating snapshot logs on the tables (materialized view logs). This'll give you a nice table of primary keys -- dml operation (IUD) and time of modification. You could use that to "speed up" the delta capture (and then purge the logs)
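A minimal sketch of that idea, assuming a DEPT table with a primary key (the MLOG$_ column names shown are the standard ones Oracle creates, but verify them in your release before relying on them):

create materialized view log on dept with primary key;

-- Oracle maintains MLOG$_DEPT automatically; it records the primary key and
-- the DML type (I = insert, U = update, D = delete)
select deptno, dmltype$$ from mlog$_dept;

-- once the delta has been processed, purge the log, e.g.:
exec dbms_mview.purge_log( 'DEPT', 1, 'delete' );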
More info for "Best way to 'implement' this functionality without moving to 9i " review
A Reader, February 05, 2003 - 8:20 am UTC
Of course, I'd need to have the column name that changed in that example table as well, for it to be meaningful!
RE: Implementing without 9i
A Reader, February 05, 2003 - 10:11 am UTC
Tom,
Exactly, the history won't be a complete history - so if "Employee Salary" gets modified from 30k to 40k and back to 30k, we won't know... but that's ok - we're interested in deltas between one day and another, at the end of that day.
The snapshot log etc. - that sounds like it would be as intrusive as adding triggers, though, unless I'm mistaken as to how that would work? I don't want to switch on logging, add any objects, etc. in the source schema.
Given this - and the need for it to be i) minimum coding/maintenance (including when the source app changes), ii) to work for many tables without lots of coding, and iii) to get the data into the format above... (ok, so I've really narrowed my options by this stage) how would you do it?
Thanks in advance,
Paul
February 05, 2003 - 12:07 pm UTC
then you will end up diffing the entire database every night which sounds onerous to me.
but only you know if you have a window of time large enough to do it.
I wouldn't do it -- I mean I would SERIOUSLY question why I was doing it. I would ask "well, why doesn't the application do it". I would push back so hard -- they wouldn't want to make me do it anymore.
but if forced to do it -- I would make a schema "s1". I would just:
create table t1 as select sysdate dt, t1.* from app.t1 where 1=0;
for each table.
And then
insert into t1 select ... minus union all select ... minus ....
to get the diffs (might give serious consideration to partitioning by date -- a day per partition or something like that). Then, when "upgrade" comes along, or just every 6 months -- create schema S2 and do it all over again.
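To make that concrete - a hypothetical sketch for a single table, assuming the S1 schema holds the history and DEPT_YESTERDAY is the copy taken at the previous run (all names here are illustrative, and the parentheses matter, since MINUS and UNION ALL otherwise just evaluate left to right):

create table s1.dept_hist as
  select sysdate dt, d.* from app.dept d where 1=0;

-- nightly: capture rows that changed in either direction since the last snapshot
insert into s1.dept_hist
  select sysdate, x.*
    from ( ( select * from app.dept
             minus
             select * from s1.dept_yesterday )
           union all
           ( select * from s1.dept_yesterday
             minus
             select * from app.dept ) ) x;

-- then re-snapshot "yesterday" ready for the next compare
truncate table s1.dept_yesterday;
insert into s1.dept_yesterday select * from app.dept;

Note that the captured rows don't carry an old/new flag; you'd tag each MINUS leg if you needed that.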
So how to audit if you want 50 updates and a delete
Doug C, February 05, 2003 - 11:22 pm UTC
So Tom - if I need to keep a history of many tables - and workspace manager isn't going to work because I have a lot of non-primary-key unique indexes (I have filed an enhancement request - can I push that at all, or is it just a matter of how many people want it?)... and I can't diff the database, because if a record is updated 50 times and deleted then I need all 50 updates and the delete - am I back to good old fashioned triggers and audit tables, with a little 'I','U','D' flag?
February 06, 2003 - 7:46 am UTC
enhancement requests -- function of business case, number of requests and user voting.
yup on the trigger.
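For the record, that "good old fashioned" shape looks roughly like the following (a sketch only - a cut-down EMP_DEMO table rather than your real tables, and it audits the whole row rather than just the changed columns):

create table emp_demo ( empno number primary key, ename varchar2(10), sal number );

create table emp_demo_aud as
  select 'I' dml_flag, sysdate aud_date, user aud_user, e.*
    from emp_demo e where 1=0;

create or replace trigger emp_demo_aud_trg
after insert or update or delete on emp_demo
for each row
declare
  l_flag char(1);
begin
  if inserting then
     l_flag := 'I';
  elsif updating then
     l_flag := 'U';
  else
     l_flag := 'D';
  end if;

  if deleting then
     insert into emp_demo_aud values ( l_flag, sysdate, user, :old.empno, :old.ename, :old.sal );
  else
     insert into emp_demo_aud values ( l_flag, sysdate, user, :new.empno, :new.ename, :new.sal );
  end if;
end;
/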
Feature is useless
Tony, February 06, 2003 - 2:21 am UTC
I thought workspace manager was wonderful stuff to use for auditing. After learning the limitations - such as that version-enabled tables cannot have a non-primary-key unique index, and that specific columns cannot be audited - I realize that the feature is not of much use. One limitation can stop the use of a good feature.
Oracle corp didn't think before adding these features?
RE: Implementing without 9i February 05, 2003
A Reader, February 06, 2003 - 4:07 am UTC
Tom,
Many thanks as always for your advice on this.
Believe me, if there was a way not to do this I would, but as I'm sure you're aware things aren't always so clear-cut in the real world. 'Pushing back' works fine if what you are pushing for can be achieved at reasonable cost and there are no political issues that can't be overcome - say, for example, "we're not giving any non-essential work to the third-party provider": if I can do this another way and it doesn't cost the earth or cause major problems, the third-party app change becomes 'non-essential' and therefore won't get approved. (Hope that makes sense.)
So... there is a third-party app, and I don't *want* to add any objects to its schema (no-one would actually stop me) - I'm sure I don't need to list the reasons for not wanting to do that. Now... I *need* to refresh a data warehouse daily from this 3rd party app anyway, so I have 'yesterday's' table. 'Today's' table will have a 'last modified' column, so I don't need to diff the entire database... and this is a few specific tables... 26000 rows today - maybe doubling every year, but probably not that aggressive a growth.
So I guess what I'm saying is - I'm not doing this thinking its the best solution, I'm taking the political, technical, economic factors into account and saying 'this is the best job I can do'.
Given that, and a table full of 'diffs' as we've discussed using MINUSes... I guess the bit I'm 'stuck' on is how to turn this into a single change record.
i.e. FROM:
Mod_Date Mod_By pk_field number colour
17-FEB-02 JD UID1 3 Green
06-JAN-03 PS UID1 3 Blue
(where pk_field is the primary key say - so we see that PS changed Field 3 from Green to Blue today)
TO:
Mod_Date Mod_By Mod_Column Old Value New Value
06-JAN-03 PS colour Green Blue
Dynamic SQL? I was thinking that if I join the 'minus' queries to each other on the primary key I would have one row with the old and new records on it, and if I aliased the tables 'old' and 'new' there might be some way I could go through the columns comparing values on that single record and 'spit out' the changes I find... not sure where to start with dynamic SQL on this though.
In terms of schema changes, I had some more thoughts... in my experience 3rd party apps rarely drop columns. If they drop tables, I should know about it beforehand and can 'remove' that table from the compare (leaving the old histories intact); if they add a table, no harm done - I can start capturing it as soon as I wish; and if they add a column - same. So schema changes *shouldn't* (in theory) be as big a deal as I thought, as long as I have some warning.
Thanks Tom,
"A Reader"
February 06, 2003 - 8:40 am UTC
Ahh -- data warehouse -- now that is a horse of a different color (or colour, as you would spell it ;)
I for some reason thought this was about AUDITING. For auditing, I wouldn't like this; for capturing changes to feed a DW, that's something more interesting...
well, the primary keys should always come in pairs in this mod table so
ops$tkyte@ORA920> select pk_field,
2 max( decode( rn, 2, mod_date, null ) ) mod_date,
3 max( decode( rn, 2, mod_by, null ) ) mod_by,
4 max( decode( fnum, 3, 'color', 4, 'something else', 5, 'etc...' ) ) mod_column,
5 max( decode( rn, 1, val, null )) old_val,
6 max( decode( rn, 2, val, null )) new_val
7 from (
8 select t.*,
9 row_number() over (partition by pk_field order by mod_date) rn
10 from t
11 )
12 group by pk_field
13 /
PK_FI MOD_DATE MO MOD_COLUMN OLD_VAL NEW_VAL
----- --------- -- -------------- ---------- ----------
UID1 06-JAN-03 PS color Green Blue
UID2 16-JAN-03 JD color Red Yellow
ops$tkyte@ORA920>
ops$tkyte@ORA920>
Excellent
A Reader, February 06, 2003 - 9:20 am UTC
Hi,
Yes its about Data Warehousing, but kind of Auditing too as people can't see 'what has changed' in the third party app, so I offered them that feature in the warehouse (as we control that).
Excellent answer... what if I don't have Enterprise Edition (we're getting it!) - how can I do that without the analytic functions? From what I can see, the analytic function will alternate the row number as 1, 2, 1, 2 so we can decode the rows into OLD and NEW?
Also - where does fnum come from ?
We also may have hundreds of columns on the third-party app tables that we need to track, so that max/decode approach could get very messy. What I'm trying now is a table with the 'before-and-after picture' side by side:
emp_num last_modified last_modified_by old.fielda new.fielda old.fieldb new.fieldb
and I was thinking I could go through each of these change rows (in dynamic sql), and once I hit 'old.something', get the next value 'new.something', and create a change record from that (storing the emp_num, last_modified etc. from the start of the row). Nasty but achieves the desired effect and I can check for performance.
I'm doing this at the moment for one table with 1 month's changes (it will usually be each night) and it is only taking a few seconds to create the 'before-and-after-side-by-side' picture (and that can be improved on). So I'm not too worried about that performance. I know I can achieve this in dynamic SQL... but let me know if you think it will fall over and die! (I'll keep moving towards that in the meantime.)
February 06, 2003 - 10:09 am UTC
If you KNOW that there are pairs (each PK_FIELD is in there two times), rownum after an order by will work
ops$tkyte@ORA920> select t.*,
2 mod(rownum+1,2)+1 rn
3 from (select * from t order by pk_field, mod_date) t
4 /
MOD_DATE MO PK_F FNUM VAL RN
--------- -- ---- ---------- ---------- ----------
17-FEB-02 JD UID1 3 Green 1
06-JAN-03 PS UID1 3 Blue 2
17-FEB-02 PS UID2 3 Red 1
16-JAN-03 JD UID2 3 Yellow 2
ops$tkyte@ORA920> select pk_field,
2 max( decode( rn, 2, mod_date, null ) ) mod_date,
3 max( decode( rn, 2, mod_by, null ) ) mod_by,
4 max( decode( fnum, 3, 'color', 4, 'something else', 5, 'etc...' ) ) mod_column,
5 max( decode( rn, 1, val, null )) old_val,
6 max( decode( rn, 2, val, null )) new_val
7 from (
8 select t.*,
9 mod(rownum+1,2)+1 rn
10 from (select * from t order by pk_field, mod_date) t
11 )
12 group by pk_field
13 /
PK_F MOD_DATE MO MOD_COLUMN OLD_VAL NEW_VAL
---- --------- -- -------------- ---------- ----------
UID1 06-JAN-03 PS color Green Blue
UID2 16-JAN-03 JD color Red Yellow
fnum was your "number" field from your table...
Re: Pairs
A Reader, February 06, 2003 - 10:13 am UTC
Tom,
I know that there will be pairs, but I think the number was misleading in my example. That was just a field. So as per my review above, there could be hundreds of fields returned by the set operation so I'll have
old.field1 new.field1 old.field2 new.field2
I don't think this will work in that case and I'm back to my dynamic sql question above unless you can think of a better way ?
February 06, 2003 - 10:22 am UTC
it'll work -- you would just need dynamic sql to apply.
why won't it work?
Why it wouldn't work
A Reader, February 06, 2003 - 10:53 am UTC
The example appears to assume that when fnum = 3 the field's value is colour, but in my original example that column was just a number field that happened to be 3 throughout (bad example, maybe)... When there are 200 columns on that table (pretend fnum doesn't exist - ignore it), how will the decode work to show which column changed and what its value was?
as above.... the table I will have is
pk_field (modified date, modified by etc.) old.column1 new.column1 old.column2 new.column2 old.column3 new.column3 etc.
I could have:
pk_field old.column1 old.column2 old.column3
pk_field new.column1 new.column2 new.column3
But I thought it would be easier to have a single row with the before and after picture on it. I think we're probably both working from a different source result now - sorry for any confusion!
I'll continue to play and post my results to see if its the best way of doing it (unless its clear from this what I'm suggesting!?)
Regards,
Paul
February 06, 2003 - 11:17 am UTC
Mod_Date Mod_By pk_field number colour
17-FEB-02 JD UID1 3 Green
06-JAN-03 PS UID1 3 Blue
^^^^^^^^
apparently I didn't need the decode, my "val" field is your "colour" field.
ops$tkyte@ORA920> select pk_field,
2 max( decode( rn, 2, mod_date, null ) ) mod_date,
3 max( decode( rn, 2, mod_by, null ) ) mod_by,
5 max( decode( rn, 1, colour, null )) old_colour,
6 max( decode( rn, 2, colour, null )) new_colour,
max( decode( rn, 1, column2, null )) old_column2,
max( decode( rn, 2, column2, null )) new_column2,
.........
7 from (
8 select t.*,
9 mod(rownum+1,2)+1 rn
10 from (select * from t order by pk_field, mod_date) t
11 )
12 group by pk_field
13 /
A question
Riaz Shahid, April 15, 2003 - 12:46 pm UTC
Hello Tom !
From "Oracle Workspace Manager - Release 9.2.0.2.0 README"
This Oracle Workspace Manager 9.2.0.2.0 patch set replaces your existing Oracle Workspace Manager installation. It should be used with the following supported Oracle Database Releases: 9.2.0.2, 9.0.1.4 and 8.1.7.4.
The minimum supported version of the Oracle database is 8.1.7.
And
cr@STARR.LHR> select * from V$version
2 ;
BANNER
============================================================
Oracle8i Enterprise Edition Release 8.1.7.2.1 - Production
PL/SQL Release 8.1.7.2.0 - Production
CORE 8.1.7.2.1 Production
TNS for 32-bit Windows: Version 8.1.7.2.0 - Production
NLSRTL Version 3.4.1.0.0 - Production
Does that mean that I can't use Workspace Manager?
April 16, 2003 - 9:13 am UTC
correct -- you are running unsupported software. 8174 is the only supported 817 release right now.
Workspace Management Clarification.
Matt, June 16, 2003 - 1:58 am UTC
I need to audit changes to table in a system. So, when any data is changed I can see who made the change and identify the data that changed. This is purely for audit purposes, although in some cases it might be necessary to display the earlier versions of data to the user alongside the latest data.
In the doco, I took it that the old versions are stored in the same table as the original data. Is this the case? Some examples I have seen appear to store the original data in emp and the earlier versions in emp_hist.
Is there any (software) limit on the number of versions of data that can be stored in this way? I am expecting to be able to set up a workspace for the duration of my system (10+ years).
Is it possible to write reports on the historic data, so that you can see the contents of a report at some point in time?
Thanks in advance.
June 16, 2003 - 7:57 am UTC
EMP and EMP_HIST are views of a table.
there is a "goto" function in the DBMS_WM package to let you "goto" a point in time and queries against the data will be "as of" that time.
You need to read:
http://docs.oracle.com/docs/cd/B10501_01/nav/docindex.htm#index-APP
Application Developer's Guide - Workspace Manager
from cover to cover, and you must benchmark this feature to ensure it meets your needs before designing a system around it! (as is true of any feature/function)
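The "goto" routine mentioned above is DBMS_WM.GotoDate; a rough sketch of how it reads, assuming EMP was version-enabled with a history option so old versions are kept (untested here - exactly the sort of thing the benchmarking advice is about):

exec dbms_wm.enableversioning( 'EMP', 'VIEW_WO_OVERWRITE' );

-- ... normal inserts/updates/deletes against EMP over time ...

-- position the session "as of" a point in the past
exec dbms_wm.gotodate( to_date( '01-jan-2003', 'dd-mon-yyyy' ) );
select * from emp;      -- rows as they existed on 01-JAN-2003

-- return to the current version of the data
exec dbms_wm.gotodate( sysdate );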
Additional to previous post
Matt, June 16, 2003 - 2:03 am UTC
I would also like to store a free text field to store a comment against the data change. Can this be done using workspace management or do I need to re-consider my approach?
Also, how does workspace management impact FK constraint definitions? Does the FK enforce integrity between the correct rows of versioned data in two tables, for instance?
Thanks again
June 16, 2003 - 8:02 am UTC
lobs are supported if that is what you mean by "free text field"
foreign keys are properly handled, yes.
Workspace Manager Pain Reliever Needed...
Robert, October 03, 2003 - 5:41 pm UTC
9.2.0.1.0 on Linux
Tom, which part of Oracle is right here ?
Thanks
SQL> drop table BNFCY_CVG_LT ;
drop table BNFCY_CVG_LT
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20061: versioned objects have to be version disabled before being dropped
ORA-06512: at "SYS.WM_ERROR", line 288
ORA-06512: at "SYS.WM_ERROR", line 297
ORA-06512: at "SYS.NO_VM_DROP_PROC", line 60
ORA-06512: at line 3
SQL> exec dbms_wm.disableversioning('BNFCY_CVG_LT')
BEGIN dbms_wm.disableversioning('BNFCY_CVG_LT'); END;
*
ERROR at line 1:
ORA-20132: table 'BNFCY_CVG_LT' is not version enabled
ORA-06512: at "SYS.WM_ERROR", line 288
ORA-06512: at "SYS.WM_ERROR", line 305
ORA-06512: at "SYS.LT", line 695
ORA-06512: at "SYS.LT", line 8114
ORA-06512: at line 1
October 04, 2003 - 9:47 am UTC
please contact support.
apparently they are both right.
workspace manager - please help!
Charlie, October 23, 2003 - 5:42 pm UTC
Hi Tom,
Do you have a workaround for the following issue?
ORA-20109: a table with unique constraints cannot be version enabled
It seems that workspace manager cannot work with a table with unique constraints, but we have lots of tables with unique constraints. Does that mean we cannot use workspace manager at all?
thanks,
October 23, 2003 - 7:35 pm UTC
10g can, 9i cannot.
workspace manager - please help!
Charlie, October 23, 2003 - 9:53 pm UTC
Oh, no. How frustrating!
WM for auditing in 9i
Oleg Oleander, November 07, 2003 - 6:07 am UTC
Robert: try
exec dbms_wm.disableversioning('BNFCY_CVG')
instead of
exec dbms_wm.disableversioning('BNFCY_CVG_LT'), since BNFCY_CVG is the name of your version-enabled table.
Tom,
It seems to me that you cannot have unique constraints because Oracle WM stores all versions of a row in the same table (tabname_LT). For the primary key of the original table, the VERSION and DELSTATUS columns are added. What if we add unique constraints manually in the same fashion AFTER the table has been version-enabled?
1: Do you think Tom, it's a possible solution for having unique constraints on version-enabled tables?
2: Is there a way using WM to separate historical data from current? (store separately)
Thanks
Oleg
November 07, 2003 - 9:35 am UTC
in 10g this restriction was removed (you can have unique constraints in addition to the primary key)
1) in 10g, yes, in 9i, no
2) no
FlashbackQuery instead of WM.gotodate
Oleg Oleander, November 11, 2003 - 2:22 am UTC
If I set the Undo_retention sufficiently high and ensure disk space for the undo segment, could I use flashback query to view the db as it existed, say, two month ago?
(suppose low DML activity)
November 11, 2003 - 7:15 am UTC
theoretically -- by SCN -- (not by time; we only keep the last 1440 five-minute SCN-to-timestamp mappings)
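For example (assuming the undo really does stretch back that far, and that you captured the SCN at the time you care about - the SCN shown here is obviously made up):

-- capture the SCN now...
select dbms_flashback.get_system_change_number from dual;

-- ...and much later, query the table as it was at that SCN
select * from emp as of scn 123456789;

-- "as of timestamp" also exists, but it relies on the limited
-- SCN-to-timestamp mapping described above, so it is only
-- dependable for the recent past
select * from emp as of timestamp ( systimestamp - interval '1' hour );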
Workspace Manager could be the future / limitation?
Doug, November 23, 2003 - 9:03 pm UTC
Hey Tom - I think the enhancement of workspace manager is probably the future of databases. Here's another limitation I found. When enabling versioning, it turns my table into a VIEW according to the data dictionary. Unfortunately, I now cannot implement a trigger on the view (which used to be a table).
Do you concur?
create table salary (
name varchar2(10),
salary number(10),
title varchar2(20),
dateofbirth date,
starttime date,
stoptime date);
SQL> alter table salary add constraint aprimarykey primary key
(name, dateofbirth);
SQL> begin
  2    DBMS_WM.EnableVersioning ('salary', 'VIEW_WO_OVERWRITE');
  3  end;
  4  /
1 create trigger x before update of salary
2 on salary
3 for each row
4 begin
5 null;
6* end;
SQL> /
create trigger x before update of salary
*
ERROR at line 1:
ORA-25001: cannot create this trigger type on views
How do I file an enhancement request?
November 24, 2003 - 7:50 am UTC
you put the trigger on the base TABLE.
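i.e., against the example above, something like the following (a sketch only - SALARY_LT is the table Workspace Manager creates behind the SALARY view; check the Workspace Manager guide for which trigger types it supports on the _LT table before relying on this):

create trigger x
before update on salary_lt    -- the base table behind the SALARY view
for each row
begin
  null;
end;
/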
Problem when trying to update PO to check Workspace Manager...
Sven Bleckwedel, March 17, 2004 - 8:03 am UTC
Hi Tom,
I noted some warnings before trying to use the "Workspace Manager" in one test environment:
>From "Oracle Workspace Manager - Release 9.2.0.2.0 README"
>
>This Oracle Workspace Manager 9.2.0.2.0 patch set replaces
>your existing Oracle Workspace Manager installation. It
>should be used with the following supported Oracle
>Database Releases: 9.2.0.2, 9.0.1.4 and 8.1.7.4.
>
>The minimum supported version of the Oracle database is 8.1.7.
I use a Personal Oracle 8i Release 3 (downloaded from OTN for testing purposes, like this one) in one specific test environment, as shown below:
SQL*Plus: Release 8.1.7.0.0 - Production on Qua Mar 17 09:52:16 2004
(c) Copyright 2000 Oracle Corporation. All rights reserved.
SQL> connect system
Enter password: *******
Connected.
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle8i Personal Edition Release 8.1.7.0.0 - Production
PL/SQL Release 8.1.7.0.0 - Production
CORE 8.1.7.0.0 Production
TNS for 32-bit Windows: Version 8.1.7.0.0 - Production
NLSRTL Version 3.4.1.0.0 - Production
I downloaded in the past (when I had access to Metalink) the patchset "p2376472_8174_WINNT.zip". The readme showed me that it was possible to use it with the Personal release of Oracle for Windows NT or 2000, but I couldn't find any specific reference to the Personal edition for Windows 98. Is there any possibility of applying this patchset in this test environment, or is there any workaround that would make it possible to update Personal Oracle to test Workspace Manager?
Tks in Adv,
Sven
March 17, 2004 - 8:34 am UTC
please contact support for this sort of information.
Audit/Replication
Rich, January 28, 2005 - 10:28 am UTC
Workspace Manager - Export
A reader, February 02, 2005 - 1:40 pm UTC
Hi Tom,
One of the limitations of using Workspace Manager is that only full database exports are supported. Is there a workaround for exporting table(s)?
Thanks,
Andy
Dead Link above
Alberto Dell'Era, February 28, 2005 - 8:31 am UTC
February 28, 2005 - 8:35 am UTC
Could I modify the table structure afterwards?
Tommy, December 07, 2005 - 12:38 pm UTC
Hi,
I created a table and enabled versioning for it. I am trying to test whether I can modify the structure by adding more columns. However, it doesn't seem to let me.
select * from tmp_1;
exec dbms_wm.beginDDL( 'TMP_1' );
alter table tmp_1_lt add value3 number;
exec dbms_wm.commitDDL( 'TMP_1' );
select * from tmp_1;
--Output>>>>
KEY1 VALUE1 VALUE2
---------- ---------- ----------
1 2
1 row selected.
PL/SQL procedure successfully completed.
Table altered.
PL/SQL procedure successfully completed.
KEY1 VALUE1 VALUE2
---------- ---------- ----------
1 2
1 row selected.
Second question: could the primary key be changed too?
Audit Problems
A reader, September 07, 2006 - 9:53 am UTC
I tried your steps to set up workspace management as in the first post. Got this error: ORA-03212: Temporary segment cannot be created in locally managed tablespace. I followed all your instructions. What am I doing wrong? Using Oracle 10g. Thanks.
September 07, 2006 - 11:58 am UTC
got an example (cut and paste) of what you ran? the example is rather "large" - no idea where it is failing for you or what you might have changed...
Auditing example
Reader, September 07, 2006 - 1:30 pm UTC
Great site Tom! But for my purposes your example is too simplistic. I have 5 tables that I need to monitor, but each table has anywhere from 10-20 foreign and primary keys. Do you still recommend using your approach for something that is much more complicated? The examples in the Oracle 10g guide do not have you creating the tables again under wsmgmt. They grant privileges on the original table and enable versioning on all parent and child tables at once. Is this a better approach? Do these approaches work if one is going to be entering data from a Java GUI rather than SQL*Plus? Thanks in advance.
September 08, 2006 - 3:59 pm UTC
why does "more" make it "more complex"??
wsmgmt was just a demo schema I created for the *example*. Use whatever schema you want.
It was *just a demo of a feature*
"A java GUI" uses the same exact language to access and modify data in Oracle as SQLPlus does SQL. So yes, "a java gui" (read that as: a client application) is just a client application is just a client application.
Audit/Replication
A reader, September 11, 2006 - 6:49 am UTC
Maybe I was not clear. My point was that the 10g guide recommends a different approach, not that there was anything sacred about wsmgmt. The 10g guide version-enables the tables in the same schema that owns the tables. On the surface that seems to be easier, but if you have triggers with more than one DML command then errors get generated and you can't do that. What I am trying to get at is whether there is a less tedious way of doing this. Your simple example implies that you can set this up immediately, voila, with little effort. But if you have a huge hierarchy of parent-child tables, lots of constraints, etc., then your simple little example becomes extremely long and involved and is not something that can be implemented in a short time. At least that is how it seems to be falling out on my end. All I wanted was clarification from you as to whether or not that was the case, as your simple case did not give me any heads-up on what I was actually getting into.
September 11, 2006 - 10:02 am UTC
10g recommends a different approach from what exactly? I'm not sure what you mean by that.
give me an example of long and convoluted please. I'm not understanding why 15 tables is more difficult here.
we have a similar requirement with few add on
Vijay Sehgal, October 03, 2006 - 8:45 am UTC
Dear Tom,
we have a requirement for an audit trail but with a few add-ons; below is the requirement we have and the way we are thinking of using workspace management. If you can, please advise us on this.
Our application will have two sets of users: 1. Makers and 2. Checkers.
Makers are operators who will create or modify the data and Checkers will be users who approve/reject the changes made by the Makers.
Below is how we plan to use workspace managment.
1. We are thinking of version-enabling all the tables on which this functionality will be required.
2. Whenever a Maker makes changes (add/modify/delete) we will create a workspace for the Maker, switch to that workspace and make the changes.
3. After the changes are made we will create a request in the Request table, the Checkers will be able to see the pending requests from this table.
4. As the Checker approves the request we will merge the workspace associated with the request to Live workspace, in case the Checker rejects the request we will remove the workspace.
Any thoughts on this approach, or a better approach to achieve this? Your help in this regard is much appreciated.
Thanks and Regards as always,
Vijay Sehgal
October 03, 2006 - 10:39 am UTC
That is one use of workspace management - yes, you will want to prototype this and evaluate the performance (to scale, not just a single user) and that you can live with the restrictions imposed by workspace manager (which are fewer and fewer as the versions go up)
Need a workaround when a ref cursor is opened on a table whose data was modified in a workspace
Vijay Sehgal, November 17, 2006 - 12:16 pm UTC
Dear Tom,
good day to you as always. With respect to the above changes we are facing a small issue on which I need your help. Below is the scenario.
If the Maker has made changes to a record, those changes reside in the workspace; in the LIVE workspace we have a request id in the table which points to the workspace holding the changes.
When a search is run we return data from the LIVE workspace, and when the user selects a record and clicks on View we go to the workspace if the record was modified there; otherwise we return data from the LIVE workspace.
Here we are facing the problem: if we open a ref cursor on a table in the workspace, and before exiting from the procedure we do gotoworkspace('LIVE'), the data that is returned is from the LIVE workspace and not from the workspace we created. Can you please suggest a workaround for this?
For now I am fetching the data, populating a SQL type table and opening the ref cursor on that; another workaround would be to tell the Java guys to call gotoworkspace('LIVE') after the call to the procedure that returns the result set by ref cursor. Still, I would like to know the best way to handle this.
If you can please help me on this.
Thanks for your help.
Kind Regards,
Vijay Sehgal.
still waiting....
A reader, November 25, 2006 - 2:46 am UTC
Dear Tom,
good day to you as always. Is there any workaround for the issue I posted above?
Thanks for your help and time.
Kind Regards,
Vijay Sehgal
still waiting
Vijay Sehgal, November 28, 2006 - 8:46 am UTC
Dear Tom,
good day to you as always. I am sorry for posting my request again, but if you can, please take out some time and shed some light on the query as to which approach would be better, or whether you have any other approach for this.
Thanks a lot for all your time and efforts.
Kind Regards,
Vijay Sehgal.
November 28, 2006 - 9:36 am UTC
got code?
I always and forever ask for
a) a test case that
b) is very very very small
c) yet 100% complete so that anyone could run it but that is
d) very very very small, as small as you can make it to reproduce the issue.
Scenario to reproduce the issue I am facing - hope you will be able to help me out on this
Vijay Sehgal, December 26, 2006 - 5:35 am UTC
Dear Tom,
below is the test scenario to reproduce the issue. I hope that after looking at it you will be able to give me a cleaner solution than the one I have now.
Create table testtable (col1 number primary key ,col2 varchar2(10));
insert into testtable values(1,'test1');
insert into testtable values(2,'test2');
commit;
exec dbms_wm.enableversioning('testtable');
exec dbms_wm.createworkspace('workspace1');
exec dbms_wm.gotoworkspace('workspace1');
update testtable set col2='updated2';
commit;
CREATE OR REPLACE package testwspkg
as
TYPE cur_get_record IS REF CURSOR;
Procedure return_result(pin in varchar2, pout out cur_get_record);
end;
/
CREATE OR REPLACE package body testwspkg
as
procedure return_result(pin in varchar2,pout out cur_get_record)
as
begin
if pin ='ws' then
dbms_wm.gotoworkspace('workspace1');
end if;
open pout for select * from testtable;
if dbms_wm.getworkspace() != 'LIVE' then
dbms_wm.gotoworkspace('LIVE');
end if;
end;
end testwspkg;
/
Now if I execute the procedure as below from sql*plus
var v refcursor;
var ws varchar2(10);
exec :ws := 'ws';
exec testwspkg.return_result(:ws,:v);
print :v;
this returns the result set from the 'LIVE' workspace and not from 'workspace1';
As a solution for now I am fetching the data, populating a SQL type table and opening the ref cursor on that; another workaround could be to call a second procedure, after the call to the procedure returning the result, that changes the workspace back to 'LIVE'.
Thanks a lot for all your help on this.
Kind Regards,
Vijay Sehgal.
December 26, 2006 - 8:17 am UTC
it would appear that for some reason the opening of the cursor did not fully "open" the cursor - the first fetch did. If you change the procedure to be:
procedure return_result(pin in varchar2,pout out cur_get_record)
as
l_rec testtable%rowtype;
begin
if pin ='ws' then
dbms_wm.gotoworkspace('workspace1');
end if;
open pout for select * from testtable;
fetch pout into l_rec;
dbms_output.put_line( l_rec.col1 || ', ' || l_rec.col2 );
if dbms_wm.getworkspace() != 'LIVE' then
dbms_wm.gotoworkspace('LIVE');
end if;
end;
then it "works". I would encourage you to submit this testcase to support - although you can make it much smaller still
ops$tkyte%ORA10GR2> Create table testtable (col1 primary key ,col2 )
2 as
3 select rownum, 'test'||rownum from all_users where rownum <= 2;
Table created.
ops$tkyte%ORA10GR2> exec dbms_wm.enableversioning('testtable');
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> exec dbms_wm.createworkspace('workspace1');
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> exec dbms_wm.gotoworkspace('workspace1');
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> update testtable set col2='updated2';
2 rows updated.
ops$tkyte%ORA10GR2> commit;
Commit complete.
ops$tkyte%ORA10GR2> exec open :v for select * from testtable;
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> print v
COL1 COL2
---------- --------------------------------------------
1 updated2
2 updated2
ops$tkyte%ORA10GR2> exec open :v for select * from testtable;
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> exec dbms_wm.gotoworkspace('LIVE');
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> print v
COL1 COL2
---------- --------------------------------------------
1 test1
2 test2
ops$tkyte%ORA10GR2> exec dbms_wm.DISABLEVERSIONING( 'TESTTABLE' , TRUE );
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> exec dbms_wm.removeworkspace('workspace1');
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> drop table testtable purge;
Table dropped.
Thanks a lot for your help on this, raising it with support
Vijay Sehgal, December 26, 2006 - 9:18 am UTC
raised a service request on this, but no response yet.
Vijay Sehgal, December 30, 2006 - 5:47 am UTC
Dear Tom,
good day to you as always, as per your suggestion I have raised a Service Request with the support, but I haven't received a response on this yet.
For now I am using a SQL type to get all the details from the table and opening the ref cursor on that. This is taking a lot of effort - creating a SQL type for each table and then populating it before returning the result from the workspace. Can you please advise an alternative for this, until we get a response from support on the issue we are facing?
Thanks a lot for your help and time on this.
Regards,
Vijay Sehgal
Response received from support on this
Vijay Sehgal, January 09, 2007 - 10:22 am UTC
Dear Tom,
good day to you, first of all wishing you a Very Happy New year. As discussed I had raised this with the support and below is the response from support.
"When a user executes GotoWorkspace, we are storing the new workspace information in their session context. This session context is then used when querying a versioned table.
In your example, the database is not evaluating the value for the current workspace until the cursor is executed, instead of at the time the cursor is opened. As a result, there is nothing that can be done from Workspace Manager's perspective to change this behavior."
Now, given this behaviour, what is the best way to return the result set from the workspace even when I am changing from the workspace I am working in back to LIVE?
Should I continue using a SQL type table and opening a ref cursor on it, or is there a better solution for this?
Please suggest.
Thanks in advance for your time and help on this.
Regards,
Vijay Sehgal
one more help on this
Vijay Sehgal, January 11, 2007 - 9:20 am UTC
Dear Tom,
good day to you as always, one more point on which I would like to have your help.
Scenario, In Schema/User A a procedure is as below.
create or replace procedure testproc(piname in varchar2)
as
begin
dbms_wm.createworkspace(piname);
end;
Execute privilege on this procedure is granted to Schema/User B.
if this procedure is executed from User B as
exec testproc('test');
the owner of the workspace is user B and not user A. Is this the default behaviour, or can it be changed?
Please help on this.
Thanks for your time and efforts on this.
Waiting for your response.
Regards,
Vijay Sehgal.
One more observation on versioned tables - need some help on it
Vijay Sehgal, January 16, 2007 - 9:57 am UTC
Dear Tom,
good day to you. One more in this series: recently we observed that the explain plan for a query on a versioned table shows a lot of objects from WMSYS. It would be a great help if you could shed some light on why these objects come into the picture and whether they should be a concern for query performance. If we have to benchmark this, what should the strategy be?
Your help in this regard is much appreciated.
Thanks,
Vijay Sehgal.
Waiting for your inputs
Vijay Sehgal, February 05, 2007 - 10:47 am UTC
Dear Tom,
good day to you, still waiting for your inputs on the above three questions I posted, your help in this regard is much appreciated and awaited.
Kind Regards,
Vijay Sehgal.
February 05, 2007 - 11:14 am UTC
definers rights procedures run "as the owner"
there are invokers rights procedures that run "as invoker", I haven't tested it, but you can try it in this case - "authid current_user"
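i.e., the invoker's rights variant of the procedure from the question would look like this (untested, as noted above - whether DBMS_WM then records the caller or the owner as the workspace owner is exactly what you would need to verify):

create or replace procedure testproc( piname in varchar2 )
authid current_user      -- runs with the identity/privileges of the caller
as
begin
  dbms_wm.createworkspace( piname );
end;
/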
regarding explain plan when using workspace manager.
Vijay Sehgal, February 06, 2007 - 9:37 am UTC
Dear Tom,
thanks a lot for your reply on the above query, I will try the suggestion given by you, but in the mean time we are facing one issue which I have already mentioned but I am posting it again for your help on it.
As I had already posted, around 25 of the tables in the system we are building are version-enabled. I noticed that the explain plans generated for queries that use versioned tables contain a lot of objects from the WMSYS schema, which I believe is the Workspace Manager schema used internally by Oracle.
The cost, bytes and cardinality for all of these WMSYS objects are quite low, but the versioned table's cardinality is much higher than it should be.
After going through Metalink note 309078.1, "How To Deal With a Bad Performance Of Workspace Manager Operations", and taking the necessary actions mentioned in the doc, I still found that the cardinality is wrong.
Also, a query against the table with the _LT suffix runs faster, but I am not sure if we should be querying the _LT table directly.
Can you please suggest something on this.
Your help in this regard is much appreciated.
Thanks and Regards,
Vijay Sehgal.
Bad Workspace Manager Performance.
Lew, February 13, 2007 - 4:09 am UTC
Our tests have shown that queries on versioned tables with ValidTime run 600 times slower than on non-versioned tables. Versioned tables without ValidTime still run around 100 times slower, which is a big CPU price to pay for the extra functionality.
Error with referential integrity ORA-20100.
Tom, March 20, 2007 - 10:05 am UTC
ORA-20100: 'TABLE_NAME' is both parent and child tables of referential integrity constraints
ORA-06512: at "SYS.LTDDL", line 457
ORA-06512: at "SYS.LTDDL", line 1123
ORA-06512: at "SYS.LTDDL", line 1107
ORA-06512: at "SYS.LT", line 9024
ORA-06512: at line 2
I need to have versioning of a table but I don't want to sacrifice referential integrity. Why can't versioning handle a table being both a child and a parent of referential integrity constraints?
Should I fall back to a trigger to handle my history and keep my referential integrity in your opinion?
March 20, 2007 - 11:01 am UTC
if you just want to version - and you have a data structure that cannot be supported by workspace manager (as you do), then it would likely make sense to handle the history yourself.
workspace manager
DM, September 30, 2008 - 9:12 am UTC
Hi Tom
Is it possible to stop WSM from creating a new version of a record if no changes have actually been made?
I mean to say that if I update the table as
Update tab1 set name = 'TOM' where empid = 233;
it will create a new version of the row.
Now if I run the same update again:
Update tab1 set name = 'TOM' where empid = 233;
it will again create a new version of the record.
I don't want that to happen the second time!
Any help would be appreciated..
thanx.
September 30, 2008 - 12:54 pm UTC
nope, they are both changes as far as anything is concerned...
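The usual way around this is to make the second statement touch no rows at all, so there is nothing to version - a sketch, using the names from the question:

update tab1
   set name = 'TOM'
 where empid = 233
   and ( name <> 'TOM' or name is null );   -- second time around: 0 rows updated, nothing new versioned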
auditing
A reader, April 01, 2009 - 11:33 pm UTC
Tom:
Great info on this thread. I have a few small questions.
1. You have a generic package that can build an audit trail for any table and any transaction and be called via a trigger.
Why do that if you have this workspace manager, which seems to give you a similar thing without writing any code?
Do they serve different purposes?
2. I have a requirement to audit table updates for 4 tables. One of the tables has about 50 columns.
Do you think the before-update trigger with those 50 procedure calls will have a great impact on performance?
The system does not have that many transactions.
3. Do you think your one-table design is much easier and more manageable than having 4 different audit tables (one for each production table, adding a NEW_COL_VAL for each OLD_COL_VAL to store the row's old and new values)?
A 50 column table would have 100 column audit table.
4. Would you use standard DB auditing for this instead of a trigger? When do you usually use the DB AUDIT command?
April 02, 2009 - 9:39 am UTC
1) if you refer back to that, you'll find the requirements were different. They didn't want an entire row versioned. They wanted just the modified columns. Compare the questions, you'll find they are somewhat "not the same at all"
2) if it doesn't do much work, well, think about it.
And you should see this coming:
benchmark it. prototype it, simulate it, measure it, figure out what the penalty will be. Anything short of that is called "a guess"
3) depends - think about it again. If you were auditing all columns every time, a table per table would make sense (I call that a shadow table). If you are auditing JUST the columns that were modified and doing it in a fashion that has a row per modified column, then a single table makes sense.
4) you want to capture row changes - modified data - we do not audit that (not until 11g with the flashback data archive that is)
https://asktom.oracle.com/Misc/oramag/on-redefinition-nature-and-triggers.html
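For reference, the 11g flashback data archive route looks roughly like this (a sketch; the archive name, tablespace, quota and retention are all made up):

create flashback archive emp_fda
  tablespace users quota 1g retention 1 year;

alter table emp flashback archive emp_fda;

-- history is then queryable with ordinary flashback syntax
select * from emp as of timestamp ( systimestamp - interval '30' day );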
Maintaining history of records - Point in time design
Anees Ahmad, January 07, 2010 - 6:02 am UTC
I have scanned the full article and found very useful explanations. I have checked other articles also.
There were discussions of having a shadow table populated by triggers, of using Oracle's built-in WM feature (Oracle 10g - with more features), and of the Oracle 11g paid feature.
(We have Oracle 10g Release 2.)
If I prefer the trigger approach for simplicity, that comes with some performance hit, even if we implement it using a packaged procedure call from the trigger.
One more idea comes to mind, as follows:
1. Create snapshot logs on all dynamic tables.
2. Create an updatable materialized view for each dynamic table.
3. Create triggers on the materialized views that would populate shadow tables whenever we refresh the materialized views.
Schedule a refresh job at a convenient time according to DB load.
This approach is only intended to reduce some of the performance hit caused by triggers, in the context of a comparatively big data load happening daily where a trigger should not affect the load process.
This is what we think. In your opinion, would there be any performance gain, if we ignore the disk space issue?
"In my opinion this post is related and need background of above article. Please consider it as review."
January 11, 2010 - 8:11 pm UTC
problem is...
triggers are allowed ONLY on updateable materialized views
and even then, they must DISABLE themselves during a refresh - by asking "am_i_a_refresh", if true, they must return
so no - this would not be something to consider.
I fail to see how the trigger approach would be "simple"; it would actually require you to design and think A LOT about a lot of things - things you might not even be aware of (which makes it really hard to think about).
On simplicity scale - triggers are a negative 1000, 10 being the easiest, 1 being the hardest.
Maintaining history of records - Point in time design
Anees, January 13, 2010 - 7:20 am UTC
Thank you for your reply.
Triggers are simple because we think we have more control without affecting any other team (the DBAs, in the case of the imp & exp restriction).
If we consider workspace manager, it is simpler, but while going through the dev guide we could see that many "considerations and restrictions apply". For these restrictions there are workarounds (great), but to implement the workarounds we would have to search all the already-written code.
Some of the restrictions are as follows:
Triggers on Version-Enabled Tables:
Only per-row triggers are supported. Per-statement triggers are not supported.
Only whole-row triggers are supported. Before-update and after-update triggers
for specific columns are not supported.
Import and Export Considerations:
Workspace Manager supports the import and export of version-enabled tables in one
of the following two ways: a full database import and export, and a workspace-level
import and export through Workspace Manager procedures. No other export modes,
such as schema, table, or partition level, are currently supported.
Bulk Loading into Version-Enabled Tables:
The following restrictions apply to bulk loading with version-enabled tables in the
current release:
■ Bulk loading into a table with a self-referential integrity constraint is not allowed.
■ Bulk loading into a workspace, other than LIVE, that has continually refreshed
child workspaces is not allowed.
■ Only the owner of a table or a user with the WM_ADMIN_ROLE role can bulk load
into a version-enabled table.
■ The user that is bulk loading the version-enabled table must have the INSERT
privilege for <table_name>_LT.
■ User-defined triggers on version-enabled tables are not executed during bulk
loading.
What is your take?
January 18, 2010 - 4:17 pm UTC
... Triggers are simple because we think we have more control without affecting any
other team(DBA-in case of imp & exp restriction).
...
triggers come with their own host of problems. They can be disabled, they can be bypassed without even disabling! Triggers do not have to fire.
... "considerations and restrictions apply"....
and please never allow yourself to think that ALL APPROACHES do not have "considerations and restrictions" that apply - all of them do. Writing code is a way to introduce bugs and unseen side effects (triggers do for sure).
Since I hate triggers, the first 'restriction' is perfectly ok by me :)
and do you really use export / import - these days, with tables of any size, it is sort of not done nearly as much as it was 20 years ago.
Everything comes with pro's and con's. everything.
OK
Pawan, February 03, 2010 - 10:59 am UTC
Hi Tom,
For restricting the number of rows for a specific key column, say deptno, is a trigger the only way to do it?
February 03, 2010 - 12:25 pm UTC
I don't know what you mean....
primary key
Jonie, October 10, 2011 - 2:33 am UTC
Hi,
how do I track primary key values in your example of a generic trigger?
E.g., if status has been changed in a table from x to y, how do I track the primary key id of that row?
Request you to explain with an example asap.
thanks.
October 10, 2011 - 10:43 am UTC
primary keys are immutable, never changing.
So, there wouldn't be anything to track. I've never dealt with updating a primary key, the concept is foreign to me.
You'll have to design something on your own, sorry. As far as I'm concerned - if a primary key were to be changed - it would be done via DELETE + INSERT - meaning, the key doesn't change - a new row comes into existence.
How's that for ASAP - and what is up with that "asap"?? really?
Recover from errors?
A reader, February 06, 2012 - 4:54 pm UTC
I have been testing some stuff using Workspace Management for the Valid Time functionality, and I wanted to make a significant rename of some of the objects. I had 3 tables in the schema that had WM enabled, and I dropped the schema before I disabled WM for those tables. Now, after I have recreated the schema and the base tables, I get the following error when I try to enable WM:
ORA-20131: table 'EMAIL_OWNER.EMAIL_ALERT_RECIPIENT' is already version enabled
ORA-06512: at "WMSYS.LT", line 9175
ORA-06512: at line 1
If I try to disable WM, I get the following (which I would expect since none of the WM objects exist):
ORA-00942: table or view does not exist
ORA-06512: at "WMSYS.LT", line 9355
ORA-06512: at line 1
Funny thing is that I dropped a dependent schema with 2 WM enabled tables and they worked fine. Can I recover from this issue? I did a search and found a few people suggesting running the disableversioning with the Ignore_Last_Error flag set to true multiple times, and I have done that and the error persists. Other people suggest creating the same object in another schema then copying over the objects to the broken schema, but that just sounds a bit like something that would get the, “congratulations, you just corrupted your database” response. Another person suggested using DBMS_WM.RecoverAllMigratingTables and it did not solve the issue either. I cannot find any documentation for this issue in the WM user guide. Do you know of the correct way to fix this issue?
February 06, 2012 - 9:07 pm UTC
please utilize support for this one.
Auditing a different way
Lise Parker, June 14, 2012 - 4:21 am UTC
I do not agree with a specific design around auditing and I would appreciate as always your thoughts on this.
We currently audit our data by having a trigger on each of the tables that we want to audit. When the trigger fires it will move the record into a dedicated audit table which is a mirror of the table from where the trigger fired. We add some other bits and pieces to the audit table too like user id etc.
However, with the new design, we will still have a trigger on each of the tables to be audited. However, instead of moving the data to a mirrored table, so to speak, the trigger will insert the row's data as XML into a common table that will hold all audited data - not just from this table but from all the audited tables. This central table will have a column holding the table name and then a column holding the XML, which is the row of data that was audited.
I think the overhead of producing the XML and then having to query this XML after is such an unnecessary overhead and I cannot see the benefits. It will save us creating individual audit tables, but that is not much saving to make surely.
Another argument they have is that they want one central repository for audited tables for different databases. So, say this common audit table in Oracle will not only hold audited data for all the Oracle tables, but also for all the tables in say SQL Server.
Note that the audited data is purely that, and will be used by our support team to investigate data issues.
I would really appreciate your thoughts.
PS! Saw you in Linlithgow yesterday and your talk as always was very interesting.
June 14, 2012 - 5:55 am UTC
I think the overhead of producing the XML and then having to query this XML
after is such an unnecessary overhead and I cannot see the benefits. It will
save us creating individual audit tables, but that is not much saving to make
surely.
I agree with you 100% - and in fact the whole "it'll save us" argument is silly. This trigger can and should be generated - as could the audit table itself. That is, a smart programmer would write a single stored procedure that would dbms_output.put_line (or even execute it directly if you want, but I like the generate-the-code approach) the necessary CREATE trigger and table.
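A skeletal sketch of such a generator (hypothetical - it just prints a whole-row shadow table and an insert/update trigger for you to review and run; it does not handle deletes, LONG/LOB columns or very wide tables, and the naming convention is purely an assumption):

create or replace procedure gen_audit( p_table in varchar2 )
as
  l_cols varchar2(32767);
  l_new  varchar2(32767);
begin
  for c in ( select column_name
               from user_tab_columns
              where table_name = upper( p_table )
              order by column_id )
  loop
    l_cols := l_cols || ', ' || c.column_name;
    l_new  := l_new  || ', :new.' || c.column_name;
  end loop;

  dbms_output.put_line( 'create table ' || p_table || '_aud as' );
  dbms_output.put_line( 'select sysdate aud_date, user aud_user, t.*' );
  dbms_output.put_line( '  from ' || p_table || ' t where 1=0;' );
  dbms_output.put_line( 'create or replace trigger ' || p_table || '_aud_trg' );
  dbms_output.put_line( 'after insert or update on ' || p_table || ' for each row' );
  dbms_output.put_line( 'begin' );
  dbms_output.put_line( '  insert into ' || p_table || '_aud' );
  dbms_output.put_line( '  ( aud_date, aud_user' || l_cols || ' )' );
  dbms_output.put_line( '  values ( sysdate, user' || l_new || ' );' );
  dbms_output.put_line( 'end;' );
  dbms_output.put_line( '/' );
end;
/

Then "set serveroutput on", "exec gen_audit( 'DEPT' )", and review what it prints before running it.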
How would they get the sql server data in there?
This would be a very secure audit trail (the xml one). Easy to get data in, but not so possible to get data out! Think about indexing this and retrieving from it.
If they really want ease of use, you might consider the Total Recall option in 11g - it introduces the flashback data archive.
thanks for the feedback - it was *hot* in those conference rooms :)
Auditing and more
Dhananjay Modak, August 28, 2012 - 8:06 am UTC
Hi Tom,
I have been following this thread and got some very useful information. Further on this, we are in the process of implementing new initiatives for a new application framework we are putting in place and I am looking at requirements for Auditing, logging and a number of related areas.
In particular, there is a need to detect data changes (via DML, but also DDL) in the database and have the capability of "knowing everything" about the changes. Our backend is Oracle 11g R1 and I have been looking at what my options are. Two Oracle features/solutions seem relevant but I have been struggling to figure out which one is more suitable:
1. Change Data Capture (but I am told this will be deprecated soon and I should be using GoldenGate).
2. Total Recall + FDA
These two seem to be doing very similar things, so I wanted to know the drivers behind these products from the Oracle side. Why would you choose one over the other, and under what scenarios?
Thanks and regards
Dhananjay
August 29, 2012 - 1:50 pm UTC
In 11g Release 1, total recall did not really permit DDL. You would have to be on 11g Release 2 for DDL support.
And then - total recall would fit the bill. If by 'detect data changes' you mean "display the prior values". Change data capture would not be appropriate.
How to trace only the changed columns and their values from one date to another in a table?
Hamidur Rahman Siddique, September 03, 2012 - 6:40 am UTC
My banner is
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
I have more than 50 columns in a table. This table is a detail table of another master table, say "Demo1". Generally users insert into, and sometimes update, the detail table (as in the example). It is the history of the master data, and each new insert carries a few field changes on a given date.
I want to track, for a given date, which columns/fields changed and what the changes were (previous value and new value).
Here is the sample data and the output format.
Id Date Name Col1 Col2 Col3 Col4 Col55 Col56 Col N
1 01-aug-2012 Abc Ab Bc aa bb TT
2 05-aug-2012 abc A A aa bb TT
3 20-aug-2012 bcd AA AA BB bb C D TT
Now report like…
Demo1 (master Data)
Revision Date Revision Field Previous Value Revised Value
05-aug-2012 Col1 Ab A
Col2 Bc A
20-aug-2012 Name Abc Bcd
Col1 A AA
Col2 A AA
Col3 Aa BB
Col55 C
Col56 D
Hope you understand...
Thanks,
data image
Hamidur Rahman Siddique, September 03, 2012 - 6:56 am UTC