I've investigated the performance of the MV-implemented constraint above
(http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:42170695313022#42335743411615)
and my conclusion is that it is not appropriate for an OLTP environment, unless it's read-mostly.
But here's the experiment - judge for yourself.
First, let's load the tables (100k rows in master, 10*100k = 1M rows in detail):
insert into master (id, min_no_of_children)
select rownum, 10 from
(select null from all_objects where rownum <= 1000),
(select null from all_objects where rownum <= 100)
;
insert into detail (pid)
select 1 + mod (rownum, 100000) from
(select null from all_objects where rownum <= 1000),
(select null from all_objects where rownum <= 100),
(select null from all_objects where rownum <= 10)
;
commit;
Add the index on the FK I forgot to create above:
create index detail_fk_idx on detail (pid);
Now, let's augment the refresh machinery with these indexes, to make refresh performance as good as possible:
-- improve my original union_all_mv_rowid_idx
drop index union_all_mv_rowid_idx;
create index union_all_mv_rowid_marker_idx on union_all_mv (row_id, marker);
create index union_all_mv_map_id_idx on union_all_mv (SYS_OP_MAP_NONNULL (id));
create index log_snap_idx_detail on mlog$_detail (snaptime$$);
create index log_snap_idx_master on mlog$_master (snaptime$$);
create index log_snap_idx_union_all_mv on mlog$_union_all_mv (snaptime$$);
Analyze everything (the MV logs are not analyzed by gather_schema_stats):
exec dbms_stats.gather_table_stats (user, 'mlog$_detail', cascade=>true);
exec dbms_stats.gather_table_stats (user, 'mlog$_master', cascade=>true);
exec dbms_stats.gather_table_stats (user, 'mlog$_union_all_mv', cascade=>true);
exec dbms_stats.gather_schema_stats (user, cascade=>true);
Now, let's clone the master+detail structure without the MVs and logs, and compare the statistics (using runstats) of a representative OLTP benchmark, to measure the overhead:
create table naked_master as select * from master;
create table naked_detail as select * from detail;
alter table naked_master add constraint naked_master_pk primary key (id);
alter table naked_detail add constraint naked_detail_fk foreign key (pid) references naked_master (id);
create index naked_detail_fk on naked_detail (pid);
exec dbms_stats.gather_schema_stats (user, cascade=>true);
exec runstats_pkg.rs_start;
insert into naked_master (id, min_no_of_children) values (100, 0);
commit;
update naked_master set min_no_of_children=-1 where id = 100;
commit;
delete from naked_master where id = 100;
commit;
insert into naked_master (id, min_no_of_children) values (100, 1);
insert into naked_detail (pid) values (100);
commit;
update naked_detail set pid = pid where pid = 100;
commit;
delete from naked_detail where pid = 100;
delete from naked_master where id = 100;
commit;
exec runstats_pkg.rs_middle;
insert into master (id, min_no_of_children) values (100, 0);
commit;
update master set min_no_of_children=-1 where id = 100;
commit;
delete from master where id = 100;
commit;
insert into master (id, min_no_of_children) values (100, 1);
insert into detail (pid) values (100);
commit;
update detail set pid = pid where pid = 100;
commit;
delete from detail where pid = 100;
delete from master where id = 100;
commit;
exec runstats_pkg.rs_stop (100);
Run1 ran in 23 hsecs
Run2 ran in 50 hsecs
run 1 ran in 46% of the time
Name Run1 Run2 Diff
STAT...data blocks consistent 0 102 102
STAT...rows fetched via callba 0 110 110
STAT...consistent changes 10 124 114
STAT...table scans (short tabl 0 134 134
LATCH.undo global data 21 162 141
STAT...commit cleanouts 18 159 141
STAT...commit cleanouts succes 14 155 141
LATCH.enqueues 12 180 168
LATCH.cache buffers lru chain 0 170 170
STAT...free buffer requested 0 170 170
STAT...sorts (rows) 2,597 2,778 181
STAT...workarea executions - o 10 198 188
STAT...sorts (memory) 6 200 194
STAT...enqueue releases 18 213 195
STAT...enqueue requests 17 213 196
LATCH.dml lock allocation 24 228 204
STAT...calls to kcmgas 6 219 213
LATCH.child cursor hash table 0 291 291
STAT...cluster key scans 0 326 326
STAT...table scan blocks gotte 0 344 344
LATCH.enqueue hash chains 44 428 384
STAT...shared hash latch upgra 2 505 503
STAT...redo entries 32 574 542
LATCH.redo allocation 44 587 543
STAT...table fetch by rowid 0 566 566
STAT...buffer is pinned count 0 572 572
STAT...index scans kdiixs1 2 577 575
STAT...session cursor cache hi 30 605 575
STAT...cluster key scan block 0 578 578
STAT...index fetch by key 3 593 590
STAT...opened cursors cumulati 30 764 734
STAT...db block gets 53 902 849
STAT...parse count (total) 30 918 888
STAT...execute count 31 925 894
STAT...db block changes 61 1,107 1,046
STAT...consistent gets - exami 12 1,077 1,065
LATCH.simulator hash latch 0 1,196 1,196
STAT...calls to get snapshot s 20 1,316 1,296
STAT...buffer is not pinned co 0 1,792 1,792
LATCH.row cache enqueue latch 0 2,238 2,238
LATCH.row cache objects 0 2,373 2,373
STAT...recursive calls 7 3,119 3,112
LATCH.library cache pin alloca 60 3,705 3,645
LATCH.shared pool 209 4,943 4,734
LATCH.library cache pin 224 6,219 5,995
STAT...table scan rows gotten 0 7,720 7,720
LATCH.library cache 264 9,129 8,865
STAT...no work - consistent re 0 14,590 14,590
STAT...consistent gets 20 16,584 16,564
STAT...session logical reads 73 17,486 17,413
LATCH.cache buffers chains 271 36,231 35,960
STAT...session pga memory 0 65,536 65,536
STAT...redo size 6,800 152,952 146,152
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
1,442 68,413 66,971 2.11%
So, comparing the MV-protected run (Run2) with the naked run (Run1), we have:
LIO    : 16,584 / 20     = 830x
redo   : 152,952 / 6,800 = 22x
latches: 68,413 / 1,442  = 47x
elapsed: 50 / 23         = 2.1x
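(The ratios are just the corresponding runstats totals divided pairwise; e.g.:

```sql
-- Overhead ratios, taken from the runstats report above
-- (Run1 = naked tables, Run2 = MV-protected tables):
select round (16584  / 20)    lio_ratio,     -- consistent gets
       round (152952 / 6800)  redo_ratio,    -- redo size
       round (68413  / 1442)  latch_ratio,   -- total latches
       round (50.0 / 23, 1)   elapsed_ratio  -- hsecs
from   dual;
```

rounded to the nearest integer.)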
A very heavy overhead overall; the latch and LIO increases in particular probably make this solution unscalable in a multi-user environment.
I think it is attractive only in a read-mostly environment, where concurrency is not a concern; there, I would enjoy the generalizability of the solution (just propagate the necessary data down to the MV, then make the check constraint as complex as necessary) and the fact that it's a declarative (SQL-only) solution.
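To illustrate that last point, the pattern generalizes roughly as follows. This is a hypothetical sketch only - the MV name and columns here are illustrative, not the actual union_all_mv definition above, and whether a given MV of this shape is fast refreshable depends on the usual restrictions (MV logs with rowid and the referenced columns, count(*) present, and so on):

```sql
-- Hypothetical sketch: propagate the data the rule needs down
-- into an on-commit MV (here: the actual child count next to
-- the declared minimum, per master row)...
create materialized view child_cnt_mv
  refresh fast on commit
  as
  select m.rowid m_rid, m.id, m.min_no_of_children,
         count (d.pid) cnt, count (*) cnt_star
  from   master m, detail d
  where  d.pid (+) = m.id
  group  by m.rowid, m.id, m.min_no_of_children;

-- ...then state the business rule declaratively, as a check
-- constraint on the MV; it is enforced when the MV refreshes,
-- i.e. at commit time:
alter table child_cnt_mv add constraint child_cnt_chk
  check (cnt >= min_no_of_children);
```

Any rule you can compute into an MV column this way becomes a plain check constraint, at the refresh-time cost measured above.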