Create Materialized View Log Semantics

Question and Answer

Tom Kyte

Thanks for the question, Gagneesh.

Asked: December 20, 2004 - 4:00 pm UTC

Last updated: May 12, 2011 - 7:51 am UTC

Version: 9.2.0

Viewed 10K+ times

You Asked

Hi Tom,
I have a couple of questions about "create materialized view log" in Oracle 9.2.0.1.
I did try to research and get more info in Oracle documentation and on internet forums but not with much success.

For Materialized View Logs (using rowid or primary key):
1) I understand that the "INCLUDING NEW VALUES" clause is only good/useful if you are doing aggregations, and one does not need it for pure data/table replication? Please comment.
I was kind of interested to know where the Materialized View LOG stores the actual OLD/NEW values when using the "INCLUDING NEW VALUES" clause.

2) My understanding is that, given the changing nature of rowids (table reorgs, transportable tablespaces), the "with rowid" clause is primarily there for backward compatibility and for scenarios where tables do not have primary keys for whatever reason. For tables having primary keys, it is desirable to have the logs and MViews created using the "with primary key" clause so that they would always be eligible for fast refresh. Please comment on these.

3) When do you need to use the "with sequence(...)" clause? The documentation says "Sequence numbers are necessary to support fast refresh after some update scenarios". Does the Materialized View handle multiple updates on the same columns during the day before the nightly Materialized View refresh (e.g. an on-demand refresh every 24 hours)?
For which "update scenarios" do you need "with sequence ..."? I was not able to find more details on this.

4) When do you need to combine the "with" options, as in "with rowid, primary key, sequence(...)"? How would that help?

Thanks a lot in advance for your valued inputs.
Gagneesh

and Tom said...

1) including new values is for fast refresh of aggregated materialized views.

if you list the columns you want the new values for -- they pop up right in the log:

ops$tkyte@ORA9IR2> create table t as select * from all_users;

Table created.

ops$tkyte@ORA9IR2> alter table t add constraint t_pk primary key(user_id);

Table altered.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view log on t with rowid (user_id,username,created) including new values;

Materialized view log created.

ops$tkyte@ORA9IR2> update t set user_id = 56, created = sysdate where rownum = 1;

1 row updated.

ops$tkyte@ORA9IR2> commit;

Commit complete.

ops$tkyte@ORA9IR2> set echo off
@printtbl8 'select * from mlog$_t'
USER_ID : "0"
USERNAME : "SYS"
CREATED : "02-sep-2004 08:39:30"
M_ROW$$ : "AAAITxAAHAAAACMAAA"
SNAPTIME$$ : "01-jan-4000 00:00:00"
DMLTYPE$$ : "U"
OLD_NEW$$ : "U"
CHANGE_VECTOR$$ : "0C"
-----------------
USER_ID : "56"
USERNAME : "SYS"
CREATED : "21-dec-2004 10:56:42"
M_ROW$$ : "AAAITxAAHAAAACMAAA"
SNAPTIME$$ : "01-jan-4000 00:00:00"
DMLTYPE$$ : "U"
OLD_NEW$$ : "N"
CHANGE_VECTOR$$ : "0C"
-----------------

PL/SQL procedure successfully completed.
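
For illustration only (a hedged sketch, not part of the original demo): the sort of aggregate MV such a log can feed. With mixed DML you would likely also want the SEQUENCE clause, per the guidance quoted under point 3 below.

create materialized view t_agg_mv
  build immediate
  refresh fast on commit
as
select username,
       count(*)       cnt,          -- count(*) is needed for fast refresh of aggregates
       count(created) cnt_created
  from t
 group by username;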

2) it works OK with rowids regardless, consider:


ops$tkyte@ORA9IR2> CREATE TABLE t
2 (
3 dt date,
4 x int
5 )
6 enable row movement
7 PARTITION BY RANGE (dt)
8 (
9 PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) tablespace users,
10 PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) tablespace tools,
11 PARTITION junk VALUES LESS THAN (MAXVALUE)
12 )
13 /

Table created.

ops$tkyte@ORA9IR2> insert into t values ( to_date( '12-mar-2003' ), 1 );

1 row created.

ops$tkyte@ORA9IR2> commit;

Commit complete.

ops$tkyte@ORA9IR2> create materialized view log on t with rowid;

Materialized view log created.

ops$tkyte@ORA9IR2> update t set dt = dt+1;

1 row updated.

ops$tkyte@ORA9IR2> set echo off
@printtbl8 'select * from mlog$_t'
M_ROW$$ : "AAAIUHAAHAAAACOAAA"
SNAPTIME$$ : "01-jan-4000 00:00:00"
DMLTYPE$$ : "D"
OLD_NEW$$ : "O"
CHANGE_VECTOR$$ : "00"
-----------------
M_ROW$$ : "AAAIUIAAFAAAAAKAAA"
SNAPTIME$$ : "01-jan-4000 00:00:00"
DMLTYPE$$ : "I"
OLD_NEW$$ : "N"
CHANGE_VECTOR$$ : "FF"
-----------------

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop materialized view log on t;

Materialized view log dropped.

ops$tkyte@ORA9IR2> create materialized view log on t with rowid;

Materialized view log created.

ops$tkyte@ORA9IR2> update t set x = x+1;

1 row updated.

ops$tkyte@ORA9IR2> set echo off
@printtbl8 'select * from mlog$_t'
M_ROW$$ : "AAAIUIAAFAAAAAKAAA"
SNAPTIME$$ : "01-jan-4000 00:00:00"
DMLTYPE$$ : "U"
OLD_NEW$$ : "U"
CHANGE_VECTOR$$ : "04"
-----------------

PL/SQL procedure successfully completed.

See how the one that moved the row was treated as a delete + insert, whereas the other was an update. it works.


3) Almost always want to have it on. Some snippets from the DW guide...


Specify the SEQUENCE clause if the table is expected to have a mix of
inserts/direct-loads, deletes, and updates.
....
Oracle recommends that the keyword SEQUENCE be included in your materialized
view log statement unless you are sure that you will never perform a mixed DML
operation (a combination of INSERT, UPDATE, or DELETE operations on multiple
tables).
.......
Fast refresh may be possible even if the SEQUENCE option is omitted from the
materialized view log. If it can be determined that only inserts or deletes
will occur on all the detail tables, then the materialized view log does not
require the SEQUENCE clause. However, if updates to multiple tables are likely
or required, or if the specific update scenarios are unknown, make sure the
SEQUENCE clause is included.
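
As a concrete sketch of that guidance (assuming the table T from the first example; this statement is illustrative, not from the original answer):

create materialized view log on t
  with rowid, sequence (user_id, username, created)
  including new values;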

4) basically when the mview explain process tells you that an operation you want to perform won't be possible unless you have it. eg:

http://docs.oracle.com/cd/B10501_01/server.920/a96520/mv.htm#48737

shows you would need rowid, including new values, a set of columns and sequence in order to fast refresh in almost all cases.
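
A hedged sketch of that explain process (MV_CAPABILITIES_TABLE is created by the utlxmv.sql script shipped with the database; MY_MV is a placeholder name):

@?/rdbms/admin/utlxmv.sql

begin
  dbms_mview.explain_mview( 'MY_MV' );   -- writes one row per capability
end;
/

select capability_name, possible, msgtxt
  from mv_capabilities_table
 order by seq;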






Comments

Thanks! That was great/very informative...

Gagneesh, December 21, 2004 - 7:00 pm UTC

Hi Tom,
Thanks for the detailed clarifications. It did clarify a lot and helped me understand it much better.
Well, I still have some clarifications and do need your invaluable inputs:
Reference (1):
The "including new values" clause has to have the columns listed (for storing old/new values).
Hence, for Aggregate Materialized Views, all the columns getting aggregated need to be there when defining the Materialized View log?? Please provide inputs on this.
Would it serve any purpose to have the "including new values" clause with no columns listed ?

Reference (2):
We will be designing several Materialized Views, many of them for pure replication. Based on the earlier thread "Materialized View on Table Without Primary Key", I am of the opinion that I should always use "Primary Key" based MV logs. If you have primary keys defined for all the tables, would there be any reason to create "rowid" based MV logs and MVs?
I have been using rowids so often and so efficiently, especially for huge table updates (several million rows), that I would like to make sure whether rowids would still be preferable in some scenario when a primary key exists on the table.

Reference (3):
Yes, I guess I will always be using the "sequence" clause as we will always be having inserts/updates/deletes...
Thanks. I would say that it is absolutely essential, not merely recommended (the Oracle documentation saying "it is recommended" leaves it somewhat open and gives the feeling that maybe it is not necessary).

Reference (4):
As indicated by you, I did use the MView explain process and it was helpful.
I would like to know: would there be any scenario where you would need both, i.e. the "with primary key, rowid" clause? I understand that the other combinations -- "with primary key, sequence ...", or "with rowid, sequence ..." and/or a set of columns -- would be used very often.

5) Column CHANGE_VECTOR$$:
Based on the thread "Snapshot Log Column change_vector$$", I understand that "The size of the vector is a function of the number of columns. They can be used to see which columns were updated".
Is this column used by the Materialized View refresh process to figure out which columns need to be updated for a "Fast Refresh"?
For simple replication using a Materialized View, how does the fast refresh process work? Does it just look at the rowid/primary key and the change_vector$$ column in the log and then get the updated values for UPDATES? Does the "change_vector$$" column have any other purpose?

Thanks once again!!!

Tom, could you please provide inputs...

Gagneesh, December 29, 2004 - 11:11 am UTC

Dear Tom,
Thanks for the initial response and the inputs.
Could you please provide inputs on my subsequent clarifications ?
We are going to be designing several Pure replication MViews for certain Schemas and would have wanted your inputs on these.

My understanding and experience have been that rowid-based processing is always faster than primary-key-based processing, especially when huge tables are involved.
I would have wanted your inputs on all but more importantly on points (2) and (4) re-clarifications.

Thank You so much!!!


Tom Kyte
December 29, 2004 - 7:10 pm UTC

reference 1) is not true as you can do a simple example to see that. the other subtle difference in the MV output was:


CHANGE_VECTOR$$ : "0C"

CHANGE_VECTOR$$ : "FF"

look at the bitmasks, they are like pointers

2) for replication only use primary keys, rowids can change too easily (reorgs, etc)

3) it is not "necessary", but almost always useful

4) i let dbms mview tell me when, too many permutations. cannot think of one off the top of my head right now, but you have the tools to see when and if you need em. (that was my goal)

5)

Still some doubts persist on the "including new values" clause usage

Gagneesh, December 30, 2004 - 3:55 pm UTC

Hi Tom,
Thanks a lot for all the inputs!!!
I am still not clear about the "including new values" clause usage and cannot seem to find details elsewhere.
Referring to your link and quoting from it: 
https://asktom.oracle.com/Misc/oramag/getting-just-the-rows-i-want.html
**================== begin ===========================**
SQL> create table t1 ( a  int,
  2     b int,
  3     y number,
  4     primary key(a,b) )
  5  /
Table created.

On this table, we will need to keep a materialized view log to capture the changes made to T1 during our transaction. 
We'll keep a summary of column Y rolled up by the values of column A—hence those columns must appear in our materialized view log: 

SQL>    create materialized view log on t1 
  2  with rowid ( a, y ) including new values
  3  /
Materialized view log created.

We are ready for our materialized view: 

SQL> create materialized view t2
  2  build immediate
  3  refresh on commit
  4  as
  5  select a, count(*), count(y), sum(y) from t1 group by a
  6  /
Materialized view created.
**=================== end ==========================**

Please Note: "hence those columns must appear in our materialized view log"
a) Hence, for a fast refreshable aggregate Materialized View we definitely need the "column list" containing all columns getting aggregated. One has to include all columns mentioned in the query, otherwise one gets Oracle error ORA-12033.
My understanding is: for fast refreshable aggregate views, all the columns getting aggregated need to be included.

b) What I am not understanding is:
1) Is the "including new values" clause any good (does it help/provide additional info, and is it even required for any reason) when you omit the column list when creating the MV log?
2) The new/updated values for the columns would be in the master table, but the old values are lost, as the column list to store the values is not there in the MV log definition.
3) The Materialized View log would have an additional record per transaction in the log table; what good is it (for what purpose)?
The change_vector$$, as per my understanding, can basically be used to
see which columns were updated, but does not have the actual (old/new) values of those columns.

c) I guess the MV log without a column list would only be applicable to non-aggregate Materialized Views, as otherwise one may get errors when creating/refreshing the Materialized View itself. I feel the statements below:

1) create materialized view log on t1 
     with rowid including new values
/
(would add an additional record in the MV log table: an overhead with no benefit??)

and 
2) create materialized view log on t1 
     with rowid excluding new values
/

provide us the same information.

Please advise. 

Tom Kyte
December 30, 2004 - 4:23 pm UTC

1) do you see it provides a change vector "hey, this is what changed"


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t ( x int, y int, z int, primary key (x) );
 
Table created.
 
ops$tkyte@ORA9IR2> insert into t values ( 1, 1, 1 );
 
1 row created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view log on t
  2  with rowid including new values;
 
Materialized view log created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> update t set y = 1;
 
1 row updated.
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2> begin print_table( 'select * from MLOG$_T' ); end;
  2  /
.M_ROW$$                      : AAAI2DAAGAAAAAPAAA
.SNAPTIME$$                   : 01-jan-4000 00:00:00
.DMLTYPE$$                    : U
.OLD_NEW$$                    : U
.CHANGE_VECTOR$$              : 04
-----------------
.M_ROW$$                      : AAAI2DAAGAAAAAPAAA
.SNAPTIME$$                   : 01-jan-4000 00:00:00
.DMLTYPE$$                    : U
.OLD_NEW$$                    : N
.CHANGE_VECTOR$$              : 04
-----------------
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> delete from mlog$_t;
 
2 rows deleted.
 
ops$tkyte@ORA9IR2> update t set z = 1;
 
1 row updated.
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2> begin print_table( 'select * from MLOG$_T' ); end;
  2  /
.M_ROW$$                      : AAAI2DAAGAAAAAPAAA
.SNAPTIME$$                   : 01-jan-4000 00:00:00
.DMLTYPE$$                    : U
.OLD_NEW$$                    : U
.CHANGE_VECTOR$$              : 08
-----------------
.M_ROW$$                      : AAAI2DAAGAAAAAPAAA
.SNAPTIME$$                   : 01-jan-4000 00:00:00
.DMLTYPE$$                    : U
.OLD_NEW$$                    : N
.CHANGE_VECTOR$$              : 08
-----------------
 
PL/SQL procedure successfully completed.


for other kinds of MV's to make use of.

2) correct

3) generic implementation -- small overhead. 

far in the future?

Alexander, December 30, 2004 - 5:03 pm UTC

Tom, how do you explain '01-jan-4000' in your samples?

Thanks

Tom Kyte
December 30, 2004 - 6:14 pm UTC

a date that is far enough in the future that the developers felt certain their code would not be running anymore, so they can safely use it in replication to ask for all of the changes that happened since their last refresh (01-jan-4000 is way in the future, it'll be AFTER the last refresh)

it is in support of more than one MV in replication feeding from the same log -- the first guy to refresh will change the 4000 into today. so next time, when it asks "what changed since I last refreshed" -- it will not see the row again.
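
A sketch of what that looks like in a log (using the MLOG$_T examples above; rows that no registered MV has consumed yet still carry the sentinel date):

select m_row$$, snaptime$$, dmltype$$
  from mlog$_t
 where snaptime$$ = to_date( '01-jan-4000', 'dd-mon-yyyy' );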

column snaptime$$ purpose/details...

Gagneesh, December 30, 2004 - 5:20 pm UTC

Please refer to Tom's answer in the thread "Materialized View Log - snaptime$$". The link is:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:9042180525899

Quoting from the other thread, it is basically:
"snaptime$$ is modified upon a refresh and is only used when there are multiple snapshots defined on a single master."
Hope that helps...

Clause "Including New Values"....

Gagneesh, December 30, 2004 - 5:34 pm UTC

Hi Tom,
Thanks for the prompt response.
Maybe I am not putting it in the correct words, but I still have not understood the "Including New Values" clause properly. Also, I had some confusion, as I had labelled my points "a), b) 1,2,3 and c)" and I am not clear what you refer to by "1), 2), 3)".

1) As per my understanding, I would only be using CREATE MV LOG with the "including new values" clause for fast refreshable aggregate Materialized Views, listing all the aggregated columns in the "column list".
2) Use the "including new values" clause along with a column list for fast refreshable aggregate Materialized Views ONLY.

3) For pure replication MVs, it does not serve any purpose or provide benefits. Also, the change_vector$$ value remains the same for both records (old/new).
4) The change vector column provides information describing which columns have been modified. The size of the vector is a function of the number of columns. It can be used to see which columns were updated.
But it does not store any old/new values. Hence, when comparing MV logs created with "including new values" and "excluding new values" (the default), it does not give any additional info.

Thanks
Gagneesh




Tom Kyte
December 30, 2004 - 6:53 pm UTC

4) sure it does. think about:

create mv as select c1, c100 from t;

update c2 and c4... so what? does the mv care?

Really good

Purushotham, January 21, 2005 - 8:25 am UTC

Tom,

My question regarding the MView logs is more on design issue. I searched your site but couldn't come with the answer.

We develop BI applications using Oracle 9iR2. We have defined materialized view logs on fact tables as well as on dimension tables.

We have an Item dimension table and have defined an MView log on this dimension table. There is no other flavor of dimension that extends the Item dimension. I argued with the development team about having the logs there. Recently I worked on a performance problem loading the Item dimension: I see considerable time being spent on inserting into the MView logs.

Now my question
1. Will it really help in any way to have MView logs on dimensions or on the final level of MViews? (I believe the answer has to be "No".)

2. If the answer is "No" How should I convince my team to drop those Mview Logs.

Please help

Thanks

Tom Kyte
January 21, 2005 - 9:52 am UTC

let me add #3 -- why don't you bulk load the dimensions, in which case the MV log overhead is virtually 0.0% -- only a rowid range would be added to the mview log, not a record per row.


I cannot answer #1, I have to assume they are doing changes only refresh somewhere (else you would not have the logs) and maybe they need them -- maybe they don't (maybe a full refresh would be "better" -- insufficient data to say)



Thanks for the instant reply

Purushotham, January 21, 2005 - 2:26 pm UTC

Tom,

Thanks for your reply. I understand that when we do a full refresh, mview logs will not be of much use. But is your suggestion of a "bulk load" something different, or am I getting lost in terminology for insert(..) select mv.col1...?

It would give me a complete picture of MView logs.

Thanks
Purushotham

Tom Kyte
January 21, 2005 - 7:47 pm UTC

You said:

I see considerable time has been spent on
inserting into MView logs.


well, that could only happen if you load the dimensions row by row using the conventional path sql approach. If you did it *in bulk* (insert /*+ append */ or a direct path sqlldr), it would not be the case -- we just log one row or so in the mview log in order to give the rowid range of the newly added rows. So, you could add thousands of records in the dimension -- and create just a single row in the mview log. Using row by row sql, you would create thousands of rows in the mview log.
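
A minimal sketch of that bulk approach (table names are hypothetical):

insert /*+ append */ into item_dim     -- direct path load
select * from item_dim_stage;
commit;
-- the mview log gets a rowid range covering the newly added rows,
-- not one log row per loaded row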

Thanks

Purushotham, January 22, 2005 - 2:58 am UTC

Tom, that clears everything up. We in fact have two modes to load data: full (using direct insert) and partial.

Thanks again for your quick response.

Bulk insertion

selva, February 06, 2005 - 8:53 am UTC

Hi Tom,
Your service is really very helpful for us.
Coming to my point: I have a table with 6 million rows that does not have any keys, but for business purposes I need to take 2 columns from this table using the substr function and merge with another table.
My requirement is: if a substr match is found, I need to check some more conditions and finally insert the data into an archive table; if a match is not found, in that case also insert into the archive table.
I wrote a query for this, but for 6 million rows it does not run fast.
Can you please suggest the best solution for this situation?
Thanks in advance.


Tom Kyte
February 07, 2005 - 3:51 am UTC

for 6 million rows, it will take just a couple of minutes (on my desktop pc).

my suggestion: model your data properly so you don't have to substr. use columns for columns.


but even so, it should just be a couple of minutes for 6 million rows (as long as this is a single statement). If by "check other conditions" you mean "i am fetching slow by slow and processing data procedurally" - that is probably where your slowdown is.

Bulk Insertion

selva, February 07, 2005 - 12:00 am UTC

Hi tom,


Your service is really very helpful for us.
Coming back to my point: I have a table with 6 million rows that does not have any
keys, but for business purposes I need to take 2 columns from this table using
the substr function and merge with another table.
My requirement is: if a substr match is found, I need to check some more
conditions and finally insert the data into an archive table; if a match is not found,
in that case also insert into the archive table.
I wrote a query for this, but for 6 million rows it does not run fast.
Can you please suggest the best solution for this situation? Please see the query:

insert into tblsap_history(select * from tblsap ts
where
(
(ts.css_database,ts.sap) in ( select SUBSTR(source_account, 1, 2),SUBSTR(source_account, 3, 18)
from ds_account a,ds_onebill d
where
(ts.taig like 'IS%' or ts.taig like 'TAIG-No Change In Bill Group%' and a.source_system = 1)
or
(
d.bill_profile='EXCLUDE'
and a.onebill_account=d.onebill_account
)
)
OR
(ts.css_database,ts.sap) not in (select SUBSTR(source_account, 1, 2),SUBSTR(source_account, 3, 18)
from ds_account a
)
and required_by <= (sysdate-180)
)
);

Thanks in advance.


Tom Kyte
February 07, 2005 - 4:33 am UTC

correlated subqueries with substrs, not in on a substr function -- yup, that's not going to be fast....

I see tons of full scans, due to the not in in all likelihood (somewhat unavoidable with that substr...)

are you sure this is accurate:



( select SUBSTR(source_account, 1, 2),
SUBSTR(source_account, 3, 18)
from ds_account a,ds_onebill d
where (ts.taig like 'IS%'
or
ts.taig like 'TAIG-No Change In Bill Group%'
and
a.source_system = 1)
or ( d.bill_profile='EXCLUDE'
and
a.onebill_account=d.onebill_account
)
)



I'm suspicious of and's and or's without parens (ts.taig like .... )

you have a cartesian join in there.

that is nasty looking, are you sure it is correct? just look at this part:



( select SUBSTR(source_account, 1, 2),
SUBSTR(source_account, 3, 18)
from ds_account a,ds_onebill d
where (ts.taig like 'IS%'
or
ts.taig like 'TAIG-No Change In Bill Group%'
and
a.source_system = 1)


join every row in ds_account where source_system=1 to every row in ds_onebill.


I don't know, that doesn't sit right with me. that is basically going to get every source_account IF there is a ts.taig like .... and source_system = 1.




Bulk Insertion

selva, February 07, 2005 - 7:10 am UTC

Thanks for your reply. There is no way to join on other columns; this query worked for smaller amounts of data. The problem is: without unique fields, how can it run fast?

For example, I already mentioned the columns in the tables.

table1 (tblsap) has 44 columns and table2 (ds_account) has more than 10 columns, but there are no relations between them. The only possibility is that they are stored in some defined format in the columns, like this for the tblsap table:

sap css_database

EM 12345678
ER 12448788
EM 12334467
CM 23456789
now the ds_account table column looks like

source_account

EM12345678
ER12448788
EC13331333


if I use (ts.sap, ts.css_database) in (select substr(source_account,1,2), substr(source_account,3,18) from ds_account),
I relate the tables by using this,
and I will get the record output; in this case the matches are:

EM12345678
ER12448788

and additionally check 3 subconditions if a match is found:

a.source_system = 1
ts.taig like 'IS%' or ts.taig like 'TAIG-No Change In Bill Group%'
required_by <= (sysdate-180)

select * from tblsap ts
where
(
(ts.css_database,ts.sap) in ( select SUBSTR(source_account, 1, 2),SUBSTR(source_account, 3, 18)
from ds_account a
where a.source_system = 1
and
ts.taig like 'IS%' or ts.taig like 'TAIG-No Change In Bill Group%'
)
and required_by <= (sysdate-180)
)
this is my first condition,

if the account does not match
(I will get account numbers like EC13331333),
for this second requirement I need to check only one condition:
required_by <= (sysdate-180)
I wrote the query
like this:
2.select * from tblsap ts
where
(
(ts.css_database,ts.sap) not in ( select SUBSTR(source_account, 1, 2),SUBSTR(source_account, 3, 18)
from ds_account a
)
and required_by <= (sysdate-180)
)

for my third requirement I include another table called ds_onebill; in this case, if an account match is found, I again need to check that the onebill is excluded. In this table we have the key
ds_onebill.onebill_account=ds_account.onebill_account
and ds_onebill.bill_profile='EXCLUDE'
These are the conditions that additionally need to be checked.
For this case I wrote the query as
3.
select * from tblsap ts
where
(
(ts.css_database,ts.sap) in ( select SUBSTR(source_account, 1, 2),SUBSTR(source_account, 3, 18)
from ds_account a ,ds_onebill d
where d.bill_profile='EXCLUDE'
and a.onebill_account=d.onebill_account
)
and required_by <= (sysdate-180)

)
finally I want to insert the results of all 3 conditions into one archive table,
so I have written the query I sent last. Now I hope you are getting my requirement; it took a full scan of all the tables.
My requirements:
1. which way is faster for this query?
2. how can I simplify this query?
3. how can I negate this query to delete the records from the source table?
Can you please help me? I am struggling very much; really sorry for posting in various places.
Thanks in advance.

Tom Kyte
February 07, 2005 - 8:44 am UTC

do you see that the logic there is just not looking right?

i don't care how many times it "worked" -- you look at that and please tell me in english the logic behind it.

bulk insertion

selva, February 08, 2005 - 12:09 am UTC

OK, I will explain once again.
table1 has 44 columns and table2 has more than 10
columns, but there are no relations between them. The only possibility is that they are
stored in some defined format in the columns, like:


table 1 has the columns

sap css_database

EM 12345678
ER 12448788
EM 12334467
CM 23456789

now table2 has a column like

source_account

EM12345678
ER12448788
EC13331333

case 1:

if I use
(table1.sap, table1.css_database) in (select
substr(table2.source_account,1,2), substr(table2.source_account,3,18) from table2),

and a match is found, I will get the record from table1; logically the query merge output will be like

EM12345678
ER12448788

and I additionally need to check 3 subconditions (if a match is found).
They are:

table2.source_system = 1
table1.taig like 'IS%' or table1.taig like 'TAIG-No Change In Bill Group%'
table1.required_by <= (sysdate-180)

finally, the query for the first condition now becomes:

select * from table1 ts
where
(
(ts.css_database,ts.sap) in ( select SUBSTR(source_account, 1,
2),SUBSTR(source_account, 3, 18)
from table2 a
where a.source_system = 1
and
ts.taig like 'IS%' or ts.taig like 'TAIG-No Change In Bill Group%'
)
and required_by <= (sysdate-180)
)


case 2:
if a match is not found using the substr functionality -- that means it is just the opposite of the first condition --
logically the query merge output will be like

EM 12334467
CM 23456789
(I will get source_accounts like these),

For this second requirement, I need to check only one condition:
required_by <= (sysdate-180)

now the query becomes

2.select * from tblsap ts
where
(
(ts.css_database,ts.sap) not in ( select SUBSTR(source_account, 1,
2),SUBSTR(source_account, 3, 18)
from ds_account a
)
and required_by <= (sysdate-180)
)

case 3:

for my third requirement I include another table called table3; in this case,
if a source_account match is found, I then again need to check that table3.bill_profile='EXCLUDE' is satisfied. In this
table we have the key
table3.onebill_account=table2.onebill_account
and table3.bill_profile='EXCLUDE'
These are the conditions that additionally need to be checked for case 3.
I wrote the query for this case:
3.
select * from tblsap ts
where
(
(ts.css_database,ts.sap) in ( select SUBSTR(source_account, 1,
2),SUBSTR(source_account, 3, 18)
from ds_account a ,ds_onebill d
where d.bill_profile='EXCLUDE'
and a.onebill_account=d.onebill_account
)
and required_by <= (sysdate-180)

)
finally I want to insert the results of all 3 conditions into one archive_table4.
1. which way is faster for this query?
2. how can I simplify this query?
3. after inserting into archive_table4, I want all the records that were moved to archive_table4 to be deleted from table1.
Regarding that, how can I negate this query to delete the records from the source table1?
I am very new to Oracle, so I am trying to solve this with your help.

For simple understanding, I wrote it as a simple algorithm:

if (match is found)
{
    // check some conditions
    1. table2.source_system = 1
    2. table1.taig like 'IS%' or table1.taig like 'TAIG-No Change In Bill Group%'
    3. table1.required_by <= (sysdate-180)
    // if these 3 conditions are satisfied, insert all records into the archive table
    or
    // check some conditions
    1. table3.bill_profile='EXCLUDE'
    2. table3.onebill_account=table2.onebill_account
    // if these 2 conditions are satisfied, insert all records into the archive table
}
else
{
    // match is not found
    // check one condition
    1. table1.required_by <= (sysdate-180)
    // if this condition is satisfied, insert all records into the archive table
}

Tom Kyte
February 08, 2005 - 1:29 am UTC

sometime, when I'm taking new questions -- we can try there

but this has nothing to do with an existing question

I seriously question the logic you are using (i believe it to be very wrong, this is not the place where I can "debug" it for you) and there is no point in tuning something that is "wrong"


sorry. this is just not the place for this type of brand new question.

updating views

Shashi kumar, February 09, 2005 - 11:23 pm UTC

Hi Tom,
I just wanted to know whether views can be updated or not, and if so, how.
I am new to this database world; please clarify my doubt. I did try to research and get more information on the net
but couldn't get a satisfying answer.
Thanking you,
Shashi

Tom Kyte
February 10, 2005 - 1:09 am UTC

the concepts guide...

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm#24841

... You can query views, and with some restrictions you can update, insert into, and delete from views......

Materialized view log records created when MV not interested

Graeme, October 04, 2005 - 8:37 am UTC

Hi Tom,

Running version 10.1.0.4.0 we have an issue with the bulk update of our data warehouse fact tables that have materialized view logs for fast refresh aggregated materialized views.

Despite the logs referencing only the columns our mviews are interested in, and the bulk update being on a completely different column, for each fact row updated we get 2 rows inserted in our materialized view log (as we are including new values). Consequently this is causing performance and space issues.

I believe we can get round the issue but from the point of view of what we are doing here it would seem sensible that if you have named the columns for the materialized view log then updates to other columns would be ignored.

From other points of view (replication possibly?) I am sure the current behaviour makes a lot of sense but I am not sure why. Could you elaborate?

Tom Kyte
October 04, 2005 - 4:27 pm UTC

there can be many many MV's pointing to this and they need not even be on that same machine/database - the MV log doesn't know what (in general) the clients of the MV log need.

ORA-12033

Su Baba, April 30, 2006 - 2:11 pm UTC

Can a materialized view contain columns from multiple tables? I suspect that's the reason I got the ORA-12033 error, but I'm not too sure.

SQL> 
SQL> CREATE TABLE x_objects NOLOGGING AS
  2  SELECT * FROM all_objects;

Table created.

SQL> 
SQL> CREATE TABLE x_users NOLOGGING AS
  2  SELECT * FROM all_users;

Table created.

SQL> 
SQL> CREATE MATERIALIZED VIEW LOG ON x_objects
  2  WITH ROWID, SEQUENCE (object_type) INCLUDING NEW VALUES;

Materialized view log created.

SQL> 
SQL> CREATE MATERIALIZED VIEW LOG ON x_users
  2  WITH ROWID, SEQUENCE (user_id) INCLUDING NEW VALUES;

Materialized view log created.

SQL> 
SQL> 
SQL> CREATE MATERIALIZED VIEW x_object_cnt_MV
  2     BUILD IMMEDIATE
  3     REFRESH FAST
  4     ON DEMAND
  5  AS
  6  SELECT b.user_id, a.objecT_type, COUNT(*) cnt
  7  FROM   x_objects a, x_users b
  8  WHERE  a.owner = b.username
  9  GROUP  BY b.user_id, a.objecT_type;
FROM   x_objects a, x_users b
       *
ERROR at line 7:
ORA-12033: cannot use filter columns from materialized view log on "SCOTT"."X_USERS"
 

Tom Kyte
May 01, 2006 - 1:56 am UTC

yes it can - however - I suggest you use dbms_mview.explain_mview to see what it is capable of (and why - it'll tell you why certain refresh methods are not available as well)
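
In this particular case, explain_mview would likely point at the join columns: the MV joins on X_OBJECTS.OWNER = X_USERS.USERNAME, and neither join column is in its table's log. A hedged sketch of a fix (recreating the logs with the join columns added to the column lists):

create materialized view log on x_objects
  with rowid, sequence (owner, object_type) including new values;

create materialized view log on x_users
  with rowid, sequence (user_id, username) including new values;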

Creating multiple MVs based on a single table

Allen Shatzer, November 16, 2006 - 8:16 pm UTC

I am running into a problem where I have an existing MV (MV1), that is refreshed using a fast refresh every two hours. At some point after the creation of MV1, I then want to create a new MV (MV2) based on one of the tables that the first one is based upon. The creation of MV2 takes 1 1/2 hours and is not refreshing at all. MV2 is simply a "select col1, col2, count(*) from table1 where ... group by col1, col2". Due to this, it never refreshes itself, even though I specified a start with and next clause when creating MV2.

Given the above scenario, if a fast refresh of MV1 occurs while MV2 is being created, is it possible that the MV Log(s) shared by the two of them would be purged before the create of MV2 completes and therefore the MV log is considered "younger" than MV2 and results in the problem described above?

If so, how can I work around this? I looked for an easy way to "disable" the refresh of MV1 while creating MV2, but could not find anything. The only thing I can think of is to make the job that refreshes MV1 (and any other MVs sharing the same MV logs) broken while I create MV2. Is this my only option?

ORA-12033

A reader, April 19, 2007 - 11:41 am UTC

Hi

I have a simple question

create materialized view log on emp with sequence, rowid including new values;

create materialized view LSC$EMP1
BUILD IMMEDIATE
REFRESH FAST ON DEMAND
as
select deptno, count(*) as cnt, count(*)
from emp
group by deptno;

this gives ORA-12033, whereas if I change the materialized view log definition to

create materialized view log on emp with sequence, rowid (deptno) including new values;

it works.

How so?

Why is deptno required? :-?
Tom Kyte
April 19, 2007 - 12:41 pm UTC

because you group by it....

ora-12033

A reader, April 22, 2007 - 12:54 pm UTC

Hi

If a table has a materialized view log defined with

create materialized view log on emp with sequence, rowid (deptno) including new values;

and there are several MVs, simple ones and complex ones, based on this table, will this definition make any difference when refreshing each MV?


Tom Kyte
April 23, 2007 - 4:24 pm UTC

as long as this definition supports the needs of all of them.... it'll be OK.

A reader, May 01, 2007 - 9:06 am UTC

Hi Tom,

I wonder why we need to have a primary key when creating a materialized view from a table?

Is the primary key used to create/update/refresh the materialized view?



SQL> create materialized view
2 mv_t2 as
3 select * from t2;
select * from t2
*
ERROR at line 3:
ORA-12014: table 'T2' does not contain a primary key constraint


Thanks in Advance.

Tom Kyte
May 01, 2007 - 10:07 am UTC

[tkyte@tkyte-pc ~]$ oerr ora 12014
12014, 00000, "table '%s' does not contain a primary key constraint"
// *Cause:  The CREATE MATERIALIZED VIEW LOG command was issued with the
//          WITH PRIMARY KEY option and the master table did not contain
//          a primary key constraint or the constraint was disabled.
// *Action: Reissue the command using only the WITH ROWID option, create a
//          primary key constraint on the master table, or enable an existing
//          primary key constraint.


you do not *need* to have one

but - if you don't, I question the base table's very existence.

Delete from MVLOGS

A reader, May 28, 2008 - 5:40 am UTC

Tom,
Can I safely delete the data from MV logs older than 7 days? A fast refresh happens every day. I'm on Oracle 10g.
Tom Kyte
May 28, 2008 - 9:40 am UTC

why isn't the data getting purged all by itself???

do you have materialized views that were registered once - but no longer exist?

the logs are designed to clean themselves out when the last MV that needs a log record uses it. If they are not getting cleaned out - you must have some MV's that are NOT refreshing.

No, you would not clear them out yourself, something is wrong there.
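
A diagnostic sketch for the "registered once but no longer exists" case (a data dictionary query; deciding which registrations are stale is up to you):

select owner, name, mview_site
  from dba_registered_mviews
 order by owner, name;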

MView Log Purging

Jason B., July 02, 2008 - 8:03 pm UTC

We have a materialized view that's used for a process that runs infrequently. It's not used or refreshed for multiple months at a time, until the next time the business wants to run the process.

We're adding mview logs to support other mviews that could use fast refresh. I'm concerned about the logs growing for multiple months until this one occasional mview is finally refreshed.

Well, I just discovered the DBMS_MVIEW.PURGE_MVIEW_FROM_LOG procedure, which will be very useful in removing old log data on the detail tables. I'm working on some code to run occasionally against this particular mview to keep the logs clean. (We don't mind doing a complete refresh the next time...)
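
For reference, a sketch of the call I'm planning (the owner, name and site values are placeholders):

begin
  dbms_mview.purge_mview_from_log( 'SCOTT', 'OCCASIONAL_MV', 'MYDB' );
end;
/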

Tom Kyte
July 07, 2008 - 9:27 am UTC

but, if you purge it - then it won't be there anymore? I didn't follow this train of thought here....

MView Log Purging 2

Jason B., July 15, 2008 - 5:36 pm UTC

The "purged" state of the mview resets the next time you refresh it. So if you do a (complete, of course) refresh, from then on it will resume holding mview log information, so that subsequent refreshes may be fast ones. Unless and until you decide to purge the mview again.

Tom Kyte
July 15, 2008 - 8:16 pm UTC

right, I know that, but so what.

I still don't follow that train of thought, why would you want to purge to force all of the others to do a complete refresh? why would you think that is "good"?

MView Log Purging 3

Jason B., July 18, 2008 - 12:21 pm UTC

It doesn't force others to do complete refreshes--it doesn't force purging of data that *others* might need.

It only purges log rows that only this mview needs (if there are any such rows). Just like when the mview refreshes or is dropped.

Tom Kyte
July 18, 2008 - 5:10 pm UTC

sorry, I did not see you said "purge mview from log" above, sorry about that.

yes. I get it now.

Question related to materialized view

Naveen, July 22, 2008 - 4:11 pm UTC

Hi Sir,
Greetings!

I was reading the thread and it's awesome. I learnt a lot and it really helped me a lot. I have a question: is there a way to refresh the snapshot to apply only inserts and updates but not deletes (in other words, we need to refresh the materialized view to grab just the inserts and updates from the materialized view log)? If not, do we have any workaround? Please help.


Regards,
Naveen

Tom Kyte
July 22, 2008 - 6:54 pm UTC

no, the entire goal, purpose and only reason for being for the materialized view is to make the materialized view look JUST LIKE the defining query.

If you want just inserts/updates - skip deletes or "flag delete" a row - you'll be looking at streams and doing your own apply processing so you can skip a delete or change what delete means (probably best to change what delete means, to update a "end date" or something)

Thanks for the reply

Naveen, July 23, 2008 - 10:59 am UTC

Hi Tom,
Greetings!

Thanks for the reply! Basically I need to grab the application log records (these log records are created by user-defined packages and stored in a table called zaudit_log) from different production database(s) running on different server(s) and store them in another database (called auxDB) which resides on a different unix box. Once a month the records in the table zaudit_log will be physically deleted from the production database(s). So if I use materialized views, I observed that when a physical delete happens in the production database(s), those records get deleted in the materialized view as well. So I thought there would be an option in the materialized view which will grab just inserts/updates and skip deletes.



Naveen

Tom Kyte
July 24, 2008 - 10:26 am UTC

basically, re-read my response right above?

You cannot use a materialized view.
You can use streams.

Materialized views to ignore delete's

Naveen, July 24, 2008 - 11:36 am UTC

Hi Tom,
Greetings!

Yup, I started looking into streams and will make use of it to complete my task.

Thanks & Regards,
Naveen

MATERIALIZED VIEW LOG on Existing tables

Rupal, December 03, 2009 - 9:06 pm UTC

Hi Tom,

Happy Holidays In Advance Tom!!

We have created materialized view logs to speed up MV refresh. The tables on which the materialized view logs are present are big tables and are partitioned. We have a periodic process of unarchiving the tables; in this process we exchange the lowest dated partition with a separate table in a different schema. This process was badly impacted because of the Materialized View Log present on that table.

To prove the speedup, we dropped the materialized view log from one of the tables and then fired the exchange partition process, and it completed within a few seconds.

But now we have lost the Materialized View log. I have 2 questions based on the above:

1. Is there any way we can recreate the materialized view log and have it reflect existing records? I mean, can we refresh the materialized view log?

2. What can we do so that the ALTER TABLE ... EXCHANGE PARTITION ... statement will not be impacted by MATERIALIZED VIEW LOGS?

Any help from you would be a great help Tom.

--Rupal
Tom Kyte
December 04, 2009 - 4:37 pm UTC

1) no

2) sometimes you cannot get something for nothing. How long is "badly impacted" in this case? what does the table look like? (please, if it has 1,000 columns do NOT list them all, just give us a good idea of what we are dealing with here). Do you know of partition change tracking - have you read about it? does it apply for you? can it apply for you?

OK

Kumar, December 20, 2009 - 11:30 am UTC

Hi Tom,
we have a requirement to keep some tables in our local database in sync with tables in another database.
For small tables we planned to use a db link for refresh, and for larger tables to create materialized views and view logs.
We want the MV to be refreshed with only the changed or newly inserted data. The base table has some millions of rows.

Presently we have this idea:

ON remote database
------------------

create materialized view log on bill_master
with primary key
including new values
/
create materialized view mv_billmst
refresh fast
as select * from bill_master
/


In our local database if we do a refresh using
select * from MV_BILLMST this would place a huge burden on
undo tablespace and other database resources.

Can you provide some technical advice?

Can a MV log be created on a remote table?
Tom Kyte
December 21, 2009 - 1:09 pm UTC

... For small tables we planned to be use a db link for refresh and for larger
tables to create materialized views and view logs.
..

why option #1? I don't get it - just use materialized views for *everything*, why complicate it with two approaches - one of which is manual and not easy to monitor?

..
In our local database if we do a refresh using
select * from MV_BILLMST this would place a huge burden on
undo tablespace and other database resources.
...

that doesn't make sense, you don't refresh that way, you refresh using a materialized view refresh, which is a stored procedure call that does the work.
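
for example, a sketch using the MV name from your post:

begin
  dbms_mview.refresh( list => 'MV_BILLMST', method => 'F' );   -- 'F' = fast refresh
end;
/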

You give this information:


... Base table is having some millions of rows.
...


which is nice to know, but provides nothing to help one answer:

... this would place a huge burden on
undo tablespace and other database resources.
....


You don't tell us what the volume of new inserts/updates/deletes would be - nor do you define your idea of "huge"

if you insert 5 rows, update 10 and deleted 2 - the refresh will not take much effort at all.

If you insert 5,000,000 rows, update 10,000,000 rows and delete 3,000,000 rows - it will take much more effort.

The best way to find out what it means to you (since it requires knowledge also of your indexing strategy, the width of the rows and other factors - none of which we really want because you can and will figure this out yourself) will be to BENCHMARK IT in your test environment.

... Can you provide some technical advice?

sure, test it, prototype it, benchmark it, do the math on the results. That way, you'll know HOW to do it and you'll know WHAT the effects will be.


... Can a MV log be created on a remote table?

You can only create them on the table itself, in the remote database, meaning you have to be connected to the remote database. A materialized view log is just like an index would be - a transparently maintained auxiliary data structure associated with the table.




OK

Kumar, December 22, 2009 - 5:45 am UTC

Hi Tom,
thanks for your response.
we have a db link bill_connect which can query the base table in the remote database.
We have a composite primary key on the master table and decided to use it for the MV log.


We have this idea.

In remote database
------------------

create a MV log on the master table.


In local database
------------------

copy all rows from master table into MV once as mentioned below


create materialized view bill_mv
refresh fast
as
select * from bill_mst@bill_connect
/


My question is :


will this MV in the local database get refreshed when the master table in the remote database
undergoes changes by means of updates/inserts, or do all objects need to be in the same database?
Tom Kyte
December 31, 2009 - 7:27 am UTC

We have this idea.

In remote database
------------------

create a MV log on the master table


that idea won't fly, it cannot work, it will not work.

The goal of the materialized view is to remove the distributed transaction from happening in the first place - if you were going to do the distributed transaction - why not just push the modified rows???

distributed transactions decrease availability, increase (massively) response times. They are to be avoided, and that is what materialized views do - avoid them in real time. They are done in the background, when a person isn't waiting for completion.


REFRESH FAST ON COMMIT

Faisal, May 12, 2011 - 3:19 am UTC

Hi Tom,

Thanks a lot for this discussion! It's been very helpful.

I am using WITH SEQUENCE, INCLUDING NEW VALUES for my aggregate MV.

The view is updated nicely whenever I insert and commit a row, but not for update/delete! It's driving me nuts.

Here is a short script.


SQL*Plus: Release 11.1.0.6.0 - Production on Thu May 12 15:19:41 2011

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Enter user-name: user/password@localhost:1521/orcl

Connected to:
Personal Oracle Database 11g Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table test (col1 char(2), col2 char(2), col3 char(2));

Table created.

SQL>
SQL> alter table test add primary key (col1, col2);

Table altered.

SQL> create materialized view log on test with sequence, rowid (col1, col3)
  2  including new values;

Materialized view log created.

SQL>
SQL> insert into test values ('01', '01', 'AA');

1 row created.

SQL> insert into test values ('01', '02', 'AA');

1 row created.

SQL> insert into test values ('01', '03', 'BB');

1 row created.

SQL> insert into test values ('01', '05', 'BB');

1 row created.

SQL> insert into test values ('01', '06', 'CC');

1 row created.

SQL> insert into test values ('02', '01', 'AA');

1 row created.

SQL> insert into test values ('02', '02', 'BB');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> create materialized view mv_test
  2  refresh fast on commit
  3  as
  4  select col1,
  5  Sum(decode(col3, 'AA', 1, 0)) AA_Count,
  6  Sum(decode(col3, 'CC', 1, 0)) CC_Count
  7  from test
  8  group by col1;

Materialized view created.

SQL> select * from mv_test;

CO   AA_COUNT   CC_COUNT
-- ---------- ----------
01          2          1
02          1          0

SQL> insert into test values ('02', '03', 'CC');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from MLOG$_TEST;

no rows selected

SQL> insert into test values ('02', '04', 'CC');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from MLOG$_TEST;

no rows selected

SQL> select * from mv_test;

CO   AA_COUNT   CC_COUNT
-- ---------- ----------
01          2          1
02          1          2

SQL> Update test set col3 = 'AA' where col1 = '02' and col2 = '03';

1 row updated.

SQL> Commit;

Commit complete.

SQL> select * from test;

CO CO CO
-- -- --
01 01 AA
01 02 AA
01 03 BB
01 05 BB
01 06 CC
02 01 AA
02 02 BB
02 03 AA
02 04 CC

9 rows selected.

SQL> select * from mv_test;

CO   AA_COUNT   CC_COUNT
-- ---------- ----------
01          2          1
02          1          2

SQL> delete from test where col1 = '02';

4 rows deleted.

SQL> commit;

Commit complete.

SQL> select * from mv_test;

CO   AA_COUNT   CC_COUNT
-- ---------- ----------
01          2          1
02          1          2

SQL>select * from MLOG$_TEST;

02 BB AAARn4AAFAADgJyAAG 01/01/00 D O 00 27
02 AA AAARn4AAFAADgJyAAH 01/01/00 D O 00 28
02 AA AAARn4AAFAADgJyAAF 01/01/00 D O 00 26
02 CC AAARn4AAFAADgJyAAI 01/01/00 D O 00 29

After an update/delete the result still remains the same, but for inserts it is able to fast refresh on commit!

Tom Kyte
May 12, 2011 - 7:51 am UTC

ops$tkyte%ORA11GR2> create materialized view mv_test
  2  refresh fast on commit
  3  as
  4  select col1,
  5  Sum(decode(col3, 'AA', 1, 0)) AA_Count,
  6  Sum(decode(col3, 'CC', 1, 0)) CC_Count
  7  from test
  8  group by col1;

Materialized view created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace function explain_mview( p_mv in varchar2 )
  2  return sys.ExplainMVArrayType
  3  PIPELINED
  4  is
  5      pragma autonomous_transaction;
  6      l_data sys.explainMVArrayType;
  7  begin
  8      dbms_mview.explain_mview( mv => p_mv, msg_array => l_data );
  9      commit;
 10      for i in 1 .. l_data.count
 11      loop
 12          pipe row( l_data(i) );
 13      end loop;
 14      return;
 15  end;
 16  /

Function created.

ops$tkyte%ORA11GR2> pause

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select capability_name,
  2         possible,
  3         case when related_text is not null then '('||related_text||') ' end || msgtxt txt
  4   from TABLE( explain_mview('MV_TEST') );

CAPABILITY_NAME                P TXT
------------------------------ - --------------------------------------------------
PCT                            F
REFRESH_COMPLETE               T
REFRESH_FAST                   T
REWRITE                        F
PCT_TABLE                      F (TEST) relation is not a partitioned table
REFRESH_FAST_AFTER_INSERT      T
REFRESH_FAST_AFTER_ONETAB_DML  F (CC_COUNT) SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ONETAB_DML  F (AA_COUNT) SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ONETAB_DML  F COUNT(*) is not present in the select list
REFRESH_FAST_AFTER_ANY_DML     F see the reason why REFRESH_FAST_AFTER_ONETAB_DML i
                                 s disabled

REFRESH_FAST_PCT               F PCT is not possible on any of the detail tables in
                                  the materialized view

REWRITE_FULL_TEXT_MATCH        F query rewrite is disabled on the materialized view
REWRITE_PARTIAL_TEXT_MATCH     F query rewrite is disabled on the materialized view
REWRITE_GENERAL                F query rewrite is disabled on the materialized view
REWRITE_PCT                    F general rewrite is not possible or PCT is not poss
                                 ible on any of the detail tables

PCT_TABLE_REWRITE              F (TEST) relation is not a partitioned table

16 rows selected.




dbms_mview.explain_mview is very useful here. You are missing some counts in there that would be necessary to maintain the fast refresh capability.
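
A hedged sketch of a definition that satisfies what the explain output asks for (COUNT(*) in the select list, plus a COUNT(expr) alongside each SUM(expr)):

create materialized view mv_test
  refresh fast on commit
as
select col1,
       count(*) cnt,
       sum(decode(col3, 'AA', 1, 0))   aa_count,
       count(decode(col3, 'AA', 1, 0)) aa_cnt,
       sum(decode(col3, 'CC', 1, 0))   cc_count,
       count(decode(col3, 'CC', 1, 0)) cc_cnt
  from test
 group by col1;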

Great!

Faisal, May 13, 2011 - 12:41 am UTC

Tom,

You are unbelievable... Thanks for the help! I now understand: for fast refresh after update/delete, the view needs COUNT(*) and a COUNT() to go with each other aggregate.

But as long as there are only inserts and no updates I can still use this MV :)

Thanks a lot again. You are the BEST.

Cheers
