
Question and Answer

Chris Saxon

Thanks for the question, Evara.

Asked: July 07, 2002 - 8:57 pm UTC

Last updated: June 25, 2020 - 11:14 am UTC

Version: 8.1.7

Viewed 50K+ times!
You Asked

Dear Tom

I created a materialized view like this:

create materialized view dtl_budget_mv
build immediate
refresh fast
enable query rewrite
as
select a.company, a.nu_budget_year, a.nu_version,
b.nu_month, b.vc_stock_code,
sum(b.nu_quantity) as nu_quantity
from mst_budget a, dtl_budget b
where a.nu_budget_year = b.nu_budget_year
and a.company = b.company
and a.vc_budget_code = b.vc_budget_code
group by a.company, a.nu_budget_year,
a.nu_version, b.nu_month, b.vc_stock_code;

And then I run my query:

SQL> alter session set query_rewrite_enabled = true;
SQL> select a.company, a.nu_budget_year, a.nu_version,
2> b.nu_month, b.vc_stock_code, sum(b.nu_quantity) as nu_quantity
3> from mst_budget a, dtl_budget b
4> where a.nu_budget_year = b.nu_budget_year
5> and a.company = b.company
6> and a.vc_budget_code = b.vc_budget_code
7> group by a.company, a.nu_budget_year,
8> a.nu_version, b.nu_month, b.vc_stock_code;

I looked at the plan, and this query uses the DTL_BUDGET_MV materialized view.
But if I update or insert into table dtl_budget and then run this query again,
the plan no longer uses the materialized view.
Why?
I tried to refresh manually; it still was not used.

Please advise me, what is wrong with my materialized view?

best regards

and Tom said...

read

http://docs.oracle.com/cd/A87860_01/doc/server.817/a85397/statem18.htm#2053386

The MV is not "refresh on commit", hence the MV is "stale": you must enable "stale tolerated" or refresh the MV before it can be used again.
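Concretely, either of these gets the query rewriting again (a minimal sketch; the MV name is from the question):

```sql
-- Option 1: let query rewrite use the MV even though it is stale
alter session set query_rewrite_integrity = stale_tolerated;

-- Option 2: bring the MV current, then rewrite works again under
-- the default (enforced) integrity level
exec dbms_mview.refresh( 'dtl_budget_mv' );
```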

Rating

  (267 ratings)


Comments

Stale

Evara, July 08, 2002 - 12:17 am UTC

alter session set query_rewrite_integrity = stale_tolerated;

If I use this statement, my result is not up to date.
I tried to refresh like this:
exec dbms_mview.refresh('dtl_budget_mv');

My query still can't use this materialized view.
Am I missing some parameter?

Tom Kyte
July 08, 2002 - 7:29 am UTC

You must have skipped something somewhere.  I would need a test case from start to finish in order to see where.  Here is one that shows

a) the MV is used
b) we make the MV "stale" and see it does not get used
c) we refresh it
d) we see the MV getting used again

I would need something similar from you showing something different.


ops$tkyte@ORA817DEV.US.ORACLE.COM> create table my_all_objects nologging
  2  as select * from all_objects;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table my_all_objects add constraint my_all_objects_pk primary key(object_id);

Table altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create snapshot log on my_all_objects with rowid ( owner ) including new values;

Materialized view log created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table my_all_objects compute statistics;

Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> create materialized view my_all_objects_aggs
  2  build immediate
  3  refresh fast
  4  enable query rewrite
  5  as
  6  select owner, count(*)
  7    from my_all_objects
  8   group by owner
  9  /

Materialized view created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table my_all_objects_aggs compute statistics;

Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> alter session set query_rewrite_enabled=true;

Session altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> alter session set query_rewrite_integrity=enforced;

Session altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace traceonly
ops$tkyte@ORA817DEV.US.ORACLE.COM> select owner, count(*)
  2    from my_all_objects
  3   group by owner;

31 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=31 Bytes=310)
   1    0   TABLE ACCESS (FULL) OF 'MY_ALL_OBJECTS_AGGS' (Cost=1 Card=31 Bytes=310)




Statistics
----------------------------------------------------------
         11  recursive calls
         12  db block gets
         18  consistent gets
          0  physical reads
          0  redo size
       2127  bytes sent via SQL*Net to client
        647  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
         31  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace off
ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into my_all_objects
  2  ( owner, object_name, object_type, object_id, created, last_ddl_time )
  3  values
  4  ( 'New Owner', 'New Name', 'New Type', 1111111, sysdate, sysdate );

1 row created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;

Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace traceonly
ops$tkyte@ORA817DEV.US.ORACLE.COM> select owner, count(*)
  2    from my_all_objects
  3   group by owner;

32 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=59 Card=31 Bytes=155)
   1    0   SORT (GROUP BY) (Cost=59 Card=31 Bytes=155)
   2    1     TABLE ACCESS (FULL) OF 'MY_ALL_OBJECTS' (Cost=10 Card=22832 Bytes=114160)




Statistics
----------------------------------------------------------
        121  recursive calls
         72  db block gets
        357  consistent gets
          0  physical reads
      10352  redo size
       2173  bytes sent via SQL*Net to client
        647  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
         32  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace off
ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> exec dbms_mview.refresh( 'my_all_objects_aggs' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace traceonly
ops$tkyte@ORA817DEV.US.ORACLE.COM> select owner, count(*)
  2    from my_all_objects
  3   group by owner;

32 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=31 Bytes=310)
   1    0   TABLE ACCESS (FULL) OF 'MY_ALL_OBJECTS_AGGS' (Cost=1 Card=31 Bytes=310)




Statistics
----------------------------------------------------------
          0  recursive calls
          4  db block gets
        319  consistent gets
          0  physical reads
          0  redo size
       2173  bytes sent via SQL*Net to client
        647  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         32  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace off
 

Sum function

Evara Samsyiar, July 09, 2002 - 4:58 am UTC

Tom, I tried your demonstration, and it succeeded:
the materialized view can be used after the refresh.
But if I include the SUM function in the materialized view,
( ....
as
select owner, count(*), sum(object_id)
...
)
then after a refresh, the query does not use the materialized view.
Why?

Thanks Tom, for your advice.

best regards

Tom Kyte
July 09, 2002 - 7:34 am UTC

sigh, still no complete example.

I'll guess.  You simply added sum(object_id) to the MV and did nothing else.

You cannot fast refresh that one, so your refresh didn't actually refresh anything. You need to add this as well:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create snapshot log on my_all_objects with rowid ( owner, object_id ) including new values;

Materialized view log created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> create materialized view my_all_objects_aggs
  2  build immediate
  3  refresh fast
  4  enable query rewrite
  5  as
  6  select owner, count(*), count(object_id), sum(object_id)
  7    from my_all_objects
  8   group by owner
  9  /

Materialized view created.


Then the view can fast refresh, the refresh will actually refresh (check user_mviews to verify that) and the query rewrite will work.

Refer to the data warehousing guide, chapter on MV's.  It goes into what must be included in the MV in order to fast refresh. 
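A quick way to verify whether the refresh actually took effect (a sketch; the column names are from the USER_MVIEWS dictionary view in 8i/9i, and the MV name is from the example above):

```sql
-- STALENESS tells you whether query rewrite can still use the MV
-- under ENFORCED integrity; AFTER_FAST_REFRESH shows what state a
-- fast refresh would leave it in; LAST_REFRESH_TYPE shows whether
-- the last refresh was FAST or COMPLETE
select mview_name, staleness, after_fast_refresh, last_refresh_type
from   user_mviews
where  mview_name = 'MY_ALL_OBJECTS_AGGS';
```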

gia dinh, July 10, 2002 - 1:53 am UTC

see oracle bug no 2088032

Tom Kyte
July 10, 2002 - 7:34 am UTC

And then scratch your head and ask "why am I reading this bug report, which doesn't have anything to do with my problem?"

The solution to this user's problem, as stated above, is to correctly create the MVIEW so that it can be fast refreshed.

Join and Aggregates function

Evara, July 11, 2002 - 12:19 am UTC

Dear Tom,
I am really confused about joins and aggregate functions in materialized views.
I read the materialized view part of the Oracle documentation, which says:
if you want a materialized view to fast refresh, count(*) must be included along with sum(expr).

I created demonstration tables like this:

drop table sales;
drop table dept;
create table dept(
deptno varchar2(10), location varchar2(30),
constraints dept$pk primary key(deptno));

create table sales(
nobukti varchar2(5), tanggal date,
deptno varchar2(10), qty number,
constraints sales$pk primary key(nobukti),
constraints sales$deptno foreign key(deptno)
references dept(deptno));

create snapshot log on dept with
rowid(deptno, location) including new values;

create snapshot log on sales with
rowid(nobukti,tanggal,deptno,qty);

drop materialized view sales_mv;

create materialized view sales_mv
build immediate
refresh fast
enable query rewrite
as
select dept.location, sum(sales.qty), count(sales.qty), count(*)
from dept, sales
where dept.deptno = sales.deptno
group by dept.location;

after I refresh :
exec dbms_mview.refresh('sales_mv','F');
select * from user_mviews;

I see that the column AFTER_FAST_REFRESH is STALE,
so my query does not use this materialized view.

What am I missing in this case? I already added count(*) and sum(expr) to the materialized view declaration, as in the Oracle documentation.

Best regards



Evara Samsyiar



Tom Kyte
July 11, 2002 - 7:35 am UTC

Well, I sort of thought I told you what was needed? Eg: count(COL), in addition to COUNT(*) is needed with SUM(col) in order to enable fast refresh.

See

http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76994/mv.htm#42194

There is a table there that says "if you use this aggregate, you need to also include this function in the select list...."

So, sum(col) requires you select count(*), count(col) .....




Refreshing MVs with 8.1.7 and 9.2

Sandra, July 11, 2002 - 4:20 pm UTC

Tom,

We are writing a reporting system, and we decided to summarize the data using MVs, so query time would be nearly instantaneous. However, when we enter a new record into the database, the insert time is long because on every DML we manually refresh the MVs. That's because we cannot tolerate any staleness; that was the tradeoff. This was very useful because our system is very query-intensive, with little DML. Therefore, we had to find some way to speed up the data entries a little. We are now using 8.1.7 and are migrating to 9.2.

Ideally we would like to use ON COMMIT, so we would not have to manually refresh our MVs. However, we have a lot of restrictions telling us that we should use ON DEMAND, like multiple joins, aggregate functions, etc.

Q1. One thing we were not able to test, and maybe you could tell us: if we use ON COMMIT, would the refresh be carried out "in the background", that is, would I have my prompt back immediately after the COMMIT in my session, not having to wait for the refresh to complete?

Q2. In case the refresh occurs in the background, can I somehow split my MV select into multi-tier views so I could get rid of the restrictions and be able to use ON COMMIT?

Q3. My final attempt uses a "dirty" approach. I have created a procedure that commits and refreshes the MVs. The refresh is done by using Java to call a background UNIX process that refreshes the MVs. It works, but I would like to do it in the database. Do you see any other solution?

Thank you very much !

Tom Kyte
July 12, 2002 - 8:04 am UTC

q1) no, on commit would mean on commit. You are asking the MV refresh to be part of YOUR transaction.

q2) not following -- but yes, in 9i you can have mv's of mv's of mv's

q3) use dbms_job, that is how we run things in the background.

Personally -- i would do this.

Use stale tolerated. Run the refresh in the background. You will see the insert in the MV shortly AFTER it was inserted. Generally, I find you can have a tolerance for stale tolerated -- if the difference is only minutes.
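A sketch of the dbms_job approach (the MV name and the 5-minute interval are placeholders; dbms_job is the pre-10g way to run background work in the database):

```sql
declare
    l_job binary_integer;
begin
    -- queue a job that refreshes the MV every 5 minutes; the job
    -- runs in its own background session, so end users never wait
    dbms_job.submit(
        job       => l_job,
        what      => 'dbms_mview.refresh( ''SALES_MV'' );',
        next_date => sysdate,
        interval  => 'sysdate + 5/(24*60)' );
    commit;   -- jobs are queued transactionally
end;
/
```

Combined with stale_tolerated, queries then rewrite against data that is at most a few minutes old.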

fast refresh in oracle817 and oracle901

Evara Samsyiar, July 12, 2002 - 10:56 pm UTC

Dear Tom,
Thanks for your advice.
Here is my script to demonstrate fast refresh on a materialized view. I created this script based on your answers above.

drop table sales;
drop table dept;
create table dept(
deptno varchar2(10),
location varchar2(30),
constraints dept$pk primary key(deptno));

create materialized view log on dept
with rowid (deptno, location)
including new values;

analyze table dept compute statistics
for table for all indexes for all indexed columns;

create table sales(
nobukti varchar2(10), tanggal date,
deptno varchar2(10), qty number default 0,
constraints sales$fk foreign key(deptno)
references dept(deptno));

create materialized view log on sales
with rowid(nobukti,tanggal,deptno,qty)
including new values;

analyze table sales compute statistics
for table for all indexes for all indexed columns;

drop materialized view sales_mv;

create materialized view sales_mv
build immediate
refresh fast
enable query rewrite
as
select a.location, count(*) as cnt,
count (b.qty) as cnt_qty, sum(b.qty) as sum_qty
from dept a, sales b
where a.deptno = b.deptno
group by a.location;

analyze table sales_mv compute statistics
for table for all indexes for all indexed columns;

alter session set query_rewrite_enabled = true;
alter session set query_rewrite_integrity = enforced;

I tested this script on Oracle 8.1.7 and found that fast refresh does not work there.
I tested again, again and again for three days.
I was frustrated: why can't my materialized view refresh using the fast refresh method?

Then I tried the test on Oracle 9.0.1... beautiful!
My materialized view fast refreshes successfully :)

So, what is the difference between Oracle 8.1.7 and Oracle 9.0.1 regarding the materialized view fast refresh method?

Please tell me, what did I do wrong in this test?

Best regards




Tom Kyte
July 13, 2002 - 10:35 am UTC

In 8i -- the documentation has this to say:

(chapter 8 of the data warehousing guide on MV's)


Materialized Views with Joins and Aggregates

In data warehouses, materialized views would normally contain one of the
aggregates shown in Example 2 below. For fast refresh to be possible, the SELECT
list must contain all of the GROUP BY columns (if present), and may contain one or more aggregate functions. The aggregate function must be one of: SUM, COUNT(x), COUNT(*),COUNT(DISTINCT x), AVG, VARIANCE, STDDEV, MIN, and MAX,
and the expression to be aggregated can be any SQL value expression.
If a materialized view contains joins and aggregates, then it cannot be fast refreshed using a materialized view log. Therefore, for a fast refresh to be possible, only new data can be added to the detail tables and it must be loaded using the direct path method.




In 9i the same documentation says:

Fast refresh for a materialized view containing joins and aggregates is possible after any type of DML to the base tables (direct load or conventional INSERT, UPDATE, or DELETE). It can be defined to be refreshed ON COMMIT or ON DEMAND. A REFRESH ON COMMIT, materialized view will be refreshed automatically when a transaction that does DML to one of the materialized views commits. The time taken to complete the commit may be slightly longer than usual when this method is chosen. This is because the refresh operation is performed as part of the commit process. Therefore, this method may not be suitable if many users are concurrently changing the tables upon which the materialized view is based.


so, what you are seeing is to be expected for a JOIN mv with aggregates: no fast refresh in 8i (unless you direct path the data), fast refresh = true in 9i.

It is a new feature
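In 8i, then, the only DML that keeps a join-and-aggregate MV fast refreshable is a direct-path load. From SQL that is the APPEND hint (a sketch; sales_staging is a hypothetical staging table):

```sql
-- a direct-path insert writes above the high-water mark and records
-- the load in the direct loader log, which an 8i fast refresh of a
-- join/aggregate MV can consume (conventional-path inserts cannot)
insert /*+ APPEND */ into sales
select * from sales_staging;
commit;   -- appended rows are not visible until commit
```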




ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view

Louis, July 26, 2002 - 5:53 am UTC

Dear Tom,
Your response was useful. I have a different issue. Please throw some light on this.
In my development environment I created a materialized view as
CREATE MATERIALIZED VIEW cc_cdd_view
BUILD IMMEDIATE
REFRESH FORCE ON COMMIT
ENABLE QUERY REWRITE
AS
SELECT CK006.CK006_CC_CODE, CK006.CK006_CC_NAME,ck006.ck006_modern_cc,CK001.CK001_CDD_CODE, CK001.CK001_CDD_DESC
FROM CK006_CC_MST CK006,
CK009_GRC_CONSTCY_MST CK009,
CK008_CDC_GRC_MST CK008,
CK007_CDD_CDC_MST CK007,
CK001_CDD_MST CK001
WHERE CK006.CK004_CONSTCY_CODE = CK009.CK004_CONSTCY_CODE
AND CK009.CK003_GRC_CODE = CK008.CK003_GRC_CODE
AND CK008.CK002_CDC_CODE = CK007.CK002_CDC_CODE
AND CK007.CK001_CDD_CODE = CK001.CK001_CDD_CODE;

Here this user is granted "create materialized view" and "query rewrite" in addition to the connect and resource roles. The related init parameters are set to:
QUERY_REWRITE_ENABLED=TRUE
QUERY_REWRITE_INTEGRITY=TRUSTED. I did not create any materialized view logs. Whenever there is a change in any of the related tables, it is properly reflected in the materialized view immediately after commit.
In my production server, in addition to all these privileges, the DBA role is also granted. Now when I try to create the above-mentioned materialized view, it gives the error "ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view".
What could be the reason that this happens only in the production server? Are there any more settings needed for creating materialized views?









Tom Kyte
July 26, 2002 - 7:02 am UTC

do a select * from v$version on "prod" and "dev" and compare the results.

Louis, July 31, 2002 - 4:21 am UTC

When I executed

select * from v$version;

I got the same result from development and production:

Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
PL/SQL Release 8.1.7.0.0 - Production
CORE 8.1.7.0.0 Production
TNS for Solaris: Version 8.1.7.0.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

So any other possibilities ???



Tom Kyte
August 05, 2002 - 9:34 am UTC

check for missing indexes, constraints, etc. Make sure you are comparing apples to apples.

Should be easy for you to post a very simple test case that works in ONE 817 and not on the other -- test case would create a test user account, grant to them, log in as them and run all ddl necessary to reproduce.

That'll help LOTS
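For what it's worth, a common cause of ORA-12054 is a missing prerequisite for ON COMMIT fast refresh. A sketch using the classic EMP/DEPT tables (an assumption; the real view joins five CK* tables): a join-only ON COMMIT MV needs a log WITH ROWID on every detail table, and the rowids selected into the MV itself:

```sql
create materialized view log on dept with rowid;
create materialized view log on emp  with rowid;

create materialized view emp_dept_mv
build immediate
refresh fast on commit
enable query rewrite
as
select emp.rowid erid, dept.rowid drid,   -- rowids of all tables required
       dept.deptno, dept.dname, emp.ename
from   emp, dept
where  emp.deptno = dept.deptno;
```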

Is fast refresh possible if ...

Ping, August 21, 2002 - 9:50 am UTC

in the mv there are some sum(decode(....))?

Tom Kyte
August 21, 2002 - 10:27 am UTC

yes.

Is fast refresh possible if there are

Ping, September 09, 2002 - 3:13 pm UTC

in the materialized view, there are some sum(decode()) like:

sum(decode(col1,1,col2,2,col3,3,col4....))?

The reason I asked is that this sum can be sum(col2), sum(col3), sum(col4)..., depending on the values of col1. To make fast refresh possible, do we need to include count(*), count(col1), count(col2), count(col3)...? Or is it just impossible to do a fast refresh at all?



Tom Kyte
September 09, 2002 - 7:57 pm UTC

I do not see how the fast refresh could work there, no.

Say you have:

create table t ( id int primary key, x int, y int, z int );

and you created an mv like:

create materialized view mv
refresh fast
as
select x, sum( decode( id, 1, y, z ) ), count(*), count(x), count(y), count(z)
from t
group by x
/

(suppose you could, you cannot).

Now, if you started with two rows in T

( 1, 100, 100, 100 )
( 2, 500, 500, 500 )

the MV will have

x sum( decode( .....
100 100
500 500

Now, you update t set id = 3 where id = 1.

How would anyone know that we need to DELETE row 1 in the MV (we don't KNOW that row belonged to ID 1, we lost that fact a long time ago) and add that deleted value into row 2?

So no, this isn't doable. It is a complex query.

Stale Tolerated

Bharath, September 23, 2002 - 9:55 am UTC

What do you mean by stale tolerated? I have gone through the Oracle documentation but I am still not getting a clear answer.


Thx
Bharath

Tom Kyte
September 23, 2002 - 10:10 am UTC

A materialized view can get out of sync with the underlying details (eg: if it is not a refresh on commit MV -- very typical, most MV's are *not* refresh on commit)

Stale Tolerated says "use the MV during query rewrite even if the MV is known to be out of sync with the details". That is, the data doesn't need to be up to the second accurate, data that was current 5 or 10 minutes (or however long) ago is OK for this report. That is "stale tolerated"
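The three integrity levels side by side, as session settings (names per the 8i documentation):

```sql
-- rewrite only with MVs known to contain current, correct data
alter session set query_rewrite_integrity = enforced;

-- also trust declared relationships (RELY constraints, dimensions)
alter session set query_rewrite_integrity = trusted;

-- rewrite even when the MV is known to be stale
alter session set query_rewrite_integrity = stale_tolerated;
```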

if drop of MV will cause purge on log table

June, September 23, 2002 - 12:58 pm UTC

Dear Tom,

I searched all topics and looked over the Oracle docs I have; nowhere is it indicated that dropping an MV on the snapshot site will cause the associated log table on the master site to be purged.

We ran into a severe production problem on the master site by filling up all archive logs (a total of 5GB). When we killed the process, the statement that was killed was a delete from one of the log tables; meanwhile, the drop of the MV failed with the error ORA-02067: transaction or savepoint rollback required. The drop of the MV is the only thing I could identify during that time frame with the OSUSER for that process.

So my question is: how does the 'drop materialized view...' statement on the snapshot site trigger the 'delete from mastersite.mlog$...' statement? We have a db link between the snapshot site and the master site. What should I do if I want to drop an MV whose log table on the remote site also serves other snapshot accounts?

any suggestion will be appreciated.

Tom Kyte
September 23, 2002 - 1:26 pm UTC

It should only clean out records relevant to IT. Just like when you refresh, the last guy that pulls -- clears the record.

Did you observe something different? That is, was it purging records OTHERS would actually need, or was it purging records that would no longer be needed since the MV wasn't going to be around anymore?

last refresh time stamp is...

June, September 23, 2002 - 2:59 pm UTC

The last refresh time stamp for the MV (on site A) that I was trying to drop is 6/28/01. After that, approximately half a million records were refreshed via fast refresh for other production snapshot sites (such as site B). I have no way to identify which records it was going to delete by the time we killed the process.

So from what this implies, even though site B has all data refreshed from the master log table and is still doing daily refreshes, when site A tried to drop an MV which had not been refreshed for a long time, Oracle internally tried to purge the data on or before 6/28/01 (about half a million rows) in the log table? What is the suggested way to reorganize a snapshot (such as moving tablespace, or drop and complete refresh) when the log table is referenced by multiple sites? Thanks very much!

Tom Kyte
September 24, 2002 - 6:59 am UTC

Why would you need to "reorg" a snapshot, especially a read only one. Anyway -- when you dropped it, you saw the normal and natural "cleaning of the logs"

the records you saw being deleted -- if I follow you -- are the 1/2 million added AFTER 6/28/01 that were propagated to site B over time but not to site A. When you drop site A -- those records are of course not needed and just like they would have been during a REFRESH of site A -- they were removed (eg: it was not that you dropped site a that would just do this -- a refresh of A would have. You let 1/2 million change records queue up. You ran out of rollback unqueuing them).

the suggestion would be not to let records queue up for over a year.

Lock type JI during refresh on commit in MV

A reader, September 23, 2002 - 4:51 pm UTC

Hi Tom,

I have a couple of views which were created over huge tables (2 gigabytes each), and these views take forever to return results. The explain plans for them are OK, but the problem is the number of records they have to scan. I created two materialized views which resolve the most complicated and common logic of those views, and I replaced those huge tables in the views with the MVs. But as soon as I implemented the MVs, all the batch processes and applications which insert into and update the big tables (the ones used by the MVs) began to experience waits for enqueue, library cache lock, row cache lock, etc. All these sessions were having problems with lock type JI on the materialized view logs on those big tables. The solution I found was dropping the materialized views and logs, and after this everything was OK again.

My question is: in an environment with a lot of inserts/updates/deletes, is there any way to have MVs refresh on commit without facing lock type JI every time many transactions need to insert/delete/update a row on the master tables?

Thanks in advance for any help


Tom Kyte
September 24, 2002 - 7:15 am UTC

If you have my book -- you would find that refresh on commit MVs are not appropriate at all for an environment where many people concurrently modify the detail tables.

Why? It is quite simple really. Suppose you have the EMP table and you wanted to maintain counts by DEPTNO within emp (eg: select deptno, count(*) from emp). You do LOTS of inserts and deletes into this emp table.

Now, you have 1,000,000 EMP records and say 1,000 DEPT records.

Now, when you REFRESH the dept rollup upon commit -- obviously that will SERIALIZE at the DEPTNO level. If two sessions both insert into DEPTNO=10, you know both cannot increment the count at that same time right? They will serialize. You have concentrated a large number of records into a small number of records -- causing big time serialization.

JI locks are locks used to refresh MV's

This is a natural and entirely un-avoidable issue (Many people are trying to update the same exact record(s)).

No query re-writing happens

Praveen, November 19, 2002 - 12:26 am UTC

Q1.
I created an MV on a copy of the all_objects table in my schema
with only the following fields:
my_all_objects = (owner, object_name, object_type, object_id)

create materialized view mv_all_obj
build immediate
enable query rewrite
refresh on commit
as
select owner, count(*) from my_all_objects group by owner;

I set the compatibility option to 8.1.0 and issued:
ALTER SESSION SET QUERY_REWRITE_ENABLED=TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY=ENFORCED; (I tried all three options too)

But still my view is not used by Oracle. No query rewriting happens!


Q2.
When trying to create a materialized view I get an error:
"ON COMMIT attribute is incompatible with other options"
When I remove the "REFRESH ON COMMIT" clause from the "CREATE MATERIALIZED VIEW" statement, the operation is successful. But the MV does not get updated after an update on the base table.

1) What are the other refresh options?
2) How are they specified/enforced?
3) What are the relative advantages/disadvantages?

Thanks in advance

Praveen

Tom Kyte
November 19, 2002 - 7:21 am UTC

q1) i see no analyze. In fact -- I see no full test case as I am prone to give to you. do it step by step like I do (like I did right above!!)

without CBO, no rewrite

q2) you know, without an EXAMPLE, it is impossible to tell you WHY.

see
http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76994/mv.htm#32933

for the general restrictions. see that entire chapter for the what/how/why's (or if you have my book "Expert one on one Oracle" i have a chapter on MV's and discuss these topics as well)





Need your help badly

VJ, February 28, 2003 - 2:08 pm UTC

Hi Tom,

I created a materialized view joining multiple tables. I enabled query rewrite on the materialized view. I am confused on one thing though.

Let's say I have 3 tables t1, t2, t3. I joined these 3 tables, created the materialized view, and grouped by t2.fld1. Now when I run a query that includes one more field, t1.fld1, either along with or without t2.fld1, the explain plan shows that it is not using the MView. That is bowling me over. Does that mean I can only use t2.fld1 in order to use the MView?

I need your help badly on this. Even if you can point me to some docs anywhere on this, i would really appreciate your help...

Thanks

VJ.

Tom Kyte
February 28, 2003 - 2:30 pm UTC

well, without a real example (like the ones i post!) it is hard to say HOWEVER, it does seem obvious from what you describe that it cannot use it in your case.

You have:


select t2.fld1, aggregates on t1, t3 columns perhaps......
from t1, t2, t3
where ...<joins>...
group by t2.fld1
/


Now it seems you are querying:

select ...
from t1, t2, t3
where ....<joins>....
AND t1.fld1 = 'foo'
group by t2.fld1
/

well -- that cannot in any way shape or form make use of the MV you created above -- t1.fld1 has been "aggregated over", rolled up, that level of detail is GONE.

That would be creating a MV:

create materialized view emp_rollup
as
select dept.deptno, sum(sal) sal
from dept, emp
where dept.deptno = emp.deptno
group by dept.deptno;


and then trying to have


select dept.deptno, sum(sal) sal
from dept, emp
where dept.deptno = emp.deptno
AND emp.job = 'CLERK'
group by dept.deptno;

use it -- well, it cannot since the level of detail needed to support CLERK is just gone, you aggregated over it, we have details at the deptno level NOT the deptno AND job level.


If that is not your situation - you need to give me a clear cut example

Need Help Badly

VJ, March 01, 2003 - 6:36 pm UTC

I understand that the detail is lost. I am sorry I did not provide complete information. But here you go...

3 tables: t1, t2, t3. t2 references t1 and t3 references t2. It is a fanning-out one-to-many relationship (t1->t2->t3).

Now, when I created the materialized view, I created it at the t2 level:
create materialized view x_mv
enable query rewrite
as
select t2.fld1, count(t3.fld2)
from t2, t3
where t2.fld1=t3.fld1
group by t2.fld1;

Now when I write a query like this, it is not using the MView:

select t1.fld1, t2.fld1, count(t3.fld2)
from t1, t2, t3
where t1.fld1=t2.fld2
and t2.fld1=t3.fld1
group by t1.fld1, t2.fld1;

I would expect it to be rewritten to use the MView, but it isn't. Please explain if I am making any mistakes...

Thanks
VJ


Tom Kyte
March 01, 2003 - 7:04 pm UTC

what I meant by


well, without a real example (like the ones i post!)


was -- hey, give me an example I can run in sqlplus. I feel like I spend half my life guessing and trying to come up with tests.

I don't know what primary /foreign keys you have OR more importantly DON'T have.

But, in any case -- is it not obvious here?

the MV has

t2.fld1

Your query you want to have use the MV references

t2.fld2

there is NO SUCH FIELD in the MV. How -- really, tell me -- HOW could you use the MV here????? You could get the wrong answer easily....


there just isn't anything to JOIN to -- you wiped out t2.fld2 in the MV, not there, nothing to join on. cannot do it.


(seems almost obvious that it could not)

But -- if you believe it should be able to then......

supply the tables for your test case
with the data
and write the query directly against the MV and compare to the query against the base tables -- showing how Oracle should rewrite the query

and then I'll comment.

A reader, June 27, 2003 - 1:05 pm UTC

Tom,

In your book Expert One-on-One Oracle, page 597, materialized views: I have a similar situation, but I want to accomplish the same in 8.0.5.

Thanks

Tom Kyte
June 27, 2003 - 1:36 pm UTC

sorry -- "new feature as of 8i", no query rewrite in 8.0

Hi Tom, do you know why this doesn't work

juancarlosreyesp@yahoo.com, September 26, 2003 - 6:26 pm UTC

I created these previously, so I recreate them again.

SQL> grant query rewrite to adm;

Grant succeeded.

SQL> DROP materialized view log on adm.personas_me;

Materialized view log dropped.

SQL> drop materialized view adm.perci;

Materialized view dropped.

SQL> create materialized view log on adm.personas_me
  2  with ROWID, (per_NOMBRE)
  3  including new values
  4  ;

Materialized view log created.

SQL> create materialized view adm.perci
  2  BUILD IMMEDIATE
  3  refresh fast
  4  ON COMMIT
  5  as select per_NOMBRE,count(*) from adm.personas_me
  6  group by per_NOMBRE;

Materialized view created.

SQL>
SQL>
SQL> alter system set query_rewrite_enabled = true;

System altered.

SQL> EXECUTE DBMS_MVIEW.REFRESH('PERCI');

PL/SQL procedure successfully completed.

SQL> select per_nombre,count(*) from adm.PERCI
  2  group by per_nombre;

PER_NOMBRE                            COUNT(*)
------------------------------------ ---------
CARLOS JOSE                                  1
LIGIA YANET                                  1
NO IDENTIFICADO                              1

SQL> select per_nombre,count(*) from adm.PERSONAS_ME
  2  group by per_nombre;

PER_NOMBRE                            COUNT(*)
------------------------------------ ---------
CARLOS JOSE                                  1
LIGIA YANET                                  1
NO IDENTIFICADO                              6

SQL>


It seems that SQL> EXECUTE DBMS_MVIEW.REFRESH('PERCI');
doesn't work, and it didn't give any error.

Tom Kyte
September 26, 2003 - 8:17 pm UTC

why the schema name everywhere but the refresh?

I tried both with and without schema

A reader, September 29, 2003 - 9:09 am UTC

Hi Tom, thanks. I am connected as ADM;
I tried with and without the schema.

SQL>  EXECUTE DBMS_MVIEW.REFRESH('ADM.PERCI');

PL/SQL procedure successfully completed.

SQL> select per_nombre,count(*) from adm.PERCI
  2     group by per_nombre;

PER_NOMBRE                            COUNT(*)
------------------------------------ ---------
CARLOS JOSE                                  1
LIGIA YANET                                  1
NO IDENTIFICADO                              1

I'm starting to think this is another bug in my database release.

Stale and Fresh

A reader, October 04, 2003 - 8:51 pm UTC

Hi Tom,

Is it possible to control the period of staleness when using query rewrite against a materialized view? For example:

If the data in this stale materialized view is less than 2 hours stale, then use query rewrite to it; otherwise, do not.


Thanks.

Tom Kyte
October 05, 2003 - 9:11 am UTC

Nope, it is either "stale" or it is "not stale" in our eyes.
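
There is no built-in staleness window, but one can be approximated by checking the MV's age before deciding to refresh. A minimal sketch, assuming the DTL_BUDGET_MV from the original question and that USER_MVIEWS exposes LAST_REFRESH_DATE (it does in 9i and later); the two-hour threshold is arbitrary:

```sql
-- Refresh the MV only if its last refresh is more than 2 hours old.
declare
  l_last_refresh date;
begin
  select last_refresh_date
    into l_last_refresh
    from user_mviews
   where mview_name = 'DTL_BUDGET_MV';

  if l_last_refresh < sysdate - 2/24 then
    dbms_mview.refresh( 'DTL_BUDGET_MV' );
  end if;
end;
/
```

Query rewrite itself still sees the MV only as "stale" or "not stale"; this just bounds how stale it can get.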

A reader, October 17, 2003 - 11:34 am UTC

Hi Tom,

I have a very quick question related to materialized view here.

1. Can we create MViews over views?
2. Can we create MViews over views on tables in a remote database?
3. What kinds of MViews (I mean FAST or COMPLETE, ON COMMIT or ON DEMAND, etc.) can we create in both of the above cases?

Thanks in advance

Tom Kyte
October 17, 2003 - 12:48 pm UTC

1) yes

ops$tkyte@ORA920> create table t ( x int primary key, y int  );
 
Table created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> create or replace view v
  2  as
  3  select * from t where x > 50;
 
View created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> create materialized view mv
  2  as
  3  select * from v;
 
Materialized view created.
 
2) yes

3) complete for mvs of views.  fast will get you


ops$tkyte@ORA920> create materialized view mv
  2  refresh fast
  3  as
  4  select * from v;
select * from v
              *
ERROR at line 4:
ORA-12015: cannot create a fast refresh materialized view from a complex query

on commit will get you
ops$tkyte@ORA920> create materialized view mv
  2  refresh fast on commit
  3  as
  4  select * from v;
select * from v
              *
ERROR at line 4:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view


but


ops$tkyte@ORA920> create materialized view mv
  2  refresh complete on demand
  3  as
  4  select * from v;
 
Materialized view created.

works 

A reader, October 17, 2003 - 1:27 pm UTC

Hi Tom,

One more question which I forgot to post earlier:-

Do we always need a materialized view log when dealing with materialized views, i.e. even if the refresh type is COMPLETE (not FAST)?

Once again thanks for the quick response.



Tom Kyte
October 17, 2003 - 2:09 pm UTC

nope.

example above demonstrated that!

a simple documented way of finding mv capabilities incl. refresh capabilities

Menon, October 18, 2003 - 1:08 pm UTC

This works in 9.0.1.0 and up.

1. run the $ORACLE_HOME/rdbms/admin/utlxmv.sql
This creates a table called MV_CAPABILITIES_TABLE
2. exec dbms_mview.explain_mview
3. select * from mv_capabilities_table or I use
select capability_name, possible, related_text, msgtxt
from mv_capabilities_table
where mvname = '<MV NAME HERE>';



oops !

Menon, October 18, 2003 - 1:09 pm UTC

the second step in the last post should be

2. dbms_mview.explain_mview ( 'YOUR_VIEW_NAME_HERE' )
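
Put together, the corrected recipe might look like this in SQL*Plus (a sketch; MY_MV is a placeholder for your own MV name):

```sql
-- one-time setup: creates MV_CAPABILITIES_TABLE in the current schema
@?/rdbms/admin/utlxmv.sql

-- populate the table for one materialized view
exec dbms_mview.explain_mview( 'MY_MV' );

-- inspect what is and is not possible, and why
select capability_name, possible, related_text, msgtxt
  from mv_capabilities_table
 where mvname = 'MY_MV';
```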



Fast refresh / refresh on commit not working

Habeeb, October 29, 2003 - 8:44 am UTC

I am using oracle 8.1.5 work group server.

While creating a materialized view, "refresh on demand" or a schedule (REFRESH START WITH SYSDATE NEXT sysdate+1/24) works.

But refresh on commit (or) fast refresh is not working.

Please clarify whether it is allowed in the 8.1.5 Workgroup Server.



Tom Kyte
October 29, 2003 - 11:15 am UTC



"not working"
"not meaningful"

how about the error message and code?

Optimizing MV on demand fast refresh

David, October 29, 2003 - 12:05 pm UTC

Hi Tom,
I use 9i R2 and created a materialized view on a 200-million-record EVENTS table.
MV_DD_EVENTS is a simple GROUP BY to deduplicate events and is used by smaller 2nd-level MVs. I can't merge the deduping into the other MVs because it would make the MV non-fast-refreshable.

I need to refresh MV_DD_EVENTS after each batch insert, which I want to perform frequently (say, every 5 minutes, 3000-5000 records).
The problem is that the refresh takes about 15 minutes regardless of the number of records inserted. I don't have visibility into the refresh process; autotrace and SQL_TRACE don't show what happens within it, because it's a PL/SQL package.
I have a strong feeling that it spends most of the time doing a full table scan of the 200 million records, because I see lots of IO via iostat and the refresh time varies only slightly whether I insert 1K or 500K records (15 minutes vs. 16 minutes).
How do I get a visibility into refresh process to tune it?

I'm pretty confident in my ability to tune the query, but I don't know what's happening in DBMS_MVIEW.REFRESH. Even when I get the query for the session that runs DBMS_MVIEW.REFRESH from V$SQLAREA, I can't run it standalone to tune it.

I really need your advice because I have exhausted my troubleshooting skills on this one :(
Am I missing something obvious?

Thank you very much for your help, and sorry if I omitted some important details; please let me know.
Appreciate your time,
David



Tom Kyte
October 29, 2003 - 2:08 pm UTC

sql_trace certainly does show us the SQL executed. Turn on sql_trace and let's see what it is doing.

MV

ARC, October 30, 2003 - 12:02 pm UTC

Hi Tom,
Could you please explain:
I have an MV; can I write a query to access the MV directly, or should I make use of the MV in a query?

Thanks

ARC

Tom Kyte
October 30, 2003 - 9:39 pm UTC

you "can" write a query directly

but

the sheer "beauty" of the MV is that you need NOT to.

they work like indexes -- applications don't need to know they even exist, they will be used auto-magically

Fast refresh not working - materialized view

Habeeb, October 31, 2003 - 6:10 am UTC

SQL> create materialized view bbc1_mv
  2  build immediate
  3  refresh on commit
  4  enable query rewrite
  5  as

Error I got while creating:
ORA-00439: feature not enabled: Materialized view rewrite

Version 8.1.5 work group server
 

Tom Kyte
October 31, 2003 - 8:09 am UTC

that indicates 100% that MV rewrite is not supported in workgroup server -- it is about query REWRITE not the refresh mode at this point.

the feature is an Enterprise Edition feature.

materialized view and locking

Riyaz, October 31, 2003 - 6:12 am UTC

Will the tables used get locked when the materialized view is refreshing?

Is this applicable to all kinds of materialized views?

Tom Kyte
October 31, 2003 - 8:10 am UTC

no they won't -- the base tables will not get locked.

an updatable MV will lock the MVs during the refresh, however.

RE: Optimizing MV on demand fast refresh

David Katsapov, November 04, 2003 - 1:59 pm UTC

Tom,
I ran sql trace and plsql trace;
here is the output, but it has nothing on the actual MV refresh operation.
The trace can't fit into this post, so I provide the link:
</code> http://www.geocities.com/katsapov/oracle/FAST_REFRESH_trace.html <code>

At the end I put the sql statement from V$SQLAREA that looks most like the actual refresh operation.

You wrote:
>sql_trace certainly does show us the SQL executed. turn on >sql_trace and lets
>see what it is doing.



Tom Kyte
November 04, 2003 - 5:30 pm UTC

can you dump some of the relevant stats from v$sql for that merge? cpu / elapsed time / executions in particular

I wrote a simple example for MV test

sonali, November 05, 2003 - 3:40 pm UTC

I am writing this for the 1st time; we have an Oracle 9.2 database.
I did this 1st -
DROP MATERIALIZED VIEW LOG ON mwebwork;
CREATE MATERIALIZED VIEW LOG ON mwebwork WITH ROWID, SEQUENCE
(work_id, WORK_ENTITY_TYPE, work_level, work_currency)
INCLUDING NEW VALUES;

work_id is Primary key and no FK in this example.

SQLWKS> CREATE MATERIALIZED VIEW mv_mwebwork
2> BUILD DEFERRED
3> REFRESH FAST ON DEMAND
4> AS
5> select count(*), Work_ID, Work_Level, Work_Entity_Type, Work_Currency
6> from mwebwork
7> where work_entity_type in (3, 4)
8> group by Work_ID, Work_Level, Work_Entity_Type, Work_Currency
9> ;
Statement processed.
SQLWKS> exec dbms_mview.refresh('mv_mwebwork','F');
exec dbms_mview.refresh('mv_mwebwork','F')
*
ORA-00900: invalid SQL statement
SQLWKS> execute dbms_mview.refresh('mv_mwebwork','F');
ORA-12057: materialized view "V52DEVOR"."MV_MWEBWORK" is INVALID and must complete refresh
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 794
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 851
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 832
ORA-06512: at line 2
Why do I get this error? What am I missing?
I really want to do fast refresh, as these MVs are going to have a huge number of records to process, so I want to refresh on demand (we will run this, say, twice a day) and just update the things that changed.
Thanks
Sonali

Tom Kyte
November 06, 2003 - 7:26 am UTC

you did a deferred build?  so the MV doesn't "exist" really.  you need to do a complete refresh FIRST in order to FAST (incrementally -- which may not necessarily be faster than a complete, just incremental) refresh in the future!



but it is funny here -- if work_id is the primary key, to what end is this MV in the first PLACE????  count(*) = 1 in all cases??

ops$tkyte@ORA920LAP> create table mwebwork
  2  ( work_id int primary key,
  3    work_level varchar2(10),
  4    work_entity_type int,
  5    work_currency varchar2(10),
  6    data varchar2(10)
  7  )
  8  /

Table created.

ops$tkyte@ORA920LAP> exec gen_data( 'MWEBWORK', 100 );

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> update mwebwork set work_entity_type = 3;

100 rows updated.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> CREATE MATERIALIZED VIEW LOG ON mwebwork WITH ROWID, SEQUENCE
  2  (work_id, WORK_ENTITY_TYPE, work_level, work_currency)
  3  INCLUDING NEW VALUES;

Materialized view log created.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> CREATE MATERIALIZED VIEW mv_mwebwork
  2   BUILD DEFERRED
  3   REFRESH FAST ON DEMAND
  4   AS
  5   select count(*), Work_ID, Work_Level, Work_Entity_Type, Work_Currency
  6   from mwebwork
  7   where work_entity_type in (3, 4)
  8   group by Work_ID, Work_Level, Work_Entity_Type, Work_Currency
  9  /

Materialized view created.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> select count(*) from mv_mwebwork;

  COUNT(*)
----------
         0

ops$tkyte@ORA920LAP> exec dbms_mview.refresh( 'MV_MWEBWORK', 'F' );
BEGIN dbms_mview.refresh( 'MV_MWEBWORK', 'F' ); END;

*
ERROR at line 1:
ORA-12057: materialized view "OPS$TKYTE"."MV_MWEBWORK" is INVALID and must complete refresh
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 794
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 851
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 832
ORA-06512: at line 1


ops$tkyte@ORA920LAP> select count(*) from mv_mwebwork;

  COUNT(*)
----------
         0

ops$tkyte@ORA920LAP> exec dbms_mview.refresh( 'MV_MWEBWORK', 'C' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> select count(*) from mv_mwebwork;

  COUNT(*)
----------
       100

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> exec gen_data( 'MWEBWORK', 5 );

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> update mwebwork set work_entity_type = 3 where work_id not in ( select work_id from mv_mwebwork );

5 rows updated.

ops$tkyte@ORA920LAP> select count(*) from mv_mwebwork;

  COUNT(*)
----------
       100

ops$tkyte@ORA920LAP> exec dbms_mview.refresh( 'MV_MWEBWORK', 'F' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> select count(*) from mv_mwebwork;

  COUNT(*)
----------
       105
 

Thanks so much

Sonali, November 06, 2003 - 9:04 am UTC

If I got it right, did you say to take out count(*)?
If I take out count(*), I get this error, so I thought that for FAST REFRESH I would need count(*), even when work_id is the PK and it's always going to be 1.
SQLWKS> CREATE MATERIALIZED VIEW mv_mwebwork
2> BUILD DEFERRED
3> REFRESH FAST ON DEMAND
4> AS
5> select Work_ID, Work_Level, Work_Entity_Type, Work_Currency
6> from mwebwork
7> where work_entity_type in (3, 4)
8> ;
from mwebwork
*
ORA-23415: materialized view log for "V52DEVOR"."MWEBWORK" does not record the primary key

Other question: how come a fast refresh, which is incremental, takes the same time as a complete refresh (which is going to delete everything 1st and then re-add everything)?

What does enable query rewrite do? I read lots of Oracle documents but did not get what exactly it does.

Thanks again

Bulk insert: can switching off MV help

Saha, December 09, 2003 - 11:27 pm UTC

Consider a situation where an OLTP environment switches temporarily to a batch environment for conversion. A huge volume of data will be inserted and updated in this short span of time. There are MVs on these tables. Will switching off the MV refresh during this period help? After the conversion we can refresh the MVs. (Similar to indexes being disabled and enabled before and after a heavy insert.)

If so, how to do this?

Tom Kyte
December 10, 2003 - 3:00 am UTC

will switching off mv refresh make the load faster?

yes, no, maybe.

if you have MV logs, they'll be maintained unless you drop them.
if you drop them, you'll have to recreate them and do FULL (not incremental) refreshes

might the FULLs be faster than incrementals? yes, they could be, but maybe not. it really depends on how much there is to do.


Just like with indexes, you'll have to benchmark your situation to see if dropping and recreating the MVs is best for you. (or just dropping and recreating the logs and doing a full refresh is best)
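
For the "drop the logs and fully refresh afterwards" variant, the benchmark might be sketched like this (assuming the DTL_BUDGET table and DTL_BUDGET_MV from the original question; the single-table WITH ROWID log is illustrative only, your logs may record more):

```sql
-- before the batch load: stop incremental log maintenance
drop materialized view log on dtl_budget;

-- ... run the bulk insert/update conversion here ...

-- after the load: recreate the log, then force a COMPLETE ('C')
-- refresh, since the log no longer covers the batch changes
create materialized view log on dtl_budget with rowid;
exec dbms_mview.refresh( 'DTL_BUDGET_MV', 'C' );
```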

Limitation of MV

A reader, December 10, 2003 - 1:21 pm UTC

Will I be able to use a materialized view if I have a user defined function in the select clause of the query ?

Tom Kyte
December 10, 2003 - 4:13 pm UTC

yes


big_table@ORA9IR2> alter session set query_rewrite_enabled=true;

Session altered.

big_table@ORA9IR2> alter session set query_rewrite_integrity=TRUSTED;

Session altered.

big_table@ORA9IR2>
big_table@ORA9IR2> create or replace function foo( x in varchar2 ) return varchar2
2 DETERMINISTIC
3 as
4 begin
5 return x;
6 end;
7 /

Function created.

big_table@ORA9IR2>
big_table@ORA9IR2> set autotrace traceonly
big_table@ORA9IR2> select foo(status), count(*) from big_table group by status;



Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2849 Card=999620 Bytes=4998100)
1 0 SORT (GROUP BY) (Cost=2849 Card=999620 Bytes=4998100)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1379 Card=999620 Bytes=4998100)




Statistics
----------------------------------------------------------
301 recursive calls
0 db block gets
14440 consistent gets
13968 physical reads
300 redo size
444 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
14 sorts (memory)
0 sorts (disk)
1 rows processed

big_table@ORA9IR2> set autotrace off
big_table@ORA9IR2> pause
big_table@ORA9IR2>
big_table@ORA9IR2> create materialized view big_table_mv
2 build immediate
3 enable query rewrite
4 as
5 select foo(status), count(*)
6 from big_table
7 group by status
8 /

Materialized view created.

big_table@ORA9IR2> analyze table big_table_mv compute statistics;

Table analyzed.

big_table@ORA9IR2> pause

big_table@ORA9IR2>
big_table@ORA9IR2> variable x varchar2(20);
big_table@ORA9IR2> exec :x := 'VALID';

PL/SQL procedure successfully completed.

big_table@ORA9IR2>
big_table@ORA9IR2> set autotrace traceonly
big_table@ORA9IR2> select foo(status), count(*) from big_table group by status;


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=7)
1 0 TABLE ACCESS (FULL) OF 'BIG_TABLE_MV' (Cost=2 Card=1 Bytes=7)






Limitations - Contd. from previous thread

A reader, December 10, 2003 - 1:47 pm UTC

Assuming the materialized view is replicating data, will I be able to query the MV rather than the underlying base tables (i.e., without the need for setting query_rewrite)?

Tom Kyte
December 10, 2003 - 4:16 pm UTC

yes
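
That is, the MV can simply be named in the query like an ordinary table -- no query_rewrite settings are involved. A minimal sketch against the DTL_BUDGET_MV from the original question (the bind variable is hypothetical):

```sql
select company, nu_budget_year, nu_month, vc_stock_code, nu_quantity
  from dtl_budget_mv
 where company = :b_company;
```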

Don't know why it didn't work

A reader, December 11, 2003 - 4:29 pm UTC

Hi Tom,

Is there any limitation on using an MV with the RBO, with query_rewrite/query_rewrite_integrity not set? If not, I am planning to use the MV as a data aggregation tool whereby at EOD I can run dbms_job to populate the MV in an OLTP environment. Can you think of any pitfalls to this approach?


orcl9i> drop snapshot big_table_mv
2 /

Materialized view dropped.

Elapsed: 00:00:00.03
orcl9i> drop table big_table
2 /

Table dropped.

Elapsed: 00:00:00.00
orcl9i>
orcl9i> create table big_table as select * from all_objects
2 /

Table created.

Elapsed: 00:00:01.09
orcl9i> alter session set query_rewrite_enabled=true;

Session altered.

Elapsed: 00:00:00.00
orcl9i>
orcl9i> alter session set query_rewrite_integrity=TRUSTED;

Session altered.

Elapsed: 00:00:00.00
orcl9i>
orcl9i> create or replace function foo( x in varchar2 ) return
2 varchar2
3 DETERMINISTIC
4 as
5 begin
6 return x;
7 end;
8 /

Function created.

Elapsed: 00:00:00.00
orcl9i>
orcl9i> set autotrace traceonly
orcl9i> select foo(status), count(*) from big_table group by status;

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (GROUP BY)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE'




Statistics
----------------------------------------------------------
22 recursive calls
0 db block gets
360 consistent gets
345 physical reads
0 redo size
446 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
1 rows processed

orcl9i>
orcl9i> create materialized view big_table_mv
2 build immediate
3 enable query rewrite
4 as
5 select foo(status), count(*)
6 from big_table
7 group by status
8 /

Materialized view created.

Elapsed: 00:00:00.03
orcl9i>
orcl9i> analyze table big_table_mv compute statistics;

Table analyzed.

Elapsed: 00:00:00.00
orcl9i>
orcl9i> variable x varchar2(20);
orcl9i> exec :x := 'VALID';

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
orcl9i>
orcl9i> set autotrace traceonly
orcl9i> select foo(status), count(*) from big_table group by status;

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (GROUP BY)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE'




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
349 consistent gets
345 physical reads
0 redo size
446 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

orcl9i>
orcl9i>
orcl9i>


Tom Kyte
December 11, 2003 - 6:01 pm UTC

umm, beyond the fact that MV's don't kick in UNLESS you are using the CBO -- nope, no problems with RBO

the RBO does not "see" them

you HAVE to use the CBO.

A reader, December 12, 2003 - 3:58 pm UTC

Quote: "beyond the fact that MV's don't kick in UNLESS you are using the CBO, RBO does not "see them"". Could you please clarify why MV's cannot be used directly with the RBO as a summary table mechanism? I will not need query_rewrite features.

Tom Kyte
December 13, 2003 - 11:27 am UTC

because the rbo doesn't see mv's, partitioned tables, iots, bitmapped indexes, function based indexes, etc etc etc etc etc.

let me put it this way. If you have the need for mv's, you have an even greater need for the CBO with all of its advanced access paths.

rbo = NO mv's, no hash joins, no bitmaps, no <lots and lots and lots of stuff>

Fast Refresh

A reader, December 15, 2003 - 6:11 pm UTC

Hi Tom,

Thanks for replying to the previous threads. I understand that having a user-defined function will nullify the fast refresh option. I have a union-all query with a user-defined function declared. Just populating the MV takes about 2 hours. Is there any way to speed this up / overcome it?

Tom Kyte
December 16, 2003 - 7:03 am UTC


i find many times the UDF can be written in straight sql. give me an example to look at.

Another Solution

A reader, December 16, 2003 - 11:32 am UTC

Hi Tom,
Thanks for your reply. I thought of creating the materialized view without the UDF and creating a view on top of the MV with the UDF. Hope it will work. Another issue I am having trouble with is fast refresh on a union-all query. I did create an MV log, and also included the rowids of all tables accessed, and a marker in each query. The tables in the from clause come from multiple schemas and I have prefixed them with the schema name. Still to no avail. Am I still missing something?

Tom Kyte
December 16, 2003 - 1:42 pm UTC

if you provide me a

o concise
o yet 100% complete

test case to work with (try to reproduce with EMP and DEPT for example) that I can run, I'll try to tell you if

a) its not supposed to work (and why)
b) its a "problem" that needs to be fixed.

A reader, December 17, 2003 - 8:58 am UTC

drop materialized view log on emp;
drop materialized view log on emp2;

create materialized view log on emp with rowid;
create materialized view log on emp2 with rowid;

drop materialized view mv_scott_test
/

create materialized view mv_scott_test build immediate refresh complete on demand
as select
1 umarker, a.rowid arowid, b.rowid browid, a.empno, a.ename, a.sal
from emp a, emp2 b
where a.empno = b.empno
union all
select 2 umarker, a.rowid arowid, b.rowid browid, a.empno, a.ename, a.sal
from emp a, emp2 b
where a.empno = b.empno
/

alter materialized view mv_scott_test refresh fast on demand
/
exec dbms_mview.refresh('mv_scott_test');
*
ERROR at line 1:
ORA-12015: cannot create a fast refresh materialized view from a complex query

Tom Kyte
December 18, 2003 - 8:23 am UTC

ops$tkyte@ORA920> create table emp as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA920> create table emp2 as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> alter table emp add constraint emp_pk primary key(empno);
 
Table altered.
 
ops$tkyte@ORA920> alter table emp2 add constraint emp2_pk primary key(empno);
 
Table altered.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> create materialized view log on emp with rowid;
 
Materialized view log created.
 
ops$tkyte@ORA920> create materialized view log on emp2 with rowid;
 
Materialized view log created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> create materialized view mv_scott_test
  2  build immediate<b>
  3  refresh fast</b>
  4  on demand
  5  as
  6  select
  7  1 umarker, a.rowid arowid, b.rowid browid, a.empno, a.ename, a.sal
  8  from emp a, emp2 b
  9  where a.empno = b.empno
 10  union all
 11  select 2 umarker, a.rowid arowid, b.rowid browid, a.empno, a.ename, a.sal
 12  from emp a, emp2 b
 13  where a.empno = b.empno
 14  /
 
Materialized view created.

ops$tkyte@ORA920> exec dbms_mview.refresh('mv_scott_test');
 
PL/SQL procedure successfully completed.


<b>don't know your version.  don't know your tables.  don't know your keys... but -- I can get it to work</b>

 

Materialized View - Fast Refresh - ORA-04030

KJ, December 29, 2003 - 2:37 pm UTC

Hi Tom,
We have an Informatica workflow executing unix scripts that refresh 7 materialized views around the same time (within an interval of 30 mins). One particular mview, m_trade_fact (the master table had 25 million rows), gave this error:
"
ORA-12012: error on auto execute of job 1094
ORA-12008: error in materialized view refresh path
ORA-12801: error signaled in parallel query server P010
ORA-04030: out of process memory when trying to allocate 56 bytes (cursor work he,kenv: ktfcsnp)
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 189
ORA-06512: at line 1
"
The script itself had this error
"ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 794
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 851
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 832
ORA-06512: at line 1"

Though it gave this error, when I look at all_mviews the status was VALID and the refresh_type was COMPLETE. (The script does a fast refresh.)

We placed a TAR on this and sent dumps and logs. We haven't got any suggestion on this yet, except that they referred to Bug 3282771 - which I could not get from Metalink.

I added a sleep of 10 mins (since I know for sure that at least 5 of the 7 mviews get refreshed within this time) within the script that refreshes m_trade_fact. It seems to be working fine.

My question is:
Is there any init parameter that I should tune so that the above error doesn't happen? If so, which one? Anything that I need to look at in particular? Any suggestions?
Thanks for your time and I appreciate your help.

Regards


Tom Kyte
December 29, 2003 - 5:46 pm UTC

that bug is "not really a bug" bug.

Basically -- you are running out of memory. The fact that staggering them works indicates this is so (it falls back to a complete refresh since the "changes only" refresh failed).

By staggering them -- not having them run all at the same time -- you have enough ram.

Running them all at the same time -- you run out of RAM. that is all. you would decrease the size of the SGA, look at the pga_aggregate_target or sort area size/hash area size, look at what else is going on at that time, look at how much memory you actually have available on the machine in order to avoid running out.

Followup - MView and 04030

KJ, December 30, 2003 - 3:23 pm UTC

Hi Tom
Thanks for your response and suggestion. Here are some statistics

We have two boxes.
Both BOX1 and BOX2 have 16GB RAM
BOX1 hosts Informatica and the unix scripts.
BOX2 hosts two databases ODS and DM
It is only the informatica workflow that is being executed at this time. No other process runs.

ODS and DM are setup the same way

Total System Global Area 3223820768 bytes
Fixed Size 735712 bytes
Variable Size 2164260864 bytes
Database Buffers 1056964608 bytes
Redo Buffers 1859584 bytes

PGA Aggregate Target : 1048576000
sort_area_size : 524288
hash_area_size : 1048576

It looks like there is sufficient RAM. Should we increase the sort_area_size? Any suggestion with any numbers?
Thanks so much
Regards

Tom Kyte
December 30, 2003 - 4:50 pm UTC

what is the workarea policy set to? manual or auto?

what is the OS?


Materialized view and 04030

KJ, December 31, 2003 - 4:26 pm UTC

Hi Tom

workarea_size_policy is AUTO
and the OS is SunOS 5.8

Thanks & Happy New Year!!!


Tom Kyte
December 31, 2003 - 5:12 pm UTC

is the OS 32 or 64 bit at the release....

any ulimits in place?

Followup MView and 04030

KJ, January 02, 2004 - 2:34 pm UTC

Hi Tom

Oracle release is :
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production

maximum file descriptor = 4096
maximum size of stack segment = 8192

Tom, I see some ORA-07445 errors in the alert log (not due to the MView, but some other select statements or index enable parallel; I could see it happening a few times in the last three months). I will place a separate TAR on this with Oracle. I thought I would mention it since it could have some bearing on the mview failure as well.

Thanks for your time & help.
Regards

Tom Kyte
January 02, 2004 - 2:37 pm UTC

use

$ ulimit -a

cut & paste

Followup

KJ, January 02, 2004 - 4:36 pm UTC

Here it is

$ ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) 0
nofiles(descriptors) 4096
vmemory(kbytes) unlimited

Thanks & Regards


Tom Kyte
January 02, 2004 - 4:46 pm UTC

well, I researched a bit more and found an issue. the pga_aggregate_target is going to give you a soft limit of about 1 gig.

In your case -- i would suggest this:

alter session to set the workarea policy to manual.
set nice beefy sort/hash area sizes

update your tar with <Bug:3153463> and have the analyst look at that to confirm my suspicions.
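
In that session, the suggested change might be sketched like this (the work-area sizes are illustrative only and should be benchmarked for your load):

```sql
-- opt this session out of the automatic PGA target
alter session set workarea_size_policy = manual;

-- nice beefy manual work areas (example values: 100 MB each)
alter session set sort_area_size = 104857600;
alter session set hash_area_size = 104857600;

-- then run the refresh in this session
exec dbms_mview.refresh( 'M_TRADE_FACT' );
```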

Followup

KJ, January 05, 2004 - 3:16 pm UTC

Thanks Tom.
Documentation says "For workarea policy: Specifying MANUAL may result in sub-optimal performance and poor PGA memory utilization."

Can I update the workarea policy to manual and increase sort_area_size to 1M? Should I set this only for the sessions that refresh the materialized views?

I did update the tar with the bug #.
Thanks for your help Tom. Appreciate your time.
Regards

Tom Kyte
January 06, 2004 - 7:39 am UTC

you can alter a session, set workarea policy to manual and your session will use the SAS and HAS


Yes, I would only set this for the sessions in question -- but SAS and HAS don't have to be limited to 1m here.

The problem is the pga aggregate target is limited to 1gig currently -- you have gobs of ram, you ran out "artificially" I believe due to this limit.

Followup

KJ, January 05, 2004 - 3:44 pm UTC

Hi Tom,

we had some ORA-07445 messages in the alert log, for which I opened another TAR. The analyst referred to 2805580 for the patch #. The analyst did mention that
"* Does not occur when pga_aggregate_target is off."

Is it better to have the pga_aggregate_target as OFF?

I hope this isn't diverting from the topic too much.
Also I am going through your archives for more information on this one (Difference between automatic vs manual PGA setting).

Thanks

Tom Kyte
January 06, 2004 - 7:50 am UTC

i don't see such a patch, there is a bug by that number but it is against 10. not sure what that number is for?

it is better to have it set to auto with an appropriate pga_aggregate_target for most systems.

Followup: ORA-04030

KJ, January 06, 2004 - 12:38 pm UTC

Hi Tom
That is a bug# which doesn't have much information.
<Bug:2805580> SKGM.C: SEGV IN KGHALF() - LRGI3:TKITPART.TSC:TKITPART3.SQL

1. Tom, does Bug# 3153463 imply that I could set pga_aggregate_target to more than 1GB and this wouldn't occur?

2. If I set SAS and HAS for this particular session (MView refresh) and there are other sessions invoked around the same time, will there be issues with PGA workareas?

3. These jobs get executed between 5-7 AM. I see that the "over allocation count" is 0, which is ideal, right? There are very few one-pass executions starting with PGA size 4MB <= PGA < 8MB. This implies that 1GB of pga_aggregate_target is good enough, right?

I am sorry to bother you with so many questions. I am looking for some handle here to get this rectified.
Thanks & Appreciate your time

Tom Kyte
January 06, 2004 - 2:17 pm UTC

1) no, bug 3153463 implies that pga_aggregate_target is putting a limit of 1 gig on you

2) if you are using workarea_size_policy = manual, you are not using the automatic workareas, so the other sessions should be unaffected

3) 1gig right now is the biggest you can go in 9i (there are patches that can change that -- thats why i pointed you to this 3153463 and support).

Thanks Tom

KJ, January 06, 2004 - 3:10 pm UTC

Thanks Tom. You have been responsive and of great support. I did not get much response from the Analyst except that it is an open issue.

I appreciate your time and help.
Thanks again for your suggestions.
Best regards

why this does not work?

A reader, January 08, 2004 - 6:47 am UTC

Hi

Do you know why I am getting this error?

CREATE OR REPLACE PROCEDURE MV_REFRESH(P_OWNER VARCHAR2, P_NOM_VM VARCHAR2)
AS
BEGIN
EXECUTE IMMEDIATE 'alter session set workarea_size_policy = manual';
EXECUTE IMMEDIATE 'alter session set hash_area_size = 67108864';
EXECUTE IMMEDIATE 'alter session set sort_area_size = 67108864';
dbms_mview.refresh(tab => P_OWNER||'.'||P_NOM_VM,
method = 'c');
EXCEPTION
WHEN OTHERS THEN dbms_output.put_line(sqlcode);
END;
/

LINE/COL ERROR
-------- -----------------------------------------------------------------
7/4 PL/SQL: Statement ignored
8/23 PLS-00201: identifier 'METHOD' must be declared

Tom Kyte
January 08, 2004 - 1:29 pm UTC

method =

instead of

method =>

perhaps.

It is trying to pass a boolean value that is the result of "method = 'c'" but you have no variable "method" i suppose.
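
For comparison, the call compiles once the named-parameter arrow is used -- the same call from the question, corrected:

```sql
dbms_mview.refresh(tab    => P_OWNER||'.'||P_NOM_VM,
                   method => 'c');
```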

mv refresh and stats

A reader, January 22, 2004 - 10:20 am UTC

Hello Sir,
After we refresh an MV do we also need to explicitly
analyze the mv for updated stats on table and indexes or this is built in the refresh (dbms_mv.refresh) itself ?

Tom Kyte
January 22, 2004 - 6:43 pm UTC

you'd probably want to alter mv_name monitoring and gather stale stats on the schema periodically. the refresh won't gather stats -- and in fact, you might not need to gather stats -- but monitoring+gather stale will get them when they are needed.
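
A minimal sketch of that approach (the schema and MV names are placeholders):

```sql
-- Track DML volume on the MV's container table
ALTER TABLE my_mv MONITORING;

-- Periodically gather stats only where they have gone stale
BEGIN
  dbms_stats.gather_schema_stats(ownname => 'MY_SCHEMA', options => 'GATHER STALE');
END;
/
```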

Changes in underlying tables

A reader, February 25, 2004 - 5:45 pm UTC

What is the best way to get structural changes reflected in materialized views? If a column gets added to a table we want materialized view to show that column as well, is that even possible without recreating the MV ?

Tom Kyte
February 25, 2004 - 7:25 pm UTC

you have to recreate the MV -- just like you would a real view. The MV doesn't go bad after you add the column -- but your desire to add the column to the MV will require a rebuild.



refresh_all_mview

SD, April 16, 2004 - 4:29 am UTC

Hi,
I've executed following code,
Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.4.0 - 64bit Production

SQL> declare
  2  v_mv_refresh_failures integer:=0;
  3  begin
  4  DBMS_MVIEW.REFRESH_ALL_MVIEWS(v_mv_refresh_failures);
  5  end;
  6  /
declare
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 617
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 1078
ORA-06512: at line 4


Could you pls. tell me what grants/privs. should i give to this user.

TIA 

Tom Kyte
April 16, 2004 - 7:39 am UTC

generally means you do not have access to the snapshot logs and or base tables for one or more of the snapshots.


You might enable sql_trace=true, run it and inspect the trace file looking for the failed statement.
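
A sketch of that diagnostic session (the refresh call is just the one from the question):

```sql
ALTER SESSION SET sql_trace = true;

DECLARE
  v_mv_refresh_failures INTEGER := 0;
BEGIN
  DBMS_MVIEW.REFRESH_ALL_MVIEWS(v_mv_refresh_failures);
END;
/

ALTER SESSION SET sql_trace = false;
-- then inspect the generated trace file in user_dump_dest for the failing statement
```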

mv for replication to guard against data loss

Anandhi, April 29, 2004 - 9:41 am UTC

Our 8i database has a particular schema which is considered very critical; no data loss can be afforded. The previous night's backup is good enough for the other schemas. We do not want the 8i standby feature, as it might involve some data loss due to loss of redo logs.

I am toying with the idea of using materialized views (simple one-to-one mappings of views to tables in the relevant schema) with ON COMMIT refresh attribute for replicating the critical schema only, so that in case of a database recovery scenario, we can recover the entire database from backup, and then just drop and import the critical schema from the MVs in the replicated database.

Questions:
1. From your reply to a previous query on this site, I understand that the MV becomes part of a transaction involving base tables at the master site. Does it mean I can be assured of absolutely no data loss for the relevant tables?
2. If the site involves around 1000 transactions per day, would there be any performance degradation as a consequence of using refresh on commit.

Thanks

Tom Kyte
April 29, 2004 - 10:37 am UTC

MVs are asynchronous as well -- over a dblink. You cannot have "on commit refresh" in a distributed environment.


You need to get to 9i/10g anyhow -- defect resolution support runs out this year on 8i. You *need* 9i/10g data guard if you want this kind of protection.

Otherwise, you'd be doing SYNCHRONOUS replication and your availability and performance would be very much (very, very much) impacted -- in a negative sense.

Only refresh first Time

A reader, April 29, 2004 - 7:55 pm UTC

Hi Tom, sorry if this is a simple question, but I read what I could and can't understand it. Please explain if you know why.
oracle 9.2.0.2 standard edition

The first time I do an insert or delete, the materialized view is updated; after that, it is never updated again. I tried several times.

thanks.
SQL> alter session set query_rewrite_enabled = true;

Session altered.

SQL> alter session set query_rewrite_integrity = enforced;

Session altered.

SQL>
SQL> drop MATERIALIZED VIEW LOG ON fon.comcli_rw;

Materialized view log dropped.

SQL> CREATE MATERIALIZED VIEW LOG ON fon.comcli_rw
  2  --WITH ROWID,PRIMARY KEY (COC_COMISION)
  3  WITH ROWID (coc_codcli,COC_COMISION)
  4  INCLUDING NEW VALUES;

Materialized view log created.

SQL>
SQL> drop MATERIALIZED VIEW FON.MVW_ERP_CTPE;

Materialized view dropped.

SQL> CREATE MATERIALIZED VIEW FON.MVW_ERP_CTPE
  2  BUILD IMMEDIATE
  3  REFRESH FAST ON COMMIT
  4  AS
  5  SELECT coc_codcli,sum(coc_comision) FROM
  6  fon.comcli_rW
  7  group by coc_codcli
  8  /

Materialized view created.

SQL>
SQL> SELECT * FROM FON.MVW_ERP_CTPE WHERE COC_CODCLI =9479880;

COC_CODCLI SUM(COC_COMISION)
---------- -----------------
   9479880            959.04

SQL> SELECT coc_codcli,sum(coc_comision) FROM FON.comcli_rw WHERE COC_CODCLI=9479880 group by coc_codcli;

COC_CODCLI SUM(COC_COMISION)
---------- -----------------
   9479880            959.04

SQL>
SQL> INSERT INTO comcli_rw
  2  (COC_CODCLI,COC_CTACORR,COC_FECHA,COC_COMISION,COC_DSC,COC_COMISION_EXITO,COC_DSM)
  3  values (9479880,1,to_date('010101','ddmmyyyy'),1000000,'006020927195922',0,NULL)
  4  /

1 row created.

SQL> commit;

Commit complete.

SQL> SELECT * FROM FON.MVW_ERP_CTPE WHERE COC_CODCLI =9479880;

COC_CODCLI SUM(COC_COMISION)
---------- -----------------
   9479880           1000959

SQL> SELECT coc_codcli,sum(coc_comision) FROM FON.comcli_rw WHERE COC_CODCLI=9479880 group by coc_codcli;

COC_CODCLI SUM(COC_COMISION)
---------- -----------------
   9479880           1000959

SQL> delete from comcli_rw where coc_codcli = 9479880 and coc_comision = 1000000
  2  /

1 row deleted.

SQL> commit;

Commit complete.

SQL> SELECT * FROM FON.MVW_ERP_CTPE WHERE COC_CODCLI =9479880;

COC_CODCLI SUM(COC_COMISION)
---------- -----------------
   9479880           1000959

SQL> SELECT coc_codcli,sum(coc_comision) FROM FON.comcli_rw WHERE COC_CODCLI=9479880 group by coc_codcli;

COC_CODCLI SUM(COC_COMISION)
---------- -----------------
   9479880            959.04

SQL>
SQL>
SQL> select * from dba_mviews;

OWNER              : FON
MVIEW_NAME         : MVW_ERP_CTPE
CONTAINER_NAME     : MVW_ERP_CTPE
QUERY              : SELECT coc_codcli,sum(coc_comision) FROM fon.comcli_rW group by coc_codcli
QUERY_LEN          : 75
UPDATABLE          : N
REWRITE_ENABLED    : N
REWRITE_CAPABILITY : GENERAL
REFRESH_MODE       : COMMIT
REFRESH_METHOD     : FAST
BUILD_MODE         : IMMEDIATE
FAST_REFRESHABLE   : DIRLOAD_LIMITEDDML
LAST_REFRESH_TYPE  : FAST
LAST_REFRESH_DATE  : 29-APR-2004
STALENESS          : UNUSABLE
AFTER_FAST_REFRESH : NA
COMPILE_STATE      : VALID

SQL>


 

Tom Kyte
April 30, 2004 - 7:05 am UTC

oracle 9.2.0.2 standard edition => materialized views for summary management (the functionality that automatically redirects queries to stored summary tables to improve query performance), join indexes, and the summary advisor are not supported -- they are not a feature of Standard Edition.


You have mv's for basic replication in that release. that does not include "on commit refresh"

A reader, April 30, 2004 - 9:20 am UTC

Thanks Tom,
I think the bug is that it allows a fast refresh once ;)

MVs

chandra sekhar rao, June 04, 2004 - 7:02 am UTC

This is very useful for developers and learners on how to use MVs.

Question on Refresh of Materialized View

Lamya, June 08, 2004 - 9:23 am UTC

This view keeps on refreshing, and we don't know why.

CREATE MATERIALIZED VIEW VW_OPENFIELD2_MOUSE_CALC
REFRESH FORCE
ON DEMAND
START WITH SYSDATE
AS
SELECT
MT.PROJ_ID,
O.DATASET_ID,
MT.GENOTYPE,
MT.ID AS MOUSE_NUMBER,
MT.LABEL AS MOUSE_LABEL,
O.MOUSE_ID,
TO_CHAR(ROUND(O.TOTDIST/V.SUMDIST*100, 2)) AS PERC_TOT_DIST,
O.SAMPLE,
TO_CHAR(ROUND(O2.SLOPE_VALUE,2)) AS NORM_SLOPE
FROM OPENFIELD2 O, MOUSE_TABLE MT, SMALLADI.OPENFIELD2_CALC O2,
(SELECT O.MOUSE_ID, SUM(O.TOTDIST) AS SUMDIST
FROM
OPENFIELD2 O
WHERE
O.TOTDIST IS NOT NULL
GROUP BY
O.MOUSE_ID
HAVING
SUM(O.TOTDIST) > 0) V
WHERE
O.MOUSE_ID = V.MOUSE_ID AND
O.MOUSE_ID = MT.MOUSE_ID AND
O.MOUSE_ID = O2.MOUSE_ID AND
O.TOTDIST IS NOT NULL
ORDER BY MT.GENOTYPE, MT.ID, O.SAMPLE;


Tom Kyte
June 08, 2004 - 10:06 am UTC

query user|dba_jobs and see if someone has scheduled a job to do so.
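
For example, a sketch of that check (the MV name is the one from the question):

```sql
SELECT job, what, next_date, interval, broken
FROM   dba_jobs
WHERE  UPPER(what) LIKE '%VW_OPENFIELD2_MOUSE_CALC%';
```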

Re:Question on Refresh of Materialized View

Lamya, June 08, 2004 - 10:46 am UTC

Hi Tom,

I did that before only , the dba_jobs says

select * from dba_jobs where job = 462;

JOB         : 462
LOG_USER    : LVISION
PRIV_USER   : LVISION
SCHEMA_USER : LVISION
LAST_DATE   : 07-JUN-04 18:16:55
NEXT_DATE   : 01-JAN-00 00:00:00
TOTAL_TIME  : 7
BROKEN      : Y
INTERVAL    : null
FAILURES    : 5
WHAT        : dbms_refresh.refresh('"LVISION"."VW_OPENFIELD2_MOUSE_CALC"');
NLS_ENV     : NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA' NLS_CURRENCY='$' NLS_ISO_CURRENCY='AMERICA' NLS_NUMERIC_CHARACTERS='.,' NLS_DATE_FORMAT='DD-MON-RR' NLS_DATE_LANGUAGE='AMERICAN' NLS_SORT='BINARY'
MISC_ENV    : 0102000200000000

Thus we see that the next_date is 1/1/4000 and the job is broken, so I thought maybe the syntax for the materialized view is wrong?





Tom Kyte
June 08, 2004 - 11:56 am UTC

the last time that mv tried to refresh was 07-JUN-04 18:16:55 last night. is it still "refreshing"?

Re:Question on Refresh of Materialized View

Lamya, June 08, 2004 - 12:18 pm UTC

No, it's not refreshing now. The question I had is on the creation of this view: it starts refreshing even though we do not have a NEXT in the syntax. Why is this so?

Tom Kyte
June 08, 2004 - 1:26 pm UTC

"start with" specifies a date expression for the first automatic refresh time.


you really have coded:


refresh force on demand start with sysdate


not

"start with sysdate"
"refresh force"
"on deman"

start with, force, on demand are all attributes of "refresh"

you asked for an automatic refresh to happen "right now"
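
So to get a purely manual MV, omit START WITH entirely -- a sketch, using a hypothetical table t:

```sql
CREATE MATERIALIZED VIEW mv_demo
REFRESH FORCE ON DEMAND   -- no START WITH, so no refresh job is scheduled
AS
SELECT * FROM t;
```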

Thanks Tom , for the clarification

Lamya, June 09, 2004 - 11:44 am UTC


Does a MV have to have a primary key

A reader, June 14, 2004 - 11:16 am UTC

If I am setting up basic read-only MVs, do the materialized views have to have primary keys? It's just that I put monitoring on all our indexes to see which ones were being used, and all the indexes that show up as NOT used are primary keys on the materialized views. The materialized views themselves are just exact copies of the base tables, no subqueries or joins, etc.

Thanks

Tom Kyte
June 14, 2004 - 11:44 am UTC

what type of refresh do you perform on these MV's?

Does a MV have to have a primary key

A reader, June 15, 2004 - 3:28 am UTC

Currently complete refreshes as part of a refresh group, but
hopefully moving to fast refreshes as part of a refresh group.

Thanks

Tom Kyte
June 15, 2004 - 3:24 pm UTC

fast refreshes need the primary key -- the refresh does the update by primary key

but not needed for complete refreshes ?

A reader, June 15, 2004 - 3:49 pm UTC

So I could remove them for any that are complete refreshes ?

Thanks

Tom Kyte
June 16, 2004 - 11:10 am UTC

in theory, but you would have to confirm with support if you want to be supported for them. since the pk's are system generated, by the CREATE MV statement -- they are there by design.

Great - thanks

A reader, June 17, 2004 - 3:20 am UTC


Refresh on Materialized view logs ?

Purushoth, June 24, 2004 - 6:03 am UTC

We have a materialized view ENI_OLTP_ITEMS and a mview log on it. After every refresh we analyze the MVs using dbms_stats.gather_table_stats(ownname=>'SYS', tabname=>'ENI_OLTP_ITEMS', cascade=> true)

1. When we query dba_tables for last_analyzed column we see the update on MView but the column is empty for Mview log.

2. If we explicitly analyze the mview log, it gets updated.

The question is

(1) Will it be possible to analyze the mview log when the mview is refreshed?

(2) A performance team member says that since the mview log is not analyzed, it could cause performance issues. Does analyzing (or not analyzing) the mview logs really improve or degrade refresh performance?



Tom Kyte
June 24, 2004 - 9:49 am UTC

you have a MV owned by SYS?????????

don't use SYS

would you use root to do your job?

don't use SYS
sys is special
sys is magic
sys does things differently
the rules are different for sys

sys is Oracle's account, don't use it for anything. ever. unless it is explicitly documented that "you should do this as sys"




MView Log analyze

A reader, June 25, 2004 - 6:05 am UTC

Sorry Tom,
The actual schema under which we have the MView is different. It's not "SYS", it's "ENI"

dbms_stats.gather_table_stats(ownname=>'ENI',
tabname=>'ENI_OLTP_ITEMS', cascade=> true)

and my questions remain the same.

Thanks in advance!!

Tom Kyte
June 25, 2004 - 2:23 pm UTC

dbms_stats.gather_table_stats(ownname=>'SYS',
tabname=>'ENI_OLTP_ITEMS', cascade=> true)

would indicate otherwise. I can only go on what I see.

you can analyze the mview log anytime you like. if you are using the cbo, you should analyze it (maybe just once for an "average" since it always is changing in size and shape)
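
A sketch of a one-time gather on the log (MLOG$_<table> is the conventional default log name -- verify yours in user_snapshot_logs):

```sql
BEGIN
  dbms_stats.gather_table_stats(ownname => 'ENI',
                                tabname => 'MLOG$_ENI_OLTP_ITEMS');
END;
/
```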

A reader, June 25, 2004 - 5:56 pm UTC

Thanks for your suggestions.


A reader, August 07, 2004 - 6:58 am UTC

Tom,


I want to replicate data to 4 other databases. My db is Oracle 9iR1, and it's for a DWH application. Using database links is ruled out.

In this case, which one should I use?

1. Replication
2. AQ
3. Streams

..or anything else 9i has provided us?

My requirement goes like this: I have a huge table in a db that needs to be in sync with all other dbs located across the world.

Case:

1. I need only one table from a single db to be replicated across the other dbs.

2. I need the whole schema's objects from the master db to be replicated across the other dbs.

3. I need only one table from a single db to be replicated across the other dbs, plus two-way distribution of data.

4. I need the whole schema's objects from the master db to be replicated across the other dbs, plus two-way distribution of data.

In the above cases, which one should be chosen?

Please let me know your inputs on this. I am confused here .. Thanks for your consideration


Tom Kyte
August 07, 2004 - 10:28 am UTC

using a dblink, the most reasonable solution, is ruled out.

cool.

how good.

nice, sweet.

how about floppy disks?

1) replication is dblink based.
2) aq doesn't replicate.
3) streams didn't exist.


tell me why dblinks are "ruled out" -- just to make it harder?

A reader, August 09, 2004 - 12:53 am UTC

Tom,

Can you please let us know why "3) streams didn't exist." from the above posting.

tell me why dblinks are "ruled out" -- just to make it harder?

-- because we thought writing a trigger and making an insert or update in the other dbs would be the same as replication in Oracle -- please correct me if I am wrong.

Our requirement is that every insert or update needs to be seen in all the other dbs.

Also --
Snapshot logs impact the performance of the production system.
Snapshot logs impact the maintenance of the production system.

Thanks for your time and valuable service.


Tom Kyte
August 09, 2004 - 7:44 am UTC

3) streams didn't exist in 9ir1 and that is what they said they had to work with.


to rule out replication is short-sighted. if every time there is an insert or update it needs to be replicated -- well, guess what.

snapshot logs introduce minimal overhead, which you can measure and size for.




Parallel Refresh

Bo, August 09, 2004 - 9:23 am UTC

Hi, Tom.
We can do parallel select and parallel insert to improve performance. Can we do the same thing on MV refresh? Why or why not?

Thanks,

Tom Kyte
August 09, 2004 - 10:32 am UTC

On demand and complete refresh

Sonali, August 19, 2004 - 2:08 pm UTC

I have a strange case and an error, I think.
I have a materialized view -

CREATE MATERIALIZED VIEW mv_plannedhrs_rev
BUILD IMMEDIATE
REFRESH COMPLETE
WITH ROWID
DISABLE QUERY REWRITE
AS
Select work_id, period_ID, AcctCat, nvl(sum(totalplanhrs),0) as planhrs
From (
Select Work_ID, Period_ID, Auth_RevCat_ID as AcctCat,
sum((num_business_days(greatest(AuthAttr_Start_Date,period_Start),least(AuthAttr_Finish_Date, period_finish))
*AuthAttr_Amount)) as totalplanhrs
from xyz
UNION ALL
Select Work_ID, Period_ID, Reserve_RevCat_ID as AcctCat,
sum((num_business_days(greatest(Reserve_Start_Date, period_Start),least(Reserve_Finish_Date, period_finish))
*Reserve_Daily_Factor)) as totalplanhrs
from x,y,z
UNION ALL
select Work_ID, Period_ID, Work_RevCat_ID as AcctCat,
NVL(SUM(Request_Hours/num_business_days(Request_Start_Date, Request_Finish_Date)
*num_business_days(greatest(Request_Start_Date, period_start),least(Request_Finish_Date, period_finish))),0.0)
as totalplanhrs
from xyz
)
group by work_ID, period_ID, AcctCat;

I refresh this view from stored procedure which is going to read it..

I get following error -
EXECUTE pr_GetActualDLRev (552, 552, 1, 'Assignments', 553, '2004-08-19 00:00:00.000')
ORA-00942: table or view does not exist ORA-06512: at "SYS.DBMS_SNAPSHOT", line 794 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 851 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 832 ORA-06512: at "V52DEVOR.PR_GETACTUALDLREV", line 17 ORA-06512: at line 2


The procedure looks like this -

CREATE OR REPLACE Procedure pr_GetActualDLREV(inHDRID int, inENTID int, inENTTYPE int, strDESCRIPTION varchar2, inURESID int, strCreateDate varchar2)
IS
a number;

BEGIN
dtCreateDate := to_date(strCreateDate,'YYYY-MM-DD "00:00:00.000"');
begin
dbms_mview.refresh('mv_plannedrevenue','C');
dbms_mview.refresh('mv_plannedhrs_rev','C');
end;

delete SBBGPR
where BGPR_GETDL = 10 AND BGPR_HDRID = inHDRID AND BGPR_Type = 10;

delete SBBGITEM
where BGIT_GETDL = 10 AND BGIT_HDRID = inHDRID and BGIT_Type = 10;

/* Find currency and currency table of work item */
select Work_Currency, GLCo_CurrTBL into inCurrency, inCurrTable
from ETGLCmpny, mwebWork
where GLCo_Company = Work_GLCompany AND Work_id = inENTID;
COMMIT;
EXCEPTION
WHEN OTHERS THEN
raise_application_error(-20002, SQLCODE, TRUE);
raise_application_error(-20003, SQLERRM, TRUE);
End pr_GetActualDLREV;
/


If I change these lines
CREATE MATERIALIZED VIEW mv_plannedhrs_rev
BUILD IMMEDIATE
REFRESH COMPLETE
WITH ROWID
DISABLE QUERY REWRITE

TO
CREATE MATERIALIZED VIEW mv_plannedhrs_rev
BUILD IMMEDIATE
REFRESH on demand
WITH ROWID
DISABLE QUERY REWRITE

I do not get error and it works.
Why doesn't REFRESH COMPLETE work, and why does ON DEMAND work? What's the difference?

I have 2 materialized views used in this procedure; both are refreshed from it, and they do not read each other:
dbms_mview.refresh('mv_plannedrevenue','C');
dbms_mview.refresh('mv_plannedhrs_rev','C');

Both of them had

BUILD IMMEDIATE
REFRESH COMPLETE
WITH ROWID
DISABLE QUERY REWRITE

It worked for the 'mv_plannedrevenue' view but not for the
'mv_plannedhrs_rev' view. I am puzzled.


Tom Kyte
August 19, 2004 - 7:49 pm UTC

Need entire example -- from start to finish -- terse, yet complete (eg: don't give me pages and pages -- whittle it down to something as small as possible...)

ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA9IR2> CREATE MATERIALIZED VIEW mv
  2  BUILD IMMEDIATE
  3  REFRESH COMPLETE
  4      WITH ROWID
  5          DISABLE QUERY REWRITE
  6  as
  7  select * from emp;
 
Materialized view created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace procedure p
  2  as
  3  begin
  4          dbms_mview.refresh( 'mv', 'c' );
  5  end;
  6  /
 
Procedure created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec p
 
PL/SQL procedure successfully completed.
 

Some more info.

sonali, August 19, 2004 - 2:28 pm UTC

I am finally able to figure out the difference between the 2 views -- where one works and the other doesn't.

CREATE MATERIALIZED VIEW mv_plannedhrs_rev
BUILD IMMEDIATE
REFRESH COMPLETE
WITH ROWID
DISABLE QUERY REWRITE
AS
Select work_id, period_ID, AcctCat, nvl(sum(totalplanhrs),0) as planhrs
From (
Select Work_ID, Period_ID, Auth_RevCat_ID as AcctCat,
sum((num_business_days(greatest(AuthAttr_Start_Date,period_Start),least(AuthAttr_Finish_Date, period_finish))
*AuthAttr_Amount)) as totalplanhrs
from x,y, mv_mwebWork
blah blah blah

mv_plannedhrs_rev, which has the REFRESH COMPLETE clause, reads the view mv_mwebWork, which has REFRESH ON DEMAND.

So when the data is updated, mv_mwebWork changes, and I get the error on the mv_plannedhrs_rev view.
mv_mwebWork is refreshed on a scheduled basis (it's a job).
But mv_plannedhrs_rev which reads mv_mwebWork is refreshed from the procedure.

Will this cause this error (table or view does not exist)?

Here is what the oracle documentation says
COMPLETE
Specify COMPLETE to indicate the complete refresh method, which is implemented by executing the materialized view's defining query. If you request a complete refresh, Oracle performs a complete refresh even if a fast refresh is possible.

ON DEMAND
Specify ON DEMAND to indicate that the materialized view will be refreshed on demand by calling one of the three DBMS_MVIEW refresh procedures. If you omit both ON COMMIT and ON DEMAND, ON DEMAND is the default.

When I change mv_plannedhrs_rev from refresh complete to refresh on demand it works.

Do I need both to be REFRESH COMPLETE or both REFRESH ON DEMAND?
I mean, do both views have to match in the way they are refreshed?

how does it work ?


Tom Kyte
August 19, 2004 - 7:51 pm UTC

develop a short, concise, small, yet 100% complete test case to work with please.

question about refresh failure

James Su, September 08, 2004 - 12:22 pm UTC

hi tom,
I have a snapshot with the fast refresh type, from an Oracle 8.0.3 db to 8.1.5. If the connection between the servers is OK, it works fine. But if there's something wrong with the db link, the refreshing job seems to be blocked and cannot be stopped. Do we have any setting like a "timeout"? It would tell the job to stop trying and wait for the next refresh.
Thank you very much.

Tom Kyte
September 08, 2004 - 1:35 pm UTC

you could play with the TCP/IP timeout parameters at the OS level (very OS specific, time to get a SA involved). Nominally -- tcp takes a very long time to notify the end points "abandon all hope"

Architectural Decision (Replication with MV)

Reader, October 12, 2004 - 6:08 am UTC

Hi Tom,
Currently we have 2 database applications running on 2 different servers. We will be combining the features of both into a single application, and we will be upgrading Server 1 to handle the necessary load. Since Server 2 will then be free, I was toying with the idea of making Server 2 a pure reporting server for the new application by using replication and read-only MVs. Users will do their DML on the upgraded server (Server 1), and the reporting server (Server 2) will be refreshed periodically. Is this a good decision, or am I better off creating the MVs on Server 1 itself?

Tom Kyte
October 12, 2004 - 8:25 am UTC

I would do as much as humanly possible in a single server.

distributed complexity is just that -- distributed and complex.

a single server with a single database is simply easier.




Architectural Decision (Replication with MV)

Reader, October 14, 2004 - 4:55 am UTC

Thanks for the quick reply. From a simplicity point of view, yes, one server is fine. What about from a performance point of view? I mean, are there any performance gains to be had? I read in the Data Warehousing Guide that when using remote MVs, query rewrite will not take place. Any workarounds to this problem? Apart from that, any other disadvantages? Also, when I refresh the MV over the link, will the load be on the master server (i.e. CPU, memory) or on the server where the MV is created?

Thanks

Tom Kyte
October 14, 2004 - 10:12 am UTC

too many points to generalize in a paragraph.

the "remoteness" would not be an issue with the MV's really -- you would not have the DETAILS to query on the DW (you just have the MV's) so you would be, well, querying the MVs - they would be your data warehouse.

A refresh involves resources on both the refresher as well as the refreshee -- it is a big old distributed transaction -- pull rows from source to target, apply them, clean up mv logs on source and finish.

What happens if "parallelism" is > 0 for DBMS_MVIEW.REFRESH

Ronald, October 21, 2004 - 6:43 am UTC

Could you please explain the "parallelism" parameter of DBMS_MVIEW.REFRESH in more detail?

Does it apply to the underlying query of the mview or to the DML against the mview itself?

Tom Kyte
October 21, 2004 - 7:06 am UTC

it is for parallel propagation -- whereby the changes can be propagated from the 'master' to the 'copy' using parallel applies. It applies more to updatable snapshots -- the transactions logged against the "update anywhere" master can be applied in parallel

Re: What happens if "parallelism" is > 0 for DBMS_MVIEW.REFRESH

Ronald, October 21, 2004 - 2:02 pm UTC

Thank you very much,
but how can I make the refresh faster, and check on it -- especially a full refresh (the source query is a join of ~6 tables with some outer joins)?

Tom Kyte
October 21, 2004 - 3:49 pm UTC

parallel query might be helpful if the base tables are "parallel"

Stale and Sqlldr direct path

Scot, October 21, 2004 - 2:31 pm UTC

Hey Tom,

If a materialized view is created on a base table with the refresh complete on commit option, and that base table is reloaded via a direct path sql loader truncate, I confirm that the materialized view does indeed refresh as expected.

However, the staleness column of user_mviews does not change to FRESH as I would expect, causing the view to still be marked as stale and not be used for rewrites (because I am not tolerating stale rewrites).

Is this behavior normal in an Oracle 9iR2 EE HPUX environment? Please do not take the time to create an example for me, as I have not provided a test case to you (yet), and am only looking for a yes/no at this point before I proceed.


Tom Kyte
October 21, 2004 - 3:51 pm UTC

i would think it to be a "not correct action on our part".

Stale and sqlldr direct path

Scot, October 21, 2004 - 4:19 pm UTC

Thanks.

Materialized View status INVALID

RK, October 26, 2004 - 10:58 am UTC

I created a materialized view on a table.

This materialized view should be refreshed whenever the business date of the application changes.

Hence, the refresh was put in a trigger (as a pragma).
Very strangely, after the trigger fires, the materialized view status becomes invalid. (This was found via the user_objects view, where status = 'INVALID'.)

By issuing the command in SQLPLUS: ALTER MATERIALIZED VIEW MV compile;
it becomes valid.

However, issuing this alter statement in the trigger did not make the status valid, though the trigger did not report any error.

The oracle version is 8.1.7.
Is it a bug in oracle?

Tom Kyte
October 26, 2004 - 11:06 am UTC

no idea -- since you don't give me a test case to reproduce with, like I give to you. probably not a bug, but I cannot tell cause I cannot really be sure what exactly you've done.

(but if you are using an autonomous transaction in a trigger -- you are almost certainly doing something wrong :)

also -- think about it, if the trigger is what causes the MV to need to be refreshed.....

and

you use the trigger to refresh it.....

but

you have to use an autonomous transaction since the refresh commits.....


then the refresh won't be able to see the changes it is supposed to be REFRESHING will it? (rhetorical question, no, the autonomous transaction won't be able to see the changes you are wanting it to see since the transaction hasn't yet committed so the changes are not visible to any other transaction....)


Hmmm, that's a lot of guessing on my part though, 'cause "pragma" isn't sufficient to tell me what you were doing.



So, thats probably what is happening.

The trigger should not use autonomous transactions (it would be exceedingly rare to use them; in the 2nd edition of Expert One on One Oracle I'm going to make sure people are mightily afraid of them).

The trigger should use:

dbms_job.submit( l_job, 'dbms_mview.refresh( ''MV_NAME'' );' );


instead so the refresh takes place soon AFTER the transaction that made the refresh "necessary" actually commits (making the data visible).
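
A sketch of that pattern, using a hypothetical app_business_date table and MV name:

```sql
CREATE OR REPLACE TRIGGER trg_refresh_mv
AFTER UPDATE ON app_business_date
DECLARE
  l_job NUMBER;
BEGIN
  -- queued now, but dbms_job only runs it after this transaction commits,
  -- so the refresh sees the committed change
  dbms_job.submit( l_job, 'dbms_mview.refresh( ''MV'' );' );
END;
/
```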

MV - Disable Query rewrite

A reader, November 20, 2004 - 12:53 pm UTC

create materialized view mv_mvp_se_cust
tablespace master_d1
build deferred
refresh force on demand
disable query rewrite
as (select
.
.
from
.
.
join
.
.

);


There is a mview created like above 

disable query rewrite -- if I say DISABLE QUERY REWRITE, will my mview ever be used?
Is it possible to force my query to use the MV?

I took this info too

SQL> SELECT capability_name, possible,
  2         SUBSTR(related_text,1,8) AS rel_text,
  3         SUBSTR(msgtxt,1,60) AS msgtxt
  4  FROM mv_capabilities_table
  5  ORDER BY seq;
 
CAPABILITY_NAME                P REL_TEXT MSGTXT
------------------------------ - -------- ------------------------------------------------------------
PCT                            Y
REFRESH_COMPLETE               Y
REFRESH_FAST                   Y
REWRITE                        N
PCT_TABLE                      Y TB_SNB03
PCT_TABLE                      N TB_SNB03 relation is not a partitioned table
PCT_TABLE                      N TB_SNB03 no partition key or PMARKER in select list
REFRESH_FAST_AFTER_INSERT      N DW.TB_SN the detail table does not have a materialized view log
REFRESH_FAST_AFTER_INSERT      N DW.TB_SN the detail table does not have a materialized view log
REFRESH_FAST_AFTER_ONETAB_DML  N          see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ANY_DML     N          see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
 
CAPABILITY_NAME                P REL_TEXT MSGTXT
------------------------------ - -------- ------------------------------------------------------------
REFRESH_FAST_PCT               Y
REWRITE_FULL_TEXT_MATCH        N          query rewrite is disabled on the materialized view
REWRITE_PARTIAL_TEXT_MATCH     N          query rewrite is disabled on the materialized view
REWRITE_GENERAL                N          query rewrite is disabled on the materialized view
REWRITE_PCT                    N          query rewrite is disabled on the materialized view
  

Tom Kyte
November 20, 2004 - 3:28 pm UTC

you can of course query it directly -- but with the query rewrite disabled -- we'll not be using it "automagically"

you can alter the materialized view to enable query rewrite.
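For the mview in the post, that would be something along these lines (assuming the defining query meets the rewrite requirements):

```sql
ALTER MATERIALIZED VIEW mv_mvp_se_cust ENABLE QUERY REWRITE;

-- the session must also allow rewrite:
ALTER SESSION SET query_rewrite_enabled = TRUE;
```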

Class I ODS and standby database

A reader, December 27, 2004 - 10:04 am UTC

Hi

I have a question regarding Class I ODS. Since this type of ODS has a lag of 2 to 4 seconds I wonder if we could use Standby Database or Data Guard for this type of ODS?

Tom Kyte
December 27, 2004 - 10:40 am UTC

I'm not a fan of replication.

data guard is generally archived redo log based (the standby site receives redo in real time, but the apply is deferred).

you would be looking at "replication", but at that point one might ask "why not just query the source"; sounds like you need to consolidate -- not propagate.

Refresh MV over a db link

ubuser, January 07, 2005 - 3:06 pm UTC

Is it possible to execute a procedure that would refresh a MV over a db link. If so, can you please highlight the steps to do so?

Thanks

Tom Kyte
January 08, 2005 - 4:04 pm UTC

best bet here is to be on the instance where the mv resides in order to refresh it -- for the MV refresh process wants to commit, and the commit is supposed to come from the originating site of the transaction.

to that end, I would suggest you look at using dbms_job to schedule a refresh to happen on the remote site soon after you commit, that is:


SQL> create synonym remote_job for dbms_job@remote_site;
SQL> exec remote_job.submit( :x, 'dbms_mview.refresh( ''xxxx'' );' );
SQL> commit;

soon after, the job queues on the remote site will pick up the job and run it as a local job. (make sure they have job_queue_processes set to >= 1)  

sid, April 20, 2005 - 5:45 am UTC

I created a table.

CREATE TABLE test1

(qw VARCHAR2(30) NOT NULL primary key ,
qwe VARCHAR2(10) NOT NULL)

CREATE MATERIALIZED VIEW LOG ON test1


create materialized view test1
build immediate
refresh fast
with primary key
enable query rewrite
as
select * from test1@meidasdb


But then again I was not getting the materialized view refreshed. Will I have to refresh it explicitly? I thought it was an implicit process. Please help.



Tom Kyte
April 20, 2005 - 6:58 am UTC

there is no interval specified on the MV create. See the sql reference for the options.

you didn't schedule it.

Does 'parallel' make sense for COMPLETE refresh?

Lily, April 20, 2005 - 2:52 pm UTC

Tom,
I know that we can do a parallel materialized view refresh.
But while reading the Oracle 9.2 release manual, I don't see any sample of parallel for REFRESH COMPLETE, although there are samples of parallel for REFRESH FORCE. So I ran some tests to see if parallel would speed up a COMPLETE refresh. I altered one of our large materialized views to parallel (degree 8) -- the setting for parallel_max_servers is 8 -- and then did a COMPLETE refresh of this materialized view. There seems to be no speedup (it still takes more than 2 hours to COMPLETE refresh, as before), and I don't see Oracle use more than 1 process to refresh it. This materialized view was created based on a single table over a database link; the base table was created with NOPARALLEL on the master site.

How can we speed up the process of COMPLETE refresh? We have to use this option since we are practicing disaster recovery and we encounter ORA-12057.

Your help will be appreciated.



You are the best

sid, April 21, 2005 - 12:50 am UTC


As you pointed out, I hadn't scheduled it. I want my MV to be refreshed automatically; also I am using a db link, i.e. accessing a remote database, which is the whole reason for going with an MV.


here is what i did
create materialized view test2
build immediate
refresh fast
start with sysdate
next sysdate+1/10000
with primary key
enable query rewrite
as
select * from test2@meidb

and i am happy to report it works

Your help is greatly appreciated. One little doubt:
can I get my MV refreshed without going through a schedule and procedure? Also my schedule is about 20 seconds behind the original table; is there a way to get it refreshed every 1 second?


Tom Kyte
April 21, 2005 - 6:09 am UTC

you have to schedule it, you have to say how often.

one second would be a great burden. 5 seconds is the resolution of the job queues (_job_queue_interval=5). I would consider even 20 seconds to be "too often"

thanks for the followup

sid, April 21, 2005 - 10:17 pm UTC

"you have to schedule it, you have to say how often.

one second would be a great burden. 5 seconds is the resolution of the job
queues (_job_queue_interval=5). I would consider even 20 seconds to be "too
often"


But if I don't schedule it every 1 sec then the whole purpose of creating the MV will be lost, and the MV will be giving stale results to the application. We are creating an MV in order to cut down the time spent accessing the remote DB.

Is it too strenuous to have a refresh every 1 sec?

Also, where is this job to be scheduled? (the job queues -- _job_queue_interval=5)

Please let me know, As usual thanks in advance...

Tom Kyte
April 22, 2005 - 10:22 am UTC

you are going over a dblink.

1 second would be too much.

If you need real time access to this data, you should be asking yourself why you are running in a different database in the first place.

Many thanks

sid, April 25, 2005 - 6:03 am UTC

Really appreciate your fast response.

Hi Tom, one related technical question.

RICHARD TAN, May 17, 2005 - 12:07 pm UTC

Hi Tom,

Thank you very much for your previous help. Your oracle expertise is unprecedented.

To simplify the question: we have many different kinds of regional databases and one master database with a complex replication mechanism, using many replication groups. All triggers have
if (not dbms_snapshot.i_am_a_refresh) and (not dbms_reputil.from_remote) then
   do something;
end if;
at the end of the trigger.
Mostly we use row-level triggers, basically no statement-level triggers.

Recently we tried to make one regional database independent (with many fewer dependencies on other functional regions), so tables were moved to the master and mviews are on the regions. We have tables table_a, table_b and table_c:
table_a@master and table_b@master are master tables; the materialized views are on the regions.

tia_table_a@master (an after-insert row-level trigger) gets fired, then inserts into table_b@master, which should make a remote procedure call to the region to insert into table_c@region.
We found tia_table_b@master is not firing at all. We use
if (not dbms_snapshot.i_am_a_refresh) and (not dbms_reputil.from_remote) then
   do something
else
   do something else
end if;
in the tia_table_b trigger.

The original triggers (based upon the original design) have been working for five to ten years; we just moved around the tables and materialized views, and then tia_table_b stopped firing.

Our DBAs, on-site long-term Oracle consultants, and developers each have at least ten years of Oracle experience; the whole application is huge and has been working for years.

We have multiple replication groups; table_a and table_b are under different replication groups.

We suspect some replication-related parameters are not set up correctly -- I am not talking about Oracle init parameters.

It is unlikely to be a simple problem; maybe some higher-level parameters of the replication groups are not set up correctly. Based upon your instincts, what could go wrong? Please advise a list; we will check through it all.

Thank you very much for help. I know you usually need some concrete stuff. We need some ideas to proceed.

Thanks, Tom.





Tom Kyte
May 17, 2005 - 2:16 pm UTC

sorry -- not following the flow here.

the triggers are not firing "when"? when you do a simple update, or during a refresh? you know an MV refresh is very different from master-to-master replication

Summary advisor is an option or a feature?

Javier Morales, June 21, 2005 - 4:17 am UTC

Hi Tom,

I'm building a datawarehouse on Oracle9iR2 Enterprise Edition and I recommended the use of MVs and OLAP features. I was building a test environment (fact and dimension tables, RI, dimensions, hierarchies, stats gathered with DBMS_STATS and cubes built with OEM).

But when I try to validate the dimension, run DBMS_OLAP.recommend_mv, or even create an id with DBMS_OLAP, I get an ORA-30475: Feature not enabled.


ERROR at line 1:
ORA-30475: feature not enabled: Summary Advisor
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_SUMMARY", line 49
ORA-06512: at "SYS.DBMS_SUMMARY", line 437
ORA-06512: at line 1

Am I missing anything? Is there something we have to purchase or enable, or something about licences? Or am I doing something wrong? (I could create dimensions, even cubes, with that database!)

Thanks in advance, Tom,
Javier


Here is the code...

SQL> show user
USER is "DW_VIAJES"
SQL> select * from user_role_privs;

USERNAME                       GRANTED_ROLE                   ADM DEF OS_
------------------------------ ------------------------------ --- --- ---
DW_VIAJES                      CONNECT                        NO  YES NO
DW_VIAJES                      DBA                            NO  YES NO
DW_VIAJES                      OLAP_DBA                       NO  YES NO
DW_VIAJES                      OLAP_USER                      NO  YES NO
DW_VIAJES                      RESOURCE                       NO  YES NO

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
PL/SQL Release 9.2.0.6.0 - Production
CORE    9.2.0.6.0       Production
TNS for HPUX: Version 9.2.0.6.0 - Production
NLSRTL Version 9.2.0.6.0 - Production

SQL> select comp_name, version, status from dba_registry;

COMP_NAME                                VERSION                      STATUS
---------------------------------------- --------------------------- -----------
Oracle9i Catalog Views                   9.2.0.6.0                    VALID
Oracle9i Packages and Types              9.2.0.6.0                    VALID
JServer JAVA Virtual Machine             9.2.0.6.0                    VALID
Oracle XDK for Java                      9.2.0.8.0                    VALID
Oracle XML Database                      9.2.0.6.0                    VALID
OLAP Analytic Workspace                  9.2.0.6.0                    LOADED
Oracle OLAP API                          9.2.0.6.0                    UPGRADED
OLAP Catalog                             9.2.0.6.0                    VALID

8 rows selected.

SQL> variable ident number
SQL>  exec dbms_olap.create_id(:ident);
BEGIN dbms_olap.create_id(:ident); END;

*
ERROR at line 1:
ORA-30475: feature not enabled: Summary Advisor
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_SUMMARY", line 49
ORA-06512: at "SYS.DBMS_SUMMARY", line 109
ORA-06512: at line 1




SQL> exec dbms_summary.validate_dimension('GEOGRAFIA','DW_VIAJES',false,true);
BEGIN dbms_summary.validate_dimension('GEOGRAFIA','DW_VIAJES',false,true); END;

*
ERROR at line 1:
ORA-30475: feature not enabled: Summary Advisor
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_SUMMARY", line 49
ORA-06512: at "SYS.DBMS_SUMMARY", line 437
ORA-06512: at line 1

 

Tom Kyte
June 21, 2005 - 4:38 pm UTC

according to this
</code> http://docs.oracle.com/docs/cd/B10501_01/server.920/a96531/ch5_edit.htm#89027 <code>
it should be a feature of enterprise edition.

I'll have to refer you to support (perhaps this was an upgrade from SE?)

Summary advisor is an option or a feature?

Javier Morales, June 23, 2005 - 5:46 am UTC

Thanks, Tom.

I think the software was upgraded to EE from SE after I installed the JVM and XML components. I found a note on Metalink about reinstalling the JVM and XML.

So, I'm on this. I will follow up on this question to say whether it solved the problem (I couldn't find much other documentation about this situation). Maybe it will be useful for other people.

Thanks for all, Tom.

Tom Kyte
June 23, 2005 - 6:37 pm UTC

absolutely, if you resolve it, it'll be helpful to others hitting the same issue.

I'm sure it is an "upgrade issue"

Summary advisor is an option or a feature?

Javier Morales, June 28, 2005 - 3:58 am UTC

Hi Tom,

You were right. It was an upgrade problem.

The software installed was Oracle9iR2 SE and I created the database manually, running the scripts to create the JVM, XDK, etc. When the scripts create the Java classes related to the Summary Advisor, the script initqsma.sql checks the Oracle edition and disables the Summary Advisor functionality in the case of Standard Edition.

When we needed the OLAP and Data Mining options, the software was upgraded to Oracle9iR2 EE, but the Java environment still had this "feature" disabled.

Running initqsma.sql again would have been enough, but I preferred to follow this Metalink note to reload the JVM in Oracle9i.

</code> http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=209870.1&blackframe=1 <code>

So, everything's working ok now.

Thanks for all,
Javier

Parition Change Tracking

Alex McRae, July 15, 2005 - 1:30 am UTC

Hi Tom,

Your previous replies have been extremely useful.

One thing I keep reading a lot about when looking at materialized views and the different ways of refreshing them is partition change tracking (PCT), but I am not sure what it is. My question: could you give a simple example of how it is done, how it works, and when you would use it?

To give a bit of background, I am looking at a table I want to make into a materialized view that contains data I think I want to partition based on a date column; that is, for each day I have thousands of rows. I might want to partition on a day basis or a week basis.

I am using Oracle 9iR1; unfortunately we are not looking to upgrade in the near future.


Tom Kyte
July 15, 2005 - 7:38 am UTC

</code> http://docs.oracle.com/docs/cd/B10501_01/server.920/a96520/mv.htm#45519 <code>

did you see that explanation and example?
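As a rough sketch of the idea (all names here are hypothetical): PCT works when the materialized view carries the detail table's partition key (or a partition marker) in its select list, so the server can tell which MV rows depend on which partitions and refresh only the stale ones:

```sql
-- sales_fact is assumed to be range-partitioned on sale_date
CREATE MATERIALIZED VIEW mv_daily_sales
BUILD IMMEDIATE
REFRESH FAST ON DEMAND
AS
SELECT sale_date,                 -- the partition key, kept in the select list
       product_id,
       SUM(amount) AS total_amount
FROM   sales_fact
GROUP  BY sale_date, product_id;
```

After DML confined to a few partitions (or a partition maintenance operation such as exchange or truncate), a refresh can then recompute only the affected portion of the MV rather than the whole thing.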

I'm not following here

Mikito Harakiri, July 15, 2005 - 1:50 pm UTC

Why does refreshing at a certain interval make any sense at all? Any programming logic based upon (physical) time is error prone. Refresh fast on commit. Deprecate all the other options. Period.

Tom Kyte
July 15, 2005 - 6:11 pm UTC

umm, can you spell "distributed"

can you spell "not everything is refresh fast"

how about "we want our data to remain constant from 9-5 while reporting on it"

how about "we cannot take the hit on the transactional system that on commit refreshes would impose"

need more reasons?


It is not error prone if it is your business requirement.

David Aldridge, July 15, 2005 - 5:06 pm UTC

Why refresh on a time interval?

Because sometimes a business wants stable, aggregated reports, refreshed at midnight, 6am, midday, and 6pm for example, on a transactional table.

Ask me how I know :)

Refresh on Materialized View

Ramana, July 15, 2005 - 5:35 pm UTC

Hi Tom
The above discussion is excellent.
But I have one basic question

We are planning to create a materialized view over a db link with refresh on commit.

We need to know whether the transaction will be held until the materialized view has refreshed -- are both in the same transaction scope? Or will it write to the materialized view log and release the transaction?

Thanks


Tom Kyte
July 15, 2005 - 8:41 pm UTC

what you plan will not happen. on commit refresh MVs are for a single database, not distributed.



questions answered

Mikito Harakiri, July 15, 2005 - 6:24 pm UTC

"umm, can you spell distributed"

Distributed transactions, what about them?

"can you spell not everything is refresh fast"

Why not?

"we want our data to remain constant from 9-5 while reporting on it"

select as of timestamp 5pm

"we cannot take the hit on the transactional system
that on commit
refreshes would impose"

at the expense of having to deal with inconsistent data

Tom Kyte
July 15, 2005 - 9:04 pm UTC

they cannot be on commit refresh. so... time based is "good", else there wouldn't be any distributed MVs


not everything can be FAST REFRESHED (you know that, you've gone through 15 layers of MVs to try and hack out a fast refresh, but even you must admit "not real world"). Many MVs *cannot be fast refreshed*; there are many limits to what can and cannot be fast refreshed (you *know* that)


Using flashback query like that would be "a really utterly poor idea". Do you really want to

a) ensure you have sufficient disk for hours and hours of undo
b) pay the HUGE price to undo all changes for the day. PITY THE GUY that queries at 4pm and has to undo the changes from the last 7 hours.
c) have a buffer cache big enough to avoid the physical IOs you would otherwise incur due to (b)
d) so many other bad things, I'll just let you think about it.

IF you have a business requirement to have data "as of a fixed point in time"
THEN
you do not have inconsistent data with a time based refresh, you have --
CORRECT DATA

anything else would be -- in a word -- wrong
END IF


questions -- what questions? I see not-so-thought-through thoughts. I see no "questions answered"



dropping the materialized view log

Kubilay, August 10, 2005 - 8:29 am UTC

Hi Tom

I have materialized view replication setup with the 'Fast' refresh option
between 2 sites and 2 databases with db_links on Oracle 9i R2.

The situation is like this:

DB A (Site A) : Master site table t1 and mlog$_t1 materialized view log.
DB A (Site A) : Fast Refresh Materialized view MV_T1_1 using the log mlog$_t1
DB B (Site B) : Fast Refresh Materialized view MV_T1_2 using the same log mlog$_t1


DB A (Site A - Master Site)

                 T1
                  |
              mlog$_t1
             /        \
            /          \
DB A (Site A MV site)    DB B (Site B MV site)
      MV_T1_1                  MV_T1_2

My question is:

If the Site B goes down with the intention to never come back, basically a 'disaster'.

My mlog$_t1 table still keeps recording all the changes which happen to T1, with the intention of propagating them to MV_T1_2 on site B, which doesn't exist.

Basically the number of records in the mlog$_t1 log is constantly increasing, although the refreshes of MV_T1_1 on Site A happen periodically without problems.

I want to prevent this growth of mlog$_t1. I can't unregister the materialized view of the dead Site B, nor can I purge the log, since I can't see it in USER_REGISTERED_MVIEWS.

My other option is to TRUNCATE the log, but that won't 'unregister' the materialized view of the dead site B. (I tested this.)

The last option is to drop mlog$_t1 after refreshing the current working MV_T1_1 against it, and then re-create it quickly. I tried this in a test situation and it seems to work.

Table T1 and MV_T1_1 are production 'live' sites.
Table T1 is updated at unpredictable times.

My worry is: if I drop mlog$_t1 and at the same minute there is an update on T1, will the new log miss it? Can this corrupt data?
I am aware that I will need to do a full refresh of the fast MV_T1_1 after I drop mlog$_t1; the table is not big.


What would be the best approach?

Thank you very much for all your help.


Best Regards

Kubilay







Tom Kyte
August 10, 2005 - 10:30 am UTC

you run the unregister at the MASTER site (site A), not the copy site.
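If the dead site's MV still shows up as registered, DBMS_MVIEW.UNREGISTER_MVIEW run at the master site is the documented cleanup; something like this (the owner and site names here are made up):

```sql
-- run at Site A (the master site)
BEGIN
   DBMS_MVIEW.UNREGISTER_MVIEW(
      mviewowner => 'SCOTT',         -- owner of MV_T1_2 at the dead site
      mviewname  => 'MV_T1_2',
      mviewsite  => 'SITEB.WORLD' ); -- global name of the dead site
END;
/
```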

dropping the materialized view log

Kubilay, August 10, 2005 - 11:44 am UTC

Hi Tom

Yes, I do query the master site's USER_REGISTERED_MVIEWS, but I don't see the materialized view which I want to unregister.

Also, my mistake!
I just found out that although the materialized view MV_T1_1 on Site A refreshes with the FAST method, the materialized view MV_T1_2 on Site B (now the dead site) was not refreshing with the FAST method but with the COMPLETE method -- yet it is still allowed to use the mlog$_t1 log table?!

Is there a way to unregister the one which uses the COMPLETE refresh method? It is not displayed in USER_REGISTERED_MVIEWS on master site A, though.

Many Thanks!

Kubilay







Tom Kyte
August 10, 2005 - 12:53 pm UTC

it should not be registered at all -- it wasn't using the mv log.

If the mv log entries are not getting cleaned out after the refresh on A, you have a "problem"; I'll refer you to support to diagnose what it is.
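One thing worth knowing while working with support: DBMS_MVIEW.PURGE_LOG, run at the master site, can trim log rows that are being kept only for the least recently refreshed materialized views; a sketch:

```sql
-- purge mlog$_t1 entries kept only for the single least recently
-- refreshed MV; flag => 'delete' forces rows out even if that MV
-- would then need a complete refresh
BEGIN
   DBMS_MVIEW.PURGE_LOG( master => 'T1', num => 1, flag => 'delete' );
END;
/
```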

10g R2 new feature: Partition Change Tracking Refresh Without Materialized View Logs

Maurice, September 20, 2005 - 7:38 am UTC

Hi Tom,

In the document "Oracle® Database New Features Guide
10g Release 2 (10.2)" it is written that a new feature called "Partition Change Tracking Refresh without Materialized View Logs" has been introduced in 10gR2.

I was wondering what is new about it, because I remember having done a fast refresh of an MV via PCT without using any MV log already on Oracle 9.2.

Right now I don't have any 9iR2 DB available, so I made a test on 10gR1. Based on the script below, it seems to me that at least in 10gR1 it is already possible to do a fast refresh without any MV log.

Regards
Maurice



SQL> CREATE MATERIALIZED VIEW MV_SALES
  2  PARTITION BY RANGE (TIME_ID)
  3  ( 
  4    PARTITION SALES_Q4_1999 VALUES LESS THAN (TO_DATE(' 200001', 'YYYYMM'))
  5      NOLOGGING, 
  6    PARTITION SALES_Q4_2001 VALUES LESS THAN (TO_DATE(' 200201', 'YYYYMM'))
  7      NOLOGGING, 
  8    PARTITION SALES_Q4_2002 VALUES LESS THAN (TO_DATE(' 200301', 'YYYYMM'))
  9      NOLOGGING, 
 10    PARTITION SALES_Q4_2003 VALUES LESS THAN (TO_DATE(' 200401', 'YYYYMM'))
 11      NOLOGGING
 12  )
 13  COMPRESS
 14  AS
 15  SELECT TIME_ID, SUM(AMOUNT_SOLD)
 16  FROM SH.SALES
 17  GROUP BY TIME_ID;

Materialized view created.

SQL> SELECT *
  2  FROM DBA_MVIEW_LOGS
  3  WHERE MASTER = 'SALES';

no rows selected

SQL> INSERT INTO SALES VALUES
  2  (13 ,2380 ,TO_DATE(' 20010204', 'YYYYMMDD') ,3 ,999 ,1 ,1232.16);

1 row created.

SQL>
SQL> COMMIT;

Commit complete.

SQL> select SUBOBJECT_NAME, value from v$segment_statistics
  2  where owner = 'SH'
  3  and OBJECT_NAME = 'SALES'
  4  and STATISTIC_NAME = 'logical reads'
  5  order by SUBOBJECT_NAME
  6  /

SUBOBJECT_NAME                      VALUE
------------------------------ ----------
SALES_1995                             16
SALES_1996                             32
SALES_H1_1997                          16
SALES_H2_1997                           0
SALES_Q1_1998                         256
SALES_Q1_1999                         336
SALES_Q1_2000                         336
SALES_Q1_2001                         496
SALES_Q1_2002                           0
SALES_Q1_2003                          32
SALES_Q2_1998                         224

SUBOBJECT_NAME                      VALUE
------------------------------ ----------
SALES_Q2_1999                         288
SALES_Q2_2000                         304
SALES_Q2_2001                         352
SALES_Q2_2002                           0
SALES_Q2_2003                           0
SALES_Q3_1998                         272
SALES_Q3_1999                         336
SALES_Q3_2000                         320
SALES_Q3_2001                         352
SALES_Q3_2002                          16
SALES_Q3_2003                           0

SUBOBJECT_NAME                      VALUE
------------------------------ ----------
SALES_Q4_1998                         320
SALES_Q4_1999                         336
SALES_Q4_2000                         304
SALES_Q4_2001                         384
SALES_Q4_2002                          16
SALES_Q4_2003                          32

28 rows selected.

SQL> begin
  2   dbms_mview.refresh('MV_SALES','F');
  3  end;
  4  /

PL/SQL procedure successfully completed.

SQL> select SUBOBJECT_NAME, value from v$segment_statistics
  2  where owner = 'SH'
  3  and OBJECT_NAME = 'SALES'
  4  and STATISTIC_NAME = 'logical reads'
  5  order by SUBOBJECT_NAME
  6  /

SUBOBJECT_NAME                      VALUE
------------------------------ ----------
SALES_1995                             16
SALES_1996                             32
SALES_H1_1997                          16
SALES_H2_1997                           0
SALES_Q1_1998                         256
SALES_Q1_1999                         336
SALES_Q1_2000                         336
SALES_Q1_2001                         608
SALES_Q1_2002                           0
SALES_Q1_2003                          32
SALES_Q2_1998                         224

SUBOBJECT_NAME                      VALUE
------------------------------ ----------
SALES_Q2_1999                         288
SALES_Q2_2000                         304
SALES_Q2_2001                         352
SALES_Q2_2002                           0
SALES_Q2_2003                           0
SALES_Q3_1998                         272
SALES_Q3_1999                         336
SALES_Q3_2000                         320
SALES_Q3_2001                         352
SALES_Q3_2002                          16
SALES_Q3_2003                           0

SUBOBJECT_NAME                      VALUE
------------------------------ ----------
SALES_Q4_1998                         320
SALES_Q4_1999                         336
SALES_Q4_2000                         304
SALES_Q4_2001                         384
SALES_Q4_2002                          16
SALES_Q4_2003                          32

28 rows selected.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Prod
PL/SQL Release 10.1.0.4.0 - Production
CORE    10.1.0.4.0      Production
TNS for 32-bit Windows: Version 10.1.0.4.0 - Production
NLSRTL Version 10.1.0.4.0 - Production 

     

What is the required additional privilege for refresh on commit in 9i?

Pauline, September 29, 2005 - 1:21 pm UTC

Tom,
we created an MV with FAST REFRESH ON COMMIT while using Oracle 8.1.7. Now we are using Oracle 9.2.0.4. After we dropped this MV, we can't use the same code to create it -- we get
the error ORA-01031: insufficient privileges. If we remove 'ON COMMIT', the MV can be created. Please see below:

SQL> CREATE MATERIALIZED VIEW MIRMVU.MVU_EXCHANGE_RATES
  2  TABLESPACE MIR_MVU_A
  3  BUILD IMMEDIATE
  4  REFRESH FAST ON COMMIT
  5  WITH ROWID
  6  AS
  7  SELECT
  8    exchange_rates.EXCHRATE_ID,
  9    CURRENCIES.CURRENCY_CODE,
 10    exchange_rates.EXCHANGE_RATE_DT, exchange_rates.RATE_PER_USD_NUM,
 11    exchange_rates.RATE_PER_GBP_NUM, exchange_rates.RATE_PER_EUR_NUM,
 12    exchange_rates.DELETED_FLG, exchange_rates.LAST_UPDATE_DT,
 13    exchange_rates.rowid RATE_RID, CURRENCIES.rowid CURR_RID
 14  FROM core.exchange_rates,
 15       core.CURRENCIES
 16  WHERE CURRENCIES.CURRENCY_ID = exchange_rates.CURRENCY_ID;
  FROM core.exchange_rates,
            *
ERROR at line 14:
ORA-01031: insufficient privileges


SQL> CREATE MATERIALIZED VIEW MIRMVU.MVU_EXCHANGE_RATES
  2  TABLESPACE MIR_MVU_A
  3  BUILD IMMEDIATE
  4  REFRESH FAST
  5  WITH ROWID
  6  AS
  7  SELECT
  8    exchange_rates.EXCHRATE_ID,
  9    CURRENCIES.CURRENCY_CODE,
 10    exchange_rates.EXCHANGE_RATE_DT, exchange_rates.RATE_PER_USD_NUM,
 11    exchange_rates.RATE_PER_GBP_NUM, exchange_rates.RATE_PER_EUR_NUM,
 12    exchange_rates.DELETED_FLG, exchange_rates.LAST_UPDATE_DT,
 13    exchange_rates.rowid RATE_RID, CURRENCIES.rowid CURR_RID
 14  FROM core.exchange_rates,
 15       core.CURRENCIES
 16  WHERE CURRENCIES.CURRENCY_ID = exchange_rates.CURRENCY_ID;

Materialized view created.

We really need to have it refresh on commit. I guess
Oracle 9i has some restriction for refresh on commit. What additional privilege is needed for creating this kind of MV?

Thanks.




 

Tom Kyte
September 30, 2005 - 8:28 am UTC

you are missing a grant you had in the old system -- you are doing this across schemas - hence grants on the mv logs and such would have been in place.

go back to your original system and look at the grants that were in place.
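For what it's worth, 9i introduced a separate ON COMMIT REFRESH object privilege: to create a REFRESH ON COMMIT materialized view on another schema's tables, the creating schema generally needs that grant on each master table in addition to SELECT. Against the tables in the post, that would look like:

```sql
-- granted by the owner of the master tables (or a suitably privileged user)
GRANT ON COMMIT REFRESH ON core.exchange_rates TO mirmvu;
GRANT ON COMMIT REFRESH ON core.currencies     TO mirmvu;
```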

Pauline, September 30, 2005 - 10:43 am UTC

This user has the SELECT ANY TABLE privilege. We only dropped this MV in the development database; in staging and production
this MV is still there. We compared this user's privileges across
dev/staging/prod: no difference. Also, I just tried logging in
as this user and creating an MV with a different name, using the same code, in the staging environment (because the original one still exists), and I get ORA-01031 as well. When I don't use ON COMMIT, it is OK.

It is very strange.





Tom Kyte
September 30, 2005 - 11:49 am UTC

give me a test case - create user a, create user b, create the simple tables, mv logs, etc, grants

so I can run it in version X and see it work and version Y and see it fail.

I have tested on different versions and got results back

Pauline, September 30, 2005 - 8:39 pm UTC

Tom,
I have created 2 test cases based on the different versions of the Oracle software (the 8.1.7 database was down and zipped
to save disk space -- we don't use it for our application now; I just uncompressed the files and started the database in order to create the test case, which is why I delayed posting results here). The results indicate my guess was right -- 9.2.0.4 requires something more, privilege-wise, for ON COMMIT. But I just don't know what kind of additional privilege this user needs in 9.2.0.4, because it has the SELECT ANY TABLE privilege,
and if there is no 'ON COMMIT' in the create statement, the MV can be created.

Please see all from my 2 test cases below:

<dbdev1:oracle:dev1:/appl/oracle/admin/bin/log> sqlplus /

SQL*Plus: Release 9.2.0.4.0 - Production on Fri Sep 30 17:59:11 2005

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

SQL> Create user  "TESTMVU" identified by TEST
  2    DEFAULT TABLESPACE MIR_MVU_A
  TEMPORARY TABLESPACE TEMP_DEFAULT
  PROFILE DEFAULT
  QUOTA UNLIMITED ON MIR_MVU_A
  ;
grant ALTER SESSION to testmvu;
  3    4    5    6  grant CREATE SESSION to testmvu;
grant CREATE SNAPSHOT to testmvu;
grant CREATE TABLE to testmvu;
grant CREATE VIEW to testmvu;
grant SELECT ANY TABLE to testmvu;
User created.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 

Grant succeeded.

SQL> Create user  "TESTCORE" identified by TEST
  2    DEFAULT TABLESPACE MIR_DATA_A
  TEMPORARY TABLESPACE TEMP_DEFAULT
  QUOTA UNLIMITED ON MIR_DATA_A
  QUOTA UNLIMITED ON MIR_INDEX_A
  ;
grant ALTER SESSION to testcore;
grant CREATE SESSION to testcore;
grant CREATE SNAPSHOT to testcore;
grant CREATE TABLE to testcore;
  3    4    5    6  
User created.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> grant select any table to testcore;

Grant succeeded.

SQL> conn testcore/test
Connected.
SQL> create table rates as select * from core.exchange_rates where rownum<101;

Table created.

SQL> create table curr as select * from core.currencies where rownum <101;

Table created.

SQL> desc rates 
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EXCHRATE_ID                               NOT NULL NUMBER
 CURRENCY_ID                               NOT NULL NUMBER
 EXCHANGE_RATE_DT                          NOT NULL DATE
 RATE_PER_USD_NUM                                   NUMBER
 RATE_PER_GBP_NUM                                   NUMBER
 RATE_PER_EUR_NUM                                   NUMBER
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE

SQL> alter table rates add constraint pk_rate_id primary key (exchrate_id) using index tablespace MIR_INDEX_A;

Table altered.

SQL> desc curr
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CURRENCY_ID                               NOT NULL NUMBER
 CURRENCY_CODE                             NOT NULL VARCHAR2(4)
 CURRENCY_NM                               NOT NULL VARCHAR2(40)
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE
 CURRENCY_SYMBOL_NUM                                NUMBER
 VALID_FROM_DT                             NOT NULL DATE
 VALID_TO_DT                                        DATE


SQL> alter table curr add constraint pk_curr_id primary key (CURRENCY_ID ) using index tablespace MIR_INDEX_A;

Table altered.

SQL> create materialized view log on testcore.rates
  2      tablespace MIR_DATA_A 
    with rowid, primary key 
    excluding new values;  3    4  

Materialized view log created.

SQL> create materialized view log on testcore.curr
  2      tablespace MIR_DATA_A 
    with rowid, primary key 
    excluding new values;  3    4  

Materialized view log created.

SQL> select count(*) from
  2  (SELECT
  rates.EXCHRATE_ID,
  CURR.CURRENCY_CODE,
  rates.EXCHANGE_RATE_DT,rates.RATE_PER_USD_NUM,
  rates.RATE_PER_GBP_NUM,rates.RATE_PER_EUR_NUM,
  3    4    5    6    7    rates.DELETED_FLG, rates.LAST_UPDATE_DT ,
  rates.rowid RATE_RID,CURR.rowid CURR_RID 
  FROM testcore.rates,
       testcore.CURR
  8    9   10   11    WHERE CURR.CURRENCY_ID=rates.CURRENCY_ID);

  COUNT(*)
----------
       100

SQL> analyze table rates compute statistics for table for all indexes for all indexed columns;

Table analyzed.

SQL> analyze table curr compute statistics for table for all indexes for all indexed columns;

Table analyzed.

SQL> conn testmvu/test
Connected.
SQL> sho user
USER is "TESTMVU"
SQL> desc testcore.rates;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EXCHRATE_ID                               NOT NULL NUMBER
 CURRENCY_ID                               NOT NULL NUMBER
 EXCHANGE_RATE_DT                          NOT NULL DATE
 RATE_PER_USD_NUM                                   NUMBER
 RATE_PER_GBP_NUM                                   NUMBER
 RATE_PER_EUR_NUM                                   NUMBER
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE

SQL> desc testcore.curr
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CURRENCY_ID                               NOT NULL NUMBER
 CURRENCY_CODE                             NOT NULL VARCHAR2(4)
 CURRENCY_NM                               NOT NULL VARCHAR2(40)
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE
 CURRENCY_SYMBOL_NUM                                NUMBER
 VALID_FROM_DT                             NOT NULL DATE
 VALID_TO_DT                                        DATE

SQL> create materialized view mvu_exchange_rates_test
  2  tablespace mir_mvu_a
BUILD IMMEDIATE
REFRESH FAST ON COMMIT 
AS
SELECT
  rates.EXCHRATE_ID,
  CURR.CURRENCY_CODE,
  rates.EXCHANGE_RATE_DT,rates.RATE_PER_USD_NUM,
  3    4    5    6    7    8    9   10    rates.RATE_PER_GBP_NUM,rates.RATE_PER_EUR_NUM,
  rates.DELETED_FLG, rates.LAST_UPDATE_DT ,
  rates.rowid RATE_RID,CURR.rowid CURR_RID 
  FROM testcore.rates,
 11   12   13   14         testcore.CURR
 15    WHERE CURR.CURRENCY_ID=rates.CURRENCY_ID;
  FROM testcore.rates,
                *
ERROR at line 13:
ORA-01031: insufficient privileges


SQL> create materialized view mvu_exchange_rates_test
  2  tablespace mir_mvu_a
BUILD IMMEDIATE
REFRESH FAST 
AS
SELECT
  rates.EXCHRATE_ID,
  3    4    5    6    7    8    CURR.CURRENCY_CODE,
  rates.EXCHANGE_RATE_DT,rates.RATE_PER_USD_NUM,
  9   10    rates.RATE_PER_GBP_NUM,rates.RATE_PER_EUR_NUM,
 11    rates.DELETED_FLG, rates.LAST_UPDATE_DT ,
 12    rates.rowid RATE_RID,CURR.rowid CURR_RID 
  FROM testcore.rates,
       testcore.CURR
  WHERE CURR.CURRENCY_ID=rates.CURRENCY_ID; 13   14   15  

Materialized view created.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production


<dbdev1:dev4:/appl/oracle> sqlplus '/as sysdba'

SQL*Plus: Release 8.1.7.0.0 - Production on Fri Sep 30 20:01:10 2005

(c) Copyright 2000 Oracle Corporation.  All rights reserved.


Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.4.0 - 64bit Production

SQL> Create user  "TESTMVU" identified by TEST
  2    DEFAULT TABLESPACE MIR_MVU_A
  TEMPORARY TABLESPACE TEMP
  PROFILE DEFAULT
  QUOTA UNLIMITED ON MIR_MVU_A
  ;  3    4    5    6  

User created.

SQL> grant ALTER SESSION to testmvu;
grant CREATE SESSION to testmvu;
grant CREATE SNAPSHOT to testmvu;
grant CREATE TABLE to testmvu;
grant CREATE VIEW to testmvu;
grant SELECT ANY TABLE to testmvu;
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 

Grant succeeded.

SQL> Create user  "TESTCORE" identified by TEST
  2    DEFAULT TABLESPACE MIR_DATA_A
  TEMPORARY TABLESPACE TEMP
  QUOTA UNLIMITED ON MIR_DATA_A
  QUOTA UNLIMITED ON MIR_INDEX_A
  ;
grant ALTER SESSION to testcore;
grant CREATE SESSION to testcore;
  3    4    5    6  grant CREATE SNAPSHOT to testcore;
grant CREATE TABLE to testcore;
User created.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 

Grant succeeded.

SQL>  grant select any table to testcore;

Grant succeeded.

SQL> conn testcore/test
Connected.
SQL> sho user
USER is "TESTCORE"
SQL> select name from v$database;

NAME
---------
DEV4

SQL> create table rates as select * from core.exchange_rates where rownum<101;

Table created.

SQL> create table curr as select * from core.currencies where rownum <101;

Table created.

SQL> desc rates 
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EXCHRATE_ID                               NOT NULL NUMBER
 CURRENCY_ID                               NOT NULL NUMBER
 EXCHANGE_RATE_DT                          NOT NULL DATE
 RATE_PER_USD_NUM                                   NUMBER
 RATE_PER_GBP_NUM                                   NUMBER
 RATE_PER_EUR_NUM                                   NUMBER
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE

SQL> alter table rates add constraint pk_rate_id primary key (exchrate_id) using index tablespace MIR_INDEX_A;

Table altered.

SQL> desc curr
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CURRENCY_ID                               NOT NULL NUMBER
 CURRENCY_CODE                             NOT NULL VARCHAR2(4)
 CURRENCY_NM                               NOT NULL VARCHAR2(40)
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE
 CURRENCY_SYMBOL_NUM                                NUMBER
 VALID_FROM_DT                             NOT NULL DATE
 VALID_TO_DT                                        DATE

SQL> alter table curr add constraint pk_curr_id primary key (CURRENCY_ID ) using index tablespace MIR_INDEX_A;

Table altered.

SQL> create materialized view log on testcore.rates
  2          tablespace MIR_DATA_A 
    with rowid, primary key 
    excluding new values;    3    4  

Materialized view log created.

SQL> create materialized view log on testcore.curr
  2        tablespace MIR_DATA_A 
    with rowid, primary key 
    excluding new values;   3    4  

Materialized view log created.

SQL> analyze table rates compute statistics for table for all indexes for all indexed columns;

Table analyzed.

SQL> analyze table curr compute statistics for table for all indexes for all indexed columns;

Table analyzed.

SQL> select count(*) from
  2      (SELECT
  rates.EXCHRATE_ID,
  CURR.CURRENCY_CODE,
  rates.EXCHANGE_RATE_DT,rates.RATE_PER_USD_NUM,
  rates.RATE_PER_GBP_NUM,rates.RATE_PER_EUR_NUM,
  3    4    5    6    7       rates.DELETED_FLG, rates.LAST_UPDATE_DT ,
  rates.rowid RATE_RID,CURR.rowid CURR_RID 
  FROM testcore.rates,
       testcore.CURR
     WHERE CURR.CURRENCY_ID=rates.CURRENCY_ID);  8    9   10   11  

  COUNT(*)
----------
       100

SQL> conn testmvu/test
Connected.
SQL> sho user
USER is "TESTMVU"
SQL> desc testcore.rates;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EXCHRATE_ID                               NOT NULL NUMBER
 CURRENCY_ID                               NOT NULL NUMBER
 EXCHANGE_RATE_DT                          NOT NULL DATE
 RATE_PER_USD_NUM                                   NUMBER
 RATE_PER_GBP_NUM                                   NUMBER
 RATE_PER_EUR_NUM                                   NUMBER
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE

SQL> desc testcore.curr
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CURRENCY_ID                               NOT NULL NUMBER
 CURRENCY_CODE                             NOT NULL VARCHAR2(4)
 CURRENCY_NM                               NOT NULL VARCHAR2(40)
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE
 CURRENCY_SYMBOL_NUM                                NUMBER
 VALID_FROM_DT                             NOT NULL DATE
 VALID_TO_DT                                        DATE

SQL> create materialized view mvu_exchange_rates_test
  2  tablespace mir_mvu_a
BUILD IMMEDIATE
REFRESH FAST ON COMMIT
AS
SELECT
  rates.EXCHRATE_ID,
  3    4    5    6    7    8    CURR.CURRENCY_CODE,
  rates.EXCHANGE_RATE_DT,rates.RATE_PER_USD_NUM,
  rates.RATE_PER_GBP_NUM,rates.RATE_PER_EUR_NUM,
  rates.DELETED_FLG, rates.LAST_UPDATE_DT ,
  rates.rowid RATE_RID,CURR.rowid CURR_RID 
  FROM testcore.rates,
       testcore.CURR
  WHERE CURR.CURRENCY_ID=rates.CURRENCY_ID;  9   10   11   12   13   14   15  

Materialized view created.

SQL> select * from v$version;  

BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.4.0 - 64bit Production
PL/SQL Release 8.1.7.4.0 - Production
CORE    8.1.7.0.0       Production
TNS for Solaris: Version 8.1.7.4.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

Thanks for your help.

 

Tom Kyte
October 01, 2005 - 8:57 pm UTC

I was starting to run this - but realized - I don't have your tables.

So, if you make this "self contained" - create user, create user, create table, create table -- etc so that anyone can run it, I'll be glad to take a look at it.

my question above

A reader, October 03, 2005 - 12:59 pm UTC

Hi Tom,

Could you please have a look at my question above:
10g R2 new feature: Partition Change Tracking Refresh Without Materialized View Logs September 20, 2005

Thanks
Maurice

Tom Kyte
October 03, 2005 - 8:39 pm UTC

insufficient time to research right now.

Please see 'self-contained' test cases below

Pauline, October 03, 2005 - 1:08 pm UTC

Tom,
Thanks very much for your response. Today I created two self-contained test cases, one per Oracle software version. As I noted, 8.1.7 can create the MV with ON COMMIT, but 9.2.0.4 cannot, due to ORA-01031 -- yet if 'ON COMMIT' is removed in 9.2.0.4, it is OK. I am posting the output from my screen here. Please let me know your test results.

<dbdev1:oracle:dev1:/db/dbbackup/expimp.dump> sqlplus /

SQL*Plus: Release 9.2.0.4.0 - Production on Mon Oct 3 11:33:32 2005

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

SQL> drop user TESTMVU cascade;

User dropped.

SQL> Create user  "TESTMVU" identified by TEST
  2    DEFAULT TABLESPACE MIR_MVU_A
  TEMPORARY TABLESPACE TEMP_DEFAULT
  PROFILE DEFAULT
  3    4    5    QUOTA UNLIMITED ON MIR_MVU_A
  ;  6  

User created.

SQL> grant ALTER SESSION to testmvu;

grant CREATE SESSION to testmvu;
grant CREATE SNAPSHOT to testmvu;
grant CREATE TABLE to testmvu;
Grant succeeded.

SQL> 
Grant succeeded.
grant CREATE VIEW to testmvu;

SQL> grant SELECT ANY TABLE to testmvu;
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 

Grant succeeded.

SQL> drop user TESTCORE cascade;

User dropped.

SQL> Create user  "TESTCORE" identified by TEST
  2    DEFAULT TABLESPACE MIR_DATA_A
  3    TEMPORARY TABLESPACE TEMP_DEFAULT
  QUOTA UNLIMITED ON MIR_DATA_A
  4    5    QUOTA UNLIMITED ON MIR_INDEX_A
  ;  6  

User created.

SQL> grant ALTER SESSION to testcore;

Grant succeeded.

SQL> grant CREATE SESSION to testcore;
grant CREATE SNAPSHOT to testcore;
grant CREATE TABLE to testcore;
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 

Grant succeeded.

SQL> conn testcore/test
Connected.

SQL> CREATE TABLE testcore.RATES
  2  (
  3    EXCHRATE_ID       NUMBER                      NOT NULL,
  CURRENCY_ID       NUMBER                      NOT NULL,
  4    5    EXCHANGE_RATE_DT  DATE                        NOT NULL,
  6    RATE_PER_USD_NUM  NUMBER,
  RATE_PER_GBP_NUM  NUMBER,
  7    8    RATE_PER_EUR_NUM  NUMBER,
  DELETED_FLG       VARCHAR2(1)                 DEFAULT 'N'                   NO  9  T NULL,
 10    LAST_UPDATE_DT    DATE                        NOT NULL
)
 11   12  TABLESPACE MIR_DATA_A;

Table created.

SQL> desc rates;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EXCHRATE_ID                               NOT NULL NUMBER
 CURRENCY_ID                               NOT NULL NUMBER
 EXCHANGE_RATE_DT                          NOT NULL DATE
 RATE_PER_USD_NUM                                   NUMBER
 RATE_PER_GBP_NUM                                   NUMBER
 RATE_PER_EUR_NUM                                   NUMBER
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE

SQL> CREATE TABLE testcore.CURR
  2  (
  CURRENCY_ID          NUMBER                   NOT NULL,
  3    4    CURRENCY_CODE        VARCHAR2(4)              NOT NULL,
  CURRENCY_NM          VARCHAR2(40)             NOT NULL,
  5    6    DELETED_FLG          VARCHAR2(1)              DEFAULT 'N'                   NOT NULL,
  7    LAST_UPDATE_DT       DATE                     NOT NULL,
  8    CURRENCY_SYMBOL_NUM  NUMBER,
  VALID_FROM_DT        DATE                     NOT NULL,
  9   10    VALID_TO_DT          DATE
)
 11   12  TABLESPACE MIR_DATA_A;

Table created.

SQL> desc curr;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CURRENCY_ID                               NOT NULL NUMBER
 CURRENCY_CODE                             NOT NULL VARCHAR2(4)
 CURRENCY_NM                               NOT NULL VARCHAR2(40)
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE
 CURRENCY_SYMBOL_NUM                                NUMBER
 VALID_FROM_DT                             NOT NULL DATE
 VALID_TO_DT                                        DATE

SQL> alter table rates add constraint pk_rate_id primary key (exchrate_id) using index tablespace MIR_INDEX_A;

Table altered.

SQL> alter table curr add constraint pk_curr_id primary key (CURRENCY_ID ) using index tablespace MIR_INDEX_A;

Table altered.

SQL> INSERT INTO CURR ( CURRENCY_ID, CURRENCY_CODE, CURRENCY_NM, DELETED_FLG, LAST_UPDATE_DT,
CURRENCY_SYMBOL_NUM, VALID_FROM_DT, VALID_TO_DT ) VALUES ( 
  2    3  111, 'ATS', 'AUSTRIAN SCHILLING', 'Y',  TO_Date( '01/02/2001 04:33:12 PM', 'MM/DD/YYYY HH:MI:SS AM')
  4  , NULL,  TO_Date( '01/01/1900 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), NULL); 

1 row created.


SQL> INSERT INTO RATES ( EXCHRATE_ID, CURRENCY_ID, EXCHANGE_RATE_DT, RATE_PER_USD_NUM,
RATE_PER_GBP_NUM, RATE_PER_EUR_NUM, DELETED_FLG, LAST_UPDATE_DT ) VALUES ( 
  2    3  61624466, 111,  TO_Date( '04/29/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), 0.1033057
  4  , 0.0639049, 0.1000619, 'N',  TO_Date( '12/27/2001 08:47:00 PM', 'MM/DD/YYYY HH:MI:SS AM')); 

1 row created.

SQL> commit;

Commit complete.

SQL> create materialized view log on testcore.rates
  2       tablespace MIR_DATA_A 
     with rowid, primary key 
  3    4       excluding new values;  

Materialized view log created.

SQL> create materialized view log on testcore.curr
  2       tablespace MIR_DATA_A 
    with rowid, primary key 
  3    4      excluding new values; 

Materialized view log created.

SQL> analyze table rates compute statistics;

Table analyzed.

SQL> analyze table curr compute statistics ;

Table analyzed.

SQL> conn testmvu/test
Connected.
SQL> desc testcore.rates;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EXCHRATE_ID                               NOT NULL NUMBER
 CURRENCY_ID                               NOT NULL NUMBER
 EXCHANGE_RATE_DT                          NOT NULL DATE
 RATE_PER_USD_NUM                                   NUMBER
 RATE_PER_GBP_NUM                                   NUMBER
 RATE_PER_EUR_NUM                                   NUMBER
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE

SQL>  desc testcore.curr
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CURRENCY_ID                               NOT NULL NUMBER
 CURRENCY_CODE                             NOT NULL VARCHAR2(4)
 CURRENCY_NM                               NOT NULL VARCHAR2(40)
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE
 CURRENCY_SYMBOL_NUM                                NUMBER
 VALID_FROM_DT                             NOT NULL DATE
 VALID_TO_DT                                        DATE

SQL> create materialized view mvu_exchange_rates_test
  2      tablespace mir_mvu_a
BUILD IMMEDIATE
  3    4  REFRESH FAST ON COMMIT 
AS
  5    6  SELECT
  rates.EXCHRATE_ID,
  7    8    CURR.CURRENCY_CODE,
  rates.EXCHANGE_RATE_DT,rates.RATE_PER_USD_NUM,
  9   10    rates.RATE_PER_GBP_NUM,rates.RATE_PER_EUR_NUM,
  rates.DELETED_FLG, rates.LAST_UPDATE_DT ,
 11   12    rates.rowid RATE_RID,CURR.rowid CURR_RID 
  FROM testcore.rates,
 13   14         testcore.CURR
    WHERE CURR.CURRENCY_ID=rates.CURRENCY_ID;
 15    FROM testcore.rates,  FROM testcore.rates,
                *
ERROR at line 13:
ORA-01031: insufficient privileges


SQL> create materialized view mvu_exchange_rates_test
  2       tablespace mir_mvu_a
  3  BUILD IMMEDIATE
REFRESH FAST 
  4    5  AS
  6  SELECT
  7    rates.EXCHRATE_ID,
  8    CURR.CURRENCY_CODE,
  rates.EXCHANGE_RATE_DT,rates.RATE_PER_USD_NUM,
  9   10       rates.RATE_PER_GBP_NUM,rates.RATE_PER_EUR_NUM,
     rates.DELETED_FLG, rates.LAST_UPDATE_DT ,
 11   12       rates.rowid RATE_RID,CURR.rowid CURR_RID 
  FROM testcore.rates,
 13   14         testcore.CURR
 15    WHERE CURR.CURRENCY_ID=rates.CURRENCY_ID;

Materialized view created.

SQL> select * from mvu_exchange_rates_test;

EXCHRATE_ID CURR EXCHANGE_ RATE_PER_USD_NUM RATE_PER_GBP_NUM RATE_PER_EUR_NUM D
----------- ---- --------- ---------------- ---------------- ---------------- -
LAST_UPDA RATE_RID           CURR_RID
--------- ------------------ ------------------
   61624466 ATS  29-APR-95         .1033057         .0639049         .1000619 N
27-DEC-01 ABBHJVAAPAAEEHGAAA ABBHJWAAPAAEEIGAAA

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
PL/SQL Release 9.2.0.4.0 - Production
CORE    9.2.0.3.0       Production
TNS for Solaris: Version 9.2.0.4.0 - Production
NLSRTL Version 9.2.0.4.0 - Production

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

*********************************************************************************

<dbdev1:oracle:dev1:/appl/oracle> . ./.profile.8i
<dbdev1:dev4:/appl/oracle> sqlplus '/as sysdba'

SQL*Plus: Release 8.1.7.0.0 - Production on Mon Oct 3 12:36:34 2005

(c) Copyright 2000 Oracle Corporation.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area  189595324 bytes
Fixed Size                   102076 bytes
Variable Size              57880576 bytes
Database Buffers          131072000 bytes
Redo Buffers                 540672 bytes
Database mounted.
Database opened.
SQL> drop user TESTMVU cascade;

User dropped.

SQL> Create user  "TESTMVU" identified by TEST
  2    DEFAULT TABLESPACE MIR_MVU_A
  3    TEMPORARY TABLESPACE TEMP
  PROFILE DEFAULT
  4    5    QUOTA UNLIMITED ON MIR_MVU_A
  ;  6  

User created.

SQL> grant ALTER SESSION to testmvu;
grant CREATE SESSION to testmvu;
grant CREATE SNAPSHOT to testmvu;
grant CREATE TABLE to testmvu;
grant CREATE VIEW to testmvu;
grant SELECT ANY TABLE to testmvu;

Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
SQL> drop user TESTCORE cascade;

User dropped.

SQL> Create user  "TESTCORE" identified by TEST
  2    DEFAULT TABLESPACE MIR_DATA_A
  TEMPORARY TABLESPACE TEMP
  3    4    QUOTA UNLIMITED ON MIR_DATA_A
  5    QUOTA UNLIMITED ON MIR_INDEX_A
  6    ;

User created.

SQL> grant ALTER SESSION to testcore;

Grant succeeded.

SQL> grant CREATE SESSION to testcore;
grant CREATE SNAPSHOT to testcore;
grant CREATE TABLE to testcore;

Grant succeeded.

SQL> 
Grant succeeded.

SQL> 
Grant succeeded.

SQL>   
SQL> conn testcore/test
Connected.
SQL> CREATE TABLE testcore.RATES
  2  (
  3    EXCHRATE_ID       NUMBER                      NOT NULL,
  CURRENCY_ID       NUMBER                      NOT NULL,
  EXCHANGE_RATE_DT  DATE                        NOT NULL,
  4    5    6    RATE_PER_USD_NUM  NUMBER,
  RATE_PER_GBP_NUM  NUMBER,
  7    8    RATE_PER_EUR_NUM  NUMBER,
  DELETED_FLG       VARCHAR2(1)                 DEFAULT 'N'                   NO  9  T NULL,
  LAST_UPDATE_DT    DATE                        NOT NULL
 10   11  )
TABLESPACE MIR_DATA_A; 12  

Table created.

SQL> desc rates 
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EXCHRATE_ID                               NOT NULL NUMBER
 CURRENCY_ID                               NOT NULL NUMBER
 EXCHANGE_RATE_DT                          NOT NULL DATE
 RATE_PER_USD_NUM                                   NUMBER
 RATE_PER_GBP_NUM                                   NUMBER
 RATE_PER_EUR_NUM                                   NUMBER
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE

SQL> CREATE TABLE testcore.CURR
  2  (
  CURRENCY_ID          NUMBER                   NOT NULL,
  3    4    CURRENCY_CODE        VARCHAR2(4)              NOT NULL,
  CURRENCY_NM          VARCHAR2(40)             NOT NULL,
  5    6    DELETED_FLG          VARCHAR2(1)              DEFAULT 'N'                   NOT NULL,
  7    LAST_UPDATE_DT       DATE                     NOT NULL,
  8    CURRENCY_SYMBOL_NUM  NUMBER,
  VALID_FROM_DT        DATE                     NOT NULL,
  9   10    VALID_TO_DT          DATE
) 11   
 12  TABLESPACE MIR_DATA_A;

Table created.

SQL> desc curr;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CURRENCY_ID                               NOT NULL NUMBER
 CURRENCY_CODE                             NOT NULL VARCHAR2(4)
 CURRENCY_NM                               NOT NULL VARCHAR2(40)
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE
 CURRENCY_SYMBOL_NUM                                NUMBER
 VALID_FROM_DT                             NOT NULL DATE
 VALID_TO_DT                                        DATE


SQL> alter table rates add constraint pk_rate_id primary key (exchrate_id) using index 
  2  tablespace MIR_INDEX_A;

Table altered.

SQL> alter table curr add constraint pk_curr_id primary key (CURRENCY_ID ) using index 
  2  tablespace MIR_INDEX_A;

Table altered.

SQL> INSERT INTO CURR ( CURRENCY_ID, CURRENCY_CODE, CURRENCY_NM, 
  2  DELETED_FLG, LAST_UPDATE_DT,CURRENCY_SYMBOL_NUM, VALID_FROM_DT, VALID_TO_DT ) 
  3  VALUES 
( 111, 'ATS', 'AUSTRIAN SCHILLING', 'Y',  TO_Date( '01/02/2001 04:33:12 PM', 
  4    5  'MM/DD/YYYY HH:MI:SS AM'), NULL,  TO_Date( '01/01/1900 12:00:00 AM', 
'MM/DD/YYYY HH:MI:SS AM'), NULL);   6  

1 row created.

SQL> INSERT INTO RATES ( EXCHRATE_ID, CURRENCY_ID, EXCHANGE_RATE_DT, 
  2  RATE_PER_USD_NUM,RATE_PER_GBP_NUM, RATE_PER_EUR_NUM, DELETED_FLG, LAST_UPDATE_DT ) 
  3  VALUES 
( 61624466, 111,  TO_Date( '04/29/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'),   4  0.1033057
, 0.0639049, 0.1000619, 'N',  TO_Date( '12/27/2001 08:47:00 PM', 'MM/DD/YYYY HH:  5  MI:SS AM')); 

1 row created.

SQL> commit;

Commit complete.

SQL> create materialized view log on testcore.rates
  2       tablespace MIR_DATA_A 
  3       with rowid, primary key 
     excluding new values;    4  

Materialized view log created.

SQL>  create materialized view log on testcore.curr
  2       tablespace MIR_DATA_A 
    with rowid, primary key 
  3    4      excluding new values; 

Materialized view log created.

SQL> analyze table rates compute statistics;

Table analyzed.

SQL> analyze table curr compute statistics;

Table analyzed.

SQL> conn testmvu/test
Connected.
SQL> desc testcore.rates;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EXCHRATE_ID                               NOT NULL NUMBER
 CURRENCY_ID                               NOT NULL NUMBER
 EXCHANGE_RATE_DT                          NOT NULL DATE
 RATE_PER_USD_NUM                                   NUMBER
 RATE_PER_GBP_NUM                                   NUMBER
 RATE_PER_EUR_NUM                                   NUMBER
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE

SQL>  desc testcore.curr
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CURRENCY_ID                               NOT NULL NUMBER
 CURRENCY_CODE                             NOT NULL VARCHAR2(4)
 CURRENCY_NM                               NOT NULL VARCHAR2(40)
 DELETED_FLG                               NOT NULL VARCHAR2(1)
 LAST_UPDATE_DT                            NOT NULL DATE
 CURRENCY_SYMBOL_NUM                                NUMBER
 VALID_FROM_DT                             NOT NULL DATE
 VALID_TO_DT                                        DATE

SQL> create materialized view mvu_exchange_rates_test
  2      tablespace mir_mvu_a
BUILD IMMEDIATE
  3    4  REFRESH FAST ON COMMIT 
AS
  5    6  SELECT
  rates.EXCHRATE_ID,
  7    8    CURR.CURRENCY_CODE,
  rates.EXCHANGE_RATE_DT,rates.RATE_PER_USD_NUM,
  9   10    rates.RATE_PER_GBP_NUM,rates.RATE_PER_EUR_NUM,
  rates.DELETED_FLG, rates.LAST_UPDATE_DT ,
 11   12    rates.rowid RATE_RID,CURR.rowid CURR_RID 
  FROM testcore.rates,
 13   14         testcore.CURR
    WHERE CURR.CURRENCY_ID=rates.CURRENCY_ID; 15  

Materialized view created.

SQL> select * from mvu_exchange_rates_test;

EXCHRATE_ID CURR EXCHANGE_ RATE_PER_USD_NUM RATE_PER_GBP_NUM RATE_PER_EUR_NUM D
----------- ---- --------- ---------------- ---------------- ---------------- -
LAST_UPDA RATE_RID           CURR_RID
--------- ------------------ ------------------
   61624466 ATS  29-APR-95         .1033057         .0639049         .1000619 N
27-DEC-01 AACATRAAPAABzyGAAA AACATSAAPAABzzGAAA

SQL> select * from v$version;  

BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.4.0 - 64bit Production
PL/SQL Release 8.1.7.4.0 - Production
CORE    8.1.7.0.0       Production
TNS for Solaris: Version 8.1.7.4.0 - Production
NLSRTL Version 3.4.1.0.0 - Production 

Tom Kyte
October 03, 2005 - 8:41 pm UTC

i was going to run this, but it was taking forever to edit. 

tablespace names (that I do not have)
code that is formatted really hard to cut and paste (the flow is a tad "strange", things like:

SQL> CREATE TABLE testcore.RATES
  2  (
  3    EXCHRATE_ID       NUMBER                      NOT NULL,
  CURRENCY_ID       NUMBER                      NOT NULL,
  4    5    EXCHANGE_RATE_DT  DATE                        NOT NULL,
  6    RATE_PER_USD_NUM  NUMBER,
  RATE_PER_GBP_NUM  NUMBER,
  7    8    RATE_PER_EUR_NUM  NUMBER,
  DELETED_FLG       VARCHAR2(1)                 DEFAULT 'N'                   NO 
 9  T NULL,
 10    LAST_UPDATE_DT    DATE                        NOT NULL
)
 11   12  TABLESPACE MIR_DATA_A;

look at that no...... t null and such...)

make it "easier"? please. no tablespaces, just script, or use tablespaces anyone might have (like users) - a test case, something we can cut and paste and run. 

Script should run on 9.2.0.4 and 8.1.7

Pauline, October 04, 2005 - 4:19 pm UTC

Tom,
Please see the script below, which should be easier for you to run. Thanks
very much for your time and help.

**************************************************

conn /
spool test.log
set echo on

drop user TESTMVU cascade;

Create user "TESTMVU" identified by TEST
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE TEMP_DEFAULT
PROFILE DEFAULT
QUOTA UNLIMITED ON users
;


grant ALTER SESSION to testmvu;
grant CREATE SESSION to testmvu;
grant CREATE SNAPSHOT to testmvu;
grant CREATE TABLE to testmvu;
grant SELECT ANY TABLE to testmvu;

drop user TESTCORE cascade;

Create user "TESTCORE" identified by TEST
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE TEMP_DEFAULT
QUOTA UNLIMITED ON users
;


grant ALTER SESSION to testcore;
grant CREATE SESSION to testcore;
grant CREATE SNAPSHOT to testcore;
grant CREATE TABLE to testcore;


conn testcore/test

CREATE TABLE testcore.RATES
(
EXCHRATE_ID NUMBER NOT NULL,
CURRENCY_ID NUMBER NOT NULL,
EXCHANGE_RATE_DT DATE NOT NULL,
RATE_PER_USD_NUM NUMBER,
RATE_PER_GBP_NUM NUMBER,
RATE_PER_EUR_NUM NUMBER,
DELETED_FLG VARCHAR2(1) DEFAULT 'N' NOT NULL,
LAST_UPDATE_DT DATE NOT NULL
)
TABLESPACE users;


desc rates;

CREATE TABLE testcore.CURR
(
CURRENCY_ID NUMBER NOT NULL,
CURRENCY_CODE VARCHAR2(4) NOT NULL,
CURRENCY_NM VARCHAR2(40) NOT NULL,
DELETED_FLG VARCHAR2(1) DEFAULT 'N' NOT NULL,
LAST_UPDATE_DT DATE NOT NULL,
CURRENCY_SYMBOL_NUM NUMBER,
VALID_FROM_DT DATE NOT NULL,
VALID_TO_DT DATE
)
TABLESPACE users;


desc curr;


alter table rates add constraint pk_rate_id primary key (exchrate_id) ;

alter table curr add constraint pk_curr_id primary key (CURRENCY_ID ) ;


INSERT INTO CURR ( CURRENCY_ID, CURRENCY_CODE, CURRENCY_NM, DELETED_FLG, LAST_UPDATE_DT, CURRENCY_SYMBOL_NUM,
VALID_FROM_DT, VALID_TO_DT )
VALUES
(111, 'ATS', 'AUSTRIAN SCHILLING', 'Y', TO_Date( '01/02/2001 04:33:12 PM', 'MM/DD/YYYY HH:MI:SS AM') , NULL, TO_Date( '01/01/1900
12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), NULL);

INSERT INTO RATES ( EXCHRATE_ID, CURRENCY_ID, EXCHANGE_RATE_DT, RATE_PER_USD_NUM, RATE_PER_GBP_NUM,
RATE_PER_EUR_NUM, DELETED_FLG, LAST_UPDATE_DT )
VALUES
(61624466, 111, TO_Date( '04/29/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), 0.1033057 , 0.0639049, 0.1000619, 'N',
TO_Date( '12/27/2001 08:47:00 PM', 'MM/DD/YYYY HH:MI:SS AM'));


create materialized view log on testcore.rates
tablespace users
with rowid, primary key
excluding new values;

create materialized view log on testcore.curr
tablespace users
with rowid, primary key
excluding new values;

analyze table rates compute statistics;

analyze table curr compute statistics ;

conn testmvu/test
desc testcore.rates;
desc testcore.curr
create materialized view mvu_exchange_rates_test
tablespace users
BUILD IMMEDIATE
REFRESH FAST ON COMMIT
AS
SELECT
rates.EXCHRATE_ID,
CURR.CURRENCY_CODE,
rates.EXCHANGE_RATE_DT,rates.RATE_PER_USD_NUM,
rates.RATE_PER_GBP_NUM,rates.RATE_PER_EUR_NUM,
rates.DELETED_FLG, rates.LAST_UPDATE_DT ,
rates.rowid RATE_RID,CURR.rowid CURR_RID
FROM testcore.rates,
testcore.CURR
WHERE CURR.CURRENCY_ID=rates.CURRENCY_ID;


create materialized view mvu_exchange_rates_test
tablespace users
BUILD IMMEDIATE
REFRESH FAST
AS
SELECT
rates.EXCHRATE_ID,
CURR.CURRENCY_CODE,
rates.EXCHANGE_RATE_DT,rates.RATE_PER_USD_NUM,
rates.RATE_PER_GBP_NUM,rates.RATE_PER_EUR_NUM,
rates.DELETED_FLG, rates.LAST_UPDATE_DT ,
rates.rowid RATE_RID,CURR.rowid CURR_RID
FROM testcore.rates,
testcore.CURR
WHERE CURR.CURRENCY_ID=rates.CURRENCY_ID;


select * from mvu_exchange_rates_test;

select * from v$version;

exit

spool off


Tom Kyte
October 04, 2005 - 8:24 pm UTC

thanks -- that did it (with that, i was able to confirm and find....

</code> http://docs.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_63a.htm#2080852 <code>

) a new grant apparently. Learn something new every day.....
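For reference, the grant in question appears to be the ON COMMIT REFRESH object privilege, which is needed when the MV owner builds an ON COMMIT materialized view on another schema's tables. A sketch against the TESTCORE/TESTMVU users from the script above:

```sql
-- as TESTCORE (or a DBA): let TESTMVU create ON COMMIT MVs on these tables
grant on commit refresh on testcore.rates to testmvu;
grant on commit refresh on testcore.curr to testmvu;
```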

Got it !

Pauline, October 05, 2005 - 10:21 am UTC

Tom,
By reading the linked documentation for 9.2, I granted the additional privilege to that user, and then the MV with ON COMMIT could be created.

Thanks very much.



Exporting and Importing a schema having materialized views

A reader, October 12, 2005 - 2:40 pm UTC

Tom,
Did a user level export of schema1 and imported this schema into schema2. The materialized views present in schema1 were not exported to schema2.

My question
Q1) Can materialized views be exported and imported between schemas having different schema names?
Q2) If not, what is the best procedure to re-create the materialized views in the destination schema?

As always appreciate all the help you are providing to the Oracle community.

Thank you


Tom Kyte
October 13, 2005 - 9:24 am UTC

do you have an example for us to work with (as FEW objects as humanly possible)

ops$tkyte@ORA10G> drop user a cascade;
 
User dropped.
 
ops$tkyte@ORA10G> drop user b cascade;
 
User dropped.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> grant dba to a identified by a;
 
Grant succeeded.
 
ops$tkyte@ORA10G> grant dba to b identified by b;
 
Grant succeeded.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> @connect a/a
ops$tkyte@ORA10G> set termout off
a@ORA10G> @login
a@ORA10G> set termout off
a@ORA10G>
a@ORA10G> set termout on
a@ORA10G>
a@ORA10G> create table t ( x int primary key, y int );
 
Table created.
 
a@ORA10G>
a@ORA10G> create materialized view mv
  2  as
  3  select * from t where y > 50;
 
Materialized view created.
 
a@ORA10G>
a@ORA10G> !exp userid=a/a owner=a
 
Export: Release 10.1.0.4.0 - Production on Thu Oct 13 09:20:53 2005
 
Copyright (c) 1982, 2004, Oracle.  All rights reserved.
 
 
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
 
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user A
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user A
About to export A's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export A's tables via Conventional Path ...
. . exporting table                             MV          0 rows exported
. . exporting table                              T          0 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
 
a@ORA10G> !imp userid=b/b fromuser=a touser=b
 
Import: Release 10.1.0.4.0 - Production on Thu Oct 13 09:20:56 2005
 
Copyright (c) 1982, 2004, Oracle.  All rights reserved.
 
 
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options
 
Export file created by EXPORT:V10.01.00 via conventional path
 
Warning: the objects were exported by A, not by you
 
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
. . importing table                           "MV"          0 rows imported
. . importing table                            "T"          0 rows imported
Import terminated successfully without warnings.
 
a@ORA10G>
a@ORA10G> @connect b/b
a@ORA10G> set termout off
b@ORA10G> @login
b@ORA10G> set termout off
b@ORA10G>
b@ORA10G> set termout on
b@ORA10G> select object_name, object_type from user_objects;
 
OBJECT_NAME                    OBJECT_TYPE
------------------------------ -------------------
MV                             TABLE
SYS_C0020437                   INDEX
T                              TABLE
SYS_C0020438                   INDEX
MV                             MATERIALIZED VIEW
 


<b>the MV's "came with" in this case...</b>
 

Exporting and Importing a schema having materialized views

A reader, October 13, 2005 - 11:57 am UTC

Thank you Tom, for the step by step example.

After looking at the example above, particularly the statement 'grant dba to b identified by b', I
noticed that my schema 'b' did not have the create materialized view privilege, and so my import was missing the materialized views.

Thank you again.


Tom Kyte
October 13, 2005 - 1:16 pm UTC

ahh, the import wasn't MISSING the materialized views - the import FAILED on the creation of them :)
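For anyone hitting the same issue: the importing user needs the CREATE MATERIALIZED VIEW privilege, or imp will fail on the CREATE statements. A sketch, reusing user b from the example above:

```sql
-- as a DBA, before running the import into b's schema
grant create materialized view to b;
```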

one more query

Sid, October 20, 2005 - 9:11 pm UTC

Please provide me how to build this query --logic below

SELECT MAX(a1.effdt)
FROM ps_job a1
WHERE a1.emplid=job.emplid
and a1.empl_status='A'
and
(
if (case1 )
then
a1.effdt<=sysdate
else
a1.effdt>=sysdate
)


Please help me appreciate your help...

Tom Kyte
October 21, 2005 - 8:05 am UTC

eh?

no clue what "case1" might be.

Sorry about the limited info provided earlier

SID, October 21, 2005 - 12:13 pm UTC

SELECT MAX(a1.effdt)
FROM ps_job a1
WHERE a1.emplid=job.emplid
and a1.empl_status='A'
and *******************
(
if (case1 )
then
a1.effdt<=sysdate
else
a1.effdt>=sysdate
)

***** here I want to add a dynamic AND condition based on some condition (case 1): say if &var=5 then I want to use a1.effdt<=sysdate in the main query, else I want to use a1.effdt>=sysdate


Tom Kyte
October 21, 2005 - 1:55 pm UTC

this would be a case where you probably want to use dynamic sql in your programing language and "do the right thing" - run query 1 or query 2.


but if you persist in this path, just

and (( &var=5 and a1.effdt <= sysdate )
or
( (&var<> 5 or &var is null) and a1.effdt >= sysdate ))


Thanks a lot ..it worked

SID, October 21, 2005 - 2:55 pm UTC


mviews and migration

A reader, October 24, 2005 - 5:56 am UTC

Hi

We recently migrated from 8i to 9i. After the migration we noticed Oracle created tons of new jobs for those MVs which are normally refreshed on demand.

Why does Oracle create new jobs?!

Tom Kyte
October 24, 2005 - 6:30 am UTC

give me an example and tell how you migrated.

I make a pardon

A reader, October 24, 2005 - 7:37 am UTC

Hi

My apologies for the previous post. The migration didn't cause this.

It seems that if I call dbms_mview.refresh from a PL/SQL wrapper, Oracle internally creates a new job. I have verified this by looking at dba_jobs_running and watching the job sequence increase. However, if I call dbms_mview.refresh manually this doesn't happen.

Is this expected behaviour in 9.2.0.6?

Tom Kyte
October 24, 2005 - 11:36 am UTC

describe what you mean by "from a pl/sql wrapper internally"



using a procedure

A reader, October 24, 2005 - 12:59 pm UTC

Hi

If I run dbms_mview.refresh from sqlplus, it runs at the prompt.

If I create a procedure (a wrapper):

create or replace procedure p
as
begin
dbms_mview.refresh('EMP_MV');
end;
/

This internally submits a job!

How so?

Tom Kyte
October 24, 2005 - 1:14 pm UTC

show me what you mean - from start to finish....


ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;

Table created.

ops$tkyte@ORA9IR2> alter table emp add constraint emp_pk primary key(empno);

Table altered.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view emp_mv
  2  refresh on demand
  3  as
  4  select * from emp;
select * from emp
              *
ERROR at line 4:
ORA-12006: a materialized view with the same user.name already exists


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace procedure p
  2  as
  3  begin
  4          dbms_mview.refresh('EMP_MV');
  5  end;
  6  /

Procedure created.

ops$tkyte@ORA9IR2> exec p

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> select * from user_jobs;

no rows selected

 

using a procedure

Vladimir Begun, October 24, 2005 - 3:20 pm UTC

Please double check that you do not have atomic_refresh => false (unless you really need it) when you refresh mviews using your wrappers.

=========================================================
atomic_refresh

If this parameter is set to true, then the list of materialized views is refreshed in a single transaction. All of the refreshed materialized views are updated to a single point in time. If the refresh fails for any of the materialized views, none of the materialized views are updated.

If this parameter is set to false, then each of the materialized views is refreshed in a separate transaction. The number of job queue processes must be set to 1 or greater if this parameter is false.
=========================================================

Be aware that on 10g the mview index(es) rebuild(s) can be done via jobs.

atomic_refresh

A reader, October 26, 2005 - 10:31 am UTC

atomic_refresh is really a bug of bugs!

In 8i the parameter simply doesn't work: no matter how many MVs you refresh, no job is submitted.

In 9i things get better but are still not correct. If you refresh more than one MV, jobs are submitted; but if you refresh only one MV it submits jobs too! The workaround is to use atomic_refresh => TRUE.

In 10g things work properly.
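A sketch of the workaround, using the EMP_MV from Tom's example: with atomic_refresh => true (the default) the refresh runs as a single transaction in the calling session, and no job queue process is needed:

```sql
begin
   -- complete refresh in one transaction; no jobs submitted
   dbms_mview.refresh( 'EMP_MV', method => 'C', atomic_refresh => true );
end;
/
```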

Materialized View using external table

NR, November 11, 2005 - 4:04 pm UTC

Tom,

I tried creating a materialized view on an external table and it was successful, but when I tried to use refresh on commit, it failed with an ORA-12054 error.

Question: Is it possible to create a materialized view with refresh on commit on external tables? If yes, an example would be very helpful; if not, why not?

Here is the sample code
/* Formatted on 2005/11/11 15:45 (Formatter Plus v4.8.6) */
CREATE MATERIALIZED VIEW klondike_dss.billing_mv
TABLESPACE billing_data
NOCACHE
NOLOGGING
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
REFRESH COMPLETE ON COMMIT
WITH ROWID
ENABLE QUERY REWRITE
AS
SELECT TRUNC (hist.rel_date) AS summary_date, 0 AS is_ldc,
TRUNC (hist.iam_date) AS call_date, hist.call_type_code,
hist.org_id_opc, hist.org_id_dpc, hist.ssp_id_opc, hist.ssp_id_dpc,
hist.trunk_group_id,
SUM (DECODE (hist.call_type_code, '6', hist.mou, 0)) AS tot_unknown,
COUNT (*) AS tot_calls, SUM (hist.mou) AS tot_minutes,
SUM ( hist.mou
* (DECODE (hist.orig_call_rate,
NULL, DECODE (hist.term_call_rate,
NULL, 0,
hist.term_call_rate
),
hist.orig_call_rate
)
)
) AS tot_amount

FROM billing hist -- external table
WHERE hist.report_ind = 0
GROUP BY TRUNC (hist.rel_date),
TRUNC (hist.iam_date),
hist.org_id_opc,
hist.org_id_dpc,
hist.ssp_id_opc,
hist.ssp_id_dpc,
hist.trunk_group_id,
hist.call_type_code
/




Tom Kyte
November 12, 2005 - 10:44 am UTC

it makes no sense at all to have a refresh on commit MV on an external table - none.

You cannot modify an external table, what is the point?
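Since the database never sees DML against an external table, the usual pattern is an on-demand complete refresh after the underlying files change. A sketch using the billing_mv from the question:

```sql
-- run after the external file(s) behind BILLING have been replaced
exec dbms_mview.refresh( 'BILLING_MV', method => 'C' )
```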

Trigger on a snapshot or materialized view

Kanchan, November 23, 2005 - 1:23 am UTC

Hi Tom,
I have been trying to find the answer to a question, but could not really find it anywhere, hence I need to ask it here.

We are trying to separate an application consisting of an Oracle 8i database and a C++ application into two logically separate parts. Hence the design consists of having another database and C++ application, based on the current one, on another server catering to different kinds of requests.

Our problem is that a particular table's data needs to be shared by both databases, and not only that, both C++ applications need to be alerted on any change to this data.
This is currently done using DBMS_ALERT.SIGNAL on the original database.
We are thinking of various options like
1)Having a view on this table from the new remote database and registering the new C++ application as well to wait for the relevant alert
2) Having a snapshot/materialized view and adding a trigger on this data to raise the same alert in the new database

Another consideration here is that we need to make the two databases as independent of each other as possible.

Could you please recommend one of the approaches above, or any other approach we can follow for implementing this?

Thanks for all your help, it has been very useful when we were stuck in the past.

Thanks and Regards,
Kanchan,
Pune.

Tom Kyte
November 23, 2005 - 9:38 am UTC

(you do know that dbms_alert is

a) SERIAL
b) "lossy" like Unix signal() is....
c) SERIAL (ugh)


you need to make these two databases as co-dependent as possible in my opinion - all of the way to CONSOLIDATING them into one.

They are co-dependent! by definition. in reality - right now. Everything you do will make them even more co-dependent (the failure of one impacts the other). I truly believe your right answer is "consolidation".

Excellent

Avishay, November 30, 2005 - 4:47 am UTC


Fast Refresh Clarification

Suzanne, December 12, 2005 - 9:01 pm UTC

Why is this materialized view not able to do a REFRESH FAST ON COMMIT?

CREATE MATERIALIZED VIEW LOG ON CD_RMIS_LIMITS
WITH ROWID,SEQUENCE (COVERAGE_TYPE,LIMITAMOUNT,LIMITDESCRIPTION,RMIS_ID)
INCLUDING NEW VALUES
/

CREATE MATERIALIZED VIEW ORACLE.CD_RMIS_CARGO_LIMIT_MAX_VIEW
BUILD IMMEDIATE
REFRESH FAST
ON COMMIT
AS select rmis_id, max(limitamount) as cargo_limit
from cd_rmis_limits
where coverage_type='CARGO'
group by rmis_id

/

I get a -
Following on-commit snapshots not refreshed :
ORACLE.CD_RMIS_CARGO_LIMIT_MAX_VIEW
in my alert log.


Tom Kyte
December 13, 2005 - 8:38 am UTC

well it is not a complete example, however the data warehousing guide does say....


... Note that COUNT(*) must always be present to guarantee all types of fast refresh. Otherwise, you may be limited to fast refresh after inserts only. Oracle recommends .....


so, perhaps that is it, only guessing since we don't have a cradle to grave example with the base table, some modifications, you know - a complete example :)

Refresh Fast on Commit

Suzanne, December 12, 2005 - 11:46 pm UTC

I forgot to include that this is Oracle 9.2.0.6 and my developers are trying to keep this view in sync with updates on the base table for use on a web page.

Tom Kyte
December 13, 2005 - 9:05 am UTC

you know what will be faster, if you need the max limitamount for a given rmis_id?

create index t_idx on t(rmis_id, limitamount);



....
ops$tkyte@ORA9IR2> create table t
  2  as
  3  select mod(rownum,5000) rmis_id, object_id limitamount, a.*
  4    from all_objects a;

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create index t_idx on t(rmis_id,limitamount);

Index created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', cascade=>true );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select max(limitamount)
  2    from t
  3   where rmis_id = 42;

MAX(LIMITAMOUNT)
----------------
           32441

ops$tkyte@ORA9IR2> set autotrace on
ops$tkyte@ORA9IR2> select max(limitamount)
  2    from t
  3   where rmis_id = 42;

MAX(LIMITAMOUNT)
----------------
           32441


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=9)
   1    0   SORT (AGGREGATE)
   2    1     FIRST ROW (Cost=2 Card=6 Bytes=54)
   3    2       INDEX (RANGE SCAN (MIN/MAX)) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=5119)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
        389  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte@ORA9IR2> set autotrace off;


That MV would only be useful if you frequently needed ALL max(limitamounts) for EVERY rmis_id.


If you need the max(limitamount) given a rmis_id, just index. 

Refresh Fast on Commit

Suzanne, December 13, 2005 - 8:17 pm UTC

Thank you for your response. I believe that my developers are using the view to display all of the data from the base table that is related to the transportation type of Cargo. This is a web screen that tells truckers what various weight limits are.

I'm sorry I didn't include the DDL for the table. It is very simple.

CREATE TABLE CD_RMIS_LIMITS
(RMIS_ID NUMBER NOT NULL,
COVERAGE_TYPE VARCHAR2(90 byte) NOT NULL,
LIMITDESCRIPTION VARCHAR2(90 byte) NOT NULL,
LIMITAMOUNT NUMBER NOT NULL,
INSERTED DATE NOT NULL,
CONSTRAINT PK_CD_RMIS_LIMITS PRIMARY KEY(RMIS_ID,
COVERAGE_TYPE, LIMITDESCRIPTION) )

I believe your observation about the missing count(*) is the answer I needed. I've tried to find a straight answer about what to include in a materialized view to do a refresh fast on commit with an aggregate, and I've found various confusing answers. I'll see if the count(*) works. Thank you.

Tom Kyte
December 14, 2005 - 7:53 am UTC

Refresh Fast On Commit

Suzanne, December 16, 2005 - 6:35 pm UTC

Ok, I set up a test scenario and my new materialized view with counts still doesn't work with an update.  Here is my new materialized view -
CREATE MATERIALIZED VIEW ORACLE.CD_RMIS_CARGO_LIMIT_MAX_VIEW2
BUILD IMMEDIATE
REFRESH FAST
ON COMMIT
AS select r.rmis_id,
          count(*) as total_cnt,
          max(r.limitamount) as cargo_limit,
          count(r.limitamount) as cargo_cnt
   from cd_rmis_limits r
   where r.coverage_type='CARGO'
   group by r.rmis_id
/
SQL> select * from CD_RMIS_CARGO_LIMIT_MAX_VIEW2 where rmis_id in (4903, 4882, 3367);

   RMIS_ID  TOTAL_CNT CARGO_LIMIT  CARGO_CNT
---------- ---------- ----------- ----------
      3367          4      500000          4
      4882          3      500000          3
      4903          2     1500000          2

SQL> update cd_rmis_limits set limitamount = 700000 where rmis_id = 4882 and limitdescription = 'PER
 VEHICLE' and coverage_type = 'CARGO'; 

1 row updated.

SQL> select * from CD_RMIS_CARGO_LIMIT_MAX_VIEW2 where rmis_id in (4903, 4882, 3367);

   RMIS_ID  TOTAL_CNT CARGO_LIMIT  CARGO_CNT
---------- ---------- ----------- ----------
      3367          4      500000          4
      4882          3      500000          3
      4903          2     1500000          2

SQL> select rmis_id, limitamount from cd_rmis_limits where rmis_id in (4903, 4882, 3367);

   RMIS_ID LIMITAMOUNT
---------- -----------
      3367      300000
      3367      500000
      3367      500000
      3367      200000
      4882      700000
      4882      500000
      4882      500000
      4903     1000000
      4903     1500000

And I did do a commit.  What have I missed?

Tom Kyte
December 17, 2005 - 11:11 am UTC

it is your predicate that makes it non-fast-refreshable in this case


CREATE MATERIALIZED VIEW LOG ON CD_RMIS_LIMITS
WITH ROWID,SEQUENCE (COVERAGE_TYPE,LIMITAMOUNT,RMIS_ID)
INCLUDING NEW VALUES
/


CREATE MATERIALIZED VIEW CD_RMIS_CARGO_LIMIT_MAX_VIEW2
BUILD IMMEDIATE
REFRESH FAST
ON COMMIT
AS select r.rmis_id, coverage_type,
count(*) as total_cnt,
max(r.limitamount) as cargo_limit,
count(r.limitamount) as cargo_cnt
from cd_rmis_limits r
group by r.rmis_id, coverage_type
/

that'll work.

ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view

Rahul Dutta, December 19, 2005 - 2:26 pm UTC

Hi Tom,

I am using Oracle 9i (9.2.0.6.0) on HP-UX 11.00.
Context
=======
I have one master table and one detail table which get populated once a month by batch jobs. The detail table contains different types of data corresponding to each master record, which need to be displayed by the application in different formats.
On these two tables, I have created four materialized views with the following options
CREATE MATERIALIZED VIEW XYZ
BUILD IMMEDIATE
REFRESH FORCE
ON COMMIT
AS select ......
from master, detail
group by ...
Whenever I create the four MVs, the fourth one gives me the following error:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
Please let me know: is there some limitation on creating MVs, or am I missing some important point?

Thanks a lot for your help as always..

Thanks and Regards

Rahul

Tom Kyte
December 19, 2005 - 3:35 pm UTC

way way insufficient data to say *anything*

have you read the data warehousing guide and the rules for creating on commit refresh materialized views?

Materialized View | Clarification

Rahul Dutta, December 29, 2005 - 5:34 am UTC

Hi Tom,

We are having a Master Table (FUND) and a detail table (FUND_PERF). The structure of the FUND_PERF table is as follows:
Name Null? Type
-------------------- -------- ------------
FUND_ID NOT NULL NUMBER(10)
PERIOD_TYPE_ID NOT NULL NUMBER(7)
AS_OF_DATE NOT NULL DATE
PERF_VALUE NUMBER(23,16)
CURRENCY_TYPE_ID NOT NULL NUMBER(7)

Due to some business requirement, I have created the following MV:
CREATE MATERIALIZED VIEW MV_FUND_PERF_Stand
TABLESPACE VJ_PRICE_PERF
BUILD IMMEDIATE
REFRESH FORCE
ON COMMIT
AS
select
a.fund_id, b.name,
SUM(DECODE(period_type_id,34,perf_value,NULL)) YEAR_TO_DATE,
MAX(DECODE(period_type_id,34,as_of_date,NULL)) YTD_AS_OF_DATE,
SUM(DECODE(period_type_id,35,perf_value,NULL)) CURRENT_MONTH,
MAX(DECODE(period_type_id,35,as_of_date,NULL)) MONTH_AS_OF_DATE,
SUM(DECODE(period_type_id,36,perf_value,NULL)) TRAILING_THREE_MONTHS,
MAX(DECODE(period_type_id,36,as_of_date,NULL)) THREE_MONTH_AS_OF_DATE,
SUM(DECODE(period_type_id,37,perf_value,NULL)) CALENDER_QUARTER,
MAX(DECODE(period_type_id,37,as_of_date,NULL)) QUARTER_AS_OF_DATE,
SUM(DECODE(period_type_id,39,perf_value,NULL)) QTR_ONE_YEAR,
MAX(DECODE(period_type_id,39,as_of_date,NULL)) QTR_AS_OF_DATE,
SUM(DECODE(period_type_id,40,perf_value,NULL)) QTR_THREE_YEAR,
SUM(DECODE(period_type_id,41,perf_value,NULL)) QTR_FIVE_YEAR,
SUM(DECODE(period_type_id,42,perf_value,NULL)) QTR_TEN_YEAR,
SUM(DECODE(period_type_id,43,perf_value,NULL)) QTR_SINCE_INCEPTION,
MAX(DECODE(period_type_id,520,as_of_date,NULL)) ME_AS_OF_DATE,
SUM(DECODE(period_type_id,520,perf_value,NULL)) ME_ONE_YEAR,
SUM(DECODE(period_type_id,521,perf_value,NULL)) ME_THREE_YEAR,
SUM(DECODE(period_type_id,522,perf_value,NULL)) ME_FIVE_YEAR,
SUM(DECODE(period_type_id,523,perf_value,NULL)) ME_TEN_YEAR,
SUM(DECODE(period_type_id,524,perf_value,NULL)) ME_SINCE_INCEPTION
from fund_perf a, fund b
where a.fund_id=b.fund_id
group by a.fund_id,b.name;

Initially I created it with the refresh option "FAST" and created MV logs for both tables. But whenever any insert was done on the detail table, the MV was not refreshing. So I changed the refresh option to FORCE and removed the MV logs, and it was refreshing immediately.

Please help me understand the reason behind that. Is my approach, without MV logs and with the FORCE refresh option, OK?

Thanks a lot

Rahul

Tom Kyte
December 29, 2005 - 11:58 am UTC

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96520/mv.htm#42223

... "Note that COUNT(*) must always be present."....  as well.

It could look something like this:


ops$tkyte@ORA10GR2> create table t ( x int, y int, z int, val int );

Table created.

ops$tkyte@ORA10GR2> create materialized view log on t
  2  with sequence, rowid
  3  ( x, y, z, val )
  4  including new values;

Materialized view log created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create materialized view mv
  2  build immediate
  3  refresh fast
  4  on commit
  5  as
  6  select x, y,
  7         sum( decode( z, 1, val ) ) z_1_sum,
  8         max( decode( z, 1, val ) ) z_1_max,
  9         sum( decode( z, 2, val ) ) z_2_sum,
 10         max( decode( z, 2, val ) ) z_2_max,
 11             count(*),
 12             count(  decode( z, 1, val ) ) count_z1,
 13             count(  decode( z, 2, val ) ) count_z2
 14    from t
 15   group by x, y
 16  /

Materialized view created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> set feedback off
ops$tkyte@ORA10GR2> insert into t values ( 1, 1, 1, 1 );
ops$tkyte@ORA10GR2> insert into t values ( 1, 1, 1, 2 );
ops$tkyte@ORA10GR2> insert into t values ( 1, 1, 1, 3 );
ops$tkyte@ORA10GR2> insert into t values ( 1, 1, 2, 3 );
ops$tkyte@ORA10GR2> insert into t values ( 1, 1, 2, 3 );
ops$tkyte@ORA10GR2> insert into t values ( 2, 1, 1, 10 );
ops$tkyte@ORA10GR2> insert into t values ( 2, 1, 1, 20 );
ops$tkyte@ORA10GR2> insert into t values ( 2, 1, 1, 30 );
ops$tkyte@ORA10GR2> insert into t values ( 2, 1, 2, 30 );
ops$tkyte@ORA10GR2> set feedback on
ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2> select * from mv;

   X    Y Z_1_SUM Z_1_MAX Z_2_SUM Z_2_MAX COUNT(*) COUNT_Z1 COUNT_Z2
---- ---- ------- ------- ------- ------- -------- -------- --------
   2    1      60      30      30      30        4        3        1
   1    1       6       3       6       3        5        3        2

2 rows selected.

ops$tkyte@ORA10GR2> insert into t values ( 1, 1, 2, 1 );

1 row created.

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2> select * from mv;

   X    Y Z_1_SUM Z_1_MAX Z_2_SUM Z_2_MAX COUNT(*) COUNT_Z1 COUNT_Z2
---- ---- ------- ------- ------- ------- -------- -------- --------
   2    1      60      30      30      30        4        3        1
   1    1       6       3       7       3        6        3        3

2 rows selected.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> update t set val = 100;

10 rows updated.

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2> select * from mv;

   X    Y Z_1_SUM Z_1_MAX Z_2_SUM Z_2_MAX COUNT(*) COUNT_Z1 COUNT_Z2
---- ---- ------- ------- ------- ------- -------- -------- --------
   2    1     300     100     100     100        4        3        1
   1    1     300     100     300     100        6        3        3

2 rows selected.


 

Materialized view refresh

A reader, January 10, 2006 - 6:34 am UTC

Hi Tom, we have a materialized view with an aggregate function and a UNION in it. Would you please recommend the type of refresh for it?
The base table has 128 million rows and we are using GROUP BY and UNION while creating the materialized view, so please advise us on this.

Thanks in advance

Tom Kyte
January 10, 2006 - 8:26 am UTC

aggregate plus union (not union all) gives you one choice.

did you really mean to use UNION ALL?
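For what it is worth, a fast refreshable UNION ALL materialized view needs a distinct constant "marker" column and the rowid in each branch, and each branch must be fast refreshable on its own. A minimal sketch with hypothetical tables t1 and t2, each having an MV log created WITH ROWID:

```sql
create materialized view mv_union_all
refresh fast on demand
as
select 1 as marker, t1.rowid rid, t1.x
  from t1
union all
select 2 as marker, t2.rowid rid, t2.x
  from t2;
```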

Completely Refreshed or Incrementally Refreshed?

Muhammad Waseem Haroon, January 21, 2006 - 3:41 am UTC

Hi Tom,

I have created a Materialized view with financial year date ranges.

CREATE MATERIALIZED VIEW "FINANCE"."DPR_MV"
PCTFREE 10
PCTUSED 0
MAXTRANS 255
STORAGE (
INITIAL 72K NEXT 0K MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0
) NOLOGGING
TABLESPACE "FINANCE_ERP"
BUILD IMMEDIATE
USING INDEX TABLESPACE "INDX" PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE ( INITIAL 72K NEXT 0K MINEXTENTS 1 MAXEXTENTS 2147483645 )
REFRESH FORCE
ON COMMIT
AS
select fa_code,
Sum(Depr_Amnt) depr_amnt,
add_months(Round(dated,'YEAR'),-6) FY_FROM,
add_months(Round(dated,'YEAR'),6)-1 FY_TO
from fad
WHERE flag = 'DPR'
GROUP BY fa_code ,
add_months(Round(dated,'YEAR'),-6) ,
add_months(Round(dated,'YEAR'),6)-1;


SELECT *
FROM dpr_mv;

FA_CODE DEPR_AMNT FY_FROM FY_TO
--------- --------- --------- ---------
1 7985 01-JUL-05 30-JUN-06
2 31649 01-JUL-05 30-JUN-06
3 73997 01-JUL-05 30-JUN-06
4 3195 01-JUL-05 30-JUN-06
5 13585 01-JUL-05 30-JUN-06
...
..
.

214 rows selected.


1) Will it be Incremental Refresh or Complete Refresh?
2) How can I find out that the Materialized View has been Completely Refreshed or Incrementally Refreshed?

Thanks in Advance
Muhammad Waseem Haroon
waseem.haroon@hotmail.com


dbms_mview.refresh

Parag J Patankar, January 24, 2006 - 8:47 am UTC

Hi Tom,

I had observed that dbms_mview has two similar procedures, dbms_mview.refresh_all and dbms_mview.refresh_all_mviews. What is the difference between them?
PROCEDURE REFRESH_ALL
PROCEDURE REFRESH_ALL_MVIEWS
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NUMBER_OF_FAILURES             BINARY_INTEGER          OUT
 METHOD                         VARCHAR2                IN     DEFAULT
 ROLLBACK_SEG                   VARCHAR2                IN     DEFAULT
 REFRESH_AFTER_ERRORS           BOOLEAN                 IN     DEFAULT
 ATOMIC_REFRESH                 BOOLEAN                 IN     DEFAULT
PROCEDURE REFRESH_DEPENDENT
 Argument Name                  Type                    In/Out Default?

I have created mv as follows

create materialized view tp258_mv
build immediate
refresh complete
with rowid
as
select * from tp258@prodin01std
/

when I do 

18:06:45 SQL> exec dbms_mview.refresh_all;

PL/SQL procedure successfully completed.

But the materialized view does not get refreshed. This view does get refreshed by refresh_all_mviews, though.

Kindly guide me

regards & thanks
pjp 

Tom Kyte
January 24, 2006 - 8:40 pm UTC

the refresh_all procedure is not externally documented in the documentation, but if you

select text
from dba_source
where name = 'DBMS_SNAPSHOT' -- dbms_mview is a synonym to this...
and type = 'PACKAGE'
order by line;


and spool that out, you can read about each routine in turn...

Cannot refresh MV

Avishay, February 21, 2006 - 7:32 am UTC

Hi Tom,

I have an MV that is set :
CREATE MATERIALIZED VIEW MV_DIM_GEOG
REFRESH COMPLETE ON COMMIT
AS
SELECT D.GEOG_ID, D.GEOG_ID_DESC, D.SUB_SITE_ID FROM DW.DW_DIM_GEOG D

The MV is set on a different schema then the master table(GRANT SELECT)

For some reason new data was inserted into that table, but that data is not updated into the MV.

Also when I try to execute :
exec Dbms_Mview.refresh_mv('MV_DIM_GEOG');
I get the following error:
ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'REFRESH_MV'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

What am I doing wrong ??? :(

Tom Kyte
February 21, 2006 - 7:56 am UTC

PROCEDURE REFRESH_MV
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
PIPENAME VARCHAR2 IN
MV_INDEX BINARY_INTEGER IN
OWNER VARCHAR2 IN
NAME VARCHAR2 IN
METHOD VARCHAR2 IN
ROLLSEG VARCHAR2 IN
ATOMIC_REFRESH BINARY_INTEGER IN
ENV BINARY_INTEGER IN

you are sending the wrong number and types of arguments to that procedure.

That is likely NOT the procedure you mean to be calling.


I think you mean to not have the _MV on the end and want to just call "REFRESH", not "REFRESH_MV"
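For reference, a minimal sketch of the documented call, using the MV name from the question; 'C' requests a complete refresh, 'F' a fast refresh:

```sql
-- a sketch only; MV_DIM_GEOG is the materialized view from the question above
begin
    dbms_mview.refresh( list => 'MV_DIM_GEOG', method => 'C' );
end;
/
```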

Excellent

Avishay, February 23, 2006 - 3:46 am UTC

Thanks,
that helped

Refresh on MVs

A reader, March 13, 2006 - 5:50 am UTC

Hi Tom,
We have materialized view replication on Oracle 8i Rel 2 at two sites.
Site A and B on two different users User_A and User_B.

As the users were supposed to run in parallel, we implemented MVs. Therefore, Site A is both the Master Site and the Client for Site B; the same applies to Site B.

Site A has a DB link a_to_b which connects to B as User_b on database B
and
Site B has a DB link b_to_a which connects to A as User_a on database A .

I have created Refresh Groups on both the sites by the same name called Ref_Grp.

Now the requirement is such:
When a user, say User_A at Site A, connects to the database, he can refresh the local mview group by executing
exec DBMS_refresh.refresh('Ref_Grp').
But we also want to refresh the mviews at site B.
To achieve the same I executed the following:
exec DBMS_refresh.refresh@b_to_a('Ref_Grp')
It gives the error ORA-02041.

Kindly suggest a way in which this can be achieved.
Do I have to write a procedure to achieve the same?

Thanks as always



Tom Kyte
March 13, 2006 - 10:23 am UTC

you are going to try to run a remote stored procedure that will issue a "commit"; that is not a supported sequence of events (the original session, your local session, is the only one that should commit or rollback)


I would suggest the remote site use DBMS_JOB to schedule a refresh on the remote site. You would call dbms_job@remote to submit a job to do the refresh, and shortly after you commit - the job would execute.

Sorry....

A reader, March 13, 2006 - 5:52 am UTC

Sorry Tom, it is Oracle 8i Rel 3 and not Rel 2 as stated above.


Refresh on Materialized View

A reader, March 16, 2006 - 11:17 am UTC

Hi Tom,
I've got a slightly strange problem. I need to physicalize more than 50 views into tables. These views get their data from MVs. Space being a secondary concern, I am a bit worried that:
1. Refreshing those tables every night, with millions of rows, will hurt performance badly.

Could there be a better way of doing this, like building MVs on MVs?

Rgds.

Tom Kyte
March 16, 2006 - 2:44 pm UTC

you can create multi-level materialized views, yes.

whether that is faster than a full refresh - that remains to be seen (sometimes a complete refresh is many times faster than a "fast" refresh, which I wish they would have named "incremental refresh" instead).
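A minimal sketch of a nested (multi-level) materialized view; it assumes a detail-level MV named sales_mv already exists, and all names and columns here are hypothetical:

```sql
-- to fast refresh an MV built on another MV, the lower-level MV
-- needs its own materialized view log
create materialized view log on sales_mv
    with rowid, sequence (cust_id, amount)
    including new values;

create materialized view sales_summary_mv
    build immediate
    refresh fast on demand
    as
    select cust_id,
           sum(amount)   total_amount,
           count(amount) cnt_amount,   -- counts needed for fast refresh
           count(*)      cnt
    from   sales_mv
    group by cust_id;
```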



How about a procedure......

A reader, April 07, 2006 - 3:31 am UTC

Hi Tom,

I have 2 dbs both on Windows and Oracle 8i Rel 3.

Materialized View Replication has been implemented at both sites, on 14 tables at each master site.

I have incorporated the mviews into one refresh group.

I have created a procedure at both the sites called Ref_Grp as follows:

Create or replace procedure Ref_Grp as
begin
execute immediate 'execute dbms_refresh.refresh('||'''||Refresh_Group_Name||'''||')';
ed ref_grp;

I want to write the following code:

SQL>exec dbms_refresh.refresh('Refresh_Group_Name'); --Refreshes the local Refresh Group

SQL> exec ref_grp@remote_site; --refreshes the remote Refresh Group

My question is:
Is this a feasible approach? Or should I modify the code to make its execution even better?

Thanks as always
 

Tom Kyte
April 08, 2006 - 8:45 am UTC

hmm, one wonders at the syntax of that procedure. Might you have MEANT to say:


create or replace procedure ref_grp(p_refresh_Group_Name in varchar2 )
as
begin
dbms_refresh.refresh( p_Refresh_Group_Name );
end;
/


Your syntax won't actually "fly".

You would have to use dbms_jobs on the remote site to schedule a job to happen on the remote site shortly after you commit. the refresh stuff tries to do transaction control statements that in general won't work over a database link.


So, you would schedule the refresh to happen using jobs (after you commit) OR you would just log into the remote site directly and do this (consider this to be DDL - it is in fact sort of considered DDL)

Thanks for the reply..

A reader, April 10, 2006 - 2:27 am UTC

Hi Tom,
Thank you for the reply.
As a matter of fact I had changed the above procedure in the database as you mentioned.
The one in my question was written by another person.
It was saved on this computer, so I just pasted it without any changes ;)
To be frank, I do not know how to implement your idea about submitting a job.
I will be really grateful if you could guide me on that.
Thanks as always



Tom Kyte
April 10, 2006 - 5:41 am UTC

declare
    l_job binary_integer;
begin
    dbms_job.submit@remote_site
    ( l_job, 'begin dbms_refresh.refresh( ''' || p_Refresh_Group_Name || ''' ); end;' );
    commit;
end;

that'll schedule a job to run in the background on the remote site that does the refresh. assuming job queues are configured on the remote site - it'll run shortly after you commit.


Exists clause in materilize view.

Sriram Varadharajan, April 10, 2006 - 10:45 am UTC

Dear Tom,

I am trying to create a materialized view using an EXISTS clause in the following query. I am getting this error:

ORA-12015: cannot create a fast refresh materialized view from a complex query

1)
CREATE MATERIALIZED VIEW check_status
PCTFREE 0
STORAGE (INITIAL 8k NEXT 8k PCTINCREASE 0)
BUILD IMMEDIATE
REFRESH FAST
start with sysdate
next sysdate + 1/1440
ENABLE QUERY REWRITE
as select count(*) from
tl a
where
A.STATUS_CODE = 77
and exists (
select 1
from
T2 B
where
a.city_code = b.city_code
and a.country_code = b.country_code
AND a.sbatch_no = b.sbatch_no
AND a.batch_ref = b.batch_ref
AND a.cust_id = b.cust_id
AND (B.EXCEPTION_CLOSED !='Y' OR B.EXCEPTION_CLOSED IS NULL)
AND B.STATUS_CODE = 53
AND B.COUNTRY_CODE='HK'
AND B.CITY_CODE='HKG' )



When I rewrite the same query the following way, the materialized view is created, but the cost of the query is higher than the 1st query.

2)
CREATE MATERIALIZED VIEW check_status
PCTFREE 0
STORAGE (INITIAL 8k NEXT 8k PCTINCREASE 0)
BUILD IMMEDIATE
REFRESH FAST
start with sysdate
next sysdate + 1/1440
ENABLE QUERY REWRITE as
select count(1) from tl a,T2 B
where a.status_code=77 and b.status_code=53
and a.country_code = b.country_code
-- and a.city_code=b.city_code
AND a.sbatch_no = b.sbatch_no
AND a.batch_ref = b.batch_ref
AND a.cust_id = b.cust_id
AND (A.EXCEPTION_CLOSED !='Y' OR B.EXCEPTION_CLOSED IS NULL)


please advise me, what is wrong with my materialized view?



Tom Kyte
April 11, 2006 - 10:19 am UTC

so what if the cost is higher?

Exists clause

Sriram Varadharajan, April 17, 2006 - 12:36 am UTC

Tom,

Actually I need to create a fast refresh MV, refreshing every minute. I have tuned the 1st query and its cost is very low compared to the 2nd query. I want to convert the 1st query to a materialized view, but I couldn't create the MV using the following query. Please advise.



ORA-12015: cannot create a fast refresh materialized view from a complex query

1)
CREATE MATERIALIZED VIEW check_status
PCTFREE 0
STORAGE (INITIAL 8k NEXT 8k PCTINCREASE 0)
BUILD IMMEDIATE
REFRESH FAST
start with sysdate
next sysdate + 1/1440
ENABLE QUERY REWRITE
as select count(*) from
tl a
where
A.STATUS_CODE = 77
and exists (
select /*+ NO_UNNEST INDEX(B IND_PYMT_STATUS_SBATCH) */ 1
from
T2 B
where
a.city_code = b.city_code
and a.country_code = b.country_code
AND a.sbatch_no = b.sbatch_no
AND a.batch_ref = b.batch_ref
AND a.cust_id = b.cust_id
AND (B.EXCEPTION_CLOSED !='Y' OR B.EXCEPTION_CLOSED IS NULL)
AND B.STATUS_CODE = 53
AND B.COUNTRY_CODE='HK'
AND B.CITY_CODE='HKG' )




Tom Kyte
April 17, 2006 - 7:53 am UTC

use dbms_mview.explain_mview and see what it says, since you haven't given me the 100% complete - yet tiny and concise - test case.

What I mean by that is - when I run that create statement - it fails.


(i'd lose the hint, no idea why you think it would be needed, it likely isn't part of the problem, but just a bad idea)
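For what it's worth, a sketch of the DBMS_MVIEW.EXPLAIN_MVIEW approach; it assumes MV_CAPABILITIES_TABLE has been created in your schema by running the utlxmv.sql script, and uses the MV name from the question:

```sql
-- populate MV_CAPABILITIES_TABLE with this MV's capabilities
exec dbms_mview.explain_mview( 'CHECK_STATUS' );

-- MSGTXT explains why each fast-refresh capability is or is not possible
select capability_name, possible, msgtxt
from   mv_capabilities_table
where  capability_name like 'REFRESH_FAST%';
```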

Heavy purging on base table and complete refresh of Materialized view

VLS, May 10, 2006 - 7:14 am UTC

Hi Tom,

Greetings !

I have a 8174 database in which I have around 40 tables. These 40 tables are required on our CRM application
databases which are 9207. The architecture is as under :

8174
----
Table : X
snapshot log on X

9205
----
Table x_mum with one additional column
Materialized view x_mum on prebuilt table as select * from X@8174;

We have data pertaining to last 5 years in table X and hence would like to purge the data so that only one year data is retained. For this, the procedure we would be using is

1. Shutdown and restart the db.
2. Fast Refresh of the materialized view X_MUM. Now the table count of X and MVIEW X_MUM is same.
3. CTAS to create a new table x_req with required data
4. drop Snapshot log and the mview x_mum.
5. Rename x to x_old and x_req to X
6. Create all the required indexes and constraints on new X
7. create the snapshot log

Now the question here is how do I ensure that my Mview is refreshed fast enough to reduce the amount of downtime.
Should we

OPTION 1
--------
1. create the prebuilt table x_mum (9205) using CTAS (select * from x@8174) and
2. create mview x_mum on prebuilt table as select * from X@8174;
or

OPTION 2
--------
1. create the structure of prebuilt table x_mum (9205)
2. create mview x_mum on prebuilt table as select * from X@8174;
3. Do a Complete refresh.

A faster approach, I thought would be to use 3rd option which would be

OPTION 3 (Entire steps)
------------------------
1. Shutdown and restart the db.
2. Fast Refresh of the materialized view X_MUM. Now the table count of X and MVIEW X_MUM is same.
3. drop the snapshot log and the materialized view
4. CTAS to create a new table x_req with required data 8174 and CTAS x_mum_req with required data from x_mum (prebuilt table)
5. Rename x to x_old and x_req to X, x_mum to x_mum_old and x_mum_req to x_mum
6. Create all the required indexes and constraints
7. create the snapshot log
8. create mview x_mum on prebuilt table as select * from X@8174;

Is there any other approach to accomplish this ? Need your assistance on the above so that the entire task can be completed smoothly with less downtime.

Thanks and Regards
VLS

Tom Kyte
May 10, 2006 - 9:18 am UTC

if you are considering option 1 - why not just recreate the materialized view in the first place?

In continuation to my previous posting...

VLS, May 10, 2006 - 10:53 am UTC

Hi Tom,

Thanks for your reply.

My mviews are on prebuilt tables, and after the purge on the base table, the base table will contain only 40% of the rows, whereas the prebuilt table (the mview) will still have all the records.

The Scenario will be :
TABLE X 100% rows
CTAS X_REQ will have 40% i.e.60% rows will be eliminated.

After the purge on the base table, I need only those 40% rows in the materialized view that are required, hence I have to drop the mview and the prebuilt table as well. so, the question here was :

Option 1
--------
1. drop and recreate the prebuilt table using CTAS
2. create the mview on this prebuilt table

or

Option 2
--------
1. drop and recreate the structure of the prebuilt table.
2. create the mview on this prebuilt table
3. do a complete refresh of the mview

or option 3
-----------
1. Purge the records from the prebuilt table using CTAS..i.e creating a new table with only 40% rows required
2. rename the tables
3. create a materialized view on this new prebuilt table.

Thanks and Regards
VLS

Tom Kyte
May 11, 2006 - 7:30 am UTC

If I was to remove some 60% of the data from an mview, I would likely much rather recreate the mview.

so, option 4 pops into mind:

drop and create the mview.

Unless you have a real reason to do it on a prebuilt table, then

create new prebuilt table
drop old mview
create new mview on this prebuilt table
drop old prebuilt table
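The steps above might look like the following sketch; object names follow the question, while the db link name prod_8174 (standing in for the 8.1.7.4 source) and the exact one-year retention predicate are assumptions:

```sql
-- 1. build the new prebuilt table holding only the rows to keep
create table x_mum_new as
    select * from x@prod_8174
    where  dated >= add_months(sysdate, -12);

-- 2. drop the old mview; its prebuilt table survives as a plain
--    table, so drop that too, then swap the names
drop materialized view x_mum;
drop table x_mum;
alter table x_mum_new rename to x_mum;

-- 3. layer the new mview over the new prebuilt table
create materialized view x_mum
    on prebuilt table
    refresh fast on demand
    as select * from x@prod_8174;
```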

Too many dimensions to aggregate

AVS, May 22, 2006 - 6:08 pm UTC

Tom,

We have implemented a data warehouse for a specific set of requirements. Since it performed well, all the similar applications want to merge their data into the DW we built. Now my problem starts. One of the business users requires pre-calculated aggregations on measures starting from the previous year's start date, e.g. PYTD (Previous Year To Date), PY (Previous Year), Yesterday, etc. Some back-dated records get updated, and hence calculations like PY change. But he wants these results across 52 dimensions (none of which has a NULL value). So if I build the cube on these, the data generated is larger than the actual data. Since none of the dimensions is null, I did a sum on all the measures (facts) grouped by all the dimensions, but the row count has only come down to 35 million from 50 million. Can you please suggest something that would help me build fast pre-calculated tables? I have the liberty to make data model changes, or any other change you may suggest.

Cheers,
AVS.

Tom Kyte
May 23, 2006 - 6:55 am UTC

well, since you are duplicating data (that is what it sounds like), you would expect it to be larger, much like adding "an index" makes the storage used by some data increase.

Question is - after you do this for them - do their queries run significantly *faster*



Too many dimensions to aggregate

AVS, May 23, 2006 - 1:49 pm UTC

Tom,

Well, one important person I've not mentioned is that the pre-calculated Previous Year and Yesterday are altogether different tables: PY has its own table and Yesterday has its own table. So if you sum only one day's data (yesterday), it has far fewer records than PYTD; hence queries executed against it are faster.

Regards,
AVS

Tom Kyte
May 24, 2006 - 6:35 am UTC

so? I don't understand the relation of this comment with the prior?

Too many dimensions to aggregate

AVS, May 23, 2006 - 1:51 pm UTC

Sorry, in the above statement please ignore my ignorance; PERSON should be replaced with THING.

How to get only changes using materialized views for replication

sujit, June 23, 2006 - 11:29 am UTC

Tom,

My requirement is to replicate a remote db table_b with my local db table_a. Here is what I gathered should happen from the discussions on your site. Please correct me if I am not understanding it right.

On master site b:

drop materialized view log on table_b

create materialized view log on table_b
tablespace smc_data
with primary key;


On local site a:

drop materialized view mv_table_b

create materialized view mv_table_b
refresh fast with primary key
start with sysdate
next sysdate + 1/(24*60)
as (select * from table_b@orcl_b);

--orcl_b is the db_link to master site b

select count(*) from mv_table_b;
51433 rows


On master site b:

Insert into table_b
(col1, col2, col3, col4, col5, col6, col7, col8, col9, LAST_UPDATE_USER_ID, LAST_UPDATE_DATE_TIME, ROW_CREATE_DATE, MODEL_YEAR_PARTITIONING_KEY)
Values
(51435, 'YKD12352', 8660, 100, 3, 9, 2, 3, 'Y', 'DKSV', TO_DATE('06/14/2005 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('06/21/2005 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));

COMMIT;


On local site a:
select count(*) from mv_table_b;
51434 rows

Notice that it returned the complete set of 51433 + 1 newly inserted rows. I thought I was supposed to get only the newly inserted or updated rows... in my case, I expected to see only 1 row.

1) Is my understanding right about seeing only the changed rows?
2) If so, how would I see only the changes (made at master site b) on the local site?
3) To replicate local site a with master site b, I was planning on using a merge statement... any help on how I would write it?

Tom Kyte
June 23, 2006 - 1:19 pm UTC

rather than guessing from a bunch of forum replies - did you read the documentation on replication and materialized views?


The entire goal of a materialized view is to maintain an exact replica of the defining query. When you did the create - it copied that data over. Then you did the insert, and it copied that over.


It would not be a useful feature if it only captured modifications after the create and never had the initial data!

How to get only changes using materialized views for replication

sujit, June 23, 2006 - 1:42 pm UTC

Thanks Tom. I read through some oracle documentation from links on your site. Still a bit fuzzy...:-)

So now I have all the rows on the local server side... including the new and changed rows from the master db table. Now how do I proceed to implement replication between the master db table and the local db table? Can you throw some light on a typical roadmap for this kind of simple replication?



Tom Kyte
June 23, 2006 - 2:03 pm UTC

when you CREATED it, you got all of the existing rows.

The MV log would then supply the incremental changes - to keep the replica in perfect synchronization.


you already *had* replication implemented, the second you did the create materialized view. You are done.
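To make the mechanism concrete (a hedged illustration, with names following the question): the changes queue in the master's MV log table, named MLOG$_<table_name>, and a fast refresh drains that queue into the replica:

```sql
-- on the master site: pending changes waiting to be pulled
select count(*) from mlog$_table_b;

-- on the local site: 'F' = fast (incremental) refresh; only the
-- queued changes travel over the db link, not the whole table
exec dbms_mview.refresh( 'MV_TABLE_B', 'F' );
```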

How to get only changes using materialized views for replication

Sujit, June 23, 2006 - 2:16 pm UTC

Thanks for clarifying that only the changed rows come from the log. A very clever way of implementing it: just bring the changes.

Now, I have my materialized view on local db that contains all the rows - old unchanged rows, newly inserted rows, updated rows and possibly deleted rows.

How do I identify these changes, and how do I push them into my local db table_a?

Tom Kyte
June 23, 2006 - 2:25 pm UTC

they are already "pushed", they are THERE, you have them, you just counted them.

How to get only changes using materialized views for replication June 23, 2006

Sujit, June 23, 2006 - 2:33 pm UTC

Sorry Tom, I am still lost.

In all my scripts above I have used table_b (the master table on the remote db). All I have on the local db is the MV with all the changes, and the content of table_a is still old.

Don't I need to push changes from the MV to my local table_a?

Are you suggesting I use the MV in place of table_a?

Tom Kyte
June 23, 2006 - 2:44 pm UTC

what is "table_a", you have your materialized view, IT HAS ALL OF THE DATA

you are done, you have it. It is in a thing called mv_table_b;

you are finished.


use a private synonym or a view if you need to refer to it by another name.

How to get only changes using materialized views for replication

Suji, June 23, 2006 - 2:57 pm UTC

Thanks Tom.

Finally I got it through my thick skull.

You are the best. Have a wonderful weekend!

Why the result differs for the same query?

AVS, July 20, 2006 - 1:00 pm UTC

Tom,

    I ran a query on the FACT_ORDER table (below). It used query rewrite and was redirected to the respective materialized view to get the data; the result was 5055341474. When I ran the same query against the base table by changing the where clause (adding || '' on the 4th line), the result was 5055803575. Between the two executions there was no DDL/DML (not even a select) or anything else running on the database. Why did I get such different results?

SQL> SELECT sum(FACT_ORDER.AM_NET_CVA_DELTA)
  2  FROM FACT_ORDER
  3  WHERE (FACT_ORDER.IND_IS_REPORTABLE  = 'Y'
  4  AND FACT_ORDER.IND_TO_BE_DELETED = 'N')
  5  AND REPORTING_DATE BETWEEN to_date('01-JAN-2006', 'DD-MON-YYYY') AND to_date('03-JUL-2006', 'DD-MON-YYYY');

SUM(FACT_ORDER.AM_NET_CVA_DELTA)
-----------------------------------------
                               5055341474


Execution Plan
----------------------------------------------------------
Plan hash value: 3626593659

-------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                 | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                      |     1 |    12 |  1509   (2)| 00:00:22 |       |       |        |      |            |
|   1 |  SORT AGGREGATE                   |                      |     1 |    12 |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR                  |                      |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)            | :TQ10000             |     1 |    12 |            |          |       |       |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE                |                      |     1 |    12 |            |          |       |       |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR            |                      |  2389K|    27M|  1509   (2)| 00:00:22 |    50 |    74 |  Q1,00 | PCWC |            |
|*  6 |       MAT_VIEW REWRITE ACCESS FULL| MV_PRECALC_ORDER_SUM |  2389K|    27M|  1509   (2)| 00:00:22 |    50 |    74 |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - filter("MV_PRECALC_ORDER_SUM"."REPORTING_DATE"<=TO_DATE('2006-07-03 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))

Note
-----
   - 'PLAN_TABLE' is old version


Statistics
----------------------------------------------------------
      48454  recursive calls
          3  db block gets
      51976  consistent gets
      19291  physical reads
        680  redo size
        244  bytes sent via SQL*Net to client
        240  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
        225  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> SELECT sum(FACT_ORDER.AM_NET_CVA_DELTA)
  2  FROM FACT_ORDER
  3  WHERE (FACT_ORDER.IND_IS_REPORTABLE  = 'Y'
  4  AND FACT_ORDER.IND_TO_BE_DELETED || '' = 'N')
  5  AND REPORTING_DATE BETWEEN to_date('01-JAN-2006', 'DD-MON-YYYY') AND to_date('03-JUL-2006', 'DD-MON-YYYY');

SUM(FACT_ORDER.AM_NET_CVA_DELTA)
-----------------------------------------
                               5055803575


Execution Plan
----------------------------------------------------------
Plan hash value: 3220554619

----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |            |     1 |    16 | 66552   (1)| 00:15:32 |       |       |        |      |            |
|   1 |  SORT AGGREGATE        |            |     1 |    16 |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR       |            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000   |     1 |    16 |            |          |       |       |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |            |     1 |    16 |            |          |       |       |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |            |  6031K|    92M| 66552   (1)| 00:15:32 |    12 |    14 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS FULL| FACT_ORDER |  6031K|    92M| 66552   (1)| 00:15:32 |    12 |    14 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - filter("REPORTING_DATE">=TO_DATE('2006-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
              "FACT_ORDER"."IND_IS_REPORTABLE"='Y' AND "FACT_ORDER"."IND_TO_BE_DELETED"||''='N' AND
              "REPORTING_DATE"<=TO_DATE('2006-07-03 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))

Note
-----
   - 'PLAN_TABLE' is old version


Statistics
----------------------------------------------------------
       2538  recursive calls
          3  db block gets
     818105  consistent gets
     770956  physical reads
        588  redo size
        259  bytes sent via SQL*Net to client
        240  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
         40  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>


Regards,
AVS 

Tom Kyte
July 22, 2006 - 4:55 pm UTC

was the materialized view up to date, and what was your query rewrite integrity set to?

trigger for select statement

ramasubramanian, July 27, 2006 - 5:46 am UTC

How do I write a trigger that fires when a select statement is executed?

Tom Kyte
July 27, 2006 - 11:54 am UTC

you cannot.

You can use fine-grained auditing, however, which is very similar; read about dbms_fga
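A minimal sketch of a fine-grained auditing policy; the schema, table, and policy names are hypothetical, and the STATEMENT_TYPES parameter exists in 10g (earlier releases audit queries only):

```sql
begin
    dbms_fga.add_policy(
        object_schema   => 'SCOTT',
        object_name     => 'EMP',
        policy_name     => 'EMP_SELECT_AUDIT',
        statement_types => 'SELECT' );
end;
/

-- each audited SELECT then shows up in DBA_FGA_AUDIT_TRAIL
```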

Why the result differs for the same query?

AVS, July 27, 2006 - 7:11 am UTC

The MVs were completely refreshed before I fired any query. Also, no DDL/DML was fired between the two SQL statements.

SQL> show parameter query

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------
query_rewrite_enabled                string      TRUE
query_rewrite_integrity              string      enforced
 

Tom Kyte
July 27, 2006 - 12:04 pm UTC

please utilize support - if the MV was "fresh" and you get different answers when the query is rewritten to use the MV and query rewrite integrity is enforced (not trusted), they should be the same.

suggest you use the norewrite hint instead of ||'' - it'll make for a better test case.
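That is, something along these lines (a sketch against the question's query):

```sql
-- NOREWRITE forces the optimizer to go to the base table, a cleaner
-- comparison than defeating query rewrite with the ||'' trick
SELECT /*+ NOREWRITE */ sum(FACT_ORDER.AM_NET_CVA_DELTA)
FROM   FACT_ORDER
WHERE  FACT_ORDER.IND_IS_REPORTABLE = 'Y'
AND    FACT_ORDER.IND_TO_BE_DELETED = 'N'
AND    REPORTING_DATE BETWEEN to_date('01-JAN-2006','DD-MON-YYYY')
                          AND to_date('03-JUL-2006','DD-MON-YYYY');
```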

ORA-12015 even though I have primary key on columns?

cg, August 01, 2006 - 10:40 am UTC

I am trying to create the materialized view below with fast refresh. It contains outer joins; the inner-table columns are primary key columns.

SQL> alter table on_ord_qty add constraint on_ord_pk primary key ( store_id, item_id );

Table altered.

SQL> alter table alloctab add constraint alloctab_pk primary key ( id_str_rt, id_itm );

Table altered.

-- The materialized view creation
SQL> create materialized view dis_mv
  2     PARTITION BY HASH(store_id)
  3     PARTITIONS 8
  4     STORE IN (DISCOVERER_DATA,
  5               DISCOVERER_DATA,
  6               DISCOVERER_DATA,
  7               DISCOVERER_DATA,
  8               DISCOVERER_DATA,
  9               DISCOVERER_DATA,
 10               DISCOVERER_DATA,
 11               DISCOVERER_DATA)
 12    NOCACHE
 13    NOPARALLEL
 14    build immediate
 15    refresh fast on demand
 16    as
 17   SELECT    /*+ index(itmloc pk_as_itm_rtl_str) index(soh pk_rk_store_item_soh) index(ris pk_rk_item_supplier) index(itm pk_as_itm)*/
 18                psr.id_str_rt    store_id,
 19                psr.rowid        psr_rid,
 20                po.nm_orgn       store_name,
 21                po.rowid         ps_rid,
 22                soh.id_itm       item_id,
 23                soh.rowid        soh_rid,
 24                SUBSTR(itm.id_strc_mr,1,INSTR(itm.id_strc_mr,':')-1) Dept,
 25                soh.id_itm ||'-'|| itm.de_itm id_desc,
 26                itm.de_itm       item_description,
 27                itm.lu_uom_dft   default_uom,
 28                itm.rowid        itm_rowid,
 29                itmloc.id_spr    primary_supplier,
 30                itmloc.rowid     itmloc_rid,
 31                ps.nm_spr        vendor_name,
 32                ps.rowid         ps_rowid,
 33                itmloc.id_spr ||'-'|| ps.nm_spr VDesc,
 34                ris.vpn          vendor_style,
 35                ris.rowid        ris_rid,
 36                itmloc.repn_type repn_type,
 37                soh.total_quantity soh,
 38                soh.in_transit_quantity in_transit_quantity,
 39                soh.adjust_unavail_qty unavailable_quantity,
 40                aq.allocated_quantity,
 41                aq.rowid       aq_rid,
 42                ooq.on_order_quantity,
 43                ooq.rowid      ooq_rid,
 44                CASE WHEN soh.total_quantity = 0 THEN 'ZERO'
 45                     WHEN soh.total_quantity < 0 THEN 'NEGATIVE'
 46                END status
 47          FROM sim.pa_str_rtl        psr,
 48               sim.as_itm_rtl_str    itmloc,
 49               sim.as_itm            itm,
 50               sim.rk_item_supplier  ris,
 51               sim.pa_spr            ps,
 52               sim.rk_store_item_soh soh,
 53               sim.pa_orgn           po,
 54               sim.alloctab      aq,
 55               sim.on_ord_qty    ooq
 56         WHERE itmloc.id_str_rt = psr.id_str_rt
 57           AND itmloc.id_itm    = itm.id_itm
 58           AND ooq.store_id(+)     = soh.id_str_rt
 59           AND ooq.item_id(+)      = soh.id_itm
 60           AND itmloc.id_itm    = ris.id_itm
 61           AND itmloc.id_spr    = ris.id_spr
 62           AND ris.id_spr       = ps.id_spr
 63           AND aq.id_str_rt(+)     = soh.id_str_rt
 64           AND aq.id_itm(+)        = soh.id_itm
 65           AND soh.id_str_rt    = itmloc.id_str_rt
 66           AND soh.id_str_rt IN ('10','4','79','83','5','7','8','9','11','13','15','16','17','18','20','21','24','25','26','34','78','84','85','86'
,'2')
 67           AND soh.id_itm       = itmloc.id_itm
 68           AND psr.id_prty     = po.id_prty_orgn;
        FROM sim.pa_str_rtl        psr,
                 *
ERROR at line 47:
ORA-12015: cannot create a fast refresh materialized view from a complex query

Why is that? I have materialized view logs on all my base tables too.
I previously created this materialized view with all inner joins but had to change some of them to outer joins.
 

Tom Kyte
August 01, 2006 - 6:44 pm UTC

have you used the supplied packages that explain what you can and cannot do to a materialized view? They are very useful in these cases.


I quite simply refuse to look at really big queries like this without, well, you know...

AN ENTIRE TEST CASE that I can cut and paste and run for myself.

</code> http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_mview.htm#sthref4880 <code>
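
For readers following along: the supplied package Tom is referring to is DBMS_MVIEW.EXPLAIN_MVIEW, which writes its findings to MV_CAPABILITIES_TABLE (created by the utlxmv.sql script shipped with the database). A minimal sketch, assuming the MV name DIS_MV from the post above:

```sql
-- Create the output table once per schema (script ships with the database)
@?/rdbms/admin/utlxmv.sql

-- Analyze the materialized view (EXPLAIN_MVIEW also accepts a defining
-- query as text, useful when the CREATE itself is failing)
EXEC DBMS_MVIEW.EXPLAIN_MVIEW('DIS_MV');

-- Each capability row says whether it is possible, and if not, why
SELECT capability_name, possible, related_text, msgtxt
  FROM mv_capabilities_table
 ORDER BY seq;
```

The MSGTXT column is what the poster's @readcap script below is reporting.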


cg, August 02, 2006 - 8:06 am UTC

Well, I really just wanted confirmation that those primary keys count as UNIQUE CONSTRAINTS.

If so, I should be able to fast refresh on those columns.

Below is what the package came back with.
Saying there is a 'non-AND conjunction' is confusing, because the IN list was already there back when I could fast refresh.
Same with the filter conditions: it is reporting restrictions that didn't matter before.

SQL> @readcap

CAPABILITY_NAME                P REL_TEXT
------------------------------ - ------------------------
MSGTXT
--------------------------------------------------------------------------
PCT                            N


REFRESH_COMPLETE               Y


REFRESH_FAST                   N


REWRITE                        N


PCT_TABLE                      N SIM.PA_S
relation is not a partitioned table

PCT_TABLE                      N SIM.AS_I
PCT not supported with this type of partitioning

PCT_TABLE                      N SIM.AS_I

CAPABILITY_NAME                P REL_TEXT
------------------------------ - ------------------------
MSGTXT
--------------------------------------------------------------------------
relation is not a partitioned table

PCT_TABLE                      N SIM.RK_I
relation is not a partitioned table

PCT_TABLE                      N SIM.PA_S
relation is not a partitioned table

PCT_TABLE                      N SIM.RK_S
PCT not supported with this type of partitioning

PCT_TABLE                      N SIM.PA_O
relation is not a partitioned table

PCT_TABLE                      N SIM.ALLO
relation is not a partitioned table

PCT_TABLE                      N SIM.ON_O
relation is not a partitioned table

CAPABILITY_NAME                P REL_TEXT
------------------------------ - ------------------------
MSGTXT
--------------------------------------------------------------------------

REFRESH_FAST_AFTER_INSERT      N
non-AND conjunction in WHERE clause

REFRESH_FAST_AFTER_INSERT      N
mv uses a filter condition in the WHERE or HAVING clause

REFRESH_FAST_AFTER_INSERT      N
outer join in mv

REFRESH_FAST_AFTER_INSERT      N
one or more joins present in mv

REFRESH_FAST_AFTER_ONETAB_DML  N
see the reason why REFRESH_FAST_AFTER_INSERT is disabled

REFRESH_FAST_AFTER_ANY_DML     N
see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled


CAPABILITY_NAME                P REL_TEXT
------------------------------ - ------------------------
MSGTXT
--------------------------------------------------------------------------
REFRESH_FAST_PCT               N
PCT is not possible on any of the detail tables in the mater

REWRITE_FULL_TEXT_MATCH        N
query rewrite is disabled on the materialized view

REWRITE_PARTIAL_TEXT_MATCH     N
query rewrite is disabled on the materialized view

REWRITE_GENERAL                N
query rewrite is disabled on the materialized view

REWRITE_PCT                    N
general rewrite is not possible and PCT is not possible on a
 

Tom Kyte
August 02, 2006 - 12:01 pm UTC

primary key is better than good enough for "unique"

unless and until you give me a really tiny (super small, trivial) example that demonstrates the issue (but it must be 100% complete), I cannot really take a look. I cannot reproduce your issue as I do not have your tables.

super small means "any column not absolutely necessary is gone, there are no tablespaces in ddl, etc".

many (most) times I find when I make these test cases - I find the problem myself.

OK-Here is my test case

cg, August 03, 2006 - 8:59 am UTC

My first attempt at a small test case for the issue. If this is too large, let me know and I will see what I can trim.

CREATE TABLE RK_ALLOCATIONS1
(
ALLOCATION_ID NUMBER(10) NOT NULL,
ITEM_ID VARCHAR2(128 BYTE) NOT NULL,
DELIVERY_DATE DATE,
QUANTITY NUMBER(20,4),
STORE_ID VARCHAR2(128 BYTE) NOT NULL
);
ALTER TABLE RK_ALLOCATIONS1 ADD
CONSTRAINT PK_RK_ALLOCATIONS1 PRIMARY KEY (ALLOCATION_ID, ITEM_ID, STORE_ID);
CREATE TABLE RK_STORE_ITEM_SOH1
(
ID_ITM VARCHAR2(128 BYTE) NOT NULL,
ID_STR_RT VARCHAR2(128 BYTE) NOT NULL,
TOTAL_QUANTITY NUMBER(20,4) DEFAULT 0 NOT NULL,
IN_TRANSIT_QUANTITY NUMBER(20,4),
ADJUST_UNAVAIL_QTY NUMBER(20,4)
);
ALTER TABLE RK_STORE_ITEM_SOH1 ADD
CONSTRAINT PK_RK_STORE_ITEM_SOH1 PRIMARY KEY (ID_ITM, ID_STR_RT);
CREATE TABLE PA_ORGN1
(
NM_ORGN VARCHAR2(40 BYTE),
ID_PRTY_ORGN VARCHAR2(128 BYTE) NOT NULL
);
ALTER TABLE PA_ORGN1 ADD
CONSTRAINT PK_PA_ORGN11 PRIMARY KEY (ID_PRTY_ORGN);
CREATE TABLE PA_SPR1
(
ID_SPR VARCHAR2(128 BYTE) NOT NULL,
NM_SPR VARCHAR2(40 BYTE)
);
ALTER TABLE PA_SPR1 ADD
CONSTRAINT PK_PA_SPR1 PRIMARY KEY (ID_SPR);
CREATE TABLE AS_ITM_RTL_STR1
(
ID_STR_RT VARCHAR2(128 BYTE) NOT NULL,
ID_ITM VARCHAR2(128 BYTE) NOT NULL,
ID_SPR VARCHAR2(128 BYTE),
REPN_TYPE VARCHAR2(4 BYTE)
);
ALTER TABLE AS_ITM_RTL_STR1 ADD
CONSTRAINT PK_AS_ITM_RTL_STR1 PRIMARY KEY (ID_ITM, ID_STR_RT );
CREATE TABLE PA_STR_RTL1
(
ID_STR_RT VARCHAR2(128 BYTE) NOT NULL,
ID_PRTY VARCHAR2(128 BYTE) NOT NULL
);
ALTER TABLE PA_STR_RTL1 ADD
CONSTRAINT PK_PA_STR_RTL1 PRIMARY KEY (ID_STR_RT);
CREATE UNIQUE INDEX PA_STR_RTL_I11 ON PA_STR_RTL1
(ID_PRTY);
CREATE TABLE RK_ITEM_SUPPLIER1
(
ID_ITM VARCHAR2(128 BYTE) NOT NULL,
ID_SPR VARCHAR2(128 BYTE) NOT NULL
);
ALTER TABLE RK_ITEM_SUPPLIER1 ADD
CONSTRAINT PK_RK_ITEM_SUPPLIER1 PRIMARY KEY (ID_ITM, ID_SPR);
CREATE TABLE AS_ITM1
(
ID_ITM VARCHAR2(128 BYTE) NOT NULL,
DE_ITM VARCHAR2(255 BYTE),
LU_UOM_DFT VARCHAR2(40 BYTE)
);
ALTER TABLE AS_ITM1 ADD
CONSTRAINT PK_AS_ITM1 PRIMARY KEY (ID_ITM);

create table alloctab1 as
SELECT soh.id_str_rt,
soh.id_itm,
SUM (rka.quantity) allocated_quantity
FROM rk_allocations rka,
rk_store_item_soh soh
WHERE rka.delivery_date > SYSDATE
AND rka.store_id = soh.id_str_rt
AND rka.store_id = '10'
AND rka.item_id = soh.id_itm
GROUP BY soh.id_str_rt,
soh.id_itm;

alter table alloctab1 add constraint alloctab1_pk primary key ( id_str_rt, id_itm );

insert into as_itm1 values ('22','33','each');

insert into rk_allocations1 values ( 1,'22','10-JUN-05',500,'10');

insert into rk_store_item_soh1 values (22,'10',1000,5000,500);

insert into pa_orgn1 values('nm_orgn','20');

insert into pa_spr1 values('30','nm_spr');

insert into as_itm_rtl_str1 values('10','22','30','patl');

insert into pa_str_rtl1 values ('10','20');

insert into rk_item_supplier1 values('22','30');

DROP SNAPSHOT LOG ON AS_ITM1;

CREATE MATERIALIZED VIEW LOG ON AS_ITM1
WITH ROWID, PRIMARY KEY, SEQUENCE
(DE_ITM,LU_UOM_DFT) INCLUDING NEW VALUES;

DROP SNAPSHOT LOG ON AS_ITM_RTL_STR1;

CREATE MATERIALIZED VIEW LOG ON AS_ITM_RTL_STR1
WITH ROWID, PRIMARY KEY, SEQUENCE
(ID_SPR,REPN_TYPE) INCLUDING NEW VALUES;

DROP SNAPSHOT LOG ON PA_ORGN1;

CREATE MATERIALIZED VIEW LOG ON PA_ORGN1
WITH ROWID, PRIMARY KEY, SEQUENCE
(NM_ORGN) INCLUDING NEW VALUES;

DROP SNAPSHOT LOG ON PA_SPR1;

CREATE MATERIALIZED VIEW LOG ON PA_SPR1
WITH ROWID, PRIMARY KEY, SEQUENCE
(NM_SPR) INCLUDING NEW VALUES;

DROP SNAPSHOT LOG ON PA_STR_RTL1;

CREATE MATERIALIZED VIEW LOG ON PA_STR_RTL1
WITH ROWID, PRIMARY KEY, SEQUENCE INCLUDING NEW VALUES;


DROP SNAPSHOT LOG ON RK_ITEM_SUPPLIER1;

CREATE MATERIALIZED VIEW LOG ON RK_ITEM_SUPPLIER1
WITH ROWID, PRIMARY KEY, SEQUENCE
INCLUDING NEW VALUES;

DROP SNAPSHOT LOG ON RK_STORE_ITEM_SOH1;

CREATE MATERIALIZED VIEW LOG ON RK_STORE_ITEM_SOH1
WITH ROWID, PRIMARY KEY, SEQUENCE
(TOTAL_QUANTITY) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON alloctab1
NOCACHE
NOPARALLEL
WITH ROWID, PRIMARY KEY, SEQUENCE
(allocated_quantity) INCLUDING NEW VALUES;

create materialized view tst_mv
PARALLEL
PARTITION BY HASH(store_id)
PARTITIONS 8
NOCACHE
NOLOGGING
build immediate
refresh fast on demand
as
SELECT psr.id_str_rt store_id,
psr.rowid psr_rid,
po.nm_orgn store_name,
po.rowid ps_rid,
soh.id_itm item_id,
soh.rowid soh_rid,
itm.de_itm item_description,
itm.lu_uom_dft default_uom,
itm.rowid itm_rowid,
itmloc.id_spr primary_supplier,
itmloc.rowid itmloc_rid,
ps.nm_spr vendor_name,
ps.rowid ps_rowid,
ris.rowid ris_rid,
itmloc.repn_type repn_type,
soh.total_quantity soh,
soh.in_transit_quantity in_transit_quantity,
soh.adjust_unavail_qty unavailable_quantity,
aq.allocated_quantity,
aq.rowid aq_rid,
CASE WHEN soh.total_quantity = 0 THEN 'ZERO'
WHEN soh.total_quantity < 0 THEN 'NEGATIVE'
END status
FROM pa_str_rtl1 psr,
as_itm_rtl_str1 itmloc,
as_itm1 itm,
rk_item_supplier1 ris,
pa_spr1 ps,
rk_store_item_soh1 soh,
pa_orgn1 po,
alloctab1 aq
WHERE itmloc.id_str_rt = psr.id_str_rt
AND itmloc.id_itm = itm.id_itm
AND itmloc.id_itm = ris.id_itm
AND itmloc.id_spr = ris.id_spr
AND ris.id_spr = ps.id_spr
AND aq.id_str_rt(+) = soh.id_str_rt
AND aq.id_itm(+) = soh.id_itm
AND soh.id_str_rt = itmloc.id_str_rt
AND soh.id_str_rt ='10'
AND soh.id_itm = itmloc.id_itm
AND psr.id_prty = po.id_prty_orgn;

Tom Kyte
August 03, 2006 - 10:08 am UTC

(your test case could have been smaller - no inserts needed.  also, the create table as select was botched; it referenced tables we don't have)

Anyway:

ops$tkyte%ORA10GR2> create materialized view tst_mv
  2     PARALLEL
  3     PARTITION BY HASH(store_id)
  4     PARTITIONS 8
  5    NOCACHE
  6    NOLOGGING
  7    build immediate
  8    refresh fast on demand
  9    as
 10   SELECT psr.id_str_rt    store_id,
 11                psr.rowid        psr_rid,
 12                po.nm_orgn       store_name,
 13                po.rowid         ps_rid,
 14                soh.id_itm       item_id,
 15                soh.rowid        soh_rid,
 16                itm.de_itm       item_description,
 17                itm.lu_uom_dft   default_uom,
 18                itm.rowid        itm_rowid,
 19                itmloc.id_spr    primary_supplier,
 20                itmloc.rowid     itmloc_rid,
 21                ps.nm_spr        vendor_name,
 22                ps.rowid         ps_rowid,
 23                ris.rowid        ris_rid,
 24                itmloc.repn_type repn_type,
 25                soh.total_quantity soh,
 26                soh.in_transit_quantity in_transit_quantity,
 27                soh.adjust_unavail_qty unavailable_quantity,
 28                aq.allocated_quantity,
 29                aq.rowid       aq_rid,
 30                CASE WHEN soh.total_quantity = 0 THEN 'ZERO'
 31                     WHEN soh.total_quantity < 0 THEN 'NEGATIVE'
 32                END status
 33          FROM pa_str_rtl1        psr,
 34               as_itm_rtl_str1    itmloc,
 35               as_itm1            itm,
 36               rk_item_supplier1  ris,
 37               pa_spr1            ps,
 38               rk_store_item_soh1 soh,
 39               pa_orgn1           po,
 40               alloctab1      aq
 41         WHERE itmloc.id_str_rt = psr.id_str_rt
 42           AND itmloc.id_itm    = itm.id_itm
 43           AND itmloc.id_itm    = ris.id_itm
 44           AND itmloc.id_spr    = ris.id_spr
 45           AND ris.id_spr       = ps.id_spr
 46           AND aq.id_str_rt(+)  = soh.id_str_rt
 47           AND aq.id_itm(+)     = soh.id_itm
 48           AND soh.id_str_rt    = itmloc.id_str_rt
 49           AND soh.id_str_rt    ='10'
 50           AND soh.id_itm       = itmloc.id_itm
 51           AND psr.id_prty      = po.id_prty_orgn;

Materialized view created.


works in 10gR2 (more and more support for more and more things)..

However, in 10gr1 and before - it was pretty obvious, the explain_mview said:

REFRESH_FAST_AFTER_INSERT      N                                mv uses a filter condition in the WHERE
                                                                or HAVING clause


and upon removing line 49 AND removing the CASE (changed to decode:

              decode( sign(soh.total_quantity), 0, 'ZERO', -1, 'NEGATIVE' )
)

the MV becomes fast refreshable.  In 9iR2 for example:

ops$tkyte%ORA9IR2> create materialized view tst_mv
  2     PARALLEL
  3     PARTITION BY HASH(store_id)
  4     PARTITIONS 8
  5    NOCACHE
  6    NOLOGGING
  7    build immediate
  8    refresh fast on demand
  9    as
 10   SELECT psr.id_str_rt    store_id,
 11                psr.rowid        psr_rid,
 12                po.nm_orgn       store_name,
 13                po.rowid         ps_rid,
 14                soh.id_itm       item_id,
 15                soh.rowid        soh_rid,
 16                itm.de_itm       item_description,
 17                itm.lu_uom_dft   default_uom,
 18                itm.rowid        itm_rowid,
 19                itmloc.id_spr    primary_supplier,
 20                itmloc.rowid     itmloc_rid,
 21                ps.nm_spr        vendor_name,
 22                ps.rowid         ps_rowid,
 23                ris.rowid        ris_rid,
 24                itmloc.repn_type repn_type,
 25                soh.total_quantity soh,
 26                soh.in_transit_quantity in_transit_quantity,
 27                soh.adjust_unavail_qty unavailable_quantity,
 28                aq.allocated_quantity,
 29                aq.rowid       aq_rid,
 30                            decode( sign(soh.total_quantity), 0, 'ZERO', -1, 'NEGATIVE' )
 31                            /*
 32                CASE WHEN soh.total_quantity = 0 THEN 'ZERO'
 33                     WHEN soh.total_quantity < 0 THEN 'NEGATIVE'
 34                END */
 35                            status
 36          FROM pa_str_rtl1        psr,
 37               as_itm_rtl_str1    itmloc,
 38               as_itm1            itm,
 39               rk_item_supplier1  ris,
 40               pa_spr1            ps,
 41               rk_store_item_soh1 soh,
 42               pa_orgn1           po,
 43               alloctab1      aq
 44         WHERE itmloc.id_str_rt = psr.id_str_rt
 45           AND itmloc.id_itm    = itm.id_itm
 46           AND itmloc.id_itm    = ris.id_itm
 47           AND itmloc.id_spr    = ris.id_spr
 48           AND ris.id_spr       = ps.id_spr
 49           AND aq.id_str_rt(+)  = soh.id_str_rt
 50           AND aq.id_itm(+)     = soh.id_itm
 51           AND soh.id_str_rt    = itmloc.id_str_rt
 52  --         AND soh.id_str_rt    ='10'
 53           AND soh.id_itm       = itmloc.id_itm
 54           AND psr.id_prty      = po.id_prty_orgn;

Materialized view created.


 

Thank you for your time, it is appreciated. But.......

CG, August 03, 2006 - 1:58 pm UTC

...is the lesson learned that the restrictions are not mutually exclusive? I mean, the CASE and the filter in the WHERE clause didn't matter here:
SQL> create materialized view dis_mv2
  2     PARALLEL
  3     PARTITION BY HASH(store_id)
  4     PARTITIONS 8
  5     STORE IN (DISCOVERER_DATA,
  6               DISCOVERER_DATA,
  7               DISCOVERER_DATA,
  8               DISCOVERER_DATA,
  9               DISCOVERER_DATA,
 10               DISCOVERER_DATA,
 11               DISCOVERER_DATA,
 12               DISCOVERER_DATA)
 13    NOCACHE
 14    NOLOGGING
 15    build immediate
 16    refresh fast on demand
 17    as
 18   SELECT    /*+ index(itmloc pk_as_itm_rtl_str) index(soh pk_rk_store_item_soh) index(ris pk_rk_item_supplier) index(itm pk_as_itm)*/
 19                psr.id_str_rt    store_id,
 20                psr.rowid        psr_rid,
 21                po.nm_orgn       store_name,
 22                po.rowid         ps_rid,
 23                soh.id_itm       item_id,
 24                soh.rowid        soh_rid,
 25                SUBSTR(itm.id_strc_mr,1,INSTR(itm.id_strc_mr,':')-1) Dept,
 26                soh.id_itm ||'-'|| itm.de_itm id_desc,
 27                itm.de_itm       item_description,
 28                itm.lu_uom_dft   default_uom,
 29                itm.rowid        itm_rowid,
 30                itmloc.id_spr    primary_supplier,
 31                itmloc.rowid     itmloc_rid,
 32                ps.nm_spr        vendor_name,
 33                ps.rowid         ps_rowid,
 34                itmloc.id_spr ||'-'|| ps.nm_spr VDesc,
 35                ris.vpn          vendor_style,
 36                ris.rowid        ris_rid,
 37                itmloc.repn_type repn_type,
 38                soh.total_quantity soh,
 39                soh.in_transit_quantity in_transit_quantity,
 40                soh.adjust_unavail_qty unavailable_quantity,
 41                aq.allocated_quantity,
 42                aq.rowid       aq_rid,
 43                ooq.on_order_quantity,
 44                ooq.rowid      ooq_rid,
 45                CASE WHEN soh.total_quantity = 0 THEN 'ZERO'
 46                     WHEN soh.total_quantity < 0 THEN 'NEGATIVE'
 47                END status
 48          FROM sim.pa_str_rtl        psr,
 49               sim.as_itm_rtl_str    itmloc,
 50               sim.as_itm            itm,
 51               sim.rk_item_supplier  ris,
 52               sim.pa_spr            ps,
 53               sim.rk_store_item_soh soh,
 54               sim.pa_orgn           po,
 55               sim.alloctab      aq,
 56               sim.on_ord_qty    ooq
 57         WHERE itmloc.id_str_rt = psr.id_str_rt
 58           AND itmloc.id_itm    = itm.id_itm
 59           AND ooq.store_id     = soh.id_str_rt
 60           AND ooq.item_id      = soh.id_itm
 61           AND itmloc.id_itm    = ris.id_itm
 62           AND itmloc.id_spr    = ris.id_spr
 63           AND ris.id_spr       = ps.id_spr
 64           AND aq.id_str_rt     = soh.id_str_rt
 65           AND aq.id_itm        = soh.id_itm
 66           AND soh.id_str_rt    = itmloc.id_str_rt
 67           AND soh.id_str_rt IN ('10','4','79','83','5','7','8','9','11','13','15','16','17','18','20','21','24','25','26','34','78','84','85','86'
,'2')
 68           AND soh.id_itm       = itmloc.id_itm
 69           AND psr.id_prty     = po.id_prty_orgn;

Materialized view created.
 
Only when I added the outer join on aq and ooq did it make a difference. But the mv_cap table even mentioning the outer joins threw me off, because both had primary keys on the join columns (so why mention them?)

So it comes down to SPECIFIC combos of restrictions for Oracle to prevent fast refresh? So you can't assume something should work just because it worked before.

But after running more scenarios, the CASE causes it to give ORA-12015, and IN lists don't work. I joined another table that held the ids for me, and fast refresh was accepted. I'll go with that.

Tom Kyte
August 03, 2006 - 4:27 pm UTC

the outer join is inherently more restrictive, yes.

You could weigh them all in your mind, the restrictions should all be documented (and if not, we should get them to be so)

Me, I'm lazy sometimes - I put the MV together and use explain_mview to see what I can and cannot do to it. That is exactly what I did here.



I understand your point.....

Cg, August 04, 2006 - 11:58 am UTC

..... but I don't remember any restrictions on the CASE statement (if so, please give me the link).

The 'inherently more restrictive' part - well, like you said, IT SHOULD be documented.



Tom Kyte
August 04, 2006 - 12:21 pm UTC

it is the where clause filter - that is sort of implicit (case is like "where")

If there are no rows in mv logs.......

cg, August 07, 2006 - 1:37 pm UTC

......how long should a fast refresh take?

Theoretically, I would say less than one second. But what's the reality?

An attempted fast refresh took an hour when there were no data changes.

I did not have any indexes on the rowid columns, which I am now creating.

Tom Kyte
August 07, 2006 - 9:42 pm UTC

this is like "how long is a piece of string"

trace it.
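
One way to follow the "trace it" advice is to wrap the refresh in an extended SQL trace and format the result with tkprof. A sketch (the trace file name and location depend on your instance's dump destination settings):

```sql
-- Enable extended (binds + waits) SQL trace for this session
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- Run the slow refresh; 'F' requests a fast refresh
EXEC DBMS_MVIEW.REFRESH('TST_MV', method => 'F');

ALTER SESSION SET events '10046 trace name context off';

-- Then, from the OS, format the trace file, e.g.:
--   tkprof <tracefile> refresh.prf sort=exeela
```

The tkprof report shows the internal SQL the refresh executes and where the hour actually went.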

Replication between 8i and 10g

Mike, August 09, 2006 - 10:10 am UTC

Tom,

We have two database servers:

Server1: Oracle 8.1.7 DB - OpenVMS
Server2: Oracle 10g R2 – Solaris 9

We have to synchronize Server2 with Server1. We're planning to do that using advanced replication and materialized views.
The configuration will be MASTER (Server1) - SNAPSHOT (Server2).

The problem is that we don't have the Distributed Option installed on Server1. Installation of this option requires downtime that we cannot afford.

Do we need the Distributed Option installed on Server1, given that in the MASTER server configuration no database links (to the snapshot server) are used? As far as I know, all replication packages that have to be used during the configuration require the Replication Option (which we have installed).

Thanks,
Mike


Tom Kyte
August 09, 2006 - 11:13 am UTC

wow, you are going to incur downtime anyway.

You are doing new development on unsupported software.


are you sure the 8i install doesn't have "distributed"? distributed ceased being an option a long, long time ago.

Replication between 8i and 10g

Mike, August 09, 2006 - 11:33 am UTC

Tom,

Thanks for your quick answer.

8i is supposed to have the distributed option installed by default, but not the OpenVMS Alpha version.

This software is supported under Extended Maintenance Support (EMS).

In MASTER-SNAPSHOT replication, the snapshot site runs packages that use db links to the MASTER site, so it requires "distributed" - which is not a problem.

The MASTER site doesn't require any db link to the snapshot site (Note:120094.1, Note:123560.1), therefore it might not require the "distributed" option.

This replication is only for a short period of time while we migrate from 8i (OpenVMS) to 10g (Solaris).

Thanks,
Mike

Tom Kyte
August 09, 2006 - 12:54 pm UTC

which means "if a solution for the problem already exists, you can have it, but no new fixes"

you are basically doing new development on unsupported software.


I guess all I can say is "try it"

Fast refresh with MIN and MAX aggregates

A reader, November 03, 2006 - 3:58 am UTC

Hi Tom,

Could you explain why fast refresh for a materialized view with MIN and MAX aggregates is only supported for conventional DML inserts in contrast to full DML? This is the case at least with 10g Release 2.



Tom Kyte
November 03, 2006 - 9:40 am UTC

are you sure?

ops$tkyte%ORA10GR2> create table t ( x int primary key, y int, z int );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create materialized view log on t with rowid (x,y,z) including new values;

Materialized view log created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create materialized view mv
  2  refresh fast
  3  on commit
  4  as
  5  select z, count(*), count(y), min(y), max(y)
  6    from t
  7   group by z;

Materialized view created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_mview.explain_mview( 'MV' )

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select capability_name, possible from mv_capabilities_table where capability_name like 'REFRESH_FAST%';

CAPABILITY_NAME                P
------------------------------ -
REFRESH_FAST                   Y
REFRESH_FAST_AFTER_INSERT      Y
REFRESH_FAST_AFTER_ANY_DML     Y
REFRESH_FAST_PCT               N

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t values ( 1, 0, 1 );

1 row created.

ops$tkyte%ORA10GR2> insert into t values ( 2, 1, 1 );

1 row created.

ops$tkyte%ORA10GR2> insert into t values ( 3, 2, 2 );

1 row created.

ops$tkyte%ORA10GR2> insert into t values ( 4, 3, 2 );

1 row created.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> select * from mv;

         Z   COUNT(*)   COUNT(Y)     MIN(Y)     MAX(Y)
---------- ---------- ---------- ---------- ----------
         1          2          2          0          1
         2          2          2          2          3

ops$tkyte%ORA10GR2> select z, count(*), count(y), min(y), max(y)
  2    from t
  3   group by z;

         Z   COUNT(*)   COUNT(Y)     MIN(Y)     MAX(Y)
---------- ---------- ---------- ---------- ----------
         1          2          2          0          1
         2          2          2          2          3

ops$tkyte%ORA10GR2> update t set y = y*1.5;

4 rows updated.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> select * from mv;

         Z   COUNT(*)   COUNT(Y)     MIN(Y)     MAX(Y)
---------- ---------- ---------- ---------- ----------
         1          2          2          0          2
         2          2          2          3          5

ops$tkyte%ORA10GR2> select z, count(*), count(y), min(y), max(y)
  2    from t
  3   group by z;

         Z   COUNT(*)   COUNT(Y)     MIN(Y)     MAX(Y)
---------- ---------- ---------- ---------- ----------
         1          2          2          0          2
         2          2          2          3          5


 

Refresh Problem

Nishith Pandey, November 06, 2006 - 3:24 am UTC

Hi Tom

We are using Oracle10gR1 on Windows 2000.
I have created some materialized views of similar complexity, but one of them is causing a problem. After being refreshed a few times, it shows an error, and I have to recreate it again and again. Here is the definition of the materialized view as I saw it in the data dictionary:

SQL> CREATE MATERIALIZED VIEW ADDB.CR_UNSDET_MV
  ORGANIZATION HEAP PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE ADDBTS4 
  BUILD IMMEDIATE
  USING INDEX 
  REFRESH FORCE ON DEMAND
  USING DEFAULT LOCAL ROLLBACK SEGMENT
  DISABLE QUERY REWRITE
  AS select   substr(fun_cr_agty(c.agty),1,2) agty
       , to_char(a.recptdt,'yyyymm') yyyymm
       , substr(c.loccd, 1, 4) district
       , substr(c.loccd, 1, 6) tehseel
       , substr(c.loccd, 1, 8) block
       , substr(c.loccd, 1, 10) area
       , c.agcd
       , c.dpcd
       , sum(a.acur) acur
       , sum(a.acnc) acnc
       , sum(a.acul) acul
       , sum(a.acnn) acnn
       , sum(a.acsr) acsr
       , sum(a.acfr) acfr
  from cr_unsdet a, cr_receipt b, cr_agmast c
 where a.unit=b.unit
   and a.doctyp=b.doctyp
   and a.recptno=b.recptno
   and a.recptdt=b.recptdt
   and b.unit=c.unit
   and b.agcd=c.agcd
   and b.dpcd=c.dpcd
   and a.recptdt >= to_date('01-03-2004','dd-mm-yyyy')
 group by   substr(fun_cr_agty(c.agty),1,2)
          , to_char(a.recptdt,'yyyymm')
          , substr(c.loccd, 1, 4)
          , substr(c.loccd, 1, 6)
          , substr(c.loccd, 1, 8)
          , substr(c.loccd, 1, 10)
          , c.agcd
          , c.dpcd
;

SQL> ALTER MATERIALIZED VIEW ADDB.CR_UNSDET_MV COMPILE;
 
SQL> CREATE INDEX ADDB.CR_UNSDET_MV_YYYYMM ON ADDB.CR_UNSDET_MV (AGTY, YYYYMM, DISTRICT, TEHSEEL, BLOCK, AREA) 
  ;
 
SQL> CREATE BITMAP INDEX ADDB.CR_UNSDET_MV_AGTY ON ADDB.CR_UNSDET_MV (AGTY) 
  ;

After some refreshes, it shows the following error:

SQL> exec dbms_mview.refresh('CR_UNSDET_MV')
BEGIN dbms_mview.refresh('CR_UNSDET_MV'); END;

*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-01858: a non-numeric character was found where a numeric was expected
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 1883
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2089
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2058
ORA-06512: at line 1

When I run the materialized view's defining query directly, it runs perfectly.

The other snapshots are more complex and they refresh successfully. What's wrong with this particular one?

Thanks in Advance! 

Tom Kyte
November 06, 2006 - 9:57 am UTC

no idea - when I run your example, it just fails outright (get it? I cannot run your example)

suggest you turn on sql_trace=true, run it, and then review the raw trace file; look for #er to find the "error" and you'll see the generated SQL - might be useful to you.

Fast refresh with MIN and MAX aggregates

A reader, November 06, 2006 - 4:27 am UTC

Tom,

Although fast refresh with MIN and MAX aggregates seems to work if the view involves only single table, when joining two tables it's a different story. This is what the documentation says:

--
10r2 Data Warehousing Guide, Ch 8

Restrictions on Fast Refresh on Materialized Views with Aggregates

If the materialized view has one of the following, then fast refresh is supported only on conventional DML inserts and direct loads.
  * Materialized views with MIN or MAX aggregates"
--

Here's a concrete example 

SQL> 
SQL> create table t1 (
  2       x int,
  3       y int,
  4       z int);

Table created.

SQL> 
SQL> create table t2 (
  2       s int primary key,
  3       t int);

Table created.

SQL> 
SQL> create materialized view log on t1 with sequence,
  2  rowid (x,y,z) including new values;

Materialized view log created.

SQL> 
SQL> create materialized view log on t2 with sequence,
  2  rowid (s,t) including new values;

Materialized view log created.

SQL> 
SQL> create materialized view mv
  2  refresh fast on commit as
  3  select x, s, t, count(*), count(z), min(z), max(z)
  4  from t1, t2
  5  where t1.y = t2.s
  6  group by x, s, t;

Materialized view created.

SQL> 
SQL> exec dbms_mview.explain_mview('mv');

PL/SQL procedure successfully completed.

SQL> 
SQL> select capability_name, possible, msgtxt
  2  from mv_capabilities_table
  3  where capability_name like 'REFRESH_FAST%';

CAPABILITY_NAME                P MSGTXT
------------------------------ - ------------------------------------------
REFRESH_FAST                   Y
REFRESH_FAST_AFTER_INSERT      Y
REFRESH_FAST_AFTER_ONETAB_DML  N mv uses the MIN or MAX aggregate functions
REFRESH_FAST_AFTER_ONETAB_DML  N mv uses the MIN or MAX aggregate functions
REFRESH_FAST_AFTER_ANY_DML     N see the reason why REFRESH_FAST_AFTER_ONET
REFRESH_FAST_PCT               N PCT is not possible on any of the detail t

6 rows selected.

SQL> 

Back to the original question: could you explain why fast refresh is not possible in this case?

 

Tom Kyte
November 06, 2006 - 10:03 am UTC

because of a documented implementation restriction.

this is why examples are really good, for you see - your first statement:
<quote>
Could you explain why fast refresh for a materialized view with MIN and MAX
aggregates is only supported for conventional DML inserts in contrast to full
DML?
</quote>
is absolutely "not true"


I will presume it has to do with the inherent complexity of rows dropping in and out of the MV as the join conditions are met or not, making the maintenance of the MV either impossibly hard - or just impossible (as a fast refresh)

suppose for example, you update a single row in T1.Y and change Y from 5 to 6.

Now, for that specific T1.Y = 5 record, Z was the maximum.

We take it away - there is NOTHING in the MV log now about what the new maximum value is.

It is the movement of the rows amongst many rows in the MV that makes it impracticable to maintain.
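The point can be made concrete with a sketch reusing the t1/t2 tables from the example above. The DMLTYPE$$ and OLD_NEW$$ columns are the standard MLOG$ housekeeping columns; the exact contents shown are illustrative:

```sql
-- Sketch: why updating away the MAX row defeats fast refresh.
-- Reuses t1(x, y, z) and its MV log from the example above.
insert into t1 values (1, 5, 100);   -- z = 100 is the max for the y = 5 group
insert into t1 values (1, 5, 50);
commit;

-- Move the max row out of the group:
update t1 set y = 6 where z = 100;
commit;

-- The MV log records that a row with z = 100 left the group,
-- but the NEW maximum for y = 5 (here 50) appears nowhere in it;
-- deriving it would require re-scanning the detail table.
select dmltype$$, old_new$$, x, y, z from mlog$_t1;
```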


Query Rewrite

CT.VELU, December 19, 2006 - 10:47 am UTC

Hi Tom
I have created a Materialized view like( Version 10g R2)

CREATE MATERIALIZED VIEW GUS.MV_SALES_SUM_MTH
TABLESPACE USERS
NOCACHE
LOGGING
NOCOMPRESS
NOPARALLEL
BUILD IMMEDIATE
USING INDEX
TABLESPACE USERS
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT GUS.UT_CAL_DT.INTVL, GUS.UT_CAL_DT.PERIOD, GUS.UT_CAL_DT.CYCLE,
GUS.SALES_DTL.ITEM_NBR , GUS.SALES_DTL.STORE_NBR, GUS.SALES_DTL.ORG_CD_MFR
, SUM("GUS"."SALES_DTL"."UNITS"), COUNT("GUS"."SALES_DTL"."UNITS")
, COUNT(*)
FROM GUS.UT_CAL_DT, GUS.SALES_DTL
WHERE
GUS.SALES_DTL.SHIP_DT = GUS.UT_CAL_DT.CAL_DT
AND (GUS.SALES_DTL.ORG_CD_MFR = 'MFR')
AND (GUS.UT_CAL_DT.CYCLE IN (2006, 2005))
GROUP BY
GUS.UT_CAL_DT.INTVL,
GUS.UT_CAL_DT.PERIOD,
GUS.UT_CAL_DT.CYCLE,
GUS.SALES_DTL.ITEM_NBR,
GUS.SALES_DTL.STORE_NBR,
GUS.SALES_DTL.ORG_CD_MFR;

The materialized view was created without any problems. I then issued the following query:

SELECT /*+ REWRITE_OR_ERROR */
s.STORE_NBR,
s.item_nbr,
t.cycle ,
t.intvl ,
SUM(s.units) Sales
FROM sales_dtl s, ut_cal_dt t
WHERE s.ship_dt = t.cal_dt AND
t.cycle in (2005,2006) AND
s.item_nbr = 24000 AND
s.store_nbr = 10314099 AND
s.org_cd_mfr = 'MFR'
GROUP BY s.item_nbr, s.STORE_NBR,t.cycle,t.intvl

It did not error out, so I thought it was rewriting the query, but the 10046 trace file (pasted in here) shows it did not. I am wondering what the reason could be.
tkprof:
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
NVL(SUM(C2),:"SYS_B_1")
FROM
(SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("MV_SALES_SUM_MTH")
FULL("MV_SALES_SUM_MTH") NO_PARALLEL_INDEX("MV_SALES_SUM_MTH") */
:"SYS_B_2" AS C1, CASE WHEN "MV_SALES_SUM_MTH"."ITEM_NBR"=:"SYS_B_3" AND
"MV_SALES_SUM_MTH"."STORE_NBR"=:"SYS_B_4" THEN :"SYS_B_5" ELSE :"SYS_B_6"
END AS C2 FROM "GUS"."MV_SALES_SUM_MTH" SAMPLE BLOCK (:"SYS_B_7" ,
:"SYS_B_8") SEED (:"SYS_B_9") "MV_SALES_SUM_MTH") SAMPLESUB


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 128 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 0.01 0.01 0 128 0 2

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 1344 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=64 pr=0 pw=0 time=4437 us)
9868 MAT_VIEW ACCESS SAMPLE MV_SALES_SUM_MTH (cr=64 pr=0 pw=0 time=29808 us)

********************************************************************************

SELECT /*+ REWRITE_OR_ERROR */
s.STORE_NBR,
s.item_nbr,
t.cycle ,
t.intvl ,
SUM(s.units) Sales
FROM sales_dtl s, ut_cal_dt t
WHERE s.ship_dt = t.cal_dt AND
t.cycle in (2005,2006) AND
s.item_nbr = 24000 AND
s.store_nbr = 10314099 AND
s.org_cd_mfr = 'MFR'
GROUP BY s.item_nbr, s.STORE_NBR,t.cycle,t.intvl

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.03 0 1 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 3.77 3.70 0 35541 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 3.77 3.74 0 35542 0 2

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 1344

Rows Row Source Operation
------- ---------------------------------------------------
2 HASH GROUP BY (cr=35541 pr=0 pw=0 time=3707561 us)
2 HASH JOIN (cr=35541 pr=0 pw=0 time=3706254 us)
4 TABLE ACCESS BY GLOBAL INDEX ROWID SALES_DTL PARTITION: ROW LOCATION ROW LOCATION (cr=35503 pr=0 pw=0 time=4836623 us)
2856779 INDEX RANGE SCAN SALES_DTL_NDX_01 (cr=11802 pr=0 pw=0 time=8648330 us)(object id 100507)
730 TABLE ACCESS FULL UT_CAL_DT (cr=38 pr=0 pw=0 time=2585 us)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00
***********************************************************
1. I am wondering why it did not fetch from the materialized view. (When I issued the query directly against the materialized view it took a second, whereas here it is more than 3 seconds.)

2. I collected statistics for the materialized view, so why do I see DYNAMIC SAMPLING in the tkprof output?

Thank you very much for taking time to look into this.

with best regards
CT VELU



Tom Kyte
December 19, 2006 - 10:56 am UTC

no example I can run
no look


you don't tell us very much - like what your dynamic sampling level is set to (perhaps 3 - then it samples when it "guesses" at some predicate)


use 10053 trace to see what was considered during the optimization. Could be it was rewritten but costed out to be as much or more expensive.
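Besides the 10053 trace, a shortcut worth knowing: DBMS_MVIEW.EXPLAIN_REWRITE reports in plain text why a given query was or was not rewritten against a given MV. A hedged sketch (the statement_id is arbitrary, the query is abbreviated from the post above, and ?/rdbms/admin/utlxrw.sql must be run first to create REWRITE_TABLE):

```sql
-- Sketch: ask the optimizer directly why rewrite failed.
begin
  dbms_mview.explain_rewrite(
    query        => 'select s.store_nbr, s.item_nbr, t.cycle, t.intvl,
                            sum(s.units)
                     from   sales_dtl s, ut_cal_dt t
                     where  s.ship_dt = t.cal_dt
                     and    t.cycle in (2005, 2006)
                     group by s.item_nbr, s.store_nbr, t.cycle, t.intvl',
    mv           => 'MV_SALES_SUM_MTH',
    statement_id => 'rw1');
end;
/
select message from rewrite_table where statement_id = 'rw1';
```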

Query rewrite-10053

CT VELU, December 19, 2006 - 12:05 pm UTC

Hi Tom,

I traced the session with event 10053. Since this is my first time, I went through the raw trace file and can see it considered calculating the cost for both the materialized view access path and the table access path. From what I see in the cost column, the cost for accessing the materialized view is around 2000 whereas for the table access it is more than 5000, yet it finally goes through the table scan.
I found the following in the 10053 trace file, in the materialized view section:
" CBQT: Validity checks failed for 5hvz5u8g5hqv1 "


Predicate Move-Around (PM)
**************************
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
CBQT: Validity checks failed for 5hvz5u8g5hqv1.
Registered qb: SEL$4C4BD064 0xf4a2ec98 (PARSER)
signature (): qb_name=SEL$4C4BD064 nbfros=1 flg=0
fro(0): flg=4 objn=134087 hint_alias="MV_SALES_SUM_MTH"@"SEL$4C4BD064"
Registered qb: SEL$55CB2DAB 0xf467bb00 (MV REWRITE SEL$1; SEL$1; MV_SALES_SUM_M\
TH)
signature (): qb_name=SEL$55CB2DAB nbfros=1 flg=0
fro(0): flg=0 objn=134087 hint_alias="MV_SALES_SUM_MTH"@"SEL$4C4BD064"
CVM: Considering view merge in query block SEL$55CB2DAB (#0)
CBQT: Validity checks failed for 5hvz5u8g5hqv1.

1) I don't know what this means?

2) Is there any utility like tkprof to read a 10053 trace file? This is the first time I am using one.

I sincerely apologize for not providing create table and insert statements. The base table I am using has more than 10 million rows and the materialized view has 1.5 million rows, so I am not sure how I can reproduce this situation as a test case.

Thanks again for your time.

with best
CT VELU





Tom Kyte
December 19, 2006 - 12:57 pm UTC

1) it was considered and rejected - so it met the letter of the "hint"

2) no

query rewrite 10053

CT VELU, December 19, 2006 - 12:27 pm UTC

For the above query rewrite follow up, optimizer_dynamic_sampling is set at 2
Thanks
CT

Does REFRESH FAST ON COMMIT really occurring ?

Iudith Mentzel, January 22, 2007 - 5:51 am UTC

Hello Tom,
I have an MV very similar to one of your examples in this
thread ( on Ora10gR2 )

create materialized view mv
build immediate
refresh fast
on commit
as
select x,
max( decode( y, 1, val ) ) y_1_max,
max( decode( y, 2, val ) ) y_2_max,
count(*) count_all
from t
group by x
/

Table T has the primary key (x,y).
The MV has about 1.5M rows and has a UNIQUE INDEX on x.

It does indeed perform FAST REFRESH ON COMMIT,
but the COMMIT itself takes a VERY, VERY LONG TIME
even if I update one single record in the base table:

Update T set val = 100 where x = 1 ;

Related to such an example, you gave a very nice explanation
( as always ! ) of why such an MV cannot fast refresh in
case of an update.
However, in the Oracle10gR2 DWH guide they state that this
is possible, in case the MV query has NO WHERE CLAUSE,
as is our case.
Is it possible however that Oracle does in fact "silently"
perform a COMPLETE refresh and not a FAST refresh,
which could indeed explain the long time of the COMMIT operation ?

Thanks a lot in advance & best regards,
Iudith
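One way to test Iudith's suspicion without guessing: the data dictionary records which refresh method was actually used last. A hedged sketch (if LAST_REFRESH_TYPE shows COMPLETE after the commit, the "fast" refresh was silently done as a complete one):

```sql
-- Sketch: check which refresh method Oracle actually used last.
select mview_name, refresh_method, last_refresh_type, last_refresh_date
from   user_mviews
where  mview_name = 'MV';
```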

importing the mview from one db to another

A Reader, January 29, 2007 - 10:25 pm UTC

Hi Tom,

I am facing a problem.
The scenario is: I have 2 databases, one Test and one Prod. Both have the same objects but different data in the tables. In both databases there is a mview called MV1 which, of course, has the same definition (objects used) but different records.
Now I want to copy Prod.MV1 into Test.MV1, so here is what I did:
I took an export of Test.MV1 (for backup purposes) and dropped the mview from Test. I then took an export of Prod.MV1 and imported it into Test; that worked fine and I can see the Prod records in the Test database.
But the problem is, I cannot refresh MV1 against the Test tables, and I cannot even drop this MV1 from Test and import the old Test.MV1 (which I exported earlier) back into the Test database.

How can we do all these operations?

Thanks a lot for your help.
Tom Kyte
January 31, 2007 - 1:10 pm UTC

you've had this problem on other pages, use one.

ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view

A reader, February 01, 2007 - 9:03 am UTC

Hi Tom,

Current version is:
___________________________________________________________
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
PL/SQL Release 9.2.0.3.0 - Production
CORE 9.2.0.3.0 Production
TNS for 32-bit Windows: Version 9.2.0.3.0 - Production
NLSRTL Version 9.2.0.3.0 - Production
___________________________________________________________

I am trying to create a materialized view with the following definition :

create materialized view test_mv1
build immediate
refresh complete
on commit
as
SELECT
tmm.MESSAGE,NVL(tmlt.LOCALIZED,tmm.MESSAGE) message_text, tmlt.LOCALE_ID,
CASE WHEN NVL(tmlt.IS_LONG,'N')='Y' THEN
tllt.LONG_TEXT
ELSE
TO_CLOB(tmm.MESSAGE)
END message_long_text,
CASE WHEN NVL(tmlt.IS_IMAGE,'N') = 'Y' THEN
tmi.IMAGE
ELSE
NULL
END message_image,
NVL(tmlt.IS_IMAGE,'N') IS_IMAGE, NVL(tmlt.IS_LONG,'N') IS_LONG
FROM tbl_my_local_text tmlt, tbl_my_message tmm,tbl_my_image tmi,tbl_my_long_text tllt,tbl_my_app tmb
WHERE tmlt.MESSAGE_ID(+)=tmm.MESSAGE_ID
AND tllt.LOCAL_TEXT_ID(+)=tmlt.LOCAL_TEXT_ID
AND tmi.LOCAL_TEXT_ID(+)=tmlt.LOCAL_TEXT_ID
AND tmm.BUNDLE_ID=tmb.bundle_id
AND tmb.bundle_name = 'ConfigUtility';

The relationship of the tables is as follows:

Parent Table: tbl_my_app

Child to tbl_my_app : tbl_my_message

Child to tbl_my_message : tbl_my_local_text

Child to tbl_my_local_text : tbl_my_image

Child to tbl_my_local_text : tbl_my_long_text

When I try to create the MV, I always get the following error:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view

I have no clue as to why this error is coming, as I am not doing a fast refresh.
I have the ON COMMIT REFRESH system privilege.

One point worth mentioning is that with the same set of tables and privileges, the MV gets created on version 10g.

Is this a bug or a limitation in version 9? My sole requirement
is to use ON COMMIT, as I want to do certain things upon the commit of a transaction. There is no ON COMMIT trigger in Oracle, and MVs are the only objects which react to the commit event in a transaction.

Tom, please respond to this query.
Tom Kyte
February 01, 2007 - 1:15 pm UTC

if the sole reason is to do something upon commit - why would you use such a complex, hard to execute query?

have you looked at dbms_mview.explain_mview?
http://docs.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_mview2.htm#95440

ORA-12054 cannot set the ON COMMIT refresh attribute for the materialized view

A reader, February 02, 2007 - 11:04 am UTC

Hi Tom,
Thanks for the response. I did run DBMS_MVIEW.EXPLAIN_MVIEW, passing my select query as the MV candidate, and got all the possible operations for this query. It says the "REFRESH_COMPLETE" capability_name has the value 'Y' in the possible column.

But still I receive the error on creating MV with complete refresh.

On the functional part, let me explain you my scenario.
We have multiple web-based applications which present the data in multiple languages. There is a single/central schema "SCH_MULTI_LANG" which holds the multi-language labels for all the applications. The apps are hosted on WebSphere, which does this multi-lingual caching on startup. But after WebSphere startup, any change happening in SCH_MULTI_LANG does not get reflected in the GUI, as the old data is cached and returned.

Now we are trying to remove this WebSphere dependency and do database-level caching instead. On the first reference an app makes to the db interface package, the multi-lingual data will get cached in the database, and any further calls will get the data from the cache. Any changes to the SCH_MULTI_LANG tables will (on transaction commit) send an alert to all the registered apps (a separate signal name for each app) saying "my data has changed and your cache has become dirty", implemented through a trigger. Hence I will be using DBMS_ALERT for notification.

Any registered application, upon receiving this signal, will empty its old data cache and refresh itself.

Now, the requirement is that this signal should be sent only when the transaction in the SCH_MULTI_LANG schema commits. A rollback will not make any difference to the cache held by the different apps. As Oracle, unlike Forms, does not have any ON COMMIT trigger, I went with the MV way.

Hope this makes my requirement clear. But as per your review, is there any other way of performing a particular task only on the commit in a transaction?

Your guidance awaited.
Tom Kyte
February 02, 2007 - 1:37 pm UTC

why such a complex mv - i'll ask again.

ORA-12054 cannot set the ON COMMIT refresh attribute for the materialized view

a reader, February 04, 2007 - 8:20 am UTC

Tom, thanks again for the response.

I used the MV way because I wanted to track changes to any of the base tables (otherwise I would have to track each base table individually, and I don't know how to go about that).
So, if in a transaction any base table's content gets changed and committed, the MV will get refreshed on commit. I have written an AFTER trigger on this MV to capture any such change in its data and send out the signal to all the interested sessions, indicating the data has changed.

Your response awaited.
Tom Kyte
February 04, 2007 - 8:45 am UTC

use a simpler MV - I don't understand why you made such a complex one, that is my point.

tricks are tricks, this is a "trick", I'm not a big fan of this approach...

There are new features in 10g that do this (database change notification).

AND - just a trigger on the tables that did the dbms_alert would be sufficient. You don't need to wait to the commit.
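A minimal sketch of the trigger-only approach Tom suggests (table and signal names are illustrative). The key property is that DBMS_ALERT.SIGNAL is itself transactional: the signal is only delivered when the transaction commits, and a rollback discards it, so no MV is needed just to defer notification to commit time:

```sql
-- One such trigger per base table; the signal is delivered only on commit.
create or replace trigger trg_alert_my_message
after insert or update or delete on tbl_my_message
begin
  dbms_alert.signal('MULTI_LANG_CHANGED', 'cache dirty');
end;
/
```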

Check the where condition and output please !!!

Star Nirav, February 04, 2007 - 1:43 pm UTC

If you check the SQL query and output you will find that I am querying for :

WHERE DATE_CODE = to_date('01/01/2007','dd/mm/yyyy') AND
ms_code = '303' AND
rss_code = 'PJ60513' AND
product_code = 'S24841' AND
type_code = 'INY'

and output comes like this :

Date        Ms_Code  Rss_Code  Product Code  Type Code  Qty  Amt  Tax Amt  Depth Qty  Pkd Date
01/01/2007  134      124       S15271        STRL       0    0    0        86025      30/01/2007 6:05:41 PM
==========
Oracle version : 8.1.7.0.0
Tom Kyte
February 05, 2007 - 7:31 am UTC

so, I do not have your data, I don't have your create table - I cannot even make sense of your column names here.

I just see bytes on the screen - give me a test case to reproduce with and we'll talk.

ORA-12054 cannot set the ON COMMIT refresh attribute for the materialized view

Vinayak, February 05, 2007 - 6:11 pm UTC

Tom, thanks again for your response.

I realized that I don't even need MVs. I could directly create statement-level triggers on the base tables and send the signal. I believe this is what you were trying to explain.

Still I have a few questions:

1. Since I have 5-6 base tables, do I need to have this statement-level trigger on each table, and if yes, will the alert name have to be the same?

2. Due to the very basic nature of dbms_alert, the waiting sessions will get blocked until the signaling session either commits or rolls back. But in my case I do not want this. My waiting session is also a continuously running application; I can't let it block. My requirement is that whenever it makes a database call, it should check for any received alert and act appropriately. I am worried about changes to the base tables and a database call from the application happening concurrently; in that case the app will hang. How should I resolve this?

Since we are on Oracle 9i, we can't use the 10g feature.

Your response awaited.
Tom Kyte
February 05, 2007 - 6:46 pm UTC

1) yes and yes
2) how would doing this on commit change anything?

ORA-12054 cannot set the ON COMMIT refresh attribute for the materialized view

Vinayak, February 05, 2007 - 7:09 pm UTC

Tom,
thanks again.

After reading your point 2 review, I thought again and concluded that I need to use an individual MV (with refresh complete on commit) for each table, and then write a statement-level AFTER trigger on each MV to send the alert signal. That way my receiving session/application will not get hung/blocked if changes happen to any of the base tables while the transaction has not yet been committed or rolled back, because the alert is now being sent from the MV (and only on transaction commit; nothing happens on rollback) and not from the base tables directly.

Please let me know if I understood this correctly.

Secondly, I would like to know why my initial MV, based on joins of the base tables, throws that error. What am I doing wrong in that query?


feedback required

Vinayak, February 06, 2007 - 5:39 pm UTC

Tom,

Any feedback on my understanding above? I also wanted to know the cause/resolution of the MV error.

Your response awaited..
Tom Kyte
February 07, 2007 - 12:55 pm UTC

i already did respond two up.


my understanding !!!

Vinayak, February 07, 2007 - 6:03 pm UTC

Tom,

For your point 2, "how would doing this on commit change anything", this is my understanding.

Now, I am using an MV on each individual table with the on-commit refresh clause. The application calls an Oracle package (its only interface to the database), and the very first thing I do in this package is check for any received signals. So, if the changes to the base table (not yet committed) and the database call from the application happen to occur at the very same time, the application will find that no alert signal has been sent (as the MVs haven't refreshed yet) and hence it won't refresh its data cache, assuming it's clean. Subsequently, the base table changes get done and a commit is issued. At that time the involved MVs will get refreshed and then send the signal through their triggers.

On the application front, if the end user reports that he is still seeing the old data, we would ask him to re-query the data (as we can tolerate a difference of about 10-15 seconds). This time the package will realize that a signal has come, saying that the cache is dirty. The old data cache will get deleted and the new one built (the data cache is held in PL/SQL tables of associative-array type, with my data's primary key as the arrays' INDEX BY key). The new data would then be sent back to the application front end.

Please let me know if you see any issues with this approach.

2) My second question: you said the initial MV was complex. It gave me the error
"ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view". But it is a normal MV based on joins, without aggregates. I am not sure why this error is coming. If you want, I will send the structure of the base tables involved.

Please help me solve this.
Tom Kyte
February 07, 2007 - 7:24 pm UTC

so, why would on commit change anything - if you look for a signal with nowait...??

2) because things change over time, I don't have your tables, indexes etc - hence cannot really tell you - but - do it "more simply", you don't need all of this

so, why would on commit change anything - if you look for a signal with nowait...??

Vinayak, February 08, 2007 - 9:03 am UTC

Tom,
Rather, let me paraphrase my understanding. I thank you for your patience in listening to me. You said:

"so, why would on commit change anything - if you look for a signal with nowait...?? "

I think I had a little deviation here. If the app's call to the database and the changes to the base tables happen concurrently, then I presume that due to nowait (timeout=0 parameter), the app will not get any new signal and will continue ahead.

The real problem comes when the app calls the database just a few seconds after the changes in the base table started (with a trigger on the base tables only). The changes are not yet committed, and in the other session through which the application has connected, the app calls the package and just hangs. Here is my test case:

Schema : Scott
opened two sessions and did the following:

Session 1 :

create table alert (a number primary key);


create or replace trigger after_iud_alert
after insert or update or delete on alert
begin
dbms_alert.signal('TESTING','my_message');
end;
/

insert into alert values(1);

insert into alert select level+1 from dual connect by level <= 11;

update alert set a=a*20 where mod(a,2)=0;

I didn't do a commit yet.....

Session 2: This is the waiting session (in my context, this is the application. For brevity, instead of database package, i have used an anonymous plsql block)

declare
message varchar2(100);
status number;
begin
dbms_alert.register('TESTING');
/*Register for the alert with nowait */
dbms_alert.waitone('TESTING',message, status, 0);
if status=0 then
dbms_output.put_line('Received: '||message);
ELSE
dbms_output.put_line('Received Nothing');
END IF;
end;
/
-----This hangs------

Go to Session 1 and issue:

commit;

Commit complete.

Come to session 2 & the o/p is:

Received: my_message

PL/SQL procedure successfully completed.

This hang is what I tried to avoid by using the ON COMMIT feature of the MV. I created the MV on the individual table and created the trigger on it. Now my application can query the database N times and successfully find out whether the signal has been received or not.

Hope Now I am able to make myself more clear.

I wanted to know if this approach is acceptable, given that the application is open to a large number of users and, at any given time, around 100-150 users are always online.

Also, could you suggest any other way of implementing this? (I thought we could use a global application context but don't know how.)

----------------------------------------------------------
On to the next part: although I am not going with the complex MV way, I just wanted to know the reason for that error. This is the first time I am using an MV, and at the very onset I got this stumbling error. Here are all my table scripts:

-------------------
CREATE TABLE tbl_my_app
(
BUNDLE_ID NUMBER NOT NULL,
BUNDLE_NAME VARCHAR2(20 BYTE) NOT NULL,
BUNDLE_DESCRIPTION VARCHAR2(255 BYTE) NOT NULL
)
/
CREATE UNIQUE INDEX UNQ1_tbl_my_app ON tbl_my_app
(BUNDLE_NAME)
NOLOGGING
/
ALTER TABLE tbl_my_app ADD (
CONSTRAINT PK_tbl_my_app
PRIMARY KEY
(BUNDLE_ID))
/

-----------------------

CREATE TABLE tbl_my_message
(
MESSAGE_ID NUMBER NOT NULL,
BUNDLE_ID NUMBER NOT NULL,
MESSAGE VARCHAR2(100 BYTE) NOT NULL
)
/

CREATE UNIQUE INDEX UNQ1_tbl_my_message ON tbl_my_message
(BUNDLE_ID, MESSAGE)
NOLOGGING
/
CREATE INDEX IDX_FK_BUNDLE_ID ON tbl_my_message
(BUNDLE_ID)
LOGGING
/
ALTER TABLE tbl_my_message ADD (
CONSTRAINT PK_tbl_my_message
PRIMARY KEY
(MESSAGE_ID))
/
ALTER TABLE tbl_my_message ADD (
FOREIGN KEY (BUNDLE_ID)
REFERENCES tbl_my_app (BUNDLE_ID))
/
----------------------------

CREATE TABLE tbl_my_local_text
(
LOCAL_TEXT_ID NUMBER NOT NULL,
MESSAGE_ID NUMBER NOT NULL,
LOCALE_ID NUMBER NOT NULL,
LOCALIZED VARCHAR2(4000 BYTE),
IS_LONG VARCHAR2(1 BYTE) NOT NULL,
IS_IMAGE CHAR(1 BYTE) DEFAULT 'N' NOT NULL
)
;


CREATE UNIQUE INDEX UNQ1_tbl_my_local_text ON tbl_my_local_text
(MESSAGE_ID, LOCALE_ID)
NOLOGGING
;


CREATE INDEX IDX_FK_MESSAGE_ID ON tbl_my_local_text
(MESSAGE_ID)
LOGGING
;


CREATE INDEX IDX_FK_LOCALE_ID ON tbl_my_local_text
(LOCALE_ID)
LOGGING
;

ALTER TABLE tbl_my_local_text ADD (
CONSTRAINT PK_tbl_my_local_text
PRIMARY KEY
(LOCAL_TEXT_ID))
;

ALTER TABLE tbl_my_local_text ADD (
FOREIGN KEY (MESSAGE_ID)
REFERENCES tbl_my_message (MESSAGE_ID))
;
-------------

CREATE TABLE tbl_my_image
(
LOCAL_IMAGE_ID NUMBER NOT NULL,
LOCAL_TEXT_ID NUMBER NOT NULL,
IMAGE BLOB NOT NULL
)
;

CREATE UNIQUE INDEX UNQ1_tbl_my_image ON tbl_my_image
(LOCAL_TEXT_ID)
NOLOGGING
;

ALTER TABLE tbl_my_image ADD (
CONSTRAINT PK_tbl_my_image
PRIMARY KEY
(LOCAL_IMAGE_ID))
;

ALTER TABLE tbl_my_image ADD (
FOREIGN KEY (LOCAL_TEXT_ID)
REFERENCES tbl_my_local_text (LOCAL_TEXT_ID));

-----------------------

CREATE TABLE tbl_my_long_text
(
LOCAL_LONG_TEXT_ID NUMBER NOT NULL,
LOCAL_TEXT_ID NUMBER NOT NULL,
LONG_TEXT CLOB NOT NULL
)
;

CREATE UNIQUE INDEX UNQ1_tbl_my_long_text ON tbl_my_long_text
(LOCAL_TEXT_ID)
NOLOGGING
;


ALTER TABLE tbl_my_long_text ADD (
CONSTRAINT PK_tbl_my_long_text
PRIMARY KEY
(LOCAL_LONG_TEXT_ID))
;

ALTER TABLE tbl_my_long_text ADD (
FOREIGN KEY (LOCAL_TEXT_ID)
REFERENCES tbl_my_local_text (LOCAL_TEXT_ID));

here goes my MV script

create materialized view mv_text
build immediate
refresh complete
on commit
as
SELECT
tmm.MESSAGE,NVL(tmlt.LOCALIZED,tmm.MESSAGE) message_text, tmlt.LOCALE_ID,
CASE WHEN NVL(tmlt.IS_LONG,'N')='Y' THEN
tllt.LONG_TEXT
ELSE
TO_CLOB(tmm.MESSAGE)
END message_long_text,
CASE WHEN NVL(tmlt.IS_IMAGE,'N') = 'Y' THEN
tmi.IMAGE
ELSE
NULL
END message_image,
NVL(tmlt.IS_IMAGE,'N') IS_IMAGE, NVL(tmlt.IS_LONG,'N') IS_LONG
FROM tbl_my_local_text tmlt, tbl_my_message tmm,tbl_my_image tmi,tbl_my_long_text tllt,tbl_my_app tmb
WHERE tmlt.MESSAGE_ID(+)=tmm.MESSAGE_ID
AND tllt.LOCAL_TEXT_ID(+)=tmlt.LOCAL_TEXT_ID
AND tmi.LOCAL_TEXT_ID(+)=tmlt.LOCAL_TEXT_ID
AND tmm.BUNDLE_ID=tmb.bundle_id
AND tmb.bundle_name = 'ConfigUtility';

This does not work in version 9.2.0.3 or 9.2.0.6
but it does work on 10.2.0.1.0

What is the difference Oracle has implemented between the two versions?

Tom Kyte
February 08, 2007 - 11:16 am UTC

the application would register for the alert once (that is what is getting blocked), why do it every time?

Sorry, I am confused....

Vinayak, February 08, 2007 - 6:07 pm UTC

Tom,

I am really confused... the Oracle docs say:

"SIGNAL Procedure
This procedure signals an alert. The effect of the SIGNAL call only occurs when the transaction in which it is made commits. If the transaction rolls back, SIGNAL has no effect.

All sessions that have registered interest in this alert are notified. If the interested sessions are currently waiting, they are awakened. If the interested sessions are not currently waiting, they are notified the next time they do a wait call."

The bold line means that any session that wants to receive the signal needs to register. The application is accessed by multiple users, and each user gets connected to the database as a separate session (the database is in dedicated mode). So every time a new application session connects to the database, the database package is called, which registers for the alert and then waits for the signal.

But you say that the application needs to register just once. How can this be achieved? Can you please explain with an example, else I am lost...


Tom Kyte
February 08, 2007 - 9:15 pm UTC

just have your application, right after it connects to the database, register - ONCE
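A sketch of the register-once pattern Tom describes (the alert name is illustrative): REGISTER is called a single time right after the session connects; each subsequent database call then polls with WAITONE and a zero timeout, which returns immediately whether or not a signal is pending:

```sql
-- Once, immediately after connecting:
begin
  dbms_alert.register('MULTI_LANG_CHANGED');
end;
/

-- On every subsequent database call:
declare
  l_message varchar2(1800);
  l_status  integer;
begin
  dbms_alert.waitone('MULTI_LANG_CHANGED', l_message, l_status, timeout => 0);
  if l_status = 0 then
    null;  -- signal received: flush and rebuild the session's cache here
  end if;  -- l_status = 1 means timeout: no signal, cache assumed clean
end;
/
```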

A reader, February 15, 2007 - 11:13 am UTC

I've got a materialized view with creation code like

"CREATE MATERIALIZED VIEW ei_ert_summ
pctfree 10 TABLESPACE GRPSIACSIIDATA
BUILD IMMEDIATE
REFRESH FAST
ON COMMIT
AS
/* Formatted on 13/02/2007 09:20 (Formatter Plus v4.8.7) */
SELECT
ert.ROWID ert_rowid, ei.ROWID ei_rowid, er.ROWID er_rowid, ei_er_id
, ei_eit_code, ei_parr_id, ei_start_date, ei_end_date, ert_rt_code
, er_start_date, er_end_date, er_number, ei_obe_id
FROM
entitlement_interests ei
, entitlement_register_types ert
, entitlement_register er
WHERE ert_er_id = ei_er_id
AND er_id = ert_er_id


drop materialized view log on entitlement_register;

create materialized view log on entitlement_register with rowid,sequence (er_start_date,er_end_date,er_number) including new values;

drop materialized view log on entitlement_register_types;

create materialized view log on entitlement_register_types with rowid,sequence ( ert_rt_code) including new values;

drop materialized view log on entitlement_interests;

create materialized view log on entitlement_interests with rowid ,sequence(ei_er_id, ei_eit_code,ei_parr_id,ei_start_date, ei_end_date) including new values;

"

Now, my question: I notice the view gets refreshed during the COMMIT phase of an updating transaction.

1) Can this view become INVALID, and if yes, under what circumstances?

2) The critical concern is: if it becomes INVALID, will the COMMIT fail with an Oracle error? That is what I'd want, because I'd not want the view to have STALE data; we would need to get it back to VALID again.

3) Is there a way to make the view break, for testing purposes?

Is it possible to refresh a materialized view without committing?

Andriy Solonchuk, February 27, 2007 - 7:42 am UTC

I see that if I call dbms_mview.refresh('mview'), there is an implicit commit during the call.

1. I have found no information about this commit in the documentation.

2. Is there any way to refresh a materialized view without this call committing the current transaction?
As a result, I want uncommitted data to be applied to the view (with the ability to roll back, of course).

hope for your answer.

Tom Kyte
February 27, 2007 - 10:55 am UTC

I don't know what you mean by "as result i want to have uncommitted data to be applied to view" and how that would tie into the fact that dbms_mview refresh does commit.

Even if it did not, there would be no possible way to have uncommitted data applied to the materialized view - that doesn't even make sense.

Why does Oracle commit before refreshing the materialized view?

Andriy Solonchuk, February 28, 2007 - 7:42 am UTC

SQL> drop materialized view MV_TEST;

Materialized view dropped

SQL> drop materialized view log on TEST;

Materialized view log dropped

SQL> drop table TEST ;

Table dropped

SQL> create table TEST (
  2      test_id number(9)
  3    , value varchar2(32)
  4    , constraint PK_TEST primary key (test_id)
  5  );

Table created

SQL> create materialized view log on TEST;

Materialized view log created

SQL> create materialized view MV_TEST
  2  refresh fast on demand
  3  as
  4  select test_id
  5       , value
  6  from TEST;

Materialized view created

SQL> select count(*) from TEST;

  COUNT(*)
----------
         0

SQL> insert into TEST (test_id, value) values (1, 'One');

1 row inserted

SQL> exec dbms_mview.refresh('MV_TEST');

PL/SQL procedure successfully completed

SQL> rollback;

Rollback complete

SQL> select count(*) from TEST;

  COUNT(*)
----------
        1


Question:

1. Why does "exec dbms_mview.refresh('MV_TEST');" commit?

2.
>Even if it did not, there would be no possible way to have
>uncommitted data applied to the materialized view - that
>doesn't even make sense.

There may be a sense to it. Suppose we have some complex calculation and we make some small update or insert that touches that calculation.
We want to review the effect of that change on the costly (heavy) select that is encapsulated in the materialized view, and we want the ability to roll back or correct the change.
Is there any way to do so or not?

Thanks for attention.
Tom Kyte
February 28, 2007 - 3:47 pm UTC

because it is considered to be just like DDL.



And that last paragraph doesn't correspond to what you said. You said you wanted to read uncommitted data:

... as result I want to have uncommitted data to be applied to view. ....

you cannot have uncommitted data applied to the MV (only committed data)

but with the rephrasing I see what you are saying - however, I disagree utterly with the concept. You either

a) want the defining query to be executed and synchronized
b) or not

it commits, it has committed in the past and it'll continue to do so into the future as far as I know. It is the way it works.

I don't know what rolling back would buy you here - unless you completely changed the defining query (drop and recreate the materialized view) - to what end...

Why is it considered to be just like DDL?

Andriy Solonchuk, March 06, 2007 - 2:52 pm UTC

SQL> select OBJECT_TYPE, OBJECT_NAME from user_objects where object_name like '%TEST%' and object_type <> 'INDEX'
  2  /

OBJECT_TYPE         OBJECT_NAME
------------------- -----------------
TABLE               MLOG$_TEST
TABLE               MV_TEST
MATERIALIZED VIEW   MV_TEST
TABLE               RUPD$_TEST
TABLE               TEST

SQL> select * from user_internal_triggers
  2  /

TABLE_NAME                     INTERNAL_TRIGGER_TYPE
------------------------------ ---------------------
TEST                           MVIEW LOG


>>because it is considered to be just like DDL.
Why is it considered to be just like DDL?

As I understand it, the materialized view log looks like a regular table, and the information is moved into the log by an internal trigger.

So the whole operation looks like DML, not DDL.

Please describe (explain) the sense of considering it like DDL.

Thanks for attention.

Tom Kyte
March 06, 2007 - 4:11 pm UTC

why is it considered ddl? because they made a decision at design time to make it like ddl.

the act of refreshing is considered ddl. period. It is, it commits, changing what you or I think of it will not change that fact.

refresh with analytics

A reader, May 22, 2007 - 9:10 pm UTC

Tom, this is a "how would you do it" question. We have a kind of star schema DB (let's say regular time-customer-product dimensions) which is used for a lot of reports. The majority of them are presented by MVs for performance reasons. Some of these reports use analytic function calls, and this prevents fast refresh. Full refresh is slow. Most of the data is newly entered daily records, but sometimes historical data is also changed. We are trying to find a solution that would allow us to run these reports effectively. It doesn't have to be an MV - just point us in the right direction. Thank you very much for your help!
Tom Kyte
May 23, 2007 - 6:49 am UTC

sounds all good to me so far - the reports are run "effectively" are they not? (they use the MV)

If it were just newly entered records and you could use partitioning and the partitioning was setup in such a way the analytic windows never overlapped a partition - we might be able to find faster ways to refresh the MV, but the updates to old information will of course obviate that.

How slow is slow and is it faster than fast enough.

moving MVs across platforms

A reader, July 17, 2007 - 5:34 pm UTC

We are in the process of migrating our reporting database running on Solaris to x86_64 platform running RHEL4.

We refresh our reporting database using MVIEWS (fast refresh) every hour.

As our production tables are huge, we didn't want to do a complete refresh on the new database but rather start from where we left off on the old database. I tried several different options but none worked for what I wanted.

1) Used transportable tablespaces; this didn't even move the MVIEWS when I plugged the tablespaces into the new database (a limitation of TTS).
2) Used schema export/import into the new database; this created the MVIEWS in the new database, but when I tried to fast refresh, it said it cannot do a FAST refresh and only allowed a complete refresh, which is the same as recreating all the MVIEWS on the new database and starting the refresh from scratch, which I want to avoid.

Is it possible to move the database across platforms and resume their MVIEW refresh from where we left off on the old database?

Thanks
Tom Kyte
July 17, 2007 - 7:04 pm UTC

Does oracle id need insert/update privilege on source mview logs

Megala, January 30, 2008 - 10:53 pm UTC

Tom,

I am setting up readonly user id for target database (read-only mview replication) to pull data from source. All master tables are owned by different schema on source.

Do I need to grant UPDATE/DELETE privileges on the materialized view logs to the read-only Oracle id (on the source) which is used to pull from the target side?

As far as I tested on 9i (both source and target), I am able to fast refresh from the target with only SELECT granted on the MVIEW logs to the Oracle user on the source side.
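A minimal sketch of the source-side grants matching what was observed above (user and table names are made up; this assumes, as the poster found on 9i, that SELECT on the master table and its log is enough for a fast refresh pulled over a database link):

```sql
-- On the source database, as the master table owner:
grant select on big_master        to readonly_puller;  -- the master table
grant select on mlog$_big_master  to readonly_puller;  -- its materialized view log

-- On the target database, the MV is defined over a db link
-- connecting as READONLY_PULLER, and refreshed with:
--   exec dbms_mview.refresh('BIG_MASTER_MV', 'F');
```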


wrong number or types of arguments in call to 'VERIFY_LOG'

Raja, June 18, 2008 - 11:37 am UTC

Hi Tom,

Database version: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit

When we tried to refresh a materialized view as follows we are getting the following errors:

BEGIN DBMS_MVIEW.REFRESH('DATA_MVIEW','C'); END;

*
ERROR at line 1:
ORA-06550: line 1, column 9:
PLS-00306: wrong number or types of arguments in call to 'VERIFY_LOG'
ORA-06550: line 1, column 9:
PL/SQL: Statement ignored
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2254
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2460
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2429
ORA-06512: at line 1

What can we do about this? Where is this VERIFY_LOG? Any help will be appreciated.

Thanks,
Tom Kyte
June 18, 2008 - 3:18 pm UTC

please utilize support

Alexander the ok, July 16, 2008 - 4:37 pm UTC

Hi Tom,

Can you please help me analyze a problem I had regarding mv refreshes.

I think I may have been experiencing this

< http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4541191739042#5516548214755 >

However, the mv is not refresh on commit, it is force:

START WITH TO_DATE('16-JUL-2008 03:00 AM','DD-MON-YYYY HH12:MI PM')
NEXT TRUNC(sysdate+1) + 3/24


So the problem was that while this thing was refreshing (the refresh takes about 3 hours), users were unable to do stuff. Stuff in this case was inserts and updates to a specific table (which, strangely, was not referenced in the MV).

I ran a trace on a session experiencing the problem, it contained

WAIT #94: nam='row cache lock' ela= 2929698 cache id=19 mode=0 request=3 obj#=-1 tim=43980119334844
WAIT #94: nam='row cache lock' ela= 2929698 cache id=19 mode=0 request=3 obj#=-1 tim=43980122286140
WAIT #94: nam='row cache lock' ela= 2929696 cache id=19 mode=0 request=3 obj#=-1 tim=43980125215887
WAIT #94: nam='row cache lock' ela= 2929700 cache id=19 mode=0 request=3 obj#=-1 tim=43980128145655
*** 2008-07-16 13:58:50.903
WAIT #94: nam='row cache lock' ela= 2929703 cache id=19 mode=0 request=3 obj#=-1 tim=43980131075438
WAIT #94: nam='row cache lock' ela= 2929702 cache id=19 mode=0 request=3 obj#=-1 tim=43980134005258
WAIT #94: nam='row cache lock' ela= 2929698 cache id=19 mode=0 request=3 obj#=-1 tim=43980136935021
WAIT #94: nam='row cache lock' ela= 2929700 cache id=19 mode=0 request=3 obj#=-1 tim=43980139864788
*** 2008-07-16 13:59:02.903


AWR for that timeframe also showed this wait activity;

 Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:      5274 16-Jul-08 13:00:47        59      20.1
  End Snap:      5275 16-Jul-08 14:00:00        52       8.7
   Elapsed:               59.21 (mins)
   DB Time:            1,055.68 (mins)


Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
row cache lock                       15,776      46,046   2919   72.7 Concurrenc
enq: TX - row lock contention         5,847      17,054   2917   26.9 Applicatio
CPU time                                            193           0.3
inactive session                         60          59    977    0.1      Other
ksdxexeotherwait                  1,613,451           2      0    0.0      Other

Avg
                                             %Time  Total Wait    wait     Waits
Event                                 Waits  -outs    Time (s)    (ms)      /txn
---------------------------- -------------- ------ ----------- ------- ---------
row cache lock                       15,776  100.0      46,046    2919      19.6
enq: TX - row lock contentio          5,847   99.9      17,054    2917       7.3
inactive session                         60  100.0          59     977       0.1
ksdxexeotherwait                  1,613,451  100.0           2       0   2,006.8
db file sequential read                 653     .0           1       2       0.8
log file parallel write               1,210     .0           1       1       1.5
control file parallel write           1,184     .0           1       1       1.5
log file sync                           767     .0           1       1       1.0
SQL*Net more data from clien            293     .0           1       3       0.4
db file parallel write                  170     .0           0       2       0.2
os thread startup                         3     .0           0      25       0.0
control file sequential read          3,219     .0           0       0       4.0
latch free                                1     .0           0      15       0.0
SQL*Net message to client            16,862     .0           0       0      21.0
db file scattered read                    2     .0           0       5       0.0
SQL*Net break/reset to clien              4     .0           0       2       0.0
SQL*Net more data to client             317     .0           0       0       0.4
enq: RO - fast object reuse               1     .0           0       2       0.0
ksdxexeother                            993  100.0           0       0       1.2
rdbms ipc reply                          12     .0           0       0       0.0
LGWR wait for redo copy                  29     .0           0       0       0.0
direct path read temp                     4     .0           0       0       0.0
reliable message                          1     .0           0       0       0.0
direct path read                          6     .0           0       0       0.0
direct path write                        13     .0           0       0       0.0
direct path write temp                   12     .0           0       0       0.0
SQL*Net message from client          16,854     .0      55,578    3298      21.0
Streams AQ: qmn slave idle w            134    5.2       3,482   25985       0.2
Streams AQ: qmn coordinator             265   52.1       3,482   13139       0.3
virtual circuit status                  115  100.0       3,450   29998       0.1
Streams AQ: waiting for time             39   61.5       3,223   82645       0.0
class slave wait                          3     .0           0       0       0.0
          -------------------------------------------------------------


The table I mentioned also appeared here:

Segments by Row Lock Waits           DB/Inst: OCP29P/OCP29P  Snaps: 5274-5275
-> % of Capture shows % of row lock waits for each top segment compared
-> with total row lock waits for all segments captured by the Snapshot

                                                                     Row
           Tablespace                      Subobject  Obj.          Lock    % of
Owner         Name    Object Name            Name     Type         Waits Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
PSSAPL     PSS_USER   LEGALNOTICE                     TABLE            8   88.89
PSSAPL     PSS_USER   PK_ORGUNIT_ORGID                INDEX            1   11.11
          -------------------------------------------------------------


Can you explain what you think happened, and how I can avoid this? Can an MV refresh like this cause data dictionary contention, or contention on an underlying table (in case I am wrong about it not being referenced)? Thanks in advance.


Alexander, August 05, 2008 - 10:07 am UTC

Tom, were you able to look at the above question? I put that in when you were out, so you probably had a billion followups waiting when you returned ;)
Tom Kyte
August 05, 2008 - 10:18 am UTC

SELECT cache#, type, parameter
FROM v$rowcache
WHERE cache# = 19;

what does that return on your system.

Alexander, August 05, 2008 - 11:43 am UTC

  1  SELECT cache#, type, parameter
  2                  FROM v$rowcache
  3*                 WHERE cache# = 19
SQL> /

    CACHE# TYPE        PARAMETER
---------- ----------- --------------------------------
        19 PARENT      dc_table_scns
        19 SUBORDINATE dc_partition_scns

Tom Kyte
August 05, 2008 - 1:11 pm UTC

so, why do you think contention on that was related to the MV refresh?

Alexander, August 05, 2008 - 3:20 pm UTC

Well, I have no real hard evidence, but the session refreshing the MV was holding blocking locks; when I dropped the MV and killed the session, users were able to perform their actions. And the problem hasn't happened since. Also, the refresh would take about 3 hours. I got the impression it was "touching stuff" the other sessions were waiting on.

Hardly scientific, I know. But it was the only thing hogging resources.
Tom Kyte
August 05, 2008 - 6:38 pm UTC

this was a block on the dictionary cache, not on a table or anything.

Alexander, August 06, 2008 - 10:10 am UTC

Right, my theory was that while this mv refresh was running, it was constantly updating information in the data dictionary, so when users tried to do something, they were waiting for those latches to be released.

I don't know if mv refreshes do anything like that though, what do you think?
Tom Kyte
August 06, 2008 - 12:36 pm UTC

it should not prevent access like that, no.

BUILD DEFERRED behaviour in creating MV (Version 10.2.0.1 )

Megala, August 19, 2008 - 12:15 am UTC

Tom:

When I build the following MV with the BUILD DEFERRED option, I observe that the whole process takes very long (3 hrs and 24 minutes) and did not even complete, ending with the "rollback segment" error given below.

I thought the purpose of 'build deferred' mode was not to execute the underlying query against the source table and populate the data, but it seems that it does here. Building another MV took only 15 minutes to create the MV without data. Any reason why this one takes so much time?

00:19:36 OSDS@ECX01SRW > create materialized view WORD_NOTE_MV nologging tablespace ECX01SRW_OSDS_DATA_T01 BUILD DEFERRED
refresh complete with ROWID
as select * from OSDSDBA.WORD_NOTE@OSDS
/
refresh complete with ROWID as select * from OSDSDBA.WORD_NOTE@OSDS
*
ERROR at line 2:
ORA-01562: failed to extend rollback segment number 1
ORA-01650: unable to extend rollback segment RBIG by 2048 in tablespace RBIG
ORA-06512: at "SYS.DBMS_SNAPSHOT_UTL", line 183
ORA-06512: at "SYS.DBMS_SNAPSHOT_UTL", line 429
ORA-06512: at line 1


Elapsed: 03:24:13.33


Question :

Are you aware of any change in behaviour of creating MV using BUILD DEFERRED mode and performance slowness in 10.2.0.1 ?

Source database : 9.2.0
Target database : 10.2.0.1


Thanks for any suggestions.

Tom Kyte
August 20, 2008 - 10:20 am UTC

trace it, I don't see that at all.

trace it
tkprof it
see what sql was executed and what work was done
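The trace/tkprof cycle suggested above can be sketched like this (the tracefile identifier and file names are illustrative):

```sql
-- In the session that will run the BUILD DEFERRED create:
alter session set tracefile_identifier = 'mv_build';
alter session set sql_trace = true;

create materialized view word_note_mv nologging
  build deferred
  refresh complete with rowid
  as select * from osdsdba.word_note@osds;

alter session set sql_trace = false;

-- Then, on the server, format the trace from user_dump_dest:
--   tkprof <udump>/<sid>_ora_<pid>_mv_build.trc mv_build.prf sys=no
```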

BUILD DEFERRED MODE in MV creation

Megala, August 20, 2008 - 9:35 pm UTC

I ran sql_trace and reviewed the tkprof report; it seems that BUILD DEFERRED is working as expected (as attached below).

Question: Does Oracle check the MV log table rows when we create an MV in deferred/immediate build mode with complete refresh?

The only difference between then and now was that the source MV log table had several million records; it took very long and got the rollback error when using BUILD DEFERRED mode (complete refresh was explicitly mentioned in the MV creation command). But now, after the MV log table was dropped/recreated, regardless of build mode (BUILD DEFERRED/BUILD IMMEDIATE), MV creation succeeds with reasonable execution time.



********************************************************************************

create materialized view WORD_NOTE_MV_DEFER nologging tablespace ECX01SRW_OSDS_DATA_T01 BUILD DEFERRED
refresh complete with ROWID as select * from OSDSDBA.WORD_NOTE@OSDS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.04 1.20 0 0 1 0
Execute 1 0.15 1.11 0 10 42 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.19 2.32 0 10 43 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 32
********************************************************************************

CREATE TABLE "OSDS"."WORD_NOTE_MV_DEFER" (M_ROW$$ default
'00000000.0000.0000', "WD_NREC","WD_RIDER_NUM","WD_SEQ_NUM","WD_DATE")
nologging tablespace ECX01SRW_OSDS_DATA_T01 AS SELECT * FROM (SELECT
ROWIDTOCHAR("WORD_NOTE"."ROWID") "ROWID","WORD_NOTE"."WD_NREC" "WD_NREC",
"WORD_NOTE"."WD_RIDER_NUM" "WD_RIDER_NUM","WORD_NOTE"."WD_SEQ_NUM"
"WD_SEQ_NUM","WORD_NOTE"."WD_DATE" "WD_DATE" FROM
"OSDSDBA"."WORD_NOTE"@OSDS "WORD_NOTE") WHERE 1 = NULL


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.05 0 0 0 0
Execute 1 0.04 0.01 0 1 12 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.04 0.07 0 1 12 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 32 (recursive depth: 1)
********************************************************************************

********************************************************************************

SELECT "WORD_NOTE"."WD_NREC" "WD_NREC","WORD_NOTE"."WD_RIDER_NUM"
"WD_RIDER_NUM","WORD_NOTE"."WD_SEQ_NUM" "WD_SEQ_NUM","WORD_NOTE"."WD_DATE"
"WD_DATE"
FROM
"OSDSDBA"."WORD_NOTE"@OSDS "WORD_NOTE"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 0 0

********************************************************************************

INSERT INTO "OSDS"."WORD_NOTE_MV_DEFER"(M_ROW$$,"WD_NREC","WD_RIDER_NUM",
"WD_SEQ_NUM","WD_DATE") SELECT "WORD_NOTE"."ROWID","WORD_NOTE"."WD_NREC",
"WORD_NOTE"."WD_RIDER_NUM","WORD_NOTE"."WD_SEQ_NUM","WORD_NOTE"."WD_DATE"
FROM "OSDSDBA"."WORD_NOTE"@OSDS "WORD_NOTE"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.06 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.01 0.06 0 0 0 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 32 (recursive depth: 1)
********************************************************************************
Tom Kyte
August 21, 2008 - 8:33 am UTC

everything done is in the trace file, you can answer

"Does Oracle check the MV log table rows when we create MV with
deferred/immediate build mode with complete refresh ?"

right from the traces you have....

materialized view

DM, September 23, 2008 - 9:46 am UTC

I was going thr the list of replies u have given ..they r excellent and very good......
In the line of that ...I have view say
create materialized view SPEC_MVW
refresh force on demand
as
select sp1.*
from spec_mv1 sp1,
spec_mv2 sp2,
spec_nvw spv
Where sp2.spec_id = sp1.spec_id
and spv.spec_id = sp1.spec_id;

now i drop 1 of the col in 1 of the underlying table sp1...
that col is still appearing in the MV query result.....hwo shall i refresh my MV
any thoughts..

Thanx in Advance
Tom Kyte
September 24, 2008 - 3:49 pm UTC

... I was going thr the list of replies u have given ...

wow, your keyboard is truly broken - you should look into that. It is dropping many important characters like vowels.


If you drop a column of the underlying tables, you will need to rebuild the materialized view.

Need for Materialized View Log

Ravikumar, October 13, 2008 - 3:40 pm UTC

Dear Tom,

I went through most of your questions on materialized view logs; I have one question to ask you. I have a materialized view constructed by joining 2 tables with rollup data. This view works fine with and without a materialized view log.
If it works without the MV log, why do we need one?
Appreciate your reply.
Thanks
Ravikumar.
Tom Kyte
October 14, 2008 - 5:53 pm UTC

you use materialized view logs to enable incremental (also known as 'fast') refreshes

if you are not doing incremental refreshes, you have no need for the materialized view log.
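A small sketch of the distinction (table and MV names are invented): the log is only ever read by FAST refreshes.

```sql
create table sales ( id number primary key, amt number );

-- Needed only for incremental ("fast") refresh; defaults to a primary key log:
create materialized view log on sales;

create materialized view sales_mv
  refresh fast on demand     -- a fast refresh reads MLOG$_SALES
  as select id, amt from sales;

-- Had SALES_MV been REFRESH COMPLETE, the log would never be read
-- and could be dropped without affecting the refresh.
```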

Chetan, November 21, 2008 - 3:57 pm UTC

Hi,

I am using oracle 8.1.7.4 database version.

In our data warehouse environment we refresh a materialized view daily with the complete option (I can't choose fast refresh).

The tables have billions of records. It takes a huge amount of time to refresh and also generates a lot of redo.

I tried: execute dbms_mview.refresh('MVIEW','C',atomic_refresh=>FALSE);

but it still generates redo. I read on the AskTom forums that in Oracle 8.1.7.4 a complete refresh works as delete + insert, not truncate + insert.

My redo log is 18Gb and it is also 98% full during the refresh.

I tried building the view with the nologging option, but per Metalink that only avoids redo during the build.

Can anyone suggest how to avoid the redo, or minimize its use?

It's urgent, please.


Tom Kyte
November 24, 2008 - 4:14 pm UTC

do you have indexes on said materialized views.

if so, indexes will generate redo during the insert /*+ APPEND */

you could drop indexes - refresh - build indexes

but you might find that to be a lot slower than what you are doing; you'll need to full scan the table each time you build an index - rather than maintaining it efficiently as you are now.


redolog does not get "full".

Chetan, November 25, 2008 - 12:08 pm UTC

Hi Tom
I checked: in Oracle 8.1.7.4 it works with the truncate + insert formula.

The index is there, but I tried without it as well; it generates the same amount of redo.

Could you please guide me on refreshing without generating so much redo.
Tom Kyte
November 25, 2008 - 12:42 pm UTC

then the table is not in "nologging",

if you can cut and paste

a) the query proving the materialized view is nologging AND no indexes exist
b) a measure of the redo your session has generated
c) a refresh
d) a measure of now how much your session has generated

we'll take a look - until then, I believe you missed the "nologging" bit.
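The four steps above can be sketched as follows (MVIEW stands in for the real materialized view name):

```sql
-- a) prove the MV container table is NOLOGGING and carries no indexes
select table_name, logging from user_tables  where table_name = 'MVIEW';
select index_name          from user_indexes where table_name = 'MVIEW';

-- b) measure redo generated so far by this session
select vs.value as redo_bytes
  from v$mystat vs, v$statname vn
 where vs.statistic# = vn.statistic#
   and vn.name = 'redo size';

-- c) refresh
exec dbms_mview.refresh('MVIEW', 'C', atomic_refresh => false);

-- d) re-run the query from step b; the difference is the redo the refresh generated
```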

Locks on mview refresh

A Reader, January 31, 2009 - 12:02 am UTC


Hi Tom,
Thanks a lot for sharing your valuable ideas on this site.

Could you please clarify these?

1. When I have an "on commit" materialized view, and suppose I have updated a row in the base table and committed, then during this commit how many rows in the base table and how many rows in the materialized view will be locked?

2. How can I find the refresh duration of an "on commit" materialized view?
3. If I do this update and commit in a loop (20 times), will there be any issue (deadlock?), assuming performance is not a concern?

4. Kindly let me know if there is any article on the locking behaviour of "on commit" mviews.

Thank you


Tom Kyte
January 31, 2009 - 8:54 am UTC

1) the modified ones will be.

2) you would have to time the commit.

3) you are describing a single user event, no deadlocks.

4) basically, your commits will tend to serialize. Whether this is an issue depends on how long your materialized view takes to refresh itself. Remember - the commits will serialize - not entire transactions, just the very end of them.

Locks in db due to on commit mviews

Chinni, January 31, 2009 - 1:02 pm UTC

Hi Tom,
Thanks for the reply!

Could you please let me know how to measure the on-commit mview refresh time? I mean, which table should I look at?

And if there are two sessions blocking each other, will Oracle kill one of the sessions immediately, or will it wait? In my production database I saw two sessions blocking each other for hours .. or did I note the timings wrong?

thank you for your time
Tom Kyte
January 31, 2009 - 2:55 pm UTC

ask what time it is
commit;
ask what time it is again
print out difference
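In SQL*Plus those four steps collapse to something like this (BASE_TABLE is a placeholder for a table underlying the on-commit MV):

```sql
set timing on
update base_table set some_col = some_col where id = 1;
commit;   -- the elapsed time reported here includes the on-commit refresh
set timing off
```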


...
And if there are two sessions blocking each other, will Oracle kill one of the sessions immediately, or will it wait? In my production database I saw two sessions blocking each other for hours .. or did I note the timings wrong?
....

no you didn't, you saw something, but you did not see two sessions deadlocked for two hours, it lasts at most 3 seconds in a single instance and distributed_lock_timeout in a distributed environment.


What are the rows locked

Chinni, February 01, 2009 - 3:29 am UTC

Hi Tom,
Please bear with me; I am asking similar questions because I have a production issue in one of my processes. I am trying to find the root cause.

...
ask what time it is
commit;
ask what time it is again
print out difference
....

Yes, I can do this. Thanks !
But if I measure time like that, won't I also count the time for committing outstanding changes along with the mview refresh?

Could you also clarify these?
How can I find which row/rows are currently locked by a session (actual rows, rather than object ids)?
How can I find the row a blocked session is waiting on?

Thank you very for your answers. I am learning many things on this site...

By the way .. your trip to India was postponed recently .. is it rescheduled?


Tom Kyte
February 02, 2009 - 10:13 am UTC

...
But if I measure time like that, won't I also count the time for committing outstanding changes along with the mview refresh?
...

of course??? But so what

a) it takes about as much time to commit 1,000,000 row updates as it does a 1 row update.
b) and that time is tiny

...
How can I find which row/rows are currently locked by a session (actual rows, rather than object ids)?
....

that information is not exposed anywhere, we do not have a master list of locks at the row level (or else it would take a really long time to commit 1,000,000 rows and a lot less time to commit 1 and there would be limits on the number of locks in total and .... <nothing good here> ....)


...
How can I find the row a blocked session is waiting on?
....

v$session has that information.
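Specifically, the ROW_WAIT_* columns of v$session identify the row; a sketch that turns them into a ROWID (with :blocked_sid as the waiting session):

```sql
select s.row_wait_obj#, s.row_wait_file#, s.row_wait_block#, s.row_wait_row#,
       dbms_rowid.rowid_create( 1, o.data_object_id,
                                s.row_wait_file#, s.row_wait_block#,
                                s.row_wait_row# ) as waiting_for_rowid
  from v$session s, dba_objects o
 where s.sid       = :blocked_sid
   and o.object_id = s.row_wait_obj#;
```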


... By the way .. your trip to India was postponed recently .. is it rescheduled? ...

tentatively scheduled for September 2009.

Is Oracle not detecting the deadlock? (continuing from above)

Chinni, February 01, 2009 - 6:30 am UTC

<code>Hi Tom,

Is Oracle not detecting a deadlock here?
(I have used the query from your book)

BLOCKER SID 'ISBLOCKING' BLOCKEE SID_1
DCSW_USER 225  is blocking  CRIS_APPS 45
CRIS_APPS 45  is blocking  DCSW_USER 225

Please see that sessions 45 and 225 are blocking each other.

From v$lock

SID    TYPE ID1 ID2 LMODE REQUEST CTIME BLOCK
45 TX 1376271 2817722 6 0 700 0
45 JI 80726 0 6 0 700 1
45 MS 78753 0 6 0 700 0
45 JI 80730 0 6 0 700 0
45 MS 80730 0 0 6 0 0
...
225 TX 1310720 2876963 6 0 700 0
....
225 MS 80730 0 6 0 700 1
225 JI 80726 0 0 6 0 0

OBJECT_ID OBJECT_NAME
78753 POS -- Blocked by 45 in Mode 6
80730 TRA_PENDFAIL -- Blocked by 45 and 225 in Mode 6 and 225 is also waiting for this
80726 TRA_CANCEL -- Blocked by 45 in Mode 6

These two mviews (TRA_PENDFAIL, TRA_CANCEL) are on-commit mviews on the TRA table.
As the two sessions blocked each other while refreshing the mviews, I assume this is what happened:
a. session 45 updated R1 of TRA
b. session 225 updated R1 (as there was no blocking on the TRA table, I assumed the updates from the two sessions were successful)
c. both sessions committed

Questions:
1. CTIME (time since current mode was granted) is 700 seconds (v$lock data); is something wrong here?
Is Oracle not detecting this (it should detect it in 3 seconds, as you mentioned)? Am I missing something? Is this not a deadlock?
2. Lock mode
Internet says
6 --> 'Exclusive'
3 --> 'Row Exclusive'
These two tables TRA_PENDFAIL and TRA_CANCEL are mview tables. My question is
How is object (80730, TRA_PENDFAIL) locked by two sessions in lock mode 6?
Is lock mode 6 a table-level exclusive lock?
and also
45 JI 80730 0 6 0 700 0 -- locked
45 MS 80730 0 0 6 0 0 -- requesting

How is it that 80730 is both locked and requested at the same time by sid 45? Do the lock types JI and MS have something to do with it?

3. In my process (steps a, b, c above), is it possible for two sessions which are refreshing on-commit mviews to block each other?
I assume this can only happen when

session 45 - refresh TRA_PENDFAIL (and lock this table row)
session 225 - refresh TRA_CANCEL (and lock this table row)

session 45 - refresh TRA_CANCEL, waits for 225 to release its lock
session 225 - refresh TRA_PENDFAIL, waits for 45 to release its lock

Can this happen? Is there any order defined for on-commit mviews?

Kindly suggest. I hope I am clear in putting my problem in words.

Thank you for your guidance</code>
Tom Kyte
February 02, 2009 - 10:41 am UTC

please contact support, it might be related to bug 6644122 or 6475688

Thank you!!

Chinni, February 02, 2009 - 11:08 am UTC

Hi Tom,
Thank you very much for your replies. I would consider taking this up with the Support.

Unfortunately bug 6644122 is not classified as public. I can't view that.

...
a) it takes about as much time to commit 1,000,000 row updates as it does a 1 row update.
b) and that time it tiny
...
Yes, I agree that, you have given an example in your book on this as well.

..
tentatively scheduled for September 2009.
..
that's a long time...:)

Materialized View Refresh [Incremental]

Maverick, February 13, 2009 - 9:33 am UTC

Tom,
Is it possible to have a materialized view such that there are only inserts and no updates or deletes? For example: if source table data changes, it should insert into the MV as a new row rather than update the existing one [a kind of audit table].
I am really not sure. I think the answer is no, but I want to hear it from you.
Is it possible?
Tom Kyte
February 16, 2009 - 11:10 am UTC

You would use Streams for this - with Streams you can have a custom apply that does whatever you want when you hit an insert/update/delete

You could turn an update into a flag delete of existing record and insert of new.
you could turn a delete into a flag delete
you could let inserts just go through and insert

to maintain a history.



With materialized views - the entire, only, sole and ultimate goal of them is to make the materialized view reflect perfectly the contents of the defining query. After a refresh, the materialized view MUST reflect what was in the base tables as of the point in time the refresh was performed. There is no opportunity to have them do anything else, that is the only thing they know how to do
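If Streams feels heavier than needed, the insert-only history the poster describes is often sketched with a plain trigger instead (this is not a materialized view, and the names are made up):

```sql
create table src      ( id number primary key, val varchar2(32) );
create table src_hist ( id number, val varchar2(32),
                        op varchar2(1), changed_at date );

create or replace trigger src_hist_trg
after insert or update or delete on src
for each row
begin
  if deleting then
    insert into src_hist values ( :old.id, :old.val, 'D', sysdate );
  else
    -- an insert or update lands in the history as a brand-new row
    insert into src_hist values ( :new.id, :new.val,
                                  case when inserting then 'I' else 'U' end,
                                  sysdate );
  end if;
end;
/
```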

Questions on Fast Refresh - on commit

Chinni, February 26, 2009 - 5:00 am UTC

Hi Tom,
I have two "on commit" fast refresh mviews whose base table is heavily modified during the day. In my database I now see blocking sessions (no deadlock, though) very frequently. The blocking session is actually refreshing one of these mviews and the other sessions are waiting for lock mode 6 on the same mview.

I have checked the view V$MVREFRESH ("Contains information about the materialized views currently being refreshed." - DOC)

Now in this view, I have a lot of sessions listed against the SAME mview. How is it possible that the same mview is being refreshed by so many sessions at the same time? (am I missing something?)

Just to get some clarity on this: are "on commit" fast refresh mviews on heavily modified tables bound to have locking issues?


I have observed that the blocking session is running a select statement as

SELECT 1
FROM sumpartlog$ a
WHERE a.bo# = :1 AND BITAND (a.flags, 5) != 5 AND ROWNUM = 1

I am not sure why this select is blocking the other sessions - is it because this statement is part of the "INSTALLATION" stage of the refresh?

and finally, I am not sure whether there is some mechanism to speed up the refresh process (which is already a fast refresh)

Thank you for your time

Tom Kyte
March 03, 2009 - 7:32 am UTC

you are seeing sessions that will be refreshing that materialized view, they are all in the queue to do so.


... are "on commit" fast refresh mviews on heavily modified tables bound to have locking issues?
.....

undoubtedly - think about it - typically a materialized view aggregates data - takes MANY ROWS and rolls them up. Think about what that means.


In Expert one on one Oracle I wrote:

<quote>

As mentioned above, materialized views typically add overhead to individual transactions and, if created with REFRESH ON COMMIT, will introduce contention. The overhead arises from the need to track the changes made by a transaction; these changes will be maintained either in the session state or in log tables. In a high-end OLTP system, this overhead is not desirable. The concurrency issue comes into play with a REFRESH ON COMMIT materialized view due to the fact that many rows in the detail fact table point to a single row in a summary table. An update to any one of perhaps thousands of records will need to modify a single row in the summary. This will naturally inhibit concurrency in a high update situation.

That does not preclude the use of materialized views with OLTP, in particular materialized views that are refreshed on demand with a FULL refresh. A full refresh does not add the overhead of tracking transaction-level changes; rather, at some point in time the defining query for the materialized view is executed and the results simply replace the existing materialized view. Since this is done on demand (or on a timed basis), the refresh may be scheduled for a time when the load is light. The resulting materialized views are especially relevant for reporting purposes: your OLTP data can be transformed using SQL into something that is easy and fast to query every night. The next day, your online reports of yesterday's activities run as fast as possible and easily coexist with your OLTP system.


</quote>
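A minimal sketch of the on-demand full-refresh approach described in the quote, assuming 10g or later; the MV name, job name, and schedule below are all made up for illustration:

```sql
-- Schedule a nightly COMPLETE refresh for a quiet window (hypothetical names).
begin
  dbms_scheduler.create_job(
    job_name        => 'NIGHTLY_REPORT_MV_REFRESH',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_mview.refresh( list => ''REPORT_MV'', method => ''C'' ); end;',
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',   -- run at 2am every day
    enabled         => true );
end;
/
```

Before 10g, a DBMS_JOB submission with an interval of `sysdate+1` would serve the same purpose.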

Why the index status is valid while refreshing the MV

Rajeev, March 03, 2009 - 8:55 am UTC

Hi Tom,

Could you pls suggest if any thing is wrong in my steps below

1)I have created a MV SAMPLE_MV with complete refresh.

creation time:1 hour
count(1)in MV:69371156

2)There are 4 indexes on this MV
3)I have Inserted around 2000 records in the base table
4)I have made the indexes unusable on MV.
5)Then I tried to refresh the MV with the below statement
DBMS_MVIEW.REFRESH('SAMPLE_MV','C',PARALLELISM=>8);

Refresh time:2 Hours

6)When the refresh is happening,in the mean while when I
check the status of indexes ,it shows that indexes are valid.

So what is the point of disabling the indexes before refreshing the MV and rebuilding them after the refresh?
I could not understand why the index status is valid; because of this, the refresh took a long time.

Could you please shed some light on this.

Thanks in advance.


Tom Kyte
March 03, 2009 - 4:58 pm UTC

sigh, no versions anywhere.

Anyway, I'll assume 9i or before. In that case - a complete refresh is TRUNCATE+insert /*+ append */

Truncate - resets the index status to "valid"
Insert append is a direct path load


It is almost certainly better to load with the indexes ON in this case, else you'll have to FULL SCAN the table 4 times *after* you load it. You don't want that do you?
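For reference, in 10g and later the default complete-refresh behavior changed: DBMS_MVIEW.REFRESH performs an atomic DELETE + INSERT unless told otherwise. A hedged sketch using the poster's SAMPLE_MV:

```sql
-- 10g+: atomic_refresh => false restores the 9i-style TRUNCATE plus
-- direct-path INSERT /*+ append */ load (note: readers see an empty MV
-- for the duration of the refresh).
begin
  dbms_mview.refresh( list           => 'SAMPLE_MV',
                      method         => 'C',
                      atomic_refresh => false );
end;
/
```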


Refresh on Commit MVs

A reader, April 15, 2009 - 5:27 pm UTC

Earlier in the thread (September 24, 2002), you mentioned that "you would find that refresh on commit MVs are not appropriate at all for an environment where many people concurrently modify the detail tables." So in what circumstances are REFRESH ON COMMIT MVs appropriate?

Would a FAST REFRESH MV that gets refreshed on a regular basis be appropriate for a high concurrency environment?

Tom Kyte
April 15, 2009 - 7:42 pm UTC

lower concurrency rates - the problem is one of serialization on the commit - you are taking many detail rows (I update a single detail row, you update a single detail row) and at commit time - we both need to update the same "aggregate", you are funneling detail records up to the aggregate level, in a high concurrent situation - you would find the serialization might be an issue (or not - it depends on the commit patterns)

A reader, April 16, 2009 - 12:06 am UTC

We have a system that has no more than 5 writers at any one time. At peak time, there may be continuous inserts from all the writers and only a modest amount of updates.

We also have a refresh on commit materialized view with no aggregates, but just a pre-join of several tables. We ran into situations where the writers just hung and the only way to get out of that situation was to kill the blocking process. During the "hang" time, even simple queries (e.g. SELECT COUNT(*)) against some of the tables referenced in the materialized view would also hang!

As soon as we removed the materialized view, everything goes back to normal. Does the serialization on the commit issue that you mentioned earlier in the thread apply to this situation?

Materialized Views- Aggregate functions

Lok Nammina, April 16, 2009 - 8:56 am UTC

Tom, this article is very useful.. thanks

Can you please confirm whether we can use analytic functions in fast refresh materialized views?
Tom Kyte
April 16, 2009 - 10:03 am UTC

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14223/basicmv.htm#sthref534

...
"It cannot contain analytical functions (for example, RANK) in the SELECT clause."
....

materialized views

Lok Nammina, April 17, 2009 - 8:15 am UTC

Hi Tom,
Thanks for your reply.
i have created a complete refresh, on demand materialized view with UNION ALL to union data from 20-odd queries, and created a job to refresh it every day. But we have observed that the materialized view is not refreshing properly.
select query 1
union all
select query 2
union all
select query 3
The problem is that data from query 2 is not getting refreshed while the remaining queries do. After manually refreshing the mview, the data comes through alright.
Can you please suggest a reason for this behaviour.

Thanks in advance
Tom Kyte
April 17, 2009 - 10:44 am UTC

The only answer I can think of is...

You have made a mistake in your observation of what is happening.


When the materialized view was refreshed - the entire query was executed in its entirety. Whatever was in query 2 as of the time the refresh ran (committed and in query 2) is what is in the materialized view.

When you manually refresh it later - the data that underlies this mysterious query 2 must have CHANGED since the other refresh ran.

So, look in your system, query 2 had that data when the query was refreshed, something then updated the tables behind query 2 and committed and then you refreshed again, observing different data from query 2.


But what you say you are seeing is "not possible", so the only answer I can have is "your conclusions of what is happening are incorrect"

Purge Snapshot Logs

Harschil, May 11, 2009 - 9:27 am UTC

Hi Tom,

Many thanks for your time.


YY : Username
XX : Table Name
mlog$_XX : MV log on XX

Statspacks report says ( 20 mins ) :

  Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
         45,003            3       15,001.0    1.7     4.71     13.63 2141515260
update "YY"."MLOG$_XX" set snaptime$$ = :1  where s
naptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS
')

         44,988            3       14,996.0    1.7     4.38     13.18  791849907
Module: user@hostA (TNS V1-V3)
SELECT /*+ */ DISTINCT "A1"."OBJID" FROM "YY"."MLOG$_XX" "A1" WHERE "A1"."OBJID"<>ALL (SELECT "A2"."OBJID" FROM "YY
"."XX" "A2" WHERE "A1"."OBJID"="A2"."OBJID") AND "A1"."SNAPTIME$$">:1 AND "A1"."DMLTYPE$$"<>'I'




sqlplus un/pw <<EOF
spool a.lst
select  count(*)  from MLOG\$_XX;
exec dbms_snapshot.purge_log('XX',1);
select  count(*)  from MLOG\$_XX;
spool off;
EOF


and after running the above script

more a.lst
SQL>select  count(*)  from MLOG\$_XX;
 
  COUNT(*)
----------
   1634629

03:00:05 SQL>
PL/SQL procedure successfully completed.

03:01:09 SQL>
  COUNT(*)
----------
         4

-- We successfully purged all the records that were waiting in mlog$_XX to be refreshed "nowhere".


Now after 2 days :

03:01:09 SQL>
  COUNT(*)
----------
         10

-- i.e. Entries in mlog$_XX are removed once the same is refreshed at Snapshot site. ( No accumulation there) 


The Statspack report still shows heavy PIO for mlog$_XX:
                                                     CPU      Elapsd
 Physical Reads  Executions  Reads per Exec %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------

     138,554           10       13,855.4   12.7    11.32     67.92 2441750107
delete from "YY"."MLOG$_XX" where snaptime$$ <= :1


        129,037            9       14,337.4   11.8    11.20     76.70 2141515260
update "YY"."MLOG$_XX" set snaptime$$ = :1  where s
naptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS
')

      126,507            9       14,056.3   11.6    10.15     76.24  791849907
Module: user@hostA (TNS V1-V3)
SELECT /*+ */ DISTINCT "A1"."OBJID" FROM "YY"."MLOG$_TABLE_ACT_E
NTRY" "A1" WHERE "A1"."OBJID"<>ALL (SELECT "A2"."OBJID" FROM "YY
"."XX" "A2" WHERE "A1"."OBJID"="A2"."OBJID") AND "A
1"."SNAPTIME$$">:1 AND "A1"."DMLTYPE$$"<>'I'

We have :

Oracle 9.2.0.6
HP Ux 11i

Question/s :
a) Is the HWM still playing a role here?
b) Do we need to drop the snapshot log and then recreate it to lower the HWM of mlog$_XX (as we should not use truncate on mlog$ tables)?
c) Your suggestion to reduce PIO on mlog$_XX?
      
      
Kind Regards

Harschil

Tom Kyte
May 11, 2009 - 6:45 pm UTC

a) yes
b) no
c) alter table mlog$_<tname> move;

there are no indexes on that table - just moving it will 'reorg it' and lower the high water mark.
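A sketch of the suggested reorg, with before/after segment-size checks; the owner and log name are taken from the post, so adjust as needed:

```sql
-- How many blocks the MV log occupies before the move
select blocks from dba_segments
 where owner = 'YY' and segment_name = 'MLOG$_XX';

-- Rebuild the segment below the old high water mark (offline operation,
-- so run it in a window with no DML against the base table)
alter table yy.mlog$_xx move;

-- Verify the segment shrank
select blocks from dba_segments
 where owner = 'YY' and segment_name = 'MLOG$_XX';
```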



Purge Snapshot Logs...

Harschil, May 12, 2009 - 7:04 am UTC

Tom,

...c) alter table mlog$_<tname> move;

Question/s

a) The above cannot be done online while DML on the base table <tname> is happening?
b) What should the strategy be for doing alter table ... move on mlog$_<tname> tables?

Regards

Harschil

Tom Kyte
May 13, 2009 - 10:32 am UTC

... Do we need to drop snapshot log and then recreate the same to lower HWM of mlog$_XX ( as
truncate on mlog$ tables we should not use)?
...


neither can truncate.
nor drop and recreate.

alter table t move will be the right approach and yes, it'll be an offline operation.



I don't know what you mean by "what should be strategy"??

Purge Snapshot Logs...

Harschil, May 13, 2009 - 6:49 pm UTC

Thanks a lot Tom.

I got the answer.

... alter table ... move on a snapshot log is an offline operation.


Regards

Harschil.


MV Refresh

pointers, May 23, 2009 - 2:19 am UTC

Hi Tom,

I have a materialized view which is built on a view.
The materialized view definition is as follows.

CREATE MATERIALIZED VIEW MV_R_HR
REFRESH FORCE ON DEMAND
AS
SELECT h_id,h_name, MAX (date_time) datetime
FROM v_raw
GROUP BY h_id,h_name;



Here v_raw is a view on high-volume transaction tables.
v_raw has 33,134,296 records, covering 8 weeks of transaction details. I created a db job to refresh this materialized view.

When I refresh the MV manually (dbms_mview.refresh) it takes a lot of time, nearly 20 minutes. Can you please help me tune this?

Thanks in advance.

Regards
Pointers.


Tom Kyte
May 23, 2009 - 10:35 pm UTC

no version... sigh....

tell me, if you just run:

set autotrace traceonly
SELECT h_id,h_name, MAX (date_time) datetime
FROM v_raw
GROUP BY h_id,h_name;
set autotrace off

how long does that take.

pointers, May 24, 2009 - 10:07 am UTC

sorry tom i forgot to tell...
the version is oracle 10g (don't remember the exact version).. the query is taking a lot of time, nearly 20 min....
Tom Kyte
May 26, 2009 - 8:40 am UTC

ok, so if the query itself on your machine takes 20 minutes - why do you think the materialized view refresh would be faster?

do this for us

exec dbms_monitor.session_trace_enable( waits => true );
... run the query ....
exit sqlplus


run tkprof on the trace file and show us the query and related (related to that query) information - the execution times, the row source operation and the wait events.
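One way to locate the trace file before running tkprof; the v$diag_info query needs 11g (on 10g, look under user_dump_dest), and the file names below are assumptions:

```sql
-- 11g+: ask the database where this session's trace file is
select value from v$diag_info where name = 'Default Trace File';

-- then, at the OS prompt (file names assumed):
--   tkprof /path/to/ora_12345.trc report.txt sys=no sort=exeela
```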

monitor session trace

shafi,singapore, June 10, 2009 - 4:16 am UTC

perhaps he can also include binds => true in the exec dbms_monitor.session_trace_enable( waits => true );
..
Am I right, Tom?
Tom Kyte
June 10, 2009 - 12:35 pm UTC

sure he could.

but since there aren't any bind variables in a create statement, it would be sort of boring.

how to refresh fast on this query ?

A reader, June 22, 2009 - 5:04 am UTC

hi tom,

i am trying very hard to create a materialised view according to the specifications / rules listed in the documentation.
( http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/basicmv.htm#g1028195 )

I need to create a materialized view base on (inline view/query)


CREATE MATERIALIZED VIEW test
BUILD IMMEDIATE
REFRESH FAST ON DEMAND
AS
SELECT * FROM
(SELECT tab1.rowid, col1,col2, 1 unmarker FROM tab1 where condi = 'Y'
UNION ALL
SELECT tab1.rowid, col1,col2, 2 umarker FROM tab1 where condi = 'Z'
);

but i failed.
any idea how i should do it?

Tom Kyte
June 22, 2009 - 12:31 pm UTC

may i ask why you are using a union all for this, that doesn't make sense.

that screams "or"

anyway, incomplete example, I can do it:

ops$tkyte%ORA11GR1> create materialized view log on t1 with rowid;

Materialized view log created.

ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> create materialized view mv
  2  refresh fast
  3  as
  4  select t1.rowid rid, 1 marker, t1.* from t1 where t1.y = 'A'
  5  union all
  6  select t1.rowid rid, 2 marker, t1.* from t1 where t1.y = 'B'
  7  /

Materialized view created.



now, you have to modify that to make it "not work" to demonstrate what YOU are doing.

how to refresh fast on this query ?

A reader, June 22, 2009 - 5:07 am UTC

hi tom,
sorry to double post

with regards to the question above
the error is

ERROR at line 5:
ORA-01446: cannot select ROWID from, or sample, a view with DISTINCT, GROUP BY, etc.

how to refresh fast on this query ?

A reader, June 22, 2009 - 2:23 pm UTC

hi tom,
thanks for the prompt reply! didn't expect you to reply immediately after i posted.

i am sorry, i can't seem to mimic that error again after following your example.

actually i am working with a very complicated query ( i wouldn't dare to say complex in front of you ) and i am trying to break it down little by little to see if it is feasible for a materialized view.

here the sql goes

/* table contain all date in this year */
CREATE table DATES as
SELECT TRUNC(SYSDATE,'YY') - 1 + LEVEL AS CONNTIME
FROM DUAL CONNECT BY LEVEL < 366;


/* the query */

WITH DAYYEAR AS
(SELECT * FROM DATES),
PERSONAL AS (
SELECT A, SUM(DECODE(STATES,'USA',SIGN_UP,0)) USA_SIGNUP, SUM(DECODE(STATES,'USA',TERMINATE,0)) USA_TERMINATE,SUM(DECODE(STATES,'USA',SUSPEND,0)) AS USA_SUSPEND, SUM(DECODE(STATES,'USA',EXTEND,0)) USA_EXTEND, SUM(DECODE(STATES,'USA',ACTIVE_USER,0)) USA_ACTIVE_USER,
SUM(DECODE(STATES,'UK',SIGN_UP,0)) UK_SIGNUP, SUM(DECODE(STATES,'UK',TERMINATE,0)) UK_TERMINATE, SUM(DECODE(STATES,'UK',SUSPEND,0)) UK_SUSPEND, SUM(DECODE(STATES,'UK',EXTEND,0)) UK_EXTEND, SUM(DECODE(STATES,'UK',ACTIVE_USER,0)) UK_ACTIVE_USER,
SUM(DECODE(STATES,'AUSTRALIA',SIGN_UP,0)) AUST_SIGNUP, SUM(DECODE(STATES,'AUSTRALIA',TERMINATE,0)) AUST_TERMINATE, SUM(DECODE(STATES,'AUSTRALIA',SUSPEND,0)) AUST_SUSPEND, SUM(DECODE(STATES,'AUSTRALIA',EXTEND,0)) AUST_EXTEND, SUM(DECODE(STATES,'AUSTRALIA',ACTIVE_USER,0)) AUST_ACTIVE_USER,
SUM(DECODE(STATES,'JAPAN',SIGN_UP,0)) JAP_SIGNUP, SUM(DECODE(STATES,'JAPAN',TERMINATE,0)) JAP_TERMINATE, SUM(DECODE(STATES,'JAPAN',SUSPEND,0)) JAP_SUSPEND, SUM(DECODE(STATES,'JAPAN',EXTEND,0)) JAP_EXTEND, SUM(DECODE(STATES,'JAPAN',ACTIVE_USER,0)) JAP_ACTIVE_USER,
SUM(DECODE(STATES,'SINGAPORE',SIGN_UP,0)) SING_SIGNUP, SUM(DECODE(STATES,'SINGAPORE',TERMINATE,0)) SING_TERMINATE, SUM(DECODE(STATES,'SINGAPORE',SUSPEND,0)) SING_SUSPEND, SUM(DECODE(STATES,'SINGAPORE',EXTEND,0)) SING_EXTEND, SUM(DECODE(STATES,'SINGAPORE',ACTIVE_USER,0)) SING_ACTIVE_USER
FROM (
SELECT TRUNC(C.REGISTER_DATE) A,
DECODE(SP.STATES,'TOKYO','JAPAN','ALTANTA','USA','NASSAU ZONE','USA','CALIFORNIA CITY','USA','HOUSTON','USA','LONDON','UK','SYDNEY','AUSTRALIA','MELBOURNE','AUSTRALIA','PERTH','AUSTRALIA','SINGAPORE','SINGAPORE') AS STATES,
SUM(DECODE(C.REGISTER_TYPE,'REGISTER',1,0)) AS SIGN_UP,
SUM(DECODE(C.REGISTER_TYPE,'TERMINATE',1,0)) AS TERMINATE,
SUM(DECODE(C.REGISTER_TYPE,'SUSPEND',1,0)) AS SUSPEND,
SUM(DECODE(C.REGISTER_TYPE,'EXTEND',1,0)) AS EXTEND,
SUM(DECODE(C.REGISTER_TYPE,'REGISTER',1,0))
-
SUM(DECODE(C.REGISTER_TYPE,'TERMINATE',1,0))
-
SUM(DECODE(C.REGISTER_TYPE,'SUSPEND',DECODE(C.UNREGISTER_TYPE,'TERMINATE',0,'SUSPEND', DECODE(TRUNC(C.REGISTER_DATE), TRUNC(C.UNREGISTER_DATE), 0,1),1),0))
+
SUM(DECODE(C.REGISTER_TYPE,'EXTEND', DECODE(C.UNREGISTER_TYPE,'SUSPEND',1,0),0)) ACTIVE_USER
FROM (
/* aim on register and suspend on that date */
SELECT REGISTER_DATE, UNREGISTER_DATE, REGISTER_TYPE, UNREGISTER_TYPE, PLAN_ID
FROM CFR
WHERE SYS_PLAN_ID IS NOT NULL
AND REGISTER_DATE >= '12-MAY-09'
AND REGISTER_TYPE IN ('REGISTER','SUSPEND')
AND DDI_TYPE = 'CMASK_P'
UNION ALL
SELECT UNREGISTER_DATE, REGISTER_DATE, UNREGISTER_TYPE, REGISTER_TYPE, PLAN_ID
FROM CFR
WHERE SYS_PLAN_ID IS NOT NULL
AND UNREGISTER_DATE >= '12-MAY-09'
AND UNREGISTER_TYPE = 'TERMINATE'
AND DDI_TYPE = 'CMASK_P'
UNION ALL
SELECT UNREGISTER_DATE, REGISTER_DATE, UNREGISTER_TYPE, REGISTER_TYPE, PLAN_ID
FROM CFR
WHERE SYS_PLAN_ID IS NOT NULL
AND UNREGISTER_DATE >= '12-MAY-09'
AND UNREGISTER_TYPE = 'EXTEND'
AND DDI_TYPE = 'CMASK_P'
) C JOIN SCB_PLAN SP
ON (C.PLAN_ID = sp.plan_id)
GROUP BY TRUNC(C.REGISTER_DATE), DECODE(SP.STATES,'TOKYO','JAPAN','ALTANTA','USA','NASSAU ZONE','USA','CALIFORNIA CITY','USA','HOUSTON','USA','LONDON','UK','SYDNEY','AUSTRALIA','MELBOURNE','AUSTRALIA','PERTH','AUSTRALIA','SINGAPORE','SINGAPORE')
) GROUP BY A )
SELECT * FROM DAYYEAR DY
LEFT OUTER JOIN PERSONAL P
ON (DY.CONNTIME = P.A);

From the WITH statement, you can see that in under the PERSONAL part
there are

union all
decode function
inner query

and on the outer query of the WITH statement,
there is a LEFT OUTER JOIN.

i am trying to adhere to the rules stated in the guide for individual components first, before i try to merge them up together as a whole.

but its really tough.

hope to hear your opinion on it.
my most sincere apologies, as i know it's hard to assess the query without knowing the business logic

thanks alot tom.




Tom Kyte
June 22, 2009 - 3:51 pm UTC

the union all is buried way down, one of the requirements is it *NOT BE* - re-read the link you provided

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14223/basicmv.htm#sthref539

you broke the first rule!


you have a group by on the union all
you do not select the rowid from each table in the union all
you do not select a marker column in each table in the union all

you violated most *every* rule....

how to refresh fast on this query ?

A reader, June 23, 2009 - 4:41 am UTC

hi tom,

thanks for the reply.

yea.. i know almost broken every rule.

what i am trying to find out is whether my sql can adhere to the rules without much change.

can i check with you: is it advisable for me to create the inner query with the union all as a separate materialized view,

then have it join up with the main materialized view? that way i will not be confused about rules overlapping with one another.

what would be your way in working this query out to be a materialized view ?

Thanks and Best Regards,
A


Tom Kyte
June 26, 2009 - 9:03 am UTC

... yea.. i know almost broken every rule. ...

fascinating, since you started with:

...
i am trying very hard to create a materialised view according to the specifications / rules listed
in the documentation.
( http://docs.oracle.com/docs/cd/B19306_01/server.102/b14223/basicmv.htm#g1028195 )

I need to create a materialized view base on (inline view/query)


CREATE MATERIALIZED VIEW test
BUILD IMMEDIATE
REFRESH FAST ON DEMAND
AS
SELECT * FROM
(SELECT tab1.rowid, col1,col2, 1 unmarker FROM tab1 where condi = 'Y'
UNION ALL
SELECT tab1.rowid, col1,col2, 2 umarker FROM tab1 where condi = 'Z'
);

but i failed.
any idea how should i do it?

....


you start by saying

a) I know the rules
b) I am following all of them
c) here is my example


and it turns out

a) you do know the rules
b) you know you break almost every single one
c) your real query is as different from your original example as can be.


You cannot create an incrementally refreshed materialized view with a group by of a union all.

Why not create a pair of MV's and create a view that union all 's them together?

A regular view that union all's them together.

how to refresh fast on this query ?

A reader, July 04, 2009 - 8:15 am UTC

thanks a lot tom! yeah, i should have created a view on it! silly me!

Import refresh group using data pump

A reader, August 07, 2009 - 9:37 am UTC

Tom,
I am not very familiar with mviews. Recently, I copied a schema X from database A to database B using datapump import over network_link.

The schema X has mviews and refresh groups in database A. After the import, I found that the mviews had been imported but the refresh groups were not. There was no error during the import. I had done the import connected as user A. Now the mview refreshes are failing in database B.

a) Are refresh groups left out during datapump import?
b) How can I fix the situation? Each mview is refreshed by a separate database job and there are 300+ of them.

Version 10gr2.

Thanks...
Tom Kyte
August 07, 2009 - 10:27 am UTC

look in the export and import log and see if

Processing object type SCHEMA_EXPORT/REFRESH_GROUP


is in there, they should just come over.

Import refresh group using data pump

A reader, August 07, 2009 - 11:10 am UTC

In the import log there is no mention of refresh_group so I assume that these were not imported. Is there a way to ensure that these are imported?

I was reading the documentation and here is what I figured can be done to fix the situation. In database B:
a) Drop all the refresh jobs (dbms_job.remove). This will leave me with only the mviews.
b) Use dbms_refresh.make to create new refresh group and add the mview to that refresh group.

I may be totally wrong, so please let me know if I am thinking on the right track.

Thanks...
Tom Kyte
August 07, 2009 - 2:36 pm UTC

are you sure the schema in question HAD the groups - in its schema?

I ran this in an empty schema

create table emp as select * from scott.emp;
alter table emp add constraint emp_pk primary key(empno);
create materialized view log on emp with rowid (deptno,job,sal) including new values;
create materialized view mv1
refresh fast
as
select deptno, job, count(*) cnt, sum(sal) sal, count(sal)  cnt_sal
  from emp
 group by deptno, job;
create materialized view log on mv1 with rowid(deptno,sal) including new values;
create materialized view mv2
refresh fast
as
select deptno, count(*), sum(sal) sal, count(sal) cnt_sal
  from mv1
 group by deptno;
begin
dbms_refresh.make(
name             =>'group1',
list             =>'mv2,mv1',
next_date    =>sysdate,
interval         =>'sysdate+1/24',
implicit_destroy =>true);
end;
/


expdp'ed it - drop and created the user again - impdp'ed it and the groups came right along

I'm tending to think the groups were not in that schema.

Import refresh group using data pump

A reader, August 07, 2009 - 2:55 pm UTC

My limited knowledge about mviews is proving to be a major roadblock here, so I am asking a very basic question. How do I find out who owns a refresh group? I have queried the dba_rgroup view in the original database (A) and the owner is displayed as user X. It might very well be that whoever created the refresh groups in the original database did so as the SYSTEM user with name => 'X.<name>'. How can I find this out?

BTW the clock seems to be off by about 20 minutes. The last update time on your responses is 20 minutes ahead of EST.

Thanks...
Tom Kyte
August 07, 2009 - 3:43 pm UTC

ops$tkyte%ORA10GR2> select rowner, rname from dba_refresh;

ROWNER                         RNAME
------------------------------ ------------------------------
OPS$TKYTE                      GROUP1



that is from my last test above. I only have one refresh group in my database.

Import refresh group using data pump

A reader, August 07, 2009 - 4:09 pm UTC

I checked the dba_refresh view on original database and the rowner is X. Here is the parameter file I used for import, nothing was excluded.

DIRECTORY=DATA_PUMP_DIR
NETWORK_LINK=nw_link
LOGFILE=network_link.log
FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"
JOB_NAME=imp_job
PARALLEL=4
SCHEMAS=X

The only thing is the X does not have imp_full_database role or exp_full_database role assigned on target/source databases. Not sure if this could have made the difference.

Thanks...

Tom Kyte
August 07, 2009 - 4:14 pm UTC

if you run my script in a test schema and expdp it, impdp it - does it reproduce?

If it does, what are the exact set of privs you used to create this test schema to make it like user X

Import refresh group using data pump

A reader, August 07, 2009 - 5:17 pm UTC

I tested your example and refresh groups did not export or import. Here is the log of export/import and create user statement:

;;;
Export: Release 10.2.0.3.0 - 64bit Production on Friday, 07 August, 2009 16:38:57

Copyright (c) 2003, 2005, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "ABC"."EXP_ABC_JOB": abc/******** parfile=abc.par
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
. . exported "ABC"."EMP" 29.82 KB 48 rows
. . exported "ABC"."MV1" 7.601 KB 33 rows
. . exported "ABC"."MV2" 6.578 KB 27 rows
. . exported "ABC"."MLOG$_EMP" 0 KB 0 rows
. . exported "ABC"."MLOG$_MV1" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/POST_TABLE_ACTION
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/TABLE/MATERIALIZED_VIEW_LOG
Processing object type SCHEMA_EXPORT/JOB
Master table "ABC"."EXP_ABC_JOB" successfully loaded/unloaded
******************************************************************************
Dump file set for ABC.EXP_ABC_JOB is:
C:\ABC.DMP
Job "ABC"."EXP_ABC_JOB" successfully completed at 16:39:40

;;;
Import: Release 10.2.0.3.0 - 64bit Production on Friday, 07 August, 2009 16:48:36

Copyright (c) 2003, 2005, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "ABC"."IMP_ABC_JOB" successfully loaded/unloaded
Starting "ABC"."IMP_ABC_JOB": abc/******** parfile=imp_abc.par
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "ABC"."EMP" 29.82 KB 48 rows
. . imported "ABC"."MV1" 7.601 KB 33 rows
. . imported "ABC"."MV2" 6.578 KB 27 rows
. . imported "ABC"."MLOG$_EMP" 0 KB 0 rows
. . imported "ABC"."MLOG$_MV1" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/POST_TABLE_ACTION
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/TABLE/MATERIALIZED_VIEW_LOG
Job "ABC"."IMP_ABC_JOB" successfully completed at 16:48:47


CREATE USER abc PROFILE DEFAULT IDENTIFIED BY abc
DEFAULT TABLESPACE USERS_DATA
TEMPORARY TABLESPACE TEMP quota unlimited on USERS_DATA;
GRANT connect to abc;
alter user abc default role all;
grant QUERY REWRITE to abc;
grant CREATE TABLE to abc;
grant ALTER ANY MATERIALIZED VIEW to abc;
grant CREATE PROCEDURE to abc;
grant CREATE ANY MATERIALIZED VIEW to abc;
grant CREATE SESSION to abc;
grant CREATE VIEW to abc;
grant create sequence to abc;
grant create trigger to abc;
grant all on directory data_pump_dir to abc;

Thanks for your help in working this issue...
Tom Kyte
August 07, 2009 - 5:38 pm UTC

let me see your parfile please.

abc%ORA10GR2> !expdp abc/abc directory=DATA_PUMP_DIR

Export: Release 10.2.0.4.0 - Production on Friday, 07 August, 2009 17:11:21

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "ABC"."SYS_EXPORT_SCHEMA_01":  abc/******** directory=DATA_PUMP_DIR
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/POST_TABLE_ACTION
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/TABLE/MATERIALIZED_VIEW_LOG
Processing object type SCHEMA_EXPORT/JOB<b>
Processing object type SCHEMA_EXPORT/REFRESH_GROUP</b>
. . exported "ABC"."EMP"                                 7.828 KB      14 rows
. . exported "ABC"."MV1"                                 7.007 KB       9 rows
. . exported "ABC"."MV2"                                 6.234 KB       3 rows
. . exported "ABC"."MLOG$_EMP"                               0 KB       0 rows
. . exported "ABC"."MLOG$_MV1"                               0 KB       0 rows
Master table "ABC"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ABC.SYS_EXPORT_SCHEMA_01 is:
  /tmp/expdat.dmp
Job "ABC"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:11:29



it is not a privilege issue - I used your create user and it worked fine.

Import refresh group using data pump

A reader, August 07, 2009 - 9:46 pm UTC

The export parfile:
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=abc.dmp
LOGFILE=exp_abc.log
CONTENT=ALL
FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"
JOB_NAME=exp_abc_job
PARALLEL=4
SCHEMAS=abc

Import parfile:

DIRECTORY=DATA_PUMP_DIR
DUMPFILE=abc.dmp
LOGFILE=imp_abc.log
JOB_NAME=imp_abc_job
PARALLEL=4
SCHEMAS=abc

Oracle 10.2.0.3 P26.

Thanks...

Import refresh group using data pump

A reader, August 09, 2009 - 4:07 pm UTC

I did some more testing on 11.1.0.6 but the refresh group was not exported. Finally I granted exp_full_database to abc and it worked. I am not sure why it works for you without exp_full_database grant. I am testing on Windows 2003.

Without grants:
Export: Release 11.1.0.6.0 - 64bit Production on Sunday, 09 August, 2009 15:36:28

Copyright (c) 2003, 2007, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "ABC"."EXP_ABC_JOB": abc/******** parfile=exp_abc.par
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/POST_TABLE_ACTION
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/TABLE/MATERIALIZED_VIEW_LOG
Processing object type SCHEMA_EXPORT/JOB
. . exported "ABC"."EMP" 30.82 KB 27 rows
. . exported "ABC"."MV1" 7.609 KB 9 rows
. . exported "ABC"."MV2" 6.742 KB 8 rows
. . exported "ABC"."MLOG$_EMP" 0 KB 0 rows
. . exported "ABC"."MLOG$_MV1" 0 KB 0 rows
Master table "ABC"."EXP_ABC_JOB" successfully loaded/unloaded
******************************************************************************
Dump file set for ABC.EXP_ABC_JOB is:
E:\EXPORT\POCDB\ABC.DMP
Job "ABC"."EXP_ABC_JOB" successfully completed at 15:36:33

With exp_full_database:
;;;
Export: Release 11.1.0.6.0 - 64bit Production on Sunday, 09 August, 2009 15:37:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "ABC"."EXP_ABC_JOB": abc/******** parfile=exp_abc.par
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/POST_TABLE_ACTION
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/TABLE/MATERIALIZED_VIEW_LOG
Processing object type SCHEMA_EXPORT/JOB
Processing object type SCHEMA_EXPORT/REFRESH_GROUP
. . exported "ABC"."EMP" 30.82 KB 27 rows
. . exported "ABC"."MV1" 7.609 KB 9 rows
. . exported "ABC"."MV2" 6.742 KB 8 rows
. . exported "ABC"."MLOG$_EMP" 0 KB 0 rows
. . exported "ABC"."MLOG$_MV1" 0 KB 0 rows
Master table "ABC"."EXP_ABC_JOB" successfully loaded/unloaded
******************************************************************************
Dump file set for ABC.EXP_ABC_JOB is:
E:\EXPORT\POCDB\ABC.DMP
Job "ABC"."EXP_ABC_JOB" successfully completed at 15:37:29

To resolve ORA-00942 on refreshing Complete refresh Materialized view.

Dinker Joshi, December 07, 2009 - 12:49 pm UTC

Look at USER_MVIEWS (or DBA_MVIEWS) for the staleness of this view; if it shows COMPILATION_ERROR, issue

ALTER MATERIALIZED VIEW <name> COMPILE;

This will resolve the issue.
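
As a hedged sketch of that check (MY_MV is a placeholder name), the dictionary exposes both the staleness and the compile state:

SELECT mview_name, staleness, compile_state
FROM   user_mviews
WHERE  mview_name = 'MY_MV';        -- MY_MV is a placeholder

-- only needed if compile_state reports an error:
ALTER MATERIALIZED VIEW my_mv COMPILE;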

MV refresh hangs

DayneO, February 26, 2010 - 2:14 am UTC

Hi Tom,

We have a master <-> writable slave replication environment. Everything seemed to be going well, but now it seems that the replication group refresh "hangs".

The refresh group is set to refresh daily at 10pm, but the process never stops. The Oracle waits are reporting a "SQL*Net break/reset to dblink". The docs don't seem to give much indication as to what might cause this.

What is a "SQL*Net break/reset to dblink" telling us?
Do you perhaps have any advice on what to look for?

matrialized view vs. aggregate tables

A Reader, March 10, 2010 - 9:49 pm UTC

Hi Tom

we have a legacy DW app that is refreshed monthly. It has a lot of aggregate tables which are refreshed each month using the five-step process:

disable indexes
truncate
insert
reindex
analyze

I am planning to convert the aggregate tables to materialized views for the obvious benefits of easier refresh management and the query rewrite feature.

The question I had was: can we expect any performance improvement in the refresh times of the materialized views vs. the aggregate tables refreshed using the five-step process described above?

Would greatly appreciate your reply.
Tom Kyte
March 11, 2010 - 8:18 am UTC

if you use atomic_refresh=>false (10g and above, not necessary in 9i and below) the steps to refresh will be:


a) truncate
b) direct path load
c) then gather stats

It doesn't make sense to rebuild all of those indexes - you should just maintain them during the direct path load. If you rebuild, say, 5 indexes - you have to full scan the table 5 times after you load it. With a direct path load, we just maintain the indexes in bulk.
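
As a minimal sketch (the MV name is a placeholder), a complete refresh with atomic_refresh => false performs the truncate + direct-path load described above instead of a delete + conventional insert:

begin
  dbms_mview.refresh(
    list           => 'MY_AGG_MV',   -- placeholder MV name
    method         => 'C',           -- complete refresh
    atomic_refresh => false );       -- truncate + insert /*+ APPEND */ (10g and above)
end;
/

The trade-off is that the MV appears empty (or partially loaded) to other sessions while the refresh runs, since the operation is no longer one atomic transaction.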

ON DEMAND COMPLETE - 11GR1

Rajeshwaran Jeyabal, August 12, 2010 - 12:27 pm UTC

scott@11G> select * from v$version;

BANNER
------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE    11.1.0.6.0      Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production

Elapsed: 00:00:00.20


scott@11G> create materialized view sales_mv
  2  build immediate
  3  refresh on demand complete
  4  enable query rewrite
  5  as
  6  select sales.cust_id ,
  7    sum(sales.sales_amount) sales_amt,
  8    count(sales.sales_amount) txn_count
  9  from scott.customers,
 10    scott.sales
 11  where scott.sales.cust_id = scott.customers.cust_id
 12  group by scott.sales.cust_id;

Materialized view created.

Elapsed: 00:00:09.75
scott@11G>
scott@11G>
scott@11G> insert into sales select * from sales where rownum <= 5;

5 rows created.

Elapsed: 00:00:00.04
scott@11G> commit;

Commit complete.

Elapsed: 00:00:00.01
scott@11G>
scott@11G> alter session set timed_statistics=true;

Session altered.

Elapsed: 00:00:00.00
scott@11G>
scott@11G> alter session set events '10046 trace name context forever, level 12';

Session altered.

Elapsed: 00:00:00.03
scott@11G> exec dbms_mview.refresh('SALES_MV','C');

PL/SQL procedure successfully completed.

Elapsed: 00:00:24.58
scott@11G>
scott@11G>
scott@11G> select rtrim(c.value,'/') || '/' || d.instance_name ||
  2  '_ora_' || ltrim(to_char(a.spid)) || '.trc'
  3  from v$process a, v$session b, v$parameter c, v$instance d
  4  where a.addr = b.paddr
  5  and b.audsid = sys_context( 'userenv', 'sessionid')
  6  and c.name = 'user_dump_dest';

RTRIM(C.VALUE,'/')||'/'||D.INSTANCE_NAME||'_ORA_'||LTRIM(TO_CHAR(A.SPID))||'.TRC'
------------------------------------------------------------------------------------------------------------
d:\app\192746\diag\rdbms\11g\11g\trace/11g_ora_324.trc

Elapsed: 00:00:00.01


Looking at the Tkprof Results.

********************************************************************************

SQL ID : 77c59h44y45p9
delete from "SCOTT"."SALES_MV"


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          2          0           0
Execute      1      1.42       7.10        154        215     213156       69489
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.42       7.10        154        217     213156       69489

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 81     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  DELETE  SALES_MV (cr=263 pr=154 pw=154 time=0 us)
  69489   MAT_VIEW ACCESS FULL SALES_MV (cr=203 pr=0 pw=0 time=735 us cost=60 size=930917 c



Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                       154        0.29          3.64
  log file switch completion                      2        0.38          0.64
  log buffer space                               13        0.50          1.22
********************************************************************************

SQL ID : 8905utmsum1qv
ALTER SUMMARY "SCOTT"."MV_CUST_SALES_AGG" COMPILE


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.00          0          0          1           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.00          0          0          1           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 81     (recursive depth: 2)
********************************************************************************

/* MV_REFRESH (INS) */INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "SCOTT"."SALES_MV"("CUST_ID","SALES_AMT","TXN_COUNT") SELECT "SALES
"."CUST_ID",SUM("SALES"."SALES_AMOUNT"),COUNT("SALES"."SALES_AMOUNT") FROM "SCOTT"."CUSTOMERS" "CUSTOMERS","SCOTT"."SALES" "SALES"
WHERE "SALES"."CUST_ID"="CUSTOMERS"."CUST_ID" GROUP BY "SALES"."CUST_ID"

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          5           0
Execute      1      2.34      17.04      11380      12391     142849       69489
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      2.34      17.04      11380      12391     142854       69489

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 81     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=12415 pr=11380 pw=11380 time=0 us)
  69489   HASH GROUP BY (cr=11897 pr=11380 pw=11380 time=384 us cost=6384 size=833868 card=6948

2223668    TABLE ACCESS FULL SALES (cr=11897 pr=11380 pw=11380 time=7837 us cost=2539 size



Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file scattered read                        124        0.80         13.04
  db file sequential read                         5        0.31          0.63
  log buffer space                                6        0.13          0.38
********************************************************************************


Tom:
It looks like a DELETE is happening against the SALES_MV materialized view instead of a TRUNCATE during the complete refresh. Is something wrong in my test case?
Tom Kyte
August 19, 2010 - 12:26 am UTC

look at the atomic_refresh parameter - set it to false and it'll go back to truncate + insert /*+APPEND*/


that was changed in 10g - 9i and before did truncate+insert/*+APPEND*/ by default, 10g does delete+insert by default.


In 9i if you want the 10g and above behavior you would use a snapshot refresh group.

in 10g if you want the 9i and before behavior you would use atomic_refresh

Why do updates of columns not referenced in a MV cause a refresh?

Daniel, September 19, 2010 - 12:35 am UTC

Hello.

I'm wondering why a FAST REFRESH ON COMMIT mv refreshes when the only updates to the base table are to columns that are not referenced by either the mv or the mv log.

I must misunderstand how this is supposed to work because it appears to be an optimisation that could be done.


Thanks.

Test case follows:

SQL*Plus: Release 11.2.0.1.0 Production on Sun Sep 19 04:34:35 2010

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining      
and Real Application Testing options                                        


SQL> CREATE TABLE base (
  pk NUMBER(10)
    PRIMARY KEY,
  useMe VARCHAR2(16),
  ignoreMe VARCHAR2(16)
)
;
  2    3    4    5    6    7
Table created.

SQL> CREATE MATERIALIZED VIEW LOG ON base
  WITH
    PRIMARY KEY,
    ROWID
  (useMe)
;
  2    3    4    5    6
Materialized view log created.

SQL> CREATE MATERIALIZED VIEW testMV
  REFRESH FAST ON COMMIT
  AS
    SELECT
      b.pk AS b_pk,
      b.useMe AS b_useMe
    FROM
      base b
    WHERE
      useMe LIKE 'X%'
;
  2    3    4    5    6    7    8    9   10   11
Materialized view created.

SQL> SELECT
    mview_name,
    TO_CHAR(last_refresh_date, 'YYYY/MM/DD HH24:MI:SS') AS refresh_date
  FROM
    user_mviews
  WHERE
    mview_name = 'TESTMV'
;
  2    3    4    5    6    7    8
MVIEW_NAME                     REFRESH_DATE
------------------------------ -------------------
TESTMV                         2010/09/19 05:08:41

SQL> INSERT INTO base
  VALUES (
    1,
    'used',
    'ignored'
  )
;
  2    3    4    5    6    7
1 row created.

SQL> commit;

Commit complete.

SQL> SELECT
    mview_name,
    TO_CHAR(last_refresh_date, 'YYYY/MM/DD HH24:MI:SS') AS refresh_date
  FROM
    user_mviews
  WHERE
    mview_name = 'TESTMV'
;
  2    3    4    5    6    7    8
MVIEW_NAME                     REFRESH_DATE
------------------------------ -------------------
TESTMV                         2010/09/19 05:09:15

SQL> UPDATE base
  SET
    useMe = useMe
;
  2    3    4
1 row updated.

SQL> commit;

Commit complete.

SQL> SELECT
    mview_name,
    TO_CHAR(last_refresh_date, 'YYYY/MM/DD HH24:MI:SS') AS refresh_date
  FROM
    user_mviews
  WHERE
    mview_name = 'TESTMV'
;
  2    3    4    5    6    7    8
MVIEW_NAME                     REFRESH_DATE
------------------------------ -------------------
TESTMV                         2010/09/19 05:09:53

SQL> UPDATE base
  SET
    ignoreMe = ignoreMe
;
  2    3    4
1 row updated.

SQL> commit;

Commit complete.

SQL> SELECT
    mview_name,
    TO_CHAR(last_refresh_date, 'YYYY/MM/DD HH24:MI:SS') AS refresh_date
  FROM
    user_mviews
  WHERE
    mview_name = 'TESTMV'
;
  2    3    4    5    6    7    8
MVIEW_NAME                     REFRESH_DATE
------------------------------ -------------------
TESTMV                         2010/09/19 05:10:23


Tom Kyte
September 20, 2010 - 2:13 pm UTC

there are optimizations that could be done - however, the overhead of them would impose performance de-optimizations themselves. We'd have to have a way of tracking every change down to the column level and then reviewing at commit time all of the changes to determine the complete set of columns modified (remember, in general you can update a lot of stuff - it is not typically just a single row in a transaction).

So, this is an optimization that could be done (as they all are) but would probably add more de-optimizations than optimizations overall (eg: the return on investment would be small or negative)

Daniel, September 21, 2010 - 5:34 pm UTC

My thought process was that
- the maximal set of columns that a statement can update is statically known,
- the columns in the mv log and the columns used in the mv's defining query are statically known,
- if there is no intersection between the two sets for a given statement then rows updated by that statement need not be added to the mv log... no need for comparison of old/new column values at commit time.

Granted, the optimisation would only be valid for a subset of mvs but could be a great improvement when possible, and you know at compile time whether to enable it so any pessimisation would probably be insignificant.

I recall being a bit surprised you can only have one mv log for this very reason.


All that aside, is there a standard solution to this 'excessive' refresh problem?

Assume an employee table with dozens of columns including last_update_time and salary, and an expensive mv that uses nothing but salary from employee in its definition.

To avoid updates of, e.g., last_update_time initiating refreshes of the expensive mv, is the standard solution to define an intermediary mv that contains only the salary column?

Thanks.
Tom Kyte
September 23, 2010 - 9:42 am UTC

... - the columns in the mv log and the columns used in the mv's defining query
are statically known, And that we'd have to compare the log to the table row by row to see which columns were modified for each row in the log to determine if a given MV would need to be updated or not for each and every transaction ...


think about it - you would be adding a lot of work to every transaction in order to avoid a typically small amount of work.

if you have an extreme edge case whereby MOST of your transactions do not affect the columns the materialized view is utilizing - then you might consider vertical partitioning (two base tables, MV only references one of them)
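
A sketch of that vertical partitioning, using hypothetical table and column names: the frequently updated column lives in one table, the MV depends only on the other, so churn on last_update_time never touches the MV log:

-- hypothetical split of a wide EMPLOYEE table
create table emp_volatile (
  empno            number primary key,
  last_update_time date );              -- updated constantly; no MV depends on it

create table emp_salary (
  empno  number primary key,
  salary number );                      -- only this table carries an MV log

create materialized view log on emp_salary with primary key, rowid (salary);

create materialized view salary_mv
  refresh fast on commit
  as select rowid rid, empno, salary from emp_salary;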

Daniel, September 23, 2010 - 6:00 pm UTC

Hmmmm, one, or both, of us is not following the other's reasoning.

"And that we'd have to compare the log to the table row by row to see which columns were modified for each row in the log to determine if a given MV would need to be updated or not for each and every transaction"

No. I am proposing that a row is only added to the mv log after being updated by a given statement if it is possible for the statement to have modified one of the columns that is used by the mv log or dependent mv. Just don't log the change if it's a change that no one cares about.

No need for any comparisons at any point. When a update/insert statement is compiled it has the code to update given mv log(s) linked in or not.

I'm not sure if it's feasible and obviously there would be challenges with this approach, such as wanting multiple mv logs, definition of a new mv changing an already compiled statements behaviour with regard to inserting into mv logs, and probably many more. My main point is that you'd not need to carry out row-by-row comparisons.

Thanks.
Tom Kyte
September 27, 2010 - 10:02 am UTC

but logs are NOT specific to a materialized view - they are used by ALL materialized views and in general - there can and are more than one.

And, supposing we did that, that would mean that every row modified would have to under go an inspection to see if it met the criteria for logging. Meaning - every single row would have to undergo a relatively expensive operation (check the columns after the update took place) to see if it should be logged or not.

... When a update/insert statement is
compiled it has the code to update given mv log(s) linked in or not.
...

not true, not true at all. think "triggers" for example. Think defaults.

You would need row by row comparisions - at logging time or otherwise. And it would be a lot of overhead for the vast majority of people...

Daniel, October 02, 2010 - 3:57 am UTC

"...logs are NOT specific to a materialized view..."
Yes, understood. That is why I said earlier that you would probably want (though not need) the ability to declare multiple mv logs. A given mv would utilise the minimal sufficient mv log that was available.


Me: "When a update/insert statement is compiled it has the code to update given mv log(s) linked in or not."

Just to be clear, the above is what I would have happen under my proposal, not what I think happens now.


"not true, not true at all. think "triggers" for example. Think defaults."

Assuming you understood me... Yes, true. Triggers and defaults? The columns they (could possibly) affect are known at the time a statement is compiled, hence it is possible to determine what mv log(s) the statement should modify. Again, no row comparisons necessary.

Sure, you say "fat chance isn't going to happen, too different from the way things work now, we don't like adding a mv invalidating swathes of compiled statements, we don't think people will use it, ...' or some other practical constraint then that's fair enough.

But you haven't said anything that changes my opinion that the db contains all the static information required to determine if a given statement could affect the content of a given mv and hence, in principle, the number of refreshes could be reduced... *without* requiring row value comparisons.

Hopefully I've made myself clear, if not perhaps a concrete example would help.

Thanks
Tom Kyte
October 04, 2010 - 1:54 am UTC

... That is why I said earlier that you would probably want
(though not need) the ability to declare multiple mv logs. ...

and then we are smack into "a huge penalty for all operations to achieve what for most would be a small insignificant gain elsewhere". In other words - a ton of work for little benefit.


and checking at hard parse time would also have the same effect - you would have a large penalty for all - with a small gain elsewhere for a few.


So, I'm saying "large penalty, small gain" - that is what I've been saying. The thought of having multiple mv-logs would be a none-starter, that overhead alone would be enough to kill it. People complain about the SINGLE mv-log that already exists.

Daniel, October 26, 2010 - 8:43 pm UTC

..."a huge penalty for all operations..."

I don't think you've stated exactly what this penalty consists of. I assume you mean the cost of inserting rows into the mv log(s) when the row is modified in the underlying table. You do not need to perform row-by-row comparisons at execution time so that rules that out as a source of the penalty.

With my proposal, should you wish to, you could have a single mv log that suffered update for each and every updated row in the underlying table. That would have exactly the same 'huge penalty' as the current scheme.

Should you wish to you could have multiple mv logs that each suffered update for subsets of the updated rows in the underlying table.

For some use cases there would be far fewer updates across all of the mv logs put together than there would be with the current single mv log (because you can statically determine which updates cannot affect dependent mvs).

Furthermore, with multiple mv logs, in some situations you will have fewer mvs depending upon a given mv log. The result being that the mv log may not need to retain rows for the less-frequently-refreshed mvs.

"...a ton of work for little benefit."

You would use multiple mv logs when it was appropriate to do so. You would use a single mv log when it was appropriate to do so. I think there are many situations when it would be appropriate to use multiple mv logs.


"...checking at hard parse time... ...large penalty for all... ...small gain... ...for a few."

I don't see it. I would think it would be trivial to determine that there is a single mv and skip any extra work in the degenerate case.

When you do want to use multiple mv logs, yes, you do incur costs for the hard parse.
How big? I don't know, I think it would be a tiny fraction of all the other work that goes on.
The other issue is that the addition of a new mv or mv log would invalidate some proportion of your parsed statements and they'll need to be reparsed. Adding an mv or mv log is astronomically infrequent compared to execution of statements on logged tables so I can't believe this is a serious objection (in conjunction with the fact that if you don't use multiple mv logs everything works just like the status quo).


"People complain about the SINGLE mv-log that already exists."

I complain about the single mv log, because it's incredibly inefficient for some of the things I want to do. Smart updating of an mv log is an improvement. Multiple mv logs is an improvement because it can leverage smart updating to scale better.

If what you're trying to do demands logging of every single row, and all dependent mvs refresh at the same time, just don't make use of the extra capabilities... you could stick with a status quo mv log for, I imagine, a miniscule extra cost at definition and hard parse time.

To be clear, multiple mv logs without smart updating would be stupid.

Thanks.
Tom Kyte
October 27, 2010 - 5:49 am UTC

the huge penalty is the fact that most every statement would now have more than one mv log to populate. It would have an mv log per materialized view - in most cases. Every insert and delete would be affected. Most every update would be as well. For little general benefit (the rare case would benefit, not the general case)


How to create a materialized view when the table is big

Rajesh Pahade, December 14, 2010 - 11:33 am UTC

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> desc audita1
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------------------

 RECORD_KEY                                         NUMBER(38)
 RECORD_NUMBER                                      NUMBER(38)
 FIELDNAME                                          VARCHAR2(100)
 OLD_SCALAR                                         VARCHAR2(1000)
 NEW_SCALAR                                         VARCHAR2(1000)
 OLD_ARRAY                                          CLOB
 NEW_ARRAY                                          CLOB
SQL> select count(1) from audita1;
  COUNT(1)
----------
  33329970
In the AUDITA1 table our application only inserts data; it never updates any records. This table does not have a primary key.
I am trying to create the MV as follows in another database, using database link SMPROD:
CREATE MATERIALIZED VIEW AUDITA1 TABLESPACE SM_RDSMV REFRESH FAST START WITH SYSDATE NEXT SYSDATE+1/96 WITH ROWID
AS SELECT * FROM AUDITA1@SMPROD;

We kept this running for 30 hours but it did not create the MV successfully.
Could you please suggest what is wrong with it, or what is the best way of creating a materialized view in this scenario?
Thanks in advance!
Regards,
Rajesh Pahade

Tom Kyte
December 14, 2010 - 1:11 pm UTC

select count(1) from audita1;

why do you count 1's? Didn't you really want to count rows? count(*) counts rows - count(1) counts the number of non-null 1's, which isn't what you meant. Using count(1) makes you look like you might not understand how the count aggregate works...



you are trying to move 33+ million records over the network - records of unknown size. network of unknown capabilities. clobs are going to be slow as they are done row by row and are in general "big" - big will equate to "not fast" over the network...

have you read about offline instantiation?

http://www.oracle.com/pls/db112/search?remark=quick_search&word=offline+instantiation

count(1)

Mahesh, December 14, 2010 - 6:18 pm UTC

create table cart_clientes
( x number,
y date,
z varchar2(30)
) tablespace DSEC;

select count(*) from all_users;
52

INSERT INTO cart_clientes
SELECT null,null,null FROM all_users;

-- 52 nulls
INSERT INTO cart_clientes
SELECT user_id, created, username FROM all_users;

-- 52 non -null
select count(*) from cart_clientes
-- 104
select count(1) from cart_clientes
--104

What is the difference between count(*) and count(1)?

Tom Kyte
December 15, 2010 - 12:35 am UTC

count(1) counts the number of non-null 1's in a table. In short, that sounds silly doesn't it - why would you count the number of non-null 1's in a table?

count(*) counts the number of rows in the table. That sounds sensible and reasonable right?


count(*) is the way to count rows. Count(1) is the way to count non-null occurrences of the number 1.



count(1) is just wrong, it shows the coder doesn't really understand the proper use of the count aggregate. it makes me nervous - it means they just mimic other code they've seen without perhaps understanding what it means.

A reader, December 29, 2010 - 12:44 am UTC

Dear Tom,

I have created this trigger to refresh an MV, to take a snapshot of the table before insert or update. However, it works for the first insert/update but fails for successive ones. Could you please advise where I am going wrong?

create or replace trigger TRG_T2
before insert or update on T2
declare
X number;
pragma AUTONOMOUS_TRANSACTION;
begin
SYS.DBMS_JOB.SUBMIT
( job => X
,what => 'dbms_mview.refresh(''my_view'', ''C'');'
,next_date => sysdate + (5/(60*60*24))
,no_parse => TRUE
);
commit;
end;
Tom Kyte
December 30, 2010 - 1:06 pm UTC

OH MY GOSH. that is wrong on so many levels.


If you want a materialized view to refresh after each modification - please define it as "refresh fast on commit"


And think about what you have done here.

You refresh BEFORE insert or update - think about that for a moment.... Really think about that... What do you think that would do - what would get refreshed if you refresh BEFORE the insert or update on T2??

And think about why they might fail.... An autonomous transaction runs as if it were in another session (like having two sqlplus sessions open...). You cannot see changes made in session 2 from session 1 UNTIL session 1 commits them. therefore, even if you made this an AFTER trigger - the new transaction that runs the refresh WOULD NOT SEE the modifications.



How about this - tell us what you are trying to do - and then we can tell you how to do it. This bit of code is pretty scary looking. There are many misconceptions by the person who developed it about how things work.

This is reason 43241 I would like to have

o triggers
o autonomous transactions
o when others

removed from the PL/SQL language (with commit and rollback being a close 4th place...)

refresh big materialized view using datapump

Daniel, April 07, 2011 - 7:07 am UTC

Hi Tom,

I am trying to create a materialized view for a table that has 300 million+ rows. The server the materialized view will live on is not very fast, and of course I don't want the view to be refreshing for 3 weeks.
Right now I have started the refresh on the view, and after 2 days it is at 13% complete.

My question is: can datapump be used for this kind of materialized view (with rowid option)?

On a primary key based MV, I could import the table and create the MV based on the existing (created at import) table. But for a rowid based MV, can it be done?

Thank you
Tom Kyte
April 12, 2011 - 1:23 pm UTC

Yes, you can do what is known as an offline instantiation

http://www.oracle.com/pls/db112/search?remark=quick_search&word=offline+instantiation

MV

Sam, April 12, 2011 - 10:26 pm UTC

Tom:

Welcome back from Europe! I am glad you are back.

wow, that trigger that does a refresh before insert for the MV could cause a TOTAL DISASTER if the transaction were rolled back. Even if it did not roll back, it would kill the performance of the transaction.


Can an MV be created to do a fast refresh AFTER COMMIT on the master table? For example, if EMP is the master table in a remote database and I have an MV in the local DB: if a user added ONE record to EMP in the remote DB and committed, would the MV do a fast refresh for that record only AFTER the commit? If not, can a manual job be scheduled in the transaction to do that, or can it only do a COMPLETE refresh (not FAST)?
Tom Kyte
April 13, 2011 - 10:00 am UTC

a materialized view may be refreshed:

a) on demand
b) on a schedule
c) on commit
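Sketches of each option (object names hypothetical; the FAST variants assume a suitable materialized view log already exists on EMP):

```sql
-- a) on demand: you call the refresh yourself
create materialized view mv_on_demand
  refresh fast on demand
  as select deptno, count(*) cnt from emp group by deptno;
-- later:  exec dbms_mview.refresh( 'MV_ON_DEMAND', method => 'F' )

-- b) on a schedule: START WITH / NEXT drive a background refresh job
create materialized view mv_scheduled
  refresh force
  start with sysdate next sysdate + 1/24   -- roughly hourly
  as select deptno, count(*) cnt from emp group by deptno;

-- c) on commit: local master tables only, never over a database link
create materialized view mv_on_commit
  refresh fast on commit
  as select deptno, count(*) cnt from emp group by deptno;
```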


If you need a refresh to happen locally due to a remote operation you just performed - you are doing it wrong, totally wrong. You shouldn't have two databases in the first place.


What is your GOAL sam, don't tell me what you are trying to do - tell me the business requirement you are trying to satisfy.

Glad he's back

Mark Williams, April 13, 2011 - 3:47 pm UTC

> Welcome back from Europe! I am glad you are back.

Well, at least you had the time to spend on APEX during Tom's marathon:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:12691263416650#3182207400346587685

With apologies to Dr. Christian Szell (Laurence Olivier), "Is it safe?"

MV

sam, April 13, 2011 - 9:43 pm UTC

Tom:

<<If you need a refresh to happen locally due to a remote operation you just performed - you are doing it wrong, totally wrong. You shouldn't have two databases in the first place.
What is your GOAL sam, don't tell me what you are trying to do - tell me the business requirement you are trying to satisfy. >>


My goal is to improve performance with the architecture that I have. I know the only long-term solution is to have one database, but for the time being I have to do the best I can with distributed databases.


I have an inventory database. There is a lot of code that keeps looking up USERS, ORGS and VENDORS data in a remote database sitting on a machine next to it.

The data in those 3 remote tables hardly ever changes, but it may: add a record, edit a record, delete a record.


To speed things up I created LOCAL MATERIALIZED VIEWS for these 3 tables and changed the code to use the MVs. Things became much faster.

But I am refreshing the MVs nightly. A user might run into a situation where he adds a vendor, which gets added to the remote DB, and then selects it on the order form. I have to make that vendor available to him right away instead of the next day.

So I am thinking of doing or scheduling a FAST REFRESH on the MV after a commit happens on those remote tables. It should happen very quickly for one record. I don't want to do all these SELECTs remotely, but I need to make sure the MV is updated if and only if something changes in the remote master table.


what is the best thing to do other than creating one DB for now?
Tom Kyte
April 14, 2011 - 9:37 am UTC

My goal is to improve performance with the architecture that i have

and the typical way to do that is.... change.


so i am thinking of doing or scheduling a FAST REFRESH on the MV after a commit
happens on those remote tables.


You cannot. Refresh fast on commit is for local tables only. And you cannot specify that the single row you just modified would be the only thing refreshed (meaning your comment about "being fast because of only one row" doesn't hold water).

You'll have to create your own tables, insert locally and remotely - and when you query locally - union all your new tables that hold the newly inserted data. when you refresh the MV's at night - delete the stuff in these new tables.
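A sketch of that approach (all object names hypothetical):

```sql
-- 1) the application inserts into the remote table AND a small local
--    "delta" table in the same transaction
insert into vendors@remote_db ( vendor_id, vendor_name ) values ( :id, :nm );
insert into vendors_delta     ( vendor_id, vendor_name ) values ( :id, :nm );
commit;

-- 2) readers query a view that unions the nightly-refreshed MV with
--    anything added since the last refresh
create or replace view vendors_v as
select vendor_id, vendor_name from vendors_mv
union all
select vendor_id, vendor_name from vendors_delta;

-- 3) the nightly job refreshes the MV, then empties the delta table
begin
   dbms_mview.refresh( 'VENDORS_MV', method => 'C' );
   delete from vendors_delta;
   commit;
end;
/
```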

MV

Sam, April 14, 2011 - 1:46 pm UTC

Tom:

I did not quite understand you fully.

1) Why create a local table too? Can't I make the MV writeable and insert the vendor into both the REMOTE TABLE and the LOCAL MV?

I think if I do this I won't be able to do a FAST REFRESH, because the record was added directly to the MV and there will be a PK violation on refresh, so I would have to do a COMPLETE REFRESH (copy the whole data).

2)

<<You cannot. refresh fast on commit is for local tables only. And you cannot specify that your single row you just did would be the only thing refreshed (meaning your comment about "being fast because of only one row" doesn't hold water). >>

Do you mean the MV and the master table have to be in the same database for fast-on-commit to work?

I am not specifying anything. If users entered 10 vendors, I assume the MV log for fast refresh will have 10 entries and the refresh should go much faster than copying the whole table.

The whole table is quite small too, so I could do a complete refresh as well, but I did not see a need for it.


Tom Kyte
April 14, 2011 - 5:37 pm UTC

1) you could, but as you said you'd lose the fast refresh. Me, thinking you wanted a fast refresh, proposed something that would allow your current refresh to keep working.

that is why I suggested to consider a local table.

2) yes, that is what I said. refresh fast on commit is for local - single database - no remote - no links - only.


If you can do a complete refresh - then go ahead and just insert into the MV as well and the complete refresh will take care of it.


It is hard to answer when you have to pull information out bit by bit by bit you know. I have to make some assumptions.

MV

Sam, April 14, 2011 - 6:54 pm UTC

Tom:

Thanks, but let us forget fast refresh on commit for remote tables.

Transaction 1 in LOCAL DATABASE inserts a vendor into VENDORS table in REMOTE DATABASE B.

Can't this transaction schedule a JOB that does a FAST REFRESH of the local MATERIALIZED VIEW -- similar to scheduling a job for sending emails?

sort of FAST REFRESH on demand thing.

this was my idea.

There might be a few seconds' delay because of the job, but I can handle that. It sounds easier than inserting into the REMOTE TABLE and the LOCAL MV and then doing a complete REFRESH every night, even though the tables are pretty small (about 2,000 records).
Tom Kyte
April 14, 2011 - 7:05 pm UTC

Sam, I'm done.

over and out.


figure it out yourself. I've told you what you can do, if you choose to not listen, that's cool. but don't ask me to rubber stamp your ideas, we've been down that path and I've said it before - I won't do that for you.


there might be a few seconds delay because of the job but I can handle that.

might be a few seconds, or minutes, or longer. But how would you deal with that??? If you can deal with a few seconds, you can deal with minutes or hours then - go for it.

2,000 records is trivial. Go for the full refresh.

Complete refresh

Ajit, August 19, 2011 - 1:32 am UTC

Hi Tom,

Can you please tell me the mechanism of a complete refresh, i.e. how Oracle internally works during the complete refresh of an MV?

Regards
Ajit
Tom Kyte
August 30, 2011 - 12:55 pm UTC

it depends.

in 10g and above, a complete refresh by default will:

a) delete from materialized view
b) insert into materialized view select ....
c) commit;

If you use atomic_refresh=>false on the refresh - it will instead:

a) truncate materialized view (data disappears)
b) insert /*+ append */ into materialized view select .. (direct path load it)
c) commit (data re-appears)
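In SQL*Plus terms (MV name hypothetical):

```sql
-- default: atomic complete refresh (delete + insert + commit) -
-- readers never see an empty MV, but plenty of undo/redo is generated
exec dbms_mview.refresh( 'MY_MV', method => 'C' )

-- non-atomic: truncate + direct-path insert - much less undo/redo for
-- big MVs, but the data "disappears" until the refresh commits
exec dbms_mview.refresh( 'MY_MV', method => 'C', atomic_refresh => false )
```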

Partitioned Materialized view

A reader, October 01, 2011 - 11:08 am UTC

hi tom,
first of all, thank you for sharing your knowledge with us. I have one question, and I know you have the answer.
I have one materialized view, and we have four different select queries using domain indexes, each with a different sort order. We see a high number of consistent gets, and because of this the query not only takes more time but consumes more CPU as well. I'm thinking of having this materialized view partitioned into 4 partitions, a separate partition for each sort-order query. Now the challenge is how I would refresh the MView. Is it possible to partition the MView this way? If yes, how? And how should it be refreshed? Currently it is refreshed every 30 minutes, and we have an MView log created on it.

I would appreciate your time and support if you could provide your view as soon as possible; this query has become a nightmare on our production system.
thanks
Tom Kyte
October 01, 2011 - 4:12 pm UTC

i'm thinking to have this materialized view get partitioned
with 4 partitioned, separate partition for each sort order query.


I cannot imagine how partitioning would help sorting here - you'll need to give a clear example of what you are doing.

A reader, October 01, 2011 - 8:18 pm UTC

this is what i was thinking.
partition 1 will have the data already presorted to match the first query's sort order (say descending), and similarly partitions 2, 3 and 4.
So basically all 4 partitions will have the same data, but each stored in the order specific to one query's sort order.
Tom Kyte
October 02, 2011 - 8:22 am UTC

and how would that help - since you have to have an order by on the query anyway and we would not really know that your data is 'sorted' (it probably isn't, you can load data that is sorted and it will be more or less sorted on disk - but it will NOT be assured to be stored exactly like that - some rows can easily get "out of place" - but they are not really out of place because a heap table doesn't ever say "I shall store rows in some order")

Only if you can use an index organized table would it be stored "sorted" in any fashion but even then I'd hazard a guess that we wouldn't ever retrieve it sorted since there is so much data - we'd use multiblock IO to read the data (out of order) and sort it again.

I don't see the logic behind four partitions and no, this will not provide you any benefits.


tell us more about this data. tell us more about these domain indexes (what kind are they? spatial? text? something else?). tell us more about how the application that requests this data interacts with the database - what does it do with the data.

A reader, October 02, 2011 - 10:22 am UTC

We use Oracle Text for website search. The query uses a domain index with a CONTAINS clause. The application calls a PL/SQL package, which in turn submits several SQL statements. The select query with the domain index retrieves data from the MViews and inserts it into a GTT; before inserting into the GTT it sorts the data in descending order. We have four columns in the ORDER BY clause, all descending, and the domain index is created on all four columns with ORDER BY DESC. The DOMAIN_INDEX_NO_SORT hint is passed into the select statement.
So when a user submits any search text into the search area of the website, the query takes it and comes back with 160K pages, 8 pages at a time.
Tom Kyte
October 03, 2011 - 9:44 am UTC

I use Oracle text for this website search.

I use a domain index

I sort by whatever you tell me to sort by - the default is "score" - I use an order by.

I get the first page and STOP. I do not get all of the pages and save them. that would be an UTTER AND COMPLETE waste of resources. All you need to do in a search is say "Here is 1-N rows (page 1)", there are more pages (click here to go to the next page). If you think you want to go to the last page - you just really want to sort DESC, not ASC (then last page becomes first page! like magic).

Look at google (sort of the gold standard). They get page one really fast and tell you you are looking at 1-10 of about N - where N is a total guess (just like I do on the home page). Then they stop. You can get to page 99 of the search on google if you hit the page next button over and over - but you'll notice that the pages get slower and slower as you go up. If you try to go to page 100 - they will tell you "ummm, no, think again, refine your search"


No human being on this planet can use 160k pages - NONE.


My advice would be to rewrite your search approach entirely. Scrap the global temporary table. Scrap the 160k pages. Just run the query, get the first page, STOP.

You'll be surprised.
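A sketch of the classic top-N pagination pattern for this (table, column and bind names hypothetical): fetch only rows :first_row through :last_row and stop, instead of materializing all 160K pages into a GTT.

```sql
-- inner query: the Text search, ordered by relevance;
-- middle query: stop after :last_row rows (ROWNUM is assigned as rows
-- are produced); outer query: discard rows before :first_row
select *
  from ( select /*+ first_rows(25) */ a.*, rownum rnum
           from ( select id, title, score(1) relevance
                    from docs
                   where contains( doc_text, :search_term, 1 ) > 0
                   order by score(1) desc ) a
          where rownum <= :last_row )
 where rnum >= :first_row;
```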

Thank you so much

A reader, October 03, 2011 - 7:22 pm UTC

Salute to Great Tom:)
I completely agree with you and it makes a lot of sense to me, but it does not make any sense to the developers, and it has been very hard to change their minds. The entire application is built on GTTs; you could say the GTT is the core of the app. This query is executed all the time, sometimes takes forever, and gets timed out. Any suggestion on how to get the developers to agree to this?
Thanks for your time and support

Tom Kyte
October 04, 2011 - 11:22 am UTC

tell them to study google for a minute, tell them "if it works for them - why wouldn't it work for you?" ask them "do you not want to be efficient, fast, smart like they are". ask them "do you not want to make things better, faster, cheaper for your end users".

maybe get them to prototype it and let the END USERS decide what they like better.

slow, resource intensive search. oh, you get to know there are exactly 162,123 pages by the way.

fast, responsive search. oh, you get to know there are more than one page - we'll let you go to any of the first ten, next ten and so on pages - but we'll never tell you there are 162,123 pages - but only because you don't really care.


Materialized View

A reader, October 05, 2011 - 9:52 am UTC

I am working for a client that has an instance (parallel to the production instance) used for reporting; it gets cloned every day from production to provide end-of-day point-in-time data. Currently it serves one country only, but it becomes a challenge if the same instance is used by different countries and each country wants its own end-of-day point-in-time data. That would make it necessary to clone the instance twice, which is not a good option: to clone an instance the DBAs have to take the production system down, and they cannot do that while other countries might be using it. Do you have any solution to this requirement? We thought of creating MVs for both countries and scheduling a refresh at the end of each business day, but this requires changes to a lot of reporting code, as it would have to point to the MVs instead of the tables.

Materialized view Refresh in 11GR2

Rajeshwaran, Jeyabal, November 15, 2011 - 11:16 pm UTC

Tom:

I was reading about materialized view from product docs
http://download.oracle.com/docs/cd/E11882_01/server.112/e25554/basicmv.htm#i1006519
<quote>
You can achieve better fast refresh performance for local materialized views if you use a materialized view log that contains a WITH COMMIT SCN clause
</quote>

rajesh@ORA11GR2> create table t1
  2  nologging
  3  as
  4  select level as empno,
  5        dbms_random.string('A',30) as ename,
  6        sysdate as hire_date,
  7        mod(level,100) + 1 as deptno,
  8        mod(level,1000) as comm,
  9        dbms_random.value as salary
 10  from dual
 11  connect by level <=1000000;

Table created.

Elapsed: 00:00:58.79
rajesh@ORA11GR2> create table t2
  2  as
  3  select level as deptno,
  4        dbms_random.string('A',20) as dname,
  5        dbms_random.string('B',10) as loc
  6  from dual
  7  connect by level <=100;

Table created.

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> exec dbms_stats.gather_table_stats(user,'T1');

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.70
rajesh@ORA11GR2> exec dbms_stats.gather_table_stats(user,'T2');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.14
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t1 add constraint t1_pk primary key(empno);

Table altered.

Elapsed: 00:00:15.82
rajesh@ORA11GR2> alter table t2 add constraint t2_pk primary key(deptno);

Table altered.

Elapsed: 00:00:00.01
rajesh@ORA11GR2> alter table t1 add constraint t1_fk foreign key(deptno) references t2;

Table altered.

Elapsed: 00:00:08.11
rajesh@ORA11GR2> alter table t1 modify deptno not null;

Table altered.

Elapsed: 00:00:06.20
rajesh@ORA11GR2> create materialized view log on t1 with rowid(empno,ename,deptno) including new values;

Materialized view log created.

Elapsed: 00:00:00.68
rajesh@ORA11GR2> create materialized view log on t2 with rowid(deptno,dname,loc) including new values;

Materialized view log created.

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view t1_t2_mv
  2  build immediate
  3  refresh fast on demand
  4  enable query rewrite
  5  as
  6  select d.deptno,d.dname,count(*)
  7  from t1 e,t2 d
  8  where e.deptno = d.deptno
  9  group by d.deptno,d.dname;

Materialized view created.

Elapsed: 00:00:10.89
rajesh@ORA11GR2>
rajesh@ORA11GR2> update t1
  2  set ename = lower(ename);

1000000 rows updated.

Elapsed: 00:01:26.53
rajesh@ORA11GR2>
rajesh@ORA11GR2> commit;

Commit complete.

Elapsed: 00:00:00.32
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter session set timed_statistics=true;

Session altered.

Elapsed: 00:00:00.01
rajesh@ORA11GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

Elapsed: 00:00:00.23
rajesh@ORA11GR2>
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV');

PL/SQL procedure successfully completed.

Elapsed: 00:05:12.22
rajesh@ORA11GR2>


Tkprof shows this:

SQL ID: 6qmct6a9qt5gu
Plan Hash: 0
BEGIN dbms_mview.refresh('T1_T2_MV'); END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.13          0          0          0           0
Execute      1      1.17      12.91      16737      48347         20           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.17      13.04      16737      48347         20           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91  

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00



Now dropped and recreated the Materialized view log with COMMIT SCN

rajesh@ORA11GR2> drop materialized view log on t1;

Materialized view log dropped.

Elapsed: 00:00:00.64
rajesh@ORA11GR2> drop materialized view log on t2;

Materialized view log dropped.

Elapsed: 00:00:00.06
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view log on t1 with rowid(empno,ename,deptno), commit scn including new values;

Materialized view log created.

Elapsed: 00:00:00.32
rajesh@ORA11GR2> create materialized view log on t2 with rowid(deptno,dname,loc), commit scn including new values;

Materialized view log created.

Elapsed: 00:00:00.04
rajesh@ORA11GR2> drop materialized view t1_t2_mv;

Materialized view dropped.

Elapsed: 00:00:00.56
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view t1_t2_mv
  2  build immediate
  3  refresh fast on demand
  4  enable query rewrite
  5  as
  6  select d.deptno,d.dname,count(*)
  7  from t1 e,t2 d
  8  where e.deptno = d.deptno
  9  group by d.deptno,d.dname;

Materialized view created.

Elapsed: 00:00:04.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> update t1
  2  set ename = lower(ename);

1000000 rows updated.

Elapsed: 00:01:36.61
rajesh@ORA11GR2>
rajesh@ORA11GR2> commit;

Commit complete.

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter session set timed_statistics=true;

Session altered.

Elapsed: 00:00:00.00
rajesh@ORA11GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

Elapsed: 00:00:00.04
rajesh@ORA11GR2>
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV');

PL/SQL procedure successfully completed.

Elapsed: 00:02:43.58
rajesh@ORA11GR2>


Tkprof shows this:

SQL ID: 6qmct6a9qt5gu
Plan Hash: 0
BEGIN dbms_mview.refresh('T1_T2_MV'); END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      2.28      17.75      20413      44369         19           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      2.28      17.75      20413      44369         19           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 91  

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  log file sync                                   1        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00


Questions:
1) I don't see any improvement in performance with COMMIT SCN included. Am I doing something wrong here? Can you help me?

Materialized view Refresh in 11GR2

Rajeshwaran, Jeyabal, November 18, 2011 - 8:47 pm UTC

Tom:

Am I not providing enough details for you to answer the above question? Can you please answer it?
Tom Kyte
November 21, 2011 - 1:35 pm UTC

you did everything as a single big transaction. Having a commit scn, a single one, isn't going to do very much.

You might have a lot of row changes - but only one transaction. Not very real world is it? You would have to try to mimic real life.
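For instance, a sketch that turns the test above into many small transactions (reusing the T1 / T1_T2_MV names from the example) - the many-transactions workload is what COMMIT SCN-based MV logs are designed to speed up:

```sql
-- update the same million rows, but commit in 100 slices so the MV log
-- records 100 transactions instead of one big one
begin
   for i in 1 .. 100
   loop
      update t1
         set ename = lower(ename)
       where mod(empno, 100) = i - 1;
      commit;   -- one transaction per slice
   end loop;
end;
/
exec dbms_mview.refresh( 'T1_T2_MV' )
```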

reader, June 14, 2012 - 11:17 am UTC

In user_admin@test_1, I have a table test and a view v_g1_test

--Scripts for test_1 database

create user user_admin identified by "user"
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
ACCOUNT UNLOCK;

ALTER USER user_admin QUOTA UNLIMITED ON USERS;

grant connect to user_admin;
GRANT CREATE TABLE TO user_admin;
grant create view to user_admin;



conn user_admin/user@test_1

create table user_admin.test
(id number
,nm varchar2(20)
);

insert into user_admin.test values (1,'G1');
insert into user_admin.test values (2,'G2');
insert into user_admin.test values (3,'G1');
insert into user_admin.test values (4,'G2');
commit;


create view user_admin.v_g1_test
as
select * from user_admin.test where nm = 'G1';




--Scripts for test_2 database

create user user_g1 identified by "user"
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
ACCOUNT UNLOCK;

ALTER USER user_g1 QUOTA UNLIMITED ON USERS;


grant connect to user_g1;
GRANT CREATE TABLE TO user_g1;
grant create materialized view to user_g1;
grant create database link to user_g1;

create database link dblinktest_1 to user_admin identified by "user" using test_1;

In user_g1@test_2, I created materialized view that selects data from user_admin.v_g1_test using a database link


create MATERIALIZED VIEW user_g1.mv_test
as
select * from user_admin.v_g1_test@dblinktest_1

Can you please tell me how to refresh this materialized view whenever a row gets deleted, updated or inserted in user_admin.test@test_1?

Tom Kyte
June 14, 2012 - 12:33 pm UTC

you won't be able to do a fast or incremental refresh on the materialized view of a view. You'd want to do that against the base table.

reader, June 14, 2012 - 12:46 pm UTC

I created the materialized view using the base table and used the ON COMMIT refresh option, but I am getting an ORA-12054 error. I believe this is because the table is accessed over a database link. Is there a way to refresh whenever a commit happens on the table?

create MATERIALIZED VIEW user_g1.mv_test
build immediate
REFRESH FAST ON COMMIT with rowid
as
select * from user_admin.test@dblinktest_1

ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
Tom Kyte
June 14, 2012 - 3:32 pm UTC

that will never work over a database link, refresh on commit is ONLY supported in a single database - never over a link.
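What is supported over a link is a scheduled (or on-demand) FAST refresh of a primary-key based MV - a sketch using the names above, assuming USER_ADMIN.TEST is first given a primary key (the original script did not create one):

```sql
-- on the master site (test_1):
create materialized view log on user_admin.test with primary key;

-- on the MV site (test_2): refreshed automatically every 5 minutes
create materialized view user_g1.mv_test
  build immediate
  refresh fast with primary key
  start with sysdate next sysdate + 5/(60*24)
  as
  select * from user_admin.test@dblinktest_1;
```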


FAST Refresh on MV - 11GR2

Rajeshwaran, Jeyabal, June 26, 2012 - 8:52 am UTC

Tom:

I was reading the below link from product docs.
http://docs.oracle.com/cd/E11882_01/server.112/e25554/basicmv.htm#sthref182
<quote>
Note that COUNT(*) must always be present to guarantee all types of fast refresh. Otherwise, you may be limited to fast refresh after inserts only
</quote>
Also look at the Table 9-2 Requirements for Materialized Views with Aggregates
X = SUM(expr) Y=COUNT(expr) Z='-'

Questions
1) I framed the MV based on the X,Y condition and found that after deleting records from table 'T2' the staleness became UNUSABLE, but this does not happen with inserts. This contradicts the quote above. Can you please explain? (I have included count(*) as per the docs, but why is the refresh not happening on DELETE while it works on INSERT?)

Below is the script i used for testing.

rajesh@ORA11GR2> create table t1 as select * from scott.dept@ORA10GR2;

Table created.

Elapsed: 00:00:00.68
rajesh@ORA11GR2> create table t2 as select * from scott.emp@ORA10GR2;

Table created.

Elapsed: 00:00:00.04
rajesh@ORA11GR2> alter table t1 add
  2  constraint t1_pk
  3  primary key(deptno);

Table altered.

Elapsed: 00:00:00.28
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t2 add constraint
  2  t2_fk foreign key(deptno)
  3  references t1;

Table altered.

Elapsed: 00:00:00.18
rajesh@ORA11GR2> alter table t2 modify deptno not null;

Table altered.

Elapsed: 00:00:00.10
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view log on t1
  2  with sequence,rowid (deptno,dname)
  3  including new values;

Materialized view log created.

Elapsed: 00:00:00.54
rajesh@ORA11GR2> create materialized view log on t2
  2  with sequence,rowid (deptno,sal)
  3  including new values;

Materialized view log created.

Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view t1_t2_mv
  2  build immediate
  3  refresh fast on commit
  4  enable query rewrite as
  5  select t1.deptno,t1.dname,sum(t2.sal) as sal,
  6     count(*) as cnt
  7  from t2, t1
  8  where t1.deptno = t2.deptno
  9  group by t1.deptno,t1.dname ;

Materialized view created.

Elapsed: 00:00:01.18
rajesh@ORA11GR2>
rajesh@ORA11GR2> exec dbms_stats.gather_table_stats(user,'T1');

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.68
rajesh@ORA11GR2> exec dbms_stats.gather_table_stats(user,'T2');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.25
rajesh@ORA11GR2> exec dbms_stats.gather_table_stats(user,'T1_T2_MV');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.10
rajesh@ORA11GR2>
rajesh@ORA11GR2> set autotrace traceonly explain;
rajesh@ORA11GR2> select sum(sal) from t2;
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 2337769562

----------------------------------------------------------------------
| Id  | Operation                     | Name     | Rows  | Bytes | Cos
----------------------------------------------------------------------
|   0 | SELECT STATEMENT              |          |     1 |     4 |
|   1 |  SORT AGGREGATE               |          |     1 |     4 |
|   2 |   MAT_VIEW REWRITE ACCESS FULL| T1_T2_MV |     3 |    12 |
----------------------------------------------------------------------

rajesh@ORA11GR2> set autotrace off;
rajesh@ORA11GR2>
rajesh@ORA11GR2> insert into t2(empno,deptno,sal)
  2  select level, 10, level * 100
  3  from dual
  4  connect by level <= 10 ;

10 rows created.

Elapsed: 00:00:00.12
rajesh@ORA11GR2> commit;

Commit complete.

Elapsed: 00:00:00.21
rajesh@ORA11GR2> select staleness
  2  from user_mviews
  3  where mview_name ='T1_T2_MV';

STALENESS
-------------------
FRESH

Elapsed: 00:00:00.06
rajesh@ORA11GR2>
rajesh@ORA11GR2> delete from t2 where empno between 1 and 5;

5 rows deleted.

Elapsed: 00:00:00.01
rajesh@ORA11GR2> commit;

Commit complete.

Elapsed: 00:00:00.09
rajesh@ORA11GR2> select staleness
  2  from user_mviews
  3  where mview_name ='T1_T2_MV';

STALENESS
-------------------
UNUSABLE

Elapsed: 00:00:00.00
rajesh@ORA11GR2>
rajesh@ORA11GR2>

Tom Kyte
June 26, 2012 - 10:27 am UTC

you can use this technique to see what your materialized view is capable of and why:


ops$tkyte%ORA11GR2> create or replace function explain_mview( p_mv in varchar2 )
  2  return sys.ExplainMVArrayType
  3  PIPELINED
  4  is
  5          pragma autonomous_transaction;
  6          l_data sys.explainMVArrayType;
  7  begin
  8          dbms_mview.explain_mview( mv => p_mv, msg_array => l_data );
  9          commit;
 10          for i in 1 .. l_data.count
 11          loop
 12                  pipe row( l_data(i) );
 13          end loop;
 14          return;
 15  end;
 16  /

Function created.

ops$tkyte%ORA11GR2> select capability_name,
  2         possible,
  3             case when related_text is not null then '('||related_text||') ' end || msgtxt txt
  4   from TABLE( explain_mview('T1_T2_MV') );

CAPABILITY_NAME                P TXT
------------------------------ - --------------------------------------------------
PCT                            F
REFRESH_COMPLETE               T
REFRESH_FAST                   T
REWRITE                        T
PCT_TABLE                      F (T2) relation is not a partitioned table
PCT_TABLE                      F (T1) relation is not a partitioned table
REFRESH_FAST_AFTER_INSERT      T
REFRESH_FAST_AFTER_ONETAB_DML  F (SAL) SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ANY_DML     F see the reason why REFRESH_FAST_AFTER_ONETAB_DML i
                                 s disabled

REFRESH_FAST_PCT               F PCT is not possible on any of the detail tables in
                                  the materialized view

REWRITE_FULL_TEXT_MATCH        T
REWRITE_PARTIAL_TEXT_MATCH     T
REWRITE_GENERAL                T
REWRITE_PCT                    F general rewrite is not possible or PCT is not poss
                                 ible on any of the detail tables

PCT_TABLE_REWRITE              F (T2) relation is not a partitioned table
PCT_TABLE_REWRITE              F (T1) relation is not a partitioned table

16 rows selected.



it is telling you you need count(sal) as well:

ops$tkyte%ORA11GR2> create materialized view t1_t2_mv
  2  build immediate
  3  refresh fast on commit
  4  enable query rewrite as
  5  select t1.deptno,t1.dname,sum(t2.sal) as sal, count(t2.sal) sal_cnt, count(*) as cnt
  6    from t2, t1
  7   where t1.deptno = t2.deptno
  8   group by t1.deptno,t1.dname ;

Materialized view created.


ops$tkyte%ORA11GR2> select capability_name,
  2         possible,
  3             case when related_text is not null then '('||related_text||') ' end || msgtxt txt
  4   from TABLE( explain_mview('T1_T2_MV') );

CAPABILITY_NAME                P TXT
------------------------------ - --------------------------------------------------
PCT                            F
REFRESH_COMPLETE               T
REFRESH_FAST                   T
REWRITE                        T
PCT_TABLE                      F (T2) relation is not a partitioned table
PCT_TABLE                      F (T1) relation is not a partitioned table
REFRESH_FAST_AFTER_INSERT      T
REFRESH_FAST_AFTER_ONETAB_DML  T
REFRESH_FAST_AFTER_ANY_DML     T
REFRESH_FAST_PCT               F PCT is not possible on any of the detail tables in
                                  the materialized view

REWRITE_FULL_TEXT_MATCH        T
REWRITE_PARTIAL_TEXT_MATCH     T
REWRITE_GENERAL                T
REWRITE_PCT                    F general rewrite is not possible or PCT is not poss
                                 ible on any of the detail tables

PCT_TABLE_REWRITE              F (T2) relation is not a partitioned table
PCT_TABLE_REWRITE              F (T1) relation is not a partitioned table

16 rows selected.
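The COUNT(sal) requirement that explain_mview flagged can be illustrated with a toy sketch (plain Python, hypothetical names, not Oracle internals): with only a running SUM, a single-row delete cannot tell whether the group still contains any non-NULL sal values, so SUM alone cannot be maintained incrementally. The count of non-NULL contributions resolves that.

```python
# Hypothetical illustration (not Oracle internals): why fast refresh of
# SUM(sal) also needs COUNT(sal). With only the running sum, deleting a
# row cannot tell us whether the group still has any non-NULL sal left
# (in which case SUM must become NULL); COUNT(sal) tracks exactly that.

def delete_from_group(group, sal):
    """Apply a single-row delete to one MV group incrementally."""
    if sal is not None:
        group["sum"] -= sal
        group["cnt"] -= 1          # COUNT(sal): non-NULL contributions
    if group["cnt"] == 0:
        group["sum"] = None        # every remaining sal value is NULL

# group with two rows: sal = 100 and sal = NULL
g = {"sum": 100, "cnt": 1}
delete_from_group(g, 100)          # remove the only non-NULL sal
print(g)                           # {'sum': None, 'cnt': 0}
```

The COUNT(*) column plays the analogous role for detecting when a group disappears entirely.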

MV Containing only Joins - 11GR2

Rajeshwaran, Jeyabal, June 26, 2012 - 12:56 pm UTC

Tom:

Really sorry to bother you again!

http://docs.oracle.com/cd/E11882_01/server.112/e25554/basicmv.htm#sthref185
<quote>
Alternatively, if the previous example did not include the columns times_rid and customers_rid, and if the refresh method was REFRESH FORCE, then this materialized view would be fast refreshable only if the sales table was updated but not if the tables times or customers were updated
</quote>

1) I did not include T2's rowid in the MV creation (created with refresh FORCE). When I updated records in T2 and did a refresh, the MV picked up the updates. (This contradicts the doc quote above. Am I missing something here? Please correct me if I am wrong.)

rajesh@ORA11GR2> create table t1 as select * from dept;

Table created.

Elapsed: 00:00:00.32
rajesh@ORA11GR2> create table t2 as select * from emp;

Table created.

Elapsed: 00:00:00.20
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t1 add
  2  constraint t1_pk
  3  primary key(deptno);

Table altered.

Elapsed: 00:00:00.37
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t2 add constraint
  2  t2_fk foreign key(deptno)
  3  references t1;

Table altered.

Elapsed: 00:00:00.20
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t2 modify deptno not null;

Table altered.

Elapsed: 00:00:00.10
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view log on t1 with rowid;

Materialized view log created.

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view log on t2 with rowid;

Materialized view log created.

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view t1_t2_mv
  2  build immediate
  3  refresh FORCE
  4  enable query rewrite as
  5  select t1.dname, t2.ename, t1.rowid as t1_rid
  6  from t1, t2
  7  where t1.deptno = t2.deptno;

Materialized view created.

Elapsed: 00:00:00.17
rajesh@ORA11GR2>
rajesh@ORA11GR2>
rajesh@ORA11GR2> select * from t1_t2_mv where rownum = 1;

DNAME          ENAME      T1_RID
-------------- ---------- ------------------
ACCOUNTING     CLARK      AAATOOAAFAAAAFjAAA

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> update t2
  2  set ename = lower(ename);

14 rows updated.

Elapsed: 00:00:00.00
rajesh@ORA11GR2> commit;

Commit complete.

Elapsed: 00:00:00.01
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV');

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.65
rajesh@ORA11GR2> select * from t1_t2_mv where rownum = 1;

DNAME          ENAME      T1_RID
-------------- ---------- ------------------
ACCOUNTING     clark      AAATOOAAFAAAAFjAAA

Elapsed: 00:00:00.26
rajesh@ORA11GR2>

Tom Kyte
June 26, 2012 - 3:02 pm UTC

do you know what force means? It means use fast if you can, complete otherwise.

so, what did you demonstrate here to show it was a fast refresh?

....

ops$tkyte%ORA11GR2> exec dbms_mview.refresh('T1_T2_MV');

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select last_refresh_type from user_mviews where mview_name = 'T1_T2_MV';

LAST_REF
--------
COMPLETE

ops$tkyte%ORA11GR2> 
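Tom's point ("use fast if you can, complete otherwise") reduces to a one-branch decision. A minimal sketch in Python, with illustrative names rather than any Oracle API:

```python
# Toy model of REFRESH FORCE semantics: attempt a fast refresh; if the
# MV is not fast refreshable for the pending changes, silently fall
# back to a complete refresh. The dict keys here are made up for the
# illustration -- compare user_mviews.last_refresh_type afterwards.

def refresh_force(mv):
    if mv["fast_refreshable"]:
        mv["last_refresh_type"] = "FAST"
    else:
        mv["last_refresh_type"] = "COMPLETE"
    return mv["last_refresh_type"]

print(refresh_force({"fast_refreshable": False}))  # COMPLETE
```

That silent fallback is why checking last_refresh_type (as above) is the only way to know which kind of refresh actually ran.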



MV containing only Joins - 11GR2

Rajeshwaran, Jeyabal, June 27, 2012 - 12:25 am UTC

so, what did you demonstrate here to show it was a fast refresh? - How do you confirm this? I looked at user_mviews and it says COMPLETE.

rajesh@ORA11GR2> SELECT last_refresh_type,mview_name
  2  FROM user_mviews ;

LAST_REF MVIEW_NAME
-------- ------------------------------
COMPLETE T1_T2_MV

Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2>

Tom Kyte
June 27, 2012 - 9:24 am UTC

yes, and it was COMPLETE, not fast - so, what is the issue?


You said your example contradicts the statement:


Alternatively, if the previous example did not include the columns times_rid and customers_rid, and if the refresh method was REFRESH FORCE, then this materialized view would be fast refreshable only if the sales table was updated but not if the tables times or customers were updated


was the materialized view fast refreshable? (rhetorical question, it wasn't...)

MV containing joins - 11GR2

Rajeshwaran, Jeyabal, June 27, 2012 - 2:12 pm UTC

Alternatively, if the previous example did not include the columns times_rid and customers_rid, and if the refresh method was REFRESH FORCE, then this materialized view would be fast refreshable only if the sales table was updated but not if the tables times or customers were updated

Tom,

Please find below the detailed walkthrough.

Pretend the sales table = T1 (since its rowid is included in the MV) and times/customers = T2 in the example below.

drop table t2 purge;
drop table t1 purge;
drop materialized view t1_t2_mv;

create table t2 as select * from emp;
create table t1 as select * from dept;

alter table t1 add
constraint t1_pk
primary key(deptno);

alter table t2 add 
constraint t2_fk
foreign key(deptno) references t1;

alter table t2 modify deptno not null;

create materialized view log on t1 with sequence,rowid including new values;
create materialized view log on t2 with sequence,rowid including new values;
Now this MView is created with REFRESH FORCE as described in the docs:
create materialized view t1_t2_mv
build immediate
refresh FORCE
enable query rewrite as
select t1.dname, t2.ename,t1.rowid as t1_rid,
 t2.rowid as t2_rid
from t1, t2
where t2.deptno = t1.deptno;

exec dbms_stats.gather_table_stats(user,'T1'); 
exec dbms_stats.gather_table_stats(user,'T2');
exec dbms_stats.gather_table_stats(user,'T1_T2_MV');

select * from t1_t2_mv where rownum = 1;
update t2
set ename = lower(ename);
commit;

exec dbms_mview.refresh('T1_T2_MV','F');
select * from t1_t2_mv where rownum = 1;

Back to doc quotes
<quote>
if the refresh method was REFRESH FORCE, then this materialized view would be fast refreshable only if the sales table was updated but not if the tables times or customers were updated
</quote>
So this MV is fast refreshable only if T1 is updated, not T2 (since T1's rowid is in the MV, not T2's). Now I requested a FAST refresh:
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV','F');

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.46
rajesh@ORA11GR2>

Why didn't Oracle throw an error on FAST refresh, since we updated T2, whose rowid is missing from the MV?
Tom Kyte
June 27, 2012 - 4:59 pm UTC

umm, you seem to have both rowids in there, why do you quote something that says "if you DID NOT have these rowids"?????
create materialized view t1_t2_mv
build immediate
refresh FORCE
enable query rewrite as
select t1.dname, t2.ename,t1.rowid as t1_rid,
    t2.rowid as t2_rid   <b><<<<<==== looks like a rowid from t2 to me??</b>
from t1, t2
where t2.deptno = t1.deptno;


I'm confused here, what are you asking?

Sorry, missed this out in the previous post

Rajeshwaran, Jeyabal, June 27, 2012 - 2:14 pm UTC

Sorry, missed this out in the previous post:
rajesh@ORA11GR2> SELECT last_refresh_type,mview_name
  2  from user_mviews;

LAST_REF MVIEW_NAME
-------- ------------------------------
FAST     T1_T2_MV

Elapsed: 00:00:00.76
rajesh@ORA11GR2>

MV Joins in 11GR2

Rajeshwaran, Jeyabal, June 28, 2012 - 1:55 am UTC

<quote>
if the refresh method was REFRESH FORCE, then this materialized view would be fast refreshable only if the sales table was updated but not if the tables times or customers were updated
</quote>
Tom:

I think i am clear now, Here is the script i used for my demonstration.

Pretend 'T1' is the sales table and 'T2' is either the times or customers table. Below is my demo script for understanding the doc quote above.
(Please correct me if I am wrong)

drop table t2 purge;
drop table t1 purge;
drop materialized view t1_t2_mv;

create table t2 as select * from emp;
create table t1 as select * from dept;

alter table t1 add
constraint t1_pk
primary key(deptno);

alter table t2 add 
constraint t2_fk
foreign key(deptno) references t1;

alter table t2 modify deptno not null;

create materialized view log on t1 with sequence,rowid including new values;
create materialized view log on t2 with sequence,rowid including new values;

create materialized view t1_t2_mv
build immediate
refresh FORCE
enable query rewrite as
select t1.dname, t2.ename,t1.rowid as t1_rid
from t1, t2
where t2.deptno = t1.deptno;

exec dbms_stats.gather_table_stats(user,'T1');    
exec dbms_stats.gather_table_stats(user,'T2');
exec dbms_stats.gather_table_stats(user,'T1_T2_MV');

Now the MV has the rowid from T1 and not from T2, so a FAST refresh is possible only when T1 is updated, not T2.
select * from t1_t2_mv where rownum = 1;
update t2
set ename = lower(ename);
commit;
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV','F');
BEGIN dbms_mview.refresh('T1_T2_MV','F'); END;

*
ERROR at line 1:
ORA-12052: cannot fast refresh materialized view RAJESH.T1_T2_MV
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2558
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2771
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2740
ORA-06512: at line 1

rajesh@ORA11GR2> select * from t1_t2_mv where rownum = 1;

DNAME          ENAME      T1_RID
-------------- ---------- ------------------
ACCOUNTING     CLARK      AAATSTAAFAAAAYDAAA

So far so good. If changes from T2 have to be reflected in the MV, we have to go for a COMPLETE refresh (and that is what Oracle did for me by default):
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.36
rajesh@ORA11GR2> select * from t1_t2_mv where rownum = 1;

DNAME          ENAME      T1_RID
-------------- ---------- ------------------
ACCOUNTING     clark      AAATSTAAFAAAAYDAAA

Elapsed: 00:00:00.00
rajesh@ORA11GR2> select mview_name, last_refresh_type
  2  from user_mviews ;

MVIEW_NAME                     LAST_REF
------------------------------ --------
T1_T2_MV                       COMPLETE

Elapsed: 00:00:00.01
rajesh@ORA11GR2>

Now, if we drop and recreate this MV like this, a FAST refresh is possible for an update on T1:

create materialized view t1_t2_mv
build immediate
refresh FORCE
enable query rewrite as
select t1.dname, t2.ename,t1.rowid as t1_rid
from t1, t2
where t2.deptno = t1.deptno;
exec dbms_stats.gather_table_stats(user,'T1');    
exec dbms_stats.gather_table_stats(user,'T2');
exec dbms_stats.gather_table_stats(user,'T1_T2_MV');

rajesh@ORA11GR2>
rajesh@ORA11GR2> select * from t1_t2_mv where rownum = 1;

DNAME          ENAME      T1_RID
-------------- ---------- ------------------
ACCOUNTING     CLARK      AAATSaAAFAAAAYDAAA

Elapsed: 00:00:00.00
rajesh@ORA11GR2> update t1
  2  set dname = lower(dname);

4 rows updated.

Elapsed: 00:00:00.01
rajesh@ORA11GR2> commit;

Commit complete.

Elapsed: 00:00:00.00
rajesh@ORA11GR2>
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV','F');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.11
rajesh@ORA11GR2> select * from t1_t2_mv where rownum = 1;

DNAME          ENAME      T1_RID
-------------- ---------- ------------------
research       SMITH      AAATSaAAFAAAAYDAAB

Elapsed: 00:00:00.00
rajesh@ORA11GR2>
rajesh@ORA11GR2> select mview_name, last_refresh_type
  2  from user_mviews;

MVIEW_NAME                     LAST_REF
------------------------------ --------
T1_T2_MV                       FAST

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
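The rule this walkthrough demonstrates can be stated mechanically: a join-only MV is fast refreshable for changes on a detail table only if that table's rowid is selected in the MV; otherwise the explicit 'F' refresh fails with ORA-12052. A toy model of that behaviour (illustrative Python, not Oracle):

```python
# Hypothetical sketch of the rowid rule: the MV above selects only
# t1.rowid (as t1_rid), so only changes on T1 can be applied fast.
# This merely models the observed behaviour of dbms_mview.refresh(.., 'F').

MV_ROWIDS = {"T1"}           # tables whose rowid appears in the MV

def fast_refresh(changed_tables):
    missing = changed_tables - MV_ROWIDS
    if missing:
        raise RuntimeError("ORA-12052: cannot fast refresh (no rowid for %s)"
                           % ", ".join(sorted(missing)))
    return "FAST"

print(fast_refresh({"T1"}))  # FAST
try:
    fast_refresh({"T2"})     # raises, matching the ORA-12052 seen above
except RuntimeError as e:
    print(e)
```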

Tom Kyte
June 28, 2012 - 9:51 am UTC

can you just state in english without a huge test case to read through what your question is.

clearly and concisely. I don't always have time to parse your huge examples that don't seem to come with a clearly defined question sometimes.

Query Rewrite

Rajasekhar, June 24, 2020 - 2:38 pm UTC

Hello Tom,

Can you please explain how query rewrite is useful for performance improvement?
Does it apply to MVs created on local objects as well as remote objects?
Chris Saxon
June 25, 2020 - 11:14 am UTC

I've recently made a video about just this :)



If you can derive the results of your query using an MV, the optimizer can swap it in for you.
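The simplest case of that swap is full text match: if the incoming query's normalized text equals an MV's defining query, the optimizer can answer from the MV. A toy sketch (real rewrite also handles partial and general matches; the names and normalization here are purely illustrative):

```python
# Toy full-text-match query rewrite: compare the incoming query, with
# case and whitespace normalized, against each MV's defining query and
# swap the MV in on a match. Not the optimizer's actual algorithm.
import re

def normalize(sql):
    return re.sub(r"\s+", " ", sql.strip().lower())

MVS = {"T1_T2_MV": "select t1.dname, t2.ename from t1, t2 where t1.deptno = t2.deptno"}

def rewrite(query):
    q = normalize(query)
    for name, defining in MVS.items():
        if normalize(defining) == q:
            return "select dname, ename from %s" % name  # swap the MV in
    return query  # no rewrite possible; run the query as written

print(rewrite("SELECT t1.dname, t2.ename  FROM t1, t2 WHERE t1.deptno = t2.deptno"))
# -> select dname, ename from T1_T2_MV
```

Whether a stale MV may be swapped in is exactly what query_rewrite_integrity (enforced / trusted / stale_tolerated) controls, which is where this whole thread started.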