Direct-path INSERTs are not propagated during a "Materialized View Fast Refresh"

Tom Kyte

Thanks for the question, Pierre.

Asked: January 28, 2003 - 8:11 am UTC

Last updated: June 23, 2008 - 8:12 am UTC

Version: 8.1.7


You Asked

Hello

The Oracle8i Concepts Guide mentions on page 10-19 that
"For Materialized Views that use the fast refresh method, a materialized view log or direct-loader log keeps track of the changes to the master tables".

From what I read, I assumed that direct-load INSERTs were supported for fast refresh and that the changes were kept somewhere in such a "direct-loader" log. However, practice shows that such changes don't get propagated at all. They also don't appear in the MLOG$_xxx table associated with the base table xxx, which I guess is fed by the internal trigger that doesn't get fired when using the direct-loader API. But then, where is that mysterious "direct-loader log"?

Is there something I am missing ?

Thanks for helping me out.

Pierre

and Tom said...

<quote>
Incremental loads of your detail data should be done using the SQL*Loader direct-path option, or any bulk loader utility that uses Oracle's direct path interface (including INSERT AS SELECT with the APPEND or PARALLEL hints). Fast refresh after conventional DML is not supported with views with aggregates and joins, but it is for single-table aggregate views. See Oracle8i SQL Reference for further details.
</quote>

ops$tkyte@ORA817DEV> create table t as select * from scott.emp where rownum < 5;

Table created.

ops$tkyte@ORA817DEV> alter table t add constraint t_pk primary key(empno);

Table altered.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> create materialized view log on t with (deptno), rowid including new values;

Materialized view log created.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> create materialized view v
2 refresh fast
3 as
4 select deptno, count(*) from t group by deptno;

Materialized view created.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> select * from all_sumdelta;

no rows selected

ops$tkyte@ORA817DEV> insert /*+ append */ into t
2 select *
3 from scott.emp
4 where empno not in (select empno from t);

10 rows created.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> select * from t;
select * from t
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


ops$tkyte@ORA817DEV> commit;

Commit complete.

ops$tkyte@ORA817DEV> select * from all_sumdelta;

TABLEOBJ# PARTITIONOBJ# D SCN TIMESTAMP LOWROWID HIGHROWID
---------- ------------- - ---------- ----------- ------------------ ------------------
51308 51308 I 6.5614E+12 28-JAN-2003 AAAMhsAAHAAAk3LAAA AAAMhsAAHAAAk3LH//

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> select * from v;

DEPTNO COUNT(*)
---------- ----------
20 2
30 2

ops$tkyte@ORA817DEV> exec dbms_mview.refresh( 'V' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA817DEV> select * from v;

DEPTNO COUNT(*)
---------- ----------
20 5
30 6
10 3




Comments

Pierre Six, January 28, 2003 - 9:41 am UTC

Thanks for the response. What puzzles me is that I have the simplest MV possible, which is a mere INSERT SELECT, and in that case it doesn't work. To get the best performance in that case (direct-load interface on the insert + fast refresh for the MV), would you then recommend using a GROUP BY on all fields (given that this is a unique combination)?

Tom Kyte
January 28, 2003 - 9:51 am UTC

what is the point of a non-aggregate MV on a single table? can you tell me that?

Pierre Six, January 28, 2003 - 10:13 am UTC

We use this to "publish" some of our datamarts from our Data Warehouse tables. Daily changes are done on the DWH tables and the Datamart tables are synchronized using MV fast refresh. The datamart tables are "almost" copies of the DWH tables. The MV therefore includes some transformations on the data, such as DECODEs and NVLs. Both reside on different instances on different machines. Does that make sense?

Tom Kyte
January 28, 2003 - 1:27 pm UTC

I would not use an MV at all -- nor copy to a DM, I guess -- rather a regular view with DECODEs and NVLs. You lose nothing, gain a lot.

Don't know what to tell you -- the addition of a GROUP BY could really mess it up (speed-wise). Seems like a strange configuration.

Pierre Six, January 28, 2003 - 11:15 am UTC

To cut a long story short, we have a 20 Million row table on the DWH side and have a "near copy" in the Datamart (same number of rows but some fields are being altered on the way). Every night, a couple hundred thousand rows get added to the DWH table, 50K rows get updated etc ... and we don't want to do a full refresh of the datamart table every day. DWH and DMT are on different machines.

Tom Kyte
January 28, 2003 - 2:54 pm UTC

ok, my point is -- datamart sounds like a waste of money -- data warehouse sounds like the right place to keep the data.

so, now you know the restriction -- single table aggregates as no one anticipated a single table "no aggregate" mv like that.

adding a couple hundred thousand rows should (could easily) be done in a conventional path load -- that'll be your best solution.
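The trade-off Tom is pointing at can be sketched as follows (a sketch only; staging_table is a hypothetical source table):

```sql
-- Conventional path insert: fires the internal trigger on T, so the
-- changes land in MLOG$_T and a fast refresh can pick them up:
insert into t select * from staging_table;
commit;

-- Direct path insert: bypasses the internal trigger; the changes are
-- tracked in the direct-loader log (visible via ALL_SUMDELTA) instead,
-- which per the quote above only single-table aggregate MVs can
-- fast refresh from:
insert /*+ append */ into t select * from staging_table;
commit;
```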

A reader, January 29, 2003 - 8:09 am UTC

The reason we have two tables on two machines is that

- The DWH machine does the inflow, so almost no indexes on the tables, refreshed whenever we want, several times a day if needed. Machine can be brought down if needed.

- The DMT machine does the querying, lots of indexes, stable "point-in-time" view of the data (as of yesterday at midnight), some fields modified to make them suitable for querying, high availability.

Our experience is that concurrent heavy inflow creates a mixed workload which seems difficult to tune. Parallel sessions are being taken away by the inflow, etc.

The Materialized View is used here as an extension of the snapshot mechanism, replicating changes across to the DMT table (with some modifications to make the data suitable for querying).

And it works ! The MV fast refresh does the job, but the inserts into the base table could be faster, that's all ...

Pierre

Tom Kyte
January 29, 2003 - 8:17 am UTC

That's why I would have fixed the DW, rather than try to band-aid the fix with a data boutique.

Give me a single system any day. Having problems with resource mgmt -- use RAC or the resource manager.

If the DW can't keep up -- the DM with tons of extra indexes and such would flop over -- so, by extension -- since the DM works -- the DW could have those indexes.

A reader, January 29, 2003 - 9:52 am UTC

Pierre,

What you call a DWH is probably known as an ODS (Operational Data Store) in more popular terms. And what you call a DM is probably better known as a DWH.

Clarification

Rob, April 29, 2003 - 3:04 pm UTC

Tom:

Thanks for pointing me to this link. I think the quote below is from the data warehousing guide. It indicates that a fast refresh is possible after a direct load.

<quote>
Incremental loads of your detail data should be done using the SQL*Loader
direct-path option, or any bulk loader utility that uses Oracle's direct path
interface (including INSERT AS SELECT with the APPEND or PARALLEL hints). Fast
refresh after conventional DML is not supported with views with aggregates and
joins, but it is for single-table aggregate views. See Oracle8i SQL
Reference for further details.
</quote>

However this quote from the 9.2 Adv. Replication guide indicates otherwise:

<quote>
After a direct path load on a master table or master materialized view using SQL*Loader, a fast refresh does not apply the changes that occurred during the direct path load. Also, fast refresh does not apply changes that result from other types of bulk load operations on masters. Examples of these operations include some INSERT statements with an APPEND hint and some INSERT ... SELECT * FROM statements.
</quote>

I think that a fast refresh after a direct path load is supported only for MVs in the same instance as the base table. If you are replicating to another instance, you have to use a conventional path load for fast refresh to work. Does this seem like the correct explanation to you?



Tom Kyte
April 29, 2003 - 8:52 pm UTC

it would make sense (to me anyway) that that could be true. If you need, I can test it out -- but I've gotta be in the office to do that....
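A test along the lines Tom suggests might look like this (a sketch only; T_MV is a hypothetical materialized view on the remote site, created over a database link to the master):

```sql
-- On the master site: a direct path load into the master table
insert /*+ append */ into t select * from staging_table;
commit;

-- On the materialized view site: attempt a fast refresh and check
-- whether the direct-path rows arrived
exec dbms_mview.refresh( 'T_MV', method => 'F' )

select count(*) from t_mv;
```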

Direct Path Insert and Create MV

A reader, October 10, 2003 - 6:54 pm UTC

Hi Tom,

Just want to know if Direct Path Insert (the "APPEND" hint) is supported in the "create materialized view ... as select ..." statement.

Do you have any other suggestions about reducing resource requirements during creating a big MV?

Thanks,

Tom Kyte
October 11, 2003 - 9:52 am UTC

that is the cool thing about SQL_TRACE -- we can see what takes place.  Consider:


ops$tkyte@ORA920> alter session set sql_trace=true;
 
Session altered.
 
ops$tkyte@ORA920> create materialized view emp_mv
  2  nologging
  3  refresh complete
  4  as
  5  select * from emp;
 
Materialized view created.


Using that, we can see that the MV is created with:

CREATE TABLE "OPS$TKYTE"."EMP_MV" ("EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO") nologging
 AS SELECT "EMP"."EMPNO","EMP"."ENAME","EMP"."JOB","EMP"."MGR","EMP"."HIREDATE","EMP"."SAL","EMP"."COMM","EMP"."DEPTNO" FROM "EMP" "EMP"


and since CTAS uses direct path semantics to process -- it is just like a direct path load.

Further, during a refresh:

ops$tkyte@ORA920> alter session set sql_trace=true;
Session altered.
 
ops$tkyte@ORA920> exec dbms_mview.refresh('EMP_MV');
PL/SQL procedure successfully completed.
 
we can see the sql executed is:

BEGIN dbms_mview.refresh('EMP_MV'); END;
                                                                                                                
SET CONSTRAINTS ALL DEFERRED
                                                                                                                
truncate table "OPS$TKYTE"."EMP_MV" purge snapshot log
                                                                                                                
INSERT /*+ APPEND */ INTO "OPS$TKYTE"."EMP_MV"("EMPNO","ENAME","JOB","MGR",
  "HIREDATE","SAL","COMM","DEPTNO") SELECT "EMP"."EMPNO","EMP"."ENAME",
  "EMP"."JOB","EMP"."MGR","EMP"."HIREDATE","EMP"."SAL","EMP"."COMM",
  "EMP"."DEPTNO" FROM "EMP" "EMP"
                                                                                                                
ALTER SUMMARY "OPS$TKYTE"."EMP_MV" COMPILE


so, that single table refresh will be done with direct path actions as well.  (if you use a snapshot GROUP -- you'll have a delete + conventional path insert instead -- but with a single table full refresh it is truncate + direct path insert)
 

sush, January 11, 2004 - 1:59 am UTC

Hi. What's the meaning of ALTER SUMMARY "OPS$TKYTE"."EMP_MV" COMPILE? A couple of weeks back, in our production database, we found that one of our MVs (which was doing a complete refresh) got stuck in that status (in ALTER SUMMARY ... COMPILE) for a long time when we looked into v$sqlarea, and it failed after 5 hours with the following message:

ORACLE error 4020 in FDPSTP
Cause: FDPSTP failed due to ORA-04020: deadlock detected while trying to lock object OKC.OKC_K_HEADERS_B
ORA-06512: at "APPS.XX_CM_CC_MVT_PKG", line 83
ORA-06512: at line 1


Mv Defn :
CREATE MATERIALIZED VIEW xx_cm_cc_con_loc_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT /*+ parallel(okhb,4) parallel(hcsu,4) parallel(orb,4) */
hcsu.location,
okhb.contract_number
from okc.okc_k_headers_b okhb,
ar.hz_cust_site_uses_all hcsu ,
okc.okc_rule_groups_b orgb ,
okc.okc_rules_b orb
where okhb.sts_code IN ('ACTIVE','OVERDUE','SIGNED')
AND okhb.date_terminated IS NULL
AND okhb.id = orgb.dnz_chr_id
AND orgb.chr_id is not null
AND orgb.ID = orb.rgp_id
AND orb.jtot_object1_code = 'OKX_BILLTO'
AND to_number (orb.object1_id1) = hcsu.site_use_id
AND hcsu.site_use_code = 'BILL_TO'

Explain plan is as follows:

Operation                                    Rows   Bytes   Cost     Object Node    In/Out  PQ Distrib
SELECT STATEMENT Hint=CHOOSE                 4K             926228
  HASH JOIN                                  4K     312K    926228   :Q2387186006   P->S    QC (RANDOM)
    HASH JOIN                                267K   13M     910366   :Q2387186004   P->P    HASH
      HASH JOIN                              265K   10M     200481   :Q2387186002   P->P    HASH
        TABLE ACCESS FULL OKC_K_HEADERS_B    153K   3M      2815     :Q2387186001   P->P    HASH
        TABLE ACCESS FULL OKC_RULE_GROUPS_B  542K   8M      197021   :Q2387186000   S->P    HASH
      TABLE ACCESS FULL OKC_RULES_B          27M    284M    662352   :Q2387186003   P->P    HASH
    TABLE ACCESS FULL HZ_CUST_SITE_USES_ALL  595K   12M     13658    :Q2387186005   P->P    HASH

My question is: why is it trying to do this ALTER SUMMARY ... COMPILE? Why is it trying to acquire a lock on the tables? I did a small test in a development box by locking the table and doing the materialized view refresh; that time I didn't have this problem.

Hope you can clarify the mystery of the materialized view refresh and its relationship with ALTER SUMMARY ... COMPILE.
Thanks in advance



Tom Kyte
January 11, 2004 - 6:12 am UTC

it validates/compiles the mv -- makes it "whole" again. invalidates cursors against it, like compiling a real view would. it was trying to get a quick ddl lock in order to accomplish its goal.
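The compile state Tom describes can be observed in the dictionary; a sketch (EMP_MV as in the example above; COMPILE_STATE and STALENESS are columns of USER_MVIEWS in 9i and later):

```sql
-- after DDL against a master table, the MV is flagged for revalidation:
select mview_name, compile_state, staleness
  from user_mviews
 where mview_name = 'EMP_MV';

-- the documented equivalent of ALTER SUMMARY ... COMPILE:
alter materialized view emp_mv compile;
```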




to Pierre Six question

reader, January 11, 2004 - 8:21 am UTC

Could you use a trigger which saves the rowid when a change occurs on the source?

"Internal trigger."

Kashif, October 21, 2004 - 10:43 am UTC

Hi Tom,

From the Oracle 9i Advanced Replication Guide:

<quote>
Internal Trigger for the Materialized View Log

When changes are made to the master table or master materialized view using DML, an internal trigger records information about the affected rows in the materialized view log. This information includes the values of the primary key, rowid, or object id, or both, as well as the values of the other columns logged in the materialized view log. This is an internal AFTER ROW trigger that is automatically activated when you create a materialized view log for the target master table or master materialized view. It inserts a row into the materialized view log whenever an INSERT, UPDATE, or DELETE statement modifies the table's data. This trigger is always the last trigger to fire.

</quote>

What is unclear to me is, is the internal trigger defined on the MV log, or on the master table? Logically it should be on the master table, but the documentation is a little ambiguous. Secondly, is it safe to assume that if it is "internal" it will not be visible to us? Thanks for your response!

Kashif

Tom Kyte
October 21, 2004 - 2:49 pm UTC

on the master table.

...When changes are made to the master table or master materialized view using DML, an internal trigger records .....

the trigger is on the master table


used to be visible in older releases, no longer visible.
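Since the trigger is internal, the easiest way to watch it work is to look at what it writes into the log; a sketch against the table T from the first example (MLOG$_T is the log table Oracle creates for a master table named T):

```sql
-- the internal trigger on T populates MLOG$_T on every conventional
-- DML; housekeeping columns such as DMLTYPE$$, OLD_NEW$$ and
-- SNAPTIME$$ are maintained alongside the logged column values:
update t set sal = sal + 100 where deptno = 20;
commit;

select dmltype$$, old_new$$, snaptime$$
  from mlog$_t;
```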

What is the purpose of SNAPTIME$$ and CHANGE_VECTOR$$, and what do they indicate?

Supriya, October 15, 2007 - 3:31 am UTC

When I perform DML operations on the master table, the MV log populates with data, but SNAPTIME$$ always contains the date '1/1/4000'. Also, after running its job manually the MV refreshes with new values, but when the job runs automatically the MV is not refreshing...
What could be the problem?
I am using snapshot replication remotely.
Please guide... it's very urgent!
Thanks in advance
Tom Kyte
October 15, 2007 - 12:02 pm UTC

there is no problem. It is working as designed.

you give insufficient data to comment. You don't tell us what errors you see or anything. "is not refreshing" is "insufficient"
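For context on the 1/1/4000 value (behavior as commonly observed, not stated in this thread): it is a sentinel, not an error. Rows are inserted into the log with SNAPTIME$$ = 01-JAN-4000, meaning "not yet consumed by any refresh"; once every registered materialized view has refreshed past a row, the row is purged from the log. A sketch (T_MV is a hypothetical registered MV):

```sql
select snaptime$$, count(*)
  from mlog$_t
 group by snaptime$$;            -- unconsumed rows show 01-JAN-4000

exec dbms_mview.refresh( 'T_MV', method => 'F' )

select count(*) from mlog$_t;    -- rows consumed by all registered MVs are purged
```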

not getting...

Supriya, October 16, 2007 - 4:45 am UTC

I still don't understand what the problem is... I'm not an Oracle specialist, so I don't know much about it.
I described what's happening in our real scenario on 10g.
Can you please help us out?

ALTER SUMMARY

Laurent Schneider, June 23, 2008 - 7:12 am UTC

Hi Tom,

ALTER SUMMARY "OPS$TKYTE"."EMP_MV" COMPILE

I could well understand ALTER MATERIALIZED VIEW "OPS$TKYTE"."EMP_MV" COMPILE, but where does this SUMMARY keyword come from? It is not a valid SQL statement. Is this an internal secret of the DBMS_MVIEW package?

Thanks
Laurent
Tom Kyte
June 23, 2008 - 8:12 am UTC

it would be an undocumented shorthand for alter materialized view
