

Question and Answer

Tom Kyte

Thanks for the question, Andre, from Brazil.

Asked: September 13, 2001 - 1:29 pm UTC

Last updated: December 21, 2007 - 2:34 pm UTC

Version: 816

Viewed 1000+ times

You Asked

I know that in a complete refresh the snapshot table is truncated. Is there any way to do a complete refresh that uses rollback, so the data stays available during the entire refresh (as with a fast refresh)? For example:

exec dbms_snapshot.refresh('XXX','C','MYRBSEG') -- does this do that?


and Tom said...

Here is a support note on this topic that offers a solution:

Article-ID: <Note:2077254.6>
Folder: server.Distributed.Snapshots
Topic: Miscellaneous
Document-Type: SOLUTION
Impact: LOW
Skill-Level: CASUAL
Server-Version: 07.XX to 08.XX
Updated-Date: 12-SEP-2000 04:16:08
Attachments: NONE
Content-Type: TEXT/PLAIN
Products: 5/RDBMS (07.XX);
Platforms: GENERIC;

Solution Description
A complete refresh of a single snapshot uses truncate for performance
reasons and also to reduce rollback segment requirements. The truncate
does an auto-commit, so if your complete refresh fails AFTER the truncate
of the base table, SNAP$_<snapshotname>, you cannot rollback or recover.

A workaround is to create a refresh group.

Solution Explanation
A refresh group of multiple snapshots will use
"delete from SNAP$_<snapshotname>" instead of
"truncate SNAP$_<snapshotname>". Unlike a single snapshot
refresh, we have to be able to rollback and recover the entire
refresh of a refresh group if any snapshot in the group fails to refresh.

Therefore, the above can be avoided if you create a snapshot refresh
group, grouping your single snapshot with a "dummy" snapshot
of dual.


Create snapshot snap_test
refresh complete
as select * from test@db_link;

create snapshot snap_dual
refresh complete
as select * from dual@db_link;

execute dbms_refresh.make(
name =>'group1',
list =>'snap_test,snap_dual',
next_date =>sysdate,
interval =>'sysdate+1/24',
implicit_destroy =>true);

This example creates 2 snapshots, SNAP_TEST, SNAP_DUAL
and a snapshot group, GROUP1. The snapshot group
contains the 2 snapshots, SNAP_TEST, SNAP_DUAL that will
be refreshed complete.

If we turn sql_trace on during a single snapshot refresh,
we would see: truncate table "SCOTT"."SNAP$_SNAP_TEST"

During a consistent refresh of a snapshot group, we would see:
delete from "SCOTT"."SNAP$_SNAP_TEST" where m_row$$ = :1
and also a
delete from "SCOTT"."SNAP$_SNAP_DUAL"

NOTE: Because we are now doing a delete on SNAP$_<snapshotname> instead of a truncate, you will have to increase your rollback
segment requirements.




problem with refresh group

Tzvika Foyer, January 21, 2002 - 3:26 pm UTC

We encountered a problem during the refresh process: DML such as updates (done automatically by a loader) works on the same object and a lock is created.
If the refresh group just does delete and insert, why is a lock created?

One More Question

Bharath, September 10, 2002 - 10:21 am UTC

How do you manually refresh the MV group so that it does the delete then insert?

Tom Kyte
September 10, 2002 - 10:30 am UTC

dbms_refresh.refresh( 'GROUP_NAME' )

after making the snapshot group.

a table with a mix -- remote data as well as application data

sm, November 04, 2002 - 1:25 pm UTC

In our application we want to use a table from another database, so we can create a snapshot, say sn_emp.
sn_emp has information about employees.
Now we want to add a couple of columns to this employee table..these columns will be updateable through
our application.
So we are envisioning a table that has some columns from a remote database (which will need to be refreshed)
and some columns that are updateable on our side through the application.
What is the best way to have this kind of mix?
The snapshot-based side of this table has to be refreshed every 2 hours...

My solution:
Create the snapshot..sn_emp.
create a table tab_emp with the same primary key as the snapshot (emplid).
add the columns that are updateable to this new table.
Create a trigger on the snapshot that will keep sn_emp and tab_emp in sync.

Before reading this article, I was also worried about having refreshes every two hours...
I was thinking the application would become unstable while the snapshot is being refreshed...
I guess the solution (having snap_dual) will work....

Tom Kyte
November 05, 2002 - 8:43 am UTC

You will have two tables

a) snapshot -- on the remote table. say the primary key is "EMPNO"
b) your auxiliary table, primary key is "EMPNO", other attributes are whatever you want.

do NOT create a trigger on the snapshot -- you have no way of knowing HOW the snapshot will be refreshed. An update to the snapshot could well be "delete", "insert" -- you would lose the data in your table.

You will have to "live" with the fact that your table might have EMPNO's that don't exist in the snapshot. You can either

a) ignore their existence
b) clean them out from time to time

their existence (these orphans) is not really material since you'll always join snapshot to this table and they won't appear anyway.
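Tom's option (b), a periodic cleanup of the orphans, could be sketched like this (the table and column names are only illustrative, borrowed from the discussion, not from any real schema):

```sql
-- Remove rows from the auxiliary table whose key no longer
-- exists in the snapshot (names are hypothetical).
delete from aux_emp a
 where not exists ( select null
                      from snap_emp s
                     where s.empno = a.empno );
commit;
```

A job like this could itself be scheduled with dbms_job as a daily cleanup, as is suggested later in the thread.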

why not trigger

sm, November 05, 2002 - 11:17 pm UTC

Why not a trigger?
For example..say there is a snapshot table snap_emp
with columns emplid, lastname, firstname
and I create a table on my side mine_emp with columns
emplid, lastname, firstname, salary

trigger would be on snap_emp table
as the snapshot refreshes (complete)..
the trigger will check if the emplid exists in the mine_emp table..
if it does, it will see if lastname, firstname are the same..if not it will update them..
if the emplid does not exist then it will add that emplid to the mine_emp table...
the salary column won't be touched..because it is being updated by the user in the application.
Doing this..I don't think..we will lose data in the mine_emp table....

Tom Kyte
November 06, 2002 - 6:26 am UTC

why would you carry the firstname/lastname over to the other table? that seems wasteful...

If you wanted all new rows in the snapshot to be added to this other table (again, not necessary but if that is what you want) I would:

a) refresh the snapshot manually by using DBMS_JOBS and scheduling the dbms_mview.refresh myself

b) in that same job I would add:

insert into your_table
select ....
from snapshot_table
where emplid not in ( select emplid from your_table );

(search this site for HASH_AJ to see how to make that "fast")

and that is it. that'll be faster, easier, cheaper than a trigger
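The HASH_AJ technique Tom points to could be sketched like this (names are illustrative; on 8i-era optimizers the hint plus a NOT NULL predicate in the subquery turned the NOT IN into a hash anti-join, while later releases usually do this unassisted):

```sql
insert into your_table ( emplid )
select s.emplid
  from snapshot_table s
 where s.emplid not in ( select /*+ HASH_AJ */ t.emplid
                           from your_table t
                          where t.emplid is not null );
```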

why a trigger?

sm, November 06, 2002 - 8:53 am UTC

About the lastname, firstname columns...they don't have to be there...actually...only a table can pass foreign keys to other tables...but I guess..that table can have just the emplid...and salary (emplid as PK)..and pass emplid as a foreign key to other tables...

About the manual refreshes...there is a condition..that the snapshot has to be refreshed every four hours...automatically...

"b) in that same job I would add:

insert into your_table
select ....
from snapshot_table
where emplid not in ( select emplid from your_table );"

I need to read up on this..I don't know how to do this in one single job that refreshes the snapshot (forced)..and adds the new emplids to the local will be ideal though..because forced will be much quicker (considering 4-hour refreshes)

"If you wanted all new rows in the snapshot to be added to this other table
(again, not necessary but if that is what you want)"

we do want to keep them in sync...because the local table would pass foreign keys to other tables...for extra ones in the local table...there can be a daily job that will do the cleanup..

Tom Kyte
November 06, 2002 - 11:48 am UTC

dbms_job.submit( l_job,
'begin
    dbms_mview.refresh( ''YOUR_SNAPSHOT'' );
    insert into ......;
 end;' );
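Spelled out as a complete anonymous block, combining the refresh with the insert shown earlier in the thread (the snapshot and table names, and the four-hour interval, are hypothetical stand-ins):

```sql
declare
   l_job number;
begin
   dbms_job.submit(
      job       => l_job,
      what      => 'begin
                       dbms_mview.refresh( ''YOUR_SNAPSHOT'' );
                       insert into your_table
                         select emplid from snapshot_table
                          where emplid not in ( select emplid from your_table );
                       commit;
                    end;',
      next_date => sysdate,
      interval  => 'sysdate + 4/24' );  -- every four hours
   commit;  -- the commit is needed for the submitted job to start running
end;
/
```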

why a trigger? contd...

sm, November 06, 2002 - 9:36 am UTC

I was just thinking...about why I added lastname, firstname...
to the local table...
They were there because the snapshot is refreshed every four I wanted a way to keep all the columns still alive...while the snapshot is being refreshed..
Also..there is a weird business rule..that some column..say userid...will be populated the first time (and for consecutive new ones that come with new employees) through the snapshot...and then it will be maintained through the if the admin quickly wants to change it he/she can change it through the app..and then send an email to the remote update the userid there...I am fighting with that BR...but that's how they want to do it....

1. to have data available while snapshot is being refreshed
2. the local table needs to pass foreign have a local table...and keep it in sync with the snapshot for new data
3. local table can be a mix of columns that only it has...
i.e that columns that only app will maintain...
and columns...that the app and the remote database both can maintain...

Tom Kyte
November 06, 2002 - 11:51 am UTC

if you use the snap_dual I was talking about.... not a problem -- refresh will be delete + insert, not truncate + insert.

1) handled by this original question, use a snapshot group
2) insert as shown above
3) be that as it may, you'll have to do what you have to do.

are snapshots bad?

sm, November 07, 2002 - 9:46 am UTC

Thanks Tom...
1. Could there have been another way to accomplish the above besides using snapshots?
For Example we can have views based on remote database...
have procedures populating (from views) some tables at different times..during the day..
2. Are snapshots bad?
I have read/heard/been told that snapshots are a lot of overhead.
One advantage that I do see with snapshots is that the local database can have a static copy of the data at all times...while a view is always connected to the remote a view based on the remote database might not be
available at all times...unlike snapshots..we can schedule or break the job around the maintenance schedule of the remote database....

Please comment...

Thanks again

Tom Kyte
November 07, 2002 - 10:39 am UTC

1) snapshots = code is written for you -- just run a command

anything else = you write the code, you maintain the code, you debug the code, you upgrade the code

me, I like having code written for me. I try to use the database features when/where I can.

2) no.

I heard that business travelers had better not go to bars because there are people that will drug you and take out your kidneys to sell on the black market.

it carries about as much weight as "snapshots are bad, lots of overhead"


K, December 06, 2002 - 2:27 pm UTC

Can we use snap_dual with more than one snapshot..or do we have to create a dummy snapshot for every snapshot to accomplish this..


Tom Kyte
December 06, 2002 - 3:41 pm UTC

i would create a separate one - for concurrency reasons if nothing else, to make it so that more than one snapshot group at a time can refresh easily at the same time.

two jobs?

George, December 09, 2002 - 11:15 am UTC


"execute dbms_refresh.make(
name =>'group1',
list =>'snap_test,snap_dual',
next_date =>sysdate,
interval =>'sysdate+1/24',
implicit_destroy =>true);

dbms_job.submit( l_job,
'begin dbms_mview.refresh( ''YOUR_SNAPSHOT'' );
insert into ......;

Interval and next_date parameters can be entered for both
dbms_refresh.make as well as dbms_job.submit.
To make the snapshot refresh every two hours or whatever time,
do we have to enter the same next_date and interval in both of them?
I do not understand "dbms_mview.refresh( ''YOUR_SNAPSHOT'' );"
Shouldn't that be dbms_refresh.refresh('group1')?
I guess I am trying to understand how this is going to be done in one job.
Should group1 be created without the next_date and interval?

Tom Kyte
December 09, 2002 - 12:18 pm UTC

I'm not sure I understand.

You are mixing two approaches here.

a) you can use dbms_refresh.make to make a group that refreshes all by itself (no dbms_job.submit)

b) use dbms_job.submit to schedule the refresh to take place along with some other stuff you want to have happen at the same time.

They are mutually exclusive. I said "refresh the snapshot manually by using DBMS_JOB and scheduling the dbms_mview.refresh myself". dbms_job is in place of dbms_refresh.make.

I was answering a related but subtly different question -- the person wanted to do something in addition to refreshing a snapshot, that is what the dbms_job is doing

get both of them

george, December 09, 2002 - 2:02 pm UTC

I guess..I am trying to see how what you suggested hits both issues in one shot:
1. delete+insert instead of truncate+insert for snapshot A
2. insert into another table while refreshing snapshot A

For (1) we can make a group (DBMS_REFRESH.MAKE)...for (2) we have to submit a job (DBMS_JOB.SUBMIT).....
If we just submit a job..with the refresh-the-snapshot and insert steps in it...then that would not take care of (1) that is why I was wondering if we need to do dbms_refresh.refresh('group1') inside the job that we are submitting...

Tom Kyte
December 09, 2002 - 2:18 pm UTC

it didn't - it (the dbms_job suggestion) solved a specific issue for that specific person. They were doing something totally different.

Problem in Snapshot Refresh

Riaz Shahid, February 03, 2003 - 4:45 am UTC

Dear Tom !

I am facing problems with a snapshot refresh. I've created a snapshot of a master site's table and also created the snapshot log at the master site. The refresh interval is every hour.

Now the table on the master site was updated but the snapshot couldn't refresh because the machine was off. We brought the machine back online (i.e., onto the network) but ooooooops.....the snapshot didn't refresh (at its scheduled time) although the rows exist in the snapshot log at the master site. We manually refreshed the snapshot.

My question is: why is the snapshot not refreshed if the snapshot site is down one time? Shouldn't it refresh the snapshot at the next scheduled time? And what is the best solution for that?



Tom Kyte
February 03, 2003 - 7:27 am UTC

the job would try 16 times to refresh; if after 16 failures it could not, it sets itself to "broken"

next time, query dba_jobs and look for "broken" jobs - jobs that have failed 16 times. you can use dbms_job to "fix" them and have them run again.

This is to prevent infinite retries of a job that will never succeed.
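A sketch of that check-and-fix (the job number 42 is of course hypothetical - take it from the query output):

```sql
-- Jobs that have hit 16 failures are flagged broken = 'Y'
select job, what, failures, broken
  from dba_jobs
 where broken = 'Y';

-- Un-break a job (run as the job's owner) so it runs again
-- at its next scheduled date:
begin
   dbms_job.broken( job => 42, broken => false );
   commit;
end;
/
```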


A reader, November 16, 2003 - 5:18 pm UTC


create snapshot snap_dual
refresh complete
as select * from dual@db_link;

should we have to create a primary key and snapshot log on the dual table for this concept to work?


Tom Kyte
November 17, 2003 - 6:37 am UTC

NO -- don't create a primary key or snapshot log on dual!!!

a refresh complete would never use a snapshot log, nor does it need a primary key.

Sree, November 17, 2003 - 7:13 am UTC


When we use the above

create snapshot snap_dual
refresh complete
as select * from dual@db_link;

Oracle complains about the missing primary key? How should we go about that?


Tom Kyte
November 17, 2003 - 7:44 am UTC

ops$tkyte@ORA920LAP> create snapshot snap_dual refresh complete as select * from dual@db_link;

Materialized view created.


ops$tkyte@ORA817DEV> create snapshot snap_dual refresh complete WITH ROWID
  2  as
  3  select * from dual@db_link;

Materialized view created.


A reader, November 17, 2003 - 9:23 am UTC

  1  create snapshot snap_dual refresh complete as select * from
  2* dual@temp
SQL> /
create snapshot snap_dual refresh complete as select * from
ERROR at line 1:
ORA-12014: table 'DUAL' does not contain a primary key constraint

 1  create snapshot snap_dual refresh complete WITH ROWID
  2      as
  3*     select * from dual@temp
SQL> /

Materialized view created.  

Does that mean in 8.1.7 I have to create with rowid?


Tom Kyte
November 19, 2003 - 6:32 am UTC


9ir1 replication

Pravesh Karthik, August 27, 2004 - 4:11 am UTC



what kind of refresh does this make .. a complete refresh or a fast refresh?

Pravesh Karthik

Tom Kyte
August 27, 2004 - 7:47 am UTC

only you can answer that.

you created the materialized views in the snapshot group. did you create them as incremental refresh or complete refresh?

'f',' cf'

Pravesh Karthik, August 27, 2004 - 9:59 am UTC

I created them as fast refresh,
so it's a fast refresh ...but then i did the following ...



so the first would be fast and the second would be complete irrespective of the type of MV I created - please confirm

Thanks a lot for your consideration

Tom Kyte
August 27, 2004 - 10:12 am UTC

cf would be "a bad idea"

if you want complete, just use "c" as documented.
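For reference, the method string in dbms_mview.refresh is positional - one character per materialized view in the list - so a two-MV refresh with one fast and one complete would look like this sketch (the MV names are made up):

```sql
begin
   -- 'f' = fast for MV_A, 'c' = complete for MV_B;
   -- '?' would mean force (fast if possible, else complete)
   dbms_mview.refresh( list => 'MV_A,MV_B', method => 'fc' );
end;
/
```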

Thanks a lot

A reader, August 27, 2004 - 10:24 am UTC

oh that was a mistake.
I wanted 'c' only there...

Thanks for identifying ...

Pravesh Karthik

Refresh complete commits even with group

Sehrope Sarkni, May 06, 2005 - 11:21 am UTC


I tried to create a refresh group to allow rollback of complete-refresh mviews, but my mviews are local (no dblink involved) so I think that is causing it to still do a truncate instead of a delete. I'm using Is there a way to force deletes instead of truncates for local mviews?

Here's the script output:

SQL> /* Formatted on 2005/05/06 10:49 (Formatter Plus v4.8.0) */
SQL> CREATE TABLE tb_test_1 AS SELECT * FROM all_objects WHERE ROWNUM < 100;

Table created.


Materialized view created.


Materialized view created.

  2     DBMS_REFRESH.make ('TEST_GROUP', 'MV_TEST_1,MV_TEST_DUAL',null,null);
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> select count(*) from tb_Test_1;


  2    FROM mv_test_1;



Table truncated.

  2    FROM mv_test_1;


  2     dbms_refresh.REFRESH ('TEST_GROUP');
  3     ROLLBACK;
  4  END;
  5  /

PL/SQL procedure successfully completed.

  2    FROM mv_test_1;


Here's the script itself (for easy copy/paste):
CREATE TABLE tb_test_1 AS SELECT * FROM all_objects WHERE ROWNUM < 100;
                    , 'MV_TEST_1,MV_TEST_DUAL'
                    , NULL
                    , NULL
  FROM tb_test_1;
  FROM mv_test_1;
  FROM mv_test_1;
  FROM mv_test_1;

-- drop scripts (for easy re-testing)
DROP TABLE tb_test_1;

Tom Kyte
May 06, 2005 - 11:47 am UTC

The purpose is to make the refresh of multiple tables consistent; it still commits at the end.

All MVs in the group are refreshed -- and then a commit happens

So is it possible to rollback complete refreshes after they finish?

Sehrope, May 06, 2005 - 12:31 pm UTC

Is there a way to rollback the mview refreshes after they finish if something else fails (i.e., work done using the mviews that were refreshed)? The general idea is:

commit; -- only if refresh_mviews and do_work are successful
exception when others then
rollback; -- rollback everything including mview refresh

Tom Kyte
May 06, 2005 - 12:43 pm UTC

no, it is like DDL in that sense.
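If the refresh really must be part of your own transaction, the usual workaround is to skip the MV machinery for that table and do your own delete + insert over the link - then a single commit or rollback covers the copy and the dependent work together. A sketch, with hypothetical names:

```sql
begin
   delete from local_copy;
   insert into local_copy
     select * from remote_table@db_link;

   -- ... do_work using local_copy ...

   commit;      -- copy and work commit together
exception
   when others then
      rollback; -- undoes the copy and the work
      raise;
end;
/
```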

Complete DBMS_SNAPSHOT.refresh vs. dropping and recreating

Brian Lyle, November 29, 2005 - 5:27 pm UTC

What are the benefits of using DBMS_SNAPSHOT.refresh to perform a complete refresh versus dropping and recreating the snapshot? I am trying to see if there is a performance gain by using the refresh option in this example.

Tom Kyte
November 30, 2005 - 11:10 am UTC

dropping and recreating makes the materialized view "disappear" for a point in time.

refreshing does not have to have that side effect.

In some releases, a complete refresh would be a truncate + insert /*+ APPEND */ (unless you have a refresh group). It would maintain the indexes during the load.

A drop and recreate can be more efficient since you build the indexes (perhaps with nologging, in parallel) after the create.

In current releases, a complete refresh is a delete+insert - meaning it could be "much less efficient" but the MV never "disappears"

A reader, May 19, 2006 - 1:34 am UTC

Roundabout way to Retain Data when refresh fails

Jaya Rajan, November 28, 2006 - 11:51 pm UTC


I think I found a roundabout way to retain data in case of a complete refresh failure. Instead of putting the "SELECT columns FROM remote_table WHERE conditions" in the create materialized view DDL, first create a normal view with the same select statement, then use this view in your materialized view DDL.
Let's say you want your MV to be created as:
SELECT col1,col2 FROM remote_table@db_link;

Instead you create a view with the same SELECT statement:
SELECT col1,col2 FROM remote_table@db_link;

Then use this view to create the materialized view:
SELECT * FROM my_view;
If the remote table is unavailable and the refresh fails, then your materialized view will retain its data.

Tom, I discovered this by accident. I had two materialized views - one using a view which in turn uses a remote table, and another that had a direct select from the remote table. When the remote database was down and a fast refresh was issued, I discovered that the MV that used the view had data in it, while the MV that directly selected from the remote database was empty.
Please explain to me why this happens.

Tom Kyte
November 30, 2006 - 8:46 am UTC

it doesn't.

depending on release a complete refresh is either:

a) truncate + insert /*+ APPEND */ (older releases)
b) delete + insert (current releases)

and in older releases, you can make the complete refresh "transactional" by creating a snapshot refresh group (create another dummy materialized view as select * from dual - and then create a refresh group that consists of your big MV and this small one - since they have to be refreshed 'transactionally' - it'll use delete + insert)

About current behavior

Martin Guillen, December 26, 2006 - 2:46 pm UTC

Hey Tom,
You said:
a) truncate + insert /*+ APPEND */ (older releases)
b) delete + insert (current releases)

I'm assuming that this is without a refresh group involved.

On current releases, how can we do truncate + insert /*+ APPEND */ to refresh an MV?

We can do a truncate "outside" dbms_mview.refresh but I don't know how to make this script do the insert /*+ APPEND */

Any thoughts?,
Tom Kyte
December 26, 2006 - 9:26 pm UTC

you cannot - yes, refresh groups always did the delete+insert; the only way to do the truncate + insert append-like work would be to recreate the MV, or to do the truncate + insert append in a job yourself.

MV and redefinition

A reader, December 27, 2006 - 11:20 am UTC

Hi Tom. We are in the process of migrating one of our legacy applications to a new version, and this implies some structural changes in the underlying tables (only about 5 of them). This system is online 24/7, so this needs to be done with as little downtime as possible. It is going to be migrated to a different server with a different architecture. The old version uses 9iR2 and the new version will use 10gR2 RAC. The 5 tables are constantly being used (DML activity). For this, I am thinking of the following approach:

1) Create materialized view logs on the 5 tables.
2) Create remote materialized views (over a dblink) on the new server for these five tables, using incremental refreshes (on demand).
3) Create a materialized view group to gather all five tables.

OK, this so far _should_ work as I expect, copying all the information to the new server with approximately no downtime at all. The problem that arises is that, because there are some structural changes to these tables, I need to populate the information into the new structure. So I am thinking of using dbms_redefinition to make the new table (with the structural changes) the interim table so it can be updated with the MV information. With this, I expect to have a short downtime between the last refresh of the MV group and the last refresh of sync_interim_table.

I would like to know your advice on this topic.

Tom Kyte
December 28, 2006 - 9:26 am UTC

you would want to use materialized views on PREBUILT TABLES - search for prebuilt and materialized views on this site for examples - so that when you DROP the materialized views on 10g, the tables remain.

And if you can use dbms_redefinition - you could just use the materialized view to add columns, whatever. You would not need to two-step this.


Alberto Dell'Era, December 28, 2006 - 10:15 am UTC

Stumbled upon this new feature of 10g one week ago:

dellera@ORACLE10> create materialized view mv
  2  refresh complete on demand with rowid
  3  as select * from dual;

Materialized view created.

dellera@ORACLE10> drop materialized view mv PRESERVE TABLE;

Materialized view dropped.

dellera@ORACLE10> desc mv
Name Null? Type
------------------------------------------------------------------------ -------- ------------
Tom Kyte
December 29, 2006 - 9:01 am UTC

nice, thanks!

RE: MV and redefinition

A reader, December 28, 2006 - 10:30 pm UTC

Tom, unfortunately, I can't use a prebuilt table since these tables' materialized view logs are created with rowid (they have no primary key defined).

Alberto, impressive! I will give that a go and see what happens.. very cool feature.

RE: MV and redefinition

A reader, December 29, 2006 - 11:46 am UTC

Tom, it looks like that when you call:

drop materialized view mview_name PRESERVE TABLE

Oracle does a "Rowid Range Scan" to organize the rowids in the materialized view, and thus this operation can take a while if there are a lot of rows. Unfortunately, it will not work as I expect since there's a lot of time involved in executing the statement.

When I said about using dbms_redefinition, I was thinking about creating the materialized view locally with the data of the remote site, with the transformation done via a subquery. Then, I was planning to create my original table and use dbms_redefinition on it with the MV view. That way, I was planning to keep the data in sync and once I stop the mv from refreshing, I could just call abort_redef_table on my original table and have the data just there refreshed. I cannot use MV on prebuilt table because the materialized view logs are created with rowid (no primary key defined on the tables).

I did not understand your last paragraph about that I could just use dbms_redefinition. Could you explain further ?

Tom Kyte
December 30, 2006 - 8:54 am UTC

use the prebuilt table option then????

my point was - If you can use dbms_redef to add your columns, and you are using a materialized view in the first place, you DO NOT NEED TO USE dbms_redef - the materialized view creation ITSELF can synthesize these new columns.

RE: MV and redefinition

A reader, December 31, 2006 - 11:58 am UTC

Prebuilt table is not an option, since it can't be used with a materialized view created with rowid, only primary key.
Tom Kyte
December 31, 2006 - 4:04 pm UTC

well, consider getting a primary key in place then.

Using /*+ Append */ in MV refreshes

Rick Jolley, February 28, 2007 - 1:31 pm UTC

According to what you've said, this is the way I understand things for 10gR2:

In a fast (incremental) refresh, the inserts are row by row, i.e. insert into mv values (col1, col2, ..., colN).
In a complete refresh, there is a delete from mv followed by insert into mv (mv query).

If that's true, then can you add an APPEND hint to the mv query to take advantage of it? Would you also have to add a statement for the NOLOGGING attribute of the mv?
Tom Kyte
February 28, 2007 - 3:09 pm UTC

you cannot add an append hint.

you can schedule your own job that:

a) issues execute immediate 'truncate table t';
b) insert /*+ append */ into t <defining query>;

and using prebuilt table options and such - still have it be a materialized view.
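Put together, the do-it-yourself job body might look like this sketch (the names are illustrative; t would be the prebuilt table under the MV):

```sql
begin
   execute immediate 'truncate table t';

   insert /*+ APPEND */ into t
     select * from base_table@db_link;

   commit;  -- a direct-path insert must be committed before t can be queried
end;
/
```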

query against mview during complete refresh...

Craig, May 16, 2007 - 2:20 pm UTC


I've been searching Metalink, Oracle Docs, and AskTom for a while and finally came upon this thread. What happens if a query is issued against a materialized view and the materialized view is refreshed with a complete refresh before the query finishes? Also, what happens if a query is issued against a materialized view while it's being refreshed? As always, thanks in advance for your input!

Tom Kyte
May 17, 2007 - 10:48 am UTC


a) full refresh with delete + insert - query runs to completion using the undo to rollback the changes.

b) full refresh with truncate + insert /*+ append */ - if the truncated space is NOT REUSED - query runs to completion. If space is reused then:
$ oerr ora 8103
08103, 00000, "object no longer exists"
// *Cause:  the object has been deleted by another user since the operation
//          began

is going to happen.

Transactional refresh

Loïc, December 20, 2007 - 4:40 pm UTC

Hello Tom,
Many thanks for your comments about the refresh group containing 2 materialized views to force a transactional refresh.

However, in my case, I use this to refresh data using a complex query (running in about 30s to 1min30s) that is queried very frequently by end users through a GUI. The users are very happy to get their data in less than 500ms, but the refresh seems somewhat confusing:

- what happens if two different sessions (due to some third-party scheduler bug, or source table heavy load) try to refresh the same group at the same time (or before the other finishes)?

- since I don't need anything like query rewrite, site replication and so on, I am thinking of going back to a simpler delete/insert solution (more understandable to the developers I work with); do you see anything to take care of?

Thanks a lot in advance for your answer,
Tom Kyte
December 21, 2007 - 2:34 pm UTC

the materialized view refreshing will serialize.

You must work with some very simple people if the concept of

a) run this stored procedure, it makes the data current


b) run this delete and this insert, it makes the data current

would confuse them.

It's also about refresh group administration

Loïc, December 21, 2007 - 4:38 pm UTC

Well, you can't imagine how blue their faces can turn when I'm speaking of data set processing, PL/SQL and business logic inside the database... yes, they are Java people :( (like me 3 years ago, before working with an Oracle DBA... and then the light came to me ;) ).

More seriously, returning to pure delete/insert will in fact meet two goals:
- simplify the process for "my" developers
- ease refresh group administration when exporting/importing one into a schema with another name: the refresh group keeps the original owner forever

(I'm on vacation so I won't be able to point precisely to the information in the Oracle dictionary or directly in the dump file, but it's true at least for :) (or maybe I'm missing some imp options)

Push materialized view REFRESH from master site

Vikas Sharma, January 17, 2014 - 1:09 pm UTC

Hi Tom,

I have a master materialize view site "A" and one materialize view site "B". on n2 different location and connects using WAN.

I have seen many documents on materialize view refresh but all from refreshes need to be executed from mvsites. it is like mv site pulling data from master site. is there anyway i can push refresh from master to mvsite.


Vikas Sharma
