I think there is a way to do this but I can't remember.
We have 40 tables, each 500GB+ in size, that we maintain between sites using fast refresh. It's a setup that has worked remarkably well for a long time, and we don't want to change it.
We'll be rolling out a 3rd site soon. The smaller MViews can be instantiated with a complete, non-atomic refresh in a reasonable amount of time and then fast refresh from there, but not these 500GB-2TB tables.
If we instantiate the data ahead of time and then create the MV on a prebuilt table, fast refresh won't pull the changes that happened after the instantiate/load started. We can make sure those changes stay in the MV log, but they are still never applied. For a few tables we would manually track changes based on a modify date and re-touch those rows at the master so fast refresh picks them up, but for 40 tables that's messy.
What's the proper way to do this? I read about offline instantiation but wasn't able to translate the example to my use case. Also, what is the difference between an MV group and a refresh group?
Thank you.
(My understanding is) "offline instantiation" was part of the original Advanced Replication option, which is no longer supported. (A "materialized view group" was likewise an Advanced Replication concept, administered via DBMS_REPCAT; a refresh group, managed with DBMS_REFRESH, is simply a set of mviews refreshed together to the same consistent point in time, and is still current.)
Here's an option you could explore, which reduces the offline time to just the time needed to pull in the latest mview log entries.
Setup (site 1)
=====
SQL> create table t ( x int primary key, y int, d timestamp default localtimestamp);
Table created.
SQL>
SQL> insert into t (x,y)
2 select rownum, rownum
3 from dual
4 connect by level < 1000;
999 rows created.
SQL> commit;
Commit complete.
SQL>
SQL> create materialized view log on t with rowid, primary key, sequence;
Materialized view log created.
SQL>
SQL> create table mv as select * from t;
Table created.
SQL> alter table mv add primary key ( x );
Table altered.
SQL>
SQL> create materialized view mv
2 on prebuilt table
3 refresh fast on demand
4 as select * from t;
Materialized view created.
SQL> begin
2 for i in 1001 .. 1000000 loop
3 insert into t (x,y) values (i,i);
4 dbms_session.sleep(.5);
5 commit;
6 if mod(i,10) = 0 then
7 update t set y = y + 1 where x = i;
8 commit;
9 end if;
10 if mod(i,37) = 0 then
11 delete t where x = i-500;
12 end if;
13 commit;
14 end loop;
15 end;
16 /
So I've got a table with rows constantly being added or changed. We can see that from another session:
SQL> select * from ( select count(*) from t ), ( select count(*) from mv);
COUNT(*) COUNT(*)
---------- ----------
1057 999
SQL> exec dbms_mview.refresh('MV');
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> select * from ( select count(*) from t ), ( select count(*) from mv);
COUNT(*) COUNT(*)
---------- ----------
1059 1059
The process I'll follow is:
- lock the base table (to block any DML)
- refresh the mview to bring things in sync
- establish a known consistent point in time / SCN, then commit to allow work to resume (see the sketch below for scripting this across many tables)
- datapump the tables to the new site using the known SCN
Now comes the "outage":
- use the mview log entries to bring the copy of the base table at the new site up to date
- refresh the mview and we're done.
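Since you have 40 of these tables, the lock / capture steps lend themselves to scripting. A minimal sketch, assuming a hypothetical driver table REPL_TABLES holding the 40 table names and a hypothetical bookkeeping table REPL_BOOKMARKS:

declare
  l_scn  number;
  l_seq  number;
begin
  -- block DML on every master table
  for r in ( select table_name from repl_tables ) loop
    execute immediate 'lock table '||r.table_name||' in exclusive mode';
  end loop;

  -- (refresh the existing mviews at this point if you want the logs near-empty)

  -- one SCN and one sequence watermark covers all 40 tables, because
  -- cdc_rsid_seq$ is shared by every mview log created "with sequence"
  l_scn := dbms_flashback.get_system_change_number;
  select sys.cdc_rsid_seq$.nextval into l_seq from dual;

  insert into repl_bookmarks ( scn, log_seq ) values ( l_scn, l_seq );
  commit;  -- releases the locks; normal work resumes
end;
/

Here's what it looks like done by hand for a single table: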
SQL> conn sys/password@pdb21a as sysdba
Connected.
SQL> col scn format 99999999999999999999999999999999
SQL> lock table mcdonac.t in exclusive mode;
Table(s) Locked.
-- in a separate session
SQL> exec dbms_mview.refresh('MV');
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> select * from ( select count(*) from t ), ( select count(*) from mv);
COUNT(*) COUNT(*)
---------- ----------
1303 1303
-- back as sysdba
SQL> select dbms_flashback.get_system_change_number scn from dual;
SCN
---------------------------------
16331327945051
SQL> select sys.cdc_rsid_seq$.nextval from dual;
NEXTVAL
----------
31066
SQL> commit;
Commit complete.
"cdc_rsid_seq$' is where "with sequence" comes from.
Now I can unload the tables as of that consistent moment, and load them into my new db
pdb21a = source
pdb21b = target
C:\>expdp scott/tiger@pdb21a directory=ctmp dumpfile=mview.dmp logfile=mview.log flashback_scn=16331327945051 tables=(t,mv)
Export: Release 23.0.0.0.0 - Production on Wed Aug 13 11:47:24 2025
Version 23.9.0.25.07
Copyright (c) 1982, 2025, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/********@pdb21a directory=ctmp dumpfile=mview.dmp logfile=mview.log flashback_scn=16331327945051 tables=(t,mv)
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/AUDIT_OBJ
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/POST_TABLE_ACTION
. . exported "SCOTT"."MV" 36.24 KB 1303 rows
. . exported "SCOTT"."T" 36.24 KB 1303 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
C:\TMP\MVIEW.DMP
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at Wed Aug 13 11:47:27 2025 elapsed 0 00:00:02
C:\>impdp scott/tiger@pdb21b directory=ctmp dumpfile=mview.dmp logfile=mview.log tables=(t,mv)
Import: Release 23.0.0.0.0 - Production on Wed Aug 13 11:47:46 2025
Version 23.9.0.25.07
Copyright (c) 1982, 2025, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TABLE_01": scott/********@pdb21b directory=ctmp dumpfile=mview.dmp logfile=mview.log tables=(t,mv)
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."MV" 36.24 KB 1303 rows
. . imported "SCOTT"."T" 36.24 KB 1303 rows
Processing object type TABLE_EXPORT/TABLE/AUDIT_OBJ
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/POST_TABLE_ACTION
Job "SCOTT"."SYS_IMPORT_TABLE_01" successfully completed at Wed Aug 13 11:47:48 2025 elapsed 0 00:00:01
Now on the target I need to bring things up to date, so this is the start of my outage (ie, nothing happening on source during this process)
SQL> create table t_delta as
2 select *
3 from mlog$_t@pdb21a
4 where SEQUENCE$$ >= 31066;
Table created.
SQL> create materialized view log on t with rowid, primary key, sequence;
Materialized view log created.
SQL> create materialized view mv
2 on prebuilt table
3 refresh fast on demand
4 as select * from t;
Materialized view created.
SQL> delete from t where x in ( select x from t_delta );
4 rows deleted.
SQL> insert into t select * from t@pdb21a where x in ( select x from t_delta);
156 rows created.
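Note that deletes at the source are handled implicitly: a deleted key shows up in t_delta but no longer exists in t@pdb21a, so the delete removes it and the insert never brings it back. And again, for 40 tables this apply step is easy to generate. A hedged sketch with dynamic SQL, assuming the hypothetical REPL_TABLES driver table also records each table's (single-column) key:

begin
  for r in ( select table_name, key_column from repl_tables ) loop
    -- remove the old version of every changed row
    execute immediate
      'delete from '||r.table_name||
      ' where '||r.key_column||' in
          ( select '||r.key_column||' from '||r.table_name||'_delta )';
    -- re-fetch the current version of surviving rows from the source
    execute immediate
      'insert into '||r.table_name||
      ' select * from '||r.table_name||'@pdb21a
         where '||r.key_column||' in
          ( select '||r.key_column||' from '||r.table_name||'_delta )';
  end loop;
  commit;
end;
/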
SQL> select * from ( select count(*) from t ), ( select count(*) from mv);
COUNT(*) COUNT(*)
---------- ----------
1455 1303
SQL> exec dbms_mview.refresh('MV');
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> select * from ( select count(*) from t ), ( select count(*) from mv);
COUNT(*) COUNT(*)
---------- ----------
1455 1455
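As a final sanity check before opening up the new site, you could compare the two ends over the database link, e.g. row counts per table (or a hash of the columns if you want to be more thorough):

select ( select count(*) from t@pdb21a ) src_rows,
       ( select count(*) from mv )      new_site_rows
  from dual;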