MV view
Jegan, February 20, 2004 - 2:24 pm UTC
Tuning this query...
I need your suggestions and your great input in tuning this query.
Count of records per table.
---------------------------
CCRT_CALENDAR_DIM - 7304
PORTFOLIOS_DIM - 17
ACCOUNT_MONTHLY_FACT - 39691552 PARTITIONED BY MONTH (has 38 partitions)
This CREATE MATERIALIZED VIEW script (attached below) takes 2 hours 30 minutes, and I am doing a COMPLETE REFRESH; since I am using ANALYTIC FUNCTIONS, I cannot use the FAST REFRESH option.
Can you throw some light on this? Guide me on which parameters I should look into.
What type of indexes will do the trick, and how can the SQL be rewritten in a much better way?
This is test data only. The actual size of ACCOUNT_MONTHLY_FACT is 520 million rows.
I have 70 more MVs to tune (and I have to do this before March 3).
I need your guidance on how to start and proceed from here...
My gut says I am in trouble. I need your help.
CREATE MATERIALIZED VIEW MV_CREDIT_MEASURES AS
SELECT
B.PORTFOLIO_GROUP_ID,
A.RISK_LEVEL_ID,
A.SUB_PORTFOLIO_ID,
A.SECURITIZATION_IND,
A.BRAND_ID,
(CASE WHEN GROUPING_ID
(B.PORTFOLIO_GROUP_ID,
A.RISK_LEVEL_ID,
A.SUB_PORTFOLIO_ID,
A.SECURITIZATION_IND,
A.BRAND_ID,
TRIM(D.CAL_QTR_STR),
TRIM(C.CAL_MONTH_STR),
A.MONTHS_ON_BOOK) = 0 THEN
TRIM(D.CAL_QTR_STR)
WHEN GROUPING_ID
(B.PORTFOLIO_GROUP_ID,
A.RISK_LEVEL_ID,
A.SUB_PORTFOLIO_ID,
A.SECURITIZATION_IND,
A.BRAND_ID,
TRIM(D.CAL_YEAR_STR),
TRIM(C.CAL_MONTH_STR),
A.MONTHS_ON_BOOK) = 0 THEN
TRIM(D.CAL_YEAR_STR)
ELSE
TRIM(D.CAL_MONTH_STR)
END) VINTAGE_PERIOD,
TRIM(C.CAL_MONTH_STR) REPORT_PERIOD,
A.MONTHS_ON_BOOK,
SUM(INT_AND_FEES) INT_AND_FEES,
SUM(OPENING_BAL) OPENING_BAL,
SUM(CLOSING_BAL) CLOSING_BAL,
SUM(NUM_ACCOUNTS_OPEN) NUM_ACCOUNTS_OPEN,
SUM(NUM_ACCOUNTS_VOLUNTARY_ATTR) NUM_ACCOUNTS_VOLUNTARY_ATTR,
SUM(NUM_ACCOUNTS_INVOLUNTARY_ATTR) NUM_ACCOUNTS_INVOLUNTARY_ATTR,
SUM(PRINCIPAL_CHGOFF) PRINCIPAL_CHGOFF,
SUM(GROSS_CHGOFF) GROSS_CHGOFF,
SUM(NET_CHGOFF) NET_CHGOFF,
SUM(TWO_PLUS_CLOSING_BAL) TWO_PLUS_CLOSING_BAL,
SUM(THREE_PLUS_CLOSING_BAL) THREE_PLUS_CLOSING_BAL,
SUM(FOUR_PLUS_CLOSING_BAL) FOUR_PLUS_CLOSING_BAL
FROM (SELECT
A.RISK_LEVEL_ID,
A.SUB_PORTFOLIO_ID,
A.PORTFOLIO_ID,
A.SECURITIZATION_IND,
A.BRAND_ID,
A.MONTHS_ON_BOOK,
A.VINTAGE_DATE,
A.MONTH_END_DATE,
(CASE WHEN A.DELINQUENCY_STATUS_ID <> 10 THEN
NVL(A.PURCHASE_FIN_CHG, 0) + NVL(A.CASH_ADV_FIN_CHG,0) + NVL(A.ANNUAL_FEE, 0) +
NVL(A.FRONT_END_FEE ,0) + NVL(A.INSURANCE_FEE, 0) + NVL(A.INTERCHANGE_FEE, 0) +
NVL(A.LATE_FEE, 0) + NVL(A.OTHER_FEE, 0) + NVL(A.OVERLIMIT_FEE, 0)
ELSE 0
END) INT_AND_FEES,
LAG(CASE WHEN A.DELINQUENCY_STATUS_ID <> 10 THEN NVL(A.MTD_CLOSING_BAL,0) ELSE 0 END)
OVER (PARTITION BY
A.ACCOUNT_ID
ORDER BY
A.MONTH_END_DATE)
OPENING_BAL,
(CASE WHEN A.DELINQUENCY_STATUS_ID <> 10 THEN
NVL(A.MTD_CLOSING_BAL, 0)
ELSE 0
END) CLOSING_BAL,
(CASE WHEN A.DELINQUENCY_STATUS_ID <> 10 THEN
1
ELSE 0
END) NUM_ACCOUNTS_OPEN,
(CASE WHEN A.ACCOUNT_STATUS_ID = 1 THEN
1
ELSE 0
END) NUM_ACCOUNTS_VOLUNTARY_ATTR,
(CASE WHEN A.ACCOUNT_STATUS_ID = 2 THEN
1
ELSE 0
END) NUM_ACCOUNTS_INVOLUNTARY_ATTR,
(CASE WHEN A.DELINQUENCY_STATUS_ID = 10 THEN
NVL(A.PURCHASE_CHGOFF, 0) + NVL(A.CASH_ADV_CHGOFF, 0)
ELSE 0
END) PRINCIPAL_CHGOFF,
(CASE WHEN A.DELINQUENCY_STATUS_ID = 10 THEN
NVL(A.TOTAL_CHGOFF, 0)
ELSE 0
END) GROSS_CHGOFF,
(CASE WHEN A.DELINQUENCY_STATUS_ID = 10 THEN
NVL(A.TOTAL_CHGOFF, 0) - NVL(A.TOTAL_RECOVERY_AMT, 0) -
(NVL(A.PURCHASE_FIN_CHG_CHGOFF, 0) + NVL(A.CASH_ADV_FIN_CHG_CHGOFF, 0) +
NVL(A.MISC_CHGOFF, 0))
ELSE 0
END) NET_CHGOFF,
(CASE WHEN A.DELINQUENCY_STATUS_ID NOT IN (0, 1, 10) THEN
NVL(A.MTD_CLOSING_BAL, 0)
ELSE 0
END) TWO_PLUS_CLOSING_BAL,
(CASE WHEN A.DELINQUENCY_STATUS_ID NOT IN (0, 1, 2, 10) THEN
NVL(A.MTD_CLOSING_BAL, 0)
ELSE 0
END) THREE_PLUS_CLOSING_BAL,
(CASE WHEN A.DELINQUENCY_STATUS_ID NOT IN (0, 1, 2, 3, 10) THEN
NVL(A.MTD_CLOSING_BAL, 0)
ELSE 0
END) FOUR_PLUS_CLOSING_BAL
FROM ACCOUNT_MONTHLY_FACT A) A,
PORTFOLIOS_DIM B,
CCRT_CALENDAR_DIM C,
CCRT_CALENDAR_DIM D
WHERE A.PORTFOLIO_ID = B.PORTFOLIO_ID
AND A.MONTH_END_DATE = C.CAL_DAY_DT
AND A.VINTAGE_DATE = D.CAL_DAY_DT
GROUP BY GROUPING SETS (
(B.PORTFOLIO_GROUP_ID,
A.RISK_LEVEL_ID,
A.SUB_PORTFOLIO_ID,
A.SECURITIZATION_IND,
A.BRAND_ID,
TRIM(D.CAL_MONTH_STR),
TRIM(C.CAL_MONTH_STR),
A.MONTHS_ON_BOOK),
(B.PORTFOLIO_GROUP_ID,
A.RISK_LEVEL_ID,
A.SUB_PORTFOLIO_ID,
A.SECURITIZATION_IND,
A.BRAND_ID,
TRIM(D.CAL_QTR_STR),
TRIM(C.CAL_MONTH_STR),
A.MONTHS_ON_BOOK),
(B.PORTFOLIO_GROUP_ID,
A.RISK_LEVEL_ID,
A.SUB_PORTFOLIO_ID,
A.SECURITIZATION_IND,
A.BRAND_ID,
TRIM(D.CAL_YEAR_STR),
TRIM(C.CAL_MONTH_STR),
A.MONTHS_ON_BOOK)
);
Does something else affect redo?
steven, February 22, 2004 - 3:29 am UTC
but it seems that the redo generated after setting the table to NOLOGGING is still massive
SQL> select count(*) from t;
COUNT(*)
----------
28992
SQL> create materialized view tmv as select * from t;
Materialized view created.
SQL> column value new_val V
SQL> set verify off
SQL> select a.name,b.value from v$statname a ,v$mystat b where a.statistic#=b.statistic# and a.name='redo size';
NAME VALUE
---------------------------------------------------------------- ----------
redo size 590228
SQL> exec dbms_mview.refresh( 'TMV', 'C' );
PL/SQL procedure successfully completed.
SQL> select a.name,b.value,TO_char(b.value-&v,'999,999,999,999') diff from v$statname a ,v$mystat b where a.statistic#=b.statistic# and a.name='redo size';
NAME VALUE
---------------------------------------------------------------- ----------
DIFF
--------------------------------------------------------------------------------
redo size 2366496
1,776,268
SQL> alter table tmv nologging;
Table altered.
SQL> exec dbms_mview.refresh( 'TMV', 'C' );
PL/SQL procedure successfully completed.
SQL> select a.name,b.value,TO_char(b.value-&v,'999,999,999,999') diff from v$statname a ,v$mystat b where a.statistic#=b.statistic# and a.name='redo size';
NAME VALUE
---------------------------------------------------------------- ----------
DIFF
--------------------------------------------------------------------------------
redo size 4148956
1,782,460
SQL> alter table tmv nologging;
table altered
SQL> exec dbms_mview.refresh( 'TMV', 'C' );
PL/SQL procedure successfully completed.
SQL> select a.name,b.value,TO_char(b.value-&v,'999,999,999,999') diff from v$statname a ,v$mystat b where a.statistic#=b.statistic# and a.name='redo size';
NAME VALUE
---------------------------------------------------------------- ----------
DIFF
--------------------------------------------------------------------------------
redo size 5899420
1,750,464
February 22, 2004 - 9:35 am UTC
1.7 meg "massive"???
try with a big result set like I did please. there is some overhead that is just going to be there. show us what happens when you do a non-trivial workload like I did (and use column name format a15, etc., to make the output "fit" -- it's a lot easier to read that way)
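For reference, the redo-measurement technique used throughout this exchange can be written as one compact SQL*Plus script (a sketch; the mview name TMV is a placeholder for whatever you are testing):

```sql
-- capture the session's current 'redo size' into substitution variable V
column name format a15
column value new_val V
set verify off

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';

-- do the work to be measured, e.g. a complete refresh
exec dbms_mview.refresh( 'TMV', 'C' );

-- report only the redo generated by that refresh
select a.name, b.value,
       to_char( b.value - &V, '999,999,999,999' ) diff
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';
```

v$mystat is per-session, so the difference reflects only this session's work.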
NOLOGGING still has no effect
steven, February 23, 2004 - 4:41 am UTC
Thank you very much for the reply, and sorry for the obscure formatting.
I loaded more data and repeated the test, but I can still see redo increasing in proportion to the amount of data. I am confused about it; am I doing something wrong?
Sorry to disturb you again.
SQL> select count(*) from big_table;
COUNT(*)
----------
463872
SQL> create materialized view big_table_mv as select * from
2 big_table;
Materialized view created.
SQL> col name format a15
SQL> column value new_val V
SQL> set verify off
SQL> select a.name,b.value from v$statname a,v$mystat b where a.statistic#=b.statistic# and a.name='redo size';
NAME VALUE
--------------- ----------
redo size 8375876
SQL> exec dbms_mview.refresh( 'BIG_TABLE_MV', 'C' );
PL/SQL procedure successfully completed.
SQL> col diff format a20
SQL> select a.name,b.value,to_char(b.value-&V,'999,999,999,999') diff from v$statname a,v$mystat b where a.statistic#=b.statistic# and a.name='redo size';
NAME VALUE DIFF
--------------- ---------- --------------------
redo size 36881836 28,505,960
SQL> alter table big_table_mv nologging;
Table altered.
SQL> exec dbms_mview.refresh( 'BIG_TABLE_MV', 'C' );
PL/SQL procedure successfully completed.
SQL> select a.name,b.value,to_char(b.value-&V,'999,999,999,999') diff from v$statname a,v$mystat b where a.statistic#=b.statistic# and a.name='redo size';
NAME VALUE DIFF
--------------- ---------- --------------------
redo size 65400332 28,518,496
LOAD MORE ....
SQL> select count(*)from big_table;
COUNT(*)
----------
927744
SQL> drop materialized view big_table_mv;
Materialized view dropped.
SQL> create materialized view big_table_mv as select * from
2 big_table;
Materialized view created.
SQL>
SQL> col name format a15
SQL> column value new_val V
SQL> set verify off
SQL> select a.name,b.value from v$statname a,v$mystat b where a.statistic#=b.statistic# and a.name='redo size';
NAME VALUE
--------------- ----------
redo size 320433004
SQL> exec dbms_mview.refresh( 'BIG_TABLE_MV', 'C' );
PL/SQL procedure successfully completed.
SQL>
SQL> select a.name,b.value,to_char(b.value-&V,'999,999,999,999') diff from v$statname a,v$mystat b where a.statistic#=b.statistic# and a.name='redo size';
NAME VALUE DIFF
--------------- ---------- --------------------
redo size 377704252 57,271,248
SQL> alter table big_table_mv nologging;
Table altered.
SQL> exec dbms_mview.refresh( 'BIG_TABLE_MV', 'C' );
PL/SQL procedure successfully completed.
SQL>
SQL> select a.name,b.value,to_char(b.value-&V,'999,999,999,999') diff from v$statname a,v$mystat b where a.statistic#=b.statistic# and a.name='redo size';
NAME VALUE DIFF
--------------- ---------- --------------------
redo size 434843812 57,139,560
Is this workload still trivial?
February 23, 2004 - 7:50 am UTC
select force_logging from v$database;
what is the output of that.
it's not for standby
steven, February 23, 2004 - 7:34 pm UTC
SQL> col force_logging format a20
SQL> select force_logging from v$database;
FORCE_LOGGING
--------------------
NO
Does some init parameter affect this?
February 24, 2004 - 6:27 am UTC
use sql_trace=true and verify that the refresh is being done via TRUNCATE + INSERT /*+ APPEND */
what version are we looking at here, and on what platform?
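A sketch of how that check might be done (trace-file location and naming vary by platform and version):

```sql
-- trace this session, run the refresh, then stop tracing
alter session set sql_trace = true;
exec dbms_mview.refresh( 'BIG_TABLE_MV', 'C' );
alter session set sql_trace = false;

-- then inspect the trace file written to user_dump_dest
-- (run it through tkprof to make it readable) and look for
-- a TRUNCATE of the mview table followed by INSERT /*+ APPEND */
```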
feeling embarrassed
steven, February 25, 2004 - 1:13 am UTC
After a DESC of BIG_TABLE, I discovered that there is a PK on it. The index maintenance generates massive redo even with /*+ APPEND */. Sorry for disturbing you because of my carelessness.
SQL> select count(*)from big_table;
COUNT(*)
----------
927744
SQL> select index_name,table_name from user_indexes where table_name='BIG_TABLE_MV';
INDEX_NAME TABLE_NAME
------------------------------ ------------------------
SYS_C008606 BIG_TABLE_MV
SQL> alter table BIG_TABLE_MV drop primary key;
SQL> select index_name,table_name from user_indexes where table_name='BIG_TABLE_MV';
no rows selected
SQL> alter table BIG_TABLE_MV nologging;
table altered
SQL> column value new_val V
SQL> set verify off
SQL> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and a.name = 'redo size'
5 /
NAME VALUE
-----------------------
redo size 16512
SQL> exec dbms_mview.refresh( 'BIG_TABLE_MV', 'C' );
SQL> select a.name, b.value, to_char( b.value-&V,
2 '999,999,999,999' ) diff
3 from v$statname a, v$mystat b
4 where a.statistic# = b.statistic#
5 and a.name = 'redo size'
6 /
NAME VALUE DIFF
-----------------------------------
redo size 326452 309,940
You are right! Thank you for your patience;
I'll never make such a careless mistake again.
Materialized Views and Statistics
Lisa Spory, May 10, 2004 - 2:38 pm UTC
Tom,
I have a number of materialized views that will need to be refreshed following the bi-monthly ETL to my data warehouse. Many of the materialized views have indexes as well.
Does my ETL script need to explicitly gather statistics for the materialized views and indexes after executing dbms_mview.refresh()?
Regards,
Lisa
May 10, 2004 - 3:33 pm UTC
I would alter table monitoring and gather stale -- so only the mv's that actually need it get it, but yes, you need to do it (or not), it is not part of the refresh.
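A sketch of that "monitoring + gather stale" approach (the mview name is a placeholder; on later releases monitoring is on by default):

```sql
-- ask Oracle to track DML volume against the mview's table
alter table mv_credit_measures monitoring;

-- after the refreshes, gather statistics only for objects whose
-- tracked DML has made their existing statistics stale
begin
   dbms_stats.gather_schema_stats(
      ownname => user,
      options => 'GATHER STALE',
      cascade => true );   -- include the indexes as well
end;
/
```

Only the mviews that were actually refreshed (and changed enough) get re-analyzed; fresh ones are skipped.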
refresh is slow
friend, May 28, 2004 - 9:19 am UTC
Dear Tom
Following are the stats from my system. I am not concluding anything;
I am looking for your conclusion about the mview refresh rate.
SQL> select count(*) from mlog$_ab;
COUNT(*)
----------
163127
SELECT a.object_name, a.object_type, tablespace_name,
       SUM(bytes) OVER (PARTITION BY object_name)/(1024*1024) objsize
  FROM all_objects a, user_segments
 WHERE segment_name = object_name
   AND a.object_name LIKE UPPER('MLOG$_ab')
   AND object_type <> 'SYNONYM'
 ORDER BY tablespace_name, objsize DESC, object_name;
OBJECT_NAME OBJSIZE
---------- ----------
MLOG$_ab 20
Job Name Last Start Last End ST Run Pri/Xit
____________________________ ________________ ________________ __ _______ ___
gca_p_b_refresh_person_vw 05/27/2004 22:48 05/27/2004 23:17 SU 43051591/1
refresh_vw 05/27/2004 22:48 05/27/2004 23:17 SU 43051591/1
select count(*) from mlog$_ab;
COUNT(*)
----------
1989
Hence the mview refresh rate:
= 162K / 29 min = 5.6K rows/min
Is it feasible to do 350K records in 15 minutes?
May 28, 2004 - 11:37 am UTC
please
a) post once, in one place.
b) look there for the answer
ORA-12034: materialized view log on "string"."string" younger than last refresh
mvsucks, May 28, 2004 - 11:59 am UTC
Tom,
The MV (m) is a join of two base tables (t1, t2). While loading one base table (t1), the error in the title line occurs, but it says the MV log on t2 is younger than the last refresh. Could you elaborate with an example of how this happens and how to resolve it?
Best,
May 28, 2004 - 12:05 pm UTC
no example, really "nice name" (very professional, how long did you think about that one?).... given that the error would occur on a refresh, not a load -- well, it all goes back to "give me an example"
MView versus permanent table
john, June 03, 2004 - 8:08 am UTC
Dear Tom
Please provide a few words on
" If wehave MVIew and table ( e.g table xyz based base table ABC and query is same as mview)with same query then whic one will work better"
Thanx
June 03, 2004 - 8:29 am UTC
a few words...
I don't understand the text in quotes. It is not a complete sentence.
if we have a materialized view and a table (ok).
the table is XYZ (ok).
based base table ABC (not ok, what does a 'base table ABC' have to do with anything? where did it come from)
and query is same as mview (not ok, what mview, on what?)
if you have a table that contains the same data as a materialized view -- what you have are two identical tables as a materialized view is a table under the covers. They would be "the same"
Mview versus table
friend, June 03, 2004 - 8:44 am UTC
Dear Tom, I get John's words.
I think he asked:
1. A table ABC (built from the same query as the mview query)
2. An mview
XYZ, I think, is the table on which he is creating the mview, and from which ABC is created.
Now he is firing queries on the table and the mview.
Which will work faster????
Please provide an update and answer ;)
June 03, 2004 - 1:16 pm UTC
I'm not willing to be so obliging as to "guess". John would need to clarify really.
your point 1) still doesn't make logical sense to me.
A mview is a table.
If a mview contains the same data as some other table, then what you have are two tables with the same data -- hence neither one would work faster than the other.
mview versus table
friend, June 03, 2004 - 8:55 am UTC
Sorry for writing again.
1. Suppose ABC is a table created with CTAS from XYZ.
2. Mview MV1 is created on XYZ with the same query used in the above statement.
Then which will work better with respect to query performance?
The mview is fast refresh...
but we can afford to load batch data into ABC also.
What would you recommend?
Regards
June 03, 2004 - 1:17 pm UTC
they are the *same*.
they are both tables.
with the same data.
friend, June 04, 2004 - 1:36 am UTC
SQL> create materialized view mv1 as select * from emp;
Materialized view created.
SQL> CREATE TABLE PT1 AS SELECT * FROM EMP;
Table created.
SQL> select table_name from user_tables where table_name IN ('MV1','PT1');
TABLE_NAME
------------------------------
MV1
PT1
SQL>
SQL> set lines 300
SQL> set pages 300
SQL> select * from user_tables where table_name IN ('MV1','PT1');
TABLE_NAME TABLESPACE_NAME CLUSTER_NAME IOT_NAME PCT_FREE PCT_USED INI_TRANS MAX_TRANS INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE FREELISTS FREELIST_GROUPS LOG B NUM_ROWS BLOCKS EMPTY_BLOCKS
------------------------------ ------------------------------ ------------------------------ ------------------------------ ---------- ---------- ---------- ---------- -------------- ----------- ----------- ----------- ------------ ---------- --------------- --- - ---------- ---------- ------------
AVG_SPACE CHAIN_CNT AVG_ROW_LEN AVG_SPACE_FREELIST_BLOCKS NUM_FREELIST_BLOCKS DEGREE INSTANCES CACHE TABLE_LO SAMPLE_SIZE LAST_ANAL PAR IOT_TYPE T S NES BUFFER_ ROW_MOVE GLO USE DURATION SKIP_COR MON CLUSTER_OWNER DEPENDEN
---------- ---------- ----------- ------------------------- ------------------- ---------- ---------- ----- -------- ----------- --------- --- ------------ - - --- ------- -------- --- --- --------------- -------- --- ------------------------------ --------
MV1 USERS 10 1 255 65536 1 2147483645 YES N
1 1 N ENABLED NO N N NO DEFAULT DISABLED NO NO DISABLED NO DISABLED
PT1 USERS 10 1 255 65536 1 2147483645 YES N
1 1 N ENABLED NO N N NO DEFAULT DISABLED NO NO DISABLED NO DISABLED
I understand this.
OK, it means that if we can afford batch loading then we can go for the permanent table, i.e., PT1,
as query performance will be almost the same on PT1 as on MV1.
Please confirm.
June 04, 2004 - 7:41 am UTC
they are *the same*.
they are *both* tables.
the major difference is that
"select * from emp" could be rewritten by the server as select * from mv1 IF the server thought that was "cheaper" (it won't in this case -- as emp, mv1, pt1 are all equivalent)
Thanx Sir
friend, June 04, 2004 - 7:46 am UTC
dbms_mview.refresh gives errors
Varun, July 15, 2004 - 8:22 am UTC
I created an MV from a view that is based on two different tables. The MV was created and its status is valid, but when I use dbms_mview.refresh('VIEW_NAME','C') it gives the error: table or view does not exist. Any clue what could be going on?
July 15, 2004 - 12:42 pm UTC
need an example.
mview
singa, July 17, 2004 - 8:52 am UTC
Hi Tom,
I created 2 mviews without any MV logs,
as the 2 queries were too complex, with lots of joins.
Now what happens?
I need inputs on the creation of logs; I mean, should I create logs for all the tables involved in the queries?
With no logs, is no refresh possible at all?
July 17, 2004 - 2:47 pm UTC
you need to read the data warehousing guide to get the basics down:
https://docs.oracle.com#index-DAT
you need no mv logs to refresh, they are optional -- with them we might be able to do an incremental refresh, without them -- a complete refresh is called for.
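For completeness, a minimal sketch of creating such a log (table name T is a placeholder); with the log in place, dbms_mview.refresh( 'MV', 'F' ) can apply just the changes instead of rebuilding:

```sql
-- default: log keyed by primary key
create materialized view log on t;

-- or, for join mviews that need rowids recorded:
create materialized view log on t with rowid;
```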
fast refresh materialized view
Linda Ding, August 02, 2004 - 1:55 pm UTC
1. I am trying to create a fast refreshable materialized view to precalcualte cost joins (Join View) for my data warehousing project.
MLOG was created for each under line tables using the same format as below:
drop materialized view log on DW_MONTHLY_CAL_DIM;
create materialized view log on DW_MONTHLY_CAL_DIM
with rowid(MONTHLY_CAL_KEY,FISCAL_YEAR,FISCAL_MONTH,LM_QUARTER,GOV_YEAR,GOV_QUARTER,MONTH_DESCR)
including new values;
....
This materialized view has 1 fact table joined with 6 dimension tables; the MV definition is as below:
CREATE MATERIALIZED VIEW MV_GL_CUBE_2003_FAST
TABLESPACE GL_MV_D01
NOLOGGING
PARALLEL BUILD IMMEDIATE
REFRESH FAST
ON DEMAND
AS
--select /*+ FIRST ROWS */
select
DW_MONTHLY_CAL_DIM.rowid,
DW_COMPANY_DIM.rowid,
DW_NATURAL_ACCT_DIM.rowid,
DW_FINANCIAL_FACT.rowid,
DW_FY_DIM.rowid,
DW_WBS_DIM.rowid,
DW_CONTRACT_DIM.rowid,
(DW_MONTHLY_CAL_DIM.FISCAL_YEAR*10000+DW_MONTHLY_CAL_DIM.FISCAL_MONTH*100)+1 Period,
DW_COMPANY_DIM.FQA_LVL10_COMPANY_ID FQA_LVL10_COMPANY_ID,
DW_NATURAL_ACCT_DIM.FQA_LVL10_NAT_ACCT_ID FQA_LVL10_NAT_ACCT_ID,
DW_FINANCIAL_FACT.SEGMENT_TYPE_KEY SEGMENT_TYPE_KEY,
DW_MONTHLY_CAL_DIM.GOV_YEAR GOV_YEAR,
DW_MONTHLY_CAL_DIM
.......
from DW_MONTHLY_CAL_DIM DW_MONTHLY_CAL_DIM,
DW_COMPANY_DIM DW_COMPANY_DIM,
DW_NATURAL_ACCT_DIM DW_NATURAL_ACCT_DIM,
DW_FINANCIAL_FACT DW_FINANCIAL_FACT,
DW_FY_DIM DW_FY_DIM,
DW_WBS_DIM DW_WBS_DIM,
DW_CONTRACT_DIM DW_CONTRACT_DIM
where DW_FINANCIAL_FACT.POSTED_GL_YEAR>=2003
and DW_MONTHLY_CAL_DIM.MONTHLY_CAL_KEY=DW_FINANCIAL_FACT.MONTHLY_CAL_KEY
and DW_COMPANY_DIM.COMPANY_KEY=DW_FINANCIAL_FACT.COMPANY_KEY
and DW_WBS_DIM.WBS_KEY=DW_FINANCIAL_FACT.WBS_KEY
and DW_FY_DIM.FISCAL_YEAR_KEY=DW_FINANCIAL_FACT.FISCAL_YEAR_KEY
and DW_CONTRACT_DIM.CONTRACT_KEY=DW_FINANCIAL_FACT.CONTRACT_KEY
and DW_NATURAL_ACCT_DIM.NATURAL_ACCT_KEY=DW_FINANCIAL_FACT.NATURAL_ACCT_KEY
During creation of the MV, I got the following error:
SQL> start Test_Fast.sql
66 /
and "DW_WBS_DIM"."WBS_KEY"="DW_FINANCIAL_FACT"."WBS_KY"
*
ERROR at line 62:
ORA-00904: "ROWID": invalid identifier
I don't know why it complains about ROWID.
I looked at the Data Warehousing Guide; it says:
A materialized view containing only joins can be defined to be refreshed ON COMMIT or ON DEMAND. If it is ON COMMIT, the refresh is performed at commit time of the transaction that does DML on the materialized view's detail table. Oracle does not allow self-joins in materialized join views.
If you specify REFRESH FAST, Oracle performs further verification of the query definition to ensure that fast refresh can be performed if any of the detail tables change. These additional checks are:
• A materialized view log must be present for each detail table.
• The rowids of all the detail tables must appear in the SELECT list of the materialized view query definition.
• If there are no outer joins, you may have arbitrary selections and joins in the WHERE clause. However, if there are outer joins, the WHERE clause cannot have any selections. Further, if there are outer joins, all the joins must be connected by ANDs and must use the equality (=) operator.
• If there are outer joins, unique constraints must exist on the join columns of the inner table. For example, if you are joining the fact table and a dimension table and the join is an outer join with the fact table being the outer table, there must exist unique constraints on the join columns of the dimension table.
I don't have an outer join in the MV definition at all; could you give me a hint?
2. I created another MV from the same underlying query using the complete refresh method. The output of the following query is:
select owner, name, refresh_method from dba_snapshots
DW_OWNER MV_GL_CUBE_2003 JOIN VIEW
This indicates my materialized view contains joins only, right? I mean, it is not an aggregate MV or a complex MV, right?
Thanks a lot for your help.
August 02, 2004 - 2:48 pm UTC
can you provide a scaled-back, yet 100% complete example -- there is lots of cutting and pasting going on there and I'm not sure what I'm looking at.
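The fast-refresh rules quoted above can be illustrated with a minimal sketch (table and column names hypothetical). Note in particular that when several tables' rowids appear in the select list, each needs its own distinct column alias -- a frequent cause of ORA-00904 on "ROWID" in exactly this situation:

```sql
create materialized view log on fact with rowid;
create materialized view log on dim  with rowid;

create materialized view mv_fact_dim
refresh fast on demand
as
select f.rowid f_rid,          -- each detail table's rowid,
       d.rowid d_rid,          -- aliased uniquely
       f.amount, d.descr
  from fact f, dim d
 where f.dim_key = d.dim_key;  -- inner equijoin only
```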
PCT
reader, August 03, 2004 - 7:38 pm UTC
I was trying to understand Partition Change Tracking. My understanding is that if you have an MV created on a table that is partitioned, and you perform DDL (truncate, drop) on base table partitions, the corresponding rows in the MV will be deleted the next time the MV is refreshed. Is this correct? Thanks.
August 03, 2004 - 8:32 pm UTC
pct does two things
a) limits the rows that go "stale" in the mv. if only rows in a partition ranged on a date between 1-jan-2000 and 31-dec-2000 were modified -- a query against other date ranges on the MV are not "stale". so it can allow for an mv to be used even if part of it is stale.
b) allows for truncate/drops to happen against the partition and still permit fast refreshes.
so it is about "stale/fresh" and "fast refresh"
10g MV refresh
reader, August 04, 2004 - 11:29 am UTC
From,
http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10736/advmv.htm#sthref576
<quote>When a materialized view is partitioned on the partitioning key column or join dependent expressions of the detail table, it is more efficient to use a TRUNCATE PARTITION statement to remove one or more partitions of the materialized view during refresh and then repopulate the partition with new data.</quote>
The above statement confuses me! Suppose the refresh process just needs to insert or update or delete just a couple of rows in the corresponding MV partition, does oracle do truncate of the MV partition rather than doing DMLs? Truncate is ok if one has truncated the entire detail table partition and the next MV refresh would truncate the corresponding MV partition. Any thoughts? Thanks.
August 04, 2004 - 1:09 pm UTC
i believe they are talking about a complete refresh here -- rather than rebuild the entire MV, just a single partition of it need be rebuilt.
Materialized view log
A reader, August 15, 2004 - 4:51 am UTC
Hi Tom,
I need to create more than two materialized views on the same table with different results. The columns accessed in the base table differ between the MVs. How do I achieve this -- use all the columns in one materialized view log, or create more than one materialized view log?
Thanks
August 15, 2004 - 9:28 am UTC
No you don't -- you need a single log that merges the needs of both.
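A sketch of such a merged log, assuming the two mviews between them reference columns C1, C2 and C3 of table T (names hypothetical; a table can have only one MV log):

```sql
-- one log per table: list the union of the columns either mview needs
create materialized view log on t
   with rowid ( c1, c2, c3 )
   including new values;
```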
Thanks a lot!
A reader, August 15, 2004 - 4:27 pm UTC
How can I create a materialized view through a Procedure ??
raman, October 28, 2004 - 6:01 pm UTC
Hello Tom,
I have a procedure which accepts a few parameters to create a base table and a materialized view. When I execute the procedure, the base table gets created, but the MV does not get created.
Here is the code:
CREATE OR REPLACE PROCEDURE "GISTEAM"."CREATE_MV" (mv_table varchar2, sourcetable varchar2 )
as
Begin
EXECUTE IMMEDIATE ('DROP MATERIALIZED VIEW '||mv_table);
EXECUTE IMMEDIATE ('DROP TABLE '||mv_table||' CASCADE CONSTRAINTS');
EXECUTE IMMEDIATE ('CREATE TABLE '||mv_table||' AS SELECT *FROM TABLE@ANOTHER_DATABASE');
EXECUTE IMMEDIATE ('CREATE MATERIALIZED VIEW TRM_RP_GEOMETRICS_MV
ON PREBUILT TABLE
REFRESH FAST
ON DEMAND
WITH PRIMARY KEY
FOR UPDATE
AS
SELECT * FROM TRIMS_UP.'||sourcetable||'@ANOTHER_DATABASE');
End;
Instead of '*', if I list the column names the MV gets created. But I don't want to do that...
Please advise me: how can I create a generic procedure to create an MV?
regards,
-raman
October 28, 2004 - 7:58 pm UTC
care to share what happens -- beyond "it does not work"
How can I create a materialized view through a Procedure ??
raman, October 28, 2004 - 6:07 pm UTC
I forgot to include the error I was getting :
ORA-12060: shape of prebuilt table does not match definition query
-raman
October 28, 2004 - 8:00 pm UTC
how about what does the other table look like.
raman, October 29, 2004 - 1:12 pm UTC
The table looks like this, it's a spatial table:
SQL> DESC GEOMETRICS@PGIS
Name Null? Type
----------------------------------------- -------- ----------------------------
NBR_TENN_CNTY NOT NULL NUMBER(2)
NBR_RTE NOT NULL VARCHAR2(5)
SPCL_CSE NOT NULL VARCHAR2(1)
CNTY_SEQ NOT NULL NUMBER(2)
RD_BEG_LOG_MLE NOT NULL NUMBER(5,3)
RD_END_LOG_MLE NOT NULL NUMBER(5,3)
ROW_RGHT NUMBER(4)
ROW_LFT NUMBER(3)
ACCESS_CTRL VARCHAR2(1)
DRCT_ONE_WAY NUMBER(1)
TY_TERRAIN VARCHAR2(1)
LAND_USE VARCHAR2(1)
ILLUM VARCHAR2(1)
SPD_LMT NUMBER(3)
SCHL_ZNE VARCHAR2(1)
SPD_LMT_SCHL NUMBER(2)
NBR_LANES NUMBER(2)
UPDT_BY NOT NULL VARCHAR2(20)
UPDT_ON NOT NULL DATE
MSLINK NOT NULL NUMBER(10)
MAPID NOT NULL NUMBER(10)
ID_NUMBER VARCHAR2(10)
THRU_LANES NUMBER(2)
GEOMETRY MDSYS.SDO_GEOMETRY
OBJECTID NOT NULL NUMBER(38)
regards,
-raman
October 29, 2004 - 4:57 pm UTC
and if you do a CTAS (create table as select * from it) -- does the resulting table match the types perfectly?
raman, October 29, 2004 - 2:02 pm UTC
Sorry for the typos:
CREATE OR REPLACE PROCEDURE "GISTEAM"."CREATE_MV" (mv_table varchar2,
sourcetable varchar2 )
as
Begin
EXECUTE IMMEDIATE ('DROP MATERIALIZED VIEW '||mv_table);
EXECUTE IMMEDIATE ('DROP TABLE '||mv_table||' CASCADE CONSTRAINTS');
EXECUTE IMMEDIATE ('CREATE TABLE '||mv_table||' AS SELECT *FROM TRIMS_UP.'||
sourcetable||'@ANOTHER_DATABASE');
EXECUTE IMMEDIATE ('CREATE MATERIALIZED VIEW TRM_RP_GEOMETRICS_MV
ON PREBUILT TABLE
REFRESH FAST
ON DEMAND
WITH PRIMARY KEY
FOR UPDATE
AS
SELECT * FROM
TRIMS_UP.'||sourcetable||'@ANOTHER_DATABASE');
End;
regards,
-raman
raman, November 01, 2004 - 5:30 pm UTC
Yes, Tom
It exactly matches datatypes, columns, etc.
raman, November 01, 2004 - 5:59 pm UTC
Tom,
There was a primary key missing in my script, which I hadn't noticed...
Now it's working...
regards,
-raman
big sum mview
markus, November 03, 2004 - 11:45 am UTC
hi tom,
we need to build a summary table on top of a range-partitioned table (by year and month) which contains more than 500 million rows (around 20 million per range).
i think we don't want to use only one partitioned mview, because a complete refresh -- if needed for any reason -- would probably take forever (4-processor Sun V880). afaik, there is no way to complete-refresh just a subset of the mview partitions.
what would you suggest in that case? building multiple mviews by another range and putting a simple union-all view on top of them?
thanks,
markus
November 04, 2004 - 1:36 am UTC
why wouldn't you use a partitioned mview (500 million records -- taking years, I think you are "overestimating" slightly)
and if you needed to "completely refresh" the mview, you would have the need to completely refresh the union all views you created as well and they would not be "any faster"
you always have the choice when faced with the complete refresh to:
a) create the summary using any technique you desire -- make it as fast as you want
b) drop the existing mview
c) create the mview on the prebuilt table
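Those steps might look like this (a sketch; names are placeholders, and the mview name must match the prebuilt table's name):

```sql
-- drop the existing mview first, freeing the name
drop materialized view sales_summary;

-- a) build the summary any way you like -- parallel, nologging, etc.
create table sales_summary nologging parallel 4
as
select product_id, sum(amount) total_amt
  from sales
 group by product_id;

-- c) register the already-populated table as the mview
create materialized view sales_summary
on prebuilt table
as
select product_id, sum(amount) total_amt
  from sales
 group by product_id;
```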
To Markus
A reader, November 04, 2004 - 7:28 am UTC
On our DW we have partitioned MVs of around this size.
Firstly "complete reresh" is COMPLETE refresh the whole view is replaced - it sounds like what you want to do is to replace an individual partition.
I would go with:
Build a partitioned summary table (I would also align the partitioning with the parent fact table) and populate using PL/SQL or whatever. Finally, create the MV on this pre-built table.
To completely refresh a partition you will need to write your own code to truncate the partition and insert the replacement data. This will need to be followed by a "consider fresh" on the MV.
WARNINGS -
You will need to do a CONSIDER FRESH on the MV after loading it for the first time or you will not be able to modify any partitions.
If you intend to use FAST refresh on the view to perform routine updates you must do the fast refresh before you do the partition replacement as CONSIDER FRESH will remove all pending changes from the logs.
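The partition-replacement cycle described above might be sketched as follows (table, partition and column names hypothetical):

```sql
-- replace one partition's data outside the refresh mechanism
alter table sales_summary truncate partition p_2004_01;

insert /*+ append */ into sales_summary partition ( p_2004_01 )
select month_key, product_id, sum(amount)
  from sales
 where month_key = 200401
 group by month_key, product_id;
commit;

-- tell Oracle the mview again reflects its detail tables
alter materialized view sales_summary consider fresh;
```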
good idea.
markus, November 05, 2004 - 11:45 am UTC
hi,
an mview on a prebuilt partitioned table + truncate/refresh partition + "alter materialized view ... consider fresh" is an interesting idea. i will try this.
regards,
markus
question
arundhuti, February 03, 2005 - 3:36 am UTC
Where can I find the redo log files? What is the path? Can you give me the syntax for creating log files for a materialized view which uses complete refresh?
February 03, 2005 - 1:31 pm UTC
i think you have concepts mixed up.
redo logs are not materialized view logs.
Mview
Arundhuti, February 07, 2005 - 12:42 am UTC
Yes, I mistakenly wrote that. But can you tell me: with a complete refresh, is there any possibility of creating log files? If yes, where can I create them?
regards
Arundhuti
February 07, 2005 - 4:34 am UTC
creating what kind of log files?
Refresh complete every 2 hours for materialized views
sonali, February 07, 2005 - 10:39 am UTC
We use these materialized views for online reports and cubes, so they need to be refreshed every 2 hours. We have about 15 to 20 MVs. Some of them have lots of data (millions of rows); some are very small.
When we refresh them all in a stored procedure (via a scheduled job) with moderate data, it takes anywhere between 7 and 10 minutes.
Here is a small sample stored procedure
CREATE OR REPLACE Procedure pr_GetActualDL_refresh
IS
strStale varchar2(30);
BEGIN
select STALENESS into strStale from user_mviews where MVIEW_NAME ='MV_MWEBWORK';
If strStale <> 'FRESH' then
dbms_mview.refresh('MV_MWEBWORK','C');
end if;
select STALENESS into strStale from user_mviews where MVIEW_NAME ='MV_PLANNEDHRS';
If strStale <> 'FRESH' then
dbms_mview.refresh('MV_PLANNEDHRS','C');
end if;
COMMIT;
End pr_GetActualDL_refresh;
/
'MV_PLANNEDHRS' is based on 'MV_MWEBWORK', whereas 'MV_MWEBWORK' only uses actual tables.
They all are created -
CREATE MATERIALIZED VIEW mv_mwebwork
BUILD IMMEDIATE
REFRESH complete
WITH ROWID
DISABLE QUERY REWRITE
AS
Using the above syntax... I could not use fast refresh on any of those because of complex joins.
Is there any other better way to design this so that all views do not need to get refreshed every 2 hours ?? Especially if there is no data change.
Thanks
Sonali
February 07, 2005 - 11:06 am UTC
you are already doing what you asked -- to only refresh them when stale? did I miss something?
Sorry did not phrase my question correctly
sonali, February 07, 2005 - 1:48 pm UTC
I used this check for staleness to reduce the number of views getting complete refreshed, but it still takes 10 minutes to refresh moderate data. Some people here have come back to me saying that these MVs are not good as far as performance goes, and that we should just write a stored procedure and create and update database tables instead, since these MVs are all COMPLETE refresh and it does the truncate of the MV and insert /*+ APPEND */ anyway.
I feel that MVs are better and I want to make them faster if I can. I have tuned all the queries.
I read that there are advantages in partitioning the MV's or using PREBUILT tables. Will it help my situation ?
If so what are the advantages of using ON PREBUILT TABLE option and when should one use it ?
Thanks
February 07, 2005 - 4:11 pm UTC
partitioning and prebuilt won't do anything for speed of refresh in this case.
prebuilt is good when you have the data already. You don't have the data.
it doesn't have to be truncate/append, and can be delete+insert. You just need to build refresh groups
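A sketch of that, using the two MV names from the procedure above (the group name is illustrative). A refresh group refreshes its members atomically, as delete + insert in a single transaction, so the MVs stay consistent with each other:

```sql
-- Create a refresh group containing both MVs; members are
-- refreshed together, atomically (delete + insert).
BEGIN
   DBMS_REFRESH.MAKE(
      name      => 'rpt_group',
      list      => 'MV_MWEBWORK, MV_PLANNEDHRS',
      next_date => SYSDATE,
      interval  => 'SYSDATE + 2/24');  -- every two hours
END;
/

-- Or refresh the whole group on demand:
EXEC DBMS_REFRESH.REFRESH('rpt_group');
```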
Is this expected behavior?
Rich, March 16, 2005 - 3:50 pm UTC
Hi Tom,
I was testing the ideas you pointed to me at the very beginning of that question.
1- Disable the indexes
2- Call refresh
3- Rebuild the indexes with nologging.
Here's the result:
SQL> create materialized view test_mv
2 as select deptno, count(*) cnt from scott.emp
3 where deptno is not null
4 group by deptno;
Materialized view created.
SQL> select index_name from user_indexes where table_name = 'TEST_MV';
INDEX_NAME
------------------------------
I_SNAP$_TEST_MV
SQL> alter index I_SNAP$_TEST_MV disable;
Index altered.
SQL> exec dbms_mview.refresh( 'TEST_MV', 'C' );
BEGIN dbms_mview.refresh( 'TEST_MV', 'C' ); END;
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-30554: function-based index SCOTT.I_SNAP$_TEST_MV is disabled
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 803
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 860
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 841
ORA-06512: at line 1
Is this normal behavior? Is there any way to work around it? We need to refresh all our materialized views while generating as few archive files as possible. The materialized views are in NOLOGGING mode.
Rich
March 17, 2005 - 8:00 am UTC
well, a complete refresh is going to be "truncate+insert /*+ APPEND */". it'll generate the least amount of redo.
You'll find that index to be a unique index; it cannot be "skipped"
When is an index created?
Rich, March 21, 2005 - 8:03 am UTC
Hi Tom,
Can you explain the logic behind index creation for materialized views, i.e. when does Oracle create one on a materialized view?
Rich
SQL> create materialized view test_mv
2 as select deptno, count(*) cnt from scott.emp
3 where deptno is not null
4 group by deptno;
Materialized view created.
SQL> select index_name from user_indexes where table_name = 'TEST_MV';
INDEX_NAME
------------------------------
I_SNAP$_TEST_MV
March 21, 2005 - 10:30 am UTC
upon creation that one is created.
I think I have the same issue as "rich"
Mike, April 07, 2005 - 12:29 pm UTC
Hi Tom,
We are using MV's with Oracle streams in a Mobile computing environment (9ir2). The capture process needs to read all redo generated in order to see if it needs to "capture" those changes. The problem is that when we refresh our MV's we are generating around 200 - 300 MB of redo, now I know that isn't a lot in most circumstances, but a notebook computer, single disk, single CPU takes over an hour to trawl through that redo.
We are of the opinion that, as the issue is caused by the redo generated, and that redo does not need to be "captured" (we tag it so that it is not), if we could avoid generating as much redo as possible during the refresh of the MVs, then this would have a positive knock-on effect on the Streams capture process.
So, I tried some of the postings here, the MV is built "with rowid" and I get a "I_SNAP$_" index on the MV, however when I try to disable it I get a
SQL> alter index I_SNAP$_TABLE_A disable;
alter index I_SNAP$_TABLE_A disable
*
ERROR at line 1:
ORA-02243: invalid ALTER INDEX or ALTER MATERIALIZED VIEW option
which differs from the prior posters. Is there a way around this? Perhaps more to the point, can we really reduce the amount of redo generated by a refresh to a piffling amount?
Thanks as always,
Mike.
April 07, 2005 - 12:44 pm UTC
Even though I cannot reproduce:
ops$tkyte@ORA9IR2> create table t as select * from scott.emp;
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view mv
2 as
3 select job, count(*) cnt from t group by job;
Materialized view created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter index i_snap$_mv disable;
Index altered.
it doesn't really matter as the refresh would *fail*.
a FULL refresh is the only way I know to reduce it as low as it goes. That should be done with:
a) truncate (no redo)
b) insert /*+ append */ (with nologging, minimal redo)
and if you do a "do it yourself full refresh", you can get it very near zero.
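A sketch of such a do-it-yourself full refresh, assuming the MV sits over a NOLOGGING prebuilt table (names follow the example above):

```sql
-- "Full refresh" by hand with minimal redo:
TRUNCATE TABLE mv;                          -- near-zero redo
INSERT /*+ APPEND */ INTO mv                -- direct path; with NOLOGGING,
SELECT job, COUNT(*) cnt                    -- minimal redo
FROM   t
GROUP  BY job;
COMMIT;
ALTER MATERIALIZED VIEW mv CONSIDER FRESH;  -- make it usable again
```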
Materialized view approach
Bob, April 12, 2005 - 5:37 am UTC
Dear Tom,
I have a design question on the best approach to a problem.
We have a database that is OLTP and used by more than one location.
Each location performs a batch process at the end of the working day - the locations are in different time zones so the batch processes can start and end at different times.
Following each batch process, a set of complex reports are run.
Currently the reports use a set of complex views:
VIEW_A uses TAB_1, TAB_2
VIEW_B uses TAB_3, TAB_4, VIEW_A (performs unions)
VIEW_C uses TAB_5, TAB_6
VIEW_D uses TAB_7, TAB_8, TAB_9 (performs unions)
VIEW_E uses VIEW_B, VIEW_C, TAB_10
VIEW_F uses VIEW_D, VIEW_E, TAB_11
The reports are currently taking too long to complete.
A solution has been proposed to use materialized views with REFRESH ON DEMAND. Create materialized views of all the views in question. At the end of each batch process, refresh the materialized views, run the reports.
However, what would be the impact of say:
Location 1 finishes its batch. Refreshes the materialized views. Starts the report process.
Location 2 finishes before location 1 finishes the its reports. Location 2 refreshes the materialized views for its reports...
We've considered creating materialized views for each location and then using dynamic sql for the reports to query the correct materialized view. But this approach seems overly complex, difficult to maintain and not particularly elegant.
Do you have any suggestions on a best approach.
Many thanks,
Bob
April 12, 2005 - 7:27 am UTC
have you tuned the SQL, really looked at it? Verified that you need expensive "UNION" and not the much cheaper "UNION ALL" and so on?
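For illustration, with the SCOTT tables: UNION must sort the combined result to remove duplicates, while UNION ALL just concatenates the two result sets, which is far cheaper when duplicates are impossible or acceptable:

```sql
-- UNION: concatenate, sort, de-duplicate (extra work)
SELECT deptno FROM scott.emp
UNION
SELECT deptno FROM scott.dept;

-- UNION ALL: concatenate only; no sort, no duplicate removal
SELECT deptno FROM scott.emp
UNION ALL
SELECT deptno FROM scott.dept;
```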
Refresh Time
A reader, April 21, 2005 - 10:48 am UTC
Hi Tom,
Suppose I'm creating an MV joining 4-5 tables, each with around 2-3M rows. The MV itself will contain at most about 0.05M rows. The query to create the MV takes around 45 minutes.
I want to refresh this MV on a daily basis, and there are a few hundred rows added to the base tables daily.
So my question is: will this on-demand refresh of the MV take 45 minutes daily, or will it just see the incremental changes made to the tables and thus take less time?
Thanks as always.
April 22, 2005 - 8:53 am UTC
depends on how you created the MV whether it does changes only or FULL refreshes.
Materialized views
Anubhuti Varshney, April 22, 2005 - 4:27 am UTC
ONLY/FULL which is recommended??
A reader, April 25, 2005 - 2:55 am UTC
Tom,
As I described in the requirement, the changes that are expected are only a few hundred rows daily in the base table; which refresh is recommended?
Do I need to do a FULL refresh?
April 25, 2005 - 7:14 am UTC
only you can truly answer this.
You say "a few hundred" in the base table. That sounds "small"; it could mean incremental (fast) refresh.
Then again, there might be only 10 rows in the table, in which case a full refresh would perhaps be recommended.
So, of the rows in this table, how many change and, as importantly, how many do not? (I'm assuming a simple single-table materialized view here, right?)
Got my answer! Thanks !!
A reader, April 25, 2005 - 9:08 am UTC
I was satisfied after reading this review
Maria Mundrova, December 09, 2005 - 4:22 am UTC
I am glad to say that I was satisfied after reading this review. There are really helpful parts in it.
Best Regards,
question, truncate MVs
A reader, May 24, 2006 - 4:31 am UTC
Hi
I was wondering if we can truncate the MV manually?
Let me explain why I would like to do so. Basically I have two situations, DDL at origin tables and rowid changes, I have 10 MVs which use rowid replication. (NO PK)
I have a Refresh Group which replicates (read only) a schema in production database to a query only database. The group contains 140 tables. Since this application is quite fres there are still quite a few development going which means sometimes I have to add columns to the tables I am replciating. Since I am using read only replication DDL are not replicated so I should recreate the MV and MV LOG and readd to the refresh group again so far so good. There is a small problem though, even if I dont recreate the MV with new columns the refresh process doesnt give errors!
With rowid changes I have a problem, if my MVs are ROWID based whenever rowid in the origin changes I must perform a complete refresh.
With both situations same question, when I have to perform complete refresh how can I enforce referential integrity? Since refresh groups cant be used it gives this error:
BEGIN DBMS_REFRESH.REFRESH('COMPANY'); END;
*
ERROR at line 1:
ORA-12057: materialized view "SCOTT"."MV_EMP" is INVALID and must complete refresh
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 803
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 860
ORA-06512: at "SYS.DBMS_IREFRESH", line 683
ORA-06512: at "SYS.DBMS_REFRESH", line 195
ORA-06512: at line 1
So I am forced to perform the refresh using dbms_mview, which does not guarantee referential integrity.
Thanks
Pinguman
May 24, 2006 - 7:14 am UTC
did you try alter materialized view MV refresh complete?
AJB, June 12, 2006 - 10:28 am UTC
I understand how you can take steps to reduce the amount of redo when refreshing materialized views but don't understand the following:
If I refresh using dbms_refresh.refresh(<group>) then 1.5Gb of redo is generated.
If I schedule the refresh so that a job executes dbms_refresh.refresh(<group>), then when I run it via dbms_job.run(<job>) 2.5Gb of redo is generated.
This happens consistently; why does using dbms_job.run generate so much more redo?
June 13, 2006 - 10:01 am UTC
I'd need a way to reproduce, but why don't you simply trace it and see what is possibly different.
ways to decrease refresh timing of a Mat view !!
Prashant, August 16, 2006 - 7:09 am UTC
Hi Tom,
How can we decrease the refresh time of a materialized view? I have one which takes 2 hrs to create and 25 min to fast refresh. If you could explain ways to decrease these timings it would be a great help.
The following script is used to create the materialized view (total records: 5 million):
create materialized view TEST refresh fast on demand with rowid as Select /*+ PARALLEL (TEST, 4) */ * from TEST@usdb;
August 16, 2006 - 8:44 am UTC
refresh more frequently :)
it is purely a function of the amount of change made to the base table.
ways to decrease refresh timing of a Mat view !!
A reader, August 16, 2006 - 9:17 am UTC
:) .. But I want to use parallel degree during refresh, is this option available...
August 16, 2006 - 9:19 am UTC
complete - sure.
slow by slow (the fast refresh should be called incremental) - it is a process.
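A sketch, then; whether the complete refresh actually executes in parallel depends on the release and on the defining query being parallelizable, but the degree can be set on the MV (and/or its base tables) rather than passed to the refresh call:

```sql
-- Give the MV a parallel degree; a complete refresh re-executes
-- the defining query and can pick this up.
ALTER MATERIALIZED VIEW test PARALLEL 4;
EXEC DBMS_MVIEW.REFRESH('TEST', 'C');
```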
ways to decrease refresh timing of a Mat view !!
A reader, August 16, 2006 - 9:25 am UTC
Can you please describe the call below, if possible? Which parameter is the parallel degree?
exec dbms_mview.refresh('TEST','C', '', TRUE, FALSE,4,4,4,FALSE);
August 16, 2006 - 10:19 am UTC
are you asking me to document the parameters to a function call? If so, that is documented already....
I_SNAP$ index
Jay, August 16, 2006 - 9:52 am UTC
I created a table and populated it with the query I used to define a materialized view. Then I created a materialized view defined on this prebuilt table. Afterward I was able to use this new materialized view and refresh it by calling dbms_mview.refresh. However, I didn't see the I_SNAP$ index on this new mview, while it exists on my original, directly created mview. Can you tell me why there's an I_SNAP$ index in one case but not the other, and what the purpose of this index is?
Here are my sample scripts:
jz@o10gr2> create materialized view foo_date as
jz@o10gr2> select month_start_date, count(*) days from date_dim
jz@o10gr2> group by month_start_date;
Materialized view created.
jz@o10gr2> select count(*) from foo_date;
COUNT(*)
----------
312
1 row selected.
jz@o10gr2> select * from foo_date where rownum < 2;
MONTH_START_DA DAYS
-------------- ----------
19981001 00:00 31
1 row selected.
jz@o10gr2> select index_name from ind where table_name = 'FOO_DATE';
INDEX_NAME
------------------------------
I_SNAP$_FOO_DATE
1 row selected.
jz@o10gr2> create table foo_date_t as select month_start_date, count(*) days from date_dim
jz@o10gr2> group by month_start_date;
Table created.
jz@o10gr2> create materialized view foo_date_t on prebuilt table
jz@o10gr2> as
jz@o10gr2> select month_start_date, count(*) days from date_dim
jz@o10gr2> group by month_start_date;
Materialized view created.
jz@o10gr2> select count(*) from foo_date_t;
COUNT(*)
----------
312
1 row selected.
jz@o10gr2> select index_name from ind where table_name = 'FOO_DATE_T';
no rows selected
August 16, 2006 - 10:26 am UTC
if you have us instantiate the data, we set up reasonable indexes for our use.
If you do not, we do not.
Normally YOU would index it (by primary key), we copied the primary key over from your date_dim table when we created the objects.
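In other words, with ON PREBUILT TABLE the indexing is left to you; a sketch using the prebuilt table from the example above:

```sql
-- Oracle creates no I_SNAP$ index on a prebuilt table;
-- add your own (here, a unique index on the logical key).
CREATE UNIQUE INDEX foo_date_t_pk ON foo_date_t (month_start_date);
```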
Re: I_SNAP$ index
Jay, August 16, 2006 - 10:58 am UTC
Thanks Tom. The I_SNAP$_FOO_DATE is on SYS_OP_MAP_NONNULL("MONTH_START_DATE"), which is a logical primary key on foo_date, but month_start_date is not part of the primary key of date_dim (PK column is date_id), so I don't think Oracle copied it from the detail table.
Are I_SNAP$ indexes used in refresh (either fast or complete) at all? As there is no I_SNAP$ index on the mview created on prebuilt table, does Oracle use a different way to refresh the prebuilt mviews (foo_date_t) than with Oracle instantiated mviews (foo_date)?
August 16, 2006 - 11:24 am UTC
trace it if you are really really curious.
but bear in mind, it can, will and probably has changed from release to release.
Efficient mview refresh in 9i?
Andy Bell, August 18, 2006 - 12:23 pm UTC
Tom,
I am trying to find the most efficient way to incrementally refresh a join-only MVIEW based on a schema perhaps best described as an 'amputated snowflake', i.e. there is a hierarchy of base tables with PK and FK constraints, plus some 'master' tables that really should be part of the detail (fact) table. What I am finding is that when a new fact record is inserted, the MVIEW select statement gets called by the refresh many times, once for each base table. Is it possible to write a custom refresh that is only triggered by changes to the fact table, so that the select is only called once? (I run into problems with 'DML not legal', etc.)
I'm not really interested in capturing changes in master (dimension) tables unless the fact table is also changed.
Also, how can I make sure that the MVIEW is always available for query rewrite, even during DML operations in the refresh?
Example follows:
create table detail
nologging
as
select * from all_objects;
alter table detail modify (status varchar2(7) not null);
alter table detail modify (object_type varchar2(18) not null);
create table master_big
nologging
as
select distinct object_name,'0'||object_name as object_name2
from detail;
alter table master_big
add constraint mpk1 primary key(object_name);
create table master_small
nologging
as
select distinct status
from detail;
alter table master_small
add constraint mpk2 primary key(status);
create table master_level1
nologging
as
select distinct object_type,owner
from detail;
alter table master_level1
add constraint mpk3 primary key(object_type,owner);
create table master_level2
nologging
as
select distinct owner
from master_level1;
alter table master_level2
add constraint mpk4 primary key(owner);
alter table detail
add constraint dpk1 primary key(object_id);
alter table detail
add constraint dfk1 foreign key(object_name) references master_big(object_name);
alter table detail
add constraint dfk2 foreign key(status) references master_small(status);
alter table detail
add constraint dfk3 foreign key(object_type,owner) references master_level1(object_type,owner);
alter table master_level1
add constraint mfk1 foreign key(owner) references master_level2(owner);
analyze table detail compute statistics;
analyze table master_big compute statistics;
analyze table master_small compute statistics;
analyze table master_level1 compute statistics;
analyze table master_level2 compute statistics;
create materialized view log on detail
with rowid (object_id,object_name,owner,object_type,status)
including new values;
create materialized view log on master_big
with rowid (object_name,object_name2)
including new values;
create materialized view log on master_small
with rowid (status)
including new values;
create materialized view log on master_level1
with rowid (object_type,owner)
including new values;
create materialized view log on master_level2
with rowid (owner)
including new values;
create materialized view test
nologging
refresh fast with rowid
as
select d.rowid as rowid1,m1.rowid as rowid2,m2.rowid as rowid3,m3.rowid as rowid4,m4.rowid as rowid5,
d.object_name as object_name0,m1.object_name as object_name1,object_name2,
object_id,m2.status as status,m3.object_type,m4.owner as owner
from detail d,master_big m1,master_small m2,master_level1 m3,master_level2 m4
where d.object_name = m1.object_name
and d.status = m2.status
and d.object_type = m3.object_type
and d.owner = m3.owner
and m3.owner = m4.owner;
analyze table test compute statistics;
-- test master_big insert
insert into master_big
values('Test','This is a test');
update detail set object_id = 40001
where object_id = 30001;
commit;
alter session set sql_trace = true;
exec dbms_mview.refresh('test');
-- test master_level2 insert
insert into master_level2
values('Test');
insert into master_level1
values('Test2','Test');
insert into detail
values('Test','Test3',null,40002,null,'Test0',to_date('18-AUG-06'),to_date('18-AUG-06'),null,'Testing',null,null,null);
commit;
alter session set sql_trace = true;
exec dbms_mview.refresh('test');
August 18, 2006 - 4:19 pm UTC
I ran the example, but I'm not really sure what I'm looking for precisely.
But in any case - no, there is no such thing as a "custom refresh". You would have to be using a technology such as streams in place of the materialized view in order to do that (but then it is not a materialized view and wouldn't be subject to being used in a query rewrite)
Have you looked at the query rewrite integrity level for the rewrite issue?
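For reference, the integrity level Tom mentions is a session (or system) parameter; STALE_TOLERATED lets rewrite use an MV even while it is stale:

```sql
-- Allow rewrite, even against stale materialized views.
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = STALE_TOLERATED;
```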
MVIEW rewrite issue
Andy Bell, August 21, 2006 - 12:06 pm UTC
Tom,
Thanks for your prompt reply regarding mview refresh. I am finding an issue with query rewrite in 9iR2, as follows:
During refresh, whether complete or incremental, a query that would normally be rewritten uses the base tables, presumably because rewrite is unavailable. Even when the refresh has finished, the query continues to run against the base tables until I flush the shared pool or restart the database. This is in spite of using 'STALE_TOLERATED' integrity.
I have tested the refresh scenario using 'dbms_mview.refresh', and a background job submitted as follows:
CREATE OR REPLACE PROCEDURE test_mview
IS
v_sql VARCHAR2(400);
BEGIN
v_sql := 'ALTER SESSION SET SQL_TRACE=TRUE';
EXECUTE IMMEDIATE v_sql;
EXECUTE IMMEDIATE 'SELECT /*+ REWRITE */ PCID FROM PROPERTY WHERE PID = 1000';
END;
declare j number;
begin
dbms_job.submit(j,'test_mview;',trunc(sysdate) + 13/24,'sysdate + 1/1440');
commit;
end;
/
If I also trace the refresh session, I find that both incremental and complete refreshes use 'delete ...' followed by 'insert ...'. Can you please explain why query rewrite is 'lost' during the refresh?
(By the way, I'd like to send an example, but I'm struggling a bit with a nice way to generate a suitable DML load for the refresh. Any ideas?)
Refreshing Materialized view (Snap-Clone Technology)
Elahe Faghihi, August 30, 2006 - 12:04 pm UTC
Hi Tom,
In order to get better performance in our CRM data mart, I am going to use a materialized view.
The current ETL process runs at 3:00 am every night on the staging server, and then the production server gets refreshed using a Snap-Clone technology. I am not sure how, using Snap-Clone, the refresh process works. All I know is that it will drop the current schema and tablespace and recreate them based on the staging server.
My question would be: how should I refresh the materialized view on the production server?
As you see, there is more than one master table in the query. Do I need to refresh the materialized view right after refreshing all the master tables on the staging server first? How can I refresh it after refreshing the master tables on the production server?
What are my options to update the data in the materialized view?
I would appreciate your help in advance.
select
fct.COMPANY,
fct.CRM_PERIOD,
fct.CUSTOMER_ID,
fct.POSITION_ID,
fct.Product_code,
fct.PLAN_DETAILS,
fct.SUGGESTED_DETAILS,
fct.YTD_DETAILS,
fct_corp.segment_code,
fct_corp.segment_type,
fct_corp.sub_segment,
fct_corp.period_date,
fct_corp.Co_Promote_Indicator,
fct_corp.BI_Partner_Indicator,
---
fct_corp.BI_Target_Indicator,
---
fct_corp.segment_brand,
null as employee_id,
null as no_of_details
from
dw_crm_fact_cust_det_summary fct,
(select fct_corp.customer_id,
fct_corp.crm_period,
fct_corp.product_code,
fct_corp.DIVISION_ID,
(CASE WHEN (fct_corp.BICL_TRGT = 'Y' or fct_corp.PARTNER1_TRGT <> 4 ) then ( 'Co-Promote' ) else ( 'Non-Target' ) END) Co_Promote_Indicator,
(CASE WHEN (fct_corp.BICL_TRGT = 'N' and fct_corp.PARTNER1_TRGT = 4 ) then ( 'Non-Target' ) else (CASE WHEN (fct_corp.BICL_TRGT = 'Y' and fct_corp.PARTNER1_TRGT <> 4 ) then ( 'BI + Partner Target' ) else (CASE WHEN (fct_corp.BICL_TRGT = 'Y' ) then ( 'BI Only Target' ) else ( 'Partner Only Target' ) END) END) END) BI_Partner_Indicator,
----
fct_corp.BICL_TRGT BI_Target_Indicator,
----
dte.monthly_date period_date,
seg.segment_code,
seg.segment_type,
seg.sub_segment,
seg.SIEBEL_PRODUCT_CODE segment_brand,
fct_corp.no_of_details corporate_details
from
dw_crm_dim_monthly dte,
dw_crm_dim_segmentation seg,
dw_crm_fact_plan_details fct_corp
where
fct_corp.segment_id = seg.segment_id
and fct_corp.crm_period = dte.crm_period
and fct_corp.PLANNED_ACTIVITY_TYPE = 'Corporate'
and fct_corp.record_status = 'Y'
and fct_corp.division_id = 1
--and fct_corp.CRM_PERIOD = '2006'
) fct_corp
where
fct.customer_id = fct_corp.customer_id (+)
and fct.CRM_PERIOD = fct_corp.crm_period (+)
and fct.product_code = fct_corp.product_code (+)
and fct.position_id is null
--and fct.crm_period = '2006'
union
--DETAIL level facts
select
fct.COMPANY,
fct.CRM_PERIOD,
fct.CUSTOMER_ID,
fct.POSITION_ID,
fct.Product_code,
fct.PLAN_DETAILS,
fct.SUGGESTED_DETAILS,
fct.YTD_DETAILS,
fct_det.segment_code,
fct_det.segment_type,
fct_det.sub_segment,
fct_det.period_date,
fct_det.Co_Promote_Indicator,
fct_det.BI_Partner_Indicator,
---
fct_det.BI_Target_Indicator,
---
fct_det.segment_brand,
fct_det.employee_id,
fct_det.no_of_details
from
dw_crm_fact_cust_det_summary fct,
(select fct_det.crm_period,
fct_det.position_id,
fct_det.customer_id,
fct_det.product_code,
fct_det.company,
(CASE WHEN (fct_det.NO_OF_CORP_DETAILS = 0 and fct_det.NO_OF_PARTNER_DETAILS = 0 ) then ( 'Non-Target' ) else ( 'Co-Promote' ) end) Co_Promote_Indicator,
(CASE WHEN (fct_det.NO_OF_CORP_DETAILS = 0 and fct_det.NO_OF_PARTNER_DETAILS = 0 ) then ( 'Non-Target' ) else ( case when (fct_det.NO_OF_CORP_DETAILS = 1 and not (fct_det.NO_OF_PARTNER_DETAILS = 0 )) then ( 'BI + Partner Target' ) else (case when ( not (fct_det.NO_OF_PARTNER_DETAILS = 0 ) ) then ( 'Partner Only Target' ) else ( 'BI Only Target' ) end) end) end) BI_Partner_Indicator,
-----
(CASE WHEN (fct_det.NO_OF_CORP_DETAILS = 1) THEN ('Y') ELSE ('N') END) BI_Target_Indicator,
-----
emp.employee_id,
emp.employee_code,
seg.segment_code,
seg.segment_type,
seg.sub_segment,
seg.SIEBEL_PRODUCT_CODE segment_brand,
fct_det.period_date,
fct_det.no_of_details
from
dw_crm_fact_details fct_det,
dw_crm_dim_employee emp,
dw_crm_dim_segmentation seg
where
fct_det.employee_id = emp.employee_id
and fct_det.segment_id = seg.segment_id
--and fct_det.crm_period = '2006'
and activity_type = 'Professional Call'
and activity_detail_type = 'Detail'
and fct_det.division_id = 1
) fct_det
where
fct.customer_id = fct_det.customer_id (+)
and fct.CRM_PERIOD = fct_det.crm_period (+)
and fct.position_id = fct_det.position_id (+)
and fct.PRODUCT_CODE = fct_det.product_code (+)
and fct.company = fct_det.company (+)
--and fct.crm_period = '2006'
and fct.ytd_details > 0
union
--POSITION level facts
select
fct.COMPANY,
fct.CRM_PERIOD,
fct.CUSTOMER_ID,
fct.POSITION_ID,
fct.Product_code,
fct.PLAN_DETAILS,
fct.SUGGESTED_DETAILS,
fct.YTD_DETAILS,
fct_plan.segment_code,
fct_plan.segment_type,
fct_plan.sub_segment,
fct_plan.period_date,
fct_plan.Co_Promote_Indicator,
fct_plan.BI_Partner_Indicator,
---
fct_plan.BI_Target_Indicator,
---
fct_plan.segment_brand,
null as employee_id,
null as no_of_details
from
dw_crm_fact_cust_det_summary fct,
(select fct_plan.customer_id,
fct_plan.crm_period,
fct_plan.product_code,
fct_plan.position_id,
fct_plan.DIVISION_ID,
(CASE WHEN (fct_plan.BICL_TRGT = 'Y' or fct_plan.PARTNER1_TRGT <> 4 ) then ( 'Co-Promote' ) else ( 'Non-Target' ) END) Co_Promote_Indicator,
(CASE WHEN (fct_plan.BICL_TRGT = 'N' and fct_plan.PARTNER1_TRGT = 4 ) then ( 'Non-Target' ) else (CASE WHEN (fct_plan.BICL_TRGT = 'Y' and fct_plan.PARTNER1_TRGT <> 4 ) then ( 'BI + Partner Target' ) else (CASE WHEN (fct_plan.BICL_TRGT = 'Y' ) then ( 'BI Only Target' ) else ( 'Partner Only Target' ) END) END) END) BI_Partner_Indicator,
---
fct_plan.BICL_TRGT BI_Target_Indicator,
---
dte.monthly_date period_date,
seg.segment_code,
seg.segment_type,
seg.sub_segment,
seg.SIEBEL_PRODUCT_CODE segment_brand,
fct_plan.no_of_details planned_details
from dw_crm_dim_monthly dte,
dw_crm_dim_segmentation seg,
dw_crm_fact_plan_details fct_plan
where
fct_plan.segment_id = seg.segment_id
and fct_plan.crm_period = dte.crm_period
and fct_plan.PLANNED_ACTIVITY_TYPE = 'Position'
and fct_plan.record_status = 'Y'
and fct_plan.division_id = 1
--and fct_plan.crm_period = '2006'
) fct_plan
where
fct.customer_id = fct_plan.customer_id (+)
and fct.positioN_id = fct_plan.position_id (+)
and fct.CRM_PERIOD = fct_plan.crm_period (+)
and fct.product_code = fct_plan.product_code (+)
--and fct.crm_period = '2006'
and fct.ytd_details = 0
and fct.plan_details > 0;
August 30, 2006 - 5:44 pm UTC
we'd need details on this "snap clone, drop tablespace, zap data in" process.
but it sounds like it would need a complete refresh since the materialized view logs would obviously not be maintained.
Elahe Faghihi, August 31, 2006 - 4:01 pm UTC
Hi Tom,
Thanks for your prompt reply. I asked David, our Network Systems Analyst, about the Snap-Clone process. Here is David's explanation of Snap-Clone:
The formal name of the product we are using is Snapview. It is an EMC product that is part of their Navisphere suite of storage management software. It operates completely within the storage processors in our Clariion disk array. It sees storage at a block level only; it has no concept of oracle Tablespaces or operating system datafiles. It sees two storage containers – LUNs : One on the source host – the backbox and its clone on the destination host.
In order to refresh the data marts we use two main ingredients – Oracle Transportable Tablespace and Snapview. When the time comes to refresh the data marts here is what happens:
1. the tablespaces and their contents are dropped from the destination host.
2. transportable tablespaces are created on the source host and the metadata export file resides on the source LUN.
3. The clone LUN on the destination is stopped, refreshed from the source host, and restarted; this is the Snapview process.
4. The metadata file from the source host is now available on the destination and the transportable tablespaces are imported into the destination database.
My recommendation is to build the materialized views on the destination host AFTER the snapview process and transportable tables import are complete.
To Andy Bell, ensuring query rewrite always happen
Gints Plivna, September 05, 2006 - 10:10 am UTC
You can do that with two simultaneous materialized views and playing around with statistics on them.
<shameless plug>
I've explained it in some detail on my website, in the article "Effective search in a normalized application", which can be found here:
http://www.gplivna.eu/papers/mat_views_search.htm
Look under the heading "Alternate refresh process of materialized views".
</shameless plug>
Gints
Tom, how did you create an mview without an index on it?
Bartek, October 10, 2006 - 4:44 am UTC
Tom,
Going back to your original example (from 2004), I'm not really sure how you created the materialized view without an index (to minimize redo during complete refreshes). The Oracle documentation says that the mview will be created with one or more indexes on it:
"When you create a materialized view, Oracle Database creates one internal table and at least one index, and may create one view, all in the schema of the materialized view" (from the 10g SQL Reference Guide)
Regardless whether I create an mview WITH ROWID or WITH PRIMARY KEY, it always ends up having an index on it.
So, the question is: how to create an mview without an index on it (to minimize redo on refreshes) or how to disable the one that is automatically created?
Thanks
Bartek
Scenario follows:
SQL> drop table big_table;
Table dropped.
SQL> drop materialized view mv_big_table;
Materialized view dropped.
SQL> create table big_table nologging
SQL> as select * from all_objects;
Table created.
SQL> create materialized view mv_big_table
SQL> REFRESH COMPLETE WITH ROWID
SQL> as select * from big_table;
Materialized view created.
SQL> select table_name, index_name from user_indexes where index_name like '%BIG%';
MV_BIG_TABLE I_SNAP$_MV_BIG_TABLE
--oops, we got an index for free on our mview.. let's try WITH PK
SQL> drop materialized view mv_big_table;
Materialized view dropped.
SQL> alter table big_table
SQL> add constraint big_table_pk primary key(object_id);
Table altered.
SQL> create materialized view mv_big_table
SQL> REFRESH COMPLETE WITH PRIMARY KEY
SQL> as select * from big_table;
Materialized view created.
SQL> select table_name, index_name from user_indexes where index_name like '%BIG%';
BIG_TABLE BIG_TABLE_PK
MV_BIG_TABLE BIG_TABLE_PK1
-- and again we have an index..
--let's try to drop it
SQL> drop index BIG_TABLE_PK1;
SQL> drop index BIG_TABLE_PK1
*
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key
--oh well. Let's see if there's a difference between redo size
--for complete refreshes with the 'default' index
--and with an additional one, like in your original example
SQL> column value new_val V
SQL> select a.name, b.value
SQL> from v$statname a, v$mystat b
SQL> where a.statistic# = b.statistic#
SQL> and a.name = 'redo size';
redo size 4592239152
SQL> exec dbms_mview.refresh('MV_BIG_TABLE','C');
PL/SQL procedure successfully completed.
SQL> column value new_val V
SQL> select a.name, b.value, to_char( b.value-&V, '999,999,999,999' ) diff
SQL> from v$statname a, v$mystat b
SQL> where a.statistic# = b.statistic#
SQL> and a.name = 'redo size';
redo size 4653128296 60,889,144
SQL> create index big_table_idx on big_table(object_name);
Index created.
SQL> exec dbms_mview.refresh('MV_BIG_TABLE','C');
PL/SQL procedure successfully completed.
SQL> column value new_val V
SQL> select a.name, b.value, to_char( b.value-&V, '999,999,999,999' ) diff
SQL> from v$statname a, v$mystat b
SQL> where a.statistic# = b.statistic#
SQL> and a.name = 'redo size';
redo size 4716740760 63,612,464
--about the same as with the 'default' index, which tells me that the default one is a real index indeed
October 10, 2006 - 8:02 am UTC
actually, things change over time - today in current software, it will do a delete + insert without the append - so the data doesn't "disappear" during the refresh.
so, today, you won't see the append happening - depending on version.
the infrastructure objects (indexes) should be left "as is" for the correct operation of the materialized view.
Alberto Dell'Era, October 10, 2006 - 8:08 am UTC
> today in current software, it will do a
> delete + insert without the append
Is it possible to get back to the old behaviour (without setting events) ?
October 10, 2006 - 11:56 am UTC
not that I'm aware of, haven't researched it too deeply.
Any alternative approach to data replication?
Bartek, October 10, 2006 - 8:30 am UTC
Wow, that was a fast answer :D
Well, if these days Oracle always creates an index for me (I'm using 10gR2), that regrettably makes mviews useless for data replication purposes.
While I appreciate the new query rewrite capabilities of mviews, the automatic index is causing havoc for the data replication. The case I'm struggling with is a table with 25 million rows and a simple mview that does 'select * from..' to create monthly snapshot of that table.
Now, the 'create materialized view build immediate' takes 5 minutes. Refresh complete takes > 2 hours and never finishes (unable to extend the undo tablespace), refresh fast takes about 1 hour.
What is the current best approach to the data replication, if using materialized views is so much worse these days than CTAS?
Thanks
Bartek
October 10, 2006 - 12:04 pm UTC
I fail to see how the word useless applies at all.
If you just want a copy, you must be using a dblink (which isn't going to be doing the mview rewrite). No reason you cannot just schedule your own job that
a) truncates
b) insert /*+ APPEND */
into your own table? You are not really using any of the materialized view features here...
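Tom's suggested roll-your-own copy job might be sketched like this (the table name and dblink are made up for illustration):

```sql
-- Truncate + direct-path reload into a plain local table; no mview,
-- no mview log, minimal redo if the table is NOLOGGING.
CREATE OR REPLACE PROCEDURE refresh_big_copy AS
BEGIN
    EXECUTE IMMEDIATE 'truncate table big_table_copy';
    INSERT /*+ APPEND */ INTO big_table_copy
    SELECT * FROM big_table@remote_site;
    COMMIT;
END;
/
-- Schedule it, e.g. hourly, with dbms_job:
VARIABLE n NUMBER
exec dbms_job.submit( :n, 'refresh_big_copy;', sysdate, 'sysdate + 1/24' );
commit;
```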
David Aldridge, October 10, 2006 - 2:19 pm UTC
October 10, 2006 - 8:14 pm UTC
well, there is that of course :)
A reader, November 03, 2006 - 5:06 pm UTC
<< actually, things change over time - today in current software, it will do a
delete + insert without the append - so the data doesn't "disappear" during the
refresh.
>>
Tom,
if we refresh an MV in 10g, it deletes and re-inserts on very huge tables (millions of rows). Isn't that a slow approach for the MV?
I understand the benefit of "data availability during refresh", but it comes at the cost of undo tablespace space problems, slower MV refresh completion, etc.
What's your comment on this?
November 04, 2006 - 12:19 pm UTC
you are accurate in your observation.
A possible workaround for 10g refresh complete performance?
Ian, November 08, 2006 - 10:11 am UTC
Tom
Very worried about the "new improved" complete refresh method in 10g (delete + insert, as opposed to truncate + insert append). I just benchmarked it: the 9i method is 25% faster and generates only 1% as much redo as the 10g method. But as 88% of the redo comes from the delete, could we not just do "truncate table <mv_table> purge materialized view log reuse storage" and then do the refresh?
Can you think of any drawbacks to this approach?
Regards
Ian
November 08, 2006 - 7:48 pm UTC
you could but why have an mv log?
In fact...
Ian, November 08, 2006 - 10:27 am UTC
I guess we could also do the insert append ourselves - and then an alter mv consider fresh. Hmmm.
Better not forget...
Ian, November 08, 2006 - 10:37 am UTC
Better not forget to change the truncate to drop storage. Otherwise the table could get fairly large. :-)
BTW - fast refresh is not an option for us.
Regards
Ian
Please ignore my last comment.
Ian, November 08, 2006 - 11:09 am UTC
Please ignore my last comment. I was forgetting that the truncate moved the HW mark anyway.
Slinks away looking embarrassed.
you could but why have an mv log?
Ian, November 09, 2006 - 6:57 am UTC
Tom
You said "you could but why have an mv log?". I just put that in for completeness. As it happens we are an Apps shop so we can't use MV logs anyway. Hence the obsession with the speed of refresh completes.
But it turns out my test got a bit mixed up - I was using an MV on a prebuilt table - and when I tested my insert append I failed to notice I had dropped the MV! With the MV in place you cannot do the insert.
So now you would be looking at (In 10g for a quicker refresh complete) create MV on prebuilt table and then to refresh - a truncate, drop the MV, insert append, re-create the MV. So you lose most of the benefits of the MV apart from the query rewrite. But it does generate even less redo than 9i.
On another note - just out of interest - how come deletes generate so much more redo than inserts? I know the undo for a delete is far bigger than for an insert - and that undo generates its own redo - but I would have thought that would be offset by the actual delete statement being smaller in the redo log than the insert?
Could you explain this? Am I missing something?
Thanks and Regards
Ian
November 09, 2006 - 8:43 am UTC
it is likely all about index maintenance.
No indexes
Ian, November 09, 2006 - 12:43 pm UTC
Tom
There are no indexes on this table. The table was pre-created and then the MV created with:
create materialized view my_ra_cust_trx_mv
on prebuilt table with reduced precision
refresh complete on demand
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
SQL> l
1 select container_name
2 from dba_mviews
3 where owner = 'APPS'
4* and mview_name = 'MY_RA_CUST_TRX_MV'
SQL> /
CONTAINER_NAME
------------------------------
MY_RA_CUST_TRX_MV
SQL> l
1 select owner, table_name
2 from dba_tables
3 where owner = 'APPS'
4* and table_name = 'MY_RA_CUST_TRX_MV'
SQL> /
OWNER TABLE_NAME
------------------------------ -----------------------------
APPS MY_RA_CUST_TRX_MV
SQL> l
1 select *
2 from dba_indexes
3 where table_owner = 'APPS'
4* and table_name = 'MY_RA_CUST_TRX_MV'
SQL> /
no rows selected
So still a bit odd. Started with 1,000,000 rows. Deleted them, committed, then re-inserted the same 1,000,000 rows.
The delete generated 232 MB of redo - the insert generated 45 MB of redo. Production figures would be orders of magnitude higher.
So I still can't figure out why a delete generates x times as much redo as the corresponding insert.
SQL> create table my_dba_objects
2 as select *
3 from dba_objects;
Table created.
SQL> select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and a.name = 'redo size';
NAME VALUE
---------------------------------------------------------------- ----------
redo size 203620
SQL> delete
2 from my_dba_objects;
210554 rows deleted.
SQL> commit;
Commit complete.
SQL> column name format a24
SQL> select a.name, b.value, to_char( trunc((b.value-203620)/1024/1024,2), '999,999,999,999.99' ) diff_mb
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and a.name = 'redo size';
NAME VALUE DIFF_MB
------------------------ ---------- -------------------
redo size 74618684 70.96
SQL> insert into my_dba_objects
2 select *
3 from dba_objects;
210554 rows created.
SQL> commit;
Commit complete.
SQL> select a.name, b.value, to_char( trunc((b.value-74618684)/1024/1024,2), '999,999,999,999.99' ) diff_mb
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and a.name = 'redo size';
NAME VALUE DIFF_MB
------------------------ ---------- -------------------
redo size 97118300 21.45
Regards
Ian
November 09, 2006 - 2:36 pm UTC
delete = take row off of block (redo = row), put row into undo (redo=row)
insert = put row on block (redo = row), put "delete rowid" into undo (redo = rowid)
Better not forget - revisited.
Ian, November 15, 2006 - 9:45 am UTC
Tom
Turns out I was not so daft with my comment "Better not forget to change the truncate to drop storage. Otherwise the table could get fairly large."
It turns out that if you do a truncate reuse storage followed by a parallel insert append, it appends above the previously used storage.
So in my example below 2790 blocks seem to be "lost" from the tablespace.
SQL*Plus: Release 9.2.0.6.0 - Production on Wed Nov 15 14:05:52 2006
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
SQL> alter session enable parallel dml;
Session altered.
SQL>
SQL> create table my_objects
2 tablespace applsysx
3 as select *
4 from dba_objects;
Table created.
SQL> create table my_objects_par
2 tablespace applsysx
3 as select *
4 from my_objects;
Table created.
SQL> begin
2 dbms_stats.gather_table_stats(ownname => 'apps'
3 ,tabname => 'my_objects_par'
4 ,estimate_percent => dbms_stats.auto_sample_size
5 ,cascade => true
6 );
7 end;
8 /
PL/SQL procedure successfully completed.
SQL> select blocks
2 from dba_tables
3 where owner = 'APPS'
4 and table_name = 'MY_OBJECTS_PAR';
BLOCKS
----------
2790
SQL> select to_char(sum(bytes),'99999999999') bytes
2 from dba_free_space
3 where tablespace_name = 'APPLSYSX';
BYTES
------------
38010511360
SQL> truncate table my_objects_par reuse storage;
Table truncated.
SQL> explain plan for
2 insert /*+ append parallel(a) */
3 into my_objects_par a
4 select /*+ parallel(b) */
5 *
6 from my_objects;
Explained.
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY);
-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 210K| 18M| 425 | | | |
| 1 | LOAD AS SELECT | | | | | 15,01 | P->S | QC (RAND) |
| 2 | TABLE ACCESS FULL | MY_OBJECTS | 210K| 18M| 425 | 15,00 | S->P | RND-ROBIN |
-------------------------------------------------------------------------------------------------
Note: cpu costing is off
10 rows selected.
SQL> insert /*+ append parallel(a) */
2 into my_objects_par a
3 select /*+ parallel(b) */
4 *
5 from my_objects;
210562 rows created.
SQL> commit;
Commit complete.
SQL> begin
2 dbms_stats.gather_table_stats(ownname => 'apps'
3 ,tabname => 'my_objects_par'
4 ,estimate_percent => dbms_stats.auto_sample_size
5 ,cascade => true
6 );
7 end;
8 /
PL/SQL procedure successfully completed.
SQL>
SQL> select blocks
2 from dba_tables
3 where owner = 'APPS'
4 and table_name = 'MY_OBJECTS_PAR';
BLOCKS
----------
2909
SQL>
1 select trunc((38010511360-sum(bytes))/1024/8) blocks_used
2 from dba_free_space
3* where tablespace_name = 'APPLSYSX'
SQL> /
BLOCKS_USED
-----------
2880
Is this a bug? Or expected behaviour?
Regards
Ian
Any comments
Ian, November 23, 2006 - 5:00 am UTC
Tom
Any thoughts on my question above regarding losing space during a parallel direct path insert?
Regards
Ian
It's OK
Ian, November 24, 2006 - 2:30 pm UTC
Tom
It's OK - I figured out it's expected behavior. Somewhat unexpected expected behavior, I will admit - but now that I have read up more on how parallel direct path insert works at the segment level, it makes perfect sense.
But a great "Gotcha" if you were unaware of it.
Regards
Ian
MV complete refresh (9i vs 10g)
Pavel Ruzicka, December 17, 2006 - 3:10 am UTC
I was disappointed to find this change in the behaviour of complete refresh between 9i and 10g as well.
9i: TRUNCATE->INSERT direct path [INSERT /*+ APPEND */]
10g: DELETE->standard INSERT [INSERT /*+ BYPASS_RECURSIVE_CHECK */]
The side-effects: performance deterioration, excessive redo log information being generated even for NOLOGGING MVs, data blocks not being compressed, etc.
I see this undocumented change in behaviour is "nasty" from more than one point of view (database upgrade point of view, documentation..)
Anyway, I can confirm that there is a way to revert to 9i logic of complete MV refresh. Option ATOMIC_REFRESH=>false drives the refresh to run in TRUNCATE->direct path INSERT [INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND SKIP_UNQ_UNUSABLE_IDX */].
We are going to log SR for documentation bug or enhancement request for "PL/SQL Packages and Types Reference". It currently says:
= atomic_refresh =
"If this parameter is set to true, then the list of materialized views is refreshed in a single transaction. All of the refreshed materialized views are updated to a single point in time. If the refresh fails for any of the materialized views, none of the materialized views are updated.
If this parameter is set to false, then each of the materialized views is refreshed in a separate transaction."
I believe this does not entirely and completely describe the purpose of this argument and the details of its implementation (especially in the context of this change in behaviour between 9i and 10g).
BTW - once we observed the pitiful performance of complete MV refresh on 10g, we reverted to a "manual, roll-your-own" refresh based on TRUNCATE -> direct path INSERT -> CREATE MV on prebuilt table. We can outperform the complete refresh with both ATOMIC_REFRESH=>TRUE and FALSE. The reason for this is that a complex MV refresh (the actual INSERT) gets rewritten in a rather unfortunate way (using a number of virtual subqueries like "from$_subquery$_005"..). I am not going to paste the whole trace, but it is worth tracing your MV complete refresh to see what is going on.
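The ATOMIC_REFRESH workaround described above boils down to one named parameter; a minimal sketch (the MV name is an example):

```sql
-- Non-atomic complete refresh: the MV is truncated and reloaded with a
-- direct-path insert, so the rows briefly "disappear" during the
-- refresh, but far less undo/redo is generated than with the 10g
-- default delete + conventional insert.
BEGIN
    dbms_mview.refresh( list           => 'MY_MV',
                        method         => 'C',
                        atomic_refresh => false );
END;
/
```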
Mv refresh time
Lakshmi Narayana JV, February 15, 2007 - 5:17 am UTC
Hi tom ,
I created one MV with sysdate as the start time and a refresh interval of one hour, i.e. sysdate+1/24.
But my intention is to find out the refresh time between these intervals (i.e. how much time it takes to refresh the MV).
Is there any table which records the refresh time exactly?
Help me
February 15, 2007 - 11:56 am UTC
create your own job (eg: make the snapshot not auto refresh)
you could use dbms_job to submit:
begin
call the refresh routine you wanted to call;
insert into your_log_table( whatever, you, want, to, log);
commit;
end;
/
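Filled in with concrete names (the log table and MV name here are hypothetical), the template above might look like:

```sql
-- A simple log table for refresh timings.
CREATE TABLE mv_refresh_log
( mview_name VARCHAR2(30),
  start_time DATE,
  end_time   DATE );

-- Submit an hourly job that refreshes the MV and logs start/end times.
VARIABLE jobno NUMBER
BEGIN
    dbms_job.submit( :jobno,
       'declare l_start date := sysdate;
        begin
            dbms_mview.refresh( ''MY_MV'' );
            insert into mv_refresh_log
            values ( ''MY_MV'', l_start, sysdate );
            commit;
        end;',
       sysdate, 'sysdate + 1/24' );
    COMMIT;
END;
/
```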
Monitor MV Refresh
Jignesh, March 26, 2007 - 9:31 am UTC
Hi Tom,
I've got two materialized views.
1. One materialized view is based on tables and is fast refreshed every morning
(start with sysdate next (trunc(sysdate)+1) + 7/24)
2. The second materialized view is based on the above materialized view and is complete refreshed monthly
(start with sysdate next add_months(trunc(sysdate,'mm'),1) + 9/24).
Requirement:
On the 1st day of the month, if the first MV refresh is going on, I should wait until the 1st MV has successfully refreshed and then start the complete refresh of the second MV.
Limitations:
I am not allowed to write any stored procedure, as we are going to put these two MVs in another user's schema.
Can a materialized view group work in this case? Or do we have any other method of achieving the above?
Thanks
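A refresh group is one declarative way to get "second MV only after the first" ordering without a stored procedure: the members of a group are refreshed serially, in list order, in a single call. A sketch under assumed MV names:

```sql
-- Both MVs in one refresh group: dbms_refresh.refresh processes the
-- list in order, so MONTHLY_MV starts only after DAILY_MV completes.
BEGIN
    dbms_refresh.make(
        name      => 'monthly_grp',
        list      => 'DAILY_MV, MONTHLY_MV',
        next_date => add_months(trunc(sysdate,'mm'),1) + 9/24,
        interval  => 'add_months(trunc(sysdate,''mm''),1) + 9/24' );
END;
/
```

Note the trade-off: the group refreshes both members on the monthly schedule, so the first MV would keep its own daily job as well, at the cost of one extra (fast) refresh on the 1st.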
Materialized view not being used
Jayadevan, May 20, 2008 - 7:19 am UTC
Tom,
We have a table with a couple of million records. Approximately 20% of the data is really used. The rest are records which have undergone soft deletion (i.e. a column eff_to_dtm_utc is set to sysdate at deletion time). Most of our reports need only valid records (WHERE eff_to_dtm_utc = to_date('31/12/2075','DD/MM/YYYY')). Since actual deletion of the 'soft deleted' records would mean a lot of changes in the application code, we created a materialized view with the most-used columns and that filter (WHERE eff_to_dtm_utc = to_date('31/12/2075','DD/MM/YYYY')). The column eff_to_dtm_utc is not part of the MV, since that column is not necessary for the reports now - we have already filtered on the valid value. We are using the materialized view directly now. It is a FAST REFRESH ON COMMIT MV. When we run a query against the base table, with only the columns used in the materialized view and a filter WHERE eff_to_dtm_utc = to_date('31/12/2075','DD/MM/YYYY'), the query does not use the MV. What could be the reason? QUERY REWRITE is set to TRUE.
Jay
May 20, 2008 - 11:33 am UTC
... WHERE eff_to_dtm_utc = to_date('31/12/2075','DD/MM/YYYY') ...
you cannot begin to imagine my distaste for that approach. One wonders why one did not use NULL.
you did not give me an example to work with, so I will not comment further.
Here would have been a simple way to do it - if you had used a reasonable value for the end date:
ops$tkyte%ORA10GR2> CREATE TABLE t
2 (
3 dt date,
4 x int,
5 y varchar2(30)
6 )
7 enable row movement
8 PARTITION BY RANGE (dt)
9 (
10 PARTITION part1 VALUES LESS THAN (to_date('01-jan-9999','dd-mon-yyyy')) ,
11 PARTITION junk VALUES LESS THAN (MAXVALUE)
12 )
13 /
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select * from t where dt is null;
Execution Plan
----------------------------------------------------------
Plan hash value: 2931986080
-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 39 | 2 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE SINGLE| | 1 | 39 | 2 (0)| 00:00:01 | 2 | 2 |
|* 2 | TABLE ACCESS FULL | T | 1 | 39 | 2 (0)| 00:00:01 | 2 | 2 |
-----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("DT" IS NULL)
Note
-----
- dynamic sampling used for this statement
ops$tkyte%ORA10GR2> set autotrace off
see how the "is null" makes it hit partition 2 only - as you update the partition key from NULL to some value, it would move it into the other partition.
You would have 80% of the data in partition 1, 20% of the active data in partition 2 and no one would be the wiser.
You can do this with your thing as well - put everything less than 2075 into part1, 2075 data in part2 - and you'll be done. No materialized view, no materialized view logs, no additional storage...
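The 2075 variant would partition on the soft-delete column itself; a sketch, with table and payload column names assumed from the question:

```sql
-- Active rows (eff_to_dtm_utc = 31/12/2075) land in part_active;
-- soft-deleted rows (any earlier date) land in part_deleted.
CREATE TABLE accounts
( acct_id        NUMBER,
  eff_to_dtm_utc DATE,
  payload        VARCHAR2(100)
)
ENABLE ROW MOVEMENT
PARTITION BY RANGE (eff_to_dtm_utc)
( PARTITION part_deleted VALUES LESS THAN
      (to_date('31/12/2075','dd/mm/yyyy')),
  PARTITION part_active  VALUES LESS THAN (MAXVALUE)
);
```

Queries with the existing WHERE eff_to_dtm_utc = to_date('31/12/2075','DD/MM/YYYY') predicate would prune to part_active; updating the column at soft-delete time migrates the row to part_deleted, which is why row movement is enabled.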
Follow up on partitions Vs materialized views
Jayadevan, May 20, 2008 - 11:39 pm UTC
Hi Tom,
The original table has 52 columns. Of these, only 6 columns are used by the reports which have performance issues. In this case, wouldn't using a materialized view be better? We are dramatically reducing the amount of data to be read/fetched, are we not?
Partitioning was the approach we tried first. Even though it did improve performance, we could not meet the performance expectations. With materialized view approach, we are able to do that (if we query materialized views directly).
Thanks,
Jay
May 21, 2008 - 8:03 am UTC
depends.
IT ALWAYS DEPENDS.
If you use primarily index access, no, I would not expect much difference.
Querying materialized view logs
Jayadevan, June 05, 2008 - 2:59 am UTC
Hi Tom,
If I query the materialized view logs, they seem to take forever to return results, even when the log has 0 records. I tried querying regular tables and that is pretty fast. The Statspack snapshot shows the query against the log as the query with the maximum elapsed time.
The database is Oracle 10g R2.
12:09:06 SQL> select count(*) from pers_all;
COUNT(*)
----------
0
Elapsed: 00:00:00.17
12:13:52 SQL> select count(*) from MLOG$_PERS_ALL;
COUNT(*)
----------
0
Elapsed: 00:01:40.06
June 05, 2008 - 9:58 am UTC
that just means the mv log WAS big and now it is empty. It is called the "high water mark" - the table got big, we processed it, it is empty, but when you full scan it (I'm assuming you are using the old RBO, which will full scan for a count like that), we have to read all of the blocks.
either
o use the cbo, we'll use a primary key index and it'll be 'fast'
o stop counting rows, it is our table....
Materialized views
Jayadevan, June 05, 2008 - 11:38 pm UTC
Hi Tom,
Thanks for the reply. I have not changed any parameters. Since it is 10gR2, by default, it will use CBO, right?
When/How will the HWM be reset for MV logs? I wanted to query it because initially, I was refreshing only one of the 3 MVs based on this table. As a result, the log grew really huge, and all transactions on the base table were taking time. I wanted to ensure we are not doing anything like that again.
Regards,
Jayadevan
June 09, 2008 - 9:42 am UTC
by default, there are no indexes on this log.
If it is in an ASSM (automatic segment space management) tablespace, you can shrink it
ops$tkyte%ORA10GR2> drop table t;
Table dropped.
ops$tkyte%ORA10GR2> set echo on
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t ( x char(2000) primary key );
Table created.
ops$tkyte%ORA10GR2> create materialized view log on t;
Materialized view log created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec show_space( 'MLOG$_T' );
Unformatted Blocks ..................... 0
FS1 Blocks (0-25) ..................... 0
FS2 Blocks (25-50) ..................... 0
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 0
Full Blocks ..................... 0
Total Blocks............................ 8
Total Bytes............................. 65,536
Total MBytes............................ 0
Unused Blocks........................... 5
Unused Bytes............................ 40,960
Last Used Ext FileId.................... 4
Last Used Ext BlockId................... 8,641
Last Used Block......................... 3
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> insert into t select rownum from all_objects where rownum <= 1000;
1000 rows created.
ops$tkyte%ORA10GR2> rollback;
Rollback complete.
ops$tkyte%ORA10GR2> exec show_space( 'MLOG$_T' );
Unformatted Blocks ..................... 0
FS1 Blocks (0-25) ..................... 0
FS2 Blocks (25-50) ..................... 0
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 1,000
Full Blocks ..................... 0
Total Blocks............................ 1,024
Total Bytes............................. 8,388,608
Total MBytes............................ 8
Unused Blocks........................... 0
Unused Bytes............................ 0
Last Used Ext FileId.................... 4
Last Used Ext BlockId................... 12,425
Last Used Block......................... 128
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> alter table mlog$_t enable row movement;
Table altered.
ops$tkyte%ORA10GR2> alter table mlog$_t shrink space compact;
Table altered.
ops$tkyte%ORA10GR2> alter table mlog$_t shrink space ;
Table altered.
ops$tkyte%ORA10GR2> exec show_space( 'MLOG$_T' );
Unformatted Blocks ..................... 0
FS1 Blocks (0-25) ..................... 0
FS2 Blocks (25-50) ..................... 0
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 1
Full Blocks ..................... 0
Total Blocks............................ 8
Total Bytes............................. 65,536
Total MBytes............................ 0
Unused Blocks........................... 4
Unused Bytes............................ 32,768
Last Used Ext FileId.................... 4
Last Used Ext BlockId................... 8,641
Last Used Block......................... 4
PL/SQL procedure successfully completed.
that is "online", you can move it (offline operation, locks table for entire process)
ops$tkyte%ORA10GR2> exec show_space( 'MLOG$_T' );
Unformatted Blocks ..................... 0
FS1 Blocks (0-25) ..................... 0
FS2 Blocks (25-50) ..................... 0
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 1,000
Full Blocks ..................... 0
Total Blocks............................ 1,024
Total Bytes............................. 8,388,608
Total MBytes............................ 8
Unused Blocks........................... 0
Unused Bytes............................ 0
Last Used Ext FileId.................... 4
Last Used Ext BlockId................... 13,193
Last Used Block......................... 128
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> alter table mlog$_t move;
Table altered.
ops$tkyte%ORA10GR2> exec show_space( 'MLOG$_T' );
Unformatted Blocks ..................... 0
FS1 Blocks (0-25) ..................... 0
FS2 Blocks (25-50) ..................... 0
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 0
Full Blocks ..................... 0
Total Blocks............................ 8
Total Bytes............................. 65,536
Total MBytes............................ 0
Unused Blocks........................... 5
Unused Bytes............................ 40,960
Last Used Ext FileId.................... 4
Last Used Ext BlockId................... 11,833
Last Used Block......................... 3
PL/SQL procedure successfully completed.
HOW TO CREATE FAST REFRESH
Pravin Baldawa, June 16, 2008 - 9:23 pm UTC
HI TOM,
HOW TO CREATE FAST REFRESH ON THE FOLLOWING QUERY
SELECT distinct deptno,dname,loc FROM dept
AWAITING...
thanks,
June 17, 2008 - 9:43 am UTC
remove the distinct - it is not necessary. deptno is unique, so the data set is already distinct by definition.
Thanks for the response..
Pravin Baldawa, June 18, 2008 - 11:18 am UTC
What if I have to use distinct in the select query and still use fast refresh? Is that possible?
Regards
June 18, 2008 - 3:15 pm UTC
ops$tkyte%ORA10GR2> create table t ( x int primary key, y int, z int );
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create materialized view log on t with rowid (y,z) including new values;
Materialized view log created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create materialized view mv
2 refresh fast
3 as
4 select y, z
5 from t
6 group by y, z;
Materialized view created.
that would be equivalent.
MV Invalid afer Schema refresh
Bhakti Dash, June 25, 2008 - 9:31 pm UTC
After refreshing the schema from production to development, apps developers complaining that the MV's are invalid. Can you please shed some light on this?
Its 9204 database.
Regards,
Bhakti
June 26, 2008 - 3:45 pm UTC
there is entirely insufficient data to say a thing about anything here.
no idea what a 'refresh' entails for you, what it means to you.
no idea where the mv's are invalid - production or development
no idea what the mv's entail, distributed, single database
there is nothing comment-able (I made that word up) here.
A reader, July 01, 2008 - 8:51 am UTC
Hi - We have a view which has about 690,000 rows, and we are trying to create a materialized view for it. The create is taking a very long time (more than an hour). How can we speed this up? Even when the create is done and we try to refresh it, the refresh goes very slowly. Most of the time we end up killing the refresh. Are there any tips for speeding up operations on the MV?
Tomas Solar, July 11, 2008 - 7:27 am UTC
Hello,
we have an MV refreshed each hour. This job runs for 15 minutes and generates 5 GB of redo during that time.
I tried altering the MV to nologging
ALTER MATERIALIZED VIEW DOTTK_ADM.CHS_REPLICA1_MV NOLOGGING;
I tried altering the index on it
alter index DOTTK_ADM.I_SNAP$_CHS_REPLICA1_MV rebuild nologging;
I tried to set nocompress
ALTER MATERIALIZED VIEW DOTTK_ADM.CHS_REPLICA1_MV NOCOMPRESS;
I even updated the refresh call
exec DBMS_MVIEW.REFRESH('DOTTK_ADM.CHS_REPLICA1_MV','C',ATOMIC_REFRESH=>FALSE);
but nothing helped.
I really don't know what to do. One thing I could still try is to drop the MV and recreate it - but with which parameters, to avoid the same situation?
Regards,Tom
July 11, 2008 - 8:28 am UTC
well, indexes are always going to be logged.
why would nocompress be something you tried?
anyway, you give almost no information here.
are you archivelog mode
did dba force logging
is the view refreshed complete or incrementally (if the latter, then nologging is not an option)
Tomas Solar, July 11, 2008 - 9:26 am UTC
Thank you for the answer. I didn't dare hope that somebody would answer me.
It was recommended to me by Oracle product support (in a logged SR), so I tried it, but it had no effect.
yes, I am in archivelog mode
it is a force refresh from a remote table. The remote table has 2,642,502 rows
I dropped the MV and indexes and recreated them with these commands:
MV:
===
CREATE MATERIALIZED VIEW VHS_REPLICA1_MV
USING INDEX TABLESPACE "DOTTK_MAIN"
REFRESH FORCE WITH ROWID ON DEMAND
AS
select hit_date, banner_id,hit_count
from VIEW_HITS_STAT@REPLICA1;
ALTER MATERIALIZED VIEW DOTTK_ADM.VHS_REPLICA1_MV NOLOGGING
index:
====
CREATE UNIQUE INDEX "DOTTK_ADM"."PK_BANID_HDATE_R1" ON "DOTTK_ADM"."VHS_REPLICA1_MV" ("HIT_DATE", "BANNER_ID")
TABLESPACE "DOTTK_MAIN" PCTFREE 5 INITRANS 2 MAXTRANS 255 STORAGE ( INITIAL 1024K BUFFER_POOL DEFAULT)
ALTER INDEX "DOTTK_ADM"."I_SNAP$_VHS_REPLICA1_MV" NOLOGGING
then compute statistics:
==============
exec DBMS_STATS.GATHER_TABLE_STATS('DOTTK_ADM','VHS_REPLICA1_MV',NULL,DBMS_STATS.AUTO_SAMPLE_SIZE);
and setup refresh:
============
BEGIN
DBMS_REFRESH.MAKE(
name => '"DOTTK_ADM"."VIEW_AND_CLICKS"',
list => '',
next_date => to_date('07-11-2008 13:29:52', 'MM-DD-YYYY HH24:MI:SS'),
interval => '/*1:Hrs*/ sysdate + 1/24',
implicit_destroy => FALSE,
lax => FALSE,
job => 0,
rollback_seg => NULL,
push_deferred_rpc => FALSE,
refresh_after_errors => TRUE,
purge_option => NULL,
parallelism => NULL,
heap_size => NULL);
END;
Regards,
Tomas
July 11, 2008 - 10:26 am UTC
if you do a complete refresh - we will truncate and insert /*+ APPEND */ (so do a complete refresh if you want to minimize redo)
UNDO for the index and the index writes will always be logged - always.
the only way to not have them logged would be to disable/drop the index and rebuild/create it later.
Tomas Solar, July 11, 2008 - 12:35 pm UTC
thank you, but it made things worse
I set ALTER MATERIALIZED VIEW DOTTK_ADM.VHS_REPLICA1_MV REFRESH COMPLETE
I did refresh and now this refresh job generate 4x more redo size.
SQL> col sid format 9999;
SQL> col username format a15;
SQL> col program format a30;
SQL> col value format 99999999;
SQL> SELECT st.sid, se.username, se.program, (value/1048576) VALUE
2 FROM gv$sesstat st, gv$statname sn, gv$session se
3 WHERE sn.name = 'redo size' and se.username = 'DOTTK_ADM'
4 AND sn.statistic# = st.statistic#
5 AND st.sid = se.sid
6 AND value <> 0
7 ORDER BY 4 desc;
SID USERNAME PROGRAM VALUE
----- --------------- ------------------------------ ---------
2124 DOTTK_ADM oracle@tnode (J000) 7918
So I tried to drop the MV and recreate it again with the NOLOGGING clause.
CREATE MATERIALIZED VIEW VHS_REPLICA1_MV
NOLOGGING
USING INDEX TABLESPACE "DOTTK_MAIN"
REFRESH FORCE WITH ROWID ON DEMAND
AS
select hit_date, banner_id,hit_count
from VIEW_HITS_STAT@REPLICA1;
CREATE UNIQUE INDEX "DOTTK_ADM"."PK_BANID_HDATE_R1" ON
"DOTTK_ADM"."VHS_REPLICA1_MV" ("HIT_DATE", "BANNER_ID")
TABLESPACE "DOTTK_MAIN" PCTFREE 5 INITRANS 2 MAXTRANS 255
NOLOGGING
STORAGE (
INITIAL 1024K BUFFER_POOL DEFAULT)
Regards,
Tom
July 11, 2008 - 1:34 pm UTC
then you were doing an incremental refresh before, and there is no way to skip undo and redo generation for that - none.
I told you - regardless, the index will be maintained.
The only ZERO redo option would be
a) disable or drop indexes
b) complete refresh into nologging table
c) rebuild/create of index with no logging
nothing else will get you there
if you refresh incrementally - then all changes are logged and you cannot stop that.
if you refresh complete - the move of the data from temp into the index will be logged (the table need not be)
those are your choices.
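Putting those options together, the sequence might look like the following sketch (object names are the ones from this thread; ATOMIC_REFRESH => FALSE is what lets a 10g complete refresh use TRUNCATE plus direct-path insert instead of DELETE; note that any NOLOGGING operation leaves the affected segment unrecoverable from archived redo until the next backup):

```sql
-- a) take the secondary index out of the picture before the refresh
ALTER INDEX "DOTTK_ADM"."PK_BANID_HDATE_R1" UNUSABLE;

-- b) complete, non-atomic refresh into a NOLOGGING table:
--    TRUNCATE + INSERT /*+ APPEND */ generates minimal redo for the table
ALTER MATERIALIZED VIEW DOTTK_ADM.VHS_REPLICA1_MV NOLOGGING;
EXEC DBMS_MVIEW.REFRESH('DOTTK_ADM.VHS_REPLICA1_MV', 'C', atomic_refresh => FALSE);

-- c) rebuild the index afterwards, also without logging
ALTER INDEX "DOTTK_ADM"."PK_BANID_HDATE_R1" REBUILD NOLOGGING;
```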
Complete refresh does a delete instead of truncate
A reader, July 11, 2008 - 12:51 pm UTC
Hi Tom,
I am having a similar problem. I use a MV to enforce declarative integrity, but I ran into the increasing logging problem (@18 MB per transaction). Archive log sizes for a day increased to @5 GB from @0.7 GB.
Metalink Note 365157.1 says that in 10g, a complete refresh will do a delete instead of a truncate and that this is expected behavior. To force the refresh to do a truncate instead of a delete, the ATOMIC_REFRESH parameter needs to be set to FALSE.
I converted the MV from REFRESH ON COMMIT to REFRESH ON DEMAND and set the ATOMIC_REFRESH to FALSE in the DBMS_MVIEW.REFRESH procedure. This brought down the logging to almost negligible levels, but it commits data to the base table before the check constraint on the MV fires.
July 11, 2008 - 1:36 pm UTC
why are you doing a complete refresh for integrity?!
I don't get it - you would be using a refresh-fast-on-commit materialized view.
Tomas Solar, July 11, 2008 - 1:25 pm UTC
I recreated the MV as mentioned above, but there was no effect.
So I will try my "last" test:
1) MV with complete refresh
2) exec DBMS_MVIEW.REFRESH('DOTTK_ADM.VHS_REPLICA1_MV','C',ATOMIC_REFRESH=>FALSE);
Tom
July 11, 2008 - 1:40 pm UTC
see above, I listed your options....
Nested MV's as a workaround for fast refreshes ?
Gokul, July 21, 2008 - 9:11 am UTC
Hi Tom,
We are trying to create a fast refreshable MV on a complex query. As documented, it is not possible. To make it work, we are looking at options to create nested materialized views.
For the data below, we are interested in the most recent M row for a given X, Y. Since aggregates are not allowed in fast refreshable MVs, we cannot use this directly in the MV definition.
Is it a good idea to create a MV with all the interested keys, and then join this MV with the main table ?
X Y M N
1000 1 20070101 20070101
1000 1 20070101 20080601
1000 1 20080101 20080101
1000 1 20080301 20080101
I am also facing problems querying the data for the latest row. Is this possible with simple SQL (without aggregations)?
Here are the scripts I used for the same:
create table t10 (x int, y int, m int, n int);
alter table t10 add constraint t10_pk primary key (x,y,m,n);
insert into t10 (X, Y, M, N)
values ('1000', '1', '20070101', '20070101');
insert into t10 (X, Y, M, N)
values ('1000', '1', '20070101', '20080601');
insert into t10 (X, Y, M, N)
values ('1000', '1', '20080101', '20080101');
insert into t10 (X, Y, M, N)
values ('1000', '1', '20080301', '20080101');
Rgds,
Gokul
July 22, 2008 - 10:46 am UTC
... Since, aggregates are not allowed in fast refreshable MV's we cannot use
this directly in the MV view definition.
...
that is not true in general. However...
... Is it a good idea to create a MV with all the interested keys, and then join
this MV with the main table ?
...
it could be - it depends. what type of queries do you actually use against this base table??? we don't know what questions you ask.
Other than "we need the most current row by X,Y from this table"
if you
select ... from T where x = ? and y = ? .....
that is, you supply X,Y - I would not consider a materialized view - just an index on X,Y,the_date_field and query
select * from
(select * from t where x = ? and y = ? order by x desc, y desc, the_date_field DESC )
where rownum = 1;
with an index on x,y,the_date_field. we'd read the index backwards, get the first row, stop.
if you needed the most current row for EVERY x,y - and x,y typically had one or two records for each - then I'd skip the MV again and just use the base table.
You would need many rows for each X,Y pair, and you would need to fetch ALL or MOST of the x,y values, before an MV would be considered - and this table would have to be huge. And whether we'd fast refresh (which should be called incremental, not fast - "fast" can be really, really slow) or complete refresh would depend on many factors as well.
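For the "most current row for EVERY x,y" case, one simple-SQL form against the t10 table created above uses an analytic function rather than an aggregate (so it only loosely answers the "without aggregations" question - it still computes per-group rankings):

```sql
-- latest row per (x, y): rank rows within each group by m, then n, descending
SELECT x, y, m, n
  FROM (SELECT t.*,
               ROW_NUMBER() OVER (PARTITION BY x, y
                                  ORDER BY m DESC, n DESC) rn
          FROM t10 t)
 WHERE rn = 1;
-- for the four sample rows, returns: 1000  1  20080301  20080101
```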
Nested MV's as a workaround for fast refreshes ?
Gokul, July 22, 2008 - 6:21 pm UTC
Hi Tom,
To give a bit of background, we were looking at the MV option because the application has to fetch data from a remote database. Currently, the data is abstracted from the remote database base tables and presented to the application as views. But since the data models of the remote database and the application database are too different, joins on some of the remote base tables are taking a lot of time and probably will not meet the response-time requirements.
One of the queries looks like
select x1.a, x1.b, x1.c, x1.d, x1.e, y1.a
from x x1, y y1, z z1
where
x1.a = y1.a
and x1.b = y1.b
and x1.c = (select max(x2.c) from x x2
where x1.a = x2.a
and x1.b = x2.b
)
and x1.a = z1.a
and x1.b = z1.b
and z1.c = (select max(z2.c) from z z2
where z1.a = z2.a
and z1.b = z2.b
)
and z1.d = (select max(z3.d) from z z3
where z1.a = z3.a
and z1.b = z3.b
and z1.c = z3.c
)
x is 17 million rows
y is 54 million rows
z is 80 million rows
We were looking at the nested MV option to solve the subquery aggregations.
We will not be using any rewrite features of MV, but just
select * from <final MV> where "x.a" = <user input>
or
select * from <final MV> where "y.a" = <user input>
DBAs strongly suggest to write our own refresh code and not use nested MVs as it will heavily impact the remote database and we are exploring the batch option now.
Just checking: do you see any possibility of creating an incremental-refresh MV for the example query without having to create intermediate MVs?
Rgds,
Gokul
July 22, 2008 - 6:59 pm UTC
...
DBAs strongly suggest to write our own refresh code and not use nested MVs as
it will heavily impact the remote database and we are exploring the batch
option now.
...
wow, your DBAs must trust you a lot. I mean, why would what you have to do be better than what the built-in stuff can do?
this really looks like "we meant to have a single database" - why would you want to replicate this volume of data? Why wouldn't the process that needs this data be part of the database that holds this large amount of data in the first place? (I hate, despise, and reject whenever possible any form of replication. DR - disaster recovery - is not replication, and replication is not DR, by the way...)
Is two-way syncing possible with MV
DG, July 28, 2008 - 8:39 pm UTC
Hi Tom,
Thanks for all the valuable advice.
Wanted to check with you: can we do two-way syncing with MVs? I know changes in the master table will be reflected in the MV (using fast or complete refresh), but how can we reflect changes made to updatable MVs back in the master table?
Thanks,
July 29, 2008 - 3:46 pm UTC
by creating them as updateable materialized views?
that is exactly what an updatable materialized view does - check out the documentation again. That is the difference between a normal materialized view and an updatable one: you can update the latter and the changes will be reflected in the master; you cannot update the former at all.
Fast refresh and redo
Jyo, November 28, 2008 - 12:02 pm UTC
Hi tom,
In the following materialized view, even if there is no data in the table, it generates redo on fast refresh. Is there any option to disable redo generation on refresh?
CREATE TABLE TABLE1
(
IN_ID NUMBER(20),
APPLN_ID NUMBER(20),
DTL_ID NUMBER(20),
M_TYPE VARCHAR2(20 BYTE),
STATUS VARCHAR2(50 BYTE),
APPLN_CODE VARCHAR2(10 BYTE),
M_SRC VARCHAR2(20 BYTE),
M_FLAG VARCHAR2(3 BYTE),
FILE_NAME VARCHAR2(100 BYTE),
M_NO VARCHAR2(1000 BYTE),
ARCHIVE VARCHAR2(1 BYTE)
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
ALTER TABLE TABLE1 ADD (
CONSTRAINT PK_TABLE1
PRIMARY KEY
(IN_ID));
CREATE MATERIALIZED VIEW LOG ON TABLE1
WITH ROWID, PRIMARY KEY, SEQUENCE
(M_SRC, APPLN_CODE, M_TYPE, STATUS, ARCHIVE)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW MV_TABLE1
ENABLE ROW MOVEMENT
BUILD IMMEDIATE
REFRESH FAST
START WITH SYSDATE
NEXT SYSDATE+1/48
WITH PRIMARY KEY
AS
SELECT APPLN_CODE, M_TYPE, STATUS, COUNT(*) COUNT
FROM TABLE1
WHERE (ARCHIVE IS NULL OR ARCHIVE <> 'Y')
GROUP BY APPLN_CODE, M_TYPE, STATUS;
ALTER MATERIALIZED VIEW LOG ON TABLE1 NOLOGGING;
ALTER MATERIALIZED VIEW MV_TABLE1 NOLOGGING;
regards
jyo
November 28, 2008 - 5:34 pm UTC
fast refresh is the same as saying "incremental refresh"
the redo generated would be the result of removing the old rows, maintaining any indexes and so on.
Even a complete refresh would generate some redo.
How much redo is it generating, and why do you believe that it is "too much"?
materialized view to show count
jyo, December 09, 2008 - 10:58 am UTC
It generates a small amount of redo even for the empty table in the above case, so on periodic refresh, more log files will be generated on tables with data. In my case there are 8 similar materialized views.
In my scenario, I want to show the output of the above materialized view query as a dashboard in the application, so I have created a materialized view with periodic refresh. Querying a normal view based on this query takes more time.
The materialized view query is from my earlier post. The STATUS column value will change, and the count should change accordingly.
SELECT APPLN_CODE, M_TYPE, STATUS, COUNT(*) COUNT
FROM TABLE1
WHERE (ARCHIVE IS NULL OR ARCHIVE <> 'Y')
GROUP BY APPLN_CODE, M_TYPE, STATUS;
few options i have
1. Direct execution of the query
2. Query from normal view
3. using materialized view
4. create a summary table, and use MERGE to update this
table periodically with the above query
I am looking for a fast response in getting the count. Any other options? Which of the above options will be better? Are there any performance issues with tables that may have millions of records over a period of time?
Thanks
December 09, 2008 - 2:20 pm UTC
sounds like a materialized view to me?
not sure what the question was really....
why do you care about "a small amount of redo" - it is going to generate redo; pretty much everything you do will
MV Refresh
Arvind Mishra, March 22, 2009 - 7:46 pm UTC
Hello Tom,
We require truncate-and-insert load behavior for the MV instead of delete-and-insert. We are using Oracle 10g and want to complete refresh the MV.
a) What is the best way to get it?
b) Someone told me that if we use ATOMIC_REFRESH => FALSE with dbms_mview.refresh, we will get truncate-and-load. Is it true?
c) How can I check which statement an MV uses to refresh itself? Please give me an example.
Thanks,
Arvind
March 24, 2009 - 11:04 am UTC
use atomic_refresh => FALSE as a parameter to the refresh.
I do not know what you mean by c).
Either it is run as a job (select the WHAT column from dba_jobs...) or you run it yourself (in which case... well, you coded it).
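For example, a sketch of looking the refresh call up in the job queue (assuming the refresh was scheduled the usual way via DBMS_REFRESH or DBMS_JOB; the WHAT column holds the PL/SQL the job executes):

```sql
-- find scheduled MV refresh jobs and the statement they run
SELECT job, what, next_date
  FROM dba_jobs
 WHERE LOWER(what) LIKE '%dbms_refresh.refresh%'
    OR LOWER(what) LIKE '%dbms_mview.refresh%';
```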
dbms_mview
Arvind Mishra, March 30, 2009 - 4:47 pm UTC
Thanks Tom.
Sorry for posting the same question twice; my internet connection was not working properly.
By c) I meant: how can I see whether the materialized view uses truncate-and-load or delete-and-load to refresh itself?
Regards,
Arvind
parallelism in MV refresh
Srini, April 20, 2009 - 3:03 pm UTC
I want to use parallelism for complete refreshes and no parallelism for fast refreshes (10g DB). I was looking at the documentation and I am not clear what the process for this is. The reason I ask is that 99% of the time we run a fast refresh with no parallelism, and once in a while we make a lot of changes to the base table (30% of the data) and want to do a complete refresh with parallelism. Below are the specific questions I had about enabling parallelism:
1. MV: This should be set to parallel? How about MV indexes?
2. Base table: This should be set to parallel? This is a non partitioned table.
3. MV Log: This should be set to no parallel as it is used in fast refresh.
4. MV refresh package: Can we set the parallelism in the refresh procedure (dbms_mview.refresh) so that Oracle does a parallel build even if the underlying MV and base table are not set as parallel? If so, then I don't have to worry about the MV and base table setup and can just use this parameter.
Snapshot recreation after master table's data change?
Suren, April 21, 2009 - 1:45 pm UTC
Hi Tom, I have a crucial issue.
Here is what I did:
1. I have 3 sites: A -> master site; B, C -> MV sites
2. There is a very big table (7 GB) in A, named BOOK_IMAGE
3. The BOOK_IMAGE table has an MV log.
4. Then I created fast refreshable snapshots of BOOK_IMAGE at the B and C sites, both named REMOTE_BOOK_IMAGE, and refreshed them.
5. After that I accidentally dropped the REMOTE_BOOK_IMAGE snapshot from B
6. While REMOTE_BOOK_IMAGE was dropped, many changes (INSERT, UPDATE, DELETE) were made to BOOK_IMAGE - approximately 8,000 changed rows.
7. I recreated the fast refreshable snapshot REMOTE_BOOK_IMAGE at B and refreshed, but those 8,000 records did not come to site B.
I can't do a complete refresh because of BOOK_IMAGE's huge size.
What should I do to get those changed rows?
April 21, 2009 - 3:51 pm UTC
describe how you did step #7.
#7 detail
Suren, April 21, 2009 - 9:26 pm UTC
create materialized view REMOTE_BOOK_IMAGE
on prebuilt table
refresh force on demand as
select * from BOOK_IMAGE@A;
dbms_mview.refresh('REMOTE_BOOK_IMAGE');
April 23, 2009 - 11:57 am UTC
so, what was in the prebuilt table?
do you know how to do an offline instantiation? That is what you are trying to do, did you do the steps necessary to do an offline instantiation? Have you read about it?
http://www.oracle.com/pls/db102/search?remark=quick_search&word=offline+instantiation&tab_id=&format=ranked (think about the synchronization issues here; unless you follow a process - which will include bringing over the current data, getting the data CURRENT yourself - how could we make it "current"? We don't know what changes need or do not need to be applied; you have to do this via a process...)
Scenario for Snap Shot Plan
A reader, April 22, 2009 - 1:59 pm UTC
Tom,
Scenario: I have a table, say T, with prices for products. I need to take a snapshot at the beginning of each quarter, and the user needs a history of these snapshots whenever they want one. Here is my plan: 1. create a "historical_data_for_snapshot" table, 2. create the MV (and refresh it, after the first creation of course) at the beginning of each quarter, 3. immediately push the MV data into "historical_data_for_snapshot" after refreshing the MV. Does this sound good, or is there a major flaw here? Thanks in advance for your input.
April 24, 2009 - 4:23 pm UTC
I don't see the usefulness of the materialized view here at all.
Sounds more like you want to run
insert into history
select to_date( :bind_that_has_the_qtr_in_it, 'format') qtr, t.*
from t;
at the beginning of each quarter.
Need MVIEW REFRESH or NOT
Parag, April 26, 2009 - 8:43 am UTC
Hi Tom,
Thanks for your valuable inputs.
I have just done a reorg of a complete tablespace (ALTER TABLE ... MOVE, ALTER INDEX ... REBUILD) into a new tablespace.
My questions are -
Q1) Is a refresh of the materialized views in that tablespace required?
Q2) If required, do all MVIEWS in that tablespace need a refresh, or only particular ones?
DB - 9.2.0.6
My MVIEW details after reorg are as below -
MVIEW_NAME REFRESH_METHOD REFRESH_MODE STALENESS TO_CHAR(LAST_REFRESH LAST_REF FAST_REFRESHABLE
------------------------------ -------------------- -------------------- ------------------- -------------------- -------- ------------------
A COMPLETE DEMAND FRESH 31-MAR-2009 11:35:10 COMPLETE DIRLOAD_LIMITEDDML
B COMPLETE DEMAND FRESH 31-MAR-2009 11:35:36 COMPLETE DIRLOAD_LIMITEDDML
C COMPLETE DEMAND UNKNOWN 26-APR-2009 01:30:41 COMPLETE NO
D COMPLETE DEMAND STALE 03-APR-2009 06:02:50 COMPLETE NO
E COMPLETE DEMAND UNKNOWN 26-APR-2009 13:11:01 COMPLETE NO
F FORCE DEMAND UNDEFINED 10-MAY-2007 22:55:09 COMPLETE NO
G COMPLETE DEMAND STALE 03-APR-2009 06:00:37 COMPLETE NO
H COMPLETE DEMAND STALE 03-APR-2009 06:00:13 COMPLETE NO
I FAST COMMIT FRESH 24-APR-2009 16:06:13 FAST DIRLOAD_DML
Regards,
Parag
April 27, 2009 - 1:54 pm UTC
the fresh ones do not; the stale ones do - but they were not made stale by the move, they were already stale.
You didn't actually change the data, so there is nothing to synchronize.
On Commit Materialized View
A reader, May 07, 2009 - 7:16 pm UTC
Hi Tom, I'm puzzled by the following scenario.
Here's a test case that's reproducible every time on Oracle 10.2.0.4.
CREATE TABLE m (
mID NUMBER NOT NULL PRIMARY KEY,
col2 NUMBER NOT NULL,
col3 NUMBER NOT NULL,
col4 NUMBER NOT NULL,
col5 NUMBER NOT NULL,
col6 VARCHAR2(50) NOT NULL
);
CREATE TABLE i (
iID NUMBER NOT NULL PRIMARY KEY,
mID NUMBER NOT NULL REFERENCES m(mID),
col2 NUMBER NOT NULL,
col3 VARCHAR2(10) NOT NULL
);
CREATE SEQUENCE i_seq;
CREATE INDEX m_idx1 ON m(col2);
CREATE INDEX i_idx1 ON i(mID);
CREATE MATERIALIZED VIEW LOG ON I WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON M WITH ROWID;
CREATE MATERIALIZED VIEW m_i_MV
BUILD IMMEDIATE
REFRESH FAST WITH ROWID
ON COMMIT
AS
SELECT i.iID,
m.mID,
i.col2,
i.col3,
m.col2 m_col2,
m.rowid m_rowid,
i.rowid i_rowid
FROM i,
m
WHERE i.mID = m.mID;
-- --------------------------------------------------------------------------------
-- Session #1 - start populating the tables.
-- --------------------------------------------------------------------------------
BEGIN
FOR i IN 1..20000 LOOP
INSERT INTO m VALUES (i, i, i, i, i, 'xxxxxxxxxxxxxxxxxxxxxxxxx');
FOR j IN 1..3 LOOP
INSERT INTO i VALUES (i_seq.NEXTVAL, i, i, 'zzzzzzzzzz');
END LOOP;
COMMIT; -- Bad, I know, but just a simulation.
END LOOP;
END;
/
-- --------------------------------------------------------------------------------
-- Session #2 - statistics collection started on table i - Run this approximately
-- 15 seconds after session #1 started.
-- --------------------------------------------------------------------------------
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(
ownname => user,
tabname => 'I',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY',
cascade => TRUE);
END;
/
-- --------------------------------------------------------------------------------
-- Session #3 - Session #1 is blocking session #2. Nothing moves, but no deadlock!
-- Both session 1 and 2 are stuck forever!
--
-- The show_lock_pkg comes from metalink.
-- --------------------------------------------------------------------------------
SQL> exec SHOW_LOCK_PKG.showlock;
Locking/Waiting tree Username PID Locked Object/Type-Request
------------------------- -------------- ------ ------- ---------------------
+ 1478,12122
TEST_SCHEMA
4012:1
|
Tran enqueue-Exclusive
|--->1481,2475
TEST_SCHEMA
2184:3
TEST_SCHEMA.I
-- --------------------------------------------------------------------------------
-- Session #4 - simple query, session hung.
-- --------------------------------------------------------------------------------
SELECT COUNT(*) FROM i;
May 11, 2009 - 2:58 pm UTC
very good test case.
Please utilize it to open a bug with support, it definitely reproduces. The dbms_stats is blocked by the first session - but the first session itself "stops" until the dbms_stats is killed (ctl-c'ed).
Suggest you add
dbms_application_info.set_client_info(i);
to the plsql block so you can see that the insert block gets 'stuck'. Stopping the statistics gathering is sufficient to "un-freeze" everything.
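Applied to the test case's first block, that suggestion might look like this (a sketch; only the set_client_info call is new, so another session can watch progress via SELECT client_info FROM v$session):

```sql
BEGIN
  FOR i IN 1..20000 LOOP
    -- expose the loop counter so you can see where the insert session freezes
    dbms_application_info.set_client_info(i);
    INSERT INTO m VALUES (i, i, i, i, i, 'xxxxxxxxxxxxxxxxxxxxxxxxx');
    FOR j IN 1..3 LOOP
      INSERT INTO i VALUES (i_seq.NEXTVAL, i, i, 'zzzzzzzzzz');
    END LOOP;
    COMMIT;
  END LOOP;
END;
/
```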
On Commit Materialized View Followup
A reader, May 11, 2009 - 5:52 pm UTC
I'll definitely open a support case for this. What would you suggest we do in the meantime before the bug is fixed? We are a software vendor using Oracle as backend. In the upcoming release of the software, we're planning on using on commit materialized view. It's important that the materialized view contains real time data. The transaction rate won't be as high as shown in the test case. The number of concurrent writers will also likely stay pretty low (3 ~ 5 at most). Killing the statistics gathering process every time the system hangs isn't a viable solution in our case. Can you please recommend a workaround for this situation? Thank you.
May 11, 2009 - 7:28 pm UTC
lock the statistics on these tables - do not gather on them if you cannot schedule a maintenance window.
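A sketch of locking the statistics on the two base tables from the test case (once locked, DBMS_STATS gather calls against them fail with ORA-20005 unless FORCE is used, so the scheduled gathering simply skips them):

```sql
-- keep stats-gathering away from the MV base tables outside a maintenance window
EXEC DBMS_STATS.LOCK_TABLE_STATS(user, 'M');
EXEC DBMS_STATS.LOCK_TABLE_STATS(user, 'I');

-- later, inside a maintenance window:
-- EXEC DBMS_STATS.UNLOCK_TABLE_STATS(user, 'I');
-- EXEC DBMS_STATS.UNLOCK_TABLE_STATS(user, 'M');
```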
On Commit Materialized View on 11g
A reader, May 12, 2009 - 4:17 pm UTC
I used the above test case (May 7 2009) on 11g and was not able to reproduce the issue. So it seems that this is no longer a problem on the latest release of Oracle.
However, here are the elapsed time running the test case with and without the materialized view:
Without MV: 6.34 seconds
With MV : 18 min 36 seconds
It's obvious that it would definitely take longer to populate the base tables when there's a materialized view. However, why does it take so much longer? Is this normal?
May 13, 2009 - 11:08 am UTC
are you really crafting an on commit MV just to avoid a simple join later? Joins are really pretty efficient in the database.
think of the amount of work (generic work, MV's are very generic) you've added to the commit process.
On Commit Materialized View
A reader, May 13, 2009 - 12:01 pm UTC
No. Not for joins. I've done enough benchmarking to know that a pre-join materialized view doesn't really help that much in performance, at least in our application. The materialized view is used mainly to address pagination performance. In order to avoid duplicate records across pages, the pagination SQL is sorted by <the column that the user selected> + <the primary key of one of the base tables>. Since these two columns do not always come from the same table, using a pre-join materialized view solves this problem. My benchmark numbers suggest that the performance improvement achieved this way can be quite significant.
Do you have an alternative way that you can suggest to address this specific problem? Thanks.
May 13, 2009 - 3:26 pm UTC
... <the column that the user selected> + <the primary key of one
of the base tables>. ...
change that to be
<the column that the user selected> + <the primary key of one
of the base table that column comes from>.
then you can query that table (pagination wise) and join to the rest.
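A sketch of that approach against the m/i tables from the earlier test case (the bind variables :first_row and :last_row are hypothetical, and the user-selected sort column is assumed to be i.col2, so an index on i(col2, iID) can drive the pagination before the join):

```sql
-- paginate on the table that owns the sort column (i), then join to m
SELECT i.iID, i.col2, i.col3, m.col2 m_col2
  FROM (SELECT iID
          FROM (SELECT x.iID, rownum rn
                  FROM (SELECT iID
                          FROM i
                         ORDER BY col2, iID) x
                 WHERE rownum <= :last_row)
         WHERE rn > :first_row) page,
       i, m
 WHERE i.iID = page.iID
   AND i.mID = m.mID
 ORDER BY i.col2, i.iID;
```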
A reader, May 13, 2009 - 3:54 pm UTC
"<the column that the user selected> + <the primary key of one of the base table that column comes from>".
This would only work if <the primary key of one of the base table that column comes from> is unique after the join. Isn't it? Can you please provide an example of how this works?
Another consideration is that if dynamic sampling is used, wouldn't it be very helpful if all the "selected" columns were on the same table (as in the materialized view) so cross-column cardinality stats is available to the optimizer?
May 13, 2009 - 4:27 pm UTC
if you say... This would only work if <the primary key of one of the base table that column
comes from> is unique after the join. Isn't it? Can you please provide an
example of how this works?
...
then I'll say the same for you
<the column that the user selected> + <the primary key of one of the
base tables>
that would be true of that as well? no?
or is the one of the base tables specifically the top of a hierarchy?
A reader, May 13, 2009 - 5:55 pm UTC
Let me phrase it more precisely. The sorting should be done by <the column that the user selected> + <the primary key of the base table that would make the record unique>
Here's a test case to demonstrate this.
-- -------------------------------------------------------------------
-- Test Setup
-- -------------------------------------------------------------------
DROP TABLE i;
DROP TABLE m;
CREATE TABLE m (
mID NUMBER NOT NULL PRIMARY KEY,
col2 NUMBER NOT NULL,
col3 NUMBER NOT NULL,
col4 NUMBER NOT NULL,
col5 NUMBER NOT NULL,
mDATE DATE NOT NULL
);
CREATE TABLE i (
iID NUMBER NOT NULL PRIMARY KEY,
mID NUMBER NOT NULL REFERENCES m(mID),
status VARCHAR2(10) NOT NULL,
matchCount NUMBER NOT NULL
);
CREATE SEQUENCE i_seq;
CREATE INDEX m_n1 ON m(mDate, mID);
CREATE INDEX i_fk1 ON i(mID);
CREATE INDEX i_n1 ON i(Status, iID);
CREATE INDEX i_n2 ON i(matchCount, iID);
BEGIN
FOR i IN 1..2000 LOOP
INSERT INTO m VALUES (i, i, i, i, i, SYSDATE - MOD(i, 1000));
FOR j IN 1..30 LOOP
INSERT INTO i VALUES (i_seq.NEXTVAL, i, MOD(i, 4), i);
END LOOP;
END LOOP;
END;
/
COMMIT;
exec dbms_stats.gather_table_stats( user, 'M' );
exec dbms_stats.gather_table_stats( user, 'I' );
CREATE TABLE x_temp AS
SELECT i.mID, iID, status, MatchCount, mDate, col2, col3
FROM i, m
WHERE i.mID = m.mID AND
1 = 2;
-- -------------------------------------------------------------------
-- Sort by <the column that the user selected> +
-- <the primary key of one of the base table that column comes from>
--
-- This creates duplicate records (see the result of the next SQL).
-- -------------------------------------------------------------------
BEGIN
FOR i IN 1..1200 LOOP
INSERT INTO x_temp
SELECT mID, iID, status, MatchCount, mDate, col2, col3
FROM (
SELECT x.*, rownum rn
FROM (
SELECT i.mID, iID, status, MatchCount, mDate, col2, col3
FROM i, m
WHERE i.mID = m.mID
ORDER BY mDate, mID --> will not work
) x
WHERE rownum <= 50 * i
)
WHERE rn > (1 + (50 * (i - 1)));
END LOOP;
END;
/
SELECT COUNT(*)
FROM (
SELECT COUNT(*)
FROM x_temp
GROUP BY iID
HAVING COUNT(*) > 1
);
COUNT(*)
----------
26
-- -------------------------------------------------------------------
-- Sort by <the column that the user selected> +
-- <the primary key of the base table that would make the record unique>
--
-- No duplicate records (see the result of the next SQL).
-- -------------------------------------------------------------------
TRUNCATE TABLE x_temp;
BEGIN
FOR i IN 1..1200 LOOP
INSERT INTO x_temp
SELECT mID, iID, status, MatchCount, mDate, col2, col3
FROM (
SELECT x.*, rownum rn
FROM (
SELECT i.mID, iID, status, MatchCount, mDate, col2, col3
FROM i, m
WHERE i.mID = m.mID
ORDER BY mDate, iID --> Works well
) x
WHERE rownum <= 50 * i
)
WHERE rn > (1 + (50 * (i - 1)));
END LOOP;
END;
/
SELECT COUNT(*)
FROM (
SELECT COUNT(*)
FROM x_temp
GROUP BY iID
HAVING COUNT(*) > 1
);
COUNT(*)
----------
0
However, if I sort by <the column that the user selected> +
<the primary key of the base table that would make the record unique>, I may not always be able to create a composite index on the columns if they're not on the same tables.
May 13, 2009 - 7:48 pm UTC
ok, given "The sorting should be done by <the column that the user selected> + <the primary key of the base table that would make the record unique>" - I don't really have any other inputs, short of:
do you need "on commit" refresh? Can it be "refresh fast shortly after commit" - refresh frequently, but not on commit?
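A sketch of the "refresh frequently, but not on commit" variant of the m_i_MV from the test case - the ON COMMIT clause is replaced by a START WITH / NEXT schedule (the one-minute interval here is arbitrary; the existing WITH ROWID MV logs on both tables still support the fast refresh):

```sql
CREATE MATERIALIZED VIEW m_i_MV
  BUILD IMMEDIATE
  REFRESH FAST WITH ROWID
  START WITH SYSDATE
  NEXT SYSDATE + 1/24/60   -- re-run the fast refresh every minute
AS
SELECT i.iID, m.mID, i.col2, i.col3, m.col2 m_col2,
       m.rowid m_rowid, i.rowid i_rowid
  FROM i, m
 WHERE i.mID = m.mID;
```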
mview recreation fails with ORA-32036
atul, March 29, 2010 - 6:13 am UTC
I am on 10.2.0.4 and MVIEW recreation fails with the error below.
Could you please help:
===
SQL> CREATE MATERIALIZED VIEW XXADSMV.XXADS_PIPELINE_TEST_27_MV
2 ORGANIZATION HEAP PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
3 STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
4 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
5 TABLESPACE "XXADS_DATA"
6 BUILD IMMEDIATE
7 USING INDEX
8 REFRESH FORCE ON DEMAND
9 USING DEFAULT LOCAL ROLLBACK SEGMENT
10 DISABLE QUERY REWRITE
11 as with ItemInfo as (
12 select
13 msi.organization_id,
14 msi.inventory_item_id,
15 msi.segment1 as ItemNumber,
16 msi.primary_unit_of_measure,
17 mic.category_id,
18 mc.structure_id,
19 mc.segment1 as ProductType,
20 mc.segment2 as ProductLine,
21 mc.segment3 as ProductSize,
22 mc.segment4 as ProductSeries,
23 mc.segment5 as ProductBrand
24 from apps.mtl_system_items msi
25 join apps.mtl_item_categories mic
26 on msi.organization_id = mic.organization_id
27 and msi.inventory_item_id = mic.inventory_item_id
28 join apps.mtl_categories mc
29 on mic.category_id = mc.category_id
30 join apps.mtl_category_sets mcs
31 on mc.structure_id = mcs.structure_id
32 where mcs.category_set_name = 'Inventory'
33 ),
34 jrs as (
35 select salesrep_id, name, salesrep_number, org_id
36 from apps.JTF_RS_SALESREPS
37 ),
38 OrderDetail as (
39 select ooha.attribute1 as ooha_attribute1, ooha.order_number as ooha_order_number, oola.Line_Number as oola_Line_Number, ooha.transactional_curr_code as ooha_transactional_curr_code, ooha.booked_date as ooha_booked_date,
40 ooha.price_list_id as ooha_price_list_id, ooha.header_id as ooha_header_id,
41 ooha.order_type_id as ooha_order_type_id, ooha.sold_to_org_id as ooha_sold_to_org_id,
42 ooha.org_id as ooha_org_id, oola.ship_from_org_id as oola_ship_from_org_id,trunc(oola.creation_date) as Entry_dt, oola.schedule_ship_date as ship_dt, oola.
43 ordered_item as Product, oola.line_category_code as oola_line_category_code, oola.ordered_quantity as
44 oola_ordered_quantity, oola.inventory_item_id as oola_inventory_item_id, oola.unit_selling_price as oola_unit_selling_price,
45 oola.sold_to_org_id as oola_sold_to_org_id, oola.ship_to_org_id as oola_ship_to_org_id,
46 oola.order_quantity_uom as oola_order_quantity_uom, oola.unit_list_price as
47 oola_unit_list_price, qp.name as qp_name, ottt.name as ottt_name, otta.end_date_active as otta_end_date_active,
48 otta.attribute10 as otta_attribute10, muom.unit_of_measure as muom_unit_of_measure, ItemInfo.ItemNumber as IntemInfo_ItemNumber, ItemInfo.primary_unit_of_measure,
49 ItemInfo.category_id as ItemInfo_category_id, ItemInfo.ProductType as ItemInfo_ProductType, ItemInfo.ProductLine ItemInfo_ProductLine, ItemInfo.ProductSize
50 as ItemInfo_ProductSize,ItemInfo.ProductSeries as ItemInfo_ProductSeries,ItemInfo.ProductBrand as ItemInfo_ProductBrand,cic.item_cost as Unit_Cost, rc.customer_name, gdr.from_currency, gdr.to_currency,
51 gdr.conversion_rate, jrs.salesrep_id, jrs.name as Salesperson, jrs.salesrep_number
52 from apps.oe_order_headers_all ooha
53 join apps.qp_price_lists_v qp
54 on ooha.price_list_id = qp.price_list_id
55 join apps.oe_order_lines_all oola
56 on ooha.header_id = oola.header_id
57 join apps.oe_transaction_types_tl ottt
58 on ooha.order_type_id = ottt.transaction_type_id
59 join apps.oe_transaction_types_all otta
60 on ottt.transaction_type_id = otta.transaction_type_id
61 and oola.org_id = otta.org_id
62 join apps.mtl_units_of_measure muom
63 on oola.order_quantity_uom = muom.uom_code
64 join ItemInfo
65 on oola.ship_from_org_id = ItemInfo.organization_id
66 and oola.inventory_item_id = ItemInfo.inventory_item_id
67 join apps.cst_item_costs cic
68 on oola.inventory_item_id = cic.inventory_item_id
69 and oola.ship_from_org_id = cic.organization_id
70 and cic.cost_type_id = 1
71 join apps.ra_customers rc
72 on ooha.sold_to_org_id = rc.customer_id
73 left outer join apps.gl_daily_rates gdr
74 on trunc(ooha.booked_date) = gdr.conversion_date
75 and ooha.transactional_curr_code = gdr.from_currency
76 and gdr.to_currency = 'USD' -- all money reported in USD
77 left outer join jrs
78 on oola.salesrep_id = jrs.salesrep_id
79 and oola.org_id = jrs.org_id
80 --
81 where qp.name <> 'Conversion Pricelist' -- exclude converted orders
82 and ooha.org_id in ('132','133') -- US/Canada only
83 and oola.shipment_number = 1 -- original transaction only - not line splits
84 and trunc(ooha.booked_date) is not null -- orders must be at least booked to be reported
85 -- last business day: line created OR header booked
86 and (
87 (
88 trunc(ooha.booked_date) = '27-AUG-09'
89 ) OR (
90 trunc(oola.creation_date) = '27-AUG-09' and trunc(oola.creation_date) > trunc(ooha.booked_date)
91 )
92 )
93 -- no entered or cancelled lines
94 and oola.flow_status_code <> 'ENTERED'
95 and oola.flow_status_code <> 'CANCELLED'
96 -- exclude interplant orders
97 and ottt.name not like '%Internal%'
98 -- exclude credit memos
99 --zzz and nvl(otta.end_date_active,sysdate+1) > sysdate
100 and nvl(otta.attribute10,'N') = 'Y' -- do not return credit memos
101 ),
102 OrderDetailSamDayShip as (
103 select ooha.order_number as ooha_order_number,
104 ooha.booked_date as ooha_booked_date,
105 ooha.header_id as ooha_header_id,
106 oola.Line_Number as oola_Line_Number,
107 trunc(oola.creation_date) as Entry_dt,
108 SUM(oola.ordered_quantity) as oola_ordered_quantity
109 from apps.oe_order_headers_all ooha
110 join apps.qp_price_lists_v qp
111 on ooha.price_list_id = qp.price_list_id
112 join apps.oe_order_lines_all oola
113 on ooha.header_id = oola.header_id
114 join apps.oe_transaction_types_tl ottt
115 on ooha.order_type_id = ottt.transaction_type_id
116 join apps.oe_transaction_types_all otta
117 on ottt.transaction_type_id = otta.transaction_type_id
118 and oola.org_id = otta.org_id
119 join ItemInfo
120 on oola.ship_from_org_id = ItemInfo.organization_id
121 and oola.inventory_item_id = ItemInfo.inventory_item_id
122 inner join OrderDetail OD
123 on oola.header_id = OD.ooha_Header_id
124 and oola.Line_Number = OD.oola_Line_Number
125 --
126 where qp.name <> 'Conversion Pricelist' -- exclude converted orders
127 and ooha.org_id in ('132','133') -- US/Canada only
128 and oola.shipment_number > 1 -- all shipments other than the original booked order line (shipment 1)
129 and trunc(ooha.booked_date) is not null -- orders must be at least booked to be reported
130 -- last business day: line created OR header booked ----should be the same as the date of the original booked order transaction date
131 and OD.Entry_dt = trunc(oola.creation_date)
132 -- no entered or cancelled lines
133 and oola.flow_status_code <> 'ENTERED'
134 and oola.flow_status_code <> 'CANCELLED'
135 -- exclude interplant orders
136 and ottt.name not like '%Internal%'
137 -- exclude credit memos
138 --zzz and nvl(otta.end_date_active,sysdate+1) > sysdate
139 and nvl(otta.attribute10,'N') = 'Y' -- do not return credit memos
140 Group BY ooha.order_number,
141 ooha.booked_date,
142 ooha.header_id,
143 oola.Line_Number,
144 trunc(oola.creation_date)
145 ),
146 OrderDetailCalcs as (
147 select
148 OD.ooha_booked_date,
149 apps.XXADS_COMMON_UTILS_INT.Get_Region_From_Shipto(
150 OD.oola_ship_to_org_id,
151 OD.ooha_attribute1,
152 OD.oola_sold_to_org_id,
153 decode(OD.oola_ship_from_org_id, 648, 'NON-US', 'US')
154 ) as Rgn,
155 OD.ItemInfo_ProductType as Product_Type,
156 OD.ooha_order_number as order_number,
157 OD.ItemInfo_ProductLine as Line,
158 OD.customer_name,
159 OD.salesperson,
160 OD.salesrep_number,
161 OD.Entry_dt,
162 OD.Ship_dt,
163 OD.Product,
164 (OD.oola_ordered_quantity + CASE
165 When ODSDS.oola_ordered_quantity IS NULL Then
166 0
167 Else ODSDS.oola_ordered_quantity
168 End) as qty_ord,
169 CASE
170 When ODSDS.oola_ordered_quantity IS NULL Then 0
171 Else ODSDS.oola_ordered_quantity
172 End as Qyt_Ord_ShippedSameDay,
173 OD.unit_cost,
174 OD.oola_unit_selling_price as unit_selling_price,
175 OD.oola_unit_list_price as unit_list_price,
176 OD.oola_order_quantity_uom as ordered_quantity_uom,
177 case OD.oola_line_category_code
178 when 'RETURN' then -1
179 else 1
180 end as qty_rtn_factor,
181 case OD.ItemInfo_ProductType
182 when 'Misc' then 0
183 else apps.XXADS_COMMON_UTILS_INT.get_lb_conversion_rate(
184 OD.muom_unit_of_measure,
185 OD.primary_unit_of_measure,
186 OD.oola_inventory_item_id
187 )
188 end as lbs_factor,
189 case OD.ooha_transactional_curr_code
190 when 'USD' then 1 -- USD-to-USD conversion lookup not defined
191 else OD.conversion_rate
192 end as currency_factor
193 from OrderDetail OD
194 Left Outer Join OrderDetailSamDayShip ODSDS
195 ON OD.ooha_Header_Id = ODSDS.ooha_header_id
196 AND OD.oola_Line_Number = ODSDS.oola_Line_Number
197 )
198 select
199 Rgn,
200 product_type,
201 order_number,
202 line,
203 customer_name,
204 salesperson,
205 salesrep_number,
206 entry_dt,
207 ooha_booked_date,
208 ship_dt,
209 product,
210 qty_ord * qty_rtn_factor as qty_ord,
211 qty_ord * qty_rtn_factor * lbs_factor as qty_pounds,
212 unit_selling_price * currency_factor as unit_price,
213 qty_ord * qty_rtn_factor * unit_selling_price * currency_factor as qty_ord_x_unit_price,
214 ordered_quantity_uom,
215 case
216 when product_type = 'Misc' then 0
217 when lbs_factor is null then null
218 when lbs_factor = 0 then null
219 else unit_selling_price * currency_factor * qty_rtn_factor / lbs_factor
220 end as yield,
221 unit_list_price * currency_factor as list_price,
222 case
223 when unit_list_price is null then 0
224 when unit_list_price = 0 then 0
225 else (unit_list_price - unit_selling_price) / unit_list_price * 100.0
226 end as disc_price,
227 unit_cost,
228 qty_ord * qty_rtn_factor * unit_cost as ext_cost
229 from OrderDetailCalcs
230 ;
as ItemInfo_ProductSize,ItemInfo.ProductSeries as ItemInfo_ProductSeries,ItemInfo.ProductBrand as ItemInfo_ProductBrand,cic.item_cost as Unit_Cost, rc.customer_name, gdr.from_currency, gdr.to_currency,
*
ERROR at line 50:
ORA-32036: unsupported case for inlining of query name in WITH clause
=====
April 05, 2010 - 9:51 am UTC
$ oerr ora 32036
32036, 00000, "unsupported case for inlining of query name in WITH clause"
// *Cause: There is at least one query name which is inlined more than once
// because it's definition query is too simple and references another
// query name. This is currently unsupported yet.
// *Action: remove such query name and retry
If I had to guess (and apparently I do, since we don't have a test case here to work with), I'd guess the JRS factored subquery is the culprit -
get rid of it, just refer to the table directly.
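A sketch of that suggestion, using the join columns from the original query - drop the one-line JRS factored subquery and reference the base table directly, keeping the same alias so nothing else has to change:

```sql
-- instead of:  left outer join jrs on ...
-- join the base table the JRS CTE merely wrapped:
left outer join apps.JTF_RS_SALESREPS jrs
  on  oola.salesrep_id = jrs.salesrep_id
  and oola.org_id      = jrs.org_id
```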
Sasidhar, April 22, 2010 - 1:57 pm UTC
Hi tom,
i need some help regarding materialized view refresh.
I created a materialized view on the emp table, which is available in any database.
create materialized view log on emp;
and then I created the materialized view:
create materialized view emp_mv refresh fast as select * from emp;
now when i issue the command
truncate table emp_mv;
the data from materialized view is truncated.
now when i try to
SQL> select * from emp_mv;
no rows selected
why is it that the materialized view is empty even though I created the materialized view with the refresh fast option?
my question is when i create the materialized view with refresh
April 22, 2010 - 2:02 pm UTC
tell me first why you think a segment you just truncated should have data??!?!?!?!
Sasidhar, April 22, 2010 - 2:18 pm UTC
tom
sincerely, I did not understand what you tried to convey by saying
"tell me first why you think a segment you just truncated should have data??!?!?!?! "
I am sorry for that.
my question was why is the materialized view empty and how do i refresh it.
a small question
how can i include your recent comments when i try to reply
April 22, 2010 - 2:32 pm UTC
Sasidhar
You said you performed the following:
create materialized view log on emp;
create materialized view emp_mv refresh fast as select * from emp;
truncate table emp_mv;
select * from emp_mv;
no rows selected
why is it that the materialized view is empty
You just truncated the segment called EMP_MV. You removed all data from it, you wiped it out, you cleared it, you removed all contents, you (I'm running out of ways of saying it)
You truncated it, that - in a word - is why it is empty when you query it.
You obviously will not be able to incrementally maintain it (refresh fast, just apply changes) since you wiped out the data - leaving NOTHING to apply changes to.
You will have to full refresh that guy to get back into business.
Why did you truncate it?
Why did you think that after truncating it you would see anything in it?
How do you think this could be incrementally maintained after truncating it?
how can i include your recent comments when i try to reply
I use cut and paste, but don't over do it, just get what is relevant.
Sasidhar, April 22, 2010 - 3:04 pm UTC
Why did you truncate it?
i truncated to learn about materialized views
Why did you think that after truncating it you would see anything in it?
because i read that refresh fast will refresh the data
How do you think this could be incrementally maintained after truncating it?
hmm i was testing the refresh fast keyword.
tom, is it a must to create a materialized view log when we want the data in the materialized view to be refreshed, or only when we create a materialized view with the refresh fast option? if yes, what if the data in the materialized view contains aggregate functions?
i have succeeded in re populating the materialized view by issuing
SQL>begin
dbms_mview.refresh('emp_mv');
end;
/
thank you
April 22, 2010 - 4:33 pm UTC
Why did you truncate it?
i truncated to learn about materialized views

why would you think to truncate a materialized view? why? what prompted you to do that? how would that help you learn about them?
(I'm serious, I cannot imagine doing that sequence of events - not to learn about MV's, not for nothing, it would not enter my mind)
I'm curious about the thoughts here - I cannot imagine doing that.
Why did you think that after truncating it you would see anything in it?
because i read that refresh fast will refresh the data

well, did you refresh it? You didn't create a self-refreshing one, you'd have to do it - and then you would discover "hey, it cannot fast refresh" (please think about it - how could it? you just blew it *away*)
And where did you see any examples ANYWHERE whereby someone truncated a materialized view? You basically do not touch them with anything other than SELECT - you query them, we do the rest.
How do you think this could be incrementally maintained after truncating it?
hmm i was testing the refresh fast keyword.

please close your eyes and envision how something you just "blew away" could have changes applied to it?
tom, is it a must to create a materialized view log when we want the data in the materialized view be refreshed

No, they are used only for incremental (also known as 'fast') refreshes. If you want to do a complete refresh (like you just did - since you blew it away) you don't need them.

or when we create a materialized view with refresh fast option.

Yes, you would need it then.

if yes, what if the data in the materialized view contains aggregate functions.

we can incrementally refresh many of those.
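For reference, a minimal sketch (against the classic SCOTT.EMP table) of what an aggregate fast-refreshable MV typically needs - the `including new values` clause on the log and the extra `count(*)`/`count(col)` bookkeeping columns are what make incremental maintenance possible:

```sql
-- log must capture the columns used by the MV, plus new values
create materialized view log on emp
  with rowid (deptno, sal)
  including new values;

create materialized view emp_sal_mv
  refresh fast on demand
as
select deptno,
       count(*)   cnt,      -- required bookkeeping column
       count(sal) cnt_sal,  -- required alongside sum(sal)
       sum(sal)   sum_sal
from   emp
group  by deptno;
```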
Suggest you read:
http://docs.oracle.com/docs/cd/E11882_01/server.112/e10810/toc.htm chapters 8 and 9 from start to finish - and then play around... The concepts are all there, the rules are there, the examples are there...
MV
A reader, April 22, 2010 - 7:28 pm UTC
Creating materialized view taking more than 24 hr.
manish, May 04, 2010 - 7:34 am UTC
Hi tom,
Earlier the materialized view took 6 hrs to create.
Now it's taking more than 24 hrs...
Please suggest...
I am using "REFRESH FORCE ON DEMAND" while creating the materialized view
May 06, 2010 - 12:56 pm UTC
why are you creating it again?
You do realize, this question falls squarely in the "my car won't start, why not" category given the "level of detail" you haven't provided....
Sasidhar, May 05, 2010 - 1:14 pm UTC
Manish
refresh on demand means rebuild the mview if there is any dml issued on the master tables. so its quite obvious that the data on the master tables is not getting changed
May 06, 2010 - 2:07 pm UTC
no, it does not
refresh on demand means to refresh the materialized view when TOLD TO refresh the materialized view.
Not based on DML activity, but based on being TOLD to refresh it.
Materialized views with Large data set
sam, May 20, 2010 - 11:07 pm UTC
Tom,
I have a single table with 30 columns and 200 million records.
The data in the table will be used for viewing on the web application.
My approach is to create a materialized view (MV) with all the aggregations and have the application built on the MV.
On a daily frequency nearly 1 million records are received from ODS.
If I utilize the fast refresh option to refresh the data incrementally on the materialized view, do you think performance degrades while the application is selecting (view only) the data?
Do you suggest any better way of creating the MV, with an optimal refresh option?
Thanks
May 24, 2010 - 12:32 pm UTC
... My approach is creating a Materlialized view with all the aggregations and have
the application build on materlialized view(MV).
...
the approach should be:
a) build mv's
b) app queries details
c) we rewrite query against mv for you
so that MV's work like an index - transparently, so you can drop and add them over time without affecting the application code.
when you refresh, your computer will be replaying those 1,000,000 row changes in a session, the queries will be affected by the fact that that is happening (it will consume CPU, memory and IO - resources your queries are using). In addition to that (the increased workload) you might see some of your queries taking more logical IO than before, this is a natural side effect of multi-versioning/read consistency (the alternative would be "queries stop while update takes place", which is not acceptable).
So, you would probably see increased response times, almost entirely due to the fact that you are doing something extra on that machine - consuming resources that would not otherwise be consumed by that process.
I would benchmark it to see if you have anything to even consider being concerned about.
Materialized views with Large data set
Sam, May 24, 2010 - 10:09 pm UTC
Tom,
Thanks for the response
I would appreciate it if you could clarify:
a) build mv's
-- I will build mv's based on the detail table
b) app queries details
-- The table is populated daily
c) we rewrite query against mv for you
**** The queries are only required to show the aggregate information, never the details.
So If I point the application to the details tables (b), the Query Rewrite functionality will point to the materialized view (a) ????
Also, I would appreciate it if you could clarify:
Does the 'fast refresh' option in a materialized view (MV) only apply the incremental data to the MV, or does it refresh the entire MV?
Response time is very important. The application displays the data for only one record.
Which means the search criterion is the ACCOUNT_NO, so the data related to ONE account number is displayed.
Thanks again
May 25, 2010 - 7:26 am UTC
... So If I point the application to the details tables (b), the Query Rewrite
functionality will point to the materialized view (a) ????
..
yes, that is exactly the point, materialized views act like indexes in that you add them and the optimizer transparently uses them when appropriate - just like an index.
fast refresh SHOULD HAVE BEEN called 'incremental refresh'. It is an incremental, it will apply the changes to the existing data, it is NOT complete, it will not completely rebuild the entire MV
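A sketch of the transparent-rewrite behavior described above (the table and column names here are hypothetical, not from the thread):

```sql
-- the MV is created rewrite-enabled...
create materialized view acct_summary_mv
  enable query rewrite
as
select account_num, sum(balance) sum_balance
from   account_detail
group  by account_num;

-- ...the application keeps querying the DETAIL table, and the
-- optimizer may transparently answer from the summary instead:
select account_num, sum(balance)
from   account_detail
group  by account_num;
```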
Materialized views with Large data set
Sam, May 25, 2010 - 11:40 pm UTC
Thanks a lot.
MVIEWS refresh using cursor locks
Mohamed Badri, May 27, 2010 - 4:01 am UTC
Hi Tom,
Here is a question about concurrent MVIEW refreshes using locks (a for update cursor).
We are using Oracle 10g EE 10.2.0.4.0 and we have two materialized views MV1 and MV2 refreshed (complete on demand) by a stored procedure (given below).
This procedure is called by different sessions/transactions (at the same time), that’s why we are using a for update cursor 'lock' (mylock_MV1).
The MV1 refresh takes about 100 seconds.
MV1 is queried by MV2 where MV2 is the only Materialized View queried by Front Office (FO) packages. These packages have to check the MV2 lock ('FO_PKG_MV2') before querying it to be sure that the MView is not being refreshed.
During procedure tests, we discovered that in some cases MV2 doesn't contain any data.
dbms_output.put_line before and after every lock shows us that the for update cursor mylock_MV1 is opened at the same time (without waiting) by "concurrent procedure calls"!
In our case, a second procedure call starts the MV1 refresh just before the first call starts the MV2 refresh.
Really strange! We have already used locks this way with tables and it worked very well.
Any solution ? any idea ?
Thanks.
Procedure MV_refresh is
cursor mylock_MV1 is
select * from lock_table where lock_col = 'MV1_refresh' for update;
onelock_MV1 mylock_MV1%ROWTYPE;
cursor mylock_MV2 is
select * from lock_table where lock_col = 'FO_PKG_MV2' for update;
onelock_MV2 mylock_MV2%ROWTYPE;
ct_MV1 number;
begin
dbms_output.put_line(to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') || ' before open lock_MV1 ');
open mylock_MV1;
fetch mylock_MV1
into onelock_MV1;
dbms_output.put_line(to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') || ' after open lock_MV1');
DBMS_MVIEW.REFRESH('MV1', 'C', '', TRUE, FALSE, 0, 0, 0, FALSE);
dbms_output.put_line(to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') || ' refresh MV1 done ');
select count(*) into ct_MV1 from MV1;
if (ct_MV1 != 0) then
dbms_output.put_line(to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') || ' before open lock_MV2');
open mylock_MV2;
fetch mylock_MV2
into onelock_MV2;
dbms_output.put_line(to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') || ' after open lock_MV2');
DBMS_MVIEW.REFRESH('MV2', 'C', '', TRUE, FALSE, 0, 0, 0, FALSE);
-- MV2 depends on MV1 and it is the only MV called by FO packages
dbms_output.put_line(to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') || ' refresh MV2 done');
close mylock_MV2;
end if;
close mylock_MV1;
end MV_refresh;
May 27, 2010 - 7:34 am UTC
.. This procedure is called by different sessions/transactions (at the same time), that’s why we are using a for update cursor 'lock' (mylock_MV1). ...
what?!?!?!?!
Umm, why?
I'd rather fix the design issue you have here rather than look at a problem induced by a design issue.
tell me what you are doing (why you are doing what you are doing even more so)
MVIEWS refresh using cursor locks
Mohamed Badri, May 27, 2010 - 8:59 am UTC
Oh! :) Yes, if you want, but I do not think it's a design problem. Here, I've simplified the names of views, of cursors, the number of instructions... and I understand that one can wonder about the usefulness of all this.
No matter the need. The question is not there.
But I must correct: not "is called" but "can be called" at the same time.
These calls are made by programs that, after insertions, call the procedure MV_REFRESH to update the views. These procedure calls can be concurrent.
Whatever the topic, I do not understand how locks with a "for update cursor" work very well when we want to handle concurrent access to tables, yet do not work when we try to do the same thing with MViews (using dbms_mview.refresh)!
Or put otherwise: how is it possible to open a "for update cursor" that is already open on the same row?!
May 27, 2010 - 10:32 am UTC
... These calls are made by programs that, after insertions, call the procedure MV_REFRESH to update the views. ...
still sounds... LIKE A DESIGN ISSUE (a problematic design issue)
sounds like these procedures are batch jobs, and it sounds like they should be in a job chain so that when they all complete - the next step in the chain is "refresh"
So, prove me wrong first, I won't really look at this until I believe "it is the only and correct approach".
(hint: dbms_mview.refresh is a lot like DDL, what does DDL do.... it commits, hence releasing all locks. I could tell you how to get around that - there are two simple ways - if you know what they are - go for it. I'll hold onto that till I'm more convinced this is a "good idea")
when you select for update, we run the entire query to completion and touch every row the cursor hits.
select count(*) into ct_MV1 from MV1;
if (ct_MV1 != 0) then
ouch, I always dislike that construct.
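Tom leaves the "two simple ways" unstated; one commonly cited possibility (an assumption here, not his stated answer) is a DBMS_LOCK user lock, which, unlike a `select ... for update` row lock, is not released by the commits that dbms_mview.refresh performs when `release_on_commit` is false:

```sql
declare
  l_handle varchar2(128);
  l_status integer;
begin
  -- allocate a named user lock; it survives the implicit commits
  dbms_lock.allocate_unique('MV1_refresh', l_handle);
  l_status := dbms_lock.request(l_handle, dbms_lock.x_mode,
                                timeout           => dbms_lock.maxwait,
                                release_on_commit => false);
  if l_status in (0, 4) then   -- 0 = success, 4 = already held
    dbms_mview.refresh('MV1', 'C');
    l_status := dbms_lock.release(l_handle);
  end if;
end;
/
```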
DBMS_MVIEW refresh
Michael Tefft, May 28, 2010 - 6:42 am UTC
Did you notice that your call to DBMS_MVIEW.REFRESH:
DBMS_MVIEW.REFRESH('MV2', 'C', '', TRUE, FALSE, 0, 0, 0, FALSE);
specifies (as the 9th parameter) ATOMIC_REFRESH=FALSE?
This is what ATOMIC_REFRESH does:
atomic_refresh
If this parameter is set to true, then the list of materialized views is refreshed in a single transaction. All of the refreshed materialized views are updated to a single point in time. If the refresh fails for any of the materialized views, none of the materialized views are updated.
So, if you are trying to prevent inconsistent results when querying the MV (such as the zero rows that you mention), you may be able to accomplish that by using ATOMIC_REFRESH=TRUE instead of playing around with locks.
The refresh may take longer (with ATOMIC_REFRESH=FALSE, DBMS_MVIEW can do a TRUNCATE, but can not if TRUE), but you should not have to worry about incomplete results.
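Michael's suggestion as a call sketch - the only change from the original procedure is the last (atomic_refresh) parameter:

```sql
-- atomic: delete + insert in one transaction; readers never see an empty MV
DBMS_MVIEW.REFRESH('MV2', 'C', '', TRUE, FALSE, 0, 0, 0, TRUE);

-- non-atomic (the original): truncate + insert; faster, but the MV is
-- briefly empty and a concurrent query can see zero rows
DBMS_MVIEW.REFRESH('MV2', 'C', '', TRUE, FALSE, 0, 0, 0, FALSE);
```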
May 28, 2010 - 9:02 am UTC
but even still, dbms_mview commits with or without atomic refresh, losing the serialization locks they thought they had
still falling into category of "bad design, let's fix the cause not the symptom"
DBMS_MVIEW refresh
Michael Tefft, May 28, 2010 - 9:36 am UTC
I agree the design is a problem.
I was not aiming to 'make the locks work'. My design suggestion was: don't play around with locks. DBMS_MVIEW already provides the protection desired (if you don't override the default).
DBMS_MVIEW refresh
Mohamed Badri, May 31, 2010 - 10:42 am UTC
Hi Michael,
Thank you for reading my procedure.
>>Did you notice that your call to DBMS_MVIEW.REFRESH:
>>DBMS_MVIEW.REFRESH('MV2', 'C', '', TRUE, FALSE, 0, 0, 0, FALSE);
>>specifies (as the 9th parameter) ATOMIC_REFRESH=FALSE?
Yes I know it, but ATOMIC_REFRESH was set to FALSE to get better performance during MV refreshing and querying. It was our first issue.
Hi Tom, thank you for your reply.
You gave me the most important part of the answer. I didn't know that locks could be released by DBMS_MVIEW.REFRESH!
But I am always interested in an alternative.
At the same time, I do not understand why you would like more details about the design! My question remains technical!
This is all what can I say about "design":
We are almost in a real-time environment (without using specific hardware/software). Data should be available from the moment of insertion. It would have been easier if we were in a batch case. :)
The MVIEW was a query requested by FO (Interface). Given the number of users, more and more important, and the size of the DB, we decided to transform this query in materialized view. This query is not really fast, we compute "history" (first use date, last use date, number of use) of all existing "items" (house, car, screw, document, file, or any other things ...) in our DB.
At the BO (Back Office) layer, the DB receives thousands of inserts per day. We have a "tool" (program) for managing all these incoming data. This tool (after transforming the data) calls many procedures (organized in packages). After that, the same program calls the MV_refresh procedure (and once again, we cannot do it in batch mode (user expectations)). To accelerate this process, we launch between 10 and 20 instances of this tool.
Hence concurrent calls (of MV_refresh procedure) !!!
And for information, we have others tools (programs) which compute aggregates/OLAP cubes/reporting in batch mode. But they are unrelated to our problem.
For some reasons, I can't give more details!
So, I hope these explanations have attracted your interest.
>>select count(*) into ct_MV1 from MV1;
>>if (ct_MV1 != 0) then
>>ouch, I always dislike that construct.
I can understand , it is temporary.
Thank you.
Mohamed.
June 08, 2010 - 7:47 am UTC
At the same time, I do not understand why you would like more details about the design! My question remains technical!
I give a presentation entitled "why why is probably the right answer"
Just because you ask a technical question does not mean I should blurt out the answer. You might (probably are in my experience) asking the wrong question.
Take this scenario:
question: how do I turn on this band saw.
answer: <what should go here>
The answer should not be "turn on the red switch on the left hand side of the machine" - especially if that is how to turn it on. The answer should be "why". How much damage could someone do if they are unfamiliar with that tool.
question: how do I start this car.
answer: WHY?
question: how do I create a table in a stored procedure
answer: WHY??? (they are asking because they are a sqlserver developer trying to "port" to oracle. Sure you could tell them HOW to do it, but you would be doing a HUGE HUGE disservice to them wouldn't you)
and so on, and so on, and so on.
... Data should be available from their insertions. It would have been easier if we were in a batch case. :) ...
then by definition you cannot use atomic refresh => false can you. The data DISAPPEARS - poof - goes away - and then magically comes back later.
You haven't yet told anyone WHY you need to have this MV refreshed by many processes, at about the same time.
....I can understand , it is temporary. ...
hahahahahah - I fell out of my chair. There is no such thing as temporary. It'll be there after you and I are both long gone - we know that.
USING DEFAULT LOCAL ROLLBACK SEGMENT
baskar, June 04, 2010 - 10:09 am UTC
Hi,
DB Version is 10.2.0.4
Recently I noticed an MV refresh taking varying amounts of time to complete. The MV refresh starts at 9pm and completes sometime at 1am or 12am, or is sometimes very fast, finishing by 10pm. The number of records is almost the same. There are no locks in the db during that time. Opening the MV script I could see
REFRESH COMPLETE
WITH ROWID
USING DEFAULT LOCAL ROLLBACK SEGMENT
DISABLE QUERY REWRITE AS
Using default local rollback segment - is it the allocation of the rollback segment that causes this?
I referred to the manual:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_6002.htm - where it says this clause is not valid if the DB is in automatic undo mode.
thanks,
baskar.l
June 09, 2010 - 7:44 am UTC
that using rollback segment is not relevant, it is not the cause.
... There are no locks in the db
during that time. ...
but - are there other THINGS happening in the database...
If you have access to ASH/AWR, you can see what this process was doing and what it was waiting for - historically.
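A sketch of that kind of historical look (the view and columns are from the licensed AWR repository; the time window and module filter are placeholders to adapt):

```sql
-- what was the refresh session doing, and waiting on, during the window?
select sample_time, session_state, event, sql_id
from   dba_hist_active_sess_history
where  sample_time between timestamp '2010-06-08 21:00:00'
                       and timestamp '2010-06-09 01:00:00'
and    module like '%Refresh%'   -- placeholder filter for your session
order  by sample_time;
```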
DBMS_MVIEW refresh
Mohamed Badri, June 09, 2010 - 5:53 am UTC
I never said there was no link between the nature of the problem and the technical solution. We are faced with it every day ...
I said MY question is technical! "Why can we not hold a lock across a dbms_mview.refresh" - and you've already answered: "dbms_mview.refresh is a lot like DDL", which surprised me and still amazes me...
You haven't yet told anyone WHY you need to have this MV refreshed by many processes, at about the same time.
I invite you to reread my post.
'hahahahahah - I fell out of my chair. There is no such thing...'
I fell out my chair too. I had never seen such a response.
The quality of exchange is beyond my capabilities.
And I think I'll stop at that level.
Thank you anyway.
June 10, 2010 - 11:48 am UTC
sorry, but when someone says:
I know the code is bad, what it is doing is wrong, but it is "temporary"
I have to laugh, really I do, I can count using neither of my hands the number of times I've seen someone fix that "temporary" thing. It becomes "the legacy code", then "the way we do it", then "standard operating procedure" and then finally "this is our idea of a best practice, we've been doing it for years".
So, yes, I fell out of my chair.
And yes, I've re-read your questions and still - I have found no solid (or fuzzy) technical reason for what you are doing - just the assertion that "we must do this, how can we"
To which I will constantly answer "why". The approach is flawed, you know it (you said as much), I know it, I would like to understand the entire thing you are trying to do - not just this one tiny piece of it. The entire process is flawed and must be looked at.
I encourage you to re-read your posting and tell me where you ever told anyone why this flow of code "is the only and best way to do this". That is what I've been trying to get at all along - *why* are you flowing the code this way, it is flawed (it isn't going to work, you know that), we need to find a better "flow"
Materialized view with large data set
Sam, June 09, 2010 - 10:23 am UTC
Tom,
I have large data (200 million records) and would like to use analytic functions to create the materialized view with the FAST REFRESH option, as a complete refresh takes more time.
A limitation of fast refresh is that you cannot use analytic functions - am I correct?
Is there any workaround to aggregate the data using analytic functions with incremental refresh?
we have 11gR2
Thanks
June 10, 2010 - 12:07 pm UTC
you cannot incrementally refresh with analytics - think about the complexity of the mapping between the source data and the post analytic data.
select a,b,c, row_number() over (partition by a order by b) rn
from t
Now, think about what happens when you insert a new row, many rows UNRELATED to the new row must be updated - it just isn't currently feasible.
or
select a, b, c, sum(x) over (order by a,b,c) from t;
think about that one - every time you insert a single row you have to incrementally maintain anywhere from no existing rows to every single existing row. just not feasible or practical.
To Mohamed
Alexander, June 10, 2010 - 12:05 pm UTC
Do any of these things sound familiar?
"We can't change the code it's too much work..."
or
"...it will cost too much..."
or
"...it will require too much testing..."
or
"...it will take too much time..."
etc?
That's what Tom's getting at. Once you develop some junk just to get it "working", you will be maintaining that junk for many years, and it will require far more of your time than doing it right in the first place would have.
June 10, 2010 - 2:46 pm UTC
exactly, that is why I laughed very hard when I saw 'temporary'
Here is an example:
I fly out of Dulles International Airport outside of Washington DC frequently. I fly United. This is a hub for United, United grew fast at Dulles and needed more terminal space.
So, they built the C and D "temporary terminals". They were to be temporary terminals (and they look it, beat up, not pretty, bad air conditioning, low ceilings, no good food places, etc).
That was in 1983.
I still fly out of C and D to this day. They are still "temporary". They have plans still for the grand United concourses - maybe my grand children will get to use them.
Temporary...
Materialized view with large data set
Sam, June 11, 2010 - 1:51 pm UTC
Thanks for the info.
I agree with your point about the analytic function problem in materialized views.
I am thinking of creating a summary table (200 million records) that is updated on a daily frequency with the delta records (approximately 1 million every day).
This way I can avoid a materialized view.
Please let me know if my thought process is viable.
Thanks
June 22, 2010 - 7:46 am UTC
I don't see how this would happen, it is not clear what you are doing - one would need a lot more detail to say "sure, that'll work"
renaming a database and mview log registration issue
Ofir, June 20, 2010 - 3:25 am UTC
Hi Tom
I've copied two production databases to a test environment (using hot backup & restore) and changed their names (the global_name is also changed).
Now, if I look at dba_registered_snapshots, I see that the production db name still appears in the SNAPSHOT_SITE column.
I did a fast refresh in the test environment and it worked well. How come? How are the snapshot logs cleaned if the SNAPSHOT_SITE is different from the new database name?
Where do I see the relation between the SNAPSHOT_SITE and the database it relates to?
The database links have the same names as production but now point to the new test database (also, there are no connections between the test and prod environments).
thanks
Ofir
Materialized view with large data set
A reader, June 22, 2010 - 9:38 pm UTC
Tom,
Following is the scenario:
Table A 200 million records (representing aggregate data)
Table B Staging table 3 million records
Table C working table
Step 1: Match the data from A and B on, say, the ACCOUNT_NUM column
Step 2: Populate C from step 1 (say 3 million rows)
Step 3: Select data from C, applying analytic functions, and update table A.
Please let me know if it does not make sense
Thanks
June 24, 2010 - 6:24 am UTC
I have no idea, I don't know what you are trying to do really.
does it make sense to join A to B by account_num and populate C and then select from C to update A?
Only if you cannot for some reason just:
merge into a
using (select a.columns, b.columns from a, b where a.account_num(+) = b.account_num and ... ) X
on (a.account_num = x.account_num)
when matched then update
when not matched then insert
or something similar.
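Filled out with hypothetical column names (account_num, bal), the skeleton above might look like this - a simplified sketch, not the poster's actual tables:

```sql
merge into a
using ( select b.account_num, b.bal
          from b ) x
on ( a.account_num = x.account_num )
when matched then
   update set a.bal = a.bal + x.bal          -- aggregate the delta in
when not matched then
   insert (account_num, bal)
   values (x.account_num, x.bal);            -- brand new accounts
```

One pass over A, no intermediate working table C needed.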
A reader, June 28, 2010 - 6:56 am UTC
Ref. Materialized views and analytic functions.
What might help is to create a materialized view with a structure that is optimized for answering the analytic function as quickly as possible.
For the query:
select a,b,c, row_number() over (partition by a order by b) rn
from t
... you might create a materialized view with just the a, b and c columns of t, using the ON PREBUILT TABLE option to register a prebuilt index organized table with (a,b) as the composite primary key (if they are unique, of course).
Not always do-able, but often worth considering.
An index on (a,b) and optionally on (a,b,c) might be just as good.
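A sketch of the prebuilt-IOT idea, using the t(a,b,c) example from above and assuming (a,b) is unique:

```sql
-- build an index-organized table holding just the columns the analytic
-- query needs, ordered the way the window function wants them
create table t_iot
( a number,
  b number,
  c number,
  constraint t_iot_pk primary key (a, b) )
organization index;

insert into t_iot select a, b, c from t;

-- register it as a materialized view over t; the MV takes the
-- prebuilt table's name
create materialized view t_iot
on prebuilt table
as
select a, b, c from t;
```

The data then arrives at the analytic query already sorted by (a,b), which is exactly the order row_number() over (partition by a order by b) consumes it in.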
materialized view with a user defined function is not fast refreshable ?
york, July 02, 2010 - 12:43 am UTC
Hi Tom,
I found that a materialized view with a user-defined function is not fast refreshable, even when the function is declared as deterministic. Is this a known issue? Any workaround?
My DB version is Oracle Database 10g Enterprise Edition Release 10.2.0.1.0, and the parameter compatible = '10.2.0.1.0'.
Below are the detailed scripts to reproduce the issue:
-----------------
create table test1 ( emp_id int primary key, emp_name varchar2(50), salary number(10) );
create materialized view log on test1 with sequence , rowid (emp_id, emp_name, salary ) including new values;
CREATE OR REPLACE function YZHANG.my_upper(s1 IN varchar2 ) return varchar2
DETERMINISTIC is
begin
return upper(s1);
end;
/
create materialized view test_mv1
refresh fast
on demand
as
select my_upper(emp_name) as emp_name,
sum(salary) as duration,
count(*) as cnt
from test1
group by my_upper(emp_name)
;
ORA-12015: cannot create a fast refresh materialized view from a complex query
If I use upper() function instead of my_upper(), the above Mview is fast refreshable.
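As the question notes, the same MV is fast refreshable with the built-in. So one workaround, when the function's logic can be expressed with built-ins (as in this example), is to push the built-in into the defining query - a sketch, also adding count(salary), which fast refresh of a SUM typically requires:

```sql
create materialized view test_mv1
refresh fast
on demand
as
select upper(emp_name) as emp_name,
       sum(salary)     as duration,
       count(salary)   as cnt_salary,  -- count of the summed column
       count(*)        as cnt
from test1
group by upper(emp_name);
```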
Many thanks for your support!
materialized view with large data
Reader, July 12, 2010 - 12:05 am UTC
merge into a
using (select a.columns, b.columns from a, b where a.account_num(+) = b.account_num and ... ) X
on (a.account_num = x.account_num)
when matched then update
when not matched then insert
How is the performance of using the above merge on a big table?
Table A is said to be around ~200 million records.
Thanks
July 19, 2010 - 10:22 am UTC
the performance is said to be "as good as possible" I guess.
without understanding:
a) what your expectations are (my good might be your really bad or vice versa)
b) how many indexes this might have to maintain
c) how many rows are expected to be modified versus inserted
d) how big B is (if b is empty, this will be really really fast. If b has a billion rows, it will be slower than if b were empty..)
etc.
Alexander, July 19, 2010 - 10:55 am UTC
Tom,
Can you control what tablespace your mv uses to refresh in? E.g. temp vs user tablespace? I want it to use temp but it is not. So I get false alarms for space when it shoots up during the refresh. I don't know which of the 20 storage parameters controls this. I read them all in the documentation but didn't see one that affects this.
July 19, 2010 - 2:11 pm UTC
your mv will refresh of course into the tablespace the MV is in - temp space needed for big sorts or anything like that comes from the temp tablespace. Not sure what you mean - what other "tablespace" would be used?
materialized view partition
Sasi, August 24, 2010 - 2:48 am UTC
Hi Tom,
I'm back with another question which might sound pointless.
Can I partition a materialized view, i.e. can we create a materialized view with partitions based on range, etc.?
August 26, 2010 - 10:03 am UTC
MVIEW in a Distributed DB
A reader, October 06, 2010 - 10:25 pm UTC
Hello Tom,
I need to create a MVIEW in a distributed DB.
Say I have the famous tables dept and emp (with all constraints, PK, FK) in the site1 DB -- db-a, in schema sch-a.
It also has sales table which references empno as FK.
This table is specific to each site so data in sales@site1 will be different to data in sales@site2.
site2 has dblinks to access site1 tables.
I want to create MVIEW dept and emp in site2 DB-- db-b in schema sch-b
The requirement is one directional replication from site1 to site2
So whenever any DML activity happens on the dept or emp table in the site1 DB -- db-a, it should be propagated to the remote site. But remember there is sales@site2 which has an FK on empno referencing the emp MV in site2. (I was thinking of putting on delete cascade on it.)
It need not be instantaneous. But when refresh of MV is done manually it should get in sync with site1
What kind MVIEW and refresh should be done for site2 DB - db-b
In site2, even the site2 MV will have an FK constraint on the emp table through deptno.
Is it possible to make bi-directional replication ?How ?
October 07, 2010 - 2:31 am UTC
This might not work so well - you have no control over how we will refresh the materialized view on site2, we will do it the way we want to do it.
You have this table sales that has a foreign key to the mview. That foreign key could prevent us from being able to refresh the mview. If you have an EMP record in that table (sales) that the other site decided to delete from EMP - we will not be able to refresh. You will break the refresh. Think long and hard about this - replication in the year 2010 for a transactional application is in my opinion one of the biggest mistakes you could make. The database is powerful enough to handle all of the workload in one location and networks are more than capable of being there - replication like this is not a smart approach.
Yes, it can be bi-directional - but that would be compounding this error, it would make it even worse than you are already planning on making it. I'll leave it to the documentation for the "how", we do fully document this stuff.
Do not implement replication lightly - it is a really bad idea. It makes everything more complex and more fragile. I strongly encourage you to rethink this.
MVIEW in a Distributed DB Forgot -Version 10g
A reader, October 06, 2010 - 10:26 pm UTC
MVIEW in a Distributed DB Follow up
A reader, October 07, 2010 - 9:10 am UTC
Thanks Tom I hear you. And I agree.
OK, how would we do the MV and its refresh if we assumed
there will be only inserts on site1 into the dept and emp tables - no deletes, no updates?
What would the implementation look like for a one-directional refresh from site1 to site2?
I tried doing a complete refresh and it fails due to an integrity constraint violation. I guess it does a delete-all then insert for a complete refresh.
I am new to MVs. Will an incremental refresh of the dept and emp MVs work on site2? Any example of how to create an incremental refresh?
I am doing this just as a test case to understand concepts of Mview, not for production implementation
October 11, 2010 - 10:29 am UTC
Ok How would we do the MVIEW and it refresh if we assumed
I would start by reading the documentation to see how materialized views work - they are very straightforward and pretty easy to set up. Their syntax is simple DDL.
... I tried doing a complete refresh and it fails due to integrity constraint
violation.I guess it does a delete all then insert for complete refresh. ...
a truncate PLUS a direct path insert in 9i and before, a delete plus insert for 10g and above is the refresh mechanism for a complete refresh.
You would need to set all of your constraints to DEFERRABLE - the mview refresh process starts by issuing "set constraints all deferred" so that they have to be true at COMMIT time, but during the refresh transaction itself - they can be violated.
An incremental refresh will have the same requirement - that of deferrable constraints - since you cannot control HOW we refresh - only that we do refresh and when the commit comes - the data integrity will be "there". During the refresh, all bets are off.
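The deferrable-constraint setup Tom describes can be sketched like this, using the sales/emp names from the question:

```sql
-- recreate the foreign key so it can be violated DURING the refresh
-- transaction and is checked only at commit time
alter table sales drop constraint sales_emp_fk;

alter table sales
  add constraint sales_emp_fk
  foreign key (empno) references emp (empno)
  deferrable initially immediate;

-- the refresh process issues "set constraints all deferred" itself, so
-- the constraint only has to hold when the refresh transaction commits.
```

The constraint name sales_emp_fk is hypothetical; the key point is the DEFERRABLE attribute, which cannot be added to an existing non-deferrable constraint - it must be dropped and recreated.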
excellent link
meenal, October 11, 2010 - 12:49 am UTC
I am a fresher, learning Oracle admin. I found these Q&As more helpful than just studying from books and PDFs - I had a lot of practical issues that I could solve with the help of these threads. Excellent... Thanks...
materialized view performance
A reader, November 01, 2010 - 7:28 am UTC
Hi Tom,
An MV on three base tables - 30M, 10M and 10M rows, with aggregation resulting in 2.5M rows - takes around 2.5 hours with PCT refresh, while loading the data into a work table for a partition and exchanging the partition with the final table takes around 10 minutes.
If the performance difference is so large, then what other factors should be considered to justify the MV?
Thanks
November 01, 2010 - 3:36 pm UTC
insufficient data - you don't tell us how you are doing things - sounds like you might be doing incremental refreshes?
the goal of the MV is to be 'transparent', like an index, at runtime. If you do not use an MV (even if on a prebuilt table that you maintain yourself), you are missing out on the big thing about MV's
MV Refresh
A reader, November 02, 2010 - 8:02 am UTC
Hi Tom,
The Script is:
(All base tables are partitioned (range) on monthkey)
CREATE MATERIALIZED VIEW MV
TABLESPACE TBS1
NOLOGGING
PARTITION BY RANGE (monthkey)
(
PARTITION mv_201001 VALUES LESS THAN (201002)
NOLOGGING
NOCOMPRESS
TABLESPACE TBS1,
PARTITION mv_201002 VALUES LESS THAN (201003)
NOLOGGING
NOCOMPRESS
TABLESPACE TBS1,
PARTITION mv_201003 VALUES LESS THAN (201004)
NOLOGGING
NOCOMPRESS
TABLESPACE TBS1,
PARTITION mv_201004 VALUES LESS THAN (201005)
NOLOGGING
NOCOMPRESS
TABLESPACE TBS1,
PARTITION mv_201005 VALUES LESS THAN (201006)
NOLOGGING
NOCOMPRESS
TABLESPACE TBS1,
PARTITION mv_201006 VALUES LESS THAN (201007)
NOLOGGING
NOCOMPRESS
TABLESPACE TBS1
)
NOCACHE
NOCOMPRESS
PARALLEL ( DEGREE DEFAULT INSTANCES DEFAULT )
BUILD IMMEDIATE
USING INDEX
TABLESPACE TBS1
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT A.MONTHKEY MK,
B.MONTHKEY MK1,
C.MONTHKEY MK2,
A.CATEGORY,
B.CENTRE,
A.PRODUCT,
A.ACCOUNT,
A.CATCODE,
C.CUSTTYPE,
C.CUSTSUBTYPE,
SUM (A.CATAMOUNT) AVALUE,
COUNT (A.CATAMOUNT) CATCNT,
COUNT (A.MONTHKEY) AVOLUME,
COUNT ( * ) TVOLUME
FROM A,
B,
C
WHERE A.ACCOUNT = B.ACCOUNT
AND A.MONTHKEY = B.MONTHKEY
AND A.CUSTOMER = C.CUSTOMER
AND A.MONTHKEY = C.MONTHKEY
GROUP BY A.MONTHKEY,
A.CATEGORY,
B.CENTRE,
A.PRODUCT,
A.ACCOUNT,
A.CATCODE,
C.CUSTTYPE,
C.CUSTSUBTYPE,
B.MONTHKEY,
C.MONTHKEY
PCT is used for each subsequent month's data refresh in the MV.
Version: 10gR1
Thanks
November 02, 2010 - 10:58 am UTC
... sounds like you might be doing incremental refreshes? ....
like I thought - you are doing 'incremental' (row by row) refreshes, and it sounds like your do-it-yourself complete refresh (big, bulky, fast for lots of changes) is faster. No surprise here. (I wish they had NOT called them "fast refresh" and "complete refresh". First of all - they mean two different kinds of thing: fast is a speed, complete is a method. They should have been named INCREMENTAL and COMPLETE - both are then methods and neither implies "I am superior to the other" - they are just different.)
do you use direct path operations for the data load?
have you played around with a PCT refresh (truncate and reload of a partition in the MV)
MV refresh
A reader, November 02, 2010 - 1:51 pm UTC
Hi Tom,
Tried on base tables:-
1)INSERT /*+ APPEND */ INTO... and
2)Load into work tables, and then exchange partitions...
and used
BEGIN
DBMS_MVIEW.REFRESH (LIST =>'MV',
METHOD =>'P',
atomic_refresh=>FALSE, parallelism=>2);
END;
but it still takes the same time.
Can you tell me more about the truncate and reload of a partition in the MV?
Regards,
November 03, 2010 - 1:22 am UTC
trace the mview refresh and verify it is using truncate + insert append.
... Can you let me know about..,truncate and reload of a partition in the MV?
...
not sure what you are asking for?
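One way to trace the refresh Tom mentions, as a sketch (the trace file lands in the session's trace directory; the identifier is just a hypothetical tag to make the file easy to find):

```sql
alter session set tracefile_identifier = 'mv_refresh';
alter session set events '10046 trace name context forever, level 12';

begin
   dbms_mview.refresh( list           => 'MV',
                       method         => 'P',
                       atomic_refresh => false );
end;
/

alter session set events '10046 trace name context off';

-- then look in the trace file (or its tkprof output) for
-- "insert /*+ append */" plus a TRUNCATE, versus a conventional-path
-- delete + insert.
```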
IS MATERIALIZED VIEW of any interest regarding REFRESH Performance?
Sylvain Dussart, November 08, 2010 - 9:47 pm UTC
Hello,
my need is to speed up a join query with high cardinality in an OLTP on two "big tables".
But I am wondering about MV REFRESH performance.
Indeed, in every test I do, the refresh cost is much higher than the table size, even when modifying a single line.
I tested the above example with tables M and I (with refresh fast, but without the "on commit" option).
I even added indexes on the MV, as recommended, to improve FAST refresh:
CREATE index "m_i_mv~mrow" on m_i_mv(m_rowid);
CREATE index "m_i_mv~irow" on m_i_mv(i_rowid);
CREATE index "m_i_mv~mid" on m_i_mv(mid);
CREATE index "m_i_mv~iid" on m_i_mv(iid);
But nothing helps; at each refresh, the cost is a lot higher than the simple size of the tables (2 MB each):
------------------------------------------------------------
EXEC | "short SQL txt" |"BGETS(MB)"|
------------------------------------------------------------
1 | BEGIN DBMS_MVIEW.REFRESH('m_i_MV','F');| 79,227 |
1 | /* MV_REFRESH (INS) */ INSERT INTO "SYS| 6,633 |
0 | /* MV_REFRESH (INS) */ INSERT INTO "SYS| 0 |
------------------------------------------------------------
-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
-----------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 650 (100)|
| 1 | LOAD TABLE CONVENTIONAL | | | | |
| 2 | NESTED LOOPS | | | | |
| 3 | NESTED LOOPS | | 1 | 96 | 650 (1)|
| 4 | VIEW | | 1 | 58 | 649 (1)|
|* 5 | HASH JOIN RIGHT SEMI | | 1 | 162 | 649 (1)|
|* 6 | TABLE ACCESS FULL | MLOG$_I | 33 | 4554 | 399 (0)|
| 7 | TABLE ACCESS FULL | I | 60012 | 1406K| 249 (1)|
|* 8 | INDEX UNIQUE SCAN | SYS_C0015148 | 1 | | 0 (0)|
| 9 | TABLE ACCESS BY INDEX ROWID| M | 1 | 38 | 1 (0)|
-----------------------------------------------------------------------------------
The same behaviour occurs with my application tables (96 MB and 32 GB).
Is there something wrong with my tests, or is this the "normal" way MVs work?
If the latter, I think I would be better off updating the MV myself than using the refresh feature... strange!
Thanks in advance for your recommendations.
Regards.
Sylvain
November 10, 2010 - 11:11 am UTC
... my need is to speed up a join query with high cardinality in an OLTP on two
"big tables". ...
this sounds strange to me, how about you describe the problem first before we analyze a solution.
I'm not worried about "cost" inasmuch as "real world observed performance" - what is the real world effect of this.
There is very little concrete information to go on here - no sizing, no details on what you are trying to solve, etc.
MV Refresh Trace
A reader, November 10, 2010 - 12:37 pm UTC
Hi Tom,
I did the trace of the refresh (PCT), and found that during insert it uses the hint
1) INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND SKIP_UNQ_UNUSABLE_IDX */ INTO ....
2) In the select statement of the MV, an error is encountered: ORA-10980
3) It uses TABLE ACCESS FULL on MLOG$_...
Is there anything else happening in the trace which will help to understand why it is slow?
Thanks
IS MATERIALIZED VIEW of any interest regarding REFRESH Performance?
A reader, November 10, 2010 - 11:57 pm UTC
Hi Tom,
thanks for taking the time to answer.
Well, my concern is really around the MV refresh, but I will explain my "need" here after.
1) -------- MV REFRESH performs very badly ---------
But first, on the MV: each time a REFRESH occurs, it causes a full table scan on the big "i" table (currently 32 GB, 600 GB at target size), even if there are only a few lines in the MLOG$ tables (see the previous explain plan).
So, every 2 hours, I have a refresh that takes (as of now) 12 minutes. But how long will it take in the future??? (1 hour? 2 hours?)!
That's not acceptable. That's the origin of my question...
With a colleague, we found a little clue: there is a HASH_SJ hint in the refresh SQL which forces the full table scan and the hash semi-join to occur...
.../* MV_REFRESH (INS) */ INSERT INTO ...
SELECT /*+ HASH_SJ */ CHARTOROWID("MAS$"."M_ROW$$") RID$
FROM "SAP"."MLOG$_T_EKPO" "MAS$" WHERE "MAS$".SNAPTIME$$ > :B_ST0
)) AS OF SNAPSHOT(:B_SCN) "JV$", ...
And This is a big drawback according to our needs!
2) ------ Business case and need -----
In my real business test case, the MV is perfect from a "select" point of view.
The need is to do a "simple" report on goods movements, with the stock to date, by division, among other things.
The problem is that the design (done by the ERP vendor) is not perfect, with the 2 tables each having 1 selective column (m.date, i.division).
My customer (a retailer) has 1000 shops which each create around 20 documents of 100 items (materials) a day (2M lines a day).
The date is in the "header table" and the division in the "item" table.
So, when joining the two tables, we have a big cardinality. For me, the solution was the MV (except that the cost of the refresh cancels all the gains on the join).
And indeed, with a select on the MV versus a join on the master tables I get around 10 times fewer blocks!
*******************************************************************************************
**** TEST CASE ***
*******************************************************************************************
With this script, you can reproduce the case :
CREATE TABLE m (
mID NUMBER NOT NULL PRIMARY KEY,
bDT VARCHAR2(8) NOT NULL,
col3 NUMBER NOT NULL,
col4 NUMBER NOT NULL,
col5 NUMBER NOT NULL,
col6 VARCHAR2(50) NOT NULL
);
CREATE TABLE i (
mID NUMBER NOT NULL,
iID NUMBER NOT NULL, --item number
div VARCHAR2(4) NOT NULL,
mat VARCHAR2(18) NOT NULL,
qty NUMBER NOT NULL,
col6 VARCHAR2(50) NOT NULL,
PRIMARY KEY (mID,iID)
);
-- --------------------------------------------------------------------------------
-- Session # - start populating the tables (warning : can be very long)
-- Table m : 200K lines and table i 20M lines
-- --------------------------------------------------------------------------------
CREATE SEQUENCE i_seq;  -- the block below uses this; missing from the original script
DECLARE
mID NUMBER;
DATEC8 VARCHAR2(8);
DIV VARCHAR2(4);
MAT VARCHAR2(18);
qty number;
BEGIN
FOR bdt IN 1..100 LOOP -- 100 dates
DATEC8 := to_char(sysdate - bdt, 'YYYYMMDD');
FOR i IN 1..20 LOOP -- 20docs/div/day
FOR j IN 100..200 LOOP -- stores (divisions)
mID := i_seq.NEXTVAL;
DIV := to_char(j);
INSERT INTO m VALUES (mID, DATEC8, i, i, i, 'xxxxxxxxxxxxxxxxxxxxxxxxx');
FOR k IN 1..100 LOOP -- Materials (100)
INSERT INTO i VALUES (mID, k, DIV, k, k, 'xxxxxxxxxxxxxxxxxxxxxxxxx');
END LOOP;
END LOOP;
COMMIT;
END LOOP;
END LOOP;
END;
/
CREATE MATERIALIZED VIEW LOG ON I WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON M WITH ROWID;
CREATE MATERIALIZED VIEW m_i_MV
BUILD IMMEDIATE
REFRESH FAST WITH ROWID
AS
SELECT m.mID,
m.bdt,
i.iID,
i.div,
i.mat,
i.qty,
m.rowid m_rowid,
i.rowid i_rowid
FROM i,
m
WHERE i.mID = m.mID;
*******************************************************************************************
**** INDEX ***
*******************************************************************************************
CREATE index "m_i_mv~mrow" on m_i_mv(m_rowid);
CREATE index "m_i_mv~irow" on m_i_mv(i_rowid);
CREATE index "m_i_mv~mid" on m_i_mv(mid);
CREATE index "m_i_mv~iid" on m_i_mv(iid);
create index "m~bdt" on m(bdt);
create index "i~div" on i(div);
create index "i~divbdt" on m_i_MV(bdt,div);
*******************************************************************************************
**** RESULTS ***
*******************************************************************************************
-- Jointure --
SELECT m.mID,
m.bdt,
i.iID,
i.div,
i.mat,
i.qty,
FROM i,
m
WHERE i.mID = m.mID
AND m.bdt = '20101110'
AND i.div = '100'
;
*******************************************************************************************
--> JOIN: 5547 blocks!
*******************************************************************************************
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2500 | 236K| 471 (2)| 00:00:06 |
|* 1 | HASH JOIN | | 2500 | 236K| 471 (2)| 00:00:06 |
|* 2 | TABLE ACCESS FULL | M | 3216 | 99696 | 415 (1)| 00:00:05 |
| 3 | TABLE ACCESS BY INDEX ROWID| I | 2500 | 161K| 55 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | i~div | 2500 | | 8 (0)| 00:00:01 |
--------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("I"."MID"="M"."MID")
2 - filter("M"."BDT"='20101110')
4 - access("I"."DIV"='100')
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
5547 consistent gets
0 physical reads
0 redo size
97592 bytes sent via SQL*Net to client
1982 bytes received via SQL*Net from client
135 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2000 rows processed
-- MV --
SELECT mID,
bdt,
iID,
div,
mat,
qty
FROM m_i_MV
WHERE bdt = '20101110'
AND div = '100'
;
*******************************************************************************************
--> MV: 404 blocks!!!
*******************************************************************************************
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2000 | 117K| 48 (0)| 00:00:01 |
| 1 | MAT_VIEW ACCESS BY INDEX ROWID| M_I_MV | 2000 | 117K| 48 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | i~divbdt | 2000 | | 10 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("BDT"='20101110' AND "DIV"='100')
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
7 recursive calls
0 db block gets
404 consistent gets
33 physical reads
0 redo size
133478 bytes sent via SQL*Net to client
1982 bytes received via SQL*Net from client
135 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2000 rows processed
*******************************************************************************************
--> CONCLUSION :
*******************************************************************************************
SELECT on the MV in this case is very efficient.
However, the full table scan of the biggest table in the DB on each refresh, every 2 hours, is a nightmare...
Thanks in advance for your findings.
Regards.
Sylvain
November 11, 2010 - 2:38 pm UTC
hash_sj does NOT force a full tablescan, the optimizer is choosing that based on estimated card= values.
do you have stats on your mv log - are they correct/current? Are there any predicates on that big table in the query in question that would permit index access?
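Gathering current statistics on the MV log, as a sketch (MLOG$_I is the log name from the test case in this thread; the owner is assumed to be the current schema):

```sql
begin
   dbms_stats.gather_table_stats(
      ownname => user,
      tabname => 'MLOG$_I' );
end;
/
```

With accurate (small) cardinalities on the log, the optimizer is more likely to drive from the log via index access into the big table instead of full scanning it.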
IS MATERIALIZED VIEW of any interest regarding REFRESH Performance?
Sylvain Dussart, November 11, 2010 - 5:46 am UTC
Sorry for the formatting, I didn't notice it last night...
Please find below the explain plans in a more readable format.
For the join (5547 blocs):
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2500 | 236K| 471 (2)|
|* 1 | HASH JOIN | | 2500 | 236K| 471 (2)|
|* 2 | TABLE ACCESS FULL | M | 3216 | 99696 | 415 (1)|
| 3 | TABLE ACCESS BY INDEX ROWID| I | 2500 | 161K| 55 (0)|
|* 4 | INDEX RANGE SCAN | i~div | 2500 | | 8 (0)|
---------------------------------------------------------------------------
***************************************************************************
And for the MV (404 blocks):
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2000 | 117K| 48 (0)|
| 1 | MAT_VIEW ACCESS BY INDEX ROWID| M_I_MV | 2000 | 117K| 48 (0)|
|* 2 | INDEX RANGE SCAN | i~divbdt | 2000 | | 10 (0)|
--------------------------------------------------------------------------------
MV refresh Trace
abc, November 12, 2010 - 9:31 am UTC
Hi Tom,
can you throw some light on the MV refresh trace question asked above?
Regards,
November 12, 2010 - 9:57 am UTC
I did - well before "abc" asked me to here. See above.
IS MATERIALIZED VIEW of any interest regarding REFRESH Performance?
Sylvain Dussart, November 14, 2010 - 6:09 am UTC
Hi Tom,
thanks for the follow-up and sorry for the late answer.
Yes indeed, statistics have been updated.
Moreover, the behaviour is the same on my DB (11.2.0.1), on our production DB (10.2.0.4) and on another DB belonging to a colleague of mine (also 10.2.0.4), who has noticed the same behaviour on another project.
As requested, find below the explain plans with the filter predicates, after adding 1 line to the "m" table and 100 to the "i" table.
NOTE that the DELETE is also very expensive...
alter system flush shared_pool;
EXECUTE DBMS_MVIEW.REFRESH('m_i_MV','F');
SELECT substr("SQL_TEXT",0,40) "SQL txt", EXECUTIONS, BUFFER_GETS, SQL_ID, PLAN_HASH_VALUE FROM V$SQL WHERE SQL_TEXT LIKE '%REFRESH%' ORDER BY BUFFER_GETS DESC;
SQL txt EXECUTIONS BUFFER_GETS SQL_ID PLAN_HASH_VALUE
------------------------------ ---------- ----------- ------------- ---------------
BEGIN DBMS_MVIEW.REFRESH('m_i_ 1 297197 ak89cuv8rs7pn 0
/* MV_REFRESH (DEL) */ DELETE 1 151265 1tu4vqnh353w3 2283497834
/* MV_REFRESH (INS) */ INSERT 1 136738 dmnuzuvk7b9ap 4255875859
/* MV_REFRESH (INS) */ INSERT 1 1711 0ay74u9bum6h9 2762657338
SELECT substr("SQL_TEXT",0,60) 1 2 fu3n1csp1r03v 2836784050
SELECT substr("SQL_TEXT",0,50) 1 2 8vscrmm0abtrk 2836784050
SELECT substr("SQL_TEXT",0,30) 2 2 b1qpayb42992k 2836784050
SELECT substr("SQL_TEXT",0,40) 1 2 473azh7618mg2 2836784050
SELECT substr("SQL_TEXT",0,60) 1 2 24anpqmw9tmkq 903671040
/* MV_REFRESH (INS) */ INSERT 0 0 dmnuzuvk7b9ap 0
/* MV_REFRESH (DEL) */ DELETE 0 0 1tu4vqnh353w3 0
/* MV_REFRESH (INS) */ INSERT 0 0 0ay74u9bum6h9 0
SQL> SELECT * FROM TABLE ( DBMS_XPLAN.DISPLAY_CURSOR('1tu4vqnh353w3',1,'TYPICAL') );
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------
SQL_ID 1tu4vqnh353w3, child number 1
-------------------------------------
/* MV_REFRESH (DEL) */ DELETE FROM "SYSTEM"."M_I_MV" SNA$ WHERE
"I_ROWID" IN (SELECT /*+ NO_MERGE HASH_SJ */ * FROM (SELECT
CHARTOROWID("MAS$"."M_ROW$$") RID$ FROM "SYSTEM"."MLOG$_I" "MAS$"
WHERE "MAS$".SNAPTIME$$ > :B_ST1 ) AS OF SNAPSHOT(:B_SCN) MAS$)
Plan hash value: 2283497834
----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | DELETE STATEMENT | | | | 41455 (100)| |
| 1 | DELETE | M_I_MV | | | | |
|* 2 | HASH JOIN RIGHT SEMI | | 41 | 6601 | 41455 (2)| 00:08:18 |
|* 3 | TABLE ACCESS FULL | MLOG$_I | 33 | 4554 | 2 (0)| 00:00:01 |
| 4 | MAT_VIEW ACCESS FULL| M_I_MV | 20M| 443M| 41310 (2)| 00:08:16 |
----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("I_ROWID"=CHARTOROWID("MAS$"."M_ROW$$"))
3 - filter("MAS$"."SNAPTIME$$">:B_ST1)
Note
-----
- dynamic sampling used for this statement (level=2)
29 rows selected.
SQL> SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR('dmnuzuvk7b9ap',1,'TYPICAL'));
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
SQL_ID dmnuzuvk7b9ap, child number 1
-------------------------------------
/* MV_REFRESH (INS) */ INSERT INTO "SYSTEM"."M_I_MV" SELECT /*+
NO_MERGE("JV$") */ "MAS$0"."MID","MAS$0"."BDT","JV$"."IID","JV$"."DIV","
JV$"."MAT","JV$"."QTY","MAS$0".ROWID,"JV$"."RID$" FROM ( SELECT
"MAS$"."ROWID" "RID$" , "MAS$".* FROM "SYSTEM"."I" "MAS$" WHERE
ROWID IN (SELECT /*+ HASH_SJ */ CHARTOROWID("MAS$"."M_ROW$$") RID$
FROM "SYSTEM"."MLOG$_I" "MAS$" WHERE "MAS$".SNAPTIME$$ > :B_ST1 ))
AS OF SNAPSHOT(:B_SCN) "JV$", "M" AS OF SNAPSHOT(:B_SCN) "MAS$0" WHERE
"JV$"."MID"="MAS$0"."MID"
Plan hash value: 4255875859
-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
-----------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 137K(100)|
| 1 | LOAD TABLE CONVENTIONAL | | | | |
| 2 | NESTED LOOPS | | | | |
| 3 | NESTED LOOPS | | 3 | 276 | 137K (1)|
| 4 | VIEW | | 3 | 198 | 137K (1)|
|* 5 | HASH JOIN RIGHT SEMI | | 3 | 504 | 137K (1)|
|* 6 | TABLE ACCESS FULL | MLOG$_I | 33 | 4554 | 2 (0)|
| 7 | TABLE ACCESS FULL | I | 20M| 577M| 136K (1)|
|* 8 | INDEX UNIQUE SCAN | SYS_C0016059 | 1 | | 0 (0)|
| 9 | TABLE ACCESS BY INDEX ROWID| M | 1 | 26 | 1 (0)|
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - access(ROWID=CHARTOROWID("MAS$"."M_ROW$$"))
6 - filter("MAS$"."SNAPTIME$$">:B_ST1)
8 - access("JV$"."MID"="MAS$0"."MID")
Note
-----
- dynamic sampling used for this statement (level=2)
39 rows selected.
MV refresh for upgrade
A reader, January 31, 2011 - 1:09 am UTC
Hi Tom,
We need to move 2 production schemas from Oracle 9i single instance to Oracle 10G RAC.
At this stage we are thinking about a migration strategy for these 2 schemas. Their total size is about 120 GB (about 320 tables). We have a business constraint to minimize the downtime for transferring the schemas from Oracle 9i to Oracle 10g RAC. We considered EXP-IMP but it is slow, so we were thinking about using MVIEWs if possible. I would like your view on the migration strategy below for moving the schemas from Oracle 9i to Oracle 10g RAC.
1) Perform export of 2 schemas in Oracle 9i database without rows. Just to get the structure i.e. tables, index, grants etc
2) Configure MVIEW snapshot for schema in Oracle 9i
3) Create Database link between Oracle 9i and Oracle 10G database
4) Import 2 schemas into Oracle 10G RAC to get the structure i.e. tables, index, grants etc
5) Create MVIEW for all tables in Oracle 10G RAC with REFRESH FAST on PREBUILT TABLE.
6) refresh the MVIEWS in Oracle 10G RAC environment to get data from Oracle 9i database.
7) Organize outage for Oracle 9i database
8) Refresh MVIEWS for the final time
9) Cutover the application to Oracle 10G..
Do you think it is possible to achieve the above steps? If yes, I think it will considerably cut down the downtime for the apps. I would also like to know if it is possible to configure MVIEWs with the fast refresh option per schema rather than per table.
Thanks in advance.
Cheers,
February 01, 2011 - 4:32 pm UTC
MV refresh for upgrade
A reader, February 04, 2011 - 6:51 pm UTC
Thank you very much Tom.
Cheers,
Parallelism in Materialized View Refresh
MiM, February 25, 2011 - 9:26 am UTC
Hi Tom,
Two queries about Materialized View Refresh
1. I have read a few documents which mention that a materialized view refresh will only run in parallel if the MV was created with the parallel option. Is that true?
2. What is the best way to find if Materialized View Refresh is running in parallel?
Thanks in advance
February 25, 2011 - 9:43 am UTC
1) it is an option to the refresh call - you can specify it there.
http://docs.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_mview.htm#i997194
2) look at the refresh call - remember, this might be scheduled automagically via a job (dba_jobs) if you used "start with" and "next" on the create statement - or it might be buried in one of your scripts (typical in a data warehouse)
And most importantly - remember that the refresh might run
a) faster
b) slower
c) no differently
when using parallel or no parallel... parallel is not fast=true....
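For reference, the degree can be requested on the refresh call itself; a minimal sketch, assuming a hypothetical MV named MY_BIG_MV:

```sql
-- Hypothetical MV name; PARALLELISM is a parameter of DBMS_MVIEW.REFRESH.
-- Requesting a degree does not guarantee the refresh actually runs faster.
BEGIN
   DBMS_MVIEW.REFRESH(
      list        => 'MY_BIG_MV',
      method      => 'C',       -- complete refresh
      parallelism => 4 );       -- requested degree of parallelism
END;
/
```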
MV refresh time
Suresh.R, March 17, 2011 - 12:13 pm UTC
Tom,
How/where can we determine how long a materialized view took to refresh?
i.e. At time T1 the materialized view refresh started.
At time T2 the materialized view refresh completed.
We would like to measure T2 - T1.
Oracle Version: 9.2.0.8
Regards
Suresh.R
March 17, 2011 - 12:34 pm UTC
we do not record that. You would have to record that if you wanted. It would be rather easy to do.
Instead of using "next" on the materialized view (if you are doing an automatic, scheduled refresh which I presume you must be), just schedule your own job (if you look in user_jobs, you'll see that we schedule a job to satisfy your next request, nothing fancy).
Your job would:
declare
l_rowid rowid;
begin
insert into log_table (name,start_date) values ('MYMVIEW', sysdate )
returning rowid into l_rowid;
dbms_mview.refresh( 'MYMVIEW' );
update log_table set end_time = sysdate where rowid = l_rowid;
end;
log_table is a table of your own design...
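A minimal sketch of what log_table might look like to match the block above (the column names are assumptions, not from the original):

```sql
-- One possible layout for log_table; the refresh duration is then
-- simply end_time - start_date for each logged run
CREATE TABLE log_table
( name       VARCHAR2(30),
  start_date DATE,
  end_time   DATE );
```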
..MV refresh time contd
A reader, March 17, 2011 - 1:22 pm UTC
Tom,
Suppose the MV refresh job is scheduled using dbms_job (9i) in the production database and we don't want to schedule it our own way.
can we make use of dba_jobs_running to measure t2 - t1 ( i.e. MV refresh time taken) ?
Kindly suggest how to in this case?
regards
March 17, 2011 - 1:28 pm UTC
that would never give you the actual time - because the job is "running", as in "not finished".
You have the TOTAL_TIME in dba_jobs, that can give you an idea. If you take a look at what it is now, and look at it later, you can tell how much time it took to run the job (any number of times) during that interval.
But that is not any different from what I'm suggesting above, you still have to have something that records what the total time is NOW, and then records it again LATER.
If you want new information that doesn't currently exist, you'll have to change your process.
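A sketch of the snapshot-the-counter approach suggested above, assuming the refresh was scheduled via dbms_job and that the job text mentions the mview name:

```sql
-- Run this now and again later; the difference in TOTAL_TIME (seconds)
-- is the time spent running the job during that interval
SELECT job, what, total_time
  FROM dba_jobs
 WHERE what LIKE '%MYMVIEW%';   -- hypothetical filter on your refresh job
```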
.. ..MV refresh time contd
A reader, March 22, 2011 - 4:03 am UTC
Thanks Tom,
Further,
select FULLREFRESHTIM from SYS.SUM$
Would that give us the full-refresh time for that CONTAINERNAM ('MYMVIEW')?
Weird statement Oracle does for "fast" refresh
Marcus, March 22, 2011 - 5:39 pm UTC
Hello Tom,
I am experimenting with some huge MVs and hit the problem where the fast refresh takes longer than the complete. I only have a few hundred records in my MV logs, so I followed your instructions and traced the session.
Look at something weird I found: if you look at the "AND NOT EXISTS (SELECT 1" part, you will see that Oracle is considering the possibility that the ROWIDs of my regular tables are NULL! I was hoping the optimizer would ignore this, but if I remove the ORs, my cost goes from 5x10^9 to 5x10^5. Is this strange or what?
INSERT INTO "KIPREV"."MV_CERTIFICADOS_PF"
SELECT /*+ NO_MERGE("JV$") */
"MAS$31"."COD_EMPRESA",
"MAS$31"."COD_CUENTA",
/*** Lots of columns here ***/
"MAS$2".ROWID,
"MAS$1".ROWID,
"MAS$0".ROWID
FROM (SELECT "MAS$"."ROWID" "RID$", "MAS$".*
FROM "KIPREV"."PERSONAS" "MAS$"
WHERE ROWID IN (SELECT /*+ HASH_SJ */
CHARTOROWID("MAS$"."M_ROW$$") RID$
FROM "KIPREV"."MLOG$_PERSONAS" "MAS$"
WHERE "MAS$".SNAPTIME$$ > :1)) "JV$",
"KIPREV"."FO_TIPSALDOS_X_CLASSIF" "MAS$0",
"KIPREV"."FO_TIPSALDOS_X_CONTA_DETA" "MAS$1",
"KIPREV"."FO_CONTRIB_X_PRODUTO" "MAS$2",
"KIPREV"."PERSONAS" "MAS$3",
"KIPREV"."FO_CONVENIO" "MAS$4",
"KIPREV"."PERSONAS" "MAS$5",
"KIPREV"."FV_OFICIALES_CARTERA" "MAS$6",
"KIPREV"."FO_CUENTAS_X_OFICIALES" "MAS$7",
"KIPREV"."PA_ENDERECOS" "MAS$8",
"KIPREV"."PA_ENDERECOS" "MAS$9",
"KIPREV"."OFICINAS_DE_ENTES" "MAS$10",
"KIPREV"."PERSONAS" "MAS$11",
"KIPREV"."ENTES_EXTERNOS" "MAS$12",
"KIPREV"."PERSONAS" "MAS$13",
"KIPREV"."FV_OFICIALES_CARTERA" "MAS$14",
"KIPREV"."PERSONAS" "MAS$15",
"KIPREV"."FV_OFICIALES_CARTERA" "MAS$16",
"KIPREV"."FV_OFICIALES_CARTERA" "MAS$18",
"KIPREV"."FO_DADOS_COMERCIAL" "MAS$19",
"KIPREV"."ID_PERSONAS" "MAS$20",
"KIPREV"."PERSONAS_FISICAS" "MAS$21",
"KIPREV"."PERSONAS" "MAS$22",
"KIPREV"."FO_PRODUCTOS" "MAS$23",
"KIPREV"."FO_CUENTAS" "MAS$24",
"KIPREV"."PA_STATUS" "MAS$25",
"KIPREV"."PA_STATUS_SUB" "MAS$26",
"KIPREV"."BE_TIPOS_COBERTURA" "MAS$27",
"KIPREV"."BE_BENEFICIOS" "MAS$28",
"KIPREV"."BENEFICIOS_X_CUENTA" "MAS$29",
"KIPREV"."FO_CONTRIBUICOES" "MAS$30",
"KIPREV"."FO_CONTRIB_X_CONTA" "MAS$31"
WHERE ("MAS$30"."COD_EMPRESA" = "MAS$31"."COD_EMPRESA" AND
"MAS$30"."COD_CONTRIB" = "MAS$31"."COD_CONTRIB" AND
"MAS$29"."COD_EMPRESA" = "MAS$31"."COD_EMPRESA" AND
"MAS$29"."COD_BENEFICIO" = "MAS$31"."COD_BENEFICIO" AND
"MAS$29"."COD_PLAN" = "MAS$31"."COD_PLAN" AND
"MAS$29"."COD_CUENTA" = "MAS$31"."COD_CUENTA" AND
"MAS$28"."COD_EMPRESA" = "MAS$29"."COD_EMPRESA" AND
"MAS$28"."COD_PLAN" = "MAS$29"."COD_PLAN" AND
"MAS$28"."COD_BENEFICIO" = "MAS$29"."COD_BENEFICIO" AND
"MAS$27"."COD_TIPCOBERTURA" = "MAS$28"."COD_TIPCOBERTURA" AND
"MAS$26"."TIP_STATUS" = "MAS$29"."TIP_STATUS" AND
"MAS$26"."COD_STATUS" = "MAS$29"."COD_ESTADO" AND
"MAS$26"."COD_SUB_STATUS" = "MAS$29"."COD_SUB_ESTADO" AND
"MAS$25"."TIP_STATUS" = "MAS$26"."TIP_STATUS" AND
"MAS$25"."COD_STATUS" = "MAS$26"."COD_STATUS" AND
"MAS$24"."COD_EMPRESA" = "MAS$29"."COD_EMPRESA" AND
"MAS$24"."COD_CUENTA" = "MAS$29"."COD_CUENTA" AND
"MAS$23"."COD_EMPRESA" = "MAS$24"."COD_EMPRESA" AND
"MAS$23"."COD_PRODUCTO" = "MAS$24"."COD_PRODUCTO" AND
"MAS$22"."COD_PERSONA" = "MAS$24"."COD_CLIENTE" AND
"MAS$21"."COD_PER_FISICA" = "MAS$22"."COD_PERSONA" AND
"MAS$20"."COD_PERSONA" = "MAS$22"."COD_PERSONA" AND
"MAS$20"."COD_TIPO_ID" = '1' AND
"MAS$19"."COD_EMPRESA" = "MAS$24"."COD_EMPRESA" AND
"MAS$19"."COD_CUENTA" = "MAS$24"."COD_CUENTA" AND
"MAS$18"."COD_EMPRESA" = "MAS$19"."COD_EMPRESA" AND
"MAS$18"."COD_OFICIAL" = "MAS$19"."COD_AGENCIADOR" AND
"JV$"."COD_PERSONA"(+) = "MAS$18"."COD_PERSONA" AND
"MAS$16"."COD_EMPRESA"(+) = "MAS$19"."COD_EMPRESA" AND
"MAS$16"."COD_OFICIAL"(+) = "MAS$19"."COD_AGENTE_VENDA" AND
"MAS$15"."COD_PERSONA"(+) = "MAS$16"."COD_PERSONA" AND
"MAS$14"."COD_EMPRESA"(+) = "MAS$19"."COD_EMPRESA" AND
"MAS$14"."COD_OFICIAL"(+) = "MAS$19"."COD_PROLABORISTA" AND
"MAS$13"."COD_PERSONA"(+) = "MAS$14"."COD_PERSONA" AND
"MAS$12"."COD_ENTE" = "MAS$19"."COD_ENTE" AND
"MAS$11"."COD_PERSONA" = "MAS$12"."COD_PERSONA" AND
"MAS$10"."COD_ENTE" = "MAS$19"."COD_ENTE" AND
"MAS$10"."COD_OFICINA" = "MAS$19"."COD_OFICINA" AND
"MAS$9"."COD_ENDERECO"(+) = "MAS$10"."COD_MUNICIPIO" AND
"MAS$9"."COD_NIVEL_ANTERIOR" = "MAS$8"."COD_NIVEL"(+) AND
"MAS$9"."NIVEL_ANTERIOR" = "MAS$8"."COD_ENDERECO"(+) AND
"MAS$9"."COD_NIVEL"(+) = 'CI' AND "MAS$8"."COD_NIVEL"(+) = 'UF' AND
"MAS$7"."COD_EMPRESA"(+) = "MAS$24"."COD_EMPRESA" AND
"MAS$7"."COD_CUENTA"(+) = "MAS$24"."COD_CUENTA" AND
"MAS$6"."COD_EMPRESA"(+) = "MAS$19"."COD_EMPRESA" AND
"MAS$6"."COD_OFICIAL"(+) = "MAS$19"."COD_AGENTE_VENDA" AND
"MAS$6"."COD_PERSONA" = "MAS$5"."COD_PERSONA" AND
"MAS$4"."ID_CONVENIO"(+) = "MAS$24"."ID_CONVENIO" AND
"MAS$3"."COD_PERSONA"(+) = "MAS$24"."COD_PERSONA_RESPFIN" AND
"MAS$2"."COD_EMPRESA" = "MAS$31"."COD_EMPRESA" AND
"MAS$2"."COD_PRODUTO" = "MAS$31"."COD_PRODUTO" AND
"MAS$2"."COD_CONTRIB" = "MAS$31"."COD_CONTRIB" AND
"MAS$1"."COD_EMPRESA" = "MAS$31"."COD_EMPRESA" AND
"MAS$1"."COD_CONTA" = "MAS$31"."COD_CUENTA" AND
"MAS$1"."COD_PLAN" = "MAS$31"."COD_PLAN" AND
"MAS$1"."COD_BENEFICIO" = "MAS$31"."COD_BENEFICIO" AND
"MAS$0"."COD_EMPRESA" = "MAS$1"."COD_EMPRESA" AND
"MAS$0"."COD_TIPSALDO" = "MAS$1"."COD_TIPSALDO" AND
("MAS$0"."COD_CLASSIFICACAO" = 'BAC' OR
"MAS$0"."COD_CLASSIFICACAO" = 'RIS') AND
"MAS$24"."TIP_CUENTA" = 'I')
/*** Below is the weird part: MAS$<N> are my regular tables ***/
AND NOT EXISTS (SELECT 1
FROM "KIPREV"."MV_CERTIFICADOS_PF" "SNA2$"
WHERE ("SNA2$"."FCC_ROWID" = "MAS$31".ROWID OR
"MAS$31".ROWID IS NULL)
AND ("SNA2$"."FCNT_ROWID" = "MAS$30".ROWID OR
"MAS$30".ROWID IS NULL)
AND ("SNA2$"."BC_ROWID" = "MAS$29".ROWID OR
"MAS$29".ROWID IS NULL)
AND ("SNA2$"."BB_ROWID" = "MAS$28".ROWID OR
"MAS$28".ROWID IS NULL)
AND ("SNA2$"."BTC_ROWID" = "MAS$27".ROWID OR
"MAS$27".ROWID IS NULL)
AND ("SNA2$"."PSS_ROWID" = "MAS$26".ROWID OR
"MAS$26".ROWID IS NULL)
AND ("SNA2$"."PS_ROWID" = "MAS$25".ROWID OR
"MAS$25".ROWID IS NULL)
AND ("SNA2$"."FC_ROWID" = "MAS$24".ROWID OR
"MAS$24".ROWID IS NULL)
AND ("SNA2$"."P_ROWID" = "MAS$23".ROWID OR
"MAS$23".ROWID IS NULL)
AND ("SNA2$"."PERCLI_ROWID" = "MAS$22".ROWID OR
"MAS$22".ROWID IS NULL)
AND ("SNA2$"."PPPFJ_ROWID" = "MAS$21".ROWID OR
"MAS$21".ROWID IS NULL)
AND ("SNA2$"."IPPFJ_ROWID" = "MAS$20".ROWID OR
"MAS$20".ROWID IS NULL)
AND ("SNA2$"."FDC_ROWID" = "MAS$19".ROWID OR
"MAS$19".ROWID IS NULL)
AND ("SNA2$"."OCAG_ROWID" = "MAS$18".ROWID)
AND ("SNA2$"."OCAV_ROWID" = "MAS$16".ROWID OR
"MAS$16".ROWID IS NULL)
AND ("SNA2$"."PERAV_ROWID" = "MAS$15".ROWID OR
"MAS$15".ROWID IS NULL)
AND ("SNA2$"."OCPL_ROWID" = "MAS$14".ROWID OR
"MAS$14".ROWID IS NULL)
AND ("SNA2$"."PERPL_ROWID" = "MAS$13".ROWID OR
"MAS$13".ROWID IS NULL)
AND ("SNA2$"."EE_ROWID" = "MAS$12".ROWID OR
"MAS$12".ROWID IS NULL)
AND ("SNA2$"."PEREE_ROWID" = "MAS$11".ROWID OR
"MAS$11".ROWID IS NULL)
AND ("SNA2$"."OE_ROWID" = "MAS$10".ROWID OR
"MAS$10".ROWID IS NULL)
AND ("SNA2$"."CIDADE_ROWID" = "MAS$9".ROWID OR
"MAS$9".ROWID IS NULL)
AND ("SNA2$"."ESTADO_ROWID" = "MAS$8".ROWID OR
"MAS$8".ROWID IS NULL)
AND ("SNA2$"."COF_ROWID" = "MAS$7".ROWID OR
"MAS$7".ROWID IS NULL)
AND ("SNA2$"."OCCOR_ROWID" = "MAS$6".ROWID OR
"MAS$6".ROWID IS NULL)
AND ("SNA2$"."PERCOR_ROWID" = "MAS$5".ROWID OR
"MAS$5".ROWID IS NULL)
AND ("SNA2$"."FCON_ROWID" = "MAS$4".ROWID OR
"MAS$4".ROWID IS NULL)
AND ("SNA2$"."PEREF_ROWID" = "MAS$3".ROWID OR
"MAS$3".ROWID IS NULL)
AND ("SNA2$"."FCPP_ROWID" = "MAS$2".ROWID OR
"MAS$2".ROWID IS NULL)
AND ("SNA2$"."TIP_SAL_DT_ROWID" =
"MAS$1".ROWID OR "MAS$1".ROWID IS NULL)
AND ("SNA2$"."TSC_ROWID" = "MAS$0".ROWID OR
"MAS$0".ROWID IS NULL)
AND "JV$".RID$ IS NULL)
AND NOT EXISTS
(SELECT 1
FROM "KIPREV"."PERSONAS" "MAS_INNER$",
"KIPREV"."FV_OFICIALES_CARTERA" "MAS_OUTER$"
WHERE "MAS$18".ROWID = "MAS_OUTER$".ROWID
AND "JV$".RID$ IS NULL
AND "MAS_INNER$"."COD_PERSONA" = "MAS_OUTER$"."COD_PERSONA")
March 23, 2011 - 8:18 am UTC
... I am experimenting with some huge MVs and hit that problem where the fast
refresh takes longer than the complete. ...
it is not unusual. They made a huge mistake in 1992 when they called it "fast", it should have been named "incremental". Sometimes fast is slow.
with the outer joins they are doing, there is an opportunity for some of those rowids to be NULL.
It is sort of a disappointment though.
Marcus, March 23, 2011 - 12:03 pm UTC
It is just that whenever I read about the fast/incremental feature for MVs I thought of things beautiful and elegant such as the redo/undo mechanism happening underneath. After carefully building MV logs for 78 tables, choosing carefully the columns, then creating a series of MVs, struggling to keep them "fast-refresh-capable", seeing this DML command is just... sad.
sreenivasan, March 24, 2011 - 5:56 am UTC
Complete Refresh Wait Events
Jane Lamont, March 24, 2011 - 6:01 pm UTC
Hi Tom,
Referring to Metalink ID 763823.1, I have exported/imported mviews to a test, virtual environment of 2 databases to test the base table & mview exports. After import I ran complete refreshes on the small tables with no problem. However, when I tried one of the large tables (50M records), it took 175 hours to complete! Checking it with v$active_session_history, I see huge amounts of wait_class -> user i/o and event -> db file sequential read on the insert statement. Indexes are in place on the mview, though the mview is partitioned the indexes are not partitioned.
The note says to build the mview log on the master table after the import and do a complete refresh. Why would the log be needed for a complete refresh? Would it improve the speed?
Also, why would a complete refresh do so much reading? It is a single transaction, so it would load it to memory then commit to disk at the end, wouldn't it? The disk reading would be on the master table side?
Could you please enlighten me?
Thanks for your help,
Jane
March 25, 2011 - 10:55 am UTC
the log is not needed for a complete refresh, it was likely just reminding you that if you had a log - re-implement it.
It sounds like for whatever reason, the query plan used to refresh the materialized view was using indexes inappropriately. Was the plan "reasonable" looking for the refresh query? Were statistics up to date (did the database know the tables were "large")? What was the defining query?
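If stale statistics turn out to be the issue, a hedged sketch (names are hypothetical) of bringing them up to date:

```sql
-- Gather statistics on the mview's container table so the optimizer
-- knows it is "large"; cascade=>true also gathers the index statistics
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname => user,
      tabname => 'MY_BIG_MV',   -- hypothetical mview name
      cascade => TRUE );
END;
/
```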
Fast refresh MView
vin, March 28, 2011 - 9:41 am UTC
Hi Tom,
I have created a mview MV_A on a Master Table A, using the query:
CREATE MATERIALIZED VIEW MV_A
REFRESH FAST
AS SELECT * FROM A
I then created the Mview log on the master table A as:
CREATE MATERIALIZED VIEW LOG ON A;
My master table A is created as PARALLEL DEFAULT.
I then started making DML changes (INSERT) to the Master table A. I noticed that when I INSERT into the table A in the below fashion:
INSERT INTO A (SELECT * FROM B);
The MLOG$_A log does not capture the changed records.
But an insert into master table in the below fashion is captured by the MLOG$_A.
INSERT INTO A (COL1) VALUES (2);
But when the master table is altered to NOPARALLEL, inserts in both fashions are captured in the mview log.
Is this expected behaviour?
Thanks in advance
Complete Refresh Wait Events
Jane, April 05, 2011 - 11:38 am UTC
Hi Tom,
I have now moved to a mview of a larger size - 151M records with the same wait_class and event as my previous post above.
With these test mviews, I had used DataPump to export/import the metadata of the mview. The import log was: Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW. No mention of statistics.
The structure was there so I ran the complete refresh. The last_analyzed date was the date of the import, i.e. statistics were gathered on an empty table, so the database did not know it is a 'large' table. I see that I will need to investigate how to exp/imp the statistics along with the metadata.
The execution plan is:
select sql_id, sql_plan_hash_value plan_hash, plan_line_id id, lpad (' ', plan_depth) || plan_operation operation , plan_options , plan_object_name , plan_cardinality card, plan_cost
from v$sql_plan_monitor
where status = 'EXECUTING'
order by key, id;
SQL_ID        PLAN_HASH   ID  OPERATION                 PLAN_OBJECT_NAME  CARD  PLAN_COST
------------- ----------  --- ------------------------- ----------------- ----- ---------
3c4dk31a1hygz 1788691278    0 INSERT STATEMENT
                            1 LOAD TABLE CONVENTIONAL
                            2 REMOTE                    TABLE_A               1         2
The refresh query is select * from table_name. The only index on the mview is a 8-column primary key.
So, it might just be the lack of statistics causing this performance problem?
Thanks for your help,
Jane
April 12, 2011 - 12:37 pm UTC
how long does it take to run.
how long do you think it should take to run.
what does 151 million records mean in size (could be 200mb of data, could be 200 tb of data - size matters, row counts, not so much)
It is doing a conventional path load - not a direct path, have you looked at using "atomic_refresh=>false" so it can use TRUNCATE+INSERT/*+APPEND*/ to complete refresh instead of DELETE+INSERT?
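A minimal sketch of such a non-atomic complete refresh (the MV name is hypothetical):

```sql
-- atomic_refresh=>false lets the complete refresh use TRUNCATE plus a
-- direct-path INSERT /*+ APPEND */ instead of DELETE+INSERT; note the
-- MV appears empty to other sessions while the refresh is in flight
BEGIN
   DBMS_MVIEW.REFRESH(
      list           => 'MY_BIG_MV',
      method         => 'C',
      atomic_refresh => FALSE );
END;
/
```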
MV
A reader, April 12, 2011 - 10:34 pm UTC
Refresh group - delete/insert
Scott, June 22, 2011 - 10:20 am UTC
At the top of this thread back in 2004, you stated "If this table is part of a refresh group, it'll be done with a DELETE (redo) plus insert (redo)." as opposed to truncate plus insert. Is this still true in 10g and 11g? I've searched the Oracle documentation and cannot find it.
June 22, 2011 - 10:51 am UTC
In 10g and above - by default - all materialized views are refreshed with DELETE+INSERT - UNLESS you specify "atomic_refresh=>false" on the refresh call. If you do that, then it will use truncate+insert /*+APPEND*/
refresh groups are always done with atomic_refresh=>true - meaning DELETE+INSERT. The goal of the refresh group is to refresh a consistent set of tables - if we used truncate - which includes a commit - you'd see the tables as of different points in time during the refresh process (and sometimes empty as well!) - which is 100% counter to the refresh group intention.
Rebuild Index
A reader, June 30, 2011 - 9:12 am UTC
I have a question related to incremental refresh of MVs: do we need to rebuild indexes after an incremental refresh? Please assume we did not disable the indexes before the refresh.
July 01, 2011 - 8:47 am UTC
no, you do not. Materialized views maintain indexes.
How to tune complete refresh
Chandra, July 12, 2011 - 7:41 am UTC
Hi Tom,
We are refreshing an MV daily using a complete refresh.
It used to take 3 hrs to complete; now all of a sudden it is taking 5 hrs. What could be the reason?
We have made no changes.
Please let me know how to diagnose the issue.
Regards
KNR
how to use utl_http package in oracle
K.V.Koteswara rao, August 17, 2011 - 2:08 am UTC
hi,
I have a requirement to use utl_http. Using this package I need to send a request to a web server with the required parameters; based on that request the web server gives a response. I need to get that response and populate the response details into my database table. Could you please give an example? It would help me a lot.
thanks in advance
thanks & regards,
k.v.koteswara rao.
August 17, 2011 - 4:13 am UTC
have you even read the documentation? That is basically what utl_http does.
why would you ask a question about utl_http (a thing that appears on MANY pages on this site) on a page that previously had NO MENTION of utl_http????????
http://asktom.oracle.com/pls/asktom/asktom.search?p_string=utl_http
did you even search for it? to see if perhaps there were examples already?? because there are - many of them....
ORA-23401 - Materialized view doesnot exists
Rajeshwaran, Jeyabal, November 20, 2011 - 12:12 am UTC
Tom:
Can you please help me understand why I am getting this error? While using dbms_mview.refresh_dependent I get this error message, but while using dbms_mview.refresh I do NOT get it.
Below is the entire script for your review.
rajesh@ORA11GR2> create table t1
2 nologging
3 as
4 select level as empno,
5 dbms_random.string('A',30) as ename,
6 sysdate as hire_date,
7 mod(level,100) + 1 as deptno,
8 mod(level,1000) as comm,
9 dbms_random.value as salary
10 from dual
11 connect by level <=100000;
Table created.
Elapsed: 00:00:06.24
rajesh@ORA11GR2>
rajesh@ORA11GR2> create table t2
2 nologging
3 as
4 select level as deptno,
5 dbms_random.string('A',20) as dname,
6 dbms_random.string('B',10) as loc
7 from dual
8 connect by level <=100;
Table created.
Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> exec dbms_stats.gather_table_stats(user,'T1');
PL/SQL procedure successfully completed.
Elapsed: 00:00:01.03
rajesh@ORA11GR2> exec dbms_stats.gather_table_stats(user,'T2');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.09
rajesh@ORA11GR2> alter table t1 add constraint t1_pk primary key(empno) ;
Table altered.
Elapsed: 00:00:00.14
rajesh@ORA11GR2> alter table t2 add constraint t2_pk primary key(deptno) ;
Table altered.
Elapsed: 00:00:00.01
rajesh@ORA11GR2> alter table t1 add constraint t1_fk foreign key(deptno) references t2 ;
Table altered.
Elapsed: 00:00:00.06
rajesh@ORA11GR2> alter table t1 modify deptno not null ;
Table altered.
Elapsed: 00:00:00.06
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view t1_mv
2 build deferred
3 refresh on commit
4 enable query rewrite
5 as
6 select deptno,count(*) as cnt
7 from t1
8 group by deptno;
Materialized view created.
Elapsed: 00:00:00.09
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view t1_t2_mv
2 build deferred
3 refresh on commit
4 enable query rewrite
5 as
6 select dname, t.deptno,count(t.cnt)
7 from t2, t1_mv t
8 where t2.deptno = t.deptno
9 group by dname,t.deptno;
Materialized view created.
Elapsed: 00:00:00.07
rajesh@ORA11GR2>
rajesh@ORA11GR2> begin
2 dbms_mview.refresh_dependent(
3 number_of_failures=>:x,
4 list=>'T1_T2_MV',
5 method=>null,
6 rollback_seg=>null,
7 refresh_after_errors=>false,
8 atomic_refresh=>true,
9 nested=>true);
10 end;
11 /
begin
*
ERROR at line 1:
ORA-23401: materialized view "RAJESH"."T1_T2_MV" does not exist
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2702
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 3220
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 3200
ORA-06512: at line 2
Elapsed: 00:00:00.04
rajesh@ORA11GR2>
rajesh@ORA11GR2> select mview_name, last_refresh_date
2 from user_mviews
3 where mview_name = 'T1_T2_MV'
4 /
MVIEW_NAME LAST_REFR
------------------------------ ---------
T1_T2_MV
Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> exec dbms_mview.refresh(list=>'T1_T2_MV',nested=>true);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.37
rajesh@ORA11GR2>
November 21, 2011 - 1:50 pm UTC
you put the materialized view you are DEPENDENT on in that list.
ops$tkyte%ORA11GR2> variable x number
ops$tkyte%ORA11GR2> l
1 begin
2 dbms_mview.refresh_dependent(
3 number_of_failures=>:x,
4 list=>'T1_MV',
5 method=>null,
6 rollback_seg=>null,
7 refresh_after_errors=>false,
8 atomic_refresh=>true,
9 nested=>true);
10* end;
ops$tkyte%ORA11GR2> /
PL/SQL procedure successfully completed.
ORA-23401 - Materialized view doesnot exists
Rajeshwaran, Jeyabal, November 21, 2011 - 3:29 pm UTC
Tom, so you mean to say that:
1) When we use dbms_mview.refresh_dependent - developers need to manually specify the dependent MV in the list parameter
2) When we use dbms_mview.refresh - developers specify only the top-level MV and the dependencies will be identified by Oracle itself
Is that correct?
November 21, 2011 - 6:04 pm UTC
1) you specify the materialized view whose dependents you want refreshed.
For example, if I was a materialized view and you refreshed dependents for me - Tom - my son Alan and daughter Megan would get refreshed.
2) when you use this, you are telling us what specific materialized view to refresh. There are no dependencies involved, you are telling us "refresh THIS ONE"
Mview refresh and foreign key violation
Sandeep Deokar, November 26, 2011 - 11:07 am UTC
Hi Tom
We have a materialized view, and when I try to refresh it I get the error "foreign key constraint violated".
I checked the complete data in my remote table and local table; there is no data which would violate this constraint.
Hence I disabled the constraint and refreshed the view; it completed successfully.
After that I enabled the constraint, and this also worked fine.
We are using Oracle 11g 11.2.0.2.0.
This has resolved my purpose for this time and we scheduled our job in this sequence as well.
1) Disable constraint.
2) Refresh Mview.
3) Enable constraint.
My doubt is: either the mview should not refresh at all while the constraint is enabled, or, after disabling the constraint and refreshing the mview, I should not be able to re-enable the constraint.
Can you please put some focus what might be the problem here.
Thanks
Sandeep Deokar
November 28, 2011 - 10:46 am UTC
you should create constraints on materialized views as DEFERRABLE INITIALLY IMMEDIATE.
We'll then set constraints all deferred, do the refresh, and then set the constraint back to immediate.
This is necessary because we only guarantee that the data in a set of materialized views is consistent at the END of the refresh - during the refresh you can have duplicates, orphans, etc.
http://docs.oracle.com/cd/E11882_01/server.112/e10706/repmview.htm#REPLN360
see the constraints and refresh section there.
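A sketch of such a constraint on the MV side, with hypothetical table and column names:

```sql
-- DEFERRABLE lets the refresh defer validation to the end of the
-- refresh transaction, when the set of MVs is consistent again
ALTER TABLE mv_child ADD CONSTRAINT mv_child_fk
   FOREIGN KEY (father_id) REFERENCES mv_father (father_id)
   DEFERRABLE INITIALLY IMMEDIATE;
```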
Refresh in another schema
Carl, December 05, 2011 - 1:33 am UTC
Hi,
We are using Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit. We are having an issue with an MV refresh group, wherein we are unable to refresh objects from two different schemas together. PFB the simulation steps:
as system user:
BEGIN
DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_SCHEMA (
username => 'propagator');
END;
/
as propagator user:
BEGIN
DBMS_REFRESH.MAKE (
name => 'REP_CHK',
list => 'TABLE1,MVCK2.TABLE2',
next_date => SYSDATE,
interval => 'SYSDATE + 5/1440',
implicit_destroy => FALSE,
rollback_seg => '',
push_deferred_rpc => TRUE,
refresh_after_errors => TRUE);
END;
/
EXEC DBMS_REFRESH.REFRESH('REP_CHK');
ORA-12048: error encountered while refreshing materialized view "MVCK2"."TABLE2"
ORA-04052: error occurred when looking up remote object MVCK2.SYS@MASTER
ORA-00604: error occurred at recursive SQL level 2
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from MASTER
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2254
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2460
ORA-06512: at "SYS.DBMS_IREFRESH", line 683
ORA-06512: at "SYS.DBMS_REFRESH", line 195
ORA-06512: at line 1
We tried creating an MV from the MVCK1 schema into the MVCK2 schema. The MV is created if the master table is in the same database, but if the master table is in a different database it fails. Please suggest.
CREATE MATERIALIZED VIEW MVCK2.TABLE2
BUILD IMMEDIATE
REFRESH FAST
WITH ROWID AS
SELECT * FROM CK1.TABLE2@MASTER;
/
We are getting the following error:
ORA-04052: error occurred when looking up remote object CK1.TABLE2@MASTER
ORA-00604: error occurred at recursive SQL level 1
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from MVMASTER
However if we try to create a MV with the same user:
CREATE MATERIALIZED VIEW MVCK2.TABLE2
BUILD IMMEDIATE AS
SELECT * FROM tab;
/
Materialized View gets created successfully.
We searched this forum and googled about it, but couldn't get any fruitful pointers. Please suggest.
Thanks
Carl
Materialized view Refresh with ROWID
A reader, December 06, 2011 - 4:29 am UTC
As per Oracle 10g doc, Mat View WITH ROWID must be based on a single table and cannot contain any of the following:
* Distinct or aggregate functions
* GROUP BY or CONNECT BY clauses
* Subqueries
* Joins
* Set operations
Rowid materialized views are not eligible for fast refresh after a master table reorganization until a complete refresh has been performed.
I have created a sample table with a PK on ID column.
create table sample as select rownum as id, owner,object_name , object_type from all_objects;
CREATE INDEX ind ON sample(ID);
alter table sample add constraint PK_ID primary key(ID);
I have used DBMS_ADVISOR.tune_mview to create a fast refresh mat view and log. I have removed the column aliases and the statements are as follows:
CREATE MATERIALIZED VIEW LOG ON SAMPLE WITH ROWID, SEQUENCE (OWNER,OBJECT_NAME) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW MV_SAMPLE
build immediate
REFRESH FAST WITH ROWID ON COMMIT
ENABLE QUERY REWRITE AS
SELECT OWNER,COUNT(OBJECT_NAME),COUNT(*) FROM SAMPLE GROUP BY OWNER;
These get created successfully and the mat view gets fast refreshed after any DML on the sample table. The query "SELECT OWNER,COUNT(OBJECT_NAME) FROM SAMPLE GROUP BY OWNER" also picks the mat view to access the data, as shown in the explain plan.
Execution Plan
----------------------------------------------------------
Plan hash value: 1060132680
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 37 | 1110 | 3 (0)| 00:00:01 |
| 1 | MAT_VIEW REWRITE ACCESS FULL| MV_SAMPLE | 37 | 1110 | 3 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
Now my questions are:
1> The mat view is created WITH ROWID even though it has GROUP BY and aggregate functions. Doesn't this violate the Oracle documentation?
2> The underlying table has a primary key; why can't I create the mat view WITH PRIMARY KEY? It throws an error if I try.
mview refresh on outer join
James Su, March 09, 2012 - 10:27 am UTC
Hi Tom,
We have a mview which includes an outer join to the parent table. When a column is updated in the parent table, even if that column is not referenced in the mview, dbms_mview.refresh still generates SQLs and some of them are quite expensive. Shouldn't CHANGE_VECTOR$$ in the mview log be used to avoid this?
Another question: since the mview is refreshed every minute and the mview log keeps changing frequently, does it make sense to gather stats on the mview log table?
Thank you.
Below is the test case:
CREATE TABLE t_child (
child_id NUMBER PRIMARY KEY
,father_id NUMBER
,child_name VARCHAR2(10)
);
CREATE TABLE t_father (
father_id NUMBER PRIMARY KEY
,father_name VARCHAR2(10)
,no_use_col VARCHAR2(10)
);
ALTER TABLE t_child ADD CONSTRAINT t_child_fk FOREIGN KEY (father_id) REFERENCES t_father(father_id);
INSERT INTO t_child VALUES(1,NULL,'child1');
INSERT INTO t_child VALUES(2,NULL,'child2');
INSERT INTO t_father VALUES(1,'father1','test');
COMMIT;
CREATE MATERIALIZED VIEW LOG ON t_child WITH SEQUENCE, ROWID,PRIMARY KEY INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG ON t_father WITH SEQUENCE, ROWID,PRIMARY KEY INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW MV_REFRESH_TEST
NOCACHE
NOLOGGING
NOCOMPRESS
NOPARALLEL
BUILD IMMEDIATE
REFRESH FAST ON DEMAND WITH ROWID
AS
SELECT C.rowid AS c_rowid
,f.rowid AS f_rowid
,c.child_id
,c.father_id
,c.child_name
,f.father_name
FROM t_child C,t_father F
WHERE c.father_id = f.father_id(+);
ALTER TABLE MV_REFRESH_TEST ADD CONSTRAINT pk_MV_REFRESH_TEST PRIMARY KEY (child_id) DEFERRABLE USING INDEX;
CREATE INDEX idxr1_MV_REFRESH_TEST ON MV_REFRESH_TEST (c_rowid);
CREATE INDEX idxr2_MV_REFRESH_TEST ON MV_REFRESH_TEST (f_rowid);
--------- update an unused column in the father table and trace the generated SQL in the mview refresh
UPDATE t_father SET no_use_col='test2';
COMMIT;
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 1';
EXEC DBMS_MVIEW.REFRESH('MV_REFRESH_TEST','F');
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
I can see these two updates in the trace file:
/* MV_REFRESH (UPD) */
UPDATE "JSU"."MV_REFRESH_TEST" SNA$
SET "F_ROWID" = NULL, "FATHER_NAME" = NULL
WHERE "F_ROWID" IN (SELECT /*+ NO_MERGE HASH_SJ */
*
FROM (SELECT CHARTOROWID ("MAS$"."M_ROW$$") RID$
FROM "JSU"."MLOG$_T_FATHER" "MAS$"
WHERE "MAS$".SNAPTIME$$ > :B_ST0) AS OF SNAPSHOT (:B_SCN)
MAS$)
/* MV_REFRESH (UPD) */
UPDATE /*+ BYPASS_UJVC */
(SELECT /*+ NO_MERGE ("JV$") */
"SNA$"."F_ROWID"
"C0_0",
"JV$"."RID$" "C1_0",
"SNA$"."FATHER_NAME" "C0_1",
"JV$"."FATHER_NAME" "C1_1"
FROM (SELECT "MAS$"."ROWID" "RID$", "MAS$".*
FROM "JSU"."T_FATHER" "MAS$"
WHERE ROWID IN
(SELECT /*+ HASH_SJ */
CHARTOROWID("MAS$"."M_ROW$$")
RID$
FROM "JSU"."MLOG$_T_FATHER" "MAS$"
WHERE "MAS$".SNAPTIME$$ >
:B_ST0)) AS OF SNAPSHOT (:B_SCN)
"JV$",
"T_CHILD" AS OF SNAPSHOT (:B_SCN) "MAS$1",
"JSU"."MV_REFRESH_TEST" "SNA$"
WHERE "MAS$1"."FATHER_ID" = "JV$"."FATHER_ID"
AND "SNA$"."C_ROWID" = "MAS$1".ROWID) UV$
SET "C0_0" = "C1_0", "C0_1" = "C1_1"
----------------------
-- if we change the mview a little bit,
DROP MATERIALIZED VIEW MV_REFRESH_TEST;
CREATE MATERIALIZED VIEW MV_REFRESH_TEST
NOCACHE
NOLOGGING
NOCOMPRESS
NOPARALLEL
BUILD IMMEDIATE
REFRESH FAST ON DEMAND WITH ROWID
AS
SELECT C.rowid AS c_rowid
,f.rowid AS f_rowid
,c.child_id
,c.father_id
,c.child_name
,f.father_name
FROM t_child C,t_father F
WHERE c.father_id = f.father_id(+)
AND f.father_name(+) LIKE 'father%' -------- this predicate is the only change from the first mview
;
ALTER TABLE MV_REFRESH_TEST ADD CONSTRAINT pk_MV_REFRESH_TEST PRIMARY KEY (child_id) DEFERRABLE USING INDEX;
CREATE INDEX idxr1_MV_REFRESH_TEST ON MV_REFRESH_TEST (c_rowid);
CREATE INDEX idxr2_MV_REFRESH_TEST ON MV_REFRESH_TEST (f_rowid);
--------- update an unused column in the father table and trace the SQL generated during the mview refresh
UPDATE t_father SET no_use_col='test2';
COMMIT;
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 1';
EXEC DBMS_MVIEW.REFRESH('MV_REFRESH_TEST','F');
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
------ I can see these SQL's generated:
/* MV_REFRESH (DEL) */
DELETE FROM "JSU"."MV_REFRESH_TEST" "SNA$"
WHERE ROWID IN
(SELECT RID
FROM (SELECT "SNA$".ROWID RID,
ROW_NUMBER ()
OVER (PARTITION BY "C_ROWID"
ORDER BY RID$ NULLS LAST)
R,
COUNT ( * )
OVER (PARTITION BY "C_ROWID")
T_CNT,
COUNT (RID$)
OVER (PARTITION BY "C_ROWID")
IN_MVLOG_CNT
FROM "JSU"."MV_REFRESH_TEST" "SNA$",
(SELECT DISTINCT RID$
FROM (SELECT CHARTOROWID (
"MAS$"."M_ROW$$")
RID$
FROM "JSU"."MLOG$_T_FATHER" "MAS$"
WHERE "MAS$".SNAPTIME$$ >
:B_ST0)) AS OF SNAPSHOT (:B_SCN)
MAS$
WHERE "SNA$"."C_ROWID" IN
(SELECT "C_ROWID"
FROM "JSU"."MV_REFRESH_TEST" "SNA$"
WHERE "F_ROWID" IN (SELECT *
FROM (SELECT CHARTOROWID("MAS$"."M_ROW$$")
RID$
FROM "JSU"."MLOG$_T_FATHER" "MAS$"
WHERE "MAS$".SNAPTIME$$ >
:B_ST0) AS OF SNAPSHOT (:B_SCN)
MAS$))
AND "SNA$"."F_ROWID" = MAS$.RID$(+))
"SNA2$"
WHERE T_CNT > 1
AND ( (IN_MVLOG_CNT = T_CNT AND R > 1)
OR (IN_MVLOG_CNT < T_CNT AND R <= IN_MVLOG_CNT)))
/* MV_REFRESH (UPD) */
UPDATE "JSU"."MV_REFRESH_TEST" SNA$
SET "F_ROWID" = NULL, "FATHER_NAME" = NULL
WHERE "F_ROWID" IN (SELECT /*+ NO_MERGE HASH_SJ */
*
FROM (SELECT CHARTOROWID ("MAS$"."M_ROW$$") RID$
FROM "JSU"."MLOG$_T_FATHER" "MAS$"
WHERE "MAS$".SNAPTIME$$ > :B_ST0) AS OF SNAPSHOT (:B_SCN)
MAS$)
/* MV_REFRESH (INS) */
INSERT INTO "JSU"."MV_REFRESH_TEST"
SELECT /*+ NO_MERGE("JV$") */
"MAS$1".ROWID,
"JV$"."RID$",
"MAS$1"."CHILD_ID",
"MAS$1"."FATHER_ID",
"MAS$1"."CHILD_NAME",
"JV$"."FATHER_NAME"
FROM (SELECT "MAS$"."ROWID" "RID$", "MAS$".*
FROM "JSU"."T_FATHER" "MAS$"
WHERE ROWID IN
(SELECT /*+ HASH_SJ */
CHARTOROWID ("MAS$"."M_ROW$$")
RID$
FROM "JSU"."MLOG$_T_FATHER" "MAS$"
WHERE "MAS$".SNAPTIME$$ > :B_ST0)) AS OF SNAPSHOT (:B_SCN)
"JV$",
"T_CHILD" AS OF SNAPSHOT (:B_SCN) "MAS$1"
WHERE "MAS$1"."FATHER_ID" = "JV$"."FATHER_ID"
AND "JV$"."FATHER_NAME" LIKE 'father%'
/* MV_REFRESH (DEL) */
DELETE FROM "JSU"."MV_REFRESH_TEST" "SNA$"
WHERE ROWID IN
(SELECT ROWID
FROM (SELECT ROW_NUMBER ()
OVER (PARTITION BY "C_ROWID"
ORDER BY "F_ROWID" NULLS FIRST)
R,
COUNT ( * )
OVER (PARTITION BY "C_ROWID")
T_CNT,
COUNT ("F_ROWID")
OVER (PARTITION BY "C_ROWID")
NONNULL_CNT
FROM "JSU"."MV_REFRESH_TEST" "SNA2$"
WHERE "C_ROWID" IN
(SELECT "MAS$1".ROWID
FROM (SELECT "MAS$"."ROWID"
"RID$",
"MAS$".*
FROM "JSU"."T_FATHER" "MAS$"
WHERE ROWID IN
(SELECT /*+ HASH_SJ */
CHARTOROWID("MAS$"."M_ROW$$")
RID$
FROM "JSU"."MLOG$_T_FATHER" "MAS$"
WHERE "MAS$".SNAPTIME$$ >
:B_ST0)) AS OF SNAPSHOT (:B_SCN)
"JV$",
"T_CHILD" AS OF SNAPSHOT (:B_SCN) "MAS$1"
WHERE "MAS$1"."FATHER_ID" =
"JV$"."FATHER_ID"
AND "JV$"."FATHER_NAME" LIKE
'father%')) "SNA2$"
WHERE T_CNT > 1
AND ( (NONNULL_CNT = 0 AND R > 1)
OR (NONNULL_CNT > 0
AND R <= T_CNT - NONNULL_CNT)))
March 12, 2012 - 7:15 am UTC
does it make sense to gather stats on mview log table? probably only once - when it has a representative number of rows.
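A minimal sketch of that approach, assuming a master table named T whose materialized view log therefore gets the default name MLOG$_T:

```sql
-- gather stats once, while the log holds a representative number of rows
begin
    dbms_stats.gather_table_stats( ownname => user, tabname => 'MLOG$_T' );
    -- then lock them, so automatic stats jobs don't re-gather against
    -- a (usually near-empty) log and replace them with misleading numbers
    dbms_stats.lock_table_stats( ownname => user, tabname => 'MLOG$_T' );
end;
/
```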
i've often wondered why people do this:
CREATE TABLE t_child (
child_id NUMBER PRIMARY KEY
,father_id NUMBER
,child_name VARCHAR2(10)
);
CREATE TABLE t_father (
father_id NUMBER PRIMARY KEY
,father_name VARCHAR2(10)
,no_use_col VARCHAR2(10)
);
ALTER TABLE t_child ADD CONSTRAINT t_child_fk FOREIGN KEY (father_id)
REFERENCES t_father(father_id);
A one-to-one optional relationship almost screams out "single table please"
pretty much any change to either table is going to trigger processing - every row change to either will place rows into the materialized view log and they have to be processed.
Seems to me you could skip this entire process by altering the physical schema to be a single table in the first place?
table design
James Su, March 12, 2012 - 10:03 am UTC
Hi Tom,
It's not a one-to-one relationship but one-to-many.
In the OLTP we don't want to store the father_name with every single child row. We store father_id.
In DW we build this mview to do pre-join and make it denormalized. Now we have father_name and don't need to join to the father table.
Anyway this is just a simplified example. Both child and father are big tables with many columns. What I don't understand is why dbms_mview.refresh would generate some expensive sql which is not necessary at all.
Thanks!
March 12, 2012 - 10:24 am UTC
it doesn't know that it isn't necessary, it sees it in the log and must process it.
why wouldn't you just load the data denormalized into the warehouse in the first place?
you have a warehouse that does a materialized view refresh every minute?
CHANGE_VECTOR$$ in mview log
James Su, March 12, 2012 - 11:01 am UTC
Hi Tom,
Yes, the denormalized mview in DW is refreshed every minute. This is to eliminate joins. Is this a good practice?
Why dbms_mview.refresh doesn't use CHANGE_VECTOR$$ as a filter? And I don't understand why an UPDATE on parent will result in DELETE and INSERT?
March 12, 2012 - 11:16 am UTC
Yes, the denormalized mview in DW is refreshed every minute. This is to eliminate joins. Is this a good practice?
you load your warehouse every minute?
you don't do any transformations on the way in (why have two copies of this data??)
And I don't understand why an UPDATE on parent will result in DELETE and INSERT?
that is one way of doing an update - it is just how they implemented it. Sometimes a delete+insert is easier than an update to accomplish during a refresh.
DW load
James Su, March 12, 2012 - 11:27 am UTC
Hi Tom,
It's not technically a DW, maybe we should call it a reporting database. This is replicated by Goldengate from OLTP and has some MVIEWs built on top of it.
When would CHANGE_VECTOR$$ get used in refresh then?
March 12, 2012 - 12:30 pm UTC
why not denormalize the data as it comes in then, using goldengate?
GG replication
James Su, March 12, 2012 - 12:31 pm UTC
I never knew there was such an option. Thank you! I will do more research.
Mv refresh time
amit, March 15, 2012 - 6:18 am UTC
Hi,
How can I find the total time my MV took to do a complete refresh?
Thanks,
amit
March 15, 2012 - 7:47 am UTC
the easiest way to capture this would be to schedule the job yourself (don't use the default job - alter your materialized view to not auto refresh)
create a job that runs:
declare
    l_start_time timestamp := systimestamp;
    l_start_cpu  number    := dbms_utility.get_cpu_time;
begin
    dbms_mview.refresh( ... );
    insert into my_log_time ( mview_name, ela, cpu )
    values ( '....',
             systimestamp - l_start_time,
             dbms_utility.get_cpu_time - l_start_cpu );
    commit;
end;
on the interval you had your refresh going.
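Putting that advice together - a hedged sketch only, where MY_MV, the my_log_time table and the job name are all illustrative (note that dbms_utility.get_cpu_time reports hundredths of a second, hence the division by 100):

```sql
create table my_log_time
( mview_name varchar2(30),
  ela        interval day to second,
  cpu_secs   number );

begin
    dbms_scheduler.create_job(
        job_name        => 'REFRESH_AND_TIME_MV',
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[declare
                                 l_start_time timestamp := systimestamp;
                                 l_start_cpu  number    := dbms_utility.get_cpu_time;
                              begin
                                 dbms_mview.refresh( 'MY_MV' );
                                 insert into my_log_time ( mview_name, ela, cpu_secs )
                                 values ( 'MY_MV',
                                          systimestamp - l_start_time,
                                          (dbms_utility.get_cpu_time - l_start_cpu) / 100 );
                                 commit;
                              end;]',
        repeat_interval => 'FREQ=MINUTELY;INTERVAL=3',
        enabled         => true );
end;
/
```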
Mv refresh time
amit, March 16, 2012 - 2:15 am UTC
My MV gets refreshed every 3 minutes.
The base table contains a large amount of data and is loaded by a procedure that executes every 2 minutes.
So I need to check the total time the MV took to refresh in each run (when the MV refreshes automatically).
Is there a way to capture the time whenever the MV is refreshed automatically?
March 16, 2012 - 8:39 am UTC
did you read what I wrote above?
Mv refresh time
Amit, March 16, 2012 - 12:09 pm UTC
Yes, I have seen the reply above, which was very useful, but in my case I need to check the total refresh time of the MV in production, and in the production DB I will not be able to create a job and stop the auto refresh of the MV.
Please suggest some way.
March 16, 2012 - 12:17 pm UTC
i will not be able to....
sure you will, why wouldn't you?
You are not allowed to put code into production that does what you actually need to do? How did production get into existence in the first place? Following your logic - production would have nothing beyond the SCOTT/TIGER schema.
This is part of your applications requirement right? You have to do what you have to do.
Mv refresh time
amit, March 16, 2012 - 2:50 pm UTC
Thanks a lot.
I will be using the logic that u have suggested.
March 16, 2012 - 4:29 pm UTC
"U" had nothing to do with this. I have yet to meet this mythical "U" person - they are very popular and appear to work 24/7.
Tuning a MV
Amit, April 14, 2012 - 4:16 pm UTC
Hi tom,
When I fire a select against the MV it takes about 40 seconds; the select returns 1,000 rows. The MV has 50 lakh (5 million) records in total.
create materialized view temp
tablespace rlt__ts
build immediate
using index
tablespace rlt_ts
refresh start with sysdate next trunc(sysdate,'MI') + 2/1440
using default local rollback segment
disable query rewrite
select ...
from temp1 ,temp2
union all
select ..
from temp3,temp4
union all
select ..
from temp5,temp6
A primary key is created on 3 columns, which have distinct values.
Whenever we fire a select we use those 3 columns in the where clause, so that the index created on them is used.
But the performance is very slow. Can you tell me a way to improve it?
April 15, 2012 - 7:45 am UTC
sql_trace and tkprof it. post that information
PK - FK Optimization on Fast Refresh MV
Rajeshwaran, Jeyabal, July 02, 2012 - 7:20 am UTC
Tom:
I was reading about PK-FK Optimization for MV refresh
http://docs.oracle.com/cd/E11882_01/server.112/e25554/basicmv.htm#sthref224 <quote>
For example, for a materialized view that contains a join between a fact table and a dimension table, if only new rows were inserted into the dimension table with no change to the fact table since the last refresh, then there will be nothing to refresh for this materialized view
</quote>
1) I created a FAST REFRESH MV, but the MV refresh still happens when rows are inserted into the dimension table and not into the fact table. This seems to violate the above quote from the docs. Am I using a wrong test case?
rajesh@ORA11GR2> create table t1 as
2 select * from dept;
Table created.
Elapsed: 00:00:00.03
rajesh@ORA11GR2> create table t2 as
2 select * from emp;
Table created.
Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t1 add
2 constraint t1_pk
3 primary key(deptno);
Table altered.
Elapsed: 00:00:00.04
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t2 add
2 constraint t2_fk
3 foreign key(deptno) references t1;
Table altered.
Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t2 modify deptno not null;
Table altered.
Elapsed: 00:00:00.03
rajesh@ORA11GR2> begin
2 dbms_stats.gather_table_stats
3 ( ownname=>user, tabname=>'T1');
4 dbms_stats.gather_table_stats
5 ( ownname=>user, tabname=>'T2');
6 end;
7 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.23
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view log on t1 with
2 sequence,rowid (deptno,dname,loc) including new values;
Materialized view log created.
Elapsed: 00:00:00.03
rajesh@ORA11GR2> create materialized view log on t2
2 with sequence,rowid (deptno,empno,ename,hiredate,sal)
3 including new values;
Materialized view log created.
Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view t1_t2_mv
2 build immediate
3 refresh fast on demand
4 enable query rewrite as
5 select t1.deptno,count(t2.empno)
6 from t1, t2
7 where t1.deptno = t2.deptno
8 group by t1.deptno;
Materialized view created.
Elapsed: 00:00:00.12
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.15
rajesh@ORA11GR2> select count(*) from t1_t2_mv;
COUNT(*)
----------
3
Elapsed: 00:00:00.00
rajesh@ORA11GR2>
rajesh@ORA11GR2> select last_refresh_date, staleness,
2 refresh_mode, refresh_method
3 from user_mviews;
LAST_REFRESH_DATE STALENESS REFRES REFRESH_
-------------------- ------------------- ------ --------
02-jul-2012 17:40:39 FRESH DEMAND FAST
Elapsed: 00:00:00.01
rajesh@ORA11GR2> exec dbms_lock.sleep(10);
PL/SQL procedure successfully completed.
Elapsed: 00:00:10.01
rajesh@ORA11GR2> insert into t1(deptno,dname,loc)
2 select rownum +40,'aaa','bbb'
3 from all_users;
32 rows created.
Elapsed: 00:00:00.00
rajesh@ORA11GR2> commit;
Commit complete.
Elapsed: 00:00:00.00
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.40
rajesh@ORA11GR2> select count(*) from t1_t2_mv;
COUNT(*)
----------
3
Elapsed: 00:00:00.11
rajesh@ORA11GR2> select last_refresh_date, staleness,
2 refresh_mode, refresh_method
3 from user_mviews;
LAST_REFRESH_DATE STALENESS REFRES REFRESH_
-------------------- ------------------- ------ --------
02-jul-2012 17:41:26 FRESH DEMAND FAST
Elapsed: 00:00:00.36
rajesh@ORA11GR2>
July 02, 2012 - 9:03 am UTC
did you verify if any work was actually done????
we have to check, we did a "refresh", but what amount of *work* was performed?????
think about this: if you create a materialized view, do nothing but wait for a bit, and then refresh it (there is obviously *nothing* to be done) - what will your last refresh time be?
nothing was *done*, but the last refresh time will be advanced (because - well - you refreshed it then, even though nothing was done)
ops$tkyte%ORA11GR2> create table t ( x int primary key, y int );
Table created.
ops$tkyte%ORA11GR2> insert into t values ( 1, 1 );
1 row created.
ops$tkyte%ORA11GR2> create materialized view log on t;
Materialized view log created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create materialized view mv
2 refresh fast on demand
3 as
4 select * from t;
Materialized view created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select to_char( last_refresh_date, 'dd-mon-yyyy hh24:mi:ss' ) from user_mviews;
TO_CHAR(LAST_REFRESH
--------------------
02-jul-2012 10:02:31
ops$tkyte%ORA11GR2> exec dbms_lock.sleep( 5 );
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2> exec dbms_mview.refresh( 'MV' );
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2> select to_char( last_refresh_date, 'dd-mon-yyyy hh24:mi:ss' ) from user_mviews;
TO_CHAR(LAST_REFRESH
--------------------
02-jul-2012 10:02:36
ops$tkyte%ORA11GR2>
Now, was anything *refreshed* really there? What work do you think was done?
PK - FK Optimization on Fast Refresh MV
Rajeshwaran, Jeyabal, July 23, 2012 - 10:34 am UTC
Now, was anything *refreshed* really there? What work do you think was done? - I was trying to run TKPROF to see the work that was done, but I get the error "Error encountered: ORA-10980" in the TKPROF output. Can you help me understand what it means?
I am on 11.2.0.1.0 in Windows.
Tkprof shows me this.
********************************************************************************
The following statement encountered a error during parse:
select t2.deptno,count(*)
from t1, t2
where t1.deptno = t2.deptno
group by t2.deptno
Error encountered: ORA-10980
********************************************************************************
Below is the script I used for Testing.
rajesh@ORA11GR2> create table t1 as
2 select * from emp;
Table created.
Elapsed: 00:00:00.06
rajesh@ORA11GR2>
rajesh@ORA11GR2> create table t2 as
2 select * from dept;
Table created.
Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t2 add constraint
2 t2_pk primary key(deptno);
Table altered.
Elapsed: 00:00:00.07
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t1 add constraint
2 t1_pk foreign key(deptno)
3 references t2;
Table altered.
Elapsed: 00:00:00.01
rajesh@ORA11GR2> alter table t1 modify deptno not null;
Table altered.
Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> begin
2 dbms_stats.gather_table_stats
3 ( ownname=>user, tabname=>'T1');
4 dbms_stats.gather_table_stats
5 ( ownname=>user, tabname=>'T2');
6 end;
7 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.65
rajesh@ORA11GR2> create materialized view log on t1 with rowid,sequence
2 ( empno,ename,job,mgr,hiredate,sal,comm,deptno )
3 including new values;
Materialized view log created.
Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view log on t2 with rowid,sequence
2 ( deptno,dname,loc )
3 including new values;
Materialized view log created.
Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view t1_t2_mv
2 build immediate
3 refresh fast on demand
4 enable query rewrite as
5 select t2.deptno,count(*)
6 from t1, t2
7 where t1.deptno = t2.deptno
8 group by t2.deptno;
Materialized view created.
Elapsed: 00:00:00.12
rajesh@ORA11GR2>
rajesh@ORA11GR2> insert into t2(deptno,dname,loc)
2 select rownum,'xxx','xxx'
3 from dual
4 connect by level <=7;
7 rows created.
Elapsed: 00:00:00.09
rajesh@ORA11GR2> commit;
Commit complete.
Elapsed: 00:00:00.00
rajesh@ORA11GR2>
rajesh@ORA11GR2> @tktrace.sql
Session altered.
Elapsed: 00:00:00.03
Session altered.
Elapsed: 00:00:00.00
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.57
rajesh@ORA11GR2>
rajesh@ORA11GR2> @tkfilename.sql
RTRIM(C.VALUE,'/')||'/'||D.INSTANCE_NAME||'_ORA_'||LTRIM(TO_CHAR(A.SPID))||'.TRC'
----------------------------------------------------------------------------------
d:\app\179818\diag\rdbms\ora11gr2\ora11gr2\trace/ora11gr2_ora_5068.trc
Elapsed: 00:00:00.07
rajesh@ORA11GR2>
July 30, 2012 - 8:57 am UTC
that is ok, don't you see the rest of the sql in there????
PK - FK Optimization on Fast Refresh MV
Rajeshwaran, Jeyabal, August 01, 2012 - 5:15 am UTC
that is ok, don't you see the rest of the sql in there???? Tom: I think I got it now.
1) With the PK-FK constraint present, the MV is **NOT** refreshed when data is loaded only into the PK table.
2) But without the constraints, the MV does get refreshed for data loads only into the PK table.
Running the above script with the PK-FK constraint in place, I see only the SQL statement below in the TKPROF file and nothing else.
********************************************************************************
SQL ID: cz5rn28q5d7su
Plan Hash: 0
ALTER SUMMARY "RAJESH"."T1_T2_MV" COMPILE
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 85 0 0
Execute 1 0.00 0.01 0 0 1 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.03 0 85 1 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 84 (recursive depth: 1)
********************************************************************************
But without the PK-FK constraint I see this in the TKPROF trace file:
rajesh@ORA11GR2> create table t1 as select * from dept;
Table created.
Elapsed: 00:00:00.00
rajesh@ORA11GR2> create table t2 as select * from emp;
Table created.
Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> begin
2 dbms_stats.gather_table_stats
3 ( ownname=>user, tabname=>'T1');
4 dbms_stats.gather_table_stats
5 ( ownname=>user, tabname=>'T2');
6 end;
7 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.20
rajesh@ORA11GR2> create materialized view log on t1 with
2 sequence,rowid (deptno,dname,loc)
3 including new values;
Materialized view log created.
Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view log on t2
2 with sequence,rowid (deptno,empno,ename,hiredate,sal)
3 including new values;
Materialized view log created.
Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> drop materialized view t1_t2_mv;
Materialized view dropped.
Elapsed: 00:00:00.20
rajesh@ORA11GR2> create materialized view t1_t2_mv
2 build immediate
3 refresh fast on demand
4 enable query rewrite as
5 select t1.deptno,count(t2.empno)
6 from t1, t2
7 where t1.deptno = t2.deptno
8 group by t1.deptno;
Materialized view created.
Elapsed: 00:00:00.07
rajesh@ORA11GR2>
rajesh@ORA11GR2> insert into t1(deptno,dname,loc)
2 select rownum +40,'aaa','bbb'
3 from all_users;
33 rows created.
Elapsed: 00:00:00.01
rajesh@ORA11GR2> commit;
Commit complete.
Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> @tktrace.sql
Session altered.
Elapsed: 00:00:00.00
Session altered.
Elapsed: 00:00:00.04
rajesh@ORA11GR2> exec dbms_mview.refresh('T1_T2_MV');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.40
MERGE INTO "RAJESH"."T1_T2_MV" "SNA$" USING ( WITH "TMPDLT$_T1" AS ( SELECT
/*+ RESULT_CACHE(LIFETIME=SESSION) */ "MAS$"."RID$" "RID$" ,
"MAS$"."DEPTNO", DECODE("MAS$"."OLD_NEW$$", 'N', 'I', 'D') "DML$$",
"MAS$"."OLD_NEW$$" "OLD_NEW$$", "MAS$"."TIME$$" "TIME$$",
"MAS$"."DMLTYPE$$" "DMLTYPE$$" FROM (SELECT "MAS$".*, MIN("MAS$"."SEQ$$")
OVER (PARTITION BY "MAS$"."RID$") "MINSEQ$$", MAX("MAS$"."SEQ$$") OVER
(PARTITION BY "MAS$"."RID$") "MAXSEQ$$" FROM (SELECT
CHARTOROWID("MAS$"."M_ROW$$") RID$ , "MAS$"."DEPTNO" ,
DECODE("MAS$".OLD_NEW$$, 'N', 'I', 'D') DML$$, "MAS$"."DMLTYPE$$"
"DMLTYPE$$", "MAS$"."SEQUENCE$$" "SEQ$$", "MAS$"."OLD_NEW$$" "OLD_NEW$$",
"MAS$"."SNAPTIME$$" "TIME$$" FROM "RAJESH"."MLOG$_T1" "MAS$" WHERE
"MAS$".SNAPTIME$$ > :B_ST1 ) AS OF SNAPSHOT(:B_SCN) "MAS$" ) "MAS$" WHERE
((("MAS$"."OLD_NEW$$" = 'N') AND ("MAS$"."SEQ$$" = "MAS$"."MAXSEQ$$")) OR
(("MAS$"."OLD_NEW$$" IN ('O', 'U')) AND ("MAS$"."SEQ$$" =
"MAS$"."MINSEQ$$"))) ) SELECT /*+ OPT_ESTIMATE(QUERY_BLOCK MAX=1000)
*/ "DLT$1"."DEPTNO" "GB0", SUM(1* DECODE(("MAS$0"."EMPNO"), NULL, 0, 1))
"D0" FROM (SELECT "MAS$"."ROWID" "RID$" , "MAS$"."DEPTNO", "MAS$"."EMPNO"
FROM "RAJESH"."T2" "MAS$") AS OF SNAPSHOT(:B_SCN) "MAS$0" , (SELECT
"MAS$"."RID$" "RID$" , "MAS$"."DEPTNO" , "MAS$"."DML$$" "DML$$" FROM
"TMPDLT$_T1" "MAS$") AS OF SNAPSHOT(:B_SCN) "DLT$1" WHERE
("DLT$1"."DEPTNO"="MAS$0"."DEPTNO") GROUP BY "DLT$1"."DEPTNO")"AV$" ON
(SYS_OP_MAP_NONNULL("SNA$"."DEPTNO")=SYS_OP_MAP_NONNULL("AV$"."GB0")) WHEN
MATCHED THEN UPDATE SET "SNA$"."COUNT(T2.EMPNO)"="SNA$"."COUNT(T2.EMPNO)
"+"AV$"."D0" WHEN NOT MATCHED THEN INSERT ("SNA$"."DEPTNO",
"SNA$"."COUNT(T2.EMPNO)") VALUES ( "AV$"."GB0", "AV$"."D0")
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 16 14 0
Execute 1 0.01 0.01 0 23 12 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.03 0.02 0 39 26 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 84 (recursive depth: 1)
Rows Row Source Operation
------- ---------------------------------------------------
0 MERGE T1_T2_MV (cr=3 pr=0 pw=0 time=0 us)
0 VIEW (cr=3 pr=0 pw=0 time=0 us)
0 HASH JOIN OUTER (cr=3 pr=0 pw=0 time=0 us cost=17 size=9856 card=154)
0 VIEW (cr=3 pr=0 pw=0 time=0 us cost=14 size=4004 card=154)
0 SORT GROUP BY (cr=3 pr=0 pw=0 time=0 us cost=14 size=3080 card=154)
0 HASH JOIN (cr=3 pr=0 pw=0 time=0 us cost=13 size=3080 card=154)
14 TABLE ACCESS FULL T2 (cr=3 pr=0 pw=0 time=0 us cost=5 size=98 card=14)
33 VIEW (cr=0 pr=0 pw=0 time=0 us cost=7 size=429 card=33)
33 RESULT CACHE cjrh7cuqxh05p46xp5zy2jfg8q (cr=0 pr=0 pw=0 time=0 us)
0 VIEW (cr=0 pr=0 pw=0 time=0 us cost=7 size=2541 card=33)
0 WINDOW SORT (cr=0 pr=0 pw=0 time=0 us cost=7 size=5544 card=33)
0 TABLE ACCESS FULL MLOG$_T1 (cr=0 pr=0 pw=0 time=0 us cost=6 size=5544 card=33)
0 MAT_VIEW ACCESS FULL T1_T2_MV (cr=0 pr=0 pw=0 time=0 us cost=3 size=114 card=3)
August 01, 2012 - 6:57 am UTC
I have no idea why you posted all of this. any reason?
Partitioned Materialised view
Vaishali G, September 25, 2012 - 9:00 am UTC
Hi Tom, can we create a partitioned materialized view based on a query that joins two partitioned tables? How can we use DBMS_MVIEW in this case?
September 27, 2012 - 8:23 am UTC
yes you can, you would use dbms_mview exactly the same as you would for a non-partitioned table.
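For illustration only - a range-partitioned MV over a join of two tables; the sales/customers names and their columns are made up:

```sql
create materialized view sales_by_cust_mv
partition by range (sale_date)
( partition p2012 values less than (date '2013-01-01'),
  partition p2013 values less than (date '2014-01-01') )
build immediate
refresh on demand
as
select s.sale_date, s.amount, c.cust_name
from   sales s, customers c
where  s.cust_id = c.cust_id;

-- refreshed exactly as any other MV:
exec dbms_mview.refresh( 'SALES_BY_CUST_MV' )
```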
REFRESH_DEPENDENT
Rajeshwaran Jeyabal, October 25, 2012 - 6:58 am UTC
Tom,
http://docs.oracle.com/cd/E11882_01/server.112/e25554/refresh.htm#sthref451 <quote>
The third procedure, DBMS_MVIEW.REFRESH_DEPENDENT, refreshes only those materialized views that depend on a specific table or list of tables. For example, suppose the changes have been received for the orders table but not for customer payments. The refresh dependent procedure can be called to refresh only those materialized views that reference the orders table.
</quote>
Let's pretend Orders is T1 and Customer Payments is T2.
Say
1) I have two tables 'T1' and 'T2' and MV defined on each 'T1_MV' and 'T2_MV'.
2) Delete couple of records from 'T1' and commit.
3) Invoke dbms_mview.refresh_dependent on 'T1'.
Since we made changes to T1 and not to T2, and ran refresh_dependent on 'T1', T2's MV should not be refreshed, per the above quote from the docs. But both T1_MV and T2_MV are being refreshed. Is this expected behaviour?
If needed I can show you what I am seeing. (I am on 11.2.0.1 on Windows 32-bit)
October 25, 2012 - 9:14 am UTC
show me in as SMALL an example as you possibly can code - make it teeny tiny, it shouldn't be very large.
REFRESH_DEPENDENT
Rajeshwaran Jeyabal, October 25, 2012 - 10:22 am UTC
Here is what i have for you.
rajesh@ORA11G> create table t1
2 nologging as
3 select *
4 from all_objects;
Table created.
Elapsed: 00:00:01.62
rajesh@ORA11G>
rajesh@ORA11G> create table t2
2 nologging as
3 select *
4 from all_objects;
Table created.
Elapsed: 00:00:01.58
rajesh@ORA11G>
rajesh@ORA11G> begin
2 dbms_stats.gather_table_stats(user,'T1');
3 dbms_stats.gather_table_stats(user,'T2');
4 end;
5 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.60
rajesh@ORA11G>
rajesh@ORA11G> create materialized view log on
2 t1 with rowid,sequence (owner,object_name,object_type,object_id)
3 including new values;
Materialized view log created.
Elapsed: 00:00:00.01
rajesh@ORA11G>
rajesh@ORA11G> create materialized view log on
2 t2 with rowid,sequence (owner,object_name,object_type,object_id)
3 including new values;
Materialized view log created.
Elapsed: 00:00:00.01
rajesh@ORA11G>
rajesh@ORA11G> create materialized view t1_mv
2 build immediate
3 refresh fast on demand
4 enable query rewrite
5 as
6 select owner,count(object_name),
7 max(object_name),count(*)
8 from t1
9 group by owner;
Materialized view created.
Elapsed: 00:00:00.06
rajesh@ORA11G>
rajesh@ORA11G> create materialized view t2_mv
2 build immediate
3 refresh fast on demand
4 enable query rewrite
5 as
6 select owner,count(object_name),
7 max(object_name),count(*)
8 from t1
9 group by owner;
Materialized view created.
Elapsed: 00:00:00.06
rajesh@ORA11G> select mview_name ,
2 last_refresh_date
3 from user_mviews;
MVIEW_NAME LAST_REFRESH_DATE
------------------------------ -----------------------
T2_MV 25-OCT-2012 08:50:07 PM
T1_MV 25-OCT-2012 08:50:07 PM
Elapsed: 00:00:00.03
rajesh@ORA11G>
rajesh@ORA11G> delete from t1 where rownum <=50;
50 rows deleted.
Elapsed: 00:00:00.07
rajesh@ORA11G> commit;
Commit complete.
Elapsed: 00:00:00.00
rajesh@ORA11G>
rajesh@ORA11G> variable x number;
rajesh@ORA11G>
rajesh@ORA11G> begin
2 dbms_mview.refresh_dependent
3 ( number_of_failures =>:x,
4 list =>'T1' );
5 end;
6 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.33
rajesh@ORA11G> select mview_name ,
2 last_refresh_date
3 from user_mviews;
MVIEW_NAME LAST_REFRESH_DATE
------------------------------ -----------------------
T2_MV 25-OCT-2012 08:50:23 PM
T1_MV 25-OCT-2012 08:50:23 PM
Elapsed: 00:00:00.00
rajesh@ORA11G>
October 25, 2012 - 11:32 am UTC
both materialized views are dependent on T1.
?????
REFRESH_DEPENDENT
Rajeshwaran Jeyabal, October 26, 2012 - 2:38 am UTC
Thanks Tom. I made the changes and it's working fine now!
Ora-32345
Vikas, May 07, 2013 - 2:58 pm UTC
Hi tom,
I would request your help in fixing "ORA-32345 - materialized view refresh - job failed due to change in synonym". I have an 11.2 environment here, and there are other MVs as well which are running smoothly. I am confident it will take you only seconds to crack it.
Please note that the result was the same even after restoring the synonym and recreating the MV.
Many Thanks
Vikas
May 07, 2013 - 3:49 pm UTC
I have no idea what you are referring to.
Materialised Views
Vikas, May 09, 2013 - 3:28 pm UTC
Hi Tom,
We have created materialized views using synonyms and database links, and they refresh at some time interval. The total count is four, of which three are working well; however, the fourth one is failing with the error discussed above.
I followed this up on Metalink and found a document explaining that this is a bug which can be resolved by applying a patch. On the other hand, when I look at my environment, I can clearly see that all my other materialized views are working fine. I am in doubt as to how they could work if Oracle says this is a bug.
I request your attention to resolving this at priority, as this is a live environment.
Many Thanks,
Vikas
May 09, 2013 - 5:03 pm UTC
I'll have to refer you to support, especially in light of what you say in your second paragraph. It isn't anything I can reproduce.
refresh MV with master tables on remote DB
Tony Fernandez, July 15, 2013 - 4:18 pm UTC
Tom,
We have a database ( 11gR2 ) with materialized views depending on master tables located on a remote database.
The local database uses a db-link to the remote database, and this remote database in turn is not available 100% of the time but more like 80%.
What would be the best approach to refresh the materialized views on the local db, or better yet, what refresh parameters should we use in the materialized views definition.
Thanks,
July 16, 2013 - 4:55 pm UTC
materialized views are built to be OK with down remote databases - the refresh will be retried up to 16 times before giving up and making you refresh them manually. nothing special needs to be done.
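For MVs refreshed through the classic job queue (the START WITH / NEXT clause), that running failure count is visible in the job views - a sketch:

```sql
-- FAILURES counts consecutive failed attempts; at 16 the job is
-- marked BROKEN and the MV must then be refreshed manually
select job, what, failures, broken
from   user_jobs
where  lower(what) like '%dbms_refresh%'
   or  lower(what) like '%dbms_mview%';
```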
fast or on demand
Tony Fernandez, July 15, 2013 - 4:23 pm UTC
Tom,
To add to my question above, is there a rule of thumb in implementing refresh on commit or fast refresh, for the scenario described above?
Regards,
July 16, 2013 - 4:55 pm UTC
you are distributed, you cannot refresh on commit.
you can only refresh on a schedule or on demand.
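A hedged sketch of such a schedule-driven definition (table and database-link names are made up); if the link is down at refresh time, the job queue retries automatically on its back-off schedule:

```sql
-- attempt a fast refresh hourly against the remote master
create materialized view remote_mv
refresh fast
start with sysdate next sysdate + 1/24
as
select * from some_table@remote_db;
```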
Simultaneous refresh
Gennady, October 28, 2013 - 6:45 am UTC
Tom,
I have a dozen MVs built on complex joins of master tables (not aggregates), which I need to refresh at the same point in time using incremental refresh. I must do this hourly.
Eventually I refreshed them completely (as part of initialization) this way:
dbms_mview.refresh(list => 'mv1,mv2,mv3...', method => 'ccc...', atomic_refresh => false);
And I noticed that Oracle uses TRUNCATE + INSERT /*+ append */ for each MV. Then it rebuilds indexes on them also. And Oracle processed several MVs simultaneously! I saw several sessions working at one moment, and each session processed its own MV.
But when I refreshed the MVs incrementally at the same point in time:
dbms_mview.refresh(list => 'mv1,mv2,mv3...', method => 'fff...', atomic_refresh => true);
then I saw that the MVs got refreshed one by one, with one Oracle session doing the usual DELETE / INSERT / UPDATE...
I.e. Oracle refreshed MV1, then MV2, then MV3... And the total refresh time is often unacceptable for us.
My question is why does Oracle do this? Can't it incrementally refresh several MVs simultaneously like it did that with complete refresh? What can prevent it doing this? It seems only DMLs are used.
Or maybe there is some setting or parameter, that I don't know yet.
Thank you.
November 01, 2013 - 9:10 pm UTC
you told us "we don't want to be atomic, it is ok to do each as a separate transaction"
atomic_refresh => false says you are saying "transactions and atomicity are not necessary for this refresh"
we did it because you quite simply told us to
Simultaneous refresh
Gennady, November 04, 2013 - 11:05 am UTC
Well, I understand this. But I would like to refresh several MVs simultaneously in _atomic_ incremental refresh mode (not one by one) with one REFRESH call. And it seems Oracle doesn't allow this.
November 07, 2013 - 3:04 pm UTC
correct, you would make an individual *atomic* call for each (hence the use of the word atomic, it makes a *call* atomic).
but since a call is really a statement, you could do this easily in a single call really:
declare
    l_list sys.odciVarchar2List := sys.odciVarchar2List( 'a', 'b', 'c' /* ... */ );
begin
    for i in 1 .. l_list.count
    loop
        -- one *atomic* fast refresh call per MV
        dbms_mview.refresh( list => l_list(i), method => 'f', atomic_refresh => true );
    end loop;
end;
/
that is a "single" call
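If the aim is to run those per-MV refreshes concurrently rather than serially, one option (a sketch under the assumed names MV1..MV3, not something the original answer proposed) is to submit one DBMS_SCHEDULER job per MV:

```sql
-- one background job per MV; each job performs its own atomic fast refresh
begin
    for r in ( select mview_name
                 from user_mviews
                where mview_name in ('MV1','MV2','MV3') )
    loop
        dbms_scheduler.create_job(
            job_name   => 'REFRESH_' || r.mview_name,
            job_type   => 'PLSQL_BLOCK',
            job_action => 'begin dbms_mview.refresh( ''' || r.mview_name ||
                          ''', method => ''f'', atomic_refresh => true ); end;',
            enabled    => true,    -- start running immediately
            auto_drop  => true );  -- drop the job once it completes
    end loop;
end;
/
```

Each MV is then its own transaction, so the set of MVs is not mutually consistent at any instant; that is the trade-off versus a single serial call.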
Isn't this what refresh groups are intended to do?
Michael Istinger, November 04, 2013 - 12:58 pm UTC
Hi, Gennady!
If you want multiple MVs to be incrementally refreshed in one _atomic_ operation, you should put them into a refresh group. That is what refresh groups are there for.
All the best
Michael
November 07, 2013 - 3:05 pm UTC
they would have to create a dummy materialized view (MV) in order to create a refresh group (you need more than one MV). and then refresh each group.
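For reference, a refresh group is managed through the DBMS_REFRESH package; a minimal sketch (group and MV names are placeholders):

```sql
begin
    dbms_refresh.make(
        name      => 'MV_GROUP',
        list      => 'MV1,MV2,MV3',
        next_date => sysdate,
        interval  => 'sysdate + 1/24' );  -- refresh the group hourly
end;
/

-- or refresh the whole group on demand, as a single consistent transaction
exec dbms_refresh.refresh( 'MV_GROUP' )
```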
query rewrite - but MV is not in the plan
Andre, March 07, 2014 - 5:05 pm UTC
Hi Tom,
Over 250 users run various reports using BO (Business Objects).
I had traced (10046) a few sessions to see what SQL is submitted to Oracle by Business Objects.
One is a rather complex aggregate accessing 8 tables (in over 30 incarnations in total), and it takes approx. 40 seconds to execute.
I wanted to test, as a proof of concept, whether an MV with query rewrite enabled would be a good solution.
I tried various ways to define the MV, finally opting to use the BO-generated SQL as the definition of a view XXX_POC_V.
Then I created XXX_POC_MV using a SELECT * FROM XXX_POC_V:
...
NOLOGGING
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE
AS
Select * from XXX_POC_V
Next I created two simple PL/SQL procs that define a cursor (one on the view, one on the MV) and retrieve all rows with BULK COLLECT LIMIT 1000 in a loop until done, timing the run end to end.
Although all required parameters are set, the proc operating on the view retrieves data from all the base tables rather than using query rewrite.
The proc reading from "V" runs 35-50 sec while the one reading from "MV" does the same in 2sec or less.
Trace files show the actual execution plans.
What may I be missing here, such that Oracle ignores the fact that there is an MV and fails to rewrite the query to read from it?
Could you please help?
Thanks,
A
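One standard way to see why the optimizer declined to rewrite is DBMS_MVIEW.EXPLAIN_REWRITE, which writes its reasons into a REWRITE_TABLE; a sketch (the query text is a placeholder for the BO-generated SQL, and the statement id is arbitrary):

```sql
-- create the REWRITE_TABLE once, using the script shipped with Oracle:
--   @?/rdbms/admin/utlxrw.sql
begin
    dbms_mview.explain_rewrite(
        query        => 'SELECT /* the BO-generated SQL goes here */ ...',
        mv           => 'XXX_POC_MV',
        statement_id => 'poc1' );
end;
/

select message
  from rewrite_table
 where statement_id = 'poc1';
```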
MV query rewrite
Andre, April 25, 2014 - 11:48 am UTC
Dear Tom,
I posted the above query 6 weeks ago - and you have been busy - however I would REALLY need to understand this.
I found another post on your site explaining how the execution plans can be examined, and that MV query rewrite does not happen if the rewritten plan's cost is higher. That made sense for that particular example, however I cannot see how it applies here:
1. The MV definition is identical to the complex join performed on the base tables, yet the cost via the MV is over 3 times higher.
2. I then redefined the MV as a simple select from a standard view and executed the same read from the view: as before, the MV cost is 3 times higher.
3. YET the actual execution time is 15-20 times faster when using the MV than the same query via the view.
WHY..?
Thank you
Regards
Andre
Why wasn't index created as nologging in your sample test?
TomS, October 23, 2023 - 11:17 am UTC
Does "c) rebuild indexes with nologging" in your answer imply that nologging only works with rebuild? You initially created the index without specifying nologging (why didn't you specify it there too?).
October 30, 2023 - 5:34 am UTC
NOLOGGING is an attribute of an index that is applicable only at build/rebuild time.
So I can do:
create index IX ... nologging;
alter index IX rebuild nologging;
but I can also do
alter index IX nologging;
which means *next* time I rebuild it, it will be done nologging.
Also check out "force logging", which a DBA can use to override this.
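The current setting can be verified in the dictionary; a quick sketch (the index name is from the example above):

```sql
select index_name, logging
  from user_indexes
 where index_name = 'IX';
-- LOGGING shows 'NO' once the index carries the nologging attribute
```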
Create MV does not generate REDO, but refresh DOES
Pawel, September 26, 2024 - 9:02 pm UTC
Hello,
18XE, FORCE_LOGGING=NO
Create MV does not generate REDO, but refresh DOES
SQL> column value new_val V
SQL> select a.name, b.value
       from v$statname a, v$mystat b
      where a.statistic# = b.statistic#
        and a.name = 'redo size';

NAME                                VALUE
------------------------------ ----------
redo size                       191983812

SQL> CREATE MATERIALIZED VIEW "JSYSTEMS"."MV_TEST"
     NOLOGGING
     TABLESPACE "JSYSTEMS"
     BUILD IMMEDIATE
     AS select * from dba_objects;

Materialized view created.
Elapsed: 00:00:00.99

SQL> select a.name, b.value, to_char( b.value - &V, '999,999,999,999' ) diff
       from v$statname a, v$mystat b
      where a.statistic# = b.statistic#
        and a.name = 'redo size';

NAME                                VALUE DIFF
------------------------------ ---------- ----------------
redo size                       192160396          176,584

SQL> exec DBMS_MVIEW.REFRESH('jsystems.MV_TEST','c');

PL/SQL procedure successfully completed.
Elapsed: 00:00:01.03

SQL> select a.name, b.value, to_char( b.value - &V, '999,999,999,999' ) diff
       from v$statname a, v$mystat b
      where a.statistic# = b.statistic#
        and a.name = 'redo size';

NAME                                VALUE DIFF
------------------------------ ---------- ----------------
redo size                       217810664       25,650,268
September 30, 2024 - 7:14 am UTC
Think about what CREATE for an mview does (in a database without archive logging). It is equivalent to a CREATE TABLE AS SELECT:
- update a few data dictionary entries
- write a stack of blocks that do *not* need to be recovered (because archive logging is not enabled)
If the database crashed during this, all we need to do is sort out the dictionary mess to remove the evidence of the mview.
Thus... just a tiny bit of redo.
Compare that to a refresh, which is equivalent to DELETE / INSERT-SELECT (there are other options, but this is the default):
- I'm wiping a lot of blocks
- I'm inserting a lot of blocks
If the database crashes during this, I need to be able to *undo* this transaction, so that's a lot of redo I need to log.
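As shown earlier in this thread with atomic_refresh => false, a non-atomic complete refresh of a NOLOGGING mview avoids most of that redo, because it becomes TRUNCATE plus direct-path INSERT; a sketch (the trade-off is that the MV is briefly empty while the refresh runs):

```sql
begin
    dbms_mview.refresh(
        list           => 'JSYSTEMS.MV_TEST',
        method         => 'C',
        atomic_refresh => false );  -- TRUNCATE + INSERT /*+ APPEND */; little redo for a NOLOGGING MV
end;
/
```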