
Question and Answer

Tom Kyte

Thanks for the question, Sam.

Asked: March 11, 2001 - 1:53 am UTC

Last updated: September 23, 2013 - 6:15 pm UTC

Version: 8.1.5

Viewed 10K+ times

You Asked

Hello Tom:

What I know so far about Materialized Views is that "Materialized views are schema objects that can be used to summarize, precompute, replicate, and distribute data".
1. In data warehouses, materialized views are used to precompute and store aggregated data such as sums and averages.
2. In distributed environments, materialized views (also called snapshots) are used to replicate data at distributed sites and synchronize updates done at several sites with conflict resolution methods.
3. In mobile computing environments, materialized views are used to download a subset of data from central servers to mobile clients, with periodic refreshes from the central servers and propagation of updates by clients back to the central servers.

Basically, what I am looking for:
1. If CREATE MATERIALIZED VIEW is a synonym for CREATE SNAPSHOT, why does the CREATE SNAPSHOT SQL still exist?
2. Can you give me an example of:
a. How I can use a materialized view, instead of a snapshot, for basic replication?
b. How I can use a materialized view in a data warehousing environment?
3. Where can I find more detail about materialized views?

Best Regards,
Sam


and Tom said...

1) Because CREATE SNAPSHOT is the syntax that has been in use since Oracle version 7.0 in 1993. With 8i, the functionality was greatly enhanced to encompass the materialized view concept, and the feature is being renamed from snapshot to materialized view. For backwards compatibility, snapshot is and will continue to be supported.

2a) CREATE SNAPSHOT and CREATE MATERIALIZED VIEW are synonymous; just use CREATE MATERIALIZED VIEW and you have done a CREATE SNAPSHOT, and vice versa.

2b) you would create a MV that aggregates some info, for example:

create materialized view emp_rollback
enable query rewrite
as
select deptno, sum(sal) sal
from emp
group by deptno;


Now, given that all the necessary settings have been done (see the data warehousing guide for a comprehensive example) your end users can query:

select deptno, sum(sal) from emp where deptno in ( 10, 20) group by deptno;

and the database engine will rewrite the query to go against the precomputed rollup, not the details -- giving you the answer in a fraction of the time it would normally take.
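The "necessary settings" typically amount to enabling query rewrite for the session or instance; a minimal sketch (parameter names as of 8i/9i -- check your version's documentation for the details):

```sql
-- Allow the optimizer to rewrite queries to use MVs. Often set
-- instance-wide in the init.ora rather than per session.
alter session set query_rewrite_enabled = true;

-- ENFORCED (the default) only rewrites when Oracle can guarantee the
-- same answer; TRUSTED and STALE_TOLERATED progressively relax that.
alter session set query_rewrite_integrity = enforced;
```

The owner of the materialized view also needs the QUERY REWRITE (or GLOBAL QUERY REWRITE) privilege for rewrite to take place.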

3)
http://docs.oracle.com/cd/A81042_01/DOC/server.816/a76994/toc.htm
has it all.


Comments

Materialized Views

Sam, March 11, 2001 - 11:10 am UTC


Only part of the example worked

Pranav Shah, March 25, 2002 - 5:21 pm UTC

I created a materialized view, but when I write a select statement with a where clause it looks at the table rather than the materialized view.

Tom Kyte
March 25, 2002 - 7:01 pm UTC

And your example is 100% missing so I have no clue what you tried, what you did. Sorry.

Materialised view

kit, March 26, 2002 - 11:56 am UTC

When u say it would give u results quicker do you mean as the snapshot was taken during off peak hours

If yes, if the user wanted to include the latest info how would that work.

I mean how can it know to look at the snapshot or direct table

many thanks

Tom Kyte
March 26, 2002 - 3:03 pm UTC

you is a word, not a letter (does anyone else find it really hard to read text with "u" instead of you? I sure do.... anyway...)

I mean it is quicker to query against:

create materialized view emp_rollback
enable query rewrite
as
select deptno, sum(sal) sal
from emp
group by deptno;


than the plain EMP table if your query is:

select deptno, sum(sal) from emp where deptno in ( 10, 20) group by deptno;



That is what I meant. MVs are designed for data warehouses, where the data is relatively static, so the currency-of-data issue rarely comes up. When it does, you can sometimes use "on commit refresh" to maintain the MV like you would an index (automagically), or you would simply not use the MV to answer the question.
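A minimal sketch of the "on commit refresh" approach mentioned above (assuming the classic SCOTT.EMP shape; a fast-refreshable aggregate MV needs a materialized view log and a COUNT(*)):

```sql
-- Log changes to EMP so the MV can be maintained incrementally.
create materialized view log on emp
  with rowid (deptno, sal) including new values;

-- Refreshed automatically as part of each committing transaction,
-- much like an index is maintained.
create materialized view emp_dept_sal
  build immediate
  refresh fast on commit
  enable query rewrite
  as
  select deptno, sum(sal) sum_sal, count(sal) cnt_sal, count(*) cnt
    from emp
   group by deptno;
```

The trade-off is that every transaction that modifies EMP now pays the cost of maintaining the MV at commit time.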



RE: does anyone else find it really hard...

Lester Burnham, March 26, 2002 - 3:17 pm UTC

A resounding, "YES!"

It drives me round the bend when I have to wade through a pool of "u", "ur", and other such "abbreviations". Maybe this is why the search facility on AskTom (or is that AT?) is so under-utilized? If someone can't be bothered to type a complete word, then how can they be bothered to go through all the trouble of clicking on the "Search/Archives" tab and typing in a search phrase. I love reading AskTom, but if I have to read through one more "I upgraded to 9i, didn't read the documentation, and now I can't find svrmgrl, is this a bug in Oracle?" type question again... OK, moment over.

Quick comment - not really related ...

Chris Slattery, March 27, 2002 - 6:16 am UTC



These abbreviated comments (u = you, ur = your) are a terrible habit we are picking up due to the widespread usage of GSM SMS messaging. It hasn't caught on in the USA yet... just wait!



Materialised view

kit, March 27, 2002 - 6:51 am UTC

Sorry for the U word.

Distinct values

Alvin, August 07, 2003 - 4:04 am UTC

Because of faulty designs, and developers staunchly defending those designs, I need a workaround for a certain query.

I need to query all distinct values for a certain table;

Create table t as
select substr(object_name,1,1) a ,1 b from all_objects;

insert into t select substr(object_name,1,1), 2 b from all_objects;

insert into t select substr(object_name,1,1), 3 b from all_objects;

create materialized view mv_t
enable query rewrite
as select distinct(a),b from t;

select distinct(a)
from t
where b=3;

I know that MV_T can answer the above query but oracle seems to ignore this view. Dimensions don't help because there is no relationship between columns A and B.

It full scans table t.

I think of Indexes as Small virtual tables with rowid's so..

create index i_t on t ( b,a ) compress 1;

works much, much faster, but it has as many rows as the base table, and I know that MV_T has far fewer rows and should be able to answer the above query.

1. Is an index really my solution, or is it possible for me to make use of materialized views in this case?

Is it because DISTINCT is not considered an aggregate by Oracle?


Tom Kyte
August 09, 2003 - 3:50 pm UTC

I added a data column to T so as to make it a tad "wider" (could be yours was too "narrow" and the cost of full scanning it wasn't very high)

Also, I did the necessary query rewrite settings (don't know if you did)

Also, I supplied my version :) which is Oracle9iR2

ops$tkyte@ORA920> Create table t as
  2  select substr(object_name,1,1) a , b, rpad('x',50,'x') data
  3    from all_objects, (select 1 b from dual union all select 2 b from dual union all select 3 b from dual );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> create materialized view mv_t
  2  enable query rewrite
  3  as select distinct a,b from t;

Materialized view created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> alter session set query_rewrite_enabled=true;

Session altered.

ops$tkyte@ORA920> alter session set query_rewrite_integrity=trusted;

Session altered.

ops$tkyte@ORA920>
ops$tkyte@ORA920> analyze table t compute statistics;

Table analyzed.

ops$tkyte@ORA920> analyze table mv_t compute statistics;

Table analyzed.

ops$tkyte@ORA920>
ops$tkyte@ORA920> set autotrace traceonly explain
ops$tkyte@ORA920> select distinct a
  2    from t
  3   where b = 3;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=29 Bytes=87)
   1    0   SORT (UNIQUE) (Cost=4 Card=29 Bytes=87)
   2    1     TABLE ACCESS (FULL) OF 'MV_T' (Cost=3 Card=41 Bytes=123)



ops$tkyte@ORA920> set autotrace off


<b>and it worked... It worked even with your extra/unneeded () around A.  Distinct is a set operator -- not a function!</b>


I did this in 817 as well -- I've got a feeling you are missing the alter session settings (that you would normally set in the init.ora)?


 

Vipin, September 22, 2003 - 12:22 pm UTC

Hi tom,

First of all, sorry for not posting this query in a new thread, but I was waiting for very long and was not able to grab a new thread till now.

I have a very peculiar requirement here. I know that the way it is designed to operate is a little weird, but I don't have a choice here (we have a lot of web-services-savvy guys around ;)).

We have two applications :-

System A - One which is supposed to collect some personal data of employees i.e. Emergency contacts
System B - Other which has to use this emergency contact number from the System A for its own processing. This System will call a webservice which will fetch the emergency contact from System A and give us the results. This emergency contact can be modified in System A as well as System B. Upon modification of this data in System B the webservices will update this data in System A(so we need not bother about this.)

Now the problematic area is reporting in System B. For reporting purposes we have, at System B, a materialized view over this emergency contact data in System A, but any newly modified data will not be available in System B until the next refresh, unless the materialized view is refreshed ON COMMIT. But I think that would slow down the commit, and hence slow down the DML on emergency contact data in System A.

So the idea I suggested is to make the materialized view refresh (FAST refresh) ON DEMAND. The DML which modifies the data at System A would call DBMS_JOB, which would submit a job to refresh the materialized view to System B using DBMS_MVIEW.REFRESH; hence the process would be asynchronous. So whether the modification comes from System A users or System B users (via web services), we would have all the latest data in the MVIEW.

Please correct me if this is the wrong way to approach this problem. Would you prefer some other way, if so please advise.



Tom Kyte
September 22, 2003 - 12:44 pm UTC

you cannot refresh on commit over databases -- only IN a database.

why not (in order of "smartness" or "best practice")

a) combine these two databases. you only need one.

b) use a dblink and just query the data, don't copy it.

distributed complexity -- is just that -- distributed complexity.

Vipin, September 22, 2003 - 12:56 pm UTC

That is exactly what I suggested to them at first, but I was cut short because of many non-technical, policy-wise issues ;)

Sorry to constrain you like this, but if the doors are closed for those two approaches you mentioned, would you nod to my approach?

Thanks for your advice.

Tom Kyte
September 22, 2003 - 1:46 pm UTC

I would not want to submit a job for every row update, no.

just refresh on a "fast enough" cycle.
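A "fast enough" cycle could be set up with DBMS_JOB, the job facility of that era; a sketch (the MV name below is hypothetical):

```sql
variable jobno number
begin
  -- Fast-refresh the MV every 5 minutes, independent of any
  -- individual update, instead of queuing one job per modified row.
  dbms_job.submit(
    job       => :jobno,
    what      => 'dbms_mview.refresh(''EMERGENCY_CONTACT_MV'', ''F'');',
    next_date => sysdate,
    interval  => 'sysdate + 5/(24*60)'
  );
  commit;
end;
/
```

One recurring job amortizes the refresh cost over however many rows changed in the interval.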

Vipin, September 22, 2003 - 1:55 pm UTC

Thanks Tom,

But could you please explain why you advise against refreshing asynchronously using SUBMIT for each update?



Tom Kyte
September 22, 2003 - 2:01 pm UTC

you update 5 rows either AT the same time or at ABOUT the same time.

you would queue 5 jobs.

only one of which needs to run, the other 4 just eat resources.




A reader, September 22, 2003 - 2:14 pm UTC

Dear Tom,

I know this is a very vague way of asking this question, but could you please tell me something about this: how many bytes of data is it safe to refresh (without affecting performance) between any two databases, if the refresh interval is, say, about 15 minutes with a fast refresh?

Tom Kyte
September 22, 2003 - 2:25 pm UTC

you are right

it is vague.


"safe" is not quantifiable

"without affecting performance" isn't totally clear either.


you would want to benchmark to see the effect on YOUR hardware with your loads using your data under your working conditions.

anything else is, well, hypothetical and not very useful

Create snapshot & query rewrite

LCLN, September 22, 2003 - 2:47 pm UTC

Hi, Mr. Kyte.

Is it possible to use "enable query rewrite" in a "create snapshot" command?
And yes, it's hard to read when someone uses 'u' instead of 'you'. Another annoying thing is the lack of punctuation.

Thanks in advance.
A reader from Brazil.


Tom Kyte
September 22, 2003 - 8:05 pm UTC

ops$tkyte@ORA920LAP> create table t as select * from scott.emp;

Table created.

ops$tkyte@ORA920LAP> alter table t add constraint t_pk primary key(empno);

Table altered.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> create snapshot t_snap
  2  enable query rewrite
  3  as
  4  select job, count(*) cnt from t group by job;

Materialized view created.



snapshots are just MVs in disguise.  Query rewrite only works within a single database, of course (the base object must exist in the same database as the snapshot)

Create snapshot & query rewrite

LCLN, September 22, 2003 - 2:59 pm UTC

Hi, Mr.Tom.

Please do not consider my last question.

Best regards.


Use materialized views or table

Michael, September 23, 2003 - 6:17 pm UTC

Hello Tom,

I have two tables SAMPLE_RECEIPT(ID,QTY) and
SAMPLE_USAGE(SAMPLE_RECEIPT_ID,QTY). I have two methods for computing remaining quantity

1.
Create MV for both tables summing QTY by ID.
Create view to determine difference in QTY by ID.

2.
Create a new table SAMPLE_REMAINING(SAMPLE_RECEIPT_ID,QTY)
If a new sample is received, insert record into this table.
Otherwise, perform math on this table accordingly for the sample.

What do you think is a better solution? I am using refresh on commit for MV's.

Thank you.


Tom Kyte
September 23, 2003 - 8:09 pm UTC

what is wrong with using sample_receipt and sample_usage and just joining when you need to know?

Materialize Views

Michael, September 24, 2003 - 11:23 am UTC

Here is a sample data.

create table sample_receipt(id number, qty number);
create table sample_usage(sample_receipt_id number, qty number);

insert into sample_receipt values (1,10);
insert into sample_receipt values (1,15);
insert into sample_receipt values (2,20);
insert into sample_usage values (1,5);
insert into sample_usage values (1,2);
insert into sample_usage values (1,1);
commit;

MDINH@test> select a.id, nvl(asum-bsum,0) qty
2 from
3 (select id, sum(qty) asum
4 from sample_receipt
5 group by id) a,
6 (select sample_receipt_id, sum(qty) bsum
7 from sample_usage
8 group by sample_receipt_id) b
9 where a.id=b.sample_receipt_id(+);

ID QTY
---------- ----------
1 17
2 0

MDINH@test>

Is this what you mean by why not just create a query?

How is performance affected if there are millions of records in the tables since the calculation is done every time?

I am a firm believer in not storing results in the database when they can be calculated on the fly. But in this case I am not sure.

Thank you.

Tom Kyte
September 24, 2003 - 12:16 pm UTC

do you plan on querying all of the rows every time -- or just some of them?

how many qty's go into a sum on average (eg: how many records might you avoid scanning if you pre-aggregated)

My query was wrong

Michael, September 24, 2003 - 11:55 am UTC

select a.id, decode(bsum,null,asum,asum-bsum)
from (select id, sum(qty) asum
from sample_receipt
group by id) a,
(select sample_receipt_id, sum(qty) bsum
from sample_usage
group by sample_receipt_id) b
where a.id=b.sample_receipt_id(+);
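For what it is worth, method 1 from the original question could be sketched like this (one MV per table; a fast-refreshable ON COMMIT aggregate MV needs a materialized view log and a COUNT(*) -- shown for sample_receipt only, sample_usage would be analogous):

```sql
-- Log changes so the aggregate can be maintained incrementally.
create materialized view log on sample_receipt
  with rowid (id, qty) including new values;

-- Maintained at commit time; the per-ID sums are always current.
create materialized view mv_receipt
  build immediate
  refresh fast on commit
  as
  select id, sum(qty) total_qty, count(qty) cnt_qty, count(*) cnt
    from sample_receipt
   group by id;
```

Whether this beats computing on the fly depends on how many detail rows collapse into each aggregate row -- which is exactly what Tom asks above.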

mv's

Dave, March 08, 2004 - 3:15 pm UTC

SQL> desc account_poll
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ACCOUNT_ID                                NOT NULL VARCHAR2(35)
 POLL_ID                                   NOT NULL NUMBER(11)
 OVERALL_SCORE                             NOT NULL NUMBER(3)
 SECONDARY_SCORE                           NOT NULL NUMBER(11)
 CPART_ID                                           NUMBER(38)
 FIRST_ACTIVE_DATE_TIME                             DATE
 LAST_ACTIVE_DATE_TIME                              DATE
 FIRST_ALARM_DATE_TIME                              DATE
 LAST_ALARM_DATE_TIME                               DATE
 ACTION_TYPE                                        NUMBER(1)
 ACTIONED_TS                                        NUMBER(11)
 ROAMER                                             NUMBER(1)
 ACCOUNT_TYPE                                       NUMBER(3)
 ACTIONED_POLL_ID                                   NUMBER(11)
 PREPAID                                            NUMBER(1)
 DATA_SCORE                                         NUMBER(7,2)
 ACTIONED_DATE_TIME                                 DATE
 MAX_ALARM_ID                                       NUMBER(38)
 BEH_FRAUD                                          NUMBER(3)
 BEH_INC                                            NUMBER(3)
 BEH_UNUSUAL                                        NUMBER(3)
 HOT_RE                                             NUMBER(3)
 OVERLAP                                            NUMBER(3)
 HIGH_VAL                                           NUMBER(3)
 HIGH_COST                                          NUMBER(3)
 ABS_FEAT                                           NUMBER(3)
 LONG_DUR                                           NUMBER(3)
 DUR_PAT                                            NUMBER(3)
 SHORT_CALL                                         NUMBER(3)
 IMEI_IMSI                                          NUMBER(3)
 IMSI_IMEI                                          NUMBER(3)
 PREPAY                                             NUMBER(3)
 HIGH_FREQ                                          NUMBER(3)
 GENERIC                                            NUMBER(3)
 HOT_3G                                             NUMBER(3)
 SUMMARY                                            NUMBER(3)
 RULES                                              NUMBER(3)
 NOTE                                               NUMBER(1)
 CONFIRMED                                          NUMBER(1)
 PREV_ALLOCATED_USER                                VARCHAR2(30)

Table has circa 50,000 records (but will grow larger)

Issue Query:

INSERT INTO C3G_TEMP_ACCOUNT_LIST 

SELECT AP.CPART_ID CPART_ID,AP.ACCOUNT_ID,
AP.ACCOUNT_TYPE ACCOUNT_TYPE,AP.FIRST_ACTIVE_DATE_TIME
FIRST_ACTIVE_DATE_TIME,SYSDATE - AP.FIRST_ACTIVE_DATE_TIME  ACTIV_DT,
AP.SECONDARY_SCORE SECONDARY_SCORE,UA.ASSIGNED_TO_USER_ID
ASSIGNED_TO_USER_ID,(',' || DECODE(AP.BEH_FRAUD,2,'2,')  ||
DECODE(AP.BEH_INC,39,'39,')  || DECODE(AP.BEH_UNUSUAL,48,'48,')  ||
DECODE(AP.HOT_RE,59,'59,')  || DECODE(AP.OVERLAP,88,'88,')  ||
DECODE(AP.HIGH_VAL,90,'90,')  || DECODE(AP.HIGH_COST,91,'91,')  ||
DECODE(AP.ABS_FEAT,116,'116,')  || DECODE(AP.LONG_DUR,165,'165,')  ||
DECODE(AP.DUR_PAT,170,'170,')  || DECODE(AP.SHORT_CALL,175,'175,')  ||
DECODE(AP.IMEI_IMSI,176,'176,')  || DECODE(AP.IMSI_IMEI,178,'178,')  ||
DECODE(AP.PREPAY,180,'180,')  || DECODE(AP.HIGH_FREQ,185,'185,')  ||
DECODE(AP.GENERIC,210,'210,')  || DECODE(AP.HOT_3G,249,'249,')  ||
DECODE(AP.SUMMARY,256,'256,')  || DECODE(AP.RULES,283,'283,') )
ALARM_STRING,AP.NOTE ANOTE,AP.MAX_ALARM_ID MAX_ALARMS,AP.ACTION_TYPE
ACTION_TYPE,AP.DATA_SCORE DATA_SCORE,AP.CONFIRMED,AP.PREV_ALLOCATED_USER
PREV_ALLOCATED_USER,DECODE(NVL(SIGN(AP.ACTIONED_DATE_TIME -
AP.LAST_ALARM_DATE_TIME ),1),1,0,1),USER,AP.PREPAID   

FROM USER_ACCOUNT UA,ACCOUNT_POLL AP  

WHERE AP.POLL_ID = 71  AND AP.ACCOUNT_TYPE IN 
(SELECT A.ACCOUNT_TYPE FROM VIEW_DEFINITION A, CEREBRUS_USER B WHERE B.VIEW_ID =
A.VIEW_ID AND UPPER(B.USER_ID) = USER) AND AP.ACCOUNT_ID = UA.ACCOUNT_ID(+)
SQL> /

(I know the query is bad - it can change too because of the lack of binds at present)


SQL> set autotrace traceonly
17753 rows created.


Execution Plan
----------------------------------------------------------
   0      INSERT STATEMENT Optimizer=CHOOSE (Cost=162 Card=285 Bytes=49305)
   1    0   HASH JOIN (OUTER) (Cost=162 Card=285 Bytes=49305)
   2    1     HASH JOIN (Cost=160 Card=285 Bytes=39045)
   3    2       VIEW OF 'VW_NSO_1' (Cost=8 Card=2 Bytes=26)
   4    3         SORT (UNIQUE) (Cost=8 Card=2 Bytes=90)
   5    4           HASH JOIN (Cost=3 Card=2 Bytes=90)
   6    5             TABLE ACCESS (FULL) OF 'CEREBRUS_USER' (Cost=1 Card=1 Bytes=19)
   7    5             TABLE ACCESS (FULL) OF 'VIEW_DEFINITION' (Cost=1 Card=164 Bytes=4264)
   8    2       PARTITION HASH (ALL)
   9    8         TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'ACCOUNT_POLL' (Cost=151 Card=14256 Bytes=1767744)
  10    9           INDEX (RANGE SCAN) OF 'I2_ACCOUNT_POLL' (NON-UNIQUE) (Cost=24 Card=14256)
  11    1     TABLE ACCESS (FULL) OF 'USER_ACCOUNT' (Cost=1 Card=443 Bytes=15948)


So I can see it uses the index of account_poll (which is on the column poll_id) - that is good



Now this table, account_poll, gets locked a few times during data processing when it is being updated; this locks users out for up to 2 minutes. So we created a materialized view, as, if I am correct in my understanding, this MV will be used instead of the table where possible.



SQL> create materialized view log on account_poll with primary key including new values

Materialized view log created.

SQL> create materialized view account_poll_mv -- to create an exact copy
build immediate
refresh on commit fast
enable query rewrite 
as select * from account_poll;

Materialized view created.

SQL> create index I1_MV on account_poll_mv (account_id);
SQL> create index I2_MV on account_poll_mv (poll_id);
SQL> exec dbms_stats.gather_table_stats('CEREBRUS','ACCOUNT_POLL', cascade=>true);
SQL> show parameter query

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
query_rewrite_enabled                string      TRUE
query_rewrite_integrity              string      enforced


When I run the same query as before, I get exactly the same plan, so it wasn't using the MV at all

SQL> select operation, object_name from v$sql_plan where object_name like '%MV%';

no rows selected

That proves it.  My question is: when will the MV be used?  Only when the base table has locks, or am I doing something wrong?

Also if I change the query to use the MV directly, I get this plan

17753 rows created.


Execution Plan
----------------------------------------------------------
   0      INSERT STATEMENT Optimizer=CHOOSE (Cost=55 Card=334 Bytes=59452)
   1    0   HASH JOIN (OUTER) (Cost=55 Card=334 Bytes=59452)
   2    1     HASH JOIN (Cost=53 Card=334 Bytes=47428)
   3    2       VIEW OF 'VW_NSO_1' (Cost=8 Card=2 Bytes=26)
   4    3         SORT (UNIQUE) (Cost=8 Card=2 Bytes=90)
   5    4           HASH JOIN (Cost=3 Card=2 Bytes=90)
   6    5             TABLE ACCESS (FULL) OF 'CEREBRUS_USER' (Cost=1 Card=1 Bytes=19)
   7    5             TABLE ACCESS (FULL) OF 'VIEW_DEFINITION' (Cost=1 Card=164 Bytes=4264)
   8    2       TABLE ACCESS (FULL) OF 'ACCOUNT_POLL_MV' (Cost=44 Card=16724 Bytes=2157396)
   9    1     TABLE ACCESS (FULL) OF 'USER_ACCOUNT' (Cost=1 Card=443 Bytes=15948)

Which is very similar, but it is doing a full scan of the MV -- it so happens that the number of consistent gets and I/Os is the same. The execution time is roughly the same as well. Is the CBO being clever and realising an index is not necessary here, or am I being tricked in some way?

So my questions are:

1. When will the MV be used (assuming we don't explicitly code against it)?
2. Why wasn't the index used, when it was used in the original query (same method of gathering stats)?
3. When updating the table (update account_poll ...), will the MV be updated and then the base table, or will it go to the base table right away? This is mainly for when the base table has locks on it -- what will the users' queries do?

Or is my understanding of MVs totally incorrect, and it will only use the MV when the query submitted is select * from account_poll?

If I am totally going down the wrong path, what else do you suggest? The design is not something I can really change -- I just want to stop users getting locked out somehow.

Thanks 

Tom Kyte
March 08, 2004 - 3:22 pm UTC

Let's back up for a moment -- why would you create a read-only MV to get around a "locking" issue? Queries are not blocked by locks in Oracle. What's the motivation here? Users do not, and will not, get locked out of reading by locks.

(But why would it use the MV when the MV is 100% the same as the base table? It would only use an MV if the query would benefit from it, and here it would not.)
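The fact that readers are never blocked is easy to demonstrate with two sessions (a sketch, using the account_poll table from above; the account_id value is hypothetical):

```sql
-- Session 1: update a row and do NOT commit (this takes a row lock).
update account_poll set action_type = 1 where account_id = 'X42';

-- Session 2: this query returns immediately -- readers never wait for
-- writers in Oracle; the pre-update image of the row is reconstructed
-- from undo (rollback) data.
select action_type from account_poll where account_id = 'X42';
```

Only another *writer* of that same row would block, which is the real issue in this thread.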

lockers

Dave, March 08, 2004 - 5:10 pm UTC

Thanks, I will get back to you tomorrow about why the users are getting locked out. There are 3 procedures which all update that account_poll table, one after the other, and while they are running, the users' queries just stop working (the ones that update that table).

I will see what locks are being held and what my queries are waiting on -- but from memory they were all waiting on enqueues

mv

Dave, March 09, 2004 - 5:30 am UTC

I just checked, and it is updates blocking updates. So I need to come up with a way of stopping that; this is probably a design change, but I thought I would ask anyway.

Is there any facility I can use to stop a writer blocking, and have my write 'buffered' in some way?

There is no problem with the writes conflicting with each other either

Thanks

Tom Kyte
March 09, 2004 - 12:08 pm UTC

if you have updates blocking updates -- are you *sure* you don't have write conflicts? (sounds like classic "lost update" issues here)

Dead link

westsan, March 09, 2004 - 2:28 pm UTC

mv

Dave, March 10, 2004 - 8:52 am UTC

Hi Tom, what do you mean by "lost update" issues.

We have two queries one like this

EXECUTE IMMEDIATE
'update /*+ apv */
(select a.cpart_id as a_c,
a.FIRST_ACTIVE_DATE_TIME as f_a,
a.LAST_ACTIVE_DATE_TIME as l_a,
a.FIRST_ALARM_DATE_TIME as f_ad,
a.LAST_ALARM_DATE_TIME as l_ad,
a.ACTION_TYPE as a_t,
a.ACTIONED_DATE_TIME as a_ts,
a.ROAMER as roa,
a.ACCOUNT_TYPE as act_type,
a.ACTIONED_POLL_ID as a_p_id,
a.PREPAID as pre,
a.note as note,
ap.cpart_id as apa_c,
ap.FIRST_ACTIVE_DATE_TIME as apf_a,
ap.LAST_ACTIVE_DATE_TIME as apl_a,
ap.FIRST_ALARM_DATE_TIME as apf_ad,
ap.LAST_ALARM_DATE_TIME as apl_ad,
ap.ACTION_TYPE as apa_t,
ap.ACTIONED_DATE_TIME as apa_ts,
ap.ROAMER as aproa,
ap.ACCOUNT_TYPE as apact_type,
ap.ACTIONED_POLL_ID as apa_p_id,
ap.PREPAID as appre,
ap.note as apnote
from account a, account_poll ap
where ap.account_id=a.account_id
and ap.poll_id = '||p_poll_id||') apv
set apa_c=a_c,
apf_a=f_a,
apl_a=l_a,
apf_ad=f_ad,
apl_ad=l_ad,
apa_t=a_t,
apa_ts=a_ts,
aproa=roa,
apact_type=act_type,
apa_p_id=a_p_id,
appre=pre,
apnote=note';


and then for example

UPDATE account_poll
SET action_type = p_action,
actioned_date_time = SYSDATE,
actioned_ts = alarm_manager.get_time_in_seconds(SYSDATE)
WHERE account_id = p_account_id;


Tom Kyte
March 10, 2004 - 9:30 am UTC

if you have two different sessions that get blocked on each other's updates -- that means session 1's changes are going to be overwritten by session 2's changes, and session 2 will NEVER have looked at session 1's changes. This is called the lost update problem. It is as if session 1 never did anything.

Now, I did not reverse engineer your entire update (you aliased all of the columns making a direct compare hard) but query 1 updates account_poll.action_type, query 2 updates it as well.

if the "big update" is running and has updated account_id = 55 and the little update comes along for id=55, it will block. when it unblocks it will blindly overwrite the changes of the other guy. If that is OK, then so be it -- but it is called in database terms "lost update". You've processed a row without even inspecting it. the application that did the little update may well have

read the row out.....
<other big update starts>
application makes decision to update based on what it read....
application does update <<<=== lost update, bad data


only you can ascertain if this is meaningful to you.
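When the lost update does matter, the classic fix is to read and lock the row in one step before making the decision; a PL/SQL sketch (the bind variable names are hypothetical):

```sql
declare
  l_action_type account_poll.action_type%type;
begin
  -- Lock the row as we read it, so nobody can change it between our
  -- read and our update. NOWAIT raises ORA-00054 instead of blocking
  -- if another session already holds the row lock.
  select action_type
    into l_action_type
    from account_poll
   where account_id = :p_account_id
     for update nowait;

  -- ... make the decision based on l_action_type ...

  update account_poll
     set action_type        = :p_action,
         actioned_date_time = sysdate
   where account_id = :p_account_id;
end;
/
```

With SELECT ... FOR UPDATE, the decision is always made against the row's current, locked state, so the update cannot blindly overwrite an unseen change.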

thanks

A reader, March 10, 2004 - 10:56 am UTC

The lost update here is fine then, as the smaller update will hold the most up-to-date information; when the column gets updated, we want that info and don't care about the change from the bigger update (it's updating it from itself) -- so I am fine with that.

Just wish I could get rid of the blocking :-) but it seems as if I am stuck with it without some design changes

Exact text matching in the MV query

A reader, June 08, 2004 - 11:31 am UTC

Suppose I have a MV containing only joins and the DISTINCT operator i.e. no aggregate functions.

If I modify the text of the query to add some additional WHERE clauses, or even add a no-op like 1=1, it doesn't seem to use the MV (yes, I do have all the session-level query_rewrite settings set)

Does query rewrite using MVs require an exact text match like stored outlines do? Thanks

Tom Kyte
June 08, 2004 - 1:16 pm UTC

distinct is an aggregate set function -- no different than group by <every selected column>

it makes it a "complex" MV, as INTERSECT, MINUS, or UNION would.


Exact text matching in the MV query

A reader, June 08, 2004 - 1:21 pm UTC

Ok, but my question was does query rewrite using MVs require a exact text match on the query like stored outlines do?

In other words, does Oracle "understand" the gist of the query to determine whether the MV can be used, or does it just do some pattern matching? Can you give some examples please? The docs are a tad skimpy on this..

Thanks

Tom Kyte
June 08, 2004 - 1:56 pm UTC

not "exactly exact" -- it requires the same query but whitespace/case doesn't count.


I've found:

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96520/qr.htm#721

to be rather comprehensive.  It even shows how to get information on whether an MV can or cannot be used:

ops$tkyte@ORA9IR2> @?/rdbms/admin/utlxrw.sql
ops$tkyte@ORA9IR2> CREATE TABLE REWRITE_TABLE(
  2                    statement_id          VARCHAR2(30),  -- id for the query
  3                    mv_owner              VARCHAR2(30),  -- owner of the MV
  4                    mv_name               VARCHAR2(30),  -- name of the MV
  5                    sequence              INTEGER,       -- sequence no of the error msg
  6                    query                 VARCHAR2(2000),-- user query
  7                    message               VARCHAR2(512), -- EXPLAIN_REWRITE error msg
  8                    pass                  VARCHAR2(3),   -- rewrite pass no
  9                    flags                 INTEGER,       -- associated flags
 10                    reserved1             INTEGER,       -- currently not used
 11                    reserved2             VARCHAR2(10))  -- currently not used
 12  /
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t1 as select * from all_objects;
 
Table created.
 
ops$tkyte@ORA9IR2> create table t2 as select * from all_users;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view mv
  2  enable query rewrite
  3  as
  4  select distinct t1.owner, t2.created
  5    from t1, t2
  6   where t1.owner = t2.username
  7     and 1=1
  8  /
 
Materialized view created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_schema_stats( user );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> alter session set query_rewrite_enabled=true;
 
Session altered.
 
ops$tkyte@ORA9IR2> alter session set query_rewrite_integrity=trusted;
 
Session altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> begin
  2      dbms_mview.explain_rewrite
  3      ( 'select distinct t1.owner, t2.created from t1, t2 where t1.owner = t2.username and 1=1',
  4        'MV',
  5        'FIRST' );
  6      dbms_mview.explain_rewrite
  7      ( 'select distinct t1.owner, t2.created from t1, t2 where t1.owner = t2.username and 1=1 and 2=2',
  8        'MV',
  9        'SECOND' );
 10  end;
 11  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from rewrite_table
  2  /
 
STATEMENT_ID                   MV_OWNER
------------------------------ ------------------------------
MV_NAME                          SEQUENCE
------------------------------ ----------
QUERY
-------------------------------------------------------------------------------
MESSAGE
-------------------------------------------------------------------------------
PAS      FLAGS  RESERVED1 RESERVED2
--- ---------- ---------- ----------
FIRST                          OPS$TKYTE
MV                                      1
select distinct t1.owner, t2.created from t1, t2 where t1.owner = t2.username a
nd 1=1
QSM-01009: materialized view, MV, matched query text
YES
 
SECOND                         OPS$TKYTE
MV                                      1
select distinct t1.owner, t2.created from t1, t2 where t1.owner = t2.username a
nd 1=1 and 2=2
QSM-01112: WHERE clause of mv, MV, is more restrictive than query
YES

[tkyte@tkyte-pc tkyte]$ oerr qsm 1112
01112, 00000, "WHERE clause of mv, %s, is more restrictive than query"
// *Cause:    The materialized view contains a selection condition in the
//            WHERE clause that is more restrictive than the corresponding
//            selection condition in the request query.
// *Action:   Change the selection criteria in the request query to be at
//            least as restrictive as the selection criteria in the
//            materialized view, or drop and recreate the materialized view
//            using less restrictive selection criteria.
[tkyte@tkyte-pc tkyte]$


<b>Here (and I'm only guessing), I believe the rewrite engine assumes the predicate could contain ORs or the like, so the MV's predicate *could* be more restrictive (although in this case we humans can tell "it ain't so")</b>

 

Referential integrity on Materialized Views

A reader, June 16, 2004 - 6:51 am UTC

Hi Tom

Is it ok to define referential integrity on Materialized Views or is it a bad practice? I have two tables with foreign key references to a Materialized view. The complete refresh of Materialized view fails because it is unable to purge the existing records as it finds dependent child records. Please advise


Tom Kyte
June 16, 2004 - 12:45 pm UTC

then in your case it is obviously "a bad idea".


you can have integrity constraints on MV's -- they should be defined as "deferrable" so the refresh process can defer them all before refreshing. At the end of the refresh they are validated.

In your case however, it would be "a bad idea". The tables that point to the MV will make it IMPOSSIBLE for the MV to be refreshed if someone deletes a row in the base table and you are dependent on it.
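
A minimal sketch of the deferrable pattern Tom describes (the table and constraint names here are invented for illustration): a constraint is defined on the MV's container table as DEFERRABLE, so the refresh process can defer it and validate at the end of the refresh.

```sql
-- Hypothetical names throughout.  Constraints on an MV are added via
-- ALTER TABLE on the MV's container table (same name as the MV).
create materialized view emp_mv
refresh complete on demand
as select empno, sal from emp;

-- DEFERRABLE lets the refresh defer the check and validate at the end.
alter table emp_mv
  add constraint emp_mv_sal_ck check (sal >= 0)
  deferrable initially immediate;
```

What this sketch cannot help with is the reviewer's situation: a foreign key in some *other* table pointing at the MV, which blocks the purge phase of a complete refresh regardless of deferrability.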

Referential Integrity on Materialized views

A reader, June 17, 2004 - 4:53 am UTC

Thanks very much for your feedback

link

A reader, June 25, 2004 - 12:34 pm UTC

Please give the link to the page, where I can find all the factors due to which we cannot create a MV for a given Select.
Thanks

Refresh Complete --in MV

Reader, June 26, 2004 - 7:09 am UTC

How does refresh complete work ?
Does it truncate the table Or updates/inserts ?

Thanks

Tom Kyte
June 26, 2004 - 7:00 pm UTC

depends.

single MV, no refresh groups -- truncate + insert /*+ APPEND */

refresh group -- delete + insert (done as a huge transaction -- all or nothing)
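
The two behaviors Tom describes can be chosen explicitly through dbms_mview.refresh's atomic_refresh parameter; a sketch (the MV name is made up):

```sql
begin
   -- truncate + insert /*+ APPEND */: faster, less undo/redo, but the MV
   -- briefly appears empty to concurrent readers
   dbms_mview.refresh( list => 'MV_WORK', method => 'C',
                       atomic_refresh => false );

   -- delete + insert in a single transaction: all or nothing, readers
   -- never see an empty MV, at the cost of conventional-path DML
   dbms_mview.refresh( list => 'MV_WORK', method => 'C',
                       atomic_refresh => true );
end;
/
```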

materialized view

tp, July 16, 2004 - 9:57 am UTC

hi tom,
I am unable to drop the tablespace containing a materialized view. It gives an error saying that the tablespace contains a materialized view.

Tom Kyte
July 16, 2004 - 1:27 pm UTC

uh huh -- so drop the mv?

MV refresh question

sonali, August 27, 2004 - 12:24 pm UTC

I have a MV say mv_work I have it scheduled as job to refresh it every 2 hours.

When someone changes some data in the work table and goes to cost and rate, there is a stored procedure which refreshes this mv_mwebwork so they can see the changes they just made. They can't wait 2 hours for that to happen in this particular case.

My question is: what happens when someone is running the procedure which refreshes this MV at the same time the MV is refreshed by the server as a job? Which one will have to wait? Will my procedure slow down because of that?

What will happen when 2 people run this procedure at the same time, each refreshing this MV before doing other things? Will they wait for one another?

How do I get around it ?
Is there any place I can check before refresh whether its running at that instant ?

Thanks
Sonali

Tom Kyte
August 27, 2004 - 1:09 pm UTC

will it slow down? maybe, maybe not -- you would have to be the judge of that. do you have the capacity to do both things at the same time....

it probably will be impacted to some degree, you'll have to measure that on your system. depends entirely on how much work needs to be done after all.

the refreshes would go sequentially
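
One way (a sketch, assuming the scheduled refresh was set up via dbms_job as described above, and with the MV name invented) to check whether the background refresh job is running at that instant before kicking off a manual refresh:

```sql
-- Jobs currently executing appear in dba_jobs_running; join back to
-- dba_jobs to see what each running job does.
select j.job, j.what
  from dba_jobs j, dba_jobs_running r
 where j.job = r.job
   and upper(j.what) like '%MV_WORK%';
```

If the query returns a row, the scheduled refresh is in flight and the manual refresh will simply queue up behind it, since the refreshes go sequentially.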

How To Create Materialized View

Waseem, September 06, 2004 - 6:39 am UTC

My question is:
How Can I Create a Materialized View Based on Another Materialized View ?


Tom Kyte
September 06, 2004 - 10:25 am UTC

in 9i, this is a fully supported feature, yes. MV's of MV's

Materialized Views

ARC, September 10, 2004 - 2:53 pm UTC

Tom,

When I am creating a Materialized Views I am getting following error.

GEFDWDEV>@TMP
(TB_CRM015_ORDER_SPLIT.SPLIT_PCT IS NOT NULL AND TB_CRM015_ORDER_SPLIT.SPLIT_PCT <> 0)
*
ERROR at line 157:
ORA-01031: insufficient privileges

Here is the code:
CREATE MATERIALIZED VIEW MV_COPA_INTL
/*
( COPA_DOC_REF_NBR,
COPA_DOC_REF_ITEM_DSC, JOB_ORDER_NUM, ORDER_LINE_ITEM_NUM, INVOICE_NUM,
INVOICE_LINE_ITEM_NUM, POSTING_DT, JOB_ORDER_DT, ORDER_CREDIT_DT,
INVOICE_DT, SUB_PROD_LN_CD, DISTRIBUTION_CHANNEL_CD, CUST_GRP_CD,
PROD_LN_CD, P_L_CD, SLS_DIM_KEY, SLS_TEAM_TIER0_CD,
SLS_TEAM_TIER0_DSC, SLS_TEAM_TIER1_CD, SLS_TEAM_TIER1_DSC, SLS_TEAM_TIER2_CD,
SLS_TEAM_TIER2_DSC, SLS_TEAM_TIER2_POS_CD, SLS_TEAM_TIER2_MGR_CD, SLS_TEAM_TIER2_MGR_DSC,
SLS_TEAM_TIER3_CD, SLS_TEAM_TIER3_DSC, SLS_TEAM_TIER3_POS_CD, SLS_TEAM_TIER4_CD,
SLS_TEAM_TIER4_DSC, SLS_TEAM_TIER4_POS_CD, MATERIAL_NUM, CNTY_CD,
EXPORT_DOMESTIC_FLG, SHORT_LONG_CYCLE_FLG, CUST_GEOG_REGN_DSC, ACCT_ASSIGN_GRP_CD,
ACCT_ASSIGN_GRP_DSC, ALT_CUST_NUM, BILL_TO_CUST_NUM, SHIP_TO_PARTY,
LINE_ITEM_QTY, DAILY_ORDER_NET_AMT, SALE_PRICE_AMT, LIST_PRICE_AMT,
COMPANY_CD, GOODS_ISSUE_DT, SAP_FISCAL_WK_NUM, SAP_FISCAL_MTH_NUM,
SAP_FISCAL_QTR_NUM, SAP_FISCAL_YR_NUM, FISCAL_WK_NUM, FISCAL_MTH_NUM,
FISCAL_QTR_NUM, FISCAL_YR_NUM, MATERIAL_CONTROL_FLG, ORDER_REASON_CD,
CURRENCY_CD, DIVISION_CD, RECORD_TYPE, SOURCE_SYSTEM_ID,
SLS_ACCRUALS_YN_FLG, MANUAL_ADJ_YN_FLG, STOCK_DROP_SD_FLG, COST_AMT,
MARGIN_AMT, CUST_CHANNEL_DSC, ORDER_REASON_CATEGORY_NAM, ORDER_REASON_DSC,
END_USER_CUST_NUM, END_USER_CUST_NAM, END_USER_CUST_CITY_NAM, END_USER_CUST_STATE_CD,
END_USER_CUST_ZIP_CD, END_USER_CUST_CNTY_CD, SALES_PERSON_NAM, DISCOUNT_PCT,
SALE_TYPE_CD, KEY_ID_NUM, REP_SPLIT_PCT_NUM, SIEBEL_PRODUCT_LINE_CD,
REP_ID, REP_NAM, EXTRACT_DT, LAST_UPDATE_DT,
CREATE_DT, PROCESS_LOG_DSC, REP_ELIG_YN_FLG )
*/
BUILD IMMEDIATE
REFRESH FAST
ENABLE QUERY REWRITE
AS SELECT TRIM(TB_SDW022_COPA_SALES.COPA_DOC_REF_NBR) COPA_DOC_REF_NBR,
DECODE(TB_SDW115_LKP_IGC_PRAT_NUM.PART_NUM, NULL, TRIM(TB_SDW022_COPA_SALES.COPA_DOC_REF_ITEM)||'COPAINTL#RS'||DENSE_RANK() OVER (PARTITION BY TB_SDW022_COPA_SALES.COPA_DOC_REF_NBR ORDER BY TB_CRM015_ORDER_SPLIT.SPLIT_PCT)||
DECODE(TB_COM004_DIM_SLS_HIER.SLS_DIM_KEY, NULL, NULL, '-SF'||DENSE_RANK() OVER (PARTITION BY TB_SDW022_COPA_SALES.COPA_DOC_REF_NBR ORDER BY TB_COM004_DIM_SLS_HIER.SLS_DIM_KEY)),
TRIM(TB_SDW022_COPA_SALES.COPA_DOC_REF_ITEM)||'COPAINTL#DEFREV-YM-'||TB_COM001_DIM_TIME.FISCAL_YEAR_YYYY||SUBSTR('00'||TB_COM001_DIM_TIME.FISCAL_MONTH_M,-2,2)||'#RS'||DENSE_RANK() OVER (PARTITION BY TB_SDW022_COPA_SALES.COPA_DOC_REF_NBR ORDER BY TB_CRM015_ORDER_SPLIT.SPLIT_PCT)||
DECODE(TB_COM004_DIM_SLS_HIER.SLS_DIM_KEY, NULL, NULL, '-SF'||DENSE_RANK() OVER (PARTITION BY TB_SDW022_COPA_SALES.COPA_DOC_REF_NBR ORDER BY TB_COM004_DIM_SLS_HIER.SLS_DIM_KEY))) COPA_DOC_REF_ITEM_DSC,
TRIM(TB_SDW022_COPA_SALES.SALES_ORDER_NUMBER) JOB_ORDER_NUM,
TRIM(TB_SDW022_COPA_SALES.SALES_ORDER_ITEM) ORDER_LINE_ITEM_NUM,
TRIM(TB_SDW022_COPA_SALES.DOC_REF_NBR) INVOICE_NUM,
TRIM(TB_SDW022_COPA_SALES.DOC_REF_ITEM) INVOICE_LINE_ITEM_NUM,
TB_SDW022_COPA_SALES.POSTING_DATE POSTING_DT,
TB_SDW022_COPA_SALES.DATE_LINE_ITEM_CREATED JOB_ORDER_DT,
TB_SDW022_COPA_SALES.DATE_LINE_ITEM_CREATED ORDER_CREDIT_DT,
TRUNC(TB_SDW022_COPA_SALES.INVOICE_DATE_CREATED) INVOICE_DT,
TRIM(TB_SDW022_COPA_SALES.SUB_PROD_LINE) SUB_PROD_LN_CD,
TRIM(TB_SDW022_COPA_SALES.DISTRIBUTION_CHANNEL) DISTRIBUTION_CHANNEL_CD,
TRIM(TB_SDW022_COPA_SALES.CUSTOMER_GROUP) CUST_GRP_CD,
TRIM(TB_SDW022_COPA_SALES.PRODUCT_LINE) PROD_LN_CD,
TRIM(TB_SDW022_COPA_SALES.P_AND_L_CODE) P_L_CD,
TB_COM004_DIM_SLS_HIER.SLS_DIM_KEY SLS_DIM_KEY,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER0_CD SLS_TEAM_TIER0_CD,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER0_DSC SLS_TEAM_TIER0_DSC,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER1_CD SLS_TEAM_TIER1_CD,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER1_DSC SLS_TEAM_TIER1_DSC,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER2_CD SLS_TEAM_TIER2_CD,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER2_DSC SLS_TEAM_TIER2_DSC,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER2_POS_CD SLS_TEAM_TIER2_POS_CD,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER2_MGR_CD SLS_TEAM_TIER2_MGR_CD,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER2_MGR_DSC SLS_TEAM_TIER2_MGR_DSC,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER3_CD SLS_TEAM_TIER3_CD,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER3_DSC SLS_TEAM_TIER3_DSC,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER3_POS_CD SLS_TEAM_TIER3_POS_CD,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER4_CD SLS_TEAM_TIER4_CD,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER4_DSC SLS_TEAM_TIER4_DSC,
TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER4_POS_CD SLS_TEAM_TIER4_POS_CD,
TB_SDW022_COPA_SALES.MATERIAL_NUMBER MATERIAL_NUM,
TRIM(TB_SDW022_COPA_SALES.COUNTRY_KEY) CNTY_CD,
DECODE(TB_SDW022_COPA_SALES.COUNTRY_KEY, 'US','DOMESTIC','PR', 'DOMESTIC','EXPORT') EXPORT_DOMESTIC_FLG,
'SHORT' SHORT_LONG_CYCLE_FLG,
DECODE(TRIM(TB_SDW022_COPA_SALES.SALES_DISTRICT),'LATIN',DECODE(TB_SDW022_COPA_SALES.COUNTRY_KEY,'MX','MEXICO',TRIM(TB_SDW022_COPA_SALES.SALES_DISTRICT)),TRIM(TB_SDW022_COPA_SALES.SALES_DISTRICT)) CUST_GEOG_REGN_DSC,
TRIM(TB_SDW022_COPA_SALES.ACCT_ASSIGN_GROUP) ACCT_ASSIGN_GRP_CD,
TB_SDW017_LKP_ACCT_ASSIGN_GRP.ACCT_ASSIGN_GRP_DSC ACCT_ASSIGN_GRP_DSC,
TRIM(TB_SDW022_COPA_SALES.CUSTOMER_NUMBER) ALT_CUST_NUM,
TRIM(TB_SDW022_COPA_SALES.CUSTOMER_NUMBER) BILL_TO_CUST_NUM,
TRIM(TB_SDW022_COPA_SALES.SHIP_TO_PARTY) SHIP_TO_PARTY,
TB_SDW022_COPA_SALES.SALES_QUANTITY LINE_ITEM_QTY,
(TB_SDW022_COPA_SALES.NET_PRICE * TB_CRM015_ORDER_SPLIT.SPLIT_PCT/100) * (NVL(TB_SDW066_MAP_REP_SLSTEAM.SALES_SPLIT_PCT, 1)) DAILY_ORDER_NET_AMT,
DECODE( TB_SDW022_COPA_SALES.SALES_QUANTITY, 0, 0, (TB_SDW022_COPA_SALES.NET_PRICE / TB_SDW022_COPA_SALES.SALES_QUANTITY)) SALE_PRICE_AMT,
0 LIST_PRICE_AMT, --((TB_SDW022_COPA_SALES.NET_PRICE / TB_SDW022_COPA_SALES.SALES_QUANTITY) / ( 1 - (TB_CRM014_ORDER_DET.DISCOUNT_PCT/100))) LIST_PRICE_AMT,
TRIM(TB_SDW022_COPA_SALES.COMPANY_CODE) COMPANY_CD,
TB_SDW022_COPA_SALES.GOODS_ISSUE_DATE GOODS_ISSUE_DT,
TB_COM001_DIM_TIME.SAP_FISCAL_WEEK_W SAP_FISCAL_WK_NUM,
TO_NUMBER(TB_SDW022_COPA_SALES.FISCAL_MONTH) SAP_FISCAL_MTH_NUM,
TB_COM001_DIM_TIME.SAP_FISCAL_QUARTER_Q SAP_FISCAL_QTR_NUM,
TO_NUMBER(TB_SDW022_COPA_SALES.FISCAL_YEAR) SAP_FISCAL_YR_NUM,
TB_COM001_DIM_TIME.FISCAL_WEEK_W FISCAL_WK_NUM,
TB_COM001_DIM_TIME.FISCAL_MONTH_M FISCAL_MTH_NUM,
TB_COM001_DIM_TIME.FISCAL_QUARTER_Q FISCAL_QTR_NUM,
TB_COM001_DIM_TIME.FISCAL_YEAR_YYYY FISCAL_YR_NUM,
TRIM(TB_SDW022_COPA_SALES.MATERIAL_CONTROL_FLAG) MATERIAL_CONTROL_FLG,
TRIM(TB_SDW022_COPA_SALES.REASON_CODE_NEW) ORDER_REASON_CD,
TRIM(TB_SDW022_COPA_SALES.CURRENCY) CURRENCY_CD,
TRIM(TB_SDW022_COPA_SALES.DIVISION) DIVISION_CD,
TRIM(TB_SDW022_COPA_SALES.RECORD_TYPE) RECORD_TYPE,
'SAPFAN' SOURCE_SYSTEM_ID,
'N' SLS_ACCRUALS_YN_FLG,
'N' MANUAL_ADJ_YN_FLG,
DECODE(SUBSTR(TB_SDW022_COPA_SALES.SHIP_TO_PARTY,1,2),'DR','D','S') STOCK_DROP_SD_FLG,
NVL(TB_SDW022_COPA_SALES.CONSOL_MATERIALS,0)+ NVL(TB_SDW022_COPA_SALES.LABOR_IN_COGS,0)+ NVL(TB_SDW022_COPA_SALES.MISC_DIRECT_LBR,0)+NVL(TB_SDW022_COPA_SALES.OVERHEAD_IN_COGS,0)+NVL(TB_SDW022_COPA_SALES.INVENTORY_VARIANCES,0)+NVL(TB_SDW022_COPA_SALES.COGS_CLEARING,0) COST_AMT,
TB_SDW022_COPA_SALES.NET_PRICE - (NVL(TB_SDW022_COPA_SALES.CONSOL_MATERIALS,0)+ NVL(TB_SDW022_COPA_SALES.LABOR_IN_COGS,0)+ NVL(TB_SDW022_COPA_SALES.MISC_DIRECT_LBR,0)+NVL(TB_SDW022_COPA_SALES.OVERHEAD_IN_COGS,0)+NVL(TB_SDW022_COPA_SALES.INVENTORY_VARIANCES,0)+NVL(TB_SDW022_COPA_SALES.COGS_CLEARING,0)) MARGIN_AMT,
TB_SDW026_LKP_CUSTOMER_GROUP.CUST_GRP_DSC CUST_CHANNEL_DSC,
RCAT.REASON_CATEGORY_NAM ORDER_REASON_CATEGORY_NAM,
RCAT.REASON_DSC ORDER_REASON_DSC,
TB_CRM013_ORDER.END_USER_CSN_NUM END_USER_CUST_NUM,
TB_CRM013_ORDER.END_USER_NAM END_USER_CUST_NAM,
TB_CRM013_ORDER.END_USER_CITY_NAM END_USER_CUST_CITY_NAM,
TB_CRM013_ORDER.END_USER_STATE_CD END_USER_CUST_STATE_CD,
TB_CRM013_ORDER.END_USER_POSTAL_CD END_USER_CUST_ZIP_CD,
TB_CRM013_ORDER.END_USER_CNTY_CD END_USER_CUST_CNTY_CD,
TB_CRM013_ORDER.SALES_PERSON_NAM SALES_PERSON_NAM,
NULL DISCOUNT_PCT,
NULL SALE_TYPE_CD,
NULL KEY_ID_NUM,
/* TB_CRM014_ORDER_DET.DISCOUNT_PCT DISCOUNT_PCT,
TB_CRM014_ORDER_DET.SALE_TYPE_CD SALE_TYPE_CD,
TB_CRM014_ORDER_DET.KEY_ID_NUM KEY_ID_NUM, */
TB_CRM015_ORDER_SPLIT.SPLIT_PCT REP_SPLIT_PCT_NUM,
TB_CRM017_PROD.PROD_LINE_DSC SIEBEL_PRODUCT_LINE_CD,
TB_CRM015_ORDER_SPLIT.PARTNER_CSN_NUM REP_ID,
TB_SDW066_MAP_REP_SLSTEAM.REP_NAME REP_NAM,
TRUNC(TB_SDW022_COPA_SALES.DW_INSERT_DATE) EXTRACT_DT,
TRUNC(SYSDATE) LAST_UPDATE_DT,
TRUNC(SYSDATE) CREATE_DT,
NULL PROCESS_LOG_DSC,
DECODE(TB_SDW115_LKP_IGC_PRAT_NUM.PART_NUM, NULL, 'Y', 'N') REP_ELIG_YN_FLG
FROM TB_SDW022_COPA_SALES,
TB_SDW017_LKP_ACCT_ASSIGN_GRP,
TB_SDW026_LKP_CUSTOMER_GROUP,
TB_SDW028_LKP_REASON_CODES RCAT,
TB_COM001_DIM_TIME,
TB_CRM013_ORDER,
-- TB_CRM014_ORDER_DET,
TB_CRM015_ORDER_SPLIT,
TB_CRM017_PROD,
TB_SDW066_MAP_REP_SLSTEAM,
TB_COM004_DIM_SLS_HIER,
TB_SDW115_LKP_IGC_PRAT_NUM
WHERE TB_SDW022_COPA_SALES.SALES_ORDER_NUMBER = TB_CRM015_ORDER_SPLIT.ORDER_NUM
AND TB_SDW022_COPA_SALES.MATERIAL_NUMBER = TB_SDW115_LKP_IGC_PRAT_NUM.PART_NUM(+)
AND TB_SDW022_COPA_SALES.POSTING_DATE = TB_COM001_DIM_TIME.CALENDAR_DT
AND TB_SDW022_COPA_SALES.SALES_ORDER_NUMBER = TB_CRM013_ORDER.ORDER_NUM(+)
--AND TB_SDW022_COPA_SALES.SALES_ORDER_NUMBER = TB_CRM014_ORDER_DET.ORDER_NUM(+)
--AND RTRIM(LTRIM(TB_SDW022_COPA_SALES.SALES_ORDER_ITEM, '0'), '0') = TB_CRM014_ORDER_DET.LINE_ITEM_NUM(+)
AND TB_SDW022_COPA_SALES.ACCT_ASSIGN_GROUP = TB_SDW017_LKP_ACCT_ASSIGN_GRP.ACCT_ASSIGN_GRP_CD(+)
AND NVL(TB_SDW022_COPA_SALES.CUSTOMER_GROUP,'#') = NVL(TB_SDW026_LKP_CUSTOMER_GROUP.CUST_GRP_CD(+),'#')
AND TB_SDW022_COPA_SALES.REASON_CODE_NEW = RCAT.REASON_CD(+)
AND TB_SDW022_COPA_SALES.MATERIAL_NUMBER = TB_CRM017_PROD.PART_NUM(+)
AND TB_CRM015_ORDER_SPLIT.PARTNER_CSN_NUM = TB_SDW066_MAP_REP_SLSTEAM.REP_ID(+)
AND DECODE(TB_SDW066_MAP_REP_SLSTEAM.LAST_LEVEL_NUM,
4, TB_SDW066_MAP_REP_SLSTEAM.CM_CD,
3, TB_SDW066_MAP_REP_SLSTEAM.RCM_CD,
2, TB_SDW066_MAP_REP_SLSTEAM.RM_CD) = DECODE(TB_COM004_DIM_SLS_HIER.LAST_LEVEL_NUM(+) ,
4, TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER4_CD(+),
3, TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER3_CD(+),
2, TB_COM004_DIM_SLS_HIER.SLS_TEAM_TIER2_MGR_CD(+))
AND (
TB_CRM015_ORDER_SPLIT.PARTNER_CSN_NUM IS NOT NULL
AND
(TB_CRM015_ORDER_SPLIT.SPLIT_PCT IS NOT NULL AND TB_CRM015_ORDER_SPLIT.SPLIT_PCT <> 0)
)

/


Thanks in advance.
ARC

Tom Kyte
September 10, 2004 - 3:00 pm UTC

it is a big one. (why can't people make concise, yet 100% complete test cases similar to what I do to demonstrate things? I don't know what I'm supposed to do with a 15 page monster create statement that I haven't a chance of actually *running* -- and 99,999 times out of 100,000 I find my mistake in the creation of a small test case)

but anyway, do you meet the basic requirements for MV creation:
</code> http://docs.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_63a.htm#2063839 <code>

and in any case, that doesn't look like a "fast refreshable" MV to me.

but then again, i don't have a concise, yet 100% complete test case to work with.

How Can I Create a Mview on Mview

Muhammad Waseem Haroon, September 13, 2004 - 4:29 am UTC

SQL> CREATE MATERIALIZED VIEW EMP_MET_VIEW
  2  REFRESH FAST
  3  ON DEMAND
  4  AS
  5  SELECT empno,deptno,ename,sal,comm
  6  FROM   emp;
/

Snapshot created.

SQL> CREATE MATERIALIZED VIEW MVIEW_ON_EMP_MET_VIEW
  2  REFRESH FAST
  3  ON DEMAND
  4  AS
  5  SELECT *
  6  FROM   EMP_MET_VIEW;
FROM   EMP_MET_VIEW
       *
ERROR at line 6:
ORA-23413: table "SCOTT"."EMP_MET_VIEW" does not have a materialized view log

SQL> ED
Wrote file afiedt.buf

  1  CREATE MATERIALIZED VIEW LOG ON EMP_MET_VIEW
  2     WITH ROWID, SEQUENCE
  3     (empno, deptno, ename, sal, comm)
  4*    INCLUDING NEW VALUES
SQL> /

Snapshot log created.

SQL> CREATE MATERIALIZED VIEW MVIEW_ON_EMP_MET_VIEW
  2  REFRESH FAST
  3  ON DEMAND
  4  AS
  5  SELECT *
  6  FROM   EMP_MET_VIEW;
FROM   EMP_MET_VIEW
       *
ERROR at line 6:
ORA-23415: materialized view log for "SCOTT"."EMP_MET_VIEW" does not record the primary key

SQL> DROP MATERIALIZED VIEW LOG ON EMP_MET_VIEW
  2  /

Snapshot log dropped.

SQL> CREATE MATERIALIZED VIEW LOG ON EMP_MET_VIEW
  2  /

Snapshot log created.

SQL> CREATE MATERIALIZED VIEW MVIEW_ON_EMP_MET_VIEW
  2  REFRESH FAST
  3  ON DEMAND
  4  AS
  5  SELECT *
  6  FROM   EMP_MET_VIEW;
FROM   EMP_MET_VIEW
       *
ERROR at line 6:
ORA-12053: this is not a valid nested materialized view

after that i have tried to create it like that:

SQL> CREATE MATERIALIZED VIEW MVIEW_ON_EMP_MET_VIEW
  2  REFRESH ON COMMIT
  3  AS
  4  SELECT *
  5  FROM   EMP_MET_VIEW;
FROM   EMP_MET_VIEW
       *
ERROR at line 5:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view

still I am getting an error :(
What is missing, and how can I create a materialized view on another materialized view?
Please give me an example.

Tom Kyte
September 13, 2004 - 8:07 am UTC

</code> http://docs.oracle.com/docs/cd/B10501_01/server.920/a96520/mv.htm#44203 <code>

documentation, it even has a full example! and more :)
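
The ORA-12053 above comes from trying to nest an MV that adds nothing over its parent. A sketch of the documented pattern (assuming the standard scott emp/dept tables): a fast-refreshable *join* MV on the base tables, then an *aggregate* MV on top of it.

```sql
-- MV logs with rowids on the base tables, needed for a fast-refresh join MV
create materialized view log on emp with rowid;
create materialized view log on dept with rowid;

-- join MV: must select the rowid of every base table
create materialized view emp_dept_jmv
refresh fast on demand
as
select emp.rowid erid, dept.rowid drid,
       emp.empno, emp.sal, dept.deptno, dept.dname
  from emp, dept
 where emp.deptno = dept.deptno;

-- MV log on the join MV, so the aggregate MV on top can fast refresh
create materialized view log on emp_dept_jmv
  with rowid (deptno, sal) including new values;

-- nested aggregate MV: count(*) and count(sal) help fast refresh
create materialized view emp_dept_amv
refresh fast on demand
as
select deptno, sum(sal) sum_sal, count(sal) cnt_sal, count(*) cnt
  from emp_dept_jmv
 group by deptno;
```

This is only a sketch of the shape the documentation shows; a plain `SELECT * FROM parent_mv` nested MV, as attempted above, is rejected because it duplicates its parent.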

Query rewrite

A reader, September 21, 2004 - 11:01 pm UTC

Does query rewrite kick in for anything *other than* materialized views?

Thanks

Tom Kyte
September 22, 2004 - 7:42 am UTC

it was used for function based indexes (not so in current releases, not needed anymore).

it can change the way predicates and transitivity are applied in 9iR2 and up as well.



A reader, September 22, 2004 - 9:44 am UTC

"it can change the way predicates and transitivity are applied in 9iR2 and up as well"

Can you please give examples of how this works and how this can be used to our advantage?

Thanks

Tom Kyte
September 22, 2004 - 10:22 am UTC

ops$tkyte@ORA9IR2> set echo on
ops$tkyte@ORA9IR2> set linesize 121
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> /*
DOC>drop table t1;
DOC>create table t1 as select * from all_objects;
DOC>create index t1_idx on t1(object_id);
DOC>exec dbms_stats.gather_table_stats( user, 'T1', method_opt=>'for columns REPEAT', cascade=>true);
DOC>
DOC>drop table t2;
DOC>create table t2 as select * from all_objects;
DOC>create index t2_idx on t2(object_id);
DOC>exec dbms_stats.gather_table_stats( user, 'T2', method_opt=>'for columns REPEAT', cascade=>true);
DOC>*/
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter session set query_rewrite_enabled=false;
 
Session altered.
 
ops$tkyte@ORA9IR2> set autotrace traceonly explain
ops$tkyte@ORA9IR2> select *
  2   from t1, t2
  3   where t1.object_id = 100
  4     and t1.object_id = t2.object_id;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=560 Card=78003 Bytes=15600600)
   1    0   MERGE JOIN (CARTESIAN) (Cost=560 Card=78003 Bytes=15600600)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=279 Bytes=27900)
   3    2       INDEX (RANGE SCAN) OF 'T1_IDX' (NON-UNIQUE) (Cost=1 Card=112)
   4    1     BUFFER (SORT) (Cost=558 Card=279 Bytes=27900)
   5    4       TABLE ACCESS (BY INDEX ROWID) OF 'T2' (Cost=2 Card=279 Bytes=27900)
   6    5         INDEX (RANGE SCAN) OF 'T2_IDX' (NON-UNIQUE) (Cost=1 Card=112)
 
 
 
<b>turning query_rewrite_enabled on in 9iR2 (9205 anyway, maybe a little sooner) would have the optimizer treat the query as something more like this:</b>


ops$tkyte@ORA9IR2> select *
  2   from t1, t2
  3   where t1.object_id = 100
  4     and t1.object_id = t2.object_id
  5     and t2.object_id = 100;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=279 Bytes=55800)
   1    0   HASH JOIN (Cost=5 Card=279 Bytes=55800)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=279 Bytes=27900)
   3    2       INDEX (RANGE SCAN) OF 'T1_IDX' (NON-UNIQUE) (Cost=1 Card=112)
   4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T2' (Cost=2 Card=279 Bytes=27900)
   5    4       INDEX (RANGE SCAN) OF 'T2_IDX' (NON-UNIQUE) (Cost=1 Card=1)
 
 
 
ops$tkyte@ORA9IR2> alter session set query_rewrite_enabled=true;
 
Session altered.
 
ops$tkyte@ORA9IR2> select *
  2   from t1, t2
  3   where t1.object_id = 100
  4     and t1.object_id = t2.object_id;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=279 Bytes=55800)
   1    0   HASH JOIN (Cost=5 Card=279 Bytes=55800)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=279 Bytes=27900)
   3    2       INDEX (RANGE SCAN) OF 'T1_IDX' (NON-UNIQUE) (Cost=1 Card=112)
   4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T2' (Cost=2 Card=279 Bytes=27900)
   5    4       INDEX (RANGE SCAN) OF 'T2_IDX' (NON-UNIQUE) (Cost=1 Card=1)
 
 
 
ops$tkyte@ORA9IR2> set autotrace off
 

In creating Materialized views running into insufficient privileges in 8.1.6

s850507, September 23, 2004 - 6:17 pm UTC

Thanks for the excellent discussions... I value your suggestions.

In creating Materialized views running into insufficient privileges

here is the queries to recreate the problem:

select * from v$version;
select * from dba_role_privs where grantee = 'SOE'
/
select * from dba_sys_privs where grantee = 'SOE';
select name, value from v$parameter order by 1;
drop table temp;
drop materialized view "SOE"."TEST_MV" ;
create table temp as select * from all_objects;


CREATE MATERIALIZED VIEW "SOE"."TEST_MV"
TABLESPACE "USERS"
BUILD IMMEDIATE
USING INDEX
TABLESPACE "USERS"
REFRESH FORCE
ON DEMAND
ENABLE QUERY REWRITE
AS
select owner, count(*) from temp group by owner;

BEGIN
DBMS_STATS.GATHER_TABLE_STATS(
ownname => 'SOE',
tabname => 'TEST_MV');
END;
/

In 9i we don't have a problem, but in 8.1.6 it says ORA-01031: insufficient privileges

In both databases the user has DBA privs
and
db parameters are
query_rewrite_enabled true
query_rewrite_integrity enforced


Why would there be insufficient privileges?

Tom Kyte
September 24, 2004 - 9:57 am UTC

are you "SOE"?

S850507, September 24, 2004 - 10:17 am UTC

Yes
Tom Kyte
September 24, 2004 - 11:39 am UTC

ops$tkyte@ORA815> grant dba to soe identified by soe;
 
Grant succeeded.
 
ops$tkyte@ORA815> <b>grant query rewrite to soe;</b>
 
Grant succeeded.
 
ops$tkyte@ORA815> @connect soe/soe
ops$tkyte@ORA815> set termout off
soe@ORA815> set termout on
soe@ORA815> drop table temp;
 
Table dropped.
 
soe@ORA815> drop materialized view      "SOE"."TEST_MV" ;
 
Materialized view dropped.
 
soe@ORA815> create table temp as select * from all_objects;
 
Table created.
 
soe@ORA815>
soe@ORA815>
soe@ORA815> CREATE MATERIALIZED VIEW "SOE"."TEST_MV"
  2  TABLESPACE "USERS"
  3  BUILD IMMEDIATE
  4  USING INDEX
  5  TABLESPACE "USERS"
  6  REFRESH FORCE
  7  ON DEMAND
  8  ENABLE QUERY REWRITE
  9  AS
 10  select owner, count(*) <b>cnt</b> from temp group by owner;
 
Materialized view created.
 
soe@ORA815>
soe@ORA815> BEGIN
  2     DBMS_STATS.GATHER_TABLE_STATS(
  3       ownname => 'SOE',
  4       tabname => 'TEST_MV');
  5  END;
  6  /
 
PL/SQL procedure successfully completed.


<b>worked for me in 815 (didn't have an 816 lying about....) -- changes in bold</b>
 

Query rewrite

A reader, October 13, 2004 - 2:59 pm UTC

ops$tkyte@ORA9IR2> select *
2 from t1, t2
3 where t1.object_id = 100
4 and t1.object_id = t2.object_id;

You showed that turning on query_rewrite rewrites the above query as if 't2.object_id=100' was specified and the plan improves drastically.

This is huge! I thought the CBO did this kind of transitivity by default

Given this, would you recommend setting query_rewrite_enabled=true in the init.ora and 'grant query rewrite to public'? Are there any disadvantages?

Thanks

Tom Kyte
October 13, 2004 - 3:31 pm UTC

i would suggest testing all parameter changes in your TEST environment before just setting it on in production.


are there any disadvantages? in a large system with lots of sql - plans will change. hopefully for the better -- but hey -- it only takes one bad apple to spoil the barrel as they say.

Hi

SNS, October 13, 2004 - 5:35 pm UTC

Tom,

I have a star schema with 5 tables: fact, dim1, dim2, dim3, dim4

I have created a materialized view by saying BUILD IMMEDIATE, REFRESH COMPLETE ON DEMAND
and ENABLE QUERY REWRITE using all the 5 tables by joining the fact
and the dimension tables.

I tried a query on the fact and dim1, dim2 and dim3 (I don't need dim4 in my query).
Apparently the optimizer is not using the MV.

I got the following info from rewrite_table
QSM-01010: table(s) in the query not found in materialized view, MV_AGG_DIST_MFG_GRP_ACC2
QSM-01084: materialized view, MV_AGG_DIST_MFG_GRP_ACC2, has anchor, DIM4, not found in query
QSM-01053: NORELY referential integrity constraint on table, ACSS_FACT_PURCH_RETAIL, in TRUSTED/STALE TOLERATED integrity mode

Do you know why?

Also, I read in the Oracle documentation that Oracle recommends having foreign keys on all
the dimension tables in order for query rewrite to work. I created the foreign keys with ENABLE, VALIDATE and RELY.
Even then I am still getting the same QSM messages.

Can you please help me out here?

Thanks,


Tom Kyte
October 13, 2004 - 6:17 pm UTC

umm, need a teeny tiny - yet complete example. You should be able to simulate this with 3 tables, very few columns.

But - sounds like it doesn't know that the relationship between the FACT and DIM4 is "there" (eg: every row that has a non-null fkey in FACT has a mate in DIM4) and perhaps even that the fkey in FACT is non-null (eg: not only does every non-null row in FACT have a mate in DIM4, but -- the fkey is only having NON-NULL values so you will not lose a single row from FACT by joining to DIM4)

That seems to be what the message is saying "hey, you have a RI constraint that was just stuffed on here, we have not validated it..."

But, reproduce it with a TEENY TINY, fits on the back of a napkin -- yet 100% complete example (again -- 3 tables should suffice, fact needs about 3 columns, dim1/dim2 about 2 -- do it exactly like you do in real life, the manner you add constraints and all)
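
A sketch of the constraint state Tom is describing, with invented names: a NOT NULL fkey column plus a RELY foreign key, so the optimizer can trust that joining FACT to DIM4 loses no rows. RELY constraints are only honored under TRUSTED (or STALE_TOLERATED) integrity mode.

```sql
-- Hypothetical names.  Tell the optimizer the fkey is never null...
alter table fact modify (dim4_key not null);

-- ...and that every fkey value has a mate in DIM4.  RELY + NOVALIDATE
-- asserts this without checking existing rows -- you are vouching for it.
alter table fact
  add constraint fact_dim4_fk foreign key (dim4_key)
  references dim4 (dim4_key)
  rely enable novalidate;

-- RELY constraints are only trusted in this mode:
alter session set query_rewrite_integrity = trusted;
```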

Hi,

SNS, October 14, 2004 - 1:15 pm UTC

As per your request, I have 4 different steps to simulate the MV I had yesterday.

STEP I

I have defined following 4 tables

SQL> desc dim1
 Name                                                                     Null?    Type
 ------------------------------------------------------------------------ -------- -----------------
 A                                                                        NOT NULL NUMBER
 AA                                                                                VARCHAR2(10)

SQL> desc dim2
 Name                                                                     Null?    Type
 ------------------------------------------------------------------------ -------- -----------------
 B                                                                        NOT NULL NUMBER
 BB                                                                                VARCHAR2(10)

SQL> desc dim3
 Name                                                                     Null?    Type
 ------------------------------------------------------------------------ -------- -----------------
 C                                                                        NOT NULL NUMBER
 CC                                                                                VARCHAR2(10)

SQL> desc fact1
 Name                                                                     Null?    Type
 ------------------------------------------------------------------------ -------- -----------------
 A                                                                                 NUMBER
 B                                                                                 NUMBER
 MEAS1                                                                             NUMBER
 C                                                                                 NUMBER
 
 All the dimension tables have data, the fact table has data, and all attributes
 in the fact table have NOT NULL values.
 
 I have RI set from the fact table to all the dim tables.
 
******************************************************************************************* 
 STEP II
 
 drop materialized view mv_test
 /
 CREATE MATERIALIZED VIEW MV_TEST
 TABLESPACE DATA_MV_DWH
 PARALLEL(DEGREE 4 INSTANCES 1)
 BUILD immediate
 REFRESH COMPLETE ON DEMAND
 ENABLE QUERY REWRITE
 AS
 SELECT  
   a.a,
   a.aa,
   b.b,
   b.bb,
   d.c,
   d.cc,
   sum(c.meas1)meas1
 FROM dim1 a, dim2 b,dim3 d,fact1 c
      where a.a = c.a
      and b.b = c.b
      and d.c = c.c
 GROUP BY a.a, a.aa, b.b, b.bb,d.c,d.cc;     
      
exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'MV_TEST',estimate_percent=>5,degree=>12);  

*******************************************************************************************

STEP III
Later, I executed a query inside a PL/SQL block

declare
  v_sql varchar2(2000);
begin
  v_sql := 'SELECT  
  a.a,
  a.aa,
  b.b,
  b.bb,
  sum(c.meas1)meas1
FROM dim1 a, dim2 b ,fact1 c
     where a.a = c.a
     and b.b = c.b
     and b.bb = ''ff''
GROUP BY a.a, a.aa, b.b, b.bb';
   
DBMS_MVIEW.EXPLAIN_REWRITE (
    query=> v_sql,mv=>'MV_TEST'    
  );

end;  

*******************************************************************************************

STEP IV
Finally, I queried rewrite_table to see the output.

QSM-01110: a lossy join in MV, MV_TEST, between tables, FACT1 and DIM3, not found in query

*******************************************************************************************
 

Tom Kyte
October 14, 2004 - 7:34 pm UTC

hmm, on the screen you used to submit this -- did you see text to the effect:



Also if your followup includes an example you want me to look at, I'll need it to have a create table, insert into statements and such that I can easily cut and paste into sqlplus myself (like I give you) in order to play with. I spent too many hours turning something like:

I have a table like:

scott@ORA9IR2> desc dept
Name Null? Type
------------------------------------ -------- -------------------------
DEPTNO NUMBER(2)
DNAME VARCHAR2(14)
LOC VARCHAR2(13)



your create tables won't wrap like that either i suspect...

Eugene, October 17, 2004 - 2:35 am UTC

Hi Tom,
We have a long running report. I was given a task to make it run faster. Report's select statement has about 20 different SUM() in it and is a join of two views. One of them is complex. I thought that I can use MVs to break a view into two pieces, but need to ask you first:
1. Is an MV the right approach to such a task? I am being told that creating a procedure is the way to go in this case.
2. The data for which I would create the MV does not change during the day. I figure I can refresh it once at night using the dbms_mview package. If the query that creates the materialized view runs 40 min, does that mean a complete refresh will take about the same amount of time?


Tom Kyte
October 17, 2004 - 10:02 am UTC

1) how about writing a better query? I would never take a long running query and say "hmmm, what sort of interpreted plsql code could I write that could go faster". In general, you can only write things that go much slower than the optimal SQL solution.

I would look at the question being asked of the data and sit down and write a query from scratch (ignoring the existing one -- it can only confuse you). Perhaps ignoring the views and such -- just write a query that answers the question.

2) basically, the refresh is a truncate and an insert /*+ append */
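A minimal sketch of scheduling such a nightly complete refresh (the MV name and schedule here are invented; note that from 10g onward the default atomic_refresh => true does a delete + insert instead, so you would pass false to keep the truncate + append behavior Tom describes):

```sql
-- Hypothetical nightly complete refresh of an MV named REPORT_MV.
variable jobno number
begin
   dbms_job.submit(
      job       => :jobno,
      what      => 'dbms_mview.refresh(''REPORT_MV'', ''C'');',  -- 'C' = complete
      next_date => trunc(sysdate) + 1 + 2/24,    -- tonight at 2am
      interval  => 'trunc(sysdate) + 1 + 2/24'   -- and every night thereafter
   );
   commit;
end;
/
```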

creating materialized views

reader, October 18, 2004 - 2:09 pm UTC

I want to create an MV on a join of 3 tables. One of the tables has a LONG column that I am unable to include in the MV (it gives an error). The Oracle documentation also confirms LONG cols are not allowed.
Is there any workaround to include this column ? I will be using this MV to refresh another MV in a different Oracle instance (for reporting purposes).

Thanks

Tom Kyte
October 18, 2004 - 2:12 pm UTC

longs are not dblink friendly either, longs are very dysfunctional.

it isn't going to happen -- especially with the dblink.

Creating MV

reader, October 18, 2004 - 2:24 pm UTC

Thanks Tom, for confirming the unusability of LONG datatypes.
So what alternatives do I have for getting the info in the LONG cols across to the reporting DB?

Tom Kyte
October 19, 2004 - 7:32 am UTC

o do it yourself code

or

o use clobs, not longs.
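A do-it-yourself sketch of the CLOB route, assuming you may create a staging table on the source instance (every object name here is invented): TO_LOB converts the LONG locally, and CLOB columns, unlike LONGs, are supported in materialized view replication over a database link.

```sql
-- On the source database: stage the LONG column as a CLOB.
-- TO_LOB is only legal in INSERT ... SELECT / CTAS over a LOCAL table.
create table erp_doc_stage as
select doc_id, to_lob(doc_text) doc_text_clob
from   erp_doc_table;

-- On the reporting database: replicate the staged CLOB copy as usual.
create materialized view erp_doc_mv
refresh complete
as
select doc_id, doc_text_clob
from   erp_doc_stage@source_link;
```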


re: Creating MV

LC, October 18, 2004 - 3:36 pm UTC

The copy command and some timestamps on the source table may do the trick for you with the longs.

Tom Kyte
October 19, 2004 - 7:47 am UTC

careful with 'timestamps'...

first -- they are not supported by copy.

second -- you have to take extra precautions with what DATE value you use. See
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:16998677475837

Creating MV

Reader, October 19, 2004 - 11:03 am UTC

Thanks Tom and LC for your inputs.
Since I am basing the MV on delivered ERP tables, I would not be able to change the datatype of the base tables to CLOB. Also, timestamps are not available.
Guess I'll have to write my own code, if this field is really required for reporting purposes. MVs were a good way of getting incremental refresh.

Thank You

On REFRESH

rennie, October 25, 2004 - 11:14 am UTC

Tom,

Happened to hear today from our DBA that prior to Oracle 9i, Oracle used to get information on the changed data from the REDO logs for REFRESHING materialized views. From 9i onwards, he said, materialized view logs can be created which record information on the changed data, to be used while refreshing.

Somehow, sourcing information from the REDO logs did not make sense to me. Thus I am posting it here.

My questions are:-
1. Is the DBA correct in his statement?
2. For cases where a FAST REFRESH is done, and a materialized view log not created, from where would Oracle get the changed data information?
3. What if log switches happen and REDO Logs are overwritten?

I am really confused here. Please do explain.

(PS: Searched for documentation on this but could not find any)

regards
rennie



Tom Kyte
October 25, 2004 - 11:44 am UTC

umm, your DBA is 'making things up'

it has never been true -- create snapshot log was a version 7.0 command (renamed to create materialized view log in 8i when MVs with query rewrite were added)


REDO was never used for refreshing a materialized view (aka snapshot).

NEVER.

MV -update and insert

Reader, November 05, 2004 - 11:37 am UTC

1. Can I perform an update or insert on any materialized view?
2. If I am doing a complete refresh, it will do a truncate and insert /*+ APPEND */, but the indexes on the MV will still generate redo. Is there a way to stop that?

Thanks

Tom Kyte
November 05, 2004 - 5:18 pm UTC

1) if it is an updatable one (replication), yes.

2) the refresh on the MV itself generates redo too -- append does not disable redo generation (unless you are in noarchivelog mode, which I assume no one is running production in).

You would have to "help out" a bit to get nologging index maintenance -- you would schedule a job to do the refresh instead of using the built-in scheduling with the MV. The job would set the indexes unusable -- set the session to skip unusable indexes -- refresh -- rebuild the indexes.
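That job might look roughly like this (MV and index names are hypothetical; this is a sketch of the sequence described, not a drop-in script):

```sql
-- Scheduled wrapper: skip index maintenance during the refresh,
-- then rebuild the index(es) nologging afterwards.
begin
   execute immediate 'alter index big_mv_idx unusable';
   execute immediate 'alter session set skip_unusable_indexes = true';
   dbms_mview.refresh('BIG_MV', 'C');
   execute immediate 'alter index big_mv_idx rebuild nologging';
end;
/
```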

query rewrite

reader, December 14, 2004 - 5:34 pm UTC

If I have a bind variable in my select statement, would Oracle rewrite the query to use the materialized view? Thanks.

Tom Kyte
December 15, 2004 - 1:06 pm UTC

yup.


big_table@ORA9IR2> create materialized view big_table_mv
2 build immediate
3 refresh on commit
4 enable query rewrite
5 as
6 select status, count(*)
7 from big_table
8 group by status
9 /

Materialized view created.

...


big_table@ORA9IR2> variable x varchar2(20);
big_table@ORA9IR2> exec :x := 'VALID';

PL/SQL procedure successfully completed.


big_table@ORA9IR2> select status, count(*)
2 from big_table
3 where status = :x
4 group by status;


STATUS COUNT(*)
------- ----------
VALID 999821


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=10)
1 0 TABLE ACCESS (FULL) OF 'BIG_TABLE_MV' (Cost=2 Card=1 Bytes=10)


query rewrite cont

reader, December 15, 2004 - 5:35 pm UTC

Tom, please see below the cut and paste from sqlplus. I am using 10.1.0.2. It is not rewriting if I use a bind variable! Thanks.

SQL> connect sh/sh;
Connected.

SQL> drop materialized view sales_prod;
 
Materialized view dropped.
 
SQL>
SQL> create materialized view sales_prod
  2         build immediate
  3         enable query rewrite
  4  as
  5  SELECT s.prod_id
  6  ,      t.fiscal_month_number
  7  ,      sum(s.amount_sold) AS sum_amount
  8  FROM   sales s, times t
  9  WHERE  s.time_id = t.time_id
 10  AND    t.fiscal_year = 2000
 11  GROUP  BY s.prod_id, t.fiscal_month_number;
 
 
Materialized view created.
 
SQL>
SQL> execute dbms_stats.gather_table_stats('SH','SALES_PROD');
 
PL/SQL procedure successfully completed.
 
SQL>
SQL> execute dbms_stats.gather_table_stats('SH','SALES');
 
 
PL/SQL procedure successfully completed.
 
SQL>
SQL> execute dbms_stats.gather_table_stats('SH','TIMES');
 
PL/SQL procedure successfully completed.
 
SQL>

SQL> EXPLAIN PLAN FOR
  2  SELECT s.prod_id
  3  ,      t.fiscal_month_number
  4  ,      sum(s.amount_sold) AS sum_amount
  5  FROM   sales s, times t
  6  WHERE  s.time_id = t.time_id
  7  AND    t.fiscal_year = 2000
  8  GROUP  BY s.prod_id, t.fiscal_month_number
  9  ORDER  BY s.prod_id, t.fiscal_month_number;
 
Explained.

SQL> SELECT * FROM table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1496798111
 
--------------------------------------------------------------------------------------------
| Id  | Operation                     | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |            |   841 | 10933 |     4  (25)| 00:00:01 |
|   1 |  SORT ORDER BY                |            |   841 | 10933 |     4  (25)| 00:00:01 |
|   2 |   MAT_VIEW REWRITE ACCESS FULL| SALES_PROD |   841 | 10933 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
 
9 rows selected.

SQL> variable x number;
SQL> exec :x := 2000;
 
PL/SQL procedure successfully completed.
 
SQL> EXPLAIN PLAN FOR
SELECT s.prod_id
,      t.fiscal_month_number
,      sum(s.amount_sold) AS sum_amount
FROM   sales s, times t
WHERE  s.time_id = t.time_id
AND    t.fiscal_year = :x
GROUP  BY s.prod_id, t.fiscal_month_number
ORDER  BY s.prod_id, t.fiscal_month_number;
  2    3    4    5    6    7    8    9
Explained.
 
SQL> SELECT * FROM table(dbms_xplan.display);
 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3246381495
 
----------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |       |   611 | 19552 |   483  (17)| 00:00:06 |       |   |
|   1 |  SORT GROUP BY             |       |   611 | 19552 |   483  (17)| 00:00:06 |       |   |
|*  2 |   HASH JOIN                |       |   191K|  5985K|   451  (11)| 00:00:06 |       |   |
|*  3 |    TABLE ACCESS FULL       | TIMES |   304 |  4560 |    15   (0)| 00:00:01 |       |   |
|   4 |    PARTITION RANGE ITERATOR|       |   918K|    14M|   415   (7)| 00:00:05 |   KEY |   KEY |
|   5 |     TABLE ACCESS FULL      | SALES |   918K|    14M|   415   (7)| 00:00:05 |   KEY |   KEY |
----------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("S"."TIME_ID"="T"."TIME_ID")
   3 - filter("T"."FISCAL_YEAR"=TO_NUMBER(:X))
 
18 rows selected.
 
 

Tom Kyte
December 15, 2004 - 6:57 pm UTC

think about it.

look at your MV:

SQL> create materialized view sales_prod
  2         build immediate
  3         enable query rewrite
  4  as
  5  SELECT s.prod_id
  6  ,      t.fiscal_month_number
  7  ,      sum(s.amount_sold) AS sum_amount
  8  FROM   sales s, times t
  9  WHERE  s.time_id = t.time_id<b>
 10  AND    t.fiscal_year = 2000</b>
 11  GROUP  BY s.prod_id, t.fiscal_month_number;


and look at your query:

SQL> EXPLAIN PLAN FOR
SELECT s.prod_id
,      t.fiscal_month_number
,      sum(s.amount_sold) AS sum_amount
FROM   sales s, times t
WHERE  s.time_id = t.time_id
AND    t.fiscal_year = :x
GROUP  BY s.prod_id, t.fiscal_month_number
ORDER  BY s.prod_id, t.fiscal_month_number;

suppose I run the query with x = 42.  oops -- using the MV would be wrong.

you cannot rewrite this query with THIS bind - it would be *wrong* 
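One way out, sketched against the same SH sample schema: carry fiscal_year as a column in the MV instead of fixing it in the WHERE clause, so the MV contains every possible value of the bind and rewrite becomes safe for any :x (the MV name is invented).

```sql
create materialized view sales_prod_by_year
enable query rewrite
as
select s.prod_id,
       t.fiscal_year,              -- the year is data now, not a filter
       t.fiscal_month_number,
       sum(s.amount_sold) as sum_amount
from   sales s, times t
where  s.time_id = t.time_id
group  by s.prod_id, t.fiscal_year, t.fiscal_month_number;

-- A query with "where t.fiscal_year = :x" can now be rewritten to
-- filter this MV, whatever value :x holds at runtime.
```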

Materalized View

Laxman Kondal, January 21, 2005 - 4:03 pm UTC

Hi Tom

We are having few MVs and all have been created by a tool. Script for all are like this:

CREATE MATERIALIZED VIEW T_PROJECT_MV
STORAGE
(
INITIAL 64K
NEXT 64
PCTINCREASE 0
MAXEXTENTS UNLIMITED
) TABLESPACE OPOM_DATA BUILD IMMEDIATE
REFRESH COMPLETE
START WITH round(sysdate) + .8
NEXT sysdate + 1
WITH ROWID
AS
SELECT *
FROM ORE.Project@wis
WHERE project IN
(SELECT value
FROM Environment_variables
WHERE name = 'PROJECT')
/

and this creates:

CREATE MATERIALIZED VIEW T_PROJECT_MV
REFRESH COMPLETE ON DEMAND
START WITH TO_DATE('22-01-2005 19:12:00', 'DD-MM-YYYY HH24:MI:SS') NEXT SYSDATE + 1
AS
SELECT *
FROM ORE.Project@wis
WHERE project IN
(SELECT value
FROM Environment_variables
WHERE name = 'PROJECT')

And job:

dbms_refresh.refresh('"POM"."T_PROJECT_MV"');

and the query result is:

pom@PTUNEA> SELECT mview_name, refresh_mode||'-'||refresh_method "METHOD-TYPE",
2 to_char(last_refresh_date, 'dd-mm-yy hh24:mi:ss') "LAST REFRESHED",
3 staleness, after_fast_refresh aft_fast, compile_state comp_state
4 FROM user_mviews;

MVIEW_NAME METHOD-TYPE LAST REFRESHED STALENESS AFT_FAST COMP_STATE
------------ -------------- ----------------- --------- ---------- --------
T_PROJECT_MV DEMAND-COMPLETE 21-01-05 14:45:37 UNDEFINED UNDEFINED VALID

1 rows selected.

pom@PTUNEA> SELECT TO_CHAR(job) j#, last_date||','||last_sec last_ref,
2 TO_CHAR(failures) fail, next_date||' '||next_sec "NEXT REFRESH AT", what
3 FROM dba_jobs;


J# LAST_RE FA NEXT REFRESH AT WHAT
--- ------- -- ------------------ --------------------------------------------
51 , 10 22-JAN-05 19:12:00 dbms_refresh.refresh('"POM"."T_PROJECT_MV"');

1 rows selected.

pom@OPTUNEA>

We are using Oracle 9iR2 and I am not clear on:
1. Who creates the job in this case, and how?
2. Who creates the group, and how?
3. Why is staleness UNDEFINED while the MV is VALID, and why has the job failed 10 times?
4. The refresh group name and the MV name are the same; is there any way to see which objects are in this group?

Thanks and regards.



Tom Kyte
January 21, 2005 - 8:10 pm UTC

1) this ddl:

REFRESH COMPLETE
START WITH round(sysdate) + .8
NEXT sysdate + 1

scheduled a job to refresh the mv every day....

2) what group?

3) staleness and such are defined in the Reference guide (online at otn.oracle.com if you don't have it)

failures -- you need to review your alert log for "why"

4) dba_refresh, dba_refresh_children.
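For instance, using the group name from the question:

```sql
-- When was the group scheduled, and on what interval?
select rname, next_date, interval
from   dba_refresh
where  rname = 'T_PROJECT_MV';

-- Which objects does the group contain?
select rname, name, type
from   dba_refresh_children
where  rname = 'T_PROJECT_MV';
```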

Good one

Golu, February 23, 2005 - 8:03 am UTC

Tom, You are Genius

Materialized views in a OLTP Enviornment

Yogesh Hingmire, March 21, 2005 - 1:55 am UTC

Hi Tom,

1) Can you let me know which scenarios shall i consider using a Materialized View in a OLTP Enviornment,(for eg: queries that are using aggregare/group by functions)

2)And what shall be their refresh guidelines.

Many Thanks for you excellent effort via this site!!!

Yogesh


Tom Kyte
March 21, 2005 - 10:26 am UTC

1) not many. in an OLTP environment, you would use MVs to copy the data to other tables in a different format in order to let reports run against it.


2) you tell me, it is your system. how frequently do they need to be refreshed for the reports?

Materialized View

Yogesh Hingmire, March 29, 2005 - 3:54 am UTC

Hi Tom,

Thanks for the response!! The reports need to reflect the current data, hence the views would need to be refreshed more frequently,
which I think is an overhead...

All in all, can I say that MVs are not of much help in an OLTP environment and are best suited for a data warehouse environment?

Thanks
Yogesh

Tom Kyte
March 29, 2005 - 8:18 am UTC

which is most important to you

oltp response time
report response time


More questions on MV

amihay gonen, March 31, 2005 - 5:19 am UTC

Hi tom, I've some questions regarding MV

1. We have a situation where the detail (FACT) table is kept for a shorter time than the MV. How can we maintain the MV? It seems to me that I have 2 options: a) drop the partition of the FACT table and consider the MV fresh anyway; b) use the PCT refresh method (I'm not sure it will work).

2. Does the PCT refresh method support refresh with exchange partitions ?

3. If we need MV with different time resolution (day, week, month) ? What will better option - Use nested MV or different MV on the FACT table ?



Tom Kyte
March 31, 2005 - 8:05 am UTC

the sole entire goal of the materialized view is to accurately reflect what is in the parent tables. If you want to remove data from the parent, it'll want to remove it from the mv.

You'll have to do this one yourself, the MV refresh process will be quite upset that the parent is missing data. This summary (which isn't really a summary anymore since it diverges from the parent) is something you'll have to maintain.



Materialized view fast refresh is slow

Kubilay, April 07, 2005 - 7:41 am UTC

Hi Tom

Top quality and motivating answers, as always!

What else can cause a "fast" refresh MV to refresh slowly, apart from your very good example of
"1,000,000 changes on 1 row and 1,000,000 rows changed"?

My case is:

2 databases A, B with DB_LINK and MV Fast Replication. Replication scheduled to refresh via DBMS_JOB 6 times a day.

Database A (source):

table_test
mlog$_table_test

Database B (target):

MV_table_test -- (MV=Materialized view created as select * from table_test@db_link_to_A; --refresh fast)

In table table_test in Database A lots of bulk inserts /deletes happen overnight and sometimes during the day.
So we get your case 2, "millions of rows changed", all the time, and the log mlog$_table_test shows millions
of changes waiting to be replicated to mv_table_test at the next scheduled DBMS_JOB refresh.
The count in the mlog$_table_test log does go down to 0 after the refresh, so it is not a case of another MV using
the same log.

The problem is that while all this bulk insert and delete is happening, the mv_table_test on Database B,
because it is serving an internet website, is also being hit like mad by user SELECT queries all the time,
so there is high traffic on the table MV_table_test as well!!

Then, in this scenario, with millions of changes waiting in the mlog$_table_test log to be replicated
and the actual MV_test_table being searched and viewed by thousands of queries:
is it wise at this moment to start the DBMS_JOB and refresh MV_test_table from test_table?
I mean, when MV_table_test is bombarded with queries, is it OK to start a dbms_mview.refresh job? It looks to me like
"trying to shoot a moving target"?

From my experience I figured out that when the traffic on MV_test_table is low, that is no one queries it,
the refresh is fast, but with high traffic, lots of changes in mlog$_table_test and lots of queries on MV_test_table,
the refresh is very slow. Is that a good assumption? Or can it be something else?

Or, to make it worse, what is the effect on the refresh rate if test_table on Database A (source) starts getting bulk updated while it is refreshing MV_test_table via fast refresh? I think I read somewhere in the Oracle docs that CHANGE_VECTOR$$ is used to read the changes from test_table and apply them to MV_test_table. Will this slow things down, because of the extra redo/undo as well? Is this correct?

Many thanks for your help.

Kubilay


Tom Kyte
April 07, 2005 - 10:24 am UTC

just when the volume of changes is large.

to refresh 100,000 rows basically bit by bit
or
to re-pull 500,000 rows in a big bulk operation

(I don't like the term "refresh fast", it implies "it is faster than the other option", it should have been called "refresh incremental" vs "refresh full")


would it be "faster" just to pull the entire table? Or maybe you can find your break even point "hey, if the log has more than M records -- refresh full else refresh fast"?
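That break-even test could be sketched like this (the threshold is a placeholder to tune on your own system; the table, MV, and dblink names are the ones used in this thread, and the MV log is counted across the same link the MV uses):

```sql
declare
   l_pending number;
begin
   select count(*) into l_pending from mlog$_table_test@db_link_to_a;

   if l_pending > 100000 then                     -- assumed break-even point
      dbms_mview.refresh('MV_TABLE_TEST', 'C');   -- complete: one big re-pull
   else
      dbms_mview.refresh('MV_TABLE_TEST', 'F');   -- fast: apply the log rows
   end if;
end;
/
```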

Materialized view fast refresh is slow

Kubilay, April 07, 2005 - 10:52 am UTC

Appreciate your fast response.

The source table in Database A has got around 200,000 rows, but it is a bulky table with a segment of 250MB plus 13 indexes. The target MV for this table has about 13 indexes as well, which I suppose adds to the "refresh-incremental" time, since the indexes have to be maintained too.


The volume of changes coming in to the source table is unpredictable, one day it can be 1000 changes, so easy peasy, another day it can be that someone has run a script 100 times doing updates on the source table and or deleted / reinserted 100,000 rows, for some reason, in the source table and the mlog$_ logs burst to 200,000 changes. The number of rows in the source table doesn't increase dramatically, maybe 70,000 extra rows a year. It is just the changes. Strange, but there is no "insert +append..." or bulk inserts or SQL*Loader direct-path, changes come bit-by-bit, row-by-row all the time, all conventional SQL.

It is a fast-incremental refresh; are you recommending a complete 'C' refresh in the dbms_mview.refresh procedure instead, when the logs are above a certain threshold, say more than 50% of the rows of the actual table? And how about the MV: if I do a complete refresh, will the users querying the MV experience problems or delays with their queries while the complete refresh runs? It is a 24/7 MV. Is a complete 'C' refresh quicker than fast-incremental in some cases?

Thank you very much!

Kubilay





Tom Kyte
April 07, 2005 - 11:46 am UTC

<quote>
It is a fast-incremental refresh, are you recommending a complete 'C' refresh in
the dbms_mview.refresh procedure instead, when logs above certain threshold, say
more than 50% of the rows of the actual table?
...
</quote>

I'm recommending you investigate that. You may have to create a snapshot refresh group in order to have the full refresh be a DELETE+INSERT instead of a TRUNCATE + INSERT /*+ APPEND */.

what is your version? there is a very interesting approach that could be taken in 10g.
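A sketch of putting the MV into a refresh group of its own (the group name is invented, the MV name is from this thread); a complete refresh driven through dbms_refresh is done as DELETE + INSERT inside one transaction, so readers never see an empty MV:

```sql
begin
   dbms_refresh.make(
      name      => 'mv_test_group',
      list      => 'MV_TABLE_TEST',
      next_date => sysdate,
      interval  => 'sysdate + 4/24'    -- e.g. every four hours
   );
end;
/

-- Refresh through the group rather than dbms_mview.refresh directly:
exec dbms_refresh.refresh('mv_test_group');
```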

Materialized view fast refresh is slow

Kubilay, April 07, 2005 - 12:15 pm UTC

Great!

I am on Oracle 9iR2 Enterprise Edition on RH Linux. The full (or complete) refresh is not possible since it will lock up the target MV for a while, and I don't want that since the MV is 24/7. I tested it on TEST, and while it was doing the complete refresh (which took 18 minutes) I couldn't select anything from the target MV; I got 'no rows selected' until the complete refresh finished.

Will the group
<quote>
refresh be a DELETE+INSERT instead of a
TRUNCATE+ INSERT /*+ APPEND */
</quote>
keep the MV available? I will investigate that!

How about creating 2 identical MVs, one serving the users while the other is refreshing, and once the refresh is complete,
switching to the freshly refreshed MV via a create-or-replace synonym? Do you think that is a viable solution?

Many thanks again!

Kubilay


Tom Kyte
April 07, 2005 - 12:27 pm UTC

the refresh group will -- the reason there was no data was the truncate.

that last thought is what I would suggest with 10g perhaps -- with 10g, create or replace synonym won't invalidate views and stored procedures, so it is something you can do without locking up your system from the DDL in many cases.
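The synonym flip itself is a couple of statements (all names here are hypothetical):

```sql
-- Users always query through the synonym.
create synonym report_data for report_mv_a;

-- Refresh the idle copy, then repoint the synonym to it.
exec dbms_mview.refresh('REPORT_MV_B', 'C');
create or replace synonym report_data for report_mv_b;
-- In 10g, CREATE OR REPLACE SYNONYM no longer invalidates dependent
-- views and PL/SQL, which is what makes this flip cheap to do online.
```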

A reader, April 27, 2005 - 1:26 pm UTC

Hi Tom,

We have two databases A,B. Very frequently we need data of A in B. For this you had suggested 'Materialized View'.

The data in database A changes every day. So can I still create a materialized view?

Tom Kyte
April 27, 2005 - 2:00 pm UTC

yes.

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96567/repmview.htm#29854

that is what they were designed to do.

toms is great

Neeraj Ranjan Rath,Hyderabad, May 14, 2005 - 6:55 am UTC

Tom, if we can create a materialized view using joins between two or more tables, then why not create one table containing the same fields from the different tables, so that we can query that table directly?
Also, the fields drawn from the different tables could be updated using any third-party EAI tool like TIBCO.
Which will be more useful?
Sorry, I still say I am a new-born baby in this techy world.

Thanks in advance

Tom Kyte
May 14, 2005 - 9:41 am UTC

in a transactional system, where the data comes from, denormalization would kill you (both in performance and data integrity).

In a data warehouse, normalization can sometimes be "not a good thing"

so, MV's are used to bridge the gap between the two needs.

How to balance Rewritten Cost - Original Cost

Ravi Monni, May 18, 2005 - 7:42 pm UTC

Hi Tom,

I have a SQL which gets rewritten only if the date range I query is over 20 days. I assume that Oracle Optimizer considers using index on the base table to cost less than using MV. However, we found that using MV is much faster when we specified hint "rewrite" in the SQL.

We want to avoid using the hint and make the Optimizer smarter. How can we influence rewritten cost for the Oracle Optimizer to consider it to be less expensive and use the MV to rewrite the SQL?

Needless to say, we have the constraints and dimension defined and we also have collected stats on the objects. The tables are not stale.

Thanks in advance.
Ravi

Tom Kyte
May 18, 2005 - 8:06 pm UTC

give me a for example... unhinted query, hinted query -- autotrace traceonly explain, tkprof

if it is small enough, I'll take a look here.

Join cardinality issue in D/W environments

Charu Joshi, May 19, 2005 - 2:45 am UTC

After interacting with you guys here, it has become clear that the way ahead for me is star transformations and MVs (where possible). By using star transformations and bitmap indexes, I won't need to bother about the predicate and join order (as much).

Star transformations.. bitmap indexes.. redesign ETL process.. resistance.. conflict with designers.. sigh..

Thanks Gabe and Tom for helping out.

Best wishes,
Charu.



Tom Kyte
May 19, 2005 - 7:55 am UTC

have you read Jonathan's really good papers on bitmaps / star transformation:

http://www.jlcomp.demon.co.uk/ind_misc.html

What went wrong

P.karthick, June 04, 2005 - 2:51 am UTC

SQL> create table temp_all_object as select * from all_objects;

Table created.

SQL> begin
  2  
  3  insert into temp_all_object select * from temp_all_object;
  4  
  5  insert into temp_all_object select * from temp_all_object;
  6  
  7  insert into temp_all_object select * from temp_all_object;
  8  
  9  end;
 10  /

PL/SQL procedure successfully completed.

SQL> select count(1) from temp_all_object;

  COUNT(1)
----------
    254488


SQL> set timing on

SQL> set autotrace on

SQL> select object_type,count(1) cnt from temp_all_object  group by object_type;

OBJECT_TYPE               CNT
------------------ ----------
CONSUMER GROUP             16
CONTEXT                     8
DIRECTORY                  24
FUNCTION                 2776
INDEX                   10112
INDEX PARTITION          1200
INDEXTYPE                  40
JAVA CLASS              80256
JAVA DATA                2320
JAVA RESOURCE            1040
JAVA SOURCE                64

OBJECT_TYPE               CNT
------------------ ----------
LIBRARY                   312
LOB                         8
OPERATOR                  144
PACKAGE                  3376
PACKAGE BODY             2984
PROCEDURE               10888
SEQUENCE                  552
SYNONYM                 87896
TABLE                   28400
TABLE PARTITION           464
TRIGGER                  2840

OBJECT_TYPE               CNT
------------------ ----------
TYPE                     1200
TYPE BODY                 144
VIEW                    17424

25 rows selected.

Elapsed: 00:00:03.04

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   SORT (GROUP BY)
   2    1     TABLE ACCESS (FULL) OF 'TEMP_ALL_OBJECT'




Statistics
----------------------------------------------------------
          0  recursive calls
          5  db block gets
       3434  consistent gets
        430  physical reads
          0  redo size
       1579  bytes sent via SQL*Net to client
        536  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         25  rows processed

SQL> create materialized view temp_all_object_mv
  2  enable query rewrite
  3  as
  4  select object_type,count(1) cnt from temp_all_object  group by object_type;

Materialized view created.

Elapsed: 00:00:01.13

SQL> select object_type,count(1) cnt from temp_all_object  group by object_type;

OBJECT_TYPE               CNT
------------------ ----------
CONSUMER GROUP             16
CONTEXT                     8
DIRECTORY                  24
FUNCTION                 2776
INDEX                   10112
INDEX PARTITION          1200
INDEXTYPE                  40
JAVA CLASS              80256
JAVA DATA                2320
JAVA RESOURCE            1040
JAVA SOURCE                64

OBJECT_TYPE               CNT
------------------ ----------
LIBRARY                   312
LOB                         8
OPERATOR                  144
PACKAGE                  3376
PACKAGE BODY             2984
PROCEDURE               10888
SEQUENCE                  552
SYNONYM                 87896
TABLE                   28400
TABLE PARTITION           464
TRIGGER                  2840

OBJECT_TYPE               CNT
------------------ ----------
TYPE                     1200
TYPE BODY                 144
VIEW                    17424

25 rows selected.

Elapsed: 00:00:02.04

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   SORT (GROUP BY)
   2    1     TABLE ACCESS (FULL) OF 'TEMP_ALL_OBJECT'




Statistics
----------------------------------------------------------
          0  recursive calls
          5  db block gets
       3434  consistent gets
        430  physical reads
          0  redo size
       1579  bytes sent via SQL*Net to client
        536  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         25  rows processed

SQL> alter system flush shared_pool;

System altered.

Elapsed: 00:00:00.41

SQL> select object_type,count(1) cnt from temp_all_object  group by object_type;

OBJECT_TYPE               CNT
------------------ ----------
CONSUMER GROUP             16
CONTEXT                     8
DIRECTORY                  24
FUNCTION                 2776
INDEX                   10112
INDEX PARTITION          1200
INDEXTYPE                  40
JAVA CLASS              80256
JAVA DATA                2320
JAVA RESOURCE            1040
JAVA SOURCE                64

OBJECT_TYPE               CNT
------------------ ----------
LIBRARY                   312
LOB                         8
OPERATOR                  144
PACKAGE                  3376
PACKAGE BODY             2984
PROCEDURE               10888
SEQUENCE                  552
SYNONYM                 87896
TABLE                   28400
TABLE PARTITION           464
TRIGGER                  2840

OBJECT_TYPE               CNT
------------------ ----------
TYPE                     1200
TYPE BODY                 144
VIEW                    17424

25 rows selected.

Elapsed: 00:00:02.14

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   SORT (GROUP BY)
   2    1     TABLE ACCESS (FULL) OF 'TEMP_ALL_OBJECT'




Statistics
----------------------------------------------------------
        227  recursive calls
          6  db block gets
       3478  consistent gets
        688  physical reads
          0  redo size
       1579  bytes sent via SQL*Net to client
        536  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
         25  rows processed


SQL> select object_type,count(1) cnt from temp_all_object where object_type = 'TABLE' group by object_type;

OBJECT_TYPE               CNT
------------------ ----------
TABLE                   28400

Elapsed: 00:00:03.26

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   SORT (GROUP BY)
   2    1     TABLE ACCESS (FULL) OF 'TEMP_ALL_OBJECT'




Statistics
----------------------------------------------------------
          0  recursive calls
          5  db block gets
       3434  consistent gets
        688  physical reads
          0  redo size
        428  bytes sent via SQL*Net to client
        425  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed


In all the cases I see  TABLE ACCESS (FULL) OF 'TEMP_ALL_OBJECT'

Tom Kyte
June 04, 2005 - 8:28 am UTC

because you never analyzed anything, and materialized views (and dozens of other things) are only available via the CBO (cost based optimizer).

You were using the old RBO (rule based optimizer).
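In the example above, gathering statistics is what moves the queries to the CBO and lets the rewrite happen, roughly:

```sql
-- Give the CBO statistics; only the cost based optimizer can
-- consider rewriting a query against the materialized view.
exec dbms_stats.gather_table_stats(user, 'TEMP_ALL_OBJECT');
exec dbms_stats.gather_table_stats(user, 'TEMP_ALL_OBJECT_MV');

alter session set query_rewrite_enabled = true;

select object_type, count(1) cnt
from   temp_all_object
group  by object_type;
-- the plan should now show TABLE ACCESS (FULL) OF 'TEMP_ALL_OBJECT_MV'
```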

Performance Issues

A reader, June 11, 2005 - 12:12 pm UTC

Hello Thomas,

I have a question about usage of materialized views.

Let's say I have a table X with columns (A and B) containing 40,00,000 records.

Let's say the data in the table is something like this:
A B
-- --
1 10
2 10
3 20
4 10

Now, the data in column A is based on two types of classification which come from another table: Y type and Z type. So column A stores the IDs for the Y- and Z-type classes.

From table X I actually have to find the B values which are the same for a Y-type ID and a Z-type ID.

something like

select count(*)
from   x
where  a = 1                                  -- value for Y type
and    b in (select b from x where a = 2)     -- value for Z type

so this will give me the common records for the Y and Z type classes.

I will be using this query within a stored procedure to get the matching records; of course, there is other logic in the procedure.

Now my question is, since this table is huge what is the best way to get this result.

and this query will be called in a loop to process about 20,00,000 records.

so should i execute such query each time or is it a good idea to get form a logic using materialized view and with columns like

YID, ZID, B and just do a query with where clause on YID and ZID.

your inputs will be highly appreciated.

Tom Kyte
June 11, 2005 - 2:00 pm UTC

whoa -- hold on. You should not in any way, shape, or form be calling anything millions of times in a loop!!!!! Databases were born to JOIN -- let them.


Rethink your logic here; ask "do I really need a loop at all, can I do this in SQL?"
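As a sketch of the set-based approach Tom is suggesting (the table and column names are taken from the question above; the exact output columns are an assumption), the entire loop can usually collapse into a single self-join:

```sql
-- Hypothetical set-based rewrite: find, in one pass, the B values that
-- appear under both the Y-type id (a = 1) and the Z-type id (a = 2),
-- instead of issuing one query per row inside a loop.
select x1.b
  from x x1, x x2
 where x1.a = 1        -- the Y type id
   and x2.a = 2        -- the Z type id
   and x1.b = x2.b;
```

One statement like this processes all matching rows at once, letting the optimizer pick a hash join or similar plan rather than repeating an index probe millions of times.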



Materialized Views

srinu, June 29, 2005 - 4:04 pm UTC

Hi Tom,
After reading this, I got a very good idea of MViews, and we are implementing them in our projects.

I have a very small clarification on MViews: can we create MViews if the query has a left outer join? If so, is it advisable?

thanks in advance

Tom Kyte
June 29, 2005 - 8:10 pm UTC

data warehousing guide has details on stuff like this too

yes, you can create a mv on an outer join.

and in some cases they can even be incrementally refreshed.

materialized view log table and corresponding trigger

Gagan Kumar, July 02, 2005 - 1:50 am UTC

Hello Tom,
I am very much thankful to you for your valuable suggestions on this topic.

I am using Oracle 8i on the Windows platform.
My schema has been granted the DBA role along with the create materialized view, create any trigger, create any table, and query rewrite system privileges.

Whenever I create a materialized view log, the corresponding trigger is not created.

Also, whenever I run a query that is the same as the query used in a materialized view with query rewrite enabled, the query does not use the materialized view. I found this through the plan table.
Please suggest.
Thanks and regards.
GKB from India.



Tom Kyte
July 02, 2005 - 9:24 am UTC

suggest you give us the world's smallest example.

is the MV analyzed after you create it? if not, the CBO won't understand how best to use it.

giving you the DBA role is like using an atom bomb to crack a walnut. It'll crack the walnut, but I would expect a lot of collateral damage in the process.

security counts and you can only be secure if you start that way from day 1.


I would not expect you to be able to see the internalized trigger used to support the MV log.

fast refresh outer joins (complex query)

A reader, July 19, 2005 - 5:00 pm UTC

Hi

You said

yes, you can create a mv on an outer join.
and in some cases they can even be incrementally refreshed.

In what cases can we fast refresh? I did a test and created a unique index on the join columns without success. I have this data:

select * from emp

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
----- ------- -------- ------ --------- ------ ----- ------
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10


select * from dept;

DEPTNO DNAME LOC PROJECT
---------- -------------- ------------- ----------
10 ACCOUNTING NEW YORK 30
20 RESEARCH DALLAS 40
30 SALES CHICAGO 50
40 OPERATIONS BOSTON 60

then did this

create materialized view log on emp
with sequence, rowid
(EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
including new values;

create materialized view log on dept
with sequence, rowid
(deptno, dname, loc, project)
including new values;

create materialized view agg$emp01
refresh fast
as
select dname, sum(sal) dept_sal
from dept d, emp e
where d.deptno = e.deptno(+)
group by dname;

insert into emp values
(9000, 'LSC', 'JEFE', 7782, '24-JAN-88', 2000, 20, 50);


exec dbms_mview.refresh('AGG$EMP01')
BEGIN dbms_mview.refresh('AGG$EMP01'); END;

*
ERROR at line 1:
ORA-12052: cannot fast refresh materialized view LSC.AGG$EMP01
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 803
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 860
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 841
ORA-06512: at line 1

I have a unique index on dept.deptno and emp.deptno.

I thought I would be able to fast refresh using the unique indexes, but it doesn't seem so. Am I missing anything?

A general-knowledge question: if I run this query

select dname, ename
from dept left outer join emp
on dept.deptno = emp.deptno

which is the inner table and which is the outer?

Thank you

Tom Kyte
July 19, 2005 - 6:02 pm UTC

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96520/mv.htm#42281


specifically:

# If there are no outer joins, you may have arbitrary selections and joins in the WHERE clause. However, if there are outer joins, the WHERE clause cannot have any selections. Further, if there are outer joins, all the joins must be connected by ANDs and must use the equality (=) operator.

# If there are outer joins, unique constraints must exist on the join columns of the inner table. For example, if you are joining the fact table and a dimension table and the join is an outer join with the fact table being the outer table, there must exist unique constraints on the join columns of the dimension table.


if you give a complete test case, I'll take a peek when I have time, but I'm traveling a lot the next two days. I'd want the entire thing: create table, keys, the whole thing.

complete case

A reader, July 20, 2005 - 4:19 am UTC

Hi

here it is:


create table emp as
select  EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO
from
(select row_number() over (partition by deptno order by empno) rn, emp.* from scott.emp)
where rn = 1;

create table dept as select * from scott.dept;

alter table dept add project number;

update dept set project = deptno + 20;

create materialized view log on emp
with sequence, rowid
(EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
including new values;

create materialized view log on dept
with sequence, rowid
(deptno, dname, loc, project)
including new values;


create materialized view agg$emp01
refresh fast
as
select dname, sum(sal) dept_sal
from dept d, emp e
where d.deptno = e.deptno(+)
group by dname;

ERROR at line 5:
ORA-12015: cannot create a fast refresh materialized view from a complex query

create unique index emp_idx1 on emp(deptno);

SQL> create materialized view agg$emp01
  2  refresh fast
  3  as
  4  select dname, sum(sal) dept_sal
  5  from dept d, emp e
  6  where d.deptno = e.deptno(+)
  7  group by dname;

Materialized view created.

insert into emp values  
(9000, 'LSC', 'JEFE', 7782, '24-JAN-88', 2000, 20, 50);

1 row created.

SQL> commit;

Commit complete.

SQL> exec dbms_mview.refresh('AGG$EMP01');
BEGIN dbms_mview.refresh('AGG$EMP01'); END;

*
ERROR at line 1:
ORA-12052: cannot fast refresh materialized view LSC.AGG$EMP01
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 803
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 860
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 841
ORA-06512: at line 1

why can't I fast refresh this thingie?
 

Tom Kyte
July 20, 2005 - 6:41 am UTC

You'll want to check out that chapter.....

<quote>
Materialized aggregate views with outer joins are fast refreshable after conventional DML and direct loads, provided only the outer table has been modified. Also, unique constraints must exist on the join columns of the inner join table. If there are outer joins, all the joins must be connected by ANDs and must use the equality (=) operator.
</quote>

you would need a unique constraint on EMP(DEPTNO) in order for the first mv to be fast refreshable (and include count(*), count(sal) which are prereqs for the aggregate as well)
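To sketch what that means for the test case above (this follows the documented rules rather than a verified run, so treat it as a hedged example), the inner table's join column needs a declared unique constraint, and the MV needs the extra aggregate columns:

```sql
-- EMP is the inner table of the outer join, so its join column needs a
-- unique CONSTRAINT (the documented requirement is a constraint, not
-- merely a unique index).
alter table emp add constraint emp_deptno_uk unique (deptno);

-- The aggregate MV also needs count(*) and count(sal) alongside sum(sal)
-- to be fast refreshable (recreate the MV after dropping the earlier one).
create materialized view agg$emp01
refresh fast
as
select dname,
       sum(sal)   dept_sal,
       count(sal) cnt_sal,
       count(*)   cnt
  from dept d, emp e
 where d.deptno = e.deptno(+)
 group by dname;
```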

which is inner and outer table

A reader, July 20, 2005 - 9:06 am UTC

Hi

In my case, which table is inner and which is outer?
I think it may be failing because of this:

"provided only the outer table has been modified"

But I don't know which of my tables is the outer one. I modified emp -- is that the inner table?

Tom Kyte
July 20, 2005 - 12:47 pm UTC

a left outer b

could be read as

a THE left IS outer b




I get an ORA 600 [12811]...

A reader, August 22, 2005 - 5:43 pm UTC

...when I try to create an MV on user_source. There are many millions of lines of source. Is this a factor, and if so, is there an option I can use when creating the MV?

The syntax is:

create materialized view t1_source
build immediate
as
select name, type, text from user_source
order by type, name;


Tom Kyte
August 24, 2005 - 3:30 am UTC

ora-600 => contact support.

but what would be the goal of this particular MV? It would just be an (out of date) copy of user_source.

I get an ORA 600 [12811]...

A reader, August 24, 2005 - 11:02 am UTC

Hi Tom.

The point is to allow developers to see what is in user_source of a runtime system (they don't trust that we can migrate things properly -- or moreover, they're not intelligent enough to use an nfs mount to look at the source themselves). The view would be refreshed each time a patch is applied to the RT system.

Tom Kyte
August 24, 2005 - 5:59 pm UTC

ora-600 => contact support.

I get an ORA 600 [12811]...

A reader, August 24, 2005 - 11:03 am UTC

....and actually the ORA 600 only happens in 9i.

In tests on 8i and 10g it works fine.

ORA-12052

Arindam Majumder, September 07, 2005 - 8:12 am UTC

I need more information on creating a materialized view on two base tables.

Tom Kyte
September 07, 2005 - 8:49 am UTC

http://docs.oracle.com/docs/cd/B14117_01/server.101/b10736/toc.htm

see the chapters on doing so (8 and 9)

maybe I'm finally getting old

Dean, September 07, 2005 - 11:16 am UTC

but I hate MVs and have since they were called snapshots. I often need summarized tables and build a lot of my DW (OLAP or otherwise) off of these summarized tables, but I've always found snapshots/MVs more trouble than they were worth, and typically prefer a truncate/insert /*+ APPEND */ approach, managing refreshes through code (or OWB).

I'm just glossing over some of the sophistication, but that's it in a nutshell.

I'd hate to think I was becoming one of those DBAs that still carry around their Oracle6 scripts on a floppy, but am I really missing anything by not using MVs?

I strongly believe in "the right tool for the job", so I know a generic answer is not fair, but I really just want to know if I'm missing out on something I should be using or considering along with the new architectural features I am exploiting.

Thank you for your time.

Tom Kyte
September 07, 2005 - 1:50 pm UTC


one term for you:

query rewrite


that is what you are missing out on. or maybe "incrementally refreshed", that too.


I like to call MV's the indexes of your data warehouse. Say you discover this query:


select count(*), avg(sal), min(sal), max(sal)
from literally_huge_table
where something = <x>


and you see it is done over and over with different X's -- it and others like it are very popular in your system.

What do you do? Do you index (something, sal) and live with large range scans?

Even bitmaps on something aren't going to help here.


but....... create an MV that groups by something -- and you get, well, something magical: queries running at light speed against the MV without having to be rewritten.


And you have things like partition change tracking, incremental refreshes, on commit refreshes, etc, etc etc -- have you checked out the data warehousing guide recently?
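A minimal sketch of that "indexes of your data warehouse" idea (the table and column names come from the example query above, so they are placeholders; `enable query rewrite` is what lets the optimizer transparently redirect the original statements):

```sql
-- One MV can answer the whole family of queries with different <x> values.
create materialized view huge_agg_mv
build immediate
refresh on demand
enable query rewrite
as
select something,
       count(*)   cnt,
       count(sal) cnt_sal,   -- kept so avg(sal) can be derived as sum/count
       sum(sal)   sum_sal,
       min(sal)   min_sal,
       max(sal)   max_sal
  from literally_huge_table
 group by something;
```

With query_rewrite_enabled set to true and statistics gathered on the MV, the original query against literally_huge_table can be answered from the MV without being rewritten by hand.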

Thanks

Dean, September 07, 2005 - 4:11 pm UTC

Thank you for your time. Yes, I keep going back to the DW guide on a regular basis to keep getting ideas, especially as we are evaluating the new OLAP options since Express has been embedded (Discoverer OLAP is good but buggy -- or it may be AWM).


I meant to indicate that I was already summarizing the tables based on my needs, so queries at this level would never hit the individual source rows (that's a drill-down query for us). My insert statement is very similar to the MV one, even including the group by, etc.

When I use the create MV syntax and then search user_objects, I see that it in fact creates a table with the name of the MV, a function-based I_SNAP$_ index (in their appropriate segment types), and an MV which, like views, has no data_object_id.

I think the MV syntax and supporting pl/sql is a fast and manageable way to do what I've been doing by hand (which I admit could be enough justification), and I appreciate your time. I just enjoy the flexibility that Oracle gives me to do things the way I want to (we also have MSSQL) and wanted to make sure that I wasn't cheating my users or myself out of anything.

P.S. I use incremental upserts in some of my tables. Thanks again for your time.

David Aldridge, September 07, 2005 - 4:37 pm UTC

Now I have to say that I hold no candle for partition change tracking, as it requires but a single failure-to-refresh to reduce you to two options:

i) complete refresh (not always practical on a very large master table)
ii) reversion to conventional refresh.

Having said that though (and I've raised an enhancement request on this), I agree that query rewrite is the Best Thing Since Your Favourite Thing, and I will share with you an interesting exercise that my co-workers and I performed today. It goes like this:

We added a new un-indexed materialized view to a data warehouse fact table, which took less than an hour to create. A particular class of high-level report which had been running in approximately 15-16 minutes then began to run in 2 seconds. Let me say that again ... 1 hour to create ... 15 minutes down to 2 seconds ... roughly a 450x improvement.

Unless you are performing some kind of funky reporting against your fact table that would prevent query rewrite (analytic functions, for example), there is almost no class of report that cannot be made to run almost instantly with an MV ... it's almost like cheating.

Comparison between a Materialized View and a Standalone table

Rahul Dutta, September 20, 2005 - 5:20 am UTC

Hi Tom,

Can you please give me some data points for using a materialized view over a standalone table.

The standalone table gets populated periodically using a stored procedure.
A view will not solve the problem, as it is a very complex query.

More info needed

Sudha Bhagavatula, October 13, 2005 - 11:04 am UTC

We are building a new data warehouse which will initially get loaded every month. We have fact tables which are partitioned by month; every month we load, we are going to delete the previous 23 partitions and reload them with the data for those months the way they look as of the load month.

For instance, data was loaded on 7/31/2005 covering 8/01/2003 to 7/31/2005.

Then on 08/31/2005 the previous 23 partitions, that is from 09/01/2003 to 7/31/2005, will be deleted and refreshed with data as of 08/31/2005.

The reporting date will be set to 08/31/2005 for all the tables.

I need to create a materialized view on a fact table which is joined to other dimension tables by primary key (sequence generated values), with aggregates of counts and amounts. So if I create the MV on 07/31/2005, I'll have the aggregate table. However, when 08/31/2005 comes around and the partitions are deleted, I don't want that data to be deleted from the MV; the data as of 08/31/2005 should be appended into the MV, retaining the data as of 07/31/2005 as well.

How can I achieve that? This is the first time that I am planning to use a materialized view. I'm on Oracle 10g.


Thanks
Sudha

Tom Kyte
October 13, 2005 - 1:12 pm UTC

I don't get the "delete the previous 23 partitions (months of data) and reload them"

why?


but materialized views ALWAYS reflect what is in the base table - and it sounds like you want them to diverge - you won't be using a materialized view, you'll have to find some other technique (code of your own design)

Fast Refresh from MV does not work - any clue ?

Thomas Becker, October 21, 2005 - 2:33 pm UTC

Hi Tom,

is it possible to create a fast-refresh MV from other MVs? I played around with the following script, but I always get the message that it is not working. I searched the docs but could not find anything that points me to the light. REFRESH_FAST stays at "N" -- what do I miss here?

Thanks for your help
Thomas

-- Base Tables
create table TEST1 (COLUMN1 VARCHAR2(10) not null,COLUMN2 VARCHAR2(10) not null);
alter table TEST1 add constraint TEST1_PK primary key (COLUMN1);

create table TEST2 (COLUMN1 VARCHAR2(10) not null,COLUMN2 VARCHAR2(10) not null);
alter table TEST2 add constraint TEST2_PK primary key (COLUMN1);

CREATE MATERIALIZED VIEW LOG ON TEST1 WITH ROWID INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG ON TEST2 WITH ROWID INCLUDING NEW VALUES;

-- First Level MV
CREATE MATERIALIZED VIEW TEST1_MV
USING INDEX
REFRESH FAST ON DEMAND WITH ROWID
AS
SELECT T1.COLUMN1,
T1.COLUMN2,
T1.ROWID T1ROWID
FROM TEST1 T1;

CREATE MATERIALIZED VIEW TEST2_MV
USING INDEX
REFRESH FAST ON DEMAND WITH ROWID
AS
SELECT T2.COLUMN1,
T2.COLUMN2,
T2.ROWID T2ROWID
FROM TEST2 T2;

CREATE MATERIALIZED VIEW LOG ON TEST1_MV WITH ROWID, SEQUENCE (COLUMN1, COLUMN2) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG ON TEST2_MV WITH ROWID, SEQUENCE (COLUMN1, COLUMN2) INCLUDING NEW VALUES;

CREATE TABLE MV_CAPABILITIES_TABLE
(
STATEMENT_ID VARCHAR2(30),
MVOWNER VARCHAR2(30),
MVNAME VARCHAR2(30),
CAPABILITY_NAME VARCHAR2(30),
POSSIBLE CHAR(1),
RELATED_TEXT VARCHAR2(2000),
RELATED_NUM NUMBER,
MSGNO INTEGER,
MSGTXT VARCHAR2(2000),
SEQ NUMBER
);

-- The final - not working SQL
BEGIN
DBMS_MVIEW.EXPLAIN_MVIEW
('SELECT T1.COLUMN1,T1.COLUMN2,
T2.COLUMN1,T2.COLUMN2
FROM TEST1_MV T1,
TEST2_MV T2
WHERE T1.COLUMN1 = T2.COLUMN1');
END;

SELECT * FROM MV_CAPABILITIES_TABLE;


Tom Kyte
October 22, 2005 - 10:26 am UTC

that doesn't meet the requirements of a fast refresh JOIN MV, that is why... (e.g., using test1 and test2 directly wouldn't work either)


but regardless, this type of nested MV isn't supported for fast refresh. You would need join MVs or aggregate MVs for this one to be nested.
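For reference, a fast-refreshable JOIN MV over the two base tables would need the rowid of every table in its select list -- a sketch based on the documented rules for join MVs, using the tables and the WITH ROWID MV logs from the script above:

```sql
-- A join MV is fast refreshable when it selects the rowid of each base
-- table and each base table has an MV log created WITH ROWID.
create materialized view test_join_mv
refresh fast on demand
as
select t1.rowid   t1_rowid,
       t2.rowid   t2_rowid,
       t1.column1,
       t1.column2 t1_col2,
       t2.column2 t2_col2
  from test1 t1, test2 t2
 where t1.column1 = t2.column1;
```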

Thomas, October 24, 2005 - 4:51 pm UTC

Hi Tom,

thanks for the answer - I found the catch on the MV's.

Here is my problem: we have a production DB (9.2) and want to copy (replicate) some base tables (30+) to our reporting DB (also 9.2).

To minimize the workload added to production, we wanted to use MVs with logs -- also to allow us to refresh the data more frequently.
Since we are recreating some of the reporting logic, our idea was to use fast nested MVs wherever we can.

What do you think? Should we use CDC (Streams) to get the data over? Is there anything I missed?

Thanks
Thomas




Tom Kyte
October 25, 2005 - 1:22 am UTC

CDC is not streams, streams is not CDC.

I think you can investigate these technologies (probably streams) to see if they meet your requirements.

MV Refreshing

Praveen, November 07, 2005 - 7:31 am UTC

Hi tom,
I am trying to refresh an MV using dbms_job, but next_date is not updating to what I have defined in my next_date parameter.

I have defined the queries as below to schedule the job

DECLARE
    jobno   NUMBER;
   BEGIN
     DBMS_JOB.SUBMIT
    (job  => jobno,
    what => 'dbms_mview.refresh(''MV_REF_MAST'',''C'');'
      ,next_date => SYSDATE
       ,interval  => 'trunc(sysdate+7)+01/24');
    COMMIT;
END;

 OR 

DECLARE
    jobno   NUMBER;
   BEGIN
     DBMS_JOB.SUBMIT
    (job  => jobno,
    what => 'begin mv_refresh; end;'
      ,next_date => SYSDATE
       ,interval  => 'trunc(sysdate+7)+01/24');
    COMMIT;
END;

output of the above
SQL> select NEXT_DATE,INTERVAL,WHAT from dba_jobs where what like '%REF%';

 NEXT_DATE           INTERVAL                       WHAT
 ------------------- ------------------------------ ------------------------------
 07/11/2005 16:27:56 trunc(sysdate) + 7 +01/24      dbms_mview.refresh('MV_REF_MST','C')

In next_date it is showing only a 4 min. difference.
Please suggest why it is not taking the defined next_date.

Thanks in advance
     

Tom Kyte
November 07, 2005 - 10:41 am UTC

Next date is the next date it should run -- right after calling submit, you would expect that to be "now".

next_date is a DATE, not a string in this case. I'm not sure what you mean by "it is showing the 4 min. difference only" -- what 4 minutes?

A reader, November 07, 2005 - 10:27 pm UTC

Tom,
My question was: whatever I defined for the interval, it is not showing as per the defined parameter.
e.g. I submitted the job now (11/08/2005 10:01:00); when I select the values from the dba_jobs table, it shows NEXT_DATE as 11/08/2005 10:01:00 instead of 11/15/2005 01:00:00, because my interval is trunc(sysdate)+7+01/24.

Please advise.
Actually, I want to schedule the MV refresh on a weekly basis. How can I do this?

Tom Kyte
November 08, 2005 - 9:50 pm UTC

but you said "next_date = sysdate"

that is what next_date is - a specific DATE


there is next_date (first time to run it)
and there is interval (what to add to the date when it does get run)

if you wanted it to first run at "11/15/2005 01:00:00", you would have sent in a date that had that value.


next_date = FIRST TIME TO RUN JOB
interval = AMOUNT to add to that date when the job is run, to say when to
next schedule it for.

A reader, November 07, 2005 - 10:30 pm UTC

Sorry!!!!! Tom,
I forgot to tell you about the 4 min. difference.

My question was: whatever I defined for the interval, it is not showing as per the defined parameter.
e.g. I submitted the job now (11/08/2005 10:01:00); when I select the values from the dba_jobs table, it shows NEXT_DATE as 11/08/2005 10:04:00 (4 min. diff.) instead of 11/15/2005 01:00:00, because my interval is trunc(sysdate)+7+01/24.

Please advise.
Actually, I want to schedule the MV refresh on a weekly basis. How can I do this?




Tom Kyte
November 08, 2005 - 9:54 pm UTC

what do you see when you do this:


ops$tkyte@ORA10GR2> variable n number;
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select to_char( sysdate, 'hh24:mi:ss' ) from dual;

TO_CHAR(SYSDATE,'HH24:MI
------------------------
21:46:54

ops$tkyte@ORA10GR2> exec dbms_job.submit(:n, 'null;', next_date => sysdate );

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> select what, to_char( next_date, 'hh24:mi:ss') from user_jobs;

WHAT                           TO_CHAR(NEXT_DATE,'HH24:
------------------------------ ------------------------
null;                          21:46:54

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec dbms_job.remove( :n );

PL/SQL procedure successfully completed.



You need to pass in a date that is the time you want the first refresh to happen, if that is to be 1am tomorrow - that would be the date to pass in.  NOT sysdate. 
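Putting that together for the weekly 1am refresh the poster is after, the same expression used for the interval would be passed -- evaluated to a DATE -- as next_date. A sketch, assuming the MV name from the earlier post:

```sql
declare
    jobno number;
begin
    dbms_job.submit(
        job       => jobno,
        what      => 'dbms_mview.refresh(''MV_REF_MAST'',''C'');',
        -- first run: 1am, seven days out (a DATE, evaluated right now)
        next_date => trunc(sysdate + 7) + 1/24,
        -- subsequent runs: the same expression, kept as a string and
        -- re-evaluated each time the job completes
        interval  => 'trunc(sysdate + 7) + 1/24');
    commit;
end;
/
```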

A reader, November 18, 2005 - 3:30 pm UTC

Hi Tom,

We want to create materialized views based on the following query. The query is a little complex. Is it a good idea to create materialized views on a complex query?

Please advise.

Thanks

SELECT
-- primary key for the materialized view
a.row_id, b.row_id, c.row_id, d.row_id, e.row_id,
-- account information
c.name, c.cust_stat_cd, c.x_main_pt_flg, c.x_listing_id, c.x_cust_num, c.x_org_id, a.x_order_num_hdr,
a.root_asset_id, a.par_asset_id,

-- products and attributes
b.name "product name", e.display_name "Product Attributes",
decode(e.data_type_cd, 'Integer', to_char(e.num_val), e.char_val) as value, b.x_prod_code "product code",
d.x_edition_year
FROM siebel.s_prod_int b,
siebel.s_pri_lst d,
siebel.s_org_ext c,
siebel.s_asset_xa e,
siebel.s_asset a
WHERE a.prod_id = b.row_id
AND a.serv_acct_id = c.row_id
AND a.x_pri_lst_id = d.row_id
AND a.row_id = e.asset_id(+)
AND d.x_edition_year = 2006


Tom Kyte
November 18, 2005 - 3:56 pm UTC

yes, it can be quite a good idea - if it solves a problem you have.

A reader, November 18, 2005 - 4:29 pm UTC

Hi Tom,
Thanks for the quick reply.

As per our DBA, the materialized view refresh will take a long time and will affect database performance (it's over a db link). They are recommending creating materialized views on top of each table rather than using the complex query. Here is the statement in their own words:

"It's not feasible to create any complex snapshot/materialized view with join across the database link. It just takes forever. Simple snapshot takes much shorter"

Please advise

Thanks



Tom Kyte
November 19, 2005 - 9:46 am UTC

they would be "not accurate" in many cases.

It is certainly feasible and people do it.

Mview Refresh

pradikan, February 24, 2006 - 5:53 am UTC

Hi Tom,

We have several materialized views defined in our apps. They are refreshed every 30 minutes based on the settings. My question: is there any way to deactivate this refreshing without deleting the materialized view itself? Maybe in the future we will need them to refresh again...


Could you please let me know how this should be done.

Thanks in advance

Pradikan

Tom Kyte
February 24, 2006 - 8:36 am UTC

MVIEW REFRESH

pradikan, February 24, 2006 - 10:44 am UTC

Hi Tom,

Thanks a lot for the solution. We have altered some of the unwanted materialized views and have set the refresh time to 60 days.

This helps us a lot. I would like to ask you one more thing:

Is there any other, better way of disabling the mview refresh, or is this the only option given by Oracle?

Please update me at the earliest.

Thanks in Advance...

Pradikan

Tom Kyte
February 24, 2006 - 10:55 am UTC

I cannot think of anything easier or better than "a single command".
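For the record, the "single command" here is presumably ALTER MATERIALIZED VIEW; a sketch of pushing the automatic refresh far into the future, matching what the poster describes (the MV name is made up):

```sql
-- Postpone the scheduled refresh by ~60 days without dropping the MV.
alter materialized view my_mv refresh next sysdate + 60;
```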

Synchronize data from MV to Remote Table

Debasish Ghosh, March 09, 2006 - 2:43 am UTC

I have a FOR UPDATE materialized view against a remote table. When I insert data into the remote table, my MV refreshes the data; that's all right. But the question is: when I insert data into the MV, the data is not inserted into the remote table. For that I am writing an after-insert trigger on the MV, and it's working fine -- but when the connection is lost, the data is not inserted into the remote database. Is there any way, when the connection becomes available again, for the remote table to be synchronized automatically with the MV, so the new data becomes available in the remote table?
Thanks in advance
DG

Tom Kyte
March 09, 2006 - 1:15 pm UTC

insufficient data -- but first and foremost, DROP THAT TRIGGER. That is a bug; absolutely DO NOT DO THAT.

second, contact support and work through your setup. You've likely missed a job on the one end that pushes the changes, or you have an error log that is full of errors.

Trigger on an MV?

James, April 20, 2006 - 1:41 am UTC

Let's just say I create an MV:

create snapshot log on emp with rowid including new values;
create snapshot log on dept with rowid including new values;

create materialized view ny_emp_mv refresh fast on commit as
select emp.rowid "emp_rid", dept.rowid "dept_rid",emp.empno,emp.deptno,sal from emp,dept
where emp.deptno=dept.deptno
and dept.loc='NEW YORK'
and emp.sal>2000
/

Let's just say it was very important to me to know when an employee was added to this snapshot. Say I wanted to send them an email welcoming them to the "New York 2000 Club!"

How would I best track this?

I could have a trigger on the EMP table that checked SAL and then looked up DEPT to see if their department is in NY or if they get transferred to a NY office, or indeed a trigger on DEPT to see if an entire office is moved to NY... etc etc

==OR==

I could have an insert trigger on the snapshot to send them the welcome message. I could even have an on-delete trigger to send them a goodbye message.

Now just suppose I have a snapshot with 10 tables, with lots of where predicates and conditions about what makes it into the snapshot -- the above example x 10. We **INVOICE** on whether or not a row makes it into the snapshot -- so I need to track this.

It seems to me that this would be a fairly common scenario, and I would guess that triggers on materialized views would be the recommended solution...

Thanks and apologies in advance <x>crosses fingers</x>



Tom Kyte
April 20, 2006 - 8:01 am UTC

...
I could have an insert trigger on the snapshot to send them the welcome message.
I could even have an on delete trigger to send them a goodbye message.
.....

seems dangerous, as there is nothing stopping Oracle from performing an "update" of a materialized view as a DELETE+INSERT. The internals of how materialized views are maintained are just that -- "internal" -- and subject to change (so even if documented, so what: you cannot control the "how" of materialized view refreshes).

I don't think you should be doing things based on when a row pops up in a materialized view, that logic belongs way back on the base tables.

(I don't like triggers to implement logic like this either - seems it should be part of the transaction itself that modified the data - using a transactional API written as a stored procedure that all clients invoke)
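A rough sketch of the transactional-API shape Tom describes (the procedure name, the club criteria, and the notification routine are all illustrative, not a documented API):

```sql
-- All clients call this procedure instead of inserting into EMP directly,
-- so the "welcome" logic lives in the transaction itself, not in a trigger.
create or replace procedure hire_emp(
    p_empno  in emp.empno%type,
    p_ename  in emp.ename%type,
    p_sal    in emp.sal%type,
    p_deptno in emp.deptno%type )
as
    l_loc dept.loc%type;
begin
    insert into emp (empno, ename, sal, deptno)
    values (p_empno, p_ename, p_sal, p_deptno);

    select loc into l_loc from dept where deptno = p_deptno;

    if l_loc = 'NEW YORK' and p_sal > 2000 then
        send_welcome_mail(p_empno);  -- hypothetical notification routine
    end if;
end;
/
```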

I tested this and found something...

James, April 20, 2006 - 3:35 am UTC

Updates to the base data result in a delete-and-insert transaction on the MV. I guess that means triggers on MVs are not a good idea. I've also read that apparently they are 'not supported.' I guess you are not going to recommend them then.

I suppose running CDC against an MV is out of the question as well????

Regards

Tom Kyte
April 20, 2006 - 8:04 am UTC

responded to above...

Materialized View QUERY Performance

Alex, May 17, 2006 - 8:11 pm UTC

Hi Tom

1. How do you boost the QUERYing performance of an MV that contains a result set of, say, 50 or 500 million rows?

2. Since one cannot build indexes on an MV, wouldn't the logical solution be to convert the MV into an ordinary table where indexes can be built on all the desirable columns?

3. Is there a SUPER-fast way to convert an MV to an ordinary table, something like the opposite of
"CREATE MV x ON PREBUILT TABLE ...", or perhaps an undocumented "CREATE TABLE y from MV x" :-)


4. If the MV is based on a complex SQL query that references 10 tables, then one would need 10 MV logs for a FAST refresh to be possible. Am I correct?

How much of an overhead are these 10 MV logs for OLTP transactions that perform DML against the 10 tables?

In consideration of this last point, would you consider customizing the loading/refresh process into an ordinary table via manual coding instead of using an MV?

Other benefits that an ordinary table has that an MV does not have are indexing capabilities and table partitioning, right?

Tom Kyte
May 18, 2006 - 10:44 am UTC

1) same way as you would any query?  they are just tables.  You have all of the same techniques you would use for any table.

2) what?  sure you can - and many people do.  

3) they are normal tables!!!!

4) each base table would need a log, yes.  The "overhead" is an insert into the log.





ops$tkyte@ORA10GR2> /*
ops$tkyte@ORA10GR2> drop table t;
ops$tkyte@ORA10GR2> drop materialized view mv;
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create table t ( x int primary key, y int, z char(30) );
ops$tkyte@ORA10GR2> insert into t select rownum, rownum, 'x' from all_objects;
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create materialized view mv
ops$tkyte@ORA10GR2> as
ops$tkyte@ORA10GR2> select * from t;
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create index mv_idx on mv(y);
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'MV' );
ops$tkyte@ORA10GR2> */
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> set autotrace traceonly explain
ops$tkyte@ORA10GR2> select * from mv where y = 5;

Execution Plan
----------------------------------------------------------
Plan hash value: 1879218574

-----------------------------------------------------------------------------------------
| Id  | Operation                      | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        |     1 |     9 |     2   (0)| 00:00:01 |
|   1 |  MAT_VIEW ACCESS BY INDEX ROWID| MV     |     1 |     9 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN             | MV_IDX |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("Y"=5)

ops$tkyte@ORA10GR2> set autotrace off 

Peculiar Mview refresh times

Bakunian, June 08, 2006 - 3:12 pm UTC

Tom,

We are running 9iR2 on Solaris.
I've noticed strange behavior of the Mview refresh mechanism: for example, if the parent table was just loaded, the refresh takes 20 min.; if the table has not been loaded, the refresh takes 2 min.
The Mview is built with REFRESH COMPLETE ON DEMAND.

Q.1 How does Oracle know that the parent table was not loaded if I don't have a LOG on it?

Q.2 Can I add a LOG to the parent table after the Mview has already been created and change the Mview to REFRESH FAST?

Q.3 In the fact table we have many types of dates (date_funded, date_submitted, date_denied, etc.), so loan_id = 1000 has a date history. In our case we have to self-join this table 8 times to get all the dates.
Is it possible to do the same thing using analytic functions?


I have your old book and it has good examples on analytics; does your new book have a chapter like that? You are right, analytic functions rock; I just need to understand how to use them. There is usually more than one way to achieve the same result, so when you choose one method over another, what is your thought process? I am not a big fan of just memorizing how to use these functions; I would like to understand the full potential of Oracle analytics and hopefully stop asking stupid questions.

Thank you for your time,



Tom Kyte
June 08, 2006 - 3:43 pm UTC

You'll need a bit more detail here.

What is the difference between the parent table just after being loaded and when it has not just been loaded?

Not really following you here.

RE: Mview refresh

Bakunian, June 08, 2006 - 4:39 pm UTC

The parent table is the table on which the Mview is built; it is loaded by the ETL batch process, and when the load completes the Mview is refreshed.

So what I meant to say is that the first time I refresh the mview right after the load it takes 20 min., but if I just go ahead and refresh a second time it takes 2 min. Why?


Tom Kyte
June 08, 2006 - 8:06 pm UTC

likely a result of block cleanout and physical IO.

But - a simple:

load it
turn on trace with waits
refresh it
exit sqlplus

turn on trace with waits
refresh it
exit sqlplus

would remove all guessing, we'd be able to compare the resulting traces to see what the big waits were.
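The "turn on trace with waits" step above can be sketched like this in SQL*Plus (the MV name MY_MV is hypothetical):

alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 12';  -- level 12 captures waits and binds
exec dbms_mview.refresh( 'MY_MV' );
alter session set events '10046 trace name context off';

Run tkprof on the two resulting trace files (one per refresh) and compare the wait events.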

RE: Materialized view time

Bakunian, June 08, 2006 - 9:10 pm UTC

Great I will do that.

How about Q.2 and Q.3

Thank you.


Tom Kyte
June 09, 2006 - 6:34 am UTC

Q.2 Can I add LOG to parent table after Mview already has been created and
change Mview to REFRESH FAST?

yes, you can add it anytime you want.


Q.3 In the fact table we have many types of dates (date_funded, date_submitted,
date_denied and etc.) so loan_id = 1000 has a date history. In our case we have
to self-join this table 8 times to get all dates.
Is it possible to do the same thing using analytic functions?

I don't see why this leads to a "self join" 8 times, you'd have to provide a small example (only need to see "two", not all "eight" of course - smaller is better)
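As a sketch of the conditional-aggregation alternative to the self-joins - assuming (hypothetically) the dates are stored one per row with a type code, e.g. loan_dates( loan_id, date_type, dt ) - the eight self-joins collapse into one pass over the table (only two of the eight date types shown):

select loan_id,
       max( case when date_type = 'FUNDED'    then dt end ) date_funded,
       max( case when date_type = 'SUBMITTED' then dt end ) date_submitted
  from loan_dates
 group by loan_id;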

David Aldridge, June 09, 2006 - 9:44 am UTC

>> Q.2 Can I add LOG to parent table after Mview already has been created and change Mview to REFRESH FAST?

>> yes, you can add it anytime you want.

After creating the MV log run a final complete refresh on all the MVs or your first attempt to fast refresh will fail with ..

ORA-12034: materialized view log on "MY_USER"."MY_MV" younger than last refresh

</code> http://oraclesponge.blogspot.com/2005/12/ora-12034-materialized-view-log.html <code>

PREBUILT TABLE

Bakunian, June 09, 2006 - 12:50 pm UTC

Thank you David I did not know that, and a good article.

However, my Mviews were not built with the PREBUILT TABLE clause; should I rebuild them? I still don't understand the difference between having an Mview with or without a PREBUILT TABLE. The Oracle DW guide states the following:
<
Use the CREATE MATERIALIZED VIEW statement to create and, optionally, populate the materialized view. If a user-defined materialized view already exists, then use the ON PREBUILT TABLE clause in the CREATE MATERIALIZED VIEW statement.
>
So my question is: if the Mview already exists, how can I use the ON PREBUILT TABLE clause to create it, since it already exists!? Should I drop and recreate it with the aforementioned clause?


Tom Kyte
June 09, 2006 - 1:06 pm UTC

if you add a log

and you want to use the log to do a fast refresh

you'll need to do one more sync "complete" to align everything

regardless of whether it was prebuilt table or not.


"what aforementioned clause"??

David Aldridge, June 09, 2006 - 2:07 pm UTC

>> However my Mviews not built with PREBUILT TABLE clause should I rebuild them?

In general I favour using prebuilt tables because it allows the MV definition to be dropped and recreated should a problem be encountered, such as the ORA-12034 that the article dealt with.

You might also want to drop and recreate the MV in order to be able to execute DML against the MV data. That might be the case in the event of a refresh failure, or should there be a gross change to the master table that would be more efficient to implement as DML directly against the MV data instead of relying on the refresh mechanism -- for example, updating a large number of rows to correct a particular key column value.

When you move to 10g you might also choose to start using DBMS_ADVANCED_REWRITE instead of MV-based query rewrite, along with your own custom code for maintaining the summaries.


Tom Kyte
June 09, 2006 - 5:37 pm UTC

Just for those that did not get the subtlety of David's statement:

... because it allows the MV definition to
be dropped and recreated should a problem be encountered ...


the MV *definition*, the data stays - the MV goes away and you can rebuild on that existing pre-built table that is left behind.

You can do other neat things like a "do it yourself online redefine that only selects SOME of the rows" (instead of purge, keep what you want...)

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:17309172009134 <code>

Materialized View on a Remote View

Jeremy, June 22, 2006 - 1:03 pm UTC

My current setup is a 10gR2 instance that is used to text mine data. It contains only the text and the PK needed to retrieve the source data from our reporting database (a 9i instance). We are having performance issues when we join the text back to the source database, and I was hoping to solve this with a materialized view of the source data on the 10g instance. However, I believe I am having problems creating the materialized view because the source is a view on a remote database. I believe this is possible but can't find a starting point or an example that I could use. Also, the source view is very complicated, i.e. it contains unions, synonyms, and other database links into the application databases. I don't know if that makes a difference or not. Please help me find a starting point to solve my dilemma.
Thanks, (and GREAT site couldn't live without it!)

Tom Kyte
June 22, 2006 - 2:50 pm UTC

are you willing to do a FULL refresh? sounds like that is what you would be doing (union, for example...)

or at least give a little info on what you've done and what error you are hitting.

Confusing Error..

A reader, July 20, 2006 - 3:46 pm UTC

Hi Tom,
Oracle version 9.2.0.7
OS: HP-UNIX 64 Bit

Today we tried creating a mview in the schema of a user, say A.

My user has DBA privs, but to be on the safe side I also granted myself the "Create any Snapshot" privilege. I also granted user A the "Create Snapshot" privilege.

The mview to be created is based on a view called V_otherUser.

User A has been granted select privilege on this view, as has my user.

I connected as my user and executed:
create materialized view a.mv_test
as
.....
select * from otherUser.V_other_User;

I get an "insufficient privileges" error.

My question:
A BIG Why???

PS Also tried after granting Query Rewrite and Global Query Rewrite Privs to User A and Myself.

Thanks as always


Tom Kyte
July 22, 2006 - 5:23 pm UTC

don't be "safe" - be "precise"

pretty much all of the SQL commands have documented "this is what you need", including the create materialized view command:

</code> http://docs.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_6002.htm#i2162987 <code>

it lists specifically the grants necessary. (and also says something about grants via roles like DBA in this case..)

Nested MV

A reader, August 09, 2006 - 5:03 am UTC

Hi Tom,
We have 2 reporting servers that perform an MV refresh against production every night, and we find that production is indeed very heavily loaded during the MV refresh. After some study, I think nested MVs might serve our purpose. The plan that comes to mind is: MVs will be created on the 1st reporting server's DB, refreshing from production every night, whereas MVs on the 2nd reporting server's DB will refresh from the 1st reporting DB (nested MVs). I want the 2nd DB's MVs to be an exact copy of the 1st DB's MVs after the refresh completes. Based on the documentation I went through, the nested MV rule is: they must contain joins only, or joins and aggregates. Can nested MVs serve my purpose? If not, I would appreciate any advice on an alternative.

Thanks


Tom Kyte
August 09, 2006 - 10:44 am UTC

nested materialized views are "on a single system"

You can create a MV of a remote MV and schedule the refreshes appropriately however.
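A sketch of that approach, with a hypothetical database link RPT1_LINK pointing at the first reporting server:

create materialized view local_mv
refresh complete on demand
as
select * from remote_mv@rpt1_link;  -- remote_mv is itself an MV on the 1st reporting DB

Schedule local_mv's refresh to start only after the refresh of remote_mv on the first server is known to have completed.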

DBMS JOBS

David Andrew, August 10, 2006 - 2:41 pm UTC

Dear Tom,

Thank you for all your responses in the past. I am again looking to you for your guidance.

We have a job set up, which is listed as follows:
SELECT JOB, SUBSTR(WHAT,1,35), NEXT_DATE, NEXT_SEC, BROKEN FROM DBA_JOBS;

  JOB SUBSTR(WHAT,1,35)                   NEXT_DATE NEXT_SEC B
----- ----------------------------------- --------- -------- -
    1 statspack.snap;                     01-JAN-00 13:15:00 N

I would like to schedule it to run at 6 am. I tried this:
SQL> BEGIN
  2  DBMS_JOB.CHANGE(JOB =>1, WHAT =>NULL, NEXT_DATE =>NULL, NEXT_SEC='06:00:00' );
  3  END;
  4  /
DBMS_JOB.CHANGE(JOB =>106, WHAT =>NULL, NEXT_DATE =>NULL, NEXT_SEC='06:00:00' );
                                                          *
ERROR at line 2:
ORA-06550: line 2, column 59:
PLS-00201: identifier 'NEXT_SEC' must be declared
ORA-06550: line 2, column 1:
PL/SQL: Statement ignored

I get this error, so I tried going and changing through OEM, I could not find the scheduled jobs there.

can you please help?

Thanks and regards,
David. 
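For what it's worth, NEXT_SEC is just a display column of DBA_JOBS, not a parameter of DBMS_JOB.CHANGE; the time of day is part of NEXT_DATE. A sketch of what was likely intended (job number 1 taken from the listing above):

begin
   dbms_job.change( job       => 1,
                    what      => null,                        -- null means "leave unchanged"
                    next_date => trunc(sysdate) + 1 + 6/24,   -- next run: tomorrow at 06:00
                    interval  => 'trunc(sysdate) + 1 + 6/24'  -- repeat daily at 06:00
                  );
   commit;
end;
/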

Having issues with Mview across db link

Rajesh, August 30, 2006 - 4:44 pm UTC

Hello Tom,

This is my first post to you. I've been using your site extensively, and my doubts were clarified by pre-existing posts... this web site has been just great.

My problem did not surface completely in any post, hence this posting.

I have an MVIEW which is set up to refresh data over a DB link, sources data from a view which is based on some tables and mviews.

CREATE MATERIALIZED VIEW MS_MVW
TABLESPACE M_DATA_01
NOCACHE
LOGGING
NOPARALLEL
BUILD IMMEDIATE
REFRESH COMPLETE
START WITH TO_DATE('31-Aug-2006 06:30:00','dd-mon-yyyy hh24:mi:ss')
NEXT TRUNC(SYSDATE + 1) + 26/96
WITH PRIMARY KEY
AS
SELECT *
FROM ms_vew@PDB6;

My first failure was in my alert log was:
ORA-12012: error on auto execute of job 45
ORA-12008: error in materialized view refresh path
ORA-08103: object no longer exists
ORA-02063: preceding line from PDB6
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 803
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 860
ORA-06512: at "SYS.DBMS_IREFRESH", line 683
ORA-06512: at "SYS.DBMS_REFRESH", line 195
ORA-06512: at line 1

This was because tables were in transition on the other database; their data was being refreshed.

I re-scheduled this job and it's now set to run at 6:30, but it still fails, without any alert in the log and with no failures in dba_jobs. The only way to identify the failure is that there are 0 rows.

Since I don't have visibility into their DB, the other team, which controls the source db, confirmed that their ETL processes completed by 6:19 am. That should not interfere with a job fetching data at 6:30.

When I re-execute the job, it brings over data perfectly.

SELECT * FROM dba_mview_refresh_times :
OWNER|NAME|MASTER_OWNER|MASTER|LAST_REFRESH
CF|MS_MVW|QDBOP|MS_VEW|8/30/2006 7:12:59 AM

Hope you can shed some light.

Tom Kyte
August 30, 2006 - 5:58 pm UTC

it could well be that it is in fact NOT failing but that at the time you did your refresh, they had zero rows (since it sounds like they truncate and then load or drop and then load)

You need better coordination with the guys on the other end of the link.

Should I alter the job

Rajesh, August 30, 2006 - 7:54 pm UTC

Hello Tom,

There is a severe clampdown on privileges, hence no visibility into the other database, other than the particular object.

I am already talking to the person on the other side.

What do you think about the following:
1) Does changing dbms_refresh.refresh to dbms_mview.refresh help?
2) Could there be any problem with the DB link / view on first access? E.g., since the underlying objects are changed (drop/load), could the view be in an invalid state when the refresh is attempted?
3) Should I drop and re-create the DBMS job for refresh?

I will be adding a job to check its state early tomorrow morning, before the actual refresh happens.

I had changed their previous process of drop, re-create, and re-grant of the MVIEW to this automated refresh just last month. Now that this is failing, I have people asking why this was changed to use dbms_job.

BTW, I'm using Oracle 9.2.0.6.

Tom Kyte
August 31, 2006 - 8:54 am UTC

privileges, schmivileges - this is not about privileges, this is about you and them coordinating your activities, period.

You are apparently refreshing whilst the ETL is happening, there is no data in the tables.

1) only IF THERE IS DATA OVER THERE, and there isn't. So, no, this will do nothing for you.

2) no, there is no data over there. That is the problem. You are refreshing while there is no data to be refreshed.

3) see #1 and #2 and then get a meeting scheduled with the other group to figure out when you *really* should do this.




Thanks Tom

Rajesh, August 31, 2006 - 10:06 am UTC


Data on Remote Server is fine

Rajesh, September 01, 2006 - 11:36 am UTC

Hello Tom,

Thanks for taking time to answer my queries.
I talked to my counterpart and he confirmed that their refreshes completed way before our scheduled refresh.

As I mentioned, I had set up a job to check on the statuses on the remote server. This was set up connecting directly to that server through cron, using a shell script.

I did this multiple times between 6:20 and 6:35.
I checked all the generated spool files, and for all these runs the data was completely available (83K records) on the remote server.

The issue seems to be that the MVIEW refresh is partial (71K records) on my DB.
There is no alert in the logs either.

Again as usual a manual exec dbms_job.run() succeeded in bringing all the 83k records over.

What do you suggest?

Tom Kyte
September 01, 2006 - 12:05 pm UTC

(psst - I don't think so, that is just between you and me - you see, if they had, there would have been DATA to copy and, well, there just wasn't sooooooo....)

tell you what - can you confirm for me that they use either

o truncate + load
o drop + recreate

to populate these tables - if so, I don't care that you "checked", the fact is when you ran the REFRESH - they were empty.

I'm not aware of a refresh silently failing and wiping out all data - if you truly believe this is happening, please work with support.

Thanks

Rajesh, September 18, 2006 - 11:05 am UTC

Tom,
We upgraded to 9.2.0.7 a couple of weeks back, and that has resolved this problem.
Neither before nor after the upgrade, has any change been done on either side.
I've contacted support.

Thanks for your time and patience.

Yelena, September 21, 2006 - 3:00 pm UTC

Tom,

I created following materialized view:

create materialized view bis_ret_plus_mvw1
parallel
partition by list (month_id)
(
partition bis_ret_plus_mvw1_0 values (0) TABLESPACE PARISR_D,
partition bis_ret_plus_mvw1_20020131 values (20020131) tablespace PARISMH_D001,
partition bis_ret_plus_mvw1_20020228 values (20020228) tablespace PARISMH_D002,
partition bis_ret_plus_mvw1_20020331 values (20020331) tablespace PARISMH_D003,
partition bis_ret_plus_mvw1_20020430 values (20020430) tablespace PARISMH_D004,
partition bis_ret_plus_mvw1_20020531 values (20020531) tablespace PARISMH_D005,
partition bis_ret_plus_mvw1_20020630 values (20020630) tablespace PARISMH_D006,
partition bis_ret_plus_mvw1_20020731 values (20020731) tablespace PARISMH_D007,
partition bis_ret_plus_mvw1_20020831 values (20020831) tablespace PARISMH_D008,
partition bis_ret_plus_mvw1_20020930 values (20020930) tablespace PARISMH_D009,
partition bis_ret_plus_mvw1_20021031 values (20021031) tablespace PARISMH_D010,
partition bis_ret_plus_mvw1_20021130 values (20021130) tablespace PARISMH_D011,
partition bis_ret_plus_mvw1_20021231 values (20021231) tablespace PARISMH_D012,
partition bis_ret_plus_mvw1_20030131 values (20030131) tablespace PARISMH_D001,
partition bis_ret_plus_mvw1_20030228 values (20030228) tablespace PARISMH_D002,
partition bis_ret_plus_mvw1_20030331 values (20030331) tablespace PARISMH_D003,
partition bis_ret_plus_mvw1_20030430 values (20030430) tablespace PARISMH_D004,
partition bis_ret_plus_mvw1_20030531 values (20030531) tablespace PARISMH_D005,
partition bis_ret_plus_mvw1_20030630 values (20030630) tablespace PARISMH_D006,
partition bis_ret_plus_mvw1_20030731 values (20030731) tablespace PARISMH_D007,
partition bis_ret_plus_mvw1_20030831 values (20030831) tablespace PARISMH_D008,
partition bis_ret_plus_mvw1_20030930 values (20030930) tablespace PARISMH_D009,
partition bis_ret_plus_mvw1_20031031 values (20031031) tablespace PARISMH_D010,
partition bis_ret_plus_mvw1_20031130 values (20031130) tablespace PARISMH_D011,
partition bis_ret_plus_mvw1_20031231 values (20031231) tablespace PARISMH_D012,
partition bis_ret_plus_mvw1_20040131 values (20040131) tablespace PARISMH_D001,
partition bis_ret_plus_mvw1_20040229 values (20040229) tablespace PARISMH_D002,
partition bis_ret_plus_mvw1_20040331 values (20040331) tablespace PARISMH_D003,
partition bis_ret_plus_mvw1_20040430 values (20040430) tablespace PARISMH_D004,
partition bis_ret_plus_mvw1_20040531 values (20040531) tablespace PARISMH_D005,
partition bis_ret_plus_mvw1_20040630 values (20040630) tablespace PARISMH_D006,
partition bis_ret_plus_mvw1_20040731 values (20040731) tablespace PARISMH_D007,
partition bis_ret_plus_mvw1_20040831 values (20040831) tablespace PARISMH_D008,
partition bis_ret_plus_mvw1_20040930 values (20040930) tablespace PARISMH_D009,
partition bis_ret_plus_mvw1_20041031 values (20041031) tablespace PARISMH_D010,
partition bis_ret_plus_mvw1_20041130 values (20041130) tablespace PARISMH_D011,
partition bis_ret_plus_mvw1_20041231 values (20041231) tablespace PARISMH_D012,
partition bis_ret_plus_mvw1_20050131 values (20050131) tablespace PARISMH_D001,
partition bis_ret_plus_mvw1_20050228 values (20050228) tablespace PARISMH_D002,
partition bis_ret_plus_mvw1_20050331 values (20050331) tablespace PARISMH_D003,
partition bis_ret_plus_mvw1_20050430 values (20050430) tablespace PARISMH_D004,
partition bis_ret_plus_mvw1_20050531 values (20050531) tablespace PARISMH_D005,
partition bis_ret_plus_mvw1_20050630 values (20050630) tablespace PARISMH_D006,
partition bis_ret_plus_mvw1_20050731 values (20050731) tablespace PARISMH_D007,
partition bis_ret_plus_mvw1_20050831 values (20050831) tablespace PARISMH_D008,
partition bis_ret_plus_mvw1_20050930 values (20050930) tablespace PARISMH_D009,
partition bis_ret_plus_mvw1_20051031 values (20051031) tablespace PARISMH_D010,
partition bis_ret_plus_mvw1_20051130 values (20051130) tablespace PARISMH_D011,
partition bis_ret_plus_mvw1_20051231 values (20051231) tablespace PARISMH_D012,
partition bis_ret_plus_mvw1_20060131 values (20060131) tablespace PARISMH_D001,
partition bis_ret_plus_mvw1_20060228 values (20060228) tablespace PARISMH_D002,
partition bis_ret_plus_mvw1_20060331 values (20060331) tablespace PARISMH_D003,
partition bis_ret_plus_mvw1_20060430 values (20060430) tablespace PARISMH_D004,
partition bis_ret_plus_mvw1_20060531 values (20060531) tablespace PARISMH_D005,
partition bis_ret_plus_mvw1_20060630 values (20060630) tablespace PARISMH_D006,
partition bis_ret_plus_mvw1_20060731 values (20060731) tablespace PARISMH_D007,
partition bis_ret_plus_mvw1_20060831 values (20060831) tablespace PARISMH_D007
)
build immediate
refresh force on demand
enable query rewrite
as
select /*+ parallel */
a.month_id MONTH_ID
, 'N' BIW_UPDT_FLAG
, a.GFMIS_PROD_COD GFMIS_PROD_COD
, gcib_sub_prod_cod GCIB_SUB_PROD_COD
, nvl(b.ctry_cod, 'ZZ') EXT_UNIT_CTRY_COD
, nvl(b.ctry_nam, 'NOT ASSIGNED') EXT_UNIT_CTRY_NAM
, nvl(b.main_clus_nam, 'NOT ASSIGNED') EXT_UNIT_MAIN_CLUS_NAM
, nvl(b.clus_cod, 0) EXT_UNIT_CLUS_COD
, nvl(b.clus_shrt_nam, 'N ASSIGNED') EXT_UNIT_CLUS_SHRT_NAM
, nvl(b.clus_nam, 'NOT ASSIGNED') EXT_UNIT_CLUS_NAM
, nvl(b.rgn_cod, 0) EXT_UNIT_RGN_COD
, nvl(b.rgn_nam, 'NOT ASSIGNED') EXT_UNIT_RGN_NAM
, nvl(O.GFCID, 0) GFCID
, nvl(O.GFCID_LEG_NAM, 'MISSING/INVALID') GFCID_NAM
, nvl(o.ORR_RTNG_ID, 0) GFCID_ORR_ID
, nvl(s.sic_cod, 0) GFCID_SIC_COD
, nvl(l.L2_PRNT_ID, 0) L2_PRNT_ID
, nvl(l.L2_PRNT_NAM, 'MISSING/INVALID') L2_NAM
, l.L2_BSGRP_SHRT_NAM L2_BSGRP_SHRT_NAM
, l.L2_RMT_COD L2_RMT_COD
, nvl(l.L2_RMT_SHRT_NAM, 'MSN') L2_RMT_SHRT_NAM
, nvl(l.L2_RMT_NAM, 'MISSING/INVALID') L2_RMT_NAM
, nvl(d.SUB_INDS_SECT_COD, 9999) L2_CIS_SUB_INDS_COD
, nvl(l.ORR_RTNG_ID, 0) L2_ORR_ID
, l.L2_OLE_TYP L2_OLE_TYP
, nvl(l.L2_PSE_OLE_AMT,0) L2_PSE_OLE_AMT
, nvl(l.L2_PSLE_OLE_AMT,0) L2_PSLE_OLE_AMT
, nvl(g.GPM_INDS_COD, 9999) L2_GPM_INDS_COD
, l.OS_CR_CLAS L2_OS_CR_CLAS
, l.L2_PM_ID L2_PM_ID
, l.L2_PM_NAM L2_PM_NAM
, nvl(ps.publ_sect_cod, 'No') L2_PUBL_SECT_COD
, nvl(r.reln_cagid, 0) CAGID
, nvl(r.RELN_NAM, 'MISSING/INVALID') RELN_NAM
, nvl(r.RRR_RTNG_ID, 0) RELN_RRR_ID
, nvl(r.reln_cntl_unit_id, 0) RELN_CNTL_UNIT_ID
, nvl(cb.BSUNIT_SHRT_NAM, 'UNKNOWN BUS UNIT') RELN_CNTL_SHRT_NAM
, nvl(cb.BSUNIT_NAM, 'UNKNOWN BUS UNIT') RELN_CNTL_NAM
, nvl(cb.BSUNIT_CTRY_COD, 'ZZ') RELN_CNTL_CTRY_COD
, cb.BSUNIT_CTRY_NAM RELN_CNTL_CTRY_NAM
, cb.BSUNIT_MAIN_CLUS_NAM RELN_CNTL_MAIN_CLUS_NAM
, cb.BSUNIT_CLUS_NAM RELN_CNTL_CLUS_NAM
, cb.BSUNIT_RGN_NAM RELN_CNTL_RGN_NAM
, nvl(rs.SIC_COD, 0) RELN_SIC_COD
, r.RELN_OLE_TYP RELN_OLE_TYP
, nvl(r.RELN_PSE_OLE_AMT, 0) RELN_PSE_OLE_AMT
, nvl(r.RELN_PSLE_OLE_AMT, 0) RELN_PSLE_OLE_AMT
, r.OS_CR_CLAS RELN_CR_CLAS
, r.RELN_INCO_FLAG RELN_INCO_FLAG
, nvl(p.gfcid, 0) GFPID
, nvl(p.gfcid_leg_nam, 'MISSING/INVALID') GFPID_NAM
, RAP_ASET RAP_ASET
, GAAP_ASET GAAP_ASET
, LE_ASET BIW_LE_ASET
, RRW_TOTAL_ASET RRW_TOTAL_ASET
, CD_RAP_ASET CD_RAP_ASET
, CR_RISK_CPTL CR_RISK_CPTL
, BUS_RISK_CPTL BUS_RISK_CPTL
, OPS_RISK_CPTL OPS_RISK_CPTL
, TOTAL_RISK_CPTL TOTAL_RISK_CPTL
, XBDR_RISK_CPTL XBDR_RISK_CPTL
, MKT_RISK_CPTL MKT_RISK_CPTL
, RISK_EQV_AMT RISK_EQV
, REV_AMT REV_AMT
, EXPN_AMT EXPN_AMT
, PRETAX_MRGN_AMT PRETAX_MRGN_AMT
, ECON_COC_AMT ECON_COC_AMT
, CECM_RBT_AMT CECM_RBT_AMT
, TAX_AMT TAX_AMT
, NET_INCM_AMT NET_INCM_AMT
, NIACC_AMT NIACC_AMT
, CAP_COST_AMT CAP_COST_AMT
, BOOK_COC_AMT BOOK_COC_AMT
, BOOK_TAX_AMT BOOK_TAX_AMT
, BOOK_NET_INCM_AMT BOOK_NET_INCM_AMT
, TOTAL_OSUC_AMT TOTAL_OSUC_AMT
, NET_TOTAL_OSUC_AMT NET_TOTAL_OSUC_AMT
, TOTAL_HDGE_AMT TOTAL_HDGE_AMT
, CFA CFA
, ECONCOC_CD_AMT ECONCOC_CD_AMT
, CR_RC_SURCH_AMT CR_RC_SURCH_AMT
, XBDR_RC_SURCH_AMT XBDR_RC_SURCH_AMT
, RM_NET_INCM_AMT RM_NET_INCM_AMT
, RM_PRETAX_MRGN_AMT RM_PRETAX_MRGN_AMT
, RM_EXPN_AMT RM_EXPN_AMT
, RM_TAX_AMT RM_TAX_AMT
, RM_NIACC_AMT RM_NIACC_AMT
, RM_OPS_RISK_CPTL RM_OPS_RISK_CPTL
, RM_MKT_RISK_CPTL RM_MKT_RISK_CPTL
, RM_TOTAL_RISK_CPTL RM_TOTAL_RISK_CPTL
, RM_CAP_COST_AMT RM_CAP_COST_AMT
, RM_CECM_RBT_AMT RM_CECM_RBT_AMT
FROM
paris.bis_biw_ret_fact a
, paris.bis_ctry_dim b
, paris.bis_biw_gcib_prod_dim gp
, paris.bis_oblgo_curr_dim o
, paris.bis_oblgo_curr_dim p
, paris.bis_l2_curr_dim l
, paris.bis_sic_cpc_inds_dim s
, paris.bis_cis_inds_dim d
, paris.bis_prod_dim e
, paris.bis_gpm_inds_dim g
, paris.bis_reln_curr_dim r
, paris.bis_sic_cpc_inds_dim rs
, paris.bis_bsunit_dim cb
, paris.bis_publ_sect_dim ps
WHERE
substr(a.RPRT_CTRY_MKT, 1,2) = b.ctry_cod (+)
AND a.gcib_sub_prod_cod = gp.sub_prod_cod (+)
AND a.gfcid = o.gfcid (+)
AND o.gfpid = p.gfcid (+)
AND o.sic_cod_1 = s.sic_cod (+)
AND a.GFMIS_PROD_COD = e.GFMIS_PROD_COD
AND o.l2_prnt_id = l.l2_prnt_id (+)
AND l.L2_CIS_SUB_INDS_COD = d.SUB_INDS_SECT_COD (+)
AND l.L2_GPM_INDS_COD = g.GPM_INDS_COD (+)
AND l.l2_publ_sect_cod = ps.publ_sect_cod (+)
AND o.RELN_CAGID = r.RELN_CAGID (+)
AND r.RELN_CNTL_UNIT_ID = cb.BSUNIT_ID (+)
AND r.RELN_SIC_COD = rs.sic_cod (+)

I have also created materialized view logs on the tables:

create materialized view log on paris.bis_biw_ret_fact with SEQUENCE, ROWID
(MONTH_ID, GFCID, GFMIS_PROD_COD, GCIB_SUB_PROD_COD, RPRT_CTRY_MKT)
including new values;

create materialized view log on paris.bis_ctry_dim with SEQUENCE, ROWID (CTRY_COD) including new values;

create materialized view log on paris.bis_biw_gcib_prod_dim with SEQUENCE, ROWID (SUB_PROD_COD) including new values;

create materialized view log on paris.bis_oblgo_curr_dim with SEQUENCE, ROWID
(GFCID, SIC_COD_1, RELN_CAGID, GFPID, L2_PRNT_ID)
including new values;

create materialized view log on paris.bis_l2_curr_dim with SEQUENCE, ROWID
(L2_PRNT_ID, L2_CIS_SUB_INDS_COD, L2_GPM_INDS_COD, L2_PUBL_SECT_COD)
including new values;

create materialized view log on paris.bis_sic_cpc_inds_dim with SEQUENCE, ROWID (SIC_COD) including new values;

create materialized view log on paris.bis_cis_inds_dim with SEQUENCE, ROWID (SUB_INDS_SECT_COD) including new values;

create materialized view log on paris.bis_prod_dim with SEQUENCE, ROWID (GFMIS_PROD_COD) including new values;

create materialized view log on paris.bis_gpm_inds_dim with SEQUENCE, ROWID (GPM_INDS_COD) including new values;

create materialized view log on paris.bis_reln_curr_dim with SEQUENCE, ROWID
(RELN_CAGID, RELN_CNTL_UNIT_ID, RELN_SIC_COD)
including new values;

create materialized view log on paris.bis_bsunit_dim with SEQUENCE, ROWID (BSUNIT_ID) including new values;

create materialized view log on paris.bis_publ_sect_dim with SEQUENCE, ROWID (PUBL_SECT_COD) including new values;

For some reason this view is not FAST refreshable. Is there a way to make it fast refreshable? The query is a simple join of dimension and fact tables. The tables get re-loaded twice a month.

Thank you

Tom Kyte
September 22, 2006 - 2:44 am UTC

you would need to sort of make this example a whole lot

a) smaller
b) complete

before I'll look at it. I'm sure you can reproduce your issue with just say 3 or 4 small tables (few columns), the partition clause is not likely relevant, we don't want tablespaces (because I don't have yours) and we'd need the creates for the base table.

remember, make it

a) small, small, concise, short, remove everything that you can and still have your particular issue reproduce on someone elses machine

b) complete

who knows - in doing so you might well stumble upon "why" (I usually do; when making the small test case it becomes apparent "why", because you'll remove a bit of code and it'll "start working" - then you know what is causing it, and it might well become obvious what is wrong at that point)

David Aldridge, September 22, 2006 - 11:41 am UTC

Yelena,

Check the documented fast refresh restrictions for join-only MVs.

</code> http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14223/basicmv.htm#sthref538 <code>

You appear to violate the "Rowids of all the tables in the FROM list must appear in the SELECT list of the query" clause.
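A minimal sketch of a join-only fast-refreshable MV that satisfies that restriction, using two hypothetical tables fact and dim:

create materialized view log on fact with rowid;
create materialized view log on dim  with rowid;

create materialized view fact_dim_mv
refresh fast on demand
as
select f.rowid f_rid, d.rowid d_rid,   -- rowid of EVERY table in the FROM list
       f.month_id, d.ctry_nam
  from fact f, dim d
 where f.ctry_cod = d.ctry_cod;

Note that the documentation lists additional restrictions when outer joins are involved, as in the MV above.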

MV that is join of two simple MV's yields "ORA-12053: this is not a valid nested materialized view"

A reader, October 10, 2006 - 3:54 pm UTC

I have exactly the same problem as posted in "How Can I Create a Mview on Mview" on September 13, 2004
by Muhammad Waseem Haroon - a nicely written post with a complete example.
Exactly as in his example, I have 2 tables on a remote database.
I created copies of the remote tables using simple mviews (no joins, no aggregates, just copies of the remote tables as they are).
Then I need an MV that is a simple join of those two MVs (no aggregates), and it is failing - everything is similar to Bhaskar's complete example.

Here is my complete example, which fails at the final mv creation:

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
With the Partitioning and OLAP options
JServer Release 9.2.0.7.0 - Production

SQL> create table emp
  2  ( empno number(4),
  3    deptno number(2)
  4  );

Table created.

SQL> create table dept
  2  ( deptno number(2),
  3    dname varchar2(14)
  4  );

Table created.

SQL> alter table emp add constraint emp_pk primary key (empno);

Table altered.

SQL> alter table dept add constraint dept_pk primary key(deptno);

Table altered.

SQL> create materialized view log on dept with primary key;

Materialized view log created.

SQL> create materialized view log on emp with primary key;

Materialized view log created.

SQL> create materialized view emp_mv
  2  refresh fast
  3  with primary key
  4  as
  5  select empno,deptno from emp;

Materialized view created.

SQL> create materialized view dept_mv
  2  refresh fast
  3  with primary key
  4  as
  5  select deptno,dname from dept;

Materialized view created.

SQL> create materialized view log on dept_mv with rowid;

Materialized view log created.

SQL> create materialized view log on emp_mv with rowid;

Materialized view log created.

SQL> create materialized view emp_dept_mv
  2  refresh fast
  3  with rowid
  4  as
  5  select e.empno, d.deptno, d.dname, e.rowid as e_rowid, d.rowid as d_rowid
  6  from emp_mv e, dept_mv d
  7  where e.deptno=d.deptno;
from emp_mv e, dept_mv d
     *
ERROR at line 6:
ORA-12053: this is not a valid nested materialized view


So, THE QUESTION is:

How to create this final MV that is just simple join of two other MV's ? Complete example please.

Reading the data warehousing guide didn't help, because the examples always use a group by, which I really don't need here, and I don't know how to do it in this particular example.

how to create snapshot by using a query

ABBAS KALVANI, October 18, 2006 - 8:03 am UTC

hello.
I need to understand how to create a snapshot in Oracle by using a query.
Please help me.
Thanks.

Tom Kyte
October 18, 2006 - 8:12 am UTC

huh?

first, snapshot is a term that hasn't been used since version 7 days.

And second, if you mean "materialized view", well, the very top of this page has an example right there?

Monitoring MV log growth

Guest, November 21, 2006 - 10:30 am UTC

Hi Tom,

I have created a materialized view log and Materialized View with fast refresh.

My question is:
How do I monitor the materialized view log's growth?

Tom Kyte
November 22, 2006 - 3:39 pm UTC

answer:

the same way you monitor any table's growth. It is just a table.
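Since an MV log is just a table (named MLOG$_<master table name>), one way to sketch the monitoring:

select segment_name, bytes/1024/1024 as mb
  from dba_segments
 where segment_name like 'MLOG$\_%' escape '\'
 order by bytes desc;

A steadily growing MLOG$ table usually means a registered MV has stopped refreshing, so the log entries are never purged.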

What about triggers?

Amit, November 24, 2006 - 1:59 am UTC

Tom!
Can we create triggers on Materialized Views ?

Tom Kyte
November 24, 2006 - 6:34 pm UTC

technically yes.

practically - you should not. Not only are triggers evil, but you cannot tell what operations a MV refresh will do - does it do an update - or does it do an insert + delete - or what.

I would NOT recommend it.

Performance VIEW vs TABLE

Paul, December 26, 2006 - 12:22 pm UTC

Hi,
I have two environments: test and production.

I have a table that contains the join of many tables (production environment).
I created a view of that table (test environment).
In the production environment I use the table from a procedure.
In the test environment I use the view.

I have performance problems in the test environment.

Would you please help me?

What could produce those problems?


Materialized View Partitioned

fromhell, January 18, 2007 - 11:51 am UTC

HI, Tom. Great site!
I have 2 questions:
1)I have 2 dimension tables, for example D1, D2.
I have a fact table, named F1. This is a partitioned table, partitioned by range, by month (F1.HDATE), like this

PARTITION BY RANGE (F1_FECCALC)
(PARTITION H005_2005_09 VALUES LESS THAN (TO_DATE('2005-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
NOLOGGING NOCOMPRESS TABLESPACE DATE_2005 PCTFREE 0 INITRANS 1 MAXTRANS 255,
PARTITION H005_2005_10 VALUES LESS THAN (TO_DATE(' 2005-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
NOLOGGING NOCOMPRESS TABLESPACE DATE_2005 PCTFREE 0 INITRANS 1 MAXTRANS 255,

... and so

When a month is finished, we calculate the new values and create a new partition.

We created an EUL in Discoverer, and then a Materialized View, created in Discoverer Administrator.
Later, I looked at the structure of the Materialized View created by Discoverer:

CREATE MATERIALIZED VIEW VM_F1
ON PREBUILT TABLE WITH REDUCED PRECISION
REFRESH COMPLETE ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT O100054.D002_CENTRO AS "Centro", O100054.D002_COMUNIDA AS "CodComun",
O100053.D001_DESCENT AS "Ent",
O100053.D001_DESCF AS "Far",
O100053.D001_ENT AS "Cod Ent",
O100074.F1_FECCALC AS "Fecha",
SUM(O100074.NUM) AS "CSUM",
COUNT(O100074.NUM) AS "CCOUNT"
FROM D1 O100053, D2 O100054, F1 O100074
WHERE ( ( D1.FIELD1 = F1.FIELDD1 ) AND ( D2.FIELDD2 = F1.FIELDD2 ) )
GROUP BY O100054.D002_CENTRO, O100054.D002_COMUNIDA,O100053.D001_DESCENT,
O100053.D001_DESCF,O100053.D001_ENT,
O100074.F1_FECCALC

I can get the time that this refresh in Discoverer started with this query:

select name,table_name,last_refresh
from all_snapshots

but I want to know how much time that refresh takes. Any idea?

Another question... I find that refreshing ALL of the materialized view
is not needed, because we're only calculating the last partition...
and more important, can I create a partitioned Materialized View with Fast Refresh?
How can I do it? How does PCT work in this case?

Thanks!!

How can I do it ?

parag j patankar, February 08, 2007 - 5:59 am UTC

Hi,

I am having following table

create table t1 ( a date primary key );

insert into t1 select sysdate from dual;
insert into t1 select sysdate from dual;
insert into t1 select sysdate from dual;
insert into t1 select sysdate from dual;
insert into t1 select sysdate from dual;
insert into t1 select sysdate from dual;

alter session set nls_date_format='DD-MON-RRRR HH24:MI:SS';


18:24:30 SQL> select * from t1;

A
--------------------
07-FEB-2007 18:23:53
07-FEB-2007 18:23:55
07-FEB-2007 18:23:56
07-FEB-2007 18:24:00
07-FEB-2007 18:24:01
07-FEB-2007 18:25:22

Unfortunately I have received a query from an outside application ( Govt Application ) which is doing

select * from t1 where a = '07-FEB-2007';

no rows selected

18:25:30 SQL>

I want to select records, which are selected by following query


18:25:30 SQL> select * from t1 where trunc(a) = '07-FEB-2007';

A
--------------------
07-FEB-2007 18:23:53
07-FEB-2007 18:23:55
07-FEB-2007 18:23:56
07-FEB-2007 18:24:00
07-FEB-2007 18:24:01
07-FEB-2007 18:25:22

How can I do this without touching the query in a 9.2 database? I tried with an MV as follows:
create materialized view mv1
as
select to_date(to_char(a, 'DD-MON-RRRR'))
from t1;

1* select * from t1 where a = to_date('07-FEB-2007', 'DD-MON-RRRR')
15:57:45 SQL> /

no rows selected

but was not successful. Please also show me how to achieve this with dbms_advanced_rewrite

thanks & regards
PJP

Tom Kyte
February 08, 2007 - 8:25 am UTC

not going to happen, the advanced rewrite stuff is new in 10g

there is a bug in the developed application, it needs to be fixed.
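
For anyone on 10g or later reading this, a hedged sketch of what that advanced rewrite would look like. The equivalence name is made up; because the two statements genuinely return different results, the equivalence can only be declared with validate => false, and it will only be used when query_rewrite_integrity is set to trusted:

```sql
-- 10g and later only: declare that the broken query text should be
-- rewritten into the corrected one (requires execute on the package)
begin
   sys.dbms_advanced_rewrite.declare_rewrite_equivalence(
      name             => 'fix_t1_date_compare',
      source_stmt      => 'select * from t1 where a = ''07-FEB-2007''',
      destination_stmt => 'select * from t1 where trunc(a) = to_date(''07-FEB-2007'',''DD-MON-RRRR'')',
      validate         => false,
      rewrite_mode     => 'text_match');
end;
/
```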

Why mv is not working ?

A reader, February 09, 2007 - 4:52 am UTC

Hi Tom,

for the question mentioned by me in this thread, why is the MV not working here? Should I use "query rewrite"?

create materialized view mv1
as
select to_char(a, 'DD-MON-RRRR')
from t1;

11:34:05 SQL> show parameter query

NAME TYPE VALUE
------------------------------------ ----------- --------------
query_rewrite_enabled string TRUE
query_rewrite_integrity string TRUSTED

11:41:03 SQL> exec dbms_stats.gather_table_stats('parag', 't1');

PL/SQL procedure successfully completed.

11:41:15 SQL> exec dbms_stats.gather_table_stats('parag', 'mv1');

PL/SQL procedure successfully completed.

11:41:26 SQL>
11:41:37 SQL> select * from t1 where a = '07-FEB-2007';

no rows selected

Pl explain.

thanks & regards
PJP

Tom Kyte
February 09, 2007 - 8:19 am UTC

because query rewrite is used to rewrite EQUIVALENT QUERIES.

where a = '07-feb-2007'

is NOT equivalent to

where to_char(a,'dd-mon-rrrr') = '07-feb-2007'


they return different answers, it would be a bug if the MV did work!
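
In other words, rewrite can only help when the materialized view stores the very expression the query uses. A hedged sketch (same t1 as above; the MV name is made up):

```sql
-- an MV materializing the truncated date can be used by rewrite,
-- but only for queries that themselves compare trunc(a)
create materialized view mv1_trunc
enable query rewrite
as
select trunc(a) day_a, count(*) cnt
  from t1
 group by trunc(a);

-- equivalent to the MV's defining query, so it may be rewritten:
select trunc(a), count(*) from t1 group by trunc(a);

-- "where a = '07-FEB-2007'" is NOT equivalent and will never be rewritten
```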

Whether Distinct is set operator or aggregate set function?

Thomas Bosco, February 15, 2007 - 1:04 am UTC

Hi Tom,
(We used to call you "TomUncle" because you always provide solutions with simple examples in your own way. You never hesitated to correct our views or anything related.)
Now please clarify my doubt: is DISTINCT a set operator or an aggregate set function? First you mentioned "Distinct is a set operator -- not a function!" and in a later part you refer to "distinct is an aggregate set function".
Thanks in advance.
Bosco

Tom Kyte
February 15, 2007 - 11:41 am UTC

not sure what semantic difference you really see between the two.

distinct takes a set and aggregates it - removing duplicates.

Is Lost Update and Dead-lock complementary

Thomas Bosco, February 15, 2007 - 1:43 am UTC

1. Are Lost Update and Dead-lock complementary?
You have quoted:
If you have two different sessions that get blocked on each other's update -- that means session 1's changes are going to be overwritten by session 2's changes, and session 2 will NEVER have looked at session 1's changes. This is called the lost update problem.
My Understanding:
When the same object is updated by two schemas simultaneously, Oracle detects the deadlock and rolls back the first transaction.

Please clarify.

2. I think ideally Oracle should roll back the second transaction rather than the first in the case of deadlock detection. It is the second transaction's mistake, which issued the DML without knowing whether any lock already exists on that table. Correct me if I am wrong.

Thanks in Advance.
Bosco

Tom Kyte
February 15, 2007 - 11:43 am UTC

your understanding is wrong. that is not a deadlock, that is just a "block and wait"

a deadlock is entirely different.

suppose you have the emp table.

YOU: update the king record
I: update the scott record
YOU: update the scott record (and block, this is NOT a deadlock, you are just blocked is all)
I: update the king record (and block, THIS, THIS is a deadlock situation now)

if we just have:

YOU: update the king record
I: update the king record (and block and wait)
YOU: commit;
I: get released and continue


we have a LOST UPDATE
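
A minimal sketch of avoiding that lost update with pessimistic locking (SELECT ... FOR UPDATE), assuming the standard emp table:

```sql
declare
   l_sal emp.sal%type;
begin
   -- lock the row before reading it; a second session doing the same
   -- blocks HERE, and after our commit it sees our change instead of
   -- silently overwriting it
   select sal into l_sal
     from emp
    where ename = 'KING'
      for update;

   update emp set sal = l_sal + 100 where ename = 'KING';
   commit;
end;
/
```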

Filtering DELETES from MV Refresh

Chris, March 07, 2007 - 5:29 pm UTC

Hello Tom,

I have table T1 in Prod database and we're considering using MV to propagate DML changes to table T1 in Data Warehouse database.

However there is a daily purging process run against T1 (PROD database) and we don't want these DELETES propagated to T1 in Data Warehouse (because we want all data available to report on).

This sounds like it could be a 'common' requirement but I haven't found anything in doc to do this. The best solution I've thought of is to define a DELETE TRIGGER on T1 (data warehouse) and have it restore the row after DELETION but perhaps you know of a better way?

I have also considered deleting the 'deletion' records from the MV log before applying the refresh but that doesn't seem right at all.

Many thanks
Tom Kyte
March 07, 2007 - 7:32 pm UTC

streams....

have you read about streams?

Chris, March 08, 2007 - 9:48 am UTC

No, at least not until I read your reply. It looks like Oracle Streams has the functionality I'm looking for. Thank you.

missing data

santosh, March 16, 2007 - 12:40 am UTC

hi tom, I have a peculiar problem.

I have a materialized view named customer_mv on a remote database, which is Oracle 8.1.4.
It refreshes once a day.

We have a view customerdata_vw in our database which is built on this materialized view, and from this view a procedure takes data and puts it into our database table customers.

we run the procedure daily; I am attaching the code of the procedure:
CREATE OR REPLACE PROCEDURE "SIEBEL_CUST"
IS
  CURSOR siebel_cust_cur IS
  SELECT *
  FROM customerdata_vw WHERE account_id NOT IN (SELECT cust_account_id FROM nidapps.CUSTOMERS);
  TYPE icust_tabarr IS TABLE OF customerdata_vw%ROWTYPE  INDEX BY BINARY_INTEGER;
  icustarr icust_tabarr;
  insert_count NUMBER := 1;
  i NUMBER := 1;
  file_id UTL_FILE.FILE_TYPE;
  BEGIN
     OPEN siebel_cust_cur;
    file_id := utl_file.FOPEN( 'h:\nidbatchprog', 'ston-c.log', 'w' );
 LOOP
     EXIT WHEN siebel_cust_cur%NOTFOUND ;
     FETCH siebel_cust_cur INTO icustarr(i);
      utl_file.PUT_LINE( file_id,i);
     INSERT INTO CUSTOMERS (customer_id,cust_account_id,customer_name)
            VALUES (customers_seq.NEXTVAL,icustarr(i).Account_ID,icustarr(i).Account_name);
      i := i + 1;
      COMMIT;
       utl_file.PUT_LINE( file_id,icustarr(i).Account_name);
      insert_count :=insert_count + 1;
       IF insert_count = 1000 THEN
   COMMIT;
    insert_count := 1;
   ELSE
   insert_count := insert_count + 1;
   END IF;
 END LOOP;
     dbms_output.put_line(i);
 CLOSE siebel_cust_cur;
   utl_file.PUT_LINE( file_id,'Customer migration completed');
  dbms_output.put_line('for cursor');
  utl_file.fclose(file_id);
 EXCEPTION
  WHEN OTHERS THEN
  NULL;
END;
/


This procedure runs daily after the materialized view complete refresh command executes.

my problem is:
some customer information is missing from my table.

When I check the materialized view the data is there, but when I check my table that particular information is not there.

What can be the reason for this?
If the materialized view were not refreshed properly, that data should come tomorrow if not today, but it is not happening.

I can't say the procedure is wrong either, because it is fetching the other records.

If around 100 records are available in the MV, only 98 records will be in my table, and it happens occasionally.

And we don't have any dependency on this table or procedure, because it is the first procedure to run.


thanks in advance,
santosh
Tom Kyte
March 17, 2007 - 2:30 pm UTC

it took me about 0.05ms to see this:

EXCEPTION
WHEN OTHERS THEN
NULL;
END;

I blame that horrendous really bad coding practice.

You have taken a gun and thoroughly SHOT YOURSELF IN THE FOOT, LEG, ARM, TORSO and HEAD. All at once.

Think about it. What does that exception block do for you?


You do not even want me to comment on your commit "logic", some of the worst I've seen.


There is absolutely no reason why this should be more than one line of code, NONE.



many PL/SQL mistakes

Santhros ibn Shinu, March 19, 2007 - 12:05 pm UTC

Dear Tom:

The code shown will never work appropriately, because of a logical error, not only because of wrong practices:

insert_count :=insert_count + 1;
IF insert_count = 1000 THEN
COMMIT;
insert_count := 1;
ELSE
insert_count := insert_count + 1;
END IF;

Following this logic, when the counter reaches 1000 it will commit. What happens if I have only, let's say, 290 records? It won't commit. And if I have 5000 records? It will commit only for the first 1000.
Tom Kyte
March 19, 2007 - 12:29 pm UTC

it doesn't matter, it commits EVERY record already.


 FETCH siebel_cust_cur INTO icustarr(i);
      utl_file.PUT_LINE( file_id,i);
        INSERT INTO CUSTOMERS (customer_id,cust_account_id,customer_name)
            VALUES (customers_seq.NEXTVAL,icustarr(i).Account_ID,icustarr(i).Account_name);
         i := i + 1;
     <b> COMMIT;</b>
       utl_file.PUT_LINE( file_id,icustarr(i).Account_name);



that extra commit every thousand records is just there for "good luck" I think.

This code is rife with errors, the fetch logic is botched

open cursor;
loop
   exit when cursor%notfound
   fetch cursor into variable
   ....
end loop;


That is so wrong. It does the last record two times. That is probably why they added the commit for each row (and that 'seemed' to fix it). I would guess there is a unique key on this target table - and they discovered the last batch of rows was silently not getting inserted, committing each row "fixed" it magically.
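
For reference, the conventional ordering of an explicit-cursor loop tests %notfound immediately after each fetch, so no row is ever processed twice:

```sql
open cursor;
loop
   fetch cursor into variable;
   exit when cursor%notfound;   -- test immediately after the fetch
   ...                          -- each row is processed exactly once
end loop;
close cursor;
```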

There is so much wrong with this routine, but nothing that this code would not fix:

CREATE OR REPLACE PROCEDURE "SIEBEL_CUST"
IS
  file_id UTL_FILE.FILE_TYPE;
  l_rows  NUMBER;
begin
  insert into customers (customer_id, cust_account_id, customer_name)
  select customers_seq.nextval, account_id, account_name
    FROM customerdata_vw 
   WHERE account_id NOT IN (SELECT cust_account_id 
                              FROM nidapps.CUSTOMERS
                             where cust_account_id is not null );
   l_rows := sql%rowcount;

   file_id := utl_file.FOPEN( 'h:\nidbatchprog', 'ston-c.log', 'w' );
   utl_file.put_line( file_id, 'we inserted ' || l_rows || ' rows' );
   utl_file.fclose(file_id);
   dbms_output.put_line( 'we inserted ' || l_rows || ' rows' );
end;


Notice the beautiful "absence" of commit (commits do not belong in the procedure, rather in the client invoking)

Notice the awesome absence of exception code. We do *not* expect anything, therefore we cannot *handle* anything. If it fails - it should fail in a honking big, un-ignorable fashion. (yes, honking is absolutely a technical term)



Be fair!

Rene A., March 23, 2007 - 11:31 am UTC

Come on, be fair!

I get to fix this kind of code about once a month 8) - you wouldn't believe how many people out there like to have the so-called "commit loop".

At least no one tried to rollback utl_file.* :)

Oh, by the way, there was also an error in counting the inserts.
Before the if-statement:
insert_count :=insert_count + 1;
and inside the else-part:
insert_count :=insert_count + 1;

Very cool. - On the other hand, the procedure from Tom shows a very clean and fast way to copy the data. I would probably use insert /*+ APPEND */ ... (but that is Oracle "black magic")

Summary to be retained for longer period than detail

Albert Nelson A., August 29, 2007 - 9:02 am UTC

Hi Tom,

What technology should we consider if we want to retain summarized data longer than the detail data?

For example we want to retain summarized data for 3 years but the detail data will be retained only for 1 year.

Regards,

Albert Nelson A.
Tom Kyte
September 04, 2007 - 3:56 pm UTC

you can look into using a STREAMS custom apply process - so that inserts and updates are processed, but you ignore deletes

and then you must "purge" the data yourself after the fact
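
A hedged sketch of the "ignore deletes" part (the handler procedure and table names are made up; the full Streams capture/propagation/apply setup is omitted):

```sql
-- register a DML handler for DELETEs on the apply side; a handler
-- that simply returns consumes the delete LCR instead of applying it
create or replace procedure ignore_deletes( in_any in anydata )
is
begin
   null;  -- do nothing: the delete is swallowed, not applied
end;
/

begin
   dbms_apply_adm.set_dml_handler(
      object_name    => 'SCOTT.DETAIL_T',
      object_type    => 'TABLE',
      operation_name => 'DELETE',
      error_handler  => false,
      user_procedure => 'SCOTT.IGNORE_DELETES');
end;
/
```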

mv

A reader, December 09, 2007 - 9:37 pm UTC

Tom:

Just want to see if creating a materialized view or a table would make sense in this case or not.

I have a detail status table that stores detail status transactions for a book. If a book is moved from one server to another, a status transaction is entered.

I want to track the last status of the book. There can be multiple statuses, like book exists on server 1 but not on 2, etc.

would an MV like this make sense.

Book_Status
--------------
server1_in_yn
test1_pf
test2_pf
server2_in_yn

or would u just create a table that stores the derived values.


Tom Kyte
December 10, 2007 - 11:07 am UTC

who is "U"?

your description leaves something "lacking" here.

but I would likely not have a materialized view, just

select * 
  from (select * from book_status where book_id = ? order by book_id, date_column desc)
 where rownum = 1;


that would get the last status record for a given book rather fast, especially if you have an index on (book_id,date_column desc)
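
A sketch of that supporting index (table and column names assumed from the description above):

```sql
-- a descending index lets the inner query walk straight to the newest
-- row for a book, so the "where rownum = 1" stops after a single row
create index book_status_idx
   on book_status ( book_id, date_column desc );
```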


CDC on MV

Carlos, January 21, 2008 - 2:09 pm UTC

Hi Tom

I was reading your responses about the feasibility of CDC implementation on MVs (snapshots). I infer that you say it's not supported. Can you please clarify whether Oracle officially supports CDC on MVs or not? I have been doing some tests of CDC on MVs and it's working fine.
I ran the tests on a 9i EE release. Thanks a lot.
Tom Kyte
January 21, 2008 - 9:20 pm UTC

I don't know what you mean by "CDC on MV"?

if you mean doing CDC downstream on a MV, just expect it to "change"

An update against the primary copy might.... be a delete+insert, an update, something. You will hit complete refreshes (truncate+insert/*+append*/ or maybe delete+insert). And so on.

Be careful, it can and will change what it does. And it won't be a bug when it does, it'll be a change, that is all.

Change Data Capture (CDC) on snapshots

Carlos Gongora, January 22, 2008 - 7:40 pm UTC

Thank you for your reply.

What I meant was: does Oracle support the use of Change Data Capture on materialized views (snapshots)? We are using Oracle 9i EE. I did some tests and it looks like it's working fine...

Thanks again,

Carlos
Tom Kyte
January 22, 2008 - 7:57 pm UTC

like I said, you might see things propagate in strange fashion - see above.

Thanks

Carlos, January 24, 2008 - 2:16 pm UTC

Thanks a lot for making time for us and sharing your knowledge. You are an Oracle Master, a Teacher, a Professor, a Guru...

Refresh problem in tables having CLOB (not null) fields

Megala, February 07, 2008 - 12:27 am UTC

Tom,

I have set up materialized view replication; everything works fine except for those tables in which I have CLOB columns that are NOT NULL. When I refresh these tables I get an error telling me "cannot insert NULL into CLOB field". When I make these columns nullable, it works fine.

I verified that the source CLOB column has a value and is not null.

Is there any restriction on replicating CLOB/BLOB columns?

I am using read-only replication.

Source is on 9.2.0.8 and target is 10.2.0.3

Thanks
Tom Kyte
February 07, 2008 - 8:26 am UTC

This demonstrates the issue and shows a workaround (deferrable constraint). Please contact support and reference bug 5454138


ops$tkyte%ORA10GR2> grant dba to u1 identified by u1;

Grant succeeded.

ops$tkyte%ORA10GR2> grant dba to u2 identified by u2;

Grant succeeded.

ops$tkyte%ORA10GR2> conn u1/u1
Connected.
u1%ORA10GR2> create table bug(accno clob not null,id number primary key);

Table created.

u1%ORA10GR2> insert into bug values ('1',1);

1 row created.

u1%ORA10GR2> create materialized view log on bug;

Materialized view log created.

u1%ORA10GR2> create public database link dbl connect to u1 identified by u1 using 'ora10gr2';

Database link created.

u1%ORA10GR2> conn u2/u2
Connected.
u2%ORA10GR2> Create materialized view bug
  2  refresh fast as
  3  select * from u1.bug@dbl;

Materialized view created.

u2%ORA10GR2>
u2%ORA10GR2> conn u1/u1
Connected.
u1%ORA10GR2> insert into bug values ('2',2);

1 row created.

u1%ORA10GR2> conn u2/u2
Connected.
u2%ORA10GR2> exec dbms_mview.refresh('BUG');
BEGIN dbms_mview.refresh('BUG'); END;

*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-01400: cannot insert NULL into ("U2"."BUG"."ACCNO")
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2255
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2461
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2430
ORA-06512: at line 1


u2%ORA10GR2> desc bug
 Name                                     Null?    Type
 ---------------------------------------- -------- ----------------------------
 ACCNO                                    NOT NULL CLOB
 ID                                       NOT NULL NUMBER

u2%ORA10GR2> alter table bug modify accno null;

Table altered.

u2%ORA10GR2> alter table bug modify accno not null deferrable;

Table altered.

u2%ORA10GR2> exec dbms_mview.refresh('BUG');

PL/SQL procedure successfully completed.

u2%ORA10GR2> select * from bug;

ACCNO
-------------------------------------------------------------------------------
        ID
----------
1
         1

2
         2


How to replicate CLOB table faster via create MV

Megala, February 20, 2008 - 2:31 am UTC

Tom,

While creating materialized views, is there any way to replicate a CLOB table faster? The CLOB table on the source side is quite big.

With a traditional import, I know that if we alter the CLOB to CACHE it goes faster.

Thanks for any suggestions!

How to replicate CLOB table faster via create MV

Megala, February 20, 2008 - 9:46 pm UTC

Tom:

Appreciate if you have any suggestions on this subject.
Tom Kyte
February 20, 2008 - 10:44 pm UTC

faster than what?

If you do this over the network, it will of course take a bit of time to transfer the data (assuming it is large)

you could do offline instantiation. It won't be any faster (it might be slower), but it can be done as chunks of work instead of one big single DDL statement

Megala, February 20, 2008 - 11:15 pm UTC

Thanks Tom.

<< you could do offline instantiation. Won't be any faster (might be slower), but can be done as chunks of work, not a big single DDL statement

Could you show how to implement this method?

We have read-only access only to source database.
Tom Kyte
February 21, 2008 - 7:02 am UTC

if you have read-only access to the source database, you will fail.

you need to create auxiliary objects in that source database (materialized view logs)

http://www.oracle.com/pls/db102/search?remark=advanced_search&word=offline+instantiation&format=ranked&book=&preference=

A reader, February 22, 2008 - 12:50 am UTC

Hello Tom:
I am having a problem with my replication setup on Oracle 10g: one master and one client with materialized views. Data is written into the client database and then written to the master via the replication user of these materialized views. One of the tests I do is to unplug the network cable from the master and plug it back in after a while, to check whether the setup really keeps working and synchronizing after this kind of failure. When I do so, the client-side materialized view user does not unlock the views it was working on at the time of the network failure, or so it seems. And it won't release the refresh transaction any more, for hours and hours, until I kill it by force. On the master side, the client's transaction has already been rolled back after the network failure.

Why doesn't the client side continue to refresh, or return a network error, once the network is available again?
Best regards, Suren
Tom Kyte
February 22, 2008 - 7:09 am UTC

"the client side materialized view user"

what is that. I don't know what you mean by that.

A reader, February 22, 2008 - 10:11 pm UTC

Sorry for my unclear question.
Data is written into the master database, and then written to the client via the replication user of these materialized views.

By that user I mean the client-side materialized view user (the mvadmin user) that refreshes the materialized views.

Suren
Tom Kyte
February 24, 2008 - 11:24 am UTC

if you are the same reader from above, what does this have to do with anything?

A reader, February 24, 2008 - 10:16 pm UTC

I have 2 database servers: DBRemote and DBClient. Both use Oracle 10g. I issued following script at DBRemote.

CREATE TABLE SCHEMA_MASTER.T
(
ID INTEGER,
STR_VALUE VARCHAR2(100 CHAR)
);


ALTER TABLE SCHEMA_MASTER.T ADD (
CONSTRAINT T_PK
PRIMARY KEY
(ID));

And at the DBClient, I set up materialized view site. Here is details:

Master Site: DBRemote

Materialized View Site: DBClient
Administrator: MVADMIN
Propagator: MVADMIN
Account to connect to at the Master Site: MVADMIN_DBClient

Scheduled Link: DBRemote
+ Next Date: SYSDATE
+ Interval: /*1:Hr*/ sysdate + 1/24
+ Delay Seconds: 0
+ Stop On Error: FALSE
+ Parallelism: 0

Purge Scheduling:
+ Next Date: SYSDATE
+ Interval: /*1:Hr*/ sysdate + 1/24
+ Delay Seconds: 0
+ Rollback Segment:

Then created following materialized view.

CREATE MATERIALIZED VIEW schema_mv.MV_T
TABLESPACE users
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
WITH PRIMARY KEY
AS
SELECT ID, STR_VALUE FROM schema_master.T@DBRemote T;

After that I inserted 1,000,000 records into the remote table T. Then I started the refresh manually at the client. I unplugged the network cable on the remote side during the refresh, and plugged it back in 24 hours later. But the client gives no answer: it neither continues the refresh nor shows a network error. It is always in waiting mode (I saw it in v$session_wait). I tried configuring SQLNET.EXPIRE_TIME and TAF (Transparent Application Failover); neither had any result. So all I can do is kill this refresh with the force option.

Why is this refresh process quiet at the client?

Suren
Tom Kyte
February 25, 2008 - 2:11 am UTC

expire time is dead client detection (dcd) and is done from the server to the client, to find clients that have disappeared. Here it is the client end of the connection that is stuck waiting - server-side dcd would not release it.

TAF would not apply at all either - the fact the network was gone wasn't even recognized by the client; there was no error as far as that client was concerned - it was just a really slow network. TAF is for when you have more than one instance you could connect to - and you do not.

You will have to look at your OS TCP settings (something I'm not a keen expert on; it is outside my area of expertise). Your network isn't timing out - one might expect the same thing to happen with ftp or anything else over the network - we never get a "connection timed out" error, no error at all, so we keep on trying. That is why you had to kill the session doing the refresh - so it would finally get an error and the refresh could begin anew.

materialized view indexing

sunil, March 11, 2008 - 8:14 am UTC

Hi
I have a complex query which the team wants to convert into a materialized view. I just have a simple question: should the indexes (the primary key columns of the parent table) also be created on the materialized view?
Tom Kyte
March 11, 2008 - 9:08 pm UTC

if your materialized view has the same primary key as the base tables - i don't understand why you would have a materialized view

you haven't aggregated - you probably haven't joined. so why the materialized view?

Query and thanx very much

sunil, March 12, 2008 - 3:06 am UTC

Hi
First of all, thanks a lot, as this was my first post on your site. This time I am sharing that query as well. I can't share the actual column and table names, so I changed them. According to the DBA guys this query will break down when table_name is heavily populated. I am new to the materialized view concept, so I am following their instructions directly to change this query into a materialized view.
I will be very thankful for your advice and help.
regards.
with data_store
as
(select column1,column2,column3,column4,column5dt,
        decode(substr(trim(column3),1,5),'SOMVALUE',trim(substr(column3,6,instr(column3,'-')-6)),'SOMOTHERVAL',999999) derived,
        from table_name)
select column1,column2,
DECODE(DERIVED,999999,
               nvl2(DERIVED3,DERIVED3||'.'||SUM(DECODE(DERIVED,999999,1,null)) OVER (PARTITION BY column1,column2,derived3 ORDER BY column1,column2,to_date(column5dt,'yyyy-mm-dd'),to_number(column4)),null),
               DERIVED) column4
,column3,column5dt
from
(
           select column1,column2,column3,column4,column5dt,derived,derived2
           decode(derived,999999,(select max(derived) from data_store z
                                  where z.column1                    =x.column1
                                  and   z.column2                     =x.column2
                                  and  to_char(to_date(z.column5dt,'yyyy-mm-dd'),'yyyy-mm-dd')=to_char(x.derived2,'yyyy-mm-dd')
                                  and derived<>999999
                                  and to_date(z.column5dt,'yyyy-mm-dd') is not null
                                  and derived is not null),
                           derived) derived3
           from (
                    select column1,column2,column3,column4,column5dt,derived
                       (select max(to_date(column5dt,'yyyy-mm-dd'))
                        from data_store b
                        where derived<>999999
                        and to_date(b.column5dt,'yyyy-mm-dd')<=to_date(a.column5dt,'yyyy-mm-dd')
                        and b.column1=a.column1
                        and b.column2=a.column2) derived2,
                        from data_store a
                ) x
order by column1,column2,to_date(column5dt,'yyyy-mm-dd')
)
order by column1,column2,to_date(column5dt,'yyyy-mm-dd');

Tom Kyte
March 12, 2008 - 5:32 pm UTC

well, I don't see the point of using a materialized view here - there are no inputs to this query.

I don't like what you are doing to the dates!!!!!!!! please - read about them, understand them and stop doing what you are doing.


please, explain to me *WHY*

to_char(to_date(z.column5dt,'yyyy-mm-dd'),'yyyy-mm-dd')


I presume column5dt is a string. In this string is something like 2008-01-12

so, why do you take this string, turn it into a date, and turn it into a string???!?!?!? I'm baffled.


please, explain to me *WHY*

to_date(b.column5dt,'yyyy-mm-dd')<=to_date(a.column5dt,'yyyy-mm-dd')


if these are strings - they are already "comparable", they are already 'sortable'


I hate that you use a string for a date - that is bad, but don't compound it with massive conversions back and forth and back and forth.

primary keys for above query

sunil, March 12, 2008 - 3:09 am UTC

Sorry sir, I just missed one thing... table_name has (column1,column2,column3) as its primary key.
Regards.

query

sunil, March 13, 2008 - 1:30 am UTC

hi
Thanks a lot for the help. Yes, column5dt is a string, but we cannot do anything about it, as the database design is not in our hands. I will surely remove the extra conversions!
Now back to the original question: the query in question is going to be used to create a view. As the performance of the query is under the scanner for heavy-load situations, a materialized view is considered to be the solution, as it will store the data with the derived values already computed, and we will be spared executing the same query again and again to get the derived column values. You are requested to provide help and suggestions for the same.
thanks very much.
Regards.
Tom Kyte
March 13, 2008 - 8:42 am UTC

then it might make sense to make a materialized view out of it - hand it over to the DBA's as they (much like an index) would be responsible for doing it.


Oneway (delta) refresh of materialized views

Paul, April 02, 2008 - 8:27 am UTC

Hi Tom,

We are using MViews in order to refresh master tables to a data warehouse. We need to be able to track the processing done on the receiving side. Say the original table looks like this:

Create Table Master_Tbl
( Pkey number not null primary key,
Content varchar2(20),
Pending number default 1);

Create Materialized View Log on Master_Tbl
With Primary Key Including New Values;

The child table (Mview) is defined on the dwh system (using a db_link) as:

Create Materialized View Child_Tbl
Refresh Fast With Primary Key
As Select * from Master_Tbl@db_link where Pending = 1;

So far, so good. The idea is, that we only want to see a delta view on the dwh side. That is to say we don¿t want the whole table in the data warehouse staging area just the changes which have been made. To make sure this work there is also a trigger on the master table which sets ¿pending¿ to 1 if an update occurs. Once a record has been processed on the dwh an update is performed on the source and pending is set to 0. On the next refresh of the mview all processed records disappear.

Now it gets tricky. We want to maintain the processing status of the child table records per row. We need to perform ¿Validation¿, ¿Transformation¿ and ¿Transfer¿ of data to two target areas. So we have around 5 state per record. It is not possible to add a column ¿State¿ to the Mview unless you create the child mview with ¿For Update¿. However, if we do that, then an mview-log is also created on the child table. Each time the status of a child record is changed, the record is sent (or rather logged to be sent) to the master table (which doesn¿t have that column). Because there is a trigger on the master table, which updates the ¿pending¿ column the change goes round-robin and comes back to the child table on the dwh. ¿

Because the source tables can contain 500 million plus records and some 10 million changes per hour can occur, the refresh of the mviews occurs every 21 seconds. And with the current behavior we are generating too much traffic. The only way around this problem would seem to be to have separate "status" tables for each table having an mview and synchronise the rows on the primary key. That sounds like an ideal situation for using a clustered table - but I haven't touched them since Oracle 6. But maybe I'm missing some nice feature of Oracle 10 which would solve this more elegantly?

Any suggestions or hints would be appreciated ;-))

Tom Kyte
April 02, 2008 - 9:03 am UTC

create another table, maintain status over there, not in the materialized view. Join to it when you need it.
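A minimal sketch of that suggestion, reusing the table and column names from the question (the status table name and its state column are invented for illustration):

```sql
-- Local status table, keyed on the same primary key as the MV.
-- A refresh of CHILD_TBL never touches this table, so no round-robin traffic.
create table child_status
( pkey       number not null primary key,
  etl_state  varchar2(20) default 'NEW' not null
);

-- Read the delta rows together with their local processing state
-- (outer join, so rows not yet tracked show up as 'NEW'):
select c.pkey, c.content, nvl(s.etl_state, 'NEW') as etl_state
from   child_tbl c, child_status s
where  c.pkey = s.pkey (+);
```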

Materialized View Query Rewrite Feature

MK, May 28, 2008 - 12:56 pm UTC

Hi Tom,
I am currently facing a problem with a newly created materialized view that was performing query rewrites successfully earlier and now does not do it at all. Below are the scripts and my findings. Can you suggest any other ways I can check to see if query rewriting should work for me please?


create materialized view MV_AGED_ARTIST_CNT_FOREVER
on prebuilt table with reduced precision 
enable query rewrite
as
select a.id                                         artist_id,
       acl.service_id                               service_id,
       a.primary_genre_id                           primary_genre_id,
       sum(acl.play_count)                          play_count,
       count(acl.play_count)                        mv_count_1,
       count(*)                                     mv_count_2
from   artist_count_live_aday acl,
       artist a
where  acl.artist_id     = a.id
and    acl.period_kind   = 'DAY'
group by a.id, acl.service_id, a.primary_genre_id;


-- Change refresh times
alter materialized view MV_AGED_ARTIST_CNT_FOREVER 
refresh fast start with sysdate next trunc(sysdate+1) + 5/24;

-- Complete Refresh
exec dbms_mview.refresh('MV_AGED_ARTIST_CNT_FOREVER','C');


I had done the complete refresh because, when I checked the output from my MV_CAPABILITIES_TABLE after running DBMS_MVIEW.EXPLAIN_MVIEW('MV_AGED_ARTIST_CNT_FOREVER'), I was seeing an error message that my MV was set as "build deferred", which a complete refresh fixed up.


Output from my MV_CAPABILITIES_TABLE
SQL> exec dbms_mview.explain_mview(mv => 'MV_AGED_ARTIST_CNT_FOREVER');
 
PL/SQL procedure successfully completed
 
SQL> select mvname, capability_name, possible from mv_capabilities_table;
 
MVNAME                         CAPABILITY_NAME                POSSIBLE
------------------------------ ------------------------------ --------
MV_AGED_ARTIST_CNT_FOREVER     PCT                            N
MV_AGED_ARTIST_CNT_FOREVER     REFRESH_COMPLETE               Y
MV_AGED_ARTIST_CNT_FOREVER     REFRESH_FAST                   Y
MV_AGED_ARTIST_CNT_FOREVER     REWRITE                        Y
MV_AGED_ARTIST_CNT_FOREVER     PCT_TABLE                      N
MV_AGED_ARTIST_CNT_FOREVER     PCT_TABLE                      N
MV_AGED_ARTIST_CNT_FOREVER     REFRESH_FAST_AFTER_INSERT      Y
MV_AGED_ARTIST_CNT_FOREVER     REFRESH_FAST_AFTER_ONETAB_DML  Y
MV_AGED_ARTIST_CNT_FOREVER     REFRESH_FAST_AFTER_ANY_DML     Y
MV_AGED_ARTIST_CNT_FOREVER     REFRESH_FAST_PCT               N
MV_AGED_ARTIST_CNT_FOREVER     REWRITE_FULL_TEXT_MATCH        Y
MV_AGED_ARTIST_CNT_FOREVER     REWRITE_PARTIAL_TEXT_MATCH     Y
MV_AGED_ARTIST_CNT_FOREVER     REWRITE_GENERAL                Y
MV_AGED_ARTIST_CNT_FOREVER     REWRITE_PCT                    N
MV_AGED_ARTIST_CNT_FOREVER     PCT_TABLE_REWRITE              N
MV_AGED_ARTIST_CNT_FOREVER     PCT_TABLE_REWRITE              N
 
16 rows selected



Database Init Params in SPFILE

-- query_rewrite_enabled = TRUE
-- query_rewrite_integrity = STALE_TOLERATED (for pre-built MV)

Running explain plans of the same query as the one in my materialized view definition showed that the underlying tables of the query were being hit and not the MView I just created. I then gathered statistics on all underlying tables, the MViews and the indexes for all objects in question. Nothing changed.

Finally I tried the explain_rewrite as below

DECLARE

SQLstr VARCHAR2(4000) := 'select a.id                                         artist_id,
       acl.service_id                               service_id,
       a.primary_genre_id                           primary_genre_id,
       sum(acl.play_count)                          play_count,
       count(acl.play_count)                        mv_count_1,
       count(*)                                     mv_count_2
from   artist_count_live_aday acl,
       artist a
where  acl.artist_id     = a.id
and    acl.period_kind   = ''DAY''
group by a.id, acl.service_id, a.primary_genre_id';

BEGIN
  -- dbms_output.put_line( SQLstr );
   dbms_mview.Explain_Rewrite(QUERY => SQLstr, MV => 'mv_aged_artist_cnt_forever');
END;
/

I got the error :
Trace dumping is performing id=[cdmp_20080528153432] Wed May 28 15:35:16 2008 Errors in file /u01/oracle/admin/piano/udump/piano3_ora_22234.trc:
ORA-00600: internal error code, arguments: [# of sumkeys invalid], [], [], [], [], [], [], [] Wed May 28 15:35:19 2008 Errors in file

Related Oracle Metalink link : https://metalink.oracle.com/metalink/plsql/f?p=130:14:4733190599370497459::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,397534.1,1,1,1,helvetica

What am I doing wrong here? Is there something else I can check to make sure this query_rewrite works?

I had also created a view which included the SQL used to build the M-view, and the query was able to rewrite earlier. Since I introduced the auto-refresh every morning, it has stopped rewriting to access my Mview! For now I have modified the view to look at the underlying materialized views directly rather than relying on the CBO to make this decision. Any help would be more than appreciated.

Thanks in Advance,
MK



Tom Kyte
May 28, 2008 - 3:50 pm UTC

I'd need a script to reproduce with on my own system.

Materialized View Query Rewrite Feature

MK, May 29, 2008 - 12:50 pm UTC

The table creation scripts are at : http://pastebin.com/m367ac16d

The DBA and I had to set the query_rewrite parameter to FALSE, as the ORA-00600 error outlined above kept recurring until we set this parameter in the SPFILE.
I had also noticed that the REWRITE_TABLE was giving me no information other than the fact that there was no rewrite taking place:

SQL> select mv_name, sequence,  message, pass from rewrite_table;
 
MV_NAME                                                       SEQUENCE MESSAGE                                                                          PASS
------------------------------ --------------------------------------- -------------------------------------------------------------------------------- ----
MV_AGED_ARTIST_CNT_FOREVER                                           1 QSM-01150: query did not rewrite                                                 NO



Tom Kyte
May 29, 2008 - 1:31 pm UTC

well, if you set query rewrite to false, we are done here - what else can I say?

to pursue an ora-600, you would of course use support.
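As an aside, it may be worth confirming what the running instance actually has in effect (rather than what was edited into the SPFILE). A quick sanity check might look like this, run as a suitably privileged user:

```sql
-- confirm the rewrite settings currently in effect for the instance
select name, value
from   v$parameter
where  name in ('query_rewrite_enabled', 'query_rewrite_integrity');
```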

replicate clob tables with parallel using MV

Megala, June 18, 2008 - 11:03 pm UTC

Tom:

Replicating CLOB tables using materialized views with the noparallel option goes pretty slow, so I thought of using the parallel option, but I do not see any parallel execution on the source database.

Does the parallel option really make replicating CLOB tables faster? Thanks


Tried as below :

create materialized view lob_test_mv_parallel
parallel nologging
refresh force with rowid
as
select *
from owner.lob_test@remotedb_link
/

Tom Kyte
June 19, 2008 - 9:59 am UTC

parallel dml is possible over a dblink; however, there will be a single pipe (it'll serialize via the network link - the query can execute in parallel on the remote site, it can insert in parallel on the local site, but it'll serialize over the link)

improve clob tables replication time

Megala, June 19, 2008 - 7:46 pm UTC

Tom:

my goal is to improve the time taken to replicate clob tables. Is it necessary to mention /*+ PARALLEL */ in the remote query in the MV definition to speed it up?
Thanks
Tom Kyte
June 20, 2008 - 10:13 am UTC

I told you - over the dblink it'll shrink down to a serial thing. You can query on remote in parallel, you can insert locally in parallel, but the pipe will be serial (hence, it doesn't really make sense to do parallel if you ask me - the remote table being queried in parallel will bottleneck on the single pipe, the dblink)

can you use offline instantiation - transport the data

http://www.oracle.com/pls/db102/search?remark=advanced_search&word=offline+instantiation&format=ranked&book=&preference=

(in most all cases, the "perceived need" to replicate isn't really a need - replication should be the last approach to solving access to data, distributed complexity... with the networks as they are today - the need to replicate is almost non-existent....)

Megala, June 20, 2008 - 2:45 pm UTC

Tom:

I do not have the choice to use offline instantiation here.
I thought using the PARALLEL option in building MVs would speed up the MV creation.

Will the PARALLEL option cut down the MV creation time when creating non-LOB MVs over a DBLINK?

I have to replicate around 1100 materialized views in our local environment from a remote external database.

Can I still use the PARALLEL option on non-LOB tables to speed up MV creation with NOLOGGING?

Example :
=========
create materialized view lob_test_mv_parallel
parallel nologging
refresh force with rowid
as
select *
from <source_schema>.lob_test@REMOTE_DBLINK
/


I split the MV creation scripts (create mv with parallel) into 10 and am running them in parallel.

Thanks for your suggestions.
Tom Kyte
June 20, 2008 - 4:07 pm UTC

... I thought using PARALLEL option in building MVs will speed up the mv creation. ..

parallel is not "fast=true"

parallel is a tool that in SOME situations MIGHT decrease response time. In your case, over a dblink, probably NOT since the remote query executing in parallel will be constrained by the single network link that all of the data has to flow over.


Tell me why you don't have "the choice", what is precluding what is probably the best approach for you from happening?

Continuation of above

Megala, June 20, 2008 - 6:30 pm UTC

Tom:
<< Tell me why you don't have "the choice", what is <<precluding what is probably the best approach for you <<from happening?

The remote database is owned by a different group. All we have is read-only access to their tables/MV logs, for us to replicate the remote source tables locally in our environment.

Tom Kyte
June 20, 2008 - 6:39 pm UTC

tell the remote database this:

"Unless you help us do this efficiently, we will nail the heck out of your database for a long period of time. What will it be? We nail your system moving this stuff over a dblink, or you help us out and we get it done fast, efficiently, effectively"

Sometimes, presenting the obvious...

Megala, June 20, 2008 - 7:01 pm UTC

OK, I will take it up with the source team on efficient methods.
Any suggestions you may have to improve the MV creation (via dblink) with what is available? Thanks


Tom Kyte
June 20, 2008 - 9:16 pm UTC

...
Any suggestions you may have to improve the MV creation (via dblink) with what
is available.
.....

offline instantiation...

Thanks!

Megala, June 20, 2008 - 9:27 pm UTC


Materialised view tables

anupam pandey, July 21, 2008 - 2:13 am UTC

Hi Tom,
How can I find out which tables are referred to by an MView, and what the source is for each base table? By source I mean: if the tables are being pulled over a db link, the name of the db link and the userid.

Thanks And Regards,
Anupam
Tom Kyte
July 22, 2008 - 10:24 am UTC

lots of views out there to peek at

# ALL_BASE_TABLE_MVIEWS
# ALL_MVIEW_AGGREGATES
# ALL_MVIEW_ANALYSIS
# ALL_MVIEW_COMMENTS
# ALL_MVIEW_DETAIL_RELATIONS
# ALL_MVIEW_JOINS
# ALL_MVIEW_KEYS
# ALL_MVIEW_LOGS
# ALL_MVIEW_REFRESH_TIMES
# ALL_MVIEWS
# ALL_REGISTERED_MVIEWS
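For the db-link part of the question specifically, two of those views can be combined along these lines (MY_MV is a placeholder name):

```sql
-- which objects a local MV is built from:
select detailobj_owner, detailobj_name, detailobj_type
from   all_mview_detail_relations
where  mview_name = 'MY_MV';

-- whether the MV pulls its master over a database link
-- (MASTER_LINK is null when the master table is local):
select owner, mview_name, master_link
from   all_mviews
where  mview_name = 'MY_MV';
```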


partition change tracking

Amir Riaz, July 22, 2008 - 11:16 am UTC

Hi Tom,

interesting thread.

I am trying to understand how the partition change Tracking work. Here are a few theories i have developed.

Under aggregation such as SUM with partition change tracking on, Oracle sums at the partition level and then performs a sum of the results to get an aggregated value. But the Oracle documents say that in case of a change the materialized view is not slated, it is valid - so my theory failed.

Can you explain how, after data modification, Oracle keeps the materialized view valid while it should be slate.

And how does partition change tracking lower the refresh time?
Tom Kyte
July 22, 2008 - 6:23 pm UTC

"while it should be slate"

I am not familiar with that term.

partition change tracking is typically used to do "mini complete refreshes on bits of a materialized view", we know what partitions have been modified and we only need to read them to maintain the materialized view.

It would be best to "demonstrate" what you tried - just create a base table with two partitions - real simple example - and document what you saw and what you expected to see.

Materialized View not Refreshed

Emma, August 03, 2008 - 8:46 pm UTC

Dear Sir,

we are upgrading to 10g from 9i and we are encountering problems replicating data to 10g using materialized views. It seems that data from the master table is not refreshed to the MVs. What could be the problem? What we did was create a temporary table IDENT_TABLE_A in 10g to replicate TABLE_A from 9i. Then we created a trigger upon INSERT to copy certain columns only from IDENT_TABLE_A to MV_TABLE_A in 10g. All the other tables (we created 11 MVs in all) were populated except for MV_TABLE_A. Where should we check for the error? Thanks very much.

Emma, August 04, 2008 - 4:05 am UTC

Hi!
we found the solution. Apparently we forgot to recompile the trigger after making some changes, so it failed to refresh the MV the way we wanted it to.

thanks for this site, very helpful!

Materalized view

Himanshu, November 03, 2008 - 4:51 pm UTC

Hi Tom,

Is it true that we should create materialized views only when we need data from tables which are in a different application or in a different database? The reason I am asking: I need to get the distinct values of a particular column from a table which has around 30 million rows. The number of distinct values for that column is around 100, and the chance of getting a new value is about one every six months. I have been trying to create a materialized view for this but did not get approval from the database group; every time a user tries to get those values they have to wait around one minute.

The other alternative to a materialized view could be a job which runs once every six months and dumps those rows into a table our application could read from. Please let us know if there is any other way to do this.

thanks

Tom Kyte
November 11, 2008 - 11:56 am UTC

... Is it true that we should create materialized views only when we need data from
tables which are in a different application or in a different database. ...


no, not at all - that is very far from the time you want to use them in fact.


I have been trying to create a materialized view for this but
did not get approval from the database group


sounds like the "database group" is wanting a reason to buy more hardware and they are using you.


But you'd need to tell us more about the 30,000,000 row table. What is it used for, how often is it updated, tell us everything you can think of about it.

Materalized view

A reader, November 12, 2008 - 10:00 am UTC

Hi Tom,

Thanks for the reply.

The 30,000,000 row table is an audit table where we are storing the history of changes done to different applications. (This is captured by using triggers.)
For one of the inquiry screens we needed to fill a dropdown box which contained distinct table names selected from the audit table. The chance of a record with a new table name being added for the first time may be once in six months, so what we wanted was to build an MV which could be refreshed once in six months.

Thanks
Tom Kyte
November 12, 2008 - 10:45 am UTC

can you not just query the dictionary???? Why bother with this table at all, just query the dictionary to find tables that are 'audited'

Materalized view

A reader, November 12, 2008 - 12:41 pm UTC

Hi Tom,

Thanks for the reply.
We thought of querying the data dictionary, but not all the tables in the application are audited; only around 40 percent of the tables are audited.


thanks
Tom Kyte
November 13, 2008 - 4:34 pm UTC

right, and they have a trigger on them and that trigger probably (one would think) would have a standard naming convention and you would just query *_triggers to find the set of tables with your audit triggers on them.
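Assuming such a convention exists - say, every audit trigger name starts with AUD (the prefix here is an assumption, not something from the question) - the dropdown could be filled with a query as simple as:

```sql
-- tables carrying an audit trigger, found via the dictionary
-- instead of scanning the 30,000,000 row audit table
select table_name
from   all_triggers
where  trigger_name like 'AUD%'
order  by table_name;
```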

Materalized view

A reader, November 14, 2008 - 10:07 am UTC

Thank you Tom, it was very helpful.

Thanks

Create Materialized view between different versions and characterset

A reader, November 18, 2008 - 3:42 pm UTC

Hi Tom,

We have a 8i DB with characterset UTF8 and 10g DB with AL32UTF8, we want to replicate data from 8i to 10g, is this do-able?

Thanks,

Tom Kyte
November 21, 2008 - 3:02 pm UTC

yes, but, you might have some characterset conversion (you could replicate from X to Y in general, but if one or the other characterset is a subset or doesn't have a mapping of some characters - you will "change" the data of course).

Materialized View with historical data

Juan, November 20, 2008 - 12:07 pm UTC

Hi Tom,

How are you?

I was having a look at the Oracle documentation about materialized views and I have some questions I can't answer myself.
In our current system we execute summary queries on raw tables (partitioned by date) at night, which then feed some summary tables (partitioned by date).
We are thinking of starting to use materialized views. But we have some functionality to preserve:

- We need to keep more historical data in the summary views than in the raw data. How can I keep data in the summary tables while doing a fast refresh?
We are thinking of using "CONSIDER FRESH", what do you think about this?

- We sometimes need to change the definition of the view; for example, some fields can change their aggregation function from SUM to AVG. But we have to keep the historical data in the view as it was computed, while from the current date onwards it will be computed as an AVG. How can I redefine the materialized view without losing historical data?

Cheers,
Juan
Tom Kyte
November 24, 2008 - 1:17 pm UTC

you cannot do that, the sole entire and only goal of a materialized view is to:

Make the MV's defining query be exactly and precisely what is in the base table(s) as of the time of refresh.

Period, that is it - you cannot make it be anything else.


You can use streams and a custom apply process - which would take a DELETE and do something else with it - instead of deleting the data, it would 'flag' delete it or move it to another table or whatever you want.


as for aggregates - that'll be really tricky - sounds like you would have to keep the entire history at the row level (NO aggregates) and create materialized views on top of that to aggregate.
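To illustrate that last point: the detail history stays as the system of record, and the summary is purely derived, so it can always be rebuilt - or redefined from SUM to AVG - without losing anything. A sketch, with invented table and column names:

```sql
-- detail history is the system of record (never aggregated away)
create table event_history
( event_dt  date    not null,
  amt       number  not null
);

-- the summary is derived and therefore disposable/redefinable;
-- dropping and recreating it with AVG instead of SUM loses no history
create materialized view daily_summary_mv
enable query rewrite
as
select trunc(event_dt) as day, sum(amt) as total_amt, count(*) as cnt
from   event_history
group  by trunc(event_dt);
```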

Help on creating materialised view

Hashmi, November 21, 2008 - 3:41 pm UTC

Dear Tom,
Can you guide how to create materialised view in the scenario presented as below:

We have data coming from various database centers into a single centralised server on daily basis.
We have some queries that do some arithmetic and display the result in the front-end, which is .NET. These queries take a lot of time to display results.
Here is the simple example of the query:

select c1,c2,sum(amt1+amt2) from test where dt>=:dat1 and dt<=:dat2 group by c1,c2;

In this sample query, :dat1 and :dat2 are runtime variables passed from the front-end as user input.
The calculations are done based on these dates as inputs.
Now how can I take advantage of a materialised view in such a scenario, so that the calculation is done only on newly arrived data on every refresh?
If you can guide me on any other way to speed up the process, that would be great!
Best Regards
Tom Kyte
November 24, 2008 - 4:12 pm UTC

since the where clause is applied before the group by - your materialized view would need c1, c2, amt1, amt2, dt - and be grouped by c1, c2, dt.

Now, let me ask you - would that be significantly less data?

that is, in general would

select count(*) from (select distinct c1,c2,dt from t where dt between :a and :b)

result in a much smaller count than


select count(*) from (select c1,c2,dt from t where dt between :a and :b)

Or would an index on dt, c1, c2, amt1, amt2 be more than sufficient? We'd range scan only the index - never ever hitting the table.
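That covering-index alternative might look like the following (reusing the table and column names from the sample query; the index name is made up):

```sql
-- the index carries every column the query touches,
-- so a range scan on dt answers it without any table access
create index test_dt_cover_idx on test (dt, c1, c2, amt1, amt2);
```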

Hashmi, November 25, 2008 - 8:31 am UTC

Dear Tom,
Thanks for your reply.
This is one of the query under the question:
select mj,name,
nvl(sum (case
  when to_char(dt,'YYYYMM')=2008||lpad(2,2,'0')
 then case when p_n='N' then gr_amt  end end),0)Np,
nvl(sum (case
  when to_char(dt,'YYYYMM')=2008||lpad(2,2,'0')
 then case when p_n='P' then gr_amt  end end),0)p,
 nvl(sum (case when p_nplan='N' then gross_amt end),0) NP_prog_amt,
 nvl(sum (case when p_n='P' then gr_amt end),0)p_prog_amt
from v_be be,hd
where dt>='01-MAR-07' and dt<='29-FEB-08'
and be.s_mj(+)=mj
group by mj,name
order by mj


In the above query all the date references are to be replaced by run time variables as inputs to dates.
v_be is a view which has the following structure:
create or replace view v_be as
select substr(h_c,1,4)s_mj,p_n,vno,dt,gr_amt
from be

I have created a function based index on substr(h_c,1,4) and a normal index for mj column of table hd.

The table be has 1,263,531 records. The table grows on a daily basis.
The table hd is a master table with 13541 records.

The explain plan shows full table scans for be and hd.
The query takes 12-13 secs on my test machine locally, but when accessed by the application it takes forever.
I am learning things. Kindly bear with me.
Tom Kyte
November 25, 2008 - 12:28 pm UTC

...
case
when to_char(dt,'YYYYMM')=2008||lpad(2,2,'0')
then case when p_n='N' then gr_amt end end),0
...

that is horrible - all of the implicit conversion scare the %$@#$%#@ out of me and that is perhaps the least efficient approach.

case
when trunc(dt,'mm') = to_date( '200802', 'YYYYMM' ) and p_n = 'N'
then gr_amt
end

use trunc whenever possible to trunc a date to a day, month, quarter, year - very very very fast, simply sets some bytes at the end of the 7 byte array that is a date rather than run hundreds of lines of NLS code over and over to convert a date to a string.

NEVER concatenate numbers, use explicit conversions.

but since you really wanted to do date compares - do a date compare, turn the string into a date.

If you were to:

create table t
as
select trunc(sysdate,'y')+mod(rownum,365) dt, level gr_amt
  from dual
connect by level <= 500000
/
exec dbms_stats.gather_table_stats( user, 'T' );



and compare the two approaches, you'd find one takes a lot less cpu horsepower...

select sum( case when to_char(dt,'YYYYMM')=2008||lpad(2,2,'0') then case when :p_n='N' then gr_amt  end end)
  from t

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.45       0.44          0       1248          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.45       0.44          0       1248          0           1
********************************************************************************
select sum( case when trunc(dt,'mm') = to_date( '200802', 'YYYYMM' ) and :p_n = 'N' then gr_amt end )
  from t

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.22       0.22          0       1248          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.22       0.22          0       1248          0           1




....
where dt>='01-MAR-07' and dt<='29-FEB-08'
....



where do people pick up such bad practices???? Do you compare apples to flying toaster ovens? Probably not - so stop comparing strings to dates, dates to numbers, numbers to strings - compare

dates to dates
strings to strings
numbers to numbers

and reject any code that does otherwise. Implicit conversions are a bug waiting to jump out and bite you - big time. Not only that, but they can massively impede performance.

USE TO_DATE on those strings
USE TO_DATE with an explicit format.



...
I have created a function based index on substr(h_c,1,4) and a normal index for mj column of table hd.
....

why - what logic did you employ for that? Do you search on the substr()? (no).


You did not actually answer MY questions - could you do that?

Apologies

Hashmi, November 26, 2008 - 2:58 am UTC

Dear Tom,
Sorry for irritating you with bad programming practices.
Since now you know the query,this is what you questioned:

SQL> select count(*) from (select mj,name,dt from hd,be
  2  where dt between to_date('01-MAR-07') and to_date('29-FEB-08')
  3  and mj=substr(hc,1,4));

  COUNT(*)
----------
   8163885

SQL> ed
Wrote file afiedt.buf

  1  select count(*) from (select distinct mj,name,dt from mj,be
  2  where dt between to_date('01-MAR-07') and to_date('29-FEB-08')
  3* and mj=substr(hc,1,4))
SQL> /

  COUNT(*)
----------
     13393


Now, can you guide me on what approach I should follow to speed up the process?
Tom Kyte
November 28, 2008 - 4:16 pm UTC

it doesn't annoy me so much as scare the heck out of me.

there are so many bugs lurking in that code.... and the performance impact is there too.


Your counts are totally whacky. Where are the JOIN CONDITIONS?? - you join hd and be using a cartesian join for the count??? why?

and we'd need to know what columns come from what tables.

USE CORRELATION NAMES - so that we know whence mj comes and dt comes and so on..



Materialized View with historical data

Juan, November 26, 2008 - 10:06 am UTC

Hi Tom

Thank you for your reply

But we can't hold the row-level data as long as the summary data. We are working in a data warehouse with huge row-level data (~100 columns per table and ~1 million rows per day), which is then aggregated to daily, weekly and monthly for each table (~200 tables). We keep the daily data longer than the raw data, and the weekly and monthly data longer than the daily data. We need to save space.

It seems that we have to keep our own summary management, and we must be aware of the quality of the summary because we have to check whether new data for old partitions has been inserted, and we can't enjoy, for example, the query rewrite functionality.

I think Oracle should manage MVs at a partition level in order to freeze some historical partitions, and it should let you change the definition of the MV keeping the frozen partitions without changes; these frozen partitions should not be refreshed by a complete refresh.

Anyway, to track what changes have happened in the raw data, in order to know if a summary should be recomputed, do you think it is a good idea to use Oracle Streams? (Sorry, I don't know much about it.)
I am thinking of doing something similar to MVs, working in a custom way.

Thanks

Tom Kyte
November 28, 2008 - 4:23 pm UTC

the entire sole goal of materialized views is to make the materialized view SYNCHRONIZED with the base table.

If they did not do that, MV-rewrite could not exist - the two sets of data would not be equivalent.


you would, as stated, need to use streams to perform this sort of operation.

The joins are there

Hashmi, December 01, 2008 - 2:28 am UTC

Dear Tom,
In these queries,
SQL> select count(*) from (select hd.mj,hd.name,v_be.dt from hd,v_be
  2  where v_be.dt between to_date('01-MAR-07') and to_date('29-FEB-08')
  3  and hd.mj=v_be.src_mjcd);

  COUNT(*)
----------
   8163885

SQL> ed
Wrote file afiedt.buf

  1  select count(*) from (select distinct hd.mj,hd.name,v_be.dt from mj,v_be
  2  where v_be.dt between to_date('01-MAR-07') and to_date('29-FEB-08')
  3* and hd.mj=v_be.src_mjcd)
SQL> /

  COUNT(*)
----------
     13393

hd.mj=substr(be.hc,1,4) is the join condition. I did create a view v_be to avoid the function in the join condition.

This is the actual query; I am writing it out for your convenience.

select hd.mj,hd.name,
nvl(sum (case
  when trunc(v_be.dt,'mm')=to_date('200802','YYYYMM')
 then case when v_be.p_n='N' then v_be.gr_amt  end end),0)Np,
nvl(sum (case
  when trunc(v_be.dt,'MM')=to_date('200802','YYYYMM')
 then case when v_be.p_n='P' then v_be.gr_amt  end end),0)p,
 nvl(sum (case when v_be.p_n='N' then v_be.gr_amt end),0) NP_prog_amt,
 nvl(sum (case when v_be.p_n='P' then v_be.gr_amt end),0)p_prog_amt
from v_be,hd
where v_be.dt>=to_date('01-MAR-07') and v_be.dt<=to_date('29-FEB-08')
and v_be.s_mj(+)=hd.mj
group by hd.mj,hd.name
order by hd.mj;


I hope things are clearer now.
Waiting for your expert advice.
Best Regards
Tom Kyte
December 01, 2008 - 7:50 am UTC

... I did create a view v_be to avoid
function in join condition.

...

that makes *no sense*

you either have to

a) join using a function
b) or not

a view would not change that at all.


what the view does in this case is make it too hard for me to parse in my head.

write it without the view, the view is not doing anything in this case but obscuring information from view - it is not avoiding the function or anything like that.

make sure to use correlation names

and give the create tables - simple ones - no tablespaces, nothing fancy and just enough columns to make the query above 'work' (remove columns that are not of interest to us)

Refresh on commit clause to MV

sajidsj, January 14, 2009 - 9:23 am UTC

hi Tom,
From your previous follow-ups, I've learned that MVs between two DBs can't be configured with the ON COMMIT clause.

My requirement is to synchronize two tables residing inside two different DB's.

Now I am using the following scenario

1. created an MV log on Table A at Database X
2. created an MV on table A at Database Y, with the FAST REFRESH ON DEMAND option
3. created a job to run every 15 minutes to refresh the MV using the DBMS_MVIEW.REFRESH procedure
4. created two triggers, AFTER UPDATE and AFTER INSERT, on the MV, which generate two separate jobs, scheduled to run after 2 minutes, to pass the :new column values to a procedure
5. The procedure will UPDATE/INSERT data into table B at Database Y

can you please suggest any better way to do the above.
I know that ON COMMIT refresh cannot be carried out between Databases.
And COMMIT statement cannot be invoked inside the trigger to an MV.

Your suggestion will be highly appreciated.


Tom Kyte
January 16, 2009 - 4:48 pm UTC

3) or make that part of the create materialized view - you can specify an interval for the refresh

4 & 5) now you have lost me. WHY? where does this trigger come from, where does this job come from? no idea what you are doing that for

Refresh on commit clause to MV

sajidsj, January 16, 2009 - 6:41 pm UTC

Thanks Tom for the reply.

I tried to make the refresh part of the create MV, using the NEXT refresh clause. It doesn't work with the ON DEMAND refresh clause, and I cannot use ON COMMIT refresh between DBs.

Regarding point 4&5, I want to insert/update the new/changed row to the table in the destination DB.


Tom Kyte
January 16, 2009 - 8:33 pm UTC

well, if you set it up to auto refresh - it would not be ON DEMAND (it would be on a schedule!)

I did not say use on commit, I said "you could have it schedule its own refreshes, you don't need the job to do that, it is part of the options of a materialized view create"

regarding 4&5. I don't get it, that is what the materialized view does, IT does the update/insert of the new/changed row. I don't get the trigger *at all*.

Refresh on commit clause to MV

sajidsj, January 17, 2009 - 5:22 am UTC

Thanks Tom

But when I tried creating the MV without any refresh type clause (ON DEMAND / ON COMMIT), using the "START WITH ... NEXT" option, it assumes ON DEMAND by default.

ex:
CREATE MATERIALIZED VIEW MV_TEST
REFRESH FAST
START WITH sysdate NEXT sysdate + 15/(24*60)
WITH PRIMARY KEY
AS
(/****QUERY******/);

The above script when executed, assumes ON-DEMAND by default.

4&5:-
My actual requirement is to update a table in the destination database, using the MV created from the source table.

Since the MV has been created through the Database-Link, I cannot rely directly on it to run our application, because I heard that MV will only be accessible when the source Database is up.

Tom Kyte
January 17, 2009 - 8:55 am UTC

... My actual requirement is to update a table in the destination database, using
the MV created from the source table. ...

change your requirement. You do not want to have triggers on this MV, rethink your processing here. If you want to do an ETL like processing - you want to use streams.

Do not put triggers on MV's, the only time triggers are supported/supportable on MV's is updateable ones - and only if they promise not to do anything during the refresh process itself.

You are assuming that an update of the base table will be an update against the MV. It doesn't have to be, we might decide to delete + insert. You are assuming that a fast refresh will always be an incremental one - it doesn't have to be, we might rebuild the entire MV (and then you are "hosed"). Do not do what you are doing - rethink your process.


and yes, the refresh type would be "DEMAND" and the system would be doing the demanding for you, this is my point - you do not have to create the job, WE WILL.
ops$tkyte%ORA10GR2> create table t ( x int primary key, y int );

Table created.

ops$tkyte%ORA10GR2> create materialized view log on t;

Materialized view log created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec print_table( 'select what from user_jobs' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create materialized view mv
  2  refresh fast
  3  start with sysdate
  4  next sysdate + (1/24/60)*15
  5  as
  6  select * from t;

Materialized view created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec print_table( 'select * from user_jobs' );
.JOB                          : 304
.LOG_USER                     : OPS$TKYTE
.PRIV_USER                    : OPS$TKYTE
.SCHEMA_USER                  : OPS$TKYTE
.LAST_DATE                    :
.LAST_SEC                     :
.THIS_DATE                    :
.THIS_SEC                     :
.NEXT_DATE                    : 17-jan-2009 08:49:09
.NEXT_SEC                     : 08:49:09
.TOTAL_TIME                   : 0
.BROKEN                       : N
.INTERVAL                     : sysdate + (1/24/60)*15
.FAILURES                     :
.WHAT                         : dbms_refresh.refresh('"OPS$TKYTE"."MV"');
.NLS_ENV                      : NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA'
NLS_CURRENCY='$' NLS_ISO_CURRENCY='AMERICA' NLS_NUMERIC_CHARACTERS='.,'
NLS_DATE_FORMAT='dd-MON-yy' NLS_DATE_LANGUAGE='AMERICAN' NLS_SORT='BINARY'
.MISC_ENV                     : 0102000200000000
.INSTANCE                     : 0
-----------------

PL/SQL procedure successfully completed.


but again, do not put a trigger on this MV, you will regret doing so in the future, that is 100% guaranteed


http://docs.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7004.htm#sthref7906

Note:
If you create a trigger on a base table of a materialized view, then you must ensure that the trigger does not fire during a refresh of the materialized view. During refresh, the DBMS_MVIEW procedure I_AM_A_REFRESH returns TRUE.
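
A hedged sketch of such a guard (the trigger, table, and column names here are hypothetical - the only real API used is DBMS_MVIEW.I_AM_A_REFRESH, per the note above):

```sql
-- Hypothetical trigger on a base table of a materialized view:
-- it skips its work while a refresh of the MV is in progress.
create or replace trigger base_t_aiu
after insert or update on base_t
for each row
begin
    if not dbms_mview.i_am_a_refresh
    then
        -- normal application logic would go here
        null;
    end if;
end;
/
```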

Refresh on commit clause to MV

sajidsj, January 17, 2009 - 11:37 am UTC

Yup I got it...

But one more questions,

what will happen to the accessibility of the MV, when the Source Database is down?. Will the data in the MV be accessible during the down time?

Note: I am using a Database link to create MV. And Database link hangs without any exception when the source Database is down.


Tom Kyte
January 17, 2009 - 11:56 am UTC

the data in the MV is accessible regardless of the availability of the source database - it just will not be refreshed of course.

Accessing the MV is a local operation, it is not using the dblink at all.

And if you have a dblink just hanging when the remote database isn't available - you have a configuration issue of some sort there - the dblink shouldn't be able to operate (you'd get "ORA-01034: ORACLE not available"). Please utilize support to figure out what is happening there.

Materialized Views

sajidsj, January 17, 2009 - 11:29 pm UTC

So Tom, with that I can assure that, at any time data will be available in the MV, where it might be younger if the Data Base Link or Source Data base is down.

One more thing. What's the chance of the MV not being synchronized with its peer (source table) while the remote Database is up? Can u explain the possible scenarios?

Regarding the Database link problem, we recently had a problem. Our application was using a summary table from a remote database through a link. This remote DB was down and the application (which uses PL/SQL packages and Java classes) remained hanging as it was waiting for a response from the DB link.
I heard that, when using a DB link, the only way to figure out the unavailability of the remote database from PL/SQL is to configure a TCP/IP timeout.

Could u please comment to this?
Tom Kyte
January 19, 2009 - 8:45 pm UTC

...
So Tom, with that I can assure that, at any time data will be available in the
MV, where it might be younger if the Data Base Link or Source Data base is
down.
....

you can be sure the MV will be queryable and as long as you 'fast refresh' or 'complete refresh with atomic_refresh=>true', yes, it will always have data.


"U" isn't available, "U" is dead as far as I know. Look it up, it is true.
http://en.wikipedia.org/wiki/U_of_Goryeo

The MV will almost never be in sync with the source table, it is ASYNCHRONOUS replication, changes happen to the MV *after* they are committed in the source database.

Again, "U" is dead, please stop asking "U" to answer questions. Death brings an end to answer anything.

Configure tcp/ip timeout then - basically, tcp/ip is the culprit here, not a database link. When the client on the other end "disappears", unless tcp/ip says to us "hey, this connection - it is as dead as U", there is not much we can do.
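
One way to bound that wait - assuming a 10g-era client, and with illustrative values only - is the sqlnet.ora timeout parameters on the database issuing the dblink calls:

```
# sqlnet.ora -- illustrative values, tune for your network
SQLNET.OUTBOUND_CONNECT_TIMEOUT = 10   # seconds allowed to establish a connection
SQLNET.RECV_TIMEOUT = 60               # seconds to wait for data on an established connection
```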

Refresh behaviour

Dipali, January 19, 2009 - 8:22 am UTC

Hi Tom,
I read 4 different articles on this forum on materialized view and many more on others.
I would like to thank you from the bottom of my heart. Persons like you are a great help to those of us who want to learn new things in the DBA world, as instead of just providing answers to questions directly, you teach us how to get those answers.. :) Thanks a lot.

In your this and other threads regarding materialized view, I came to read that:

complete refresh = truncate and insert /*+ append */

incremental refresh = delete and inserts.

The replies where you said the above were from 2002 to 2006.

But in one of the thread, I read your answer saying: in oracle 10g complete refresh behaviour is changed and it now does delete and inserts.

Please tell us, in Oracle 10g and Oracle 11g, will a complete refresh do truncate + insert /*+ append */, or delete + insert?

Can you also please point me to an article which discusses such details about materialized views? I read the Oracle documentation, but mostly it talks about syntax, usage, when to do what, etc. I want to understand the behaviour in detail, such as how Oracle refreshes an MV.

Regards..
Tom Kyte
January 19, 2009 - 9:27 pm UTC

complete refresh in 9i and before defaults to truncate + insert append
complete refresh in 10g and after defaults to delete + insert


You read correctly (you already knew the answer, I said in 10g it changed...)


You can see what it does by enabling sql_trace=true and reading the tkprof report. The details of what SQL is executed are not documented.
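
For example (MV_NAME is a placeholder for your materialized view name):

```sql
alter session set sql_trace = true;
exec dbms_mview.refresh( 'MV_NAME' )
alter session set sql_trace = false;
-- then run tkprof on the trace file from user_dump_dest and look
-- for the DELETE/INSERT (or TRUNCATE/INSERT append) statements
```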

Materialized Views

sajidsj, January 20, 2009 - 12:56 am UTC

Tom,

Condolences to "U". May "U" rest in peace.

Regarding the TCP/IP timeout settings: we cannot reduce them, as we have a very slow network. If we reduce the TCP/IP timeout, there will be connection timeouts every now and then.

Thanks a lot, for your (Not Ur) comments and replies, as it helped me a lot to solve the problem effectively. :-)

I know that Streams would have been the best possible solution, but I don't have enough time to spend on it. I need to solve this issue at the earliest, and later I can modify it to use Streams. I will be coming back to You (God forbid, not to U, the dead) for help regarding Streams and AQ.

Thanks a lot again.
Mohammed Sajid S J
System Analyst


Refresh behaviour

Dipali, January 20, 2009 - 5:15 am UTC

"complete refresh in 9i and before defaults to truncate + insert append
complete refresh in 10g and after defaults to delete + insert
"

I could not understand your meaning of "before default" and "after default" in your answer. Please clarify which default setting you meant.

And from that it also seems that it's possible in Oracle 10g to change the complete refresh behaviour from delete + insert to truncate + insert /*+ append */. Please tell me how we can do that, or suggest the feature's name, and I will study it in the documentation.

Moreover, can you please suggest any documentation which discusses such details.

Thanks and Regards,
Dipali..
Tom Kyte
January 20, 2009 - 7:12 am UTC

In database releases of Oracle 9i and before (meaning 9iR2, 9iR1, 8iR3, 8iR2, ....) the default behavior of the database code during a complete refresh was to truncate the table and then direct path load it.

In database releases of Oracle 10g and after (meaning 10gr1, 10gr2, 11gr1, ...) the default behavior of the database code during a complete refresh is to delete from the table and use a conventional path insert to load it.


In 9i and before, to change the default behavior you can use a snapshot refresh group. (ctl-f - that is search - for refresh groups on this page to read more about that)

In 10g and later, you use atomic_refresh=> FALSE in the refresh command to change the default behavior, so as to use a truncate and direct path load instead of delete and conventional path insert.

The default setting I was talking about was actually part of each sentence.

The data warehousing guide covers materialized views in detail.
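
A sketch of both variants (MV_NAME is a placeholder; method => 'C' requests a complete refresh):

```sql
-- 10g default: delete + conventional path insert; the MV stays queryable throughout
exec dbms_mview.refresh( 'MV_NAME', method => 'C', atomic_refresh => TRUE )

-- 9i-style: truncate + direct path load; faster, but the MV appears empty mid-refresh
exec dbms_mview.refresh( 'MV_NAME', method => 'C', atomic_refresh => FALSE )
```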

Refresh behaviour

Dipali, January 20, 2009 - 8:26 am UTC

Thanks a lot for reply, Tom..

Ohh, I was misunderstanding the lines.. the "default" word you used referred to the Oracle releases.. Sorry for troubling you for the elaboration..

Thanks for information about atomic_refresh. :) I will test it.

I have tested tkprof for the default complete refresh. I saw that Oracle is doing a delete, and an insert with the /*+ BYPASS_RECURSIVE_CHECK */ hint.

1) But there is also one more thing Oracle is doing, that is:

SET CONSTRAINTS ALL DEFERRED

Is that also part of what Oracle does for a complete refresh? (I think Oracle does this to preserve the integrity of any constraints on that MV.)

2) Does Oracle commit only the changes done by this complete refresh (i.e. delete + insert), or all the operations done before the refresh too?
Likewise, in case of truncate + direct load, will it commit only the records inserted by the refresh, or all the operations done before the refresh too?
i.e. Is an MV refresh a DML command or a DDL command?

Thanks again and regards,
Dipali..
Tom Kyte
January 20, 2009 - 10:15 am UTC

MV refreshes always set constraints to be deferred; they go inconsistent during the refresh process since they are NOT replaying the transactions, just bringing the table up to date.

an MV refresh is a stored procedure that commits when it completes. It is DML (stored procedure) but the logic has it commit.

Refresh behaviour

Dipali, January 20, 2009 - 8:29 am UTC

Tom, The question 2 , I can check myself by creating testcase.
Please reply to "i.e. Is MV refresh a DML command or DDL command?" only, for my confirmation.

Regards,
Dipali..

Materialized View and Exclusive Table Level Lock

Neelam, January 20, 2009 - 12:07 pm UTC

Hello Tom,

I create a table, and a snapshot log with rowid on that table. Then I create a snapshot with fast refresh, with rowid, on the same table. Then I create another snapshot with rowid in ON COMMIT refresh mode and drop it a few seconds later. After doing this, the base table is put under an EXCLUSIVE table lock on each and every commit, even though the ON COMMIT snapshot has been dropped.

See the example below

create table ab_tab
(a number, b number, c number);

CREATE SNAPSHOT LOG ON ab_tab WITH ROWID;

CREATE SNAPSHOT XX_AB_TAB
REFRESH FAST WITH ROWID AS
(SELECT * from AB_TAB);

CREATE SNAPSHOT XX_AB_TAB1
REFRESH ON COMMIT WITH ROWID AS
(SELECT * from AB_TAB where a=b );

DROP SNAPSHOT XX_AB_TAB1;

Then I run this script in one session and monitor locks in another session, and I find exclusive table locks being taken on ab_tab:

DECLARE

CURSOR c_cur1 IS
SELECT a,b,c
FROM ab_tab
where rownum < 1000;

l_num NUMBER;
BEGIN
FOR i in 1..5000 LOOP

INSERT INTO ab_tab_new values (i,i,i);

COMMIT;

END LOOP;
END;

This is the cause of major deadlocks and circular locks in my environment and I reached here after a month long investigation. I need to understand this behaviour and the best way to correct this. Your help is greatly appreciated.

Thanks,
Neelam
Tom Kyte
January 20, 2009 - 3:35 pm UTC

why are you doing DDL ? Why are you dropping schema objects in your environment????

Materialized View and Exclusive Table Level Lock

Neelam, January 21, 2009 - 5:57 am UTC

Hello Tom,

The problem we have is a huge transaction table holding sales order data that is put under an EXCLUSIVE TABLE lock after every DML statement. This has been going on for the last 2 months and we have a TAR open with Oracle, without any success. This is Oracle Applications 11.5.10.2 with Oracle RDBMS 10.2.0.2.0. When the problem started, support was being transitioned to a new vendor, and we have no idea what was going on in the database before we took over.
I am sorry sir, I am not a DBA and am merely working on this issue, so please excuse my mistakes here.

The facts are
1. Table that causes trouble is oe_order_lines_all table - Standard oracle table
2. There is a standard oracle snapshot log MLOG$_OE_ORDER_LINES_ALL and snapshot OE_ODR_LINES_SN provided by standard oracle. This is fast refresh On demand MV.
3. A few custom MV which are COMPLETE refresh on demand type.
4. There is no MV with on commit refresh.
5. There is no trigger on this table.

There is nothing which can explain an EXCLUSIVE table lock on every commit, in addition to the ROW EXCLUSIVE lock.

So in the test environment I dropped all Mlogs and all MVs last week and I recreated them and the problem got solved.

After that I created an MV with ON COMMIT refresh and dropped it, and we started seeing the same behaviour.

I tried with a small table the same steps that I have mentioned before and saw the same behaviour so am not sure what was going on.

The help I am looking for is to understand oracle behaviour here. If I drop an incorrect object then oracle should not behave the way it would behave in its presence.

Thanks,
Neelam
Tom Kyte
January 21, 2009 - 1:11 pm UTC

if you have a test case - why didn't you share it with us?

sounds like you have a tiny way to reproduce this - give it to us and tell us how to reproduce your findings.

Materialized View and Exclusive Table Level Lock

Neelam, January 22, 2009 - 5:43 am UTC

Hello Tom,

Thanks for your prompt reply.

I did give you the way to reproduce it in my first post, the one just before this one.
In the reply to that, you asked why I am doing DDL, or rather why I am dropping schema objects. The answer, Tom, as I explained, is that there was no way to explain or reproduce the issue except to create an ON COMMIT MV and then drop it; from that point on, every DML on the master table acquires an EXCLUSIVE table lock, with nothing to explain that behaviour.

In case you need to know we are on 10.2.0.2.0 oracle RDBMS.

I am pasting the way to reproduce this issue using a simple table.

create table ab_tab
(a number, b number, c number);

CREATE SNAPSHOT LOG ON ab_tab WITH ROWID;

CREATE SNAPSHOT XX_AB_TAB
REFRESH FAST WITH ROWID AS
(SELECT * from AB_TAB);

CREATE SNAPSHOT XX_AB_TAB1
REFRESH ON COMMIT WITH ROWID AS
(SELECT * from AB_TAB where a=b );

DROP SNAPSHOT XX_AB_TAB1;

Then I run this script in one session and monitor locks in another session, and I find exclusive table locks being taken on ab_tab:


BEGIN
FOR i in 1..5000 LOOP

INSERT INTO ab_tab_new values (i,i,i);

COMMIT;

END LOOP;
END;

I really appreciate what you do here on this site.

Thanks,
Neelam
Tom Kyte
January 22, 2009 - 9:23 am UTC

after correcting your script, I do not see any issues with locking. When you do an insert - it'll surely get a lock on the table, that is normal - with or without materialized views. But it DOES NOT cause any concurrency issues:

ops$tkyte%ORA10GR2> create table ab_tab
  2  (a number, b number, c number);

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE SNAPSHOT LOG ON ab_tab WITH ROWID;

Materialized view log created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE SNAPSHOT XX_AB_TAB
  2  REFRESH FAST WITH ROWID AS
  3  (SELECT * from AB_TAB);

Materialized view created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE SNAPSHOT XX_AB_TAB1
  2  REFRESH ON COMMIT WITH ROWID AS
  3  (SELECT * from AB_TAB where a=b );

Materialized view created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> DROP SNAPSHOT XX_AB_TAB1;

Materialized view dropped.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into ab_tab values ( 1,1,1 );

1 row created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          insert into ab_tab values ( 2, 2, 2 );
  5          commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.



Now, if the insert of 1,1,1 had entirely locked out the table, I would have expected:

ops$tkyte%ORA10GR2> lock table ab_tab IN EXCLUSIVE mode;

Table(s) Locked.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          insert into ab_tab values ( 2, 2, 2 );
  5          commit;
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at line 4



the table lock you say you are seeing is really just saying "this transaction has an exclusive lock on some rows in this table"

the table itself is not locked.

So, back to you - show us how to reproduce some concurrency issue here please.

And run your script in a test schema so as to ensure it runs...

Materialized View and Exclusive Table Level Lock

Neeelam, January 22, 2009 - 9:58 am UTC

Hello Tom,

  The exclusive table level lock is placed when you commit DML transaction on this table.
Sorry if I was not clear enough about this earlier.

  So the example that you have done will only place a ROW EXCLUSIVE lock, as the first session is not doing a commit. Besides, a commit for just one transaction would place the EXCLUSIVE table lock for such a short time that by the time we reach the other session it would be released. To replicate, we need to put the insert or update DML and the commit in a LOOP like I have shown below.

Hence I execute the below, if you have data in the table you can replace insert with update.

BEGIN
FOR i in 1..1000 LOOP

   INSERT INTO ab_tab_new values (i,i,i);
   
   COMMIT;

END LOOP;
END;


and I use this query to check the locks

SELECT  l.type, prc.spid, -- NVL(S.USERNAME,'Internal') username, 
 id1 || ':' || id2 id,
        --  NVL(S.TERMINAL,'None') terminal,
         L.SID||','||S.SERIAL# Kill,
         U1.NAME||'.'||T1.NAME tab,
         DECODE(L.LMODE,1,'No Lock',
                 2,'Row Share',
                 3,'Row Exclusive',
                 4,'Share',
                 5,'Share Row Exclusive',
                 6,'Exclusive',NULL) lmode,
         DECODE(L.REQUEST,1,'No Lock',
                 2,'Row Share',
                 3,'Row Exclusive',
                 4,'Share',
                 5,'Share Row Exclusive',
                 6,'Exclusive',NULL) request,
     s.MODULE
 FROM    V$LOCK L,
         V$SESSION S,
         SYS.USER$ U1,
         SYS.OBJ$ T1, v$process prc, v$sqltext SQL
 WHERE   L.SID = S.SID
 AND  S.SQL_ADDRESS = SQL.ADDRESS
 AND  prc.addr = s.paddr
 AND     T1.OBJ# = DECODE(L.ID2,0,L.ID1,L.ID2)
 AND     U1.USER# = T1.OWNER#
 and sql.piece = 0
 and SUBSTR(T1.NAME,1,20) like '%AB_TAB%'
order by id

I get the following output

SQL> /

TY SPID         ID
-- ------------ ---------------------------------------------------------------------------------
KILL
---------------------------------------------------------------------------------
TAB                                                           LMODE
------------------------------------------------------------- -------------------
REQUEST             MODULE
------------------- ------------------------------------------------
TM 1323210      757397:0
710,13865
APPS.AB_TAB_NEW                                               Row Exclusive
                    T.O.A.D.

MS 1323210      757397:0
710,13865
APPS.AB_TAB_NEW                                               Exclusive
                    T.O.A.D.

TM 1323210      757398:0
710,13865
APPS.MLOG$_AB_TAB_NEW                                         Row Exclusive
                    T.O.A.D.


Thanks,
Neelam

Tom Kyte
January 22, 2009 - 10:09 am UTC

excuse me, but re-read my example

I inserted 1,1,1 in transaction 1. I did not commit.


I inserted 2,2,2 in transaction 2 and committed. If transaction 2 required a full table lock, it would have deadlocked.

Your example causes me some issues. There is NO ab_tab_new in it.

So, back to you. Why don't you run my example - if it runs as is, there is no full table lock taking place, it would not be possible.


ops$tkyte%ORA10GR2> create table t ( x int );

Table created.

ops$tkyte%ORA10GR2> insert into t values ( 1 );

1 row created.

ops$tkyte%ORA10GR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          lock table t in exclusive mode;
  5          commit;
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at line 4



that is what we would see if the commit required a full table lock.

Materialized View and Exclusive Table Level Lock

Neeelam, January 22, 2009 - 10:07 am UTC

***Please ignore the review before this. Sorry for the confusion - the table name was incorrect in the script. Thanks for your patience***

Hello Tom,

The exclusive table level lock is placed when you commit DML transaction on this table.
Sorry if I was not clear enough about this earlier.

 So the example that you have done will only place a ROW EXCLUSIVE lock, as the first session is not doing a commit. Besides, a commit for just one transaction would place the EXCLUSIVE table lock for such a short time that by the time we reach the other session it would be released. To replicate, we need to put the insert or update DML and the commit in a LOOP like I have shown below.

Hence I execute the below, if you have data in the table you can replace insert with update.

BEGIN
FOR i in 1..1000 LOOP

   INSERT INTO ab_tab values (i,i,i);
   
   COMMIT;

END LOOP;
END;


and I use this query to check the locks

SELECT  l.type, prc.spid, -- NVL(S.USERNAME,'Internal') username, 
 id1 || ':' || id2 id,
        --  NVL(S.TERMINAL,'None') terminal,
         L.SID||','||S.SERIAL# Kill,
         U1.NAME||'.'||T1.NAME tab,
         DECODE(L.LMODE,1,'No Lock',
                 2,'Row Share',
                 3,'Row Exclusive',
                 4,'Share',
                 5,'Share Row Exclusive',
                 6,'Exclusive',NULL) lmode,
         DECODE(L.REQUEST,1,'No Lock',
                 2,'Row Share',
                 3,'Row Exclusive',
                 4,'Share',
                 5,'Share Row Exclusive',
                 6,'Exclusive',NULL) request,
     s.MODULE
 FROM    V$LOCK L,
         V$SESSION S,
         SYS.USER$ U1,
         SYS.OBJ$ T1, v$process prc, v$sqltext SQL
 WHERE   L.SID = S.SID
 AND  S.SQL_ADDRESS = SQL.ADDRESS
 AND  prc.addr = s.paddr
 AND     T1.OBJ# = DECODE(L.ID2,0,L.ID1,L.ID2)
 AND     U1.USER# = T1.OWNER#
 and sql.piece = 0
 and SUBSTR(T1.NAME,1,20) like '%AB_TAB%'
order by id

I get the following output

SQL> /

TY SPID         ID
-- ------------ ---------------------------------------------------------------------------------
KILL
---------------------------------------------------------------------------------
TAB                                                           LMODE
------------------------------------------------------------- -------------------
REQUEST             MODULE
------------------- ------------------------------------------------
TM 1323210      757386:0
710,13865
APPS.AB_TAB                                                   Row Exclusive
                    T.O.A.D.

MS 1323210      757386:0
710,13865
APPS.AB_TAB                                                   Exclusive
                    T.O.A.D.

TM 1323210      757387:0
710,13865
APPS.MLOG$_AB_TAB                                             Row Exclusive
                    T.O.A.D.


Thanks,
Neelam

Tom Kyte
January 22, 2009 - 10:21 am UTC

run my example please.

If it doesn't deadlock, there is no lock blocking you during the commit.

I don't know how you think you can run that lock query at precisely the time of commit anyway.

Materialized View and Exclusive Table Level Lock

Neeelam, January 22, 2009 - 11:31 am UTC

Hello Tom,

I see and understand what you are saying now. I did not see the simple steps you mentioned before, and I don't have an answer to your question below:

'I don't know how you think you can run that lock query at precisely the time of commit anyway.'

I am still scratching my head to find a way to explain what I see.

However, I still don't understand why, if I execute the script below, I see an EXCLUSIVE table lock of type MS in the v$lock view. What is that, if you can please explain?

BEGIN
FOR i in 1..1000 LOOP

INSERT INTO ab_tab values (i,i,i);

COMMIT;

END LOOP;
END;

Now if I drop all Snapshot logs and snapshot

DROP SNAPSHOT LOG ON APPS.AB_TAB;

DROP SNAPSHOT APPS.XX_AB_TAB;

and then recreate them - this time without recreating the ON COMMIT MV - I don't see this exclusive lock in V$LOCK.

Thanks,
Neelam
Tom Kyte
January 22, 2009 - 11:55 am UTC

did you run my example? did it lock up?

regardless of its existence - it is not causing your locking/contention issues

ops$tkyte%ORA10GR2> create or replace procedure p( i in number )
  2  as
  3          pragma autonomous_transaction;
  4  begin
  5       insert into ab_tab values (-i,-i,-i);
  6           commit;
  7  end;
  8  /

Procedure created.

ops$tkyte%ORA10GR2> begin
  2  FOR i in 1..5000 LOOP
  3
  4     INSERT INTO ab_tab values (i,i,i);
  5     p(i);
  6     COMMIT;
  7
  8  END LOOP;
  9  END;
 10  /

PL/SQL procedure successfully completed.



If the commits were causing serialization, the above code would fail

Materialized view corrupted

Raj, March 06, 2009 - 5:47 am UTC

Hi Tom,

I got a scenario, I would request your help on the below problem statement


Problem Statement:
We use materialized views to get refreshed data (every 15 mins) to show in the portal/portlet. One day the portal was showing an old timestamp. Our development team did whatever we could; finally we contacted the DBAs, and even they didn't find any root cause.
But they recreated all the materialized views involved for that application, and then we got the latest timestamp. Issue resolved finally.
The DBA team just said: "materialized view corrupted".

Questions:
1. Is there anything like a corrupted materialized view? If so, what does it mean, and what makes a materialized view corrupted? Please explain the concept behind it.
2. How can we confirm whether or not a materialized view is corrupted?
3. How can we resolve a corrupted materialized view?

Please answer Tom.

Regards,
Raj.
Tom Kyte
March 06, 2009 - 10:39 am UTC

DBA team "did not provide you correct information"

1) no, while there is 'corruption', it would result in an error being returned to the application - you would know about it immediately.

2) see #1

3) see #1


Given what I know (which is not much in the 'specifics' area in this case), what is probable is that the DBA team changed something and the materialized view stopped refreshing due to that change. They either unmade the change or corrected the thing they did, and all was good.


A reader, March 06, 2009 - 11:12 am UTC

Thanks Tom..

12053

whizkid, April 25, 2009 - 3:14 am UTC

Hi Tom,

We have two tables at the source site t1 (containing the latest record) & t1_hist (containing the history).

In the DWH, we are trying to build a single fast refresh MV over both source tables. Since we cannot do a direct union all specifying the db link, we have created two individual MVs in the DWH.

create materialized view t1_mv
refresh fast on demand
with primary key
as
select * from t1@source ;

create materialized view t1_hist_mv
refresh fast on demand
with primary key
as
select * from t1_hist@source ;

create materialized view log on t1_hist_mv with rowid(prop_no) including new values;

create materialized view log on t1_mv with rowid (prop_no) including new values;

create or replace force view t1_v_all
(
prop_no,
crid,
umarker
)
as
(select prop_no,
t1.rowid,
2 umarker
from t1
union all
select prop_no,
t1_hist.rowid,
3 umarker
from t1_hist);


create materialized view t1_all
build immediate
refresh fast on demand
with primary key
as
select * from t1_v_all;

this MV creation results in ORA-12053. Have I gone wrong somewhere or missed something? version 10.2.0.3

Thanks in advance
Tom Kyte
April 27, 2009 - 1:41 pm UTC

why bother with the mv of the mv??? Just create a 'regular' view and union all it in there. We can push predicates and everything else, it'll be sort of like it was partitioned.
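
A sketch of that suggestion, reusing the two MVs already created above (the view name is hypothetical; no third MV and no MV logs on MVs are needed):

```sql
-- plain view over the two materialized views; the optimizer can push
-- predicates into each branch of the UNION ALL
create or replace view t1_all_v
as
select t.*, 2 umarker from t1_mv t
union all
select h.*, 3 umarker from t1_hist_mv h;
```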

MV

sam, January 25, 2010 - 9:36 pm UTC

Hi Tom,

I have this materialized view in 9i that gets refreshed nightly via a job.

Now I need to update the derived value when a customer processes/updates some books, so he sees the summary in real time.

I added "FOR UPDATE" to the view to make it writeable, but I get an error:

ORA-12013: updateable materialized views must be simple enough to do fast refresh


create materialized view mv_menu_aggs
as
select custcd, count(*) cust_cnt, sysdate dt
from dist a, prod b
where a.bkno = b.bkno
and a.ship_flag is null
and b.stage in ('AB','DD')
and b.end_date between trunc(sysdate-150) and trunc(sysdate)
group by custcd

How do you fix this?

I tried to create another simple materialized view based on this one.

create materialized view mv_menu_final for update
as select * from mv_menu_aggs
ORA-12014: table 'MV_MENU_AGGS' does not contain a primary key constraint


It told me that table must contain a primary key constraint.

I added a PK and it still won't work. Is there a way to make this writeable so I can update some records?
Tom Kyte
January 29, 2010 - 2:20 pm UTC

an updatable materialized view would have one and only one goal:

to update the master data during the refresh.


That is, if you have master data and create a materialized view like "select * from master_data where x = 5", and you updated the materialized view - those changes would be replicated to the master data during the refresh.

Tell me, since an aggregate necessarily applies to MANY rows - which row would get the update upon the refresh? You cannot answer that.

Therefore, this materialized view can never be updatable. And - further - it is highly unlikely that you would want it to be - I seriously doubt you want the change to propagate UP to the master data.

You'll have to keep some sort of override table of your own to the side. When someone makes a change that you want immediately reflected, you'll have to:

a) insert a row into this override table
b) always query
select mv.custcd, nvl(override.cnt,mv.cnt), mv.dt
  from mv, override
 where mv.custcd = override.custcd(+)
   and mv.dt = override.dt(+)
/


and periodically clean out old values in the override table. You could do that by using a custom job to refresh instead of the default one - one that would:


begin
   lock table override in exclusive mode;
   delete from override;
   dbms refresh the mv...
end;
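
That custom job could be fleshed out roughly as follows. This is only a sketch: the OVERRIDE table and the MV_MENU_AGGS name come from the discussion above, and 'C' requests a complete refresh.

```sql
create or replace procedure refresh_menu_aggs
as
begin
   -- serialize against sessions inserting into the override table
   lock table override in exclusive mode;

   -- the refresh recomputes everything, so the overrides are obsolete
   delete from override;

   -- complete refresh of the materialized view
   dbms_mview.refresh( list => 'MV_MENU_AGGS', method => 'C' );
end;
/
```

Note that dbms_mview.refresh commits, which also releases the table lock taken above.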




MV

sam, February 01, 2010 - 4:47 pm UTC

Tom:

I see what you are saying. But that requires another table, a job, a procedure and more code.
I might as well just create a regular summary table instead of an MV.

All I need to do is run DML (update) on the materialized view for one ROW if the master table was updated. I do not want to refresh all the rows (only the nightly job will do that) to minimize the transaction overhead (time)


update mv_menu_aggs
set cust_cnt = mv_count_value - new_rows_updated
where custcd = p_custcd;


According to the Oracle docs, a writeable MV will do that. You should be able to update/delete on it just like any table (different from an updateable MV).


Writeable Materialized Views
A writeable materialized view is one that is created using the FOR UPDATE clause but is not part of a materialized view group. Users can perform DML operations on a writeable materialized view, but if you refresh the materialized view, then these changes are not pushed back to the master and the changes are lost in the materialized view itself. Writeable materialized views are typically allowed wherever fast-refreshable read-only materialized views are allowed.

http://download.oracle.com/docs/cd/B10501_01/server.920/a96567/repmview.htm
Tom Kyte
February 02, 2010 - 10:58 am UTC

read your link and you'll see why this cannot work. It is right there in front of you


For read-only, updatable, and writeable materialized views, the defining query of the materialized view must reference all of the primary key columns in the master.


if you included primary keys, and used a group by - you would be aggregating over *nothing*. I'm not sure you read what I wrote above.

MV

A reader, February 02, 2010 - 3:10 pm UTC

Tom:

I see what you are saying now.

Including primary keys of both tables defeats the purpose of the MV. I will not have a SUMMARY table doing it that way.

I will either implement this using a NORMAL table instead of an MV, or use your method of an extra OVERRIDE table.

For now I am refreshing the MV with each DML and it seems to run very fast (under 1 second). Maybe I should just leave it as is.

It is not very efficient because if one customer updated a few rows I am recalculating the summaries for all customers, but the number of customers is only 100 and does not increase with time in this case.


sudhithra, February 04, 2010 - 4:53 am UTC

hi,

I have created a mat view which involves aggregate functions and has an inline view in the from clause which also involves some aggregate functions.

When I tried to explain the execution plan for the same SQL I used to create the mat view, it shows the base tables and not the newly created mat view. The query rewrite option was enabled while creating the mat view. When I tried to figure out why the query was not rewritten using dbms_mview.explain_rewrite, I got the below 4 messages


QSM-01150: query did not rewrite
QSM-01263: query rewrite not possible when query references a dictionary table or view
QSM-01219: no suitable materialized view found to rewrite this query
QSM-01284: materialized view MV_PRE_RPT has an anchor table INLINE VIEW not found in query

I'm not able to figure out the problem and any help in this regard will be appreciated. Thanks.
Tom Kyte
February 04, 2010 - 1:58 pm UTC

hmmmm

this seems very important:

...QSM-01263: query rewrite not possible when query references a dictionary table
or view
...


might it be related to that? I don't know *because I do not have an example*, but you might be able to see if it applies - right?

MV-Rewrite doesnt happen

sudhithra, February 05, 2010 - 7:31 am UTC

hi,

I have created a mat view which involves aggregate functions and has an inline view in the from clause which also involves some aggregate functions.

When I tried to explain the execution plan for the same SQL I used to create the mat view, it shows the base tables and not the newly created mat view. The query rewrite option was enabled while creating the mat view. When I tried to figure out why the query was not rewritten using dbms_mview.explain_rewrite, I got the below 4 messages


QSM-01150: query did not rewrite
QSM-01263: query rewrite not possible when query references a dictionary table or view
QSM-01219: no suitable materialized view found to rewrite this query
QSM-01284: materialized view MV_PRE_RPT has an anchor table INLINE VIEW not found in query


The query doesn't use any dictionary tables or views
Below is the sample of the query that i'm using

SELECT A.col1 ,
A.col2 ,
TO_CHAR(iview.icol3,'MM/DD/YYYY'),
SUM(
CASE
WHEN A.col4 = 'VALUE1'
THEN 0
WHEN A.col4 = 'VALUE2'
THEN 0
ELSE iview.icol4
END) sum_col
FROM tbl_A A ,
tbl_B B ,
(SELECT A.icol1,
A.icol2 ,
B.icol3 ,
SUM (
CASE
WHEN A.col3 = 'INVALUE1'
THEN A.col4
ELSE 0
END) icol4
FROM tbl_C A,
tbl_D B
WHERE A.coln = B.coln
AND A.col5 IN ('A','B','C')
GROUP BY A.icol1,
A.icol2 ,
B.icol3
) iview
WHERE A.col5 = iview.icol1
AND A.col6 = iview.icol2
AND A.col7 = iview.icol3
AND A.col5 = B.coln
GROUP BY A.col1,
A.col2 ,
TO_CHAR(iview.icol3,'MM/DD/YYYY');
Tom Kyte
February 08, 2010 - 7:39 pm UTC

sigh...

I don't know *because I do not have an example*

I still don't - a query against a totally and completely UNKNOWN set of materialized views... I don't have any ideas for you, because I know nothing of what you've done.

MV

A reader, February 19, 2010 - 10:33 am UTC

I am using Oracle 9.2.
I have a materialized view and it works fine. When I add a 3rd table to the query that uses a db link, the CREATE statement takes a long time and ends with a strange internal Oracle error (unreadable characters).
It seems to be related to the db link.

When I run the SQL query alone it works fine in a few seconds. Is this an internal bug with MVs because of links? Any ideas to get around this or rewrite this query?

Basically I need to store summaries for customer shipments in the last 150 days and shipments between 60 and 150 days.


SELECT custcd, count(*) total_books,
count(case when b.ship_date between trunc(sysdate)-150 and trunc(sysdate)-60 then 1 else null end) subtotal_qty,
sysdate CREATED_DATE
FROM shipment a, production@db_link b, books@db_link c
WHERE a.bkno = b.bkno
AND a.bkmedium = b.bkmedium
AND a.bkno = c.bkno
AND a.bkmedium = c.bkmedium
AND a.reship_flag is null
AND b.stage in ('XY','CD','EW')
AND b.ship_date between (trunc(sysdate)-150) and (trunc(sysdate)+1-1)
group by CUSTCD

Any ideas what may cause this? Shall I create the MV at the remote site (right next to the data) and a regular view based on it in the current database, or use the DRIVING_SITE hint to avoid bringing the data over?
Tom Kyte
February 25, 2010 - 12:46 am UTC

SMK, Sam - internal error = please use support. You are the DBA (so you say), you should therefore have the ability to open a service request.

Materialized Views

A Reader, February 28, 2010 - 1:16 pm UTC

Sir,

Can you please outline the best practices that could be followed to improve the complete refresh time of materialized views? I understand that the whole point of using materialized views is to materialize resource-intensive queries that can be used over and over again. Other than tuning the query, can the refresh process be parallelized, or can the materialized view be partitioned and then loaded at the partition level?

Many Thanks

Refresh group to run on 10th of every month

sosu, March 16, 2010 - 1:37 pm UTC

Tom,
I need to create a refresh group to run on 10th of every month.
Here is what I tried to get the date:

select trunc(add_months(trunc(sysdate),1),'mon') + 9 + (3/24) from dual;

But this date statement does not work when put in the below pl/sql code:

BEGIN
DBMS_REFRESH.MAKE(
name => '"OFM"."JUNK"',
list => '',
next_date => to_date('04-10-2010 05:00:00', 'MM-DD-YYYY HH24:MI:SS'),
interval => 'trunc(add_months(trunc(sysdate),1),'mon') + 9 + (3/24)',
implicit_destroy => FALSE,
lax => FALSE,
job => 0,
rollback_seg => NULL,
push_deferred_rpc => FALSE,
refresh_after_errors => FALSE,
purge_option => NULL,
parallelism => NULL,
heap_size => NULL
);
END;
/


Appreciate your inputs as always.

Thank you
Tom Kyte
March 16, 2010 - 2:47 pm UTC

no error message :(
no description of what "does not work" means :(

sigh...


probably, you need to use '' to get a ' in a string


interval => 'trunc(add_months(trunc(sysdate),1),''mon'') + 9 + (3/24)',

sosu, March 16, 2010 - 3:12 pm UTC

Sorry about not providing the error message.
Here it is:

BEGIN
DBMS_REFRESH.MAKE(
name => '"OFM"."JUNK"',
list => '',
next_date => to_date('04-10-2010 05:00:00', 'MM-DD-YYYY HH24:MI:SS'),
interval => "'trunc(add_months(trunc(sysdate),1),'mon') + 9 + (3/24)'",
implicit_destroy => FALSE,
lax => FALSE,
job => 0,
rollback_seg => NULL,
push_deferred_rpc => FALSE,
refresh_after_errors => FALSE,
purge_option => NULL,
parallelism => NULL,
heap_size => NULL
);
END;
Error at line 1
ORA-06550: line 6, column 13:
PLS-00114: identifier ''trunc(add_months(trunc(sysdat' too long

Script Terminated on line 1.

Tom Kyte
March 16, 2010 - 4:12 pm UTC

umm, did you read and perhaps try what I wrote above?

I know that was wrong - your code where I pointed out, and based on the error message, it would be because of that.

You know, it stinks to see that THIS CODE is materially different from the LAST CODE. Meaning you were having me work on code that isn't even the code that is failing currently :(

sigh.

just use what I typed in above.

A reader, March 17, 2010 - 10:07 am UTC

Tom,
Now I get a different error message.

BEGIN
DBMS_REFRESH.MAKE(
name => '"OFM"."JUNK"',
list => '',
next_date => to_date('04-10-2010 05:00:00', 'MM-DD-YYYY HH24:MI:SS'),
interval => 'trunc(add_months(trunc(sysdate),1),"mon") + 9 + (3/24)',
implicit_destroy => FALSE,
lax => FALSE,
job => 0,
rollback_seg => NULL,
push_deferred_rpc => FALSE,
refresh_after_errors => FALSE,
purge_option => NULL,
parallelism => NULL,
heap_size => NULL
);
END;
Error at line 1
ORA-23319: parameter value "trunc(add_months(trunc(sysdate),1),"mon") + 9 + (3/24)" is not appropriate
ORA-06512: at "SYS.DBMS_JOB", line 57
ORA-06512: at "SYS.DBMS_JOB", line 134
ORA-06512: at "SYS.DBMS_IREFRESH", line 186
ORA-06512: at "SYS.DBMS_IREFRESH", line 369
ORA-06512: at "SYS.DBMS_REFRESH", line 88
ORA-06512: at "SYS.DBMS_REFRESH", line 62
ORA-06512: at line 2

Script Terminated on line 1.

Tom Kyte
March 17, 2010 - 10:12 am UTC

arg...


quote quote

not quote, but quote quote - two single ticks, two quotes. You know, the way you get a quote in a string literal: you either use quote quote or you use the Perl-like quoting.

ops$tkyte%ORA10GR2> select 'how''s this', q'|how's this|' from dual;

'HOW''STHI Q'|HOW'STH
---------- ----------
how's this how's this


Perl-like quoting

Duke Ganote, March 17, 2010 - 11:59 am UTC

Known in the documentation as the "alternative quoting mechanism" for "text literals":
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements003.htm#i42617

A reader, March 18, 2010 - 9:35 am UTC

Thank you Dave and Tom.

I know this is the place for super-quick SR.


Tom Kyte
March 18, 2010 - 9:46 am UTC

I know this is the place for super-quick SR.

No, it most definitely IS NOT. This isn't an "SR" issue at all really - if you would file an SR over something like this, you are doing it wrong.

It would be as correct to file an SR on this as it would be for me to log an issue with Microsoft with the subject "I'm trying to use word to write my resume but no one hires me - what is wrong with word?".

Meaning, this was an issue typically corrected by someone at work glancing over the shoulder saying "hey, you need to double up quotes to get a quote in a string" or looking it up in the documentation.

But an SR, this definitely was not an SR type of issue.

Is materialized view a proper choice?

rohit, May 11, 2010 - 7:41 am UTC

We have a huge table (around 30,000,000 records) and it is growing day by day. One of the requirements now is to show some aggregated information (counts by status, type, date, etc.) for this table on the application's home page. Moreover, these counts cannot be more than 10 minutes stale.

Is materialized view a proper choice for doing this? Is there any other way to do this more efficiently?

thanks

Materialized View refresh

A Reader, August 19, 2010 - 6:10 am UTC

Hi Tom,

I had posted a question while ago on this thread asking for suggestions on improving the complete refresh time of materialized views. Could you please respond.

Regards
Tom Kyte
August 19, 2010 - 3:00 pm UTC

you are "a reader", therefore there is nothing I can reference.

Not all questions are answerable - if I read it and chose not to answer it, that would indicate that "there is no answer given the question as stated"

Have you read the data warehousing guide (I suspect you are the one that was looking for 'best practices'). It covers all of the options (and it takes a book, not something I'd reproduce here). If you are looking for 'best practices' then the answer is (and will be) "It depends" - it depends on hundreds/thousands of things (your context, your situation). In that case, all you can do is know and understand "the options" - some of which you listed.

Can PCT (partition change tracking) make a refresh faster? The answer to that is alternatively

a) yes
b) no
c) maybe

In short - it depends. It may apply to you - it may not.

so, check out the data warehousing guide - learn what is available, test it out, get experience with it and use the knowledge of how it works and judge whether it might help you given YOUR circumstances at a point in time.

http://docs.oracle.com/docs/cd/E11882_01/server.112/e10810/toc.htm

Materialized view refresh

A Reader, August 20, 2010 - 8:18 am UTC

Thanks Tom for your feedback.

Using MV instead of a regular view

Luke, November 02, 2010 - 1:12 pm UTC

Hi Tom - I have a situation where an application needs to have access to a subset of rows from a table using a simple where clause (e.g., where table.appName='MyApp'). A co-worker is suggesting creating a materialized view which selects only these rows from the base table, and grant the application access to the MV instead of the base table. The MV and base table will exist on the same database, although in different schemas. Other tables would also be created in the same schema as the MV which will have referential constraints to the MV. The base table will have in the tens of thousands of rows, and the MV would select ~20% of these rows.

From what I've read, I'm not sure that this is a good situation for using a MV. I am inclined to suggest that we use a regular view (and perhaps add some new indexes on the base table) instead of a materialized view, since we are not doing any aggregation or complex joins. My understanding is that any other tables with FK's will not be able to have referential constraints against the view, so they will have to reference the base table directly.

What do you think about using an MV in this situation? Thanks.
Tom Kyte
November 03, 2010 - 1:16 am UTC



having referential integrity pointing to the MV is a really utterly bad idea. MV's might have to have all rows deleted/truncated and then re-inserted (a complete refresh) under some circumstances. Also, you cannot tell if we will use an update to an MV to propagate a change OR an equivalent delete+reinsert of the new row. In short - you have no control over how we refresh. It would be a huge mistake to attempt to do this in real life.

this sounds like a horrible idea, I agree with you. If you are worried about being able to retrieve those 20% of rows - list partition the table - placing 'MyApp' into its own partition and the rest of the rows into another partition.

Use a single table.

If you need to - have a normal view on this table that is used by selects.
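
A sketch of that layout, with illustrative column names standing in for the real table (only `appName` comes from the question above):

```sql
-- single table, list-partitioned so 'MyApp' rows are physically grouped
create table app_data
( app_name  varchar2(30) not null,
  item_key  varchar2(64) not null,
  item_val  varchar2(4000)
)
partition by list (app_name)
( partition p_myapp values ('MyApp'),
  partition p_other values (default)
);

-- normal view for the application's selects; FKs still point at app_data
create or replace view myapp_data as
select * from app_data where app_name = 'MyApp';
```

Queries filtering on `app_name = 'MyApp'` then prune to the single partition, giving the "subset" behavior without a second copy of the rows.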

Thanks.

Luke, November 03, 2010 - 10:12 am UTC

Thanks for your feedback Tom - very useful.

need more info on MV

sasidhar, January 07, 2011 - 4:32 am UTC

Tom,

I want to know what is the use of for update clause and query rewrite option, while creating a MV.

thank you,
Sasidhar
Tom Kyte
January 07, 2011 - 9:21 am UTC

for update is used with advanced multi-master replication. It allows updates against the materialized view and propagates those updates to the master table.

so for example, a master site might have an employee table with many departments. Site1 might replicate just deptno=10 using a materialized view, site2 gets deptno=20 and so on. So, each site has a slice of the table. If the sites used "for update" on the materialized view - then their updates would be allowed on the materialized view sites and those updates would be rolled up to the master site which contains all of the data.
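
As a sketch, the deptno=10 site's materialized view could look like this. The database link name is illustrative, and in practice the updatable MV must also be part of a materialized view group at a replication site for the updates to propagate:

```sql
-- At site1: replicate only the deptno = 10 slice, allowing local updates.
-- master_site is a hypothetical database link to the master database.
create materialized view emp_dept10
for update
as
select * from emp@master_site
where deptno = 10;
```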


query rewrite is used to allow a materialized view to behave a lot like an index (used to transparently speed up access to some data).

For example, say you have a big table - table T.

supposed you create a materialized view MV on T with query rewrite and the sql text:

select c1, count(*) cnt from t group by c1;


Now, if someone issued:

select count(*) from t

the optimizer would rewrite that as:

select sum(cnt) from mv;


Rather than full scan the really big table - it scans the smaller table adding up the counts to get count(*)


It allows us to rewrite a query against the tables the materialized view was written against - to be pointed to the materialized view instead.
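
A minimal sketch of that behavior; the table and MV names are illustrative, and it assumes query rewrite is enabled for the session (QUERY_REWRITE_ENABLED = TRUE):

```sql
-- a "big" table and a much smaller summary of it
create table t ( c1 varchar2(10), c2 number );

create materialized view mv
enable query rewrite
as
select c1, count(*) cnt from t group by c1;

-- the optimizer can transparently rewrite this against the MV,
-- effectively running: select sum(cnt) from mv
select count(*) from t;
```

Summing the per-group counts gives the same total as scanning every row of t, which is why the rewrite is valid.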

Error in explain_rewrite

Rajeshwaran, Jeyabal, June 30, 2012 - 11:50 am UTC

Tom,

Can you please help me to resolve this error? 

rajesh@ORA11GR2> create or replace function
  2  explain_rewrite( p_query in clob,
  3  p_mv in varchar2)
  4  return SYS.RewriteArrayType
  5  pipelined as
  6     pragma autonomous_transaction;
  7     l_data SYS.RewriteArrayType;
  8  begin
  9     dbms_mview.Explain_Rewrite
 10     (query =>p_query, mv=>p_mv,
 11     msg_array=>l_data);
 12     commit;
 13
 14     dbms_output.put_line (' Count = '||l_data.count);
 15     for i in 1..l_data.count
 16     loop
 17             pipe row( l_data(i) );
 18     end loop;
 19  end ;
 20  /

Function created.

Elapsed: 00:00:00.00
rajesh@ORA11GR2>
rajesh@ORA11GR2> variable l_qry varchar2(80);
rajesh@ORA11GR2> variable l_mv varchar2(10);
rajesh@ORA11GR2>
rajesh@ORA11GR2> exec :l_qry := ' select deptno,count(*) from t2 group by deptno ';

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
rajesh@ORA11GR2> exec :l_mv := 'T1_T2_MV';

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
rajesh@ORA11GR2>
rajesh@ORA11GR2> select * from table(explain_rewrite(:l_qry,:l_mv));
select * from table(explain_rewrite(:l_qry,:l_mv))
                    *
ERROR at line 1:
ORA-01405: fetched column value is NULL


Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2>

Tom Kyte
July 02, 2012 - 7:11 am UTC

how about you give an entire example? you know, something I can use to reproduce with (you've been on this site for a while, you should have known that was coming)


Entire Example

Rajeshwaran, Jeyabal, July 02, 2012 - 7:26 am UTC

Tom:

Does this help you?

rajesh@ORA11GR2>
rajesh@ORA11GR2> create table t1 as
  2  select * from dept;

Table created.

Elapsed: 00:00:00.01
rajesh@ORA11GR2> create table t2 as
  2  select * from emp;

Table created.

Elapsed: 00:00:00.00
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t1 add
  2  constraint t1_pk
  3  primary key(deptno);

Table altered.

Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t2 add
  2  constraint t2_fk
  3  foreign key(deptno) references t1;

Table altered.

Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t2 modify deptno not null;

Table altered.

Elapsed: 00:00:00.00
rajesh@ORA11GR2> begin
  2     dbms_stats.gather_table_stats
  3     ( ownname=>user, tabname=>'T1');
  4     dbms_stats.gather_table_stats
  5     ( ownname=>user, tabname=>'T2');
  6  end;
  7  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.23
rajesh@ORA11GR2> create materialized view t1_t2_mv
  2  build immediate
  3  refresh on commit
  4  enable query rewrite as
  5  select t1.deptno,count(*)
  6  from t1, t2
  7  where t1.deptno = t2.deptno
  8  group by t1.deptno;

Materialized view created.

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> create or replace function
  2  explain_rewrite( p_query in clob,
  3  p_mv in varchar2)
  4  return SYS.RewriteArrayType
  5  pipelined as
  6     pragma autonomous_transaction;
  7     l_data SYS.RewriteArrayType;
  8  begin
  9     dbms_mview.Explain_Rewrite
 10    (query =>p_query, mv=>p_mv,
 11    msg_array=>l_data);
 12    commit;
 13
 14    dbms_output.put_line (' Count = '||l_data.count);
 15    for i in 1..l_data.count
 16    loop
 17               pipe row( l_data(i) );
 18    end loop;
 19  end ;
 20  /

Function created.

Elapsed: 00:00:00.64
rajesh@ORA11GR2>
rajesh@ORA11GR2> variable l_qry varchar2(80);
rajesh@ORA11GR2> variable l_mv varchar2(10);
rajesh@ORA11GR2> exec :l_qry := ' select deptno,count(*) from t2 group by deptno ';

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.26
rajesh@ORA11GR2> exec :l_mv := 'T1_T2_MV';

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
rajesh@ORA11GR2> select * from table(explain_rewrite(:l_qry,:l_mv));
select * from table(explain_rewrite(:l_qry,:l_mv))
                    *
ERROR at line 1:
ORA-01405: fetched column value is NULL


Elapsed: 00:00:00.10
rajesh@ORA11GR2>
rajesh@ORA11GR2>

Tom Kyte
July 02, 2012 - 9:34 am UTC

please use this:

create table t1 ( deptno number primary key );
create table t2 ( deptno references t1 not null );

create materialized view t1_t2_mv
enable query rewrite as
select t1.deptno,count(*)
from t1, t2
where t1.deptno = t2.deptno
group by t1.deptno;

declare
    l_data  sys.rewriteArrayType;
begin
    dbms_mview.explain_rewrite
    ( query     => 'select t1.deptno,count(*) from t1, t2 where t1.deptno = t2.deptno group by t1.deptno',
      msg_array => l_data );
end;
/


to file an issue with support.

Error in explain_rewrite

Rajeshwaran, Jeyabal, July 20, 2012 - 4:54 am UTC

Thanks Tom. I don't have access to support. If possible can you please file that issue?
Tom Kyte
July 30, 2012 - 7:51 am UTC

Bug 14387984 - ORA-1405 FROM DBMS_MVIEW.EXPLAIN_REWRITE has been created.

PCT on Mviews

Rajeshwaran Jeyabal, August 16, 2012 - 10:51 pm UTC

Tom,

I was testing PCT on 11gR2 and found that query rewrite is not happening even for a FRESH partition. Can you help me figure out what metadata the optimizer is missing here?

rajesh@ORA11GR2> create table emp
  2  partition by list(deptno)
  3  ( partition p10 values (10),
  4    partition p20 values (20),
  5    partition p30 values (30),
  6    partition pdef values (default) )
  7  as
  8  select * from scott.emp;

Table created.

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> create table dept as
  2  select * from scott.dept;

Table created.

Elapsed: 00:00:00.00
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table dept
  2  add constraint dept_pk
  3  primary key(deptno);

Table altered.

Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table emp
  2  add constraint emp_fk
  3  foreign key(deptno)
  4  references dept;

Table altered.

Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table emp modify
  2  deptno not null;

Table altered.

Elapsed: 00:00:00.03
rajesh@ORA11GR2> begin
  2     dbms_stats.gather_table_stats(user,'EMP');
  3     dbms_stats.gather_table_stats(user,'DEPT');
  4  end;
  5  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.56
rajesh@ORA11GR2>
rajesh@ORA11GR2> create materialized view emp_dept_mv
  2  build immediate
  3  refresh on demand
  4  enable query rewrite as
  5  select d.deptno,count(*)
  6  from emp e,dept d
  7  where e.deptno = d.deptno
  8  group by d.deptno;

Materialized view created.

Elapsed: 00:00:00.04
rajesh@ORA11GR2>
rajesh@ORA11GR2> exec dbms_stats.gather_table_stats(user,'EMP_DEPT_MV');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09
rajesh@ORA11GR2>
rajesh@ORA11GR2> set autotrace traceonly explain;
rajesh@ORA11GR2> select count(*)
  2  from emp ;
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 2723375566

---------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |     1 |     3 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE               |             |     1 |     3 |            |          |
|   2 |   MAT_VIEW REWRITE ACCESS FULL| EMP_DEPT_MV |     3 |     9 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

rajesh@ORA11GR2> set autotrace off;
rajesh@ORA11GR2>
rajesh@ORA11GR2> delete from emp
  2  where deptno = 30
  3  and rownum = 1;

1 row deleted.

Elapsed: 00:00:00.00
rajesh@ORA11GR2> commit;

Commit complete.

Elapsed: 00:00:00.00
rajesh@ORA11GR2> set autotrace traceonly explain;
rajesh@ORA11GR2> select count(*)
  2  from emp
  3  where deptno = 30;
Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
Plan hash value: 2365616264

-----------------------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |     1 |     3 |     3   (0)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE        |      |     1 |     3 |            |          |       |       |
|   2 |   PARTITION LIST SINGLE|      |     6 |    18 |     3   (0)| 00:00:01 |   KEY |   KEY |
|   3 |    TABLE ACCESS FULL   | EMP  |     6 |    18 |     3   (0)| 00:00:01 |     3 |     3 |
-----------------------------------------------------------------------------------------------

rajesh@ORA11GR2>
rajesh@ORA11GR2> select count(*)
  2  from emp
  3  where deptno = 20;
Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 2365616264

-----------------------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |     1 |     3 |     3   (0)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE        |      |     1 |     3 |            |          |       |       |
|   2 |   PARTITION LIST SINGLE|      |     5 |    15 |     3   (0)| 00:00:01 |   KEY |   KEY |
|   3 |    TABLE ACCESS FULL   | EMP  |     5 |    15 |     3   (0)| 00:00:01 |     2 |     2 |
-----------------------------------------------------------------------------------------------

rajesh@ORA11GR2>
rajesh@ORA11GR2> set autotrace off;
rajesh@ORA11GR2>

PCT on Mviews

Rajeshwaran Jeyabal, August 16, 2012 - 11:18 pm UTC

Tom,

When I changed d.deptno to e.deptno in the MV, query rewrite happened for deptno=20.

When we say d.deptno in the MV, the dept table is a join dependent table. Even with the PK/FK in place, the optimizer does not perform query rewrite for this query: "select sum(sal) from emp where deptno = 20".

Docs say this http://docs.oracle.com/cd/E11882_01/server.112/e25554/advmv.htm#i1006635
<quote>
To support PCT, a materialized view must satisfy the following requirements:

If you use a GROUP BY clause, the partition key column or the partition marker or ROWID or join dependent expression must be present in the GROUP BY clause.
</quote>

In the above MV creation, dept is the join dependent table and d.deptno is the join dependent expression, and I have mentioned it in the GROUP BY clause.

Do you see any reason why query rewrite did not happen (where deptno=20) when d.deptno is used in the MV?

rajesh@ORA11GR2> create materialized view emp_dept_mv
  2  build immediate
  3  refresh on demand
  4  enable query rewrite as
  5  select e.deptno,sum(sal),count(sal)
  6  from emp e,dept d
  7  where e.deptno = d.deptno
  8  group by e.deptno;

Materialized view created.

Elapsed: 00:00:00.10
rajesh@ORA11GR2>
rajesh@ORA11GR2> delete from emp
  2  where deptno = 30
  3  and rownum = 1;

1 row deleted.

Elapsed: 00:00:00.00
rajesh@ORA11GR2> commit;

Commit complete.

Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> set autotrace traceonly explain;
rajesh@ORA11GR2> select sum(sal)
  2  from emp
  3  where deptno = 30;
Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 2365616264

-----------------------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |     1 |     7 |     3   (0)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE        |      |     1 |     7 |            |          |       |       |
|   2 |   PARTITION LIST SINGLE|      |     6 |    42 |     3   (0)| 00:00:01 |   KEY |   KEY |
|   3 |    TABLE ACCESS FULL   | EMP  |     6 |    42 |     3   (0)| 00:00:01 |     3 |     3 |
-----------------------------------------------------------------------------------------------

rajesh@ORA11GR2> select sum(sal)
  2  from emp
  3  where deptno = 20;
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 2723375566

---------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |     1 |     7 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE               |             |     1 |     7 |            |          |
|*  2 |   MAT_VIEW REWRITE ACCESS FULL| EMP_DEPT_MV |     1 |     7 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("EMP_DEPT_MV"."DEPTNO"=20)

rajesh@ORA11GR2>


Materialized View in Invalid Status after creation

Sagar, March 25, 2013 - 11:00 am UTC

Hi Tom,

I am creating Materialized view using the below syntax.

declare
t1_name varchar2(20) := 'abc';
begin
    begin
        execute immediate 'create table t1(id number, name varchar2(20))';
        execute immediate 'create table t2(id number, name varchar2(20))';
        execute immediate 'insert into t1 values (:1, :2)' using 1, t1_name;
        commit;
    end;

    begin
        Execute Immediate 'CREATE MATERIALIZED VIEW TEST_MVIEW
                           NOCACHE
                           PARALLEL 1
                           BUILD IMMEDIATE
                           USING INDEX
                           PCTFREE 0
                           REFRESH ON DEMAND FORCE
                           USING ENFORCED CONSTRAINTS
                           DISABLE QUERY REWRITE
                           AS (select T1.id,
                                      T1.name
                               from T1,T2
                               where t1.id=t2.id;
                           )';
    
    end;
end;
/


The tables used here are for demonstration purposes; all of the actual tables used to create the materialized view are in VALID status.

The script executes without any issues, but the view is in an INVALID state after creation. See below:

SQL> select STALENESS,LAST_REFRESH_TYPE,STALE_SINCE,COMPILE_STATE from  dba_mviews where mview_name='TEST_MVIEW';

STALENESS           LAST_REF STALE_SIN COMPILE_STATE
------------------- -------- --------- -------------------
NEEDS_COMPILE       COMPLETE           NEEDS_COMPILE


This happens when the actual tables are used in the materialized view.

When I recompile the materialized view, its status becomes VALID.

Could you guide me on what I am doing wrong here, or point out some logic I am missing in relation to materialized views?

Many Thanks
Tom Kyte
March 27, 2013 - 2:36 pm UTC

you'll have to come up with a working example, yours does not compile and when I get it to compile by fixing it, I see:

  2  t1_name varchar2(20) := 'abc';
  3  begin
  4      begin
  5          execute immediate 'create table t1(id number, name varchar2(20))';
  6          execute immediate 'create table t2(id number, name varchar2(20))';
  7          execute immediate 'insert into t1 values (:1, :2)' using 1, t1_name;
  8          commit;
  9      end;
 10  
 11      begin
 12          Execute Immediate 'CREATE MATERIALIZED VIEW TEST_MVIEW
 13                             NOCACHE
 14                             PARALLEL 1
 15                             BUILD IMMEDIATE
 16                             USING INDEX
 17                             PCTFREE 0
 18                             REFRESH ON DEMAND FORCE
 19                             USING ENFORCED CONSTRAINTS
 20                             DISABLE QUERY REWRITE
 21                             AS (select T1.id,
 22                                        T1.name
 23                                 from T1,T2
 24                                 where t1.id=t2.id
 25                             )';
 26  
 27      end;
 28  end;
 29  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select STALENESS,LAST_REFRESH_TYPE,STALE_SINCE,COMPILE_STATE from  dba_mviews where mview_name='TEST_MVIEW';

STALENESS           LAST_REF STALE_SIN COMPILE_STATE
------------------- -------- --------- -------------------
UNKNOWN             COMPLETE           VALID




my guess is that in real life, you have an INSERT into one of the base tables AFTER the create materialized view.

Materialized View in Invalid Status after creation

Sagar, March 27, 2013 - 4:20 pm UTC

Tom,

To give you a little background on this materialized view script: it is one of many scripts called by an installer (via JDBC) during schema creation. Currently, this script is placed in between lots of other scripts that perform create/insert/update activities.

As a result, when the installer finishes creating the schema, the materialized view is left in INVALID status.

I have tried putting this script right at the end of the list, but it still shows INVALID status once the entire schema is created.

It's a good thing you pointed that out: I can see a script being called after this one that updates a particular table, which is in fact one of the base tables used to create the materialized view. Could this be the reason?
If so, I need to think about creating the materialized view after that update is done.

Many Thanks.
Tom Kyte
March 27, 2013 - 6:34 pm UTC

it isn't invalid - it was stale. you updated the data AFTER you created the materialized view, and we are telling you the materialized view is stale. this is the way it works.
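Tom's point can be illustrated directly: DML on a base table after MV creation leaves the MV stale (NEEDS_COMPILE), and a refresh brings it current again. A minimal sketch, assuming the TEST_MVIEW name from the example above (method => 'C' requests a complete refresh):

```sql
-- Sketch: after DML on a base table, the MV reports NEEDS_COMPILE/stale.
-- A complete refresh brings it current again.
begin
    dbms_mview.refresh( list => 'TEST_MVIEW', method => 'C' );
end;
/

-- Re-check the state; STALENESS/COMPILE_STATE should no longer
-- show NEEDS_COMPILE after the refresh.
select staleness, compile_state
  from user_mviews
 where mview_name = 'TEST_MVIEW';
```

Alternatively, `alter materialized view test_mview compile;` clears the NEEDS_COMPILE state without refreshing the data.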


Materialized View in Invalid Status

Sagar, March 27, 2013 - 5:14 pm UTC

Tom,

I made the materialized view creation script run after the last update, and it worked!

The materialized view is in VALID status after execution of all the scripts.

Many Thanks for your advice.

Not an Oracle bug after all

Russ Whiteman, April 30, 2013 - 10:19 pm UTC

Hi Tom,
I just spent most of a day struggling against the same issue that Rajeshwaran, Jeyabal reported last July. You had wound up with a short test case:

...
declare
    l_data  sys.rewriteArrayType;
begin
    dbms_mview.explain_rewrite
    ( query     => 'select t1.deptno,count(*) from t1, t2 where t1.deptno = t2.deptno group by t1.deptno',
      msg_array => l_data );
end;
/

Which generates:
ORA-01405: fetched column value is NULL

I suppose I got a bit obsessive about it, even after seeing that you had opened a support ticket on the issue, because I kept looking for workarounds or alternatives to get the information I needed.

It turns out that the bug belongs to all three of us: we forgot to initialize the sys.rewriteArrayType collection. Change the declaration to:
l_data  sys.rewriteArrayType := sys.rewriteArrayType();

and the exception goes away and you get a nice neat result.
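Putting Russ's fix together with the original call, a complete corrected block would look like the following sketch. The loop over the results is an illustrative addition, assuming the MESSAGE attribute of sys.RewriteMessage:

```sql
set serveroutput on

declare
    -- the fix: initialize the collection before passing it in,
    -- otherwise ORA-01405 is raised
    l_data  sys.rewriteArrayType := sys.rewriteArrayType();
begin
    dbms_mview.explain_rewrite
    ( query     => 'select t1.deptno, count(*) from t1, t2 ' ||
                   'where t1.deptno = t2.deptno group by t1.deptno',
      msg_array => l_data );

    -- print each rewrite message returned by explain_rewrite
    for i in 1 .. l_data.count
    loop
        dbms_output.put_line( l_data(i).message );
    end loop;
end;
/
```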

I only mention this in case someone else stumbles over the same problem and winds up here via a search engine. Personally, I feel quite privileged to find myself in such distinguished company in making this mistake. I've been a reader of your site for many years now, and it's the first time I think I've been able to add even a slightly new view to a discussion.

Many thanks for all the help you've given us!
Tom Kyte
May 06, 2013 - 1:25 pm UTC

thanks - I knew that at one point (I have it in my own routines that use explain_rewrite!); I overlooked it in his example.

still, a bug needs to be filed - there is no documentation regarding the fact that the array must not be null!

thanks again!

How to refresh a materialized view in a locked session

Khushboo, September 16, 2013 - 3:41 am UTC

Hi Tom,
Can you tell me if there is any way to refresh an MV in a locked session? I have used 'ALTER SESSION SET isolation_level = SERIALIZABLE', due to which the MVs are not refreshing. I am trying to lock the session because while the MVs are refreshing, rows of the table (used in the MV) are getting updated, which shows a discrepancy in the report. Can you please suggest a solution?
Tom Kyte
September 23, 2013 - 6:15 pm UTC

what is "locking a session"??