ORA-12015 while creating materialized view using aggregates and join on two tables

Question and Answer

Tom Kyte

Thanks for the question, Rateesh.

Asked: February 29, 2004 - 2:12 pm UTC

Last updated: March 01, 2004 - 10:53 am UTC

Version: 8.1.7

Viewed 1000+ times

You Asked

I am trying to create a materialized view to speed up the reports. I want it to be fast refreshable and query rewrite enabled. It includes a join on two tables. Following is the CREATE statement I executed:

CREATE MATERIALIZED VIEW PGSALES.MV_SALES
TABLESPACE SALES
NOLOGGING
CACHE
NOPARALLEL
BUILD IMMEDIATE
REFRESH FAST
ON DEMAND
WITH ROWID
USING DEFAULT LOCAL ROLLBACK SEGMENT
ENABLE QUERY REWRITE AS
SELECT H.LOCATIONCODE, H.INVOICEDATE, D.ITEMNO,
       SUM(D.QUANTITY) QTY,
       SUM(D.DISCOUNTAMOUNT) DISC,
       SUM(D.QUANTITY*D.RATE) GROSSAMT
FROM POSINVHDR H, POSINVDTL D
WHERE H.POSINVOICENO = D.POSINVOICENO
GROUP BY H.LOCATIONCODE, H.INVOICEDATE, D.ITEMNO;

I am getting the following error:

ORA-12015: cannot create a fast refresh snapshot from a complex query

But I want it to be fast refreshable. This happens only when I take the product of quantity and rate (which I require). If I use SUM(QUANTITY) alone, the snapshot is created and I am able to do a fast refresh. Why is it not possible to use the product? I tried adding a column to the detail table, i.e. quantity*rate, and then using it in the snapshot; although this works, I cannot modify the base tables. Is there any other workaround for this?


and Tom said...

well, before you get really excited -- you did understand that the MV would only have been "refreshed fast" if all you did was a DIRECT PATH LOAD, right? In 8i, it was documented that MVs with joins and aggregates would be fast refreshed IF and ONLY IF you loaded with the direct path loader (MV logs with incremental changes could not be used):


http://docs.oracle.com/cd/A87860_01/doc/server.817/a76994/mv.htm#42151

so, I doubt it would have been fast refreshed in reality regardless (you probably are not loading with the direct path loader, right?)
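
As an aside -- not available in your 8.1.7, but from 9i on, DBMS_MVIEW.EXPLAIN_MVIEW will tell you exactly which refresh capabilities an MV has and why it lacks the rest. A sketch, assuming the MV name from above (MV_CAPABILITIES_TABLE is created once by the utlxmv.sql script shipped with the database):

-- create MV_CAPABILITIES_TABLE in your schema first
@?/rdbms/admin/utlxmv.sql

begin
   dbms_mview.explain_mview( 'PGSALES.MV_SALES' );
end;
/

-- POSSIBLE = 'N' rows carry a MSGTXT explaining the restriction
select capability_name, possible, msgtxt
  from mv_capabilities_table
 where capability_name like 'REFRESH_FAST%';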

It would have been FULLY refreshed regardless. We could roll up the details by themselves to the level of POSINVOICENO -- meaning the report would just have to join and aggregate a little, not a lot.
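
For example -- a sketch of that rollup, fast refreshable even in 8i because it aggregates a single table (MV_SALES_DTL is a made-up name and the column lists are assumptions taken from your query; the 8i single-table aggregate rules want the MV log to carry ROWID, every referenced column and the new values, and the MV to carry COUNT(*) plus a COUNT() alongside each SUM()):

CREATE MATERIALIZED VIEW LOG ON PGSALES.POSINVDTL
  WITH ROWID (POSINVOICENO, ITEMNO, QUANTITY, DISCOUNTAMOUNT, RATE)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW PGSALES.MV_SALES_DTL
  BUILD IMMEDIATE
  REFRESH FAST ON DEMAND
  ENABLE QUERY REWRITE AS
  SELECT D.POSINVOICENO,
         D.ITEMNO,
         SUM(D.QUANTITY)            QTY,
         SUM(D.DISCOUNTAMOUNT)      DISC,
         SUM(D.QUANTITY * D.RATE)   GROSSAMT,
         COUNT(*)                   CNT,
         COUNT(D.QUANTITY)          CNT_QTY,
         COUNT(D.DISCOUNTAMOUNT)    CNT_DISC,
         COUNT(D.QUANTITY * D.RATE) CNT_GROSS
  FROM   PGSALES.POSINVDTL D
  GROUP  BY D.POSINVOICENO, D.ITEMNO;

The report would then join POSINVHDR to this (much smaller) MV instead of to the full detail table.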

Or, you could just go with a complete refresh -- run the nasty query once every now and again and benefit from its existence for all of the other queries...

Or, in 9i, this restriction is removed....
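
In 9i, essentially the same statement can be made fast refreshable under conventional DML once MV logs exist on both tables. A sketch, untested, assuming your schema -- the SEQUENCE clause in the logs and the COUNT() columns in the MV are what the 9i fast refresh machinery wants for MVs with joins and aggregates under mixed DML:

CREATE MATERIALIZED VIEW LOG ON PGSALES.POSINVHDR
  WITH ROWID, SEQUENCE (POSINVOICENO, LOCATIONCODE, INVOICEDATE)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON PGSALES.POSINVDTL
  WITH ROWID, SEQUENCE (POSINVOICENO, ITEMNO, QUANTITY, DISCOUNTAMOUNT, RATE)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW PGSALES.MV_SALES
  BUILD IMMEDIATE
  REFRESH FAST ON DEMAND
  ENABLE QUERY REWRITE AS
  SELECT H.LOCATIONCODE, H.INVOICEDATE, D.ITEMNO,
         SUM(D.QUANTITY)            QTY,
         SUM(D.DISCOUNTAMOUNT)      DISC,
         SUM(D.QUANTITY * D.RATE)   GROSSAMT,
         COUNT(*)                   CNT,
         COUNT(D.QUANTITY)          CNT_QTY,
         COUNT(D.DISCOUNTAMOUNT)    CNT_DISC,
         COUNT(D.QUANTITY * D.RATE) CNT_GROSS
  FROM   PGSALES.POSINVHDR H, PGSALES.POSINVDTL D
  WHERE  H.POSINVOICENO = D.POSINVOICENO
  GROUP  BY H.LOCATIONCODE, H.INVOICEDATE, D.ITEMNO;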



Comments

Thanks... Something that I did not know

Rateesh Mohan, March 01, 2004 - 10:42 am UTC

Thanks for taking the time to answer the question.

I never knew that fast refresh would not work with conventional DML. I got to know more about materialized views after going through the documentation.

I cannot use either of the suggestions, since rolling up to the level of POSINVOICENO does not help 'much' in reducing the time taken for the reports, because in the reports it is further joined to more than 7 tables. Doing a full refresh every half an hour is also not possible, as the refresh itself takes more than half an hour.

Since I cannot find any other workaround to fulfill my requirements, what I am planning to do is to create a trigger on the detail table, on the fields that I require, and build a summary table which contains the joined and aggregated values, then make the reports point to this summary table. What are your thoughts on this?


Thanks once again


Tom Kyte
March 01, 2004 - 10:53 am UTC

it'll have an impact on the running application -- both a serialization issue (many rows in the detail point to the same aggregate row, so two users that did not "hit each other" in the past will start blocking and locking each other) and the overhead of the trigger itself, which will impact the overall performance as well. you'll need to watch/measure that impact.
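
To make the serialization point concrete, here is a sketch of the kind of trigger being proposed -- all names are hypothetical, it handles INSERT only for brevity (UPDATE and DELETE on the detail would need the same treatment), and note that two sessions can still race into the "row does not exist yet" branch and collide on the primary key:

CREATE TABLE PGSALES.SALES_SUMMARY (
  LOCATIONCODE  VARCHAR2(10),
  INVOICEDATE   DATE,
  ITEMNO        VARCHAR2(20),
  QTY           NUMBER,
  DISC          NUMBER,
  GROSSAMT      NUMBER,
  CONSTRAINT SALES_SUMMARY_PK
    PRIMARY KEY (LOCATIONCODE, INVOICEDATE, ITEMNO)
);

CREATE OR REPLACE TRIGGER PGSALES.TRG_POSINVDTL_SUM
AFTER INSERT ON PGSALES.POSINVDTL
FOR EACH ROW
DECLARE
  l_loc   PGSALES.POSINVHDR.LOCATIONCODE%TYPE;
  l_date  PGSALES.POSINVHDR.INVOICEDATE%TYPE;
BEGIN
  -- pick up the header attributes for this detail row
  SELECT LOCATIONCODE, INVOICEDATE
    INTO l_loc, l_date
    FROM PGSALES.POSINVHDR
   WHERE POSINVOICENO = :NEW.POSINVOICENO;

  -- this UPDATE is the serialization point: every detail row for the
  -- same location/date/item locks the same summary row until commit
  UPDATE PGSALES.SALES_SUMMARY
     SET QTY      = QTY + :NEW.QUANTITY,
         DISC     = DISC + :NEW.DISCOUNTAMOUNT,
         GROSSAMT = GROSSAMT + :NEW.QUANTITY * :NEW.RATE
   WHERE LOCATIONCODE = l_loc
     AND INVOICEDATE  = l_date
     AND ITEMNO       = :NEW.ITEMNO;

  IF SQL%ROWCOUNT = 0 THEN
    INSERT INTO PGSALES.SALES_SUMMARY
      (LOCATIONCODE, INVOICEDATE, ITEMNO, QTY, DISC, GROSSAMT)
    VALUES
      (l_loc, l_date, :NEW.ITEMNO,
       :NEW.QUANTITY, :NEW.DISCOUNTAMOUNT, :NEW.QUANTITY * :NEW.RATE);
  END IF;
END;
/

the UPDATE of SALES_SUMMARY is where two sessions inserting details for the same location/date/item will queue up behind each other.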

be careful if H.LOCATIONCODE and H.INVOICEDATE are updateable -- if so, the trigger on the details will have to LOCK that parent record before reading it, and you'll need to cascade those updates down from the parent to the aggregate table.

think through all of the concurrency issues (and remember multi-versioning and read consistency)