Question and Answer

Chris Saxon

Thanks for the question, Hrvoje.

Asked: July 19, 2017 - 12:47 pm UTC

Last updated: August 03, 2017 - 10:31 am UTC

Version: Oracle DB 12.1.0.2

Viewed 1000+ times

You Asked

Hi Tom!

As of Oracle 12c there is FAST SPLIT PARTITION.
Is there something similar for the MERGE partition maintenance operation (PMO)?

The exact problem is this.
Currently there is no way (at least that I'm aware of) to increase the HIGH VALUE of a RANGE partition.
Excluding the proposed solutions using EXCHANGE PARTITION, which are ruled out because the data has to stay ONLINE the whole time, the only way is to SPLIT the upper partition off into some INTERMEDIATE partition and merge that into the LOWER partition.
In our real scenario there are only two partitions:
a) an upper (MAXVALUE) small datetime-range partition for the latest changes, and
b) a lower partition with some datetime as its high value.
As we need to increase that high value from time to time to keep the upper partition small,
we SPLIT the UPPER partition INTO HIGHER + INTERMEDIATE partitions
and then MERGE this INTERMEDIATE partition into the LOWER partition.

But because it is not allowed to MERGE LOWER + INTERMEDIATE into the LOWER partition, i.e.

ALTER TABLE tab MERGE PARTITIONS lower_part, inter_part INTO PARTITION lower_part

raises ORA-14275: cannot reuse lower-bound partition as resulting partition,

currently the only solution is to MERGE LOWER + INTERMEDIATE into the INTERMEDIATE partition
and then rename the INTERMEDIATE partition to LOWER.
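
A minimal sketch of that split + merge + rename cycle (the table name, partition names and boundary date are hypothetical):

-- Starting layout: lower_part (some datetime high value), upper_part (MAXVALUE).

-- 1) Carve the rows to be absorbed out of the upper partition.
alter table tab split partition upper_part
  at (date '2017-07-01')
  into (partition inter_part, partition upper_part);

-- 2) Merging into lower_part directly raises ORA-14275,
--    so merge into the intermediate partition instead...
alter table tab merge partitions lower_part, inter_part
  into partition inter_part;

-- 3) ...and rename it, restoring the original layout with a higher bound.
alter table tab rename partition inter_part to lower_part;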

But that is very slow! Internally it obviously creates a new partition, inserts into it all the records from the very big LOWER partition plus the INTERMEDIATE partition (a very small one - at most a few tens of thousands of records), and then rebuilds all local indexes for that partition.

Is there some way that would simply add these intermediate records to the big lower partition
and then raise its high value,
keeping in mind that everything has to stay ONLINE, at least for querying?

and Chris said...

No, I'm not aware of a way to move the upper bound of a partition, other than the split+merge method you're using.

But I have to ask:

What tangible benefit are you getting by merging the old partitions?

Seems like a lot of effort for minimal gain to me. Can you give concrete gains you're getting by doing this?


Comments

Hrvoje Devcic, July 20, 2017 - 2:23 pm UTC

Hello!

Thank you for your answer!

This is needed because the table is huge, and new data is constantly inserted/updated.
What changed is tracked by an entry date field.
All of the data must be available for querying, but only the new data is used when transforming it from another system into our internal structure.
Because of that we use this smaller (NEW) partition.
The other problem is that this table is in reality an MVIEW which is FAST refreshed periodically - every 15 minutes, hourly or daily, depending on the installation.
So performance was far from satisfactory when extracting only the records with the newest entry dates, even with an index on that field.
On the other hand, all the other queries use not this entry date field but the primary key.
And when we tried to uniformly range partition this table with a local PK index, the performance of those other queries worsened a lot.
So, the best solution was to partition this table into these two partitions and hash-subpartition each into 16, 32 or 128 subpartitions depending on the quantity of data, and that resolved all the issues.
Only this weekly/monthly maintenance lasts about two or three hours.

This is acceptable, of course.
I just needed to be sure whether there is some better way to handle this, as the system is complex and any waste of resources is unacceptable.
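
For reference, the two-partition, hash-subpartitioned layout described above looks roughly like this (all names and the boundary date are hypothetical):

create table big_mv_tab (
  id         number primary key,
  entry_date date not null,
  payload    varchar2(100)
)
partition by range (entry_date)
subpartition by hash (id) subpartitions 32 (
  partition lower_part values less than (date '2017-07-01'),
  partition upper_part values less than (maxvalue)
);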

Chris Saxon
July 20, 2017 - 4:21 pm UTC

If partitioning is causing you problems, some thoughts on alternatives:

- Can you have separate MVs for old and new data? And a view over the top to recreate the whole?
- Instead of range partitioning, can you list partition on a column with OLD/NEW values? This would remove your partition maintenance work
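
A minimal sketch of the second idea, assuming an updatable OLD/NEW status column can be maintained (all names are hypothetical):

create table tab (
  id         number primary key,
  entry_date date not null,
  status     varchar2(3) not null check (status in ('OLD', 'NEW'))
)
partition by list (status) (
  partition p_old values ('OLD'),
  partition p_new values ('NEW')
);

-- Rows move between partitions via plain updates; no split/merge needed.
alter table tab enable row movement;

update tab set status = 'OLD'
where  status = 'NEW' and entry_date < sysdate - 7;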

Hrvoje Devcic, July 20, 2017 - 5:56 pm UTC

Hello!

Thank you!

List partitioning is not an option, as it would have to be based on a datetime column which is not truncated. Also, we can't use virtual columns, as they are not supported for fast refreshable mviews.

The other proposal - two mviews, one with the condition entry_date < <fixed_date> and the other with entry_date >= <fixed_date> - is not a bad idea, except for the restriction that we can't drop an mview, because that would purge its mview log, and that we can't afford.
Please see my question "PMOs on MVIEW".
And dropping would be necessary if we wanted to change <fixed_date>, of course.

All my questions about this arise because Oracle automatically purges the mview log when CREATE MATERIALIZED VIEW or DROP MATERIALIZED VIEW is executed - even for mviews based on a prebuilt table.
If there were a way to prevent that, my life would be a lot easier :-)

Should I pose a new question about this, or can you answer in this thread?
Is there a way to prevent purging of the mview log on CREATE/DROP MATERIALIZED VIEW?


Chris Saxon
July 21, 2017 - 10:39 am UTC

Why are you worried about mview log purging on create/drop MV? Creating your MVs is a one-off thing, right?

Don't hard-code the fixed date in your MV query. Have another table which holds the cut-off date and join to it in your MVs. Then update this value to "move" the rows from the new to the old.

And as long as you have one MV present, adding/removing others preserves the log:

create table t (x primary key, y not null) as
  select rownum x, 'xxxx' y from dual connect by level <= 1000;

create table cut_off (
  threshold int primary key
);

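-- Fast refresh of a join MV needs a log, with rowids, on every master table.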
create materialized view log on t with rowid, primary key;
create materialized view log on cut_off with rowid, primary key;

insert into cut_off values (500);

select count(*) from mlog$_cut_off;

  COUNT(*)
----------
         1

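-- The MVs select each table's rowid: required for join MVs to fast refresh.
-- Note: creating the first MV on cut_off purges its log (count 1 above -> 0 below).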
create materialized view old_mv 
refresh fast on demand 
as
  select t.rowid trid, c.rowid crid, t.* from t
  join   cut_off c
  on     t.x <= c.threshold;

select count(*) from mlog$_cut_off;

  COUNT(*)
----------
         0

update cut_off set threshold = 800;
select count(*) from mlog$_cut_off;

  COUNT(*)
----------
         2
  
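-- With old_mv still present, creating another MV preserves the log (count stays 2).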
create materialized view new_mv 
refresh fast on demand 
as
  select t.rowid trid, c.rowid crid, t.* from t
  join   cut_off c
  on     t.x > c.threshold;

select count(*) from mlog$_cut_off;

  COUNT(*)
----------
         2

exec dbms_mview.refresh ('old_mv', 'F');

select count(*) from old_mv;

  COUNT(*)
----------
       800

select count(*) from new_mv;

  COUNT(*)
----------
       200

update cut_off set threshold = 900;
commit;
exec dbms_mview.refresh ('old_mv', 'F');
exec dbms_mview.refresh ('new_mv', 'F');

select count(*) from old_mv;

  COUNT(*)
----------
       900

select count(*) from new_mv;

  COUNT(*)
----------
       100

update cut_off set threshold = 950;
commit;

select count(*) from mlog$_cut_off;

  COUNT(*)
----------
         2

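-- Dropping new_mv keeps the log: old_mv still depends on it.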
drop materialized view new_mv;
select count(*) from mlog$_cut_off;

  COUNT(*)
----------
         2

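-- But dropping the last MV on cut_off purges its log.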
drop materialized view old_mv;
select count(*) from mlog$_cut_off;

  COUNT(*)
----------
         0

Hrvoje Devcic, July 21, 2017 - 11:58 am UTC

Hello!

Because every now and then the mviews get out of sync with FAST REFRESHing,
and a COMPLETE REFRESH does not work:
either it fails with SNAPSHOT TOO OLD (problems with UNDO), or, even with ATOMIC_REFRESH => false, it lasts for days. To repeat: days!
The master table is huge.
Don't ask me why - slow network, some strange locks, who knows.
We can't influence much. It is not our system.

So, the only solution is
a) DROP MATERIALIZED VIEW ...
b) EXPORT from REMOTE table
c) IMPORT into PREBUILT TABLE
d) CREATE MATERIALIZED VIEW ... ON PREBUILT TABLE ... REFRESH FAST ON DEMAND
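
Step d) looks roughly like this (all names are hypothetical; the mview takes the name of the prebuilt table):

create materialized view big_tab
  on prebuilt table
  refresh fast on demand
as
  select * from big_tab@remote_db;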

But the remote master table is online the whole time. It can't be stopped.
So, there are always problems with the DELTA records between the EXPORT time and the CREATE MATERIALIZED VIEW time.
The materialized view log is constantly being filled, but CREATE/DROP MVIEW purges it (!?), deleting all those changes!

The materialized view log is using SEQUENCE.
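For reference, such a log is created along these lines (the master table name is hypothetical):

create materialized view log on master_tab
  with sequence, rowid, primary key
  including new values;
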
So, please, Oracle guys, invent this:
a) stop purging the mview log on DROP MATERIALIZED VIEW, at least for the case when the prebuilt table is left behind, and
b) add a clause to the CREATE MATERIALIZED VIEW command, something like
CREATE MATERIALIZED VIEW ... ON PREBUILT TABLE ...
KEEP LOG FOR SEQUENCE <some_sequence_number>
(for an mview log with SEQUENCE)
or
KEEP LOG FOR SNAPTIME <some_snap_datetime>
again at least for ON PREBUILT TABLE.
Or, if that is too complex, just a KEEP LOG clause which would not purge the mview log at all, regardless of SEQUENCE$$ or SNAPTIME$$, and which would leave SNAPTIME$$ = 01.01.4000 in the mview log, and all the other tables such as SYS.SLOG$ and SYS.MLOG$ unchanged for that mview.
Then we could fetch the DELTA records with a FAST REFRESH.

The current solution we had to implement is a mess.


Chris Saxon
July 21, 2017 - 3:53 pm UTC

"Don't ask me way - slow network, some strange locks, who knows."

Find out! Trace the materialized view creation to see why it's taking so long.

For instructions on how to do this, read:

https://blogs.oracle.com/sql/how-to-create-an-execution-plan#tkprof

If you need help understanding the results, post the formatted TKPROF output and we'll see how we can help.

"So, there are always problems with DELTA records from the EXPORT time and CREATE MATERIALIZED VIEW time."

When you create the MV, it has all the changes up to that point. So why is it a problem that this wipes the log?

Hrvoje Devcic, July 21, 2017 - 12:17 pm UTC

As for the proposal of a new single-record table with the <fixed_date> column, which would also have to have an mview log to support FAST REFRESHING:
I think it would be a worse solution, because in that case, if <fixed_date> is changed,
Oracle will practically execute a COMPLETE REFRESH through the FAST REFRESH (and with even lower performance, as it would then internally try to join on the <entry_date> field) from the REMOTE table -
exactly what we had to avoid in the first place, as it does not work
(SNAPSHOT TOO OLD, lasts for days, ...).
Also, is it even possible to have a FAST REFRESH mview over JOIN-ed tables (with mview logs) where one is local and the other is on the remote site?
I'm not sure it is.

But no - that does not work for us.

Chris Saxon
July 21, 2017 - 3:55 pm UTC

Well, you can have fast refresh over a DB link:

https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9531475800346781983

Have you tested with a join?

Hrvoje Devcic, July 21, 2017 - 4:06 pm UTC

I've already explained:
"We can't influence much. It is not our system."
I.e. we can't trace those systems.
The last resort is that we can ask for an export of the remote tables.
And that's it.

"When you create the MV, it has all the changes up to that point. So why is it a problem that this wipes the log?"
I'll explain again.
1.) DROP MVIEW at time T0
=> PURGES the MVIEW LOG - not so bad
2.) EXPORT from the remote DB table STARTS at time T1
3.) EXPORT from the remote DB table ENDS at time T2 = T1 + DT1
4.) IMPORT into our mview prebuilt table FINISHES at time T3 = T2 + DT2, plus COMMIT
5.) CREATE MVIEW ON PREBUILT TABLE at time T4 = T3 + DT3
=> PURGES the MVIEW LOG - very, very bad!!!

The remote table can't be stopped. It is DML-ed all the time, and the mview log is being filled all the time.
So we have data from the export as of time T1.
And when we execute CREATE MVIEW at time T4,
we are missing the record changes from the MVIEW LOG from T1 till T4,
because CREATE MVIEW purges the MVIEW LOG!
Aaaaargh!

Chris Saxon
July 24, 2017 - 10:19 am UTC

You can still trace your system. Although this may not give the complete picture, it'll still help.

Have you tried creating a "dummy" MV before you start this process? In the example above, create/drop MV leaves the mv log intact if there's another MV on the table in question...

Hrvoje Devcic, July 21, 2017 - 4:14 pm UTC

Of course you can have FAST REFRESH over a db link.
Otherwise this whole story of mine would be pointless :-)

I just wasn't sure whether it is possible
to have a JOINED FAST REFRESH-able mview
with one table LOCAL and the other on the REMOTE site,
both with MVIEW LOGS of course.
There are many restrictions on FAST REFRESHable mviews.

But anyway, as I've explained, it can't be a solution for us, as the performance of such an implementation would be very bad.

Hrvoje Devcic, July 24, 2017 - 12:02 pm UTC

Hello!

Of course I've tried that.
I've tried everything :-)

But it does not work!
If you have two mviews on the same master table,
then the records in the mview log do persist, of course, when an mview is DROPPED/CREATED.
But after the CREATE MVIEW command for the main (non-dummy) mview, SNAPTIME$$ in the mview log is set to SYSDATE, and those records are not taken into account for FAST REFRESHING of that mview!

The one and only way to handle this without using a dummy mview is this:
a) DROP MVIEW
b) EXPORT of the master
c) on the remote site: ALTER MATERIALIZED VIEW LOG ... PURGE NEXT SYSDATE + 3;
d) IMPORT into the mview prebuilt table
e) CREATE MVIEW on the prebuilt table
=> this sets SNAPTIME$$ in the mview log to SYSDATE
f) MONKEYING AROUND:
UPDATE <mview_log> SET SNAPTIME$$ = TO_DATE('01014000','ddmmyyyy');
COMMIT;
=> but it still does not work! :-)
One additional record has to be DML-ed in the MASTER table, so that some column in SYS.MLOG$ and/or SYS.SLOG$ gets updated past the SYSDATE of the CREATE MVIEW. In our case that is fine, because the master is DML-ed all the time.
g) EXEC DBMS_MVIEW.REFRESH('<mview_name>', 'F');
=> now it works
h) on the remote site: ALTER MATERIALIZED VIEW LOG ON <master_table> PURGE IMMEDIATE;

And you have to admit - this is monkeying around, a very bad solution,
but it is the only one that works.
Except, of course, manually implementing a delete/insert/update mechanism from the mview log and master table to catch the deltas, and hoping that nothing gets inserted in those few seconds between the COMMIT and the CREATE MVIEW.

Two more things to say:
I.) Master tables have ~ 1E9 records, so export/import is pretty much the only solution, even if we succeeded in improving the COMPLETE REFRESH with tracing.
II.) Dummy updates on the master are not allowed.

I think I've made a pretty good case for an Oracle ER, don't you think? :-)

Chris Saxon
August 01, 2017 - 3:36 pm UTC

I think I see what you're trying to do now. Essentially you need to replicate all the changes made between exporting the table and creating the MV, right?

You could try raising an ER. But I think you're better off trying to solve this using a replication tool such as GoldenGate.

Hrvoje Devcic, August 01, 2017 - 5:29 pm UTC

Hello Chris!

Right!
We need an exact copy of the remote table in our system.
But as I've explained, there are records missing between the export and the create mview command, as this command purges the mview log.

GoldenGate is great, but we are talking here about killing a mosquito with a machine gun,
and that machine gun costs a lot, and our customer will not pay for it.

I'll create an ER about this for sure,
because it is strange that such an option is not supported yet.
CREATE MVIEW does one unnecessary thing, and that is purging the mview log. That should have been made optional.

Thank you!

Chris Saxon
August 03, 2017 - 10:31 am UTC

Well, there are other replication technologies you could use, GG was just an example!

For the prebuilt table issue, it seems to me the best solution would be for Oracle to let an MV created on a prebuilt table apply the changes from its associated logs at creation time.