Question and Answer

Connor McDonald

Thanks for the question, Dimitri.

Asked: July 13, 2020 - 1:12 pm UTC

Last updated: September 08, 2020 - 1:34 am UTC

Version: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.6.0.0.0


You Asked

Hello everyone,

I was excited about the Flashback Version Query feature and have a question.

I want to track changes on some tables, and it seemed that Flashback Version Query offered exactly this.

For example, I could store the last processed SCN (select t.CURRENT_SCN from v$database t) and run a select like this:
select versions_startscn,
       versions_starttime,
       versions_endscn,
       versions_endtime,
       versions_xid,
       versions_operation,
       contract_id
  from noffer.contract_status_change versions BETWEEN SCN 12477243880505 and 12477244180201

We have archive logging at a 3-hour interval, but I need to track changes every second, maybe 4-5 times per second.

I expected it to filter changes between those SCNs, but it seems it is not filtering and returns all rows. Then I added a WHERE clause like this:
select versions_startscn,
       versions_starttime,
       versions_endscn,
       versions_endtime,
       versions_xid,
       versions_operation,
       contract_id
  from noffer.contract_status_change versions BETWEEN SCN 12477243880505 and 12477244180201
  where versions_startscn is not null or versions_endscn is not null

and received the results I needed, but it does a full table scan and is very slow.


Are there some tricks to do this easily? I just need to know which rows changed (for this example, which contract_id was inserted, updated, or deleted) in a given period (or, as in this example, an SCN range). Some of our tables run to 30-40 M rows per day.

Or do I need to write triggers to track data changes?

Thank you very much
Best
Dimitri Gamkrelidze

and Connor said...

Flashback Query *does* show changes, but be aware of what it is doing to *show* those changes. It is trawling through the undo segments for each row to undo any transactions that impacted that row.

That could be a lot of work. I would look at testing out flashback data archive (FDA), which persists the changes to disk thus reducing the workload, but similarly, its main intention is for "after the fact" analysis (like an audit trail), not necessarily being able to dive into that trail in near real time.
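
As a rough sketch (the archive name, tablespace, and retention below are placeholders, not a recommendation), enabling FDA on the table in question would look like this:

create flashback archive contract_fda
  tablespace fda_ts
  retention 1 month;

alter table noffer.contract_status_change flashback archive contract_fda;

-- the same VERSIONS BETWEEN query syntax then reads from the persisted archive
-- rather than rebuilding every row version from undo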

Also, depending on your volume/requirements, it would be worth looking at change notification or query notification

eg https://docs.oracle.com/database/121/ADFNS/adfns_cqn.htm#ADFNS018
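
For reference, a minimal query-result-change registration sketch, assuming a PL/SQL callback procedure already exists (chnf_callback below is a placeholder), is along these lines:

declare
  l_reginfo cq_notification$_reg_info;
  l_cursor  sys_refcursor;
  l_regid   number;
begin
  -- QOS_QUERY asks for query-result-change (rather than object-change) notification
  l_reginfo := cq_notification$_reg_info(
                 'chnf_callback',
                 dbms_cq_notification.qos_query,
                 0, 0, 0);

  l_regid := dbms_cq_notification.new_reg_start(l_reginfo);

  -- any query executed before reg_end becomes part of the registration
  open l_cursor for
    select contract_id
      from noffer.contract_status_change;
  close l_cursor;

  dbms_cq_notification.reg_end;
end;
/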


If FDA or the others above do not meet your needs, then you're potentially heading into triggers.


Comments

Dimitri Gamkrelidze, September 03, 2020 - 8:47 pm UTC

Hello Connor
Thank you very much for your response.

There are about 15 tables in our database that we need to track. I also need some transformation of the data; sometimes several tables contribute to one row of changes. So I went with the trigger option.
I made a table:
create table CCARE.CHANGES_EVENT_LOG
(
  changes_event_log_id NUMBER not null,
  table_name           VARCHAR2(200),
  identifier_1_number  NUMBER,
  identifier_1_text    VARCHAR2(4000),
  identifier_2_number  NUMBER,
  identifier_2_text    VARCHAR2(4000),
  identifier_3_number  NUMBER,
  identifier_3_text    VARCHAR2(4000),
  reg_date_time        DATE default sysdate
)
partition by range (REG_DATE_TIME)
(
  partition D200901 values less than (TO_DATE(' 2020-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
...);
alter table CCARE.CHANGES_EVENT_LOG
  add constraint PK_CHANGES_EVENT_LOG primary key (CHANGES_EVENT_LOG_ID);



with partitioning on the reg_date_time column for 5 days; sadly it is a DATE, not a TIMESTAMP (it's not easy to change this table, because we would need to stop a lot of applications).

In the triggers on these tables I just insert the table name and the identifiers of the changed rows. changes_event_log_id is populated from a sequence. I thought it would then be very easy to track changes by saving the last processed changes_event_log_id and, on the next iteration, running a select like this:

select * from CCARE.CHANGES_EVENT_LOG where changes_event_log_id>:last_changes_event_log_id
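
(For illustration, one of these row-level triggers might look roughly like this; the sequence name and the choice of contract_id as the identifier are assumptions:)

create or replace trigger trg_contract_status_change_log
  after insert or update or delete on noffer.contract_status_change
  for each row
begin
  insert into ccare.changes_event_log
    (changes_event_log_id, table_name, identifier_1_number)
  values
    (ccare.seq_changes_event_log.nextval,   -- assumed sequence name
     'CONTRACT_STATUS_CHANGE',
     coalesce(:new.contract_id, :old.contract_id));
end;
/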



But I faced two major problems:
First of all, we are using Oracle RAC with 3 nodes and I had huge gaps in changes_event_log_id, because in RAC each node caches its own range of the sequence, and ordered sequences are yet another overhead... The second was phantom rows appearing between transactions: the first transaction may run longer than the second, so ids become visible out of order, and tracking by changes_event_log_id had big issues...

I decided not to change this method, but to mark the status of each row once it is processed (I used the identifier_3_number column, because I couldn't alter the table under heavy load), since the insert itself is very fast, and I made another thread which detects unprocessed rows:

select * from CCARE.CHANGES_EVENT_LOG t where t.identifier_3_number is null

But it was very slow (about 300 ms, which is an issue), so I decided to make the query like this:

select t.changes_event_log_id
  from CCARE.CHANGES_EVENT_LOG t
 where t.reg_date_time > v_last_action
   and t.identifier_3_number is null


on each iteration:
v_last_action = sysdate - 1/24/10   (i.e. 6 minutes back)
and on every 50th iteration:
v_last_action = sysdate - 1   (one full day back)
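
Roughly, the polling loop looks like this (a sketch only; the processing step, poll interval, and variable names are placeholders):

declare
  v_last_action date;
  v_iteration   pls_integer := 0;
begin
  loop
    v_iteration := v_iteration + 1;

    -- normally look back 6 minutes; every 50th pass widen to a full day
    if mod(v_iteration, 50) = 0 then
      v_last_action := sysdate - 1;
    else
      v_last_action := sysdate - 1/24/10;
    end if;

    for r in (select t.changes_event_log_id
                from ccare.changes_event_log t
               where t.reg_date_time > v_last_action
                 and t.identifier_3_number is null)
    loop
      null;  -- process / mark the row here
    end loop;

    dbms_session.sleep(0.2);  -- 18c+; poll interval is illustrative
  end loop;
end;
/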


It worked fine, but I see that under heavy database load I get delays of several seconds, rarely 30 seconds or even more; in a normal situation it's not more than 15-20 milliseconds.
As far as I can see, there are no more than 3,000 records per second (average 30), and about 2M records per day.

Would you suggest using CQN to track this table with a query like this?
select t.changes_event_log_id from CCARE.CHANGES_EVENT_LOG t where t.identifier_3_number is null

Will it cause big overhead and performance degradation for the database?

Thank you



Connor McDonald
September 08, 2020 - 1:34 am UTC


Some options you could look at:

1) a custom index. So you capture rows and log them as "processed=N" and when they are processed you mark that column as null. An index on "processed" is then only the size of the *unprocessed* rows. As they get set to null, they are removed from the index. Makes it very efficient to find the rows you need to process.
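
A sketch of that idea against the log table above, inverting the current convention so that unprocessed rows carry a non-null marker in identifier_3_number and processed rows are set back to null:

-- keys that are entirely null are not stored in a B*Tree index, so this index
-- only ever contains the unprocessed rows, however large the table grows
create index ccare.ix_changes_event_log_unproc
  on ccare.changes_event_log (identifier_3_number);

-- insert new rows with identifier_3_number = 1 (unprocessed),
-- set it to null once processed, and find the backlog with:
select changes_event_log_id
  from ccare.changes_event_log
 where identifier_3_number = 1;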

2) Use AQ. Ultimately you are looking for a list of "unprocessed" changes, which aligns with a queue messaging system. But AQ can be a little resource-heavy for your needs because it supports a lot of functionality.
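
A bare-bones AQ setup for this (all names are placeholders, and the payload type is kept trivial for the sketch) would be along these lines:

begin
  dbms_aqadm.create_queue_table(
    queue_table        => 'ccare.change_event_qt',
    queue_payload_type => 'RAW');
  dbms_aqadm.create_queue(
    queue_name  => 'ccare.change_event_q',
    queue_table => 'ccare.change_event_qt');
  dbms_aqadm.start_queue('ccare.change_event_q');
end;
/

-- the triggers would then enqueue one message per change (dbms_aq.enqueue)
-- and the processing thread would dequeue (dbms_aq.dequeue) instead of polling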

3) Even if you don't partition by it, adding a timestamp for ordering purposes should be easy because you can do:

alter table ccare.changes_event_log add ts timestamp default localtimestamp;

and even make that column invisible if you need, and hence no application code would need to change (or even be aware of it) except your code.
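
That is, the same statement with the INVISIBLE keyword (table name as above):

alter table ccare.changes_event_log
  add (ts timestamp default localtimestamp invisible);

-- invisible columns are not returned by SELECT * and are skipped by INSERTs
-- that omit the column list, so existing application code is unaffected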
