Keeping track of order of incoming rows
Question and Answer

Tom Kyte

Thanks for the question, Oracle developer.

Asked: September 30, 2010 - 11:02 am UTC

Last updated: October 15, 2010 - 9:20 am UTC

Version: 11.1.0.7


You Asked

Hello Tom,

We are using an 11.1.0.7 RAC database with 2 nodes. In our OLTP application, we have main transactional tables that are constantly being inserted into and updated. We also have history tables that store the history of the DML happening on the main transactional tables, populated by after insert/update triggers. A new row is inserted into the history table when a row is inserted into a transaction table, and another row is inserted into the history table when a row in the transaction table is updated. So there is a one-to-one mapping between a transaction table and a history table.
The history table is then fed to the data warehouse to build a Star schema that is used for reporting purposes.

Anyway, we need to keep track of the sequence of transactions in the history tables. So we use a TIMESTAMP column with up to 9 digits of fractional-second precision to record the order of incoming rows. Everything works fine for the most part. But there are times when the 2 RAC nodes' clocks drift apart by a few seconds or minutes, so that an insert on one node carries a later timestamp than a subsequent update to that row happening on node 2. In other words, the SYSTIMESTAMP of one node is a few seconds or minutes ahead of the SYSTIMESTAMP of the other node, and our process records that the UPDATE happened before the INSERT. That causes a lot of problems in our processing.
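[The setup described above can be sketched roughly as follows. The thread does not show the actual triggers; the table and column names here are invented, and a row-level trigger is shown for simplicity.]

```sql
-- Hypothetical sketch of the trigger-based history capture described above.
-- ORDERS / ORDERS_HIST and the column names are illustrative only.
CREATE OR REPLACE TRIGGER orders_hist_trg
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW
DECLARE
  l_dml_type VARCHAR2(1) := CASE WHEN INSERTING THEN 'I' ELSE 'U' END;
BEGIN
  INSERT INTO orders_hist (order_id, status, dml_type, change_ts)
  VALUES (:NEW.order_id, :NEW.status, l_dml_type,
          SYSTIMESTAMP);  -- node-local clock: the source of the RAC skew problem
END;
/
```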

We have been guaranteed that the RAC nodes will not be out of sync by more than 2 minutes. But that is not good enough. When transactions are happening this fast, even a second of skew can create a problem in our processing.

We cannot use sequences because we need to keep track of the order of rows and with RAC nodes, sequences cannot be guaranteed to provide the correct order.

My question is can you recommend any way to resolve this type of issue?

Thank you.

and Tom said...

... So there is one to one mapping between a transaction table and a history table. ...

no, that is a one-to-many mapping. You insert a row into history when you insert, and you insert a row into history when you update - history has more rows than "non-history" does.


and there are much more efficient ways to get that history into a warehouse than triggers - that is the last approach I would have taken. Streams, CDC, flashback data archive - all would have come first.
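[Of the trigger-free alternatives Tom names, Flashback Data Archive is the most self-contained to sketch. This is a hedged illustration, not the poster's setup: the archive, tablespace, and table names are invented, and in 11.1 this feature requires the separately licensed Total Recall option.]

```sql
-- Trigger-free history capture with Flashback Data Archive (11g).
-- Names are hypothetical; FLASHBACK ARCHIVE ADMINISTER privilege is required.
CREATE FLASHBACK ARCHIVE orders_fba
  TABLESPACE fba_ts
  QUOTA 10G
  RETENTION 5 YEAR;

ALTER TABLE orders FLASHBACK ARCHIVE orders_fba;

-- History is then read with flashback SQL instead of a hand-built history table:
SELECT * FROM orders
  VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY) AND SYSTIMESTAMP
  WHERE order_id = 42;
```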


... Anyway, we need to keep track of the sequence of transactions in history tables. So we use a timestamp data type with up to 9 microsecond accuracy to keep track of the sequence of rows coming in ...

and now streams really truly comes into play.


... Everything works fine for the most part. But there are times when we have issues where the 2 RAC nodes get out of sync by a few seconds/minutes where an insert into a table happens on one node that is ahead of an update to that row happening on node 2. ...

you have a problem even on a single instance with timestamps. The timestamps are "as of insert" time - but they NEED to be sequenced "as of COMMIT" time. Think about it - just because I have a lower timestamp than you do does NOT mean that I inserted and committed (made public, allowed you to see) my data first.
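[The single-instance hazard Tom describes can be pictured as two concurrent sessions; the timestamps below are illustrative, not from the thread.]

```sql
-- Session A                                -- Session B
INSERT INTO t VALUES (1, SYSTIMESTAMP);
-- ts = 10:00:00.01
                                            INSERT INTO t VALUES (2, SYSTIMESTAMP);
                                            -- ts = 10:00:00.02
                                            COMMIT;  -- row 2 becomes visible FIRST
COMMIT;  -- row 1, with the LOWER timestamp, becomes visible last
```

A consumer that polls the history table ordered by timestamp can read row 2, advance its high-water mark to 10:00:00.02, and then permanently skip row 1 when it finally commits - even with perfectly synchronized clocks.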


this screams out for "streams". It preserves transactional ordering.

and would have about 1000% less impact on your runtime performance - triggers - no way.

You are a big system obviously (two node rac with OLTP - you are not tiny, you have big plans). Using triggers would be the last thing I'd want to do. I'd need a really good reason for having a single trigger in my system let alone a trigger as resource intensive as you have.


Comments

My Thoughts exactly...

A reader, September 30, 2010 - 9:14 pm UTC


Why try to reinvent the wheel when the functionality is available out of the box? Streams is a sure-shot way of achieving the requirement. I would also suggest having a peek at "Golden Gate" too.


Streams

Oracle developer, October 01, 2010 - 8:07 am UTC

The one-to-one mapping I was referring to was 1 transactional table to 1 history table, not the rows. But I can see how one could draw the wrong conclusion from that line.

Yes, I understand that Streams will solve this issue, and we are looking into it. However, the trigger solution was designed when 9i ruled the world and Streams had more limitations than it does today with 11g. Plus, the triggers are already coded; Streams is not. Also, among several improvements in 11g, I read that trigger firing is faster than in earlier versions. Please understand, I am not debating whether triggers are better or worse than Streams here. If it were a new system to be built, I would not choose triggers either, and no one would. I know that Streams is a better solution.

I also understand the point about the timestamp being the "insert time" versus the "commit time". However, what we store in the table is the "insert time", and that would work fine in a non-RAC, single-instance database. An update would not be recorded before an insert in a single-instance database.

So I ask the question again, in my current environment where we use triggers today, is there any way to resolve the RAC timestamp issue?
Tom Kyte
October 01, 2010 - 9:31 am UTC

but you have an issue, one that cannot be solved with triggers unless you want to SERIALIZE your transactions, therefore - (think about where I'm going here....)


short of getting your clocks in order on the nodes OR imposing some heavy duty serialization OR building your own (single point of failure) clock service (which would be really slow)...
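[One concrete form of the "heavy duty serialization" Tom alludes to would be a RAC-ordered sequence. This is a sketch of the trade-off, not a recommendation: the sequence name is hypothetical.]

```sql
-- An ORDER NOCACHE sequence forces the RAC nodes to coordinate on every
-- NEXTVAL call, so the generated numbers are globally ordered - at a real
-- performance cost. Note they are still assigned at INSERT time, not COMMIT
-- time, so the commit-ordering problem Tom describes above remains.
CREATE SEQUENCE hist_seq ORDER NOCACHE;
```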

Out of sync no more than 2 minutes?

Kent Morwath, October 01, 2010 - 11:34 am UTC

That's a very long time. There are several ways to keep time synchronized on machines with far better precision than that. I would suggest properly configuring NTP on those machines and using a reliable time source.
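[A minimal NTP client configuration along the lines Kent suggests might look like this; the server names are placeholders, and the exact file location and daemon vary by platform.]

```conf
# Minimal /etc/ntp.conf sketch (server names are placeholders)
server ntp1.example.com iburst
server ntp2.example.com iburst
driftfile /var/lib/ntp/drift
```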
Tom Kyte
October 01, 2010 - 2:01 pm UTC

concur - my laptop is never off by even a second from 'real time', based on the radio-synced clock on my wall. If my laptop can do it...

but the real answer is: fix it, use streams.

The overhead of the triggers - huge (they could probably give back 60% of their CPUs if they did this - which means, of course, I should be encouraging them to use triggers - as a stockholder ;) )

The amount of code they would simply erase - that would be huge. And a good thing - erasing code is a positive thing, maintaining it - not so much...

Streams vs Golden Gate

A reader, October 03, 2010 - 8:17 am UTC

We were told by Oracle Support that they no longer recommend using Oracle Streams for replication, and that their preferred replication tool going forward is Golden Gate. Is this correct?
Tom Kyte
October 04, 2010 - 5:40 am UTC

if you are doing oracle to oracle replication, and it is rather straightforward replication, no big transformations or anything - you already own everything you need.

golden gate is separately licensed.

Sync the time of the two nodes?

Giuseppe, October 04, 2010 - 6:12 am UTC

The NTP solution is so trivial that I suspect it hasn't been implemented because of outside network access policies.

If that's the case, how about a batch job that periodically syncs the time of one node with the other?
They may drift from the "real time", but they will always be in sync with each other.
Tom Kyte
October 04, 2010 - 7:47 am UTC

and even if it was (network policy), just setup your own and have node 2 sync from node 1.

but even then, they will have problems :)

You might still find an update happening before the insert! If the clocks are drifting as fast as they must be to produce such a wide disparity, putting the time "back" on one node could easily make an update or delete appear to happen BEFORE the insert (according to the clock)...

so, we'd be back to... you need something that sequences transactions, eg: streams or the like.

_bag_, October 13, 2010 - 1:27 am UTC

Oracle developer, you can use the SCN instead of a timestamp; maybe it solves your problem with synchronization.

dbms_flashback.get_system_change_number

or

create global temporary table t (x number);
-- userenv('commitscn') is special-cased: the stored value is updated
-- to the transaction's commit SCN when the transaction commits
insert into t values (userenv('commitscn'));
select x from t;

Tom Kyte
October 13, 2010 - 7:02 am UTC

you have the same exact problem. A higher number does not mean the commit was completed last.

SCN on RAC

A reader, October 13, 2010 - 11:25 am UTC

Tom,
1. Transaction 1 starts on node A of the RAC and gets SCN1; transaction 2 starts on node B and gets SCN2.
If SCN1 < SCN2, can we be sure that transaction 1 started earlier than transaction 2?

2. Here it is written: "Oracle Database increments SCNs in the system global area (SGA)".
http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/transact.htm#CNCPT039

How do the two nodes of a RAC stay synchronized on SCNs when each of them has its own SGA?
Tom Kyte
October 15, 2010 - 7:07 am UTC

1) you don't get an SCN for a transaction UNTIL IT COMMITS. Calling dbms_flashback will just be like using a timestamp.

2) that doesn't matter. You would be assigning the SCN to a row when you modify it - you need to assign the SCN to the row when you COMMIT, not when you modify it.
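[For completeness, Oracle does expose a commit-time SCN per row via the ORA_ROWSCN pseudocolumn; with ROWDEPENDENCIES it is tracked per row rather than per block. It is approximate unless ROWDEPENDENCIES is set, and it is not a substitute for Streams-style transactional sequencing, but it illustrates the insert-SCN versus commit-SCN distinction Tom is drawing.]

```sql
-- ORA_ROWSCN reflects the COMMIT SCN of the transaction that last changed a row.
-- ROWDEPENDENCIES (set at CREATE TABLE time) tracks it per row, not per block.
CREATE TABLE t (x NUMBER) ROWDEPENDENCIES;
INSERT INTO t VALUES (1);
COMMIT;
SELECT x, ORA_ROWSCN FROM t;  -- SCN assigned at commit, not at insert
```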

A reader, October 15, 2010 - 8:31 am UTC

"You would be assigning the scn to a row when you modify it, you need to assign the scn to the row when you COMMIT - not when you modify it."

In the above statement you say Oracle assigns the SCN when you modify the row, and at the end you say "not when you modify it".

Could you please explain more about this?

Thanks..
Tom Kyte
October 15, 2010 - 9:20 am UTC

I did not say Oracle assigns an SCN when you modify it.

I said that by using dbms_flashback YOU (you, not oracle) would be assigning an SCN to a row when it was modified.

We (Oracle) use the commit SCN to sequence transactions for replication - but YOU would not be. You would be using an SCN to order "rows" - not transactions - to be applied somewhere else using this technique. It would suffer from the same effects as a timestamp (even one that was perfectly synchronized on both systems) would.

Keeping track of order of incoming rows

sasanka ghosh, October 07, 2014 - 2:52 am UTC

Hi Tom,
Many thanks for your answers, but I have some doubts.

In a DW environment with heterogeneous or RAC source systems, if we need to collect data and load it into the DW and data marts without NTP, it is a nightmare scenario.

If the different server clocks differ, correctly linking records - especially Type 2 dimensions and facts - will be a problem.

If non-Oracle systems are involved, then SCN/Streams will not be available.

If only Oracle systems are involved, the logic to link the correct records based on SCN/Streams will be very complex and sometimes unmanageable.
