Oracle streams and Oracle messaging

Tom Kyte

Thanks for the question, Dawar.

Asked: October 10, 2007 - 5:39 pm UTC

Last updated: March 19, 2011 - 10:46 am UTC

Version: 10g

Viewed 1000+ times

You Asked

Tom,

I just wanted to know how important Oracle Streams and the Oracle Messaging Gateway are.

At the moment I have no idea about these; I am going through documents from oracle.com.

Please share with us some benefits.

Thanks in advance

Dawar Naqvi
LA, USA

and Tom said...

Well, if you have the need to copy/replicate data from database 1 to database 2, then Streams is important.

If you have a disaster recovery site via Data Guard, then the underlying technology that is Streams is being used in part in many cases - so Streams is important.

In short: if you have a need to share data between databases, Streams is probably relevant to you. If you do not, it probably isn't.

Same with the Messaging Gateway: if you have a requirement to interoperate with, say, IBM's MQSeries queued data, the gateway is likely relevant to you. If you do not, it probably isn't.
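The first case - copying data from database 1 to database 2 - can be made concrete. The following is a sketch only (the queue, capture, and propagation names are illustrative, not from the thread); a real 10g configuration also needs a Streams administrator account, supplemental logging, and instantiation of the destination tables, as covered in the Streams Replication Administrator's Guide:

```sql
-- Sketch: one-way Streams replication of a single table (10g)
BEGIN
   -- create an ANYDATA queue to stage captured changes
   dbms_streams_adm.set_up_queue(
      queue_table => 'strmadmin.streams_qt',
      queue_name  => 'strmadmin.streams_q' );

   -- capture DML changes made to SCOTT.EMP on the source
   dbms_streams_adm.add_table_rules(
      table_name   => 'scott.emp',
      streams_type => 'capture',
      streams_name => 'capture_emp',
      queue_name   => 'strmadmin.streams_q',
      include_dml  => TRUE,
      include_ddl  => FALSE );

   -- propagate the changes to the destination queue over a database link
   dbms_streams_adm.add_table_propagation_rules(
      table_name             => 'scott.emp',
      streams_name           => 'prop_emp',
      source_queue_name      => 'strmadmin.streams_q',
      destination_queue_name => 'strmadmin.streams_q@db2',
      include_dml            => TRUE,
      include_ddl            => FALSE );
END;
/
```

A matching `add_table_rules` call with `streams_type => 'apply'` would then be run on the destination database.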



Comments

data transfer from 1 database to another

Nikhilesh, October 12, 2007 - 5:28 am UTC

Dear Tom,
Database 1 sends a 200-line text file to database 2 every 10 seconds, and database 2 sends a receipt for that file back to database 1. We use MQSeries at the moment but are now looking to use Oracle Streams AQ through a DB link.
Is it wise to use Oracle Streams AQ for this purpose, or should something else be tried?

Thanks in advance.

Tom Kyte
October 14, 2007 - 9:17 pm UTC

If you can use MQSeries, you can use AQ - they are synonymous in functionality.

AQ comes with the database...

However, if all you wanted to do was insert 200 rows from database 1 into database 2, you need nothing more than "insert".
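If all that is needed is moving the rows, a plain `INSERT INTO target_table@db2 SELECT ...` over the database link is the whole solution. Where queueing semantics (receipts, asynchrony) are genuinely wanted, a minimal AQ setup might look like the following sketch - the queue, payload type, and message content are illustrative assumptions, not taken from the thread:

```sql
-- Sketch: a single-consumer AQ queue with an object payload
CREATE TYPE file_msg_t AS OBJECT ( payload CLOB );
/
BEGIN
   dbms_aqadm.create_queue_table(
      queue_table        => 'file_qt',
      queue_payload_type => 'file_msg_t' );
   dbms_aqadm.create_queue( queue_name  => 'file_q',
                            queue_table => 'file_qt' );
   dbms_aqadm.start_queue( queue_name => 'file_q' );
END;
/
-- enqueue one message (a dequeue on the other side mirrors this call)
DECLARE
   enq_opts  dbms_aq.enqueue_options_t;
   msg_props dbms_aq.message_properties_t;
   msgid     RAW(16);
BEGIN
   dbms_aq.enqueue(
      queue_name         => 'file_q',
      enqueue_options    => enq_opts,
      message_properties => msg_props,
      payload            => file_msg_t( 'file contents here' ),
      msgid              => msgid );
   COMMIT;
END;
/
```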

streams error detection - instrumentation

Andre, March 31, 2008 - 11:10 am UTC


Hi Tom,

We have a problem with Streams: specifically queue-to-queue propagation aborting with ORA-600.

Oracle support got stuck on that, too.

I would like to INSTRUMENT the code between the capture, propagation, and apply processes of the two databases so that it would be 100% clear where and under what circumstances these errors come up.

Streams are configured ONE-WAY + the source database is high volume processing OLTP.

If you have any sample of the code handy please quote:
1) an example of PL/SQL custom apply-handler
2) an example of intercepting of LCR to display or store into a TRANS_LOG table

We would like to revisit the code, and any good example of good-practice design of a custom rule-based transformation would be great.

Thanks
Andre


Tom Kyte
March 31, 2008 - 12:44 pm UTC

Please utilize support and stop putting these *all over the place*.

If you have an ora-600, Support cannot be allowed to get *stuck* - it is their job to diagnose the issue with you. Use the escalation procedures if necessary, but please work this via support.




instrumentation

Andre, March 31, 2008 - 4:27 pm UTC

Tom,

Sorry for placing the same question in a couple of places, but when I saw it vanish I thought I had made an error.

ALL I AM ASKING IS:
Can you please suggest an INSTRUMENTATION method to localize the root cause of the problem?

I have managed instrumentation before - but I am not certain about the best approach when it comes to Streams.

I would appreciate your help.
A simple PL/SQL example that would work at a TRANSFORM of an LCR, or at various points of contact from one queue to another. Then I can say to support: "HERE IT IS - WHY do I get ORA-600?" Otherwise I will keep hearing: "Produce details".

I know that you are a very strong advocate for Instrumentation - I will really appreciate your help.

Regards
Andre
Tom Kyte
March 31, 2008 - 4:50 pm UTC

Yes, I vanished it when I saw it all over the place.

You need to work with support, they can set events, get debug information, all kinds of stuff.

If you INJECT yourself into the streams flow, you will change the problem entirely. Right now I assume you are using the default apply methods all across the board; therefore, that is what needs to be debugged. It is *our* code failing here, not yours. Our instrumentation is what needs to be enabled.

Do you know how to escalate an SR? I think you need to on this one.

Instrumentation to locate the "weak link"

Andre, April 01, 2008 - 9:04 am UTC

Dear Tom,

I understand that and I have done this before.

(1)
Knowing how much importance you personally put on code instrumentation, I asked for your advice - i.e., in case you have ever instrumented an AQ and/or Streams implementation, whatever the complexity. If you have, you could help me/us get this done on our site and enable us to locate the "weak link". If you have never instrumented a Streams and/or AQ implementation, I understand - and I would be grateful if you could suggest an approach with a very simple POC-type instrumentation code.

Unless you tell me that instrumentation SHOULD NOT BE DONE when AQ and/or Streams are involved (I can hardly believe that this could be the case) - then I would be very curious to hear the reasons, especially coming from you.

(2)
Now some background info.
Oracle support (escalated etc.) had only told us to verify all environment set-up including releases of Solaris + 10g with all the patch lists on both sides.

They could do more - but this is one of the biggest banks, and security is such that even internally, teams on the receiving end do not access the core servers.

There are a number of people involved (several countries) and I joined in to face problems that need resolution.

And - by no means is this an implementation "using the default apply methods all across the board" - not at all.

It is actually rather complex, with a myriad of propagation queues - I am trying to get some idea as to who did what and for what reason.

The bank will not allow Oracle Support to connect or have any access to the core databases - there are various security layers and changes of ownership as the stream data travels from one queue to the next, etc., etc.

Hence my request to you for help + instrumentation examples + some guidance so the project will get implemented.

Kindest regards
Andre



Tom Kyte
April 01, 2008 - 9:44 am UTC

(1) But you have no code of your own to instrument; do not place words in my mouth. I say "you need to instrument your code". Streams is meant to be an out-of-the-box experience. Anything you put in there would CHANGE everything.

You instrument YOUR CODE.
If you have no code, you cannot instrument - there is nothing for you to instrument.


(2) If you have non-default apply methods, then you wrote that code; that would be something you could stick instrumentation into. But then I don't know why you would be asking how to instrument - since you have code, you put code in place.



intercepting streams to locate a root cause

Andre, April 02, 2008 - 2:33 am UTC

Tom

Quite a while back I implemented AQ/Streams out of the box - the "KISS" formula.
I had no problems. There are hundreds of various projects that implemented out-of-the-box AQ/Streams, and only a very few report problems.

If you have several machines involved and NOT an out-of-the-box implementation - where various queue-to-queue communications call various rule-based transformations - and this breaks, it is impossible to pinpoint the root cause of the problem, as it is not clear where it happens.

By instrumentation of this implementation I meant to intercept the process at various points into LOG tables, so that one could see right away:
1. Does an LCR reach these tables?
2. Does it hold the same data? (some guys claimed data corruption)
3. Does it show that the transformation worked (at capture and at propagation #1, #2, #3, #4, etc. + at apply)?

If the stream cannot be intercepted (like the FBI tapping a phone conversation) then we could move an intercept LOGGING along - it would take more tests but that's OK.

All of that could be done on one single machine first with database backups done so that regression tests could be done.

And if the entire process works on one machine - but does not work in the real-life set of servers - then we could intercept data the same way.

Am I missing something?

So - is it or is it NOT possible to intercept data that is streamed between:
a) capture and propagation
b) propagation #1 and #2
c) propagation #2 and #3, etc.
d) several propagated streams into one, etc.
e) various propagations and apply streams
f) AND - whether or NOT the data intercepted is what is expected - transformed and not corrupted

I thought I could bet that if you found yourself in a similar spot, you would start by questioning your own implementation rather than contacting one of your guys next door to solve the problem, wouldn't you?

Kindest regards
Andre

Tom Kyte
April 02, 2008 - 7:55 am UTC

... I thought I could bet that if you found yourself in a similar spot, you would ...

ora-600 indicates *we failed* - that is an internal error. 100% of the time, ora-600 is a bug - no ifs, ands, or buts about it.


if you inject something into the process - you will

a) change everything
b) make it even more complex

Where you already have code - great, log away. Propagation is a black box; it just happens. You can write your custom apply code (your transformation rules and whatnot), but the propagation is just queue-to-queue stuff, and if that is where the ora-600 is happening - that is right back to support.
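For the case Tom allows - logging inside apply code you already own - a custom DML handler can record each LCR before executing it. This is a sketch only; the TRANS_LOG table, the handler name, and the SCOTT.EMP registration are hypothetical:

```sql
-- Sketch: log each row LCR into TRANS_LOG, then apply it normally
CREATE TABLE trans_log (
   logged_at    TIMESTAMP DEFAULT SYSTIMESTAMP,
   object_owner VARCHAR2(30),
   object_name  VARCHAR2(30),
   command_type VARCHAR2(30) );

CREATE OR REPLACE PROCEDURE log_and_apply( in_any IN ANYDATA )
IS
   lcr SYS.LCR$_ROW_RECORD;
   rc  PLS_INTEGER;
BEGIN
   rc := in_any.getobject( lcr );   -- extract the row LCR from the ANYDATA
   INSERT INTO trans_log ( object_owner, object_name, command_type )
   VALUES ( lcr.get_object_owner, lcr.get_object_name, lcr.get_command_type );
   lcr.execute( TRUE );             -- then apply the change as usual
END;
/
-- register the handler for one table/operation
BEGIN
   dbms_apply_adm.set_dml_handler(
      object_name    => 'scott.emp',
      object_type    => 'TABLE',
      operation_name => 'INSERT',
      user_procedure => 'strmadmin.log_and_apply' );
END;
/
```

Note that, as Tom warns, registering such a handler changes the apply path itself - it is instrumentation of *your* code, not of the propagation black box.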

instrument or not = if streams or AQ = NOT !!!

Andre, April 02, 2008 - 4:45 pm UTC

OK Tom,

So some guys at THE BANK have chosen to implement a myriad of propagation queues (as many as several dozen, to run concurrently for "performance") - so they have managed to "prove" to Oracle that Streams would not work under such a scenario at volumes of 100,000,000+ transactions, which is way beyond the demo, out-of-the-box SCOTT/TIGER EMP replication.

In the end we have entered a finger-pointing contest (I am not trying to be more to the point ...) where ...
- never mind that... no point to quote...

I have researched the Internet, Metalink, forums, etc., etc. ... until I got to "Step-by-step Streams" by LewisC of PricewaterhouseCoopers ... interesting

+

Frankly Tom,

I could not possibly see HOW instrumentation would mess things up, nor that I should just follow your advice and tell Oracle Support:
"We are getting ORA-600 + fix it or else + BTW we will not provide you with any further info other than our platform + Oracle 10gR2".

If you figured that I have reached the end of the ORA-rope - you're right.


Tom Kyte
April 02, 2008 - 8:10 pm UTC

If you cannot provide diagnostic information to support - you will have a problem, but I don't know what else to tell you. There are cleared folks in support; there are people that can come on site to work with you. I worked in Oracle Federal/Public sector for 14 years - serving the military - we can and do resolve issues in lock-tight areas.

IF you start adding code where no code of yours exists
THEN you fundamentally change the process, entirely.


alternative = manual process - Lewis C

Andre, April 03, 2008 - 5:25 am UTC

Tom,

I understand your position as far as using support, and we have tried - but there is far too much politics involved. The management says that Oracle will have to reverse engineer their 11g Streams architecture back into 10gR2 or 10gR3 for us, and we will then repeat the process - meanwhile, snapshots will be the fallback option. BTW, their administrators have NOT set GLOBAL_NAMES=TRUE, so db-links are not set up the way Oracle says they should be. And as I said earlier, there are dozens of propagations - the history of this is not clear.

The problem needs to get resolved one way or another.
I have done Internet searching and found an article, "Step-by-step Streams", published by Lewis R Cunningham, PricewaterhouseCoopers, LLP.

Almost identical text appears in Lewis C's blog:
http://blogs.ittoolbox.com/oracle/guide/archives/oracle-streams-configuration-change-data-capture-13501

Lewis published several articles where he demonstrates how one can create an LCR manually and use AQ to send this from one queue to another.

My point is this:
If one uses an Oracle package and provides the required parameters, all should work; if it does not, and the package body PL/SQL is wrapped - sure, Oracle support needs to deal with this.
But - if it is an entire system (Streams in this case) where data held in LCRs gets corrupted - sometimes (sporadically), and nobody knows at which point - I thought it should be OK to:
1. Create a very simple Streams implementation and run a single transaction through it - one at a time, for the various objects that are required
2. Simulate the load to higher values and monitor
3. Generate various mixed transactions that process various objects with various data, and monitor
4. In case of an error, replay the same process for the one transaction that caused it, or for a batch of transactions - a regression test
5. When the error(s) can be reproduced - instrument the process by intercepts of the data flow; that way there are 3 levels of controlling the simulation:
a. The transaction generator (say, a PL/SQL procedure) that processes apps objects - this can slow down the data flow on the input
b. The CDC (change data capture) can be intercepted, then propagation, and then the apply handler(s) - without any transformation
c. Finally, transformations are introduced
6. One may find that once some critical mass (high transfer rate) is reached, there are errors that show up, but only in a capture custom transformation
7. The problem may show up only when a high volume of data is transferred - say, millions of rows processed with infrequent COMMITs
8. A combination of 6+7, etc.

But if none of those errors can be generated with a single-path stream, then (if really necessary) multiple queue propagations can be gradually introduced, with parallel processing etc., to locate the exact circumstances - and thus the root cause of the problem.

Once it is clear what the root cause is - one can opt for a different implementation - say, downstream capture, or mining only archived logs.

If a work-around is found - Oracle may fix the original problem with a patch, but the project moves along with a robust deployment = the preferred method.
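The manual-LCR technique referenced above (from Lewis Cunningham's articles) can be sketched roughly as follows: constructing a row LCR by hand and enqueuing it into an ANYDATA queue, to push a single known change through one path of the configuration. All names here are illustrative, and depending on the queue setup, message properties such as a recipient list may also be required:

```sql
-- Sketch: build a row LCR manually and enqueue it
DECLARE
   lcr   SYS.LCR$_ROW_RECORD;
   enq   dbms_aq.enqueue_options_t;
   props dbms_aq.message_properties_t;
   msgid RAW(16);
BEGIN
   lcr := SYS.LCR$_ROW_RECORD.construct(
      source_database_name => 'DB1',
      command_type         => 'INSERT',
      object_owner         => 'SCOTT',
      object_name          => 'EMP' );
   -- attach the "new" column values the INSERT should carry
   lcr.add_column( 'new', 'EMPNO', anydata.convertnumber( 9999 ) );
   lcr.add_column( 'new', 'ENAME', anydata.convertvarchar2( 'TEST' ) );

   dbms_aq.enqueue(
      queue_name         => 'strmadmin.streams_q',
      enqueue_options    => enq,
      message_properties => props,
      payload            => anydata.convertobject( lcr ),
      msgid              => msgid );
   COMMIT;
END;
/
```

Because the LCR contents are fully known in advance, comparing them at each queue against what arrives gives exactly the corruption check described in the numbered plan.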

Streams for instant replication

Tracy, October 31, 2009 - 9:51 am UTC

Is Oracle Streams a suitable method for replicating a subset of tables from one database to another in near real-time i.e. if the requirement is that updates made on the source database must be made on the destination database within milliseconds?

If the answer is yes, does it require that both databases be on 11g and using the new synchronous capture feature of 11g? (It is not clear to me from the Oracle documentation.)
Tom Kyte
November 09, 2009 - 12:02 pm UTC

Nothing is. If you have near-real-time requirements, you really have a requirement to have a single database. Truly, honestly, 100% on that.

I will not discuss a system whereby you say you need millisecond propagation times; that is just "not smart". You obviously need a single database. In 1993 the answer might have been different, since hardware was so small back then, but in the 21st century you need a single database, period.

Why real-time propagation is required.

Tracy, November 16, 2009 - 8:21 am UTC

I'm not sure that I follow your logic about small hardware, but anyway let me explain why the requirement is there.

We have a long established database - Database A in Country A. This database is updated by feeds, by operators, by background jobs etc.

We plan to have Database B in Country B. Certain (but not all) of the data in this database must be kept in step with the data in Database A.

Legal requirements in Country B stipulate that Database B and the application that uses it must reside, physically, in Country B so we need to find an efficient method of propagating the data from A to B.

If it can be done efficiently then copying the data directly from A to B in some way has to be the best way of ensuring data matching and integrity.

Replication, Streams, Goldengate... I'm just looking for the best Oracle solution.
Tom Kyte
November 23, 2009 - 12:06 pm UTC

... I'm not sure that I follow your logic about small hardware,...

In 1993, 32MB (megabytes!) was a large RAM configuration - huge, in fact.
In 1993, there were lots of single-CPU systems, and the CPUs - even if you had more than one - were glacially slow (compared to today).

In 1993 - replication might have (stress MIGHT because mostly, it did not) made sense. If the machine you were using could not handle the load, it might have made sense to offload some of the read only reporting workload to another machine.

In the year 2009 - an entirely different story. That is what I mean: in 2009 you would be hard pressed, in most corporate environments, to find a situation that demanded replication.


... We have a long established database ... What we did last century is not what we do today....

... Legal requirements in Country B stipulate that Database B and the application
that uses it must reside, physically, in Country B so we need to find an
efficient method of propagating the data from A to B.
...

Almost certainly, that is not true. I'm aware of one situation where that might be the case - but you'd have to say it was that one place. When Oracle consolidated in the late 1990's, our country managers said the same thing; it was not true - never was. Otherwise software as a service would not work in that country, cloud computing would not be in that country, that country would not be in the 21st century - they would forever be in the 20th century, pre-internet.

There is nothing that will give you real-time propagation in a manageable sense - or even in an unmanageable sense.

I suggest you call "bluff" on the "must reside in that country". Especially since you are COPYING IT out of that country - and the only reason to mandate it in country would be for *supposed privacy* reasons, meaning it would also be illegal to copy it.


You can look at all of the solutions - I am not a fan of any of them; they all lead to lots and lots of busy work, and it just isn't necessary most of the time.

User level replication

KG, March 15, 2011 - 8:50 am UTC

TOM
We are planning to use Streams for schema replication, to minimize the downtime for a production migration to a new host.

Do you see any issues with this approach?
Are there any limitations we would encounter with Streams?
Tom Kyte
March 19, 2011 - 10:46 am UTC

Thanks Tom

KG, March 19, 2011 - 10:25 am UTC

Tom
I will look at the Oracle docs (the link doesn't work, though).
What features of Oracle would you use if an 11g database needs to be migrated to another platform with a minimum of downtime?

Good Wishes
Tom Kyte
March 19, 2011 - 10:46 am UTC

Link corrected.


Streams or Golden Gate would be appropriate.