Tom,
I understand your position on using Support, and we have tried, but there is far too much politics involved. Management says that Oracle would have to reverse engineer their 11g Streams architecture back into 10gR2 or 10gR3 for us, and we would then repeat the process; meanwhile, snapshots remain the fallback option. BTW, their administrators have NOT set GLOBAL_NAMES=TRUE, so the db links are not set up the way Oracle says they should be. And as I said earlier, there are dozens of propagations, and the history behind them is not clear.
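For reference, the convention Oracle documents is that with GLOBAL_NAMES=TRUE every database link must be named after the global name of the remote database. A minimal illustration of what "set up the way Oracle says" would look like (the database and user names here are made up):

```sql
-- Enforce that db link names match the remote database's global name
ALTER SYSTEM SET global_names = TRUE SCOPE = BOTH;

-- Check the local global name
SELECT * FROM global_name;

-- With GLOBAL_NAMES=TRUE, the link name must equal the remote
-- database's global name (TARGETDB.EXAMPLE.COM is hypothetical)
CREATE DATABASE LINK targetdb.example.com
  CONNECT TO strmadmin IDENTIFIED BY strmadmin
  USING 'targetdb';
```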
The problem needs to get resolved one way or another.
Searching the Internet, I found an article, "Step-by-step Streams", published by Lewis R Cunningham, PricewaterhouseCoopers, LLP.
Almost identical text appears in Lewis's blog:
http://blogs.ittoolbox.com/oracle/guide/archives/oracle-streams-configuration-change-data-capture-13501
Lewis has published several articles demonstrating how one can create an LCR manually and use AQ to send it from one queue to another.
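For anyone following along, that technique boils down to constructing a SYS.LCR$_ROW_RECORD yourself and enqueuing it as ANYDATA on a Streams queue. A rough sketch, assuming a Streams administrator schema and a test table that are invented here:

```sql
DECLARE
  lcr SYS.LCR$_ROW_RECORD;
BEGIN
  -- Build a row LCR describing an INSERT into a hypothetical table
  lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
           source_database_name => 'SOURCEDB.EXAMPLE.COM',
           command_type         => 'INSERT',
           object_owner         => 'SCOTT',
           object_name          => 'EMP_TEST');

  -- Supply the new column values for the insert
  lcr.ADD_COLUMN('new', 'EMPNO', ANYDATA.ConvertNumber(9999));
  lcr.ADD_COLUMN('new', 'ENAME', ANYDATA.ConvertVarchar2('TEST'));

  -- Wrap the LCR in ANYDATA and enqueue it on the Streams queue
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    payload    => ANYDATA.ConvertObject(lcr));
  COMMIT;
END;
/
```

Being able to hand-craft and enqueue a known-good LCR like this is exactly what makes the instrumented intercepts in the plan below feasible.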
My point is this:
If one calls an Oracle package with the required parameters and it should work but does not, and the package body PL/SQL is wrapped, then sure, Oracle Support needs to deal with it.
But if it is an entire system (Streams in this case) where the data held in an LCR gets corrupted sporadically, and nobody knows at which point, I thought it should be OK to:
1. Create a very simple Streams implementation and run a single transaction through it, one at a time, for each of the objects required
2. Simulate the load at higher volumes and monitor
3. Generate various mixed transactions that process various objects with various data, and monitor
4. In case of an error, replay the same process for the one transaction that caused it, or for a batch of transactions (a regression test)
5. When the error(s) can be reproduced, instrument the process by intercepting the data flow; that way there are three levels of control over the simulation:
a. The transaction generator (say, a PL/SQL procedure) that processes application objects; this can slow down the data flow on the input side
b. The CDC (change data capture) can be intercepted, then the propagation, and then the apply handler(s), all without any transformation
c. Finally, the transformations are introduced
6. One may find that once some critical mass (a high transfer rate) is reached, errors show up, but only in a custom capture transformation
7. The problem may show up only when a high volume of data is transferred, say millions of rows processed with infrequent COMMITs
8. A combination of 6 and 7, etc.
But if none of those errors can be reproduced with a single-path stream, then (if really necessary) multiple queue propagations can be introduced gradually, with parallel processing and so on, to locate the exact circumstances and thus the root cause of the problem.
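The single-path stream of step 1 takes only a few calls to stand up. A sketch of what I have in mind, assuming one queue serving both capture and apply on the same database, with all names (queue, capture, apply, table) invented for illustration:

```sql
BEGIN
  -- One queue serves both capture and apply in this single-path test
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.test_qt',
    queue_name  => 'strmadmin.test_q');

  -- Capture DML changes for a single test table
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'scott.emp_test',
    streams_type => 'capture',
    streams_name => 'test_capture',
    queue_name   => 'strmadmin.test_q',
    include_dml  => TRUE,
    include_ddl  => FALSE);

  -- Apply those changes from the same queue
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'scott.emp_test',
    streams_type => 'apply',
    streams_name => 'test_apply',
    queue_name   => 'strmadmin.test_q',
    include_dml  => TRUE,
    include_ddl  => FALSE);

  DBMS_APPLY_ADM.START_APPLY(apply_name => 'test_apply');
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'test_capture');
END;
/
```

With this in place, a single INSERT into the test table followed by a COMMIT gives one transaction to trace end to end before any load is simulated.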
Once the root cause is clear, one can opt for a different implementation, say downstream capture, or mining only the archived logs.
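If downstream capture ends up being the route, the core of it is redo shipped from the source plus a capture created on the downstream database. A hedged sketch, with the service and database names as placeholders:

```sql
-- On the SOURCE database: ship redo to the downstream database
-- (DSTRM is a placeholder for a real TNS alias)
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=DSTRM ASYNC NOREGISTER'
  SCOPE = BOTH;

-- On the DOWNSTREAM database: create a capture process that mines
-- the shipped logs on behalf of the source database
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'strmadmin.down_q',
    capture_name      => 'down_capture',
    source_database   => 'SOURCEDB.EXAMPLE.COM',
    use_database_link => FALSE);
END;
/
```

The appeal for our case is that the mining load, and any corruption it might trigger, moves off the production source entirely.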
If a workaround is found, Oracle may still fix the original problem with a patch, but the project moves along with a robust deployment, which is the preferred outcome.