Unfortunately I don't have access to trace files, even on our own database; the best I can get is AWR.
By large amounts I mean tables of around 5-20 GB each, and a total of roughly 150 GB of data transferred via DBLINK per day.
This is the daily process for updating our datamarts:
1. The DWH job does a truncate + insert of the tables into our buffer schema, over their dblink to our DB, once per day.
2. Our app goes through synonyms; for any given datamart the synonym points at either %TABLE%_A or %TABLE%_B.
3. Our job loads each table from the buffer schema into the inactive copy, %TABLE%_A or %TABLE%_B (whichever the synonym is not currently pointing at), with another truncate + insert /*+ append */.
4. Then it rebuilds the indexes, gathers statistics and switches the synonym (steps 3-4 are sketched just after this list).
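For reference, steps 3-4 boil down to something like the sketch below; CRMDM.SALES_FACT and the BUFFER schema are placeholder names, not our real objects:

-- Rough sketch of one datamart refresh (steps 3-4); all object names are placeholders
BEGIN
    EXECUTE IMMEDIATE 'TRUNCATE TABLE crmdm.sales_fact_b';

    -- Direct-path load from the buffer schema into the inactive copy
    EXECUTE IMMEDIATE
        'INSERT /*+ append */ INTO crmdm.sales_fact_b SELECT * FROM buffer.sales_fact';
    COMMIT;

    -- Rebuild the indexes on the freshly loaded copy
    FOR idx IN (SELECT index_name
                FROM   dba_indexes
                WHERE  owner = 'CRMDM' AND table_name = 'SALES_FACT_B') LOOP
        EXECUTE IMMEDIATE 'ALTER INDEX crmdm."' || idx.index_name || '" REBUILD';
    END LOOP;

    -- Gather stats on the new copy, then repoint the synonym the app uses
    dbms_stats.gather_table_stats(ownname => 'CRMDM', tabname => 'SALES_FACT_B');
    EXECUTE IMMEDIATE 'CREATE OR REPLACE SYNONYM crmdm.sales_fact FOR crmdm.sales_fact_b';
END;
/

In reality the job loops over all datamart tables and alternates between the _A and _B copies, but that's the shape of it.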
There is so much replication of data because we need it available 24/7 and because our DWH team refuses to provide a dblink that we could use to pull the data from their side.
On that topic I have another question: is it smart to gather statistics on every load with
-- run after each reload of the inactive copy
dbms_stats.gather_table_stats(
    ownname          => '"CRMDM"',
    tabname          => v_table_inactive,
    estimate_percent => dbms_stats.auto_sample_size,
    degree           => dbms_stats.auto_degree
);
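On the degree parameter specifically: auto_degree can resolve to a fairly high DOP for tables in the 5-20 GB range. One thing I'm considering is pinning a modest degree per table through a stats preference instead; this is just a sketch, with a placeholder table name:

BEGIN
    -- Cap the DOP that stats gathering uses on this table (one call per _A/_B copy)
    dbms_stats.set_table_prefs(
        ownname => 'CRMDM',
        tabname => 'SALES_FACT_B',   -- placeholder name
        pname   => 'DEGREE',
        pvalue  => '2');
END;
/

The gather call would then need to drop its explicit degree argument, since an explicitly passed degree overrides the table preference.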
Right now I'm observing 60-80% spikes in CPU/IO wait when lots of indexes are being rebuilt and statistics are gathered, which is undesirable.
We have GATHER_STATS_JOB running daily, and the data doesn't change too dramatically from day to day.
I'm going to set a low parallel degree for now to flatten the workload, but I'm wondering whether gathering stats on every load is mostly wasted work.
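If it does turn out to be wasted work, what I have in mind is to gather only when the reloaded row count drifts noticeably from what the current statistics already claim, roughly like this (the table name and the ~10% threshold are placeholders, not settled values):

DECLARE
    v_table_inactive VARCHAR2(30) := 'SALES_FACT_B';  -- placeholder name
    v_stat_rows      NUMBER;
    v_actual_rows    NUMBER;
BEGIN
    -- Row count recorded in the existing (previous load's) statistics
    SELECT num_rows INTO v_stat_rows
    FROM   dba_tab_statistics
    WHERE  owner = 'CRMDM' AND table_name = v_table_inactive;

    -- Row count actually loaded this run
    EXECUTE IMMEDIATE
        'SELECT COUNT(*) FROM "CRMDM".' || dbms_assert.simple_sql_name(v_table_inactive)
        INTO v_actual_rows;

    -- Regather only if the table grew or shrank by more than ~10%
    IF v_stat_rows IS NULL
       OR ABS(v_actual_rows - v_stat_rows) > 0.1 * GREATEST(v_stat_rows, 1) THEN
        dbms_stats.gather_table_stats(
            ownname          => 'CRMDM',
            tabname          => v_table_inactive,
            estimate_percent => dbms_stats.auto_sample_size);
            -- degree omitted so the per-table DEGREE preference applies
    END IF;
END;
/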