Hi Tom,
First of all, I would like to thank you for all the contributions you have made to the Oracle community. I started working on Oracle DB 9 years ago, and I can confidently say that I got all my knowledge from AskTom :) I have followed your blog for some time, and this is the first time I have decided to ask.
I work on a custom CRM system's database at a telecommunications company, which has an internal resource management system. Lately we have been getting many reports from the DBAs about DB locking/contention and even deadlocks during DML operations on some inventory resource tables. After much investigation, I discovered that there are a few bitmap indexes on the table. Note that this is an OLTP DB and should not have bitmap indexes, because concurrency is the nature of the business. I know the explanation that follows screams bad design all over, but I have no control over it.
Below is the link to my schema (please let me know if it doesn't work; it is my first time using the feature):
https://livesql.oracle.com/apex/livesql/s/dpe2gx4d8hi0t5mn8hp5u2dq0
Some background:
- T_RESOURCE_HEADER is the main resource inventory table. As of today, it contains some 27 million records.
- T_IMSI_DETAIL is the child table linked to T_RESOURCE_HEADER. This table holds SIM card data and contains some 18 million records.
- H_RESOURCE_AUDIT is an audit table for T_RESOURCE_HEADER. Any modification to T_RESOURCE_HEADER is recorded here by a trigger (I know...).
Our main goal is to remove the bitmap join index below:
CREATE BITMAP INDEX BMI_T_RES_TYPE_RES_HEAD ON T_RESOURCE_HEADER
(T_IMSI_DETAIL.PREPAID_FLAG, T_IMSI_DETAIL.HLRID)
FROM T_RESOURCE_HEADER, T_IMSI_DETAIL
WHERE T_RESOURCE_HEADER.RESOURCE_ID = T_IMSI_DETAIL.RESOURCE_ID;
We have tried removing the bitmap index in our performance test environment. Modifications are fast and no contention/deadlocks are seen; however, much of the existing validation functionality is impacted, e.g. the query below now takes > 15 minutes and the application times out.
SELECT COUNT(hdr.RESOURCE_ID)
FROM T_RESOURCE_HEADER hdr, T_IMSI_DETAIL imsi
WHERE hdr.RESOURCE_ID = imsi.RESOURCE_ID
AND hdr.RESOURCE_TYPE_REF_ID = 2 -- Resource Type = IMSI
AND hdr.status_cd = '26' -- Resource Status = Ready
AND ROWNUM <= 10000 -- request to recycle 10000 sim cards
Since then, there are a few things we have tried:
1. Create a materialized view (REFRESH FAST ON DEMAND) to capture the above snapshot of available SIM cards, and refresh it regularly using DBMS_SCHEDULER. Whenever the application needs to validate, it queries the MV. Any modification still goes to the original tables and is synchronized back via the MV logs. This worked for some time and tested solid in the performance test environment, but then the DBAs complained that there was too much to monitor (since the application team does not have much control over or access to MVs). Sadly this initiative was then scrapped.
2. Same idea as #1, but instead of an MV, use a temporary table (basically mirroring the MV, with all the information in the table) to capture the snapshot of available SIM cards (some 12 million records). This table would be refreshed by a daily batch job: drop the table, recreate it, apply the indexes, and analyze the table.
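For reference, the fast-refresh join MV from #1 looked roughly like this (a sketch only: the MV name and column list are illustrative, and a fast-refreshable join MV requires MV logs created WITH ROWID on both base tables, plus both tables' rowids in the select list):

```sql
-- MV logs must include rowids for a fast-refresh join MV
CREATE MATERIALIZED VIEW LOG ON t_resource_header WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON t_imsi_detail WITH ROWID;

CREATE MATERIALIZED VIEW mv_available_sims
REFRESH FAST ON DEMAND AS
SELECT hdr.ROWID AS hdr_rid, imsi.ROWID AS imsi_rid,
       hdr.resource_id, hdr.resource_type_ref_id, hdr.status_cd
FROM   t_resource_header hdr, t_imsi_detail imsi
WHERE  hdr.resource_id = imsi.resource_id;
```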
We are still in the process of developing the script for this and I would like to ask for some pointers.
1. What is the fastest way to insert some 12 million rows into the temporary table? Should I use the APPEND hint? As I understand it, APPEND will not reuse existing space/blocks, but how "wasteful" will that be for 12 million records? I tested using CREATE TABLE AS SELECT and INSERT INTO ... SELECT, but I am currently getting ORA-02393, which is being resolved by the DBAs. Hence I cannot yet tell you the performance of either syntax for creating the table.
2. I have not tested using BULK COLLECT with FORALL, but I will eventually test the performance and compare it to #1. Do you have any tips on this?
3. Any other insert method that you think I could try? SQL*Loader isn't really an option for us.
Thanks.
We didn't really need to see the whole schema. But thanks for using LiveSQL anyway! :)
Before getting to your temp table questions, some observations.
You have lots of indexes on T_RESOURCE_HEADER! Many of these have the same leading columns.
A quick glance shows the following candidates for your query:
IDX_T_RES_HDR_RESTYPE_ST_RSCID
IDX_T_RES_HDR_RES_TYPE_STAT
IDX_T_RES_HDR_RES_TSE
BMI_T_RES_HEAD_TYPE
T_RESOURCE_HEADER_PK
With so many to pick from, it's possible the optimizer is choosing a suboptimal index for your 15-minute query.
Before going any further, I would triage these. Look at all those with the same leading columns (e.g. IDX_T_RES_HDR_RESTYPE_ST_RSCID & IDX_T_RES_HDR_RES_TYPE_STAT) and see if you can remove any.
Even if this doesn't help this particular query it's still a good exercise to do.
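To spot the overlaps, you could list each index's columns side by side; something like this (a sketch using the LISTAGG aggregate against USER_IND_COLUMNS) makes indexes with the same leading columns easy to eyeball:

```sql
-- One row per index, columns in position order; sorting by the
-- column list puts indexes with shared leading columns together
SELECT index_name,
       LISTAGG(column_name, ', ')
         WITHIN GROUP (ORDER BY column_position) AS cols
FROM   user_ind_columns
WHERE  table_name = 'T_RESOURCE_HEADER'
GROUP  BY index_name
ORDER  BY cols;
```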
Next, you have a working solution using MVs. Precisely what extra monitoring do your DBAs need to do for this that they won't have to do for the temporary table solution? That's really a question for them. But if they're rejecting a working solution, they really need to justify why they're giving you extra work. Because they'll still need to monitor the custom-built solution!
Finally, do you really need to rebuild the temp table every time? Is the data going to change that significantly day-to-day? If the changes are few, could you do a merge instead?
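If the day-to-day changes are small, the incremental refresh could be a single MERGE; a sketch, assuming a snapshot table and column list like those in your query (snapshot_avail_sims is an illustrative name, and rows that should disappear from the snapshot would need a separate DELETE or a WHEN MATCHED ... DELETE clause):

```sql
-- Upsert the current state of IMSI resources into the snapshot
MERGE INTO snapshot_avail_sims s
USING (SELECT hdr.resource_id, hdr.status_cd
       FROM   t_resource_header hdr, t_imsi_detail imsi
       WHERE  hdr.resource_id = imsi.resource_id
       AND    hdr.resource_type_ref_id = 2) src
ON (s.resource_id = src.resource_id)
WHEN MATCHED THEN
  UPDATE SET s.status_cd = src.status_cd
WHEN NOT MATCHED THEN
  INSERT (resource_id, status_cd)
  VALUES (src.resource_id, src.status_cd);
```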
Assuming it's worth rebuilding the table (i.e. most or all of the data is different), then CTAS should be the most efficient method.
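A sketch of what that CTAS could look like (the table name is illustrative; NOLOGGING and PARALLEL are optional extras that need your DBAs' blessing, since NOLOGGING affects recoverability until the next backup):

```sql
-- Rebuild the snapshot in one direct-path operation
CREATE TABLE t_avail_sims NOLOGGING PARALLEL 4 AS
SELECT hdr.resource_id, hdr.status_cd
FROM   t_resource_header hdr, t_imsi_detail imsi
WHERE  hdr.resource_id = imsi.resource_id
AND    hdr.resource_type_ref_id = 2   -- Resource Type = IMSI
AND    hdr.status_cd = '26';          -- Resource Status = Ready
```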
You're right: INSERT with the APPEND hint inserts data above the high water mark (HWM). If you're deleting the data before the insert, this will be very wasteful! The HWM will grow and grow. You'll need to move/shrink the table to reclaim the space.
But if you're using truncate to clear the data, this resets the HWM. So you're fine.
SQL> create table t tablespace assm_tblsp enable row movement as
2 select * from dba_objects;
Table created.
SQL>
SQL> select bytes from user_segments
2 where segment_name = 'T';
BYTES
----------
12582912
SQL>
SQL> delete t;
91529 rows deleted.
SQL> insert /*+ append */ into t
2 select * from dba_objects;
91529 rows created.
SQL> commit;
Commit complete.
SQL>
SQL> select bytes from user_segments
2 where segment_name = 'T';
BYTES
----------
24117248
SQL>
SQL> alter table t shrink space;
Table altered.
SQL>
SQL> select bytes from user_segments
2 where segment_name = 'T';
BYTES
----------
11993088
SQL>
SQL> truncate table t;
Table truncated.
SQL> insert /*+ append */ into t
2 select * from dba_objects;
91529 rows created.
SQL> commit;
Commit complete.
SQL>
SQL> select bytes from user_segments
2 where segment_name = 'T';
BYTES
----------
12582912
No particular tips on bulk collect at this stage!