You Asked
Hi Tom, good morning Sir.
I am trying to speed up an Insert statement where the SELECT is selecting all the source table according to this example:
var_select_a_insert :=
'insert into lwm_usrappods.ITEM_LOC_SOH
select
to_number(to_char(trunc(sysdate-1), ''YYYYMMDD'')) as ID_FECHA,
ITEM,
ITEM_PARENT,
ITEM_GRANDPARENT,
LOC,
LOC_TYPE,
AV_COST,
UNIT_COST,
STOCK_ON_HAND,
SOH_UPDATE_DATETIME,
LAST_HIST_EXPORT_DATE,
IN_TRANSIT_QTY,
PACK_COMP_INTRAN,
PACK_COMP_SOH,
TSF_RESERVED_QTY,
PACK_COMP_RESV,
TSF_EXPECTED_QTY,
PACK_COMP_EXP,
RTV_QTY,
NON_SELLABLE_QTY,
CUSTOMER_RESV,
CUSTOMER_BACKORDER,
PACK_COMP_CUST_RESV,
PACK_COMP_CUST_BACK,
CREATE_DATETIME,
LAST_UPDATE_DATETIME,
LAST_UPDATE_ID,
FIRST_RECEIVED,
LAST_RECEIVED,
QTY_RECEIVED,
FIRST_SOLD,
LAST_SOLD,
QTY_SOLD,
PRIMARY_SUPP,
PRIMARY_CNTRY,
AVERAGE_WEIGHT,
FINISHER_AV_RETAIL,
FINISHER_UNITS,
PACK_COMP_NON_SELLABLE,
sysdate as CREATE_DATETIMEODS,
LMX_REPORTS_SQL.GET_ALLOC_DISTRO_QTY(ITEM, LOC, LOC_TYPE) AS ALLOC_DISTRO_QTY,
LMX_REPORTS_SQL.GET_TSF_AVAIL_QTY(ITEM, LOC, LOC_TYPE) AS TSF_AVAIL_QTY
from
rms13prd.ITEM_LOC_SOH';
rms13prd.ITEM_LOC_SOH contains the transaction data and has almost 800,000 records. The developer is inserting the data into his historical table lwm_usrappods.ITEM_LOC_SOH, which is 97 GB in size (not partitioned).
Is there any way to speed up this INSERT statement? I am going to drop the indexes before the INSERT and recreate them after the INSERT process.
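Beyond dropping the indexes, one common approach for a full nightly reload is a direct-path insert. The sketch below is illustrative only (whether NOLOGGING is acceptable depends on this site's recovery requirements; the table names are taken from the question, the column list is elided):

```sql
-- Sketch: direct-path (APPEND) insert writes above the high-water mark,
-- bypassing the buffer cache; with NOLOGGING on the target it also
-- minimizes redo generation.
alter table lwm_usrappods.ITEM_LOC_SOH nologging;

insert /*+ APPEND */ into lwm_usrappods.ITEM_LOC_SOH
select ...            -- same column list as in the statement above
from rms13prd.ITEM_LOC_SOH;

commit;  -- a commit is required before the session can re-read
         -- a table it has just loaded via APPEND

-- Caveat: NOLOGGING loads cannot be recovered from the redo stream,
-- so take a backup of the target afterwards if recoverability matters.
```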
I am also trying to create a snapshot log on the source table so that only the changed records are inserted, but I am checking with the developer whether that can be done.
Currently, all 800,000 records are inserted into the target table (LWM_USRAPPODS.ITEM_LOC_SOH) every day at night.
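The snapshot-log idea could be sketched as a materialized view log plus a fast-refreshable materialized view, so that only the nightly deltas move instead of all 800,000 rows. This is a sketch with illustrative names; note that the two derived columns computed by the LMX_REPORTS_SQL functions would likely make a fast refresh impossible if they were included in the view's defining query:

```sql
-- Sketch: record changed rows on the source table...
create materialized view log on rms13prd.ITEM_LOC_SOH
  with primary key;

-- ...and maintain a local copy that pulls only those changes.
-- ITEM_LOC_SOH_MV is an illustrative name.
create materialized view ITEM_LOC_SOH_MV
  refresh fast on demand
  as select * from rms13prd.ITEM_LOC_SOH;

-- Nightly job: 'F' requests a fast (incremental) refresh.
exec dbms_mview.refresh('ITEM_LOC_SOH_MV', method => 'F');
```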
Any advice or help will be really appreciated.
Thanks in advance.
Kind regards,
Francisco Mtz.
and Connor said...
800,000 records isn't large. For example, on my laptop:
SQL> create table source_table as
2 select d.* from dba_objects d,
3 ( select 1 from dual connect by level <= 20 )
4 where rownum <= 800000;
Table created.
SQL> create table target_table as
2 select * from dba_objects where 1=0;
Table created.
SQL> set timing on
SQL> insert into target_table
2 select * from source_table;
800000 rows created.
Elapsed: 00:00:03.28
So something is slowing you down - could be indexes, could be something else.
Best thing to do is see *where* the time is being lost. So do:
- exec dbms_monitor.session_trace_enable(waits=>true)
- run your insert
- exec dbms_monitor.session_trace_disable
and then run tkprof on the trace file and see where the time is lost. If you get stuck, post the results here and we'll take a look.
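The steps above could look like this in the loading session (the trace file name and tkprof arguments are illustrative; the exact file name comes from the database's trace directory):

```sql
-- Sketch of the tracing workflow described above.
exec dbms_monitor.session_trace_enable(waits => true, binds => false)

-- ... run the slow INSERT here ...

exec dbms_monitor.session_trace_disable

-- Then locate the trace file and format it from the OS shell, e.g.:
--   tkprof orcl_ora_12345.trc insert_trace.txt sort=exeela waits=yes
-- The report shows, per statement, where elapsed time went
-- (CPU vs. wait events such as index maintenance or I/O).
```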