Improve INSERT speed for loading volume sample data


Question and Answer

Connor McDonald

Thanks for the question, Sagar.

Asked: July 27, 2018 - 5:49 pm UTC

Last updated: August 15, 2018 - 11:57 pm UTC

Version: 12.2.0.1.0

Viewed 10K+ times

You Asked

Hi Team

At the time of writing, LiveSQL gives me a "500 Internal Server Error", so I will just paste the reproducible code here.

I'm building sample application data for testing using PL/SQL.
Loading 10M rows into the USERS_MAPPING table currently takes ~17 minutes. Is there a way I can speed up the insert into USERS_MAPPING?

Database: Oracle 12.2 EE
OEL version: 7.5

Appreciate your help.

Thanks..

-- Drop tables
DROP TABLE users_mapping PURGE;
DROP TABLE users PURGE;

-- Create table USERS
CREATE TABLE users
 (
  users_id VARCHAR2 (40 CHAR ) NOT NULL,
  displayName VARCHAR2 (255 CHAR ) NOT NULL,
  createdBy VARCHAR2 (255 CHAR ),
  createdDate TIMESTAMP(6),
  lastModifiedBy VARCHAR2 (255 CHAR ),
  lastModifiedDate TIMESTAMP(6),
  CONSTRAINT users_userid_pk PRIMARY KEY ( users_id )
 )
;

-- Create table USERS_MAPPING
CREATE TABLE users_mapping
 (
  users_id VARCHAR2(40 CHAR) NOT NULL,
  users_mapping_id VARCHAR2(40 CHAR) NOT NULL,
  createdBy VARCHAR2 (255 CHAR ),
  createdDate TIMESTAMP(6),
  lastModifiedBy VARCHAR2 (255 CHAR ),
  lastModifiedDate TIMESTAMP(6),
  name VARCHAR2(255 CHAR) NOT NULL,
  CONSTRAINT users_mapping_id_pk PRIMARY KEY ( users_id, users_mapping_id ),
  CONSTRAINT users_mapping_userid_fk FOREIGN KEY ( users_id ) REFERENCES users ( users_id )
 )
;

-- Load USERS table data
BEGIN
 FOR i in 0..1000
 LOOP
  INSERT INTO users 
  VALUES (
    SYS_GUID(),
    'User-'||i,
    'Admin',
    SYSTIMESTAMP,
    'Admin',
    SYSTIMESTAMP       
   );
 END LOOP;
 COMMIT;
EXCEPTION
WHEN OTHERS THEN
 RAISE_APPLICATION_ERROR(-20100,
    'ORA Error Code =>'|| SQLERRM
     || CHR(10)
     ||'ORA Error Stack => '
     || SUBSTR (DBMS_UTILITY.FORMAT_ERROR_STACK,1,500)
     || CHR(10)
     ||'ORA Error Backtrace => '
     || SUBSTR (DBMS_UTILITY.FORMAT_ERROR_BACKTRACE,1,500)
    );
END;
/

-- Load USERS_MAPPING table data
DECLARE
 TYPE t_users_id IS TABLE OF VARCHAR2(40);
 v_users_id t_users_id:=t_users_id();
 v_sql VARCHAR2(4000):=NULL;
BEGIN
 SELECT users_id
  BULK COLLECT INTO v_users_id
 FROM users
 ORDER BY users_id;

 IF (v_users_id.COUNT>0) THEN
  FOR i IN v_users_id.FIRST .. v_users_id.LAST
  LOOP
   FOR j in 1..10000
   LOOP
    v_sql := 'INSERT INTO users_mapping VALUES (:1, :2, :3, :4, :5, :6, :7)';
    EXECUTE IMMEDIATE v_sql USING v_users_id(i), SYS_GUID(), 'Admin', SYSTIMESTAMP, 'Admin', SYSTIMESTAMP, 'Mapping'||j;
   END LOOP;
   COMMIT;
  END LOOP;
 END IF;
EXCEPTION
WHEN OTHERS THEN
 RAISE_APPLICATION_ERROR(-20100,
    'ORA Error Code =>'|| SQLERRM
     || CHR(10)
     ||'ORA Error Stack => '
     || SUBSTR (DBMS_UTILITY.FORMAT_ERROR_STACK,1,500)
     || CHR(10)
     ||'ORA Error Backtrace => '
     || SUBSTR (DBMS_UTILITY.FORMAT_ERROR_BACKTRACE,1,500)
    );
END;
/

and Connor said...

Without touching the indexes, you can achieve the following:

SQL> CREATE TABLE users
  2   (
  3    users_id VARCHAR2 (40 CHAR ) NOT NULL,
  4    displayName VARCHAR2 (255 CHAR ) NOT NULL,
  5    createdBy VARCHAR2 (255 CHAR ),
  6    createdDate TIMESTAMP(6),
  7    lastModifiedBy VARCHAR2 (255 CHAR ),
  8    lastModifiedDate TIMESTAMP(6),
  9    CONSTRAINT users_userid_pk PRIMARY KEY ( users_id )
 10   )
 11  ;

Table created.

SQL>
SQL> CREATE TABLE users_mapping
  2   (
  3    users_id VARCHAR2(40 CHAR) NOT NULL,
  4    users_mapping_id VARCHAR2(40 CHAR) NOT NULL,
  5    createdBy VARCHAR2 (255 CHAR ),
  6    createdDate TIMESTAMP(6),
  7    lastModifiedBy VARCHAR2 (255 CHAR ),
  8    lastModifiedDate TIMESTAMP(6),
  9    name VARCHAR2(255 CHAR) NOT NULL,
 10    CONSTRAINT users_mapping_id_pk PRIMARY KEY ( users_id, users_mapping_id ),
 11    CONSTRAINT users_mapping_userid_fk FOREIGN KEY ( users_id ) REFERENCES users ( users_id ) deferrable
 12   )
 13  ;

Table created.

--
-- notice I made the constraint deferrable
--
SQL>
SQL> set constraint users_mapping_userid_fk deferred;

Constraint set.

SQL>
SQL> set timing on
SQL> insert into users
  2  select
  3      SYS_GUID(),
  4      'User-'||rownum,
  5      'Admin',
  6      SYSTIMESTAMP,
  7      'Admin',
  8      SYSTIMESTAMP
  9  from dual connect by level <= 1000;

1000 rows created.

Elapsed: 00:00:00.13
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL>
SQL> insert /*+ APPEND */ into users_mapping
  2  select u.users_id, SYS_GUID(), 'Admin', SYSTIMESTAMP, 'Admin', SYSTIMESTAMP, 'Mapping'||r
  3  from users u,
  4       ( select rownum r from dual connect by level <= 1000 );

1000000 rows created.

Elapsed: 00:00:54.88
SQL>


Now, that only loaded 1,000 rows into users_mapping for each user. Before moving on to your 10,000 per user, I'll change the approach to do the index maintenance *after* the load:

SQL> CREATE TABLE users
  2   (
  3    users_id VARCHAR2 (40 CHAR ) NOT NULL,
  4    displayName VARCHAR2 (255 CHAR ) NOT NULL,
  5    createdBy VARCHAR2 (255 CHAR ),
  6    createdDate TIMESTAMP(6),
  7    lastModifiedBy VARCHAR2 (255 CHAR ),
  8    lastModifiedDate TIMESTAMP(6)
  9   )
 10  ;

Table created.

SQL>
SQL> CREATE TABLE users_mapping
  2   (
  3    users_id VARCHAR2(40 CHAR) NOT NULL,
  4    users_mapping_id VARCHAR2(40 CHAR) NOT NULL,
  5    createdBy VARCHAR2 (255 CHAR ),
  6    createdDate TIMESTAMP(6),
  7    lastModifiedBy VARCHAR2 (255 CHAR ),
  8    lastModifiedDate TIMESTAMP(6),
  9    name VARCHAR2(255 CHAR) NOT NULL
 10   ) 
 11  ;

Table created.

SQL> set timing on
SQL> insert into users
  2  select
  3      SYS_GUID(),
  4      'User-'||rownum,
  5      'Admin',
  6      SYSTIMESTAMP,
  7      'Admin',
  8      SYSTIMESTAMP
  9  from dual connect by level <= 1000;

1000 rows created.

Elapsed: 00:00:00.07
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL>
SQL> insert /*+ APPEND */ into users_mapping
  2  select u.users_id, SYS_GUID(), 'Admin', SYSTIMESTAMP, 'Admin', SYSTIMESTAMP, 'Mapping'||r
  3  from users u,
  4       ( select rownum r from dual connect by level <= 10000 );

10000000 rows created.

Elapsed: 00:03:47.17
SQL>
SQL> commit;

Commit complete.

Elapsed: 00:00:00.07
SQL>
SQL> alter table users add CONSTRAINT users_userid_pk PRIMARY KEY ( users_id );

Table altered.

Elapsed: 00:00:00.00
SQL> alter table users_mapping add  CONSTRAINT users_mapping_id_pk PRIMARY KEY ( users_id, users_mapping_id );

Table altered.

Elapsed: 00:02:11.75
SQL> alter table users_mapping add  CONSTRAINT users_mapping_userid_fk FOREIGN KEY ( users_id ) REFERENCES users ( users_id ) ;

Table altered.

Elapsed: 00:00:05.19
SQL>
SQL>
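
An alternative worth noting (a sketch only, not benchmarked in this thread): rather than creating the tables with no constraints at all, you can keep the constraint definitions and simply disable them for the duration of the load. Re-enabling the primary key afterwards rebuilds its index in one pass, which is effectively the same cost profile as adding the constraint after the load.

```sql
-- Sketch (assumes the constrained tables from the first demo above).
-- Disabling the PK drops its index, so the direct-path load pays no
-- per-row index maintenance; ENABLE rebuilds the index once at the end.
alter table users_mapping disable constraint users_mapping_userid_fk;
alter table users_mapping disable constraint users_mapping_id_pk;

insert /*+ APPEND */ into users_mapping
select u.users_id, sys_guid(), 'Admin', systimestamp, 'Admin', systimestamp, 'Mapping'||r
from   users u,
       ( select rownum r from dual connect by level <= 10000 );
commit;

alter table users_mapping enable constraint users_mapping_id_pk;
alter table users_mapping enable constraint users_mapping_userid_fk;
```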


So that brings us down to around 5 minutes, and that is just on my laptop. But you'll also find that most of this time is spent on CPU generating the GUIDs. I've changed your example to use sequences, and look what happens:

SQL> create sequence user_seq;

Sequence created.

SQL> create sequence users_mapping_seq cache 100000;

Sequence created.

SQL>
SQL> CREATE TABLE users
  2   (
  3    users_id VARCHAR2 (40 CHAR ) NOT NULL,
  4    displayName VARCHAR2 (255 CHAR ) NOT NULL,
  5    createdBy VARCHAR2 (255 CHAR ),
  6    createdDate TIMESTAMP(6),
  7    lastModifiedBy VARCHAR2 (255 CHAR ),
  8    lastModifiedDate TIMESTAMP(6)
  9   )
 10  ;

Table created.

SQL>
SQL> CREATE TABLE users_mapping
  2   (
  3    users_id VARCHAR2(40 CHAR) NOT NULL,
  4    users_mapping_id VARCHAR2(40 CHAR) NOT NULL,
  5    createdBy VARCHAR2 (255 CHAR ),
  6    createdDate TIMESTAMP(6),
  7    lastModifiedBy VARCHAR2 (255 CHAR ),
  8    lastModifiedDate TIMESTAMP(6),
  9    name VARCHAR2(255 CHAR) NOT NULL
 10   ) 
 11  ;

Table created.

SQL>
SQL> set timing on
SQL> insert into users
  2  select
  3      user_seq.nextval,
  4      'User-'||rownum,
  5      'Admin',
  6      SYSTIMESTAMP,
  7      'Admin',
  8      SYSTIMESTAMP
  9  from dual connect by level <= 1000;

1000 rows created.

Elapsed: 00:00:00.03
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL>
SQL> insert /*+ APPEND */ into users_mapping
  2  select u.users_id, users_mapping_seq.nextval, 'Admin', SYSTIMESTAMP, 'Admin', SYSTIMESTAMP, 'Mapping'||r
  3  from users u,
  4       ( select rownum r from dual connect by level <= 10000 );

10000000 rows created.

Elapsed: 00:00:18.24
SQL>
SQL> commit;

Commit complete.

Elapsed: 00:00:00.06
SQL> alter table users add CONSTRAINT users_userid_pk PRIMARY KEY ( users_id );

Table altered.

Elapsed: 00:00:00.04
SQL> alter table users_mapping add  CONSTRAINT users_mapping_id_pk PRIMARY KEY ( users_id, users_mapping_id );

Table altered.

Elapsed: 00:00:20.78
SQL> alter table users_mapping add  CONSTRAINT users_mapping_userid_fk FOREIGN KEY ( users_id ) REFERENCES users ( users_id ) ;

Table altered.

Elapsed: 00:00:01.47


Yes, that is 40 seconds total.

Rating: (7 ratings)

Comments

Proof

A reader, July 29, 2018 - 5:05 am UTC

You said: "But, also you'll find that most of this time is spent on CPU getting the GUIDs."

Please show the indicators proving the CPU cost, or demonstrate step by step how much time is spent per resource.
Connor McDonald
July 29, 2018 - 11:45 am UTC

SQL> set timing on
SQL> select max(x)
  2  from ( select sys_guid() x from dual connect by level <= 500000 )
  3  /

MAX(X)
--------------------------------
FFFFD900205148EB9E313D6CFF300855

Elapsed: 00:00:19.79


That works out to roughly 25,000 sys_guid() calls per second.

Which is totally fine... unless I want to load millions or billions of rows.
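
For comparison (a sketch in the same style; the timing is machine-dependent and not from the original thread), the cost of generating the same number of sequence values can be measured the same way:

```sql
-- Counterpart measurement: raw cost of 500,000 sequence fetches.
-- A large cache keeps most of the calls away from the data dictionary.
create sequence guid_test_seq cache 100000;

set timing on
select max(x)
from ( select guid_test_seq.nextval x from dual connect by level <= 500000 );
```

On comparable hardware this typically completes in a small fraction of the SYS_GUID time, which is the CPU saving the sequence-based load above is exploiting.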

Cannot see any CPU measure in your post.

A reader, July 29, 2018 - 7:30 pm UTC


Append hint

Rajeshwaran, Jeyabal, July 30, 2018 - 9:40 am UTC

Team,

Any reason to have the APPEND hint in the above script while loading the data into the child table?

Setting the constraint to deferred while using the APPEND hint doesn't do a direct-path load (we are able to query the table while the transaction has not yet ended):
demo@ORA12C>  CREATE TABLE users
  2      (
  3       users_id VARCHAR2 (40 CHAR ) NOT NULL,
  4       displayName VARCHAR2 (255 CHAR ) NOT NULL,
  5       createdBy VARCHAR2 (255 CHAR ),
  6       createdDate TIMESTAMP(6),
  7       lastModifiedBy VARCHAR2 (255 CHAR ),
  8       lastModifiedDate TIMESTAMP(6),
  9       CONSTRAINT users_userid_pk PRIMARY KEY ( users_id )
 10      )
 11     ;

Table created.

demo@ORA12C> CREATE TABLE users_mapping
  2     (
  3      users_id VARCHAR2(40 CHAR) NOT NULL,
  4      users_mapping_id VARCHAR2(40 CHAR) NOT NULL,
  5      createdBy VARCHAR2 (255 CHAR ),
  6      createdDate TIMESTAMP(6),
  7      lastModifiedBy VARCHAR2 (255 CHAR ),
  8      lastModifiedDate TIMESTAMP(6),
  9      name VARCHAR2(255 CHAR) NOT NULL,
 10      CONSTRAINT users_mapping_id_pk PRIMARY KEY ( users_id, users_mapping_id ),
 11      CONSTRAINT users_mapping_userid_fk FOREIGN KEY ( users_id ) REFERENCES users ( users_id ) deferrable
 12     )
 13    ;

Table created.

demo@ORA12C> set constraint users_mapping_userid_fk deferred;

Constraint set.

demo@ORA12C> insert into users
  2  select
  3    SYS_GUID(),
  4    'User-'||rownum,
  5    'Admin',
  6    SYSTIMESTAMP,
  7    'Admin',
  8    SYSTIMESTAMP
  9  from dual connect by level <= 1000;

1000 rows created.

demo@ORA12C> insert /*+ APPEND */ into users_mapping
  2      select u.users_id, SYS_GUID(), 'Admin', SYSTIMESTAMP, 'Admin', SYSTIMESTAMP, 'Mapping'||r
  3      from users u,
  4           ( select rownum r from dual connect by level <= 1000 );

1000000 rows created.

demo@ORA12C> select * from users_mapping where rownum = 1;

USERS_ID                                 USERS_MAPPING_ID
---------------------------------------- ----------------------------------------
CREATEDBY
-----------------------------------------------------------------------------------------------------------------------------------
CREATEDDATE
---------------------------------------------------------------------------
LASTMODIFIEDBY
-----------------------------------------------------------------------------------------------------------------------------------
LASTMODIFIEDDATE                                                            NAME
--------------------------------------------------------------------------- --------------------
8D1CF0D9CB7E44ECBA61D017DEB92E39         E29A96E015744D87B23AF361A7DFE735
Admin
30-JUL-18 03.01.26.641000 PM
Admin
30-JUL-18 03.01.26.641000 PM                                                Mapping1

Now, disabling the constraint causes direct-path loading to take place:

demo@ORA12C> truncate table users_mapping ;

Table truncated.

demo@ORA12C> alter table users_mapping disable constraint users_mapping_userid_fk;

Table altered.

demo@ORA12C> insert /*+ APPEND */ into users_mapping
  2      select u.users_id, SYS_GUID(), 'Admin', SYSTIMESTAMP, 'Admin', SYSTIMESTAMP, 'Mapping'||r
  3      from users u,
  4           ( select rownum r from dual connect by level <= 1000 );

1000000 rows created.

demo@ORA12C> select * from users_mapping where rownum = 1;
select * from users_mapping where rownum = 1
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


demo@ORA12C>


Then we can re-enable the constraint with either ENABLE NOVALIDATE or RELY ENABLE NOVALIDATE, depending on the needs.
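
For reference, a sketch of those two options against the constraint used above:

```sql
-- Re-enable the FK without re-checking the rows that already exist.
alter table users_mapping enable novalidate constraint users_mapping_userid_fk;

-- Optionally mark it RELY so the optimizer can trust the constraint
-- (e.g. for query rewrite) even though existing data was not validated.
alter table users_mapping modify constraint users_mapping_userid_fk rely;
```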

Thanks.. need more assistance

Sagar, August 09, 2018 - 7:41 am UTC

Thanks for the response, Connor.

I used your feedback and was able to incorporate it into part of the code.
I'm now facing a performance issue when inserting data in my last PL/SQL module. Right now I'm testing it on a limited volume; the actual load volume would be in the millions.
Any help would be appreciated here.

Brief overview of my table structure:
Org - stores organisation data.
Objects - stores the types of objects in my organisations, like Employee, Class, etc.
Objectdetails - stores instances of the object types, like X employees in an org, Y classes in an org.
Fields - stores attribute names and types for the various objects, e.g. Employee's name attribute is of type String.
Fieldsval - stores values for the attributes, e.g. 'TestUser1' is the value of Employee's name attribute.

Now in my PL/SQL module, the last part, where I insert data into the FIELDSVAL table via the stored procedure test_sp_bulkload_fieldsval, consumes the most time.
I can't get my head around how to bulk insert the data here. Right now I'm doing single-row inserts into this table, which is costing me time.

LiveSQL code contains the re-producible use case:
https://livesql.oracle.com/apex/livesql/s/g3cn743mas7awtih4hsqfg3ol

Thanks in advance..

Connor McDonald
August 15, 2018 - 11:57 pm UTC

Bulk inserts in PL/SQL will be faster than row-at-a-time inserts, so taking one example from your LiveSQL script:

FOR i IN v_org_id.FIRST .. v_org_id.LAST
LOOP
  IF (v_objectname.COUNT>0) THEN
    FOR j IN v_objectname.FIRST .. v_objectname.LAST
    LOOP
      v_sql := 'INSERT INTO objects VALUES (:1, :2, :3, :4, :5)';
      EXECUTE IMMEDIATE v_sql USING v_org_id(i), SYS_GUID(), 'ObjectsAdmin', SYSTIMESTAMP, v_objectname(j);
    END LOOP;
  END IF;
  COMMIT;
END LOOP;


could be replaced with

FOR i IN v_org_id.FIRST .. v_org_id.LAST
LOOP
  IF (v_objectname.COUNT>0) THEN
    FORALL j IN v_objectname.FIRST .. v_objectname.LAST
      INSERT INTO objects VALUES (v_org_id(i), SYS_GUID(), 'ObjectsAdmin', SYSTIMESTAMP, v_objectname(j));
  END IF;
  COMMIT;
END LOOP;
      


See how the loop/dynamic SQL has been replaced with a single FORALL statement. You can apply this pattern throughout your code.
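
A self-contained sketch of the same pattern (the column names and the FIELDS source table here are hypothetical; adjust to your actual schema):

```sql
DECLARE
  TYPE t_vc255 IS TABLE OF VARCHAR2(255);
  v_objectname t_vc255;
BEGIN
  -- One bulk fetch out of the database...
  SELECT name
    BULK COLLECT INTO v_objectname
  FROM fields;

  -- ...and one bulk statement back in: FORALL sends the whole
  -- collection to the SQL engine in a single round trip.
  FORALL j IN 1 .. v_objectname.COUNT
    INSERT INTO objects
    VALUES ( SYS_GUID(), SYS_GUID(), 'ObjectsAdmin', SYSTIMESTAMP, v_objectname(j) );

  COMMIT;
END;
/
```

For very large collections, adding a LIMIT clause to the BULK COLLECT and looping keeps PGA memory bounded.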

Thanks

Sagar, August 18, 2018 - 9:43 am UTC

Thanks for the response, Connor.

Have a nice day!
