Update 5 Million records in Database in least time

Question and Answer
Connor McDonald

Thanks for the question, Pankaj.

Asked: September 05, 2017 - 9:22 am UTC

Last updated: September 07, 2017 - 2:33 am UTC

Version: 11g

Viewed 10K+ times

You Asked

I have approximately 5 million records in a table, and I need to update one column of this table from another table. I have used BULK COLLECT with the FORALL option (limit = 500), but it is taking 6 to 7 hours.

Is there any option available in Oracle that can process this faster (ideally in less than 1 hour)?
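For reference, the batched row-by-row pattern described in the question typically looks something like the sketch below. The table and column names (source_table, target_table, id, new_value, col_to_update) are hypothetical, since the question does not give them:

```sql
DECLARE
  -- hypothetical: keys and new values fetched from the source table
  CURSOR c IS
    SELECT s.id, s.new_value
    FROM   source_table s;
  TYPE t_id  IS TABLE OF source_table.id%TYPE;
  TYPE t_val IS TABLE OF source_table.new_value%TYPE;
  l_ids  t_id;
  l_vals t_val;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids, l_vals LIMIT 500;
    EXIT WHEN l_ids.COUNT = 0;
    FORALL i IN 1 .. l_ids.COUNT
      UPDATE target_table t
      SET    t.col_to_update = l_vals(i)
      WHERE  t.id = l_ids(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
```

Even with FORALL batching, each batch still has to locate its target rows and maintain any indexes, triggers, and constraints on the target table, which is where the time usually goes.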

and Connor said...

Well... not much to go on there, but your possible causes of the slowdown are:

a) it is expensive to get the data from the other table
b) triggers firing
c) constraints being checked excessively
d) row chaining
e) indexes being updated

It could be one, some, or all of those, because simply updating 5 million rows definitely does not take long; I can do it in about 30 seconds on my laptop:

SQL> create table t 
  2  as
  3  select d.* from dba_objects d,
  4    ( select 1 from dual connect by level <= 100 );

Table created.

SQL> select count(*) from t;

  COUNT(*)
----------
   7829100

1 row selected.

SQL>
SQL> set timing on
SQL> update t set owner = lower(owner)
  2  where rownum <= 5000000;

5000000 rows updated.

Elapsed: 00:00:31.56


So work your way through (a) to (e) above.
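As a starting point for checking (b) through (e), the data dictionary can be queried directly. This is a sketch; MY_TABLE is a placeholder for your own table name:

```sql
-- (b) triggers that fire on the table
SELECT trigger_name, triggering_event, status
FROM   user_triggers
WHERE  table_name = 'MY_TABLE';

-- (c) and (e): constraints and indexes that must be maintained during the update
SELECT constraint_name, constraint_type
FROM   user_constraints
WHERE  table_name = 'MY_TABLE';

SELECT index_name, column_name
FROM   user_ind_columns
WHERE  table_name = 'MY_TABLE';

-- (d) row chaining (CHAIN_CNT is populated by ANALYZE TABLE, not DBMS_STATS)
SELECT chain_cnt, num_rows
FROM   user_tables
WHERE  table_name = 'MY_TABLE';
```

Cause (a) is best checked by timing the query against the source table on its own, separately from the update.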


Comments

Thanks for your quick answer

Pankaj Pundir, September 06, 2017 - 6:30 am UTC

Thanks Connor.
Yesterday, in my hurry, I forgot to mention that I am using a DB_LINK to fetch the data from the other table. The details of the destination table are as follows:

Size 56.4 GB
Number Extents 1,034
TABLE_NAME XXXXXX
TABLESPACE_NAME XXXXX
CLUSTER_NAME
IOT_NAME
STATUS VALID
PCT_FREE 10
PCT_USED
INI_TRANS 1
MAX_TRANS 255
INITIAL_EXTENT 25.3 MB
NEXT_EXTENT 1 MB
MIN_EXTENTS 1
MAX_EXTENTS 2,147,483,645
PCT_INCREASE
FREELISTS
FREELIST_GROUPS
LOGGING YES
BACKED_UP N
NUM_ROWS 335,509,637
BLOCKS 7,067,731
EMPTY_BLOCKS 0
AVG_SPACE 0
CHAIN_CNT 0
AVG_ROW_LEN 160
AVG_SPACE_FREELIST_BLOCKS 0
NUM_FREELIST_BLOCKS 0
DEGREE 1
INSTANCES 1
CACHE N
TABLE_LOCK ENABLED
SAMPLE_SIZE 335,509,637
LAST_ANALYZED 05/09/2017 11:07:50 PM
PARTITIONED NO
IOT_TYPE
TEMPORARY N
SECONDARY N
NESTED NO
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
ROW_MOVEMENT DISABLED
GLOBAL_STATS YES
USER_STATS NO
DURATION
SKIP_CORRUPT DISABLED
MONITORING YES
CLUSTER_OWNER
DEPENDENCIES DISABLED
COMPRESSION DISABLED
COMPRESS_FOR
DROPPED NO
READ_ONLY NO
SEGMENT_CREATED YES
RESULT_CACHE DEFAULT
Connor McDonald
September 07, 2017 - 2:33 am UTC

"Yesterday i forgot to mention in hurry that i am using DB_LINK"

along with many, many, many other things
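A common way to take the database link out of the per-row path is to copy the needed columns across once, then do a single set-based MERGE locally. This is a sketch with hypothetical names; remote_table@remote_db stands for the source table on the far side of the link, and id/new_value/col_to_update are placeholders:

```sql
-- pull only the join key and the new value across the link, once
CREATE TABLE stage_tab AS
SELECT id, new_value
FROM   remote_table@remote_db;

-- optional, but can help the join to the target
CREATE INDEX stage_tab_ix ON stage_tab (id);

-- one set-based statement instead of millions of single-row updates over the link
MERGE INTO target_table t
USING stage_tab s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.col_to_update = s.new_value;
```

The point is that the network round trips over the link happen once, during the staging CTAS, rather than once per fetch batch while the update is running.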

Use The Force Connor

Alex, September 06, 2017 - 8:29 pm UTC

Pankaj, my update takes 6-7 hours. What's wrong, why so slow?
Connor McDonald
September 07, 2017 - 2:20 am UTC

NOT_THE_SQL_LOOKING_FOR

More to Explore

Performance

Get all the information about database performance in the Database Performance guide.