The fastest way of creating a huge table for testing
Tom Kyte

Thanks for the question, Sinan.

Asked: September 02, 2008 - 3:04 pm UTC

Answered by: Tom Kyte - Last updated: January 07, 2011 - 9:10 am UTC

Category: Database - Version: 8.1.7.4

Viewed 10K+ times

You Asked

Hi Tom,

I want to create the test table below and insert about 300 million records into it. Is there a better (faster) way of doing that? DB version is 8.1.7.4 Standard Edition.

begin
  for i in 1..100 loop
    insert /*+ append parallel (huge,12) */ into dw_huge huge
      (ID
      ,SOURCEID
      ,TIMESTAMPSOURCELT
      ,TIMESTAMPSOURCEUTC
      ,FRACTIONOFASECOND)
    select /*+ parallel (o,12) */
           dw_huge_seq.nextval
          ,object_id
          ,sysdate
          ,sysdate+1/24
          ,0
      from all_objects o;
    commit;
  end loop;
end;
/


Thanks,
Sinan

and we said...

this is the process I use:


create table big_table
as
select rownum id,
               OWNER, OBJECT_NAME, SUBOBJECT_NAME,
               OBJECT_ID, DATA_OBJECT_ID,
               OBJECT_TYPE, CREATED, LAST_DDL_TIME,
               TIMESTAMP, STATUS, TEMPORARY,
               GENERATED, SECONDARY
  from all_objects a
 where 1=0
/
alter table big_table nologging;

declare
    l_cnt number;
    l_rows number := &1;
begin
    insert /*+ append */
    into big_table
    select rownum,
               OWNER, OBJECT_NAME, SUBOBJECT_NAME,
               OBJECT_ID, DATA_OBJECT_ID,
               OBJECT_TYPE, CREATED, LAST_DDL_TIME,
               TIMESTAMP, STATUS, TEMPORARY,
               GENERATED, SECONDARY
      from all_objects a
     where rownum <= &1;

    l_cnt := sql%rowcount;

    commit;

    while (l_cnt < l_rows)
    loop
        insert /*+ APPEND */ into big_table
        select rownum+l_cnt,
               OWNER, OBJECT_NAME, SUBOBJECT_NAME,
               OBJECT_ID, DATA_OBJECT_ID,
               OBJECT_TYPE, CREATED, LAST_DDL_TIME,
               TIMESTAMP, STATUS, TEMPORARY,
               GENERATED, SECONDARY
          from big_table
         where rownum <= l_rows-l_cnt;
        l_cnt := l_cnt + sql%rowcount;
        commit;
    end loop;
end;
/

alter table big_table add constraint
big_table_pk primary key(id)
/

begin
   dbms_stats.gather_table_stats
   ( ownname    => user,
     tabname    => 'BIG_TABLE',
     cascade    => TRUE );
end;
/
select count(*) from big_table;


Instead of querying ALL_OBJECTS (a very complex view) over and over, just use the table itself to double itself. Also, instead of a sequence, just use ROWNUM, which avoids a TON of recursive SQL. If you do use a sequence, make sure to alter it and set the cache value very high, like a million.
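For anyone who does keep the sequence (as in the original question), a minimal sketch of the cache change suggested above, assuming the question's `dw_huge_seq` sequence:

```sql
-- Raise the sequence cache so NEXTVAL updates the data
-- dictionary (recursive SQL) once per million values
-- instead of once per 20 (the default cache size).
alter sequence dw_huge_seq cache 1000000;
```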

and you rated our response

  (9 ratings)


Reviews

Read about performance of different row generators

September 03, 2008 - 5:45 am UTC

Reviewer: Sandro from Italy

If you want to read about performance of different row generators, then Adrian Billington has written a comprehensive article about the topic: http://www.oracle-developer.net/display.php?id=408


Test Case: four different techniques for creating a huge table for testing

September 03, 2008 - 9:00 am UTC

Reviewer: Sandro from Italy

This is my test case with four different techniques for creating a huge table for testing (reproducible in 10g; in 9i only 1, 2 and 3; in 8i only 1 and 2):
1) Tom's solution;
2) dual...connect by;
3) pipelined function;
4) model.

But how many resources does each approach take? I tested the four approaches with runstats_pkg, comparing the first and the second execution of each.


-- START Tom's solution with a minor variant (minimal table) -----------------------
drop table big_table;
create table big_table as select rownum id from dual where 1=0
/

alter table big_table nologging;

-- flush ALL
alter system flush shared_pool;
alter system flush buffer_cache;

exec runstats_pkg.rs_start;
-- first execution
declare
l_cnt number;
l_rows number := 10000;
begin
insert /*+ append */ into big_table select rownum from all_objects a where rownum <= (l_rows/10);

l_cnt := sql%rowcount;
commit;

while (l_cnt < l_rows) loop
insert /*+ APPEND */ into big_table select rownum+l_cnt from big_table where rownum <= l_rows-l_cnt;
l_cnt := l_cnt + sql%rowcount;
commit;
end loop;
end;
/

exec runstats_pkg.rs_middle;
-- second execution, to eliminate the recursive calls and hard parses
declare
l_cnt number;
l_rows number := 10000;
begin
insert /*+ append */
into big_table
select rownum from all_objects a where rownum <= (l_rows/10);

l_cnt := sql%rowcount;
commit;

while (l_cnt < l_rows)
loop
insert /*+ APPEND */ into big_table select rownum+l_cnt from big_table where rownum <= l_rows-l_cnt;
l_cnt := l_cnt + sql%rowcount;
commit;
end loop;
end;
/
exec runstats_pkg.rs_stop;

Run1 ran in 1199 hsecs
Run2 ran in 911 hsecs
run 1 ran in 131.61% of the time

Name Run1 Run2 Diff
LATCH.transaction branch alloc 4 3 -1
LATCH.active checkpoint queue 4 3 -1
LATCH.ktm global data 0 1 1
LATCH.KTF sga latch 0 1 1
LATCH.Shared B-Tree 1 0 -1
LATCH.session timer 4 3 -1
LATCH.job_queue_processes para 0 1 1
LATCH.hash table modification 1 0 -1
LATCH.kwqbsn:qsga 1 0 -1
LATCH.sort extent pool 0 1 1
LATCH.KMG MMAN ready and start 4 3 -1
LATCH.OS process allocation 4 3 -1
STAT...session cursor cache co 1 0 -1
STAT...Cached Commit SCN refer 11 12 1
STAT...Commit SCN cached 3 2 -1
STAT...cleanout - number of kt 4 3 -1
STAT...deferred (CURRENT) bloc 4 5 1
STAT...immediate (CR) block cl 3 2 -1
STAT...cleanouts only - consis 3 2 -1
STAT...redo ordering marks 1 2 1
STAT...calls to kcmgcs 4 5 1
LATCH.PL/SQL warning settings 6 4 -2
STAT...IMU Flushes 4 2 -2
STAT...bytes received via SQL* 1,350 1,348 -2
STAT...redo synch time 2 0 -2
STAT...physical writes direct 19 16 -3
STAT...physical writes 19 16 -3
STAT...db block gets direct 19 16 -3
STAT...physical write total mu 6 3 -3
STAT...physical read total mul 5 2 -3
STAT...messages sent 5 2 -3
STAT...user commits 5 2 -3
STAT...physical writes non che 19 16 -3
STAT...physical reads cache pr 7 10 3
STAT...table fetch continued r 3 0 -3
LATCH.ASM db client latch 10 7 -3
STAT...table scan blocks gotte 38 35 -3
LATCH.library cache lock alloc 16 12 -4
LATCH.qmn task queue latch 4 0 -4
STAT...physical write total IO 7 3 -4
STAT...consistent changes 39 35 -4
STAT...physical write IO reque 7 3 -4
STAT...table scans (short tabl 9 5 -4
STAT...workarea memory allocat 4 0 -4
LATCH.begin backup scn array 7 3 -4
LATCH.session idle bit 25 20 -5
STAT...cursor authentications 9 3 -6
LATCH.loader state object free 10 4 -6
STAT...CPU used by this sessio 11 5 -6
LATCH.library cache pin alloca 12 6 -6
STAT...parse time cpu 7 1 -6
STAT...calls to kcmgas 19 12 -7
STAT...recursive cpu usage 12 5 -7
LATCH.object queue header heap 19 12 -7
STAT...redo subscn max counts 14 5 -9
STAT...index crx upgrade (posi 9 0 -9
STAT...CPU used when call star 14 4 -10
LATCH.In memory undo latch 25 13 -12
LATCH.channel operations paren 70 58 -12
STAT...parse count (hard) 18 2 -16
LATCH.checkpoint queue latch 80 60 -20
LATCH.enqueues 251 229 -22
LATCH.kks stats 49 24 -25
STAT...parse time elapsed 26 1 -25
STAT...workarea executions - o 51 17 -34
STAT...cluster key scans 39 4 -35
STAT...redo entries 105 69 -36
STAT...cluster key scan block 46 8 -38
STAT...enqueue releases 71 29 -42
STAT...enqueue requests 71 29 -42
STAT...shared hash latch upgra 44 0 -44
STAT...db block changes 178 127 -51
STAT...rows fetched via callba 56 0 -56
STAT...user I/O wait time 71 10 -61
LATCH.library cache load lock 90 26 -64
STAT...sorts (memory) 75 7 -68
LATCH.SQL memory manager worka 294 221 -73
LATCH.mostly latch-free SCN 8 94 86
LATCH.lgwr LWN SCN 8 94 86
LATCH.Consistent RBA 6 93 87
STAT...db block gets from cach 220 132 -88
STAT...db block gets 239 148 -91
LATCH.hash table column usage 0 95 95
STAT...index fetch by key 575 469 -106
STAT...parse count (total) 130 20 -110
LATCH.simulator hash latch 171 290 119
STAT...DB time 159 22 -137
STAT...buffer is pinned count 1,732 1,593 -139
LATCH.cache buffers lru chain 225 399 174
STAT...session cursor cache hi 187 12 -175
STAT...free buffer requested 208 25 -183
STAT...physical reads cache 198 14 -184
STAT...physical reads 198 14 -184
STAT...physical read IO reques 191 4 -187
LATCH.object queue header oper 249 436 187
LATCH.library cache pin 339 538 199
STAT...physical read total IO 216 14 -202
STAT...index scans kdiixs1 1,534 1,325 -209
STAT...sorts (rows) 5,680 5,464 -216
LATCH.dml lock allocation 20 240 220
LATCH.messages 133 358 225
STAT...opened cursors cumulati 268 20 -248
LATCH.redo allocation 36 289 253
LATCH.redo writing 31 288 257
STAT...execute count 283 18 -265
LATCH.shared pool 1,515 1,248 -267
STAT...calls to get snapshot s 328 40 -288
STAT...Elapsed Time 1,223 913 -310
LATCH.undo global data 46 381 335
STAT...table fetch by rowid 967 579 -388
LATCH.enqueue hash chains 276 696 420
STAT...consistent gets - exami 863 441 -422
LATCH.library cache 1,646 2,133 487
LATCH.shared pool simulator 224 854 630
STAT...IMU undo allocation siz 260 932 672
LATCH.library cache lock 780 1,456 676
STAT...table scan rows gotten 11,210 10,524 -686
LATCH.session allocation 4,647 3,934 -713
STAT...no work - consistent re 3,955 3,213 -742
STAT...buffer is not pinned co 3,363 2,431 -932
LATCH.row cache objects 2,327 3,516 1,189
STAT...consistent gets from ca 4,878 3,686 -1,192
STAT...consistent gets 4,878 3,686 -1,192
STAT...undo change vector size 5,676 4,428 -1,248
STAT...session logical reads 5,117 3,834 -1,283
LATCH.cache buffers chains 9,908 11,378 1,470
STAT...recursive calls 3,542 141 -3,401
STAT...redo size 17,304 12,052 -5,252
STAT...IMU Redo allocation siz 0 11,052 11,052
STAT...physical write bytes 155,648 131,072 -24,576
STAT...physical write total by 155,648 131,072 -24,576
STAT...session uga memory 57,976 0 -57,976
STAT...session pga memory -196,608 -65,536 131,072
STAT...physical read bytes 1,622,016 114,688 -1,507,328
STAT...physical read total byt 2,031,616 278,528 -1,753,088

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
23,687 29,629 5,942 79.95%
-- END Tom's solution with a minor variant (minimal table) -----------------------

-- START dual..connect by SOLUTION -------------------------------------------
drop table big_table;
create table big_table as select rownum id from dual where 1=0
/

alter table big_table nologging;

-- flush ALL
alter system flush shared_pool;
alter system flush buffer_cache;

exec runstats_pkg.rs_start;
-- first execution
insert /*+ append */ into big_table select rownum from dual connect by rownum < 10000;
commit;

exec runstats_pkg.rs_middle;
-- second execution, to eliminate the recursive calls and hard parses
insert /*+ append */ into big_table select rownum from dual connect by rownum < 10000;
commit;
exec runstats_pkg.rs_stop;

Run1 ran in 26 hsecs
Run2 ran in 8 hsecs
run 1 ran in 325% of the time

Name Run1 Run2 Diff
STAT...deferred (CURRENT) bloc 4 5 1
STAT...commit txn count during 0 1 1
STAT...recursive cpu usage 4 3 -1
LATCH.hash table modification 1 0 -1
LATCH.active checkpoint queue 1 0 -1
LATCH.redo writing 4 3 -1
STAT...active txn count during 5 4 -1
STAT...session cursor cache co 1 0 -1
STAT...IMU Flushes 0 1 1
STAT...cleanout - number of kt 5 4 -1
STAT...physical write total mu 2 3 1
STAT...redo ordering marks 2 1 -1
STAT...consistent changes 22 21 -1
STAT...redo entries 59 57 -2
STAT...redo synch time 2 0 -2
LATCH.library cache lock alloc 6 4 -2
STAT...bytes received via SQL* 1,450 1,448 -2
LATCH.session idle bit 23 21 -2
LATCH.active service list 2 0 -2
STAT...index crx upgrade (posi 2 0 -2
STAT...table fetch continued r 2 0 -2
STAT...CPU used by this sessio 2 5 3
STAT...calls to kcmgas 12 9 -3
LATCH.undo global data 27 30 3
LATCH.messages 9 5 -4
LATCH.checkpoint queue latch 4 0 -4
STAT...db block changes 99 95 -4
STAT...cursor authentications 5 1 -4
STAT...heap block compress 6 1 -5
STAT...CPU used when call star 11 5 -6
STAT...redo subscn max counts 10 4 -6
STAT...enqueue requests 28 22 -6
STAT...parse time elapsed 6 0 -6
LATCH.kks stats 8 2 -6
LATCH.SQL memory manager worka 14 8 -6
STAT...enqueue releases 29 22 -7
STAT...parse count (hard) 10 2 -8
LATCH.object queue header heap 8 0 -8
STAT...workarea executions - o 22 10 -12
LATCH.enqueues 43 30 -13
STAT...db block gets 130 117 -13
STAT...db block gets from cach 114 101 -13
LATCH.enqueue hash chains 58 44 -14
STAT...cluster key scan block 24 8 -16
STAT...cluster key scans 20 4 -16
STAT...workarea memory allocat 17 0 -17
STAT...shared hash latch upgra 19 0 -19
STAT...sorts (memory) 28 2 -26
STAT...user I/O wait time 31 3 -28
STAT...Elapsed Time 44 11 -33
STAT...rows fetched via callba 36 0 -36
LATCH.shared pool simulator 123 86 -37
LATCH.library cache load lock 38 0 -38
LATCH.simulator hash latch 60 18 -42
STAT...parse count (total) 61 17 -44
STAT...undo change vector size 4,100 4,048 -52
STAT...session cursor cache hi 66 13 -53
STAT...index fetch by key 62 4 -58
LATCH.library cache pin 159 96 -63
STAT...physical read IO reques 71 5 -66
STAT...physical reads cache 71 5 -66
STAT...physical reads 71 5 -66
STAT...physical read total IO 76 10 -66
STAT...free buffer requested 77 10 -67
LATCH.cache buffers lru chain 84 11 -73
STAT...index scans kdiixs1 73 0 -73
STAT...sorts (rows) 5,444 5,369 -75
LATCH.object queue header oper 97 14 -83
STAT...buffer is pinned count 83 0 -83
STAT...opened cursors cumulati 102 17 -85
STAT...execute count 111 17 -94
STAT...calls to get snapshot s 125 28 -97
STAT...DB time 135 8 -127
STAT...table fetch by rowid 187 10 -177
STAT...consistent gets - exami 207 13 -194
LATCH.library cache lock 353 118 -235
STAT...no work - consistent re 303 20 -283
STAT...redo size 10,532 10,216 -316
LATCH.row cache objects 590 267 -323
STAT...buffer is not pinned co 423 28 -395
LATCH.library cache 749 304 -445
STAT...consistent gets 530 52 -478
STAT...consistent gets from ca 530 52 -478
STAT...session logical reads 660 169 -491
LATCH.shared pool 686 159 -527
LATCH.cache buffers chains 1,365 527 -838
STAT...IMU undo allocation siz 2,040 880 -1,160
LATCH.session allocation 1,615 244 -1,371
STAT...recursive calls 1,558 127 -1,431
STAT...IMU Redo allocation siz 312 11,064 10,752
STAT...session uga memory 57,976 0 -57,976
STAT...session pga memory -262,144 0 262,144
STAT...physical read total byt 663,552 122,880 -540,672
STAT...physical read bytes 581,632 40,960 -540,672

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
6,170 2,034 -4,136 303.34%
-- END dual..connect by SOLUTION -------------------------------------------

-- START pipelined functions SOLUTION -------------------------------------------
create or replace type number_ntt as table of number;
/

create or replace function row_generator ( rows_in in pls_integer ) return number_ntt pipelined is
begin
for i in 1 .. rows_in loop
pipe row (i);
end loop;
return;
end;
/

drop table big_table;
create table big_table as select rownum id from dual where 1=0
/

alter table big_table nologging;

-- flush ALL
alter system flush shared_pool;
alter system flush buffer_cache;

exec runstats_pkg.rs_start;
-- first execution
insert /*+ append */ into big_table select * from table(row_generator(10000));
commit;

exec runstats_pkg.rs_middle;
-- second execution, to eliminate the recursive calls and hard parses
insert /*+ append */ into big_table select * from table(row_generator(10000));
commit;
exec runstats_pkg.rs_stop;

Run1 ran in 61 hsecs
Run2 ran in 10 hsecs
run 1 ran in 610% of the time

Name Run1 Run2 Diff
LATCH.session timer 1 0 -1
LATCH.hash table modification 1 0 -1
STAT...calls to kcmgas 11 10 -1
STAT...physical write IO reque 3 2 -1
STAT...physical write total mu 3 2 -1
STAT...physical write total IO 3 2 -1
STAT...redo ordering marks 1 2 1
STAT...session cursor cache co 1 0 -1
STAT...heap block compress 6 5 -1
STAT...table scan blocks gotte 5 4 -1
LATCH.active checkpoint queue 1 0 -1
LATCH.redo writing 4 3 -1
LATCH.begin backup scn array 3 2 -1
LATCH.transaction branch alloc 1 0 -1
LATCH.undo global data 27 28 1
STAT...deferred (CURRENT) bloc 4 5 1
STAT...commit txn count during 0 1 1
STAT...active txn count during 5 4 -1
STAT...cleanout - number of kt 5 4 -1
STAT...IMU Flushes 0 1 1
STAT...table scans (short tabl 3 2 -1
LATCH.kokc descriptor allocati 2 0 -2
STAT...bytes received via SQL* 1,442 1,440 -2
STAT...table fetch continued r 2 0 -2
LATCH.library cache pin alloca 8 6 -2
STAT...redo entries 59 57 -2
LATCH.session idle bit 23 21 -2
STAT...redo synch time 3 0 -3
LATCH.library cache lock alloc 9 6 -3
STAT...table scan rows gotten 31 28 -3
LATCH.checkpoint queue latch 4 0 -4
STAT...redo size 10,464 10,468 4
STAT...redo subscn max counts 9 4 -5
STAT...recursive cpu usage 8 3 -5
STAT...cursor authentications 7 1 -6
STAT...CPU used by this sessio 9 2 -7
LATCH.messages 11 3 -8
STAT...CPU used when call star 11 3 -8
STAT...db block gets 128 119 -9
STAT...db block gets from cach 112 103 -9
STAT...index crx upgrade (posi 10 0 -10
STAT...enqueue requests 39 21 -18
STAT...enqueue releases 40 21 -19
STAT...parse count (hard) 24 2 -22
LATCH.SQL memory manager worka 30 8 -22
LATCH.object queue header heap 26 0 -26
STAT...parse time elapsed 36 0 -36
LATCH.enqueue hash chains 80 42 -38
LATCH.enqueues 65 27 -38
STAT...workarea memory allocat 40 0 -40
LATCH.kks stats 44 2 -42
STAT...shared hash latch upgra 54 0 -54
STAT...workarea executions - o 68 8 -60
STAT...user I/O wait time 63 3 -60
STAT...Elapsed Time 79 13 -66
LATCH.simulator hash latch 126 54 -72
STAT...rows fetched via callba 77 0 -77
LATCH.library cache load lock 82 0 -82
STAT...cluster key scans 119 4 -115
STAT...undo change vector size 4,100 4,220 120
STAT...cluster key scan block 137 8 -129
STAT...sorts (memory) 131 1 -130
STAT...buffer is pinned count 133 0 -133
STAT...free buffer requested 144 10 -134
STAT...physical reads cache 139 4 -135
STAT...physical read IO reques 139 4 -135
STAT...physical read total IO 144 9 -135
STAT...physical reads 139 4 -135
STAT...DB time 154 11 -143
STAT...parse count (total) 166 17 -149
LATCH.cache buffers lru chain 171 10 -161
LATCH.object queue header oper 184 12 -172
LATCH.library cache pin 290 98 -192
STAT...index scans kdiixs1 196 0 -196
STAT...session cursor cache hi 233 13 -220
STAT...index fetch by key 231 4 -227
STAT...opened cursors cumulati 330 17 -313
STAT...execute count 354 17 -337
LATCH.shared pool simulator 423 83 -340
STAT...calls to get snapshot s 387 28 -359
STAT...table fetch by rowid 381 10 -371
STAT...consistent gets - exami 678 13 -665
LATCH.library cache lock 785 118 -667
STAT...no work - consistent re 708 20 -688
STAT...buffer is not pinned co 922 28 -894
STAT...sorts (rows) 6,391 5,368 -1,023
LATCH.library cache 1,420 305 -1,115
STAT...IMU undo allocation siz 2,028 880 -1,148
STAT...consistent gets 1,416 53 -1,363
STAT...consistent gets from ca 1,416 53 -1,363
STAT...session logical reads 1,544 172 -1,372
LATCH.row cache objects 1,675 267 -1,408
LATCH.shared pool 1,685 163 -1,522
LATCH.cache buffers chains 2,770 510 -2,260
STAT...recursive calls 5,301 127 -5,174
LATCH.session allocation 6,141 244 -5,897
STAT...IMU Redo allocation siz 312 11,068 10,756
STAT...session uga memory 57,976 0 -57,976
STAT...session pga memory -262,144 0 262,144
STAT...physical read bytes 1,138,688 32,768 -1,105,920
STAT...physical read total byt 1,220,608 114,688 -1,105,920

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
16,127 2,047 -14,080 787.84%
-- END pipelined functions SOLUTION -------------------------------------------

-- START model SOLUTION -------------------------------------------
drop table big_table;
create table big_table as select rownum id from dual where 1=0
/

alter table big_table nologging;

-- flush ALL
alter system flush shared_pool;
alter system flush buffer_cache;

exec runstats_pkg.rs_start;
-- first execution
insert /*+ append */ into big_table select rnum from dual
model
dimension by (0 dim)
measures (0 rnum)
rules iterate (10000) (
rnum[iteration_number] = iteration_number
);
commit;

exec runstats_pkg.rs_middle;
-- second execution, to eliminate the recursive calls and hard parses
insert /*+ append */ into big_table select rnum from dual
model
dimension by (0 dim)
measures (0 rnum)
rules iterate (10000) (
rnum[iteration_number] = iteration_number
);
commit;
exec runstats_pkg.rs_stop;
Run1 ran in 21 hsecs
Run2 ran in 11 hsecs
run 1 ran in 190.91% of the time

Name Run1 Run2 Diff
STAT...IMU Flushes 0 1 1
LATCH.hash table modification 1 0 -1
LATCH.begin backup sc
Tom Kyte

Followup  

September 03, 2008 - 11:59 am UTC

you are creating test data for a one-time test here - do you really want to overanalyze it?

just go with the one that takes the least amount of time or runs faster than fast enough.

watch the rownum trick on really large things.
http://blog.tanelpoder.com/2008/06/08/generating-lots-of-rows-using-connect-by-safely/
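The warning in the linked article, in brief: a single CONNECT BY LEVEL <= n row generator can consume a lot of memory for very large n, and one common workaround is to cross-join two smaller generators. A sketch only, with illustrative row counts:

```sql
-- 100,000,000 rows as 10,000 x 10,000, so neither CONNECT BY
-- result set has to be huge on its own.
insert /*+ append */ into big_table
select rownum
  from (select 1 from dual connect by level <= 10000) a,
       (select 1 from dual connect by level <= 10000) b;
commit;
```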

Commit inside the While loop

January 06, 2011 - 2:28 am UTC

Reviewer: Parthiban Nagarajan from Coimbatore, India

Hi Tom
Referring to the script in your answer (also, Appendix A of your book "Effective Oracle by Design" contains the same BIG_TABLE script): inside the while loop, you are doing the commit. Is it a typo or intentional? May I know your idea behind that? I mean, I want to know whether the commit is really required there. Why should it not be outside the loop? Is it because the script is just for testing and not transactional stuff?
I know that you do things only with reasons/ideas, and I just want to know those reasons/ideas.

Thanks for your help, in advance.
Tom Kyte

Followup  

January 06, 2011 - 8:04 am UTC

It is intentional - I am doing a direct path load (/*+ append */), and you get one shot per transaction.

RE: Commit inside the while loop

January 06, 2011 - 8:29 am UTC

Reviewer: Parthiban Nagarajan from Coimbatore, India

Hi Tom
Could you elaborate more on this: "It is intentional, I am doing a direct path load (/*+append*/). You get one shot per transaction."
Why should it not be outside the loop?
Thanks

Tom Kyte

Followup  

January 06, 2011 - 8:44 am UTC

ops$tkyte%ORA11GR2> create table t ( x int );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert /*+ append */ into t select rownum from dual;

1 row created.

ops$tkyte%ORA11GR2> commit;

Commit complete.
<b>if, after direct path loading the table, I commit, then:</b>

ops$tkyte%ORA11GR2> insert /*+ append */ into t select rownum from dual;

1 row created.
<b>we can do another direct path load into the table, however, if I DO NOT commit after the direct path load:</b>

ops$tkyte%ORA11GR2> insert /*+ append */ into t select rownum from dual;
insert /*+ append */ into t select rownum from dual
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

<b>I cannot</b>



the commit is there to let me continue direct path loading.

RE: Commit inside the while loop

January 07, 2011 - 2:33 am UTC

Reviewer: Parthiban Nagarajan from Coimbatore, India

Hi Tom
Thanks for the answer. It's excellent, as usual ;)
Having little experience with /*+ APPEND */, I am asking you these beginner-like questions. Thanks for your patience.
(1) So it seems that Oracle wants to reset the HWM so that it can continue the next direct path load, and it is asking us for either a commit or a rollback. Am I right?
(2) And in the script, we are committing inside the WHILE loop, i.e. in PL/SQL. So I don't think this commit is an asynchronous one, because the HWM should have been reset before the next /*+ APPEND */ starts. Am I right?
Thanks in advance.
Tom Kyte

Followup  

January 07, 2011 - 9:10 am UTC

The table cannot be read in that transaction again until it commits because that transaction can "see" the new high water mark - but it has no undo available to it to provide for consistent reads. Other transactions are not affected because they won't see the data above the OLD high water mark - and can only read the data below the old high water mark.


The commit in plsql is an asynchronous commit - nothing has changed there - the high water mark was reset during the transaction itself, the commit didn't reset anything - that would just flush redo to disk and end the transaction.

Notification emails

January 07, 2011 - 11:03 am UTC

Reviewer: Sinan Topuz from New York, NY USA

Hi Tom,

How can I disable receiving emails from asktom.oracle.com for updates on a post I entered on 2-Sep-2008?

Thanks

April 20, 2011 - 12:36 pm UTC

Reviewer: A reader

PL/SQL is not the fastest way. Use insert into table select * from table.
Probably Tom doesn't know it.

April 20, 2011 - 12:37 pm UTC

Reviewer: A reader

You can send the emails to junk folder.

Please review SQL before posting!!

October 28, 2015 - 1:02 am UTC

Reviewer: A reader

Your SQL does not pass basic compile!!!
