Question and Answer

Tom Kyte

Thanks for the question, Moin.

Asked: December 24, 2002 - 1:59 pm UTC

Last updated: September 15, 2010 - 1:13 pm UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Hi Tom,

I'm in the development stage of a data warehouse project.

I've insisted that the developers provide adequate data for testing, to help avoid any nasty surprises when we go live, and also so that I can size redo logs, rollback segments, etc. correctly.

The question I have is: when I insert 1.5 million rows of data with an average row size of 170 bytes, about 255 MB of data is generated in the table itself. There is one index that will require another 130 MB of database space -- about 400 MB of DB space in total.

But why do I get about 2 GB of redo logs in my archived log destination (no other users on the box)? I realise redo logs contain undo as well as redo, but why such a big difference between the redo log size and the actual DB space used to store the data?





and Tom said...

They are undo + redo + all of the block changes to that index structure, which are massive.

In a true data warehouse, you should be striving to achieve NON-logged operations when possible, doing a hot backup after the operation to provide for recovery.

I would be direct path loading the data into a no-logging table.

I would be using skip_index_maintenance.

I would be rebuilding the indexes afterwards in parallel with nologging.

Consider this example:

insert = conventional load.
insert /*+ append */ = direct path load.
insert /*+ append */ into a nologging table = direct path load with no logging.


ops$tkyte@ORA9I> create table t as select * from all_objects where 1=0;
Table created.

ops$tkyte@ORA9I> set autotrace traceonly statistics;

ops$tkyte@ORA9I> insert into t select * from all_objects;
41855 rows created.


Statistics
----------------------------------------------------------
4629616 redo size

4.6meg of redo...

ops$tkyte@ORA9I> rollback;
Rollback complete.

ops$tkyte@ORA9I> insert /*+ append */ into t select * from all_objects;
41855 rows created.


Statistics
----------------------------------------------------------
4724292 redo size

about the same for direct path..

ops$tkyte@ORA9I> rollback;
Rollback complete.

ops$tkyte@ORA9I> alter table t nologging;
Table altered.

ops$tkyte@ORA9I> insert /*+ append */ into t select * from all_objects;
41855 rows created.

Statistics
----------------------------------------------------------
984 redo size

but make the table nologging and it is hugely different

ops$tkyte@ORA9I> rollback;
Rollback complete.

ops$tkyte@ORA9I> create index t_idx on t(object_name);
Index created.

ops$tkyte@ORA9I> insert /*+ append */ into t select * from all_objects;
41856 rows created.


Statistics
----------------------------------------------------------
6024152 redo size

but the index blows it again. Note that more redo is generated for index maintenance than for the ENTIRE TABLE -- 6 MB.

ops$tkyte@ORA9I> rollback;
Rollback complete.

ops$tkyte@ORA9I> alter index t_idx unusable;
Index altered.

ops$tkyte@ORA9I> alter session set skip_unusable_indexes=true;
Session altered.

ops$tkyte@ORA9I> insert /*+ append */ into t select * from all_objects;
41856 rows created.


Statistics
----------------------------------------------------------
12164 redo size

so, that is like a direct path load into a non-logged table with skip index maintenance -- no redo to speak of


ops$tkyte@ORA9I> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /


NAME VALUE
------------------------------ ----------
redo size 24128888

ops$tkyte@ORA9I> alter index t_idx rebuild nologging;

Index altered.

ops$tkyte@ORA9I> @mystat redo
ops$tkyte@ORA9I> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
redo size 24135956


and as you can see the rebuild didn't generate any either really...


So, I suggest you follow that approach -- just remember -- hot backup the affected datafiles as soon as possible after the load! So you have something you can recover from...
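Putting those pieces together, the whole load could be sketched roughly like this (BIG_T, BIG_T_IDX and STAGE_T are made-up names for illustration):

```sql
-- sketch only: table, index and staging names are hypothetical
alter table big_t nologging;
alter index big_t_idx unusable;
alter session set skip_unusable_indexes = true;

insert /*+ append */ into big_t      -- direct path, minimal redo
select * from stage_t;
commit;

alter index big_t_idx rebuild nologging parallel;

-- then hot backup the affected datafiles right away
```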





Comments

redo log generation

A reader, December 24, 2002 - 3:02 pm UTC

Great reply (as ever).

One thing though: I've set up autotrace as per your instructions in your book, however I don't get any info like the redo size statistic. Why is that? Is it a version thing? I'm on 8.1.7.4.

Thanks Again

Tom Kyte
December 24, 2002 - 4:23 pm UTC

No, it is not a version thing...

Make sure

o you have the plustrace role
o you see no errors when you issue "set autotrace on"
o you are not running this as SYS, don't do anything as SYS -- SYS is special
and autotrace doesn't work for SYS
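For reference, the PLUSTRACE role is created by a script shipped with SQL*Plus; a typical setup (the user name here is just an example) looks like:

```sql
-- as SYS: create the PLUSTRACE role and grant it
@?/sqlplus/admin/plustrce.sql
grant plustrace to scott;

-- then, connected as that user:
set autotrace on
```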



A reader, December 24, 2002 - 3:33 pm UTC

As always, best explanation!



A reader, December 24, 2002 - 4:24 pm UTC

Hi Tom,
The NOLOGGING option is becoming clear to me.
Great answer.

One question.

Does LOG_MODE (e.g. archivelog or noarchivelog) affect redo generation?

I am testing this in both environments.

I get more redo in archivelog mode than in noarchivelog mode.

If you test with the APPEND hint in archivelog mode -- lots of redo.
In noarchivelog mode, redo is minimal.

Please advise

Tom Kyte
December 24, 2002 - 4:43 pm UTC

Yes, nologging of the table is only needed in archive log mode. If you have a noarchive log mode database and do a direct path operation on it - no log needs to be generated.

Eg: create table as select - in archivelog mode = lots of redo. in noarchivelog mode = tiny redo.


You need to set the table attributes to NOLOGGING exactly like my example above does in archivelog mode!

Amount of redo generated for DDL statements

A reader, July 25, 2003 - 12:51 pm UTC

Tom,
Would appreciate it if you can guide me on the following

1) Do DDL statements generate a huge amount of redo?
I would like to know if there is any big difference in redo generation for the 2 statements below.

Statement 1
ALTER TABLE TABLE1 ADD (XYZ_IND NUMBER(1));

Statement 2
ALTER TABLE TABLE1 ADD (XYZ_IND NUMBER(1) DEFAULT 1 NOT NULL );

The no. of records in the table 'TABLE1' is about 400K. We have a standby DB in the production environment and we would like to know if DDL Statement 1 affects the redo logs going to the standby DB.

Thank you


Tom Kyte
July 25, 2003 - 1:14 pm UTC

depends (as everything does)

take your example, the first will generate almost no redo, the second -- tons, it has to modify every row in every block -- AND we have to be able to redo that change in the event of media failure (as well as UNDO the change in the event of an instance failure whilst it is running)

both will affect the redo logs, just to differing amounts.
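You can verify this yourself by bracketing each DDL with a redo-size query (TABLE1 and the column names are the poster's; note each column can only be added once, so the second ALTER uses a different name here):

```sql
select b.value redo_size
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';

alter table table1 add (xyz_ind number(1));
-- re-run the query above and subtract: almost no redo

alter table table1 add (xyz_ind2 number(1) default 1 not null);
-- re-run and subtract again: every row in every block was modified
```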

Amount of redo generated for DDL statements

A reader, July 25, 2003 - 2:02 pm UTC

Thank you

redo generated by a particular session

A reader, August 23, 2003 - 9:36 am UTC

Tom,
What is the best approach to finding out the amount of redo generated by a particular session? Our DBA is restricting the types of transactions we carry out, not allowing queries which update/insert a huge # of records in one transaction. The concern is about the effect the generated redo will have on the network pipe carrying it to the standby database.

Thank you

Tom Kyte
August 23, 2003 - 12:05 pm UTC

why? do they want the system to run less efficiently? do they want the system to do more work? do they want the system to have artificial constraints on it that break the real transaction into tiny bits that then must be carefully coded?

by breaking a transaction up into itty bitty pieces -- they will only accomplish the above.

you will generate MORE redo in total.
you will consume MORE resources in total.
you will have hard to maintain code.
you will have questionable data integrity.


is that their goal? can you clarify that with them?

redo generated by a particular session

A reader, August 23, 2003 - 12:31 pm UTC

All the points you have mentioned are true
>>you will generate MORE redo in total.
>>you will consume MORE resources in total.
>>you will have hard to maintain code.
But the network bandwidth to the standby database is restricting the amount of redo we can generate.
The situation we are running into is one where we may have to update or insert 600K records across about 10 tables. Instead of accomplishing this in a single transaction or very few transactions (say, fewer than 10), the DBA is suggesting that we run the SQL as a job executing 1000 or so records every 15 minutes; doing so, we would restrict the amount of redo generated that needs to go through the NETWORK PIPE and be applied on the standby database.
To determine the # of records that can be run every 15 min, we would like to know the average redo generated by the transaction (or a particular session).

Tom Kyte
August 23, 2003 - 3:27 pm UTC

sounds like you need to fix the pipe to satisfy your requirements.

are you "maximum protection" or "maximum performance" or just using the archived logs?

Does redo info show up in sql trace file

Sam, October 10, 2003 - 2:38 pm UTC

Tom,

Does a sql trace file contain information on how much redo is generated by an insert/update statement? Or must we query it from v$mystat?

Tom Kyte
October 10, 2003 - 4:25 pm UTC

v$mystat

waits and binds can show up in the trace -- not statistics like that.

Determining redo from a PL/SQL anonymous block

Steve Baldwin, November 05, 2003 - 4:05 pm UTC

Great info !!

I have a similar problem and I'm trying to use your described mechanism. In SQL*Plus, I issue the 'set autotrace ...' command, and it works just fine for SQL inserts. However, my process is in a PL/SQL package, and I'm attempting to 'simulate' it using an anonymous PL/SQL block. When I execute the block, the insert happens, but I don't see any stats. Is there another mechanism to get this info ?

Thanks

Tom Kyte
November 06, 2003 - 7:28 am UTC

you can use v$mystat for this:


ops$tkyte@ORA920LAP> create table t as select * from all_objects where 1=0;

Table created.

ops$tkyte@ORA920LAP> column value new_val V
ops$tkyte@ORA920LAP> select a.name, b.value
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and a.name = 'redo size'
  5  /

NAME                                VALUE
------------------------------ ----------
redo size                         1157852

ops$tkyte@ORA920LAP> begin
  2          insert into t select * from all_objects;
  3  end;
  4  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> select a.name, b.value, b.value-&V DIFF
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and a.name = 'redo size'
  5  /
old   1: select a.name, b.value, b.value-&V DIFF
new   1: select a.name, b.value, b.value-   1157852 DIFF

NAME                                VALUE       DIFF
------------------------------ ---------- ----------
redo size                         4599572    3441720
 

What if I can't do an INSERT INTO ... SELECT ...

Steve Baldwin, November 05, 2003 - 5:14 pm UTC

Does the /*+ append */ hint still work with an INSERT ... VALUES (...) statement ? When I execute a PL/SQL block like this ...

begin
for i in 1 .. 100000 loop
insert /*+ append */
into my_table (col_1, col_2, col_3)
values (i, 2506458, 119) ;
end loop ;
end ;
/

I still get lots of redo, even if I set nologging for my_table.

I have a scenario where if I try a single INSERT ... SELECT, I exceed the capacity of my undo tablespace. I suppose I could try segmenting the SELECT, and then issuing multiple INSERT .. SELECT's, but at this point, I'm using dbms_sql to fetch and then insert 1000 rows at a time, with commits after every 10000. I ultimately need to insert around 5,000,000 rows.

Also, I found that marking indexes as unusable didn't quite do it for me. I needed to drop the primary key (because skip_unusable doesn't work for unique indexes). I also needed to disable all constraints on the table. I'm not sure if it was the FK or Check constraints that were the culprits, but until I did this, I was still getting lots of redo.

As you may have ascertained, this is not a Data Warehouse app. I'm actually doing a data conversion from an old system to a new one (hence the constraints). I'm now planning to do the disable/insert/enable of constraints and drop/insert/create of indexes during the conversion process.

Tom Kyte
November 06, 2003 - 7:32 am UTC

no, append works for BULK OPERATIONS ONLY.

it would make no sense for row at a time. suppose it did work, guess what you would have ended up with? 100,000 rows each on their OWN BLOCK with 4 empty blocks accompanying them. wouldn't that be great? (append writes above the HWM, does not, cannot use existing free space)

your DBA has failed in their job duty. they have not sized the system for the amount of work it has been required to do.

committing after each 1000 -- yick, ugh, i wouldn't stand for it.

i would be doing a single, fast, efficient, least resource intensive insert into select from.


a one time conversion -- tell the dba staff to do their job, beef up the storage in order to facilitate the work required of the system. You'll save tons of money -- even if you have to buy more disk. look at the time you are wasting here trying to "come up with creative, yet hard to code and hard to debug solutions"

SELECT'ing SEQUENCE.NEXTVAL another culprit

Steve Baldwin, November 05, 2003 - 7:54 pm UTC

After dropping all indexes and constraints, and segmenting my insert so that it didn't blow up my UNDO tablespace, I was still getting bucket loads of redo logs.

I tracked it down to the way I was populating the primary key - by selecting any_sequence.NEXTVAL. When I changed that to ROWNUM, the amount of redo for inserting around 280,000 rows went from 10MB to 3144 BYTES !!!

Tom Kyte
November 06, 2003 - 7:38 am UTC

alter sequence any_sequence cache 1000000;

before the load. It'll cause an update to seq$ every 1,000,000 rows instead of every 20, which will hugely speed up the sequence.
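That is (sequence name taken from the review above; the cache value is just an example to size against your load):

```sql
alter sequence any_sequence cache 1000000;

-- ... perform the load ...

alter sequence any_sequence cache 20;  -- optionally restore the default afterwards
```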

i would still be going for "single sql"

redo generated by clob update

A reader, December 09, 2003 - 7:43 am UTC

Tom,
We are looking at replacing certain characters in the clob field of a table.

SELECT column1_clob into v_var1 FROM table1 WHERE id = v_id for Update;

...
...
dbms_lob.createtemporary(v_var2, TRUE);
v_var2 := v_var1;
v_cnt := dbms_lob.instr(v_var2,'\&',1,1);
if v_cnt > 0 then
v_var2 :=replace(v_var2,'\&','\&');
end if;

v_cnt := dbms_lob.instr(v_var2,'<',1,1);
if v_cnt > 0 then
v_var2 :=replace(v_var2,'<','\&lt;');
end if;

.....
v_len := dbms_lob.getlength(v_var2);

DBMS_LOB.COPY(v_var1,v_var2,v_len,1,1);
COMMIT;
dbms_lob.freetemporary(v_var2);

My questions

I read in your book Expert One-on-One Oracle that an update generates the max redo (compared with inserts and deletes).
a) Updates to a CLOB generate a significant amount of redo - right?
b) Does the code above (particularly DBMS_LOB.COPY) generate a significant amount of redo?
c) Every transaction generates redo - right?


Thank you



Tom Kyte
December 09, 2003 - 8:24 am UTC

a) depends on whether clob is logging or nologging, but it'll be a factor of "how much work do i do" -- here you are rewriting the entire image.

b) it could

c) absolutely -- even non-logged ones generate some amount of redo.

redo generated by clob update

A reader, December 09, 2003 - 8:36 am UTC

Tom,
>>a) ....here you are rewriting the entire image.
since I am writing the entire image, and the CLOB size is about 10K characters - is it fair to say that a good amount of redo will be generated - assuming that the table is in logging mode and no direct path inserts are made?

Thank you


Tom Kyte
December 09, 2003 - 1:55 pm UTC

it is fair to say that enough redo to redo the operation will be generated! That's about all I can say here.

It is something you can easily measure. Query v$mystat, see how much redo you have generated so far, update a couple of rows, requery and subtract. That'll show you.

Does redo size differs based on version 8i / 9i?

Prasad Jayakumar, February 22, 2005 - 12:03 pm UTC

Hi Tom,

I was trying your book example "Expert one-on-one : Redo and Rollback"

The redo size seems to differ between 8i and 9i. Is that true, or did some configuration difference in my database cause it?

In 9i Database

create table t(y char(2000));

select * from redo_size; -- View redo_size

VALUE
----------
17012

insert into t values (user);

select * from redo_size; -- View redo_size

VALUE
----------
22492

Redo for this single insert is 5480 bytes (22492-17012)


But in Oracle 8.1.7.4,

1.
VALUE
----------
7148
2.
VALUE
----------
9584

Redo for this single insert is 2436 bytes (9584 - 7148)

I even tried with

create table t(y char(2000 bytes));

in 9i, the same result

Thank you,
Prasad

Tom Kyte
February 22, 2005 - 12:42 pm UTC

redo is going to vary dramatically by release -- yes.

Why there so much Redo

Lizhuohua, February 24, 2005 - 10:32 am UTC

test 1:
SQL> create table t3(a int,b int);
 
Table created.

SQL> insert into t3 select * from t2;
 
200000 rows created.
 
Execution Plan
----------------------------------------------------------
   0      INSERT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'T2'
 
 
 
 
Statistics
----------------------------------------------------------
        547  recursive calls
       3076  db block gets
        985  consistent gets
          2  physical reads
    4158620  redo size
        620  bytes sent via SQL*Net to client
        531  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
     200000  rows processed

test 2:
SQL> create table mystat_before as select * from v$mystat;
 
Table created.
 

SQL> begin
  2  for i in 1..200000 loop
  3  insert into t3 values(i,i);
  4  end loop;
  5  end;
  6  /
 
PL/SQL procedure successfully completed.


SQL> select b.value-a.value redosize
 from mystat_before a,
      v$mystat b,
      v$statname c
where a.statistic#=b.statistic#
     and b.statistic#=c.statistic#
     and c.name='redo size';
 
  REDOSIZE
----------
  48220708
 
Q1: Why is there so much redo in test 2?
Q2: I ran test 2 on my PC; it took 60-70 sec.
    But when I ran it on a better machine (only the CPU is better than my PC's), it took 8-9 sec.
    Is test 2 CPU sensitive, or have I done something wrong?
Thanks & Best Regards.
Lizhuohua

Tom Kyte
February 24, 2005 - 4:55 pm UTC

because slow-by-slow operations (row by row) always generate more redo and undo, use more CPU, take more time, and are generally less efficient than...

set-based operations, the thing SQL is based on.




Thanks

Lizhuohua, February 24, 2005 - 7:58 pm UTC

Thank you.
You say: slow-by-slow operations (row by row) always generate more redo and undo, and use more CPU.
Could you please explain why row by row generates more redo, and why an insert generates undo?

Thanks&Best Regards
Lizhuohua

Huge Redo log generation while executing SELECT statement

Sami, February 25, 2005 - 10:50 am UTC

Dear Tom,

Thanks for all your help in the past.

Could you please explain why so much redo is generated for the select statement (even after multiple executions)? There is not much DML activity on this table.
My understanding is that a SELECT can generate redo only during block cleanout; however, the size should not be this big.

Please find both the autotrace and tkprof output below.
Environment: 8i, 2-node OPS.

SQL> l
  1  select * from (
  2  SELECT     Bank_Id,
  3     Bank_Type_Id,
  4     Country_Id,
  5     Bank_Name,
  6     Bank_Address1,
  7     Bank_Address2,
  8     Bank_Address3,
  9     Branch_Name,
 10     language_code
 11  FROM       MY_TABLE
 12  WHERE      COUNTRY_ID  = 'ABC'
 13  AND        BANK_TYPE_ID  = 'CDE'
 14  and        upd_phase='C'
 15  AND     language_code = 'en'
 16  ORDER BY BANK_ID ASC
 17  )
 18* where rownum < 500
SQL> 

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=154 Card=45 Bytes=58860) 
   1    0   COUNT (STOPKEY)
   2    1     VIEW (Cost=154 Card=45 Bytes=58860)
   3    2       SORT (ORDER BY STOPKEY) (Cost=154 Card=45 Bytes=3915)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'MY_TABLE' (Cost=142 Card=45 Bytes=3915)
    5    4           INDEX (RANGE SCAN) OF 'MY_TABLE_IDX2' (NON-UNIQUE)  (Cost=20 Card=45) 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     241503  consistent gets
      28951  physical reads
     313092  redo size
      64583  bytes sent via SQL*Net to client
       4094  bytes received via SQL*Net from client
         35  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
        499  rows processed
 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       35     49.99     243.16      16269     275794          0         499
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       37     50.00     243.18      16269     275794          0         499
 
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 180
 
Rows     Row Source Operation
-------  ---------------------------------------------------
    499  COUNT STOPKEY
    499   VIEW
    499    SORT ORDER BY STOPKEY
  41526     TABLE ACCESS BY INDEX ROWID MY_TABLE
  83053      INDEX RANGE SCAN (object id 1115255)
 

Tom Kyte
February 25, 2005 - 6:22 pm UTC

I'd be much less worried about the redo than the overall response time here.

This should be a 0.01 second query

have you considered looking at the INDEX on this table?

12 WHERE COUNTRY_ID = 'ABC'
13 AND BANK_TYPE_ID = 'CDE'
14 and upd_phase='C'
15 AND language_code = 'en'
16 ORDER BY BANK_ID ASC

i'd want it on

country_id, bank_type_id, upd_phase, language_code, bank_id

you might want to use FIRST_ROWS optimization for this one as well. I'd expect a couple dozen LIOs on the index and at most 500 LIOs on the table, taking this down to a 500-600 LIO query at most.
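That suggested index, using the column names from the query above, would be along these lines (the index name is made up):

```sql
create index my_table_idx_cover on my_table
( country_id, bank_type_id, upd_phase, language_code, bank_id );
```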

Estimate drive space for ARCH logs

V, August 18, 2006 - 12:30 pm UTC

I've been asked to come up with disk space requirements for a new system not yet built. I have table size and transaction estimates. My question is, what would your approach be for determining how many log files would be produced for a given time period?

Tom Kyte
August 18, 2006 - 4:20 pm UTC

there are always a fixed number of redo logs - so I suppose you mean for the archives?

Best bet - set up a simulation (and get used to doing that; you'll be constantly benchmarking, right...)

Problem is it is really a function of the types of modifications you do, the number of indexes and the like more than anything else.

Arch Size

V, August 19, 2006 - 7:11 pm UTC

I knew you would say something like that :) Unfortunately, they gave me this project Friday and they need an answer by Monday COB so they can order equipment. If I have estimated 16 GB of data daily, do you think I would be safe to say I may produce the same amount of redo, or would you go higher?

Thanks, I know this is a crazy request, but I'm just trying to give them something.

Tom Kyte
August 19, 2006 - 7:45 pm UTC

anyone that says yes or no to you regarding this question would be....


making stuff up.

sorry, the only answer I can say is "it depends"


16 GB might be 100 times more than you need.
16 GB might be just right.
16 GB might be 100 times less than you need.


given that disk is cheap (relatively speaking), go for broke. Get as much as you can.

Very cool info

Ashok Nagabothu, November 01, 2006 - 6:40 pm UTC

This removed so much confusion about nologging and the append hint.

how to find the sessions which generated maximum redo amount at instance?

TongucY, March 18, 2007 - 12:44 pm UTC

Mr. Kyte, I use the query below (from your examples), or SQL*Plus's set autotrace traceonly statistics > redo size statistic, to calculate how much redo I generated in my session:

CREATE OR REPLACE VIEW redo_size AS
SELECT value
FROM v$mystat, v$statname
WHERE v$mystat.statistic# = v$statname.statistic#
AND v$statname.name = 'redo size'

But my problem is how to find the sessions which generated the maximum (top 5, for example) redo since the last database startup, on 9iR2 or with the new 10gR2 historic views?

We need this information because, from the produced archived log files, we observe something new producing almost 2 times more redo for a week. I looked at the statspack report but couldn't find the massive update or delete I suspected.

Thank you, best regards.
Tom Kyte
March 18, 2007 - 7:48 pm UTC

well, just query the ASH views in 10g - if you have access to them (license consideration).

Else, you might consider creating an "on logoff" trigger that saves this information in a permanent table....

Thank you for your immediate responce Mr.Kyte

TongucY, March 19, 2007 - 3:14 am UTC

Since the problematic database is 9iR2, v$sesstat was suggested at forums.oracle.com and we planned to create a job which will take the top 25 rows from the query below and log them into a table for a week:

SELECT ss.sid,
sq.sql_text,
se.status,
se.username,
se.osuser,
se.program,
se.machine,
ss.VALUE
FROM v$sesstat ss, v$statname sn, v$session se, v$sqlarea sq
WHERE ss.statistic# = sn.statistic#
AND se.sql_hash_value = sq.hash_value(+)
AND se.sql_address = sq.address(+)
AND ss.sid = se.sid
AND sn.NAME = 'redo size'
ORDER BY ss.VALUE DESC

Then it is only a SQL query to catch the massive redo application. Thank you, best regards.
Tom Kyte
March 19, 2007 - 10:11 am UTC

and it might not get the culprit - ever. Say:

a) your job runs
b) i log in and create massive redo
c) i log out
d) your job runs

you'll never ever see me.

you could try my approach which was the logoff trigger.
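A minimal sketch of that logoff-trigger approach (table and trigger names are invented here; the owner needs select privileges on the underlying v$ views) might look like:

```sql
-- hypothetical names: session_redo_hist, trg_capture_redo
create table session_redo_hist
( username    varchar2(30),
  logoff_time date,
  redo_size   number );

create or replace trigger trg_capture_redo
before logoff on database
begin
  -- v$mystat reflects the session that is logging off
  insert into session_redo_hist (username, logoff_time, redo_size)
  select user, sysdate, b.value
    from v$statname a, v$mystat b
   where a.statistic# = b.statistic#
     and a.name = 'redo size';
end;
/
```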


redo generation during insert

A reader, March 22, 2007 - 11:22 am UTC

Hi Tom

I have question regarding redo generation during insert statement.

I have created the following table

create table t
 ( 
 c1 char (100)
 ) 
 tablespace   TS_DBPBL ;


and inserted 100,000 records with two methods.

I expected approximately 10Mb (100,000 * 100) of redo.


First method


select
 name , value
 from
 V$MYSTAT    s_value ,
 V$STATNAME  s_name
 where
 s_value.STATISTIC# = s_name.STATISTIC#
 and name = 'redo size' ;

NAME             VALUE
----------- ----------
redo size        10304


 begin

 for i in 1 .. 100000
 loop
  insert into t values ( 'x' ) ;
 end loop ;

 commit ;

 end ;
/

 select
 name , value
 from
 V$MYSTAT    s_value ,
 V$STATNAME  s_name
 where
 s_value.STATISTIC# = s_name.STATISTIC#
 and name = 'redo size' ;

NAME             VALUE
----------- ----------
redo size     34688280



Second method

select
 name , value
 from
 V$MYSTAT    s_value ,
 V$STATNAME  s_name
 where
 s_value.STATISTIC# = s_name.STATISTIC#
 and name = 'redo size' ;

NAME             VALUE
----------- ----------
redo size     34688280


 begin

 for i in 1 .. 100000
 loop

  insert into t values (  'x' ) ;
  commit ;

 end loop ;


 end ;
/

 select
 name , value
 from
 V$MYSTAT    s_value ,
 V$STATNAME  s_name
 where
 s_value.STATISTIC# = s_name.STATISTIC#
 and name = 'redo size' ;

NAME             VALUE
----------- ----------
redo size     94051660





The first method generated

34688280 - 10304 = 34,677,976 ~ 34.5 MB


The second method generated

94051660 - 34688280 = 59,363,380 ~ 59.3 MB


As I noted above I expected approximately 10Mb
I don't understand the figures.

I would appreciate your explanation.






Tom Kyte
March 22, 2007 - 2:54 pm UTC

too simplistic to think 10 MB - redo includes information about a lot of stuff.

And when you commit, you add to the redo stream.

Having 100 or 200 bytes (tiny, teeny tiny) added per operation is pretty small for a normal transaction.

You have demonstrated however something that I have demonstrated over and over - the slow by slow processing is the pits, wastes resources, generates more undo and redo....


Redo generation during insert (2)

A reader, March 26, 2007 - 2:54 am UTC


Hi

I know that committing after every insert is awful programming,
but my main concern is the amount of redo generated.

I thought that an insert operation has very little undo,
so the total redo should be approximately the size of the inserted records.

So I don't understand:

1) the difference between the amount of data inserted (10 MB)
and the amount of redo generated (34.5 MB) in the first method?

2) why the commit statement generates so much redo
(second method versus first)?





Tom Kyte
March 26, 2007 - 7:40 am UTC

... I know that committing after every insert is awful programming ...

one of the reasons it is awful programming is that it generates excessive redo!

inserts in general generate the MOST redo - what needs to be logged? the insert.


1) you did hundreds of thousands of operations. Say we added what sounds like a small bit to each operation. multiply a small bit by 100,000 and you have a big number.

100,000 x 1 byte = 100k extra
100,000 x 100 bytes = 10m extra

and so on....

2) because you committed and that causes even more redo to be generated

Sorry, confused over view information.

RobH, April 17, 2007 - 2:25 pm UTC

"well, just query the ASH views in 10g - if you have access to them (license consideration)."

Sorry, how do you see 'redo size' from the V$ACTIVE_SESSION_HISTORY view (or DBA_HIST_ACTIVE_SESS_HISTORY)? They have events, but that's different from the stats (although you know that).

Tom Kyte
April 18, 2007 - 11:26 am UTC

ok, I don't see a way to get it by session - only over all sessions - we'd have to use a logoff trigger (although, with connection pools, the entire concept is likely flawed...)

Why Select produces Redo?

Rajeshwaran, Jeyabal, September 15, 2010 - 11:44 am UTC

Tom:

A Quote from Effective oracle by design (Chapter 2: Your performance tool kit)
<quote>
Blocks may be retrieved and used by Oracle in two ways: current and consistent. A current mode get is a retrieval of the block as it exists right now. You will see this most frequently during modification statements, which must update only the latest copy of the block. Consistent gets are retrievals of blocks from the buffer cache in "read consistent" mode and may include read-asides to UNDO (rollback segments). A query will generally perform "consistent gets".</quote>

test@10GR2> @d:\final_query.sql;

7 rows selected.

Elapsed: 00:00:02.36

Execution Plan
----------------------------------------------------------

----------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                        | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                             |     7 |  1601 |    23  (92)|
|   1 |  TEMP TABLE TRANSFORMATION            |                             |       |       |         |
|   2 |   LOAD AS SELECT                      |                             |       |       |         |
|   3 |    WINDOW BUFFER                      |                             |     5 |  1595 |    10   (0)|
|   4 |     FILTER                            |                             |       |       |         |
|   5 |      TABLE ACCESS BY INDEX ROWID      | IR_DIAGNOSIS                |     4 |    64 |     2   (0)|
|   6 |       NESTED LOOPS                    |                             |     5 |  1595 |    10   (0)|
|   7 |        NESTED LOOPS OUTER             |                             |     1 |   303 |     8   (0)|
|   8 |         NESTED LOOPS                  |                             |     1 |   296 |     7   (0)|
|   9 |          NESTED LOOPS                 |                             |     1 |   277 |     6   (0)|
|  10 |           NESTED LOOPS                |                             |     1 |   237 |     3   (0)|
|  11 |            TABLE ACCESS BY INDEX ROWID| IR_ITEMATTRIBUTE            |     1 |   223 |     2   (0)|
|  12 |             INDEX FULL SCAN           | IX_IR_ITEMATTRIBUTE_01      |     1 |       |     1   (0)|
|  13 |            INDEX FULL SCAN            | IX_IR_CONTRACT_01           |     1 |    14 |     1   (0)|
|  14 |           TABLE ACCESS BY INDEX ROWID | IR_ENCOUNTER                |     6 |   240 |     3   (0)|
|  15 |            INDEX RANGE SCAN           | IX_IR_ENCOUNTER_06          |     7 |       |     1   (0)|
|  16 |          INDEX UNIQUE SCAN            | PK_IR_MEMBER                |     1 |    19 |     1   (0)|
|  17 |         TABLE ACCESS BY INDEX ROWID   | IR_LINEOFBUSINESS           |     1 |     7 |     1   (0)|
|  18 |          INDEX UNIQUE SCAN            | PK_IR_LINEOFBUSINESS        |     1 |       |     0   (0)|
|  19 |        INDEX RANGE SCAN               | IX_IR_DIAGNOSIS_03          |     5 |       |     1   (0)|
|  20 |   UNION-ALL                           |                             |       |       |         |
|  21 |    HASH GROUP BY                      |                             |     5 |  1155 |     3  (34)|
|  22 |     VIEW                              |                             |     5 |  1155 |     2   (0)|
|  23 |      TABLE ACCESS FULL                | SYS_TEMP_0FD9D6923_B88A631C |     5 |  1355 |     2   (0)|
|  24 |    FILTER                             |                             |       |       |         |
|  25 |     TABLE ACCESS FULL                 | IR_ITEMATTRIBUTE            |     1 |   223 |     8   (0)|
|  26 |     COUNT STOPKEY                     |                             |       |       |         |
|  27 |      VIEW                             |                             |     5 |    65 |     2   (0)|
|  28 |       TABLE ACCESS FULL               | SYS_TEMP_0FD9D6923_B88A631C |     5 |  1355 |     2   (0)|
|  29 |    FILTER                             |                             |       |       |         |
|  30 |     TABLE ACCESS FULL                 | IR_ITEMATTRIBUTE            |     1 |   223 |     8   (0)|
|  31 |     COUNT STOPKEY                     |                             |       |       |         |
|  32 |      VIEW                             |                             |     5 |    65 |     2   (0)|
|  33 |       TABLE ACCESS FULL               | SYS_TEMP_0FD9D6923_B88A631C |     5 |  1355 |     2   (0)|
----------------------------------------------------------------------------------------------------------

Note
-----
   - 'PLAN_TABLE' is old version


Statistics
----------------------------------------------------------
        320  recursive calls
         10  db block gets
        365  consistent gets
          1  physical reads
       1792  redo size
        845  bytes sent via SQL*Net to client
       3960  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          7  rows processed

test@10GR2> /

7 rows selected.

Elapsed: 00:00:02.03

Execution Plan
----------------------------------------------------------

----------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                        | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                             |     7 |  1601 |    23  (92)|
|   1 |  TEMP TABLE TRANSFORMATION            |                             |       |       |         |
|   2 |   LOAD AS SELECT                      |                             |       |       |         |
|   3 |    WINDOW BUFFER                      |                             |     5 |  1595 |    10   (0)|
|   4 |     FILTER                            |                             |       |       |         |
|   5 |      TABLE ACCESS BY INDEX ROWID      | IR_DIAGNOSIS                |     4 |    64 |     2   (0)|
|   6 |       NESTED LOOPS                    |                             |     5 |  1595 |    10   (0)|
|   7 |        NESTED LOOPS OUTER             |                             |     1 |   303 |     8   (0)|
|   8 |         NESTED LOOPS                  |                             |     1 |   296 |     7   (0)|
|   9 |          NESTED LOOPS                 |                             |     1 |   277 |     6   (0)|
|  10 |           NESTED LOOPS                |                             |     1 |   237 |     3   (0)|
|  11 |            TABLE ACCESS BY INDEX ROWID| IR_ITEMATTRIBUTE            |     1 |   223 |     2   (0)|
|  12 |             INDEX FULL SCAN           | IX_IR_ITEMATTRIBUTE_01      |     1 |       |     1   (0)|
|  13 |            INDEX FULL SCAN            | IX_IR_CONTRACT_01           |     1 |    14 |     1   (0)|
|  14 |           TABLE ACCESS BY INDEX ROWID | IR_ENCOUNTER                |     6 |   240 |     3   (0)|
|  15 |            INDEX RANGE SCAN           | IX_IR_ENCOUNTER_06          |     7 |       |     1   (0)|
|  16 |          INDEX UNIQUE SCAN            | PK_IR_MEMBER                |     1 |    19 |     1   (0)|
|  17 |         TABLE ACCESS BY INDEX ROWID   | IR_LINEOFBUSINESS           |     1 |     7 |     1   (0)|
|  18 |          INDEX UNIQUE SCAN            | PK_IR_LINEOFBUSINESS        |     1 |       |     0   (0)|
|  19 |        INDEX RANGE SCAN               | IX_IR_DIAGNOSIS_03          |     5 |       |     1   (0)|
|  20 |   UNION-ALL                           |                             |       |       |         |
|  21 |    HASH GROUP BY                      |                             |     5 |  1155 |     3  (34)|
|  22 |     VIEW                              |                             |     5 |  1155 |     2   (0)|
|  23 |      TABLE ACCESS FULL                | SYS_TEMP_0FD9D6924_B88A631C |     5 |  1355 |     2   (0)|
|  24 |    FILTER                             |                             |       |       |         |
|  25 |     TABLE ACCESS FULL                 | IR_ITEMATTRIBUTE            |     1 |   223 |     8   (0)|
|  26 |     COUNT STOPKEY                     |                             |       |       |         |
|  27 |      VIEW                             |                             |     5 |    65 |     2   (0)|
|  28 |       TABLE ACCESS FULL               | SYS_TEMP_0FD9D6924_B88A631C |     5 |  1355 |     2   (0)|
|  29 |    FILTER                             |                             |       |       |         |
|  30 |     TABLE ACCESS FULL                 | IR_ITEMATTRIBUTE            |     1 |   223 |     8   (0)|
|  31 |     COUNT STOPKEY                     |                             |       |       |         |
|  32 |      VIEW                             |                             |     5 |    65 |     2   (0)|
|  33 |       TABLE ACCESS FULL               | SYS_TEMP_0FD9D6924_B88A631C |     5 |  1355 |     2   (0)|
----------------------------------------------------------------------------------------------------------

Note
-----
   - 'PLAN_TABLE' is old version


Statistics
----------------------------------------------------------
          2  recursive calls
          8  db block gets
        328  consistent gets
          1  physical reads
        648  redo size
        848  bytes sent via SQL*Net to client
       3960  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          7  rows processed

test@10GR2> /

7 rows selected.

Elapsed: 00:00:02.00

Execution Plan
----------------------------------------------------------

----------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                        | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                             |     7 |  1601 |    23  (92)|
|   1 |  TEMP TABLE TRANSFORMATION            |                             |       |       |         |
|   2 |   LOAD AS SELECT                      |                             |       |       |         |
|   3 |    WINDOW BUFFER                      |                             |     5 |  1595 |    10   (0)|
|   4 |     FILTER                            |                             |       |       |         |
|   5 |      TABLE ACCESS BY INDEX ROWID      | IR_DIAGNOSIS                |     4 |    64 |     2   (0)|
|   6 |       NESTED LOOPS                    |                             |     5 |  1595 |    10   (0)|
|   7 |        NESTED LOOPS OUTER             |                             |     1 |   303 |     8   (0)|
|   8 |         NESTED LOOPS                  |                             |     1 |   296 |     7   (0)|
|   9 |          NESTED LOOPS                 |                             |     1 |   277 |     6   (0)|
|  10 |           NESTED LOOPS                |                             |     1 |   237 |     3   (0)|
|  11 |            TABLE ACCESS BY INDEX ROWID| IR_ITEMATTRIBUTE            |     1 |   223 |     2   (0)|
|  12 |             INDEX FULL SCAN           | IX_IR_ITEMATTRIBUTE_01      |     1 |       |     1   (0)|
|  13 |            INDEX FULL SCAN            | IX_IR_CONTRACT_01           |     1 |    14 |     1   (0)|
|  14 |           TABLE ACCESS BY INDEX ROWID | IR_ENCOUNTER                |     6 |   240 |     3   (0)|
|  15 |            INDEX RANGE SCAN           | IX_IR_ENCOUNTER_06          |     7 |       |     1   (0)|
|  16 |          INDEX UNIQUE SCAN            | PK_IR_MEMBER                |     1 |    19 |     1   (0)|
|  17 |         TABLE ACCESS BY INDEX ROWID   | IR_LINEOFBUSINESS           |     1 |     7 |     1   (0)|
|  18 |          INDEX UNIQUE SCAN            | PK_IR_LINEOFBUSINESS        |     1 |       |     0   (0)|
|  19 |        INDEX RANGE SCAN               | IX_IR_DIAGNOSIS_03          |     5 |       |     1   (0)|
|  20 |   UNION-ALL                           |                             |       |       |         |
|  21 |    HASH GROUP BY                      |                             |     5 |  1155 |     3  (34)|
|  22 |     VIEW                              |                             |     5 |  1155 |     2   (0)|
|  23 |      TABLE ACCESS FULL                | SYS_TEMP_0FD9D6925_B88A631C |     5 |  1355 |     2   (0)|
|  24 |    FILTER                             |                             |       |       |         |
|  25 |     TABLE ACCESS FULL                 | IR_ITEMATTRIBUTE            |     1 |   223 |     8   (0)|
|  26 |     COUNT STOPKEY                     |                             |       |       |         |
|  27 |      VIEW                             |                             |     5 |    65 |     2   (0)|
|  28 |       TABLE ACCESS FULL               | SYS_TEMP_0FD9D6925_B88A631C |     5 |  1355 |     2   (0)|
|  29 |    FILTER                             |                             |       |       |         |
|  30 |     TABLE ACCESS FULL                 | IR_ITEMATTRIBUTE            |     1 |   223 |     8   (0)|
|  31 |     COUNT STOPKEY                     |                             |       |       |         |
|  32 |      VIEW                             |                             |     5 |    65 |     2   (0)|
|  33 |       TABLE ACCESS FULL               | SYS_TEMP_0FD9D6925_B88A631C |     5 |  1355 |     2   (0)|
----------------------------------------------------------------------------------------------------------

Note
-----
   - 'PLAN_TABLE' is old version


Statistics
----------------------------------------------------------
          2  recursive calls
          8  db block gets
        328  consistent gets
          1  physical reads
        648  redo size
        846  bytes sent via SQL*Net to client
       3960  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          7  rows processed

test@10GR2> /

7 rows selected.

Elapsed: 00:00:02.04

Execution Plan
----------------------------------------------------------

----------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                        | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                             |     7 |  1601 |    23  (92)|
|   1 |  TEMP TABLE TRANSFORMATION            |                             |       |       |         |
|   2 |   LOAD AS SELECT                      |                             |       |       |         |
|   3 |    WINDOW BUFFER                      |                             |     5 |  1595 |    10   (0)|
|   4 |     FILTER                            |                             |       |       |         |
|   5 |      TABLE ACCESS BY INDEX ROWID      | IR_DIAGNOSIS                |     4 |    64 |     2   (0)|
|   6 |       NESTED LOOPS                    |                             |     5 |  1595 |    10   (0)|
|   7 |        NESTED LOOPS OUTER             |                             |     1 |   303 |     8   (0)|
|   8 |         NESTED LOOPS                  |                             |     1 |   296 |     7   (0)|
|   9 |          NESTED LOOPS                 |                             |     1 |   277 |     6   (0)|
|  10 |           NESTED LOOPS                |                             |     1 |   237 |     3   (0)|
|  11 |            TABLE ACCESS BY INDEX ROWID| IR_ITEMATTRIBUTE            |     1 |   223 |     2   (0)|
|  12 |             INDEX FULL SCAN           | IX_IR_ITEMATTRIBUTE_01      |     1 |       |     1   (0)|
|  13 |            INDEX FULL SCAN            | IX_IR_CONTRACT_01           |     1 |    14 |     1   (0)|
|  14 |           TABLE ACCESS BY INDEX ROWID | IR_ENCOUNTER                |     6 |   240 |     3   (0)|
|  15 |            INDEX RANGE SCAN           | IX_IR_ENCOUNTER_06          |     7 |       |     1   (0)|
|  16 |          INDEX UNIQUE SCAN            | PK_IR_MEMBER                |     1 |    19 |     1   (0)|
|  17 |         TABLE ACCESS BY INDEX ROWID   | IR_LINEOFBUSINESS           |     1 |     7 |     1   (0)|
|  18 |          INDEX UNIQUE SCAN            | PK_IR_LINEOFBUSINESS        |     1 |       |     0   (0)|
|  19 |        INDEX RANGE SCAN               | IX_IR_DIAGNOSIS_03          |     5 |       |     1   (0)|
|  20 |   UNION-ALL                           |                             |       |       |         |
|  21 |    HASH GROUP BY                      |                             |     5 |  1155 |     3  (34)|
|  22 |     VIEW                              |                             |     5 |  1155 |     2   (0)|
|  23 |      TABLE ACCESS FULL                | SYS_TEMP_0FD9D6926_B88A631C |     5 |  1355 |     2   (0)|
|  24 |    FILTER                             |                             |       |       |         |
|  25 |     TABLE ACCESS FULL                 | IR_ITEMATTRIBUTE            |     1 |   223 |     8   (0)|
|  26 |     COUNT STOPKEY                     |                             |       |       |         |
|  27 |      VIEW                             |                             |     5 |    65 |     2   (0)|
|  28 |       TABLE ACCESS FULL               | SYS_TEMP_0FD9D6926_B88A631C |     5 |  1355 |     2   (0)|
|  29 |    FILTER                             |                             |       |       |         |
|  30 |     TABLE ACCESS FULL                 | IR_ITEMATTRIBUTE            |     1 |   223 |     8   (0)|
|  31 |     COUNT STOPKEY                     |                             |       |       |         |
|  32 |      VIEW                             |                             |     5 |    65 |     2   (0)|
|  33 |       TABLE ACCESS FULL               | SYS_TEMP_0FD9D6926_B88A631C |     5 |  1355 |     2   (0)|
----------------------------------------------------------------------------------------------------------

Note
-----
   - 'PLAN_TABLE' is old version


Statistics
----------------------------------------------------------
          2  recursive calls
          8  db block gets
        328  consistent gets
          1  physical reads
        648  redo size
        846  bytes sent via SQL*Net to client
       3960  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          7  rows processed

test@10GR2> /

7 rows selected.

Elapsed: 00:00:02.09

Execution Plan
----------------------------------------------------------

----------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                        | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                             |     7 |  1601 |    23  (92)|
|   1 |  TEMP TABLE TRANSFORMATION            |                             |       |       |         |
|   2 |   LOAD AS SELECT                      |                             |       |       |         |
|   3 |    WINDOW BUFFER                      |                             |     5 |  1595 |    10   (0)|
|   4 |     FILTER                            |                             |       |       |         |
|   5 |      TABLE ACCESS BY INDEX ROWID      | IR_DIAGNOSIS                |     4 |    64 |     2   (0)|
|   6 |       NESTED LOOPS                    |                             |     5 |  1595 |    10   (0)|
|   7 |        NESTED LOOPS OUTER             |                             |     1 |   303 |     8   (0)|
|   8 |         NESTED LOOPS                  |                             |     1 |   296 |     7   (0)|
|   9 |          NESTED LOOPS                 |                             |     1 |   277 |     6   (0)|
|  10 |           NESTED LOOPS                |                             |     1 |   237 |     3   (0)|
|  11 |            TABLE ACCESS BY INDEX ROWID| IR_ITEMATTRIBUTE            |     1 |   223 |     2   (0)|
|  12 |             INDEX FULL SCAN           | IX_IR_ITEMATTRIBUTE_01      |     1 |       |     1   (0)|
|  13 |            INDEX FULL SCAN            | IX_IR_CONTRACT_01           |     1 |    14 |     1   (0)|
|  14 |           TABLE ACCESS BY INDEX ROWID | IR_ENCOUNTER                |     6 |   240 |     3   (0)|
|  15 |            INDEX RANGE SCAN           | IX_IR_ENCOUNTER_06          |     7 |       |     1   (0)|
|  16 |          INDEX UNIQUE SCAN            | PK_IR_MEMBER                |     1 |    19 |     1   (0)|
|  17 |         TABLE ACCESS BY INDEX ROWID   | IR_LINEOFBUSINESS           |     1 |     7 |     1   (0)|
|  18 |          INDEX UNIQUE SCAN            | PK_IR_LINEOFBUSINESS        |     1 |       |     0   (0)|
|  19 |        INDEX RANGE SCAN               | IX_IR_DIAGNOSIS_03          |     5 |       |     1   (0)|
|  20 |   UNION-ALL                           |                             |       |       |         |
|  21 |    HASH GROUP BY                      |                             |     5 |  1155 |     3  (34)|
|  22 |     VIEW                              |                             |     5 |  1155 |     2   (0)|
|  23 |      TABLE ACCESS FULL                | SYS_TEMP_0FD9D6927_B88A631C |     5 |  1355 |     2   (0)|
|  24 |    FILTER                             |                             |       |       |         |
|  25 |     TABLE ACCESS FULL                 | IR_ITEMATTRIBUTE            |     1 |   223 |     8   (0)|
|  26 |     COUNT STOPKEY                     |                             |       |       |         |
|  27 |      VIEW                             |                             |     5 |    65 |     2   (0)|
|  28 |       TABLE ACCESS FULL               | SYS_TEMP_0FD9D6927_B88A631C |     5 |  1355 |     2   (0)|
|  29 |    FILTER                             |                             |       |       |         |
|  30 |     TABLE ACCESS FULL                 | IR_ITEMATTRIBUTE            |     1 |   223 |     8   (0)|
|  31 |     COUNT STOPKEY                     |                             |       |       |         |
|  32 |      VIEW                             |                             |     5 |    65 |     2   (0)|
|  33 |       TABLE ACCESS FULL               | SYS_TEMP_0FD9D6927_B88A631C |     5 |  1355 |     2   (0)|
----------------------------------------------------------------------------------------------------------

Note
-----
   - 'PLAN_TABLE' is old version


Statistics
----------------------------------------------------------
          2  recursive calls
          8  db block gets
        328  consistent gets
          1  physical reads
        648  redo size
        846  bytes sent via SQL*Net to client
       3960  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          7  rows processed

test@10GR2>


Questions:

1) I am the only user connected to the database (via SQL*Plus), and final_query.sql contains only a simple SELECT statement with some aggregate functions. Why does it generate redo, even after multiple executions?

2) A query will generally perform "consistent gets"; why does this query's execution show db block gets greater than zero?
Tom Kyte
September 15, 2010 - 12:03 pm UTC

...
|   1 |  TEMP TABLE TRANSFORMATION            |                             |   
    |       |         |
|   2 |   LOAD AS SELECT  
.....


it would be about that - the book-keeping information for the hidden global temporary table you are loading.


the db block gets would be for the same reason, data dictionary management of the global temporary table segment you are allocating and using under the covers.


ops$tkyte%ORA11GR2> /*
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> drop table t;
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table t as select * from all_objects;
ops$tkyte%ORA11GR2> */
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set autotrace traceonly
ops$tkyte%ORA11GR2> with data as (select owner, count(*) cnt from t group by owner)
  2  select t.owner, data.cnt, t.object_name from t, data
  3  where t.owner = data.owner;

71657 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3127217490

-------------------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   308M|    18G|       |  1813  (53)| 00:00:22 |
|*  1 |  HASH JOIN           |      |   308M|    18G|  2792K|  1813  (53)| 00:00:22 |
|   2 |   VIEW               |      | 68056 |  1993K|       |   288   (2)| 00:00:04 |
|   3 |    HASH GROUP BY     |      | 68056 |  1129K|       |   288   (2)| 00:00:04 |
|   4 |     TABLE ACCESS FULL| T    | 68056 |  1129K|       |   286   (1)| 00:00:04 |
|   5 |   TABLE ACCESS FULL  | T    | 68056 |  2259K|       |   286   (1)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T"."OWNER"="DATA"."OWNER")

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       6754  consistent gets
       2040  physical reads
          0  redo size
    2622742  bytes sent via SQL*Net to client
      52966  bytes received via SQL*Net from client
       4779  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      71657  rows processed

ops$tkyte%ORA11GR2> with data as (select /*+ materialize */ owner, count(*) cnt from t group by owner)
  2  select t.owner, data.cnt, t.object_name from t, data
  3  where t.owner = data.owner;

71657 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 2219902781

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                           |  1734K|   105M|       |   909   (2)| 00:00:11 |
|   1 |  TEMP TABLE TRANSFORMATION |                           |       |       |       |            |          |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D6646_33A312 |       |       |       |            |          |
|   3 |    HASH GROUP BY           |                           | 68056 |  1129K|       |   288   (2)| 00:00:04 |
|   4 |     TABLE ACCESS FULL      | T                         | 68056 |  1129K|       |   286   (1)| 00:00:04 |
|*  5 |   HASH JOIN                |                           |  1734K|   105M|  2792K|   620   (2)| 00:00:08 |
|   6 |    VIEW                    |                           | 68056 |  1993K|       |    44   (0)| 00:00:01 |
|   7 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D6646_33A312 | 68056 |  1129K|       |    44   (0)| 00:00:01 |
|   8 |    TABLE ACCESS FULL       | T                         | 68056 |  2259K|       |   286   (1)| 00:00:04 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("T"."OWNER"="DATA"."OWNER")

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
          2  recursive calls
          8  db block gets
       6759  consistent gets
       2041  physical reads
        776  redo size
    2622742  bytes sent via SQL*Net to client
      52966  bytes received via SQL*Net from client
       4779  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      71657  rows processed

ops$tkyte%ORA11GR2> set autotrace off

Why Select produces Redo?

Rajeshwaran, Jeyabal, September 15, 2010 - 12:09 pm UTC

1)
...
...
|   1 |  TEMP TABLE TRANSFORMATION |
|   2 |   LOAD AS SELECT
.....

http://www.oracle.com/pls/db102/search?remark=advanced_search&word=TEMP+TABLE+TRANSFORMATION&format=ranked&book=&preference=

There is no information about TEMP TABLE TRANSFORMATION in the product documentation; what does this refer to in the execution plan?

2) I am not using any global temporary table in final_query.sql, just pulling data from some 5 tables using inline views and aggregates. Could an inline view be the reason for this?

Tom Kyte
September 15, 2010 - 12:19 pm UTC

it is what it says - we created a temporary table for you in order to materialize the result set for use later in the plan.


I know you are not EXPLICITLY using a temporary table, *we are* implicitly creating it and populating it and using it.


I said previously:

it would be about that - the book-keeping information for the hidden global temporary table you are loading.


the db block gets would be for the same reason, data dictionary management of the global temporary table segment you are allocating and using under the covers.



it is "hidden" and "under the covers", as in - in the background, we are doing it implicitly.
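
For the converse experiment, the INLINE hint (undocumented in this release, so treat this as a sketch to be used with care) asks the optimizer not to materialize the factored subquery; without the TEMP TABLE TRANSFORMATION step there should be no segment book-keeping, hence no redo. Assuming the same test table T as in the demo above:

```sql
-- Sketch: counterpart of the /*+ materialize */ demo. The INLINE hint
-- (undocumented at the time) requests that the WITH subquery be merged
-- back into the main query instead of being loaded into a hidden
-- global temporary table.
with data as (select /*+ inline */ owner, count(*) cnt from t group by owner)
select t.owner, data.cnt, t.object_name
  from t, data
 where t.owner = data.owner;
-- Expected (under these assumptions): no TEMP TABLE TRANSFORMATION in the
-- plan, and autotrace reporting 0 redo size.
```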

Why Select produces Redo?

Rajeshwaran, Jeyabal, September 15, 2010 - 12:35 pm UTC

Tom:

Thanks for your answer. Where did you learn this information that is missing from the product documentation? I would like to become like you; I know that needs a lot of working experience, but I am still learning.
Tom Kyte
September 15, 2010 - 1:13 pm UTC

temp table transformation is pretty "self describing" in my opinion. It is what it says it is. You see a temp table transformation, you see a load as select (you load tables...), you see an access to an object named "SYS_TEMP_0FD9D6927_B88A631C" - which is obviously some system generated name...

sometimes things are just what they seem to be.
