
Question and Answer

Tom Kyte

Thanks for the question, ted.

Asked: July 12, 2000 - 5:32 pm UTC

Last updated: March 10, 2011 - 10:47 am UTC

Version: 8i


You Asked

Tom,
from the Oracle documentation, NOLOGGING can be used with the following
operations:

alter table...move partition
alter table...split partition
alter index...split partition
alter index...rebuild
alter index...rebuild partition
create table...as select
create index
direct load with SQL*Loader
direct load INSERT

questions:
1. What is a direct load insert? Is it the same as:
insert into tab_a nologging as select * from tab_b;
(tab_a and tab_b have the same layout)?

2. What happens behind a nologging operation?
Is it true that neither the rbs (undo) nor the redo log is used with nologging on?

thanks in advance as always.
ted chyn

and Tom said...

1) a direct load insert is done via:

insert /*+ APPEND */ ....

see http://docs.oracle.com/cd/F49540_01/DOC/server.815/a67781/c21dlins.htm#365

for lots more info.

2) With nologging -- sort of a misnomer, as some logging still takes place (the data dictionary changes) -- we do the work in newly allocated extents. If the operation fails, rolling back just means undoing a dictionary update (unlink that newly linked extent and it's gone). Committing means making that dictionary update permanent. Since we perform the operation in "new" space, there are no before images we need to restore, and since we write directly to the datafiles, there are no data block changes we would need to "redo".

Don't forget -- back up after doing these operations, or be prepared to redo the work if you ever need to perform media recovery. Since the operation is not logged, the archive logs cannot recover it.
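One way to see which files are exposed after such an operation is a sketch like this, run as a DBA (V$DATAFILE has carried the UNRECOVERABLE_TIME column since 8i):

```sql
-- Datafiles touched by an unrecoverable (nologging) operation will show
-- the time of that operation here; compare it against your last backup.
SELECT file#, name, unrecoverable_time
  FROM v$datafile
 WHERE unrecoverable_time IS NOT NULL
 ORDER BY unrecoverable_time;
```

Any file whose UNRECOVERABLE_TIME is later than its last backup needs a fresh backup to remain media-recoverable.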


Comments

any relation between nologging and direct load /*+ append */?

P, May 30, 2002 - 4:11 pm UTC

Tom,
here is the response I got from one of the DBAs.
------------
Yes, I had noticed that you were blowing up the rollback segment. Unfortunately, you are laboring under a number of misconceptions as to how Oracle actually works. Nologging is unnecessary because this database is not in archivelog mode. Technically you get a small performance benefit by not running the changes through the online redo logs. However, you are not having performance problems. The append hint does NOT perform direct path loads. Direct path is only available through SQL*Loader. Append prevents the database from checking the free space list when inserting new rows. This would provide a performance benefit if you were loading into a table that you had previously deleted rows from. If you are starting with an empty table it buys you nothing.
---------------------
My questions are:
1) Is it true that if the database is not in archivelog mode, then one cannot use NOLOGGING and the /*+ append */ hint to minimise redo and undo generation?
2) Is the last statement of the above reply correct?

"If you are starting with an empty table it buys you nothing."
i.e., that /*+ append */ will not help if you start with an empty table.


Tom Kyte
May 30, 2002 - 4:55 pm UTC

sigh, time for a new dba.

<b>They are the ones laboring under the misconception that they know how Oracle actually works.</b>

Setting the nologging option in this case (noarchivelog mode) is not necessary; they are correct about that. The insert /*+ append */ won't generate log for the table modifications. In archivelog mode, the table would (should) have to be in nologging mode to get the same effect.

So, in a noarchive log mode database, you don't need to set nologging on the table to bypass redo generation with insert /*+ append */.  This is easy to see and demonstrate:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from all_objects where 1=0;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace on statistics
ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from all_objects;

23014 rows created.


Statistics
----------------------------------------------------------
...
       5932  redo size
...
      23014  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> rollback;
Rollback complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t select * from all_objects;
23014 rows created.


Statistics
----------------------------------------------------------
...
    2575040  redo size
...
      23014  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> rollback;

Rollback complete.

<b>I don't know about you, but if I don't have to wait for 2.5MB of redo to be a) generated and b) written (yes, you have to wait for it to be written) -- I don't want to.

Consider the savings on a really big load.  

"small performance benefit" -- i think not.  Ask them "ok, so what about when the logs switch and we start checkpointing?  and when the logs wrap and we have to wait for the checkpoints to complete?  Can't I avoid that by bypassing redo as well as getting the TREMENDOUS savings from not writing it in the first place???"</b>


As for the comment:
"The append hint does NOT perform direct path loads.  Direct path is only available thought SQL* Loader."

Please give them the link above -- (to the server concepts manual, which they OBVIOUSLY need to read)

<quote>
Introduction to Direct-Load INSERT

Direct-load INSERT enhances performance during insert operations <b>by formatting and writing data directly into Oracle datafiles, without using the buffer cache. This functionality is similar to that of the Direct Loader utility (SQL*Loader). </b>
</quote>

Hmmmm.......

The comment about an empty table vs a previously loaded table is a bunch of HOOEY as well.  <b>total and utter hooey</b>.  They know not whereof they speak.


1) FALSE.  The test case above was performed in a noarchivelog mode database.  Serious savings - SERIOUS (and looky at that - the table was *empty*)

2) FALSE.

<b>just read the contents of the link, have them read it.  It is very clearly documented -- just waiting to be read!</b>
 

Ram Gopal Das, May 30, 2002 - 5:25 pm UTC

Tom, please clarify this once again for me. An empty table has the HWM at 0 blocks, so whether we use /*+ APPEND */ or not, the insert will start using the blocks above the HWM. So what you are saying is that, in this case (i.e. an empty table), the function of the append hint is only to invoke the direct-mode insert (i.e. bypassing the DB buffer cache)? Right? If yes, then I think the Oracle documentation has not explained it clearly -- the doc gives the impression that /*+ APPEND */ only means using blocks above the HWM, so many people tend to believe it is unnecessary to use the append hint on an empty table.

This is a very subtle distinction. Thanks for pointing it out.

Rgds,

Ram

Tom Kyte
May 30, 2002 - 8:24 pm UTC

The advantage is:

a) no redo need be generated
b) write directly to blocks, bypass buffer cache
c) no freelist inspection for each and every block allocated (and you'll be doing lots of those)

It is very useful to use APPEND on any table when you are bulk loading LOTS OF ROWS into it. It's faster, it's cheaper, it's better -- just make sure you clear it with the (smart) DBA so they can schedule a backup (when done unrecoverable -- nologging -- the table needs to be backed up shortly thereafter).
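A minimal sketch of that bulk-load pattern, assuming a target table T and a source table S (both names illustrative):

```sql
ALTER TABLE t NOLOGGING;             -- only matters in archivelog mode

INSERT /*+ APPEND */ INTO t          -- direct path: formats blocks and writes
SELECT * FROM s;                     -- them above the HWM, bypassing the cache

COMMIT;                              -- a direct-path-loaded table cannot be
                                     -- queried in the same transaction (ORA-12838)

ALTER TABLE t LOGGING;               -- restore normal logging when done
```

Then schedule the backup mentioned above.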



noarchivelog and /*+append*/

P, May 30, 2002 - 6:50 pm UTC

tom,
you said
"Setting the nologging option in this case (noarchivelog) is not necessary they
are correct. The insert /*+ append */ won't generate log for the table modifications. "

could you please explain the technical reason it does not generate redo/undo when one uses /*+ append */ in noarchivelog mode? a link would be just fine



Tom Kyte
May 30, 2002 - 8:26 pm UTC

The technical reason is that it writes directly to the database files -- bypassing the buffer cache -- so no logging is needed for instance recovery. And since you cannot do MEDIA recovery in noarchivelog mode, we don't need the log for that either.

Hence, in noarchivelog mode, we don't need the log.

In archivelog mode, we assume you want to be able to do media recovery. Hence, by default, we log the changes so that you can. If you specify NOLOGGING, we won't.

That is why.

A reader, May 30, 2002 - 10:50 pm UTC


What did the DBA say?

AB, May 31, 2002 - 3:46 am UTC

P from LA
---------

I hope you showed this to your DBA - I for one would be very interested to know how he responded to the public trashing of his so-called knowledge... :)

AB

A reader, May 31, 2002 - 1:15 pm UTC

"It is very useful to use APPEND on any table when you are bulk loading into it LOTS OF ROWS. It's faster, it's cheaper, it's better "

--Is there a way I can use APPEND in exp/imp? I know we can use DIRECT mode during exports. Is there anything equivalent for imports? If not why? Thanks.

Tom Kyte
May 31, 2002 - 1:45 pm UTC

No, there is not. The reason: import does single-row array inserts, and INSERT /*+ APPEND */ works only on inserts with SELECTs.

insert ... values -- no append.
insert ... select -- append is possible.
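To illustrate (t and s are illustrative names):

```sql
INSERT /*+ APPEND */ INTO t VALUES (1);       -- hint silently ignored: conventional insert
INSERT /*+ APPEND */ INTO t SELECT * FROM s;  -- direct-path insert
```

(In much later releases -- 11g Release 2 onward -- an APPEND_VALUES hint was added for array inserts, but it did not exist in the versions discussed here.)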


snap shot too old error

P, May 31, 2002 - 2:23 pm UTC

tom,
here is the PL/SQL job I am using. I know I could do this with /*+ APPEND */, but I just want to know the reason for this error.
I am committing after every 100 rows; I even tried committing after every 10 rows, and it still gives the snapshot too old error.

---------
SQL> ed
Wrote file afiedt.buf

  1  DECLARE
  2  n_rowcount NUMBER(10) ;
  3  dt_load_date date;
  4  CURSOR c1 IS
  5      SELECT
  6  A.LAST_UPDATE col1,
  7  B.mel_seq col2,
  8  A.CM_ID col3,
  9  C.mel_seq col4,
 10  A.LK_ID col5
 11  FROM
 12  ECDB.ECDB_SETROUTE_LINKS A,
 13  NCDBMEL.MASTER_EQUIPMENT_LIST B,
 14  NCDBMEL.MASTER_EQUIPMENT_LIST C
 15  where
 16  a.cm_id = b.equipment_id and
 17  a.lk_id = c.equipment_id
 18  ;
 19   BEGIN
 20      dt_load_date := SYSDATE;
 21      n_rowcount :=0;
 22      FOR c1_rec IN c1 LOOP
 23          INSERT INTO NCDBMEL.EROUTE_LINKS
 24    (
 25    EROUTE_LINKS_SEQ,
 26    CREATE_USER,
 27    CREATE_DATE,
 28    MODIFIED_USER,
 29    MODIFIED_DATE,
 30    MEL_SEQ,
 31    EQUIPMENT_ID,
 32    LINK_MEL_SEQ,
 33    LINK_EQUIPMENT_ID
 34    )
 35          VALUES
 36              (NCDBMEL.MAIN_SEQ.NEXTVAL,
 37               'LOAD DATA',
 38               dt_load_date ,
 39               null,
 40               c1_rec.col1,
 41               c1_rec.col2,
 42               c1_rec.col3,
 43               c1_rec.col4,
 44     c1_rec.col5
 45              )   ;
 46            n_rowcount := n_rowcount + 1;
 47            IF n_rowcount =100 THEN
 48              COMMIT WORK;
 49              n_rowcount :=0;
 50            END IF;
 51      END LOOP;
 52      COMMIT WORK;
 53* END;
SQL> /
DECLARE
*
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 4 with name "RB3" too
small
ORA-06512: at line 22


SQL> 

 

Tom Kyte
May 31, 2002 - 3:03 pm UTC

it is DUE TO THE COMMIT.

get rid of the commit and see what happens.

THIS SHOULD be a single statement -- period, plain and simple, no ifs, ands, or buts about it.




snapshot too old

P, May 31, 2002 - 2:31 pm UTC

Tom,
I also want to mention that no one other than me is using those tables. In fact, we have very few people logged on to that DB.

thank you for resourceful site and fantastic book

P

Tom Kyte
May 31, 2002 - 3:04 pm UTC

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:895410916429

or see my book, where I cover this. You are doing it to yourself by committing! If you have the book, then you know how I feel about doing it procedurally vs. doing it in a single statement -- this screams "single statement".
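For what it's worth, the row-by-row loop posted above collapses to one statement along these lines (a sketch based on the posted code):

```sql
INSERT INTO NCDBMEL.EROUTE_LINKS
  (EROUTE_LINKS_SEQ, CREATE_USER, CREATE_DATE, MODIFIED_USER,
   MODIFIED_DATE, MEL_SEQ, EQUIPMENT_ID, LINK_MEL_SEQ, LINK_EQUIPMENT_ID)
SELECT NCDBMEL.MAIN_SEQ.NEXTVAL, 'LOAD DATA', SYSDATE, NULL,
       a.last_update, b.mel_seq, a.cm_id, c.mel_seq, a.lk_id
  FROM ECDB.ECDB_SETROUTE_LINKS a,
       NCDBMEL.MASTER_EQUIPMENT_LIST b,
       NCDBMEL.MASTER_EQUIPMENT_LIST c
 WHERE a.cm_id = b.equipment_id
   AND a.lk_id = c.equipment_id;
COMMIT;   -- one commit, at the end
```

One statement, one commit: no fetch-across-commit, so no self-inflicted ORA-01555.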

snapshot too old

P, May 31, 2002 - 3:31 pm UTC

tom,
so you are saying there is no way we can do this without increasing our RBS? (which is 30 MB)

P


Tom Kyte
May 31, 2002 - 3:40 pm UTC

30MB is pretty darn small.

But I betcha a single INSERT ... SELECT would do it -- have you even tried it?



snapshot too old

P, May 31, 2002 - 3:46 pm UTC

tom,
yes, I did -- with the /*+ APPEND */ hint and without it -- it doesn't work.
It blows up no matter what I do; single statement or batch job doesn't matter.
And I know 30 MB is small, but we don't need more -- it's not a production system; for that matter it's not even a dev system, and it has very very few users.
Oh, btw, we are using Sequent as our platform (I know, I know, it's old, but hey, we have it).

P

Tom Kyte
May 31, 2002 - 4:06 pm UTC

Well, you can either spend lots of time beating yourself about the head trying to make this work with a ton of code, or you can just increase the size of your RBS to something more appropriate.

Just add a big RBS in TEMP or somewhere, use the dreaded "set transaction use rollback segment" statement, do the transaction, and drop the newly added RBS.



snapshot too old

P, May 31, 2002 - 4:19 pm UTC

well how big is big?

I have so many tables to migrate.
some have millions of rows and some with few hundred

Tom Kyte
May 31, 2002 - 4:26 pm UTC

When you say "migrate", what do you mean? Is NCDBMEL.EROUTE_LINKS an empty table that you are filling? How many indexes are on it? You'll have to give me more info.

If you are doing a one-time migration: drop the indexes, alter the table NOLOGGING, do the INSERT /*+ APPEND */ ... SELECT, and rebuild the indexes in parallel with NOLOGGING. No rollback at all. No redo either.
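That recipe, sketched out (object names and the parallel degree are illustrative):

```sql
-- 1) drop the indexes on the target table first, then:
ALTER TABLE ncdbmel.eroute_links NOLOGGING;

-- 2) direct-path load: no undo for the table, and no redo
--    (noarchivelog mode, or nologging table in archivelog mode)
INSERT /*+ APPEND */ INTO ncdbmel.eroute_links
SELECT ... ;   -- the migration query
COMMIT;

-- 3) rebuild each index in parallel, without logging
CREATE INDEX eroute_links_idx ON ncdbmel.eroute_links (mel_seq)
  PARALLEL 4 NOLOGGING;
```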

failed to extend rollback segment

P, May 31, 2002 - 5:41 pm UTC

here it is, tom.
I am running the following SQL (as you said, a single SQL) with the /*+ APPEND */ hint, and our DB is in noarchivelog mode.
If I understand it correctly, it should not generate any rollback -- there are no indexes on the destination table (only the table, that's it).

-------------
SQL> INSERT /*+APPEND */ INTO NCDBMEL.MEL_REFERENCE
  2  (
  3  MEL_REFERENCE_SEQ,
  4  MEL_SEQ,
  5  CREATE_USER,
  6  CREATE_DATE,
  7  MODIFIED_USER,
  8  MODIFIED_DATE,
  9  EQUIPMENT_ID,
 10  REF_TYPE_CODE,
 11  REFERENCE,
 12  REF_SHEET,
 13  REF_MODULE,
 14  REF_COORD,
 15  DOCUMENT_TYPE,
 16  ENGDWGS_SEQ
 17  )
 18  (
 19  SELECT 
 20  NCDBMEL.MAIN_SEQ.NEXTVAL,
 21  B.MEL_SEQ,
 22  'LOAD DATA',
 23  SYSDATE,
 24  NULL,
 25  A.LAST_UPDATE,
 26  A.SETROUTE_KEY,
 27  NULL,
 28  A.SETROUTE_REFERENCE,
 29  A.REF_SHEET,
 30  'EC',
 31  NULL,
 32  'DRAWING',
 33  NULL
 34  FROM 
 35  ECDB.ECDB_SETROUTE_REFERENCE A,
 36  NCDBMEL.MASTER_EQUIPMENT_LIST B
 37  WHERE 
 38  A.SETROUTE_KEY = B.EQUIPMENT_ID AND 
 39  B.CATEGORY_CODE IN ('EROUTE CABLE','EROUTE RACEWAY','EROUTE FROMTO')
 40  );
NCDBMEL.MAIN_SEQ.NEXTVAL,
        *
ERROR at line 20:
ORA-00604: error occurred at recursive SQL level 1
ORA-01562: failed to extend rollback segment number 4
ORA-01628: max # extents (20) reached for rollback segment RB3
 

Tom Kyte
May 31, 2002 - 9:00 pm UTC

You must have missed an index somewhere....

consider:

ops$tkyte@ORA817DEV.US.ORACLE.COM> drop table t;

Table dropped.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from all_objects where 1=0;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from all_objects;

23027 rows created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select used_ublk from v$transaction;

 USED_UBLK
----------
         1

ops$tkyte@ORA817DEV.US.ORACLE.COM> create index t_idx on t(object_name);

Index created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from all_objects;

23028 rows created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select used_ublk from v$transaction;

 USED_UBLK
----------
       116

ops$tkyte@ORA817DEV.US.ORACLE.COM> 


<b>see the difference.  I'm thinking - you have an index....

</b> 

RBS full

P, May 31, 2002 - 5:59 pm UTC

also tom we have more than 2 million rows to migrate for this table

Tom Kyte
May 31, 2002 - 9:05 pm UTC

Well, here we go -- noarchivelog mode test database:

ops$tkyte@ORA817DEV.US.ORACLE.COM> drop table t;

Table dropped.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from big_table where 1=0;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from big_table;

1101008 rows created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> select used_ublk from v$transaction;

 USED_UBLK
----------
         1


Hmmm (big_table is really a CTAS from all_objects, plus lots of additional insert-intos -- it is "fat")

Took 1 block of RBS stuff.

Even if 2 million rows DOUBLED THAT - it would fit in 30meg I think (2 blocks)


You are doing something wrong.

maybe:

a) there is an index
b) there is an index

I think -- maybe -- there is an index....... 

nologging

P, June 03, 2002 - 12:34 pm UTC

tom
I checked -- there are ZERO indexes on the EROUTE_TERMINATION table, and I still get this.
I am using /*+ append */.


THERE ARE NO INDEXES ON ANY DESTINATION TABLES
--------------------
SQL> ed
Wrote file afiedt.buf

  1  INSERT /*+APPEND */ INTO NCDBMEL.EROUTE_TERMINATION
  2  (
  3  EROUTE_TERMINATION_SEQ,
  4  CREATE_USER,
  5  CREATE_DATE,
  6  MODIFIED_USER,
  7  MODIFIED_DATE,
  8  MEL_SEQ,
  9  EQUIPMENT_ID,
 10  TERMINATION_MEL_SEQ,
 11  TERMINATION_EQUIPMENT_ID,
 12  WIRE,
 13  BLOCK,
 14  POINT,
 15  VENDOR,
 16  CORJ,
 17  NKC_SEQ
 18  )
 19  (
 20  SELECT
 21  ncdbmel.main_seq.nextval,
 22  'LOAD DATA',
 23  SYSDATE,
 24  NULL,
 25  NULL,
 26  B.mel_seq,
 27  A.CJ_ID,
 28  C.mel_seq,
 29  A.E_ID,
 30  A.WIRE,
 31  A.BLOCK,
 32  A.POINT,
 33  A.VENDOR,
 34  A.CORJ,
 35  A.NKC_SEQ
 36  FROM
 37  ECDB.ECDB_SETROUTE_TERM A,
 38  NCDBMEL.MASTER_EQUIPMENT_LIST B,
 39  NCDBMEL.MASTER_EQUIPMENT_LIST C
 40  WHERE
 41  A.CJ_ID = B.EQUIPMENT_ID
 42  AND A.E_ID = C.EQUIPMENT_ID
 43* )
SQL> /
ncdbmel.main_seq.nextval,
        *
ERROR at line 21:
ORA-00604: error occurred at recursive SQL level 1
ORA-01562: failed to extend rollback segment number 2
ORA-01628: max # extents (20) reached for rollback segment RB1

------------------------ 

Tom Kyte
June 03, 2002 - 3:50 pm UTC

Look for anything, anything at all, with regards to the table -- triggers, for example. I quite simply cannot reproduce this. Make sure to look while logged in as a DBA, and do the looking in the DBA_ views (just as I can create an index on your table, I can create a trigger on your table -- it need not be in the NCDBMEL schema).

There is something else out there -- there is a reason, you'll just need to dig a little deeper. I've done this on archivelog/noarchivelog and the results are the same.

Look for a trigger this time (also look for the index but make sure to look in the DBA_ views)


A reader, June 03, 2002 - 5:08 pm UTC

Tom,
One of the comments in the follow-up is:

"if I understand it correctly then it should not generate any rollback segment -
no indexes on destination table (only table thats it)."

I am confused here.

A direct load should not generate REDO, but it SHOULD generate
ROLLBACK information.

Am I right?



Tom Kyte
June 03, 2002 - 5:42 pm UTC

Wrong -- it writes above the HWM of the table.

In noarchivelog mode databases, append means NO UNDO, NO REDO (for the TABLE).

In archivelog mode databases, append means NO UNDO, and:
- if the table is logging -- redo will be generated
- if the table is nologging:
  - if the version is 9iR1 or before -- NO REDO
  - if the version is 9iR2 or after:
    - if the database permits unlogged operations -- NO REDO
    - else redo will be generated
(for the TABLE)

Read the link I conveniently provided in the answer -- details abound there in the documentation!
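A quick way to check which branch of that decision tree applies to your database (the FORCE_LOGGING column appears in V$DATABASE as of 9iR2):

```sql
SELECT log_mode,        -- ARCHIVELOG or NOARCHIVELOG
       force_logging    -- YES means unlogged operations are disallowed
  FROM v$database;
```

If FORCE_LOGGING is YES, a direct-path insert generates redo regardless of the table's NOLOGGING attribute.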




failed to extend RBS

P, June 03, 2002 - 5:37 pm UTC

ok, here it is:
the DB is in noarchivelog mode
only one user
I dropped and recreated this table, so no indexes, no triggers -- only the table
----------------------
SQL> DROP TABLE EROUTE_TERMINATION;

Table dropped.

SQL> CREATE TABLE NCDBMEL.EROUTE_TERMINATION
  2   ( EROUTE_TERMINATION_SEQ NUMBER(20) NOT NULL,
  3   CREATE_USER VARCHAR2(10),
  4   CREATE_DATE DATE,
  5   MODIFIED_USER VARCHAR2(10),
  6   MODIFIED_DATE DATE,
  7   MEL_SEQ NUMBER(20),
  8   EQUIPMENT_ID VARCHAR2(30),
  9   TERMINATION_MEL_SEQ NUMBER(20),
 10   TERMINATION_EQUIPMENT_ID VARCHAR2(30),
 11   WIRE VARCHAR2(8),
 12   BLOCK VARCHAR2(4),
 13   POINT VARCHAR2(4),
 14          VENDOR VARCHAR2(1),
 15   CORJ VARCHAR2(1),
 16          NKC_SEQ NUMBER(3,0)
 17  ) ;

Table created.

SQL> ed
Wrote file afiedt.buf

  1  INSERT /*+APPEND */ INTO NCDBMEL.EROUTE_TERMINATION
  2  (
  3  EROUTE_TERMINATION_SEQ,
  4  CREATE_USER,
  5  CREATE_DATE,
  6  MODIFIED_USER,
  7  MODIFIED_DATE,
  8  MEL_SEQ,
  9  EQUIPMENT_ID,
 10  TERMINATION_MEL_SEQ,
 11  TERMINATION_EQUIPMENT_ID,
 12  WIRE,
 13  BLOCK,
 14  POINT,
 15  VENDOR,
 16  CORJ,
 17  NKC_SEQ
 18  )
 19  (
 20  SELECT
 21  ncdbmel.main_seq.nextval,
 22  'LOAD DATA',
 23  SYSDATE,
 24  NULL,
 25  NULL,
 26  B.mel_seq,
 27  A.CJ_ID,
 28  C.mel_seq,
 29  A.E_ID,
 30  A.WIRE,
 31  A.BLOCK,
 32  A.POINT,
 33  A.VENDOR,
 34  A.CORJ,
 35  A.NKC_SEQ
 36  FROM
 37  ECDB.ECDB_SETROUTE_TERM A,
 38  NCDBMEL.MASTER_EQUIPMENT_LIST B,
 39  NCDBMEL.MASTER_EQUIPMENT_LIST C
 40  WHERE
 41  A.CJ_ID = B.EQUIPMENT_ID
 42  AND A.E_ID = C.EQUIPMENT_ID
 43* )
SQL> /
ncdbmel.main_seq.nextval,
        *
ERROR at line 21:
ORA-00604: error occurred at recursive SQL level 1
ORA-01562: failed to extend rollback segment number 4
ORA-01628: max # extents (20) reached for rollback segment RB3
 

Tom Kyte
June 03, 2002 - 5:51 pm UTC

email me at tkyte@oracle.com

Talk to your DBA

A reader, June 04, 2002 - 9:05 am UTC

I think P from LA should talk to his/her DBA. Even though the DBA may not be clear on some concepts, I am sure the DBA can still help. Please let Tom answer some questions that more people are interested in. Thanks a lot!

Seq.nextval

A reader, June 04, 2002 - 9:32 am UTC

Tom,

I believe the problem is with the sequence generation. If we remove seq.nextval and use a simple "rownum" instead, I do not get the error. What is the reason?

Thanks,

Tom Kyte
June 04, 2002 - 9:43 am UTC

I saw the sequence myself.  I still could not reproduce the issue:


ops$tkyte@ORA817DEV.US.ORACLE.COM> drop sequence t_seq;
Sequence dropped.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create sequence t_seq;
Sequence created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from all_objects where 1=0;
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t
  2  select OWNER, OBJECT_NAME, SUBOBJECT_NAME, <b>t_seq.nextval,</b>
  3         DATA_OBJECT_ID, OBJECT_TYPE, CREATED, LAST_DDL_TIME,
  4         TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY
  5    from (select * from all_objects
  6          union all
  7          select * from all_objects
  8          union all
  9          select * from all_objects )
 10  /
69135 rows created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select used_ublk from v$transaction
  2  /

 USED_UBLK
----------
         1

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
 

nologging and backup

Pushparaj Arulappan, June 04, 2002 - 1:12 pm UTC

Tom,

Our Database is currently running in ARCHIVELOG mode.
Every Saturday we refresh 3 static tables with millions of
rows. Around 10 million rows for each table. We basically copy data from the other database into our production database. For example, we use the following command to do this.
COPY TO scott/tiger@D100 INSERT emp USING SELECT * FROM emp@D200 WHERE deptno between 1 and 10;

These tables are currently created with logging mode.
SELECT logging FROM dba_tables WHERE table_name='EMP';
LOGGING
-------
YES

After refreshing these three tables, the amount of archive log generated is around 30 GB. All three tables have a unique index.

The loss of data in these three tables is not a concern because we can bring the data over from the other database at any time. And these three tables are purely read-only.

To save the archiving space and even speed up this, I am planning to use the Direct-load INSERT.

INSERT /*+ APPEND */ INTO emp
SELECT * FROM emp@D200 WHERE deptno between 1 and 10;

My main concern is: Question1:
1) After I load these tables with direct-load insert, should I have to take the backup of the entire database (cold backup) or just taking the export of these three tables alone is enough ?

Assume 05/01/2002 is my last full database backup, I refreshed these three tables using the direct-load insert method on 05/08/2002, 05/15/2002 and 05/22/2002, and I have only taken exports of these three tables. On 05/27/2002 the database crashes due to a media failure.
Can I restore the database from the 05/01/2002 backup and recover it using the archive logs until 05/27/2002 for the rest of the tables in the database without any problem?

Question2:
Since redo for the indexes cannot be avoided, I want to drop the indexes and recreate them after the load. Is there any other good way? (I can do a query on the other database to verify the rows are unique before I insert into my tables.)

Question3:
Can I leave the tables in LOGGING mode and create the INDEX in NOLOGGING mode? Is it advisable?

Question4:
Would you recommend altering these three tables permanently into NOLOGGING mode? If I alter the tables into NOLOGGING mode, do I need to take a FULL DATABASE BACKUP after that?

Question5:
What is advantage of creating these three tables with NOLOGGING over LOGGING mode.

Please give me any other advise that you think is good for my situation.

Thanks
Pushparaj

Tom Kyte
June 05, 2002 - 6:15 am UTC

I would consider doing this:

a) the tables in question are in NOLOGGING mode all of the time

b) use the direct path insert to populate them -- do not use regular inserts (like COPY). Do not have any indexes

(that way, when loaded, they will not generate undo or redo)

c) create NON-UNIQUE indexes on the table (the columns you normally use in a primary key). When you add the indexes -- do it in PARALLEL with NOLOGGING

d) add the primary key with "exceptions into" -- you will capture duplicate rows for cleaning up. The primary key will "kidnap" the non-unique index and use it.

e) after doing these operations -- simply backup (hot backup) the affected tablespaces (the tablespaces with the table / index data ). Add that to your backup set.

That is it.
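Steps (a) through (e) might look like this, using the EMP example from the question (the EXCEPTIONS table comes from utlexcpt.sql; the index/constraint names and parallel degree are illustrative):

```sql
ALTER TABLE emp NOLOGGING;                              -- (a)

INSERT /*+ APPEND */ INTO emp                           -- (b) direct path load,
SELECT * FROM emp@D200 WHERE deptno BETWEEN 1 AND 10;   --     no indexes in place
COMMIT;

CREATE INDEX emp_pk_idx ON emp (empno)                  -- (c) NON-unique index
  PARALLEL 4 NOLOGGING;

ALTER TABLE emp ADD CONSTRAINT emp_pk                   -- (d) PK "kidnaps" the
  PRIMARY KEY (empno)                                   --     non-unique index;
  EXCEPTIONS INTO exceptions;                           -- duplicates captured here

-- (e) hot-backup the affected tablespaces afterwards
```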

How big are the rollback segment extents?

Mike, June 04, 2002 - 2:16 pm UTC

The error says it's failing at 20 extents. How big is your extent size for the rollback segment?

mr p

A reader, June 04, 2002 - 5:34 pm UTC

Mr. P, how can you have 20 as the maxextents of an RBS?

ORA-01628: max # extents (20) reached for rollback segment RB3


LOL

The direct load INSERT /*+ APPEND */ does not work on one db

Sean Bu, August 26, 2002 - 10:04 am UTC

Tom,

The direct load "insert /*+ append */ ..." does not work on one 8.1.7.3 database (it generated more redo than a non-direct load), but it works on another 8.1.7.3 database on the same HP server. Both are in the same Oracle home. I could not figure out the reason for the difference.

I have traced sessions on both databases; if you need them, I can send you the raw files. Looking at the trace output from both -- apart from the different order of the statements shown in the trace files (the test procedure was the same for the two databases) -- I have not been able to determine why the direct load is not working on the RESUMIX database.

The test table t was in an LMT on both databases.


[oracle@ne:RESUMIX_8.1.7_64]$ sqlplus toolman

SQL*Plus: Release 8.1.7.0.0 - Production on Mon Aug 26 09:47:56 2002

(c) Copyright 2000 Oracle Corporation.  All rights reserved.

Enter password:

Connected to:
Oracle8i Enterprise Edition Release 8.1.7.3.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.3.0 - 64bit Production

SQL> drop table t;

Table dropped.

SQL> create table t as select * from dba_objects where 0=1;

Table created.

SQL> set autotrace on stat

SQL>  insert into t select * from dba_objects;

4534 rows created.


Statistics
----------------------------------------------------------
          7  recursive calls
        121  db block gets
       2846  consistent gets
          0  physical reads
     430600  redo size
        646  bytes sent via SQL*Net to client
        469  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
       4534  rows processed

SQL> rollback;

Rollback complete.

SQL> insert /*+ append */ into t select * from dba_objects;

4534 rows created.


Statistics
----------------------------------------------------------
          2  recursive calls
         17  db block gets
       2786  consistent gets
          0  physical reads
     459848  redo size
        634  bytes sent via SQL*Net to client
        482  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
       4534  rows processed

SQL>exit

On the DM database, IT WORKS.

[oracle@ne:dm_8.1.7_64]$ sqlplus toolman

SQL*Plus: Release 8.1.7.0.0 - Production on Mon Aug 26 09:45:10 2002

(c) Copyright 2000 Oracle Corporation.  All rights reserved.

Enter password:

Connected to:
Oracle8i Enterprise Edition Release 8.1.7.3.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.3.0 - 64bit Production


SQL> drop table t;

Table dropped.

SQL> create table t as select * from dba_objects where 0=1;

Table created.

SQL> set autotrace on stat
SQL> insert into t select * from dba_objects;

4663 rows created.


Statistics
----------------------------------------------------------
          9  recursive calls
        657  db block gets
       3953  consistent gets
          0  physical reads
     508488  redo size
        647  bytes sent via SQL*Net to client
        468  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
       4663  rows processed

SQL> rollback;

Rollback complete.

SQL> insert /*+ append */ into t select * from dba_objects;

4663 rows created.


Statistics
----------------------------------------------------------
         50  recursive calls
         24  db block gets
       3845  consistent gets
          0  physical reads
       2424  redo size
        633  bytes sent via SQL*Net to client
        482  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
       4663  rows processed

SQL> exit

 

Tom Kyte
August 26, 2002 - 10:15 am UTC

You've got archivelog mode on, and you didn't set the table to NOLOGGING.


read the very first comment on this page above for details....

Thanks!

Sean Bu, August 26, 2002 - 12:25 pm UTC


Just curious novice

HS, August 26, 2002 - 2:41 pm UTC

How do you display the 'STATISTICS' report?

Tom Kyte
August 26, 2002 - 3:38 pm UTC

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace on statistics


http://asktom.oracle.com/~tkyte/article1/autotrace.html

Relationship b/w logging/nologging and archive/noarchive mode-if any

HS, August 26, 2002 - 4:14 pm UTC

What is the 'official' oracle defined relationship between logging/nologging and archive/noarchive mode of a database. Please explain.

Thanks in advance for your answer.

Tom Kyte
August 26, 2002 - 7:25 pm UTC

In noarchivelog mode, you need not ALTER the underlying object to have the NOLOGGING attribute in order to perform a non-logged operation (since in noarchivelog mode we only need to support instance recovery, not media recovery -- and we can support that without generating the redo entries).

In archivelog mode, you must go out of your way to say "yes, I really mean to do this". That prevents "joe user" from doing a non-logged operation on some table -- destroying its recoverability from backups. "Joe user" cannot do an insert /*+ append */ without generating redo unless "joe user" can also alter the table (something typically only doable by the DBA, who really needs to be aware when non-logged things happen so they can back up).

db mode, table mode and insert mode - differences

A reader, August 27, 2002 - 9:52 am UTC

Tom,

I used to have confusion about the nologging option. Once I did an experiment, everything became clear. As given below, there are 8 different possibilities with db mode, table mode and insert mode.

Here is the summary

Case    DB Mode         Table Mode      Insert Mode     Redo Size  Undo Size
======  ===========     ==========      ============    =========  =========
1       Archivelog      Logging         No append       17373988   73
2       Archivelog      Nologging       No append       17374140   74
3       Archivelog      Logging         Append          17462504    1
4       Archivelog      Nologging       Append            138496    1
5       NoArchivelog    Logging         No append       17383188   73
6       NoArchivelog    Nologging       No append       17379244   74
7       NoArchivelog    Logging         Append            138824    1
8       NoArchivelog    Nologging       Append            139324    1


Here are the details:
=====================
SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

SQL> 
SQL> drop table t;

Table dropped.

SQL> create table t as select * from all_objects where 1 = 0;

Table created.

SQL> select logging from user_tables where table_name = 'T';

LOG
---
YES

SQL> set autotrace on statistics
SQL> insert into t select * from all_objects ;

159568 rows created.


Statistics
----------------------------------------------------------
       8341  recursive calls
      12760  db block gets
    1679474  consistent gets
       4076  physical reads
   17373988  redo size
        860  bytes sent via SQL*Net to client
        566  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
     159568  rows processed

SQL> set autotrace off
SQL> select used_ublk from v$transaction;

 USED_UBLK
----------
        73

SQL> 
SQL> drop table t;

Table dropped.

SQL> create table t nologging as select * from all_objects where 1 = 0;

Table created.

SQL> select logging from user_tables where table_name = 'T';

LOG
---
NO

SQL> set autotrace on statistics
SQL> insert into t select * from all_objects ;

159568 rows created.


Statistics
----------------------------------------------------------
       9005  recursive calls
      12769  db block gets
    1680085  consistent gets
       3585  physical reads
   17374140  redo size
        862  bytes sent via SQL*Net to client
        566  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
     159568  rows processed

SQL> set autotrace off
SQL> select used_ublk from v$transaction;

 USED_UBLK
----------
        74

SQL> 
SQL> drop table t;

Table dropped.

SQL> create table t as select * from all_objects where 1 = 0;

Table created.

SQL> select logging from user_tables where table_name = 'T';

LOG
---
YES

SQL> set autotrace on statistics
SQL> insert /*+ append */ into t select * from all_objects ;

159568 rows created.


Statistics
----------------------------------------------------------
       9309  recursive calls
       1101  db block gets
    1678076  consistent gets
       3599  physical reads
   17462504  redo size
        847  bytes sent via SQL*Net to client
        580  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
     159568  rows processed

SQL> set autotrace off
SQL> select used_ublk from v$transaction;

 USED_UBLK
----------
         1

SQL> 
SQL> drop table t;

Table dropped.

SQL> create table t nologging as select * from all_objects where 1 = 0;

Table created.

SQL> select logging from user_tables where table_name = 'T';

LOG
---
NO

SQL> set autotrace on statistics
SQL> insert /*+ append */ into t select * from all_objects ;

159568 rows created.


Statistics
----------------------------------------------------------
       9387  recursive calls
       1103  db block gets
    1678141  consistent gets
       3557  physical reads
     138496  redo size
        847  bytes sent via SQL*Net to client
        580  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
     159568  rows processed

SQL> set autotrace off
SQL> select used_ublk from v$transaction;

 USED_UBLK
----------
         1



SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL> 
SQL> drop table t;

Table dropped.

SQL> create table t as select * from all_objects where 1 = 0;

Table created.

SQL> select logging from user_tables where table_name = 'T';

LOG
---
YES

SQL> set autotrace on statistics
SQL> insert into t select * from all_objects ;

159568 rows created.


Statistics
----------------------------------------------------------
      10499  recursive calls
      12771  db block gets
    1681361  consistent gets
       6052  physical reads
   17383188  redo size
        853  bytes sent via SQL*Net to client
        566  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          4  sorts (memory)
          0  sorts (disk)
     159568  rows processed

SQL> set autotrace off
SQL> select used_ublk from v$transaction;

 USED_UBLK
----------
        73

SQL> 
SQL> drop table t;

Table dropped.

SQL> create table t nologging as select * from all_objects where 1 = 0;

Table created.

SQL> select logging from user_tables where table_name = 'T';

LOG
---
NO

SQL> set autotrace on statistics
SQL> insert into t select * from all_objects ;

159568 rows created.


Statistics
----------------------------------------------------------
       8862  recursive calls
      12769  db block gets
    1679977  consistent gets
       3824  physical reads
   17379244  redo size
        857  bytes sent via SQL*Net to client
        566  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
     159568  rows processed

SQL> set autotrace off
SQL> select used_ublk from v$transaction;

 USED_UBLK
----------
        74

SQL> 
SQL> drop table t;

Table dropped.

SQL> create table t as select * from all_objects where 1 = 0;

Table created.

SQL> select logging from user_tables where table_name = 'T';

LOG
---
YES

SQL> set autotrace on statistics
SQL> insert /*+ append */ into t select * from all_objects ;

159568 rows created.


Statistics
----------------------------------------------------------
       9852  recursive calls
       1108  db block gets
    1678166  consistent gets
       3653  physical reads
     138824  redo size
        843  bytes sent via SQL*Net to client
        580  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         15  sorts (memory)
          0  sorts (disk)
     159568  rows processed

SQL> set autotrace off
SQL> select used_ublk from v$transaction;

 USED_UBLK
----------
         1

SQL> 
SQL> drop table t;

Table dropped.

SQL> create table t nologging as select * from all_objects where 1 = 0;

Table created.

SQL> select logging from user_tables where table_name = 'T';

LOG
---
NO

SQL> set autotrace on statistics
SQL> insert /*+ append */ into t select * from all_objects ;

159568 rows created.


Statistics
----------------------------------------------------------
       9731  recursive calls
       1104  db block gets
    1678440  consistent gets
       3405  physical reads
     139324  redo size
        844  bytes sent via SQL*Net to client
        580  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
     159568  rows processed

SQL> set autotrace off
SQL> select used_ublk from v$transaction;

 USED_UBLK
----------
         1

 

Confusion clearing up but...

HS, August 27, 2002 - 10:20 am UTC

Tom, here is what you said:
"In noarchivelog mode, you need not ALTER the underlying object to have the NOLOGGING attribute to perform a non-logged activity (since in noarchivelog mode, we only need to support instance recovery, not media recovery -- and we can support that without generating the redo entries)

In archivelog mode, you must go out of your way to say "yes, I really mean to do this". It prevents "joe user" from doing a non-logged operation on some table
-- destroying its recoverability from backups. "Joe user" cannot do an insert /*+ append */ without generating redo unless "Joe user" can also alter the table (something typically doable only by the DBA who really needs to be aware when non-logged things happen so they can backup)"

From this I infer the following :
(A) In 'noarchive log' mode, the logging/nologging attribute of an object has no effect, as Oracle will perform only instance recovery and this can be achieved without logged entries. Hence my question:
if I have the logging option enabled for objects in a 'noarchive log' mode database, this means I am asking Oracle to do more work for nothing. Hence I should have the nologging option enabled for a database in noarchive log mode?

(B) And for a database in archive log mode, all objects should have the logging option enabled (the default). I do it at the tablespace level and let objects inherit this attribute.

And finally, what operations are logged anyway regardless of the logging/nologging option? I am thinking all DML; is this correct?

Appreciate your answer. thank you.


Tom Kyte
August 27, 2002 - 10:40 am UTC

(A) no, 100% wrong.

It means that in NOARCHIVELOG mode, whenever we can, we skip redo generation. If you say "insert /*+ append */ into t select ...", in noarchivelog mode, we KNOW we can safely skip redo generation and hence we do.

(B) correct.



Everything is logged EXCEPT for the statements listed above in the original question.

Thanks and follow up

HS, August 27, 2002 - 10:53 am UTC

Tom,
with reference to my earlier question, bullet (A), and your answer, I have to ask: why would I want to have the logging option enabled if Oracle can recover the instance with the nologging option?
I am thinking in terms of my test/QA environments, where we drop/create indexes left and right in a data warehouse situation. The prod DB is in archive log mode.

Thanks again.

Tom Kyte
August 27, 2002 - 10:57 am UTC

It doesn't matter; you don't need to have it on, or off -- in noarchive log mode it is not very meaningful.

So, don't rush out there and turn it "off"; it won't do anything for you. It is already as good as off. The potential damage could be that the nologging attribute finds its way into production by accident. We don't want that to happen!

Perfect - thank you and unrelated comment :)

HS, August 27, 2002 - 11:06 am UTC

Hey Tom,
I understand that this is a professional site and I would like it to stay that way. However, did you guys think about adding some fun things like graphical expressions, or themes like beach, ski, nature, etc.?

I think it would spice things up a little bit in a good way :) and would be a nice way to express ourselves....

Tom Kyte
August 27, 2002 - 11:27 am UTC



If I had my way, this site would be black text on a white background with links and no pictures whatsoever....

Themes -- no, I doubt I would be going down that path. My favorite font is Courier; images -- I find them just getting in the way.

nologging and standby database

Logan Palanisamy, August 27, 2002 - 1:39 pm UTC

Tom,

If we perform nologging operations such as "insert /*+ append */ ... select .." and "alter index ... rebuild nologging" on a production database in archive log mode with a standby database, there is no archive log generated for them. Then how does the standby database sync up with the production db? Will the standby database have this data? If yes, how?

Tom Kyte
August 27, 2002 - 1:50 pm UTC

No, it won't -- that is exactly why using NOLOGGING should be left up to the experienced DBA, who is aware of such issues and will make sure that if they do happen (because they want them to) they fix the standby.

Note in 9iR2, you can make it so that even if

a) nologging is on the table
b) you do a non-loggable thing (like append)
c) the database is in archive log mode

we ignore you and generate redo log anyway (for just such total protection).
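As a sketch of that 9iR2 safeguard (run as a DBA; verify the syntax against your release's documentation):

```sql
-- Force redo generation database-wide: NOLOGGING attributes and
-- /*+ APPEND */ no longer suppress redo, so the standby stays complete.
alter database force logging;

-- Confirm the setting:
select force_logging from v$database;

-- The same protection can be applied per tablespace instead:
alter tablespace users force logging;
```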

nologging and standby

Logan Palanisamy, August 27, 2002 - 3:13 pm UTC

Does it then mean, there is no such thing as non-loggable operation in 9iR2? Redo is generated for everything including 'nologging' operations and tables.


Tom Kyte
August 27, 2002 - 3:57 pm UTC

No, it means there is the ability for the DBA to make it so that everything is logged.

If the DBA doesn't do that, caveat emptor -- you are back to where you began.

Reader

A reader, August 27, 2002 - 9:55 pm UTC

Tom

There is no "direct read buffer" parameter in Oracle 8i.
How do we tune the direct load process? Are there parameters
relevant to the "direct load/insert" process that would be
helpful for us to comprehend?

Thanks

Tom Kyte
August 28, 2002 - 9:35 am UTC

not familiar with "direct_read_buffer" -- what tool/version did you use this with?

Reader

A reader, August 28, 2002 - 6:48 pm UTC

Tom,

I was actually referring to the init<SID>.ora
parameter. I have read it in the article
"The Mysteries of DBWR Tuning" by Steve Adams

<quote>
From version 7.1.5 onwards, this (DBWR load) can be
avoided by using the parallel query option to force blocks to be read into "direct read buffers", rather than into
the database buffer cache
</quote>

Thanks

Tom Kyte
August 29, 2002 - 9:02 am UTC

It is *not* a parameter -- it is an "activity" if you will.

Sort reads - use direct read buffers (eg: no need to use the buffer cache and read consistency -- we KNOW you are the only one that owns these blocks and the only one who will use them -- putting them into the LRU would be pointless and expensive)

Parallel query -- we can do a global checkpoint before running the query to flush the dirty blocks to disk. Now, PQ can bypass the buffer cache and just read from disk -- not putting stuff in the cache, not wasting time looking in the cache for data. It just "happens" tho, there is no "parameter"

Reader

A reader, September 01, 2002 - 12:26 pm UTC

Tom,

I have a table t with 104,190 rows created from DBA_OBJECTS.
The HWM is 1287 blocks. Arraysize is set at the default.
I performed two statements:
    1. an update: update t set owner = owner
    2. select /*+ parallel(t 4) */  * from t;
For each statement, before and after running it:
    1. I found the blocks that are dirty
    2. I gathered statistics from v$sysstat DBWR%, and
       "table scan%" and found the delta

1. When I did the parallel query, DBWR checkpoint buffers written is only 46,
   whereas v$bh.dirty = 103. Physical reads however shows 1292, close to the
   HWM, which is correct.
2. When I did the parallel query, there were 2628 consistent gets and 428 db block gets.
   What are these used for?

Thanks 


SQL*Plus: Release 8.1.7.0.0 - Production on Sun Sep 1 07:53:24 2002

(c) Copyright 2000 Oracle Corporation.  All rights reserved.


Connected to:
Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
With the Partitioning option
JServer Release 8.1.7.0.0 - Production

SQL>  select count(*), dirty from v$bh group by dirty;

  COUNT(*) D
---------- -
      1247 N
        33 Y

SQL> insert into stats_t select'first',statistic#, name,class,value from stats;

18 rows created.

Update the table T
==================
SQL> update t set owner = owner;

104192 rows updated.

SQL> commit;

Commit complete.

SQL> insert into stats_t select'next',statistic#, name,class,value from stats;

18 rows created.

SQL> select count(*), dirty from v$bh group by dirty;

  COUNT(*) D
---------- -
      1177 N
       103 Y


SQL> select f.name, f.value, n.value, n.value-f.value from ....


SQL> /

NAME                                      VALUE      VALUE N.VALUE-F.VALUE
---------------------------------------- ---------- ---------- -------------
DBWR buffers scanned                           0        250             250
DBWR checkpoint buffers written                5       2459            2454
DBWR checkpoints                               1         25              24
DBWR cross instance writes                     0          0               0
DBWR free buffers found                        0        250             250
DBWR lru scans                                 0          1               1
DBWR make free requests                        0          1               1
DBWR revisited being-written buffer            0          0               0
DBWR summed scan depth                         0        250             250
DBWR transaction table writes                  0         32              32
DBWR undo block writes                         0       1152            1152
table scan blocks gotten                     649       1945            1296
table scan rows gotten                     54684     158883          104199
table scans (cache partitions)                 0          0               0
table scans (direct read)                      0          0               0
table scans (long tables)                     42         43               1
table scans (rowid ranges)                     0          0               0
table scans (short tables)                    33         34               1

18 rows selected.

SQL> truncate table stats_t;

Table truncated.

Select from Table t in Parallel Mode
====================================
SQL>select /*+ parallel(t 4) */  * from t;

104192 rows selected.

Elapsed: 00:00:19.29

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=196 Card=105123 Bytes=19973370)
   1    0   TABLE ACCESS* (FULL) OF 'T' (Cost=196 Card=105123 Bytes=19973370)  :Q17000


   1 PARALLEL_TO_SERIAL            SELECT /*+ Q17000 NO_EXPAND ROWID(A1) */ A1."SEQ#",A1."OWNER",A1."OBJECT_NAME",A

Statistics
----------------------------------------------------------                                                                                                                                              
        567  recursive calls                                                                                                                                                                            
        428  db block gets                                                                                                                                                                              
       2628  consistent gets                                                                                                                                                                            
       1292  physical reads                                                                                                                                                                             
        684  redo size                                                                                                                                                                                  
   11575319  bytes sent via SQL*Net to client                                                                                                                                                           
     771431  bytes received via SQL*Net from client                                                                                                                                                     
       6948  SQL*Net roundtrips to/from client                                                                                                                                                          
          7  sorts (memory)                                                                                                                                                                             
          0  sorts (disk)                                                                                                                                                                               
     104192  rows processed                     

SQL> select count(*), dirty  from v$bh group by dirty;

  COUNT(*) D
---------- -
      1202 N
        78 Y                                                                                                                                                        

SQL> select f.name, f.value, n.value, n.value-f.value from ....


NAME                                         VALUE      VALUE N.VALUE-F.VALUE
---------------------------------------- ---------- ---------- -------------
DBWR buffers scanned                          250        250               0
DBWR checkpoint buffers written              2459       2505              46
DBWR checkpoints                               25         27               2
DBWR cross instance writes                      0          0               0
DBWR free buffers found                       250        250               0
DBWR lru scans                                  1          1               0
DBWR make free requests                         1          1               0
DBWR revisited being-written buffer             0          0               0
DBWR summed scan depth                        250        250               0
DBWR transaction table writes                  32         32               0
DBWR undo block writes                       1152       1152               0
table scan blocks gotten                     1958       3265            1307
table scan rows gotten                     159034     263254          104220
table scans (cache partitions)                  0          0               0
table scans (direct read)                       0         52              52
table scans (long tables)                      43         95              52
table scans (rowid ranges)                      0         52              52
table scans (short tables)                     39         58              19

SQL> select table_name , blocks from dba_tables where table_name = 'T';

TABLE_NAME                         BLOCKS
------------------------------ ----------
T                                    1287 

Tom Kyte
September 01, 2002 - 2:23 pm UTC

and this relates to logging vs nologging how??

Reader

A reader, September 02, 2002 - 10:46 am UTC

I agree , it is not directly related.
I thought nologging relates to => direct insert
direct insert relates to => direct read
direct read relates to => parallel query

BTW, I figured it out by joining v$bh and dba_tables on
file_id and block_id where segment_name = 'T' and found
that all the buffers used by 'T' were flushed to disk

Thanks

Tom Kyte
September 02, 2002 - 11:09 am UTC

That's like the 7 degrees of Kevin Bacon there.

rollback and index maintenance

A reader, September 17, 2002 - 6:47 am UTC

Hi Tom

How come direct insert generates minimum ROLLBACK?

Also in a nologging direct insert why index still generates so much redo?

Tom Kyte
September 17, 2002 - 8:08 am UTC

a direct path insert writes above the high water mark of the table. In order to "rollback" such an insert -- all we have to do is push the high water mark back to where it was. We don't need to go back to each block and undo the insert -- we just undo the advancing of the high water mark on the table and voila -- it is undone.


An index is a complex data structure, unlike a HEAP table. Hence, when we add something to an index -- that data goes into the MIDDLE of the structure (eg: you add 'B' to an index containing 'A' and 'C' -- 'B' goes in between A and C). So, in the index, we are maintaining a data structure, not just adding at the "end". So, in order to rollback we do need to go back and undo the inserts (we need the rollback data). We cannot simply "unadvance a high water mark" and in effect erase that which we added like we can with a heap table.
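One way to see this for yourself, a minimal sketch (table/index names hypothetical; assumes a scratch schema):

```sql
-- Direct-path insert into an unindexed heap: undo is tiny because
-- rollback only has to reset the high water mark.
create table t as select * from all_objects where 1 = 0;
insert /*+ append */ into t select * from all_objects;
select used_ublk from v$transaction;  -- expect about 1 block
rollback;

-- With an index present, new keys land in the middle of an existing
-- structure, so real undo (and redo) must be generated to back them out.
create index t_idx on t(object_name);
insert /*+ append */ into t select * from all_objects;
select used_ublk from v$transaction;  -- expect many more blocks
rollback;
```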

Direct path insert/Insert - Append : Aren't they same

Logan Palanisamy, September 17, 2002 - 8:48 am UTC

Tom,

Are "direct path insert" and "insert /*+ append */" the same or different?

I thought "insert /*+ append */" inserts into a brand new extent, rather than above the HWM. You are saying "direct path insert" inserts above the HWM? Inserting above the HWM doesn't necessarily mean inserting into a brand new extent.

Would appreciate if you could highlight the differences?

Tom Kyte
September 17, 2002 - 12:36 pm UTC

It inserts above the HWM, and it is known as a direct path insert.

Correct, it does not imply a brand new extent.

Read all about it:
</code> http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76965/c21dlins.htm#365 <code>
....

Direct mode insert and index maintenance

Rob, September 17, 2002 - 2:57 pm UTC

Tom:

This is from the Oracle doc referenced by your link.

Index Maintenance
For direct-load INSERT on nonpartitioned tables or partitioned tables that have local or global indexes, index maintenance is done at the end of the INSERT operation. This index maintenance is performed by the parallel execution servers for parallel direct-load INSERT or by the single process for serial direct-load INSERT on partitioned or nonpartitioned tables.

If your direct-load INSERT modifies most of the data in a table, you can avoid the performance impact of index maintenance by dropping the index before the INSERT and then rebuilding it afterwards.

My question is:

What is the "performance impact" that is avoided by dropping the index before the INSERT and then rebuilding it afterward. Since index maintenance is done at the end of the INSERT operation by the parallel execution servers for parallel direct-load INSERT, it seems like the index is maintained in parallel. Is the performance advantage achieved by recreating or rebuilding the index with nologging? Is there any other advantage?



Tom Kyte
September 17, 2002 - 8:07 pm UTC

there can be significant advantages AND disadvantages.

some (not all, I'm sure someone will add something else -- they always do) advantages:

o you can rebuild the index with nologging on (no rollback, no redo for the index maintenance)

o you can rebuild the index in parallel

o you get a nicely compacted index at the end

some disadvantages:

o if you added 100meg to a 5 gig table, rebuilding the index might take significant amounts of time

o the index is unavailable -- meaning the table is "offline" sort of during this entire load/reindex operation (this index just won't be there)



Clarification

Rob, September 17, 2002 - 8:41 pm UTC

This index maintenance is performed by the parallel execution servers for parallel direct-load INSERT

One of the advantages you mention concerns rebuilding the index in parallel. What are the parallel execution servers (note the plural) mentioned above doing if not rebuilding the index in parallel?

Tom Kyte
September 17, 2002 - 8:55 pm UTC

They are MAINTAINING the index in parallel (during a parallel direct path insert)

If you REBUILD an index in parallel, that is much different from MAINTAINING an index (adding new entries to an existing index) in parallel.

REBUILD = full scan table in parallel (since index doesn't have all of the data), sort it and build new index.

MAINTAIN = build your little index pieces for the data you inserted and then add it to the existing index in bulk at the end.
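The load-then-rebuild alternative Tom weighs above might look like this (table, index, and staging-table names are hypothetical; remember his earlier warning to back up after any nologging operation):

```sql
-- Take the index out of play for the duration of the load:
alter index t_idx unusable;
alter session set skip_unusable_indexes = true;

insert /*+ append */ into t select * from load_stage;
commit;

-- Rebuild afterwards: in parallel, without redo, nicely compacted.
alter index t_idx rebuild nologging parallel 4;

-- Optionally restore the attributes once done:
alter index t_idx logging noparallel;
```

Per the trade-off above, this pays off when the load is large relative to the existing index; for a small load into a big table, letting the direct-path insert maintain the index is usually faster.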

One more clarification

Rob, September 17, 2002 - 9:19 pm UTC

Tom:

Sorry to be a nag, I guess I am being dense about this but I am not understanding the maintain process.

1) If the index is maintained after all of the rows are inserted into the table, what info is telling the process what rows in the index to maintain. Does it know based upon the rows that have been inserted into the table but not committed?
3) If multiple processes are updating the index why is it slower than multiple processes building a new index.

MAINTAIN = build your little index pieces for the data you inserted and then add it to the existing index in bulk at the end.

Tom Kyte
September 17, 2002 - 9:34 pm UTC

1) yes, the rows it processed are the ones it is maintaining -- it writes above the HWM to the table and creates some index data off to the side. it'll merge that in after the fact, when the insert is done.

3) I didn't say that it would be, did I? If you add 100m to a 5gig table, it would probably be faster to maintain.

if you add 5 gig to a 100meg table, it would probably be faster to rebuild.



Redo Size

John, February 25, 2003 - 8:12 am UTC

Hi Tom,

I'm inserting about 30M records into a truncated NOLOGGING table using INSERT /*+ APPEND */ after dropping all unique indexes, marking all other indexes as unusable (and altering session) and disabling all constraints. The statistics are telling me I'm generating about 180Mb of redo. Without the APPEND it's about 850Mb. What I'm keen to know is if 180Mb is really as small as the redo gets. What size redo would you expect for this insert?
Is the redo because of the generation of a sequence number, managing the extents, marking the inserted rows, or ... something else?

Your comments would be appreciated.

Tom Kyte
February 25, 2003 - 9:52 am UTC

You may well have missed an index. It could be recursive SQL due to a sequence (alter your sequences and set the cache = 100,000 or more -- it'll go MUCH faster as well).

redo for recursive sql (auditing, sequences, anything you call as a side effect)
redo for space mgmt (extents, quotas, etc)

You can use log miner to see more clearly what the redo is.
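For the sequence point, a sketch (sequence name hypothetical):

```sql
-- Each dictionary update that refreshes a sequence's cache is recursive
-- SQL that generates redo; a large cache amortizes that cost across
-- many NEXTVAL calls.
alter sequence my_seq cache 100000;
```

Log Miner (the DBMS_LOGMNR package and the V$LOGMNR_CONTENTS view) can then be pointed at an archived log to attribute the remaining redo to specific segments and operations.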

Problem in Creating Database

MWG, March 04, 2003 - 5:09 am UTC

Hi Tom,

I have a script to create database, as

startup nomount pfile="e:\oracle\ora92\database\initTESTORA.ora";
CREATE DATABASE TESTORA controlfile reuse
LOGFILE GROUP 1 ('e:\oracle\oradata\TESTORA\REDO\redo01.log', 'z:\oracle\MIRLOG\TESTORA\redo01.log') SIZE 10M,
GROUP 2 ('e:\oracle\oradata\TESTORA\REDO\redo02.log', 'z:\oracle\MIRLOG\TESTORA\redo02.log') SIZE 10M,
GROUP 3 ('e:\oracle\oradata\TESTORA\REDO\redo03.log', 'z:\oracle\MIRLOG\TESTORA\redo03.log') SIZE 10M
MAXINSTANCES 1
MAXLOGHISTORY 1
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
DATAFILE 'e:\oracle\oradata\TESTORA\SYS\system01.dbf' SIZE 300M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
UNDO TABLESPACE "undotbs01" DATAFILE 'e:\oracle\oradata\TESTORA\UNDO\undo01.dbf' SIZE 500M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE 'e:\oracle\oradata\TESTORA\TEMP\temp01.dbf' SIZE 55M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
CHARACTER SET WE8ISO8859P15
NATIONAL CHARACTER SET AL16UTF16;

spool off

spool e:\oracle\ora92\assistants\dbca\logs\CreateDBFiles.log
CREATE TABLESPACE "USERS" LOGGING DATAFILE 'e:\oracle\oradata\TESTORA\USR\users01.dbf' SIZE 10M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL;
CREATE TABLESPACE "RBS" LOGGING DATAFILE 'e:\oracle\oradata\TESTORA\RBS\rbs01.dbf' SIZE 20M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL;
CREATE TABLESPACE "INDX" LOGGING DATAFILE 'e:\oracle\oradata\TESTORA\INDX\indx01.dbf' SIZE 50M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL;
spool off

spool e:\oracle\ora92\assistants\dbca\logs\CreateDBCatalog.log
@e:\oracle\ora92\rdbms\admin\catalog.sql;
@e:\oracle\ora92\rdbms\admin\catexp7.sql;
@e:\oracle\ora92\rdbms\admin\catblock.sql;
@e:\oracle\ora92\rdbms\admin\catproc.sql;
@e:\oracle\ora92\rdbms\admin\catoctk.sql;
@e:\oracle\ora92\rdbms\admin\owminst.plb;
exit;

My question is: after I create the database with this script and try to connect to it, it says 'sys user doesn't have create session privilege'.

What's wrong with my script? Could you please help me?

thank you in advance.

MWG from Germany

Tom Kyte
March 04, 2003 - 6:06 pm UTC

You should not use SYS (or SYSTEM, for that matter) anyway. It is probably best that you cannot log in with SYS; that is our account, you don't need to use it (in fact 9i makes it excessively hard by default to use it at all!)

connect ... AS SYSDBA

should do it.

MWG, March 06, 2003 - 4:13 am UTC

Hi Tom,

Thanks for the earlier reply.

But Tom, since I want to run pupbld.sql, I do need to connect as SYSTEM. With my script I cannot connect as SYSTEM (because SYSTEM doesn't have the CREATE SESSION privilege), so I connected to SYSTEM as SYSDBA and then ran pupbld.sql. But after that, when I try to connect as another user, Oracle says "pupbld.sql should be run when connected as SYSTEM, therefore cannot load data into SQLPLUS_PRODUCT_PROFILE".

So could you please tell me what I have done wrong?

Thanks,

MWG from Germany

Tom Kyte
March 06, 2003 - 7:42 am UTC

you didn't follow the directions to run it as system.


run it as system as it says. fix your system account, run it, unfix the system account.
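The "fix, run, unfix" sequence might look like this (a sketch; the SYSTEM password and the `@?/sqlplus/admin/pupbld.sql` path are assumptions for illustration; verify them in your environment):

```sql
-- As SYSDBA: temporarily give SYSTEM the ability to log in
CONNECT / AS SYSDBA
GRANT CREATE SESSION TO system;

-- Now connect AS SYSTEM (hypothetical password) and run the script
CONNECT system/manager
@?/sqlplus/admin/pupbld.sql

-- "Unfix" the account afterwards if that is your policy
CONNECT / AS SYSDBA
REVOKE CREATE SESSION FROM system;
```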

Doubt

Cefers.br, May 30, 2003 - 5:14 pm UTC

I read the Oracle Documentation link you mentioned above and I found that:

[...]
Advantage over Parallel Direct Load (SQL*Loader)

Direct-path INSERT operations ensure atomicity of the transaction.
[...]

Can you explain what is ATOMICITY of the transaction?

Thanks in advance...

Tom Kyte
May 31, 2003 - 11:41 am UTC

it either happens entirely or not at all. It is a single statement. it is atomic.



how can i prevent DML from SQL*PLUS

MUBARAK MUNSOUR ELNOUR, July 14, 2003 - 6:46 am UTC

Dear
Tom

what is the value that I insert into the table SQLPLUS_PRODUCT_PROFILE to prevent any DML operation from SQL*Plus?

THANKS

MUBARAK

Tom Kyte
July 14, 2003 - 9:24 am UTC

Regarding /*+ APPEND */ problem by P from LA

A reader, July 14, 2003 - 4:30 pm UTC

Hi Tom,
I was just browsing this thread, and when I came to the problem faced by "P from LA" I could see that he was giving the hint in his query as "/*+APPEND*/" with no SPACE between the + and APPEND. I supposed that his hint was never used, and hence it was doing a conventional path insert, using the RBS and generating REDO, and hence his inserts were failing. Am I right in this regard? Or was his problem solved by you offline?

Thanks

Tom Kyte
July 15, 2003 - 1:07 am UTC

that space after the + was only needed in earlier releases, and then only inside PL/SQL -- it did not apply here

the problem was the sequence and the massive recursive sql taking place. altering the sequence to have a much larger cache size not only worked around it but made it lots faster

Thank you.. I tried it on 9iR2 and it worked without the spaces as well :)

A reader, July 15, 2003 - 1:10 am UTC


NOLOGGING at Tablespace level

A reader, July 16, 2003 - 10:36 pm UTC

Hi Tom,
Can I get any improvement in performance by placing a tablespace in NOLOGGING mode for a DW application running in NOARCHIVELOG mode?

Tom Kyte
July 17, 2003 - 10:19 am UTC

not really -- the operations such as

create table as select
insert /*+ APPEND */
direct path loads

will already skip logging in a noarchivelog mode database as they do direct writes to the datafiles

other operations such as insert/update/delete/merge will always generate redo regardless of the mode

Why this small difference?

A reader, July 17, 2003 - 10:32 pm UTC

Tom,
Thanks for your quick response. But I am noticing a small difference, as shown below:
SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL> select substr(segment_name,1,30), tablespace_name from user_segments where tablespace_name = 'TEST_TS';

no rows selected

SQL> select LOGGING from user_tablespaces where  tablespace_name = 'TEST_TS';

LOGGING
---------
NOLOGGING

SQL> create table test tablespace test_ts as select * from dba_objects where 1=0;

Table created.
SQL> set autotrace on statistics 
SQL> insert into test select * from dba_objects;

8733 rows created.


Statistics
----------------------------------------------------------
          9  recursive calls
        350  db block gets
      10304  consistent gets
        219  physical reads
     945336  redo size
        652  bytes sent via SQL*Net to client
        470  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
       8733  rows processed
SQL> rollback;

SQL> alter tablespace TEST_TS logging;

Tablespace altered.

SQL> select LOGGING from user_tablespaces where  tablespace_name = 'TEST_TS';

LOGGING
---------
LOGGING
SQL> insert into test select * from dba_objects;

8733 rows created.


Statistics
----------------------------------------------------------
         35  recursive calls
        362  db block gets
      10302  consistent gets
          0  physical reads
     947424  redo size
        653  bytes sent via SQL*Net to client
        470  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
       8733  rows processed

 

Tom Kyte
July 18, 2003 - 1:43 am UTC

that is totally insignificant -- the extra redo could happen for any number of reasons -- eg: recursive sql executed in order to grab a new extent.

nologging would take the redo down to a really small number, it is not coming into play here AT ALL
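To see nologging actually come into play, the table itself (not just the tablespace) has to be NOLOGGING and the insert has to be direct path. A sketch, reusing the TEST table from the transcript above:

```sql
ALTER TABLE test NOLOGGING;

-- Direct path insert: with the table NOLOGGING, the database in
-- ARCHIVELOG mode but not FORCE LOGGING, this generates only the
-- minimal redo for dictionary/space management changes.
INSERT /*+ APPEND */ INTO test SELECT * FROM dba_objects;
COMMIT;
```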

SQL*Loader Direct Load

K, October 04, 2003 - 2:37 pm UTC

SQL*Loader direct load bypasses the archive logs. If that is the case, how does one replicate the data in a contingency database, i.e. an 8i standby? Secondly, is there a command to find out, in the primary and in the standby database, whether a nologging operation was performed on a data load or on any DML? If so, can you please explain with an example of how it works? You have mentioned that in 9i, even with nologging at the table level, Oracle ignores it and generates redo anyway for total protection. Is there a command to be set on the primary or the contingency database, or is it a default setting in Oracle 9i?

Thanks


Tom Kyte
October 04, 2003 - 3:18 pm UTC

no it doesn't.

only if the object was marked as NOLOGGING in the database AND the database allows for nonlogged operations.

a direct path load does not bypass the redo logs. not unless you go way way out of your way to do so -- and in a standby environment -- you would not "do so"

IS APPEND WORKING ?

Abhi malhotra, November 10, 2003 - 3:42 am UTC

hello sir,

You have given comment in the middle of this discussion.

"
Note in 9iR2, you can make it so that even if

a) nologging is on the table
b) you do a non-loggable thing (like append)
c) the database is in archive log mode

we ignore you and generate redo log anyway (for just such total protection). 
"

----------------------------------------------------------
SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
PL/SQL Release 9.2.0.1.0 - Production
CORE    9.2.0.1.0       Production
TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production

SQL> conn j/j
Connected.
SQL> create table a as select * from dba_objects where rownum<1;

Table created.

SQL> set autot on stat
SQL> insert into a select * from dba_objects;

21947 rows created.


Statistics
----------------------------------------------------------
        429  recursive calls
       7397  db block gets
      20771  consistent gets
        737  physical reads
    3041696  redo size                      <<--- First Time Redo

SQL> truncate  table a;

Table truncated.

SQL>
SQL> insert /*+APPEND*/ into a select * from dba_objects;

21947 rows created.


Statistics
----------------------------------------------------------
        357  recursive calls
        224  db block gets
      19504  consistent gets
          3  physical reads
    2779828  redo size                     <<--- Second Time Redo

SQL> truncate table a;

Table truncated.

SQL> alter table a nologging;

Table altered.

SQL> insert /*+APPEND*/ into a select * from dba_objects;

21947 rows created.


Statistics
----------------------------------------------------------
        528  recursive calls
        225  db block gets
      19527  consistent gets
          0  physical reads
      24308  redo size          <<--- Third time very less Redo



Q1.

Is APPEND working with 9iR2, the database in ARCHIVELOG mode, and the table in NOLOGGING?

Q2.
    Why is there a difference between the first-time and the second-time redo?

Thanx
Abhi

 

Tom Kyte
November 10, 2003 - 7:48 am UTC

q1) your database wasn't altered to force logging


ops$tkyte@ORA920PC> select force_logging from v$database;
 
FOR
---
NO

q2) 
insert append and insert work differently, i would expect them to do different things.  one writes above the high water mark, maintains indexes in bulk. The other does not.  you cannot compare them. 
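The 9iR2 protection Tom alludes to is enabled database-wide by the DBA -- a sketch:

```sql
-- With this set, NOLOGGING attributes and direct path operations
-- are ignored for redo purposes: everything is logged (useful for
-- standby databases).
ALTER DATABASE FORCE LOGGING;

-- and to revert:
ALTER DATABASE NO FORCE LOGGING;
```

With force logging enabled, the `select force_logging from v$database;` query above would return YES, and the third insert in the test would generate full redo again.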

global temporary table, archivelog mode, direct path insert

Dusan Valasek, February 16, 2004 - 11:04 am UTC

Hi Tom,

I am considering using "alter table exchange partition with table", where the table for exchange would be a global temporary table. To populate the temporary table fast, I would use the insert /*+ append */ approach, then rebuild indexes and finally exchange it with the proper partition. Do I need to do a backup immediately after this operation in archivelog mode?

thanks,

Dushan

Tom Kyte
February 16, 2004 - 11:12 am UTC

it'll not be happening. you won't be using a GTT for such an operation. (doesn't really make sense, the data "disappears" and the data would be in a "tempfile", not a "datafile" in most all 'well configured' setups)



You'll be using a "real" table.
If you use NOLOGGING operations you'll want to backup the affected datafiles, yes.

insert /*+ append */ by itself is "redo safe" (generates redo)

you would have to alter the table to be NOLOGGING to bypass redo generation (and even then, make sure the database supports no logging, the dba can disable that feature)
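Putting Tom's points together, the flow with a "real" staging table might look like this (a sketch; all table, column, and partition names are hypothetical, and it assumes the DBA permits nologging operations):

```sql
-- Load a permanent staging table with minimal redo
CREATE TABLE stage NOLOGGING AS
  SELECT * FROM source_data WHERE 1=0;

INSERT /*+ APPEND */ INTO stage SELECT * FROM source_data;
COMMIT;

-- Build the matching index without logging
CREATE INDEX stage_idx ON stage(id) NOLOGGING;

-- Swap the loaded segment into the partitioned table
ALTER TABLE big_table EXCHANGE PARTITION p_2004 WITH TABLE stage
  INCLUDING INDEXES WITHOUT VALIDATION;
```

Since the load and index build were not logged, the affected datafiles should be backed up right after the exchange.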



bad

A reader, May 14, 2004 - 11:48 am UTC


Tom Kyte
May 15, 2004 - 10:41 am UTC

good


i win?

A reader, July 08, 2004 - 9:10 am UTC

Tom:
Does "INSERT /*+ NOLOGGING APPEND */ INTO TABLE T SELECT * FROM Y " make any sense?
NOLOGGING is not an optimizer hint, right? It has to be defined at the table level, right?
Thanks

Tom Kyte
July 08, 2004 - 9:37 am UTC

it makes sense.

it probably isn't doing what the programmer intended :)

nologging is not a hint, it would be a table attribute.

append makes sense, nologging is just a comment in this case

All indexes set to nologging ?

Neil, July 19, 2004 - 5:15 am UTC

Tom

would setting all indexes to nologging help speed up the import process, as presumably the DDL used to create the indexes in the target database would then be done with nologging?

What impact would this have on the live (source) database, i.e. having all indexes altered to be nologging instead of logging?

thanks

Neil

Tom Kyte
July 19, 2004 - 7:55 am UTC

nologging only affects index creates and index rebuilds.

normal day to day insert/update/delete activity is ALWAYS logged.

a DBA might take exception to you doing non-logged operations on their database as they have to schedule a hot backup very very soon after you complete your task.

You need to coordinate with them before doing this (in 9ir2 they can make no-logging a non-option actually -- for standby databases)

Indexes nologging

Neil, July 23, 2004 - 5:43 am UTC

>>nologging only affects index creates and index rebuilds.

So once created, altering an index to nologging has no effect? The maintenance of the index by Oracle does not change, and gets logged?

I was just thinking that if all the indexes were set to nologging, then when I do a full export of the database, the indexes would be built nologging during the import into our development database, and therefore speed up the operation a bit. I was thinking of doing this rather than creating a separate indexfile and changing the clauses to nologging. Does this make sense?

Thanks

Neil

Tom Kyte
July 23, 2004 - 8:53 am UTC

that is correct, the index maintenance during insert/update/delete is always logged.


You are correct in your thinking the index statement would be generated with "nologging" and the import would not create redo for the index if the database permits it.



Confirmation

Muhammad Riaz Shahid, August 12, 2004 - 11:30 am UTC

Specifying NOLOGGING is useful for a database running in archivelog mode .... right?


Tom Kyte
August 12, 2004 - 11:38 am UTC

it only makes sense in an archive log mode since in noarchive log mode, anything that doesn't need to generate redo won't.

so, in noarchivelog, insert /*+ append */ won't generate redo. In archivelog it will -- unless the table is nologging, but even then nologging can be ignored if the dba said "force logging"
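The three settings that interact here can all be inspected -- a sketch (the table name T is a placeholder):

```sql
-- Database-wide: archiving and the force-logging override
SELECT log_mode, force_logging FROM v$database;

-- Per-object: the NOLOGGING attribute
SELECT table_name, logging
  FROM user_tables
 WHERE table_name = 'T';
```

insert /*+ append */ skips redo only when log_mode is NOARCHIVELOG, or when logging is NO and force_logging is NO.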

Direct Path Insert in LOGGING mode

Charu, September 08, 2004 - 3:51 am UTC

Hi Tom,

Can any performance benefit be obtained by using direct path insert instead of conventional insert IN LOGGING MODE?

I did a few quick tests and at the end of SQL statements are my findings (Not conclusions):

<NOTE: DB is in ARCHIVELOG mode>

alter table tb1 logging;

alter table tb2 logging;

begin runstats_pkg.rs_start; end;

insert into tb1 select * from all_objects;

commit;

begin runstats_pkg.rs_middle; end;

insert /*+ APPEND */ into tb2 select * from all_objects;

commit;

begin runstats_pkg.rs_stop; end;

I ran these multiple times and what I found was:

-- Negligible difference in time taken by conventional and direct path inserts in LOGGING mode.

-- Negligible difference in redo generated by the two inserts.

-- Direct path insert takes significantly fewer latches compared to conventional path (obviously due to its way of operation).

-- Direct path insert should consume a lot more PGA. I didn't check this; it's a guess.

Given the findings above, my hunch is that the direct path insert *might* scale better than conventional path.

Looking for your definitive advice.

Thanks & regards,
Charu.

Tom Kyte
September 08, 2004 - 9:57 am UTC

check out the concepts guide, it goes into this in some detail.


direct path inserts can be done by a single session. (so scale up with users, eh, no)

they always write above the high water mark (will NOT reuse space).

you cannot read from a table you direct path insert into until you commit.



If you are doing large bulk loads, they may make sense.

Direct mode insert in D/W environments

Charu, September 08, 2004 - 10:44 am UTC

Thanks Tom.

Yes, I know a little about the constraints that you have mentioned. I was thinking in terms of the *size* of the data when I said 'scale' -- my mistake. (Does the term 'scale' not include an increase in data volume?)

Ours is a D/W environment and I am rewriting a batch process with lots of INSERT AS SELECTs. So these constraints you mentioned won't hamper the scenario. However, the DBA wouldn't accept NOLOGGING, so I am thinking of trying this.

So back to the original question - can direct path insert be used for enhancing performance even in LOGGING mode, given the other constraints won't cause problem?

A little more than yes/no/maybe please, Tom.

Thanks & regards,
Charu.


create table with nologgin option

Sean, October 16, 2004 - 8:28 pm UTC

Hi Tom,

I don't know why it still generated a lot of archive log files even though I used nologging:

>create table audit_data_temp as select * from audit_data nologging;

Thanks so much for your help.

Sean

Tom Kyte
October 16, 2004 - 8:36 pm UTC

create table audit_data_temp 
as 
select * 
  from audit_data I_AM_A_CORRELATION_NAME_NOT_A_KEYWORD;

now does it make more sense?  nologging was a big old "nothing" the way you used it.



ops$tkyte@ORA9IR2> @mystat "redo size"
ops$tkyte@ORA9IR2> set echo off
 
NAME                                VALUE
------------------------------ ----------
redo size                         3372608
 
ops$tkyte@ORA9IR2> create table t as select * from all_objects <b>NOLOGGING;</b>
 
Table created.
 
ops$tkyte@ORA9IR2> @mystat2
ops$tkyte@ORA9IR2> set echo off
 
NAME                                    V       DIFF
------------------------------ ---------- ----------
redo size                         6591036   <b> 3218428</b>
 
ops$tkyte@ORA9IR2> drop table t;
 
Table dropped.
 
ops$tkyte@ORA9IR2> @mystat "redo size"
ops$tkyte@ORA9IR2> set echo off
 
NAME                                VALUE
------------------------------ ----------
redo size                         6625532
 
ops$tkyte@ORA9IR2> create table t <b>NOLOGGING</b> as select * from all_objects;
 
Table created.
 
ops$tkyte@ORA9IR2> @mystat2
ops$tkyte@ORA9IR2> set echo off
 
NAME                                    V       DIFF
------------------------------ ---------- ----------
redo size                         6696156      <b>70624</b>
 
ops$tkyte@ORA9IR2>



<b>just like in real estate, it is all about location, location, location...</b>
 

failed to extend rollback segment

Sanjay Talati, February 01, 2005 - 5:32 am UTC

hi Tom,

First of all thanks for such prompt and apt replies...

I have an error
ORA-01562: failed to extend rollback segment number 8
ORA-01650: unable to extend rollback segment PRBS04 by 2560 in tablespace ROLLBACK_SPACE1

this error occurs while i use sql loader to load bulk data into table...

could u please guide me as to whats the reason for this... and what could be the possible solution..

Thanks a lot in advance

Sanjay Talati

Tom Kyte
February 01, 2005 - 8:54 am UTC

"u" is out sick today. I still want to meet that person, "u" is so in demand but we never seem to cross paths.

Ok if "I" answer?

You ran out of rollback; add more space to it (your rollback tablespace).

Your RBSs are too small (sqlldr commits pretty frequently, so you must have tiny RBSs configured).

Questions and doubts

A reader, February 16, 2005 - 8:49 pm UTC

Tom,

Pls. clarify me for the following :

Suppose I do a direct load insert using the /*+ APPEND */ hint, say insert /*+ APPEND */ into t1 select * from t2;

1. Is my understanding shown below correct?
   a. Read the blocks from t2 into the session's PGA
   b. Format the data blocks
   c. Write the formatted data blocks directly into the
      datafiles that comprise the tablespace containing the
      table t1 (using data saves that bypass the buffer
      cache in the SGA)
2. If 1 is true, is there any way we can control the
   data save frequency for this operation?
3. Since the data blocks are read directly into the PGA,
   formatted, and written directly to the datafiles
   bypassing the SGA, and since /*+ APPEND */ is normally
   used for large bulk data loads, we should have
   sufficient PGA/process memory. Correct? If not, what
   will happen to the huge INSERT operation?
4. Since this operation does not generate UNDO in
   archivelog/noarchivelog mode, and also does not
   generate REDO in archivelog mode with NOLOGGING
   (for databases that support non-loggable operations),
   how will rollback be performed in these cases (since
   we do not have undo to roll back)? Does this mean this
   operation cannot be rolled back at all?

Please clarify

Tom Kyte
February 17, 2005 - 7:28 am UTC

1) pretty much

2) not with insert /*+ append */, it'll just keep writing out, but there is no "checkpoint" or "savepoint". SQL either happens or it doesn't. Entirely.


3) it just reads and writes and reads and writes. The size of the load is not really a factor here other than it takes longer to read/write more data. It is not like we load up 10% of the data and then write; we read some, write some.

4) all we need to do is forget the newly added data. We don't advance the high water mark of the table. It is as if the insert never took place because we were writing to a space that no one would think to look at.

How to get the list of objects affected by nologging operations?

Edgar, June 10, 2005 - 9:41 am UTC

Hi,

Let's say I need the complete list of objects that I would be unable to recover if a crash happened right now, assuming I have an n-hours-old full online backup and all the redo since.

Which SQL query should I use?

Sincerely,

Tom Kyte
June 10, 2005 - 10:38 am UTC

look at v$datafile, there are unrecoverable columns in there. any datafile with that filled in has unrecoverable changes in it, all objects in that file are subject to having unrecoverable changes.

Ops, i find already

Edgar, June 10, 2005 - 10:06 am UTC

Sorry, I am too foolish/lazy sometimes.
I found the answer: Metalink Note 290161.1,
THE GAINS AND PAINS OF NOLOGGING OPERATIONS

It seems the answer is: no way with a simple query:
...
Currently in Oracle 9i and Oracle 10gR1, only the primary database's V$DATAFILE view reflects nologging operations.
SELECT NAME, UNRECOVERABLE_CHANGE#,UNRECOVERABLE_TIME
FROM V$DATAFILE;
...
Maybe auditing of user actions can help?


Tom Kyte
June 10, 2005 - 10:42 am UTC

from dba_extents, you can get a distinct list of potentially affected objects.

if this is a FREQUENT concern of yours (that would be a problem -- a bad practice; logging should be done in most every case), separate out the objects you do non-logged operations on from the rest.
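A query along those lines might look like this (a sketch joining the extents in unrecoverable datafiles to their owning segments; verify the column names against your release):

```sql
-- Objects with at least one extent in a datafile that has
-- recorded an unrecoverable (nologging) change
SELECT DISTINCT e.owner, e.segment_name, e.segment_type
  FROM dba_extents e, v$datafile d
 WHERE d.unrecoverable_change# > 0
   AND e.file_id = d.file#;
```

This is only a list of *potentially* affected objects, as Tom notes: every object in such a file is suspect, not just the one that did the nologging operation.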

Thanks for reply,

Edgar, June 20, 2005 - 8:44 am UTC

no, it is not because of my own poor practice.
I am investigating the possibility of implementing a physical standby (with read-only windows for reporting) for a running system that I am not currently administering.

Discrepancies between LOGGING and NOLOGGING

Logan Palanisamy, September 12, 2005 - 2:27 am UTC

Tom,

I am trying to convert a LONG column on table T1 with 20 million records into a CLOB column. To start with, I am testing on a small set of records (~15,000).

As expected, NOLOGGING generates less redo, but it takes more time for some unknown reason.

There are four insert statements in my demo below:

1. Regular insert (logging)
2. Insert /*+ APPEND */ (nologging) using TO_LOB
3. Insert /*+ APPEND */ (nologging) using rpad('*', 10000, '*')
4. Insert /*+ APPEND */ (nologging) using rpad('*', 50000, '*')

My questions are:

1. Why is the second insert taking more time than the first even though it generates less redo log than the first?

2. Why is the second insert generating more redo log than the third and fourth inserts? Under nologging, shouldn't all insert/*+ APPEND*/ generate the same amount of redo? The interesting thing here is, the third and fourth inserts generate the same amount of redo. 

Thanks.



SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
PL/SQL Release 8.1.7.4.0 - Production
CORE    8.1.7.0.0    Production
TNS for Solaris: Version 8.1.7.4.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

SQL> select extent_management, allocation_type from dba_tablespaces where tablespace_name = (select default_tablespace from user_users);

EXTENT_MAN ALLOCATIO
---------- ---------
LOCAL      UNIFORM

SQL> 
SQL> drop table T2;

Table dropped.

SQL> create table t2 (id number,  NOTES CLOB);

Table created.

SQL> desc T1
 Name                                          Null?    Type
 --------------------------------------------- -------- ------------
 FNAME                                                  VARCHAR2(31)
 LNAME                                                  VARCHAR2(31)
 MNAME                                                  VARCHAR2(31)
 STREET                                                 VARCHAR2(51)
 CITY                                                   VARCHAR2(31)
 ZIP                                                    VARCHAR2(11)
 STATE                                                  VARCHAR2(3)
 CTIME                                         NOT NULL DATE
 NOTES                                                  LONG

SQL> desc t2
 Name                                          Null?    Type
 --------------------------------------------- -------- ------------
 ID                                                     NUMBER
 NOTES                                                  CLOB

SQL> 
SQL> select table_name, logging from user_tables where table_name like 'T2';

TABLE_NAME                     LOG
------------------------------ ---
T2                             YES

SQL> select table_name, logging from user_lobs where table_name like 'T2';

TABLE_NAME                     LOG
------------------------------ ---
T2                             YES

SQL> 
SQL> select count(*), sum(bytes) from user_extents where segment_name =  'T2';

    COUNT(*)   SUM(BYTES)
------------ ------------
           1    5,242,880

SQL> set autotrace traceonly timing on
---- The first insert
SQL> Insert into T2 select  rownum, to_lob(p.NOTES) from T1 p where p.ctime > trunc(sysdate) - 24;

14489 rows created.

Elapsed: 00:01:26.99

Execution Plan
----------------------------------------------------------
   0      INSERT STATEMENT Optimizer=CHOOSE (Cost=84383 Card=762721 By
          tes=7627210)

   1    0   COUNT
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=84383 Ca
          rd=762721 Bytes=7627210)

   3    2       INDEX (RANGE SCAN) OF 'T1_CTIME_DESC' (NON-UNIQUE)
           (Cost=728 Card=762721)





Statistics
----------------------------------------------------------
        819  recursive calls
      58205  db block gets
      52936  consistent gets
      16111  physical reads
   95825640  redo size
        866  bytes sent via SQL*Net to client
        632  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          8  sorts (memory)
          0  sorts (disk)
      14489  rows processed

SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> set autotrace off timing off
SQL> select max(dbms_lob.getlength(NOTES)), avg(dbms_lob.getlength(NOTES)), count(*) from t2;

MAX(DBMS_LOB.GETLENGTH(NOTES)) AVG(DBMS_LOB.GETLENGTH(NOTES))     COUNT(*)
------------------------------- ------------------------------- ------------
                         40,001                           3,220       14,489

SQL> 
SQL> select count(*), sum(bytes) from user_extents where segment_name =  'T2';

    COUNT(*)   SUM(BYTES)
------------ ------------
           7   36,700,160

SQL> alter table T2 nologging modify lob (NOTES) (nocache nologging);

Table altered.

SQL> select table_name, logging from user_tables where table_name like 'T2';

TABLE_NAME                     LOG
------------------------------ ---
T2                             NO

SQL> select table_name, logging from user_lobs where table_name like 'T2';

TABLE_NAME                     LOG
------------------------------ ---
T2                             NO

SQL> set autotrace traceonly timing on
---- The second insert
SQL> Insert /*+ APPEND */ into T2 select  rownum, to_lob(p.NOTES) from T1 p where p.ctime > trunc(sysdate) - 24;

14489 rows created.

Elapsed: 00:02:48.76

Execution Plan
----------------------------------------------------------
   0      INSERT STATEMENT Optimizer=CHOOSE (Cost=84383 Card=762721 By
          tes=7627210)

   1    0   COUNT
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=84383 Ca
          rd=762721 Bytes=7627210)

   3    2       INDEX (RANGE SCAN) OF 'T1_CTIME_DESC' (NON-UNIQUE)
           (Cost=728 Card=762721)





Statistics
----------------------------------------------------------
        662  recursive calls
      58182  db block gets
      51578  consistent gets
      16215  physical reads
   27318688  redo size
        866  bytes sent via SQL*Net to client
        646  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         13  sorts (memory)
          0  sorts (disk)
      14489  rows processed

SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> set autotrace off timing off
SQL> select max(dbms_lob.getlength(NOTES)), avg(dbms_lob.getlength(NOTES)), count(*) from t2;

MAX(DBMS_LOB.GETLENGTH(NOTES)) AVG(DBMS_LOB.GETLENGTH(NOTES))     COUNT(*)
------------------------------- ------------------------------- ------------
                         40,001                           3,220       28,978

SQL> 
SQL> 
SQL> select count(*), sum(bytes) from user_extents where segment_name =  'T2';

    COUNT(*)   SUM(BYTES)
------------ ------------
          13   68,157,440

SQL> set autotrace traceonly timing on
---- The third insert
SQL> Insert /*+ APPEND */ into T2 select  rownum, rpad('*', 10000, '*') from T1 p where p.ctime > trunc(sysdate) - 24;

14489 rows created.

Elapsed: 00:06:10.20

Execution Plan
----------------------------------------------------------
   0      INSERT STATEMENT Optimizer=CHOOSE (Cost=728 Card=762721 Byte
          s=5339047)

   1    0   COUNT
   2    1     INDEX (RANGE SCAN) OF 'T1_CTIME_DESC' (NON-UNIQUE) (
          Cost=728 Card=762721 Bytes=5339047)





Statistics
----------------------------------------------------------
        657  recursive calls
     122938  db block gets
        534  consistent gets
         27  physical reads
    5084696  redo size
        866  bytes sent via SQL*Net to client
        651  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      14489  rows processed

SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL> set autotrace off timing off
SQL> select max(dbms_lob.getlength(NOTES)), avg(dbms_lob.getlength(NOTES)), count(*) from t2;

MAX(DBMS_LOB.GETLENGTH(NOTES)) AVG(DBMS_LOB.GETLENGTH(NOTES))     COUNT(*)
------------------------------- ------------------------------- ------------
                         40,001                           3,480       43,467

SQL> 
SQL> 
SQL> select count(*), sum(bytes) from user_extents where segment_name =  'T2';

    COUNT(*)   SUM(BYTES)
------------ ------------
          13   68,157,440

SQL> set autotrace traceonly timing on
---- The fourth insert
SQL> Insert /*+ APPEND */ into T2 select  rownum, rpad('*', 50000, '*') from T1 p where p.ctime > trunc(sysdate) - 24;

14489 rows created.

Elapsed: 00:06:11.85

Execution Plan
----------------------------------------------------------
   0      INSERT STATEMENT Optimizer=CHOOSE (Cost=728 Card=762721 Byte
          s=5339047)

   1    0   COUNT
   2    1     INDEX (RANGE SCAN) OF 'T1_CTIME_DESC' (NON-UNIQUE) (
          Cost=728 Card=762721 Bytes=5339047)





Statistics
----------------------------------------------------------
        665  recursive calls
     122966  db block gets
        544  consistent gets
          2  physical reads
    5086068  redo size
        866  bytes sent via SQL*Net to client
        651  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      14489  rows processed

SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> set autotrace off timing off
SQL> select max(dbms_lob.getlength(NOTES)), avg(dbms_lob.getlength(NOTES)), count(*) from t2;

MAX(DBMS_LOB.GETLENGTH(NOTES)) AVG(DBMS_LOB.GETLENGTH(NOTES))     COUNT(*)
------------------------------- ------------------------------- ------------
                         40,001                           3,610       57,956

SQL> 
SQL> 
SQL> select count(*), sum(bytes) from user_extents where segment_name =  'T2';

    COUNT(*)   SUM(BYTES)
------------ ------------
          13   68,157,440

SQL> spool off 

Tom Kyte
September 12, 2005 - 7:26 am UTC

nologging is doing direct path writes, you are not writing the data to the buffer cache. so, you are waiting for the physical IO to complete.

for 20,000,000 records, that probably won't matter anyway -- parallel direct path will likely be faster than filling up the buffer cache and waiting for dbwr to write them out.


You generally won't find redo generation to be your bottleneck (common misconception). You generate redo and LGWR writes it, but you don't wait for LGWR to write it until you commit - LGWR does it in the background.

direct path load between NOLOGGING and regular

Logan Palanisamy, September 12, 2005 - 7:07 pm UTC

Tom,

I understand what you are saying about direct-path-load/insert-append bypassing the buffer cache. 

But what I don't understand is why the direct-path insert with NOLOGGING (the second insert in the example below) takes more time than the similar one under normal logging (the first insert append).

SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

SQL> select extent_management, allocation_type from dba_tablespaces where tablespace_name = (select default_tablespace from user_users);

EXTENT_MAN ALLOCATIO
---------- ---------
LOCAL      UNIFORM

SQL> 
SQL> drop table T2;

Table dropped.

SQL> create table t2 (id number,  resume CLOB);

Table created.

SQL> 
SQL> select table_name, logging from user_tables where table_name like 'T2';

TABLE_NAME                     LOG
------------------------------ ---
T2                             YES

SQL> select table_name, logging from user_lobs where table_name like 'T2';

TABLE_NAME                     LOG
------------------------------ ---
T2                             YES

---- The first insert with LOGGING.
SQL> set autotrace traceonly timing on
SQL> Insert /*+ APPEND */ into T2 select  rownum, to_lob(p.resume) from person p
  2  where p.ctime > trunc(to_date('18-AUG-2005', 'DD-MON-YYYY'));

14489 rows created.

Elapsed: 00:00:53.49

Execution Plan
----------------------------------------------------------
   0      INSERT STATEMENT Optimizer=CHOOSE (Cost=4 Card=2 Bytes=20)
   1    0   COUNT
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'PERSON' (Cost=4 Card=2
           Bytes=20)

   3    2       INDEX (RANGE SCAN) OF 'PERSON_CTIME_DESC' (NON-UNIQUE)
           (Cost=3 Card=2)





Statistics
----------------------------------------------------------
        740  recursive calls
      58206  db block gets
      52907  consistent gets
      16210  physical reads
   95820252  redo size
        857  bytes sent via SQL*Net to client
        672  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          8  sorts (memory)
          0  sorts (disk)
      14489  rows processed

SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL> alter table T2 nologging modify lob (resume) (nocache nologging);

Table altered.

SQL> select table_name, logging from user_tables where table_name like 'T2';

TABLE_NAME                     LOG
------------------------------ ---
T2                             NO

SQL> select table_name, logging from user_lobs where table_name like 'T2';

TABLE_NAME                     LOG
------------------------------ ---
T2                             NO

SQL> 
SQL> alter system switch logfile;

System altered.

---- The Second Insert with NOLOGGING turned on.
SQL> set autotrace traceonly timing on
SQL> Insert /*+ APPEND */ into T2 select  rownum, to_lob(p.resume) from person p
  2  where p.ctime > trunc(to_date('18-AUG-2005', 'DD-MON-YYYY'));

14489 rows created.

Elapsed: 00:02:02.30

Execution Plan
----------------------------------------------------------
   0      INSERT STATEMENT Optimizer=CHOOSE (Cost=4 Card=2 Bytes=20)
   1    0   COUNT
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'PERSON' (Cost=4 Card=2
           Bytes=20)

   3    2       INDEX (RANGE SCAN) OF 'PERSON_CTIME_DESC' (NON-UNIQUE)
           (Cost=3 Card=2)





Statistics
----------------------------------------------------------
       1534  recursive calls
      58182  db block gets
      51777  consistent gets
      16215  physical reads
   27318740  redo size
        860  bytes sent via SQL*Net to client
        672  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         31  sorts (memory)
          0  sorts (disk)
      14489  rows processed

SQL> commit;

Commit complete.

Elapsed: 00:00:00.00

 

Tom Kyte
September 13, 2005 - 11:40 am UTC

conventional path:

o you write to the buffer (don't wait for IO to the table)
o you write to the redo log buffer (don't wait for IO to redo until commit; LGWR is writing this out as you are generating it, so by the time you say commit, most has already been written out)

so, in this small example, you didn't wait for physical IO to the datafile or redo log file really.

direct path:

o you write directly to disk, you wait for that IO to complete - YOU do the writing, not DBWR, not LGWR, you do the writing.


Small example here, you didn't fill the buffer cache, you didn't really have to wait for LGWR.

reader

A reader, September 29, 2005 - 9:40 am UTC

"
that is correct, the index maintenance during insert/update/delete is always
logged.
"
Is there any way, other than dropping and re-creating the indexes, to
reduce the redo generated by index maintenance when deletes run against a table in archivelog mode?


Tom Kyte
September 30, 2005 - 7:48 am UTC

nope.

and you wouldn't like it if there was. For it would imply that a simple instance failure during a delete would cause you to rebuild the indexes upon restart.

This is what databases do - protect data.

how does NOLOGGING affect Update ?

R.S.Karthik, February 15, 2006 - 6:23 pm UTC

Hello Tom
Sorry if this is a repeat question.... tried to search the site but could not find anything specific to this....

Setting the table property to NOLOGGING, I was under the impression that it would not generate redo/undo for either inserts or updates.
The insert with the /*+ APPEND */ hint works fine with minimal redo generated.
But when I try to update that table, it's generating a lot of redo.

Below are the details :
--------------------------
16:55:23 SQL>  create table test_k (i varchar2(100),j date) nologging;

Table created.

16:56:12 SQL> insert /*+ APPEND */ into test_k select rownum,sysdate from all_objects;

329531 rows created.


Statistics
----------------------------------------------------------
        108  recursive calls
         15  db block gets
    4350300  consistent gets
      16866  physical reads
       3360  redo size
        345  bytes sent via SQL*Net to client
        351  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
     329531  rows processed

16:58:04 SQL> commit;

Commit complete.

17:00:38 SQL> update test_k set j=sysdate;

329531 rows updated.


Statistics
----------------------------------------------------------
        100  recursive calls
     337514  db block gets
       1074  consistent gets
        896  physical reads
   83281684  redo size
        367  bytes sent via SQL*Net to client
        304  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
     329531  rows processed

17:01:11 SQL> 

-------------------------------

There are no indexes on this table and log_mode is "NOARCHIVELOG" for the database.

Is there a specific reason for Oracle to do it that way?

Any pointers on this are highly appreciated....

Thanks
Karthik.

 

Thanks !

Karthik, February 16, 2006 - 12:02 am UTC

Thanks Tom.... it's clear now....
Like you have mentioned : "Normal inserts/updates/deletes on the table are logged as normal. nologging
only affects certain operations like direct path loads and such on the object."

But I think the reason people like me get very confused is that Oracle decided to name the property "LOGGING" instead of, say, "INSERTLOGGING"....

I believe they named it that way to allow for future releases where hopefully NOLOGGING would mean no logging (or minimal logging) in all cases! :)

Thanks !


Tom Kyte
February 16, 2006 - 8:12 am UTC

umm, no - it'll never ever happen.

If you want no logging, please just use grep sed and awk.


databases are born to protect your data, doing what you suggest would be "less than responsible". indexes, tables, whatever - they are complex data structures and must be protected.

In fact however, NOLOGGING does what you asked for already "minimal logging" - we do that. As little as we can, as much as we NEED to.

grep, sed and awk

Karthik, February 17, 2006 - 12:03 am UTC

Thanks Tom....

Your statement about using "grep, sed and awk" kinda made me curious about what you were talking about....

<quote>
If you want no logging, please just use grep sed and awk.
</quote>

Once I went through this link : </code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:744625626787 <code>
I realized what you implied.... very funny and straight to the point :)

I forgot to mention that the database where I was testing was a development instance and was set to NOARCHIVELOG mode.... the production system is in archivelog mode though....

Thanks for the great site !
Karthik



effect of truncate table

A reader, February 21, 2006 - 11:01 am UTC

Is there a difference (in performance) between
INSERT /*+ APPEND */ INTO T1
and
INSERT INTO T1

when I did TRUNCATE TABLE T1 before executing
either of the above INSERTs.

Thanks

Tom Kyte
February 22, 2006 - 8:01 am UTC

yes. there is. they are very different.

the append does direct IO (which you wait for), bypasses undo generation for the table (which you don't have to wait for), and optionally may bypass redo generation (which on a properly configured system won't affect your runtime performance much at all)

the non-append writes to the buffer cache (which you likely won't wait for unless and until you 'fill it up'), generates undo (same comment as the buffer cache comment just before this), but will never bypass redo generation.
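The difference above can be observed directly with autotrace; a minimal sketch, assuming a hypothetical source table T_SRC and an empty target T1:

```sql
-- Hypothetical sketch: compare redo/undo statistics for direct path vs conventional.
set autotrace traceonly statistics

truncate table t1;
insert /*+ APPEND */ into t1 select * from t_src;  -- direct path: writes above the HWM
commit;  -- required: after APPEND the table cannot be read in the same transaction

truncate table t1;
insert into t1 select * from t_src;                -- conventional: buffer cache, full undo
commit;

set autotrace off
```

Comparing the "redo size" and "undo change vector size" statistics between the two runs shows the effect Tom describes.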

catproc.sql execution time

Deepak, March 07, 2006 - 7:36 am UTC

Hi Tom,

Have recently been asked to tune the Database creation process.

Previously we were executing the following scripts after creating the database...

$ORACLE_HOME/rdbms/admin/catalog.sql;
$ORACLE_HOME/rdbms/admin/catexp7.sql;
$ORACLE_HOME/rdbms/admin/catblock.sql;
$ORACLE_HOME/rdbms/admin/catproc.sql;
$ORACLE_HOME/rdbms/admin/catoctk.sql;
$ORACLE_HOME/rdbms/admin/owminst.plb;
connect SYSTEM/manager
$ORACLE_HOME/sqlplus/admin/pupbld.sql;
connect SYSTEM/manager
$ORACLE_HOME/sqlplus/admin/help/hlpbld.sql helpus.sql;

I tried the following operations:

Removed the following scripts from the above list

> catexp7.sql (As we are not going to use Oracle 7 export utility)
> catoctk.sql (As we do not use Oracle Cryptographic Toolkit package) from our application.

Then took a note of the time taken for execution of each of the above mentioned scripts.

Found that catproc.sql is taking most of the time, i.e., 14 mins.

Also monitored the buffer cache hit ratio and the library cache GETHITRATIO and found that both were under the desired limits (Buffer Pool Hit Ratio ~ 99% & GETHITRATIO of library cache to be above 90%) during the script execution.

Can you please help me in the following...

-- Did I do the right thing in removing the two scripts, catexp7.sql and catoctk.sql? Does Oracle recommend always executing the catoctk.sql script?

-- How can I tune the database creation process further?

Please help me out.

Tom Kyte
March 08, 2006 - 4:29 pm UTC

To make a database create go really fast - just restore a seed database from a backup.

That is, just create a database. When you want a new database - restore that one. The time to create a database becomes the time to restore.


why would you be creating databases so often that you need to tune this particular operation?

DB creation tuning

Deepak, March 09, 2006 - 12:40 am UTC

Thanks for your comments...

DBs get created whenever we install our product, so we need to create databases many times.

Tom, what about the scripts part of my previous question? Did I do the right thing in eliminating those scripts?

Can I take a dump of the database after running the scripts and restore it to the target database after creating the database?

Tom Kyte
March 09, 2006 - 1:07 pm UTC

use the technique we use - ship with a seed database, restore it.

Tell you what - download Oracle XE from otn.oracle.com and install it, see how fast (about 5 minutes from "click" to "done")



backup of terabyte database with some tables in nologging mode

vijaya, March 16, 2006 - 12:20 am UTC

We have 5 tables in nologging mode in a terabyte database; the database is in archivelog mode. It is obvious that we cannot do point-in-time recovery for those tables. What is the best backup method? We take daily incremental backups and weekly full backups. Our storage is EMC, and we use Veritas NetBackup.
Please advise the best backup method for this type of terabyte DB.
The force logging option is set to NO.

Tom Kyte
March 16, 2006 - 11:35 am UTC

you would need to do a full hot backup of any datafile with an unrecoverable change# in it after (soon, really really soon after) performing that non-logged operation.

I would suggest rethinking the whole "skip logging" thing on any object that means something to you and your business.

Backups with nologging events

Elaine, May 11, 2006 - 1:32 pm UTC

Tom, as usual you provide excellent information! I understand about taking a backup as soon as possible after a nologging activity, but have a strange situation to run by you:

We have a multi-terabyte database where 2 loads of million+ rows occur daily with an additional load added once per week. One runs using insert append and the other 2 run using direct path sqlldr. There are multiple update jobs that process daily.

We tried putting the database in archivelog mode, but the amount of redo activity, despite the nologging activity, was extensive and our connection to the SAN is ridiculously slow, so even though it shouldn't have caused a problem, it was a major performance hit.

Now the database is running in noarchivelog. Backing up is quite a task. The last backup we took involved pausing all the jobs, using transportable tablespaces to backup the main table and its partitioned indexes (about 2.6 Tb) and then using RMAN to back up the database incrementally in a startup mount mode (about 3 hours downtime).

Does that sound like a reasonable approach to you? Our intent is to create a duplicate database that can be switched to in the event of a failure and then we would just "replay" any transactions since the last backup.

Would we be buying anything (but trouble) if we put the database into archivelog mode right before issuing the RMAN command and then went back to noarchivelog once the database had been backed up, so that it could remain up during those 3 hours -- or is that just a dumb/bad idea?

Any suggestions are greatly appreciated.

Tom Kyte
May 11, 2006 - 8:01 pm UTC

... and our connection to
the SAN is ridiculously slow ..

you know, that is killing the DIRECT PATH operations then, they write directly to disk!


in noarchivelog mode, you can only backup in a 'cold' fashion.


I don't understand:

...
using transportable
tablespaces to backup the main table and its partitioned indexes (about 2.6 Tb)
and then using RMAN to back up the database incrementally in a startup mount
mode (about 3 hours downtime). ...

what did you mean by "incrementally"?


you could decrease the "down time windows" by

a) shutdown
b) put into archive log mode + startup
c) backup hot
d) switch log files and archive all - back the archives up
e) shutdown
f) put into noarchive log mode + startup


the stuff you get from c and d would be sufficient to restore a consistent backup from.
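In SQL*Plus, steps a) through f) above might look like the following (a sketch, not a tested script; the hot backup itself in step c would be taken with your backup tool, e.g. RMAN):

```sql
-- a/b: bounce into archivelog mode
shutdown immediate
startup mount
alter database archivelog;
alter database open;

-- c: take the hot backup here (e.g. RMAN: BACKUP DATABASE)

-- d: switch and archive the current logs, then back those archives up
alter system archive log current;

-- e/f: bounce back into noarchivelog mode
shutdown immediate
startup mount
alter database noarchivelog;
alter database open;
```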

Poor SAN performance

Elaine H, May 15, 2006 - 2:49 pm UTC

Unfortunately until we move to our new hardware (some undetermined time this year) we are stuck with having the slow performance through the SAN. The insert/append and direct path sqlldr suffer, but it would be worse if we weren't using these tools.

I am using RMAN to do the cold backup. I originally wrote the script for a hot backup, so I wrote the script using incremental level 1 -- which in a cold backup is a contradiction in terms so sorry for the misnomer.

Do I really need to do an explicit log switch if I use RMAN to back up the database in archivelog mode? I remember reading somewhere that when you issue the BACKUP ... ARCHIVELOG command following the backup, it automatically does the log switch to make the backup consistent.
"At the beginning of every BACKUP ... ARCHIVELOG command that does not specify an UNTIL clause, RMAN attempts to automatically switch out of and archive the current online redo log. In this way, RMAN can include the current redo log in the backup set.

If the database is open, then at the start of an archived log backup RMAN tries to switch out of and archive the current online log according to these rules:

RMAN runs ALTER SYSTEM ARCHIVE LOG CURRENT.
If the UNTIL clause or SEQUENCE parameter is specified, RMAN does not try to switch or archive online logs. "


Do I still need to do the explicit switch too?

Tom Kyte
May 15, 2006 - 3:58 pm UTC

Rman will do the right thing with a command like that.

If you use the "right command" - one that says "we will switch them for you", you would not need to.
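One such "right command" in RMAN is the PLUS ARCHIVELOG form (a sketch; exact syntax depends on your RMAN release):

```sql
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
-- RMAN runs ALTER SYSTEM ARCHIVE LOG CURRENT itself, backs up the existing
-- archived logs, then the database, then the logs generated during the backup,
-- so no explicit manual log switch is needed.
```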


index handling before batch inserts to a large table

A Reader, June 17, 2006 - 1:11 am UTC

Hi Tom,

Thanks a bunch for the invaluable service to the Oracle community ... I have some questions about "index handling before batch inserts to a large table"

<quote>
there can be significant advantages AND disadvantages.

some (not all, i'm sure someone will add something else -- they always do)
advantages:

o you can rebuild the index with nologging on (no rollback, no redo for the
index maintenance)

o you can rebuild the index in parallel

o you get a nicely compacted index at the end

some disadvantages:

o if you added 100meg to a 5 gig table, rebuilding the index might take
significant amounts of time

o the index is unavailable -- meaning the table is "offline" sort of during this
entire load/reindex operation (this index just won't be there)
</quote>

<our environment>
oracle 9i R2 - archivelog , all tables - logging

we have a non-partitioned fact table with about 90 million rows - each monthly batch cycle adds about 1.5 million rows.

the inserts are done using set-based direct path loads:
insert /*+ APPEND */ into ... select ...
</our environment>

<questions>

a) do bitmap indexes have any additional considerations - e.g. should they always be made unusable before such voluminous inserts and then rebuilt - or are the considerations the same as for B*Tree indexes (specifically in a set-based direct path load situation - if for some reason it were a row-by-row insert then i know bitmaps would kill it and you would have to disable them)

b) for such volumes - that is, about 90M rows in the table, adding 1.5 million rows in the batch process - can there be a 'generic recommendation'?

c) if we go with the approach of disabling (making unusable) and rebuilding - with alter index ... rebuild nologging - and say there is a media failure after the batch (before a backup is taken) - i am trying to assess the impact on recovery - my understanding is that the only thing we may not be able to recover is the indexes built this way - but this will not impact the recovery of other database objects - is that correct?

d) and lastly - promise - currently this index disabling/enabling is happening for EVERY table in the system - some of them with 500 records! i think it's silly to do that for tables below a threshold number of records - say approximately 500,000 - does it make sense to do index disabling/enabling only if the number of rows is above a threshold?

</questions>

Thanks in advance


Tom Kyte
June 17, 2006 - 7:10 am UTC

a) never say always, never say never, I always say.

If you do large bulk inserts - bitmaps are maintained very nicely. It would be the slow by slow processing that kills them.

b) not likely. You have the original list of things to consider.

c) you would have to rebuild the index again from the base table, just like you did after the load. Everything else that was logged is OK.

d) threshold - no, see original list of considerations for considerations.
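The make-unusable/rebuild cycle discussed above can be sketched like this (index, table, and degree of parallelism are hypothetical; assumes the unusable index does not enforce a unique constraint):

```sql
-- Hypothetical sketch: skip index maintenance during the monthly load, rebuild after.
alter index fact_bix unusable;
alter session set skip_unusable_indexes = true;

insert /*+ APPEND */ into fact_tab
select * from staging_tab;
commit;

-- rebuild in parallel without redo; back up the affected datafiles afterwards,
-- since a nologging rebuild cannot be recovered from the redo stream
alter index fact_bix rebuild nologging parallel 4;
alter index fact_bix logging;
alter index fact_bix noparallel;
```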

backup after nologging operations

martina, July 14, 2006 - 11:36 am UTC

i accept it, but i do not really understand why nologging objects cannot be recovered from the redo.

i create all indexes nologging as i think an index can always be rebuilt.

I also create tables nologging, always as select from ...
this is ok as long as i can recreate the table at any time, right?

what i did lately is the following:
create table x nologging as select * from a;
create table y nologging as select * from a;
drop a;
"use" x; drop x;
"use" y; drop y;
Can that lead to a dangerous situation?

we have a job which rebuilds all indexes which have changed their size, nologging compute statistics. Similar to that, i have a job which compresses old tables/table partitions copied from the first job. Now i learned that "alter table .. move nologging" is synonymous with "alter table .. move; alter table nologging". here the nologging is pointless as i'm not performing any --+ append. Correct? or is it harmful?

the database is backed up every night, and the jobs doing the creates run all the time, so i have no chance to get an unscheduled backup.

will these creates be passed to a standby database?

Actually i do not understand what the redo information could contain beyond the "create table" or "create index" statement itself, and so i do not understand how that can be lost.

thank you for your help,
martina

Tom Kyte
July 14, 2006 - 12:42 pm UTC

YOU can always rebuild your index. True.

However, if you build the index nologging and do not backup the datafiles affected right after that, you cannot RECOVER your index (there is no redo, recovery uses REDO)

Let us look at the table scenario here:


time 0) you "create table t NOLOGGING as select ....."
time 1) it is finished, no redo generated
time 2) the CEO inserts the most important piece of data ever created into T
time 3) they commit.
time 4) they go home
time 5) the disk that T is on fails


You have lost the CEO's data, you cannot get it back, you cannot recover table T from backups. You have lost data.

That is the point.

What if...

A reader, July 14, 2006 - 1:23 pm UTC

Comment to your answer above.

A slight change of events.

After doing a CTAS in nologging mode, the DBA/developer was smart enough to turn it back into logging mode.

Then the CEO inserted the data, went back home, and the disk crashed.

Then what........

And the story continues ;)




Tom Kyte
July 14, 2006 - 1:54 pm UTC

same thing - you STILL cannot recover that table.

The no-logging only affects BULK operations, not normal inserts/updates/deletes.

So, in all cases, the CEO's insert was logged, but unfortunately NOTHING ELSE WAS - so upon recovery - the table is "junk", garbage, no good, lost, zippo, gone, disappeared.

You cannot RECOVER this table from backup since a LARGE PORTION (actually "some portion") of the information needed to recover this table isn't in the redo stream.



Nologging

AD, July 15, 2006 - 4:17 pm UTC

Tom,

The database is in archive log mode.

step 1:
1) I create a set of temp tables using CTAS with the nologging option
2) I then use these tables to update a main table.

1) the tables in step 1 can be re-created if necessary. My question is: can I safely use the nologging option, or may it cause problems for recovery etc.?

Regards



Tom Kyte
July 16, 2006 - 9:34 am UTC

If you don't care about recovering these tables from backups in the event of media failure (eg: you CREATE them, you READ them, and you have the knowledge of how to RECREATE them - no one modifies them), you can do whatever you like, sure.

Table/Partition created NOLOGGING

Arjun, February 07, 2007 - 5:13 pm UTC

Followup:
If you don't care about recovering these tables from backups in the event of media failure (eg: you
CREATE them, you READ them, and you have the knowledge of how to RECREATE them - no one modifies
them), you can do whatever you like, sure.

Oracle 9i (Release 2)
Am I following this correctly (ARCHIVELOG mode database
with FORCE LOGGING set to NO)?

1. A table is created, or a partition is added to an existing table,
using the NOLOGGING clause
2. We keep inserting/deleting/updating data in this table/partition
3. we have a disk failure
4. Oracle does have a record of all the redo/undo for this table... but no record of its creation? and that is why it (and the data in it) is not recoverable?

Did I get that right?
Tom Kyte
February 07, 2007 - 7:18 pm UTC

only direct path operations are affected by nologging, your #2 step would always be fully logged.

To Correct this situation

Arjun, February 07, 2007 - 6:06 pm UTC

Also, this recovery scenario (an object that was created as NOLOGGING) will be fixed with the next backup, correct?
It is just not recoverable from a backup taken "before" the NOLOGGING object creation...

how to findout corrupted indexes

Reader, March 15, 2007 - 3:58 pm UTC

Hi Tom,

We brought up our DEV database from a production copy
after applying one day of archivelogs, and while using it we found
errors for some indexes, because the indexes had been rebuilt in production with nologging.

Is there any way to find out all the indexes which got corrupted because of nologging?

1 -- In the alert log of the Dev database we did not find information for all the affected blocks [we only got info for some blocks] saying
they have corruption because of nologging.

-- We could not figure out this behaviour: why were only some blocks reported, and not all of them?

We mapped the given blocks to indexes and rebuilt them.

So, we just want to know whether there is any other way to find out all
the affected indexes.

We did an rman check using "backup validate database"; it filled
v$database_block_corruption, but there are
a lot of blocks in that view which do not belong to
any segment.

Do you think there is any other way to find those corrupted indexes if we cannot get this information from the
production database?

And why does the alert log not have all the blocks, and why does rman still
report some blocks after we rebuilt ALL the indexes of the database?
Tom Kyte
March 16, 2007 - 3:01 pm UTC

query v$datafile, it has an unrecoverable date/time in it, it'll tell you if there is unrecoverable data in a file.


1) it would likely never be ALL blocks because you tend to generate redo after the fact and that would be "applied" to the blocks in question making them 'recovered'

if you do a non-logged operation, but later update the table/index so much that enough redo gets generated - you might well have everything you need. it all depends.
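The check Tom describes can be written as a simple query; a sketch against the v$datafile columns he mentions:

```sql
-- Which datafiles contain changes that are not in the redo stream
-- (i.e. cannot be recovered via media recovery)?
select file#, name, unrecoverable_change#, unrecoverable_time
  from v$datafile
 where unrecoverable_change# > 0
 order by unrecoverable_time;
```

Any file listed here needs a fresh backup before it is fully protected again.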




NOLOGGING - change in PLAN

Karteek, April 16, 2007 - 10:37 am UTC

Hi Tom,

I am finding differences in the explain plan for a DELETE statement with and without the NOLOGGING option. This looks peculiar to me. Can you please help me?

--------------------------------------------------------------------------------------------------------------------------------
DELETE /*+ INDEX(XSELL_PRODUCT_OFFER XS_PROOFF_UNIQ) */ FROM XSELL_PRODUCT_OFFER
WHERE (ACCOUNT_KEY, CAMPAIGN_CODE, CHANNEL_CODE, PRODUCT_CODE, XSELL_OFFER_TYPE_CODE, CONSUMER_TYPE_CODE )
IN (SELECT /*+ INDEX(LOOKALIKE_PRODUCT_OFFER LA_PROOFF_UNIQ) */ ACCOUNT_KEY, CAMPAIGN_CODE, CHANNEL_CODE, PRODUCT_CODE, XSELL_OFFER_TYPE_CODE, CONSUMER_TYPE_CODE
FROM LOOKALIKE_PRODUCT_OFFER )

--------------------------------
Explain Plan for below delete statement is

Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

DELETE STATEMENT Optimizer Mode=CHOOSE 15 M 16 M
DELETE XSOFR_OWNER.XSELL_PRODUCT_OFFER
NESTED LOOPS 15 M 1G 16 M
INDEX FULL SCAN XSOFR_OWNER.LA_PROOFF_UNIQ 15 M 699 M 137625
INDEX RANGE SCAN XSOFR_OWNER.XS_PROOFF_UNIQ 1 70 3
--------------------------------------------------------------------------------------------------------------------------------

DELETE /*+ INDEX(XSELL_PRODUCT_OFFER XS_PROOFF_UNIQ) */ FROM XSELL_PRODUCT_OFFER NOLOGGING
WHERE (ACCOUNT_KEY, CAMPAIGN_CODE, CHANNEL_CODE, PRODUCT_CODE, XSELL_OFFER_TYPE_CODE, CONSUMER_TYPE_CODE )
IN (SELECT /*+ INDEX(LOOKALIKE_PRODUCT_OFFER LA_PROOFF_UNIQ) */ ACCOUNT_KEY, CAMPAIGN_CODE, CHANNEL_CODE, PRODUCT_CODE, XSELL_OFFER_TYPE_CODE, CONSUMER_TYPE_CODE
FROM LOOKALIKE_PRODUCT_OFFER )
--------------------------------
For this, Explain Plan is

Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

DELETE STATEMENT Optimizer Mode=CHOOSE 15 M 3375089
DELETE XSOFR_OWNER.XSELL_PRODUCT_OFFER
HASH JOIN 15 M 1G 3375089
INDEX FULL SCAN XSOFR_OWNER.LA_PROOFF_UNIQ 15 M 699 M 137625
PARTITION RANGE ALL 1 7
PARTITION LIST ALL 1 6
TABLE ACCESS FULL XSOFR_OWNER.XSELL_PRODUCT_OFFER 137 M 8G 1747536 1 42
--------------------------------------------------------------------------------------------------------------------------------
If you observe, the latter is doing a full table scan, yet the only change is adding the NOLOGGING option. To my knowledge, NOLOGGING just reduces the amount of log that a SQL statement generates and has nothing to do with the method of execution.

Please help me with your feedback.

Thank you so much!
Karteek
Tom Kyte
April 16, 2007 - 1:32 pm UTC

delete /*+ index( T T_IDX ) */ from T where ....

that works because the table is named T and you did not alias it....

delete /*+ index( T T_IDX ) */ from T MY_ALIAS where ....

will not use the hint, since there is no longer an object named T and

delete /*+ index( T T_IDX ) */ from T NOLOGGING where ....

which is what you have coded, is the same as using MY_ALIAS

NOLOGGING is not a hint
NOLOGGING is not something you stick in DML

NOLOGGING is part of DDL - you can alter a table - nologging, you can create a table - nologging. You do not use nologging in DML on the table.


You could change NOLOGGING in your delete to any other valid "word" and observe the same behavior

It has nothing to do with nologging, you are not doing a non-logged operation.
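A sketch of where NOLOGGING actually belongs (the table name is taken from the question above; treat the statements as illustrative only):

```sql
-- NOLOGGING is a segment attribute set via DDL ...
alter table xsell_product_offer nologging;

-- ... which only direct path operations then honor:
insert /*+ APPEND */ into xsell_product_offer select * from some_staging_table;
commit;

-- ordinary INSERT/UPDATE/DELETE remain fully logged regardless of the attribute
alter table xsell_product_offer logging;
```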

Changing from Logging to Nologging and back

A reader, April 17, 2007 - 11:50 am UTC

Tom,

We use this method for archiving our large tables: we rename our live table, create a new table with NOLOGGING, run a direct path insert from the live table into the new one, and finally change the new table back to LOGGING mode. This is to suppress any redo generation during the process.

Thinking about a recovery scenario, I guess the only part we cannot recover here is our archival stage, which is really not required. Does this changing from NOLOGGING to LOGGING cause any issues for us if we need to recover anything at a later stage?

thanks,

Tom Kyte
April 18, 2007 - 10:36 am UTC

as long as you get a backup of the affected datafiles after the non-logged operation - you are OK from a recovery perspective.
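For example, an RMAN sketch (the tablespace name ARCH_TS is hypothetical; back up whichever tablespaces hold the newly loaded segments):

```sql
RMAN> BACKUP TABLESPACE arch_ts;
-- back up the datafiles touched by the non-logged insert as soon as it completes,
-- so media recovery never needs the redo that was never generated
```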


reduce LOG generated

Karteek, April 18, 2007 - 12:02 pm UTC

Thank you so much for reply Tom.

Is there any way to reduce the amount of redo that gets generated during DML operations, especially while DELETING? As you said, NOLOGGING can be used while creating a table (DDL) - do we have a similar option for DML too? We have a scenario where we need to delete about 30 million records from a partitioned table with 350 million records. Your insight would really help us.

Many Thanks!
Karteek
Tom Kyte
April 18, 2007 - 12:49 pm UTC

make sure to use a single DELETE statement, that'll minimize the redo.

look into truncate (of a partition) if possible.
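If the rows to be purged line up with one or more partitions, the big DELETE becomes DDL. A sketch with hypothetical table and partition names:

```sql
-- near-zero redo/undo compared to a DELETE of the same rows
alter table big_tab truncate partition p_old_data;

-- or remove the partition outright if the data is no longer needed
alter table big_tab drop partition p_old_data;
```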

Nologging - What happens to a standby database

Rama Subramanian G, June 20, 2007 - 2:27 am UTC

Hi Tom

Considering a scenario where a standy database (or dataguard now) exists for a production database and we are doing a nologging operation on a table in that database, will the standby database be back in synch, if we copy the datafile of the table on which nologging operation has been done and copy to the standby database afterwards ?
Tom Kyte
June 20, 2007 - 10:14 am UTC

dataguard is what you use to do standby but....


when using standby, you should not do non-logged operations - period.
http://docs.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i70011

If you do, you must manually 'fix' the issue. That is a documented procedure:

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ps.htm#i1018551

nologging tables and physical standby issue

Shahram, August 17, 2007 - 7:12 pm UTC

Hi

We have a 3 node RAC, 10gR2, for a data warehouse. Our fact tables are in NOLOGGING mode.

We have physical standby database for this primary database.

The DBAs found an issue: because no redo is generated for these loads, nothing is shipped to the standby, which causes problems there.

I read in your answers that we should enable force logging:

select force_logging from v$database;

Is this correct? What will the performance impact be, and will this resolve the physical standby issue?

We are using /*+ APPEND */ and sql loader direct loads in many places in data warehouse.

What are alternative solutions to make the DBAs happy while still having nologging? What is your suggestion?

Thanks
Tom Kyte
August 22, 2007 - 9:58 am UTC

by setting force logging to true, your nologging request will be silently ignored.

You cannot have both

a) no-logging
b) physical standby (which is done via LOGS)


This is not about making DBAs happy.

This is entirely about having your standby work, if you do no logging, they would have to manually resynch datafiles after all of your loads.
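For reference, force logging is a database-level attribute (it can also be set per tablespace). Once enabled, any NOLOGGING request is silently ignored and the standby stays consistent:

```sql
select force_logging from v$database;   -- 'NO' by default

-- waits for any in-flight unlogged operations to complete
alter database force logging;
```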


CTAS Vs DP performance

Vijay, November 21, 2008 - 9:15 am UTC

Dear Tom,

In this regard, I have always wondered: between CTAS and direct-path insert (DP), which one is more performant/optimized and hence recommended?

I am asking because, in one of our applications, developers create lots of temporary tables (not global ones, though) using CTAS statements issued via dynamic SQL. I was of the opinion that TRUNCATE plus direct-path insert is the preferred option, as it may improve the performance of the code.

Keeping this in mind, I carried out a few sample tests (ensuring that the environment is alike for all three cases by flushing the shared pool and buffer cache), detailed below:


SQL> sho user
USER is "SCOTT"
SQL> 
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL> 
SQL> 
SQL> column SEGMENT_NAME format a15
SQL> 
SQL> select segment_name,segment_type,bytes/1024/1024 "Size in MB" from user_segments where
  2  segment_name = 'TEST_TABLE';

SEGMENT_NAME    SEGMENT_TYPE       Size in MB
--------------- ------------------ ----------
TEST_TABLE      TABLE                     165

SQL>
SQL> select tablespace_name,extent_management,segment_space_management from user_tablespaces
  2  where tablespace_name = 'APPL_DATA';

TABLESPACE_NAME                EXTENT_MAN SEGMEN
------------------------------ ---------- ------
APPL_DATA                      LOCAL      AUTO


METHOD :- 1
***********

SQL> 
SQL> alter session set sql_trace=true;

Session altered.
SQL>
SQL> create table test_table_3
  2  tablespace appl_data
  3  nologging
  4  as
  5  select * from test_table
  6  /

Table created.

SQL> alter session set sql_trace=false;

Session altered.


After formatting the trace file using TKPROF, this is what I found (the output has been truncated to enhance readability):

**********************************************************************************
create table test_table_3
tablespace appl_data
nologging
as
select * from test_table

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.05          0          0          0           0
Execute      1      1.58       9.17      20235      20512      23780     1579392
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.59       9.22      20235      20512      23780     1579392

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 29  (SCOTT)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  CREATE TABLE STATEMENT   MODE: ALL_ROWS
      0   LOAD AS SELECT OF 'TEST_TABLE_3'
      0    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'TEST_TABLE' (TABLE)


OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.01       0.05          0          0          0           0
Execute      3      1.58       9.17      20235      20512      23780     1579392
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      1.59       9.22      20235      20512      23780     1579392

Misses in library cache during parse: 2
Misses in library cache during execute: 1

**********************************************************************************************

METHOD -> 2
***********

Now, I will carry out the same operation, albeit using the truncate/direct-path insert method.

Please note that before proceeding, both the shared pool and buffer cache were flushed to maintain the sanctity of the environment.

SQL> truncate table test_table_3;

Table truncated.
SQL>
SQL> alter session set sql_trace=true;

Session altered.

SQL> insert /*+ append */ into test_table_3
  2  select * from test_table;

1579392 rows created.

SQL> commit;

Commit complete.

SQL> 
SQL> alter session set sql_trace=false;

Session altered.


The trace file output has been presented below:-

insert /*+ append */ into test_table_3
select * from test_table

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.08       0.10          3          3          1           0
Execute      1      1.69       9.22      20245      20508      22904     1579392
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.77       9.33      20248      20511      22905     1579392

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 29  (SCOTT)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  LOAD AS SELECT  (cr=21891 pr=20267 pw=20225 time=9426983 us)
1579392   TABLE ACCESS FULL TEST_TABLE (cr=20252 pr=20238 pw=0 time=3166713 us)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   MODE: ALL_ROWS
      1   LOAD AS SELECT OF 'TEST_TABLE_3'
1579392    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'TEST_TABLE' (TABLE)


OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.08       0.11          3          3          1           0
Execute      4      1.69       9.23      20245      20509      23294     1579392
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7      1.77       9.34      20248      20512      23295     1579392

Misses in library cache during parse: 2

**********************************************************************************************

METHOD -> 3
***********

Finally, the following repeats the previous method with one slight difference: the table has now been truncated using the REUSE STORAGE clause, so that the storage associated with the segment is not released to the tablespace.

After flushing "shared pool" and "buffer cache"....


SQL> truncate table test_table_3 reuse storage;

Table truncated.

SQL>
SQL> alter session set sql_trace=true;

Session altered.

SQL> insert /*+ append */ into test_table_3
  2  select * from test_table;

1579392 rows created.

SQL> commit;

Commit complete.

SQL> 
SQL> alter session set sql_trace=false;

Session altered.

SQL> 


Analyzing the trace file output:-

insert /*+ append */ into test_table_3
select * from test_table

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.09       0.11          3          3          1           0
Execute      1      1.53       9.99      20421      20434      20962     1579392
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.62      10.11      20424      20437      20963     1579392

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 29  (SCOTT)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  LOAD AS SELECT  (cr=20436 pr=20422 pw=20225 time=10011161 us)
1579392   TABLE ACCESS FULL TEST_TABLE (cr=20252 pr=20237 pw=0 time=3166719 us)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   MODE: ALL_ROWS
      1   LOAD AS SELECT OF 'TEST_TABLE_3'
1579392    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'TEST_TABLE' (TABLE)


OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.09       0.11          3          3          1           0
Execute      4      1.53      10.01      20422      20435      21352     1579392
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7      1.62      10.13      20425      20438      21353     1579392

Misses in library cache during parse: 2
Misses in library cache during execute: 1


***********************************************************************


Now, I'm a bit confused after seeing these results, as I was expecting method 3 to be the most efficient of all, because Oracle need not reallocate the storage from scratch. Granted, the tablespace is an LMT, where this has very little bearing... but still.

Unfortunately that's not the case: method 1 tops the list with the least CPU, disk I/O, and query consumption.


Can you please put forward your views on the same.

Tom Kyte
November 24, 2008 - 2:19 pm UTC

I would say all three took the same amount of time; the differences are so tiny that you would have to run a couple of dozen of each and average them out.


call    count    cpu  elapsed  disk  query    current        rows
------- -----  ----- -------- ----- ------ ----------  ----------
total       2   1.59     9.22 20235  20512      23780     1579392
total       2   1.77     9.33 20248  20511      22905     1579392
total       2   1.62    10.11 20424  20437      20963     1579392


They are all so close as to be the same if you ask me.

and actually, the last one did the "least" work logical-IO wise (a real buffer cache is never devoid of results; truth be told, it doesn't make sense to flush it)

Direct Path in 10g

Sagar, December 03, 2008 - 3:53 pm UTC

Hi Tom,
I ran your followup example (May 30, 2002 - 4pm US/Eastern) in an Oracle 10g RAC database and got different results.

Please see my example below.

For table t1t, the amount of redo generated is almost the same with and without the hint.
Table t1t_nolog, created with the NOLOGGING option, shows less redo when the APPEND hint is used.

Please clarify why redo generation does not decrease for table t1t with the APPEND hint.

Thanks
Sagar


SQL*Plus: Release 9.2.0.3.0 - Production on Wed Dec 3 12:17:38 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options


sql> create table t1t as select * from all_objects where 1=0;

Table created.

sql> set autotrace on statistics;
sql> insert /*+ append */ into t1t select * from all_objects;

44896 rows created.


Statistics
----------------------------------------------------------
       5975  recursive calls
       1136  db block gets
     245681  consistent gets
         53  physical reads
    5154688  redo size   <<<--------------------------
        491  bytes sent via SQL*Net to client
        479  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      44896  rows processed

sql> rollback;

Rollback complete.

sql> insert into t1t select * from all_objects;

44896 rows created.


Statistics
----------------------------------------------------------
       5058  recursive calls
      11269  db block gets
     247330  consistent gets
          0  physical reads
    5062784  redo size    <<<--------------------------
        507  bytes sent via SQL*Net to client
        465  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      44896  rows processed

sql> create table t1t_nolog nologging as select * from all_objects where 1=0;

Table created.

sql> insert /*+ append */ into t1t_nolog select * from all_objects;

44897 rows created.


Statistics
----------------------------------------------------------
       5931  recursive calls
       1140  db block gets
     245670  consistent gets
          0  physical reads
      46496  redo size     <<<--------------------------
        495  bytes sent via SQL*Net to client
        485  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      44897  rows processed

sql> rollback;

Rollback complete.

sql> insert into t1t_nolog select * from all_objects;

44897 rows created.


Statistics
----------------------------------------------------------
       5058  recursive calls
       5675  db block gets
     246532  consistent gets
          0  physical reads
    5062480  redo size     <<<--------------------------
        508  bytes sent via SQL*Net to client
        471  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      44897  rows processed


sql> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG


sql> select table_name, logging from user_tables where table_name like 'T1T%';

TABLE_NAME                     LOG
------------------------------ ---
T1T                            YES
T1T_NOLOG                      NO

sql>

Tom Kyte
December 09, 2008 - 11:09 am UTC

I would expect the output you received. What exactly did you read on this page that makes you think it would have been different? I might need to make something more clear if it resulted in confusion....


In noarchivelog mode - direct path loads bypass undo on the table AND redo on the table.

In archivelog mode - direct path loads bypass undo on the table BUT MUST generate redo for the table. There might be a little less redo generated since UNDO generates redo as well - but that depends.


In archivelog mode, a NOLOGGING table behaves like any table in noarchivelog mode would with respect to redo using a direct path insert.
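A quick way to verify this matrix from your own session (assuming SELECT privilege on the v$ views) is to snapshot the 'redo size' statistic before and after each insert, as the autotrace output above does:

```sql
select n.name, s.value
  from v$mystat   s
  join v$statname n on n.statistic# = s.statistic#
 where n.name = 'redo size';
```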

Thanks for the clarification.

Sagar, December 10, 2008 - 2:39 pm UTC


Deleting Records in Nologging Mode.

Yaseen AR, June 30, 2010 - 3:47 am UTC

Tom,

On my production server I want to delete millions of records. Whenever I run the delete it takes 1 hour, and during that time every application hangs (or nearly so).

My database runs in ARCHIVELOG mode, so 200 or more archive files are generated during that time, and I also run out of space.

So, without generating archive log files, can you suggest how to delete the records?

Because this is production:
1) I can't temporarily change out of archive log mode.
2) Removing the indexes is a little tough (management did not agree).
3) I read in a few OTN forum discussions that the NOLOGGING option will not work for DELETE.
4) Please tell me how I can overcome this issue.

If my English is bad or my question is very simple, apologies for that.

I read the discussion fully, but the deleting part is still not clear to me.

Regards
Yaseen AR

Tom Kyte
July 06, 2010 - 2:02 pm UTC

... during that time every application hangs ...

why? Unless those applications are trying to update the records you are deleting - or unless your machine is so small it cannot really do more than one thing at a time - they should not.


You cannot delete "no logging", it does not ever happen that way.


To make this more efficient you would look to organizing the data to be more easily purged. For example, if you "delete from t where dt_column < sysdate-30", you could partition by dt_column and simply truncate the data to purge (that can be done without generating redo/undo).
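Tom's purge pattern, sketched with a hypothetical table layout:

```sql
create table t (
  dt_column date,
  payload   varchar2(100)
)
partition by range (dt_column) (
  partition p_2010_05 values less than (date '2010-06-01'),
  partition p_2010_06 values less than (date '2010-07-01')
);

-- the monthly purge becomes near-instant DDL instead of a huge DELETE:
alter table t truncate partition p_2010_05;
```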


Log switches during large load?

Ditto, March 10, 2011 - 8:14 am UTC

Hi Tom,

I thought I understood this, however, there's obviously something I'm still missing, I was hoping you (or anyone else if they know?) had a minute to look this over and offer any suggestions ..

Goal: (ie the "Why"):
I am working in a large conversion project at the moment where we are bringing in a large amount of data into our system from another company we merged with. I am still testing in DEV trying to establish a process that works consistently with the largest table we have.
The DEV database is in NOARCHIVELOG, however, our PROD database is in ARCHIVELOG mode.

The setup:
The data we're bringing in is in a table in a temp schema and is about 6G in size (yeah, ok, I know, that's not *that* big ...) :)
We are trying to load this data into our permanent table, which is currently about 90G. The table also has a PK, which is about 25G.
We've already queried and verified there are no duplicates on this key (in fact, we used the sequence to generate the new numbers and records in this loader table... so now all we need to do is push it over).
The table (unfortunately) is not partitioned, so no "partition-magic" possible .. (that bugs me, I've been trying to get this table, and a few others partitioned for a few years now ... oh well ..)
The table also has a 50G index; for now, however, I have dropped it... (it blew out my UNDO, so the first test is without the index; I'll rebuild it later. For the second test, I'll increase the UNDO and see how much I need...) :)

So yeah, summary:
90G table + 25G PK.
6G data being pushed in via:

insert /*+ append parallel(t,4) */ into target_tbl t
select * from source_tbl s;

Earlier tests on smaller cases (~20G) took about 15 min ... so I'm expecting this one to take up to 2 hrs ... maybe 3 ..

I left it running the other night for almost 20 hrs .

What I noticed after further investigation, in the ALERT log, was it was doing a log switch every 2-3 minutes ..

So my question:
- I'm in NOARCHIVELOG
- I've run "alter table target_tbl nologging" and "alter index target_tbl_pk nologging" (mostly for script consistency for when we go to PROD)
- using APPEND hint ...
- no other significant processes running on system
- 10 REDO log groups (2 logs in each group), each 1G size
- target table + PK are in 2 different tablespaces, both LOCALly managed.
- Target table's tablespace: uniform size: 1G
- Target table's PK tablespace: uniform size: 100M

oh wait ... looking at dba_tablespaces ... those tablespaces are set to "LOGGING" ... is that my problem? (but again, I'm in NOARCHIVELOG mode ... ??)

That's all the info I can think of ... what else could be causing the redo log switches, considering everything I've setup above?

Hope you have some ideas ...
Thanks!

Tom Kyte
March 10, 2011 - 10:47 am UTC

The DEV database is in NOARCHIVELOG, however, our PROD database is in
ARCHIVELOG mode.


probably shouldn't be - yes, I know it is something extra to manage - but so is "building and testing on something that doesn't represent real life". In my experience, it is less work to manage the archives than to try to fix something in real life.


Index maintenance from an INSERT (yes, even direct path), UPDATE, DELETE, or MERGE must always be logged.

With an INSERT /*+ APPEND */ - what happens (ASSUMING THE APPEND WAS OBEYED, foreign keys/triggers would make us ignore it) is:

a) the data is loaded above the high water mark - UNDO and REDO can be skipped
b) the newly loaded data is temporarily indexed into temp - UNDO and REDO can be skipped
c) when the load is finished loading the tables, the mini indexes generated in step (b) are merged into the existing indexes.

Step (c) will always - always - generate REDO and UNDO for this operation. The existing indexes are being modified now - their structure MUST be recoverable. if we failed during the merge - the indexes MUST be able to be restored.


If you want no redo and no undo, you will have to disable the indexes, load, and then rebuild the indexes. That might take longer than what you are doing currently (when you add a little to a lot, it is many times faster to maintain the indexes than to rebuild them all later).
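The disable-load-rebuild approach might be sketched as below. The index name is hypothetical, and this only works directly for a non-unique index - a unique index enforcing the PK would require disabling the constraint first. As always with NOLOGGING operations, back up the affected datafiles afterwards:

```sql
-- hypothetical non-unique index on the target table
alter index target_tbl_ix unusable;
alter session set skip_unusable_indexes = true;

insert /*+ append */ into target_tbl
select * from source_tbl;
commit;

-- rebuild without redo, then restore LOGGING for normal operation
alter index target_tbl_ix rebuild nologging;
alter index target_tbl_ix logging;
```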

Thanks!

Ditto, March 10, 2011 - 11:01 am UTC

Gotcha .. thank-you for your time!
