What would be causing high logical io on a simple indexed delete statement?
Question and Answer

Tom Kyte

Thanks for the question, Patrick.

Asked: May 28, 2004 - 2:31 pm UTC

Last updated: June 01, 2004 - 7:47 am UTC

Version: 8.1.6

Viewed 1000+ times

You Asked

I have a perplexing statistic coming out of my STATSPACK report:

Buffer Gets     Executions  Gets per Exec  % Total  Hash Value
--------------- ----------- -------------- -------- ------------
    396,768,835     455,120          871.8     48.4   2007910483
DELETE FROM WASABI.V_PRC_QUE WHERE INIT_EVNT_ID = :b1


Table 'WASABI.V_PRC_QUE' is indexed on 'INIT_EVNT_ID'.
Therefore I am surprised to see an average of 871 Buffer gets per execution.

I ran a single delete in my SANDBOX (not my prod database) and did not see anything unusual.

SQL> set autotrace on
SQL> DELETE FROM WASABI.V_PRC_QUE WHERE INIT_EVNT_ID = 716837 ;
1 row deleted.

Execution Plan
----------------------------------------------------------
   0      DELETE STATEMENT Optimizer=CHOOSE (Cost=4 Card=2 Bytes=40)
   1    0   DELETE OF 'V_PRC_QUE'
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'V_PRC_QUE' (Cost=4 Card=2 Bytes=40)
   3    2       INDEX (RANGE SCAN) OF 'XP3_PRC_QUE' (NON-UNIQUE) (Cost=3 Card=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
          0  bytes sent via SQL*Net to client
          0  bytes received via SQL*Net from client
          0  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

What would be causing the LIO to soar like that?
Thanks in advance.

======

Tom: Here are the answers to your questions

======


a) does this always delete a single row?

75% of the time INIT_EVNT_ID will map to 1 row in WASABI.V_PRC_QUE.
The other 25% will have > 1 row.
I ran a query to see how skewed INIT_EVNT_ID is on WASABI.V_PRC_QUE (a sketch of one way to compute the breakdown follows the query output below).
The largest count was 25 and it quickly scaled down from there.
So 75% of the time we would get 1 rec, 23% of the time we would get 2 recs, and 2% of the time we would get >2 recs (but no more than 25).


select count(*) FROM WASABI.V_PRC_QUE ;
COUNT(*)
----------
5339324

select count(distinct(INIT_EVNT_ID)) FROM WASABI.V_PRC_QUE ;
COUNT(DISTINCT(INIT_EVNT_ID))
-----------------------------
4055233

select INIT_EVNT_ID, count(*) FROM WASABI.V_PRC_QUE group by INIT_EVNT_ID order by 2 ;
INIT_EVNT_ID COUNT(*)
------------ ----------
.................
819029 23
713914 24
715256 24
747881 24
777376 24
772253 24
772255 24
772256 24
772257 24
772258 24
772259 24
772260 24
1340176 25
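
(A sketch -- the exact query was not shown above -- of one way to compute that 75% / 23% / 2% breakdown, assuming the analytic RATIO_TO_REPORT function available from 8.1.6 onward:)

select cnt,
       count(*) num_ids,
       round( 100 * ratio_to_report(count(*)) over (), 1 ) pct
  from ( select init_evnt_id, count(*) cnt
           from wasabi.v_prc_que
          group by init_evnt_id )
 group by cnt
 order by cnt;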



===


b) is this a highly concurrent table (lots of concurrent modifications)?

At the time of the delete there was no mass activity against this table.
There may have been the odd online hit - but that would have been minimal.
This table is hit hard (high concurrent access) when we are running our Billing application - but this was not occurring at the time.


c) describe the nature of this table -- how does it "work" (eg: we insert increasing init-evnt-ids and then randomly delete them .....)

We bulk loaded this table (approx 400,000 inserts per load) during our billing run (this was NOT running at the time of the delete).
After that there will be the occasional online manual insert/update - but that activity would be minimal.
The INIT_EVNT_IDs grow sequentially during this bulk load - therefore INIT_EVNT_ID is growing sequentially over time.

Our delete is based on the USG_DTTM (usage date time) - which should correlate (more or less) to a block of sequential INIT_EVNT_IDs.
I ran the select that we base our DELETE CURSOR on and it returned a list of sequential INIT_EVNT_IDs.

Therefore the deletes are not random - they are based on a date and time that should correlate to a numeric block of INIT_EVNT_ID numbers.
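
(A hedged sketch -- not from the original post -- of how that correlation could be checked: if USG_DTTM and INIT_EVNT_ID really do move together, the min/max IDs per day should form increasing, non-overlapping ranges:)

select trunc(usg_dttm) usage_day,
       min(init_evnt_id) min_id,
       max(init_evnt_id) max_id
  from wasabi.ilm_sur_ntc
 group by trunc(usg_dttm)
 order by 1;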

====

I thought it may be helpful to see the procedure we are running that does the delete.
* We establish a cursor of target records from WASABI.V_PRC_QUE.
* Then we archive the record
* Then we perform the deletes.

CREATE OR REPLACE PROCEDURE Sp_Ilm_Archive_Ntc IS

  p_beg_rows     NUMBER(10);
  p_end_rows     NUMBER(10);
  p_total_rows   NUMBER(10);
  p_sur_id       wasabi.ilm_sur_ntc.sur_id%TYPE;
  p_init_evnt_id wasabi.ilm_sur_ntc.init_evnt_id%TYPE;

  CURSOR RowsToArchive_Cur IS
    SELECT sur_id, init_evnt_id
      FROM wasabi.ilm_sur_ntc
     WHERE rate_stat = 'C'
       AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6));

BEGIN

  dbms_output.put_line('Starting wasabi.ilm_sur_ntc record transfer');

  SET TRANSACTION USE ROLLBACK SEGMENT RBS_BIG;

  SELECT COUNT(*) INTO p_beg_rows FROM wasabi.ilm_sur_ntc;

  OPEN RowsToArchive_Cur;

  LOOP

    FETCH RowsToArchive_Cur INTO p_sur_id, p_init_evnt_id;

    IF RowsToArchive_Cur%NOTFOUND
    THEN
      EXIT;
    ELSE

      INSERT INTO history.ILM_SUR_NTC_H
        SELECT a.*, SYSDATE, 'Completed Records'
          FROM wasabi.ilm_sur_ntc a
         WHERE a.sur_id = p_sur_id;

      DELETE FROM wasabi.ilm_sur_ntc
       WHERE sur_id = p_sur_id;

      DELETE FROM wasabi.v_evnt_err_log
       WHERE init_evnt_id = p_init_evnt_id;

      DELETE FROM wasabi.v_prc_que
       WHERE init_evnt_id = p_init_evnt_id;

    END IF;

  END LOOP;

  COMMIT WORK;

  CLOSE RowsToArchive_Cur;

  SELECT COUNT(*) INTO p_end_rows FROM wasabi.ilm_sur_ntc;

  p_total_rows := p_beg_rows - p_end_rows;

  DBMS_OUTPUT.PUT_LINE('Total records archived: ' || p_total_rows);

EXCEPTION
  WHEN OTHERS
  THEN
    DECLARE
      error_msg VARCHAR2(300) := SQLERRM;
    BEGIN
      DBMS_OUTPUT.PUT_LINE('Error archiving records: ' || error_msg);
      DBMS_OUTPUT.PUT_LINE('Rolling Back All Changes.');
      ROLLBACK WORK;
      DBMS_OUTPUT.PUT_LINE('ROLLBACK complete.');
    END;

END;
/




and Tom said...

well, actually, at the time of this - there is *massive* activity against this table, and you are doing it.

I don't know your tables -- but unless both sur_id and init_evnt_id are unique in the driving table -- you are doing lots of unnecessary work (deleting the same rows over and over -- we have to discover that there are no rows to actually work on). That table is really active and you are the cause!
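
(A quick sanity check for that -- my sketch, not part of the original answer: if the three counts below come back equal, each sur_id and each init_evnt_id appears exactly once in the driving set:)

select count(*),
       count(distinct sur_id),
       count(distinct init_evnt_id)
  from wasabi.ilm_sur_ntc
 where rate_stat = 'C'
   and trunc(usg_dttm) < trunc(add_months(sysdate, -6));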

Also, the count(*) stuff -- ugh....

slow by slow processing should be avoided whenever possible (sorry, meant row by row).

This procedure will run much more efficiently as:

CREATE OR REPLACE PROCEDURE Sp_Ilm_Archive_Ntc
IS
BEGIN
  dbms_output.put_line('Starting wasabi.ilm_sur_ntc record transfer');

  insert into history.ilm_sur_ntc_h
  select a.*, sysdate, 'Completed Records'
    from wasabi.ilm_sur_ntc a
   WHERE rate_stat = 'C'
     AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6));

  delete from wasabi.v_evnt_err_log
   where init_evnt_id in ( SELECT init_evnt_id
                             FROM wasabi.ilm_sur_ntc
                            WHERE rate_stat = 'C'
                              AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6)) );

  delete from wasabi.v_prc_que
   where init_evnt_id in ( SELECT init_evnt_id
                             FROM wasabi.ilm_sur_ntc
                            WHERE rate_stat = 'C'
                              AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6)) );

  delete FROM wasabi.ilm_sur_ntc
   WHERE rate_stat = 'C'
     AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6));

  DBMS_OUTPUT.PUT_LINE('Total records archived: ' || sql%rowcount );
END;
/


It'll generate less undo, it'll generate less redo, it'll perform fewer consistent gets, it'll use less CPU.
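
(To verify those claims yourself, one hedged approach -- not shown in the original answer -- is to snapshot a few session statistics before and after each version of the procedure and compare the deltas:)

select n.name, s.value
  from v$statname n, v$mystat s
 where n.statistic# = s.statistic#
   and n.name in ('redo size', 'consistent gets', 'db block gets');

Run it, run the procedure, run it again - the differences are what that call cost your session.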

In the event

WHERE rate_stat = 'C'
AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6));

is "expensive" to compute, at the very worst you would:

create global temporary table gtt on commit delete rows
as
select * from ilm_sur_ntc where 1=0;

ONCE in your schema, and then:


CREATE OR REPLACE PROCEDURE Sp_Ilm_Archive_Ntc
IS
BEGIN
  -- conventional-path insert: a direct-path (append) insert could not be
  -- re-read without committing first, and a commit would purge an
  -- ON COMMIT DELETE ROWS temporary table
  insert into gtt
  select a.*
    from wasabi.ilm_sur_ntc a
   WHERE rate_stat = 'C'
     AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6));

  dbms_output.put_line('Starting wasabi.ilm_sur_ntc record transfer');

  insert into history.ilm_sur_ntc_h
  select a.*, sysdate, 'Completed Records'
    from gtt a;

  delete from wasabi.v_evnt_err_log
   where init_evnt_id in ( SELECT init_evnt_id from gtt );

  delete from wasabi.v_prc_que
   where init_evnt_id in ( SELECT init_evnt_id from gtt );

  delete FROM wasabi.ilm_sur_ntc
   WHERE rate_stat = 'C'
     AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6));

  DBMS_OUTPUT.PUT_LINE('Total records archived: ' || sql%rowcount );
END;
/
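
(One more hedged aside, not part of the original answer: TRUNC() on the column side of the predicate defeats any ordinary index on usg_dttm. Since the right-hand side is already a whole date, the predicate is equivalent to one that leaves the column bare, which an index on usg_dttm could serve:)

WHERE rate_stat = 'C'
  AND usg_dttm < TRUNC(ADD_MONTHS(SYSDATE, -6))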



Comments

More Info On Subject

Patrick Howe, May 30, 2004 - 5:34 pm UTC

Tom :

More Background Info for you to ponder :

I took a look at the data distribution of the driving table (ILM_SUR_NTC) on my prod database.
The INIT_EVNT_ID is unique, as you can see below.
So we are not getting the effect of trying to delete the same record multiple times.

SELECT count(*)
FROM wasabi.ilm_sur_ntc
WHERE rate_stat = 'C'
AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6));

COUNT(*)
----------
286864

SELECT count(distinct(INIT_EVNT_ID))
FROM wasabi.ilm_sur_ntc
WHERE rate_stat = 'C'
AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6));

COUNT(DISTINCT(INIT_EVNT_ID))
-----------------------------
286864


==

Q1
You emphasize that 'row by row' processing should be avoided whenever possible ('slow by slow' is now embedded in my grey matter).
You said that coding a self-contained delete statement will generate less undo, generate less redo, perform fewer consistent gets, and use less CPU.
Could you explain why row-by-row processing is less efficient than performing a self-contained delete statement?

==

Q2

My statspack report showed 455,120 executions of the delete statement.
We ended up killing the transaction after a while.
Would the execution count reflect 1 execution for the delete and 1 execution for the rollback?
I.e., with the driving table having 286,864 unique INIT_EVNT_IDs (see above), how did we get an execution count of 455,120?

==

Q3

In PROD my statspack report showed an average of 871 buffer gets per execution for this delete statement.
When I ran my AUTOTRACE of this delete statement in TEST (same configuration as prod) it showed '0 consistent gets' (see original info).
Given that table V_PRC_QUE has 5 indexes on it, I would expect at minimum 5 consistent gets for this delete:
1 or more buffer gets to retrieve the INIT_EVNT_ID index block.
1 to retrieve the actual record (V_PRC_QUE) - then delete the record.
1 or more buffer gets to retrieve the index block of index2
1 or more buffer gets to retrieve the index block of index3
1 or more buffer gets to retrieve the index block of index4
1 or more buffer gets to retrieve the index block of index5

==

Thanks - Your help is greatly appreciated


Tom Kyte
May 31, 2004 - 12:18 pm UTC

q1) because the entire language, SQL, is based on sets. Because that is what the RDBMS was built to do. Because you are telling the database in one message "delete ALL of these rows" rather than bugging it hundreds of thousands of times telling it to "find this specific row, delete it".
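
(In code terms -- my illustration, not Tom's; "driver_set" is a placeholder for whatever supplies the keys: the first form is one call into the engine, the second is 455,120 of them:)

-- one call, one pass:
delete from wasabi.v_prc_que
 where init_evnt_id in ( select init_evnt_id from driver_set );

-- versus, executed once per row fetched from a cursor:
delete from wasabi.v_prc_que where init_evnt_id = :b1;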

q2) you did this count *after* processing. You might have 286k right now; back then you must have had *more* of them.

q3) consistent gets are what we use to find the row. CURRENT mode gets would be for the delete. I don't see how you could possibly get 0 consistent mode gets -- there would be *at least* one.

show us an example -- cut and paste. Even with a totally EMPTY table, there would be *at least* one consistent get for the index.

More Information

Patrick Howe, May 31, 2004 - 4:50 pm UTC

Tom :

Thanks for the help.
I now have a much better understanding of what caused the high LIO.
This is a great forum!

===

BACKGROUND: TABLES MY TESTING WAS PERFORMED ON

CREATE table DBA_TEST.TEST
  (testnumber1    number(10,0) ,
   testnumber2    number(10,0) ,
   testdate       date
  ) 
tablespace TOOLS
pctfree 10 
pctused 40 ;

alter table DBA_TEST.TEST
     add constraint TEST_PK
     primary key (testnumber1)
     using index tablespace TOOLS ;

create index DBA_TEST.TEST_IX01
    on DBA_TEST.TEST ( testnumber2 )
    tablespace TOOLS ;
    
insert into DBA_TEST.TEST ( TESTNUMBER1, TESTNUMBER2, TESTDATE ) values (1, 1, SYSDATE ) ;
insert into DBA_TEST.TEST ( TESTNUMBER1, TESTNUMBER2, TESTDATE ) values (2, 2, SYSDATE ) ;
insert into DBA_TEST.TEST ( TESTNUMBER1, TESTNUMBER2, TESTDATE ) values (3, 3, SYSDATE ) ;
insert into DBA_TEST.TEST ( TESTNUMBER1, TESTNUMBER2, TESTDATE ) values (4, 4, SYSDATE ) ;
insert into DBA_TEST.TEST ( TESTNUMBER1, TESTNUMBER2, TESTDATE ) values (5, 5, SYSDATE ) ;
insert into DBA_TEST.TEST ( TESTNUMBER1, TESTNUMBER2, TESTDATE ) values (6, 6, SYSDATE ) ;
insert into DBA_TEST.TEST ( TESTNUMBER1, TESTNUMBER2, TESTDATE ) values (7, 7, SYSDATE ) ;
insert into DBA_TEST.TEST ( TESTNUMBER1, TESTNUMBER2, TESTDATE ) values (8, 8, SYSDATE ) ;
insert into DBA_TEST.TEST ( TESTNUMBER1, TESTNUMBER2, TESTDATE ) values (9, 9, SYSDATE ) ;
commit ;



CREATE table DBA_TEST.TEST_DRIVING_TABLE
  (PKNUMBER        number(10,0) ,
   DRIVENUMBER1    number(10,0) ,
   testdate        date
  ) 
tablespace TOOLS
pctfree 10 
pctused 40 ;

alter table DBA_TEST.TEST_DRIVING_TABLE
     add constraint TEST_DRIVING_TABLE_PK
     primary key (PKNUMBER)
     using index tablespace TOOLS ;
    
insert into DBA_TEST.TEST_DRIVING_TABLE ( PKNUMBER, DRIVENUMBER1, TESTDATE ) values (1, 1, SYSDATE ) ;
insert into DBA_TEST.TEST_DRIVING_TABLE ( PKNUMBER, DRIVENUMBER1, TESTDATE ) values (2, 2, SYSDATE ) ;
insert into DBA_TEST.TEST_DRIVING_TABLE ( PKNUMBER, DRIVENUMBER1, TESTDATE ) values (3, 3, SYSDATE ) ;
insert into DBA_TEST.TEST_DRIVING_TABLE ( PKNUMBER, DRIVENUMBER1, TESTDATE ) values (4, 4, SYSDATE ) ;
insert into DBA_TEST.TEST_DRIVING_TABLE ( PKNUMBER, DRIVENUMBER1, TESTDATE ) values (5, 5, SYSDATE ) ;
insert into DBA_TEST.TEST_DRIVING_TABLE ( PKNUMBER, DRIVENUMBER1, TESTDATE ) values (6, 6, SYSDATE ) ;
insert into DBA_TEST.TEST_DRIVING_TABLE ( PKNUMBER, DRIVENUMBER1, TESTDATE ) values (7, 7, SYSDATE ) ;
insert into DBA_TEST.TEST_DRIVING_TABLE ( PKNUMBER, DRIVENUMBER1, TESTDATE ) values (8, 8, SYSDATE ) ;
insert into DBA_TEST.TEST_DRIVING_TABLE ( PKNUMBER, DRIVENUMBER1, TESTDATE ) values (9, 9, SYSDATE ) ;
commit ;

===================================================================

** Q1: 'row by row' is 'slow by slow'

I wanted to see this in an example - so I ran a test on my small DBA_TEST table.
The CURRENT mode gets were the same: 51 (set delete) vs 51 (4+47 on the row-by-row delete).
The CONSISTENT GETS (QUERY) were dramatically higher for row-by-row: 19 (set delete) vs 27 (9+18 on the row-by-row delete).


ALTER SESSION SET SQL_TRACE=TRUE ;
DECLARE
BEGIN
delete from DBA_TEST.TEST
 where TESTNUMBER2 in ( SELECT DRIVENUMBER1 FROM DBA_TEST.TEST_DRIVING_TABLE ) ;
END;
/
ALTER SESSION SET SQL_TRACE=FALSE ;

DECLARE
BEGIN
delete from DBA_TEST.TEST
 where TESTNUMBER2 in ( SELECT DRIVENUMBER1 FROM DBA_TEST.TEST_DRIVING_TABLE ) ;
END;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.01          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.01          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS
********************************************************************************

DELETE FROM DBA_TEST.TEST 
WHERE
 TESTNUMBER2 IN (SELECT DRIVENUMBER1   FROM DBA_TEST.TEST_DRIVING_TABLE )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0         19         51           9
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0         19         51           9

---

ALTER SESSION SET SQL_TRACE=TRUE ; 
DECLARE
p_drivernumber       NUMBER(10);
CURSOR RowsToDelete_Cur IS
SELECT DRIVENUMBER1 FROM DBA_TEST.TEST_DRIVING_TABLE ;
BEGIN
  OPEN RowsToDelete_Cur;
  LOOP
    FETCH RowsToDelete_Cur INTO p_drivernumber ;
    IF RowsToDelete_Cur%NOTFOUND
      THEN
        EXIT;
      ELSE
        delete from DBA_TEST.TEST where TESTNUMBER2 = p_drivernumber ;
    END IF;
  END LOOP;
END;
/
ALTER SESSION SET SQL_TRACE=FALSE ;

DECLARE
p_drivernumber       NUMBER(10);
CURSOR RowsToDelete_Cur IS
SELECT DRIVENUMBER1 FROM DBA_TEST.TEST_DRIVING_TABLE ;
BEGIN
  OPEN RowsToDelete_Cur;
  LOOP
    FETCH RowsToDelete_Cur INTO p_drivernumber ;
    IF RowsToDelete_Cur%NOTFOUND
      THEN
        EXIT;
      ELSE
        delete from DBA_TEST.TEST where TESTNUMBER2 = p_drivernumber ;
    END IF;
  END LOOP;
END;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.01       0.01          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.02       0.02          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS
********************************************************************************

SELECT DRIVENUMBER1   
FROM
 DBA_TEST.TEST_DRIVING_TABLE


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       10      0.00       0.00          0          9          4           9
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       12      0.00       0.00          0          9          4           9

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      9  TABLE ACCESS FULL TEST_DRIVING_TABLE 


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
      9   TABLE ACCESS (FULL) OF 'TEST_DRIVING_TABLE'

********************************************************************************

DELETE FROM DBA_TEST.TEST 
WHERE
 TESTNUMBER2 = :b1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      9      0.02       0.02          0         18         47           9
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       10      0.02       0.02          0         18         47           9

===================================================================

** Q2 PROBLEM RESOLVED: WHY AM I SEEING 455,120 EXECUTIONS OF THE DELETE STATEMENT WHEN I COUNT ONLY 286,864 RECS?

Your response (you must have had *more* of them) got me thinking about what happened that day.

The original procedure selected *more* records (493,000):
  WHERE rate_stat = 'C'
    AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -5));

Because it ran so long we killed it, and we changed the procedure to select fewer records but never got around to kicking it off again.
  WHERE rate_stat = 'C'
    AND TRUNC(usg_dttm) < TRUNC(ADD_MONTHS(SYSDATE, -6));

So the next day, when I noticed the high LIO in my STATSPACK report, I had forgotten that we had changed the procedure - which led to my confusion.
My apologies on this.


===================================================================

** Q3 PROBLEM RESOLVED: WHY AM I GETTING ZERO CONSISTENT GETS

I did some more playing around on this and uncovered the problem: I did not have sufficient privileges when I ran AUTOTRACE.

When I was signed on as '/ AS SYSDBA' and ran AUTOTRACE I got the 0 consistent gets (see below).
Then I thought I would generate a TRACE file - but when I tried to 'ALTER SESSION SET SQL_TRACE=TRUE' I got an 'ORA-01031: insufficient privileges'.
I signed on as SYS and reran AUTOTRACE and, lo and behold, I got a reading on consistent gets.

SQL> set autotrace on
SQL> DELETE FROM DBA_TEST.TEST WHERE TESTNUMBER2 = 2 ;
1 row deleted.

Execution Plan
----------------------------------------------------------
   0      DELETE STATEMENT Optimizer=CHOOSE
   1    0   DELETE OF 'TEST'
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'TEST'
   3    2       INDEX (RANGE SCAN) OF 'TEST_IX01' (NON-UNIQUE)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
          0  bytes sent via SQL*Net to client
          0  bytes received via SQL*Net from client
          0  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> set autotrace off

SQL> ALTER SESSION SET SQL_TRACE=TRUE ;
ERROR:
ORA-01031: insufficient privileges

SQL> connect sys@blcopy
Enter password: 
Connected.

SQL> set autotrace on
SQL> DELETE FROM DBA_TEST.TEST WHERE TESTNUMBER2 = 5 ;
1 row deleted.

Execution Plan
----------------------------------------------------------
   0      DELETE STATEMENT Optimizer=CHOOSE
   1    0   DELETE OF 'TEST'
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'TEST'
   3    2       INDEX (RANGE SCAN) OF 'TEST_IX01' (NON-UNIQUE)

Statistics
----------------------------------------------------------
          0  recursive calls
          7  db block gets
          2  consistent gets
          0  physical reads
       1076  redo size
        644  bytes sent via SQL*Net to client
        476  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
          1  rows processed
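
(For the record -- an assumption on my part, not something stated in the thread: to run the statistics portion of AUTOTRACE as an ordinary user, that user needs read access to the V$ session-statistics views, normally granted through the PLUSTRACE role that ships with SQL*Plus:)

-- run once as SYS to create the role:
@?/sqlplus/admin/plustrce.sql
grant plustrace to patrick;    -- "patrick" is a placeholder user name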


 

Tom Kyte
June 01, 2004 - 7:47 am UTC

never use SYS or SYSTEM for anything -- magical things happen (especially as SYS). Some features don't even work if you are SYS....

SYS is *special*. Ignore its existence.
