
Question and Answer

Tom Kyte

Thanks for the question, Sandra.

Asked: March 07, 2002 - 1:15 pm UTC

Last updated: November 15, 2009 - 1:11 pm UTC

Version: 8.1.7

Viewed 50K+ times!

You Asked

1. Are there any performance concerns with using one Oracle sequence to generate primary key values across a couple thousand tables? We started to go down the path of creating a separate Oracle sequence for each table. Considering the fact that we can cache sequences on the server, I'm not convinced there are performance issues with concurrent, heavy usage of a single sequence.

2. What is the difference between caching a sequence (CREATE SEQUENCE <owner>.<sequence_name> CACHE n) and pinning a sequence (execute sys.dbms_shared_pool.keep(<owner>.<sequence_name>,'Q')?

and Tom said...

1) there could be some contention introduced by using a single object for everything, but it's more of a feeling I have than anything I can point to on paper.

There is only one sequence cache latch per instance -- so regardless of whether you have 1 or 1,000 sequences, that won't change. I guess the "hot" spot would be sys.seq$ -- but even that's probably not true since that table is very very very small in any case.

Oh bother -- I knew it would come to this. OK, I ran a benchmark. The setup:

o I created ten sequences, seq1, ... seq10
o I had a script:

declare
    l_val number;
begin
    for i in 1 .. 100000
    loop
        select seq&1..nextval into l_val from dual;
    end loop;
end;
/
exit

that would just select from them.

o I ran this csh script:

#!/bin/csh -f

sqlplus perfstat/perfstat <<EOF
exec statspack.snap(10);
exit
EOF


sqlplus / @selectseq 1 &
sqlplus / @selectseq 1 &
sqlplus / @selectseq 1 &
sqlplus / @selectseq 1 &
sqlplus / @selectseq 1 &
sqlplus / @selectseq 1 &
sqlplus / @selectseq 1 &
sqlplus / @selectseq 1 &
sqlplus / @selectseq 1 &
sqlplus / @selectseq 1 &

wait



sqlplus perfstat/perfstat <<EOF
exec statspack.snap(10);
exit
EOF

sqlplus / @selectseq 1 &
sqlplus / @selectseq 2 &
sqlplus / @selectseq 3 &
sqlplus / @selectseq 4 &
sqlplus / @selectseq 5 &
sqlplus / @selectseq 6 &
sqlplus / @selectseq 7 &
sqlplus / @selectseq 8 &
sqlplus / @selectseq 9 &
sqlplus / @selectseq 10 &

wait

sqlplus perfstat/perfstat <<EOF
exec statspack.snap(10);
exit
EOF
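
(The DDL for the ten sequences isn't shown above; since the cache sizes were left at the defaults, the creation step would have been roughly the following -- a sketch, not the original script:)

begin
    for i in 1 .. 10 loop
        -- defaults: increment by 1, cache 20, noorder, nocycle
        execute immediate 'create sequence seq' || i;
    end loop;
end;
/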

So: first, 10 guys selecting from the same sequence, and next, 10 guys selecting from 10 different ones.... statspack says....


                        Single Sequence    Ten Sequences
                        ---------------    -------------
run time:                  3.47 minutes     2.42 minutes


single sequence                              Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                             Wait     % Total
Event                                               Waits    Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
enqueue                                           116,126      163,747   88.42
log file parallel write                            14,608       20,176   10.89
latch free                                            909          989     .53
control file parallel write                            67          146     .08
log file sync                                          12          108     .06
-------------------------------------------------------------

ten sequences                                Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                             Wait     % Total
Event                                               Waits    Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
buffer busy waits                                   2,589       16,183   40.70
log file parallel write                             1,801       14,092   35.44
latch free                                            973        7,826   19.68
db file parallel write                                 99        1,172    2.95
log file sync                                          12          248     .62
-------------------------------------------------------------



So, I would say there will be a difference based on this, a potentially big difference (good to know that my "gut" can be right sometimes ;)


2) the difference there is that the CREATE SEQUENCE CACHE causes us to cache values for the sequence in memory. If you cache 20 -- we update seq$ every 20 requests. If you cache 200, we update the physical table seq$ every 200 requests.

The pinning affects how the sequence would be treated in the shared pool. If not pinned (and I'm not a big fan of pinning; if you need to pin -- it indicates a fundamental problem that should be fixed), the sequence is subject to aging out of the shared pool. So, if you don't touch it, it'll be aged out -- and you can "lose" some values. For example, if you cache 200 -- select 100 values (so 100 are left in the cache) and we decide to age the sequence out -- you'll have "lost" those 100 values.
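
To make the two options concrete -- a minimal sketch, with a placeholder owner/sequence name (the 'Q' flag is what dbms_shared_pool uses for sequences, as in the question above):

-- caching: pre-allocate 1,000 values in memory; sys.seq$ is updated
-- once per 1,000 nextval calls instead of once per 20
create sequence scott.my_seq cache 1000;
alter sequence scott.my_seq cache 1000;   -- or change an existing one

-- pinning: keep the sequence's entry in the shared pool so it is not aged out
exec sys.dbms_shared_pool.keep( 'SCOTT.MY_SEQ', 'Q' );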

Hope that helps...


Comments

Harold Flores, March 07, 2002 - 7:02 pm UTC

If I pin some sequences in the SGA and the cache is 0, will I still lose numbers from the sequence?

What can I do to avoid losing values that are already cached in the SGA?

Regards

Tom Kyte
March 07, 2002 - 7:15 pm UTC

You know -- I never understand why people worry about "losing" the numbers... A simple rollback will "lose" a number. A shutdown will. Losing 19 numbers (with a default cache of 20) is no big deal (you are not going to exhaust that sequence, not in many dozens and dozens of years).

If you use NOCACHE, you will of course not "lose" any -- you don't have any cached values to lose. If you pin a cached sequence, you'll lose some on shutdown but not otherwise.

Harold Flores, March 08, 2002 - 9:34 am UTC

We use the sequence for account numbers; if we lose a single number, that is a big deal.

Tom Kyte
March 08, 2002 - 2:23 pm UTC

Then you cannot be using a sequence for that -- as I said, a simple rollback will in fact lose one!!! An instance failure can lose one!!!

SEQUENCES are not gap free under ANY circumstance -- EVER. They are 100% assured to have a gap at some point. 100%....

So, if in fact it's a "big deal" due to some artificial external constraint, you are already out of luck.

Thanks a bunch!

Sandra Dawson, March 08, 2002 - 4:40 pm UTC

Very useful info. Ran the benchmark on our machine resulting in the same "Top 5 Wait Events". Looks like there's some locking issues when all ten jobs utilize the same sequence. The time difference was not quite as big in our environment, but it's certainly enough to convince me. Thanks again!

Ivan, April 19, 2002 - 3:59 pm UTC

The results are valid for a particular heavy usage test scenario, but how can you measure the "overhead" of maintaining the questioner's couple thousand sequences?

Sequences in practice are arbitrary numbers. Notwithstanding your results, it is simpler to have a single arbitrary number generator than hundreds of arbitrary generators. Less is blessed. Most sequence objects also usually require a synonym and privileges. All of this stuff must be kept in the catalog, loaded into the SGA, backed up, recovered, created, whatever.

Your test grabbed thousands of numbers per second. If there are a couple thousand tables in this database, it is likely that only a few sequences are really consistently "heavy usage". Many sequences are rarely or lightly used in a day's work. The issue is different, when you compare accessing 1000 sequences for one value versus accessing one sequence for 1000 values.

I would suggest, for new sequence needs, that using a common sequence be looked at. An application database of a couple thousand tables likely could work well with a few score sequences. The only exceptions could be for sequences with:
1) consistently "heavy usage"
2) user-facing purposes: numbers that are seen (e.g., Order Number), or where a reasonably consistent assignment is desired


One small disadvantage of a common general sequence is that bigger number values tend to use more storage space in a table. Also, if a sequence "goes berserk" by some error, its effect and resolution would be more wide ranging.



Tom Kyte
April 19, 2002 - 4:47 pm UTC

Well, I disagree with most everything you say EXCEPT that "in general, less is best".

The question is:

Are there any performance concerns with using one Oracle sequence to generate
primary key values across a couple thousand tables?


You say:

<quote>
If there are a couple
thousand tables in this database, it is likely that only a few sequences are
really consistently "heavy usage".
</quote>

Well the problem with that is -- there is ONLY one sequence, not a few, one. So, there is ONE sequence that is consistently "heavy usage".

The query is "are there any performance concerns". The answer is (apparently and provably) YES, YES there are and here they are. Do with it what you might.

With thousands of tables -- it sounds like a big database. Big databases have lots of users. The ENTIRE PURPOSE, CONCEPT of a sequence was to improve concurrency. here we see that if lots of people use it at the same exact time, there are contention issues. You just possibly multiplied your contention issues by THOUSANDS.

You might think it is "simpler" -- I don't agree in general. Contention is a hard thing to fix *after the fact*. It is easier to fix it up front (consider having to retrofit the code in 6 months to use different sequences. Not so *simple* anymore)

We are not comparing:

<quote>
The issue is different, when you compare accessing 1000 sequences
for one value versus accessing one sequence for 1000 values.
</quote>

as you say. We are comparing:

accessing 1,000 sequences vs
accessing one sequence for 10's/100's of thousands or millions of values.

EVERY row inserted during the day into ANY TABLE would need that sequence. Your statement only stands up if this system inserts a single row into each table per day. Highly *unlikely* in my opinion.


A common sequence can be looked at -- but a SINGLE sequence for thousands of tables -- I don't think so. If you have a single sequence, you by definition have a consistently "heavy usage" one.

I would not consider synonyms and grants as issues because

a) you shouldn't use synonyms so much ANYWAY for ANYTHING
b) this logic can and should be hidden in stored procedures and if not, we have this thing called a "role".


So, I beg to differ. If you read the question (use ONE sequence for thousands of tables) and the conclusion (apparently there is measurable contention if ONE sequence is used by lots of concurrent sessions)..... I don't see how you can argue with it.

and the advice-- there can be a big difference, take this into consideration.

As with all information everywhere in the world -- do with it what you might.



heavy usage of sequence

prem, October 16, 2002 - 11:06 am UTC

what about the impact of sequence updates on ora-01555?



Tom Kyte
October 16, 2002 - 1:10 pm UTC

ora-1555's are always caused, fundamentally, by having rollback segments that are too small.

the recursive sql can cause the symptom (lots of commits).

You can increase performance by increasing the cache size (reducing the recursive sql).
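
For example (the sequence name here is just a placeholder), bumping the cache is a one line alter:

alter sequence my_seq cache 1000;   -- seq$ is now updated (and committed) once
                                    -- per 1,000 nextvals instead of once per 20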

What was cache size?

Alan, July 09, 2003 - 11:56 am UTC

Hi Tom,

Excellent benchmark!

Just out of curiosity, do you remember what your cache size was set to when you ran this test? Just curious if
a cache size of say 100,000 would have helped the single sequence test out a little better.

Thanks.

Tom Kyte
July 09, 2003 - 1:34 pm UTC

it was the default, yes, a larger cache size would have helped.
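
(If you want to check what a sequence's cache is currently set to, the data dictionary shows it -- a generic query, not part of the original test:)

select sequence_name, cache_size, last_number
  from user_sequences
 where sequence_name like 'SEQ%';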

One more question about this test

Alan, July 09, 2003 - 3:25 pm UTC

Tom,

In the second test, each unix process is essentially hitting its own sequence, whereas in the first test 10 processes are competing for the same sequence.

Would the test results have been much different if each of the multiple sequence processes were hitting the sequences being used by the other processes? This would be more of a real world example for cases where user processes are inserting to master/detail tables.

Unfortunately I can't set up test case until I get home tonight, so I was wondering what your "gut feel" was. ;)

Thanks.

Tom Kyte
July 09, 2003 - 7:22 pm UTC

no, the only thing I was trying to show here was "yes, if you use a single sequence -- you will increase contention, there is a downside to using a single sequence for thousands of tables"

That is all the test was doing.

Cache size makes a HUGE difference......

Mark J. Bobak, July 09, 2003 - 4:28 pm UTC

In fact, cache size is so important, it seems to be the
overwhelming factor in this test.

First, I slightly modified Tom's test. I added one more
sequence, seq11, and used that for the 'single sequence test'.

Well, I was going to post all my data, but I don't think there
is room here.

So, what I found was that at the default cache size, my test
produced results nearly identical to Tom's. Also, I noticed
that the single sequence test produced notably less redo.
Then, I did two more tests, first, I increased the cache
size for seq11, the single sequence test, from 20 to 200.
(Logic being, 10x the load on this sequence, let's increase
the cache size by 10x.) In this case, the two tests had
exactly the same elapsed time. The single sequence test was
still dominated by enqueue waits on the SQ enqueue, but
total waits were reduced, and total elapsed time was the
same as the multiple sequences test. Also, redo generation
was way lower on the single sequence test.

Finally, I altered the seq11 sequence again to cache 200,000.
In this case, the single sequence test actually beat the
performance of the multiple sequences. Redo generation was
way down again, on the single sequence test.

And, lest you claim I gave the single sequence test an
advantage by increasing the cache value, and leaving the
other sequences at default cache size, I ran one more test.
This time, I left the single sequence at 200,000, and I
increased the cache of all 10 sequences in the multi-
sequence test to 200,000 as well. Elapsed time, the single
sequence edged out the multiples, 1.78 minutes vs. 1.82
minutes. Redo generation, single sequence was the clear
winner, at slightly more than 1/2 the redo that the multiple
sequence test generated.

So, it seems to me:

- less is better
- cache size is critical, don't be afraid to bump it up
on busy sequences.

Just my two bits.....

Tom Kyte
July 09, 2003 - 7:34 pm UTC

you would be amazed what setting a sequence cache via alter sequence to 100,000 or more can do during a large load -- amazed.

Nice

Alan, July 09, 2003 - 4:54 pm UTC

Thanks for posting your findings Mark.

Since you have your test case handy, can you do a quick test?

Change the code to:
declare
    l_val number;
begin
    for i in 1 .. 10000    -- reduce by a factor of 10
    loop
        select seq1.nextval into l_val from dual;
        select seq2.nextval into l_val from dual;
        select seq3.nextval into l_val from dual;
        select seq4.nextval into l_val from dual;
        select seq5.nextval into l_val from dual;
        select seq6.nextval into l_val from dual;
        select seq7.nextval into l_val from dual;
        select seq8.nextval into l_val from dual;
        select seq9.nextval into l_val from dual;
        select seq10.nextval into l_val from dual;
    end loop;
end;
/

And then launch with 10 processes. This should simulate the second test case competing for the same sequence as the first test case did.

Thanks.

Hmm....Not sure what to make of this......

Mark J. Bobak, July 09, 2003 - 5:21 pm UTC

I made the changes that Alan requested......

And things came out about even -- redo generation, elapsed
time -- and in both cases the wait time was dominated by
the latch free wait event. From monitoring V$SESSION_WAIT
during the test, I'd say it was cache buffers chains
latch contention. Actually, I realized those were level
10 snaps. The StatsPack data confirms it was cache buffers
chains latch contention.

Tom Kyte
July 09, 2003 - 8:16 pm UTC

could have been the banging on dual -- you might consider modifying the benchmark to be:

insert into t values ( s.nextval )


(making sure that T is either in an ASSM tablespace in 9i or you have multiple freelists configured for it)



Thanks

Alan, July 09, 2003 - 5:35 pm UTC

Mark,

Thanks a million for testing.

Were enqueues similar as well between both test cases?

Regards.

No enqueue waits....

Mark J. Bobak, July 09, 2003 - 5:42 pm UTC

Over 99% of waits were latch free, and a huge majority of
those were on cache buffers chains. The enqueue waits go
away when you increase the cache size.

Results from further testing.

Alan, July 10, 2003 - 8:28 am UTC

Tom,

I changed the benchmark to insert into tables as you described and increasing the cache size for the single sequence test didn't seem to make much difference.

My test.
Create tablespace test_seq with ASSM Uniform 1M.
Create tables 1 - 10 as
(x number(38) primary key)
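
In DDL terms that setup is roughly the following (a sketch -- the datafile name and sizes are assumptions, the rest follows the description above):

create tablespace test_seq
    datafile 'test_seq01.dbf' size 500m
    extent management local uniform size 1m
    segment space management auto;

create table t1 ( x number(38) primary key ) tablespace test_seq;
-- ... and likewise for t2 through t10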

anonymous pl/sql loop changed to:
test_single.sql
begin  -- single sequence test
    for i in 1..10000
    loop
        insert into t2 values (seq1.nextval);
        insert into t3 values (seq1.nextval);
        insert into t4 values (seq1.nextval);
        insert into t5 values (seq1.nextval);
        insert into t6 values (seq1.nextval);
        insert into t7 values (seq1.nextval);
        insert into t8 values (seq1.nextval);
        insert into t9 values (seq1.nextval);
        insert into t10 values (seq1.nextval);
        insert into t1 values (seq1.nextval);
    end loop;
end;
/
exit

AND
test_multi.sql
begin  -- multiple sequence test
    for i in 1..10000
    loop
        insert into t2 values (seq2.nextval);
        insert into t3 values (seq3.nextval);
        insert into t4 values (seq4.nextval);
        insert into t5 values (seq5.nextval);
        insert into t6 values (seq6.nextval);
        insert into t7 values (seq7.nextval);
        insert into t8 values (seq8.nextval);
        insert into t9 values (seq9.nextval);
        insert into t10 values (seq10.nextval);
        insert into t1 values (seq11.nextval);
    end loop;
end;
/
exit

For this test, seq1 had a cache of 10,000 and seq2-11 had a cache of 1000.

The single sequence test took 3.5 minutes to run (this is on NT after all) and had the following:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
enqueue 10,929 329 29.45
buffer busy waits 11,997 301 26.87
db file sequential read 3,223 115 10.26
latch free 4,422 104 9.27
CPU time 101 9.03

The multiple sequences test took 2.07 minutes to run and showed:
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
db file sequential read 214 1 44.29
CPU time 1 19.64
control file parallel write 40 0 7.97
db file parallel write 10 0 7.71
control file sequential read 44 0 7.05


I then went back and modified seq1 to have a cache of 100,000 and ran the single sequence test again, but it didn't improve the speed or results by very much. Enqueues and Buffer Busy Waits still dominated the top 5 events although with slightly lower waits than the first test.

So as usual, your gut feel is correct.

Regards.

Balasubramanian.R, August 01, 2003 - 1:35 am UTC

Followup:
A shutdown will. Losing 19 numbers
(with a default cache of 20) is no big deal (you are not going to exhaust that
sequence, not in many dozens and dozens of years).

What we observed for a cached sequence:
1) if we do a normal shutdown, we are not losing 19 numbers.
2) only if we do a shutdown abort are we losing 19 numbers.

Why do we lose numbers from a cached sequence on shutdown abort?




Tom Kyte
August 01, 2003 - 7:53 am UTC

because you aborted the database, it just "went away", it did not take the time to go to the data dictionary and update the SEQ$ table.

any thoughts on this statspack report

Joseph, September 10, 2003 - 10:07 am UTC


STATSPACK report for

DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
PRDPROD 1079374339 prdprod 1 9.2.0.1.0 NO prd-proddb

Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 82 08-Sep-03 15:51:26 20 5.1
End Snap: 83 08-Sep-03 16:18:35 22 6.3
Elapsed: 27.15 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~
Buffer Cache: 112M Std Block Size: 8K
Shared Pool Size: 160M Log Buffer: 1,024K

Load Profile
~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 991.39 3,436.12
Logical reads: 893.03 3,095.22
Block changes: 5.26 18.24
Physical reads: 0.98 3.40
Physical writes: 0.17 0.59
User calls: 3.02 10.47
Parses: 6.54 22.65
Hard parses: 0.14 0.49
Sorts: 34.03 117.94
Logons: 0.09 0.32
Executes: 17.00 58.90
Transactions: 0.29

% Blocks changed per Read: 0.59 Recursive Call %: 96.76
Rollback per transaction %: 1.06 Rows per Sort: 29.43

Instance Efficiency Percentages (Target 100%) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.89 In-memory Sort %: 100.00
Library Hit %: 98.38 Soft Parse %: 97.83
Execute to Parse %: 61.54 Latch Hit %: 99.99
Parse CPU to Parse Elapsd %: 92.93 % Non-Parse CPU: 97.75

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 16.71 20.38
% SQL with executions>1: 76.38 79.43
% Memory for SQL w/exec>1: 73.19 75.37

Top 5 Timed Events
~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 152 83.39
control file parallel write 524 13 6.92
log file sync 319 9 5.16
db file parallel write 42 4 2.07
log file parallel write 654 3 1.77
-------------------------------------------------------------
Wait Events for DB: PRDPROD Instance: prdprod Snaps: 82 -83
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
control file parallel write 524 0 13 24 1.1
log file sync 319 1 9 29 0.7
db file parallel write 42 21 4 90 0.1
log file parallel write 654 638 3 5 1.4
db file sequential read 624 0 1 1 1.3
db file scattered read 74 0 0 6 0.2
latch free 4 2 0 13 0.0
SQL*Net more data to client 161 0 0 0 0.3
control file sequential read 294 0 0 0 0.6
direct path read (lob) 4 0 0 0 0.0
LGWR wait for redo copy 1 0 0 0 0.0
direct path write 1 0 0 0 0.0
SQL*Net message from client 4,005 0 3,727 931 8.5
wakeup time manager 51 51 1,530 30004 0.1
SQL*Net message to client 4,005 0 0 0 8.5
SQL*Net more data from clien 24 0 0 0 0.1
-------------------------------------------------------------
Background Wait Events for DB: PRDPROD Instance: prdprod Snaps: 82 -83
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
control file parallel write 524 0 13 24 1.1
db file parallel write 42 21 4 90 0.1
log file parallel write 654 638 3 5 1.4
control file sequential read 130 0 0 0 0.3
db file sequential read 3 0 0 0 0.0
LGWR wait for redo copy 1 0 0 0 0.0
rdbms ipc message 3,694 3,233 9,655 2614 7.9
smon timer 5 4 1,334 ###### 0.0
-------------------------------------------------------------
SQL ordered by Gets for DB: PRDPROD Instance: prdprod Snaps: 82 -83
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used
-> by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100

CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
739,536 2,982 248.0 50.8 29.84 29.39 1498920852
select max(nvl(option$,0)) from sysauth$ where privilege#=:1 con nect by grantee#=prior privilege# and privilege#>0 start with (g rantee#=:2 or grantee#=1) and privilege#>0 group by privilege#

316,545 3 105,515.0 21.8 28.06 28.34 3734722662
SELECT UPPER(LTRIM(RTRIM(ID))) SOLUTION_FK_ID, TO_NUMBER
(SUBSTR(ID,INSTR(ID,'_',1,2)+1,LENGTH(ID))) SOLUTION_NUMBER,
UPPER(NAME) NAME, SOLUTIONS_PKG.GET_CATEGORY_FOR_VIEW
_LOG(ID) CATEGORY, to_char(DATE_CREATED,'mm/dd/yyyy') DAT
E_CREATED, UPPER(CREATED_BY) CREATED_BY, to_char(E

164,631 2,652 62.1 11.3 14.95 15.09 3666276338
SELECT * FROM soln_xref_categories WHERE so
lution_fk_id = :b1 /*ADVICE(1915): Cursor reference to a externa l variable (use a parameter) [209] */
AND record_status = 'CURRENT'

154,550 2,810 55.0 10.6 11.52 11.48 1242355473
SELECT review_status FROM enotebook_reviews WHERE
e_notebook_entity_id = :b1 AND record_status = 'CURRENT'

96,967 13 7,459.0 6.7 3.11 3.03 2342908987
Module: Foglight Oracle Agent
select ((owner||'.')||table_name) from role_tab_privs r ,user_r ole_privs u where (r.role='SELECT_CATALOG_ROLE' and u.granted_ro
le=r.role)

76,063 13 5,851.0 5.2 1.83 1.80 4010130737
Module: Foglight Oracle Agent
select ((owner||'.')||table_name) from sys.user_tab_privs where (grantee in ('PUBLIC',:b0) and privilege='SELECT') union select granted_role from sys.user_role_privs where username=:b0


58,817 148 397.4 4.0 6.06 6.83 890248754
BEGIN SOLUTIONS_PKG.add_sample_component(:1,:2,:3,:4,:5,:6); END ;

49,435 2,991 16.5 3.4 2.97 2.87 3013728279
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
1) and privilege#>0

26,567 148 179.5 1.8 54.72 55.15 155406885
BEGIN JOB_JAVA_PKG.INSERT_NEW_SOLUTION(:1,:2,:3,:4); END;

25,966 148 175.4 1.8 54.53 54.83 1121734198
INSERT INTO solutions (NAME, sol_type, created
_by, candidate_key )
SQL ordered by Gets for DB: PRDPROD Instance: prdprod Snaps: 82 -83
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used
-> by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100

CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
VALUES (:b1, :b4, :b3, :b2 || :b1
)

24,052 148 162.5 1.7 2.73 2.80 3720500110
SELECT /*+ ALL_ROWS*/ * FROM soln_co
mponents WHERE solution_fk_id = :b2 AND co
mponent_fk_id = :b1 AND record_status = 'CURRENT'

23,987 148 162.1 1.6 2.16 2.41 597870277
SELECT MAX (ID) FROM soln_components WHERE soluti
on_fk_id = :b2 AND component_fk_id = :b1 AND r
ecord_status = 'CURRENT'

18,963 150 126.4 1.3 49.01 49.29 458776950
SELECT NVL (MAX (days_counter), 0) + 1 FROM solutions
WHERE TO_DATE (date_created) = TO_DATE (SYSDATE) A
ND UPPER (sol_type) LIKE '%' || UPPER (:b2 ) || '%'
AND UPPER (NAME) LIKE '%' || UPPER (:b1 ) || '%'

17,656 8 2,207.0 1.2 1.29 1.29 2727211991
SELECT ID, TEST, CREATED_BY, MO
DIFIED_BY, DATE_CREATED, STATUS, NVL(
JOB_JAVA_PKG.get_chemist(ID),'NOT ASSIGNED')NAME, JOB_J
AVA_PKG.get_review_status(ID) REVIEW_STATUS, ANALITICAL_METH
OD_# , JOB_JAVA_PKG.get_review_status(ID) REVIEW_STATUS, REA

14,083 3 4,694.3 1.0 1.35 1.35 356648237
select instrument_main_pkg.get_instrument_name(ins.instrument_ma
in_fk_id) INST_NAME, columns_java_pkg.get_colum
n_name(ins.columns_tab_fk_id) COLUMN_NAME,
columns_java_pkg.get_column_serial_no(ins.columns_
tab_fk_id) COLUMN_SERIAL_NO, frx_r

7,254 117 62.0 0.5 0.39 0.43 3789430917
SELECT trim(fname) || ' ' || trim(lname) from employees
where upper(trim(usid)) = upper(trim(:b1))

6,370 148 43.0 0.4 0.41 0.38 1451474974
UPDATE sample_prep SET record_status = 'HISTORY'
WHERE SAMPLE = :b2 AND jobs_fk_id = :b1

5,704 2,856 2.0 0.4 0.29 0.30 4276509072
SELECT category_desc FROM category_lkup WHERE ID
= :b1 AND record_status = 'CURRENT'

4,509 148 30.5 0.3 4.91 4.84 1345456906
SELECT NVL (MAX (days_counter), 0) + 1 FROM soluti
ons WHERE -- TO_DATE(DATE_CREATED)=TO_DATE(SYSDATE)
-- AND ID LIKE '%' || :b2
|| '%' AND UPPER (sol_type) LIKE '%' || UPP
SQL ordered by Gets for DB: PRDPROD Instance: prdprod Snaps: 82 -83
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used
-> by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100

CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
ER (:b1 ) || '%'

4,383 1,461 3.0 0.3 0.09 0.13 2216996262
SELECT test FROM jobs WHERE ID = :b1 AND re
cord_status = 'CURRENT'

-------------------------------------------------------------
SQL ordered by Reads for DB: PRDPROD Instance: prdprod Snaps: 82 -83
-> End Disk Reads Threshold: 1000

CPU Elapsd
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
220 148 1.5 13.8 6.06 6.83 890248754
BEGIN SOLUTIONS_PKG.add_sample_component(:1,:2,:3,:4,:5,:6); END ;

202 159 1.3 12.6 0.07 0.36 3111103299
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece fr om idl_ub1$ where obj#=:1 and part=:2 and version=:3 order by pi ece#

161 148 1.1 10.1 2.73 2.80 3720500110
SELECT /*+ ALL_ROWS*/ * FROM soln_co
mponents WHERE solution_fk_id = :b2 AND co
mponent_fk_id = :b1 AND record_status = 'CURRENT'

143

Tom Kyte
September 10, 2003 - 7:42 pm UTC

looks like an almost idle, low use system without any major problems.

you have a bad query executed three times and the biggest load on your system seems to be your tuning tool :)

Sequences in the create table

Subra, November 03, 2003 - 7:52 pm UTC

Your excellent responses help the Oracle community.
I wonder why Oracle is not allowing me to use sequence.nextval in the DEFAULT part of the column specification in table DDL.
ALTER TABLE T_TEST_ID
MODIFY (TEST_ID NUMBER(10) DEFAULT SEQ_TEST_ID.nextval)
*
ERROR at line 2:
ORA-00984: column not allowed here

I need to be able to automatically populate the test_id column (with sequence.nextval) whenever a user inserts a row, and I do not want to allow the user to use SEQ_TEST_ID.nextval. How could I do it with the column specification in the table DDL?



Tom Kyte
November 04, 2003 - 8:11 am UTC

you cannot, this is accomplished via a trigger.


begin select s.nextval into :new.id from dual; end;
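
A complete version of that trigger, using the table, column and sequence names from the question (the trigger name itself is made up):

create or replace trigger t_test_id_bi
    before insert on t_test_id
    for each row
begin
    select seq_test_id.nextval into :new.test_id from dual;
end;
/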

Sequence performance

A reader, March 07, 2004 - 4:26 pm UTC

On the issue of sequences and performance, I have 2 questions.

1) How does the picture change (for the better or worse) if OPS/RAC is involved for contention on sequences and cache values?


I'm sure it's been asked before, and I'm sure you have mentioned it somewhere, but I can't find the answer to:

Suppose we have a table that is just one column with one row and is used as a unique number generator (just like a sequence). The process that needs "some unique numbers" increments the value in the row by, say, 100 and commits. It then internally issues the numbers in sequence until it exhausts them and then does it all over again. Of course we have several processes doing the same thing to the same table (all communicating with the same d/b from different "client" servers), all caching 100 values and dishing them out as required.

Crazy as it may sound, I'm trying to convince the
developers to use sequences. I need ammunition... benchmarks... test cases... anything.

Thanks
Ma$e




Tom Kyte
March 08, 2004 - 7:46 am UTC

1) you want a generous cache, you never want to use "nocache" or "ordered" options -- you just want to use the sequence for what it is -- a surrogate key generator.

You want to drop your single table and update/commit and just use a sequence.

reason: it is easier.
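
A sketch of what that replacement looks like (all names are placeholders) -- the home-grown counter table, its update/commit and the client-side caching logic collapse into one sequence and a nextval reference:

create sequence my_id_seq cache 1000;   -- generous cache; no NOCACHE, no ORDER

create table some_table ( id number primary key, payload varchar2(30) );

-- each session just references the sequence directly; no locking, no extra
-- commit, no hand-rolled cache of 100 values to manage
insert into some_table ( id, payload ) values ( my_id_seq.nextval, 'example row' );
commit;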

SQL/PLSQLplus

Mohamed Al Maskari, March 21, 2004 - 5:35 am UTC

Is there a free Q&A for SQL/PLSQL+?

Tom Kyte
March 21, 2004 - 9:59 am UTC

right here?

but there are discussion forums on otn.oracle.com as well, yes.

A reader, May 26, 2004 - 9:32 pm UTC


Slow performance while inserting into a table with a sequence

Arul, June 28, 2004 - 8:04 am UTC

Hello Tom,

I have an insert statement which performs very slowly. The 10046 trace file shows db file sequential read as the wait event for the SQL "update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1"
The above statement appears 50 times in the trace file with a lot of waits for db file sequential read, and the plan for the SQL is

Rows Row Source Operation
------- ---------------------------------------------------
1 UPDATE
1 INDEX UNIQUE SCAN I_SEQ1 (object id 107)

Now, I_SEQ1 being a SYS-owned object, what kind of tuning will help me? The above statistics are from the raw trace file, whereas the TKPROF output looks like the below.

********************************************************************************

insert into t_active (ACTIVE_SEQ,MOBILE,SPPHONE,SERVICEID,DSERVICEID,
ACTIONTIME,ACTIONTYPE,ACTIONCHANNEL,ACTIONSTATUS,INSERTTIME,REASON,PART)
VALUES
(active_seq.nextval,:1,:2,:3,:4,TO_DATE(:5,'YYYYMMDDHH24MISS'),:6,:7,:8,
TO_DATE(:9,'YYYYMMDDHH24MISS'),:10,:11)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.00 0 3 0 0
Execute 1 0.53 34.41 3186 1079 5216 1000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.54 34.41 3186 1082 5216 1000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 62 (OPERATOR)

Rows Row Source Operation
------- ---------------------------------------------------
1000 SEQUENCE


Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT GOAL: CHOOSE
1000 SEQUENCE OF 'ACTIVE_SEQ'


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 3186 0.10 33.94
SQL*Net more data from client 35 0.00 0.00
SQL*Net message to client 3 0.00 0.00
SQL*Net message from client 3 0.00 0.00
********************************************************************************

Please let me know the root cause of db file sequential read in this scenario and measures to tune the query.

The definition of the sequence is as below.

CREATE SEQUENCE active_seq
INCREMENT BY 1
MAXVALUE 9999999999999999999
MINVALUE 100
CYCLE
Thanx in Advance...



Tom Kyte
June 28, 2004 - 9:05 am UTC

the db file sequential read was for the insert - reading index blocks, reading freelist blocks, reading blocks from disk in which to put the data.


It is true that you could speed up the sequence -- by alter sequence active_seq cache 1000;

right now it is cache 20 by default.

but I do not think it is your sequence that is the issue here, it was 3,186 physical IO's to read indexes, table blocks.

Cache size of 1000 did not help

Arul, June 28, 2004 - 11:52 pm UTC

Hello TOM,

-- alter sequence active_seq cache 1000; did not help much -- still poor performance.

-- I agree it could be an issue with the 3,186 physical IOs. Could you guide me on what can be done in that regard?

Thanx in Advance.



Tom Kyte
June 29, 2004 - 7:55 am UTC

I said the sequence wasn't the problem.


the problem is that in order to do that insert, you needed to do 3,186 physical IO's. That basically means nothing was in the cache.


how many indexes do you have on this table, what does the table itself look like, why 1000 rows in a bulk insert -- I've found 100's of records at a time to be better.

Sequence benchmark VERY useful

brian, December 28, 2004 - 12:34 pm UTC

Thanks to all that contributed to this... A MOST informative, and for the most part, collegial discussion. I sincerely appreciate all of you sharing your efforts and perspectives.

Great stuff

Steve T, March 03, 2005 - 12:31 pm UTC

I was able to reduce the redo generated on a direct path load with a sequence-generated primary key from >3000000 bytes to 62000, simply by increasing the sequence cache from default value of 20 to 10000. THANKS, TOM!


A reader, October 25, 2005 - 8:57 pm UTC


seq cache - amazed why...

danielmex2010@yahoo.com, December 30, 2005 - 7:56 am UTC

Hi Tom,
In one of the replies, you said setting the cache of a sequence to 100K would be amazing for a fully loaded OLTP -- what did you mean?
Does that mean the system will hang?
I want to increase the cache of some sequences online from 20 to 1000. My system is heavy OLTP -- shall I do it now or wait till off-peak hours?
Another thing:
I have the following sequence stats, calculated for the db sequences at the start of the application's online time and at the end; the online time for my app is 2 hours:

At start of application online time:10:00 am, I have:
10:00:04 LTS2-SYS-TADAWULDB>/

Seq Incr Beg-Value LastValue
---------------------------- ---- --
SEQ_LOG 1 1 101,914,529
SEQ_VOUI 1 1 39,182,219
SEQ_ORDE_EXE 1 1 30,441,132
SEQ_TRN 1 50,000 13,162,871
TCDU_SEQ 1 50,000 12,925,122
CAST_SEQ_NB 1 1 1,482,316
TWST_SEQ_NB 1 1 71,381
USRN_SEQ 1 1 54,471
MKTP_SEQ_NB 1 50,000 51,278
EACT_SEQ 1 50,000 50,000
OMTG_SEQ 1 50,000 50,000
ORD_SEQ 1 50,000 50,000
PRODUCT_SEQ 1 50,000 50,000
QUT_SEQ 1 50,000 50,000
SEQ_HDAY 1 50,000 50,000
TRN_SEQ 1 50,000 50,000
TEST1 1 50,000 50,000
SEQ_USAL 1 50,000 50,000
SEQ_TREE 1 50,000 50,000
SEQ_RCPT_SL_NB 1 50,000 50,000
SEQ_HUB_VOUI 1 50,000 50,000
PLG_SEQ 1 50,000 50,000
ORD_NB_SEQ 1 50,000 50,000
MKT_SEQ 1 50,000 50,000
SEQ_NEWS 1 1 24,981
SEQ_DIRECT_DEA 1 1 17,046
SEQ_BLKT 1 0 16,561
SEQ_UTIL 1 1 302
EUSL_SEQ 1 1 1
T1 1 1 1

After 2 hours of heavy online activity I see values changing rapidly for some sequences; anyway, at 12:00 noon I ran the same SQL and got the following values:
SEQ_LOG 1 1 102,480,549
SEQ_VOUI 1 1 39,417,374
SEQ_ORDE_EXE 1 1 30,617,876
SEQ_TRN 1 50,000 13,241,400
TCDU_SEQ 1 50,000 13,003,646
CAST_SEQ_NB 1 1 1,482,316
SEQ_BLKT 1 0 258,141
TWST_SEQ_NB 1 1 71,381
USRN_SEQ 1 1 54,631
MKTP_SEQ_NB 1 50,000 51,278
EACT_SEQ 1 50,000 50,000
OMTG_SEQ 1 50,000 50,000
ORD_SEQ 1 50,000 50,000
PRODUCT_SEQ 1 50,000 50,000
SEQ_HDAY 1 50,000 50,000
TRN_SEQ 1 50,000 50,000
TEST1 1 50,000 50,000
SEQ_USAL 1 50,000 50,000
SEQ_TREE 1 50,000 50,000
SEQ_RCPT_SL_NB 1 50,000 50,000
SEQ_HUB_VOUI 1 50,000 50,000
QUT_SEQ 1 50,000 50,000
PLG_SEQ 1 50,000 50,000
ORD_NB_SEQ 1 50,000 50,000
MKT_SEQ 1 50,000 50,000
SEQ_NEWS 1 1 24,986
SEQ_DIRECT_DEAL 1 1 17,066
SEQ_UTIL 1 1 302
EUSL_SEQ 1 1 1
T1 1 1 1

The difference is:
Seq                 10:00 am        12:00 noon         Diff
SEQ_LOG 101,914,529 102,480,549 566,020
SEQ_VOUI 39,182,219 39,417,374 235,155
SEQ_ORDE_EXE 30,441,132 30,617,876 176,744
SEQ_TRN 13,162,871 13,241,400 78,529
TCDU_SEQ 12,925,122 13,003,646 78,524
CAST_SEQ_NB 1,482,316 1,482,316 0
TWST_SEQ_NB 71,381 258,141 186,760
USRN_SEQ 54,471 71,381 16,910
MKTP_SEQ_NB 51,278 54,631 3,353
EACT_SEQ 50,000 51,278 1,278
OMTG_SEQ 50,000 50,000 0
ORD_SEQ 50,000 50,000 0
PRODUCT_SEQ 50,000 50,000 0
QUT_SEQ 50,000 50,000 0
SEQ_HDAY 50,000 50,000 0
TRN_SEQ 50,000 50,000 0
TEST1 50,000 50,000 0
SEQ_USAL 50,000 50,000 0
SEQ_TREE 50,000 50,000 0
SEQ_RCPT_SL_NB 50,000 50,000 0
SEQ_HUB_VOUI 50,000 50,000 0
PLG_SEQ 50,000 50,000 0
ORD_NB_SEQ 50,000 50,000 0
MKT_SEQ 50,000 50,000 0
SEQ_NEWS 24,981 50,000 25,019
SEQ_DIRECT_DEA 17,046 24,986 7,940
SEQ_BLKT 16,561 17,066 505
SEQ_UTIL 302 302 0
EUSL_SEQ 1 1 0
T1 1 1 0

Based on the above data, what values do you think I should set
for the cache of each sequence?
I want to start with safe values and then increase.

Also, will caching the sequences cause any increase in SGA memory use or contention?

I have 500 concurrent db connections using the above sequences.

cheers



Tom Kyte
December 30, 2005 - 9:51 am UTC

I said

... you would be amazed what setting a sequence cache via alter sequence to 100,000
or more can do during a large load -- amazed. ...


meaning a large load of data - a data load, bulk load.

It can have an effect on an oltp system as well. Think about what happens when you get a sequence using nextval. By default the sequence cache is 20 - so every 20 times we have to issue "update seq$ set ...." to increment the sequence. If you do this nextval a lot, you'll be doing that update a lot. You could cut down the number of times you do that update by a large amount by increasing the sequence cache value.

As this is DDL, it is something you want to do during a period of light to no usage of the system.
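
A simple way to see that effect for yourself (a sketch -- assumes you can query v$mystat/v$statname; the sequence names are placeholders):

create sequence seq_small cache 20;
create sequence seq_big   cache 10000;

-- note the session's redo generated so far
select b.value
  from v$mystat b, v$statname a
 where a.statistic# = b.statistic#
   and a.name = 'redo size';

declare
    l_val number;
begin
    for i in 1 .. 100000 loop
        select seq_small.nextval into l_val from dual;  -- ~5,000 seq$ updates
    end loop;
end;
/

-- re-run the redo size query, note the delta, then repeat the loop with
-- seq_big (only ~10 seq$ updates) and compare the two deltas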

what about the settings.

daniel, December 30, 2005 - 10:00 am UTC

Thanks, I got it.

About the sequence cache size that I should choose based on the stats provided, would you recommend some starting values, please?
For example, the first sequence, SEQ_LOG, generated 566,020
values in 2 hours, meaning about 78 values/sec.
What cache size should I choose? The default is 20, and about 500 sessions use this sequence heavily. Should I set it to 1000 and observe, or is there a formula based on the stats I gathered?

Moreover, as you see, in my case the top 6 sequences are being used like crazy, so does that mean that in a 2-hour application session with the sequence cache set, I should see a big reduction in redo? That also means fewer waits on log file sync
(because of far fewer updates of seq$), right?
Cheers


Tom Kyte
December 30, 2005 - 11:10 am UTC

how many sequence values are you willing to perhaps lose when the system shuts down?

set it to that.

each update of seq$ does a commit, so yes, you may see fewer log file sync waits, but no promises on that.
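
To put rough numbers on it using the figures posted above: SEQ_LOG advanced by 566,020 values in 2 hours, about 79 nextvals per second. With the default cache of 20 that is roughly 4 updates of seq$ (each with its own commit) every second; with a cache of, say, 10,000 it drops to one seq$ update about every two minutes, and an instance crash would cost at most 10,000 values -- around two minutes' worth of numbering.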

David Aldridge http://oraclesponge.blogspot.com, December 30, 2005 - 2:09 pm UTC

Tom,

I had a couple of questions about the original test on using multiple sequences vs a single sequence, if you can cast your mind back to the Dawn of Time.

i) How did the cache compare on the single sequence test versus the ten sequence test? One would think that for a valid comparison (involving an equivalent number of updates to the seq$ table) the single sequence would have a cache size ten times larger than the caches used in the ten sequence test.

ii) Would the test not be more valid if the ten simultaneous sessions didn't each have a sequence of their own? It seems to me that the procedure ought to involve a chance of multiple sessions querying from the same sequence, as they would in the real world .... so for the ten sequence test with sequences from seq_01 to seq_10 you'd use something like:

declare
    l_val number;
begin
    for i in 1 .. 100000
    loop
        execute immediate
            'select seq_'
            || to_char(floor(dbms_random.value(1,10.99999)),'fm00')
            || '.nextval from dual' into l_val;
    end loop;
end;
/


Tom Kyte
December 31, 2005 - 10:50 am UTC

i) cache sizes were left at what most of the world would leave them at - the default size.

ii) no, I was trying to show 10 people either

a) sharing the same sequence
b) using 10 different sequences

to show why you might not want a single sequence for EVERY object (as that would be case a), and how, with multiple sequences all in use, we don't see the collisions (not that there would be NO collisions, because of course 2 or more people could still be using the same sequence).

I showed the two extremes.

Excellent Article

faisal Hussain, December 31, 2005 - 8:40 pm UTC


Sequence problem

Graham Oakes, January 23, 2006 - 9:50 am UTC

Tom,

I'm reviewing a new product that's been put into production and have found some code that basically uses a table to generate sequences.

i.e.
1/what's the highest value in the table
2/increment it by 1
3/insert into the table the new value
4/use the new value for other stuff

Naturally I'm wary of the performance and validity issues with this method (it's supposed to be a multi-user OLTP system). When I asked why they don't just use sequences, the answer given was that they have to be reset each day and that recreating the sequences would invalidate the code calling them.

I'm not too worried about the invalidation but if there's a better way that doesn't involve the procs becoming invalid I'd like to know about it.

Thanks
Graham

Tom Kyte
January 23, 2006 - 10:40 am UTC

why do they believe they need to reset a surrogate key.

What is the business requirement - not just "we've always done this", but really - what is the $$$ effect of not doing this, what is the business justification.

regarding statspack

rajat dey, February 02, 2006 - 7:08 am UTC

dear sir,


thanks for the help and the opportunity to learn about a lot of real-life Oracle problems as well as ideas.

if possible please help me -- I want to know the steps to create multiple instances in Oracle 9i on AIX.

pls help

thnx
rajat dey

Tom Kyte
February 02, 2006 - 12:02 pm UTC

why do you want to create multiple instances? (an instance is just a set of processes and memory...)

What are you trying to do?

Performance and Oracle Sequences

Hariharan Sairam, November 10, 2009 - 2:50 am UTC

Hi Tom,

Good Day

That was a great explanation on Sequences.

As you have rightly said, sequences are used to create primary/unique keys. Since they are numbers and easily generated from a sequence object, they are easy to implement as well. But I guess many have misunderstood this concept. My views are listed below:

a) Use sequences as primary keys for all transactional data and use a well-formed key as the primary key for all the stagnant data.

By transactional data I mean those transactions that occur several times daily, like cash deposits, money transfers, cash withdrawals etc., where running numbers are not used as a way to identify the transaction -- they are merely used as serial numbers.

By stagnant data I mean the master data like Customer Details, Vendor Details, Client Details etc. For example, in banks that maintain a customer database, a primary key value of TKYTE would be easy to identify as Thomas Kyte (or Tom Kyte, Tim Kyte, Timothy Kyte, etc.), instead of having a primary key of 10789, in which case they have to use another query to find out what 10789 means.

b) Also, I have already mentioned this point to you.

Many think that primary keys should always be numbers, as it seems it would increase performance if we use columns with number data types as primary keys. They are not able to understand that the values to be kept for primary keys depend on the type of application, data usage, scenario, requirements and purpose.

Please correct me if I am wrong.

Thanks

Hari
Tom Kyte
November 11, 2009 - 3:03 pm UTC

a) how about this:

use natural keys for things that have them.
use surrogate keys for things that do not

b) "as it seems it would
increase performance if we use columns with number data types as primary keys. "

unsubstantiated claim not backed up by anything.


instead of guessing, they should test and evaluate and show themselves to be right or wrong.

Performance and Oracle Sequences

Hariharan Sairam, November 10, 2009 - 4:36 am UTC

Hi Tom,

Good Day.

Further to my previous posting, today we faced the following with sequences.

A table was exported from a database on machine A and imported into another database on machine B. When we started using the database on machine B from our UI, we received a unique constraint violated error.

When we checked, both tables generate values for their primary key column from sequences, and the maximum key value from machine A is greater than the current sequence value on machine B.

We reset the sequence on B to 1 greater than that maximum value and started using the sequence.

Is there any other way?

Thanks

Hari
Tom Kyte
November 11, 2009 - 3:12 pm UTC

... Is there any other way? ...

think about it - what other way could possibly exist? How else could we do this?
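
For what it's worth, the usual way to do that reset without dropping and recreating the sequence (a generic technique -- the names and gap size here are placeholders) is to bump it forward with a temporary increment, ideally while nothing else is using it:

alter sequence my_seq increment by 52000;   -- the size of the gap to jump
select my_seq.nextval from dual;            -- advances the sequence once, by 52,000
alter sequence my_seq increment by 1;       -- restore the normal increment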

Ongoing problem with exp/imp

aja, November 11, 2009 - 3:43 pm UTC

This is an ongoing irritation with exp/imp and expdp/impdp. The export facility dumps sequences before the table data. Therefore, in an active database - even with the consistent=y parameter - the data can get ahead of the sequence. The only cure for this I have found is to write a query that generates a DDL script to drop and recreate all of the sequences. Run the query in the source database after the export ends and run the recreate DDL script in the destination database after the import ends.
Tom Kyte
November 15, 2009 - 1:11 pm UTC

ummm

no, with consistent=y all data in the export file is AS OF THE SAME POINT IN TIME.

that is the entire goal - the only reason for existing - of consistent=y. It makes it "as if there is nothing else going on"

Now, if you have used consistent=y and you have observed something different, that means you have been doing exports as SYS -- which is not only a horribly bad idea, a bad practice, something NOT to do; it is also documented that consistent=y does not work for SYS, as SYS cannot do read-only transactions.


So, you are either

a) mistaken in your belief
b) using SYS, which you 1) do not need to 2) should never do and actually (a) would still apply - you would be mistaken in your belief.

