

Question and Answer

Connor McDonald

Thanks for the question, Arshad.

Asked: July 23, 2007 - 5:10 am UTC

Last updated: November 03, 2020 - 5:34 pm UTC

Version: 10.2.0

Viewed 100K+ times!

You Asked

We are using the following script for our sequence:

  MAXVALUE 999999999999999999999999999
  CACHE 20

Then we use it in our package DP_3PF.sql in an INSERT statement.

When we create a report, the sequence value is assigned to each record sequentially (1, 2, 3 ... 10) in the table rep_tbl, but if we create a report again some time later, we see a jump of about 20 in the numbers.

First time when report generated

Step 1.

Report number generated 1
Report number generated 2
Report number generated 3
Report number generated 10

Two or three hours later we tried to generate that report again

Step 2
Report number generated 31
Report number generated 32
Report number generated 33

Here we should get report number 11 instead of report number 31.

My question is: is the difference of 20 numbers because of using CACHE 20?

This process is not used by multiple users; only one user runs it.

and Tom said...

you cannot expect a sequence to return gap free values.

a sequence has one purpose: assign unique numbers to stuff.

Nothing else.

there will be gaps, gaps are normal, expected, good, ok, fine.

they will be there, there is no avoiding them.

This is not a problem, it is expected, it is not "fixable" - a "rollback" for example will generate a gap if some session selected a sequence.

Do not assume they are gap free and all is well in the world.
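A minimal sketch of how a rollback burns a value (the sequence and table names here are illustrative, not from the post):

```sql
create sequence demo_seq;           -- defaults: start with 1, increment by 1, cache 20
create table t ( id number );

insert into t values ( demo_seq.nextval );   -- nextval returns 1
rollback;                                    -- the insert is undone...
insert into t values ( demo_seq.nextval );   -- ...but nextval now returns 2
commit;
-- t now holds a single row with id = 2; the value 1 can never be returned again
```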




Arshad Usmani, July 24, 2007 - 4:29 am UTC

I know there is a gap in the sequence when we use rollback or when an error occurs after selecting the next value from the sequence.
My question is:
I am getting a difference of 20 between sequence values. Is that because of using CACHE 20?
Tom Kyte
July 24, 2007 - 9:48 am UTC

yes, but please do not decrease the cache size - if anything set it much much higher.
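Since a larger cache costs no extra SGA memory, an existing sequence can simply be altered in place (the sequence name here is illustrative):

```sql
alter sequence rep_seq cache 10000;   -- takes effect immediately; no need to drop/recreate
```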

RE: Sequence CACHE

Tom, July 24, 2007 - 6:54 am UTC

Yes, that is why you commonly see gaps of 20... the CACHE option is for bulk-loading or for apps that experience lots of inserts. It is more efficient to have 20 (or more) values "ready" to be used in such situations. If your app experiences only sporadic, one-at-a-time type of inserts, consider eliminating the CACHE spec. Been-there-done-that, bought the T-shirt. Like Tom K says, gaps are not important. If you need business logic in your key (I'm assuming that the purpose of your sequence is to generate a key or a portion of a key) and you need to guarantee they are in order with no gaps, assign it yourself.
Tom Kyte
July 24, 2007 - 10:12 am UTC

no no no!

since sequences by definition have gaps - who cares when they get gaps.

set cache to 10000 - but don't lower it.

you'll never run out of values. even if you lose 10,000 per second

ops$tkyte%ORA9IR2> select 999999999999999999999999999/10000/60/60/24/365 from dual;

that is how many years you would have before running out of values (on the order of 3 * 10^15 - quadrillions of years).

Arshad Usmani, July 24, 2007 - 7:51 am UTC

Thanks for your feedback Tom.

Never is not always never.

Bill B., July 24, 2007 - 2:36 pm UTC

I have a canned application that used a sequence to generate an ID number. The application had a maximum size to hold the ID of 99999. The application used about 200 IDs a day. Since the database was being shut down every night for backup (not my idea) and the cache was set to the default of 20, I was losing 20 sequence values a day due to the shutdown. While I know that the design of the application was bad, there was nothing I could do about it. Also, the numbers could never be reused. To extend the life of the application, I turned off caching of the sequence, thus saving 7300 ID numbers a year. While this was a special case, I am just making the point that never is not always never. However, in a normal application I set the ID column to be as big as possible, never have to worry about the sequence, and normally have a large cache.

Yes, Yes, Yes

Tom, July 25, 2007 - 7:28 am UTC

<quote>since sequences by definition have gaps - who cares when they get gaps.</quote>
Tom, I agree with you 110%, 99% of the time. Some applications use sequences to generate, say, order numbers, or a portion of an order number, etc. Now, if no one ever sees these key values, so be it. However, I've built database schemas to support applications where the users could not understand why the gaps were there and insisted that it was extremely, "our jobs depend on it" important for all "order numbers" to be one-after-the-other. So I dropped the CACHE spec from the sequence and everyone was happy. It was a large app that had few users. The other 99% of the time I could care less about gaps...
Tom Kyte
July 26, 2007 - 9:12 am UTC

rollback - you still have gaps, so I don't get it still.

you have gaps right now.

you also have suboptimal performance, but you have gaps now.

Tom from Gainesville, VA

Matt, July 25, 2007 - 10:49 am UTC

I presume this was a single user system!

Matt from UK

Tom, July 26, 2007 - 8:54 am UTC

Matt, no, it had appr. 250 users with an avg. concurrency of 10 or so percent (20 - 25 users) at any one time. Small, but not a single-user system.

By giving 1 star, were you saying that my scenario was bad or my solution? That application was lightning fast (Cold Fusion with Ajax injection; I also hand-wrote the SQL queries for the apps developers). There was so little performance difference between CACHE 20 and NOCACHE that it was imperceptible to the end users, and to our DBA team.

Sequence gaps

Mike, July 26, 2007 - 10:42 am UTC

CACHE or NOCACHE, if your requirement is truly to make sure there are no gaps, then a sequence is the wrong solution. You may get fewer gaps with NOCACHE, so they may be less noticeable, but you have accepted the possibility of gaps for a requirement that 'demands' no gaps.

If you want to use a sequence, you must first negotiate that 'gap-free' requirement away, and make sure none of the design depends on the keys being 'gap-free'. If the requirement sticks, then you have to use some other mechanism - maybe a table of available keys, or something else.

Also make sure there is no requirement that they are assigned in order, and no design dependency anywhere that they be assigned in order. Use a timestamp to keep track of 'in what order were these done'; if that's not enough, then you need more thought and more work.

RE: Sequence gaps

Tom, July 26, 2007 - 9:20 pm UTC

Mike, that was my point in my first reply, if you need to guarantee an order with no gaps, you must assign it yourself and, I'd also add, due to timing of transactions, be willing to accept that some inserts may fail due to PK violation (I'm thinking along the lines of:
select max(key_id)+1 into v_temp_key from table_name
not a great solution by any means).
And, like you said, even with the NOCACHE option it is not guaranteed to be "gap free", although there will be significantly fewer missing values... not really important to most people.
Tom Kyte
July 27, 2007 - 1:57 pm UTC

you'd have to be thinking more like

lock table
select max()+1

you must serialize....
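A sketch of that serialized approach (the table and column names are illustrative); the exclusive lock is only released by the commit, which is exactly what makes it gap free - and slow:

```sql
lock table orders in exclusive mode;

insert into orders ( order_no, order_date )
select nvl(max(order_no), 0) + 1, sysdate
  from orders;

commit;   -- releases the table lock; every other session waits until here
```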

Gap-free non-sequence solutions

Mike, July 27, 2007 - 4:07 pm UTC

Rather than looking for MAX(key) on the existing table, another approach is to have a table of available keys. You can then DELETE FROM KEY_TABLE WHERE KEY = (SELECT MIN(KEY) FROM KEY_TABLE) RETURNING KEY INTO v_returned_key - this should avoid the possibility of multiple sessions grabbing the same key, but you still need to beware of the scalability of something like this.
Tom Kyte
July 27, 2007 - 8:20 pm UTC

that kind of scalability we call....


it would pretty much be the same.

Saravanan Ramalingam, July 30, 2007 - 9:00 am UTC


select max(key_id)+1 into v_temp_key from table_name is not equivalent to a sequence.

If the last order in the table gets deleted, then the next order will get the same number. Do you want your application numbering to work that way, or do you never delete records from your DB?

Again, it's a case-by-case basis. We could also have a separate table holding the last generated number, retrieve it and add 1 to move ahead; but then we need to update the numbering table as well. It gets a bit complicated when multiple users are involved.


Tom Kyte
July 30, 2007 - 5:32 pm UTC

it is not complicated, it is called

serialization - one at a time - use the lock table command

it is really simple actually; not smart, but plain simple.

What causes gaps though?

David, August 22, 2007 - 6:15 am UTC

Going back to Arshad's original question, what actually causes "cache-sized" gaps to happen? I had been under the (mistaken) belief that those sort of gaps only occurred when the database was bounced, but clearly that's not the only scenario.

We have recently experienced the same thing and the db hasn't been bounced for months.

You've already mentioned rollbacks as being one cause of gaps, but I'd expect that to cause a gap of only <increment_value>.
Tom Kyte
August 22, 2007 - 1:46 pm UTC

just normal aging of things out of the SGA can cause it.

I've never ever considered worrying about it - for the simple reason that I know gaps will happen (not might, but will) so they are a fact - bring them on.

Please ignore previous post

David, August 22, 2007 - 9:29 am UTC

Please ignore my previous post - I found the answer on another of your threads (ie. that sequences can be aged out of the shared pool just like any other db object).

Thanks for your continued good work and apologies for the redundant posts.

Generating gap free values

Amardeep Sidhu, September 14, 2010 - 11:30 pm UTC

Hi Tom,

Just skimmed through the thread. I also came across a similar situation at one of the customer sites. They say that they need gap-free numbers, so a sequence is not an option for them. So they have created a table that stores the names of such counters and their values (there is an index on (counter_name, counter_value)). Something like this (showing only 2 columns):

SQL> select counter_name,counter_value from counter_table where counter_name='TRAN_ID';

    counter_name    counter_value
    ------------    -------------
        TRAN_ID         100

When they want to use this counter, they do "select for update" on the table, increase the fetched value by 1, use it and after they are done with everything, update it back to the table. Completely serializing the stuff as you wrote above.
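As a sketch, the pattern described above looks something like this (using the names from the post; the block structure is illustrative):

```sql
declare
  v_id counter_table.counter_value%type;
begin
  select counter_value + 1
    into v_id
    from counter_table
   where counter_name = 'TRAN_ID'
     for update;                  -- every other transaction now queues behind this row lock

  update counter_table
     set counter_value = v_id
   where counter_name = 'TRAN_ID';

  -- ... do the real work using v_id ...

  commit;   -- only now is the lock released; a rollback would undo the counter too
end;
/
```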

I just wanted to know your opinion that is there a better way of doing this ?

Amardeep Sidhu
Tom Kyte
September 15, 2010 - 7:54 am UTC

nope, if you truly believe you need it (and you don't - I don't care what part of the planet you are on, or what planet you are from, in fact...) - then you shall serialize transaction by transaction (you cannot commit the update of the counter_table until the entire transaction is ready to commit).

You will do ONE transaction at a time - it will reduce your hardware costs as you basically only need a netbook to run your database.

The reason I say you do NOT need to do this (gap free) is because you can come back any time you want and, in bulk, assign a sequential number that is as phony as the surrogate key you are already using in the background.

Many systems do that, when you close out the books, you add something to the data at that point in time. No one can tell the difference between your surrogate key and the "invoice number" as it is.


Amardeep Sidhu, September 27, 2010 - 10:32 pm UTC

>and you don't, I don't care what part of the planet you are on or what planet you are from in fact...

>it will reduce your hardware costs as you basically only need a netbook to run your database.

awesome ;)

Thanks Tom for the follow up. Getting the whole point very clearly !

Amardeep Sidhu

cache order

sagi, November 22, 2010 - 2:17 pm UTC

Hi Tom

If i define a sequence with cache of 100, is it always promised that each nextval statement will always get the lowest value from the current values in the cache?
For example, is it possible that my session will get the value 100 and later on 50?

Tom Kyte
November 23, 2010 - 1:13 pm UTC

... For example, is it possible that my session will get the value 100 and later on
50? ..

yes - it can in fact happen - not 100 and 50 perhaps (1000 and 850 could happen with a cache of 100)...

think "RAC" (you could use the ORDER attribute of the sequence but that would make them PAINFULLY slow and serial in nature)

do not rely on sequences being

a) gap free
b) sequential
c) always increasing

Only think of them as "unique - nothing more, nothing less"

So, how to determine the order of events?

Marat Tolgambayev, November 23, 2010 - 5:36 pm UTC

Dear Tom,

Suppose, we have a table reflecting some events - each row represents one event.
We don't rely on sequences, and we don't rely on timestamps (do we?)
So, how do we determine the order in which events happened (they could happen on, and be inserted from, different sources)?
Thank you!
Tom Kyte
November 24, 2010 - 8:41 am UTC

if the events actually *have* an order - they would be serialized and a timestamp would work.

if the events happen concurrently - then there is not necessarily any serial ordering of them (even in serializable isolation mode).

So tell me, if thirty things happen simultaneously (you have a 30 cpu machine, it is totally possible, not only possible but probable) - what "order" did they happen in ??

We do have the ability to order things based on their COMMIT times (replication does that for us for example). So, if you are talking about replicating something - it is already done for you - streams will order things properly as does advanced replication or golden gate (streams and golden gate just read the history of what happened from the redo log - it is a sequential file that replays work in the order it was performed)

So, tell us your GOAL - not how you are trying to achieve it. Tell us the specific problem you are trying to solve and all of the details about it you can. Do not include a "solution" to the problem - just the specification and nothing else...

Rely on Source

Rick, November 24, 2010 - 8:34 am UTC

I was once faced with the same issue; the answer was to have the source timestamp the events themselves. Of course we had to deal with ties and timezones.

Cache Order

sagi, November 24, 2010 - 11:50 am UTC

Hi Tom,

Thanks for the answer.

You said :
think "RAC" (you could use the ORDER attribute of the sequence but that would make them PAINFULLY slow and serial in nature)

Can I trust "c) always increasing" when using CACHE on a single-node instance?

Tom Kyte
November 24, 2010 - 1:41 pm UTC

sort of.

It will be increasing as you call "nextval", but it will not save records in the database in commit order.


I call nextval and get 100
you call nextval and get 101
you commit
I commit

Your record was FIRST as far as I'm concerned - but it'll be recorded second since it has a higher nextval.

I will strongly and vehemently counsel you to never use ORDER - it is going to be very slow if we have to do that - no concurrency between nodes. A horribly bad idea.

RE: So, how to determine the order of events?

Marat Tolgambayev, November 24, 2010 - 7:02 pm UTC


It was a mostly theoretical question; I was just curious, if I were building such a system, how it would be possible.

Say I have a complicated mechanism with hundreds of sensors. The sensors send their signals to a device with an Oracle client connected to the database. Different signals come hundreds (or thousands) of times per second (in an order, but the order is unpredictable). The device knows the order of the signals, but doesn't store it anywhere; it just calls INSERTs in that order.
Then the mechanism crashes, and I see that events A and B happened just before the crash. How do I determine which was first - A or B?
Tom Kyte
November 25, 2010 - 8:44 am UTC

that would be a single threaded process then, no concurrency issues - a simple counter would be all that is necessary - or a TIMESTAMP with fractional seconds (coupled with a simple counter in the program to avoid any chance of duplicates)
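A sketch of that suggestion (the table and column names are illustrative):

```sql
create table sensor_events
( evt_time  timestamp(9),   -- fractional seconds
  evt_seq   number,         -- simple counter maintained by the single inserting process
  signal    varchar2(100)
);

-- after a crash, the arrival order is reconstructed, with the counter breaking any ties:
select signal
  from sensor_events
 order by evt_time, evt_seq;
```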


ldsandon, November 25, 2010 - 2:42 am UTC

You should not rely on sequence values to order events. As said, in RAC "order" will have a huge impact on performance. Use a timestamp, if you can get it with enough precision.
Your application seems like one that could benefit from a product like TimesTen, too.

about sequence

BABA, December 04, 2010 - 7:31 am UTC

Sir, please help me with Oracle. Please explain in a programmatic manner, not theoretical.
Take some real-world examples.

Tom Kyte
December 07, 2010 - 9:26 am UTC

Not much on this site is theoretical, it is all practical.

I would suggest reading this book:

And another viewpoint.

Dana, December 08, 2010 - 8:51 am UTC

Mr. Kyte,

I'd like to say I find your work invaluable. The number of times I've banged my head against the wall struggling with a particular concept in Oracle Docs, only to have you demonstrate the concept clearly in a paragraph or two is priceless. I've been to your presentations as well, and every time I walk away knowing your presentation has made the conference worthwhile.

Oracle Documentation: Free

Tom's Demos: Priceless

Thank you.


A reader, January 12, 2011 - 10:44 am UTC


We have a system where we are issued a range of 8-digit strings that we must use for some orders. The first 2 digits are the year.

We store all the alphanumeric numbers/codes in a table and the application selects from it.

1) Would it work to use a sequence for this instead? For example, to issue CY95 numbers, this would issue 100,000 numbers:

create sequence issuedordnumbers_seq
start with 95000000
increment by 1
minvalue 95000000
maxvalue 95100000;

2) Would NOCACHE guarantee there are NO GAPS generated, assuming you always use the value right after selecting it?
Tom Kyte
January 12, 2011 - 11:05 am UTC

1) you would be limited to 100,000 numbers in a year. And since sequences are never ever - NOT EVER - not under any circumstance - never gap free, you will lose some of that 100,000.

2) see #1, sequences are never, not ever, NOT EVER - not even close - never ever gap free.


A reader, January 12, 2011 - 11:27 am UTC


Can you explain internally why they are never ever gap free?

I thought "NOCACHE" would do a select one by one and would not cache anything.

Is it because of the "ROLLBACK"?
Tom Kyte
January 12, 2011 - 11:42 am UTC

because once selected via nextval - that value will never be returned again.

And you will have some transactions that fail AFTER getting a nextval.

do not use nocache - that would be like setting "slow=true" in your init.ora parameter file.

Ordering of rows within a transaction

Henrik, February 16, 2011 - 7:52 am UTC

Hi, I came across this in our system:

create table Log(
ID number(9) NOT NULL,
TimeStamp date NULL,
Severity number(9) NULL,
LogText varchar(512) NULL,
/* more columns */
LogOrder number(9));

create sequence LogSEQ increment by 1 start with 1 nomaxvalue nocycle cache 20;

create or replace trigger LogTrigger
before insert on Log
for each row
begin
  select LogSEQ.NextVal into :New.LogOrder from DUAL;
end;
/

I was wondering if this needs to be changed.

If all I care about is the order rows were inserted within a transaction,
can I then assume LogOrder will be ascending in all environments?
Rows inserted by different transaction can be separated / ordered using other columns.
(If there is a problem, nobody has ever cared enough to report it).
Tom Kyte
February 16, 2011 - 11:24 am UTC

...If all I care about is the order rows were inserted within a transaction,

*probably* this will be OK. It would only be not ok if there was an external transaction/resource manager (like an XA transaction) and you were using RAC (real application clusters). In an XA transaction - you have the ability if you want - to use multiple 'transactions' to the database - each could be theoretically on a separate node, each with their own 'cache' of sequences.

But - in general - inside of a single transaction on a single node - this will be an increasing number.


mangesh, May 03, 2011 - 1:42 am UTC

I read all the information about cache, but I still do not understand cache vs. nocache. Could you explain with an example?
thank you
Tom Kyte
May 04, 2011 - 12:19 pm UTC

if you use nocache - then every time you call sequence.nextval you will cause sys.seq$ to be updated and committed.

if you use cache 20 - then every 20th time you call sequence.nextval you will cause sys.seq$ to be updated and committed.

It is a performance thing, if you don't cache the next couple of sequence values in the SGA, you will be doing an update and commit every single time you call nextval
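As a rough illustration of the difference (the sequence names are illustrative):

```sql
-- every nextval call updates and commits sys.seq$ behind the scenes:
create sequence s_nocache nocache;

-- sys.seq$ is touched only once per 1000 nextval calls:
create sequence s_cached cache 1000;
```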

sequence cache in SGA

Alwin, November 17, 2011 - 6:50 am UTC


We have defined a sequence with CACHE 20. Although we don't reboot our server, we have no RAC, ..., we often see gaps. Even selecting NEXTVAL in the same session at a 3-minute interval shows a gap!

I read in the doc that the sequence cache in the SGA may be too small to keep all our sequences. Is there anywhere some info about this cache and a way to change its size ?

-- Alwin
Tom Kyte
November 17, 2011 - 7:04 pm UTC

Since you cannot rely on gap free sequences EVER

since we control the flushing of objects in the shared pool

you cannot ever rely on a sequence being gap free.

You control the size of the cache by controlling the size of the shared pool - however before you run off and make the shared pool larger, just remember - once and for all - sequences are not gap free. Even if you make your shared pool 500TB in size - we'll still be free to age it out of the pool whenever we feel like it.

So, please, get used to the gaps.

also, what you describe is not necessarily shared pool aging either. It could easily happen just by other sessions using the sequence (and maybe even rolling back, so they never end up using the value). So, even if cached perfectly - the sequence would still exhibit gaps!

sequence cache in SGA

Alwin, November 21, 2011 - 9:10 am UTC


I was not really clear in my message. Our application is tolerant of gaps, so that is not a problem for us.

I think a gap due to the sequence being flushed out of the SGA should be the exception rather than the rule, unlike what we see on our systems. Currently it behaves like CACHE 1 for the sequence, which is not optimal.

We have already excluded a rollback problem, and each gap terminates with DBA_SEQUENCES.LAST_NUMBER. So I wondered if there is a way to tune the sequence cache in the SGA.

-- Alwin

Tom Kyte
November 21, 2011 - 2:26 pm UTC

not really, none of that is exposed.

How frequently is this actually happening?

I'm not sure what you mean by this:

and each gap terminates with DBA_SEQUENCES.LAST_NUMBER

last_number is just the current end point of the sequence. I don't know how you could analyze that number to see a "gap terminating"

sequence cache in SGA

Alwin, November 22, 2011 - 6:39 pm UTC


It may happen as frequently as every few minutes. And we see this in a pretty quiet environment with an SGA of about 1.5 GB.

Let's say my sequence's last_number is currently 1182676 with CACHE 20. Then we may observe these values being returned by nextval:

value   -- comment
1182676 -- let's start here
1182677 -- ok, just next number
1182696 -- cache seems to have flushed
1182716 -- flush again
1182717 -- here ok
1182718 -- here ok
1182736 -- flush again

No nextval with rollback has been issued, no restart of the DB.

-- Alwin
Tom Kyte
November 22, 2011 - 7:17 pm UTC

what is your shared pool utilization? AWR or Statspack can help you with that.

any chance of anyone doing any sort of DDL on the system?

do you have bind variable problems - what is your soft parse percent (statspack/awr again)


Ahmad Al-Sallal, January 01, 2012 - 4:24 pm UTC

Hi Tom,
need your help again :)
I'm trying to build a numbering solution with the following characteristics:
1) Good performance, like Oracle sequences (sort of).
2) Accurate/serialized (...max()+1...)
This is what I did:
-- created a table TABLE_SEQUENCES containing the NEXT_VALUE for each column of each table
-- a function locks the row, increments it, releases the lock, and returns the incremented value - 1
-- I cached the TABLE_SEQUENCES table

create table table_sequences
( table_name  varchar2(30) not null,
  column_name varchar2(30) not null,
  value       number not null,
  unique (table_name, column_name)
);

create or replace function get_sequence ( p_table_name varchar2, p_column_name varchar2 )
return number
is
  v_value number;
begin
  execute immediate upper('update table_sequences set value=value+1
      where table_name='''||upper(p_table_name)||''' and column_name='''||upper(p_column_name)||'''');
  if sql%rowcount <> 1 then
    raise_application_error( -20001, 'wrong table/column name' );
  end if;

  execute immediate upper('select value -1 from table_sequences where column_name ='''||upper(p_column_name)||
      ''' and table_name = '''||upper(p_table_name)||'''') into v_value;
  return v_value;
end;
/

select get_sequence('HRS_EMPLOYEES','ID') value from dual;

-- for testing I ran it from two different sessions in a loop (10,000 iterations) and the
-- serialization was great (actually ORACLE is great): if session 1 got no. 100, then
-- session 2 got no. 101, then session 3 got 102 ...

Now here is my question list:
1) What problems might it cause (multi-user support, performance, gaps, ... anything)?
2) Do you think it's a good way, or is there a better one (I believe there is :) )?
3) Performance-wise, any advice?

Tom Kyte
January 01, 2012 - 8:37 pm UTC

1) this will serialize, you already know that.

it will have gaps, you should know that. If you lock, increment, unlock - and then decide to roll back the parent transaction for some other failure (system crash, whatever) - you will have a gap.

Actually, you will have as MANY GAPS as you do with a sequence, your implementation would result in the same number of gaps!

But it would serialize, so it would be a slower way to implement a sequence that still has gaps.

The only way, repeat THE ONLY WAY to have this magic gap free sequence is to retain the lock until you commit. that way if you roll back, the increment rolls back.

2) i think it is a very very bad idea.

3) use sequences, realize there will be gaps, and if you need a truly "gap free number" assigned - assign it later in batch - when you close the books or whatever - not in real time.

by the way, why does your code use dynamic sql?

create or replace function get_sequence ( p_table_name varchar2, p_column_name 
varchar2) return number
v_value number;
v_rowcount number;
    execute immediate upper('update table_sequences set value=value+1
                       where table_name='''||upper(p_table_name)||''' and 
    if sql%rowcount <> 1 then
        raise_application_error ( -20001 , 'wrong table/column name' );
    end if;

    execute immediate upper('select value -1 from table_sequences where 
column_name ='''||upper(p_column_name)||
    ''' and table_name = '''||upper(p_table_name)||'''') into v_value;
    return v_value;

why isn't the update just:

update table_sequences set value = value+1 
where table_name = upper(p_table_name) and column_name = upper(p_column_name) 
returning value into v_value;

no dynamic sql
no need for select
neat and tidy

NOT that you should be doing that in the first place - this is a bad idea all around.

cache size?

song, January 02, 2012 - 12:57 am UTC

How should the cache size of a sequence be set? On what basis?
Tom Kyte
January 02, 2012 - 8:35 am UTC

I say "set it big, really big, hundreds. If you are doing a mass load with a sequence, set it to hundreds of thousands"

It takes the same amount of memory in the SGA to have a cache 10 as it does a cache 100000 - so having a large cache setting on a sequence is perfectly safe.

Where does sequences cached?

Manoj Kaparwan, January 06, 2012 - 7:11 am UTC

When we say CACHE 20,

where do those 20 sequence values get cached?
i.e. which part of the SGA (buffer cache, shared pool)?
Tom Kyte
January 10, 2012 - 9:33 pm UTC

they do not, we do not need to cache all 20.

All we need to keep in the cache is:

the sequence on disk was N
the cache size is M
the current value is X

As long as X is less than N+M - we just increment X when someone calls NEXTVAL.

we do not need to keep "N, N+1, N+2, ... N+M-1" in the cache; we just keep N, M and X, and increment X when someone asks for a new sequence value. When X reaches N+M, we update SEQ$ and reset N in the cache.

So, cache 1000 and cache 20 take the same amount of space in the cache.

Anand Agrawal, January 13, 2012 - 1:21 pm UTC

Hi Tom,

In a batch process there are 40 parallel processes running, and they use the sequences below. This was running on 2 instances; now it runs on 1 instance only (changed for performance tuning). Now we are seeing sequence contention. Can we avoid it by increasing the cache size to 2000 or 3000?

MAXVALUE 99999999999
CACHE 1000

MAXVALUE 99999999999
CACHE 1000

Tom Kyte
January 17, 2012 - 9:45 am UTC

how did you measure this contention?

Increasing the cache size will lower the number of log file sync waits you experience (but you would have experienced them before in multi-instance) and possible waits during the update of SEQ$ to bump the cache size (but you would have seen them before).

So, before I can answer, I need to understand what metric you are looking at.

Vikash, March 21, 2012 - 5:00 am UTC

Hi Tom,
I understand the reasons for having gaps in the values of a sequence, but I am unclear why a subsequent value would decrease - and by more than the cache size at that. I have a sequence defined as follows:
create sequence DETAILS_ID_SEQ
minvalue 100000
maxvalue 999999999999999999999999999
start with 145637------ this is the current value of sequence.
increment by 1
cache 20;

This sequence is generating values as follows:
144856 3/8/2012 1:08:52 PM
144896 3/8/2012 5:15:05 PM
144877 3/8/2012 5:20:30 PM------ decreased
144878 3/8/2012 5:24:48 PM
144857 3/8/2012 5:29:21 PM------ decreased
144858 3/8/2012 5:33:32 PM
144897 3/8/2012 5:37:18 PM------ increased with diff of 39
144898 3/8/2012 5:40:53 PM
144859 3/8/2012 5:56:34 PM------ decreased
144860 3/8/2012 6:10:35 PM
144916 3/9/2012 1:22:27 PM------ increased with huge diff.
144936 3/9/2012 1:27:54 PM
144956 3/9/2012 3:15:04 PM
144917 3/9/2012 3:27:02 PM----- decreased

Can you please help me understand the above. Thanks for your help and ever useful comments on multiple topics.
Tom Kyte
March 21, 2012 - 9:53 am UTC

are you using RAC?

and how does the application set the creation_dates. Does it start working, get a sequence number, generate some stuff here and there and everywhere and then create a record (some time later) with a creation_date and that sequence?

Or, can the application a) get a sequence, b) get BLOCKED for a period of time, and then c) insert the record

by the way, a sequence does not imply order of insertion in general. The act of generating the sequence value and the act of inserting are not necessarily one and the same. Never rely on sequences implying "order"

Vikash, March 23, 2012 - 3:46 am UTC

Yes Tom, It is RAC database.

The sequence ID and the creation_date are generated a short time apart, by two function calls invoked one after another. A function is called to get the value from the sequence. Once that value is received, another procedure is called to insert the record into the details table with the received sequence ID and sysdate.

So, can this decrease in the sequence ID be attributed to the database being RAC (multiple nodes)? Can you please explain it?

Thanks for your help!!
Tom Kyte
March 23, 2012 - 8:25 am UTC

by default a sequence is set to cache 20 (it should be higher in my humble opinion, 100's or 1000's).

So, on node 1, we will generate numbers 1..20.
On node 2, we will generate numbers 21..40.

So, you could get sequence number 40 in a process on node 2 well before you get sequence number 2 for a process on node 1 - it depends on the workloads.

In short: sequences are only assured to be unique, nothing else should be interpreted from them.
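
Tom's per-node caching point can be sketched in DDL. A minimal sketch, assuming a hypothetical sequence name (not from the thread); with the default NOORDER, each RAC instance serves its own cached range:

```sql
-- Hypothetical sequence. NOORDER is the default: each RAC instance
-- pulls its own block of CACHE values from the data dictionary, so
-- the values are unique but interleave across nodes.
CREATE SEQUENCE order_id_seq
  START WITH 1
  INCREMENT BY 1
  CACHE 1000    -- per Tom's advice: hundreds or thousands, not the default
  NOORDER;
-- Node 1 might serve 1..1000 while node 2 serves 1001..2000, so a
-- "later" insert can easily carry a "smaller" sequence value.
```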

Deepak, January 08, 2013 - 10:52 pm UTC

I have a sequence with increment 1, cache 20 and no order. There are four servers connected to the database. If I do sequence.nextval from App server 1 and App server 2 at different times, is there a possibility of the below output? I want to know if the issue is with the sequence. Even though I don't want an order in the values, an increasing value is expected when getting the next value as time increases, irrespective of server.

Time   App.Server   Sequence.nextval

  1        1              70
  2        1              71
  3        2              61
  4        2              62
  5        2              63
  6        2              64
  7        3              80
  8        2              65
Tom Kyte
January 14, 2013 - 12:43 pm UTC

do not under any circumstance assume that sequences will return monotonically increasing values.

do not assume that a record with sequence=1 is "older" than a record with sequence=200.

what you show above easily and almost always happens in a clustered environment.

it will not happen in a non-database-clustered environment, but don't count on that.

always just assume a sequence is a surrogate key (unique) and nothing more and you'll have a much better life.
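
If ordering matters, the safe pattern is to record it explicitly rather than infer it from the key. A minimal sketch with hypothetical table, column, and sequence names:

```sql
-- Carry an explicit timestamp; treat the sequence value as a
-- surrogate key and nothing more.
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  created_at TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL
);

INSERT INTO orders (order_id) VALUES (order_id_seq.NEXTVAL);

-- Order by the timestamp, never by the surrogate key:
SELECT order_id, created_at
FROM   orders
ORDER  BY created_at;
```

Note Tom's caveat above still applies: even a timestamp records insert time, not commit time, so the order rows become visible in can still differ.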

NOORDER behaviour

Stan, February 21, 2013 - 1:25 pm UTC

Hi Tom,

With the NOORDER clause specified for a sequence, will it return totally random numbers? Or will it be in sequence for the number of values cached?

For example, following sequence is created in 11g 3 node RAC.

create sequence seq_temp
start with 1
increment by 1
cache 50;

When seq_temp.nextval is executed from different nodes, will it start from 1 up to 50 for node 1, from 51 up to 100 for node 2, and from 101 up to 150 for node 3?
I want to confirm that the variation across nodes corresponds to the cache value.

Tom Kyte
February 25, 2013 - 10:32 am UTC

With NOORDER clause specified for a sequence, will it return totally random

no, not at all. It just means that in a clustered environment (more than one instance mounting and opening a database) the sequence numbers retrieved across nodes will not be ordered.

so node1 might return numbers 1-10
while node2 is returning 11-20

and then node1 might return 21-30 (while node2 is perhaps still working on 11-20 - or maybe it is doing 41-50 now)

they will be monotonically increasing on each node, but if you look at them "globally", they might not be.

NEVER, NEVER use order - never. Hugely non-scalable. If you want to kill a clustered implementation - try out ORDER and then rethink your approach entirely.

NOORDER behaviour

Stan, February 26, 2013 - 4:42 pm UTC

Thanks a lot for the reply!

So, if I have a standalone server, then my sequence will always return values in order regardless of the ORDER or NOORDER clause. Is this correct?

Tom Kyte
February 26, 2013 - 5:09 pm UTC

sort of, but remember - the commit order could be radically different.

the row with sequence=1 could have been committed (and hence 'added' to the database) long after the row with sequence = 10 was.

Using sequences to order data is a recipe for disaster, I wish we would return them in reverse order out of the cache so you would get things like

20, 19, 18, ... 1, 40, 39, 38, .... 21, ....

just to make it so people stop making the mistake of using a sequence as some sort of timestamp...

sometimes gap is not good

ty, September 24, 2013 - 9:00 pm UTC

I heard some countries (Italy?) do not allow gaps in certain serial numbers. In that case, just use NOCACHE to be sure. Otherwise, a gap of 20 is EXPECTED.


Tim Boettcher, July 08, 2014 - 8:17 pm UTC

"if you truly believe you need (and you don't, I don't care what part of the planet you are on or what planet you are from in fact...)"

A case where you need sequential numbers with no gaps is for cheque numbers. There may be other cases but I haven't encountered them in 25 years.
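
For that rare legally gap-free requirement, the usual workaround is not a sequence at all but a one-row counter table locked inside the issuing transaction. A sketch with hypothetical names; note that this serializes every issuer on one row and does not scale:

```sql
-- One-row counter, assigned inside the same transaction that creates
-- the cheque. A rollback releases the lock without consuming a
-- number, so no gap appears, at the cost of serializing all issuers.
CREATE TABLE cheque_counter (last_number NUMBER NOT NULL);
INSERT INTO cheque_counter VALUES (0);

-- In the issuing transaction:
UPDATE cheque_counter
SET    last_number = last_number + 1
RETURNING last_number INTO :new_number;  -- row stays locked until commit

INSERT INTO cheques (cheque_number) VALUES (:new_number);
COMMIT;  -- releases the lock; the next issuer proceeds
```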

Alexander, April 21, 2016 - 2:56 pm UTC


Is it possible for a "select seq.nextval from dual;" statement to block a one-row insert in another session on RAC? We just had a problem with this: the DBA reported that the session holding the lock was selecting from the sequence, and a bunch of insert sessions started to pile up. I don't believe this to be possible, but I wanted to check with the gurus first because I know there are "gotchas" with RAC and sequences.


MAXVALUE 9999999999999999999999999999 

SQL holding: select MY_SEQ.nextval from dual
SQL blocked: insert into T (CREATE_DATE, CREATED_BY, ID) values (:1 , :2 , :3 )
Chris Saxon
April 21, 2016 - 3:19 pm UTC

Yes sequences can cause a problem in RAC.

Your sequence is set to ORDER, so Oracle has to synchronize calls across the nodes to ensure the values are returned in order. As a result, sessions can block each other.

To reduce this issue, I suggest you:

- Change it to NOORDER
- Increase the cache size
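
Both suggestions map to simple ALTER statements against the sequence shown in the trace above (the cache size here is an illustrative choice):

```sql
-- Stop synchronizing sequence values across RAC nodes:
ALTER SEQUENCE my_seq NOORDER;

-- Fewer trips to the data dictionary under load:
ALTER SEQUENCE my_seq CACHE 1000;
```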


Joe, November 03, 2020 - 3:33 pm UTC

I got a good question from the application team:
What is the limit of the sequence CACHE value?

I just tested it, but couldn't reach a conclusion.

Sequence created.

Sequence created.

Sequence created.

Sequence created.

SQL> select TEST_1k.nextval from dual;


SQL> select TEST_1k.nextval from dual;


SQL> /


SQL> /


SQL> /


SQL> /
select TEST_1k.nextval from dual
ERROR at line 1:
ORA-08004: sequence TEST_1K.NEXTVAL exceeds MAXVALUE and cannot be instantiated

Sequence created.

Looks like caching is not constrained by the max value.
Just trying to understand how sequence caching behaves. Is there a limit?
Chris Saxon
November 03, 2020 - 5:34 pm UTC

The maximum value for CACHE only applies for sequences that cycle. In this case, the cache value must be smaller than the number of possible values in the cycle.

Otherwise it can be up to 28 digits long. While it doesn't make sense to have a cache value larger than the maxvalue for non-cycling sequences, it doesn't really matter.
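
The CYCLE restriction Chris describes is easy to demonstrate. A sketch with hypothetical sequence names:

```sql
-- For a cycling sequence, CACHE must be smaller than the number of
-- values in one cycle, or creation fails with
-- ORA-04013: number to CACHE must be less than one cycle.
CREATE SEQUENCE cyc_bad
  MINVALUE 1 MAXVALUE 100
  CYCLE CACHE 200;   -- fails: cannot cache more than one cycle's worth

CREATE SEQUENCE cyc_ok
  MINVALUE 1 MAXVALUE 100
  CYCLE CACHE 20;    -- fine: 20 cached values fit within the cycle
```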


Joe, November 03, 2020 - 6:08 pm UTC

Thanks Much Chris!
Yes, CYCLE gave me an error, ORA-04013.
But I was able to give a cache value of more than 28 digits. As you just mentioned, it doesn't make any sense, though it is possible.
