

Question and Answer

Tom Kyte

Thanks for the question, M..

Asked: April 21, 2001 - 1:26 pm UTC

Answered by: Tom Kyte - Last updated: August 02, 2012 - 9:15 am UTC

Category: Database - Version: 8.1.5

Viewed 10K+ times!

You Asked

Hi Tom,

Sometimes, while different users are using an application that uses a sequence number, some numbers are missed -- sometimes gaps of 5 to 10 numbers (I mean maybe up to 10 numbers missing). I know those users can delete those records.

How does it happen? I know that if you have cached numbers and the server goes down you may lose those numbers, but how does it happen when there is nothing like that?

And what about if 2 or 4 users access NEXTVAL from the same sequence -- will there be any deadlock? And how is it resolved?


and we said...

Sequences will never generate a gap free sequence of numbers.

If someone calls seq.nextval and doesn't insert the value somewhere (or does insert it and later rolls back their transaction) that number is GONE.
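A minimal sketch of that effect (the sequence and table names here are illustrative, not from the original thread):

```sql
-- any sequence behaves this way; names are made up for illustration
create sequence demo_seq start with 1 increment by 1;
create table demo_t ( id number primary key );

insert into demo_t values ( demo_seq.nextval );   -- consumes 1
rollback;                                         -- 1 is gone for good

insert into demo_t values ( demo_seq.nextval );   -- gets 2, leaving a gap
commit;

select id from demo_t;   -- only 2 is there; 1 will never be reissued
```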

Sequences are database objects and may be aged out of the shared pool like everything else -- it doesn't take a database shutdown to produce a gap.

You should never count on a sequence generating anything even close to a gap free sequence of numbers. They are a high speed, extremely scalable multi-user way to generate surrogate keys for a table.

There is never a chance of a deadlock on a sequence -- they are designed to be multi-user and highly scalable. They remove the concurrency related issues normally associated with unique key generation.

follow up to comment below

He said he knew that when the database was shutdown he would lose cached entries, he asked why he might see that without a shutdown -- that is what I answered.

NOT caching the sequence is not a good response. That obviates much of the speed of the sequence -- understanding that sequences do not generate gap-free sequences of numbers is the right answer. A rollback will introduce a gap -- you must be prepared to live with it. My answer is always "set the cache high, go for concurrency, live with the fact that there will be gaps".

followup to comment two below

A cache size of 500 or even 1,000 would not be unreasonable in that case. You'll avoid the recursive transaction that happens every time the sequence runs out of cached entries.
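For example (the sequence name is illustrative; CACHE can be set at creation or raised later):

```sql
-- create with a large cache up front
create sequence dw_key_seq cache 1000;

-- or raise the cache on an existing sequence
alter sequence dw_key_seq cache 1000;
```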

followup to comment three below

Good point -- i should have made that one ;)

the "cache" is simply a pair of numbers -- the start of the cache and the end of the cache. If you cache 2 or cache 2,000 -- it is the same amount of space.

followup to comment 6 below

(i can't let connor get the last word in here ;)

Connor's comment below shows how to pin a sequence upon startup. The "de-pin" would be similar in a shutdown trigger.

However, my 2 cents here is..... this is a lot of code in an attempt to circumvent something that is meaningless. Contiguous sequences of numbers are pretty much impossible with sequences (it only takes one rollback -- and those will happen). This is a lot of time spent on something that should really just be ignored altogether. You can get sort of close to having a gap-free sequence, but close only counts in horseshoes and hand grenades. You cannot rely on it being gap free (and it will not be gap free)....

and you rated our response

  (14 ratings)



yes it happens

April 23, 2001 - 4:03 am UTC

Reviewer: ashraf gamak from kuwait

Yes Tom, it happens when the sequence is cached: when the database shuts down, the gap begins because some numbers are still in the cache, so he must not use a cached sequence.

How to determine ideal cache size?

July 20, 2001 - 12:14 pm UTC

Reviewer: Paul Dong from New York

Tom, noting your recommendation to "set the cache high", what would you consider to be an ideal cache size?

We use sequences to generate surrogate keys for facts and dimensions stored in the data warehouse. It would be normal for the process to need thousands of sequence numbers at a time (and more when the initial load is occurring). Rather than avoiding gaps, my goal is maximizing throughput.


and to add to Tom's last comment...

July 20, 2001 - 1:32 pm UTC

Reviewer: Connor from UK

It's often thought that a large sequence cache somehow uses more memory, but this is not the case -- a large cache is (very) good.

Really Good Point (last one)

July 22, 2001 - 10:26 pm UTC

Reviewer: sreenu from singapore

Thanks Tom for expanding that point. Great logic.

How can I avoid gap while a sequence ages out?

July 22, 2001 - 11:08 pm UTC

Reviewer: hahaer from China

I created a sequence with cache size 500. From 9pm to 10pm this sequence is used frequently; outside that period it is used little. So every day a gap of about 420 to 450 numbers is created because the sequence ages out of the library cache. How can I avoid this gap for this sequence? Thanks in advance.

Response to comment 5 (sequence aging out)

July 23, 2001 - 3:36 am UTC

Reviewer: Connor from UK

You can keep a sequence in the shared pool using DBMS_SHARED_POOL.KEEP. Keep the sequence using a startup trigger. When you need to shut the database, on a shutdown trigger, make the sequence nocache then back to cache. This "unloads" it from the shared pool and minimises (not eliminates) the sequence number gaps.
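A sketch of that approach, assuming a sequence owned by OPS called MY_SEQ with CACHE 500 (names and cache size are illustrative; DBMS_SHARED_POOL must be installed, and the 'Q' flag denotes a sequence). Note that shutdown triggers only fire on a clean shutdown, so this minimises gaps rather than eliminating them:

```sql
-- pin the sequence into the shared pool when the instance comes up
create or replace trigger pin_seq_on_startup
after startup on database
begin
    sys.dbms_shared_pool.keep( 'OPS.MY_SEQ', 'Q' );
end;
/

-- before a clean shutdown, cycle NOCACHE/CACHE to flush the
-- unused cached values back, then the pin is moot
create or replace trigger unpin_seq_on_shutdown
before shutdown on database
begin
    execute immediate 'alter sequence ops.my_seq nocache';
    execute immediate 'alter sequence ops.my_seq cache 500';
end;
/
```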

Thanks Connor

July 23, 2001 - 11:41 am UTC

Reviewer: Gururaj Kulkarni from Norcross GA,USA

For getting more from Tom :) and sharing good info with us.

Especially caching a lot of numbers

July 23, 2001 - 8:36 pm UTC

Reviewer: Pichaimani Balaubramanian from San Jose, CA USA

Was a real eye opener

Metalink implies non-gap sequence

September 02, 2003 - 10:01 am UTC

Reviewer: A reader

Well, they're wrong of course, but Note:62002.1 says:
"...if an application requires a sequence which never skips any numbers, a non-cached sequence must be used. "
which kind of implies that you can actually achieve a non-gap sequence, doesn't it?

Tom Kyte


September 02, 2003 - 10:48 am UTC

it should say

"If an application requires a sequence which never skips any numbers, a sequence CANNOT BE USED"

non-cached will have gaps - i can 100% assure you of that.

sort of reserving numbers....

February 20, 2004 - 9:13 am UTC

Reviewer: Mike from England

Would it be possible to reserve numbers in a sequence?

My problem is that we are developing an "off line" version of our app. In this off-line version users will be able to insert records into a table that will be synchronised up, and the PK of this table is taken from a sequence...

I was thinking of doing some cheeky manipulation of the "increment by" of the sequence along the lines of...

when about to go off line...

1) alter sequence my_seq increment by 1000;

2) select my_seq.nextval into my_min from dual;

3) select my_seq.nextval into a_dummy from dual;

4) alter sequence my_seq increment by 1;

5) create sequence my_local_seq start with my_min increment by 1 maxvalue my_min + 999;

If the my_local_seq runs out of values then an ORA-08004 is what we want.
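The bounded local sequence in step 5 would have to be built with dynamic SQL, since DDL cannot reference a variable like my_min directly. A sketch with a made-up literal value (Tom rejects the overall scheme below, but the MAXVALUE/ORA-08004 behaviour itself is real):

```sql
-- suppose step 2 returned my_min = 5000; build this statement dynamically
create sequence my_local_seq
    start with   5000
    increment by 1
    maxvalue     5999   -- my_min + 999
    nocycle;

-- the 1001st call fails with:
--   ORA-08004: sequence MY_LOCAL_SEQ.NEXTVAL exceeds MAXVALUE
--   and cannot be instantiated
select my_local_seq.nextval from dual;
```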

This seems a little messy though, any better ideas? Oh this is on 9i R2 and we'll be using Oracle Replication for the sync'ing if that helps.



Tom Kyte


February 20, 2004 - 10:07 am UTC

that would not work in a multi-user environment.

imagine what happens when some other session(s) touches the sequence at time 1.1, 2.1, 3.1....

A preferred method would be

a) decide how many offline systems you will have (say 1000)
b) multiply that by 10 (so 10,000) just to be "safe"
c) on the production site, "create sequence my_seq start with 1 increment by 10000"

d) on each of the remotes: "create sequence my_seq start with 2 increment by 10000", start with 3, start with 4, and so on. (You could maintain a table holding the numbers 2 ... 10,000; upon "install" of the application onto a remote site, you "select * from t for update", fetch the first row, delete it, commit, and then create the sequence using the number you just got.)
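A sketch of steps (c) and (d), assuming the padded site count of 10,000 from step (b):

```sql
-- on the production site: generates 1, 10001, 20001, ...
create sequence my_seq start with 1 increment by 10000;

-- on remote site #1: generates 2, 10002, 20002, ...
create sequence my_seq start with 2 increment by 10000;

-- on remote site #2: generates 3, 10003, 20003, ...
create sequence my_seq start with 3 increment by 10000;
```

Each site draws from a disjoint residue class, so the synchronised rows can never collide on the PK.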


February 20, 2004 - 10:40 am UTC

Reviewer: A reader

I hadn't thought of that, very cunning...

I thought that, aside from the locking of the sequence while I alter it, I would be "safe" from users coming in at 1.1, 2.1, 3.1... Is that what you mean?

say the sequence is currently at 1
1.0 I alter the sequence to increment by 1000
1.1 they get a value that is now 1001
2.0 I get 2001 and call it my minimum - the nextval is now 3001
2.1 they get 3001 - this is fine, I won't use it; in fact I
3.0 get the next value, in case no one else has, to ensure the sequence "bumps along" (to 4001 in this case)
3.1 they get 5001
4.0 I change the sequence back down to increment by 1
4.1 They get 5002.

It doesn't seem to matter how many people "jump in" between my X.0 steps; I can be safe in the knowledge that no one can use the values between what I get in step 2.0 and that value + 1000 (or whatever I changed the increment by to).

That said I think I'll go with your solution, the number of remote sites will be around 50... ;)

Tom Kyte


February 20, 2004 - 12:56 pm UTC

you cannot "lock" a sequence. right after you altered it (step 1) anyone could and would be able to select it again.

You are right -- since you only use min_val, you would be "safe" but -- think of the invalidations you would be doing (i'm dead against DDL in a real system)

create sequence my_seq;
alter session set sql_trace=true;
select my_seq.nextval from dual insql;
select my_seq.nextval from dual insql;
alter sequence my_seq increment by 1000;
select my_seq.nextval from dual;
alter sequence my_seq increment by 1;
select my_seq.nextval from dual insql;
select my_seq.nextval from dual insql;

tkprof with aggregate=no will show (dual is aliased as "insql" so those four queries are textually distinct from the middle, unaliased one):

select my_seq.nextval from dual insql

Misses in library cache during parse: 1
select my_seq.nextval from dual insql

Misses in library cache during parse: 0
select my_seq.nextval from dual insql

Misses in library cache during parse: 1 <<<==== hard parse
select my_seq.nextval from dual insql

Misses in library cache during parse: 0

Sequence Cache

May 16, 2005 - 9:33 am UTC

Reviewer: Anne from MN

This is awesome! If I had a sequence with a cache of say 1000, when I select myseq.nextval from dual , it would get 1..1000 loaded up. A subsequent database shutdown abort would lose 2 .. 1000, right ? By using a higher cache, I am losing more numbers. Why would you still recommend setting the cache high ?

Tom Kyte


May 16, 2005 - 12:54 pm UTC

who cares if you lose 1,000 numbers?  even if you generate numbers at the rate of 1,000,000 per second:

ops$tkyte@ORA9IR2> select (to_number(rpad('9',27,'9'))/1000000)/60/60/24/366 from dual;

it would take that many years (roughly 3.2 * 10^13 -- about 32 trillion) to exhaust it. what's a few thousand here or there?

by using a higher cache size you possibly get much improved performance;
by not using it, you do not.

sequences have gaps (that is irrefutable), and it really doesn't matter whether those gaps are 1,000 numbers wide or 10.

May 16, 2005 - 2:19 pm UTC

Reviewer: Anne from MN

Thanks Tom for the explanation!

sequence cache size

August 02, 2012 - 5:59 am UTC

Reviewer: Vineeth.M from india

Hi Tom,

We got the below warnings from running the raccheck script, so please help me with this situation.

## WARNING SQL Check SYS.IDGEN1$ sequence cache size < 1,000 All Databases View
** how to set this sequence cache size?

## Parameter Check pre_page_sga = true PORTALDB1, PORTALDB2 View
** how can I set pre_page_sga=false? Our db is running on an spfile; please provide the exact command to set this.

## OS Check init.cssd open files limit (ulimit -n ) is NOT set to recommended value >= 65536---> it was set to unlimited..
bash-3.2$ ulimit -n
Tom Kyte


August 02, 2012 - 9:15 am UTC

ignore anything about SYS objects, you may only modify them with the assistance of support. If you want to discuss that one, please open a service request.

your DBA would know how to change a parameter setting in their sleep (if they do not, you need a DBA). Please have them take care of this, they may have set it for a reason.