Question and Answer

Tom Kyte

Thanks for the question, Brij.

Asked: August 08, 2000 - 8:38 am UTC

Last updated: February 10, 2004 - 3:50 pm UTC

Version: 8i

Viewed 1000+ times

You Asked

Sir,
I have Oracle 8i installed on a Windows NT platform. For a web application we are generating a unique number through a sequence, incrementing by one with the NOCACHE option, so every time the sequence number is supposed to increment by one. But what's happening is that the number jumps at least once a day by two or more. In that case we always have to reset the sequence.
Please suggest a solution.

and Tom said...



Sequences do not generate "gap free" series of numbers. The only way to do that is in a highly serialized, blocking fashion (hence, if you really truly believe you need it -- you might consider rethinking your need for it).

What is happening in your case is some session has issued:

select seq.nextval from dual;

and then rolled back (or otherwise aborted the transaction and never inserted this value into a table permanently). The sequence values are not "given back" when you roll back -- they are generated and then gone forever.
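The effect is easy to reproduce; a minimal sketch (assuming a NOCACHE sequence named seq exists, and illustrative values in the comments):

```sql
-- assuming: create sequence seq nocache;
select seq.nextval from dual;   -- say this returns 5
rollback;                       -- the rollback does NOT give 5 back
select seq.nextval from dual;   -- returns 6; value 5 is gone forever
```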

Again -- if you really need a gap free sequence of numbers, the only way to do that in any system is in a serialized, blocking fashion. You'll have but one transaction at a time taking place. I would reconsider the need for such a thing.
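For completeness, a gap-free counter is normally built on a one-row table rather than a sequence. A hedged sketch (table and column names are made up here) that also shows why it serializes -- every transaction must wait for the previous one to commit or roll back:

```sql
-- hypothetical one-row counter table
create table gapfree_counter ( id number not null );
insert into gapfree_counter values ( 0 );

-- inside each transaction that needs the next value:
update gapfree_counter set id = id + 1;
-- the row is now locked until commit/rollback, blocking all other takers
select id from gapfree_counter;
```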



Comments

Reset Sequence?

Ashish pandit, November 27, 2002 - 5:39 am UTC

Why doesn't Oracle come up with a built-in reset_sequence procedure? Is there some RDBMS constraint preventing it?

Tom Kyte
November 27, 2002 - 7:38 am UTC

No, it just isn't something there is widespread need for.

Give me the business case whereby, in a production system, you are resetting a sequence on an ongoing basis.

sequence jump by 20

A reader, April 21, 2003 - 4:49 pm UTC

Hi Tom:

thank you for your help always.
I created a table with an id column, and a sequence seq_id as follows:

CREATE sequence seq_id
INCREMENT BY 1
START WITH 1
MAXVALUE 1.0E28
NOCYCLE
/

The sequence is called through a procedure, but somehow the values I got jump by 20:

1
21
41
42
61
81
82
101
121
141
142
161
181
201
202
203
204
205
206
221
222
223
241
242
243
244
261
262
281
282
301
302
303
304
321
322
323
324
341
361
381
382
383
384
385
386
401
402
403
404
405
406
407
408
409
410
411
412
421
441
442
461
462
463
481
501
502
503
521
541
542
543
544
545
546
547
548
561
581
582
583
601
602
621
641
642
643
644
645
646

thank you,
an
(our Oracle DB is 8.1.7)

Tom Kyte
April 21, 2003 - 10:37 pm UTC

very small shared pool?

but even so -- do the math and you tell me -- if you generate 1,000 per second, every second -- how many years would it take to exhaust the sequence if it skips 20 EVERY time?
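Working that math out in SQL (an illustrative sketch: 1,000 calls per second, each skipping 20 values, against a MAXVALUE of roughly 1E28 as in the CREATE above):

```sql
select 1e28 / (1000 * 20) / 60 / 60 / 24 / 365 as years_to_exhaust
  from dual;
-- on the order of 1.6E+16 years
```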

Why your sequence sometimes jumps by 20

Dan Kefford, April 22, 2003 - 6:42 pm UTC

Straight from the Oracle8i SQL reference guide for the CREATE SEQUENCE statement:

<cite>
Sequence Parameters

.
.
.

CACHE
integer

Specify how many values of the sequence Oracle preallocates and keeps in memory for faster access. This integer value can have 28 or fewer digits. The minimum value for this parameter is 2. For sequences that cycle, this value must be less than the number of values in the cycle. You cannot cache more values than will fit in a given cycle of sequence numbers. Therefore, the maximum value allowed for CACHE must be less than the value determined by the following formula:

(CEIL (MAXVALUE - MINVALUE)) / ABS (INCREMENT)

If a system failure occurs, all cached sequence values that have not been used in committed DML statements are lost. The potential number of lost values is equal to the value of the CACHE parameter.


NOCACHE

Specify NOCACHE to indicate that values of the sequence are not preallocated.

If you omit both CACHE and NOCACHE, Oracle caches 20 sequence numbers by default.
</cite>

That last bit is the critical detail.
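So the jumps of 20 come from the default cache of 20 being aged out of the shared pool (or lost at shutdown). If you want to tune this, the cache can be made explicit -- a sketch, using the seq_id sequence from the example above:

```sql
-- larger cache: better performance, potentially larger gaps on a crash
alter sequence seq_id cache 100;

-- or no cache: smaller gaps, worse performance -- and still NOT gap-free
alter sequence seq_id nocache;
```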


Tom Kyte
April 22, 2003 - 9:41 pm UTC

and setting nocache is like setting:


fast = false
slow = true
I_want_to_be_as_inefficient_as_possible = Please




NOCACHE

Ed, April 23, 2003 - 4:52 am UTC

I thought I'd do a little benchmarking (makes a change from you doing it :-))

seqtest.sql:
-----------------------------------------------------------
CREATE SEQUENCE nocacheseq NOCACHE;
CREATE SEQUENCE cache10seq CACHE 10;
CREATE SEQUENCE cache100seq CACHE 100;
CREATE SEQUENCE cache1000seq CACHE 1000;
CREATE SEQUENCE cache10000seq CACHE 10000;

DECLARE
n number;
x number;
BEGIN
n := dbms_utility.get_time;
FOR i IN 1..100000 LOOP
  SELECT nocacheseq.nextval INTO x FROM DUAL;
END LOOP;
dbms_output.put_line( 'NOCACHE : '||((dbms_utility.get_time-n)/100)||' seconds....' );
n := dbms_utility.get_time;
FOR i IN 1..100000 LOOP
  SELECT cache10seq.nextval INTO x FROM DUAL;
END LOOP;
dbms_output.put_line( 'CACHE 10 : '||((dbms_utility.get_time-n)/100)||' seconds....' );
n := dbms_utility.get_time;
FOR i IN 1..100000 LOOP
  SELECT cache100seq.nextval INTO x FROM DUAL;
END LOOP;
dbms_output.put_line( 'CACHE 100 : '||((dbms_utility.get_time-n)/100)||' seconds....' );
FOR i IN 1..100000 LOOP
  SELECT cache1000seq.nextval INTO x FROM DUAL;
END LOOP;
dbms_output.put_line( 'CACHE 1000 : '||((dbms_utility.get_time-n)/100)||' seconds....' );
FOR i IN 1..100000 LOOP
  SELECT cache10000seq.nextval INTO x FROM DUAL;
END LOOP;
dbms_output.put_line( 'CACHE 10000 : '||((dbms_utility.get_time-n)/100)||' seconds....' );
END;
/

DROP SEQUENCE nocacheseq;
DROP SEQUENCE cache10seq;
DROP SEQUENCE cache100seq;
DROP SEQUENCE cache1000seq;
DROP SEQUENCE cache10000seq;
-----------------------------------------------------------
SQL> @seqtest

Sequence created.


Sequence created.


Sequence created.


Sequence created.


Sequence created.

NOCACHE : 82.25 seconds....
CACHE 10 : 15.47 seconds....
CACHE 100 : 8.05 seconds....
CACHE 1000 : 15.32 seconds....
CACHE 10000 : 22.54 seconds....

PL/SQL procedure successfully completed.


Sequence dropped.


Sequence dropped.


Sequence dropped.


Sequence dropped.


Sequence dropped.

Interesting that CACHE <largevalue> doesn't necessarily equate to <fast>.  Seems like Oracle's default of 20 isn't too bad.

However, in a system I once developed I'll admit to creating a NOCACHE sequence ...

It was a call tracking system, where calls are only ever entered manually, so the performance overhead is not noticeable to the user. Also, the sequence is used to refer to each call, so if the call number is 2671 (for example) it is less prone to error or mistyping than if the number is up to 5645738 (or whatever). Thirdly, the users can get a general "feel" for how many calls are coming in from the current call number. Originally, the users wanted a guarantee that there'd be no gaps (sigh), but I talked them out of it. I think their "business reason" was "we'd like it".

Ed

 

Tom Kyte
April 23, 2003 - 7:34 am UTC

the test is flawed.

measuring elapsed time alone is excessively misleading; you need to look at lots of other things... so -- let's see some more information:

first, I see different timing results (single user system, i'm the only CPU user -- big impact on test)

ops$tkyte@ORA920LAP> drop sequence s1;

Sequence dropped.

ops$tkyte@ORA920LAP> drop sequence s2;

Sequence dropped.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> create sequence s1 cache 20;

Sequence created.

ops$tkyte@ORA920LAP> create sequence s2 cache 1000000;

Sequence created.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> delete from run_stats;

0 rows deleted.

ops$tkyte@ORA920LAP> commit;

Commit complete.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> declare
  2      l_start number;
  3      l_run1  number;
  4      l_run2  number;
  5      l_x     number;
  6  begin
  7      insert into run_stats
  8      select 'before', stats.* from stats;
  9
 10      l_start := dbms_utility.get_time;
 11      for i in 1 .. 100000
 12      loop
 13          select s1.nextval into l_x from dual;
 14      end loop;
 15      l_run1 := (dbms_utility.get_time-l_start);
 16      dbms_output.put_line( l_run1 || ' hsecs' );
 17
 18      insert into run_stats
 19      select 'after 1', stats.* from stats;
 20      l_start := dbms_utility.get_time;
 21      for i in 1 .. 100000
 22      loop
 23          select s2.nextval into l_x from dual;
 24      end loop;
 25      l_run2 := (dbms_utility.get_time-l_start);
 26      dbms_output.put_line( l_run2 || ' hsecs' );
 27      dbms_output.put_line
 28      ( 'run 1 ran in ' || round(l_run1/l_run2*100,2) ||
 29        '% of the time' );
 30
 31      insert into run_stats
 32      select 'after 2', stats.* from stats;
 33  end;
 34  /
1323 hsecs
898 hsecs
run 1 ran in 147.33% of the time

PL/SQL procedure successfully completed.


<b>Now, looking at system statistics and latching:</b>


ops$tkyte@ORA920LAP> select a.name, b.value-a.value run1,
  2         c.value-b.value run2,
  3         ( (c.value-b.value)-(b.value-a.value)) diff
  4    from run_stats a, run_stats b, run_stats c
  5   where a.name = b.name
  6     and b.name = c.name
  7     and a.runid = 'before'
  8     and b.runid = 'after 1'
  9     and c.runid = 'after 2'
 10     and (c.value-a.value) > 0
 11     and (c.value-b.value) <> (b.value-a.value)
 12   order by abs( (c.value-b.value)-(b.value-a.value))
 13  /


STAT...session logical reads       320207     300046     -20161
LATCH.shared pool                  225174     200165     -25009
LATCH.row cache objects             30048         46     -30002
LATCH.enqueue hash chains           30037         23     -30014
LATCH.library cache pin            440136     400120     -40016
LATCH.library cache                560326     500306     -60020
STAT...session pga memory max       65536          0     -65536
LATCH.cache buffers chains         666008     600232     -65776
STAT...recursive calls             170039     100045     -69994
STAT...redo size                  4367648      28328   -4339320

78 rows selected.

<b>the second one generates a lot less redo, does less work, and latches a whole lot less (remember: a latch is a lock, locks are serialization devices, serialization devices inhibit scalability....)</b>


So, in a multi-user environment or large data load -- having higher cache values will be more scalable, more performant.
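The run_stats/stats harness used above is not a built-in. A commonly used definition (an approximation of Tom's runstats setup -- check your own copy of it) looks like:

```sql
-- view combining session statistics and latch gets under one name column
create or replace view stats
as
select 'STAT...' || a.name name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
union all
select 'LATCH.' || name, gets
  from v$latch;

-- holding table for before/after snapshots
create table run_stats
( runid varchar2(15),
  name  varchar2(80),
  value int );
```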


Hopefully a system would not require a user to type in such a number? I mean, where would the number come from in the first place? These surrogate keys are used to tie the data together but are rarely entered manually.
 

Very interesting

Ed, April 23, 2003 - 8:26 am UTC

I'll certainly take that into account when developing a single-user, large-data-load system I have coming up. I'll benchmark a more realistic scenario then.

So what does my flawed (and hastily created) test indicate? I understand that measuring elapsed times is very dodgy, but of course, in the real world the actual elapsed time is precisely what you're interested in.

<quote> Hopefully a system would not require a user to type in such a number? </quote>

Well, it might not be the system requiring the user to do something - rather that the user required that the system provided this facility (as in the real example that I worked on). Manager calls to worker "Go and have a look at call 5438, please", for example.

Ed


D'oh - stupid me

A reader, April 23, 2003 - 8:30 am UTC

Rereading my own test, it indicates that I didn't reset my time variable, n.

Sorry for wasting your time

Ed

Tom Kyte
April 23, 2003 - 7:01 pm UTC

doh! didn't even see that one ;)

nocache

A reader, August 07, 2003 - 9:11 pm UTC

Tom
Agreed on nocache -- what is the practical cache size you recommend? You benchmarked one with 1,000,000 as the cache size -- is that reasonable in a runtime environment?

Thanx!

Tom Kyte
August 10, 2003 - 11:22 am UTC

sure, as long as you don't mind "losing" 1,000,000 sequences in the event of a shutdown abort/crash.

don't worry about running out of sequences.  Even if you shutdown abort once a second, it would take:

ops$tkyte@ORA920> select (rpad( '9',27,'9' ) / 1000000)/60/60/24/366 from dual;

(RPAD('9',27,'9')/1000000)/60/60/24/366
---------------------------------------
                             3.1623E+13

that many years to run out. 

thanx Tom!

A reader, August 15, 2003 - 9:48 pm UTC

"don't worry about running out of sequences. "

But do we have to be concerned about memory consumption in the SGA in any way, since sequences get cached there? Probably not -- since they are just numbers, each taking a very small number of bytes?


Tom Kyte
August 16, 2003 - 8:41 am UTC

correct, a sequence is a small thing. My comment was about running out of values in a sequence -- even with really, really large gaps, you won't run out of values.
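You can see how small the footprint is from the dictionary: a quick check (assuming you own the sequences in question):

```sql
-- each sequence is one small dictionary row; CACHE n does not store n numbers,
-- just the boundary of the pre-allocated range
select sequence_name, cache_size, last_number
  from user_sequences;
```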

Help Please

A reader, February 10, 2004 - 12:56 pm UTC

Hi, Tom,
We currently use NUMBER sequences as the order number for all types of products (which works perfectly). Now things are going to change: for some product types the order number will be numeric, for others alphanumeric. And it could change again in the future, but we need to keep the production DBs up and running no matter how it changes.
What we plan to do is: change the orderNumber type from NUMBER to VARCHAR2(80); create a table holding product types, categories, orderID, orderIDType, ...; and reference this table based on the product type and category whenever we create a new order (generating new order numbers, either numeric or alphanumeric). The problem: the alphanumeric order numbers are fixed length, say only 4 characters like A001, A002 .. A999 and then B001 ... One method is to create a "format" string per product type, such as "11ADK23N09", meaning: position 1, length 1, alphabetic, range D to K; position 2, length 3, numeric, range 0 to 9 (I hope you see the idea). But do you think that is awkward? Do you have some good methods in your crystal ball to handle this while keeping the system flexible for future changes?

Thanks a million,


Tom Kyte
February 10, 2004 - 3:50 pm UTC

You are basically saying "our primary key will be some string."

So, fine -- use a VARCHAR2 and put into it anything you want. How you generate that string (using whatever algorithm you want) is pretty much up to you.

(Me -- I don't like it; the number 515432431 is just fine. If you are trying to encode MEANING into the key, you really are saying "we have a composite key" and should not be putting it into a single field like that.)
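The composite-key point can be sketched as follows (table and column names here are hypothetical): store the parts separately and format the display value on the way out, rather than encoding meaning into a single string key:

```sql
-- hypothetical: a one-character type code plus a per-type numeric counter
create table orders
( order_type varchar2(1) not null,   -- e.g. 'A', 'B', ...
  order_no   number      not null,   -- populated from a sequence
  constraint orders_pk primary key ( order_type, order_no )
);

-- format "A001"-style identifiers for display only:
select order_type || to_char( order_no, 'FM000' ) as display_order_no
  from orders;
```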