Question and Answer

Tom Kyte

Thanks for the question, Riaz.

Asked: April 28, 2003 - 11:33 am UTC

Last updated: January 10, 2011 - 8:22 am UTC

Version: 8.1.7

Viewed 1000+ times

You Asked

Hello Tom !

I have a problem for you (I am always happy when I create problems for you, since it increases my knowledge).

Consider the following (I want to find missing shippingbill #s w.r.t. collectorate_code and Month/Year, i.e. to_char(shippingbill_date,'mmyyyy')):

cr@STARR.LHR> desc custom_sb_master
Name Null? Type
----------------------- -------- ----------------
COLLECTORATE_CODE NOT NULL VARCHAR2(4)
SHIPPINGBILL_NO NOT NULL NUMBER(7)
SHIPPINGBILL_DATE NOT NULL DATE

COLL SHIPPINGBILL_NO Month
==== =============== =======
KEXP 85070 022003
==> here shippingbill_no 85071 is missing
KEXP 85072 022003
KEXP 85073 022003
KEXP 85074 022003
KEXP 85075 022003
KEXP 85076 032003
KEXP 85077 032003
==> here shippingbill_no 85078 & 85079 are missing
KEXP 85080 022003
KEXP 85081 022003
LEXP 5774 022003
==> here shippingbill_no 5775 is missing
LEXP 5776 022003
LEXP 5777 022003
LEXP 5778 022003
LEXP 5779 022003
LEXP 5780 022003
LEXP 5781 022003
LEXP 5782 022003
LEXP 5783 022003
LEXP 5784 022003
LEXP 5785 022003

So I want my answer to look like this:

COLL SHIPPINGBILL_NO Month
==== =============== =======
KEXP 85071 022003
KEXP 85078 032003
KEXP 85079 032003
LEXP 5775 022003

I read your article about finding missing serial #s but couldn't understand where to put the condition for collectorate and month/year.

Waiting for your expert advice.

Riaz

and Tom said...

ops$tkyte@ORA920> select code, sdate,
2 sno+1 first_missing,
3 decode( next_sno-1, sno+1, to_number(null), next_sno-1 ) last_missing
4 from (
5 select code, trunc(sdate,'mm') sdate, sno,
6 lag(sno) over (partition by code,trunc(sdate,'mm') order by sno) last_sno,
7 lead(sno) over (partition by code,trunc(sdate,'mm') order by sno) next_sno
8 from t
9 )
10 where nvl(next_sno,sno+1) <> sno+1
11 /

CODE SDATE FIRST_MISSING LAST_MISSING
---- ------ ------------- ------------
KEXP 022003 85071
KEXP 022003 85078 85079
LEXP 022003 5775

ops$tkyte@ORA920>



gets you the necessary information.
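For readers who want to see the mechanics outside SQL*Plus, the LAG/LEAD gap logic can be sketched in plain Python. This is only an illustration of the same idea; the `missing_ranges` helper name and the sample tuples are hypothetical, not part of the thread:

```python
from itertools import groupby
from operator import itemgetter

def missing_ranges(rows):
    """rows: (code, month, sno) tuples. Returns one
    (code, month, first_missing, last_missing) tuple per gap, mirroring
    the LEAD(sno) OVER (PARTITION BY code, month ORDER BY sno) query."""
    gaps = []
    for key, grp in groupby(sorted(rows), key=itemgetter(0, 1)):
        snos = [r[2] for r in grp]
        for cur, nxt in zip(snos, snos[1:]):   # cur plays sno, nxt plays next_sno
            if nxt != cur + 1:                 # hole: cur+1 .. nxt-1 are missing
                gaps.append(key + (cur + 1, nxt - 1))
    return gaps
```

As in the SQL, each partition (code, month) is scanned in sno order and a gap is reported whenever the next value is not exactly one more than the current.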


Comments

great

Riaz Shahid, April 29, 2003 - 3:11 am UTC

That's simply wonderful and great.

Riaz

Another Question

Riaz Shahid, April 29, 2003 - 10:05 am UTC

Hello Tom !

Consider:

Note that the PK for this table is collectorate_code+SB_Date+SB_No

cr@STARR.LHR> select * from
(
select code, sdate,
sno+1 first_missing,
decode( next_sno-1, sno+1, to_number(null), next_sno-1 ) last_missing
from (
select collectorate_code code, shippingbill_date sdate, shippingbill_no sno,
lag(shippingbill_no) over (partition by collectorate_code,trunc(shippingbill_date,'mm') order by shippingbill_no) last_sno,
lead(shippingbill_no) over (partition by collectorate_code,trunc(shippingbill_date,'mm') order by shippingbill_no) next_sno
from custom_Sb_master
)
where nvl(next_sno,sno+1) <> sno+1
)
where first_missing>last_missing
/

CODE SDATE FIRST_MISSING LAST_MISSING
==== ========= ============= ============
MEXP 08-NOV-01 1864 1862
MEXP 10-NOV-01 1877 1875

Elapsed: 00:19:1174.32

Why is first_missing greater than last_missing? Shouldn't it be <= last_missing?

Regards

Tom Kyte
April 29, 2003 - 10:33 am UTC

something I did not anticipate.  I assumed SNO would be unique -- yours must not be:

ops$tkyte@ORA920> create table t ( code varchar2(4), sdate date, sno number );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t values ( 'abcd', sysdate, 55 );

1 row created.

ops$tkyte@ORA920> insert into t values ( 'abcd', sysdate, 55 );

1 row created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select code, sdate,
  2        sno+1 first_missing,
  3        decode( next_sno-1, sno+1, to_number(null), next_sno-1 ) last_missing
  4    from (
  5  select code, trunc(sdate,'mm') sdate, sno,
  6   lag(sno) over (partition by code,trunc(sdate,'mm') order by sno) last_sno,
  7   lead(sno) over (partition by code,trunc(sdate,'mm') order by sno) next_sno
  8    from t
  9         )
 10   where nvl(next_sno,sno+1) <> sno+1
 11  /

CODE SDATE     FIRST_MISSING LAST_MISSING
---- --------- ------------- ------------
abcd 01-APR-03            56           54



that'll happen when you have duplicate SNO's (the fact they are two apart led me down that path -- SNO+1 NEXT_SNO-1 -- if SNO=NEXT_SNO, you would get the above...)

You could add:

ops$tkyte@ORA920> select code, sdate,
  2        sno+1 first_missing,
  3        decode( next_sno-1, sno+1, to_number(null), next_sno-1 ) last_missing,
  4            cnt
  5    from (
  6  select code, trunc(sdate,'mm') sdate, sno,
  7   lag(sno) over (partition by code,trunc(sdate,'mm') order by sno ) last_sno,
  8   lead(sno) over (partition by code,trunc(sdate,'mm') order by sno ) next_sno,
  9   count(sno) over (partition by code,trunc(sdate,'mm'), sno ) cnt
 10    from t
 11         )
 12   where nvl(next_sno,sno+1) <> sno+1
 13  /

CODE SDATE  FIRST_MISSING LAST_MISSING        CNT
---- ------ ------------- ------------ ----------
KEXP 022003         85071                       1
KEXP 022003         85078        85079          1
LEXP 022003          5775                       1
abcd 042003            56           54          2

ops$tkyte@ORA920>

to find these DUPLICATES (which are maybe in error?)  or

ops$tkyte@ORA920> select code, sdate,
  2        sno+1 first_missing,
  3        decode( next_sno-1, sno+1, to_number(null), next_sno-1 ) last_missing
  4    from (
  5  select code, trunc(sdate,'mm') sdate, sno,
  6   lag(sno) over (partition by code,trunc(sdate,'mm') order by sno ) last_sno,
  7   lead(sno) over (partition by code,trunc(sdate,'mm') order by sno ) next_sno
  8    from t
  9         )
 10   where nvl(next_sno,sno+1) <> sno+1 and next_sno <> sno
 11  /

CODE SDATE  FIRST_MISSING LAST_MISSING
---- ------ ------------- ------------
KEXP 022003         85071
KEXP 022003         85078        85079
LEXP 022003          5775

ops$tkyte@ORA920>


to ignore them.
 

True

Riaz Shahid, April 30, 2003 - 9:04 am UTC

True..... sb# is not unique w.r.t.
collectorate_code, trunc(shippingbill_date,'mm').

But it's unique w.r.t.
collectorate_code, trunc(shippingbill_date).

Consider:

cr@STARR.LHR> select count(*),collectorate_code,trunc(shippingbill_date,'mm'),shippingbill_no
from custom_sb_master group by collectorate_code,trunc(shippingbill_date,'mm'),shippingbill_no
having count(*)>1
/

COUNT(*) COLL TRUNC(SHI SHIPPINGBILL_NO
========== ==== ========= ===============
2 MEXP 01-NOV-01 1863
2 MEXP 01-NOV-01 1876



cr@STARR.LHR> select count(*),collectorate_code,trunc(shippingbill_date),shippingbill_no
from custom_sb_master group by collectorate_code,trunc(shippingbill_date),shippingbill_no
having count(*)>1


no rows selected

So what should I do in order to remove this anomaly? I want missing snos w.r.t.
collectorate_code, trunc(shippingbill_date,'mm')
BUT
sbno is unique w.r.t.
collectorate_code, trunc(shippingbill_date)
AND
I don't want to ignore those sbnos (as you mentioned in the previous post).

Please Advise

Riaz



Tom Kyte
April 30, 2003 - 9:11 am UTC



You tell ME what you want to do with this anomaly.....

Riaz Shahid, April 30, 2003 - 9:27 am UTC

Hello Tom !

Thanks for your quick response.

Actually I am receiving this data from different sources. I want to send them the missing snos so that they can send me the data for those numbers. They can't send me the WHOLE data since it's a HUGE one.

Thanks

Riaz

Tom Kyte
April 30, 2003 - 9:52 am UTC

then -- ignore these duplicates??? why not? they do not represent a gap -- so trash them

If the hole in the series is not fixed to 1

Satish, November 22, 2004 - 1:51 am UTC

Hi Tom,
Excellent site. I am reading through the questions and solutions and I find it great to come to this site whenever I have time. I have both of your books and they are among the best books I have on Oracle. I have a question on "finding missing numbers": if the hole in the series is not fixed to 1, is there a way to get the answer out? To elaborate: in the first example by Riaz, he points out that 85071 is missing. If, say, 85072 were missing as well, would there be a solution? I mean, if we don't know how many numbers can be missing in between, is there a way out?

Tom Kyte
November 22, 2004 - 7:42 am UTC

the solution above provided the first and last numbers missing.

subtract.

A small query to get missing sequence numbers

David Shen, February 12, 2010 - 4:18 pm UTC

I figured out a small query to get missing sequence numbers.

First, let's get 10 rows from DUAL table:

9i: SELECT NULL FROM DUAL CONNECT BY ROWNUM <=10;
9i: SELECT NULL FROM DUAL CONNECT BY LEVEL <=10;

10g: SELECT NULL FROM DUAL CONNECT BY LEVEL <=10; (error in 8i)
SELECT 1 FROM DUAL CONNECT BY LEVEL <=10; (returns only 1 row in 9i/8i)
SELECT ROWNUM FROM DUAL CONNECT BY LEVEL <=10; (returns only 1 row in 9i/8i)

10g: SELECT NULL FROM DUAL CONNECT BY ROWNUM <=10; (error in 8i)
SELECT 1 FROM DUAL CONNECT BY ROWNUM <=10; (returns only 1 row in 9i)
SELECT ROWNUM FROM DUAL CONNECT BY ROWNUM <=10; (returns only 1 row in 9i)

Then, using this trick, we can code a query like this:

SELECT (SELECT MIN(A) FROM TEST)+ ROWNUM Missing#
FROM DUAL
CONNECT BY
LEVEL <=(SELECT MAX(A) - MIN(A) FROM TEST)
MINUS
SELECT A FROM TEST;

This query brings back the missing values in column A in table TEST. It works well in 10g, but a little revision is needed to make it work in 8i/9i.
Tom Kyte
February 16, 2010 - 12:30 pm UTC

small in size is not necessarily a good thing - I think you'll find most any of the other approaches here to be more efficient than building a set of numbers like that.
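David's query is, in effect, a set difference: generate the full MIN..MAX range, then subtract what exists. A rough Python equivalent (the `missing_by_minus` helper name is hypothetical) makes both the idea and Tom's efficiency caveat visible, since the intermediate set grows with MAX-MIN rather than with the number of gaps:

```python
def missing_by_minus(values):
    # Build the complete range MIN..MAX, then subtract the values
    # actually present -- the CONNECT BY ... MINUS idea in miniature.
    full = set(range(min(values), max(values) + 1))
    return sorted(full - set(values))
```

For a column spanning millions of values with only a handful of gaps, this materialises the whole range, which is why the LAG/LEAD approach in the original answer tends to win.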

List the missing numbers

SM, April 29, 2010 - 1:31 pm UTC

Tom,
As always thank you for your time and effort.
Is there a way to list the missing numbers in SQL without looping through in PL/SQL?

Thanks
SM
Tom Kyte
April 29, 2010 - 2:12 pm UTC

all of the examples on this page are sql????

see the very first original answer, it lists out the missing numbers.

Clarification

SM, April 30, 2010 - 7:14 am UTC

Tom,
Maybe I am missing something, or I did not explain the question right, but I see only an upper limit and a lower limit of the range of missing numbers. I don't see the individual numbers listed. I want to be able to list all the numbers in between them, like this: instead of
first_missing Last_missing
849 853

I want
849
850
851
852
853

Thanks
SM
Tom Kyte
April 30, 2010 - 8:44 am UTC

all you asked for was a list of missing numbers. If someone asked me for a list of missing somethings, I would say:

you are missing a-d
you are missing x
you are missing z

that is my list - you weren't specific enough.

ops$tkyte%ORA11GR2> create table t ( code varchar2(1), sno number );

Table created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into t values ( 'A', 100 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'A', 101 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'A', 105 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'A', 110 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'A', 111 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'A', 112 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'A', 114 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'A', 116 );

1 row created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into t values ( 'B', 9100 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'B', 9101 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'B', 9105 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'B', 9110 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'B', 9111 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'B', 9112 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'B', 9115 );

1 row created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select * from t order by code, sno;

C        SNO
- ----------
A        100
A        101
A        105
A        110
A        111
A        112
A        114
A        116
B       9100
B       9101
B       9105
B       9110
B       9111
B       9112
B       9115

15 rows selected.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select code, first_missing, last_missing, nvl(last_missing,first_missing)-first_missing+1 nmissing
  2    from (
  3  select code,
  4        sno+1 first_missing,
  5        decode( next_sno-1, sno+1, to_number(null), next_sno-1 ) last_missing
  6    from (
  7  select code, sno,
  8   lag(sno) over (partition by code order by sno) last_sno,
  9   lead(sno) over (partition by code order by sno) next_sno
 10    from t
 11         )
 12   where nvl(next_sno,sno+1) <> sno+1
 13         )
 14  /

C FIRST_MISSING LAST_MISSING   NMISSING
- ------------- ------------ ----------
A           102          104          3
A           106          109          4
A           113                       1
A           115                       1
B          9102         9104          3
B          9106         9109          4
B          9113         9114          2

7 rows selected.

ops$tkyte%ORA11GR2> with data
  2  as
  3  (
  4  select code, first_missing, last_missing, nvl(last_missing,first_missing)-first_missing+1 nmissing
  5    from (
  6  select code,
  7        sno+1 first_missing,
  8        decode( next_sno-1, sno+1, to_number(null), next_sno-1 ) last_missing
  9    from (
 10  select code, sno,
 11   lag(sno) over (partition by code order by sno) last_sno,
 12   lead(sno) over (partition by code order by sno) next_sno
 13    from t
 14         )
 15   where nvl(next_sno,sno+1) <> sno+1
 16         )
 17  )
 18  select code, first_missing+column_value
 19    from data, TABLE( cast( multiset( select level-1 from dual connect by level <= data.nmissing ) as sys.odciNumberList ) )
 20  /

C FIRST_MISSING+COLUMN_VALUE
- --------------------------
A                        102
A                        103
A                        104
A                        106
A                        107
A                        108
A                        109
A                        113
A                        115
B                       9102
B                       9103
B                       9104
B                       9106
B                       9107
B                       9108
B                       9109
B                       9113
B                       9114

18 rows selected.

Awesome!!

SM, April 30, 2010 - 12:40 pm UTC

Thank you Tom,
I am not able to understand the last join - how SQL accepts the reference to data.nmissing within the multiset and not otherwise. Can you explain that piece?

Again thank you.
SM
Tom Kyte
April 30, 2010 - 1:05 pm UTC

It is called table unnesting

http://docs.oracle.com/docs/cd/E11882_01/appdev.112/e11822/adobjdes.htm#ADOBJ7446


that shows it with functions, I just used multiset - the builtin function - to return a set and implicitly (the only way) joined to it.
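The effect of that unnesting join can be mimicked outside the database. A hypothetical Python `expand` helper shows what the TABLE(multiset(...)) step does to each (first_missing, last_missing) row:

```python
def expand(gaps):
    # gaps: (code, first_missing, last_missing-or-None) rows, as produced
    # by the first query (None means a single missing number). Each range
    # is fanned out into individual numbers, like the
    # TABLE( multiset( select level-1 ... ) ) join in the SQL above.
    out = []
    for code, first, last in gaps:
        stop = first if last is None else last
        out.extend((code, n) for n in range(first, stop + 1))
    return out
```

One row per gap goes in; one row per missing number comes out, which is exactly the fan-out the implicit join performs.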

managing primary keys

A Reader, January 06, 2011 - 4:03 am UTC

Tom
Thanks for your time.


We have a limitation on the number of primary keys generated in our application.

we can go in +ve series 1 to 2^31
and in -ve series -2^31 to 0

at the moment we are finding it hard to manage the burn rate (utilization) of these primary keys.

explaining the current set-up / requirement below:


a) table t ( x number primary key, and other columns.... );
b) around 10 million primary keys in table t are utilised per day
c) we delete old data (after archiving it) on a regular basis, based on some filter criteria,
but this archiving does not return us a continuous range of keys.
Say my current primary key (... max(x) from t) is 20,000,000
and I deleted older data between keys 1 and 100,000;
then this process will not delete all 100,000 rows - it may delete 99,500 rows, and 500 rows (though random) would still exist there,
as these 500 rows do not meet the deletion/archiving criteria.
d) when I want to re-use these keys afterwards - traversing through (... 20,000,000 (current) to 2^31 (max)) -
then I will just set the starting key number to -2^31 and that special ( ;) ) function starts generating keys for us towards 0.
e) so far so good... in case (d) above, none of the keys from -2^31 to 0 would have been present before in table t (because it is the first time I
am traversing through the negative range).
f) Trouble starts (ORA-00001) when I start re-using the positive series (1 onwards), because I may hit the first key (out of those 500 rows
which already exist, as mentioned in (c) above) at any time now.
g) Temporary workaround for (f) above - I have some customised scripts which pick up the future keys (potential candidates for ORA-00001), and these are moved to
another range (where I know I have free keys),
something like:
update t set x = (value which_I_know_is_free_and_will_be_reused_after_a_long_gap) where x = (value.. from future_potential_range..)

(we update relational columns as well in the case of a few tables, if these keys have any child records)
h) We do (g) above for bigger chunks of such potential keys.
i) we follow the cycle of re-using the positive series, then the negative, then the positive....

j) Earlier in
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5173758137186#2724857200346268052

I copied the key generation logic and you suggested:
(
/* ...... 3) I'm thinking you should use a SEQUENCE 100% of the time, every day, in every way. period. you have only a mere 268435456 unique keys here - this is silly.
You are going to run out pretty fast. */
)
In our case it will take time to upgrade to the new application release, where there is virtually no limit on key generation.


Hope I have explained the current set-up clearly above.


Questions :



- How to do (g) above efficiently? We do it this way:
a) we create the table t_1 with primary keys from 1 to 2^31
b) we create the table t_2 with primary keys from -2^31 to 0

select the free keys (top 100,000) -- so C1 has all the keys which are free and can be re-used.

check if the currently used key is in the positive range or the negative range.

...Cursor C1 is
Select a.x from t_1 a where
Not Exists (Select b.x from t b where b.x=a.x) and rownum<100000;

for each record in C1
loop
fetch C1 into DestX;
Select x into SourceX from t where x < value_present_x + 100000 and rownum<2;
Update_T(SourceX,DestX);
-- the above function updates the SourceX value with DestX.
end loop;

- it is clear that we have introduced slow-by-slow code here; we need to change this into simple SQL statements.
- it looks like we can avoid making use of another 2 tables (t_1 and t_2). We could find the missing keys directly in t - hence most efficient?
- how can we select the next potential keys (say my current key in table t is 100,000 and I want to select the keys between 200,000 and 300,000) and move (update) those keys onto keys in the previous range (1 to 100,000) in a more efficient way? (here I keep moving the potential keys to the previous range, so the very next range, 100,001 to 199,999, I have not mentioned)

Tom Kyte
January 06, 2011 - 8:09 am UTC

why are you limiting yourself to 2^31? We limit you (with sequences) to a number(27)

999,999,999,999,999,999,999,999,999

even if you burned 100,000 per SECOND it would take you

ops$tkyte%ORA11GR2> select to_number(rpad('9',27,'9'))/100000/60/60/24/366 from dual;

TO_NUMBER(RPAD('9',27,'9'))/100000/60/60/24/366
-----------------------------------------------
                                     3.1623E+14


that many YEARS to exhaust them.
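Tom's back-of-the-envelope figure is easy to re-check outside SQL*Plus; the same arithmetic in Python:

```python
# number(27) holds up to 27 nines; assume 100,000 keys burned per second
max_keys = 10**27 - 1
seconds_per_year = 60 * 60 * 24 * 366        # the query's 366-day year
years = max_keys / 100_000 / seconds_per_year
# years comes out on the order of 3.16e14, matching the query's 3.1623E+14
```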


I am really against attempting to design something to reuse surrogate keys - because "efficient" and "reuse those keys" are two orthogonal concepts.


I would suggest just using the number as a number type and you are already done. You will not exhaust them.

Or switch to a SYS_GUID()

The limit of 32-bit integer

Kim Berg Hansen, January 06, 2011 - 9:19 am UTC

Hi, Tom

Just a short comment...

The previous reviewer has a primary key limitation of -2^31 -> 2^31 - that is a 32-bit integer. You ask him why he limits himself.

We have the same situation. The reason in our case:
The old legacy ERP system is compiled using 32-bit integers for the keys. We don't have the source code for the kernel of this system - we cannot change the implementation to a 64-bit integer or higher.

So even if Oracle can last us billions of billions of years, the app cannot understand key values outside the 32-bit integer range :-(

We will exhaust that range in about 4-5 years, I calculate. Then we are either forced to move to another system or figure out some way to reuse key values.

I agree wholeheartedly this is not a good thing. And I won't even ask for a solution for it :-) But it is real life and alas not possible to get rid of 32-bit limitation :-(

I just wanted to state that there may be outside reasons for such a limitation. It might not be a possibility for the reviewer to change his surrogate key datatype...

Tom Kyte
January 06, 2011 - 9:33 am UTC

If you have an application that has this limit
AND you don't have the source code
AND the vendor isn't going to fix this
AND you estimate 4-5 years

THEN
time to buy a new ERP and start the conversion - right now

END IF



for you see - nothing will fix it - if you don't have the source code, you cannot change the primary key generation in the first place. You cannot "fix" it.


Investigating a different primary key generation method would be fruitless - you would need to MODIFY the application. And only the vendor can do that - how many applications do you know of that rely on sequences having some implication to "insert order". As bad an idea as that is - many many many applications do.

You cannot change the way the primary key is generated because

a) you don't have the source (that is a show stopper right there)
b) the vendor isn't going to be involved (red flags abound)
c) the application might well 'break' because it assumes that a primary key of 1000 means the record is "older" than a primary key of 1


I agree :-)

Kim Berg Hansen, January 07, 2011 - 1:37 am UTC

I agree completely, Tom!

And I am working very hard to convince boss that we should drop that 16 year old system and just go ahead and write our own :-)

..managing primary keys contd

A Reader, January 07, 2011 - 7:05 am UTC

Tom,

In our case the situation is different.

we have to manage (live with it) until we upgrade.

as the burn rate is too high and we have already started feeling the heat (unlike Kim), we need to have some workaround in place.

Could you please outline an efficient way of doing the things which I asked in the

...
Questions :
...
section above?


regards



Tom Kyte
January 07, 2011 - 9:49 am UTC

I know of no efficient way to do this. You'll have to do something like register all available keys in a table and then write a function/procedure like this:

ops$tkyte%ORA11GR2> create table t
  2  as
  3  select rownum id, decode( mod(rownum,2), 0, 'N', 'Y') inuse from all_users;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create index t_idx on t(id, (case when inuse='N' then 1 end) )
  2  
ops$tkyte%ORA11GR2> create or replace function get_id return number
  2  as
  3          l_id    number;
  4          l_rowid rowid;
  5          cursor c is
  6          select id, rowid
  7            from t
  8           where (case when inuse='N' then 1 end) = 1
  9             for update skip locked;
 10  begin
 11          open c;
 12          fetch c into l_id, l_rowid;
 13          if ( sql%rowcount <> 1 )
 14          then
 15                  raise_application_error( -20001, 'you lose, no id for you' );
 16          end if;
 17          close c;
 18  
 19          update t set inuse = 'Y' where rowid = l_rowid;
 20  
 21          return l_id;
 22  end;
 23  /

Function created.

ops$tkyte%ORA11GR2> create or replace procedure free_id( p_id in number )
  2  as
  3  begin
  4          update t set inuse = 'N' where id = p_id;
  5          if (sql%rowcount <> 1)
  6          then
  7                  raise_application_error( -20001, 'attempt to free invalid id('||p_id||')');
  8          end if;
  9  end;
 10  /

Procedure created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec dbms_output.put_line( get_id );
2

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          dbms_output.put_line( get_id );
  5          commit;
  6  end;
  7  /
4

PL/SQL procedure successfully completed.



you need to now not only allocate your id (get_id) but also free it - or write some batch process that frees them on a schedule.

You need to modify code - you have to modify code - do not attempt this in triggers - the application won't understand what you did.
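The allocate/free contract Tom describes can be sketched as an in-memory pool. This is purely illustrative - a Python lock is not FOR UPDATE SKIP LOCKED, and the real non-blocking, multi-session behaviour needs the database - but it shows the get_id/free_id life cycle:

```python
import threading

class IdPool:
    # A toy in-memory analogue of Tom's T(id, inuse) table:
    # get_id marks a free id as in use, free_id releases it.
    # The lock merely stands in for the row locking; real skip-locked
    # semantics (two sessions never waiting on the same row) only
    # exist in the database.
    def __init__(self, ids):
        self._lock = threading.Lock()
        self._free = set(ids)
        self._used = set()

    def get_id(self):
        with self._lock:
            if not self._free:
                raise RuntimeError('you lose, no id for you')
            i = self._free.pop()
            self._used.add(i)
            return i

    def free_id(self, i):
        with self._lock:
            if i not in self._used:
                raise ValueError('attempt to free invalid id(%s)' % i)
            self._used.remove(i)
            self._free.add(i)
```

As with the PL/SQL version, every consumer must eventually call free_id (or a batch job must reclaim orphaned ids), or the pool drains just like the sequence range did.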

A reader, January 07, 2011 - 3:26 pm UTC

Hi Tom,

5 cursor c is
6 select id, rowid
7 from t
8 where (case when inuse='N' then 1 end) = 1
9 for update skip locked;


will it lock all rows where inuse='N', or just any one row where inuse='N'?

Thanks
Tom Kyte
January 07, 2011 - 3:59 pm UTC

read the example I posted above, the answer is in the example already.

There are two transactions there - first one is open when the second one begins... The second one does not block... therefore.....

..managing parimary keys...

A Reader, January 08, 2011 - 12:10 am UTC

Thank you so much Tom.
Yes. we have to change the code for the permanent way.

Further, should the 'inuse' expression be the leading column in index t_idx?

Here,

mndba@host-RAC1>create table t
    as
    select rownum id, decode( mod(rownum,2), 0, 'N', 'Y') inuse from all_objects;

Table created.

mndba@host-RAC1>


mndba@host-RAC1>create index t_idx on t(id, (case when inuse='N' then 1 end) );


Index created.

mndba@host-RAC1> exec dbms_stats.gather_table_stats(user,'T', cascade=>TRUE);

PL/SQL procedure successfully completed.

mndba@host-RAC1> select id, rowid
  2    from t
  3   where (case when inuse='N' then 1 end) = 1
  4     for update skip locked;

24915 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3319758624

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 24581 |   192K|    29  (11)| 00:00:01 |
|   1 |  FOR UPDATE        |      |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| T    | 24581 |   192K|    29  (11)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(CASE "INUSE" WHEN 'N' THEN 1 END =1)


Statistics
----------------------------------------------------------
        168  recursive calls
      25496  db block gets
       1819  consistent gets
          0  physical reads
    5204192  redo size
    1898307  bytes sent via SQL*Net to client
      18641  bytes received via SQL*Net from client
       1662  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      24915  rows processed

mndba@host-RAC1> 


---

mndba@host-RAC1> drop index t_idx;

Index dropped.

mndba@host-RAC1>  create index t_idx on t( (case when inuse='N' then 1 end),id );

Index created.


mndba@host-RAC1> select id, rowid
  2    from t
  3   where (case when inuse='N' then 1 end) = 1
  4     for update skip locked;

24915 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1131152847

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |   498 |  3984 |     2   (0)| 00:00:01 |
|   1 |  FOR UPDATE       |       |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| T_IDX |   498 |  3984 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(CASE "INUSE" WHEN 'N' THEN 1 END =1)


Statistics
----------------------------------------------------------
         40  recursive calls
      25355  db block gets
       1728  consistent gets
          0  physical reads
    5201676  redo size
    1898307  bytes sent via SQL*Net to client
      18641  bytes received via SQL*Net from client
       1662  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      24915  rows processed

mndba@host-RAC1>



Tom Kyte
January 08, 2011 - 12:40 pm UTC

yeah, that was a mistake on my part putting it second, the function should be first in the index. sorry about that.

A reader, January 09, 2011 - 11:03 pm UTC

Hi Tom,

"There are two transactions there - first one is open when the second one begins... The second one does not block... therefore..... "

My question is...

when you execute the first one it will lock all rows where inuse='N'; hence, when you execute the second transaction
(i.e. using the autonomous transaction), why is it not blocked by the first one? Is it that you close the "for update" cursor in the proc get_id, and that releases the lock, by which the second one
succeeds?

I think the lock will be held until the transaction commits or rolls back - or is it true that it is also released
on cursor close?

"Explicitly Acquiring Row Locks"

http://download.oracle.com/docs/cd/E11882_01/appdev.112/e17125/adfns_sqlproc.htm#ADFNS99937


Thanks...

Tom Kyte
January 10, 2011 - 8:22 am UTC

but it doesn't, skip locked is "magic" and different from anything else.

It was designed for queuing applications.

It obviously didn't work the way you assumed - else my application would have blocked itself and deadlocked.

But it didn't.


Just give it a try, use two sessions if you like, you'll see it in action.


Really very good site

Rukmini, April 14, 2014 - 12:11 pm UTC

