Tom
Thanks for your time.
We have a limitation on the number of primary keys our application can generate.
We can go through the positive series, 1 to 2^31,
and through the negative series, -2^31 to 0.
At the moment we are finding it hard to manage the burn rate (utilization) of these primary keys.
The current setup / requirement is explained below:
a) table t ( x number primary key, and other columns.... );
b) around 10 million primary keys in table t are utilised per day
c) we delete old data (after archiving it) on a regular basis, based on some filter criteria,
but this archiving does not give us back a continuous range of keys.
say my current primary key ( .... max(x) from t ) is 20,000,000
and I delete the older data between keys 1 and 100,000...
then
this process will not delete all 100,000 rows - it may delete 99,500 rows, and 500 rows (scattered at random) would still exist there,
as these 500 rows do not meet the deletion/archiving criteria.
d) when I want to re-use these keys later - after traversing through ( .. 20,000,000 (current) to 2^31 (max) ) -
I just set the starting key number to -2^31 and that special ( ;) ) function starts generating keys for us towards 0.
e) so far so good... in case (d) above, none of the keys from -2^31 to 0 would already be present in table t (because it is the 1st time I
am traversing through the negative range).
f) Trouble starts (ORA-00001) when I start re-using the positive series (1 onwards), because I may hit the 1st key (one of those 500 rows
which still exist, as mentioned in (c) above) at any time now.
g) Temporary workaround for (f) above - I have some customised scripts which pick up the future keys (potential candidates for ORA-00001) and move them to
another range (where I know I have free keys..)
something like:
update t set x = ( value_which_I_know_is_free_and_will_be_reused_after_a_long_gap ) where x = ( value.. from future_potential_range.. )
(for a few tables we update the related foreign key columns as well, if these keys have any child records - a rough sketch of that re-keying is included after this write-up)
h) We do (g) above for bigger chunks of such potential keys.
i) we follow the cycle of re-using the positive series, then the negative, then the positive....
j) Earlier, in
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5173758137186#2724857200346268052 I copied the key generation logic and you suggested..
(
/* ...... 3) I'm thinking you should use a SEQUENCE 100% of the time, every day, in every way. period. you have only a mere 268435456 unique keys here - this is silly.
You are going to run out pretty fast. */
)
In our case it will take time to upgrade to the new application release, where there is virtually no limit on key generation.
Hope I have explained the current setup clearly above. (Just below, before the questions, I have sketched what I understood your sequence suggestion in (j) to mean, and roughly the kind of re-keying (g) involves - please correct me where my understanding is wrong.)
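First, my reading of the sequence advice from (j) - the sequence name and cache size here are just placeholders. As far as I can tell this would cycle through the same positive/negative ranges we burn through today, so it would still raise ORA-00001 on the leftover rows unless they are cleaned out first:

create sequence t_pk_seq            -- placeholder name
  start with   20000001             -- continue from our current max(x)
  increment by 1
  minvalue    -2147483648           -- -2^31
  maxvalue     2147483647           --  2^31 - 1
  cycle                             -- wrap to minvalue after maxvalue, like our +ve / -ve cycle
  cache 1000;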
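Second, roughly the kind of re-keying (g) needs for tables that have child records. The names t_child, t_fk and t_child_fk are made up for illustration only, and this particular version assumes the foreign key is DEFERRABLE:

set constraint t_child_fk deferred;

update t       set x    = :new_key where x    = :old_key;
update t_child set t_fk = :new_key where t_fk = :old_key;

commit;   -- the deferred constraint is only checked here, after parent and child agree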
Questions :
- How do we do (g) above efficiently? Today we do it this way:
  a) we create a table t_1 holding the primary keys from 1 to 2^31
  b) we create a table t_2 holding the primary keys from -2^31 to 0
  c) we select the free keys (top 100,000) -- so cursor C1 has all the keys which are free and can be re-used
  d) we check whether the currently used key is in the positive or the negative range
  ...Cursor C1 is
     select a.x from t_1 a
     where not exists (select b.x from t b where b.x = a.x)
       and rownum < 100000;
  open C1;
  loop
     fetch C1 into DestX;
     exit when C1%notfound;
     -- pick one existing key out of the "future potential" range
     select x into SourceX from t where x < value_present_x + 100000 and rownum < 2;
     Update_T(SourceX, DestX);
     -- the above procedure re-keys the row (and any child rows) from SourceX to DestX
  end loop;
  close C1;
- It is clear that we have introduced slow-by-slow code here.. we need to change this into simple SQL statements.
- It looks like we can avoid using the other 2 tables (t_1 and t_2) altogether. Can we find the missing keys directly in t - would that be the most efficient way? (My guess at this is sketched at the end of this question.)
- How can we select the next potential keys (say my current key in table t is 100,000 and I want to select the keys between 200,000 and 300,000) and move (update) those rows onto free keys in a previous range (1 to 100,000) in a more efficient way? (Here I keep moving the potential keys to a previous range, so I have deliberately not mentioned the very next range, 100,001 to 199,999.) A rough attempt at a set-based version is also sketched at the end.
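On avoiding t_1 and t_2 - is reading the gaps straight from t the kind of thing you would do? This is just my guess:

-- free-key ranges in the positive series, read directly from t
select x + 1 as free_from, next_x - 1 as free_to
from ( select x, lead(x) over (order by x) as next_x
       from   t
       where  x > 0 )
where  next_x > x + 1;
-- each row is a contiguous range of unused keys
-- (keys below min(x), if any, would need a separate check)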
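And on the last question - would something along these lines be a sensible set-based replacement for the loop? pk_remap is a made-up staging table, the 200,000-300,000 and 1-100,000 ranges are just the example numbers from above, and I am assuming the foreign keys can be handled the way Update_T handles them today:

-- staging table for the old-key -> new-key mapping
create global temporary table pk_remap ( old_x number primary key, new_x number ) on commit delete rows;

-- pair each "future potential" key with a distinct free key, in one statement
insert into pk_remap ( old_x, new_x )
select src.x, dst.free_x
from ( select x, row_number() over (order by x) as rn
       from   t
       where  x between 200000 and 300000 ) src                          -- keys we are about to collide with
join ( select free_x, row_number() over (order by free_x) as rn
       from ( select level as free_x from dual connect by level <= 100000 ) k
       where not exists ( select null from t where t.x = k.free_x ) ) dst -- free keys in 1..100000
on ( src.rn = dst.rn );

-- re-key the parent rows in a single update
update t
set    x = ( select new_x from pk_remap where pk_remap.old_x = t.x )
where  exists ( select null from pk_remap where pk_remap.old_x = t.x );

-- any child tables would get the same update against their foreign key columns,
-- in the same transaction, before the commit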