Question and Answer

Tom Kyte

Thanks for the question, Bala.

Asked: January 04, 2010 - 11:35 pm UTC

Last updated: July 30, 2012 - 7:30 am UTC

Version: 10.2.0.3.0


You Asked

Dear Tom,

I am a big fan of yours and I have learned a lot from your site.

We have an OLTP application from a vendor. One of its tables (call it X) has more than 25 million records. We need to generate a report based on one date column, but that column doesn't have an index, so the report was very, very slow. We asked the vendor whether we could create an index on the date column; they didn't accept. So what I did was create a separate table (call it Y) with two columns, both indexed: one to store the primary key value of table X and the other to store the date column of table X. Since the vendor didn't accept a trigger on table X either, I created a procedure that inserts records into table Y from table X, from the last processed primary key value up to the current maximum primary key value of table X. I submitted this procedure as a job that runs every 5 minutes.

In the report I join table X and table Y so I can apply the date range. I have tested this in the test database, but our DBA is not allowing this approach in production. He claims the performance of DML on table X would be affected. Please let me know whether what I have done is a good approach, or if there is any workaround to tackle this.

Thanks & Best Regards,
Bala
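
The job Bala describes boils down to something like the following sketch (table and column names X, Y, PK_COL and DATE_COL are made up for illustration). This is exactly the high-water-mark pattern whose race condition Tom demonstrates in his answer.

```sql
-- Hypothetical sketch of the incremental copy job described above.
CREATE OR REPLACE PROCEDURE copy_new_rows AS
  l_last_pk NUMBER;
BEGIN
  -- watermark: highest primary key copied so far
  SELECT NVL(MAX(pk_col), 0) INTO l_last_pk FROM y;

  -- copy everything with a higher primary key
  INSERT INTO y (pk_col, date_col)
  SELECT pk_col, date_col
    FROM x
   WHERE pk_col > l_last_pk;

  COMMIT;
END;
/
-- submitted via DBMS_SCHEDULER (or DBMS_JOB) to run every 5 minutes
```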

and Tom said...

the only time table X would be affected by anything would be the period of time that the procedure you wrote is run. And it will affect the performance of the entire system - as it is "extra work" you are doing. The amount of work? The execution of your queries to

a) find the last maximum primary key.
b) insert the new data that has a primary key greater than that maximum.

There will be no locking or contention issues on table X, just the extra queries being processed every five minutes.

However, there is a bug in your code. Your copy of the data will be incomplete - you WILL miss rows - unless you

a) use a table lock on table X during your refresh, to prevent modifications for the duration of each five-minute refresh.

b) just put the index on table X and skip your Y table (this would be my preference; adding an index that makes your report feasible seems logical)

c) use replication - including materialized view logs.

d) fully refresh every time (increasing the amount of processing necessary to build your Y table).
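
Option (c), for example, might be sketched along these lines, assuming X has a primary key and a date column (PK_COL and DATE_COL are hypothetical names); the materialized view then replaces the hand-maintained Y table:

```sql
-- Hypothetical sketch of option (c): let Oracle track changes with a
-- materialized view log instead of a hand-rolled copy job.
CREATE MATERIALIZED VIEW LOG ON x
  WITH PRIMARY KEY (date_col)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW y
  REFRESH FAST ON DEMAND
  AS SELECT pk_col, date_col FROM x;

-- refreshed from a job, e.g. every five minutes:
BEGIN
  DBMS_MVIEW.REFRESH('Y', method => 'F');  -- 'F' = fast (incremental) refresh
END;
/
```

The refresh mechanism is read-consistent, so it does not suffer from the missed-row race demonstrated in the timeline above.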



In order to demonstrate why your approach will not work, assume this timeline and two sessions


time             session1                         session2
t0               insert into X
                 primary key=100
t1                                                insert into X 
                                                  primary key = 101

t2                                                commit;

t3                                                do the refresh
                                                  this will copy over 
                                                  primary keys <= 101.
                                                  Of course, primary key = 100
                                                  will NOT be copied since it
                                                  isn't committed yet.

t4                                                commit;

t5               commit;

t6                                                do the refresh, this will
                                                  copy over primary keys 
                                                  GREATER THAN 101 - skipping
                                                  100



primary key 100 never gets copied

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:16998677475837

for a more in-depth look at that. You need to redesign your process; it won't work.





Comments

Locks may not work with sequences

Gary, January 05, 2010 - 5:16 pm UTC

a) Won't necessarily work in RAC since, assuming the primary key is sequence-generated, it is possible for node 1 to insert entries 100-119 while node 2 does 120-139, and PK 118 may be inserted after 125.

Timestamps are better than sequences for this.

In 11gR2 there's a new option,
DBMS_UTILITY.WAIT_ON_PENDING_DML:
"This procedure waits until all transactions (other than the caller's own) that have locks on the listed tables and began prior to the specified SCN have either committed or been rolled back."

http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10577/d_util.htm#BJEFHJGE

That would work with timestamps without locking.
You'd take the current timestamp (say 10:45 am) and previous one (10:40). Then wait for any pending DML to complete using the new routine and then pick out rows with a created timestamp between 10:40 and 10:45.
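
A minimal sketch of that timestamp-window idea, assuming X has a populated CREATED_TS column (all names here are illustrative, and in practice the previous window boundary would be persisted between runs rather than recomputed):

```sql
DECLARE
  l_scn  NUMBER;
  l_to   TIMESTAMP := SYSTIMESTAMP;                       -- this run's boundary
  l_from TIMESTAMP := SYSTIMESTAMP - INTERVAL '5' MINUTE; -- previous boundary
BEGIN
  -- wait (up to 60s) for in-flight transactions on X to commit or roll back
  IF DBMS_UTILITY.WAIT_ON_PENDING_DML(tables  => 'X',
                                      timeout => 60,
                                      scn     => l_scn)
  THEN
    INSERT INTO y (pk_col, date_col)
    SELECT pk_col, created_ts
      FROM x
     WHERE created_ts >= l_from
       AND created_ts <  l_to;
    COMMIT;
  END IF;
END;
/
```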
Tom Kyte
January 06, 2010 - 7:50 am UTC



http://docs.oracle.com/docs/cd/E11882_01/appdev.112/e10577/d_util.htm#sthref11261

is the link.


that would also SERIALIZE, which is exactly what I've been saying....

and why this "business need" is absolutely a "business request" and via education will become an "oh, what we *really* need is...."

Clarification on WAIT_ON_PENDING_DML

Tom, February 06, 2012 - 8:13 pm UTC

Could this ORA-00054 result from incorrect usage or expectations of WAIT_ON_PENDING_DML?

Perhaps it needs to be wrapped in a loop on ORA-00054 to account for newer SCNs?

15:43:42 > DECLARE
15:43:42 2 scn NUMBER := NULL;
15:43:42 3 timeout CONSTANT INTEGER := 300; --waiting in seconds
15:43:42 4 BEGIN
15:43:42 5 IF NOT DBMS_UTILITY.WAIT_ON_PENDING_DML(Tables => 'schema1.tab1',
15:43:42 6 timeout => timeout,
15:43:42 7 scn => scn)
15:43:42 8 THEN
15:43:42 9 RAISE_APPLICATION_ERROR(-20000, 'Timed out for SCN: '||SCN);
15:43:42 10 ELSE
15:43:42 11 EXECUTE IMMEDIATE 'ALTER TABLE schema1.tab1 DROP CONSTRAINT tab1_fk';
15:43:42 12 EXECUTE IMMEDIATE 'DROP INDEX schema1.tab1_fk';
15:43:42 13 DBMS_OUTPUT.PUT_LINE('Successfully completed DDL');
15:43:42 14 END IF;
15:43:42 15 END;
15:43:42 16 /
DECLARE
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
ORA-06512: at line 11

Elapsed: 00:00:01.53
15:43:44 >
Tom Kyte
February 06, 2012 - 9:21 pm UTC

well, the logic won't work regardless - each DDL is a separate transaction in the above - so the drop constraint could work - but then the drop index gets blocked by a transaction that starts after the drop constraint finished...

a more straightforward method of doing this ddl would be as follows:

ops$tkyte%ORA11GR2> create table t ( x int constraint t_pk primary key, y int );

Table created.

Elapsed: 00:00:00.01
ops$tkyte%ORA11GR2> create index t_idx on t(y);

Index created.

Elapsed: 00:00:00.01
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> pause

ops$tkyte%ORA11GR2> set timing on
ops$tkyte%ORA11GR2> BEGIN
  2          execute immediate 'alter session set ddl_lock_timeout = 6';
  3          execute immediate 'alter table ops$tkyte.t drop constraint t_pk';
  4          execute immediate 'drop index t_idx';
  5  END;
  6  /
BEGIN
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
ORA-06512: at line 3


Elapsed: 00:00:06.05
ops$tkyte%ORA11GR2> 



I had another session open and inserted a row into T when I hit the pause. Then my DDL code blocked for six seconds before timing out.

But again, if you want both of these DDLs to entirely succeed or entirely fail as a single unit - you'll probably need to prevent access to the table for the entire duration.

Package to resolve this issue

Tom Starr, June 12, 2012 - 11:41 am UTC

-- SAMPLE USAGE (note line continuation hyphens):
-- exec dbdeploy_app.active_table_ddl_wrapper.execute_ddl('some_schema', -
-- 'some_table', -
-- 'alter table some_schema.some_table modify (col3 NULL)');

CREATE OR REPLACE PACKAGE BODY dbdeploy_app.active_table_ddl_wrapper AS

  PROCEDURE execute_ddl(this_schema IN VARCHAR, this_table IN VARCHAR, this_ddl IN VARCHAR2)
  IS
    already_nullable EXCEPTION;
    PRAGMA EXCEPTION_INIT(already_nullable, -01451);
    l_schema_table   VARCHAR2(61);
    l_ddl_command    VARCHAR2(1000);
    wait_timed_out   EXCEPTION;
    in_use           EXCEPTION;
    PRAGMA EXCEPTION_INIT(in_use, -54);
    scn              NUMBER := NULL;
    timeout          CONSTANT INTEGER := 300; -- wait, in seconds
  BEGIN
    l_schema_table := upper(this_schema||'.'||this_table);
    l_ddl_command  := this_ddl;
    WHILE true LOOP
      BEGIN
        IF DBMS_UTILITY.WAIT_ON_PENDING_DML(tables => l_schema_table, timeout => timeout, scn => scn)
        THEN
          EXECUTE IMMEDIATE l_ddl_command;
          DBMS_OUTPUT.PUT_LINE('Successfully executed this DDL: '||l_ddl_command);
          EXIT;
        ELSE
          RAISE wait_timed_out;
        END IF;
      EXCEPTION
        WHEN wait_timed_out THEN
          DBMS_OUTPUT.PUT_LINE('Failed to execute: '||l_ddl_command||' due to WAIT_ON_PENDING_DML timeout on busy table - trying again');
        WHEN in_use THEN
          DBMS_OUTPUT.PUT_LINE('Failed to execute: '||l_ddl_command||' due to unexpected 00054');
        WHEN already_nullable THEN
          DBMS_OUTPUT.PUT_LINE('Skipped: '||l_ddl_command||' since column is already NULLABLE');
          EXIT;
        WHEN OTHERS THEN
          DBMS_OUTPUT.PUT_LINE('Failed to execute: '||l_ddl_command||' due to this unexpected error.');
          DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_STACK);
          EXIT;
      END;
      DBMS_OUTPUT.PUT_LINE('Trying again...');
      DBMS_LOCK.SLEEP(1);
    END LOOP;
  END;
END; -- Package Body dbdeploy_app.active_table_ddl_wrapper

Package to resolve this issue - developer note

Tom Starr, June 12, 2012 - 10:04 pm UTC

My ORA-01451 trap is superfluous here; it is hardcoded only because EXCEPTION_INIT requires a literal.
I was also unable to overcome DBMS_OUTPUT buffering with a pipelined approach.
A second-level Metalink iTAR rep deemed this usage appropriate and supported without EBR.
And it does work fine for deploying DDL against active tables.
But how much better it is than just a loop on ORA-00054 seems a good question.
Best regards, and thanks in advance for any always erudite and much appreciated thoughts and pointers.
Tom Kyte
June 13, 2012 - 1:18 am UTC

I despise the use of WHEN OTHERS in the loop.

there is no RAISE or RAISE_APPLICATION_ERROR in there. All you did was cut off the error message so it is harder to debug - you'll get a one-line error message instead of the error stack with line numbers and all. If someone calls this, they won't know whether it failed or succeeded.


The essence of your logic is:

ops$tkyte%ORA11GR2>  create or replace PROCEDURE test
  2    IS
  3    BEGIN
  4      WHILE true LOOP
  5        BEGIN
  6          IF (1=1)
  7          THEN
  8                          dbms_output.put_line( 1/0 );
  9              EXIT;
 10          END IF;
 11        EXCEPTION
 12          WHEN OTHERS THEN
 13            DBMS_OUTPUT.PUT_LINE('Failed to execute: '||' due to this unexpected error.');
 14            DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_STACK);
 15            EXIT;
 16        END;
 17        DBMS_OUTPUT.PUT_LINE('Trying again...');
 18        DBMS_LOCK.SLEEP(1);
 19      END LOOP;
 20    END;
 21  /

Procedure created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec test
Failed to execute:  due to this unexpected error.
ORA-01476: divisor is equal to zero

PL/SQL procedure successfully completed.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  success!!! 

<b>the caller thinks it worked - and has no way to know otherwise.  We lose the nice error stack.  We make it harder to debug, as opposed to the right way, without that nasty when others:</b>


ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2>  create or replace PROCEDURE test
  2    IS
  3    BEGIN
  4      WHILE true LOOP
  5        BEGIN
  6          IF (1=1)
  7          THEN
  8                          dbms_output.put_line( 1/0 );
  9              EXIT;
 10          END IF;
 11  /*
 12          EXCEPTION
 13          WHEN OTHERS THEN
 14            DBMS_OUTPUT.PUT_LINE('Failed to execute: '||' due to this unexpected error.');
 15            DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_STACK);
 16            EXIT;
 17  */
 18        END;
 19        DBMS_OUTPUT.PUT_LINE('Trying again...');
 20        DBMS_LOCK.SLEEP(1);
 21      END LOOP;
 22    END;
 23  /

Procedure created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec test
BEGIN test; END;

*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at "OPS$TKYTE.TEST", line 8
ORA-06512: at line 1

<b>Now we can clearly see the error, the caller cannot ignore the fact that it failed (exception city)</b>


Corrected?

Tom, June 23, 2012 - 12:09 am UTC

Excellent review.
I've added WHEN OTHERS ... RAISE;
Instead of WHEN OTHERS ... EXIT;
Does that reflect correct understanding?

If so architectural / usage questions around the always live multi-app version single database architecture which brings this question need addressing for those who might try this at home.

I can only express a miniscule fraction of the thanks due here!
Tom Kyte
June 23, 2012 - 1:54 pm UTC

Yes - when others then ... do stuff ... followed by RAISE; end;
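
A minimal sketch of that corrected handler shape (the DDL here is illustrative and deliberately fails, to show the error propagating):

```sql
BEGIN
  EXECUTE IMMEDIATE 'drop index this_index_does_not_exist';
EXCEPTION
  WHEN OTHERS THEN
    -- do stuff: log whatever context is useful ...
    DBMS_OUTPUT.PUT_LINE('Failed: '||DBMS_UTILITY.FORMAT_ERROR_STACK);
    RAISE;  -- ... then re-raise, so the caller still gets the real error stack
END;
/
```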

Best live DDL architecture?

Tom, July 19, 2012 - 7:32 pm UTC

Would you please provide a few pointers to influence those choosing between this live DDL wrapper and Edition Based Redefinition and dbms_redefinition and other alternatives like Streams to support a very active OLTP system?

Thanks in advance for any thoughts.
Tom Kyte
July 19, 2012 - 8:02 pm UTC

why wouldn't you use this "live ddl" with edition based redefinition (EBR)??

if you want online application upgrades in a single database, online operations+ EBR are the way to go.

If you want to do a cut over to a transformed system with entirely new code, goldengate might be something to look at.

Optimal online OLTP upgrades

Tom, July 20, 2012 - 1:29 am UTC

Virtually all the objects we change are tables, and ordinary application developers are charged with creating their own DDL, with little professional Oracle assistance. So making sure the apps are okay with all new DDL - and that there is sufficient caching of locked objects and incoming data during the (hopefully brief) DDL executed via this wrapper - seems optimal without the added complexity of EBR, even though the apps must tolerate not only state A and state B but all the incremental steps in between.
But the daunting question of how best to accomplish major table/schema refactorings remains...
Tom Kyte
July 30, 2012 - 7:29 am UTC

with little professional help, good luck!!

while it is true that even a blind squirrel occasionally finds a nut - you don't see many successful blind squirrels out there...

Proxy Authentication PHP

Morbach, Rodrigo, July 24, 2012 - 12:33 pm UTC

We're having a little problem. We need to use proxy authentication to validate a user in a PHP application.
We have a Java application that swaps the proxy user for a generic user; for this,
we use the OJDBC driver. We have searched the internet but couldn't find an equivalent PHP driver.
We have no idea how to do proxy authentication in PHP. Any suggestions?

Successful blind squirrels

Tom, August 10, 2012 - 4:27 am UTC

These squirrels are good with database concepts, have easy access to all previous code and to long lists of DBA-vetted checks, all code is subject to DBA review, and most attempt nothing sophisticated; so it may take a few more cycles, but those who learn will become less and less blind, and more and more invested in their own database code.

In this case it is primarily a Java shop, i.e. the database is just a bucket (almost zero PL/SQL), hence application-tier solutions are more supportable than the complexity of EBR / cross-edition triggers / DBMS_REDEFINITION etc., and this wrapper for DDL on active tables suffices most of the time.

Again sufficient appreciation for the knowledge and guidance provided by this site is hard to express, but Thanks!!
