Question and Answer

Tom Kyte

Thanks for the question, Vaidyanathan.

Asked: February 13, 2006 - 12:17 pm UTC

Last updated: June 17, 2011 - 1:35 pm UTC

Version: 9.2.0.6

You Asked

Hi Tom,

I am trying to code a generic exception logger script.
For it to be generic, I intend to pass the concatenated result set of the rows that need to be logged for later use and analysis, and insert it into a CLOB field.

Is there a way to concatenate all the columns of a query row into a single column without having to individually specify
the columns?

For example:

desc dummy
Name Null? Type
----------------------------------------- -------- ----------------------------
SRL_NM NOT NULL NUMBER
STATUS NOT NULL VARCHAR2(20)
LDR_ID_SQ VARCHAR2(4)
SQ_NUM NUMBER


select * from dummy ;

SRL_NM STATUS LDR_ SQ_NUM
---------- -------------------- ---- ----------
123 submitting ldr1 1
23 submitting ldr1 2
22 submitting ldr1 3
45 submitting ldr1 4
1234 submitting ldr1 5
77 submitting ldr1 6
88 submitting ldr1 7

7 rows selected.

Now, I want the output as:

select srl_nm||status||ldr_id_sq||sq_num from dummy;

SRL_NM||STATUS||LDR_ID_SQ||SQ_NUM
--------------------------------------------------------------------------------
123submittingldr11
23submittingldr12
22submittingldr13
45submittingldr14
1234submittingldr15
77submittingldr16
88submittingldr17

7 rows selected.

The catch is that I do not want to hard-code the column list; I want to get this information at run time, say from all_tab_columns, where even the table name is known only at run time.
It is practical to concatenate the individual fields when there are only a few of them.
But what if there are, say, 100 columns that I need to concatenate like this?

Any good pointers in this direction would be highly appreciated.

As always, your site is a boon to the Oracle developer community.

and Tom said...

well, how do you know which "row" to pick?


But, basically, this could be a use case for dbms_xmlgen or one of the XML-generating routines:

scott@ORA10GR2> select dbms_xmlgen.getxml('select * from emp where rownum=1') from dual;

DBMS_XMLGEN.GETXML('SELECT*FROMEMPWHEREROWNUM=1')
-------------------------------------------------------------------------------
<?xml version="1.0"?>
<ROWSET>
<ROW>
<EMPNO>7369</EMPNO>
<ENAME>SMITH</ENAME>
<JOB>CLERK</JOB>
<MGR>7902</MGR>
<HIREDATE>17-DEC-80</HIREDATE>
<SAL>800</SAL>
<DEPTNO>20</DEPTNO>
</ROW>
</ROWSET>


XML permits subsequent processing of this data.


(generic, don't get too carried away, generic can be good - it can be very very very evil as well)
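
As a hedged illustration of that "subsequent processing" point, here is one way the stored XML could later be pulled apart again with standard XML functions. This is only a sketch: log_t and its doc column are hypothetical names, not something from the thread.

-- hypothetical logging table holding one dbms_xmlgen document per row
create table log_t ( id number, doc clob );

-- read individual columns back out of the stored XML
select extractvalue( xmltype(doc), '/ROWSET/ROW/EMPNO' ) empno,
       extractvalue( xmltype(doc), '/ROWSET/ROW/ENAME' ) ename
  from log_t;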


Comments

pushing records into a clob field

Vaidyanathan Kasi, February 14, 2006 - 11:27 am UTC

Thanks for your pointer, Tom... but XML unfortunately is not an option!
OK, I am convinced I was not clear in my requirements :) my apologies.
Let's assume I already know the records that I have to log, meaning I know the where clause (but I don't know how many rows).
I want to record this entire result set, record by record, containing about 30-40 columns, in a CLOB field, maybe with some delimiter, say @.
So if I have, say, 10 records in my result set that I need to log, then I would have 10 records in my logger table, which just has a CLOB field.
I am doing this because in my application there are many instances where deletions happen, driven by business rules.
While I cannot avoid those business-rule deletions, I intend to log the affected records in my logger CLOB table before I delete them, for analysis at a later stage.
It is this process of logging into the CLOB field of the logger table, having known the records that need to be deleted, that I intend to automate so that different applications can use it as a plug-and-play component.
I intend to write this as a PL/SQL procedure.
I hope I am clear with my requirements now.
Your inputs and pointers in accomplishing this, as always, would be highly appreciated.

Tom Kyte
February 14, 2006 - 1:22 pm UTC

... XML unfortunately is not an option ...

ummm - yes. it. is.
it sure is.
it absolutely 100% is.

You have just ruled it out for whatever unknown reason - your choice. It is an option, but one you are choosing not to take.


You'll have to write the code to do what you want - I gave you what I believe to be the correct answer for your requirement.

I don't know how you are "getting these 10 rows in a result set", or where this result set comes from, so I cannot really go "further".

resultset to clob

Vaidyanathan Kasi, February 14, 2006 - 2:05 pm UTC

Wow... I probably was thinking one-dimensionally.
You are right, DBMS_XMLGEN seems to be a viable option.
Thanks for the pointer; I will try along these lines.
As always, it's a privilege to be interacting with you.
I have a question here: I presume dbms_xmlgen.getxml (the function that returns a CLOB) gives the output as one CLOB for any number of records returned by a query.
Is it possible to split this into multiple CLOBs, one for each record returned by the query?

say :
CREATE TABLE DUMMY
(
SRL_NM NUMBER,
STATUS VARCHAR2(20 BYTE),
LDR_ID_SQ VARCHAR2(4 BYTE),
SQ_NUM NUMBER
);
ALTER TABLE DUMMY ADD (
PRIMARY KEY (SRL_NM, STATUS));

CREATE TABLE Target_clob_table
(
EXCEPTION_SQ NUMBER(15) NOT NULL,
RECORD_NUM NUMBER(15) NOT NULL,
EXCEPTION_BLOB CLOB NOT NULL,
ERROR_MSG VARCHAR2(255 BYTE) NOT NULL,
LAST_MOD_SIGNON_ID VARCHAR2(20 BYTE),
LAST_MOD_DATE_TIME DATE,
REJECT_TABLE_NM VARCHAR2(40 BYTE)
);

INSERT INTO DUMMY ( SRL_NM, STATUS, LDR_ID_SQ, SQ_NUM ) VALUES (
123, 'submitting', 'ldr1', 1);
INSERT INTO DUMMY ( SRL_NM, STATUS, LDR_ID_SQ, SQ_NUM ) VALUES (
23, 'submitting', 'ldr1', 2);
INSERT INTO DUMMY ( SRL_NM, STATUS, LDR_ID_SQ, SQ_NUM ) VALUES (
22, 'submitting', 'ldr1', 3);
INSERT INTO DUMMY ( SRL_NM, STATUS, LDR_ID_SQ, SQ_NUM ) VALUES (
45, 'submitting', 'ldr1', 4);
INSERT INTO DUMMY ( SRL_NM, STATUS, LDR_ID_SQ, SQ_NUM ) VALUES (
1234, 'submitting', 'ldr1', 5);
INSERT INTO DUMMY ( SRL_NM, STATUS, LDR_ID_SQ, SQ_NUM ) VALUES (
77, 'submitting', 'ldr1', 6);
INSERT INTO DUMMY ( SRL_NM, STATUS, LDR_ID_SQ, SQ_NUM ) VALUES (
88, 'submitting', 'ldr1', 7);
COMMIT;

My requirement is to push the 7 records from dummy as 7 records into target_clob_table, one for each record, and not as one single chunk for the entire result set.

Thanks.
I would appreciate it if you just gave me the relevant pointers, as I would not prefer to be spoon-fed; I want to try it out myself if you say it's possible.
I hope that by the time I get a reply from you, I will have done my own little R&D on the feasibility.

Thanks.

Tom Kyte
February 14, 2006 - 2:32 pm UTC

you would run the query multiple times - but you likely just want "a clob"; this is "generic error handling", and later you can query the XML as a relational set (getting the rows back out) - so it seems "perfect"
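
A minimal sketch along those lines, using the DUMMY and TARGET_CLOB_TABLE definitions above. The literal concatenation of the key values and the plain counter in place of a sequence are simplifications for illustration only, not a recommendation:

declare
    l_rec number := 0;
begin
    for r in ( select srl_nm, status from dummy )   -- the rows you decided to log
    loop
        l_rec := l_rec + 1;
        insert into target_clob_table
            ( exception_sq, record_num, exception_blob,
              error_msg, last_mod_date_time, reject_table_nm )
        values
            ( l_rec,                                 -- a sequence would normally go here
              l_rec,
              dbms_xmlgen.getxml(
                  'select * from dummy where srl_nm = ' || r.srl_nm ||
                  ' and status = ''' || r.status || '''' ),
              'logged before delete',
              sysdate,
              'DUMMY' );
    end loop;
    commit;
end;
/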

column concatenation at run time

vaidyanathan kasi, February 24, 2006 - 1:52 pm UTC

Hi Tom ,
I wrote the code taking the XML route as per your suggestion and it rocks :)

Thanks a ton for the pointer.
I was thinking one-dimensionally to begin with, and you opened my eyes to newer realms of Oracle.

Vaidy

trigger based audit trail

vaidyanathan kasi, April 05, 2006 - 11:08 am UTC

Hi Tom ,

Good day !
This is a trigger-based audit trail question, and my apologies for asking it in this thread.
But as it is increasingly difficult to get the "ask me your questions now" status on your site (I have been waiting patiently for several days now!),
I thought I could use the thread where I got a response from you.

We intend to build audit trail tables corresponding to each of our transactional tables.
That is, each core transactional table would have its own audit table with the extra columns: who_updated, when_updated, operation_type.
We intend to create the audit records based on the following logic:
if it is an update, fire an after-row trigger that inserts 2 records into the audit table: the first record for the before image with operation_type = I (for insert),
and the second record for the after image with operation_type = U.
Subsequent updates to the same record would insert only one entry into the audit table, with operation_type = U.
That is, assuming the initial value of column A = A,
and column A was updated for the first time with value B, then
the audit table would have 2 records inserted: one with column A = A and operation_type = I,
and a second with column A = B and operation_type = U.
Later in the day, if the same column A value becomes C, then only one record would be inserted into the audit table:
column A = C with operation_type = U.

Now, my questions are:
1) Is this a good approach? Any suggestions about other possible approaches would be highly useful.
2) I am thinking of implementing the inserts into the audit table (after the trigger is fired) in a procedure rather than
coding them directly in the trigger body.
But my problem is: how do I send the entire old and new records to the procedure (because even for a single column change, I need to record the entire record in the audit table)? Or is it overkill?

Thanks and much appreciated ,
Vaidy

Tom Kyte
April 06, 2006 - 9:06 am UTC

1) seems like it would take a lot of work to figure out if the record had ever been updated before. why not just pre-populate the audit trail with a copy of the existing data and just do the "U" stuff into the future.

2) you would have to send the individual attributes as formal parameters. You can use sql to write sql to help automate the trigger generation:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:59412348055


trigger based audit trail

vaidyanathan kasi, April 06, 2006 - 1:40 pm UTC

Hi Tom ,
Thanks for your response.
However, I can elaborate more on this.
I intend to make the distinction of which rows to audit via the "modified_by" column in my main table (e.g. table A, the table to be audited).
For initial data loads, I would stamp the records inserted into A with modified_by = 'DATA_MAINT'.
My trigger would not fire for initial data loads, as it is not necessary for the application to track them.
So my trigger logic would exclude modified_by = 'DATA_MAINT'.
For subsequent updates during the day, after the initial rows have been inserted, my after-update trigger would
fire and insert 2 records (one with the entire old record and the other with the entire new record), as depicted in my previous post.
My trigger would now fire because any update during the day (to any column) would cause "modified_by" to be the user_id
of whoever made the modification via the front end.
Subsequent updates would all cause the trigger to fire, as "modified_by" is no longer 'DATA_MAINT'.
So, tracking the rows is not going to be a problem, I think.

This is my idea for now.
Now, is it a good approach?

Your link deals with a common audit table scenario, which is not the route we intend to take, as we are going to have a
corresponding audit table per main table.

Also, I was thinking of doing these inserts into my audit table in a packaged procedure rather than in the trigger body, for the reasons below:

1) I think the number of parses is reduced when done from a procedure as opposed to a trigger.
2) Tomorrow, if I were to add some more logic, that logic can live in the procedure.

Here is where my second question comes in:
If I have to do the DMLs in my procedure, then I need to know the entire :old and :new records in my procedure.
How do I achieve this? Or is it overkill to call a procedure instead of doing this in the trigger, where the
details of :old and :new are readily at hand?

Your inputs, as always, would be highly useful.

Vaidy

Tom Kyte
April 07, 2006 - 4:16 pm UTC

is it a good approach? either that (the trigger) OR in a transactional API (stored procedure) OR using streams to mine the redo information after the fact. Any of those will work. The trigger will impact runtime performance the most of course.


I showed you how to automate the generation of the trigger to pass the :new and :old above - there is no way to pass the record, you need to pass the attributes.
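
To make "pass the attributes" concrete, here is a hedged sketch of what a generated trigger and its logging procedure might look like, reusing the DUMMY table from earlier in this thread. DUMMY_AUDIT and the procedure name are hypothetical, and the "insert the 'I' image only on the first update" and modified_by checks described above are left out for brevity:

-- hypothetical audit table: DUMMY's columns plus who/when/operation
create table dummy_audit as
select d.*, user who_updated, sysdate when_updated, 'U' operation_type
  from dummy d
 where 1 = 0;

create or replace procedure dummy_audit_ins(
    p_op        in varchar2,
    p_srl_nm    in dummy.srl_nm%type,
    p_status    in dummy.status%type,
    p_ldr_id_sq in dummy.ldr_id_sq%type,
    p_sq_num    in dummy.sq_num%type )
as
begin
    insert into dummy_audit
        ( srl_nm, status, ldr_id_sq, sq_num,
          who_updated, when_updated, operation_type )
    values
        ( p_srl_nm, p_status, p_ldr_id_sq, p_sq_num,
          user, sysdate, p_op );
end;
/

create or replace trigger dummy_aud_trg
after update on dummy
for each row
begin
    -- each attribute is passed individually; there is no ":old record" parameter
    dummy_audit_ins( 'I', :old.srl_nm, :old.status, :old.ldr_id_sq, :old.sq_num );
    dummy_audit_ins( 'U', :new.srl_nm, :new.status, :new.ldr_id_sq, :new.sq_num );
end;
/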

simple question

matt, April 15, 2006 - 7:09 pm UTC

Q: Of mean, mode, and median, which will change most when you go from a small to a large sample size? Thanks!

Tom Kyte
April 16, 2006 - 7:25 am UTC

any of them.

median - middle one or average of middle two of a set
mode - most frequently occurring
mean - average

they will all be affected by different discrete values.
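
For the SQL-minded, a hedged sketch of computing all three from a hypothetical table t(x number) using standard aggregates:

-- mean and median
select avg(x)                                          as mean,
       percentile_cont(0.5) within group (order by x)  as median
  from t;

-- mode: the most frequently occurring value(s)
select x as mode
  from ( select x, rank() over (order by count(*) desc) rnk
           from t
          group by x )
 where rnk = 1;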


Finding current partition dynamically

vaidyanathan kasi, April 18, 2006 - 3:57 pm UTC

Hi Tom ,
My requirement is to find the latest used partition for a table.
Say table A is range partitioned monthly based on a date column.
Table A has millions of rows (currently 250 million), and every month 20 million rows get added (meaning every monthly partition holds about 20 million rows).
Sample high_value output from all_tab_partitions for one of the partitions: TO_DATE(' 2004-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
As part of our requirements, we don't want to analyze all partitions of this huge table every month, as it takes too much time for the analyze to complete.
Hence, we just want to identify the latest partition dynamically (the partition where the 20 million rows got added) and analyze only that partition.

But it is known that one cannot use the "long" columns of the data dictionary views (here, the high_value column of all_tab_partitions) as limiting criteria,
since LONG manipulations are not permitted.

Any suggestions on a way to identify the latest partition in use would be highly useful.

My idea so far is to create a temporary table that has a CLOB column, apply the to_lob function
to the high_value column, and populate the temp table:

CREATE TABLE PART_INFO
(
PARTNAME VARCHAR2(30 BYTE),
HIGH_VALUE CLOB DEFAULT empty_clob()
);

insert into part_info(partname,high_value)
select partition_name,to_lob(high_value) from all_tab_partitions ;

commit;

Once I have the information as a CLOB in part_info, it would be easy to manipulate.

But one catch here is that if additional partitions get created on table A (as I am sure they will in future),
then there must be a way to refresh the part_info table automatically to reflect the change, without any intervention.

I tried to create a view by casting the high_value to a CLOB along the lines below:
create view vw(partname,high_val) as select partition_name,cast(high_value as clob) from all_tab_partitions ;

Now, I am not sure how much of this is making sense!?
Your inputs would be highly appreciated.

Thanks

Tom Kyte
April 18, 2006 - 4:03 pm UTC

how about using dbms_stats to gather stats on partitions without them? that seems safe - you can query the dictionary easily to find that.

makes more sense I believe than "the last one", you want stats on all partitions, so find those without and do them?
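
A hedged sketch of that dictionary-driven approach, assuming the partitioned table is called A and that "never analyzed" shows up as a NULL last_analyzed:

-- which of this table's partitions have never been analyzed?
select partition_name
  from user_tab_partitions
 where table_name = 'A'
   and last_analyzed is null;

-- gather stats on just those partitions
begin
    for p in ( select partition_name
                 from user_tab_partitions
                where table_name = 'A'
                  and last_analyzed is null )
    loop
        dbms_stats.gather_table_stats(
            ownname     => user,
            tabname     => 'A',
            partname    => p.partition_name,
            granularity => 'PARTITION',
            cascade     => true );
    end loop;
end;
/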

Finding current partition dynamically

vaidyanathan kasi, April 18, 2006 - 4:15 pm UTC

I am intending to use dbms_stats to analyze :
dbms_stats.gather_table_stats(P_OWNER_NAME,P_OBJECT_NAME,ESTIMATE_PERCENT => dbms_stats.auto_sample_size ,PARTNAME=>'P01',METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE 1',DEGREE => dbms_stats.default_degree,GRANULARITY => 'ALL',CASCADE => TRUE);
But I need to automate the "PARTNAME" attribute so that it analyzes only the current partition, and not all partitions, without having to hard-code the partition name.

How can I achieve that ?

Thanks

Tom Kyte
April 18, 2006 - 6:15 pm UTC

did you see above?

analyze on partitioned tables

vaidyanathan kasi, June 02, 2006 - 2:01 pm UTC

Hi Tom ,
I have a perplexing problem with partitioned table performance.
Here is the scenario:
We have certain daily tables that are partitioned by month on a date column.
Earlier we used to run our scheduled analyze jobs on these tables using dbms_stats every day for our daily load processes.
We faced a problem with this approach when we reached a new month, since data starts to get loaded into the new partition from the beginning of the new month onwards.
On the first day of every month, our application started to perform poorly, since it accesses the data from the new partition.
It gets back to normal on subsequent days of the month (from the 2nd day onwards until it reaches the next month).
Our resident DBAs determined that since the analyze jobs were doing an estimate statistics (we cannot afford a compute, since most of our partitioned daily tables are very large),
they were gathering statistics on only approximately 1% of the rows of the total volume (we let Oracle decide the optimum estimate_percent and it apparently chooses 1%).
So it was determined that it is quite possible that the data in the new partition (at the beginning of a new month) was not part of that 1%, and the new partition could have been left out during statistics gathering.
After all this analysis, we decided that we would analyze the latest (current) partition on these daily partitioned tables, and so we modified our scripts to analyze just the current partition.
So, in a gist: after data is loaded into the new partition at the beginning of the month, the corresponding partition and its local indexes also get analyzed.
Here is the dbms_stats call that we are using for this:
dbms_stats.gather_table_stats(upper(P_OWNER_NAME),upper(P_OBJECT_NAME),ESTIMATE_PERCENT => dbms_stats.auto_sample_size,METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE 1',PARTNAME => <current_partition_name> ,DEGREE => dbms_stats.default_degree,GRANULARITY => 'PARTITION' , CASCADE => TRUE);

But this does not seem to be helping us either, and all our queries against these analyzed partitioned tables are performing very slowly.
The interesting thing to note is that this happens only on the first day of the month; from the second day onwards until the next month (next partition), the application gets back to normal!!
This has become a serious bottleneck for us on every first day of the month.
I would highly appreciate it if you could please give me some inputs or pointers on this, as it is becoming very perplexing to me even though I am trying everything from my end.

Thanks,
Vaidy

Tom Kyte
June 02, 2006 - 3:06 pm UTC

sounds like the statistics you gather on day one need to be LOOKED AT.

Have you even looked at them to see if they are accurate?

analyze

vaidyanathan kasi, June 02, 2006 - 3:16 pm UTC

"sounds like the statistics you gather on day one need to be LOOKED AT. "
The statistics that we gather on day one and any other day uses the same method :
dbms_stats.gather_table_stats(upper(P_OWNER_NAME),upper(P_OBJECT_NAME),ESTIMATE_PERCENT => dbms_stats.auto_sample_size,METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE 1',PARTNAME => <current_partition_name> ,DEGREE => dbms_stats.default_degree,GRANULARITY => 'PARTITION' , CASCADE => TRUE);
Depending on the current partition where the data goes , the partition name would just be changing which is the first day of every month I am sure .
Then , why is it only affecting the queries run on the first day whereas on any other day , the same queries are working as they should normally ...

Have you even looked at them to see if they are accurate?
How can I look and verify if the statistics are accurate/inaccurate ?
is there a way ?

Please let me know as this is eating my brain ....

Tom Kyte
June 02, 2006 - 4:19 pm UTC

You query the data dictionary!


user_tab_partitions (num rows and such)
user_ind_partitions
user_part_histograms

no examples of how query plans change (that would be interesting, the explain plan from a query on day 1 versus day 2). Throw us a bone here.
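
For example (a sketch only; A and P_CURRENT are placeholder names), the recorded statistics can be compared against reality like this:

-- what the optimizer believes about each partition
select partition_name, num_rows, blocks, last_analyzed
  from user_tab_partitions
 where table_name = 'A'
 order by partition_position;

-- spot-check the newly loaded partition against an actual count
select count(*) from a partition (p_current);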

Analyze partition

vaidyanathan kasi, June 05, 2006 - 11:25 am UTC

Hi Tom ,
We have granularity=PARTITION in our dbms_stats analyze routines on all of our partitioned tables.
All queries in our application use bind variables (rightly so, I would think), and so the optimizer cannot determine the exact partition
to hit until run time (pstart=pstop=KEY), which is 100% agreeable.
I saw in one of your threads/replies that:
since the optimizer cannot determine the partition until run time, we would have to keep even the global statistics on the partitioned tables updated, as
Oracle would use them instead of the partition statistics.
So I am thinking our dbms_stats calls would have to use GRANULARITY=DEFAULT instead of just GRANULARITY=PARTITION.
My question is:
Would it suffice if the dbms_stats call is modified to PARTNAME => '<current_partition>', GRANULARITY => DEFAULT for keeping the global stats updated, or does anything else need to (or can) be done?
Will doing the above ensure that the global stats as well as the current partition stats are updated?
Could you let me know how I can verify this?
On what factor does the "degree =>" attribute depend?
We currently have the following setting on our database:
parallel_max_servers = 5
But the partitioned tables were all built with a degree of 1.
So I think the degree attribute of dbms_stats would not have any effect unless the corresponding tables have degree > 1, by issuing:
alter table table_name parallel ;
Please correct me if I am wrong.

Thanks,
Vaidy

Tom Kyte
June 05, 2006 - 12:14 pm UTC

well, it is not true that local statistics would not be used here. You are using binds. We will bind peek. We will upon bind peeking discover that "hey, this'll hit a single partition". And we'll use the stats on that one partition.

Which could be

a) good
b) bad
c) indifferent

as is the case with peeking in general.


are you

1) a warehouse (parallel - seems maybe yes?)
2) a transactional system

warehouses do not always use binds for the "big queries"


A little evidence that bind peeking will cause the use of local stats:


CREATE TABLE t
(
dt date,
x int,
y char(40) default 'x'
)
PARTITION BY RANGE (dt)
(
PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
PARTITION junk VALUES LESS THAN (MAXVALUE)
)
/

insert /*+ append */ into t
select to_date( '12-mar-2003' ) + mod(rownum,3),
mod(rownum,3),
'x'
from all_objects;

create index t_idx on t(x) local;

exec dbms_stats.gather_table_stats( user, 'T' );

@trace
declare
l_bind date := to_date( '12-mar-2003' );
begin
for x in ( select * from t where x = 0 and dt = l_bind )
loop
exit;
end loop;
for x in ( select * from t where x = 1 and dt = l_bind )
loop
exit;
end loop;
for x in ( select * from t where x = 0 and dt = (select l_bind from dual) )
loop
exit;
end loop;
for x in ( select * from t where x = 1 and dt = (select l_bind from dual) )
loop
exit;
end loop;
end;
/



SELECT * FROM T WHERE X = 0 AND DT = :B1

Rows Row Source Operation
------- ---------------------------------------------------
100 PARTITION RANGE SINGLE PARTITION: KEY KEY (cr=4 pr=0 pw=0 time=495 us)
100 TABLE ACCESS FULL T PARTITION: KEY KEY (cr=4 pr=0 pw=0 time=280 us)
********************************************************************************
SELECT * FROM T WHERE X = 1 AND DT = :B1

Rows Row Source Operation
------- ---------------------------------------------------
0 PARTITION RANGE SINGLE PARTITION: KEY KEY (cr=2 pr=0 pw=0 time=173 us)
0 TABLE ACCESS BY LOCAL INDEX ROWID T PARTITION: KEY KEY (cr=2 pr=0 pw=0 time=131 us)
0 INDEX RANGE SCAN T_IDX PARTITION: KEY KEY (cr=2 pr=0 pw=0 time=116 us)(object id 67363)

It must have peeked, else it would have either always full scanned or always not full scanned... If we remove the ability entirely to peek:

********************************************************************************
SELECT * FROM T WHERE X = 0 AND DT = (SELECT :B1 FROM DUAL)

Rows Row Source Operation
------- ---------------------------------------------------
100 PARTITION RANGE SINGLE PARTITION: KEY KEY (cr=4 pr=0 pw=0 time=607 us)
100 TABLE ACCESS FULL T PARTITION: KEY KEY (cr=4 pr=0 pw=0 time=385 us)
1 FAST DUAL (cr=0 pr=0 pw=0 time=3 us)
********************************************************************************
SELECT * FROM T WHERE X = 1 AND DT = (SELECT :B1 FROM DUAL)

Rows Row Source Operation
------- ---------------------------------------------------
0 PARTITION RANGE SINGLE PARTITION: KEY KEY (cr=133 pr=0 pw=0 time=1205 us)
0 TABLE ACCESS FULL T PARTITION: KEY KEY (cr=133 pr=0 pw=0 time=1176 us)
1 FAST DUAL (cr=0 pr=0 pw=0 time=4 us)


we can see that is true


analyze partition

vaidyanathan kasi, June 05, 2006 - 1:19 pm UTC

Hi Tom,
Please don't feel bad that I keep coming back to the same point!
I want to get a grip on this issue and learn better ways to deal with the global and partition stats concepts.
I must admit that I did not get the gist of what you are trying to convey.
What is bind peeking, by the way?
From what I can understand, we have to have the global stats on the partitioned tables updated every time new rows are loaded into partitions.
We are a data warehouse application.
We have seen cases here where, if we just analyze the current partition every day (after data loads complete), it does not seem to
update the global table stats.
Our queries use bind variables and are issued from the web front end, so hard-coding dates is not an option.
I think that if a date is hard-coded, then the optimizer can determine the partition 100% and use the partition stats. Am I correct?
So we end up analyzing the table completely (using granularity => ALL), which usually solves our performance problems.
But it is not feasible to do a complete analyze every day, since our partitioned tables are very large (300-400 million records) and it would take ages (approx. 4 hrs per table) for the analyze to complete.
Is it necessary at all to do the full analyze every day? If we are forced to do it this way, then what is the purpose of collecting
current partition statistics, which are not helping anyway?
Or
Is it enough to have the global stats updated once a month (end of month) and continue to gather and update partition-level stats?

thanks,
Vaidy

Tom Kyte
June 05, 2006 - 2:27 pm UTC

search site for bind peeking, lots of talks.


You don't *have* to have anything.


I showed that even if the partition key is "not" hard coded, but rather bound in - the optimizer will PEEK at the bind the first time and skip the global statistics and use local statistics.

Now, global statistics would be maintained, but they cannot be "as accurate as possible"

For, if you add 100 rows - we can figure out there are now "X+100" rows (where X was the previous global statistic)

However, if you add 100 rows - and there was a column Y, and before the load - Y had 500 distinct values - how many distinct values does Y have now?

The answer (unknown to us) is between 500 and 600 somewhere - you cannot roll that kind of data up from local statistics to global.


You might never need to update global statistics using "gather", you might well be able to get away with SETTING them using dbms_stats (presuming you know what they are - the new endpoints for example for that column Y - you likely know "about how many distinct values" there are)
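
A hedged sketch of that "set instead of gather" idea; the table name A, the numbers, and the column name Y below are placeholders carried over from the discussion, not real values:

begin
    -- set table-level (global) stats you already know, instead of re-gathering them
    dbms_stats.set_table_stats(
        ownname => user,
        tabname => 'A',
        numrows => 270000000,           -- e.g. previous count plus this month's 20 million
        numblks => 4000000 );           -- whatever you know the block count to be

    -- and, if needed, column-level stats such as the number of distinct values
    dbms_stats.set_column_stats(
        ownname => user,
        tabname => 'A',
        colname => 'Y',
        distcnt => 550 );               -- your estimate of distinct values after the load
end;
/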

analyze on partitioned tables

vaidyanathan kasi, June 06, 2006 - 12:31 pm UTC

Hi Tom ,

I am afraid I have to disagree with you a little.
I don't see the behaviour that you just mentioned:

"You don't *have* to have anything.


I showed that even if the partition key is "not" hard coded, but rather bound in
- the optimizer will PEEK at the bind the first time and skip the global
statistics and use local statistics."

Every day, as part of the data loads, new rows get added to the current partition, and our dbms_stats job
analyzes just the current partition immediately after the data load is complete:
1) dbms_stats.gather_table_stats(upper(P_OWNER_NAME),upper(P_OBJECT_NAME),ESTIMATE_PERCENT => dbms_stats.auto_sample_size,METHOD_OPT => 'FOR ALL COLUMNS SIZE auto',PARTNAME => var_partition_name ,DEGREE => 4,GRANULARITY => 'PARTITION' , CASCADE => TRUE);

But it does not help the query performance unless we do the following:
2) dbms_stats.gather_table_stats(upper(P_OWNER_NAME),upper(P_OBJECT_NAME),ESTIMATE_PERCENT => dbms_stats.auto_sample_size,METHOD_OPT => 'FOR ALL COLUMNS SIZE auto',PARTNAME => var_partition_name ,DEGREE => 4,GRANULARITY => 'DEFAULT' , CASCADE => TRUE);

Please note the change between the above 2 statements (granularity).
When we eventually execute dbms_stats using 2), the queries start to perform well.
In fact, I can clearly see that the query plans change between the 1) and 2) calls.
From what I noticed, what 2) does is gather statistics on the current partition and also update the global statistics.
The way I verified that it is indeed updating global statistics is by querying all_tables:
select last_analyzed,num_rows from all_tables where owner=<owner> and table_name=<table_name>;
last_analyzed reflects the time that I executed 2), whereas last_analyzed does not change when I execute 1).

It would be great if you could throw more light on this behaviour.

Apart from this, I have 2 other questions:

In the above dbms_stats calls 1) and 2), the "DEGREE => 4" attribute does not seem to speed up the execution (it takes the same time), even though I set the following before executing dbms_stats:

alter table upper(P_OBJECT_NAME) parallel(degree 4 instance default) ;

I read in some thread that degree => null would mean that the dbms_stats call would parallelize according to the table's degree, as indicated above.
But this does not work either.
By the way, my init parameter is: parallel_max_servers = 5
What could be the reason it is not speeding up the process?

I am noticing strange behaviour in our partitioned tables.

Many of our partitioned tables have the following scenario (taken from all_tab_partitions):

partition_name => P_M0506
high_value => TO_DATE(' 2006-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
num_rows => 4654707
blocks => 68370

partition_name => P_M0606
high_value => TO_DATE(' 2006-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
num_rows => 0
blocks => 0

Now, if I understand correctly, P_M0506 would be filled for the month of May 2006, since its high_value is defined to be less than TO_DATE(' 2006-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN').
For the month of June 2006, the data should have gone into P_M0606, correct?

But I am not sure whether the June data has actually used the June partition, since num_rows and blocks = 0 for P_M0606!!!
It appears to be continuing to put data into the May partition (P_M0506) instead!!

I think this is wrong.
Please comment on this?

I know these are detailed queries.
But you can be assured that I am asking them after having done enough R&D on them, and I can't figure out the
rationale behind these discrepancies.
Obviously, Google did not help.

A detailed explanation would be greatly appreciated, as I am unable to get much help from anywhere else!

Thanks,
Vaidy

Tom Kyte
June 06, 2006 - 1:46 pm UTC

I said that you don't have to do ANYTHING.
there is nothing you HAVE to do.

well, short of understanding how the bits fit together, what each thing is doing, what set of statistics are likely being used.

In your case (no examples really), you are likely using global stats, not local, and the maintenance of the global stats is relevant to you.

Now, the question you need to answer (for yourself) is - can we just SET these statistics (because you know them) or must you actually gather them.

analyze question

vaidyanathan kasi, June 06, 2006 - 2:08 pm UTC

Hi Tom,
That was just part of the question that you tried to answer!
I do realise that you are busy every day and you do a wonderful service to the developer community!
I am indeed collecting global stats for the table (since I can see the last_analyzed column changing) as well as the current partition stats (the last_analyzed timing reflects the time I ran dbms_stats).
But I would appreciate it if you replied after reading through all of the questions that I asked.
Where possible, I have tried to explain things in detail from my end to support my understanding of the current behaviour.
Please do let me know if you need any other information from me.
I don't ask questions unless I have done my homework in trying various things.
I thought of approaching you since I know I can get responses that are sensible and logical only from you.
But it appears that you did not have the time to read through my entire post.
If you feel you can add more value to what you have replied, please do so, and it will be highly appreciated.

Thanks,
Vaidy

Tom Kyte
June 06, 2006 - 3:19 pm UTC

I'm getting confused - you have basically said:

we need global stats to be gathered, else the optimizer does not have sufficient information to get the correct plan - this is seen to be true since when we only collect LOCAL stats - it is not sufficient, we must gather stats global and local.


(that is what I've distilled your long post above down to).

So, you can either

a) gather global stats as you are
b) set them if you know them, to avoid having to gather them

as for your "what partition does the data go into" - I would suggest you set up a small example, insert some data and test it out?


Else, the data would appear to have to go where you said.

As always - a test case is useful, set up a small simulation of your stuff and show us what happens.

It could just be that you loaded june after gathering stats for all we know here.

a different question regarding CBO

vaidyanathan kasi, July 25, 2006 - 11:33 am UTC

Hi Tom,
We are on Oracle version 9.2.0.6, a 24-CPU database server, and a RAC environment with 2 nodes.
We have a query that uses bind variables.
What is strange about this query is that even though the explain plan shows it using an index on MV_SLD_DAILY_LOANS (IDX_MV_SLD_DAILY_LOANS),
adding the same index hint explicitly to the query (with obviously no change in the explain plan) makes the query perform faster.
Is this not redundant information supplied to the optimizer when it is already shown to be using the index?

Query :
SELECT
mvscd.client_cut_id as CLIENT_CUT_ID,
mvscd.client_name as CLIENT_NAME,
mvscd.custom_group_id as GROUP_ID,
mvscd.custom_group_name as GROUP_NAME,
MVSDL.PRODUCT_NM as PRODUCT_NAME,
MVSDL.PRODUCT_KEY PRODUCT_KEY,
MVSDL.DOM_INTL_FLG as DI_CODE,
mvscd.REP_CURR_CODE,
mvscd.GMT_OFFSET_NUM as GMT_OFFSET,
mvscd.TIME_ZONE_DESC_TXT as TIME_ZONE_DESC,
mvscd.LOWEST_DRILL_DOWN_LEVEL,
mvscd.REPORT_NAME,
SLDCD.SLPR_FORMATTED_DATE_TITLE as FREQ_TITLE,
SLDER.EXCHANGE_RT as EXCH_RATE,
MVSDL.COLL_CCY_ID as COLL_CURR_CODE,
SUM(NVL(MVSDL.CASH_CONT_AMT,0)) AS CASH_CONT_AMT,
SUM(NVL(MVSDL.NONCASH_CONT_AMT,0)) AS NONCASH_CONT_AMT,
SUM(NVL(MVSDL.CASH_CONT_AMT*SLDER.EXCHANGE_RT,0)) AS BASE_CASH_CONT_AMT,
SUM(NVL(MVSDL.NONCASH_CONT_AMT*SLDER.EXCHANGE_RT,0)) AS BASE_NONCASH_CONT_AMT,
SUM(NVL(MVSDL.MARK_AMT,0)) MARK_AMT,
SUM(NVL(MVSDL.MARK_AMT*SLDER.EXCHANGE_RT,0)) BASE_MARK_AMT,
SUM(NVL(MVSDL.MARK_VALUE_AMT,0)) MARK_VAL,
SUM(NVL(MVSDL.MARK_VALUE_AMT*SLDER.EXCHANGE_RT,0)) BASE_MARK_VAL,
SUM(NVL(MVSDL.MKT_VAL_AMT,0)) TOT_MKT_VAL,
SUM(NVL(MVSDL.MKT_VAL_AMT*SLDER.EXCHANGE_RT,0)) BASE_TOT_MKT_VAL
FROM MV_SLD_LENDER_GROUP_FUND SLDLGF,
mv_slpr_client_data mvscd,
CALENDAR_DATE SLDCD,
EXCHANGE_RATE SLDER,
MV_SLD_DAILY_LOANS MVSDL
WHERE mvscd.client_cut_id = :client_cut_id --4131
AND mvscd.report_id = :report_id --6
AND SLDCD.CALENDAR_DT = :CALENDAR_DT --'10-Jul-2006'
AND mvscd.GROUP_ID = SLDLGF.FUND_GROUP_SHORT_ID
AND SLDLGF.FUND_ID = MVSDL.FUND_ID
AND SLDLGF.COLL_CCY_ID = MVSDL.COLL_CCY_ID
AND SLDLGF.SETTLE_CCY_ID = MVSDL.SETTLE_CCY_ID
AND SLDLGF.BUS_LINE_CD = MVSDL.BUS_LINE_CD
AND MVSDL.CALENDAR_DT = SLDCD.CALENDAR_DT
AND SLDER.CALENDAR_DT = MVSDL.CALENDAR_DT
AND SLDER.FROM_CCY_ID = MVSDL.COLL_CCY_ID
AND SLDER.TO_CCY_ID = mvscd.REP_CURR_CODE
GROUP BY
mvscd.client_name,
mvscd.client_cut_id,
mvscd.custom_group_id,
mvscd.custom_group_name,
MVSDL.PRODUCT_NM,
MVSDL.PRODUCT_KEY,
MVSDL.DOM_INTL_FLG,
SLDER.EXCHANGE_RT,
MVSDL.COLL_CCY_ID,
mvscd.REP_CURR_CODE,
mvscd.GMT_OFFSET_NUM,
mvscd.TIME_ZONE_DESC_TXT,
mvscd.REPORT_NAME,
SLDCD.SLPR_FORMATTED_DATE_TITLE,
mvscd.LOWEST_DRILL_DOWN_LEVEL
ORDER BY
mvscd.client_name,
mvscd.custom_group_id,
MVSDL.DOM_INTL_FLG,
MVSDL.PRODUCT_NM ;

Query using index hint :

SELECT /*+ index(MVSDL,IDX_MV_SLD_DAILY_LOANS) */
mvscd.client_cut_id as CLIENT_CUT_ID,
mvscd.client_name as CLIENT_NAME,
mvscd.custom_group_id as GROUP_ID,
mvscd.custom_group_name as GROUP_NAME,
MVSDL.PRODUCT_NM as PRODUCT_NAME,
MVSDL.PRODUCT_KEY PRODUCT_KEY,
MVSDL.DOM_INTL_FLG as DI_CODE,
mvscd.REP_CURR_CODE,
mvscd.GMT_OFFSET_NUM as GMT_OFFSET,
mvscd.TIME_ZONE_DESC_TXT as TIME_ZONE_DESC,
mvscd.LOWEST_DRILL_DOWN_LEVEL,
mvscd.REPORT_NAME,
SLDCD.SLPR_FORMATTED_DATE_TITLE as FREQ_TITLE,
SLDER.EXCHANGE_RT as EXCH_RATE,
MVSDL.COLL_CCY_ID as COLL_CURR_CODE,
SUM(NVL(MVSDL.CASH_CONT_AMT,0)) AS CASH_CONT_AMT,
SUM(NVL(MVSDL.NONCASH_CONT_AMT,0)) AS NONCASH_CONT_AMT,
SUM(NVL(MVSDL.CASH_CONT_AMT*SLDER.EXCHANGE_RT,0)) AS BASE_CASH_CONT_AMT,
SUM(NVL(MVSDL.NONCASH_CONT_AMT*SLDER.EXCHANGE_RT,0)) AS BASE_NONCASH_CONT_AMT,
SUM(NVL(MVSDL.MARK_AMT,0)) MARK_AMT,
SUM(NVL(MVSDL.MARK_AMT*SLDER.EXCHANGE_RT,0)) BASE_MARK_AMT,
SUM(NVL(MVSDL.MARK_VALUE_AMT,0)) MARK_VAL,
SUM(NVL(MVSDL.MARK_VALUE_AMT*SLDER.EXCHANGE_RT,0)) BASE_MARK_VAL,
SUM(NVL(MVSDL.MKT_VAL_AMT,0)) TOT_MKT_VAL,
SUM(NVL(MVSDL.MKT_VAL_AMT*SLDER.EXCHANGE_RT,0)) BASE_TOT_MKT_VAL
FROM MV_SLD_LENDER_GROUP_FUND SLDLGF,
mv_slpr_client_data mvscd,
CALENDAR_DATE SLDCD,
EXCHANGE_RATE SLDER,
MV_SLD_DAILY_LOANS MVSDL
WHERE mvscd.client_cut_id = :client_cut_id --4131
AND mvscd.report_id = :report_id --6
AND SLDCD.CALENDAR_DT = :CALENDAR_DT --'10-Jul-2006'
AND mvscd.GROUP_ID = SLDLGF.FUND_GROUP_SHORT_ID
AND SLDLGF.FUND_ID = MVSDL.FUND_ID
AND SLDLGF.COLL_CCY_ID = MVSDL.COLL_CCY_ID
AND SLDLGF.SETTLE_CCY_ID = MVSDL.SETTLE_CCY_ID
AND SLDLGF.BUS_LINE_CD = MVSDL.BUS_LINE_CD
AND MVSDL.CALENDAR_DT = SLDCD.CALENDAR_DT
AND SLDER.CALENDAR_DT = MVSDL.CALENDAR_DT
AND SLDER.FROM_CCY_ID = MVSDL.COLL_CCY_ID
AND SLDER.TO_CCY_ID = mvscd.REP_CURR_CODE
GROUP BY
mvscd.client_name,
mvscd.client_cut_id,
mvscd.custom_group_id,
mvscd.custom_group_name,
MVSDL.PRODUCT_NM,
MVSDL.PRODUCT_KEY,
MVSDL.DOM_INTL_FLG,
SLDER.EXCHANGE_RT,
MVSDL.COLL_CCY_ID,
mvscd.REP_CURR_CODE,
mvscd.GMT_OFFSET_NUM,
mvscd.TIME_ZONE_DESC_TXT,
mvscd.REPORT_NAME,
SLDCD.SLPR_FORMATTED_DATE_TITLE,
mvscd.LOWEST_DRILL_DOWN_LEVEL
ORDER BY
mvscd.client_name,
mvscd.custom_group_id,
MVSDL.DOM_INTL_FLG,
MVSDL.PRODUCT_NM;

Explain Plan before and after index hint is the same:
Operation Object Name Rows Bytes Cost PStart PStop

SELECT STATEMENT Optimizer Mode=CHOOSE 42 1493
SORT GROUP BY 42 12 K 1493
NESTED LOOPS 42 12 K 1486
NESTED LOOPS 56 15 K 1480
NESTED LOOPS 321 65 K 4
NESTED LOOPS 6 1 K 2
TABLE ACCESS BY INDEX ROWID SLD.CALENDAR_DATE 1 37 1
INDEX UNIQUE SCAN SLD.XPKCALENDAR_DATE 1 1
TABLE ACCESS BY INDEX ROWID SLPR.MV_SLPR_CLIENT_DATA 6 858 2
AND-EQUAL
INDEX RANGE SCAN SLPR.MV_SLPR_CLIENT_DATA_NDX5 450 1
INDEX RANGE SCAN SLPR.MV_SLPR_CLIENT_DATA_NDX4 450 1
TABLE ACCESS BY INDEX ROWID SLRLINK.MV_SLD_LENDER_GROUP_FUND 52 1 K 1
INDEX RANGE SCAN SLRLINK.IDX_MV_LNDGRPFND_FUND_GROUP 52
TABLE ACCESS BY INDEX ROWID SLRLINK.MV_SLD_DAILY_LOANS 1 80 5
INDEX RANGE SCAN SLRLINK.IDX_MV_SLD_DAILY_LOANS 1
PARTITION RANGE SINGLE KEY KEY
TABLE ACCESS BY LOCAL INDEX ROWID SLD.EXCHANGE_RATE 1 21 1 KEY KEY
INDEX UNIQUE SCAN SLD.XPKEXCHANGE_RATE 1 KEY KEY

As you can see, the index is being used in both cases; the difference is that in the first case, without the index hint, the query hangs,
and in the second case, after the index hint is added, the same query performs much, much faster.
We have been seeing this behaviour consistently.

init.ora settings :
optimizer_index_caching = 90
optimizer_index_cost_adj = 10
optimizer_index_permutations = 100

We are seeing the same behaviour in the TKPROF output as well.
One curious thing we did notice is that when the index hint is not explicitly specified, all the blocks of MV_SLD_DAILY_LOANS (about 24,000) appear to be scanned, and this is where a lot of the time is consumed.
But when the index hint is used explicitly, the block scanning is pretty fast.

We also tested the same query on a non-RAC environment, and there the query comes back very fast even without the index hint.
We approached Oracle Support, but the level of response does not seem to be satisfactory.
Are we hitting a potential bug somewhere?
All the objects in the above query are analyzed every day.

Any pointers are as always highly appreciated.


thanks,
Vaidy

Tom Kyte
July 25, 2006 - 12:01 pm UTC

ignore explain plan, use v$sql_plan to see what is actually happening.

"bind variable peeking" is coming into play here - search for that term to read about it on this site.

explain plan "lies"

Bind variable peeking

vaidyanathan kasi, July 25, 2006 - 4:13 pm UTC

Hi Tom,
Thanks a lot... that was a solid pointer and I learned new things.
After going through the related articles on bind variable peeking, I think the way the query behaves is an effect of this.
But is it not a disadvantage rather than an advantage, at least in the context of my application?
From what I understood, it basically depends on the volume of the result set: if the result set is large for one set of bind variable values, then a plan with a FTS is almost guaranteed, and subsequent executions will use the same plan even when an index would be the right choice!
And vice versa: if the volume of data is small, then instead of a FTS for subsequent bind variable values, the optimizer would incorrectly reuse the existing plan that was created the first time with an index lookup!
Are there any way(s) to avoid this and make the optimizer decide the plan judiciously depending on the values of the bind variables supplied, rather than assuming by peeking?




Tom Kyte
July 25, 2006 - 4:24 pm UTC

well, you have gathered statistics that cause plan changes based on the inputs. One valid approach is: do not gather those statistics. if you are OLTP - having histograms (the likely culprit in your case) is probably not advised in general - you might have them here and there but not widespread.

so, are you gathering histograms and did you really "want to"
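
One quick way to check that (a sketch; the table names below are just the ones from the posted query, and you may need an owner filter): more than one bucket on a column means a histogram was collected on it:

select table_name, column_name, num_buckets
  from all_tab_columns
 where table_name in ( 'MV_SLD_DAILY_LOANS', 'MV_SLPR_CLIENT_DATA', 'MV_SLD_LENDER_GROUP_FUND' )
   and num_buckets > 1;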

Bind Variable peeking

vaidyanathan kasi, July 25, 2006 - 4:35 pm UTC

We don't gather histograms on columns;
they are at the default of 1 bucket.
Ours is not an OLTP application.
It is not 100% DSS either, but yes, I would lean towards a 70% DSS and 30% OLTP kind of mix.
We perform our daily data loads during the night (batch cycle) and use the results of the data loads to generate lots of daily reports (meaning huge queries with bind variables). It is during the report generation that the queries go crazy.
During the day we have very minimal DML on the application; I would say a max of 2-3%.

We tried creating histograms just for the indexed columns that are referenced by bind variables, with AUTO,
and even tried SKEWONLY; still bad results, and it threw off the plans and made the performance even worse.

Of course, the histograms were just experimental; we reverted back to the usual 1 bucket after noticing they were not having any positive effect.
Basically, all permutations and combinations have been tried.





Tom Kyte
July 25, 2006 - 4:50 pm UTC

show me the gather stats command you use on these tables and tell me the version.

you must have something collected that causes the optimizer to change its mind given different values.



Bind variable peeking

vaidyanathan kasi, July 25, 2006 - 8:53 pm UTC

Hi Tom,

The query that I posted has a combination of MVs and tables.
Here is the dbms_stats call for the MVs:
exec dbms_stats.gather_table_stats(<p_owner>,<p_name>,METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE 1',DEGREE => dbms_stats.default_degree,GRANULARITY => 'ALL',CASCADE => TRUE,NO_INVALIDATE => FALSE);

The MVs are designed to hold data for 1 business day (because that is what our daily reports represent: 1 day's worth of data; we created some MVs, based on functionality, to avoid more table joins).
So the MVs are analyzed using a compute, as you can notice from the call above.

For the rest of the tables used in the query joins, we let Oracle decide the estimate_percent (auto_sample_size), since some of the tables can be huge.
So, basically, the dbms_stats call for these tables is the same, except that it has estimate_percent => dbms_stats.auto_sample_size.





Tom Kyte
July 26, 2006 - 10:30 am UTC

tell you what - do the plan with "binds" and once again with literals. are they the same or different (big query, no need to post entire thing, be 'concise').

if they differ, just point out the big differences. Then look at the bits of the plan that differ, look at the referenced objects, and see whether the stats that would make the optimizer change its mind are there or not.



Bind Variable Peeking

vaidyanathan kasi, July 25, 2006 - 8:56 pm UTC

And I forgot to mention the version :
It is 9.2.0.6
It is a RAC environment with 2 nodes (2 instances)


why "for all INDEXED columns" ?

Alberto Dell'Era, July 26, 2006 - 11:14 am UTC

But method_opt=>'for all INDEXED columns size 1' prevents the collection of ALL column statistics for non-indexed columns, not only histograms (see below) - including the very important low_value, high_value and num_distinct, vital e.g. for estimating join cardinalities.

Maybe the low hanging fruit is 'for all columns size 1' ?

-----------------------

SQL> create table t as select 1 x, 1 y from dual connect by level <= 100000;

Table created.

SQL> create index t_idx on t(x);

Index created.

SQL> exec dbms_stats.gather_table_stats (user, 't', cascade=>true, method_opt=>'for all indexed columns size 1');

PL/SQL procedure successfully completed.

SQL> select column_name, low_value, high_value, num_distinct from user_tab_columns where table_name = 'T';

COLUMN_NAME                    LOW_VALUE  HIGH_VALUE NUM_DISTINCT
------------------------------ ---------- ---------- ------------
X                              C102       C102                  1
Y

SQL> set autotrace traceonly explain
SQL> select * from t where x = 1;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=100000 Bytes =600000)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=17 Card=100000 Bytes=600000)


SQL> select * from t where y = 1;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1000 Bytes=6000)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=17 Card=1000 Bytes=6000) 

Bind variable peeking

vaidyanathan kasi, July 28, 2006 - 3:28 pm UTC

Hi Tom...
Based on your suggestion, I tried to get the plan using pure literals and then using bind variables.
The plans are exactly the same except for the MV in question, MV_SLD_DAILY_LOANS, which does a full table scan when literals are used, while the plan shows the index being used when bind variables are in place!

This is the only difference as far as I can notice.
If you need the plan again, I can post it.

Also, I did a couple of other tests myself as well:

This problem of bind peeking, which seems to affect our application queries, is noticed to completely vanish once the hidden parameter is set:
_optim_peek_user_binds = FALSE (at session level)
From what I understood about this parameter:
1) it is hidden (not public)
2) it can be set at session level only
3) it removes the bind peeking ability (_optim_peek_user_binds=TRUE is the 9i default)

I tried the queries in a session with the hidden parameter set to FALSE over a couple of days for consistency and, voila, the queries are doing great, in fact at a much faster pace.
I did all my tests after flushing the cache, if any, so that the cached plans were removed.
This makes me think that bind variable peeking is sometimes bad, depending on the application.
The only disadvantage to this seems to be that the query gets hard parsed every time,
or maybe I am wrong.

I am very, very eager to know your opinion.
Please advise further as to what to look for.



Tom Kyte
July 28, 2006 - 8:52 pm UTC

talk to support about using the undocumented parameters.

Bind variable peeking

vaidyanathan kasi, July 28, 2006 - 3:32 pm UTC

Alberto,
I had the same idea that you suggested.
I have already tried the statistics gathering with
method_opt => 'for all columns size 1' as well; the queries are even worse!!
It was not working, hence I had to opt for indexed columns.


recursive function calls in query

vaidy, August 14, 2006 - 2:51 pm UTC

Hi Tom,

I am having trouble getting the query to perform faster.
Version : Oracle 9.2.0.6

SELECT
to_date('13-FEB-2006','dd-mon-yyyy') cal_dt,
la.Loan_Alloc_Sq loan_alloc_sq,
fn_get_risk_type_new(rsid.Sec_Subclass_Id, rsid.Trade_Ctry_Cd, rsid.sec_sq, rsid.Mat_Dt, to_date('13-FEB-2006','dd-mon-yyyy')) risk_ctgry_cd ,
rsid.Trade_Ctry_Cd risk_ctry_cd,
'S' status_cd
FROM loan_alloc la, daily_loan_alloc dla, Ref_Security_Inventory_Det rsid, Ctpy_Det cd
WHERE dla.Calendar_Dt = '13-FEB-2006'
AND la.Loan_Alloc_Sq = dla.Loan_Alloc_Sq
AND la.Sec_Sq = rsid.Sec_Sq
AND la.Broker_Id = cd.Ctpy_Id
AND cd.Ctpy_Type_Cd <> 'X'
AND ('13-FEB-2006' between cd.Valid_From_Dt and cd.Valid_To_Dt)
AND cd.Active_Flg = 'Y'
AND ('13-FEB-2006' between rsid.Valid_From_Dt and rsid.Valid_To_Dt)
AND rsid.Active_Flg = 'Y';

The query uses a user-defined function, fn_get_risk_type_new, which returns a varchar2.

Without the function call the query performs well, but when the function call is introduced, it starts degrading.
The query returns approx. 750,000 rows.

My understanding is that the function has to be executed for every row of the query, which makes it very slow.

The function basically does some complex calculations on the input data.
Any other, better approach would be highly welcome.

FYI,
a function-based index would not be possible, since the function takes a variable date as a parameter in addition to the column parameters.
I have tried using a pipelined function over the query, but it did not give any performance benefit:

slddev@newdev>var txt varchar2(1000)
slddev@newdev>exec :txt := 'SELECT ''13-FEB-2006'' calendar_dt, la.Loan_Alloc_Sq loan_alloc_sq,fn_get_risk_type(rsid.Sec_Subclass_Id, rsid.Trade_Ctry_Cd, rsid.sec_sq, rsid.Mat_Dt, ''13-FEB-2006'') risk_ctgry_cd,
rsid.Trade_Ctry_Cd risk_ctry_cd, ''S'' status_cd FROM loan_alloc la, daily_loan_alloc dla, Ref_Security_Inventory_Det rsid, Ctpy_Det cd WHERE dla.Calendar_Dt = ''13-FEB-2006'' AND
la.Loan_Alloc_Sq = dla.Loan_Alloc_Sq AND la.Sec_Sq = rsid.Sec_Sq AND la.Broker_Id = cd.Ctpy_Id AND cd.Ctpy_Type_Cd <> ''X'' AND (''13-FEB-2006'' between cd.Valid_From_Dt and cd.Valid_To_Dt)
AND cd.Active_Flg = ''Y'' AND (''13-FEB-2006'' between rsid.Valid_From_Dt and rsid.Valid_To_Dt) AND rsid.Active_Flg = ''Y''' ;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
slddev@newdev>select * from table(cast(pkg_test_risk.Lookups_Fn(:txt) as type_risk));

where pkg_test_risk.Lookups_Fn is a packaged pipelined function that takes a query string as input, returns an object type, and pipes it as output.

I am not able to think of any other way to make the performance better.

Vaidy

Tom Kyte
August 14, 2006 - 3:16 pm UTC

if query without function performs acceptably
and then we add function and it "degrades"

one would be tempted to NOT blame query but perhaps think "function, must be function"

Let us say the overhead of calling said function is "0.01 seconds" (say it takes 0.01 second to LEAVE sql and goto plsql, run your code - of unknown dimension at this time, we have no code to look at - and return to sql)


ops$tkyte%ORA9IR2> select 750000*0.01/60/60 from dual;

750000*0.01/60/60
-----------------
       2.08333333

2 hours.  If you do something that is "really super fast" but you do it a lot, it adds up!


Ok, 

fn_get_risk_type_new(rsid.Sec_Subclass_Id, rsid.Trade_Ctry_Cd, rsid.sec_sq, 
rsid.Mat_Dt, to_date('13-FEB-2006','dd-mon-yyyy'))

(thank you for using to_date with a format there - now please do that EVERYWHERE!!!!)

how many unique 

sec_subclass_id + trade_ctry_cd + sec_sq + mat_dt values do you pass to this?

(eg: if you were to take your query that returns 750,000 rows and 

select count(distinct sec_subclass_id ||','||trade_ctry_cd ||','|| sec_sq ||','|| mat_dt ) from (that_query)

what count do you think would be returned? 

function calls in query

vaidyanathan kasi, August 14, 2006 - 4:26 pm UTC

Couldn't agree more with your point.
It just adds up, query-time wise,
and I know that the query itself is not the culprit!

As for to_date, I know it is the right way. This was written by someone else who has already left the company, and it's like
I am trying to clean his dirty laundry! :)
But trust me, it's there everywhere.

The answer to your question is that the distinct count returns approx. 10% of the total rows.

Vaidy

Tom Kyte
August 14, 2006 - 4:40 pm UTC

maybe you can use scalar subquery caching to benefit (we can make 1/10th the calls if you want)

consider the following example and search for "scalar subquery caching" for more discussion (of course, IF the plsql function can just be removed altogether, as MANY CAN BE, that would be best by far)

ops$tkyte%ORA10GR2> create or replace function f( p_owner in varchar2, p_object_type in varchar2 ) return number
  2  is
  3  begin
  4          dbms_application_info.set_client_info(userenv('client_info')+1);
  5          return length(p_owner)+length(p_object_type);
  6  end;
  7  /

Function created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select count(distinct owner || '/' || object_type ) from t;

COUNT(DISTINCTOWNER||'/'||OBJECT_TYPE)
--------------------------------------
                                   216

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_application_info.set_client_info(0);

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> set autotrace traceonly
ops$tkyte%ORA10GR2> select t.*, f(owner,object_type) from t;

49998 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 49998 |  4540K|   160   (2)| 00:00:02
|   1 |  TABLE ACCESS FULL| T    | 49998 |  4540K|   160   (2)| 00:00:02
------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       3990  consistent gets
          0  physical reads
          0  redo size
    2546149  bytes sent via SQL*Net to client
      37048  bytes received via SQL*Net from client
       3335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      49998  rows processed

ops$tkyte%ORA10GR2> set autotrace off
ops$tkyte%ORA10GR2> select userenv('client_info') from dual;

USERENV('CLIENT_INFO')
----------------------------------------------------------------
49998

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_application_info.set_client_info(0);

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> set autotrace traceonly
ops$tkyte%ORA10GR2> select t.*, (select f(owner,object_type) from dual) from t;

49998 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1032660217

------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 49998 |  4540K|   160   (2)| 00:00:02
|   1 |  FAST DUAL        |      |     1 |       |     2   (0)| 00:00:01
|   2 |  TABLE ACCESS FULL| T    | 49998 |  4540K|   160   (2)| 00:00:02
------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       3990  consistent gets
          0  physical reads
          0  redo size
    2546165  bytes sent via SQL*Net to client
      37048  bytes received via SQL*Net from client
       3335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      49998  rows processed

ops$tkyte%ORA10GR2> set autotrace off
ops$tkyte%ORA10GR2> select userenv('client_info') from dual;

USERENV('CLIENT_INFO')
----------------------------------------------------------------
806

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_application_info.set_client_info(0);

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> set autotrace traceonly
ops$tkyte%ORA10GR2> select t.*, (select f(owner,object_type) from dual)
  2    from (select /*+ no_merge */ * from t order by owner, object_type) t;

49998 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 437998554

------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)
------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 49998 |  6249K|       |  1237   (1)
|   1 |  FAST DUAL          |      |     1 |       |       |     2   (0)
|   2 |  VIEW               |      | 49998 |  6249K|       |  1237   (1)
|   3 |   SORT ORDER BY     |      | 49998 |  4540K|    12M|  1237   (1)
|   4 |    TABLE ACCESS FULL| T    | 49998 |  4540K|       |   160   (2)
------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        693  consistent gets
          0  physical reads
          0  redo size
    2362980  bytes sent via SQL*Net to client
      37048  bytes received via SQL*Net from client
       3335  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      49998  rows processed

ops$tkyte%ORA10GR2> set autotrace off
ops$tkyte%ORA10GR2> select userenv('client_info') from dual;

USERENV('CLIENT_INFO')
----------------------------------------------------------------
216

 

function call in query

vaidyanathan kasi, August 15, 2006 - 3:30 pm UTC

Hi Tom,
I 100% agree with you that inlining the contents of the function in the SQL is the best way to go any day...
But, unfortunately, the calculations inside the function are very complex in nature and cannot be inlined
with the query...
So, for modularity's sake, we had to segregate the calculations into a separate function...
Your insights into the scalar subquery caching feature were just too good...
I experimented with my query based on your suggestions with scalar subquery caching, and it worked great...
There was one catch though...
The query that we are using is basically used to insert into a table:
insert into table
select <function_name()>, blah, blah...

It was interesting to note that while the query itself uses scalar subquery caching, combining it with an insert seems to negate it!
The function call is no longer utilising the scalar subquery cache feature...
Not sure if it is intentional that it does not work with insert!
So, I just used the bulk collect feature to insert rows from the cursor that contains the query...
Anyway, all's well that ends well, and thanks a lot for your suggestions...

Vaidy

Tom Kyte
August 15, 2006 - 3:50 pm UTC

are you sure about that???

ops$tkyte%ORA10GR2> create or replace function f( p_owner in varchar2, p_object_type in varchar2 ) return number
  2  is
  3  begin
  4          dbms_application_info.set_client_info(userenv('client_info')+1);
  5          return length(p_owner)+length(p_object_type);
  6  end;
  7  /

Function created.

ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> create table t2 as select t.*, 0 extra from t where 1=0;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select count(distinct owner || '/' || object_type ) from t;

COUNT(DISTINCTOWNER||'/'||OBJECT_TYPE)
--------------------------------------
                                   217

ops$tkyte%ORA10GR2> exec dbms_application_info.set_client_info(0);

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> insert into t2
  2  select t.*, f(owner,object_type) from t;

50001 rows created.

ops$tkyte%ORA10GR2> select userenv('client_info') from dual;

USERENV('CLIENT_INFO')
----------------------------------------------------------------
50001

ops$tkyte%ORA10GR2> exec dbms_application_info.set_client_info(0);

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> insert into t2
  2  select t.*, (select f(owner,object_type) from dual) from t;

50001 rows created.

ops$tkyte%ORA10GR2> select userenv('client_info') from dual;

USERENV('CLIENT_INFO')
----------------------------------------------------------------
807

ops$tkyte%ORA10GR2> exec dbms_application_info.set_client_info(0);

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> insert into t2
  2   select t.*, (select f(owner,object_type) from dual)
  3    from (select /*+ no_merge */ * from t order by owner, object_type) t;

50001 rows created.

ops$tkyte%ORA10GR2> select userenv('client_info') from dual;

USERENV('CLIENT_INFO')
----------------------------------------------------------------
217

 

Date conversion and NLS format

vaidyanathan kasi, September 01, 2006 - 10:15 am UTC

Hi Tom,

The situation is that we receive data in a third-party file, in which one field is a char(8) date in yyyymmdd format.
We process the file and, after data massaging, the char field is converted to a date field in our table.
But, after processing, we find that the data stored in the converted date field differs by a century!
For example: if we get 20501120 as the input char field, our intention is to store this data as-is as a date in a separate field, i.e. "11/20/2050".
But we notice that the date has become "11/20/1950"...
All we are doing is a to_date('20501120','yyyymmdd').
Is this due to the fact that our nls_date_format setting is dd-mon-rr both at the session and the database level?
The moment I set my NLS session as follows:
alter session set nls_date_format = 'DD-MON-YYYY';
and then run my process, I find that the date field has been transformed correctly as 11/20/2050.
But my understanding was that the "rr" format usually comes into the picture only when we deal with a 2-digit year to begin with;
here the input is a 4-digit year in the form of a string, 20501120, and I want to prevent Oracle from applying the century conversion.

NLS INSTANCE PARAMETERS :

PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_SORT
NLS_DATE_LANGUAGE
NLS_DATE_FORMAT
NLS_CURRENCY
NLS_NUMERIC_CHARACTERS
NLS_ISO_CURRENCY
NLS_CALENDAR
NLS_TIME_FORMAT
NLS_TIMESTAMP_FORMAT
NLS_TIME_TZ_FORMAT
NLS_TIMESTAMP_TZ_FORMAT
NLS_DUAL_CURRENCY
NLS_COMP
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE

NLS DATABASE PARAMETERS :

PARAMETER VALUE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET US7ASCII
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_RDBMS_VERSION 9.2.0.6.0


NLS SESSION PARAMETERS :

PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE

Pls advise.

Thanks,
Vaidy

Tom Kyte
September 01, 2006 - 10:39 am UTC

nope, you are doing something else besides to_date( str, 'yyyymmdd' )

of that I am 100% sure. You are likely doing an implicit conversion somewhere else along the line.

tell you what - reproduce this for us, write a snippet of your code that does only one thing - takes the string 20500101 and puts it into a date column - and have that date be 1950

Then, we can tell you what you did wrong :)

but odds are that when you do this exercise, you'll figure it out yourself (I usually do when I do it)

Date conversion and NLS

vaidyanathan kasi, September 01, 2006 - 11:25 am UTC

Hi Tom,
Code snippet as requested :
update stage_mch_invest
set calc_mat_dt = decode(mat_dt , '', null , ' ', null , '00000000', null , to_date(mat_dt,'yyyymmdd'));

Here , the mat_dt column comes in as a char field and we transform it and store it as a date field calc_mat_dt.


Trust me, this is all the code is doing as far as this field is concerned.

Input : mat_dt = '20501120'
Output :

MCH_SOURCE_MAT_DT : '20501120' (input)
CALCULATED_MAT_DT : derived calc_mat_dt field which is a date

e459393@uat>select smi.MAT_DT MCH_source_mat_dt,smi.CALC_MAT_DT calculated_mat_dt from security_inve
st si,stage_mch_invest smi
2 where smi.SSB_TRD_ID = '05GBER00470'
3 and si.SEC_INVEST_ID = smi.SSB_TRD_ID
4 /

MCH_SOUR CALCULATE
-------- ---------
20501120 20-NOV-50

Elapsed: 00:00:00.00

If you look at the above , I cannot find out if the 50 in the output refers to 1950 or 2050 !
So, i added another filter to know the actual stored value in the table.

e459393@uat>ed
Wrote file afiedt.buf

1 select smi.MAT_DT MCH_source_mat_dt,smi.CALC_MAT_DT calculated_mat_dt from security_invest si,s
2 where smi.SSB_TRD_ID = '05GBER00470'
3 and si.SEC_INVEST_ID = smi.SSB_TRD_ID
4* and smi.CALC_MAT_DT = to_date('11/20/2050','mm/dd/yyyy')
e459393@uat>/

no rows selected

Elapsed: 00:00:00.00

As you can notice, the added filter of 11/20/2050 did not yield any row .

e459393@uat>ed
Wrote file afiedt.buf

1 select smi.MAT_DT MCH_source_mat_dt,smi.CALC_MAT_DT calculated_mat_dt from security_invest si,s
2 where smi.SSB_TRD_ID = '05GBER00470'
3 and si.SEC_INVEST_ID = smi.SSB_TRD_ID
4* and smi.CALC_MAT_DT = to_date('11/20/1950','mm/dd/yyyy')
e459393@uat>/

MCH_SOUR CALCULATE
-------- ---------
20501120 20-NOV-50

Where the filter is 11/20/1950 , the query returns result .
Which makes me believe that the result stored is actually 1950 and not 2050.
The above is based on the existing default nls_date_format of dd-mon-rr everywhere.

But, if I alter the session to : dd-mon-yyyy format , here is the result :
e459393@uat>alter session set nls_date_format='DD-MON-YYYY';

Session altered.

Elapsed: 00:00:00.00

Now, I run the process again (which includes the update) and then when I query this time :

e459393@uat>select smi.MAT_DT MCH_source_mat_dt,smi.CALC_MAT_DT calculated_mat_dt from security_inve
st si,stage_mch_invest smi
2 where smi.SSB_TRD_ID = '05GBER00470'
3 and si.SEC_INVEST_ID = smi.SSB_TRD_ID;

MCH_SOUR CALCULATED_
-------- -----------
20501120 20-NOV-2050

Elapsed: 00:00:00.00

As you can see above , the o/p now included 2050.

e459393@uat>ed
Wrote file afiedt.buf

1 select smi.MAT_DT MCH_source_mat_dt,smi.CALC_MAT_DT calculated_mat_dt from security_invest si,s
2 where smi.SSB_TRD_ID = '05GBER00470'
3 and si.SEC_INVEST_ID = smi.SSB_TRD_ID
4* and smi.CALC_MAT_DT = to_date('11/20/2050','mm/dd/yyyy')
e459393@uat>/

MCH_SOUR CALCULATED_
-------- -----------
20501120 20-NOV-2050

Elapsed: 00:00:00.00
e459393@uat>ed
Wrote file afiedt.buf

1 select smi.MAT_DT MCH_source_mat_dt,smi.CALC_MAT_DT calculated_mat_dt from security_invest si,s
2 where smi.SSB_TRD_ID = '05GBER00470'
3 and si.SEC_INVEST_ID = smi.SSB_TRD_ID
4* and smi.CALC_MAT_DT = to_date('11/20/1950','mm/dd/yyyy')
e459393@uat>/

no rows selected

Elapsed: 00:00:00.00

This time , the result that is stored in the table is not 1950 , but 2050.

Hope this test case is ok...

Pls advise.
Thanks,
Vaidy

Tom Kyte
September 01, 2006 - 11:35 am UTC


it is exactly as I thought.

you have implicit conversions:

decode(mat_dt , '', null , '        ', null , '00000000', null , to_date(mat_dt,'yyyymmdd'))


that decode function returns A STRING (the first return value sets the return datatype for decode)


so your update is just really:

update stage_mch_invest
set calc_mat_dt = <b>to_date( </b> decode(mat_dt , '', null , '        ', null , '00000000', null , <b>to_char( to_date(mat_dt,'yyyymmdd') )</b>) );



ops$tkyte%ORA10GR2> create table t ( msg varchar2(20), mat_dt date );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable x varchar2(20)
ops$tkyte%ORA10GR2> exec :x := '20500101'

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t values ( 'Original',
  2  decode(:x, '', null ,
  3             '        ', null ,
  4                     '00000000', null ,
  5                     to_date(:x,'yyyymmdd')
  6  ) );

1 row created.

ops$tkyte%ORA10GR2> insert into t values ( 'Fixed',
  2  decode(:x, '', TO_DATE(null) ,
  3             '        ', TO_DATE(null) ,
  4                     '00000000', TO_DATE(null) ,
  5                     to_date(:x,'yyyymmdd')
  6  ) );

1 row created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select msg, to_char(mat_dt,'dd-mon-yyyy') from t;

MSG                  TO_CHAR(MAT
-------------------- -----------
Original             01-jan-1950
Fixed                01-jan-2050



<b>BEWARE OF IMPLICIT CONVERSIONS</b>
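As an illustrative sketch (not part of the original answer), the round trip the DECODE forces can be spelled out: the DATE from to_date is implicitly to_char'ed and then to_date'ed again using the session NLS_DATE_FORMAT, and with DD-MON-RR the century is lost on the way back in.

-- a minimal sketch of the implicit round trip, assuming the poster's
-- DD-MON-RR session setting and a 2006 "current" year
select to_char(
         to_date( to_char( to_date('20501120','yyyymmdd'),  -- the intended 2050 date
                           'DD-MON-RR' ),                   -- becomes the string '20-NOV-50'
                  'DD-MON-RR' ),                            -- RR rule maps year 50 back to 1950
         'dd-mon-yyyy' ) as round_trip
  from dual;

-- expected: 20-nov-1950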

 

date conversions and NLS

vaidyanathan kasi, September 01, 2006 - 11:52 am UTC

Wow...!!!
I am baffled ...
I did not know decode had this hidden quality because, technically, I thought of it as a sophisticated if-else construct.
So, I assumed the resolution would be based straight away on the to_date function, since that is the branch that satisfies the criteria.
I did not know that, no matter which branch is satisfied, decode would take the datatype of the first return value even if that branch's condition is not
satisfied!
Thanks a ton for pointing it out.
Normally I am aware of implicit conversions and I take care of them, but this one did beat me...

thanks,
Vaidy

Tom Kyte
September 01, 2006 - 12:09 pm UTC

but decode is a function
and an overloaded one

and a function returns precisely..... ONE TYPE


it is not technically an "if-else"; it is a function, purely and plainly.




dbms_xplan not working

vaidy, October 02, 2006 - 1:38 pm UTC

Hi Tom,
I am getting the following error while trying to get the actual plan from dbms_xplan :

vk@newpp>SELECT * FROM table(DBMS_XPLAN.DISPLAY('plan_table', 'xx','ALL'));
SELECT * FROM table(DBMS_XPLAN.DISPLAY('plan_table', 'xx','ALL'))
*
ERROR at line 1:
ORA-22905: cannot access rows from a non-nested table item


CLIENT VERSION :
SQL*Plus: Release 9.2.0.6.0 - Production on Mon Oct 2 13:26:07 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

Tom Kyte
October 02, 2006 - 4:41 pm UTC

and what be your cursor_sharing set to?

dbms_xplan

vaidy, October 02, 2006 - 4:50 pm UTC

Hi Tom,
I kind of guessed that you might ask this question!
At the instance level, cursor_sharing is set to SIMILAR.
And I had learnt from your earlier posts that this was probably the culprit, so I changed the value to EXACT at the session level and queried again.
Still the same.
Sometimes, I get the following error as well:
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Error: cannot fetch plan for statement_id 'xx'

Though, when I query it like a normal table from plan_table, I don't see any problem: select * from plan_table where statement_id='xx';

Tom Kyte
October 02, 2006 - 4:57 pm UTC

that is it, the dreaded cursor sharing

alter session set cursor_sharing=exact;

and try again. known problem.

you really ought to address the ROOT CAUSE however, having cursor_sharing set to similar or force is cause for immediate concern (you probably not only have severe performance issues but are also full of sql injection bugs)

dbms_xplan

vaidy, October 02, 2006 - 5:10 pm UTC

You nailed it!
We are currently having some severe performance problems.
But that aside, as I had mentioned , i did set the session cursor_sharing = exact :
SQL> alter session set cursor_sharing=exact;

Session altered.

SQL> SELECT plan_table_output FROM table(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'xx','ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Error: cannot fetch plan for statement_id 'xx'

Pls advise as to what could be the reason!

 

Tom Kyte
October 02, 2006 - 5:11 pm UTC

maybe statement id 'xx' does not exist???

what is in your plan table?

dbms_xplan

Vaidy, October 02, 2006 - 5:19 pm UTC

It does exist because :
when I query it like :
select * from plan_table where statement_id='xx';
I get the results!

SQL> desc plan_table
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 STATEMENT_ID                                       VARCHAR2(30)
 TIMESTAMP                                          DATE
 REMARKS                                            VARCHAR2(80)
 OPERATION                                          VARCHAR2(30)
 OPTIONS                                            VARCHAR2(255)
 OBJECT_NODE                                        VARCHAR2(128)
 OBJECT_OWNER                                       VARCHAR2(30)
 OBJECT_NAME                                        VARCHAR2(30)
 OBJECT_INSTANCE                                    NUMBER(38)
 OBJECT_TYPE                                        VARCHAR2(30)
 OPTIMIZER                                          VARCHAR2(255)
 SEARCH_COLUMNS                                     NUMBER
 ID                                                 NUMBER(38)
 PARENT_ID                                          NUMBER(38)
 POSITION                                           NUMBER(38)
 COST                                               NUMBER(38)
 CARDINALITY                                        NUMBER(38)
 BYTES                                              NUMBER(38)
 OTHER_TAG                                          VARCHAR2(255)
 PARTITION_START                                    VARCHAR2(255)
 PARTITION_STOP                                     VARCHAR2(255)
 PARTITION_ID                                       NUMBER(38)
 OTHER                                              LONG
 DISTRIBUTION                                       VARCHAR2(30)
 CPU_COST                                           NUMBER(38)
 IO_COST                                            NUMBER(38)
 TEMP_SPACE                                         NUMBER(38)

 

Tom Kyte
October 02, 2006 - 9:14 pm UTC

do something like this and cut and paste the results for us please:

ops$tkyte%ORA9IR2> delete from plan_table;

2 rows deleted.

ops$tkyte%ORA9IR2> explain plan set statement_id = 'xx' for
  2  select * from dual;

Explained.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> SELECT plan_table_output FROM table(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'xx','ALL'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |       |       |       |
|   1 |  TABLE ACCESS FULL   | DUAL        |       |       |       |
--------------------------------------------------------------------

Note: rule based optimization

9 rows selected.
 

dbms_xplan

vaidy, October 02, 2006 - 11:33 pm UTC

Here is the detailed output :

stars2.sql below is just an anonymous PL/SQL block that contains the query, with PL/SQL variables to get the bind variable effect.


e459393@newpp>alter session set cursor_sharing=exact;

Session altered.

Elapsed: 00:00:00.47
e459393@newpp>@c:\vaidy\procs\mon_perf\stars2.sql

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.78

e459393@newpp> insert into plan_table
2 (
3 STATEMENT_ID,REMARKS,OPERATION,OPTIONS,OBJECT_NODE,OBJECT_OWNER,OBJECT_NAME,OBJECT_INSTANCE,OPTIMIZER,SEARCH_COLUMNS,ID,
4 PARENT_ID,POSITION,COST,CARDINALITY,BYTES,OTHER_TAG,PARTITION_START,PARTITION_STOP,PARTITION_ID,OTHER,DISTRIBUTION,
5 CPU_COST,IO_COST,TEMP_SPACE,ACCESS_PREDICATES,FILTER_PREDICATES
)
6 7 select
8 'xx3' statement_id,
9 NULL remarks,
10 a.operation,
11 a.options,
12 a.object_node,
13 a.object_owner,
14 a.object_name,
15 null object_instance,
16 a.optimizer,
17 a.search_columns,
18 a.id,
19 a.parent_id,
20 a.position,
21 a.cost,
22 a.cardinality,
23 a.bytes,
24 a.other_tag,
25 a.partition_start,
26 a.partition_stop,
27 a.partition_id,
28 a.other,
29 a.distribution,
30 a.cpu_cost,
31 a.io_cost,
32 a.temp_space,
33 a.access_predicates,
34 a.filter_predicates
35 from v$sql_plan a
36 where
37 (a.hash_value,a.address) in
38 (select HASH_VALUE,ADDRESS from v$sqltext
39 where sql_text like lower('%NEW\_NEW\_NEW\_UNIQUE\_STRING%') escape '\');

19 rows created.

e459393@newpp> commit;

Commit complete.

e459393@newpp> SELECT plan_table_output FROM table(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'xx3','ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Error: cannot fetch plan for statement_id 'xx3'


I have no clue as to what the reason could be!

Pls advise !



Tom Kyte
October 03, 2006 - 6:16 am UTC

umm, please use explain plan to populate that table, do not use an insert.

dbms_xplan

vaidy, October 03, 2006 - 1:07 pm UTC

The reason I want to get the SQL plan (the actual plan, as opposed to the explain plan) is that I am using bind variables in my query and, curiously enough,
the explain plan differs from what is actually getting executed. (I think there is some anomaly behind incorrect explain plans when using bind variables.)
The query is a 6-table join with a mix of partitioned (monthly partition by date) and non-partitioned tables.
We have a scenario where the query performs poorly (given the volume of data, it's a big deal if it takes more than 2 minutes!).
The biggest table in the join has 7 million records (partitioned monthly by date), of which, because of partition elimination (where calendar_dt = :calendar_dt resolves to KEY KEY in the plan), the records accessed are in the range of 1.5 million!

Here is the query :

SELECT /*+ new_new_unique_string */ rca.Calendar_Dt, rca.ctpy_entity_id, sdci.client_entity_id, gc.risk_ctry_cd, gc.risk_ctgry_cd,
gr.Invest_Vehicle_Id, 'RREP', gc.Sec_Sub_Cust_Id, gc.Sec_Subclass_Id, 'L', '6',
ced.Reinvest_Coll_Indem_Flg as Indem_Flg,
'0', NULL, gc.Sec_Ccy_Id, gc.Sec_Issue_Ctry_Cd, 'Y',
sum(rca.mkt_val_amt_pct*sdci.Contract_Amt_Ratio*gc.sec_usd_mkt_val_amt) as usdmktval,
sum(rca.mkt_val_amt_pct*sdci.Contract_Amt_Ratio*gc.sec_mkt_val_amt) as mktval,
sum(rca.cont_amt_pct*sdci.Contract_Amt_Ratio*gr.Deal_USD_Contract_Amt) as usdcontamt,
sum(rca.cont_amt_pct*sdci.Contract_Amt_Ratio*gr.Deal_Contract_Amt) as contamt
FROM Generic_Collateral_p gc, Generic_Reinvestment_p gr, Repo_Coll_Alloc_p rca,
Client_Entity_Det ced,
Stars_Daily_Client_Invest sdci,
Ref_Invest_Vehicle_Det rivd
WHERE rca.Calendar_Dt = :calendar_dt
AND gc.Calendar_Dt = :calendar_dt
AND gr.Calendar_Dt = :calendar_dt
AND sdci.Calendar_Dt = :calendar_dt
AND gc.Coll_Sq = rca.Coll_Sq
AND gr.Reinvest_Sq = rca.Reinvest_Sq
AND gr.Invest_Vehicle_Id = sdci.Invest_Vehicle_Id
AND sdci.Client_Entity_Id = ced.Client_Entity_Id
AND gr.Invest_Vehicle_Id = rivd.Invest_Vehicle_Id
AND rivd.Commingled_Flg = 'N'
AND (:calendar_dt between rivd.Valid_From_Dt and rivd.Valid_To_Dt)
AND rivd.Active_Flg = 'Y'
AND (:calendar_dt between ced.Valid_From_Dt and ced.Valid_To_Dt)
AND ced.Active_Flg = 'Y'
GROUP BY rca.Calendar_Dt, rca.ctpy_entity_id, sdci.client_entity_id, gc.risk_ctry_cd, gc.risk_ctgry_cd,
gr.Invest_Vehicle_Id, gc.Sec_Sub_Cust_Id, gc.Sec_Subclass_Id,
ced.Reinvest_Coll_Indem_Flg,
gc.Sec_Ccy_Id, gc.Sec_Issue_Ctry_Cd;

Generic_Collateral_p, Generic_Reinvestment_p , Repo_Coll_Alloc_p are partitioned by date.
Client_Entity_Det and Ref_Invest_Vehicle_Det are predominantly static.

The tables are all correctly indexed (to my knowledge) and they are analyzed.

As mentioned, cursor_sharing=similar at the instance level.
This had an interesting implication, in the sense that the optimiser bound even literals in the query like 'Y', 'N' etc.!
I read and understood the implications of this from your earlier posts.

Does the cursor_sharing parameter bring a query to its knees this much? And will setting cursor_sharing=exact at the session level (not sure if the DBAs would allow it to be changed at the instance level)
be a permanent cure for the problem?

One more interesting thing here is that the moment we stop using partitioned tables (recreate the existing partitioned tables as normal tables),
the overall query goes great guns (even with cursor_sharing=similar)!!!
Can you please explain what could be happening here?

But my approach would be to always use the partitioned tables, as I believe that is the correct approach, since the tables are going to grow every day and month!

The problem is in getting a formatted result of the data dumped into plan_table!

Hence, I am unable to give you the formatted sql_plan details here.
But the access plan, as I saw it in v$sql_plan without formatting, revealed the following details:

There are 3 full table scans (2 on the static tables that I mentioned, and the other on the largest partitioned table, a full table scan after partition elimination).
I know that full table scans are not evil. In fact, it is better for the 2 static tables since, even with the filters, the number of intermediate rows returned is more or less the same as the
total number of records.
But I am not sure about the full table scan on the partitioned table, though it does perform partition elimination.

STATEMENT_ID OPERATION OPTIONS OBJECT_NAME OPTIMIZER ID PARENT_ID POSITION COST CARDINALITY BYTES PARTITION_START PARTITION_STOP PARTITION_ID IO_COST TEMP_SPACE ACCESS_PREDICATES FILTER_PREDICATES
xx3 SELECT STATEMENT CHOOSE 0 0 4547
xx3 HASH JOIN 2 1 1 1432 110753 21043070 1432 CLIENT_ENTITY_ID="CLIENT_ENTITY_ID"
xx3 HASH JOIN 4 2 2 1413 110817 18284805 1413 INVEST_VEHICLE_ID="INVEST_VEHICLE_ID"
xx3 INDEX RANGE SCAN XPKSTARS_DAILY_CLIENT_INVEST ANALYZED 6 5 1 10 12 10 CALENDAR_DT=:B1
xx3 PARTITION RANGE SINGLE 8 7 1 KEY KEY 8
xx3 INDEX RANGE SCAN XIE1GENERIC_COLLATERAL_P ANALYZED 10 9 1 114 1 KEY KEY 8 114 CALENDAR_DT=:B1
xx3 TABLE ACCESS FULL REPO_COLL_ALLOC_P ANALYZED 18 17 1 1169 128408 5007912 KEY KEY 17 1169 CALENDAR_DT=:B1
xx3 PARTITION RANGE SINGLE 17 11 2 KEY KEY 17
xx3 INDEX RANGE SCAN XIE1GENERIC_REINVESTMENT_P ANALYZED 16 15 1 3 595 KEY KEY 14 3 CALENDAR_DT=:B1
xx3 TABLE ACCESS BY LOCAL INDEX ROWID GENERIC_REINVESTMENT_P ANALYZED 15 14 1 4 595 17255 KEY KEY 14 4
xx3 PARTITION RANGE SINGLE 14 12 2 KEY KEY 14 COMMINGLED_FLG='N' AND "VALID_FROM_DT"<=:B1 AND "VALID_TO_DT">=:B1 AND "ACTIVE_FLG"='Y'
xx3 HASH JOIN 12 11 1 19 593 32022 19 INVEST_VEHICLE_ID="INVEST_VEHICLE_ID"
xx3 HASH JOIN 11 7 2 1191 20886 1942398 1191 REINVEST_SQ="REINVEST_SQ"
xx3 TABLE ACCESS BY LOCAL INDEX ROWID GENERIC_COLLATERAL_P ANALYZED 9 8 1 118 34412 1651776 KEY KEY 8 118
xx3 HASH JOIN 7 4 2 1358 9035 1273935 1358 COLL_SQ="COLL_SQ"
xx3 TABLE ACCESS BY INDEX ROWID STARS_DAILY_CLIENT_INVEST ANALYZED 5 4 1 55 1337 32088 55
xx3 TABLE ACCESS FULL CLIENT_ENTITY_DET ANALYZED 3 2 1 14 862 21550 14 VALID_FROM_DT<=:B1 AND "VALID_TO_DT">=:B1 AND "ACTIVE_FLG"='Y'
xx3 SORT GROUP BY 1 0 1 4547 110753 21043070 4547 45376000

Could you pls advise as to any scope for tuning based on the first glance at the plan here ?

I would be glad to provide you with more information should you need.

Vaidy

Tom Kyte
October 03, 2006 - 4:48 pm UTC

...
(I think there 
is some anamoly behind incorrect explain plans when using bind variables)
...

no anomaly - explain plan:

a) does NOT bind peek, ever
b) assumes ALL binds are character strings, regardless of what the application actually does (binds numbers and/or dates etc)
c) always hard parses right there and then - using YOUR current environment, which might be radically different from the "other environment" that actually parsed the query.

Maybe this is what you are looking for:

<quote src=Effective Oracle by Design>
Use DBMS_XPLAN and V$SQL_PLAN

If you edit the script utlxpls.sql in Oracle9i Release 2, you'll discover it is effectively one line long:

select plan_table_output
from table( dbms_xplan.display( 'plan_table',null,'serial'))

If you edit that same script in Oracle9i Release 1 or before, you'll find a huge query. DBMS_XPLAN.DISPLAY is a better method for querying and displaying the plan output. It is a function that simply returns a collection, which is a procedurally formatted EXPLAIN PLAN output, including the supplemental information at the bottom of the report (new in Oracle9i Release 2). This is a side effect of using the new DBMS_XPLAN package.

So, if you do not have access to the utlxpls.sql script, the simple query shown here will perform the same function. In fact, the DBMS_XPLAN package is so good at adjusting its output based on the inputs that you do not even need to supply the inputs as utlxpls.sql does. This simple line suffices:
select * from table(dbms_xplan.display)

Using this feature coupled with the V$SQL_PLAN dynamic performance view, you can easily dump the query plans for already executed statements, directly from the database. 

In the previous section, I demonstrated how you can use an INSERT into the PLAN_TABLE and then run utlxpls or utlxplp to see the plan. In Oracle9i Release 2, using DBMS_XPLAN and a view you can create, it becomes even easier. If you use a schema that has been granted SELECT privileges on SYS.V_$SQL_PLAN directly, youÂ’ll be able to create this view:

ops$tkyte@ORA920> create or replace view dynamic_plan_table
  2  as
  3  select
  4   rawtohex(address) || '_' || child_number statement_id,
  5   sysdate timestamp, operation, options, object_node,
  6   object_owner, object_name, 0 object_instance,
  7   optimizer,  search_columns, id, parent_id, position,
  8   cost, cardinality, bytes, other_tag, partition_start,
  9   partition_stop, partition_id, other, distribution,
 10   cpu_cost, io_cost, temp_space, access_predicates,
 11   filter_predicates
 12   from v$sql_plan;

View created.

Now, you can query any plan from the database with a single query:

ops$tkyte@ORA920> select plan_table_output
  2    from TABLE( dbms_xplan.display
  3                ( 'dynamic_plan_table',
  4                  (select rawtohex(address)||'_'||child_number x
  5                     from v$sql
  6  where sql_text='select * from t t1 where object_id > 32000' ),
  7                  'serial' ) )
  8  /

PLAN_TABLE_OUTPUT
------------------------------------------------------------------
| Id  | Operation                   | Name|Rows| Bytes |Cst(%CPU)|
------------------------------------------------------------------
|   0 | SELECT STATEMENT            |     |    |       |         |
|   1 |  TABLE ACCESS BY INDEX ROWID| T   |291 | 27936 | 25   (0)|
|*  2 |   INDEX RANGE SCAN          | T_PK|291 |       |  2   (0)|
------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID">32000)

13 rows selected.

The emphasized text in the code is a query that gets the STATEMENT_ID. In this query, you can use whatever values you want to identify the exact query plan you wanted to review.  The use of this technique, querying the V$ table rather than inserting the contents of V$SQL_PLAN into a “real table”, is appropriate if you will be generating the explain plan for this query once.  Access to V$ tables can be quite expensive latch-wise on a busy system.  So, if you plan on running the explain plan for a given statement many times over – copying the information to a temporary working table would be preferred.
</quote>
 

dbms_xplan

vaidy, October 03, 2006 - 1:22 pm UTC

Hi Tom,
Also, I wanted to know if there is any correlation between partitioning and cursor_sharing!
The moment I set cursor_sharing=exact at the session level and run the query with partitioned tables, the query seems to perform well...
Right now, we have cursor_sharing=similar at the instance level.
I do know the differences and implications between force, exact and similar, and I know that force is (mostly) evil.

Your insight would be very very helpful.

Tom Kyte
October 03, 2006 - 4:51 pm UTC

force and similar are both evil - equally so but for slightly different reasons.

why do you have similar set? is it set at the system level (init.ora)??

dbms_xplan

vaidy, October 03, 2006 - 4:57 pm UTC

yes..
It is set at init.ora level.

Your reply was unfortunately too short for me to get any insight from !!

Tom Kyte
October 03, 2006 - 4:59 pm UTC

i'll give you insight when you give me insight.

why is it set, what prompted that.

dbms_xplan

vaidy, October 03, 2006 - 5:19 pm UTC

That's a million dollar question!

I would have to check with the DBAs for the reason, but I'm pretty sure I am not going to get one!

I am 100% sure that I am not going to get any logical answer to this even if I ask them!
Nobody questioned them, so nobody cares!
So, here goes what I think!
My theoretical reasoning would be as follows:
Ours is a hybrid application leaning more towards batch processing and reporting (DSS), with moderate to minor online
transactions (OLTP).
Percentage-wise, DSS would be 80%, OLTP 20%.

Use of cursor_sharing=similar would help us to reduce (might help I think) hard parsing queries more often than when exact is set.
My rationale is that even if it is set to similar,because of the behaviour of "similar" to auto-bind constants(which is evil by the way according to me), it should
still be safe compared to force since it peeks at the literals before binding and only binds those literals that it deems as unsafe (highly skewed values that might make the plan
go whacky)...Pls correct me if I am wrong!
So, why should it be a big deal ?

I do know that "exact" does not auto-bind and is the safest in that respect.
I also disagree with some folks here that exact always hard parses and wastes time!!
I think that if there are bind variables (as in stored procedure variables used to feed a query column value), we should still be good to go and not worry about hard parsing, because I think it won't hard parse; it will be a soft parse...
Am I way off? Or am I at least in the ballpark of being right?

But I do notice a big deal and a big improvement when I set "exact" and run the query!
It flies!!

But why there is a correlation between the setting and partitioning is what I can't figure out!!
Because, when we reverted (for testing) to non-partitioned tables, the query did not mind the setting at that time (it was still set to similar)!!

Now, waiting impatiently for your insights, and I am sure I want to know more!

Tom Kyte
October 03, 2006 - 7:19 pm UTC

...
Use of cursor_sharing=similar would help us to reduce (might help I think) hard
parsing queries more often than when exact is set.
...

if and only if:

a) oltp did not use binds
b) meaning oltp has BUGS, BUGS BUGS, big BUGS, glaring BUGS, stupid BUGS

which would mean

oltp cannot perform
oltp rips shared pool to shreds
oltp cannot scale
oltp has SECURITY HOLES IN THE FORM OF SQL INJECTION BUGS THAT should scare any
reasonable manager into FIXING THE THING

meaning, oltp can issue an alter session but your reporting/dss should not be negatively IMPACTED by their mistake.

never set cursor_sharing at the instance level, worst thing you could ever ever ever DO.....

that is why.



dbms_xplan

vaidy, October 03, 2006 - 5:30 pm UTC

Hi Tom,

The below paragraph was written by me in earlier post.

"Use of cursor_sharing=similar would help us to reduce (might help I think) hard parsing queries more often than when exact is set.
My rationale is that even if it is set to similar,because of the behaviour of "similar" to auto-bind constants(which is evil by the way according to me), it should
still be safe compared to force since it peeks at the literals before binding and only binds those literals that it deems as unsafe (highly skewed values that might make the plan
go whacky)..."

There is a typo there ...
I think it should be :
"Similar" peeks at the literals before binding and only binds those literals that it deems as safe (unsafe such as highly skewed values that might make the plan
go whacky are not auto-bound and are placed along with the stored plan the next time)..."


sorry for the typo...

dbms_xplan

vaidy, October 03, 2006 - 8:24 pm UTC

From your reply, I infer that:

1) An instance-level cursor_sharing setting is dangerous.
2) Use bind variables (I can assure you we are doing this adequately, since PL/SQL automatically takes care of it in the form of variables passed to the query's where clause).

But I still have not got your input on what cursor_sharing has to do with partitioning - or is that some kind of myth I have built up?

I also want to know if removing the "cursor_sharing" parameter from the init.ora would be safe.
Also, instead of the "alter session set..." command, I could also hint the query to use cursor_sharing_exact... correct?

Pls advise.


Tom Kyte
October 04, 2006 - 6:58 am UTC

in data warehousing, you often do not want to use binds on the actual "warehouse" query (most queries - yes; the big warehouse query - no: you want the optimizer to spend a second getting the best possible plan for a query that will take minutes or hours to execute)

the alter session or cursor_sharing_exact hint - yes.
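For reference, a small illustrative sketch of the two options just mentioned; the syntax is standard, but the table and column names (big_report_table, status_flg) are made up for illustration:

-- 1) scope the change to the session
alter session set cursor_sharing = exact;

-- 2) or pin it to a single statement with the CURSOR_SHARING_EXACT hint
--    (big_report_table / status_flg are placeholder names)
select /*+ cursor_sharing_exact */ count(*)
  from big_report_table
 where status_flg = 'Y';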

dbms_xplan

A reader, November 07, 2006 - 6:49 pm UTC

Hi Tom,
I have a question relating to the use of analytic functions.

Table A with a pre-computed ratio(I derived this ratio using ratio_to_report):

create table A (col_a1 char(4),col_a2 char(1),col_a3 char(3),col_a4 char(3),col_a5 number);

INSERT INTO A ( COL_A1, COL_A2, COL_A3, COL_A4, COL_A5 ) VALUES (
'ABCD', 'C', 'USD', 'USD', 0.2);
INSERT INTO A ( COL_A1, COL_A2, COL_A3, COL_A4, COL_A5 ) VALUES (
'ABCD', 'C', 'USD', 'EUR', 0.8);
INSERT INTO A ( COL_A1, COL_A2, COL_A3, COL_A4, COL_A5 ) VALUES (
'PQRS', 'C', 'GBP', 'USD', 0.6);
INSERT INTO A ( COL_A1, COL_A2, COL_A3, COL_A4, COL_A5 ) VALUES (
'PQRS', 'C', 'GBP', 'JPY', 0.2);
INSERT INTO A ( COL_A1, COL_A2, COL_A3, COL_A4, COL_A5 ) VALUES (
'PQRS', 'C', 'GBP', 'EUR', 0.2);
COMMIT;

Col_A1 Col_A2 Col_A3 Col_A4 Col_A5

ABCD C USD USD 0.2
ABCD C USD EUR 0.8
PQRS C GBP USD 0.6
PQRS C GBP JPY 0.2
PQRS C GBP EUR 0.2

Table B has only the first three columns' values, along with an amount field that is aggregated at the level of those first 3 columns:

create table B (col_b1 char(4),col_b2 char(1),col_b3 char(3),col_b4 number);

INSERT INTO B ( COL_B1, COL_B2, COL_B3, COL_B4 ) VALUES (
'ABCD', 'C', 'USD', 100);
INSERT INTO B ( COL_B1, COL_B2, COL_B3, COL_B4 ) VALUES (
'PQRS', 'C', 'GBP', 1000);
COMMIT;

Col_B1 Col_B2 Col_B3 Col_B4

ABCD C USD 100
PQRS C GBP 1000

My output should go to a third table C that is structurally similar to table A without ratio, but with an amount column.

create table C (col_c1 char(4),col_c2 char(1),col_c3 char(3),col_c4 char(3), col_c5 number);

INSERT INTO C ( COL_C1, COL_C2, COL_C3, COL_C4 ) VALUES (
'ABCD', 'C', 'USD', 'USD');
INSERT INTO C ( COL_C1, COL_C2, COL_C3, COL_C4 ) VALUES (
'ABCD', 'C', 'USD', 'EUR');
INSERT INTO C ( COL_C1, COL_C2, COL_C3, COL_C4 ) VALUES (
'PQRS', 'C', 'GBP', 'USD');
INSERT INTO C ( COL_C1, COL_C2, COL_C3, COL_C4 ) VALUES (
'PQRS', 'C', 'GBP', 'JPY');
INSERT INTO C ( COL_C1, COL_C2, COL_C3, COL_C4 ) VALUES (
'PQRS', 'C', 'GBP', 'EUR');
COMMIT;

O/P of Table C should look like:

Col_C1 Col_C2 Col_C3 Col_C4 Col_C5

ABCD C USD USD 0.2*100=20
ABCD C USD EUR 0.8*100=80
PQRS C GBP USD 0.6*1000=600
PQRS C GBP JPY 0.2*1000=200
PQRS C GBP EUR 0.2*1000=200

Assuming Table C already contains rows with values for the first 4 columns Col_C1,2,3,4, I am thinking of
using merge (if a matching record from C grouped at Col_C1,2,3 exists in Table B, then update Col_C5 based on the above proration logic; if not, insert a
new row into table C (values based on conditions outside this scope)).

Can this be done using analytic functions, as I think it can be?

Could you pls explain how this can be achieved?

Tom Kyte
November 08, 2006 - 8:21 am UTC

if you have A and B, then just JOIN and that'll give you your "output"

so write a query that joins A & B, call that Q

merge into C
using (Q) q
on (...)
when matched


I don't see where analytics enter the picture at all.
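Spelled out against the sample tables above, a rough sketch of that outline (an editorial illustration, not tested against the poster's data; note that on 9i the MERGE syntax requires both branches, so a placeholder WHEN NOT MATCHED insert is included even though the poster said those values are out of scope):

merge into c
using ( select a.col_a1, a.col_a2, a.col_a3, a.col_a4,
               a.col_a5 * b.col_b4 as prorated_amt   -- ratio * grouped amount
          from a, b
         where a.col_a1 = b.col_b1
           and a.col_a2 = b.col_b2
           and a.col_a3 = b.col_b3 ) q
on (     c.col_c1 = q.col_a1
     and c.col_c2 = q.col_a2
     and c.col_c3 = q.col_a3
     and c.col_c4 = q.col_a4 )
when matched then
  update set col_c5 = q.prorated_amt
when not matched then
  insert ( col_c1, col_c2, col_c3, col_c4, col_c5 )   -- placeholder branch only
  values ( q.col_a1, q.col_a2, q.col_a3, q.col_a4, q.prorated_amt );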

different plans

A reader, December 27, 2006 - 2:36 pm UTC

Hi Tom,
Happy holidays!
We have a table that is partitioned by month on a date column.
Local index on (date,sq_key) which is also the primary key for the table.
The table has about 300 million rows spread across these monthly partitions.
On one environment :
select * from daily_loan_alloc where calendar_dt = to_date('12/26/2006','mm/dd/yyyy');
gives me a plan of :
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

SELECT STATEMENT Optimizer Mode=CHOOSE 1 1
TABLE ACCESS BY LOCAL INDEX ROWID SLD.DAILY_LOAN_ALLOC 1 77 1 26 26
INDEX RANGE SCAN SLD.XPKDAILY_LOAN_ALLOC 1 3 26 26

This returns data in 10 sec. (the query, as you can notice, is straightforward).

While on the other environment:
select * from daily_loan_alloc where calendar_dt = to_date('12/26/2006','mm/dd/yyyy');
comes back with the same data in 3 minutes!!!
Query plan is different:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

SELECT STATEMENT Optimizer Mode=CHOOSE 712 K 12644
TABLE ACCESS FULL SLD.DAILY_LOAN_ALLOC 712 K 51 M 12644 26 26

Now, I am aware that a full scan is not evil, but here, for some reason, I am unable to understand the difference in behaviour.

Suspecting the need for an index rebuild of the local index, I tried the following on the problematic environment to determine whether a rebuild is required:
SQL> analyze index SLD.XPKDAILY_LOAN_ALLOC partition (P_M1206) validate structure;

Index analyzed.

SQL> set timing on
SQL> select name as IndexName,
2 height,
3 lf_rows,
4 del_lf_rows
5 from index_stats;

INDEXNAME HEIGHT LF_ROWS DEL_LF_ROWS
------------------------------ ---------- ---------- -----------
XPKDAILY_LOAN_ALLOC 3 18483555 0

They turned out to be identical for both environments!
Could you please throw more light on what could be the reason for the difference in behaviour for this query?
Should I go for an index rebuild?
Will it solve the query performance problem?
Under what scenario should an index be rebuilt?
I would have thought that an index rebuild is generally necessary if the height of the index is abnormally high, or if the leaf rows marked for delete in the index are more than, say, 20% of the total leaf rows...
Is this assumption correct?
Here, I notice that del_lf_rows = 0 and the height does not seem drastic either!
Or is there any other area that I can explore?
Both environments have exactly the same capacity and power.

Thanks
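(As an editorial aside, not part of the exchange: the poster's own rule of thumb can be read straight off INDEX_STATS after the VALIDATE STRUCTURE shown above; the 20% threshold is the poster's, not an Oracle recommendation.)

analyze index sld.xpkdaily_loan_alloc partition (p_m1206) validate structure;

select name,
       height,
       lf_rows,
       del_lf_rows,
       round( 100 * del_lf_rows / nullif(lf_rows, 0), 2 ) as pct_deleted_leaf_rows
  from index_stats;   -- holds only the most recently validated index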
Tom Kyte
December 28, 2006 - 9:39 am UTC

compare plans, are estimated card= values the same.

different plans

A reader, December 28, 2006 - 10:04 am UTC

Hi Tom,
On Environment 1:
select * from daily_loan_alloc where calendar_dt = to_date('12/26/2006','mm/dd/yyyy');
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

SELECT STATEMENT Optimizer Mode=CHOOSE 708 K 13150
TABLE ACCESS FULL SLD.DAILY_LOAN_ALLOC 708 K 52 M 13150 26 26

As you can see, the estimated rows in the plan show as 708K, which is approximately the number of rows returned by the query.

In Environment 2:
SQL>set autotrace traceonly explain
SQL>select * from daily_loan_alloc where calendar_dt = to_date('12/26/2006','mm/dd/yyyy');

Elapsed: 00:00:00.47

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=77)
1 0 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'DAILY_LOAN_ALLOC'
(Cost=1 Card=1 Bytes=77)

2 1 INDEX (RANGE SCAN) OF 'XPKDAILY_LOAN_ALLOC' (UNIQUE) (Co
st=3 Card=1)

Here, the estimated rows in the plan show as 1!

But the actual number of rows for the query is approximately the same 708K.


Tom Kyte
December 29, 2006 - 9:01 am UTC

ok, did you see the glaring difference?

so.........

what is the difference between these here two environments.

different plans

A reader, January 03, 2007 - 11:02 am UTC

Hi Tom,

Environment : Oracle9i 2 node RAC.
We have some updates in our PL/SQL code involving correlated updates which are very slow (one such update takes 30 minutes!), thereby
affecting the processing time of the entire code.
But the query that is used to form the correlated update returns the data much quicker (it takes just about 10-20 sec).
This makes me think that, in order to process the update faster, I would be better off using a cursor approach rather than a direct update!
Is a direct update normally preferred over a cursor-based approach because there is no context switch?
Is there any other way to speed up the update?

I am giving one eg:
update statement(takes about 25-30 minutes):

______________________________________________________________________________________________________
UPDATE loan_alloc la
SET (close_dt,term_indicator_cd) = (SELECT lnc_close_date,lnc_term_indic
FROM stage_dml_loan sl
WHERE sl.loan_alloc_sq = la.loan_alloc_sq
)
WHERE EXISTS (SELECT 1
FROM stage_dml_loan sl
WHERE sl.lnc_alloc_flag = '3'
AND sl.loan_alloc_sq = la.loan_alloc_sq
AND
(NVL(sl.lnc_close_date, '01-jan-1900') <> NVL(la.close_dt, '01-jan-1900')
OR NVL(sl.lnc_term_indic, 'X') <> NVL(la.term_indicator_cd, 'X'))
);

Explain plan:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

UPDATE STATEMENT Optimizer Mode=CHOOSE 851 K 861657
UPDATE SLD.LOAN_ALLOC
FILTER
TABLE ACCESS FULL SLD.LOAN_ALLOC 851 K 12 M 10117 :Q655533000 P->S QC (RANDOM)
TABLE ACCESS BY INDEX ROWID SLDUSER.STAGE_DML_LOAN 1 10 1
INDEX RANGE SCAN SLDUSER.IDX_DMLLOAN_ID 1 3
TABLE ACCESS BY INDEX ROWID SLDUSER.STAGE_DML_LOAN 1 9 1
INDEX RANGE SCAN SLDUSER.IDX_DMLLOAN_ID 1 3

The corresponding sql query that makes the update(takes just about 10-20 sec):

SELECT la.loan_alloc_sq,lnc_close_date,la.CLOSE_DT,lnc_term_indic,la.TERM_INDICATOR_CD
FROM stage_dml_loan sl,loan_alloc la
WHERE sl.loan_alloc_sq = la.loan_alloc_sq
AND sl.lnc_alloc_flag = '3'
AND
(
NVL(sl.lnc_close_date, '01-jan-1900') <> NVL(la.close_dt, '01-jan-1900')
OR
NVL(sl.lnc_term_indic, 'X') <> NVL(la.term_indicator_cd, 'X')
);

Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

SELECT STATEMENT Optimizer Mode=CHOOSE 104 K 13716
HASH JOIN 104 K 2 M 13716 :Q655534001 P->S QC (RANDOM)
TABLE ACCESS FULL SLDUSER.STAGE_DML_LOAN 394 K 3 M 3599 :Q655534000 S->P BROADCAST
TABLE ACCESS FULL SLD.LOAN_ALLOC 17 M 243 M 10117 :Q655534001 PCWP
______________________________________________________________________________________________________

loan_alloc (the table that is updated) has 17 million records and has a PK on loan_alloc_sq.
stage_dml_loan (the staging table that is used for the update) has about 700,000 records (no PK, since it is a staging table used by SQL*Loader to load data from a file, but it has an index on loan_alloc_sq).

Is there any other approach, without having to go for the cursor-based approach, as there are so many such updates?
I was thinking of the "update the join" method, but I am not sure if that can be used here.
Please advise!

Thanks.
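(An editorial sketch, not a reply from the thread: the "update the join" form the poster mentions would look roughly like the statement below, but it only works if loan_alloc is key-preserved - i.e. stage_dml_loan.loan_alloc_sq would need a unique index or constraint, whereas the poster describes only a non-unique index - otherwise Oracle raises ORA-01779.)

update ( select la.close_dt,
                la.term_indicator_cd,
                sl.lnc_close_date,
                sl.lnc_term_indic
           from loan_alloc la, stage_dml_loan sl
          where sl.loan_alloc_sq  = la.loan_alloc_sq
            and sl.lnc_alloc_flag = '3'
            and (    nvl(sl.lnc_close_date, date '1900-01-01')
                  <> nvl(la.close_dt,       date '1900-01-01')
                  or nvl(sl.lnc_term_indic, 'X') <> nvl(la.term_indicator_cd, 'X') ) )
   set close_dt          = lnc_close_date,
       term_indicator_cd = lnc_term_indic;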

index

A reader, January 24, 2007 - 3:15 pm UTC

Hi Tom,
It was interesting to read your article on index compression in your book (my apologies... I don't remember the exact name of the book, though it is a book on 9i/10g architecture...).

You compared various compression levels for a composite index and gave statistics which basically showed that too much compression is also bad.
Your example was a 3-column index, and compression 2 was found to be the best based on the OPT_CMPR_COUNT column in index_stats.

In our application, most of our indexes are composite.
My question: Is there a way to get a list of indexes in a schema which could greatly benefit from compression?
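(No reply was posted for this one. As an illustrative sketch only: after ANALYZE INDEX ... VALIDATE STRUCTURE, INDEX_STATS exposes OPT_CMPR_COUNT - the prefix length Oracle suggests - and OPT_CMPR_PCTSAVE - the estimated space saving at that prefix length. INDEX_STATS holds a single row at a time, so this has to be driven index by index; the index name below is a placeholder.)

analyze index my_schema.my_composite_idx validate structure;

select name,
       opt_cmpr_count   as suggested_prefix_columns,
       opt_cmpr_pctsave as estimated_pct_space_saved
  from index_stats;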

data export

A reader, February 15, 2007 - 2:27 pm UTC

Hi Tom,
We have a requirement wherein we need to copy selective data from some tables (based on business rules) from our PROD environment to our UAT environment.
The reason is that there is a lot of user-keyed data during the day in our PROD environment.
Obviously, as PROD is the first priority, the users do not diligently key in the same data in our UAT environment, which causes,
first of all, a mess in terms of data variance and, secondly, for lack of uniformity, difficulty in debugging or replicating problematic scenarios
in UAT.
The bulk of our time is lost in bringing the UAT data into sync with PROD even before we can analyze the actual core problem.
We obviously cannot enforce this as a rule on the users of the application because, for them, PROD is understandably the top priority...
Adding to this, we have a stringent corporate-wide policy that does not allow us to create DB LINKS from PROD to UAT (for security purposes)...

Our requirement is to sync up these selected tables in UAT with their counterparts in PROD at the end of the day , say , like a scheduled job or something...

Is there a good concise way of achieving this through code or any tool?

Many thanks.
Tom Kyte
February 16, 2007 - 1:28 pm UTC

perfect opportunity for you to TEST YOUR ABILITY TO RESTORE!!!

have your DBA staff restore prod to uat. You get what you need and they have the confidence to know they can do the most important bit of their job and how long it'll take.

ORA-01401

A reader, June 11, 2007 - 12:43 pm UTC

Hi Tom,
In one of my procedures, I am getting the following error:
ORA-01401: inserted value too large for column
As easy as the error may sound, it is giving me a hard time, because one approach to inserting the data works while the other gives me the above error...



e459393@sysint>desc mv_nce_noncash
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 BUS_DATE                                  NOT NULL DATE
 BROKER_ID                                          VARCHAR2(20)
 TRADETYPE                                          VARCHAR2(1)
 NCE_ID                                    NOT NULL NUMBER(15)
 DML_LOAN_AND_REGION                                VARCHAR2(20)
 TRADE_TERM_DATE                                    DATE
 SETTLE_DATE                                        DATE
 SETTLEMENT_CCY                                     VARCHAR2(3)
 COLLATERAL_CCY                                     VARCHAR2(3)
 STOCK                                              VARCHAR2(12)
 FUND                                      NOT NULL VARCHAR2(20)
 SECURITY_NAME                                      VARCHAR2(30)
 SECURITY_CCY                                       VARCHAR2(3)
 VALUE_CCY                                          VARCHAR2(3)
 ALLOCATION_NOMINAL                                 NUMBER
 TO_USD_RATE                                        NUMBER(15,7)
 PERCENTAGE                                         NUMBER(8,4)
 HOUSE_ACCT                                         VARCHAR2(30)
 TRADE_PRICE                                        NUMBER(14,6)
 PRODUCT_LINE                                       VARCHAR2(20)
 PRODUCT_LINE_DESC                                  VARCHAR2(30)
 SECURITY_CLASS                                     VARCHAR2(4)
 COLLATERAL_TYPE                                    VARCHAR2(4)

My Stored procedure code earlier was to use a cursor and process it by means of index by table:
I am just giving you the synopsis here:

Approach 1:
*****************************************************
cursor c is
select a,b,c,d from some_table(complex sql here);

type var_type is table of c%rowtype index by binary_integer;
rec var_type;

begin

open c;

loop

fetch c bulk collect into rec limit 5000;

l_counter := l_counter + 1;

if rec.count > 0 then

forall i in rec.first..rec.last

insert into (
SELECT
BUS_DATE,
BROKER_ID,
TRADETYPE,
NCE_ID,
DML_LOAN_AND_REGION,
TRADE_TERM_DATE,
SETTLE_DATE,
SETTLEMENT_CCY,
COLLATERAL_CCY,
STOCK,
FUND,
SECURITY_NAME,
SECURITY_CCY,
VALUE_CCY,
ALLOCATION_NOMINAL,
TO_USD_RATE,
PERCENTAGE,
HOUSE_ACCT,
TRADE_PRICE,
PRODUCT_LINE,
PRODUCT_LINE_DESC,
SECURITY_CLASS,
COLLATERAL_TYPE
FROM MV_NCE_NONCASH)
values rec(i);

commit;

end if;

exit when c%notfound;

end loop ;

commit;
*********************************************************
Approach 1 gives me the above-mentioned ORA error even though the data coming in from the cursor is good, and I have checked for length problems that could cause the error...
Approach 2:
*****************************************************
Straight insert into mv_nce_noncash(col1,col2.....)
select * from the cursor query above;

This approach works fine and the same insert gets processed without trouble!!!


Could you please explain where I could be having a problem?
Approximate number of records to be inserted: 200K.

I realise that for inserting 200K rows, approach 2 is more than sufficient... the database should be able to handle that volume without breaking it into batches and processing them as in approach 1...
But, I am curious...

Big Thanks...

Tom Kyte
June 11, 2007 - 4:04 pm UTC

I am less curious - because approach 1 would not be correct, so let us ignore trying to find the issue and be relieved it did not work.

without a test case, I'm not going to really be able to do more than "guess" at what the bug (in your code, in oracle code, in any code) would be.

Columns as rows

a reader, June 12, 2007 - 9:45 am UTC

Hi Tom,

I have data in a table as following:

DEPTNO MAILID
------ ------
DEPT1 ABC@XCC.COM
DEPT1 XYZ@DO.COM
DEPT2 DHDF@DOP.COM
DEPT2 SGDS@ER.COM
DEPT1 SDG@TO.COM

I need the output as:
DEPT1 ABC@XCC.COM;XYZ@DO.COM;SDG@TO.COM
DEPT2 DHDF@DOP.COM;SGDS@ER.COM

I would appreciate it if you could help me achieve this result using a single query; I know that I can do this using PL/SQL code, but I do not want to.

Thanks a lot in advance.
Tom Kyte
June 12, 2007 - 10:30 am UTC

search site for stragg

string aggregate

martina, June 12, 2007 - 9:59 am UTC


i guess something like
select dept,stragg(email)
      from tab
      group by dept;

is what you're looking for?

regards, martina
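(An editorial sketch, not from the thread: on 9i/10g the same aggregation can also be done without a custom stragg function, using ROW_NUMBER plus SYS_CONNECT_BY_PATH; dept_mail is a placeholder name for the poster's table.)

select deptno,
       ltrim( max( sys_connect_by_path(mailid, ';') ), ';' ) as mail_list
  from ( select deptno,
                mailid,
                row_number() over (partition by deptno order by mailid) rn
           from dept_mail )
 start with rn = 1
connect by prior deptno = deptno
       and prior rn     = rn - 1
 group by deptno;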

Relevance to the topic???

John Gasch, June 12, 2007 - 10:10 am UTC

Tom,

Per your guidelines on submittal of reviews:

Please limit this to a comment or question relevant to the other text on this page.

How do most of the reviews in this thread even remotely relate to the originally posted question? When you graciously follow up on unrelated questions, you are encouraging this. When the stream of information wanders across many unrelated domains, it complicates the lookup process for those of us who employ the search utilities.
If readers need to post a new question, but are too impatient to await an opportunity to do so legitimately, they might at least consider doing a simple search and appending such questions to an existing thread on a related topic.

Perhaps the banner on the "Submit Review" page ought to be changed to:
Please limit this review to a comment or question relevant to the topic of this thread.

Regardless, I do admire your seemingly limitless patience, along with your boundless knowledge. We are grateful for your service. I consult your site daily and I learn a ton. I also find that I rarely encounter a situation where a search of existing posts doesn't steer me toward the solution.

John Gasch

martina, June 12, 2007 - 10:45 am UTC


to john gasch,

do you realize that deciding whether something is relevant to a topic is not easy for a non-native speaker?

regards, martina
Tom Kyte
June 12, 2007 - 10:58 am UTC

why isn't it?

you do read documentation right?

if they cannot make that decision, they likely would not understand my response to their question.

Relevance

John Gasch, June 12, 2007 - 11:25 am UTC

To Martina,

My critique was not aimed at you - I was referring in general to the entire thread. In fact, when I composed my review, I had not seen your post, nor the prior one that you were responding to. At least yours had more relevance than the vast majority of other reviews, which ranged from optimizer questions to dbms_xplan, partitions, and analytics, to name a few - none of which can be traced back to the original question regarding column concatenation.

My apologies to you if you thought I was critical of you specifically. That was not my intention.

Regards, John

A reader, June 12, 2007 - 4:50 pm UTC

Hi John,
I do agree that the thread has jumbled-up questions and has minor relevance, if any, to the original question...
But unfortunately, that has been the only way I have been able to get my questions answered...
Please let me know a better way to get a response and I can probably do a better job of it myself.
So, until then, I cannot help it.
If it is irrelevant to you, you always have the option to go and look at other threads.

Regarding my latest question on the ORA-01401 error, I have seen both approaches work.
Because of the huge volume of data, I unfortunately cannot give the scripts, which would require me to provide data for many other dependent tables as well...
But I do disagree with the response.
When I come to this site, it must be remembered that I do the due diligence of trying things out first-hand myself, and only when it goes beyond my reasoning do I seek guidance.
The response has not helped direct me towards any possible reasons, which is why most people visit this site: in the sure hope that when their question gets posted, they will receive some semblance of good reasoning and rationale.

Thanks.


A reader, June 12, 2007 - 4:53 pm UTC

Also, I must add that this site is the best...
no doubts about that...
This site has helped me to leverage my knowledge to a great extent and I always tip my hat to Tom.
(except for this question, of course)

thanks
Tom Kyte
June 12, 2007 - 5:39 pm UTC

well, you gave us zero ways to see what you see, so I was disappointed in your question :)

we are even.

Relevance to the topic

John Gasch, June 13, 2007 - 11:20 am UTC

To "reader from Boston"

Tom has built an extraordinary knowledge base to serve as a shared resource for the benefit of DBAs and developers worldwide. But a knowledge base requires structure. Reviews (and even new questions such as yours) would be more beneficial to all if they were properly categorized and filed. Regarding your most recent "new question" on the topic of "ORA-01401: Inserted value too large for column", it would be more beneficial to all if you had posted it under the existing original post titled ORA-01401: Inserted value too large for column. It took me all of 5 seconds to locate that thread.

If Tom is willing to field new questions disguised as reviews, it's his site and his prerogative; but it would contribute more to the knowledge base if such "reviews" were posted within a relevant thread where the rest of us can read them in context.

Last word on this from me.
John

statspack report

A reader, July 16, 2007 - 1:14 pm UTC

Hi Tom,

I have a statspack report taken over a snapshot period of 1 hour.
From the outset, there were certain deductions I could make.


STATSPACK report for

DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
O02CDS3 3530770287 O02CDS3 1 9.2.0.6.0 NO svdw2000

Snap Id Snap Time Sessions Curs/Sess Comment
--------- ------------------ -------- --------- -------------------
Begin Snap: 136 16-Jul-07 10:00:03 148 43.6
End Snap: 138 16-Jul-07 11:00:04 189 39.1
Elapsed: 60.02 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 2,032M Std Block Size: 8K
Shared Pool Size: 304M Log Buffer: 9,216K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 126,863.77 8,037.66
Logical reads: 13,965.12 884.78
Block changes: 470.04 29.78
Physical reads: 75.12 4.76
Physical writes: 122.18 7.74
User calls: 449.07 28.45
Parses: 104.57 6.63
Hard parses: 12.98 0.82
Sorts: 48.98 3.10
Logons: 5.36 0.34
Executes: 207.94 13.17
Transactions: 15.78

% Blocks changed per Read: 3.37 Recursive Call %: 60.14
Rollback per transaction %: 0.04 Rows per Sort: 74.98

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.99 In-memory Sort %: 99.32
Library Hit %: 96.10 Soft Parse %: 87.59
Execute to Parse %: 49.71 Latch Hit %: 99.92
Parse CPU to Parse Elapsd %: 98.61 % Non-Parse CPU: 95.53

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 94.81 94.31
% SQL with executions>1: 85.23 62.35
% Memory for SQL w/exec>1: 87.17 65.41

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 5,082 78.27
enqueue 162 351 5.41
log file sync 61,107 310 4.77
log file parallel write 126,061 292 4.50
db file parallel write 3,510 160 2.46
-------------------------------------------------------------
Wait Events for DB: O02CDS3 Instance: O02CDS3 Snaps: 136 -138
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
enqueue 162 118 351 2166 0.0
log file sync 61,107 1 310 5 1.1
log file parallel write 126,061 0 292 2 2.2
db file parallel write 3,510 0 160 46 0.1
SQL*Net message from dblink 900 0 94 105 0.0
direct path write 315,984 0 86 0 5.6
log file sequential read 515 0 28 55 0.0
direct path read 197,385 0 18 0 3.5
control file parallel write 1,312 0 7 5 0.0
SQL*Net more data to client 51,606 0 3 0 0.9
latch free 2,108 314 2 1 0.0
db file sequential read 2,935 0 2 1 0.1
async disk IO 965 0 1 1 0.0
SQL*Net break/reset to clien 91 0 1 12 0.0
direct path read (lob) 1,023 0 1 1 0.0
log file switch completion 22 0 1 28 0.0
db file scattered read 1,009 0 1 1 0.0
SQL*Net more data from dblin 4,272 0 0 0 0.1
process startup 2 0 0 66 0.0
buffer busy waits 547 0 0 0 0.0
LGWR wait for redo copy 860 0 0 0 0.0
direct path write (lob) 156 0 0 0 0.0
log file single write 10 0 0 5 0.0
control file sequential read 1,280 0 0 0 0.0
row cache lock 6 0 0 2 0.0
library cache pin 4 0 0 1 0.0
SQL*Net message to dblink 900 0 0 0 0.0
buffer deadlock 1 1 0 0 0.0
SQL*Net message from client 1,664,588 0 577,013 347 29.3
jobq slave wait 42 41 122 2907 0.0
SQL*Net more data from clien 71,575 0 14 0 1.3
SQL*Net message to client 1,664,630 0 2 0 29.3
-------------------------------------------------------------
Background Wait Events for DB: O02CDS3 Instance: O02CDS3 Snaps: 136 -138
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file parallel write 126,062 0 292 2 2.2
db file parallel write 3,510 0 160 46 0.1
log file sequential read 515 0 28 55 0.0
control file parallel write 1,312 0 7 5 0.0
async disk IO 965 0 1 1 0.0
db file scattered read 955 0 1 1 0.0
direct path write 265 0 0 1 0.0
db file sequential read 1,603 0 0 0 0.0
LGWR wait for redo copy 860 0 0 0 0.0
log file single write 10 0 0 5 0.0
control file sequential read 1,132 0 0 0 0.0
direct path read 365 0 0 0 0.0
buffer busy waits 5 0 0 0 0.0
latch free 3 1 0 0 0.0
rdbms ipc message 70,759 4,521 23,544 333 1.2
pmon timer 1,196 1,196 3,512 2936 0.0
smon timer 2,917 0 3,508 1202 0.1
-------------------------------------------------------------


Instance Activity Stats for DB: O02CDS3 Instance: O02CDS3 Snaps: 136 -138

Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session 508,164 141.1 8.9
CPU used when call started 507,127 140.8 8.9
CR blocks created 56,857 15.8 1.0
Cached Commit SCN referenced 0 0.0 0.0
Commit SCN cached 0 0.0 0.0
DBWR buffers scanned 0 0.0 0.0
DBWR checkpoint buffers written 77,904 21.6 1.4
DBWR checkpoints 5 0.0 0.0
DBWR free buffers found 0 0.0 0.0
DBWR lru scans 0 0.0 0.0
DBWR make free requests 0 0.0 0.0
DBWR revisited being-written buff 68 0.0 0.0
DBWR summed scan depth 0 0.0 0.0
DBWR transaction table writes 1,147 0.3 0.0
DBWR undo block writes 16,705 4.6 0.3
DFO trees parallelized 0 0.0 0.0
PX local messages recv'd 0 0.0 0.0
PX local messages sent 0 0.0 0.0
Parallel operations not downgrade 0 0.0 0.0
SQL*Net roundtrips to/from client 1,606,860 446.2 28.3
SQL*Net roundtrips to/from dblink 900 0.3 0.0
active txn count during cleanout 35,100 9.8 0.6
background checkpoints completed 5 0.0 0.0
background checkpoints started 5 0.0 0.0
background timeouts 5,551 1.5 0.1
branch node splits 0 0.0 0.0
buffer is not pinned count 39,275,404 10,906.8 691.0
buffer is pinned count 17,048,594 4,734.4 300.0
bytes received via SQL*Net from c 373,724,272 103,783.5 6,575.4
bytes received via SQL*Net from d 9,143,672 2,539.2 160.9
bytes sent via SQL*Net to client 389,036,569 108,035.7 6,844.8
bytes sent via SQL*Net to dblink 20,837 5.8 0.4
calls to get snapshot scn: kcmgss 1,443,397 400.8 25.4
calls to kcmgas 177,653 49.3 3.1
calls to kcmgcs 25,281 7.0 0.4
change write time 4,717 1.3 0.1
cleanout - number of ktugct calls 35,410 9.8 0.6
cleanouts and rollbacks - consist 29,588 8.2 0.5
cleanouts only - consistent read 89 0.0 0.0
cluster key scan block gets 2,376,397 659.9 41.8
cluster key scans 1,181,059 328.0 20.8
commit cleanout failures: block l 576 0.2 0.0
commit cleanout failures: buffer 17 0.0 0.0
commit cleanout failures: callbac 23 0.0 0.0
commit cleanout failures: cannot 158 0.0 0.0
commit cleanout failures: hot bac 0 0.0 0.0
commit cleanouts 148,858 41.3 2.6
commit cleanouts successfully com 148,084 41.1 2.6
commit txn count during cleanout 1,622 0.5 0.0
consistent changes 110,682 30.7 2.0
consistent gets 48,573,987 13,489.0 854.6
consistent gets - examination 14,381,636 3,993.8 253.0
current blocks converted for CR 0 0.0 0.0
cursor authentications 22,418 6.2 0.4
data blocks consistent reads - un 89,405 24.8 1.6
db block changes 1,692,612 470.0 29.8
Instance Activity Stats for DB: O02CDS3 Instance: O02CDS3 Snaps: 136 -138

Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
db block gets 1,715,893 476.5 30.2
deferred (CURRENT) block cleanout 44,143 12.3 0.8
dirty buffers inspected 0 0.0 0.0
enqueue conversions 2,294 0.6 0.0
enqueue deadlocks 0 0.0 0.0
enqueue releases 428,202 118.9 7.5
enqueue requests 428,206 118.9 7.5
enqueue timeouts 0 0.0 0.0
enqueue waits 44 0.0 0.0
exchange deadlocks 1 0.0 0.0
execute count 748,784 207.9 13.2
free buffer inspected 48 0.0 0.0
free buffer requested 144,770 40.2 2.6
hot buffers moved to head of LRU 0 0.0 0.0
immediate (CR) block cleanout app 29,677 8.2 0.5
immediate (CURRENT) block cleanou 3,913 1.1 0.1
index fast full scans (full) 187 0.1 0.0
index fetch by key 10,755,107 2,986.7 189.2
index scans kdiixs1 1,920,377 533.3 33.8
leaf node 90-10 splits 79 0.0 0.0
leaf node splits 197 0.1 0.0
logons cumulative 19,290 5.4 0.3
messages received 68,929 19.1 1.2
messages sent 68,929 19.1 1.2
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 31,719,914 8,808.6 558.1
opened cursors cumulative 315,848 87.7 5.6
parse count (failures) 36 0.0 0.0
parse count (hard) 46,745 13.0 0.8
parse count (total) 376,547 104.6 6.6
parse time cpu 22,699 6.3 0.4
parse time elapsed 23,019 6.4 0.4
physical reads 270,517 75.1 4.8
physical reads direct 262,737 73.0 4.6
physical reads direct (lob) 1,481 0.4 0.0
physical writes 439,980 122.2 7.7
physical writes direct 355,199 98.6 6.3
physical writes direct (lob) 256 0.1 0.0
physical writes non checkpoint 377,439 104.8 6.6
pinned buffers inspected 0 0.0 0.0
prefetched blocks 2,355 0.7 0.0
prefetched blocks aged out before 0 0.0 0.0
process last non-idle time 3,601 1.0 0.1
queries parallelized 0 0.0 0.0
recursive calls 2,440,204 677.7 42.9
recursive cpu usage 52,484 14.6 0.9
redo blocks written 957,418 265.9 16.8
redo buffer allocation retries 19 0.0 0.0
redo entries 882,716 245.1 15.5
redo log space requests 24 0.0 0.0
redo log space wait time 62 0.0 0.0
redo ordering marks 0 0.0 0.0
redo size 456,836,448 126,863.8 8,037.7
redo synch time 31,745 8.8 0.6
redo synch writes 61,086 17.0 1.1
redo wastage 17,965,324 4,989.0 316.1
Instance Activity Stats for DB: O02CDS3 Instance: O02CDS3 Snaps: 136 -138

Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
redo write time 30,031 8.3 0.5
redo writer latching time 8 0.0 0.0
redo writes 63,028 17.5 1.1
rollback changes - undo records a 168 0.1 0.0
rollbacks only - consistent read 27,244 7.6 0.5
rows fetched via callback 3,556,047 987.5 62.6
session connect time 0 0.0 0.0
session logical reads 50,288,398 13,965.1 884.8
session pga memory 32,562,176 9,042.5 572.9
session pga memory max 33,440,712 9,286.5 588.4
session uga memory 8,862,196,664 2,461,037.7 155,923.0
session uga memory max 2,108,945,264 585,655.5 37,105.2
shared hash latch upgrades - no w 1,889,256 524.7 33.2
shared hash latch upgrades - wait 90 0.0 0.0
sorts (disk) 1,192 0.3 0.0
sorts (memory) 175,188 48.7 3.1
sorts (rows) 13,225,769 3,672.8 232.7
summed dirty queue length 0 0.0 0.0
switch current to new buffer 53,596 14.9 0.9
table fetch by rowid 7,152,785 1,986.3 125.9
table fetch continued row 110,994 30.8 2.0
table scan blocks gotten 27,409,386 7,611.6 482.3
table scan rows gotten 2,428,587,412 674,420.3 42,729.0
table scans (direct read) 0 0.0 0.0
table scans (long tables) 116 0.0 0.0
table scans (rowid ranges) 0 0.0 0.0
table scans (short tables) 174,971 48.6 3.1
transaction rollbacks 8 0.0 0.0
transaction tables consistent rea 0 0.0 0.0
transaction tables consistent rea 0 0.0 0.0
user calls 1,617,089 449.1 28.5
user commits 56,817 15.8 1.0
user rollbacks 20 0.0 0.0
write clones created in backgroun 7 0.0 0.0
write clones created in foregroun 877 0.2 0.0
-------------------------------------------------------------


The execute-to-parse ratio is quite low... Is this normal? The client connections come in through the JDBC thin client.
The ratio makes me think that a parse is happening for a large share of the executions!
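
For reference, that figure can be derived from the instance activity statistics above: Execute to Parse % = 100 * (1 - parse count (total) / execute count) = 100 * (1 - 376,547 / 748,784), which is roughly 49.7 and matches the 49.71 in the report. In other words, about one parse call is issued for every two executions; that usually suggests the application parses a statement each time it executes it rather than parsing once and executing it many times.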

Also, one of the top waits is log file sync.
Does this contribute heavily towards performance degradation?
Is reducing redo a possible option?
What else can be looked at?

I am not sure about the enqueue wait, which also seems to be a contributor to the wait time...
Enqueues are locks on a database resource, correct?
Is there contention?

Are my deductions valid?
Also, if you can throw more light on anything you notice to be glaringly unusual, that would be very useful...
In the meantime, I will try to dig into the past threads on your site...

Thanks a ton!

candidate for IOT?

prasanna, July 19, 2007 - 3:56 pm UTC

Hi Tom,

This site rocks!
Some of the tables in our application have the following pattern of columns (just mentioning the relevant ones):

Table test:

col_a
col_b
date1
date2
flag

This is right now a heap table with the PK being (col_a, col_b) and a unique index on (col_a, date1, date2, flag).
Most of the queries surrounding these tables access them via col_a, date1, date2, flag with a range scan on that unique index...
Like:
SELECT col_a,col_b FROM test
WHERE col_a = (join with another table)
AND (V_Calendar_Dt BETWEEN date1 AND date2)
AND Flag = 'Y'

I find an enormous performance hit in cases where I have to join these tables to other heap tables, just as shown above...

Volume-wise, these tables are small, with the largest containing about 100,000 records...

I was also considering converting them to hash clusters, but based on the forum discussions here, it looks like a hash cluster is not a good option since I will seldom be querying based on the PK...

But are these kinds of tables good candidates for an IOT instead of a heap table?

Also, in the current version of the heap table, I have certain other indexes as well, which I presume would become secondary indexes in the IOT world...
But in one of the discussions I read that if I need to create a secondary index on an IOT, I can exclude the PK columns, as they are automatically attached to the secondary index...
Am I right?
In that case, though, it will not be possible to enforce uniqueness just on date1, date2, flag, as col_a is part of the PK and I choose not to include it...!
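
A minimal sketch of the structure being discussed, for illustration only; the datatypes below are assumptions, since the post only lists column names:

create table test (
    col_a  number,
    col_b  number,
    date1  date,
    date2  date,
    flag   varchar2(1),
    constraint test_pk primary key (col_a, col_b)
)
organization index;

-- A secondary index on an IOT stores logical rowids, which embed the primary
-- key, so the PK columns travel along with every secondary index entry.
-- Uniqueness, however, is enforced only on the columns explicitly listed:
create unique index test_uk1 on test (col_a, date1, date2, flag);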

I tried to create a secondary index on the IOT version of test with the following columns: col_a, date1, date2, flag, as that is the current structure of our heap table...
But it did not give me any performance advantage; in fact, the LIOs increased and the access path took more hops!

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=82 Bytes=4059
0)

1 0 INDEX (UNIQUE SCAN) OF 'SYS_IOT_TOP_59831' (UNIQUE) (Cost=
1 Card=82 Bytes=40590)

2 1 INDEX (RANGE SCAN) OF 'XAK1CFD_IOT' (UNIQUE) (Cost=1 Car
d=33)

Here, XAK1CFD_IOT is the secondary unique index that I created on col_a, date1, date2, flag.
SYS_IOT_TOP_59831 is the PK on col_a,col_b...


Any pros/cons that can be listed would be a great help...
Tom Kyte
July 19, 2007 - 7:23 pm UTC

deja vu

i followed up on this elsewhere, go there. please don't cross post like that.

IOT

prasanna, July 19, 2007 - 10:43 pm UTC

oops...sorry...
that single "*" is for me actually :)
I had been waiting for some kind of response to my queries in the actual forum for the past 3-4 days and my impatience got the better of me today!...


Scalar subquery

Prasanna, August 15, 2007 - 10:54 pm UTC

Hi Tom,
This question actually deals, among other things, primarily with a scalar subquery...
I did not know where to post it among the existing threads...
Hence, excuse me that I have to ask it here...

We are on Oracle 9.2.0.8.
I have a query that uses a scalar subquery and performs efficiently.
Total Rows in Stage_dml_earnings_mon: 18 million
For a calendar_dt, rows in Stage_dml_earnings_mon = ~800K 
 
select s.EN_GQU_FROM_DATE ,s.EN_GQU_AS_OF_DATE,s.SOURCE_SQ,s.EN_GQU_TRAN_NO,s.EN_GQU_TRAN_SUFF,
(select nvl(max(l.loan_alloc_sq),0) from loan_alloc l
where
l.loan_id = s.en_gqu_tran_no
and l.alloc_id = s.en_gqu_tran_suff
and l.source_sq = s.source_sq
and l.settle_dt = s.en_gqu_as_of_date ) max_las
from stage_dml_earnings_mon s
where s.EN_GQU_FROM_DATE = to_date('07/02/2007','mm/dd/yyyy');

Explain Plan:

e459393@pp>select plan_table_output from table(dbms_xplan.display);

-------------------------------------------------------------------------------------------------
| Id  | Operation                    |  Name                       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                             |   816K|    23M|   468   (1)|
|   1 |  SORT AGGREGATE              |                             |     1 |    28 |            |
|   2 |   TABLE ACCESS BY INDEX ROWID| LOAN_ALLOC                  |     1 |    28 |     2  (50)|
|*  3 |    INDEX RANGE SCAN          | IDX_LOANAL_AK1              |     1 |       |     4  (25)|
|*  4 |  INDEX RANGE SCAN            | IDX_STGEARNMON_DT_TRAN_BOS  |   816K|    23M|  4678   (1)|
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("L"."LOAN_ID"=:B1 AND "L"."ALLOC_ID"=:B2 AND "L"."SOURCE_SQ"=:B3 AND
              "L"."SETTLE_DT"=:B4)
   4 - access("SYS_ALIAS_4"."EN_GQU_FROM_DATE"=TO_DATE(' 2007-07-02 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Here IDX_STGEARNMON_DT_TRAN_BOS refers to : Index on stage_dml_earnings_mon(EN_GQU_FROM_DATE, EN_GQU_TRAN_NO, EN_GQU_TRAN_SUFF, EN_GQU_AS_OF_DATE, SOURCE_SQ)
Here IDX_LOANAL_AK1 refers to : Index on loan_alloc(LOAN_ID, ALLOC_ID, SOURCE_SQ, SETTLE_DT)

Now, when I try to use the above query and execute the merge statement (or an update), it suddenly starts taking a very long time to 
complete the execution!

merge into stage_dml_earnings_mon s
using
(
select s.EN_GQU_FROM_DATE ,s.EN_GQU_AS_OF_DATE,s.SOURCE_SQ,s.EN_GQU_TRAN_NO,s.EN_GQU_TRAN_SUFF,
(select nvl(max(l.loan_alloc_sq),0) from loan_alloc l
where
l.loan_id = s.en_gqu_tran_no
and l.alloc_id = s.en_gqu_tran_suff
and l.source_sq = s.source_sq
and l.settle_dt = s.en_gqu_as_of_date ) max_las
from stage_dml_earnings_mon s
where s.EN_GQU_FROM_DATE = to_date('07/02/2007','mm/dd/yyyy')
) c_rec
on
(
 s.EN_GQU_FROM_DATE = c_rec.EN_GQU_FROM_DATE
 and s.EN_GQU_TRAN_NO = c_rec.EN_GQU_TRAN_NO
 and s.EN_GQU_TRAN_SUFF = c_rec.EN_GQU_TRAN_SUFF
 and s.EN_GQU_AS_OF_DATE = c_rec.EN_GQU_AS_OF_DATE
 and s.SOURCE_SQ = c_rec.SOURCE_SQ
)
when matched then
update set s.loan_alloc_sq = c_rec.max_las
when not matched then
insert 
(
 en_gqu_from_date
)
values (null);

Explain Plan:
SQL>select plan_table_output from table(dbms_xplan.display('PLAN_TABLE','xyz','ALL'));

----------------------------------------------------------------------------------------------------
| Id  | Operation                      |  Name                       | Rows  | Bytes |TempSpc| Cost (%CPU)|
----------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT                |                             |   846K|   134M|       | 45086   (1)|
|   1 |  MERGE                         | STAGE_DML_EARNINGS_MON      |       |       |       |            |
|   2 |   VIEW                         |                             |       |       |       |            |
|*  3 |    HASH JOIN OUTER             |                             |   846K|   148M|    33M| 45086   (1)|
|*  4 |     INDEX RANGE SCAN           | IDX_STGEARNMON_DT_TRAN_BOS  |   846K|    24M|       |  5173   (1)|
|   5 |     TABLE ACCESS BY INDEX ROWID| STAGE_DML_EARNINGS_MON      |   846K|   124M|       | 36194   
|*  6 |      INDEX RANGE SCAN          | IDX1_STAGERNMON_DT_ID       |     1 |       |       |  2282   (2)|
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("S"."SOURCE_SQ"(+)="SYS_ALIAS_4"."SOURCE_SQ" AND
              "S"."EN_GQU_AS_OF_DATE"(+)="SYS_ALIAS_4"."EN_GQU_AS_OF_DATE" AND
              "S"."EN_GQU_TRAN_SUFF"(+)="SYS_ALIAS_4"."EN_GQU_TRAN_SUFF" AND
              "S"."EN_GQU_TRAN_NO"(+)="SYS_ALIAS_4"."EN_GQU_TRAN_NO" AND
              "S"."EN_GQU_FROM_DATE"(+)="SYS_ALIAS_4"."EN_GQU_FROM_DATE")
   4 - access("SYS_ALIAS_4"."EN_GQU_FROM_DATE"=TO_DATE(' 2007-07-02 00:00:00', 'syyyy-mm-dd hh24:mi:
   6 - access("S"."EN_GQU_FROM_DATE"(+)=TO_DATE(' 2007-07-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Here IDX_STGEARNMON_DT_TRAN_BOS refers to : Index on stage_dml_earnings_mon(EN_GQU_FROM_DATE, EN_GQU_TRAN_NO, EN_GQU_TRAN_SUFF, EN_GQU_AS_OF_DATE, SOURCE_SQ)
IDX1_STAGERNMON_DT_ID refers to : Index on stage_dml_earnings_mon(EN_GQU_FROM_DATE)

Above, the subquery in the "using" clause is exactly the same query that performs great...
My requirement is to use it in a merge, and that's when I get stuck...

I do not understand how something that works well as a standalone query does not work when used inside a merge...

To begin with, this used to be a correlated update:
update stage_dml_earnings_mon s
    set loan_alloc_sq = ( select nvl(max(loan_alloc_sq),0) from loan_alloc l          
                           where s.en_gqu_from_date = to_date('07/02/2007','mm/dd/yyyy')
        and l.loan_id = s.en_gqu_tran_no
                             and l.alloc_id = s.en_gqu_tran_suff
                             and l.source_sq = s.source_sq
                             and l.settle_dt = s.en_gqu_as_of_date)
where en_gqu_from_date = to_date('07/02/2007','mm/dd/yyyy');

But the correlated update involved a full table scan on stage_dml_earnings_mon (which holds 18 million rows)!

So I am trying to explore other options, like merge...

Also, can you please guide me through any other viable options?
Unfortunately, "update of a join" is not an option since I cannot enforce a unique/PK constraint on the inner table...!

I realise you get tons of emails that you need to address!
But I have analyzed this to the fullest extent possible and you are my last resort!
So, please refrain from a 2-liner reply :)
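
One neutral way to investigate where the time goes when a statement that is fast standalone slows down inside a MERGE is to sql_trace both and compare the row source statistics with tkprof; a minimal sketch of the usual 9i approach (the output file name below is only an example):

alter session set timed_statistics = true;
alter session set sql_trace = true;

-- run the standalone query, then the merge ...

alter session set sql_trace = false;

-- then, on the database server, format the trace file:
-- tkprof <your_trace_file>.trc merge_vs_query.prf sys=no sort=prsela,exeela,fchela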

Dynamic column alias

A reader, June 16, 2011 - 6:48 am UTC

Hi Tom - Can we set the column alias dynamically at execution time?

-- created a function
CREATE OR REPLACE FUNCTION COL_ALIAS ( NAME_IN IN VARCHAR2 )
RETURN VARCHAR2
IS
    CNUMBER VARCHAR2(10);
BEGIN
    CNUMBER := 'TEST';
    RETURN CNUMBER;
END;

-- invoking the function to set the column alias at runtime; this results in an error.

SELECT id as COL_ALIAS(id) FROM TABLE1;


Can we achieve this using PL/SQL? Please suggest if you can.



-Thanks
Tom Kyte
June 17, 2011 - 1:35 pm UTC

You have to do that before the query is parsed. We cannot compile the sql unless we know the names, types, number of columns.

You'd have to use dynamic sql - not something I would suggest.
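
For completeness, a minimal sketch of what that (not recommended) dynamic SQL route could look like, reusing the COL_ALIAS function and TABLE1 from the question; the alias is resolved in PL/SQL first and concatenated into the statement text, so it is known before the query is parsed:

declare
    l_cursor sys_refcursor;
begin
    -- build the SQL text with the alias already substituted in
    open l_cursor for
        'select id as "' || col_alias('id') || '" from table1';
    -- fetch from l_cursor as needed, then:
    close l_cursor;
end;
/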

A reader, June 21, 2011 - 1:06 am UTC

Thanks for the reply.
