It is very helpful.
Li, November 23, 2002 - 10:44 pm UTC
Thanks a lot, Tom. It exactly solves my problem.
-
A reader, March 11, 2003 - 9:28 pm UTC
Tom,
A general question: would selecting from a database table be faster than selecting from a PL/SQL table of the same size?
We have in our application a function that populates a PL/SQL table and returns a ref cursor based on that PL/SQL table using CAST. The other option was to insert into a temporary database table using FORALL and fetch from it. The function has to return a ref cursor.
Which approach seems better, given that there could be around 20,000 records?
Thanks a lot for your time.
-
-, March 11, 2003 - 9:44 pm UTC
Tom,
Thanks for the quick reply.
Actually, no. We didn't try the two approaches. The situation was such that we had to make a choice there.
It was a tradeoff between insert + select from a temp table vs. select from the PL/SQL table which was already built. We settled for the second because it appeared to be much less work.
Are there any known caveats for the PL/SQL table with CAST approach on 20,000 to 30,000 records?
thanks
March 12, 2003 - 7:46 am UTC
20-30k records -- RAM, CPU intensive. very much so.
A reader, March 12, 2003 - 2:31 am UTC
hi tom,
a very interesting solution. but did I get it right: the DISTINCT forces ORACLE to temporarily "store" the results for "future use"? can this in general be assumed in cases like this (inline views)? and what did you mean by "the other rownum trick does the same"?
March 12, 2003 - 7:50 am UTC
yes.
A reader, March 12, 2003 - 9:15 am UTC
hello tom,
but "yes" fits the first two questions only ...
March 12, 2003 - 9:29 am UTC
the rownum trick does the same thing as the distinct. it materializes the inline view for us, making it so that the plsql is not called over and over.
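The rownum trick in isolation might look like this; a sketch with hypothetical names (my_pkg.f returning a numTab collection type), based on the pattern discussed above:

```sql
-- Selecting rownum in the inline view prevents the optimizer from
-- merging the view, so the collection from my_pkg.f is built once
-- and reused, instead of the PL/SQL being invoked per probe.
select t.*
  from (select a.column_value, rownum r
          from table( cast( my_pkg.f() as numTab ) ) a ) f,
       t
 where f.column_value = t.object_id;
```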
Problems with CBO statistical info on table(cast(...))
Dr. John Doe, March 12, 2003 - 2:29 pm UTC
Tom,
There appears to be a problem with the CBO being unable to estimate the cardinality / cost of a casted PL/SQL table:
create table t as select * from all_objects;
create index t_n00 on t (object_id);
analyze table t compute statistics;
create or replace type myNumTabType as table of number
/
set autotrace traceonly
select t.*
from (select column_value from table(cast(myNumTabType(50) as myNumTabType))) f, t
where f.column_value = t.object_id;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=46 Card=8168 Bytes=628936)
1 0 HASH JOIN (Cost=46 Card=8168 Bytes=628936)
2 1 COLLECTION ITERATOR (CONSTRUCTOR FETCH)
3 1 TABLE ACCESS (FULL) OF 'T' (Cost=14 Card=7041 Bytes=542157)
Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
93 consistent gets
87 physical reads
0 redo size
655 bytes sent via SQL*Net to client
316 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
1 rows processed
Trying again adding the following hint:
/*+ ORDERED USE_NL(f,t) */
(You can use the dreaded RULE hint too :-()
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=16352 Card=8168 Bytes=628936)
1 0 NESTED LOOPS (Cost=16352 Card=8168 Bytes=628936)
2 1 COLLECTION ITERATOR (CONSTRUCTOR FETCH)
3 1 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=2 Card=7041 Bytes=542157)
4 3 INDEX (RANGE SCAN) OF 'T_N00' (NON-UNIQUE) (Cost=1 Card=7041)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
656 bytes sent via SQL*Net to client
316 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
1 rows processed
Looks like the CBO thinks there are 8168 rows in the table(cast(...))
(Tested on 8.1.6)
March 13, 2003 - 7:23 am UTC
It is using a heuristic, yes. There is not much else it can do.
Agreed, but...
Dr. John Doe, March 13, 2003 - 2:08 pm UTC
Wouldn't it be nice if it peeked somehow and got the count? Maybe a hint like ROWS(f,100) where we could help it along?
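Something close to this already exists: the (undocumented) CARDINALITY hint, which appears later in this thread. A sketch against the test case above, hedged since the hint is version-dependent:

```sql
-- Tell the CBO to assume ~1 row in the collection rather than the
-- default 8168-row guess; with a realistic estimate it can choose
-- the nested-loops/index plan without ORDERED/USE_NL.
select /*+ cardinality(f 1) */ t.*
  from (select column_value
          from table( cast( myNumTabType(50) as myNumTabType ) )) f,
       t
 where f.column_value = t.object_id;
```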
Is it possible that the two hints won't work?
Li, April 28, 2003 - 8:52 pm UTC
Tom:
This is Li again. Recently one customer has been having a very bad performance problem with my view, which is prototyped like the one here.
From the tkprof they provided, my PL/SQL package used in the view definition is called hundreds of times instead of 4 times for thousands of rows. I couldn't reproduce it in any internal environment. All the development environments I can test have that view calling my PL/SQL package 4 times, which is what was expected.
Is there any possibility the hints in the prototypes won't work, or didn't work very well? The customer db is 8.1.7.3.6.
Thanks,
Li
Problem can be solved by adding leading hint
Li, May 30, 2003 - 1:39 pm UTC
There is a way I have exercised which fixes the performance problem. It turned out that some versions of the optimizer need more hints to come up with an optimal plan.
Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production
SQL> CREATE OR REPLACE VIEW my_vl_not_perf AS
2 SELECT /*+ leading(update_v) */ B.TYPE, T.NAME, 'Y' as VIEW_FLAG, 'Y' as UPDATE_FLAG
3 FROM my_tl T, my_b B,
4 TABLE(CAST(my_package.my_function('UPDATE') as my_inst_pk_vals)) UPDATE_V
5 WHERE B.LIST_HEADER_ID = T.LIST_HEADER_ID and
6 T.LANGUAGE = userenv('LANG') and
7 B.LIST_HEADER_ID = UPDATE_V.INSTANCE_PK1_VALUE
8 UNION ALL
9 SELECT /*+ leading(view_v) */ B.TYPE, T.NAME, 'Y' as VIEW_FLAG, 'N' as UPDATE_FLAG
10 FROM my_tl T, my_b B,
11 (select a.*, rownum r1 from
12 TABLE(CAST(my_package.my_function('VIEW') as my_inst_pk_vals)) a) VIEW_V
13 (select distinct * from
14 TABLE(CAST(my_package.my_function('UPDATE') as my_inst_pk_vals))) UPDATE_V
15 WHERE B.LIST_HEADER_ID = T.LIST_HEADER_ID and
16 T.LANGUAGE = userenv('LANG') and
17 B.LIST_HEADER_ID = VIEW_V.INSTANCE_PK1_VALUE and
18 B.LIST_HEADER_ID = update_v.instance_pk1_value (+)
19 and update_v.instance_pk1_value IS NULL
20 UNION ALL
21 SELECT /*+ leading(view_v) */ B.TYPE, T.NAME, 'N' as VIEW_FLAG, 'N' as UPDATE_FLAG
22 FROM my_tl T, my_b B,
23 (SELECT distinct instance_pk1_value
24 FROM TABLE(CAST(my_package.my_function('VIEW') as my_inst_pk_vals)) ) view_v
25 WHERE B.LIST_HEADER_ID = T.LIST_HEADER_ID and
26 T.LANGUAGE = userenv('LANG') and
27 B.LIST_HEADER_ID = view_v.instance_pk1_value(+)
28 and view_v.instance_pk1_value IS NULL
Cast and Table(Cast ......)
Arindam Mukherjee, September 16, 2006 - 2:38 am UTC
Respected Sir,
Today I request you to clear one of my stumbling blocks concerning cast and table(cast(...)).
From the word cast, I understand only the type casting broadly available in C programming. But I can't use this in a SQL environment. So I request you to please explain those TWO constructs (cast and table(cast(...))) in detail with various examples. For your understanding, I am referring to an example found on your site. In this example, table(cast(...)) is used, but could you please also give us an example without that, i.e. only cast(...), so that we can use it independently whenever applicable.
ops$tkyte@ORA920LAP> declare
2 l_data array := array(1,2,3,4,5);
3 begin
4 for x in ( select *
5 from all_users
6 where user_id in ( select *
7 from table(cast(l_data as array)) ) )
8 loop
9 dbms_output.put_line( x.username );
10 end loop;
11 end;
12 /
September 16, 2006 - 2:42 pm UTC
I don't understand your question.
cast is a documented builtin function, I use it in this case out of habit.
It is similar to the C casting concept (but it'll do type conversion, C just pretends it is already the right type)
Today Only CAST - Built-in function
Arindam Mukherjee, September 18, 2006 - 8:23 am UTC
Sir,
Could you please tell us when you use the built-in function CAST, and why? Please explain with an example. If you don't use that function, what type of problem do you face?
September 18, 2006 - 1:35 pm UTC
in some earlier releases of Oracle, if you bound a "sql object type" in plsql using a local variable - it would forget the "datatype" so
select * from table( my_variable );
would fail at run time. However, if you reminded it of the type:
select * from table( cast( my_variable as my_type ) );
it would be OK. Hence, I use it out of practice.
TABLE(CAST vs GTT
Richard Squires, November 06, 2006 - 2:22 am UTC
I've been asked to change a package that uses a lot of TABLE(CAST...) calls to use global temporary tables. Basically, we are reading a small result set from a query into a collection and then using this set multiple times within the package. The rationale behind the decision not to use the TABLE(CAST...) approach is detailed as follows:
create table release_file (
release_file_id number,
ntusername varchar2(32),
release_file_sourcesafe varchar2(512),
release_file_version number(4),
release_time date,
subtask varchar2(32),
released_to_test char(1),
date_released_to_test date,
tr_approved_release varchar2(50))
/
begin
for i in 1..100 loop
insert into release_file values (i,2,3,4,null,234,1,null,325235);
end loop;
end;
/
CREATE GLOBAL TEMPORARY TABLE GT_RELEASE_FILE
on commit delete rows
cache
as
(select * from release_file
where 1=2)
/
CREATE OR REPLACE
TYPE TReleaseFile is object(release_file_id number,
ntusername varchar2(32),
release_file_sourcesafe varchar2(512),
release_file_version number(4),
release_time date,
subtask varchar2(32),
released_to_test char(1),
date_released_to_test date,
tr_approved_release varchar2(50))
/
CREATE TYPE TReleaseFileTab is table of TReleaseFile
/
declare
MyRelFileTab TReleaseFileTab;
Cnt number:=0;
LCnt1 number:=0;
LCnt2 number:=0;
begin
dbms_output.put_line(to_char(sysdate,'HH24:MI:SS'));
select sum(gets) into LCnt1 from v$latch where latch# in (201,202,203);
MyRelFileTab:= TReleaseFileTab();
select TReleaseFile(release_file_id,ntusername,release_file_sourcesafe,release_file_version,release_time,subtask,released_to_test,date_released_to_test,tr_approved_release)
bulk collect into MyRelFileTab
from release_file
where rownum < 101;
for i in 1 .. 1000
loop
select /*+ cardinality (trf 100) */
count(*)
into Cnt
from release_file rf,
table(cast(MyRelFileTab as TReleaseFileTab)) trf
where rf.release_file_id=trf.release_file_id;
end loop;
select sum(gets) into LCnt2 from v$latch where latch# in (201,202,203);
dbms_output.put_line('Latches: '||to_char(LCnt2-LCnt1));
dbms_output.put_line(to_char(sysdate,'HH24:MI:SS'));
end;
/
declare
Cnt number:=0;
LCnt1 number:=0;
LCnt2 number:=0;
begin
dbms_output.put_line(to_char(sysdate,'HH24:MI:SS'));
select sum(gets) into LCnt1 from v$latch where latch# in (201,202,203);
insert into gt_release_file
select *
from release_file
where rownum < 101;
for i in 1 .. 1000
loop
select /*+ cardinality (trf 100) */
count(*)
into Cnt
from release_file rf,
gt_release_file trf
where rf.release_file_id=trf.release_file_id;
end loop;
select sum(gets) into LCnt2 from v$latch where latch# in (201,202,203);
dbms_output.put_line('Latches: '||to_char(LCnt2-LCnt1));
dbms_output.put_line(to_char(sysdate,'HH24:MI:SS'));
end;
/
If you run the two procedures repeatedly you should see that the TABLE(CAST...) approach takes significantly longer than the GTT approach (it's a difference of 10 seconds vs. 2 seconds here) and the number of latches is also significantly bigger (approx. 742669 vs. 9379). Am I comparing like for like, is this a fair comparison, or are GTTs better in terms of performance than TABLE(CAST...)?
November 06, 2006 - 9:55 am UTC
I would not want to repeatedly reference a nested collection like that over and over and over and over in SQL, no.
If you don't need to call out from SQL (which is sort of what is happening, pickler fetches), don't.
TABLE(CAST vs GTT
Richard Squires, November 08, 2006 - 12:05 am UTC
I accept that my example isn't a great implementation, but what I hoped it would demonstrate is the big difference between using the collection and the GTT.
What we are doing is storing the results of a query in a collection; this query is built up in a procedure to give us a small set of records we are interested in. We then TABLE(CAST...) this collection, joining to it in a number of different procedures within our package. We have adopted this methodology many times in our package: we create a number of these very small collections which we then TABLE(CAST...) and join to in order to restrict our result set. The example I provided shows a distinct difference between the use of a collection and a GTT, so it has been suggested that we replace all of our collections with GTTs.
Is this approach a good one? How would you capture a small result set (one that's not built from a simple SELECT, so it can't be incorporated into the other SQL statements) and then use it many times in your package? This package is called many times, so does either approach have an impact on parsing: can Oracle determine that the collection is the same (hinted with CARDINALITY) and therefore not need to re-parse it?
November 08, 2006 - 8:25 am UTC
I would not say "it is the end all be all way to do everything" no.
If I had functioning code
that
works faster than fast enough
then
I would not be bothered to touch it, not useful use of my time.
If I had to use this stuff ONE TIME, I might use it like this:
with data
as
(select a.*, rownum r from table( .... ) a )
select
.......
the rownum thing will cause the collection to be "materialized" in general so regardless of the query plan, the pickler collection fetch stuff should happen once.
The problem with your example is "it is WORST CASE", it is not "in general" and worst of all it is not "this is what we really do"
Good Explanation
Sujit, December 14, 2007 - 9:52 am UTC
We are using a GTT as explained here (build it once and use it repeatedly). But we have a concern about using the same while we have a two-phase commit. Can you please give your view on this: what would be appropriate to use, a GTT or a collection, or any other approach, keeping in mind that we have to do a two-phase commit?
December 14, 2007 - 1:25 pm UTC
GTT or CAST
Dana, December 14, 2007 - 3:34 pm UTC
We went through an extensive remediation that had to address whether to use GTT or CAST implementations. So long as the cardinality hint was used with CAST and the number of values was around 100 or less, CAST worked fine. Beyond 100 IN-list members, GTT worked better. I used dbms_stats.set_table_stats to manually set a best guess on the GTT.
In each implementation, execution times were close to literals. Overall, hard parses went down dramatically. That was our goal. Scalability was vastly improved while overall execution stayed about the same.
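The set_table_stats call referred to above could look like this (DBMS_STATS is documented; the numbers here stand in for a hypothetical "best guess"):

```sql
-- Hand the CBO representative statistics for the GTT, since it
-- has no gathered stats of its own at parse time.
begin
   dbms_stats.set_table_stats(
      ownname => user,
      tabname => 'GT_RELEASE_FILE',
      numrows => 100,    -- expected row count per session
      numblks => 4,      -- rough block estimate
      avgrlen => 100 );  -- rough average row length in bytes
end;
/
```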
CAST(TABLE) causing Buffer Busy Wait events
Ernie, January 10, 2008 - 2:50 pm UTC
Tom,
We have implemented in a package, the dynamic in-list implementation.
Our implementation has 2 subprograms which are overloaded to accept VARCHAR2 data or a CLOB... This was to ensure that we had a way to give end users more than 1000 values if they needed it (and as a result, they are asking for it).
One of the functions from the package is below.
FUNCTION str2numtbl( p_str IN CLOB ) RETURN myTableTypeNum
AS
l_str CLOB; -- working copy of the input; a trailing comma is appended below
l_n NUMBER;
l_data myTableTypeNum := myTableTypeNum(); -- initialize the collection
p_val VARCHAR2(32767);
BEGIN
SELECT p_str||','
INTO l_str
FROM dual;
LOOP
l_n := instr( l_str, ',' ); -- find the position of the next comma in the string
EXIT WHEN (nvl(l_n,0) = 0); -- exit when there are no more commas found in the string
l_data.EXTEND; -- add a slot to the collection
p_val := rtrim(ltrim(substr(l_str,1,l_n-1))); -- trim the value before the comma
l_data( l_data.COUNT ) := p_val; -- store the parsed value
l_str := substr( l_str, l_n+1 ); -- move past the comma to the next value to parse
END LOOP;
RETURN l_data; -- return the collection of values as the type back to SQL
END str2numtbl;
In certain cases the bind variable is being passed with > 4000 values. In the real world, we would say this is excessive and the implementation is incorrect. However, I am curious to hear from you on what we can do to decrease the buffer busy waits...?
I was theorizing that since the object type is temporary, the temp segment space will increase, and potentially be a bottleneck to other users utilizing the same function. I tested this case, and did see in 10g OEM that the Configuration wait class was 96% of total database activity. Of that, Buffer Busy Wait was 96% as well. The file that was being accessed was TEMP.
Thanks in advance.
January 10, 2008 - 2:54 pm UTC
why do you think this is the cause of increased buffer busy waits.
how did you measure this?
where are the numbers?
temp won't have buffer busy waits. that is all direct IO, doesn't use the buffer cache.
configuration wait?
and most important - 96% of WHAT
if it is 96% of one second - yawn.
Questions Answered.
Ernie, January 11, 2008 - 8:57 am UTC
Tom,
Thanks for the questions, all of which are relevant.
you: why do you think this is the cause of increased buffer busy waits?
Answer:
This was actually pretty easy to track down with 10g OEM web interface.
Here are the outputs from a test that we sampled with 500 values being passed to the procedure. The length of which was ~47000 characters long.
DETAILED ADDM REPORT FOR TASK 'ADDM:4194172246_1_6797' WITH ID 7111
Analysis Period: 10-JAN-2008 from 10:00:48 to 10:15:57
Database ID/Instance: 4194172246/1
Database/Instance Names: ISITDEV/isitdev
Host Name: devtransordb
Database Version: 10.2.0.3.0
Snapshot Range: from 6796 to 6797
Database Time: 29376 seconds
Average Database Load: 32.3 active sessions
FINDING 1: 64% impact (18738 seconds)
-------------------------------------
Database writers (DBWR) were unable to keep up with the demand for free buffers.
RECOMMENDATION 1: DB Configuration, 64% benefit (18738 seconds)
ACTION: Consider increasing the number of database writers (DBWR) by setting the parameter "db_writer_processes". Also consider if asynchronous I/O is appropriate for your architecture.
RATIONALE: The value of parameter "db_writer_processes" was "2" during the analysis period.
RATIONALE: The value of parameter "disk_asynch_io" was "FALSE" during the analysis period.
RECOMMENDATION 2: Host Configuration, 64% benefit (18738 seconds)
ACTION: Investigate the I/O subsystem's write performance.
RATIONALE: During the analysis period, the average data files' I/O throughput was 1.5 M per second for reads and 172 K per second for writes. The average response time for single block reads was 3.7 milliseconds.
RECOMMENDATION 3: Application Analysis, 64% benefit (18738 seconds)
ACTION: Investigate application logic for possible use of direct path inserts as an alternative for multiple INSERT operations.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Wait class "Configuration" was consuming significant database time. (64% impact [18743 seconds])
FINDING 2: 30% impact (8694 seconds)
------------------------------------
PL/SQL execution consumed significant database time.
RECOMMENDATION 1: SQL Tuning, 30% benefit (8694 seconds)
ACTION: Investigate the SQL statement with SQL_ID "1nzfn80kk18fb" for possible performance improvements.
RELEVANT OBJECT: SQL statement with SQL_ID 1nzfn80kk18fb and
PLAN_HASH 289983401
SELECT DISTINCT site.BUS_UNIT, site.SITE_NAME, address.STATE, address.COUNTY, strut.STRUCTURE_TYPE, strut.HGT_NO_APPURT, strut.HGT_NO_APPURT_UM, COORD_DATUM_REF, strut.LAT_DEC, LONG_DEC, opn.bus_unit, site.CTRY_ID FROM site site, address address, structure strut, structure_open_space opn
WHERE site.CTRY_ID = 'US' AND site.BUS_UNIT IN (SELECT /*+ CARDINALITY (t 10) */ * FROM TABLE (CAST(in_list_pkg.str2tbl(:1) AS myTableTypeStr)) t WHERE ROWNUM >=0 ) and site.non_crown_ind != 'Y' and LENGTH(strut.structure_id) = 1 AND (site.site_flag = '1' OR (site.bu_type_code IN ('TW', 'RT') and site.in_service_date is null)) AND address.ADDRESS_ID = site.ADDRESS_ID AND address.CTRY_ID = site.CTRY_ID AND strut.CTRY_ID = site.CTRY_ID AND strut.BUS_UNIT = site.BUS_UNIT AND opn.structure_id(+) = strut.structure_id AND opn.BUS_UNIT(+) = strut.BUS_UNIT AND opn.CTRY_ID(+) = strut.CTRY_ID ORDER BY site.BUS_UNIT
RATIONALE: Waiting for event "free buffer waits" in wait class "Configuration" accounted for 84% of the database time spent in processing the SQL statement with SQL_ID "1nzfn80kk18fb".
You - how did you measure this?
Answer: = ADDM, and AWR
where are the numbers?
temp won't have buffer busy waits. that is all direct IO, doesn't use the buffer cache.
Sorry, I may have misstated the wait event. It is free buffer waits.
configuration wait? The configuration wait is the classification that OEM gives it.
and most important - 96% of WHAT
Total I/O for the time slice.
One other finding of note during this time slice, and the one most indicative that the PL/SQL is the cause, is this last finding from ADDM...
Finding PL/SQL execution consumed significant database time.
Impact (minutes) 144.9
Impact (%) 29.6
The top SQL event was the query given above.
Investigate the SQL statement with SQL_ID "1nzfn80kk18fb" for possible performance improvements.
SQL Text: SELECT DISTINCT site.BUS_UNIT, site.SITE_NAME, address.STATE, address.COUNTY, st...
SQL ID: 1nzfn80kk18fb
Rationale Waiting for event "free buffer waits" in wait class "Configuration" accounted for 84% of the database time spent in processing the SQL statement with SQL_ID "1nzfn80kk18fb".
As always, I do appreciate your candor and input. Let me know your thoughts.
January 11, 2008 - 9:24 am UTC
but why do you think THIS procedure is the cause of them. I don't.
that function isn't - the SQL is. That the function is called from that sql statement is not relevant.
basically - it is that the query in question does a TON of logical IO (and hence needs a TON of buffers) and you don't have a ton of free buffers so dbwr is made to prematurely clean out the cache.
basically - you want to look at that SQL statement and tune it (indexing, whatever)
I prefer to use WITH subquery factoring with the str2tbl trick
http://asktom.oracle.com/Misc/varying-in-lists.html and maybe a materialize hint in the with'ed query to force it into temp (to avoid invoking the function more than once - rownum *should* do that, you'll want to verify that the function is invoked only once per execution)
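That subquery-factoring approach could be sketched like this (str2tbl and the type name are taken from the thread; the MATERIALIZE hint is undocumented, so verify the invocation count as noted):

```sql
-- The materialize hint (and usually rownum by itself) forces the
-- with'ed result into temp, so str2tbl is invoked once per
-- execution rather than once per join probe.
with vals as
( select /*+ materialize */ column_value val, rownum r
    from table( cast( in_list_pkg.str2tbl(:txt) as myTableTypeStr ) )
)
select site.bus_unit, site.site_name
  from site, vals
 where site.bus_unit = vals.val;
```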
Questions Answered.
Ernie, January 11, 2008 - 9:46 am UTC
Tom,
The explain plan from the SQL with the function is as follows...
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
| Id | Operation | Name | Ro
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | SORT UNIQUE | |
| 2 | NESTED LOOPS | |
| 3 | NESTED LOOPS OUTER | |
| 4 | NESTED LOOPS | |
| 5 | NESTED LOOPS | |
| 6 | VIEW | VW_NSO_1 |
| 7 | COUNT | |
|* 8 | FILTER | |
| 9 | COLLECTION ITERATOR PICKLER FETCH| STR2TBL |
|* 10 | TABLE ACCESS BY INDEX ROWID | SITE |
|* 11 | INDEX UNIQUE SCAN | PK_SITE |
| 12 | TABLE ACCESS BY INDEX ROWID | STRUCTURE |
|* 13 | INDEX RANGE SCAN | PK_STRUCTURE |
|* 14 | INDEX RANGE SCAN | STRUCTURE_OPEN_SPACE_PK |
| 15 | TABLE ACCESS BY INDEX ROWID | ADDRESS |
|* 16 | INDEX UNIQUE SCAN | PK_ADDRESS |
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
8 - filter(ROWNUM>=0)
10 - filter(("SITE"."SITE_FLAG"=1 OR "SITE"."IN_SERVICE_DATE" IS NULL AND
("SITE"."BU_TYPE_CODE"='RT' OR "SITE"."BU_TYPE_CODE"='TW')) AND "S
11 - access("SITE"."CTRY_ID"='US' AND "SITE"."BUS_UNIT"="COLUMN_VALUE")
13 - access("STRUT"."CTRY_ID"='US' AND "STRUT"."BUS_UNIT"="SITE"."BUS_UNIT")
filter(LENGTH("STRUT"."STRUCTURE_ID")=1)
14 - access("OPN"."CTRY_ID"(+)='US' AND "OPN"."BUS_UNIT"(+)="STRUT"."BUS_UNIT"
"OPN"."STRUCTURE_ID"(+)="STRUT"."STRUCTURE_ID")
filter("OPN"."CTRY_ID"(+)="STRUT"."CTRY_ID")
16 - access("ADDRESS"."CTRY_ID"='US' AND "ADDRESS"."ADDRESS_ID"="SITE"."ADDRESS"
How do I ensure that the function is only being invoked one time per execution? Is there something I can check in the database?
January 13, 2008 - 10:52 pm UTC
put into the function:
dbms_application_info.set_client_info( userenv('client_info')+1 );
and before you call it, set client info to zero and inspect client info after you invoke the query.
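Putting that together, a sketch of the counting technique (the function and bind names are placeholders):

```sql
-- in the function body, increment a per-session counter:
--   dbms_application_info.set_client_info( userenv('client_info') + 1 );

-- reset the counter before the test
exec dbms_application_info.set_client_info( 0 )

-- run the query that calls the function
select count(*)
  from table( cast( in_list_pkg.str2tbl(:txt) as myTableTypeStr ) );

-- if this shows 1, the function ran once for the whole query
select userenv('client_info') from dual;
```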
I answered my own question.
Ernie, January 11, 2008 - 10:52 am UTC
I put a dbms_output within the function call, and only got one line of output for ~700 values passed, which tells me that the function is being called once per query.
January 13, 2008 - 10:54 pm UTC
then, it (buffer busy waits) has nothing to do with the function - but rather that the plan (eg: if the function wasn't there but a table was used and the plan stayed the same) performs lots of physical IO.
Question?
Ernie, January 14, 2008 - 9:02 am UTC
What if the CLOB being parsed is being held in memory, and we have a high amount of usage against the package (CLOB)? Would this not be stored in the TEMP segment?
How would we be able to see the amount of memory the CLOB is taking up? Also, would there be a way to put the CLOB variable in a different memory segment (i.e. not in TEMP)?
Thanks for the input. I think we are getting somewhere.
January 14, 2008 - 3:36 pm UTC
so, if the clob is in memory, it is in the client memory (not sga). and if it is in temp, it is on disk, not in memory.
there is no "other memory segment" to speak of here. It'll be in the session memory OR on disk.
Question?
Ernie, January 16, 2008 - 8:44 am UTC
In the link you provided, you set the context text value. Are there any limits to what you can try to put there (i.e. length-wise)?
Also, if there were a need to provide multiple select * ... where in clauses, I think we would need to have multiple context values set; we couldn't use the same one over and over within the same SQL... (my assumption).
Let me know your thoughts.
As always, thank you.
January 16, 2008 - 3:54 pm UTC
contexts are max varchar2(4000)
and yes, if you had multiple where clause bits that referenced this context, you'd need multiple contexts if they each need their own values.
or just bind, as demonstrated there as well. binds are varchar2(4000) as well
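A context-based variation might be sketched as follows (context and package names are hypothetical; a context value is limited to 4000 bytes as noted above):

```sql
-- a context can only be written through its associated package
create or replace context my_ctx using my_ctx_pkg;

create or replace package my_ctx_pkg as
   procedure set_in_list( p_val varchar2 );
end;
/
create or replace package body my_ctx_pkg as
   procedure set_in_list( p_val varchar2 ) is
   begin
      dbms_session.set_context( 'my_ctx', 'in_list', p_val );
   end;
end;
/

-- sys_context behaves like a bind at parse time, so the cursor
-- is shared across different in-list values
exec my_ctx_pkg.set_in_list( 'A,B,C' )

select *
  from t
 where t.code in
       ( select column_value
           from table( cast( in_list_pkg.str2tbl(
                             sys_context('my_ctx','in_list') )
                       as myTableTypeStr ) ) );
```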
Table(Cast()) in NDSQL?
Max Power, March 25, 2009 - 2:25 pm UTC
Can the table(cast(...)) construct be used in native dynamic SQL? I've been playing around with it but can't get it to work. Thanks!
March 29, 2009 - 8:31 pm UTC
without an example of what you tried, I cannot say what you did wrong - but likely you tried to reference the variable name in the SQL and did not bind. you'd have to bind - you need to pass the object to the sql, since sql cannot 'see' plsql variables
ops$tkyte%ORA10GR2> create or replace type myScalarType
2 as object
3 ( x int, y date, z varchar2(10) )
4 /
Type created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace type myTableType
2 as table of myScalarType
3 /
Type created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable x refcursor;
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
2 l_data myTableType := myTableType( myScalarType(1,sysdate,'hello'),
3 myScalarType(2,sysdate-1,'World') );
4 begin
5 open :x for
6 'select * from table( cast( :data as myTableType ) )' using l_data;
7 end;
8 /
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> print x
X Y Z
---------- --------- ----------
1 29-MAR-09 hello
2 28-MAR-09 World
Disregard My Question
A reader, March 25, 2009 - 2:52 pm UTC
You have to BIND your array.
Execute immediate 'select * from table(cast(:arr as arrtype))' using arr;
works. Thanks.
Table of Cast is picking index and Member OF is not
Lalitha, September 18, 2012 - 10:41 am UTC