
Question and Answer

Tom Kyte

Thanks for the question, Navin.

Asked: November 18, 2000 - 10:37 pm UTC

Answered by: Tom Kyte - Last updated: May 07, 2013 - 3:40 pm UTC

Category: Database - Version: 8.1.6

Viewed 10K+ times.

You Asked

Tom,
we are implementing an application using EJBs. We need to be able to insert, update, delete and view result sets from the Oracle database. One way suggested is to use stored procedures and functions, and use Java or session/entity beans to access the tables directly.

Is there a better way to implement this functionality using Oracle collections, like arrays, and accessing them from Java? We need to be able to view result sets and do batch inserts, updates, and deletes. Are there any new features in Oracle 8i that I can take advantage of?
Appreciate your suggestions.

Thanks,
Navin.

and we said...

Arrays/collections won't necessarily be any better than (in fact, not as good as) a regular result set using straight JDBC. Where collections come in handy is in returning a single "object" or "glob" of data. For example, say you have the EMP, DEPT and PROJECTS tables. Assume DEPT is one to many with EMP and DEPT is one to many with PROJECTS (a department has many emps and a department is working on many projects). Using collections, we can return in one row the DEPT data, along with a collection of EMP data and a collection of PROJECTS data. Collections are best used to fetch these non-relational (not "square") rows.

If you just have "flat" tables -- result sets will do it for you. You can use setExecuteBatch() on a prepared statement to do bulk (array) inserts, updates, and deletes. Result sets can use the setRowPrefetch() method to set their array size, which defaults to 10.

You can put the "data" beans right into the database, but ultimately they are going to use JDBC to access the data regardless of their physical location.


Reviews

follow up questions

September 27, 2001 - 12:09 am UTC

Reviewer: sj

" Assume DEPT is one to many with EMP and DEPT is
one to many with PROJECTS (a department has many emps and a
department is working on many projects). Using collections, we can return in one row the DEPT data, along with a collection of EMP data and a collection of PROJECTS data. Collections are best used to fetch these non-relational (not "square") rows.

If you just have "flat" tables -- result sets will do it for you.
"

Can you show how the following can be done?

"Using collections, we can return in one row the DEPT data, along with a collection of EMP data and a collection of PROJECTS data."

What do you mean by "non-relational (not "square") rows" and ""flat" tables"?

And by "result sets", do you mean ref cursors?

Tom Kyte

Followup  

September 27, 2001 - 8:12 am UTC

Here is an example then.  First we start by creating the collection types:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create type empType as object
  2  ( ename varchar2(30),
  3    empno number
  4  )
  5  /
Type created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create type empTblType as table of empType
  2  /
Type created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create type projType as object
  2  ( name     varchar2(30),
  3    start_date date,
  4    end_date   date
  5  )
  6  /
Type created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create type projTblType as table of projType
  2  /
Type created.


Now I need some data to query:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table emp as select * from scott.emp;
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table dept as select * from scott.dept;
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table proj as
  2  select username name,
  3         created start_date,
  4         created+7 end_date,
  5         (mod(rownum,4)+1)*10 deptno
  6    from all_users
  7   where rownum <= 20
  8  /
Table created.

And now we can query the data:

ops$tkyte@ORA817DEV.US.ORACLE.COM> select dname,
  2   cast( multiset( select ename, empno
  3                     from emp
  4                    where emp.deptno = dept.deptno ) as empTblType ) emps,
  5   cast( multiset( select name, start_date, end_date
  6                     from proj
  7                    where proj.deptno = dept.deptno ) as projTblType ) projs
  8    from dept
  9  /
DNAME          EMPS(ENAME, EMPNO)             PROJS(NAME, START_DATE, END_DA
-------------- ------------------------------ ------------------------------
ACCOUNTING     EMPTBLTYPE(EMPTYPE('CLARK', 77 PROJTBLTYPE(PROJTYPE('DBSNMP',
               82), EMPTYPE('KING', 7839), EM  '28-AUG-01', '04-SEP-01'), PR
               PTYPE('MILLER', 7934))         OJTYPE('OPS$TKYTE', '24-SEP-01
                                              ', '01-OCT-01'))

RESEARCH       EMPTBLTYPE(EMPTYPE('A', 7369), PROJTBLTYPE(PROJTYPE('SYS', '2
                EMPTYPE('JONES', 7566), EMPTY 8-AUG-01', '04-SEP-01'), PROJT
               PE('SCOTT', 7788), EMPTYPE('AD YPE('OPS$ORA817', '28-AUG-01',
               AMS', 7876), EMPTYPE('FORD', 7  '04-SEP-01'), PROJTYPE('OPS$C
               902))                          LBECK', '09-SEP-01', '16-SEP-0
                                              1'))

SALES          EMPTBLTYPE(EMPTYPE('ALLEN', 74 PROJTBLTYPE(PROJTYPE('SYSTEM',
               99), EMPTYPE('WARD', 7521), EM  '28-AUG-01', '04-SEP-01'), PR
               PTYPE('MARTIN', 7654), EMPTYPE OJTYPE('SCOTT', '28-AUG-01', '
               ('BLAKE', 7698), EMPTYPE('TURN 04-SEP-01'), PROJTYPE('A', '20
               ER', 7844), EMPTYPE('JAMES', 7 -SEP-01', '27-SEP-01'))
               900))

OPERATIONS     EMPTBLTYPE()                   PROJTBLTYPE(PROJTYPE('OUTLN',
                                              '28-AUG-01', '04-SEP-01'), PRO
                                              JTYPE('CTXSYS', '28-AUG-01', '
                                              04-SEP-01'))


ops$tkyte@ORA817DEV.US.ORACLE.COM> 

See how there are 4 rows -- just the rows from the DEPT table came back.  The result set is "jagged" -- not square -- in this case because we have these collections which in effect are just like result sets themselves.  We have an array of employees and an array of projects with each dept row.


A ref cursor is a result set.  A result set need not be a ref cursor (there are no ref cursors above -- just a "cursor").
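For comparison, a minimal sketch of what a ref cursor looks like (names are illustrative; in 8i the ref cursor type must be declared in a package spec -- sys_refcursor did not arrive until 9i):

```sql
create or replace package dept_pkg as
    type rc is ref cursor;               -- weak ref cursor type
    function get_depts return rc;
end dept_pkg;
/
create or replace package body dept_pkg as
    function get_depts return rc
    is
        l_cur rc;
    begin
        open l_cur for select deptno, dname from dept;
        return l_cur;                    -- the caller fetches from this pointer
    end;
end dept_pkg;
/
```

The caller gets a pointer to an open result set and fetches rows from it, rather than receiving all of the data at once as a collection would deliver it.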

Awesome

September 27, 2001 - 10:46 am UTC

Reviewer: A reader

Can we pass this single row of information as input into another procedure or function, or is this output only useful as a report?

Tom Kyte

Followup  

September 27, 2001 - 2:29 pm UTC

You can definitely pass this around. As soon as you define a collection type, you have just invented a new PL/SQL datatype as well. You can define variables of this type, sure.

great explanation

September 27, 2001 - 3:09 pm UTC

Reviewer: A reader

Let's assume I have dname, empTblType and projTblType. Dname has one value in it, and there are multiple values in the other two.

How can we address the values inside the empTblType?
Are emptbltype.empno and emptbltype.ename valid?

Give us an example procedure which will take dname, empTblType and projTblType, and do some processing.

When do we need to pass collections like this as parameters, i.e. where are they most useful?

Tom Kyte

Followup  

September 27, 2001 - 4:03 pm UTC

See
http://docs.oracle.com/docs/cd/A87860_01/doc/appdev.817/a76976/adobjint.htm#452711

I've got lots of examples littered on this site with collections, search for
create or replace type as table of
and read some of them.

If you have a variable:

declare
    x empTblType := empTblType( empType( 'Tom', 1234 ), empType( 'Bob', 5678 ) );
begin
    dbms_output.put_line( x(1).ename );
end;
/

is how you would access it -- just like an array of records.

You would use this anywhere you felt you needed an array. Anywhere you needed to pass an array of employees -- this might be useful.
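As a hedged sketch of the example procedure asked for above -- the procedure name and body are invented for illustration, using the types created earlier in this thread:

```sql
-- Hypothetical: a procedure taking a department name plus the two
-- collection types as parameters and walking through them.
create or replace procedure process_dept(
    p_dname in varchar2,
    p_emps  in empTblType,
    p_projs in projTblType )
as
begin
    dbms_output.put_line( p_dname || ' has ' || p_emps.count || ' emps' );
    for i in 1 .. p_emps.count
    loop
        dbms_output.put_line( '  ' || p_emps(i).ename || ' ' || p_emps(i).empno );
    end loop;
    for i in 1 .. p_projs.count
    loop
        dbms_output.put_line( '  project ' || p_projs(i).name );
    end loop;
end;
/
```

Inside the procedure, each element is addressed with a subscript and a dot, e.g. p_emps(i).ename -- there is no "emptbltype.empno" without a subscript.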

make us understand

September 27, 2001 - 3:40 pm UTC

Reviewer: A reader

What do you mean by "non-relational (not "square") rows" and ""flat" tables"?

And: "The result set is "jagged" -- not square --"

What does the above terminology mean? Try to explain in layman's language, please.

Tom Kyte

Followup  

September 27, 2001 - 4:06 pm UTC

Look at the first row:

ACCOUNTING     EMPTBLTYPE(EMPTYPE('CLARK', 77 PROJTBLTYPE(PROJTYPE('DBSNMP',
               82), EMPTYPE('KING', 7839), EM  '28-AUG-01', '04-SEP-01'), PR
               PTYPE('MILLER', 7934))         OJTYPE('OPS$TKYTE', '24-SEP-01
                                              ', '01-OCT-01'))


that is really like:

ACOUNTING  CLARK   7782      DBSNMP     28-aug-01  04-sep-01
           KING    7839      OPS$TKYTE  24-sep-01  01-oct-01
           MILLER  7934


In a single row -- it's not a "square", it's not simple rows and columns, it's rows with columns that look like rows themselves.

 

for the above sqlplus output what are your format settings

September 27, 2001 - 5:15 pm UTC

Reviewer: A reader


Tom Kyte

Followup  

September 28, 2001 - 7:06 am UTC

the only formatting was:

column emps format a30
column projs format a30

retrieve from java

May 31, 2002 - 10:20 am UTC

Reviewer: Charlie

ACCOUNTING EMPTBLTYPE(EMPTYPE('CLARK', 77 PROJTBLTYPE(PROJTYPE('DBSNMP',
82), EMPTYPE('KING', 7839), EM '28-AUG-01', '04-SEP-01'), PR
PTYPE('MILLER', 7934)) OJTYPE('OPS$TKYTE', '24-SEP-01
', '01-OCT-01'))

>>>>>>>>>>>>>>>>>
Hi Tom,

After getting this "jagged" result set via JDBC, how do we build Java objects while looping through it?
Say we have several Java classes such as JEmpTblType, JEmpType, JDepartment. A small example would be greatly
appreciated.

Thanks,

Tom Kyte

Followup  

May 31, 2002 - 10:24 am UTC

I recommend you look at JPublisher:

http://docs.oracle.com/docs/cd/A87860_01/doc/java.817/a81357/toc.htm

it'll generate the mapping classes for you.

ora-00913

May 31, 2002 - 11:36 am UTC

Reviewer: Charlie

Thanks for pointing out JPublisher!  Another question is:

SQL> desc campaign  <-- regular table
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 ID                                                    NOT NULL VARCHAR2(30)
 COMPANY                                                        VARCHAR2(40)
 START_DATE                                                     DATE
 END_DATE                                                       DATE
 CANCEL_DATE                                                    DATE
 DESCRIPTION                                                    VARCHAR2(200)

SQL> desc target  <-- regular table
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 ID                                                    NOT NULL VARCHAR2(30)
 CAMPAIGN_ID                                                    VARCHAR2(30)
 MAIN_TARGET_ID                                                 VARCHAR2(30)
 SOP_ID                                                         NUMBER(10)
 DESCRIPTION                                                    VARCHAR2(200)

SQL> desc obj_campaign <-- object type
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 ID                                                             VARCHAR2(30)
 COMPANY                                                        VARCHAR2(40)
 START_DATE                                                     DATE
 END_DATE                                                       DATE
 CANCEL_DATE                                                    DATE
 DESCRIPTION                                                    VARCHAR2(200)

SQL> desc obj_target  <-- object type
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 ID                                                             VARCHAR2(30)
 CAMPAIGN                                                       OBJ_CAMPAIGN

SQL> CREATE OR REPLACE VIEW v_target
  2  OF obj_target
  3  WITH object identifier(id)
  4  AS
  5  SELECT
  6     t.id,
  7     cast((
  8             SELECT
  9                     c.id
 10                     company,
 11                     start_date,
 12                     end_date,
 13                     cancel_date,
 14                     c.description
 15             FROM CAMPAIGN c
 16             WHERE t.campaign_id = c.id
 17      )AS obj_campaign) CAMPAIGN
 18  FROM TARGET t;
                SELECT
                *
ERROR at line 8:
ORA-00913: too many values


When I try to create this object-relational view, it fails with error 913.  I can't figure out why.  Any clue?

Thanks,
 

Tom Kyte

Followup  

May 31, 2002 - 1:38 pm UTC

CREATE OR REPLACE VIEW v_target
OF obj_target
WITH object identifier(id)
AS
select t.id, obj_campaign( c.id, c.company, c.start_date, c.end_date, c.cancel_date, c.description )
from target t, campaign c
where t.id = c.id(+)
/

is the way to do that. (lose the outer join if you don't need it)

From the docs:

<quote>
CAST allows you to convert built-in datatypes or collection-typed values of one type into another built-in datatype or collection type. You can cast an unnamed operand (such as a date or the result set of a subquery) or a named collection (such as a varray or a nested table) into a type-compatible datatype or named collection. The type_name must be the name of a built-in datatype or collection type and the operand must be a built-in datatype or must evaluate to a collection value.

For the operand, expr can be either a built-in datatype or a collection type, and
subquery must return a single value of collection type or built-in type. MULTISET
informs Oracle to take the result set of the subquery and return a collection value. Table 5–1 shows which built-in datatypes can be cast into which other built-in datatypes. (CAST does not support LONG, LONG RAW, or any of the LOB datatypes.)
</quote>

What that means is -- you can only use a subquery in a CAST when:

a) the subquery returns a single row and a single column, or
b) the subquery returns a NESTED TABLE or VARRAY type


In any event, you need not use it at all to do this.

invalid column

May 31, 2002 - 3:38 pm UTC

Reviewer: Charlie

This is absolutely great stuff!!  Thanks a lot, Tom!!

One thing: when I refer to a column inside an object type, it raises error 904.
So how can I use the column of the object to search?

SQL> r
  1  select id, campaign
  2  from v_target
  3  where id = 'TBase1'
  4* and campaign.start_date < sysdate
and campaign.start_date < sysdate
             *
ERROR at line 4:
ORA-00904: invalid column name 

Tom Kyte

Followup  

May 31, 2002 - 3:41 pm UTC

You must use correlation names with objects in the queries:

ops$tkyte@ORA817DEV.US.ORACLE.COM> select id, campaign
  2    from v_target <b>T</b>
  3          where id = 'TBase1'
  4            and <b>T.</b>campaign.start_date < sysdate
  5  /

no rows selected 

collection

May 31, 2002 - 4:47 pm UTC

Reviewer: Charlie

Cool!  Thanks!

SQL> r
  1  create or replace type obj_result as object
  2* (rid number(2), description varchar2(200));

Type created.

SQL> create or replace type result_collection as table of obj_result;
  2  /

Type created.


SQL> desc obj_target
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ---------------------------
 ID                                                             VARCHAR2(30)
 CAMPAIGN                                                       OBJ_CAMPAIGN
 RESULTS                                                        RESULT_COLLECTION
 
 
 If this obj_target contains a nested table/collection, how can I refer to a member
 inside this collection?
 
 SQL> select id, campaign
   2    from v_target T
   3          where id = 'TBase1'
   4            and T.campaign.start_date < sysdate
   5            and T.results(1).rid = 2
   6  /
ERROR at line 5:
ORA-00904: invalid column name    

Tom Kyte

Followup  

May 31, 2002 - 8:56 pm UTC

You cannot use subscripts in SQL.

You can join the nested table to the view, this is called "flattening".

You must remember NESTED TABLES HAVE NO ORDER. There is no such thing as the "first" or the "second". Nested tables do NOT preserve order. Therefore (1) on one query might be (2) on another run.

ORA-03113

June 01, 2002 - 3:23 pm UTC

Reviewer: Charlie

Hi Tom,

I tried to "flatten" the results, but I got error 3113.  Any clue?

Thanks,

SQL> select id, r.description
  2  from v_target t, table(t.results) r
  3* where r.rid = 2
SQL> /
select id, r.description
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel 

Tom Kyte

Followup  

June 02, 2002 - 11:16 am UTC

Time to contact support and open a TAR. They can help you identify why this is happening.

Return value of collection

April 09, 2003 - 10:11 am UTC

Reviewer: Raj from India

Hi Tom,

I have a function on the database side. It returns a table of records.

When I tried to call this function from Forms, I'm not getting the value.

Here is the example

I have a package with a function that returns a table of enames from emp:

CREATE OR REPLACE PACKAGE pkg_test AS

    TYPE typ_ename_tab IS TABLE OF emp.ename%TYPE;

    FUNCTION fn_GetEname RETURN typ_ename_tab;

END pkg_test;
/

CREATE OR REPLACE PACKAGE BODY pkg_test AS

    FUNCTION fn_GetEname RETURN typ_ename_tab IS
        op_ename typ_ename_tab := NULL;
    BEGIN
        SELECT ename
          BULK COLLECT INTO op_ename
          FROM emp;

        RETURN op_ename;
    END fn_GetEname;

END pkg_test;
/

I tested this package at the back end with the following code:

DECLARE
    a pkg_test.typ_ename_tab;
BEGIN
    a := pkg_test.fn_GetEname;

    FOR j IN a.FIRST..a.LAST
    LOOP
        DBMS_OUTPUT.PUT_LINE(a(j));
        DBMS_OUTPUT.PUT_LINE(' ');
    END LOOP;
END;
/
and the output was:

SMITH
ALLEN
WARD
JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
ADAMS
JAMES
FORD
MILLER
So far so good.

But when I tried to get this in a WHEN-BUTTON-PRESSED trigger, it compiled without error but the output was NULL.

The code in WHEN-BUTTON-PRESSED is:
DECLARE
    a pkg_test.typ_ename_tab;
BEGIN
    a := pkg_test.fn_GetEname;

    IF a IS NOT NULL THEN
        FOR j IN a.FIRST..a.LAST
        LOOP
            MESSAGE(a(j));
            MESSAGE(' ');
        END LOOP;
    ELSE
        MESSAGE('A IS NULL');
    END IF;
END;

The output is the message A IS NULL on the status bar.

Please tell me if it is possible to get the table of records in Forms.

ONE IMPORTANT POINT IS I WANT TO USE THE BULK COLLECT OPTION IN THE SELECT STATEMENT.

Thanks a Lot in advance.

Regards
Raj


Tom Kyte

Followup  

April 09, 2003 - 10:53 am UTC

Record groups do array fetches -- that would be my approach in Forms, actually. A record group works much like a two-dimensional array. create_group_from_query and populate_group might be what you are looking for.

I haven't done Forms since about 1995 -- so I'm not really up to speed on what it can/cannot do with records. You can try otn.oracle.com -> discussion forums -> developer.


(record groups would be my path however)

Nested tables vs. one-to-many

June 10, 2003 - 10:11 pm UTC

Reviewer: A reader

Suppose my data/business has a one-to-many relationship. The traditional way to model this is a parent table and a child table with a FK pointing to the parent table.

Of course, with 8i, this could also be modeled as a parent table with a nested table/varray as one of the columns.

At what point does the nested table approach become unviable? When the "many" in one-to-many reaches, say, hundreds of rows?

Suppose I have a 1000-row nested table stored in a main table and I want to update one row in it. Oracle would unnest that "table", update the one row and write the entire object back, right? Isn't this expensive?

What if I need to add hundreds of rows each day to that nested table? Is this a misuse of the feature?

Just trying to get a sense of when pure relational makes sense vs. object-relational.

Thanks

Tom Kyte

Followup  

June 11, 2003 - 6:40 am UTC

If you have my book "Expert One-on-One Oracle" -- I go into this in some detail.

A nested table is nothing more than a parent/child table where the server supplies a surrogate key on the parent table (a hidden 16-byte raw with a unique constraint on it per parent row) and a child table with the same 16-byte raw as a foreign key -- one you need to know about in order to index it yourself.

Since virtually every parent/child relationship I've seen has natural keys that can (and should) be used -- I would "do it myself": no nested tables.

I'll save on the two hidden columns and their indexes, and I'll be able to use the child table "as a table" when I want. Nested tables make that "hard".
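The "do it yourself" alternative described above might look like this sketch -- table and index names are invented for illustration, with a visible natural key in place of the hidden 16-byte raw:

```sql
create table dept_parent (
    deptno number primary key,    -- natural key: visible, usable in queries
    dname  varchar2(30) );

create table emp_child (
    empno  number primary key,
    ename  varchar2(30),
    deptno number references dept_parent );  -- plain FK you control

-- the index the nested-table storage would have made you create anyway,
-- but here on a meaningful column instead of a hidden raw
create index emp_child_deptno_idx on emp_child(deptno);
```

The child table can now be queried, loaded and indexed "as a table" in its own right, which is exactly what the nested-table storage model makes hard.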

June 11, 2003 - 10:16 am UTC

Reviewer: A reader

It seems like you are saying that nested tables are useless in practice.

So why did Oracle add this feature at all? As you said, it doesn't seem to have much real-world application.

In general, for all the object-oriented features -- beyond being able to say "yes, we support objects in Oracle" -- are there any real-world benefits over pure and simple ER relational modelling?

Yes, I can see the benefit of user-defined types, but just as a convenience; nothing that cannot be achieved using pure native datatypes.

Comments? Thanks

Tom Kyte

Followup  

June 11, 2003 - 6:33 pm UTC

I use them in my code all of the time.

they are very useful in OR views.

They are useful as a programming construct, I choose NOT to use them to persistently store data.

In a view, yes.
In my code, oh yes.
As a datatype for a column in a physical table, no.

The comment about the udt's is pretty straight on -- considering they are actually implemented on top of pure native datatypes.

June 11, 2003 - 9:47 pm UTC

Reviewer: A reader

---x---
I use them in my code all of the time.
they are very useful in OR views.
---x---

I'm sorry but I dont think I quite understand you.

How do you use them in your code, for what?

What do you mean, "useful in OR views"? Can you explain these 2 uses with examples? Thanks.


Tom Kyte

Followup  

June 12, 2003 - 8:39 am UTC

Object Relational (OR) Views


in my code? for collections, tables of objects that can be queried from. user defined types.

For example, once upon a time I wrote a simple "correlated array" type. It was an object type with some getter/setter methods over a nested table of NAME/VALUE pairs. I could create a variable:

declare
    x correlated_array := correlated_array();
begin
    x.set( 'somename', 'someval' );
    x.set( 'someothername', 'someotherval' );

    dbms_output.put_line( x.get('somename') );
    dbms_output.put_line( x.get('someothername') );
end;
/

Of course, with 9iR2 having index-by tables that can be indexed by varchar2, this is now obsolete.


If you have my book "Expert one on one Oracle" however, I go into things like building an in memory hash table type (like the type java has for example). It uses nested tables as well.

I use them to return procedurally built result sets from stored procedures, so you can:

select * from TABLE( some_function(inputs...) );
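A hedged sketch of such a function, reusing the empType/empTblType types created earlier in this thread (the function name and data are invented for illustration):

```sql
-- a function that procedurally builds a collection, queryable as a table
create or replace function some_emps return empTblType
as
    l_data empTblType := empTblType();
begin
    l_data.extend(2);
    l_data(1) := empType( 'TOM', 1 );
    l_data(2) := empType( 'BOB', 2 );
    return l_data;
end;
/
select * from TABLE( some_emps() );
```

In 8i the TABLE expression generally needs an explicit CAST -- TABLE( CAST( some_emps() AS empTblType ) ) -- while later releases accept the bare function call.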


OR views are documented in:
http://docs.oracle.com/docs/cd/B10501_01/nav/docindex.htm#index-APP
see the Application Developer's Guide, "Object-Relational Features".


Reviewer

July 09, 2003 - 5:31 pm UTC

Reviewer: DC from Canada

I found DML on collections is hard. How can we update a nested table in one UPDATE statement, with multiple rows referenced via a join table, and for each row, nested entries to be updated based on a reference to another table?

Thanks,

Tom Kyte

Followup  

July 09, 2003 - 8:17 pm UTC

Give me a "for example" to work with.

AND remember what nested tables are all about -- each one is virtually "a virtual table per row".

It sounds like you might really want "an honest-to-goodness standalone table".

DC

July 09, 2003 - 11:48 pm UTC

Reviewer: DC from CANADA

An object table of purchase orders, with line items stored as a nested table:

create type lineitem_objtyp as object
( stock_no number, order_qty number,
  unit_cost number(11,2), discount number(3,2) );

create type lineitem_ntabtyp as table of lineitem_objtyp;

create type po_line_objtyp authid current_user as object
( po_no number, po_rec_date date, po_firm_date date,
  customer_id number, lineitem_ntab lineitem_ntabtyp );

create table po_line_objtab of po_line_objtyp
nested table lineitem_ntab store as lineitem_store_ntab;

There are 2.3M POs, and each PO may contain from one to many hundreds of line items. po_no is unique. I have simplified the DDL, as this object table has many columns.

The unit cost of certain items needs to be adjusted every month end based on customer_id and purchase order number. Every month end I get a reference file from the marketing area containing (customer_id, po_no, stock_no, adjusted_unit_cost).

How can I process po_line_objtab based on po_no from the reference file and update unit_cost with adjusted_unit_cost based on stock_no from the reference file?

I hope one update statement will work, as we have many reference files to process. I'm trying to see if it can be done using MERGE, CAST etc. without writing PL/SQL.

Thanks,


Tom Kyte

Followup  

July 10, 2003 - 8:00 am UTC

It seems to me that you want "objects", but "I want my relational stuff too."

Don't worry -- it is very typical. Let's say you were creating a banking application. For the tellers, it would be very convenient to model:

"A customer, having multiple accounts, each account having multiple transactions"

Sounds great -- but now you have a bunch of customer objects. When you want to figure out at the end of the day "how many transactions did I do", you need to "message" each customer and ask "did you do any today? if so, how many?". If you want to find out "how much money is in the bank", you have to once again message each customer. "How many customers used branch X in the last 30 days?" -- message every customer.

If we only ever viewed data from one perspective, objects would be great. There are times we do -- a CAD/CAM drawing, for example. But most of the time we need multiple views of the data.

That is where the strength of an RDBMS, when used properly, comes in.

You appear to want to use line items as a strong entity, all by themselves, as the driving table. But by hiding them as an attribute of a top-level entity like that -- you lose. Sure, there are undocumented hints and tricks -- but in general, you've modelled your data from the perspective that "the PO is king".

I would look at using real transactional tables with object-relational views on top of them to give you this object flavor. Then the sky is the limit with what you'll be able to do -- you'd have your PO perspective, but you could easily model an "item perspective", a customer perspective and so on -- things you'll ultimately want in a real system.
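A sketch of that approach, using the types posted above (the relational table names are invented for illustration): store POs and line items as plain tables, then layer an object view on top.

```sql
-- real transactional tables: line items are a first-class entity
create table po (
    po_no        number primary key,
    po_rec_date  date,
    po_firm_date date,
    customer_id  number );

create table lineitem (
    po_no     number references po,
    stock_no  number,
    order_qty number,
    unit_cost number(11,2),
    discount  number(3,2) );

-- the "PO perspective" as an object-relational view over those tables
create or replace view po_ov of po_line_objtyp
  with object identifier ( po_no )
as
select p.po_no, p.po_rec_date, p.po_firm_date, p.customer_id,
       cast( multiset( select l.stock_no, l.order_qty,
                              l.unit_cost, l.discount
                         from lineitem l
                        where l.po_no = p.po_no ) as lineitem_ntabtyp )
  from po p;
```

The monthly adjustment then becomes an ordinary correlated update against the LINEITEM table, while applications that want the object flavor query PO_OV.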


DC

July 10, 2003 - 9:15 am UTC

Reviewer: DC from CANADA

Thanks for your valuable comments. So, what would the SQL look like to update a line item's unit cost based on stock_no and po_no?

Tom Kyte

Followup  

July 10, 2003 - 10:20 am UTC

Well, that would be a correlated update:

update lineitem
   set unit_cost = ( select unit_cost
                       from other_table
                      where other_table.stock_no = lineitem.stock_no
                        and other_table.po_no = lineitem.po_no )
 where exists ( select unit_cost
                  from other_table
                 where other_table.stock_no = lineitem.stock_no
                   and other_table.po_no = lineitem.po_no );

very useful - one question though

July 10, 2003 - 1:04 pm UTC

Reviewer: A reader

"Where collections can come in handy is
in returning a single "object" or "glob" of data."

How about returning cursor containing cursors - how does
that perform against the solution of using collections
since that also can return "non-square" data?

thanx!

Tom Kyte

Followup  

July 10, 2003 - 3:03 pm UTC

absolutely


select dname, cursor( select ename from emp where emp.deptno = dept.deptno )
from dept;


works nicely as well -- but if you always want the ename list for every dept, it can involve extra round trips, whereas a collection will not.

thanx -

July 10, 2003 - 5:46 pm UTC

Reviewer: A reader

When you use collections, you are returning the entire dataset -- whereas in the case of a cursor, you may still have to make round trips -- is that what you meant? And by round trip, do you mean the round trip implied in "fetching" the data from the returned result sets (or cursors)?
thanx!

Tom Kyte

Followup  

July 11, 2003 - 7:31 am UTC

When using collections, the entire row is returned to the client.

When using cursor variables, pointers to result sets are returned. These result sets reside on the server initially, so the client has to turn around and go back to the server to get them.

Yes, I mean the round trips involved in fetching the result sets for each column that is a cursor.

Updates Collection

July 10, 2003 - 9:37 pm UTC

Reviewer: A reader

I tried your method; it doesn't work.  I have a sample of test data for you to review, and your SQL cannot reference the object. Please see below.  I'm trying to update stock_no 2 as referenced in reference_table. Can you help?
 SQL> create type lineitem_objtyp as object (stock_no number, order_qty number, 
  2  unit_cost number(11,2), discount number(3,2));
  3  /

Type created.

SQL> create type lineitem_ntabtyp as table of lineitem_objtyp;
  2  /

Type created.

SQL> create type po_line_objtyp authid current_user as object
  2  (po_no number, po_rec_date date, po_firm_date date, customer_id number,
  3  lineitem_ntab lineitem_ntabtyp);
  4  /

Type created.

SQL> create table po_line_objtab of po_line_objtyp nested table 
  2  lineitem_ntab store as lineitem_store_ntab;

Table created.
  1  create table reference_table (customer_id number, po_no number, stock_no number,
  2* adjusted_unit_cost number)
SQL> /

Table created.
  1  insert into po_line_objtab values (10, null, null, 10001,
  2  lineitem_ntabtyp(lineitem_objtyp(1, 1, 3.2, 0), lineitem_objtyp(2, 1, 2.4, 0),
  3* lineitem_objtyp(3, 1, 2.2, 0)))
SQL> /

1 row created.

SQL> ed
Wrote file afiedt.buf

  1  insert into po_line_objtab values (11, null, null, 10001,
  2  lineitem_ntabtyp(lineitem_objtyp(5, 1, 2.2, 0), lineitem_objtyp(2, 1, 2.4, 0),
  3* lineitem_objtyp(7, 1, 2.2, 0)))
SQL> /

1 row created.

SQL> ed
Wrote file afiedt.buf

  1  insert into po_line_objtab values (12, null, null, 10001,
  2  lineitem_ntabtyp(lineitem_objtyp(6, 1, 2.1, 0), lineitem_objtyp(7, 1, 2.4, 0),
  3* lineitem_objtyp(8, 1, 2.2, 0)))
SQL> /

1 row created.
SQL> insert into po_line_objtab values (13, null, null, 10001,
  2  lineitem_ntabtyp());

1 row created.
Wrote file afiedt.buf

  1  insert into po_line_objtab values (14, null, null, 10001,
  2  lineitem_ntabtyp(lineitem_objtyp(6, 1, 2.1, 0), lineitem_objtyp(2, 1, 2.4, 0),
  3* lineitem_objtyp(8, 1, 2.2, 0)))
SQL> /

1 row created.
SQL> insert into reference_table values (10, 10001, 2, 5.5);

1 row created.

SQL> ed
Wrote file afiedt.buf

  1* insert into reference_table values (11, 10001, 2, 5.5)
SQL> /

1 row created.

SQL> ed
Wrote file afiedt.buf

  1* insert into reference_table values (12, 10001, 2, 5.5)
SQL> /

1 row created.
SQL> insert into reference_table values (13, 10001, 2, 4.5);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from reference_table;

CUSTOMER_ID      PO_NO   STOCK_NO ADJUSTED_UNIT_COST
----------- ---------- ---------- ------------------
         10      10001          2                5.5
         11      10001          2                5.5
         12      10001          2                5.5
         13      10001          2                4.5

Wrote file afiedt.buf

  1  update lineitem o
  2  set o.unit_cost = (select r.adjusted_unit_cost from reference_table r
  3  where r.stock_no = o.stock_no and
  4  r.po_no = o.po_no)
  5  where exists
  6  (select u.adjusted_unit_cost from reference_table u
  7  where u.stock_no = o.stock_no and
  8* r.po_no = o.po_no)
SQL> /
update lineitem o
       *
ERROR at line 1:
ORA-00942: table or view does not exist

Do you have any suggestions?  Thanks.





 

Tom Kyte

Followup  

July 11, 2003 - 9:24 am UTC

I must be missing something here -- cause in the one followup I said "i don't think you want to model it like that since your requirements cause you to have a different view or perspective."

You said "ok, so what would it look like then"

I (assuming you meant, Ok -- with real relational tables as you described --, so what would it look like then) showed you how to with "real tables"


You have Purchase order objects.
They happen to have collections of lineitems.

Lineitems do not exist "by themselves"

You will be hard pressed to make them "exist by themselves"

they only exist in the context of a PO as you have them modeled.



Updates Collection

July 10, 2003 - 10:36 pm UTC

Reviewer: DC from CANADA

Some mistakes in my sample. The ddl of the reference_table should be,
create table reference_table(po_no, customer_id, stock_no, adjusted_unit_cost).

Thanks,

Updates Collection

July 11, 2003 - 11:42 am UTC

Reviewer: DC from CANADA

Hi, the purchase order may be a bad example on my part. What we have here is an object model in a utility environment. The reference table holds the consumption readings from the customers. Most of the tables are designed using collections encapsulated with methods. I tried to simplify the schema so that you could see my question. Is there a way to update the nested table's contents in bulk, based on certain criteria, using my example? I tried the sql below (an idea) but it gave me ORA-03001 unsupported features. I also think the sql below is not optimized, as it has to union in the non-updated nested entries.

update po_line_objtab set lineitem_ntab =
(cast(multiset
(
select stock_no, order_qty, unit_cost + adjusted_unit_cost from
(select r.po_no, k.stock_no, k.order_qty, k.unit_cost, r.adjusted_unit_cost
from reference_table r,
table(select lineitem_ntab from po_line_objtab p
where p.po_no = r.po_no) k where k.stock_no = r.stock_no)
union
select k.stock_no, k.order_qty, k.unit_cost
from reference_table r,
table(select lineitem_ntab from po_line_objtab p1
where p1.po_no = r.po_no) k where k.stock_no <> r.stock_no
) as lineitem_ntabtyp))

Thanks,

Tom Kyte

Followup  

July 11, 2003 - 12:33 pm UTC

do you happen to have access to my book "Expert one on one Oracle"?

Updates Collection

July 11, 2003 - 1:35 pm UTC

Reviewer: DC from CANADA

Finally got it working. Here is the sql, missing the multiset join clause.

July 24, 2003 - 11:05 am UTC

Reviewer: Randy from STL

Tom, Thanks for all your time and patients.

I think that I may be able to use this to solve one requirement we have. I currently return a result set to the middle tier for reporting. The result set comes from two tables with a cross-reference table in between. Here is a simplified example of the current result set.


Work_id Emp  Location Area   District
1       John NYC      Queens 3rd
1       John NYC      Queens 4th
1       John NYC      Queens 5th
1       John NYC      Broncs 2nd
1       John NYC      Broncs 4th
1       John LA       East   2nd
1       John LA       East   3rd
2       Tom  NYC      Queens 3rd
2       Tom  NYC      Queens 4th


The middle tier is currently used to flatten this out. I would like to have the result set grouped by Work_id, emp and returned like this:

Job_id Emp  Location Area               District
1      John NYC,LA   Queens,Broncs,East 2nd,3rd,4th,5th
2      Tom  NYC      Queens             3rd, 4th

Any creative ideas?

Thanks again.



Tom Kyte

Followup  

July 24, 2003 - 4:25 pm UTC

patients -- freudian slip there? :)

got a version to go with that inquiry?

July 25, 2003 - 10:45 am UTC

Reviewer: Randy from STL

Sorry about that. We are currently on 8.1.7.4. We will be moving to 9.2 in Q1 2004. This fix is something they would like this fall. We have a release Oct 1st of this year. But if it requires 9i that's ok, it will push.

Thanks.

B.T.W. Aren't you supposed to be on vacation?

Tom Kyte

Followup  

July 25, 2003 - 12:38 pm UTC

in 9i, we can use a user defined aggregate to do this somewhat easily (search for stragg on this site to get an idea -- that code needs a tweak to "distinct" the strings but that is easy)

in 8i, it would be "hard", we'd have to write a plsql function for each column...
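For what it's worth, a rough sketch of what one such 8i-era PL/SQL function might look like, reusing the T/JOB_ID/LOCATION names from the example above (the function name and the varchar2(4000) limit are assumptions):

```sql
-- hypothetical helper for 8i: one function per column to be aggregated
create or replace function concat_locations( p_job_id in number )
return varchar2
is
    l_result varchar2(4000);
begin
    -- DISTINCT here plays the role STRAGG(DISTINCT ...) plays in 9i
    for x in ( select distinct location
                 from t
                where job_id = p_job_id
                order by location )
    loop
        l_result := l_result || ',' || x.location;
    end loop;
    return ltrim( l_result, ',' );
end;
/

-- usage:
-- select job_id, concat_locations(job_id) locations from t group by job_id;
```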



July 28, 2003 - 10:39 am UTC

Reviewer: Randy from STL

Ok, got it working as SQL. My select now looks like this:

SELECT JOB_ID,
STRAGG (DISTINCT LOCATION),
STRAGG (DISTINCT AREA),
STRAGG (DISTINCT DISTRICT)
FROM T
GROUP BY JOB_ID;

Problem I have is that this does not seem to work inside a PL/SQL Block. I get the error

ORA-06550: line 15, column 47:
PL/SQL: ORA-30482: DISTINCT option not allowed for this function
ORA-06550: line 12, column 4:
PL/SQL: SQL Statement ignored

Is there a different way to do the distinct? I thought maybe inside the ODCIAGGREGATEMERGE function, but I could not figure out how.

Thanks.



Tom Kyte

Followup  

July 28, 2003 - 10:51 am UTC

Interesting, looks like a "product issue" to me. use a dynamically opened ref cursor as a temporary solution:

variable x refcursor
declare
l_cursor sys_refcursor;
begin
open l_cursor for '
SELECT JOB_ID,
STRAGG (DISTINCT LOCATION) l,
STRAGG (DISTINCT AREA) a,
STRAGG (DISTINCT DISTRICT) d
FROM T
GROUP BY JOB_ID';

:x := l_cursor;
end;
/

print x


(please file an issue with support as well!)



pls-00157 AUTHID only allowed on schema-level programs

December 23, 2003 - 3:07 pm UTC

Reviewer: Craig from Lowell, AR

Tom -

I have a package with a private procedure in 9iR2. This private procedure is the only portion of the package I need invokers rights on. However, when I declare my private procedure:

PROCEDURE flag_change(p_nbr IN number
, p_seq IN number)
authid CURRENT_USER IS
.
.
.
END flag_change;

I get a PLS-00157: AUTHID only allowed on schema-level programs

Is this to say that invokers rights must be declared/defined at the package level? I'm not sure I 100% understand what this error is telling me, and what I need to do to fix it. Can you once again help me out?

Thanks!!

ps. happy holidays! =)

Tom Kyte

Followup  

December 23, 2003 - 5:27 pm UTC

the package is either authid current user or not.

Now, the funny thing is, even if we could make flag_change be authid current_user, the very fact that it is a PRIVATE PROCEDURE (callable only by other routines in this package) means that the current user will ALWAYS be the package owner!!!! (once you enter a definers rights routine, the current_user is the owner of the package/procedure!)

so, even if it worked -- it would be a "no-op". it would not change anything as the only value of current user that could ever happen would be the schema owner of the package!
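To illustrate where AUTHID can legally appear, a minimal sketch (the package name here is made up; the procedure name is from the question above):

```sql
-- invoker's rights can only be declared on the package specification
-- (or on a standalone, schema-level procedure/function), never on an
-- individual subprogram inside a package:
create or replace package my_pkg
authid current_user
as
    procedure flag_change( p_nbr in number, p_seq in number );
end my_pkg;
/
```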

Thanks Tom.Very useful

December 26, 2003 - 8:14 pm UTC

Reviewer: Sami


January 06, 2004 - 6:11 am UTC

Reviewer: Praveen from bangalore, india

Hi Tom

We are seriously considering a design revamp of a table's VARCHAR(255) column into a nested table type. But
we are barely aware of the performance impacts such a change may cause to a 'data resolution' application that uses
this table. We would highly appreciate advice or some hints about how you would approach the problem.

Here is the scenario:

The table is an error log table, say err_log. When a business exception occurs (we have all the known business exceptions),
an error logging procedure takes the responsibility of logging the error into the err_log table with
information like time_stamp, source procedure, a record_id etc. But in
addition to that information we can have a set of very important parameters and their values at the time of the error.
This set of parameters can vary in number (sometimes none); hence we store them as a string in a VARCHAR2 column
(EG: say we have 2 parameters in hand at the time of error: parameter_1 & parameter_2 with values 12345 & 'ABCD',
then we create a string like 'PARAM1:12345;PARAM2:ABCD'). (see the hard codings PARAM1:, PARAM2: etc.)

Now you might have guessed the problem one can face during a data resolution process. We cannot directly query the
err_log table based on the values of parameter_1 & parameter_2. This indeed makes the data resolution application a
bit complicated (string manipulation, data type conversion etc. have to be applied). The design revamp is because of the fear that new business rules (hence new exceptions) may come,
and the above method is indeed going to cost dearly for a maintenance team.

Our considerations are:

1) Use a nested table instead of VARCHAR2 column.
Thus PARAM1 12345
PARAM2 'ABCD'
PARAM3 '21-MAY-03'
will be a nested table field values in err_log table which will replace the old varchar2 field value like
'PARAM1:12345;PARAM2:ABCD;PARAM3:21-MAY-2003'.

2) Normalize the table err_log. Perhaps we may need to create additional tables like 'PARAMETER' table, 'ERROR_PARAMETER' table etc

How are the above 2 approaches going to affect performance if the err_log table is going to be populated
heavily after every ETL process? (The size of the err_log table is not much of a concern since it is regularly archived.)

OR

Is there any other better method?

-Thanks Tom
-Praveen

Tom Kyte

Followup  

January 06, 2004 - 8:49 am UTC

1 and 2 are equivalent.

(if you have my book expert one on one oracle -- i show you exactly what a nested table is, expose the implementation totally)

a nested table will simply add a 16 byte UNIQUE raw column to the "parent table".
it will set up a child table with this same 16 byte raw column (a fkey to parent).
you will have to index this column in the nested table.

I find using "plain old relational tables" to be the "best" (do it yourself). In general, your parent table already has a primary key (no need for that 16 byte raw unique column) and you can simply use that.

If you like the nested table syntax, by all means create an object relational view to give yourself the "appearance" of having the nested table (but retain the flexibility of plain old relational tables which cannot be beat by any object model)
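A minimal sketch of such an object-relational view over plain relational tables, reusing the err_log example from the question (all table, column and type names here are assumptions):

```sql
-- parent/child relational tables, presented as if err_log had a nested table
create or replace type param_objtyp as object
( name  varchar2(30),
  value varchar2(255)
);
/
create or replace type param_ntabtyp as table of param_objtyp;
/
create or replace view err_log_ov
as
select e.err_id, e.time_stamp, e.source_proc,
       cast( multiset( select param_objtyp( p.name, p.value )
                         from err_log_param p
                        where p.err_id = e.err_id ) as param_ntabtyp ) params
  from err_log e;
```

Queries see the "nested table" shape, while the underlying storage stays plain relational and can be indexed and joined normally.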




map to host_arrays

June 07, 2004 - 6:03 pm UTC

Reviewer: Gabriel from Montreal, Canada

Hello Tom,

This is a design question.

My system will start receiving an additional 10000 transactions every 5 minutes through files. These transactions will be filtered through pl/sql.
The filtering code will use associative arrays to store the transactions temporarily before they are written to the database. The purpose of this solution was to maximize memory use (which we have plenty of) and to minimize IO, which is starting to be a bottleneck.

Question 1. Does the above make any sense?

My main problem is how the pl/sql code will be called. I am trying to avoid having to call some procedure 10000 times every 5 minutes to load my associative arrays. I would rather call the procedure and pass an array as a parameter. I know Oracle is capable of handling that, but how can an array of a record type be constructed on the os side?

Question 2. What are the possibilities (technology wise) to build an array of records on the OS?
Question 3. Would calling the proc with an array provide improved performance over calling it 10000 times with a string that I manipulate to build my array?

I read that index-by tables can map to host arrays?

Question 4. What is a host array and does this mapping work?

Thank you very much,

Tom Kyte

Followup  

June 07, 2004 - 7:34 pm UTC

no, it doesn't so far. Your writes are buffered in the buffer cache -- with the exception of redo, which is buffered in the redo log buffer and trickled out in the background by lgwr -- until you commit, at which point you cause the redo log buffer to be flushed (and might well wait for it).

you don't wait for these background writes.
you might well wait for the foreground "commit" write.

it would be much better to read 100 records, forall i insert them and commit when you have to.

The IO bottleneck in a system like this will be redo. If you save it all up and slam it in there -- you end up waiting for redo "log file syncs". If you keep it flowing -- it'll be buffered writes all of the way.
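A sketch of that "read 100 records, forall insert, commit when you have to" flow, assuming 9iR2 (where FORALL can bind records) and made-up table names ext_txns and txns:

```sql
declare
    cursor c is select * from ext_txns;   -- e.g. an external table over the file
    type txn_array is table of c%rowtype index by binary_integer;
    l_rows txn_array;
begin
    open c;
    loop
        fetch c bulk collect into l_rows limit 100;   -- bite-sized chunks
        exit when l_rows.count = 0;
        forall i in 1 .. l_rows.count
            insert into txns values l_rows(i);
        commit;   -- keeps redo flowing instead of one big "log file sync" wait
    end loop;
    close c;
end;
/
```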

interesting but

June 08, 2004 - 12:00 am UTC

Reviewer: Gabriel from Montreal, Canada

Hello Tom,

Thank you very much for your prompt response (do you ever sleep?).
What you say makes sense, but the questions still remain:

1. Would there be a performance gain if I call the procedure with an array instead of calling it 10000 times from the external system?

2. When you say read the elements do you refer to external tables? This means more i/o. Is this faster than calling pl/sql with data arguments?

3. What is mapping to host arrays and can you please provide an example?

Thank you very much again,

Tom Kyte

Followup  

June 08, 2004 - 8:23 am UTC

1) if you mean "would it benefit me to save up say 100 or 1000 inputs and send them to the procedure at once" -- yes, it would. we call it "array processing" and it is a very good way to make this process more efficient.

But -- remember -- it is like all medicine. Too much and you kill the patient. I find 100-500 (100 really) to be a sweet spot for this type of processing -- it keeps both the client and the server busy, a nice flow of data. If you save up too much in the client, you get that big burst effect on the server and things run choppy.


2) not sure what you meant there. I'm thinking "something has to read input, something has to process input, something has to insert input". something reading should do so in bite sized chunks -- to keep the reader busy and the server busy.


3) depends on your language. If possible i would be using nothing but plsql here to read data (via external tables), and insert it.

ref cursor vs. collection

August 09, 2004 - 8:43 pm UTC

Reviewer: A reader

i have a case where i need to return the
results of a query in the following way.

the result should come from t1 if t1 has data
else
the result should come from t2 if t2 has data

(t1 is actually a query from several tables -
one of which is different in the two cases.)

Now in this case I can not think of an elegant
way of doing the above in one select. Also,
finding out if there is a record in t1 and then
returning a ref cursor based on that finding
does not make sense ( doing more work than required.)

Hence I am thinking of using collections:
if t1 has data put t1's records in a collection
else put t2's records in a collection

return the result as
open cursor for
/* select from the collection */;

One could also use a temporary table here with more
or less the same effect...perhaps it would be a
cleaner solution.

Do you have any thoughts?

thanx!


Tom Kyte

Followup  

August 09, 2004 - 9:09 pm UTC

will the results from "t1" be zero one or more rows?

how many rows in the entire thing -- min and max?

thanx!

August 09, 2004 - 9:36 pm UTC

Reviewer: A reader

"will the results from "t1" be zero one or more rows?
how many rows in the entire thing -- min and max? "

t1/t2 can have zero, one or more rows in them - t1 can have
a max of 360 rows, t2 can have a max of 20 or so rows..
how does this change the solution?

many thanx!

Tom Kyte

Followup  

August 10, 2004 - 7:29 am UTC

If it was "t1 returns at most a single row and t2 returns at most a single row", we could use rownum, but we can still do this in a single query:



ops$tkyte@ORA9IR2> select * from t1;
 
         X          Y
---------- ----------
         1          2
         1          3
 
ops$tkyte@ORA9IR2> select * from t2;
 
         X          Y
---------- ----------
         1         23
         2         23
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> variable x number
ops$tkyte@ORA9IR2> exec :x := 1
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
  2    from (
  3  select a.*, count(decode(what,'t1',1)) over () cnt_t1
  4    from (<b>
  5  select 't1' what, t1.*
  6    from t1
  7   where x = :x
  8  union all
  9  select 't2' what, t2.*
 10    from t2
 11   where x = :x
 12        </b> ) a
 13         )
 14   where cnt_t1 = 0 or what = 't1'
 15  /

<b>the bolded part is your query -- we added a "what" column.  we use analytics to count how many rows came from 't1' and keep rows such that

a) t1 cnt is zero -- they will be all of the t2 rows..
b) t1 rows (meaning cnt_t1 will not be zero, t2 won't make the cut)</b>

 
WHAT                X          Y     CNT_T1
---------- ---------- ---------- ----------
t1                  1          2          2
t1                  1          3          2
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec :x := 2
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> /
 
WHAT                X          Y     CNT_T1
---------- ---------- ---------- ----------
t2                  2         23          0
 
ops$tkyte@ORA9IR2>
 

very interesting!

August 10, 2004 - 11:45 am UTC

Reviewer: A reader

Thanx for yet another interesting approach...Just
one doubt - if it is ok by you... In this case, aren't you executing both queries
in all cases?... So depending on how expensive
the queries are - this may run slower? e.g.
if the query from t2 is quite expensive in 100% of the
cases and in 90% of the cases the query from t1
is pretty fast, we will be incurring the expense
of executing both of them in majority of cases.
Unless of course optimizer is smart enough to
see through that..

Instead if you use if then else, then we avoid this problem?

However, I see yet another technique here which could
be very useful - thank you!!


Tom Kyte

Followup  

August 10, 2004 - 3:40 pm UTC

yes, if executing "t1" is cheap and "t2" is expensive, returning a collection might well be preferred.

thanx!

August 10, 2004 - 4:04 pm UTC

Reviewer: A reader


To use collection or temporary table?

December 14, 2004 - 1:49 pm UTC

Reviewer: Gail

Hello,Tom!
I work with Oracle8i.
I have a process which consists of a few parts.
One of the parts is to fill 2 arrays or 2 temporary tables:
One table consists of 3 fields: prod_no, branch_no, qty1.
All 3 fields are numeric.
The other table consists of 4 fields:
prod_no, branch_no, day_in_week, qty2.
day_in_week is varchar(2) and the others are numeric.
After I fill the 2 tables I have a loop which does a lot of things and sometimes accesses table1 and sometimes table2 in order to get qty1 and qty2. I access table one by prod_no and branch_no from the loop. I access table two by prod_no, branch_no, day_in_week.
Can you give me advice on how to do that? Which is better to define, arrays or a temporary table, and why?
And if the solution is an array, how do I find the qty's there?
Thank you very much.
Gail.

Tom Kyte

Followup  

December 15, 2004 - 12:54 pm UTC

"loop" = JOIN perhaps.

maybe you don't mean to be procedurally looking things up, maybe you mean to use a scalar subquery or JOIN.

Continue to use collections or temporary table

December 15, 2004 - 1:30 pm UTC

Reviewer: A reader

Hello, Tom, thank you for the answer, but in this case a join is not good because I need to run a precalculation process before I begin my main process. I need 2 temporary tables or arrays; I don't know what to choose.
One table is
PROD_NO BRANCH_NO QTY_1
------- --------- -----
      1         1
      1         2
      1         3
      1         4
IN MY FIRST PROCESS I HAVE TO CALCULATE QTY'S AND TO PUT FOR EACH RECORD.
The second table is :
PROD_NO DAY BRANCH_NO QTY_2
------- --- --------- -----
      1 SUN         1
      1 MON         1
IN MY FIRST PROCESS I HAVE TO CALCULATE ALSO QTY'S2 AND TO PUT FOR EACH RECORD.
2 TABLES ARE INDEPENDENT.
MY FIRST PROCESS DOESN'T HAVE TO RUN THROUGH
THESE TABLES, BUT
SOMETIMES IT MUST ACCESS TABLE 1 BY PROD_NO, BRANCH_NO
TO GET QTY_1,
AND SOMETIMES IT MUST ACCESS TABLE 2 BY PROD_NO, DAY, BRANCH_NO TO GET QTY_2.
SO WHAT IS THE BEST WAY TO DEFINE THE TABLES - TEMPORARY TABLES OR ARRAYS - AND WHY?
CAN YOU SHOW ME AN EXAMPLE PLEASE?
Thank's a lot. Gail



Tom Kyte

Followup  

December 15, 2004 - 6:11 pm UTC

i don't even know what you are "looking up" here -- i don't see why the computation of a qty (something you would not look up) would impact a lookup?

but, at the end of the day -- do what I would do - try both, benchmark and find out which is superior. would you believe me if I just said "do it like this, it is faster" (please say no)

You could only believe me if I said "given this set of assumptions, this benchmarks shows that this approach X is superior to this approach Y"

so, set it up, try it out.

Continue to temporary tables or collections-Oracle 8i.

December 16, 2004 - 2:36 am UTC

Reviewer: GAIL

Dear Tom,
Maybe I didn't explain myself well.

My first question was: if I have to keep approximately 5
records during the process, and I need them only for the duration of the process, where is it better to keep them: in a temporary table or an array (index by binary_integer), and why?

My second question was:
=======================
Can you show me by example how to get the "qty" from an array
that has 4 fields with these values:
prod_no day_in_week branch_no qty
------- ----------- --------- ---
      1 sun                 1  20
      1 sun                 2  30
      1 mon                 1  50
      1 mon                 2  60
      2 sun                 1 100
      1 sun                 2  90

I run through a DATABASE TABLE that has prod_no, day_in_week, branch_no
and I want to get the qty by prod_no, day_in_week, branch_no
from the previous array (or temporary table).

Thank you .
GAIL.



Tom Kyte

Followup  

December 16, 2004 - 8:13 am UTC

I'm still in the mode of:

but, at the end of the day -- do what I would do - try both, benchmark and find
out which is superior. would you believe me if I just said "do it like this, it
is faster" (please say no)

You could only believe me if I said "given this set of assumptions, this
benchmarks shows that this approach X is superior to this approach Y"

so, set it up, try it out.

Continue to use collections or temporary table in ORACLE 8I

December 16, 2004 - 8:48 am UTC

Reviewer: Gail

Sorry,Tom!
I'll try to do that, but I don't know how to define and use this kind
of array: prod_no, day_in_week, branch_no, qty.
HOW DO I GET THE QTY VALUE FROM THIS ARRAY?
Can you send me an example with such an array?
Thank you very much.
Gail

Tom Kyte

Followup  

December 16, 2004 - 9:34 am UTC

insufficient data really.


but in general:


the_qty := your_array(i).qty;










CONTINUE ABOUT ARRAY in ORACLE 8I.

December 16, 2004 - 9:58 am UTC

Reviewer: GAIL

BUT IF I need an array of 3 values of different types,
what kind of array should it be, and by which index do I have to look it up?
ARE THERE SOME general rules for when it is good to use an array (index by binary_integer) and when the DB TABLE?
Sometimes it is very difficult to try it out because of the changes in the database.
Thank's.
GAIL

Tom Kyte

Followup  

December 16, 2004 - 10:10 am UTC

but you

a) know how to code plsql
b) know how to code sql

so you can

c) benchmark both approaches and see what works best for you.

If you are interested -- I did just that in Effective Oracle by Design -- it took a couple of pages to show that scalar subqueries & joins (are better than) looking up in arrays (are sometimes better than) firing off billions of tiny queries to do lookups.

Continue collections ORACLE 8I

December 16, 2004 - 10:56 am UTC

Reviewer: GAIL

Dear Tom!
I ordered your 2 books: Effective Oracle and One on One part 1.
I want to order part 2 also, and I should get them in 2 weeks. I am very new to ORACLE. That's the reason I asked
for an example of the type of array I wrote above.
Thank you very much.
GAIL.


Continue collection Oracle 8i

December 17, 2004 - 1:08 am UTC

Reviewer: A reader

Dear Tom!
I hope you read what I wrote above.
I don't know how to access an array by 3 fields (of different types). Can you send me an example?
Is there any general rule for what is better: an array (index by binary_integer) or a temporary table?
Can you tell me what you would do for small tables and for big tables? I tried out 2 cases with a simple situation and haven't come to any conclusion.
Thanks a lot.
GAIL

Tom Kyte

Followup  

December 17, 2004 - 7:55 am UTC

have you read the plsql guide? or the application developers guide for object relational features.

I'm afraid that showing snippets of code isn't going to get you very far, especially if "performance is key"


there are no cut and dry answers. try both approaches with your data in your case and see which works best.


create type myScalarType as object ( x int, y date, z varchar2(30))
/
create type myArrayType as table of myScalarType
/

declare
l_data myArrayType := myArrayType();
begin
l_data.extend;
l_data(1) := myScalarType( null, null, null );  -- the new element is atomically null until initialized
l_data(1).x := 42;
l_data(1).y := sysdate;
l_data(1).z := 'hello world';
l_data.extend;
l_data(2) := myScalarType( 1, sysdate, 'x' );
end;
/

for example - but now you know 5% of what you can do with them -- so back to the books.


Continue about collections-ORACLE 8I

December 17, 2004 - 11:45 am UTC

Reviewer: GAIL

Dear Tom,
Thnk you for the example of ARRAY.
I want to ask you 2 questions:
Is this array ACCESED BY KEY PROPERLY?

DECLARE
TYPE r_prods IS RECORD (prod_no number(1),
day_in_week number(1),
branch_no number(1),
qty number(4));

TYPE t_prods IS TABLE OF r_prods INDEX BY BINARY_INTEGER;
v_prods t_prods;
BEGIN
v_prods(111).prod_no := 1;
v_prods(111).day_in_week := 1;
v_prods(111).branch_no := 1;
v_prods(111).qty := 300;
v_prods(121).prod_no := 1;
v_prods(121).day_in_week := 2;
v_prods(121).branch_no := 1;
v_prods(121).qty := 400;
END;

The key is 3 digits:
the first digit represents the value of prod_no,
the second the value of day_in_week,
the third the value of branch.
Here is how I get the qty from the array:
var_qty := v_prods(var_prod*100 + var_day*10 + var_branch).qty;

THE SECOND QUESTION IS:
I'm waiting for your 2 books in 2 weeks: Effective, One on One, so I couldn't see the examples.
Can you send me an example of SCALAR SUBQUERIES & JOINS with arrays? Does it work in Oracle 8i?
THANK YOU VERY MUCH.
GAIL.

Tom Kyte

Followup  

December 17, 2004 - 2:46 pm UTC

looks ok -- i did not actually compile it or anything. that is a plsql index by (sparse) table.


scalar subqueries are simply:


select column, (select statement), column
from table


eg:

select dname, (select count(*) from emp where emp.deptno = dept.deptno) cnt
from dept;

you use a "select" that returns 1 row, 1 column (hence the name scalar) where you would normally use a column or literal values.




Continue to scalar subquery

December 17, 2004 - 5:38 pm UTC

Reviewer: GAIL

Hello,Tom,
I understood the above example of a scalar subquery. IT'S GREAT, but I didn't get
how you join it to the array. Can you show an example with an array?
Thank you very much.
GAIL

Tom Kyte

Followup  

December 18, 2004 - 8:45 am UTC

getting too far off topic.

Sorry, Tom,what do you mean?

December 18, 2004 - 12:15 pm UTC

Reviewer: GAIL

GAIL
Tom Kyte

Followup  

December 18, 2004 - 12:58 pm UTC

getting too far off of the original topic of this post. I'm not sure even what you mean by "joining to the array". But I've lots of examples of lots of stuff on this site.

peek around.

What i mean joining to array is:

December 18, 2004 - 2:43 pm UTC

Reviewer: GAIL

That you wrote in one of the answers that you prefer scalar subqueries & joins instead of lookups to arrays, so I understood that we have the possibility to make joins in scalar subqueries to arrays. Am I right? Any example?
Thank you.
GAIL

Tom Kyte

Followup  

December 18, 2004 - 4:01 pm UTC

i prefer scalar subqueries and joins

OVER

lookups to arrays.


I prefer doing the work in SQL, not procedurally. So, arrays are the things I do not prefer -- they are procedural. It doesn't make sense to "make joins in scalar subqueries to arrays"

Nested Table and FULL TABLE ACCESS

December 30, 2004 - 1:48 pm UTC

Reviewer: Venki from India

When I have a Nested Table in a query joined with yet another table (with an index), why does the other table get a FULL INDEX SCAN rather than an INDEX RANGE SCAN?

SQL> CREATE TABLE EMP (EMP_ID NUMBER, EMP_NAME VARCHAR2(30));

SQL> CREATE TYPE NUMBER_TBL IS TABLE OF NUMBER;

DECLARE
  l_emp_tbl NUMBER_TBL := NUMBER_TBL();
  CURSOR c_emp_names IS
    SELECT emp_name
      FROM TABLE(CAST(l_emp_tbl AS NUMBER_TBL)) t
         , emp e
     WHERE e.emp_id = t.column_value;
BEGIN
  l_emp_tbl.extend(2);
  l_emp_tbl(1) := 1;
  l_emp_tbl(2) := 2;
  OPEN c_emp_names;
  ....
  ....

The query of the cursor c_emp_names results in a FULL TABLE access of the EMP table and then filters by the nested table values to ultimately get my two records.

Why is this happening?
How can I avoid it?

   

Tom Kyte

Followup  

December 30, 2004 - 2:26 pm UTC

sort of need the entire example please. I for one don't see any indexes.

Nested Table versus Full Table Scan

January 05, 2005 - 7:59 am UTC

Reviewer: Venki from India

I created a table BIG_TABLE which has some rows from ALL_OBJECTS

SQL> 
create table big_table as (
  select object_id, object_type, object_name 
    from all_objects
   where owner IN ('SYS', 'PUBLIC') 
     and object_type in('TABLE', 'VIEW', 'INDEX', 'SYNONYM')
  )

SQL> SELECT COUNT(*) FROM big_table;
20386

SQL> CREATE INDEX big_table_u1 ON big_table (object_id);
Index Created

SQL> CREATE TYPE NUMBER_TBL_TYPE IS TABLE OF NUMBER;
Type Created.

SQL> ALTER session SET timed_statistics=TRUE;

SQL> ALTER session SET statistics_level=ALL;

SQL> ALTER session SET events '10046 trace name context forever, level 12';

Now I execute this PLSQL block
SQL> 
declare
  l_object_tbl number_tbl_type := number_tbl_type();
  cursor c_objects is
    select o.object_id, o.object_name
      from table (cast (l_object_tbl as number_tbl_type)) t
         , big_table o 
     where o.object_id = t.column_value;
begin
  l_object_tbl.extend(5);
  l_object_tbl(1) := 487;
  l_object_tbl(2) := 581;
  l_object_tbl(3) := 607;
  l_object_tbl(4) := 519;
  l_object_tbl(5) := 499;
  for v_object in c_objects loop
    dbms_output.put_line(v_object.object_name);
  end loop;
end;        

The tkprof output is
====================

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.04          0          6          0           0
Fetch        6      0.67       1.23        353        118         36           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        8      0.68       1.28        353        124         36           5

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: 65     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  MERGE JOIN  (cr=118 r=353 w=238 time=1234557 us)
  20386   SORT JOIN (cr=118 r=353 w=238 time=1161844 us)
  20386    TABLE ACCESS FULL BIG_TABLE (cr=118 r=115 w=0 time=175600 us)
      5   SORT JOIN (cr=0 r=0 w=0 time=26851 us)
      5    COLLECTION ITERATOR PICKLER FETCH (cr=0 r=0 w=0 time=65 us)

This is not what I am trying in reality. This is just a replica of what I would like to do with my product related tables and a Nested Table.

Questions
=========
1. Why is there a FULL TABLE ACCESS of BIG_TABLE? Why is the COLLECTION ITERATOR PICKLER FETCH not driving the query?

2. Can you also explain why the Fetch phase shows a COUNT of 6? Can you explain what happens behind the scenes.

Tom Kyte

Followup  

January 05, 2005 - 9:56 am UTC

looks like an RBO plan (sort merge -- cbo would have at least hash joined)

You want to read:

</code> http://www.oracle.com/technology/oramag/oracle/04-sep/o54asktom.html
Query Plans with Temporary Tables

and
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3779680732446#15740265481549 <code>
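For reference, the cardinality hint discussed in the linked articles could be applied to the cursor above roughly like this - just a sketch, assuming 9i or later and that the collection holds about 5 elements:

cursor c_objects is
  select /*+ cardinality( t 5 ) */ o.object_id, o.object_name
    from table (cast (l_object_tbl as number_tbl_type)) t
       , big_table o
   where o.object_id = t.column_value;

Telling the CBO that the collection returns about 5 rows lets it consider an indexed access into BIG_TABLE instead of the sort merge seen above.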



What about Oracle 8i

January 06, 2005 - 7:41 am UTC

Reviewer: Venki from India

Thank you so much Tom for your reply. It was really useful especially the Cardinality Hint.

Is there any way to achieve the same in Oracle 8i, since the cardinality hint is available only from 9i?

Tom Kyte

Followup  

January 06, 2005 - 10:57 am UTC

nope - it quite simply "did not exist" way back then.

Explanation of how Nested Tables work behind the Scenes

January 06, 2005 - 7:50 am UTC

Reviewer: Venki from India

Tom, can you also shed light on how Nested Tables work when they are used in queries, especially the figures that come up in the TKPROF output?

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.04          0          6          0           0
Fetch        6      0.67       1.23        353        118         36           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        8      0.68       1.28        353        124         36           5

Thanks in advance for your help on this.

Tom Kyte

Followup  

January 06, 2005 - 11:03 am UTC

</code> http://www.oracle.com/technology/oramag/oracle/04-sep/o54asktom.html <code>

read query plans with temporary tables, and then follow the link to the one
"about in-memory collections"

Which performs better

January 07, 2005 - 10:25 am UTC

Reviewer: Venki from India

Hi Tom,
Suppose I want to search for the existence of a value in a PL/SQL table. Which will perform better?

1. A SQL query using the CAST operator:

select 1
  from table (cast (tbl as number_tbl_type)) t
 where t.column_value = <some_value>

2. Code the search in PL/SQL itself, scanning each value stored in the table to find out whether the required value exists:

declare
  l_found boolean;
  l_tbl   number_tbl_type;
begin
  l_found := FALSE;
  for i in 1 .. l_tbl.count loop
    if l_tbl(i) = <some_value> then
      l_found := TRUE;
      exit;
    end if;
  end loop;
end;

When should I go for the CAST operator, and when should I stick completely with PL/SQL? Can you throw some light on that?



Tom Kyte

Followup  

January 08, 2005 - 3:22 pm UTC

how about option 3 and 4?

3)

declare
l_tbl number_tbl_type;
begin
if ( l_tbl.exists( some_value ) )
then
......


(eg: instead of storing the elements in l_tbl(1), l_tbl(2), .... store the elements in l_tbl(VALUE) -- use a sparse array.)


4) same idea but use

type array is table of <whatever> index by varchar2(N);

for when "some value" is not a number, but a string.






IF (main goal is to find something in the plsql table)
THEN
use that something as the index.


Problem with collection

January 07, 2005 - 3:42 pm UTC

Reviewer: Viki

Hello,Tom
I have a problem:
create table prods
(prod number(2),
code number(2));
insert into prods values (1,1);
insert into prods values (1,2);
insert into prods values (1,3);
insert into prods values (1,4);
insert into prods values (1,5);
insert into prods values (2,1);
insert into prods values (2,2);
insert into prods values (2,3);
insert into prods values (2,4);
insert into prods values (2,5);
I have to run through the dates from '01-nov-2004' to '31-dec-2004'.
For each date that is not a Sunday I have to check:
if the date is a Friday I have to run through table PRODS where prod=2, then where prod=1.
For each combination of date, prod and code I have to make 14 different selects from different tables; for each select I calculate qty1, qty2, qty3 ... qty14, and for each combination of date, prod, code I make an insert into another table: insert into table t date, prod, code, qty1 ... qty14.
I thought to do it :
LOOP
RUN THROUGH THE DATES
FOR EACH DATE
FOR I IN (SELECT PROD,CODE FROM PRODS)
SELECT * INTO QTY1
..................
SELECT INTO QTY14
INSERT INTO TABLE T (DATE,I.PROD,I.CODE,QTY1....QTY14)
END LOOP;
END LOOP;
But this way seems to me very slow in performance,
because I have approximately 300 combinations.
Please suggest a better way to do it.
Have a nice day
Viki

Tom Kyte

Followup  

January 08, 2005 - 4:16 pm UTC

new question -- when I'm taking them, please feel free to post it. Not really relevant to this particular thread.

Using result set in collection

January 26, 2005 - 12:59 pm UTC

Reviewer: suresh from ny

We are facing a peculiar situation dealing with a result set. Here is a sample case.

create or replace type test_tab as table of number;

create table test (supernum number,status varchar2(20));

insert into test values(1000,'ready');
insert into test values(2000,'ready');
insert into test values(3000,'not ready');
insert into test values(4000,'not ready');
insert into test values(5000,'ready');
insert into test values(6000,'not ready');

select * from test;

  SUPERNUM STATUS
---------- --------------------
      1000 ready
      2000 ready
      5000 ready
      6000 not ready
      3000 not ready
      4000 not ready

create table test_a (supernum number,status varchar2(20),remarks varchar2(20));

insert into test_a values(1000,'started','blank');
insert into test_a values(2000,'processing','blank');
insert into test_a values(3000,'submitted','blank');


select * from test_a;

  SUPERNUM STATUS               REMARKS
---------- -------------------- --------------------
      1000 started              blank
      2000 processing           blank
      3000 submitted            blank

create or replace procedure proc_a
as

v1 test_tab ;
begin
select supernum bulk collect into v1
from test where status='ready';

proc_b(v1);
end;
/

create or replace procedure proc_b(p_supernum IN test_tab )
as
begin
  update test_a set status=case status when 'started' then 'test started'
                          when 'processing' then 'test processing'
                             when 'submitted' then 'test submitted' 
                   end
 where remarks in('blank') and supernum in 
(select supernum from table(cast(p_supernum as test_tab )));
end;
/

EXPECTED RESULTS:

select * from test_a;

  SUPERNUM STATUS               REMARKS
---------- -------------------- --------------------
      1000 test started         blank
      2000 test processing      blank
      3000 submitted            blank

ACTUAL RESULTS:

SQL> select * from test_a;

  SUPERNUM STATUS               REMARKS
---------- -------------------- --------------
      1000 test started         blank
      2000 test processing      blank
      3000 test submitted       blank

Why is the supernum of 3000 also updated when the IN parameter does not contain that value?

Tom Kyte

Followup  

January 26, 2005 - 1:42 pm UTC

and supernum in (select supernum from table(cast(p_supernum as test_tab )));

is the same as:

and supernum in (select supernum from dual)




as long as your p_supernum has at least one row.  The collection exposes only a COLUMN_VALUE attribute, so SUPERNUM in the subquery resolves to the outer table's SUPERNUM column: it is a correlated subquery!  It is always TRUE with at least one row, FALSE otherwise.



 where remarks in('blank') and supernum in
(select <b>COLUMN_VALUE</b> from table(cast(p_supernum as test_tab )));
end;
/


ops$tkyte@ORA9IR2> select * from test_a;
 
  SUPERNUM STATUS               REMARKS
---------- -------------------- --------------------
      1000 started              blank
      2000 processing           blank
      3000 submitted            blank
 
ops$tkyte@ORA9IR2> exec proc_a
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select * from test_a;
 
  SUPERNUM STATUS               REMARKS
---------- -------------------- --------------------
      1000 test started         blank
      2000 test processing      blank
      3000 submitted            blank
 
 

My goodness... You are the best Tom

January 26, 2005 - 1:53 pm UTC

Reviewer: A reader

Thanks so much..

convert OR result set into relational result set

May 12, 2005 - 7:36 am UTC

Reviewer: Nilanjan Ray from India

Hi Tom,

I have the following query:

select prog_break_key,prog_break_dt,
cast(multiset(select * from ssp_other_event oe
where oe.prog_break_key=pb.prog_break_key) as myobjtab_other) other_ev,
cast(multiset(select * from ssp_promotion_spot pe
where pe.prog_break_key=pb.prog_break_key) as myobjtab_promo) promo_ev,
cast(multiset(select * from ssp_sponsorship_spot se
where se.prog_break_key=pb.prog_break_key) as myobjtab_spon) spon_ev
from ssp_programme_break pb
where prog_break_dt = '01-JAN-2005'

Can I convert it back into a relational result set, using the above query as a subquery etc.? For one main record any of the object type collections could be null, i.e. there could be no child records.

Thanks in advance for your valuable time.

Regards
Nilanjan Ray





Tom Kyte

Followup  

May 12, 2005 - 12:14 pm UTC

not sure -- will only ONE of other_ev, promo_ev or spon_ev return a value -- or can all three?

we can definitely flatten with a UNION ALL view in any case.


I beg you:

where prog_break_dt = to_date( '01-JAN-2005', 'dd-mon-yyyy' )





convert OR result set into relational result set

May 13, 2005 - 6:36 am UTC

Reviewer: Nilanjan Ray from India

Thanks Tom for the prompt reply

The scenario could be that for each main record:
1. All three object collections could have data.
2. All three object collections could be null.
3. One or more could be null.

Actually there is already a view based on the above query, which is used internally inside PL/SQL.
Now I need to expose the data in rows and columns in the application.
Can I achieve this by using some sort of transformation on the above query?

Could you please illustrate the point "we can definitely flatten with a UNION ALL view in any case"

"prog_break_dt = '01-JAN-2005'" was a hasty mistake in the post.

Thanks in advance
Nilanjan Ray


Tom Kyte

Followup  

May 13, 2005 - 10:29 am UTC

so you would have to use a UNION ALL if you wanted to flatten it.


select ...
  from ssp_programme_break pb, ssp_other_event oe
 where oe.prog_break_key=pb.prog_break_key
   and pb.prog_break_dt = to_date('01-JAN-2005','dd-mon-yyyy')
 UNION ALL
....


you would join three times.  to flatten using your view you could:

ops$tkyte@ORA9IR2> create or replace view v
  2  as
  3  select deptno,
  4         cast(multiset(select empno from emp where emp.deptno = dept.deptno) as myTableType ) emps,
  5         cast(multiset(select sal from emp where emp.deptno = dept.deptno) as myTableType ) sals
  6    from dept;
 
View created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from v;
 
    DEPTNO
----------
EMPS
-------------------------------------------------------------------------------
SALS
-------------------------------------------------------------------------------
        10
MYTABLETYPE(7782, 7839, 7934)
MYTABLETYPE(2450, 5000, 1300)
 
        20
MYTABLETYPE(7369, 7566, 7788, 7876, 7902)
MYTABLETYPE(800, 2975, 3000, 1100, 3000)
 
        30
MYTABLETYPE(7499, 7521, 7654, 7698, 7844, 7900)
MYTABLETYPE(1600, 1250, 1250, 2850, 1500, 950)
 
        40
MYTABLETYPE()
MYTABLETYPE()
 
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
  2    from (select deptno, 'EMPS', t.*
  3            from v, table(v.emps) t )
  4   union all
  5  select *
  6    from (select deptno, 'SALS', t.*
  7            from v, table(v.sals) t )
  8  /
 
    DEPTNO 'EMP COLUMN_VALUE
---------- ---- ------------
        20 EMPS         7369
        30 EMPS         7499
        30 EMPS         7521
        20 EMPS         7566
        30 EMPS         7654
        30 EMPS         7698
        10 EMPS         7782
        20 EMPS         7788
        10 EMPS         7839
        30 EMPS         7844
        20 EMPS         7876
        30 EMPS         7900
        20 EMPS         7902
        10 EMPS         7934
        20 SALS          800
        30 SALS         1600
        30 SALS         1250
        20 SALS         2975
        30 SALS         1250
        30 SALS         2850
        10 SALS         2450
        20 SALS         3000
        10 SALS         5000
        30 SALS         1500
        20 SALS         1100
        30 SALS          950
        20 SALS         3000
        10 SALS         1300
 
28 rows selected.
 

Can you say offhand which is better?

October 13, 2005 - 11:40 pm UTC

Reviewer: James from Australia

Well I have 2 examples here.

Example 1
Uses an implicit cursor loop and executes additional sql per iteration:
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        4      0.00       0.01          0          0          0           0
Execute     62      0.02       0.01          0          0          0           0
Fetch      241      0.06       0.06          0       4807          0         210
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total  ***307***    0.08       0.08          0       4807          0         210



Example 2
Uses a single bulk collected collection that contains a sql table type.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        5      0.17       0.16          0       9237          0          34
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   ***10***    0.17       0.17          0       9237          0          34


Now example 1 apparently ran faster, and used less CPU and fewer buffer gets, than example 2.

Example 2 had ***way*** fewer calls - 10 compared to 300...

But based on the other metrics my nice bulk collected example 2 is not as good...

I really want to use example 2, so can you say that based on the metrics it is better? i.e. the CPU, elapsed and query differences are trivial; the call counts are what matter!




Tom Kyte

Followup  

October 14, 2005 - 7:34 am UTC

the cardinality of calls isn't as relevant as the amount of work performed by said call.


Looking at these, I might prefer #1 - due to less logical IO (more scalable, less work) and less CPU.


But then again, we are comparing something that affected 210 rows to something that affected 34 rows - so I'm not sure these are even comparable (they don't seem to be the "same")


populate a nested table using a select in insert

October 17, 2005 - 5:24 pm UTC

Reviewer: RAVI from DALLAS TX USA

HI Tom,
I have a requirement where I am planning to use nested tables;
first let me go through the requirement.
The parent table is an archive table which grows by 4 million records a month.
The parent table has a primary key that is a combination of three columns,
and over 50% of parent records have around 1 to 15 child records,
so we estimate 12 million child records for the parent table;
that implies the child table will grow by 12 million records a month.
The child table is archived as a separate table.
The parent and child tables have no direct relation,
and I am not supposed to add an extra column (holding a column of the parent table) to the child table to make a foreign key relationship.

So I was planning to add a nested table column to the
parent archive table holding the values of the primary key combination of the child table.

The only query planned on these parent/child tables is to
pull a parent record along with all of its child records.

So I need a suggestion from you on whether I should use nested tables,
and if I do use nested tables, can you show me how to work out
the example below:

insert data using a select statement into a nested table.
Here is a scenario, taking emp and dept as examples;
each department has many employees.
I want a table where, for each department record, a nested table
holds the employee numbers for that department,
and this should be done using a select statement
to populate the nested table
(like we do "insert into emp select * from emp").

And also, from the nested table created above,
how can I get all the employee details for a given department
by referring to the employee numbers in the nested table
for the given department?


Tom Kyte

Followup  

October 18, 2005 - 8:19 am UTC

do you have access to either "Expert one on one Oracle" or my new book "Expert Oracle Database Architecture"?

I ask because in there I show exactly how a nested table works, how (if you use them) you should create them (as an IOT basically), and why you probably don't want to use a nested table - but rather use a real table with a primary/foreign key.
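The "create them as an IOT" remark could look like the following sketch, with hypothetical names, assuming an object type emptype(empno, ename) and a collection type emptb of it; NESTED_TABLE_ID is the hidden column that ties child rows to their parent:

create table dept_emp
( deptno  number(4),
  dname   varchar2(30),
  dep_emp emptb )
nested table dep_emp store as dep_emp_nt
( ( primary key ( nested_table_id, empno ) )
  organization index );

Storing the nested table segment as an index-organized table physically clusters each parent's child rows together, instead of scattering them across a heap segment.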

found solution for the above, but still need help

October 17, 2005 - 10:35 pm UTC

Reviewer: ravi from DALLAS TX USA

HI Tom,
I worked out the solution below, derived from the techniques you have used above in this thread, but I have used the IN operator, which will be very expensive for me. Can you please suggest a workaround? Any help is greatly appreciated.
Thank you

below is the solution that I worked out

create type emptype as object
(empno number(4),
ename varchar2(30))
SQL> /

Type created.

SQL> create type emptb as table of emptype;
  2  /

Type created.

  1  create table dept_emp(deptno number(4), dname varchar2(30), dep_emp emptb)
  2* nested table dep_emp store as emptable
SQL> /

Table created.

/* trying to insert values useing select statement */

SQL> ed
Wrote file afiedt.buf

  1  insert into dept_emp select deptno, dname,
  2  cast(multiset(select empno, ename from emp where deptno = d.deptno) as emptb)
  3* from dept d
SQL> /

4 rows created.

SQL> select * from dept_emp;

    DEPTNO DNAME
---------- ------------------------------
DEP_EMP(EMPNO, ENAME)
----------------------------------------------------------------------------------------------------
        10 ACCOUNTING
EMPTB(EMPTYPE(7782, 'CLARK'), EMPTYPE(7839, 'KING'), EMPTYPE(7934, 'MILLER'))

        20 RESEARCH
EMPTB(EMPTYPE(7369, 'SMITH'), EMPTYPE(7566, 'JONES'), EMPTYPE(7788, 'SCOTT'), EMPTYPE(7876, 'ADAMS')
, EMPTYPE(7902, 'FORD'))

        30 SALES
EMPTB(EMPTYPE(7499, 'ALLEN'), EMPTYPE(7521, 'WARD'), EMPTYPE(7654, 'MARTIN'), EMPTYPE(7698, 'BLAKE')

    DEPTNO DNAME
---------- ------------------------------
DEP_EMP(EMPNO, ENAME)
----------------------------------------------------------------------------------------------------
, EMPTYPE(7844, 'TURNER'), EMPTYPE(7900, 'JAMES'))

        40 OPERATIONS
EMPTB()

/* fetch child details for one master record */

  1  select * from emp where empno in (select bb.empno from
  2  the(select dep_emp from dept_emp
  3* where deptno = 10)bb)
SQL> /

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

SQL>  

Tom Kyte

Followup  

October 18, 2005 - 8:41 am UTC

why are you using nested tables since you need to query the nested table standalone?

Sorry Tom, but I didn't get you

October 18, 2005 - 10:44 am UTC

Reviewer: RAVI from DALLAS TX USA

Sorry Tom, I didn't get what you meant by querying the nested table standalone.
My requirement was to pull out all the employees who work in department no 10.
Also notice that I am not using a foreign key relation to pull all the employees for dept no 10, because I don't have a foreign key relation in my real tables, and my real emp table has around a couple of billion rows.

Tom, can you please elaborate on the question? Any suggestions would be greatly appreciated, and thank you for your valuable time.

Tom Kyte

Followup  

October 18, 2005 - 10:55 am UTC

it is just 


select dname, e.*
  from dept d, table( d.emptab )
 where d.deptno = 10;



i just looked at

  1  select * from emp where empno in (select bb.empno from
  2  the(select dep_emp from dept_emp
  3* where deptno = 10)bb)
SQL> /


you were trying to query emp directly.


My suggestion stands however, I would not use nested tables to store data. 

Thank you, that's the query that I was looking for

October 18, 2005 - 12:40 pm UTC

Reviewer: RAVI from DALLAS TX USA

Thank you for your quick response, and thank you for your suggestion. I didn't get what e.* is - is it a typo, in that you forgot to include the alias 'e' for the emp table?
I was directly querying the emp table, referring to the nested table of department 10, to get all the details for the employees of department 10.

select dname, e.*
from dept d, table( d.emptab )
where d.deptno = 10;





Tom Kyte

Followup  

October 18, 2005 - 2:22 pm UTC

sorry - missed an e at the end of line 2....

after table(d.emptab) e
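Putting the correction together against the dept_emp table created earlier in this thread (where the nested table column is named dep_emp rather than emptab), the full query would presumably read:

select d.dname, e.*
  from dept_emp d, table( d.dep_emp ) e
 where d.deptno = 10;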



EXCELLENT, and Thank you for your time Tom

October 18, 2005 - 2:54 pm UTC

Reviewer: RAVI from DALLAS TX USA


October 31, 2005 - 11:00 am UTC

Reviewer: A reader

Are there any caveats for the PL/SQL table
with CAST approach in Oracle 9iR2? In some
of our stored procedures we are using CAST, and the
performance is very slow. Could you please explain?

Thanks in advance.

Tom Kyte

Followup  

November 01, 2005 - 4:56 am UTC

have you identified "what" is slow?? why do you believe it would be cast?

cast will do one of three things:

a) make things run faster
b) make things run slower
c) not change the performance of the query whatsoever.

I can show all three under different circumstances.

A and C are most common, B would be the result of you casting a datatype to a type that makes some access path not available - and that access path would be more optimal.

Managing collection arrays via tables -ORACLE 8I/10G

November 01, 2005 - 9:15 am UTC

Reviewer: ANA

Hello Tom,
Why is making a select from a collection in PL/SQL code faster
than a select from a regular table?
Thank you,
ANA

Tom Kyte

Followup  

November 01, 2005 - 11:03 am UTC

where is that stated that it will be faster?


In fact, I wrote:

....Arrays/collections won't necessarily be any better (in fact not as good as) a
regular result set using straight JDBC. .....

November 01, 2005 - 9:32 am UTC

Reviewer: A reader

Thanks for your valuable response. I wrote two procedures, P1 and P2, with and without the CAST operator. I took runstats of the two (Run1 -> with CAST, Run2 -> without CAST) and here are the results:
Run1 ran in 1162 hsecs
Run2 ran in 765 hsecs
run 1 ran in 151.9% of the time

Name                                Run1      Run2      Diff
LATCH.Consistent RBA                   2         1        -1
LATCH.active checkpoint queue          5         4        -1
STAT...recursive cpu usage             6         5        -1
STAT...table scan rows gotten          1         0        -1
STAT...table scans (short tabl         1         0        -1
STAT...table scan blocks gotte         1         0        -1
STAT...redo entries                    6         7         1
LATCH.undo global data                 5         3        -2
STAT...bytes received via SQL*       426       424        -2
STAT...calls to get snapshot s         4         2        -2
LATCH.channel operations paren         8         4        -4
STAT...CPU used by this sessio         9         5        -4
STAT...CPU used when call star         9         5        -4
STAT...calls to kcmgcs                 2         6         4
STAT...consistent gets - exami         2         6         4
STAT...cleanout - number of kt         2         6         4
STAT...active txn count during         2         6         4
STAT...consistent changes             10        17         7
STAT...db block changes               16        24         8
LATCH.redo writing                    20        11        -9
STAT...db block gets                  10        19         9
LATCH.messages                        30        20       -10
STAT...bytes sent via SQL*Net        417       428        11
LATCH.child cursor hash table         24        12       -12
LATCH.enqueue hash chains             30        12       -18
LATCH.row cache enqueue latch         30        12       -18
LATCH.row cache objects               30        12       -18
STAT...recursive calls                 2        21        19
LATCH.session allocation              50        28       -22
LATCH.library cache pin alloca        70        44       -26
LATCH.enqueues                        55        26       -29
LATCH.library cache pin              128        95       -33
STAT...no work - consistent re         2        35        33
STAT...buffer is pinned count         34         0       -34
STAT...consistent gets                10        46        36
LATCH.session idle bit               105        66       -39
STAT...session logical reads          20        65        45
LATCH.cache buffers chains           114       163        49
LATCH.shared pool                    174       125       -49
STAT...buffer is not pinned co         3        53        50
LATCH.library cache                  272       188       -84
STAT...redo size                   1,820     1,944       124
LATCH.checkpoint queue latch         454       326      -128
LATCH.SQL memory manager worka       268       134      -134
STAT...Elapsed Time                1,167       771      -396
STAT...session pga memory       -131,072         0   131,072

Run1 latches total versus runs -- difference and pct
      Run1       Run2       Diff        Pct
     1,898      1,310       -588    144.89%

2.
I took tkprof of these two procedures and here is the output:

TKPROF: Release 9.2.0.1.0 - Production on Mon Oct 31 13:15:03 2005

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: C:\dev_ora_23433.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
--------------------------------------------------------------------------------

*** SESSION ID:(73.2562) 2005-10-31 17:56:11.570

********************************************************************************

The following statements encountered a error during parse:

select rtrim(c.value,'/')||'/'||d.instance_name||'_ora_' || ltrim(to_char(a.spid))||.'trc'\where a.addr=b.paddr and b.audsid=sys_context('userenv','sessionid')
and c.name='user_dump_dest'

Error encountered: ORA-00936
--------------------------------------------------------------------------------
select rtrim(c.value,'/')||'/'||d.instance_name||
'_ora_' || ltrim(to_char(a.spid))||'.trc'\where a.addr=b.paddr and b.audsid=sys_context('userenv','sessionid')
and c.name='user_dump_dest'

Error encountered: ORA-00911
********************************************************************************

Alter session set events '10046 trace name context forever, level 12'


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.01       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.01       0.00          0          0          0           0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 100

Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
SQL*Net message to client                       1        0.00          0.00
SQL*Net message from client                     1        2.63          2.63
********************************************************************************

BEGIN p1; END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.00          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 100

Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
SQL*Net message to client                       1        0.00          0.00
SQL*Net message from client                     1        0.00          0.00
********************************************************************************

SELECT CAST (MULTISET (SELECT col1
FROM
table1 WHERE col2='C1EW' ) AS col1_OBJ ) FROM DUAL


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.01          0         36          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          5          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.02       0.01          0         41          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 100 (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL DUAL

--------------------------------------------------------------------------------

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.00          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.00          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 100

Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
SQL*Net message to client                       1        0.00          0.00
SQL*Net message from client                     1        3.65          3.65
********************************************************************************

BEGIN p2; END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.00       0.01          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.02          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 100

Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
SQL*Net message to client                       3        0.00          0.00
SQL*Net message from client                     3       73.66         87.80
SQL*Net break/reset to client                   4        0.00          0.00
********************************************************************************

SELECT COL1
FROM
TABLE WHERE COL2='C1EW'


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       19      0.00       0.00          0         36          0          18
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       21      0.00       0.00          0         36          0          18

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 100 (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
     18  TABLE ACCESS BY INDEX ROWID TABLE1
     18   INDEX RANGE SCAN TABLE1_IX2 (object id 38800)

--------------------------------------------------------------------------------

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 100

Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
SQL*Net message to client                       1        0.00          0.00
SQL*Net message from client                     1      107.70        107.70
********************************************************************************

select rtrim(c.value,'/')||'/'||d.instance_name||
'_ora_' || ltrim(to_char(a.spid))||'.trc'
from v$process a,v$session b,v$parameter c,
v$instance d
where a.addr=b.paddr and b.audsid=sys_context('userenv','sessionid')
and c.name='user_dump_dest'

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.04          0          9          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.02       0.02          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.06       0.06          0          9          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 100

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  MERGE JOIN
      1  SORT JOIN
      1  NESTED LOOPS
      1  MERGE JOIN
    855  SORT JOIN
    855  NESTED LOOPS
      1  NESTED LOOPS
      1  NESTED LOOPS
      1  FIXED TABLE FULL X$QUIESCE
      1  FIXED TABLE FULL X$KVIT
      1  FIXED TABLE FULL X$KSUXSINST
    855  FIXED TABLE FULL X$KSPPCV
      1  FILTER
      1  SORT JOIN
      1  FIXED TABLE FULL X$KSPPI
      1  FIXED TABLE FULL X$KSUSE
      1  SORT JOIN
     45  FIXED TABLE FULL X$KSUPR


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2       33.39         33.41
********************************************************************************

Alter session set timed_statistics=false


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 100

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1       15.24         15.24



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        6      0.06       0.05          0          9          0           0
Execute      7      0.02       0.02          0          0          0           4
Fetch        2      0.02       0.02          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15      0.10       0.10          0          9          0           5

Misses in library cache during parse: 4

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      10        0.00          0.00
  SQL*Net message from client                    10      107.70        250.46
  SQL*Net break/reset to client                   4        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       38      0.05       0.03          0         36          0           0
Execute     62      0.04       0.03          0          0          0           0
Fetch      146      0.00       0.00          0        201          0         109
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      246      0.09       0.07          0        237          0         109

Misses in library cache during parse: 23

9 user SQL statements in session.
36 internal SQL statements in session.
45 SQL statements in session.
********************************************************************************
Trace file: C:\Documents and Settings\nchiruvolu\Desktop\Nandini\Tkprof\r1db003_ora_23433.trc
Trace file compatibility: 9.00.01
Sort options: default

1 session in tracefile.
9 user SQL statements in trace file.
36 internal SQL statements in trace file.
45 SQL statements in trace file.
30 unique SQL statements in trace file.
930 lines in trace file.


From the results I think using CAST is slowing down the process. Please explain.

Thanks in advance.

Tom Kyte

Followup  

November 01, 2005 - 11:13 am UTC

I'm sorry - I don't know what I'm comparing

why are you CASTING, what is your *goal*, what is your *need*.


show me some code here - why are you using objects, did you need to?

November 01, 2005 - 10:32 am UTC

Reviewer: A reader

Here are the procedures I wrote:

CREATE OR REPLACE procedure p1
as
    v_obj col_obj;
begin
    select cast( multiset( select col1
                             from table1
                            where col2 = 'C1EW' ) as col_obj )
      into v_obj
      from dual;

    for i in 1 .. v_obj.last loop
        dbms_output.put_line( v_obj(i).col1 );
    end loop;
end p1;
/

CREATE OR REPLACE procedure p2
as
    cursor c1 is
        select col1 from table1
         where col2 = 'C1EW';
    v_col varchar2(20);
begin
    open c1;
    loop
        fetch c1 into v_col;
        exit when c1%notfound;  -- exit must come after the fetch
        dbms_output.put_line( v_col );
    end loop;
    close c1;
end p2;
/


Tom Kyte

Followup  

November 02, 2005 - 4:23 am UTC

my question is - why would you use a collection and objects when you didn't need to?

if you wanted an array, you can bulk collect - but I wouldn't use the cast multi-set unless I *needed* to, and you don't.

So, this is more a case of "wrong, overly complex approach"



declare
    type col1_array is table of table1.col1%type;
    l_array col1_array;
begin
    select col1 BULK COLLECT into l_array
      from table1
     where col2 = 'CREW';

    ....
end;


Continue to above

November 02, 2005 - 5:56 am UTC

Reviewer: Ana

If in my PL/SQL code I have to run several times:
select col1 from table1
where col2='CREW'
like: select num ...
where num in (select col1 from table1
where col2='CREW')
is it better to do once:
select col1 BULK COLLECT into l_array
from table1
where col2 = 'CREW';
and every time do:

select num where num in (select ..from TABLE(CAST(l_array))
or just to repeat every time:
where num in (select col1 from table1
where col2='CREW')
Thank you,
Ana

Tom Kyte

Followup  

November 03, 2005 - 4:58 am UTC

No. Just do it in SQL.

In which cases is it efficient?

November 03, 2005 - 9:01 am UTC

Reviewer: Ana

Hello, Tom!
When is it efficient to select from an array,
like: select .. from TABLE(CAST(l_array))?
Ana

Tom Kyte

Followup  

November 04, 2005 - 2:24 am UTC

when you have an array and no table.

when you procedurally filled an array, and there is no table where the data already exists.

I would never

a) read table to fill array.
b) send array back to database as subquery

I would just

a) send query to database that uses a subquery.

Can you show an example?

November 04, 2005 - 3:56 am UTC

Reviewer: Ana

Hi, Tom!
Can you please show me an example of what was written above?
Thank you,
Ana


Tom Kyte

Followup  

November 04, 2005 - 4:01 am UTC

select * from t where c1 in ( select c2 from t2 )

?? not sure what you mean.

Continue to above

November 04, 2005 - 8:39 am UTC

Reviewer: Ana

If in my PL/SQL code I have to use "in ( select c2 from t2
where t2.b='fff')"
in different queries a lot of times, let's say 8 times,
what would be the most efficient way to do it?
And why not use an array?
Thank you,
Ana




Tom Kyte

Followup  

November 04, 2005 - 8:56 am UTC

use with subquery factoring


with SQ
as
(select c2 from t2 where t2.b = 'fff' )
select *
from t
where x in ( select * from sq)
or y in (select * from sq)



why not to use an array? because SQL is really good at doing SQL - it is what databases do best.

CONTINUE

November 05, 2005 - 8:56 am UTC

Reviewer: Ana

with SQ
as
(select c2 from t2 where t2.b = 'fff' )
select *
from t
where x in ( select * from sq)
or y in (select * from sq)

But i repeat:
in (select c2 from t2 where t2.b = 'fff' )
appears in different queries, from different tables and places in my PL/SQL procedure; i also use it in loops.
I show here only a simple example; it is more complex in my code: in (select c2 from t2 where t2.b = 'fff' and t2.c between x and z and ..so on....).
As i understand it, this select is executed every time i use it, so what can i do so that this select is executed only once?
Thank you,
Ana


Tom Kyte

Followup  

November 05, 2005 - 5:56 pm UTC

No, you would not want to 'optimize' this (likely DE-optimize this).

Let SQL do SQL. SQL is really good at doing SQL.


Let the optimizer optimize. It is very very very doubtful you want to try and second guess this and "optimize" it yourself (it happens, but is rare, and then you might use a global temporary table - BUT - don't do that unless and until you really have to - which in my experience so far has been like once or twice)
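For completeness, a minimal sketch of the rare global-temporary-table approach mentioned above, reusing the t2/t example from this thread (everything else here is a hypothetical name, shown only to illustrate the technique):

```sql
-- created once, as a permanent schema object -- not per session:
create global temporary table gtt_c2_values
( c2 varchar2(30) )
on commit delete rows;

-- inside the procedure: populate it once per transaction...
insert into gtt_c2_values
select c2 from t2 where t2.b = 'fff';

-- ...then let every later query reference the temporary table:
select * from t where x in ( select c2 from gtt_c2_values );
```

Each session sees only its own rows, and they vanish at commit; but as noted above, prefer the plain subquery unless measurement proves otherwise.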

Continue

November 06, 2005 - 3:58 am UTC

Reviewer: Ana

Hello, Tom!
You wrote before that you wouldn't read a table to fill an array
and send the array back to the database as a subquery,
but on the other hand i read in your book about select bulk collect into an array, and select .. cast(array()), so i am confused about when to use an array.
Could you please explain,
and give an example of when it is good to use an array
and when it is good to select from an array.
Thank you,
Ana

Tom Kyte

Followup  

November 06, 2005 - 8:31 am UTC

you use bulk collect to efficiently retrieve data from the database for processing in a program.


You can use the cast array for a variable in list when the client has an array and says "i need all of the rows that match the data in this array"


I would NOT goto the database to retrieve the values into an array just to send them BACK to the database.


It is rare to select from an array - it is for in-lists, where the client (because an end user picked items from a list on a screen, perhaps) has the values and needs to bind them.
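To illustrate that legitimate case - values that exist only in a procedurally filled collection, never in a table - here is a minimal sketch (the type name and values are hypothetical; DEPT is the demo table used elsewhere on this page):

```sql
create or replace type str_list as table of varchar2(30)
/

declare
    l_picked str_list := str_list();
    l_cnt    number;
begin
    -- pretend an end user picked these on a screen; no table holds them
    l_picked.extend(2);
    l_picked(1) := 'SALES';
    l_picked(2) := 'RESEARCH';

    select count(*)
      into l_cnt
      from dept
     where dname in ( select column_value
                        from table( cast( l_picked as str_list ) ) );

    dbms_output.put_line( l_cnt || ' departments matched' );
end;
/
```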

Continue

November 06, 2005 - 10:02 am UTC

Reviewer: Ana

Could you please explain
the difference between how Oracle manages an array in memory (and in which memory) and how Oracle manages a table.
As i understood it, since the array is in the session's private
memory, it is faster for Oracle to search data in private memory than in public memory (the SGA), but i am really not sure about that.
Thanks,
Ana


Tom Kyte

Followup  

November 06, 2005 - 10:21 am UTC

the array would be in your pga memory - but oracle processes queries in general out of the buffer cache (not your memory).

use sql.

Continue

November 06, 2005 - 11:31 am UTC

Reviewer: Ana

If i use select into an array and then process the array for many records in my program, why is it faster than
processing a table? Is it because the array is in the PGA and doesn't need I/O reads? But what if the table is in the buffer cache?
Thanks,
Ana

Tom Kyte

Followup  

November 06, 2005 - 12:35 pm UTC

in the book - I cover why bulk collects are better than single row fetches. You do not context switch back and forth between the application and the database (even between plsql and the database - between plsql and sql)

Continue

November 07, 2005 - 3:19 am UTC

Reviewer: Ana

I understood why bulk collect is better,
but my question was: why, when we have many records, is it better to load them into an array and process the array, and not
to open a cursor on the table and process the cursor? I am asking about the situation where i don't need to insert or update the records, but just to process them and, while processing, do something else.
Thank you,
Ana

Tom Kyte

Followup  

November 07, 2005 - 8:52 am UTC

a cursor is a cursor.

You can either


a) fetch rows one by one from a cursor
b) bulk collect N rows at a time from a cursor


your comment, "why when we have many records is it better to load them into an
array and process the array and not
to open a cursor on the table and process the cursor," doesn't really make sense, since in order to get the records into an ARRAY you need - a cursor.

They are not mutually exclusive things here. They are two different things.

With a cursor you can either go

row by row
in bulk

when you have many rows to process, bulk makes best sense.



Continue

November 07, 2005 - 9:21 am UTC

Reviewer: Ana

That's my question: why does bulk make the best sense? I just want to understand why processing many records in an array is better than row by row. Is it because the array resides in my private PGA memory, and it is faster to scan it than to fetch the table data from the SGA buffer cache or via I/O?
I just want to understand the difference in how Oracle manages data in an array versus data in a database table.

Tom Kyte

Followup  

November 07, 2005 - 11:53 am UTC

No, it is because if you ask for 1,000 rows one at a time, you make a context switch from the client (be it forms, plsql, java, whatever) 1,000 times. Each switch from the client to the server takes time, resources, many things.


if you ask for 1,000 rows 100 at a time, you make 10 such switches - not 1,000.

10 being smaller than 1,000 means less work.


You are NOT comparing "managing of data in an array vs in a database table", you are comparing "bug the server 1,000 times for 1,000 rows versus bug the server 10 times for 1,000 rows"



Continue

November 08, 2005 - 4:21 am UTC

Reviewer: Ana

Do you mean that if i do, for 10,000 records:
FOR i IN 1 .. array.COUNT LOOP
..some statements;
--could also be SQL statements
END LOOP;
it would work faster than:
FOR i IN (SELECT * FROM T) LOOP   -- T is a table of 10,000 recs
..some statements;
END LOOP;
because "FOR i IN (SELECT * FROM T) LOOP
....."
makes 10,000 switches and "FOR i IN 1 .. array.COUNT" makes fewer
switches?

Tom Kyte

Followup  

November 08, 2005 - 10:01 pm UTC

I did not follow this example at all.

but i would ask "how did the array get populated in the first place"

Continue

November 09, 2005 - 2:36 am UTC

Reviewer: Ana

"how did the array get populated in the first place"
The array gets populated by select .. bulk collect .. into.

Continue

November 09, 2005 - 6:25 am UTC

Reviewer: Ana

Sorry, Tom,
i didn't express myself well.
I understood why bulk collect is good at filling arrays for many records:
it is because it saves switches between sql and pl/sql.
But after i have filled an array once by bulk collect,
i have to loop through the array and, inside the loop,
take some actions:
FOR i in 1..array.count loop
....
END LOOP

i didn't understand why looping through the array
is faster than looping through a table cursor, like:
FOR i in (SELECT * FROM T) loop
....
END LOOP
If i have a loop through the array, and inside the loop i
have a select statement from the database, then there is also a switch between the pl/sql engine and the sql engine.

So what exactly (besides reducing the switch time between sql and plsql) makes processing through an array faster?
Please explain in detail.
Ana


Tom Kyte

Followup  

November 10, 2005 - 5:25 pm UTC

because the

for i in (select * from t)

goes back and forth and back and forth and back and forth and back and forth from plsql to sql.

where as the bulk operation cuts down on the context switches.


That is what makes it better. Nothing more, nothing less.

Continue

November 13, 2005 - 3:35 am UTC

Reviewer: Ana

If i have a loop through the array, and inside the loop i
have a select statement from the database, then there is also a switch between the pl/sql
engine and the sql engine? What should i do in such a situation?



Tom Kyte

Followup  

November 13, 2005 - 10:33 am UTC

look at your algorithm and ask "why am I not using a subquery and a single sql statement, or perhaps a join???"

Continue

November 13, 2005 - 10:39 am UTC

Reviewer: Ana

Is the only advantage of using arrays in PL/SQL reducing the switch time between PLSQL and SQL? Or is there another advantage? Could you give an example?
Thank you,
Ana

Tom Kyte

Followup  

November 13, 2005 - 10:44 am UTC

No, not at all.

The advantage of BULK COLLECT (which just happens to use arrays) is to reduce the number of switches between SQL and PLSQL.

Arrays, as a general purpose programming tool, of course have many other uses outside of this - as arrays themselves. A programming language without arrays would not be much of a programming language.


Use arrays as you would in Java, C, VB, whatever.


BULK COLLECT is not to be confused with "the only way to use arrays in plsql"
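For example, an associative array used purely as an in-memory lookup structure - no BULK COLLECT involved (a minimal sketch; the names and values are made up):

```sql
declare
    -- a hash-map-like structure, as you would use in Java or C
    type cnt_tab is table of pls_integer index by varchar2(30);
    l_cnt cnt_tab;
    l_key varchar2(30);
begin
    l_cnt('SALES')    := 10;
    l_cnt('RESEARCH') := 20;

    -- walk the keys in sorted order
    l_key := l_cnt.first;
    while l_key is not null loop
        dbms_output.put_line( l_key || ' => ' || l_cnt(l_key) );
        l_key := l_cnt.next(l_key);
    end loop;
end;
/
```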

Continue

November 13, 2005 - 10:55 am UTC

Reviewer: Ana

In my question i meant using an array that contains data from
database tables.
Does it make sense to use an array in order to reduce the i/o reading of data? Please explain.
Thank you,
Ana

Tom Kyte

Followup  

November 13, 2005 - 12:39 pm UTC

we are going in circles.


It makes sense to use BULK COLLECT in order to

a) reduce the context switch
b) potentially reduce the IO performed by the query, due to visiting blocks less often

ops$tkyte@ORA10GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> declare
  2          cursor c1 is select * from all_objects slow_by_slow;
  3          cursor c2 is select * from all_objects bulk_collect;
  4
  5          type array is table of c1%rowtype index by binary_integer;
  6          l_array array;
  7          l_row   c1%rowtype;
  8  begin
  9          open c1;
 10          loop
 11                  fetch c1 into l_row;
 12                  exit when c1%notfound;
 13                  -- process row here --
 14          end loop;
 15          close c1;
 16
 17          open c2;
 18          loop
 19                  fetch c2 bulk collect into l_array limit 100;
 20                  -- process rows here --
 21                  exit when c2%notfound;
 22          end loop;
 23          close c2;
 24  end;
 25  /

PL/SQL procedure successfully completed.




SELECT * FROM ALL_OBJECTS SLOW_BY_SLOW

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.04          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    49972      2.57       2.42          0     115985          0       49971
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    49974      2.62       2.46          0     115985          0       49971


SELECT * FROM ALL_OBJECTS BULK_COLLECT


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.03          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      500      1.71       1.66          0      65293          0       49971
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      502      1.74       1.69          0      65293          0       49971



speaks for itself :)
 

object type description

March 31, 2006 - 7:16 am UTC

Reviewer: A reader

Dear Tom,

I have an object type with the following description

16:40:43> desc TYP_CSC_CONDITION
 Name                            Null?    Type
 ------------------------------- -------- ----
 ISCHANGED                                VARCHAR2(1)
 ADMIN_CONDITION                          TYP_CSC_EXTERNALCODE

where TYP_CSC_EXTERNALCODE definition is

16:40:51 > desc TYP_CSC_EXTERNALCODE
 Name                            Null?    Type
 ------------------------------- -------- ----
 ISCHANGED                                VARCHAR2(1)
 COD_VAL                                  VARCHAR2(6)
 EXTERNALCOD                              VARCHAR2(4)
 FAG_ACT                                  VARCHAR2(1)
 FAG_CSI                                  VARCHAR2(1)

is there any way to have the description of the TYP_CSC_CONDITION given as follows

 ISCHANGED                                VARCHAR2(1)
 ADMIN_CONDITION                          TYP_CSC_EXTERNALCODE
   ISCHANGED                              VARCHAR2(1)
   COD_VAL                                VARCHAR2(6)
   EXTERNALCOD                            VARCHAR2(4)
   FAG_ACT                                VARCHAR2(1)
   FAG_CSI                                VARCHAR2(1)

I heard that we can do this kind of description using the Designer 2000 SDD_ELEMENTS tables.

Have you any idea?

Thanks in advance


Tom Kyte

Followup  

March 31, 2006 - 12:25 pm UTC

try this in sqlplus:

SQL> set desc depth all
 

Back To Collections

April 14, 2006 - 4:30 pm UTC

Reviewer: Michael Smith from Austin, Texas USA

I have the following:

drop type tab_col;
create type tab_col as object
(tab varchar2(30), col varchar2(30));
/
drop type v_tab_col;
create type v_tab_col as varray(3) of tab_col;
/
drop table test;
create table test
(target_table varchar2(30),
target_column varchar2(30),
loc v_tab_col);
--
insert into test values( 'DEMO_DW1', 'AGE_0_5', v_tab_col(tab_col('ABS','P0TO42K'),tab_col('ABS','P0_4CY'), tab_col('ABS','P0_45Y')));
commit;
--
DECLARE
    it       v_tab_col;
    sql_stmt tab_col;
    --
    cursor x1 is
        select *
          from test  -- was "zork"; the table created above is TEST
         where target_table = 'DEMO_DW1';
    --
BEGIN
    for x in x1 LOOP
        it := x.loc;
        dbms_output.put_line( '.** Table = ' || x.target_table);
        dbms_output.put_line( '. Column = ' || x.target_column);
        for i in 1..3 LOOP
            dbms_output.put_line( '. Column A = ' || x.loc(i).tab || ' ' || x.loc(i).col);
            sql_stmt := x.loc(i);
            dbms_output.put_line( '. Column B = ' || sql_stmt.tab || ' ' || sql_stmt.col);
            dbms_output.put_line( '. Column C = ' || it(i).tab || ' ' || it(i).col);
        end LOOP;
    end LOOP;
END;
/

But I am having difficulty seeing an individual element from column loc via sql. Here is what I have tried:

12:29:17 MAPINFO:REGDEV:HULK> select loc from test;

LOC(TAB, COL)
---------------------------------------------------------------------------------------------------------------------------
V_TAB_COL(TAB_COL('ABS', 'P0TO42K'), TAB_COL('ABS', 'P0_4CY'), TAB_COL('ABS', 'P0_45Y'))

12:29:47 MAPINFO:REGDEV:HULK> select loc.tab from test;
select loc.tab from test
*
ERROR at line 1:
ORA-00904: "LOC"."TAB": invalid identifier


12:30:09 MAPINFO:REGDEV:HULK> select loc(1).tab from test;
select loc(1).tab from test
*
ERROR at line 1:
ORA-00904: "LOC": invalid identifier


12:40:35 MAPINFO:REGDEV:HULK> select loc.tab(1) from test;
select loc.tab(1) from test
*
ERROR at line 1:
ORA-00904: "LOC"."TAB": invalid identifier


14:38:49 MAPINFO:REGDEV:HULK> ed
Wrote file afiedt.buf

1 select target_table, target_column, tloc.tab
2 from test t,
3* table(t.loc) tloc
14:39:16 MAPINFO:REGDEV:HULK> /

TARGET_TABLE TARGET_COLUMN TAB
------------------------------ ------------------------------ ------------------------------
DEMO_DW1 AGE_0_5 ABS
DEMO_DW1 AGE_0_5 ABS
DEMO_DW1 AGE_0_5 ABS

14:39:17 MAPINFO:REGDEV:HULK> ed
Wrote file afiedt.buf

1 select target_table, target_column, tloc.tab
2 from test t,
3* table(t.loc(1)) tloc
14:39:43 MAPINFO:REGDEV:HULK> /
table(t.loc(1)) tloc
*
ERROR at line 3:
ORA-00904: "T"."LOC": invalid identifier


14:44:55 MAPINFO:REGDEV:HULK>

Is there any way via sql to see an individual element, i.e. loc(1), loc(3) etc???

Tom Kyte

Followup  

April 14, 2006 - 4:52 pm UTC

ops$tkyte@ORA10GR2> select target_table, target_column, tab, col
  2    from test t, table(t.loc)
  3  /

TARGET_TABLE                   TARGET_COLUMN
------------------------------ ------------------------------
TAB                            COL
------------------------------ ------------------------------
DEMO_DW1                       AGE_0_5
ABS                            P0TO42K

DEMO_DW1                       AGE_0_5
ABS                            P0_4CY

DEMO_DW1                       AGE_0_5
ABS                            P0_45Y
 

More on Collections

April 14, 2006 - 5:09 pm UTC

Reviewer: Michael Smith from Austin, Texas USA

It seems I was not clear in what I was asking for. Is there any way within SQL to select just one of the loc pairs? I would like to be able to select just loc(1) and get tab and col for that single entry only.

Thanks for your help, Tom, it is much appreciated.

Tom Kyte

Followup  

April 15, 2006 - 12:53 pm UTC

if you want the "i'th element from a collection", you would do that in the client that retrieved the collection.

In SQL - they can be unnested - but unless and until you add an "order by", there is no such thing as the "i'th element"

If I can make an assumption about the primary key of this table (as I did below), we can get "one element per key from the varray", but I would not feel safe at all in saying that it is element (1) of the varray.

SQL is all about sets.


ops$tkyte@ORA10GR2> create table test
  2    (target_table    varchar2(30),
  3     target_column   varchar2(30),
  4     loc             v_tab_col,
  5     primary key(target_table,target_column) );

Table created.

ops$tkyte@ORA10GR2> --
ops$tkyte@ORA10GR2> insert into test values( 'DEMO_DW1',  'AGE_0_5',
  2  v_tab_col(tab_col('ABS','P0TO42K'),tab_col('ABS','P0_4CY'),
  3  tab_col('ABS','P0_45Y')));

1 row created.

ops$tkyte@ORA10GR2> insert into test values( 'DEMO_DW1',  'AGE_0_6',
  2  v_tab_col(tab_col('ABS','P0TO42K'),tab_col('ABS','P0_4CY'),
  3  tab_col('ABS','P0_45Y')));

1 row created.

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2> --
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select target_table, target_column, tab, col
  2    from test t, table(t.loc)
  3   where target_table = 'DEMO_DW1'
  4     and target_column = 'AGE_0_5'
  5     and rownum = 1;

TARGET_TABLE                   TARGET_COLUMN
------------------------------ ------------------------------
TAB                            COL
------------------------------ ------------------------------
DEMO_DW1                       AGE_0_5
ABS                            P0TO42K


ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select *
  2    from (
  3  select target_table, target_column, tab, col,
  4         row_number() over (partition by target_table, target_column
  5                                order by rownum) rn
  6    from test t, table(t.loc)
  7         )
  8   where rn = 1
  9  /

TARGET_TABLE                   TARGET_COLUMN
------------------------------ ------------------------------
TAB                            COL                                    RN
------------------------------ ------------------------------ ----------
DEMO_DW1                       AGE_0_5
ABS                            P0TO42K                                 1

DEMO_DW1                       AGE_0_6
ABS                            P0TO42K                                 1


 

April 15, 2006 - 1:11 pm UTC

Reviewer: Michel Cadot from France


collection object

September 08, 2006 - 11:11 am UTC

Reviewer: Saradha Bavanandam from NJ

Hi Tom,

I created an object type for the emp table (with 7/8 columns). Then I created a table with deptno, dname, loc, and the emp object.
I want to insert records into this table by selecting from the dept table and the emp table, for example:
deptno dname loc emp
10 sales NJ emp_tab_typ(1,'sdf',...)
emp_tab_typ(2, 'sdf',...)

20 ... -- .......
.......
........
Basically I want to merge these two tables into one.
I am able to insert the data if I hardcode the values, but couldn't do it by fetching from the tables.
Can you help me?

Tom Kyte

Followup  

September 09, 2006 - 11:54 am UTC

Little excerpt from Expert One on One Oracle:

<quote>

The Types
The types used in this example are borrowed from Chapter 6, with the addition of the DEPT_TYPE. They are:


scott@TKYTE816> create or replace type emp_type
  2  as object
  3  (empno    number(4),
  4   ename    varchar2(10),
  5   job      varchar2(9),
  6   mgr      number(4),
  7   hiredate date,
  8   sal      number(7, 2),
  9   comm     number(7, 2)
 10  );
 11  /
Type created.

scott@TKYTE816>
scott@TKYTE816> create or replace type emp_tab_type
  2  as table of emp_type
  3  /
Type created.

scott@TKYTE816> create or replace type dept_type
  2  as object
  3  ( deptno number(2),
  4    dname  varchar2(14),
  5    loc    varchar2(13),
  6    emps   emp_tab_type
  7  )
  8  /
Type created.

Once again, a department is modeled as being an object with a department number, a name, a location, and employees.
The O-R View
From the above type definitions, it is very easy to synthesize our data for this view from the existing relational data. It would look like:

scott@TKYTE816> create or replace view dept_or
  2  of dept_type
  3  with object identifier(deptno)
  4  as
  5  select deptno, dname, loc,
  6         cast ( multiset (
  7           select empno, ename, job, mgr, hiredate, sal, comm
  8             from emp
  9            where emp.deptno = dept.deptno )
 10         as emp_tab_type )
 11    from dept
 12  /
View created.
</quote>


the view definition has the query you seem interested in.
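If the goal really is to populate a physical table rather than a view, the same cast(multiset()) query can drive an INSERT ... SELECT - a sketch assuming a target table shaped like the one described above (the table and storage names here are made up):

```sql
-- hypothetical target table; the nested table clause is required
-- for a real table (as opposed to a view) with a collection column
create table dept_with_emps
( deptno number(2),
  dname  varchar2(14),
  loc    varchar2(13),
  emps   emp_tab_type )
nested table emps store as dept_with_emps_emps;

insert into dept_with_emps
select deptno, dname, loc,
       cast( multiset( select empno, ename, job, mgr, hiredate, sal, comm
                         from emp
                        where emp.deptno = dept.deptno ) as emp_tab_type )
  from dept;
```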

set desc depth all not working

December 01, 2006 - 7:33 am UTC

Reviewer: A reader

Sir,

I have tried on sqlplus (oracle 8.1.7.4) the following command

13:02:05 MMM> set desc depth all
unknown SET option "desc"

13:27:51 MMM> set describe depth all
unknown SET option "describe"

could you please let me know the exact command

Thanks a lot.



Tom Kyte

Followup  

December 01, 2006 - 7:47 am UTC

the exact command is as you typed it.



ops$tkyte@ORA817DEV> set desc depth all
ops$tkyte@ORA817DEV>

perhaps you are using an OLDER sqlplus even, the version of the database - not relevant for "set" commands (sqlplus does those)

so, get using some software written this century. 

Can we return multiple resultsets from a REF CURSOR

May 14, 2007 - 12:37 pm UTC

Reviewer: Deepak from CA

I am looking for help on PL/SQL as to how to return multiple "sets" of recordsets from an Oracle stored proc, using a single ref cursor out parameter.

Here's the pseudo code for what the PL/SQL should do -

****** start SP pseudo code ********
define stored_proc_get_employees_by_dept (IN dept_array, OUT ref_cursor)
BEGIN proc

FOR cur_row IN 1 .. dept_array.count LOOP
select * from employees where dept = dept_array(cur_row)
END LOOP

END proc;

****** end SP pseudo code ********

let's say dept_array contains ('FINANCE', 'HR', 'SALES', 'MARKETING') etc....


Tom Kyte

Followup  

May 14, 2007 - 4:59 pm UTC

you only want to return a single result set then.

ops$tkyte%ORA10GR2> create table dept as select * from scott.dept;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace type myTable as table of varchar2(30)
  2  /

Type created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace procedure get_emp_by_dept( p_array in myTable, p_result in out sys_refcursor )
  2  as
  3  begin
  4          open p_result for
  5          select ename, dname
  6            from emp, dept
  7           where emp.deptno = dept.deptno
  8             and dept.dname in (select * from table( cast( p_array as myTable) ))
  9           order by dname;
 10  end;
 11  /

Procedure created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable x refcursor
ops$tkyte%ORA10GR2> exec get_emp_by_dept( myTable( 'SALES', 'RESEARCH' ), :x )

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> print x

ENAME      DNAME
---------- --------------
SMITH      RESEARCH
ADAMS      RESEARCH
JONES      RESEARCH
SCOTT      RESEARCH
FORD       RESEARCH
JAMES      SALES
TURNER     SALES
MARTIN     SALES
WARD       SALES
ALLEN      SALES
BLAKE      SALES

11 rows selected.

July 03, 2007 - 3:05 am UTC

Reviewer: A reader

Which will be more efficient:

your reply on September 27, 2001 - 8am US/Eastern using cast and multiset,

OR

your reply on May 14, 2007 - 4pm US/Eastern?

Which method should we adopt to send data to the front end in such cases?
1 - Repeating the department name with its related information (your reply of May 14, 2007 - 4pm US/Eastern).
2 - Returning the department name once and sending the other information as an object (your reply of September 27, 2001 - 8am US/Eastern).
Tom Kyte

Followup  

July 03, 2007 - 10:34 am UTC

they are entirely different questions and entirely different results.

one question (2001) was "show me a departement and for each department - an array of this and that"

the other question (2007) was "i have a list of things, I want a report based on that list"


they are apples and toaster ovens, they cannot be compared, they do entirely different things.

Collections

July 19, 2007 - 9:42 pm UTC

Reviewer: Mini from USA

Based on your earlier query:

select dname,
  2         cast( multiset( select ename, empno
  3                           from emp
  4                          where emp.deptno = dept.deptno ) as empTblType ) emps,
  5         cast( multiset( select name, start_date, end_date
  6                           from proj
  7                          where proj.deptno = dept.deptno ) as projTblType ) projs
  8    from dept
  9  /
DNAME EMPS(ENAME, EMPNO) PROJS(NAME, START_DATE, END_DA
-------------- ------------------------------ ------------------------------
ACCOUNTING EMPTBLTYPE(EMPTYPE('CLARK', 77 PROJTBLTYPE(PROJTYPE('DBSNMP',
82), EMPTYPE('KING', 7839), EM '28-AUG-01', '04-SEP-01'), PR
PTYPE('MILLER', 7934)) OJTYPE('OPS$TKYTE', '24-SEP-01
', '01-OCT-01'))

RESEARCH EMPTBLTYPE(EMPTYPE('A', 7369), PROJTBLTYPE(PROJTYPE('SYS', '2
EMPTYPE('JONES', 7566), EMPTY 8-AUG-01', '04-SEP-01'), PROJT
PE('SCOTT', 7788), EMPTYPE('AD YPE('OPS$ORA817', '28-AUG-01',
AMS', 7876), EMPTYPE('FORD', 7 '04-SEP-01'), PROJTYPE('OPS$C
902)) LBECK', '09-SEP-01', '16-SEP-0
1'))

SALES EMPTBLTYPE(EMPTYPE('ALLEN', 74 PROJTBLTYPE(PROJTYPE('SYSTEM',
99), EMPTYPE('WARD', 7521), EM '28-AUG-01', '04-SEP-01'), PR
PTYPE('MARTIN', 7654), EMPTYPE OJTYPE('SCOTT', '28-AUG-01', '
('BLAKE', 7698), EMPTYPE('TURN 04-SEP-01'), PROJTYPE('A', '20
ER', 7844), EMPTYPE('JAMES', 7 -SEP-01', '27-SEP-01'))
900))

OPERATIONS EMPTBLTYPE() PROJTBLTYPE(PROJTYPE('OUTLN',
'28-AUG-01', '04-SEP-01'), PRO
JTYPE('CTXSYS', '28-AUG-01', '
04-SEP-01'))
I am trying to put the output of this select into a tab-separated file. The recordset of the EmpType table should be pipe separated, so


EMPTBLTYPE(EMPTYPE('CLARK', 7782), EMPTYPE('KING', 7839),

shows up as 'CLARK',7782 || 'KING', 7839

Thanks

Can collections be used "IN"

August 07, 2007 - 12:05 pm UTC

Reviewer: Alex from BR

Is it possible to use a collection, like ('A','B','C'), in the "where" clause?

Select * into v_dept
from employee E
where E.dept IN ( collection )

Today I am constructing dynamic SQL inside my procedure and executing it with EXECUTE IMMEDIATE, but as syntax highlighting and compilation won't show errors until runtime, I would like a way to build the statement in a less error-prone way.

thanks
Tom Kyte

Followup  

August 07, 2007 - 2:09 pm UTC

where col in (select * from TABLE( collection ) )


http://asktom.oracle.com/Misc/varying-in-lists.html
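
For reference, a minimal sketch of that approach (the type name str_tab and the employee table are assumptions, not from the question). The collection type must be created at the SQL level, not declared inside PL/SQL, for TABLE() to see it:

```sql
create or replace type str_tab as table of varchar2(30);
/

declare
    l_depts str_tab := str_tab( 'A', 'B', 'C' );
    l_cnt   number;
begin
    -- COLUMN_VALUE is the pseudocolumn for un-nesting a scalar collection
    select count(*)
      into l_cnt
      from employee e
     where e.dept in ( select column_value
                         from TABLE( l_depts ) );
    dbms_output.put_line( l_cnt );
end;
/
```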

Multiset

October 03, 2007 - 10:32 am UTC

Reviewer: Nantha from USA

Hi Tom,
I need to compare one set of records (set A) to another set of records (set B) in the same table, on four columns (ssn, dob, firstname, gender).
I just want to get the records matched between A and B, only in A, only in B, and unmatched.
We can do this using regular set operators like MINUS, INTERSECT etc. If I do this using the MULTISET operators after bulk collecting the data into a table object, will that be faster than the regular set operators in SQL? Or please let me know if there is any better way to do this.
Thank you.
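
(No followup was recorded here, so the following is only a sketch, with assumed table names set_a and set_b.) Bulk collecting into collections just to apply MULTISET operators moves all the data into PL/SQL memory first; plain SQL can usually classify the rows in a single pass, for example with a full outer join:

```sql
select case
         when a.ssn is not null and b.ssn is not null then 'MATCHED'
         when a.ssn is not null                       then 'ONLY IN A'
         else                                              'ONLY IN B'
       end status,
       nvl(a.ssn, b.ssn)             ssn,
       nvl(a.firstname, b.firstname) firstname
  from set_a a
  full outer join set_b b
    on  a.ssn       = b.ssn
   and  a.dob       = b.dob
   and  a.firstname = b.firstname
   and  a.gender    = b.gender;
```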


collection multiset union

April 08, 2008 - 8:27 am UTC

Reviewer: Sujit from PA USA

I am trying to do a MULTISET UNION on collections; below are two scenarios.
In the first one I get an error - is it an Oracle bug?

The second one works fine - is there any concern in doing this?
-----------------------------------------------------------
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

/********************First One****************************/
SQL> create or replace type t001 as object
  2  (
  3  a number(9),
  4  b number(9)
  5  );
  6  /

Type created.

SQL> create or replace type t001_array as table of t001;
  2  /

Type created.

SQL> declare
  2  l1  t001_array:=t001_array();
  3  l2  t001_array:=t001_array();
  4  l3  t001_array:=t001_array();
  5  begin
  6  l3:=l1 MULTISET UNION l2;
  7  end;
  8  /
l3:=l1 MULTISET UNION l2;
*
ERROR at line 6:
ORA-06550: line 6, column 1:
PLS-00801: internal error [*** ASSERT at file pdw4.c, line 2076; Type
0x0764A554 has no MAP method.; _anon__213DCEF0__AB[6, 1]]


/********************Second One*************************/

SQL> create or replace type t002 as object
  2  (
  3  a number(9),
  4  b number(9),
  5  MAP MEMBER FUNCTION test RETURN number
  6  );
  7  /

Type created.

SQL>    CREATE OR REPLACE TYPE BODY t002
  2      AS
  3        MAP MEMBER FUNCTION test
  4            RETURN number
  5        IS
  6        BEGIN
  7            RETURN 0;
  8        END test;
  9      END;
 10      /

Type body created.

SQL>
SQL> create or replace type t002_array as table of t002;
  2  /

Type created.

SQL> declare
  2  l1  t002_array:=t002_array();
  3  l2  t002_array:=t002_array();
  4  l3  t002_array:=t002_array();
  5  begin
  6  l3:=l1 MULTISET UNION l2;
  7  end;
  8  /

PL/SQL procedure successfully completed.


Tom Kyte

Followup  

April 09, 2008 - 2:13 pm UTC

That internal error should be reported as a regular ORA-xxxx error; as such it is a bug - the bug is in the error message, though, not in the overall behaviour.

insert records into table

January 07, 2009 - 3:18 pm UTC

Reviewer: Branka Bowman from VA, USA

I want to write a procedure that will insert one record into one table, and more than one record into a child table.
Please let me know if this is the best approach. What if I also want to insert records into a child table of sams_user_poc?

create or replace TYPE user_poc_type AS OBJECT (
USER_POC_ID NUMBER(9),
USER_ID NUMBER(9),
ADDRESS_LINE1 VARCHAR2(100),
ADDRESS_LINE2 VARCHAR2(100));

create or replace type user_poc_type_table as table of user_poc_type;


create or replace PROCEDURE INSERT_USER (p_user_id IN number, p_username IN VARCHAR2, p_user_poc_type_table in user_poc_type_table
) AS
BEGIN
insert into sams_user (user_id, username)
values (p_user_id, p_username);
for i in 1 .. p_user_poc_type_table.count
loop
Insert into sams_user_poc (user_id, ADDRESS_LINE1 ,ADDRESS_LINE2)
VALUES( p_user_id,p_user_poc_type_table(I).ADDRESS_LINE1, p_user_poc_type_table(I).ADDRESS_LINE2);
end loop;

END INSERT_USER;

Tom Kyte

Followup  

January 07, 2009 - 5:21 pm UTC


ops$tkyte%ORA10GR2> create or replace
  2  PROCEDURE INSERT_USER(
  3   p_user_id IN number,
  4   p_username IN VARCHAR2,
  5   p_user_poc_type_table in user_poc_type_table )
  6  AS
  7  BEGIN
  8    insert into sams_user (user_id, username) values (p_user_id, p_username);
  9    insert into sams_user_poc
 10    ( user_id, address_line1, address_line2 )
 11    select x.user_id, x.address_line1, x.address_line2
 12      from TABLE(p_user_poc_type_table) x;
 13  END INSERT_USER;
 14  /

Procedure created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec insert_user( 1, 'tom', user_poc_type_table( user_poc_type(2,3,'x','y'), user_poc_type(4,5,'a','b') ) );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select * from sams_user;

   USER_ID USERNAME
---------- ------------------------------
         1 tom

ops$tkyte%ORA10GR2> select * from sams_user_poc;

   USER_ID ADDRESS_LI ADDRESS_LI
---------- ---------- ----------
         3 x          y
         5 a          b


insert records into table

January 07, 2009 - 5:44 pm UTC

Reviewer: Branka Bowman from VA, USA

This is great. It is so elegant.
Thanks

insert records into table

January 08, 2009 - 10:31 am UTC

Reviewer: Branka Bowman from VA, USA

I added a nested table type inside the object type, and now I have a problem with it.

create or replace TYPE user_email_type AS OBJECT (
EMAIL_ID NUMBER(9),
EMAIL_TYPE VARCHAR2(1),
EMAIL_ADDRESS VARCHAR2(200),
USER_POC_ID NUMBER(9));

create or replace type user_email_type_table as table of user_email_type;

create or replace TYPE POC_TYPE AS OBJECT
(USER_POC_ID NUMBER(9),
ADDRESS_LINE1 VARCHAR2(100),
ADDRESS_LINE2 VARCHAR2(100),
email_type user_email_type_table
);

create or replace type POC_TYPE_table as table of POC_TYPE;

create or replace PROCEDURE INSERT_USER_email(
p_user_id IN number,
p_username IN VARCHAR2,
p_poc_type_table in poc_type_table )
AS
BEGIN
insert into sams_user (user_id, username,user_type) values (p_user_id, p_username,'A');

insert into sams_user_poc ( USER_POC_ID,user_id, address_line1, address_line2 )
select x.USER_POC_ID, p_user_id, x.address_line1, x.address_line2
from TABLE(p_poc_type_table) x;
insert into sams_user_poc_email (EMAIL_ID,EMAIL_TYPE,EMAIL_ADDRESS,USER_POC_ID)
select y.EMAIL_ID, 'E',y.EMAIL_ADDRESS,x.user_poc_id
from ??????;

END INSERT_USER_email;

Can you help me with this?

Thanks



Tom Kyte

Followup  

January 08, 2009 - 11:48 am UTC


ops$tkyte%ORA10GR2> create table t
  2  ( email_id number, email_type varchar2(1), email_address varchar2(20), user_poc_id number );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace procedure p
  2  ( p_data in poc_type_table )
  3  as
  4  begin
  5          insert into t (email_id,email_type,email_address,user_poc_id)
  6          select x.*
  7            from (select * from TABLE(p_data)) a, TABLE(a.email_type) X;
  8  end;
  9  /

Procedure created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> begin
  2     p( POC_TYPE_table(
  3           POC_TYPE( 1, 'a', 'b', user_email_type_table( user_email_type(100,'x','a1',200), user_email_type(300,'y','a2',400) ) ),
  4           POC_TYPE( 2, 'x', 'y', user_email_type_table( user_email_type(1000,'m','a3',2000), user_email_type(3000,'n','a4',4000) ) )
  5                  ) );
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select * from t;

  EMAIL_ID E EMAIL_ADDRESS        USER_POC_ID
---------- - -------------------- -----------
       100 x a1                           200
       300 y a2                           400
      1000 m a3                          2000
      3000 n a4                          4000




insert records into table

January 09, 2009 - 3:11 pm UTC

Reviewer: Branka Bowman from VA, USA

What if I did not pass user_poc_id as a parameter, but instead created it using a sequence?

desc POC_TYPE
Name Null? Type
----------------------------------------------------------------------- -------- ------------------
ADDRESS_LINE1 VARCHAR2(100)
ADDRESS_LINE2 VARCHAR2(100)
EMAIL EMAIL_TYPE_TABLE


desc EMAIL_TYPE_table
EMAIL_TYPE_table TABLE OF EMAIL_TYPE
Name Null? Type
----------------------------------------------------------------------- -------- ------------------
EMAIL_TYPE VARCHAR2(1)
EMAIL_ADDRESS VARCHAR2(200)




Begin
insert into sams_user (user_id,username)
values ( SAMS_USER_SEQ.nextval, p_username )
returning user_id into l_user_id;


insert into sams_user_poc ( user_poc_id, user_id, address_line1, address_line2
)
select SAMS_USER_SEQ.nextval, l_user_id, x.address_line1, x.address_line2
from TABLE(p_poc_type_table) x
;




insert into sams_user_poc_email (EMAIL_ID,EMAIL_TYPE,EMAIL_ADDRESS,USER_POC_ID
)
select SAMS_USER_SEQ.nextval,x.email_type, x.EMAIL_ADDRESS, ?????? poc_id
from (select * from TABLE(p_poc_type_table)) a, TABLE(a.email) X;

END;
Tom Kyte

Followup  

January 09, 2009 - 3:33 pm UTC

yes - what if?

What is your question? if you want them all to have the same value, use currval instead of nextval for the last two references.
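
A sketch of how that body could look with sequence-generated keys (names follow the question's code and p_username / p_poc_type_table are the procedure's parameters). Note that when CURRVAL and NEXTVAL appear in the same statement, Oracle returns the same newly generated value for both, so here the parent key is captured into a variable with RETURNING instead:

```sql
declare
    l_user_id sams_user.user_id%type;
begin
    insert into sams_user ( user_id, username )
    values ( sams_user_seq.nextval, p_username )
    returning user_id into l_user_id;

    -- each POC row gets its own new key; the captured parent key is reused
    insert into sams_user_poc ( user_poc_id, user_id, address_line1, address_line2 )
    select sams_user_seq.nextval, l_user_id, x.address_line1, x.address_line2
      from TABLE( p_poc_type_table ) x;
end;
```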

Multiset limited in where clause

April 30, 2009 - 12:10 pm UTC

Reviewer: Marcus Harris

Can we have the resultant value of the multiset in the where clause of the query?

Pseudo code:

select t1.col1, t1.col2,
       cast( multiset( select col3, col4 from t2 where pk = t1.pk ) as definedtable ) as nestedvalues
  from t1
 where nestedvalues is not null

I get an error stating nestedvalues is an invalid identifier.

I am trying to get only those records from t1 that have a collection of records from t2.
Tom Kyte

Followup  

April 30, 2009 - 1:20 pm UTC

ops$tkyte%ORA11GR1> select dummy, 5 x from dual where x is not null;
select dummy, 5 x from dual where x is not null
                                  *
ERROR at line 1:
ORA-00904: "X": invalid identifier



X is not in scope in the where clause, X doesn't exist yet.

ops$tkyte%ORA11GR1> select * from (select dummy, 5 x from dual) where x is not null;

D          X
- ----------
X          5
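
Applied to the original question, the same inline-view trick works. A sketch using the pseudo code's names; note also that CAST(MULTISET(...)) produces an empty collection rather than NULL when there are no matching rows, so CARDINALITY() is the safer test:

```sql
select *
  from ( select t1.col1, t1.col2,
                cast( multiset( select col3, col4
                                  from t2
                                 where t2.pk = t1.pk ) as definedtable ) nestedvalues
           from t1 )
 where cardinality( nestedvalues ) > 0;
```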


Bulk Collection Question

May 11, 2009 - 11:30 am UTC

Reviewer: Bob from London, UK

Hi Tom,

I have a question regarding a certain report I am developing. We have transactions (in a source table) when they get updated the following happens:

i) New Transaction (v.1) in TRN table
ii) Update transaction in TRN table - becomes v.2 in TRN table v.1 inserted in AUDIT table
iii) update transaction in TRN table - becomes v.3 in TRN table and v.2 is inserted in audit table

Result TRN table has v.3
AUDIT table has v.2 and v.1

Now the TRN table has about 200 fields in it.

What I have to do is write a report that does a count on number of times all the fields have changed. e.g
Compare TRN v.3 record (with all it's associated fields) with AUDIT v.2, and then AUDIT v.2 with AUDIT v.1 and so on and so forth.

I am comparing each field using SQL...but the problem is I'm doing 200 bits of SQL (comparing each field) for each transaction comparison. Is there a way to do this utilizing arrays for comparing fields. Here is my code so far:


set timing on
set serveroutput on size 1000000
DECLARE

l_sql LONG;
l_condition VARCHAR2(500);
l_result varchar2(1) default null;

l_max_version_no NUMBER;
l_min_version_no NUMBER;

TYPE x IS RECORD (trn_rowid urowid,
cob_dt date,
cres_sys_id number,
trn_cd varchar2(100),
trn_src_cd varchar2(40) ,
process_id number);

TYPE x_arr IS TABLE OF x ;

--TYPE y IS TABLE OF VARCHAR2(2000)
--INDEX BY BINARY_INTEGER;

TYPE y IS RECORD (col_name varchar2(30),
where_condition varchar2(1000) );

TYPE y_arr IS TABLE OF y;

TRN x_arr;
TRN_COLS y_arr;

--TRN_COLS y;

PROCEDURE sp_update_summary_table(p_process_id IN NUMBER, p_column_name IN VARCHAR2) IS
l_string LONG := '';
v_method_name VARCHAR2(50) := 'update_summary_table()';
BEGIN
l_string := 'UPDATE adjustments_summary SET '||p_column_name||' = '||p_column_name||' + 1 ';
l_string := l_string || ' WHERE process_id = :x';

EXECUTE IMMEDIATE l_string USING p_process_id;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('ERROR: '|| v_method_name ||': '|| SQLERRM);
RAISE;
END sp_update_summary_table;

BEGIN

SELECT a.rowid,
a.cob_dt,
a.cres_sys_id,
a.trn_cd,
a.trn_src_cd,
c.process_id
BULK COLLECT
INTO
trn
FROM
src_trn_det a,
run_header_max b,
process_ctl c
WHERE
a.cob_dt = b.cob_dt
AND a.run_id = b.run_id
AND b.process_id = c.process_id
-- AND a.cres_sys_id = 1
-- AND a.cob_dt = '28-FEB-2009'
-- AND a.trn_src_cd = 'GDP_STG_WSS'
-- AND a.trn_cd = '106431032007'
AND trunc(a.update_datetime) between '02-MAR-2009' and '09-MAR-2009'
AND EXISTS
(SELECT 'x'
FROM SRC_TRN_DET_AUDIT b
WHERE a.version_no > 1
AND a.cres_sys_id = b.cres_sys_id
AND a.cob_dt = b.cob_dt
AND a.trn_src_cd = b.trn_src_cd
AND a.trn_cd = b.trn_cd);

SELECT
COLUMN_NAME,
CASE WHEN DATA_TYPE IN ('NUMBER') THEN
'NVL(a.' || column_name ||',99999) = NVL(b.' || column_name ||',99999)'
WHEN DATA_TYPE LIKE '%CHAR%' THEN
'NVL(a.' || column_name ||',''XYZ'') = NVL(b.' || column_name ||',''XYZ'')'
WHEN DATA_TYPE = 'DATE' THEN
'NVL(a.' || column_name ||',''01-JAN-1001'') = NVL(b.' || column_name ||',''01-JAN-1001'')'
END CASE
BULK COLLECT
INTO
trn_cols
FROM
user_tab_columns
WHERE
table_name = 'SRC_TRN_DET'
ORDER BY column_id;


FOR i IN trn.FIRST..trn.LAST LOOP
IF trn.EXISTS(i) THEN

select max(version_no),
min(version_no)
into l_max_version_no,
l_min_version_no
from SRC_TRN_DET_AUDIT a
where a.cob_dt = trn(i).cob_dt
and a.cres_sys_id = trn(i).cres_sys_id
and a.trn_cd = trn(i).trn_cd
and a.trn_src_cd = trn(i).trn_src_cd;

--dbms_output.put_line('INFO: Process ID: ' || trn(i).process_id ||', '||
-- 'MAX Version: '||l_max_version_no||
-- ', MIN Version:'||l_min_version_no);

FOR j IN REVERSE l_min_version_no..l_max_version_no loop
--dbms_output.put_line('Value of j: ' || j);

FOR k IN trn_cols.FIRST..trn_cols.LAST LOOP
IF j = l_max_version_no THEN

BEGIN

l_result := '';
l_sql := '';
l_condition := trn_cols(k).where_condition;

l_sql := ' SELECT ''p'' FROM SRC_TRN_DET_AUDIT a ';
l_sql := l_sql || ' WHERE a.cob_dt = :a ';
l_sql := l_sql || ' AND a.cres_sys_id = :b ';
l_sql := l_sql || ' AND a.trn_cd = :c ';
l_sql := l_sql || ' AND a.trn_src_cd = :d ';
l_sql := l_sql || ' AND a.version_no = :e ';
l_sql := l_sql || ' AND EXISTS (SELECT null ';
l_sql := l_sql || ' FROM src_trn_det b ';
l_sql := l_sql || ' WHERE b.rowid = :f ';
l_sql := l_sql || ' AND b.cob_dt = a.cob_dt';
l_sql := l_sql || ' AND b.cres_sys_id = a.cres_sys_id';
l_sql := l_sql || ' AND b.trn_cd = a.trn_cd ';
l_sql := l_sql || ' AND b.trn_src_cd = a.trn_src_cd ';
l_sql := l_sql || ' AND '|| l_condition ||')';

EXECUTE IMMEDIATE l_sql
INTO
l_result
USING
trn(i).cob_dt,
trn(i).cres_sys_id,
trn(i).trn_cd,
trn(i).trn_src_cd,
j,
trn(i).trn_rowid;

EXCEPTION
WHEN NO_DATA_FOUND THEN
sp_update_summary_table(trn(i).process_id, trn_cols(k).col_name);
--dbms_output.put_line('INFO: Version No: '||j||','||l_condition ||': no match found');
END;

ELSIF j > l_min_version_no THEN

BEGIN

l_result := '';
l_sql := '';
l_condition := trn_cols(k).where_condition;

l_sql := ' SELECT ''p'' FROM SRC_TRN_DET_AUDIT a ';
l_sql := l_sql || ' WHERE a.cob_dt = :a ';
l_sql := l_sql || ' AND a.cres_sys_id = :b ';
l_sql := l_sql || ' AND a.trn_cd = :c ';
l_sql := l_sql || ' AND a.trn_src_cd = :d ';
l_sql := l_sql || ' AND a.version_no = :e';
l_sql := l_sql || ' AND EXISTS (SELECT null ';
l_sql := l_sql || ' FROM SRC_TRN_DET_AUDIT b ';
l_sql := l_sql || ' WHERE b.cob_dt = :f ';
l_sql := l_sql || ' AND b.cres_sys_id = :g ';
l_sql := l_sql || ' AND b.trn_cd = :h ';
l_sql := l_sql || ' AND b.trn_src_cd = :i ';
l_sql := l_sql || ' AND b.version_no = :j';
l_sql := l_sql || ' AND '|| l_condition ||')';

EXECUTE IMMEDIATE l_sql
INTO
l_result
USING
trn(i).cob_dt,
trn(i).cres_sys_id,
trn(i).trn_cd,
trn(i).trn_src_cd,
j,
trn(i).cob_dt,
trn(i).cres_sys_id,
trn(i).trn_cd,
trn(i).trn_src_cd,
j-1;

EXCEPTION
WHEN NO_DATA_FOUND THEN
sp_update_summary_table(trn(i).process_id, trn_cols(k).col_name);
--dbms_output.put_line('INFO: Version No: '||j||','||l_condition ||': no match found');
END;

END IF;
END LOOP;
END LOOP;
END IF;
END LOOP;
end;
/

After you BULK COLLECT records - is it possible to traverse the fields in a record using a counter instead of naming each field ...

Thanks very much
Tom Kyte

Followup  

May 11, 2009 - 6:58 pm UTC

...
EXCEPTION
 WHEN OTHERS THEN
     dbms_output.put_line('ERROR: '|| v_method_name ||': '|| SQLERRM);
     RAISE;
END sp_update_summary_table;

....


why? why do that?


... I am comparing each field using SQL...but the problem is I'm doing 200 bits of
SQL (comparing each field) for each transaction comparison. Is there a way to
do this utilizing arrays for comparing fields. ...



You'd have to use dbms_sql to be able to access the "i'th column" of a result set (a row at a time)

why not write a small script in plsql to read the dictionary and generate a SINGLE sql statement that does the 200 compares all at once? Then you can put that sql statement into your real code.

Bulk collection

May 11, 2009 - 7:02 pm UTC

Reviewer: Bob from London, UK

I am not sure how you would do the 200 comparisons in a SINGLE SQL statement from the data dictionary - please can you show me by way of an example?
Tom Kyte

Followup  

May 13, 2009 - 10:02 am UTC

tell you what...

you write the SQL for 2 columns - JUST 2 columns...

and then I'll tell you how to do it for 3
and then 4
and then N

You would use the dictionary to GENERATE a static sql statement that does the work for you.

More detail on report

May 11, 2009 - 7:40 pm UTC

Reviewer: JerryQ from Killaloe, Ireland

Hi Bob,

Your statement:
"What I have to do is write a report that does a count on number of times all the fields have
changed."
isn't very clear.
What should the report display:
date / column_name / no_of_changes
OR
trans_id / no_of_changes
OR just
No_of_changes?

Jerry

Bulk collection

May 12, 2009 - 5:37 am UTC

Reviewer: Bob from London

Hi - we are trying to monitor the number of times a field is changed in the table SRC_TRN_DET over a particular time period. I'm going to try and follow Tom's suggestion of one piece of SQL to monitor which field has changed ...

In PL/SQL can you dynamically refer to a variable e.g.

Can you build a string..e.g.

'WHERE '||trn(i)||'.'||x|| where x is a variable containing a column name?
Tom Kyte

Followup  

May 13, 2009 - 10:30 am UTC

give me your query that does TWO columns....

with hard coded column names.

Performance

May 12, 2009 - 7:38 am UTC

Reviewer: JerryQ from Killaloe, Ireland

Hi Bob,

Maybe I'm reading your code incorrectly, but it looks like you're running a select for every transaction and every column. If you're dealing with a large amount of records, this is going to add up to a hell of a lot of processing very quickly.
My own background is in large billing systems, and for nightly batch processing, we'd find large performance improvements by just opening cursors on transaction/master tables once and use ORDER BY; then we'd read thru each cursor, until the ids match.

In your case, assuming trans_id is the unique identifier of a transaction and that the record has a last_changed_dt column:

csr_a SELECT * FROM master WHERE last_change_dt > ... ORDER BY trans_id;
csr_b SELECT * FROM aud1 WHERE last_change_dt > ... ORDER BY trans_id;
csr_c SELECT * FROM aud2 WHERE last_change_dt > ... ORDER BY trans_id, tran_dt;

Then for each entry in csr_a, process thru entries in csr_b and csr_c.

For counting which columns have changed, keep an associative array -
TYPE typTabVC30 IS TABLE OF PLS_INTEGER INDEX BY VARCHAR2(30)
aChangeCnts typTabVC30;

You're reading in the full record from each table;
so let's say you want to compare recA to recB, then recB to recC ... and possibly recC2, recC3? Assuming the tables are exactly the same in layout (column names and types), and you don't want to write out all 200+ IF statements, there are a few ways to iterate dynamically through the recordsets - possibly using the sys.anydata type.
What may be easier is to dynamically create a procedure, function (or an anonymous PL/SQL block) within your code using USER_TAB_COLUMNS

e.g.
vTempProc := 'create or replace procedure pcXX(aCnts IN OUT NOCOPY rptPkg.typTabVC30,
recA IN NOCOPY aud_tab%ROWTYPE,
recB IN NOCOPY aud_tab%ROWTYPE) IS
BEGIN
';
open csr_cols; (user_tab_columns)
LOOP
fetch csr_cols into rec_cols;
exit when csr_cols%NOTFOUND;
vTempProc := vTempProc||
'IF recA.'||csr_cols.column_name||' = recB.'||csr_cols.column_name||' THEN NULL;
ELSIF recA.'||csr_cols.column_name||' IS NULL AND recB.'||csr_cols.column_name||' IS NULL
THEN NULL;
ELSIF aCnts.EXISTS('''||csr_cols.column_name||''') THEN
aCnts('''||csr_cols.column_name||''') := aCnts('''||csr_cols.column_name||''') + 1;
ELSE
aCnts('''||csr_cols.column_name||''') := 1; -- first observed change
END IF;';
END LOOP;
vTempProc := vTempProc||' END;';

EXECUTE IMMEDIATE vTempProc;

open cursors
begin fetching
identify 2 records to compare
Call procedure(aCnt, rowA, rowB);
end loop

Hope this helps.
J

@Bob re: Bulk collection

May 12, 2009 - 8:14 am UTC

Reviewer: Stew Ashton from Paris, France


Hi Bob,

I would start "small and manual": create test tables with few rows and few columns, write the SQL that does what you want, then figure out how to generate that SQL. For example:
-- Create test tables with just two updateable columns, F1 and F2

create table bob(pkey_id, version_id, F1, F2, 
constraint bob_pk primary key(pkey_id, version_id)) as
select object_id pkey_id, 3 version_id, owner F1, object_name F2 from all_objects
where object_id < 10

create table succeeded.
/

create table bob_audit(pkey_id, version_id, F1, F2, 
constraint bob_audit_pk primary key(pkey_id, version_id)) as
select pkey_id, 2 version_id, F1 || '2' F1 , F2 || '2' F2 from bob
union all
select pkey_id, 1 version_id, F1 || '1' F1, F2 from bob

create table succeeded.
/

-- For each primary key, show the latest version, the total number of versions
-- and the total number of distinct values per column

select pkey_id
, max(version_id) current_version
, count(version_id) num_versions
, count(distinct F1) num_F1
, count(distinct F2) num_F2
from (
  select * from bob
  union all
  select * from bob_audit
) group by pkey_id
order by 1

PKEY_ID   CURRENT_VERSION  NUM_VERSIONS  NUM_F1  NUM_F2
--------- ---------------- ------------- ------- ------
2         3                3             3       2
3         3                3             3       2
4         3                3             3       2
5         3                3             3       2
6         3                3             3       2
7         3                3             3       2
8         3                3             3       2
9         3                3             3       2

8 rows selected

/

-- The only SQL that needs to be generated refers to the updateable columns.
-- The following code returns just those columns.

select column_name from user_tab_cols where table_name = 'BOB'
and column_name not in (select column_name from user_cons_columns where constraint_name = 'BOB_PK')

COLUMN_NAME
------------------------------
F1
F2

2 rows selected

/
-- Now use the above code to generate the ultimate SELECT

select
'select pkey_id
, max(version_id) current_version
, count(version_id) num_versions' txt from dual
union all
select ', count (distinct ' || column_name || ') num_' || column_name from user_tab_cols where table_name = 'BOB'
and column_name not in (select column_name from user_cons_columns where constraint_name = 'BOB_PK')
union all
select
'from (
  select * from bob
  union all
  select * from bob_audit
) group by pkey_id
order by 1
/' from dual

TXT
------------------------------------
select pkey_id
, max(version_id) current_version
, count(version_id) num_versions

, count (distinct F1) num_F1
, count (distinct F2) num_F2
from (
  select * from bob
  union all
  select * from bob_audit
) group by pkey_id
order by 1
/

4 rows selected

May 12, 2009 - 8:26 am UTC

Reviewer: JerryQ from Killaloe, Ireland

Would a "count(distinct)" be correct - e.g.
if the column changed from available to locked to available to locked to available, a count(distinct) will return 2 - but it has actually changed 4 times.
Is performance an issue? 200+ aggregating SQL statements - most of them with no indexes (assuming no indexes on audit tables). On our systems - 100m+ records usually - and way more in our audit logs - this would be a major hit on the cluster.
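
To count actual transitions rather than distinct values, an analytic LAG over the version history works. A sketch against Stew's test tables above (DECODE is used because it treats two NULLs as equal, giving a null-safe comparison):

```sql
select pkey_id,
       sum( decode( F1, prev_F1, 0, 1 ) ) changes_F1,
       sum( decode( F2, prev_F2, 0, 1 ) ) changes_F2
  from ( select pkey_id, version_id, F1, F2,
                lag(F1) over ( partition by pkey_id order by version_id ) prev_F1,
                lag(F2) over ( partition by pkey_id order by version_id ) prev_F2
           from ( select * from bob
                  union all
                  select * from bob_audit ) )
 where version_id > 1   -- version 1 has no predecessor to compare against
 group by pkey_id
 order by pkey_id;
```

With this, a column that went A -> B -> A -> B -> A counts as 4 changes, not count(distinct) = 2.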




May 12, 2009 - 8:28 am UTC

Reviewer: JerryQ from Killaloe, Ireland

Sorry Stew - misread the SQL - I see it would only require one big statement.
Only issue is whether Count(distinct) is enough for bob.

Bulk Collection

May 12, 2009 - 10:01 am UTC

Reviewer: Bob from London

Thanks for your input guys! Really, really helpful! Actually a count(distinct) isn't enough ... I think I would need to go with Jerry's suggestion - I haven't dynamically created a PL/SQL proc before, so it should be quite interesting! Thanks so much for your help... FYI the above code took 2.5 hours to run last night, so hopefully Jerry's way will be faster AND produce the correct result! :-)

Bulk Collection

May 12, 2009 - 10:09 am UTC

Reviewer: Bob from London

Jerry - you suggest a PL/SQL table to hold the counts. How do you bulk update a normal table with the counts from a PL/SQL table if you have 200+ columns?

May 12, 2009 - 10:40 am UTC

Reviewer: JerryQ from Killaloe, Ireland

Hi Bob

A few ways of doing it.

But quick and rough - and making a lot of assumptions on record counts (if the summary record count is very high, you may need to output the array and clear it a few times during the run; also, if you're creating vast amounts of summary records - millions per night - you could use an array of the summary table type, load in the details and bulk insert it, though it's hard to imagine a summary table having that many records):

Assuming you've a 3-dim associative array - process_id,column_name, cnt
thats fully loaded once processing is complete.
And that the column_names in your summary table are the same as those in the audited tables:
Realise that values are getting hardcoded (rather than bound) - but I'm presuming there aren't that many inserts to be done.

piProcessId := aCnts.FIRST;
WHILE piProcessId IS NOT NULL LOOP
vSqlCols := NULL;
vSqlVals := NULL;
vCol := aCnts(piProcessId).FIRST;
WHILE vCol IS NOT NULL LOOP
vSqlCols := vSqlCols||vCol||',';
vSqlVals := vSqlVals||aCnts(piProcessId)(vCol)||',';
vCol := aCnts(piProcessId).NEXT(vCol);
END LOOP;
vSql := 'INSERT INTO summary('||vSqlCols||') VALUES ('||vSqlVals||')';
EXECUTE IMMEDIATE vSql;
piProcessId := aCnts.NEXT(piProcessId);
END LOOP;

(don't forget to strip off the trailing comma)

Bulk collection

May 12, 2009 - 1:29 pm UTC

Reviewer: Bob from London

Actually - I hit a problem with your idea Jerry because the string limit on EXECUTE IMMEDIATE is 32k ... and mine is way greater than that when I create dynamic IF statements

May 12, 2009 - 4:56 pm UTC

Reviewer: JerryQ from Killaloe, Ireland

Hi Bob

Have run into this problem myself before.
1. Make sure to "trim" your SQL:
i.e. vSql := 'begin' ||chr(10)||
' if....then' ||chr(10)||
' etc.
2. "execute immediate" will take up to 2*32767 characters
- I usually just use a number of varchar2s(32767) and pass them in strung together.
execute immediate sql1||sql2||sql3

If 65000 chars isn't enough, write out two or more procedures
On my version - 10Gr2 I get an error if I try to pass in a CLOB.

Here's some code to test - mine fails if I increase the FOR LOOP beyond approx 510:

set serveroutput on format wrapped size 1000000
declare
vclob CLOB;
piTop PLS_INTEGER;
vSql1 VARCHAR2(32767);
vSql2 VARCHAR2(32767);
vSql3 VARCHAR2(32767);
vSql4 VARCHAR2(32767);
vSql5 VARCHAR2(32767);
vSql6 VARCHAR2(32767);
begin
vClob := 'DECLARE' ||CHR(10)||
' vClob CLOB;'||CHR(10)||
'BEGIN' ||CHR(10);
FOR i IN 1..500 LOOP
vClob := vClob||
' vClob := vClob||''1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890''||CHR(10);';
END LOOP;
vClob := vClob||'END;';
DBMS_OUTPUT.PUT_LINE('vClob is '||LENGTH(vClob));
-- EXECUTE IMMEDIATE vClob;
piTop := CEIL(LENGTH(vClob) / 32767);
DBMS_OUTPUT.PUT_LINE(piTop);
FOR i IN 1..piTop LOOP
CASE i
WHEN 1 THEN vSql1 := SUBSTR(vClob,1,32767);
vClob := SUBSTR(vClob,32768);
WHEN 2 THEN vSql2 := SUBSTR(vClob,1,32767);
vClob := SUBSTR(vClob,32768);
WHEN 3 THEN vSql3 := SUBSTR(vClob,1,32767);
vClob := SUBSTR(vClob,32768);
WHEN 4 THEN vSql4 := SUBSTR(vClob,1,32767);
vClob := SUBSTR(vClob,32768);
WHEN 5 THEN vSql5 := SUBSTR(vClob,1,32767);
vClob := SUBSTR(vClob,32768);
WHEN 6 THEN vSql6 := SUBSTR(vClob,1,32767);
vClob := SUBSTR(vClob,32768);
END CASE;
END LOOP;
EXECUTE IMMEDIATE vSql1||vSql2||vSql3||vSql4||vSql5||vSql6;
END;



Tom Kyte

Followup  

May 13, 2009 - 11:12 am UTC

#2 - excessively dangerous; it might work, it might not.

Not until 11g can native dynamic SQL be said to handle more than 32k.

Prior to that, the only way in PL/SQL would be to use dbms_sql - there is a parse interface that takes a PL/SQL table of strings, and that can have 2 billion lines of SQL in it.

Execute immediate - workaround

May 12, 2009 - 5:04 pm UTC

Reviewer: JerryQ from Killaloe, Ireland

One other workaround you can try, if your code can't fit into 65K is dbms_sql.parse that accepts a pl/sql table of 32K strings.

TYPE aVarchar2s IS TABLE OF VARCHAR2(32767) INDEX BY BINARY_INTEGER;

PROCEDURE parse(c IN INTEGER,
STATEMENT IN aVarchar2s,
lb IN INTEGER,
ub IN INTEGER,
lfflg IN BOOLEAN,
language_flag IN INTEGER);
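
For completeness, a sketch of calling that overload. dbms_sql.varchar2a is the package's own index-by table of VARCHAR2(32767) in 10g; splitting the generated statement into the l_stmt chunks is the caller's job:

```sql
declare
    l_cur  integer := dbms_sql.open_cursor;
    l_stmt dbms_sql.varchar2a;
    l_rc   integer;
begin
    l_stmt(1) := 'begin ';
    l_stmt(2) := '   null; ';   -- the generated 200-compare block would go here
    l_stmt(3) := 'end;';

    dbms_sql.parse( c             => l_cur,
                    statement     => l_stmt,
                    lb            => l_stmt.first,
                    ub            => l_stmt.last,
                    lfflg         => false,
                    language_flag => dbms_sql.native );
    l_rc := dbms_sql.execute( l_cur );
    dbms_sql.close_cursor( l_cur );
end;
/
```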

Bulk collection

May 12, 2009 - 5:52 pm UTC

Reviewer: Bob from London, UK

I must say Jerry - you have been extremely helpful! I think I can get around this 32k problem by creating a new version of the proc every say 20 IF statements ... use a mod function something like that ...then do the EXECUTE IMMEDIATE...

BUT....I want to bulk collect records ...from the transaction table then call that procedure for each transaction record OR pass an array of records ....instead of just a record type: RecA/RecB ...

Is it possible to refer to a record from BULK COLLECT - I don't think X%ROWTYPE and the record from a bulk collect are the same?

May 12, 2009 - 6:26 pm UTC

Reviewer: JerryQ from Killaloe, Ireland

Hi Bob - no worries. I'm pretty certain they are the same.
Using Bulk Collects would make the record-matching style of coding more involved.

The bulk collect is just an array of the records (rowtypes).

TYPE transrecs_aat IS TABLE OF transtab%ROWTYPE
INDEX BY PLS_INTEGER;
aTransRecs transrecs_aat;

FOR indx IN 1 .. aTransRecs.COUNT
LOOP
getRecordToCompareWith - say recB
pcCompare(aCnts,aTransRecs(indx),recB)
END LOOP;

pcCompare(aCnts..., recA transtab%ROWTYPE, recB transtab%ROWTYPE) IS ....

Each row in the collection has the same type as a single record in the table.

Bulk Collections

May 13, 2009 - 6:06 am UTC

Reviewer: Bob from London

Jerry - with regards to your dynamically created procedure - I get an error with the parameter:

recA IN NOCOPY aud_tab%ROWTYPE,


Is this the right syntax for passing a record as a parameter?

May 13, 2009 - 9:00 am UTC

Reviewer: A reader

Hi Bob

To use "nocopy", you need to use "IN OUT"
i.e. recB IN OUT NOCOPY aud_tab%ROWTYPE

or
it probably wouldn't make any difference to just use
recB IN aud_tab%ROWTYPE


Collections

May 14, 2009 - 8:33 am UTC

Reviewer: Bob from London

Hi Jerry - I am trying passing records as a parameter to a dynamically created procedure - but when I try and execute the procedure, I get:

DECLARE
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04065: not executed, altered or dropped stored procedure "CRES2PROD.PC"
ORA-06508: PL/SQL: could not find program unit being called: "CRES2PROD.PC"
ORA-06512: at line 99
ORA-06512: at line 186

My code is as follows:

I have to create a dummy procedure PC at the beginning - otherwise the PL/SQL won't compile (because PC doesn't exist)

create or replace procedure pc(p_process_id in number,
recA IN OUT NOCOPY src_trn_det%ROWTYPE,
recB IN OUT NOCOPY src_trn_det%ROWTYPE) IS
BEGIN
NULL;
END;
/

PROCEDURE sp_create_compare(p_process_id IN NUMBER,
p_trn IN OUT NOCOPY src_trn_det%ROWTYPE,
p_aud IN OUT NOCOPY src_trn_det%ROWTYPE)
IS
l_method_name VARCHAR2(50) := 'sp_create_compare()';
l_index PLS_INTEGER := 0;
l_init LONG := '';
l_temp LONG := '';

l_execute_pt PLS_INTEGER := 20;
BEGIN
l_init := 'create or replace procedure pc(p_process_id in number, ';
l_init := l_init || ' recA IN OUT NOCOPY src_trn_det%ROWTYPE, ';
l_init := l_init || ' recB IN OUT NOCOPY src_trn_det%ROWTYPE) IS BEGIN';

DBMS_OUTPUT.PUT_LINE('INFO: Performing procedure: '|| l_method_name);

FOR x IN (SELECT COLUMN_NAME,
(CASE WHEN DATA_TYPE = 'NUMBER' THEN
'NVL(recA.' || column_name ||',99999) = NVL(RecB.' || column_name ||',99999)'
WHEN DATA_TYPE LIKE '%CHAR%' THEN
'NVL(recA.' || column_name ||',''XYZ'') = NVL(RecB.' || column_name ||',''XYZ'')'
WHEN DATA_TYPE = 'DATE' THEN
'NVL(recA.' || column_name ||',''01-JAN-1001'')= NVL(RecB.' || column_name ||',''01-JAN-1001'')'
END) col_comp
FROM
user_tab_columns
WHERE
table_name = 'SRC_TRN_DET'
ORDER BY column_id)
LOOP
l_index := l_index + 1;

--dbms_output.put_line('Column: '||x.column_name ||', Index: '|| l_index);

IF l_index = 1 THEN
l_temp := l_init;
ELSE
l_temp := l_temp ||
' IF '||x.col_comp||' THEN NULL; ELSE '||
' update adjustments_summary set '||x.column_name||'='||x.column_name||'+1' ||
' where process_id = p_process_id; END IF; ';
END IF;


if mod(l_index, l_execute_pt) = 0 then
dbms_output.put_line('INFO: Clearing down temp string');
l_temp := l_temp || ' END; ';

EXECUTE IMMEDIATE l_temp;
pc(p_process_id, p_trn, p_aud);

l_temp := l_init;
end if;

END LOOP;

/*
l_temp := l_temp || ' END; ';
EXECUTE IMMEDIATE l_temp;
pc(p_process_id, p_trn, p_aud);
commit;
*/

EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('ERROR: '||l_method_name||' failed with: '||SQLERRM);
raise;
END sp_create_compare;

I'm not sure why it can't execute PC - because it's valid...

May 14, 2009 - 11:05 am UTC

Reviewer: A reader

Hi Bob

1. First thing - if performance is an issue, you definitely don't want to be doing the update for every change. Just keep an array - e.g. keyed by process_id/column_name with a change count.

Then just update your summary table from the array when you've finished processing.

2. Why are you using NVL(...,99999) etc.? It seems very inefficient, and possibly wrong if one side actually holds 99999 or 'XYZ'. What's wrong with a straight "is null" check?

IF NOT((a=b) OR (a is null and b is null)) then aCnts(pID)(vColName) := aCnts(pID)(vColName) + 1; end if;

3. "existing state of package":
The EXECUTE IMMEDIATE is re-compiling the procedure. The main procedure was compiled against an older version; PL/SQL sees the version mismatch and gives you the error.
So:
a. use an anonymous PL/SQL block, passing out the counts array, instead of a procedure
b. just run your code to create the procedure once, and then don't bother re-creating it unless the table structure changes
c. change your procedure to a package: leave the package header alone and recompile only the body (not certain this works - but it should)
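Option (a) can be sketched like this (an untested outline, not the poster's actual code): an anonymous block is compiled and executed on the fly, so nothing in the data dictionary is recreated and the calling code can never hit ORA-04068.

```sql
DECLARE
    l_blk LONG;
BEGIN
    -- build an anonymous block instead of CREATE OR REPLACE PROCEDURE;
    -- the generated per-column comparison code would go where NULL is
    l_blk := 'DECLARE
                  recA src_trn_det%ROWTYPE;
                  recB src_trn_det%ROWTYPE;
              BEGIN
                  NULL;
              END;';

    EXECUTE IMMEDIATE l_blk;  -- no stored object is created or invalidated
END;
/
```

Note that in this release PL/SQL records cannot be bound into the dynamic block with USING, so the record values would have to be exposed through package state instead.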

May 14, 2009 - 11:25 am UTC

Reviewer: A reader

Whoops - mistake in last code. (always a problem with nulls)
Code would need to be
if (a=b) OR (a is null and b is null) then
null;
else
...change
end if;


collections

May 14, 2009 - 1:18 pm UTC

Reviewer: Bob from London

Instead of doing that - you could do a DECODE ...? How do you declare a 2D array in PL/SQL, initialize it and then populate it? The syntax is a pain ...

May 14, 2009 - 1:38 pm UTC

Reviewer: A reader

A 2-dim array - pID integer, column_name varchar2(30), cnt integer:

TYPE typPiVc30 IS TABLE OF PLS_INTEGER INDEX BY VARCHAR2(30);
TYPE typPiVc30Pi IS TABLE OF typPiVc30 INDEX BY PLS_INTEGER;

aCnts typPiVc30Pi;

Re. Initialisation.
I wouldn't usually initialise; just populate as required:
so in your "IF" statement:
IF a = b THEN
NULL;
ELSE
IF aCnts.EXISTS(pID)
AND aCnts(pID).EXISTS(vColumn) THEN
aCnts(pID)(vColumn) := aCnts(pID)(vColumn) + 1;
ELSE
aCnts(pID)(vColumn) := 1;
END IF;
END IF;

Then, afterwards, to process through the results, use FIRST and NEXT:
1. for every pID make sure you have a blank summary record - possibly a bulk insert
2.
piCurrPID := aCnts.FIRST;
WHILE piCurrPID IS NOT NULL LOOP
   vCurrCol := aCnts(piCurrPID).FIRST;
   vUpdateSql := NULL;
   WHILE vCurrCol IS NOT NULL LOOP
      vUpdateSql := vUpdateSql||vCurrCol||' = '||aCnts(piCurrPID)(vCurrCol)||',';
      vCurrCol := aCnts(piCurrPID.NEXT(vCurrCol);
   END LOOP;
   vUpdateSql := SUBSTR(vUpdateSql,1,LENGTH(vUpdateSql)-1);
   EXECUTE IMMEDIATE 'UPDATE SUMMARY SET '
      ||vUpdateSql
      ||' WHERE pID = :1'
      USING piCurrPID;
   piCurrPID := aCnts.NEXT(piCurrPID);
END LOOP;

(haven't tested this - but most of its there)

You could also process this by
1. for every pID make sure you have a blank summary record - possibly a bulk insert
2. Gather all the values for each column and do a bulk update for that column

Without knowing what sort of record volumes are being processed, I don't know whether it's worth the effort.


May 14, 2009 - 1:40 pm UTC

Reviewer: A reader

Line:
vCurrCol := aCnts(piCurrPID.NEXT(vCurrCol);
Should be
vCurrCol := aCnts(piCurrPID).NEXT(vCurrCol);

collections

May 14, 2009 - 4:53 pm UTC

Reviewer: Bob from London

I am dealing with millions of records ...

I am having a problem passing a ROWTYPE as a parameter to a procedure ... is it legal to do this:

declare
c1rec x%rowtype;
begin
some_proc(c1rec);
end;

create procedure some_proc(rec IN OUT NOCOPY x%rowtype)
IS
BEGIN
..some processing ...can I refer to rec?
END;
/

Collections

May 14, 2009 - 5:43 pm UTC

Reviewer: Bob from London

...just found out you can! Thanks for your help "reader" - I'll try out the 2D array and see what the performance is like..

May 14, 2009 - 5:44 pm UTC

Reviewer: Jerry Slattery

Hi Bob,

There comes a point when you're going to have to figure this out for yourself. You could have answered your last question yourself with about 5 lines of pl/sql.
Hope you get on well with it.
J.

collections

May 14, 2009 - 5:50 pm UTC

Reviewer: Bob from London

Hi reader - I am comparing records with rowtypes passed in as parameters to a procedure. I am building dynamic SQL using execute immediate ...If I update an element in an array (i.e. increase the value by zero ...) in a dynamically created string - will I be able to access that value in the calling PL/SQL?

Collections

May 14, 2009 - 6:49 pm UTC

Reviewer: Bob from London

Just wanted to say a big thank you! Especially to Jerry and "reader" - got my job runtime down from 3 hours to 19 minutes. I haven't coded any bulk collections yet, so I think it can run even faster. Thanks so much for your time, knowledge and wisdom!

Follow Up question

June 10, 2009 - 8:23 am UTC

Reviewer: Venkat from India

We do something similar (in Java), but I have a Q on the way the SQL is structured using collections.

In the following 2 queries the result is the same, but I see huge cardinality and costs with the latter query (B):
(A) SELECT d.org_id, d.name
FROM dept d
WHERE d.dept_id in ( select orgs.val from (TABLE(CAST( ? AS MY_INTEGER_ARRAY))) orgs )
ORDER BY d.name ASC

(B) SELECT d.org_id, d.name
FROM dept d , (select * from (TABLE(CAST( ? AS MY_INTEGER_ARRAY)))) orgs
WHERE d.dept_id = orgs.val
ORDER BY d.name ASC

The explain plans look like this:
Cost Cardinality Bytes
(A)
SELECT STATEMENT ALL_ROWS 24 2 136
SORT(ORDER BY) 24 2 136
HASH JOIN(SEMI) 23 2 136
TABLE ACCESS(FULL) DEPT ANALYZED 5 237 15642
COLLECTION ITERATOR(PICKLER FETCH)

(B) SELECT STATEMENT ALL_ROWS 249 16545 1125060
SORT(ORDER BY) 249 16545 1125060
HASH JOIN 23 16545 1125060
TABLE ACCESS(FULL) DEPT ANALYZED 5 237 15642
COLLECTION ITERATOR(PICKLER FETCH)

DEPT.dept_id is indexed.
I am not sure why (B), which uses a join, is more costly than (A)?
Tom Kyte

Followup  

June 10, 2009 - 2:57 pm UTC

A "Q"? http://tinyurl.com/duby7

A join is NOT equivalent to a subquery. The optimizer doesn't have any information about the set of data in the collection - it does not assume "it is unique data", hence it assumes when you JOIN, you will have duplicates and cause the result set to be larger.



ops$tkyte%ORA10GR2> create table t as select * from all_users;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          l_data sys.odcinumberList := sys.odcinumberList( 1, 2, 3, 4, 5, 1, 2, 3, 4, 5 );
  3  begin
  4          for x in ( select t.*
  5                   from t
  6                  where user_id in (select * from table(l_data)) )
  7          loop
  8                  dbms_output.put_line( x.username || ', ' || x.user_id );
  9          end loop;
 10          for x in ( select '.'||plan_table_output data
 11                   from table(dbms_xplan.display_cursor) )
 12          loop
 13                  dbms_output.put_line( x.data );
 14          end loop;
 15          dbms_output.put_line( rpad('-',50,'-') );
 16
 17          for x in ( select t.*
 18                   from t, (select * from table(l_data)) d
 19                                  where t.user_id = d.column_value )
 20          loop
 21                  dbms_output.put_line( x.username || ', ' || x.user_id );
 22          end loop;
 23          for x in ( select '.'||plan_table_output data
 24                   from table(dbms_xplan.display_cursor) )
 25          loop
 26                  dbms_output.put_line( x.data );
 27          end loop;
 28          dbms_output.put_line( rpad('-',50,'-') );
 29  end;
 30  /

<b>one row, using "IN" has specific meaning, specifically "do not join - use an in probe"</b>

SYSTEM, 5


.SQL_ID  75u5unyzy8azp, child number 0
.-------------------------------------
.SELECT T.* FROM T WHERE USER_ID IN (SELECT * FROM TABLE(:B1 ))
.
.Plan hash value: 3918497701
.
.-------------------------------------------------------------------------------------------
.| Id  | Operation                          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
.-------------------------------------------------------------------------------------------
.|   0 | SELECT STATEMENT                   |      |       |       |    11 (100)|          |
.|*  1 |  HASH JOIN SEMI                    |      |     1 |    41 |    11  (10)| 00:00:01 |
.|   2 |   TABLE ACCESS FULL                | T    |    41 |  1599 |     2   (0)| 00:00:01 |
.|   3 |   COLLECTION ITERATOR PICKLER FETCH|      |       |       |            |          |
.-------------------------------------------------------------------------------------------
.
.Predicate Information (identified by operation id):
.---------------------------------------------------
.
.   1 - access("USER_ID"=VALUE(KOKBF$))
.
.Note
.-----
.   - dynamic sampling used for this statement
.
--------------------------------------------------


<b>two rows - different answer, same data, your queries are NOT equivalent (in general)</b>

SYSTEM, 5
SYSTEM, 5


.SQL_ID  1vu4ad2jphvbs, child number 0
.-------------------------------------
.SELECT T.* FROM T, (SELECT * FROM TABLE(:B1 )) D WHERE T.USER_ID = D.COLUMN_VALUE
.
.Plan hash value: 3400138071
.
.-------------------------------------------------------------------------------------------
.| Id  | Operation                          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
.-------------------------------------------------------------------------------------------
.|   0 | SELECT STATEMENT                   |      |       |       |    11 (100)|          |
.|*  1 |  HASH JOIN                         |      |  8168 |   327K|    11  (10)| 00:00:01 |
.|   2 |   TABLE ACCESS FULL                | T    |    41 |  1599 |     2   (0)| 00:00:01 |
.|   3 |   COLLECTION ITERATOR PICKLER FETCH|      |       |       |            |          |
.-------------------------------------------------------------------------------------------
.
.Predicate Information (identified by operation id):
.---------------------------------------------------
.
.   1 - access("T"."USER_ID"=VALUE(KOKBF$))
.
.Note
.-----
.   - dynamic sampling used for this statement
.
--------------------------------------------------

PL/SQL procedure successfully completed.


The 8168 is a 'magic' number - we have no statistics on that collection, so we make them up (8168 is the default row count for my collection in this case because my block size is 8k - a smaller blocksize means fewer rows, a bigger one more).
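If you know roughly how many elements the bound collection holds, one widely used workaround (discussed in the article linked below - this is a sketch, not the original poster's query) is the CARDINALITY hint on the TABLE() alias:

```sql
-- tell the optimizer the collection has ~10 rows instead of the
-- blocksize-derived default estimate of 8168
SELECT /*+ cardinality( d 10 ) */ t.*
  FROM t, TABLE(:bound_collection) d
 WHERE t.user_id = d.column_value;
```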


You want to read:


http://www.oracle.com/technology/oramag/oracle/04-sep/o54asktom.html

Query Plans with Temporary Tables

and

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3779680732446#15740265481549

Bulk collection question

June 27, 2009 - 4:06 pm UTC

Reviewer: An avid reader from UK

Hi Tom,

I have a question for you regarding bulk collections. Suppose I create an object type:

e.g.

create type x as object (a number,
b varchar2(10),
c number,
d number);
/
create type x_tab as table of x;
/

declare
type a is ref cursor;
b a;

y x_tab := x_tab();

l_sql varchar2(100) := 'select x(a,b,c,d) from y';
begin
open b for l_sql;

loop
fetch b bulk collect into y limit 100;

for i in 1..y.count loop
null; -- ...do some processing ...
end loop;

exit when b%notfound;
end loop;

close b;
end;

In this example there are four fields. To refer to them in the code, one would have to explicitly state the field name, e.g. y(i).a. My question is - what happens if we have 200 fields and I don't know which field I want to process (except at run time)? I could have the field name I want to process in a variable. How would I refer to that column from the bulk collection then?

Thanks so much
Tom Kyte

Followup  

July 06, 2009 - 5:06 pm UTC

... and I don't know the field
I want to process
....


I stopped right there, then I would say you have not defined your problem yet.


Sounds like your data model is completely wrong (you have 200 columns named "value1, value2, value3, ..." for example) instead of 200 rows.

Or, your code isn't yet fully specified out.


give us a REAL WORLD use case and then we can suggest how to deal with it.

Access to <field_name> in data record elements in PL/SQL tables

July 29, 2009 - 6:01 am UTC

Reviewer: Olaf from Germany

Hi Tom,
I would like to assign to the <field_name> dynamically in the way:

DECLARE
TYPE tab_type IS TABLE OF emp%ROWTYPE;
t_tab tab_type := tab_type();

TYPE col_type IS TABLE OF all_tab_columns.column_name%TYPE;
t_col col_type := col_type();

BEGIN
SELECT column_name
BULK COLLECT INTO t_col
FROM all_tab_columns
WHERE table_name = 'EMP'
AND owner = 'SCOTT';

t_tab.extend;

-- instead of:
t_tab(t_tab.last).empno := ....
t_tab(t_tab.last).ename := ....
t_tab(t_tab.last).job := ....
t_tab(t_tab.last).mgr := ....
t_tab(t_tab.last).hiredate := ....
t_tab(t_tab.last).sal := ....
t_tab(t_tab.last).comm := ....
t_tab(t_tab.last).deptno := ....

-- this variant:
FOR i IN 1 .. t_col.count LOOP
t_tab(t_tab.last) || t_col(i) := ....
END LOOP;
...
END;

Is there a possibility of stimulating this?

Many thanks.

Olaf


Tom Kyte

Followup  

July 29, 2009 - 7:22 am UTC

as stimulating as it sounds....

no.

A bit late

November 07, 2009 - 6:25 pm UTC

Reviewer: JerrQ from Killaloe

I'm hoping Olaf solved his problem by now. Just saw this and thought it could be done by dynamic SQL with an anonymous block.

Haven't tried the code - and have no idea what impact it would have on performance.

DECLARE
TYPE tab_type IS TABLE OF emp%ROWTYPE;
t_tab tab_type := tab_type();

TYPE col_type IS TABLE OF all_tab_columns.column_name%TYPE;
t_col col_type := col_type();

vSql VARCHAR2(2000);
BEGIN
SELECT column_name
BULK COLLECT INTO t_col
FROM all_tab_columns
WHERE table_name = 'EMP'
AND owner = 'SCOTT';

t_tab.extend;

FOR i IN 1 .. t_col.count LOOP
vSql := 'DECLARE '||
' TYPE tab_type IS TABLE OF emp%ROWTYPE;'||
' dyn_tab tab_type := tab_type();'||
'BEGIN '||
' dyn_tab := :1;'||
' dyn_tab('||t_tab.last||').'||t_col(i)||' := ....;'||
' :2 := dyn_tab;'||
'END;';

EXECUTE IMMEDIATE vSql USING t_tab, OUT t_tab;
END LOOP;
END;

Tom Kyte

Followup  

November 11, 2009 - 1:56 pm UTC


yuck.

February 19, 2010 - 12:53 am UTC

Reviewer: A reader

select dname,
       cast( multiset( select ename, empno
                         from emp
                        where emp.deptno = dept.deptno ) as empTblType ) emps,
       cast( multiset( select name, start_date, end_date
                         from proj
                        where proj.deptno = dept.deptno ) as projTblType ) projs
  from dept

If we open a ref cursor for the above query and it returns 50 records, how many cursors will it open in the database - 1 or 50?
Tom Kyte

Followup  

February 25, 2010 - 12:20 am UTC

it depends.

does the client fetch a row at a time? or 50 rows at a time? does the client close the cursors after it fetches, or leave them open for a while?

You control it.

Real Time Scenarios for Object Types

April 28, 2010 - 1:40 pm UTC

Reviewer: V.Hariharaputhran from Chennai India

Mr. Oracle (the way I address you always) - first I would like to appreciate your splendid services to the Oracle community.

On what occasions should one go for objects and object types - I mean objects with MEMBER FUNCTIONS, MAP MEMBER FUNCTIONS, type BODYs?

I didn't find much discussion on this topic in your forum. It would be of great help if you could throw some light on this topic, as usual with apt examples.

Regards
V.Hari
Tom Kyte

Followup  

April 28, 2010 - 3:19 pm UTC

They are useful for building certain datatypes (XMLType pops into mind). So, in that sense they can be useful - to create new datatypes that didn't exist before.

I don't like them to create tables - to create a COLUMN in a table, sure, but not tables.
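As one concrete illustration of where a type body earns its keep (a hypothetical type, not from this thread): a MAP MEMBER FUNCTION maps the object to a scalar, which lets instances be compared, ordered and used with DISTINCT or GROUP BY.

```sql
CREATE OR REPLACE TYPE money_t AS OBJECT (
    amount   NUMBER,
    currency VARCHAR2(3),
    MAP MEMBER FUNCTION to_scalar RETURN NUMBER
);
/
CREATE OR REPLACE TYPE BODY money_t AS
    -- MAP function: reduces the object to a scalar used for ordering
    MAP MEMBER FUNCTION to_scalar RETURN NUMBER IS
    BEGIN
        RETURN amount;  -- naive: deliberately ignores currency conversion
    END;
END;
/
```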

functionality similar to MULTISET EXCEPT for collections in Oracle 9i

May 06, 2010 - 3:48 pm UTC

Reviewer: Saurabh Nolakha from Bangalore,India

Hi Tom,
Is there any functionality available in Oracle 9i for comparing collections which is similar to "MULTISET EXCEPT" provided in Oracle 10g?

I just want to avoid full table scans to look-up for a particular value in table.
Please help in providing the best possible way to achieve it in Oracle 9i
Thanks and regards:
Saurabh
Tom Kyte

Followup  

May 06, 2010 - 8:58 pm UTC

the new collection functionality was added in 10g Release 1, so anything documented as "new as of release X" is only in release X and above, sorry.

I'm perplexed by your comment:

I just want to avoid full table scans to look-up for a particular value in
table.


elaborate on that, why would you full scan a table to look for a particular value? why would you not index such a table? give example of what you are trying to do, supply version you are trying to do it in, and we might be able to show how best to do it.

rather than supply a method to do something and ask "can I use this method in release X", ask "I need to do <something goes here>, how best to do it in release X"

functionality similar to MULTISET EXCEPT for collections in Oracle 9i

May 07, 2010 - 5:27 am UTC

Reviewer: Saurabh Nolakha from Bangalore,India

Hi Tom,
I have a requirement for Oracle 9i only.
I am storing values in two PL/SQL tables (indexed by binary integer), say table A and table B.
Table B contains values 1, 2, 3, 4, 5, 6, ... 100.
Table A contains values (1, 5, 105, 211, 311).
I just want to look for the values of table A in table B.
Please suggest the best possible way to do this.

Thanks and regards:
Saurabh

Tom Kyte

Followup  

May 07, 2010 - 8:38 am UTC

store in a collection instead of a plsql index by table and just use sql.


your use of terminology such as "full scan" when referring to a plsql index by table is terribly confusing.

I just want to avoid full table scans to look-up for a particular value in table.

I don't know how anyone would be able to figure out you wanted to do what you stated based on that.

ops$tkyte%ORA10GR2> declare
  2          l_a myNumberArray := myNumberArray(1,5,105,211,311);
  3          l_b myNumberArray := myNumberArray();
  4  begin
  5          l_b.extend(100);
  6          for i in 1 .. 100 loop l_b(i) := 1; end loop;
  7
  8          for x in (select * from table(l_a) minus select * from table(l_b) )
  9      loop
 10                  dbms_output.put_line( x.column_value );
 11          end loop;
 12  end;
 13  /
5
105
211
311

PL/SQL procedure successfully completed.

functionality similar to MULTISET EXCEPT for collections in Oracle 9i

May 09, 2010 - 2:36 pm UTC

Reviewer: Saurabh Nolakha from Bangalore,India

Hi Tom,

I used the code provided by you for my better understanding:-

declare
type myNumberArray is table of number;
l_a myNumberArray := myNumberArray(1,5,105,211,311);
l_b myNumberArray := myNumberArray();
begin
l_b.extend(100);
for i in 1 .. 100 loop l_b(i) := 1; end loop;

for x in (select * from table(l_a) minus select * from table(l_b) )
loop
insert into temp values( x.column_value );
end loop;
end;
/
----------------------------------------------------
but on execution in SQL Navigator it gives following error.
"local collection types not allowed in SQL statements".

Here are some more details about my requirement:
From a view I just need to check, for some columns, whether any new values have been entered, and return those new values. Consider the following table:

Col1 col2 col3 col4
A11 B21 C31 D41
A12 B22 C32 D42
A11 B21 C31 D41
A12 B22 C32 D42

so if next day new values are entered for col1 as below:

Col1 col2 col3 col4
A31 B21 C31 D41

then my procedure should detect the new value and supply it in the result.
The problem is that the base table contains some millions of rows. Please provide the best possible way to do this.

Thanks and regards:
Saurabh Nolakha
Tom Kyte

Followup  

May 10, 2010 - 6:38 pm UTC

ops$tkyte%ORA9IR2> create or replace type myNumberArray as table of number
  2  /

Type created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> declare
  2              l_a myNumberArray := myNumberArray(1,5,105,211,311);
  3              l_b myNumberArray := myNumberArray();
  4  begin
  5              l_b.extend(100);
  6             for i in 1 .. 100 loop l_b(i) := 1; end loop;
  7
  8              for x in (select * from table(cast(l_a as myNumberArray)) minus select * from table(cast(l_b as myNumberArray)) )
  9          loop
 10                          dbms_output.put_line( x.column_value );
 11  -- insert into temp values( x.column_value );
 12             end loop;
 13  end;
 14  /
5
105
211
311

PL/SQL procedure successfully completed.

Why not a refcursor

May 11, 2010 - 3:14 am UTC

Reviewer: V.Hariharaputhran from Chennai India

Mr.Oracle

Since in the above requirement the person has asked for the best way to perform it
("Please provide a best possible way to do this"),

wouldn't a REF CURSOR be a better solution to return the result:

open rf for
select * from T1
Minus
select * from T2

because while doing these below steps

l_a myNumberArray := myNumberArray(1,5,105,211,311);
l_b myNumberArray := myNumberArray();

to load data into the arrays I will have to perform a bulk collect twice from the view/table. Wouldn't just a ref cursor make it simpler?

More over I am not sure of the requirement,

From a view I just need to check for some columns that is there any new value entered or not.Just
need all those new values entered


How is the tracked, is it done with a extra date column in the table.

how to do this.

May 31, 2010 - 5:08 am UTC

Reviewer: A reader

Hi Tom

I want to select the columns of the cursor used in the SQL below. How do I do this using a SQL query? (I have learned it in this thread only and now I need to use it in a query... but I do not know how to display ename and salary as 2 separate columns - like we do in normal queries.)

select dname, cursor( select ename ,salary from emp where emp.deptno = dept.deptno )
from dept;

Thanks
Ajeet
Tom Kyte

Followup  

May 31, 2010 - 8:41 am UTC

it would depend on your programming API - it may or may not be supported depending on the api you choose.

You are getting a cursor back during the fetch - just like you would be getting a ref cursor back from a stored procedure


ops$tkyte%ORA11GR2> declare
  2      l_dname dept.dname%type;
  3      l_ename emp.ename%type;
  4      l_sal   emp.sal%type;
  5      l_cur   sys_refcursor;
  6      cursor c is
  7      select dname, cursor( select ename ,sal
  8                              from emp
  9                             where emp.deptno = dept.deptno ) cur
 10          from dept;
 11  begin
 12      open c;
 13      loop
 14          fetch c into l_dname, l_cur;
 15          exit when c%notfound;
 16          dbms_output.put_line( l_dname );
 17          loop
 18              fetch l_cur into l_ename, l_sal;
 19              exit when l_cur%notfound;
 20              dbms_output.put_line( '-- ' || l_ename || ', ' || l_sal );
 21          end loop;
 22      end loop;
 23  end;
 24  /
ACCOUNTING
-- CLARK, 2450
-- KING, 5000
-- MILLER, 1300
RESEARCH
-- SMITH, 800
-- JONES, 2975
-- SCOTT, 3000
-- ADAMS, 1100
-- FORD, 3000
SALES
-- ALLEN, 1600
-- WARD, 1250
-- MARTIN, 1250
-- BLAKE, 2850
-- TURNER, 1500
-- JAMES, 950
OPERATIONS

PL/SQL procedure successfully completed.

actually I wanted to know how to do it in SQL...

May 31, 2010 - 9:41 am UTC

Reviewer: A reader

Hi Tom,

The reason I asked the above question is that I am using scalar subqueries and there I am trying to select 2 fields in one select... using cursor().

for example

select ename,cursor(select deptname,dept_id from dept where emp_id = dept.emp_id)
from emp ;

how to show deptname and dept_id as two separate columns using SQL - if that makes sense.

Thanks
Ajeet
Tom Kyte

Followup  

June 08, 2010 - 7:29 am UTC

it does not make sense.

You have a query that returns two columns - ename and a cursor

That cursor itself returns two columns

How you format and display that is entirely up to you. If you just wanted columns (three of them) you should have joined and no cursors() at all.

Try a Join

June 01, 2010 - 1:41 pm UTC

Reviewer: Andrew Markiewicz from Madison, WI

Ajeet,
Unless your ref cursor is being passed in through as a parameter or something, why not use just a join of emp and dept?

select dname, ename ,salary 
  from emp
     , dept
 where emp.deptno = dept.deptno
;

select emp.ename
     , dept.deptname
     , dept.dept_id 
  from dept 
     , emp
 where emp.emp_id = dept.emp_id

reason for not using a join

June 02, 2010 - 12:42 am UTC

Reviewer: A reader

Hi Tom,

The actual query where I want to use the scalar subquery has 8 very large tables; only 4 of them are key tables. The other 4 are there just because we want to select 2 or 3 columns from them (they don't have any filter column in the where clause). When we run the query with only the 4 key tables - key in terms of number of rows being returned, filters etc. - it runs just fine. When we add the other 4 tables to the from clause, the query runs very slow, so I was trying to find out whether a scalar subquery would be an alternative in this case.
To define "very slow": with only the 4 tables the query takes 20-30 seconds; when we add the other 4 tables it starts taking 300 seconds.

Thanks
Ajeet
Tom Kyte

Followup  

June 08, 2010 - 9:35 am UTC

but what does it take to return the ENTIRE RESULT SET, including fetching from these other tables - not just getting the cursors from them but actually FETCHING those cursors to exhaustion as well.

Accessing collections in SELECT clause

March 04, 2011 - 6:33 pm UTC

Reviewer: Shreyas from New York, US

Hello Tom,

I have a question on accessing collections.

Is it possible to read collections (using index of the collection) directly in the SELECT clause of the query?

Example:
SQL> drop function f_test;

Function dropped.

SQL> drop type t_result;

Type dropped.

SQL> drop type o_result_row;

Type dropped.

SQL> drop type t_outputs;

Type dropped.

SQL> drop type o_output;

Type dropped.

SQL> drop type t_inputs;

Type dropped.

SQL> drop type o_input;

Type dropped.

SQL> 
SQL> create or replace type o_input as object
  2  (type_cd  varchar2(10),
  3   start_dt date,
  4   end_dt   date
  5  );
  6  /

Type created.

SQL> 
SQL> create or replace type t_inputs as table of o_input;
  2  /

Type created.

SQL> 
SQL> create or replace type o_output as object
  2  (rtn    number,
  3   debug_info varchar2(200)
  4  );
  5  /

Type created.

SQL> 
SQL> create or replace type t_outputs as table of o_output;
  2  /

Type created.

SQL> 
SQL> create or replace type o_result_row as object
  2  (table_name      varchar2(30),
  3   tablespace_name varchar2(30),
  4   rtn_outputs     t_outputs
  5  );
  6  /

Type created.

SQL> 
SQL> create or replace type t_result as table of o_result_row;
  2  /

Type created.

SQL> 
SQL> create or replace function f_test(in_inputs IN t_inputs)
  2  return t_result
  3  as
  4  
  5  ret_result t_result;
  6  calc_val   t_outputs;
  7  i      integer;
  8  j      integer;
  9  
 10  begin
 11  
 12     select o_result_row(table_name, tablespace_name, null)
 13     bulk collect into ret_result
 14     from   all_tables
 15     where  owner = 'SYS'
 16     and    rownum <= 10;
 17  
 18     for i in ret_result.first..ret_result.last
 19     loop
 20        calc_val := null;
 21        calc_val := t_outputs();
 22  
 23        for j in in_inputs.first..in_inputs.last
 24        loop
 25           calc_val.extend(1);
 26           calc_val(j) := o_output(i+j*2+1.15, 'Debug info for param ' || j || ' ' || in_inputs(j).type_cd);
 27  
 28        end loop;
 29        ret_result(i).rtn_outputs := calc_val;
 30     end loop;
 31  
 32     return ret_result;
 33  
 34  end;
 35  /

Function created.


Now, the queries below work fine, but return the data set as a single collection column on every row:


SQL> select table_name, rtn_outputs
  2  from  table(cast(f_test(t_inputs(o_input('MTD', '01-Mar-2011', '04-Mar-2011')
  3                                  )
  4                         ) as t_result));

TABLE_NAME                     RTN_OUTPUTS(RTN, DEBUG_INFO)
------------------------------ ---------------------------------------------------------
AUDIT_ACTIONS                  T_OUTPUTS(O_OUTPUT(4.15, 'Debug info for param 1 MTD'))
DUAL                           T_OUTPUTS(O_OUTPUT(5.15, 'Debug info for param 1 MTD'))
IMPDP_STATS                    T_OUTPUTS(O_OUTPUT(6.15, 'Debug info for param 1 MTD'))
KU$NOEXP_TAB                   T_OUTPUTS(O_OUTPUT(7.15, 'Debug info for param 1 MTD'))
KU$_LIST_FILTER_TEMP           T_OUTPUTS(O_OUTPUT(8.15, 'Debug info for param 1 MTD'))
ODCI_SECOBJ$                   T_OUTPUTS(O_OUTPUT(9.15, 'Debug info for param 1 MTD'))
ODCI_WARNINGS$                 T_OUTPUTS(O_OUTPUT(10.15, 'Debug info for param 1 MTD'))
PLAN_TABLE$                    T_OUTPUTS(O_OUTPUT(11.15, 'Debug info for param 1 MTD'))
PSTUBTBL                       T_OUTPUTS(O_OUTPUT(12.15, 'Debug info for param 1 MTD'))
STMT_AUDIT_OPTION_MAP          T_OUTPUTS(O_OUTPUT(13.15, 'Debug info for param 1 MTD'))

10 rows selected.

SQL> select table_name, rtn_outputs
  2  from  table(cast(f_test(t_inputs(o_input('MTD', '01-Mar-2011', '04-Mar-2011'),
  3                                   o_input('YTD', '01-Jan-2011', '04-Mar-2011')
  4                                  )
  5                         ) as t_result));

TABLE_NAME                     RTN_OUTPUTS(RTN, DEBUG_INFO)
------------------------------ --------------------------------------------------------------------------------------------------------------
AUDIT_ACTIONS                  T_OUTPUTS(O_OUTPUT(4.15, 'Debug info for param 1 MTD'), O_OUTPUT(6.15, 'Debug info for param 2 YTD'))
DUAL                           T_OUTPUTS(O_OUTPUT(5.15, 'Debug info for param 1 MTD'), O_OUTPUT(7.15, 'Debug info for param 2 YTD'))
IMPDP_STATS                    T_OUTPUTS(O_OUTPUT(6.15, 'Debug info for param 1 MTD'), O_OUTPUT(8.15, 'Debug info for param 2 YTD'))
KU$NOEXP_TAB                   T_OUTPUTS(O_OUTPUT(7.15, 'Debug info for param 1 MTD'), O_OUTPUT(9.15, 'Debug info for param 2 YTD'))
KU$_LIST_FILTER_TEMP           T_OUTPUTS(O_OUTPUT(8.15, 'Debug info for param 1 MTD'), O_OUTPUT(10.15, 'Debug info for param 2 YTD'))
ODCI_SECOBJ$                   T_OUTPUTS(O_OUTPUT(9.15, 'Debug info for param 1 MTD'), O_OUTPUT(11.15, 'Debug info for param 2 YTD'))
ODCI_WARNINGS$                 T_OUTPUTS(O_OUTPUT(10.15, 'Debug info for param 1 MTD'), O_OUTPUT(12.15, 'Debug info for param 2 YTD'))
PLAN_TABLE$                    T_OUTPUTS(O_OUTPUT(11.15, 'Debug info for param 1 MTD'), O_OUTPUT(13.15, 'Debug info for param 2 YTD'))
PSTUBTBL                       T_OUTPUTS(O_OUTPUT(12.15, 'Debug info for param 1 MTD'), O_OUTPUT(14.15, 'Debug info for param 2 YTD'))
STMT_AUDIT_OPTION_MAP          T_OUTPUTS(O_OUTPUT(13.15, 'Debug info for param 1 MTD'), O_OUTPUT(15.15, 'Debug info for param 2 YTD'))

10 rows selected.


My question is - how do I access individual elements in the collection T_OUTPUTS of every row?

For example, for the second query above, I am looking for something like

select table_name, rtn_outputs(1).rtn, rtn_outputs(1).debug_info, rtn_outputs(2).rtn, rtn_outputs(2).debug_info
from  table(cast(f_test(t_inputs(o_input('MTD', '01-Mar-2011', '04-Mar-2011'),
                                 o_input('YTD', '01-Jan-2011', '04-Mar-2011')
                                )
                       ) as t_result));


The reason I have T_INPUTS and T_OUTPUTS is that the number of parameters can vary by query, while the underlying functionality (calculating rates of return in this case) is essentially the same. By accepting a TABLE type of inputs, I can support any number of parameters, and I don't need to change f_test if the number of parameters changes. The only change required is in the query itself: adjust the SELECT clause based on the number of parameters passed to t_inputs.

Now, the reason I want a function-based output in the first place is lengthy, and probably not relevant here. In case it is, I'll be glad to share it.

So, is there a way?

I use Oracle 10g (Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production)

Thanks in advance for your time.

-Shreyas

Tom Kyte

Followup  

March 07, 2011 - 9:46 am UTC

your plsql code can certainly reference them that way - your procedural code can access them as subscripted arrays - why do you want/need it in SQL directly?
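
For reference, subscripted access in PL/SQL might look like the following sketch. It assumes f_test is an ordinary (non-pipelined) function returning t_result, with the T_INPUTS/T_OUTPUTS types from above; a pipelined function would instead have to be fetched via TABLE().

```sql
declare
    l_result t_result;
begin
    -- assign the whole result collection to a PL/SQL variable
    l_result := f_test( t_inputs( o_input('MTD', '01-Mar-2011', '04-Mar-2011'),
                                  o_input('YTD', '01-Jan-2011', '04-Mar-2011') ) );

    -- subscripted access: row i of the outer collection,
    -- element 1 of the inner rtn_outputs collection
    for i in 1 .. l_result.count
    loop
        dbms_output.put_line( l_result(i).table_name || ': ' ||
                              l_result(i).rtn_outputs(1).rtn || ' / ' ||
                              l_result(i).rtn_outputs(1).debug_info );
    end loop;
end;
/
```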

Accessing collections in SELECT clause

March 08, 2011 - 1:44 pm UTC

Reviewer: Shreyas from New York, USA

The query needs to be directly plugged into a reporting tool. Eventually, there will be different function calls by different queries, with each query result providing data to a certain page/section of the report.

The calculations are quite complex and also need a lot of data, so needs to be done in PL/SQL. The teams providing data (using PL/SQL packages) and consuming the data (creating reports) are different, so using TYPEs provide a sort of an agreement (at least on the structure) between the 2 teams for independent development.

Thanks again for your time.

-Shreyas

Tom Kyte

Followup  

March 08, 2011 - 2:30 pm UTC

write plsql functions and call them from sql to procedurally process the collection.
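
One way to do that is a small standalone function per attribute, taking the collection and a subscript. A sketch, assuming the T_OUTPUTS type from above (get_rtn here is a hypothetical helper name):

```sql
create or replace function get_rtn( p_outputs in t_outputs, p_idx in integer )
return number
as
begin
    -- return NULL instead of raising SUBSCRIPT_BEYOND_COUNT
    -- when the requested element does not exist
    if p_outputs is null or p_idx > p_outputs.count
    then
        return null;
    end if;
    return p_outputs(p_idx).rtn;
end;
/

select table_name,
       get_rtn(rtn_outputs, 1) rtn1,
       get_rtn(rtn_outputs, 2) rtn2
  from table(cast(f_test(t_inputs(o_input('MTD', '01-Mar-2011', '04-Mar-2011'),
                                  o_input('YTD', '01-Jan-2011', '04-Mar-2011'))
                        ) as t_result));
```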

Accessing collections in SELECT clause

March 09, 2011 - 1:28 pm UTC

Reviewer: Shreyas from New York, USA

Thanks Tom. I had thought about it, but it involved writing separate functions per (nested) type per return value. In the example above, I would need to write 2 separate functions one for .rtn and one for .debug_info, both accepting t_outputs and a subscript as parameters. I wondered if there was some Oracle supplied feature, as I like using inbuilt features (when available) rather than writing my own.

Anyway, in case Oracle developers are interested, this is my 2 cents for a future enhancement.

-Shreyas

Tom Kyte

Followup  

March 10, 2011 - 9:30 am UTC

If you write it in SQL, you'd have to write it each and every time you wanted to do it.

If you write it in plsql, true, you'll do it once per distinct type (if they are subtypes, you might be able to do it once) - but you'll only do it once per distinct type. You win.


Accessing collections in SELECT clause

March 09, 2011 - 3:20 pm UTC

Reviewer: Shreyas from New York, USA

Figured that I could utilize the "member functions" of the TYPE to get what I want without writing "separate" PL/SQL functions.

So, in the above example:

create or replace type o_result_row as object
(table_name varchar2(30),
tablespace_name varchar2(30),
rtn_outputs t_outputs,
member function get_rtn(i integer) return number,
member function get_debug_info(i integer) return varchar2
);
/

create or replace type body o_result_row as
member function get_rtn(i integer) return number
as
begin
return rtn_outputs(i).rtn;
end get_rtn;
member function get_debug_info(i integer) return varchar2
as
begin
return rtn_outputs(i).debug_info;
end get_debug_info;
end;
/

Now I can access the member functions simply by providing an alias for the TABLE that has been CAST:

select a.table_name, a.get_rtn(1) as rtn1, a.get_rtn(2) rtn2, a.get_debug_info(1) di1, a.get_debug_info(2) di2
from table(cast(f_test(t_inputs(o_input('MTD', '01-Mar-2011', '04-Mar-2011'),
o_input('YTD', '01-Jan-2011', '04-Mar-2011')
)
) as t_result)) a;

TABLE_NAME RTN1 RTN2 DI1 DI2
-------------------------- ---------- ---------- ---------------------------- ----------------------------
AUDIT_ACTIONS 4.15 6.15 Debug info for param 1 MTD Debug info for param 2 YTD
DUAL 5.15 7.15 Debug info for param 1 MTD Debug info for param 2 YTD
IMPDP_STATS 6.15 8.15 Debug info for param 1 MTD Debug info for param 2 YTD
KU$NOEXP_TAB 7.15 9.15 Debug info for param 1 MTD Debug info for param 2 YTD
KU$_LIST_FILTER_TEMP 8.15 10.15 Debug info for param 1 MTD Debug info for param 2 YTD
ODCI_SECOBJ$ 9.15 11.15 Debug info for param 1 MTD Debug info for param 2 YTD
ODCI_WARNINGS$ 10.15 12.15 Debug info for param 1 MTD Debug info for param 2 YTD
PLAN_TABLE$ 11.15 13.15 Debug info for param 1 MTD Debug info for param 2 YTD
PSTUBTBL 12.15 14.15 Debug info for param 1 MTD Debug info for param 2 YTD
STMT_AUDIT_OPTION_MAP 13.15 15.15 Debug info for param 1 MTD Debug info for param 2 YTD

10 rows selected.

I take my 2 cents back!

It's nice to learn something new every day! :)

Thanks,
Shreyas

Help needed

October 17, 2012 - 7:18 am UTC

Reviewer: A

Hello Tom,
In the same thread you showed an example of using a nested table. But is there any way I can have an array of objects, where one attribute is itself an array of objects, without using nested tables?

dp_main_obj_tab is a varray of dp_main_obj, which in turn has one object attribute (dp_header_obj) and one varray of objects (dp_obj_tab, based on dp_obj).

My queries are:

1. Is it possible to have a varray of objects that contains another varray of objects?
2. If so, how can I assign and display the values?
3. I'm unable to display the values from dp_obj_tab. How do I display (p_dp_main_obj_tab(1).dp_OBJ_TAB(1).mName)?


Below are the scripts for your reference.

create or replace type dp_obj as object (
mname varchar2(200),
mvalue varchar2(4000),
perioddate date);

create or replace type dp_header_obj as object (
periodenddate date,
periodname varchar2(200));

create or replace type dp_obj_tab as table of dp_obj ;

create or replace type dp_main_obj as object
(
dp_header dp_header_obj,
dp dp_obj_tab);

create or replace type dp_main_obj_tab as table of dp_main_obj;


declare
p_dp_main_obj_tab dp_main_obj_tab := dp_main_obj_tab(null);
begin
p_dp_main_obj_tab(1) := dp_main_obj(
dp_header_obj('17-OCT-2012','annual'),
dp_obj_tab(
dp_obj('MName_1','yes','17-OCT-2012'),
dp_obj('MName_2','no','17-OCT-2012'),
dp_obj('MName_3','error','17-OCT-2012')
));
dbms_output.put_line(p_dp_main_obj_tab(1).dp_header.periodname);
end;

Tom Kyte

Followup  

October 17, 2012 - 3:17 pm UTC

varrays would work the same as nested tables.

just replace TABLE with VARRAY(n) in my example and it will work the same. A varray is like a nested table, but with an upper bound on its size.
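
The difference in the DDL is just the bound. A minimal, self-contained sketch (hypothetical type names):

```sql
-- nested table: no declared upper bound on the number of elements
create or replace type t_nums_nt as table of number;
/

-- varray: same element type, but at most 10 elements
create or replace type t_nums_va as varray(10) of number;
/
```

Construction and subscripting are identical for both; only the maximum size differs, and a stored varray additionally preserves the order of its elements.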

Help needed

October 19, 2012 - 7:30 am UTC

Reviewer: A

Hello Tom,
I tried as you suggested, but I am still not able to do it. Can you please help me out?

create or replace type dp_obj as object (
mname varchar2(200),
mvalue varchar2(4000),
perioddate date);

create or replace type dp_obj_tab as table of dp_obj ;

create or replace type dp_main_obj as object (
periodenddate date,
periodname varchar2(200),
dp_obj_tab_n dp_obj_tab);

create type dp_main_obj_varray as varray (3) of dp_main_obj ;

declare
l_dp_main_obj_varray dp_main_obj_varray(null);
begin
l_dp_main_obj_varray(1) := dp_main_obj(sysdate,'P1',dp_obj_tab(dp_obj('m1','1',sysdate),dp_obj('m2','2',sysdate+1),dp_obj('m3','3',sysdate+3)));
end;

Error report:
ORA-06550: line 2, column 29:
PLS-00566: type name "DP_MAIN_OBJ_VARRAY" cannot be constrained
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
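
The PLS-00566 comes from the declaration, not the types: a varray (or nested table) variable cannot be constrained with a size in its declaration; the bound belongs to the type, and the variable is initialized with the type's constructor. A sketch of the corrected block, using the types from the post above:

```sql
declare
    -- constructor call, not a size constraint in the declaration
    l_dp_main_obj_varray dp_main_obj_varray := dp_main_obj_varray(null);
begin
    l_dp_main_obj_varray(1) :=
        dp_main_obj( sysdate, 'P1',
                     dp_obj_tab( dp_obj('m1','1',sysdate),
                                 dp_obj('m2','2',sysdate+1),
                                 dp_obj('m3','3',sysdate+3) ) );
    -- element 1 of the inner collection of element 1 of the varray
    dbms_output.put_line( l_dp_main_obj_varray(1).dp_obj_tab_n(1).mname );
end;
/
```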



Thanks
Tom Kyte

Followup  

October 23, 2012 - 11:24 am UTC

ops$tkyte%ORA11GR2> create or replace type dp_obj as object (
  2  mname     varchar2(200),
  3  mvalue    varchar2(4000),
  4  perioddate date);
  5  /

Type created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace type dp_header_obj as object (
  2  periodenddate  date,
  3  periodname varchar2(200));
  4  /

Type created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace type dp_obj_tab as varray(10) of dp_obj ;
  2  /

Type created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace type dp_main_obj as object
  2  (
  3  dp_header dp_header_obj,
  4  dp dp_obj_tab);
  5  /

Type created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace type dp_main_obj_tab as  varray(3) of dp_main_obj;
  2  /

Type created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2  p_dp_main_obj_tab dp_main_obj_tab := dp_main_obj_tab(null);
  3  begin
  4  p_dp_main_obj_tab(1) := dp_main_obj(
  5      dp_header_obj('17-OCT-2012','annual'),
  6      dp_obj_tab(
  7          dp_obj('MName_1','yes','17-OCT-2012'),
  8          dp_obj('MName_2','no','17-OCT-2012'),
  9          dp_obj('MName_3','error','17-OCT-2012')
 10          ));
 11  dbms_output.put_line(p_dp_main_obj_tab(1).dp_header.periodname);
 12  end;
 13  /
annual

PL/SQL procedure successfully completed.

Using collections in Select query

December 28, 2012 - 6:37 am UTC

Reviewer: Pratibha

I know I can use OUT parameter to have more than 1 return values, but I need to use these returned values in a select query. Is there a way to do it?

Also, can I use a collection array in DML? Is there some syntax available to fetch values from a collection, array-style, e.g.:

select mycollection(input)[1], mycollection(input)[2] from dual;


Tom Kyte

Followup  

January 04, 2013 - 11:16 am UTC

in a select, the only plsql you can use is plsql that is a function that takes ONLY in parameters.


If you need to return more than one value - you can return an object type with as many attributes as you like.

ops$tkyte%ORA11GR2> create or replace type myScalarType as object
  2  ( x int,
  3    y date,
  4    z varchar2(30)
  5  )
  6  /

Type created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace function f return myScalarType
  2  as
  3  begin
  4      return myScalarType( 42, sysdate, user );
  5  end;
  6  /

Function created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select f() from dual;

F()(X, Y, Z)
-------------------------------------------------------------------------------
MYSCALARTYPE(42, '04-JAN-13', 'OPS$TKYTE')


I don't know what you mean in the second question really.

You can query collections as though they were tables...

ops$tkyte%ORA11GR2> create or replace type myArraytype as table of myScalarType
  2  /

Type created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2      l_data myArrayType := myArrayType( myScalarType( 1, sysdate, 'a' ),
  3                                         myScalarType( 2, sysdate, 'b' ),
  4                                         myScalarType( 3, sysdate, 'c' ) );
  5  begin
  6      for x in (select x, y, z from TABLE(l_data) )
  7      loop
  8          dbms_output.put_line( x.x || ', ' || x.y || ', ' || x.z );
  9      end loop;
 10  end;
 11  /
1, 04-JAN-13, a
2, 04-JAN-13, b
3, 04-JAN-13, c

PL/SQL procedure successfully completed.

URL mentioned above is not accessible, please provide a proper URL

January 15, 2013 - 11:28 pm UTC

Reviewer: venkat from India

Tom Kyte

Followup  

January 16, 2013 - 11:45 am UTC

I did what you could have done for a dead link to old documentation...

I went to the documentation and plugged jpublisher into the search field and hit enter:


http://www.oracle.com/pls/db112/search?remark=quick_search&word=jpublisher

Collections and Result Set with distinct

January 24, 2013 - 6:34 am UTC

Reviewer: A reader

I have a temporary table which I want to extract DISTINCT values for each of the columns in it;

create global temporary table tt 
  (c1 varchar2(10), 
   c2 varchar2(10),
   c3 varchar2(10),
   c4 varchar2(10),
   c5 varchar2(10)
  );

declare
  type typ_c1s is table of tt.c1%type;
  type typ_c2s is table of tt.c2%type;
  type typ_c3s is table of tt.c3%type;
  type typ_c4s is table of tt.c4%type;
  type typ_c5s is table of tt.c5%type;
  l_c1s typ_c1s;
  l_c2s typ_c2s;
  l_c3s typ_c3s;
  l_c4s typ_c4s;
  l_c5s typ_c5s;
begin
  select distinct c1 bulk collect into l_c1s from tt where c1 is not null;
  select distinct c2 bulk collect into l_c2s from tt where c2 is not null;
  select distinct c3 bulk collect into l_c3s from tt where c3 is not null;
  select distinct c4 bulk collect into l_c4s from tt where c4 is not null;
  select distinct c5 bulk collect into l_c5s from tt where c5 is not null;
  ...more processing...
end;
/


Is there a way to avoid full scanning TT 5 times, and instead get all the distinct column values with a single SQL statement?
Tom Kyte

Followup  

January 30, 2013 - 12:59 pm UTC

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into tt
  2  select mod(rownum,7), mod(rownum,10), mod(rownum,5), mod(rownum,2), mod(rownum,3)
  3    from all_users
  4  /

48 rows created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2    l_c1s sys.odciVarchar2List;
  3    l_c2s sys.odciVarchar2List;
  4    l_c3s sys.odciVarchar2List;
  5    l_c4s sys.odciVarchar2List;
  6    l_c5s sys.odciVarchar2List;
  7  
  8    procedure prt( nm in varchar2, x in sys.odciVarchar2List )
  9    as
 10    begin
 11          dbms_output.put_line( nm );
 12          dbms_output.put_line( 'count = ' || x.count );
 13          for i in 1 .. x.count
 14          loop
 15                  dbms_output.put_line( nm || '(' || i || ') = ' || x(i) );
 16          end loop;
 17    end;
 18  begin
 19          execute immediate
 20          'select cast( collect( distinct c1 ) as sys.odciVarchar2List ),
 21                 cast( collect( distinct c2 ) as sys.odciVarchar2List ),
 22                 cast( collect( distinct c3 ) as sys.odciVarchar2List ),
 23                 cast( collect( distinct c4 ) as sys.odciVarchar2List ),
 24                 cast( collect( distinct c5 ) as sys.odciVarchar2List )
 25                     from tt'
 26            into l_c1s, l_c2s, l_c3s, l_c4s, l_c5s;
 27  
 28          prt( 'c1', l_c1s );
 29          prt( 'c2', l_c2s );
 30          prt( 'c3', l_c3s );
 31          prt( 'c4', l_c4s );
 32          prt( 'c5', l_c5s );
 33  end;
 34  /
c1
count = 7
c1(1) = 0
c1(2) = 1
c1(3) = 2
c1(4) = 3
c1(5) = 4
c1(6) = 5
c1(7) = 6
c2
count = 10
c2(1) = 0
c2(2) = 1
c2(3) = 2
c2(4) = 3
c2(5) = 4
c2(6) = 5
c2(7) = 6
c2(8) = 7
c2(9) = 8
c2(10) = 9
c3
count = 5
c3(1) = 0
c3(2) = 1
c3(3) = 2
c3(4) = 3
c3(5) = 4
c4
count = 2
c4(1) = 0
c4(2) = 1
c5
count = 3
c5(1) = 0
c5(2) = 1
c5(3) = 2

PL/SQL procedure successfully completed.




I'm using native dynamic sql to workaround a known issue whereby plsql doesn't recognize DISTINCT in collect as being valid...

Collections and Result Set with distinct

January 31, 2013 - 4:08 am UTC

Reviewer: A reader

Thank you Tom, I hadn't thought of COLLECT. Whilst looking into it a bit more, I found SET, which I think lets me get away without dynamic SQL, though it needs a new type:

create or replace type vc_list_type as table of varchar2(4000)
/

select set(cast( collect(c1) as vc_list_type )),
       set(cast( collect(c2) as vc_list_type )),
       set(cast( collect(c3) as vc_list_type )),
       set(cast( collect(c4) as vc_list_type )),
       set(cast( collect(c5) as vc_list_type ))
into l_c1s, l_c2s, l_c3s, l_c4s, l_c5s
from tt;
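
As a complete block, the static version might look like the following sketch, assuming the vc_list_type just created and the tt table from earlier; SET() returns the distinct elements of a nested table, which is why it can stand in for COLLECT(DISTINCT ...):

```sql
declare
    l_c1s vc_list_type;
begin
    -- COLLECT builds the nested table, SET() de-duplicates it
    select set( cast( collect(c1) as vc_list_type ) )
      into l_c1s
      from tt;
    dbms_output.put_line( 'distinct c1 count = ' || l_c1s.count );
end;
/
```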


Collections and Result Set with distinct

January 31, 2013 - 4:52 am UTC

Reviewer: A reader

Tom,

Out of interest, I ran RUNSTATS comparing the 5 full table scans with your COLLECT version, and surprisingly the 5 full scans were quicker (I've got about 3 million rows in TT):

Run1 ran in 219 hsecs
Run2 ran in 414 hsecs
run 1 ran in 52.9% of the time

Name                       Run1       Run2     Diff
LATCH.cache buffer chains  48,157     12,237   -35,920
STAT...table scan rows gotten  11,534,320  2,883,580  -8,650,740
STAT...sorts (rows)       0  11,524,320  11,524,320

Run 1 latches total versus runs -- difference and pct
     Run1      Run2    Diff      Pct
    64,242    27,884   -36,358   230.39%


The LATCH and I/O figures were unsurprising, but the 5 full table scans were consistently quicker than COLLECT; perhaps the sorts are the reason it is slower?

From a run-time perspective, neither of them is "slow" - but would you have a preference? Stick with the 5 full table scans, or go for reduced I/O and latches even though it is slightly slower?
Tom Kyte

Followup  

January 31, 2013 - 2:42 pm UTC

what are the "specs" for reproducing this? I took a copy of all objects and copied it over and over again to get 1,000,000 rows, and found the opposite.

looks like you are using a really old runstats - one that pumps out elapsed time instead of cpu time...

big_table%ORA11GR2> @test
big_table%ORA11GR2> exec runStats_pkg.rs_start;

PL/SQL procedure successfully completed.

big_table%ORA11GR2> 
big_table%ORA11GR2> declare
  2    l_c1s sys.odciVarchar2List;
  3    l_c2s sys.odciVarchar2List;
  4    l_c3s sys.odciVarchar2List;
  5    l_c4s sys.odciVarchar2List;
  6    l_c5s sys.odciVarchar2List;
  7  
  8    procedure prt( nm in varchar2, x in sys.odciVarchar2List )
  9    as
 10    begin
 11          dbms_output.put_line( nm );
 12          dbms_output.put_line( 'count = ' || x.count );
 13          for i in 1 .. x.count
 14          loop
 15                  dbms_output.put_line( nm || '(' || i || ') = ' || x(i) );
 16          end loop;
 17    end;
 18  begin
 19          execute immediate
 20          'select cast( collect( distinct owner ) as sys.odciVarchar2List ),
 21                 cast( collect( distinct object_type ) as sys.odciVarchar2List ),
 22                 cast( collect( distinct status ) as sys.odciVarchar2List ),
 23                 cast( collect( distinct temporary ) as sys.odciVarchar2List ),
 24                 cast( collect( distinct secondary ) as sys.odciVarchar2List )
 25                     from big_table'
 26            into l_c1s, l_c2s, l_c3s, l_c4s, l_c5s;
 27  
 28          prt( 'c1', l_c1s );
 29          prt( 'c2', l_c2s );
 30          prt( 'c3', l_c3s );
 31          prt( 'c4', l_c4s );
 32          prt( 'c5', l_c5s );
 33  end;
 34  /
c1
count = 39
c1(1) = A
c1(2) = APEX_030200
c1(3) = APPQOSSYS
c1(4) = APP_ADMIN
c1(5) = APP_ADMIN_USER
c1(6) = APP_CODE
c1(7) = APP_USER
c1(8) = BI
c1(9) = BIG_TABLE
c1(10) = CTXSYS
c1(11) = DBSNMP
c1(12) = DUBI
c1(13) = EXFSYS
c1(14) = FLOWS_FILES
c1(15) = HR
c1(16) = IX
c1(17) = LEO
c1(18) = MDSYS
c1(19) = OLAPSYS
c1(20) = OPS$TKYTE
c1(21) = ORACLE_OCM
c1(22) = ORDDATA
c1(23) = ORDPLUGINS
c1(24) = ORDSYS
c1(25) = OUTLN
c1(26) = OWBSYS
c1(27) = OWBSYS_AUDIT
c1(28) = PERFSTAT
c1(29) = PM
c1(30) = PUBLIC
c1(31) = SCOTT
c1(32) = SH
c1(33) = SI_INFORMTN_SCHEMA
c1(34) = SYS
c1(35) = SYSMAN
c1(36) = SYSTEM
c1(37) = UTIL
c1(38) = WMSYS
c1(39) = XDB
c2
count = 39
c2(1) = CLUSTER
c2(2) = CONSUMER GROUP
c2(3) = CONTEXT
c2(4) = DESTINATION
c2(5) = DIRECTORY
c2(6) = EDITION
c2(7) = EVALUATION CONTEXT
c2(8) = FUNCTION
c2(9) = INDEX
c2(10) = INDEX PARTITION
c2(11) = INDEXTYPE
c2(12) = JAVA CLASS
c2(13) = JAVA DATA
c2(14) = JAVA RESOURCE
c2(15) = JAVA SOURCE
c2(16) = JOB
c2(17) = JOB CLASS
c2(18) = LIBRARY
c2(19) = MATERIALIZED VIEW
c2(20) = OPERATOR
c2(21) = PACKAGE
c2(22) = PACKAGE BODY
c2(23) = PROCEDURE
c2(24) = PROGRAM
c2(25) = RULE
c2(26) = RULE SET
c2(27) = SCHEDULE
c2(28) = SCHEDULER GROUP
c2(29) = SEQUENCE
c2(30) = SYNONYM
c2(31) = TABLE
c2(32) = TABLE PARTITION
c2(33) = TABLE SUBPARTITION
c2(34) = TRIGGER
c2(35) = TYPE
c2(36) = TYPE BODY
c2(37) = VIEW
c2(38) = WINDOW
c2(39) = XML SCHEMA
c3
count = 2
c3(1) = INVALID
c3(2) = VALID
c4
count = 2
c4(1) = N
c4(2) = Y
c5
count = 2
c5(1) = N
c5(2) = Y

PL/SQL procedure successfully completed.

big_table%ORA11GR2> 
big_table%ORA11GR2> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

big_table%ORA11GR2> declare
  2    l_c1s sys.odciVarchar2List;
  3    l_c2s sys.odciVarchar2List;
  4    l_c3s sys.odciVarchar2List;
  5    l_c4s sys.odciVarchar2List;
  6    l_c5s sys.odciVarchar2List;
  7  
  8    procedure prt( nm in varchar2, x in sys.odciVarchar2List )
  9    as
 10    begin
 11          dbms_output.put_line( nm );
 12          dbms_output.put_line( 'count = ' || x.count );
 13          for i in 1 .. x.count
 14          loop
 15                  dbms_output.put_line( nm || '(' || i || ') = ' || x(i) );
 16          end loop;
 17    end;
 18  begin
 19          select distinct owner bulk collect into l_c1s from big_table;
 20          select distinct object_type bulk collect into l_c2s from big_table;
 21          select distinct status bulk collect into l_c3s from big_table;
 22          select distinct temporary bulk collect into l_c4s from big_table;
 23          select distinct secondary bulk collect into l_c5s from big_table;
 24  
 25          prt( 'c1', l_c1s );
 26          prt( 'c2', l_c2s );
 27          prt( 'c3', l_c3s );
 28          prt( 'c4', l_c4s );
 29          prt( 'c5', l_c5s );
 30  end;
 31  /
c1
count = 39
c1(1) = OWBSYS_AUDIT
c1(2) = APP_CODE
c1(3) = DUBI
c1(4) = MDSYS
c1(5) = UTIL
c1(6) = BIG_TABLE
c1(7) = PUBLIC
c1(8) = OUTLN
c1(9) = CTXSYS
c1(10) = OLAPSYS
c1(11) = FLOWS_FILES
c1(12) = OWBSYS
c1(13) = HR
c1(14) = SYSTEM
c1(15) = ORACLE_OCM
c1(16) = OPS$TKYTE
c1(17) = EXFSYS
c1(18) = APEX_030200
c1(19) = SCOTT
c1(20) = APP_ADMIN_USER
c1(21) = DBSNMP
c1(22) = ORDSYS
c1(23) = ORDPLUGINS
c1(24) = SYSMAN
c1(25) = PM
c1(26) = SH
c1(27) = A
c1(28) = APP_ADMIN
c1(29) = APPQOSSYS
c1(30) = XDB
c1(31) = ORDDATA
c1(32) = IX
c1(33) = BI
c1(34) = SYS
c1(35) = WMSYS
c1(36) = SI_INFORMTN_SCHEMA
c1(37) = LEO
c1(38) = PERFSTAT
c1(39) = APP_USER
c2
count = 39
c2(1) = EDITION
c2(2) = INDEX PARTITION
c2(3) = TABLE SUBPARTITION
c2(4) = CONSUMER GROUP
c2(5) = SEQUENCE
c2(6) = TABLE PARTITION
c2(7) = SCHEDULE
c2(8) = RULE
c2(9) = JAVA DATA
c2(10) = PROCEDURE
c2(11) = OPERATOR
c2(12) = DESTINATION
c2(13) = WINDOW
c2(14) = SCHEDULER GROUP
c2(15) = PACKAGE
c2(16) = PACKAGE BODY
c2(17) = LIBRARY
c2(18) = PROGRAM
c2(19) = RULE SET
c2(20) = CONTEXT
c2(21) = TYPE BODY
c2(22) = JAVA RESOURCE
c2(23) = XML SCHEMA
c2(24) = TRIGGER
c2(25) = JOB CLASS
c2(26) = DIRECTORY
c2(27) = MATERIALIZED VIEW
c2(28) = TABLE
c2(29) = INDEX
c2(30) = SYNONYM
c2(31) = VIEW
c2(32) = FUNCTION
c2(33) = JAVA CLASS
c2(34) = JAVA SOURCE
c2(35) = INDEXTYPE
c2(36) = CLUSTER
c2(37) = TYPE
c2(38) = JOB
c2(39) = EVALUATION CONTEXT
c3
count = 2
c3(1) = INVALID
c3(2) = VALID
c4
count = 2
c4(1) = Y
c4(2) = N
c5
count = 2
c5(1) = Y
c5(2) = N

PL/SQL procedure successfully completed.

big_table%ORA11GR2> exec runStats_pkg.rs_stop(10000)
Run1 ran in 171 cpu hsecs
Run2 ran in 450 cpu hsecs
run 1 ran in 38% of the time

Name                                  Run1        Run2        Diff
STAT...OS Page reclaims                  0      11,835      11,835
LATCH.row cache objects             13,597         163     -13,434
STAT...Cached Commit SCN refer      14,487      72,435      57,948
STAT...no work - consistent re      14,529      72,645      58,116
STAT...physical reads direct        14,529      72,645      58,116
STAT...physical reads               14,529      72,645      58,116
STAT...consistent gets direct       14,529      72,645      58,116
STAT...table scan blocks gotte      14,529      72,645      58,116
STAT...consistent gets              14,547      72,680      58,133
STAT...session logical reads        14,576      72,719      58,143
STAT...logical read bytes from     385,024     606,208     221,184
STAT...table scan rows gotten    1,000,000   5,000,000   4,000,000
STAT...sorts (rows)              5,000,000           0  -5,000,000
STAT...physical read total byt 119,021,568 595,107,840 476,086,272
STAT...cell physical IO interc 119,021,568 595,107,840 476,086,272
STAT...physical read bytes     119,021,568 595,107,840 476,086,272

Run1 latches total versus runs -- difference and pct
        Run1        Run2        Diff       Pct
      17,374       1,452     -15,922  1,196.56%

PL/SQL procedure successfully completed.

big_table%ORA11GR2> 

Collections and Result Set with distinct

February 01, 2013 - 5:02 am UTC

Reviewer: A reader

create global temporary table tt 
  (c1 varchar2(10), 
   c2 varchar2(10),
   c3 varchar2(10),
   c4 varchar2(10),
   c5 varchar2(10)
  );

insert into tt
select owner, object_type, status, temporary, secondary
from   dba_objects,
      (select *
       from   dual
       connect by level <= 13
      );

-- 1071018 rows inserted

exec runstats_pkg.rs_start;

...then exactly the same code as yours above but with the following runstats...

Run1 ran in 295 hsecs
Run2 ran in 120 hsecs
run 1 ran in 245.83% of the time

Name                                  Run1        Run2        Diff
LATCH.cache buffer chains            9,479      46,450      36,971
STAT...session uga memory          -65,488      65,488     130,976
STAT...session pga memory          -65,536      65,536     131,072
STAT...table scan rows gotten    1,071,018   5,355,090   4,284,072
STAT...sorts (rows)              5,355,090           0  -5,355,090

Run1 latches total versus runs -- difference and pct
        Run1        Run2        Diff       Pct
      25,391      48,619      23,228    52.22%


We're on 11.2.0.1

Where do I get a later copy of runstats ?
Tom Kyte

Followup  

February 01, 2013 - 10:53 am UTC

Interesting, just ran your exact test (11.2.0.3 on Oracle Enterprise Linux) and got:

ops$tkyte%ORA11GR2> exec runStats_pkg.rs_stop(10000)
Run1 ran in 183 cpu hsecs
Run2 ran in 427 cpu hsecs
run 1 ran in 42.86% of the time

Name                                  Run1        Run2        Diff
STAT...session logical reads         4,690      20,509      15,819
STAT...consistent gets               4,473      20,460      15,987
STAT...consistent gets from ca       4,473      20,460      15,987
STAT...no work - consistent re       4,120      20,365      16,245
STAT...table scan blocks gotte       4,075      20,365      16,290
STAT...consistent gets from ca       4,025      20,439      16,414
STAT...undo change vector size      28,868       3,740     -25,128
LATCH.cache buffers chains          11,423      41,783      30,360
STAT...redo size                    69,384       4,864     -64,520
STAT...session uga memory          131,024           0    -131,024
STAT...session pga memory          851,968    -655,360  -1,507,328
STAT...table scan rows gotten    1,005,036   5,025,065   4,020,029
STAT...sorts (rows)              4,930,705           0  -4,930,705
STAT...session uga memory max            0   6,624,644   6,624,644
STAT...session pga memory max            0   6,684,672   6,684,672
STAT...physical read bytes       7,725,056           0  -7,725,056
STAT...physical read total byt   7,725,056           0  -7,725,056
STAT...cell physical IO interc   7,725,056           0  -7,725,056
STAT...logical read bytes from  38,420,480 168,009,728 129,589,248

Run1 latches total versus runs -- difference and pct
        Run1        Run2        Diff       Pct
      17,078      45,259      28,181     37.73%

PL/SQL procedure successfully completed.




what is your platform?

can you tkprof this so I can compare tkprofs?



create or replace package body runstats_pkg
as

g_start number;
g_run1  number;
g_run2  number;

procedure rs_start
is
begin
    delete from run_stats;

    insert into run_stats
    select 'before', stats.* from stats;

    g_start := dbms_utility.get_cpu_time;
end;

procedure rs_middle
is
begin
    g_run1 := (dbms_utility.get_cpu_time-g_start);

    insert into run_stats
    select 'after 1', stats.* from stats;
    g_start := dbms_utility.get_cpu_time;

end;

procedure rs_stop(p_difference_threshold in number default 0)
is
begin
    g_run2 := (dbms_utility.get_cpu_time-g_start);

    dbms_output.put_line
    ( 'Run1 ran in ' || g_run1 || ' cpu hsecs' );
    dbms_output.put_line
    ( 'Run2 ran in ' || g_run2 || ' cpu hsecs' );
    if ( g_run2 <> 0 )
    then
    dbms_output.put_line
    ( 'run 1 ran in ' || round(g_run1/g_run2*100,2) ||
      '% of the time' );
    end if;
    dbms_output.put_line( chr(9) );

    insert into run_stats
    select 'after 2', stats.* from stats;

    dbms_output.put_line
    ( rpad( 'Name', 30 ) || lpad( 'Run1', 12 ) ||
      lpad( 'Run2', 12 ) || lpad( 'Diff', 12 ) );

    for x in
    ( select rpad( a.name, 30 ) ||
             to_char( b.value-a.value, '999,999,999' ) ||
             to_char( c.value-b.value, '999,999,999' ) ||
             to_char( ( (c.value-b.value)-(b.value-a.value)), '999,999,999' ) data
        from run_stats a, run_stats b, run_stats c
       where a.name = b.name
         and b.name = c.name
         and a.runid = 'before'
         and b.runid = 'after 1'
         and c.runid = 'after 2'
         -- and (c.value-a.value) > 0
         and abs( (c.value-b.value) - (b.value-a.value) )
               > p_difference_threshold
       order by abs( (c.value-b.value)-(b.value-a.value))
    ) loop
        dbms_output.put_line( x.data );
    end loop;

    dbms_output.put_line( chr(9) );
    dbms_output.put_line
    ( 'Run1 latches total versus runs -- difference and pct' );
    dbms_output.put_line
    ( lpad( 'Run1', 12 ) || lpad( 'Run2', 12 ) ||
      lpad( 'Diff', 12 ) || lpad( 'Pct', 10 ) );

    for x in
    ( select to_char( run1, '999,999,999' ) ||
             to_char( run2, '999,999,999' ) ||
             to_char( diff, '999,999,999' ) ||
             to_char( round( run1/decode( run2, 0, to_number(null), run2) *100,2 ), '99,999.99' ) || '%' data
        from ( select sum(b.value-a.value) run1, sum(c.value-b.value) run2,
                      sum( (c.value-b.value)-(b.value-a.value)) diff
                 from run_stats a, run_stats b, run_stats c
                where a.name = b.name
                  and b.name = c.name
                  and a.runid = 'before'
                  and b.runid = 'after 1'
                  and c.runid = 'after 2'
                  and a.name like 'LATCH%'
                )
    ) loop
        dbms_output.put_line( x.data );
    end loop;
end;

end;
/


maximum record count

February 09, 2013 - 12:35 am UTC

Reviewer: Seenuvasan K from INDIA

Hi TOM,

I am using Oracle 8i. Right now I am storing 25,169,486 records of finance data in a single table with multiple indexes. How many records can a single table hold in Oracle 8i? If there is a crash I will need to recover from a hard backup, so please let me know your suggestions.
Tom Kyte

Followup  

February 11, 2013 - 9:57 am UTC

billions and billions and more

http://docs.oracle.com/cd/A87860_01/doc/index.htm

25 million is pretty small, you won't hit a limit.


Now that said, there are practical limits - as the segment gets large (10's of gigabytes) you will want to look at partitioning the data for administrative purposes. It is not easy to administer tables that are many 10's of gigabytes in size.
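
To illustrate the partitioning suggestion, a range-partitioned table for date-keyed finance data might be declared like this (table and partition names are hypothetical):

```sql
-- illustrative sketch only: range partitioning by transaction date,
-- so each year's data lives in its own independently manageable segment
create table finance_data
( txn_id   number,
  txn_date date,
  amount   number
)
partition by range (txn_date)
( partition p2012 values less than (to_date('01-01-2013','dd-mm-yyyy')),
  partition p2013 values less than (to_date('01-01-2014','dd-mm-yyyy')),
  partition pmax  values less than (maxvalue)
);
```

Each partition can then be backed up, moved, or truncated on its own, which is the administrative win once a table reaches tens of gigabytes.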

varrays and select from that

May 07, 2013 - 12:03 pm UTC

Reviewer: prad from india

declare

type src_prt_hv_rec is record (
table_owner dba_tab_partitions.table_owner%type,
table_name dba_tab_partitions.table_name%type,

partition_name dba_tab_partitions.partition_name%type,
parition_position dba_tab_partitions.partition_position%type,
partition_high_value dba_tab_partitions.high_value%type,
subpartition_count dba_tab_partitions.subpartition_count%type,
partition_high_value_length dba_tab_partitions.high_value_length%type,

spname dba_tab_subpartitions.subpartition_name%type,
subpartition_position dba_tab_subpartitions.subpartition_position%type,
subpartition_high_value dba_tab_partitions.high_value%type,
subpartition_high_value_length dba_tab_partitions.high_value_length%type
);

type src_prt_hv_arr IS varray(100000000) of src_prt_hv_rec;

v_tmp number;
src_prt_hv src_prt_hv_arr;
c varchar2(100);

cursor src_prt_hv_cur is
select dtp.table_owner,dtp.table_name,dtp.partition_name,dtp.partition_position,dtp.high_value,dtp.subpartition_count,
dtp.high_value_length,dtsp.subpartition_name,dtsp.subpartition_position sp_position,dtsp.high_value sp_high_value,
--dtp.last_analyzed,dtp.num_rows ,
dtsp.high_value_length sp_high_value_length
from dba_tab_partitions dtp,dba_tab_subpartitions dtsp
where dtp.table_owner=dtsp.table_owner(+)
and dtp.table_name=dtsp.table_name(+)
and dtp.partition_name=dtsp.partition_name(+)
and (dtp.last_analyzed is not null and dtsp.last_analyzed(+) is not null);

v_endCounter number :=10;
type src_prt_hv_rc is ref cursor;
src_prv_hv_cv src_prt_hv_rc;
v_date date;

v_part varchar2(100);
v_sub_part_count number;
v_src_part varchar2(1000);
v_tab_name varchar2(100);
v_par_high_val_len number;
v_subpar_high_val_len number;

begin

open src_prv_hv_cv for
select dtp.table_owner,dtp.table_name,dtp.partition_name,dtp.partition_position,dtp.high_value,dtp.subpartition_count,
dtp.high_value_length,dtsp.subpartition_name spname,dtsp.subpartition_position sp_position,dtsp.high_value sp_high_value,
--dtp.last_analyzed,dtp.num_rows ,
dtsp.high_value_length sp_high_value_length
from dba_tab_partitions dtp,dba_tab_subpartitions dtsp
where dtp.table_owner='CDR_OWNER28700'
and dtp.table_name ='KAL_RAP_CHARGE_TYPE_FACT'
and dtp.table_owner=dtsp.table_owner(+)
and dtp.table_name=dtsp.table_name(+)
and dtp.partition_name=dtsp.partition_name(+)
and (dtp.last_analyzed is not null and dtsp.last_analyzed(+) is not null);

fetch src_prv_hv_cv bulk collect into src_prt_hv;

close src_prv_hv_cv;


begin


for j in (select table_name,partition_name,subpartition_name,to_date(subpartition_high_value,'yyyymmdd') sphv
from CDR_OWNER28700.master_stats_config where table_name = 'KAL_RAP_CHARGE_TYPE_FACT')
loop
dbms_output.put_line (j.sphv||' dd');

if src_prt_hv.exists(j.sphv)
then
dbms_output.put_line('ds');
end if;

SELECT COUNT(*) INTO c
FROM TABLE(select table_name from rc_prt_hv_cv );

end loop;
end;
end;
/


I have a record type and want to select a particular value from a column. How can I best do it? Please advise. Thanks.


Tom Kyte

Followup  

May 07, 2013 - 3:40 pm UTC

in current versions - 11gr2 and before - you cannot select from a plsql record type, you'd have to create an object type, a nested table of that object type (outside of plsql) and use those types instead.


ops$tkyte%ORA11GR2> create or replace type myScalarType as object
  2  ( x number, y date, z varchar2(30) )
  3  /

Type created.

ops$tkyte%ORA11GR2> create or replace type myTableType as table of myScalarType
  2  /

Type created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2          l_data myTableType;
  3  begin
  4          select myScalarType( user_id, created, username )
  5            bulk collect into l_data
  6            from all_users;
  7  
  8          for x in (select t.z from TABLE(l_data) t where rownum <= 5)
  9          loop
 10                  dbms_output.put_line( x.z );
 11          end loop;
 12  end;
 13  /
A
OPS$TKYTE
REPORT_USER
MAIN_DB
RDC_WZ

PL/SQL procedure successfully completed.




Fetch with bulk collect

September 25, 2013 - 3:18 pm UTC

Reviewer: Vikas from UK

Hi Tom,
Basically I am trying to fetch a nested table column from a table and populate it into a variable of the same nested table type in PL/SQL. The issue is that when I use FETCH with the BULK COLLECT INTO clause I get a "type mismatch" error, but the same code without BULK COLLECT runs without error.

I know that I could have used the TABLE function to un-nest the nested table column instead of using a loop to iterate through each element. But I am curious: despite the variable having the same nested table type, why does the fetch with BULK COLLECT return a "type mismatch" error? I might be missing something.

The code is:
CREATE TYPE nt_type AS TABLE OF NUMBER;
/

CREATE TABLE t1 (aa NUMBER, bb nt_type)nested table bb store as bb_nt;
insert into t1 values(1,nt_type(10,20));

insert into t1 values(2,nt_type(10,20,30,40,50));
commit;

--The below block works if BULK COLLECT IS NOT USED.
DECLARE
CURSOR C1 IS
SELECT bb
FROM t1 where rownum=1;
--v nt_type:=nt_type();
v nt_type;
BEGIN
OPEN c1;
fetch c1 into v;
for i in v.first..v.count loop
dbms_output.put_line(v(i) );
end loop;
close c1;
END;
/

--The below block DOES NOT work if BULK COLLECT IS USED.
DECLARE
CURSOR C1 IS
SELECT bb
FROM t1;
--v nt_type:=nt_type();
v nt_type;
BEGIN
OPEN c1;
fetch c1 bulk collect into v;
for i in v.first..v.count loop
dbms_output.put_line(v(i) );
end loop;
close c1;
END;
/
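For what it is worth, the mismatch arises because BULK COLLECT fills one collection element per fetched row; since each fetched row here already contains a nested table, the target would itself have to be a collection whose elements are of the nested table type. A minimal sketch of that idea (assuming the `t1`/`nt_type` definitions above; this is not the original poster's code):

```sql
-- Sketch: BULK COLLECT needs one target element per row, so when the
-- selected column is a nested table, the target must be a collection
-- OF that nested table type -- one nt_type value per fetched row.
DECLARE
    TYPE nt_of_nt IS TABLE OF nt_type;   -- collection of nested tables
    v nt_of_nt;
BEGIN
    SELECT bb BULK COLLECT INTO v FROM t1;
    FOR i IN 1 .. v.COUNT LOOP           -- outer loop: one entry per row
        FOR j IN 1 .. v(i).COUNT LOOP    -- inner loop: elements of that row's bb
            dbms_output.put_line( v(i)(j) );
        END LOOP;
    END LOOP;
END;
/
```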


Thanks and regards,
Vikas

MEMBER OF and SUBMULTISET OF question

January 26, 2015 - 9:19 pm UTC

Reviewer: A reader

Tom,

While playing with this I found that I am not quite happy with the execution plan. When running either query:

select t.parentid from V_TESTPARENTCHILD t where 'child11' member of children;
OR
select t.parentid from V_TESTPARENTCHILD t where string_table('child11','child12') submultiset of children;

I was hoping that Oracle would use the corresponding index on the CHILD table first to find records matching 'child11' and then use parent_id to retrieve the actual record (or even skip that step completely, because it can be retrieved from the index). But no matter what I do, the plan does a full table (or index) scan on PARENT first, and only afterwards checks CHILD for the presence of the searched value.
Should it be like this, or am I missing something?

set define off
spool testparentchild.log

create table PARENT
(
id VARCHAR2(32) not null
)
;
alter table PARENT
add constraint PK_TEST_PARENT primary key (ID);

create table CHILD
(
id VARCHAR2(32) not null,
parent_id VARCHAR2(32) not null,
name VARCHAR2(32) not null
)
;
create unique index U_TEST_CHILD on CHILD (NAME, PARENT_ID);
create unique index U_TEST_CHILD_REV on CHILD (PARENT_ID, NAME);
alter table CHILD
add constraint PK_TEST_CHILD primary key (ID);
alter table CHILD
add constraint FK_CHILD2PARENT foreign key (PARENT_ID)
references PARENT (ID);
create or replace type string_table as table of varchar2(15)
/

create or replace type test_parent_child as object
(
parentID varchar2(32),
children string_table
)
/

create or replace force view v_testparentchild of test_parent_child with object oid (parentid) as
select t.id ,
cast(multiset
(select c.name from child c where c.parent_id = t.id) as
string_table) as children
from parent t;
insert into parent(id) values ('parent1');
insert into parent(id) values ('parent2');
insert into child(id,parent_id,name) values('child11', 'parent1', 'child11');
insert into child(id,parent_id,name) values('child12', 'parent1', 'child12');
insert into child(id,parent_id,name) values('child21', 'parent2', 'child21');
insert into child(id,parent_id,name) values('child22', 'parent2', 'child22');
commit;
spool off
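One way to see the plan the reviewer was hoping for is to express the predicate directly against the base tables rather than through the object view: the MEMBER OF condition is evaluated against a collection built per PARENT row by the CAST(MULTISET ...) subquery, so the optimizer has little choice but to scan PARENT and probe CHILD for each row. A hedged sketch of an equivalent relational rewrite (an assumption about intent, not the poster's code) that can drive from the U_TEST_CHILD index on (NAME, PARENT_ID):

```sql
-- Sketch: the MEMBER OF predicate rewritten against the base tables,
-- so the optimizer can start from CHILD via the (name, parent_id) index.
select p.id as parentid
  from parent p
 where exists (select null
                 from child c
                where c.parent_id = p.id
                  and c.name = 'child11');
```

For the SUBMULTISET case, the analogous rewrite would require a matching CHILD row for each name in the list (for example, two EXISTS predicates, or a GROUP BY/HAVING COUNT over `c.name in ('child11','child12')`).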