
Question and Answer

Chris Saxon

Thanks for the question, Pratap.

Asked: June 28, 2002 - 10:31 am UTC

Last updated: June 14, 2022 - 5:06 pm UTC

Version: 9.0.0

Viewed 50K+ times!

You Asked

Hi Tom

Can you please give an example of using pipelined functions in a *package*. I am not able to make it run within a package. I want to return an array of records from the pipelined function, say like

name_1, address_1, city_1, hash value 1
name_2, address_2, city_2, hash value 2
..........
..........

Everywhere the examples given are of standalone functions. Also please refer to bug number 2136476 on Metalink regarding these pipelined functions. I desperately want to use such a wonderful feature.

Thanks and regards

Pratap


and Tom said...

Well, that bug has to do with trying to use a type inside of a package -- which will never work (the fix to the bug is to pump out the correct error message). So, if it is your desire to have the types created inside the package -- that never has been and most likely never will be "supported". We need to use SQL types.

But -- just creating a pipelined function in a package is easy:

tkyte@TKYTE9I.US.ORACLE.COM> create or replace type myScalarType as object
2 ( a int,
3 b date,
4 c varchar2(25)
5 )
6 /

Type created.

tkyte@TKYTE9I.US.ORACLE.COM>
tkyte@TKYTE9I.US.ORACLE.COM> create or replace type myTableType as table of myScalarType
2 /

Type created.

tkyte@TKYTE9I.US.ORACLE.COM>
tkyte@TKYTE9I.US.ORACLE.COM>
tkyte@TKYTE9I.US.ORACLE.COM> create or replace package my_pkg
2 as
3 function f return myTableType PIPELINED;
4 end;
5 /

Package created.

tkyte@TKYTE9I.US.ORACLE.COM> create or replace package body my_pkg
2 as
3 function f return myTableType
4 PIPELINED
5 is
6 begin
7 for i in 1 .. 5
8 loop
9 pipe row ( myScalarType( i, sysdate+i, 'row ' || i ) );
10 end loop;
11 return;
12 end;
13 end;
14 /

Package body created.

tkyte@TKYTE9I.US.ORACLE.COM>
tkyte@TKYTE9I.US.ORACLE.COM> select * from table( my_pkg.f() );

A B C
---------- --------- -------------------------
1 29-JUN-02 row 1
2 30-JUN-02 row 2
3 01-JUL-02 row 3
4 02-JUL-02 row 4
5 03-JUL-02 row 5

tkyte@TKYTE9I.US.ORACLE.COM>


Comments

A reader, June 28, 2002 - 3:37 pm UTC

Tom,

Can you explain PIPELINED functions to me in simple words?
It is really confusing. In which scenario can we use this function?

Thanks

Tom Kyte
June 28, 2002 - 7:02 pm UTC

search for pipelined on this site.

Basically, when you would like a PLSQL (or java or c) routine to be the "source" of data -- instead of a table -- you would use a pipelined function.

Data transformation for example. (perform some procedural logic to transform the data)
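
For instance, a minimal transformation sketch (assuming the myScalarType/myTableType SQL types created in the answer above; the function name and input cursor are illustrative only): the function reads rows from a cursor, applies some procedural logic, and pipes out the transformed rows.

create or replace function transform_f( p_cur in SYS_REFCURSOR )
return myTableType
PIPELINED
as
    l_a number;
    l_b date;
    l_c varchar2(25);
begin
    loop
        fetch p_cur into l_a, l_b, l_c;
        exit when p_cur%notfound;
        -- any procedural transformation goes here before piping the row out
        pipe row( myScalarType( l_a * 2, l_b + 1, upper(l_c) ) );
    end loop;
    close p_cur;
    return;
end;
/

select * from table( transform_f( cursor( select 1, sysdate, 'hello' from dual ) ) );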

Using rownum with pipelined functions

Martin, March 27, 2003 - 10:00 am UTC

Hi Tom,

quick question :

If I declare a pipelined function which does the old n-row table functionality :

SQL> CREATE OR REPLACE TYPE test_type AS TABLE OF NUMBER;
  2  /

Type created.

SQL> CREATE OR REPLACE FUNCTION test_func RETURN test_type 
  2  PIPELINED
  3  AS
  4  BEGIN
  5    FOR i IN 1..100
  6    LOOP
  7      PIPE ROW(i);
  8    END LOOP;
  9    RETURN;
 10  END test_func;
 11  /

Function created.

But then add a rownum predicate to a query on this function, what does oracle do in order to fulfil this? Is the logic of rownum somehow merged into the function itself?

SQL> SELECT * FROM TABLE(test_func) WHERE rownum < 10;

COLUMN_VALUE
------------
           1
           2
           3
           4
           5
           6
           7
           8
           9

9 rows selected.

Thanks in advance
 

Tom Kyte
March 27, 2003 - 10:27 am UTC

depends on the access plan totally -- whether Oracle "aborts" the query against the table() function or if it has to do the entire thing AND THEN apply rownum.

ops$tkyte@ORA920> exec dbms_application_info.set_client_info(0);

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> select * from table(test_func) where rownum < 10;

COLUMN_VALUE
------------
           1
           2
           3
           4
           5
           6
           7
           8
           9

9 rows selected.

ops$tkyte@ORA920> select userenv('client_info') from dual;

USERENV('CLIENT_INFO')
----------------------------------------------------------------
10

ops$tkyte@ORA920>
ops$tkyte@ORA920> exec dbms_application_info.set_client_info(0);

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> select *
  2    from table(test_func) full outer join
  3         dual on (to_char(column_value)=dummy)
  4   where rownum < 10;

COLUMN_VALUE D
------------ -
          72
          47
           3
          45
          89
          66
          70
          90
          37

9 rows selected.

ops$tkyte@ORA920> select userenv('client_info') from dual;

USERENV('CLIENT_INFO')
----------------------------------------------------------------
100
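
The client_info numbers above come from instrumenting the function; presumably something along these lines (a sketch, not necessarily the exact code used):

CREATE OR REPLACE FUNCTION test_func RETURN test_type
PIPELINED
AS
BEGIN
  FOR i IN 1..100
  LOOP
    -- record how many rows have been piped so far; whatever is left here
    -- after the query finishes is the number of rows the function produced
    dbms_application_info.set_client_info( i );
    PIPE ROW(i);
  END LOOP;
  RETURN;
END test_func;
/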

 

And the "infinite" solution?

Martin, March 27, 2003 - 10:38 am UTC

Thanks Tom,

presumably then, declaring a pipelined function which would (if it were a traditional function) NEVER stop, i.e.

SQL> CREATE OR REPLACE FUNCTION test_func RETURN test_type
  2  PIPELINED
  3  AS
  4    i  PLS_INTEGER := 0;
  5  BEGIN
  6    LOOP
  7      i := i + 1;
  8      PIPE ROW(i);
  9    END LOOP;
 10    RETURN;
 11  END test_func;
 12  /

Function created.

SQL> SELECT * FROM TABLE(test_func) WHERE rownum < 10;

COLUMN_VALUE
------------
           1
           2
           3
           4
           5
           6
           7
           8
           9

9 rows selected.

This will ALWAYS abort the query? (I presume it must do, since it'd be pretty difficult to materialize this? ;-))
 

Tom Kyte
March 27, 2003 - 11:04 am UTC

I would not count on it. I would parameterize the function test_func to take the number of rows to generate as INPUT


select * from table(test_func(:n));


and bind in 10.
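
A sketch of that parameterized generator (reusing the test_type nested table of numbers from above; names are illustrative):

create or replace function test_func( p_rows in number ) return test_type
PIPELINED
as
begin
    for i in 1 .. p_rows
    loop
        pipe row( i );
    end loop;
    return;
end;
/

variable n number
exec :n := 10
select * from table( test_func(:n) );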

Sorry, forgot something...

Martin, March 27, 2003 - 10:41 am UTC

Sorry, Tom, forgot to ask...

if you were using pipelined functions for n-row functionality (which I personally think are pretty neat), which would be the most efficient then? passing the number of rows in the function call, TABLE(function(n)), or having an "infinite" loop and use rownum?

Thanks in advance

Tom Kyte
March 27, 2003 - 11:04 am UTC

i would (do) pass in the number of rows to synthesize.

Can I process multiple rows and then pipe one out?

Matt, April 09, 2003 - 2:43 am UTC

Say,I have a table with data in, something like:

my_tab
ID NUMBER,
start DATE,
end DATE

example data is something like:

1 01/01/2000 03/01/2000
1 03/01/2000 08/01/2000
1 08/01/2000 21/01/2000
1 21/02/2000 05/05/2000
2 05/05/2000 25/05/2000
2 25/05/2000 26/05/2000
2 01/06/2000 10/06/2000
3 ...

So, I have 'consecutive' values.

What I need to do is pull these out of the table and process them in sets of ID's so that I can work out the earliest start and latest end dates for each id. The results in this case should be:

1 01/01/2000 05/05/2000
2 05/05/2000 26/05/2000
2 01/06/2000 10/06/2000
3 ...

Can I do this using a pipelined function? Say I have a REF CURSOR which defines the result set above, can I retrieve the data in sets of ID's to process, work out the earliest start dates and latest end dates and then pipe back one or more rows (as in the situation above where ID 1 has four input rows and one output row and ID 2 has 3 input rows and
two output rows...ID 3 may have many more output rows, say no more than 10).

My questions really are:

- How do I pull out the data in ID order in sets of ID's (do I just monitor the fetch result looking for a change in ID?)?
- Can pipelined functions be used in this way - many rows in...processing...and many rows out?

A simple example would really be appreciated.

Best Regards.


Tom Kyte
April 09, 2003 - 9:39 am UTC

In the "olden days" gone by -- we called this "control break processing".  You loop until something changes -- and then output.

I'll do this two ways.  Once using pure SQL (cause its fun, gets the brain going) and once as a pipelined function.

they do the same thing - just differently....


ops$tkyte@ORA920> create table t ( id number, start_date date, end_date date );
Table created.

ops$tkyte@ORA920> alter session set nls_date_format = 'dd/mm/yyyy';
Session altered.

ops$tkyte@ORA920> insert into t values ( 1,'01/01/2000','03/01/2000' );
ops$tkyte@ORA920> insert into t values ( 1,'03/01/2000','08/01/2000' );
ops$tkyte@ORA920> insert into t values ( 1,'08/01/2000','21/01/2000' );<b>
ops$tkyte@ORA920> insert into t values ( 1,'21/01/2000','05/05/2000' );
I assumed that was a typo in the original input -- 21/01, not 21/02...</b>
ops$tkyte@ORA920> insert into t values ( 2,'05/05/2000','25/05/2000' );
ops$tkyte@ORA920> insert into t values ( 2,'25/05/2000','26/05/2000' );
ops$tkyte@ORA920> insert into t values ( 2,'01/06/2000','10/06/2000' );
<b>
ops$tkyte@ORA920> insert into t values ( 4,'01/06/2000','02/06/2000' );
</b>

ops$tkyte@ORA920> begin
  2      dbms_output.put_line( trunc(sysdate+1) || '..' || (trunc(sysdate+10)+1) );
  3      for i in 1 .. 10
  4      loop
  5          insert into t values ( 3, trunc(sysdate+i), trunc(sysdate+i)+1 );
  6      end loop;
  7
  8      dbms_output.put_line( trunc(sysdate+12) || '..' || (trunc(sysdate+100)+1) );
  9      for i in 12 .. 100
 10      loop
 11          insert into t values ( 3, trunc(sysdate+i), trunc(sysdate+i)+1 );
 12      end loop;
 13
 14      dbms_output.put_line( trunc(sysdate+200) || '..' || (trunc(sysdate+200)+1) );
 15      for i in 200 .. 200
 16      loop
 17          insert into t values ( 3, trunc(sysdate+i), trunc(sysdate+i)+1 );
 18      end loop;
 19
 20      dbms_output.put_line( trunc(sysdate+300) || '..' || (trunc(sysdate+325)+1) );
 21      for i in 300 .. 325
 22      loop
 23          insert into t values ( 3, trunc(sysdate+i), trunc(sysdate+i)+1 );
 24      end loop;
 25  end;
 26  /
10/04/2003..20/04/2003
21/04/2003..19/07/2003
26/10/2003..27/10/2003
03/02/2004..29/02/2004

PL/SQL procedure successfully completed.

<b>There I wanted to test boundary value conditions (single row time frames, many multi-row ones)

And now for the SQL.  If you are interested in the "how", take the inline views one by one from the inside out and run them by themselves.  See what they do and then see what adding the next inline view does to that...</b>

ops$tkyte@ORA920> select *
  2    from (
  3  select id,
  4         start_date,
  5         nvl(lead(end_date) over (partition by id order by rn), end_date) end_date,
  6         rn
  7    from (
  8  select t.*,
  9         row_number() over ( partition by id order by start_date ) rn
 10    from (
 11  select id,
 12         lag(end_date) over (partition by id order by start_date) prior_end,
 13         start_date,
 14         end_date,
 15         lead(start_date) over (partition by id order by start_date) next_start
 16    from t) t, ( select rownum x from all_objects where rownum <= 2)
 17   where ( (prior_end is null or
 18            next_start is null or
 19            (prior_end <> start_date) or
 20            (end_date <> next_start))
 21           and x = 1)
 22         or
 23         (x = 2 and (prior_end is null AND next_start is null))
 24         or
 25         (x = 2 and (prior_end <> start_date AND end_date <> next_start ))
 26         )
 27         )
 28   where mod(rn,2) = 1
 29  /

        ID START_DATE END_DATE           RN
---------- ---------- ---------- ----------
         1 01/01/2000 05/05/2000          1
         2 05/05/2000 26/05/2000          1
         2 01/06/2000 10/06/2000          3
         3 10/04/2003 20/04/2003          1
         3 21/04/2003 19/07/2003          3
         3 26/10/2003 27/10/2003          5
         3 03/02/2004 29/02/2004          7
         4 01/06/2000 02/06/2000          1

8 rows selected.

<b>and for the procedural approach, we'll need a collection type to return</b>


ops$tkyte@ORA920> create or replace type myScalarType as object
  2  ( id number, start_date date, end_date date )
  3  /

Type created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> create or replace type myArrayType as table of myScalarType
  2  /

Type created.

<b>and I'll set up my function to take a cursor in -- it can be a query against any table that returns an ID, DATE1, DATE2 sorted by ID, DATE1 and will do this sort of processing for us</b>

ops$tkyte@ORA920> create or replace function f( p_cursor in SYS_REFCURSOR ) return myArrayType
  2  PIPELINED
  3  as
  4      l_id number;
  5      l_start_date date;
  6      l_end_date   date;
  7      l_data       myScalarType;
  8  begin
  9      fetch p_cursor into l_id, l_start_date, l_end_date;
 10      if ( p_cursor%notfound )
 11      then
 12          close p_cursor;
 13          return;
 14      end if;
 15
 16      l_data := myScalarType( l_id, l_start_date, l_end_date );
 17          l_start_date := l_end_date;
 18      loop
 19          if ( l_id <> l_data.id OR l_start_date <> l_data.end_date )
 20          then
 21              pipe row (l_data);
 22              l_data := myScalarType( l_id, l_start_date, l_end_date );
 23          else
 24              l_data.end_date := l_end_date;
 25          end if;
 26
 27          fetch p_cursor into l_id, l_start_date, l_end_date;
 28          exit when p_cursor%notfound;
 29      end loop;
 30      pipe row( l_data );
 31          close p_cursor;
 32      return;
 33  end;
 34  /

Function created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select *
  2    from TABLE( f(CURSOR(select id, start_date, end_date
  3                           from t
  4                          order by id, start_date) ) );

        ID START_DATE END_DATE
---------- ---------- ----------
         1 01/01/2000 05/05/2000
         2 05/05/2000 26/05/2000
         2 01/06/2000 10/06/2000
         3 10/04/2003 20/04/2003
         3 21/04/2003 19/07/2003
         3 26/10/2003 27/10/2003
         3 03/02/2004 29/02/2004
         4 01/06/2000 02/06/2000

8 rows selected.

 

parallel_enable

Pratap, April 09, 2003 - 11:44 pm UTC

Hi Tom

Can we parallel_enable pipelined functions with such "control break processing" logic.

Thanks

Pratap


Tom Kyte
April 10, 2003 - 7:43 am UTC

think about it....

can a pipelined function be run in parallel at all really.. how would that work... what would it mean? Looking at the virtual table function -- how would Oracle know to take "test_func(1000)" and break that into "test_func( 1..500 )" and "test_func( 501..1000 )" -- how does it even know the lower bound is 1? why not -100000? Similar sticky issues exist for the cursor()

so, while syntactically it might work (it might compile) -- it would not really make sense, as it is the generator of data in this case, not a consumer of data -- and you (we) didn't write it to be parallel (cannot in fact)

This is great!

Matt, April 09, 2003 - 11:54 pm UTC

Great stuff. It took a while to get my head around the SQL, but I think I am there. You appear to split the result set in two using a cartesian join and then query each copy to pull out what you need. Can you please explain this part in a few lines?

As usual there were a couple of hidden layers of complexity (that I hadn't shared) which meant that I tried the pipelined function approach.

However, I am getting the following error on using the pipeline:

ORA-03001: unimplemented feature
ORA-02070: database does not support some function in this context

This comes from my (complex) cursor. If I materialize the cursor into a table the pipeline fn works fine, which suggests that the CURSOR operator might not support all the available SQL options. In my cursor I use:

- LEFT OUTER JOIN syntax
- case..when..else
- NVL
- inline views

My guess is that one of these is not understood by the CURSOR operator.

Are you aware of such a restriction?

I'll try and get back to you and let you know what caused the SQL to fall over (I'm on vacation in a few hours).

Thanks once again.

Tom Kyte
April 10, 2003 - 7:54 am UTC

for the ora2070, we'd sort of need to see the query and understand the base objects (eg: are they queried over a dblink -- does it work as a ref cursor by itself).  All of those constructs are OK

ops$tkyte@ORA920> select *
  2    from TABLE( f(CURSOR(select nvl(empno,1234) empno,
  3                                case when sysdate = sysdate then hiredate
  4                                     else null
  5                                 end s, hiredate e
  6                           from (select empno, hiredate, dname
  7                                   from emp left outer join dept on (emp.deptno = dept.deptno))
  8                          order by empno, hiredate ) ) );

        ID START_DAT END_DATE
---------- --------- ---------
      7369 17-DEC-80 17-DEC-80
      7499 20-FEB-81 20-FEB-81
      7521 22-FEB-81 22-FEB-81
      7566 02-APR-81 02-APR-81
      7654 28-SEP-81 28-SEP-81
      7698 01-MAY-81 01-MAY-81
      7782 09-JUN-81 09-JUN-81
      7788 09-DEC-82 09-DEC-82
      7839 17-NOV-81 17-NOV-81
      7844 08-SEP-81 08-SEP-81
      7876 12-JAN-83 12-JAN-83
      7900 03-DEC-81 03-DEC-81
      7902 03-DEC-81 03-DEC-81
      7934 23-JAN-82 23-JAN-82

14 rows selected.


but the error points to "something over a dblink" being an issue....



The cartesian product with the two row set is simply to duplicate singleton entries like:

ops$tkyte@ORA920> insert into t values ( 4,'01/06/2000','02/06/2000' );


What I do is duplicate each row but only keep both if it turns out that it is the only observation (id = 3 has one such row as well) -- meaning there is no "lag" or "lead" row with the start_date/end_date matching to this current row.  I need two cause my logic required it -- I kept a row at the beginning of the series and the row at the end and then used lead to pick off the values I needed and kept just the first row of each pair -- if I didn't have the pair, the logic fell apart. 

A reader, May 04, 2003 - 3:25 pm UTC

hello tom,

after getting a little bit familiar with concepts of (pipelined) table functions I'm looking now for "use cases" of SYS.ANYDATASET, SYS.ANYTYPE, and SYS.ANYDATA types as results of those functions (as mentioned - but not explained - in PL/SQL User's Guide and Reference for Release 2).

I've never used any of those types before. could you please provide a sample of how to use these new types with table functions?

Tom Kyte
May 04, 2003 - 7:48 pm UTC

search for anydata for an example of that

as for the anytype and anydataset -- in 9i and before -- i haven't used them myself. They would normally be used for creating new generic datatypes or generic routines.

A reader, May 18, 2003 - 2:54 am UTC

I have two additional questions on table functions:

1st: is the order of rows returned from the table function within a SQL statement the exact same order in which the entries were "piped" into the internal collection (so that no order by clause is needed)?

2nd: there seems to be no need to let the internal collection GROW. apparently it works fine to NOT extend that collection but "reuse" the one and only entry
over and over again to pipe its content out. but can one rely on such a behaviour?


Tom Kyte
May 18, 2003 - 10:26 am UTC

1) maybe, maybe not, I would not count on it. You should not count on the order of rows in a result set without having an order by. If you join or do something more complex then simply "select * from table( f(x) )", the rows could well come back in some other order.

empirically -- they appear to come back as they are piped.
I do not believe it is documented that this is so.

In fact, collections of type NESTED TABLE are documented to explicitly *not have the ability to preserve order*.


2) that is the way pipelined functions should be coded, not with collections at all -- but with just the scalar type you reuse over and over. I cannot imagine why you would use extend.
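
In other words, something like this sketch (reusing the myScalarType/myTableType from the first answer): a single object instance is created once, then overwritten and piped on every iteration -- no collection, no extend.

create or replace function g return myTableType
PIPELINED
as
    l_row myScalarType := myScalarType( null, null, null );
begin
    for i in 1 .. 5
    loop
        -- overwrite the same instance each time and pipe it out
        l_row.a := i;
        l_row.b := sysdate + i;
        l_row.c := 'row ' || i;
        pipe row( l_row );
    end loop;
    return;
end;
/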

Order of elements

Martin, May 19, 2003 - 4:07 am UTC

Hi Tom,

can I just clarify what you said earlier :

"In fact, collections of type NESTED TABLE are documented to explicitly *not have
the ability to preserve order*."

I always thought that this related to PERSISTENTLY stored nested tables (i.e. a nested table column). For locally declared collections, i.e. assigned as a local "variable", I thought that the only difference between a nested table and a VARRAY (since a VARRAY does retain order) was that the VARRAY is bounded?

Is this not true then?

Thanks in advance

Tom Kyte
May 19, 2003 - 7:50 am UTC

empirically, they are ordered. observation leads you to believe so.

however, nowhere is it documented that this is so; in fact the documentation says otherwise. And especially in regard to pipelined functions, I would say it would be risky to rely on it "being so"

It MUST be true... surely?

Martin, May 19, 2003 - 9:15 am UTC

It must be true that variables of a nested table type must retain their ordering, since otherwise, what's the point of the index?

Placing a value in index 1 must always stay as index 1 surely... if not, methinks it's time to get back to INDEX BY tables! ;-)

Regards

Tom Kyte
May 19, 2003 - 10:38 am UTC

what index?

oh, you mean a subscript. sure, as I said "empirically, it would be observed they do"

however -- it is not documented ( for a variable, I would assume it would be safe )

however -- the question was about PIPED output here -- here, I would not make that assumption. I would NOT rely on that.

from sql there is one sure way to get ordered data -- that is ORDER BY. Omit it at your own risk.
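
For example, against the my_pkg.f function from the original answer:

select * from table( my_pkg.f() ) order by a;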

Thanks

Martin, May 19, 2003 - 11:11 am UTC

Hi Tom,

yes, I was talking about variables (as I mentioned in the previous post).

It's a bit funny that it's not documented. I would think that there are numerous systems out there which rely on the fact that "rows" in nested table variables are in the same position as how they were put in....

Thanks for all your help.


A bit more on ordering...

Adrian Billington, May 19, 2003 - 5:33 pm UTC

Tom / Martin

I would consider that the *order* of a collection is inherently related to the way you access the elements. In a traditional loop through a collection variable (first to last or vice versa), by its very nature you will be accessing the elements in some sequential order (either forwards or backwards depending on the direction of your loop). That you can pretty much guarantee to be consistent, because as Martin says, to load an element in subscript one and then have it turn out elsewhere later on would make collections unusable.

But when you wish to access the elements of a collection using SQL, then all bets are off, as with standard table access. Nowhere does Oracle tell you it will supply rows from a table back in the same order that they were entered - so the same must apply from a SELECT against a cast collection - i.e. use an ORDER BY to guarantee an order.

Regards
Adrian


A reader, July 25, 2003 - 1:21 pm UTC

There is something about pipelined functions in this paper;
if you find anything wrong, please tell me.

http://www.geocities.com/juancarlosreyesp/OracleAskTom.pdf

Hi Tom, what am I doing wrong?

juancarlosreyesp@yahoo.com, August 08, 2003 - 6:20 pm UTC

Hi Tom, can you please explain to me what I'm doing wrong.

DROP TYPE BBV.TYP_CUPNEG;
DROP TYPE BBV.TYO_CUPNEG;

CREATE TYPE BBV.TYO_CUPNEG AS OBJECT
( CUNX_SUBPROD INT,
  CUNX_CADENA  INT )
/

CREATE TYPE BBV.TYP_CUPNEG AS TABLE OF TYO_CUPNEG
/
CREATE PUBLIC SYNONYM TYP_CUPNEG FOR BBV.TYP_CUPNEG;
CREATE PUBLIC SYNONYM TYO_CUPNEG FOR BBV.TYO_CUPNEG;


CREATE OR REPLACE 
FUNCTION bbv.fut_cupneg
RETURN TYP_CUPNEG PIPELINED
IS
BEGIN

 FOR A IN 1..3 LOOP
  PIPE ROW( TYO_CUPNEG( 1,1) );
 END LOOP;


RETURN;
END;
/

Then I got the 3 records but they are empty records
SQL> SELECT * FROM TABLE(FUT_CUPNEG);

CUNX_SUBPROD CUNX_CADENA
------------ -----------




SQL>
Thanks 

Tom Kyte
August 10, 2003 - 12:05 pm UTC

ops$tkyte@ORA920> CREATE TYPE TYO_CUPNEG AS OBJECT
  2  ( CUNX_SUBPROD INT,
  3    CUNX_CADENA  INT )
  4  /

Type created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> CREATE TYPE TYP_CUPNEG AS TABLE OF TYO_CUPNEG
  2  /

Type created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> CREATE OR REPLACE
  2  FUNCTION fut_cupneg
  3  RETURN TYP_CUPNEG PIPELINED
  4  IS
  5  BEGIN
  6
  7   FOR A IN 1..3 LOOP
  8    PIPE ROW( TYO_CUPNEG( 1,1) );
  9   END LOOP;
 10
 11
 12  RETURN;
 13  END;
 14  /

Function created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> SELECT * FROM TABLE(FUT_CUPNEG);

CUNX_SUBPROD CUNX_CADENA
------------ -----------
           1           1
           1           1
           1           1


<b>I cannot reproduce.  do not use public synonyms (ugh), run the example as I did and see what happens.</b>
 

solution

juancarlosreyesp@yahoo.com, August 08, 2003 - 7:20 pm UTC

Hi Tom, the previous problem seems to be a bug.

I did the same thing in this other way and it worked.

DROP TYPE BBV.TYP_CUPNEG;
DROP TYPE BBV.TYO_CUPNEG;

CREATE TYPE BBV.TYO_CUPNEG AS OBJECT
( CUN_SUBPROD VARCHAR2(4),
CUN_CADENA VARCHAR2(50) )
/
CREATE TYPE BBV.TYP_CUPNEG AS TABLE OF TYO_CUPNEG
/
CREATE PUBLIC SYNONYM TYP_CUPNEG FOR BBV.TYP_CUPNEG;
CREATE PUBLIC SYNONYM TYO_CUPNEG FOR BBV.TYO_CUPNEG;
-- Start of DDL Script for Function ADM.FUT_CUPNEG
-- Generated 8-ago-2003 19:15:42 from ADM@SBBV.WORLD

CREATE OR REPLACE
FUNCTION bbv.fut_cupneg return
TYP_CUPNEG
is
l_data TYP_CUPNEG;
cAntSubProducto VARCHAR2(4);
cCupones VARCHAR2(50);
i NUMBER := 0;
begin
l_data := TYP_CUPNEG();
for rCupon in ( SELECT a.cun_subprod, a.cun_nrocup, ROWNUM
FROM bbv.cupneg a
ORDER BY 1,2 ) loop
IF cAntSubProducto IS NOT NULL AND
NOT cAntSubProducto = rCupon.cun_subprod THEN
l_data.extend;
i := i + 1;
l_data(i) := TYO_CUPNEG( cAntSubProducto,RTRIM(cCupones,',') );
cCupones := NULL;
END IF;
cAntSubProducto := rCupon.cun_subprod;
cCupones := cCupones || TRIM(TO_CHAR(rCupon.cun_nrocup)) || ',' ;
end loop;
l_data.extend;
i := i + 1;
l_data(i) := TYO_CUPNEG( cAntSubProducto,RTRIM(cCupones,',') );
return l_data;
end;
/

select * from
TABLE( cast(
fut_cupneg as
TYP_CUPNEG ) )


Cannot reproduce...

juancarlosreyesp@yahoo.com, August 11, 2003 - 10:06 am UTC

Thanks Tom, running your example I got the same empty results. I suppose it's a bug; we applied the patch that solved the "lock" when compiling functions, maybe that's the reason.

CREATE TYPE TYO_CUPNEG AS OBJECT
( CUNX_SUBPROD INT,
   CUNX_CADENA  INT )
/
CREATE TYPE TYP_CUPNEG AS TABLE OF TYO_CUPNEG
/
CREATE OR REPLACE
FUNCTION fut_cupneg
RETURN TYP_CUPNEG PIPELINED
IS
BEGIN
FOR A IN 1..3 LOOP
PIPE ROW( TYO_CUPNEG( 1,1) );
END LOOP;
RETURN;
END;
/
SQL> SELECT * FROM TABLE(FUT_CUPNEG);

CUNX_SUBPROD CUNX_CADENA
------------ -----------
SQL>
But It's working in the other way, thank you :) 

Definitely a bug

A reader, August 21, 2003 - 12:05 pm UTC

I confirmed it is a bug related to that database; your function was correct.
thanks :)

Why I have to use cast?

juancarlosreyesp@yahoo.com, August 22, 2003 - 9:50 am UTC

Hi Tom, could you please explain to me why I have to use cast in this situation.

CREATE OR REPLACE 
TYPE daz.typ_string_to_table AS TABLE OF VARCHAR2(50);
/

CREATE OR REPLACE 
FUNCTION daz.fut_string_to_table( cString VARCHAR2, cSeparador VARCHAR2 ) 
RETURN TYP_STRING_TO_TABLE 
PIPELINED IS 
cItem VARCHAR2(50);
BEGIN 

FOR nI IN 1..LENGTH(cString) LOOP 
 cItem := cItem || SUBSTR( cString, nI,1 );
 IF NVL(SUBSTR( cString, nI,1 ),'~') = NVL(cSeparador,'~') THEN
  PIPE ROW(SUBSTR(cItem,1,LENGTH(cItem)-1) ); 
  cItem := NULL;
 END IF;
END LOOP; 
 PIPE ROW(cItem); 
RETURN; 
END;
/



SQL> select * from table(daz.fut_string_to_table( 'a,b,c,d',',' ) );
select * from table(daz.fut_string_to_table( 'a,b,c,d',',' ) )
              *
ERROR at line 1:
ORA-22905: cannot access rows from a non-nested table item

SQL> select cast( column_value as varchar2(50)) from table(daz.fut_string_to_table( 'a,b,c,d',',' ) );

CAST(COLUMN_VALUEASVARCHAR2(50
--------------------------------------------------
a
b
c
d

SQL>


 

Tom Kyte
August 22, 2003 - 10:20 am UTC

might be a quirk of the release you are on

ops$tkyte@ORA920> CREATE OR REPLACE
  2  TYPE typ_string_to_table AS TABLE OF VARCHAR2(50);
  3  /

Type created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> CREATE OR REPLACE
  2  FUNCTION fut_string_to_table( cString VARCHAR2, cSeparador VARCHAR2 )
  3  RETURN TYP_STRING_TO_TABLE
  4  PIPELINED IS
  5  cItem VARCHAR2(50);
  6  BEGIN
  7
  8  FOR nI IN 1..LENGTH(cString) LOOP
  9   cItem := cItem || SUBSTR( cString, nI,1 );
 10   IF NVL(SUBSTR( cString, nI,1 ),'~') = NVL(cSeparador,'~') THEN
 11    PIPE ROW(SUBSTR(cItem,1,LENGTH(cItem)-1) );
 12    cItem := NULL;
 13   END IF;
 14  END LOOP;
 15   PIPE ROW(cItem);
 16  RETURN;
 17  END;
 18  /

Function created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select * from table(fut_string_to_table( 'a,b,c,d',',' ) );

COLUMN_VALUE
--------------------------------------------------
a
b
c
d


 

:) Thanks

A reader, August 22, 2003 - 11:01 am UTC


Back to the original question ...

Mike Wilson, August 27, 2003 - 12:47 pm UTC

I believe that Tom states in this article that "if it is your desire to have the types created inside the package -- that never has been and most likely never will be supported."

Possibly I misunderstand the question, but I think this *does* actually work (at least in 9203). I had the same problem. I don't like having anything "dangling" about outside of my packages (that's what they're there for, right?). With the introduction of pipelined functions that don't require a CAST I think you can have the best of both worlds now: package-specific types that aren't exposed to SQL but still work from the CLI.

SQL> CREATE OR REPLACE
  2  PACKAGE pltest
  3    IS
  4    type virtual_table_type is table of number;
  5    function f( p_num_rows in number ) return virtual_table_type pipelined;
  6  END;
  7  /

Package created.

SQL>
SQL> CREATE OR REPLACE
  2  PACKAGE BODY pltest
  3  IS
  4    function f( p_num_rows in number ) return virtual_table_type PIPELINED
  5    is
  6    begin
  7        for i in 1 .. p_num_rows
  8        loop
  9            pipe row( i );
 10        end loop;
 11        return;
 12    end;
 13  END;
 14  /

Package body created.

SQL> select * from table(pltest.f(5));

COLUMN_VALUE
------------
           1
           2
           3
           4
           5

That being said, pipelined functions are cool.  Though I had the same problem with the docs and, in general the examples I found.  Stand alone functions are great to learn the concepts but there is almost always a "gotcha" when you go to use this stuff in packages.  I wish more examples were packaged instead of stand-alone for completeness sake.

Ok, this is the part where Tom rips my answer apart (cringing)... 

Tom Kyte
August 27, 2003 - 6:09 pm UTC

I think it is a bug that it accidentally worked (no ripping here). I'll be filing a bug for sure -- if you rerun your example, I get an internal error the second time.

ops$tkyte@ORA920LAP> CREATE OR REPLACE
  2  PACKAGE pltest
  3  IS
  4  type virtual_table_type is table of number;
  5  function f( p_num_rows in number ) return virtual_table_type pipelined;
  6  END;
  7  /
CREATE OR REPLACE
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [17012], [0x55C7C68C], [], [], [], [], [], []


also, it won't work for anything other than scalar types of course..  not suggested due to the above. 

***To reader who posted following link

Suvarna Patki, August 28, 2003 - 5:41 am UTC

Hi Tom,

Does this link point to some Asktom archives???

http://www.geocities.com/juancarlosreyesp/OracleAskTom.pdf

***Above link is no longer available.

Thanks!!!

Suvarna

Tom Kyte
August 28, 2003 - 7:50 am UTC

it is not mine nor is it related to asktom

Packaged types and pipeline functions

Stefan Grefen, October 08, 2003 - 8:45 am UTC

I ran into the same problem as Tom using types declared in a package and a function using the type in the same package (it works exactly once).
However, declaring the pipelined function outside of the package (still using the type from the package) makes it work reliably. Is this a bug too (should I avoid using this)?

high number of rows

A reader, November 17, 2003 - 5:29 am UTC

Hi

If the PL/SQL code returns a massive amount of rows, would we get ORA-04030 if we want to pipe that result set?

Or does this work like UNIX pipes? A simple data stream? (not stored in memory)

Tom Kyte
November 17, 2003 - 6:48 am UTC

if you do -- it would be a bug.

pipe works by letting the client have the first row

waaayyyy before the client gets the last row.

consider:

ops$tkyte@ORA920LAP> create or replace type myTabletype as table of varchar2(2000)
  2  /

Type created.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> create or replace function f return myTabletype
  2  PIPELINED
  3  as
  4  begin
  5          loop
  6                  pipe row( rpad('*',2000,'*') );
  7          end loop;
  8          return;
  9  end;
 10  /

Function created.

ops$tkyte@ORA920LAP> @connect /
ops$tkyte@ORA920LAP> set termout off
ops$tkyte@ORA920LAP> @&gname
ops$tkyte@ORA920LAP> set termout off
ops$tkyte@ORA920LAP> REM GET afiedt.buf NOLIST
ops$tkyte@ORA920LAP> set termout on
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> column value new_val V
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> select a.name, b.value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic# and a.name = 'session pga memory max';

NAME                                VALUE
------------------------------ ----------
session pga memory max             220508

ops$tkyte@ORA920LAP> select count(*) from TABLE(f) where rownum <= 100;

  COUNT(*)
----------
       100

ops$tkyte@ORA920LAP> select a.name, b.value, b.value - &V diff
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic# and a.name = 'session pga memory max';
old   1: select a.name, b.value, b.value - &V diff
new   1: select a.name, b.value, b.value -     220508 diff

NAME                                VALUE       DIFF
------------------------------ ---------- ----------
session pga memory max             433544     213036

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> select count(*) from TABLE(f) where rownum <= 1000000;

  COUNT(*)
----------
   1000000

ops$tkyte@ORA920LAP> select a.name, b.value, b.value - &V diff
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic# and a.name = 'session pga memory max';
old   1: select a.name, b.value, b.value - &V diff
new   1: select a.name, b.value, b.value -     433544 diff

NAME                                VALUE       DIFF
------------------------------ ---------- ----------
session pga memory max             433544          0

ops$tkyte@ORA920LAP>

<b>no new memory allocated to do 1,000,000 pipes vs 100 </b>
 

Cool, thanks.

Marc, November 17, 2003 - 3:34 pm UTC

I am happy to see you in Copenhagen in January 04.

how about hierarchy sql

Sudhir, November 17, 2003 - 10:25 pm UTC

for the question above from Matt ("Can I process multiple rows and then pipe one out?" -- working out the earliest start and latest end dates for each set of consecutive rows per ID):

Can the "connect by" method be used to find the outcome?

As usual, thanks for great insight and creativity! 

Ok

Kate, February 24, 2004 - 1:02 am UTC

Dear Tom,
How can we use the parallel_enable clause of pipelined functions? Do you have any code for that? Please do reply.
Bye!

Tom Kyte
February 24, 2004 - 6:46 am UTC

it is just part of the "declare", see the syntax diagram in the SQL ref guide.
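
A minimal sketch of where the clause goes (reusing the test_type nested table of numbers from earlier in this thread; the function body and query are illustrative): the clause is declared against a ref cursor parameter, and it only has an effect when the input query itself runs in parallel.

create or replace function pf( p_cur in SYS_REFCURSOR )
return test_type
parallel_enable ( partition p_cur by any )
PIPELINED
as
    l_n number;
begin
    loop
        fetch p_cur into l_n;
        exit when p_cur%notfound;
        pipe row( l_n * 2 );
    end loop;
    close p_cur;
    return;
end;
/

select * from table( pf( cursor( select /*+ parallel(t, 4) */ object_id from all_objects t ) ) );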

A reader, March 03, 2004 - 11:54 am UTC

The reason for this bug
select cast( column_value as varchar2(50)) from
table(daz.fut_string_to_table( 'a,b,c,d',',' ) );

was because some parameter was wrong. I updated the old init.ora file this database had with these parameters and it works perfectly. NO CAST NEEDED

DB_BLOCK_SIZE = 8192
COMPATIBLE = 9.2.0.1.0
DB_NAME = XXX
INSTANCE_NAME = XXX
REMOTE_LOGIN_PASSWORDFILE = SHARED
CURSOR_SPACE_FOR_TIME = FALSE # does not release cursors in the session; do not enable in Developer
#O7_DICTIONARY_ACCESSIBILITY=TRUE
###########################################
# CACHE AND I/O
###########################################
DB_CACHE_SIZE = 60M
DB_KEEP_CACHE_SIZE = 25M
DB_RECYCLE_CACHE_SIZE = 10M
DB_FILE_MULTIBLOCK_READ_COUNT = 16
# DB_CACHE_ADVICE = ON/READY/OFF # ENABLE TO GATHER CACHE USAGE STATISTICS.
###########################################
# POOLS
###########################################
WORKAREA_SIZE_POLICY = AUTO
PGA_AGGREGATE_TARGET = 100M
#JAVA_POOL_SIZE
#JAVA_POOL_SIZE = 1000000 # IF JAVA IS NOT USED
JAVA_POOL_SIZE = 33000000 # IF JAVA IS USED OR IS BEING INSTALLED
LARGE_POOL_SIZE = 8000000
SHARED_POOL_SIZE = 50000000
# SHARED_POOL_RESERVED_SIZE = 10000000
# PRE_PAGE_SGA = TRUE # OPTIMIZES IF THERE IS MEMORY; REACHES THE OPTIMUM POINT FASTER ALTHOUGH IT SLOWS STARTUP
# LOCK_SGA = TRUE # PLACES THE ENTIRE SGA IN MEMORY; does not work on NT
###########################################
# ARCHIVELOGFILES
###########################################
LOG_BUFFER = 1000000 # AMOUNT OF DATA THAT CAN BE GENERATED IN 9 SECONDS WITHOUT A COMMIT OR SAVEPOINT
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST = D:\XXXARCHLOG
LOG_CHECKPOINT_INTERVAL = 0
FAST_START_MTTR_TARGET = 300
###########################################
# CURSORS AND LIBRARY CACHE
###########################################
OPEN_CURSORS=800
SESSION_CACHED_CURSORS = 1000
###########################################
# UNDO MANAGEMENT
###########################################
UNDO_MANAGEMENT = AUTO
UNDO_TABLESPACE = TBL_UNDO
UNDO_RETENTION = 56000
###########################################
# DIAGNOSTICS AND STATISTICS
###########################################
BACKGROUND_DUMP_DEST = E:\ORAXXX\TRACE\BACKGROUND
USER_DUMP_DEST = E:\ORAXXX\TRACE\USER
CORE_DUMP_DEST = E:\ORAXXX\TRACE\CDUMP
###########################################
# FILE CONFIGURATION
###########################################
CONTROL_FILES = ("E:\ORAXXX\CONTROLFILES\CTL_XXX01.CTL", "E:\ORAXXX\CONTROLFILES\CTL_XXX02.CTL",
"D:\ORAXXX\CONTROLFILES\CTL_XXX01.CTL", "D:\ORAXXX\CONTROLFILES\CTL_XXX02.CTL")
READ_ONLY_OPEN_DELAYED = FALSE # TRUE accesses READ ONLY tablespaces only when data there is read
DB_FILES=15
###########################################
# EXTERNAL FILES
###########################################
UTL_FILE_DIR = *
SESSION_MAX_OPEN_FILES = 30
###########################################
# MAXIMUM NUMBER OF ROLES ALLOWED FOR THE DB
###########################################
MAX_ENABLED_ROLES = 70
LICENSE_MAX_USERS = 70
PROCESSES = 50 # 5 + (# CONCURRENT USERS * 2 REPORT/DEVELOPER)
## SESSION = DERIVED (1.1 * PROCESSES + 5)
###########################################
# JOBS
###########################################
JOB_QUEUE_PROCESSES = 1
###########################################
# DISTRIBUTED
###########################################
OPEN_LINKS = 4
DB_DOMAIN = ""
REMOTE_DEPENDENCIES_MODE = SIGNATURE
###########################################
# FOR FUNCTION-BASED INDEXES
###########################################
QUERY_REWRITE_ENABLED = TRUE
QUERY_REWRITE_INTEGRITY = ENFORCED
###########################################
# CBO
###########################################
optimizer_max_permutations=2500
OPTIMIZER_INDEX_COST_ADJ = 10 # Change to 50 for data warehousing
OPTIMIZER_INDEX_CACHING = 90 # Do not change; probability of finding the index in cache
CURSOR_SHARING = EXACT # SIMILAR or FORCE reduce performance; used as a workaround for bad binding
###########################################
# STATISTICS
###########################################
#TIMED_STATISTICS = TRUE
#STATISTICS_LEVEL = ALL
###########################################
# AUDIT
###########################################
AUDIT_SYS_OPERATIONS = TRUE
AUDIT_TRAIL = DB




A reader, March 03, 2004 - 2:17 pm UTC

Anyway, there is a bug apparently fixed in 9.2.0.4:
(Error): ORA-00600: internal error code, arguments:
[kokeeafi1], [2], [2], [], [], [], [], []

which I get when I do a
SELECT * FROM TABLE ( PCK_CCO.FUT_CCO_IMPUTACION( 'CCB',3000, NULL ) )


Solved Function table without pipelined

A reader, March 03, 2004 - 5:08 pm UTC

I solved the problem; I hope not to find another bug.

CREATE OR REPLACE TYPE ctb.tyo_cco_imputacion AS OBJECT(
nTotal NUMBER,
cCCO1 VARCHAR2(9),
cCCO2 VARCHAR2(9),
cCCO3 VARCHAR2(9),
cCCO4 VARCHAR2(9),
cCCO5 VARCHAR2(9),
nMonto NUMBER(16,2))
/

CREATE OR REPLACE TYPE ctb.typ_cco_imputacion AS TABLE OF
CTB.TYO_CCO_IMPUTACION;
/

FUNCTION TEST
return TYP_CCO_IMPUTACION
as
l_data TYP_CCO_IMPUTACION := TYP_CCO_IMPUTACION();
begin
for i in 1..10
loop
l_data.extend;
l_data(l_data.count) :=
TYO_CCO_IMPUTACION( i,i,i,i,i,i,i );
end loop;
return l_data;
end;




select * from the ( select cast( test as TYP_CCO_IMPUTACION ) from dual )


Help

Ram, March 17, 2004 - 10:01 pm UTC

Hi Tom,
In the "create function " statement,I came across the clause "Return
datatype pipelined or Aggregate using "Implementation clause " "etc.
What is the use of "aggregate" and Implementation Clause?Do you have a
simple example which can make it obvious?
Please do write a followup.
Bye!


Tom Kyte
March 18, 2004 - 7:08 am UTC

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2196162600402

it is for making user defined aggregates.

plsql tables and pipelined functions

Vipin, May 18, 2004 - 9:12 am UTC

Hi Tom,

while trying this I am getting the INTERNAL ERROR. Could you please tell me why?

create or replace package test_pipe is
TYPE rf_refcursor IS REF CURSOR;
TYPE rec_row IS RECORD
(
empno emp.empno%TYPE,
ename emp.ename%TYPE
);
TYPE tb_rows IS TABLE OF rec_row index by binary_integer;
FUNCTION fn_ret_rows
(i INTEGER)
RETURN tb_rows PIPELINED;
PROCEDURE test_this
(
out_put OUT rf_refcursor
);

end test_pipe;

create or replace package body test_pipe is
a tb_rows;
b rec_row;
FUNCTION fn_ret_rows
(i INTEGER)
RETURN tb_rows PIPELINED
IS
BEGIN
FOR j IN i..i+100
LOOP
b.empno := j;
b.ename := 'ENAME : '||j;
PIPE ROW(b);
END LOOP;
RETURN;
END;
PROCEDURE test_this
(
out_put OUT rf_refcursor
)
AS
BEGIN
OPEN out_put
FOR
SELECT empno, ename
FROM table(fn_ret_rows(1));
END test_this;
end test_pipe;


Is it that we cannot use PL/SQL Table in a pipelined function?

Thanks

Tom Kyte
May 18, 2004 - 4:30 pm UTC

correct, MUST be a sql collection type -- so SQL can understand the type. SQL cannot "see" plsql types.

Parallel Parameters with Table Functions

Jasbir Kular, May 19, 2004 - 2:29 pm UTC

My Oracle version is 9.0.1. If I "parallel_enable" a table function and partition by any, should I see multiple sessions spawn when I execute this table function? Should the parallel_server parameter be set to "TRUE"? What parameters do table functions use? How can I throttle the number of sessions spawned when I execute the table function?

Thanks for your help!


Tom Kyte
May 19, 2004 - 2:48 pm UTC

a table function can be used in a parallel query -- but it won't really be parallelized itself (it might be executed more than once..)

think about it -- we cannot really "slice it up", we don't have the knowledge to slice it up.

so, it can be used in a parallel query (which you control via the normal parameters for pq -- see the concepts guide, see the admin guide).. but it won't be split up into N different pieces really -- it might be executed N times or its output filtered to N different slaves tho.

What does Parallel_Enable really do?

Jasbir Kular, May 19, 2004 - 3:06 pm UTC

Thanks for the quick response. So what does the "parallel_enable" and "partition by" clause of the table function really do? What is their purpose?

Thanks for your help!

Tom Kyte
May 19, 2004 - 4:02 pm UTC

I'll have to research that -- onsite with a customer this week, might not have time to look into it for a bit. Something to play with and understand better...

A reader, May 20, 2004 - 9:30 am UTC

Tom, are you by any chance in New York ?

Thanks.

Tom Kyte
May 20, 2004 - 11:42 am UTC

today, I'm in Alabama -- but I live in Virginia.

Table Function Experiment

Jasbir Kular, May 20, 2004 - 1:54 pm UTC

I tried to experiment a little with table functions and what I was trying to prove was that a table function should be able to partition the data through the ref cursor into parallel executions.

The table function that I created would take in a simple ref cursor passing in only an ID; this ID will be partitioned. The function will do one fetch to get the first row and then look up a translation value for that ID (each ID would have a different translation value). After this lookup the function will loop through the remaining rows, returning the fetched ID and the translation value from that first initial fetch.

My purpose with this function is to show that if the table function does in fact stream data, then each unique ID from the ref cursor should be processed in its own session and therefore the first fetch to translate the ID should be different for each session.

Here are the table, type, package, and function creation scripts:

I am using Oracle 9.0.1 for this test, and here are the parallel parameter settings:

NAME VALUE

parallel_server FALSE
parallel_server_instances 1
parallel_min_percent 0
parallel_min_servers 0
parallel_max_servers 5
parallel_instance_group
parallel_execution_message_size 2148
parallel_broadcast_enabled FALSE
parallel_adaptive_multi_user FALSE
parallel_threads_per_cpu 2
parallel_automatic_tuning FALSE


Tables:
CREATE TABLE test (id NUMBER(10));
CREATE TABLE partition_test (id NUMBER(10), val VARCHAR2(30));

Type:
CREATE OR REPLACE TYPE test_type AS OBJECT (ID NUMBER(10), VAL VARCHAR2(30));

Collection Type:
CREATE OR REPLACE TYPE test_tab AS TABLE OF TEST_TYPE;

Package:
CREATE OR REPLACE PACKAGE test_pkg
IS
TYPE TEST_CUR IS REF CURSOR RETURN TEST%ROWTYPE;
END;

Table Function:
CREATE OR REPLACE FUNCTION test_tf (P IN TEST_PKG.TEST_CUR)
return TEST_TAB
pipelined
CLUSTER P BY (ID)
parallel_enable ( partition P by HASH(ID))

IS

L_val VARCHAR2(30);
L_REC TEST%rowtype;
L_OUT TEST_TYPE;

BEGIN
FETCH P INTO L_REC;
select val into l_val from PARTITION_TEST where id = l_rec.id;

LOOP
L_OUT := TEST_TYPE (L_REC.ID, L_val);
PIPE ROW (L_OUT);
FETCH P INTO L_REC;
EXIT WHEN P%NOTFOUND;
END LOOP;
RETURN;
END;


Insert into TEST table:

INSERT INTO test
(ID)
VALUES
(1)
/
INSERT INTO test
(ID)
VALUES
(1)
/
INSERT INTO test
(ID)
VALUES
(1)
/
INSERT INTO test
(ID)
VALUES
(2)
/
INSERT INTO test
(ID)
VALUES
(2)
/
INSERT INTO test
(ID)
VALUES
(3)
/


Insert into PARTITION_TEST table:

INSERT INTO partition_test
(ID,VAL)
VALUES
(1,'ABC')
/
INSERT INTO partition_test
(ID,VAL)
VALUES
(2,'DEF')
/
INSERT INTO partition_test
(ID,VAL)
VALUES
(3,'GHI')
/


Executions:

(1) Straight execution of table function
SELECT * FROM TABLE(TEST_TF(CURSOR(SELECT ID FROM TEST)));

Results:

ID VAL

1 ABC
1 ABC
1 ABC
2 ABC
2 ABC
3 ABC

This shows that the table function only ran in one session, since ID#2 and 3 should have translated into DEF and GHI respectively.

(2) Add in PARALLEL clause
SELECT * FROM TABLE(TEST_TF(CURSOR(SELECT /*+ PARALLEL (TEST) */ ID FROM TEST)))

Results:

ID VAL

1 ABC
1 ABC
1 ABC
3 ABC
2 DEF
2 DEF

This shows that the table function did partition the data into separate sessions. ID#1 = ABC, #2 = DEF, but #3 = ABC. The only explanation for why ID#3 seemed to have run in the same session as ID#1 is that I am only getting a parallelism degree of 2. Each table function requires 2 sessions to operate (1 slave to partition the input data and 1 slave to execute the function logic) plus 1 overall session to control the slaves. Since the database parallel_max_servers = 5, this makes sense: there were only 2 streams (each stream required 2 slaves and 1 overall controller).

(3) Take out parallel_enable clause from table function
SELECT * FROM TABLE(TEST_TF(CURSOR(SELECT /*+ PARALLEL (TEST) */ ID FROM TEST)))

Results:

ID VAL

1 ABC
1 ABC
1 ABC
2 ABC
2 ABC
3 ABC

This shows the parallel_enable clause is actually partitioning the input data.

(4) Change the parallel_enable from HASH to RANGE
SELECT * FROM TABLE(TEST_TF(CURSOR(SELECT /*+ PARALLEL (TEST) */ ID FROM TEST)))

Results:

ID VAL

1 ABC
1 ABC
1 ABC
2 ABC
2 ABC
3 GHI

Seems like ID#1 and #2 ran in same session but ID#3 ran in another session.


I ran this test on a larger scale. I created a table function that took in a query and dynamically executed the query to generate a flat file. I passed in 3 queries that would each produce a file with about 40000 rows. I wanted to produce files large enough to watch the parallel sessions. When I executed the table functions, in fact two of the files did generate in parallel. Here are the parallel results:

select sid, qcsid, degree, req_degree from v$px_session order by qcsid;

SID QCSID DEGREE REQ_DEGREE

34 34
53 34 2 8
21 34 2 8
23 34 2 8
25 34 2 8




PLSQL TYPE and PLSQL NESTED TABLE can be used in PIPELINED functions

Nilendu, May 20, 2004 - 5:47 pm UTC

Tom,

As of 9i Release 2, I was able to use a PLSQL TYPE and PLSQL NESTED TABLE to return data from a PLSQL PIPELINED function.

However, SQL's "CREATE TYPE ... AS OBJECT" needs to be replaced with "TYPE ... IS RECORD" in PLSQL.

Declaring the PLSQL TYPEs and PLSQL TABLEs in a package spec and then having the PIPELINED functions included in the package is the only way one could use TABLE_NAME.COLUMN_NAME%TYPE or TABLE_NAME%ROWTYPE in the declaration of the TYPE. Since %TYPE and %ROWTYPE are PLSQL words, they cannot be used in SQL.

Here's a sample that worked for me, just change the table_name to something that exists in your schema --

CREATE OR REPLACE package RICKS_TEST as
type RICKS_TEST_OBJ is record (
a users.user_id%type, -- NUMBER(10)
b users.creation_date%type -- DATE
);


type RICKS_TEST_RT is table of RICKS_TEST_OBJ;

function RICKS_TEST_FUNC (p_owner in bc_entities.entity_id%type -- NUMBER(10)
)
return RICKS_TEST_RT PIPELINED;

end RICKS_TEST;
/


CREATE OR REPLACE package body RICKS_TEST as
function RICKS_TEST_FUNC (p_owner in bc_entities.entity_id%type)
return RICKS_TEST_RT PIPELINED
as

v_result_object RICKS_TEST_RT;

cursor C_ROUTED is
select user_id,
creation_date
from users
where owner = p_owner;

begin

for C_RESULT in C_ROUTED loop
PIPE ROW (C_RESULT);
end loop;

return;
end RICKS_TEST_FUNC;


end RICKS_TEST;
/


select * from table(ricks_test.ricks_test_func(1));

A B
---------- ---------
1 10-JUN-02
2 22-NOV-00
3 25-JUL-00
110 01-AUG-00
111 01-AUG-00
145 07-AUG-00
185 09-AUG-00
203 21-AUG-00
264 22-AUG-00
283 22-AUG-00
303 23-AUG-00

Tom Kyte
May 20, 2004 - 8:42 pm UTC

interesting, looks like a new 9ir2 feature that slipped in. Neat.

Question from the previous Post.

NOTNA, May 21, 2004 - 11:13 am UTC

Hi,

Just want to know what the "v_result_object" would be used for?

Cheers,
NOTNA

Cartesian for each row

Marico, May 21, 2004 - 1:54 pm UTC

This is kind of a silly question -- I'm sure -- but still:
I wasn't able to do this (efficiently) with a single SQL statement.

suppose you have a table with

x times
-- -----
1 3
2 4

times is the number of times you have to repeat a row. So, x=1 has times=3 -- you must repeat the same line 3 times (a sort of cartesian of each row with itself)

about example above you have result set like:

x times rn_times
-- ----- --------
1 3 1
1 3 2
1 3 3
2 4 1
2 4 2
2 4 3
2 4 4

I got with this:
--
set echo on

drop table t1;
drop table pivot;

create table pivot ( rn number primary key )
organization index;

insert into pivot
select rownum from all_objects;

create table t1 ( x int, times int );

insert into t1 values ( 1, 4 );
insert into t1 values ( 2, 3 );

set autotrace on

select x, times, rn rn_times
from (
select x, times
from t1
), pivot
where rn <= times
order by x, rn_times
/

set autotrace off
set echo off
--
Do you see any other, more efficient method?



Tom Kyte
May 21, 2004 - 2:04 pm UTC

short of a pipelined function -- no, not really.

pipelined function

Marcio, May 21, 2004 - 3:24 pm UTC

But how would you do the above example with a pipelined function? I couldn't see it.

Tom Kyte
May 22, 2004 - 11:13 am UTC

see
</code> https://asktom.oracle.com/Misc/oramag/on-uniqueness-space-and-numbers.html <code>

there is an example there. a pipelined function -- you send it a number, it returns "N" rows.

You would send your pipelined function a "cursor", it would fetch a row and see the number "3" and call pipe row 3 times, then it would fetch another row and see "4" and call pipe row 4 times and so on.
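A minimal sketch of that idea against the t1(x, times) table from the post above -- the type and function names here (repeat_row_type, repeat_tab_type, repeat_rows) are made up for illustration, not taken from the article:

create or replace type repeat_row_type as object ( x int, times int, rn_times int )
/
create or replace type repeat_tab_type as table of repeat_row_type
/

create or replace function repeat_rows( p_cur in sys_refcursor )
return repeat_tab_type pipelined
as
    l_x     number;
    l_times number;
begin
    loop
        fetch p_cur into l_x, l_times;
        exit when p_cur%notfound;
        -- pipe the same source row "times" times, numbering each copy
        for i in 1 .. l_times
        loop
            pipe row( repeat_row_type( l_x, l_times, i ) );
        end loop;
    end loop;
    close p_cur;
    return;
end;
/

select * from table( repeat_rows( cursor( select x, times from t1 ) ) )
 order by x, rn_times;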

Parallel_Enable, Partition By, etc.

Jasbir Kular, May 25, 2004 - 3:34 pm UTC

Hi Tom,

Last week I posted a question regarding what the "Parallel_Enable" and "Partition by" clauses of a table function actually do. You were going to do some investigation -- have you had a chance to analyze it? These clauses seem to work only if the table passed in to the ref cursor is defined as parallel, or if you use the parallel hint in the query for the ref cursor (you can see my experiment results posted to this forum on May 20th, 2004).

Thanks for your help.

Tom Kyte
May 28, 2004 - 2:51 pm UTC

after looking into it -- that is correct, it will actually parallelize the ref cursor itself and send the table function the equivalent of "slave sql".  So for example (big_table is 100,000 rows):


ops$tkyte@ORA9IR2> create or replace type tabType as table of number
  2  /
 
Type created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace function f( p_cursor in sys_refcursor ) return tabType
  2  parallel_enable ( partition p_cursor by any ) pipelined
  3  as
  4          l_number number := 0;
  5          l_rec    big_table%rowtype;
  6  begin
  7          loop
  8                  fetch p_cursor into l_rec;
  9                  exit when p_cursor%notfound;
 10                  l_number := l_number+1;
 11          end loop;
 12          close p_cursor;
 13          pipe row (l_number);
 14          return;
 15  end;
 16  /
 
Function created.
 
ops$tkyte@ORA9IR2> show err
No errors.
ops$tkyte@ORA9IR2> create or replace function f2( p_cursor in sys_refcursor ) return tabType
  2  parallel_enable ( partition p_cursor by any ) pipelined
  3  as
  4          l_number number := 0;
  5          l_rec    big_table%rowtype;
  6  begin
  7          loop
  8                  fetch p_cursor into l_rec;
  9                  exit when p_cursor%notfound;
 10                  l_number := l_number+1;
 11              pipe row (l_number);
 12          end loop;
 13          close p_cursor;
 14          return;
 15  end;
 16  /
 
Function created.
 
ops$tkyte@ORA9IR2> show err
No errors.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
  2    from table( f( cursor( select /*+ parallel( bt, 5 ) */ * from big_table bt ) ) )
  3  /
 
COLUMN_VALUE
------------
       40656
       59344
 
<b>shows the function was called twice in two sessions and each got about the same number of rows</b>


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace traceonly
ops$tkyte@ORA9IR2> select *
  2    from table( f2( cursor( select * from big_table bt ) ) )
  3  /
 
100000 rows selected.
 
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=8168 Bytes=106184)
   1    0   VIEW* (Cost=11 Card=8168 Bytes=106184)                                                             :Q37001
   2    1     COLLECTION ITERATOR* (PICKLER FETCH) OF 'F2'                                                     :Q37001
   3    2       TABLE ACCESS* (FULL) OF 'BIG_TABLE' (Cost=70 Card=100000 Bytes=10000000)                       :Q37000
 
 
   1 PARALLEL_TO_SERIAL            SELECT C0 FROM (SELECT /*+ NO_QKN_BUFF */ VA
                                   LUE(A2) C0 FROM TABLE("OPS$TKYTE"."F2"(CURSO
                                   R(SELECT A1.C0 C0,A1.C1 C1,A1.C2 C2,A1.C3 C3
                                   ,A1.C4 C4,A1.C5 C5,A1.C6 C6,A1.C7 C7,A1.C8 C
                                   8,A1.C9 C9,A1.C10 C10,A1.C11 C11,A1.C12 C12,
                                   A1.C13 C13 FROM :Q37000 A1))) A2) A0
 
   2 PARALLEL_COMBINED_WITH_PARENT
   3 PARALLEL_TO_PARALLEL          SELECT /*+ NO_EXPAND ROWID(A1) */ A1."ID" C0
                                   ,A1."OWNER" C1,A1."OBJECT_NAME" C2,A1."SUBOB
                                   JECT_NAME" C3,A1."OBJECT_ID" C4,A1."DATA_OBJ
                                   ECT_ID" C5,A1."OBJECT_TYPE" C6,A1."CREATED"
                                   C7,A1."LAST_DDL_TIME" C8,A1."TIMESTAMP" C9,A
                                   1."STATUS" C10,A1."TEMPORARY" C11,A1."GENERA
                                   TED" C12,A1."SECONDARY" C13 FROM "BIG_TABLE"
                                    PX_GRANULE(0, BLOCK_RANGE, DYNAMIC)  A1
 
 
 
Statistics
----------------------------------------------------------
     100058  recursive calls
          3  db block gets
       1556  consistent gets
       1440  physical reads
        808  redo size
    1405873  bytes sent via SQL*Net to client
      73825  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
         10  sorts (memory)
          0  sorts (disk)
     100000  rows processed
 
ops$tkyte@ORA9IR2> set autotrace off

<b>and that all 100,000 rows were processed</b>


It takes N sessions to run the SQL and N sessions to run the function (the query feeds the session).  In my case -- I used a total of 5 sessions: two to query, two to process and one to coordinate. 

Table Function with Insert

Jasbir Kular, May 31, 2004 - 10:44 am UTC

Hi Tom,

Thanks for your help. So we now know that table functions can process data in parallel. But I have noticed that when you combine a table function with an insert statement, the table function is not processed in parallel.

INSERT INTO BIGTABLE2
select * from table( f( cursor( select /*+ parallel( bt, 5 ) */ * from big_table bt ) ) );

The company I am consulting for is using 9.0.1.

Thanks again for your help.

Tom Kyte
May 31, 2004 - 1:27 pm UTC

are you sure?


ops$tkyte@ORA9IR2> create or replace type tabType as table of number
  2  /
 
Type created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace function f2( p_cursor in sys_refcursor ) return tabType
  2  parallel_enable ( partition p_cursor by any ) pipelined
  3  as
  4          l_number number := 0;
  5          l_rec    big_table%rowtype;
  6  begin
  7          loop
  8              fetch p_cursor into l_rec;
  9              exit when p_cursor%notfound;
 10              l_number := l_number+1;
 11          end loop;
 12          close p_cursor;<b>
 13          select sid into l_number from v$mystat where rownum=1;</b>
 14          pipe row (l_number);
 15          return;
 16  end;
 17  /
 
Function created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
  2    from table( f2( cursor( select * from big_table bt ) ) )
  3  /
 
COLUMN_VALUE
------------
           9
           8
 
<b>that shows 2 invocations of the function under different SIDS (sessionid won't work -- have to use the SID) and</b>

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table t;
 
Table dropped.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t ( x int );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t
  2  select *
  3    from table( f2( cursor( select * from big_table bt ) ) )
  4  /
 
2 rows created.
 
ops$tkyte@ORA9IR2> select * from t;
 
         X
----------
         9
         8
 


<b>or are you trying to do parallel direct path inserts? (which need  enable pdml which will invoke /*+ append */ by default... )</b>
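For reference, a hedged sketch of that parallel direct-path variant, reusing the same t, f2 and big_table from the example above (whether the insert truly runs as parallel DML also depends on session and object settings):

ops$tkyte@ORA9IR2> alter session enable parallel dml;

ops$tkyte@ORA9IR2> insert /*+ append */ into t
  2  select *
  3    from table( f2( cursor( select /*+ parallel( bt, 5 ) */ * from big_table bt ) ) )
  4  /

ops$tkyte@ORA9IR2> commit;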
 

It does work

Jasbir Kular, May 31, 2004 - 2:12 pm UTC

Hi Tom,

You are correct, the insert from select <table function> does work. I tried it again and it worked this time. It must have been that someone was using all the parallel servers when I executed the statement last time. Also, the database is executing the alter session enable parallel dml before this insert, and the parallel execution of the table function still works. This insert statement is actually generated by an Oracle Warehouse Builder mapping, which auto-generates the alter session command in the code.

Thanks again for all your help.

Pipelined table function called from normal function

Badarinath Boyina, June 08, 2004 - 10:59 am UTC

Hi Tom,

I am calling a pipelined table function from a normal function, and what I have observed is that the pipelined function doesn't continue processing after piping a row until the calling function accesses it again.

Is it possible to run both functions concurrently, I mean in parallel?

Thanks in advance.

Tom Kyte
June 08, 2004 - 12:54 pm UTC

pipe row is like a "suspend", it lets the consumer get at the data before the last bit of data is produced.

No, the piper of data will not be executing at the time the pipee is processing.

Pipelined table functions for transformations

Stewart Bryson, July 14, 2004 - 3:19 pm UTC

The PL/SQL documentation talks about stringing numerous pipelined table functions together as a way of doing ETL-type transformations without staging tables. See:

</code> http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/08_subs.htm#19729 <code>

But there were no concrete examples of how to do this. Using the method outlined here, it seems applicable for using the result sets of one or maybe two table functions in one SQL statement. But how would I chain multiple results of table functions together to replace 8 different staging tables without having a select statement with 8 different embedded function calls?

Thanks as always.

Tom Kyte
July 14, 2004 - 10:17 pm UTC

that is exactly how you would chain them all together -- 8 function calls in the TABLE clause -- function into function into function.

Chaining tables

A reader, July 15, 2004 - 7:39 am UTC

How?

select * from table(f1(f2(f3(...

Right?

Tom Kyte
July 15, 2004 - 12:40 pm UTC

it would be more like each of the f's would take a cursor as input


ops$tkyte@ORA9IR2> create type myType as table of number
  2  /
 
Type created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace function f1( C in sys_refcursor ) return myType
  2  pipelined
  3  as
  4      l_n number;
  5  begin
  6      loop
  7          fetch c into l_n;
  8          exit when c%notfound;
  9          pipe row( ln(l_n) );
 10      end loop;
 11      return;
 12  end;
 13  /
 
Function created.
 
ops$tkyte@ORA9IR2> create or replace function f2( C in sys_refcursor ) return myType
  2  pipelined
  3  as
  4      l_n number;
  5  begin
  6      loop
  7          fetch c into l_n;
  8          exit when c%notfound;
  9          pipe row( sqrt(l_n) );
 10      end loop;
 11      return;
 12  end;
 13  /
 
Function created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
  2    from table( f2( cursor( select *
  3                              from table( f1( cursor(select rownum
  4                                                       from all_users
  5     where rownum < 5) ) ) ) )
  6                                                       )
  7  /
 
COLUMN_VALUE
------------
           0
  .832554611
  1.04814707
  1.17741002


then it can really "pipe" the data -- f1 does a bit, f2 does a bit and the client receives a bit -- back to f1, f2, client and so on. 

Pipelined data of type using functions

Dave Fowler, July 15, 2004 - 12:14 pm UTC

Perfect example for showing use of nested data from a type table (collection)





Thanks for the example

Stewart Bryson, July 15, 2004 - 4:39 pm UTC

I guess the answer was right in front of me...

Hierarchical Query in pipelined functions.....

Anil Singh, July 28, 2004 - 11:39 am UTC

Hi Tom,

The CONNECT BY clause does not work when used over a pipelined function. I have the following situation.

create or replace type myScalarType as object
    ( id   int,
      parent_id   int
    )
    
create or replace type myTableType as table of myScalarType

create or replace
        function f return myTableType
           PIPELINED
           is
           begin
            pipe row ( myScalarType( 1, null) );
            pipe row ( myScalarType( 2, 1) );
            pipe row ( myScalarType( 3, 2) );
            pipe row ( myScalarType( 4, 2) );
            pipe row ( myScalarType( 5, 4) );
            pipe row ( myScalarType( 6, 4) );
            return;
           end;


select * from table(f) a
connect by prior id = parent_id
start with parent_id is null

While executing the above query I get an error message

SQL> select * from table(f) a
  2  connect by prior id = parent_id
  3  start with parent_id is null
  4  /
select * from table(f) a
*
ERROR at line 1:
ORA-22950: cannot ORDER objects without MAP or ORDER method

Please suggest a solution for this case..... 

Tom Kyte
July 28, 2004 - 1:22 pm UTC

[tkyte@localhost tkyte]$ oerr ora 22950
22950, 00000, "cannot ORDER objects without MAP or ORDER method"
// *Cause:  an object type must have a MAP or ORDER method defined for
//          all comparisons other than equality and inequality comparisons.
// *Action: Define a MAP or ORDER method for the object type


it is trying to sort -- needs a map or order method for that to define how to sort your scalartype.

so, the answer lies entirely in the message?


or you can "cheat" and use the "rownum" trick:

 
ops$tkyte@ORA9IR2> select *
  2    from ( select id, parent_id from table(f) where rownum >0 )
  3   connect by prior id = parent_id
  4   start with parent_id is null;
 
        ID  PARENT_ID
---------- ----------
         1
         2          1
         3          2
         4          2
         5          4
         6          4
 
6 rows selected.
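The other route (instead of the rownum trick) would be to give the object type a MAP method so the type becomes comparable/sortable -- a minimal sketch only, not tested against the exact case above, and recreating the type requires dropping its dependents first:

create or replace type myScalarType as object
    ( id          int,
      parent_id   int,
      map member function sort_key return number
    )
/

create or replace type body myScalarType
as
    map member function sort_key return number
    is
    begin
        -- compare/sort instances by their id
        return id;
    end;
end;
/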
 

A reader, July 28, 2004 - 4:01 pm UTC

Tom ,
We are comparing the databases for a purchase decision.

Which is the best database among the ones below.

MS SQL Server
Sybase
DB2
Oracle


Tom Kyte
July 28, 2004 - 7:43 pm UTC

Oracle.

you expected something different?



A reader, July 28, 2004 - 5:57 pm UTC

... from a pipelined table function point of view ;o)

Consumer function is not consuming as soon as the pipelined function produces it.....!

Badarinath Boyina, August 06, 2004 - 6:58 am UTC

Hi Tom,

I am using a parallel enabled pipelined table function to do the processing and pipe the rows as soon as they are produced. There is a consumer function which calls this pipelined function and processes further.

What I am expecting from the above is the pipelined function to execute in the specified number of parallel sessions and pipe the rows as soon as they are produced, and then the consumer function to consume each produced row as soon as possible without waiting for any slave process to complete.

But what is happening is:
• The pipelined function is processing in the specified number of parallel slaves, piping the rows as soon as they are produced and continuing to process further, and I am very happy with it.
• But the consumer function is not consuming the piped rows until one of the pipelined function slave processes completes its processing, which I am not happy with. Can you tell me why it is behaving like this? Why is it not consuming the rows as soon as the pipelined function pipes them?

I should mention that I have specified a degree of parallelism of 3, but it is opening 6 slaves. Can you tell me why 6 slaves?

Thank You in advance
Badari

Tom Kyte
August 06, 2004 - 8:32 am UTC

give me a test case (small, concise, yet 100% complete) and all of the tools necessary to observe the same results as you. (that is, save me a bit of time here -- give me the example and the tools to analyze the example to see what you see).

I fail to see how you can observe your first point and second point simultaneously. How can you observe that the piping of rows is happening if the consumption isn't?



Consumer function is not consuming as soon as the pipelined function produces it.....!

Badarinath Boyina, August 06, 2004 - 11:31 am UTC

Hi Tom,

Thanks for the prompt reply.

Please find below the required objects and query to support my previous argument. Tom, I haven't provided the table with hundreds of thousands of records that is needed -- I believe you can manage that. The other thing is that you need to change the following package to use an actual table name instead of the text '<table with 100,000 records>'.

create type pipe_tabtype as table of varchar2(100);

create table pipe_test
(
record_id number,
session_id number,
pipe_time number,
fetch_time number
);

CREATE OR REPLACE package pk_pipe_test as

procedure call_f_pipe;

function f_pipe( p_cursor in sys_refcursor )
return pipe_tabtype
parallel_enable ( partition p_cursor by any ) pipelined;

end pk_pipe_test;
/



CREATE OR REPLACE package body pk_pipe_test as

function f_pipe( p_cursor in sys_refcursor )
return pipe_tabtype
parallel_enable ( partition p_cursor by any ) pipelined
is
l_sid pls_integer;
l_rec <table with 100,000 records>%rowtype;
begin
select sid into l_sid from v$mystat where rownum=1;

loop
fetch p_cursor into l_rec;
exit when p_cursor%notfound;
pipe row (dbms_utility.get_time||' '||l_sid);
end loop;
return;
end;

procedure call_f_pipe
is
cursor c_f_pipe is
select * from table (f_pipe(cursor(
select /*+ full(a) parallel(a,3) */ *
from <table with 100,000records> a)));

i pls_integer := 0;
l_sid pls_integer;
l_text varchar2(4000);
begin
select sid into l_sid from v$mystat where rownum <= 1;

for r_pipe_rec in c_f_pipe
loop
i := i+1;
insert into pipe_test
(record_id,session_id,pipe_time,fetch_time)
values(i
,substr(r_pipe_rec.column_value,
instr(r_pipe_rec.column_value,' ')+1)
,substr(r_pipe_rec.column_value,1,
instr(r_pipe_rec.column_value,' ')-1)
,dbms_utility.get_time);
commit;
end loop;

exception
when others then
dbms_output.put_line('Error in calling function');
dbms_output.put_line('Sql error= ' ||
to_char(sqlcode)||
'Message = '||sqlerrm(sqlcode));
end ;
end pk_pipe_test;
/

Invoke the pipe function as below:

begin
pk_pipe_test.call_f_pipe();
end;


The min and max timings of piping and fetching(consuming) are:

select min(pipe_time) min_pipe
,max(pipe_time) max_pipe
,min(fetch_time) min_fetch
,max(fetch_time) max_fetch
from pipe_test

MIN_PIPE MAX_PIPE MIN_FETCH MAX_FETCH
88156630 88193082 88192495 88310444


Look at the individual parallel slaves:
select session_id
,min(pipe_time) min_pipe
,max(pipe_time) max_pipe
,min(fetch_time) min_fetch
,max(fetch_time) max_fetch
from pipe_test
group by session_id

SESSION_ID MIN_PIPE MAX_PIPE MIN_FETCH MAX_FETCH
13 88156630 88193082 88193123 88310444
27 88156638 88192680 88192780 88310189
37 88156639 88192459 88192495 88310147

If you look at the above query, the first row has been consumed at 88192495, which is after the completion of producer session 37. This shows that the consumer is not consuming as soon as a record is produced.

Please let me know if you need more information.

Regards,
Badari


Tom Kyte
August 06, 2004 - 11:54 am UTC

MIN_PIPE MAX_PIPE MIN_FETCH MAX_FETCH
88156630 88193082 88192495 88310444


88,192,495 is before 88,193,082


which seems to show that the consumer was getting information before the producer completed?????




Consumer is not consuming as soon as producer produces it....!

Badarinath Boyina, August 09, 2004 - 5:14 am UTC

Hi Tom,

Please have a look at the timings of individual sessions. Here it is again....

SESSION_ID MIN_PIPE MAX_PIPE MIN_FETCH MAX_FETCH
13 88156630 88193082 88193123 88310444
27 88156638 88192680 88192780 88310189
37 88156639 88192459 88192495 88310147


The first fetch is at 88,192,495, which is from producer session 37, and it happens only after that session has completed. Please note that the last piped row from this session is at 88,192,459.

This means that the consumer does not start consuming until one of the parallel slaves completes its processing -- that is, it is not consuming as soon as a row has been piped.

So, my question still remains.

Thank You,
Badari

Tom Kyte
August 09, 2004 - 10:05 am UTC

Ok, now I understand the output. I rewrote it a little. Need to research what is going on exactly.

How do you execute a pipelined function that has DML in it?

A reader, August 10, 2004 - 5:02 pm UTC

Can you come up with a very basic example?


Tom Kyte
August 10, 2004 - 7:28 pm UTC

ops$tkyte@ORA9IR2> create or replace type myTable as table of number
  2  /
 
Type created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t ( x int );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace function f return myTable PIPELINED
  2  as<b>
  3          pragma autonomous_transaction;</b>
  4  begin
  5          for i in 1 .. 5
  6          loop
  7                  insert into t values ( I );
  8                  pipe row(i);
  9          end loop;
 10          commit;
 11          return;
 12  end;
 13  /
 
Function created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from t;
 
no rows selected
 
ops$tkyte@ORA9IR2> select * from table(f());
 
COLUMN_VALUE
------------
           1
           2
           3
           4
           5
 
ops$tkyte@ORA9IR2> select * from t;
 
         X
----------
         1
         2
         3
         4
         5
 
ops$tkyte@ORA9IR2>
 

Excellent!

A reader, August 11, 2004 - 8:31 am UTC

Thank you!

Nothing like a little pragma before breakfast.

Consumer is not consuming as soon as producer produces it....!

Badarinath Boyina, August 11, 2004 - 9:51 am UTC

Hi Tom,

Did you get a chance to research the above problem?

Any updates, please?

Thank You,
Badari

Tom Kyte
August 11, 2004 - 11:55 am UTC

Yes I did, it is the way it works (until 10gR2 actually :)

it is a side effect of the (safe) manner in which PQ is operating.  A consumer doesn't start consuming until the producer is finished. 

It is why sometimes for example a CREATE TABLE AS SELECT in parallel can be faster than the equivalent INSERT /*+ APPEND */ in parallel -- with a CTAS the consumer is the producer, in an INSERT /*+ APPEND */ we use a 2 process consumer/producer model.

For example, if you remove your procedural code and populate a table via CTAS:

SQL> create table x2 parallel as select foo.*, dbms_utility.get_time fetch_time from table (pk_pipe_test.f_pipe(cursor( select /*+ full(a) parallel(a,4) */ * from big_table a))) foo;

Table created.


Here the LOAD AS SELECT is done by the slave set itself, so there is no need to stop any rows from being produced. This leads to having pipe_time later than fetch_time:

SQL> select session_id ,min(pipe_time) min_pipe ,max(pipe_time) max_pipe ,min(fetch_time) min_fetch ,'fetched first row ' || decode(sign(max(pipe_time)-min(fetch_time)),1,'BEFORE','AFTER') || ' by ' || (max(pipe_time)-min(fetch_time)) || ' hsecs' txt ,max(fetch_time) max_fetch from x2 group by session_id;

SESSION_ID   MIN_PIPE    MAX_PIPE  MIN_FETCH TXT                       MAX_FETCH
---------- ---------- ---------- ---------- ---------------------------------------- ----------
    75   10801279    10802672   10801279 fetched first row BEFORE by 1393 hsecs     10802672
    71   10801280    10802672   10801281 fetched first row BEFORE by 1391 hsecs     10802672




 

Thanks!

Badarinath Boyina, August 12, 2004 - 5:25 am UTC

Thanks Tom!

At least I now know that it is nothing to do with the code or database setup.

In the meantime I have to find a crude solution so that the consumer can consume something before all producers complete their processing.

Thank You
Badari

parallel_enable

Reader, August 12, 2004 - 8:35 am UTC

Hi Tom,
Can the parallel_enable clause in procedures/functions enable parallel processing of data?
What I mean is, suppose I have a procedure p1 defined as
"create or replace procedure p1
parallel_enable....... ".
Input to this procedure is a collection of, say, 100000 records. This procedure does a lot of processing (PL/SQL, calculations etc.).
So if I now run this procedure, will the collection divide itself and do the procedural logic in parallel?

Regards

Tom Kyte
August 12, 2004 - 9:28 am UTC

</code> http://docs.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_59a.htm#2075083 <code>

examples are above.

It means the inputs can be parallelized if the inputs are "a cursor"

a pipelined conveyer without nested calls

Musashi, August 13, 2004 - 12:38 am UTC

It seems that there is a way to avoid nested calls when building a pipelined conveyor.
Nested calls work fine, but quickly become unmanageable.

-- the test table
create table t(
n number,
s varchar2(32))

create or replace type t_rec_type as object(n number, s varchar2(32))

create or replace type t_tbl_type as table of t_rec_type

insert into t values (1, 's1');

insert into t values (2, 's2');

insert into t values (3, 's3');

commit;

-- test package
create or replace package test_t as

function get_detail12(rec in t_rec_type)
return t_tbl_type pipelined;

function get_detail3(rec in t_rec_type)
return t_tbl_type pipelined;

function test
return t_tbl_type pipelined;
end test_t;
/

create or replace package body test_t as
-- object (record) constructor
function make_t_rec(iRec in t%rowtype)
return t_rec_type
as
begin
return t_rec_type(iRec.n, iRec.s);
end make_t_rec;

-- 3rd level of details
function get_detail3(rec in t_rec_type)
return t_tbl_type pipelined
as
begin
for cv in (
select aa.n, aa.s || 'd3' from t aa
where aa.n = rec.n)
loop
pipe row(make_t_rec(cv));
end loop;
return;
end get_detail3;

-- get 1st and 2nd level of details
function get_detail12(rec in t_rec_type)
return t_tbl_type pipelined
as
begin
-- 1st level of details
for cv in (
select n, s || 'd1' from t
where n = rec.n)
loop
pipe row(make_t_rec(cv));
end loop;

-- 2nd level of details
for cv in (
select aa.n, aa.s || 'd2' from t aa
where aa.n = rec.n)
loop
pipe row(make_t_rec(cv));
end loop;

-- add 3rd level of details from an external pipelined function
for cv in (
select * from table(cast(get_detail3(rec) as t_tbl_type)))
loop
pipe row(make_t_rec(cv));
end loop;
return;
end get_detail12;


-- main routine
function test
return t_tbl_type pipelined
as
rec t_rec_type;
begin
for cv in (
select * from t)
loop
rec := make_t_rec(cv);
pipe row(rec);
-- add explicitly 1st and 2nd level of details and - implicitly - 3rd
for cv1 in (
select * from table(cast(get_detail12(rec) as t_tbl_type)))
loop
pipe row(make_t_rec(cv1));
end loop;
end loop;
return;
end test;
end test_t;
/

-- here is the test
-- observe original data - plus details 1, 2, 3
select * from table(test_t.test)









Lolita, August 18, 2004 - 9:59 am UTC

Hi Tom,
Excellent information. I tried your example using a PL/SQL table instead of a SQL type and it works just the same.

Here's my problem: when I call the function from another database over a dblink it fails for both the SQL type and the PL/SQL table. I know why it fails for the SQL type, because of the limitation "object type cannot be accessed through a remote database ....", but for the PL/SQL table I hit the error "ORA-00600: internal error code, arguments: [kkxpityp1], [], [], [], [], [], [], []". The local and remote databases are on the same Oracle version, which is 9.2.0.1.

My end goal here is to be able to return rows of records through a dblink. A PL/SQL table works, but only for a few rows (more than a hundred and it blows up). Does pipelining work over a dblink? Is there any other way besides the PL/SQL table that can be used through a dblink? Please help, this is a production problem that I need to fix ASAP. Thanks in advance.
CREATE OR REPLACE PACKAGE PLT_DEFINE IS

TYPE REFERENCE_type IS record
(
value VARCHAR2(50)
,description VARCHAR2(4000)
);

TYPE reference_table IS TABLE OF reference_type;

END plt_define;
/

CREATE OR REPLACE PACKAGE MY_PKG
as
function f return PLT_DEFINE.REFERENCE_TABLE PIPELINED;
end;
/
CREATE OR REPLACE PACKAGE BODY MY_PKG
as
function f return PLT_DEFINE.REFERENCE_TABLE
PIPELINED
IS
l_data plt_define.reference_type;
begin
for i in 1 .. 5
LOOP
l_data.VALUE := TO_CHAR(i);
l_data.description := 'hello' || i;
pipe row ( L_DATA );
end loop;
return;
end;
end;

/
--- local use work fine...
select * from table( my_pkg.f() )
Query finished, retrieving results...
VALUE DESCRIPTION
-------------------------------------------------- --------------------------------------------------------------------------------
1 hello1
2 hello2
3 hello3
4 hello4
5 hello5

5 row(s) retrieved

--- from another database
select * from table( my_pkg.f@APID() )
*
ORA-00600: internal error code, arguments: [kkxpityp1], [], [], [], [], [], [], [][1]: (Error): ORA-00600: internal error code, arguments: [kkxpityp1], [], [], [], [], [], [], []


Tom Kyte
August 18, 2004 - 10:15 am UTC

it is the issue of remote objects again.  plsql instantiates a local "sys" object for your packaged object (so you are not really gaining anything by "hiding" the object in the package spec -- an object is created for you)

ops$tkyte@ORA9IR2> @dbls
Wrote file /tmp/xtmpx.sql
 
OBJECT_TYPE  OBJECT_NAME                    S TABLESPACE_NAME
------------ ------------------------------ - ------------------------------
PACKAGE      MY_PKG
             PLT_DEFINE
 
PACKAGE BODY MY_PKG
 
TYPE         SYS_PLSQL_135117_26_1
             SYS_PLSQL_135117_9_1
             SYS_PLSQL_135117_DUMMY_1
 
 
6 rows selected.
 

So, it is the same "remote type" issue (the bug is in the error message, it is fixed in a later release).


You can return a PLSQL table type -- but instead of "select * from function@remote", you'll do something like this:


create synonym remote_pkg for remote_pkg@remote;

declare
   my_data remote_pkg.type;
begin
   remote_pkg.procedure( my_data );
end;
/


 

How to find out?

j., October 19, 2004 - 7:50 am UTC

I've tried Jasbir's sample (above), and it worked -- but after applying it to my real use case (within the same environment, but other data sources) no parallelism takes place.

How can one find out why?


Tom Kyte
October 19, 2004 - 9:28 am UTC

boil it down to a simple concise example and we'll

a) probably find the issue on the way to making the test case
b) have a small test case we can inspect if a) doesn't happen.


a) happens for me about 999 times out of 1000....

from my point of view the view is the cause

j., October 19, 2004 - 2:47 pm UTC

Parallelism doesn't kick in whenever I use a view instead of a table (even if the view just "wraps" the table).

The real-world case would have to make use of views which include joins in addition ...

Tom Kyte
October 19, 2004 - 2:51 pm UTC

you'll have to supply a test case showing what you mean.  views are not evil. views do NOT preclude parallel query.


ops$tkyte@ORA9IR2> create table t (i int,j int,k int) ;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace view v as select * from t;
 
View created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace traceonly explain
ops$tkyte@ORA9IR2> select /*+ parallel(v 8) */ * from v;
 
Execution Plan
----------------------------------------------------------
   0
SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=82 Bytes=3198)
 
 
   1    0
  TABLE ACCESS* (FULL) OF 'T' (Cost=1 Card=82 Bytes=3198)
:Q14000
 
 
 
   1 PARALLEL_TO_SERIAL            SELECT /*+ NO_EXPAND ROWID(A1) */ A1."I",A1.
                                   "J",A1."K" FROM "T" PX_GRANULE(0, BLOCK_RANG
                                   E, DYNAMIC)  A1
 
 
ops$tkyte@ORA9IR2> set autotrace off
 

jasbir's sample slightly modified

j., October 20, 2004 - 2:29 am UTC

I only switched from using SQL types to PL/SQL types and added the view:

drop table Test ;
drop table PartTest ;

create table Test( ID number( 10 ) ) ;
create table PartTest( ID number( 10 ), Val varchar2( 30 ) ) ;
create or replace view MyView as select ID from Test ;

create or replace package MyPck
is
type t_curTest is ref cursor return Test%rowtype ; -- strong typed

type t_recMyRow is record( ID Test.ID%type, Val PartTest.Val%type ) ;
type t_tabMySet is table of t_recMyRow ;

function GetMySet( p_curTest t_curTest )
return t_tabMySet pipelined
cluster p_curTest by( ID )
parallel_enable( partition p_curTest by hash( ID ) ) ;
end MyPck ;
/

create or replace package body MyPck
is
function GetMySet( p_curTest t_curTest )
return t_tabMySet pipelined
cluster p_curTest by ( ID )
parallel_enable( partition p_curTest by hash( ID ) )
is
v_recTest Test%rowtype ;
v_recMyRow t_recMyRow ;
begin
fetch p_curTest into v_recTest ;
select Val into v_recMyRow.Val from PartTest where ID = v_recTest.ID ;

loop
v_recMyRow.ID := v_recTest.ID ;
pipe row( v_recMyRow ) ;
fetch p_curTest into v_recTest ;
exit when p_curTest%NOTFOUND ;
DBMS_Lock.Sleep( 2 ) ;
end loop ;
return ;
end GetMySet ;
end MyPck ;
/

insert into Test( ID ) values( 1 ) ;
insert into Test( ID ) values( 1 ) ;
insert into Test( ID ) values( 1 ) ;
insert into Test( ID ) values( 2 ) ;
insert into Test( ID ) values( 2 ) ;
insert into Test( ID ) values( 3 ) ;

insert into PartTest( ID, Val ) values( 1, 'ABC' ) ;
insert into PartTest( ID, Val ) values( 2, 'DEF' ) ;
insert into PartTest( ID, Val ) values( 3, 'GHI' ) ;

commit ;

-- parallel execution takes place
select * from table( MyPck.GetMySet( cursor( select /*+ PARALLEL (TEST) */ ID from Test ) ) ) ;

-- no parallel execution happens
select * from table( MyPck.GetMySet( cursor( select /*+ PARALLEL (MYVIEW) */ ID from MyView ) ) ) ;

By inspecting v$session and the generated results, one can see that no parallel execution takes place when the provided cursor makes use of the view instead of the table.

This reproduces with 9.2.0.4.0 on AIX 64-bit as well as with 9.2.0.5.0 on WinXP.

Tom Kyte
October 20, 2004 - 7:13 am UTC

i'll check around and see if this is expected

I've now created a TAR for that

j., October 23, 2004 - 5:53 pm UTC

Pushing the parallel hint into the view's body solves the issue, but unfortunately I can't get the working sample case applied to the real-life use case ...

That's why I'm looking for assistance to find out why.
I'll keep you informed. Any help is appreciated.

is there any way to FORCE parallel execution?

A reader, November 30, 2004 - 3:33 am UTC

We've implemented a long-running, complex data export via a pipelined table function containing several nested cursors. The outermost of those cursors is very simple and determines the scope for data retrieval by providing parameter values for its much more complex nested cursors.

In order to decrease processing time further, we'd like the pipelined table function to be executed in parallel. The problem is: the conditions for parallel execution to kick in are not met if all parameters of the outermost cursor are fed values. We can achieve parallel execution only if we omit 2 of those 4 parameters (then the optimizer decides to do one FTS).

Is there any other way to *force* parallel execution (to find out whether runtime decreases or not)?

Tom Kyte
November 30, 2004 - 7:43 am UTC

"conditions for parallel
execution to kick in are not met if all parameters of the outermost cursor are
feed by values."

do you mean that the query plan is such that PQ isn't chosen?

if so, you can hint it to see if a parallel plan would be beneficial or not.

A reader, November 30, 2004 - 2:15 pm UTC

PQ is chosen only if:

1: we include a PARALLEL hint, and
2: the query has two parameters only (instead of the four we'd like to use).

In that case the execution plan reports one FTS -- which is documented as one of the conditions for PQ.

On the other hand, if the query takes the four parameters as needed (instead of the abovementioned two), we can't get it to be executed in parallel with that hint anymore -- even if we force an FTS with a FULL hint. The FULL hint isn't ignored, but the PARALLEL hint is ...



Tom Kyte
November 30, 2004 - 8:11 pm UTC

*one of the conditions that might invoke PQ*. not a pre-requisite.


let's see the query you are trying to use with the FULL and PARALLEL hints, where you say PARALLEL is ignored

Any restriction to the number of rows retrieved

Jeyanthi Meenakshi, December 13, 2004 - 9:25 am UTC

Is there any restriction on the number of rows that can be retrieved through pipelined functions?

Tom Kyte
December 13, 2004 - 10:21 am UTC

no

dynamic cursor for pipelined functions

Nilanjan Ray, December 20, 2004 - 6:36 am UTC

Hi Tom,

Using a ref cursor for pipelined table function gives the following error.

create or replace package My_Types is
type lookups_tab is table of tx_vw%rowtype;
end My_Types;
/

create or replace function Lookups_Fn(l_v_str varchar2)
return My_Types.lookups_tab
pipelined
is
--v_row My_Types.lookup_row;
type cur_typ is ref cursor;
v_row tx_vw%rowtype;
cur cur_typ;

begin

open cur for l_v_str;
loop

fetch cur into v_row;
exit when cur%notfound;

pipe row (v_row);

end loop;

close cur;

return;

end;
/

> select * from table(Lookups_Fn ('Select srl_no rn,aud_short_nm onm from tab1'));
select * from table(Lookups_Fn ('Select srl_no rn,aud_short_nm onm from tab1'))
*
ERROR at line 1:
ORA-22905: cannot access rows from a non-nested table item

Elapsed: 00:00:00.00
How can we work around the problem.

Regards

Tom Kyte
December 20, 2004 - 8:32 am UTC

worked for me, after I created other objects to support the example (incomplete test case here)

1) If you have an older release, you need to use a CREATE TYPE command, not a plsql record.  The plsql record trick works in 9ir2 by creating the types automagically for us:

ops$tkyte@ORA9IR2> create table tx_vw ( x int );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package My_Types
  2  is
  3    type lookups_tab is table of tx_vw%rowtype;
  4  end My_Types;
  5  /
 
Package created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace function Lookups_Fn(l_v_str varchar2)
  2  return My_Types.lookups_tab
  3  pipelined
  4  is
  5    v_row tx_vw%rowtype;
  6    cur sys_refcursor;
  7  begin
  8    open cur for l_v_str;
  9    loop
 10      fetch cur into v_row;
 11      exit when cur%notfound;
 12      pipe row (v_row);
 13    end loop;
 14   close cur;
 15   return;
 16  end;
 17  /
 
Function created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from table(Lookups_Fn ('Select user_id from all_users where rownum < 5'))
  2  /
 
         X
----------
         0
         5
        11
        19
 

....
 
OBJECT_TYPE  OBJECT_NAME                    S TABLESPACE_NAME
------------ ------------------------------ - ------------------------------
FUNCTION     LOOKUPS_FN
 
PACKAGE      MY_TYPES
 
TABLE        TX_VW                            USERS
 
TYPE         SYS_PLSQL_33873_13_1
             SYS_PLSQL_33873_DUMMY_1
             SYS_PLSQL_33874_9_1
             SYS_PLSQL_33874_DUMMY_1
             SYS_PLSQL_33880_13_1
             SYS_PLSQL_33880_DUMMY_1
 
 
9 rows selected.


See the SYS_PLSQL.... stuff?  that is the automagic part.


2) you have NO version information, so I cannot say much more given that.  The example worked for me in 9ir1 and 9ir2 

Still receive the same error

Nilanjan Ray, December 22, 2004 - 1:57 am UTC

Sorry Tom for not providing the exact details in my previous post.

SQL>select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
PL/SQL Release 9.2.0.1.0 - Production
CORE    9.2.0.1.0       Production
TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production

SQL>create table tx ( idx number, text varchar2(20) );

Table created.

Elapsed: 00:00:00.00
stowner@STELLAR4
SQL>create table tx1 (idx number,text1 varchar2(20));

Table created.

Elapsed: 00:00:00.00
SQL>
SQL>create or replace view tx_vw as
  2  select a.idx,b.text1 text
  3  from tx a,tx1 b
  4  where a.idx = b.idx;

View created.

Elapsed: 00:00:00.00
SQL>
SQL>create or replace package My_Types is
  2    type lookups_tab is table of tx_vw%rowtype;
  3  end My_Types;
  4  /

Package created.

Elapsed: 00:00:00.01
SQL>
SQL>create or replace function Lookups_Fn(l_v_str varchar2)
  2  return My_Types.lookups_tab
  3    pipelined
  4  is
  5    --v_row My_Types.lookup_row;
  6    type cur_typ is ref cursor;
  7    v_row tx_vw%rowtype;
  8    cur cur_typ;
  9
 10  begin
 11
 12    open cur for l_v_str;
 13    loop
 14
 15    fetch cur into v_row;
 16    exit when cur%notfound;
 17
 18      pipe row (v_row);
 19
 20    end loop;
 21
 22   close cur;
 23
 24  return;
 25
 26
 27  end;
 28  /

Function created.

Elapsed: 00:00:00.01
SQL>
SQL>select * from table(Lookups_Fn ('Select * from tx_vw'));
select * from table(Lookups_Fn ('Select * from tx_vw'))
              *
ERROR at line 1:
ORA-22905: cannot access rows from a non-nested table item


Elapsed: 00:00:00.00

Please clarify " See the SYS_PLSQL.... stuff?  that is the automagic part."

Regards 

Tom Kyte
December 22, 2004 - 9:25 am UTC

i cannot reproduce (don't have 9201)

suggest you either do it with the explicit type outside of plsql (so you can use CAST() if you needed to)

or -- best yet, patch up to 9205/9206 -- it is what is currently supported.

Just found a possible reason

Nilanjan Ray, January 18, 2005 - 4:01 am UTC

It seems that using a parameterized pipelined function causes this error

</code> http://coding.derkeiler.com/Archive/Perl/perl.dbi.users/2004-02/0019.html <code>

Regards


CAST requirement

David J, March 08, 2005 - 12:09 am UTC

Try the following to remove the need for CAST:

ALTER SESSION SET cursor_sharing=exact;

Works for me! (9.2.0.6)

Much slower than dynamic sql

Branka, March 10, 2005 - 5:43 pm UTC

Much slower than dynamic sql query

Tom Kyte
March 10, 2005 - 7:40 pm UTC

much faster than molasses?

DML in Table functions

Ramasamy, May 12, 2005 - 3:00 pm UTC

Tom ,

I have tried the same code where you have explained about DML in Table functions in this thread.

SQL> select * from v$version
  2  ;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - 64bi
PL/SQL Release 10.1.0.3.0 - Production
CORE    10.1.0.3.0      Production
TNS for HPUX: Version 10.1.0.3.0 - Production
NLSRTL Version 10.1.0.3.0 - Production


Code:

create or replace type myTable as table of number;
/

create table t ( x int );
/

create or replace function f return myTable PIPELINED
    as
            pragma autonomous_transaction;
    begin
            for i in 1 .. 5
            loop
                    insert into t values ( I );
                    pipe row(i);
            end loop;
           commit;
           return;
   end;
/

select * from t;

select * from table(f());

select * from t;


I got the following error 

SQL> select * from table(f());
select * from table(f())
                    *
ERROR at line 1:
ORA-06519: active autonomous transaction detected and rolled back
ORA-06512: at "DWMERCHDEV.F", line 8



Is there any other database parameter setting which I am missing?

Thanks a lot.

 

Tom Kyte
May 12, 2005 - 3:47 pm UTC

pipe row is like "return" in many respects -- like a "return, but please come back" -- and it looks like they updated the code to respect that fact.


You would need to commit before piping -- moving the commit to before the pipe row corrects the error and changes the behaviour. But actually -- there is nothing saying the original commit would ever get hit (the user executes the query, pulls 4 of 5 rows and then cancels -- the autonomous transaction is left in limbo), so it looks like the original behaviour was a bug.
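A sketch of what that change looks like against the earlier example in this thread (same myTable type and table t; the only difference is where the commit sits -- this is an illustration of the fix, not a tested 10g transcript):

create or replace function f return myTable PIPELINED
as
        pragma autonomous_transaction;
begin
        for i in 1 .. 5
        loop
                insert into t values ( i );
                commit;        -- end the autonomous transaction BEFORE piping,
                               -- so no active transaction remains when control
                               -- leaves the function at "pipe row"
                pipe row(i);
        end loop;
        return;
end;
/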

Cursor to fetch records from pipeline table

Randy, August 22, 2005 - 3:07 pm UTC

Hi Tom,
I am really in need of help. I tried to find something related, but everywhere in the examples you were either using a ref cursor or the query itself to pass the parameters, and my situation is a bit different. Can you please help me with this? It's about pipelined table functions -- I want to use one in a cursor, and below is what I am trying to do.

The problem is that I want to supply the input parameters from a cursor, and I don't want to have the cursor or query itself in place of the parameters to the pipelined table function. What I have to do is like the example below; though it does not make much sense by itself, I am using a similar scenario in my real procedure.

Any help or suggestion would be great. Thank you, Tom.

CREATE OR REPLACE PROCEDURE archive_emp
IS

CURSOR c_emp IS
 SELECT
 * FROM
 emp;
CURSOR c_emp_sal
(p_deptno NUMBER,
p_sal NUMBER)    /* this is a cursor I am writing on a pipelined table function */
IS
SELECT
 empno, sal FROM TABLE(emp_sal(p_deptno, p_sal) );
 
BEGIN

FOR c_emp_rec IN c_emp LOOP

         FOR c_emp_sal_rec IN c_emp_sal(c_emp_rec.deptno, c_emp_rec.sal) LOOP
            
                          INSERT INTO emp_archive
 .......
                               /* here I am trying to insert a combination of outer and inner cursor values for each row returned by the inner cursor */
                END LOOP;

/* here I archive the values from the outer cursor and some variables which have been calculated using the inner cursor results; this calculation
   is done inside the inner cursor loop */
 END LOOP;

END;
/

/* the below is the pipelined table function; this is working fine, I tested it by giving literal values */

CREATE OR REPLACE FUNCTION emp_sal
   (p_deptno number,  p_sal number  )
   RETURN emp_table PIPELINED
   IS
    TYPE rc IS REF CURSOR;
     l_cursor rc;
     l_query  VARCHAR2(512) DEFAULT ' SELECT  empno, sal  FROM  emp  WHERE  1 =1 ';
 cursor l_template IS SELECT empno,sal FROM emp;
    l_rec  l_template%rowtype;
   BEGIN
        IF  ( p_deptno IS NOT NULL ) THEN
             l_query := l_query ||
             ' AND deptno = :deptno
               AND sal > :2 ' ;
OPEN l_cursor FOR  l_query USING p_deptno, p_sal;
      ELSE
             l_query := l_query ||
             ' AND sal > :sal' ;
OPEN l_cursor FOR  l_query USING p_sal;
      END IF;
          LOOP
             FETCH l_cursor INTO l_rec;
             EXIT WHEN l_cursor%notfound;
        PIPE ROW(emp_rec(l_rec.empno, l_rec.sal));
          END LOOP;
         CLOSE l_cursor;
     RETURN;
     END;
/
SQL> select * from table(emp_sal(30,1000));
     EMPNO        SAL
---------- ----------
      7499       1600
      7521       1250
      7654       1250
      7698       2850
      7844       1500

 

Tom Kyte
August 24, 2005 - 3:24 am UTC

I don't understand the question.

Smells like ...

Bob B, August 24, 2005 - 10:59 am UTC

This smells like an attempt to create a generic cursor join...

Table functions that act like parameterized views. "Give me the emps in department 20 who make more than $1000" becomes "emp_pkg.get_emps( 20, 1000 )" instead of "SELECT x, y, z FROM EMP WHERE ...".

After that is set up, the next thing to do is get that same result set in the same way, but join it to something else. So now we have "FOR outer_rec in emp_pkg.get_emps( 20, 1000 ) LOOP FOR inner_rec in job_pkg.get_jobs( rec.EMP_ID, SYSDATE ) LOOP ..." instead of "SELECT x, y, z FROM EMP, JOBS WHERE ... ".

I may be wrong as I'm assuming a lot; however, there are (at least) 2 things that can accomplish these tasks without taking anything away from the database's ability to join (which is its purpose in life). One is FGAC, the other is context variables.

FGAC is useful for adding predicates to queries based on whose session it is. If an employee looks at emps and can only ever see his own record, while a manager looks at emps and sees his record and his reports' records, that is a canonical situation for FGAC.

Context variables are useful for making parameterized views (among other uses). So instead of table/cursor functions one would have a context-setting function and one or more views against that context. The "cursor join" can then be written something like "set_emp_ctx( 20, 1000 ); SELECT x, y, z FROM EMP_VIEW, JOBS WHERE ...". Set_emp_ctx would ensure that EMP_VIEW returns the same result set that would be returned by "emp_pkg.get_emps".

The ability to use either (or both) of these methods depends on the database version. Check the documentation for availability.
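A rough sketch of the context-variable approach described above (all names here -- emp_ctx, emp_ctx_pkg, emp_view -- are made up for illustration):

create or replace context emp_ctx using emp_ctx_pkg;

create or replace package emp_ctx_pkg
as
    procedure set_emp_ctx( p_deptno in number, p_min_sal in number );
end;
/
create or replace package body emp_ctx_pkg
as
    procedure set_emp_ctx( p_deptno in number, p_min_sal in number )
    is
    begin
        -- only this package is allowed to set attributes in the emp_ctx namespace
        dbms_session.set_context( 'emp_ctx', 'deptno',  p_deptno );
        dbms_session.set_context( 'emp_ctx', 'min_sal', p_min_sal );
    end;
end;
/
create or replace view emp_view
as
select empno, ename, deptno, sal
  from emp
 where deptno = sys_context( 'emp_ctx', 'deptno' )
   and sal    > sys_context( 'emp_ctx', 'min_sal' );

After exec emp_ctx_pkg.set_emp_ctx( 20, 1000 ), a plain "select ... from emp_view, jobs where ..." behaves like the parameterized get_emps(20,1000) call, but the optimizer can still join emp_view normally.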

PLS-00378

A reader, January 26, 2006 - 4:50 pm UTC

Hi

I am learning table functions using the 9iR2 documentation; sadly, the first thing I tried failed:

CREATE OR REPLACE FUNCTION GetBooks(a CLOB)
RETURN BookSet_t;
/

Warning: Function created with compilation errors.

SQL> show err
Errors for FUNCTION GETBOOKS:

LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0      PLS-00378: invalid compilation unit for this release of PL/SQL

This is the example from the PL/SQL User's Guide, page 8-32. Do you know if this is wrong? 

Tom Kyte
January 27, 2006 - 8:24 am UTC

that example should read:

create function getBooks(a clob) return bookset_t ...


(ellipsis, not semicolon). The implementation of the function is missing. It is not a full, compilable example.

violating autonomous transaction?

Bryce Harrison, January 31, 2006 - 1:31 am UTC

Tom,

Question: Is the pipe row command itself a DML statement that is violating my autonomous transaction directive?

The following package, when its pipelined function is called as follows:

select mytable.filename
from table(ene_file_pkg.f_dir_list('/tmp')) mytable

errors with:

ORA-06519: active autonomous transaction detected and rolled back
ORA-06512: at "APPS.ENE_FILE_PKG", line 19

The package is as follows:

CREATE OR REPLACE package ene_file_pkg
as
--
function f_dir_list (p_directory IN VARCHAR2) return EneFileTableType PIPELINED;
--
end;
/
CREATE OR REPLACE package body ene_file_pkg
as
--
procedure get_directory_list(p_directory IN VARCHAR2)
as language java
name 'EneDirList.getList( java.lang.String )';
--
function f_dir_list (p_directory IN VARCHAR2) return EneFileTableType
PIPELINED
is
pragma autonomous_transaction;
begin
--
dbms_java.grant_permission( 'APPS', 'SYS:java.io.FilePermission', p_directory, 'read' );
get_directory_list(p_directory);
--
for i in (select filename from ene_dir_list)
loop
pipe row (EneFileScalarType (i.filename));
end loop;
--
commit;
--
return;
--
end;
--
end;
/

The java class is taken from your example:

create or replace and compile java source named "EneDirList" as
--
import java.io.*;
import java.sql.*;

public class EneDirList
{
public static void getList(String directory)
throws SQLException
{
File path = new File( directory );
String[] list = path.list();
String element;

for(int i = 0; i < list.length; i++)
{
element = list[i];
#sql { INSERT INTO ENE.ENE_DIR_LIST (FILENAME)
VALUES (:element) };
}
}

}
/

The table creation script for ENE_DIR_LIST is:
create global temporary table ENE.ENE_DIR_LIST
( filename varchar2(255) )
on commit delete rows
/
create synonym ene_dir_list for ene.ene_dir_list
/

The ScalarType creation script is:
create or replace type EneFileScalarType as object
( filename varchar2(100))
/
create or replace type EneFileTableType
as table of EneFileScalarType
/

Regards,
Bryce


Tom Kyte
January 31, 2006 - 2:23 am UTC

umm, why would you grant yourself this permission in a function like that in the first place?

pipe row is a lot like "return, but come back later", you returned with an open transaction in effect.

autonomous transaction cont'd

Bryce, January 31, 2006 - 6:20 pm UTC

I'm limited in the way I can use it -- I can't issue the grant permission and then execute the SQL, as the application only allows a single SQL statement to be entered (it's a valueset in Oracle Applications).
What is your suggestion? Is this possible with just a single pipelined SQL statement?

Tom Kyte
February 01, 2006 - 2:42 am UTC

grants should be done ONCE, by the dba - you don't do them every time like that. That bit of code doesn't belong in your code.


anyway, you would need to commit your atrans before piping.

atrans

Bryce, February 02, 2006 - 5:24 pm UTC

It was the global temp table with the "on commit" directive that was causing the headache. I created a normal table, performed the delete and commit around the procedure, and it now works. I've also added a session id column to avoid contention.
Thanks for your help Tom. Your plain-English, straightforward responses make your site the greatest resource for Oracle techies and the like.

Pipelined Table Functions

Ik, February 22, 2006 - 5:50 am UTC

Tom,

From Oracle documentation :
"</code> http://sqltech.cl/doc/oracle9i/appdev.901/a88896/dci12tbl.htm#1007111"

"Pipelined table functions can be implemented in two ways: 

Native PL/SQL approach: ....

Interface approach: The consumer and producers run on the same execution thread. Producer explicitly returns the control back to the consumer after producing a set of results. In addition, the producer caches the current state so that it can resume where it left off when the consumer invokes it again. "

An example (using a ref cursor) is given at
</code> http://sqltech.cl/doc/oracle9i/appdev.901/a88896/dciappa1.htm <code>

My question is :

Is the interface method limited to only ref cursor input types?

Thanks,


Tom Kyte
February 22, 2006 - 8:44 am UTC

the inputs can be whatever, the outputs are what are described there.

ODCITableFetch

Ik, February 23, 2006 - 6:17 am UTC

Tom,

Apologies in advance if I am making my question obscure.

My question is on the example given in the link above.
</code> http://sqltech.cl/doc/oracle9i/appdev.901/a88896/dciappa1.htm <code>

My intention is to open a file (in the filesystem), read the entire contents, and pass the data back (PIPELINED) to the calling function.

I fail to understand how, in the given example, Oracle is able to determine exactly where the cursor left off.

In my case, I am reading a flat file. In order to store the position where it left off, I have added a variable to the object type; I keep writing my current position to that variable and read it back each time I enter ODCITableFetch.

Fetching all records from the flat file works. But if I put a WHERE clause on my select statement, it displays some records and then fails.

What I really want is to understand how Oracle is able to pick up where it left off, and also how the nrows value (the argument to ODCITableFetch) is being set.

Thanks,




Tom Kyte
February 23, 2006 - 8:21 am UTC

Why wouldn't Oracle, a program running a program, be able to remember what it was doing???


Oracle calls your code, you return stuff via an internal API, Oracle calls your code again, you pick up where you left off.

It was designed to be rather "transparent": as you pipe rows, you leave your scope, someone else's scope becomes active -- they now have your data, it gets processed by whatever, and then they return to your scope and you pick up precisely where you left off.

ASYNC EXECUTION STORED PROCEDURE

Jimmy, March 14, 2006 - 12:15 pm UTC

Would this serve to execute a package's stored procedures asynchronously, directly from PL/SQL, or is there another way to do it?

Tom Kyte
March 15, 2006 - 9:04 am UTC

if you want to "background" a plsql procedure - have it run and not wait for it - that would be dbms_job or the new scheduler package in 10g (dbms_job works just fine)

Pipelined functions

Chris, April 13, 2006 - 5:31 pm UTC

Tom,
I'm not sure if I should use pipelined functions in my scenario or not, but I'm sure you'll have some guidance. I have a query that is made up of many different sub-queries. These subqueries appear many times in the main query, everywhere from union's to sub-queries, etc. It may look something like:

select ....
from (query1),
(query2),
table1

where query1 may consist of many sub-queries that are repeated many times, including query2. Basically, it's a query that might be only 300 lines of distinct code, but because of all the repetition it ends up at over 4000 lines long -- pretty unmanageable. I was looking for a way to physically implement the logical breakdown of the query. The reuse of the same queries over and over is nothing more than a macro substitution, but I feel sorry for the poor fool who has to maintain this in the future (as I am inheriting it from someone now).

Is there an easy way to implement this using functions (knowing I lose indexing on the result sets if that is necessary), are temp tables a better way to go, or do you have any other ideas?

I liked the thought of the functions as there are about 8 different queries that are used more than once in this query, and just replacing them with (select * from table(get_dataset_1)), (select * from table(get_dataset_2)), etc. was appealing. It would let me represent the main query in a very logical manner without code repetition.

Tom Kyte
April 14, 2006 - 12:10 pm UTC

with query1 as ( select ... ),
query2 as ( select ... ),
query3 as ( select ... from query1, query2 ....),
....
select ...
from query1, query2, query3, .....


with subquery factoring...
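For instance, a hypothetical sketch of that shape (emp-style names, not the poster's actual queries):

with dept_sal as
( select deptno, sum(sal) total_sal
    from emp
   group by deptno ),
     big_earners as
( select e.empno, e.ename, e.deptno, e.sal
    from emp e, dept_sal d
   where e.deptno = d.deptno
     and e.sal > d.total_sal / 2 )
select d.deptno, d.total_sal, b.empno, b.sal
  from dept_sal d, big_earners b
 where d.deptno = b.deptno;

Each factored subquery is written once and reused by name -- the same effect the poster wanted from table functions, but the optimizer can still merge or materialize them as it sees fit.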

Unfortunately this functionality is not all there.

Michael Friedman, April 16, 2006 - 4:40 am UTC

On Oracle XE:

CREATE OR REPLACE PACKAGE pltest IS
  type r is record(
    num number,
    v   varchar2(30));

  type virtual_table_type is table of r;

  type cu_r is ref cursor return r;

  function f(p_num_rows in number) return virtual_table_type
    pipelined;

  function get_cur(p_num_rows in number) return cu_r;

  function test(p_num_rows in number) return number;

END;
/

CREATE OR REPLACE PACKAGE BODY pltest IS
  function f(p_num_rows in number) return virtual_table_type
    PIPELINED is
    v_row r;
  begin
    for i in 1 .. p_num_rows
    loop
      v_row.num := i;
      v_row.v   := i;
      pipe row(v_row);
    end loop;
    return;
  end;

  function get_cur(p_num_rows in number) return cu_r is
    v_cur cu_r;
  begin
    open v_cur for
      select *
        from table(f(p_num_rows));
    return v_cur;
  end;

  function test(p_num_rows in number) return number is
    v_cur cu_r;
    v_row r;
    v_res number := 0;
  begin
    v_cur := get_cur(p_num_rows);
    loop
      fetch v_cur
        into v_row;
      exit when v_cur%notfound;
      v_res := v_res + v_row.num;
    end loop;
    return v_res;
  end test;

END;
/

----------------------------------------------
SQL> select * from table(pltest.f(5));

       NUM V
---------- ------------------------------
         1 1
         2 2
         3 3
         4 4
         5 5

SQL> select pltest.test(5) from dual
  2  /

PLTEST.TEST(5)
--------------
            15

This worked like a charm.

Now try on 10gR2 SE.  The first query works perfectly.

But...

SQL> select pltest.test(5) from dual;

select pltest.test(5) from dual

ORA-22905: cannot access rows from a non-nested table item
ORA-06512: at "NOVA2.PLTEST", line 18
ORA-06512: at "NOVA2.PLTEST", line 29

----------------------
I don't think this is at a point where people can rely on it yet.

 

Tom Kyte
April 16, 2006 - 7:37 am UTC

Interesting "observation", funny "conclusion" - that one obscure'ish glitch would make the feature something "you cannot rely on". But anyway - I cannot reproduce it; what parameters have you set differently:

ops$tkyte@XE> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production



ops$tkyte@XE> select * from table(pltest.f(5));

       NUM V
---------- ------------------------------
         1 1
         2 2
         3 3
         4 4
         5 5

ops$tkyte@XE> select pltest.test(5) from dual
  2  /

PLTEST.TEST(5)
--------------
            15
 

conclusion aside ...

Gabe, April 17, 2006 - 9:20 am UTC

Michael's test worked on XE but not on 10gR2 SE.
I tested on 10gR1 over the weekend and it failed too.

Tom Kyte
April 17, 2006 - 10:00 am UTC

I'd like to figure out what is different here - I'm using 100% vanilla init.ora and I'm using a client that is the same release.

Can we get some details on the clients, the nls settings, any set init.ora parameters and such?

Once again you solve my issue.

Andy Sims, April 21, 2006 - 9:14 am UTC

Tom, once again yours is the only site on the internet with clear instructions on how things work in Oracle.

You have saved me so much pain and my company so much money with the solutions and explanations you provide here!

Cannot access rows error...

A reader, June 08, 2006 - 9:34 am UTC

Don't know what causes this, but if Michael changes:

select * from table(pltest.f(5));

...to

select num,v from table(pltest.f(5));

...he'll probably find that it works fine.




parallel_enable & grouping sets

Oren Nakdimon, June 08, 2006 - 6:03 pm UTC

Hi Tom.
Am I missing something here or is this simply an Oracle bug?

This is a simplification of my real problem:

create table t (x number);
insert into t (x) values (1);
insert into t (x) values (2);
commit;

create or replace type out_rec_t as object (x number, y number);
/

create or replace type out_rec_table_t as table of out_rec_t;
/

CREATE or replace FUNCTION f(p sys_refcursor) RETURN out_rec_table_t PIPELINED
PARALLEL_ENABLE(PARTITION p BY ANY)
as
rec t%rowtype;
BEGIN

LOOP
FETCH p INTO rec;
EXIT WHEN p%NOTFOUND;

for i in (select x,y from (SELECT 1 x,2 y FROM dual) group by grouping sets ((x),(y)))
LOOP
PIPE ROW(out_rec_t(i.x,i.y));
END LOOP;
END LOOP;
RETURN;
END;
/

Now, the following query works fine:

select *
from
table(f(cursor(select * from t)));

X Y
---------- ----------
1
2
1
2

4 rows selected.

But when I try to use the parallelism of the pipeline function, I get no rows:

select *
from
table(f(cursor(select /*+ PARALLEL(t) */ * from t)));

no rows selected


I'm on 10gR2.
If I remove the "grouping sets" it works fine.

Thanks,
Oren.


Tom Kyte
June 08, 2006 - 8:27 pm UTC

definitely a bug, please contact support.

The select x,y from (SELECT  1 x,2 y FROM dual) group by grouping sets ((x),(y)) is broken when you use parallel


ops$tkyte@ORA10GR2> create table t (x number);

Table created.

ops$tkyte@ORA10GR2> insert into t (x) values (1);

1 row created.

ops$tkyte@ORA10GR2> insert into t (x) values (2);

1 row created.

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create or replace type out_rec_t as object (x number, y number);
  2  /

Type created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create or replace type out_rec_table_t as table of out_rec_t;
  2  /

Type created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create table msg( id int primary key, ts timestamp default systimestamp, sessionid number, msg varchar2(100) );

Table created.

ops$tkyte@ORA10GR2> create sequence s;

Sequence created.

ops$tkyte@ORA10GR2> create or replace procedure log_msg( p_msg in varchar2 )
  2  as
  3          pragma autonomous_transaction;
  4  begin
  5          insert into msg(id,sessionid,msg) values (s.nextval,sys_context( 'userenv', 'sid'),p_msg);
  6          commit;
  7  end;
  8  /

Procedure created.

ops$tkyte@ORA10GR2> show err
No errors.
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> CREATE or replace FUNCTION f(p sys_refcursor) RETURN out_rec_table_t PIPELINED
  2      PARALLEL_ENABLE(PARTITION p BY ANY)
  3  as
  4      rec t%rowtype;
  5          l_cnt number;
  6  BEGIN
  7
  8    log_msg( 'enter procedure' );
  9    LOOP
 10      FETCH p INTO rec;
 11      EXIT WHEN p%NOTFOUND;
 12          log_msg( 'fetched ' || rec.x );
 13      select count(*) into l_cnt from (select x,y from (SELECT  1 x,2 y FROM dual) group by grouping sets ((x),(y)));
 14          log_msg( 'expect ' || l_cnt );
 15      for i in (select x,y from (SELECT  1 x,2 y FROM dual) group by grouping sets ((x),(y)))
 16      LOOP
 17                  log_msg( 'pipe row ' || i.x || ', ' || i.y );
 18          PIPE ROW(out_rec_t(i.x,i.y));
 19      END LOOP;
 20    END LOOP;
 21    log_msg( 'exit procedure' );
 22    RETURN;
 23  END;
 24  /

Function created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select *
  2  from
  3   table(f(cursor(select * from t)));

         X          Y
---------- ----------
         1
                    2
         1
                    2

ops$tkyte@ORA10GR2> select * from msg order by ts;

        ID TS                              SESSIONID MSG
---------- ------------------------------ ---------- ------------------------------
         1 08-JUN-06 08.19.27.984426 PM          144 enter procedure
         2 08-JUN-06 08.19.27.988650 PM          144 fetched 1
         3 08-JUN-06 08.19.28.120362 PM          144 expect 2
         4 08-JUN-06 08.19.28.160266 PM          144 pipe row 1,
         5 08-JUN-06 08.19.28.170865 PM          144 pipe row , 2
         6 08-JUN-06 08.19.28.171726 PM          144 fetched 2
         7 08-JUN-06 08.19.28.216225 PM          144 expect 2
         8 08-JUN-06 08.19.28.268132 PM          144 pipe row 1,
         9 08-JUN-06 08.19.28.268916 PM          144 pipe row , 2
        10 08-JUN-06 08.19.28.269612 PM          144 exit procedure

10 rows selected.

ops$tkyte@ORA10GR2> delete from msg;

10 rows deleted.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select *
  2  from
  3   table(f(cursor(select /*+ PARALLEL(t) */ * from t)));

no rows selected

ops$tkyte@ORA10GR2> select * from msg order by ts;

        ID TS                              SESSIONID MSG
---------- ------------------------------ ---------- ------------------------------
        11 08-JUN-06 08.19.28.326936 PM          150 enter procedure
        12 08-JUN-06 08.19.28.326942 PM          143 enter procedure
        13 08-JUN-06 08.19.28.326974 PM          159 enter procedure
        14 08-JUN-06 08.19.28.327253 PM          141 enter procedure
        16 08-JUN-06 08.19.28.329200 PM          152 enter procedure
        15 08-JUN-06 08.19.28.329200 PM          142 enter procedure
        17 08-JUN-06 08.19.28.329301 PM          145 enter procedure
        18 08-JUN-06 08.19.28.332048 PM          146 enter procedure
        19 08-JUN-06 08.19.28.337621 PM          159 fetched 1
        20 08-JUN-06 08.19.28.338690 PM          146 exit procedure
        21 08-JUN-06 08.19.28.338937 PM          145 exit procedure
        25 08-JUN-06 08.19.28.339829 PM          142 exit procedure
        22 08-JUN-06 08.19.28.343143 PM          152 exit procedure
        23 08-JUN-06 08.19.28.345407 PM          143 exit procedure
        24 08-JUN-06 08.19.28.347100 PM          150 exit procedure
        26 08-JUN-06 08.19.28.352014 PM          141 exit procedure
        27 08-JUN-06 08.19.28.357256 PM          159 expect 0
        28 08-JUN-06 08.19.28.375186 PM          159 fetched 2
        29 08-JUN-06 08.19.28.389196 PM          159 expect 0
        30 08-JUN-06 08.19.28.405167 PM          159 exit procedure

20 rows selected.
 

Controlling selective PARALLEL access within statement

Avi Haleva, June 15, 2006 - 2:35 am UTC

Hi Tom,
I have a situation in which I'm using grouping sets and I want to allow parallel access to a certain table (during a full scan) and restrict the parallel access for the grouping set scans over the temporary table created for these scans.

Here is an example of the problem :

create table t1 (x number, y number);
insert into t1 (x,y) values (1,2);
insert into t1 (x,y) values (2,2);

select x,y,count(*) from (SELECT x,y FROM t1) group by grouping sets
((x),(y));

The plan is :
----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 78 | 13 (16)| 00:00:01 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | | | | | |
| 3 | TABLE ACCESS FULL | T1 | 2 | 52 | 5 (0)| 00:00:01 |
| 4 | LOAD AS SELECT | | | | | |
| 5 | HASH GROUP BY | | 1 | 13 | 3 (34)| 00:00:01 |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9FC8CD_906870BA | 1 | 13 | 2 (0)| 00:00:01 |
| 7 | LOAD AS SELECT | | | | | |
| 8 | HASH GROUP BY | | 1 | 13 | 3 (34)| 00:00:01 |
| 9 | TABLE ACCESS FULL | SYS_TEMP_0FD9FC8CD_906870BA | 1 | 13 | 2 (0)| 00:00:01 |
| 10 | VIEW | | 1 | 39 | 2 (0)| 00:00:01 |
| 11 | TABLE ACCESS FULL | SYS_TEMP_0FD9FC8CE_906870BA | 1 | 39 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------

select x,y,count(*) from (SELECT /*+ PARALLEL(t1) */ x,y FROM t1) group by grouping sets
((x),(y));

-----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 78 | 10 (20)| 00:00:01 | | | |
| 1 | TEMP TABLE TRANSFORMATION | | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | 2 | 52 | 2 (0)| 00:00:01 | Q1,01 | P->S | QC (RAND) |
| 4 | LOAD AS SELECT | | | | | | Q1,01 | PCWP | |
| 5 | PX RECEIVE | | 2 | 52 | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
| 6 | PX SEND ROUND-ROBIN | :TQ10000 | 2 | 52 | 2 (0)| 00:00:01 | Q1,00 | P->P | RND-ROBIN |
| 7 | PX BLOCK ITERATOR | | 2 | 52 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 8 | TABLE ACCESS FULL | T1 | 2 | 52 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
| 9 | PX COORDINATOR | | | | | | | | |
| 10 | PX SEND QC (RANDOM) | :TQ20002 | 1 | 13 | 3 (34)| 00:00:01 | Q2,02 | P->S | QC (RAND) |
| 11 | LOAD AS SELECT | | | | | | Q2,02 | PCWP | |
| 12 | PX RECEIVE | | 1 | 13 | 3 (34)| 00:00:01 | Q2,02 | PCWP | |
| 13 | PX SEND ROUND-ROBIN | :TQ20001 | 1 | 13 | 3 (34)| 00:00:01 | Q2,01 | P->P | RND-ROBIN |
| 14 | HASH GROUP BY | | 1 | 13 | 3 (34)| 00:00:01 | Q2,01 | PCWP | |
| 15 | PX RECEIVE | | 1 | 13 | 3 (34)| 00:00:01 | Q2,01 | PCWP | |
| 16 | PX SEND HASH | :TQ20000 | 1 | 13 | 3 (34)| 00:00:01 | Q2,00 | P->P | HASH |
| 17 | HASH GROUP BY | | 1 | 13 | 3 (34)| 00:00:01 | Q2,00 | PCWP | |
| 18 | PX BLOCK ITERATOR | | 1 | 13 | 2 (0)| 00:00:01 | Q2,00 | PCWC | |
| 19 | TABLE ACCESS FULL| SYS_TEMP_0FD9FC8DB_906870BA | 1 | 13 | 2 (0)| 00:00:01 | Q2,00 | PCWP | |
| 20 | PX COORDINATOR | | | | | | | | |
| 21 | PX SEND QC (RANDOM) | :TQ30002 | 1 | 13 | 3 (34)| 00:00:01 | Q3,02 | P->S | QC (RAND) |
| 22 | LOAD AS SELECT | | | | | | Q3,02 | PCWP | |
| 23 | PX RECEIVE | | 1 | 13 | 3 (34)| 00:00:01 | Q3,02 | PCWP | |
| 24 | PX SEND ROUND-ROBIN | :TQ30001 | 1 | 13 | 3 (34)| 00:00:01 | Q3,01 | P->P | RND-ROBIN |
| 25 | HASH GROUP BY | | 1 | 13 | 3 (34)| 00:00:01 | Q3,01 | PCWP | |
| 26 | PX RECEIVE | | 1 | 13 | 3 (34)| 00:00:01 | Q3,01 | PCWP | |
| 27 | PX SEND HASH | :TQ30000 | 1 | 13 | 3 (34)| 00:00:01 | Q3,00 | P->P | HASH |
| 28 | HASH GROUP BY | | 1 | 13 | 3 (34)| 00:00:01 | Q3,00 | PCWP | |
| 29 | PX BLOCK ITERATOR | | 1 | 13 | 2 (0)| 00:00:01 | Q3,00 | PCWC | |
| 30 | TABLE ACCESS FULL| SYS_TEMP_0FD9FC8DB_906870BA | 1 | 13 | 2 (0)| 00:00:01 | Q3,00 | PCWP | |
| 31 | PX COORDINATOR | | | | | | | | |
| 32 | PX SEND QC (RANDOM) | :TQ40000 | 1 | 39 | 2 (0)| 00:00:01 | Q4,00 | P->S | QC (RAND) |
| 33 | VIEW | | 1 | 39 | 2 (0)| 00:00:01 | Q4,00 | PCWP | |
| 34 | PX BLOCK ITERATOR | | 1 | 39 | 2 (0)| 00:00:01 | Q4,00 | PCWC | |
| 35 | TABLE ACCESS FULL | SYS_TEMP_0FD9FC8DC_906870BA | 1 | 39 | 2 (0)| 00:00:01 | Q4,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------

As you can see, once I use the parallel hint on table t1, it is also applied to the full scans on the temporary objects. Assuming this temporary object can reside in memory, I can see there is major overhead in using parallel access, so I want to avoid it.

Another issue I could not overcome is the initial I/O cost of creating the temp object (and of the first full scans that do actual physical reads); the next iteration is in memory (no physical reads at all). How can one control this behaviour via configuration?

In my case we need to perform the grouping sets on several dozens of columns so the cost of each iteration is crucial.

Thanks in advance ,
Avi

parameterized view

jianhui, July 03, 2006 - 7:06 pm UTC

Hi Tom,
It's inconvenient to set a context in order to simulate a parameterized view when there is more than one parameter.

But it's very easy to write a function that returns ref cursor and has more than one parameters.

The challenge is how to view the data of the ref cursor just as if it were a table?
i.e. (pseudo code)
function foo (p1 int, p2 int)
return sys_refcursor
is
c sys_refcursor; -- the built-in ref cursor type
begin
open c for select * from table_a where col1 = p1 and col2 = p2;
return c;
end;

select * from table(foo(1,2)) does not work. I know that I can iterate through the cursor and pipe it out to a SQL type in function foo, then use the TABLE function in the SELECT statement. However, that seems like a lot of coding work. Is that the only way, or do you have a better idea?

Best Regards,


Tom Kyte
July 07, 2006 - 7:29 pm UTC

you would really need to define "the client"

say "the client" is sqlplus, then:

ops$tkyte@ORA10GR2> create or replace function foo( p1 int, p2 int ) return sys_refcursor
  2  as
  3          l_cursor sys_refcursor;
  4  begin
  5          open l_cursor for select p1, p2 from dual;
  6          return l_cursor;
  7  end;
  8  /

Function created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> variable x refcursor
ops$tkyte@ORA10GR2> exec :x := foo( 42, 55 );

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> print x;

       :B2        :B1
---------- ----------
        42         55

easy as pie. 

pipeline function with parameter fails

jianhui, July 05, 2006 - 8:44 pm UTC

Hi Tom,
I am not sure why the function fails when it has a parameter, but works without one. Could you shed some light on this? (DB version is 9205)

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
PL/SQL Release 9.2.0.5.0 - Production
CORE    9.2.0.6.0       Production
TNS for Solaris: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production

SQL> create or replace type t as object (c varchar2(30));
  2  /

Type created.

SQL> create or replace type t_set as table of t;
  2  /

Type created.

SQL> CREATE OR REPLACE FUNCTION foo
(
  p_c varchar2
)
RETURN t_set PIPELINED IS
  out_rec t := t(p_c);
BEGIN
  PIPE ROW(out_rec);
  return;
END foo;  2    3    4    5    6    7    8    9   10
 11  /

Function created.

SQL> sELECT * FROM TABLE(foo('hello'));
sELECT * FROM TABLE(foo('hello'))
              *
ERROR at line 1:
ORA-22905: cannot access rows from a non-nested table item


SQL> CREATE OR REPLACE FUNCTION foo
RETURN t_set PIPELINED IS
  out_rec t := t('hello');
BEGIN
  PIPE ROW(out_rec);
  return;
END foo;  2    3    4    5    6    7
  8  /

Function created.


SQL> sELECT * FROM TABLE(foo);

C
------------------------------
hello

SQL>
 

Tom Kyte
July 08, 2006 - 9:33 am UTC

you have the dreaded, hated "cursor_sharing" parameter set to something other than "exact"

Meaning - someone has forgotten to use bind variables in their code (making it so insecure, so very very insecure, even with cursor_sharing=force|similar)....


Consider:

ops$tkyte@ORA9IR2> alter system flush shared_pool;

System altered.

ops$tkyte@ORA9IR2> alter session set cursor_sharing=exact;

Session altered.

ops$tkyte@ORA9IR2> sELECT * FROM TABLE(foo('hello'));

C
------------------------------
hello

ops$tkyte@ORA9IR2> alter system flush shared_pool;

System altered.

ops$tkyte@ORA9IR2> alter session set cursor_sharing=force;

Session altered.

ops$tkyte@ORA9IR2> sELECT * FROM TABLE(foo('hello'));
sELECT * FROM TABLE(foo('hello'))
              *
ERROR at line 1:
ORA-22905: cannot access rows from a non-nested table item


ops$tkyte@ORA9IR2> sELECT * FROM TABLE(cast( foo('hello') as t_set));

C
------------------------------
hello


So, you see the root cause, you have a workaround (using cast), but basically the best thing is to never set cursor_sharing to anything other than EXACT at the system level! 

it worked on 10gR2, maybe a bug in 9i?

A reader, July 08, 2006 - 12:26 am UTC

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> create or replace type t as object (c varchar2(30));
  2  /

Type created.

SQL> create or replace type t_set as table of t;
  2  /

Type created.

SQL>  CREATE OR REPLACE FUNCTION foo
(
  p_c varchar2
)
RETURN t_set PIPELINED IS
  out_rec t := t(p_c);
BEGIN
  PIPE ROW(out_rec);
  return;
END foo;   2    3    4    5    6    7    8    9   10  
 11  /

Function created.

SQL> select * from table(foo('hello'));

C
------------------------------
hello

 

Tom Kyte
July 08, 2006 - 8:54 pm UTC

yes it was, but I'll still blame cursor_sharing entirely. Hate it, won't use it. Implies you've done something horribly wrong - the worst possible thing you can do in Oracle in fact.



works like charm

jianhui, July 11, 2006 - 4:07 pm UTC

Hi Tom,
Thanks for catching this. Once again, I am impressed! How could you find out that cursor sharing parameter caused the problem for TABLE function in this case?

By the way, I agree with you about cursor_sharing setting, this is just a testing database for me as solo user.

Sincerely,


Tom Kyte
July 12, 2006 - 3:19 pm UTC

I use Oracle a lot. A whole lot. Constantly. With lots of different people. I get to see a lot of things.

I remember things very well too...

I've seen this before.

addressing in parameters

Martina, August 08, 2006 - 6:51 am UTC

Hi Tom,
could you please explain why the following simple
test case does not work?
(i'm using oracle version 9.2.0.7.0)

create or replace type my_type as OBJECT
(
a NUMBER,
b VARCHAR2(20)
)
/

create or replace type my_type_set is table of my_type
/

CREATE OR REPLACE FUNCTION f_test_pipelined (
p1 NUMBER DEFAULT NULL,
p2 NUMBER DEFAULT NULL
)
RETURN my_type_set PIPELINED
IS
out_rec my_type := my_type (NULL, NULL);
BEGIN
IF p2 IS NOT NULL
THEN
out_rec.a := 1;
out_rec.b := p2;
PIPE ROW (out_rec);
ELSE
out_rec.a := 2;
out_rec.b := 'null';
PIPE ROW (out_rec);
END IF;
RETURN;
END;

Now, I can call the function with parameters e.g.

select * from table(f_test_pipelined(1,2));

However if I wish to enter only parameter p2, I get an ORA exception

select * from table (f_test_pipelined(p2=>2));

ORA-00907: missing right parenthesis

Thanks !




Tom Kyte
August 08, 2006 - 7:59 am UTC

because you are not allowed to use named notation (a plsql'ism) in SQL.
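A simple workaround in releases that reject named notation in SQL, sketched against the example above: supply the leading parameters positionally instead - here p1 defaults to NULL anyway, so passing NULL explicitly is equivalent:

select * from table( f_test_pipelined( null, 2 ) );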

order by column descending with pipelined parallel_enable?

A reader, August 09, 2006 - 11:40 am UTC

It appears from my testing that you cannot specify "desc" in the ORDER BY for a pipelined parallel_enabled function?

Do you agree with that?

Defining a function
FUNCTION P(L_CURSOR IN FACURSORTYPE) RETURN PIPE_TAB_TYPE
PIPELINED ORDER L_CURSOR BY(EMP_ID, EMP_CD, HIRE_DT DESC)
PARALLEL_ENABLE(PARTITION L_CURSOR BY RANGE(EMP_ID, EMP_CD));
or
FUNCTION P(L_CURSOR IN FACURSORTYPE) RETURN PIPE_TAB_TYPE
PIPELINED ORDER L_CURSOR BY(EMP_ID, EMP_CD, HIRE_DT )
PARALLEL_ENABLE(PARTITION L_CURSOR BY RANGE(EMP_ID, EMP_CD));

causes the rows to be sent to the slaves in HIRE_DT ascending order.
--
I have searched through a lot of documentation, your site, and others and not been able to find any reference to sorting by descending order.

Thanks so much for your site

Tom Kyte
August 09, 2006 - 1:03 pm UTC

I would say if it is not in descending order - it would be a bug.

order by column descending with pipelined parallel_enable?

A reader, August 09, 2006 - 12:06 pm UTC

I'm also a little confused by the difference between CLUSTER and ORDER. It appears to me that ORDER does what CLUSTER does but with the capability of adding more columns than there are in the PARTITION BY clause.

Hopefully, you can correct/explain it in a way that is a little clearer.

Thanks again for your site.

Tom Kyte
August 09, 2006 - 2:28 pm UTC

http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14289/dcitblfns.htm#sthref695

...
Clustering causes rows that have the same key values to appear together but does not otherwise do any ordering of rows.
......

if the rows were ordered after clustering, it was "by accident" and should not be relied on.

Can ref cursor be treated as table

Vinayak, August 19, 2006 - 8:35 pm UTC

Hi Tom,
I have a function that returns a ref cursor to the calling program. The ref cursor contains around 15 columns say col1, col2, col3.....col15 and returns around 100000 rows. My requirement is to show the total number of records returned by the ref cursor based on a few simple conditions. The conditions are count the number of records where col2 is null, col5='A', col7 is not null, col8 =44, col12='TEMPO' and col15=to_date('15/08/2006','dd/mm/yyyy').
One way is to fetch the records from the ref cursor one-by-one, do all the above checks and then increment the counter by 1 for the records satisfying all the above conditions.
Is there any way by which I can treat the ref cursor as a table and directly apply the conditions through a WHERE clause as it is applied to a static table and remove the iterative approach. I wanted the count of records in 1 shot.

Thanks
Vinayak

Tom Kyte
August 19, 2006 - 9:21 pm UTC

no, there is not.

Anticipated answer!!!!

Vinayak, August 21, 2006 - 9:14 am UTC

Tom,
That was the expected answer. But still, is there any better approach you can think of for this? Iterating over 1 million records is quite time consuming.


Tom Kyte
August 27, 2006 - 9:24 am UTC

sure, you can do it IN A SET BASED operation (eg: the ref cursor returned to you is NOT the one you really want, hence you need a new API call that returns the correctly formed refcursor).


scott%ORA10GR2> select empno, ename, job, count(case when job='CLERK' then 1 end) over () tot_clerks,
2 count(case when hiredate < to_date( '01011983', 'ddmmyyyy' ) then 1 end) over () tot_hires_before_1983
3 from emp;

EMPNO ENAME JOB TOT_CLERKS TOT_HIRES_BEFORE_1983
---------- ---------- --------- ---------- ---------------------
7369 SMITH CLERK 4 12
7499 ALLEN SALESMAN 4 12
7521 WARD SALESMAN 4 12
7566 JONES MANAGER 4 12
7654 MARTIN SALESMAN 4 12
7698 BLAKE MANAGER 4 12
7782 CLARK MANAGER 4 12
7788 SCOTT ANALYST 4 12
7839 KING PRESIDENT 4 12
7844 TURNER SALESMAN 4 12
7876 ADAMS CLERK 4 12
7900 JAMES CLERK 4 12
7902 FORD ANALYST 4 12
7934 MILLER CLERK 4 12

14 rows selected.


Pipelined function in view AND arguments submitted

Dieter, November 26, 2006 - 7:26 am UTC

I wanted to follow up on this discussion with a (hopefully not too) silly question: we most often pack the results of a pipelined function into a view to make it transparent to the rest of the application but by doing so we lack the ability to submit any arguments to the function itself.
Would you know of any "tricks" to combine those two requirements (make the pipelined function usable like a regular view and still allow arguments to be passed to it)?


Tom Kyte
November 26, 2006 - 9:41 am UTC

Help needed

Nikhil, January 03, 2007 - 6:46 am UTC

Hello Tom,
I am using Oracle 10g.

create or replace TYPE record_struct
AS OBJECT ( owner varchar2(30), OBJECT_NAME varchar2(30), OBJECT_ID number, USER_ID number);
/


create or replace TYPE record_TABLE
AS TABLE OF record_struct;
/

CREATE OR REPLACE FUNCTION f1(x number) RETURN record_TABLE PIPELINED IS
l_array record_TABLE;
cursor c is select a.owner owner, a.OBJECT_NAME object_name, a.OBJECT_ID object_id, b.USER_ID user_id
from all_objects a, all_users b
where a.owner = b.username;
rec c%rowtype;
BEGIN
open c;
fetch c into rec;
loop
exit when c%notfound;
l_array := rec;
pipe row(l_array);
end loop;
return;
end;
/

SQL> sho err
Errors for FUNCTION F1:

LINE/COL ERROR
-------- -----------------------------------------------------------------
12/3 PL/SQL: Statement ignored
12/14 PLS-00382: expression is of wrong type
13/3 PL/SQL: Statement ignored
13/12 PLS-00382: expression is of wrong type

How to solve this?

Thanks in advance

Tom Kyte
January 05, 2007 - 8:06 am UTC

why do people put the same exact thing in multiple locations???? look to the first place you posted this.
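For the record, a minimal sketch of how the function above could be written so that it compiles and terminates - the loop has to pipe individual RECORD_STRUCT instances (not assign a %ROWTYPE record to the collection type), and with a cursor FOR loop the fetch/exit bookkeeping disappears:

create or replace function f1( x number )  -- parameter kept from the original, unused here
return record_table pipelined
is
begin
  for rec in ( select a.owner, a.object_name, a.object_id, b.user_id
                 from all_objects a, all_users b
                where a.owner = b.username )
  loop
    pipe row( record_struct( rec.owner, rec.object_name,
                             rec.object_id, rec.user_id ) );
  end loop;
  return;
end;
/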

A reader, January 18, 2007 - 4:26 am UTC


Tom,

I've got a pipelined function like

CREATE OR REPLACE FUNCTION f1 (
x NUMBER
)
RETURN numset_t PIPELINED
IS
m NUMBER DEFAULT 10000;
BEGIN
FOR i IN 1 .. x
LOOP
FOR x IN ( SELECT aoll_aol_id_from
FROM acme_dba.aol_links
WHERE ROWNUM < m )
LOOP
NULL;
END LOOP;

PIPE ROW (i + m );
END LOOP;

PIPE ROW ( -10 );
RETURN;
END;
/


And do a query

select * from table(f1(10))


But I don't see true 'piping': the last row arrives along with the first row. I would have thought there would be a delay between the two rows?

CBO with pipelined functions

Matt McPeak, June 26, 2007 - 5:05 pm UTC

I have a query where a pipelined function -- TABLE(CAST(my_func(:n) AS my_tbl)) -- is joined with several large tables.

I am running into a problem in that, at runtime, the CBO doesn't know that the pipelined function will return less than 5 rows and therefore A) should read it first and B) use nested loops to access the large tables.

How do I get the required information to the CBO without using hints, which I have seen you advise against?

Thanks!
Tom Kyte
July 02, 2007 - 9:52 am UTC

I advise against hints that tell the optimizer HOW to do something.

I advise to use hints that give the optimizer INFORMATION so it can make the right decision itself.

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3779680732446#15740265481549
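A sketch of what that looks like in practice - my_func and my_tbl are from the question above; big_table and the join column are made-up names for illustration:

select /*+ cardinality( t 5 ) */ b.*
  from table( cast( my_func( :n ) as my_tbl ) ) t,
       big_table b
 where b.id = t.id;

The hint tells the optimizer to assume roughly 5 rows will come out of the collection, which is usually enough information for it to pick the nested-loops plan on its own.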

Thanks!

Matt McPeak, July 02, 2007 - 6:13 pm UTC

Thanks! The CARDINALITY hint was just what I was looking for! "Not quite well documented" may be a bit of an understatement though. :)

Thanks again,
Matt

pipelined & recursive ?

martina, July 05, 2007 - 9:44 pm UTC

Hi Tom,

Is there a way to write a recursive pipelined function? I tried once, but I gave up when I realized that I needed an array to stop the recursion anyway.

thanks, martina
Tom Kyte
July 06, 2007 - 12:36 pm UTC

no, you cannot call the pipelined function from plsql itself

captain obvious, July 12, 2007 - 9:35 am UTC

However, you may very well have a function (F1) returning the collection type you are using in your pipelined function(F2), and parse the returned collection (from F1) to return it in a pipelined format (in F2).
This allows you to provide many access points to the same datasource. For instance, you may have many validation and transformation scenarios based on the same data, so you have many F2 for a single F1.
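A minimal sketch of that pattern, reusing the myScalarType/myTableType definitions from the top of this page - f1 is an ordinary (and, if you like, recursive) function that builds the whole collection, and f2 merely pipes it out:

create or replace function f1 return myTableType
is
  l_data myTableType := myTableType();
begin
  for i in 1 .. 3
  loop
    l_data.extend;
    l_data( l_data.count ) := myScalarType( i, sysdate+i, 'row ' || i );
  end loop;
  return l_data;
end;
/
create or replace function f2 return myTableType
PIPELINED
is
  l_data myTableType := f1();   -- legal: f1 is not pipelined, so PL/SQL may call it
begin
  for i in 1 .. l_data.count
  loop
    -- any validation/transformation would go here before piping
    pipe row( l_data(i) );
  end loop;
  return;
end;
/
select * from table( f2() );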

arrays of anydataset (?)

Vinay Chandrakant, December 05, 2007 - 10:46 am UTC

Hi Tom,
I have a question on the usage of anydataset. Not sure if this should have been a new question, but this at least partly deals with pipelined functions, so I thought it would be appropriate here..

I have a pipelined interface function that returns anydataset. Within the type body, I would like to populate an additional anydataset variable (a package-level 'global' variable, defined in a package specification).

My question is, can I declare and use an 'array of anydataset'? Something like:

type t_adset is table of anydataset;
adset t_adset := t_adset();

And then go on to use it in this fashion:

ANYDATASET.BeginCreate(DBMS_TYPES.TYPECODE_OBJECT, SELF.t_anytype, packagename.adset(1) );
packagename.adset(1).AddInstance();
packagename.adset(1).PieceWise();
packagename.adset(1).SetVarchar2('some_text');
packagename.adset(1).EndCreate();

If I try this I get the following:

ORA-22370: incorrect usage of method AnyDataset Copy

If, instead of trying to create an array of anydataset, I use a simple anydataset variable like adset anydataset; and try populating that variable, it works perfectly!

Is there any 'workaround' for this, or any way I can make it happen?

Thanks and Regards,
Vinay Chandrakant

A different angle on the ordered collection question

kerry, August 28, 2008 - 12:10 pm UTC

I am 100% in agreement with the opinion stated way back that we should not count on the order of records in a collection or rows piped from a table function. The application code should of course supply an ORDER BY clause.

Mine is a different concern: Is there any way to indicate to the optimizer (perhaps with a pragma, hint or other keyword) information about the ordering of the rows coming from a table function so that a subsequent sort might be eliminated?

What I'm envisioning is something like NOSORT syntax (for create index), which fails, of course, if the ordering of the data is not upheld.

My application for this involves time series data in a file being accessed through a pipelined table function. All of the data (and there's a LOT of it) is in time order already, and I'd like to apply analytics without an extra sort.
Tom Kyte
August 29, 2008 - 10:50 pm UTC

... Is there any way to indicate to the optimizer
(perhaps with a pragma, hint or other keyword) information about the ordering
of the rows coming from a table function so that a subsequent sort might be
eliminated?
...

no, other than "noparallel" really - parallel query will mess with this.


but when you added the last paragraph, it went beyond what noparallel can do - here you want to tell the optimizer "we are already sorted by X,Y,Z" - there is no such capability as of this writing...

A reader, December 03, 2008 - 12:04 pm UTC

Good one Tom!

ORA-30732 with Pipelined function

Parthiban Nagarajan, March 24, 2009 - 11:50 pm UTC

Hello Tom

Here I post an issue with a pipelined function.

"find_dup_vals" is a function that uses REF CURSOR (Dynamic SQL). It gets table name - 'tname' and column name - 'code_colname' as input parameters and returns a table type.  The functionality it is created for is to find the repeated values in 'desc_colname' using SOUNDEX and some parsing.

DESCRIPTIONS :
--------------
SQL> desc find_dup_vals
FUNCTION find_dup_vals RETURNS DUP_VAL_TTY
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 TNAME                          VARCHAR2                IN
 CODE_COLNAME                   VARCHAR2                IN
 DESC_COLNAME                   VARCHAR2                IN     DEFAULT

SQL> desc dup_val_tty
 dup_val_tty TABLE OF DUP_VAL_TY
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CODE                                               VARCHAR2(10)
 SHORT_DESC                                         VARCHAR2(100)

SQL> desc dup_val_ty
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CODE                                               VARCHAR2(10)
 SHORT_DESC                                         VARCHAR2(100)

THE ISSUE : The following query raises ORA-30732 in SQL*Plus but not in TOAD or SQL Tools.
-----------

SQL> select * from table(find_dup_vals('tname', 'colname'));
              *
ERROR at line 1:
ORA-30732: table contains no user-visible columns

Kindly help me to know what is wrong there ...

Tom Kyte
March 29, 2009 - 2:26 pm UTC

why wouldn't you post the actual create statements? why??

or even mention the versions/platforms?!?!?
ops$tkyte%ORA10GR2> create type dup_val_ty
  2  as object
  3  ( code varchar2(10),
  4    short_desc varchar2(100)
  5  )
  6  /

Type created.

ops$tkyte%ORA10GR2> create type dup_val_tty
  2  as table of dup_val_ty
  3  /

Type created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace function find_dup_vals( tname in varchar2, code_colname in varchar2, desc_colname in varchar2 default null )
  2  return dup_val_tty
  3  PIPELINED
  4  as
  5  begin
  6          pipe row( dup_val_ty( 'hello', 'world' ) );
  7          return;
  8  end;
  9  /

Function created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from table(find_dup_vals('tname', 'colname'));

CODE
----------
SHORT_DESC
-------------------------------------------------------------------------------
hello
world




so, take my example

add to it

until it breaks

and if it is not obvious what is wrong, feel free to post something that doesn't work so we can observe it as well.

unnested pipeline function performance

A reader, July 14, 2009 - 12:36 am UTC

Hi Tom,
I am having trouble with the explain plan for a query involving a pipelined function.
I am running 10g.

My query looks like this:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
select /*+ ordered */ * from EXT_LS_DLC_INS_DRV_V a
, table(udt_parser.fn_parser_pivoted_cartridge(cursor(select
PIT_START,
PIT_STOP,
PIT_START_YQ,
UIDCHANNELCUT,
START_TIME,
STOP_TIME,
SPI,
INTERVALCOUNT,
VALUECODES,
null dstparticipant
from dual),'EXT_LS_DLC_D_PUB_INS_VE',1,1))B
where a.audit_id=182600
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
When I run the query with only the view(not the pipeline function), it has a decent explain plan and query return time:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
select * from EXT_LS_DLC_INS_DRV_V a
where a.audit_id=182600

---------------------------------
SELECT STATEMENT, GOAL = ALL_ROWS 15966 1 321
FILTER
TABLE ACCESS BY LOCAL INDEX ROWID LODSTAR LSCHANNELCUTDATA 119 1 115
NESTED LOOPS 15966 1 321
NESTED LOOPS 15847 1 206
NESTED LOOPS 14092 1 113
MERGE JOIN CARTESIAN 14091 1 62
TABLE ACCESS BY INDEX ROWID EXT EXT_AUDITS 130 1 17
NESTED LOOPS 14089 1 37
TABLE ACCESS FULL EXT EXT_REPORTS 3 140 2800
INDEX RANGE SCAN EXT EXT_AUDITS_FK 1 700
BUFFER SORT 13961 5 125
TABLE ACCESS BY INDEX ROWID EXT EXT_AUDIT_DETAILS 2 5 125
INDEX RANGE SCAN EXT EXT_AUDIT_DETAILS_N1 1 5
TABLE ACCESS BY INDEX ROWID EXT EXT_REPORT_OBJECTS 1 1 51
INDEX UNIQUE SCAN EXT EXT_REPORT_OBJECTS_PK 0 1
PARTITION RANGE ITERATOR 1755 46 4278
PARTITION LIST ITERATOR 1755 46 4278
TABLE ACCESS BY LOCAL INDEX ROWID LODSTAR LSCHANNELCUTHEADER 1755 46 4278
INDEX RANGE SCAN LODSTAR LSCHANNELCUTHEADER_N3 1754 8
PARTITION RANGE ITERATOR 118 1
PARTITION LIST ITERATOR 118 1
INDEX RANGE SCAN LODSTAR LSCHANNELCUTDATA_SPK 118 1
PARTITION RANGE ITERATOR 182 1 62
PARTITION LIST ITERATOR 182 1 62
TABLE ACCESS BY LOCAL INDEX ROWID LODSTAR LSCHANNELCUTHEADER 182 1 62
INDEX RANGE SCAN LODSTAR LSCHANNELCUTHEADER_N2 181 1
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
but as soon as I add the pipelined function, i.e. run the first query above (though it is essentially running on the same data set), the explain plan has full table scans and the query never returns.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SELECT STATEMENT, GOAL = ALL_ROWS 567189872 16360 6086001800
NESTED LOOPS 567189872 16360 6086001800
VIEW APP EXT_LS_DLC_INS_DRV_V 567189839 1 2181
FILTER
HASH JOIN 567189839 1 2225
HASH JOIN 132383191 46 8234
VIEW APP EXT_AUDIT_REPORT_OBJECTS_V 15657 1 31
HASH JOIN 15657 1 256
HASH JOIN 14003 1 212
NESTED LOOPS 14000 1 181
VIEW APP EXT_AUDITS_V 13992 1 35
TABLE ACCESS BY INDEX ROWID EXT EXT_AUDITS 13992 1 22
INDEX FULL SCAN EXT EXT_AUDITS_FK 110 75580
VIEW APP EXT_REPORT_OBJECTS_V 8 1 146
TABLE ACCESS FULL EXT EXT_REPORT_OBJECTS 8 2 154
VIEW APP EXT_REPORTS_V 3 140 4340
TABLE ACCESS FULL EXT EXT_REPORTS 3 140 2800
VIEW APP EXT_AUDIT_DETAILS_V 1651 332580 14633520
TABLE ACCESS FULL EXT EXT_AUDIT_DETAILS 1651 332580 8314500
PARTITION RANGE ALL 132357882 1276987150 188994098200
PARTITION LIST ALL 132357882 1276987150 188994098200
VIEW APP LS_LSCHANNELCUTHEADER_V 132357882 1276987150 188994098200
TABLE ACCESS BY LOCAL INDEX ROWID LODSTAR LSCHANNELCUTHEADER 132357882 1276987150 118759804950
INDEX FULL SCAN LODSTAR LSCHANNELCUTHEADER_N3 5451992 1276987150
PARTITION RANGE ALL 434796738 1311093100 2682496482600
PARTITION LIST ALL 434796738 1311093100 2682496482600
VIEW APP LS_LSCHANNELCUTDATA_V 434796738 1311093100 2682496482600
TABLE ACCESS BY LOCAL INDEX ROWID LODSTAR LSCHANNELCUTDATA 434796738 1311093100 150775706500
INDEX FULL SCAN LODSTAR LSCHANNELCUTDATA_SPK 3740600 1311093100
PARTITION RANGE ITERATOR 182 1
PARTITION LIST ITERATOR 182 1
VIEW APP LS_LSCHANNELCUTHEADER_V 182 1
TABLE ACCESS BY LOCAL INDEX ROWID LODSTAR LSCHANNELCUTHEADER 182 1 62
INDEX RANGE SCAN LODSTAR LSCHANNELCUTHEADER_N2 181 1
VIEW APP 33 16360 6050320640
COLLECTION ITERATOR PICKLER FETCH UDT_PARSER FN_PARSER_PIVOTED_CARTRIDGE
----------

what am I doing wrong? I tried using hints like no_unnest but nothing seems to work..
thanks!

still the same plan

A reader, July 16, 2009 - 5:39 pm UTC

Based on the link, I first found the number of rows returned by the view:
select count(*) from EXT_LS_DLC_INS_DRV_V where audit_id=182600;
that gave me 76 rows.

---------------
so i tried to put in the hint:
select /*+ cardinality (a 76)*/ * from EXT_LS_DLC_INS_DRV_V a
, table(udt_parser.fn_parser_pivoted_cartridge(cursor(select
PIT_START,
PIT_STOP,
PIT_START_YQ,
UIDCHANNELCUT,
START_TIME,
STOP_TIME,
SPI,
INTERVALCOUNT,
VALUECODES,
null dstparticipant
from dual),'EXT_LS_DLC_D_PUB_INS_VE',1,1))B
where a.audit_id=182600;
It still gives the same plan ..
---------
SELECT STATEMENT, GOAL = ALL_ROWS 567189872 16360 6086001800
NESTED LOOPS 567189872 16360 6086001800
VIEW APP EXT_LS_DLC_INS_DRV_V 567189839 1 2181
FILTER
HASH JOIN 567189839 1 2225
HASH JOIN 132383191 46 8234
VIEW APP EXT_AUDIT_REPORT_OBJECTS_V 15657 1 31
HASH JOIN 15657 1 256
HASH JOIN 14003 1 212
NESTED LOOPS 14000 1 181
VIEW APP EXT_AUDITS_V 13992 1 35
TABLE ACCESS BY INDEX ROWID EXT EXT_AUDITS 13992 1 22
INDEX FULL SCAN EXT EXT_AUDITS_FK 110 75580
VIEW APP EXT_REPORT_OBJECTS_V 8 1 146
TABLE ACCESS FULL EXT EXT_REPORT_OBJECTS 8 2 154
VIEW APP EXT_REPORTS_V 3 140 4340
TABLE ACCESS FULL EXT EXT_REPORTS 3 140 2800
VIEW APP EXT_AUDIT_DETAILS_V 1651 332580 14633520
TABLE ACCESS FULL EXT EXT_AUDIT_DETAILS 1651 332580 8314500
PARTITION RANGE ALL 132357882 1276987150 188994098200
PARTITION LIST ALL 132357882 1276987150 188994098200
VIEW APP LS_LSCHANNELCUTHEADER_V 132357882 1276987150 188994098200
TABLE ACCESS BY LOCAL INDEX ROWID LODSTAR LSCHANNELCUTHEADER 132357882 1276987150 118759804950
INDEX FULL SCAN LODSTAR LSCHANNELCUTHEADER_N3 5451992 1276987150
PARTITION RANGE ALL 434796738 1311093100 2682496482600
PARTITION LIST ALL 434796738 1311093100 2682496482600
VIEW APP LS_LSCHANNELCUTDATA_V 434796738 1311093100 2682496482600
TABLE ACCESS BY LOCAL INDEX ROWID LODSTAR LSCHANNELCUTDATA 434796738 1311093100 150775706500
INDEX FULL SCAN LODSTAR LSCHANNELCUTDATA_SPK 3740600 1311093100
PARTITION RANGE ITERATOR 182 1
PARTITION LIST ITERATOR 182 1
VIEW APP LS_LSCHANNELCUTHEADER_V 182 1
TABLE ACCESS BY LOCAL INDEX ROWID LODSTAR LSCHANNELCUTHEADER 182 1 62
INDEX RANGE SCAN LODSTAR LSCHANNELCUTHEADER_N2 181 1
VIEW APP 33 16360 6050320640
COLLECTION ITERATOR PICKLER FETCH UDT_PARSER FN_PARSER_PIVOTED_CARTRIDGE

trapzz, August 31, 2009 - 8:46 am UTC

Hi Tom,
I used a pipelined function for the first time and it worked fine in two environments, development and UAT,
both having the same database version:
1 Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
2 PL/SQL Release 10.2.0.3.0 - Production
3 CORE 10.2.0.3.0 Production
4 TNS for Linux: Version 10.2.0.3.0 - Production
5 NLSRTL Version 10.2.0.3.0 - Production
So I moved it to the live environment, which has the same database version, but there the function returns no data although the data is there.
Please help me out in this regard. What could be the reason behind it?
Tom Kyte
August 31, 2009 - 1:11 pm UTC

what is "this pipelined function"??!??

... What must be the reason behind it. ...

a bug in your developed code would be my first idea/thought. If your pipelined function is data driven (likely, most are) and the data is different in production than in test (likely, it is), then the probability this is a bug in your code is very high.

but you give us *nothing* to look at.

If your code has when others and is not followed by raise/raise_application_error - we will make fun of it. Just a warning.

a quick question regarding pipelined functions...

Mathan, October 29, 2009 - 10:11 am UTC

Tom, can we execute a pipelined function by any means other than calling it from a select statement...
Tom Kyte
October 29, 2009 - 10:20 am UTC

they are meant to be selected from.


ops$tkyte%ORA10GR2> create or replace function foo return sys.odcinumberlist
  2  pipelined
  3  as
  4  begin
  5          pipe row(1);
  6  end;
  7  /

Function created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from table(foo);

COLUMN_VALUE
------------
           1

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          l_data sys.odcinumberlist;
  3  begin
  4          l_data := foo();
  5  end;
  6  /
        l_data sys.odcinumberlist;
        *
ERROR at line 2:
ORA-06550: line 1, column 10:
PLS-00653: aggregate/table functions are not allowed in PL/SQL scope
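If you need the rows inside PL/SQL, you still select from the function - let the SQL engine drive it, e.g. (a sketch using the same foo as above):

declare
  l_data sys.odcinumberlist;
begin
  select column_value
    bulk collect into l_data
    from table( foo() );
  dbms_output.put_line( 'fetched ' || l_data.count || ' row(s)' );
end;
/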

Java Pipelined Functions

Fernando, September 03, 2010 - 3:10 pm UTC

Tom,

Is it possible to PIPE ROW from Java code?
I've read that it is possible, but I couldn't find any working example and haven't been able to make my Java function work.

This is what I have:

CREATE OR REPLACE FUNCTION ls(p_directory IN VARCHAR2)
RETURN text_line PIPELINED AS
LANGUAGE JAVA
NAME 'Folder.ls( java.lang.String ) return java.lang.String' ;
/

CREATE OR REPLACE AND RESOLVE JAVA SOURCE NAMED "Folder" as
import java.io.*;
import java.sql.*;

public class Folder {
public static void ls(String directory) throws SQLException {
File path = new File( directory );
File[] list = path.listFiles();
String filename;
for(int i = 0; i < list.length; i++) {
filename = list[i].getName();
#sql { PIPE ROW (:filename) };
}
}
}
/

Could you please point out what am I doing wrong or give a more convenient example ?

Thanks in advance
Tom Kyte
September 09, 2010 - 7:24 pm UTC

pipe row is plsql - I've never heard of it in regards to java, where did you read it?


Pipeline function and java

Lal Cyril, September 30, 2011 - 4:00 am UTC

Hi Tom,
I have the following doubts on pipelined functions

1) I need to pass a huge collection of collections to a Java program via a pipelined function. Is that possible?
For example, I have a table that stores person details and other tables that store person activities, person leaves, etc. To give the person details to a Java program, the best way would be to design a collection of persons, each of which has an activities collection, a leaves collection, etc. (a collection of collections). Can I use a pipelined function to return a collection of collections for this case?

2) You mentioned that a pipelined function is for returning a huge result set with the data made available to the calling program as soon as it is put in the type. This way we can
reduce the memory consumption for the type in the database layer.

Can the same be done in the Java layer?
That is, is it possible to call a pipelined function from Java and, without waiting for the entire result set to be populated, consume the result set in Java and write it out to XML (in a pipelined way), so that memory consumption in the Java layer is also reduced, since the result set is very huge?

If yes, can you share a sample to help understand how it can be done in Java?
Tom Kyte
September 30, 2011 - 6:27 pm UTC

1) why would you use a pipelined function - why wouldn't you just use SQL?

2) yes - because java is just a calling program, it isn't any different than C or Visual Basic as far as we are concerned.


One wonders why you would want to return a huge set to java - when you could just produce the output (xml included) you need right from the database - unless you get paid by source lines of code and want this to be slow that is....

pipelined functions in a package

Fede H, October 11, 2011 - 3:35 pm UTC

Hi Tom,

It looks like in 11g we can use pipelined functions with types defined inside a package; are there any known disadvantages to this method?

Thanks,
Fede

CREATE OR REPLACE PACKAGE my_pkg IS
TYPE myscalartype IS RECORD
(
a INT,
b DATE,
c VARCHAR2 (25)
);

TYPE mytabletype IS TABLE OF myscalartype;

FUNCTION f
RETURN mytabletype
PIPELINED;
END;
/


CREATE OR REPLACE PACKAGE BODY my_pkg IS
FUNCTION f
RETURN mytabletype
PIPELINED IS
v_rec myscalartype;
BEGIN
FOR i IN 1 .. 5
LOOP
v_rec.a := i;
v_rec.b := SYSDATE + i;
v_rec.c := 'row ' || i;
PIPE ROW (v_rec);
END LOOP;

RETURN;
END;
END;
/

SELECT * FROM TABLE (my_pkg.f ());
Tom Kyte
October 11, 2011 - 6:10 pm UTC

well, you are not saving on schema objects:

OBJECT_TYPE  OBJECT_NAME                    S TABLESPACE_NAME
------------ ------------------------------ - ------------------------------
PACKAGE      MY_PKG

PACKAGE BODY MY_PKG

TYPE         SYS_PLSQL_98838_29_1
             SYS_PLSQL_98838_9_1
             SYS_PLSQL_98838_DUMMY_1


all kinds of "thunking" going on here.

So, as long as you don't mind all of these mysterious, sort of untrackable types appearing - go for it.

I've never understood the reasoning behind not just declaring them outright.

Bulk collect from Table function

Rajeshwaran Jeyabal, October 12, 2011 - 3:44 pm UTC

Tom:

Is it that we cannot select from and insert into the same nested table collection?

rajesh@ORA10GR2> declare
  2   l_type emp_ntt := emp_ntt(empobj(7369,null),empobj(7499,null));
  3   l_tab  emptab := emptab();  
  4  begin
  5   select empobj('A',1)
  6   bulk collect into l_type
  7   from table(l_type);
  8   
  9   dbms_output.put_line ('l_type.count = '||l_type.count);
 10   
 11   for i in 1..l_type.count
 12   loop
 13    dbms_output.put_line (l_type(i).ename);
 14    dbms_output.put_line (l_type(i).empno);
 15   end loop;
 16  end;
 17  /
l_type.count = 0

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
rajesh@ORA10GR2> 

Tom Kyte
October 12, 2011 - 4:18 pm UTC

you cannot, the act of fetching into it would reset it - but if you reset it there is nothing to select from
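A sketch of the usual workaround - bulk collect into a second variable and then assign, mirroring the emp_ntt/empobj constructor calls from the block above (attribute order assumed to match that example):

declare
  l_type emp_ntt := emp_ntt( empobj( 7369, null ), empobj( 7499, null ) );
  l_copy emp_ntt;
begin
  select empobj( 'A', 1 )
    bulk collect into l_copy     -- fetch into a DIFFERENT collection
    from table( l_type );
  l_type := l_copy;              -- then overwrite the original
  dbms_output.put_line( 'l_type.count = ' || l_type.count );
end;
/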

Pipeline function taking time in FOR loop

Manohar Thumu, December 22, 2011 - 7:08 am UTC

Tom,

We are facing a strange situation when using a pipelined function.

The code in Point #1 below takes a lot of time to execute, whereas the Point #2 code comes back in 5 seconds.

The actual query in the cursor comes back in 2 seconds.

1) FOR REC_TFR IN CUR_SUPERPROD_FACT
LOOP
v_UDT_ENTITY_BE_PROD_FACT.org_KEY := REC_TFR.org_ENTITY_key;
v_UDT_ENTITY_BE_PROD_FACT.BE_KEY := REC_TFR.be_key;
v_UDT_ENTITY_BE_PROD_FACT.Prod_key := REC_TFR.Prod_key;
v_UDT_ENTITY_BE_PROD_FACT.PERIOD_DESC := REC_TFR.Description;
v_UDT_ENTITY_BE_PROD_FACT.Metric_name := REC_TFR.Metric_name;
v_UDT_ENTITY_BE_PROD_FACT.Metric_value := REC_TFR.Metric_value;
PIPE ROW (v_UDT_ENTITY_BE_PROD_FACT);
END LOOP;

2) When we change this to the code below, we get the results in under 5 seconds.
v_stmt holds the query in this case.

execute immediate v_stmt bulk collect into o_t_coll;
FOR a IN o_t_coll.first .. o_t_coll.last
LOOP
v_UDT_ENTITY_BE_PROD_FACT.org_KEY := o_t_coll(a).org_ENTITY_key;
v_UDT_ENTITY_BE_PROD_FACT.BE_KEY := o_t_coll(a).be_key;
v_UDT_ENTITY_BE_PROD_FACT.Prod_key := o_t_coll(a).Prod_key;
v_UDT_ENTITY_BE_PROD_FACT.PERIOD_DESC := o_t_coll(a).Description;
v_UDT_ENTITY_BE_PROD_FACT.Metric_name := o_t_coll(a).Metric_name;
v_UDT_ENTITY_BE_PROD_FACT.Metric_value := o_t_coll(a).Metric_value;
PIPE ROW (v_UDT_ENTITY_BE_PROD_FACT);
END LOOP;

Please let us know what we are doing wrong.

Regards,
Manohar
Tom Kyte
December 22, 2011 - 10:14 am UTC

post the tkprofs of both queries for us to compare - just the part of the tkprof that is relevant - just the small section for each of the queries, not the entire tkprof file.


and explain why you say "actual query in the cursor comes back in 2 second" for point1 - but say that is slow compared to 5 seconds for point2???

Table type and pipelined functions

Tiger, December 27, 2011 - 11:58 am UTC

Database version: 9.2.0.6

Tom,

How do table types (myTableType as table of myScalarType)
handle multiple invocations of a function call in the same session?
For example, if a pipelined function accepts a parameter and is invoked more than once at the same time in the same session, will the data be piped through the same table type?

Thanks.
Tom Kyte
December 27, 2011 - 12:31 pm UTC

they would handle it just like number, date, varchar2 do.

they are just datatypes. You can have many variables of type number and they do not conflict with each other. You can have many instances of myTableType and they won't collide either.
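To illustrate with the myTableType from the top of this page - two instances are as independent as two NUMBER variables:

declare
  a myTableType := myTableType( myScalarType( 1, sysdate,   'first'  ) );
  b myTableType := myTableType( myScalarType( 2, sysdate+1, 'second' ) );
begin
  b.delete;                                         -- emptying b ...
  dbms_output.put_line( 'a.count = ' || a.count );  -- ... leaves a alone: prints 1
  dbms_output.put_line( 'b.count = ' || b.count );  -- prints 0
end;
/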

web services output as pipelined

George Joseph, March 28, 2012 - 5:55 am UTC

I have a scenario where I believe using pipelined functions could improve performance. However, I am confused about how to go about it.

We call a web service for each of the rows inside a table.
so it does something like this

for each row in table/*hash object in .net*/
loop
call_web_service(security_id) -- This is a .net call.
/*this returns say the current market value for a security.*/
processing_logic done in sql.
/*eg: filter records if their value < a value in a database table*/
end loop;

The processing_logic comparison is done for each row. I was wondering if we can use a pipelined function to perform the comparison outside the web service calls, so as to make it more efficient.

The results of the processing_logic for one row do not influence the inputs of the next row. In other words the processing logic can be done outside the loop without causing a logical flaw in the code.

Tom, could you please advise whether pipelined functions can be put to use here?



Tom Kyte
March 28, 2012 - 9:16 am UTC

I cannot tell you how much I hate web services sometimes. Or anything remotely similar.


How slow is that going to be? Slow as molasses in February in the Arctic Circle.



I'm not sure what a pipelined function would do for you since I don't know what "... to perform the comparision outside the webservice calls..." means.

re:

George Joseph, March 28, 2012 - 9:53 am UTC

Why don't you like web services? What is the alternative to a web service? I am interested to know your opinion.

the webservice does the following.
For a given security, it computes the say current value at the time the webservice is called. Eg: the trading price of a stock.

I need to compare the value of the current_market value with certain rules configured in a database table eg:

webservice i/p is
('ORCL')
o/p is
('ORCL',100.02)

Now I need to insert into another table only if the value of market_value returned is >= 101 for a particular security, e.g.:
my_table
(sec_id,filter_if_value_greater)
'ORCL',101

so I compare the value that is returned from the web service
with the value 101 that is present in my_table.

Now the web service is called for each security, and the comparison is performed each time, making it a slow process.

I totally concur with your comment that web services are slow in this regard :)

Your mantra is what I follow (set-based thinking and single-pass SQL), so I am wondering if it can be used here.

I can insert the values returned by the web service into a "temp" table and perform the comparison with my_table, but I was wondering if a pipelined function can help me in this regard.

As always thanks so much for your comments and help.




Tom Kyte
March 28, 2012 - 10:38 am UTC

I don't like webservices because everyone gets really "hooked into the architectural beauty of them" and forces everyone to use them for everything - even when the data they need is more readily and easily available via other means.

For example, I was working with a customer who decided "web services - totally cool, let's do everything that way".

so, all of a sudden, things that took 0.0001 seconds before (because we could - you know - just join) are taking 0.01 seconds now (or worse, that is on the best day). coupled with slow by slow processing - they were looking at over one week of constant processing to load 45,000,000 things into their database. That should take a couple of seconds (and does take a couple of seconds when coded correctly).

Abuse and overuse of web services runs rampant.


why not go to the data source of this webservice - I bet they have a table already.

re

George Joseph, March 28, 2012 - 11:15 am UTC

It's a Bloomberg web service that we are calling, and we are charged each time it is called. Certainly I imagine they would have a table with the data on the source side,
but I don't think they would provide the credentials for accessing it.

It's really fascinating how the problems you describe are exactly what is being preached by the architecture folks here.

The funny part in all this is that none of the architecture team would ever know the alternative, or even consider it, for fear of being called a nonconformist or purely for political reasons. And since any approach would "work", when it comes down to performance problems they just move on to the next architecture trend they read about!
Tom Kyte
March 28, 2012 - 12:06 pm UTC

a charge per call - wonderful.

You don't want to pre-load everything then, you only want to get quotes for the necessary stocks.

don't they have a batch API so you might be able to get more than one quote/call. They would benefit from that as much as you.

Making Web Services a *tiny* bit less horrible

Stew Ashton, March 28, 2012 - 11:21 am UTC


Thanks to Tom for saying how horrible Web Services can be for repetitive processing. Using Web Services or message-based services for batch processing is such a bad idea and is so pervasive. Service-Oriented Architecture (SOA) anyone?

When this kind of thing cannot be helped, there should at least be a batch-type interface: instead of asking for one current market value at a time, ask for 100. Then the processing could be:

1) Using fetch size of 100, get 100 records from my_table;
2) Make one Web Service call to get 100 market values;
3) Use JDBC update batching (or whatever is available in your programming language) to send the updates to the database all at once.

Now you have three round trips on the network instead of hundreds.

The best API would still be a view, allowing a join.
Tom Kyte
March 28, 2012 - 12:08 pm UTC

never have truer words been written ;)


I'm always amazed when, within a single company that owns BOTH databases, the stakeholders in one database won't permit a dblink to query up in batch some data - say daily - to refresh a lookup table or two in another database. They demand the remote system use a webservice in 'real time' (i call it real slow time) to get the data.

Or any one of a thousand other examples of going all dogma on 'webservices' at the expense of simple, cheap and fast.


Me too!

Loz, March 28, 2012 - 9:49 pm UTC

Tom, Stew,
Add me to the club. I am constantly amazed that people can't seem to see that joining 2 databases row by row via some byzantine XML web service is *not* a better way than simply using db links.
Dogma indeed. SOA definitely has its place, but what happened to pragmatism and common sense?
Loz.

A reader, March 29, 2012 - 2:53 am UTC


Thanks

George Joseph, April 01, 2012 - 7:54 am UTC

Thanks Tom, Stew and Loz for your responses.



PARALLEL_ENABLE function not executing when called from pl/sql

Chris, May 22, 2012 - 1:46 pm UTC

Hello Tom,

I am trying to parallelize the unloading of a table into a flat file by calling a pipelined parallelized function from a procedure using EXECUTE IMMEDIATE. The problem is that whenever a PARALLEL hint is specified in my query, the procedure finishes without generating the file, which leads me to think that the pipelined function failed but no error was raised. On the other hand, running the exact same query directly from sqlplus works just fine.

P.S: I get the same result when using a CURSOR WITH BULK COLLECT to call the pipelined parallelized function

--Create the table to unload
SQL> create table t (a varchar2(10));

--INSERTED 1586771 records in table t

--Here's the count
SQL> select count(*) from t
  COUNT(*)
----------
   1586771


--Create the pipelined function to unload the table
CREATE OR REPLACE FUNCTION IDD.S800014_PARALLEL_SADC_EXP_FNC (
                  p_source    IN SYS_REFCURSOR,
                  p_filename  IN VARCHAR2,
                  p_directory IN VARCHAR2
                  ) RETURN dump_ntt
                    PIPELINED
                    PARALLEL_ENABLE (PARTITION p_source BY ANY)
AS

     TYPE row_ntt IS TABLE OF VARCHAR2(32767);
     v_rows    row_ntt;
     v_file    UTL_FILE.FILE_TYPE;
     v_buffer  VARCHAR2(32767);
     v_sid     NUMBER;
     v_name    VARCHAR2(128);
     v_lines   PLS_INTEGER := 0;
     c_eol     CONSTANT VARCHAR2(1) := CHR(10);
     c_eollen  CONSTANT PLS_INTEGER := LENGTH(c_eol);
     c_maxline CONSTANT PLS_INTEGER := 32767;

BEGIN

     --SELECT SYS_CONTEXT('USERENV','SESSIONID') INTO v_sid FROM DUAL;
     SELECT sid INTO v_sid FROM v$mystat WHERE ROWNUM = 1;
     
     v_name := p_filename || '_' || TO_CHAR(v_sid);
     v_file := UTL_FILE.FOPEN(p_directory, v_name, 'w', 32767);

     LOOP
       FETCH p_source BULK COLLECT INTO v_rows LIMIT 100;

        FOR i IN 1 .. v_rows.COUNT LOOP

           IF LENGTH(v_buffer) + c_eollen + LENGTH(v_rows(i)) <= c_maxline THEN
              v_buffer := v_buffer || c_eol || v_rows(i);
           ELSE
              IF v_buffer IS NOT NULL THEN
                 UTL_FILE.PUT_LINE(v_file, v_buffer);
              END IF;
              v_buffer := v_rows(i);
           END IF;

        END LOOP;

        v_lines := v_lines + v_rows.COUNT;

        EXIT WHEN p_source%NOTFOUND;
     END LOOP;
     CLOSE p_source;

     UTL_FILE.PUT_LINE(v_file, v_buffer);
     UTL_FILE.FCLOSE(v_file);

     PIPE ROW (dump_ot(v_name, v_lines, v_sid));
     RETURN;

END S800014_PARALLEL_SADC_EXP_FNC;
/


--Created the procedure that calls the function
CREATE OR REPLACE PROCEDURE IDD.EXPORT_TABLE    IS 
   
   v_stmnt            VARCHAR(4000);
   DirLoc             VARCHAR2(30) := 'DATA_PUMP_DIR';
   v_file_name        VARCHAR2(100) := 't_DUMP';
   TYPE CurTyp        IS REF CURSOR;
   sll_cur            CurTyp;
     
BEGIN

    v_stmnt := 'SELECT /*+ PARALLEL(t,4) */ * FROM t';
    
    dbms_output.put_line('SELECT * FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(CURSOR(' || v_stmnt || '), '''
                                                                                 || v_file_name || ''', '''
                                                                                 || DirLoc || ''')) nt');
                                      
    EXECUTE IMMEDIATE 'SELECT * FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(CURSOR(' || v_stmnt || '), '''
                                                                                 || v_file_name || ''', '''
                                                                                 || DirLoc || ''')) nt';

END EXPORT_TABLE;
/



--Enabled parallel querying
SQL> alter session enable parallel query;

--Called the procedure but nothing happened other than printing the output. No file was generated
SQL>  exec EXPORT_TABLE;
SELECT * FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(CURSOR(SELECT /*+ PARALLEL(t,4) */ * FROM t), 't_DUMP', 'DATA_PUMP_DIR')) nt

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01

--Ran the exact same query that is called by the procedure directly in sqlplus, and this time the files were generated
SQL> SELECT * FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(CURSOR(SELECT /*+ PARALLEL(t,4) */ * FROM t), 't_DUMP', 'DATA_PUMP_DIR')) nt;

FILE_NAME                                                                                                                        NO_RECORDS SESSION_ID
-------------------------------------------------------------------------------------------------------------------------------- ---------- ----------
t_DUMP_114                                                                                                                           396773        114
t_DUMP_84                                                                                                                            396452         84
t_DUMP_104                                                                                                                           396773        104
t_DUMP_67                                                                                                                            396773         67

Elapsed: 00:00:26.14


--Changed the parallelism level to 1 in the procedure by replacing line 11 with:
--    v_stmnt := 'SELECT /*+ PARALLEL(t,1) */ * FROM t';
--the pipelined function created a file this time but of course without any parallelism
SQL> exec EXPORT_TABLE;
SELECT * FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(CURSOR(SELECT /*+ PARALLEL(t,1) */ * FROM t), 't_DUMP', 'DATA_PUMP_DIR')) nt

PL/SQL procedure successfully completed.

Elapsed: 00:01:47.15

> ls -l
total 12538
-rw-rw-r--   1 oracle   oinstall 6347084 May 22 14:37 t_DUMP_116

Tom Kyte
May 22, 2012 - 2:54 pm UTC

use an autonomous transaction to debug this.


create a small logging table with sid, seq, msg columns.

create a small autonomous procedure to insert into this table and commit.

call it from your code all over the place so you can *see* what you are doing.

PARALLEL_ENABLE function not executing when called from pl/sql

Chris, May 22, 2012 - 1:51 pm UTC

Sorry, I forgot to include the DDLs for the DUMP_NTT collection type:

CREATE TYPE IDD.dump_ot AS OBJECT
( file_name VARCHAR2(128)
, no_records NUMBER
, session_id NUMBER
);

CREATE TYPE IDD.dump_ntt AS TABLE OF dump_ot;

Thanks

PARALLEL_ENABLE function not executing when called from pl/sql

Chris, May 22, 2012 - 4:14 pm UTC

Hi Tom,

I did as you suggested.

I ran the procedure many times, and it seems that sometimes it only reaches the first logging step of the pipelined function (which occurs just before getting the SID of the PQ slave), and sometimes it doesn't even reach that point.

create table t_logs (
    sid NUMBER,
    seq NUMBER,
    msg VARCHAR2(2000)
);

CREATE SEQUENCE IDD.t_log_SEQ
  START WITH 248
  MAXVALUE 999999999999999999999999999
  MINVALUE 1
  NOCYCLE
  CACHE 20
  ORDER;

CREATE OR REPLACE PROCEDURE IDD.log_parallel_unload (p_sid IN NUMBER, p_msg IN VARCHAR2) AS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    insert into t_logs(sid,seq,msg) values (p_sid,t_log_SEQ.NEXTVAL ,p_msg);
    commit;
END log_parallel_unload;
/


--Called the logging procedure at some key locations in the code

CREATE OR REPLACE PROCEDURE IDD.EXPORT_TABLE    IS 
   
   v_stmnt            VARCHAR(4000);
   DirLoc             VARCHAR2(30) := 'DATA_PUMP_DIR';
   v_file_name        VARCHAR2(100) := 't_DUMP';
   TYPE CurTyp        IS REF CURSOR;
   sll_cur            CurTyp;
   all_dump           dump_ntt;
   v_sid              NUMBER;
     
BEGIN
    
    SELECT sid INTO v_sid FROM v$mystat WHERE ROWNUM = 1;
    log_parallel_unload(v_sid,'SID of main process is ' || v_sid);
     
    v_stmnt := 'SELECT /*+ PARALLEL(t,4) */ * FROM t';
    
    dbms_output.put_line('SELECT * FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(CURSOR(' || v_stmnt || '), '''
                                                                                 || v_file_name || ''', '''
                                                                                 || DirLoc || ''')) nt');
                                      
    log_parallel_unload(v_sid,'Calling the parallel pipelined unloding function');

    EXECUTE IMMEDIATE 'SELECT * FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(CURSOR(' || v_stmnt || '), '''
                                                                                 || v_file_name || ''', '''
                                                                                 || DirLoc || ''')) nt';

--    OPEN sll_cur FOR v_stmnt;
--    SELECT DUMP_OT(file_name, no_records, session_id) BULK COLLECT INTO all_dump FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(sll_cur,v_file_name,DirLoc)) nt;
--    
END EXPORT_TABLE;
/


CREATE OR REPLACE FUNCTION IDD.S800014_PARALLEL_SADC_EXP_FNC (
                  p_source    IN SYS_REFCURSOR,
                  p_filename  IN VARCHAR2,
                  p_directory IN VARCHAR2
                  ) RETURN dump_ntt
                    PIPELINED
                    PARALLEL_ENABLE (PARTITION p_source BY ANY)
AS

     TYPE row_ntt IS TABLE OF VARCHAR2(32767);
     v_rows    row_ntt;
     v_file    UTL_FILE.FILE_TYPE;
     v_buffer  VARCHAR2(32767);
     v_sid     NUMBER;
     v_name    VARCHAR2(128);
     v_lines   PLS_INTEGER := 0;
     c_eol     CONSTANT VARCHAR2(1) := CHR(10);
     c_eollen  CONSTANT PLS_INTEGER := LENGTH(c_eol);
     c_maxline CONSTANT PLS_INTEGER := 32767;

BEGIN

     --SELECT SYS_CONTEXT('USERENV','SESSIONID') INTO v_sid FROM DUAL;
     log_parallel_unload(NULL,'getting sid for parallelized function');
     BEGIN
        SELECT sid INTO v_sid FROM v$mystat WHERE ROWNUM = 1;
     EXCEPTION
        WHEN OTHERS THEN
             log_parallel_unload(NULL,'Could not get SID for parallelized function: ' || SQLERRM);
     END;
     log_parallel_unload(v_sid,'SID is ' || v_sid);

     v_name := p_filename || '_' || TO_CHAR(v_sid);
     v_file := UTL_FILE.FOPEN(p_directory, v_name, 'w', 32767);
     log_parallel_unload(v_sid,'Filename is ' || v_name);

     LOOP
       FETCH p_source BULK COLLECT INTO v_rows LIMIT 100;

        FOR i IN 1 .. v_rows.COUNT LOOP

           IF LENGTH(v_buffer) + c_eollen + LENGTH(v_rows(i)) <= c_maxline THEN
              v_buffer := v_buffer || c_eol || v_rows(i);
           ELSE
              IF v_buffer IS NOT NULL THEN
                 UTL_FILE.PUT_LINE(v_file, v_buffer);
              END IF;
              v_buffer := v_rows(i);
           END IF;

        END LOOP;

        v_lines := v_lines + v_rows.COUNT;

        EXIT WHEN p_source%NOTFOUND;
     END LOOP;
     CLOSE p_source;

     UTL_FILE.PUT_LINE(v_file, v_buffer);
     UTL_FILE.FCLOSE(v_file);

     PIPE ROW (dump_ot(v_name, v_lines, v_sid));
     RETURN;

EXCEPTION
    WHEN OTHERS THEN
             log_parallel_unload(v_sid,'Parallel function failed: ' || SQLERRM);

END S800014_PARALLEL_SADC_EXP_FNC;
/

SQL>  exec EXPORT_TABLE
SQL> select * from T_LOGS order by 2;

       SID        SEQ MSG
---------- ---------- ----------------------------------------------------------------------------------------------------
       144        326 SID of main process is 144
       144        327 Calling the parallel pipelined unloding function


SQL> exec EXPORT_TABLE
SQL> select * from T_LOGS order by 2;

       SID        SEQ MSG
---------- ---------- ----------------------------------------------------------------------------------------------------
       144        326 SID of main process is 144
       144        327 Calling the parallel pipelined unloding function
       144        328 SID of main process is 144
       144        329 Calling the parallel pipelined unloding function
                  330 getting sid for parallelized function
                  331 getting sid for parallelized function


SQL> exec EXPORT_TABLE
SQL> select * from T_LOGS order by 2;

       SID        SEQ MSG
---------- ---------- ----------------------------------------------------------------------------------------------------
       144        326 SID of main process is 144
       144        327 Calling the parallel pipelined unloding function
       144        328 SID of main process is 144
       144        329 Calling the parallel pipelined unloding function
                  330 getting sid for parallelized function
                  331 getting sid for parallelized function
       144        334 SID of main process is 144
       144        335 Calling the parallel pipelined unloding function

...
...
...

SQL> /

       SID        SEQ MSG
---------- ---------- ----------------------------------------------------------------------------------------------------
       144        326 SID of main process is 144
       144        327 Calling the parallel pipelined unloding function
       144        328 SID of main process is 144
       144        329 Calling the parallel pipelined unloding function
                  330 getting sid for parallelized function
                  331 getting sid for parallelized function
       144        334 SID of main process is 144
       144        335 Calling the parallel pipelined unloding function
       144        336 SID of main process is 144
       144        337 Calling the parallel pipelined unloding function
       144        338 SID of main process is 144
       144        339 Calling the parallel pipelined unloding function
       144        340 SID of main process is 144
       144        341 Calling the parallel pipelined unloding function
       144        342 SID of main process is 144
       144        343 Calling the parallel pipelined unloding function
       144        344 SID of main process is 144
       144        345 Calling the parallel pipelined unloding function
       144        346 SID of main process is 144
       144        347 Calling the parallel pipelined unloding function
       144        348 SID of main process is 144
       144        349 Calling the parallel pipelined unloding function
                  350 getting sid for parallelized function
                  351 getting sid for parallelized function
                  352 getting sid for parallelized function
                  353 getting sid for parallelized function

Tom Kyte
May 23, 2012 - 7:40 am UTC

It had to do with the way you did the dynamic execute.

execute immediate 'select ...';

without an into or anything just "starts" and closes the query - it doesn't run it. You never fetched from it.

CREATE OR REPLACE PROCEDURE EXPORT_TABLE
IS
   v_stmnt            VARCHAR(4000);
   DirLoc             VARCHAR2(30) := 'DATA_PUMP_DIR';
   v_file_name        VARCHAR2(100) := 't_DUMP';
   TYPE CurTyp        IS REF CURSOR;
   sll_cur            CurTyp;
   all_dump           dump_ntt;
   v_sid              NUMBER;

   l_stmt      long;
   l_rec       dump_ot;
BEGIN
    SELECT sid INTO v_sid FROM v$mystat WHERE ROWNUM = 1;
    log_parallel_unload(v_sid,'SID of main process is ' || v_sid);

    v_stmnt := 'SELECT /*+ PARALLEL(t,4) */ object_name FROM t';

    l_stmt := ('SELECT dump_ot(file_name,no_records,session_id) FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(CURSOR(' || v_stmnt ||
                          '), ''' || v_file_name || ''', ''' || DirLoc || ''')) nt');

    log_parallel_unload(v_sid,'Calling the parallel pipelined unloding function');

    OPEN sll_cur FOR l_stmt;
    loop
        fetch sll_cur into l_rec;
        exit when sll_cur%notfound;
        dbms_output.put_line( l_rec.file_name || ', ' || l_rec.no_records || ', ' || l_rec.session_id );
    end loop;
    close sll_cur;
END EXPORT_TABLE;
/


produced this output:

ops$tkyte%ORA11GR2> exec EXPORT_TABLE
t_DUMP_132, 19484, 132
t_DUMP_8, 17950, 8
t_DUMP_200, 16763, 200
t_DUMP_69, 18628, 69

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select * from T_LOGS order by 2;

       SID        SEQ MSG
---------- ---------- --------------------------------------------------------------------------------
       134        248 SID of main process is 134
       134        249 Calling the parallel pipelined unloding function
                  250 getting sid for parallelized function
                  251 getting sid for parallelized function
                  252 getting sid for parallelized function
         8        253 SID is 8
       132        254 SID is 132
                  255 getting sid for parallelized function
       200        256 SID is 200
        69        257 SID is 69
       132        258 Filename is t_DUMP_132
       200        259 Filename is t_DUMP_200
        69        260 Filename is t_DUMP_69
         8        261 Filename is t_DUMP_8

14 rows selected.


PARALLEL_ENABLE function not executing when called from pl/sql

Chris, May 23, 2012 - 8:57 am UTC

Thanks a lot. It works.

I hadn't thought of fetching the records one by one through a cursor. I didn't think it would be parallelized if fetched one by one.
Instead, I was trying to fetch through a bulk collect using a select again, but the query wasn't getting parallelized.

OPEN sll_cur FOR v_stmnt;
SELECT DUMP_OT(file_name, no_records, session_id) BULK COLLECT INTO all_dump FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(sll_cur,v_file_name,DirLoc)) nt;


Thanks again
Tom Kyte
May 23, 2012 - 10:57 am UTC

... I didn't
think it would be parallelized if fetched one by one. ...

that is the only way to retrieve data...


array fetching still fetches one by one at the server level, it saves on round trips...


the bulk collect would work just as well.

You had to FETCH the data is all.
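
For completeness, a bulk collect variant of that fetch would look something like the sketch below. It relies on the dump_ot/dump_ntt types and the function from the listings above, and the only point it makes is that the dynamically built query has to actually be fetched:

declare
    sll_cur   sys_refcursor;
    all_dump  dump_ntt;
    l_stmt    varchar2(4000) :=
       'SELECT dump_ot(file_name, no_records, session_id)
          FROM TABLE(S800014_PARALLEL_SADC_EXP_FNC(
                       CURSOR(SELECT /*+ PARALLEL(t,4) */ * FROM t),
                       ''t_DUMP'', ''DATA_PUMP_DIR'')) nt';
begin
    open sll_cur for l_stmt;
    fetch sll_cur bulk collect into all_dump;   -- the fetch is what actually runs the query
    close sll_cur;

    for i in 1 .. all_dump.count loop
        dbms_output.put_line( all_dump(i).file_name  || ', ' ||
                              all_dump(i).no_records || ', ' ||
                              all_dump(i).session_id );
    end loop;
end;
/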

SYS_PLSQL bug??

Boris, October 11, 2012 - 3:22 am UTC

I am getting the following output while querying user_source:

TESTDB SYS_PLSQL_12564004_164_1 TYPE 1 TYPE "SYS_PLSQL_12564004_165_1" as object (ERROR_KEY NUMBER,
TESTDB SYS_PLSQL_12564004_164_1 TYPE 2 ERROR_CODE VARCHAR2(20),
TESTDB SYS_PLSQL_12564004_164_1 TYPE 3 ERROR_TYPE VARCHAR2(20),
TESTDB SYS_PLSQL_12564004_164_1 TYPE 4 ERROR_GENERATED_BY VARCHAR2(30),
TESTDB SYS_PLSQL_12564004_164_1 TYPE 5 ERROR_LONG_DESC VARCHAR2(200),
TESTDB SYS_PLSQL_12564004_164_1 TYPE 6 ERROR_STATUS VARCHAR2(20),
TESTDB SYS_PLSQL_12564004_164_1 TYPE 7 ERROR_SEVERITY VARCHAR2(20),
TESTDB SYS_PLSQL_12564004_164_1 TYPE 8 CAT_NAME VARCHAR2(20));


Type definition for the above-mentioned object (extracted with the help of DBMS_METADATA):

CREATE OR REPLACE TYPE "TESTDB"."SYS_PLSQL_12564004_165_1" as object (ERROR_KEY NUMBER,
ERROR_CODE VARCHAR2(20),
ERROR_TYPE VARCHAR2(20),
ERROR_GENERATED_BY VARCHAR2(30),
ERROR_LONG_DESC VARCHAR2(200),
ERROR_STATUS VARCHAR2(20),
ERROR_SEVERITY VARCHAR2(20),
CAT_NAME VARCHAR2(20));

I have not created any object type with this name. I have wasted a day trying to identify which code created the above-mentioned object, but all in vain.
Please help on this.

Are piped rows always immediately returned

dieter oberkofler, January 26, 2013 - 1:55 am UTC

Dear Tom,

I'm evaluating the use of pipelined function in a specific use case as follows:
When executing some synchronous but still relatively long running pl/sql procedure from a UI, it would be nice to return some progress information to the invoking process using a simple SELECT from a pipelined function.

The fact that the rows are returned immediately from the pipelined function is critical for this approach, but it seems that (at least) when combining PIPE ROW with AUTONOMOUS_TRANSACTION, the rows are no longer returned immediately but rather only when the procedure has completed.

I'm having a hard time properly testing this, but at least visually it seems as if the SELECT * FROM TABLE(pipelinedFunction()) stands still for pretty much the time the procedure needs to process all rows and then returns all piped rows at once without any delay.

I would be most interested in your opinion on this approach and on how to properly time/test this approach to make sure that it works as intended.

Thank you for all your help!

Tom Kyte
January 30, 2013 - 2:25 pm UTC

it would have to complete its transaction BEFORE it pipes the rows...

for example:

ops$tkyte%ORA11GR2> create table t ( msg varchar2(80) );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace function f( n in number ) return sys.odciDateList
  2  PIPELINED
  3  as
  4      pragma autonomous_transaction;
  5  begin
  6      for i in 1..n
  7      loop
  8          insert into t (msg) values ( 'row ' || i || to_char( sysdate,'" "hh24:mi:ss') );
  9          commit;
 10          pipe row( sysdate );
 11          dbms_lock.sleep(1);
 12      end loop;
 13  end;
 14  /

Function created.

ops$tkyte%ORA11GR2> @bigdate
ops$tkyte%ORA11GR2> alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';

Session altered.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set arraysize 1
ops$tkyte%ORA11GR2> select * from table(f(10));

COLUMN_VALUE
--------------------
30-jan-2013 15:23:46
30-jan-2013 15:23:47
30-jan-2013 15:23:48
30-jan-2013 15:23:49
30-jan-2013 15:23:50
30-jan-2013 15:23:51
30-jan-2013 15:23:52
30-jan-2013 15:23:53
30-jan-2013 15:23:54
30-jan-2013 15:23:55

10 rows selected.



works - the rows show up in sqlplus as they are generated and another session can query T and see the rows in there. however, if you try to pipe a row with an open transaction:

ops$tkyte%ORA11GR2> create table t ( msg varchar2(80) );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace function f( n in number ) return sys.odciDateList
  2  PIPELINED
  3  as
  4      pragma autonomous_transaction;
  5  begin
  6      for i in 1..n
  7      loop
  8          insert into t (msg) values ( 'row ' || i || to_char( sysdate,'" "hh24:mi:ss') );
  9          pipe row( sysdate );
 10          commit;
 11          dbms_lock.sleep(1);
 12      end loop;
 13  end;
 14  /

Function created.

ops$tkyte%ORA11GR2> @bigdate
ops$tkyte%ORA11GR2> alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';

Session altered.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set arraysize 1
ops$tkyte%ORA11GR2> select * from table(f(10));
select * from table(f(10))
                    *
ERROR at line 1:
ORA-06519: active autonomous transaction detected and rolled back
ORA-06512: at "OPS$TKYTE.F", line 9



it'll fail.

Extensible Optimizer

Rajeshwaran, Jeyabal, February 02, 2013 - 4:11 am UTC

Tom,

I was reading your column in Oracle Magazine about the Extensible Optimizer.
The demo shown here is applicable to table functions:
http://www.oracle-developer.net/display.php?id=427
SQL> ASSOCIATE STATISTICS WITH FUNCTIONS collection_wrapper USING collection_wrapper_ot;

 Statistics associated.
Here collection_wrapper is a table function. How can I achieve the same for a pipelined table function? I tried but could not get it to work. Can you help us with this?

Here is what I used for the demo.

create or replace type vctype 
is table of varchar2(30)
/

create or replace function
parse_list(p_in varchar2)
return vctype
pipelined is
 l_data varchar2(50) := p_in||',' ;
begin
 for x in (
  select regexp_substr(l_data,'[^,]+',1,level) txt
  from dual
  connect by level <= ( length(l_data) -
   length(replace(l_data,',')) ) )
 loop
  pipe row( x.txt );
 end loop;
 return;
end;
/

create or replace type parse_typ 
is object
( x  number,
  static function ODCIGetInterfaces(p_out out sys.ODCIObjectList)
  return number,
  static function ODCIStatsTableFunction
 ( p_func in sys.ODCIFuncInfo,
   p_out  out sys.ODCITabFuncStats,
   p_argdesc in sys.ODCIArgDescList,
   p_collection in vctype )
  return number
);
/

create or replace type body parse_typ
is
   static function ODCIGetInterfaces(p_out out sys.ODCIObjectList)
   return number is
   begin
  p_out := sys.ODCIObjectList( sys.odciobject('SYS','ODCISTATS2') );
  return odciconst.success;
   end;
   
   static function ODCIStatsTableFunction
   ( p_func in sys.ODCIFuncInfo,
   p_out  out sys.ODCITabFuncStats,
   p_argdesc in sys.ODCIArgDescList,
   p_collection in vctype )
 return number
 is
 begin
  p_out := sys.ODCITabFuncStats( p_collection.count );
  return odciconst.success;
 end;
end;
/ 

associate statistics with functions parse_list using parse_typ;

variable x varchar2(30);
exec :x := 'a,b,c,d,e';
delete from plan_table;
commit;
explain plan for
select * from
table( parse_list(:x) )
/

The actual cardinality is 5, but the explain plan shows it as 1. Why?
rajesh@ORA11G> @plan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------
Plan hash value: 58440541

------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |            |     1 |     2 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| PARSE_LIST |     1 |     2 |    29   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

8 rows selected.

Elapsed: 00:00:00.10
rajesh@ORA11G>

Tom Kyte
February 04, 2013 - 9:32 am UTC

the stats data function doesn't get a collection, that is what the pipelined function returns. the stats function gets the inputs ( a string ).

read that article you were pointed to again. their stats function takes a number as input (the input to the pipelined function) NOT the output of the pipelined function!
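
In other words, the trailing arguments of ODCIStatsTableFunction must mirror parse_list's own parameter list (a VARCHAR2), not the collection it returns. A minimal sketch of the corrected type is below; the comma-counting cardinality estimate is only an illustration, and the ASSOCIATE STATISTICS command has to be re-run after recreating the type:

create or replace type parse_typ as object
( x number,
  static function ODCIGetInterfaces( p_out out sys.ODCIObjectList )
    return number,
  static function ODCIStatsTableFunction
  ( p_func    in  sys.ODCIFuncInfo,
    p_out     out sys.ODCITabFuncStats,
    p_argdesc in  sys.ODCIArgDescList,
    p_in      in  varchar2 )              -- mirrors parse_list's input, not its output
    return number
);
/

create or replace type body parse_typ as
   static function ODCIGetInterfaces( p_out out sys.ODCIObjectList )
   return number is
   begin
      p_out := sys.ODCIObjectList( sys.ODCIObject( 'SYS', 'ODCISTATS2' ) );
      return ODCIConst.success;
   end;

   static function ODCIStatsTableFunction
   ( p_func    in  sys.ODCIFuncInfo,
     p_out     out sys.ODCITabFuncStats,
     p_argdesc in  sys.ODCIArgDescList,
     p_in      in  varchar2 )
   return number is
   begin
      -- illustrative estimate: one row per comma-separated item in the input
      p_out := sys.ODCITabFuncStats( nvl( length(p_in) - length(replace(p_in,',')) + 1, 1 ) );
      return ODCIConst.success;
   end;
end;
/

associate statistics with functions parse_list using parse_typ;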

Parallel pipelined function

anonymous, April 02, 2014 - 12:30 pm UTC

Hi

I have a view containing select statement joining several tables and joining with them a pipelined function which returns 1-n rows.

I would like to regard this pipelined function as 'normal function' that justs returns several rows which are joined to other tables. Pipelined function gets its parameters from these other tables.

This works fine. However as number of rows have increased in joined tables I would like this view to execute in parallel. Joining on these tables can be done on parallel, but calling this pipelined function seems to be done only on one process.

From reading manluas I can see that there exists this possibility to define pipelined function as parallel enabled. It requires however using ref cursors and is of style 'pipelined function is the driving force'. I would like to have the joining of these tables to be the driving force and the pipelined function just to be one of the additions to that.

Is it somehow possible to make pipelined function to be executed by parallel servers with using ref cursors ?

Can you pass through a pipeplined function?

Brad Sukut, June 14, 2022 - 1:28 pm UTC

Hi Tom -
I wanted to overload and pass through a pipelined function but I get a PLS-00633 error when trying to compile my package body. I basically want all the logic in just one function and use overloaded functions to fill in any missing / default data. I'm hoping this is just a simple syntax error. Here is my test package:

create or replace type myScalarType as object( iCol int, dCol date, vCol varchar2(80) );
/
create or replace type myTableType as table of myScalarType;
/
create or replace package umn_pipe_test is
function pipeline_demo return myTabletype PIPELINED;
function pipeline_demo ( p_nrows_to_generate in number ) return myTabletype PIPELINED;
function pipeline_demo ( p_nrows_to_generate in number, in_date date ) return myTabletype PIPELINED;
end;
/
create or replace package body umn_pipe_test is

function pipeline_demo return myTabletype PIPELINED as
nRows number := 5;
begin
return pipeline_demo(nRows);
end;

function pipeline_demo ( p_nrows_to_generate in number ) return myTabletype PIPELINED as
begin
for i in 1 .. p_nrows_to_generate loop
pipe row( myScalarType( i, trunc(sysdate)+i, 'row ' || i ) );
end loop;
return;
end;

function pipeline_demo ( p_nrows_to_generate in number, in_date date ) return myTabletype PIPELINED as
begin
for i in 1 .. p_nrows_to_generate loop
pipe row( myScalarType( i, in_date+i, 'row ' || i ) );
end loop;
return;
end;
end;

Thanking you in advance for any help on this.
Chris Saxon
June 14, 2022 - 5:06 pm UTC

A pipelined table function doesn't return anything as such - it pipes the rows and has an empty return statement. Thus

return pipeline_demo(nRows);


Won't work - you still need to use PIPE ROW
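
One way to keep all of the row-generating logic in the single parameterised overload - a sketch of the pattern rather than the only option - is to have the zero-argument function query its sibling through TABLE() and re-pipe each row:

create or replace package body umn_pipe_test is

function pipeline_demo return myTabletype PIPELINED as
  nRows number := 5;
begin
  -- query the one-argument overload and re-pipe everything it produces
  for r in ( select * from table( umn_pipe_test.pipeline_demo( nRows ) ) ) loop
    pipe row( myScalarType( r.iCol, r.dCol, r.vCol ) );
  end loop;
  return;
end;

function pipeline_demo ( p_nrows_to_generate in number ) return myTabletype PIPELINED as
begin
  for i in 1 .. p_nrows_to_generate loop
    pipe row( myScalarType( i, trunc(sysdate)+i, 'row ' || i ) );
  end loop;
  return;
end;

function pipeline_demo ( p_nrows_to_generate in number, in_date date ) return myTabletype PIPELINED as
begin
  for i in 1 .. p_nrows_to_generate loop
    pipe row( myScalarType( i, in_date+i, 'row ' || i ) );
  end loop;
  return;
end;

end;
/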
