
Question and Answer

Tom Kyte

Thanks for the question.

Asked: June 09, 2004 - 3:03 pm UTC

Last updated: March 02, 2022 - 2:36 am UTC

Version: 9.2.0.5

Viewed 100K+ times!

You Asked

Can you illustrate the usage of pipelined functions with a simple (emp, dept) example? I browsed some of the posts on this site relevant to that, but I was not able to grasp it.
Under what circumstances can using a pipelined function be effective (performance-wise, or for making the code modular)?

and Tom said...

pipelined functions are simply "code you can pretend is a database table"

pipelined functions give you the (amazing to me) ability to

select * from PLSQL_FUNCTION;




anytime you think you can use it -- to select * from a function, instead of a table, it might be "useful".
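As a minimal sketch of the idea (the type and function names below are made up for illustration), the whole pattern is: create a SQL collection type, write a function that PIPEs rows of that type, and query it with TABLE():

```sql
-- hypothetical minimal example: a function you can query like a table
create or replace type num_tab_type as table of number
/
create or replace function first_n( p_n in number )
return num_tab_type
PIPELINED
as
begin
    for i in 1 .. p_n
    loop
        pipe row( i );   -- each row is handed to the consumer as it is produced
    end loop;
    return;              -- a pipelined function ends with a bare RETURN
end;
/
-- and now "pretend it is a table":
select * from TABLE( first_n( 5 ) );
```

The SELECT returns the numbers 1 through 5 in a single COLUMN_VALUE column; nothing is materialized up front.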

Consider this "ETL" process (extract/transform/load) whereby a flat file is fed into a plsql function that "transforms" it and is used to update existing data:

ops$tkyte@ORA9IR2> @etl
=================================================================

For external tables, we need to use a directory object -- we'll
start with that, mapping to the temp directory


demo@ORA9IR2>
demo@ORA9IR2> create or replace directory data_dir as '/tmp/'
2 /

Directory created.

demo@ORA9IR2>
demo@ORA9IR2>
demo@ORA9IR2> set echo off
Enter to continue

=================================================================


Now, we'll create the external table.
part of its definition is what looks like a control file -- it is


demo@ORA9IR2>
demo@ORA9IR2>
demo@ORA9IR2> create table external_table
2 (EMPNO NUMBER(4) ,
3 ENAME VARCHAR2(10),
4 JOB VARCHAR2(9),
5 MGR NUMBER(4),
6 HIREDATE DATE,
7 SAL NUMBER(7, 2),
8 COMM NUMBER(7, 2),
9 DEPTNO NUMBER(2)
10 )
11 ORGANIZATION EXTERNAL
12 ( type oracle_loader
13 default directory data_dir
14 access parameters
15 ( fields terminated by ',' )
16 location ('emp.dat')
17 )
18 /

Table created.

demo@ORA9IR2>
demo@ORA9IR2>
demo@ORA9IR2>
demo@ORA9IR2> set echo off
Enter to continue

=================================================================


We'll create a flat file to work with now


demo@ORA9IR2>
demo@ORA9IR2> host flat scott/tiger emp > /tmp/emp.dat

demo@ORA9IR2>
demo@ORA9IR2> set echo off
Enter to continue

=================================================================


And finally, select * from it...


demo@ORA9IR2>
demo@ORA9IR2> select empno, ename, hiredate from external_table
2 where ename like '%A%'
3 /

EMPNO ENAME HIREDATE
---------- ---------- ---------
7499 ALLEN 20-FEB-81
7521 WARD 22-FEB-81
7654 MARTIN 28-SEP-81
7698 BLAKE 01-MAY-81
7782 CLARK 09-JUN-81
7876 ADAMS 12-JAN-83
7900 JAMES 03-DEC-81

7 rows selected.

demo@ORA9IR2>
demo@ORA9IR2> pause

demo@ORA9IR2> host vi /tmp/emp.dat

demo@ORA9IR2> select empno, ename, hiredate from external_table
2 where ename like '%A%'
3 /

EMPNO ENAME HIREDATE
---------- ---------- ---------
7499 ALLEN 20-FEB-81
7521 WARD 22-FEB-81
7654 MARTIN 28-SEP-81
7698 BLAKE 01-MAY-81
7782 CLARK 09-JUN-81
7876 ADAMS 12-JAN-83
7900 JAMES 03-DEC-81

7 rows selected.

demo@ORA9IR2>
demo@ORA9IR2> set echo off
=================================================================


We'll set up a PLSQL "etl" routine to ingest the flat file
and output live data to be merged or inserted

demo@ORA9IR2> pause

demo@ORA9IR2> create or replace type emp_scalar_type as object
2 (EMPNO NUMBER(4) ,
3 ENAME VARCHAR2(10),
4 JOB VARCHAR2(9),
5 MGR NUMBER(4),
6 HIREDATE DATE,
7 SAL NUMBER(7, 2),
8 COMM NUMBER(7, 2),
9 DEPTNO NUMBER(2)
10 )
11 /


Type created.

demo@ORA9IR2> create or replace type emp_table_type as table of emp_scalar_type
2 /

Type created.

demo@ORA9IR2>
demo@ORA9IR2>
demo@ORA9IR2> create or replace
2 function emp_etl( p_cursor in sys_refcursor )
3 return emp_table_type
4 PIPELINED
5 as
6 l_rec external_table%rowtype;
7 begin
8 loop
9 fetch p_cursor into l_rec;
10 exit when (p_cursor%notfound);
11
12 -- validation routine
13 -- log bad rows elsewhere
14 -- lookup some value
15 -- perform conversion
16 pipe row( emp_scalar_type( l_rec.empno,
17 LOWER(l_rec.ename),
18 l_rec.job, l_rec.mgr, l_rec.hiredate, l_rec.sal,
19 l_rec.comm, l_rec.deptno ) );
20 end loop;
21
22 return;
23 end;
24 /

Function created.

demo@ORA9IR2> set echo off
=================================================================


that routine, our code, works just like a TABLE


demo@ORA9IR2> select empno, ename
2 from TABLE(emp_etl(cursor( select * from external_table ) ) )
3 where ename like '%a%';

EMPNO ENAME
---------- ----------
7499 allen
7521 ward
7654 martin
7698 blake
7782 clark
7876 adams
7900 james

7 rows selected.

demo@ORA9IR2>
demo@ORA9IR2> set echo off

=================================================================


we'll create an "existing" table of data, we would like to
'refresh' it from the source systems

demo@ORA9IR2> create table emp as select * from scott.emp where mod(empno,2) = 0;

Table created.

demo@ORA9IR2> select empno, ename from emp;

EMPNO ENAME
---------- ----------
7566 JONES
7654 MARTIN
7698 BLAKE
7782 CLARK
7788 SCOTT
7844 TURNER
7876 ADAMS
7900 JAMES
7902 FORD
7934 MILLER

10 rows selected.

demo@ORA9IR2>
demo@ORA9IR2>
demo@ORA9IR2> set echo off

=================================================================


and here is the "merge" -- from flat file, through ETL,
straight to the table without hitting the disk with staging
files

demo@ORA9IR2> merge into EMP e1
2 using ( select *
3 from TABLE(emp_etl(cursor( select * from external_table ) ) )
4 ) e2
5 on ( e2.empno = e1.empno )
6 when matched then
7 update set e1.sal = e2.sal, e1.ename = e2.ename
8 when not matched then
9 insert ( empno, ename, job, mgr,
10 hiredate, sal, comm, deptno )
11 values ( e2.empno, e2.ename, e2.job, e2.mgr,
12 e2.hiredate, e2.sal, e2.comm, e2.deptno )
13 /

14 rows merged.

demo@ORA9IR2> commit;

Commit complete.

demo@ORA9IR2>
demo@ORA9IR2> pause

demo@ORA9IR2> select empno, ename from emp;

EMPNO ENAME
---------- ----------
7566 jones
7654 martin
7698 blake
7782 clark
7788 scott
7844 turner
7876 adams
7900 james
7902 ford
7934 miller
7369 smith
7499 allen
7521 ward
7839 king

14 rows selected.

demo@ORA9IR2>
demo@ORA9IR2>
demo@ORA9IR2> set echo off


Rating: (81 ratings)


Comments

You are a Genius Tom

A reader, June 09, 2004 - 4:05 pm UTC

Tom ..I am giving you some information in return..

this should make you feel better...

http://news.bbc.co.uk/1/hi/scotland/1263510.stm





Good one!

A reader, June 09, 2004 - 4:32 pm UTC


Comprehensive and inspiring reply

Dizzley, June 09, 2004 - 6:23 pm UTC

Thanks for such a complete reply. I am inspired to go find a problem to try out this solution. ;)

Can you see other benefits in pipelined functions? What about using Java functions to generate table-like data?

It all sounds like another tool for the box, which is great news.

Tom Kyte
June 09, 2004 - 6:28 pm UTC

sure, anything that can manufacture data you might want to query... but that doesn't exist in a table.


What about..

dxl, June 17, 2004 - 4:26 am UTC

If you were to write a function (not pipelined) that returned a varray of objects, then you could

SELECT * from Table(cast(my_func as myTableType))

Doesn't this give you the same functionality as pipelined functions? Can you please explain the difference.

Tom Kyte
June 17, 2004 - 10:02 am UTC

pipelined function = client gets first row before function generates last row.

pipelined function = function doesn't need to build big honking array in memory.

pipelined function = something you can use to return really large results, nonpipelined = something you can use to return small result sets.
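To make the contrast concrete, here is a sketch of what the non-pipelined counterpart of the ETL function earlier on this page would look like (emp_etl_all is a made-up name): the entire collection is built in memory, and nothing reaches the client until the final RETURN.

```sql
create or replace function emp_etl_all( p_cursor in sys_refcursor )
return emp_table_type                 -- note: no PIPELINED keyword
as
    l_rec  external_table%rowtype;
    l_data emp_table_type := emp_table_type();
begin
    loop
        fetch p_cursor into l_rec;
        exit when p_cursor%notfound;
        l_data.extend;                -- the whole result set accumulates here
        l_data( l_data.count ) :=
            emp_scalar_type( l_rec.empno, lower(l_rec.ename), l_rec.job,
                             l_rec.mgr, l_rec.hiredate, l_rec.sal,
                             l_rec.comm, l_rec.deptno );
    end loop;
    return l_data;                    -- first row reaches the client only now
end;
/
```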

Mark, June 21, 2004 - 10:01 am UTC

Tom, can you show me what I am doing wrong here? I am trying to write a pipelined function so I can pass a comma-delimited list of IDs to a procedure, to be used as an IN list.

SQL> CREATE OR REPLACE TYPE MYNUMTYPE AS TABLE OF NUMBER
  2  /

Type created.

SQL> CREATE OR REPLACE FUNCTION GET_NUM_LIST(
  2     P_STR   IN   VARCHAR2 )
  3     RETURN MYNUMTYPE PIPELINED
  4  AS
  5     L_STR         LONG      DEFAULT P_STR || ',';
  6     L_N           NUMBER;
  7     V_ID          NUMBER;
  8     V_MYNUMTYPE   MYNUMTYPE;
  9  BEGIN
 10     LOOP
 11        L_N := INSTR( L_STR, ',' );
 12        EXIT WHEN( NVL( L_N, 0 ) = 0 );
 13        V_ID := NVL( LTRIM( RTRIM( SUBSTR( L_STR, 1, L_N - 1 ))), -9999999999 );
 14  
 15        IF V_ID != -9999999999 THEN
 16           V_MYNUMTYPE := MYNUMTYPE( V_ID );
 17           PIPE ROW( V_MYNUMTYPE );
 18        END IF;
 19  
 20        L_STR := SUBSTR( L_STR, L_N + 1 );
 21     END LOOP;
 22  
 23     RETURN;
 24  END;
 25  /
Warning: Function created with compilation errors.

SQL> show errors
Errors for FUNCTION GET_NUM_LIST:

LINE/COL ERROR
-------- -----------------------------------------------------------------
17/10    PL/SQL: Statement ignored
17/20    PLS-00382: expression is of wrong type

Thanks 

Tom Kyte
June 21, 2004 - 1:53 pm UTC

pipe row( v_id );

you pipe out what the table contains, NOT the table itself.
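Applied to the function above, only the inside of the IF changes (a sketch; the rest of Mark's code stays as posted):

```sql
-- MYNUMTYPE is a TABLE OF NUMBER, so pipe the NUMBER element itself:
IF V_ID != -9999999999 THEN
   PIPE ROW( V_ID );        -- not PIPE ROW( MYNUMTYPE( V_ID ) )
END IF;
```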

Pipelines 'pipelined'

Chris Ellison, June 25, 2004 - 10:25 am UTC

Syntax and reasons to use pipelined functions very useful

thanks for great explanation

Sue Andrews, July 01, 2004 - 7:13 am UTC

Here's an example of how useful pipelined function can be: select * from table(ourpipelinedfuction) , in a Custom SQL in Discoverer EUL, gives us the results as a table. Now if we could just figure out a way to pass it a parameter ....

Controlling the number of rows returned from a pipelined table function!

Badarinath Boyina, July 28, 2004 - 9:37 am UTC

Hi Tom,

When we pipe a row from a pipelined table function, the PL/SQL engine doesn't pass it to the consumer immediately; instead, it passes rows in batches for performance reasons.

The statement in ORACLE pl/sql document is as follows:

"In PL/SQL, the PIPE ROW statement causes a table function to pipe a row and continue processing. The statement enables a PL/SQL table function to return rows as soon as they are produced. (For performance, the PL/SQL runtime system provides the rows to the consumer in batches.)"

My question is: is there any way we can control the batch size?

Regards,
Badari

Tom Kyte
July 28, 2004 - 1:07 pm UTC

it passes based on the array size.


you the programmer control the array size.

pipelined function performance

Praveen, January 26, 2005 - 3:09 am UTC

Tom,

But we can control the array size only if we are using a varray. What if we are using the pl/sql table (nested table)?

Thanks
Praveen

Tom Kyte
January 26, 2005 - 8:47 am UTC

huh?

when fetching, you the programmer 100% control the array size, period.

you the programmer say "prefetch 100" in java
you the programmer say "bulk collect limit 200" in plsql
you the programmer say "set arraysize 25" in sqlplus
you the programmer allocate an array of 100 and exec sql fetch into it in pro*c

and so on.....
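For the PL/SQL case, a sketch of a consumer that fetches from the emp_etl function shown earlier in 200-row batches (the LIMIT clause is the knob):

```sql
declare
    type empno_tab is table of number;
    l_empnos empno_tab;
    l_cur    sys_refcursor;
begin
    open l_cur for
        select empno
          from TABLE( emp_etl( cursor( select * from external_table ) ) );
    loop
        fetch l_cur bulk collect into l_empnos limit 200;  -- batch size = 200
        exit when l_empnos.count = 0;
        -- process the batch of up to 200 rows here ...
        null;
    end loop;
    close l_cur;
end;
/
```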

That's a lot of information

Praveen, January 26, 2005 - 11:20 pm UTC

Wow, Tom, you showered me with knowledge....thankyou very much.

Linheng Liang, February 16, 2005 - 5:13 pm UTC

The explanation is as clear as it could be.

I am investigating the performance of pipelined table functions by comparing a table function joined with a physical (database) table versus two tables joined together, and found that the first case is slower than the second.

I ran the two statements in a test procedure 1000 times and profiled them:

First (table function join with a table): 7.91 seconds
Second (two actual table joins): 1.15 seconds

I noticed that the first case has a full table scan, and I could not avoid that. I tried different hints, but still the same. In the second case, indexes were used.

The difference could be well caused by the full table scan.

Could you please provide an example to prove mine wrong, where a table function performs better?

Thanks,

Tom Kyte
February 16, 2005 - 5:37 pm UTC

if you do not NEED a pipelined function and can use a real table -- by all MEANS use the real table!!!!


I cannot prove something wrong which I cannot "see"


you will full scan a pipelined function for the simple reason that -- well, there are no indexes?

but full scans are not evil, not a chance.

A reader, February 17, 2005 - 12:01 pm UTC

Hi Tom,

Thanks for your quick response! My perception was that a pipelined table function is in general faster than queries through regular tables, but maybe that only applies to response time.

I am trying to get better performance by using table functions; that's my goal. But it seems there is no real advantage in my case, at least. It's a cool feature, and it has its advantages, as you spelled out.

I am interested in knowing whether people have experienced performance gains by using table functions in their applications. Each application is different, but perhaps someone could post a simple test scenario showing better performance with table functions.

Thanks again!




Tom Kyte
February 17, 2005 - 1:52 pm UTC

a pipelined table function cannot be faster than a query through regular tables (cannot be)

you have "select * from t1, t2, t3, t4 where ...."

vs

"hey, pl/sql, run this same query from above, procedurally process it and pipe the rows out"


both run the same query?


table functions have a place, doing ETL (plsql function reads an external table, plsql function logs bad rows into an error table, transforms the row, pipes the row out -- to an insert /*+ APPEND */, etc.):


insert /*+ append */ into t
select * from TABLE( plsql_function( cursor( select * from et ) ) );


or even in parallel -- instead of

a) loading the file into a stage table
b) running a process against stage to convert/cleanse the data
c) loading cleansed data into real table
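The parallel variant Tom mentions is, in sketch form, a matter of declaring the function PARALLEL_ENABLE over its ref-cursor argument (everything else matches the emp_etl example above; verify the clause against your version's documentation):

```sql
create or replace function emp_etl( p_cursor in sys_refcursor )
return emp_table_type
PIPELINED
PARALLEL_ENABLE ( PARTITION p_cursor BY ANY )  -- any PQ slave may consume any row
as
    l_rec external_table%rowtype;
begin
    loop
        fetch p_cursor into l_rec;
        exit when p_cursor%notfound;
        pipe row( emp_scalar_type( l_rec.empno, lower(l_rec.ename), l_rec.job,
                                   l_rec.mgr, l_rec.hiredate, l_rec.sal,
                                   l_rec.comm, l_rec.deptno ) );
    end loop;
    return;
end;
/
-- then: insert /*+ APPEND */ into t
--       select * from TABLE( emp_etl( cursor( select * from external_table ) ) );
```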




Linheng Liang, February 18, 2005 - 12:10 pm UTC

Your answer will save me days of work pursuing something without knowing exactly where it really should be applied.

Thanks,



Thanks Tom

Rahul, May 13, 2005 - 12:31 am UTC

Hi Tom, I have learned everything I need to know about pipelining. Is it possible to pass parameters to a pipelined function?
like SELECT * FROM TABLE(FUNC_PIPELINE(p_mrchno,p_crd_no));

How can I manage a pipelined function to retrieve data from a mainframe? Because here you have to use dbms_hs_passthrough, not a simple cursor.
Thanks in advance


Tom Kyte
May 13, 2005 - 9:18 am UTC

yes. the example on this page did so.
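For instance, using the poster's own (hypothetical) function and parameter names, a parameterized pipelined function is invoked just like any other function inside TABLE():

```sql
-- bind variables (or PL/SQL variables) pass straight through:
select *
  from TABLE( func_pipeline( :p_mrchno, :p_crd_no ) );
```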


Truly great !!

SVS, July 11, 2005 - 9:40 pm UTC

Tom

You are simply amazing......

There are tons of things to learn from a guru like you !!!

Rgds

SVS

cursor in pipelined function

CG, October 06, 2005 - 11:43 am UTC

Tom I have:
CREATE OR REPLACE TYPE pending_issues_type AS OBJECT (
ME_SEQ                                             NUMBER,
PERS_SEQ                                           NUMBER,
NAME                                               VARCHAR2(4000),
SSN                                                VARCHAR2(9),
RATE                                               VARCHAR2(10),
START_DATE                                         DATE,
TYPE_ID                                            VARCHAR2(1),
TYPE_DESC                                          VARCHAR2(30),
PROVIDER                                           VARCHAR2(4000),
FINALIZED_DATE                                     DATE,
LOG_DTG                                            DATE,
SSN_LAST_4                                         VARCHAR2(4),
SSN_LAST_2                                         VARCHAR2(9),
FMPC_ID                                            VARCHAR2(2),
SORT_FMPC_ID                                       VARCHAR2(2),
STATUS_GROUP                                       VARCHAR2(1),
DEPT_NAME                                          VARCHAR2(20),
CMD_NAME                                           VARCHAR2(100),
WC_NAME                                            VARCHAR2(20),
DIV_NAME                                           VARCHAR2(20),
PAYGRADE_CLASS                                     VARCHAR2(4),
CMD_DOD_UIC                                        VARCHAR2(8),
CMD_ASSIGNMENT                                     VARCHAR2(32),
VISITOR_IND                                        VARCHAR2(1) )
/

CREATE OR REPLACE TYPE pending_issues_set_type AS TABLE OF pending_issues_type;
/
SHOW ERROR

@me_lib.pks

SHOW ERROR

@me_lib.pkb

SHOW ERROR

CREATE OR REPLACE VIEW ME_PENDING_ISSUES_VW (
    ME_SEQ,
    PERS_SEQ,
    NAME,
    SSN,
    RATE,
    START_DATE,
    TYPE_ID,
    TYPE_DESC,
    PROVIDER,
    FINALIZED_DATE,
    LOG_DTG,
    SSN_LAST_4,
    SSN_LAST_2,
    FMPC_ID,
    SORT_FMPC_ID,
    STATUS_GROUP,
    DEPT_NAME,
    CMD_NAME,
    WC_NAME,
    DIV_NAME,
    PAYGRADE_CLASS,
    CMD_DOD_UIC,
    CMD_ASSIGNMENT,
    VISITOR_IND )
AS
 SELECT * FROM TABLE ( me_lib.get_pending_issues() );

The function get_pending_issues looks like:
FUNCTION get_pending_issues RETURN pending_issues_set_type
PIPELINED
IS
  cursor c_issues is
         SELECT         mp.me_seq,
                        mp.pers_seq,
                        hs_lib.get_pers_name(mp.pers_seq),
                        mp.me_ssn,
                        mp.me_rate,
                        mp.me_start_date,
                        mp.me_type_id,
                        mt.type_desc,
                        sams_system.getprovidername(mp.prov_seq),
                        mp.finalized_date,
                        ml.log_dtg,
                        rhv.ssn_last_4,
                        rhv.ssn_last_2,
                        rhv.fmpc_id,
                        rhv.sort_fmpc_id,
                        rhv.status_group,
                        rhv.dept_name,
                        rhv.cmd_name,
                        rhv.wc_name,
                        rhv.div_name,
                        rhv.paygrade_class,
                        rhv.cmd_dod_uic,
                        rhv.cmd_assignment,
                        rhv.visitor_ind
                   FROM me_report_header_vw rhv,
                        me_pers             mp,
                        me_prov_log         ml,
                        me_type             mt
                  WHERE mp.finalized_date IS NULL
                    AND mp.me_type_id      != 'R'
                    AND mp.log_seq         = ml.log_seq
                    AND mp.me_type_id      = mt.type_id
                    AND mp.me_seq          = rhv.me_seq
                UNION
                 SELECT mp.me_seq,
                        mp.pers_seq,
                        hs_lib.get_pers_name(mp.pers_seq),
                        mp.me_ssn,
                        mp.me_rate,
                        mm.mer_date,
                        decode(mp.me_type_id, 'S','Z','R'),
                        decode(mp.me_type_id, 'S','MER(via SOAPP)','MER'),
                        sams_system.getprovidername(mp.prov_seq),
                        mm.finalized_date,
                        ml.log_dtg,
                        rhv.ssn_last_4,
                        rhv.ssn_last_2,
                        rhv.fmpc_id,
                        rhv.sort_fmpc_id,
                        rhv.status_group,
                        rhv.dept_name,
                        rhv.cmd_name,
                        rhv.wc_name,
                        rhv.div_name,
                        rhv.paygrade_class,
                        rhv.cmd_dod_uic,
                        rhv.cmd_assignment,
                        rhv.visitor_ind
                   FROM me_report_header_vw rhv,
                        me_pers             mp,
                        me_prov_log         ml,
                        me_type             mt,
                        me_mer              mm
    -- line 981   WHERE mm.finalized_date IS NULL 
                    AND mp.log_seq         = ml.log_seq
                    AND mp.me_type_id      = mt.type_id
                    AND mp.me_seq          = rhv.me_seq;
BEGIN
      FOR x IN c_issues LOOP

             PIPE ROW( x );
      END LOOP;
             RETURN;
END get_pending_issues;

but when I try to compile the package I get
SQL> @me_lib.pkb
DOC>***********************************************************************
DOC>* ME_LIB.PKB                                                          *
DOC>* As of 06 Oct 2005 @ 09:16.                                          *
DOC>*---------------------------------------------------------------------*
DOC>*                                                                     *
DOC>**********************************************************************/
Creating Package Body 'ME_LIB'
DOC>* ME_LIB package body
DOC>*************************************************************************

Warning: Package Body created with compilation errors.

SQL> show error
Errors for PACKAGE BODY ME_LIB:

LINE/COL ERROR
-------- -----------------------------------------------------------------
981/14   PL/SQL: Statement ignored
981/24   PLS-00382: expression is of wrong type

I even tried without defining an explicit cursor and did the
for x in ( select .... ) loop, but got the same error

any ideas? 

Tom Kyte
October 06, 2005 - 12:19 pm UTC

just a comment here - but this example could be pared down to something that fits on the back of a cocktail napkin - completeness is good, terseness is even better.

you are trying to pipe a plsql record
when you must be piping an OBJECT type.

either:

a) cursor c is select yourObjectType( c1, c2, c3, ... ) data from ...

for x in c
loop
    pipe row( x.data );


b)

for x in c
loop
    pipe row( yourObjectType( x.c1, x.c2, ... ) );



but in your case, unless you are ACTUALLY DOING something really special in there, don't even use a pipelined function - unless you just want to make it slower than it needs to be.


My purpose of the above tom was to do this ......

CG, October 06, 2005 - 1:04 pm UTC

......
CREATE OR REPLACE VIEW ME_PENDING_ISSUES_VW (
ME_SEQ,
PERS_SEQ,
NAME,
SSN,
RATE,
START_DATE,
TYPE_ID,
TYPE_DESC,
PROVIDER,
FINALIZED_DATE,
LOG_DTG,
SSN_LAST_4,
SSN_LAST_2,
FMPC_ID,
SORT_FMPC_ID,
STATUS_GROUP,
DEPT_NAME,
CMD_NAME,
WC_NAME,
DIV_NAME,
PAYGRADE_CLASS,
CMD_DOD_UIC,
CMD_ASSIGNMENT,
VISITOR_IND )
AS
SELECT * FROM TABLE ( me_lib.get_pending_issues() );

is this bad? In your opinion of course.

Tom Kyte
October 06, 2005 - 1:41 pm UTC

yes, if all me_lib.get_pending_issues() does is what you have supplied here.

Why wouldn't it just be a VIEW without ANY CODE AT ALL?

thank you.....

cg, October 06, 2005 - 2:10 pm UTC

.... makes sense to me.
But for grins if I had this:
SQL> desc pending_issues_set_type
 pending_issues_set_type TABLE OF PENDING_ISSUES_TYPE
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------------------
 ME_SEQ                                             NUMBER
 PERS_SEQ                                           NUMBER
 NAME                                               VARCHAR2(4000)
 SSN                                                VARCHAR2(9)
 RATE                                               VARCHAR2(10)
 START_DATE                                         DATE
 TYPE_ID                                            VARCHAR2(1)
 TYPE_DESC                                          VARCHAR2(30)
 PROVIDER                                           VARCHAR2(4000)
 FINALIZED_DATE                                     DATE
 LOG_DTG                                            DATE
 SSN_LAST_4                                         VARCHAR2(4)
 SSN_LAST_2                                         VARCHAR2(9)
 FMPC_ID                                            VARCHAR2(2)
 SORT_FMPC_ID                                       VARCHAR2(2)
 STATUS_GROUP                                       VARCHAR2(1)
 DEPT_NAME                                          VARCHAR2(20)
 CMD_NAME                                           VARCHAR2(100)
 WC_NAME                                            VARCHAR2(20)
 DIV_NAME                                           VARCHAR2(20)
 PAYGRADE_CLASS                                     VARCHAR2(4)
 CMD_DOD_UIC                                        VARCHAR2(8)
 CMD_ASSIGNMENT                                     VARCHAR2(32)
 VISITOR_IND                                        VARCHAR2(1)

why would this:
<snip>
cursor c_issues is
         SELECT         mp.me_seq,
                        mp.pers_seq,
                        hs_lib.get_pers_name(mp.pers_seq) pername,
                        mp.me_ssn,
                        mp.me_rate,
                        mp.me_start_date,
                        mp.me_type_id,
                        mt.type_desc,
                        sams_system.getprovidername(mp.prov_seq) provname,
                        mp.finalized_date,
                        ml.log_dtg,
                        rhv.ssn_last_4,
                        rhv.ssn_last_2,
                        rhv.fmpc_id,
                        rhv.sort_fmpc_id,
                        rhv.status_group,
                        rhv.dept_name,
                        rhv.cmd_name,
                        rhv.wc_name,
                        rhv.div_name,
                        rhv.paygrade_class,
                        rhv.cmd_dod_uic,
                        rhv.cmd_assignment,
                        rhv.visitor_ind
                   FROM me_report_header_vw rhv,
                        me_pers             mp,
                        me_prov_log         ml,
                        me_type             mt
                  WHERE mp.finalized_date IS NULL
                    AND mp.me_type_id      != 'R'
                    AND mp.log_seq         = ml.log_seq
                    AND mp.me_type_id      = mt.type_id
                    AND mp.me_seq          = rhv.me_seq
                UNION
                 SELECT mp.me_seq,
                        mp.pers_seq,
                        hs_lib.get_pers_name(mp.pers_seq),
                        mp.me_ssn,
                        mp.me_rate,
                        mm.mer_date,
                        decode(mp.me_type_id, 'S','Z','R'),
                        decode(mp.me_type_id, 'S','MER(via SOAPP)','MER'),
                        sams_system.getprovidername(mp.prov_seq),
                        mm.finalized_date,
                        ml.log_dtg,
                        rhv.ssn_last_4,
                        rhv.ssn_last_2,
                        rhv.fmpc_id,
                        rhv.sort_fmpc_id,
                        rhv.status_group,
                        rhv.dept_name,
                        rhv.cmd_name,
                        rhv.wc_name,
                        rhv.div_name,
                        rhv.paygrade_class,
                        rhv.cmd_dod_uic,
                        rhv.cmd_assignment,
                        rhv.visitor_ind
                   FROM me_report_header_vw rhv,
                        me_pers             mp,
                        me_prov_log         ml,
                        me_type             mt,
                        me_mer              mm
                  WHERE mm.finalized_date IS NULL
            AND mm.me_seq          = mp.me_seq
                    AND mp.log_seq         = ml.log_seq
                    AND mp.me_type_id      = mt.type_id
                    AND mp.me_seq          = rhv.me_seq;
BEGIN
      FOR x IN c_issues LOOP
             PIPE ROW( pending_issues_set_type(
                    x.me_seq,
                        x.pers_seq,
                        x.pername,
                        x.me_ssn,
                        x.me_rate,
                        x.me_start_date,
                        x.me_type_id,
                        x.type_desc,
                        x.provname,
                        x.finalized_date,
                        x.log_dtg,
                        x.ssn_last_4,
                        x.ssn_last_2,
                        x.fmpc_id,
                        x.sort_fmpc_id,
                        x.status_group,
                        x.dept_name,
                        x.cmd_name,
                        x.wc_name,
                        x.div_name,
                        x.paygrade_class,
                        x.cmd_dod_uic,
                        x.cmd_assignment,
                        x.visitor_ind ) );
      END LOOP;
             RETURN;
<snip>
  .... give me this for each x ?
 PLS-00306: wrong number or types of arguments in call to
 'PENDING_ISSUES_SET_TYPE' 

Tom Kyte
October 06, 2005 - 2:55 pm UTC

because it is pending_issues_type you want to pipe -- not the collection type.

you don't have a collection in the loop, you have elements of the collection

Unix Pipe

Reader, October 25, 2005 - 10:48 pm UTC

Is it possible for a pipelined function to receive data from a unix pipe ?

How much data can a pipelined function hold (10M,20M....)
Thanks

Tom Kyte
October 26, 2005 - 11:42 am UTC

utl_file might be able to read from a unix pipe, but I'm not sure I'd even want to think about doing that (what starts the thing that feeds this pipe).

pipelined functions don't "hold" anything, they produce data and pipe to the client - the consumer.

What about Using XML Objects with Pipelined Functions

Robert Corfman, November 06, 2005 - 1:47 pm UTC

It seems that in all the pipelined function examples I can find, we have to create new types in the database. Is it possible, and if so, could you provide an example, of using the built-in XMLType object to avoid creating new types?

Tom Kyte
November 06, 2005 - 1:52 pm UTC

you can hide the types in package specs and we'll create the types for you - they will BE THERE, but you don't have to create them.

I would not use XML to do this (and you'd still need a type - a table of xmltype).



Alberto Dell'Era, November 06, 2005 - 3:01 pm UTC

> you can hide the types in package specs and we'll create the types for you -
> they will BE THERE, but you don't have to create them.

You probably meant this (10.2.0.1):

dellera@ORACLE10> create or replace package pkg as
2 type ret is table of varchar2(100);
3 end;
4 /

Package created.

dellera@ORACLE10> create or replace function f return pkg.ret pipelined is begin pipe row ('pippo'); return; end;
2 /

Function created.

dellera@ORACLE10> select * from table(f);

COLUMN_VALUE
------------------------------------
pippo

But:

dellera@ORACLE10> create or replace package pkg
2 as
3 type t_elem is object (x int, y int);
4 type t_elem_array is table of t_elem;
5 end;
6 /

Warning: Package created with compilation errors.

dellera@ORACLE10> show errors;
Errors for PACKAGE PKG:

LINE/COL ERROR
-------- --------------------------------------------------
3/3 PLS-00540: object not supported in this context.

dellera@ORACLE10> create or replace package pkg
2 as
3 type t_elem is record (x int, y int);
4 type t_elem_array is table of t_elem;
5 end;
6 /

Package created.

dellera@ORACLE10> create or replace function f return pkg.t_elem_array pipelined is begin pipe row ( pkg.t_elem (1,1) ); return; end;
2 /

Warning: Function created with compilation errors.

dellera@ORACLE10> show errors
Errors for FUNCTION F:

LINE/COL ERROR
-------- -----------------------------------------------------------------
1/55 PL/SQL: Statement ignored
1/66 PLS-00222: no function with name 'T_ELEM' exists in this scope

Or is there a way to create the object in the package spec, too?

TIA

Tom Kyte
November 06, 2005 - 3:56 pm UTC

no, I meant in the package body you can use some of the types - but in hindsight, for a pipelined function it must be a SQL type. I was just thinking of using a collection in SQL and misspoke.

For a pipelined function the invoker needs to understand the type - you need the type. It is a requirement. (even if we "hid" them in a spec like we can for collections referenced in sql - we still have the types - they just have ugly mangled names, but they are there)

Alberto Dell'Era, November 06, 2005 - 5:20 pm UTC

> (even if we "hid" them in a spec like we can for collections
> referenced in sql - we still have the types - they just have
> ugly mangled names, but they are there)

Ahem, not following you here - pipelined functions apart, do you mean that SQL types are "implicitly" created from PL/SQL types when using collections?

Maybe a tiny tiny quick and fast example :)

Tom Kyte
November 07, 2005 - 8:38 am UTC

actually, I'm getting myself confused.

Allow me to let someone else fix that:

http://oracle-base.blogspot.com/2005/09/pipelined-table-functions-and.html

an example of a pipelined function returning a type defined in a plsql spec...

I was talking about using a local collection variable in SQL myself.
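A sketch of the pattern in Tim Hall's linked post, with an important hedge: record-based collection types declared in a package spec only became legal return types for pipelined functions in later releases (12c onward) - on 9i/10g, as Alberto's test shows, you still need true SQL types or the workaround from the post. Oracle quietly creates hidden SQL types (with mangled names, visible in USER_TYPES) behind the scenes.

```sql
create or replace package pkg as
  type t_elem is record ( x int, y int );
  type t_elem_array is table of t_elem;
  function f return t_elem_array pipelined;
end;
/
create or replace package body pkg as
  function f return t_elem_array pipelined is
    l_row t_elem;
  begin
    l_row.x := 1;
    l_row.y := 2;
    pipe row( l_row );
    return;
  end;
end;
/
-- the implicitly created, hidden SQL types make this possible (12c+):
select * from table( pkg.f );
```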

Package Types for Pipelined Functions

Robert Corfman, November 06, 2005 - 7:16 pm UTC

So, if I understood right, is it then possible to define the collection types required for the pipelined function as types embedded in a package spec? Or does it have to be a true type defined directly in the database by someone with CREATE TYPE authority?

Tom Kyte
November 07, 2005 - 8:39 am UTC

see right above, link to Tim Hall's blog.

Mihail Bratu, December 10, 2005 - 3:44 pm UTC


pipelined functions,

A reader, June 27, 2006 - 12:05 pm UTC

I have demonstrated the use of pipelined functions below.

set serveroutput on size 1000000 format wrapped
drop table t1
/
create table t1 (char_column varchar2(6), number_column number)
/
CREATE OR REPLACE TYPE t1_object_type as object (
char_column varchar2(6), number_column number)
/
CREATE OR REPLACE TYPE t1_table_type AS TABLE of t1_object_type;
/

/* This function tries to read from table T1 and pipe the rows
back to the calling procedure.
*/
create or replace function T1_func
RETURN t1_table_type PIPELINED
AS
PRAGMA AUTONOMOUS_TRANSACTION;
v_record t1_object_type := t1_object_type(null,null);
v_rows integer;
begin
select count(*)
into v_rows
from t1;

dbms_output.put_line('beginning function T1_func. Number of T1 rows = '||to_char(v_rows));
commit;
for cur in (select char_column, number_column from t1) loop
v_record.char_column := cur.char_column;
v_record.number_column := cur.number_column;
PIPE ROW(v_record);
end loop;
exception
when others then
dbms_output.put_line ('Error in function T1_func: '||SQLERRM);
end t1_func;
/

/* This procedure inserts data into a table T1 without committing, then tries to insert
more rows through the table function T1_func
*/
create or replace procedure p1
is
v_rows integer;

begin
select count(*)
into v_rows
from t1;
dbms_output.put_line('beginning procedure P1. Number of T1 rows = '||to_char(v_rows));

insert into t1 (char_column, number_column) values ('abc', 123);
insert into t1 (char_column, number_column) values ('def', 456);
insert into t1 (char_column, number_column) values ('ghi', 789);

select count(*)
into v_rows
from t1;
dbms_output.put_line('In procedure P1. After inserts, number of T1 rows = '||to_char(v_rows));
commit;
insert into t1 (char_column, number_column)
(select * from table(t1_func));

select count(*)
into v_rows
from t1;
dbms_output.put_line('In procedure P1. After calling autonomous table function, number of T1 rows = '||to_char(v_rows));

exception
when others then
dbms_output.put_line('Error in procedure P1: '||SQLERRM);
end p1;
/
exec p1;



In my demonstration, if I remove the PRAGMA AUTONOMOUS_TRANSACTION statement, I get ORA-04091: table SNS.T1 is mutating, trigger/function may not see it.

Why is that?

Also, can I use a ref cursor in the above example without going for pipelined functions?



Tom Kyte
June 27, 2006 - 2:43 pm UTC

WARNING...WARNING...WARNING...WARNING...WARNING...WARNING...WARNING...WARNING...


WARNING...WARNING...WARNING...WARNING...WARNING...WARNING...WARNING...WARNING...

If you are using an autonomous transaction

to avoid mutating table constraints

you are almost certainly (99.99999999999999% certainly) doing something seriously messed up and wrong.


and to compound it with WHEN OTHERS, not followed by raise - ugh, ugh, ugh. Two HUGE mistakes made here.

Ok, three - you count the records and say "this is how many we are going to fetch" and then proceed to open another cursor. You do understand that it would be very easy for this routine to say "I'm going to give you 100 records" and then turn around and give us 1,000 or 0, right? You have to think "multi-user".




question cont...

A reader, June 27, 2006 - 2:35 pm UTC

In the above example, I insert 3 rows in the table t1 within the stored procedure. Next I call the function and try to insert the same data again.

Why are the rows that I inserted in the procedure not seen when the call goes to the function? I don't want to commit until the function call has executed successfully.

How to achieve this result?

I tried using sys_refcursor as a parameter to function but still I cannot see the rows in my function that I inserted in the procedure.

Any other way to overcome this problem?

Thanks,


Tom Kyte
June 27, 2006 - 2:49 pm UTC

because of the autonomous transaction - avoid them, stay away from them, you have lots to fix there first.

example,

A reader, June 27, 2006 - 2:55 pm UTC

Do you have an example of how to get rid of the autonomous transaction and achieve the results that I want?



Tom Kyte
June 27, 2006 - 3:03 pm UTC

without any code - write a specification, explain in detail what it is you want.

What I have so far is code that does not work :)



If you just want to insert rows into a table,

insert into t1
select * from t1


is all you need.
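Restating that as a runnable sketch (table and column names from the example above): within one transaction, a set-based statement sees your own uncommitted work, so no function, no autonomous transaction, and no mutating-table workaround is needed to copy T1's current rows.

```sql
-- after the three inserts in procedure P1, this statement sees all of
-- them - including the uncommitted ones - and duplicates them in one step:
insert into t1 (char_column, number_column)
select char_column, number_column
  from t1;
```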

working on it...

A reader, June 27, 2006 - 3:29 pm UTC

I am working on getting the real PL/SQL program our company has been using, which has the scenario described above.

I will produce it here as soon as I edit it.

Thanks,


Pipelined functions

A reader, November 24, 2006 - 8:39 am UTC

Hi Tom,
Thanks for a wonderful discussion on Pipelined functions. It is quite informative.
I have a couple of questions for you, if you don't mind.
1) Firstly, we are using Pro*C version 8.0.6 here. And for this version to work with PIPELINED functions, I find that I cannot use collections that are declared within PL/SQL packages; I've had to declare new SQL types. Would you know if this has changed in later versions? Or is this the standard behaviour for all 'outside' languages, seeing as they would have no idea about a PL/SQL package declaration?

2) Secondly, how would you approach the requirement below? We have a number of Pro*C reports that use basically the same SQL code but output the results differently. For code reuse, modularity, and ease of maintenance, we have taken the SQL out of the reports and put it in a 'generic' package that contains the SQL code and uses PIPELINED functions to return the result set. The reason for this was that the SQL takes in search criteria, which we pass in as parameters to the functions. Also, we weren't sure the ref cursor route was right for us, since any change to the 'select' clause would have required a change to all associated reports, and we didn't want that. Using PIPELINED functions and a named column list in the Pro*C selects, we have gotten around this.
I'm really interested to know how you would do this.

Thanks

Tom Kyte
November 24, 2006 - 6:45 pm UTC

1) that is in general true in all releases. Even if you declare them in a package, later releases create TWO TYPES for each one - no magic, they must ultimately be SQL types. I would just use your own nicely named types for documentation purposes, otherwise you end up with lots of "mangled names" in your schema.

if you have pipelined functions, why are you using Pro*C 8.0.6? that doesn't even make sense


2) why would a change to a select require that, you lost me there entirely.

If the number of columns does not change, a change in the select associated with a ref cursor would not affect any client code at all.

I would much rather use ref cursors over collections and pipelined functions.

What is flat ....

Reader, November 25, 2006 - 10:10 am UTC

In the example above you have said --

host flat scott/tiger emp > /tmp/emp.dat

What is flat ?

Thanks

Tom Kyte
November 25, 2006 - 11:25 am UTC

More info

A reader, November 27, 2006 - 5:39 am UTC

HI Tom,
Thanks for your response to my question above (reg. Pro*C and pipelined functions).
Reg. the second point, I'm told it is possible that the client may want to add to the column list in the select clause of a report. If we used a ref cursor, this in turn would need a change in all the Pro*C reports, to update the struct that holds the info. If we use pipelined functions and a named column list in the report instead, we do away with this and only have to change the report(s) that are affected.
So right now, for example I've got something like

'exec sql declare cursor mycur as
select field1, field2, field3
from table(mypkg.myfunction(:arg1, :arg2));'

And changing the packaged function to add a new column in doesn't affect my reports.

As for the versions, let's not even go there. I've given up trying to make sense of things like that too:-)

Thanks
Chandini



Tom Kyte
November 27, 2006 - 8:52 am UTC

I don't see how a ref cursor affects this

if you add a column to ANYTHING why wouldn't ANYTHING have to change. Why is a pipelined function (which requires, well, A CURSOR) changing that fact?

Exception handling in pipelined function

Richard Limanowski, December 18, 2006 - 5:35 pm UTC

Hello Tom,
I do not get exception from a pipelined function.
Is this a bug or a feature?
Using 10.2.0.1.0 under RedHat.
Richard

create or replace type t_n as table of number(4)
/
create or replace package p_tst as
function f1 return t_n pipelined;
function f2 return t_n pipelined;
end;
/
show err
create or replace package body p_tst as
function fe return number
as
l_res number;
begin
select 3 into l_res from dual where 1 = 0;
return l_res;
end;
--
function f1 return t_n pipelined
as
begin
pipe row(1);
pipe row(fe);
return;
end;
--
function f2 return t_n pipelined
as
begin
pipe row(1);
pipe row(fe);
return;
exception when others then
raise_application_error(-20001, 'got: ' || sqlerrm);
end;
end;
/
show err
-- gets no exception
select * from table(p_tst.f1);
-- gets an exception
select * from table(p_tst.f2);


Tom Kyte
December 18, 2006 - 7:19 pm UTC

no_data_found is not considered an error when fetching from a SQL statement.....

beware of that in a pipelined function - no data found is interpreted by.... THE CLIENT.

and here this client is saying "ah, no more data, great - we are done"
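One defensive pattern, sketched with the names from Richard's example: trap NO_DATA_FOUND where it arises and convert it into a real error, so the client cannot misinterpret the escaping exception as a clean end-of-fetch.

```sql
create or replace function fe return number
as
  l_res number;
begin
  select 3 into l_res from dual where 1 = 0;
  return l_res;
exception
  when no_data_found then
    -- re-raise as a user-defined error; a client fetching from a
    -- pipelined function will now see ORA-20002 instead of silently
    -- treating the exception as "no more rows"
    raise_application_error( -20002, 'fe: lookup returned no row' );
end;
/
```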

how to debug pipelined function

Richard Limanowski, December 23, 2006 - 11:52 am UTC

Hello Tom,

would you give me some tips how to best debug a pipelined function from within say Oracle SQL Developer.

I made a package:

create or replace type t_n as table of number(4)
/
CREATE OR REPLACE
PACKAGE P_TEST_P AS
function f return t_n pipelined;
END P_TEST_P;
/
CREATE OR REPLACE
PACKAGE BODY P_TEST_P AS
function f return t_n pipelined AS
BEGIN
pipe row(1);
return;
END f;
END P_TEST_P;
/

and then tried to debug the function.

SQL Developer generated a small script:

DECLARE
v_Return ONC.T_N;
BEGIN
v_Return := P_TEST_P.F();
-- Modify the code to output the variable
-- DBMS_OUTPUT.PUT_LINE('v_Return = ' || v_Return);
END;

When I try to run the script I get PLS-00653.
I can debug the code at a cost of rewriting the function into a non-pipelined version with all implications.

Is there a better solution for this like smart wrapping or the like?

Yours,
Richard
Tom Kyte
December 24, 2006 - 9:15 am UTC

check this out:

http://sueharper.blogspot.com/2006/07/remote-debugging-with-sql-developer_13.html

it is sort of like debugging a "trigger" would be - you cannot just load up and run the trigger, you have to do an insert or something - and then "remotely" debug it.

Ivano, February 02, 2007 - 3:19 pm UTC

Hi Tom,
I've created a table function that returns a collection; it works, but only when I launch it a second time.

CREATE OR REPLACE PACKAGE BODY TRA_CAN_PKG AS

FUNCTION get_sel_rif_i1 (tra_can_arg IN tra_can_cur)

RETURN tra_can_tab

PARALLEL_ENABLE (PARTITION tra_can_arg BY ANY)

PIPELINED IS
tra_can_rec sc4file_in%ROWTYPE;

tracciato tra_can_row := tra_can_row('','','','','','','','','','','','','',
'','','','','','','','','','','','','',
'','','','','','','','','','','','','',
'','','','','','','','','','','','','',
'','','','','','','','','','','','','',
'','','','','','','','','','','','','',
'','','','','','','','','','','','','',
'','','','','','','','','','','','','','');

BEGIN

LOOP

FETCH tra_can_arg INTO tra_can_rec;

EXIT WHEN tra_can_arg%NOTFOUND;



IF ((NOT contains_caus(tra_can_rec.cod_mov)) AND contains_sup_ch(tra_can_rec.cod_canale )) THEN

tracciato.cod_luogo_approv:=tra_can_rec.cod_luogo_approv;
tracciato.cod_tipo_circuito:=tra_can_rec.cod_tipo_circuito;
tracciato.cod_merc:=tra_can_rec.cod_merc;
tracciato.cod_art_nazion:=tra_can_rec.cod_art_nazion;
tracciato.cod_soc:=tra_can_rec.cod_soc;
tracciato.cod_divis:='01';
tracciato.cod_mov:=tra_can_rec.cod_mov;
tracciato.data_01:=tra_can_rec.data_01;
tracciato.data_02:=tra_can_rec.data_02;
tracciato.data_03:=tra_can_rec.data_03;
tracciato.data_04:=tra_can_rec.data_04;
tracciato.cod_luogo_approv_sec:=tra_can_rec.cod_luogo_approv_sec;
tracciato.cod_merc_sec:=tra_can_rec.cod_merc_sec;
tracciato.cod_forn_amm:=tra_can_rec.cod_forn_amm;
tracciato.cod_forn_comm_gruppo:=tra_can_rec.cod_forn_comm_gruppo;
tracciato.cod_canale:=tra_can_rec.cod_canale;
tracciato.cod_canale_sec:=tra_can_rec.cod_canale_sec;
tracciato.cod_gsettore_dep:=tra_can_rec.cod_gsettore_dep;
tracciato.cod_gsettore_sec:=tra_can_rec.cod_gsettore_sec;
tracciato.cod_fam_dep:=tra_can_rec.cod_fam_dep;
tracciato.cod_sfam_dep:=tra_can_rec.cod_sfam_dep;
tracciato.cod_segm_dep:=tra_can_rec.cod_segm_dep;
tracciato.cod_gsettore_can:=tra_can_rec.cod_gsettore_can;
tracciato.cod_fam_can:=tra_can_rec.cod_fam_can;
tracciato.cod_sfam_can:=tra_can_rec.cod_sfam_can;
tracciato.cod_segm_can:=tra_can_rec.cod_segm_can;
tracciato.cod_tipo_pv:=tra_can_rec.cod_tipo_pv;
tracciato.cod_posiz_merceol:=tra_can_rec.cod_posiz_merceol;
tracciato.cod_categoria_acquisto:=tra_can_rec.cod_categoria_acquisto;
tracciato.cod_fascia_mercato:=tra_can_rec.cod_fascia_mercato;
tracciato.cod_tipo_stagionalita:=tra_can_rec.cod_tipo_stagionalita;
tracciato.ora_cons_prev:=tra_can_rec.ora_cons_prev;
tracciato.cod_stagionalita:=tra_can_rec.cod_stagionalita;
tracciato.cod_assortimento:=tra_can_rec.cod_assortimento;
tracciato.cod_cat:=tra_can_rec.cod_cat;
tracciato.cod_um_norm:=tra_can_rec.cod_um_norm;
tracciato.qta:=tra_can_rec.qta;
tracciato.volumi:=tra_can_rec.volumi;
tracciato.cod_valuta:=tra_can_rec.cod_valuta;
tracciato.pc:=tra_can_rec.pc;
tracciato.pv:=tra_can_rec.pv;
tracciato.pv_lordo:=tra_can_rec.pv_lordo;
tracciato.stato_ordine:=tra_can_rec.stato_ordine;
tracciato.ora_cons_eff:=tra_can_rec.ora_cons_eff;
tracciato.flag_ventilabile:=tra_can_rec.flag_ventilabile;
tracciato.cod_unita_moviment:=tra_can_rec.cod_unita_moviment;
tracciato.qta_umov:=tra_can_rec.qta_umov;
tracciato.composizione_pallet:=tra_can_rec.composizione_pallet;
tracciato.num_ordine:=tra_can_rec.num_ordine;
tracciato.num_documento:=tra_can_rec.num_documento;
tracciato.cod_iva:=tra_can_rec.cod_iva;
tracciato.perc_iva:=tra_can_rec.perc_iva;
tracciato.perc_scarto:=tra_can_rec.perc_scarto;
tracciato.tara:=tra_can_rec.tara;
tracciato.tipo_pa:=tra_can_rec.tipo_pa;
tracciato.cod_tipo_acq:=tra_can_rec.cod_tipo_acq;
tracciato.cod_tipo_ven:=tra_can_rec.cod_tipo_ven;
tracciato.destinazione_merce:=tra_can_rec.destinazione_merce;
tracciato.flag_edi:=tra_can_rec.flag_edi;
tracciato.flag_gpa:=tra_can_rec.flag_gpa;
tracciato.flag_partita:=tra_can_rec.flag_partita;
tracciato.cod_cliente:=tra_can_rec.cod_cliente;
tracciato.pa_base:=tra_can_rec.pa_base;
tracciato.pa_sconto_001:=tra_can_rec.pa_sconto_001;
tracciato.pa_sconto_002:=tra_can_rec.pa_sconto_002;
tracciato.pa_sconto_003:=tra_can_rec.pa_sconto_003;
tracciato.pa_sconto_004:=tra_can_rec.pa_sconto_004;
tracciato.pa_sconto_005:=tra_can_rec.pa_sconto_005;
tracciato.pa_sconto_006:=tra_can_rec.pa_sconto_006;
tracciato.pa_sconto_007:=tra_can_rec.pa_sconto_007;
tracciato.pa_sconto_008:=tra_can_rec.pa_sconto_008;
tracciato.pa_sconto_009:=tra_can_rec.pa_sconto_009;
tracciato.pa_sconto_010:=tra_can_rec.pa_sconto_010;
tracciato.pa_sconto_011:=tra_can_rec.pa_sconto_011;
tracciato.pa_sconto_012:=tra_can_rec.pa_sconto_012;
tracciato.pa_sconto_013:=tra_can_rec.pa_sconto_013;
tracciato.pa_sconto_014:=tra_can_rec.pa_sconto_014;
tracciato.pa_sconto_015:=tra_can_rec.pa_sconto_015;
tracciato.pa_sconto_016:=tra_can_rec.pa_sconto_016;
tracciato.pa_sconto_017:=tra_can_rec.pa_sconto_017;
tracciato.pa_sconto_018:=tra_can_rec.pa_sconto_018;
tracciato.pa_sconto_019:=tra_can_rec.pa_sconto_019;
tracciato.pa_sconto_020:=tra_can_rec.pa_sconto_020;
tracciato.pa_sconto_021:=tra_can_rec.pa_sconto_021;
tracciato.pa_sconto_022:=tra_can_rec.pa_sconto_022;
tracciato.pa_sconto_023:=tra_can_rec.pa_sconto_023;
tracciato.pa_sconto_024:=tra_can_rec.pa_sconto_024;
tracciato.pa_sconto_025:=tra_can_rec.pa_sconto_025;
tracciato.num_promo_logist:=tra_can_rec.num_promo_logist;
tracciato.progr_riga_ord:=tra_can_rec.progr_riga_ord;
tracciato.progr_riga_bolla:=tra_can_rec.progr_riga_bolla;
tracciato.progr_stralcio:=tra_can_rec.progr_stralcio;
tracciato.lead_time_teorico:=tra_can_rec.lead_time_teorico;
tracciato.peso_merce:=tra_can_rec.peso_merce;
tracciato.volume_merce:=tra_can_rec.volume_merce;
tracciato.sconto_range_min:=tra_can_rec.sconto_range_min;
tracciato.sconto_range_max:=tra_can_rec.sconto_range_max;
tracciato.impegnato_iniz:=tra_can_rec.impegnato_iniz;
tracciato.impegnato_att:=tra_can_rec.impegnato_att;
tracciato.costo_logistico:=tra_can_rec.costo_logistico;
tracciato.cod_collo_misto:=tra_can_rec.cod_collo_misto;
tracciato.cod_itf:=tra_can_rec.cod_itf;
tracciato.filler:=tra_can_rec.filler;
IF is_rif(tra_can_rec.cod_mov) THEN

tracciato.pa_netto:=tra_can_rec.pal;
tracciato.pal:=tra_can_rec.pa_netto;
ELSE
tracciato.pa_netto:=tra_can_rec.pa_netto;
tracciato.pal:=tra_can_rec.pal;
END IF;
PIPE ROW (tracciato);

END IF;

END LOOP;

RETURN;

END;


Tom Kyte
February 03, 2007 - 7:23 pm UTC

"sorry"

chop it down, make it small - but still fail, include EVERYTHING I need to cut and paste and run it on my own system

then, I might look at it, until then - it is just a bunch of code on the screen and I'm not a compiler.

don't forget - SMALL - as small as can be. If you post large - probably what I would do is just make it small to point out how small it should have been.

So, make it very very very very small - for example, you do not need dozens of assignments I'm sure - one or two would have done.

Loading Huge Volume of Data After Splitting

Rajasekar, April 03, 2007 - 2:07 am UTC

Thanks for the excellent site.

Please let me know if the Pipelined function would be effective in the below case.

I have an external table like the one below with around 10 million records. The FLAG column stores all the flag codes related to the album as a single record. When I load into the target table, each code should go in as a separate record, as shown below.

Create table my_music
(album_id NUMBER,
flag VARCHAR2(10)
);

INSERT INTO my_music values(1,'AB');
INSERT INTO my_music values(2,'MKCS');
INSERT INTO my_music values(3,'SEWRT');

SQL> select * from my_music;

ALBUM_ID FLAG
---------- ----------
1 AB
2 MKCS
3 SEWRT

The above records should be loaded into target as

1 A
1 B
2 M
2 K
2 C
2 S
3 S
3 E
3 W
3 R
3 T

Can I use a pipelined function here? Will that be the optimal way to load? If yes, can you please show how to implement it?

Or is there any other, better solution?

Thanks,
Raj

My Current Implementation

Rajasekar, April 03, 2007 - 2:35 am UTC

The solution that I implemented for the above requirement is given below. Will this be optimal code to load 10 million records?

create or replace type album_scalar_type as object
(a_id VARCHAR2(10),
flags VARCHAR2(1)
)
/

create or replace type album_table_type as table of album_scalar_type;
/

CREATE OR REPLACE FUNCTION flags_etl( p_cursor IN SYS_REFCURSOR )
RETURN album_table_type PIPELINED AS
v_a_id VARCHAR2(10);
v_a_flag VARCHAR2(10);
v_flg VARCHAR2(1);
BEGIN
LOOP
FETCH p_cursor INTO v_a_id,v_a_flag;
EXIT WHEN (p_cursor%NOTFOUND);
FOR I IN 1..LENGTH(v_a_flag)
LOOP
pipe row( album_scalar_type( v_a_id, SUBSTR(v_a_flag,i,1)) );
END LOOP;
END LOOP;
RETURN;
END;
/

SQL> ed
Wrote file afiedt.buf

1 select a_id, flags from load_amg_cls_calbum
2* where rownum < 5
SQL> /

A_ID FLAGS
---------- ----------
W 1 CDFJ
W 2 CDFJ
W 3 DJC
W 4 DFJ


SELECT *
FROM TABLE(flags_etl(CURSOR( SELECT a_id, flags FROM load_amg_cls_calbum where rownum < 5) ) )

A_ID F
---------- -
W 1 C
W 1 D
W 1 F
W 1 J
W 2 C
W 2 D
W 2 F
W 2 J
W 3 D
W 3 J
W 3 C

A_ID F
---------- -
W 4 D
W 4 F
W 4 J

Thanks!

Tom Kyte
April 03, 2007 - 11:11 pm UTC

look at the other place you asked this for my answer.
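For completeness, a hedged set-based alternative (no PL/SQL at all) for this kind of split: join the table to a CONNECT BY row generator, producing one row per character position. Here `target_music` is a hypothetical target table, and the `level <= 10` bound comes from FLAG being VARCHAR2(10) in the example; benchmark this against the pipelined approach at the 10-million-row volume before choosing.

```sql
insert /*+ append */ into target_music (album_id, flag)
select m.album_id,
       substr( m.flag, n.pos, 1 ) flag
  from my_music m,
       -- row generator: positions 1..10 (10 = max width of FLAG)
       (select level pos from dual connect by level <= 10) n
 where n.pos <= length( m.flag );
```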

global temp

Swapnal, April 04, 2007 - 8:54 am UTC

How is this different from populating a global temporary table and selecting from it? In most cases when I have to choose between a record object and a global temporary table, I prefer the latter - a minimal difference being that it stays within the realm of SQL. Any reason why it should be otherwise?
Tom Kyte
April 04, 2007 - 10:41 am UTC

how is what different?

if you mean "pipelined functions" it would be the fact that you are streaming data from a cursor to a procedure process (the pipelined function) and then another process retrieves this.

and you can skip all of the heinous IO that a global temporary table would likely incur by just streaming the data.

and the stream is nice because the pipelined function is constantly returning data to the client as it produces it - nice "pipeline" gets going instead of a series of monolithic steps that require large amounts of resources.
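A sketch of that streaming behaviour, with hypothetical names: the ROWNUM predicate stops the client's fetch after a handful of rows, so the function never produces the rest - exactly the work (and temp I/O) a populate-then-select global temporary table approach cannot avoid.

```sql
create or replace type num_tab as table of number
/
create or replace function gen( p_rows in number )
  return num_tab pipelined
is
begin
  for i in 1 .. p_rows loop
    pipe row( i );
  end loop;
  return;
end;
/
-- returns 5 rows almost instantly; the other 999,995 are never
-- generated (Oracle silently terminates the producer once the
-- client stops fetching):
select * from table( gen( 1000000 ) ) where rownum <= 5;
```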

questions on table functions

Stewart W. Bryson, April 12, 2007 - 12:47 pm UTC

First off, is there such a thing as a non-pipelined table function?

Secondly, I created this test case just to ask about bulk collecting inside a table function. I realize the function goes a long way to do nothing.

SQL> CREATE OR REPLACE TYPE part AS object
  2  (
  3    TABLE_OWNER                            VARCHAR2(30),
  4    TABLE_NAME                             VARCHAR2(30),
  5    PARTITION_NAME                         VARCHAR2(30),
  6    PARTITION_POSITION                     NUMBER
  7  )
  8  ;
  9  /

Type created.

Elapsed: 00:00:00.01
SQL> 
SQL> CREATE OR REPLACE TYPE parttab AS TABLE OF tdinc.part
  2  /

Type created.

Elapsed: 00:00:00.01
SQL> 
SQL> CREATE OR REPLACE FUNCTION get_parts
  2     ( p_owner     VARCHAR2,
  3       p_table     VARCHAR2
  4     )
  5     RETURN parttab
  6     PIPELINED
  7  AS
  8     l_dyn_ddl     LONG;
  9  
 10     t_parttab parttab;
 11  BEGIN
 12  
 13     l_dyn_ddl :=
 14     'SELECT table_owner,'
 15     ||' table_name,'
 16     ||'partition_name,'
 17     ||' partition_position '
 18     || '  FROM all_tab_partitions'
 19     || ' WHERE table_owner = '''
 20     || upper (p_owner)
 21     || ''' AND table_name = '''
 22     || upper (p_table)
 23     || '''';
 24  
 25     EXECUTE IMMEDIATE l_dyn_ddl
 26     bulk collect INTO t_parttab;
 27  
 28     FOR i IN 1 .. t_parttab.count
 29     LOOP
 30  
 31        pipe ROW (t_parttab(i));
 32     END LOOP;
 33  
 34     RETURN;
 35  END get_parts;
 36  /

Function created.

Elapsed: 00:00:00.03
SQL> select * from table(get_parts('BRYSONS','SALES'));
select * from table(get_parts('BRYSONS','SALES'))
                    *
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got -
ORA-06512: at "BRYSONS.GET_PARTS", line 25


Elapsed: 00:00:00.06
SQL> 

Tom Kyte
April 13, 2007 - 12:16 pm UTC

... First off, is there such a thing as a non-pipelined table function? ...

sure - see http://asktom.oracle.com/Misc/varying-in-lists.html for an 8i example (before pipelined functions existed)


I'll guess, you have cursor sharing set and an older release of Oracle?

use cast


table( cast( get_parts(...) as part ) )


Giving Oracle "hints" about the pipelined function

Scott Van Wart, April 13, 2007 - 4:22 pm UTC

While assessing our need for pipelined functions, I wondered if it would be possible to pass hints to Oracle that might make it terminate the function prematurely. Like in the following,

CREATE TYPE t_date_array IS TABLE OF DATE;
/

CREATE FUNCTION generate_dates RETURN t_date_array PIPELINED IS
  v_date DATE := TRUNC( SYSDATE );
BEGIN
  LOOP
    PIPE ROW( v_date );
    v_date := v_date + 1;
  END LOOP;
  RETURN;
END;
/


And then a test:
SQL> SET TIMING ON
SQL> SELECT * FROM TABLE( generate_dates ) WHERE column_value < TRUNC( SYSDATE ) + 7;
ERROR:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
ORA-06512: at "HEYTHERETOM.GENERATE_DATES", line 6

no rows selected

Elapsed: 00:00:03.69
SQL>


Now the immediate fix would be to place a realistic limit on the range of dates returned, which would solve both the error (I'm not quite vain enough to assume my procs will still be used in the year 9999) and the amount of time it takes to run. Or I could just query on DUAL with a CONNECT BY combining SYSDATE and LEVEL :). But for illustrative purposes... can I hint to Oracle that generate_dates will always return a date at least as late as the previous date, so it might know to abort the function after so many rows? Something like a check constraint maybe?

Thanks!
-Scott
Tom Kyte
April 13, 2007 - 7:21 pm UTC

I didn't follow this - not sure what you mean by a hint in this context.

did you mean to perhaps pass some inputs to generate dates to have it stop eventually?

Re: Giving Oracle "hints" about the pipelined function

Scott Van Wart, April 13, 2007 - 7:55 pm UTC

Hi Tom,

You mentioned earlier that without indexes, Oracle needs to check everything. Since the function generates results in ascending chronological order, there's no need to check any more than 7 rows with the given WHERE clause, but Oracle can't really know this, so it generates all of the results until the year 9999 (assuming we trap and handle the error).

Bottom line, if you have a lineup of people from youngest to oldest, and you want the first 5 youngest people, you don't check all the way to the end of the line. It would be neat if Oracle could be taught to do the same. It's just a thought, I'm probably on the wrong train of thought and should be looking for other methods, but I thought I'd ask :).
Tom Kyte
April 13, 2007 - 9:58 pm UTC

you'd have to pass in inputs to tell your procedure to "stop", it could generate ANYTHING - so it has to check everything.
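Reworking Scott's generate_dates along those lines - a sketch where the stopping condition is an explicit input instead of something the optimizer would have to infer from the WHERE clause:

```sql
create or replace type t_date_array is table of date
/
create or replace function generate_dates( p_days in pls_integer )
  return t_date_array pipelined
is
  v_date date := trunc( sysdate );
begin
  -- pipe exactly p_days consecutive dates, starting today
  for i in 1 .. p_days loop
    pipe row( v_date );
    v_date := v_date + 1;
  end loop;
  return;
end;
/
-- finishes immediately instead of looping toward the year 9999:
select * from table( generate_dates( 7 ) );
```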

Follow-up

Stewart W. Bryson, April 14, 2007 - 10:49 pm UTC

Sorry for not including the information...

Actually... I'm using 10.2.0.3
Tom Kyte
April 16, 2007 - 10:40 am UTC

and, does cast work - is cursor sharing set.

Cast and cursor_sharing

Stewart W. Bryson, April 17, 2007 - 10:44 am UTC

Thanks for your help with this.

The cast doesn't help. I even tried it inside the function, though I knew that wouldn't help either.
SQL> select * from table(cast(get_parts('BRYSONS','SALES') as part));
select * from table(cast(get_parts('BRYSONS','SALES') as part))
                         *
ERROR at line 1:
ORA-22907: invalid CAST to a type that is not a nested table or VARRAY


Elapsed: 00:00:00.07
SQL> show parameter cursor  

NAME                                 | TYPE        | VALUE
------------------------------------ | ----------- | ------------------------------
cursor_sharing                       | string      | EXACT
cursor_space_for_time                | boolean     | FALSE
open_cursors                         | integer     | 300
session_cached_cursors               | integer     | 20
SQL> 


Tom Kyte
April 18, 2007 - 9:42 am UTC

oh, the function returns parttab, a table of parts, use the parttab type.

A reader, August 05, 2007 - 9:54 am UTC

Tom
How does the following work from your original reply:

pipe row( emp_scalar_type( l_rec.empno,
LOWER(l_rec.ename),
l_rec.job, l_rec.mgr, l_rec.hiredate, l_rec.sal,
l_rec.comm, l_rec.deptno ) );

What I mean is: is the expression emp_scalar_type(...) a 'terse' constructor call - is it creating and initialising an instance of the type?

2) A question on PIPE ROW: obviously, from point one, if you have multiple columns to output you need to use a type variable. That is, PIPE ROW does not always expect its parameter(s) to be an object type as in your example.

Please see the 'Followup' section of :

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:210612357425#28960621306776

3) Can I use a VARRAY as the output in PIPE ROW?

Thanks




Tom Kyte
August 05, 2007 - 2:18 pm UTC

yeah, it creates a temporary.

Just like coding:

return 42;

would.


2) pipe row expects a single thing - yes, because a pipelined function returns a collection of a type - of A TYPE


3) try it and see :)

A reader, August 06, 2007 - 4:13 am UTC

Tom from your previous answer
"yeah, it creates a temporary.

Just like coding:

return 42;

would.
"

Is this what really happens: the constructor function of the object is run, memory is allocated for the attribute values, an instance of the object is created, and the function returns a reference to that instance?

And as soon as the value is returned, is the instance destroyed?
Tom Kyte
August 06, 2007 - 11:57 am UTC

sure, everything that would happen if you coded:

declare
   x myType := myType( ...... );
begin
   null;
end;
/

would happen, just like:
return 42;


Table Function Parameter

Moe, August 07, 2007 - 6:05 pm UTC

Hi Tom,

suppose I have the following table function query:

SELECT * FROM TABLE(test_function(12));

Is there a way to create a view based on the above statement, to which I can pass the function parameter in the where clause?
something like:

create or replace view test_view as
SELECT b.* FROM TABLE(test_function(a.dept_id))b,
department a
where a.dept_id = b.dept_id

then I'd like to say:

select * from test_view where dept_id = 12


Thank You Very Much

What about scalability?

rustam kafarov, January 23, 2008 - 9:24 am UTC

Hello.
And what about the scalability of using pipelined functions?
For example, my pipelined function returns a collection of objects. Inside the function we retrieve data from a cursor, put the rows into a PL/SQL collection using bulk fetching, then process them somehow in a loop using other queries, and finally "pipe" thousands of rows.
I guess all the returned data will be put into the UGA - am I right? What will happen if 1,000 users execute the same queries against this function? When will the allocated memory be released? When will the collection be released? I couldn't find information about resource usage and the releasing of memory.
Thanks

Tom Kyte
January 23, 2008 - 2:05 pm UTC

the returned data is returned to the client. They fetch it.

Your routine will call PIPE ROW (N) times in a row - depending on the client's array fetch size, the client gets those (N) rows, uses them, and then you get called again for (N) more.
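One way to observe that batching (a sketch with hypothetical names): count the PIPE ROW calls in a package variable, fetch only a screenful of rows, and then inspect the counter from the same session - it tracks the client's array fetch size, not the total row count.

```sql
create or replace package pipe_stats as
  g_piped pls_integer := 0;   -- running count of PIPE ROW calls
end;
/
create or replace type feed_tab as table of number
/
create or replace function feed return feed_tab pipelined
is
begin
  for i in 1 .. 100000 loop
    pipe_stats.g_piped := pipe_stats.g_piped + 1;
    pipe row( i );
  end loop;
  return;
end;
/
select * from table( feed ) where rownum <= 15;

-- inspect the counter: a small number (driven by the arraysize),
-- nowhere near 100000
begin
  dbms_output.put_line( pipe_stats.g_piped );
end;
/
```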

internal pipelined function

AlexGru, May 20, 2008 - 6:39 am UTC

Hello Mr. Tom.
Some time ago I began using pipelined functions almost everywhere. I just love them.
And I have one question: can I declare and use a pipelined function inside an anonymous block?

For example:

declare
CURSOR CUR IS
select
uo.object_id,
uo.object_name
from
user_objects uo;

TYPE TYPE_TBL IS TABLE OF CUR%ROWTYPE;
l_res_tbl TYPE_TBL;
-- internal PF ----------------------------
function getInternalDataSet return TYPE_TBL pipelined IS
l_tbl TYPE_TBL;
begin
OPEN CUR;
FETCH CUR BULK COLLECT INTO l_tbl;
FOR i in l_tbl.FIRST..l_tbl.LAST LOOP
pipe row(l_tbl(i));
END LOOP;
CLOSE CUR;
end;
-------------------------------------------
begin
--1
SELECT *
BULK COLLECT INTO l_res_tbl
FROM TABLE(getInternalDataSet);

--2
l_res_tbl:=getInternalDataSet;

end;

Neither --1 nor --2 is allowed.
I suppose --1 is not allowed because the SQL engine doesn't know about the function getInternalDataSet at all.

But I can't understand why the second case doesn't work.
Please show me where I need to look.

Thanks so much.


Tom Kyte
May 20, 2008 - 11:24 am UTC

... Some time ago I began using pipelined functions almost everywhere. I just ...

well, that is a bad idea - they should be used in pretty limited cases, not "everywhere". Just like XML - sparingly is a good concept to apply here.


if a function is pipelined, by definition it doesn't return anything - it doesn't make sense to call it anywhere other than from SQL.

The 2nd case should be clear - a pipelined function just has a "return;", not "return data;", just return. It pipes rows - exclusively for the use and consumption of SQL - it doesn't return a thing.


and yes, in order for SQL to see the function, the function must be visible to SQL and when it is hidden in the plsql block - it cannot be 'seen'

Response

AlexGru, May 20, 2008 - 10:29 pm UTC

Hello Mr. Tom.
Thanks so much for response.
Of course I exaggerated a bit. I use them to return data sets to web pages. I think it's a good idea to encapsulate SQL queries in pipelined functions and call those functions from web pages (it's not an OLTP system). This approach can help with maintaining the system in the future. I also remember everything you've said about communication between DBAs and developers. I am a DB developer and we have web developers who are clients of mine. I want to shield my (internal DB) work from those developers. Why not? They may be knowledgeable in ASP, PHP, JS and so on, but not so much in Oracle SQL and PL/SQL.
And sometimes they try to write absolutely inefficient queries without understanding the system. Now I just present a documented interface to my set of pipelined functions to those developers, and so far no problems.
I also think this approach moves the logic from the web tier closer to the DB side, which is obviously good.
Can you tell me anything about my approach for this kind of development (an OLAP-style web application)?

Thanks again and again.


Tom Kyte
May 20, 2008 - 10:32 pm UTC

I don't see why you would need to use lots and lots of pipelined functions.

Most everything you could think to do - can and should be done in SQL

so, a ref cursor would be appropriate most times, pipelined functions would be few and far between.

OK OK

AlexGru, May 21, 2008 - 12:24 am UTC

Thanks. Of course I agree, and that is what I do - I am with you on "Most everything you could think to do - can and should be done in SQL".
I just invoke "simple" (regular) SQL queries from the pipelined functions.
The problem is that I have a set of SQL queries (only for presenting data in an OLAP web system), each about 100 lines long.
Business rules can change often, and when they do I need to make the changes in every SQL statement quickly.
I would need to take these SQL queries (not just one) back from the web developers, make the change, test it, and hand them back to the web developers.
The web developers might try to make changes without me and introduce logical errors or inefficiencies.
With pipelined functions I protect (or try to protect) myself, the queries and the web developers from these errors.
Maybe the best way is using functions that return ref cursors? Am I right?
Sorry, I haven't done it before, but I suppose it's not difficult.
Can you tell me: "Yes, it's a good way to invoke SQL queries with functions that return ref cursors, for presenting data sets on the web"?
Sorry for taking your time, and thank you again.

Tom Kyte
May 21, 2008 - 8:10 am UTC

you should use ref cursors for returning result sets from a function - not a pipelined function.

I can definitely say that, not only will I say it - but I wrote it in Effective Oracle by Design as well - ref cursors are the right way to return a result set.




why would I code:

declare 
CURSOR CUR IS 
 select 
       uo.object_id,
       uo.object_name
   from
       user_objects uo;

TYPE TYPE_TBL IS TABLE OF CUR%ROWTYPE;
l_res_tbl TYPE_TBL;
-- internal PF ----------------------------
function getInternalDataSet return TYPE_TBL pipelined IS
 l_tbl TYPE_TBL;
 begin
  OPEN CUR; 
  FETCH CUR BULK COLLECT INTO l_tbl;
   FOR i in l_tbl.FIRST..l_tbl.LAST LOOP
    pipe row(l_tbl(i));
   END LOOP;
  CLOSE CUR;
 end; 


when I could just


function getInternalDataSet return sys_refcursor 
IS
   l_cur sys_refcursor;
begin
  OPEN l_cur for 
   select uo.object_id, uo.object_name
     from user_objects uo;

  return l_cur;
end;


you fetch all of the data into an array (memory, processing time). You then iterate over the array (processing time). You pipe each row performing a context switch between plsql and sql (pipe row suspends the plsql routine, goes back to sql engine). That is processing time and more memory.


Just return the result set and let the client fetch from it.

You would use a pipelined function IF and ONLY IF you had to procedurally process each row slow by slow - like you might in an ETL process (extract/transform/load)


Ok, I will try

AlexGru, May 21, 2008 - 11:26 pm UTC

Hello Mr. Tom.
I agree with you, and yesterday I began my research in this field.
I am trying to rewrite my pipelined functions as functions that return ref cursors. We use PHP and I don't have strong PHP knowledge (I'm just a beginner in PHP). I asked our developers about their experience with functions that return ref cursors. They don't know about them. That means I need to do it first and then explain how it works.
I use Oracle 10g, PHP5 and Zend Studio. I tried searching for examples on the internet, on the Oracle site and in the book "The Underground PHP and Oracle® Manual". It works well right up until fetching from the ref cursor. I found very little on the internet about this question. Examples are almost completely absent. Why? It looks like nobody uses this approach. Why, if it is recommended by you and other professionals? (I also read some materials in Russian, "Using ref cursors serves as a criterion for an Oracle professional":
part 1 - http://www.citforum.ru/database/oracle/cursor/
part 2 - http://www.citforum.ru/database/oracle/cursor1/ )

Today I am going to continue my work on this and will report back on my success.
Of course I have your books, but I have not read them fully yet.

And one more question on this theme:
I have pipelined functions that look like:

function func_name ...
cursor c1 is ...
cursor c2 is ...
cursor c3 is ...
begin

<here I have good readable logic, where I open cursors, bulk fetch collections and so on. The code in this part is short, readable and maintainable.>

end;

With this approach I am sure that I use bind variables when I open the cursors.
How can I change the code structure in the begin .. end part while keeping it readable?
If I use your example, the code would look like this:

function func_name ...
begin

here long code with too much
(OPEN C1 FOR ...Test SQL queries...)

end;

And I can't use USING when I write OPEN C1 FOR ... .
I need to be sure that I use bind variables when I OPEN C1 FOR ..., where C1 is a ref cursor.

Thank you so much for your time.
And sorry for my English - I need to improve it too.

Tom Kyte
May 22, 2008 - 7:04 am UTC

if you

begin
open c1 for select * from t where column = pl_sql_variable;
end;


you have bound, you are bound, it works the same as a cursor you defined above.
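And if the statement has to be built dynamically (the only case where USING even applies), OPEN-FOR accepts it - a sketch, with a hypothetical table t:

```sql
declare
  c   sys_refcursor;
  val number := 42;
begin
  -- static OPEN-FOR: the PL/SQL variable is bound automatically
  open c for select * from t where x = val;
  close c;

  -- dynamic OPEN-FOR: the statement is a string, bind with USING
  open c for 'select * from t where x = :b' using val;
  close c;
end;
/
```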

I don't know what you mean by

function func_name ...
begin
<b>
here long code with too much</b>
(OPEN C1 FOR ...Test SQL queries...)

end;


I think I pointed out that returning a ref cursor will cause you to delete hundreds and hundreds - thousands of lines of code - LESS CODE with ref cursors - way too much code with pipelined functions.



If php can process a result set, and a ref cursor is just a result set - well......


i just googled

php sys_refcursor

and found lots of reference material

@AlexGru re: PHP and ref cursors

Stew Ashton, May 26, 2008 - 9:23 am UTC


"The Underground PHP and Oracle® Manual" release 1.4 gives a simple but full example of accessing a REF CURSOR on p. 143.

This example uses the PHP Oracle extension called OCI8. You really need to use OCI8 because the other extension, called PDO, is not mature.

I strongly suggest using PHP 5.2.5, because earlier versions have OCI8 drivers with important bugs. In fact, the current version of "Zend Core for Oracle" has an out-of-date OCI8 driver. Thanks to Christopher Jones of Oracle, Zend is in the process of fixing this.

As Tom has said elsewhere, using a REF CURSOR means you will parse at every execution. If you set the session_cached_cursors parameter high enough, the parse will be a "softer soft parse" that reduces the cost of parsing.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2588723819082#2595492927824

Yes, I found.

AlexGru, May 28, 2008 - 8:52 am UTC

Hello Mr. Tom.
Thanks to you and everyone who responded.
Yes, I found a way to process a ref cursor in PHP.
But for this project I need to stay as-is. Of course in the next project I will start with functions that return ref cursors.
Yes, right now there is a lot of code - not only PL/SQL but PHP as well.

Thanks.

What happens internally + use of pipelined functions on Ruby on Rails

Johann Tagle, December 08, 2008 - 11:09 am UTC

Hi Tom,

What happens internally when you use pipelined functions? They seem to be the only way to use stored procedures on Ruby on Rails ( http://wiki.rubyonrails.org/rails/pages/OracleStoredProceduresAsDataSource ). I'm wondering because there's a popular RoR plugin called will_paginate which does a lot of pagination for you if you just give it an SQL query. If you give it something like:

select col1, col2, col3 from table(function(param1,param2))

it will automatically form:

select count(*) from (select col1, col2, col3 from table(function(param1,param2)))

to count the results. I know you frown on applications that count the number of results but bear with me for a moment - does the above result in the function needlessly retrieving data for col1, col2, etc when all that is needed is the number of results or is there (hopefully) some sophistication/intelligence in Oracle that prevents this from happening?

The RoR plugin also generates something like the following to get, for example, results 16-30:

select * from (select raw_sql_.*, rownum raw_rnum_ from (select col1, col2, col3 from table(function(param1,param2))) raw_sql_ where rownum <= 30) where raw_rnum_ > 15

Again, does this mean the function will still read all the data for results 1-15 first? Am I better off not using the plugin?

Thank you very much.

Johann
Tom Kyte
December 09, 2008 - 12:22 pm UTC

that is NOT the only way - that is a way to use a stored procedure AS IF IT WERE A TABLE - a datasource.


Now to your question. That will run the function, the function must produce output (to be counted). This output will be object types - records in short. You will get the records and then count them.


And yes, every time you call the function - the function will run and send results back to the client. if you get rows 16-30, rows 1-15 will be generated - FILTERED OUT (never sent back to client) and only rows 16-30 will escape from the server - but rows 1-15 will necessarily be generated.

Paginating output of pipelined functions = inefficient?

Johann Tagle, December 12, 2008 - 3:40 am UTC

Thanks for response Tom.

So the function will still read all data to form records 1-15, though they won't be sent back to the client. It looks like this will really be bad if say, I get rows 901-915 -- the function will still read all the data to form records 1-900 only to be filtered out. Is there any way to show statistics to demonstrate how bad this can get?

It sure looks like I'm better off feeding the plugin with SQL equivalent to the output of the stored function so that instead of it generating

select * from
(select raw_sql_.*, rownum raw_rnum_ from
(select col1, col2, col3 from
table(function(param1,param2))) raw_sql_
where rownum <= 915)
where raw_rnum_ > 900

it generates:

select * from
(select raw_sql_.*, rownum raw_rnum_ from
(select col1, col2, col3 from
<place logic equivalent to stored function here>) raw_sql_
where rownum <= 915)
where raw_rnum_ > 900

This does mean Oracle won't have to read all that data for rows 1-900 in the non-stored function version right?

Or maybe I should drop using that plugin altogether and write my own. Because not using a stored function means I have to move the logic that dynamically forms the SQL based on the available parameters (following http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1288401763279 ) to the RoR code. I'm thinking a stored function that includes what M to N records to retrieve in the parameters would be better.

Your thoughts on this would be greatly appreciated. Thanks!

Johann
Tom Kyte
December 12, 2008 - 7:46 am UTC

end users will never go to rows 900-910.

Look at google, search for Oracle - try to go to page 100, let us know if you ever get there!

You will never really get to row 900 - and if you do - let them wait, no one would have the patience to hit page forward that much.

Well if somebody does, I need to be ready

Johann Tagle, December 12, 2008 - 6:56 pm UTC

Hi Tom,

Of course, I don't expect sane users to click "next page" some 600 times to get to the 901st row. But a web crawler might. And even if I'm just at page 2, the pipelined function will still needlessly read the data for all the records of page 1. What if the site gets popular enough and there are 1000 users at a time going to page 2? It looks like it's going to be wasteful, and if so I need to be able to show how problematic it would be. I'm just concerned about the possible resource hog and the effect on the performance of the application for other users when such things happen.

Or am I worrying about optimization too much -- is it acceptable/practice to just "make room" in the hardware for such scenarios?

Thanks

Johann
Tom Kyte
December 13, 2008 - 7:27 am UTC

but on page two - the fact you read page one isn't really relevant, not important, noise. It is only page 100 that becomes problematic (somewhere between 10ish and 100).

Unless you run the query and save the results in a table (massive overhead in itself!) you are going to encounter this (but then you have to read over the first part of the table each time too...)

@Johann Tagle re: pagination

Stew Ashton, December 14, 2008 - 8:41 am UTC


Google stops at 1000 "results", no matter how many results per page you want. http://www.google.fr/search?hl=en&q=tom+kyte&start=1000&sa=N returns "Sorry, Google does not serve more than 1000 results for any query."

If you want to tune, I like your idea of "a stored function that includes what M to N records to retrieve in the parameters". That way you can tune the underlying SQL and limit the last possible row to 1000 or whatever number you want.

If you think the underlying SQL needs to be tuned, use an index to satisfy the WHERE and the ORDER BY clauses (you do need to ORDER the rows before pulling out the lines you want). Look here (read down a little to see where Tom talks about indexes) : http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:76812348057
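A sketch of that parameterized M-to-N idea (all names hypothetical), combining the ref cursor approach with the classic ROWNUM pagination and a hard upper cap:

```sql
create or replace function get_page( p_from in number, p_to in number )
return sys_refcursor
as
  c sys_refcursor;
begin
  open c for
    select *
      from ( select raw_sql_.*, rownum rnum
               from ( select col1, col2
                        from some_table
                       order by col1 ) raw_sql_       -- order, then number
              where rownum <= least( p_to, 1000 ) )   -- cap, a la Google
     where rnum >= p_from;
  return c;
end;
/
```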

Thanks

Johann Tagle, December 16, 2008 - 12:30 am UTC

Thanks for all the suggestions, Tom and Stew!

Performance of Pipe row to Bulk collect

pranav, March 12, 2009 - 11:28 am UTC

Hi,

Which one will be faster -

1) FOR vrt_row IN ( SELECT * FROM tbl )
LOOP
PIPE ROW (vrt_row);
END LOOP;

OR

1) Select * bulk collect into vrt_row from tbl;

I'm trying to keep either of them in a function and then use it as table function (select * from function) so based on the performance i can use it.

Can you please tell which one will be faster and why?

Thanks
Pranav.
Tom Kyte
March 12, 2009 - 1:05 pm UTC

in 10g and above:

for x in (select ... )

will implicitly array fetch 100 rows at a time - you do not need to bulk collect yourself for this application.


So, in this example, stick with:

FOR vrt_row IN ( SELECT * FROM tbl )
LOOP
PIPE ROW (vrt_row);
END LOOP;


but - ask yourself - for something so simple - do I really want a pipelined function because of COURSE the fastest would be

select * from tbl;

without the plsql code at all

Performance of PIPE to Bulk collect to Views

pranav, March 12, 2009 - 12:06 pm UTC

Hi,

I'm just rephrasing my question -

Purpose - To select data from a Table (no transformation logic is needed)

Which one will be faster -

1)
vrt_row tbl%rowtype;
begin
FOR vrt_row IN ( SELECT * FROM tbl )
LOOP
PIPE ROW (vrt_row);
END LOOP;
end;

OR

1)
vrt_row tbl%rowtype;
Begin
Select * bulk collect into vrt_row from tbl;
End;

OR

1) Creation of View - Select * from tbl;
And let users use the view rather than the Table functions (no creation of object is required here)

Can you please tell which one will be faster and why?

Appreciate your time.

Thanks
Pranav.

Tom Kyte
March 12, 2009 - 1:10 pm UTC

you labeled them all (1)

pretend you labeled them (1), (2), (3).


Why not let the users query the table? Why (3)? Why not just let them at the table.

I would not consider a pipelined function here at all - nothing to be gained from it, only a negative performance hit.


Either
a) query table
b) query simple view of table (which is just like (a) - query table)

Performance of PIPE to Bulk collect to Views

pranav, March 12, 2009 - 1:15 pm UTC

Thanks for the quick response.

I don't want the Java app to include table names, but views will work. So I guess, compared to using table functions or bulk collect, if it's just a single table we can use views.

How about if the app is expecting an object in return? Which will be better (pipe row or bulk collect, assuming no transformation logic here, just an object to be filled and returned)?

Thanks again
Pranav.
Tom Kyte
March 12, 2009 - 1:51 pm UTC

How about this - a stored procedure returning REF CURSORS - that would be my option for a java program.

You can put well crafted sql into a stored procedure and return it using a ref cursor

create or replace procedure p( ...inputs....., p_cursor in out sys_refcursor )
as
begin
  open p_cursor for
  select .......;
end;


and if they wanted an object, you just create a procedure that returns that.


No pipelined functions
No extra procedural logic

just return a ref cursor - I like that even better than a view.
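And if the application really does expect an object rather than a result set, the same "no procedural logic" idea carries over - a sketch against the thread's familiar EMP table (emp_obj is a made-up type):

```sql
create or replace type emp_obj as object
( empno number(4), ename varchar2(10) );
/
create or replace procedure get_emp
( p_empno in number, p_emp out emp_obj )
as
begin
  -- construct the object right in the query: no loops, no piping
  select emp_obj( empno, ename )
    into p_emp
    from emp
   where empno = p_empno;
end;
/
```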


RE: " Under what circumstances using a pipelined function can be effective"

Duke Ganote, March 12, 2009 - 4:06 pm UTC

Excellent original question. (So far) I've only found one use for pipelined functions.

My ODBC-driver for DB2 seems to be limited to strings of no more than 255 characters. But some of the fields I needed were longer than 255 characters.

To pull a 2000 character-wide field through ODBC, I had to ask the source DBMS (DB2, in my case) to split the field into parts, then ask Oracle to reassemble it into one field. To make DB2 do its part, I had to invoke DBMS_HS_PASSTHROUGH procedures. Pipelined functions let me treat the results like a standard query.

http://it.toolbox.com/blogs/data-ruminations/stick-odbc-in-your-pipelined-function-and-view-it-27371

Can you do it WITHOUT creating a physical datatype ?

A reader, April 24, 2009 - 5:11 pm UTC

In all of the above examples, it seems that you HAD to create a physical SQL type in the database.

In 10g, three years after the initial question, is it possible to do this in pure PL/SQL? Or are we still struggling?

Tom Kyte
April 27, 2009 - 1:27 pm UTC

PL/SQL is "greater than" SQL. SQL is "below" PL/SQL. PL/SQL can see and use SQL datatypes. SQL can see only SQL datatypes. (PL/SQL has datatypes that SQL just cannot 'see'; there is no corresponding type on the SQL side.)

Change PLSQL in the above to JAVA (java stored procedures)
Change JAVA to C or C++ (external procedures)
Change C or C++ to .net


Creating the type is just like creating a view - it is simply stored metadata that makes it possible for the SQL engine to understand what is going on.


I've never struggled with this, well, except for trying to understand what is difficult about exposing a type in SQL that is.
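A minimal sketch of that visibility rule (numlist is a made-up name; behavior as of the releases discussed in this thread):

```sql
-- A type declared inside PL/SQL is invisible to the SQL engine:
declare
  type plsql_tab is table of number;  -- PL/SQL-only type
  v plsql_tab := plsql_tab( 1, 2, 3 );
begin
  null;  -- "select * from table(v)" would not compile here
end;
/

-- A SQL type is stored metadata - like a view - so SQL can use it:
create or replace type numlist as table of number;
/
select column_value from table( numlist(1, 2, 3) );
```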

Pipelined functions

chandini paterson, August 16, 2009 - 7:31 am UTC

Tom, Thanks for a wonderful site as always.
I have written a pipelined function to return rows after some processing. I have a bit of code that needs to be repeated so I thought I would put it in a local sub-program to do this. However I can't get the function to compile now.

Basically I am doing something like
Create or replace function xyz
return blah_type pipelined
as
begin
---some local variables

For rec in (select * from dual) Loop
If <some condn satisfied> then
do_something;
-- do some processing here
pipe row (blah_type);
end if;
end loop;

do_something;

end xyz;




The do_something part is repeated in two sections of the report. And it includes a call to pipe row as well, since I need to put out some summary data depending on some conditions. Ideally I'd like to put the whole do_something bit into a sub-program within the function but PLSQL doesn't like it. I tried creating a local function that is also pipelined, but it didn't like any way I tried to call it.

Any ideas how I would do this? Or if this is even possible. Do I just get on and repeat my code twice?

Thank you


Tom Kyte
August 24, 2009 - 8:06 am UTC

you would need an actual example demonstrating your actual issue.

but do_something cannot call 'pipe row', it is not a pipelined function (and if it were, it could not be called)

do_something should return, as a formal parameter, the data that needs to be piped, and this function would pipe it.
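In other words, something along these lines - a self-contained sketch of the "helper builds the row, the pipelined function pipes it" pattern (all names made up):

```sql
create or replace type blah_type as object ( n number, label varchar2(20) );
/
create or replace type blah_tab as table of blah_type;
/
create or replace function xyz return blah_tab pipelined
as
  l_row blah_type;

  -- the helper cannot contain PIPE ROW; it hands the row back instead
  procedure do_something( p_n in number, p_row out blah_type ) is
  begin
    p_row := blah_type( p_n, 'detail' );
  end;
begin
  for i in 1 .. 3 loop
    do_something( i, l_row );
    pipe row( l_row );          -- only the pipelined body pipes
  end loop;
  do_something( 0, l_row );     -- reused for the summary row too
  pipe row( l_row );
  return;
end xyz;
/
```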

PIPELINED Vs NON-PIPELINED

Rajeshwaran, Jeyabal, December 18, 2009 - 6:44 am UTC

CREATE OR REPLACE TYPE mytabletyp IS TABLE OF NUMBER;

CREATE OR REPLACE FUNCTION non_pipe_fnc(p_limit IN NUMBER)
RETURN mytabletyp
AS
 mycol mytabletyp := mytabletyp();
BEGIN
 FOR i IN 1..p_limit
 LOOP
  mycol.extend;
  mycol(mycol.count) := i;
 END LOOP;
 RETURN mycol; 
END non_pipe_fnc;

CREATE OR REPLACE FUNCTION pipe_func (p_limit IN NUMBER) 
RETURN mytabletyp PIPELINED
AS
BEGIN
 FOR i IN 1..p_limit
 LOOP
  pipe row(i);
 END LOOP;
 
 RETURN;
END pipe_func; 

********************************************************************************

SELECT * 
FROM
 TABLE(non_pipe_fnc(100000))


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0         36          0           0
Execute      1      0.06       0.07          0          0          0           0
Fetch      668      0.01       0.06          0          0          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      670      0.07       0.14          0         36          0      100000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 100  

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     668        0.00          0.00
  SQL*Net message from client                   668        0.44         19.01
********************************************************************************


********************************************************************************

SELECT * 
FROM
 TABLE(pipe_func(100000))


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0         36          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      668      0.09       0.12          0          0          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      670      0.09       0.13          0         36          0      100000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 100  

Rows     Row Source Operation
-------  ---------------------------------------------------
 100000  COLLECTION ITERATOR PICKLER FETCH PIPE_FUNC (cr=0 pr=0 pw=0 time=100092 us)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     668        0.00          0.00
  SQL*Net message from client                   668        0.06         10.82
********************************************************************************


Tom,
<Quote from Oracle product documentation>
With non-pipelined table functions, the entire collection returned by a table function must be constructed and returned to the server before the query can return a single result row.
Pipelining enables rows to be returned iteratively, as they are produced.
This also reduces the memory that a table function requires, as the object cache does not need to materialize the entire collection.
</quote>

Questions.
1) I could not see any major difference in terms of IO or elapsed/CPU time between a non-pipelined table function and a pipelined table function.
2) Why is the execution plan missing in the TKPROF output for the non-pipelined table function?
3) How can I compare the non-pipelined table function and the pipelined table function in terms of performance?

Please correct me if am doing wrong.

Thank You,
Rajesh.
Tom Kyte
December 18, 2009 - 12:40 pm UTC

your pipelined function is rather trivial. but I can guess which one consumed a ton of RAM on the server (can you?).

In a real function, you would typically doing work - running a query, performing some ETL. a pipelined function would allow the client to get the first row WAY before the last row was returned. For example, what if the function were more like:


for x in ( select from a complex_select )
loop
do_a_process_against_record( x );
pipe row( .... );
end loop;
return

versus

for x in ( select from a complex_select )
loop
do_a_process_against_record( x );
array(array.count+1) := x;
end loop;
return array;


the pipe row call allows the record to go back to the client IMMEDIATELY, without the procedure finishing (the procedure suspends). The plsql routine doesn't have to build up an array of thousands or hundreds of thousands (or more) elements either.


for the non-pipelined function missing the plan, I'd say that cursor was not closed. Before 11g, the stat records are only put into the trace upon a cursor getting closed.


do this

query1
query2
EXIT; <<<== exit sqlplus
then tkprof the trace file




In general, use a pipelined function - you don't want to wait for the last row to be produced before you can start processing it in the client (hence the name, PIPE - like a unix pipe).


Another Example.

Rajeshwaran, Jeyabal, December 18, 2009 - 11:10 am UTC

CREATE TYPE all_obj_type IS OBJECT
(
  OWNER           VARCHAR2(30),
  OBJECT_NAME     VARCHAR2(30),
  SUBOBJECT_NAME VARCHAR2(30),
  OBJECT_ID       NUMBER,
  DATA_OBJECT_ID NUMBER,
  OBJECT_TYPE     VARCHAR2(19),
  CREATED         DATE,
  LAST_DDL_TIME   DATE
);  

CREATE OR REPLACE TYPE all_obj_tt IS TABLE OF all_obj_type;

CREATE OR REPLACE FUNCTION non_pipe_fnc 
RETURN all_obj_tt IS
 all_obj  all_obj_tt := all_obj_tt();
BEGIN
 FOR r IN (SELECT * FROM ALL_OBJECTS)
 LOOP
  all_obj.EXTEND;     
  all_obj(all_obj.COUNT) := all_obj_type(r.OWNER,r.OBJECT_NAME,r.SUBOBJECT_NAME,r.OBJECT_ID,r.DATA_OBJECT_ID,r.OBJECT_TYPE,r.CREATED,r.LAST_DDL_TIME); 
 END LOOP;
 RETURN all_obj;
END non_pipe_fnc;

CREATE OR REPLACE FUNCTION pipe_fnc
RETURN all_obj_tt
PIPELINED IS 
BEGIN
 FOR r IN (SELECT * FROM ALL_OBJECTS)
 LOOP
  PIPE ROW(all_obj_type(r.OWNER,r.OBJECT_NAME,r.SUBOBJECT_NAME,r.OBJECT_ID,r.DATA_OBJECT_ID,r.OBJECT_TYPE,r.CREATED,r.LAST_DDL_TIME));
 END LOOP;
 RETURN;
END pipe_fnc; 

********************************************************************************

select * 
from
 table (non_pipe_fnc)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0        153          0           0
Execute      1      0.34       0.32          0          0          0           0
Fetch      274      0.31       0.19          0          0          0       40876
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      276      0.67       0.52          0        153          0       40876

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 54  

Rows     Row Source Operation
-------  ---------------------------------------------------
  40876  COLLECTION ITERATOR PICKLER FETCH NON_PIPE_FNC (cr=85648 pr=0 pw=0 time=4246240 us)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     274        0.00          0.00
  SQL*Net message from client                   274        0.73         11.51
  SQL*Net more data to client                   627        0.00          0.00
********************************************************************************

SELECT * 
FROM
 ALL_OBJECTS


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      409      0.78       0.81          0      68150          0       40876
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      411      0.78       0.81          0      68150          0       40876

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 54     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS BY INDEX ROWID SUM$ (cr=0 pr=0 pw=0 time=0 us)
      0   INDEX UNIQUE SCAN I_SUM$_1 (cr=0 pr=0 pw=0 time=0 us)(object id 398)
  40876  FILTER  (cr=85623 pr=0 pw=0 time=496535 us)
  53184   HASH JOIN  (cr=1058 pr=0 pw=0 time=213592 us)
     68    TABLE ACCESS FULL USER$ (cr=6 pr=0 pw=0 time=105 us)
  53184    TABLE ACCESS FULL OBJ$ (cr=1052 pr=0 pw=0 time=53490 us)
   2125   TABLE ACCESS BY INDEX ROWID IND$ (cr=5510 pr=0 pw=0 time=25080 us)
   2730    INDEX UNIQUE SCAN I_IND1 (cr=2773 pr=0 pw=0 time=10165 us)(object id 39)
  20766   NESTED LOOPS  (cr=61582 pr=0 pw=0 time=332449 us)
  23154    INDEX RANGE SCAN I_OBJAUTH1 (cr=61582 pr=0 pw=0 time=165445 us)(object id 103)
  20766    FIXED TABLE FULL X$KZSRO (cr=0 pr=0 pw=0 time=87148 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=18 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=8 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=10 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=10 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=11 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=9 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=10 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=20 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=0 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=0 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=11 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=16 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=11 us)
     26   VIEW  (cr=0 pr=0 pw=0 time=165 us)
     26    FAST DUAL  (cr=0 pr=0 pw=0 time=65 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=10 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=17 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=9 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=12 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=18 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=9 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=13 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=0 us)

--------------------------------------------------------------------------------

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          3          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          3          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 54  

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1       15.39         15.39
********************************************************************************

select * 
from
 table (pipe_fnc)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0         96          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      274      2.85       3.70          0      82388          0       40876
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      276      2.87       3.70          0      82484          0       40876

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 54  

Rows     Row Source Operation
-------  ---------------------------------------------------
  40876  COLLECTION ITERATOR PICKLER FETCH PIPE_FNC (cr=85644 pr=0 pw=0 time=2925569 us)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     274        0.00          0.00
  SQL*Net message from client                   274        0.62         44.10
  SQL*Net more data to client                   627        0.00          0.00
********************************************************************************

SELECT * 
FROM
 ALL_OBJECTS


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      409      0.82       0.83          0      68150          0       40876
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      411      0.82       0.83          0      68150          0       40876

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 54     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS BY INDEX ROWID SUM$ (cr=0 pr=0 pw=0 time=0 us)
      0   INDEX UNIQUE SCAN I_SUM$_1 (cr=0 pr=0 pw=0 time=0 us)(object id 398)
  40876  FILTER  (cr=85620 pr=0 pw=0 time=578217 us)
  53184   HASH JOIN  (cr=1058 pr=0 pw=0 time=213788 us)
     68    TABLE ACCESS FULL USER$ (cr=6 pr=0 pw=0 time=106 us)
  53184    TABLE ACCESS FULL OBJ$ (cr=1052 pr=0 pw=0 time=53495 us)
   2125   TABLE ACCESS BY INDEX ROWID IND$ (cr=5510 pr=0 pw=0 time=26170 us)
   2730    INDEX UNIQUE SCAN I_IND1 (cr=2773 pr=0 pw=0 time=10540 us)(object id 39)
  20766   NESTED LOOPS  (cr=61582 pr=0 pw=0 time=338453 us)
  23154    INDEX RANGE SCAN I_OBJAUTH1 (cr=61582 pr=0 pw=0 time=171441 us)(object id 103)
  20766    FIXED TABLE FULL X$KZSRO (cr=0 pr=0 pw=0 time=89493 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=18 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=9 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=11 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=11 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=9 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=12 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=18 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=19 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=0 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=0 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=22 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=17 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=14 us)
     26   VIEW  (cr=0 pr=0 pw=0 time=179 us)
     26    FAST DUAL  (cr=0 pr=0 pw=0 time=61 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=20 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=17 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=9 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=17 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=22 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=9 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=15 us)
      0   FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=0 us)

--------------------------------------------------------------------------------

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          3          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          3          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 54  

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1       10.17         10.17
********************************************************************************

Tom,
The pipelined function took a total of 3.70 secs, while the non-pipelined function took 0.52 sec.
Is the delay in the pipelined function due to "pipelining enables rows to be returned iteratively"?

Thanks,
Rajesh.

Tom Kyte
December 18, 2009 - 1:00 pm UTC

I'd rather use runstats
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

for this, since we are going to be measuring stuff inside of other stuff (plsql calling sql)

also, your second routine - why the array? you didn't want that there, that just adds to the runtime.


So, fixing your code:


ops$tkyte%ORA11GR2> CREATE OR REPLACE FUNCTION non_pipe_fnc
  2  RETURN all_obj_tt IS
  3      all_obj  all_obj_tt := all_obj_tt();
  4  BEGIN
  5      FOR r IN (SELECT * FROM ALL_OBJECTS)
  6      LOOP
  7          all_obj.EXTEND;
  8          all_obj(all_obj.COUNT) := all_obj_type(r.OWNER,r.OBJECT_NAME,r.SUBOBJECT_NAME,r.OBJECT_ID,r.DATA_OBJECT_ID ,r.OBJECT_TYPE,r.CREATED,r.LAST_DDL_TIME);
  9      END LOOP;
 10      RETURN all_obj;
 11  END non_pipe_fnc;
 12  /
Function created.

ops$tkyte%ORA11GR2> CREATE OR REPLACE FUNCTION pipe_fnc
  2  RETURN all_obj_tt
  3  PIPELINED IS
  4  BEGIN
  5      FOR r IN (SELECT * FROM ALL_OBJECTS)
  6      LOOP
  7          PIPE ROW(all_obj_type(r.OWNER,r.OBJECT_NAME,r.SUBOBJECT_NAME,r.OBJECT_ID,r.DATA_OBJECT_ID,r.OBJECT_TYPE,r.CREATED,r.LAST_DDL_TIME));
  8      END LOOP;
  9      RETURN;
 10  END pipe_fnc;
 11  /
Function created.

ops$tkyte%ORA11GR2> exec runStats_pkg.rs_start;
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly
ops$tkyte%ORA11GR2> select * from table(NON_PIPE_FNC);
66348 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1517095174

--------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |              |  8168 | 16336 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| NON_PIPE_FNC |  8168 | 16336 |    29   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        803  recursive calls
          0  db block gets
      47187  consistent gets
          0  physical reads
          0  redo size
    3078130  bytes sent via SQL*Net to client
      49072  bytes received via SQL*Net from client
       4425  SQL*Net roundtrips to/from client
       1491  sorts (memory)
          0  sorts (disk)
      66348  rows processed

ops$tkyte%ORA11GR2> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select * from table(PIPE_FNC);

66348 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 2394767287

----------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |  8168 | 16336 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| PIPE_FNC |  8168 | 16336 |    29   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        768  recursive calls
          0  db block gets
      47142  consistent gets
          0  physical reads
          0  redo size
    3078130  bytes sent via SQL*Net to client
      49072  bytes received via SQL*Net from client
       4425  SQL*Net roundtrips to/from client
       1490  sorts (memory)
          0  sorts (disk)
      66348  rows processed

ops$tkyte%ORA11GR2> exec runStats_pkg.rs_stop;
Run1 ran in 1174 cpu hsecs
Run2 ran in 1139 cpu hsecs
run 1 ran in 103.07% of the time

Name                                  Run1        Run2        Diff
...
LATCH.JS slv state obj latch        43,452           1     -43,451
STAT...session uga memory max      196,560           0    -196,560
STAT...session pga memory          327,680           0    -327,680

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
381,721     336,382     -45,339    113.48%

PL/SQL procedure successfully completed.



so, the pipelined function IN TOTAL took a little less cpu than the non-pipelined IN TOTAL. and used a little less latching - lots less memory.



A reader, December 18, 2009 - 1:11 pm UTC

Hello Tom,

Nice example.

Why is the latch count higher for the non-pipelined function?

Isn't the array in the PGA? Why does Oracle need to latch?

Thanks




Tom Kyte
December 18, 2009 - 1:19 pm UTC

it'll go up and down a little (the latching) based on the release and version of the database. The latching seems to be related to the array access. It might be populated in your UGA, but it goes out to the greater world when you return it.

A reader, December 18, 2009 - 1:30 pm UTC

Thanks for your reply


.....It might be populated in your UGA, but it goes out to the greater world when you return it.


Could you please elaborate more on your above statement?


Many thanks

Tom Kyte
December 18, 2009 - 1:46 pm UTC

turns out - that latch has nothing to do with this.

We can only see latching for the entire database - it is coming from a background process, not related to this example. They latch about the same.

Functions to use in the recursive subquery factoring

Sven Ollofson, January 24, 2010 - 3:49 pm UTC

Hi,

I have hit a wall with something I am trying out. I have a recursive query using 11g R2's new recursive subquery factoring clause, but there are restrictions: you cannot use DISTINCT or GROUP BY, or reference the query_name in a subquery. Keeping all that in mind, I have a function that returns the child nodes to traverse as a string, e.g. 1,4,6.
Ex: get_child_nodes( ParentNode )

Is there a way I can write the SQL of the recursive block such that it doesn't use the query in a subquery?

I was thinking something along the lines of

select get_child_nodes( a.Node )
from query_name a

But I would need something in the SELECT clause that can split the string and, perhaps using CAST and TABLE, return the next child nodes as rows. If I used get_child_nodes and query_name in the FROM clause, it would violate the recursive subquery factoring restrictions.

If this is not possible, can you please suggest a nice way to implement recursion (SQL or PL/SQL... pref SQL :) ) when the recursive block that decides on the child nodes can only be driven by a complex query involving GROUP BYs, inline views, analytic functions, etc.?
Or can a GROUP BY/HAVING clause be completely rewritten using analytic functions? I notice that PARTITION BY in an OVER () clause doesn't suppress duplicates the way GROUP BY does.

Your advice would be most appreciated.


Regards,
Sven

Sangeetha Ganeshrao, June 18, 2012 - 11:26 pm UTC

Hi Tom, 

I have taken this part of the text from the above link, as I am facing the same issue. As per your suggestion below, I used the parttab type instead of the part type.

SQL Query that I executed:
select * from table(cast(get_parts('BRYSONS','SALES')as parttab));

It is returning the below error:
ORA-00932: inconsistent datatypes: expected - got -
ORA-06512: at "QBANAPPS.GET_PARTS", line 25
00932. 00000 -  "inconsistent datatypes: expected %s got %s"
*Cause:    
*Action:

Kindly provide your suggestion on this.
Thanks in advance.

-------Part taken from the above link-----------

Thanks for your help with this. 

The cast doesn't help. I even tried it inside the function, though I knew that wouldn't help either. 

SQL> select * from table(cast(get_parts('BRYSONS','SALES') as part));
select * from table(cast(get_parts('BRYSONS','SALES') as part))
                         *
ERROR at line 1:
ORA-22907: invalid CAST to a type that is not a nested table or VARRAY


Elapsed: 00:00:00.07
SQL> show parameter cursor  

NAME                                 | TYPE        | VALUE
------------------------------------ | ----------- | ------------------------------
cursor_sharing                       | string      | EXACT
cursor_space_for_time                | boolean     | FALSE
open_cursors                         | integer     | 300
session_cached_cursors               | integer     | 20
SQL> 





Followup   April 18, 2007 - 9am Central time zone:

oh, the function returns parttab, a table of parts, use the parttab type. 


Tom Kyte
June 19, 2012 - 8:37 am UTC

give your full example, the code is blowing up in YOUR code, not in the sql statement itself.

we'd need to see what line 25 is for you.

I want YOUR concise, yet 100% complete test case here. Please do not just refer above, I don't know what you might have changed.

make it as SMALL AS POSSIBLE.

(strange?) pipelined function behaviour

Daniel, November 09, 2012 - 3:20 am UTC

Hi Tom,
first of all, many thanks for running this forum. It has provided many answers and helped me a lot many times.
Now, to my point:
I observe strange behavior (at least to me) regarding a pipelined function. I have a package as follows:
CREATE OR REPLACE PACKAGE BODY pck_load_empdata
IS
  FUNCTION fnc_parse_xml(ax_xml_csv IN XMLTYPE) RETURN bsp.t_ntt_stg_employee
  PIPELINED IS
  BEGIN
    --parse the XML/CSV
    --bsp.t_ntt_stg_employee is TABLE OF TYPE that corresponds to bsp.stg_employee table structure
  END;    
    
  PROCEDURE prc_populate_stg (ax_csv IN xmltype)
  IS
    PRAGMA AUTONOMOUS_TRANSACTION;
  BEGIN
    INSERT INTO bsp.stg_employee
    SELECT *
      FROM TABLE(fnc_parse_xml(ax_csv));
    COMMIT;
  END;
  
  PROCEDURE prc_merge_emps
  IS
  BEGIN
    --do something with the loaded data and data in DB
  END;
  
  PROCEDURE prc_load_emp_batch(ax_msg_in IN xmltype, ax_msg_out OUT xmltype)
  IS
  BEGIN
    prc_populate_stg(ax_msg_in);
    prc_merge_emps;
  END;
END;
When debugging, I observe that the merge procedure is skipped by the debugger, i.e. it doesn't enter it, nor does it enter the COMMIT line. It simply jumps directly to the end of the package.
When, however, prc_populate_stg does a direct-path (APPEND) insert, it does enter the COMMIT line and executes the merge procedure as expected.
Logging is always turned on for the table bsp.stg_employee.
This seems to resolve the issue, but I'd like to understand why it happens.
Do pipelined table functions always need a direct-path insert, and I just overlooked that in the docs?
Am I doing something wrong, or misunderstanding pipelined functions? Or is this correct behavior, or... any other conclusion?

Original code runs at Oracle 10.2.0.5, AIX

Thank you very much in advance.
Tom Kyte
November 09, 2012 - 6:35 am UTC

pipelined functions do not need an append, no.


sounds like an issue with the plsql debugging mechanism - suggest you utilize support for that one.

Daniel, November 09, 2012 - 7:00 am UTC

Hi,

thank you for the quick response. Later, once they are available to me, I'll try other versions and platforms to see if it behaves the same way.

Mohamed Houri, November 09, 2012 - 7:56 am UTC

@Daniel from Czech Republic

First, could you please give your Oracle release? Very often it matters in such cases.

Second, have you shown the exact code you are using here? In particular, do you have exception blocks in your code that you didn't show?

Third, is your pipelined function selecting from the table bsp.stg_employee?
If so, then please pay attention to the situation I presented in the following article:

http://hourim.wordpress.com/2012/02/09/materialized-hint-and-ora-04091-error/

pipelined table function issue

Daniel, November 09, 2012 - 8:29 am UTC

@Mohamed Houri

Hi Mohamed,
First, the Oracle release I'm using was mentioned in my original post: 10.2.0.5 on IBM/AIX.
Second, I can't show the exact code, and at this processing stage there are no exception blocks.
Third, no, my pipelined function is NOT selecting from the table it is inserting into. I'm aware of the mutating table issue in such a case; the function just parses the given XML/CSV and pipes out objects populated with the CSV fields.

As I already said, I'll try to test the same (or very similar) code on a different Oracle version/platform. It might be a version/platform-specific issue.

let me understand

venkat, January 16, 2013 - 2:58 am UTC

At the beginning of this post you mentioned the following example; I understood the entire example except a few lines.

I didn't understand the part I highlighted in bold. What does it mean? Insert data into emp_scalar_type???
<b>pipe row( emp_scalar_type</b>( l_rec.empno,
 17                            LOWER(l_rec.ename),
 18                            l_rec.job, l_rec.mgr, l_rec.hiredate, l_rec.sal,
 19                                    l_rec.comm, l_rec.deptno ) );

And why did you mention TABLE here?

 select empno, ename
  2    <b>from TABLE</b>(emp_etl(cursor( select * from external_table ) ) )
  3   where ename like '%a%';

Tom Kyte
January 16, 2013 - 12:00 pm UTC

they are pipelined functions, you can read about them here:


http://www.oracle.com/pls/db112/search?word=pipelined+&partno=



using the TABLE clause allows us to treat a plsql function in this case as though it was a row source, a table in effect.

and the plsql function returns a collection of object types (in my example, the object type was something I named EMP_SCALAR_TYPE). You return these using the PIPE ROW builtin in PLSQL
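
Putting those two pieces together, a minimal, self-contained sketch (the type and function names here are illustrative, not from the original example):

```sql
create or replace type num_tab as table of number;
/
create or replace function gen_nums( p_cnt in number )
return num_tab
PIPELINED
as
begin
    for i in 1 .. p_cnt
    loop
        -- PIPE ROW hands each row back to the caller as it is produced,
        -- instead of building the whole collection in memory first
        pipe row( i );
    end loop;
    return;
end;
/
-- the TABLE clause lets SQL treat the function as a row source
select * from TABLE( gen_nums(5) );
```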

A reader, July 20, 2013 - 1:53 am UTC

In your original example (at the very beginning), you mentioned that you could "log bad rows elsewhere". Can you have an insert statement in a function that gets called in a SQL statement? Wouldn't you get this error?

ORA-14551: cannot perform a DML operation inside a query


demo@ORA9IR2> create or replace
2 function emp_etl( p_cursor in sys_refcursor )
3 return emp_table_type
4 PIPELINED
5 as
6 l_rec external_table%rowtype;
7 begin
8 loop
9 fetch p_cursor into l_rec;
10 exit when (p_cursor%notfound);
11
12 -- validation routine
13 -- log bad rows elsewhere
14 -- lookup some value
15 -- perform conversion
16 pipe row( emp_scalar_type( l_rec.empno,
17 LOWER(l_rec.ename),
18 l_rec.job, l_rec.mgr, l_rec.hiredate, l_rec.sal,
19 l_rec.comm, l_rec.deptno ) );
20 end loop;
21
22 return;
23 end;
24 /


Tom Kyte
July 31, 2013 - 3:37 pm UTC

you would have to use an autonomous transaction for your logging routine. It would be a separate function, defined with pragma autonomous_transaction, it would insert the logged data and commit.
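
A minimal sketch of such a logging routine (the table and procedure names here are made up for illustration):

```sql
create or replace procedure log_bad_row( p_empno in number, p_msg in varchar2 )
as
    pragma autonomous_transaction;
begin
    insert into etl_bad_rows( empno, err_msg, logged_at )
    values ( p_empno, p_msg, sysdate );
    -- this commit affects only the autonomous transaction,
    -- not the query that invoked the pipelined function
    commit;
end;
/
```

Because the procedure runs in its own transaction, calling it from inside the pipelined function does not raise ORA-14551.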

Joining with table()

A, September 11, 2013 - 12:46 pm UTC

Hello Tom,
Will there be any performance impact on joining a table with another built on a piped-line function using table()?

--Object type
CREATE OR REPLACE TYPE ty_as_data AS OBJECT
(
as_id NUMBER(18),
description VARCHAR2(4000),
rating VARCHAR2(20),
last_admin_date DATE,
exception_date DATE,
rat_type VARCHAR2(20) );

--Table Type
CREATE OR REPLACE TYPE tab_as_data IS TABLE OF ty_as_data;


--Function
create or replace FUNCTION f_as_data
RETURN tab_as_data PIPELINED
AS
BEGIN
FOR i IN (SELECT as_id,
description,
rating,
last_admin_date,
exception_date,
rat_type
FROM table1,
table2,
table3
WHERE <<complex join>>)
LOOP
PIPE ROW(ty_as_data(i.as_id, i.description,
i.rating, i.last_admin_date,
i.exception_date, i.rat_type));
END LOOP;
RETURN;
END f_as_data;


--Usage
SELECT a.as_id, a.description
FROM TABLE(f_as_data()) a,
table100
where a.as_id = table100.as_id;

I know this is a very vague question and without data/indexes you won't be able to answer, but I was curious: is there any performance impact from using the above query?

Table100 - around 10,000+ rows and are indexed.
TABLE(f_as_data()) - returns around 35,000+ rows

Is there a better way of writing the above query?

Yes, the FOR loop in the function will be changed to use BULK COLLECT.
Tom Kyte
September 23, 2013 - 5:50 pm UTC

it would be faster to just join to the query in the pipelined function - skipping the code altogether.

you have shown no reason why that pipelined function should exist
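
In other words, a sketch of the rewrite being suggested, with the function's query folded into the statement itself (<<complex join>> is the placeholder from the original post, standing in for the real predicates):

```sql
SELECT t1.as_id, t1.description
  FROM table1 t1,
       table2,
       table3,
       table100
 WHERE <<complex join>>
   AND t1.as_id = table100.as_id;
```

This gives the optimizer visibility into all the tables at once, instead of forcing it to treat the function's output as an opaque row source.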

Pipeline Function vs directly selecting from table

soumadip.india@gmail.com, September 17, 2014 - 8:37 am UTC

Hi Tom,

I have question on one of the previous example you have mentioned.

We have read about the benefits of pipelined table functions versus normal table functions. But when we have the option of selecting directly from a table, should we still opt for a pipelined table function?

As you have always said, do it in SQL if possible, else use PL/SQL. So, in the below example of selecting data from ALL_OBJECTS, should we use the pipelined function, or should we select directly from ALL_OBJECTS?

CREATE TYPE all_obj_type IS OBJECT
(
OWNER VARCHAR2(30),
OBJECT_NAME VARCHAR2(30),
SUBOBJECT_NAME VARCHAR2(30),
OBJECT_ID NUMBER,
DATA_OBJECT_ID NUMBER,
OBJECT_TYPE VARCHAR2(19),
CREATED DATE,
LAST_DDL_TIME DATE
);
/

CREATE OR REPLACE TYPE all_obj_tt IS TABLE OF all_obj_type;
/

CREATE OR REPLACE FUNCTION pipe_fnc
RETURN all_obj_tt
PIPELINED IS
BEGIN
FOR r IN (SELECT * FROM ALL_OBJECTS)
LOOP
PIPE ROW(all_obj_type(r.OWNER,r.OBJECT_NAME,r.SUBOBJECT_NAME,r.OBJECT_ID,r.DATA_OBJECT_ID,r.OBJECT_TYPE,r.CREATED,r.LAST_DDL_TIME));
END LOOP;
RETURN;
END pipe_fnc;
/

SELECT * FROM table(pipe_fnc);
/

SELECT * FROM ALL_OBJECTS;
/

Regarding a PFT implementation in Oracle 18c

Rajeshwaran Jeyabal, February 22, 2022 - 8:17 am UTC

Team,

we have a polymorphic table function implemented like this, and it works fine:
demo@XEPDB1>  create or replace package pkg_export_to_csv
  2    as
  3       function describe( p_tab in out dbms_tf.table_t )
  4       return dbms_tf.describe_t;
  5
  6       procedure fetch_rows ;
  7    end;
  8    /

Package created.

demo@XEPDB1>  create or replace package body pkg_export_to_csv
  2     as
  3        function describe( p_tab in out dbms_tf.table_t )
  4        return dbms_tf.describe_t
  5        as
  6                l_new_cols dbms_tf.COLUMNS_NEW_T;
  7        begin
  8                for i in 1..p_tab.column.count
  9                loop
 10                                p_tab.column(i).pass_through := false;
 11                                p_tab.column(i).for_read := true;
 12                end loop;
 13
 14                l_new_cols( l_new_cols.count+1 ) := dbms_tf.COLUMN_METADATA_T( name => 'csv_cols' );
 15                return dbms_tf.describe_t( new_columns => l_new_cols );
 16        end;
 17
 18        procedure fetch_rows as
 19                l_env dbms_tf.env_t ;
 20                l_rowset dbms_tf.row_set_t;
 21                l_rowcnt number;
 22                l_colcnt number;
 23                l_piece long;
 24                l_rows number := 0 ;
 25                l_value dbms_tf.tab_varchar2_t ;
 26        begin
 27                l_env := dbms_tf.get_env;
 28                dbms_tf.get_row_set( l_rowset,l_rowcnt,l_colcnt );
 29
 30                for r in 1..l_rowcnt
 31                loop
 32                        dbms_tf.xstore_get('X'||r,l_rows);
 33                        l_piece := null;
 34                        for c in 1..l_colcnt
 35                        loop
 36                                l_piece := l_piece ||','||
 37                                        dbms_tf.col_to_char( l_rowset(c),r );
 38                        end loop;
 39                        l_value(r) := trim(',' from l_piece);
 40                        dbms_tf.xstore_set('X'||r,l_rows+l_rowcnt);
 41                end loop;
 42
 43                if l_colcnt > 0 then
 44                        dbms_tf.put_col(1,l_value);
 45                end if;
 46        end;
 47     end;
 48     /

Package body created.

demo@XEPDB1> create or replace function export_to_csv(
  2    p_tab in table )
  3  return
  4  table pipelined
  5  table polymorphic
  6  using pkg_export_to_csv;
  7  /

Function created.

demo@XEPDB1> select * from export_to_csv(dept);

CSV_COLS
------------------------------
10,"ACCOUNTING","NEW YORK"
20,"RESEARCH","DALLAS"
30,"SALES","CHICAGO"
40,"OPERATIONS","BOSTON"

demo@XEPDB1>


but now we need the column headers displayed at the top, so we modified it like this:

demo@XEPDB1> create or replace package body pkg_export_to_csv
  2  as
  3    g_str long;
  4
  5    function describe( p_tab in out dbms_tf.table_t )
  6    return dbms_tf.describe_t
  7    as
  8               l_new_cols dbms_tf.COLUMNS_NEW_T;
  9    begin
 10               for i in 1..p_tab.column.count
 11               loop
 12                             p_tab.column(i).pass_through := false;
 13                             p_tab.column(i).for_read := true;
 14                             g_str := g_str ||','|| p_tab.column(i).description.name;
 15               end loop;
 16                             g_str := trim(',' from g_str);
 17               l_new_cols( l_new_cols.count+1 ) := dbms_tf.COLUMN_METADATA_T( name => 'csv_cols' );
 18               return dbms_tf.describe_t( new_columns => l_new_cols );
 19    end;
 20
 21    procedure fetch_rows as
 22               l_env dbms_tf.env_t ;
 23               l_rowset dbms_tf.row_set_t;
 24               l_rowcnt number;
 25               l_colcnt number;
 26               l_piece long;
 27               l_rows number := 0 ;
 28               l_value dbms_tf.tab_varchar2_t ;
 29    begin
 30               l_env := dbms_tf.get_env;
 31               dbms_tf.get_row_set( l_rowset,l_rowcnt,l_colcnt );
 32
 33               for r in 1..l_rowcnt
 34               loop
 35                               dbms_tf.xstore_get('X'||r,l_rows);
 36                               l_piece := null;
 37                               for c in 1..l_colcnt
 38                               loop
 39                                               l_piece := l_piece ||','||
 40                                                               dbms_tf.col_to_char( l_rowset(c),r );
 41                               end loop;
 42                               l_value(r) := trim(',' from l_piece);
 43                               dbms_tf.xstore_set('X'||r,l_rows+l_rowcnt);
 44               end loop;
 45                     l_value(0) := g_str;
 46               if l_colcnt > 0 then
 47                               dbms_tf.put_col(1,l_value);
 48               end if;
 49    end;
 50  end;
 51  /

Package body created.

demo@XEPDB1> select * from export_to_csv(dept);

CSV_COLS
------------------------------
10,"ACCOUNTING","NEW YORK"
20,"RESEARCH","DALLAS"
30,"SALES","CHICAGO"
40,"OPERATIONS","BOSTON"

demo@XEPDB1>


but it doesn't help, so we went with this workaround:

demo@XEPDB1> with rws as (
  2  select 'DEPTNO' as deptno,'DNAME' as dname,
  3      'LOC' as loc
  4  from dual
  5  union all
  6  select to_char(deptno),dname,loc from dept )
  7  select * from export_to_csv(rws)
  8  /

CSV_COLS
------------------------------
"DEPTNO","DNAME","LOC"
"10","ACCOUNTING","NEW YORK"
"20","RESEARCH","DALLAS"
"30","SALES","CHICAGO"
"40","OPERATIONS","BOSTON"


Do we have any better option to handle the column headers dynamically in the PTF implementation?
Connor McDonald
February 28, 2022 - 7:19 am UTC

We can synthesize a new row

SQL> create or replace package pkg_export_to_csv
  2    as
  3       function describe( p_tab in out dbms_tf.table_t )
  4       return dbms_tf.describe_t;
  5
  6       procedure fetch_rows ;
  7    end;
  8    /

Package created.

SQL>
SQL> create or replace package body pkg_export_to_csv
  2     as
  3        function describe( p_tab in out dbms_tf.table_t )
  4        return dbms_tf.describe_t
  5        as
  6                l_new_cols dbms_tf.COLUMNS_NEW_T;
  7        begin
  8                for i in 1..p_tab.column.count
  9                loop
 10                                p_tab.column(i).pass_through := false;
 11                                p_tab.column(i).for_read := true;
 12                end loop;
 13
 14                l_new_cols( l_new_cols.count+1 ) := dbms_tf.COLUMN_METADATA_T( name => 'csv_cols' );
 15                return dbms_tf.describe_t( new_columns => l_new_cols, row_replication => true );
 16        end;
 17
 18        procedure fetch_rows as
 19                l_env dbms_tf.env_t ;
 20                l_rowset dbms_tf.row_set_t;
 21                l_rowcnt number;
 22                l_colcnt number;
 23                l_piece long;
 24                l_rows number := 0 ;
 25                l_value dbms_tf.tab_varchar2_t ;
 26
 27                repfac dbms_tf.tab_naturaln_t;
 28                new_r int := 0;
 29        begin
 30                l_env := dbms_tf.get_env;
 31                dbms_tf.get_row_set( l_rowset,l_rowcnt,l_colcnt );
 32
 33                for r in 1..l_rowcnt
 34                loop
 35                   repfac(r) := 1;
 36                   if r = 1 then
 37                     new_r := 1;
 38                     repfac(r) := repfac(r) + 1;
 39                     dbms_tf.xstore_get('X'||r,l_rows);
 40                     l_piece := null;
 41                     for c in 1..l_colcnt
 42                     loop
 43                            l_piece := l_piece ||','||l_env.get_columns(c).name;
 44                     end loop;
 45                     l_value(new_r) := trim(',' from l_piece);
 46                     dbms_tf.xstore_set('X'||new_r,l_rows+l_rowcnt);
 47
 48                     new_r := 2;
 49                     l_piece := null;
 50                     for c in 1..l_colcnt
 51                     loop
 52                            l_piece := l_piece ||','||dbms_tf.col_to_char( l_rowset(c),r );
 53                     end loop;
 54                     l_value(new_r) := trim(',' from l_piece);
 55                     dbms_tf.xstore_set('X'||new_r,l_rows+l_rowcnt);
 56
 57                   else
 58                     new_r := r + 1;
 59                     dbms_tf.xstore_get('X'||r,l_rows);
 60                     l_piece := null;
 61                     for c in 1..l_colcnt
 62                     loop
 63                            l_piece := l_piece ||','||dbms_tf.col_to_char( l_rowset(c),r );
 64                     end loop;
 65                     l_value(new_r) := trim(',' from l_piece);
 66                     dbms_tf.xstore_set('X'||new_r,l_rows+l_rowcnt);
 67                   end if;
 68
 69                end loop;
 70
 71                if l_colcnt > 0 then
 72                        dbms_tf.row_replication(replication_factor => repfac);
 73                        dbms_tf.put_col(1,l_value);
 74                end if;
 75        end;
 76     end;
 77     /

Package body created.

SQL>
SQL> sho err
No errors.
SQL>
SQL>
SQL> create or replace function export_to_csv(
  2    p_tab in table )
  3  return
  4  table pipelined
  5  table polymorphic
  6  using pkg_export_to_csv;
  7  /

Function created.

SQL>
SQL> select * from export_to_csv(dept);

CSV_COLS
----------------------------------------------------------------------------------------------------------------------------------
"DEPTNO","DNAME","LOC"
10,"ACCOUNTING","NEW YORK"
20,"RESEARCH","DALLAS"
30,"SALES","CHICAGO"
40,"OPERATIONS","BOSTON"

5 rows selected.

SQL>


PTF consumes more CPU

Rajeshwaran, Jeyabal, February 28, 2022 - 10:01 am UTC

Team,

Thanks for the above response - we had completely forgotten about the "replication factor" implementation, so thanks for highlighting that.

We got a requirement in our application to unload data in CSV format,
so we thought of developing a polymorphic table function (PTF) utility for this.
While benchmarking (PTF vs manual concatenation) in a lower environment, we found that the PTF consumes more CPU.

So how can we make the PTF more CPU efficient? Kindly advise.

Below are our benchmarking details.

create or replace package pkg_export_to_csv
as
   function describe( p_tab in out dbms_tf.table_t )
   return dbms_tf.describe_t;

   procedure fetch_rows ;
end;
/

create or replace package body pkg_export_to_csv
as
  function describe( p_tab in out dbms_tf.table_t )
  return dbms_tf.describe_t
  as
    l_new_cols dbms_tf.COLUMNS_NEW_T;
  begin
    for i in 1..p_tab.column.count
    loop
        p_tab.column(i).pass_through := false;
        p_tab.column(i).for_read := true;
    end loop;

    l_new_cols( l_new_cols.count+1 ) := dbms_tf.COLUMN_METADATA_T( name => 'csv_cols' );
    return dbms_tf.describe_t( new_columns => l_new_cols );
  end;

  procedure fetch_rows as
    l_env dbms_tf.env_t ;
    l_rowset dbms_tf.row_set_t;
    l_rowcnt number;
    l_colcnt number;
    l_piece long;
    l_rows number := 0 ;
    l_value dbms_tf.tab_varchar2_t ;
  begin
    l_env := dbms_tf.get_env;
    dbms_tf.get_row_set( l_rowset,l_rowcnt,l_colcnt );

    for r in 1..l_rowcnt
    loop
      dbms_tf.xstore_get('X'||r,l_rows);
      l_piece := null;
      for c in 1..l_colcnt
      loop
        l_piece := l_piece ||','||
          dbms_tf.col_to_char( l_rowset(c),r );
      end loop;
      l_value(r) := trim(',' from l_piece);
      dbms_tf.xstore_set('X'||r,l_rows+l_rowcnt);
    end loop;

    if l_colcnt > 0 then
      dbms_tf.put_col(1,l_value);
    end if;
  end;
end;
/

create or replace function export_to_csv(
   p_tab in table )
return
table pipelined
table polymorphic
using pkg_export_to_csv;
/

create table t1 
nologging as 
select owner,object_type,object_id,
 object_name, rpad('*',1000,'*') as data
from all_objects a,
 all_users 
where rownum <=100000; 

exec dbms_monitor.SESSION_TRACE_ENABLE( waits=>true, binds=>true );

begin 
 for x in ( select owner||','||object_type||','||object_id||','||
   object_name||','||data txt 
   from t1 )
 loop
  null ;
 end loop;
end;
/ 

begin 
 for x in (select * from export_to_csv(t1) )
 loop
  null ;
 end loop;
end;
/ 


Here is what tkprof shows us:
********************************************************************************

SQL ID: 0xum9g49ndmfg Plan Hash: 3617692013

SELECT * 
FROM
 EXPORT_TO_CSV(T1) 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.02          0        786          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     1001    119.22     158.21      16139      24458          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1003    119.26     158.24      16139      25244          0      100000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 110     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
    100000     100000     100000  POLYMORPHIC TABLE FUNCTION  EXPORT_TO_CSV (cr=27202 pr=16139 pw=0 time=2997615101 us starts=1)
    100000     100000     100000   TABLE ACCESS FULL T1 (cr=16226 pr=16139 pw=0 time=2983240 us starts=1 cost=4427 size=103300000 card=100000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PGA memory operation                          127        0.00          0.00
  direct path read                              414        0.19          2.80



********************************************************************************

SQL ID: dz5zwy6xfta5c Plan Hash: 3617692013

SELECT OWNER||','||OBJECT_TYPE||','||OBJECT_ID||','|| OBJECT_NAME||','||DATA 
  TXT 
FROM
 T1 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          6           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     1001      0.48       3.19      16139      16978          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1003      0.48       3.21      16139      16978          6      100000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 110     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
    100000     100000     100000  TABLE ACCESS FULL T1 (cr=16978 pr=16139 pw=0 time=1341619 us starts=1 cost=4427 size=103300000 card=100000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  reliable message                                1        0.00          0.00
  enq: KO - fast object checkpoint                1        0.04          0.04
  PGA memory operation                            3        0.00          0.00
  direct path read                              492        0.53          2.67
********************************************************************************

Connor McDonald
March 02, 2022 - 2:36 am UTC

How about using a SQL_SCRIPT job via the scheduler?

https://connor-mcdonald.com/2018/06/20/the-death-of-utl_file/
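The idea in the post above is to push the CSV formatting out of PL/SQL entirely and let the client do it. A minimal sketch, assuming a pre-created scheduler credential (the name SCRIPT_CRED and the spool path are placeholders, not from the original thread) and a SQL*Plus/SQLcl client recent enough to support SET MARKUP CSV:

```sql
-- Hypothetical sketch: unload T1 to CSV with a SQL_SCRIPT scheduler job.
-- SQL_SCRIPT jobs run the job_action as a SQL*Plus script on the database host,
-- so the CSV formatting is done by the client, not by per-row PL/SQL code.
begin
  dbms_scheduler.create_job(
    job_name        => 'UNLOAD_T1_CSV',
    job_type        => 'SQL_SCRIPT',
    job_action      => q'[
      set markup csv on
      set feedback off
      spool /tmp/t1.csv
      select owner, object_type, object_id, object_name, data from t1;
      spool off
    ]',
    credential_name => 'SCRIPT_CRED',   -- assumed pre-created OS credential
    enabled         => true );
end;
/
```

This avoids the per-row, per-column DBMS_TF.COL_TO_CHAR calls that dominate the CPU time in the tkprof output above, at the cost of writing the file server-side rather than returning rows to the caller.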
