Depends how many columns you fetch...
Adrian Billington, April 26, 2002 - 4:34 pm UTC
Tom
I have seen BULK COLLECT take longer than row-by-row in my "stress tests" of it, way back when I first got 8i (8.1.6.0). I think that because bulk collect in 8i requires a scalar array per column fetched, we eventually hit diminishing returns as the column count gets higher. On the full-size database I was working on at the time, that point was only about 15 columns, but I'm sure that for many people the "break-even" point will be higher. With each scalar array requiring its own subscript lookup post-fetch, I suspect this is where some of the performance lag occurred as the column count crept up.
This was certainly a restriction on bulk fetching those big, wide production tables we all have, and it meant I have sat tight waiting for 9i so I can bulk collect into a table of records and therefore have one subscript lookup for all attributes in the array. Of course, I haven't managed to get 9i installed yet, so I can't verify this last bit just yet...
But I still use it wherever possible...:)
Adrian
April 26, 2002 - 8:45 pm UTC
well, I went as high as I was willing to type:
ops$tkyte@ORA817DEV.US.ORACLE.COM> declare
2 type onameArray is table of all_objects.object_name%type index by binary_integer;
3
4 l_arr1 onameArray;
5 l_arr2 onameArray;
6 l_arr3 onameArray;
7 l_arr4 onameArray;
8 l_arr5 onameArray;
9 l_arr6 onameArray;
10 l_arr7 onameArray;
11 l_arr8 onameArray;
12 l_arr9 onameArray;
13 l_arr10 onameArray;
14 l_arr11 onameArray;
15 l_arr12 onameArray;
16 l_arr13 onameArray;
17 l_arr14 onameArray;
18 l_arr15 onameArray;
19 l_arr16 onameArray;
20 l_arr17 onameArray;
21 l_arr18 onameArray;
22 l_arr19 onameArray;
23 l_arr20 onameArray;
24 l_arr21 onameArray;
25 l_arr22 onameArray;
26 l_arr23 onameArray;
27 l_arr24 onameArray;
28 l_arr25 onameArray;
29 l_arr26 onameArray;
30 l_arr27 onameArray;
31 l_arr28 onameArray;
32 l_arr29 onameArray;
33 l_arr30 onameArray;
34 l_arr31 onameArray;
35 l_arr32 onameArray;
36 l_arr33 onameArray;
37 l_arr34 onameArray;
38 l_arr35 onameArray;
39 l_arr36 onameArray;
40 l_arr37 onameArray;
41 l_arr38 onameArray;
42 l_arr39 onameArray;
43 l_arr40 onameArray;
44 l_arr41 onameArray;
45 l_arr42 onameArray;
46 l_arr43 onameArray;
47 l_arr44 onameArray;
48 l_arr45 onameArray;
49 l_arr46 onameArray;
50 l_arr47 onameArray;
51 l_arr48 onameArray;
52 l_arr49 onameArray;
53
54 cursor c is select object_name c1, object_name c2, object_name c3, object_name c4, object_name c5,
55 object_name c6, object_name c7, object_name c8, object_name cg, object_name cn,
56 object_name c11, object_name c12, object_name ca, object_name ch, object_name co,
57 object_name c16, object_name c17, object_name cb, object_name ci, object_name cp,
58 object_name c21, object_name c22, object_name cc, object_name cj, object_name cq,
59 object_name c26, object_name c27, object_name cd, object_name ck, object_name cr,
60 object_name c31, object_name c32, object_name ce, object_name cl, object_name cs,
61 object_name cx26, object_name cx27, object_name cxd, object_name cxk, object_name cxr,
62 object_name cy26, object_name cy7, object_name cyd, object_name cyk, object_name cyr,
63 object_name c36, object_name c37, object_name cf, object_name cm
64 from all_objects;
65
66 l_start number;
67 begin
68 insert into run_stats select 'before', stats.* from stats;
69
70 l_start := dbms_utility.get_time;
71 for i in 1 .. 1
72 loop
73 for x in c
74 loop
75 null;
76 end loop;
77 end loop;
78 dbms_output.put_line( (dbms_utility.get_time-l_start) || ' hsecs' );
79
80
81 insert into run_stats select 'after 1', stats.* from stats;
82 l_start := dbms_utility.get_time;
83 for i in 1 .. 1
84 loop
85 open c;
86 loop
87 fetch c BULK COLLECT into
88 l_arr1, l_arr2, l_arr3, l_arr4, l_arr5, l_arr6, l_arr7, l_arr8,
89 l_arr9, l_arr10, l_arr11, l_arr12, l_arr13, l_arr14, l_arr15, l_arr16,
90 l_arr17, l_arr18, l_arr19, l_arr20, l_arr21, l_arr22, l_arr23, l_arr24, l_arr25,
91 l_arr26, l_arr27, l_arr28, l_arr29, l_arr30, l_arr31, l_arr32, l_arr33, l_arr34,
92 l_arr35, l_arr36, l_arr37, l_arr38, l_arr39 , l_arr40, l_arr41, l_arr42, l_arr43,
93 l_arr44, l_arr45, l_arr46, l_arr47, l_arr48, l_arr49
94 LIMIT 100;
95 exit when c%notfound;
96 end loop;
97 close c;
98 end loop;
99 dbms_output.put_line( (dbms_utility.get_time-l_start) || ' hsecs' );
100
101 insert into run_stats select 'after 2', stats.* from stats;
102 end;
103 /
908 hsecs
811 hsecs
PL/SQL procedure successfully completed.
and while they start to get very close -- so I can see how it might go the other way.... at that point I wouldn't want to bulk fetch, as the time saved would be more than offset by the time spent typing!
so, they are less advantageous with a really really wide table (which I tend not to deal with day to day myself -- don't like them myself, I try to find something else with fewer columns)
memory leak
Leon Rzhemovskiy, April 26, 2002 - 7:05 pm UTC
I experimented with bulk collect a year ago and came to the same conclusion: it ran slower in a stress test (millions of executions). If you look at server memory you will find the reason. It is an Oracle bug related to a memory leak. It is fixed in 9i and there are patches available.
I have not used bulk collect since then. I think it should work faster now. Bulk insert, at least, is something I found very useful.
Practical experience running bulk collect 100000000 times, to compare its speed with individual fetching, would be appreciated.
Thanks
What about using the arrays?
Adrian Billington, April 29, 2002 - 3:43 am UTC
Tom
Thanks for being bothered to type out 49 arrays... :). Firstly, I'm sure the LIMIT size will make a difference -- I guess the more arrays you collect, the lower (or more sensible) your limit should become (a general assumption). Your test does not include actually using the arrays as my original test did way back (i.e. having to reference each array with its binary_integer lookup), so I tested it by passing the retrieved values, concatenated, to a procedure one "row" at a time (the procedure had only a null call). So while the implicit cursor was already looping a row at a time, the arrays were bulk loaded and then had to be looped through, which you would expect to slow things down a bit. Not at all -- BULK COLLECT still wins!!
Like you say, if you can be bothered to type it all...
Regards
Adrian
Which bug number?
Peter, May 16, 2002 - 11:58 am UTC
Leon,
could you please give me bug number and patchsets?
Pardon the stupid question
Arun Mathur, November 26, 2003 - 11:46 am UTC
Do you have scripts for creating run_stats and stats?
Thanks.
Arun
November 26, 2003 - 11:52 am UTC
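(The scripts themselves are not reproduced in this thread. Below is a minimal sketch of objects that would satisfy the insert into run_stats select 'before', stats.* from stats calls in the test at the top of this page -- modeled on the runstats harness described in Expert One-on-One Oracle, so treat the exact column sizes as assumptions:)

create global temporary table run_stats
( runid varchar2(15),   -- the 'before', 'after 1', 'after 2' label
  name  varchar2(80),
  value int )
on commit preserve rows;

-- name/value snapshot of session statistics and latch gets
create or replace view stats
as
select 'STAT...' || a.name name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
union all
select 'LATCH.' || name, gets
  from v$latch;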
BULK COLLECT LIMIT
Dan Clamage, November 28, 2003 - 12:25 pm UTC
I was testing the performance of a batch application that bulk collected only a handful (3 - 5) columns into parallel arrays. This was on a Compaq Tru64 UNIX platform with 16GB RAM.
Because I was working with 10's of millions of rows I quite sensibly used a LIMIT clause. I started with 10,000 and worked my way up to 500,000. I found that values in the range of 100's of thousands performed better than those in the 10's of thousands. 50,000 was not measurably better or worse than 10,000. Ditto at the high end.
I made the LIMIT user-configurable, since the Production platform was somewhat different (more powerful) than my development server. My recommendation was to enter values around 200,000.
The amount of PGA memory available to you is crucial.
On 9.2.0.1 I ran into a memory leak bug using the SAVE EXCEPTIONS clause (fixed in 9.2.0.3). It would bomb after consuming 1GB of PGA.
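For readers who have not seen it, SAVE EXCEPTIONS lets a FORALL carry on past failing rows and report them afterwards through SQL%BULK_EXCEPTIONS. A minimal sketch (the target table t is hypothetical; ORA-24381 is the "error(s) in array DML" exception that FORALL raises):

declare
    type num_tab is table of number index by binary_integer;
    l_ids       num_tab;
    bulk_errors exception;
    pragma exception_init( bulk_errors, -24381 );
begin
    select object_id bulk collect into l_ids
      from all_objects
     where rownum <= 1000;
    begin
        forall i in 1 .. l_ids.count save exceptions
            insert into t (id) values ( l_ids(i) );
    exception
        when bulk_errors then
            -- report each failed row; the other rows were still processed
            for j in 1 .. sql%bulk_exceptions.count
            loop
                dbms_output.put_line( 'row ' ||
                    sql%bulk_exceptions(j).error_index || ': ' ||
                    sqlerrm( -sql%bulk_exceptions(j).error_code ) );
            end loop;
    end;
end;
/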
Bulk collect slower
A reader, March 12, 2004 - 4:47 pm UTC
Hi Tom, I spent 2 days investigating (your book Effective Oracle by Design helped me).
But after I moved to bulk collect it is slower. Here is an example of the code and the statistics; please help me if you can.
I ran it twice to be sure there was no other reason.
The sum of all these modifications caused a noticeable increase in processing time (instead of reducing it).
There was less CPU usage, but it still took more time. (The code is in a function on the server, 9iR2.)
I increased the parameter by 20%, and the speed was the same as before.
Looking at db file sequential read, there was a significant increase,
even though there are few records, as you can see in the example.
Maybe I missed some point about the implementation -- with bulk inserts do you have to add more memory?
The changes were made in this way:
--- NO BULK COLLECT
CURSOR uCli IS SELECT CLF_CODCLI, CUF_CTACORR, CLF_TIPOCLI, (CUF_DBCUO - CUF_CRCUO) CUOTAS
FROM CLIENTESF, CUENTASF
WHERE CLF_CODCLI = CUF_CODCLI
AND ( CLF_LRETIVA = 'T'
FOR uCli1 IN uCli LOOP
INSERT INTO FON.IVA_RW
( IVA_FECHA, IVA_CODCLI, IVA_CTACORR, IVA_MONTO, IVA_CUOTAS, IVA_DSC )
VALUES
( dFecha, uCli1.CLF_CODCLI, uCli1.CUF_CTACORR, nMonto, nCuota, cDs );
UPDATE CUENTASF SET
CUF_IVA = NVL(CUF_IVA,0) + ABS(NVL(nCuota,0)),
CUF_MONTO_IVA = NVL(CUF_MONTO_IVA,0) + ABS(NVL(nMonto,0)),
CUF_CRCUO = CUF_CRCUO - nCuota
WHERE CUF_CODCLI = uCli1.CLF_CODCLI
AND CUF_CTACORR = uCli1.CUF_CTACORR;
END LOOP;
--- BULK COLLECT
tCLF_CODCLI DBMS_SQL.NUMBER_TABLE;
tCUF_CTACORR DBMS_SQL.NUMBER_TABLE;
tCLF_TIPOCLI DBMS_SQL.VARCHAR2_TABLE;
tCUOTAS DBMS_SQL.NUMBER_TABLE;
tNACIONALIDAD DBMS_SQL.VARCHAR2_TABLE;
tMONTOIVA DBMS_SQL.NUMBER_TABLE;
tCUOTAIVA DBMS_SQL.NUMBER_TABLE;
SELECT CLF_CODCLI, CUF_CTACORR, CLF_TIPOCLI, (CUF_DBCUO - CUF_CRCUO) CUOTAS,
CASE WHEN CLF_TIPOCLI = 'E' THEN
( SELECT EMP_NACIONALIDAD FROM EMPRESAS
WHERE EMP_RUC = CLF_CODCLI )
ELSE
( SELECT PER_NACIONALIDAD FROM PERSONAS
WHERE PER_CI = CLF_CODCLI ) END AS NACIONALIDAD
BULK COLLECT INTO
tCLF_CODCLI ,
tCUF_CTACORR ,
tCLF_TIPOCLI ,
tCUOTAS ,
tNACIONALIDAD
FROM CLIENTESF, CUENTASF
WHERE CLF_CODCLI = CUF_CODCLI AND ( CLF_LRETIVA = 'T'
FORALL i IN 1..tCUF_CTACORR.count
INSERT INTO FON.IVA_RW
( IVA_FECHA, IVA_CODCLI, IVA_CTACORR, IVA_MONTO, IVA_CUOTAS, IVA_DSC )
VALUES
( dFecha, tCLF_CODCLI(i), tCUF_CTACORR(i), tMONTOIVA(i), tCUOTAIVA(i), cDs );
FORALL i IN 1..tCUF_CTACORR.count
UPDATE CUENTASF SET
CUF_IVA = NVL(CUF_IVA,0) + ABS(NVL(tCUOTAIVA(i),0)),
CUF_MONTO_IVA = NVL(CUF_MONTO_IVA,0) + ABS(NVL( tMONTOIVA(i),0)),
CUF_CRCUO = CUF_CRCUO - tCUOTAIVA(i)
WHERE CUF_CODCLI = tCLF_CODCLI(i)
AND CUF_CTACORR = tCUF_CTACORR(i);
-------- STATISTICS BULK
INSERT INTO FON.IVA_RW
( IVA_FECHA, IVA_CODCLI, IVA_CTACORR, IVA_MONTO, IVA_CUOTAS, IVA_DSC )
VALUES
( :b6, :b1, :b2, :b3, :b4, :b5 )
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.84 96.16 3603 31 32740 2311
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.84 96.16 3603 31 32740 2311
-------- STATISTICS NO BULK
INSERT INTO FON.IVA_RW
( IVA_FECHA, IVA_CODCLI, IVA_CTACORR, IVA_MONTO, IVA_CUOTAS, IVA_DSC )
VALUES
( :b6, :b5, :b4, :b3, :b2, :b1 )
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 2311 2.32 76.68 2711 15 37444 2311
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2312 2.32 76.68 2711 15 37444 2311
---- statistics bulk
********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 46 0.00 0.00 0 0 0 0
Execute 245 0.32 0.46 5 1206 1133 40
Fetch 203 0.65 2.75 1039 8460 0 203
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 494 0.98 3.22 1044 9666 1133 243
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 370 0.00 0.00
SQL*Net message from client 369 66.26 123.39
db file sequential read 35 0.02 0.13
SQL*Net more data to client 37 0.00 0.00
db file scattered read 129 0.11 2.01
SQL*Net message to dblink 110 0.00 0.00
SQL*Net message from dblink 110 0.11 0.30
SQL*Net more data from dblink 50 0.00 0.00
log file sync 4 0.11 0.20
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 6858 0.26 0.29 1 0 2 0
Execute 84204 14.29 233.51 9997 24023 136841 15819
Fetch 84233 113.60 209.59 33324 635953 0 89223
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 175295 128.17 443.40 43322 659976 136843 105042
Misses in library cache during parse: 29
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 10712 0.22 192.55
db file scattered read 4152 0.36 70.31
SQL*Net message to dblink 94 0.00 0.00
SQL*Net message from dblink 94 0.07 0.21
SQL*Net more data from dblink 25 0.00 0.00
log buffer space 4 1.02 2.72
log file switch (checkpoint incomplete) 13 1.03 12.60
free buffer waits 37 1.03 36.40
6887 user SQL statements in session.
17 internal SQL statements in session.
6904 SQL statements in session.
********************************************************************************
Trace file: E:\oraSFON\trace\user\sfon_ora_2844.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
6887 user SQL statements in trace file.
17 internal SQL statements in trace file.
6904 SQL statements in trace file.
205 unique SQL statements in trace file.
496224 lines in trace file.
-- statistics no bulk
********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 44 0.73 0.77 0 45 0 0
Execute 167 0.28 0.69 26 863 809 28
Fetch 137 0.57 2.86 1045 6770 0 137
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 348 1.59 4.33 1071 7678 809 165
Misses in library cache during parse: 32
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 280 0.00 0.00
SQL*Net message from client 279 57.24 121.91
db file sequential read 61 0.06 0.63
SQL*Net more data to client 45 0.00 0.00
db file scattered read 129 0.13 2.01
SQL*Net message to dblink 86 0.00 0.00
SQL*Net message from dblink 86 0.44 0.66
SQL*Net more data from dblink 32 0.00 0.00
log file sync 4 0.00 0.01
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 7195 0.60 0.96 9 72 2 0
Execute 91143 22.57 293.68 10002 48096 146424 15758
Fetch 88958 160.12 290.88 42724 617730 0 88874
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 187296 183.31 585.53 52735 665898 146426 104632
Misses in library cache during parse: 187
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00
db file sequential read 11638 0.21 209.67
db file scattered read 5270 0.26 78.58
SQL*Net message to dblink 124 0.00 0.00
SQL*Net message from dblink 124 0.18 0.52
SQL*Net more data from dblink 43 0.00 0.00
log buffer space 3 1.02 1.05
free buffer waits 100 1.03 101.64
log file switch (checkpoint incomplete) 12 1.03 11.12
log file switch completion 3 1.02 1.43
6804 user SQL statements in session.
435 internal SQL statements in session.
7239 SQL statements in session.
********************************************************************************
Trace file: E:\oraSFON\trace\user\usfon_ora_2784.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
6804 user SQL statements in trace file.
435 internal SQL statements in trace file.
7239 SQL statements in trace file.
222 unique SQL statements in trace file.
608497 lines in trace file.
March 12, 2004 - 7:54 pm UTC
you just did more physical IO when you ran the bulk -- it would happen JUST as often with the single row.
this is one of those "you got unlucky" cases, but you would get just as unlucky (even more so) with the single-row process over time.
run each process "a couple of times", you'll see.
A reader, March 15, 2004 - 8:11 am UTC
Thanks Tom.
BULK COLLECT QUERY NOT COMPILING
Mahomed Suria, April 01, 2004 - 11:12 am UTC
Hi Tom,
I am getting the following error whilst experimenting with BULK COLLECT, but I cannot understand why:
marvin> sql
SQL*Plus: Release 9.2.0.4.0 - Production on Thu Apr 1 17:08:49 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Release 9.2.0.4.0 - Production
JServer Release 9.2.0.4.0 - Production
SQL> set pages 200;
SQL> @sp_fixloc
Warning: Procedure created with compilation errors.
SQL> show errors
Errors for PROCEDURE SP_FIXLOC:
LINE/COL ERROR
-------- -----------------------------------------------------------------
48/3 PL/SQL: SQL Statement ignored
108/28 PL/SQL: ORA-00904: "ALLCNT": invalid identifier
Here is my stored procedure:
create or replace procedure sp_fixloc( p_prcnam in varchar2,
p_update in number,
p_whs in rakloc.whscde%type,
p_blk in rakloc.rakblk%type default null,
p_asl in rakloc.rakasl%type default null,
p_col in rakloc.rakcol%type default null,
p_lvl in rakloc.raklvl%type default null,
p_sid in rakloc.raksid%type default null)
is
--
--
TYPE t_rakusr IS TABLE OF rakloc.rakusr%type INDEX BY PLS_INTEGER;
TYPE t_raktyp IS TABLE OF rakloc.raktyp%type INDEX BY PLS_INTEGER;
TYPE t_whscde IS TABLE OF rakloc.whscde%type INDEX BY PLS_INTEGER;
TYPE t_rakblk IS TABLE OF rakloc.rakblk%type INDEX BY PLS_INTEGER;
TYPE t_rakasl IS TABLE OF rakloc.rakasl%type INDEX BY PLS_INTEGER;
TYPE t_rakcol IS TABLE OF rakloc.rakcol%type INDEX BY PLS_INTEGER;
TYPE t_raklvl IS TABLE OF rakloc.raklvl%type INDEX BY PLS_INTEGER;
TYPE t_raksid IS TABLE OF rakloc.raksid%type INDEX BY PLS_INTEGER;
TYPE t_rakocc IS TABLE OF rakloc.rakocc%type INDEX BY PLS_INTEGER;
TYPE t_rakfre IS TABLE OF rakloc.rakfre%type INDEX BY PLS_INTEGER;
TYPE t_maxdep IS TABLE OF rakloc.maxdep%type INDEX BY PLS_INTEGER;
TYPE t_numitr IS TABLE OF rakloc.numitr%type INDEX BY PLS_INTEGER;
TYPE t_numall IS TABLE OF rakloc.numall%type INDEX BY PLS_INTEGER;
l_rakusr t_rakusr;
l_raktyp t_raktyp;
l_whscde t_whscde;
l_rakblk t_rakblk;
l_rakasl t_rakasl;
l_rakcol t_rakcol;
l_raklvl t_raklvl;
l_raksid t_raksid;
l_rakocc t_rakocc;
l_rakfre t_rakfre;
l_maxdep t_maxdep;
l_numitr t_numitr;
l_numall t_numall;
l_rakcnt t_rakocc;
l_itrcnt t_numitr;
l_allcnt t_numall;
l_loc varchar2(40);
l_msg varchar2(120);
BEGIN
select r.rakusr, r.raktyp, r.whscde, r.rakblk, r.rakasl,
r.rakcol, r.raklvl, r.raksid, r.rakocc, r.rakfre,
r.maxdep, r.numitr, r.numall,
(select count(*) from pallet p
where p.whscde = r.whscde
and p.rakblk = r.rakblk
and p.rakasl = r.rakasl
and p.rakcol = r.rakcol
and p.raklvl = r.raklvl
and p.raksid = r.raksid
and trnsts <> 5) rakcnt, --TRNSTS-ADVISED
(select count(*) from palreq
where ( piksts in (60, -- PIKSTS-PICKING
20, -- PIKSTS-ON_INPUT
30) -- PIKSTS-CURRENT-MOVE
or ( piksts = 10 -- PIKSTS-ALLOCATED
and piktyp <> 2) -- PIKTYP-NORMAL
)
and ( ( dstwhs = r.whscde
and dstblk = r.rakblk
and dstasl = r.rakasl
and dstcol = r.rakcol
and dstlvl = r.raklvl
and dstsid = r.raksid )
or ( palreq.intasl = r.rakasl
and palreq.intasl is not null
and r.raktyp in (40, --RAKTYP-DEPOSIT-BASE
42) --RAKTYP-VIRTUAL-GRP
)
or ( palreq.intusr is not null
and palreq.intusr = r.rakusr)
)
) itrcnt,
(select count(*) from palreq
where whscde = r.whscde
and rakblk = r.rakblk
and rakasl = r.rakasl
and rakcol = r.rakcol
and raklvl = r.raklvl
and raksid = r.raksid
and piksts in (-1, -- PIKSTS-SHORT-ALLOC
10, -- PIKSTS-ALLOCATED
40,-- PIKSTS-BLOCKED-ALLOC
30) -- PIKSTS-CURRENT-MOVE
) allcnt
BULK COLLECT INTO
l_rakusr,l_raktyp,l_whscde,l_rakblk,l_rakasl,l_rakcol,
l_raklvl, l_raksid,l_rakocc,l_rakfre,l_maxdep,l_numitr,
l_numall, l_rakcnt,l_itrcnt,l_allcnt
from rakloc r
where r.whscde = p_whs
and (r.rakblk = p_blk or p_blk is null)
and (r.rakasl = p_asl or p_asl is null)
and (r.rakcol = p_col or p_col is null)
and (r.raklvl = p_lvl or p_lvl is null)
and (r.raksid = p_sid or p_sid is null)
and ( r.rakocc <> rakcnt
or r.numitr <> itrcnt
or r.numall <> allcnt )
and r.raktyp in (select raktyp from bkrtyp
where blktyp = 2010)
;
if ( p_update <> 0 ) then
FORALL indx IN l_rakusr.FIRST .. l_rakusr.LAST
update rakloc
set rakocc = l_rakcnt(indx)
, rakfre = l_maxdep(indx) - l_rakcnt(indx)
, numall = l_allcnt(indx)
, numitr = l_itrcnt(indx)
where whscde = l_whscde(indx)
and rakblk = l_rakblk(indx)
and rakasl = l_rakasl(indx)
and rakcol = l_rakcol(indx)
and raklvl = l_raklvl(indx)
and raksid = l_raksid(indx)
;
else
FOR indx IN l_rakusr.FIRST .. l_rakusr.LAST
LOOP
l_loc := 'Location [' || l_rakusr(indx) || ']' ;
if ( l_rakcnt(indx) <> l_rakocc(indx)) then
l_msg := trim(l_loc) || ' rakocc old=' || to_char(l_rakocc(indx)) ||
' new=' || to_char(l_rakcnt(indx));
pkg_trc.outmsg(p_prcnam, l_msg);
end if;
if ( l_itrcnt(indx) <> l_numitr(indx)) then
l_msg := trim(l_loc) || ' numitr old=' || to_char(l_numitr(indx)) ||
' new=' || to_char(l_itrcnt(indx));
pkg_trc.outmsg(p_prcnam, l_msg);
end if;
if ( l_allcnt(indx) <> l_numall(indx)) then
l_msg := trim(l_loc) || ' numall old=' || to_char(l_numall(indx)) ||
' new=' || to_char(l_allcnt(indx));
pkg_trc.outmsg(p_prcnam, l_msg);
end if;
END LOOP;
end if;
END;
/
April 01, 2004 - 11:24 am UTC
nothing to do with bulk collect
everything to do with normal scoping. allcnt -- defined in the select list, is not "in scope" in the body of the query yet.
ops$tkyte@ORA9IR2> select (select count(*) from dual ) allcnt
2 from dual
3 where allcnt <> 0
4 /
where allcnt <> 0
*
ERROR at line 3:
ORA-00904: "ALLCNT": invalid identifier
ops$tkyte@ORA9IR2> select *
2 from (
3 select (select count(*) from dual ) allcnt
4 from dual
5 )
6 where allcnt <> 0
7 /
ALLCNT
----------
1
Followup : Bulk Collect
A reader, April 02, 2004 - 9:04 am UTC
Thanks Tom.
Sorry , as you say it is not a Bulk Collect problem, but I was off the track anyway and you solved it for me.
Thanks again.
Bye
Default Bulk collect LIMIT in 9i?
Tony, April 14, 2004 - 10:05 am UTC
Tom,
What is the Default Bulk collect LIMIT in 9i?
April 14, 2004 - 10:57 am UTC
there isn't one.
if you bulk collect without limit -- all rows are retrieved
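In other words, a plain select ... bulk collect into drags the entire result set back in a single fetch -- fine for small sets, memory-hungry for big ones. A quick sketch:

declare
    type name_tab is table of all_objects.object_name%type
         index by binary_integer;
    l_names name_tab;
begin
    -- no LIMIT clause: every row arrives in one call
    select object_name bulk collect into l_names
      from all_objects;
    dbms_output.put_line( l_names.count || ' rows fetched in one call' );
end;
/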
Bulk Collect
Sreedhar, June 21, 2004 - 3:10 am UTC
Tom,
What's the best practice for using "bulk collect into"? And when it comes to performance, which is more efficient?
a) Straight away with Select statements.
Ex:- SELECT empno,
ename,
sal
BULK COLLECT INTO l_empno, l_ename, l_sal
FROM EMP
WHERE ename = 'SCOTT';
b) Use it inside an explicit cursor.
Ex:- DECLARE
CURSOR c1( p_ename IN VARCHAR2 )
IS
SELECT empno,
ename,
sal
FROM EMP
WHERE ename = p_ename;
BEGIN
OPEN c1('SCOTT');
FETCH c1 BULK COLLECT INTO l_empno, l_ename, l_sal;
CLOSE c1;
END;
Rgrds
Sreedhar
June 21, 2004 - 8:26 am UTC
the decision point is not performance related (i'll leave it to you to benchmark, like I would, to see that).
it is "how many rows"
If you have about 100 (or less) rows, select into is just dandy.
If you have lots more than 100, explicit cursors with a LIMIT clause on the fetch would be called for:
open c;
loop
fetch c bulk collect into array1, array2.... LIMIT 100;
process array1.count number of rows....
exit when c%notfound;
end loop;
close c;
A reader, June 21, 2004 - 2:00 pm UTC
Bulk Collect should be faster.
June 21, 2004 - 8:18 pm UTC
given that BOTH of them are bulk collects -- what is your point?
A reader, November 18, 2004 - 3:05 pm UTC
Why doesn't bulk collect with limit insert all the rows?
see the code below. Is it a bug?
declare
TYPE N1 IS table of VARCHAR2(30);
N1_TAB N1;
cursor c is select object_name from all_objects ;
begin
open c;
loop
fetch c bulk collect into N1_TAB limit 200 ;
exit when c%notfound;
forall i in n1_tab.first..n1_tab.last
insert into obj (object_name)
values (n1_tab(i));
end loop ;
close c;
end;
I have 8023 objects in all_objects but the code only inserts 8000.
November 19, 2004 - 9:53 am UTC
because of the bug in your code.
loop
fetch bulk collect...
forall i....
exit when c%notfound;
end loop;
your last fetch gets say 193 records (for example), c%notfound is TRUE (since C is exhausted, empty, fetched out) -- but you failed to process the last 193 records.
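Putting that fix into the reader's block (an untested sketch; the exit now comes after the FORALL, and the 1 .. n1_tab.count range avoids touching first/last on a possibly empty collection):

declare
    type n1 is table of varchar2(30);
    n1_tab n1;
    cursor c is select object_name from all_objects;
begin
    open c;
    loop
        fetch c bulk collect into n1_tab limit 200;
        -- process whatever this fetch returned, even a short last batch
        forall i in 1 .. n1_tab.count
            insert into obj (object_name)
            values ( n1_tab(i) );
        exit when c%notfound;
    end loop;
    close c;
end;
/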
A reader, November 19, 2004 - 4:47 pm UTC
Thanks Tom. I realised my error
Nested Loops and Bulk Collect
A Reader, January 12, 2005 - 3:54 pm UTC
Tom,
Would you advise nested loops and the FIRST_ROWS hint with bulk collect? If I were fetching 1000 rows at a time using the LIMIT clause, would a first_rows(1000) hint be beneficial in my case?
regards
January 12, 2005 - 5:47 pm UTC
I don't like to generalize.
Answer could be yes, answer could be no.
say my goal was to write a file from these records. Probably "no" -- I'd want ALL_ROWS optimization. I want to get to the last row ASAP.
say I was writing a pipelined function as part of a ETL process. Maybe "yes", it would be very cool to get the first 1000 rows as soon as possible and send them using pipe row to the "client" whomever that might be.
say I was pulling these records to feed a queue -- and there were a dozen processes waiting to get my results. Again - maybe yes, the faster I can get them to work (and they take a long time to work on the data), the sooner they will be able to complete. So as long as my first rows query retrieves data faster than they consume it, it might be the right choice.
say my goal was to "feed myself" -- fetch 1,000 rows, process them, fetch next 1,000, process them -- then probably "no". It won't make me finish any faster getting the first 1,000 "as soon as possible", in fact, first rows would probably make me finish using much more time -- since fetching rows in total would take longer.
See -- depends on your *needs*. What you are doing. You have to think about what your goal is and whether "getting the first row fast, but getting the last row slow" beats "getting the last row as fast as possible and maybe waiting for the first rows to come longer"
Useful, but I'm unsure about the loop in Tom's BULK COLLECT
Chris Booth, January 20, 2005 - 10:19 am UTC
Why do we need the loop?
38 open c;
39 loop
40 fetch c BULK COLLECT into l_owner, l_object_name,
l_subobject_name,
41 l_object_type, l_timestamp, l_status,
42 l_temporary, l_generated, l_secondary
limit 100;
43 exit when c%notfound;
44 end loop;
45 close c;
Surely we can just do this:
38 open c;
40 fetch c BULK COLLECT into l_owner...LIMIT 100;
45 close c;
The combination of the LIMIT clause and the EXIT WHEN statement will mean we don't retrieve more than 100 rows anyway.
In fact, the use of a loop here makes no sense to me.
I could be wrong though.
January 20, 2005 - 10:54 am UTC
what if there are 105 rows.
you will get the first 100, process them and ignore the last 5
I fetch all of the rows -- giving you the ability to process them all.
I was showing the difference between
a) fetching ALL rows one at a time
b) fetching ALL rows 100 at a time.
interesting fact -- in 10g, for x in ( select * from t ) will automagically and transparently fetch 100 at a time...
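That is, in 10g a plain cursor for-loop like the one below is array fetched behind the scenes, with no code change -- provided the code is compiled with the default plsql_optimize_level of 2 (a sketch):

begin
    for x in ( select object_name from all_objects )
    loop
        null;  -- the PL/SQL engine silently fetches 100 rows
               -- at a time behind this loop in 10g
    end loop;
end;
/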
Thanks...
Chris Booth, January 20, 2005 - 12:17 pm UTC
...it makes sense now
but why...
Chris, January 21, 2005 - 7:05 am UTC
...would you want to fetch data 100 rows at a time, when you could fetch all data in one go?
January 21, 2005 - 8:31 am UTC
Connor McDonald (wrote the mastering oracle plsql book) has a great analogy using "paving a driveway with two people"
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:367986993667#24384621466907
it is purely a case of "too much good medicine kills the patient", "diminishing marginal returns", "overkill"
say the result set was 1,000,000 rows. you really want 1,000,000 rows in ram? in plsql?
you want to wait until the last row is fetched, stashed somewhere in the oracle kernel, and then copied en-mass to your array location?
if the array fetcher isn't plsql -- you really want to slam the network with 1,000,000 rows all at once?
What you want is to get a nice steady stream going -- database do a little work, plsql do a little work, database do a little work and so on. 100 to 500 rows at a time is a nice "sweet spot"
thanks...
Chris, January 21, 2005 - 12:05 pm UTC
That's a big help to me and a couple of colleagues.
Bulk Collect & Fetch
Richard Evans, February 22, 2005 - 3:04 pm UTC
Is there a better way than declaring l_owner, l_object_name .. for every "column" you want to use?
For example instead of doing:
type onameArray is table of all_objects.object_name%type index by
binary_integer;
l_owner onameArray;
l_object_name onameArray;
for i in 1 .. 1
loop
open c;
loop
fetch c BULK COLLECT into l_owner, l_object_name limit 100;
exit when c%notfound;
end loop;
close c;
Do:
Cursor c1 is select * from all_objects;
TYPE onameArray is TABLE OF c1%ROWTYPE;
l_AllObjects onameArray;
open c1;
fetch c1 BULK COLLECT INTO l_AllObjects;
close c1;
One of the examples included at least 50 l_* declarations and calls in the fetch statement -yuck!
How I've gotten around this:
Cursor c1 is select * from all_objects;
TYPE onameArray is TABLE OF c1%ROWTYPE;
l_AllObjects onameArray;
select * BULK COLLECT INTO l_AllObjects FROM all_objects;
Then calling them by column name:
FOR i IN l_AllObjects.FIRST..l_AllObjects.LAST LOOP
dbms_output.put_line(l_AllObjects(i).owner);
dbms_output.put_line(l_AllObjects(i).object_name);
END LOOP;
All: What are your thoughts on this? Is this more or less efficient? Is there a way to fetch your cursor c1 into l_AllObjects without declaring l_owner, l_object_name ... ?
Thanks
February 22, 2005 - 5:17 pm UTC
it is a new feature and works great (i talk of it in Effective Oracle by Design and have used it on more current posts)
use the array of records.
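Combining the record array with a LIMIT fetch (9iR2 and later) gives one declaration and one subscript no matter how many columns are fetched -- a sketch:

declare
    cursor c is
        select owner, object_name, object_type
          from all_objects;
    type obj_tab is table of c%rowtype index by binary_integer;
    l_objs obj_tab;
begin
    open c;
    loop
        fetch c bulk collect into l_objs limit 100;
        for i in 1 .. l_objs.count
        loop
            null;  -- process l_objs(i).owner, l_objs(i).object_name, ...
        end loop;
        exit when c%notfound;
    end loop;
    close c;
end;
/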
Is it allowed to use an inline view in a straight BULK COLLECT query?
A reader, March 21, 2005 - 6:21 pm UTC
Any idea why I am getting this error?
I am using the parameter value inside the inline view's query of the main bulk collect query.
SQL> show errors
Errors for FUNCTION VALIDATE_DIAL_PLAN_RANGE_BVOIP:
LINE/COL ERROR
-------- -----------------------------------------------------------------
10/2 PL/SQL: SQL Statement ignored
19/9 PL/SQL: ORA-00947: not enough values
Function code :
SQL> /
SQL> set serverout on size 1000000
SQL> create or replace function validate_dial_plan_range_bvoip(
2 p_iom_order_id in iom_order.iom_order_id%TYPE,
3 p_dial_plan_range_inlist in BVOIP_DIAL_PLAN_RANGE_OBJ_LIST,
4 p_dial_plan_range_outlist out BVOIP_DIAL_PLAN_RANGE_OBJ_LIST,
5 p_err_msg out varchar2)
6 RETURN number is
7 ok number(2) := 0;
8 l_dial_plan_range_list BVOIP_DIAL_PLAN_RANGE_OBJ_LIST := BVOIP_DIAL_PLAN_RANGE_OBJ_LIST();
9 begin
10 SELECT C.DIAL_PLAN_RANGE_ID,
11 C.DIAL_PLAN_INFO_ID,
12 C.PBX_EXT_LENGTH,
13 C.START_RANGE,
14 C.END_RANGE,
15 C.VALID_IND,
16 C.COUNTRY_CODE,
17 C.GATEWAY_CITY_CODE
18 BULK COLLECT INTO l_dial_plan_range_list
19 FROM DIAL_PLAN_RANGE C,
20 (select b.dial_plan_info_id
21 from voip_info a, dial_plan_info b
22 where a.iom_order_id=p_iom_order_id
23 and a.voip_info_id=b.voip_info_id) D
24 where c.dial_plan_info_id=D.dial_plan_info_id
25 and c.valid_ind='Y'
26 ;
27
28 FOR i IN l_dial_plan_range_list.FIRST..l_dial_plan_range_list.LAST LOOP
29 dbms_output.put_line(l_dial_plan_range_list(i).DIAL_PLAN_RANGE_ID) ;
30 dbms_output.put_line(l_dial_plan_range_list(i).DIAL_PLAN_INFO_ID) ;
31 END LOOP;
32
33 RETURN ok;
34
35 end validate_dial_plan_range_bvoip;
36 /
Warning: Function created with compilation errors.
SQL> show errors
Errors for FUNCTION VALIDATE_DIAL_PLAN_RANGE_BVOIP:
LINE/COL ERROR
-------- -----------------------------------------------------------------
10/2 PL/SQL: SQL Statement ignored
19/9 PL/SQL: ORA-00947: not enough values
SQL> /
Warning: Function created with compilation errors.
SQL> l 9 21
9 begin
10 SELECT C.DIAL_PLAN_RANGE_ID,
11 C.DIAL_PLAN_INFO_ID,
12 C.PBX_EXT_LENGTH,
13 C.START_RANGE,
14 C.END_RANGE,
15 C.VALID_IND,
16 C.COUNTRY_CODE,
17 C.GATEWAY_CITY_CODE
18 BULK COLLECT INTO l_dial_plan_range_list
19 FROM DIAL_PLAN_RANGE C,
20 (select b.dial_plan_info_id
21* from voip_info a, dial_plan_info b
SQL>
SQL> desc BVOIP_DIAL_PLAN_RANGE_OBJ_LIST
BVOIP_DIAL_PLAN_RANGE_OBJ_LIST TABLE OF BVOIP_DIAL_PLAN_RANGE_OBJ
Name Null? Type
----------------------------------------- -------- ----------------------------
DIAL_PLAN_RANGE_ID NUMBER(11)
DIAL_PLAN_INFO_ID NUMBER(11)
PBX_EXT_LENGTH VARCHAR2(2)
START_RANGE VARCHAR2(15)
END_RANGE VARCHAR2(15)
VALID_IND VARCHAR2(1)
COUNTRY_CODE NUMBER(2)
GATEWAY_CITY_CODE VARCHAR2(20)
March 22, 2005 - 11:04 am UTC
need entire examples -- but probably you are just missing the cast of the scalars to the scalar object type
select myType(a,b,c,d,...) bulk collect into ... from
you have a table of objects probably.
Never mind my prev post
A reader, March 21, 2005 - 6:31 pm UTC
Tom:
I figured it out -- since I am using a collection of objects, I have to select an object.
Thanks.
Will this approach work ok.. any suggestions
A reader, March 22, 2005 - 5:46 pm UTC
Hi Tom:
Need your input, from a performance perspective, on this approach for comparing an input list with a DB table.
Here is the business logic. My function receives its input as an array of records and must determine whether each individual record has already been validated downstream. After validation in the DB function, I need to send back the list that is yet to be validated (meaning the records not in the DB table, compared on those particular fields).
Basically, I wrote the following stored function, which compares the input list with the database table using the MINUS operator. I prefer MINUS because it should be more performant than a traditional field-by-field comparison, which is no good for performance.
I tested this with a PL/SQL driver and it is working fine.
Is this a good approach for the comparison?
-------------------------------------------
set serverout on size 1000000
create or replace function validate_dial_plan_range_bvoip(
p_iom_order_id in iom_order.iom_order_id%TYPE,
p_dial_plan_range_inlist in BVOIP_DIAL_PLAN_RANGE_OBJ_LIST,
p_cursor in out SYS_REFCURSOR,
--p_dial_plan_range_outlist out BVOIP_DIAL_PLAN_RANGE_OBJ_LIST,
p_err_msg out varchar2)
RETURN number is
ok number(2) := 0;
l_dial_plan_range_list BVOIP_DIAL_PLAN_RANGE_OBJ_LIST := BVOIP_DIAL_PLAN_RANGE_OBJ_LIST();
begin
SELECT BVOIP_DIAL_PLAN_RANGE_OBJ(C.DIAL_PLAN_RANGE_ID,
C.DIAL_PLAN_INFO_ID,
C.PBX_EXT_LENGTH,
C.START_RANGE,
C.END_RANGE,
C.VALID_IND,
C.COUNTRY_CODE,
C.GATEWAY_CITY_CODE)
BULK COLLECT INTO l_dial_plan_range_list
FROM DIAL_PLAN_RANGE C,
(select b.dial_plan_info_id
from voip_info a, dial_plan_info b
where a.iom_order_id=p_iom_order_id
and a.voip_info_id=b.voip_info_id) D
where c.dial_plan_info_id=D.dial_plan_info_id
and c.valid_ind='Y'
;
/* FOR i IN l_dial_plan_range_list.FIRST..l_dial_plan_range_list.LAST LOOP
dbms_output.put_line('DIAL_PLAN_RANGE_ID : '|| i||' '||l_dial_plan_range_list(i).DIAL_PLAN_RANGE_ID) ;
dbms_output.put_line('DIAL_PLAN_INFO_ID : '|| i||' '||l_dial_plan_range_list(i).DIAL_PLAN_INFO_ID) ;
END LOOP;
*/
open p_cursor for
select * from TABLE ( cast(p_dial_plan_range_inlist as BVOIP_DIAL_PLAN_RANGE_OBJ_LIST) )
minus
select * from TABLE ( cast(l_dial_plan_range_list as BVOIP_DIAL_PLAN_RANGE_OBJ_LIST) )
;
RETURN ok;
end validate_dial_plan_range_bvoip;
/
My PL/SQL test driver :
=======================
--set autoprint on
var err_msg varchar2(100);
var ret number;
var x refcursor;
--exec :ret := validate_dial_plan_range_bvoip(31314,:errmsg) ;
--exec :ret := validate_dial_plan_range_bvoip(31433,:errmsg) ;
declare
my_tab BVOIP_DIAL_PLAN_RANGE_OBJ_LIST := BVOIP_DIAL_PLAN_RANGE_OBJ_LIST() ;
begin
my_tab := BVOIP_DIAL_PLAN_RANGE_OBJ_LIST(
    BVOIP_DIAL_PLAN_RANGE_OBJ(1508, 1223, '3', '7324200000', '0010', 'Y', 1, '001'),
    BVOIP_DIAL_PLAN_RANGE_OBJ(1509, 1223, '4', '7324200011', '0110', 'Y', 1, '004'),
    BVOIP_DIAL_PLAN_RANGE_OBJ(1510, 1223, '4', '732420111', '0120', 'Y', 1, '004'),
    BVOIP_DIAL_PLAN_RANGE_OBJ(1511, 1223, '4', '732420121', '0130', 'Y', 2, '005'),
    BVOIP_DIAL_PLAN_RANGE_OBJ(1512, 1223, '4', '732420121', '0130', 'Y', 1, '004'),
    BVOIP_DIAL_PLAN_RANGE_OBJ(1513, 1223, '4', '732420131', '0140', 'Y', 2, '005'),
    BVOIP_DIAL_PLAN_RANGE_OBJ(1514, 1223, '4', '732420131', '0140', 'Y', 1, '004'),
    BVOIP_DIAL_PLAN_RANGE_OBJ(1515, 1223, '4', '732420121', '0130', 'Y', 1, '004'));
:ret := validate_dial_plan_range_bvoip(31433, my_tab, :x, :err_msg);
end;
/
Output: I got the record which is in the input list but not in the DB table, meaning it still needs to be validated downstream.
SQL> print x
DIAL_PLAN_RANGE_ID DIAL_PLAN_INFO_ID PB START_RANGE END_RANGE V COUNTRY_CODE GATEWAY_CITY_CODE
------------------ ----------------- -- --------------- --------------- - ------------ --------------------
1515 1223 4 732420121 0130 Y 1 004
March 23, 2005 - 1:03 am UTC
Minus is brutally efficient, I use it myself for "diffing" two things (tables, result sets, whatever)
I like that NULLS are considered "the same" in this case too.
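The standard two-way diff built on that idea -- rows in one set but not the other, in either direction (t1 and t2 are hypothetical tables of identical shape):

(select * from t1
 minus
 select * from t2)
union all
(select * from t2
 minus
 select * from t1);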
A reader, March 23, 2005 - 10:16 am UTC
Hi Tom:
Thanks for the info and feedback.
Bulk collect
satya, March 23, 2005 - 2:36 pm UTC
When I get this error, how can I handle it?
DECLARE
TYPE all_rows_tab IS TABLE OF user_tables%ROWTYPE index BY BINARY_INTEGER;
l_all_rows_array all_rows_tab;
cursor cur_all_rows is SELECT * FROM user_tables;
BEGIN
OPEN cur_all_rows;
FETCH cur_all_rows BULK COLLECT INTO l_all_rows_array;
CLOSE cur_all_rows;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(SQLERRM);
END;
2:34:02 PM *
2:34:02 PM ORA-06550: line 8, column 42:
2:34:02 PM PLS-00597: expression 'L_ALL_ROWS_ARRAY' in the INTO list is of wrong type
2:34:02 PM ORA-06550: line 8, column 5:
2:34:02 PM PL/SQL: SQL Statement ignored
March 23, 2005 - 6:26 pm UTC
should be of type cur_all_rows%rowtype (or get to 9i at the very least)
A reader, March 23, 2005 - 6:02 pm UTC
I guess you are on 8i. The same code works fine from 9iR2.
Oracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
SQL> get 817_PLS-00597.sql
1 DECLARE
2 TYPE all_rows_tab IS TABLE OF user_tables%ROWTYPE index BY BINARY_INTEGER;
3 l_all_rows_array all_rows_tab;
4 cursor cur_all_rows is SELECT * FROM user_tables;
5 BEGIN
6 OPEN cur_all_rows;
7 FETCH cur_all_rows BULK COLLECT INTO l_all_rows_array;
8 CLOSE cur_all_rows;
9 EXCEPTION
10 WHEN OTHERS THEN
11 dbms_output.put_line(SQLERRM);
12* END;
SQL> @817_PLS-00597.sql
PL/SQL procedure successfully completed.
What is behind "limit"?
Sasa, March 28, 2005 - 2:01 am UTC
Hi Tom,
Could you explain what is behind "limit" when bulk collect into ...limit 100 for example?
Is this some sort of "rownum" functionality?
Regards,
Sasa
March 28, 2005 - 7:50 am UTC
say you were processing 100,000 records.
You do not want to fetch 100,000 records all at once into an array -- too much.
So, you use LIMIT to fetch 100 rows 1,000 times from the 100,000 record result set.
It lets you control how many records at a time you retrieve to process. It is not like rownum at all.
well, I was going to 'slip in a new question'
Scott Mattes, March 28, 2005 - 3:29 pm UTC
but I read your warnings and won't.
I'll just wait till I'm blue in the face from holding my breath (that's my way of saying I wish there were 50 of you, so that there was a time when I would see 'I am now accepting new questions' instead of the 'sorry, large backlog now' that I have only ever seen in the past few years of checking once a day).
March 28, 2005 - 6:15 pm UTC
open for questions right now ;)
when is 'right now'?
Scott Mattes, March 29, 2005 - 8:35 am UTC
I am on 'right now' at 0833 EST and the sign says 'come back later to ask questions'.
So, when was the 'right now' that the question lines were open?
March 29, 2005 - 10:26 am UTC
for the last 6 months, here is when the bulk of new questions was submitted by hour:
ask_tom@ASKUS> select to_char(timestamp,'hh24'), count(*)
2 from WWC_ASK_SUBMITTED_QUESTIONS$
3 where timestamp >= add_months(sysdate,-6)
4 group by to_char(timestamp,'hh24')
5 order by 1;
TO COUNT(*)
-- ----------
00 8
01 1
02 3
03 2
04 15
05 6
06 12
07 10
08 70
09 91
10 113
11 97
12 90
13 96
14 105
15 106
16 44
17 57
18 74
19 52
20 117
21 82
22 61
23 32
24 rows selected.
and by day
1 select to_char(timestamp,'Dy'), count(*)
2 from WWC_ASK_SUBMITTED_QUESTIONS$
3 where timestamp >= add_months(sysdate,-6)
4 group by to_char(timestamp,'Dy')
5* order by 2
ask_tom@ASKUS> /
TO_ COUNT(*)
--- ----------
Fri 106
Thu 142
Wed 153
Tue 154
Sat 233
Mon 275
Sun 281
7 rows selected.
that right now was probably around 9pm last night, the 28th.
How can that be
Scott Mattes, March 29, 2005 - 3:42 pm UTC
'Now' is 1536 EST. According to your query results, almost 2 questions a minute are asked during this time period (maybe a bad assumption -- you are probably on PST).
How are people getting the 'OK to post'? Do they sit there clicking the refresh button until it changes?
How is the 'sorry, call back later I have a backlog' msg set?
March 29, 2005 - 4:58 pm UTC
the first report is the number of questions asked during that hour for the last 6 months.
The second report is the report of the best day of the week to ask.
It was for six months.
A reader, April 14, 2005 - 5:43 pm UTC
How can I bulk collect into an object?
Here is what I am trying to do:
create or replace type address as object( city varchar2(10),
state varchar2(20)
);
create or replace type addr_obj is table of address;
create or replace type empl as object( name varchar2(20),
job varchar2(20),
addr_det addr_obj
);
procedure myproc
is
obj empl;
begin
--this works fine
select name,job
into obj.name,
obj.job
from emp
where deptno=10;
--here i get error
select city,state
bulk collect into obj.addr_det
from emp_addr;
end;
Here is the error message:
PL/SQL: ORA-00947: not enough values
Thanks.
April 14, 2005 - 6:06 pm UTC
select address(city,state)
bulk collect into obj.addr_det
from emp_addr;
note: no create table, no inserts, no testing by me at all......
You have to select out something that fits into the thing you are selecting into. You have a collection of objects, therefore you need select out an object.
A reader, April 15, 2005 - 12:05 pm UTC
Thanks a lot Tom.
I have another problem, hope my explanation is clear:
I have two databases, db1 and db2.
In db1 I have an object and a collection of objects, which I am trying to store in a record and varray in db2.
Here is my code:
---
In db1, I have the following objects at SQL level:
create or replace type address_obj as object(
addr1 varchar2(10),
zip number
);
create or replace type addr_typ is table of address_obj;
create or replace type data as object(
name varchar2(10),
address addr_typ );
--
In db2 I have a package in which I am trying to store object data:
create or replace package p
is
type address is record(
addr1 varchar2(10),
zip number
);
type addr_tab is varray(100) of address;
type details is record(
name varchar2(10),
addr addr_tab );
function f (id in number)return details;
end p;
create or replace package body p
is
function f (id in number)return details
is
v_details details;
v_obj data;
begin
--this works
v_details.name := v_obj.name;
--this gives error
v_details.addr := v_obj.address;
return v_details;
end f;
end p;
I am getting following error:
PLS-00382: expression is of wrong type
Thanks in advance!
April 15, 2005 - 1:08 pm UTC
objects are not extremely dblink friendly.
plsql types are, however; in this case you'll want to use them. you can reference a remote plsql type in a local plsql package.
A reader, April 15, 2005 - 1:15 pm UTC
Thanks a lot!
In the above example, if the objects are in the same schema, could you please show me how I can collect data from an object into a record type?
A reader, April 15, 2005 - 1:33 pm UTC
Once again thanks!
A reader, April 15, 2005 - 1:41 pm UTC
Will the object views degrade performance?
April 15, 2005 - 2:16 pm UTC
what object views?
in general all features will do one of three things when used:
a) make it better
b) make it worse
c) not make it better or worse
"it" being whatever you are using them to do.
Bulk collect or Single Sql
N.Venkatagiri, July 07, 2005 - 7:21 am UTC
Tom,
One program was written using bulk collect (ETL-related), and when we were analysing it for fine tuning, we found after analysis with statistics that one of the procedures using bulk collect could be written as a single SQL statement; tested, it works very fast (5 times faster than the old one).
One of our development staff says that bulk collect
will be faster and more flexible than the single SQL in the tuned procedure.
How can I answer that gentleman? Or is my analysis wrong?
Thanks
July 07, 2005 - 9:44 am UTC
get him a copy of expert one on one Oracle or Effective Oracle by Design.
but -- challenge him, say "ok, here we are 5 times faster -- now you write the slow by slow (oops -- meant row by row) code that beats that and then we'll all be convinced"
In general, a single SQL statement is going to outperform slow by slow code, bulk or no bulk.
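That is, when the procedural loop is only moving rows from one table to another, one set operation can usually replace the whole bulk collect / FORALL pair -- a sketch with hypothetical tables t_src and t_dst:

-- instead of: open cursor, bulk collect into arrays, forall ... insert
insert into t_dst ( id, val )
select id, val
  from t_src
 where processed_flag = 'N';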
Bulk Collect : All rows not inserted
Ranju, October 20, 2005 - 1:52 pm UTC
I had a problem where all the desired rows were not inserted into the table using the bulk collect / forall syntax. I found my mistake, and also read the whole thread to see others' problems on the same topic. It saved me time.