Consider a generic set of type definitions for use within a package.
TYPE TYPE_TABN1 IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
TYPE TYPE_TABV1 IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
TYPE TYPE_REC1 IS RECORD
(COL1 VARCHAR2(100),
...
COL200 VARCHAR2(100));
TYPE TYPE_TAB1REC1 IS TABLE OF TYPE_REC1 INDEX BY BINARY_INTEGER;
TYPE TYPE_RC1REC1 IS REF CURSOR RETURN TYPE_REC1;
I have a couple of questions:
1. What is the impact of where the types are defined: package spec vs. package body vs. within the top-level procedure that is called externally?
We have a shell script multi-threading calls to the top-level procedure for partition-based processing, typically 40 simultaneously. So the question is primarily about whether doing 40 of these type definitions at the same time (in the top-level procedure) is better, worse, or no different.
2. For the type which is a table of records (e.g. TYPE_TAB1REC1): if data is bulk collected (with a limit of 5000) into a variable of that type and the last 100 columns are always "populated" with null, is there any memory impact compared to defining the record with only the first 100 columns, which actually contain data?
Thanks.
1. Where you declare the types only affects the scope in which they're visible. Place them in the package spec and any code with execute rights on the package can use them. Place them in the package body and only program units inside that package can. Declare them in the top-level procedure and they're local to it. So it comes down to where and how you want to reuse these types in other code. As for the 40 concurrent calls: the type definitions are compiled once, when you create the package or procedure, and each session works with its own variables, so declaring them in the procedure vs. the package makes no performance difference however many sessions run at once.
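For example, a package sketch showing the two scopes (type_demo and the type names are just illustrative):
create or replace package type_demo as
  -- visible to any code that can execute this package
  type num_tab is table of number index by binary_integer;
end type_demo;
/
create or replace package body type_demo as
  -- visible only to program units inside this package body
  type name_tab is table of varchar2(10) index by binary_integer;
end type_demo;
/
Other code can declare variables of type_demo.num_tab; nothing outside the body can reference name_tab.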
2. We can do a quick test. Let's create a table with 101 columns and 1,000 rows. Only the first column is populated; the last 100 are null:
declare
  create_tab varchar2(4000);
begin
  create_tab := 'create table t (id int';
  for i in 1 .. 100 loop
    create_tab := create_tab || ', c' || i || ' int';
  end loop;
  create_tab := create_tab || ')';
  execute immediate create_tab;
end;
/
insert into t (id)
select level from dual connect by level <= 1000;
commit;
We can measure the PGA usage with this get_stat function (HT to Tim Hall):
create or replace function get_stat (p_stat in varchar2) return number as
  l_return number;
begin
  select ms.value
  into   l_return
  from   v$mystat ms
  join   v$statname sn
  on     ms.statistic# = sn.statistic#
  and    sn.name = p_stat;
  return l_return;
end get_stat;
/
Then run two tests: one bulk collecting just the ID into a single-column array, the other bulk collecting all 101 columns of the table into an array of records. I'll connect with a new session each time to reset the PGA stats:
conn chris/chris
set serveroutput on
declare
  ids dbms_sql.number_table;
  mem pls_integer;
begin
  mem := get_stat('session pga memory');

  select id
  bulk   collect into ids
  from   t;

  DBMS_OUTPUT.put_line(
    'Just IDs : ' || (get_stat('session pga memory') - mem)
  );
end;
/
Just IDs : 65536
conn chris/chris
set serveroutput on
declare
  type tarr is table of t%rowtype index by binary_integer;
  tab tarr;
  mem pls_integer;
begin
  mem := get_stat('session pga memory');

  select *
  bulk   collect into tab
  from   t;

  DBMS_OUTPUT.put_line(
    'Full table: ' || (get_stat('session pga memory') - mem)
  );
end;
/
Full table: 5570560
So that's 65,536 bytes of memory for just the IDs vs. 5,570,560 for all the columns: roughly 85 times more, even though 100 of those columns hold nothing but nulls. So yes, null fields in a record still consume PGA; defining the record with only the columns that actually contain data will save memory.
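To see the trimmed-record approach in action, here's a sketch of the same test bulk collecting into a record holding only the ID column (run in a fresh session, using the same get_stat function; I'd expect a figure close to the single-column array, not the full rowtype):
conn chris/chris
set serveroutput on
declare
  -- record with only the column we actually populate
  type trec is record (id t.id%type);
  type tarr is table of trec index by binary_integer;
  tab tarr;
  mem pls_integer;
begin
  mem := get_stat('session pga memory');

  select id
  bulk   collect into tab
  from   t;

  DBMS_OUTPUT.put_line(
    'ID record : ' || (get_stat('session pga memory') - mem)
  );
end;
/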