there is no way you want to have 300,000 records in an array - do you understand the amount of PGA memory you are using?
one hundred - great.
five hundred - maybe ok.
one thousand - really rare.
300,000 at once - I would not allow your code into production.
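the reasonable pattern here is to fetch in bounded batches with BULK COLLECT ... LIMIT, processing each batch before fetching the next. a sketch (the cursor query and batch size of 500 are just illustrative - pick a limit in the hundreds, not the hundreds of thousands):

```sql
declare
    cursor c is select * from big_table.big_table;
    type array is table of c%rowtype;
    l_data array;
begin
    open c;
    loop
        -- fetch at most 500 rows into the array each pass
        fetch c bulk collect into l_data limit 500;

        -- process l_data here (e.g. a FORALL insert/update)

        -- %notfound is set on the fetch that returns fewer than
        -- LIMIT rows, so process BEFORE testing it - the last
        -- (partial) batch is still in l_data
        exit when c%notfound;
    end loop;
    close c;
end;
/
```

this keeps PGA use flat and predictable no matter how many rows the cursor returns - you get virtually all of the bulk-fetch performance benefit with a tiny fraction of the memory.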
ops$tkyte%ORA10GR2> @mystat "pga memory max"
ops$tkyte%ORA10GR2> set echo off
NAME VALUE
--------------------------------------------- ----------
session pga memory max 1044052
ops$tkyte%ORA10GR2> declare
2 type array is table of big_table.big_table%rowtype;
3 l_data array;
4 begin
5 select * bulk collect into l_data from big_table.big_table where rownum <= 300000;
6 end;
7 /
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> @mystat2
ops$tkyte%ORA10GR2> set echo off
NAME VALUE DIFF
--------------------------------------------- ---------- ------------------
session pga memory max 174190164 173,146,112
big_table is of average width there - a copy of ALL_OBJECTS over and over, with one extra primary key column (a number).
I would not permit a program to eat up 170MB in an array - that is just wasteful - and it is really hard to manage a data structure that big - you get diminishing marginal returns.
Please - be a little more "reasonable" in your data structures there. I'm saying this in general - in all of your code.
Short of that, use DML error logging if you expect tens of thousands of errors like that - the failed rows will be logged into a table, you can "fix" them and then bulk insert them from there.
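a sketch of the DML error logging approach (table names T, STAGE and the tag 'load 1' are just placeholders - DBMS_ERRLOG creates the ERR$_T log table for you):

```sql
-- one-time setup: create the error log table for T (named ERR$_T by default)
exec dbms_errlog.create_error_log( 'T' );

-- the insert no longer fails on the first bad row; offending rows
-- go into ERR$_T with the Oracle error code/message and your tag
insert into t
select * from stage
log errors into err$_t ('load 1') reject limit unlimited;

-- review, fix, and re-insert the rejects from the log table
select ora_err_number$, ora_err_mesg$ from err$_t;
```

the good rows load in one pass, and the bad rows sit in a table where you can correct them with SQL instead of handling 300,000 records procedurally in memory.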