Question and Answer

Connor McDonald

Thanks for the question, Balaji.

Asked: September 16, 2015 - 6:50 pm UTC

Last updated: September 18, 2015 - 4:12 am UTC

Version: 11.2.0.4.0

Viewed 1000+ times

You Asked

Hi Tom,

We are encountering ORA-01044: size 300000000 of buffer bound to variable exceeds maximum 268431360.

Basically, we build a list of 50K records in Python and pass that list to a stored procedure, which accepts the list and inserts the values into the table.

Below are the Stored Procedure details.

TYPE data_array IS TABLE OF VARCHAR2(32767) INDEX BY PLS_INTEGER;

PROCEDURE pkg_bulk(p_val_list IN OUT NOCOPY data_array)
IS
BEGIN
  IF p_val_list IS NOT NULL
  THEN
    FORALL i IN 1 .. p_val_list.COUNT
      INSERT INTO XYZ VALUES p_val_list(i);
  END IF;
EXCEPTION
...
END;

Python Code
---------

Step 1: Execute the SQL query.
Step 2: For each row in the cursor, append it to the list (i.e. LIST.append(cursor_result)). Once the count reaches 50K, call the stored procedure pkg_bulk.
Step 3: Repeat step 2 while data remains.
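The loop in steps 1-3 can be sketched as follows. This is a minimal sketch, not the poster's actual code: the chunking helper is generic Python, and the array-bind call is shown only as a comment with hypothetical names (plsql_cur, pkg.pkg_bulk).

```python
BATCH_SIZE = 50_000  # the batch size that triggers ORA-01044 here

def batches(rows, size):
    """Yield successive lists of at most `size` rows from an iterable cursor."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:           # final, possibly short, batch
        yield batch

# Hypothetical use against the stored procedure (cx_Oracle-style binding):
# for batch in batches(query_cursor, BATCH_SIZE):
#     arr = plsql_cur.arrayvar(cx_Oracle.STRING, batch)  # bind as index-by table
#     plsql_cur.callproc("pkg.pkg_bulk", [arr])
```

The ORA-01044 error is raised at bind time, i.e. at the (commented) arrayvar/callproc step, before the procedure ever runs.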

We get the error in step 2. When we reduce the batch limit from 50K to 25K records, it runs fine.

We would like to know the root cause of the issue - ORA-01044: size 300000000 of buffer bound to variable exceeds maximum 268431360.

The database has sufficient PGA memory (3 GB).

Which database parameter is the 268431360 limit that Oracle is complaining about?

Please advise.

Thanks,
Bala

and Connor said...

There is a limit of 256MB for passing host arrays back and forth to the database.

SQL> select 256 * 1024 * 1024 from dual;

256*1024*1024
-------------
    268435456
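The numbers in the error message line up with that limit if the client allocates a fixed buffer size per array element - an assumption, but one the "25K works, 50K fails" observation supports (the reported maximum, 268431360, is just slightly under the 256 MB figure):

```python
MAX_BUFFER = 268_431_360   # maximum reported by ORA-01044 (just under 256 MB)
reported = 300_000_000     # buffer size the client tried to bind
rows = 50_000              # elements in the Python list

bytes_per_element = reported // rows        # bytes allocated per element
max_rows = MAX_BUFFER // bytes_per_element  # largest batch at this element width

print(bytes_per_element, max_rows)          # 6000 bytes/element, ~44738 rows max
```

So at roughly 6000 bytes per element, anything much above ~44K rows per call overflows the host-array buffer, regardless of how much PGA the database has.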


In reality, once your bulk bind size starts to get beyond (say) one thousand rows at a time, I'd be very surprised if you see any kind of performance benefit at all.

Try a timing test with 25k rows, then 10k rows, then 1k rows, where the total number of rows inserted is the same, just the number of times you loop around is increased.
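A rough harness for that timing test might look like this. It is a sketch: insert_batch stands in for the real array bind and procedure call, and the row payload is just a placeholder.

```python
import time

def timed_batches(total_rows, batch_size, insert_batch):
    """Insert total_rows rows in chunks of batch_size; return elapsed seconds."""
    start = time.perf_counter()
    for lo in range(0, total_rows, batch_size):
        hi = min(lo + batch_size, total_rows)
        insert_batch(list(range(lo, hi)))  # stand-in for the real bind + call
    return time.perf_counter() - start

# Same total rows each run, only the batch size varies:
# for size in (25_000, 10_000, 1_000):
#     print(size, timed_batches(100_000, size, do_insert))
```

If the point above holds, the three timings should be close, while only the smaller batch sizes stay safely under the 256 MB bind limit.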

Search this site for "bulk bind" and you'll find plenty of examples where a bind size even as low as 50 is very close to optimal.

Hope this helps.


Comments

Balaji, September 17, 2015 - 11:37 am UTC

Hi,

Thanks for the response!

Yes, we do observe a performance benefit at 1K, 10K, 25K and 50K.
This Python code will run as a thread for hundreds of feeds, so 25K-50K batches would give us a performance benefit.

When we tested in the DEV environment, the Python code was passing 50K, 100K and even more records, and it worked fine.

But the same is not working in UAT environment.

We are interested to see which database parameter limits this.

Thanks again!

Thanks,
Bala
Connor McDonald
September 18, 2015 - 4:12 am UTC

There is not a database parameter (to my knowledge) that you can access and/or change.


More to Explore

PL/SQL demos

Check out more PL/SQL tutorials on our LiveSQL tool.

PL/SQL docs

PL/SQL reference manual from the Oracle documentation library