PL/SQL procedure freezes while fetching data from dblink

Connor McDonald

Thanks for the question, Anatolii.

Asked: August 29, 2025 - 8:49 am UTC

Last updated: September 02, 2025 - 1:33 am UTC

Version: Oracle Database 19c Enterprise Edition

Viewed 100+ times

You Asked

I encountered an inexplicable freeze of a PL/SQL procedure while trying to fetch and insert data into a table; however, the same operation runs successfully when issued as a SQL query from any IDE.
I am trying to implement an ETL process in a data warehouse database.
I need to load data from a remote database's table into a local one. I have low privileges on the remote database, meaning I can't tune the network or create materialized view logs there.
The target table is 'long': the daily data to be transferred is about one million rows, 7 columns wide, mostly numeric, with no large objects.
I created a db link and a simple view representing the remote table in the local database, and granted the SELECT privilege on it to the 'DATAW' schema, which is designed for data storage.

In 'DATAW' I created a procedure for merging data into an identical local table. Something like this:
create procedure merge_t_data
  (p_days_offset in number)
as
begin
  merge /*+ APPEND */ into t_data_local tgt
  using (
    select col_id, col2, ..., col7
    from   pdbadmin.v_data_remote
    where  updated_at >= trunc(sysdate) - interval '1' day * p_days_offset
  ) src
  on (tgt.col_id = src.col_id)
  when matched then update
    set ...
  when not matched then
    insert
    ...;
end;

When I run the procedure, the session sits on the wait event 'SQL*Net message from dblink' or 'SQL*Net more data from dblink', which stays the same forever.
When I check incoming traffic on the server while the procedure is running, I see that none arrives at all.
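A minimal way to watch the stuck session from a second session (the sid value is a placeholder for the loading session's actual id):

-- what is the loading session currently waiting on, and for how long?
select sid, event, state, seconds_in_wait
from   v$session
where  sid = 123;  -- placeholder sid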
HOWEVER

When I run the same merge operation as a plain query:
merge /*+ APPEND */ into t_data_local tgt
using (
  select col_id, col2, ..., col7
  from   pdbadmin.v_data_remote
  where  updated_at >= trunc(sysdate) - interval '1' day * 3
) src
on (tgt.col_id = src.col_id)
when matched then update
  set ...
when not matched then
  insert
  ...;

it runs successfully: I see incoming traffic of up to 2 Mb, and the query finishes after roughly 10 minutes.
I am the only user of the database; nobody else works with it for now.
I have also tried inserting the data into a temporary table, using a cursor with FETCH ... BULK COLLECT, and running EXECUTE IMMEDIATE in the procedure; the result was the same - execution freezes.
It is also worth mentioning that I successfully implemented the ETL process for a second table. That table is much wider: the daily data to be transferred is about 50k rows and it has more than 20 columns. I did it with a similar merge procedure, which runs successfully, unlike the one discussed above.


I want to know whether it is possible to get my merge procedure for the 'long' table to run successfully, or what other solutions to this problem might be.

and Connor said...

Ideally we need to see the execution plans (or a SQL Monitoring report), but I'll hazard a guess.

A merge is essentially a join, which can be one of three types:

- nested loop
- sort merge
- hash

In all three cases the join is performed on "one node", in the sense that one node drives the join and the other node feeds data to it (all at once for hash and sort merge joins, incrementally for nested loops), but the "act" of joining is done on a single node.

So my hypothesis here is that the plan for the procedure-based merge is sending data in the "wrong" direction, i.e., the join is being performed in such a way that a larger volume of data is being shipped across the link.

You can control this with the DRIVING_SITE hint, so perhaps give that a try.
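For illustration, here is a sketch of where the hint could go in the statement from the question (the col3 to col6 names and the SET/INSERT lists are assumptions, and DRIVING_SITE is documented for queries rather than DML, so it is not always honored inside a MERGE - verify the resulting plan):

merge into t_data_local tgt
using (
  -- ask the optimizer to drive the join from the site of alias r;
  -- for a view, the hint may need to reference a base table inside it
  select /*+ DRIVING_SITE(r) */
         col_id, col2, col3, col4, col5, col6, col7
  from   pdbadmin.v_data_remote r
  where  updated_at >= trunc(sysdate) - interval '1' day * p_days_offset
) src
on (tgt.col_id = src.col_id)
when matched then update
  set tgt.col2 = src.col2  -- remaining assignments follow the same pattern
when not matched then
  insert (col_id, col2, col3, col4, col5, col6, col7)
  values (src.col_id, src.col2, src.col3, src.col4,
          src.col5, src.col6, src.col7);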


Comments

I found the solution

Anatolii, September 03, 2025 - 12:50 pm UTC

After analyzing the explain plans for the procedure run and the .sql script run, I found a difference in the way the remote data was filtered.
I ran the .sql operation as follows:
merge /*+ append */ into local_table tgt
using (
  select col1, col2, ... from pdbadmin.remote_view
  where updated_at >= trunc(sysdate) - interval '1' day * 1
) src on ( ...

and the explain plan revealed the following filter being pushed to the remote site:
where "updated_at" >= cast(trunc(:1)-interval '+000000001 00:00:00.000000000' day(9) to second(9) as timestamp)

but in the PL/SQL procedure I had defined the where clause as follows:
merge /*+ append */ into local_table tgt
using (
  select col1, col2, ... from pdbadmin.remote_view
  where (p_days_offset is null
         or updated_at >= trunc(sysdate) - interval '1' day * p_days_offset)
) src on ( ...

supposing that it would be executed the same way.
But the explain plan revealed the following filter being pushed to the remote site:
where :1 is null or "updated_at" >= cast(trunc(:2)- :3 as timestamp)

The plan also showed exactly twice as many resources consumed, for some reason.
So the reason the PL/SQL froze could be an improper filtering clause pushed to the remote server, leading to a bad execution plan.
I achieved a successful run of the PL/SQL procedure by separately calculating the filtering value and passing it to the merge operation, like this:
v_days_offset := trunc(sysdate) - interval '1' day * p_days_offset;  -- v_days_offset is declared as date

merge /*+ append */ into local_table tgt
using (
  select col1, col2, ... from pdbadmin.remote_view
  where updated_at >= v_days_offset
) src on ( ...
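Putting that together, a minimal sketch of the corrected procedure (the v_cutoff name and the two-column lists are placeholders):

create or replace procedure merge_t_data (p_days_offset in number)
as
  v_cutoff date;  -- computed locally, so only a simple date bind crosses the link
begin
  v_cutoff := trunc(sysdate) - interval '1' day * p_days_offset;

  merge into local_table tgt
  using (
    select col1, col2
    from   pdbadmin.remote_view
    where  updated_at >= v_cutoff
  ) src
  on (tgt.col1 = src.col1)
  when matched then update set tgt.col2 = src.col2
  when not matched then insert (col1, col2) values (src.col1, src.col2);
end merge_t_data;

With the value precomputed, the remote predicate reduces to a plain date comparison against a single bind, just like in the .sql version's plan.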
