How to improve timings of query executed on a DB link?

Question and Answer

Chris Saxon

Thanks for the question.

Asked: September 29, 2021 - 10:23 am UTC

Last updated: February 06, 2024 - 11:23 am UTC

Version: Oracle 19c

Viewed 10K+ times

You Asked

Hello Team,

First of all, thanks for all the good work you are doing.

It would be great if you can help us understand how we can improve timings of query executed on a DB link.

Consider an Oracle instance (DB1) that can talk to another Oracle instance (DB2) via a DB link. A select statement fetches ~100,000 values from DB1 and passes them to a function defined on DB2. The function does some operations on the values and returns the results. All the data is in DB1, but the function has to be in DB2.

We have tried multiple things, and using the DRIVING_SITE hint helped to reduce timings. However, it comes with its own issues (code rewrite, the hint can be ignored, issues with the techniques used for plan stabilization and query transformation, non-applicability to DML). Things like switching to bulk processing did not help much.

Anything else that we can try?
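For illustration, this is the shape of the statements in question (table and function names here are placeholders, not our real objects):

```sql
-- Hypothetical shape of the problem query: every fetched row is passed
-- to a function that lives on the remote instance DB2.
SELECT t.id,
       process@db2(t.val)   -- remote function call over the DB link
FROM   big_table t;          -- ~100,000 rows, all local to DB1
```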

We are looking for techniques that can be applied to a large number of statements.

Thanks,
AB

and Chris said...

I discussed options for tuning distributed queries in an Office Hours session a while ago:



https://asktom.oracle.com/pls/apex/asktom.search?oh=5684

All the data is in DB1 but the function has to be in DB2.

Why does the function have to be in DB2?

The best way to speed this query up is to stop transferring the data around and process it all locally. Creating the function on DB1 and avoiding DB links is the single biggest gain you can make performance-wise.

Or - even better - moving the logic out of the function and into pure SQL should also help; this isn't always practical however.
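To illustrate with made-up names (process standing in for your function), the goal is to turn a per-row remote call into purely local work:

```sql
-- Per-row call over the DB link: a network round trip per invocation
SELECT process@db2(t.val) FROM big_table t;

-- Same logic compiled locally on DB1: no DB link involved
SELECT process(t.val) FROM big_table t;
```

And if the function's logic can be expressed as a plain SQL expression (CASE, string or date functions, etc.), inlining it also removes the SQL-to-PL/SQL context switch per row.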


Comments

Reason for moving the function outside DB1

A reader, October 01, 2021 - 5:12 am UTC

Thank you for the response Chris.

The function invokes a C library via an external procedure call. The library does some processing and returns the result to the function.

The function has to be moved to DB2 (a non-ExaCC DB) because DB1 is on ExaCC and we are not allowed to put the C library on ExaCC.

Also, due to some security restrictions, we cannot move the logic from the C library into the database.
Connor McDonald
October 04, 2021 - 4:47 am UTC

I spoke to some of our Exa CC people - they replied

While we recommend not installing additional software on the servers (especially since additional RPMs will affect OS patching/upgrades), nothing says you are not allowed to install additional software….

One option you could consider is running the additional software in a Docker container, as this does not interfere with OS updates. See: Decoupling RPM Dependencies, running Oracle R Distribution (ORD) on Exadata Database Nodes in Docker (Doc ID 2257680.1)

If this is the case, you may want to put the software onto an ACFS folder, to have a common location across the cluster (and not have to worry about updating each node individually).

Any tips or tricks when the remote DB is not Oracle?

Justin, January 29, 2024 - 11:25 pm UTC

Greetings!

We have some scenarios where we need to extract data from a remote MSSQL environment, and we need to limit the rows extracted based on an update flag in a secondary table.

When running this natively on MSSQL, you can easily facilitate this through an Inner Join, and the performance is adequate. When we have appropriate access on the MSSQL side, we'll create a MSSQL view to perform this work, and then issue a select against the remote view. This performs as expected.

We now have a scenario where we cannot create a remote view and have to perform the row limitation on our side. e.g.

Select
 a.*
From
 Table_A@Db_Link a
 Inner Join Table_B@Db_Link b
  on a.Id = b.Id
Where
 b.Update_Date >= Trunc(Sysdate)


I've tried various hints such as /*+ DRIVING_SITE(a) */ and /*+ NO_MERGE */, but the result is the same: both datasets are brought back to the local environment in their entirety, and then the join is performed.

Any guidance or quick lessons to be learned here?

Thanks!
-Justin
Chris Saxon
February 06, 2024 - 11:23 am UTC

I don't know of a way to do this. Your performance-tuning options when querying over database links to a different database system will always be limited.

In this specific query, it's possible that the whole of table B is returned, then joined and filtered in Oracle Database. You could try using a subquery to ensure table B is filtered first, e.g.:

Select
 a.*
From
 Table_A@Db_Link a
 Inner Join ( select * from Table_B@Db_Link b where b.Update_Date >= Trunc(Sysdate) ) b
  on a.Id = b.Id
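To verify where the filtering happens, check the execution plan: for distributed queries, the "Remote SQL Information" section of the dbms_xplan output shows the statement actually shipped over the link for each REMOTE operation (the query below is the subquery rewrite above, unchanged):

```sql
EXPLAIN PLAN FOR
SELECT a.*
FROM   Table_A@Db_Link a
       INNER JOIN ( SELECT * FROM Table_B@Db_Link b
                    WHERE  b.Update_Date >= TRUNC(SYSDATE) ) b
       ON a.Id = b.Id;

-- "Remote SQL Information" in this output reveals whether the
-- Update_Date filter was pushed to the remote site.
SELECT * FROM TABLE(dbms_xplan.display);
```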

