
Question and Answer

Chris Saxon

Thanks for the question.

Asked: August 07, 2020 - 1:44 am UTC

Last updated: November 26, 2024 - 2:16 pm UTC

Version: Oracle 11.2.0.4

Viewed 1000+ times

You Asked

Hi Team,

We have PeopleSoft systems connected to an Oracle 11.2.0.4 database. We run monthly financial close activities at the beginning of every month, and a lot of jobs run as part of this, each internally accessing large numbers of records.

The database is 2.5 TB in size; the important tables are partitioned and hold millions of rows. Each month-end activity adds another 100 GB of data to the database.

We face performance issues in these jobs intermittently. For example: last month we had no issues with any of the jobs, but this month-end the allocation jobs (one of the month-end jobs) ran for 5 hours, having completed in 1 hour last time.

There are no changes in the queries hitting the database. We have also taken an AWR report: no other queries were making this query slower, and no other jobs were taking database resources during that time.

How do we approach this situation? If we go for query tuning, the same queries ran in very minimal time last time, and next month we may not see any issue either (from our previous experience). So do we really need to go for query tuning, or is there another approach?

Also, if the business reports slowness in a query execution, currently we can only monitor and see whether any other jobs are running, whether any other queries are blocking the table, and so on. Is there anything we can do so that the currently running query runs faster, without cancelling it?

I know this is a lot of information and questions. Please let us know how to proceed in this kind of situation. Also, how can we explain to the business why the job was slow only this month? We could not find any abnormalities in the AWR report or on the database side.

Thanks.
Ajmal Palliyal
+1-219-238-4933


and Connor said...

There is always a clash of goals when it comes to SQL performance:

a) When things are running well, we do *not* want the SQL plans to change

b) But when the data changes, we *want* the SQL plans to reflect the changed data

Of course, the ideal is that in (b) we only change SQL plans when we know things are going to get better, not worse. That is the optimizer's aim, but it does not always succeed.
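A quick way to verify whether a plan change is behind the slow run is to compare plan hash values and elapsed times across AWR snapshots. Here is a sketch (it assumes you have identified the statement's SQL_ID; the &sql_id substitution variable is a placeholder, and AWR queries require the Diagnostics Pack licence):

-- Show, per AWR snapshot, which plan the statement used and how long it ran
SELECT sn.begin_interval_time,
       st.plan_hash_value,
       st.executions_delta,
       ROUND(st.elapsed_time_delta / 1e6) AS elapsed_secs
FROM   dba_hist_sqlstat  st
JOIN   dba_hist_snapshot sn
  ON   sn.snap_id         = st.snap_id
 AND   sn.dbid            = st.dbid
 AND   sn.instance_number = st.instance_number
WHERE  st.sql_id = '&sql_id'
ORDER  BY sn.begin_interval_time;

If the slow month shows a different PLAN_HASH_VALUE from the fast months, that is your explanation for the business.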

I would recommend taking a look at SQL Plan Management (SPM) for a selection of business-critical queries. SPM will lock a good running plan into place even if the optimizer finds what it thinks is a better one. The big benefit of SPM is that we still *store* the new plan the optimizer found, just without *using* it. You can then (from time to time) look at these new plans and decide whether they are *truly* better.
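As a minimal sketch (assuming you have already identified a good plan for the statement in the cursor cache; the &sql_id value is a placeholder), capturing that plan as a baseline looks like this:

-- Load the statement's current plan from the cursor cache
-- as an accepted SQL plan baseline
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&sql_id');
  DBMS_OUTPUT.PUT_LINE(l_plans || ' plan(s) loaded');
END;
/

-- Later, review any new plans the optimizer has found but is not yet using
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;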

Optimizer product manager Nigel Bayliss has an excellent white paper on SPM

https://blogs.oracle.com/optimizer/new-sql-plan-management-white-paper



Comments

intermittent slow INSERT...VALUES

Narendra, November 17, 2024 - 10:49 pm UTC

Hello Chris/Connor,

Setup:
Database => Oracle 19.23 ExaC@C (4-node RAC)
JDBC => 23.3 ojdbc8.jar

I am trying to diagnose an intermittent performance issue where the application, and a SaaS enterprise monitoring tool, report that a relatively simple INSERT...VALUES takes anywhere from 3 seconds to 2 minutes. The table has 3 columns, no LOB columns, a single partition, a local non-unique index, and no PK/UK constraint.
However, what makes it very difficult, if not impossible, is a combination of the reasons below:
1. No pattern/sequence of events has been identified that leads to the slow performance
2. When the application/SaaS tool reports slow executions, there is literally no clue in AWR/ASH. The ASH data (GV$ACTIVE_SESSION_HISTORY or DBA_HIST_ACTIVE_SESS_HISTORY) reports at most one row per execution.
3. The issue cannot be reproduced in any of the non-production environments
4. About 95% of executions complete within milliseconds (as expected), but the other 5% run slow and the related customer-facing application journey times out.

Any idea how I can even begin investigating this? It is (rightly) considered risky to enable any kind of logging/tracing, either on the application side or on the database side, as the logs/traces can quickly grow huge and fill up the file system. We would literally have to have someone monitoring the file system, on the app server or the DB server, to manage the space and trace files... and that too for days, which makes this very difficult to use.

In an ideal world, what would help is the ability to trace/log only those executions which take more than a defined threshold. But SQL trace does not seem to have such an option, and neither does the Oracle JDBC driver.

Any suggestions on how I can approach this issue?
My line of thought so far is:
1. First, find out the discrepancy, if any, between the response time reported by the app/SaaS monitoring tool and the database (see the sketch after this list)
2. If no discrepancy is found between the app/SaaS tool and the database about the execution time of the INSERT...VALUES, then (somehow) get more details on the slow executions from the database (like SQL trace output)
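For step 1, a sketch of the database-side check (the &sql_id substitution is a placeholder for the INSERT's actual SQL_ID):

-- Average database-side elapsed time per execution, in milliseconds.
-- If this is tiny while the app reports seconds, the time is being lost
-- outside the database (application, connection pool, network).
SELECT inst_id,
       sql_id,
       executions,
       ROUND(elapsed_time / GREATEST(executions, 1) / 1e3) AS avg_elapsed_ms
FROM   gv$sql
WHERE  sql_id = '&sql_id';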

Thanks in advance

Chris Saxon
November 26, 2024 - 2:16 pm UTC

A single INSERT .. VALUES statement into a 3-column table should be essentially instant. Things that might affect this are:

- Blocking on primary/unique key values
- Insert triggers

I'm guessing you've ruled these out and there are no other obvious causes. In which case enabling some form of tracing is the way forward. Start logging a subset of the servers to find the problem.
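For example (a sketch; the service and module names below are placeholders for whatever your connection pool actually reports), DBMS_MONITOR can trace just the sessions of one service/module rather than the whole database:

-- Trace only sessions of one service/module, including wait events
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name => 'MY_APP_SVC',
    module_name  => 'JDBC Thin Client',
    waits        => TRUE,
    binds        => FALSE);
END;
/

-- Switch it off once the slow executions have been captured
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(
    service_name => 'MY_APP_SVC',
    module_name  => 'JDBC Thin Client');
END;
/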

to manage the space and trace files... and that too for days, which makes this very difficult to use.

Surely you can set up some kind of cleanup job to remove the unwanted files?
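As a sketch (the ADR home path and the 3-day retention below are placeholders to adjust), adrci can purge old trace files on demand or from a scheduled job:

adrci exec="set homepath diag/rdbms/mydb/MYDB1; purge -age 4320 -type trace"

Here -age is in minutes, so 4320 keeps roughly three days of trace files.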

there is literally no clue in AWR/ASH

That strongly suggests the problem is somewhere in the application/network rather than the database. End-to-end tracing really is the way forward here though.
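One way to target it (a sketch; the INSERT_DIAG tag is made up for illustration) is end-to-end tracing by client identifier: the application tags the sessions you care about, and the database traces only those sessions:

-- In the application's session (the Oracle JDBC driver can also set this
-- via java.sql.Connection.setClientInfo):
BEGIN
  DBMS_SESSION.SET_IDENTIFIER('INSERT_DIAG');
END;
/

-- Then trace only sessions carrying that identifier
BEGIN
  DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(
    client_id => 'INSERT_DIAG',
    waits     => TRUE,
    binds     => TRUE);
END;
/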
