
Question and Answer

Tom Kyte

Thanks for the question, Akash.

Asked: October 02, 2006 - 10:48 am UTC

Last updated: October 09, 2012 - 12:09 pm UTC

Version: 9.2.0

Viewed 100K+ times

You Asked

Hi,

I want to find out the top 10 most expensive SQL statements on the database; my database is Oracle 9.2.0.6. Can you please send me the script, or help me create one?

and Tom said...

define to me first your concept of "most expensive"

a) Is it logical IO's per execution?
b) Is it physical IO's per execution?
c) Is it cpu used?
d) Is it based on number of parse calls?
e) Is it elapsed time used?
f) Is it number of executions?
g) Is it size consumed in the shared pool?
h) Is it number of child versions found?

is it some complex f(a,b,c,d...h) from above?


but in any case, statspack will rather nicely present a, b, c, d ... h to you.

Use it.
http://docs.oracle.com/cd/B10501_01/server.920/a96533/statspac.htm#34837
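For anyone new to it, the basic Statspack workflow from SQL*Plus looks roughly like the sketch below; the install script prompts (PERFSTAT password, tablespaces) may vary by release:

```sql
-- Install Statspack (creates the PERFSTAT schema; prompts for a
-- password, a default tablespace and a temporary tablespace)
@?/rdbms/admin/spcreate.sql

-- Take a snapshot before and after the period of interest
exec statspack.snap;
-- ... let the workload run ...
exec statspack.snap;

-- Generate a report between two snapshots; the script lists the
-- available snap ids and prompts for a begin and end snapshot
@?/rdbms/admin/spreport.sql
```

The "SQL ordered by ..." sections of the resulting report cover each of the "expensive" definitions a) through h) above.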



Rating: (26 ratings)


Comments

A reader, October 02, 2006 - 1:59 pm UTC


A reader, October 02, 2006 - 2:01 pm UTC


give me the script related to that.

Akash, October 02, 2006 - 2:04 pm UTC

Top 10 sqls based on logical as well as physical reads on the database.
I don't want to use Statspack; I need the script for that.

Tom Kyte
October 02, 2006 - 4:44 pm UTC

Umm, guess what statspack is....

@?/rdbms/admin/statspack.sql

there you go, you've got a script (and thanks for the other two "blank" reviews)


I don't get it, you want "a script" here is a good one for people that want a simple script to tell them what to do:

select 'all is well that ends well' the_meaning_of_life
from dual
/


statspack gives you precisely what you are asking for, go for it.


Use Statspack!

Dana, October 02, 2006 - 2:20 pm UTC

Why do you want to know only the top ten most expensive SQL? Statspack gives you that by multiple categories, plus information on why the top ten are expensive. Really, really, you want to use Statspack. Statspack gives the overall context of database load profiles that simple scripts never will. In addition, I find the @sprepsql highly useful for troubleshooting; you won't get that from a simple "top ten" script. Use Statspack!


Why can't you script statspack?

A reader, October 02, 2006 - 2:54 pm UTC

Statspack snapshot and generating reports from these snapshots can both be scripted and can either be scheduled or run interactively, whichever way you want. I used them extensively till we moved to 10g. Then I modified the scripts to generate AWR and ADDM reports.

As Tom has said, statspack is the easiest way out unless you want to reinvent the wheel.

Tom Kyte
October 02, 2006 - 4:45 pm UTC

I am for some reason reminded of the second bit in this collection of bytes:
http://asktom.oracle.com/Misc/bits-and-bytes.html
the teach yourself programming one.
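For completeness, the scheduling the commenter above describes can be sketched with DBMS_JOB (the job scheduler available on 9i); the hourly on-the-hour interval shown here is an assumption, not part of the thread:

```sql
-- Schedule statspack.snap hourly, on the hour.
-- Run as PERFSTAT (or a user with execute on the package).
variable jobno number

begin
   dbms_job.submit(
      job       => :jobno,
      what      => 'statspack.snap;',
      next_date => trunc(sysdate + 1/24, 'HH24'),
      interval  => 'trunc(sysdate + 1/24, ''HH24'')');
   commit;
end;
/

print jobno
```

Report generation (spreport.sql, or sprepsql.sql for a single statement) can be scripted the same way, or run interactively as needed.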




Expensive How?

Dana, October 02, 2006 - 7:24 pm UTC

Something I notice about statspack reports: there are categories for logical I/O, physical I/O, memory, executions, hard parses... but no separate category for elapsed time. Oracle has it right, I feel: look at the quantifiable resources used, and that will translate into better performance on the clock.

Expensive how != how expensive

Adam Musch, October 03, 2006 - 10:16 am UTC

Dana in Phoenix:

Elapsed time for execution isn't deterministic -- running the same query twice with the same Oracle conditions can have very different run-times, based on, off the top of my head:

1. CPU load
2. I/O load
3. Network load

all of which are outside of Oracle's control. LIOs, parse calls, and latches are the most deterministic metrics available for a statspack run, because the same statements should always yield the same values. PIOs can vary widely -- I think one of Cary Millsap's tenets is that you should tune LIOs, not PIOs, because if you reduce LIOs, PIOs will take care of themselves.

Tom Kyte
October 03, 2006 - 11:00 am UTC

I've said that myself, in general if you "watch the LIO's the PIO's take care of themselves"

there are exceptions, as always.
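For readers who still want a quick ad-hoc look at LIO-heavy statements, a simple query against v$sql is sketched below. Note the caveat: v$sql is cumulative since each statement was loaded into the shared pool, not interval-based like a Statspack report.

```sql
-- Top 10 SQL by logical I/O (buffer gets) per execution,
-- from whatever is currently in the shared pool.
select *
  from (select sql_text,
               executions,
               buffer_gets,
               round(buffer_gets / executions) gets_per_exec
          from v$sql
         where executions > 0
         order by buffer_gets / executions desc)
 where rownum <= 10;
```

Ordering by buffer_gets alone (total, not per execution) answers a different question: "what is causing my machine the most work overall".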

quick rundown of top 10 sessions

Vitaliy, October 03, 2006 - 12:55 pm UTC

To get a quick rundown of top 10 sessions I use the following script -- go to:

http://www.dbatoolz.com/SCRIPT_17.HTM

get (the first one on the list):

   s_ses_top.sql

This script lists top 10 sessions based
on the following values from v$sesstat:
        - consistent gets
        - db block gets
        - physical reads
        - db block changes

It doesn't list the sql statement but that's easily done using the following script:

   
http://www.dbatoolz.com/SCRIPT_27.HTM

get:

s_user_sql.sql

Here's a sample of the output of both:

You are running script: s_ses_top.sql

TOT_VALUE Sid,Serial User s t OS-User PROCESS Machine Login Time Idle Module
---------- ------------ -------- - - ---------- ------------ ---------- --------------- -------- -----------------------------------
19818466 '156,4' APPS I applmgr xyz4 01ST 10:34:00 0:0:10 JDBC Thin Client JDBC Thin Client
6946884 '28,9' APPS I applmgr 20220 xyz4 01ST 10:33:41 0:0:8
4396663 '16,14' APPS I applmgr 19760 xyz4 01ST 10:33:02 0:0:10
2881972 '190,5252' APPS I applmgr 18989 abcde1 03RD 06:19:55 0:18:57 OEXOEORD FRM:FPATR:Xyhfasd OM Mana
1998525 '144,71' APPS I applmgr 27294 abcde1 01ST 10:45:01 0:53:3 JDBC Thin Client fnd.wf.worklist.se
1289846 '164,54' APPS I applmgr abcde1 01ST 10:44:47 0:0:0 JDBC Thin Client JDBC Thin Client
837357 '66,2' APPS I applmgr 20332 xyz4 01ST 10:33:43 0:0:27
837230 '64,40' APPS I applmgr 20334 xyz4 01ST 10:33:43 0:0:27
837182 '60,1' APPS I applmgr 20312 xyz4 01ST 10:33:43 0:0:27
837178 '34,56' APPS I applmgr 20340 xyz4 01ST 10:33:43 0:0:27

10 rows selected.

09:43:05 SYSTEM@XYAZ:xyz4> @s_user_sql.sql
Enter value for sid: 156
old 3: where address = (select sql_address from v$session where sid=&&sid)
new 3: where address = (select sql_address from v$session where sid=156)
old 4: and hash_value = (select sql_hash_value from v$session where sid=&&sid)
new 4: and hash_value = (select sql_hash_value from v$session where sid=156)

SQL_TEXT
--------------------------------------------------------------------------------
BEGIN WF_EVENT_OJMSTEXT_QH.DEQUEUE(:1, :2, :3); END;

1 row selected.

09:43:12 SYSTEM@XYAZ:xyz4>


Tom Kyte
October 03, 2006 - 4:49 pm UTC

top 10 sessions is however radically different from top 10 SQL statements.

follow up

Vitaliy, October 03, 2006 - 5:37 pm UTC

> top 10 sessions is however radically different from top 10 SQL statements.

Correct -- it's a step further.

Tom Kyte
October 03, 2006 - 7:23 pm UTC

no, it is a step backwards, and a huge step at that.

so if by further, you meant further back, sure.....


who cares what the top 10 sessions are now.
you are looking at 'system stuff' when looking at this level.

so, look for the sql's that cause you the most work - over time, over sessions, forever.



clarification

Vitaliy, October 03, 2006 - 7:48 pm UTC

> who cares what the top 10 sessions are now.
> you are looking at 'system stuff' when looking at this level.

I am not looking for 'system stuff' over time when I use this script; I look for what's killing us right now, and the reason I look at it is because it's my objective to find it.

I look at 'system stuff' all the time, but these are two different things -- if your objective is to look at 'system stuff' then by all means use statspack, but if you are looking for a rundown of what's trashing the system at this moment, the top 10 sessions will get you further.

SQL doesn't just appear out of thin air -- sessions execute SQL -- you eventually have to go after the process that originated the SQL.

If you know the system at hand to the point that you know every SQL statement and the process it originated from, then good for you -- you'll probably be able to ID them just by looking at SQL statements. Where I work we have hundreds of thousands of processes, forms, reports, etc., and we have no control over the SQL they generate -- my objective is to ID the process and to build a case for the vendor to improve it.


Tom Kyte
October 03, 2006 - 7:57 pm UTC

that would be very different from what the original poster asked for - therefore.

just to clarify.


huge difference between:

a) what is causing my "machine" to do the most work
and
b) what is doing the most work *right now*

especially since the magnitude of difference between A and B could be many orders.

I know what your objective is.....
Just pointing out it is apparently *not* the same as the original poster.

One major limitation of Statspack

John Knauer, October 03, 2006 - 9:24 pm UTC

There is one thing missing from Statspack, and that is the identification of queries that do not use bind variables. Statspack will, through its wait event information, let you know there is a problem with library cache latch contention if the number and frequency of queries without bind variables is such that end users are noticeably impacted. However, Statspack won't tell you directly that there are queries that don't use bind variables.

If the lack of bind variables is causing the problem you need to know what queries need to be changed and which schema, i.e. which application is executing them. This is doubly important since the absence of bind variables is also a security issue.

On this site, a very simple but powerful method for capturing the SQL in memory and identifying queries that don't use bind variables can be read at http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1163635055580

I implemented this method and found it consistently takes only a few seconds to process the SQL in memory. (There are usually 8K to 10K queries in memory in the instance I am examining and as you might suspect, there are often many queries that would be the same if bind variables were being used.) I make this information available to developers via a pl/sql web interface and it provides an easy means to ensure bad old habits don't sneak back in.

I have not used Statspack with 10g so if this has been added that is good news.
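The technique at that link boils down to normalizing the literals out of each statement and grouping on the result. A hedged sketch is below; the function name and the '#'/'@' placeholders follow the approach described there, but this version uses REGEXP_REPLACE (10g and later), whereas the original works character by character and so runs on 9i too. It is deliberately rough: the number pattern will also touch digits inside identifiers.

```sql
-- Replace character and numeric literals with placeholders so that
-- statements differing only in literal values group together.
create or replace function remove_constants(p_query in varchar2)
   return varchar2
as
   l_query varchar2(32767) := p_query;
begin
   -- character literals: 'anything' -> '#'
   l_query := regexp_replace(l_query, '''[^'']*''', '''#''');
   -- numbers -> @
   l_query := regexp_replace(l_query, '[0-9]+(\.[0-9]+)?', '@');
   return l_query;
end;
/

-- Statements appearing many times after normalization are the
-- likely non-bind-variable offenders:
select remove_constants(sql_text) norm_text, count(*)
  from v$sqlarea
 group by remove_constants(sql_text)
having count(*) > 10
 order by 2 desc;
```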

Tom Kyte
October 04, 2006 - 6:59 am UTC

ADDM and AWR notify you of the problem and (unfortunately:) suggest cursor sharing. I suggest filing a p1 bug against the application causing the issue but that is me...

AWR reports are better

A reader, October 03, 2006 - 10:21 pm UTC

Tom,
When I look at AWR reports, they are better than Statspack reports. They have SQL sorted by elapsed time, CPU time, DB time, buffer gets and so on. AWR reports also show SQL versions. A lot of the pain of analyzing AWR reports has been taken out by ADDM: the ADDM reports show the low-hanging fruit at a glance. Grid Control is a step ahead - on the home page of each database is a brief summary of all the ADDM findings. I can tell in real time what is taking up the most resources in the database, or I can go back to the history view. The SQL Tuning and SQL Access Advisors help tune some of the worst SQLs. This is one of the areas where 10g has a marked improvement over 9i. Compared to all this ease of identifying problem areas and tuning, Statspack reports kind of fade away.

What are your thoughts on AWR/ADDM/Grid Control/SQL Tuning and SQL Access advisor?

Thanks

Tom Kyte
October 04, 2006 - 7:02 am UTC

If you have licensed the ability to use AWR and ADDM, you should most definitely use it instead of statspack.

Statspack will continue to be used as it comes with XE, SE, EE, PE regardless. But using statspack AND having AWR going is duplicative effort.
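For those with the Diagnostics Pack license, generating an AWR report is similar in spirit to Statspack; a sketch (the four bind variables are placeholders for your dbid, instance number, and begin/end snapshot ids):

```sql
-- Interactive report; prompts for format and begin/end snapshots
@?/rdbms/admin/awrrpt.sql

-- Or programmatically, via the packaged pipelined function:
select output
  from table(dbms_workload_repository.awr_report_text(
                :dbid, :inst_num, :begin_snap, :end_snap));
```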

Finding out Top SQL response times and cost

Aniruddha, October 20, 2006 - 3:19 am UTC

Statspack reports are very useful, but when you want to just look at top SQL, it is better to just query the stats$ tables and generate your own report. This can be done for top SQL in the following way:

SELECT a.sql_text "SQL statement",
       b.avgcputime "CPU time (s/exec)",
       b.avgelapstime "Elaps. time (s/exec)",
       MAX (c.cost) "Cost",
       b.executions "Execs",
       b.time "Date/Time",
       b.module "Module"
  FROM stats$sqltext a,
       (SELECT DISTINCT sql_id, snap_id, executions, avgcputime, avgelapstime, module,
               TO_CHAR (last_active_time, 'dd-mm-yy hh24:mi') time
          FROM (SELECT DISTINCT sql_id, ROWNUM, snap_id, executions,
                       ROUND (cpu_time / (executions * 1000000), 0) avgcputime,
                       ROUND (elapsed_time / (executions * 1000000), 0) avgelapstime,
                       module, last_active_time
                  FROM stats$sql_summary
                 WHERE executions != 0
                   AND module IN (SELECT DISTINCT module  -- to get SQL of the application only
                                    FROM stats$sql_summary
                                   WHERE LOWER (module) LIKE 'yourappname%')
                   AND TO_CHAR (last_active_time, 'DD.MM.YYYY') =
                       TO_CHAR (SYSDATE, 'DD.MM.YYYY')
                 ORDER BY avgelapstime DESC)
         WHERE ROWNUM <= 10) b,
       stats$sql_plan_usage c
 WHERE LOWER (a.text_subset) NOT LIKE 'insert%'
   AND a.sql_id = b.sql_id
   AND a.sql_id = c.sql_id
 GROUP BY a.sql_text, a.sql_id, b.avgcputime, b.avgelapstime,
          b.executions, b.time, b.module, a.piece, a.text_subset
 ORDER BY a.sql_id, a.piece;

By changing SYSDATE you can generate an end-of-day report for any day you want. Replace 'yourappname' with the relevant exe/module name. Note: this is written for 10g; I have not tried it on 9i. Sorry if it's difficult to read (you may need to reformat it before reading!)

Top SQL

Bala Palani, October 24, 2006 - 9:23 am UTC

select *
  from (select disk_reads, executions, sql_text
          from v$sqlarea
         order by 1 desc, 2 desc)
 where rownum <= 10;

- Based on disk I/O
- If the SQL text is long, it is only partially available (v$sqlarea holds just the start of the statement; v$sqltext has the full text)

reader

A reader, May 18, 2007 - 2:32 pm UTC

What is the significance of the interval in an AWR snapshot?

When I look at DBA_HIST_SNAPSHOT I see two columns, begin_interval_time and end_interval_time, and each begin_interval_time is the same as the previous end_interval_time. It seems snapshots are taken continuously whatever the interval setting.

What is the effect of making the interval smaller than the default 1 hour, say 15 minutes?
Tom Kyte
May 18, 2007 - 4:28 pm UTC

the effect would be that there would be 4 times as many observations; the code would run 4 times as often.
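Changing the interval is a one-liner; a sketch using DBMS_WORKLOAD_REPOSITORY (both parameters are in minutes; the 7-day retention shown is just an example value):

```sql
-- Snapshot every 15 minutes, keep 7 days (10080 minutes) of history
begin
   dbms_workload_repository.modify_snapshot_settings(
      retention => 10080,
      interval  => 15);
end;
/

-- Verify the current settings:
select snap_interval, retention from dba_hist_wr_control;
```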

Query

shyam, May 19, 2007 - 2:44 pm UTC

Sir,
I would like to know the query for the following:
a query which displays the first 10 rows of a table without using a WHERE clause and without knowing the number of fields or the field names.

If there is any mistake in the question, sorry.
Tom Kyte
May 21, 2007 - 10:17 am UTC

this is something a client would do, fetch 10 rows, stop.

or, you would use, well, A WHERE CLAUSE.
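The "fetch 10 rows, stop" approach can be sketched in PL/SQL as below; the table name T is a placeholder. (On 12c and later, `select * from t fetch first 10 rows only` also does this with no WHERE clause, but that syntax did not exist at the time of this thread.)

```sql
-- Client-side: open a cursor with no WHERE clause, fetch 10 rows, stop.
declare
   cursor c is select * from t;   -- t is a hypothetical table
   l_rec c%rowtype;               -- no need to know the column names
begin
   open c;
   for i in 1 .. 10 loop
      fetch c into l_rec;
      exit when c%notfound;
      null;  -- process l_rec here
   end loop;
   close c;
end;
/
```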

reader

A reader, May 19, 2007 - 5:05 pm UTC

Is the concept of AWR similar to STATSPACK in that Oracle takes ONE snapshot at every begin_interval time only?
Tom Kyte
May 21, 2007 - 10:18 am UTC

AWR (and ASH) are sampling based - they collect more data in memory, then flush it on a schedule, as well as taking v$ snapshots.

In general, it does a bit more than just statspack snapping.

How to find queries running on a particular time ?

Parag J Patankar, May 21, 2007 - 7:39 am UTC

Hi Tom,

Is it possible to find out the queries run during a particular period, e.g. queries run today from 10am to 10:30am?

thanks & regards
PJP
Tom Kyte
May 21, 2007 - 11:08 am UTC

maybe - usually.

you statspack.snap at 10am, you do it again at 10:30am and a statspack report can show you what it sees.


Now if a query was executed at 10:10am and flushed from the shared pool at 10:20am, you won't see it - but that would not be a very prevalent occurrence.

reader

A reader, May 21, 2007 - 12:01 pm UTC

I am trying to confirm that AWR takes one snapshot per interval, including the v$ stats etc.

That way I can specify a smaller interval so that the statistics do not average out.
Tom Kyte
May 21, 2007 - 12:56 pm UTC

the purpose of AWR is to average out.

the purpose of ASH is to give you session level insight. ASH is sampling at a much higher rate.

reader

A reader, May 23, 2007 - 6:44 am UTC

How do you schedule sampling for ASH? Any URL to read about it?

Thanks
Tom Kyte
May 23, 2007 - 7:13 am UTC

you don't, it is internal, we sample continuously and flush on a schedule.
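You can, however, query the in-memory samples directly (Diagnostics Pack license required). A sketch of a "top SQL over the last 10 minutes" query against v$active_session_history; since ASH samples active sessions once per second, the sample count roughly approximates active seconds:

```sql
-- ASH samples per sql_id over the last 10 minutes;
-- sample counts approximate active time in seconds.
select sql_id, count(*) approx_active_seconds
  from v$active_session_history
 where sample_time > sysdate - 10/1440
   and sql_id is not null
 group by sql_id
 order by count(*) desc;
```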

sifting through the ashes

Bob, May 23, 2007 - 9:52 pm UTC

http://www.oracle.com/technology/products/manageability/database/pdf/twp03/PPT_active_session_history.pdf

I am sure this is referenced in other places, but for the prior poster looking to understand ASH data collection a little better, see above pdf.

Struggling with 'buffer busy wait' and usage of dba_hist views

Serge F., July 30, 2007 - 4:35 pm UTC

Hi Tom,
I am new to the Oracle 10g dba_hist views.
Here is my headache: the following query shows me significant buffer busy waits:

column buffer_busy_wait format 999,999,999

select sn.snap_id,
       to_char(sn.end_interval_time, 'dd Mon HH24:mi:ss') mydate,
       avg(new.buffer_busy_wait - old.buffer_busy_wait) buffer_busy_wait
  from dba_hist_buffer_pool_stat old,
       dba_hist_buffer_pool_stat new,
       dba_hist_snapshot sn
 where new.snap_id = sn.snap_id
   and old.snap_id = sn.snap_id - 1
 group by to_char(sn.end_interval_time, 'dd Mon HH24:mi:ss'), sn.snap_id
having avg(new.buffer_busy_wait - old.buffer_busy_wait) > 100;

SNAP_ID MYDATE                BUFFER_BUSY_WAIT
------- --------------------- ----------------
  15882 27 Jul 11:00:44                    876
  15891 27 Jul 19:00:21                    136
  15899 28 Jul 03:00:58                  1,193


Now, according to most of the documentation I've seen, the reasonable action to take is to find the offending tables that have too few freelists and to increase the freelists.
How can we find the "offending" tables based on what we have above?
The only way I found is to add dba_hist_active_sess_history to the above join, which also allows me to join dba_data_files and dba_objects so I can find the objects. For simplicity I took only the first snap_id = 15882 from above.

select obj.object_type,
       obj.object_name,
       to_char(sn.end_interval_time, 'dd Mon HH24:mi:ss') mydate,
       avg(new.buffer_busy_wait - old.buffer_busy_wait) buffer_busy_wait
  from dba_hist_buffer_pool_stat old,
       dba_hist_buffer_pool_stat new,
       dba_hist_snapshot sn,
       dba_hist_active_sess_history ash,
       dba_data_files df,
       dba_objects obj
 where new.snap_id = sn.snap_id
   and old.snap_id = sn.snap_id - 1
   and df.file_id = ash.current_file#
   and obj.object_id = ash.current_obj#
   and ash.snap_id = sn.snap_id
   and sn.snap_id = 15882
 group by to_char(sn.end_interval_time, 'dd Mon HH24:mi:ss'),
          obj.object_type, obj.object_name
having avg(new.buffer_busy_wait - old.buffer_busy_wait) > 100;

OBJECT_TYPE         OBJECT_NAME                      Yr.  Mo Dy  Hr.  BUFFER_BUSY_WAIT
------------------- -------------------------------- ---------------- ----------------
TABLE               STUDENT_LOAN                     27 Jul 11:00:44               876
TABLE               OSFI_FEED_CORE_DATA              27 Jul 11:00:44               876
TABLE PARTITION     STG_FIN_ACC                      27 Jul 11:00:44               876
TABLE PARTITION     DDS_PARTY_SRCD                   27 Jul 11:00:44               876
TABLE PARTITION     STG_BCAR_DETAIL                  27 Jul 11:00:44               876

So apparently these will be the offending tables for 'buffer busy wait' event. So for these tables/segments I should increase freelist.

But, but ... if I add DBA_HIST_SEG_STAT to the above join, I see BUFFER_BUSY_WAITS_TOTAL = 0 and BUFFER_BUSY_WAITS_DELTA = 0, as if there were no buffer waits!!! Why? What's going on here?

select dss.buffer_busy_waits_total,
       dss.buffer_busy_waits_delta,
       obj.object_type,
       obj.object_name,
       to_char(sn.end_interval_time, 'dd Mon HH24:mi:ss') mydate,
       avg(new.buffer_busy_wait - old.buffer_busy_wait) buffer_busy_wait
  from dba_hist_buffer_pool_stat old,
       dba_hist_buffer_pool_stat new,
       dba_hist_snapshot sn,
       dba_hist_active_sess_history ash,
       dba_data_files df,
       dba_objects obj,
       dba_hist_seg_stat dss
 where new.snap_id = sn.snap_id
   and old.snap_id = sn.snap_id - 1
   and df.file_id = ash.current_file#
   and obj.object_id = ash.current_obj#
   and dss.snap_id = sn.snap_id
   and dss.obj# = obj.object_id
   and ash.snap_id = sn.snap_id
   and sn.snap_id = 15882
 group by to_char(sn.end_interval_time, 'dd Mon HH24:mi:ss'),
          obj.object_type, obj.object_name,
          dss.buffer_busy_waits_total, dss.buffer_busy_waits_delta
having avg(new.buffer_busy_wait - old.buffer_busy_wait) > 100;

BUFFER_BUSY_WAITS_TOTAL BUFFER_BUSY_WAITS_DELTA OBJECT_TYPE         OBJECT_NAME                 
----------------------- ----------------------- ------------------- ---------------------------
                      0                       0 TABLE               STUDENT_LOAN                           
                      0                       0 TABLE               OSFI_FEED_CORE_DATA         
                      0                       0 
TABLE PARTITION     STG_FIN_ACC   .....


Also I have another question here: all tablespaces containing these segments/tables have SEGMENT SPACE MANAGEMENT = AUTO, so we should not have to worry about adjusting freelists. Then why are 'buffer busy waits' happening - why doesn't Oracle take care of that because of the 'AUTO' setting? If segment space management AUTO takes care of eliminating 'buffer busy waits' for me, then why do I have this top event, and does it make sense to add freelists as long as I have AUTO?

Please put some light for me in here ... Thank you.
Tom Kyte
July 30, 2007 - 5:56 pm UTC

i beg to differ with your definition of considerable.

those are teeny tiny. Look at the time spent waiting - ask yourself "would anyone give me a raise if I gave them back that time", answer is going to be "no" in this case.

DBA_HIST_SEG_STAT

Serge F., July 31, 2007 - 9:58 am UTC

Thank you Tom for answering. Meanwhile, for my second set of questions, I found the answer in one of your books: buffer busy waits might exist even with the AUTO flag set, but they are reasonably small waits.
But you missed my first question; if you have a few minutes :)) please have a look.
It is about DBA_HIST_SEG_STAT. I am joining that table, and while dba_hist_buffer_pool_stat shows me buffer busy waits, this one shows 0 for the fields BUFFER_BUSY_WAITS_TOTAL and BUFFER_BUSY_WAITS_DELTA.
How come?
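One way to sanity-check the segment statistics is to query dba_hist_seg_stat on its own, without the multi-way join (joining each row to every matching ASH sample can distort the averages, and dba_hist_seg_stat only captures the top segments per snapshot, so an object may simply not be there). A sketch:

```sql
-- Segments with non-zero buffer busy waits in a given snapshot
select o.object_name,
       o.object_type,
       s.buffer_busy_waits_delta
  from dba_hist_seg_stat s, dba_objects o
 where s.obj# = o.object_id
   and s.snap_id = 15882
   and s.buffer_busy_waits_delta > 0
 order by s.buffer_busy_waits_delta desc;
```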

ADDM

Anand, October 05, 2012 - 1:59 am UTC

Hi Tom,

I am using the query below to get a report on my batch process from last night. I then look at the message "Run SQL Tuning Advisor on the SQL statement with SQL_ID '76cr59ammtfs5'".
But when I look for that sql_id in gv$sql I am unable to find any row. Can you please suggest?

select a.execution_start, a.execution_end, b.*
  from dba_advisor_log a, dba_advisor_actions b
 where execution_start >= (select min(init_ts)
                             from flex_batch_statistics
                            where cod_eod_category = 1
                              and dat_eod_process = ln_global.dat_last_process)
   and execution_end <= (select max(last_updated_ts)
                           from flex_batch_statistics
                          where cod_eod_category = 1
                            and dat_eod_process = ln_global.dat_last_process)
   and a.task_id = b.task_id
 order by execution_start

Can you also explain what this means:

"Consider using array interface for the INSERT statement with SQL_ID "a97mh76zdzmtc". "
Tom Kyte
October 09, 2012 - 12:09 pm UTC

run an ASH report on that sql id.



array interface means "consider bulking up your SQL, we notice that you are doing an insert/update/delete that affects just one row - maybe you can ask us to do a few hundred or so instead of one at a time"
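That "ask us to do a few hundred at a time" can be sketched in PL/SQL with FORALL; the table t(x number) below is hypothetical:

```sql
-- Array insert: bind a collection and insert the whole batch
-- in one execution instead of one row per execution.
declare
   type numtab is table of number index by pls_integer;
   l_data numtab;
begin
   for i in 1 .. 1000 loop
      l_data(i) := i;
   end loop;

   -- one execution for all 1000 rows:
   forall i in 1 .. l_data.count
      insert into t (x) values (l_data(i));
   commit;
end;
/
```

From a client language, the equivalent is the driver's batch/array binding interface (e.g. JDBC addBatch/executeBatch).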


DBA_HIST_ACTIVE_SESS_HISTORY get sql by user and object schema

karma, January 30, 2015 - 12:08 pm UTC

I need to find out the usage of all objects in a given schema, and auditing is not an option, so how can one approach getting this information? I was thinking of using DBA_HIST_ACTIVE_SESS_HISTORY. How do I use this view to get all the SQL ids and SQL text, and then parse each SQL (looking at the FROM and JOIN clauses) to extract the tables used?
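Rather than parsing SQL text, note that ASH already records which object a session was touching at each sample. A hedged sketch (SCOTT is a placeholder schema; remember ASH is sampled, so rarely-touched objects may never appear, and current_obj# is only meaningful for some wait classes):

```sql
-- Objects in a schema seen in ASH history, with the SQL that touched them
select o.object_name,
       ash.sql_id,
       count(*) samples
  from dba_hist_active_sess_history ash, dba_objects o
 where ash.current_obj# = o.object_id
   and o.owner = 'SCOTT'
 group by o.object_name, ash.sql_id
 order by count(*) desc;
```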

Download link not working

A reader, June 05, 2015 - 6:44 am UTC

download-east.oracle.com/docs/cd/B10501_01/server.920/a96533/statspac.htm#34837
Link not working