Is there a way to track start and end time of a query?

Question and Answer

Connor McDonald

Thanks for the question, Venkatesh.

Asked: June 04, 2008 - 10:09 am UTC

Last updated: February 01, 2021 - 1:56 am UTC

Version: 10.2.0.3

Viewed 100K+ times!

You Asked

Hi Tom,

I am trying to find out if there is a way to track/log slow queries, just like MySQL has a way of doing it. So obviously we should have a way to define a time threshold that would identify queries as running slow. Typically this can be done by the application logging the begin and end times of individual queries. However, I was curious to know if there is a way within Oracle to accomplish this.

I do understand there are other values to look at in statspack/awr for tuning purposes, but I am trying to see if this is at all possible.

Thanks Tom.

Venky

and Tom said...

v$sql has

o number of executions
o elapsed time
o cpu time.

among other metrics, all sitting there waiting to be looked at.



an application logging times would not be useful - those times include think/application work time, so they would not be representative of the actual time spent in the database - v$sql is...
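For example, a minimal sketch along those lines (the top-10 cutoff is arbitrary, and ELAPSED_TIME/CPU_TIME in v$sql are in microseconds):

select *
  from (select sql_id, executions,
               round(elapsed_time/1000000,2)            total_elapsed_secs,
               round(elapsed_time/executions/1000000,2) avg_elapsed_secs,
               round(cpu_time/1000000,2)                total_cpu_secs
          from v$sql
         where executions > 0
         order by elapsed_time/executions desc)
 where rownum <= 10;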


Comments

Thanks for the info

A reader, June 05, 2008 - 2:00 pm UTC

Thank you Tom. I should have looked. It appears that the column gets updated at varying times (uneven periods depending on what the query is doing).

Thanks again!

Venky

not sure, but...

djb, June 05, 2008 - 3:53 pm UTC

Are you sure he's not looking for information from V$SESSION_LONGOPS ?

Tom Kyte
June 05, 2008 - 7:13 pm UTC

pretty sure - since that view cannot tell you how long a query took...

and it would only report on an individual phase of a query....

and only the current phase...


eg: if a query is:

hash join
   table access full t1
   table access full t2


you would see a full scan of t1 in longops AND THEN the full scan of t2 in longops AND THEN nothing....


he wants to know "how long do my queries take..." - v$sql has the time spent in the database executing the query and the number of times the query was executed - you can derive "average" runtimes from that.

Yup

djb, June 06, 2008 - 12:57 pm UTC

Forgot about the fact that a multi-phase query will show the different steps, so yes, that would be difficult to use in that regard.

where to get complete query?

Bhavesh Ghodasara, June 07, 2008 - 3:45 am UTC

Hi Tom,

From where can I get the full query? The sql_text of v$sql only contains the first 1000 characters of the query, and my query exceeds that length. From which other view can I get the complete query?
Tom Kyte
June 09, 2008 - 11:43 am UTC

v$sqltext_with_newlines;

RE: where to get complete query

Duke Ganote, June 07, 2008 - 5:17 pm UTC

There are several v$sql* views (see the askTom link at bottom). Tom describes it this way:

"v$sqltext is simply a way to see the entire query. The v$sql and v$sqlarea views only show the first 1000 bytes. newlines and other control characters are replace with
whitespace.

"v$sqltext_with_newlines is v$sqltext without the whitespace replacment.

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:385254862843
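For example, a minimal sketch (the :sql_id bind is a placeholder you supply; ordering by PIECE reassembles the full statement text):

select sql_text
  from v$sqltext_with_newlines
 where sql_id = :sql_id
 order by piece;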

Query to track Elapsed Time

Venkatesh Naicker, June 11, 2008 - 8:47 pm UTC

Hi Tom,

Based on this thread, I came up with the following sql to track slow-running sqls. But something still seems to be incorrect. For example, the sql on a sid that has been working for the last 30 minutes is being reported as 16 hours. Would you mind pointing out what I am missing?

set pagesize 100
col sql_fulltext form a30
col program form a29
col module form a25
col time form 999
col sid form 9999
set linesize 300
set trimspool on
select sid, program, a.module, sql_fulltext,
       (case
          when trunc(elapsed_time/1000000)          < 60 then to_char(trunc(elapsed_time/1000000))||' Sec(s)'
          when trunc(elapsed_time/1000000/60)       < 60 then to_char(trunc(elapsed_time/1000000/60))||' Min(s)'
          when trunc(elapsed_time/1000000/60/60)    < 24 then to_char(trunc(elapsed_time/1000000/60/60))||' Hour(s)'
          when trunc(elapsed_time/1000000/60/60/24) >= 1 then to_char(trunc(elapsed_time/1000000/60/60/24))||' Day(s)'
        end) as time
  from v$session a, v$sqlarea b
 where a.sql_address    = b.address
   and a.sql_hash_value = b.hash_value
   and users_executing  > 0
   and elapsed_time/1000000 > 30;

Thanks in advance.

Venkatesh

Tom Kyte
June 11, 2008 - 9:40 pm UTC

v$sqlarea is an aggregate view over all child cursors and is cumulative since the server was started up.

the elapsed time in that column is for EVERY EXECUTION of that query since the server was started.

I think you want to look at last_call_et in v$session maybe.
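A minimal sketch of that approach (the 30-minute threshold is arbitrary; LAST_CALL_ET is in seconds and, for an ACTIVE session, shows how long the current call has been running):

select sid, serial#, username, sql_id, last_call_et
  from v$session
 where status = 'ACTIVE'
   and username is not null
   and last_call_et > 30*60;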

how to capture sql stats for past 2 hours or so

Suryanarayana Murthy Boddu, June 12, 2008 - 2:38 am UTC

Hi Tom,

The view and collection of stats from v$sql and v$sqltext_with_newlines is good if the query is currently under execution; after some duration, the data under those views will be purged. My requirement is as follows: I want to query the database to get all the sqls, and their plans, whose execution time crossed a threshold. I understand we can use the dba_hist_sqlstat and dba_hist_sqlplan views, but the problem is that most of the stats stored in dba_hist_sqlstat are cumulative or delta values, so the information is aggregated. In many applications, like the one I am working on, we have the same sqls being fired many times, but the response time varies highly with the inputs - and that is what we want to capture. So I would like to have information about every call instead of aggregate information. Can you help me with this? Hope I am clear.

Thanks,
Murthy
Tom Kyte
June 12, 2008 - 7:49 am UTC

but that is exactly what the history tables show - the v$ tables are cumulative, the dba_hist tables are snapshots, and if you take "snap1" and subtract it from "snap2", you get what happened between snaps 1 and 2.

<quote>

DBA_HIST_SQLSTAT displays historical information about SQL statistics. This view captures the top SQL statements based on a set of criteria and captures the statistics information from V$SQL. The total value is the value of the statistics since instance startup. The delta value is the value of the statistics from the BEGIN_INTERVAL_TIME to the END_INTERVAL_TIME in the DBA_HIST_SNAPSHOT view.

</quote>
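A sketch for the stated requirement, then, might look like this (the 60-second threshold is arbitrary; ELAPSED_TIME_DELTA is in microseconds). Bear in mind the deltas are still aggregated over all executions within each interval - they will not give you each individual call:

select st.sql_id, sn.begin_interval_time,
       st.executions_delta,
       round(st.elapsed_time_delta/1000000) elapsed_secs_in_interval
  from dba_hist_sqlstat  st,
       dba_hist_snapshot sn
 where st.snap_id         = sn.snap_id
   and st.dbid            = sn.dbid
   and st.instance_number = sn.instance_number
   and sn.end_interval_time > systimestamp - interval '2' hour
   and st.elapsed_time_delta/1000000 > 60
 order by st.elapsed_time_delta desc;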

correction for above comment

Suryanarayana Murthy Boddu, June 12, 2008 - 2:42 am UTC

Hi,

Please read my question/requirement in above comment as follows:

I want to query the database to get all the sqls and their plans whose execution time crossed a threshold during past 2 hours. I want to have a sql which I can fire every 2 or 3 hours and get all the sqls which crossed a threshold.

Thanks,
Murthy

TOTAL Columns of DBA_HIST_SQLSTAT

Olivier, June 16, 2008 - 6:24 am UTC

Hi,

I'm using DBA_HIST_SQLSTAT in order to highlight poor SQL code/parameters. As you said, this table contains 2 kinds of stats: Total and Delta.

My question is on the Total columns: why and when is Total reset?
SELECT
  s.SNAP_ID,
  h.STARTUP_TIME,
  PLAN_HASH_VALUE,
  EXECUTIONS_TOTAL,
  EXECUTIONS_DELTA,
  CPU_TIME_TOTAL*POWER(10,-6) "CPU_TIME_TOTAL"
FROM
  DBA_HIST_SQLSTAT s,
  DBA_HIST_SNAPSHOT h
WHERE s.SNAP_ID IN (431, 432, 433)
AND SQL_ID='7ca0jht73uv92'
AND s.SNAP_ID=h.SNAP_ID


SNAP_ID STARTUP_TIME          PLAN_HASH_VALUE EXECUTIONS_TOTAL EXECUTIONS_DELTA CPU_TIME_TOTAL
------- --------------------- --------------- ---------------- ---------------- --------------
    431 11/06/08 14:33:43,000       552933365             1239              154     221,775941
    432 11/06/08 14:33:43,000       552933365             1533              294     274,695173
    433 11/06/08 14:33:43,000       552933365               56               56      11,045736


Here, stats for this SQL_ID were reset after snap 432 and the Total column restarts counting. The database wasn't restarted: same startup_time.

I'm using Oracle 10.2.0.3 on WinXP SP2 with standard AWR parameters.

Thanks,
Olivier
Tom Kyte
June 16, 2008 - 12:08 pm UTC

anything that causes the sql to age out - e.g., here I added a primary key; it invalidated the sql, and the new hard parse starts everything over again.


ops$tkyte%ORA10GR2> create table t as select rownum x from dual connect by level <= 5;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          l_n number;
  3  begin
  4          for i in 1 .. 10
  5          loop
  6                  select count(*) into l_n from t LOOK_FOR_ME;
  7          end loop;
  8  end;
  9  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select sql_id, executions from v$sql where sql_text like 'SELECT%COUNT(*)%FROM%T%LOOK_FOR_ME%';

SQL_ID        EXECUTIONS
------------- ----------
fn420mxh9x0tu         10

ops$tkyte%ORA10GR2> alter table t add constraint t_pk primary key(x);

Table altered.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          l_n number;
  3  begin
  4          for i in 1 .. 2
  5          loop
  6                  select count(*) into l_n from t LOOK_FOR_ME;
  7          end loop;
  8  end;
  9  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select sql_id, executions from v$sql where sql_text like 'SELECT%COUNT(*)%FROM%T%LOOK_FOR_ME%';

SQL_ID        EXECUTIONS
------------- ----------
fn420mxh9x0tu          2


tracking poor sql

deba, June 17, 2009 - 7:29 am UTC

Hi tom,

I'm using DBA_HIST_SQLSTAT in order to highlight poor SQL. In this table, I can see there are three columns, which are as follows:

IOWAIT_DELTA
APWAIT_DELTA
CCWAIT_DELTA

1) But I am not sure about the units of the time in these columns. Are they microseconds, milliseconds, or something else?

2) What is the difference between application wait time (APWAIT_DELTA) and concurrency wait time (CCWAIT_DELTA)?
Could you please explain this? I think concurrency wait should be a subset of application wait time. Am I right?

Thanks
Deba
Tom Kyte
June 17, 2009 - 3:06 pm UTC

they are in the units the wait times are reported in - 1/1,000,000 of a second (microseconds) in current releases.

concurrency waits are that subset of waits that are related to multiple users bumping into each other - yes.

dba_hist_sqlstat ELAPSED_TIME

Alex, July 17, 2009 - 2:07 pm UTC

Hi Tom,

We are running a data warehouse on 10.2.0.3 on a 32-cpu server. Most of the queries are ad-hoc and use parallel query. AWR snapshots are captured every hour.

I am trying to capture runtime statistics for most (whatever is in dba_hist_* views) queries. However, it looks like elapsed_time_delta does not match what is reported by AUTOTRACE in sqlplus. As you can see, buffer gets are about the same, rows_processed are about the same, however elapsed_time is showing 961 hours (vs. 15 reported by autotrace).

Why is elapsed_time showing 961 hours, versus the 15 reported by autotrace?

Thanks in advance for your help,

Alex

/*** Output from SQLPLUS ***/

27701316 rows selected.

Elapsed: 15:15:23.75

Statistics
----------------------------------------------------------
 237  recursive calls
   3  db block gets
 1927855355  consistent gets
   24583380  physical reads
 636  redo size
SP2-0642: SQL*Plus internal error state 1075, context 1:5:4294967295
Unsafe to proceed
   20316104  bytes received via SQL*Net from client
    1846756  SQL*Net roundtrips to/from client
  23  sorts (memory)
   0  sorts (disk)
   27701316  rows processed

/********************************/

/*** DBA_HIST_SQLSTAT ***/

> select
  2  count(snap_id) number_of_snapshots,
  3  sum(buffer_gets_delta) buffer_gets,
  4  sum(rows_processed_delta) rows_processed,
  5  round(sum(elapsed_time_delta)/1000000/3600) elapsed_time_hours
  6  from
  7  dba_hist_sqlstat
  8  where sql_id in ('dqv1ujuvj9c9r')
  9  /

NUMBER_OF_SNAPSHOTS BUFFER_GETS ROWS_PROCESSED ELAPSED_TIME_HOURS
------------------- ----------- -------------- ------------------
                 16  1927765004       27700206                961

/********************************/

Tom Kyte
July 24, 2009 - 8:44 am UTC

well, your sqlplus thing there is showing a single instance of the query.

Your query against dba_hist_sqlstat is sort of aggregating the counts over and over again.

look at the last snap
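To see each interval separately rather than one grand total, a sketch (note also that with parallel query, elapsed time accumulates across all parallel execution servers, so database time can legitimately exceed wall-clock time on a multi-CPU box):

select sn.begin_interval_time,
       st.executions_delta,
       round(st.elapsed_time_delta/1000000/3600,1) elapsed_hours_in_interval
  from dba_hist_sqlstat  st,
       dba_hist_snapshot sn
 where st.snap_id         = sn.snap_id
   and st.dbid            = sn.dbid
   and st.instance_number = sn.instance_number
   and st.sql_id          = 'dqv1ujuvj9c9r'
 order by sn.begin_interval_time;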

cpu_time_delta and elapsed_time_delta

Lise, January 20, 2011 - 11:58 am UTC

Hi,
We are running with 11.1, 9 CPUs, and no parallel. We have hundreds of concurrent users running the same queries.
When I run the query below it returns details as displayed.

SELECT cpu_time_delta/1000/60/60
      ,elapsed_time_delta/1000/60/60
  FROM dba_hist_sqlstat  stat
      ,dba_hist_snapshot snap
 WHERE snap.snap_id = stat.snap_id
   AND cpu_time_delta > elapsed_time_delta;

   CPU_TIME_DELTA/1000/60/60 ELAPSED_TIME_DELTA/1000/60/60
 1                      0.05            0.0335372222222222
 2          89.7555555555556              88.9720486111111
 3         0.702777777777778             0.263673333333333
 4          23.6138888888889                    21.3090375
 5          1.11388888888889             0.793110277777778
 6          18.6416666666667              18.0575211111111
 7        0.0583333333333333            0.0505002777777778
 8                    34.125              33.8415672222222
 9          36.7444444444444              35.8155641666667
10          15.5777777777778              15.3565080555556
11         0.458333333333333             0.434083888888889
12          21.4611111111111                    20.1554075
13          1.28055555555556              1.22089388888889
14        0.0166666666666667            0.0103419444444444
15                     1.525             0.800712222222222

As you can see, there are quite a few that have a big difference between CPU and Elapsed.
I read My Oracle Support note 1052756.6, which states that load balancing between processors is handled by the OS scheduler and not Oracle. Does this mean that, even though I am not running multithreaded or parallel, a query can use more than one CPU, and hence the elapsed time can appear smaller? I know you mention in the Effective Oracle book that there will be some small difference due to the underlying data etc. However, I am seeing some big differences.
Any guidance much appreciated.

Tom Kyte
January 24, 2011 - 6:16 am UTC

I don't see any big differences, give a for example.


If you have thousands/millions of statements - and you measure two different metrics (measured at different points in the code, at different intervals/levels in the code) and you add them up - the divergence can be rather 'big'.

The only "big" difference I see is #3 up there and those numbers are so small as to make big not a big deal.

To me, this looks like you do not have much in the way of wait events for those sql statements and the cpu time is in fact pretty much the same as elapsed.


Get top sql from AWR tables

KG, February 12, 2011 - 2:11 pm UTC

Hi Tom


I am running the sql below to find sql statements which took more than 1 hour to execute.
If it's not the right way to do it, please correct me.

SELECT
TO_CHAR(s.begin_interval_time,'mm-dd hh24') c1,
sql.sql_id c2,
sql.executions_delta c3,
sql.buffer_gets_delta *POWER(10,-6) c4,
sql.disk_reads_delta *POWER(10,-6) c5,
sql.iowait_delta *POWER(10,-6) c6,
sql.apwait_delta *POWER(10,-6) c7,
sql.ccwait_delta *POWER(10,-6) c8,
sql.ELAPSED_TIME_DELTA *POWER(10,-6) c9,
sql.ELAPSED_TIME_TOTAL *POWER(10,-6) c10
FROM
dba_hist_sqlstat SQL,
dba_hist_snapshot s
WHERE
s.snap_id = sql.snap_id
AND
(
sql.ELAPSED_TIME_delta *POWER(10,-6)
)
> 3600
AND sql.INSTANCE_NUMBER =1
AND s.instance_number =sql.instance_number
AND sql.executions_delta > 0
ORDER BY
c1,c2


Begin In SQL ID Exec Delta BufferGets Delta DiskReads Delta IO_Wait Delta Application_Wait Delta Concurrency_Wait Delta Elapsed time Delta Elapsed time
-------- ------------- ---------- ---------------- --------------- ------------- ---------------------- ---------------------- ------------------ ------------
02-07 10 80dfdmq7kzyb0 4 78 0 0 0 0 3690.2548 4108.30107
02-08 01 g2f5cfx06yy09 1 37 0 1 0 0 3631.00237 22668.8229
02-09 07 g2f5cfx06yy09 1 38 0 0 0 0 3623.01721 44242.3479
02-10 10 5k9vcdsrgtygs 8 313 0 2822 0 0 4048.32819 8698.99943

Regards

Need response

KG, February 16, 2011 - 2:26 pm UTC

Hi Tom
please respond to question regarding finding top sqls
Tom Kyte
February 16, 2011 - 2:29 pm UTC

we do not track things down to that level.

We do not keep track of how long each individual sql statement that was executed took to execute. We keep aggregations.

You cannot get there from here, you cannot really see which sql statement executed for more than an hour.


alter

Dana, February 16, 2011 - 4:02 pm UTC

alter session set tracefile_identifier='somename';
alter session set events '10046 trace name context forever, level [X]';
... execute sql ...
alter session set events '10046 trace name context off';
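(A typical follow-up is to format the raw trace with tkprof - the trace file name below is hypothetical; find yours under user_dump_dest:)

tkprof orcl_ora_12345_somename.trc somename_report.txt sys=no sort=exeela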

Tom Kyte
February 16, 2011 - 5:36 pm UTC

Not sure why you posted that?

I hope it is not in response to the question immediately above - we would not want to run a system with sql trace on for every query. They were asking to find the "high load sql defined as sql that took over an hour to execute". This would not be a method for doing that in real life.

Further explanation

Dana, February 17, 2011 - 6:50 am UTC

My take on the thread was "How do I find out what problem SQL is doing."
When I think problem SQL, it is in the context of which SQL consistently shows as a high consumer in AWR reports. My general criterion is to look at high SQL by Gets, and focus on the top SQL in that section. I extract that SQL and trace it.

I do not agree with the initial premise of "on the fly" diagnostic. I think the analysis of problem SQL starts with knowing, not guessing, the average load of the system and SQL that consistently makes up the average load of the system. That knowledge has, for me, come from reading many, many AWR reports during average business cycles.

Then when I get the reports from customers of poor performance, I can usually find the SQL causing the issue quickly, execute that SQL with tracing, fix it or tune it, or ignore it, depending on the reason why it is there.

So overall, I was saying a pound of prevention in the guise of reading AWR's even when things are good, is worth a ton of cure, in the guise of trying to diagnose SQL as a problem without context.
Tom Kyte
February 17, 2011 - 11:27 am UTC

but - you said none of that originally :) Nothing about AWR, nothing about how to use the information.

All you said was:


alter session set tracefile_identifier='somename';
alter session set events '10046 trace name context forever, level [X]';
... execute sql ...
alter session set events '10046 trace name context off';


Tracing has its place - just not in answer to the question that was asked.


regarding top sql

KG, February 17, 2011 - 8:07 am UTC

Tom
When I use oem/dbconsole, it reports Historical SQL (AWR) for 24 hours.
I understand it lists top sql based on execution time over 24 hours. Below is an example of the same.



SQL ID SQL Text % of Total CPU Time (seconds) Wait Time (seconds) Elapsed Time Executions Module
Elapsed Time Per Execution (sec)
----------- --------- -------- ---------- ---------- --------- ----- -----
g2f5cfx06yy09 BEGIN xxnpi_ftp.startup 1.21 904.82 20624.36 21529.18 1 NPIFTPS


Can't I achieve the same by running sql against the awr tables below?


DBA_HIST_SQLSTAT
DBA_HIST_SQLTEXT
DBA_HIST_SQL_BIND_METADATA
DBA_HIST_SQL_PLAN
DBA_HIST_SQL_SUMMARY
DBA_HIST_SQL_WORKAREA_HSTGRM


Tom Kyte
February 17, 2011 - 11:36 am UTC

that is over ALL executions of the sql - not sql by sql.


I would still use AWR

Dana, February 17, 2011 - 10:15 am UTC

AWR still works better in my opinion. A period of one hour for regular reporting, or three to four minutes for known trouble spots, is easy to do. And doing delta comparisons, extracting top sql and so forth is also easy. AWR is a tool that does not get in my way, and does not make me recreate special scripts for every type of scenario.
I've seen queries that might normally take 1/10th of a second go to ten times the average execution time.
With AWR during that time period, I might see that a higher number of users squeezed PGA, causing longer execution times, or that contending sessions generated TX-Enq locks or something else entirely.

AWR provides so much value add to the process I consider it indispensable.
Tom Kyte
February 17, 2011 - 12:08 pm UTC

I agree.

awr

kg, February 18, 2011 - 1:35 pm UTC

Hi Tom/Dana,
I had a specific requirement to create a report of expensive sqls over a period of 15 or 30 days and provide it to the customer.

I have used awr reports and oem (10g, 11g) for some time now, and I have an idea of what they do and how they can help.

I know the cumulative timing information is kept in the awr tables. In situations where OEM/dbconsole is not available, analyzing all the reports generated from the backend becomes time consuming.

I am OK if, by querying the tables directly, I get timing information aggregated over multiple executions.

If you say this approach is not going to help by any means, I will drop it.

Thanks again for your help so far
Tom Kyte
February 18, 2011 - 2:31 pm UTC

why not just generate the text reports - zeroing in on the top sql section is trivial - there are plsql procedures to generate them if you don't even have access to the @ scripts.
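A sketch of that approach (the binds are placeholders - look up the dbid in v$database and the snap ids in dba_hist_snapshot first):

select output
  from table(dbms_workload_repository.awr_report_text(
                :dbid, :inst_num, :begin_snap, :end_snap));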

SQL Developer

Dana, February 18, 2011 - 2:42 pm UTC

SQL Developer works for this as well. It's a daily routine for me to generate AWR via SQL Developer. Quick, under a minute, you're done.

And in most systems I look at, what is most expensive is what executes during business hours, whether it's North America, Europe, Asia, etc. I expect to see completely different reports during off-business hours. Sometimes it's administrative tasks, sometimes batch jobs. But I surely don't count that toward most expensive SQL. I count what the applications run during business hours.



A reader, January 31, 2021 - 9:26 am UTC

How can I find the begin time and end time of a query, using its sql_id, for previous days - e.g. yesterday or the day before?
Connor McDonald
February 01, 2021 - 1:56 am UTC

It *might* be in v$active_session_history, otherwise you can get things like total execution time etc from AWR reports and potentially v$sqlstats
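A sketch of the ASH approach (11g and later; :sql_id is a placeholder, samples are roughly one per second, and only executions active long enough to be sampled will appear - the v$ view also ages out, after which dba_hist_active_sess_history retains one sample in ten):

select sql_exec_id,
       min(sql_exec_start) exec_start,
       max(sample_time)    approx_exec_end
  from v$active_session_history
 where sql_id = :sql_id
   and sql_exec_id is not null
 group by sql_exec_id
 order by exec_start;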
