Thanks for the question, Rasin.

Asked: April 09, 2006 - 1:09 pm UTC

Last updated: July 31, 2013 - 6:58 pm UTC

Version: 10.2

Viewed 50K+ times!

Tom,
can you explain what SQL Profiles are, what they are used for, how the CBO uses them, and how we can utilize this feature?

Thanks.

## and Tom said...

Ok, this is my conceptual explanation.

In the past, the DBA could analyze single 'things':

the DBA could analyze A TABLE - finding the number of blocks, rows, things like that - about A TABLE.

the DBA could analyze A COLUMN - get histograms, high/low values, number of distinct values, things like that - about A COLUMN.

the DBA could analyze AN INDEX - get the number of keys, leaf blocks, height, clustering factor.

the DBA could analyze A SYSTEM - find the single block IO time, the multi-block IO time, CPU speed and so on.

But most of our queries involve more than A TABLE, A COLUMN, AN INDEX, and so on... they are complex.

For example (blatant theft of example from Jonathan Lewis here - his CBO book is awesome, I remember him first using this example in 2003 at the NoCoug meeting...)

o How many of you are Pisces? 1/12th of the room will raise hands.

o How many of you are born in December? 1/12 of the room will raise hands.

Now, how many of you Pisces were born in December? (silence, none of them are - I know, I am Pisces :)

ops$tkyte@ORA10GR2> create table t
2 as
3 select decode( mod(rownum,12), 0, 'Capricorn',
4 1, 'Gemini',
5 2, 'Libra',
6 3, 'Aquarius',
7 4, 'Cancer',
8 5, 'Scorpio',
9 6, 'Pisces',
10 7, 'Aries',
11 8, 'Leo',
12 9, 'Sagittarius',
13 10, 'Taurus',
14 11, 'Virgo' ) zodiac,
15 add_months( to_date( '01-jan-2006' ), mod(rownum,12) ) dob
16 from all_objects
17 /

Table created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', method_opt=> 'for all columns size 254' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> set autotrace traceonly explain
ops$tkyte@ORA10GR2> select * from t;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 51292 | 751K| 52 (6)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T | 51292 | 751K| 52 (6)| 00:00:01 |
--------------------------------------------------------------------------

ops$tkyte@ORA10GR2> select * from t where zodiac = 'Pisces';

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 4107 | 61605 | 53 (8)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 4107 | 61605 | 53 (8)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ZODIAC"='Pisces')

ops$tkyte@ORA10GR2> select * from t where dob = to_date( '01-dec-2006' );

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 4494 | 67410 | 53 (8)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 4494 | 67410 | 53 (8)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("DOB"=TO_DATE('2006-12-01 00:00:00', 'yyyy-mm-dd
hh24:mi:ss'))

ops$tkyte@ORA10GR2> select * from t where zodiac = 'Pisces' and dob = to_date( '01-dec-2006' );

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 360 | 5400 | 53 (8)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 360 | 5400 | 53 (8)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ZODIAC"='Pisces' AND "DOB"=TO_DATE('2006-12-01
00:00:00', 'yyyy-mm-dd hh24:mi:ss'))

We can sort of see this with dynamic sampling - which is *similar* to a sql profile, but not stored like a profile would be:

ops$tkyte@ORA10GR2> select /*+ dynamic_sampling( t 4 ) */ * from t where zodiac = 'Pisces' and dob = to_date( '01-dec-2006' );

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 53 (8)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 1 | 15 | 53 (8)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ZODIAC"='Pisces' AND "DOB"=TO_DATE('2006-12-01
00:00:00', 'yyyy-mm-dd hh24:mi:ss'))

Note
-----
- dynamic sampling used for this statement

ops$tkyte@ORA10GR2> set autotrace off

the optimizer ran this query for us to parse:

SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS opt_param('parallel_execution_enabled',
'false') NO_PARALLEL(SAMPLESUB) NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE
*/ NVL(SUM(C1),:"SYS_B_0"), NVL(SUM(C2),:"SYS_B_1")
FROM
(SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ :"SYS_B_2"
AS C1, :"SYS_B_3" AS C2 FROM "T" "T" WHERE "T"."ZODIAC"=:"SYS_B_4" AND
"T"."DOB"=TO_DATE(:"SYS_B_5", :"SYS_B_6")) SAMPLESUB

and used the results of that as the estimated cardinality. SQL profiles would do the same - but would look at the entire query - cross table, column, whatever and do partial executes on it (try out the data in your database) and remember that in the dictionary as extended statistics.

It can use that next time to parse and optimize the query.

It'll remember other things too - like "this query is used to retrieve just the first rows, we should not optimize for all rows, but first rows - let us remember that in the future..."

So, a SQL profile is sort of like gathering statistics on A QUERY - which involves many tables, columns and the like....

In fact - it is just like gathering statistics for a query, it stores additional information in the dictionary which the optimizer uses at optimization time to determine the correct plan. The SQL Profile is not "locking a plan in place", but rather giving the optimizer yet more bits of information it can use to get the right plan.
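The usual way to drive that "gather statistics on A QUERY" process in 10g is the DBMS_SQLTUNE package. A minimal sketch of the workflow (the task and profile names here are made up for illustration; requires the ADVISOR privilege):

```sql
-- create and run a tuning task for one statement
DECLARE
   l_task VARCHAR2(64);
BEGIN
   l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
      sql_text  => q'[select * from t where zodiac = 'Pisces' and dob = to_date('01-dec-2006')]',
      task_name => 'zodiac_tune_task' );

   DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'zodiac_tune_task' );
END;
/

-- inspect the findings and recommendations
select DBMS_SQLTUNE.REPORT_TUNING_TASK( 'zodiac_tune_task' ) from dual;

-- if a profile is recommended, accept it
BEGIN
   DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
      task_name => 'zodiac_tune_task',
      name      => 'zodiac_profile' );
END;
/
```

Accepting the profile is what stores the extra statistics in the dictionary; from then on the optimizer picks them up transparently whenever it parses that statement.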

## Rating

(39 ratings)


### Minor correction

Tom K, April 10, 2006 - 1:48 pm UTC

Tom - This is a little nit-picky, but... when I first looked at your example, I couldn't figure out how you were getting Gemini (my birthday) to end up as June - it didn't look like it would come out right (but it was 2am so....). And it doesn't. I'm not into horoscopes, but most people know 'their sign', so this correction might help make the example clearer.

create table t
as
select decode (mod (rownum, 12), 0, 'Capricorn',
1, 'Aquarius',
2, 'Pisces',
3, 'Aries',
4, 'Taurus',
5, 'Gemini',
6, 'Cancer',
7, 'Leo',
8, 'Virgo',
9, 'Libra',
10, 'Scorpio',
11, 'Sagittarius') zodiac,
add_months (to_date ('01-jan-2006'), mod (rownum, 12)) dob
from all_objects
/

BTW, thanks for all the great books and info!

Regards, Tom (me too!)

April 11, 2006 - 10:58 am UTC

it would still be wrong as the signs tend to span months.... it was just a demo :)

### SQL PROFILES

Mark, April 02, 2008 - 12:52 pm UTC

Tom,

Do SQL Profiles get stale? If so, is there some way to refresh stale profiles like we do for stats?

April 02, 2008 - 1:43 pm UTC

you re-profile the sql statement.

Yes, they can get "stale" - basically, sql profiles are better estimated cardinality values - better in that they are based on sampling your where clause - not based on basic 'statistical projection'.

and if the underlying data changes - well, then they (the better cardinality values) would become wrong.

Eg: if someone changes the zodiac lookup table so that Pisces are born in December, the sample we ran that says "zero are" would be wrong.
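Re-profiling is just running the same DBMS_SQLTUNE workflow again. A sketch, assuming a profile named my_profile already exists (the name is hypothetical):

```sql
-- drop the stale profile, then re-run the tuning task and accept the new one
BEGIN
   DBMS_SQLTUNE.DROP_SQL_PROFILE( name => 'my_profile' );
END;
/

-- alternatively, disable it (without dropping) to compare plans with and without it
BEGIN
   DBMS_SQLTUNE.ALTER_SQL_PROFILE(
      name           => 'my_profile',
      attribute_name => 'STATUS',
      value          => 'DISABLED' );
END;
/
```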

### sql profile vs Sql plan management of 11g

Aman Sharma, April 03, 2008 - 6:22 am UTC

Hi sir,
It's an excellent explanation. Can you please explain the difference between sql profiles and the sql plan management / sql baselines of 11g?
regards
Aman....
April 03, 2008 - 8:07 pm UTC

sql baselines are more akin to storing query plans and ensuring those plans are used - they are the baseline plans.

sql profiles are what they are - cardinality information, subject to influencing a plan, but not dictating it.

### Outlines

Aru, June 17, 2009 - 2:02 am UTC

Hi Tom,
We have a new application and database (10gR2) which we think is running great at the moment. What I basically want to do is note the top 20 most frequently executed sql. Say 1 year from now, when we face some kind of performance degradation because these queries' optimizer plans change due to factors like increased data volume, or because the optimizer comes up with a sub-optimal plan - what can I do to compare the execution plan, access path, etc.? What exactly is the best method of storing this information, and which information (for example - explain plan) should I be looking at storing?
Thanks always,
Regards,
ARU.
June 17, 2009 - 11:24 am UTC

do what stats pack, AWR and people with databases do in general...

create a table, store it with an identifier that lets you group related sql's together.

insert into my_history (id,dt) values (s.nextval,sysdate);

insert into my_v$sql_history
select s.currval, .....
from v$sql
where ...

insert into my_v$sql_plan_history
select s.currval, ....
from v$sql_plan
where ....

and so on
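A concrete version of that sketch might look like the following (the table and sequence names, the column list, and the "top 20" filter are made-up illustrations; you need select privileges on the v$ views):

```sql
create sequence s;

create table my_history
( id number primary key,
  dt date );

-- shape the history table after the v$ view itself
create table my_v$sql_history as
select 0 id, sql_id, plan_hash_value, executions, buffer_gets, elapsed_time
  from v$sql
 where 1=0;

-- take one snapshot:
insert into my_history (id, dt) values (s.nextval, sysdate);

insert into my_v$sql_history
select s.currval, sql_id, plan_hash_value, executions, buffer_gets, elapsed_time
  from v$sql
 where executions > 1000;   -- whatever identifies your "top 20"

commit;
```

A year from now you compare the stored plan_hash_value (and the plan rows, if you also snapshot v$sql_plan) against what the optimizer generates then.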

### Outlines

Aru, June 17, 2009 - 6:17 pm UTC

Thanks Tom,
Will do as you suggested. Can I also couple this with stored outlines, so that when the optimizer plan changes to a suboptimal one, I can use the stored outline to test whether the original plan worked better? I'm not too sure if I should create profiles or use outlines for this kind of thing. Also, I was reading your comment on baselines - can I create a baseline for individual sql statements?
Regards,
ARU.
June 18, 2009 - 11:07 am UTC

outline = way to force a particular plan

profile = way to analyze/gather statistics for a query, resulting in the optimizer having more information, resulting in better plans being generated - not a specific particular plan like an outline

profiles and outlines are very different.

sql plan baselines are 11g, you said 10g, so not sure they apply to you

### One Observation regarding sql profiles.

Dhairyasheel Tawade, June 22, 2009 - 8:06 am UTC

Hello Tom

"In fact - it is just like gathering statistics for a query, it stores additional
information in the dictionary which the optimizer uses at optimization time to determine
the correct plan. The SQL Profile is not "locking a plan in place", but rather giving
the optimizer yet more bits of information it can use to get the right plan."

What I observed while tuning queries in my data warehouse using the sql tuning advisor (in Oracle Enterprise Manager dbconsole) is that the tuning advisor gives you a recommendation to implement a sql profile with a certain % improvement in query performance. When we see the details of this recommendation, it shows us two windows: one is the original explain plan window and the other is the new, improved explain plan window.

When I click the implement button is the new explain plan getting stored somewhere or just the stats are getting stored?

I've been using sql profiles in my 4.2 TB data warehouse for the last 1.5 years and have not seen performance degradation in any of the reports for which I've created profiles. We add up to 3GB of data every day. What would be the chances/frequency of profiles getting stale in such an environment?

Also, once in the sql tuning advisor's recommendations, there was a recommendation to create an index and another to create a profile, and the % improvement was greater if you selected the profile - how is that possible?
June 22, 2009 - 12:42 pm UTC

... When I click the implement button is the new explain plan getting stored
somewhere or just the stats are getting stored?
...

as I wrote, just the additional stats, it does NOT force a plan - it stores additional cardinality/selectivity information that - given everything as it exists in your database right now - would probably result in the modified plan.

the profiles have as great a probability of going stale as any other statistic does.

the sql profile will for example compute for a query like:

select * from t where x = 5 and y = 10;

the actual rows returned from x=5 and y=10 (whereas the optimizer on its own would estimate probability(x=5) TIMES probability(y=10)).
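Using the zodiac table from the original answer, that independence assumption can be computed by hand - a sketch, assuming table t from above still exists:

```sql
-- what the optimizer would guess on its own (selectivities multiplied)
-- versus the actual combined count (zero Pisces are born in December)
select round( count(*) * (1/12) * (1/12) ) guessed_rows,
       count( case when zodiac = 'Pisces'
                    and dob = to_date( '01-dec-2006', 'dd-mon-yyyy' )
               then 1 end ) actual_rows
  from t;
```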

if that answer remains constant over time, stats are not going to go 'stale', if it changes radically over time, it will

as for the last bit - indexes are not all goodness, they are not always "better than anything else". How is it "NOT" possible - why do you think this is improbable?

### Transfer SQL Profiles

Peter Langlands, August 04, 2009 - 6:33 am UTC

Tom, we have the same 10.2 database structure on multiple servers. How do we transfer a SQL Profile from one database to another?
August 04, 2009 - 3:19 pm UTC

http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_sqltun.htm#CACIAHIH

CREATE_STGTAB_SQLPROF Procedure

Creates the staging table used for copying SQL profiles from one system to another.

PACK_STGTAB_SQLPROF Procedure

Moves profile data out of the SYS schema into the staging table

UNPACK_STGTAB_SQLPROF Procedure

Uses the profile data stored in the staging table to create profiles on this system
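Putting those three calls together - a sketch, with a hypothetical staging table name:

```sql
-- on the SOURCE database: create the staging table and pack the profiles into it
BEGIN
   DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF( table_name => 'PROF_STAGE' );
   DBMS_SQLTUNE.PACK_STGTAB_SQLPROF( staging_table_name => 'PROF_STAGE' );
END;
/

-- move PROF_STAGE to the target database (export/import, data pump, db link...)

-- on the TARGET database: unpack the staged profiles
BEGIN
   DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(
      replace            => TRUE,
      staging_table_name => 'PROF_STAGE' );
END;
/
```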

### Creating SQL Profile

Rajeshwaran, Jeyabal, August 05, 2009 - 9:16 am UTC

Hi Tom,

Is it possible to create a SQL Profile other than via the dbms_sqltune.accept_sql_profile procedure, and associate it with a query? If yes, can you please show me how?

Thanks,
Rajesh
August 05, 2009 - 9:39 am UTC

there are various user interfaces to it in Oracle Enterprise Manager, SQL Developer, Visual Studio and such - but they all end up calling the APIs that do it. DBMS_SQLTUNE is that API.

### SQL Profile restrictions

Fredy Mercy, December 09, 2009 - 5:20 am UTC

Hi Tom,

1.
When I try to accept a SQL Profile I get the error message ORA-13787: Missing SQL profile for statement...
I'm using a 10.2 64-bit database on AIX.
When I try the same on a Windows XP machine (same database version), I'm able to create the same profile without problem.

Is it a bug in the Oracle database version for AIX, or are SQL Profiles not supported on the AIX platform?

2.

For example, stored outlines cannot be used with remote databases (so a stored outline cannot be created for a SQL statement that uses db links), or with multi-table inserts...

My SQL statement is fairly complex (it's about loading data from stage tables in my data warehouse), and I use a db link in my statement.

Oracle Warehouse Builder has generated SQL code with a lot of hints, subquery names, blank lines (the SQL takes about 100 lines), double quotes, db links...

Maybe there are some restrictions that confuse the Oracle database, and because of that Oracle refuses to use the SQL Profile?

December 10, 2009 - 1:58 pm UTC

1) they work on AIX and on all platforms - it would rather seem that you've possibly made a mistake somewhere? You don't say what you did to get that error.

2) sql profiles are not at all like stored outlines; they are very different. The profiling process will take what bits and pieces it can of the query, see if the estimated cardinalities are close to the actual observed ones and, if not, correct them - resulting in a better overall plan.

have you utilized support at all?

### SQL Profile restrictions 2 part

Fredx Mercy, December 10, 2009 - 5:46 am UTC

Hi Tom,

Regarding point 1 of my previous post (the ORA-13787 error) - it was my fault, and when I corrected it, I was also able to accept the SQL Profile on the AIX machine.

But my second point still stands. It seems that something in my complex SQL statement for loading records into the data warehouse prevents Oracle from using the SQL Profile. I can send you the SQL statement if you want (although it's more than 100 lines of code).

When I execute DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE, I get a signature value of 15926012898303819840, and when I query the dba_sql_profiles view, I get a signature value of 1,59260128983038E19. That means it should work, but the accepted SQL Profile is still not used
(I can see that from SELECT * FROM table(dbms_xplan.display(NULL,NULL,'basic +note')); after I do an explain plan for my SQL statement).

I've also tried storing my SQL in a CLOB in a table and pulling the stored SQL text back as input to the DBMS_SQLTUNE.CREATE_TUNING_TASK procedure, but still with no success.

I’ve tried to create and accept SQL Profile for my SQL statement from AWR, but without success too.

My conclusion is that there must be some restriction in the Oracle SQL Profile functionality that prevents the Oracle engine from using my complex DML statement (a multi-table insert which includes db links) generated by Oracle Warehouse Builder.

December 10, 2009 - 2:59 pm UTC

.. I can send you SQL statement if you want...

I don't want.

You say "I am able to accept sql profile on aix machine"

this:

... I’ve tried to create and accept SQL Profile for my SQL statement from AWR, but
without success too. ...

is very vague - sort of like "my car won't start, why not?". Define "without success" - what step in the process fails? Can you create the profile and accept it (it sounds like "yes") but you just don't see it being used?

details - lots of them.

### SQL Profile restrictions

Fredy Mercy, December 11, 2009 - 1:17 am UTC

Hi Tom,

As you can see from my previous post, I made a mistake in my code, and after correcting it, SQL Profile works fine for my test SQL.

Regarding my complex multi-table insert DML which also uses db links - it doesn't work.

There must be some restriction (like with stored outlines, which also cannot be used with db links and multi-table inserts). I've also consulted some other recognized members of the Oracle community, and they came to the same conclusion: SQL Profiles have restrictions, and they won't work with DML that uses multi-table insert and/or db links.

Please try to make a multi-table insert (insert all when...into...else...into) which also includes db links, and you'll see that you cannot make Oracle use it.

I've solved my problem with plan instability by manually correcting the statistics that caused the execution plan to change (DBMS_STATS.SET_TABLE_STATS).

Regards

Fredy

December 11, 2009 - 7:30 am UTC

... it
doesn’t work.
...

last time, define " it doesn't work"

My car won't start, it doesn't work.

Now - you fix it. Without seeing it, without having any information about the symptoms, without being told how it was working and what happened to make it not work - you can do nothing.

Oracle would be looking at the select component of the insert (multi-table or not). If you take the select out of the multi-table insert and profile it - does it "work"? (not the multi-table insert, just the select). That would be test #1: is the query subject to being tuned at all by sql profiles? If you get a profile to be used for the select all by itself, then we can dig further.
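One way to run that test, sketched against the zodiac example from the original answer (the profile name shown in the note is hypothetical): explain just the select, then look for the profile in the Note section.

```sql
explain plan for
   select * from t where zodiac = 'Pisces' and dob = to_date( '01-dec-2006' );

select * from table( dbms_xplan.display( null, null, 'basic +note' ) );

-- when the profile is picked up, the Note section contains a line like:
--   - SQL profile "my_profile" used for this statement

-- you can also check which cached cursors are using a profile:
select sql_id, child_number, sql_profile
  from v$sql
 where sql_profile is not null;
```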

... I’ve solved my problem with plan instability by manually correcting statistics
...

umm, now I'm confused - why wouldn't you have done that in the first place? bad stats = wrong cardinality = wrong plan. The advisor should have told you "stale stats, fix them", did it not?

baskar.l, December 12, 2009 - 7:10 am UTC

Hi Tom,

Surprised to see the cost of the same when i create the table..its cost is 174 where ur example shows a cost of 54..Guess in the DB something is missing!!??

```
SQL> set autotrace traceonly explain
SQL> select * from t;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   339K|  4975K|   174   (2)| 00:00:03 |
|   1 |  TABLE ACCESS FULL| T    |   339K|  4975K|   174   (2)| 00:00:03 |
--------------------------------------------------------------------------
```
thanks,

December 14, 2009 - 8:15 am UTC

what the heck is "ur", are you missing keys on your keyboard?

we have different machines

we have different multiblock read counts

we have different number of rows

did you , ummm, see the number of rows and bytes?

My output:

```|  1 | TABLE ACCESS FULL| T  | 51292 |  751K|  52  (6)| 00:00:01 |
```

`|   1 |  TABLE ACCESS FULL| T    |   339K|  4975K|   174   (2)| 00:00:03 |`

do you think that full scanning 339,000 rows using 4,975k of storage might take more resources (have a higher cost) than 51,292 rows using 751k of storage?

By the way, the only place I can find the number 54 on this page is.... In your comments. I presume you meant 52.

### SQL Profile restrictions part 3

Fredy Mercy, December 14, 2009 - 4:19 am UTC

Hi Tom,

I've tried to create a SQL Profile for the select part of my multi-table insert DML as you suggested, and the SQL Profile is now used (the select part of my multi-table insert is fairly complex and includes remote tables via db links...).

Although the SQL Profile for the select part is now used, Oracle still refuses to use that SQL Profile when I explain the whole multi-table insert DML statement.

I come to the following conclusions:

1. SQL Profiles don't work for multi-table insert DML; this restriction exists, and you can't find anything about it in the Oracle manuals.

2. If you create a SQL Profile for only the select part of the multi-table insert DML, the SQL Profile still won't be used.

I hope you’ll agree with me.

Regards,

Fredy

December 14, 2009 - 9:31 am UTC

I'll agree with you if you take your testcase to support, upload it into a service request and get confirmation.

Brian Peasey, March 18, 2010 - 1:38 pm UTC

Hi,

I have a bind variable that specifies the financial_batch_id for the query I'm profiling. If the bind variable is a value that doesn't exist in the table, does that affect the outcome of the advisor's recommendation?

Here I have two profiles for the same query, first is with a valid value used by the bind variable, the second is for a value that doesn't exist.

It looks like I answered my own question, but I would like to hear your input on this.

```select f.task_name, f.message, r.benefit
and f.owner = 'SYSTEM';```

my_sql_tuning_task A potentially better execution plan was found for this statement. 9975
my_sql_tuning_task2 A potentially better execution plan was found for this statement. 788

Thank you.
Brian P.
March 18, 2010 - 2:32 pm UTC

If you tell me to tune the process of finding all of the white balls in a bucket I know has only blue and red - you'll get a bad answer from me - if in real life you only look for red and blue balls.

The profile will have the same issue, we'll have very much undercounted something - and that will artificially make some part of a plan look really cheap and we'll vote to do that - when in real life, that part of the plan will be very expensive.

This is one reason you want to generate profiles on the real machine, with the real user workload, so we have valid inputs to the profile process. Namely, the inputs they've actually used.

### How to force SQL profile

Sachin, October 05, 2010 - 5:13 am UTC

Hi Tom,

I have a sql statement that has 2 plans in the AWR - one good and one bad. I want to create a profile with the good plan. How do I force the optimizer to use the good plan using a profile?

October 05, 2010 - 12:21 pm UTC

as long as the profile can be used - it will be. If something in the environment prevents the profile from being used - it won't be.

by any chance - when you say "awr" do you really mean a plan managed baseline?

need more details here - versions, conditions, situation you are in...

### How to force SQL profile

Sachin, October 05, 2010 - 9:35 pm UTC

Hi Tom,

Sorry for the incomplete information..

sql_TEXT:
=========
SELECT TO_CHAR(MIN(VALUE_DATE),'DD-MM-YYYY HH24:MI:SS') FROM CTD
WHERE
ACID = :1
AND
PSTD_DATE > to_date( :2 , 'DD-MM-YYYY HH24:MI:SS')
AND
VALUE_DATE+0 <= to_date( :3 , 'DD-MM-YYYY HH24:MI:SS')
AND
tran_date >= to_date( :4 ,'DD-MM-YYYY')
AND
TRAN_SUB_TYPE !='IP'
AND
TRAN_SUB_TYPE != 'IC';

BAD PLAN (Index range scan on IDX_HTD_ACCT_VAL_DT)
========

Plan hash value: 1356846995

--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | SORT AGGREGATE | | 1 | 37 | | |
| 2 | VIEW | CTD | 2 | 74 | 3 (0)| 00:00:01 |
| 3 | UNION-ALL | | | | | |
| 4 | TABLE ACCESS BY INDEX ROWID| EOD_TRAN_DETAIL_TABLE | 1 | 37 | 1 (0)| 00:00:01 |
| 5 | INDEX RANGE SCAN | IDX_DTD_ACCT_ID_23 | 1 | | 1 (0)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| HIST_TRAN_DTL_TABLE | 1 | 36 | 2 (0)| 00:00:01 |
| 7 | INDEX RANGE SCAN | IDX_HTD_ACCT_VAL_DT | 10 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

1 - :1 (VARCHAR2(30), CSID=1): '01550097'
2 - :2 (VARCHAR2(30), CSID=1): '20-11-2009 23:59:59'
4 - :4 (VARCHAR2(30), CSID=1): '19-11-2009'

GOOD PLAN: (Index range scan on IDX_HTD_ACCT_ID)
==========

Plan hash value: 1898960002

--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | SORT AGGREGATE | | 1 | 37 | | |
| 2 | VIEW | CTD | 2 | 74 | 2 (0)| 00:00:01 |
| 3 | UNION-ALL | | | | | |
| 4 | TABLE ACCESS BY INDEX ROWID| EOD_TRAN_DETAIL_TABLE | 1 | 37 | 1 (0)| 00:00:01 |
| 5 | INDEX RANGE SCAN | IDX_DTD_ACCT_ID_23 | 1 | | 1 (0)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| HIST_TRAN_DTL_TABLE | 1 | 36 | 1 (0)| 00:00:01 |
| 7 | INDEX RANGE SCAN | IDX_HTD_ACCT_ID | 1 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

1 - :1 (VARCHAR2(30), CSID=1): '02217030'
2 - :2 (VARCHAR2(30), CSID=1): '28-07-2010 15:45:39'
4 - :4 (VARCHAR2(30), CSID=1): '27-07-2010'

As a workaround until we find the solution to this problem, I want to create a profile so that the optimizer picks up the good plan (plan hash value 1898960002).

oracle version: 10.2.0.4
October 06, 2010 - 4:51 am UTC

so, create the profile??? A profile is simply a set of extended statistics that we generate in response to you telling us to. A profile however does not force a given plan, nor is it designed to tell the optimizer what the plan should be - it is a set of additional statistics - better estimated cardinalities.

do you have the profile created already?

or do you really mean "using sql plan management"

### How to force SQL profile

sachin, October 08, 2010 - 5:07 am UTC

No, I have not created profile. I am going to create stored outline.

my mistake. I thought, we can force the plan with profile.

Thanks Tom

```
SQL> explain plan for SELECT    /*+
2        BEGIN_OUTLINE_DATA
3        IGNORE_OPTIM_EMBEDDED_HINTS
4        OUTLINE_LEAF(@"SEL$2")
5        OUTLINE_LEAF(@"SEL$3")
6        OUTLINE_LEAF(@"SET$1")
7        OUTLINE_LEAF(@"SEL$1")
8        NO_ACCESS(@"SEL$1" "CTD"@"SEL$1")
9        INDEX_RS_ASC(@"SEL$3" "HTD"@"SEL$3" ("HTD"."ACID" "HTD"."TRAN_DATE"))
10        INDEX_RS_ASC(@"SEL$2" "ETD"@"SEL$2" ("ETD"."ACID"))
11        END_OUTLINE_DATA
12    */
13  TO_CHAR(MIN(VALUE_DATE),'DD-MM-YYYY HH24:MI:SS') FROM CTD
14  WHERE
15  ACID =  '01550097'
16  AND
17  PSTD_DATE > to_date( '20-11-2009 23:59:59'  , 'DD-MM-YYYY HH24:MI:SS')
18  AND
19  VALUE_DATE+0 <= to_date( '19-11-2009 20:40:43'  , 'DD-MM-YYYY HH24:MI:SS')
20  AND
21  tran_date >= to_date( '19-11-2009' ,'DD-MM-YYYY')
22  AND
23  TRAN_SUB_TYPE !='IP'
24  AND
25  TRAN_SUB_TYPE != 'IC';

Explained.

Elapsed: 00:00:00.01

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 250584926

--------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                       |     1 |    37 |     5   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE                |                       |     1 |    37 |            |          |
|   2 |   VIEW                         | CTD                   |     2 |    74 |     5   (0)| 00:00:01 |
|   3 |    UNION-ALL                   |                       |       |       |            |          |
|*  4 |     TABLE ACCESS BY INDEX ROWID| EOD_TRAN_DETAIL_TABLE |     1 |    37 |     1   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN          | IDX_DTD_ACCT_ID_10    |     1 |       |     1   (0)| 00:00:01 |
|*  6 |     TABLE ACCESS BY INDEX ROWID| HIST_TRAN_DTL_TABLE   |     1 |    36 |     4   (0)| 00:00:01 |
|*  7 |      INDEX RANGE SCAN          | IDX_HTD_ACCT_ID       |    32 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("ETD"."PSTD_DATE">TO_DATE(' 2009-11-20 23:59:59', 'syyyy-mm-dd hh24:mi:ss') AND
"ETD"."TRAN_DATE">=TO_DATE(' 2009-11-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
INTERNAL_FUNCTION("ETD"."VALUE_DATE")+0<=TO_DATE(' 2009-11-19 20:40:43', 'syyyy-mm-dd
hh24:mi:ss') AND "ETD"."TRAN_SUB_TYPE"<>'IP' AND "ETD"."TRAN_SUB_TYPE"<>'IC')
5 - access("ETD"."ACID"='01550097')
6 - filter("HTD"."PSTD_DATE">TO_DATE(' 2009-11-20 23:59:59', 'syyyy-mm-dd hh24:mi:ss') AND
INTERNAL_FUNCTION("HTD"."VALUE_DATE")+0<=TO_DATE(' 2009-11-19 20:40:43', 'syyyy-mm-dd
hh24:mi:ss') AND "HTD"."TRAN_SUB_TYPE"<>'IP' AND "HTD"."TRAN_SUB_TYPE"<>'IC')
7 - access("HTD"."ACID"='01550097' AND "HTD"."TRAN_DATE">=TO_DATE(' 2009-11-19 00:00:00',
'syyyy-mm-dd hh24:mi:ss') AND "HTD"."TRAN_DATE" IS NOT NULL)

28 rows selected.

```

October 11, 2010 - 11:32 am UTC

No, I have not created profile. I am going to create stored outline.

a profile is not an outline and vice versa.

have you read about creating a stored outline for a query? It is fairly straightforward - you just need to make sure the person running the query has enabled that category of stored outlines to be used at runtime to "force" the outline.
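A sketch of that, with hypothetical outline and category names (note that CREATE OUTLINE is DDL, so the statement text must use literals, not binds):

```sql
-- create the outline under a category...
create or replace outline ctd_outline
for category ctd_fixes
on
select to_char(min(value_date),'DD-MM-YYYY HH24:MI:SS')
  from ctd
 where acid = '01550097';

-- ...and each session that should use it must enable that category:
alter session set use_stored_outlines = ctd_fixes;
```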

### SQL Profile and non bind variable queries

Andre, January 26, 2011 - 7:26 pm UTC

Hello Tom,

In some cases, after using the SQL Advisor, it reports that there is a SQL profile that would improve the query performance by 99%. As this query does not use bind variables, and its parameters change a lot, I would like to check what the SQL profile is changing in the query so I can put some hints in the views used by the query to improve overall performance, not just for one query. Is it possible?

My worst scenario is a view that returns approximately 350,000 rows in total. In the original explain plan the estimated row count for this view is 7000; with the profile it goes to 3432. I am not sure if this was the big deal that changed the query execution time from 1 hour to 60 sec.

Thanks,

Andre.
February 01, 2011 - 3:10 pm UTC

### SQL Profiling

Faisal, May 04, 2011 - 10:09 pm UTC

Hi Tom,

Great article, it cleared a lot of misconceptions that I had :)

I am trying to profile a huge SQL using SQL Profiler and getting TIMEOUT!

-------------------------------------------------------------------------------
Workload Type : Single SQL Statement
Scope : COMPREHENSIVE
Time Limit(seconds) : 120
Completion Status : INTERRUPTED
Started at : 05/05/2011 10:46:09
Completed at : 05/05/2011 10:49:24

-------------------------------------------------------------------------------
Error: ORA-13639: The current operation was interrupted because it timed out.
-------------------------------------------------------------------------------

Is there a way to avoid hitting the TIMEOUT?

Thanks again :)
May 06, 2011 - 9:28 am UTC

increase the time limit.

### How silly of me...

Faisal, May 04, 2011 - 10:38 pm UTC

Hi Tom,

Please ignore my previous question regarding the timeout

I just found out I can set the CREATE_TUNING_TASK time_limit parameter for this.
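For anyone else hitting ORA-13639, raising that limit might look like this as a sketch (the sql_id is the one used elsewhere in this thread; the task name and one-hour limit are illustrative):

```sql
declare
    l_task varchar2(64);
begin
    -- create the tuning task with a larger time budget (in seconds)
    l_task := dbms_sqltune.create_tuning_task(
                  sql_id     => 'br9j50wkw90b5',        -- your statement's sql_id
                  time_limit => 3600,                    -- illustrative: one hour
                  task_name  => 'big_sql_tuning_task' ); -- illustrative name
    dbms_sqltune.execute_tuning_task( task_name => l_task );
end;
/
```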

Let's say there are views ViewA and ViewB, and a third view ViewC (which is a union of the first two).

How does one profile these?

Can I just profile ViewC, since SET operators are not supported by the SQL Profiler (DBMS_SQLTUNE)?

Please whack me if I am wrong :)

### SQL Profile recommendation

Rajeshwaran, Jeyabal, May 09, 2011 - 3:57 pm UTC

```rajesh@ORA11GR2> drop table t1 purge;

Table dropped.

Elapsed: 00:00:00.17
rajesh@ORA11GR2>
rajesh@ORA11GR2> create table t1
2  nologging
3  as
4  select     mod(rownum,10) as x,
5             sysdate        as y,
7             a.*
8  from all_objects a;

Table created.

Elapsed: 00:00:05.26
rajesh@ORA11GR2>
rajesh@ORA11GR2> create index t1_ind_01 on t1(z) nologging;

Index created.

Elapsed: 00:00:00.45
rajesh@ORA11GR2>
rajesh@ORA11GR2> begin
2     dbms_stats.gather_table_stats(
3     ownname=>user,
4     tabname=>'T1',
5     estimate_percent=>dbms_stats.auto_sample_size,
6     method_opt=>'for all indexed columns size 254');
7  end;
8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.75
rajesh@ORA11GR2> variable x varchar2(50);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2> set autotrace traceonly explain statistics;
rajesh@ORA11GR2> select x,object_name
2  from t1
3  where z = :x;

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 35825 |  2658K|   443   (1)| 00:00:06 |
|*  1 |  TABLE ACCESS FULL| T1   | 35825 |  2658K|   443   (1)| 00:00:06 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("Z"=:X)

Statistics
----------------------------------------------------------
1  recursive calls
0  db block gets
1599  consistent gets
0  redo size
486  bytes sent via SQL*Net to client
415  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
1  rows processed

rajesh@ORA11GR2> set autotrace off;
rajesh@ORA11GR2>
rajesh@ORA11GR2> select x,object_name
2  from t1
3  where z = :x;

X OBJECT_NAME
---------- ------------------------------
1 ICOL$

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> select sql_id,operation
2  from v$sql_plan
3  where plan_hash_value=3617692013;

SQL_ID        OPERATION
------------- ------------------------------
br9j50wkw90b5 SELECT STATEMENT
br9j50wkw90b5 TABLE ACCESS

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> begin
3        sql_id => 'br9j50wkw90b5',
5  end;
6  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09
rajesh@ORA11GR2>
2  from dual;
ERROR:
ORA-06512: at "SYS.DBMS_SQLTUNE", line 880
ORA-06512: at line 1

no rows selected

Elapsed: 00:00:00.01
rajesh@ORA11GR2>
rajesh@ORA11GR2>```

Tom:

Why is there NO sql profile recommendation from the Oracle optimizer for using this index t1_ind_01? Is that due to insufficient histogram buckets?

```rajesh@ORA11GR2>
rajesh@ORA11GR2> select column_name,count(*)
2  from user_tab_histograms
3  where table_name ='T1'
4  group by column_name
5  /

COLUMN_NAM   COUNT(*)
---------- ----------
Z                   1

Elapsed: 00:00:00.01
rajesh@ORA11GR2>```

May 09, 2011 - 5:55 pm UTC

Rajesh,

you and I just this afternoon discussed that the optimizer only looks at the first 32 bytes of the data - I gave you a link to an article that goes into that in depth. did you read it?

### SQL Profile recomendation

Rajeshwaran, Jeyabal, May 09, 2011 - 9:39 pm UTC

Thanks Tom, I got it. I just wanted to cross-verify with you for confirmation.

### 2 PLAN_HASH_VALUE

A reader, June 22, 2011 - 11:57 am UTC

Hi Tom,

I have a sql_id with 2 PLAN_HASH_VALUEs; the cost is the same
but the elapsed time (ms) and CPU time (ms) are different.
The optimizer is choosing the wrong PLAN_HASH_VALUE today.
How can I force the optimizer to use the correct PLAN_HASH_VALUE?

thanks.
June 22, 2011 - 12:14 pm UTC

no versions?

assuming 10g and above, use a query plan baseline if you have one plan you would prefer always be used.
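On 11g, pinning the preferred plan into a baseline might be sketched like this (the sql_id and plan_hash_value placeholders must come from your own v$sql):

```sql
declare
    l_loaded pls_integer;
begin
    -- load the known-good plan from the cursor cache into a SQL plan baseline
    l_loaded := dbms_spm.load_plans_from_cursor_cache(
                    sql_id          => '&your_sql_id',       -- placeholder
                    plan_hash_value => &good_plan_hash,      -- placeholder
                    fixed           => 'YES' );              -- prefer this plan
    dbms_output.put_line( l_loaded || ' plan(s) loaded' );
end;
/
```

A FIXED baseline tells the optimizer to prefer that plan over any newly generated alternatives.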

### SQL Profile

Suraj, July 04, 2011 - 10:46 am UTC

Hi Tom,

I have used the SQL Tuning Advisor for a specific query which was running slow. As per the recommendation I have implemented the SQL profile. My question is: if the data changes in the underlying tables, will the SQL profile do automatic adjustments?

July 05, 2011 - 11:51 am UTC

a SQL profile is very much like statistics on a query - it is very much like statistics period.

Just like with dbms_stats - if the underlying data changes radically - then the statistics might need to change. a SQL profile might need to be periodically re-run in order to keep the statistics for the query up to date.

### Creating a New Sql Profile From an Existing One

A reader, August 03, 2011 - 1:31 pm UTC

Is there a way to duplicate a sql profile for multiple iterations of a particular sql statement?

We have a heavily used peoplesoft financials implementation and periodically, a specific sql statement running from an app engine starts to perform very poorly.

i.e.
INSERT INTO PS_BP_REFD_TAO6 (PROCESS_INSTANCE, KK_TRAN_ID, KK_TRAN_DT, KK_TRAN_LN, KK_REFD_ID, KK_REFD_DT, KK_REFD_LN) SELECT S.PROCESS_INSTANCE, S.KK_TRAN_ID, S.KK_TRAN_DT, S.KK_TRAN_LN, H.KK_TRAN_ID, H.KK_TRAN_DT, L.KK_TRAN_LN FROM PS_BP_APV_S_TAO6 S, PS_KK_SOURCE_HDR H, .....

where PS_BP_REFD_TAO6 and PS_BP_APV_S_TAO6 will change; the end of the table names will vary between TAO1 and TAO12.

We've successfully implemented 3 sql profiles for TAO4,TAO6,TAO8 iterations while the poorly performing sql was running. It has made a huge difference and the performance is 1000% better.

I was just wondering if there is a way to copy these, or create new ones from them, for the other 9 iterations of the sql statement - or if that wouldn't be recommended anyway.

August 04, 2011 - 7:47 am UTC

If the identifiers are changing - it isn't going to work. The profile wouldn't apply to those objects.

### sql profiles always

A reader, September 12, 2011 - 1:16 am UTC

Tom,

Since SQL Profiles will try to produce more optimal plans based on extra data, is there any reason why you wouldn't always use SQL Profiles for everything?

September 12, 2011 - 8:34 am UTC

is there a reason you don't generate histograms always?

is there a reason we don't generate extended statistics for every set of possible column combinations?

and so on...

generating those (sql profiles) consumes a ton of resources, you'd have to keep doing it if the data changes (they are mostly just more statistics after all) and most of the time you don't need them. And a sql profile works sql by sql - before I used a sql profile - I'd look at the query (estimated rows versus actual rows in the plan) and see if there wasn't something obvious that extended statistics - or just better representative statistics in general would fix - as that would tend to fix an entire class of problem rather than a single sql statement.
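The estimated-versus-actual comparison described above can be done with dbms_xplan, for example:

```sql
-- run the statement once with rowsource statistics captured...
select /*+ gather_plan_statistics */ count(*)
  from t
 where x = 5;

-- ...then compare the E-Rows and A-Rows columns, line by line
select *
  from table( dbms_xplan.display_cursor( null, null, 'ALLSTATS LAST' ) );
```

Plan lines where E-Rows and A-Rows differ by orders of magnitude are the ones pointing at missing or unrepresentative statistics.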

Peter, September 23, 2011 - 10:20 pm UTC

Tom,

Regarding the sql profile and daily oracle auto stats gathering job, I do have a difficult situation.
We have some mission-critical queries with response times under 30ms. Most of the time this is fine, but sometimes the execution plan suddenly changes, the response time goes up to over 1s, and the whole system panics. We usually have to re-gather the stats on the related tables, invalidate the plan, and then use a profile to get the best possible plan and implement it.
my questions are:
1) if the plan is good, can we stop the daily stats gathering job (or change it to monthly or yearly) and just use a profile to fix the stats if the plan goes wrong?
2) why does the optimizer generate a bad plan when a good plan has been running for a long time? Is the daily stats collection bad, or can the optimizer not generate a better plan on a busy system?
3) if the same sql has two or more plans, is it possible for the two plans to be used by different sessions at the same time?
4) what is the best option to stabilize the plan?

env. 11gr1 on solaris

Thanks.

September 24, 2011 - 2:02 pm UTC

yes, no, maybe.

we gather stats to:

a) get plans to change
b) get plans to stay the same :)

Say you have a table that is pretty much at a stable state as far as size and data skew goes. You might think "I don't have to gather stats on it anymore" - and that might really be true. HOWEVER, what if that table was the EMP table and there was an EMPNO column always increasing, and a HIREDATE column always increasing. Say you hired 1,000 people over the last six months (and fired/retired the same). Now, if you ask for everyone that was hired last month, and assuming your last stats gather was a year ago, the optimizer would guess "1 row" - even though it is more like 167 rows. You might need to maintain the statistics on those columns to get plans to stay the same over time. You don't need to gather statistics in this case, you could just use dbms_stats.set_column_stats to tell us the new high/low values.
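That last point might be sketched as follows (table and column names follow the EMP example; the low value shown is a placeholder - you would keep the table's real minimum):

```sql
declare
    l_distcnt number;  l_density number;
    l_nullcnt number;  l_avgclen number;
    l_srec    dbms_stats.statrec;
begin
    -- read the current column statistics for HIREDATE...
    dbms_stats.get_column_stats( user, 'EMP', 'HIREDATE',
        distcnt => l_distcnt, density => l_density,
        nullcnt => l_nullcnt, srec => l_srec, avgclen => l_avgclen );

    -- ...replace just the low/high values (push the high value up to "now")
    -- and write the statistics back, without a full gather
    l_srec.epc    := 2;
    l_srec.bkvals := null;
    dbms_stats.prepare_column_values( l_srec,
        dbms_stats.datearray( date '1980-01-01' /* placeholder low */, sysdate ) );
    dbms_stats.set_column_stats( user, 'EMP', 'HIREDATE',
        distcnt => l_distcnt, density => l_density,
        nullcnt => l_nullcnt, srec => l_srec, avgclen => l_avgclen );
end;
/
```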

1) I would suggest using a baseline for these critical queries - so that they stay using the same plan until YOU allow it to change.

2) The optimizer is a mathematical model, it has tipping points - they are "single row" changes - for example:

```ops$tkyte%ORA11GR2> create table t
2  as
3  select substr(object_name, 1, 1 ) str, all_objects.*
4    from all_objects
5   order by dbms_random.random;
Table created.

ops$tkyte%ORA11GR2> create index t_idx on t(str,object_name);
Index created.

ops$tkyte%ORA11GR2> begin
2      dbms_stats.gather_table_stats
3      ( user, 'T',
4        method_opt => 'for all indexed columns size 254',
5        estimate_percent=>100 );
6  end;
7  /
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select count(subobject_name) from t t1 where str = 'T';

…
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    19 |   296   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE              |       |     1 |    19 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T     |   292 |  5548 |   296   (0)| 00:00:04 |
|*  3 |    INDEX RANGE SCAN          | T_IDX |   292 |       |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

ops$tkyte%ORA11GR2> insert into t
2  select 'T', all_objects.*
3    from all_objects
4   where rownum <= 1;

1 row created.

ops$tkyte%ORA11GR2> begin
2      dbms_stats.gather_table_stats
3      ( user, 'T',
4        method_opt => 'for all indexed columns size 254',
5        estimate_percent=>100 );
6  end;
7  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select count(subobject_name) from t t2 where str = 'T';

…
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    19 |   297   (1)| 00:00:04 |
|   1 |  SORT AGGREGATE    |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS FULL| T    |   293 |  5567 |   297   (1)| 00:00:04 |
---------------------------------------------------------------------------

```

All it took was one more row to be returned and we switched from index range scan to full scan.

Now - do you really think that between 292 rows and 293 rows - there is so big of a difference that one must use an index and the other not? No, not really - but given that this is a model - there must be a tipping point somewhere - if not 293 rows - should it full scan at 500? If so, ask yourself "is there a big difference between 499 and 500 - should 499 use an index and 500 full scan". You can see where that is going - you have to make the choice at some point.

In 11g - you might consider looking into sql plan management - which can make the optimizer "more cautious" about switching a plan - it'll test the plan first:

http://docs.oracle.com/docs/cd/E11882_01/server.112/e23633/preup.htm#UPGRD00232

3) yes, absolutely.

4) you will want to look at query plan baselines and/or sql plan management.

### Excellent !

A reader, September 24, 2011 - 10:54 pm UTC

Thank you !

### How to generate SQL profiles for long running queries?

Prashant, September 26, 2011 - 6:20 am UTC

Hi

I have a query that runs perpetually and never comes back :(
I tuned this in a couple of ways:
a) I hinted the query (LEADING) and got it to execute faster
b) I altered the session to use optimizer dynamic sampling = 3

In both cases I found the query using a better execution path and completing in 25 seconds.

I was wondering if I could use a SQL profile, as it would mean that I don't need to hint the query with LEADING or make session changes with dynamic sampling.

I tried a few times by creating a tuning task for the sql_id; it runs for 30 mins and then says "anonymous block completed".

ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

shows me the error reporting that the task terminated after running for 30 mins. Can you please suggest how to use SQL profiles for long running queries?
September 26, 2011 - 7:39 am UTC

I was wondering if I could use SQL profile as it would mean that I don't need
to hint the query with LEADING or sampling or make session changes with dynamic
sampling

Yes, you can. I'd recommend just using enterprise manager; it'll let you select all of the options easily for creating the profile. Otherwise, read up on the dbms_sqltune package and its options - you can increase the timeout for it.

### Thanks Tom..

Prashant, September 27, 2011 - 10:32 am UTC

### question

A reader, March 08, 2012 - 9:16 am UTC

How do I create a SQL profile for the query below, where there is an index on employee_id?

Select * from employee
where employee_id = 1000

Thanks

March 09, 2012 - 9:12 am UTC

the same way you create a sql profile for any query? do you know how to generate profiles in general? why is this a special case?

### sql profile

A reader, March 12, 2012 - 7:47 am UTC

I don't know how to create a SQL profile. Can you show me how to do this with the test case I gave you before, so that it will always use the index on employee_id?

March 12, 2012 - 8:11 am UTC

do you have enterprise manager (EM) up and running?

here is a quick and dirty "command line" way - but EM would be the way to go

```ops$tkyte@ORA10G> create or replace procedure p
2  as
3          cursor c1
4          is
5          select object_id, object_name
6            from sqlprof
7           order by object_id;
9          l_object_id   sqlprof.object_id%type;
10          l_object_name sqlprof.object_name%type;
11  begin
12          open c1;
13          for i in 1 .. 10
14          loop
15              fetch c1 into l_object_id, l_object_name;
16              exit when c1%notfound;
17              -- ....
18          end loop;
19  end;
20  /
Procedure created.
```

there is our code, it is "bad" - it should have used "first_rows(10)" so the optimizer might know to use an index on object_id to retrieve the data - this query full scans the entire table and orders it and then we just fetch the first ten rows...

```SELECT OBJECT_ID, OBJECT_NAME FROM SQLPROF ORDER BY OBJECT_ID

call     count       cpu    elapsed      query    current        rows
------- ------  -------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0           0
Execute      1      0.00       0.00          0          0           0
Fetch       10      0.07       0.10        659          0          10
------- ------  -------- ---------- ---------- ----------  ----------
total       12      0.07       0.10        659          0          10

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 410     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
10  SORT ORDER BY (cr=659 pr=0 pw=0 time=101152 us)
47487   TABLE ACCESS FULL SQLPROF (cr=659 pr=0 pw=0 time=47604 us)

```

so, we profile it:

```ops$tkyte@ORA10G> declare
2          l_sql_id v$sql.sql_id%type;
3  begin
4    select sql_id  into l_sql_id
5      from v$sql
6     where sql_text = 'SELECT OBJECT_ID, OBJECT_NAME FROM SQLPROF ORDER BY OBJECT_ID';
7    dbms_output.put_line(
8      dbms_sqltune.create_tuning_task
9      ( sql_id    => l_sql_id,
10        task_name => 'sqlprof_query' ) || ' in place...' );
11    dbms_sqltune.execute_tuning_task
12    ( task_name => 'sqlprof_query' );
13  end;
14  /
PL/SQL procedure successfully completed.
```

and it tells us what to do:

```ops$tkyte@ORA10G> SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'sqlprof_query')
2    FROM DUAL;

--------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
…

-------------------------------------------------------------------------------
SQL ID  : 3zfpa86satsm3
SQL Text: SELECT OBJECT_ID, OBJECT_NAME FROM SQLPROF ORDER BY OBJECT_ID

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------

1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
A potentially better execution plan was found for this statement.

Recommendation (estimated benefit: 99.45%)
------------------------------------------
Consider accepting the recommended SQL profile.
execute dbms_sqltune.accept_sql_profile(task_name => 'sqlprof_query')
```
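Accepting that recommendation (the report output is truncated above) is done with dbms_sqltune.accept_sql_profile; a sketch using the task name from the example:

```sql
declare
    l_profile varchar2(64);
begin
    -- accept the profile recommended by the tuning task 'sqlprof_query'
    l_profile := dbms_sqltune.accept_sql_profile(
                     task_name => 'sqlprof_query',
                     name      => 'sqlprof_query_profile' ); -- illustrative name
    dbms_output.put_line( 'accepted profile: ' || l_profile );
end;
/
```

Once accepted, the profile is used automatically the next time the statement is parsed.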

### thanks

A reader, March 12, 2012 - 10:14 am UTC

Thank you. I appreciate your help.

### Why isn't my query using the profile I enabled?

Robert, May 17, 2012 - 9:02 am UTC

Tom,

I have a long running job(s) that call the same sql_id.
For some reason it picked a bad plan. (not primary concern why at this point).
I ran sql advisor and it gave me the more correct/better plan.
I applied/enabled this sql profile for the sql_id, and v$sqlarea shows this sql_profile... but the v$sqlarea.plan_hash_value is not the one of the plan provided by the sql advisor.
I flushed the shared pool to try to get it to use this plan... but no luck. It *did* change the plan... but to something a lot worse (not the one from the sql profile, i.e. v$sqlarea.sql_profile), and when I deleted that sql profile it went back to the original one that I was trying to fix.

Why isn't this sql_id using the plan from the sql_profile found by sql advisor?
Do I have to stop/start the query? The v$sqlarea.first_load_time is remaining the same (several days ago) even after I applied the sql profile and flushed the shared pool.

Thank you,

Robert.

Actually, when I applied/enabled the sql profile for this sql_id, the plan did change... but it changed to something worse... the sql_profile is the right one, but the plan_hash_value is not the one that goes with this sql_profile.

The query is called over and over by parallel jobs (it has been in the shared pool for several days).

The job has been running for several days (i.e. v$sql.first_load_time shows several days ago).
For some reason it picked a bad plan when it started and the job is still running.

### I forgot version (above)...... running 10.2.0.5

Robert, May 17, 2012 - 9:03 am UTC

Tom.... version is 10.2.0.5
Thanks.

### SQL Profile instead of tuning the query?

Francisco, April 12, 2013 - 8:19 pm UTC

Hi, Tom,

I'm still not sure about how/when I should utilize SQL profiles. Some days ago, after implementing the recommended SQL profile for a query, we got the benefit suggested by the SQL Tuning Advisor: close to 98%. Before that, the developers said they were not able to improve performance by rewriting the query. After getting such a large improvement through the SQL profile I have some questions in mind:
1) Were the developers right? Is that gain only possible with a SQL profile and not by rewriting the query? The benefit was so great that I cannot believe there is nothing wrong with the SQL code.
2) Since a SQL profile is just a matter of collecting statistics for the query, should I accept that the poor past performance should not be blamed on the SQL code? (The statistics for the objects are frequently updated.)
3) I thought that the SQL Tuning Advisor / SQL profiles were good tools for finding the SQL queries most in need of work on the code (or data model). Now, should I satisfy myself with the implementation of the SQL profile, or should I keep trying to improve the query code until I can disable this SQL profile? Would it be possible to get that better plan by tuning the query and not enabling the SQL profile?
Now I'm concerned about developers wishing to just rely on enabling SQL profiles instead of trying to tune their queries.

Below I included the original execution plan, the plan implemented by SQL Profile and the query code, taken from tkprof report.

Pamplona

==> Original plan, before running the SQL Tuning Advisor:
-----------------------------------------------------

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   7681      2.08       2.31          0          0          0           0
Fetch    15362    231.61    2054.54     575200    3675284          0        7681
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total    23044    233.69    2056.86     575200    3675284          0        7681

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 127 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE
(cr=462 pr=226 pw=0 time=1132037 us)

1 1 1 VIEW
(cr=462 pr=226 pw=0 time=1132007 us cost=91 size=104 card=1)

1 1 1 HASH GROUP BY
(cr=462 pr=226 pw=0 time=1131994 us cost=91 size=118 card=1)

1 1 1 VIEW
(cr=462 pr=226 pw=0 time=1131666 us cost=91 size=118 card=1)

1 1 1 HASH GROUP BY
(cr=462 pr=226 pw=0 time=1131633 us cost=91 size=63 card=1)

1 1 1 HASH JOIN
(cr=462 pr=226 pw=0 time=1131219 us cost=90 size=63 card=1)

466 466 466 NESTED LOOPS
(cr=433 pr=226 pw=0 time=729831 us)

5533 5533 5533 NESTED LOOPS
(cr=30 pr=21 pw=0 time=96074 us cost=78 size=4692 card=138)

1 1 1 TABLE ACCESS BY INDEX ROWID ECADVINCULO
(cr=4 pr=0 pw=0 time=69 us cost=4 size=14 card=1)

1 1 1 INDEX RANGE SCAN IDX2ECADVINCULO
(cr=3 pr=0 pw=0 time=28 us cost=2 size=0 card=1)
(object id 158253)

5533 5533 5533 INDEX RANGE SCAN IDX6EPAGHISTRUBRICACAPA
(cr=26 pr=21 pw=0 time=92117 us cost=5 size=0 card=1343)
(object id 197077)

466 466 466 TABLE ACCESS BY INDEX ROWID EPAGHISTORICORUBRICAVINCULO
(cr=403 pr=205 pw=0 time=1060909 us cost=74 size=1900 card=95)

153 153 153 TABLE ACCESS BY INDEX ROWID ECALFOLHATRIB
(cr=29 pr=0 pw=0 time=2951 us cost=11 size=2204 card=76)

153 153 153 INDEX RANGE SCAN IDXCALFOLHATRIB1
(cr=5 pr=0 pw=0 time=320 us cost=2 size=0 card=76)
(object id 211428)

==> Plan after running the SQL Tuning Advisor and enabling the recommended SQL profile:
------------------------------------------------------------------------------------

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   7681      1.98       2.17          1          3          0           0
Fetch    15362     26.40      34.25       2647    1796431          0        7681
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total    23044     28.38      36.43       2648    1796434          0        7681

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 127 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE
(cr=61 pr=0 pw=0 time=2830 us)

1 1 1 VIEW
(cr=61 pr=0 pw=0 time=2814 us cost=10 size=104 card=1)

1 1 1 HASH GROUP BY
(cr=61 pr=0 pw=0 time=2806 us cost=10 size=118 card=1)

1 1 1 VIEW
(cr=61 pr=0 pw=0 time=2486 us cost=9 size=118 card=1)

1 1 1 HASH GROUP BY
(cr=61 pr=0 pw=0 time=2473 us cost=9 size=63 card=1)

1 1 1 NESTED LOOPS
(cr=61 pr=0 pw=0 time=1995 us)

29 29 29 NESTED LOOPS
(cr=60 pr=0 pw=0 time=1916 us cost=8 size=63 card=1)

153 153 153 MERGE JOIN CARTESIAN
(cr=33 pr=0 pw=0 time=1435 us cost=5 size=43 card=1)

153 153 153 TABLE ACCESS BY INDEX ROWID ECALFOLHATRIB
(cr=29 pr=0 pw=0 time=549 us cost=2 size=29 card=1)

153 153 153 INDEX RANGE SCAN IDXCALFOLHATRIB1
(cr=5 pr=0 pw=0 time=232 us cost=2 size=0 card=1)
(object id 211428)

153 153 153 BUFFER SORT
(cr=4 pr=0 pw=0 time=341 us cost=3 size=14 card=1)

1 1 1 TABLE ACCESS BY INDEX ROWID ECADVINCULO
(cr=4 pr=0 pw=0 time=21 us cost=3 size=14 card=1)

1 1 1 INDEX RANGE SCAN IDX2ECADVINCULO
(cr=3 pr=0 pw=0 time=14 us cost=2 size=0 card=1)
(object id 158253)

29 29 29 INDEX RANGE SCAN IDX6EPAGHISTRUBRICACAPA
(cr=27 pr=0 pw=0 time=596 us cost=2 size=0 card=1)
(object id 197077)

1 1 1 TABLE ACCESS BY INDEX ROWID EPAGHISTORICORUBRICAVINCULO
(cr=1 pr=0 pw=0 time=75 us cost=3 size=20 card=1)

SELECT :B4 AS CDVINCULO ,
MAX(N.CDPESSOA) AS CDPESSOA ,
MAX(N.NUPOSSUIINATIVOSIRH) AS NUPOSSUIINATIVOSIRH ,
NVL(SUM(N.VLBASEATUAL - :B14 ),0) AS VLBASEATUAL ,
NVL(SUM(N.VLDEDUZIDOATUAL),0) AS VLDEDUZIDOATUAL ,
NVL(SUM(N.VLBASEOUTROS),0) AS VLBASEOUTROS ,
NVL(SUM(N.VLDEDUZIDOOUTROS),0) AS VLDEDUZIDOOUTROS ,
NVL(SUM(N.VLBASEOUTROSFORASIGRH),0) AS VLBASEOUTROSFORASIGRH,
NVL(SUM(N.VLABAT65ANOSOUTROSSIGRH),0) AS VLABAT65ANOSOUTROSSIGRH
FROM
(
SELECT CDPESSOA ,
CDVINCULO ,
SUM(NUPOSSUIINATIVOSIRH) AS NUPOSSUIINATIVOSIRH,
SUM(VLBASEATUAL) AS VLBASEATUAL ,
SUM(VLDEDUZIDOATUAL) AS VLDEDUZIDOATUAL ,
CASE
WHEN NVL( SUM(VLBASEOUTROVINCMESMAFOL),0) <> 0
THEN NVL( SUM(VLBASEOUTROVINCMESMAFOL),0)
ELSE NVL( SUM(VLBASEOUTROVINCOUTRAFOL),0)
END VLBASEOUTROS,
CASE
WHEN NVL( SUM(VLBASEOUTROVINCMESMAFOL),0) <> 0
THEN NVL( SUM(VLDEDUZOUTROVINCMESMAFOL),0)
ELSE NVL( SUM(VLDEDUZOUTROVINCOUTRAFOL),0)
END VLDEDUZIDOOUTROS ,
SUM(VLBASEOUTROSFORASIGRH) AS VLBASEOUTROSFORASIGRH,
SUM(VLABAT65ANOSOUTROSSIGRH) AS VLABAT65ANOSOUTROSSIGRH
FROM
(
SELECT CDPESSOA ,
CDVINCULO ,
NUPOSSUIINATIVOSIRH,
CASE
WHEN CDRUBRICAAGRUPAMENTO = :B10
AND CDFOLHAPAGAMENTO = :B13
AND CDVINCULO = :B4
THEN NVL(VLPAGO,0)
END AS VLBASEATUAL,
CASE
WHEN CDRUBRICAAGRUPAMENTO = :B10
AND
(
CDFOLHAPAGAMENTO <> :B13
OR CDVINCULO <> :B4
)
AND
(
FLMESMOTIPOFOLHA = 'N'
)
THEN NVL(VLPAGO,0)
END AS VLBASEOUTROVINCOUTRAFOL,
CASE
WHEN CDRUBRICAAGRUPAMENTO = :B10
AND
(
CDFOLHAPAGAMENTO <> :B13
OR CDVINCULO <> :B4
)
AND
(
FLMESMOTIPOFOLHA = 'S'
)
THEN NVL(VLPAGO,0)
END AS VLBASEOUTROVINCMESMAFOL,
CASE
WHEN CDRUBRICAAGRUPAMENTO = :B10
AND CDFOLHAPAGAMENTO <> :B13
THEN NVL(VLPAGO,0)
END AS VLBASEOUTROSFORASIGRH,
CASE
WHEN CDFOLHAPAGAMENTO = :B13
AND CDVINCULO = :B4
THEN
CASE
WHEN CDRUBRICAAGRUPAMENTO IN (:B9 )
THEN NVL(VLPAGO,0)
END
END AS VLDEDUZIDOATUAL,
CASE
WHEN
(
CDFOLHAPAGAMENTO <> :B13
OR CDVINCULO <> :B4
)
AND
(
FLMESMOTIPOFOLHA = 'N'
)
THEN
CASE
WHEN CDRUBRICAAGRUPAMENTO IN (:B9 )
THEN NVL(VLPAGO,0)
END
END AS VLDEDUZOUTROVINCOUTRAFOL,
CASE
WHEN
(
CDFOLHAPAGAMENTO <> :B13
OR CDVINCULO <> :B4
)
AND
(
FLMESMOTIPOFOLHA = 'S'
)
THEN
CASE
WHEN CDRUBRICAAGRUPAMENTO IN (:B9 )
THEN NVL(VLPAGO,0)
END
END AS VLDEDUZOUTROVINCMESMAFOL,
CASE
WHEN CDFOLHAPAGAMENTO <> :B13
THEN
CASE
WHEN CDRUBRICAAGRUPAMENTO IN (:B8 )
THEN NVL(VLPAGO,0)
END
END AS VLABAT65ANOSOUTROSSIGRH
FROM
(
SELECT
/*+ index(RV IDX6EPAGHISTRUBRICACAPA) */
V.CDPESSOA ,
V.CDVINCULO ,
RV.CDFOLHAPAGAMENTO ,
RV.CDRUBRICAAGRUPAMENTO,
FP.NUANOMESIMPLANTACAO ,
(
CASE
WHEN V.CDSITUACAOPREVIDENCIARIA = 2
AND V.CDVINCULO <> :B4
THEN 1
ELSE 0
END) AS NUPOSSUIINATIVOSIRH,
SUM(RV.VLPAGAMENTO) AS VLPAGO ,
MAX(
CASE
WHEN FP.CDTIPOFOLHA = :B12
AND FP.CDTIPOCALCULO = :B6
THEN 'S'
ELSE 'N'
END) AS FLMESMOTIPOFOLHA
FROM EPAGHISTORICORUBRICAVINCULO RV
INNER JOIN ECALFOLHATRIB FP
ON FP.SGTRIBUTO = :B3
AND FP.TPMES = :B2
AND FP.CDCALCULOPAI = :B1
AND FP.CDFOLHAPAGAMENTO = RV.CDFOLHAPAGAMENTO
ON V.CDVINCULO = RV.CDVINCULO
WHERE V.CDPESSOA = :B11
AND
(
RV.CDRUBRICAAGRUPAMENTO IN (:B10 , :B9 , :B8 ,:B7 )
)
AND
(
NOT
(
:B6 = 3
AND FP.CDFOLHAPAGAMENTO = :B5
)
)
GROUP BY V.CDPESSOA ,
V.CDVINCULO ,
RV.CDFOLHAPAGAMENTO ,
RV.CDRUBRICAAGRUPAMENTO,
FP.NUANOMESIMPLANTACAO ,
(
CASE
WHEN V.CDSITUACAOPREVIDENCIARIA = 2
AND V.CDVINCULO <> :B4
THEN 1
ELSE 0
END)
)
)
GROUP BY CDPESSOA,
CDVINCULO
) N;
April 22, 2013 - 6:41 pm UTC

forgetting for a moment whether the query could be rewritten or not - you have to know what a sql profile is/does and why query plans go bad.

what if you had a query:

select * from t where x = 5;

and we told you:

a) t is a table with 1,000,000 rows in it
b) x has 10 distinct values in T
c) there are about 100 rows per block in t
d) the values of x are uniformly distributed throughout the table (every block has about 8/10 different values of x on it)
e) there is an index on t(x)

Now, given those statistics - what would you probably do as an optimizer? what plan would you come up with?

I'd probably come up with a full scan - because of the fact that we have to read just about every block in the table anyway (rows are uniformly distributed, the clustering factor on the index on t(x) is near the number of rows in the table).

Now, you run the query - and it returns two rows. Yes, just two rows.

Now, tell me - how do you rewrite the query: select * from t where x=5 to perform better?

answer: you don't, you figure out why the optimizer isn't able to get an accurate estimation in the first place and correct that.

corrective action in this case might be:

1) gather histograms or
2) dynamic sampling or
3) SQL PROFILE <<<<===== could be the answer
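For the SQL PROFILE route, a minimal sketch of letting the SQL Tuning Advisor build one (the sql_id and profile name below are placeholders, not values from this thread):

```sql
-- create and run a tuning task against the problem statement
variable task varchar2(64)
exec :task := dbms_sqltune.create_tuning_task( sql_id => 'your_sql_id' );
exec dbms_sqltune.execute_tuning_task( :task );

-- read the findings
select dbms_sqltune.report_tuning_task( :task ) from dual;

-- if (and only if) the report recommends a profile:
exec dbms_sqltune.accept_sql_profile( task_name => :task, name => 'MY_PROFILE' );
```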

a sql profile is a method to gather statistics on a query - it gives the optimizer a chance at getting the right plan by getting the right estimated cardinalities.

see: http://jonathanlewis.wordpress.com/2006/11/09/dbms_xplan-in-10g/ for a method to see if the estimates are way off from the actuals, if they are - you might need better statistics, more statistics, dynamic sampling, sql profiling, or maybe a better query.
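The technique in that link boils down to something like this sketch; the comparison to make is E-Rows (estimated) versus A-Rows (actual) at each plan step:

```sql
-- run the statement once with runtime row counts collected
select /*+ gather_plan_statistics */ * from t where x = 5;

-- then show estimated vs. actual rows for the last execution
select * from table(
   dbms_xplan.display_cursor( null, null, 'ALLSTATS LAST' ) );
```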

I see this: /*+ index(RV IDX6EPAGHISTRUBRICACAPA) */ and pretty much hate the query already... why were they hinting and getting a bad plan? if you have a hinted query and are not getting a perfect plan, you are not hinting very smart :) tell them to avoid hints.

sql profiling is a good thing, it helps the optimizer get the right estimates and thus the right plan - without hinting.
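A quick way to confirm a profile is actually being picked up at runtime is the sql_profile column of v$sql; it is null for any child cursor that did not use one:

```sql
select sql_id, child_number, sql_profile, plan_hash_value
  from v$sql
 where sql_profile is not null;
```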

### Does this apply to PL/SQL

A reader, July 24, 2013 - 2:05 pm UTC

A SQL profile was created for a SQL statement, but the profile is not being used when the SQL is called from a PL/SQL procedure. Can you please advise?
July 31, 2013 - 6:58 pm UTC

plsql rewrites your sql, for example:

```ops$tkyte%ORA11GR2> declare
2          l_dummy varchar2(1) := 'x';
3  begin
4          for x in (select 'look for me' x from dual where dummy = l_dummy)
5          loop
6                  null;
7          end loop;
8  end;
9  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select sql_text from v$sql where sql_text like '%''look for me''%';

SQL_TEXT
----------------------------------------------------------------------------------------------------
declare  l_dummy varchar2(1) := 'x'; begin  for x in (select 'look for me' x from dual where dummy =
l_dummy)  loop   null;  end loop; end;

SELECT 'look for me' X FROM DUAL WHERE DUMMY = :B1
select sql_text from v$sql where sql_text like '%''look for me''%'
```

so, did you set up the sql profile for the right statement?

### BIND VARIABLES

EINAR, September 11, 2014 - 8:54 am UTC

Hi Tom,

does SQL_PROFILE on 11gR2 take bind variables into account, i.e. will the sql_profile evaluate the bind variable value?

Test case :

```

create table einar.prueba as select * from dba_objects;

SQL> update einar.prueba set owner='EINAR';
77612 rows updated.
SQL> commit;
update einar.prueba set owner='PABLO' where rownum < 2;
1 row updated.
SQL> commit;

create index einar.idx on einar.prueba(owner);

Take stats :

exec dbms_stats.gather_table_stats (ownname=> 'EINAR', tabname=> 'PRUEBA', ESTIMATE_PERCENT =>100, granularity=>'all', cascade=>true, method_opt=>'for all indexed columns');

Histograms check :

select
endpoint_number,
endpoint_number - nvl(prev_endpoint,0)  frequency,
chr(to_number(substr(hex_val, 2,2),'XX')) ||
chr(to_number(substr(hex_val, 4,2),'XX')) ||
chr(to_number(substr(hex_val, 6,2),'XX')) ||
chr(to_number(substr(hex_val, 8,2),'XX')) ||
chr(to_number(substr(hex_val,10,2),'XX')) ||
chr(to_number(substr(hex_val,12,2),'XX')),
endpoint_actual_value
from    (
select
endpoint_number,
lag(endpoint_number,1) over(
order by endpoint_number
)                                                       prev_endpoint,
to_char(endpoint_value,'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')hex_val,
endpoint_actual_value
from
dba_tab_histograms
where
table_name = 'PRUEBA'  and owner='EINAR'  and column_name = 'OWNER'
)
order by
endpoint_number
;
ENDPOINT_NUMBER  FREQUENCY CHR(TO_NUMBER(SUBS
          77611      77611 EINAQÿ
          77612          1 PABLNÿ

variable valor varchar2(5);
exec :valor := 'EINAR';

PL/SQL procedure successfully completed.

select * from einar.prueba c where owner= :valor;

I accept from AWR plan ( where bind variable is EINAR) and
plan hash value makes a FULL TABLE SCAN.

Now accepting a sql profile from awr which does a FULL TABLE SCAN :

@create_sql_profile_awr.sql
Enter value for sql_id: 4njqgqdxf5fgs
Enter value for plan_hash_value: 3279922394
Enter value for profile_name (PROF_sqlid_planhash):
Enter value for category (DEFAULT):
Enter value for force_matching (FALSE):

Check hints :

SELECT extractValue(value(h),'.') AS hint
FROM sys.sqlobj$data od, sys.sqlobj$ so,
table(xmlsequence(extract(xmltype(od.comp_data),'/outline_data/hint'))) h
WHERE so.name = '&sql_profile_name'
AND so.signature = od.signature
AND so.category = od.category
AND so.obj_type = od.obj_type
AND so.plan_id = od.plan_id;
Enter value for sql_profile_name: PROF_4njqgqdxf5fgs_3279922394
old   4: WHERE so.name = '&sql_profile_name'
new   4: WHERE so.name = 'PROF_4njqgqdxf5fgs_3279922394'

IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
DB_VERSION('11.2.0.3')
ALL_ROWS
OUTLINE_LEAF(@"SEL\$1")
FULL(@"SEL\$1" "PRUEBA"@"SEL\$1")

Now try bind variables, I executed several times with both values :

variable valor varchar2(5);
exec :valor := 'EINAR';

PL/SQL procedure successfully completed.

select * from einar.prueba c where owner= :valor;

variable valor varchar2(5);
exec :valor := 'PABLO';

PL/SQL procedure successfully completed.

select * from einar.prueba c where owner= :valor;

Ends up that both bind values  have the same plan :

COL BIND_SENSI FORMAT a10
COL BIND_AWARE FORMAT a10
COL BIND_SHARE FORMAT a10
SELECT executions, sql_id, child_number, IS_BIND_SENSITIVE AS "BIND_SENSI", IS_BIND_AWARE AS "BIND_AWARE", IS_SHAREABLE AS "BIND_SHARE", sql_text, PLAN_HASH_VALUE
FROM v$sql WHERE sql_id='4njqgqdxf5fgs';

EXECUTIONS SQL_ID        CHILD_NUMBER BIND_SENSI BIND_AWARE BIND_SHARE SQL_TEXT                                       PLAN_HASH_VALUE
---------- ------------- ------------ ---------- ---------- ---------- ---------------------------------------------- ---------------
         2 4njqgqdxf5fgs            0 Y          N          N          select * from einar.prueba where owner= :valor      3279922394
         2 4njqgqdxf5fgs            1 Y          Y          N          select * from einar.prueba where owner= :valor      3279922394
         2 4njqgqdxf5fgs            2 Y          Y          Y          select * from einar.prueba where owner= :valor      3279922394

```

Even if it seems that it has noticed that bind peeking would change the plan, it keeps the same execution plan.

If I don't use the profile, it executes two different plans.

Does the sql_profile take the bind variable values being used into account?

### reduce parse count ?

Santosh, January 09, 2015 - 6:56 am UTC

Our product generates a lot of SQL with bind variables. Until they make a change, will a SQL profile generated from a query with bind variables reduce the parse count? These queries (updates) always update only one row, as the where condition is based on a unique condition.
