Question and Answer

Connor McDonald

Thanks for the question, Sandeep.

Asked: June 09, 2016 - 2:25 pm UTC

Last updated: October 20, 2017 - 1:33 am UTC

Version: 11.2.0.4.0

Viewed 10K+ times

You Asked

Hi Tom,

I am aware you have already clarified this subject many times.

I have a table which stores customer transaction information.
Data is loaded into the table continuously.
Since it is a transaction table, on a given day there may be 10 transactions by one customer and 1,000 by another, so the data is skewed.
Let's assume my knowledge of adaptive cursor sharing is nearly zero.
The first time a query is run against the DB for a given customer_id [first time for this id], it takes 6-10 minutes (let's accept that the query itself is tuned).
From then on, for the same id, the time taken gradually decreases and it is eventually done in seconds.
For another customer_id the first-time and subsequent times are different, but the behavior is the same.
For all of the customer_ids the execution plan is the same.

cursor_sharing=exact

Could you please help, at the least, in categorizing the behavior?
When I check with the below query
SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware
FROM v$sql
WHERE sql_text = 'my-query';

I see the
is_bind_sensitive=N
and
is_bind_aware=N

Does this mean adaptive cursor sharing is disabled?

Will adaptive cursor sharing be of any help in this case?

How can I, at the least, maintain a uniform response-time pattern?

Thanks and Regards,
Sandeep

and Connor said...

If the two "is_bind_..." parameters are "N", then adaptive cursor sharing (ACS) is not coming into play.

You haven't provided the actual query being run, but there are two things you need before ACS will be relevant:

1) bind variables in your sql
2) a histogram on the table/columns concerned

When those two are true, we'll do bind peeking, i.e. take a look at the values provided in the bind variables.

You can validate that this is happening by using DBMS_XPLAN to look at the plans for the SQL you have run, with "peeked_binds" in the format clause.
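For example, something like this (the sql_id is a placeholder - take it from the v$sql query above):

select *
from   table(dbms_xplan.display_cursor(
         sql_id          => 'your_sql_id_here',   -- placeholder
         cursor_child_no => null,                 -- null = show all child cursors
         format          => 'TYPICAL +PEEKED_BINDS'));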

So when we see such a scenario, where the optimizer believes that the value of the bind variable could impact the plan, that first sql (or more accurately the first child) will be marked as bind sensitive (is_bind_sensitive=Y), which means we'll hang on to some cardinality feedback info post execution to use for subsequent executions.

On the next execution, if that cardinality feedback tells us we could have come up with a different plan for a provided bind variable value, that is when you will see a new child. This will have is_bind_aware=Y. At that point, the first child is marked as no longer shareable, because we don't want to re-use it any more (because it is not bind aware).

Hope this helps.

Addenda: "a histogram on the table/columns concerned" is actually incorrect. The histogram may influence the optimizer in deciding there is skew and hence a difference in selectivity, but a histogram is not mandatory for the optimizer to make that decision.


Comments

ACS value

A reader, June 10, 2016 - 8:47 am UTC

Having these two conditions, what would be the best value for the ACS parameter?
Connor McDonald
June 10, 2016 - 11:23 am UTC

I don't know what you mean.

Sandeep Mishra, June 10, 2016 - 9:05 am UTC

Hi Connor,

Thank You.

This is basically a customer transaction table:
customer (customer_id, col_1, col_2, transaction_date)
The table is partitioned by transaction_date - 60 partitions for 60 days.
Every day the partition older than 60 days gets dropped, so no global index is used, to avoid the cost of index rebuilding. A local index is present on each of the columns transaction_date and customer_id.
The data is definitely not uniform, that is, it is skewed.
Only the following kinds of queries are executed by the application:
1) Last five transactions for a customer
2) Transactions between two dates
The table size is more than 200 GB for 60 days of data.
To cover all possible scenarios (for example, a customer did a transaction 60 days ago, another 3 after 30 days, and another 4 today), there is no option other than scanning the complete table.
Another customer is doing nearly 1 or 2 transactions every day, so you can imagine the skewed nature of this data set.
So could you please elaborate on the use of histograms: will they be of any help in reducing the response time, on which of the two columns (customer_id or transaction_date) should I implement a histogram, and, if possible, how do I do it?
Thanks and Regards,
Sandeep



Connor McDonald
June 10, 2016 - 11:28 am UTC

2) Transactions between two dates

Since you partition by day, a simple full scan (with partition pruning) is probably going to be the most efficient and consistent mechanism for this. So a histogram is unlikely to be helpful (or harmful), since you'll be doing full scans.

1) Last five transactions

Perhaps in this case a composite index (customer_id, transaction_date) is the means for this, because then you don't really *care* how many transactions they have - you just walk backwards down the index for the last 5 entries.
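A rough sketch, using the hypothetical names from the table description above (whether you make it local or global is a trade-off between partition-drop maintenance and having to probe each partition's index):

create index customer_txn_ix on customer (customer_id, transaction_date);

-- "last five transactions for a customer": walk the index backwards
select *
from  ( select c.*
        from   customer c
        where  c.customer_id = :cust_id
        order  by c.transaction_date desc )
where rownum <= 5;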

Followup again

Ghassan, June 10, 2016 - 5:56 pm UTC

Connor
"
1) bind variables in your sql
2) a histogram "
Having this, what value do you suggest is best for cursor_sharing? Exact?

Sandeep
There is no benefit from these indexes - see the exec plan. I would also suggest subpartitioning by hash on the customer_id, but we have no idea what the queries look like. Indeed, it looks like the ideal is to get maximum benefit from pruning.

Adaptive Cursor Sharing and Session_Cached_Cursors.

Rajeshwaran, Jeyabal, June 12, 2016 - 8:44 am UTC

http://oracle-randolf.blogspot.com/2011/01/adaptive-cursor-sharing.html

Sandeep - by any chance are these SQLs run from PL/SQL? If so, ACS doesn't kick in for SQL within PL/SQL.

Bug 8357294: ADAPTIVE CURSOR SHARING DOESN'T WORK FOR SQL FROM PL/SQL

PL/SQL has famously enjoyed the so-called "soft-parse avoidance" optimization
for embedded SQL ever since its inception. Then, in 10.1, the benefit was
brought to native dynamic SQL, but only for "execute immediate". Notice that
queries executed using a ref cursor (whether it is opened statically or
dynamically) do not enjoy the "soft-parse avoidance" optimization.

It seems that this was overlooked when adaptive cursor sharing was introduced.

In 11g, PL/SQL does adaptive cursor sharing when you are using EXECUTE IMMEDIATE, but not when you are using embedded SQL (for example an implicit cursor, like a FOR loop).
The bug says that setting SESSION_CACHED_CURSORS=0 fixes this problem (which in turn leads to increased parsing).

This problem is not just related to PL/SQL: for any application that tries to avoid parse calls, neither ACS nor "cardinality feedback" kicks in.
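A small sketch of the two forms (the procedure and table names are made up for illustration):

create or replace procedure get_txn_count (p_cust_id number) as
  l_cnt pls_integer;
begin
  -- embedded (static) SQL: PL/SQL caches the cursor, so per the bug above ACS may not engage
  select count(*) into l_cnt
  from   customer_txn
  where  customer_id = p_cust_id;

  -- native dynamic SQL via EXECUTE IMMEDIATE: normal cursor sharing applies, so ACS can engage
  execute immediate
    'select count(*) from customer_txn where customer_id = :b1'
    into l_cnt using p_cust_id;
end;
/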
Connor McDonald
June 12, 2016 - 11:46 am UTC

good input.

Ghassan, June 13, 2016 - 8:47 am UTC

Hi Connor
Do you still not understand my question?
Connor McDonald
June 13, 2016 - 10:08 am UTC

The only value I would choose for cursor_sharing is exact, unless there is a reason (e.g. a 3rd party app) that makes "force" necessary.

Sandeep Mishra, June 13, 2016 - 10:47 am UTC

Hi Connor,

I have a table with 30 partitions, from 2nd Jan to 31st Jan.
Indexes are defined.
Suppose today is 28th of Jan.
When I query the table for 1st Jan to 25th Jan, the execution plan uses indexes.
When I query for 26th Jan to 28th Jan, it goes for complete table scan.
When I checked the index got last analyzed on 25th of Jan.
And the STALE_STATS=Yes.

Can I conclude, the behavior is totally attributed to the stale statistics.

But for another table, though the scenario is the same, it does not do a full table scan; it uses the index.

Could you please help understanding this behavior.

Thanks and Regards,
Sandeep
Connor McDonald
June 13, 2016 - 11:05 am UTC

"Can I conclude, the behavior is totally attributed to the stale statistics."

It is *possible*, but with this information I can't give you anything definitive.

A *potential* cause could be:

- the stats suggests the max date is (say) 25th
- when you ask for 26th => 28th, the stats suggests that those partitions are empty (since the stats think the max is 25th).
- so the quickest way to scan 3 empty partitions... is to full scan them.
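One quick way to check whether that is plausible is to look at when the stats were last gathered and whether they are flagged stale, at table and partition level (the table name below is a placeholder):

select table_name, partition_name, object_type,
       num_rows, last_analyzed, stale_stats
from   user_tab_statistics
where  table_name = 'YOUR_TABLE'
order  by partition_position;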

Sandeep Mishra, June 13, 2016 - 11:32 am UTC

Hi Connor/All

Thank You, for the valuable feedback.

Could I please ask you to let me know what else I need to check in order to categorize the issue?

Thanks and Regards,
Sandeep

Sandeep Mishra, June 13, 2016 - 4:02 pm UTC

Hi Connor,

I am not able to conclude anything from the behavior below; could you please help? I am sorry for not giving the query - my focus is on understanding the optimizer behavior; if required, I will provide it. The underlying table is partitioned by date.

sequence of execution
---------------------
1] Query=> where [ customer_id_1 , range = from date_1 to date_2 ]
2] Query=> where [ customer_id_2 , range = from date_1 to date_2 ]
3] Query=> where [ customer_id_3 , range = from date_1 to date_2 ]

In case 1] it takes its own sweet time: 3-4 minutes.

Executions 2] and 3] complete within a few seconds.

I tried different date ranges; for any given range, the first execution takes time.

The execution plan does not change.

Can I conclude that, because the date range changes as a literal in the query, the first execution for a given range (say 1st Jan to 5th Jan, and then say 7th Jan to 15th Jan) goes for a fresh parse? The date comparison is done as below:

mydate between trunc(to_date('date-string-1','')) and trunc(to_date('date-string-2',''))

And for the same date range but a different customer_id, why are the subsequent executions faster?

Kindly help understanding.

Thanks and Regards,
Sandeep
Chris Saxon
June 14, 2016 - 3:50 am UTC

I'd head down this route:

1) A 10046 trace to see where the time was lost
2) statistics_level = all, to pick up the rowsource information
3) perhaps 10053 trace to check optimizer details

Trace it

Rajeshwaran, Jeyabal, June 13, 2016 - 5:51 pm UTC

mydate between trunc(to_date('date-string-1','')) and trunc(to_date('date-string-2',''))
And for the same date range, but different customer_id, why the subsequent executions are faster?


Hope you have bind variables in place, since that would lead to cursor sharing.

Trace those SQLs using dbms_monitor.session_trace_enable(waits=>true, binds=>true). That helps to get more info about the execution; we could even see why it took more time on the first execution and what is causing the slowness here.
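Something like this for the current session (a sketch - run the slow statement between the two calls, then process the trace file with tkprof):

exec dbms_monitor.session_trace_enable(waits => true, binds => true)

-- ... run the slow query here ...

exec dbms_monitor.session_trace_disable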

Sandeep Mishra, June 14, 2016 - 1:46 pm UTC

Hi Connor/Rajeshwaran

Thank you for the valuable feedback.
I have done the SQL tracing for the same date range but two different customer_ids.

1st Run
--------
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      4.06     194.70       3900       6968          0          0
Execute      1      0.14       3.34          0        255          0          0
Fetch       20      0.18       7.29          0          0          0        278
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       22      4.39     205.34       3900       7223          0        278

For the second run
======================
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.04       0.42          0        168          0          0
Execute      1      0.15       3.05          0        255          0          0
Fetch       51      0.21       4.60          0          0          0        742
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       53      0.40       8.08          0        423          0        742

To me it is the parse that is consuming the time.

Please let me know what more information I should provide and how to deal with this.

Thanks and Regards,
Sandeep

Connor McDonald
June 15, 2016 - 1:09 am UTC

Dynamic sampling kicking in would suggest one of:

- either some portion of the query is being done in parallel
- some stats are missing
- optimizer_dynamic_sampling is set at a high value

So I would check each of those in turn first to see *why* it is kicking in.

Once you've done that... then yes, it is not uncommon to explicitly disallow dynamic sampling in OLTP-style situations, where parse times need to be kept nice and fast.
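For example, a statement-level hint can switch dynamic sampling off for just that query (a sketch, re-using the hypothetical column names from the table described earlier in the thread):

select /*+ dynamic_sampling(0) */ customer_id, transaction_date, col_1
from   customer
where  customer_id = :cust_id
and    transaction_date between :d1 and :d2;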

Hope this helps.

Sandeep Mishra, June 15, 2016 - 12:35 am UTC

Hi Connor,

My query structure is like below

select from (
select 1 ...
union all
select 2 ...
union
select 3 ....
)

So on the first run it does dynamic sampling for each of the internal queries, which contributes a significant amount of time to the parse.

On subsequent runs it is still doing dynamic sampling, but the time consumed is negligible.

When I disable dynamic sampling individually for each of the internal queries through query hints, I see no change in the execution plan but a significant reduction in response time.

This is an OLTP environment, and the application fires the query directly with literals rather than bind variables - by the time the query reaches the DB it already contains the customer_id and date range.

Since the query will run an unlimited number of times [the life of the query is permanent] and there is not much change in the amount of data, can I proceed with this approach of disabling dynamic sampling?

Thanks and Regards,
Sandeep

Sandeep Mishra, June 15, 2016 - 1:50 am UTC

Hi Connor,

Yes, I am also using the hint /*+ parallel(40) */.

Let me not change optimizer settings - I want to keep them as they are rather than change them for a single SQL.
Since the table stats are proper, the only thing I should implement is disabling dynamic sampling for this particular query.

Just one more thing: the tables are range partitioned on a date column on a daily basis, and per the business need I have to drop the partition older than 60 days every day.

According to the default settings, a change of 10% of a table's size makes the stats stale, and in roughly 3 days I see a 10% change. Since the table size remains almost the same, what I am trying to achieve is avoiding this stale-stats condition each business day.

So how can I fix the stats and not allow them to change, if that is what I should do?
Or should I decrease or increase the stale-stats percentage?
If I increase it, stats collection will be delayed and I might get a chance to gather stats again before they become stale?
If I decrease the value to 3 [10% in 3 days, so approx 3% each day], will that kick off stats collection on a daily basis?
I don't have a separate job scheduled for stats collection; I depend on the default job.
And could you please help me understand the process of freezing stats, if that is supposed to help?

Thanks and Regards,
Sandeep
Connor McDonald
June 15, 2016 - 5:22 am UTC

I would hence ask - a "parallel 40" operation is not what I would refer to as "OLTP"... Parallel, by definition, is you saying:

"I've got something big to do, that will probably take a long time, so I want to throw lots of server resources at it."

That is *why* we dynamically sample parallel queries: because we're assuming that it will be more efficient for us to take the time to sample, and hence get the best optimizer plan we can, because that will be a small percentage of the overall run time anyway.

In terms of locking stats, you simply use dbms_stats.lock_table_stats. This will stop any automated facilities from changing the stats. Then you might look at using something like dbms_stats.set_xxx_stats to explicitly set the stats for the table or partitions.

I'm always curious that a lot of customers (for example) *know* that they load (say) 10,000,000 new transactions every day. Yet rather than just use set_table_stats/set_column_stats to increment the relevant numbers with the information they already have, they will happily burn lots of server resources re-gathering stats.

There are lots of examples of this, i.e. where we should just *set* the stats, not waste effort recalculating. For example, the high water mark for a transaction date... well, I *know* that's today. Why scan a billion rows to work that out? Etc.

Sandeep Mishra, June 15, 2016 - 8:49 am UTC

Hi Connor,

Thank You.

So if I freeze the stats, will they no longer become stale after the default 10% change?

Thanks and Regards,
Sandeep
Connor McDonald
June 16, 2016 - 2:42 am UTC

My point is not that you should simply stop using stats - I'm saying that since you have control over the process, you could easily

a) lock the stats
b) use "dbms_stats.set_table_stats"

to keep your stats representative of what you want to achieve from your queries without having the expense of a full re-gather.

(An alternative would also be to explore 'incremental' stats - see the link)

http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-stats-concepts-110711-1354477.pdf

But for example, if you drop a partition which is (say) 1/30th of the data, then you can probably easily deduce number of rows remaining, adjust low boundary values for columns, etc and just set these manually.


Sandeep Mishra, June 17, 2016 - 5:20 am UTC

Hi Connor,

Thank You.

I used the word "Freeze", only to refer to locking up of the stats. Sorry, I was not clear.

So if I lock the stats, will they no longer go stale again?

Thank you for the white paper, I am going through it, and would get back to this discussion once done.

Thanks and Regards,
Sandeep

A reader, June 17, 2016 - 3:52 pm UTC

From what was being said (the first execution takes 5 minutes and subsequent ones take far less), it could be the parallel hint (the first execution needs to start all these 40 slaves up, and subsequent executions are faster). I think the PQ slaves are killed after 5 minutes, so any execution after that will incur the cost of starting them up (assuming you don't have parallel_min_servers set). If this is the case it should show up as a wait on OS slave startups (I can't remember the exact wait event).
Chris Saxon
June 18, 2016 - 1:31 am UTC

Good input.

Note that earlier in the thread you can see a trace which showed most of the time was lost on parsing.

Sandeep Mishra, June 18, 2016 - 3:43 pm UTC

Hi Connor,

Could you please help me solve this riddle.

1) What happens if stats are locked - do they become stale or not?

2) What is the internal mechanism, in detail, behind the parallel hint? It is used in my case because with it my query runs 40 times faster, except for the first run.

3) If I use the parallel hint, what are the prerequisites, and what settings do I need at the database/table level before I use this hint? Any white paper or link would be beneficial.

Could you please help with the above queries.

Thanks and Regards,
Sandeep
Connor McDonald
June 19, 2016 - 9:37 am UTC

"What is the internal mechanism, in details, behind parallel hint? "

Simply put, parallel says:

Give me more resources to get a big job done faster.

Which means - use more sessions, consume more I/O concurrently and use more CPU, on the expectation that this job needs a LOT of "firepower" to get done.

Which is why we will do dynamic sampling, because we *really* want to get the query plan right before we unleash all that server power onto it.

Sandeep Mishra, June 18, 2016 - 4:52 pm UTC

Hi Connor

I got the answer to my 1st doubt: whether I lock the stats or not, the stats become stale after the default 10% change.

So the optimizer will be ignoring stale stats, correct?

So the stats will not be used, and my locking of the stats becomes meaningless.

Am I going wrong somewhere?

Thanks and Regards,
Sandeep

Sandeep Mishra, June 18, 2016 - 5:01 pm UTC

And now, though not confused, I am not able to conclude.

Why? Because while simulating the STALE_STATS condition, I found the optimizer is using the stats that were locked previously; even though the stats are now STALE because of the 10% change, it is still using them!

So the only thing I can conclude is that STALE_STATS is just a flag for the daily scheduled default job to collect statistics for an object?

Could you please help me conclude.

Thanks and Regards,
Sandeep

Provide test case or example

A reader, June 19, 2016 - 3:25 pm UTC

"But for example, if you drop a partition which is (say) 1/30th of the data, then you can probably easily deduce number of rows remaining, adjust low boundary values for columns, etc and just set these manually. :

Could you please elaborate?
Connor McDonald
June 20, 2016 - 12:53 am UTC

Here's a rudimentary example of the kind of coding you could do


SQL> drop table t cascade constraints purge;

Table dropped.

SQL> create table t ( x int, y date )
  2  partition by range ( y )
  3  interval ( numtodsinterval(1,'DAY'))
  4  (
  5  partition p1 values less than ( date '2016-01-01' )
  6  );

Table created.

SQL>
SQL> insert into t
  2  select rownum, date '2015-12-31' + rownum / 1000
  3  from dual
  4  connect by level <= 1000 * 40;

40000 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','T');

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_stats.lock_table_stats('','T');

PL/SQL procedure successfully completed.

SQL>
SQL> select table_name, num_rows from user_tables where table_name = 'T';

TABLE_NAME                       NUM_ROWS
------------------------------ ----------
T                                   40000

1 row selected.

SQL> select partition_name, num_rows from user_tab_partitions where table_name = 'T';

PARTITION_NAME         NUM_ROWS
-------------------- ----------
P1                          999
SYS_P14304                 1000
SYS_P14305                 1000
SYS_P14306                 1000
SYS_P14307                 1000
SYS_P14308                 1000
SYS_P14309                 1000
SYS_P14310                 1000
SYS_P14311                 1000
SYS_P14312                 1000
SYS_P14313                 1000
SYS_P14314                 1000
SYS_P14315                 1000
SYS_P14316                 1000
SYS_P14317                 1000
SYS_P14318                 1000
SYS_P14319                 1000
SYS_P14320                 1000
SYS_P14321                 1000
SYS_P14322                 1000
SYS_P14323                 1000
SYS_P14324                 1000
SYS_P14325                 1000
SYS_P14326                 1000
SYS_P14327                 1000
SYS_P14328                 1000
SYS_P14329                 1000
SYS_P14330                 1000
SYS_P14331                 1000
SYS_P14332                 1000
SYS_P14333                 1000
SYS_P14334                 1000
SYS_P14335                 1000
SYS_P14336                 1000
SYS_P14337                 1000
SYS_P14338                 1000
SYS_P14339                 1000
SYS_P14340                 1000
SYS_P14341                 1000
SYS_P14342                 1000
SYS_P14343                    1

41 rows selected.

SQL>
SQL> alter table t set interval () ;

Table altered.

SQL>
SQL> declare
  2    tot_rows int;
  3    tot_blks int;
  4
  5    par_rows int;
  6    par_blks int;
  7
  8    par_name varchar2(30);
  9  begin
 10    select num_rows, partition_name , blocks
 11    into par_rows, par_name, par_blks
 12    from user_tab_partitions
 13    where table_name = 'T'
 14    and   partition_position = 1;
 15
 16    select num_rows , blocks into tot_rows, tot_blks
 17    from user_tables
 18    where table_name = 'T';
 19
 20    execute immediate 'alter table t drop partition '||par_name;
 21
 22    dbms_stats.set_table_stats(
 23      '','T',
 24      numrows=> tot_rows - par_rows,   -- adjust total rows downwards because we dropped a partition
 25      numblks=>tot_blks - par_blks,    -- adjust total blocks downwards because we dropped a partition
 26      force=>true );  -- override the lock
 27  end;
 28  /

PL/SQL procedure successfully completed.

SQL>
SQL>


Now I stress - there's a good chance you *don't* need to do this, because if you are dropping old partitions, there is a good chance you are currently adding just as much data at the "other end" of the table, so the *size* of the table does not really change.

Where you *would* need to make adjustments is perhaps the low and high value settings for the table columns, because things like sequential IDs and date columns would presumably advance.
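A sketch of adjusting a date column's low/high values on the T example above (the 60-day range is an assumption - substitute your real boundaries):

declare
  srec dbms_stats.statrec;
  vals dbms_stats.datearray;
begin
  -- assumed new range for column Y after the daily partition drop
  vals        := dbms_stats.datearray(trunc(sysdate) - 60, trunc(sysdate));
  srec.epc    := 2;     -- two endpoints: the new low and high values
  srec.bkvals := null;
  dbms_stats.prepare_column_values(srec, vals);
  dbms_stats.set_column_stats(
    ownname => user, tabname => 'T', colname => 'Y',
    srec    => srec,
    force   => true );  -- override the lock, as in the example above
end;
/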

Sandeep Mishra, June 20, 2016 - 5:23 am UTC

Hi A Reader / Connor

"I think the PQ slaves are killed after 5 minutes so any execution after that will incur the cost of starting these up (assuming you don't have the min parallel server set). If this is the case it should be shown as a wait on OS slave startups (can't remember the excat wait event). "

Could you please help elaborate the above idea.
What are these PQ slaves, and how can I keep them alive forever?

Thanks and Regards,
Sandeep
Connor McDonald
June 21, 2016 - 2:21 am UTC

John, June 20, 2016 - 1:37 pm UTC

Hi Sandeep.
Have you tried running the SQL without the PQ hint? If so, do all the executions take the same time now (i.e. all longer, not just the first)? If that is the case, it may well be an issue with starting up the PQ slaves.
I did not read all the comments on this post and therefore may well have jumped to the wrong conclusion.
Regarding your questions :
PQ slaves - Parallel Query slaves are used when PQ is invoked.
In your case Oracle needs to start up 40 slaves, which involves the OS spawning 40 processes, and that takes time.
To keep PQ slaves alive you can set the minimum number to keep by specifying parallel_min_servers (in your case it would need to be at least 40, but it depends on what else is being run in parallel mode at the same time).

From the Doc
PARALLEL_MIN_SERVERS
This parameter specifies the number of processes to be started in a single instance that are reserved for parallel operations. The syntax is:

PARALLEL_MIN_SERVERS=n
The n variable is the number of processes you want to start and reserve for parallel operations.

Setting PARALLEL_MIN_SERVERS balances the startup cost against memory usage. Processes started using PARALLEL_MIN_SERVERS do not exit until the database is shut down. This way, when a query is issued, the processes are likely to be available.
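A sketch of setting it and checking the PQ server pool (the parameter is dynamic; 40 is just the degree discussed above):

alter system set parallel_min_servers = 40 scope = both;

-- check how many PQ servers are started, available and in use
select statistic, value
from   v$px_process_sysstat
where  statistic like 'Servers%';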

Sandeep Mishra, June 21, 2016 - 4:33 am UTC

Hi Connor
Thank You.

Hi John
Thank You, for the description

1) It is a hybrid [DWH + OLTP] environment.
2) The amount of data present is like a DWH environment.
3) The way this database is queried is like an OLTP environment.
4) The data being queried is completely skewed [so I started from adaptive cursor sharing].
5) Customers have to check their historical transactions in a few different, predefined ways, so the queries are fixed, and by the time a query reaches the DB it contains literals in the WHERE clause.
6) For one of the queries, say where data has to be fetched between two dates, if the date difference is 60 days and the table holds 60 days of data, it will definitely scan all the partitions, so the developer ended up using the PARALLEL hint.
7) When the parallel hint is used, dynamic sampling fires, so on the 1st execution the hard parse was the culprit behind the large response time.
8) Once dynamic sampling is disabled, and with the parallel hint present, the query performs much better.
9) Now the discussion is all about keeping the PQ slaves alive so that the 1st execution time (which is negligible now, but still noticeable) can be reduced further.

Your inputs are most welcome.

Thanks and Regards,
Sandeep


Approach

Ghassan, June 22, 2016 - 4:38 am UTC

As I understand it, this is a performance issue.
The first things to provide are the query, the exec plans, the table descriptions and other dependencies.
I cannot see anything much of that in this huge post.
I'm afraid the problem becomes how to tune the entire post.

Sandeep Mishra, June 27, 2016 - 10:28 am UTC

Hi Connor

Thank You.

Going back to the parent topic.
With bind variable peeking (introduced in 9i), the predecessor of ACS, the query optimizer peeks at the values of the bind variables during the optimization phase and uses them as if they were literals.

The issue with the above approach was as below:

1] A table has 1000 rows
2] A query-1 selecting 990 rows, so the optimizer decides to do a full table scan
3] Now query-2 selecting only 10 rows, but the optimizer goes for a full table scan.
4] If query-2 is run first, then index scan is used
And after query-2, when query-1 is fired it uses the index scan.
5] From 11g, but after few more subsequent runs, the optimizer goes for full table scan for query-1 and index scan for query-2, which is the correct way.

Initially is_bind_sensitive and is_bind_aware are set to N, and after a few runs both become Y and the correct execution plans are used.

Queries
----------
If [5] is nothing but adaptive cursor sharing, why can't I configure it to behave that way from the beginning?

What is the use of this "warming up" process, given the expense of an initially poor response time when an incorrect execution plan is used?

Could you please help understanding?

Thanks and Regards,
Sandeep

Connor McDonald
June 28, 2016 - 1:09 am UTC

It's like the old phrase "everyone is an expert in hindsight" (aka after the event).

Don't forget... that *during* the process, for example at stage [2], the database *doesn't know* that there will *ever* be a [3] or a [4].

Alternatively, if we were to assume that there will *always* be more executions, and so peeked at *every* bind, *every* time... then we are back to the same parsing costs as using literals all the time.

ACS is trying to find a "middle ground" between getting good plans and not parsing all the time.

Sandeep Mishra, June 28, 2016 - 5:47 am UTC

Hi Connor,

Thank You.

So what should be my course of action as a DBA?
Let ACS take over by itself?
Do some SQL profiling?
Put in some hints?

Just to make sure the same execution plan is used.

Regards,
Sandeep
Connor McDonald
June 29, 2016 - 3:20 am UTC

To lock down a plan, I'd look at SQL plan baselines as a first option.
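A sketch of loading the plan already in the cursor cache into a baseline (the sql_id and plan_hash_value are placeholders - take them from v$sql):

declare
  n pls_integer;
begin
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => 'your_sql_id_here',
         plan_hash_value => 1234567890 );
  dbms_output.put_line(n || ' plan(s) loaded');
end;
/

-- confirm the baseline exists and is accepted
select sql_handle, plan_name, enabled, accepted
from   dba_sql_plan_baselines;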
