
Question and Answer

Connor McDonald

Thanks for the question, Fred.

Asked: March 24, 2005 - 6:06 pm UTC

Last updated: August 01, 2017 - 3:45 am UTC

Version: 10g

Viewed 10K+ times

You Asked

I was reading http://www.dba-oracle.com/oracle_tips_cpu_count_wrong.htm and was wondering if this is really a bug?
Is it really dangerous?

and Tom said...

Ok -- before I start with an answer, I'll ask everyone "please be polite in all followups". Stick to technical issues only. thanks

No bug there -- not at all. In fact, it would be a bug if CPU_COUNT was set differently from what the OS told us to set it to. I suggest taking a look at Metalink Note 205089.1, which explains how Oracle takes full advantage of the logical CPUs.

Hyper threading (which you can briefly read about here:

http://www.intel.com/technology/hyperthread/
http://arstechnica.com/articles/paedia/cpu/hyperthreading.ars/1

) was designed to do exactly what is happening here -- the illusion of multiple CPUs.

So, no -- no bug.  But the rest of the article has some issues too.  For example:

<quote>
You have turned-on CPU-based optimizer costing - For Oracle 10g database where the CBO has been set to created execution plans based on CPU cost (the cpu_cost parameter).
</quote>

Well, two things. First, you don't switch on cpu costing in 10g -- it is just there, and you have to poke around for a hidden parameter to turn it off (or use the "no_cpu_costing" hint to disable it for a single query). Second, there is no such parameter as "cpu_cost". There is a computed cpu_cost -- but that is driven not by the number of cpus, rather by how fast they are reported to be (gathered via system stats). The cpu count isn't going to affect the costing of a serial plan. Consider:

 
ops$tkyte@ORA10G> create table t as select * from all_objects;
Table created.
 
ops$tkyte@ORA10G> exec dbms_stats.gather_table_stats( user, 'T' );
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> exec dbms_stats.set_system_stats( pname => 'CPUSPEEDNW', pvalue => 1 );
PL/SQL procedure successfully completed.

<b>I tell the system "oh, I have very slow CPU's..."</b>
 
ops$tkyte@ORA10G> @explain "select count(*), min(created), max(created) from t t1"

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2275019856
 
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     8 |  1065  (87)| 00:00:13 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|   2 |   TABLE ACCESS FULL| T    | 47581 |   371K|  1065  (87)| 00:00:13 |
---------------------------------------------------------------------------
 
9 rows selected.

<b>and the cpu cost is through the roof. Now I say "a bit faster, these cpus are":</b>
 
ops$tkyte@ORA10G> exec dbms_stats.set_system_stats( pname => 'CPUSPEEDNW', pvalue => 200 );
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> @explain "select count(*), min(created), max(created) from t t2"
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2275019856
 
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     8 |   154   (4)| 00:00:02 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|   2 |   TABLE ACCESS FULL| T    | 47581 |   371K|   154   (4)| 00:00:02 |
---------------------------------------------------------------------------
 
9 rows selected.

<b>and the cost (cpu cost) goes down....  now faster still:</b>
 
ops$tkyte@ORA10G> exec dbms_stats.set_system_stats( pname => 'CPUSPEEDNW', pvalue => 500 );
PL/SQL procedure successfully completed.
 

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2275019856
 
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     8 |   151   (2)| 00:00:02 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|   2 |   TABLE ACCESS FULL| T    | 47581 |   371K|   151   (2)| 00:00:02 |
---------------------------------------------------------------------------
 
9 rows selected.

<b>and the cost falls yet again</b>
 

ops$tkyte@ORA10G> show parameter cpu_count
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     1

<b>so what'll happen with 2 cpu's to the cost?</b>

ops$tkyte@ORA10G> alter system set cpu_count = 2;
System altered.
 
                                                                                             
ops$tkyte@ORA10G> connect / as sysdba
Connected.
sys@ORA10G> startup force
ORACLE instance started.
 
Total System Global Area  322961408 bytes
Fixed Size                   778956 bytes
Variable Size             296492340 bytes
Database Buffers           25165824 bytes
Redo Buffers                 524288 bytes
Database mounted.
Database opened.


sys@ORA10G> show parameter cpu_count
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     2



sys@ORA10G> connect /
Connected.
ops$tkyte@ORA10G> exec dbms_stats.set_system_stats( pname => 'CPUSPEEDNW', pvalue => 1 );
 
PL/SQL procedure successfully completed.
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2275019856
 
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     8 |  1065  (87)| 00:00:13 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|   2 |   TABLE ACCESS FULL| T    | 47582 |   371K|  1065  (87)| 00:00:13 |
---------------------------------------------------------------------------
 
9 rows selected.

<b>Numbers looking familiar?</b>
 
ops$tkyte@ORA10G> exec dbms_stats.set_system_stats( pname => 'CPUSPEEDNW', pvalue => 200 );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> @explain "select count(*), min(created), max(created) from t t5"
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2275019856
 
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     8 |   154   (4)| 00:00:02 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|   2 |   TABLE ACCESS FULL| T    | 47582 |   371K|   154   (4)| 00:00:02 |
---------------------------------------------------------------------------
 
9 rows selected.
 
ops$tkyte@ORA10G> exec dbms_stats.set_system_stats( pname => 'CPUSPEEDNW', pvalue => 500 );
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> @explain "select count(*), min(created), max(created) from t t6"
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2275019856
 
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     8 |   151   (2)| 00:00:02 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|   2 |   TABLE ACCESS FULL| T    | 47582 |   371K|   151   (2)| 00:00:02 |
---------------------------------------------------------------------------
 
9 rows selected.


<b>so, cpu count (even if it was wrong -- which it isn't; cpu_count should consider hyper threading, and it would be a bug if it did not) won't affect the cpu cost of a serial plan.</b>
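The point the demo makes can be captured in a toy model: the cpu component of a serial plan's cost scales inversely with the reported cpu speed, and cpu_count simply never appears as an input. (A deliberately simplified sketch, not Oracle's internal formula; the function name and numbers are invented for illustration.)

```python
# Toy serial-cost model (illustrative only -- not Oracle's actual formula).
# The cpu component shrinks as the reported speed (CPUSPEEDNW) grows;
# note that cpu_count is not a parameter at all.

def serial_cost(io_cost, cpu_cycles, cpu_speed):
    """Cost = I/O component + cpu work scaled by reported cpu speed."""
    return io_cost + cpu_cycles / cpu_speed

# Same I/O work, same cpu work -- only the reported speed changes.
slow = serial_cost(io_cost=150, cpu_cycles=900, cpu_speed=1)
fast = serial_cost(io_cost=150, cpu_cycles=900, cpu_speed=500)

print(slow)  # huge cost with a "slow" cpu
print(fast)  # cost collapses toward the pure I/O cost
```

This mirrors the behaviour in the plans above: cranking CPUSPEEDNW from 1 to 500 drove the cost from 1065 down toward the I/O floor, with cpu_count held constant.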


The quote:

<quote>
You are using automatic parallelism - If cpu_count is doubled and the parallel_automatic_tuning is set to true, the CBO will believe that full-table scans are less expensive than they really are, and influence the optimizer to choose a full-table scan.
</quote>

Needs to be clarified. Parallel automatic tuning is deprecated in 10g
(http://docs.oracle.com/cd/B14117_01/server.101/b10755/initparams150.htm#REFRN10155)
so I'm using 9i below, where it is not. The tables would have to be parallel in the first place, and it would affect the cost of parallel plans only (but that is OK, you see -- you can in fact do things in parallel; that is the goal of hyper threading here):


ops$tkyte@ORA9IR2> connect /
Connected.

ops$tkyte@ORA9IR2> show parameter cpu_count
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     1

ops$tkyte@ORA9IR2> show parameter automatic
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_automatic_tuning            boolean     TRUE


ops$tkyte@ORA9IR2> @explain "select * from t t1"
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
 
--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             | 28022 |  2544K|    60 |
|   1 |  TABLE ACCESS FULL   | T           | 28022 |  2544K|    60 |
--------------------------------------------------------------------
 
Note: cpu costing is off
 
9 rows selected.
 
ops$tkyte@ORA9IR2> connect / as sysdba
Connected.

ops$tkyte@ORA9IR2> alter system set cpu_count = 2 scope=spfile;

System altered.
 
ops$tkyte@ORA9IR2> startup force
ORACLE instance started.
 
Total System Global Area  126948840 bytes
Fixed Size                   452072 bytes
Variable Size              58720256 bytes
Database Buffers           67108864 bytes
Redo Buffers                 667648 bytes
Database mounted.
Database opened.


ops$tkyte@ORA9IR2> connect /
Connected.


ops$tkyte@ORA9IR2> show parameter cpu_count
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     2
ops$tkyte@ORA9IR2> show parameter automatic
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_automatic_tuning            boolean     TRUE


ops$tkyte@ORA9IR2> @explain "select * from t t2"
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
 
--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             | 28022 |  2544K|    60 |
|   1 |  TABLE ACCESS FULL   | T           | 28022 |  2544K|    60 |
--------------------------------------------------------------------
 
Note: cpu costing is off
 
9 rows selected.
 

<b>The cost of the full scan -- not affected.  However, if I take the rather *serious* step of saying "please consider this table for parallel operations", things look different:</b>


ops$tkyte@ORA9IR2> alter table t parallel;
 Table altered.
 
ops$tkyte@ORA9IR2> @explain "select * from t t2_parallel"
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
 
-----------------------------------------------------------------------------
| Id  | Operation         |N| Rows  | Bytes | Cost|  TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  | | 28022 |  2544K|  15 |      |      |            |
|   1 |  TABLE ACCESS FULL|T| 28022 |  2544K|  15 | 94,00| P->S | QC (RAND)  |
-----------------------------------------------------------------------------
 
Note: cpu costing is off
 
9 rows selected.

<b>but you see -- that is an effect of setting the TABLE to parallel more than anything else (if you didn't want parallel, why set it?). It is true that the cost of a PARALLEL full table scan can go down -- but down it should go with hyperthreading. If you have 2 "real" cpus that are hyperthreading, you have in effect 4 cpus that can do parallel operations. And remember, the "effective" speed of these 4 cpus will be less than the real speed of the 2 -- but more effective overall.</b>

And the costing is rather predictable. A serial full scan had a cost of 60. A full scan with 2 cpus had 15 (60/4 = 15; the 4 came from 2 cpus with the default parallel_threads_per_cpu of 2). If we do this parallel full scan on a system with cpu_count set to one, we'd see:

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
 
-----------------------------------------------------------------------------
| Id  | Operation         |N| Rows  | Bytes | Cost|  TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  | | 28022 |  2544K|  30 |      |      |            |
|   1 |  TABLE ACCESS FULL|T| 28022 |  2544K|  30 | 96,00| P->S | QC (RAND)  |
-----------------------------------------------------------------------------

So, it is not that the cost of a full table scan per se goes down -- rather, the cost of a parallel plan could be (indeed, should be) affected.
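The arithmetic above can be sketched directly (a simplified illustration of the 9i costing behaviour described, not the optimizer's actual code; the function name is mine):

```python
# Parallel full-scan cost as described above: serial cost divided by the
# effective degree of parallelism, where the default degree is
# cpu_count * parallel_threads_per_cpu (parallel_threads_per_cpu = 2).

def parallel_scan_cost(serial_cost, cpu_count, parallel_threads_per_cpu=2):
    degree = cpu_count * parallel_threads_per_cpu
    return serial_cost // degree

print(parallel_scan_cost(60, cpu_count=2))  # 15 -- the 2-cpu plan above
print(parallel_scan_cost(60, cpu_count=1))  # 30 -- the 1-cpu plan above
```

So doubling cpu_count halves the parallel scan's cost estimate -- exactly the 60 / 15 / 30 progression in the plans shown, and exactly why only parallel plans are sensitive to it.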

The article continues:

<quote>
How does cpu_count get set incorrectly?

The oracle installer will determine the number of CPUs on a system during installation.  However, some new processors (i.e. Intel) have dual cores, where we have two CPUs on one chip, and they appear to be two processors when they are only one, which a shared core.  
</quote>

That is not true -- the installer doesn't do this. The database, upon startup (and dynamically in 10g, as you can add and remove cpus at runtime), queries the OS, and the OS tells the software how many effective CPUs it has. This is what is key -- if the CPUs are running hyperthreaded (you can disable or enable it in the BIOS; leave it on), you need to be able to take advantage of it. If you run with single-cpu algorithms on a hyperthreaded chip, you are missing the boat. (One of the things I love about this -- I have a P4 Extreme Edition with hyperthreading on my laptop -- is that when MS Word runs away, it can only take 1/2 of the machine with it! The OS is running on "two cpus". They appear to be, they act like, they perform like two cpus.)


<quote>
 The Oracle cpu_count is gathered when you install Oracle, and it can sometime be set to an incorrect value.  The cpu_count affects the Oracle cost-based optimizer, and is considered every time that Oracle created an execution plan for a SQL statement.  A doubled setting for cpu_count can result in excessive full-table scans.
</quote>

Again -- this is not an install-time thing; if you add a cpu, the software recognizes it. You need not reinstall or anything like that.

Also, it is not considered "every time" -- it is only considered for plans where parallelism has been explicitly requested by a hint or an object-level attribute. If you have turned parallelism on like that, a parallel full table scan may become more appealing -- but it is supposed to: you do in fact have the ability to run more concurrent processes without context switching; it is what the cpus do. If parallel query isn't appropriate for your system, you wouldn't have this on (parallel on the table).


In an article explaining the dangers of hyperthreading CPUs being double-counted, it seems strange to point out:

<quote>
 The cpu_count is used to determine the settings for several important init.ora parameters:

         fast_start_parallel_rollback - The default value is 2 times the number of CPU of your system, but some DBAs recommend setting this value to 4 times the cpu_count.
</quote>

If the author meant that DBAs should double fast_start_parallel_rollback -- then this is a good thing.

The author starts by <b>warning</b> us that hyper-threading is dangerous because Oracle double-counts the CPUs - but here he seems to be telling us that it is a good idea to double the default value that Oracle chooses for a parameter based on the number of CPUs.


<quote>
db_block_lru_latches - If you are running multiple instances on the same server, this default may be too high, in which case you will see excessive server paging and high CPU utilization.
</quote>


Server paging would be the result of excessive memory allocation -- but latches do not take up excessive, or even much, memory. High CPU usage should be quantified: running multiple instances on a machine with too few CPUs can cause CPU issues (I've said many times the perfect number of Oracle instances per server is precisely one), but that is a problem unrelated to whether the CPUs are logical or physical. There are reasons why you may see increased CPU and reasons why you may see reduced CPU -- but the author fails to give any reason why CPU utilization would be higher, or why the server would page to disk, because of cpu_count.


<quote>
parallel_max_servers - This parameter controls the maximum number of OPQ factotum processes (p000, p001) that are spawned during a parallel query when parallel_automatic_tuning is enabled.
</quote>

This parameter controls the maximum number of parallel execution (PX) processes for an instance (not for an individual query, as indicated). It is "globally" true; it is not in any way limited to parallel automatic tuning or adaptive multi-user -- it is just "true". But -- once again -- the cpu count is not set *wrong* (the very premise here is false).

And lastly

<quote>
log_buffer - The number of CPUs is important to the value of log_buffer, because multiple log writer (LGWR) processes may be spawned by Oracle to asynchronously offload the redo information.
</quote>

While the default size of the log buffer is set based on cpu count:

log_buffer - from the Server Reference -
    default = 512KB or 128KB * cpu_count whichever is larger

that is because the log buffer is a function of the number of concurrently active transactions and the amount of redo they generate. For the log buffer is where redo goes before lgwr writes it out (and lgwr does that when you commit, when the buffer is 1/3 full or holds 1MB of redo, or every 3 seconds whether it needs it or not).

What the number of cpus has to do with the log buffer and its sizing is this: more cpus means you have the ability to run more things concurrently (since we have 2 apparent cpus instead of 1, we are doing more things at the same time -- we are filling the log buffer faster). But cpu count didn't affect this directly; the OS's ability to run two of us at the same time did.
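The sizing rule and the flush triggers described above can be sketched in a few lines (a hedged illustration of the documented behaviour, not Oracle's code; the function names are my own):

```python
# Sketch of the log buffer defaults and lgwr triggers described above.
# Illustrative only -- the real lgwr logic is internal to Oracle.

KB = 1024

def default_log_buffer(cpu_count):
    # "default = 512KB or 128KB * cpu_count, whichever is larger"
    return max(512 * KB, 128 * KB * cpu_count)

def lgwr_should_flush(bytes_used, buffer_size, seconds_idle, committed):
    # lgwr writes on commit, when the buffer is 1/3 full or holds
    # 1MB of redo, or every 3 seconds whether it needs it or not.
    return (committed
            or bytes_used >= buffer_size / 3
            or bytes_used >= 1024 * KB
            or seconds_idle >= 3)

print(default_log_buffer(2))  # 524288 -- the 512KB floor still wins
print(default_log_buffer(8))  # 1048576 -- 128KB * 8 takes over
```

Note that with hyperthreading, doubling the apparent cpu_count only moves the default once 128KB * cpu_count clears the 512KB floor -- and a bigger default log buffer for a machine that really can run more concurrent transactions is the intended behaviour, not a danger.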

Note that if you have two CPUs, you can have multiple copy latches, and don't copy on allocation, which means better scalability - and you really DO have the CPUs to take advantage of this.

There was a log_parallelism option in 9.2 which disappeared in 10g, which allowed multiple allocation latches - but NOT log writers - and you were advised not to use it unless you had 16 to 64 processors.

The only way I know to get more than one lgwr per DATABASE is to run RAC (clusters -- many instances, single database).  A single database instance will have only one lgwr.



I still wish that when things are said, supporting evidence were supplied. It would cut down on the number of misconceptions and incorrect information out there. It would at least help us see the point of view of the author of the information -- the assumptions they made, why they drew the conclusions they did. If the things in this referenced article were true, you could (easily, I should say) demonstrate them.

I find many times when I open my mouth to respond to a question without actually trying it out -- proving it, I would be wrong or only partially correct.  That is why I generally have a little snippet of something to accompany each answer (and the ones I skip doing that bit for -- are invariably wrong or followed up with lots of yeahbutts -- "yeah, but what about ....") 
 


Comments

Very informative

A reader, March 24, 2005 - 6:51 pm UTC

Well, quite obviously, it's not a bug. From the link
provided by the OP, I really wonder how anyone could
believe that "bug" story (or the "install time" hypothesis).

Anyway, I agree with you that the Oracle community should
fight against the spreading of such myths. I'm (still)
learning much from your *proof-based* approach, that's what
we need. Keep up with the good work!

Thanks,
Tomasz


Log_buffer > 1M.

Sai, March 24, 2005 - 7:00 pm UTC

Tom,

Oracle flushes redo buffer to online redologs when it is 1/3 full or 1M or with commits...etc. Is there any downside of setting log_buffer to more than 1M? Metalink note 30753.1 says "Values above 1Mb are unlikely to yield significant benefit."

Even with 1M setting, we are seeing log buffer allocation retry waits.

Thanks.


Tom Kyte
March 24, 2005 - 8:42 pm UTC

I'll see if I cannot get that note updated.

The thing is -- the 1/3 - 1m - commits - those are points at which lgwr starts flushing out to disk.. they are triggers for an action.

Say I run a big transaction that generates gobs of redo - lgwr will be constantly flushing while I'm generating. values way over 1m can be very beneficial in a case like that..

or say I have hundreds/thousands of concurrent transactions - same thing. 1m would be too small.

what is your exact wait there?


cpu cost

Dave, March 24, 2005 - 7:43 pm UTC

(1) In 9i, does Oracle compute cpu cost and store it in the data dictionary for the CBO to use to come up with a better execution plan? Is it done automatically by Oracle -- I mean, the computation of cpu cost?

(2) If the answer for the above is 'yes', one does not have to use gather system stats procedure to collect system stats that includes cpu cost? Thanks.

(3) any change in 10g regarding the above?

Tom Kyte
March 24, 2005 - 8:43 pm UTC

in 9i, cpu costing was not "on"

in 10g, cpu costing is "on"

Very useful information

Bipul, March 25, 2005 - 3:00 am UTC

Hi Tom.

Great explanation on cpu_count and thanks for those couple of useful link on hyperthreading.

I was wondering if the Oracle installer collects *any* stats at install time? My initial guess is *none*.

Thanks
bipul

Tom Kyte
March 25, 2005 - 7:51 am UTC

the installer looks at your OS and system to ensure you meet the minimum requirements, that's about it.

The database upon startup figures out what resources are available to be used.

Excellent critique.

Kevin, March 25, 2005 - 9:32 am UTC


Jay, March 25, 2005 - 12:20 pm UTC

Kudos to Tom for all the clarifications with examples; the Oracle community is grateful to people like you.

Thanks.

What cpu costing buys us?

Mikito Harakiri, March 25, 2005 - 12:55 pm UTC

Before, a measure of execution cost was the number of buffer gets -- a very simple and clear model from a performance analyst's perspective. Now we have an extra column with a number in units I have no idea about. Now I have to do twice the work in order to understand an optimizer decision.

On the practical side, did CPU costing solve any of the fundamental optimization problems (predicate correlation, complex predicate selectivity estimation, etc.)? It did not.

I suspect the minuscule performance improvements that people see in the benchmarks are attributable solely to clever predicate order evaluation. Can predicate order evaluation be implemented outside of the CPU costing model? I bet it can.

Tom Kyte
March 25, 2005 - 6:49 pm UTC

simple is sometimes too simple. Was IO the only thing to be considered? No, with the ability to measure your IO rates and CPU speeds -- more inputs, better plan.

If it never changed, it could never get better.

http://docs.oracle.com/docs/cd/B14117_01/server.101/b10752/optimops.htm#51003
http://docs.oracle.com/docs/cd/B14117_01/server.101/b10752/optimops.htm#37379

it is a "better" cost as it takes into consideration cpu and io.

cpu cost in 10g

Sam, March 25, 2005 - 10:00 pm UTC

You say, *in 10g, cpu costing is "on"*. However, as a DBA do I have to gather system stats for the CBO to use cpu data -- right? Or does Oracle collect cpu information, like speed etc., automatically in 10g? Could you clarify where the CBO gets cpu cost info? Thanks.

Tom Kyte
March 26, 2005 - 9:10 am UTC

if you query sys.aux_stats$

you'll see where it gets them from by default; gathering system statistics places its results there.

CPUSPEEDNW vs CPUSPEED

Alberto Dell'Era, March 26, 2005 - 11:56 am UTC

May you please clarify what's the meaning of CPUSPEEDNW vs CPUSPEED ?

By reading the docs for dbms_stats.gather_system_stats:
http://download-west.oracle.com/docs/cd/B14117_01/appdev.101/b10802/d_stats.htm#1036402

I've understood (please correct me if i'm wrong) that CPUSPEEDNW is the cpu speed "No Workload", so to-be-collected when the instance is idle (my interpretation of "Oracle recommends to run GATHER_SYSTEM_STATS ('noworkload') after creation of the database and tablespaces").
CPUSPEED is the cpu speed to-be-collected when the instance is busy servicing a representative workload (by choosing INTERVAL or START|STOP).

But how can the cpu speed be different -- shouldn't this produce the same value (+- the statistical error in the estimation algorithm, of course; so, better said, E[CPUSPEEDNW] = E[CPUSPEED])?
Probably I'm missing the real meaning of these measures, and what I'm missing explains why you chose CPUSPEEDNW in your example ...
TIA

Tom Kyte
March 26, 2005 - 1:09 pm UTC

they should be more or less the "same" (fractional differences).

One is collected on a "basically idle system"; it generates a fake workload (just run dbms_stats.gather_system_stats on 10g without any input -- cpu should pop right up, and read/write IO go up as well). You use this to "prime the pump".

Once the system is running with representative workloads for real, you can use dbms_stats.gather_system_stats with an interval to collect them under your real workload, and those will be used instead.

I chose cpuspeednw because I had not gathered 'real' workload stats, so, it was going to use the nw one.

Thanks

Alberto Dell'Era, March 26, 2005 - 1:15 pm UTC

crystal clear!

Faulty cpu count

Parikshit Paul, March 26, 2005 - 10:00 pm UTC

To be very simplistic: if the OS tells us to consider the number of cpus to be 2, then at the application level we should not really be concerned with how many cpus there are physically?
Does that make any sense?

Tom Kyte
March 27, 2005 - 9:51 am UTC

As far as the OS is concerned, there are two CPU's (or 4 or whatever) with hyperthreading.

My task manager in windows has two cpu graphs. A single program running in a tight loop is only able to utilize 50% of the machine resources. It takes two things running in tandem to fully utilize the machine (in parallel). There are effectively two cpus on that machine with hyperthreading.

10g

Anil, March 27, 2005 - 9:26 am UTC

Hi Tom 

<<
in 9i, cpu costing was not "on"

in 10g, cpu costing is "on" 

>>

I have tried the test you mentioned , in my 10g database

SYS@NGDEV1-SQL>  create table t as select * from all_objects;

Table created.

SYS@NGDEV1-SQL> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

SYS@NGDEV1-SQL> exec dbms_stats.set_system_stats( pname => 'CPUSPEEDNW',pvalue => 1 );

PL/SQL procedure successfully completed.

SYS@NGDEV1-SQL> @trace

Session altered.

SYS@NGDEV1-SQL> @xp

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------
Error: cannot fetch last explain plan from plan_table

SYS@NGDEV1-SQL>
SYS@NGDEV1-SQL>
SYS@NGDEV1-SQL> explain plan for  select count(*), min(created), max(created) from t
  2  /

Explained.

SYS@NGDEV1-SQL> @xp

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------

-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     8 |  1119 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |       |
|   2 |   TABLE ACCESS FULL| T    | 49215 |   384K|  1119 |
-----------------------------------------------------------

Note
-----
   - 'PLAN_TABLE' is old version
   - cpu costing is off (consider enabling it)

13 rows selected.

Why " cpu costing is off (consider enabling it)"  What is wrong here?

Rgds
Anil 

 

Tom Kyte
March 27, 2005 - 10:29 am UTC

'plan table is old version'.....

went back to 8i, grabbed the plan table:


ops$tkyte@ORA10G>  create table PLAN_TABLE (
statement_id    varchar2(30),
timestamp       date,
remarks         varchar2(80),
operation       varchar2(30),
options         varchar2(30),
object_node     varchar2(128),
object_owner    varchar2(30),
object_name     varchar2(30),
object_instance numeric,
object_type     varchar2(30),
optimizer       varchar2(255),
search_columns  number,
id              numeric,
parent_id       numeric,
position        numeric,
cost            numeric,
cardinality     numeric,
bytes           numeric,
other_tag       varchar2(255),
partition_start varchar2(255),
partition_stop  varchar2(255),
partition_id    numeric,
other           long,
distribution    varchar2(30));
 
Table created.
 
ops$tkyte@ORA10G> @plan 'select * from dual';
 
0 rows deleted.
 
old   1: explain plan for &1
new   1: explain plan for select * from dual
 
Explained.
 
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
 
----------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost  |
----------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     2 |     2 |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2 |
----------------------------------------------------------
 
Note
-----
   - 'PLAN_TABLE' is old version
   - cpu costing is off (consider enabling it)
 
12 rows selected.

<b>and now just put the proper one in place:</b>

ops$tkyte@ORA10G> drop table plan_table;
 
Table dropped.
 
ops$tkyte@ORA10G> @?/rdbms/admin/utlxplan
 
Table created.
 
ops$tkyte@ORA10G> @plan 'select * from dual';
 
0 rows deleted.
 
old   1: explain plan for &1
new   1: explain plan for select * from dual
 
Explained.
 
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
Plan hash value: 397561404
 
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     2 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------
 
8 rows selected.
 


<b>it is not so much that cpu costing is off, but rather the cpu costing information wasn't available.... due to the old plan table</b>


 CPU_COST                                          NUMBER(38)
 IO_COST                                           NUMBER(38)
 TEMP_SPACE                                        NUMBER(38)
 ACCESS_PREDICATES                                 VARCHAR2(4000)
 FILTER_PREDICATES                                 VARCHAR2(4000)
 PROJECTION                                        VARCHAR2(4000)
 TIME                                              NUMBER(38)
 QBLOCK_NAME                                       VARCHAR2(30)


Those columns are missing from your plan table.

 

dave, March 27, 2005 - 10:31 am UTC

have you gathered system stats?

Tom Kyte
March 27, 2005 - 10:37 am UTC

I did not need to, and I did not. They are defaulted, and we are specifically setting the cpuspeednw setting.

it was all about the plan table (probably).

Two cpu's? or Four?

Parikshit Paul, March 27, 2005 - 10:13 pm UTC

Tom,You said:
[quote]
As far as the OS is concerned, there are two CPU's (or 4 or whatever) with hyperthreading.

My task manager in windows has two cpu graphs. A single program running in a tight loop is only able to utilize 50% of the machine resources. It takes two things running in tandem to fully utilize the machine (in parallel). There are effectively two cpus on that machine with hyperthreading.
[/quote]
O.k.
Now 2 CPUs with hyperthreading = 4 CPUs (logically) (that's what the OS would show => what Oracle would assume)
But
[quote]
There are effectively two cpus on that machine with hyperthreading.
[/quote]
So Don's logic holds. Doesn't it??


Tom Kyte
March 28, 2005 - 7:25 am UTC

no don's logic doesn't hold. (even if it were set wrong, which it is not, the logic of why it would be bad wasn't there)


I have a windows laptop -- with hyperthreading, it has 2 CPU's.

I have a Dell Poweredge Server -- with hyperthreading, it has 4 CPU's.

The OS sees 2/4/8/whatever.

Whatever the OS reports as being there, that is WHAT IS THERE.


I have 1 "physical" cpu on my laptop that via the magic that is hyperthreading, appears, functions, is 2 cpus.

I have 2 "physical" cpus on my poweredge server that via the magic that is hyperthreading, appears, functions IS 4 cpus.


Perhaps I confused things by talking about my laptop and poweredge server on the same page; one has 2 cpus via hyperthreading, the other has 4 cpus via hyperthreading.

Ok, using TOP on asktom.oracle.com:

CPU states: cpu    user  nice  system  irq   softirq  iowait  idle
            total  6.9%  0.0%  1.8%    0.2%  2.3%     11.3%   77.1%
            cpu00  6.1%  0.0%  2.5%    0.9%  7.9%     12.7%   69.5%
            cpu01  8.1%  0.0%  1.1%    0.0%  0.9%      9.9%   79.6%
            cpu02  7.1%  0.0%  1.3%    0.0%  0.1%     13.5%   77.6%
            cpu03  6.3%  0.0%  2.3%    0.0%  0.3%      9.1%   81.6%
Mem: 5931992k av, 5913556k used, 18436k free, 0k shrd, 3460k buff


how many CPU's do I have? What should cpu_count be set to? There are 4 cpu's on that machine as far as the OS and everything else is concerned (another dell server with hyperthreaded xeon chips). If I run a single process in a tight loop, one cpu is going to show 100%, do two things, two cpus and so on. The machine has 4 cpus for all intents and purposes. cpu_count = 4 is the only right setting.
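As a quick sanity check of what the OS reports, here is a sketch using Linux commands (the Solaris equivalent would be psrinfo; Oracle simply trusts whatever number the OS hands back):

```shell
# What does the OS say? (Linux; Solaris would use psrinfo instead)
nproc                              # logical CPUs available to this process
grep -c ^processor /proc/cpuinfo   # logical CPUs the kernel enumerates
```

On a hyperthreaded box both commands count logical CPUs, which is exactly the number cpu_count should end up as.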

Number of CPUs

Sriram, March 28, 2005 - 7:54 am UTC

Is there a reason why the number of CPUs on the server should always be an even number, i.e. 2, 4, 8 instead of 3, 5, 7?

Thanks,
Sriram

Tom Kyte
March 28, 2005 - 8:04 am UTC

Like nodes in a cluster, they technically can be odd numbers (we run a 3 node rac cluster)

2,4,6,8,10,.... are more "normal", traditional.

Misunderstanding

Parikshit Paul, March 29, 2005 - 10:52 pm UTC

Thanks TOM,
Yeah, I really misunderstood. When you said 2 cpu's, I thought 2 physical cpu's.
Everything is clear now.:)


Lower CPU_COUNT/better performance on 9i

Tony, March 30, 2005 - 3:06 am UTC

Dear Tom,
Thank you for the great explanations; it really helped me a lot to understand the exact issues that I am having problem with.
Our production database server has 2 physical CPUs (dual Xeon). We had big performance problems with our application (the OLTP load takes 9 hours), while on the testing server, which is a simple PC, not even hyper-threaded, it took only 17 minutes against an identical database structure and data. The difference was that on the server there was no I/O activity nor CPU activity.
Our application/DB support, after many guesses, set CPU_COUNT = 1 and PARALLEL_THREADS_PER_CPU = 1 where they were 2 and 4, and the performance improved greatly. OLTP takes only 12 minutes, but every other query, report degraded in performance. What I want to know is why performance increased when we set cpu_count = 1 and parallel_threads_per_cpu = 1.

I have Windows 2003 server standard installed, with Oracle 9i Rel2 Enterprise 9.2.0.1.0
What you suggest I do other than patching Oracle with the right patches and setting the parallel execution parameters and the parallel option on the tables?

Thanks in Advance
Tony G.


Tom Kyte
March 30, 2005 - 7:18 am UTC

did you have PARALLEL OPERATIONS ENABLED

and were they being done on the single cpu machine
and were they being done on the multi-cpu machine

performance didn't increase when you set cpu_count = 1, performance changed:

<quote>
OLTP takes only 12 minutes, but every other query, report degraded in
performance
</quote>

It seems you don't want parallel plans for OLTP (makes sense) but do for reports run in batch (makes sense)

Perhaps you want to do that, put the tables back to default degrees of parallelism (and cpu_count/whatever back to their defaults). Then for the reports that want parallel query, say that.
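That suggestion can be sketched like this (table and column names here are hypothetical, for illustration only):

```sql
-- leave the table at the default (serial) degree so OLTP plans stay serial
alter table sales noparallel;

-- then request parallelism explicitly, only in the batch reports that want it
select /*+ parallel(s, 4) */ region, sum(amount)
  from sales s
 group by region;
```

That way cpu_count can stay at the value the OS reports, and only the statements that ask for parallel query get parallel plans.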

Awesome ...

A reader, March 30, 2005 - 7:49 am UTC

Maybe Oracle should collaborate with Discovery ...

</code> http://dsc.discovery.com/fansites/mythbusters/mythbusters.html <code>

Too many myths, too little time ... ;)

Well done, Tom, well explained ...

Tom Kyte
March 30, 2005 - 9:01 am UTC

I dig those guys -- one of my favorite shows.

Lower CPU_COUNT / better performance!?

Tony, March 30, 2005 - 9:46 am UTC

Hi Tom

On the machine with 2 physical CPUs we didn't have parallel options enabled, but we will enable all the necessary parameters and alter tables to parallel when they're not.

What I am aiming to do is to have Oracle CPU_COUNT = 2 and PARALLEL_THREADS_PER_CPU = 4 (back to the defaults) and enable parallelism with the parameters and table options.

Doing this, will I get (more or less) the same performance as I am getting now?

Thanks in Advance
Tony


Tom Kyte
March 30, 2005 - 10:24 am UTC

<quote>
On the machine with 2 physical CPUs we didn't have parallel options enabled, but
we will enable all the necessary parameters and alter tables to parallel when
they're not.
</quote>

you must have -- else it would have been the same. The cpu count affects parallel plan generation; unless you had parallel operations "on", they would not be used.

GREAT help

Tony, March 31, 2005 - 4:15 am UTC

Hi Tom
Thanks for the very useful explanations and help.

Keep up the Good work.

Regards
Tony G.

Relation of async IO and lgwr_io_slaves

Mike Ault, April 12, 2005 - 8:19 am UTC

In the docs Oracle states that the log writer will write asynchronously on async-capable systems. In addition, on non-async systems (and async for that matter), if dbwr_io_slaves is greater than 0 then up to 4 lgwr_io_slaves are also created. How does the above relate to your statement that the only way to get more than one log writing process is to use RAC?

Physically speaking the async write capability of the log writer process on async capable OS equates to multiple processes writing to multiple disks. Likewise on non-async systems when dbwr_io_slaves is greater than 0 then you get up to 4 log writer slave processes. What are these used for if you can only write with one process?

Thanks for clearing this up as the docs, metalink and other sources just mention this in passing without really discussing it in detail.

Tom Kyte
April 12, 2005 - 8:54 pm UTC

But the real question is, will Don ever retract his paper -- which as far as I can see has almost nothing technically accurate starting with the title and ending with lgwr.

lgwr doing async io is not even remotely related to having multiple lgwrs. The statement regarding lgwr doesn't even seem to make "sense"

<quote>
The number of CPUs is important to the value of log_buffer, because multiple log writer (LGWR) processes may be spawned by Oracle to asynchronously offload the redo information.
</quote>

why is the number of cpu's important because of multiple log writers? why would that be important to the value of log_buffer? and how would cpu_count relate to multiple log writers?

Not that an instance will ever have more than one lgwr....


IO Slaves (have not had lgwr specific io slaves for many many many releases), are NOT lgwr's.

They are a technique to simulate async IO.

LGWR, there is and will be ONE. It is a point of serialization in the database instance, there be but one lgwr to wait on.

So in answer to:

<quote>
How does the above relate to your statement that the only way to get
more than one log writing process is to use RAC?
</quote>

the quote was directly about LGWR, there is only one of them.

LGWR can and will use async to write.
LGWR can and will use IO slaves to simulate async IO.

ASYNC IO is not LGWR. We are not even mincing terms here. There is a single LGWR process responsible for initiating all writes to the redo logs.

And even if we made the broad jump to consider IO slaves lgwr (which we won't really), what connection between them and cpu_count could you make?

They updated the page

Thomas Kyte, April 14, 2005 - 7:10 pm UTC

for the second time. It is mostly accurate, but inconclusive now.

they say:

<quote>
Oracle doubles the value for cpu_count
</quote>

but that is not true; the OS is reporting to us the number of CPUs. Oracle is not doubling anything. As reported above, with hyperthreading enabled, there is only one correct cpu_count value: the one the OS reports.


They say this is an issue when "You have turned-on CPU-based optimizer costing", but they fail to say "why", so I'm left curious. Aren't you?


This:
<quote>
If cpu_count is doubled and the parallel_automatic_tuning is set to “true”, the CBO will believe that full-table scans are less expensive based on doing them in parallel using a default degree of cpu_count x
</quote>

Is still false, as demonstrated above in the original answer. It takes a stated desire on your part to have a table be considered for parallel operations -- and if you have, well, you basically do have two cpu's -- that is the entire goal of hyperthreading.

<quote>
The Oracle cpu_count is determined when you start Oracle. The cpu_count affects the Oracle cost-based optimizer through many calculated parameters that use cpu_count as their basis and are considered every time that Oracle creates an execution plan for a SQL statement. A doubled setting for cpu_count can result in changes to your explain plans when moving from a single to multiple CPUs or when you increase the number of CPUs in your system.
</quote>

I still think it would be really neat for them to give us at least a single example... Which parameters? Some of them? A hint as to what they are? Why they are relevant?

As demonstrated above, they should be able to cook up a simple example in a couple of seconds (if they know what parameters might be affected..)

They talk of db_block_lru_latches, which is cool -- but they don't really say why it is relevant or why having hyperthreading on is good, bad, or indifferent... Why this parameter? Why not something else?


<quote>
This parameter controls the maximum number of OPQ factotum processes (p000, p001) that are spawned during a parallel query when parallel_automatic_tuning is enabled.
</quote>
that is still not accurate (but I like the use of the word factotum still, interesting application of the word)


<quote>
There is still argument about whether the cpu_count influences the behavior of actions against the log_buffer.
</quote>
I'm not sure who they are arguing with, it was described above what happens with the copy latches.... and why having cpu_count set properly is a positive thing


<quote>
yet this appears to be an issue because the final conclusions are still not completely clear.
</quote>

one wonders then why this was titled "Oracle performance bug" in the first place if the jury is out, that is, if they don't have any information one way or the other?


All in all, the bug was downgraded to an "issue" in the article. It sure sounded more exciting as an "important Oracle bug" though.



Tom Kyte
April 14, 2005 - 8:07 pm UTC


Damned inconvenient things, facts!

Howard J. Rogers, April 14, 2005 - 9:53 pm UTC

Don probably uses the term "factotum processes" for the same reason the national radio broadcaster here has been known to refer to that well-known piece by Claude Debussy, the Gollyperson's Cakewalk.

Bearing in mind where Don and his sidekick are located, words like "slave" are probably not ones that lightly trip off the tongue.

Alternatively, it could be Don's penchant for larding any piece of what ought to be plain speaking with Latin to make it sound better.

By the way, you quote Don as referring to hyperthreading as meaning the CPU has multiple cores. That's just one more error to add to the others. Multi-core CPUs have not actually been released yet (though AMD should have one within weeks, and Intel will have one by mid-year). Hyperthreading is in effect merely the simulation of multiple cores. True multi-core, however, is (will be) one CPU die that actually contains two separate, physical processors within it. Initially, each core will be single-threaded, but Intel plans to eventually release multi-core hyper-threaded processors.

Brace yourself for the article Don writes that day!

Regards
HJR

cpu question

thirumaran, September 23, 2005 - 8:32 am UTC

Hi Tom,

Operating System: WINDOWS 2003 ADV server
RAM: 1 GB RAM
DB : ORACLE 10g Release 2
CPU : INTEL P4 UNI processor
Will Oracle run efficiently for 100 concurrent Oracle users (mixed transaction sizes, using LOB data when connected to a GIS server) on the above mentioned H/W, or is it recommended to go for a dual processor?

Thanks
Thirumaran



Tom Kyte
September 23, 2005 - 9:41 am UTC

Oracle will run as efficiently as it can.

But, you have to ask yourself whether 100 users would be satisfied with the response time from a machine that has fewer resources than my laptop......

Little sparse on memory there.
Probably the disk system isn't too hot either.
And a single cpu....

I would say it looks a bit small. But a benchmark done by you will confirm or deny that.

Number of processors and licence fee

Laxman Kondal, October 06, 2005 - 4:33 pm UTC

Hi Tom

I have not come across so far but one of our clients had this requirement.

On a box with multiple processors - 4 processors - the client wished to reduce that to half, so the client can save some license fees, by setting some parameter or code to use only 2 out of the 4.

Is there any way Oracle software can control the use of processors, or do processors have to be removed physically from the box? And does Oracle really count processors for the license fee?

It may seem a stupid question, but I could not think this through and could not find an answer anywhere - it's the client, whose wish is to be followed/refused politely.

Thanks and regards


Tom Kyte
October 06, 2005 - 6:30 pm UTC

No, the cpus would have to be unavailable somehow.

they can be disabled, they can be "domained" away (if your OS supports that).

Logical CPUs should be handled differently.

SDiZ, October 07, 2005 - 1:21 am UTC

For the best performance, logical CPUs should always be handled differently than physical CPUs.

The "logical CPUs" in hyperthreading share the same cache and part of the instruction pipeline. A high load on one logical CPU can make another logical CPU on the same chip slower.

Traditionally, an OS scheduler tends to "stick" a process to a specific CPU to make better utilization of memory caches. This argument, of course, does not hold for logical CPUs.



Tom Kyte
October 07, 2005 - 8:26 am UTC

But that is an OS issue - not one that the database would be looking at. It won't make "another logical cpu on the same chip slower" - both run "slower", meaning both run as fast as they can.

If you have a single physical cpu that is hyperthreading - the database needs to know that - needs to know "2" - the algorithms change and in fact there are conceptually 2 cpus here. While one process is doing IO or waiting for a resource (a latch), the other process on the same cpu can keep going.


bottom line here - the cpu count wasn't, isn't set wrong, the cpu count must be set to what the OS reports.

Hyperthreads Myths

Billy, October 07, 2005 - 4:27 am UTC

SDiZ's (Hong Kong) said:

> For the best performance, Logical CPUs should
> always be handled different then physical CPU.
> The "Logical CPUs" in Hyperthread shares the same
> cache, and part of the instruction pipeline. A
> high loading on one logical cpu can make another
> logical cpu on the same chips slower.

And not just that. There are bugs in hyperthread CPU implementations that allow one process to peek at the cached values of another process - one can gain access to encryption keys and so on that way.

However, I'm a firm believer that one does not "interfere" with OS kernel scheduling (despite me often doing this in R&D environments).

So yes, the OS kernel should be aware of hyperthread CPU (logical CPUs) differences.. but this has no bearing at all at application level. Including Oracle.

Software getting involved with binding its threads and processes to different CPUs and pulling all kinds of hacks in order to schedule itself and optimise CPU performance.. It will work nicely in the development lab on the development server. And only there.

Software should -not- interfere with OS kernel scheduling of processes.

Software should be -aware- of the number of CPUs (logical or physical) that the OS kernel reports, and make its self-tuning performance decisions based on that.

NEVER SECOND GUESS THE OS KERNEL SCHEDULER. EVER!

Which sounds like what the author of the quoted article was attempting to do. "Oh, the kernel reports 4 CPUs, but I know there is only 2 physical CPUs so I am configuring/developing/tuning as if there are only 2 CPUs".

FWIW, I have been down that road numerous times. Always loved getting under the kernel's hood.. from configuring job queues and streams on mainframe systems in the 80's, to managing to write a NT process (with the lowest possible priority) that managed to red-line a CPU anyway.

In my experiences.. you leave the kernel to do its job and you focus on your job. Unless of course you hack around just for the hack of it. :-)

> Traditionally, OS Scheduler tends to "stick" a process
> to a specific CPUs to make better utilization of memory
> caches. This arguement, of course, does not holds for
> logical CPUs.

Yep. This is another myth that seems to still stick its head out at times.. that an OS kernel scheduler will always balance the load evenly across all available CPUs. Confusion reigns when CPU0 shows 90% usage and CPU1 is idle, and the conclusion is that something is wrong as the 2nd CPU is not used.

Or the one that states that PQ does not work on a single CPU - despite the same single CPU being able to run 100's of application threads.

OWT - Old Wives' Tales. Like Occam's Razor. It is easier to believe in "what sounds right" from so-called experts, than to figure out the truth oneself.

Tom Kyte
October 07, 2005 - 8:30 am UTC

<quote>NEVER SECOND GUESS THE OS KERNEL SCHEDULER. EVER!</quote>

Never say Never, Never say Always, I Always say :)

But I agree with the sentiment.

OS Schedulers

SDiZ, October 11, 2005 - 9:04 am UTC

Actually, I want the Oracle RDBMS override the OS Schedulers.

Not only the CPU scheduler, but also the IO scheduler... The query plans have lots of information about cache utilization predictions, which the OS schedulers have never taken account of... With all that information, the schedulers can plan ahead with real data, never read ahead useless information or invalidate soon-to-be-used cache, etc.

With Linux, this is not a dream. The -mm series of kernels already has a pluggable I/O scheduler infrastructure....

Tom Kyte
October 11, 2005 - 3:33 pm UTC

Maybe in version 20 when we need no OS.

Only a guess, no promise on the release date.

Can cpu_count change dynamically

A reader, October 11, 2005 - 12:46 pm UTC

Tom,
In most servers, it is possible to add/remove CPUs without stop/start of server. If I add/remove CPUs while database is still running, will the cpu_count change automatically? If it does not, is there a downside?
Thanks

Tom Kyte
October 11, 2005 - 3:50 pm UTC

it is version dependent - a relatively new feature to dynamically change the number of cpus and have things "readjust" for it.

*mostly* would affect parallel query.

Consider benchmarking your workload without HT

Russell H, November 21, 2005 - 11:48 pm UTC

These articles indicate possible server performance problems with hyper-threading in Citrix and SQL Server applications. Perhaps similar situations could occur on some workloads running in Oracle? It may be worth running sample workloads with and without HT enabled.

</code> http://news.com.com/Does+hyperthreading+hurt+server+performance/2100-1006_3-5965435.html?tag=nefd.top http://blogs.msdn.com/slavao/archive/2005/11/12/492119.aspx <code>

TLP (thread-level parallelism )

A reader, March 21, 2006 - 7:01 am UTC

Tom,

can Oracle on Windows 2003 take advantages of multi-core processor architecture? My software vendor says:

"To fully realize these advantages, both the applications and operating system should support thread-level parallelism (TLP), allowing portions of programs to run independently ..."

Does Oracle support TLP? Are there any benchmarks that prove that?

Many thanks in advance

Tom Kyte
March 22, 2006 - 1:54 pm UTC

we are parallelized by the OS at the thread level on windows - always have been.

No proof via benchmarks necessary (not even sure how a benchmark of Oracle by itself would prove anything??) - it is the way it is architected - single process, multiple threads - each thread independent from the others

cpu_count dropped suddenly

Michael, June 15, 2006 - 12:18 am UTC

Hi Tom,

I would like to ask what will make cpu_count be changed by Oracle (besides a hardware change)?

I am using Sun Solaris to run my 10gR1 RAC; both nodes have 2 (physical and logical) CPUs. The 2 instances had cpu_count=2 at the beginning. While I was checking the system health, I found that instance 2 reported cpu_count=1. I checked that the system was running with 2 CPUs normally, and the alert log of instance 2 reported "Detected change in CPU count to 1". Do you think there is any reason why my RAC instance changed the cpu_count without any hardware change?

This is very odd to me, and it does not happen on instance 1.

(At that moment, I am calling support for that.)

Thanks!!!

Michael

Tom Kyte
June 15, 2006 - 8:39 am UTC

use support, could be flaky hardware, it alerts us to changes and we react to that.

Solaris CPU count

Roderick, June 15, 2006 - 5:05 pm UTC

FYI in 10gR1, the CKPT process will periodically poll Solaris for the online cpu-count to detect if it has changed. Perhaps the OS system call used is returning an incorrect result or Oracle is misinterpreting the return value. The OS call made will depend on the API for that particular OS. For Solaris, I just happen to know it will either be pset_info() or sysconf() depending on the situation [using the truss utility against a test system can be your friend]. Otherwise, best to handle this via Support as Tom recommends.


Thanks Tom & Roderick!

Michael, June 15, 2006 - 10:03 pm UTC


Idle CPU

Arindam Mukherjee, August 12, 2006 - 8:18 am UTC

Respected Mr. Tom,

On reading this entire thread, where all respected big shots of Oracle have suggested their comments, I am very sorry to ask you this simple question in this thread.

From Oracle 9i Database Performance and Tuning Guide, chapter - 22
I have got the following lines.

Events that identify that a server process is waiting for more work are known as idle events.

Excessive CPU usage usually means that there is little idle CPU on the system.

Ques 1: My query – when usage is excessive, how can I say there is little idle CPU?

Note that wait I/O should be considered as idle CPU.

Ques 2: My query – at peak time, the CPU may be used in full swing, so we may get wait events. How can I say the CPU is a little idle?


Tom Kyte
August 12, 2006 - 7:56 pm UTC

this is not a simple question.

define excessive (I'm not a fan of that term). Is 80% for 5 hours excessive? Is 95% for 2 minutes excessive?

what is little? define little?



ques 1: not a well framed question. "when usage is excessive, how can I say..." - you cannot. You have to define your terms and what excessive means to you.

if I have a 16 cpu box and I run a 32 way parallel query - I really want 16 cpus to light up and go all of the way. I'd be most happy if they were near 100%

if I have a 16 cpu box and I run serial queries only for 1,000 transactional users and we run at or near 100% capacity for extended periods (define extended :), I might not be happy.

Why would IO waits be considered "idle cpu", the only thing that is idle cpu is - well - idle cpu time?


ques 2: what would you be waiting on? there is no "we were waiting on cpu" wait event.

Dual-core processor

Rajesh, February 14, 2007 - 2:09 am UTC

Hi Tom,

I would like to know if a single dual-core CPU could support a maximum of 50 concurrent users,
or if I should go with a greater number of processors.
What are the performance benefits of using dual-core with Oracle?
Is there any standard metric used for sizing hardware?


Thanks,
Rajesh
Tom Kyte
February 14, 2007 - 8:29 am UTC

define concurrent.

of course it could - if they do certain things

of course it could not - if they do certain things

so the answer is of course:

yes, no, maybe.


or:

IT DEPENDS

Question on cpu_count

Tom, April 01, 2009 - 1:59 pm UTC

I have one question about cpu_count. We were trying to set the db_32k_cache_size parameter to enable multiple blocksizes in our database, but our db server is a Sun 5240 that reports 128 cpus, and the default value for db_32k_cache_size was throwing errors. We can get past this by setting the cpu_count parameter to a much lower value but I was wondering what other side effects this would have in our system as cpu_count is used for more than this one parameter.
Tom Kyte
April 01, 2009 - 4:46 pm UTC

well, what was your error first?

question on cpu_count

tom, April 02, 2009 - 9:02 am UTC

SQL> alter system set db_32k_cache_size=4M;
alter system set db_32k_cache_size=4M
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00384: Insufficient memory to grow cache


SQL> show parameter cpu_count

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     128
SQL> show parameter db%cache

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_16k_cache_size                    big integer 0
db_2k_cache_size                     big integer 0
db_32k_cache_size                    big integer 0
db_4k_cache_size                     big integer 0
db_8k_cache_size                     big integer 0
db_cache_advice                      string      ON
db_cache_size                        big integer 0
db_keep_cache_size                   big integer 0
db_recycle_cache_size                big integer 0
SQL> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 2192M
sga_target                           big integer 2192M
SQL> 


and if we do this it works

SQL> alter system set cpu_count=2;

System altered.

SQL> alter system set db_32k_cache_size=4M;

System altered.

SQL> show parameter db%cache

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_16k_cache_size                    big integer 0
db_2k_cache_size                     big integer 0
db_32k_cache_size                    big integer 16M
db_4k_cache_size                     big integer 0
db_8k_cache_size                     big integer 0
db_cache_advice                      string      ON
db_cache_size                        big integer 0
db_keep_cache_size                   big integer 0
db_recycle_cache_size                big integer 0
SQL> 


Also, according to the docs I would expect the db_32k_cache_size to be 8M (4M * 2 cpus) but it is always double what I expect.

We are using Solaris containers on this box and not all 128 cpus should be dedicated to Oracle, but I was wondering if there were any side effects we should be aware of when setting cpu_count?

Tom Kyte
April 02, 2009 - 10:27 am UTC

that just means your sga target/max size was too small - when cpu count is high, you need a larger default buffer cache.

select component, granule_size from v$sga_dynamic_components;

would show you what the granules are going to be. And with a 2gb SGA, I would fully expect 16mb granules.


http://docs.oracle.com/docs/cd/B19306_01/server.102/b14231/create.htm#sthref380


Granule size is determined by total SGA size. Generally speaking, on most platforms, if the total SGA size is equal to or less than 1 GB, then granule size is 4 MB. For SGAs larger than 1 GB, granule size is 16 MB. Some platform dependencies may arise. For example, on 32-bit Windows NT, the granule size is 8 MB for SGAs larger than 1 GB. Consult your operating system specific documentation for more details.
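The quoted rule can be sketched as a tiny function (a simplification that ignores the "some platform dependencies may arise" caveat, apart from the 32-bit Windows NT case the docs call out):

```python
def granule_size_mb(sga_mb, win32_nt=False):
    """SGA granule size per the generic rule quoted above:
    SGA <= 1 GB -> 4 MB granules; larger -> 16 MB
    (8 MB on 32-bit Windows NT)."""
    if sga_mb <= 1024:
        return 4
    return 8 if win32_nt else 16

# the 2192 MB SGA shown above therefore gets 16 MB granules
print(granule_size_mb(2192))  # -> 16
```

This is why the 4M request above was rounded up: any nonzero cache must be at least one granule, 16M on that system.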



setting the cpu count down will impact all of the defaults that depend on cpu count - mostly the parallel settings.

wrong cpu_count

A reader, March 10, 2010 - 7:10 pm UTC

We have a new server with 4 dual core cpu's.
The cpu_count is not set in init.ora file.
when I check
select name,value from v$parameter where name='cpu_count';

It shows 16.
Do you know why?
Tom Kyte
March 11, 2010 - 8:15 am UTC

after you see what your OS reports (we just believe whatever it tells us) and verify that your OS gives you a completely different number - please utilize support for that one.

or, if it gives you 16, ask the OS vendor...

How to limit resources per instance

Harri, September 07, 2010 - 2:25 am UTC

We have a couple of servers running 10 or more databases each. Versions are 9, 10, 11. Sometimes one or two instances take too much CPU resource. One reason is the high default value of parallel_max_servers, because of the high cpu_count.

On these shared database servers we want to limit CPU resources per instance. What would be the best way to do it? I'm thinking of setting at least parallel_max_servers, but what about setting cpu_count to a lower value?


Tom Kyte
September 09, 2010 - 8:09 pm UTC

...We have couple of servers running 10 or more databases. ...

the only correct number of instances per host is 1, period, I don't care what version of Oracle - it is 1 - period.

use virtualization if you want more than one database per host (this lets you create many hosts where you originally just had one and lets you assign resources to them).


If you are using parallel query and having more than one instance per server - you have really totally and completely "missed the boat, missed the point". parallel query is inherently NON-SCALABLE. it is designed to allow a single instance to consume ALL RESOURCES. It just won't work. You cannot realistically use parallel and more than one database on a server, both will try to use the entire machine.

In 11g there is a new feature "instance caging" - does not apply to 9 or 10 at all - and I don't believe in it. You can read about it, but basically - you are doing something really wrong here - having more than one instance on a host.

I've written that a thousand times, I will not change my mind, there is no good way to deal with more than one instance on a host.

Alexander, September 10, 2010 - 10:25 am UTC

I wish they would document that, or put it in a Metalink note. Otherwise the big-shot architects who suck up to the managers will be like "look at all this money we'll save on hardware, throw all these instances on one server." Then of course the DBAs are bombarded with performance-related problems.

Hypothetically. ;)

re: Alexander on multiples instances per server

Stew Ashton, September 11, 2010 - 3:32 am UTC


It's not just the architects: I got Oracle France in a meeting with my bosses about consolidating databases, and the Oracle tech representative defended the idea of multiple instances on one server. Shot down by friendly fire :(

I think this is one of those cases where you have to ask "why". Why are there all these little instances and databases in the first place? It's because projects, organizations and budgets are application-oriented and every application wants "its" data.

In reality, much of this data belongs together, and since it is separated, applications spend half their time sending and receiving data through flat files, without data integrity and without access control.

In this situation, virtualization will protect the database but not the data.

On the plus side, virtualization is possible without changing the corporate culture, whereas consolidation means changing the way people think, organize, budget and work.

Reasons for multiple databases on one host

Harri, September 14, 2010 - 10:42 am UTC

We have hundreds of databases. Why? Because of the need for different versions, character sets, backup strategies and so on. 80% of the databases are very small ones and really don't need their own server.

Think what it would mean if we really built a new server (physical or virtual) for every single database. Every new host needs, after basic OS configuration, at least:
-Oracle software installation + patching
-Backup software installations
-Monitoring software configurations
-Cluster configurations in an HA environment

...and only after that can we say "create database..." and start to test the new configuration.

There is a big difference if you maintain 800 servers or 80 servers. Think about just patch installations and Critical Patch Updates. Think Oracle CPU-based licensing costs.


Tom Kyte
September 15, 2010 - 7:25 am UTC

... There is a big difference if you maintain 800 servers or 80 servers. Think
about just patch installations and Critical Patch Updates. Think Oracle
CPU-based licensing costs.
...

how is patching hundreds of separate databases on a single machine easier than patching one? explain that to me sometime. It is the sheer NUMBER of databases - not their location.

And you cannot tell me you need hundreds of databases for versions, we DON'T HAVE hundreds of versions. And the security nightmare you've introduced here - did we or did we not yet patch that install??

And character sets - just use one that satisfies everyone, something UTF - not an argument for multiple databases.

Backup strategies? You either a) have one or b) you do not. You can backup and restore "applications" easily - to their own point in time if you want - *in a single database* - as easily as many (I would argue "more easily", since you have a single entity a small group of people can easily manage in their head).



One host = One instance. Have as few hosts as humanly possible. An application does NOT need a database to itself, that is just wrong.

harri, September 15, 2010 - 9:03 am UTC

> how is patching hundreds of separate databases on a single machine easier than patching one? explain that to me sometime. It is the sheer NUMBER of databases - not their location.

First, if you have to transfer patches (the files) to one server or to hundreds of servers, there sure is a difference.

Secondly, if you have to install patches (I mean the binaries) into hundreds of oracle_homes or just a few of them, there is a huge difference. Of course the work of running those scripts against the databases is the same regardless of how many servers you have. But if you have hundreds of servers you have hundreds of installations too, right? AND we have to care about OS patches too.


>And you cannot tell me you need hundreds of databases for versions, we DON'T HAVE hundreds of versions.

I meant hundreds of different combinations, not only versions and patch levels. For example, you have many different combinations of versions + extra-cost options. And you sure have many different character sets.



>And character sets - just use one that satisfies everyone, something UTF - not an argument for multiple databases.

Say that to our customers' software vendors. One supports only the WE8ISO.... character set, one UTF... and so on.



>Backup strategies? You either a) have one or b) you do not. You can backup and restore "applications" easily - to their own point in time if you want - *in a single database* - as easily as many (I would argue "more easily", since you have a single entity a small group of people can easily manage in their head).

-We sure back up a 10TB database differently than a 500MB database.
-One might be in archivelog mode, one not. Yes, there really are applications that do not need their database running in archivelog mode. And if archive generation would be 1TB a day, we sure accept noarchivelog mode for them.
-In versions 10 and 11 we use RMAN, because we have the skills to do it. We don't want to study how to use RMAN in versions 8 or 9 anymore. We use old scripts there because we believe we will finally get rid of them in the near future (but again, there are those software vendors...).


Tom Kyte
September 15, 2010 - 9:36 am UTC

... First, if you have to transfer patches (the files) to one server or to hundreds
of servers, there sure is a difference.
..

if you have to transfer files to hundreds of servers, allow me to introduce you to a SAN or something similar? If you transfer files to hundreds of servers, you are doing it wrong to begin with.



.. Secondly, if you have to install patches (I mean the binaries) into hundreds of
oracle_homes or just a few of them, there is a huge difference....

if you don't have an Oracle home per database, then you have databases that are all tied together and dependent anyway, and since many patches require database stuff be done - you still have HUNDREDS of databases to patch. Nothing gained there.

... I meant hundreds of different combinations, not only versions and patch levels.
For example, you have many different combinations of versions + extra-cost options.
And you sure have many different character sets.
...

but if you bought the option, you bought the option - so everyone on that host has access to the option anyway. You are cheating yourself if you don't let everyone have at them. You have three combos of versions right now - just three. 9, 10, and 11.


... -We sure back up a 10TB database differently than a 500MB database....

why? You have hundreds of 500mb databases - the sum of them is much much larger than it would be if you consolidated down to a few (note: I did not say one, I said "a few")

... -One might be in archivelog mode, one not. Yes, there really are applications that
do not need their database running in archivelog mode. And if archive generation
would be 1TB a day, we sure accept noarchivelog mode for them.
..

you don't need to do that database by database - think about it, you need maybe.... umm... two databases for that.

... -In versions 10 and 11 we use RMAN, because we have the skills to do it. We don't
want to study how to use RMAN in versions 8 or 9 anymore. We use old scripts there
because we believe we will finally get rid of them in the near future (but again,
there are those software vendors...)
...

and think of the money and time you would save by getting rid of the un-supported software and the maintenance headache that comes with it. One of the advantages of a FEW databases is that you force people to come along with you whether they want to or not.


You don't need hundreds of databases - that is your choice, I don't agree with your reasoning, you could do a lot more with a lot less. I really shouldn't be arguing this way either - because you would cut the number of CPUs you would need (it takes a lot more CPU to run two databases than to run a single database with two applications - let alone HUNDREDS of them). As a stockholder - I should technically be promoting "run as many as you can - a database per user of an application would be best".

but i don't



LOL

Michel Cadot, September 15, 2010 - 9:45 am UTC


As a stockholder - I should technically be promoting "run as many as you can - a database per user of an application would be best".

:-D

Regards
Michel

harri, September 15, 2010 - 10:56 am UTC

>if you don't have an Oracle home per database, then you have databases that are all tied together and dependent anyway, and since many patches require database stuff be done - you still have HUNDREDS of databases to patch. Nothing gained there.

Patching Oracle software is one thing and patching databases is another. You can do these things separately. For example, in clusters you can install patches into oracle_homes that are "cold". Then, in a maintenance window at night, you switch the databases over to the node that is already patched and only run the update scripts against the databases. The outage is shorter.

If we have four databases (let's say because of different character sets) running in the same oracle_home, there is only one patch installation. In your "model" there are four.



>You have three combos of versions right now - just three. 9, 10, and 11.

Well, there are releases (R1, R2, R3) too. 11.1 is a different version than 11.2, right?
And then there are those third, fourth and fifth numbers: 9.2.0.7, 9.2.0.8, 10.2.0.4, 10.2.0.5.
And finally there are PSU patches and CPU patches.

When I say "versions" in general, I mean all of those. All these versions with a few different character sets make many combos and a need for separate databases. Maybe some day Oracle will support multiple character sets in one DB.



... -We sure back up a 10TB database differently than a 500MB database....

>why? You have hundreds of 500mb databases - the sum of them is much much larger than it would be if you consolidated down to a few (note: I did not say one, I said "a few")

Do you mean why we back up a 10TB database differently than a 500MB database, or do you mean why we have 500MB databases? Of course we consolidate what we can, but there are those things I have already mentioned (character sets, versions). I didn't say that we have hundreds of 500MB databases. We have hundreds of databases. Sizes are between 10TB and 500MB. Typical size is between 200-600GB.


... -One might be in archivelog mode, one not. Yes, there really are applications that
do not need their database running in archivelog mode. And if archive generation
would be 1TB a day, we sure accept noarchivelog mode for them.
..

>you don't need to do that database by database - think about it, you need maybe.... umm... two databases for that.

Actually, we need about ten databases running on three different servers for that. The reasons are character sets and versions again, but another reason is different network requirements.
Tom Kyte
September 15, 2010 - 11:55 am UTC

sigh, but you still have hundreds of databases to patch - therein lies the rub. You have HUNDREDS of databases to schedule, patch, review, test. Not 10, not 20, but hundreds.


You should have 9.2.0.8, 10.2.0.4 and either of 11.1.0.7 or 11.2.0.2 - you shouldn't have 9201,9202,... etc.



harri, September 15, 2010 - 12:37 pm UTC

Sure, we have hundreds of databases to patch, but we don't have hundreds of oracle homes to patch. And we don't have hundreds of operating systems to patch (well, maybe two hundred).

Of course we should have only one database instead of hundreds of databases. But there is no server that could run all those transactions.

And if there are versions, then there are always software vendors who are stuck on one single version.

Tom Kyte
September 15, 2010 - 1:14 pm UTC

... Of course we should have only one database instead of hundreds of databases. ..

No, that is unrealistic, what is realistic is having a HANDFUL. I will maintain that you do not need hundreds of databases, you need, want, desire a handful.


Oleksandr Alesinskyy, September 16, 2010 - 7:11 am UTC

You should have 9.2.0.8, 10.2.0.4 and either of 11.1.0.7 or 11.2.0.2 - you shouldn't have 9201,9202,... etc.

Unfortunately, in the course of 9.2's evolution a couple of changes were made that may be breaking for third-party applications. One was (as far as I can remember) a change in the representation of numbers with unspecified precision in 9.2.0.5, and another, in 9.2.0.6, was related to views.

So I may imagine a situation where 9.2.0.4 (or 9.2.0.5) has to be kept. Sure, such situations should be extremely rare - but possible anyway.

Harri, September 16, 2010 - 7:27 am UTC

>Unfortunately, in the course of 9.2's evolution a couple of changes were made that may be breaking for third-party applications. One was (as far as I can remember) a change in the representation of numbers with unspecified precision in 9.2.0.5, and another, in 9.2.0.6, was related to views.

Yep, there is software called Tuxedo which wouldn't work on the newest patch levels of 9.2, and if I remember correctly Forms had the same problem.


Tom Kyte
September 16, 2010 - 8:12 am UTC

still does not lead to hundreds, but a *handful* - so we went from 3 to 4 - that does not cause you to have a database for every user still...

Harri, September 16, 2010 - 8:47 am UTC

Tell me how I would handle 5 different character sets, 5 different versions and 5 different networks with 3-4 databases.

And there are about 10,000 users using 500 different applications.

Tom Kyte
September 16, 2010 - 8:51 am UTC

I didn't say 3-4 databases, I said we went from 3 versions to 4, that doesn't cause you to have hundreds of databases.

At most I see 25 databases if you truly, honestly have the need for five different character sets.


If you have made the choice to go with 500 databases - one for each application and are happy with running many databases on a single machine (which increases our revenue, I should be happy for that - you need a lot more cpu to run many databases than you do to run a small number of databases) - then so be it.

but I'll never ever recommend it as a good idea.

Harri, September 16, 2010 - 8:57 am UTC

OK, maybe 25 databases can handle those version and character set combos (what about networks?). Then there are different init.ora parameters. One needs optimizer_index_cost_adj to be set, one absolutely not, and so on...

Suddenly we have those hundreds of databases.

Tom Kyte
September 16, 2010 - 9:13 am UTC

Harri,

just read above, if you are happy - so be it.

(there are obvious ways to address the parameter issues in a single database - we could go back and forth forever on this)

I'll never ever recommend it as a good idea.

Harri, September 16, 2010 - 9:26 am UTC

I'm quite happy with our hundreds of databases and 200 servers. There was a time when we had about 1000 databases.

But my question was how I can limit CPU resources on one server with ten databases.

You have helped me a lot in the past, but not with this one :)

Tom Kyte
September 16, 2010 - 10:14 am UTC

I answered your question completely - I am confused. I answered it 100%, re-read your first comment on this page.

I told you how to do it in 9/10 (virtualization)
I told you of a new feature in 11 (virtualization preferred, but there is this instance caging thing as well)

and I pointed out that parallel query coupled with many instances is a sure death - which you are already finding out.

If you are totally happy, you really wouldn't have any questions, one would think. You are running into the age-old problem of more than one database instance per host: that of conflicting resource needs.


select isses_modifiable from v$parameter where name= 'optimizer_index_cost_adj'

Sokrates, September 16, 2010 - 10:00 am UTC

ISSES
-----
TRUE

for example via logon trigger
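Since the parameter is session-modifiable, a consolidated database could scope it per application at logon. A minimal sketch (the schema name APP_A and the value 20 are hypothetical examples, not from this thread):

```sql
-- Hypothetical sketch: APP_A and the value 20 are made-up examples.
-- Because optimizer_index_cost_adj is session-modifiable (ISSES = TRUE),
-- a logon trigger can confine it to one application's sessions
-- inside a shared database.
create or replace trigger set_app_a_session_params
after logon on database
begin
  if sys_context('userenv', 'session_user') = 'APP_A' then
    execute immediate 'alter session set optimizer_index_cost_adj = 20';
  end if;
end;
/
```

This removes one of the "different init.ora parameters" arguments for keeping separate databases: the setting follows the session, not the instance.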

Dell Poweredge

reader, November 04, 2010 - 8:48 am UTC

Hi Tom,

What type of Dell PowerEdge do you have - what's the spec?
You mentioned it's running on RH - is it plug and play?
i.e. is it easy setting up wireless and watching DVDs?

I downloaded OEL 5.4 and I have problems with setting up wireless and watching DVDs.

Oracle 11g works a treat though :-)
Tom Kyte
November 05, 2010 - 4:58 am UTC

I would not be using this machine to watch DVDs or use a wireless network. It is a database server.

OEL is a server sort of operating system; it was not designed nor intended to be a consumer device.

If you want to watch movies - get a Mac.
If you want to run a database - get a server.


I run windoze on my laptop - but when I want to run a database on my laptop for demos and such - I use VMware to run a server OS such as OEL or another Linux variant.

Dell Poweredge

reader, November 05, 2010 - 5:16 am UTC

OK Tom - thanks for that. I have some problems connecting to the internet using OEL 5.4,

i.e. I might want to download a patch from Oracle for 11g.

Do I need something installed in /lib/firmware in order to do this? I noticed there is nothing there.

With regards to your PowerEdge - what's the spec, how many CPUs, memory etc.? Is it easy to upgrade memory, and does it have additional sockets if you want to add more CPUs?

Thanks
Tom Kyte
November 05, 2010 - 9:26 am UTC

i.e. you might want to use a wired connection - as this is a server and wireless speeds just aren't going to cut it.

I've never even considered attempting to think about trying to use wireless with a server. sorry.

The spec of my PowerEdge is not relevant to anything - it is just a relatively cheap biggish machine I keep in my laundry room to run tests on. I don't really plan on upgrading or maintaining it - the only thing I've done is add some cards to it for USB support.

T2 chip...

A reader, November 12, 2010 - 5:10 am UTC

Hi Tom,

I have a machine running in a data warehouse environment with an UltraSPARC T2 chip. It has two processors with 8 cores each, and each core runs 8 threads. Here are its CPU_COUNT and PARALLEL_THREADS_PER_CPU values:

CPU_COUNT=32
PARALLEL_THREADS_PER_CPU=2

I am running Oracle 11g on Solaris 10. According to the above threads, CPU_COUNT should be 64 (8x8=64) and PARALLEL_THREADS_PER_CPU should be 8.
Tom Kyte
November 12, 2010 - 9:03 am UTC

what does the os report back as the number of cpus

CPU_COUNT and resource manager

A reader, November 29, 2011 - 5:31 am UTC

Tom,

Do I NEED to enable resource manager by enabling a resource plan before I can set CPU_COUNT ?

According to chapter 27 "Managing Resources with Oracle Resource Manager" of the DBA manual, it seems you have to, but here http://docs.oracle.com/cd/B28359_01/server.111/b28320/initparams038.htm suggests that if you didn't enable resource manager, then it just means CPU utilisation won't be managed within the instance?

We just want to limit a DB instance on the server to use two CPUs at the most, we don't care how that instance utilises the two CPUs
Tom Kyte
November 29, 2011 - 11:42 am UTC

If you want to use instance caging you

a) set cpu count to a number less than the number of cpus (cores) on the machine
b) enable the default resource plan (or some resource plan)


If you don't want to use instance caging, you can just set cpu count - which in turn will change the default values for various other parameters in the system. Probably not a good idea.


If you just want to limit a DB instance to two CPU's, you want instance caging and would need the resource manager.
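As a concrete sketch of the two steps above (the parameter values are illustrative; instance caging requires 11g and, as noted, an active resource plan):

```sql
-- Instance caging sketch: cap this instance at 2 CPUs.
-- Step a) set cpu_count below the machine's actual CPU count:
alter system set cpu_count = 2;
-- Step b) enable a resource plan so the cap is actually enforced:
alter system set resource_manager_plan = 'default_plan';
```

Without step b), lowering cpu_count only changes the derived defaults; it does not limit the CPU the instance can consume.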

CPU_COUNT

Sam, June 06, 2012 - 8:30 pm UTC

We have a T2 reporting 2 sockets, 12 cores and 96 (hyperthreaded) CPUs, running Solaris 10 with ~10 DBs. The performance was sluggish. How should one juggle this without instance caging? Note there are no system stats on any of the DBs. Should we set the cpu_count to the core count on all the DBs, or leave it at 96? Leaving it at 96 causes the UTLRP runs to fail, as parallel_threads_per_cpu = 2.

What is the best way to gain performance in 11g?
Tom Kyte
June 07, 2012 - 6:41 am UTC

The performance was sluggish.

my car won't start.


we are even, we have the same amount of information.


what leads you to think it is the CPU settings that are the culprit? What analysis have you done to determine the root cause of this 'sluggishness'?

Cores from AWR report

Rajeshwaran, Jeyabal, July 31, 2017 - 1:41 pm UTC

Team:

The AWR report header shows CPUs as 24 and Cores as 24.

Does the "Cores" in the AWR report show cores per CPU? If so, I have 24 CPUs in total, and if each CPU has 24 cores available, then the total cores available = 24*24 = 576?

Kindly advise.
Connor McDonald
August 01, 2017 - 3:45 am UTC

No it is not cores per cpu. But the AWR report will reflect what it gets back from the OS, which can create all sorts of confusion.

For example, take (say) an 8 core chip - it will probably say 8 cpus and 8 cores. But enable SMT and you'll probably now see 16 cpus and 8 cores.

Or take an IBM machine where you can carve up the machine down to tenths of a cpu, or the machine can have a dynamically *variable* amount of "cpu" based on min and max thresholds. An AWR report has to make some basic assumptions.

So it all becomes a bit moot when it comes to cpu nowadays.
