
Question and Answer

Tom Kyte

Thanks for the question, Shivaswamy.

Asked: September 17, 2002 - 9:47 pm UTC

Last updated: May 18, 2011 - 3:37 am UTC

Version: 817

Viewed 10K+ times!

You Asked

Tom,

DSS environment. Users log in to the application with their own accounts. When one user submits a particular job in multiple sessions, CPU usage shoots up on our 8-CPU server (NT). In fact, it goes like this: one user submits, say, 7 concurrent sessions (through custom software), and the job runs in multiple legs; once one set is over, it calls the next 7 sessions, and so on until the job is complete. This number, 7 sessions, is determined based on CPU availability. What I want is to give, say, 85% to this user/job, so that AT ANY TIME this user/job uses no more than 85% (even if more is available, because the job may hit 100% during some part of a leg), and the others will manage within the remaining 15%.

I checked profiles and DBMS_RESOURCE_MANAGER, and I found them unsuitable for this task. The latter is of use ONLY when there are other jobs demanding CPU; if not, it will allow this user to hog 100%, which I want to avoid. (Because running continuously at 100% is not good, right?) What I want is, say, 85% and NO MORE for this user.

Is there a way to do this?

Thanks in advance,
Shivaswamy

and Tom said...


Why why why -- do you erroneously believe that 100% CPU utilization is bad????

That is exactly what we should all be striving for!!!

It is not like we can put "cpu in the bank" and use it later. The 15% you want to withhold from this user cannot be used by anyone else later!!!

In fact, suppose you could do it -- suppose you could limit them to 85% utilization. OK, say they run a job that takes an hour if they have 100% of the CPU. It takes OVER an hour at 85% (an hour of CPU work at 85% takes about 60/0.85, roughly 71 minutes). Say they start it at 8am (they are being nice). Now, you limit them to 85%. 9am comes and everyone else arrives at work -- but wait -- this job is STILL running (it could have been done, but you won't let it). So this user makes everyone else run slower, because you kept 15% of the CPU in the bank -- but the bank cannot "carry forward".




Let them use 100% of the machine if no one is there (you WANT THAT - you really really really do).

Use dbms_resource_manager to limit them to 85% when there are others on the box. But when they are the only ones there -- let them use it all (your approach is 100% fundamentally flawed).


We have failed if our machines continuously run at X% utilized and X is less than 100. 100% utilization is our GOAL, not our fear. Yikes.
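A minimal resource-manager sketch of the approach described above. The plan and group names and the 85/15 split are illustrative assumptions, not from the original thread; the key point is that CPU_P1 percentages only bite under contention -- an otherwise idle machine still lets the batch group run at 100%.

```sql
-- Run as a suitably privileged user (e.g. SYSTEM).
-- Plan, group, and percentage names here are illustrative.
begin
  dbms_resource_manager.create_pending_area;

  dbms_resource_manager.create_consumer_group(
    consumer_group => 'BIG_JOB_GROUP',
    comment        => 'the heavy DSS batch job');

  dbms_resource_manager.create_plan(
    plan    => 'DSS_PLAN',
    comment => 'cap the batch job only under contention');

  -- 85% to the batch group, 15% to everyone else -- but ONLY
  -- when both are actually demanding CPU.
  dbms_resource_manager.create_plan_directive(
    plan             => 'DSS_PLAN',
    group_or_subplan => 'BIG_JOB_GROUP',
    comment          => 'batch job',
    cpu_p1           => 85);

  dbms_resource_manager.create_plan_directive(
    plan             => 'DSS_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everyone else',
    cpu_p1           => 15);

  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/
-- Then activate the plan:
-- alter system set resource_manager_plan = 'DSS_PLAN';
```

Sessions still need to be mapped into BIG_JOB_GROUP (e.g. via DBMS_RESOURCE_MANAGER_PRIVS and SET_INITIAL_CONSUMER_GROUP) for the directive to apply.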




Comments

What about "Processor Queue Length"?

Shivaswamy, September 17, 2002 - 11:03 pm UTC

Tom,

Thanks for your encouraging words. I have two follow-ups:

1. They can submit 10 (or 20, for that matter) concurrent sessions instead of 7. (I know we hit 85 to 90% with 7 sessions.) Then it is at 100% always. Is that acceptable? (I have no qualms, because this is the most important job, with the highest priority as of now.)

2. At 100% and above, the "Processor Queue Length" (on NT) increases up to 14 at times. I have read that this is not good. Isn't that bad?



Tom Kyte
September 18, 2002 - 7:19 am UTC

1) it is not only totally acceptable, it is EXACTLY what you want to happen.

2) Now what you want to do is limit the number of concurrent sessions they use, to avoid them swamping the system. You can use the resource manager in 9i to limit their number of concurrent active sessions, so that even if they submit 20, only 12 or so run (which is what you are trying to achieve).

they should be taught that 20 is too many however.
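A sketch of the active session pool mentioned above, assuming a plan and consumer group already exist (the names DSS_PLAN and BIG_JOB_GROUP, the pool size of 12, and the 600-second queue timeout are all illustrative assumptions):

```sql
-- Cap the group at 12 concurrently active sessions; additional
-- sessions queue until a slot frees up. Names/numbers illustrative.
begin
  dbms_resource_manager.create_pending_area;

  dbms_resource_manager.create_plan_directive(
    plan                => 'DSS_PLAN',
    group_or_subplan    => 'BIG_JOB_GROUP',
    comment             => 'limit concurrency',
    active_sess_pool_p1 => 12,
    queueing_p1         => 600);  -- queued sessions time out after 600s

  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/
```

With this in place, submitting 20 sessions simply queues the 8 that exceed the pool rather than swamping the CPU.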

Thanks

Shivaswamy, September 18, 2002 - 7:48 am UTC

Thanks, a lot, Tom.

CPU utilization per instance

Arun Gupta, September 03, 2003 - 8:53 pm UTC

Tom
The client is running multiple Oracle instances on the same Sun box. They want to know if there is a way to find out at OS level the CPU utilization per instance.

Is there a shell script that you have handy to do this?
Thanks


Tom Kyte
September 04, 2003 - 8:51 am UTC

no, it would be really really hard -- if even possible.

consider that an instance constantly has processes coming and going and coming and going.

All you can do is use v$sysstat: record the "cpu used" statistic, wait a bit, record it again, and subtract. That will be approximately the CPU used by each instance during the period of observation (key word = APPROXIMATELY: if a statement starts BEFORE you record and finishes AFTER you record, you won't see its CPU time; if a statement started 5 hours BEFORE you recorded, finished, and then you took the second recording, all 5 hours of CPU time would be dumped into that observation).
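A sketch of that sampling approach, run once in each instance you want to compare. The 60-second window is an arbitrary choice; the v$sysstat statistic "CPU used by this session" is instance-wide when read from v$sysstat and is reported in centiseconds, and dbms_lock.sleep requires execute privilege on DBMS_LOCK:

```sql
-- Sample the CPU statistic, wait, sample again, report the delta.
declare
  l_start number;
  l_end   number;
begin
  select value into l_start
    from v$sysstat
   where name = 'CPU used by this session';

  dbms_lock.sleep(60);   -- observation window: 60 seconds

  select value into l_end
    from v$sysstat
   where name = 'CPU used by this session';

  dbms_output.put_line(
    'approx CPU seconds in window: ' || (l_end - l_start) / 100);
end;
/
```

Comparing the deltas across instances over the same window gives the rough per-instance CPU split, with the boundary-effect caveats noted above.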

No CPU used initially

A reader, January 31, 2008 - 1:16 pm UTC

Hi Tom,

We are running a process that pulls data over a dblink from a database on a different box into our database. We derive the dates for the process up front, i.e., the data is picked over the dblink on the basis of date. However, we have observed that for the initial 5 minutes the CPU usage is almost negligible. After 5 minutes, CPU usage increases and remains constant at 25% until the end of the process.

Our process goes like this:

1) derive lower boundary for process from config table - date1
2) derive upper boundary for process - date2
3) insert into t1 select * from t2@dblink, where the t2 date column is > date1 and <= date2
4) set the upper boundary in config table

Your views on CPU not being utilized in this scenario would be most valuable to us.

We are working on oracle 10g.

Regards
Tom Kyte
February 04, 2008 - 3:25 pm UTC

for the initial 5 minutes, it must be doing most of the work remotely.

and then for the rest, it is fetching and processing rows and actually doing some things locally.

sounds completely normal and logical.

Consistent CPU Usage

A reader, February 26, 2008 - 12:35 pm UTC

Hi Tom,

Thanks for the reply.

I tried running 3 identical processes (but different Oracle packages) with a total of 12 million records.
However, the CPU usage still stays consistent at 25% until all the processes finish.
I am not sure why my CPU usage is not increasing. I also tried using parallelism, but without any luck.
What could be holding my CPU usage down? Is it some SGA parameter, or the buffer cache?

Your views on this would really be appreciated.

Regards.
Tom Kyte
February 26, 2008 - 3:09 pm UTC

why would cpu go to 100% if you are IO bound - which you likely are. You are not doing anything very cpu intensive on the local machine, just reading data off of a socket and writing it.

Consistent CPU Usage

A reader, February 28, 2008 - 8:09 am UTC

Hi,

Thanks for providing inputs.
But my actual problem is as follows:

I am working on Oracle 10g with SGA_TARGET set to 1.5 GB.

When I run one single process, CPU usage is 25%.
When I run three processes, CPU usage is still around 25%.
Ideally, CPU usage should have increased by some percentage, since we are running more than one process.
Is it the case that one process is waiting for another and not doing anything? If so, what is causing the blocking?

Also, all 3 processes pull data over a dblink from different tables and insert into different tables.

Your views are really appreciated.

Regards,
Tom Kyte
February 28, 2008 - 10:58 pm UTC

... Is it the case where one process is waiting for another and not doing
anything?If so what is causing the blocking? ....

a) It sure sounds like it could be, but not necessarily. They could be bottlenecked on, for example, your network, the remote system, or the local disk -- in addition to each other.

b) oh, one of about a BILLION things

and really if all they are doing is pulling stuff over a dblink and writing into a table, I'm not sure why you would expect cpu to go through the roof.


trace them, see what they are waiting on
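One way to do that tracing in 10g is DBMS_MONITOR with wait events enabled; the sid/serial# values below are placeholders to look up in v$session:

```sql
-- Enable extended SQL trace (with waits) for one session.
begin
  dbms_monitor.session_trace_enable(
    session_id => 123,      -- placeholder: sid from v$session
    serial_num => 456,      -- placeholder: serial# from v$session
    waits      => true,
    binds      => false);
end;
/
-- ...let the process run for a while, then:
-- exec dbms_monitor.session_trace_disable(123, 456);
-- and run tkprof on the generated trace file to see the top waits.
```

The wait events in the tkprof report (e.g. network vs. disk vs. enqueue waits) tell you where the 3 processes are actually spending their time.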

CPU usage

Aru, November 18, 2008 - 1:00 am UTC

Hi Tom,
Until now, everyone has been asking you about CPU utilization within a single database on a server.
What I am wondering is: is it possible to limit the CPU utilization of one database on a server hosting multiple databases? We are going to have a data warehouse and another database on the same server (that is all our budget allows at the moment; we will move the other one after a year). We want to limit the CPU usage of the data warehouse so that it does not impact the performance of the other database. Is it possible?
Regards,
Thanking you,
Aru.
Tom Kyte
November 18, 2008 - 7:40 pm UTC

virtualization, you need to partition that machine

or consolidation, put both into a single instance and use the resource manager.

Bad advice

Tom, December 23, 2009 - 9:05 am UTC

You said:

"Why why why -- do you erroneously believe that 100% CPU utilization is bad???? That is exactly what we should all be striving for!!!"

In another post you said:
"When you are near 100% utilization - everything is going to be bogged down"

You were right in the other post... this post's advice is ridiculous. No database server nearing 100% CPU utilization is going to be performing well. The "load averages" will be through the roof: queuing starts to adversely affect response times much lower, closer to 60% utilization, depending on the SMP configuration.

In real-world situations, a "healthy" database server is constrained by I/O not CPU...

"100% utilization is our GOAL, not our fear. Yikes."
Yikes is right to anyone taking that advice.
Tom Kyte
December 31, 2009 - 12:36 pm UTC

do you see the original question, 8 cpu machine, we do something in parallel, we use 100% of the cpu (big batch job).

Yes, I'd want about parallel 16 on that machine -- light up the CPUs as close to 100% as possible -- and hope for some disk IO waits so as not to overload it.

If you are a transactional system with many users, I'll agree with you. For a big old batch job, I'd want to get near full utilization.

I'll admit, it could have been stated more clearly - in that fashion.

Limiting CPU in transactional system

A reader, May 16, 2011 - 1:14 pm UTC

Hi Tom,
in your last post you said:
"If you are a transactional system with many users, I'll agree with you..."

This is exactly my case. I have a transactional system with many concurrent sessions. I want to group them based on what they do and allow them only a percentage of the total CPU. I can identify them based on what service name they connect to. But, I need to make sure I never use up all 100% CPU. Is it possible and if yes, how?
Tom Kyte
May 18, 2011 - 3:37 am UTC

In 10g - you could use either

o max number of active sessions in a resource group
o shared server

Using the resource manager, you can set the maximum number of active sessions in a given consumer group. If you set that to less than the number of CPUs, you can make sure that group never gets 100% of the CPU on the machine.

Using shared server connections for a given service would basically allow you to achieve the same.


In 11g, the resource manager can be used to put a hard limit on resource consumption. In prior releases it only kicked in when a resource was at 100% utilization; now it can kick in at a given percentage, so you can limit a given group to X% of the CPU.

http://docs.oracle.com/docs/cd/E11882_01/server.112/e17120/dbrm001.htm#i1007577
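A sketch of that 11g hard-limit directive, assuming the plan and consumer group already exist. The names APP_PLAN and REPORTING_GROUP and the 85% figure are illustrative, and the limit parameter's name changed between releases (check the linked docs for the exact spelling in your version):

```sql
-- Hard cap: the group never exceeds 85% CPU, even when the
-- machine is otherwise idle (unlike plain mgmt_p1/cpu_p1 shares,
-- which only apply under contention).
begin
  dbms_resource_manager.create_pending_area;

  dbms_resource_manager.create_plan_directive(
    plan              => 'APP_PLAN',
    group_or_subplan  => 'REPORTING_GROUP',
    comment           => 'hard CPU cap for reporting sessions',
    mgmt_p1           => 85,
    utilization_limit => 85);   -- 11.2 name; check docs for 11.1

  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/
```

This is the mechanism that finally does what the original question asked for: "85% and NO MORE", even on an idle box.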
