Question and Answer

Tom Kyte

Thanks for the question, Balaji.

Asked: June 26, 2000 - 5:38 pm UTC

Last updated: March 22, 2006 - 1:17 pm UTC

Version: 8.0.5

Viewed 1000+ times

You Asked

How do I derive the size of the SGA at installation time? Is there any rule of thumb to follow when specifying the SGA for a large database? Assume I have 16 GB of RAM. What would be the optimum SGA size for Oracle 8 and 8i?

and Tom said...

This is an answer that won't be an answer.

It depends on many factors. There is no single "out of the box" answer. You'll start with something and tune it from there.

o Going to run lots of PL/SQL? (e.g. replication, Oracle Apps, your own stuff with tons of PL/SQL) -- need a large shared pool

o Going to run Multi-Threaded Server (MTS)? Need a large shared pool

o Going to run Java, lots of it, in the database? Need a large Java pool

o Going to have 10 users doing lots of really big queries? Need larger settings for sort area size, hash area size and such -- these might not affect the SGA (but they might if you use MTS)

o Going to have 1000 users doing OLTP? The mix will be different.

o Going to have anything else on this box? It will need memory too.

The settings that *most* affect the SGA size will be:


o db_block_buffers -- how many database blocks to cache in the SGA.
o shared_pool_size -- if you do not have a large pool allocated,
                      this will be used for that as well. Shared SQL,
                      loaded PL/SQL and so on live in here.
o java_pool_size   -- used when you run Java in the database.
o large_pool_size  -- used by MTS, backup, and so on for large
                      chunks of memory that should be freed on
                      demand, not managed LRU like the shared pool.

Things that will affect it to a lesser degree:

o log_buffer -- redo log buffer, measured in Kbytes.
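As a purely illustrative sketch (not a recommendation -- every number below is an assumption, to be tuned from evidence on your own system), an 8.x init.ora touching these parameters might start out like this, assuming an 8 KB block size:

```ini
# hypothetical starting point for a 16 GB box -- tune from here
db_block_buffers = 262144      # 262144 x 8 KB blocks = ~2 GB buffer cache
shared_pool_size = 209715200   # ~200 MB: shared SQL, loaded PL/SQL
java_pool_size   = 20971520    # ~20 MB: raise only if Java is used
large_pool_size  = 52428800    # ~50 MB: MTS, backup, freed-on-demand chunks
log_buffer       = 1048576     # 1 MB redo log buffer
```

The rest of the RAM is deliberately left for process memory (sort/hash areas), the OS, and anything else running on the box.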




Comments

Really helpful to gain the basics

Siamak Thanawaal, November 27, 2001 - 11:17 am UTC

Really helpful to gain the basics and to start off from somewhere. Though many times your calculations go wrong, you at least know where to start from, and this normally helps as your basic checklist.

Sometimes Database is very slow

sivababu, April 03, 2002 - 7:51 am UTC

Hello TOM,
One of our customers' databases is slow, but it was working fine in past days. Is there a query to check why it is slow now? The database is running Oracle 8.1.5 under Linux with our ERP application.

While starting the database, all the parameters show 32 bytes. I don't understand why that is.
svrmgrl>startup
sga 32 bytes
........ 32 bytes
etc..

Thanks and expecting your reply.

regards,
sivababu

Tom Kyte
April 03, 2002 - 9:48 am UTC

I've never heard of that -- 32 bytes.

Ask the customer what they changed. When they say "nothing", ask "no really -- what did you change?"....

Statspack can help you diagnose instance-level performance issues.

32 bytes

Andrew, April 03, 2002 - 1:19 pm UTC

I've seen a similar thing when running svrmgrl where the number of bytes was null in every case -- that turned out to be the wrong svrmgrl (different versions of Oracle installed on the same machine). On Unix, do a "which svrmgrl".

Size of SGA vs RAM

Charanjiv, April 04, 2002 - 2:58 am UTC

Don't know if this belongs here, but hopefully you will answer. I have noticed on my development server (8.1.6, Windows NT) that the shared pool free memory remains stable at 60-70 MB while the server's free memory (RAM) decreases with usage (from 300 MB to 40-50 MB free). What is contributing to this?

Tom Kyte
April 04, 2002 - 11:48 am UTC

What is the size of the shared pool on the two boxes? It should be the same for prod as for development; it probably isn't.

A reader, December 26, 2002 - 11:14 am UTC

Hi TOM,

In a dedicated server configuration, how much TOTAL memory would each connection use on the server?



Tom Kyte
December 26, 2002 - 11:57 am UTC

Don't know -- somewhere between a couple hundred KB and as much RAM as the machine will give it.

It depends on what you do and what your settings are.

Take a PL/SQL table and put a million records in it -- lots more memory than if you put 5 records in it.

Have a sort_area_size of 1 MB vs. 100 MB....

and so on.

I can say in 9i, if you use workarea_size_policy = auto and a pga_aggregate_target, you can have some control over the SUM of the memory used by each PROCESS....
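For the 9i approach Tom mentions, a sketch might look like this (the 1024 MB target is an arbitrary example, and the v$ join is just one way to see per-process PGA use):

```sql
-- let Oracle manage sort/hash work areas against one overall target
alter system set workarea_size_policy = auto;
alter system set pga_aggregate_target = 1024M;

-- inspect PGA memory per connected process
select s.sid, s.username, p.pga_used_mem, p.pga_alloc_mem
  from v$session s, v$process p
 where s.paddr = p.addr;
```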

LOG_BUFFER

Jay Hart, December 26, 2002 - 9:26 pm UTC

Tom, sometimes it is recommended that LOG_BUFFER be changed (most times, increased) to improve performance, and I've read much to that effect. HOWEVER, wouldn't allocating more than 1 MB be largely unused, because LGWR is signaled to write when the redo log buffer has 1 MB of redo entries? So wouldn't anything above 1 MB be "wasteful"?

Tom Kyte
December 27, 2002 - 9:13 am UTC

Well, LGWR is signaled to write:

o every 3 seconds
o when the buffer is 1/3 full
o upon commit
o (new) when 1 MB full


Now, say you have a relatively large transaction that generates, say, 10 MB of redo. When it hits 1 MB of redo, LGWR will start flushing in the background. Whilst it is flushing, your transaction generates another 2 MB of redo (3 MB taken by it, plus whatever else is going on in your system). When LGWR is done flushing the first 1 MB, it comes back and finds 2+ MB more to flush and sets about doing that. Whilst it is flushing that 2 MB, your transaction generates another 3 MB, and so on... (5+ MB of redo in the buffer now).

If the redo log buffer were only 1 MB, what would have happened:

o your transaction generates 1 MB of redo
o LGWR writes it out -- WHILE THIS IS TAKING PLACE, everyone else is waiting: your transaction, all transactions
o you generate another 1 MB of redo -- go back to step 2 and loop over and over


So, redo log buffers of ones to even tens of megabytes do make sense in some cases -- to avoid the waits for LGWR.
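One way to judge whether the buffer is actually a bottleneck on a given system is to watch the redo statistics in v$sysstat (a sketch; values that keep climbing under load suggest sessions are waiting for space in the log buffer):

```sql
select name, value
  from v$sysstat
 where name in ('redo buffer allocation retries',
                'redo log space requests');
```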

GREAT HELP

Jay Hart, December 29, 2002 - 2:39 am UTC


redolog buffer waits

Moh'd Saeed, February 04, 2003 - 2:59 pm UTC

Hi Tom:

This is a very interesting explanation about Tx waiting for LGWR.

How/what tells me that my transactions are waiting for LOGWR to complete writing the log buffer to redo files ?

Thanks for your help.


Tom Kyte
February 04, 2003 - 3:56 pm UTC

use statspack -- take a snapshot, have your apps running at full steam, get a cup of coffee, 15/30 minutes later take another snapshot.

Then, read the report. Look at the top 5 timed events. If you see events like "log file sync" "log file ...." -- you are waiting on lgwr.
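The workflow Tom describes might look like this from SQL*Plus, assuming statspack has already been installed (via spcreate.sql) as the PERFSTAT user:

```sql
-- snapshot before the workload
exec statspack.snap

-- ... let the applications run flat out for 15-30 minutes ...

-- snapshot after, then build the report between the two snap ids
exec statspack.snap
@?/rdbms/admin/spreport
```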



bytes or kbytes

Irfan, April 29, 2003 - 6:23 am UTC

Hi Tom

In your answer to the original question

/*
log_buffer -- redo log buffer, measured in Kbytes.
*/

Is the log buffer in init.ora stored in kilobytes? Normally all the other values are in bytes -- this is a bit confusing to me.
Thanks

Tom Kyte
April 29, 2003 - 8:46 am UTC

http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76961/ch188.htm#55848

It is specified in bytes -- my kbytes reference was more to the overall size (it is generally measured in kilobytes, not mega/gigabytes; it is small relative to the other SGA components).
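One way to confirm the unit on your own instance is to query v$parameter; the value comes back in bytes:

```sql
select name, value
  from v$parameter
 where name = 'log_buffer';
```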

db_cache_size for data warehouse env.

A reader, February 19, 2004 - 5:36 am UTC

Hi Tom,
Recently, we installed Oracle 9iR2 with the data warehouse configuration on an AIX box. The default setting for db_cache_size was 16 MB. I can change it, but my question is: initially I loaded a huge amount of data (ranging from 2 GB to 10 GB) and created indexes. Now I need to create an ASCII file through the ETL tool. I wrote a SQL query to use in the ETL tool to create the files, but the query is taking a huge amount of time. What workable initial parameter sizes should I start with? I have a time constraint to get the results ASAP, without wasting time trying various values. The box has 8 GB of memory and 3 CPUs.
Thanks


Tom Kyte
February 19, 2004 - 11:08 am UTC

A lot bigger than 16 MB, certainly.

I cannot answer the question 100% -- but 512 MB to 1 GB would be "a good starting point" perhaps.
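With 9iR2 and an spfile, raising the cache from the 16 MB default might look like this (512M is just the starting point suggested above, not a recommendation; it must also fit within sga_max_size, or the instance will need a restart):

```sql
alter system set db_cache_size = 512M scope = both;
```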



Thanks a lot!

A reader, February 19, 2004 - 2:12 pm UTC


SGA SIZING

Hardik, June 10, 2004 - 1:45 am UTC

It's useful. I think that is more than enough.

db_cache_size for data warehouse env: take 2

Jon, March 20, 2006 - 12:42 pm UTC

I recently read in Gavin Powell's "Oracle Data Warehouse Tuning for 10g", on page 388: "Set the database buffer cache to the minimum allowed by the Oracle Database version and the operating system when using a data warehouse database."

I was rather shocked when I read that but I also see the point.

What is your reaction to Gavin's recommendation?

Tom Kyte
March 22, 2006 - 1:17 pm UTC

never say never
never say always
I always say


This is why ROT (rules of thumb) can be evil. I can think of many cases where I would like a nice cache in my data warehouse. I can think of other cases where the cache should be minimized.

"It all depends"