agree with you 100%
shdeedb, December 25, 2002 - 7:26 am UTC
I fully agree with you and I appreciate your replies because they match our experience.
Just one thing I would like to ask:
you said the large pool has completely solved the pinning problem
how??
I am not using the large pool because I thought it is used by RMAN and under MTS, while I am on a dedicated server configuration.
So how come the large pool will help with pinning?
And if it does, what shall I set it to initially?
Likewise, there is another parameter, java_pool_size, whose default is 20MB; I set it to 5MB because we are not using Java database services. Is this the right thing to do?
thanks
December 25, 2002 - 8:42 am UTC
You only needed to pin things when we used the shared pool as the ONLY pool and you used features like MTS.
Under MTS, we would allocate memory for the UGA from the shared pool AND memory for packages/cursors etc.
packages / cursors etc should be managed using an LRU.
UGA memory should be managed like a heap (eg: allocate -- and free explicitly)
but the shared pool only did LRU memory management -- hence UGA requests which could be aged out (the session logged out) were not because they were "fresher". Hence the shared pool would get fragmented like crazy - things you didn't want to age out would, things you wanted to age out wouldn't. That and UGA memory requests are "big", package/cursor/etc requests are small.
So, by dividing that memory up -- the problem that pinning was introduced to help alleviate -- went away.
If you do not use PQ, rman, MTS -- you need no large pool and you never needed pinning!
You can set the java pool really small (like 0m on later releases, 1m on older ones) if not using java.
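For reference, the kind of settings being discussed might look like this. A sketch only: the parameter names are real, but the sizes are illustrative and the minimum allowed java_pool_size depends on your release.

```sql
-- illustrative only: shrink the java pool when Java is not used
alter system set java_pool_size = 0 scope=spfile;    -- later releases accept 0
-- alter system set java_pool_size = 1M scope=spfile; -- older releases need a small minimum

-- only configure a large pool if you actually use PQ, RMAN or shared server
-- alter system set large_pool_size = 32M scope=spfile;
```

After changing spfile-only parameters, the instance must be restarted for them to take effect.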
thanks a lot
shdeedb, December 25, 2002 - 3:20 pm UTC
Thanks indeed for your explanation.
I have asked you another question in another thread this evening; I think I missed checking "email notification" so I can get the reply from your website.....
I wanted to add something to it.
A transaction is sent to the database at, say, 12:01:21
and got a reply back from the database at 12:13:56, which is ten-fold longer than it normally takes.
However, when I look at the archived log via Log Miner, I see that the same transaction starts (operation = start) in the log at 12:03:55 and completes (commit) at 12:03:56.
So what was the db doing between 12:01:21 and 12:03:52?
Does that mean the db does a lot before anything gets to the redo log buffer?? and then to the redo log files????
I have a redo log buffer of 5 MB and I fail to see it having a flushing issue.
Please do not reply to this if you got my earliar email.
Cheers
December 25, 2002 - 4:08 pm UTC
my question to you would be what is up with the elapsed time from 12:01:21 to 12:03:55??
Obviously there is more here than meets the eye; not understanding how your system works, I cannot really answer.
When you see the commit record in the log, the stuff needed for redo for that transaction was already written, so no -- it is not a 10 minute wait on the redo log buffer.
clarification
shdeeb, December 25, 2002 - 4:48 pm UTC
Yes, but you see (I hope you have read my other email where I have posted the Log Miner entries; if not I can post them here).
The start of the transaction and the insert into a log table go in around 12:01:21, then nothing in the redo log, until
time 12:02:52 when I see a delete statement (against the insert which I did earlier) indicating a rollback.
This rollback is because there is a timeout set on the client side of some 90 secs. So why did the transaction not continue after it had inserted into the log table, like it always does?
From AIX I can see a lot of CPU waiting on I/O.
I think probably some checkpoint occurred and the db was so busy that nothing was accepted into the redo buffer during those 90 secs. What do you think??
By the way, how come you are working on Xmas? Here in the Middle East it's a normal day.
Cheers
Merry xmas to you
How to pin a table or view to cache memory?
A reader, October 11, 2004 - 11:32 am UTC
Tom,
It seems that the DBMS_SHARED_POOL.KEEP procedure does not work with tables or views. Is there any workaround for this if I want to pin a table or one row into shared pool?
Thanks as always.
October 11, 2004 - 1:40 pm UTC
you cannot "pin a row or a table" in ram.
a cache is a cache -- you USE it or you LOSE it.
If the table in question needs to be in the cache -- it will be.
You can set aside memory just to be used for this table ( the KEEP or RECYCLE pools) but even then, there are absolutely no assurances that every block (and every version of every block, or the undo needed to get the right version of the block and so on) would be in the cache.
Using the keep/recycle pool is something you do when you have identified a real issue, it is fine tuning to the n'th degree.
Are you solving a hypothetical problem here? or a problem that you've actually identified as being a real issue?
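For completeness, setting aside memory for a specific table via the KEEP pool mentioned above is done per segment, not per row. A sketch with an illustrative pool size and a hypothetical table name:

```sql
-- db_keep_cache_size is a real parameter; 16M is purely illustrative
alter system set db_keep_cache_size = 16M scope=spfile;

-- direct the hypothetical table's blocks at the keep pool
alter table some_table storage ( buffer_pool keep );
```

Even then, as noted above, nothing guarantees every block of the table stays cached.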
Re: How to pin a table or view to cache memory?
A reader, October 11, 2004 - 2:21 pm UTC
Tom,
Here is our real situation: a table called cons_tab contains thousands of rows, and some of them are application-related constant parameters. Those constants do not change once the application server starts. Currently some applications are designed to query cons_tab for a constant whenever a DML transaction happens, even though the constant remains unchanged.
We are thinking about querying cons_tab once when the application server starts, and saving the value in cache memory for all applications to use later. A PL/SQL package can share a variable within a session, but not across all applications. Maybe we can use 'set sys_context'?
Could you please give us some advice? Thanks.
October 11, 2004 - 4:54 pm UTC
tables that are used will be cached.
tables that are not, won't
so -- just query it when you need it would be my advice, the application will query it to load up its values and then just reference them in its own address space.
remember -- databases do two things really well:
a) read data
b) write data
it is what they do, don't be afraid of "hurting them".
If this information is used, it'll be cached. If not, it won't be (but so what, you don't use it, you don't reference it).
So use it and it'll be cached.
Don't use it and it won't be.
reference the values in the queries that are the DML and all will be "ok".
You don't give me enough use cases here to be more specific -- no examples of how these constants are in fact "used".
dbms_shared_pool.keep()
A reader, October 11, 2004 - 9:57 pm UTC
"If you do not use PQ, rman, MTS -- you need no large pool and you never needed pinning!"
I have 8GB RAM, my shared pool is 100MB and I have been getting ORA-4031's on some of my gigantic packages. I pinned them and the problem seems to have gone away. I do not use PQ, RMAN or MTS, so according to you, pinning shouldn't matter, but it did!
I looked at my statspack reports, v$shared_pool_advisory, etc., and didn't see anything that indicated my shared pool was undersized or that my apps were not using bind variables.
Seemed to be a pinning issue with some super large packages.
Even in 9iR2, dbms_shared_pool.keep() does seem to help?
Thanks
October 12, 2004 - 7:43 am UTC
that would indicate to me that you are not using bind variables and are being massively negatively impacted because of it.
Let me rephrase, in an otherwise properly implemented system, pinning is something you do not need. If you haven't used bind variables -- all bets are off -- you are running at less than half speed and perhaps pinning can give you some marginal relief from a 4031 but you are still running really slow.
search this site for remove_constants and see if you are actually "not binding as you think you should"
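The remove_constants routine referred to is, approximately, a function that folds character and numeric literals into placeholders so v$sqlarea can be grouped by "statement shape". This is a reconstruction from memory, so treat it as a sketch rather than the canonical version:

```sql
create or replace function remove_constants( p_query in varchar2 )
return varchar2
as
    l_query     long;
    l_char      varchar2(1);
    l_in_quotes boolean default FALSE;
begin
    -- replace every quoted string with the single literal '#'
    for i in 1 .. length(p_query)
    loop
        l_char := substr( p_query, i, 1 );
        if ( l_char = '''' and l_in_quotes ) then
            l_in_quotes := FALSE;
        elsif ( l_char = '''' and NOT l_in_quotes ) then
            l_in_quotes := TRUE;
            l_query := l_query || '''#';
        end if;
        if ( NOT l_in_quotes ) then
            l_query := l_query || l_char;
        end if;
    end loop;
    -- replace every digit with '@', then collapse runs of '@' and blanks
    l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
    for i in 0 .. 8 loop
        l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
        l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
    end loop;
    return upper(l_query);
end;
/
```

Grouping v$sqlarea.sql_text through this function and counting the duplicates shows which statements differ only in their literals, i.e. where binds are missing.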
A reader, October 12, 2004 - 7:47 am UTC
Tom,
we can use dbms_shared_pool.keep(p) to pin objects (packages, procedures); is there any way to pin SQL which is not in a package or procedure, like a select query?
thanks
October 12, 2004 - 8:34 am UTC
1 select text from all_source
2 where name = 'DBMS_SHARED_POOL'
3 and type = 'PACKAGE'
4* order by line
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> /
TEXT
-------------------------------------------------------------------------------
package dbms_shared_pool is
------------
-- OVERVIEW
--
-- This package provides access to the shared pool. This is the
-- shared memory area where cursors and PL/SQL objects are stored.
----------------------------
-- PROCEDURES AND FUNCTIONS
--
procedure sizes(minsize number);
-- Show objects in the shared_pool that are larger than the specified
-- size. The name of the object is also given which can be used as
-- an argument to either the 'keep' or 'unkeep' calls below. You should
-- issue the SQLDBA or SQLPLUS 'set serveroutput on size xxxxx'
-- command prior to using this procedure so that the results will
-- be displayed.
-- Input arguments:
-- minsize
-- Size, in kilobytes, over which an object must be occupying in the
-- shared pool, in order for it to be displayed.
procedure keep(name varchar2, flag char DEFAULT 'P');
-- Keep an object in the shared pool. Once an object has been keeped in
-- the shared pool, it is not subject to aging out of the pool. This
-- may be useful for certain semi-frequently used large objects since
-- when large objects are brought into the shared pool, a larger
-- number of other objects (much more than the size of the object
-- being brought in, may need to be aged out in order to create a
-- contiguous area large enough.
-- WARNING: This procedure may not be supported in the future when
-- and if automatic mechanisms are implemented to make this
-- unnecessary.
-- Input arguments:
-- name
-- The name of the object to keep. There are two kinds of objects:
-- PL/SQL objects, triggers, sequences, types and Java objects,
-- which are specified by name, and
-- SQL cursor objects which are specified by a two-part number
-- (indicating a location in the shared pool). For example:
-- dbms_shared_pool.keep('scott.hispackage')
-- will keep package HISPACKAGE, owned by SCOTT. The names for
-- PL/SQL objects follows SQL rules for naming objects (i.e.,
-- delimited identifiers, multi-byte names, etc. are allowed).
-- A cursor can be keeped by
-- dbms_shared_pool.keep('0034CDFF, 20348871', 'C')
-- The complete hexadecimal address must be in the first 8 characters.
-- The value for this identifier is the concatenation of the
-- 'address' and 'hash_value' columns from the v$sqlarea view. This
-- is displayed by the 'sizes' call above.
-- Currently 'TABLE' and 'VIEW' objects may not be keeped.
-- flag
-- This is an optional parameter. If the parameter is not specified,
-- the package assumes that the first parameter is the name of a
-- package/procedure/function and will resolve the name. Otherwise,
-- the parameter is a character string indicating what kind of object
-- to keep the name identifies. The string is case insensitive.
-- The possible values and the kinds of objects they indicate are
-- given in the following table:
-- Value Kind of Object to keep
-- ----- ----------------------
-- P package/procedure/function
-- Q sequence
-- R trigger
-- T type
-- JS java source
-- JC java class
-- JR java resource
-- JD java shared data
-- C cursor
-- If and only if the first argument is a cursor address and hash-value,
-- the flag parameter should be set to 'C' (or 'c').
-- Exceptions:
-- An exception will raised if the named object cannot be found.
procedure unkeep(name varchar2, flag char DEFAULT 'P');
-- Unkeep the named object.
-- WARNING: This procedure may not be supported in the future when
-- and if automatic mechanisms are implemented to make this
-- unnecessary.
-- Input arguments:
-- name
-- The name of the object to unkeep. See description of the name
-- object for the 'keep' procedure.
-- flag
-- See description of the flag parameter for the 'keep' procedure.
-- Exceptions:
-- An exception will raised if the named object cannot be found.
procedure aborted_request_threshold(threshold_size number);
-- Set aborted request threshold for the shared pool.
-- Input arguments:
-- threshold_size
-- The size in bytes of a request which will not try to free unpinned
-- (not "unkeep-ed") memory within the shared pool. The range of
-- threshold_size is 5000 to ~2 GB inclusive.
-- Description:
-- Usually, if a request cannot be satisfied on the free list,
-- the RDBMS will try to reclaim memory by freeing objects from the
-- LRU list and checking periodically to see if the request can be
-- fulfilled. After finishing this step, the RDBMS has performed a near
-- equivalent of an 'alter system flush shared_pool'. As this impacts
-- all users on the system, this procedure "localizes" the impact to the
-- process failing to find a piece of shared pool memory of size
-- greater than thresh_hold size. This user will get the out of
-- memory error without attempting to search the LRU list.
-- Exceptions:
-- An exception will be raised if threshold is not in the valid range.
--
end;
107 rows selected.
Fabulous discussion on pinning
irranda, November 01, 2004 - 1:26 pm UTC
Not the biggest fan of interfering with system processes(os/rdbms) which manage my access to the data. I have several colleagues who are completely infatuated with the panacea that is pinning. I came across this thread while researching solutions that are not pinning-centric. I am most grateful for the concise explanation of pinning's history and present day usefulness in the never-ending saga that is performance tuning. Thanx...
How to pin SQL by using dbms_shared_pool ?
Parag Jayant Patankar, July 06, 2005 - 10:27 am UTC
Hi Tom,
Just for learning purposes I am trying to pin a simple SQL statement by using dbms_shared_pool
1* select address, hash_value from v$sql where sql_text like 'select sysdate from dual%'
19:45:16 SQL> /
ADDRESS HASH_VALUE
---------------- ----------
0700000009D132D8 3742653144
1 row selected.
19:45:17 SQL> exec DBMS_SHARED_POOL.KEEP(0700000009D132D8,3742653144', 'C');
ERROR:
ORA-01756: quoted string not properly terminated
Can you show me how to load SQL into dbms_shared_pool ?
regards & thanks
pjp
July 06, 2005 - 10:51 am UTC
you seem to have a funky character in there that is not really a quote. are you cutting and pasting from word or something?
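For reference, retyped with plain ASCII quotes (and the address/hash pair passed as a single string, as the package spec above describes), the call would look like:

```sql
exec dbms_shared_pool.keep('0700000009D132D8,3742653144', 'C');
```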
thanks
Parag Jayant Patankar, July 06, 2005 - 10:58 am UTC
Hi Tom,
Thanks for your minute observation. Yes, I was pasting the details from a Word document. After correcting the quote I was able to load the SQL into the shared pool.
best regards
pjp
is this good script to identify tables which are candidates for pinning
jasdeep, September 06, 2006 - 12:24 pm UTC
SELECT p.owner,
       p.name,
       t.num_rows,
       -- ltrim(t.cache) ch,
       DECODE(t.buffer_pool, 'KEEP', 'Y', 'DEFAULT', 'N') k,
       s.blocks blocks,
       SUM(a.executions) nbr_fts
  FROM dba_tables t,
       dba_segments s,
       v$sqlarea a,
       (SELECT DISTINCT address,
                        object_owner owner,
                        object_name name
          FROM v$sql_plan
         WHERE operation = 'TABLE ACCESS'
           AND options = 'FULL') p
 WHERE a.address = p.address
   AND t.owner = s.owner
   AND t.table_name = s.segment_name
   AND t.table_name = p.name
   AND t.owner = p.owner
   AND t.owner NOT IN ('SYS', 'SYSTEM')
 GROUP BY p.owner, p.name, t.num_rows, t.cache, t.buffer_pool, s.blocks
HAVING SUM(a.executions) > 1
   AND t.num_rows < 10000
 ORDER BY SUM(a.executions) DESC;
Script may cause ORA-00600 error on 9.2.0.6 databases
Paul, September 06, 2006 - 6:56 pm UTC
I can't speak for the value of that script in determining candidates for pinning, but I have seen that query cause ORA-00600 errors in an Oracle 9.2.0.6 database. It has something to do with selecting from v$sql_plan. See Metalink note 4434689.8 for details.
is this good script
A reader, September 06, 2006 - 6:59 pm UTC
I was going to use
ALTER TABLE TABLE_NAME STORAGE(BUFFER_POOL KEEP);
September 07, 2006 - 7:01 am UTC
eh? I have not any idea what you mean here.
Pin Cursor
Paul, October 26, 2006 - 11:08 am UTC
I have a lookup table that has an attribute_id and attribute_code, with around 120 rows and several other columns like creation_date, last_update_date, etc. This is a standard Oracle Applications table. I have a PL/SQL procedure that needs to query this table to get the attribute_id from the attribute_code. The lookup table is properly indexed and bind variables are used to query the table.
We are doing a data conversion and this procedure has to get executed more than one million times.
This procedure accepts a PL/SQL table of attribute codes as a parameter (the PL/SQL table most of the time will have around 30 to 40 attribute codes). So each time this procedure gets called, we loop through the PL/SQL table and for each attribute code we query the lookup table via an open/fetch/close cursor.
Since this query against the lookup table is executed so many times, is there a way to somehow cache just the attribute_id and attribute_code for all the rows by using dbms_shared_pool? If so, is it worth it? Or could you suggest an alternative way to handle the situation, or should we just leave it the way it is now?
October 26, 2006 - 12:24 pm UTC
if you are querying the same data over and over, it is cached, that is what the buffer cache does, caches things you access frequently.
dbms_shared_pool would have nothing to do with caching of data.
Fun Times Ahead
A reader, October 26, 2006 - 3:02 pm UTC
I can imagine that in a yet to be defined day in the future where there will be 3 things to consider and keep straight:
- server side cursor: associated with sql text, its execution plan, and other metadata. The library cache object representing the server-side cursor can be pinned into the shared pool with the dbms_shared_pool.keep procedure
- table data blocks: contain the data for a table. All the blocks associated with a table can be potentially held in the Oracle buffer cache by assigning the table to a keep pool of appropriate size.
- result sets: What you get when you execute a cursor against the data referenced by the SQL. Someday, those results can be optionally cached somewhere as well...
Do Not Pin Anything ?
Jim Cox, June 20, 2007 - 2:51 pm UTC
Hi Tom
Environment:
Windows 2000 Advanced Server
Oracle 9.2.0.8.6
I am doing some research on Pinning and came across an article that says "Oracle Corporation recommends that you always pin the STANDARD, DBMS_STANDARD, DBMS_UTILITY, DBMS_DESCRIBE, and DBMS_OUTPUT packages in the shared pool"
but at the top you state "DO NOT PIN ANYTHING"
Can you give some advice with regards to this statement I found ?
Thanks
Jim
June 20, 2007 - 2:59 pm UTC
point me to that statement, when I see it in context - I'll comment on it.
Until then: why are you pinning anything. If we ALWAYS thought it would be good to pin these, well, they would be pinned out of the box.
don't pin.
Ok, I googled
"Oracle Corporation recommends that you always pin the"
I would ask that person where they got that information from, it is unsubstantiated and there is no reasoning behind it.
Info You Requested
Jim Cox, June 20, 2007 - 3:53 pm UTC
June 20, 2007 - 4:29 pm UTC
I already commented that I found the "source", my suggestion is above
ask the source where this "advice" comes from. What documentation (not a posting in a forum, not a support note from version 7.x written for a specific problem - one that doesn't exist 7 releases later) supports this rather blanket advice?
ask them what versions this might apply to?
ask them how to measure how one would determine if they needed to do this?
ask them how to measure whether it was beneficial or not?
In the year 2007, with versions of the database that have things like the large_pool.
This pinning stuff, it was introduced to help alleviate issues with using the shared pool to hold session state (UGA memory) with shared server (called multi-threaded server way back when) and manage plsql/sql code.
The introduction of the large_pool - which segregates the UGA shared server memory from the plsql/sql 'cache', well, that is a better solution than "pinning"
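One way to see how the SGA pools are actually carved up on a given system, using a standard dynamic view (a sketch; column availability can vary slightly by release):

```sql
-- v$sgastat's POOL column is null for fixed SGA components
select pool, round(sum(bytes)/1024/1024) mbytes
  from v$sgastat
 group by pool
 order by pool;
```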
I Got It
Jim Cox, June 20, 2007 - 5:49 pm UTC
Thanks Tom
I thought you were asking me to provide you with the site
My mistake :-(
Jim
basic question
A reader, June 17, 2009 - 10:07 am UTC
Hi Tom
I have a basic question: I want to see which packages have been pinned using dbms_shared_pool.keep.
Which database view shows this?
Regards
June 17, 2009 - 3:13 pm UTC
v$db_object_cache
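A minimal query against that view (KEPT is 'YES' or 'NO'):

```sql
select owner, name, type
  from v$db_object_cache
 where kept = 'YES'
 order by owner, name;
```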
Thanks
A reader, June 18, 2009 - 2:31 am UTC
Thanks so much for the answer. It has helped.
pinning a table to make inserts faster??
reader, February 09, 2010 - 9:43 am UTC
Hi Tom,
Thank you for all the wonderful suggestions you provide to the Oracle community.
we have been struggling for the last 10 days identifying the root cause of a slow insert and wondering if you can help us identify the problem.
we have a table1 with an xmltype column. we insert into table1 very infrequently (only in certain scenarios). But the service that inserts into this table is time-sensitive. currently it is taking 20 seconds to do the insert. we have a similar insert in the same procedure into another table2 with an xmltype column. table2 is much, much bigger and is inserted into very frequently (10000 rows/day); it takes less than a second to insert into table2. In the insert statement for both tables, we call the sys.xmltype constructor.
INSERT
INTO table1(col1,
col2,
xml_col3,
col4
)
VALUES(p_col1,
UPPER(p_col2),
sys.xmltype(p_xml_col3),
p_col4
);
we are on 10g RAC (4 nodes). we ran several tests by clearing shared_pool, buffer_cache, pinning objects to the database in a lower environment that is also RAC (3 nodes).
what we identified is that only the very first time when we run the procedure on a particular RAC node after clearing the shared_pool, buffer_cache, we are seeing the 20 second response time for the first insert. if we run table1 insert then table2 insert, table1 insert is taking 20 seconds while table2 insert is taking less than a second. Other way is also true. so we thought it might be issue with xmltype as it is common between the two. we tried pinning sys.xmltype and also select sys.xmltype('<hello/>') from dual to load xmltype to memory. Both seemed working. When we clear the shared pool & Buffer_cache again and run the same scenarios, we are not seeing the slowness.
what is puzzling is in production environment, we call table2 insert first and then table1 insert, table2 insert is taking less than 2 seconds and table1 insert is taking 20 seconds.
so my other thought is, is it taking too long to load table2(since it is not a frequent insert)? how do we test this? should we pin the table?
February 15, 2010 - 2:12 pm UTC
there is a high overhead to starting up XML in a session, lots of java to get loaded up. It should not be 20 seconds however, that seems really long.
Can we get the output of tkprof with a 10046 level 12 trace?
tell me, how do you "pin" a table? What command were you thinking about running?
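The 10046 level 12 trace (waits plus binds) asked for could be captured roughly like this; the trace file name and location are platform and version dependent:

```sql
alter session set events '10046 trace name context forever, level 12';
-- ... run the slow insert here ...
alter session set events '10046 trace name context off';
-- then from the OS:  tkprof <udump trace file> report.txt sys=no
```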
tkprof output for insert with xmltype
Reader, February 16, 2010 - 1:48 pm UTC
Thank you Tom for your response!!
Below is the tkprof output for the insert statement.
********************************************************************************
INSERT INTO table2(TRACE_ID, CHANNEL_ID, APPLICATION_ID,
TRANSACTION_CODE, STATUS_CODE, DETAIL, ERROR_MESSAGE, ERROR_COUNT,
TRANSMIT_COUNT, JMS_URL, INSERT_USER, UPDATE_USER, INSERT_DATE_GMT ,
UPDATE_DATE_GMT , DELETE_DATE_GMT, INSERT_SITE , UPDATE_SITE,
BUSINESS_LINE_ID )
VALUES
(:B13 , :B12 , :B11 , :B10 , :B9 , SYS.XMLTYPE(:B8 ), :B7 , :B6 , :B5 , :B4 ,
USER, USER, :B3 , :B3 , TO_DATE('01/01/9999 00:00:01','MM/DD/RRRR
HH24:MI:SS'), :B2 , :B2 , :B1 )
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 20.84 20.34 252 31812 20 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 20.84 20.34 252 31812 20 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 53 (recursive depth: 1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 5 0.00 0.00
********************************************************************************
I see a lot of selects below the insert statement in the tkprof output, but they seem to be taking less than a second.
********************************************************************************
select value(p$)
from
"XDB"."XDB$ELEMENT" as of snapshot(:2) p$ where SYS_NC_OID$ =
:1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.06 0.04 0 370 0 0
Execute 339 0.55 0.43 0 337 0 0
Fetch 339 0.09 0.08 20 1021 0 339
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 680 0.70 0.57 20 1728 0 339
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 2)
Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY INDEX ROWID XDB$ELEMENT (cr=3 pr=3 pw=0 time=757 us)
1 INDEX UNIQUE SCAN SYS_C003054 (cr=2 pr=2 pw=0 time=441 us)(object id 41092)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 20 0.00 0.00
********************************************************************************
February 17, 2010 - 8:34 am UTC
... I see a lot of selects below the insert statement in the tkprof output. but
they seem to be taking less than a second
...
but if you do a lot of things that each take less than a second, they add up to many seconds. If you have a few things that take 0.7 CPU seconds, they add up rapidly.
this looks like initial startup stuff that is done once per session, and perhaps more due to the hard parse being performed.
xml data with namespace
Reader, February 18, 2010 - 8:31 am UTC
Thank you Tom!
This is a 4 node RAC database. As you said, it took 20 seconds the very first time on each node, and subsequent runs are faster. But it also took longer in some cases where this insert hadn't run on that node recently.
on further testing, we noticed that it takes 20 seconds when the xml we are inserting has a namespace in it.
1. Does Oracle do any processing if there's a namespace in the xml data?
2. How can we prevent that processing? we don't need schema validations
3. we haven't registered the schema in the database. will registering the schema make the inserts faster?
4. If we don't need to extract the individual tags from the xml, do you suggest to use CLOB in that case?
February 18, 2010 - 9:50 am UTC
I'd suggest the XML forums on otn.oracle.com - I really do not use the XML stuff all that much, they would be in a much better position to guide you in this particular case.
Ramya, May 27, 2010 - 12:52 am UTC
Hi Tom,
Reading this thread I understand that pinning objects into the KEEP pool may not be a permanent solution for a problem.
We have been experiencing performance problems in our database lately. Here is the sample query:
select /*+ ORDERED FIRST_ROWS(6) */ * from
(select DCDATA_ID , score(1) as score from aap.DCDATA_imagearc a
where contains(DCDATA_META, '((BASEBALL and (USA not BROWNLOW)) and ((((({aapdefault} within document@schema) not ((((((((((Credit\~AP or Credit\~AFP) or Credit\~EPA) or Credit\~30) or Credit\~113) or Credit\~95) or Credit\~13) or Credit\~33) or Credit\~65) or Credit\~56) or Credit\~42)) not ((({ap+arroyo} within credit) or ({ap+chris} within credit)) or ({ap+graylock} within credit))) not INTL+OUT) not australia+only))', 1) > 0
order by DCDATA_ID desc )
where ROWNUM <= 6
============
Plan Table
============
------------------------------------------------------------------+-----------------------------------+---------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | Pstart| Pstop |
------------------------------------------------------------------+-----------------------------------+---------------+
| 0 | SELECT STATEMENT | | | | 11K | | | |
| 1 | COUNT STOPKEY | | | | | | | |
| 2 | VIEW | | 7 | 182 | 11K | 00:02:16 | | |
| 3 | TABLE ACCESS BY GLOBAL INDEX ROWID | DCDATA_IMAGEARC | 14K | 1583K | 11K | 00:02:16 | ROW LOCATION| ROW LOCATION|
| 4 | INDEX FULL SCAN DESCENDING | DCDATA_IMAGEARC_ID| 960 | | 20 | 00:00:01 | | |
------------------------------------------------------------------+-----------------------------------+---------------+
This query waits on the "db file sequential read" wait event and takes 5-10 mins. I tried increasing the SGA_TARGET size from 8GB to 10GB, but it was of no help. Then I gathered statistics, but even that was of no help. Now I have changed the hint from FIRST_ROWS to ALL_ROWS. It is definitely working fine now, but I'm not convinced that it will be a permanent solution for this.
So, I was thinking of increasing the memory and the KEEP pool size and pinning the indexes used in the query into the KEEP pool.
Can you please tell me your views on pinning indexes into the KEEP pool, and is there any other, better way to overcome this issue?
Appreciate your comments on this.
Thanks,
Ramya
May 27, 2010 - 7:26 am UTC
are you really just getting the first 6 rows? or are you really getting the entire result set?
A reader, May 27, 2010 - 8:12 pm UTC
Hi Tom,
Thanks for your reply. I think it returns the entire result set. It is not just this kind of query which waits on "db file sequential read"; there are a couple of queries which are slow, for example:
select /*+ ORDERED FIRST_ROWS(1) */ * from (select DCDATA_ID , score(1) as score from DCDATA_imagearc a where contains(DCDATA_META, :fulltext, 1) > 0 and DCDATA_ID = '20100528000238290115' order by DCDATA_ID desc ) where ROWNUM <= 1.
This query is supposed to return just one row, and there are two indexes, IMAGEARC_META (partitioned text index) and DCDATA_IMAGEARC_ID, on the column DCDATA_ID. Even this query does a full index scan and takes 5-10 mins to complete.
We changed the hint from FIRST_ROWS to ALL_ROWS but it didn't help this query. It still waits on db file sequential read. When I checked with the developer, he suggested rebuilding the text index. But I feel it won't help us in this situation.
I am thinking pinning a few recent partitions of the text index into KEEP might help.
Can you please let me know if my assumption is right?
Thanks,
Ramya
May 28, 2010 - 7:44 am UTC
... I think it returns the entire result set. ...
then remove all hints and let's discuss THAT query. Do that for me first.
A reader, May 30, 2010 - 6:12 pm UTC
Hi Tom,
I have removed the hint from the query now.
No difference in query performance yet.
Thanks,
Ramya
May 31, 2010 - 8:19 am UTC
show me the tkprof output, the query, and the query plan. I want three things:
tkprof with row source operation
the query
the query plan - what the optimizer GUESSED (the plan, not the row source operation)
A reader, June 03, 2010 - 7:08 pm UTC
Hi Tom,
Sorry for delayed response.
I increased SGA_TARGET size from 8GB to 10GB last friday.
It seems to be performing better now. I tried reproducing the issue with the query which used to take a long time before increasing the SGA_TARGET size, but was unsuccessful.
I noticed that some of the queries took a long time yesterday, but I was not able to reproduce the issue. I will try to get the trace next time there is a slow query.
At this point I am not sure if increasing the SGA_TARGET size fixed the problem, as I can still see some slow queries.
Thanks,
Ramya
Why need "pinning" object?
A reader, November 11, 2010 - 2:51 am UTC
Hi tom,
I want to know why we need to "pin" an object before getting an object lock in memory?
November 11, 2010 - 2:41 pm UTC
what is an "object lock in memory" ?
pinning tables,
A reader, April 19, 2012 - 2:08 pm UTC
We have a table used for maintaining logical locks on other key tables (like the customer table). The DML on the logical lock table is done through application code. This lock table will not have more than 1000 rows at any given time (after a lock is released, the row is deleted from the table).
Recently we saw extreme slowness due to contention on this particular table. There are about 50 sessions working on data loads and each session does DML on this lock table. The wait event associated with this was buffer busy waits.
To overcome this problem (please don't ask why they are running 50 sessions in parallel), I am thinking of having one row per block (by setting pctfree to an extremely high number - like 90%). That way, each session will most likely be dealing with its own block.
The other option we are considering is pinning the table in memory.
What would be your thoughts?
Thanks,
April 20, 2012 - 3:52 pm UTC
pinning tables in memory - there is no such feature, alter table cache doesn't do it, buffer pool keep doesn't do it, nothing but nothing will "pin" a table in memory
and it wouldn't matter anyway - even if it were - the buffer busy waits are likely due to attempted truly concurrent modifications. Only one transaction can have the block in current mode to modify it at a time
Your one row per block (or few rows per block) will tend to alleviate this - unless you are modifying index keys and the buffer busy waits are on that.
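Since PCTFREE only affects newly formatted blocks, the one-row-per-block idea above would require rebuilding the existing table. A sketch, with a hypothetical table and index name (and note that PCTUSED is ignored in ASSM tablespaces):

```sql
-- pctfree 90 leaves ~90% of each block free, spreading rows across blocks
alter table lock_tab move pctfree 90 pctused 5;

-- the move invalidates the table's indexes, so rebuild them afterwards
alter index lock_tab_pk rebuild;
```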
pinning in shared pool
A reader, May 28, 2012 - 8:43 am UTC
Hello Tom,
I am using Oracle 11.2.0.3.0 in my prod system.
While trying to check for pinned objects, I found something called "MULTI-VERSIONED OBJECT" in v$db_object_cache with KEPT=YES.
I checked some of those objects and each is either a table or an index.
I am pretty sure we have not kept any tables or indexes; moreover, v$db_object_cache doesn't normally have pinned/cached table/index info.
select unique(type) from v$db_object_cache order by 1;
I did a lot of searching of Google and the Oracle docs, but found no trace of MULTI-VERSIONED OBJECT.
Do you have any input on those objects, and are they really put in the KEEP pool?
Thanks.
May 28, 2012 - 8:59 am UTC
It probably has to do with multiple versions of statistics in the database, can happen with things during statistics gathering and at other times.
Data Pinning
Prashant, June 29, 2012 - 6:20 am UTC
Hi Tom,
1. A package is pinned (DBMS_SHARED_POOL.KEEP).
2. The same package is altered in its respective schema
(using CREATE OR REPLACE PACKAGE).
To reflect the alteration, do we need to -
a. PIN the same altered package again, or
b. is the altered package not required to be pinned again?
Warm Regards,