Does bundle dictionary lookup cached cursors affect use of cursor cache

Question and Answer

Tom Kyte

Thanks for the question, Joe.

Asked: March 28, 2012 - 12:40 pm UTC

Last updated: March 29, 2012 - 6:58 am UTC

Version: 11.2.0

Viewed 1000+ times

You Asked

Tom,
I've been trying to solve a performance problem for a customer where I was seeing different benchmark performance for a standard transaction we run, basically to validate the processing power of a particular CPU/hardware configuration. The system is a Warehouse Management System; we use a data generation program to create a dataset and then run a fast, complex transaction to see what rate we can get per second for a single-threaded process. I was seeing a significant slowdown on the customer system, where I expected its performance to be pretty much the same as on my performance server.

After a lot of investigation with trace files, tkprof etc., I found the difference was that the customer had session_cached_cursors set at the default value of 50, whereas mine was set at 150. All of our Pro*C code is compiled with HOLD_CURSOR=YES, RELEASE_CURSOR=NO and MAXOPENCURSORS=150, so we always set our value to at least 150. When I compared the tkprof files, which were practically identical (they had run the same workload on the same data), the only difference was the parse count: theirs was 85k+ where mine was 5k. So the 50 limit was forcing additional parses. The test program basically calls a PL/SQL package to do all the work in a loop. The 5k parses in my test were OK, as we have one piece of dynamically generated SQL where we deliberately call dbms_sql.parse.

Interestingly, I had tripped across this last week at Oracle in Reading while benchmarking our app on Exadata. We were seeing library cache: mutex X waits when we ramped the volume up to high levels. One of the tech guys suggested session_cached_cursors (SCC) might be our problem (it was at the default); it wasn't, but when we increased it we saw a 40% improvement in the same benchmark transaction. That's what made me realise what my customer's problem was.

I tested it at 50 on my system (normally set at 150, large enough for any of our Pro*C binaries to cache all of their cursors) and could see the same issue as the customer, so problem solved. I then tried it higher at 500 and saw yet another increase in performance, from 120 p/s to 135 p/s. This confused me, as I knew we do not need more than about 130 cursors, so why the continued improvement?
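
To pin down whether the session cursor cache is actually being hit, it can help to watch the session-level parse statistics; a minimal sketch, assuming access to V$MYSTAT and V$STATNAME, run from the session under test before and after the benchmark loop:

```sql
-- Snapshot parse activity and cursor-cache usage for the current session.
-- Statistic names are as they appear in V$STATNAME on 11g.
select sn.name, st.value
  from v$mystat   st
  join v$statname sn on sn.statistic# = st.statistic#
 where sn.name in ('parse count (total)',
                   'parse count (hard)',
                   'session cursor cache hits',
                   'session cursor cache count')
 order by sn.name;
```

Roughly, if 'session cursor cache count' sits pinned at the parameter value while 'parse count (total)' keeps climbing, the cache is full and cursors are being aged out.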

I looked in the v$open_cursor view and could see the following:


Count  SID  Cursor Type                      SQL ID
    5   13  BUNDLE DICTIONARY LOOKUP CACHED  0kkhhb2w93cx0
   83   13  BUNDLE DICTIONARY LOOKUP CACHED  4m7m0t6fjcs5x
    1   13  DICTIONARY LOOKUP CURSOR CACHED  0k8522rmdzg4k
    1   13  DICTIONARY LOOKUP CURSOR CACHED  0ws7ahf1d78qa
    1   13  DICTIONARY LOOKUP CURSOR CACHED  459f3z9u4fb3u
    1   13  DICTIONARY LOOKUP CURSOR CACHED  5ur69atw3vfhj
    1   13  OPEN                             0y7hj6d8gjf7k
    1   13  OPEN                             0yz38qk2d25f0
    1   13  OPEN                             149788cvqqbfh
    1   13  OPEN                             1sqyqtdj25sfx
.....


The full listing gives 134 unique SQL IDs. 126 are the app, and that stays static for this process. Line item 2 increases with every iteration of the transaction. The SQL is below:


update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1


Now this looks like a sequence get to me; we use sequence numbers as primary keys on some of the tables, so I would expect sequences to be incrementing. What I don't understand is why a new cursor seems to be allocated for every execution. Once the total hits 499, the bundle dictionary value gets reduced (I assume a flush-out of some of the cursors) and it then builds up again. There are 3 child cursors in v$sql for the SQL ID.
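
For reference, a listing like the one above can be produced with something along these lines (the CURSOR_TYPE column of V$OPEN_CURSOR exists in 11g; :sid is the session being examined):

```sql
-- Group a session's cached/open cursors by type and sql_id, busiest first.
select count(*) cnt, sid, cursor_type, sql_id
  from v$open_cursor
 where sid = :sid
 group by sid, cursor_type, sql_id
 order by cnt desc;
```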

I have session_cached_cursors and open_cursors both set to 500, so I'm not sure which one is controlling the behaviour. So at last to my question (sorry about the long-winded prologue): is this normal, and is the creation of these cursors affecting my performance? It all looks very strange to me. I have been struggling to find any useful information on the web or Oracle Support.

Any insights would be very useful.

Thanks Joe.

and Tom said...

Well, the real fix for you would be:

alter sequence S cache 10000 /* or more! */;

instead of the default of 20. That would avoid the recursive commit every 20 sequence gets (lots of log file sync waits from that).
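
For example (the sequence name here is illustrative):

```sql
-- With CACHE 20 (the default) the recursive "update seq$ ..." fires once
-- per 20 NEXTVAL calls; with CACHE 10000 it fires once per 10,000 calls.
alter sequence order_seq cache 10000;

-- Confirm the change took:
select sequence_name, cache_size
  from user_sequences
 where sequence_name = 'ORDER_SEQ';
```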

In general, our recursive sql is not cached open (it is cached in the shared pool, it is shared sql, it will soft parse - but it will cause a parse).

I am sure you saw a lot of benefit from session cached cursors due to the change in PLSQL behavior more than anything. PLSQL uses session_cached_cursors to decide how many cursors to cache open itself.

session_cached_cursors will not in itself reduce parse calls (a session cursor cache hit is still a parse!). However, it can indirectly reduce parsing by allowing PLSQL to cache even more cursors open!


In plsql - if you run a stored procedure with 5 sql statements, the first time you run it plsql will have to parse that sql. It will keep these statements OPEN in the hopes that you run that procedure again and it can skip the parse. The number of cursors it keeps cached open is limited by session_cached_cursors.
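
As a sketch of that behaviour (table and procedure names are made up): run the anonymous block twice and compare 'parse count (total)' from V$MYSTAT between runs; with session_cached_cursors set high enough, the second run should add no parse calls for the static UPDATE.

```sql
-- Hypothetical procedure with one static SQL statement. PL/SQL parses it
-- on the first call and then keeps the cursor cached open across calls,
-- up to session_cached_cursors slots.
create or replace procedure bump_counter(p_id in number) as
begin
  update t_counter set cnt = cnt + 1 where id = p_id;
end;
/

begin
  for i in 1 .. 1000 loop
    bump_counter(1);  -- one parse on first execution, then reuse
  end loop;
end;
/
```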


So, based on your comment: "the only difference was the parse count. Theirs was 85k+ where mine was 5k" - that was due to plsql being able to cache more cursors open.

The seq$ update is still a problem and should be fixed by altering the cache size of the sequence.




Comments

Extra explanation

Joe, March 28, 2012 - 7:26 pm UTC

Thanks Tom, so I think I understand this now. The additional dictionary cursors I am seeing are the sequence updates that occur every time we need a new sequence cache set. In my raw trace file I find 2,002 update statements, as I ran the transaction 10,000 times with 4 sequences, each with a cache of 20 (not sure what the odd 2 were for, but close enough).
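
That count matches the arithmetic of one SEQ$ update per cache refill:

```sql
-- 10,000 transactions x 4 sequences, one "update seq$" per 20 NEXTVALs:
select 10000 * 4 / 20 as expected_updates from dual;  -- 2000
```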

I can see from the raw trace that the cursor is parsed, executed and closed. So what I see in v$open_cursor are the old cursors for the same sql_id that have not been cleaned out until I hit the cached cursor limit.

PARSING IN CURSOR #139665827233920 len=129 dep=2 uid=0 oct=6 lid=0 tim=1332977279328919 hv=2635489469 ad='de6e3078' sqlid='4m7m0t6fjcs5x'
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1
END OF STMT
PARSE #139665827233920:c=0,e=18,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1935744642,tim=1332977279328919
EXEC #139665827233920:c=0,e=71,p=0,cr=1,cu=2,mis=0,r=1,dep=2,og=4,plh=1935744642,tim=1332977279329044
STAT #139665827233920 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  SEQ$ (cr=1 pr=0 pw=0 time=55 us)'
STAT #139665827233920 id=2 cnt=1 pid=1 pos=1 obj=79 op='INDEX UNIQUE SCAN I_SEQ1 (cr=1 pr=0 pw=0 time=3 us cost=0 size=68 card=1)'
CLOSE #139665827233920:c=0,e=2,dep=2,type=3,tim=1332977279329094


The bit I'm still not quite sure of is why I see an improvement when I increase the SCC parameter from 150 to 500, when my app only needs 130. I'll run some tests and tkprof them to see where I am getting the saving. Does the sequence update not benefit from a softer soft parse by the fact there are copies of the sql in the session cache? Are cursors discarded by their cursor type, or are they all treated by the same LRU logic, so that simply by having a hit a cursor goes to the top of the list?

As always thanks for your help. I know a little bit more about this vast subject.

Tom Kyte
March 29, 2012 - 6:58 am UTC

your client APP only needs 130, but you are calling PLSQL - are you adding up into that 130 all of the SQL you do in plsql?


"Does the sequence update not benefit from a softer soft parse?"

Not in general, recursive sql is "special", "different"


alter that sequence cache size!


Joe Byrne, March 29, 2012 - 7:38 am UTC

I found my reason for the different performance stats: I was testing with a very minor difference in the test data that changed the functional flow. Now I get consistent results once I set the cursor cache at 150 or above. Yes, the 130 includes all of the PL/SQL cursors as well.

I tried changing the sequence cache value (set it to 1,000); it made a very small difference to the run time. The trace files show an improvement of 0.17 seconds. It now does 40 updates/parses instead of 2,000. The whole transaction test takes 76 seconds for 10,000 transactions, or 7.63 ms per transaction.

I think I am happy now with what is going on, and that the internal cursors are not affecting the caching of our application cursors.
