Question and Answer

Connor McDonald

Thanks for the question, Sandeep.

Asked: July 13, 2016 - 2:40 pm UTC

Last updated: July 20, 2016 - 3:56 pm UTC

Version: 11.2.0.4.0

Viewed 10K+ times!

You Asked

Hi Connor,

I wanted to understand the session memory release process.

1] A database
2] Connections are established to the database through a connection pool
3] The connection pool is configured in the application server (WebLogic)
4] The connection pool count is 50
5] There will always be 50 connections to the database (irrespective of the number of transactions, 0 to N)
6] The moment all 50 connections are established, the additional RAM used is 1500 MB
7] The RAM used is understood, because of the individual PGA for each session
8] For a certain period of time, say 30 minutes, it processes 5000 requests
9] After 30 minutes it is observed that the RAM consumed is now [ used_amount_before_connection_pool + 1500 + some_more_say_1000MB ]
10] What is the purpose of this "some_more_say_1000MB" of RAM usage?
11] I ask the question in step 10 because the amount { 1500 + some_more_say_1000MB } is only released when all 50 connections are killed. So after 30 minutes, when there are no transactions happening, why is the "some_more_say_1000MB" of memory not released?
12] At another time, when there are 9000 requests for another 30 minutes, the memory utilization becomes [ used_amount_before_connection_pool + 1500 + some_more_say_1000MB + some_more_say_1000MB ]
13] This keeps happening until the entire memory is consumed.

Could you please help me understand?

Thanks and Regards,
Sandeep


and Connor said...

When the session is established, we'll assign some PGA memory to it.

SQL> conn mcdonac/*******
Connected.

SQL> select s.name, st.value
  2  from v$statname s, v$mystat st
  3  where st.STATISTIC# = s.STATISTIC#
  4  and s.name = 'session pga memory max';

NAME                                                    VALUE
-------------------------------------------------- ----------
session pga memory max                                2556824


As you do operations (SQL etc) you will then perhaps consume more PGA memory, depending on what you are doing, eg:

SQL> select s.name, st.value
  2  from v$statname s, v$mystat st
  3  where st.STATISTIC# = s.STATISTIC#
  4  and s.name = 'session pga memory max';

NAME                                                    VALUE
-------------------------------------------------- ----------
session pga memory max                                2950040


so I used up a couple of hundred kilobytes to parse, execute and fetch from this query. If I do something that needs a LOT of PGA (eg a big sort):


SQL> create table xxx1 as select * from dba_objects order by 1,2,3,4;

Table created.

SQL> select s.name, st.value
  2  from v$statname s, v$mystat st
  3  where st.STATISTIC# = s.STATISTIC#
  4  and s.name = 'session pga memory max';

NAME                                                    VALUE
-------------------------------------------------- ----------
session pga memory max                               40174488

1 row selected.


then you can see it bump up. But also, once that operation is completed, that figure is a high water mark - you can see that my *current* memory drops back down again, as below:

SQL> select s.name, st.value
  2  from v$statname s, v$mystat st
  3  where st.STATISTIC# = s.STATISTIC#
  4  and s.name = 'session pga memory';

NAME                                                    VALUE
-------------------------------------------------- ----------
session pga memory                                    6030232


Whether that is released back to the OS is dependent on platform and version.
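If you want to see what each server process has actually allocated, and how much of it could be handed back to the OS, v$process exposes this; a minimal sketch using the pga_* columns of that view:

-- Per-connection PGA as the database sees it. pga_freeable_mem is memory
-- that could be released back to the OS without ending the session.
select s.sid,
       round(p.pga_used_mem    /1024/1024, 1) used_mb,
       round(p.pga_alloc_mem   /1024/1024, 1) alloc_mb,
       round(p.pga_freeable_mem/1024/1024, 1) freeable_mb,
       round(p.pga_max_mem     /1024/1024, 1) max_mb
from   v$process p, v$session s
where  s.paddr = p.addr
and    s.type = 'USER';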

So a couple of things:

a) It seems a little high that you are burning 30 MB per connection the moment those sessions start. You can see my example is around 2.5 MB, and to get to 40 MB, I had to do a nice big sort. That perhaps raises questions about the code in your application.

b) With a fixed number of connections (ie 50 in your pool), the RAM allocation would be expected to go up to a "limit", ie the PGA required to handle 50 sustained connections, and then level off. If it's continuing to grow and grow, that would suggest an application fault of some kind, because that is one of the things a connection pool is designed to avoid.
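One way to check for that levelling-off is to sample the pool's total PGA every few minutes; a sketch over the same v$process and v$session views:

-- Total and average PGA across all user sessions (ie the connection pool).
-- Sampled periodically, a healthy pool levels off; a leaking one keeps climbing.
select count(*)                                 sessions,
       round(sum(p.pga_alloc_mem)/1024/1024)    total_alloc_mb,
       round(avg(p.pga_alloc_mem)/1024/1024, 1) avg_alloc_mb
from   v$process p, v$session s
where  s.paddr = p.addr
and    s.type = 'USER';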

Hope this helps.


Comments

A reader, July 14, 2016 - 7:21 am UTC

Hi Connor,

I am using 250 connections to the DB. Whenever requests are pumped to the DB, the memory increases with every iteration.

The DB version we are using is 11.2.0.4.0. The RedHat version is Red Hat Enterprise Linux Server release 6.2 (Santiago).

Could you please help me figure out what exactly is stopping the memory from being released.

Thanks,
Hari
Connor McDonald
July 14, 2016 - 12:42 pm UTC

"4] The connection pool count is 50"

So where did 250 come from ?

A reader, July 15, 2016 - 5:35 am UTC

The "session pga memory max" statistic indicates a value, but what is its unit: bytes, KB, or MB?
Connor McDonald
July 15, 2016 - 8:57 am UTC

bytes
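So divide the value by 1024 twice to read it in megabytes, eg:

-- The same query as earlier, with the value converted from bytes to MB.
select s.name, round(st.value/1024/1024, 1) mb
from   v$statname s, v$mystat st
where  st.statistic# = s.statistic#
and    s.name = 'session pga memory max';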

A reader, July 16, 2016 - 1:33 am UTC

Hi Connor,

Thank You.

1] We changed the OS
2] We changed the DB version
3] We changed the hardware

But we observe the same behavioral pattern, that is, continuous swelling of the used RAM.

One more thing: at the database level, the memory scheme used is ASMM [SGA = 32 GB] + [PGA = 16 GB], with a dedicated server setup.

But at the beginning, or when the connections are killed, the RAM occupied shows 7 GB instead of 48 GB. Why?

And once the traffic is allowed, the swelling starts.

As for how I am checking the memory utilization, I am using the OS commands below:

free -ms 10
free -g

These commands also show that the cache part is continuously increasing.

Since all possible combinations have been tried (different OS, DB version, hardware) and the issue remains the same, can I conclude that the issue has nothing to do with the OS, DB, or hardware, and is purely application dependent?

Could you please help.

Thanks and Regards,
Sandeep



Connor McDonald
July 16, 2016 - 4:49 am UTC

If you have set ASMM to 48G, you're basically telling us that 48G will be ours to use. So even if we start at 7G, we will most probably grow to that 48G mark during the life of the instance.

32G of that (the SGA) will pretty much remain constant. The 16G for PGA is the limit we *try* to keep to for things like hash joins, sorts, and the like. But if (say) a session creates a huge PL/SQL table in memory (which would chew up a lot of PGA), then you can exceed that limit. (12c addresses this with the pga_aggregate_limit parameter.)

But no matter what the PGA grows to (let's say that with all the sessions connected, the total is sitting at around 20G), you would expect that ceiling to remain relatively level. Some sessions may occasionally bump it higher than 20G when they do something intensive, but you would expect that to be transient.

If you are observing continuous growth in PGA consumption, then you have an application problem to deal with.

One thing to keep in mind: at the OS level, you would normally see almost *all* memory used up *all* of the time. After all, it makes little sense not to use it all. So for most OSs, any "leftover" memory that applications are not using will typically be set aside for things like file system caching. So make sure you dig down into the database layer to look at memory, because the OS will often say "all memory is in use" (and that is a good thing).
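At the database layer, v$pgastat gives the instance-wide picture; a minimal sketch (the values for these rows are in bytes):

-- Instance-wide PGA: the target, what is currently allocated and in use,
-- and the high water mark since instance startup.
select name, round(value/1024/1024) mb
from   v$pgastat
where  name in ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse',
                'maximum PGA allocated');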

A reader, July 16, 2016 - 5:58 am UTC

Hi Connor,

Thank You.

1st Thing
------------
The impact of "all memory used up" at the OS level is really intriguing. When this happens, the "data-source" inside the application server goes into a "suspended" state, and transactions fail.

2nd Thing
-----------
So if all is well, are you suggesting the memory swelling will continue beyond 48G, to say 70G, and after that remain at 70G?

3rd Thing
----------
About investigating from the application side: I see the memory utilization keeps increasing under the "Other" category, not under SQL or PL/SQL. The components present under the "Other" category appear rather cryptic. Could you please suggest a way forward?

4th Thing
-----------
Suppose I am doing a hard parse of the same query each time it is fired, which is the only possible thing my application may be doing, as it is just a query-based application, most of the time repeatedly fetching the same data blocks. How much could a hard parse contribute to a memory swelling issue like this? If a big sort or a parse were the reason, then the memory should be released back once the transactions are over; no result caching is enabled, and there are no materialized views. The most interesting part is that all of the swelling vanishes once the connections are killed.

Could you please help understanding.

Thanks and Regards,
Sandeep




Connor McDonald
July 17, 2016 - 12:25 am UTC

Do you have an AWR report you could show us when the memory is near exhaustion?

(you can email it to asktom_us@oracle.com)
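In the meantime, regarding the "Other" category you mention: v$process_memory breaks each process's PGA into categories (SQL, PL/SQL, Freeable, Other), so you can watch which one is growing; a sketch:

-- PGA by category, summed across user sessions. A steadily growing 'Other'
-- figure here is the same growth the AWR report should confirm.
select pm.category,
       round(sum(pm.allocated)/1024/1024) alloc_mb,
       round(sum(pm.used)     /1024/1024) used_mb
from   v$process_memory pm, v$process p, v$session s
where  p.pid   = pm.pid
and    s.paddr = p.addr
and    s.type  = 'USER'
group  by pm.category
order  by 2 desc;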

A reader, July 18, 2016 - 6:49 am UTC

Hi Connor,

1] The datasource used was JNDI, from the WebLogic application server, and the memory release is not happening

2] When the Apache basic datasource from within the application is used, the memory is released as expected

Could you please help me understand?

About the AWR report: given the above test case, if you still need one, I shall provide it.

Thanks and Regards,
Sandeep
Connor McDonald
July 20, 2016 - 3:53 pm UTC

Are you sure the application server is closing cursors? Is it possible the app is holding them open?

The WebLogic Server blog has this advice:


Close all JDBC resources in a finally Block

This includes ResultSet, Connection, Statement and PreparedStatement objects, to avoid potential memory leaks. The connection.close() won't necessarily clean up all the other objects automatically, because the implementation of close() may differ between JDBC drivers. Also, JDBC objects not properly closed can lead to this error:

java.sql.SQLException: ORA-01000: maximum open cursors exceeded.

If you don't explicitly close Statements and ResultSets right away, cursors may accumulate and exceed the maximum number allowed in your DB before the Connection is closed.


https://blogs.oracle.com/WebLogicServer/entry/insuring_high_level_of_performance
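You can also check this from the database side; a sketch over v$open_cursor (note this view includes cursors held by the session cursor cache, so watch the trend rather than the absolute number):

-- Open/cached cursors per session, highest first; compare against the
-- open_cursors parameter and watch whether the counts only ever climb.
select s.sid, s.machine, count(*) cursor_count
from   v$open_cursor oc, v$session s
where  s.sid = oc.sid
group  by s.sid, s.machine
order  by cursor_count desc;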

Chris

A reader, July 18, 2016 - 6:54 am UTC

Hi Connor,

Please find the code snippets used in a Java-based application.

Earlier we used a WebLogic datasource via JNDI:
-----------------------------------------------
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="${JNDI_NAME}" />
</bean>

________________________________________________________

The modified code uses the Apache BasicDataSource:
--------------------------------------------------
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" >
<property name="url" value="${DB_URL}" />
<property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
<property name="username" value="${DB_USERNAME}" />
<property name="password" value="${DB_PASSWORD}" />
<property name="removeAbandoned" value="true"/>
<property name="initialSize" value="${DB_MINCONNECTION}" />
<property name="maxActive" value="${DB_MAXCONNECTION}" />
</bean>

Thanks,
Theja
Connor McDonald
July 20, 2016 - 3:56 pm UTC

We're not really experts in WebLogic configuration here!

You could try asking on the Weblogic forums about your config:

https://community.oracle.com/community/fusion_middleware/weblogic

Chris