Question and Answer

Tom Kyte

Thanks for the question, Brian.

Asked: November 25, 2014 - 6:46 pm UTC

Last updated: December 03, 2014 - 10:51 pm UTC

Version: 11.2.0.3

Viewed 1000+ times

You Asked

Tom, we are a small software company that has developed a customer-facing Graphical User Interface (GUI) that connects to a customer-hosted Oracle 11g database. This is a two-tier, classic client/server setup. The GUI uses Oracle Data Provider (ODP) for .NET, which provides connection pooling by default---so our developers used it. As a result, each GUI (and thus each user) establishes 12-14 dedicated connections to their particular database.

Recently we have been discussing scalability. We want to be able to support customers who have hundreds (or even more) users sharing a single database. My experience says that having 12-14 connections per GUI will limit our ability to do that, and I have urged us to move towards using a single connection per GUI.

Our C# developers have resisted this idea, especially since connection pooling is enabled by default in the ODP. (Why would Oracle enable something by default if it's so harmful to scalability, after all?) I think the real fear is that the GUI, which is multi-threaded, might perform more slowly with a single connection, since individual threads would have to be serialized to share that single connection.

I have suggested that we endeavor to move towards a single connection per GUI, and only then to investigate other options like shared server.

So my questions are 1) am I being reasonable? 2) Under what circumstances would having multiple connections from a single GUI be reasonable? And 3) do you think that combining connection pooling with shared server would ever be reasonable?

and Tom said...

This is so bad - so so bad, unbelievably bad.

Connection pooling is to be used to CONCENTRATE a large number of end users into a very very very small number of connections to a database.

Your developers are using a connection pool to EXPLODE a set of users into an order of magnitude MORE users.

They've got it completely and utterly backwards. Totally.


1) yes, reasonable, sound, smart

2) mostly never. And even then, it shouldn't have multiple connections - but it can have multiple sessions on a connection. SQL*Plus does this, for example, with autotrace: https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5671284058977

but pretty much NEVER

3) nope, they should use a single connection per instance of your client/server application. Using shared server to try to solve their bad architecture decision would just be wrong. Why would you want to

a) consume a ton of resources (each session they create would consume a ton of SGA)

b) impose a long code path - each request their app makes would be sent to a dispatcher, the dispatcher puts it into a queue in the SGA, a shared server (when one becomes available) picks it up, processes it, and puts the response back into a queue in the SGA for the dispatcher to eventually send back to you.

as opposed to - they send the request, the dedicated server deals with it and sends the answer back.

Use shared server only when you have too many client/server applications running concurrently. Do not use it to "solve" their issue.



Have them limit their connection pools to 1 - min size = 1, max size = 1. They won't even have to recode...
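For illustration, a minimal ODP.NET connection string along those lines might look like the sketch below - the data source, user id, and password are placeholders, not values from this discussion:

```csharp
// Sketch only: MYDB, app_user, and secret are placeholder values.
// Min Pool Size / Max Pool Size are standard ODP.NET connection-string attributes;
// with both set to 1 the pool collapses to a single connection, so existing code
// that checks connections out of the pool keeps working without changes.
using System;
using Oracle.DataAccess.Client;   // classic ODP.NET (oracle.dataaccess.dll)

class SingleConnectionPool
{
    const string ConnectString =
        "Data Source=MYDB;User Id=app_user;Password=secret;" +
        "Pooling=true;Min Pool Size=1;Max Pool Size=1";

    static void Main()
    {
        using (var con = new OracleConnection(ConnectString))
        using (var cmd = new OracleCommand("select sysdate from dual", con))
        {
            con.Open();                              // checks the one connection out of the pool
            Console.WriteLine(cmd.ExecuteScalar());
        }                                            // Dispose returns it to the pool
    }
}
```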


Comments

ODP Connection Pooling

Mark, November 27, 2014 - 6:51 pm UTC

> ...since connection pooling is enabled by default in the ODP. (Why would Oracle enable something by default if it's so harmful to scalability, after all?)

Like many things, connection pooling is desirable for some situations and less desirable for others. Consider a multi-tier web application where a middle-tier server establishes connections to the database using the same credentials for each connection. In this case connection pooling would likely be desired. In your case it is less desirable.

However, one of them (either on or off) has to be the default, of course, and developers should take this into account if the default is not appropriate for a specific scenario. For example, if connection pooling were disabled by default, a different class of applications would not be well served, and for those applications scalability could be limited.

If you are able to change the connection string, it is a matter of setting Pooling=false (or doing as Tom has suggested and explicitly limiting the pool size). Making such settings configuration items, so they can be altered without a recompile, might be considered as well.
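For what it's worth, here is a rough sketch of what that could look like with the classic ODP.NET provider - the "OraDb" name and the connection-string values are hypothetical:

```csharp
// Sketch only: the "OraDb" name and connection-string values are hypothetical.
// Keeping the string in App.config lets Pooling=false (or Min/Max Pool Size)
// be changed without recompiling the GUI.
//
// App.config:
//   <connectionStrings>
//     <add name="OraDb"
//          connectionString="Data Source=MYDB;User Id=app_user;Password=secret;Pooling=false"
//          providerName="Oracle.DataAccess.Client" />
//   </connectionStrings>
using System.Configuration;          // requires a reference to System.Configuration.dll
using Oracle.DataAccess.Client;

static class ConnectionFactory
{
    public static OracleConnection Create()
    {
        string cs = ConfigurationManager.ConnectionStrings["OraDb"].ConnectionString;
        return new OracleConnection(cs);   // caller opens it and owns its lifetime
    }
}
```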
Tom Kyte
November 29, 2014 - 12:45 am UTC

Thanks Mark - who has literally written the book on this .NET stuff :)

Brian Foreman, December 03, 2014 - 7:59 pm UTC

Nicely explained. "Concentrating many users into few connections" versus "exploding a few users into many connections" puts it quite well.

Sadly, as it turns out, significant re-coding *will* be required to reduce our connection-count per GUI to 1, as existing code does not return connections to the pool but instead retains them for use in different parts of the GUI. This was done (needlessly, it now seems clear) to retain session state in those areas---in particular, for using global temporary tables to store and manipulate query result sets, yet another bad practice we are thankfully already moving away from.

To ensure I fully understand, then, I will sum up in the hope you can validate:

In short, there should be nothing stopping us from using ODP (or even just oracle.dataaccess.dll) to 1) create a single connection to the database, 2) establish (if necessary) many separate sessions over that single connection (the necessity of which will be debated separately), each one of which can 3) retain its own session state information for as long as it wants---even though only one session can be *active* on that single connection at a time.

If that all sounds right, then I believe I have all the information I need to continue working this issue with my development team. Thanks so much Tom (and Mark!)!!!
Tom Kyte
December 03, 2014 - 8:53 pm UTC

seriously? Your developers wailed "but it is configured by default" and then proceeded to use a connection pool in a manner it is explicitly NOT DESIGNED to be used in? really?

You grab a connection from a pool, use it for as short a duration of time as possible, and put it back.

If you do anything else - repeat - *anything else*, you do not use connection pools; you connect to the database and use the connection yourself for as long as you need it. This is called client/server computing.
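To put the two patterns side by side, a rough sketch - the connect string, SQL, and bind value are made up purely for illustration:

```csharp
// Sketch contrasting the two usage patterns; connect strings and SQL are hypothetical.
using System;
using Oracle.DataAccess.Client;

class TwoPatterns
{
    // Pattern 1: connection pooling -- check a connection out, use it briefly, give it back.
    // With Pooling=true (the default), Open()/Dispose() just borrow and return a pooled connection.
    static object PooledLookup(string pooledConnectString, int id)
    {
        using (var con = new OracleConnection(pooledConnectString))
        using (var cmd = new OracleCommand("select ename from emp where empno = :id", con))
        {
            cmd.Parameters.Add("id", id);
            con.Open();
            return cmd.ExecuteScalar();
        }   // connection goes back to the pool here; no session state survives this point
    }

    // Pattern 2: client/server -- one dedicated connection, opened at startup and kept
    // for the life of the GUI. Session state (temporary tables, package variables, ...)
    // survives between calls because it is always the same session.
    static readonly OracleConnection AppConnection =
        new OracleConnection("Data Source=MYDB;User Id=app_user;Password=secret;Pooling=false");
    // ... AppConnection.Open() once at startup; every screen shares this single connection,
    //     serializing its database calls on it.
}
```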

They didn't need multiple connections to maintain state - that would happen if they just grabbed A SINGLE connection and kept it.


They should not create multiple sessions per connection - that is just a way to waste tons of resources in a slightly more efficient fashion than having multiple connections. They *could*, but there is no way they *should*.


So yes, you could go down that path, it would get you a bit further before you hit the wall, but you are still going to smack that wall pretty hard even so...


Brian Foreman, December 03, 2014 - 9:55 pm UTC

Trust me, it pains me to ask you these questions. I was aghast when I realized what had been done, but failed to convince my team that it was any big deal until scalability came up. Now, I need to make *sure* we understand it, and I thought it might help to hear it from you. So, thanks for that.

It's also true that establishing multiple sessions over a single connection should not be necessary for us. I wanted to be absolutely sure I understood it as a possibility, but other than the SQL*Plus example you provided (where there is an understandable reason for it), I can't really think of any reason to do it. It seems such situations would be very rare.

The discussion has been immensely helpful, so thanks, exasperating as it must have been to hear about...
Tom Kyte
December 03, 2014 - 10:51 pm UTC

:) thanks for the feedback, appreciate it.