Tom, we are a small software company that has developed a customer-facing Graphical User Interface (GUI) that connects to a customer-hosted Oracle 11g database. This is a two-tier, classic client/server setup. The GUI uses Oracle Data Provider (ODP) for .NET, which provides connection pooling by default---so our developers used it. As a result, each GUI (and thus each user) establishes 12-14 dedicated connections to their particular database.
Recently we have been discussing scalability. We want to be able to support customers who have hundreds (or even more) users sharing a single database. My experience says that having 12-14 connections per GUI will limit our ability to do that, and I have urged us to move towards using a single connection per GUI.
Our C# developers have resisted this idea, especially since connection pooling is enabled by default in the ODP. (Why would Oracle enable something by default if it's so harmful to scalability, after all?) I think the real fear is that the GUI, which is multi-threaded, might perform more slowly with a single connection, since individual threads would have to be serialized to share that single connection.
I have suggested that we endeavor to move towards a single connection per GUI, and only then to investigate other options like shared server.
So my questions are 1) am I being reasonable? 2) Under what circumstances would having multiple connections from a single GUI be reasonable? And 3) do you think that combining connection pooling with shared server would ever be reasonable?
This is so bad - so so bad, unbelievably bad.
Connection pooling is to be used to CONCENTRATE a large number of end users into a very very very small number of connections to a database.
Your developers are using a connection pool to EXPLODE a set of users into an order of magnitude MORE connections.
What they've got is completely and utterly backwards. Totally.
1) yes, reasonable, sound, smart
2) Mostly never. And even then, it shouldn't have multiple connections - but it can have multiple sessions on a single connection. SQL*Plus does this, for example, with autotrace:
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5671284058977 But pretty much NEVER.
3) Nope, they should use a single connection per instance of your client/server application. Using shared server to try to solve their bad architecture decision would just be wrong. Why would you want to
a) consume a ton of resources (each session they create would consume a ton of SGA)
b) impose a long code path - each request the app makes would be sent to a dispatcher, the dispatcher would put it into a queue in the SGA, a shared server (when one becomes available) would pick it up, process it, and put the response back into a queue in the SGA for the dispatcher to eventually send back to you.
as opposed to - they send request, dedicated server deals with it and sends answer back.
Use shared server only when you have too many client/server applications running concurrently. Do not use it to "solve" their issue.
Have them limit their connection pools to 1 - min size = 1, max size = 1. They won't even have to recode...
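As a sketch of how small that change is: the pool size is controlled entirely by connection-string keywords in ODP.NET ("Pooling", "Min Pool Size", "Max Pool Size"), so capping each GUI at one connection is a config edit, not a code change. The data source name and credentials below are placeholders:

```csharp
// Hypothetical ODP.NET connection string - only the pool-size keywords
// change; the application code that opens/closes connections stays as-is.
// With Max Pool Size=1, a thread calling Open() while the one pooled
// connection is checked out simply waits (up to "Connection Timeout")
// rather than creating another dedicated server session.
string connStr =
    "User Id=app_user;Password=***;Data Source=CUSTDB;" +
    "Pooling=true;Min Pool Size=1;Max Pool Size=1";

using (var conn = new OracleConnection(connStr))
{
    conn.Open();
    // ... run the query, then Close()/Dispose() returns the
    // single connection to the pool for the next thread.
}
```

Note this serializes the GUI's database work across its threads, which is exactly the trade-off being debated above - measure it before assuming it's too slow.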