When is an application using too many sessions?


I'm working on a client-server application that uses OCI to communicate with Oracle. In the past our application often treated sessions as an infinite resource, quite possibly to the detriment of other applications that are sharing that Oracle server.

I'm trying to convince my fellow developers that it is important for our application to play nice with others and not be such a resource hog.

At what point would you Oracle DBAs consider an application to be using (or abusing) sessions excessively?

EDIT: Currently one of the biggest consumers of sessions is this one component that uses direct path loading to bulk load several (20-30) tables in parallel.

Best Answer

As with all things, it depends.

If you're using OCI, I assume you're developing a client/server application. In that situation, I would generally want the database to support shared server connections, and I would want the application to request a shared server connection when it connects to the database. That improves the performance of the application by limiting the amount of work the database has to do to create a new session, and it has the side effect of leaving more resources available on the server for others. Shared server connections do involve a slightly longer code path when executing subsequent queries, since each query has to be handed off to a shared server process, but that generally isn't a big deal if the number of shared server processes is reasonable.
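As a sketch, requesting a shared server connection is typically done in the connect descriptor rather than in application code; the host, port, and service name below are placeholders:

```
# Hypothetical tnsnames.ora entry; host, port, and service name are placeholders.
MYDB_SHARED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = mydb.example.com)
      (SERVER = SHARED)
    )
  )
```

`(SERVER = SHARED)` asks for a shared rather than dedicated server process; the database must have dispatchers and shared servers configured for the request to be honored.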

Opening a large number of sessions could also be a problem if the DBA is not using automatic PGA management. With manual PGA management, PGA is configured on a per-session basis, so each session can allocate its own SORT_AREA_SIZE for sorts, among other PGA components. If you created a large number of sessions in a database using manual PGA management and each session tried to maximize its PGA usage, you could easily starve the server of RAM and cause performance problems for everyone. Assuming you are using Oracle 10.1 or later, however, automatic PGA management is available. In that case, the DBA configures a PGA_AGGREGATE_TARGET (or includes the PGA in the MEMORY_TARGET in 11g) and the database takes care of ensuring that the aggregate PGA across all sessions is capped so that the server does not run out of resources.
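For illustration, under automatic PGA management the DBA sets a single aggregate target and can watch total consumption; the sizes here are made up, not recommendations:

```sql
-- Hypothetical setting; the 2G value is illustrative only.
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;

-- Confirm which PGA mode is in effect.
SELECT name, value
  FROM v$parameter
 WHERE name IN ('workarea_size_policy', 'pga_aggregate_target');

-- Watch aggregate PGA consumption across all sessions.
SELECT name, value, unit
  FROM v$pgastat
 WHERE name IN ('aggregate PGA target parameter', 'total PGA allocated');
```

With `workarea_size_policy = AUTO`, per-session parameters like SORT_AREA_SIZE are ignored and the database divides the target among sessions itself.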

If the database supports shared server connections, your application gets a shared server connection, and the database uses automatic PGA management, most DBAs won't care too much how many sessions you create.

Now, if you are creating many sessions so that you can do more work in parallel, that can create performance problems over and above the number of sessions. It's easy enough to configure the database to support 1,000 sessions, for example; it's much harder to configure it not to die in a pile if all 1,000 sessions simultaneously issue a meaty query against a data warehouse. If your application is using up all the resources available to the database, the DBA would probably want to consider using Oracle Resource Manager to prioritize the different applications and/or users. For example, the DBA could configure Resource Manager so that, if CPU utilization goes to 100%, your application in the aggregate gets 50% of the CPU, some other application in the aggregate gets 25%, and all others share the remaining 25%. If no other requests were pending, your application would be free to use all 100% of the CPU.
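As a hedged sketch of that 50/25/25 split, a DBA might build a resource plan along these lines; the plan and consumer group names are invented (OTHER_GROUPS, however, is the mandatory built-in catch-all group):

```sql
-- Hypothetical plan; names and percentages mirror the example above.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;

  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DAYTIME_PLAN',
    comment => 'CPU split among applications under contention');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'YOUR_APP',  comment => 'The client/server application');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'OTHER_APP', comment => 'Another application');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'YOUR_APP',
    comment => 'Half the CPU when saturated', mgmt_p1 => 50);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'OTHER_APP',
    comment => 'A quarter of the CPU', mgmt_p1 => 25);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'Everything else', mgmt_p1 => 25);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/
```

The percentages only bite when the CPU is saturated; an idle machine still lets any group use everything, which is exactly the behavior described above.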

If you are running things in parallel, it may also be worth investigating Oracle's ability to run individual statements in parallel, since that can involve less overhead than writing your own parallelization code. I would expect, for example, that it would be far easier, and quite possibly faster, to write a client application that serially submits statements executed with Oracle parallel query than to open multiple sessions and execute each statement from a separate thread of your application with no parallel query at all. With parallel query, the database can also adjust the number of parallel slaves that are spawned: when the database is particularly busy, fewer slaves are started, and when it is relatively idle, more are started. That would potentially satisfy both the desire to make the application as efficient as possible and the desire to share the server's resources equitably with other applications.
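For example, a single statement can be asked to run in parallel with a hint, leaving the fan-out to the database; the table, alias, and column names below are placeholders:

```sql
-- Hypothetical query; SALES_FACT and its columns are placeholder names.
SELECT /*+ PARALLEL(s, 8) */ s.region, SUM(s.amount)
  FROM sales_fact s
 GROUP BY s.region;

-- On 11gR2 and later, the database can instead choose the degree of
-- parallelism itself based on the statement and current load:
ALTER SESSION SET parallel_degree_policy = AUTO;
```

With the hint, one session submits one statement and Oracle runs it with up to eight slaves; with automatic degree-of-parallelism, even the 8 goes away and the database scales the slave count to how busy it is.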