
Question and Answer

Connor McDonald

Thanks for the question, nikoloz.

Asked: February 06, 2026 - 9:50 pm UTC

Last updated: February 18, 2026 - 7:16 am UTC

Version: 19.13

Viewed 100+ times

You Asked

Hello,
during a connection 'storm' (for example, after restarting the app servers because of DB maintenance tasks)
I observe a high number of executions of this statement:

update user$ set spare6=DECODE(to_char(:2, 'YYYY-MM-DD'), '0000-00-00', to_date(NULL), :2) where user#=:1

Roughly 30% of all 150 active sessions execute this statement, waiting on the 'log file sync' event, and the sessions won't 'go away'. I assume the DB is not able to handle all of these processes at the same time; killing a couple of these sessions helps.

All a bug fix patch 'managed' to do was change the event type: it used to be
'library cache: mutex X', now it is a 'simple' log file sync, but the performance issue remains.

I did experiment with redo log sizes and file counts, but even during the issue the log switch frequency is not dramatically (if at all) increased. I also changed the buffer cache size, trying both smaller and larger values; the result is the same.

I did try "_disable_last_successful_login_time", but this DB serves a Siebel/CRM system, and after the change some modules failed to start with Siebel internal errors.
I understand that I should consult the vendor about what the system does after a restart and what is causing the session flooding, but at this point I want to rule out communication with the vendor and focus on what can be done on the DB side.

My big questions are:

1. Is the system (the Siebel application) opening too many uncontrolled sessions to the DB, so that when the DB has to audit them we get the

update user$ set spare6=DECODE(to_char(:2, 'YYYY-MM-DD'), '0000-00-00', to_date(NULL), :2) where user#=:1

sessions piling up? Or is my DB simply not managing to handle the load? I understand that without full metrics it is difficult to judge the full picture, but I see no CPU spikes and no RAM consumption issues, yet the 'log file sync' waits are killing the DB. What parameters should I pay the most attention to (buffer cache, maybe)? What can be done DURING the issue? Is it possible to have so many 'log file sync' sessions that the DB not only slows down each individual sync, but cannot handle them at all?

2. Is it possible to rebuild or truncate the sys.user$ table?


and Connor said...

1) Ultimately you're going to see this when login frequency is excessive. 'log file sync' waits mean excessive commits, and those commits are coming from the excessive login frequency. It's definitely something you want to take up with the vendor.
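To confirm the pattern while it is happening, queries along these lines (a sketch only; run with DBA privileges, and note that a session waiting on 'log file sync' may already show a null SQL_ID because its call has completed) show where the active sessions are stuck:

-- Which events are the non-idle active sessions waiting on?
select event, count(*)
from v$session
where status = 'ACTIVE'
and wait_class <> 'Idle'
group by event
order by count(*) desc;

-- Tie the waiting sessions back to the SQL text, where one is still attached
select s.sql_id, a.sql_text, count(*)
from v$session s
join v$sqlarea a on a.sql_id = s.sql_id
where s.event = 'log file sync'
group by s.sql_id, a.sql_text;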

If it is not possible to use "_disable_last_successful_login_time" (which strikes me as odd, since Siebel existed long before this functionality arrived), perhaps take a look at "_granularity_last_successful_login_time", which controls the frequency of the updates. (Obviously do this with guidance and endorsement from Oracle Support.)
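For illustration only, setting an underscore parameter looks like the sketch below. Underscore parameters must be double-quoted; the value shown is a placeholder, and the valid values, units and whether the parameter is dynamic are things to confirm with Oracle Support before touching it:

-- Placeholder value; confirm the valid range/unit with Oracle Support.
-- scope = spfile assumes the parameter is not dynamic.
alter system set "_granularity_last_successful_login_time" = 3600 scope = spfile;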


2) Do not under any circumstances rebuild or truncate the USER$ table. You will immediately destroy the database.


Comments

Avoid connection storms beforehand

Stew Ashton, February 18, 2026 - 10:09 am UTC

In my view, the correct approach would be to avoid the cause (connection storms) instead of trying to manage the bad side effects.

On the server side, throttle the connection rate using the connection rate limiter parameters in listener.ora (a sketch follows); see
https://docs.oracle.com/en/database/oracle/oracle-database/26/netrf/oracle-net-listener-parameters-in-listener-ora.html#GUID-205F23E2-9DDA-4815-A60A-31B94E1F8787
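For illustration, a rate-limited endpoint for a listener named LISTENER could look like this in listener.ora; the host, port and the cap of 10 new connections per second are placeholder values:

# Global cap on new connections per second for the listener named LISTENER
CONNECTION_RATE_LISTENER = 10

LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521)(RATE_LIMIT = YES))
  )

The CONNECTION_RATE_<listener name> parameter sets the global cap, and RATE_LIMIT = YES subjects that endpoint to it.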

On the client side, if possible (in tnsnames.ora or the application's connection string), use the timeout and retry parameters to avoid quick connection failures and immediate retries. See
https://docs.oracle.com/en/database/oracle/oracle-database/26/netrf/local-naming-parameters-in-tns-ora-file.html#GUID-B1EEB283-CBD7-4ED8-9B94-AB890660EB3C
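As a sketch, a connect descriptor with retry throttling might look like this; the alias, host, service name and all the values are placeholders to size for your environment:

SIEBELDB =
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 10)
    (RETRY_COUNT = 3)
    (RETRY_DELAY = 5)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = siebel.example.com))
  )

RETRY_DELAY in particular spaces out reconnection attempts instead of letting clients hammer the listener immediately after each failure.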

Finally, the listener itself might have trouble handling large volumes of connection requests. You might want to take a look at
https://docs.oracle.com/en/database/oracle/oracle-database/26/netag/handling-large-volumes-concurrent-connection-requests.html#NETAG0101
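One knob discussed in that context is the listen queue size. As a sketch (the value 100 is a placeholder; size it to your expected connection burst):

LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521)(QUEUESIZE = 100))
  )

A larger QUEUESIZE lets the listener queue more simultaneous connection requests instead of refusing them during a burst.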

Bottom line, there is a database access problem that is causing a database problem. Fix the problem upstream.