Thanks for the question, nikoloz.
Asked: February 06, 2026 - 9:50 pm UTC
Last updated: February 18, 2026 - 7:16 am UTC
Version: 19.13
You Asked
Hello,
During a connection 'storm' (for example, after restarting the app servers because of DB maintenance tasks), I observe a high number of executions of this command:
update user$ set spare6=DECODE(to_char(:2, 'YYYY-MM-DD'), '0000-00-00', to_date(NULL), :2) where user#=:1
Roughly 30% of all 150 active sessions execute this command, waiting on the log file sync event, and the sessions won't 'go away'. I assume the DB is not able to handle all of these processes at the same time; killing a couple of these sessions helps.
A bug-fix patch only 'managed' to change the wait event type: it used to be library cache: mutex X, now it's a 'simple' log file sync, but the performance issue remains.
I experimented with redo log sizes and the number of log files, but even during the issue the log switch frequency is not dramatically (if at all) increased. I also changed the buffer cache size, trying both smaller and larger values; the result is the same.
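(For reference, the switch rate can be checked with a query along these lines against v$log_history; a spike in switches per hour during the storm would implicate redo volume:)

-- log switches per hour since the history retention window began
select to_char(first_time, 'YYYY-MM-DD HH24') as hour, count(*) as switches
from v$log_history
group by to_char(first_time, 'YYYY-MM-DD HH24')
order by 1;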
I did try _disable_last_successful_login_time, but this DB serves a Siebel/CRM system, and after that change some modules did not start, failing with Siebel internal errors.
I understand that I should consult the vendor about what the system does after the restart and what is causing the session flooding, BUT at this point I want to rule out communication with the vendor and focus on what can be done on the DB side.
My big questions are:
1. Is the system (the Siebel application) opening too many uncontrolled sessions to the DB, so that when the DB has to audit them we get the
update user$ set spare6=DECODE(to_char(:2, 'YYYY-MM-DD'), '0000-00-00', to_date(NULL), :2) where user#=:1
sessions piling up, or is my DB simply not managing to handle the load? I understand that without full metrics it is difficult to judge the full picture, but I see no CPU spikes and no RAM consumption issues, yet the log file sync is killing the DB. WHAT parameters should I pay the most attention to (buffer cache, maybe)? What can be done DURING the issue (see the sketch after these questions)? Is it possible to have so many 'log file sync' sessions that the DB not only slows down each individual sync, but is not able to handle them at all?
2. Is it possible to rebuild the user$ table or truncate the sys.user$ table?
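(As an illustration of what I mean by the pile-up, something like this lists the affected sessions and their wait events during the issue, assuming DBA access to v$session and v$sql:)

-- sessions currently executing the user$ update, with their wait event
select s.sid, s.serial#, s.username, s.event
from v$session s
join v$sql q on q.sql_id = s.sql_id and q.child_number = s.sql_child_number
where q.sql_text like 'update user$ set spare6%';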
and Connor said...
1) Ultimately you're going to see this when login frequency is excessive. Log file sync means excessive commits, and those commits are coming from the excessive login frequency. It's definitely something you want to take up with the vendor.
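As a sanity check on that, the logon rate is easy to measure. A minimal sketch: sample this a few minutes apart and take the difference, or compare AWR snapshots:

-- total logons since instance startup; the delta between two samples is the logon rate
select name, value from v$sysstat where name = 'logons cumulative';

Each successful login updates user$.spare6 (the last successful login time) and commits, which is exactly where the log file sync waits are coming from.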
If it is not possible to use "_disable_last_successful_login_time" (which strikes me as odd, since Siebel existed long before this functionality arrived), perhaps take a look at "_granularity_last_successful_login_time", which controls the frequency of the updates. (Obviously, do this with guidance and endorsement from Oracle Support.)
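A minimal sketch of both steps, assuming you are connected as SYS (the x$ views are SYS-only) and have that Oracle Support endorsement in hand. Valid values for the granularity parameter are not publicly documented, so only the disable parameter is shown being set:

-- inspect the current values of the two hidden parameters
select p.ksppinm as name, v.ksppstvl as value
from x$ksppi p, x$ksppcv v
where p.indx = v.indx
and p.ksppinm like '%last_successful_login_time%';

-- switch off the per-login update entirely (boolean; takes effect after a restart)
alter system set "_disable_last_successful_login_time" = true scope=spfile;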
2) Do not, under any circumstances, rebuild or truncate the USER$ table. You will immediately destroy the database.