Question and Answer

Tom Kyte

Thanks for the question, Babloo.

Asked: July 15, 2006 - 3:20 pm UTC

Answered by: Tom Kyte - Last updated: September 07, 2009 - 2:41 am UTC

Category: Database - Version: 9.2.0

Viewed 1000+ times

You Asked

Hi Tom,
I was reading your book and found the section about optimistic and pessimistic locking, and got confused.
On one hand I read that Oracle handles locking automatically, and on the other we have to figure out which type of locking we want. Is that true?
Can you put both statements together?
The stuff that you explain about lost updates etc. completely makes sense. It is just that in your example in the book about optimistic and pessimistic locking, we are deciding when to lock, and locking the records by putting a FOR UPDATE clause in the SELECT statement. I am just confused about why we say Oracle locking is performed automatically and requires no user action (it is in the Concepts manual).

and we said...

Oracle automatically locks data when you modify it.

But, that does NOT prevent bad things from happening in a multi-user scenario.


Most applications

a) read out some data
b) let user look at it
c) update it

IF in between A and C - someone else does the same (to the same data), you can see how we get the "classic" database issue of "lost update".

a) I read out my address. It is "456 Main Street"
a) YOU read out my address. It is "456 Main Street"
b) I look at it and decide it is wrong (should be 123 Main Street)
b) YOU look at it and decide it is wrong (should be 456 Main St.)
c) I update (and hence automagically lock) the row
c) after I commit YOU update it


We end up with "456 Main St."

Your job was to fix the spelling of street names and use correct abbreviations.
My job was to review my address and correct it.

We both believe we have completed our task. But we haven't - you undid my task.


No amount of 'automatic locking' can fix this application bug. (it is an APPLICATION bug - 100%).

We may employ either

1) pessimistic locking. We assume someone else might update this row and will lock it "early" (pessimistically) in order to prevent them from even TRYING. Needs a stateful environment: no connection pools, must maintain a transaction over multiple web pages. Worked GREAT in client/server, not so much anymore.

2) optimistic locking. We assume no one else would probably be interested in our row, OR we cannot maintain a transaction (connection pool, web-based application). Here we must verify that the data in the database hasn't changed since we read it out. So, we RE-READ and LOCK the data, compare it to what we read out, and if it hasn't changed - we can update it.
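Both patterns can be sketched in SQL. This is a minimal sketch against a hypothetical ADDRESSES(id, street) table - the table, columns and values are illustrative only, not from the book example:

```sql
-- Pessimistic: lock the row at read time, before the user edits it.
-- NOWAIT makes us fail immediately rather than queue behind another session.
select street
  from addresses
 where id = 1
   for update nowait;

-- ... user edits ...
update addresses set street = '123 Main Street' where id = 1;
commit;

-- Optimistic: read without locking, then at update time verify the row
-- is still exactly what we read out.
select street from addresses where id = 1;    -- remember '456 Main Street'

-- ... user edits, possibly much later, on another pooled connection ...
update addresses
   set street = '123 Main Street'
 where id = 1
   and street = '456 Main Street';            -- old value must still match
-- zero rows updated would mean someone changed the row in between:
-- re-read, re-display, and let the user decide.
commit;
```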



The statement in the concepts guide about "locking is performed automatically" is technically accurate but somewhat "optimistic" :)


You still have to design your applications to work correctly in a multi-user environment.

Reviews


Question about Locking.

July 28, 2006 - 5:08 pm UTC

Reviewer: Dihan Cheng from USA

Hello Tom, could you please help me on the following.

Say we have a table T with just 2 records
ID NAME
1 'TOM'
2 'JOHN'

we open two sessions A and B under the same schema which can access table T and do the following.

A>update T set name = 'TED' where id = 1;
B>update T set name = 'BOB' where id = 2;

We know if we update the same record, session B will be blocked.
My question is: if we update different records like the above case, is there any chance session B will also be blocked (say, if these two records are in the same block)?

Actually, this question occurred to me when I was reading Expert One On One Oracle, p. 149.

"The interesting thing is that the data may appear locked when we get to it, even if it is not ... ..."

I think the reason I am confused about the above problem is I still don't fully understand the mechanism Oracle uses to determine whether a transaction is alive or not.

warm regards,

Dihan

Tom Kyte

Followup  

July 28, 2006 - 9:05 pm UTC

I cannot find that quote on my page 149 - any more surrounding text to put it into context here?

In the example you gave, there should be no locking or blocking going on -


assuming that NAME is not the primary key and this isn't a parent table with some child table having an unindexed foreign key to it...
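The unindexed foreign key caveat deserves a sketch of its own (hypothetical PARENT/CHILD tables, names illustrative): when the foreign key column has no index, a delete of a parent row (or an update of the parent key) takes a full table lock on the child table, which can block sessions touching entirely different rows.

```sql
create table parent ( id number primary key );
create table child  ( id        number primary key,
                      parent_id number references parent );

-- Session A: with no index on child(parent_id), this takes a full
-- table lock on CHILD for the duration of the statement, blocking
-- other sessions' modifications to CHILD.
delete from parent where id = 1;

-- The conventional fix - index the foreign key column:
create index child_parent_idx on child(parent_id);
```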

Question about Locking

July 30, 2006 - 11:55 pm UTC

Reviewer: Dihan Cheng from USA

Hello Tom,

The quote is the first sentence on P149.

"locked already). The interesting thing is that the data may appear locked when we get to it, even if it is not. When we lock rows of data in Oracle a transaction ID is associated with the block containing the data and when the lock is released, that transaction ID is left behind"

I want to know in what circumstances,

"The interesting thing is that the data may appear locked when we get to it, even if it is not." will happen. Do you mean INITRANS, MAXTRANS here?

warm regards,

Dihan

Tom Kyte

Followup  

July 31, 2006 - 7:45 am UTC

Funny - my page 149 is about distributed transactions (you likely have a translated version). Anyway, this is block cleanout. See Chapter 5; there is a section on it there.

The row *appears* locked, but is not, because the block was not cleaned out. So, we clean it out and get the row for ourselves.

It is not initrans, maxtrans - it is because we didn't get a chance to clean out the block upon commit. Explained in a later section.

Question about Locking

July 31, 2006 - 9:31 am UTC

Reviewer: Dihan from USA

Thanks Tom, it's most useful for me.

locks

May 22, 2008 - 12:11 am UTC

Reviewer: A reader

Tom:

Would you implement any kind of locking for this situation?

Let us say you have an ORDERS table. At the end of each month you have a transaction that reviews all the customer orders placed for the current month, validates the quantities, and updates them based on some business rule formula.

This process is started by a web URL that uses mod_plsql. It is a stored procedure. It gets run by a supervisor (one account) of the application. However, many people can have that account login/password and run it too.

The application sets a lock flag column to "Y" in the orders table to prevent customers from doing any updates.

Now the stored procedure sort of does this:

FOR x IN (SELECT * FROM orders WHERE order_date = 'MAY')
LOOP
  -- code to figure out if order meets business rules

  -- if it does not, do this
  UPDATE orders
     SET ord_qty = revised_qty
   WHERE order_id = v_order_id
     AND cust_id = v_cust_id;
END LOOP;
COMMIT;
END;


Would you use SELECT FOR UPDATE or impose an optimistic locking mechanism? I just do not see any need for it, but you may have a better opinion on possible issues.

thanks
Tom Kyte

Followup  

May 22, 2008 - 7:05 am UTC

... The application places set a lock flag column "Y" in the orders table to
prevent customers from doing any updates.

....

what an inefficient way to implement "lock table"

if you are locking the table to prevent modifications - why not just lock the table?
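If the goal really were "no one may modify ORDERS while the batch runs," the built-in statement is a one-liner - a sketch, using the ORDERS table from the question (the lock is released automatically at commit or rollback):

```sql
lock table orders in exclusive mode;
-- or, to fail immediately instead of queueing behind someone:
lock table orders in exclusive mode nowait;
```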

locks

May 22, 2008 - 7:25 am UTC

Reviewer: A reader

Tom:

The flag in reality is not locking the table. It is locking the user from doing edits - only the user's order data is locked from further edits.

For example, you enter an order on the web. The application saves it and allows you a week to edit it. After a week, the lock/edit flag is set to "Y" and you won't be able to do any edits afterward.

But in the description of the monthly process above, do you see any need for any DB locking mechanisms while running the stored procedure?


Tom Kyte

Followup  

May 23, 2008 - 7:41 am UTC

if all of your other logic says "update only if flag = 'N'", then there would be no need to pre-lock them - but you don't seem to say "where flag = 'Y'", so it appears you could be getting records that are updatable, but I cannot tell.

lock

May 23, 2008 - 10:05 am UTC

Reviewer: A reader

Tom:

I do. I just did not want to bug you with all the code.

Great answer. However, does this mean that a column lock can replace a SELECT FOR UPDATE statement?

I assume if I did not have that column I would write the stored procedure as:

FOR x IN (SELECT ... FROM t FOR UPDATE)
LOOP
  -- then do the update
END LOOP;

correct?

This can still be done using mod_plsql, and is different from optimistically locking records to prevent lost updates. Right?
Tom Kyte

Followup  

May 23, 2008 - 6:08 pm UTC

... does this mean that a column lock can replace a "Select
for update" statement.
...

well, there is no such thing as a "column lock"

what you have is some attribute
that attribute says "only this process may update me and I trust all other processes to obey this rule"


and yes, you can use the select for update to do pessimistic locking (and it would be appropriate to do so)
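In a stored procedure, the pessimistic version is typically an explicit cursor with FOR UPDATE plus WHERE CURRENT OF. A sketch against the ORDERS table from the question - the business-rule test (qty > 100) is a made-up placeholder:

```sql
declare
    cursor c is
        select order_id, ord_qty
          from orders
         where order_date = 'MAY'
           for update;          -- all qualifying rows are locked when the cursor opens
begin
    for x in c
    loop
        if x.ord_qty > 100      -- placeholder business rule
        then
            update orders
               set ord_qty = 100
             where current of c;
        end if;
    end loop;
    commit;                     -- releases all of the row locks at once
end;
/
```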

lock

May 23, 2008 - 6:28 pm UTC

Reviewer: A reader

Tom:

Yes, I will change it to SELECT FOR UPDATE.

But just to clarify: you can use that because it is running as a stored procedure in the DB.

When a user calls up a row or bunch of rows on a web screen to update, you CANNOT use PESSIMISTIC and have to use OPTIMISTIC locking because there is no session state. Is this correct?
Tom Kyte

Followup  

May 23, 2008 - 6:45 pm UTC

correct.
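A common way to implement the optimistic side in a stateless web application is a version (or timestamp) column - a sketch, where the VERSION_NO column is an assumption added for illustration, not something from the question:

```sql
-- Page 1: read the row and ship the version number to the browser.
select order_id, ord_qty, version_no
  from orders
 where order_id = :id;

-- Page 2 (possibly a different pooled connection): update only if the
-- version we read is still current, and bump it in the same statement.
update orders
   set ord_qty    = :new_qty,
       version_no = version_no + 1
 where order_id   = :id
   and version_no = :old_version;

-- Zero rows updated means someone else changed the row in between:
-- re-read, re-display, and let the user decide what to do.
```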

Can a SELECT statement lock another session?

June 06, 2008 - 3:32 pm UTC

Reviewer: yoav ben moha

Hi,
Version 10.2.0.3.
In our PeopleSoft CRM application, users complained today that they are waiting too much time to get a response back.
I checked locks and found that all the blocking sessions
were running the same SQL SELECT statement, and all the blocked sessions were waiting on the same INSERT statement:

BLOCKER   SID  'ISBLOCKING'  BLOCKEE   SID
-------  ----  ------------  -------  ----
PSSYS     310  is blocking   PSSYS      55
PSSYS     133  is blocking   PSSYS      67
PSSYS     130  is blocking   PSSYS     139
PSSYS     213  is blocking   PSSYS     163
PSSYS     212  is blocking   PSSYS     221
PSSYS     150  is blocking   PSSYS     285
PSSYS     110  is blocking   PSSYS     286
PSSYS      96  is blocking   PSSYS     336
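(For reference, on 10g blocker/blockee pairs like these can be pulled straight from V$SESSION, which has a BLOCKING_SESSION column - a sketch:)

```sql
select blocker.username blocker_user, blocker.sid blocker_sid,
       blockee.username blockee_user, blockee.sid blockee_sid,
       blockee.event
  from v$session blockee, v$session blocker
 where blockee.blocking_session = blocker.sid;
```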

The SELECT statement:
---------------------
SELECT intr.setid, intr.inst_prod_id
FROM nap_internet intr, ps_rf_inst_prod i, ps_prod_item p
WHERE intr.setid = i.setid
AND intr.inst_prod_id = i.inst_prod_id
AND i.setid = p.setid
AND i.product_id = p.product_id
AND (nap_user_id2 = :1 OR nap_user_id3 = :2 OR nap_lookup_prod_id = :3)
AND p.nap_svc_type IN ('21', '6')
AND p.product_type = 'SNGL';

The INSERT statement:
-------------------
INSERT INTO NAP_LDAP_PROCS (INST_PROD_ID, NAP_PROCESS_NAME, NAP_PARAM_VALUE)
VALUES (:B2 , 'NapCheckRtrv', :B1 )

As you can see, the SELECT statement does not contain the FOR UPDATE clause.
1. Could you please explain why sessions were being blocked?

Tom, please note that in order to solve the problem I used explain plan, and found that one of the tables was being accessed via a FULL TABLE SCAN.
After restoring its statistics it now performs: TABLE ACCESS BY INDEX ROWID.
The statement is running much faster and there aren't any locks.

Thanks.

Tom Kyte

Followup  

June 09, 2008 - 10:34 am UTC

you show that they are all waiting on different sessions - so, how can you say "they were waiting on the same insert" here.


Can a select lock?

June 09, 2008 - 3:02 pm UTC

Reviewer: yoav ben moha

Hi Tom,
All the blocker SIDs (310, 133, 130 etc.) ran the same SELECT statement.
All the blocked SIDs ran the same INSERT statement.
For example: SID 310 ran the SELECT statement, and SID 55 had to wait until SID 310 finished.
The same happened for all the other sessions.
Thanks
Tom Kyte

Followup  

June 09, 2008 - 3:47 pm UTC

nope, something else was happening there - a select would not be blocked by an insert, nor block one (well, there is a strange case that can happen with a distributed transaction, but you don't mention any of that)

row lock slowing down data process or hanging

June 10, 2008 - 1:41 am UTC

Reviewer: J from CA

Hi Tom,

A row lock in one of my systems is slowing down or hanging the application. See the top 5 timed events below.

The 3rd-party vendor application starts to look for a datafile, and does intensive row-based update/select/delete (170k records). The problem query is doing an update on one small table (about 30 records, no index); when it holds a lock on the table in exclusive mode (the holding process itself is inactive), the main process with the same SQL statement (bind variables are used) hangs. All AWR/ADDM reports indicate it.

This application works fine with a 9i database; it also works fine with another 10g database in dedicated server mode with a traditional SGA memory setting, while the problem database is set up in shared server mode with automatic SGA.

When the application cut the datafile for processing in half (85k records), it went through fine; however, when it got back to full volume, the lock came back. I have AWR/ADDM reports for both scenarios.

Questions for you:

1. Will automatic tuning of the SGA cause any unintended lock issue, where an inactive session doesn't release its lock?

2. Will PGA_AGGREGATE_TARGET cause any lock-not-released issue? (The problem DB has a much smaller setting of 140M, though v$pga_target_advice didn't suggest any bigger number, while the working 10g DB is set at 1000M.)

Do you see any immediate need to change the SGA or PGA from the query results below?

3. Will shared server mode cause any inconsistent DB connectivity which causes row locking?

4. How can we prevent this locking issue? We set undo to 2GB, and requested more frequent commits on the application side (currently it is set at one per 1000 records).

I would really appreciate it if you could shed any light on this. The application team blames the DB settings because we do have one 10g DB that works fine, even though we have shown the root cause is the locking session from the application ID.

Thanks!

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                    Avg wait   %Total
Event                             Waits   Time (s)      (ms)  Call Time  Wait Class
------------------------------  -------  ---------  --------  ---------  -----------
enq: TX - row lock contention       716      2,074      2897       55.6  Application
CPU time                                     1,546                 41.5
log file parallel write         176,818        271         2        7.3  System I/O
log file sync                   166,570        259         2        7.0  Commit
db file parallel write           36,102         78         2        2.1  System I/O
--
From v$sga_target_advice:
SGA_SIZE SGA_SIZE_FACTOR ESTD_DB_TIME ESTD_PHYSICAL_READS
452 0.25 5383 186747
904 0.5 3656 108343
1356 0.75 3650 97989
1808 1 3649 92863
2260 1.25 3648 92863
2712 1.5 3648 92863
3164 1.75 3648 92863
3616 2 3648 92863

From v$pga_target_advice
:
TARGET_PGA_MB PGA_TARGET_FACTOR BYTES_PROCESSED ESTD_EXTRA_BYTES_RW ESTD_PGA_CACHE_HIT_PERCENTAGE
18 0.125 789377024 92450816 90
35 0.25 789377024 92450816 90
70 0.5 789377024 0 100
105 0.75 789377024 0 100
140 1 789377024 0 100
168 1.2 789377024 0 100
196 1.4 789377024 0 100
224 1.6 789377024 0 100
252 1.8 789377024 0 100
280 2 789377024 0 100
420 3 789377024 0 100
560 4 789377024 0 100
840 6 789377024 0 100
1120 8 789377024 0 100

from ADDM:
ACTION: Significant row contention was detected in the TABLE "STATUS_CALL_SELECTION_LISTS" with object id 60383. Trace the cause of row contention in the application logic using the given blocked SQL.
RELEVANT OBJECT: database object with id 60383
RATIONALE: The SQL statement with SQL_ID "7n51sfkvamupw" was blocked on row locks.
RELEVANT OBJECT: SQL statement with SQL_ID 7n51sfkvamupw
update STATUS_CALL_SELECTION_LISTS set IS_LOCKED = 1, UTC_HOUR = :p1,TIMESTAMP = SYSDATE where CSL_ID = :p2
RATIONALE: Session with ID "207", User ID "83", Program "perl@usvh1eudv07 (TNS V1-V3)" and Module "perl@usvh1eudv07 (TNSV1-V3)" was the blocking session responsible for 100% of this recommendation's benefit.


Tom Kyte

Followup  

June 10, 2008 - 7:32 am UTC

....
This application is working fine with 9i database; also works fine with another
10g database in dedicated server mode and traditional SGA memory setting, while
the problem database is set as shared server mode with auto SGA.
........

Ok, based on that - I will make a deduction...

if it works on dedicated server - whereby every connection has a dedicated process, but it "takes a lot longer on shared server, whereby a small pool of processes is used by a lot of connections", then you did it yourself.

It has nothing to do with memory.

You would NEVER fix a concurrency issue by just "we'll increase undo and commit more". If you could do that, it means you can affect the code - and you would fix the transactional logic, NOT BREAK IT by committing when the transaction isn't quite done. That would be the single worst approach you could even consider.

It has everything to do with shared server.


Say you have 100 connections.
And a pool of 5 shared servers.

I come in and do the select for update. I return from the database - there are five shared servers open now - free for use.

You come in and try to lock some data I have locked. You block. But - you block in the shared server process - you hold it. There are now four shared servers open.

Bob, Mary and Alice come in and try to lock some data I have locked. They block. There is now ONE shared server open.


There are 96 of us left trying to do stuff - and one shared server process. I want to release the lock (which would in turn release you and Bob and Mary and Alice). But - there are 95 people in front of me in the queue to the shared server. They are all quick requests - about 0.5 seconds each... So, 45 seconds later, I finally get in there and commit - and we are OK again.

And then it happens again
And again
And so on

And one time, instead of just you, Bob, Mary and Alice - George also tries to lock some data - now we have five out of five shared servers locked up. You are really in trouble now.


I believe you want to immediately "stop using" shared server - looking at the way your application interacts with the database - the probability of an artificial deadlock is very high.

<quote src=Expert Oracle database architecture>

Dedicated Server vs. Shared Server

Before we continue to examine the rest of the processes, let's discuss why there are two connection modes and when one might be more appropriate than the other.

When to Use Dedicated Server

As noted previously, in dedicated server mode there is a one-to-one mapping between client connection and server process. This is by far the most common method of connection to the Oracle database for all SQL-based applications. It is the simplest to set up and provides the easiest way to establish connections. It requires little to no configuration.

Since there is a one-to-one mapping, you do not have to be concerned that a long-running transaction will block other transactions. Those other transactions will simply proceed via their own dedicated processes. Therefore, it is the only mode you should consider using in a non-OLTP environment where you may have long-running transactions. Dedicated server is the recommended configuration for Oracle, and it scales rather nicely. As long as your server has sufficient hardware (CPU and RAM) to service the number of dedicated server processes your system needs, dedicated server may be used for thousands of concurrent connections.

Certain operations must be done in a dedicated server mode, such as database startup and shutdown, so every database will have either both or just a dedicated server set up.

When to Use Shared Server


Shared server setup and configuration, while not difficult, involves an extra step beyond dedicated server setup. The main difference between the two is not, however, in their setup; it is in their mode of operation. With dedicated server, there is a one-to-one mapping between client connections and server processes. With shared server, there is a many-to-one relationship: many clients to a shared server.

As its name implies, shared server is a shared resource, whereas a dedicated server is not. When using a shared resource, you must be careful to not monopolize it for long periods of time. As you saw previously, use of a simple DBMS_LOCK.SLEEP(20) in one session would monopolize a shared server process for 20 seconds. Monopolization of these shared server resources can lead to a system that appears to hang.

Figure 5-2 depicts two shared servers. If I have three clients, and all of them attempt to run a 45-second process more or less at the same time, two of them will get their response in 45 seconds and the third will get its response in 90 seconds. This is rule number one for shared server: make sure your transactions are short in duration. They can be frequent, but they should be short (as characterized by OLTP systems). If they are not short, you will get what appears to be a total system slowdown due to shared resources being monopolized by a few processes. In extreme cases, if all of the shared servers are busy, the system will appear to hang for all users except the lucky few who are monopolizing the shared servers.

Another interesting situation that you may observe when using shared server is that of an artificial deadlock. With shared server, a number of server processes are being "shared" by a potentially large community of users. Consider a situation where you have five shared servers and one hundred user sessions established. Now, at most five of those user sessions can be active at any point in time. Suppose one of these user sessions updates a row and does not commit. While that user sits there and ponders his or her modification, five other user sessions try to lock that same row. They will, of course, become blocked and will patiently wait for that row to become available. Now, the user session that holds the lock on this row attempts to commit its transaction (hence releasing the lock on the row). That user session will find that all of the shared servers are being monopolized by the five waiting sessions. We have an artificial deadlock situation here: the holder of the lock will never get a shared server to permit the commit, unless one of the waiting sessions gives up its shared server. But, unless the waiting sessions are waiting for the lock with a timeout, they will never give up their shared server (you could, of course, have an administrator "kill" their session via a dedicated server to release this logjam).


So, for these reasons, shared server is only appropriate for an OLTP system characterized by short, frequent transactions. In an OLTP system, transactions are executed in milliseconds - nothing ever takes more than a fraction of a second. Shared server is highly inappropriate for a data warehouse. Here, you might execute a query that takes one, two, five, or more minutes. Under shared server, this would be deadly. If you have a system that is 90 percent OLTP and 10 percent "not quite OLTP," then you can mix and match dedicated servers and shared server on the same instance. In this fashion, you can reduce the number of server processes on the machine dramatically for the OLTP users, and make it so that the "not quite OLTP" users do not monopolize their shared servers. In addition, the DBA can use the built-in Resource Manager to further control resource utilization.

Of course, a big reason to use shared server is when you have no choice. Many advanced connection features require the use of shared server. If you want to use Oracle Net connection pooling, you must use shared server. If you want to use database link concentration between databases, then you must use shared server for those connections.

Note If you are already using a connection pooling feature in your application (e.g., you are using the J2EE connection pool), and you have sized your connection pool appropriately, using shared server will only be a performance inhibitor. You already sized your connection pool to cater for the number of concurrent connections that you will get at any point in time - you want each of those connections to be a direct dedicated server connection. Otherwise, you just have a connection pooling feature connecting to yet another connection pooling feature.

Potential Benefits of Shared Server


So, what are the benefits of shared server, bearing in mind that you have to be somewhat careful about the transaction types you let use it? Shared server does three things for us mainly: it reduces the number of operating system processes/threads, it artificially limits the degree of concurrency, and it reduces the memory needed on the system. We'll discuss these points in more detail in the sections that follow.

Reduces the Number of Operating System Processes/Threads

On a system with thousands of users, the operating system may quickly become overwhelmed in trying to manage thousands of processes. In a typical system, only a fraction of the thousands of users are concurrently active at any point in time. For example, I've worked on systems recently with 5,000 concurrent users. At any one point in time, at most 50 were active. This system would work effectively with 50 shared server processes, reducing the number of processes the operating system has to manage by two orders of magnitude (100 times). The operating system can now, to a large degree, avoid context switching.

Artificially Limits the Degree of Concurrency

Speaking as a person who has been involved in lots of benchmarks, the benefits of this are obvious to me. When running benchmarks, people frequently ask to run as many users as possible until the system breaks. One of the outputs of these benchmarks is always a chart that shows the number of concurrent users versus the number of transactions (see Figure 5-3).

Figure 5-3. Concurrent users vs. transactions per second

Initially, as you add concurrent users, the number of transactions increases. At some point, however, adding additional users does not increase the number of transactions you can perform per second - the graph tends to flatten off. The throughput has peaked and now response time starts to increase (you are doing the same number of transactions per second, but the end users are observing slower response times). As you continue adding users, you will find that the throughput will actually start to decline. The concurrent user count before this drop-off is the maximum degree of concurrency you want to allow on the system. Beyond this point, the system becomes flooded and queues begin forming to perform work. Much like a backup at a tollbooth, the system can no longer keep up. Not only does response time rise dramatically at this point, but throughput from the system may fall as well, as the overhead of simply context switching and sharing resources between too many consumers takes additional resources itself. If we limit the maximum concurrency to the point right before this drop, we can sustain maximum throughput and minimize the increase in response time for most users. Shared server allows us to limit the maximum degree of concurrency on our system to this number.

An analogy for this process could be a simple door. The width of the door and the width of people limit the maximum people-per-minute throughput. At low "load," there is no problem; however, as more people approach, some forced waiting occurs (CPU time slice). If a lot of people want to get through the door, we get the fallback effect - there are so many saying "after you" and false starts that the throughput falls. Everybody gets delayed getting through. Using a queue means the throughput increases: some people get through the door almost as fast as if there were no queue, while others (the ones put at the end of the queue) experience the greatest delay and might fret that "this was a bad idea." But when you measure how fast everybody (including the last person) gets through the door, the queued model (shared server) performs better than a free-for-all approach (even with polite people; but conjure up the image of the doors opening when a store has a large sale, with everybody pushing very hard to get through).

Reduces the Memory Needed on the System

This is one of the most highly touted reasons for using shared server: it reduces the amount of required memory. It does, but not as significantly as you might think, especially given the new automatic PGA memory management discussed in Chapter 4, where workareas are allocated to a process, used, and released - and their size varies based on the concurrent workload. So, this was a fact that was truer in older releases of Oracle but is not as meaningful today. Also, remember that when you use shared server, the UGA is located in the SGA. This means that when switching over to shared server, you must be able to accurately determine your expected UGA memory needs and allocate appropriately in the SGA, via the LARGE_POOL_SIZE parameter. So, the SGA requirements for the shared server configuration are typically very large. This memory must typically be preallocated and thus can only be used by the database instance.
Note It is true that with a resizable SGA, you may grow and shrink this memory over time, but for the most part, it will be "owned" by the database instance and will not be usable by other processes.

Contrast this with dedicated server, where anyone can use any memory not allocated to the SGA. So, if the SGA is much larger due to the UGA being located in it, where does the memory savings come from? It comes from having that many fewer PGAs allocated. Each dedicated/shared server has a PGA. This is process information. It is sort areas, hash areas, and other process-related structures. It is this memory need that you are removing from the system by using shared server. If you go from using 5,000 dedicated servers to 100 shared servers, it is the cumulative sizes of the 4,900 PGAs (excluding their UGAs) you no longer need that you are saving with shared server.

Dedicated/Shared Server Wrap-up

Unless your system is overloaded, or you need to use a shared server for a specific feature, a dedicated server will probably serve you best. A dedicated server is simple to set up (in fact, there is no setup!) and makes tuning easier.

Note With shared server connections, a session's trace information (SQL_TRACE=TRUE output) may be spread across many individual trace files, and reconstructing what that session has done is made more difficult.
If you have a very large user community and know that you will be deploying with shared server, I would urge you to develop and test with shared server. It will increase your likelihood of failure if you develop under just a dedicated server and never test on shared server. Stress the system, benchmark it, and make sure that your application is well behaved under shared server. That is, make sure it does not monopolize shared servers for too long. If you find that it does so during development, it is much easier to fix than during deployment. You can use features such as the Advanced Queuing (AQ) to turn a long-running process into an apparently short one, but you have to design that into your application. These sorts of things are best done when you are developing. Also, there have historically been differences between the feature set available to shared server connections versus dedicated server connections. We already discussed the lack of automatic PGA memory management in Oracle 9i, for example, but also in the past things as basic as a hash join between two tables were not available in shared server connections.

</quote>
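To act on the advice above, you can verify which mode existing sessions are actually using and then disable shared server - a sketch (the TNS alias fragment is illustrative):

```sql
-- Which connection mode are sessions actually using?
select server, count(*)
  from v$session
 group by server;     -- DEDICATED, SHARED, or NONE (idle shared-server session)

-- Turn shared server off for the instance:
alter system set shared_servers = 0;

-- Or force a single application to bypass shared server via its TNS alias:
-- (CONNECT_DATA = (SERVICE_NAME = prod)(SERVER = DEDICATED))
```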

Yes! No locks any more

June 10, 2008 - 10:27 pm UTC

Reviewer: J from CA

Tom,
You are brilliant! I suspected the shared server mode but didn't want to take it out without expert input. Now I took it out and, boom, no locked sessions any more.

Appreciate your quick response!

Lock comes back again

June 13, 2008 - 11:39 am UTC

Reviewer: J from Ca

Unfortunately, the lock came back again after we turned off MTS and enabled ODM. Two application processes are spawned at the back end, as on all other servers, and we only observe the lock on this new server. We tried to match all server parameters.

The application reduced the commit rate to 5000k records, while on other servers it is 10k records.

The table being locked is very small.

Is there any specific DB parameter we could adjust to make commits fast?

Thanks!
Tom Kyte

Followup  

June 13, 2008 - 1:15 pm UTC

The shared server would have caused artificial hangs - longer-than-needed waits on enqueues - as the lock holder could not get onto a shared server.

There are no server parameters to twiddle if you have two processes simultaneously trying to update the same data - that should hopefully be somewhat apparent.

The commit is fast - you'll find the commit is instantaneous. You have a process that locks rows and holds them; the other process has to wait for them if it wants to lock the same rows. The only fix for this is a look under the covers at the "design", to see if there isn't a less contentious way to do whatever it is you are trying to do.
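The behavior described here is easy to reproduce in two SQL*Plus sessions. A minimal sketch, assuming a hypothetical table T with numeric ID and a STATUS column:

```sql
-- Session 1: update a row and leave the transaction open
update t set status = 'DONE' where id = 42;   -- row is now locked

-- Session 2: tries to lock the same row and simply waits (enqueue wait)
update t set status = 'SKIP' where id = 42;   -- blocks here...

-- Session 1:
commit;                                        -- instantaneous

-- Session 2's update now completes immediately: the time "lost" was
-- not commit time, it was time spent waiting on the row lock.
```

This is why tuning commit speed cannot help - the waiter is waiting on the holder's lock, not on the commit itself.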

i have a problem regarding locking

June 18, 2008 - 4:51 am UTC

Reviewer: atul kumar from INDIA

Please help me, I have a problem regarding locking.
In our D2K payroll system, only one user can access the table at a time, not two or more. I want to make it multiuser.


How can we make a table multiuser? Please help, it's urgent.
Tom Kyte

Followup  

June 18, 2008 - 1:18 pm UTC

you have got to be kidding me.

You designed the system, you know where your points of serialization are, you know why they are there.

I do not know anything about your design.
I do not know anything about why you introduced a massive point of serialization
I cannot tell you how to remove it without having a few facts

and once you list the facts, you yourself would (should) be able to "figure out what needs to be done"

background process locks

September 06, 2009 - 1:51 am UTC

Reviewer: ali from india

Dear Tom,

Could you please explain how background processes such as SMON, PMON, and LGWR can lock a user session? I am frequently facing locking issues in our system due to background processes, and I don't know how to solve them.

thanks for your service

Thanks
Ali
Tom Kyte

Followup  

September 06, 2009 - 9:56 am UTC

if you don't know how it could, how do you know it is? (because - if you know it is - you know what it is doing???)

share with us YOUR FINDINGS and tell us how you arrived at these findings - and we can help you analyze your findings.

but as it is - this is simply too vague to respond to.
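For gathering findings like these, a common starting point (assuming access to the GV$ views on RAC; on a single instance, V$LOCK works the same way) is a blocker/waiter query such as:

```sql
-- Show every lock someone is waiting on, together with its holder.
-- BLOCK = 1 marks a holder; REQUEST > 0 marks a waiter.
select inst_id, sid, type, id1, id2, lmode, request, block
  from gv$lock
 where (type, id1, id2) in
       (select type, id1, id2 from gv$lock where request > 0)
 order by type, id1, id2, request;
```

Joining the resulting SIDs back to GV$SESSION (on SID and INST_ID) shows which program - foreground or background - holds each lock, which is exactly the kind of evidence needed here.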

background process locks

September 06, 2009 - 2:13 pm UTC

Reviewer: ali from india

Dear Tom

Thanks for your Quick reply

my findings as follows

this is a RAC host ( 04 instances )
version : 10.2.0.4.0
Application : 11.5.10.2


         SESSION    ID1    ID2  TY  INST_ID
         -------  -----  -----  --  -------
Holder:     5491    284      4  TS        4
Waiter:     5470    284      4  TS        3


holder
======

Session Id.............................................: 5491
Serial Num..............................................: 1
Client Process Id on Client Machine ....................: *29598*
Server Process ID ......................................: 29598
Sql_Address ............................................: 00
Sql_hash_value .........................................: 0
Program ...............................................: xxxxx@xxxxxxxxx (SMON)

Waiter
======

Session Id.............................................: 5470
Serial Num..............................................: 5510
User Name ..............................................: APPS
Program ...............................................: xxxxx@xxxxxxxxxx (J001)
Module .................................................: Disco10

Sql : SELECT * FROM EUL5_xxxxxxxxxx;

Could you please help me to understand how SMON can block the above job session?

Thanks in advance
Ali
Tom Kyte

Followup  

September 07, 2009 - 1:55 am UTC

that is a TS - temp segment lock. SMON is responsible for cleaning things up.

what is

SELECT name FROM sys.ts$ where TS# =284

and tell us about your temporary tablespaces - are they "true" temporary?

background process locks

September 07, 2009 - 2:30 am UTC

Reviewer: ali from india

Dear tom

Thanks for your reply

please find the details


SQL> SELECT name FROM sys.ts$ where TS# =284
  2  ;

NAME
------------------------------
TEMP

SQL>


SQL> select TABLESPACE_NAME , CONTENTS , EXTENT_MANAGEMENT from dba_tablespaces where TABLESPACE_NAME='TEMP';

TABLESPACE_NAME                CONTENTS  EXTENT_MAN
------------------------------ --------- ----------
TEMP                           TEMPORARY LOCAL

SQL>

Yes, they are true temp files.

Thanks
Ali

Tom Kyte

Followup  

September 07, 2009 - 2:41 am UTC

Please utilize Support to diagnose this further - they'll likely have you set some events to monitor SMON. There is no straightforward cause of this (i.e., it should not happen, so something is afoot and they'll need more information to diagnose it).