Suppose I have a table called account_detail holding customer account details, and this table contains 100,000,000 rows. Suppose we fire a SELECT query on this table while, at the same time, staff may (or may not) be firing INSERT, UPDATE, or DELETE statements against it.
So my questions are:
1) Can this degrade the performance of the database, and of the table as well?
2) How can we keep an eye on those events, or on the relevant session-level information?
3) How can we get maximum performance while DML operations are firing?
Regarding the first question: when multiple clients access or update the same data, I see the "enq: TM" and "enq: TX contention" wait events in the v$session view. I want to know how to get maximum performance so that the clients will be HAPPY.
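I observe those waits with a query along these lines (just a sketch against the standard v$session view; the exact event names vary slightly by version):

```sql
-- Sessions currently waiting on TM/TX enqueues, and who is blocking them
SELECT sid, serial#, username, event, blocking_session, seconds_in_wait
FROM   v$session
WHERE  event IN ('enq: TM - contention',
                 'enq: TX - row lock contention');
```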
The things that impact performance are simple:
1- doing too much, or
2- not being allowed to do work
With 1, if your server can support 50 people running busy queries, then once you get to 51 people, performance will degrade by a small margin for each of them. Typically the further over capacity you go, the faster performance degrades, ie, it is not a linear slowdown but an exponential one. A key metric is often the average active sessions versus the core count on your server. There is a good white paper on that here:
http://www.oracle.com/technetwork/cn/database/diag-techniques-presentation-ow07-128491.pdf

With 2, if you cannot do work, then the user experience is the same, ie, things seem slow. "Cannot work" means you are trying to access a resource and you are blocked. The obvious example here is that you want to update a row that someone else has locked. But resource competition can also exist for queries... if you (and many others) are hammering away at a common set of blocks in memory, then you will see contention for that memory. You start seeing contention on the latches that protect that memory, or on the buffers themselves.
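The row-lock case, and the "enq: TX - row lock contention" wait the question mentions, can be reproduced with two sessions (the column and value here are purely illustrative):

```sql
-- Session 1: locks the row and does not commit
UPDATE account_detail SET status = 'HOLD' WHERE account_id = 42;

-- Session 2: now hangs, waiting on "enq: TX - row lock contention"
UPDATE account_detail SET status = 'OPEN' WHERE account_id = 42;

-- Session 1: COMMIT (or ROLLBACK) releases the lock; session 2 proceeds
COMMIT;
```

Keeping transactions short and committing promptly is the main defence here. For "enq: TM" waits on DML, a common cause worth checking is unindexed foreign key columns on child tables.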
In all of these cases, a good place to start is the AWR and ASH reports.
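From SQL*Plus, the standard scripts generate those reports interactively (the paths assume a default installation, and AWR/ASH require the Diagnostics Pack licence):

```sql
-- AWR report for a chosen snapshot range
@?/rdbms/admin/awrrpt.sql

-- ASH report for a chosen time window
@?/rdbms/admin/ashrpt.sql
```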