Help analyze statspack report
Schekina Israel, July 28, 2005 - 9:28 am UTC
Hi Tom,
This is my statspack report.
STATSPACK report for
DB Name DB Id Instance Inst Num Release OPS Host
------------ ----------- ------------ -------- ----------- --- ------------
CTIORA 135762511 ctiora 1 8.1.7.4.1 NO CTIORA
Snap Id Snap Time Sessions
------- ------------------ --------
Begin Snap: 2603 28-Jul-05 08:30:03 87
End Snap: 2604 28-Jul-05 09:00:02 87
Elapsed: 29.98 (mins)
Cache Sizes
~~~~~~~~~~~
db_block_buffers: 120000 log_buffer: 1048576
db_block_size: 8192 shared_pool_size: 94371840
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 21,967.27 828.11
Logical reads: 3,904.27 147.18
Block changes: 127.84 4.82
Physical reads: 1,424.44 53.70
Physical writes: 2.66 0.10
User calls: 336.07 12.67
Parses: 14.03 0.53
Hard parses: 1.34 0.05
Sorts: 5.60 0.21
Logons: 0.25 0.01
Executes: 133.24 5.02
Transactions: 26.53
% Blocks changed per Read: 3.27 Recursive Call %: 32.05
Rollback per transaction %: 0.00 Rows per Sort: 7.26
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.37 Redo NoWait %: 100.00
Buffer Hit %: 63.52 In-memory Sort %: 99.98
Library Hit %: 98.64 Soft Parse %: 90.45
Execute to Parse %: 89.47 Latch Hit %: 99.84
Parse CPU to Parse Elapsd %: 90.26 % Non-Parse CPU: 98.92
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 64.91 71.00
% SQL with executions>1: 83.03 50.99
% Memory for SQL w/exec>1: 76.02 47.17
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
db file scattered read 390,010 353,271 70.92
db file sequential read 87,307 61,226 12.29
log file sync 47,963 31,219 6.27
buffer busy waits 44,370 29,637 5.95
log file parallel write 47,422 18,212 3.66
-------------------------------------------------------------
Wait Events for DB: CTIORA Instance: ctiora Snaps: 2603 -2604
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (cs) (ms) /txn
---------------------------- ------------ ---------- ----------- ------ ------
db file scattered read 390,010 0 353,271 9 8.2
db file sequential read 87,307 0 61,226 7 1.8
log file sync 47,963 0 31,219 7 1.0
buffer busy waits 44,370 0 29,637 7 0.9
log file parallel write 47,422 0 18,212 4 1.0
log file sequential read 964 0 2,099 22 0.0
db file parallel write 266 0 954 36 0.0
latch free 2,259 1,951 524 2 0.0
direct path read 771 0 379 5 0.0
direct path write 1,184 0 302 3 0.0
control file parallel write 597 0 163 3 0.0
control file sequential read 103 0 60 6 0.0
log file switch completion 1 0 21 210 0.0
file open 79 0 12 2 0.0
buffer deadlock 1 1 8 80 0.0
file identify 8 0 6 8 0.0
refresh controlfile command 10 0 5 5 0.0
SQL*Net more data to client 147 0 3 0 0.0
log file single write 4 0 2 5 0.0
library cache pin 3 0 2 7 0.0
LGWR wait for redo copy 14 0 1 1 0.0
SQL*Net break/reset to clien 2 0 0 0 0.0
enqueue 1 0 0 0 0.0
SQL*Net message from client 600,763 0 14,879,022 248 12.6
SQL*Net message to client 600,805 0 120 0 12.6
SQL*Net more data from clien 174 0 10 1 0.0
-------------------------------------------------------------
Background Wait Events for DB: CTIORA Instance: ctiora Snaps: 2603 -2604
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (cs) (ms) /txn
---------------------------- ------------ ---------- ----------- ------ ------
log file parallel write 47,422 0 18,212 4 1.0
log file sequential read 964 0 2,099 22 0.0
db file parallel write 266 0 954 36 0.0
control file parallel write 597 0 163 3 0.0
control file sequential read 67 0 47 7 0.0
file open 67 0 12 2 0.0
file identify 8 0 6 8 0.0
direct path read 15 0 4 3 0.0
log file single write 4 0 2 5 0.0
LGWR wait for redo copy 14 0 1 1 0.0
direct path write 15 0 0 0 0.0
rdbms ipc message 131,069 1,645 902,106 69 2.7
smon timer 6 6 184,323 ###### 0.0
pmon timer 587 585 180,078 3068 0.0
-------------------------------------------------------------
July 28, 2005 - 10:29 am UTC
still about 3 times too long.
tell me, what changed. where did the hard parses go away to?
but a statspack and tuning is sort of like reading tea leaves (and an incomplete reading at that). All I can say for sure from this is "you do lots of IO".
It is strange that you wait for log file sync (you are transactional) but you read so much with each transaction (eg: not typical OLTP, definitely not a data warehouse)
Hard to tell what you do, but since you have almost 1gig of buffer cache sitting there, I'd look at the queries and say "are they doing as little work as possible?"
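For example, something along these lines (just a sketch against v$sql -- the top-25 cutoff is arbitrary) would show which statements are responsible for most of the logical IO, so you can ask that question of each of them:

select buffer_gets,
       executions,
       buffer_gets / decode( executions, 0, 1, executions ) gets_per_exec,
       sql_text
  from ( select buffer_gets, executions, sql_text
           from v$sql
          order by buffer_gets desc )
 where rownum <= 25;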
Help analyze statspack report
Schekina Israel, July 28, 2005 - 9:34 am UTC
Hi Tom,
What should I expect to have? What is the acceptable percentage for parses and hard parses?
I did review this question and generated a report for a 30-minute snapshot. Can you please help with this report?
Regards,
Schekina
July 28, 2005 - 10:30 am UTC
hard parses should be virtually zero after the database has been up and running for a while - regardless of system type.
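a quick way to watch this (a sketch -- these are the statistic names as they appear in v$sysstat) is to sample:

select name, value
  from v$sysstat
 where name in ( 'parse count (total)', 'parse count (hard)' );

a couple of minutes apart and look at the deltas. on a system that has been up and running for a while, the hard parse delta should be near zero.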
Help analyze statspack report
Schekina Israel, July 28, 2005 - 11:00 am UTC
I did not give you all the content of the statspack report, so I suspect you do not have the real picture of my database.
The database we are running is not an in-house one. We are using a commercial software package that requires Oracle to be installed; the software creates a tablespace and a schema, and it created the data objects and all the requests against those tables.
For example, these are some of the queries it runs against the application. We have no way to improve them, but if you see something that might be of any help, please let me know and I will find out if the provider of this software can make any changes.
Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
3,246,935 626 5,186.8 46.2 944248090
SELECT code FROM contact WHERE campaign=:h_v1 and status in (0,1
6,15)
463,061 9 51,451.2 6.6 2147691553
update contact set moment=:b0:b1,status=:b2:b3,priority=:b4:b5,
is_special=:b6:b7 where code=:b8
396,107 98,714 4.0 5.6 1555800698
select status into :b0:b1 from contact where code=:b2
124,950 24,990 5.0 1.8 1473299240
update work_call_tps set occurrences=:b0:b1 where ((call_type=:b2 and agent=:b3) and campaign=:b4)
108,350 1,173 92.4 1.5 3992982419
select count(*) into :b0 from user_tables where UPPER(table_name)=UPPER(:b1)
95,076 60 1,584.6 1.4 2365118716
SELECT contact.code,contact.agent,contact.dial_rule,contact.time
zone,contact.is_special,contact.ivacc_user FROM contact,ct_soll_2005 WHERE contact.campaign=101 and contact.status=0 and contact.priority=:h_v1 and contact.moment is null and contact.code=ct_soll_2005.easycode and GROUPE_NO NOT IN (1055,1050)
83,773 845 99.1 1.2 3349990359
UPDATE CAMPAIGN_CODES SET CODE=CODE WHERE (CODE IN (SELECT DISTINCT CAMPAIGN FROM AUX_COUNTERS_CD WHERE TRANS_SN = :b1 )) OR (CODE IN (SELECT DISTINCT CAMPAIGN FROM AUX_COUNTERS_CPT WHERE TRANS_SN = :b1 ))
76,795 45 1,706.6 1.1 1699634860
SELECT contact.code,contact.agent,contact.dial_rule,contact.time
zone,contact.is_special,contact.ivacc_user FROM contact,ct_renouvellement WHERE contact.campaign=44 and contact.status=0 and contact.priority=:h_v1 and contact.moment is null and contact.code=ct_renouvellement.easycode and effectif = 1
74,970 24,990 3.0 1.1 2707836379
Thanks
Schekina
July 28, 2005 - 11:15 am UTC
well, look at the first query:
Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
3,246,935 626 5,186.8 46.2 944248090
SELECT code FROM contact WHERE campaign=:h_v1 and status in (0,1
6,15)
Ok, that seems like a LOT of IO (gets/exec) for such a simple query doesn't it?
how many rows does that return typically?
what is the indexing scheme on that table?
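for example, a quick dictionary query would show the existing indexing (add a table_owner predicate if CONTACT is not a unique table name in your database):

select index_name, column_name, column_position
  from all_ind_columns
 where table_name = 'CONTACT'
 order by index_name, column_position;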
And -- if you are having performance issues with a 3rd party product that you bought and paid for -- why are they not involved in the tuning process? Seems the ones best able to implement any sort of change would be the owners of the application.
What is this?
Flado, July 28, 2005 - 11:35 am UTC
<quote>
update contact set moment=:b0:b1,status=:b2:b3,priority=:b4:b5,
is_special=:b6:b7 where code=:b8
</quote>
I've never seen such bind variables (e.g., :b6:b7) - what exactly does that mean? Concatenation of the two variables? Or is it a single variable named "b6:b7"?
Weird.
Looks like it's a single variable, though:
17:12:27 flado@perf>create table contact (moment varchar2(100), status varchar2(10), priority varchar2(10), is_special varchar2(10), code varchar2(10));
Table created.
Elapsed: 00:00:00.85
17:18:55 flado@perf>explain plan for
17:19:26 2 update contact set moment=:b0:b1,status=:b2:b3,priority=:b4:b5,
17:19:34 3 is_special=:b6:b7 where code=:b8:b9;
Explained.
Elapsed: 00:00:00.01
17:21:55 flado@perf>set lines 70
17:22:02 flado@perf>select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | | | |
| 1 | UPDATE | CONTACT | | | |
|* 2 | TABLE ACCESS FULL | CONTACT | | | |
--------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("CONTACT"."CODE"=:Z)
Note: rule based optimization
July 28, 2005 - 12:14 pm UTC
looks like pro*c indicator variables.
void process()
{
    EXEC SQL BEGIN DECLARE SECTION;
    int   a;
    short a_i;    /* indicator variable for a: set to -1 to bind a NULL */
    int   b;
    EXEC SQL END DECLARE SECTION;

    EXEC SQL WHENEVER SQLERROR DO die();
    exec sql alter session set sql_trace=true;
    /* :a:a_i is the host variable paired with its indicator --
       the precompiler turns the pair into :b0:b1 in the SQL text */
    exec sql update t set a = :a:a_i, b = :b;
}
=====================
PARSING IN CURSOR #3 len=28 dep=0 uid=353 oct=6 lid=353 tim=1096256669177582 hv=3301393248 ad='61b1ae7c'
update t set a=:b0:b1,b=:b2
END OF STMT
Help analyze statspack report
Schekina, July 28, 2005 - 1:31 pm UTC
I really do not have control over the queries executed against the database, so I don't know what to do. The tuning of this database doesn't seem easy. I don't know where to start.
Now it seems like we have Pro*C in our midst!
Thanks
Schekina
July 28, 2005 - 3:36 pm UTC
can you answer the question from above though.
well, look at the first query:
Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
3,246,935 626 5,186.8 46.2 944248090
SELECT code FROM contact WHERE campaign=:h_v1 and status in (0,1
6,15)
Ok, that seems like a LOT of IO (gets/exec) for such a simple query doesn't it?
how many rows does that return typically?
what is the indexing scheme on that table?
for example, if that query returns 5,000 rows per execution, that might be reasonable.
If it returns 5 rows/execution, that is NOT reasonable and the indexing scheme would be all wrong (eg: maybe they have an index on campaign OR they have one on status -- but they should have one on (campaign,status) and you need to add it)
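for example (purely illustrative -- the index name is made up, and you would want to test this on a copy first), that would just be:

create index contact_camp_status_idx on contact( campaign, status );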
Help analyze statspack report
Schekina Israel, July 28, 2005 - 4:51 pm UTC
Concerning this query: SELECT code FROM contact WHERE campaign=:h_v1 and status in (0,16,15)
We discussed that with the client and they never allow us any changes on the table CONTACT. For some values of CAMPAIGN, we can have more than 10,000 rows.
The number of rows will always depend on the value of the campaign column. I got the following result by running this query for the purpose of this question:
SELECT campaign , count(*) FROM easy.contact WHERE status in (0,16,15) group by campaign order by count(*) desc;
CAMPAIGN,COUNT(*)
44,16784
102,16445
113,12281
116,9703
101,8947
95,6056
103,5754
20,4723
109,4002
21,3721
106,3052
110,2339
107,1909
93,898
17,864
14,664
89,603
13,596
37,501
114,484
119,477
104,429
111,418
117,354
98,161
92,137
80,76
73,63
120,30
115,20
7,13
118,10
8,5
53,2
112,2
9,1
10,1
11,1
12,1
This is the script used to create the table:
CREATE TABLE EASY.CONTACT
(
  CODE        NUMBER(10)  NOT NULL,
  CAMPAIGN    NUMBER(10)  NOT NULL,
  STATUS      NUMBER(6)   NOT NULL,
  AGENT       NUMBER(10),
  MOMENT      DATE,
  DIAL_RULE   NUMBER(10),
  PRIORITY    NUMBER(6)   NOT NULL,
  TIMEZONE    NUMBER(10)  NOT NULL,
  IS_SPECIAL  NUMBER(6)   NOT NULL,
  IVACC_USER  NUMBER(10),
  PRIMARY KEY (CODE)
    USING INDEX
    TABLESPACE DBSPACE2
    PCTFREE 10
    INITRANS 2
    MAXTRANS 255
    STORAGE ( INITIAL 64K MINEXTENTS 1 MAXEXTENTS 2147483645
              PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 ),
  FOREIGN KEY (CAMPAIGN)  REFERENCES EASY.CAMPAIGN (CODE),
  FOREIGN KEY (AGENT)     REFERENCES EASY.AGENT (E_USER),
  FOREIGN KEY (DIAL_RULE) REFERENCES EASY.DIAL_RULE (CODE),
  FOREIGN KEY (TIMEZONE)  REFERENCES EASY.TIMEZONE (CODE)
)
TABLESPACE DBSPACE2
PCTUSED 40
PCTFREE 20
INITRANS 1
MAXTRANS 255
STORAGE ( INITIAL 64K MINEXTENTS 1 MAXEXTENTS 2147483645
          PCTINCREASE 0 FREELISTS 2 FREELIST GROUPS 1
          BUFFER_POOL DEFAULT )
LOGGING
NOCACHE
NOPARALLEL;
CREATE INDEX EASY.IXCONTACT2 ON EASY.CONTACT
  (CAMPAIGN, STATUS, PRIORITY, MOMENT, TIMEZONE, DIAL_RULE)
NOLOGGING
TABLESPACE DBSPACE2
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE ( INITIAL 64K MINEXTENTS 1 MAXEXTENTS 2147483645
          PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
          BUFFER_POOL DEFAULT )
NOPARALLEL;
CREATE OR REPLACE TRIGGER EASY.trg_contact_del_a AFTER DELETE ON contact
BEGIN
easy_pkg.prc_ct_after;
END;
/
SHOW ERRORS;
CREATE OR REPLACE TRIGGER EASY.trg_contact_del_b BEFORE DELETE ON contact
BEGIN
easy_pkg.prc_ct_before;
END;
/
SHOW ERRORS;
CREATE OR REPLACE TRIGGER EASY.trg_contact_del_r BEFORE DELETE ON contact
FOR EACH ROW
BEGIN
easy_pkg.prc_ct_delete_row( :old.campaign, :old.moment, :old.dial_rule,
:old.priority, :old.timezone, :old.status );
END;
/
SHOW ERRORS;
CREATE OR REPLACE TRIGGER EASY.trg_contact_ins_a AFTER INSERT ON contact
BEGIN
easy_pkg.prc_ct_after;
END;
/
SHOW ERRORS;
CREATE OR REPLACE TRIGGER EASY.trg_contact_ins_b BEFORE INSERT ON contact
BEGIN
easy_pkg.prc_ct_before;
END;
/
SHOW ERRORS;
CREATE OR REPLACE TRIGGER EASY.trg_contact_ins_r BEFORE INSERT ON contact
FOR EACH ROW
BEGIN
easy_pkg.prc_ct_insert_row( :new.campaign, :new.moment, :new.dial_rule,
:new.priority, :new.timezone, :new.status );
END;
/
SHOW ERRORS;
CREATE OR REPLACE TRIGGER EASY.trg_contact_upd_a AFTER
UPDATE OF campaign, moment, status, dial_rule, priority, timezone
ON contact
BEGIN
easy_pkg.prc_ct_after;
END;
/
SHOW ERRORS;
CREATE OR REPLACE TRIGGER EASY.trg_contact_upd_b BEFORE
UPDATE OF campaign, moment, status, dial_rule, priority, timezone
ON contact
BEGIN
easy_pkg.prc_ct_before;
END;
/
SHOW ERRORS;
CREATE OR REPLACE TRIGGER EASY.trg_contact_upd_r BEFORE
UPDATE OF campaign, moment, status, dial_rule, priority, timezone
ON contact FOR EACH ROW
BEGIN
easy_pkg.prc_ct_update_row(
:old.campaign, :old.moment, :old.dial_rule, :old.priority,
:old.timezone, :old.status,
:new.campaign, :new.moment, :new.dial_rule, :new.priority,
:new.timezone, :new.status );
END;
/
SHOW ERRORS;
GRANT SELECT ON EASY.CONTACT TO LECTURE;
GRANT DELETE, INSERT, SELECT, UPDATE ON EASY.CONTACT TO SSQ;
July 29, 2005 - 7:54 am UTC
not too much we can do. I really do suggest (well, it is really your main option here) that if you are having performance issues you engage the VENDOR of this application. They make a living implementing and tuning it.
Alberto Dell'Era, July 29, 2005 - 5:30 am UTC
> SELECT code FROM contact
> WHERE campaign=:h_v1
> and status in (0,16,15)
I bet that Tom will suggest creating an index on (STATUS, CAMPAIGN, CODE) - it would cut down the consistent gets by a factor of 100 or even 1,000, since the table would not need to be accessed at all.
Note, btw, that these columns are tiny numbers (CAMPAIGN is three digits, STATUS is two, and CODE is probably tiny too) - you'll have a very, very compact index.
Or perhaps (CAMPAIGN, STATUS, CODE) - I would try both and check which one is used, and then drop the other.
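Something like this sketch (index names are just examples, and it assumes an empty PLAN_TABLE -- verify the plan before keeping either index):

create index easy.contact_cs_code_idx on easy.contact( campaign, status, code );

explain plan for
select code from contact where campaign = :h_v1 and status in (0,16,15);

-- plain plan_table query, since dbms_xplan is not available on 8.1.7
select lpad(' ', 2*level) || operation || ' ' || options || ' ' || object_name plan
  from plan_table
 start with id = 0
connect by prior id = parent_id;

If the plan shows only a scan of the new index, with no TABLE ACCESS step, the gets per execution should collapse.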
July 29, 2005 - 8:56 am UTC
yes, having code in the index will cut down on the consistent gets, but that doesn't help if they are "not allowed" to add or change anything...
Schekina, July 29, 2005 - 10:25 am UTC
There is nothing I can do because the tuning of these queries is VENDOR dependent. I talked to them and it seems like this is not an issue for them. Is there no other way to improve the performance of the database? I'm always seeing a buffer hit ratio averaging 93%. Just after startup the hit ratio is 100%, then 99%, then 98%... 4 to 10 days after startup I will reach my average of 93% and stay there for 2 to 4 weeks, then 92% for the next 2 weeks and 91% for the 2 weeks after that. Sometimes I'm so concerned that I shut it down and start it up again. But usually, nobody is complaining.
Thanks
Schekina
I would like to close this discussion.
add index
Jim, July 29, 2005 - 11:30 am UTC
Heck, if it were me I'd log in to another schema and add the index to the table. That way they are less likely to find it. If the vendor isn't very helpful then one needs to solve the problem oneself. We are talking about adding an index, not adding a trigger or changing the schema.
Alberto Dell'Era, July 29, 2005 - 12:21 pm UTC
Drawing from my experience in dealing with application vendors - even the most change-resistant vendors eventually implement customers' requests, if you press them enough AND you provide the solution along with an estimate of the benefit.
In this case, since Schekina's query contributes 46.2% of the overall buffer gets, and the buffer hit ratio is always near 90%, I wouldn't be surprised to see CPU usage drop by 40%+, not to mention the reduction in response time for that particular query (which would go down from X to near zero).
Siebel 7.7
A reader, July 31, 2006 - 10:50 am UTC
Here is a 10-minute statspack report from a Siebel 7.7 app on a 9.2.0.6 database, with CBO stats gathered using dbms_stats.gather_schema_stats('SIEBEL',cascade=>true).
Snap Id Snap Time Sessions Curs/Sess Comment
--------- ------------------ -------- --------- -------------------
Begin Snap: 43 31-Jul-06 10:00:01 229 4.5
End Snap: 44 31-Jul-06 10:10:01 230 4.6
Elapsed: 10.00 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 1,200M Std Block Size: 8K
Shared Pool Size: 1,008M Log Buffer: 2,048K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 7,125.63 5,996.33
Logical reads: 91,091.86 76,655.14
Block changes: 34.51 29.04
Physical reads: 85.76 72.17
Physical writes: 1.95 1.64
User calls: 30.52 25.68
Parses: 8.01 6.74
Hard parses: 0.12 0.10
Sorts: 6.52 5.49
Logons: 0.03 0.03
Executes: 16.12 13.56
Transactions: 1.19
% Blocks changed per Read: 0.04 Recursive Call %: 26.40
Rollback per transaction %: 0.00 Rows per Sort: 27.74
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.98
Buffer Hit %: 99.91 In-memory Sort %: 100.00
Library Hit %: 98.95 Soft Parse %: 98.46
Execute to Parse %: 50.28 Latch Hit %: 96.89
Parse CPU to Parse Elapsd %: 90.08 % Non-Parse CPU: 99.90
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 72.71 73.19
% SQL with executions>1: 89.18 88.77
% Memory for SQL w/exec>1: 78.76 79.33
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 1,120 91.68
db file sequential read 28,427 42 3.46
db file scattered read 2,314 19 1.54
log file sync 717 12 1.01
log file parallel write 1,410 11 .91
-------------------------------------------------------------
Since it is a web-app, I think the 50% execute to parse is acceptable, right?
What about the 90% parse cpu to parse elapsed? What are we to make of that?
Also, the top wait event is "CPU time"? Is that a red herring?
Thanks for any insights.
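(For what it's worth, I believe Execute to Parse is derived as 100 * (1 - parses/executes); plugging in the load profile above gives 100 * (1 - 8.01/16.12), which is about 50.3% and matches the 50.28% reported -- i.e., roughly every other execute still pays for a parse call.)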