You Asked
Hi, Tom
I've got a problem creating a temporary table from a large table with 1.5 million records.
The temp table should have about 40,000 records:
Create table temp as
(select loc, team, year,
sum(a) sales1,
sum(b) sales2,
....
sum(z) sales12,
sum(a) - sum(aa) - sum(aaa) margin1,
.....
sum(z) - sum(zz) - sum(zzz) margin12
from master
group by loc, team, year)
The problem looks like a workspace problem, as the query doesn't even finish in 4 hours, and that's all the spare time I have overnight to do housekeeping before the applications restart.
Is the problem the 24 sum()s, or am I missing something?
Thanks
John
and Tom said...
Could be indicative of being ultra-conservative with your sort_area_size. Here is an example on a modest Solaris box with two CPUs:
Oct 24 15:37:33 aria-dev unix: cpu1: SUNW,UltraSPARC-II (upaid 1 impl 0x11 ver 0xa0 clock 400 MHz)
Oct 24 15:37:33 aria-dev unix: cpu3: SUNW,UltraSPARC-II (upaid 3 impl 0x11 ver 0xa0 clock 400 MHz)
and 2 gig of RAM (quite a few years old, definitely not state of the art, with about 6 instances of Oracle running on it -- nothing fancy with the disks, just "plain old disks").
So, I created a table T:
drop table t;
create table t ( loc varchar2(10), team varchar2(10), year int,
a int, b int, c int, d int, e int, f int, g int, h int, i int, j int, k int, l int, m int,
n int, o int, p int, q int, r int, s int, t int, u int, v int, w int, x int, y int, z int );
begin
for l_loc in 1..200
loop
for l_team in 1..40
loop
for l_year in 1998..2002
loop
insert into t values ( 'loc' || l_loc, 'team' || l_team, l_year,
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26);
end loop;
end loop;
end loop;
end;
/
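-- each /*+ append */ insert below is a direct-path load that doubles the table;
-- the commit after each one is required before t can be queried (or appended to) again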
insert /*+ append */ into t select * from t;
commit;
insert /*+ append */ into t select * from t;
commit;
insert /*+ append */ into t select * from t;
commit;
insert /*+ append */ into t select * from t;
commit;
insert /*+ append */ into t select * from t;
commit;
select count(*) from t;
1,280,000 records...
and then:
ops$tkyte@ORA806.WORLD> set timing on
ops$tkyte@ORA806.WORLD> alter session set sort_area_size = 104857600
2 /
Session altered.
Elapsed: 00:00:00.00
ops$tkyte@ORA806.WORLD> create table t2
2 as
3 select loc, team, year,
4 sum(a) sales1, sum(b) sales2, sum(c) sales3,
5 sum(d) sales4, sum(e) sales5, sum(f) sales6,
6 sum(g) sales7, sum(h) sales8, sum(i) sales9,
7 sum(j) sales10, sum(k) sales11, sum(l) sales12,
8 sum(m) sales13,
9 sum(q)-sum(r)-sum(q) margin1,
10 sum(a)-sum(f)-sum(w) margin2,
11 sum(k)-sum(d)-sum(e) margin3,
12 sum(c)-sum(s)-sum(r) margin4,
13 sum(e)-sum(z)-sum(t) margin5,
14 sum(j)-sum(y)-sum(y) margin6,
15 sum(l)-sum(x)-sum(u) margin7,
16 sum(o)-sum(w)-sum(i) margin8,
17 sum(n)-sum(v)-sum(o) margin9,
18 sum(j)-sum(u)-sum(p) margin10,
19 sum(g)-sum(t)-sum(g) margin11,
20 sum(n)-sum(g)-sum(h) margin12,
21 sum(w)-sum(f)-sum(j) margin13
22 from t
23 group by loc, team, year
24 /
Table created.
Elapsed: 00:02:28.50
ops$tkyte@ORA806.WORLD>
That took about 2 1/2 minutes. Normally, the sort area size is 102,400 bytes (100k). For this test, I used a temp tablespace with 1m extents (which never actually got used!) and a 100m sort area size.
Now, trying the same thing with a 10m sort area size is very different. I let it run for 1/2 hour -- it took up about 1.4 gig of temp before I stopped it (got bored, proved my point)...
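If you want to see how much temp a running sort is chewing up while it works, something along these lines should do it (a rough sketch -- assuming you can query the V$ views; the exact v$sort_usage columns can vary a bit between releases):
-- current sort area setting for the session
show parameter sort_area_size
-- temp space being used by active sorts right now
select tablespace, extents, blocks
  from v$sort_usage;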
So, my suggestion: increase sort_area_size in your batch session to some large number to improve the performance of your very large sort. The smaller the sort area size -- the smaller the pieces swapped out to temp -- the more temp you need -- the longer it will take (lots of extra I/O and all).
People tend to way undersize this and never change it at the session level for batch jobs. It can make a TREMENDOUS difference though!
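For your overnight job, a sketch of what that might look like (104857600 is just the 100m value from my test above -- size it to whatever your machine can actually spare; the elided columns are as in your original query):
alter session set sort_area_size = 104857600;
create table temp
as
select loc, team, year,
       sum(a) sales1,
       sum(b) sales2,
       -- ... remaining sums as in the original query ...
       sum(z) sales12,
       sum(a) - sum(aa) - sum(aaa) margin1,
       -- ... remaining margins as in the original query ...
       sum(z) - sum(zz) - sum(zzz) margin12
  from master
 group by loc, team, year;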