thanks
Cedric, March 21, 2005 - 4:22 am UTC
Thanks for your explanation. I will then add 5 times 178266112 bytes to be able to add my new index to TBS ACCT.
I have then more questions but I will open a new thread for that :)
Free space issue
Cedric Sobrido, March 21, 2005 - 5:07 am UTC
In fact, I was wondering how much free space I had left in this TBS ACCT, so I ran the script I found in a previous question
( http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:285415955510 )
and it gave me the following:
Tablespace Name KBytes Used Free Used Largest
---------------- ------------ ------------ ------------ ------ ------------
ACCT 14,336,000 14,234,240 101,760 99.3 65,472
CWMLITE 20,480 9,600 10,880 46.9 10,624
DRSYS 20,480 9,920 10,560 48.4 10,560
EXAMPLE 141,440 140,992 448 99.7 448
INDX 25,600 64 25,536 .3 25,536
ODM 20,480 9,536 10,944 46.6 10,944
SYSTEM 419,840 417,344 2,496 99.4 2,496
TOOLS 10,240 64 10,176 .6 10,176
UNDOTBS1 15,692,800 11,584 15,681,216 .1 4,063,232
USERS 25,600 64 25,536 .3 25,536
XDB 39,040 38,848 192 99.5 192
------------ ------------ ------------
sum 30,752,000 14,872,256 15,879,744
I don't know why SYSTEM and XDB (what is it?) are almost full, but the weirdest thing to me is that we removed 10 million records out of 25 million in the main table in TBS ACCT and we don't see the free space. Is there something to do to really release the space so that Oracle sees it as free?
March 21, 2005 - 10:28 am UTC
system is where the dictionary is, xdb where xml db is installed.
when you delete from a table, the table gets free space - to be used for subsequent inserts. but the space doesn't go back to the "free pool in general"
think about it - you deleted rows in the "middle of the structure". You could use "alter table T move" in order to "compact" it (rebuild it) but you'll need to rebuild the indexes as well after that.
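A minimal sketch of that reclaim sequence (the table and index names here are illustrative, not from the original poster's schema):

```sql
-- rebuild the table to compact it below a lower high-water mark;
-- the old segment's extents go back to free space in the tablespace
alter table acct_main move;

-- a move changes rowids, which invalidates every index on the table,
-- so each index must be rebuilt afterwards
alter index acct_main_pk rebuild;
alter index acct_main_ix1 rebuild;

-- optionally verify the freed space per tablespace
select tablespace_name, sum(bytes)/1024 free_kb
  from dba_free_space
 group by tablespace_name;
```

Note that the move needs enough spare room for old and new copies of the segment to coexist briefly.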
Cannot explain why we got ORA-01652
steve, February 13, 2006 - 4:19 pm UTC
Hi Tom,
I (and a couple of other DBAs) are at a loss to explain to the
customer an error message we received the other day during our
system upgrade. As part of our software upgrade, we needed
to convert a partitioned table into a non-partitioned table.
Simple enough. We executed the following anonymous PL/SQL
block:
spool nmsadm_alter_user_admin.lst
begin
execute immediate 'create table ppw_cust_hist_tmp tablespace ppw_data as
select * from ppw_cust_history';
execute immediate 'drop table ppw_cust_history';
execute immediate 'alter table ppw_cust_hist_tmp rename to
ppw_cust_history';
execute immediate 'alter table ppw_cust_history
add constraint ppw_cust_history_pk
primary key (cust_id ,gen_ymdhms)
using index tablespace ppw_index';
end;
/
Note that the procedure verified beforehand that there was
ample space in the target tablespaces. The procedure raised
the following error:
ORA-01652 Unable to extend TEMP segment tablespace PPW_DATA
But what has us confused is the following. After running
the procedure (which by the way is part of a larger script)
we see that:
o The table PPW_CUST_HISTORY is now non-partitioned
(so step one worked)
o Table ppw_cust_hist_tmp does not exist (so step 3 worked)
o Primary key pps_cust_history_PK exists and has a
creation date of 'now' (so step 4 worked)
Furthermore, the spool file shows:
PL/SQL procedure completed successfully
I could understand it if ppw_cust_history were still partitioned, or if pps_cust_history_PK did not exist. But I cannot understand
which command raised the error when all the new objects seem to be in place.
Any ideas?
February 13, 2006 - 4:29 pm UTC
do you have the line that threw the error?
Sorry, no line
Steve, February 14, 2006 - 7:45 am UTC
Hi Tom,
Unfortunately we do not have the line that threw the error.
This is all we have in the alert log:
ORA-1652: unable to extend temp segment by 64 in tablespace PPW_DATA
And the spool file contains:
PL/SQL command completed successfully
(but I will ask the customer to send the spool file just
to make sure). Furthermore, there are no trace files produced.
I'm a little confused...
I'll try to dig up more information.
Thank you
February 14, 2006 - 8:23 am UTC
how do we know the two are even related then?
sorry, but unless you had an exception block that hides all errors - that block worked and did not raise the error.
I agree
A reader, February 14, 2006 - 8:46 am UTC
I agree with you. The only thing though is nothing else
was running at the time (we were in a maintenance window
performing the software upgrade). And if I'm not mistaken,
the error was sent to the screen where the script was run
(but I'll need to verify this with the support engineer
that was running the script).
It's a strange one...
I'll keep digging.
Thanks for your time on this.
February 14, 2006 - 9:12 am UTC
if that error went to the screen, then either
a) you have code that catches "when others", used dbms_output.put_line to print it out, and then IGNORED the error entirely (continued processing). That I would call a bug in your developed code.
b) it was not raised by this procedure, but via some other process.
In any case - you don't have a spool file?
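The kind of error-hiding block described in (a) would look something like this (a sketch of the anti-pattern, not the customer's actual code; the table name is made up):

```sql
set serveroutput on
begin
  execute immediate 'create table t_tmp tablespace ppw_data as select * from dual';
exception
  when others then
    -- prints the ORA- message to the screen but swallows the exception,
    -- so the block still finishes with "PL/SQL procedure successfully completed"
    dbms_output.put_line(sqlerrm);
end;
/
```

With such a handler, the spool file reports success while the ORA-01652 text still appears on screen, which matches the symptoms exactly.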
Update
steve, February 17, 2006 - 8:15 am UTC
Hi Tom,
Here's a screen log that was sent by the support
organization. It shows all the steps that were
performed (I just tidied it a little for you).
The only thing missing from the log is the verification
that the primary key exists (which it does).
Check if partitioned before:
----------------------------
SQL> select partitioned from dba_tables
where table_name='PPW_CUST_HISTORY';
PAR
---
YES
SQL> exit
Run the script
--------------
$ /opt/NMS/.../alter_ppw_cust_history.sh
ORA-01652: unable to extend temp segment by 64 in tablespace PPW_DATA
ORA-06512: at line 2
ppsdws01 oracle 207> ls -ltr
total 41004
drwx------ 512 Dec 17 2003 nsmail/
-rw-r--r-- 155923 Dec 17 2003 ahr_viewer.pl
.
.
.
-rw-r--r-- 43 Feb 9 01:43 nmsadm_alter_user_admin.lst
Verify the spool file
---------------------
$ more nmsadm_alter_user_admin.lst
PL/SQL procedure successfully completed.
Check the database situation afterwards
---------------------------------------
$ sqlplus system/manager
SQL> select partitioned from dba_tables where
table_name='PPW_CUST_HISTORY';
PAR
---
NO
SQL> exit
Verify the alert log
--------------------
ppsdws01 oracle 210> tail -100 /opt/.../alert_dds.log
.
.
.
.
.
.
Current log# 3 seq# 30131 mem# 0: /oracle3/dds/redo3a.rdo
Thu Feb 9 01:44:49 2006
ARC1: Evaluating archive log 2 thread 1 sequence 30130
ARC1: Beginning to archive log 2 thread 1 sequence 30130
Creating archive destination LOG_ARCHIVE_DEST_1: '/backup/archive/1_30130.arc'
==> ORA-1652: unable to extend temp segment by 64 in tablespace PPW_DATA
ARC1: Completed archiving log 2 thread 1 sequence 30130
Check if the _tmp table exists
------------------------------
ppsdws01 oracle 211> sqlplus dwadm/dwdbpwd
SQL> desc ppw_cust_hist_tmp
ERROR:
ORA-04043: object ppw_cust_hist_tmp does not exist
SQL> desc ppw_cust_history
Name Null? Type
--------------- -------- -----------
CUST_ID NOT NULL NUMBER(10)
MASTER_CUST_ID NOT NULL NUMBER(10)
.
.
.
HISTO_END_DATE DATE
Here is the script
------------------
${ORACLE_HOME}/bin/sqlplus -s ${DB_WH_USER}/${DB_WH_USER_PW}@${DB_WH_SID} <<EOF | nawk '/ORA-/ {print}'
spool nmsadm_alter_user_admin.lst
-- This must be run in the same transaction.
-- If "create table ppw_cust_hist_tmp" fails we don't want to drop ppw_cust_history
begin
execute immediate 'create table ppw_cust_hist_tmp tablespace ppw_data as select * from ppw_cust_history';
execute immediate 'drop table ppw_cust_history';
execute immediate 'alter table ppw_cust_hist_tmp rename to ppw_cust_history';
execute immediate 'alter table ppw_cust_history add constraint ppw_cust_history_pk primary key (cust_id ,gen_ymdhms) using index tablespace ppw_index';
end;
/
spool off
/
EOF
Thank you
Steve
February 17, 2006 - 2:47 pm UTC
suggest you set echo on and spool everything to your file.
don't just grep out the bad, get it all so you can see everything.
heck, it could be something in their login.sql - it fails BEFORE you even execute (for example)
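The suggestion above can be set up like this at the top of the script (file name is just an example):

```sql
set echo on
set feedback on
spool alter_ppw_cust_history.lst
-- run the PL/SQL block here; every statement, its echo, and any error
-- now land in the spool file, instead of only the lines matching ORA-
spool off
```

Capturing everything, rather than piping the output through `nawk '/ORA-/ {print}'` as the original script did, makes it possible to see exactly which statement raised an error.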
Thank you - I'll try
A reader, February 17, 2006 - 3:12 pm UTC
Thank you Tom. I'll try it.
second slash
Darren L, February 17, 2006 - 5:10 pm UTC
/
spool off
/ <--- you ran the script twice..the second one outside of the SPOOL file. it could have run out of space doing the *second* move
February 18, 2006 - 8:13 am UTC
indeed! good eye.
Exactly
A reader, February 24, 2006 - 11:08 am UTC
Haven't checked the site for the last couple of days and got
a reply from Oracle support and was about to post the answer.
But you also nailed it.
Feeling kinda stupid...
Thanks all
ORA-1654: unable to extend index
Alay, May 05, 2006 - 4:40 am UTC
Hi Tom,
I am getting following error.
ORA-1654: unable to extend index TBAADM.IDX_OUT_CLG_PART_TRAN_TABLE by 25600 in tablespace IDX_OCP_TBLSPC
Used space in the IDX_OCP_TBLSPC tablespace is only 74%, so why am I getting this error?
Is there any way (using some query) to check in advance that this type of error may occur, so we can take necessary action?
May 05, 2006 - 6:55 am UTC
run
http://asktom.oracle.com/Misc/free.html
it'll report
a) space free in the tablespace (aggregate)
b) largest free contiguous space (biggest extent that can be allocated)
I will guess:
a) dictionary managed tablespace
b) that has lots of free extents (many entries in dba_free_space)
c) that are not contiguous
d) so the sum of free space is 25% of the tablespace
e) BUT the largest contiguous chunk of free space is not large enough for your next extent of the index
options:
a) add more space to the tablespace
b) alter the index to set its next extent to a size equal to the largest contiguous free chunk - it'll then be able to extend into that space.
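Option (b) could be sketched like this, assuming (hypothetically) that the largest contiguous free chunk turns out to be 10 MB; the index name is the one from the error message above:

```sql
-- find the largest contiguous free extent in the tablespace
select max(bytes)
  from dba_free_space
 where tablespace_name = 'IDX_OCP_TBLSPC';

-- set the index's next extent no larger than that chunk
-- (relevant for dictionary managed tablespaces)
alter index tbaadm.idx_out_clg_part_tran_table
  storage (next 10m);
```
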
Ora-1652
A reader, February 06, 2007 - 10:39 pm UTC
Hi Tom,
I get this error on create index statement.
ORA-01652: unable to extend temp segment by 128 in tablespace ABC
Tablespace Name KBytes Used Free Used% Largest MaxPoss KB Used%
------------------- ------------ ------------ ------------ ------ ------------ ------------ ------
ABC 310,528 309,504 1,024 99.7 832 1,843,200 16.8
Extent Management & Allocation Type
TABLESPACE_NAME INITIAL_EXTENT NEXT_EXTENT EXTENT_MAN ALLOCATIO
------------------------------ -------------- ----------- ---------- ---------
ABC 65536 LOCAL SYSTEM
Autoextend ON
TABLESPACE_NAME AUT MAXBYTES BYTES MAXBLOCKS USER_BLOCKS
------------------------------ --- ---------- ---------- ---------- -----------
ABC YES 1887436800 317980672 230400 38808
The puzzling thing here is that this is a locally managed tablespace with autoextend on, and "MaxPoss" shows 1,843,200 KB, yet the create index still throws ORA-1652. Since autoextend is on, I would expect this error only when maxbytes is reached, which is not the case here.
Thanks
February 07, 2007 - 1:06 pm UTC
just because a file is autoextendable doesn't mean there is free space on the file system.
Perhaps you actually ran out of space.
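One way to compare, at the time of the error, how far each file can still autoextend versus its current size (a sketch, using the tablespace name from the question):

```sql
select file_name,
       bytes/1024/1024              current_mb,
       maxbytes/1024/1024           max_mb,
       (maxbytes - bytes)/1024/1024 headroom_mb
  from dba_data_files
 where tablespace_name = 'ABC';
```

The headroom only helps, of course, if the underlying file system actually has that much free space at the moment the extension is attempted.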
Not the case
Sushil, February 07, 2007 - 10:26 pm UTC
The file system didn't run out of space.
At the time of error.
SQL> select file_name from dba_data_files where tablespace_name='ABC';
FILE_NAME
--------------------------------------------------------------------------------
/prod/oradata/data01/ABC01.dbf
/prod/oradata/data01/ABC02.dbf
/prod/oradata/data01/ABC03.dbf
df -h /prod/oradata/data01
Filesystem Size Used Avail Use% Mounted on
47G 39G 7.7G 84% /prod/oradata
So lots of space was still available?
February 08, 2007 - 8:17 am UTC
that does not show us that it wasn't full when it tried to extend.
That shows us it is quite full now, but not entirely full.
Error: ORA-01652: Unable to extend temp segment by 128
Beroetz, March 19, 2007 - 5:52 am UTC
Hello.
The error "ORA-01652: Unable to extend temp segment by 128" occurred when I tried to execute a complex query. My temp file is 30GB and it's hard to believe that this query fills it up!
There is an analytic function used in the query, and it is probably what causes the huge need for space.
Here is the query:
INSERT INTO LoanPlans ( LoanRefNum, NumFlow, DateFlow, AmtFlowPrinc, AmtFlowInt)
SELECT t1.event || t1.branch || t1.amendm Field1,
Row_Number() over
(partition by t1.branch,t1.event,t1.amendm
order by t1.inst_nbr) Field2,
t1.mat_date Field3, t1.inst_principal*t3.ind_rate Field4,
t1."INT"*t3.ind_rate Field5
FROM RS.LOAN_PLANS t1 INNER JOIN
RS.LOANS t2 ON (t1.event=t2.event AND
t1.branch=t2.branch AND t1.amendm=t2.amendm AND
t1.DATE_KEY = t2.DATE_KEY) LEFT JOIN
( SELECT t2.branch branch, t2.event event, t2.amendm amendm,
SUM(t1.inst_principal) over
(partition by t1.branch, t1.event, t1.amendm
order by t1.inst_nbr) sum_inst,
t2.inst_nbr inst_nbr, t2.releas - t2.capital_payd inst_rest
FROM RS.LOANS t2 INNER JOIN
RS.LOAN_PLANS t1 ON (t2.branch = t1.branch AND
t2.event = t2.event AND t2.amendm = t1.amendm AND
t2.DATE_KEY = t1.DATE_KEY)
) q ON (t1.branch = q.branch AND t1.event = q.event AND
t1.amendm = q.amendm) LEFT JOIN
RS.EXCH_RATES t3 ON (t1.currency = t3.currency AND
t1.DATE_KEY = t3.DATE_KEY AND t1.DATE_KEY = t3.acc_date)
Here is the execution plan of the query:
SELECT STATEMENT, GOAL = CHOOSE Cost=3455606 Cardinality=3615949 Bytes=636407024 IO cost=3455606 Optimizer=CHOOSE Timestamp=19.3.2007 11:00:08
WINDOW SORT Cost=3455606 Cardinality=3615949 Bytes=636407024 IO cost=3455606 Timestamp=19.3.2007 11:00:08
FILTER Timestamp=19.3.2007 11:00:08
HASH JOIN OUTER Timestamp=19.3.2007 11:00:08
HASH JOIN OUTER Cost=6610 Cardinality=20963 Bytes=2683264 IO cost=6610 Timestamp=19.3.2007 11:00:08
HASH JOIN Cost=6572 Cardinality=20963 Bytes=2096300 IO cost=6572 Timestamp=19.3.2007 11:00:08
PARTITION RANGE SINGLE Timestamp=19.3.2007 11:00:08
TABLE ACCESS FULL Object owner=RS Object name=LOANS Cost=369 Cardinality=24551 Bytes=1104795 IO cost=369 Optimizer=ANALYZED Timestamp=19.3.2007 11:00:08
TABLE ACCESS FULL Object owner=RS Object name=LOAN_PLANS Cost=5629 Cardinality=826902 Bytes=45479610 IO cost=5629 Optimizer=ANALYZED Timestamp=19.3.2007 11:00:08
TABLE ACCESS FULL Object owner=RS Object name=EXCH_RATES Cost=2 Cardinality=1 Bytes=28 IO cost=2 Optimizer=ANALYZED Timestamp=19.3.2007 11:00:08
VIEW Object owner=MDR Cost=3203514 Cardinality=167045285 Bytes=8018173680 Timestamp=19.3.2007 11:00:08
WINDOW SORT Cost=3203514 Cardinality=167045285 Bytes=13530668085 IO cost=3203514 Timestamp=19.3.2007 11:00:08
HASH JOIN Cost=6667 Cardinality=167045285 Bytes=13530668085 IO cost=6667 Timestamp=19.3.2007 11:00:08
PARTITION RANGE SINGLE Timestamp=19.3.2007 11:00:08
TABLE ACCESS FULL Object owner=RS Object name=LOANS Cost=369 Cardinality=48079 Bytes=2067397 IO cost=369 Optimizer=ANALYZED Timestamp=19.3.2007 11:00:08
TABLE ACCESS FULL Object owner=RS Object name=LOAN_PLANS Cost=5629 Cardinality=826902 Bytes=31422276 IO cost=5629 Optimizer=ANALYZED Timestamp=19.3.2007 11:00:08
So I have 3 questions that I'm fighting with since last week:
1) Why does the windowing clause of the analytic function require so many resources, given that I have an index on the "partition by" and "order by" columns? Is there any way to optimize it?
2) Is it possible for the query to fill all 30GB of the temp segment? The table that the query selects from has 4 million records.
3) What can I do to run my query successfully?
Thanks.
March 19, 2007 - 10:16 am UTC
why is it hard to believe? I see lots of "big" things in there and well, other people can use temp too.
1) it is optimized. You are asking for something "big" here.
2) sure it is.
3) configure more temp?
Temp file
Beroetz, March 19, 2007 - 10:35 am UTC
Thanks for the quick answer. I was surprised because this is the only transformation running in the database.
Could you give me a sample of how to increase the size of the TEMP file?
Best regards, Beroetz
March 19, 2007 - 12:10 pm UTC
see the alter command
either alter your temporary tablespace and add a file
or alter your database and increase the file size
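For example (the file names and sizes here are illustrative):

```sql
-- either add another tempfile to the temporary tablespace
alter tablespace temp
  add tempfile '/u01/oradata/temp02.dbf' size 10g;

-- or grow an existing tempfile in place
alter database tempfile '/u01/oradata/temp01.dbf' resize 30g;
```
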
re: Temp file
Beroetz, March 19, 2007 - 10:47 am UTC
Also, is there any per-user temp space limit? Maybe it has been hit?
March 19, 2007 - 12:14 pm UTC
no
ORA-01652
Beroetz, March 20, 2007 - 10:12 am UTC
Hello,
I've added a second 30GB temp file but the operation failed again! I've seen that Oracle consumed all the temp space.
We have only 20GB more free space.
Obviously the analytic function uses far too much space so is there any better alternative to execute the query?
Thanks
March 20, 2007 - 11:08 am UTC
obviously nothing.
not following your logic entirely, but you are outer joining a join of
LOAN_PLANS to LOANS
with a join of LOANS to LOAN_PLANS
eh? whats up with that. you join them in the same fashion, you have no predicates on them. they are the same sets of data.
why do you join these two tables twice.
why do you outer join these results
why is there not JUST A SINGLE JOIN between loans and loan_plans?????
what is your goal here, I think this query logic is "strange" from the get go.
ora-01652
Thakur Manoj, August 20, 2008 - 4:34 am UTC
Dear Sir
I am getting the error below while taking a backup of a particular user:
ORA-01652: unable to extend temp segment in the tablespace temp.
Querying the dictionary view
v$sort_segment,
I found that:
select sum(free_blocks) from v$sort_segment where tablespace_name='TEMP';
SUM(FREE_BLOCKS)
-----------------------
1572864
Please help me resolve this issue.
August 20, 2008 - 10:37 am UTC
how would a backup use temp? define 'backup' for us here.
and you do understand that immediately after getting that error, you will have released a lot of temp space - I would expect temp to have lots of free space RIGHT AFTER running out and getting an error; all of the temp you were using is no longer in use.
ora-01652-unable to extent temp segment by 128 in the tablespace temp
Thakur Manoj, August 20, 2008 - 4:40 am UTC
Dear Sir
I am getting the below error while taking the backup of the user
"ora-01652-unable to extend temp segment by 128 in the tablespace temp"
select sum(free_blocks) from v$sort_segment
sum(free_blocks)
--------------------------------
1572864
ora-01652
Thakur manoj, August 21, 2008 - 8:47 am UTC
Dear Sir
I am doing export of a particular user and I am getting the error as
ora-01652-unable to extend temp segment by 128 in the tablespace temp.
Also I would like to tell you 9.2.0.1.0 version the temp tablespace is being automatically created by oracle when we create the particular tablespace 'max_c3_user'.
This error came when I added indexes into the particular user.Also the temp tablespace is automatically created .I know indexes increases sorting when we try to do index scan.
USERNAME DEFAULT_TABLESPACE TEMPORARY_TABLESPACE
------------------------------ ------------------------------ ------------------------------
MAX_C3 MAX_C3_USER TEMP
Now While taking the backup of the above query also I am getting the error as well as while selecting a complex query I am getting the error
Please help to resolve the below issue
Regards
August 21, 2008 - 9:17 pm UTC
... Also I would like to tell you 9.2.0.1.0 version the temp tablespace is being
automatically created by oracle when we create the particular tablespace
'max_c3_user'.
....
you might like to tell me that, but you would not be telling me something factual.
We do not automatically create any temporary tablespaces, you are mistaken in your analysis of what is happening.
I don't see how export would fail with a failure to extend temp. I can see how IMPORT would - when it goes to create an index - it will use temporary space possibly in order to sort.
... I know indexes increases sorting when we
try to do index scan.
..
actually, if you are performing an index range scan, you are reading the data sorted - the index eliminates the need for temp in that case - no sorting is done, it was already done at index create time.. An index CREATE might use temp to sort, but not a range scan....
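A small illustration of that point (the table and index names are made up): the sort happens once, at index creation; a later range scan just walks the index in order.

```sql
-- temp may be used here, to sort T's rows by X during the index build
create index t_ix on t(x);

-- this range scan can read the rows already ordered by X off the index;
-- no sort step, hence no temp usage for sorting
select * from t where x between 1 and 100 order by x;
```
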
... Now While taking the backup of the above query also I am getting the error as ....
I do not understand at all what it means to "back up a query", that doesn't compute, we don't backup queries.
...
well as while selecting a complex query I am getting the error
.......
that makes sense, your temporary tablespace is too small to perform the operation you requested, make it larger or change your request.
Manoj, check your temp tablespace
Dhairyasheel, August 22, 2008 - 3:05 am UTC
Hi Manoj,
You need to check the size of your temp tablespace with the query
select * from dba_temp_files; (run from the SYS user).
The BYTES column will tell you if your temporary tablespace is too small. Also enable the autoextend clause on your temporary tablespace so that the tempfile can grow automatically when a session needs more space.
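The check and the autoextend change could look like this (a sketch; the path matches the one shown later in this thread, the sizes are examples):

```sql
-- how big is each tempfile, and can it grow?
select file_name, bytes, autoextensible, maxbytes
  from dba_temp_files;

-- let the tempfile grow on demand, up to a cap
alter database tempfile 'C:\ORACLE\ORADATA\cis\TEMP01.DBF'
  autoextend on next 100m maxsize 4g;
```
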
ora-01652
Thakur Manoj, August 22, 2008 - 11:07 am UTC
Dear Sir
Please check and tell me what i should do to resolve this error
ora-01652-unable to extend temp segment by 128 in tablespace temp
TSNAME DFNAME STATUS ENABLED BYTES/1048576
------------------------------ -------------------------------- ---------- ------------ -------------
TEMP C:\ORACLE\ORADATA\cis\TEMP01.DBF OFFLINE READ WRITE 0
select tablespace_name,next_extent,max_extents from dba_tablespaces where tablespace_name='TEMP';
TABLESPACE_NAME NEXT_EXTENT MAX_EXTENTS
------------------------------ ----------- -----------
TEMP 1048576
select tablespace_name,extent_size,total_extents,total_blocks,max_size,max_blocks from v$sort_segment;
TABLESPACE_NAME EXTENT_SIZE TOTAL_EXTENTS TOTAL_BLOCKS MAX_SIZE MAX_BLOCKS
------------------------------- ----------- ------------- ------------ ---------- ----------
TEMP 128 12288 1572864 12288 1572864
Also, let me tell you that there are some users created in my database.
What exactly should I do to resolve this error?
FILE_ID TABLESPACE_NAME BYTES BLOCKS STATUS
1 SYSTEM 576716800 70400 AVAILABLE
2 UNDOTBS1 948961280 115840 AVAILABLE
3 CWMLITE 20971520 2560 AVAILABLE
4 DRSYS 20971520 2560 AVAILABLE
5 EXAMPLE 155975680 19040 AVAILABLE
6 INDX 26214400 3200 AVAILABLE
7 ODM 20971520 2560 AVAILABLE
8 TOOLS 10485760 1280 AVAILABLE
9 USERS 26214400 3200 AVAILABLE
10 XDB 39976960 4880 AVAILABLE
11 INTCDR_san_USER 15728640 1920 AVAILABLE
12 INTCDR_san_TEMP 5242880 640 AVAILABLE
13 gh1007_san_DEV_USER 89128960 10880 AVAILABLE
14 gh1007_hjk_USER 340787200 41600 AVAILABLE
15 gh1007_san_DEV_TEMP 5242880 640 AVAILABLE
Now while taking the export I am getting the below error
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user IRS2007_cis_C3
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user IRS2007_cis_C3
About to export IRS2007_cis_C3's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
EXP-00056: ORACLE error 1652 encountered
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
ORA-06512: at "SYS.DBMS_LOB", line 424
ORA-06512: at "SYS.DBMS_METADATA", line 1140
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully
Now what should I do to resolve this error, sir?
You said to check in dba_temp_files
FILE_NAME FILE_ID TABLESPACE_NAME BYTES BLOCKS STATUS RELATIVE_FNO AUTOEXTENSIBLE MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS
1 C:\ORACLE\ORADATA\IMACS\TEMP01.DBF 1 TEMP AVAILABLE
I cannot see any value in the BYTES column.
Regards
August 26, 2008 - 7:25 pm UTC
I already did, re-read above.
You either need to do things that need less temp
or
make your temp larger.
export is running a query, that query is generating the need to spill to disk (are your pga/sort area sizes set reasonable? should it be spilling to temp) and unless and until temp is larger, it will continue to fail.
it sort of appears your temp tablespace is "offline", have you looked at that?
TSNAME DFNAME STATUS ENABLED BYTES/1048576
------------------------------ -------------------------------- ---------- ------------ -------------
TEMP C:\ORACLE\ORADATA\cis\TEMP01.DBF <b>OFFLINE</b> READ WRITE 0
Thakur Manoj, August 22, 2008 - 11:10 am UTC
Sorry - after firing the query
select * from dba_temp_files
I got this:
FILE_NAME FILE_ID TABLESPACE_NAME BYTES BLOCKS STATUS RELATIVE_FNO AUTOEXTENSIBLE MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS
1 C:\ORACLE\ORADATA\cis\TEMP01.DBF 1 TEMP AVAILABLE
I cannot see any value in the BYTES column for the temp tablespace.
How do I resolve this error, sir?
August 26, 2008 - 7:26 pm UTC
you are providing conflicting information
work with your dba, they will know what to do and if they do not - they are not the dba
Temp is offline bring it online.
Dhairyasheel Tawade., August 25, 2008 - 3:46 am UTC
Hi, looking at your query on dba_tablespaces, the temp tablespace's status is showing OFFLINE.
Try bringing the tablespace back online:
alter tablespace tablespacename online;
(you may need to run the above command in mount mode, I'm not sure!)
Also verify on your Windows system that the temp file at that path really exists and that the drive is not full.
If nothing works, create a new temporary tablespace, then run
alter database default temporary tablespace newtablespacename;
and you're on your way.
ORA-01652:
rain, March 21, 2009 - 3:37 pm UTC
Hi Tom;
SQL> select count(*) from dba_source;
COUNT(*)
----------
155855
SQL> select count(*) from dba_source where type='PACKAGE';
COUNT(*)
----------
111503
and I want to create a table like this;
SQL> create table table1 as select * from dba_source where type='PACKAGE';
create table table1 as select * from dba_source where type='PACKAGE'
ORA-01652: unable to extend temp segment by 128 in tablespace SYSTEM
How can I solve this problem? Thanks.
March 24, 2009 - 10:57 am UTC
you are apparently trying to create that table in SYSTEM (bad idea, bad bad idea)
so, i would suggest
create table table1 TABLESPACE SOME_WHERE_ELSE as ......
do not create things in system.
here is what happened:
ops$tkyte%ORA10GR2> alter database datafile '&f' autoextend off;
old 1: alter database datafile '&f' autoextend off
new 1: alter database datafile '/home/ora10gr2/oracle/product/10.2.0/oradata/ora10gr2/system01.dbf' autoextend off
Database altered.
ops$tkyte%ORA10GR2> create table ttt tablespace system as select * from all_objects;
create table ttt tablespace system as select * from all_objects
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 128 in tablespace SYSTEM
ops$tkyte%ORA10GR2> alter database datafile '&f' autoextend on next 10m maxsize 31g;
old 1: alter database datafile '&f' autoextend on next 10m maxsize 31g
new 1: alter database datafile '/home/ora10gr2/oracle/product/10.2.0/oradata/ora10gr2/system01.dbf' autoextend on next 10m maxsize 31g
Database altered.
ops$tkyte%ORA10GR2> create table ttt tablespace system as select * from all_objects;
Table created.
That just shows "you were trying to create a table in system", I fixed it by making system 'grow', you should fix it by NOT USING SYSTEM!!
False ORA-01652 being reported.
A reader, March 25, 2009 - 7:15 am UTC
Hi Tom,
Thank you for all your wonderful responses. Your knowledge about Oracle database is incomparable. The below is just for your information.
There seems to be a bug of false ORA-01652 in Oracle 10g Version 10.2.0.3. We could test it only on HP-Itanium.
Below is the procedure to reproduce it:
0. Disable resumable space allocation if it is enabled by setting the resumable_timeout value to 0.
alter system set resumable_timeout=0;
1. Create a test tablespace with size 1M , autoextensible as below:
create tablespace test datafile '/db02/oradata/egebdev/test01.dbf' size 1M autoextend
on maxsize 500M extent management local uniform size 512K;
2. Exit and log in through new session.
3. Try to create a table which consumes more than 1M size so as to extend the datafile as below:
create table test1 tablespace test as select * from dba_objects;
3. Check the error in alert log "ORA-01652: unable to extend temp segment by 64 in tablespace TEST"
4. Again exit from the current session and log in through new session.
5. Try to create another table through this new session:
create table test2 tablespace test as select * from dba_objects;
6. Check the error in alert log "ORA-01652: unable to extend temp segment by 64 in tablespace TEST"
7. Each time you connect through a new session and try to create the table the error "ORA-01652"
is logged into the alert log.
Note that the error is not reported in the session itself and the tables are created. The error is only logged in the
alert log.
March 29, 2009 - 8:02 pm UTC
I cannot reproduce in any fashion, any release.
Please utilize support.
event="1652 trace name ERRORSTACK level 3"
Avnish, April 02, 2009 - 1:14 am UTC
Hi Tom,
The event (event="1652 trace name ERRORSTACK level 3") was causing this to happen. We upgraded our 9i DB to 10g and this event got carried to 10g init.ora files.
Thanks
A reader, April 14, 2009 - 3:43 am UTC
Hi Tom
I can see the free space and used space of the temp tablespace from the following query.
I know that temp segments are not dealloacated.
What I wanna know is how much of the space in (bytes_used) part is active and how much is free for future
transactions..
Is there a way to see that?
SELECT tablespace_name, SUM(bytes_used), SUM(bytes_free)
2 FROM V$temp_space_header
3 GROUP BY tablespace_name;
TABLESPACE_NAME SUM(BYTES_USED) SUM(BYTES_FREE)
--------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------
TEMP 943718400 0
April 14, 2009 - 11:35 am UTC
Scofield, April 15, 2009 - 4:47 am UTC
Thanks Tom.
Tom is the best, ignore the rest
1324 million records in table
maher, November 17, 2009 - 2:49 am UTC
hi tom
I have a table that I exported from SQL Server 2008; it contains approximately 1,324 million records, and I get this error when I try to index some columns of the table:
ORA-1652: unable to extend temp segment by xxxx in tablespace temp
The temp tablespace has a max of 3GB of space; should I add more space to it?
Thanks in advance.
maher
November 23, 2009 - 1:42 pm UTC
... should i add more space for it ...
If you would like to successfully create the index - the answer to that would probably be yes...
how to identify query causing the ORA-1652
Rimz, December 14, 2009 - 3:20 pm UTC
Hi Tom,
When we encounter the error "ORA-1652: unable to extend temp segment by 128 in tablespace", is there a way to identify for sure which query was responsible for filling up the temp space? We get this error on the production server sometimes, and the DBAs don't give us a definite answer on the culprit sql. Can we find out for sure which query caused the error? Is this logged anywhere?
Thank you for your help.
Rima
A reader, December 16, 2009 - 9:20 pm UTC
Respected Sir;
I just read the above posts, Is my understanding correct?
Suppose I'm rebuilding indexes or performing a "create table as select".
Even though I have enough total free space in the tablespace, I am still getting ORA-1654.
This indicates that the contiguous free chunks in dba_free_space are too small to satisfy the next extent size of the segment.
To prevent this, I need either to add space or to make the next extent size of the table or index smaller.
December 17, 2009 - 7:28 am UTC
correct
but again, today, in the year 2009, that should be so very very rare - unless you are doing it the way people did things in 1989 with dictionary managed tablespaces.
A reader, December 21, 2009 - 1:46 pm UTC
Respected Sir;
Suppose all my free chunks in dba_free_space are 1MB.
My table needs to extend by 0.90MB.
Oracle takes one of the 1MB chunks for my table.
Is the remaining 0.1MB automatically returned to the freelist or bitmap? The 0.1MB will be above the HWM.
How does this work?
Can you please explain?
December 21, 2009 - 4:02 pm UTC
three cases:
1) locally managed tablespace with uniform extents. This cannot happen. All extents are the same size (including 'free' ones).
2) locally managed tablespace with system allocated extents. We use powers of two (64k, 1mb, 8mb and others). We would break a big free extent into a small and big extent if needed, but in this case, we would have used 1mb, not 0.9mb.
3) dictionary managed tablespaces. You are in the stone age, you need to fix this some day. But, if there was 1mb of free space in dba_free_space in one extent - we would break it into two extents, one would stay in dba_free_space and one would become allocated to your segment (and the space inside of the extent would be managed by freelists - space INSIDE of an extent is managed by freelist/bitmap freelists - extents are NOT managed by freelists/bitmap freelists)
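The difference between cases 1 and 2 is visible directly in the DDL and in dba_extents. A sketch (tablespace names and datafile paths are hypothetical):
<code>
-- case 1: uniform - every extent is exactly 1 MB, free or used
CREATE TABLESPACE demo_uniform
  DATAFILE '/u01/oradata/DB/demo_u.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- case 2: system allocated - Oracle chooses 64K/1M/8M/... itself
CREATE TABLESPACE demo_auto
  DATAFILE '/u01/oradata/DB/demo_a.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

-- inspect the extent sizes a segment actually received
SELECT segment_name, extent_id, bytes
  FROM dba_extents
 WHERE tablespace_name = 'DEMO_AUTO'
 ORDER BY extent_id;
</code>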
A reader, December 23, 2009 - 4:58 pm UTC
Respected Sir;
1-)
Assume DMT:
My table needs a 0.9 MB extent. My free chunks are 1 MB.
When my table needs an extent, Oracle would break a 1 MB chunk into two (0.9 MB and 0.1 MB),
use the 0.9 MB, and leave the 0.1 MB in dba_free_space.
Is that correct?
2-)
Assume LMT with system-allocated extents.
We need a 64 KB extent; all free chunks in dba_free_space are 1 MB.
Does Oracle again do the same thing? (Break the 1 MB into two, use the 64 KB, and leave the remaining chunk in dba_free_space.)
December 31, 2009 - 2:22 pm UTC
Stop assuming DMT - dictionary managed tablespaces - it is 2009 (not for long, it is almost 2010). Get with current methods.
yes, it would likely do that. For both.
unable to open url
aliyar, June 30, 2010 - 3:06 am UTC
July 06, 2010 - 1:30 pm UTC
Error: ORA-01652: Unable to extend temp segment by 128
Matthew, January 28, 2013 - 2:59 am UTC
Hi Tom,
I have a question similar to the previous "Error: ORA-01652: Unable to extend temp segment by 128", that is, using an INSERT-SELECT statement to update a table.
The business logic is to update every row with the data of the next row, as shown below.
(The WHERE condition on col1 and col2 filters out about 1/3 of the records in the tables,
while each table has roughly 1 million records.)
<code>
INSERT /*+ append */ INTO target_table
WITH vw_wk_table_left_join_wk_src AS (
SELECT wk_table.pk1
,wk_table.pk2
,wk_table.pk3
,wk_src.col_a, wk_src.col_b, wk_src.col_c
,wk_table.col1
,wk_table.col2
,wk_table.col3
FROM
(
SELECT pk1, pk2, pk3
,MIN(col1) AS col1
,MIN(col2) AS col2
,MIN(col3) AS col3
FROM working_table
WHERE col1 <> 0
GROUP BY pk1, pk2, pk3
) wk_table
LEFT JOIN
working_data_table wk_src
ON
wk_table.pk1 = wk_src.pk1
AND wk_table.pk2 = wk_src.pk2
AND wk_table.pk3 = wk_src.pk3
)
,vw_wk_table_join_all AS (
SELECT pk1
,pk2
,pk3
,col_a, col_b, col_c
,lead_col1 AS col1
,lead_col2 AS col2
,lead_col3 AS col3
FROM
(
SELECT pk1
,LEAD(pk2) OVER (
PARTITION BY pk1
ORDER BY pk1, pk2, pk3, col1, col2, col3
) AS pk2
,LEAD(pk3) OVER (
PARTITION BY pk1
ORDER BY pk1, pk2, pk3, col1, col2, col3
) AS pk3
,wk_src.col_a, wk_src.col_b, wk_src.col_c
,LEAD(col1) OVER (
PARTITION BY pk1
ORDER BY pk1, pk2, pk3, col1, col2, col3
) AS lead_col1
,LEAD(col2) OVER (
PARTITION BY pk1
ORDER BY pk1, pk2, pk3, col1, col2, col3
) AS lead_col2
,LEAD(col3) OVER (
PARTITION BY pk1
ORDER BY pk1, pk2, pk3, col1, col2, col3
) AS lead_col3
FROM vw_wk_table_left_join_wk_src vw
ORDER BY pk1, pk2, pk3, col1, col2, col3
) rs
WHERE lead_col1 = 'D'
AND lead_col2 = 'E'
AND col1 <> 'D'
AND col3 IS NOT NULL
UNION ALL
SELECT wk_table.pk1
,wk_table.pk2
,wk_table.pk3
,src.col_a, src.col_b, src.col_c
,wk_table.col1
,wk_table.col2
,wk_table.col3
FROM
(
SELECT DISTINCT
pk1
,pk2
,pk3
,col1
,col2
,col3
,DENSE_RANK() OVER (
PARTITION BY pk1
ORDER BY pk1, pk2, pk3, col1, col2, col3
) group_row_number
FROM vw_wk_table_left_join_wk_src vw
ORDER BY pk1, pk2, pk3, col1, col2, col3
) wk_table
INNER JOIN
master_table src
ON
wk_table.record_key = src.record_key
WHERE wk_table.group_row_number = 1
AND col1 = 'D'
AND col2 = 'X'
AND col3 IS NOT NULL
)
SELECT pk1, pk2, pk3
,col1
,'X' AS col2
,col3
,col_a, col_b, col_c
,-1 AS flag1
,0 AS flag2
FROM vw_wk_table_join_all
WHERE col1 > 0
The plan is as below,
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 877K| 31G| | 105K (1)| 00:21:11 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D6612_C3E997A9 | | | | | |
|* 3 | HASH JOIN OUTER | | 877K| 394M| 87M| 57933 (1)| 00:11:36 |
| 4 | VIEW | | 877K| 77M| | 19043 (1)| 00:03:49 |
| 5 | SORT GROUP BY | | 877K| 62M| 80M| 19043 (1)| 00:03:49 |
|* 6 | TABLE ACCESS FULL | WORKING_TABLE | 877K| 62M| | 3591 (1)| 00:00:44 |
| 7 | TABLE ACCESS FULL | WORKING_DATA_TABLE | 939K| 339M| | 17140 (1)| 00:03:26 |
| 8 | VIEW | | 877K| 31G| | 47918 (1)| 00:09:36 |
| 9 | UNION-ALL | | | | | | |
|* 10 | VIEW | | 877K| 32G| | 16976 (1)| 00:03:24 |
| 11 | WINDOW SORT | | 877K| 32G| | 16976 (1)| 00:03:24 |
| 12 | VIEW | | 877K| 32G| | 16976 (1)| 00:03:24 |
| 13 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6612_C3E997A9 | 877K| 436M| | 16976 (1)| 00:03:24 |
| 14 | MERGE JOIN | | 1 | 39260 | | 30942 (1)| 00:06:12 |
|* 15 | VIEW | | 877K| 97M| | 30939 (1)| 00:06:12 |
| 16 | SORT UNIQUE | | 877K| 56M| 70M| 16976 (1)| 00:03:24 |
|* 17 | WINDOW SORT PUSHED RANK| | 877K| 56M| 70M| 30939 (1)| 00:06:12 |
| 18 | VIEW | | 877K| 56M| | 16976 (1)| 00:03:24 |
| 19 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6612_C3E997A9 | 877K| 436M| | 16976 (1)| 00:03:24 |
|* 20 | SORT JOIN | | 1 | 39143 | | 3 (34)| 00:00:01 |
|* 21 | TABLE ACCESS FULL | MASTER_TABLE | 1 | 39143 | | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------------
When it executes, the error "ORA-01652: Unable to extend temp segment by 128" occurs.
After I configured more temp space and used
alter session set sort_area_size = 1048576000
this SQL can execute.
However, other SQL statements involving small numbers of records and tables (outputting just thousands of records) that run at the same time get ORA-01652.
Therefore, I have 3 questions:
1. How can I optimize it to use less temp I/O without greatly decreasing performance?
2. The tables will grow over time. What can I do to ensure the above procedure runs successfully without preventing other procedures from working?
3. My colleague suggested using a cursor with a commit interval to deal with this. How would that work?
Thank you very much!
</code>
January 31, 2013 - 8:47 am UTC
I would definitely use a CREATE TABLE AS SELECT - not an update - if you are hitting 1/3 of the rows...
get a select that represents the data you want in your table.
put create table new_table as in front of it.
drop old table
rename new.
(3) would be just about the worst idea *ever*, the bigger the set, the worse the idea in (3) would be.
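The CTAS swap described above, as a sketch with hypothetical table names (note that indexes, constraints, grants and triggers do not follow the rename and must be recreated):
<code>
-- 1) build the replacement containing only the rows you want to keep
CREATE TABLE t_new NOLOGGING
AS SELECT * FROM t WHERE col1 > 0;

-- 2) swap it in
DROP TABLE t;
RENAME t_new TO t;
</code>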
Unable to extend temp segment by 128
Pradeep, July 02, 2013 - 2:25 pm UTC
Hi Tom,
I am trying to run a query in the Pre Production db (which has a temp tablespace of 72 GB) and getting the error "unable to extend temp segment by 128 in tablespace TEMP". The surprising part is that the same query runs fine in Production, and some time back it also used to run smoothly in Pre Production.
We have tried increasing the temp tablespace in the Pre Production db, but we still receive this error for a mere 10k rows, whilst we are able to fetch more than 100,000 rows in Production.
Can you please tell us if there could be some environment issue with the Pre Production database, like a parameter setting, or something to do with the datafiles in PPR? We would really appreciate your help!!
Many Thanks !!
July 02, 2013 - 4:45 pm UTC
look at the plans, are they the same - i doubt it. that is where we'd have to start.
Unable to extend temp segment by 128
Pradeep Sorari, July 03, 2013 - 7:30 am UTC
Hi Tom,
Thanks for the response !!
As I said, the query is not completing and is throwing the error, so I am not yet able to generate a trace file for it in Pre Production.
However, I have generated plans for this query using:
SET AUTOTRACE TRACEONLY EXPLAIN
For Production (where it is running smoothly):
Execution Plan
----------------------------------------------------------
Plan hash value: 139614374
------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 4248 | 25221 (2)| 00:05:03 | | |
| 1 | VIEW | | 2 | 4248 | 25221 (2)| 00:05:03 | | |
| 2 | UNION-ALL | | | | | | | |
| 3 | HASH UNIQUE | | 1 | 368 | 2617 (1)| 00:00:32 | | |
| 4 | NESTED LOOPS OUTER | | 1 | 368 | 2616 (1)| 00:00:32 | | |
| 5 | VIEW | | 1 | 345 | 2611 (1)| 00:00:32 | | |
|* 6 | HASH JOIN OUTER | | 1 | 351 | 2611 (1)| 00:00:32 | | |
|* 7 | HASH JOIN OUTER | | 1 | 334 | 2265 (1)| 00:00:28 | | |
| 8 | TABLE ACCESS BY INDEX ROWID | ERROR_FSPEC_MANIFEST | 1 | 24 | 1 (0)| 00:00:01 | | |
| 9 | NESTED LOOPS | | 1 | 310 | 2196 (1)| 00:00:27 | | |
|* 10 | TABLE ACCESS FULL | PREADVICE_ERROR | 1 | 286 | 2195 (1)| 00:00:27 | | |
|* 11 | INDEX RANGE SCAN | ERROR___MANIFEST_PK | 1 | | 1 (0)| 00:00:01 | | |
| 12 | TABLE ACCESS FULL | ERR_PRE_ADVICE_DATA | 2801 | 67224 | 68 (0)| 00:00:01 | | |
| 13 | TABLE ACCESS FULL | ERR_PRE_ADVICE_ITEM | 33718 | 559K| 344 (1)| 00:00:05 | | |
| 14 | VIEW PUSHED PREDICATE | MBV_TP_COMMERCIAL_ACCOUNT | 1 | 23 | 5 (0)| 00:00:01 | | |
| 15 | NESTED LOOPS | | 1 | 64 | 5 (0)| 00:00:01 | | |
| 16 | NESTED LOOPS | | 1 | 58 | 5 (0)| 00:00:01 | | |
| 17 | NESTED LOOPS | | 1 | 49 | 4 (0)| 00:00:01 | | |
| 18 | TABLE ACCESS BY INDEX ROWID | ACCOUNT | 1 | 38 | 3 (0)| 00:00:01 | | |
|* 19 | INDEX UNIQUE SCAN | ACCOUNT_UK2 | 1 | | 2 (0)| 00:00:01 | | |
|* 20 | TABLE ACCESS BY INDEX ROWID | PARTY | 556K| 5976K| 1 (0)| 00:00:01 | | |
|* 21 | INDEX UNIQUE SCAN | PARTY_UK1 | 1 | | 0 (0)| 00:00:01 | | |
|* 22 | TABLE ACCESS BY INDEX ROWID | PARTY_ROLE | 361K| 3175K| 1 (0)| 00:00:01 | | |
|* 23 | INDEX UNIQUE SCAN | PARTY_ROLE_UK1 | 1 | | 0 (0)| 00:00:01 | | |
|* 24 | INDEX RANGE SCAN | SUBJECT_ID | 1 | 6 | 0 (0)| 00:00:01 | | |
| 25 | HASH UNIQUE | | 1 | 409 | 22605 (2)| 00:04:32 | | |
| 26 | NESTED LOOPS OUTER | | 1 | 409 | 22604 (2)| 00:04:32 | | |
|* 27 | HASH JOIN OUTER | | 1 | 389 | 22499 (2)| 00:04:30 | | |
| 28 | NESTED LOOPS OUTER | | 1 | 369 | 22154 (2)| 00:04:26 | | |
|* 29 | HASH JOIN | | 1 | 346 | 22149 (2)| 00:04:26 | | |
|* 30 | TABLE ACCESS FULL | PREADVICE_ERROR | 1 | 286 | 2195 (1)| 00:00:27 | | |
| 31 | TABLE ACCESS FULL | ERROR_FSPEC_MANIFEST_LINE | 7158K| 409M| 19884 (2)| 00:03:59 | | |
| 32 | VIEW PUSHED PREDICATE | MBV_TP_COMMERCIAL_ACCOUNT | 1 | 23 | 5 (0)| 00:00:01 | | |
| 33 | NESTED LOOPS | | 1 | 64 | 5 (0)| 00:00:01 | | |
| 34 | NESTED LOOPS | | 1 | 58 | 5 (0)| 00:00:01 | | |
| 35 | NESTED LOOPS | | 1 | 49 | 4 (0)| 00:00:01 | | |
| 36 | TABLE ACCESS BY INDEX ROWID | ACCOUNT | 1 | 38 | 3 (0)| 00:00:01 | | |
|* 37 | INDEX UNIQUE SCAN | ACCOUNT_UK2 | 1 | | 2 (0)| 00:00:01 | | |
|* 38 | TABLE ACCESS BY INDEX ROWID | PARTY | 556K| 5976K| 1 (0)| 00:00:01 | | |
|* 39 | INDEX UNIQUE SCAN | PARTY_UK1 | 1 | | 0 (0)| 00:00:01 | | |
|* 40 | TABLE ACCESS BY INDEX ROWID | PARTY_ROLE | 361K| 3175K| 1 (0)| 00:00:01 | | |
|* 41 | INDEX UNIQUE SCAN | PARTY_ROLE_UK1 | 1 | | 0 (0)| 00:00:01 | | |
|* 42 | INDEX RANGE SCAN | SUBJECT_ID | 1 | 6 | 0 (0)| 00:00:01 | | |
| 43 | TABLE ACCESS FULL | ERR_PRE_ADVICE_ITEM | 33718 | 658K| 344 (1)| 00:00:05 | | |
|* 44 | TABLE ACCESS BY GLOBAL INDEX ROWID| PRE_ADVICE_MNFST | 1 | 20 | 104 (0)| 00:00:02 | ROWID | ROWID |
|* 45 | INDEX RANGE SCAN | PRE_ADVISE_MNFST_NU01 | 103 | | 1 (0)| 00:00:01 | | |
------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
6 - access("EPAD"."BATCH_ID"="EPAI"."BATCH_ID"(+))
7 - access("EPAD"."FILE_SERIAL_NO"(+)="EFM"."FILE_SERIAL_NUMBER" AND "EPAD"."CREATED_DATE"(+)="EFM"."ERROR_DATE")
10 - filter("ERR"."ERROR_DATE"=20130101)
11 - access("ERR"."ERROR_ID"="EFM"."ERROR_ID")
19 - access("A"."ACCOUNT_NUMBER"=NVL("EPAI"."ACCOUNT_N0","EFM"."ACCOUNT_NUMBER"))
20 - filter("P"."ID_SUBJECT" IS NOT NULL)
21 - access("A"."PARTY_ID_ACCOUNT"="P"."ID")
22 - filter("R"."PARTY_ROLE_TYPE_ID"=1)
23 - access("P"."ID"="R"."PARTY_ID")
24 - access("P"."ID_SUBJECT"="S"."ID")
27 - access("EFML"."ERROR_DATE"="EPAI"."CREATED_DATE"(+) AND "EFML"."BARCODE"="EPAI"."ITEM_NUMBER"(+))
29 - access("ERR"."ERROR_ID"="EFML"."ERROR_ID")
30 - filter("ERR"."ERROR_DATE"=20130101)
37 - access("A"."ACCOUNT_NUMBER"="EFML"."ACCOUNT_NUMBER")
38 - filter("P"."ID_SUBJECT" IS NOT NULL)
39 - access("A"."PARTY_ID_ACCOUNT"="P"."ID")
40 - filter("R"."PARTY_ROLE_TYPE_ID"=1)
41 - access("P"."ID"="R"."PARTY_ID")
42 - access("P"."ID_SUBJECT"="S"."ID")
44 - filter("EFML"."ERROR_DATE"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("PAM"."CREATED_DATE"(+)),'YYYYMMDD')))
45 - access("EFML"."FILE_SERIAL_NUMBER"="PAM"."FILE_SERIAL_NUMBER"(+))
For Pre production (where it is throwing error) :
Execution Plan
----------------------------------------------------------
Plan hash value: 795909704
--------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2574M| 5092G| | 249M (1)|831:28:36 | | |
| 1 | VIEW | | 2574M| 5092G| | 249M (1)|831:28:36 | | |
| 2 | UNION-ALL | | | | | | | | |
| 3 | HASH UNIQUE | | 2574M| 899G| 1870G| 249M (1)|831:20:23 | | |
|* 4 | HASH JOIN RIGHT OUTER | | 2574M| 899G| 15M| 43M (1)|144:03:47 | | |
| 5 | VIEW | MBV_TP_COMMERCIAL_ACCOUNT | 362K| 11M| | 8243 (2)| 00:01:39 | | |
| 6 | NESTED LOOPS | | 362K| 22M| | 8243 (2)| 00:01:39 | | |
|* 7 | HASH JOIN | | 362K| 20M| 11M| 8207 (2)| 00:01:39 | | |
|* 8 | HASH JOIN | | 362K| 7070K| 7416K| 2930 (2)| 00:00:36 | | |
|* 9 | TABLE ACCESS FULL | PARTY_ROLE | 361K| 3175K| | 1144 (2)| 00:00:14 | | |
|* 10 | TABLE ACCESS FULL | PARTY | 556K| 5976K| | 803 (3)| 00:00:10 | | |
| 11 | TABLE ACCESS FULL | ACCOUNT | 935K| 33M| | 2492 (2)| 00:00:30 | | |
|* 12 | INDEX RANGE SCAN | SUBJECT_ID | 1 | 6 | | 0 (0)| 00:00:01 | | |
| 13 | VIEW | | 2574M| 819G| | 57082 (44)| 00:11:25 | | |
|* 14 | HASH JOIN OUTER | | 2574M| 836G| 5072K| 57082 (44)| 00:11:25 | | |
|* 15 | HASH JOIN OUTER | | 15097 | 4894K| 4824K| 26525 (1)| 00:05:19 | | |
|* 16 | HASH JOIN | | 15097 | 4644K| 4424K| 20650 (1)| 00:04:08 | | |
|* 17 | TABLE ACCESS FULL | PREADVICE_ERROR | 15097 | 4246K| | 18334 (1)| 00:03:41 | | |
| 18 | TABLE ACCESS FULL | ERROR_FSPEC_MANIFEST | 604K| 15M| | 973 (2)| 00:00:12 | | |
|* 19 | TABLE ACCESS FULL | ERR_PRE_ADVICE_DATA | 263K| 4367K| | 5273 (1)| 00:01:04 | | |
| 20 | TABLE ACCESS FULL | ERR_PRE_ADVICE_ITEM | 692K| 11M| | 4562 (2)| 00:00:55 | | |
| 21 | HASH UNIQUE | | 15097 | 6133K| 12M| 41154 (1)| 00:08:14 | | |
|* 22 | HASH JOIN RIGHT OUTER | | 15097 | 6133K| | 39814 (1)| 00:07:58 | | |
| 23 | PARTITION RANGE ALL | | 404 | 8080 | | 176 (0)| 00:00:03 | 1 | 71 |
| 24 | PARTITION HASH ALL | | 404 | 8080 | | 176 (0)| 00:00:03 | 1 | 4 |
| 25 | TABLE ACCESS FULL | PRE_ADVICE_MNFST | 404 | 8080 | | 176 (0)| 00:00:03 | 1 | 284 |
|* 26 | HASH JOIN OUTER | | 15097 | 5838K| 5544K| 39637 (1)| 00:07:56 | | |
|* 27 | HASH JOIN OUTER | | 15097 | 5366K| 5256K| 30363 (1)| 00:06:05 | | |
|* 28 | HASH JOIN | | 15097 | 5071K| 4424K| 24484 (1)| 00:04:54 | | |
|* 29 | TABLE ACCESS FULL | PREADVICE_ERROR | 15097 | 4246K| | 18334 (1)| 00:03:41 | | |
| 30 | TABLE ACCESS FULL | ERROR_FSPEC_MANIFEST_LINE | 892K| 47M| | 3047 (2)| 00:00:37 | | |
| 31 | TABLE ACCESS FULL | ERR_PRE_ADVICE_ITEM | 692K| 13M| | 4561 (2)| 00:00:55 | | |
| 32 | VIEW | MBV_TP_COMMERCIAL_ACCOUNT | 362K| 11M| | 8243 (2)| 00:01:39 | | |
| 33 | NESTED LOOPS | | 362K| 22M| | 8243 (2)| 00:01:39 | | |
|* 34 | HASH JOIN | | 362K| 20M| 11M| 8207 (2)| 00:01:39 | | |
|* 35 | HASH JOIN | | 362K| 7070K| 7416K| 2930 (2)| 00:00:36 | | |
|* 36 | TABLE ACCESS FULL| PARTY_ROLE | 361K| 3175K| | 1144 (2)| 00:00:14 | | |
|* 37 | TABLE ACCESS FULL| PARTY | 556K| 5976K| | 803 (3)| 00:00:10 | | |
| 38 | TABLE ACCESS FULL | ACCOUNT | 935K| 33M| | 2492 (2)| 00:00:30 | | |
|* 39 | INDEX RANGE SCAN | SUBJECT_ID | 1 | 6 | | 0 (0)| 00:00:01 | | |
--------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("AC"."COMMERCIAL_ACCOUNT_NUMBER"(+)=NVL("EPAI"."ACCOUNT_N0","EFM"."ACCOUNT_NUMBER"))
7 - access("A"."PARTY_ID_ACCOUNT"="P"."ID")
8 - access("P"."ID"="R"."PARTY_ID")
9 - filter("R"."PARTY_ROLE_TYPE_ID"=1)
10 - filter("P"."ID_SUBJECT" IS NOT NULL)
12 - access("P"."ID_SUBJECT"="S"."ID")
14 - access("EPAD"."BATCH_ID"="EPAI"."BATCH_ID"(+))
15 - access("EPAD"."FILE_SERIAL_NO"(+)="EFM"."FILE_SERIAL_NUMBER" AND "EPAD"."CREATED_DATE"(+)="EFM"."ERROR_DATE")
16 - access("ERR"."ERROR_ID"="EFM"."ERROR_ID")
17 - filter("ERR"."ERROR_DATE"=20130101)
19 - filter("EPAD"."FILE_SERIAL_NO"(+) IS NOT NULL)
22 - access("EFML"."ERROR_DATE"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("PAM"."CREATED_DATE"(+)),'YYYYMMDD')) AND
"EFML"."FILE_SERIAL_NUMBER"="PAM"."FILE_SERIAL_NUMBER"(+))
26 - access("AC"."COMMERCIAL_ACCOUNT_NUMBER"(+)="EFML"."ACCOUNT_NUMBER")
27 - access("EFML"."ERROR_DATE"="EPAI"."CREATED_DATE"(+) AND "EFML"."BARCODE"="EPAI"."ITEM_NUMBER"(+))
28 - access("ERR"."ERROR_ID"="EFML"."ERROR_ID")
29 - filter("ERR"."ERROR_DATE"=20130101)
34 - access("A"."PARTY_ID_ACCOUNT"="P"."ID")
35 - access("P"."ID"="R"."PARTY_ID")
36 - filter("R"."PARTY_ROLE_TYPE_ID"=1)
37 - filter("P"."ID_SUBJECT" IS NOT NULL)
39 - access("P"."ID_SUBJECT"="S"."ID")
See how the temp usage and Time sections are drastically high? What could be the reasons? And what actions would you recommend here?
The query is: (Note: "distinct" is used knowingly, and we are required to insert this data into a table for further use in reporting)
SELECT error_id
,acc_num
,acc_name
,serialnum
,mnfstdate
,errplace
,severity
,barcode
,des
,erd_date ERROR_DATE
,erd_time ERROR_TIME
FROM
((select distinct(err.error_id)
,nvl(epai.ACCOUNT_N0,efm.account_number) acc_num
,ac.account_name acc_name
, efm.file_serial_number serialnum --If header fails menifest id is not generated
, nvl(epad.MANIFEST_DATE,efm.manifest_date) mnfstdate
,'File Validation' errplace
,decode(err.severity,'FATAL','Rejected', --FATAL data processing errors
'MEDIUM','Rejected', --Data validation errors that do not need system alert raising
'Warning') severity
,null barcode --File level validation so don't want to send same error for every item in preadvice.
,dbms_lob.SUBSTR(err.description,(DECODE(INSTR(err.description,'~'),0,DBMS_LOB.GETLENGTH(err.description),(INSTR(err.description,'~')-1))),1) des
,err.error_date erd_date
,err.error_time erd_time
from preadvice_error err
INNER JOIN ERROR_FSPEC_MANIFEST efm ON (err.error_id = efm.error_id)
LEFT OUTER JOIN ERR_PRE_ADVICE_DATA epad ON (EPAD.CREATED_DATE = efm.ERROR_DATE
AND EPAD.FILE_SERIAL_NO = EFM.FILE_SERIAL_NUMBER )
LEFT OUTER JOIN ERR_PRE_ADVICE_ITEM epai ON (epad.batch_id =EPAI.BATCH_ID)
LEFT OUTER JOIN COMMERCIAL_ACCOUNT ac ON ac.COMMERCIAL_ACCOUNT_NUMBER = nvl(epai.ACCOUNT_N0,efm.account_number))
UNION ALL
(select distinct (err.error_id) eid
,efml.account_number acc_num
,ac.account_name acc_name
,efml.file_serial_number serialnum
,nvl(pam.MNFST_DATE,efml.manifest_date) mnfstdate
,'Item Validation' errplace
,decode(err.severity,'FATAL','Rejected', --FATAL data processing errors
'MEDIUM','Rejected', --Data validation errors that do not need system alert raising
'Warning') severity
,trim(efml.barcode) barcode --added fuction trim as a fix for defectID 36
,dbms_lob.SUBSTR(err.description,(DECODE(INSTR(err.description,'~'),0,DBMS_LOB.GETLENGTH(err.description),(INSTR(err.description,'~')-1))),1) des
,err.error_date erd_date
,err.error_time erd_time
from preadvice_error err
INNER JOIN ERROR_FSPEC_MANIFEST_LINE efml ON (err.error_id = efml.error_id)
LEFT OUTER JOIN COMMERCIAL_ACCOUNT ac ON ac.COMMERCIAL_ACCOUNT_NUMBER= efml.account_number
LEFT OUTER JOIN ERR_PRE_ADVICE_ITEM epai ON (efml.barcode = EPAI.ITEM_NUMBER
AND EFML.ERROR_DATE = EPAI.CREATED_DATE)
LEFT OUTER JOIN TECHPACK.PRE_ADVICE_MNFST pam ON (EFML.FILE_SERIAL_NUMBER = pam.FILE_SERIAL_NUMBER
AND TO_NUMBER(TO_CHAR(PAM.CREATED_DATE,'YYYYMMDD')) = EFML.ERROR_DATE )))
WHERE erd_date between 20130101 and 20130101;
July 16, 2013 - 12:22 pm UTC
add more temp
or give it more memory.
you are running out of temp space - that much is pretty clear.
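Concretely, "more memory" and "more temp" might look like the following (names and sizes are illustrative; pga_aggregate_target governs how much sorting and hashing can happen in memory before work spills to temp):
<code>
-- more memory for sorts and hash joins
ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE = BOTH;

-- more temp for whatever still spills to disk
ALTER TABLESPACE temp
  ADD TEMPFILE '/u01/oradata/DB/temp03.dbf' SIZE 4G
  AUTOEXTEND ON NEXT 512M MAXSIZE 32G;
</code>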