UNDO tablespace cannot support transaction
Question and Answer

Chris Saxon

Thanks for the question, Khalid.

Asked: July 01, 2024 - 5:13 pm UTC

Last updated: July 09, 2024 - 12:26 pm UTC

Version: 19.23.0.0

Viewed 1000+ times

You Asked

Hi,

Due to my work environment, I'm not allowed to cut and paste from my environment.
As a result I cannot provide a specific test case, only symptoms. Would you give me general guidance on how to approach this problem?

I'm running a huge insert of 10,878,201 rows, and each record is quite fat: it has about 50 or 60 columns. The query was issued from Toad, and the connection handle was lost when Toad froze; when Toad was restarted, the same connection handle was lost again. However, I did a DBMS_SESSION trace enable on the sid, serial#, and got a lot of UNDO segment info in the trace file, which finally says:

Problem: Undo tablespace cannot support required UNDO retention
Recommendation: Size UNDO tablespace to 3702 MB

However, there are 11 datafiles (32G * 11) in the UNDO tablespace, and AFAIK my INSERT INTO ... SELECT * FROM huge_table was the only active user transaction that day (4 days back). Surely the allocated space could easily have supported the UNDO_RETENTION (28800).

REDO Logs: There are 4 log groups (2 per thread), each is 200MB, this particular transaction is running on thread 1 of RAC node1.

1) Shall I try to split the transactions into manageable size chunks and commit every 10000 rows or so?
2) Will resizing the REDO log size help?
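On point (1), the usual shape of a chunked load is a BULK COLLECT fetch with a LIMIT, a FORALL insert, and a commit per batch. A minimal sketch, assuming hypothetical huge_table and target_table names (the real table and batch size would be yours):

```sql
-- Hypothetical sketch: chunked insert committing every 10,000 rows.
-- huge_table / target_table are placeholder names.
DECLARE
  CURSOR c IS SELECT * FROM huge_table;
  TYPE t_rows IS TABLE OF huge_table%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 10000;
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO target_table VALUES l_rows(i);
    COMMIT;  -- one commit per batch keeps the undo footprint of each transaction small
  END LOOP;
  CLOSE c;
END;
/
```

Note the trade-off: committing per batch reduces undo pressure but makes the load non-restartable without extra bookkeeping.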

Adding significant info to this question:
I get an "ORA-30036" in the trace file generated about 4 days back when Toad froze.
However, the session still shows up as ACTIVE in gv$session and last_call_et keeps on increasing.
I hope this added info will help.

Thanks


and Connor said...

10 million rows (unless you've got CLOBs/BLOBs/etc.) should load in seconds, or minutes at worst. So if it's running longer than that, then something is definitely wrong here (but you really need to see a trace file or execution plan stats to dig into that accurately - or check an AWR report, active session history, etc.).

Redo won't impact undo size/retention.

However, I did a DBMS_SESSION trace enable on the sid, serial#,

DBMS_SESSION only works on the current session... so not sure what you're saying here. DBMS_MONITOR? In any event, you generally don't get (many) undo statistics in such a trace file, so also not sure what you're seeing.
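For reference, tracing *another* session is done with DBMS_MONITOR, keyed by sid and serial#. A minimal sketch (the bind values are placeholders you'd look up in gv$session):

```sql
-- Enable extended SQL trace (waits + binds) on another session
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => :sid,
    serial_num => :serial,
    waits      => TRUE,
    binds      => TRUE);
END;
/

-- ...and turn it off when done
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(
    session_id => :sid,
    serial_num => :serial);
END;
/
```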

However, the session still shows up as ACTIVE in gv$session and last_call_et keeps on increasing

If this is still running you may as well kill it, because even if it finishes, the moment it wants to come back to the session (and finds that session gone) it will start rolling back (you'll see v$transaction.used_urec going backwards).
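A simple way to watch that rollback progress is to poll v$transaction joined to v$session; a sketch (run as a privileged user):

```sql
-- USED_UREC decreases toward zero while a transaction rolls back
SELECT s.sid, s.serial#, t.used_ublk, t.used_urec
FROM   v$transaction t
JOIN   v$session     s ON s.taddr = t.addr;
```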

undo_retention has no bearing on undo for *transactions*, only on how long undo is kept for queries. But it's odd that you're seeing something saying "size undo to 3.7G" when you say you have a 350G undo tablespace (which in itself sounds extreme). Are you sure you're seeing the right undo info for the instance? (There's instance undo and local undo to consider here.)

But without seeing some data, not sure how we can help a lot more here. You can always send information to asktom_us@oracle.com if you're worried about privacy.

Rating

  (1 rating)

Comments

Incorrect info in my question -

Khalid, July 04, 2024 - 12:49 pm UTC

Thanks for taking my question.
My apologies, I need to make some corrections to my original question:
1 - I will do so in my next post, as I'm not logged in right now to my other env.
2 - The data size wasn't 10 million; I copied the num_rows for the wrong INSERT statement. The actual record count was 653,nnn,nnn (I don't remember the digits).
3 - By the way, the 10 million statement completed in 31 minutes, but when I removed the indexes it ran very quickly.
4 - Main correction - on the 653 million INSERT, I ran the trace about 3 days after Toad had crashed, but the query still showed up in v$session. I ran DBMS_MONITOR.session_trace_enable(sid, serial#, binds, waits), then checked v$process for that SID's trace file location, and noticed the trace file was dated June 28, although I ran trace_enable on July 2. Which means that trace was already generated.
Please bear with me, and allow me to restate the problem when I have access to it.
My issue was resolved: 653 million fat records loaded in exactly 4 hours, with bulk collect and no indexes.
However, I'm more interested in the Toad crash and the trace file messages of June 28, and I will come back to it soon, as we have the 4th holiday.
Thanks so much.
Chris Saxon
July 09, 2024 - 12:26 pm UTC

Let us know if/when you have more details
