Question and Answer

Connor McDonald

Thanks for the question, Sanjay.

Asked: October 23, 2019 - 5:16 pm UTC

Last updated: November 05, 2019 - 4:13 am UTC

Version: 11.2.0.2

Viewed 1000+ times

You Asked

Hello,
I have a table with 30M rows that has a BLOB column. The BLOB column holds about 700GB of data.
I tried the dbms_redefinition package, and DBMS_REDEFINITION.start_redef_table failed after 3 days with ORA-1555. I have increased undo_retention to 600000 (almost 7 days)
and also used the following statement to change the retention for the LOB:

alter table mytable modify lob(XML_packet) (retention);

It failed again after 3 days. I have an open SR with Oracle, and Oracle has pointed me to "IF: ORA-1555 on LOB Data (Doc ID 1950896.1)", which deals with BLOB corruption.

My issue is that the PL/SQL mentioned in the Doc runs very slowly. I changed the select to use parallel, but the cursor is most likely fetching rows one by one.
Looking at v$session_longops, the time remaining shows 5 million seconds for some parallel sessions, and more sessions may appear later once the existing processes finish.
What can I do to finish in a reasonable timeframe (2 to 3 days)?
I have also traced the session, and I see direct path reads with a block count of 1, so it is most likely reading one block at a time, which is why it is so slow.
Any idea on how to tackle this?

declare
  -- exceptions for the errors that indicate an unreadable LOB
  error_1578  exception;   -- ORA-01578: data block corrupted
  error_1555  exception;   -- ORA-01555: snapshot too old
  error_22922 exception;   -- ORA-22922: nonexistent LOB value
  pragma exception_init(error_1578, -1578);
  pragma exception_init(error_1555, -1555);
  pragma exception_init(error_22922, -22922);
  num number;
begin
  -- read every LOB; any row whose LOB cannot be read is logged in corrupt_lobs
  for cursor_lob in (select rowid r, &&lob_column from &table_owner..&table_with_lob) loop
    begin
      num := dbms_lob.instr(cursor_lob.&&lob_column, hextoraw('889911'));
    exception
      when error_1578 then
        insert into corrupt_lobs values (cursor_lob.r, 1578);
        commit;
      when error_1555 then
        insert into corrupt_lobs values (cursor_lob.r, 1555);
        commit;
      when error_22922 then
        insert into corrupt_lobs values (cursor_lob.r, 22922);
        commit;
    end;
  end loop;
end;
/

and Connor said...

That note says there are two scenarios:
a) a long running query, and hence not enough undo, OR
b) a corrupted lob

The code you are running is looking for corrupted lobs - I would be inclined to assume it is (a) not (b) as your first step. For that, we can avoid the ORA-1555 by pre-collecting ALL of the rowids, and then incrementally copying the rows over to the target table. (This assumes that you can do without complete read consistency across the entire table.)

SQL>
SQL> create table t ( pk int primary key, b blob )
  2  lob ( b) store as basicfile;

Table created.

SQL>
SQL> create table t1 ( pk int , b blob )
  2  lob ( b) store as securefile;

Table created.

SQL>
SQL> insert into t
  2  select rownum, hextoraw('00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF')
  3  from dual
  4  connect by level <= 100000;

100000 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> create or replace
  2  type nt_rowid_list is table of varchar2(50)
  3  /

Type created.

SQL>
SQL> set serverout on
SQL> declare
  2    type rowid_list is table of varchar2(50) index by pls_integer;
  3    r rowid_list;
  4
  5    rt nt_rowid_list;
  6
  7  begin
  8    select rowidtochar(rowid) bulk collect into r
  9    from   t;
 10    dbms_output.put_line('Collected '||r.count||' rowids');
 11
 12    for i in 1 .. r.count / 1000
 13    loop
 14       dbms_output.put_line('Processing rows '||((i-1)*1000+1)||' to '||(i*1000));
 15
 16       rt := nt_rowid_list();
 17       for j in (i-1)*1000+1 .. i*1000
 18       loop
 19         rt.extend;
 20         rt(rt.count) := r(j);
 21       end loop;
 22
 23       insert into t1
 24       select /*+ leading(nt) rowid(t) */ t.pk, t.b
 25       from   t,
 26              table(rt) nt
 27       where  t.rowid = chartorowid(column_value);
 28    end loop;
 29  end;
 30  /
Collected 100000 rowids
Processing rows 1 to 1000
Processing rows 1001 to 2000
Processing rows 2001 to 3000
Processing rows 3001 to 4000
Processing rows 4001 to 5000
Processing rows 5001 to 6000
Processing rows 6001 to 7000
Processing rows 7001 to 8000
Processing rows 8001 to 9000
Processing rows 9001 to 10000
...
Processing rows 97001 to 98000
Processing rows 98001 to 99000
Processing rows 99001 to 100000

PL/SQL procedure successfully completed.

SQL>



In your case, you will have 30M rowids in a PL/SQL table, so that's around 600MB, which is big but not problematic for a single session. Then we load 1000 blobs at a time without needing read consistency across the entire table.
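
One caveat with the demo loop above: it steps in exact multiples of the batch size, so if the row count is not an exact multiple of 1000 the final partial batch would be skipped. A sketch of a remainder-safe variant of the same idea (it reuses the nt_rowid_list type and the demo tables t and t1 from above; the per-batch commit is an assumption that follows from giving up cross-table read consistency):

declare
  type rowid_list is table of varchar2(50) index by pls_integer;
  r          rowid_list;
  rt         nt_rowid_list;
  batch_size constant pls_integer := 1000;
  lo         pls_integer := 1;
  hi         pls_integer;
begin
  select rowidtochar(rowid) bulk collect into r
  from   t;

  while lo <= r.count
  loop
    -- upper bound of this batch, capped at the last collected rowid
    hi := least(lo + batch_size - 1, r.count);

    rt := nt_rowid_list();
    for j in lo .. hi
    loop
      rt.extend;
      rt(rt.count) := r(j);
    end loop;

    insert into t1
    select /*+ leading(nt) rowid(t) */ t.pk, t.b
    from   t,
           table(rt) nt
    where  t.rowid = chartorowid(column_value);

    commit;  -- keep each transaction small rather than one huge one

    lo := hi + 1;
  end loop;
end;
/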


Comments

A reader, October 24, 2019 - 9:08 pm UTC

Hi,
Thanks for your response. However, from a DBA perspective, I have set UNDO_RETENTION to 600,000 seconds (7 days) and the UNDO tablespace is set up with RETENTION GUARANTEE too.
The alter table was also executed for the table with the LOB:

alter table XMLGEN.mytable modify lob(XML_packet) (retention);

So how could ORA-1555 arise in this case? If it had happened after 6/7 days then it would make sense.
Connor McDonald
October 29, 2019 - 11:17 am UTC

Setting a lob to "retention" is about establishing consistency between the lobs and other objects in the database.

However, it does *not* mean that older versions of lobs will use the undo tablespace (and hence retention guarantee does not apply). Old versions are kept in the lob segment's own tablespace.

Also, these parameters may be silently ignored if your blob is in a manual segment space management tablespace.
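
For reference, the LOB configuration can be checked in the dictionary; a sketch using the standard views, with XMLGEN/MYTABLE standing in for the real owner and table:

-- is the LOB using PCTVERSION or RETENTION, and is it a securefile?
select column_name, securefile, pctversion, retention
from   dba_lobs
where  owner = 'XMLGEN'
and    table_name = 'MYTABLE';

-- RETENTION may be silently ignored if the LOB's tablespace uses
-- manual segment space management
select tablespace_name, segment_space_management
from   dba_tablespaces;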

Thanks a lot

Sanjay Gupta, October 25, 2019 - 10:22 pm UTC

I am using your PL/SQL code and can see that the performance is much better.

Thanks a lot for your help
Connor McDonald
October 29, 2019 - 11:13 am UTC

Parallel won't be your friend here because we are doing the rows in small batches anyway. DBMS_PARALLEL_EXECUTE *may* be an option down the track, but before doing that you want to turn on a trace once it slows down so we can see where the delay is occurring.
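
For reference, a minimal DBMS_PARALLEL_EXECUTE sketch of the same copy, chunked by rowid (the owner, table names, chunk size and parallel level below are illustrative placeholders, not tested recommendations):

begin
  dbms_parallel_execute.create_task('copy_lobs');

  -- carve the source table into rowid ranges of roughly 10000 rows each
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'copy_lobs',
    table_owner => 'XMLGEN',
    table_name  => 'OLDTABLE',
    by_row      => true,
    chunk_size  => 10000);

  -- each chunk runs the insert with its own rowid range bound in
  dbms_parallel_execute.run_task(
    task_name      => 'copy_lobs',
    sql_stmt       => 'insert into newtable
                       select * from oldtable
                       where rowid between :start_id and :end_id',
    language_flag  => dbms_sql.native,
    parallel_level => 4);

  dbms_parallel_execute.drop_task('copy_lobs');
end;
/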

Sanjay Gupta, October 26, 2019 - 5:42 am UTC

Initially when I started, processing was much faster and did about 4M rows in a couple of hours, but now it is much slower, at about 100K rows per hour. I am not sure what the reason for the slowness is.

Still bad performance

Sanjay, October 28, 2019 - 10:15 pm UTC

Thanks McDonald,
I am not sure why the first few million rows (about 4M) loaded pretty fast, but later the speed dropped to about 100K per hour. I have also included a parallel clause for the insert and select, but I only see 1 session.
Nothing shows up in v$session_longops either. The wait event I see is "direct path read", and P3 shows a count of 1.

Anything else that I can do to speed it up?

INSERT /*+ parallel(NEWTABLE,8) */
INTO NEWTABLE (TVPROGRAMCONTENT_ID,
               XML_PACKET,
               CDATE,
               UDATE)
SELECT /*+ leading(nt) rowid(t) parallel(T,8) parallel(NT,8) */
       T.TVPROGRAMCONTENT_ID,
       T.XML_PACKET,
       T.CDATE,
       T.UDATE
FROM   OLDTABLE T, TABLE (:B1) NT
WHERE  T.ROWID = CHARTOROWID (COLUMN_VALUE)
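
A side note on the hints above: a parallel INSERT also needs parallel DML enabled at the session level, and (as far as I know) on 11.2 parallel DML against a non-partitioned table with LOB columns is not supported; either of these could explain seeing only one session.

-- parallel DML must be explicitly enabled per session;
-- even then, 11.2 will not parallelise DML on a
-- non-partitioned table with LOB columns
alter session enable parallel dml;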

Have you tried expdp/impdp

A reader, October 30, 2019 - 5:41 am UTC

We had to reorg multiple large tables, 300+ GB each (at least), on a bi-weekly basis due to performance issues with LOBs.

We ended up using Data Pump for that. There is a perl script which will split up a table into multiple dumps (it generates a par file for each) and you can export them in parallel.

Once done, just import them in parallel (I think it generates the import par file as well). Just google for it.
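
For illustration, one slice of such a split export might use a par file along these lines (a sketch only; the directory object, table name, predicate and file names are hypothetical):

# mytable_chunk01.par - export one slice of the table
DIRECTORY=dp_dir
TABLES=XMLGEN.MYTABLE
QUERY=XMLGEN.MYTABLE:"WHERE mod(tvprogramcontent_id, 8) = 0"
DUMPFILE=mytable_chunk01.dmp
LOGFILE=mytable_chunk01.log

Each slice then runs as its own export (e.g. expdp system parfile=mytable_chunk01.par), and the resulting dumps can be imported in parallel.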

Cheers.


Connor McDonald
November 05, 2019 - 4:13 am UTC

Nice input
