Question and Answer

Connor McDonald

Thanks for the question, fynix.

Asked: December 15, 2015 - 11:30 pm UTC

Last updated: December 30, 2015 - 3:43 am UTC

Version: 11.1.0.7.23

Viewed 1000+ times

You Asked

I am trying to figure out a way to recover a table that was truncated in a production database from RMAN backups that I have on tape. I tried restoring the backup to a remote server, but I am unable to get a direct connection to the tape library, and I cannot seem to get the Oracle Secure Backup client to talk to the tape on the production server. My other option is to do a partial restore on the production server and then export the table from the restore. I am not sure how to go about this, since I understand it is risky and I could end up messing up the entire database just to recover one table.

and Connor said...

The process is described in the docs

http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmtspit.htm

which is all about recovering an entire tablespace back to your production database in an automated fashion.

However, there is a section in that guide,

"Performing RMAN TSPITR Using Your Own Auxiliary Instance"

which gives you more manual control. In this case, you would create the auxiliary instance, restore/recover the tablespace, and then use Data Pump to export the table you want and re-import it into your production database.

I endorse your caution about doing this on your production db.

MOS note 96197.1 describes this process for a non-RMAN backup, but if you can restore your datafiles from RMAN, you can follow the same approach.
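
To make that last step concrete, once the tablespace has been restored and recovered in the auxiliary instance, pulling back a single table is just an ordinary Data Pump round trip. A minimal sketch - the schema, table and directory names below are placeholders, not anything from this thread:

-- on the auxiliary instance: export only the table you need
expdp system directory=DP_DIR dumpfile=one_table.dmp logfile=one_table_exp.log tables=APP_OWNER.MY_TABLE

-- on production: load the rows back into the (now empty) truncated table
impdp system directory=DP_DIR dumpfile=one_table.dmp logfile=one_table_imp.log tables=APP_OWNER.MY_TABLE table_exists_action=append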


Comments

A reader, December 16, 2015 - 10:29 am UTC

Thanks, I will try that. I actually found that page before, but for some reason I didn't see the detailed explanation. Is there a way to prevent TRUNCATE on any table in a schema, even if you are the schema owner? I need to allow truncate on a few tables but prevent it on core tables.
Connor McDonald
December 17, 2015 - 1:20 am UTC

Well, to truncate a table in another schema you need DROP ANY TABLE, which I would hope is granted to very, very few people - aka none :-)

So the solution is that people never connect as the schema owner - that's just good practice for applications, even without considering truncate.

A common technique is to disable the ability to connect as the schema owner, so that all access must come from another schema which has only insert/update/delete/select privileges on the tables and execute on the PL/SQL.
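
As a simple illustration of that approach (the account and table names here are made up for the example):

-- lock the schema owner so nobody can log in as it
alter user app_owner account lock;

-- the application connects as a separate account with object privileges only
create user app_user identified by "a_strong_password";
grant create session to app_user;
grant select, insert, update, delete on app_owner.some_table to app_user;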

But you can do this as a stopgap:

SQL> create or replace trigger ddl_trigger
  2  before truncate on scott.SCHEMA
  3  declare
  4      l_sysevent varchar2(25);
  5  begin
  6      select ora_sysevent into l_sysevent from dual;
  7
  8      if ( l_sysevent in ('TRUNCATE') )
  9      then
 10          raise_application_error(-20000,'no no no no');
 11      end if;
 12  end;
 13  /

Trigger created.

SQL> truncate table t;
truncate table t
               *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20000: no no no no
ORA-06512: at line 8


Alex Adamowicz, December 18, 2015 - 3:25 pm UTC

Might I also suggest using transport tablespace from RMAN backups? This would allow you to copy your schema, as of a past point in time, to another database, so you can be sure what you want is there and avoid nuking anything in production. I would find this less cumbersome than the manual auxiliary option.
Chris Saxon
December 19, 2015 - 4:20 am UTC

Good point.
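
For anyone unfamiliar with the command Alex is referring to, the basic shape is below; the tablespace name, destinations and point in time are placeholders, and the reader's real invocation appears further down the thread:

run {
  transport tablespace users
    tablespace destination '/u01/ttbs'
    auxiliary destination '/u01/aux'
    until time "to_date('15-DEC-2015 18:00:00','DD-MON-YYYY HH24:MI:SS')";
}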

A reader, December 20, 2015 - 4:19 am UTC

For some reason, I am not getting emails when a response is added to my question.
@Connor
I will look into implementing that script in the production database so that truncates are prevented. I do have some scripts that need to execute truncates on certain tables. Is there a way to allow truncate on those few tables?
@Alex
Thanks for the response - that looks like exactly what I need. I am testing it on a test database in VMware. I ran into the error below:

UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2862
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4052
ORA-06512: at line 1


I created a parameter file and added a STREAMS_POOL_SIZE=50M parameter and that fixed the issue. I will import the data into another test db and see how it goes before I do this on production.

During import I ran into this error:
DECLARE
*
ERROR at line 1:
ORA-06512: at "SYS.DBMS_STREAMS_TABLESPACE_ADM", line 1854
ORA-06512: at line 18


I am wondering if this is due to ASM. I did the tablespace transport to a directory. The test database I am importing into uses ASM. The script RMAN generated for the import is:

/*
   The following command may be used to import the tablespaces.
   Substitute values for <logon> and <directory>.
   impdp <logon> directory=<directory> dumpfile= 'dmpfile.dmp' transport_datafiles= /u01/ttbs/users.259.898869355
*/
--------------------------------------------------------------
-- Start of sample PL/SQL script for importing the tablespaces
--------------------------------------------------------------
-- creating directory objects
CREATE DIRECTORY STREAMS$DIROBJ$1 AS  '/u01/ttbs/';
CREATE DIRECTORY STREAMS$DIROBJ$DPDIR AS  '/u01/ttbs';
/* PL/SQL Script to import the exported tablespaces */
DECLARE
  -- the datafiles
  tbs_files     dbms_streams_tablespace_adm.file_set;
  cvt_files     dbms_streams_tablespace_adm.file_set;
  -- the dumpfile to import
  dump_file     dbms_streams_tablespace_adm.file;
  dp_job_name   VARCHAR2(30) := NULL;
  -- names of tablespaces that were imported
  ts_names       dbms_streams_tablespace_adm.tablespace_set;
BEGIN
  -- dump file name and location
  dump_file.file_name :=  'dmpfile.dmp';
  dump_file.directory_object := 'STREAMS$DIROBJ$DPDIR';
  -- forming list of datafiles for import
  tbs_files( 1).file_name :=  'users.259.898869355';
  tbs_files( 1).directory_object :=  'STREAMS$DIROBJ$1';
  -- import tablespaces
  dbms_streams_tablespace_adm.attach_tablespaces(
    datapump_job_name      => dp_job_name,
    dump_file              => dump_file,
    tablespace_files       => tbs_files,
    converted_files        => cvt_files,
    tablespace_names       => ts_names);
  -- output names of imported tablespaces
  IF ts_names IS NOT NULL AND ts_names.first IS NOT NULL THEN
    FOR i IN ts_names.first .. ts_names.last LOOP
      dbms_output.put_line('imported tablespace '|| ts_names(i));
    END LOOP;
  END IF;
END;
/
-- dropping directory objects
DROP DIRECTORY STREAMS$DIROBJ$1;
DROP DIRECTORY STREAMS$DIROBJ$DPDIR;
--------------------------------------------------------------
-- End of sample PL/SQL script
--------------------------------------------------------------



Connor McDonald
December 21, 2015 - 12:51 am UTC

As well as 'ora_sysevent', you have access to the built-ins

ora_dict_obj_owner
ora_dict_obj_name

so you can easily modify your trigger to be more granular.
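
For example, a variation of the earlier trigger that only blocks truncates of specific "core" tables might look like this (the table names are placeholders for your own core tables):

create or replace trigger ddl_trigger
before truncate on scott.schema
begin
    -- block truncate on the core tables, allow it on everything else
    if ora_dict_obj_owner = 'SCOTT'
       and ora_dict_obj_name in ('CORE_TAB1','CORE_TAB2')
    then
        raise_application_error(-20000,'no no no no');
    end if;
end;
/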

Hope this helps.

A reader, December 20, 2015 - 1:48 pm UTC

I managed to import the data by creating my own par file, but I couldn't import the data into ASM. I used the filename from the sample import script, imported the data to the file system, and remapped the tablespace since it was a USERS tablespace. Also, apparently with transportable tablespaces you need to create all the users owning objects in that tablespace before you can successfully import it (see the sketch below). I am now attempting the transport tablespace on production and it looks like it's gonna take a while (~300GB of data). Will update if I am successful.
P.S.: not much luck with Oracle Support - it's been a week now and no solution to restoring tape backups to a remote server, smh.
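
For anyone else hitting the same thing, the pre-creation plus import step looks roughly like this; the user name, password and directory object are placeholders, while the dump file and datafile names come from the sample script quoted earlier:

-- the owning users must exist in the target database before the plug-in
create user app_owner identified by "a_strong_password";
grant create session to app_owner;

-- then run the transportable-tablespace import
impdp system directory=TTBS_DIR dumpfile=dmpfile.dmp transport_datafiles='/u01/ttbs/users.259.898869355'
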
Connor McDonald
December 21, 2015 - 12:56 am UTC

Thanks for keeping us posted.

If you have issues with Support, it's good to follow the various escalation procedures available.

fynix, December 21, 2015 - 1:57 pm UTC

Just when I thought I was finally making progress, I ran into this error:

channel ORA_AUX_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_AUX_SBT_TAPE_1: restoring archived log
archived log thread=2 sequence=3609
channel ORA_AUX_SBT_TAPE_1: restoring archived log
archived log thread=3 sequence=3957
channel ORA_AUX_SBT_TAPE_1: restoring archived log
archived log thread=2 sequence=3610
....
archived log thread=1 sequence=4297
channel ORA_AUX_SBT_TAPE_2: reading from backup piece /PRODDB/L0_PRODDB_898380995_50243_1_23qood63_1_1
channel ORA_AUX_SBT_TAPE_1: ORA-19870: error while restoring backup piece /PRODDB/L0_PRODDB_898380997_50244_1_24qood65_1_1
ORA-19502: write error on file "/u01/app/oracle/product/11.1.0/db_1/dbs/arch2_3609_875846841.dbf", block number 1822721 (block size=512)
ORA-27072: File I/O error
Linux-x86_64 Error: 2: No such file or directory
Additional information: 4
Additional information: 1822721
Additional information: 3584

RMAN-06900: WARNING: unable to generate V$RMAN_STATUS or V$RMAN_OUTPUT row
RMAN-06901: WARNING: disabling update of the V$RMAN_STATUS and V$RMAN_OUTPUT rows
ORACLE error from target database:
ORA-09817: Write to audit file failed.
Linux-x86_64 Error: 28: No space left on device

failover to previous backup

Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of transport tablespace command at 12/21/2015 02:30:44
RMAN-03015: error occurred in stored script Memory Script
RMAN-20506: no backup of archived log found
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of archived log for thread 2 with sequence 3624 and starting SCN of 34200056591 found to restore
RMAN-06025: no backup of archived log for thread 2 with sequence 3621 and starting SCN of 34196939812 found to restore
RMAN-06025: no backup of archived log for thread 2 with sequence 3620 and starting SCN of 34196627047 found to restore
RMAN-06025: no backup of archived log for thread 2 with sequence 3619 and starting SCN of 34196306049 found to restore
RMAN-06025: no backup of archived log for thread 2 with sequence 3618 and starting SCN of 34195991406 found to restore


Here is the run block I used for the transport tablespace:

run{
SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/auxparm.ora';
transport tablespace users,indxs TABLESPACE DESTINATION '/u11/ttbs' AUXILIARY DESTINATION '/u11/aux' until scn 34213461886;
}


Contents of the auxiliary parameter file:
streams_pool_size=100M


Despite me setting the auxiliary destination, RMAN decided to restore the archive logs to ?/dbs. I am not sure exactly why this happened, since DB_CREATE_FILE_DEST should point to the auxiliary destination. ?/dbs is on a disk with only 100GB of space, so space ran out and that is why the process failed. The restore took 18 hours and I really do not want to wait another 18 hours to find out whether the transport will complete successfully. How do I get RMAN to look at the location where the restored files already are and basically continue from the archive log restore step? The automatic initialization parameters RMAN created:
executing command: SET auxiliary parameter file

RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point in time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Tablespace UNDOTBS2
Tablespace UNDOTBS3

Creating automatic instance, with SID='wgrn'
using contents of file /tmp/auxparm.ora

initialization parameters used for automatic instance:
db_name=PRODDB
compatible=11.1.0.0.0
db_block_size=8192
db_files=500
db_unique_name=tspitr_PRODDB_wgrn
large_pool_size=1M
shared_pool_size=110M
processes=50
ifile=/tmp/auxparm.ora
db_create_file_dest=/u11/aux
control_files=/u11/aux/cntrl_tspitr_PRODDB_wgrn.f



I am thinking of modifying the run block to:

run{
set archivelog destination to '/u11/aux/arc';
SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/auxparm.ora';
transport tablespace users,idx TABLESPACE DESTINATION '/u11/ttbs' AUXILIARY DESTINATION '/u11/aux' until scn 34213461886;
}

Connor McDonald
December 21, 2015 - 11:59 pm UTC

I'm pretty sure that "AUXILIARY DESTINATION" is for online redo logs, not archivelogs.

I'd be inclined to use the set command *and* set log_archive_dest_1 in the init.ora.
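
Something along those lines, reusing the reader's own auxiliary parameter file and destinations - an untested sketch, with the archivelog location purely as an example:

# /tmp/auxparm.ora
streams_pool_size=100M
log_archive_dest_1='LOCATION=/u11/aux/arc'

run {
  set archivelog destination to '/u11/aux/arc';
  set auxiliary instance parameter file to '/tmp/auxparm.ora';
  transport tablespace users, indxs
    tablespace destination '/u11/ttbs'
    auxiliary destination '/u11/aux'
    until scn 34213461886;
}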


A reader, December 22, 2015 - 2:45 pm UTC

I am not able to create the trigger. It says system triggers can't be created on tables.
Chris Saxon
December 22, 2015 - 4:04 pm UTC

That's because you can't create system triggers on tables.

A system trigger is created on either a schema or the database. Its triggering event is composed of either DDL statements (listed in "ddl_event") or database operation statements (listed in "database_event").


http://docs.oracle.com/database/121/LNPLS/triggers.htm#LNPLS99887

fynix, December 23, 2015 - 12:13 am UTC

I swear the Oracle gods hate me :( I have been on this for days now, and just when it gets to 99% something happens. I added the set archivelog destination clause to the run block:

run{
set archivelog destination to '/u11/aux/arc';
SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/auxparm.ora';
transport tablespace users,idx TABLESPACE DESTINATION '/u11/ttbs' AUXILIARY DESTINATION '/u11/aux' 
until scn 34213461886;
}


Everything ran smoothly (for 19 hrs) until I encountered this error. It looks like an error from the OS caused RMAN to exit. I am using an NFS mount to store the aux data. It's so frustrating to have something run for almost an entire day and then have to start all over. It's sad, because I had been warning management since I came here to purchase storage so I can perform backups to disk and archive those backups to tape. A disk backup would have allowed me to simply copy the backup pieces to a remote server, restore the traditional way, and extract that table data.

archived log file name=/u11/aux/arc/1_4298_875846841.dbf thread=1 sequence=4298
channel clone_default: deleting archived log(s)
archived log file name=/u11/aux/arc/3_3969_875846841.dbf RECID=2080279 STAMP=899098139
archived log file name=/u11/aux/arc/3_3970_875846841.dbf thread=3 sequence=3970

Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of transport tablespace command at 12/22/2015 09:28:14
RMAN-03015: error occurred in stored script Memory Script
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/u11/aux/arc/3_3970_875846841.dbf'
ORA-00283: recovery session canceled due to errors
ORA-12801: error signaled in parallel query server P005
ORA-01115: IO error reading block from file 2 (block # 453715)
ORA-01110: data file 2: '/u11/aux/TSPITR_RECSYS_WGRN/datafile/o1_mf_sysaux_c7kym2fg_.dbf'
ORA-27072: File I/O error
Linux-x86_64 Error: 5: Input/output error
Additional information: 4
Additional information: 453715
Additional information: -1

Connor McDonald
December 23, 2015 - 6:03 am UTC

Not that I want to commit you to another 19 hours :-( but are the NFS mounts all done with the correct options?

Typically you'll want something like:

rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768

and there's good info in the NetApp guides for general NFS

http://www.netapp.com/us/media/tr-3633.pdf

but normally there will be a message in the alert log if Oracle thinks the mount options are not appropriate.

fynix, December 23, 2015 - 2:23 pm UTC

I followed the guidelines suggested on ORACLE-BASE for RAC with NFS, so I mounted the storage from the ODA as:

sudo  mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 odanode0:/cloudfs/backups /u11


This is the export entry on the ODA server:
/cloudfs/backups            rac02(rw,sync,no_root_squash)


The only difference I see is with timeo (300 vs 600) and actimeo. I am attempting it one more time, although it is going to take much longer this time because for some reason the automatic script did not detect the already restored files, and I wasn't sure exactly which parameter I should use to point to the files, i.e. set newname for datafile 1 to '/u01/users.dbf' or set newname for clone datafile 1 to '/u01/users.dbf'. Better safe than sorry. If this fails again I have a last-ditch effort, which is to attach a 1.5TB 2.5" laptop drive over USB and do it that way. All I got is lemons lol.

Chris Saxon
December 24, 2015 - 2:08 am UTC

Can I suggest you try restoring something off the smallest tablespace you've got - i.e., remove the 19-hour elapse - until you get some initial success, and *then* try the "real deal"? That way, if things go awry, you haven't burnt too much time.

I'm impressed by your tenacity :-) Merry Xmas

fynix, December 24, 2015 - 3:16 am UTC

I found this document on Oracle Support: ID 1084192.1. It seems the problem is the NFS mount after all; it fails at the archivelog recovery point as described in the document. I am moving the datafiles over to the USB hard drive and I will reattempt the tablespace transport.
Your suggestion is actually pretty genius. I've been so preoccupied with finding out why the process was failing in the first place that I wasn't thinking of a smarter way to troubleshoot. I will test with a smaller tablespace, although considering that SYSTEM and all the undo tablespaces must be restored, the wait will be nearly as long: those tablespaces total about 320GB, and with the tape speed at 8MB/s per drive (2 drives) that is ~7 hrs. I guess that's better than 19 or more lol. I will leave it running and keep my fingers crossed. Thanks.
P.S.: I am being hard-headed only because I hate when something fails and I can't find the exact root cause lol.

fynix, December 24, 2015 - 3:54 am UTC

Merry Xmas to you too :)

fynix, December 26, 2015 - 9:58 pm UTC

This has gotta be the longest time I have spent attempting any RMAN process. Good news: mounting a USB drive fixed the I/O error, and the restore and recovery process completed. Bad news: it then failed due to a memory error.

media recovery complete, elapsed time: 21:02:32
Finished recover at 26-DEC-15

database opened

contents of Memory Script:
{
#mark read only the tablespace that will be exported
sql clone "alter tablespace USERS read only";
#mark read only the tablespace that will be exported
sql clone "alter tablespace INDX read only";
# create directory for datapump export
sql clone "create or replace directory STREAMS_DIROBJ_DPDIR as ''
/u03/ttbs''";
# export the tablespaces in the recovery set
host 'expdp userid=
'********'
 transport_tablespaces= USERS,
 INDX dumpfile=
dmpfile.dmp directory=
STREAMS_DIROBJ_DPDIR logfile=
explog.log';
}
executing Memory Script

ORACLE error from auxiliary database: ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","DBMS_RCVMAN","PL/SQL MPCODE","BAMIMA: Bam Buffer")
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_RCVMAN"

error executing package DBMS_RCVMAN in AUXILIARY database

Removing automatic instance
shutting down automatic instance
ORACLE error from auxiliary database: ORA-04031: unable to allocate 4088 bytes of shared memory ("shared pool","select name","PL/SQL MPCODE","BAMIMA: Bam Buffer")
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_RCVMAN"

error executing package DBMS_RCVMAN in AUXILIARY database
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of transport tablespace command at 12/26/2015 02:58:21
RMAN-06429: AUXILIARY database is not compatible with this version of RMAN
RMAN-03015: error occurred in stored script Memory Script
RMAN-06429: AUXILIARY database is not compatible with this version of RMAN

RMAN>

Recovery Manager complete.



Searching on Oracle Support, it looks like the issue is that the shared_pool_size set by the automatic instance is too small. I would make it bigger in the parameter file, but I am not sure how to point the aux instance to the already created files. It took 48 hours to actually restore and recover 527GB of data; I definitely do not want to run through that again. I tried using set newname in the run block, but that failed:

run{
set archivelog destination to '/u03/aux/arc';
SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/auxparm.ora';
set newname for clone datafile 1 to "/u03/aux/TSPITR_PRODDB_JWWH/datafile/o1_mf_system_c7qskod1_.dbf";
set newname for clone datafile 43 to "/u03/aux/TSPITR_PRODDB_JWWH/datafile/o1_mf_system_c7rbqokr_.dbf";
...
set newname for clone datafile 115 to "/u03/ttbs/indx.10126.851105465";
set newname for clone datafile 128 to "/u03/ttbs/indx.10798.885747433";
transport tablespace users,indx TABLESPACE DESTINATION '/u03/ttbs' AUXILIARY DESTINATION '/u03/aux' until scn 34213461886;
}


RMAN>
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "double-quoted-string": expecting one of: "new"
RMAN-01007: at line 1 column 38 file: standard input


I am not sure if there is a way to reattempt the transport without recreating all the datafiles. I am thinking of transferring all the recovered files to another server, manually creating a controlfile and pfile, and creating a database from those files. Is there another way?
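
A couple of hedged observations rather than a definitive fix: the RMAN-01009 complaint about a "double-quoted-string" suggests the SET NEWNAME file names should be in single quotes (that is how RMAN quotes file names elsewhere in this thread), and a larger shared pool could be supplied through the same auxiliary parameter file already in use. A sketch only - the sizes are guesses:

# /tmp/auxparm.ora  (illustrative sizes)
streams_pool_size=100M
shared_pool_size=300M

and, inside the run block, single-quoted file names:

set newname for clone datafile 1 to '/u03/aux/TSPITR_PRODDB_JWWH/datafile/o1_mf_system_c7qskod1_.dbf';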

fynix, December 27, 2015 - 3:28 am UTC

I was able to recover the table by manually creating a control file pointing to the recovered datafiles, opening with RESETLOGS, and exporting the table to a dump file. Phew, what a rough two weeks. I don't think I am going to get any disk storage anytime soon, so I am going to implement the trigger to prevent truncates. I will follow up with Oracle about how to enable restoring the database from tape to another server, the lack of which is the reason I had to go through all this in the first place.
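
For readers who end up in the same spot, the manual route described above looks roughly like the following on the scratch host. Every file name, size and character set below is a placeholder, and the DATAFILE clause must list every file that was restored and recovered:

-- with the scratch instance started NOMOUNT
create controlfile reuse database "PRODDB" resetlogs archivelog
    maxlogfiles 16
    maxdatafiles 500
    logfile
      group 1 '/u03/aux/redo01.log' size 200m,
      group 2 '/u03/aux/redo02.log' size 200m
    datafile
      '/u03/aux/TSPITR_PRODDB_JWWH/datafile/o1_mf_system_c7qskod1_.dbf',
      '/u03/ttbs/users.259.898869355'
    character set AL32UTF8;

alter database open resetlogs;

-- then export just the table that was truncated (placeholder names)
-- expdp system directory=DP_DIR dumpfile=one_table.dmp tables=APP_OWNER.MY_TABLE
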
Connor McDonald
December 28, 2015 - 11:55 pm UTC

"Experience is what you get when you were expecting something else".

One of my favourite quotes. Thanks so much for posting the running updates to AskTom. It makes for a great read.

And congratulations on getting the job done.

fynix, December 29, 2015 - 3:55 pm UTC

Thanks Connor,
It definitely was a learning experience for me. I hope I never have to go through that again, but at least if I do I know what not to do. I am still trying to get a solution for the tape issue with Oracle Support, and it sure is a slow process. I am opening up discussions with the bosses, as we need to fix the current setup as best we can to prevent this from happening again; unfortunately, legacy applications and a lack of documentation will make changes such as changing application schema passwords very difficult. A classic case of too many cooks and no recipes left behind to figure out what was made. I plan to rein in and eliminate all that when we transition to some new hardware (ODA). Thanks so much Connor and Alex Adamowicz - your tips here and there really helped to solve this problem.
P.S.: the parameter filesystemio_options was set to NONE, which is probably why the tablespace transport was failing during archivelog application. Oracle recommends setting it to SETALL when NFS mounts are in use. This requires a restart though, so I will be leaving that change for whenever we have maintenance downtime.
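
For reference, the deferred change would be something like this (assuming an spfile is in use); it only takes effect after the next instance restart:

alter system set filesystemio_options = SETALL scope = spfile;
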
Chris Saxon
December 30, 2015 - 3:43 am UTC

Might be worth checking out Direct NFS. Better performance and solves a lot of those OS particulars.

(I'm biased - I'm a huge fan of dNFS)
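
A minimal oranfstab entry for the mount used in this thread might look like the following; the path address is a placeholder, and enabling dNFS also requires linking the dNFS ODM library, which is version-specific, so check the documentation for 11.1:

# $ORACLE_HOME/dbs/oranfstab  (sketch)
server: odanode0
path: 192.168.1.10
export: /cloudfs/backups  mount: /u11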

fynix, December 30, 2015 - 1:09 pm UTC

Yes, I will definitely implement dNFS. It may only be a few MB/s faster, but it works better with the way Oracle reads and writes datafiles. The goal, though, is to use NFS sparingly.
