Some days ago, I asked a question about bigfile tablespaces vs. normal smallfile tablespaces, and you said: "That said, its rare to need to recover a particular datafile - the most common scenarios are recover a database, or recover/fix some blocks. In either case, there are mechanisms available to avoid needing to do massive file operations, in which case, the choice between bigfile and smallfile makes no difference." (the URL is:
https://asktom.oracle.com/pls/apex/f?p=100:12:0::NO::P12_ORIG,P12_PREV_PAGE,P12_QUESTION_ID:Y,1,9537337800346114121 )
Would you please teach me: if a bad block is found in a bigfile tablespace (say, in a datafile of 100G), what is my best solution for the shortest database downtime? Thank you very much.
If you find a corrupted block mentioned (say) in the alert log, it is just an RMAN command:
BLOCKRECOVER DATAFILE 7 BLOCK 123456;
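As a sketch of the end-to-end flow (reusing datafile 7 / block 123456 from above as placeholder values - substitute the file and block numbers reported in your alert log): from 11g onwards the BLOCKRECOVER syntax was superseded by RECOVER ... BLOCK, and you can also repair every corrupt block RMAN already knows about in a single command via the corruption list.

```
-- Optionally scan the file first; this populates V$DATABASE_BLOCK_CORRUPTION
RMAN> VALIDATE DATAFILE 7;

-- Repair one known-bad block (11g+ syntax; pre-11g releases use BLOCKRECOVER)
RMAN> RECOVER DATAFILE 7 BLOCK 123456;

-- Or repair every block currently listed in V$DATABASE_BLOCK_CORRUPTION
RMAN> RECOVER CORRUPTION LIST;
```

Note that block media recovery runs with the database open and the datafile online - only the blocks being recovered are inaccessible while it works - which is why the 100G size of a bigfile datafile doesn't translate into extra downtime.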
Or you could do even better: if you have a standby database, we can automatically repair the block on the primary by fetching a valid version of the block from the standby. In that case you don't run any commands as such; you would just see evidence of the block recovery in the alert log.
I'd recommend a read of the data corruption white paper here:
http://www.oracle.com/technetwork/database/availability/maa-datacorruption-bestpractices-396464.pdf