Question and Answer

Connor McDonald

Thanks for the question, alee.

Asked: January 15, 2018 - 9:05 am UTC

Last updated: July 13, 2023 - 5:40 am UTC

Version: 12

Viewed 10K+ times!

You Asked

I have a bigfile tablespace which is 100G in size. I wonder: bigfile tablespace vs a normal smallfile tablespace, which is better? Someone said it is very difficult to recover if a bad block occurs in a bigfile, and what's more, that performance is worse than a normal smallfile tablespace.

Which is the best choice if I have many tablespaces, each with about 100G of data, each growing by 50G every year?

and Connor said...

There is no performance difference between a bigfile tablespace and a smallfile tablespace. As you probably know, the ROWID is used to locate a particular row for a particular table in a tablespace (ie, one of its datafiles).

The difference between bigfile and smallfile is simply that we dedicate more bits in the rowid to the block address within the file, and fewer bits to the file number. So the file can be bigger (we can address more blocks), but you can have only one file (because we don't have as many bits any more to reference it).

So you won't see any differences in performance, usage etc.
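To make the distinction concrete, here's a minimal sketch of creating each kind of tablespace and checking which kind an existing tablespace is. The tablespace names and datafile paths are illustrative, not from the original discussion:

    -- A bigfile tablespace has exactly one (potentially very large) datafile
    CREATE BIGFILE TABLESPACE demo_big
      DATAFILE '/u01/oradata/demo_big01.dbf' SIZE 100G;

    -- A smallfile (traditional) tablespace can have many datafiles
    CREATE SMALLFILE TABLESPACE demo_small
      DATAFILE '/u01/oradata/demo_small01.dbf' SIZE 30G;

    -- The BIGFILE column in DBA_TABLESPACES shows which is which
    SELECT tablespace_name, bigfile
    FROM   dba_tablespaces;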

What *is* worth considering is your "unit of work". If you have to recover an entire datafile, then whether it is a smallfile or a bigfile, it is worth knowing what the recovery time will be, and whether it fits into your acceptable business availability requirements. If you had a 100TB bigfile that you had to restore, it might take a while.

That said, it's rare to need to recover a particular datafile - the most common scenarios are recovering the database, or recovering/fixing some blocks. In either case, there are mechanisms available to avoid needing to do massive file operations, in which case the choice between bigfile and smallfile makes no difference.
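The block-level mechanism referred to here is RMAN block media recovery, which repairs individual corrupt blocks without restoring the whole datafile - so the size of the file is irrelevant. A sketch (the file and block numbers are placeholders):

    -- Corrupt blocks found during backups/validation are recorded here
    SELECT file#, block#, blocks
    FROM   v$database_block_corruption;

    -- In RMAN: repair one specific block...
    RECOVER DATAFILE 7 BLOCK 233;

    -- ...or repair everything listed in v$database_block_corruption
    RECOVER CORRUPTION LIST;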


Rating

  (4 ratings)

Is this answer out of date? If it is, please let us know via a Comment

Comments

how to recover a bad block in a big file tablespace

alee alee, January 17, 2018 - 7:46 am UTC

Thank you very much. You said, "That said, it's rare to need to recover a particular datafile - the most common scenarios are recover a database, or recover/fix some blocks. In either case, there are mechanisms available to avoid needing to do massive file operations". Would you please teach me: if a bad block occurs in my bigfile tablespace (such as a datafile with a 100G size), what is my best solution?

Juan Ignacio, December 03, 2020 - 12:11 am UTC

Hi,

Talking about BIGFILES, what about allocation: which is better, to allocate them on ASM or on a file system/volume group?

Regards
Connor McDonald
December 04, 2020 - 3:32 am UTC

I don't think it really matters.

ASM is a good default because we take care of all of the underlying IO optimizations (direct/asynch IO etc).

But as long as your file system supports direct/asynch and is configured to use them, then it should be just as good.

The only thing to steer clear from nowadays is raw, because we no longer support it.
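If you do go the file system route, a quick check that direct and asynchronous I/O are actually in play is to look at the relevant initialization parameters (on most platforms, FILESYSTEMIO_OPTIONS=SETALL enables both):

    -- In SQL*Plus
    SHOW PARAMETER filesystemio_options

    -- Or query the parameters directly
    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('filesystemio_options', 'disk_asynch_io');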

A reader, July 12, 2023 - 5:54 am UTC

Hi
Can you please look at the below URL? There they mention a bigfile tablespace performance benefit over smallfile tablespaces, though the database version is 10g, in a data warehouse environment on Exadata. Does it still hold true on the current version of the database in an OLTP environment on Exadata? There is some official documentation referenced in the discussion as well.

https://community.oracle.com/mosc/discussion/3106771/is-bigfile-tablespace-mandatory-on-exadata-to-leverage-the-maximum-benefits
Connor McDonald
July 13, 2023 - 5:40 am UTC

The arguments there seem to be:

a- less space in SGA
b- less space in controlfile
c- less files for checkpoint

I'll counter that with a challenge to anyone who can post an AWR report where they can demonstrate a significant performance degradation due to the number of datafiles in their database.

If you had thousands and thousands of datafiles (which also means you're into the hundreds of terabytes region for overall database size), then *maybe* you might see a small performance difference.

Bigfiles are a *manageability* feature, not a performance feature.

A reader, July 12, 2023 - 7:48 am UTC

Just adding the doc ID for reference, and excerpts from the forum URL:

BIGFILE Type Tablespaces Versus SMALLFILE Type (Doc ID 262472.1)

"Let's say you decide to create several bigfile tablespaces rather than having smallfile tablespaces with several datafiles each. With that you can reduce the DB_FILES initialization parameter, which will reduce the amount of SGA space required for datafile information. Also the controlfile will be smaller, because it no longer has to store information for that many datafiles; instead it will just have information for one big datafile per tablespace. And as for CKPT and DBWR, these bigfile tablespaces are locally managed tablespaces with automatic segment space management, which provides a different optimized code path. Plus the rowid is 'different': since the relative file number is always the same there's no need for it, so those 'extra bytes' are used to allow bigger datafiles."
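If you want to see how close your database actually is to those limits before worrying about them, a simple sanity check is to compare your datafile count against DB_FILES and the controlfile's datafile record section:

    -- How many datafiles do we actually have?
    SELECT COUNT(*) FROM v$datafile;

    -- Versus the configured maximum
    SHOW PARAMETER db_files

    -- Controlfile space used for datafile records
    SELECT type, records_used, records_total
    FROM   v$controlfile_record_section
    WHERE  type = 'DATAFILE';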

More to Explore

Administration

Need more information on Administration? Check out the Administrators guide for the Oracle Database