
Question and Answer

Tom Kyte

Thanks for the question, Vijay.

Asked: October 01, 2000 - 1:34 pm UTC

Last updated: February 15, 2010 - 3:29 pm UTC

Version: 8.1.6

Viewed 10K+ times

You Asked

1. How do I find out how much space is filled in online-redo log files.

2. How do I find out how much space is filled in the current datafiles.

3. What are the important X$tables.

and Tom said...

#1) I am not aware of any method to tell where you are currently pointing inside of an online redo log file. Since we use them in a circular fashion, they never really "get full". You can review your alert.log file to see how frequently they switch, but the concept of "filled" doesn't really apply to online redo.
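
If what you are after is which log group is current and how often switches happen, a minimal sketch against the documented V$LOG and V$LOG_HISTORY views (no X$ needed) would look like this:

select group#, round(bytes/1024/1024) mbytes, members, status
  from v$log;

select trunc(first_time,'HH24') hr, count(*) switches
  from v$log_history
 group by trunc(first_time,'HH24')
 order by 1;

The first query shows which group has STATUS = 'CURRENT'; the second gives a rough per-hour switch rate, the same information you would dig out of the alert.log.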


#2) A query such as:

ops$tkyte@DEV816> select substr( a.file_name,
                                 instr(a.file_name,'/',-1) ) file_name,
                         round( a.bytes/1024/1024 ) mbytes,
                         round( nvl(b.free,0)/1024/1024 ) free,
                         round( (a.bytes-nvl(b.free,0))/1024/1024 ) used
                    from dba_data_files a,
                         ( select sum(bytes) free, file_id
                             from dba_free_space
                            group by file_id ) b
                   where a.file_id = b.file_id(+)
                  /

FILE_NAME                MBYTES       FREE       USED
-------------------- ---------- ---------- ----------
/system01.dbf               452          6        446
/rbs_ts_01.dbf               55          0         55
/rbs_ts_02.dbf             1135          0       1135
/rbs_ts_03.dbf               91          0         91
/rbs_ts_04.dbf               92          0         92
/rbs_ts_05.dbf               42          0         42
/users.dbf                  471          0        471
/scats_data.dbf              10         10          1
/scheduler_data.dbf         184        183          2
/scheduler_idx.dbf           17         16          1
/flows_data.dbf             109          0        109
/flows_idx.dbf               10         10          1
/portal_data.dbf            492          0        492
/drsys.dbf                 2000          0       2000
/pkt_data.dbf               123          0        123
/pkt_idx.dbf                118          0        118

16 rows selected.

will do that.

#3) In my opinion -- there aren't any. I use the V$ tables. X$ tables, being wholly undocumented and changeable from dot release to dot release, are only useful in rare circumstances -- if ever. The V$ views, built mostly on top of the X$ tables, are what you want to master.
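
If you are curious how a given V$ view is built on the X$ tables, the documented V$FIXED_VIEW_DEFINITION view will show you -- a small sketch (GV$LOG chosen purely as an example):

select view_definition
  from v$fixed_view_definition
 where view_name = 'GV$LOG';

That is usually as close to the X$ tables as you ever need to get.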


Comments

Mapping many dbf files on a tablespace

Prats, February 08, 2010 - 2:44 am UTC

Tom,
I have recently come across a database which handles terabytes of data [or maybe more]. Here, for a particular tablespace, I saw around 30 dbf files being created.

1) Apart from the initial 60GB space consumption at the very beginning, will there be any performance-related limitations?
2) Is it okay to create so many DBF files, as compared to a few, since we are anyway specifying
AUTOEXTEND ON NEXT 2G MAXSIZE UNLIMITED


This DB and related hardware are shipped as part of a product, so the DBA wants to sort out almost all such issues in the beginning.

create tablespace testtblspace
datafile
'/test1.dbf' size 2G reuse AUTOEXTEND ON NEXT 2G MAXSIZE UNLIMITED,
'/test2.dbf' size 2G reuse AUTOEXTEND ON NEXT 2G MAXSIZE UNLIMITED,
'/test3.dbf' size 2G reuse AUTOEXTEND ON NEXT 2G MAXSIZE UNLIMITED,
....
....
'/test30.dbf' size 2G reuse AUTOEXTEND ON NEXT 2G MAXSIZE UNLIMITED;

Tom Kyte
February 09, 2010 - 7:00 pm UTC

30 is a very small number?


you might not want maxsize unlimited, do you really want that?
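
If you do decide to cap growth, it is a one-liner per file -- a sketch (the 30g cap here is made up for illustration; pick whatever your storage can actually honor):

alter database datafile '/test1.dbf' autoextend on next 2g maxsize 30g;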

mfz, February 09, 2010 - 7:52 pm UTC

How do you determine the optimal size / number of redo log files / groups?

Mapping many dbf files on a tablespace (cont)

prats, February 10, 2010 - 1:53 am UTC

Thanks for your response, Tom.

1) How would having 30 dbf files be better than having 4-5 dbf files with the MAXSIZE UNLIMITED clause? Can you please elaborate on whether I should opt for so many dbf files?
2) Also, will data storage be sequential in these dbf files? I mean, once dbf1 gets filled up, does dbf2 get pumped with new data?


Tom Kyte
February 15, 2010 - 3:29 pm UTC

1) "Can you please elaborate on whether I should opt for so many dbf files?" Because you want to - that would be the reason. IF you do not want to, then you would NOT. Some people prefer to have lots of small (manageable in their mind) files rather than a few (unmanageable in their mind) files.

It doesn't really matter in the grand scheme of things whether you have 1 or 100; it is the same level of "manageability" these days. You are not doing things a file at a time - you are pointing and clicking, or typing a single RMAN command to do mass operations.


2) We use them in a round-robin fashion - we try to spread the data out over all of them.
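
You can watch that spreading yourself with a quick sketch against the documented DBA_EXTENTS view (the segment name T here is hypothetical -- substitute one of your own tables):

select file_id, count(*) extents, sum(blocks) blocks
  from dba_extents
 where owner = user
   and segment_name = 'T'
 group by file_id
 order by file_id;

A segment of any real size in that tablespace should show extents landing in most or all of the 30 files.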