Question and Answer

Tom Kyte

Thanks for the question, Sachin.

Asked: September 21, 2003 - 10:28 pm UTC

Last updated: May 04, 2009 - 10:58 am UTC

Version: 8.1.7

Viewed 1000+ times

You Asked

Hi,

I exported a big database and imported it into another.

On a very big table (190 Million records), there are 3 indexes. Index size before export was 13 GB.

But after import into new database, index size has increased to 20 GB.

I checked, indexes are correct; number of records in new DB is correct.

Please help.
Cheers,
Sachin



and Tom said...

the index was rebuilt on import using the default pctfree.

the existing index must have been "compact" -- all blocks full.
the new index is not "compact" -- it has white space in its blocks.

the same thing would happen if you rebuilt the existing index -- it would get "bigger" (counter, of course, to the "common knowledge" that rebuilding an index 'reclaims space' -- it can just as easily make the index larger).


Also, it could be due to the extent sizing on the new database -- but now I'm guessing (lack of details)...
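
to see it for yourself, compare the index's pctfree setting with how full its leaf blocks really are. a minimal sketch -- the owner and index names here are placeholders, not from the question:

-- what pctfree was the index (re)built with?
select owner, index_name, pct_free
  from dba_indexes
 where owner = 'SACHIN' and index_name = 'BIG_TABLE_IDX';

-- validate structure populates INDEX_STATS for this session
-- (beware: it locks the table, so run it in a quiet window)
analyze index sachin.big_table_idx validate structure;

-- pct_used shows how "compact" the leaf blocks actually are
select lf_rows, lf_blks, pct_used from index_stats;

-- if you truly want the old, fully packed size back:
alter index sachin.big_table_idx rebuild pctfree 0;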



Comments

Wanted help understanding why this could happen...

Sven Bleckwedel, July 06, 2007 - 3:29 pm UTC

Hi Tom,

Yesterday I exported 2 schemas, in order to import the data into another server installed only for testing purposes. Source and target were Oracle 9i R2 (with patchset 9.2.0.6 applied), running on W2K3 servers. The situation right before the export, on the source server, was:

SQL> set linesize 132 pagesize 999
SQL> col tablespace_name format a15
SQL> col file_name format a55
SQL> col "Size MB" format 999,999.90
SQL> col "Used MB" format 999,999.90
SQL> col "Total MB" format 999,999.90
SQL> select tablespace_name, file_name, bytes/1024/1024 "Size MB",
  2         user_bytes/1024/1024 "Used MB", maxbytes/1024/1024 "Total MB"
  3  from dba_data_files;

TABLESPACE_NAME FILE_NAME                                          Size MB  Used MB Total MB
--------------- ------------------------------------------------- -------- -------- --------
...
VETORH_DATA     D:\ORACLE\ORADATA\TECONDI\DADOS\VETORH_DATA.ORA   4,096.00 4,095.94 4,096.00
VETORH_INDX     D:\ORACLE\ORADATA\TECONDI\INDICES\VETORH_INDX.ORA 1,147.50 1,147.44 4,096.00
RM_DADOS        D:\ORACLE\ORADATA\TECONDI\DADOS\RMDADOS1.DAT        310.00   309.94   512.00
RM_DADOS_FF     D:\ORACLE\ORADATA\TECONDI\DADOS\RMDADOS2.DAT        100.00    99.94   512.00
RM_INDICES      D:\ORACLE\ORADATA\TECONDI\INDICES\RMIND1.IDX        320.00   319.94   512.00
VETORH_DATA     D:\ORACLE\ORADATA\TECONDI\DADOS\VETORH_DATA2.ORA  3,414.81 3,414.75 4,096.00
...


And the situation right after the import, on the target server, was:

SQL> set linesize 132 pagesize 999
SQL> col tablespace_name format a15
SQL> col file_name format a55
SQL> col "Size MB" format 999,999.90
SQL> col "Used MB" format 999,999.90
SQL> col "Total MB" format 999,999.90
SQL> select tablespace_name, file_name, bytes/1024/1024 "Size MB",
  2         user_bytes/1024/1024 "Used MB", maxbytes/1024/1024 "Total MB"
  3  from dba_data_files;

TABLESPACE_NAME FILE_NAME                                          Size MB  Used MB Total MB
--------------- ------------------------------------------------- -------- -------- --------
...
VETORH_DATA     C:\ORACLE\ORADATA\TECONDI\VETORH_DATA.DBF         4,096.00 4,095.00 4,096.00
VETORH_INDX     C:\ORACLE\ORADATA\TECONDI\VETORH_INDX.DBF         2,707.00 2,706.00 4,096.00
RM_DADOS        C:\ORACLE\ORADATA\TECONDI\RMDADOS1.DAT              512.00   511.00   512.00
RM_DADOS_FF     C:\ORACLE\ORADATA\TECONDI\RMDADOS2.DAT              200.00   199.00   512.00
RM_INDICES      C:\ORACLE\ORADATA\TECONDI\RMIND1.IDX                512.00   511.00   512.00
VETORH_DATA     C:\ORACLE\ORADATA\TECONDI\VETORH_DATA2.DBF        4,096.00 4,095.00 4,096.00
VETORH_DATA     C:\ORACLE\ORADATA\TECONDI\VETORH_DATA3.DBF        4,096.00 4,095.00 4,096.00
...


I couldn't finish the import because, near the end of the process, it "crashed" with these types of errors:

. . importing table                      "R128GCA"          0 rows imported
. . importing table                      "R128GCU"          0 rows imported
IMP-00017: following statement failed with ORACLE error 1658:
 "CREATE TABLE "R128GFA" ("CODOEM" NUMBER(8, 0), "CMPCUS" DATE, "ABRSUB" VARC"
 "HAR2(30), "VLRATE" NUMBER(11, 2), "PERCUS" NUMBER(5, 2))  PCTFREE 10 PCTUSE"
 "D 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GRO"
 "UPS 1) TABLESPACE "VETORH_DATA" LOGGING NOCOMPRESS"
IMP-00003: ORACLE error 1658 encountered
ORA-01658: unable to create INITIAL extent for segment in tablespace VETORH_DATA
IMP-00017: following statement failed with ORACLE error 1658:
 "CREATE TABLE "R128GIT" ("CODOEM" NUMBER(8, 0) NOT NULL ENABLE, "CMPCUS" DAT"
 "E NOT NULL ENABLE, "CODTCU" NUMBER(4, 0) NOT NULL ENABLE, "VLRITE" NUMBER(1"
 "1, 2))  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536"
 " FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "VETORH_DATA" LOGGING NOCOMPRESS"
IMP-00003: ORACLE error 1658 encountered
Ctrl-C...
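
When ORA-01658 appears like this, a quick sanity check is how much contiguous free space is left in the tablespace -- a sketch along these lines:

select tablespace_name,
       sum(bytes)/1024/1024 free_mb,    -- total free space
       max(bytes)/1024/1024 biggest_mb  -- largest contiguous free chunk
  from dba_free_space
 where tablespace_name = 'VETORH_DATA'
 group by tablespace_name;

ORA-01658 means not even the INITIAL extent (65536 bytes here, rounded up to the 1M uniform extent size) could be allocated -- no free 1M chunk remained.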


After hitting this problem, I added more datafiles to the mentioned tablespace and redid everything (dropped the user cascade and recreated it). The process was interrupted 2 more times; each time I added more datafiles, but the question remained. I know this "problem" (space consumed after import can be greater) can happen, but this time something was out of control. I used the parameter file below for the export:

D:\Oradiag>type exp_rm.par
userid=system/<password>
owner=(vetorh rm)
file=exp_rm.dmp
compress=n
consistent=y
direct=y
statistics=none
buffer=5000000
log=exp_rm.log


And I used these parameter files to import the data into the other server:

C:\TEMP>type imp_rm.par
userid=rm/<password>
file=c:\Temp\exp_rm.dmp
fromuser=rm
touser=rm
buffer=5000000
commit=y
log=imp_rm.log

C:\TEMP>type imp_vetorh.par
userid=vetorh/<password>
file=c:\Temp\exp_rm.dmp
fromuser=vetorh
touser=vetorh
buffer=5000000
commit=y
log=imp_vetorh.log
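
(One way to check, before importing, what storage clauses the dump will ask for: imp can list the DDL without loading any data. A sketch using the same dump file -- show=y just writes the CREATE statements to the log:

C:\TEMP>imp userid=system/<password> file=c:\Temp\exp_rm.dmp show=y full=y log=show_ddl.log

That lets the INITIAL/NEXT sizes be compared against the target tablespaces before the real import runs.)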


And I used this script to create the tablespaces:

-- Vetorh_DATA
create tablespace VETORH_DATA
datafile 'C:\ORACLE\ORADATA\TECONDI\vetorh_data.dbf' size 4096M
autoextend on next 1M maxsize 4096M
extent management local uniform size 1M;
-- Vetorh_INDX
create tablespace VETORH_INDX
datafile 'C:\ORACLE\ORADATA\TECONDI\vetorh_indx.dbf' size 2048M
autoextend on next 1M maxsize 4096M
extent management local uniform size 1M;
-- Vetorh_TEMP
create temporary tablespace VETORH_TEMP
tempfile 'C:\ORACLE\ORADATA\TECONDI\vetorh_temp.dbf' size 200M
autoextend on next 1M maxsize 4096M
extent management local uniform size 1M;
-- RM_DADOS
CREATE TABLESPACE RM_DADOS
DATAFILE 'C:\ORACLE\ORADATA\TECONDI\RMDADOS1.DAT' SIZE 512M
AUTOEXTEND ON NEXT 1M maxsize 512M
extent management local uniform size 1M;
-- RM_DADOS_FF
CREATE TABLESPACE RM_DADOS_FF
DATAFILE 'C:\ORACLE\ORADATA\TECONDI\RMDADOS2.DAT' SIZE 200M
AUTOEXTEND ON NEXT 1M maxsize 512M
extent management local uniform size 1M;
-- RM_INDICES
CREATE TABLESPACE RM_INDICES
DATAFILE 'C:\ORACLE\ORADATA\TECONDI\RMIND1.IDX' SIZE 512M
AUTOEXTEND ON NEXT 1M maxsize 512M
extent management local uniform size 1M;
-- RM_TEMP
CREATE temporary TABLESPACE RM_TMP
TEMPFILE 'C:\ORACLE\ORADATA\TECONDI\RMTMP1.DBF' SIZE 100M
AUTOEXTEND ON NEXT 1M maxsize 512M
extent management local uniform size 1M;


And I used the commands below to add more datafiles to the tablespace when the problems appeared:

...
SQL> alter tablespace VETORH_DATA
  2  add datafile 'C:\ORACLE\ORADATA\TECONDI\VETORH_DATA2.DBF' size 4096M
  3  autoextend on maxsize 4096M;

Tablespace altered.

...
SQL> alter tablespace VETORH_DATA
  2  add datafile 'C:\ORACLE\ORADATA\TECONDI\VETORH_DATA3.DBF' size 4096M
  3  autoextend on maxsize 4096M;

Tablespace altered.


I have a table or two with LOBs, but that alone couldn't account for such HIGH space consumption. If you can share more details about this problem, and about what can be done to deal with it, please send your comments. Any advice would be greatly appreciated...

Thanks in advance,
Sven

Tom Kyte
July 07, 2007 - 10:42 am UTC

you don't say how the source tablespaces were created in the first place, but you set a minimum extent size of 1mb on the new target database.

why did you do that?

why not autoallocate?


You probably blew it out totally by having EVERY table consume at least 1mb, regardless of size.
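
a minimal sketch of the alternative (same datafile, but letting Oracle pick the extent sizes -- with autoallocate, small segments start with 64k extents instead of burning a full 1m each):

create tablespace VETORH_DATA
datafile 'C:\ORACLE\ORADATA\TECONDI\vetorh_data.dbf' size 4096M
autoextend on next 1M maxsize 4096M
extent management local autoallocate;

and to measure the overhead on the existing 1m-uniform tablespace, count the segments that occupy exactly one minimum-sized extent:

select count(*)
  from dba_segments
 where tablespace_name = 'VETORH_DATA'
   and bytes = 1024*1024;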

Thanks for clarifying...

Sven Bleckwedel, July 09, 2007 - 8:08 pm UTC

Hi Tom,

I followed some advice, but it wasn't really a good idea to do it as described. I was thinking about uniform extents and didn't pay attention to what I was doing. I corrected this and now it is OK. Thanks for clarifying...

Regards,
Sven

Umer, May 04, 2009 - 5:13 am UTC

Hi Tom,

I have just imported a table into another schema. The table size was 965.75 MB before the import; after the import, the table size is 1001.5 MB.

Kindly let me know how the table size grew when in fact the rows are the same.

regards,
Umer
Tom Kyte
May 04, 2009 - 10:58 am UTC

you give insufficient details here.

There could be many reasons; here are the first two off the top of my head:

a) the table was 100% full before. That is, you loaded it using the default pctfree of 10%, so each block was about 90% full; then you did some updates and made each block closer to 100% full. When you reload the table with the existing data, the pctfree of 10% is applied again, so each block is 90% or less full - your table grows by about 10%.

ops$tkyte%ORA10GR2> create table t( x int, y char(30), z varchar2(30) );

Table created.

ops$tkyte%ORA10GR2> insert into t (x,y) select object_id, object_name from all_objects where rownum <= 500;

500 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select dbms_rowid.rowid_block_number(rowid), count(*) from t group by dbms_rowid.rowid_block_number(rowid) order by 1;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                               19540        184
                               19541        181
                               19544        135

so, we get about 184 rows per block...

ops$tkyte%ORA10GR2> update t set z = 'xxx';

500 rows updated.

ops$tkyte%ORA10GR2> analyze table t compute statistics;

Table analyzed.

ops$tkyte%ORA10GR2> select chain_cnt from user_tables where table_name = 'T';

 CHAIN_CNT
----------
         0

we made the rows bigger - they all still fit on a block, however

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> create table t2 as select * from t;

Table created.

ops$tkyte%ORA10GR2> select dbms_rowid.rowid_block_number(rowid), count(*) from t2 group by dbms_rowid.rowid_block_number(rowid) order by 1;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                               19548        166
                               19549        164
                               19550        164
                               19551          6

copy them into a new table with the default pctfree of 10% and bamm - you get fewer rows per block: the same 500 rows now take four blocks instead of three.


b) the tablespace you imported into is locally managed with uniform-sized extents, and the uniform extent size in this new tablespace is different from the old one's.
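
to check that, compare the allocation policy of the two tablespaces on each database -- a minimal sketch (the tablespace names are placeholders):

select tablespace_name, allocation_type,
       initial_extent/1024 initial_kb,
       next_extent/1024 next_kb
  from dba_tablespaces
 where tablespace_name in ('OLD_TS', 'NEW_TS');

allocation_type shows UNIFORM versus SYSTEM (autoallocate); with UNIFORM, every extent -- and therefore every segment -- is rounded up to that fixed size.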