

Question and Answer

Connor McDonald

Thanks for the question, Raghu.

Asked: May 02, 2000 - 1:20 pm UTC

Last updated: July 17, 2013 - 9:08 pm UTC


You Asked

Tom, What is the best way to move tables from one
tablespace to another tablespace in the same instance?


and Tom said...


I'll give you the Oracle8.0 and before answer as well as the Oracle8i and above answer (in 8i, this is *trivial*, in 8.0 and before, it takes practice and time)....


There are 2 methods we can use to do this. One is to use a combination of "alter table X move tablespace Y" and "alter index X rebuild tablespace Y" -- this works in Oracle8i release 8.1 and up ONLY. Attached is a script called moveall.sql.

Moveall.sql
set echo off

column order_col1 noprint
column order_col2 noprint

set heading off
set verify off
set feedback off
set echo off

spool tmp.sql

select decode( segment_type, 'TABLE',
segment_name, table_name ) order_col1,
decode( segment_type, 'TABLE', 1, 2 ) order_col2,
'alter ' || segment_type || ' ' || segment_name ||
decode( segment_type, 'TABLE', ' move ', ' rebuild ' ) ||
chr(10) ||
' tablespace &1 ' || chr(10) ||
' storage ( initial ' || initial_extent || ' next ' ||
next_extent || chr(10) ||
' minextents ' || min_extents || ' maxextents ' ||
max_extents || chr(10) ||
' pctincrease ' || pct_increase || ' freelists ' ||
freelists || ');'
from user_segments,
(select table_name, index_name from user_indexes )
where segment_type in ( 'TABLE', 'INDEX' )
and segment_name = index_name (+)
order by 1, 2
/

spool off

set heading on
set verify on
set feedback on
set echo on

REM UNCOMMENT TO AUTO RUN the generated commands
REM ELSE edit tmp.sql, modify as needed and run it
REM @tmp

--- eof ----


It uses the user_segments table to generate all of the needed "alter table move" and "alter index rebuild" statements to move a table/index into another tablespace preserving the storage characteristics currently assigned to the object. For example, when we run moveall.sql in the SCOTT schema, we might see:

scott@ORACLE> @moveall
scott@ORACLE> set echo off

alter TABLE ACCOUNT move
tablespace users
storage ( initial 10240 next 10240
minextents 1 maxextents 121
pctincrease 50 freelists 1);

alter TABLE BONUS move
tablespace users
storage ( initial 10240 next 10240
minextents 1 maxextents 121
pctincrease 50 freelists 1);

alter TABLE DEPT move
tablespace users
storage ( initial 10240 next 10240
minextents 1 maxextents 121
pctincrease 50 freelists 1);

alter INDEX PK_DEPT rebuild
tablespace users
storage ( initial 10240 next 10240
minextents 1 maxextents 121
pctincrease 50 freelists 1);

....

It begins by moving a table and then rebuilding each of the indexes on that table. Since the indexes on a table become unusable after the table is moved, this script rebuilds them right after moving each table -- before moving the next table (to reduce downtime).

Running the moveall.sql script is harmless as it is written. It generates the SQL you need to run and saves the sql into yet another script file "tmp.sql". You should edit tmp.sql, review it, modify it if you want (eg: if you have a multi-cpu system, you could modify the index rebuilds to be "parallel N", "unrecoverable" and add other options to make them go faster on your system), and then run it.
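For instance, a rebuild statement in tmp.sql edited for a multi-CPU system might end up looking like this sketch (the degree of parallelism and the NOLOGGING choice are illustrative additions, not part of the generated script):

alter index PK_DEPT rebuild
  tablespace users
  parallel 4
  nologging;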


Another method would be to use EXPort and IMPort. You would

o EXP the user account.
o drop all of the objects this user owns. You can 'select' the drop statements you need (script attached) in much the same way we 'select' the alter table/index statements
o revoke UNLIMITED TABLESPACE from the user
o alter the users default tablespace to the target tablespace
o give the user an unlimited quota on this new tablespace and their temporary tablespace
o IMP this user's data.

So, the process to move SCOTT's objects from their current tablespace to a NEW_TABLESPACE would be:


1) do an export of all of scott's objects. Make sure no one modifies them after you begin this process. You will lose these changes if they do.

$ exp userid=scott/tiger owner=scott

2) you would drop all of scott's tables. This will get the indexes as well. I don't suggest dropping the user SCOTT but rather dropping scott's objects. Dropping scott would cause any system privileges SCOTT has to disappear and the import would not restore them. This script can be used to drop someone's tables:

--------------------------------------
set heading off
set feedback off
set verify off
set echo off

spool tmp.sql
select 'drop table &1..' || table_name || ' cascade constraints;'
from dba_tables
where owner = upper('&1')
/
spool off
@tmp.sql
--------------------------------------



3) You would modify the user to *not* have unlimited tablespace (else the IMP will just put the objects right back into the tablespace they came from) and then give them unlimited quotas on the new tablespace you want the objects to go into and on their temporary tablespace (for the sorts the index creations will do)

alter user SCOTT default tablespace NEW_TABLESPACE
/
revoke unlimited tablespace from SCOTT
/
alter user SCOTT quota unlimited on NEW_TABLESPACE
/
alter user SCOTT quota unlimited on SCOTTS_TEMPORARY_TABLESPACE
/

4) you will IMP the data back in for that user. IMP will rewrite the create statements to use the user's default tablespace when it discovers that it cannot create the objects in their original tablespace. Make sure to review the file imp.log for any and all errors after you import.

imp userid=scott/tiger full=y ignore=y log=imp.log


5) you can optionally restore 'unlimited tablespace' to this user (or not). If you do not, this user can only create objects in this new tablespace and temp (which in itself is not a bad thing)...
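Should you decide to restore it later, that is a one-line grant, as a sketch (run as a suitably privileged user):

grant unlimited tablespace to SCOTT;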



As with any operation of this magnitude -- please test these procedures on a small test account (such as SCOTT) to become familiar with them.


A couple of side notes:

o the alter table move/alter index rebuild approach is more flexible and faster than exp/imp (and less error prone -- you never actually drop the objects). Additionally, it would be easy to modify the script to move TABLES to one tablespace and INDEXES to a different tablespace (a sketch of that split follows these notes). The drawback of this method is that you cannot move a table with a LONG or LONG RAW column. You must exp that table and imp it back. You can do this easily by exporting the table with the LONG/LONG RAW, dropping it, creating an empty version of the table in the new tablespace and importing just that table.

o if you use the exp/imp, it is up to you to ensure that no modifications happen to the tables after you begin the export. There are no mechanisms in place to enforce this -- you must do it yourself (else you will lose those changes)
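As a sketch of the table/index split mentioned above (the two tablespace parameters &1 and &2 are illustrative, and the storage clause is omitted for brevity), the SELECT in moveall.sql could be changed along these lines:

select decode( segment_type, 'TABLE', segment_name, table_name ) order_col1,
       decode( segment_type, 'TABLE', 1, 2 ) order_col2,
       'alter ' || segment_type || ' ' || segment_name ||
       decode( segment_type, 'TABLE',
               ' move tablespace &1;',
               ' rebuild tablespace &2;' )
  from user_segments,
       ( select table_name, index_name from user_indexes )
 where segment_type in ( 'TABLE', 'INDEX' )
   and segment_name = index_name (+)
 order by 1, 2
/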


followup to comment #8

regarding the order columns.


No, it should not be like that. It is correct as presented:

select decode( segment_type, 'TABLE',
segment_name, table_name ) order_col1,
decode( segment_type, 'TABLE', 1, 2 ) order_col2,

If the segment_type is a table, return the SEGMENT_NAME (which is actually the table name). Else, if the segment type is not a table (it is an index) return the TABLE_NAME that the index is on. This'll group a table and all of its indexes together.

The second column just makes sure that tables are done PRIOR to their indexes.




Comments

Great Answer.

Irshad, March 21, 2001 - 11:30 am UTC

This is a great answer with excellent examples.

Pauline, April 07, 2001 - 4:26 pm UTC

Great answer!

Excellent

Syed Hadi, April 25, 2001 - 1:55 am UTC

I was in need of this solution. Great!!!

moving tables across tablespaces

Roland Scheifele, May 07, 2001 - 3:28 am UTC

Very helpful tip! I could solve a topical problem.

Reorganization

Vikram, May 08, 2001 - 2:11 am UTC

very usefull tips

Awesome example explaining the feature

Sai, May 25, 2001 - 6:06 pm UTC

I am planning to do the "alter table move" in a couple of weeks as most of the tables were confined to one single tablespace, and this example is very clear on the procedure to be followed to move tables across tablespace.

Absolutely Fantastic.

Rui Gongalves, June 12, 2001 - 9:36 am UTC


Code Review

Joydeep Ghosh, July 02, 2001 - 2:32 am UTC

Shouldn't it be like this:
decode( segment_type, 'TABLE',
        table_name, segment_name ) order_col1
And in the code
'alter ' || segment_type || ' ' || segment_name || ....
shouldn't segment_name be replaced
like this:
decode( segment_type, 'TABLE', 'table_name', 'segment_name' )




Great!!

Lynn, July 10, 2001 - 6:08 am UTC

This was just what I was looking for!!

Oracle DBA

A reader, July 12, 2001 - 11:38 am UTC

It's just what I am looking for and working on, and something I would not get from another experienced Oracle DBA. Thank you very much.

how can i move copy form tablespace

sohel, July 15, 2001 - 12:59 am UTC

yes, it was very helpful for me, i am an oracle user, so i need this things frequently, thanks for everything...

sohel

Its amazing

K.Srinivasan, August 08, 2001 - 3:16 am UTC

Only now have I come to know that it is possible to move a table from one tablespace to another. It's very useful for me.

Mohammed Abubakker, November 01, 2001 - 10:27 am UTC

Excellent Most helpful

Excellent

Boris Oblak, November 20, 2001 - 12:08 pm UTC

Excellent

Sooo Great a Solution

Srinivas Kuncham, January 28, 2002 - 1:26 am UTC

I was very pleased to see the solution. It saved much of my time and headache. Thank you, Tom, thank you very much for your solution...

Table Space

R.RamNarayan, March 16, 2002 - 2:59 am UTC

The solution you provided was useful. My question is:

I have two schemas, let us say PAY1 and PAY2. Both contain tables with similar structure. I want to move the data from PAY1 to PAY2, i.e. insert data into the respective tables of PAY2 from PAY1. Could you please tell me how, as I want to do it during year forwarding.

Tom Kyte
March 16, 2002 - 8:50 am UTC

insert into pay2.table select * from pay1.table;

is the easiest way to do that.

Parag's Feedback

Parag, March 17, 2002 - 3:47 am UTC

Too good, Tom.
Appreciate your hard work and the fantastic example.

Carry on, Boss.....

follow up

Irfan, March 24, 2002 - 8:18 am UTC

Can I use the first method (decode) to move tables from a dictionary managed tablespace to a locally managed tablespace without making any change to the script?


Tom Kyte
March 24, 2002 - 3:44 pm UTC

sure, this works from any tablespace to any other tablespace. for an LMT, I would just as soon leave the storage clause off altogether and use uniform extents.
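With the storage clause dropped, the generated statements reduce to something like this sketch (the tablespace name is illustrative):

alter TABLE EMP move
  tablespace users_lmt;

alter INDEX PK_EMP rebuild
  tablespace users_lmt;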

What happens to un-specified table properties ?

Sameer Utrankar, March 24, 2002 - 6:57 pm UTC

You generated the alter table move script with storage parameters and a tablespace, but what happens to unspecified table properties (e.g. cache, monitoring, etc.) or physical properties (organization heap/index, cluster, lob, etc.)?

The question is not really about these specific parameters, but since 'create table' has a whole bunch of options and only a few are used in your script, my question is: will the unspecified parameters be retained by default from what the table originally had, or will these unspecified parameters be lost?

If they are lost, that would just mean one needs to be careful before generating a script for all tables and must take care of any special parameters a table has before moving it.

Your comment please ?

Tom Kyte
March 24, 2002 - 8:55 pm UTC

It'll keep heap/iots in place:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t ( x int primary key ) organization index;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> @dbls

OBJECT_TYPE  OBJECT_NAME                    S TABLESPACE_NAME
------------ ------------------------------ - ------------------------------
INDEX        SYS_IOT_TOP_28786                USERS

TABLE        T


ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table t move tablespace system;

Table altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> @dbls

OBJECT_TYPE  OBJECT_NAME                    S TABLESPACE_NAME
------------ ------------------------------ - ------------------------------
INDEX        SYS_IOT_TOP_28786                SYSTEM

TABLE        T


ops$tkyte@ORA817DEV.US.ORACLE.COM> 

clusters, they are "special", they cannot be moved -- you cannot move a TABLE in a cluster (that would sort of defeat the purpose of a cluster).  You'll get:
ops$tkyte@ORA817DEV.US.ORACLE.COM> create index emp_dept_cluster_idx
  2  on cluster emp_dept_cluster
  3  /

Index created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> create table dept
  2  ( deptno number(2) primary key,
  3    dname  varchar2(14),
  4    loc       varchar2(13)
  5  )
  6  cluster emp_dept_cluster(deptno)
  7  /

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table dept move tablespace system
  2  /
alter table dept move tablespace system
            *
ERROR at line 1:
ORA-14512: cannot perform operation on a clustered object

instead.

Lobs, lobs are lobs -- they are stored in a lob segment, they are NOT moved with this; that is an entirely different command to move them!

but I would say YES, you must be CAREFUL with this as you should be with anything that affects your data!
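For reference, the LOB-specific move uses the LOB storage clause on ALTER TABLE MOVE; a sketch (table, column and tablespace names are illustrative):

alter table t move tablespace new_data_ts
  lob (doc_col) store as ( tablespace new_lob_ts );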

 

What about the quota on the original tablespace

vk, March 26, 2002 - 3:53 am UTC

Excellent answer, Tom. Just one point. We would need to check and remove the quota on the original tablespaces, right? That is, in case we have allocated specific quotas to users rather than using the UNLIMITED TABLESPACE system privilege.

Tom Kyte
March 26, 2002 - 7:52 am UTC

sure. Just note that granting RESOURCE or DBA to a user implicitly grants UNLIMITED TABLESPACE directly...
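A quick way to see who holds that privilege directly is a sketch like this against the standard dictionary view:

select grantee
  from dba_sys_privs
 where privilege = 'UNLIMITED TABLESPACE';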

A reader, May 15, 2002 - 5:34 pm UTC


COPY COMMAND

Basharat, August 04, 2002 - 9:23 am UTC

Tom, just for my clarity, can we use the COPY command to move data from one table to another in a different schema? If not, why?

Tom Kyte
August 05, 2002 - 11:28 am UTC

Yes you can.




rebuilding of indexes

MS, September 10, 2002 - 7:08 am UTC

If indexes were in a different tablespace then we don't have to rebuild them right?
Let's say scott's tables are in USERS tablespace..but indexes are in index1.
Now we want to move scott's tables to another_ts tablespace.
So after we do this..we don't need to rebuild indexes..do we?

Thanks

Tom Kyte
September 10, 2002 - 7:39 am UTC

No, you need to rebuild them -- think about it, the rowids of each and every row in the table have changed! That is why you need to rebuild the indexes, they are totally useless after the move. They point to where the data WAS, not where it IS.




index rebuild

MS, September 10, 2002 - 7:53 am UTC

So after moving the tablespace...we will do:
(1) alter INDEX PK_DEPT rebuild;
and that will rebuild it and point it to
another_ts tablespace.
or we have to do:
(2) alter INDEX PK_DEPT rebuild
tablespace another_ts
storage ( initial 10240 next 10240
minextents 1 maxextents 121
pctincrease 50 freelists 1);

Will (2) rebuild them in another_ts.
if it does then should we do:
(3) alter INDEX PK_DEPT rebuild
tablespace index1
storage ( initial 10240 next 10240
minextents 1 maxextents 121
pctincrease 50 freelists 1);



Tom Kyte
September 10, 2002 - 9:05 am UTC

If you want it in another tablespace, you will include a tablespace clause, else it goes right back where it came from.

PCTINCREASE STINKS. don't use it. Use locally managed tablespaces, stop using storage clauses and if you do use a storage clause on a dictionary managed tablespace use INITIAL=NEXT and PCTINCREASE=0

alter move command from DMT to LMTs

ms, September 22, 2002 - 2:25 pm UTC

Tom,
(8.1.7.3.0 in windows going to 8.1.7.4.0 in unix)
I brought a database from windows to unix....
I want to change from Dictionary Managed Tablespaces (DMTs) to Locally Managed Tablespaces (LMTs).
I brought all the objects for all the schemas into one big tablespace...
Now I want to rearrange them into my small, medium, and large LMTs (tables and indexes).
I have analyzed all the schemas with DBMS_STATS to get the number of rows etc.
Now I am about to rearrange...

This is a small database (3 gig or so)
I have arranged 6 tablespaces (small, large, and medium...three for tables and three for indices)

I am going to run commands like this following sample:

for tables:

select 'ALTER TABLE ' || ds.segment_name || ' MOVE TABLESPACE TS_SMALL;'
from dba_segments ds, dba_tables dt, v$parameter vp
where 1=1
and vp.name = 'db_block_size'
and ds.segment_type = 'TABLE'
and ds.owner = 'A_SCHEMA'
and ds.segment_name = dt.table_name
and DS.TABLESPACE_NAME = DT.TABLESPACE_NAME
and (dt.blocks*vp.value) <= 1000000
and (dt.blocks*vp.value) IS not NULL;

and then changing the "and (dt.blocks*vp.value) <= 1000000"
according to size...and changing ' MOVE TABLESPACE TS_SMALL;' to ' MOVE TABLESPACE TS_MEDIUM;'
AND ' MOVE TABLESPACE TS_LARGE' AND SO ON

Now I will run for index rebuild:

select 'ALTER INDEX ' || segment_name || ' REBUILD TABLESPACE TS_IDX_SMALL;'
from dba_segments
where segment_type = 'INDEX'
and owner = 'MANAGER'
and bytes is not null
and bytes <= 1000000
and bytes <> 0;

and keep changing it for size to get those statements


Is my approach O.K. to rearrange tablespaces like that..
As I understood..for LMT's all the tables and indices will take the default for the corresponding tablespace (like extent size storage etc...)
and we just divide tables and indexes according to size.

And yes, I did do the export as compress=N





Tom Kyte
September 22, 2002 - 3:16 pm UTC

looks good to me -- just a variation on the approach above.

If you got "fancy", you would just run a single query for the tables and use CASE to decide the tablespace.

select 'ALTER TABLE ' || ds.segment_name || ' MOVE TABLESPACE ' ||
case when dt.blocks*vp.value <= 100000 then 'TS_SMALL'
when dt.blocks*vp.value <= NNNNNNN then 'TS_MED'
else 'TS_BIG'
end || ';'
.....

(dt.blocks*vp.value) is null

MS, September 22, 2002 - 3:47 pm UTC

Thanks for a quick reply Tom...
In which tablespace do we put the tables that have (dt.blocks*vp.value) as null?

SEGMENT_NAME TABLESPACE_NAME EXTENTS BLOCKS BYTES_ALLOC BYTES_USED
--------------------------------------------------------------------------------- ------------------------------ ---------- ---------- ----------- ----------
EVT_DEST_PROFILE TS_ALL 5 10 81920
EVT_PROFILE TS_ALL 5 10 81920
MICROSOFTDTPROPERTIES TS_ALL 5 10 81920
MICROSOFTDTPROPERTIES TS_ALL 5 10 81920
SMP_AD_ADDRESSES_ TS_ALL 5 10 81920
SMP_AD_DISCOVERED_NODES_ TS_ALL 5 10 81920
SMP_AD_NODES_ TS_ALL 5 10 81920
SMP_AD_PARMS_ TS_ALL 5 10 81920
SMP_AUTO_DISCOVERY_ITEM_ TS_ALL 5 10 81920
SMP_AUTO_DISCOVERY_PARMS_ TS_ALL 5 10 81920
SMP_REP_VERSION TS_ALL 5 10 81920
SMP_SERVICE_DATA_ TS_ALL 5 10 81920
SMP_SERVICE_GROUP_DEFN_ TS_ALL 5 10 81920
SMP_SERVICE_GROUP_ITEM_ TS_ALL 5 10 81920
SMP_SERVICE_ITEM_ TS_ALL 5 10 81920
SMP_UPDATESERVICES_CALLED_ TS_ALL 5 10 81920

16 rows selected.




Tom Kyte
September 22, 2002 - 4:56 pm UTC

if that is null, then blocks is null and either

a) it is not a segment with data
b) you didn't analyze it

so put it where you want ;)

ORA-03113: end-of-file on communication channel

MS, September 22, 2002 - 4:21 pm UTC

I was trying to run the statements to move tables to tablespaces.....
built all the statements dynamically....
and ran them..
it kept altering tables for a while....
then all of a sudden with one table...it gave the error:
ORA-03113: end-of-file on communication channel


any idea about this error?

and after that it gave :
ORA-03114: not connected to ORACLE
for all the others, of course.

this was on oracle 8i sqlplus worksheet..connected from a windows workstation to Oracle 8.1.7.4.0 on a UNIX box.

The statement it failed on was a snapshot... so I thought maybe snapshot tables cannot be moved and snapshots have to be dropped and recreated in the correct tablespace... but then I saw that some of the other snapshot table statements worked successfully...

Also, what happens if you accidentally run a move tablespace twice, by mistake,
like ALTER TABLE TEST MOVE TABLESPACE TS_SMALL;
AND THEN AGAIN
ALTER TABLE TEST MOVE TABLESPACE TS_SMALL;
will this mess anything up?

Thanks.

Tom Kyte
September 22, 2002 - 4:57 pm UTC

3113 = call support, you should have a trace file on the server, a message in your alert log. They can diagnose the problem

Moving twice just consumes resources, no harm done otherwise.

review

Irfan, September 23, 2002 - 10:15 am UTC

1. What changes should be done in the first method's script, moveall.sql, for moving tables from a 9i locally managed tablespace?

2. When I run moveall.sql on a 9i database (locally managed tablespace) I get the following script:

alter table EMP move tablespace users
storage(initial 65536 next
minextents 1 maxextents 2147483645
pctincrease freelists 1);

In the above script, next and pctincrease do not have any values. Can this be safely run to move the table?

Thanks.

Tom Kyte
September 23, 2002 - 10:51 am UTC

drop initial, next, minextents, maxextents and pctincrease from the query.


moving tables across tablespaces

Bill Sonia, September 23, 2002 - 5:32 pm UTC

I'm actually doing a similar process to 'rebuild' tables with chained rows but have the added overhead of Replication.

Can I simply quiesce the group, perform this ddl and then restart replication?

Tom Kyte
September 24, 2002 - 7:19 am UTC

depends on the type of replication.

advanced, peer to peer, which is 100% based on primary key, yes.

snapshots -- only if the snapshots are based on PRIMARY KEY and not rowid.


In any case, if you simply "move the chained rows" (insert into temp, delete from prod, insert temp into prod) -- you can do this with replication on. If the number of rows as a percentage of the table is "small", that might be preferred.
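A sketch of that row shuffle, assuming the chained rows have already been listed into the standard CHAINED_ROWS table (via ANALYZE TABLE ... LIST CHAINED ROWS); the table names here are illustrative:

create table t_chained as
  select * from t where rowid in ( select head_rowid from chained_rows );

delete from t where rowid in ( select head_rowid from chained_rows );

insert into t select * from t_chained;

drop table t_chained;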

moving tables from 9i LMT to 9iLMT

Irfan, September 24, 2002 - 2:49 am UTC

I have changed the MOVEALL.sql query as per your suggestion (removed initial, next, minextents, maxextents, pctincrease) for 9i LMT, as follows.
Moveall.sql
-----------

set echo off
column order_col1 noprint
column order_col2 noprint

set heading off
set verify off
set feedback off
set echo off
spool ind.sql

select decode(segment_type,'TABLE',
segment_name,table_name) order_col1,
decode(segment_type,'TABLE',1,2) order_col2,
'alter '||segment_type||' '||segment_name||' '||
decode(segment_type,'TABLE','move','rebuild')||
chr(10) ||
'tablespace &1'||chr(10)||
'storage( freelists '|| freelists||');'
from user_segments,
(select table_name,index_name from user_indexes)
where segment_type in ('TABLE','INDEX')
and segment_name = index_name(+)
order by 1,2
/
spool off
---------------

When I run this I get the following output:

alter TABLE EMPBNF move
tablespace users
storage( freelists 1);

alter TABLE EMPCVR move
tablespace users
storage( freelists 1);

.........

Please confirm that what I did is correct.
Thanks.



Tom Kyte
September 24, 2002 - 7:39 am UTC

is that what you wanted to output? If so, that is correct. If not, try again.

I mean -- is that the statement that you wanted? It is syntactically correct.

Rearranging snapshots into LMTs after a move

MS, September 24, 2002 - 8:47 am UTC

ALTER SNAPSHOT <SN_NAME> MOVE TABLESPACE <TS_NAME>;
will move the snapshot into ts_small etc.,
and it will also move the underlying table to the same tablespace.
Then we can run a separate "ALTER INDEX <IND_NAME> REBUILD"
for all the indexes of that snapshot...
Is there anything else that we need to worry about in the case of snapshots?

Thanks.

Tom Kyte
September 24, 2002 - 3:32 pm UTC

that should do it.

Review

Irfan, September 24, 2002 - 8:54 am UTC

My original question was:
1. Should I specify the storage values when moving from one LMT to another LMT? As I understand it, in an LMT the system decides the appropriate storage allocations.

So, when moving tables from one LMT to another, is it necessary to give storage allocations? If it is necessary, I shall use your original Moveall.sql; otherwise I shall ignore the storage allocations and use the modified script.

Thanks

Tom Kyte
September 24, 2002 - 3:34 pm UTC

1) no, you shouldn't, with the exception of maybe freelists.



ORA-00997: illegal use of LONG datatype

MS, September 30, 2002 - 7:38 pm UTC

(8.1.7.4.0 on UNIX)

While running alter table move tablespace (DMT to LMT)..I am getting this error....
The tables that are giving this error have either LONG or LONG RAW columns in them..
So how do you move them....

Thanks

Tom Kyte
October 01, 2002 - 9:47 am UTC

export/import.

better yet, create table as select using the TO_LOB function to get rid of the nasty LONG/LONG RAW and convert it into a clob/blob
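A minimal sketch of that conversion (table, column and tablespace names are illustrative; TO_LOB is valid only in a CREATE TABLE AS SELECT or INSERT ... SELECT):

create table t_new
  tablespace new_ts
  as
  select id, to_lob(long_col) as long_col
    from t_old;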

what is the query used in dbls.sql

Rob Gibson, December 20, 2002 - 6:22 pm UTC

In your March 24, 2002 follow-up to Sameer Utrankar from NJ, you demonstrated the location of an IOT using a SQL script called dbls.sql.

Could you post the content of dbls.sql?

Thank you.


Tom Kyte
December 20, 2002 - 8:22 pm UTC

column object_name format a30
column tablespace_name format a30
column object_type format a12
column status format a1

break on object_type skip 1

select object_type, object_name,
decode( status, 'INVALID', '*', '' ) status,
decode( object_type,
'TABLE', (select tablespace_name from user_tables where table_name = object_name),
'INDEX', (select tablespace_name from user_indexes where index_name = object_name),
null ) tablespace_name
from user_objects a
where object_name like upper('%&1.%')
order by object_type, object_name
/


A question on moving tables

p001, January 31, 2003 - 9:51 pm UTC

I am using Oracle version 9.2 and I have a 1GB database. Right now all objects (tables, indexes, etc.) sit in one tablespace. I want to reorganize these into multiple tablespaces.

I saw in some threads above moving tables based on size, like large, medium and small. What is the advantage of doing this?

Do we really see any significant performance improvements by moving objects into different tablespaces?


Tom Kyte
February 01, 2003 - 9:28 am UTC

do you have more than one disk -- it is all about DISKS, nothing about tablespaces really.

tablespaces can be used to place an object on a specific disk.
tablespaces can be made up from files from many disks (sort of like striping)

unless you have lots more "physical disks" and given that 1gig is pretty small -- you probably won't see anything from this exercise.


it is not "different tablespaces" you want to look at but rather "even spread of IO across all devices you own". You may be able to use tablespaces to help you achieve this on a big database.

How to move LOBS

Anton, February 04, 2003 - 6:55 pm UTC

Hi Tom.

I have just moved a table containing a LOB column to a new tablespace. How do I move the LOB segment to the same new tablespace?

Problem in moving tables across tablespaces

Ashiq Shamsudeen, February 05, 2003 - 5:03 am UTC

Hello Tom,

I have a problem in moving tables from one TBS (tablespace) to another. All the objects of the scott schema are in the SYSTEM tablespace, so I took a dump of it, created another user ashiq, assigned a default TBS and temp TBS, and granted all the necessary privileges which you've shown us. It's throwing an error during the import of constraints saying the objects already exist, but as such no objects exist. See...

scott@RMS8> sho user
USER is "SCOTT"
scott@RMS8>
scott@RMS8> select * from tab;

TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
BONUS TABLE
DEPT TABLE
DUMMY TABLE
EMP TABLE
SALGRADE TABLE
TEST TABLE
TTK TABLE

8 rows selected.

scott@RMS8> select constraint_name, constraint_type, table_name
2 from user_constraints;

CONSTRAINT_NAME C TABLE_NAME
------------------------------ - ------------------------------
PK_DEPT P DEPT
SYS_C007564 C EMP
PK_EMP P EMP

3 rows selected.

scott@RMS8> select table_name, tablespace_name from user_Tables;

TABLE_NAME TABLESPACE_NAME
------------------------------ ------------------------------
BONUS SYSTEM
DEPT SYSTEM
DUMMY SYSTEM
EMP SYSTEM
SALGRADE SYSTEM
TEST SYSTEM
TTK SYSTEM

8 rows selected.

scott@RMS8>
scott@RMS8> conn system/manager@rmsreal8
Connected.
scott@RMS8> sho user
USER is "SYSTEM"
system@RMS8> create user ashiq identfied by ashiq;
create user ashiq identfied by ashiq
*
ERROR at line 1:
ORA-00922: missing or invalid option

system@RMS8> ed
Wrote file afiedt.buf

1* create user ashiq identified by ashiq
system@RMS8> /

User created.

system@RMS8> grant connect , resource to ashiq;

Grant succeeded.

system@RMS8> alter user ashiq default tablespace users temporary tablespace temp;

User altered.

system@RMS8> revoke unlimited tablespace from ashiq;

Revoke succeeded.

system@RMS8> alter user ashiq quota unlimited on users;

User altered.

system@RMS8> alter user ashiq quota unlimited on temp;

User altered.

system@RMS8> host exp userid=scott/tiger@rmsreal8 owner=scott

system@RMS8> conn ashiq/ashiq@rmsreal8
Connected.

system@RMS8> sho user
USER is "ASHIQ"

system@RMS8>select constraint_name, constraint_type, table_name
2 from user_constraints;

no rows selected

system@RMS8> select * from tab;

no rows selected

As you see, there are no objects before importing. Now I connect to the scott user to import the dump...

system@RMS8> conn scott/tiger@rmsreal8
Connected.
system@RMS8> sho user
USER is "SCOTT"
system@RMS8> host imp userid=scott/tiger@rmsreal8 parfile=ashiq.par

The parfile has the following values ,they are...

FILE=expdat.dmp
IGNORE=n
gRANTS=y
constraints=y
fromuser=scott
toUSER=ashiq
log=ashiq_log.log

Error what it gave is(i took from ashiq_log.log) ...

Connected to: Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
With the Partitioning option
JServer Release 8.1.7.0.0 - Production

Export file created by EXPORT:V08.01.07 via conventional path
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
. importing SCOTT's objects into ASHIQ
. . importing table "BONUS" 0 rows imported
. . importing table "DEPT" 4 rows imported
IMP-00015: following statement failed because the object already exists: <=======
"ALTER TABLE "DEPT" ADD CONSTRAINT "PK_DEPT" PRIMARY KEY ("DEPTNO") USING I"
"NDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 65536 MI"
"NEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS"
" 1 BUFFER_POOL DEFAULT) TABLESPACE "SYSTEM" ENABLE "
. . importing table "DUMMY" 1 rows imported
. . importing table "EMP" 14 rows imported
IMP-00015: following statement failed because the object already exists: <======
"ALTER TABLE "EMP" ADD CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO") USING INDE"
"X PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 65536 MINEX"
"TENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 "
"BUFFER_POOL DEFAULT) TABLESPACE "SYSTEM" ENABLE "
.
.
.
.
.


Why is it throwing this error? Though there are no objects before importing, why is it showing the error "objects already exist"?

Why doesn't alter table <tname> move tablespace work for global temporary tables? If I try to move one I get the error "ORA-14451: unsupported feature with temporary table". Then where is it actually stored until the session is over?
DB is 8.1.7



Tom Kyte
February 05, 2003 - 8:51 am UTC

You would have to give me a full test case -- showing how the constraints were created in the first place (since scott/tiger normally doesn't have them).

I cannot reproduce.


It doesn't make sense to move a global temporary table. They don't "exist" anywhere until someone uses them -- and then they "exist" in the users temporary tablespace.

Problem in moving tables across tablespaces

Ashiq Shamsudeen, February 06, 2003 - 1:54 am UTC

As you said, yes, constraints won't be there for the emp/dept tables. I added constraints separately. Moreover, I'm running this on a client machine.

Regarding performance, which is best -- I mean,
alter table Y move tablespace, or exp/imp?

Tom Kyte
February 06, 2003 - 8:02 am UTC

like I said -- I would need a full example.

I rebuilt scott/tiger.
I added constraints in every possible manner I could imagine anyone doing -- fully qualified names, from another schema, index owned by some other user, etc

could not reproduce.

So, If you can do the blow by blow -- like I do --- you know

o step 1, run demodrop
o step 2, run demobld
o step 3, <how to add constraint>
o step 4, exp/imp --> error



I would use alter table move in all cases if at all possible. Anytime you take the data OUT of the database to put it back IN, you stand a chance of losing something -- the data, an index, a constraint, a grant, something. alter table move -- no worries, you will not lose anything (and you don't read -> write to disk + read from disk -> write -- you just read -> write, less IO going on there)



Confused

Helen, February 06, 2003 - 8:10 am UTC

I may be totally wrong here but shouldn't:

host imp userid=scott/tiger@rmsreal8 parfile=ashiq.par
be
host imp userid=ashiq/ashiq@rmsreal8 parfile=ashiq.par

???


Tom Kyte
February 06, 2003 - 9:02 am UTC

if scott/tiger is a DBA, the first one works but you know, I didn't try that.  

But -- it works:


ops$tkyte@ORA817DEV> drop user ashiq cascade;

User dropped.

ops$tkyte@ORA817DEV> drop user scott cascade;

User dropped.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> create user scott identified by tiger;

User created.

ops$tkyte@ORA817DEV> grant connect, resource, dba to scott;

Grant succeeded.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> alter user scott default tablespace users temporary tablespace temporary;

User altered.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> create user ashiq identified by ashiq;

User created.

ops$tkyte@ORA817DEV> grant connect, resource to ashiq;

Grant succeeded.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> host demobld scott tiger

SQL*Plus: Release 8.1.7.0.0 - Production on Thu Feb 6 09:01:20 2003

(c) Copyright 2000 Oracle Corporation.  All rights reserved.


Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production

Building demonstration tables.  Please wait.
Demonstration table build is complete.
Disconnected from Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production

ops$tkyte@ORA817DEV> pause

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> @connect scott/tiger
ops$tkyte@ORA817DEV> set termout off
scott@ORA817DEV> REM GET afiedt.buf NOLIST
scott@ORA817DEV> set termout on
scott@ORA817DEV>
scott@ORA817DEV> alter table scott.dept add constraint dept_pk primary key(deptno);

Table altered.

scott@ORA817DEV> alter table emp add constraint emp_pk primary key(empno);

Table altered.

scott@ORA817DEV>
scott@ORA817DEV> host exp userid=scott/tiger owner=scott

Export: Release 8.1.7.4.0 - Production on Thu Feb 6 09:01:21 2003

(c) Copyright 2000 Oracle Corporation.  All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production
Export done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user SCOTT
. exporting object type definitions for user SCOTT
About to export SCOTT's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export SCOTT's tables via Conventional Path ...
. . exporting table                          BONUS          0 rows exported
. . exporting table                           DEPT          4 rows exported
. . exporting table                          DUMMY          1 rows exported
. . exporting table                            EMP         14 rows exported
. . exporting table                       SALGRADE          5 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

scott@ORA817DEV> host imp userid=scott/tiger ignore=n grants=y constraints=y fromuser=scott touser=ashiq

Import: Release 8.1.7.4.0 - Production on Thu Feb 6 09:01:26 2003

(c) Copyright 2000 Oracle Corporation.  All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production

Export file created by EXPORT:V08.01.07 via conventional path
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
. importing SCOTT's objects into ASHIQ
. . importing table                        "BONUS"          0 rows imported
. . importing table                         "DEPT"          4 rows imported
. . importing table                        "DUMMY"          1 rows imported
. . importing table                          "EMP"         14 rows imported
. . importing table                     "SALGRADE"          5 rows imported
About to enable constraints...
Import terminated successfully without warnings.

 

Move objects with large extents takes enormous time

Ramanathan Subramanian, February 06, 2003 - 1:55 pm UTC

Hi Tom,

I am working on Oracle RDBMS version 8.1.7.3 (32 BIT)
I have more than 25 tables in a tablespace USERS having extent size of 16K and number of extents range from 10,000 to 25,000. Yesterday I tried to rebuild one table in another tablespace with extent size of 4MB. The table size was 400MB and it had 12000 extents. It took 3 hrs to move this object to another tablespace. When I was monitoring the process, the creation of new table happened quickly, but the dropping of extents on the old tablespace took very long time.

I used the following SQL.

alter session set sort_area_size=200000000;
alter session set db_file_multiblock_Read_count=128;
alter session set sort_multiblock_read_count=128;

alter table CUST.LOADTRACK move tablespace USR01 storage ( initial 4M next 4M pctincrease 0 maxextents unlimited) ;

This table was moved from USERS tablespace.
I have a task of moving all the 25 objects to new tablespace with better storage parameters. Some of the tables are of size 2GB.

How can I speed/tune this process to accomplish this task in 12 hours.

I appreciate your quick reply.
Thanks, Tom.


Tom Kyte
February 06, 2003 - 4:26 pm UTC

Are you using dictionary managed tablespaces?

DDL on those (drop, truncate) is painfully slow.

What you might want to do (little more involved) is:

create table NEW_Loadtrace tablespace usr01 .....
as select * from loadTrack;

(can be done nologging, in parallel even)

rename loadTrack to oldLoadTrack;
rename new_LoadTrack to loadTrack;
grant on it, index it, etc

that'll take the time the move would have if the object didn't have so many extents.

Then, drop the table in the background -- when no one really cares how long it takes....
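Fleshing that sketch out a little (the degree of parallelism, NOLOGGING and the names here are illustrative choices; run as the table owner):

create table new_loadtrack
  tablespace usr01
  nologging
  parallel 4
  as
  select * from loadtrack;

rename loadtrack to old_loadtrack;
rename new_loadtrack to loadtrack;
-- then recreate indexes, grants and constraints, and drop old_loadtrack later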




sys schema maintenance

David, February 14, 2003 - 12:59 pm UTC

Tom, this is an old question of mine. I searched your site but at first glance I couldn't find the answer.

As with other schemas that need some housekeeping, shouldn't the following be tackled in the sys schema ?

1 Holes in tables due to deletes
2 Index rebuilds
3 Tablespace fragmentation (in DMT, not in the case of LMT/auto)
4 Statistics collection

I have this feeling that sys objects just kind of "stay" and their management is a bit "slack"? Please comment on 1, 2, 3, 4 individually.

Thank you

Tom Kyte
February 14, 2003 - 8:37 pm UTC

1) nope
2) nope
3) nope
4) not yet...

1,2,3 are things that most people do to their own objects to make themselves feel better.

"Holes", well, they get reused.

"index rebuilds", easily 99% or more of these are 100% un-neccessary. Many of them are even harmful, but sigh, no one really measures the return then get so they don't know...

"tablespace frag" -- happens only when you drop or truncate stuff -- two things that NEVER should happen in system ;)


9i, you can analyze the sys schema, but it is not necessary (yet)... coming soon to a theatre near you though.

On holes and rebuilds

David, February 14, 2003 - 10:02 pm UTC

Tom, you sort of surprised me... I thought the items I wrote above would be normal DBA tasks. It wasn't clear to me if you were talking of SYS or other schemas too.

1. You mean I don't have to worry about holes because they get reused, even in other schemas than sys ?

2. You mean I don't have to treat them ? Even if I do some operation that deletes lots of rows from a crowded table ? And if it is supposed not to be crowded any more (shouldn't I reorganize the table, say ALTER TABLE MOVE) ?

3. I hear about DBAs rebuilding indexes after every "major" DML operation. Isn't it necessary though?

4. How about the so-called "sliding index syndrome" ? ( index extending because of old keys and leaf rows with few entries )

5. Which rebuilds would be "un-necessary" and which would be "harmful" ?

6. When would it be really "necessary" to rebuild ? Isn't it when we have a lot of deleted leaf rows ?

Thanks you for the answers. Please, point me to some threads of interest in this regard.

Tom Kyte
February 15, 2003 - 12:14 pm UTC

1) yup
2) a "crowded table"? I've never heard of a table being crowded but I think I know what you mean.

You have a purge process. A better way to do a purge, if you are deleting a large percentage of the table, is (a sketch follows the list):

a) create table as rows to keep
b) drop old table
c) rename new table to old
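A sketch of those three steps (the table name and the keep-predicate are illustrative):

create table t_keep as select * from t where created_date >= sysdate - 90;
drop table t;
rename t_keep to t;
-- then recreate indexes, grants and constraints as needed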

3) not only NOT NECESSARY, but
a) a waste of time typically
b) counter productive
c) never measured -- did they show that the index performs better tomorrow?
d) sometimes even causes NEGATIVE impacts to performance

bitmap indexes are a glaring exception to this... b*trees pretty much take care of themselves with the exception of one or two degenerative cases, but those are the exception and many of us will never see them in our systems.

4) give me a concrete example to work with here -- a simulation if you will. It is what I would do for you.

5) most all and many.

6) No, not at all -- they get reused and it can be quite expensive to get those "holes back". See
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6601312252730 <code>

for example. there, they rebuild an index and watch the system take the next week and a half to get back to "normal" -- only to do it ALL OVER again next month. How productive.


Here is a case when you might COALESCE (not rebuild -- COALESCE)...

o you have an index on a column

o the values in this column are increasing (or decreasing) -- eg a sequence. Specifically the values are not random - they are somewhat predictable -- always bigger than before (or smaller)

o you delete from this table as you process rows (eg: this is a table where you insert records to be processed and as you process them -- you delete them)

o you do not delete 100% of the records over time. That is, record #55 might be exceptional -- never processed for some reason. It stays in this table forever. Record #1032 might too - and so on.

o you therefore end up with an index that has lots of leaf blocks with 1 or 2 rows on them on the "left" of the index and you'll NEVER insert a row into these blocks again (because the new values are ALWAYS higher -- there will NEVER be a row less than 1032 ever again).

In order to reclaim these very sparse blocks -- we can coalesce the index, merging together adjacent index leaf blocks, taking the newly emptied blocks and putting them on the free list. No need to rebuild the entire structure -- just get back that which is yours, period.
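The statement itself is simply (index name illustrative):

alter index t_idx coalesce;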


A finely tuned index will run at 75% plus or minus utilization by design on average. They are not built to be rebuilt -- they almost always take care of themselves.


ctas+drop+rename X alter table move

David, February 15, 2003 - 4:41 pm UTC

Tom, I gotta say... In a few lines you changed a lot of my concepts.

a) The "sliding index window" (I've heard people calling this many ways) is exactly what you mentioned in your coalesce example. This is usually quoted as the classic case why indexes SHOULD be rebuilt. I just changed my mind -- your colasce description clears any doubt. Since coalesce was introduced in 8i (I guess), what would people do before that... rebuild ?

b) Besides backup, I thought these were some of the most important DBA housekeeping tasks:

1 Holes in tables due to deletes
2 Index rebuilds
3 Tablespace fragmentation (in DMT, not in the case of LMT/auto)
4 Statistics collection

From what you said, I understand the only one I should care about is 4 -- gathering stats. Do you agree ?

c) In question 2 above, instead of

o create table as rows to keep
o drop old table
o rename new table to old

couldn't I just:

o delete table rows to be purged
o alter table move (+ eventual index rebuild)

Wouldn't the latter be better since I may have other "attached" objects and such, like indexes, grants, etc ?

Tom Kyte
February 15, 2003 - 5:13 pm UTC

a) correct -- and these indexes are *rare*. Tell me -- other than the example I gave, how often does something like that really happen?

A purge of "old data" would get rid of all of them (leave no left behinds)...

If you don't delete (very common) -- it is not a problem...

b) 3 isn't an issue anymore and was only an issue in the past if you let it be (eg: you should encounter that issue once and then you learn and you don't let it happen anymore)

#4 is valid.

c) tell me -- which has the opportunity to be

o easily parallelized
o done without undo generation
o done without redo generation
o the fastest.....

Yes, you could delete + move + IMMEDIATE index rebuild as moving invalidates the indexes.

Indexes would have to be rebuilt (created really -- the rebuild would be in effect a create since the existing indexes are useless -- have bogus rowids in them) in both cases.

Grants run really fast....

So, if this is something you do once or twice a year -- create as keep, drop, rename is perhaps the most efficient method.

more and more questions

David, February 15, 2003 - 10:13 pm UTC

Tom, it sounds strange to me... despite the performance and space (no user undo) advantages, delete purge/alter table move seems to be much simpler.

All table characteristics are kept (pctfree, pctused, storage and other clauses, dependent and depending objects like indexes, grants, etc.). Also I can do it online (ok, the indexes would be useless before rebuilding them) -- less to worry about, fewer points of failure -- only rebuilding the indexes.

In the case of a table with specific properties, ctas/drop/rename would entail quite a lot of research to reconstruct its original format -- not to mention downtime.

a) In my opinion, choosing one of them is a matter of tradeoffs -- ctas=performance and move=ease of use. Don't you think ? Aren't they just alternatives -- both with advantages and disadvantages ?

b) Do you have any scripts/methodology for reconstructing the original table properties in the ctas scenario ?

c) In the case of the the sliding index (your coalesce example), how do I monitor so I know what indexes need coalescing ? How often should I do that ? Maybe automate that ?

Tom Kyte
February 16, 2003 - 10:44 am UTC

Simple is one thing. Simple is easy. Simple is simple. Simple is something that could take 5 days.

create as rows to keep
index
grant

might take a couple of minutes in comparison.


If you have a large table (hundreds of megs in size -- gigabytes in size -- or more)

And you are going to purge much of the data (say 40% or more)

And you are going to rebuild anyway afterwards (alter move)

THEN

CTAS + index + grant will be many many times faster than

delete + move + index


a) yes, but if you have to do this in a brief period of time, your choices will be constrained.

b) look right above - CTAS is not significantly different from generating alter table moves ;)

c) you tell me -- you maintain the system. if you insert 100 rows a day and delete 99 of them a day, it could be months between coalescing. If you insert 100,000 rows a day and delete 99,990 of them a day during processing, it might be a more regular thing. You can automate it if you like but the period will be based on how you use it.


one more thing on the coalescing - in this case, it'll do two things

a) reclaim space (thats OK, thats good, do that when you want to)

b) if your queries are in the form of

"select * from ( select * from rows_to_process where processed = 'N' order by id ) where rownum = 1"

that is -- you are using the index in question to find the minimum non-processed row, you will want to use coalesce more often as you are reading perhaps lots of virtually empty index blocks in order to find the row you want.




Question on moving clustered tables...

Matt, February 16, 2003 - 2:05 am UTC

I want to set up a procedure to move 30 or so clustered tables into one date range partitioned table. From this thread I understand that the 'alter table move...' will not work. So what would be the best high level approach to do this?

I am thinking a CTAS to convert each clustered table to an ordinary heap organised table (maybe splitting each table so that one clustered table goes into two or more partitions). Each table can then be swapped into a pre-defined partition (in the target table).

Is there anything else that I might need to consider?
Is there a better (and more efficient) means to do this?

Thanks and Regards,


Tom Kyte
February 16, 2003 - 11:10 am UTC

CTAS will work nicely -- just make sure to test that the swap afterwards is being done without validation (or you can end up reading and reading and reading the data over and over -- you just want a dictionary update at the end).
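A sketch of such a swap (all names here are illustrative); the WITHOUT VALIDATION clause is what avoids re-reading the data:

alter table target_partitioned_table
  exchange partition p_2003_q1 with table staging_heap_table
  without validation;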



last row factor

David, February 17, 2003 - 8:28 am UTC

Tom, you are saying index leaf rows are usually reused after deletes, so I don't have to worry about rebuilding -- okay.

If I'm not mistaken, leaf blocks only get reused after the last row is deleted. That's one of the main reasons some people say these indexes should be rebuilt.

a) Doesn't that make those blocks prone to be underutilized ? Hence, wouldn't it be proper to perform rebuilds after major deletes ?

b) I had asked you above how I can monitor index holes. From some papers I've read, they say the ratio of deleted rows from index_stats being around 40% is a good hint for me to rebuild the index. Is that metric ok ?

c) Because there can be so many indexes in a database, I should keep procedures to know what indexes should be coalesced (rebuilt or whatever). To me, only "having the feeling" that some table is purged in bulk may be a bit slack. What methodology would you use to spot such indexes? Running the query from b) from time to time maybe? What else?

Thanks

Tom Kyte
February 17, 2003 - 11:35 am UTC

"If I'm not mistaken, leaf blocks only get reused after the last row is deleted.
That's one of the main reasons some people say these indexes should be rebuilt."


well, consider


create index t_idx on t(last_name);


Ok, say you fire Bill Smith. Delete from t where last_name = 'Smith', there is now a deleted entry.

Is that block "underutilized"??? Not if you hire someone named S<omething> -- any name that "fits" on that block.

Is that block "underutilized"??? Not if you have someone get married and change their last name....


You see, with the exception of indexes on "things that only increase" -- data is moving about in an index structure all over the place (and those are the vast majority of indexes)

Also -- as I said over and over (and over, and over) -- a "mass delete" would be performed as a CTAS anyway! It is called a "purge", it is an infrequent operation. It is something that can be (nay, I would say should be) optimized to be very efficient.


a) no, I do not believe in "major deletes". And even so -- if it is deleting historical data then the index blocks for sequence/date columns would go 100% free (you are wiping out OLD DATA -- all the sequences less than 54231 will "disappear", all of the data before "01-jan-2001" will go away).

Indexes on lastnames, firstnames -- etc (random strings) -- we'll just reuse that space over time naturally....

b) NO (i don't really know how to say this better other than to shout NO). Reread this:

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6601312252730 <code>

and read it again and again.

How did you get that 40% in there in the first place -- that will be key to understanding whether that ROT (rule of thumb -- i despise ROT) is valid in your case or not.

Say you have an index with 40% deleted lf rows. Ok, now say that index is on a character string with lots of different values ALL over the place. What'll happen if you rebuild? Well, MOST LIKELY you'll end up over the course of the next couple of days resplitting every block in that index -- getting the index "fat" again -- back to 40% empty, using great resources to do so in the process, to have gained nothing.

I have no metrics by which I say "hey, rebuild that thing". Look at the usage (i explained the case where it makes sense -- the sweeping index where you leave behind like 1 row on each leaf block). Rebuild based on HOW the index is used, not by some formula.

c) sigh. I give. you don't know what tables are purged from? (you do, you better).... they shouldn't be done with a delete in all probability -- unless they are tiny (like 1 gig or less) tables.

You cannot identify the tables that cause the very uncommon situation of the sweeping index? where you leave 1 row/leaf block?

But how about last row factor ?

David, February 17, 2003 - 12:55 pm UTC

Okay, Tom... rewind please.

I have some very basic questions, and you just mixed me up:

a) Once upon a time I learned in an Oracle class that an index leaf block only gets reused when the last row from that block is deleted, as if pctused were zero. Correct?

b) In this case, if you delete just one row from the block, you'll render it completely useless for new inserts until the block is empty. Correct? (Above you mentioned you deleted Smith and then inserted another row)

c) So, these blocks will have holes until the last row is deleted. Correct ?

d) In time, these holes will get larger and larger because of eventual deletes. Correct ?

e) So, when these deleted rows get to a threshold (40% used, say) it's good to rebuild, because there will be more holes than used space. Correct ?

The dynamics above make a lot of sense to me AS LONG AS the premises are true (a leaf block being reused only after the last row is deleted). Please be *straightforward*, so I can understand it better.

Tom Kyte
February 17, 2003 - 1:48 pm UTC

a) define reused.  I define it as "being used day to day".  

If I create an index on emp(ename) and then I delete BLAKE and then later I insert BAILEY -- will that index leaf block be "reused"??  Yes, sure.

Now, if I put 100,000 rows in emp (lots of enames, assume many leaf blocks) and I delete BLAKE and later I insert ZALUBA -- will I reuse that index leaf block BLAKE was on? No, I cannot -- ZALUBA belongs wwwwaaaay at the other end of the index.  But later I update some other row to BANKS -- now I will.  The data in the index moves about from place to place.  I frequently use and reuse blocks ALL OVER the place.

Now, imagine you have an index on a column populated by a sequence.  Further, assume all but 1% of the data gets deleted over time (we process rows and get rid of them, but 1 out of 100 "hangs around forever" for some reason).  

Now, over time, the index will accumulate leaf blocks with one or two entries on them since the OTHER 99 rows were deleted AND we cannot reuse the slots on these blocks with new inserts since the sequence numbers are always too big and getting bigger.  This is that rare case that needs a coalesce from time to time in order to take all of those 1% rows and clump them together.


Most indexes are more like the first case -- you use and reuse the space CONSTANTLY as you insert randomly into the structure, delete randomly from the structure (and since an update is logically a DELETE+INSERT in an index....)


b) false, wrong, totally erroneous.  The blocks ARE ALWAYS AVAILABLE for new inserts, updates -- always.  The index is a data structure, data goes where it must go.  You go to insert the row "Smythe" -- Oracle will search through the index looking for where "Smythe" should go and 

   -- if there is space on the block -- it'll put it there, it just needs
      space

   -- if there is NO MORE space on the block -- it'll split the block into
      two and put Smythe on one of them -- where it belongs -- in sorted order.

c) FALSE.  Easy test here:

ops$tkyte@ORA920> create table t ( x int, pad char(255) );

Table created.

ops$tkyte@ORA920> create index t_idx on t(x,pad);

Index created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t values ( 1, 'x' );

1 row created.

ops$tkyte@ORA920> insert into t values ( 2, 'x' );

1 row created.

ops$tkyte@ORA920> insert into t values ( 999999999 , 'x');

1 row created.

ops$tkyte@ORA920> commit;

Commit complete.

ops$tkyte@ORA920>
ops$tkyte@ORA920> begin
  2          for i in 3 .. 100000
  3          loop
  4                  delete from t where x = i-1;
  5                  commit;
  6                  insert into t values ( i, i );
  7                  commit;
  8          end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> analyze index t_idx validate structure;
Index analyzed.

ops$tkyte@ORA920>  select height, lf_rows, lf_blks, br_blks from index_stats;

    HEIGHT    LF_ROWS    LF_BLKS    BR_BLKS
---------- ---------- ---------- ----------
         1          3          1          0

See -- now, I put in 1, 2, 999999999 -- 2 is between them.  I deleted 2, put in 3.  Deleted 3, put in 4 and so on.  Is that space between 1 and 999999999 reused?  Sure -- the index never grew at all.  Perhaps more compelling:


ops$tkyte@ORA920> create table t ( x char(7), pad char(255) );

Table created.

ops$tkyte@ORA920> create index t_idx on t(x,pad);

Index created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t
  2  select to_char(rownum+0.1,'fm0000.00'), 'x'
  3    from all_objects
  4   where rownum < 5000;

4999 rows created.

ops$tkyte@ORA920> commit;

Commit complete.

ops$tkyte@ORA920>
ops$tkyte@ORA920> analyze index t_idx validate structure;

Index analyzed.

ops$tkyte@ORA920> select height, lf_rows, lf_blks, br_blks, DEL_LF_ROWS from index_stats;

    HEIGHT    LF_ROWS    LF_BLKS    BR_BLKS DEL_LF_ROWS
---------- ---------- ---------- ---------- -----------
         2       4999        179          1           0

ops$tkyte@ORA920>
ops$tkyte@ORA920> delete from t where mod(x,28) <> 0;

4999 rows deleted.

ops$tkyte@ORA920> commit;

Commit complete.

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t
  2  select to_char(rownum+0.2,'fm0000.00'), 'x'
  3    from all_objects
  4   where rownum < 5000;

4999 rows created.

ops$tkyte@ORA920> commit;

Commit complete.

ops$tkyte@ORA920>
ops$tkyte@ORA920> analyze index t_idx validate structure;

Index analyzed.

ops$tkyte@ORA920> select height, lf_rows, lf_blks, br_blks, DEL_LF_ROWS from index_stats;

    HEIGHT    LF_ROWS    LF_BLKS    BR_BLKS DEL_LF_ROWS
---------- ---------- ---------- ---------- -----------
         2       4999        179          1           0





d) Nope, you bought into the MYTHOLOGY that index space is not reused. It is false -- index space is very much reused; index structures are much more robust and resilient than people give them credit for.


e) nope.

All of the premises you have are incorrect.


o index space is available for reuse
o the deleted row entries are very reusable by any row that fits there (doesn't
  have to be the same value as that which was deleted as demonstrated)
o indexes behave AS IF pctfree = 0, they fill them up -- all of the way up

o pctused has no meaning when applied to NON-HEAP structures (like indexes).  You only
  need to manage freelists like that when data can go into the object "anywhere".  In an
  index -- data must go where data must go -- no ifs, ands, or buts.  If you insert
  the number 55 into an index -- it MUST go between 54 and 56 -- it cannot go
  after 1000, or before 32.  So Oracle will find the place it MUST go and put
  it right there.  If there is free space -- great; if not, we split the block
  and MAKE space


It is that last point -- that we split the block and make space -- that should make you hesitate to rebuild willy nilly -- block splitting takes a bit of time, energy, effort, resources.  If you rebuild an index and squash out all of the free space -- well, we'll just split over and over again ("riddle me this, Batman" comes to mind!)

 

Some nuggets...

A reader, February 17, 2003 - 2:36 pm UTC

I just happened to open Don Burleson's "High Performance Oracle8 Tuning" that was on my desk. Here's when he claims it's necessary to rebuild indexes (page 335):

From index_stats:

- height -> ...whenever the value of height is more than 3, you might benefit from dropping and re-creating the index. Oracle will not spawn a 4th level on a clean rebuild until more than about 10 million nodes have been added to the index.

- distinct keys -> ... values less than 20 are candidates for being recreated as bitmapped indexes.

- del_lf_rows -> (page 339 -- he has a script that creates dynamic sql to rebuild the index whenever: height > 3 OR del_lf_rows >10)

So ?

Tom Kyte
February 17, 2003 - 3:06 pm UTC

read

http://asktom.oracle.com/~tkyte/index_rebuild.html

(redirects to google, Don was there himself)


the distinct keys thing is *just plain wrong advice*.  20 distinct values -- hmm, I have a 2,000,000 row table.  20 distinct values in column X.  It is an OLTP system (transactional, not DW).  Of the 20 distinct values, 19 of them only point to 200,000 rows (19 of them point to 10% of the data).  The remaining 1,800,000 rows all have the value "X".  The 19 point to more or less uniform numbers of rows, or about 10,526 each.  Ok -- so each value there points to 0.5263% of the data.  We frequently query "where x = 'A'", "where x = 'B'", and so on.  We do not query for "where x = 'X'" (they are processed rows or whatever).

Still thinking "bitmap"?  Bitmaps are

o not based on raw numbers of distinct values (like "20" or "15" or "100")
o not for anything other than read-mostly systems
o best explained by Jonathan Lewis ;)

http://www.dbazine.com/jlewis3.html
http://www.dbazine.com/jlewis6.html
http://www.dbazine.com/jlewis7.html



and if he really rebuilds when del_lf_rows exceeds 10, well, I would be able to debate that and win (that it is a bad idea)



Last DDL time

A reader, February 19, 2003 - 8:40 am UTC

Tom, I was trying to check the last ddl time from user_objects during a long-running index rebuild.

Is the last DDL time the time the rebuild started or the time it ended (or something else), since the rebuild can take some hours?

Thanks

Tom Kyte
February 19, 2003 - 8:56 am UTC

appears to be the time the ddl started:

big_table@ORA920> set timing on
big_table@ORA920> select sysdate, last_ddl_time from user_objects where object_name = 'BIG_TABLE_PK'
2 /

SYSDATE LAST_DDL_TIME
-------------------- --------------------
19-feb-2003 08:55:44 12-sep-2002 08:50:01

1 row selected.

Elapsed: 00:00:00.01
big_table@ORA920> alter index big_table_pk rebuild;

Index altered.

Elapsed: 00:00:08.54
big_table@ORA920> select sysdate, last_ddl_time from user_objects where object_name = 'BIG_TABLE_PK'
2 /

SYSDATE LAST_DDL_TIME
-------------------- --------------------
19-feb-2003 08:55:52 19-feb-2003 08:55:44

1 row selected.

Elapsed: 00:00:00.01
big_table@ORA920>



Date printing on new site

Paul, February 19, 2003 - 9:32 am UTC

Don't know if it is my IE browser or not (my linux box is rebuilding a kernel) but I see this for dates on Answers
"originally submitted on 16-¿¿¿-2003 18:34 Eastern US time, last updated 19-¿¿¿-2003 8:52"

Like the new site features, thanks very much for this wonderful resource.
Specifically this question as I am building routines to move tables to LMTs now.

Tom Kyte
February 19, 2003 - 1:45 pm UTC

sorry about that -- we are fixing that now ;) slight NLS_LANG issue... should be corrected actually

moving tables across tablespaces

Alvin Panugayan, April 21, 2003 - 2:45 am UTC

Greetings,

What happens if most of the production tables reside in the system tablespace??

I plan to use your script to migrate the tables which reside in the system tbs to another tbs.



Tom Kyte
April 21, 2003 - 7:24 am UTC

system is just a tablespace -- same procedures work.

default tablespace as SYSTEM

Mike, May 16, 2003 - 11:07 pm UTC

Tom,
I have installed 9i R2 enterprise edition.

SQL> select username, default_tablespace from dba_users where default_tablespace='SYSTEM'
  2  and username not in ('SYS','SYSTEM', 'SCOTT');

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
DBSNMP                         SYSTEM
MBURK                          SYSTEM
OUTLN                          SYSTEM
WMSYS                          SYSTEM
ORDSYS                         SYSTEM
ORDPLUGINS                     SYSTEM
MDSYS                          SYSTEM

Why do these product owners use SYSTEM as their default?
 

Tom Kyte
May 17, 2003 - 10:08 am UTC

they are our accounts, we can put them where we like. there is nothing inherently "special" about system as far as object allocation goes.

Mike, May 17, 2003 - 11:25 am UTC

"
they are our accounts, we can put them where we like. there is nothing inherently
"special" about system as far as object allocation goes.
"

I raised the question because of Oracle Expert -- the tuning tool gives the following recommendation:



Oracle Expert has discovered tablespace SYSTEM is dedicated to data
dictionary segments.

Oracle Expert recommends relocating:
$$$$$
index WMSYS.WM$LOCKROWS_INFO_IDX to a tablespace dedicated to
application index segments (INDX,TCO_INDEX)

index WMSYS.WM$VT_ERRORS_PK to a tablespace dedicated to application
index segments (INDX,TCO_INDEX)

index WMSYS.WM$TMP_DBA_CONS_IND to a tablespace dedicated to
application index segments (INDX,TCO_INDEX)

index WMSYS.SYS_C001463 to a tablespace dedicated to application
index segments (INDX,TCO_INDEX)

index WMSYS.WM$NESTED_COLUMNS_PK to a tablespace dedicated to
application index segments (INDX,TCO_INDEX)
....
$$$$$
TCO is my customer application owner.

Why does Oracle Expert give such a recommendation?

Thanks

Mike

Tom Kyte
May 17, 2003 - 3:12 pm UTC

I guess they were the default tablespace of your account that was running this.

Ignore it.

Mike, May 17, 2003 - 12:15 pm UTC

By the way, Oracle Expert was installed with 9i R2. I believe it is 9.2.0.1

A reader, June 25, 2003 - 8:52 am UTC


initial and lmt - 8i

M. Bento, July 09, 2003 - 8:12 am UTC

When moving a table into a uniform LMT using alter table without the storage clause, the table's initial extent is not set to the LMT's default initial parameter but keeps its previous value, ending up with tables with different initial extents inside the same LM tablespace.
Should one set the initial along with the alter table in order to prevent fragmentation?
Thanks.

Tom Kyte
July 09, 2003 - 11:22 am UTC

you are confusing the initial extent sizes with actual extent sizes.

Rest assured, the EXTENTS actually allocated are all the same size in that UNIFORM LMT -- use user_extents to verify that.

The INITIAL EXTENT (and next extent and pctincrease and minextents) are used during the initial creation to figure out HOW MANY extents to create.  Consider:

ops$tkyte@ORA920> create tablespace testing
  2  extent management local
  3  uniform size 512k
  4  /

Tablespace created.

ops$tkyte@ORA920> compute sum of mbytes on report
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> create table t ( x int )
  2  tablespace testing
  3  storage ( initial 1m );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select extent_id, bytes/1024/1024 mbytes
  2    from user_extents
  3   where segment_name = 'T'
  4   order by extent_id;

 EXTENT_ID     MBYTES
---------- ----------
         0         .5
         1         .5
           ----------
sum                 1

see, 2 extents of 512k -- for a total of 1m, because the 1m was requested...

ops$tkyte@ORA920>
ops$tkyte@ORA920> drop table t;

Table dropped.

ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> create table t ( x int )
  2  tablespace testing
  3  storage ( initial 1m next 1m minextents 4 pctincrease 100 );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select extent_id, bytes/1024/1024 mbytes
  2    from user_extents
  3   where segment_name = 'T'
  4   order by extent_id;

 EXTENT_ID     MBYTES
---------- ----------
         0         .5
         1         .5
         2         .5
         3         .5
         4         .5
         5         .5
         6         .5
         7         .5
         8         .5
         9         .5
        10         .5
        11         .5
        12         .5
        13         .5
        14         .5
        15         .5
           ----------
sum                 8

16 rows selected.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select 1 + 1 + 2 + 4 from dual;

   1+1+2+4
----------
         8

there, Oracle took the initial, next, minextents and pctincrease and computed "hey, it would have taken 8m in a DMT, please grab 8m of space" and did so, 512k at a time...


ops$tkyte@ORA920>
ops$tkyte@ORA920> drop table t;

Table dropped.

ops$tkyte@ORA920>
ops$tkyte@ORA920> create table t ( x int )
  2  tablespace testing
  3  storage ( initial 64k minextents 10 );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select extent_id, bytes/1024/1024 mbytes
  2    from user_extents
  3   where segment_name = 'T'
  4   order by extent_id;

 EXTENT_ID     MBYTES
---------- ----------
         0         .5
         1         .5
         2         .5
         3         .5
         4         .5
         5         .5
         6         .5
         7         .5
         8         .5
         9         .5
           ----------
sum                 5

10 rows selected.

ops$tkyte@ORA920>
ops$tkyte@ORA920>

Here Oracle did 64k * 10 = 640k, but also saw the minextents 10 -- and said "allocate at least 10 extents", since 10 uniform extents is more space than 640k anyway

But there is NO fragmentation here, they are all 512k extents. 

CTAS and alter table move

reader, August 21, 2003 - 3:46 pm UTC

Is the concept behind moving a table using the alter table move command the same as CTAS, in the sense that Oracle will copy the rows directly to blocks in the datafiles? Is that correct? If not, how does Oracle do it when we use CTAS and alter table move? Thanks.



Tom Kyte
August 21, 2003 - 7:24 pm UTC

same concept -- but you don't lose your grants, index definitions, table name, etc.

use ctas to copy the data.
use alter table move to MOVE the data.
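A minimal sketch of the difference (table, index and tablespace names are made up):

-- copy: a brand new table; the original stays; grants/indexes/constraints
-- must be re-created on the copy by hand
create table emp_copy tablespace new_ts as select * from emp;

-- move: the same table (name, grants, constraints, triggers are kept); only
-- the segment is rewritten, and its indexes go UNUSABLE until rebuilt
alter table emp move tablespace new_ts;
alter index emp_pk rebuild tablespace new_ts;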

redo and undo generation

Raman, August 22, 2003 - 8:30 am UTC

Tom, do both the CTAS and alter table move commands generate redo and undo if I don't put the table in nologging mode before the operation? Thanks.

Tom Kyte
August 22, 2003 - 9:02 am UTC

no undo.

redo in archivelog mode unless table is NOLOGGING.
no redo in noarchivelog mode.
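If you want to watch that for yourself, one sketch (table name made up) is to snapshot your session's 'redo size' statistic around the move:

select n.name, m.value
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic#
   and n.name = 'redo size';

alter table big_t move;              -- LOGGING table + archivelog mode: lots of redo
-- alter table big_t move nologging; -- minimal redo (just dictionary/space management)

select n.name, m.value
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic#
   and n.name = 'redo size';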



What are the uniform size values you use for LMT?

Laurent, August 22, 2003 - 9:29 am UTC

Tom,

Great examples. I'm also planning to move data from dictionary managed tablespaces to LMTs with uniform size extents.

If I'm calculating the size of each table with your method found at:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:266215435203


what uniform sizes should I use?

Can you tell me what ranges you usually use? For example:

tables < 10Mb go in a tablespace with 128k extent size
10 Mb < tables < 100 Mb go in a tbs w/ 1M extent size
--- these are examples ---

Thanks


Tom Kyte
August 22, 2003 - 10:18 am UTC

128k would be good for 10m -- that would be about 80 extents, manageable -- great (higher is fine as well)


10 .. 100mb with 1m, fine...


your goal: come up with an extent size that

a) gives you the max number of extents you are personally comfortable with
b) doesn't "waste" too much space. Using 5meg extents for an object that will be about 6meg, for example, would "waste" 4meg at the end (two 5meg extents allocated to hold 6meg of data).



about indexes size this time

Laurent, August 22, 2003 - 11:30 am UTC

Hi again,

For the tables considering the 2 methods below:

the first method found at:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:266215435203

 EXTENT_ID      BYTES     BLOCKS
---------- ---------- ----------
         0     131072         16
         1     131072         16
         2     131072         16
         3     131072         16
         4     131072         16
         5     131072         16
         6     131072         16
                      ----------
sum                          112


    BLOCKS EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
       100           11        844                   1


Ok, the above shows us :

o we have 112 blocks allocated to the table
o 11 blocks are totally empty (above the HWM)
o 100 blocks contains data (the other block is used by the system)
o we have an average of .8k free on each block used.

Therefore, our table

o consumes 100 blocks of storage in total
o of which 717.6k is used for our data.

-------------------------------------------------------------------------
the second method using your show_space procedure found at :
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:231414051079

select SEGMENT_NAME, PARTITION_NAME, SEGMENT_TYPE, FREE_BLOCKS, TOTAL_BLOCKS,UNUSED_BLOCKS from table (show_space_for('X12_NPLAN','X84','%'));

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
SEGMENT_TYPE FREE_BLOCKS TOTAL_BLOCKS UNUSED_BLOCKS
------------------------------ ----------- ------------ -------------
X12_NPLAN
TABLE 1 112 11

Total Blocks : Total blocks allocated to the table
Unused Blocks : Blocks that have never contained data (above the HWM)

we can only find that our table consumes 100 blocks of storage in total
(112 TOTAL_BLOCKS - 11 UNUSED_BLOCKS - 1 BLOCK USED BY THE SYSTEM)

my first question is :
Is there always 1 block used by the system for each table, index, ...?

so I guess this method is less accurate than the first one.
-----------------------------------------------------------------------

My 2nd question is about indexes:

select SEGMENT_NAME, PARTITION_NAME, SEGMENT_TYPE, FREE_BLOCKS, TOTAL_BLOCKS,UNUSED_BLOCKS from table (show_space_for('X18_IND1','X84','%'));

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
SEGMENT_TYPE FREE_BLOCKS TOTAL_BLOCKS UNUSED_BLOCKS
------------------------------ ----------- ------------ -------------
X18_IND1
INDEX 0 6608 5

here we find out that:
index X18_IND1 uses 6608-5(-1??) blocks
^^^^
depending on question 1
Is that right?
Is there a more accurate way to get an index's size (i.e. knowing the average space used in the index's blocks)?

Thanks again,


Tom Kyte
August 22, 2003 - 7:37 pm UTC

there is at least one block, yes.


is it less accurate? no.
does it give you less information? yes.

a single block is NOT going to make or break you in the end.

it does not bypass buffer cache

Reader, August 24, 2003 - 10:44 am UTC

"redo in archivelog mode unless table is NOLOGGING."

So, alter table move regardless of whether the table is in nologging or logging mode, oracle does not bypass buffer cache when it moves (rebuilds) the table using alter table ..... command. Tom, is it like it is doing conventional inserts using the blocks in the free list all over again for all of the rows? That means if the table is huge, it takes a lot of time. Thanks.


Tom Kyte
August 24, 2003 - 11:46 am UTC

huh?


it generates redo

it bypasses undo.

it does direct writes to datafiles.

redo generation is not at all linked to the "buffer cache". It does not do conventional path inserts. It is like a direct path load -- read data, format blocks, write blocks. The only difference is the "read data" isn't reading a file, it is reading database data.


how it corrects chain count

Reader, August 24, 2003 - 12:22 pm UTC

Thanks for the clarification. But I am not clear about how alter table move takes care of row migration when it does a direct path load. While moving, does it measure the row size and put fewer rows in each block, depending on the block size and the current pctfree value, to eliminate the row migration issue? Thanks.

Tom Kyte
August 24, 2003 - 4:47 pm UTC

with migrated rows -- all we do is IGNORE the head rowid piece and pick up the migrated row piece.

Just like in a full scan, when we read a block with a head row piece that points to a migrated row, we ignore it, knowing that later when we hit that row for real, we'll get it then.
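A sketch of how you can watch that happen (table and index names are made up; ANALYZE populates CHAIN_CNT, which counts chained and migrated rows together):

analyze table emp compute statistics;
select chain_cnt from user_tables where table_name = 'EMP';

alter table emp move;        -- every row is rewritten, so migrated rows are "un-migrated"
alter index emp_pk rebuild;  -- the move leaves the indexes UNUSABLE

analyze table emp compute statistics;
select chain_cnt from user_tables where table_name = 'EMP';   -- only genuinely chained rows remain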

Good Work

Rajat Garg, September 24, 2003 - 7:49 am UTC

Very useful script to move tables across tablespaces.

Great! And, how do we MOVE Nested Tables ?

Arul kumar, December 10, 2003 - 11:58 am UTC

Tom,

We have lots of nested tables as well.
Can you suggest how to move them to a different tablespace?

Thanks,
Arul.

Tom Kyte
December 10, 2003 - 3:55 pm UTC

ops$tkyte@ORA9IR2> !oerr ora 22931
22931, 00000,"MOVE of nested table to a different tablespace not supported"
//  *Cause:  Attempt to move a nested table to a different tablespace.
//  *Action: Nested tables always colocate in the same tablespace as the parent.
//           A nested table can be moved to a different tablespace only by moving
//           its containing table to the target tablespace.


move the parent and the nested table goes with it.

Gr8

Bipin Ganar, January 19, 2004 - 5:56 am UTC

Hi Tom,
I have two schemas, both in the same Test Database: User1 and User2. I have taken an export from <User1>, which uses tablespace <tspace1>, while <User2> uses <tspace2>.
When I import from <user1> into <User2>, this by default creates all tables in <tspace1>, but I have to import into the other one, i.e. <tspace2>. How can I do this?

Tom Kyte
January 19, 2004 - 9:35 am UTC

read the original answer. it describes how to use exp/imp to move things across tablespaces. the SAME exact principle works across tablespaces with two accounts. Follow the bulleted list above in the exp/imp section

Nice

Roselyn, March 22, 2004 - 6:02 am UTC

Dear Tom,
What else can a tablespace store other than datafiles? Are
they confined to storing only datafiles?
Please do reply.
The command
"sql>drop tablespace <name> including contents and datafiles cascade constraints;"

made me ask that.

Tom Kyte
March 22, 2004 - 7:17 am UTC

tablespaces do not store datafiles.

tablespaces store segments.
they store segments in datafiles.


tablespaces are made up of one or more files.

tablespaces contain segments and store them in these files.


that command simply says "drop all segments in that tablespace, drop the tablespace, and then erase the files"

Different Server/different Tablespace

Vinnie, April 08, 2004 - 8:43 am UTC

Tom,

Is there any way to import a set of given tables for a given user from one server to another with different tablespace names?

Thanks

Tom Kyte
April 08, 2004 - 10:49 am UTC

please hit the "home key"

that is what this page is all about -- and only about. It outlines your options.

moving across schemas

A reader, April 19, 2004 - 4:02 pm UTC

I want to "rename" a schema. But it's not possible in Oracle. Could I use a something like your moving-cross- tablespace strategy ?

If I use exp+imp for that, I'll need to have downtime.

I wish to do this on-line. Is it possible ?

I was thinking of:
. alter table move online
. alter index rebuild online

What do you suggest ?

Tom Kyte
April 19, 2004 - 7:37 pm UTC

you would have to export/import; there is no way to rename a schema.

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:17309172009134

you might be able to use MVs on prebuilt tables in the same way I did there, across schemas

Indexes moved to new Tablespace

denni50, June 02, 2004 - 1:53 pm UTC

Tom

We moved indexes from the DATA tablespace to a new
INDEX tablespace. The problem now is we have 8 gig of
free space sitting in the DATA tablespace that we
need to recapture for hard disk storage... kind of
a 'swapping' of storage between tablespaces.

Would this require a re-sizing of the datafiles, since
indexes are no longer stored with the data within
the same tablespace?

I know you can move the tables to a TEMP tablespace,
drop the DATA tablespace and rename TEMP to DATA, but
that requires rebuilding the indexes since 'pointers'
to rowids would be lost when moved to TEMP.

And then there's the export/import which I am trying to
avoid. I know OEM comes with a Reorganization Utility..
would that help?

thanks




Tom Kyte
June 02, 2004 - 2:16 pm UTC

you can only rename tablespaces in 10g and above -- but you shouldn't need to for this.

You would have to:

alter table T move tablespace new;
alter index I rebuild;

for each table and then drop the old tablespace.


Or you can look and see what object is at the "end" of the file (use dba_extents, get the segment name of the segment that has the max block out there in that file) and just move it -- then you can shrink the file some and so on.
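A sketch of that last approach (the file id, names and sizes are made up):

-- whose extent contains the highest block in datafile 42?
select owner, segment_name, segment_type, block_id + blocks - 1 last_block
  from dba_extents
 where file_id = 42
 order by block_id + blocks desc;    -- the first row is the segment at the "end" of the file

alter table scott.big_tab move;      -- or alter index ... rebuild, for an index segment

alter database datafile '/u01/oradata/db/data01.dbf' resize 500m;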

Moving table across TBS in 8.0.4

Yogesh B, June 30, 2004 - 8:37 am UTC

Is this feature available in 8.0.4, or only 8i onwards? If not, what is the option? imp/exp or SQL*Loader?

Tom Kyte
June 30, 2004 - 10:24 am UTC

this article starts with the comment:

I'll give you the Oracle8.0 and before answer as well as the Oracle8i and above
answer (in 8i, this is *trivial*, in 8.0 and before, it takes practice and
time)....

oops, Sorry I missed that

A reader, June 30, 2004 - 10:40 am UTC


move table to same tablespace

A reader, June 30, 2004 - 10:53 pm UTC

If I do the alter table move and specify the same tablespace as the existing tablespace, will it have the same effect, storage-wise, as moving it to a different tablespace?

Tom Kyte
July 01, 2004 - 10:22 am UTC

yes, it'll move it inside the same tablespace, rebuild it in effect.

moving tables across tablespaces

SShah, July 01, 2004 - 7:03 pm UTC

Tom,
I am trying to move tables from one tablespace to another
using export utility. I tried exporting tables from scott schema with success but for some reason when I am exporting tables from another schema ABC to the dmp file using the following -

exp system/pwd file=c:\expdat_abc.dmp log=c:\export_abc.log tables=(abc.dept,abc.emp) grants=Y- indexes=Y

Export: Release 9.2.0.1.0 - Production on Thu Jul 1 16:23:55 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
EXP-00010: abc is not a valid username
EXP-00010: abc is not a valid username
EXP-00010: abc is not a valid username
Export terminated successfully with warnings.

I have exported tables earlier with success the same way for the same schema ABC. Am I missing something somewhere.
I also tried exiting from the cmd window and trying it again.
Thanks again for looking at this. :)


Tom Kyte
July 01, 2004 - 8:17 pm UTC

you have 9.2

do not use exp/imp....

just use


alter table dept move tablespace new_tablespace;


doesn't get any easier than that.




ORA-02478

Yogesh B, July 07, 2004 - 6:43 am UTC

I have a table with 3,000,000 rows. After analyzing the table I found 300,000 rows are chained, i.e. 10% of the rows (the result of alter table add columns). So I decided to create that table in a new TBS with a new name, by using create table as select with the nologging and parallel (degree 4) clauses. (I can't use move as this is Oracle 8.0.4.)

While execution of query I faced following error

ORA-02478: merge into base segment would overflow MAXEXTENTS limit

Then I checked the MAXEXTENTS parameter for the new TBS, which was originally set to 500. I changed it to unlimited and once again executed the query and it was a success.

Can you please explain the workings of 'create table as select' in the context of MAXEXTENTS.


Tom Kyte
July 07, 2004 - 8:43 am UTC

[tkyte@tkyte-pc tkyte]$ oerr ora 2478
02478, 00000, "merge into base segment would overflow MAXEXTENTS limit"
// *Cause: Merge of temporary segment into base segment failed because
// MAXEXTENTS was larger than the total in the temp and base segments
// *Action: Use a larger value for MAXEXTENTS on the base segment or make the
// extents in the temporary segments larger



seems clear? you said "i want at most 500 extents". the table resulting from your CTAS would have exceeded that. Hence we did what you asked us to do and refused to exceed your maxextents setting for that segment.

moving IOTables

Alex, July 16, 2004 - 5:15 am UTC

Thanks for the support.
I have made the following changes in your script to move IOTs also.

select
decode( s.segment_type, 'TABLE', s.segment_name, a.table_name ) order_col1,
decode( s.segment_type, 'TABLE', 2, decode(a.index_type, 'IOT - TOP', 1,3 )) order_col2,
'alter '|| decode(a.index_type, 'IOT - TOP', 'TABLE',s.segment_type) ||' '|| decode(a.index_type, 'IOT - TOP', a.table_name,s.segment_name )||
decode( s.segment_type, 'TABLE', ' move ', decode(a.index_type, 'IOT - TOP', ' move ',' rebuild ' )) ||
chr(10) ||
' tablespace &1 ' || chr(10) ||
...



Index status = N/A after moving table

Nathan Jacobson, July 29, 2004 - 2:46 am UTC

I moved some tables across tablespaces.

I got some "INVALID" indexes.... as far as know it's ok!

I did INDEX REBUILD for this indexes and it returned to "VALID".

The problem is that i got some "N/A" (status) for indexes not related to the tables moved. This indexes are PARTITIONED indexes. They went "N/A" after moving tables.

In DBA_IND_PARTITIONS the status for these indexes are "USABLE" but in DBA_INDEXES i got "N/A".

I did ALTER INDEX REBUILD PARTITION for all partitions of one of these indexes but i still have the same status.

Tom Kyte
July 29, 2004 - 11:29 am UTC

dba_indexes should have had NA for them all along -- the "index" doesn't have a status, the index partitions do.

ops$tkyte@ORA9IR2> CREATE TABLE t
  2  (
  3    temp_date  date,
  4    x          int,
  5    y int
  6  )
  7  PARTITION BY RANGE (temp_date)
  8  subpartition by hash(x)
  9  (
 10    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) tablespace users,
 11    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) tablespace tools,
 12    PARTITION junk VALUES LESS THAN (MAXVALUE)
 13  )
 14  ;
 
Table created.
 
ops$tkyte@ORA9IR2> create index t_idx on t(x) local;
 
Index created.
 
ops$tkyte@ORA9IR2> select index_name, status from dba_indexes where owner = user;
 
INDEX_NAME                     STATUS
------------------------------ --------
T_IDX                          N/A
 
ops$tkyte@ORA9IR2>


<b>N/A is "normal always" :) </b>
 

Scenario little different

Nathan Jacobson, July 29, 2004 - 6:28 pm UTC

In this case:

SQL> select status , index_name, partition_name from user_ind_partitions;

STATUS   INDEX_NAME                     PARTITION_NAME
-------- ------------------------------ -------------------
N/A      T_IDX                          JUNK
N/A      T_IDX                          PART1
N/A      T_IDX                          PART2

SQL> select status , index_name from user_indexes;

STATUS   INDEX_NAME
-------- ------------------------------
N/A      T_IDX

In my case, before rebuild i had:

select owner , status, count(status) from dba_indexes
  where status <> 'VALID' and owner = 'APPS'
  group by  owner,status;
 
no rows selected


And after rebuild of normal indexes i got N/A for partitioned indexes:

select owner , status, count(status) from dba_indexes
  where status <> 'VALID' and owner = 'APPS'
  group by  owner,status;
 
 OWNER                       STATUS   COUNT(STATUS)
 --------------------------- -------- -------------
 APPS                        N/A                  9

 

 

Tom Kyte
July 30, 2004 - 7:14 am UTC

show me how to reproduce. partitioned indexes will always be N/A as far as I know -- since the status really doesn't apply to them.


should be easy for you to set up an example?

Column order

A Reader, July 31, 2004 - 10:38 am UTC

Hi Tom,

For some reason, we need to reorganize the order of columns in some tables. Those tables contain data. What I would do is add all the columns in the "right" order, move the data to the new columns, drop the old columns, rename the new columns to the old column names, and re-apply the constraints that were lost when the columns were dropped.
That's a lot of work.
Is there an easier way we can reorganize the order of a table in 9i (say, when we desc the table)?

Thank you for your help.

Tom Kyte
July 31, 2004 - 12:19 pm UTC

I've done this one a thousand times too:

a) rename t to t_table;
b) create view t as select columns in any order that makes you happy from t_table;
c) revoke on t_table;
d) grant on t;


done in milliseconds. anything else is a waste of cycles.

"a view" is my answer.

Column Order

A Reader, July 31, 2004 - 8:47 pm UTC

Thank you for your answer. We are in the development phase. Whenever a new column is added to a table, we are asked to put the column in the "right" place (order). We have to do this many times, and a view is not an option. When released, the table creation scripts will come from the database with the right order. I was hoping Oracle had something like putting a column in a specific place so that we could alter the table more easily. I actually don't think the order of the columns matters so much.
Thank you again for your help.

Tom Kyte
August 01, 2004 - 10:24 am UTC

there is no such thing as the "right place" except that someone has deemed they know "where the right place is".

there is quite simply "no such concept" as the right place. It is in fact what views are designed for in part.


Sorry -- there is no concept of putting columns in "such and such an order"; it has never even made sense to me, actually.

You would need to rewrite the entire table to put a column "in the middle".


the "right place" is right there -- at the end of the table. You see this in the Oracle data dictionary all of the time -- as new columns are added to the dictionary -- not only do they go "to the end", but we do the same with the views too -- but you can create your own views to put the columns in any order you like.


a view *is in fact an option*. You have myopically and rather arbitrarily ruled them out, is all (nothing I can do to fix that, except try to convince you that "it is the right way")

Unusable index

Reader, September 08, 2004 - 2:41 pm UTC

If I just move the tables and not the indexes, then the indexes would NOT become unusable.

Is this understanding correct?

Thanks.

Tom Kyte
September 08, 2004 - 2:59 pm UTC

incorrect.

the rowids have sort of been "blown out of the water" there.

if you move the table, you will have to rebuild the indexes.
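A quick sketch of seeing and fixing that (names are made up):

alter table t move tablespace new_ts;

select index_name, status
  from user_indexes
 where table_name = 'T';             -- the indexes now show up as UNUSABLE

select 'alter index ' || index_name || ' rebuild;'
  from user_indexes
 where table_name = 'T'
   and status = 'UNUSABLE';          -- spool and run the generated rebuilds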

Very Useful

Krishna, September 15, 2004 - 10:32 pm UTC

As always, Tom's answers are very useful and save a lot of time at the workplace. Of course, they help in impressing the colleagues too.... :-)

Moving tables across different tablespaces in different instances

Khalid Jaffar, October 11, 2004 - 11:44 am UTC

Hi Tom,
I have a table that contains some 10.5 million records. I need to move it to a different instance. I have tried the 'copy' command of SQL*Plus, but need a speedier solution for this. Thanks in advance.

Tom Kyte
October 11, 2004 - 1:42 pm UTC

well, sqlplus copy has been "really fast" for me in the past. How speedy is speedy, and how far apart are the databases? (And you were running sqlplus ON THE MACHINE with one of the databases, right? If you copy from machine A to machine B using sqlplus on machine C, the data is fetched from A -> C and inserted from C -> B. The client machine C is frequently the "bottleneck" in such a case -- if you connect locally to one or the other of the databases, it'll go A->B instead of A->C->B.)
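For reference, the COPY command itself, run in SQL*Plus on one of the two servers, looks something like this (connect strings, names and settings are made up; the trailing "-" is the SQL*Plus line-continuation character):

SQL> set arraysize 100
SQL> set copycommit 1
SQL> copy from scott/tiger@source_db -
>      to scott/tiger@target_db -
>      create big_table using select * from big_table;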

Move schema to different tablespace

Sean, November 13, 2004 - 11:38 am UTC

Hi Tom,

I need to move whole schema from one tablespace to another tablespace. Any quick ways besides export and import?

Thanks so much for your help.



Tom Kyte
November 13, 2004 - 1:02 pm UTC

hmm, did you read the original answer at all?

moving schema to different tablespace

Sean, November 13, 2004 - 1:16 pm UTC

Hi Tom,

I didn't ask the question clearly. I know how to move table across tablespace. I guess that I can use your script to move table one by one. As you said, it is faster than export and import.

Thanks so much for your help.

moving schema to different tablespace

Tim Noel, November 17, 2004 - 4:22 pm UTC

Greetings Tom,
The examples you gave at the top of this post are invaluable. I know I'll be using them in the not-too-distant future.

However, I have a small wrinkle to this scenario.

I have an application that is used at 7 different locations around the country, each with a local client/server database identical in structure to the others but containing site-specific data. I'm working on a plan to bring all these databases together in one place and to serve the application up on the Web. The plan currently is to have one instance and seven schemas, each schema containing the unique data from the seven different locations. Right now, each database has two tablespaces, TS_T and TS_I, to store tables and indexes respectively. I'm thinking of creating two unique tablespaces for each site to contain their data, like TS_[Site_A]_T and TS_[Site_A]_I, for performance purposes. Each site will be sending me an export of their data to be imported into the new database, and I need to be able to import them into these newly created tablespaces. Can the above import example be used to do this or do I need something different?

Sorry to make this so long but I wanted to explain my situation clearly.

Many thanks in advance!

Tom Kyte
November 17, 2004 - 4:47 pm UTC

I'd use this
http://asktom.oracle.com/~tkyte/article2/index.html
and consolidate into a single schema.

tablespaces have nothing to do with performance, they are purely 100% an administrative tool.

if I were to keep 7 schemas (I wouldn't, but if I were to), I'd put all data for a schema in a single tablespace.

moving schema to different tablespace

Tim, November 17, 2004 - 4:30 pm UTC

Tom,
I should have included this info in the original message.

The exports that I receive will be coming from 8i databases and I'm importing into a 9i instance... if that makes any difference at all.

Thanks again...

moving schema to different tablespace

Tim, November 17, 2004 - 5:06 pm UTC

Tom,
I'm so sorry. I know your time is valuable. I forgot one key element here.

I failed to mention that the users from these 7 sites will be accessing the new app over the Web... and they still have to be able to get to their unique data... thus the reason for using 7 different schemas.

The plan is to check their UIDs on login, using that data to query a table of UIDs and site IDs, and executing an "after logon on database" trigger that will fire off "alter session set current_schema=[siteID]". That, along with assignment of the appropriate roles, should point users to their unique site data.

Or... is there a better way?

Thanks again...

Tom Kyte
November 18, 2004 - 9:52 am UTC

that is exactly what FGAC is about -- this situation:

...
I failed to mention that the users from these 7 sites will be accessing the new
app over the Web... and they still have to be able to get to their unique
data... thus the reason for using 7 different schemas.
......

is exactly when you want to use it :)

a web based app -- you'll probably be using a connection pool (so, forget that logon trigger in all likelihood)

each page request would call a procedure with the user's "identity", that procedure would issue "dbms_session.set_context(....)", and your policy would filter the data just for them (virtual private database)

that way you can use stored procedures and everything very naturally.


This screams "SINGLE SCHEMA"


Quick follow-up on T/S with different extent sizes

Gary, November 24, 2004 - 1:22 pm UTC

Hi Tom

9.2.0.4 EE on UNIX (AIX)
All tablespaces are LMT with ASSM turned off

A quick follow-up question about moving a table from a tablespace with a uniform extent size of, say, 500K to another tablespace with a uniform extent size of, say, 100K when there are zero (0) rows in the table.

I understand the table will have one (1) extent regardless of how many rows are in it.

When it is moved to the T/S with the smaller extent size I see it creating five (5) extents, in this case, to accommodate the larger extent the table lived in prior to the move.  DBA_EXTENTS shows me the five (5) extents of the smaller size but USER_TABLES shows me an initial extent of 500K and a next extent of 100K.

Does it need the five (5) extents in the smaller T/S because it had that much space 'allocated' in the other T/S?

How do I get the new table down to using just one (1) of the smaller extents?  

How do I get the system to recognize there is no data in the table and that it doesn't need the larger extent space (500K)?

I tried to TRUNCATE the table but that didn't change the number of extents allocated.

Here is my test cases:

Tablespace 1 with 500K extents:

SQL>create tablespace t_500
datafile '/u02/oradata/dwt1/ora_t_500.dbf' size 5M
extent management local
uniform size 500K
segment space management manual;

Tablespace 2 with 100K extents:

SQL>create tablespace t_100
datafile '/u02/oradata/dwt1/ora_t_100.dbf' size 5M
extent management local
uniform size 100K
segment space management manual;

Create table in t_500;

SQL>create table t (col1 int, col2 varchar2(20), col3 date)
tablespace t_500;

Now from USER_EXTENTS:

SEGMENT_NAME          EXTENT_ID      BYTES
-------------------- ---------- ----------
T                             0     524288

and from USER_SEGMENTS:

SEGMENT_NAME              BYTES
-------------------- ----------
T                        524288


Naturally the same.

Move the table to t_100:

SQL>alter table t move tablespace t_100;

Table altered.

Now looking at the table in t_100:

SEGMENT_NAME          EXTENT_ID      BYTES
-------------------- ---------- ----------
T                             0     114688
T                             1     114688
T                             2     114688
T                             3     114688
T                             4     114688

5 rows selected.


Looking at USER_TABLES:

TABLE TABLESPACE_NAME                INITIAL_EXTENT NEXT_EXTENT
----- ------------------------------ -------------- -----------
T     T_100                                  524288      114688

1 row selected.

It shows the initial extent still at 524288.

I can truncate the table:

SQL>truncate table t;

Table truncated.

Everything is the same from USER_EXTENTS:

SEGMENT_NAME          EXTENT_ID      BYTES
-------------------- ---------- ----------
T                             0     114688
T                             1     114688
T                             2     114688
T                             3     114688
T                             4     114688

5 rows selected.


I also tried to MOVE the table to itself in the same t_100 tablespace:

SQL>alter table t move;

Table altered.

Same five (5) extents:

SEGMENT_NAME          EXTENT_ID      BYTES
-------------------- ---------- ----------
T                             0     114688
T                             1     114688
T                             2     114688
T                             3     114688
T                             4     114688

5 rows selected.


I can CTAS to another table:

SQL>create table t2 tablespace t_100 as select * from t;

Table created.

That creates a one (1) extent table as expected:

SEGMENT_NAME          EXTENT_ID      BYTES
-------------------- ---------- ----------
T2                            0     114688

1 row selected.


I hesitate to use the CTAS as it forces me to recreate all indexes, etc.

So, how do I reduce the amount of allocated space in these tables without losing the data or having to recreate indexes?

Is exp/imp the only way?

Many thanks for your tireless efforts and endless patience with those of us eager to learn but sometimes just don't 'get it'!

Gary
 

Tom Kyte
November 24, 2004 - 1:53 pm UTC

when moving the table WITHOUT STORAGE CLAUSES -- Oracle will take the dictionary information regarding the current

a) initial
b) next
c) minextents
d) pctincrease

and using that, compute "how much space does this thing get on the first go"

So, your 500k LMT will have initial = 500, next = 500, minextents = 1, pctincrease = 0. That says "initial 500k" please....


That is why:

alter table t move tablespace NEW storage ( initial 1k next 1k minextents 1 pctincrease 0 );

or just


alter table t move storage ( initial 1k next 1k minextents 1 pctincrease 0 );

is a pretty neat idea as it nukes the legacy storage clause.


if you've already moved them -- you can alter the table to deallocate unused.

moving tables to another tablespace

Jean Carlos, December 01, 2004 - 3:53 pm UTC

Hi Tom

I'm trying to move tables from a DMT to a LMT using ALTER TABLE table MOVE tablespace. Some ALTER TABLE failed because there are tables with LONG columns; so I tried to use
CREATE TABLE new_table AS subquery...
using TO_LOB in the LONG column. But I get this:

system@RNOD> create table orabanks.me_encuestas_2
2 tablespace bme_tab_lm
3 as (select id_encuesta, codigo_empresa, codigo_agencia,
4 adicionado_por, fecha_adicion, codigo_tipo_encuesta,
5 codigo_encuestador, fecha_encuesta, codigo_persona,
6 to_lob(comentario) comentario, modificador_por, codigo_encuesta
7 fecha_modificacion from orabanks.me_encuestas);
fecha_modificacion from orabanks.me_encuestas)
*
ERROR at line 7:
ORA-01658: unable to create INITIAL extent for segment in tablespace BME_TAB_LM

BME_TAB_LM is a LMT

system@RNOD> select tablespace_name, initial_extent, next_extent, extent_management, allocation_type
2 from dba_tablespaces
3 where tablespace_name = 'BME_TAB_LM';

TABLESPACE_NAME INITIAL_EXTENT NEXT_EXTENT EXTENT_MAN ALLOCATIO
------------------------------ -------------- ----------- ---------- ---------
BME_TAB_LM 65536 LOCAL SYSTEM

So I cannot use this:
system@RNOD> alter tablespace bme_tab_lm default storage (initial 200k);
alter tablespace bme_tab_lm default storage (initial 200k)
*
ERROR at line 1:
ORA-25143: default storage clause is not compatible with allocation policy

What am I missing here????

This is the information about the old tablespace:
select tablespace_name, initial_extent, next_extent, extent_management, allocation_type
2 from dba_tablespaces
3 where tablespace_name = 'BME_TAB';

TABLESPACE_NAME INITIAL_EXTENT NEXT_EXTENT EXTENT_MAN ALLOCATIO
------------------------------ -------------- ----------- ---------- ---------
BME_TAB 12288 12288 LOCAL USER

It is LOCAL because I used the DBMS_SPACE_ADMIN first. Later I read this is not a good way to convert tablespaces, so I'm trying the other way; moving tables to brand new tablespaces.

Thanks!!!

Tom Kyte
December 01, 2004 - 7:48 pm UTC

is there any space left in that tablespace?

ORA-14133 ALTER TABLE MOVE cannot be combined with other operations

Mike, December 10, 2004 - 1:51 pm UTC

I attempted to do:
ALTER TABLE tablename MOVE TABLESPACE newtablespace
STORAGE (INITIAL 1k NEXT 1k MINEXTENTS 1 PCTINCREASE 0);

and got an ORA-14133.
<quote>
ORA-14133 ALTER TABLE MOVE cannot be combined with other operations

Cause: An attempt was made to combine an ALTER TABLE MOVE statement with another operation, such as MODIFY.

Action: Make sure that MOVE is the only operation specified in ALTER TABLE statement.
</quote>

This is in v9.2.0.5

I've got a table that someone built on a 128M tablespace and I'd like to move it to a smaller LMT, since there is really only about one meg of data in it.

I also tried executing an ALTER TABLE tablename DEALLOCATE UNUSED; after moving it to the smaller LMT (4M extents) but it still occupies 128M of space (32 extents of 4M each). The table only has 101,000 records in it.

Any suggestions?

Tom Kyte
December 10, 2004 - 8:03 pm UTC

Full example please -- something special about your table perhaps.

ops$tkyte@ORA10GR1> drop table t;
 
Table dropped.
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> create table t ( x int );
 
Table created.
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> alter table t move tablespace users storage ( initial 1k next 1k minextents 1 pctincrease 0 );
 
Table altered.
 
 

ALTER TABLE tablename DEALLOCATE UNUSED KEEP 1K;

A reader, December 10, 2004 - 3:20 pm UTC

Would this be the appropriate step to take once the table has been moved to the new tablespace?

ALTER TABLE tablename DEALLOCATE UNUSED KEEP 1K;



Tom Kyte
December 10, 2004 - 8:12 pm UTC

not necessary, table would already be "small"

I don't know what's different

Mike, December 13, 2004 - 9:53 am UTC

I tried this three times on Friday and always got the error, but now on Monday, I tried to give you a step by step example and now it works.

I can't explain what I might have done wrong, because I haven't got a clue.

Sorry for bothering you with it.

Mike

CLOB will not move to another tablespace

Amit, January 30, 2005 - 2:09 am UTC

Hi Tom,

In my production database I have a table with a CLOB column. Using import, this table simply will not move to the test database's tablespace, which has a different name than the production tablespace.

Is there a way out ?

Thanks.

Tom Kyte
January 30, 2005 - 9:30 am UTC

precreate the table, import with ignore=y

if you need the DDL because you "lost it", import indexfile=test.sql will get it for you
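A rough sketch of that sequence (file and object names are made up):

# 1) pull the DDL out of the dump file without loading any rows
imp userid=scott/tiger file=exp.dmp indexfile=ddl.sql full=y

# 2) edit ddl.sql (the table DDL is written as REM'd-out lines -- uncomment it),
#    point the CREATE TABLE and its LOB storage clause at the new tablespace, run it

# 3) now load just the data into the precreated table
imp userid=scott/tiger file=exp.dmp ignore=y tables=t_with_clob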

Import only Data

ramana kotamsetti, February 14, 2005 - 12:17 pm UTC

Hi Tom,
I would like to know if there is any way to import only data to avoid the following error.
IMP-00015: following statement failed because the object already exists:

I need to move a table from one tablespace to another due to a space problem.
Mine is Oracle 7.3, so I am going with the export & import option.
As I don't have the privilege to change the user's default tablespace, I would like to know if I can create the table in some other tablespace and then import only the data from the exp dmp file.

Please let me know if this is possible.
Thanks in Advance.


Tom Kyte
February 14, 2005 - 2:43 pm UTC

ignore=y




Not working for me

Venugopal, February 24, 2005 - 8:42 am UTC

Tom,

Your second solution to the original question is not working for me.

Database version : 9.2.0.4.0

I have user A having tables in tablespace X. This is to be imported into user B having def tablespace Y. These are in the same database.

Even after your workaround, it still tries to create objects in tablespace X.

How do I figure out what went wrong?

Thanks,






Tom Kyte
February 24, 2005 - 8:47 am UTC

this is a big page -- which "workaround"?

and is it "successful" in creating the objects in X? if so, that means user B has the rights to create objects in that tablespace and this is to be expected (you would have to take these rights away)

And if the object is a multi-segment object (a table with a LOB, a partitioned table, an IOT with an overflow segment, ....) you have to precreate them as imp will only rewrite the very first tablespace it sees in the ddl statement, not all of them.

-

Venugopal, February 24, 2005 - 8:54 am UTC

Tom,

Thanks for the speedy response.

I was referring to your follow up to the original question (solution 2).

The exported dmp file has the tablespace name hardcoded in it and gives
IMP-00003: ORACLE error 1950 encountered
ORA-01950: no privileges on tablespace 'X'
IMP-00017: following statement failed with ORACLE error 1950:

My problem is that tablespace X is full and a new tablespace Y is available which should be used for the new schema.

I am unable to find a way to do it without making any changes to the tables in user A (which I am not supposed to do)

Thanks,



Tom Kyte
February 24, 2005 - 9:15 am UTC

what was the following statement please?

space header

jim, April 14, 2005 - 5:18 pm UTC

Tom, I moved all objects from one tablespace to another tablespace and then I wrote a script to find all empty tablespaces.

What I did is just query dba_segments, and if there is nothing in there, I consider the tablespace empty.

However, I found some strange rows in dba_segments with segment_type='SPACE HEADER'.

This segment type is not documented -- what is it?



Tom Kyte
April 14, 2005 - 6:04 pm UTC

'SPACE HEADER' segments are the extent bitmap at the top of a locally managed tablespace -- that is what you are seeing.

LOBINDEX & LOBSEGMENT

George, April 26, 2005 - 6:09 pm UTC

Tom,

I just tried the moveall script, but after it ran (successfully), there were LOBINDEX and LOBSEGMENT
segment types left in the old tablespace.

Is there a simple fix for that or do I need to use
the export/import method to get it done?

Thanks, George

Tom Kyte
April 26, 2005 - 8:04 pm UTC

in order to move the lobs, you have to move the table as well....

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:378418239571#16767733894997

sincere apologies for a bad post

George, April 26, 2005 - 6:15 pm UTC

I apologize for not reading the whole message before
asking my question.... I only saw the first message
on my page when I posted.

anyway, you have already answered my question at:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:378418239571

so I hope I have not caused you any effort.

George

Tom Kyte
April 26, 2005 - 8:04 pm UTC

no worries :)

move all objects from one tablespace and then drop it.

Sean, May 31, 2005 - 12:11 pm UTC

Hi Tom,

We want to move all objects in tablespace A to tablespace B, then drop tablespace A.

Here is the command I will use to move tables to different tablespace:

create or replace procedure move_table
is
begin
    for x in ( select table_name
                 from user_tables
                where tablespace_name = 'A' )
    loop
        execute immediate 'alter table ' || x.table_name || ' move tablespace B';
    end loop;
end;
/

Is it true that as long as "select count(*) from dba_tables where tablespace_name = 'A'" returns zero, we can drop tablespace A?


Thanks so much for your help.



Tom Kyte
June 01, 2005 - 7:40 am UTC

only if the tablespace contains nothing but tables.

don't forget to rebuild indexes.
and hopefully B is a LMT with system allocated extents, then I would suggest

alter table t move tablespace b storage( initial 1k next 1k minextents 1 pctincrease 0 );

to get rid of the legacy storage clause.

16k blcok size of system tablespace

Sean, June 01, 2005 - 4:12 pm UTC

Hi Tom,

The db_block_size of our 9204 db is 16k. The block size of the system tablespace is 16k too. Since it is an OLTP db, do I need to change the system tablespace to 8k? If yes, how?

Thanks so much for your help.



Tom Kyte
June 01, 2005 - 5:33 pm UTC

is it working better than good enough?

if not, have you identified the blocksize to be even remotely related to your performance issue?

no, there is no rule that says "thou must use 8k for oltp"

block size for oltp database

Sean, June 01, 2005 - 9:29 pm UTC

Hi Tom,

There is no performance issue currently. We realized that we don't need to use a 32k block size for our tablespaces. During the discussion on this site, you mentioned using an 8k block size for an olap db. See the bottom of that thread:

http://asktom.oracle.com/pls/ask/f?p=4950:61:3041788057388372379::::P61_ID:328917235004#41728487288242

Actually, I don't have a way to prove which block size is best for our database, besides checking row chaining for some big tables. I do have some notes from Metalink, but they only list the pros and cons of different block sizes.

Note:65984.1
Note:34020.1


Thanks so much for your help.


Tom Kyte
June 02, 2005 - 4:21 pm UTC

but that would be "if I were creating my database, what blocksize would I consider"

If I already had it -- unless and until I saw it be the cause of some problem, I would leave it be. Rebuilding a database is not something I'd want to do.

alt, June 02, 2005 - 11:44 am UTC

I want to load data from one database (production) to another database (data warehouse) at a time interval (i.e. monthly). There is no column in the table by which I can tell already-loaded data from new data. Is there any method to find the new versus already-loaded data (other than a trigger)? Can we do this using flashback query?

Tom Kyte
June 02, 2005 - 5:10 pm UTC

monthly is far too large an interval for that.

You can use things like change data capture, streams and whatnot, all documented -- but in 9i and up only. No version given... hmmm :(

moving tables across tablespace

vernon, June 05, 2005 - 10:50 pm UTC

Before I read this topic, I was planning to do exp - imp to move the tables to another tablespace. My concern with using the command 'alter table tablename move tablespace...': can I do this during operation/production hours?

Tom Kyte
June 06, 2005 - 7:16 am UTC

during the move, the table can be READ but not modified.

immediately after the move, the table will have a bunch of invalid indexes that need to be rebuilt before reads/modifications can take place.

The alter table move is considered an offline operation.

In 9i, there is dbms_redefinition which can do online rebuilds.


Either of them would be infinitely superior to export/import, export/import might only be useful in the case of a table with a long/long raw -- but then you are going to be waiting for a long time anyway, since longs/long raws make imp go slow by slow (row by row)
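For a quick check after such a move (a sketch; the table name T is just a placeholder), something like this lists the rebuilds that are still needed:

select 'alter index ' || index_name || ' rebuild;'
  from user_indexes
 where table_name = 'T'
   and status = 'UNUSABLE';

Spool that output and run it to bring the indexes back to a usable state.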

tablespace re-org recovery concerns

mohini, June 09, 2005 - 5:54 pm UTC

Tom,

ORACLE 9.2.0.6 (on Unix)
Reorg tasks:
1. LMT (UNIFORM SIZE) ---> LMT (AUTOALLOCATE)

I am re-arranging almost my whole database..moving to autoallocate..and re-arranging tables and indices as I go.
The plan is to create a dummy tablespace..move in some objects..recreate the tablespace as autoallocate..and move the objects back in...
So, My concerns are:
1. Recovery in case of a media failure:
Do I need to keep taking tablespace level backups as I
move along?
2. You mentioned dbms_redefinition package for on-line move and build.
Can you give an example please?

Thanks.


Tom Kyte
June 09, 2005 - 6:45 pm UTC

why move objects back? just move one way?

1) alter table move generates redo in archivelog mode, unless you say "do not"
2) search site for dbms_redefinition

ILDE, June 22, 2005 - 10:28 am UTC


The moveall.sql script omits the very first table?

DanielD, June 26, 2005 - 3:17 pm UTC

Hello Tom,

I have tested your moveall.sql script against 'HR' schema on Oracle9i (9201). Here is the list of tables owned by HR:

OWNER TABLE_NAME
------------------------------ ----------
HR COUNTRIES
HR DEPARTMENTS
HR EMPLOYEES
HR JOBS
HR JOB_HISTORY
HR LOCATIONS
HR REGIONS

And here is the output from moveall.sql:

alter INDEX COUNTRY_C_ID_PK rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter TABLE DEPARTMENTS move
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX DEPT_ID_PK rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX DEPT_LOCATION_IX rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter TABLE EMPLOYEES move
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX EMP_DEPARTMENT_IX rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX EMP_EMAIL_UK rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX EMP_EMP_ID_PK rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX EMP_JOB_IX rebuild

tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX EMP_MANAGER_IX rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX EMP_NAME_IX rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter TABLE JOBS move
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX JOB_ID_PK rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter TABLE JOB_HISTORY move
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX JHIST_DEPARTMENT_IX rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX JHIST_EMPLOYEE_IX rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX JHIST_EMP_ID_ST_DATE_PK rebuild
tablespace TEST_DATA_LMT

storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX JHIST_JOB_IX rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter TABLE LOCATIONS move
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX LOC_CITY_IX rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX LOC_COUNTRY_IX rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX LOC_ID_PK rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX LOC_STATE_PROVINCE_IX rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter TABLE REGIONS move
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX REG_ID_PK rebuild
tablespace TEST_DATA_LMT
storage ( initial 65536 next 65536

minextents 1 maxextents 4096
pctincrease 50 freelists 1);

Please note that the generated tmp.sql begins with 'alter INDEX COUNTRY_C_ID_PK' and the 'alter TABLE COUNTRIES move' statement is missing completely. I did a copy/paste of the moveall.sql script...

Thanks.

Daniel

Tom Kyte
June 26, 2005 - 4:36 pm UTC

countries is an IOT (index organized table)

there is no table to move in that case. there is no countries table "for real", just an index.

The moveall.sql script omits the very first table?

DanielD, June 26, 2005 - 4:29 pm UTC

I see, COUNTRIES is an IOT.

Moving a table within the same Tablespace

TS, July 07, 2005 - 3:49 pm UTC

Tom,

Assuming I have a table_A that resides on tablespace TS_A,
can I move table_A within tablespace TS_A or should I
move it to a different tablespace TS_B and then move it back
to TS_A?

Thanks for all the help.



Tom Kyte
July 07, 2005 - 5:08 pm UTC

you can move it within the same tablespace; it would clear out all "white space", so it could be used to compress the table.
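For example (a sketch, assuming table_A has a single index table_A_idx):

alter table table_A move;
alter index table_A_idx rebuild;

The move rewrites the table compactly within the same tablespace; the rebuild is needed because the move leaves the index unusable.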

Migration

friend, August 05, 2005 - 12:32 pm UTC

execute dbms_space_admin.tablespace_migrate_to_local('REFIND' );
size 10.8886719GB
Elapsed: 00:00:04.43


execute dbms_space_admin.tablespace_migrate_to_local('IBSINDX' );
Size 53.125GB
00:03:52.13

execute dbms_space_admin.tablespace_migrate_to_local('IBSINDX2' );
size 101.952148GB
Elapsed: 00:06:59.63

execute dbms_space_admin.tablespace_migrate_to_local('REFTBL' );
size 1.953125GB
00:00:02.92

execute dbms_space_admin.tablespace_migrate_to_local('IBSSMALL' );
size 4.8828125 gB
00:00:04.53

execute dbms_space_admin.tablespace_migrate_to_local('IBSMED' );
Size 144.658203 GB
03:55:52.27

Is there any way to reduce the Time for IBSMED tablespace?
Database was open in restricted mode and tablespaces are online.
Please suggest as very soon I am going to make above changes in production

Thanks!

Tom Kyte
August 05, 2005 - 2:21 pm UTC

one might ask "why are you doing this change, what do you hope to gain from it"?

Migration

friend, August 05, 2005 - 12:38 pm UTC

More information on above query

SQL> select count(EXTENT_ID),tablespace_name  from dba_extents group by tablespace_name;

COUNT(EXTENT_ID) TS
---------------- --------------------
           46239 IBSBIG
           75155 IBSINDX
          171274 IBSINDX2
         1548072 IBSMED
            1228 IBSSMALL
             107 RBS
             637 REFIND
             589 REFTBL
             237 SBT_MED
              10 SBT_SMALL
             831 SBT_SMALL_IND
           13225 SYSTEM
           11487 TEMP

Thanks! 

Tom Kyte
August 05, 2005 - 2:21 pm UTC

see above, what is your "goal" here, what are you hoping to achieve by doing this?

I am moving tablespaces to LMT.

abc, August 07, 2005 - 1:27 am UTC


I am moving tablespaces to LMT to avoid further fragmentation

friend, August 07, 2005 - 1:28 am UTC


Tom Kyte
August 07, 2005 - 9:37 am UTC

just set

next = constant
pctincrease = 0


on all of the segments in there, and you are done. moving to a "migrated lmt", doesn't buy you very much -- if you want to avoid future 'fragmentation', just set next = constant uniform size and pctincrease = 0.

If you are serious about cleaning up, create a new, EMPTY, real LMT and move objects in there (can be done online even)
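A sketch of both approaches (segment, file and tablespace names and sizes are placeholders):

-- option 1: just cap the storage parameters on the existing segments
alter table t storage ( next 1m pctincrease 0 );
alter index t_idx storage ( next 1m pctincrease 0 );

-- option 2: create a real LMT and move/rebuild into it
create tablespace new_lmt
  datafile '/u01/oradata/db/new_lmt01.dbf' size 500m
  extent management local uniform size 1m;
alter table t move tablespace new_lmt;
alter index t_idx rebuild tablespace new_lmt;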

reader

A reader, August 26, 2005 - 7:16 am UTC

"alter table move" seems to work very fast. What exactly
goes behind the scene. Is there some type of block level
data transfer goes on bypassing buffer cache ....

Tom Kyte
August 26, 2005 - 8:48 am UTC

it is a direct path operation, yes. read and write to and from disk - not "sql"

moving tables across tablespaces

senthil, August 30, 2005 - 2:23 pm UTC

I have created a global temporary table and tried to move it to a newly created temporary tablespace, but got the message
"ORA-14451: unsupported feature with temporary table".
Can't we move a GTT to a user-created tablespace?

Tom Kyte
August 30, 2005 - 2:46 pm UTC

you don't need to, they allocate their extents in the temporary tablespace of the user using them.

don't be confused if the dictionary says "system", it just doesn't know where the extents will be allocated until a user uses it.

If I have temp set to X
and you have temp set to Y

when I use it, it'll allocate extents in X
when you use it, it'll allocate extents in Y.
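If you want a given user's global temporary table extents to go somewhere specific, point that user at a different temporary tablespace (a sketch; the user and tablespace names are placeholders):

alter user scott temporary tablespace temp_big;

From then on, SCOTT's use of the GTT allocates its extents in TEMP_BIG.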

BLOB and LONG RAW

reader, September 21, 2005 - 11:35 am UTC

I know you have mentioned for LONG RAW columns, do exp/imp. But can I use alter table move with BLOB columns? I'm using 9i, is there any other method apart from exp/imp to move tablespaces for BLOB's or LONG RAW's?


Tom Kyte
September 21, 2005 - 7:37 pm UTC

for blobs, yes.

for long raws, no.


ops$tkyte@ORA9IR2> create table t ( x int, y blob );

Table created.

ops$tkyte@ORA9IR2> @dbls
Wrote file /tmp/xtmpx.sql

OBJECT_TYPE  OBJECT_NAME                    S TABLESPACE_NAME
------------ ------------------------------ - ------------------------------
LOB          SYS_LOB0000033227C00002$$        USERS

TABLE        T                                USERS


ops$tkyte@ORA9IR2> alter table t move lob(y) store as ( tablespace tools );

Table altered.

ops$tkyte@ORA9IR2> @dbls
Wrote file /tmp/xtmpx.sql

OBJECT_TYPE  OBJECT_NAME                    S TABLESPACE_NAME
------------ ------------------------------ - ------------------------------
LOB          SYS_LOB0000033227C00002$$        TOOLS

TABLE        T                                USERS

(remember, index rebuild after table moves...) 

Playing with rows

Sudershan, December 15, 2005 - 11:18 am UTC

Oracle 9i
Tom,
I was looking at your followup:


Followup:
looks good to me -- just a variation on the approach above.

If you got "fancy", you would just run a single query for the tables and use
CASE to decide the tablespace.

select 'ALTER TABLE ' || ds.segment_name || ' MOVE TABLESPACE ' ||
case when dt.blocks*vp.value <= 100000 then 'TS_SMALL'
when dt.blocks*vp.value <= NNNNNNN then 'TS_MED'
else 'TS_BIG'
end || ';'
.....

dt.blocks*vp.value plays with size in bytes..


Question: How do we translate no of rows into that..
Like when a developer sends a script saying..this table will have about 50000 rows..and this table will have a million rows...how do you size that for a tablespace..
Thanks.



Tom Kyte
December 15, 2005 - 11:25 am UTC

you cannot - not given that information.


a table with 50,000 rows might be much larger than a table with 1,000,000 rows.


the only even close to accurate method I know is to create the table, and load it with representative data (some percentage - between 0.00001 and 10% depending on the real volume of data - the more data you expect, the less % you need to load). Gather stats on it, and see how big it is.

But even that does not account for "wastage" that will accumulate over time if the table is updated/deleted from over time (whitespace that will come and go...)
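A sketch of that approach (the source table name and the sample size are placeholders):

create table t_sizing as
  select * from source_table where rownum <= 50000;   -- load a representative sample

exec dbms_stats.gather_table_stats( user, 'T_SIZING' );

select blocks, avg_row_len from user_tables   where table_name   = 'T_SIZING';
select bytes               from user_segments where segment_name = 'T_SIZING';

Then scale the observed size up by the ratio of expected rows to sampled rows.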

change owner

A reviewer, December 19, 2005 - 4:34 am UTC

hi tom
can we change the owner of an object? That is, if the SCOTT user has created a table T, the owner of T will be SCOTT. Now I want to change the owner of T to APPUSER.



Tom Kyte
December 19, 2005 - 7:36 am UTC

you cannot, you have to COPY the data. there is no change owner.

Size of segment

Sudershan, December 20, 2005 - 11:06 am UTC

Oracle 9.2

Tom,

In one of your comments above:

Followup:
looks good to me -- just a variation on the approach above.

If you got "fancy", you would just run a single query for the tables and use
CASE to decide the tablespace.

select 'ALTER TABLE ' || ds.segment_name || ' MOVE TABLESPACE ' ||
case when dt.blocks*vp.value <= 100000 then 'TS_SMALL'
when dt.blocks*vp.value <= NNNNNNN then 'TS_MED'
else 'TS_BIG'
end || ';'
.....


where dt = dba_tables
and vp = v$parameter

Can dt.blocks*vp.value be switched to dba_segments.bytes?
What is the difference between the two?

Thanks



Tom Kyte
December 20, 2005 - 11:16 am UTC

you can make it do whatever you like? They are just numbers and thresholds - you would/could use your own "numbers" (bytes) and thresholds.
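For example, a variant based on dba_segments.bytes (the thresholds, owner and tablespace names are just placeholders):

select 'ALTER TABLE ' || ds.owner || '.' || ds.segment_name ||
       ' MOVE TABLESPACE ' ||
       case when ds.bytes <= 100*1024*1024  then 'TS_SMALL'
            when ds.bytes <= 1024*1024*1024 then 'TS_MED'
            else 'TS_BIG'
       end || ';'
  from dba_segments ds
 where ds.segment_type = 'TABLE'
   and ds.owner = 'SCOTT';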

YL Chan, January 06, 2006 - 4:29 am UTC

Dear Tom,

The script moveall.sql is very useful for DBA, thanks!

Currently, I am using Oracle Financials 11i, and it has a schema called "AP".

Right now I want to move all the "AP" tables from their original tablespace to another tablespace, but I am not sure whether moveall.sql works on this schema or not, as the owner "AP" has so many segment types, such as lobindex, lobsegment and table partition.

Does the script moveall.sql cater for these kinds of segment types?

Thanks a lot!

Tom Kyte
January 06, 2006 - 1:52 pm UTC

You had better work with support on that. To see if you are even allowed to move them from one tablespace to another.


moveall deals with regular tables and indexes only (see the where clause)

multiple tablespaces

Asim Naveed, January 15, 2006 - 11:59 pm UTC

If I have only one physical hard drive on my system, does making separate tablespaces
for different tables/indexes improve performance? In my opinion, because there
is only one hard drive, making separate tablespaces or
a single tablespace will make no difference in performance.

Thanks

Tom Kyte
January 16, 2006 - 9:31 am UTC

tablespaces are not about performance. indexes and tables don't need to be segregated - the goal is even IO over all available devices. I can do that with one tablespace having many datafiles from many devices (striping - with or without RAID)

tablespaces are about administrative ease. Use tablespaces to make your life better.



Why not segregate tables and indexes

Asim Naveed, January 17, 2006 - 5:12 am UTC

OK, i agree that the goal is even I/O over all
physical hard drives.

But you said tables and indexes do not need to be segregated. In the case of only
one physical hard disk I agree with this, but in the case of multiple hard disks,
many people believe that
storing tables and indexes on different hard drives will
increase performance. To achieve this they make one
tablespace for only tables, this tablespace will be based
on datafiles from physical disk1, and for indexes they
create another tablespace which will be based on datafiles
from physical disk2.

But if I have ONE tablespace which is based on datafiles
from TWO different physical hard drives. How can I be
sure that Oracle stores table data on physical disk1 and index data on physical disk2?


Tom Kyte
January 17, 2006 - 8:59 am UTC

there is a difference between "believing something" and "something being true"

Ask them why they believe this, where is the math?


To achieve the last goal - make sure your segments have lots of extents (uh-oh, another myth I see getting in the way). Oracle will tend to round robin allocate the extents on all available datafiles - achieving "poor mans striping" without raid...

We could have been doing that for years, if only the myth of "one extent" and the myth of "separate index from data" hadn't gotten in the way. Instead, we had to invent software and hardware raid so we could "separate index from data" by tablespace only to REMIX THEM ALL back on disk again with stripes....
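A sketch of that "poor man's striping" (file names, sizes and the uniform extent size are placeholders):

create tablespace data_ts
  datafile '/disk1/oradata/db/data01.dbf' size 1024m,
           '/disk2/oradata/db/data02.dbf' size 1024m,
           '/disk3/oradata/db/data03.dbf' size 1024m,
           '/disk4/oradata/db/data04.dbf' size 1024m
  extent management local uniform size 8m;

A segment created in DATA_TS with many extents will tend to have those extents allocated round robin across the four files, and hence across the four disks.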

tables and indexes do not need to be segregated?

Chester, January 26, 2006 - 10:12 am UTC

Hi,
I'd like to follow up on the previous question regarding
"tables and indexes do not need to be segregated".

I just want to make sure I clearly understand this. Are you saying that even with multiple devices there is no need to separate data and index tablespaces so that data and indexes use different devices to increase I/O throughput?

Why? Is this because of RAID?

also: (a dumb question)
you said "tablespaces are about administrative ease. Use tablespaces to make your life better."

How do tablespaces make administration easier if they are not about I/O throughput?

ps. I am a great fan of your book, Expert One-on-One Oracle from Wrox.

Great book !!!

Thanks






Tom Kyte
January 26, 2006 - 10:42 am UTC

the goal is, has been, will be (as far as I can tell) EVEN IO

use all of your devices, evenly, no hot spots.


How could I achieve that with 4 physical disks? I could create one tablespace, with datafiles on the 4 disks. I could create all of my segments with lots of extents (dozens, hundreds...). Oracle would tend to round robin the extents on the disks for me, so my table/index gets striped. RAID before RAID was invented....

You could use raid striping.

whatever.


Putting indexes on disk1 and disk2, tables on disk3 and disk4 would just split IO between the two sets by "index io goes here", "table io goes there". would that be "even"? Maybe yes, maybe no, maybe....



How to use tablespaces to make your life easier?

Maybe you don't need/want to backup indexes (you can recreate them upon disk failure). put them into their own tablespace.

Maybe you need to point in time recover an application independent of some other application. Put each application into its own tablespace (or set of tablespaces). Now you can.

and so on.

move objects across tablespaces

A reader, January 27, 2006 - 6:36 am UTC

Tom,

After moving indexes and tables from DMT to LMT, I alter schema to use LMT.

How can I move other objects e.g procedures, functions, packages, synonyms and constraints to LMT ?

Tom Kyte
January 27, 2006 - 8:42 am UTC

you don't "move" procedures, they live in SYSTEM forever. They are not "data" like a table or index is "data", they are meta-data/code. They are part of the data dictionary.

Tablespaces

Hassan Ayub, January 30, 2006 - 8:48 am UTC

Hi Tom,

Just went through the whole thread and I must say that it was really informative. I tried the given method and apparently it worked fine, but when I checked the data dictionary view DBA_USERS, it gave me the name of the new tablespace under the default_tablespace field, and when I queried the data dictionary view DBA_TABLES, it gave me SYSTEM as the tablespace under the tablespace_name field. Any ideas?
Which specific view are we supposed to follow/believe?

Tom Kyte
January 30, 2006 - 3:01 pm UTC

well, not sure where you are going with this


dba_users would show the default tablespace any newly created segment by a given user would use.

dba_tables shows the tablespace the segment is currently occupying.

I would not expect them to be "the same" in general.
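For example (a sketch; the SCOTT schema is just a placeholder):

select username, default_tablespace
  from dba_users  where username = 'SCOTT';

select table_name, tablespace_name
  from dba_tables where owner = 'SCOTT';

The first tells you where SCOTT's newly created segments will go by default; the second tells you where each existing table actually lives.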

Copying Partition accross database

jignesh, February 04, 2006 - 7:13 am UTC

Hi Tom,

We are using 9iR2 and are planning to upgrade to 10g in near future.

We are building a large datawarehouse and the step involved is to take the daily load from OLTP database and load into DSS system.
i.e. Take a copy of the partition (i.e. the last day's data) and insert the data across databases into another schema. The table in question will have 60 million records per day. The table is partitioned by day.

The options that I can think of at the moment are
1. Transportable tablespace
2. Export & import using Query clause

However, I believe that the first step of transportable tablespace is to take the tablespace offline, which is not possible in this case as we do not want to touch the OLTP database.

Do you have any better solution in mind?

Tom Kyte
February 06, 2006 - 12:11 am UTC

10gR2 has an online transportable tablespace .

but you cannot transport a single partition of a partitioned table.


You seem to have precluded database links?

Copying Partition accross database

Jignesh, February 06, 2006 - 6:59 am UTC

Yes, a database link would be the last preferred option.

If we talk about 10gR2, definitely I will start thinking about Data Pump as an option with the parallel clause. But if I use the Data Pump API to do this exercise then I will be forced to use a database link. Is there any major difference between
1. Exporting & Importing using Datapump via Net8 &
2. Exporting & importing using Datapump API via Dblink?

Tom Kyte
February 07, 2006 - 12:40 am UTC

a dblink is "net".

NETWORK_LINK in DataPump

Jignesh, February 07, 2006 - 5:47 am UTC

I was reading New features of 10g. Following is the extract from SYBEX book :

The NETWORK_LINK enables the network mode import using a database link.

"The network mode import is different from using SQL*Net to perform the import: impdp username/password@database."

Looking at your feedback, it looks that there is no difference. Am I right?

Tom Kyte
February 07, 2006 - 6:13 am UTC

they both use "net"

a database link uses "sqlnet"
tns connections use "sqlnet"

they all use sqlnet

I agree that a dblink is different from a tns-connection (dblink lets database A talk to database B), you can do different things.


impdp u/p@db

says "hey, local client program that is IMPDP, please connect to the database 'db'"

if from the database DB you wanted to then copy to/from DB_X, you would use a database link to get database DB to talk to DB_X
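A sketch of that (the connect string, link, directory and table names are placeholders):

impdp scott/tiger@db network_link=db_x_link tables=big_table directory=dpump_dir logfile=imp_big_table.log

With NETWORK_LINK the rows are pulled across the database link directly, so no dump file is written on either side.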

Rebuilding of indexes

Espen Rydningen, February 07, 2006 - 9:05 am UTC

Hi Tom,
Thank you for an excellent and most helpful solution. Just what I was looking for.

But for a bit further clarification, you responded to "MS from USA", that when you move tables, you need to rebuild the indexes. That makes sense, but if it is the indexes you move to another tablespace, would you then have to rebuild the index?

Thanks.

Tom Kyte
February 08, 2006 - 1:02 am UTC

in order to move the indexes, you are rebuilding them?

Not sure what you mean by "if it is
the indexes you move to another tablespace, would you then have to rebuild the
index?" since to "move" you "rebuild"

Database link

Jignesh, February 07, 2006 - 6:13 pm UTC

Thanks for your feedback Tom. I hope I will find a better solution soon that avoids transferring millions of rows on a daily basis and the associated network traffic.

Need a little help with tablespaces

Scott Horowitz, March 13, 2006 - 12:39 pm UTC

Hey Tom,
Our previous DBA has left the company, and I am taking up the bulk of his work. We have a tablespace DOCPDF, where we store images for our clients (in PDF form). This tablespace has reached its max, and cannot extend anymore. Our USERS tablespace has about 200MB of data, but is extended to about 10G (this is due to a large table created in that tablespace that should never have been). I'm doing a test on our development database of recreating the tablespace with a smaller limit. I moved all objects (using OEM) from USERS to USERS2 (which I created on the test). I moved all users as well. After I had done that, there is still about 50MB in use in that tablespace. I want to know where else I should look for data to move. I want to take this tablespace offline and then delete it to free up some room.

If you can point me in the right direction, I'd appreciate it.

Tom Kyte
March 13, 2006 - 9:30 pm UTC

what do you mean "this tablespace has reached it's max" - I don't know what that could possibly mean?

From Before

Scott Horowitz, March 14, 2006 - 9:43 am UTC

Our ASM drive is at capacity. And, all datafiles are using the entire ASM drive right now. I created a new datafile and tablespace USERS2 on the file system, temporarily, in order to recreate the USERS tablespace with a smaller size.

In a nutshell, USERS has 200MB of data, and the tablespace size is 10757MB. We tried using maxshrink, but were unable to reduce the size. Metalink told us we had to recreate the tablespace. I was unable to do this on ASM (no room), and had to do it on the file system. Then I was going to drop the old tablespace and datafile(on ASM), and then move the new one back to the ASM drive.

I just want to know where else to look for data on the tablespace to move

I'm sorry, if I'm not too detailed, I'm still learning a lot of the DBA stuff.



Is this a little better?

Tom Kyte
March 15, 2006 - 8:46 am UTC

to find what is in a tablespace, query dba_segments where tablespace_name = 'x'
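For example (a sketch; the tablespace name is a placeholder):

select owner, segment_name, segment_type, bytes
  from dba_segments
 where tablespace_name = 'USERS'
 order by bytes desc;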

what is the difference between alter table..move.. and dbms_redefinition

Orakle_lover, March 24, 2006 - 4:31 pm UTC

I am moving a database from a cooked file system to a raw filesystem. I want to do this without any outage. And this is a database with a standby setup.
Here is my plan:
1. Create the LVs on both primary and standby.
2. Create the tablespaces on the primary, hoping this will automatically take effect at the standby.
3.1) Use dbms_redefinition to move the tables to the new tablespaces (which are on the raw system).
or
3.2) Use the alter table .. move statement.

There are nearly 12 schemas to move, with different object types.

For the 3rd step, what is your best recommendation?
I would like to use dbms_redefinition, but for each and every table I need to create the indexes, constraints, etc. That will be a big process, I think.

If I use alter table .. move, this needs to be done during the outage only, right? Because I think all the indexes will be invalid after the move.

What do you recommend?
Give me your options, so that I can pick the one best suited to my case.

Thank you sir.
I hope this will be helpful for all the DBAs.

Thanks

Tom Kyte
March 24, 2006 - 5:47 pm UTC

1) lv's?


but why are you moving to raw? what is the goal?

lv -> Logical volumes

Orakle_lover, March 24, 2006 - 6:27 pm UTC

I wanted to move the database from jfs file system to raw filesystem

Tom Kyte
March 24, 2006 - 8:21 pm UTC

why?

moving cooked to raw

A reader, March 27, 2006 - 9:49 am UTC

I just wanted to make my database identical to all the other databases in my environment.

All of the databases are in Raw filesystem.


Tom Kyte
March 27, 2006 - 10:17 am UTC

be careful, you might kill the performance of this database moving from cooked (CACHED BY OS) to raw (not cached).


if downtime must be zero
then
you would create new tablespaces on the new devices...
you would use dbms_redefinition to build the objects on the new devices
else
you can just move the files, support notes exist for this but it is OFFLINE
end if

Heed Tom's advice!!!

A reader, March 27, 2006 - 1:24 pm UTC

Reader from Texas:

Tom's right on the money when he said:

"be careful, you might kill the performance of this database moving from cooked (CACHED BY OS) to raw (not cached)."

I am currently experiencing awful I/O performance on my production instance when compared to my test instance. All stats point to slow I/O. The main difference between production and test: Test is on a cooked file system, production on raw devices. So be very careful!!!

So I ask, why would anyone prefer raw devices over cooked file systems if performance seems to be better on cooked?

Hopefully I will not start a "holy war"

Thanks.



Tom Kyte
March 27, 2006 - 3:23 pm UTC

all it means is on your raw system - your buffer cache is too small.

On the cooked file system - you were using a secondary SGA - the OS file system cache. On the RAW system - just give the memory to Oracle and we'll cache the blocks (probably more efficiently than the file system would)

raw devices

Orakle_lover, March 28, 2006 - 9:35 am UTC

Thanks tom,

Almost all of the databases on my workstation are running on raw devices only -- thanks to previous DBAs and my lead.

I moved this database to raw devices too and it is working great.

But I didn't understand:
how to check the cache size in the filesystem,
and
how much memory (for caching blocks) is allocated to raw devices.


Tom Kyte
March 28, 2006 - 3:56 pm UTC

any memory left over - the OS typically will use to cache the file system.

so it is not a fixed size, it is whatever happens to be left over.

Any leftover memory?

Orakle_lover, March 28, 2006 - 4:24 pm UTC

How can I find the current memory usage at the OS level?
The OS is AIX 5.2.

At least for a given point in time, if we know how the memory is being used, then we can know how much memory is free (for caching at the OS level)....



Tom Kyte
March 28, 2006 - 8:14 pm UTC

vmstat perhaps.

Alexander, April 05, 2006 - 11:26 am UTC

Tom,

If you had a database that was around 35-40 gb of data, would you recommend storing that in one tablespace? Does it depend? If so, on what? My only concerns would be performance related, but I don't think that would be an issue because of how LMTs work but I wanted your advice, thanks.

Tom Kyte
April 06, 2006 - 9:09 am UTC

it would have nothing to do with performance since

- tablespaces may have 1 or more datafiles, and each file can be on a different device

- datafiles themselves are typically striped over many physical devices

you don't really use tablespaces to tune, you use them to organize and make your life easier/better.


So, if it makes SENSE to have a 40g tablespace - no problem. But there are organizational reasons you might want to have more than one. for example, you might break out indexes from tables - and skip backing up indexes, since you can just recreate them whenever. Or you might put the 20g audit trail table into its own tablespace - leaving the 20g of real data in another - so you can "transport" the real data but leave the audit trail stuff behind.

any updates to the original questions?

A reader, April 07, 2006 - 1:26 am UTC

Here is the original question:

What is the best way to move tables from one
tablespace to another tablespace in the same instance (10gR2)?


Tom Kyte
April 08, 2006 - 8:39 am UTC

define "best way" for me.


the "best way" depends on your needs.


need to be up and running and allowing insert/update/delete/select? dbms_redefinition is the likely answer (9i and above)

allowed to be down during the move? alter table T move, alter index I rebuild

Why can't see unusable status in data dictionary while index bacomes unusable on partitioned table

Pauline, April 15, 2006 - 9:59 am UTC

Tom,
During the process of moving a table to another tablespace, we know the indexes on the table become unusable. If it is a regular table,
we may see the status in *_indexes as 'UNUSABLE'. But I just
noticed that if it is a partitioned table, I always see a status
of something like 'N/A'. Why can't we see the 'UNUSABLE' status when the index on the partitioned table is really unusable?
Please see below:

SQL> select index_name,column_name from user_ind_columns where index_name='XIE4BID_FACTS_DSS';

INDEX_NAME
------------------------------
COLUMN_NAME
--------------------------------------------------------------------------------
XIE4BID_FACTS_DSS
LOT_ID

SQL> select lot_id from BID_FACTS_DSS where rownum<11;
select lot_id from BID_FACTS_DSS where rownum<11
*
ERROR at line 1:
ORA-01502: index 'DSS.XIE4BID_FACTS_DSS' or partition of such index is in unusable state

SQL> select index_name,status from user_indexes where status<>'VALID';

INDEX_NAME                     STATUS
------------------------------ --------
XIE10BID_FACTS_DSS             N/A
XIE11BID_FACTS_DSS             N/A
XIE12BID_FACTS_DSS             N/A
XIE13BID_FACTS_DSS             N/A
XIE14BID_FACTS_DSS             N/A
XIE15BID_FACTS_DSS             N/A
XIE16BID_FACTS_DSS             N/A
XIE17BID_FACTS_DSS             N/A
XIE18BID_FACTS_DSS             N/A
XIE19BID_FACTS_DSS             N/A
XIE1BID_FACTS_DSS              N/A

INDEX_NAME                     STATUS
------------------------------ --------
XIE20BID_FACTS_DSS             N/A
XIE21BID_FACTS_DSS             N/A
XIE22BID_FACTS_DSS             N/A
XIE23BID_FACTS_DSS             N/A
XIE2BID_FACTS_DSS              N/A
XIE3BID_FACTS_DSS              N/A
XIE4BID_FACTS_DSS              N/A
XIE5BID_FACTS_DSS              N/A
XIE6BID_FACTS_DSS              N/A

20 rows selected.


Is any dictionary view showing UNUSABLE status for index on partitioned table? 

Thanks in advance. 

Tom Kyte
April 15, 2006 - 1:16 pm UTC

because for a partitioned index, the "top level" index isn't an index at all - but a collection of smaller indexes in the partitions.

You can have a partitioned index with 2 partitions - partition 1 is valid, partition 2 is unusable.

Now, tell me, what is the "status" of the "index" at the top level? The status is "sort of partially usable" - it is NOT valid, it is not unusable - it is "maybe it is usable, maybe not - depends on the query you ask of me"

You have to look at the individual segments to see if the individual partition is valid or not.

Pauline, April 15, 2006 - 4:57 pm UTC

Which view should I look at for the individual segments? I queried user_ind_partitions, and still can't find it.

SQL> select INDEX_NAME,PARTITION_NAME,STATUS  from user_ind_partitions where STATUS ='UNUSABLE';

no rows selected

SQL> select a.segment_name,b.status from user_segments a,user_indexes b
  2      where a.SEGMENT_NAME =b.index_name and a.SEGMENT_NAME ='XIE4BID_FACTS_DSS'
   and b.status='UNUSABLE'  3  
  4  /

no rows selected

Thanks.

 

Tom Kyte
April 15, 2006 - 5:06 pm UTC

so, you have no unusable index partitions.

I do:

ops$tkyte@ORA10GR2> CREATE TABLE t
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(25)
  6  )
  7  PARTITION BY RANGE (dt)
  8  -- subpartition by hash(x)
  9  (
 10    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 11    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 12    PARTITION junk VALUES LESS THAN (MAXVALUE)
 13  )
 14  /

Table created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create index t_idx on t(x) local;

Index created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> alter index t_idx modify partition part1 unusable;

Index altered.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select partition_name, status from user_ind_partitions where index_name = 'T_IDX';

PARTITION_NAME                 STATUS
------------------------------ --------
JUNK                           USABLE
PART1                          UNUSABLE
PART2                          USABLE


(and don't forget, if you have SUBPARTITIONS, you need to look at a lower view yet)

ops$tkyte@ORA10GR2> CREATE TABLE t
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(25)
  6  )
  7  PARTITION BY RANGE (dt)
  8  subpartition by hash(x)
  9  (
 10    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 11    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 12    PARTITION junk VALUES LESS THAN (MAXVALUE)
 13  )
 14  /

Table created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create index t_idx on t(x) local;

Index created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> alter index t_idx modify partition part1 unusable;

Index altered.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select partition_name, status from user_ind_partitions where index_name = 'T_IDX';

PARTITION_NAME                 STATUS
------------------------------ --------
PART2                          N/A
PART1                          N/A
JUNK                           N/A

ops$tkyte@ORA10GR2> select partition_name, status from USER_IND_SUBPARTITIONS where index_name = 'T_IDX';

PARTITION_NAME                 STATUS
------------------------------ --------
PART1                          UNUSABLE
PART2                          USABLE
JUNK                           USABLE

 

excellent answer

Pauline, April 15, 2006 - 5:34 pm UTC

Tom,
Yes, you are right. This table has subpartitions and I should also look at each subpartition. Now I query:

SQL>  select count(*) from  user_segments where segment_name='XIE4BID_FACTS_DSS';

  COUNT(*)
----------
       522
SQL> select partition_name, status from USER_IND_SUBPARTITIONS 
  2  where index_name = 'XIE4BID_FACTS_DSS';

.......

PARTITION_NAME                 STATUS
------------------------------ --------
SYS_P334                       UNUSABLE
SYS_P334                       UNUSABLE
SYS_P334                       USABLE
SYS_P334                       USABLE
SYS_P335                       UNUSABLE
SYS_P335                       USABLE
SYS_P335                       UNUSABLE
SYS_P335                       UNUSABLE
SYS_P335                       USABLE
SYS_P335                       USABLE
SYS_P336                       UNUSABLE

PARTITION_NAME                 STATUS
------------------------------ --------
SYS_P336                       USABLE
SYS_P336                       UNUSABLE
SYS_P336                       UNUSABLE
SYS_P336                       USABLE
SYS_P336                       USABLE

522 rows selected.

I see many 'UNUSABLE' on subpartitions as you said this index is partially unusable.

Thanks sooooo much for your help.



 

Default value and check constraints for CTAS

A reader, May 11, 2006 - 11:21 am UTC

Tom,

If I use CTAS to create a table T2 from T1, T2 loses the "default value" defined for columns in T1. Also, the check constraints do not get copied across. Is there a way to get the default values and check constraints copied across? If not, what is the easiest way to do this? I am using 10g R2.

Thanks


Tom Kyte
May 11, 2006 - 7:50 pm UTC

Well, this sounds like something that would be done in a more controlled environment (eg: we create tables in support of a schema - not just "because").

I'd be going back to my DDL.

If I didn't have the DDL, I'd be looking at dbms_metadata to get it.

How do we get check constraints using dbms_metadata

A reader, May 15, 2006 - 12:47 pm UTC

Tom,

1. So, how do we get the check constraint DDL using DBMS_METADATA? Please provide an example script.

2. How do we get an ALTER script for DEFAULT value for the columns of a table?

Thanks once again.


Tom Kyte
May 15, 2006 - 3:25 pm UTC

#2 - I don't believe you do, they come out in the CREATE TABLE statement.


#1
http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_metada.htm#sthref4278

ops$tkyte@ORA10GR2> create table t
  2  (x int default 10 constraint x_check check (x>5) ,
  3   y int default -1 constraint y_check check (y<0)
  4  )
  5  /

Table created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select dbms_metadata.get_dependent_ddl('CONSTRAINT','T') from dual;

DBMS_METADATA.GET_DEPENDENT_DDL('CONSTRAINT','T')
--------------------------------------------------------------------------------

  ALTER TABLE "OPS$TKYTE"."T" ADD CONSTRAINT "X_CHECK" CHECK (x>5) ENABLE

  ALTER TABLE "OPS$TKYTE"."T" ADD CONSTRAINT "Y_CHECK" CHECK (y<0) ENABLE

 

SPACE MANAGEMENT

A reader, May 23, 2006 - 7:39 am UTC

Normally when we move a huge table from
one tablespace to another, the source tablespace
shows more free space and the target tablespace shows
less free space immediately after the completion of
the MOVE.

If I ALTER TABLESPACE src READ ONLY and then
MOVE a huge table from the src tablespace to the target tablespace,
the target tablespace shows less free space immediately
after the completion of the operation, but the source tablespace's free space does not change. OK, that's because
we have made it read only, but even when I
ALTER TABLESPACE src READ WRITE and then look at
the free space in the src tablespace, it does not
change?

When/how will it release the space that was freed in the tablespace
while it was READ ONLY?



Tom Kyte
May 23, 2006 - 7:44 am UTC

they will be cleaned up by SMON over time, they were converted into temporary segments.



big_table@ORA10GR2> select segment_name, segment_type from dba_segments where tablespace_name = 'MANUAL';

SEGMENT_NAME SEGMENT_TYPE
------------------------------ ------------------
BIG_TABLE TABLE
BIG_TABLE_OWNERS TABLE
BIG_TABLE_PK INDEX

big_table@ORA10GR2> alter tablespace manual read only;

Tablespace altered.

big_table@ORA10GR2> create tablespace big_table;

Tablespace created.

big_table@ORA10GR2> select sum(bytes) from dba_free_space where tablespace_name = 'MANUAL';

SUM(BYTES)
----------
65929216

big_table@ORA10GR2> alter table big_table move tablespace big_table;

Table altered.

big_table@ORA10GR2> alter index big_table_pk rebuild tablespace big_table;

Index altered.

big_table@ORA10GR2> select sum(bytes) from dba_free_space where tablespace_name = 'MANUAL';

SUM(BYTES)
----------
65929216

big_table@ORA10GR2> select segment_name, segment_type from dba_segments where tablespace_name = 'MANUAL';

SEGMENT_NAME SEGMENT_TYPE
------------------------------ ------------------
BIG_TABLE_OWNERS TABLE
7.17 TEMPORARY
7.3465 TEMPORARY



move tablespace

BT, June 07, 2006 - 6:30 am UTC


move tablespace

BT, June 07, 2006 - 6:32 am UTC

very helpful

Moving tables to new tablespace

FS, June 22, 2006 - 12:47 pm UTC

Good info

moving table to new tablespace

john sisson, July 07, 2006 - 2:43 pm UTC

I had to work to make this script go on 9i - here is a modified version - thank you for the tool!
==
set echo off

accept source_TS char prompt 'Enter source tablespace name [nxadvertising] > ' default NXADVERTISING
accept target_TS char prompt 'Enter target tablespace name [nxadv_data] > ' default NXADV_DATA
accept index_TS char prompt 'Enter index tablespace name [nxadv_index] > ' default NXADV_INDEX


column order_col1 noprint
column order_col2 noprint

set linesize 300
set heading off
set verify off
set feedback off
set echo off

spool tmp.sql

select decode( segment_type, 'TABLE', segment_name, table_name ) order_col1,
decode( segment_type, 'TABLE', 1, 2 ) order_col2,
'alter ' || segment_type || ' ' || segment_name
|| decode( segment_type, 'TABLE', ' move ', ' rebuild ' )
|| chr(10) ||
' tablespace '
|| decode( segment_type, 'TABLE', '&target_TS', '&index_TS') || ' '
|| chr(10) ||
' storage ( initial ' || initial_extent
|| decode( next_extent, NULL, ' ', ' next ' || next_extent )
|| chr(10) ||
' minextents ' || decode( min_extents , NULL, '1', min_extents )
|| ' maxextents ' || decode( max_extents , NULL, '2147483645', max_extents )
|| chr(10) ||
' ' || decode( pct_increase, NULL, ' ', ' pctincrease ' || pct_increase )
|| decode( freelists, NULL, ' ', ' freelists ' || freelists )
|| ');'
from user_segments,
(select table_name, index_name from user_indexes )
where segment_type in ( 'TABLE', 'INDEX' )
and tablespace_name = '&source_TS'
and segment_name = index_name (+)
order by 1, 2
/

spool off

set heading on
set verify on
set feedback on
set echo on
set linesize 100
==

Excellent

Rajeev, July 25, 2006 - 5:29 am UTC

Genius Tom,

Please let me know following questions :

1. Can we use the "alter TABLE ACCOUNT move tablespace users" command online? Or should I ask for downtime?

2. I will move this table to a new tablespace and then move the table back to the original tablespace. Can I do that? Will it help in reclaiming the space? If yes, should I rebuild the indexes again?

As I am going to do this in Production database so want to be careful.

Please let me hear from you.

Regards, Rajeev

Tom Kyte
July 25, 2006 - 11:04 am UTC

1) it would be offline, to do an online reorganization - you could use dbms_redefinition

2) why? just move it in the same tablespace. ALTER TABLE T MOVE;

if you move the table, you better rebuild the indexes as they will all be unusable after the move!

Excellent

Rajeev, July 26, 2006 - 5:25 am UTC

Tom,

As always million times thanks to your bullet speed detailed reply.

Please see below, the status of the table and tablespace :

Free Blocks.............................137617
Total Blocks............................237335
Total Bytes.............................1944248320
Unused Blocks...........................0
Unused Bytes............................0
Last Used Ext FileId....................12
Last Used Ext BlockId...................249632
Last Used Block.........................5

Tablespace Size_Mb Used_Mb Free_Mb Used_%
----------- --------- --------- --------- ---------
RTME_LB 1999.0 1951.5 47.5 97.6

1. As I don't have enough space in the same tablespace, I am moving it to a different tablespace and then back. Because, as I understand it, if I move the table within the same tablespace then it will require around double the space while the move is taking place... please correct me if I am wrong?

2. Also, do I need to check whether the table I am moving is partitioned, etc.?

3. I have used your script moveall.sql and have generated the list of statements and the indexes I need to rebuild. Please let me know if I need to look into anything else before moving?

4. While rebuilding the indexes, do I need to specify the tablespace clause? As finally I am moving the table back into the original tablespace!

I will be highly thankful to you if you could answer above mentioned questions.

Please let me hear from you.

Regards,
Rajeev

Tom Kyte
July 26, 2006 - 10:54 am UTC

1) that is correct
2) only if you want to. if it is partitioned and you "move it", it'll tell you to move the partitions instead. but, actually, one would logically assume that if you are tasked with moving a table like this - you would have rather intimate knowledge of the physical structure of the table before doing so (eg: you shouldn't have to ask if it is partitioned, you would already know that information)

3) if you are asking questions like this, I suggest this:

DO NOT MOVE IT, just don't touch it, it looks fine as it is.

4) only if you want. but see point #3 and take it to heart. perhaps this is not the wisest thing to do. And if you do it, do it IN YOUR TEST ENVIRONMENT 500 times to see that you know how to do it properly.

Else - this will be a CLM.

Career
Limiting
Move

(thanks to Connor McDonald for that nice acronym.)

Excellent information

Rajeev, July 26, 2006 - 5:29 am UTC

Also, I forgot to mention, I am using Oracle 8.1.7, so I won't be able to use dbms_redefinition. So the only option is to take downtime to perform this activity.

Please let me hear from you, on my previous post.

Regards,
Rajeev

Tom Kyte
July 26, 2006 - 10:55 am UTC

I would turn it into a zero hours of downtime proposition.

Don't do it at all.

Why are you doing it in the first place?

what do you mean by offline ?

Rajeev, July 26, 2006 - 7:55 am UTC

Tom,

Sorry for continuous 3 posts...:-(

One last question beside above 2 posts ( questions ):

what do you mean by offline ??? Database should be in which state ?? Whole database should not be accessible or the applications should not access this table..??

Please let me know these questions...so that I can plan my activities.

Regards, Rajeev

Tom Kyte
July 26, 2006 - 11:25 am UTC

alter table t move; is an "offline operation for the table".

The table will be read only while this command is running.
The table will become entirely unavailable for all intents and purposes after the command runs and until you rebuild the indexes.

The table will be "offline"

Thanks

Rajeev, July 26, 2006 - 3:56 pm UTC


Thanks for your reply.

Actually, I am not as bad at these activities as you are thinking... :-))
I just asked those silly questions to make sure that I don't make any mistakes, as I am loaded with work and didn't have enough time...!!! And by asking silly questions, you get so many answers.. :-))

Anyway, you are a genius in Oracle and are serving the Oracle community in a really nice way.

Thanks, Rajeev

Tom Kyte
July 26, 2006 - 4:16 pm UTC

I am totally serious when I say this:

do not reorganize this/these tables.

Period. Just don't. The only thing you might accomplish is imposing some serious downtime on your users.



Transferring ONLY data

lakshmi, August 01, 2006 - 4:07 am UTC

Hi Tom

Our requirement is to copy data (which also contains BLOB data) present in a schema to another (which is present in a different machine.)
Currently we follow the following steps to migrate data from version X to version Y of our application:

Step1.Export the data using exp utility from X version

exp / \
FILE=${DB_DMP_FILE} \
CONSISTENT=Y \
CONSTRAINTS=N \
TRIGGERS=N \
GRANTS=N \
INDEXES=N \
FEEDBACK=10000 \
TABLES=${table_list}

Step2.Install version Y of our application
Step3.Import the data got in step 1 using imp utility:

imp / \
SILENT=Y \
FILE=${importFilename} \
TOUSER=${toUser} \
FROMUSER=${fromUser} \
INDEXES=N \
CONSTRAINTS=N \
COMMIT=Y \
BUFFER=3000000 \
DESTROY=N \
IGNORE=Y \
ROWS=Y \
TABLES= $tableSet \
LOG=${logFile}

Observation: the comments given to the tables in version Y are overwritten by the ones from version X.

Note: The table definitions (like column names,data type etc) are the same in version X and Y. But there are changes in the constraints, triggers etc., and we want only the data on version Y.


1. Which is the right way to transfer only the data if the data also contains BLOB data?
(Anything which is new to Oracle 10 is also OK for us since we are shortly moving to Oracle 10gR2)
2. Also wanted to find out if anything else could have been overwritten by the above-mentioned procedure.

Thanks

Tom Kyte
August 01, 2006 - 7:25 am UTC

how about

insert into t select * from t@remote;


or read up on data pump.

Transferring ONLY data

A reader, August 02, 2006 - 1:05 am UTC

Thanks Tom for the quick response !

But the problem is that in our procedure the source and target schemas don't coexist (we export the data, uninstall the previous version, install the new version and import the data exported), and

I am not sure if I can use Data Pump since our old version is running on Oracle 9 and the new one on Oracle 10, and I recently read that the output of export cannot be used by Data Pump.

It would be nice if you can give your suggestions on this

Thanks


Tom Kyte
August 02, 2006 - 11:35 am UTC

change your procedure?

have them coexist, takes the same amount of space more or less.

(yes, once you introduce 9i, datapump is not in the solution set anymore)

Next 0k

A reader, August 08, 2006 - 6:26 am UTC

Hi Tom,
I have seen this clause a lot of times:
Create table tab_name (cols datatype)
storage (initial 512k next 0k pctincrease 0)
tablespace tbsname;

Just wanted to know where the next 0k will derive its value from. I thought that it would not extend as the next clause is effectively missing. But on reading the docs it said the following:

<Quote>

Specify in bytes the size of the next extent to be allocated to the object. Use K or M to specify the size in kilobytes or megabytes. The default value is the size of 5 data blocks. The minimum value is the size of 1 data block. The maximum value depends on your operating system. Oracle rounds values up to the next multiple of the data block size for values less than 5 data blocks. For values greater than 5 data blocks, Oracle rounds up to a value that minimizes fragmentation, as described in Oracle9i Database Administrator's Guide.

If you change the value of the NEXT parameter (that is, if you specify it in an ALTER statement), then the next allocated extent will have the specified size, regardless of the size of the most recently allocated extent and the value of the PCTINCREASE parameter.

</Quote>
So it says it should be a minimum of 1 data block, which is surely not 0k.

DB version is 9i Release 2. The tablespace is Locally Managed with Segment Space Management set to either Auto/Manual.

Please advise.


Tom Kyte
August 08, 2006 - 7:59 am UTC

locally managed tablespaces use their own next, they don't really care too much about YOUR next, it isn't relevant for them.

they only use "next" if you create a table with MINEXTENTS, then they figure out how much space would have been allocated in a dictionary managed tablespace and allocate at least that much in the locally managed one.

But NEXT is otherwise "not useful" in regards to a locally managed tablespace.

praveen, September 30, 2006 - 2:56 pm UTC

These messages were really useful.
I too have a query.
What is the best way to overcome the below scenario.
The tablespaces are created with max_extents value as 2147483645.
What should be done if the max_extents are reached?

Tom Kyte
October 01, 2006 - 12:43 am UTC

cry?

that is a big number, think about it.

do the math, discover it would be a 128tb table if the extents were all 64k. then sleep better knowing you won't actually hit that in real life.

Move Object From one datafile To another

Devendra, November 10, 2006 - 5:34 am UTC

Dear Tom
Tablespace TEST contains two datafiles.
Now for some reason I want to drop one datafile.
Both datafiles are 50% used.
So suppose I drop one datafile, what happens to the objects in that datafile?
Can I move objects from that datafile to another datafile in the same tablespace?
Can I do this without "Import/Export" or the "Alter table move" command?

Thanks in advance
Devendra

Tom Kyte
November 10, 2006 - 8:57 am UTC

well, prior to 10g, you never ever drop a datafile - you drop a tablespace and that takes all files with it.

but tell me, why do you care if there are one or two files?

easiest thing:

create new tablespace T with one file;

alter the segments and move/rebuild them into t

drop test

Datafile Resize

Devendra, November 10, 2006 - 10:09 am UTC

Dear Tom
Thanks for your reply.
Suppose I resize a datafile, is there any chance of losing some data? If yes, how can I solve this?

Thanks in advance
Devendra

Tom Kyte
November 10, 2006 - 2:40 pm UTC

no, you will not lose data by resizing.
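For example (a sketch; the file name and new size are placeholders):

alter database datafile '/u01/oradata/db/test01.dbf' resize 500m;

If there is data beyond the requested size, the resize simply fails with an error rather than discarding anything.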

Single Table having data in multiple dbfiles under same tablespace

Maulesh P Jani, November 12, 2006 - 12:32 pm UTC

Hi TOM,
Thanks for your extended support!
I have one doubt; the details are as below:
Table: EMP (0 rows)
Tablespace: Tblspace
Db Files: 5 dbfiles, all equally filled (say 50% filled)

Now when I do a massive insertion into this table, after all the insertion I can see that all the db files are somewhat filled up (by storing this table's data).

Now my questions are:
1) If I make a single dbfile (large in size), then such a table's records will be in a single dbfile. Can that make any difference in performance compared to many files? (Meaning: if I have a table containing a million records in the same tablespace and a single dbfile, compared to having the records for the same table split into five or more dbfiles, does performance change?)
2) I am not sure, but my belief is that if all my table's records are stored in the same file then it should be at least somewhat faster than being split into different files.

Since a dbfile's records are in any case not physically stored on disk serially or in order, I am asking this in the sense that in this case the disk controller has to manage only a single file.

Regards
MJani

Tom Kyte
November 12, 2006 - 12:43 pm UTC

1) define file. The number of files isn't really too important (reasonably speaking).

But what if the 5 files were on five devices without striping of the file system at all, now you have striped the table and that is "good" (even IO)

If the underlying file system is already striped - it doesn't really matter does it, it is already striped (it is OK to stripe a stripe). You would use files to make your life easier and more enjoyable.

If you were afraid of big files over 2/4gb - go for it, have more than one. doesn't really matter.


2) why, why do you believe something (that in general is absolutely not true or not relevant)

Disk controllers do not manage "files"; software way up the stack does that. Disk controllers respond to very primitive "read this, read that" requests

What about changing the owner of an object

dav, November 20, 2006 - 11:19 am UTC

I want to move a table from one schema to another schema. Please let me know how it can be done. Thank you for your help.

Tom Kyte
November 20, 2006 - 1:45 pm UTC

there is a trick with partition exchange, but practically, what you will do is

a) copy it to the other schema
b) drop it from old
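
A minimal sketch of that (schema and table names are hypothetical; the target schema needs the CREATE TABLE privilege plus quota):

create table new_owner.t as select * from old_owner.t;
-- recreate indexes, constraints, grants and triggers on new_owner.t as needed
drop table old_owner.t;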

How do I convert IOT to HEAP?

Kurt Look, December 24, 2006 - 12:26 pm UTC

Tom,

Commenting on moving a table from one tablespace to another, you note "It'll keep heap/iots in place". I have an IOT and I would like to convert it to HEAP. What is the best way to do this?
Tom Kyte
December 24, 2006 - 3:54 pm UTC

the ONLY answer to any question that has "best" in it is.....

It depends :)

say you had to do this "online"

ops$tkyte%ORA10GR2> create table t (user_id,username,created,primary key(user_id))
  2  organization index
  3  as
  4  select user_id, username, created  from all_users;

Table created.

ops$tkyte%ORA10GR2> exec dbms_redefinition.can_redef_table( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> create table t_interim
  2  as
  3  select user_id, username, created  from all_users where 1=0;

Table created.

ops$tkyte%ORA10GR2> begin
  2          dbms_redefinition.start_redef_table
  3          ( uname => user,
  4            orig_table => 'T',
  5            int_table => 'T_INTERIM' );
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> alter table t_interim add constraint t_i_pk primary key(user_id);

Table altered.

ops$tkyte%ORA10GR2> begin
  2     dbms_redefinition.sync_interim_table( user, 'T', 'T_INTERIM' );
  3     dbms_redefinition.finish_redef_table( user, 'T', 'T_INTERIM' );
  4  end;
  5  /

PL/SQL procedure successfully completed.


if you didn't have the "online" need, a create table as select would do it.

alter table move and redo logs

mohan, January 10, 2007 - 3:32 pm UTC

Oracle 9.2.0.7.0 (Unix)
Database and all Tablespaces are "NOT" under forced logging.
Database is in restricted mode and running under ARCHIVELOG mode.

Hi Tom,

I am moving my Dictionary Managed Tablespaces (DMTs) to Locally Managed Tablespaces (LMTs)
using ALTER TABLE MOVE and ALTER INDEX REBUILD commands.
Some of the large tables are generating lots of redo,
so the arch directory gets filled up and everything comes to a halt;
then the directory gets cleaned up and things move on.
I was going to use the NOLOGGING clause, but I am a little lost on the recovery aspect of it.

This is from 9i Release 2 Documentation:

"
If the database is run in ARCHIVELOG mode, then media recovery from a backup made before the
LOGGING operation re-creates the object. However, media recovery from a backup made before the
NOLOGGING operation does not re-create the object.

The size of a redo log generated for an operation in NOLOGGING mode is significantly
smaller than the log generated in LOGGING mode.

In NOLOGGING mode, data is modified with minimal logging (to mark new extents INVALID and to record dictionary changes).
When applied during media recovery, the extent invalidation records mark a range of blocks as logically corrupt,
because the redo data is not fully logged. Therefore, if you cannot afford to lose the database object,
then you should take a backup after the NOLOGGING operation.
"

So, when the docs talk about an ARCHIVELOG-mode database, before and after NOLOGGING/LOGGING operations,
say:
10.00 a.m. - a hot backup of the database was taken
10.05 a.m. - table A was altered to NOLOGGING mode
10.10 a.m. - table A was altered back to LOGGING mode
10.20 a.m. - media recovery or incomplete recovery was needed to:


10.08 a.m. - changes to table A made before 10.05 a.m. all survive
10.15 a.m. - ??


In the case of a data reorg (under restricted mode), will it be OK to use NOLOGGING mode (considering recovery scenarios)?
And then, when done, a new hot backup can be taken.
Or maybe do the whole operation under NOARCHIVELOG mode.

appreciate your help.

any ideas regarding the above Tom..

mohan, February 06, 2007 - 10:19 am UTC

Any ideas regarding the above (nologging) question, Tom?
I am just trying to reduce the amount of redo that is being generated when some of the larger tables are being moved.

Thanks.
Tom Kyte
February 06, 2007 - 11:31 am UTC

after you perform a non-logged operation on a datafile, you should schedule a backup of that datafile as soon as possible.
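
A minimal sketch of finding which datafiles carry non-logged changes (v$datafile records the time of the most recent unrecoverable operation against each file):

select file#, unrecoverable_change#, unrecoverable_time
  from v$datafile
 where unrecoverable_time is not null;
-- then back up those datafiles (or their tablespaces) with your usual tool, e.g. RMAN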

NOLOGGING clause vs altering a table/index mode

mohan, February 07, 2007 - 3:29 pm UTC

9i release 2

Tom,
Are these two options equivalent when the goal is to reduce the amount of redo generated?
Option 1:
1. alter table/index status to nologging
2. run alter table move or alter index rebuild
3. alter table/index status back to logging

Option 2:
run alter table move or alter index rebuild operation using nologging clause.

Tom Kyte
February 07, 2007 - 7:07 pm UTC

option two will leave the status of the segment in nologging.

option one will not

Option 2 test case

mohan, February 08, 2007 - 4:31 pm UTC

Followup:
option two will leave the status of the segment in nologging.

option one will not


hmm (9i Release 2)
I am not seeing that behaviour for some reason:

SQL> select tablespace_name from dba_segments
2 where segment_name = 'TEST_LOGGING';

TABLESPACE_NAME
------------------------------
USERS

SQL> select logging from dba_tables
2 where table_name = 'TEST_LOGGING';

LOG
---
YES

SQL> alter table test_logging move tablespace ts_small nologging;

Table altered.

SQL> select tablespace_name from dba_segments
2 where segment_name = 'TEST_LOGGING';

TABLESPACE_NAME
------------------------------
TS_SMALL

SQL> select logging from dba_tables
2 where table_name = 'TEST_LOGGING';

LOG
---
YES

SQL>
Tom Kyte
February 08, 2007 - 9:10 pm UTC

9iR2 for me

ops$tkyte%ORA9IR2> select logging from user_tables where table_name = 'T';

LOG
---
YES

ops$tkyte%ORA9IR2> select logging from user_indexes where index_name = 'T_PK';

LOG
---
YES

ops$tkyte%ORA9IR2> @mystat "redo size"
ops$tkyte%ORA9IR2> set echo off

NAME                        VALUE
---------------------- ----------
redo size                15295568

ops$tkyte%ORA9IR2> alter table t move nologging;

Table altered.

ops$tkyte%ORA9IR2> @mystat2
ops$tkyte%ORA9IR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
redo size                15353764           58,196

ops$tkyte%ORA9IR2> @mystat "redo size"
ops$tkyte%ORA9IR2> set echo off

NAME                        VALUE
---------------------- ----------
redo size                15353764

ops$tkyte%ORA9IR2> alter index t_pk rebuild nologging;

Index altered.

ops$tkyte%ORA9IR2> @mystat2
ops$tkyte%ORA9IR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
redo size                15409072           55,308

ops$tkyte%ORA9IR2> select logging from user_tables where table_name = 'T';

LOG
---
NO

ops$tkyte%ORA9IR2> select logging from user_indexes where index_name = 'T_PK';

LOG
---
NO


rebuild of indexes on partition table

Ajeet, February 09, 2007 - 1:02 am UTC

Hi Tom,

I have to rebuild all of the indexes of a partitioned table, as I want to move them into a separate (different) tablespace.
I tried to use

alter table modify partition part_name rebuild unusable local indexes ;

but this does not let me specify the tablespace name.

Is there a simpler way to do this?

Regards
Tom Kyte
February 09, 2007 - 8:14 am UTC

ops$tkyte%ORA10GR2> alter index t_idx rebuild partition part1 tablespace users;

Index altered.

reader

A reader, February 22, 2007 - 2:58 pm UTC

Suppose the table has check constraints, triggers, grants, a primary key, foreign keys, referential integrity from this table to another table as well as referential integrity from another table to this table, and indexes attached to it.

If I move the table, its indexes and the primary key index, will everything else take care of itself?
Tom Kyte
February 22, 2007 - 7:47 pm UTC

you need to rebuild indexes after a move.
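
A minimal sketch of finding everything that needs rebuilding after a move (covers plain, partitioned and subpartitioned indexes):

select index_name from user_indexes where status = 'UNUSABLE';
select index_name, partition_name from user_ind_partitions where status = 'UNUSABLE';
select index_name, subpartition_name from user_ind_subpartitions where status = 'UNUSABLE';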

reader

A reader, February 23, 2007 - 8:35 am UTC

Thanks

Do triggers, grants and referential integrity take care of themselves?
Tom Kyte
February 26, 2007 - 10:21 am UTC

if you mean "after an alter table move" (i have NO CLUE who you are or what you are referring to - so I'll presume you are "a reader from the US" as above)

yes, they "take care of themselves"

How do I validate

Venkateswara Reddy, February 26, 2007 - 4:59 am UTC

Hi Tom

You have given a very good example. I would like to know: after moving tables and rebuilding indexes, is there any way to validate the result? I would like a query that tells me if something is not valid.

Thanks

Venkat
Tom Kyte
February 26, 2007 - 2:40 pm UTC

if what is not validated?

move tablespace still requires index rebuilds?

A reader, July 08, 2007 - 11:43 am UTC

I have a situation where some tables need to be rebuilt in production for performance (it's been tested); they are going to shrink by about 90%. I say the best way is CTAS, rename and rebuild indexes, and in other cases delete plus exp/imp. Management wants it all done online with deletes and alter table moves, so that everything stays available and we don't need an outage. As near as I can figure, even with an alter table xxx move tablespace xxx, the indexes will be unusable after the move completes and until they are rebuilt. Is this correct? In other words, there is no completely online reorg feature? This is 9.2.0.4.
Tom Kyte
July 08, 2007 - 2:05 pm UTC

delete + dbms_redefinition would be "online"

anything else is "offline" (unless you use shrink in 10g with online index rebuilds afterwards)

alter table T move will invalidate all indexes, that will cause "downtime" as modifications will fail and queries will - well - not have any indexes so they either won't run or will run a bit slower in general :)



so, you want it online - delete data, then use dbms_redefinition IF NEEDED to 'reclaim' space.

doesn't dbms_redefinition in version 9 require indexing?

A reader, July 08, 2007 - 5:07 pm UTC

Tom - in this previous post -
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:365418042918

You mentioned that dbms_redefinition has certain online drawbacks (in 9i) in this follow-up.

- quote -
well, yes, no need to "rebuild" for the simple reason that you are (in 9i) responsible for BUILDING
the indexes on the newly redefined table! (trick answer or trick answer, depends on how you want to
look at it)

the steps to redefine in 9i are roughly:

a) create scratch (new) table to redefine into
b) start redefine process, copies data from old to new
c) index new
d) constrain new
e) add triggers , etc to new
f) grant on new
g) finish redefine
- end quote -


So I would have a similar problem with dbms_redefinition, unavailable indexes, as I would with "alter table move", no?


Tom Kyte
July 08, 2007 - 6:10 pm UTC

no you wouldn't since you do the create indexes before "finish redefine"

you stage a replica of the source table in steps a-f, you put it into production in step g

during these steps (a-f) people access old data, at step g you "publish" the new data.

Thanks

A reader, July 09, 2007 - 2:16 am UTC

Thank you - appreciate your help.

alter table move makes index status 'N/A'

wawan, August 01, 2007 - 10:30 pm UTC

Tom,

I did alter table XXX move tablespace NEWTBS,
but now the index status is 'N/A' and the alert log file
contains messages like this:
"Index DWH.LA_ST_AG_SUP_BUY_OBS_MON_I1 or some [sub]partitions of the index have been marked unusable"

The index has partitions and subpartitions.

I tried to rebuild the subpartitions, and that ran fine,
but the status is still 'N/A'.



  1  select index_name,partition_name,status,composite from
  2  user_ind_partitions
  3* where index_name = 'INM_PK'
SQL> /

INDEX_NAME PARTITION_NAME  STATUS   COM
---------- --------------- -------- ---
INM_PK     FILM_Y_2005     N/A      YES
INM_PK     FILM_Y_2006     N/A      YES
INM_PK     FILM_Y_2007     N/A      YES
INM_PK     FILM_Y_OTHER    N/A      YES

SQL> l
  1  select index_name,partition_name,subpartition_name,status
  2  from user_ind_subpartitions
  3* where index_name = 'INM_PK'
SQL> /

INDEX_NAME PARTITION_NAME  SUBPARTITION_NAME              STATUS
---------- --------------- ------------------------------ --------
INM_PK     FILM_Y_2005     SYS_SUBP81                     USABLE
INM_PK     FILM_Y_2005     SYS_SUBP82                     USABLE
INM_PK     FILM_Y_2005     SYS_SUBP83                     USABLE
INM_PK     FILM_Y_2006     SYS_SUBP84                     USABLE
INM_PK     FILM_Y_2006     SYS_SUBP85                     USABLE
INM_PK     FILM_Y_2006     SYS_SUBP86                     USABLE
INM_PK     FILM_Y_2007     SYS_SUBP87                     USABLE
INM_PK     FILM_Y_2007     SYS_SUBP88                     USABLE
INM_PK     FILM_Y_2007     SYS_SUBP89                     USABLE
INM_PK     FILM_Y_OTHER    SYS_SUBP90                     USABLE
INM_PK     FILM_Y_OTHER    SYS_SUBP91                     USABLE
INM_PK     FILM_Y_OTHER    SYS_SUBP92                     USABLE

12 rows selected.



I ran 'alter index INM_PK rebuild partition' (and subpartition),
all of them one by one,

but the status in user_ind_partitions is still 'N/A'
and the alert file still contains such warnings.

I have opened an SR in Metalink, but I am not satisfied with
the answer, which said the behaviour is normal.

any advice?

regards
Tom Kyte
August 05, 2007 - 10:16 am UTC

you need to rebuild indexes after moving data - you just changed all of the rowids, the index is invalid.

OK

Rajesh, August 10, 2007 - 2:35 am UTC

Hi Tom,
How do I find out all the tablespaces in which a user can
create tables, snapshots, views, etc.?
I checked user_users but it shows only the default_tablespace, not ALL the tablespaces in which the user has privileges to create objects.
Tom Kyte
August 14, 2007 - 10:56 am UTC

user_ts_quotas

shows quota information by user and tablespace.

but remember, a user with UNLIMITED TABLESPACE privilege has access to ALL tablespaces.
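
A minimal sketch of both checks (in user_ts_quotas, max_bytes = -1 means an unlimited quota on that tablespace):

select tablespace_name, bytes, max_bytes from user_ts_quotas;

select * from session_privs where privilege = 'UNLIMITED TABLESPACE';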

Rebuilding Table

Sasi Kumar, November 26, 2007 - 12:31 am UTC

Hi Tom,
Under what conditions should we decide to rebuild a table? I had one scenario: two similar instances, in which three tables go for a full table scan. In one instance (QA) all three full scans finish within 1 minute; in the other (Prod) they take 6 minutes. Both have a similar number of rows, similar statistics, etc. Would rebuilding the tables in PROD help me? Or would moving the tables to a different tablespace help? Please advise. Thanks Tom.
Tom Kyte
November 26, 2007 - 1:03 pm UTC

well, nothing you have said would cause me to even consider rebuilding.


You might want to find the root cause - first question I'd ask would be "is there about the same number of blocks here folks - are they the same?"

then "what else might be different, could it be that there are 1000 users in prod and 1 user in QA?"


Rebuilding Table

Sasi Kumar, November 28, 2007 - 5:10 am UTC

Hi Tom,

Thanks for the quick reply. Here is my analysis.

The blocks accessed are almost the same in both instances.

In Production:

Table     Time to execute   Blocks accessed
Table1    110 secs          609960
Table2    120 secs          529475
Table3    135 secs          659428

In QA:

Table     Time to execute   Blocks accessed
Table1    35 secs           600189
Table2    35 secs           487490
Table3    35 secs           653103

I tried both during non-peak hours, that is, when there were no users in QA and none in Production.

Is there any way to speed up the full table scans? Please advise regarding this, Tom. Thanks!!
Tom Kyte
November 28, 2007 - 10:52 pm UTC

trace it
tkprof it

observe the waits

and tell us what is different between the two

Moving the table to another tablespace

DK, December 21, 2007 - 2:03 am UTC

Hi Tom,

I am moving the table T from tablespace tbsp1 to tbsp2 like this

alter table T move tablespace tbsp2;

While the alter statement is still running, in another session
I query the user_segments view:

select segment_name from user_segments where tablespace_name = 'tbsp1';

I can see the table name 'T'.

When I check user_segments for tbsp2:

select segment_name from user_segments where tablespace_name = 'tbsp2';

I see that table T is not there, but there is a segment named with some number, like "50.1314".

Can you please let me know what this number stands for?

Thanks,
Dheeraj

Tom Kyte
December 21, 2007 - 2:53 pm UTC

it is a temporary name, it is how we name temporary segments.

While this process of copying is going on - the new segment is a temporary segment. When we are done, we convert that temporary segment into a permanent one

it is temporary so that if the instance fails during the alter table move - SMON will clean up after a restart (because SMON cleans up temporary segments after a crash)
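
A minimal sketch of watching this from a second session while the move runs (tablespace name as in the question above):

select segment_name, segment_type
  from user_segments
 where tablespace_name = 'TBSP2';
-- while the move is in flight, the new copy shows up as segment_type = 'TEMPORARY'
-- with a name like "50.1314"; once the move finishes, it becomes the permanent table segment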

ORA-02219 invalid NEXT storage option value

Fledgling, February 01, 2008 - 2:25 pm UTC

Hi Tom,
Your site has helped me solve a lot of my Oracle-related hindrances - thank you very much for that. I tried your moveall.sql and it did generate the script to move tables to another tablespace; the problem is it gives me "ORA-02219: invalid NEXT storage option value". I am supposed to replace "&1" with a value, right? My test DB version is 9.2.0.1.0.

Tom Kyte
February 04, 2008 - 3:39 pm UTC

pass a tablespace name - the one you want to move the objects to.

ORA-02219: invalid NEXT storage option value

Fledgling, February 01, 2008 - 2:47 pm UTC

Hi again Tom,

Don't bother answering my previous entry - I found the error. Probably because all of the tables are empty, "next, pctincrease and freelists" were all null. You see, my real goal is to copy/create all tables from one user into another user and into a different tablespace. I imported a DMP into another user and the tables ended up in the same tablespace as the original user's (since they are in the same instance). Thank you very much for this site with all of its solutions to Oracle DB problems. More power to you.

It's very helpful to me

TSB, February 07, 2008 - 7:33 am UTC

For months I was searching for a way to do this tablespace replacement; it will give an added advantage to my application.

Moving partition

A reader, March 07, 2008 - 6:59 pm UTC

Tom,
We want to move a partition from one tablespace to another. My questions are:
a) Can this be done online?
b) The partition has local indexes, and there are global indexes on the table. Do both of these indexes need to be rebuilt?
c) Can the indexes (both local and global) be rebuilt online?

Oracle version 10gr2.

Thanks...
Tom Kyte
March 10, 2008 - 11:08 am UTC

a) well, you can do an online redefine of a partition of a table - see the dbms_redefinition package for details

b) no, you can maintain local and global indexes during partition operations

ops$tkyte%ORA10GR2> CREATE TABLE t
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) tablespace users,
 10    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) tablespace users
 11  )
 12  /

Table created.

ops$tkyte%ORA10GR2> create index t_global on t(x) GLOBAL;

Index created.

ops$tkyte%ORA10GR2> create index t_local on t(y) LOCAL;

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t select to_date( '12-mar-2003','dd-mon-yyyy')+mod(rownum,2), user_id, username from all_users;

58 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select index_name, status from user_indexes where index_name = 'T_GLOBAL'
  2  union all
  3  select partition_name, status from user_ind_partitions where index_name = 'T_LOCAL';

INDEX_NAME                     STATUS
------------------------------ --------
T_GLOBAL                       VALID
PART1                          USABLE
PART2                          USABLE

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> alter table t move partition part1 tablespace example UPDATE INDEXES;

Table altered.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select index_name, status from user_indexes where index_name = 'T_GLOBAL'
  2  union all
  3  select partition_name, status from user_ind_partitions where index_name = 'T_LOCAL';

INDEX_NAME                     STATUS
------------------------------ --------
T_GLOBAL                       VALID
PART1                          USABLE
PART2                          USABLE

versus NOT updating the indexes:

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> alter table t move partition part2 tablespace example;

Table altered.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select index_name, status from user_indexes where index_name = 'T_GLOBAL'
  2  union all
  3  select partition_name, status from user_ind_partitions where index_name = 'T_LOCAL';

INDEX_NAME                     STATUS
------------------------------ --------
T_GLOBAL                       UNUSABLE
PART1                          USABLE
PART2                          UNUSABLE


c) they could NORMALLY be - but here, you've destroyed them if you did not maintain them during the move.

Moving partition

A reader, March 10, 2008 - 12:43 pm UTC

Thanks for your reply and the example. Can we use the parallel option to speed things up if the partitions are big?

I always wonder: do you remember the syntax of the commands, or do you have examples ready-made? It is amazing the way you give working examples. It would have taken me at least half an hour to create this one.
Tom Kyte
March 10, 2008 - 12:54 pm UTC

yes, you can use parallel operations


ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE
---------------------- ----------
DDL statements paralle          0
lized


ops$tkyte%ORA10GR2> alter table t move partition part1 parallel 4 tablespace example UPDATE INDEXES;

Table altered.

ops$tkyte%ORA10GR2> @mystat2
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
DDL statements paralle          1                1
lized




do I remember the syntax? I remember that such a thing is possible and I guess at the syntax if I don't remember it - realizing I always have the documentation - or even a quick search on asktom or one of my books to look it up (yes, people have laughed at me when I've reached for my own book to verify something...)

Transaction treatment

Paulo Bueno, March 28, 2008 - 12:25 pm UTC

Alter table zzzz move seems nice, but does it generate a transaction? I mean, will it use space in the rollback segments?
Tom Kyte
March 31, 2008 - 7:46 am UTC

it is transactional in that it either

a) entirely happens
b) entirely does not happen

It generates UNDO - but only for the dictionary changes it needs to make.

Alter table t move works like this:

a) read the source table
b) direct path write the target table, using TEMPORARY extents. Temporary extents are used so that if we "crash", SMON will clean them up and this move will never have "happened" at all.
c) when the entire table has been copied - we start converting the temporary extents into permanent ones - this is an UPDATE TO THE DICTIONARY.
d) we finish by removing the dictionary entries for the old table (delete) and updating the dictionary entries for the new table (to make it look like it is the old table) and then....

e) commit the updates to the dictionary.
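
A minimal sketch of observing that (the statistic name is real; the table name is hypothetical) - run the query before and after the move and compare the values:

select b.name, a.value
  from v$mystat a, v$statname b
 where a.statistic# = b.statistic#
   and b.name = 'undo change vector size';

alter table t move;
-- re-running the query shows only a small increase: the undo for the dictionary changes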

Moving Tablespace data

Aby, June 23, 2008 - 3:31 am UTC

Hi Tom,
I would like to move the data from one tablespace to another existing tablespace and then drop the old tablespace. Could you please suggest any methodologies for this?
Thanks
Aby

Tom Kyte
June 23, 2008 - 8:07 am UTC

did you

even

read this posting........?


alter ... move for tables
alter ... rebuild for indexes

Moving partition table...

Laxman, July 01, 2008 - 5:46 am UTC

Hi Tom,
Version: 9.2.0.6
Recently we moved tables across tablespaces.
A few of them were partitioned tables.
After moving the tables from the source to the target tablespace, we dropped the source tablespaces.
In the case of a partitioned table, the table-level definition is still pointing to the old tablespace; however, the partitions are pointing to the new tablespace.

Example: below is an extract from the dbms_metadata.get_ddl() function.

ADM_CDR was the old tablespace
ADM_CDR_A is the new tablespace




  CREATE TABLE "ARBOR"."CDR_DATA_WORK" 
   ( "MSG_ID" NUMBER(10,0) NOT NULL ENABLE, 
 "MSG_ID2" NUMBER(3,0) NOT NULL ENABLE, 
 "MSG_ID_SERV" NUMBER(3,0) NOT NULL ENABLE, 
 "EXT_TRACKING_ID" VARCHAR2(96), 
 "TRANS_ID" VARCHAR2(20) NOT NULL ENABLE, 
 "ELEMENT_ID" NUMBER(10,0) NOT NULL ENABLE, 
 "TYPE_ID_USG" NUMBER(10,0) NOT NULL ENABLE, 
 "RATE_CLASS" NUMBER(6,0) NOT NULL ENABLE, 
 "BILL_CLASS" NUMBER(6,0), 
 "PROVIDER_ID" NUMBER(10,0), 
 "JURISDICTION" NUMBER(10,0), 
 "RATE_CURRENCY_CODE" NUMBER(6,0) NOT NULL ENABLE, 
 "ACCOUNT_NO" NUMBER(10,0) NOT NULL ENABLE, 
 "ACCOUNT_CATEGORY" NUMBER(3,0), 
 "SUBSCR_NO" NUMBER(10,0) NOT NULL ENABLE, 
 "SUBSCR_NO_RESETS" NUMBER(6,0) NOT NULL ENABLE, 
 "EXTERNAL_ID" VARCHAR2(144), 
 "EXTERNAL_ID_TYPE" NUMBER(6,0), 
 "POINT_ORIGIN" VARCHAR2(72), 
 "MKT_CODE" NUMBER(6,0), 
 "COUNTRY_CODE_ORIGIN" NUMBER(6,0), 
 "COUNTRY_DIAL_CODE_ORIGIN" VARCHAR2(6), 
 "POINT_ID_ORIGIN" NUMBER(10,0), 
 "POINT_TAX_CODE_ORIGIN" VARCHAR2(15), 
 "POINT_TAX_CODE_TYPE_ORIGIN" NUMBER(3,0), 
 "POINT_TARGET" VARCHAR2(72), 
 "COUNTRY_CODE_TARGET" NUMBER(6,0), 
 "COUNTRY_DIAL_CODE_TARGET" VARCHAR2(6), 
 "POINT_ID_TARGET" NUMBER(10,0), 
 "POINT_TAX_CODE_TARGET" VARCHAR2(15), 
 "POINT_TAX_CODE_TYPE_TARGET" NUMBER(3,0), 
 "BASE_AMT" NUMBER(18,0) NOT NULL ENABLE, 
 "RATE_DT" CHAR(20) NOT NULL ENABLE, 
 "TRANS_DT" CHAR(20) NOT NULL ENABLE, 
 "SECOND_DT" VARCHAR2(20), 
 "TIMEZONE" NUMBER(3,0) NOT NULL ENABLE, 
 "PRIMARY_UNITS" NUMBER(10,0) NOT NULL ENABLE, 
 "SECOND_UNITS" NUMBER(10,0), 
 "THIRD_UNITS" NUMBER(10,0), 
 "UNITS_CURRENCY_CODE" NUMBER(6,0), 
 "BILLING_UNITS_TYPE" NUMBER(6,0), 
 "CCARD_ID" NUMBER(10,0), 
 "CCARD_ID_SERV" NUMBER(3,0), 
 "MIU_CCARD_ACCOUNT" VARCHAR2(66), 
 "MIU_CCARD_CARRIER_CODE" NUMBER(3,0), 
 "MIU_CCARD_OWNR_NAME" VARCHAR2(120), 
 "MIU_CCARD_EXPIRE" VARCHAR2(4), 
 "FEDERAL_TAX" NUMBER(18,0), 
 "STATE_TAX" NUMBER(18,0), 
 "COUNTY_TAX" NUMBER(18,0), 
 "CITY_TAX" NUMBER(18,0), 
 "OTHER_TAX" NUMBER(18,0), 
 "ANNOTATION" VARCHAR2(255), 
 "CUSTOMER_TAG" VARCHAR2(240), 
 "RATED_UNITS" NUMBER(10,0) NOT NULL ENABLE, 
 "AMOUNT" NUMBER(18,0) NOT NULL ENABLE, 
 "FOREIGN_AMOUNT" NUMBER(18,0) NOT NULL ENABLE, 
 "RATE_PERIOD" CHAR(1) NOT NULL ENABLE, 
 "NO_BILL" NUMBER(1,0) NOT NULL ENABLE, 
 "COMP_STATUS" NUMBER(3,0) NOT NULL ENABLE, 
 "CDR_STATUS" NUMBER(3,0) NOT NULL ENABLE, 
 "FILE_ID" NUMBER(10,0) NOT NULL ENABLE, 
 "FILE_ID_SERV" NUMBER(3,0), 
 "REV_RCV_COST_CTR" NUMBER(6,0) NOT NULL ENABLE, 
 "ARCH_FLAG" NUMBER(1,0) DEFAULT 0 NOT NULL ENABLE, 
 "MIU_ERROR_CODE1" NUMBER(6,0) NOT NULL ENABLE, 
 "MIU_ERROR_CODE2" NUMBER(6,0) NOT NULL ENABLE, 
 "MIU_ERROR_CODE3" NUMBER(6,0) NOT NULL ENABLE, 
 "MPS_MIU_DT" DATE, 
 "MIU_INVESTIGATOR" VARCHAR2(16), 
 "MIU_QUEUE" VARCHAR2(16), 
 "MIU_DISP_STATUS" NUMBER(3,0), 
 "MIU_DISP_CODE" NUMBER(3,0), 
 "MIU_LOCKED_BY" VARCHAR2(30), 
 "MIU_NOTES" VARCHAR2(255), 
 "UNROUNDED_AMOUNT" NUMBER(18,0) DEFAULT 0 NOT NULL ENABLE, 
 "CORRIDOR_PLAN_ID" NUMBER(10,0), 
 "CELL_ID_ORIGIN" NUMBER(6,0), 
 "CELL_NAME_ORIGIN" VARCHAR2(15), 
 "SPLIT_ROW_NUM" NUMBER(3,0) DEFAULT 0 NOT NULL ENABLE, 
 "ORIG_TYPE_ID_USG" NUMBER(10,0) NOT NULL ENABLE, 
 "ACCESS_REGION_ORIGIN" VARCHAR2(15), 
 "ACCESS_REGION_TARGET" VARCHAR2(15), 
 "AMOUNT_REDUCTION" NUMBER(18,0) NOT NULL ENABLE, 
 "AMOUNT_REDUCTION_ID" NUMBER(10,0) DEFAULT 0 NOT NULL ENABLE, 
 "OPEN_ITEM_ID" NUMBER(10,0) NOT NULL ENABLE, 
 "RAW_REC_ID" NUMBER(6,0), 
 "RAW_REC_SIZE" NUMBER(6,0), 
 "ROAMING_DETAIL" VARCHAR2(255), 
  CONSTRAINT "CDR_DT_WRK_SUBSCR_NO_RESETS_CH" CHECK (subscr_no_resets >= -32768 and subscr_no_resets <= 32767) ENABLE, 
  CONSTRAINT "CDRDTWRKPNTTXCDTYPRGNCH" CHECK (point_tax_code_type_origin >= 0 and point_tax_code_type_origin <= 255) ENABLE, 
  CONSTRAINT "CDRDTWRKPNTTXCDTYPTRGTCH" CHECK (point_tax_code_type_target >= 0 and point_tax_code_type_target <= 255) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_ACCOUNT_NO_CH" CHECK (account_no >= -2147483648 and account_no <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_ARCH_FLAG_CH" CHECK (arch_flag in (0, 1)) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_BILL_CLASS_CH" CHECK (bill_class >= -32768 and bill_class <= 32767) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_CCARD_ID_CH" CHECK (ccard_id >= -2147483648 and ccard_id <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_CCARD_ID_SERV_CH" CHECK (ccard_id_serv >= 0 and ccard_id_serv <= 255) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_CDR_STATUS_CH" CHECK (cdr_status >= 0 and cdr_status <= 255) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_CDR_STATUS_RULE" CHECK (cdr_status in (0,1,2,3,4,5,8,9,12,13,15)) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_COMP_STATUS_CH" CHECK (comp_status >= 0 and comp_status <= 255) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_ELEMENT_ID_CH" CHECK (element_id >= -2147483648 and element_id <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_FILE_ID_CH" CHECK (file_id >= -2147483648 and file_id <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_FILE_ID_SERV_CH" CHECK (file_id_serv >= 0 and file_id_serv <= 255) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_JURISDICTION_CH" CHECK (jurisdiction >= -2147483648 and jurisdiction <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_MIU_DISP_CODE_CH" CHECK (miu_disp_code >= 0 and miu_disp_code <= 255) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_MKT_CODE_CH" CHECK (mkt_code >= -32768 and mkt_code <= 32767) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_MSG_ID2_CH" CHECK (msg_id2 >= 0 and msg_id2 <= 255) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_MSG_ID_CH" CHECK (msg_id >= -2147483648 and msg_id <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_MSG_ID_SERV_CH" CHECK (msg_id_serv >= 0 and msg_id_serv <= 255) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_M_DSP_CD_RL" CHECK (miu_disp_code in (0,1,2,3,4,5)) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_M_DSP_STTS_RL" CHECK (miu_disp_status in (0,1,2,3)) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_NO_BILL_CH" CHECK (no_bill in (0, 1)) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_OPEN_ITEM_ID_CH" CHECK (open_item_id >= -2147483648 and open_item_id <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_PRIMARY_UNITS_CH" CHECK (primary_units >= -2147483648 and primary_units <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_PROVIDER_ID_CH" CHECK (provider_id >= -2147483648 and provider_id <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_RATED_UNITS_CH" CHECK (rated_units >= -2147483648 and rated_units <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_RATE_CLASS_CH" CHECK (rate_class >= -32768 and rate_class <= 32767) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_RAW_REC_ID_CH" CHECK (raw_rec_id >= -32768 and raw_rec_id <= 32767) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_RAW_REC_SIZE_CH" CHECK (raw_rec_size >= -32768 and raw_rec_size <= 32767) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_SECOND_UNITS_CH" CHECK (second_units >= -2147483648 and second_units <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_SPLIT_ROW_NUM_CH" CHECK (split_row_num >= 0 and split_row_num <= 255) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_SUBSCR_NO_CH" CHECK (subscr_no >= -2147483648 and subscr_no <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_THIRD_UNITS_CH" CHECK (third_units >= -2147483648 and third_units <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_TIMEZONE_CH" CHECK (timezone >= 0 and timezone <= 255) ENABLE, 
  CONSTRAINT "CDR_DATA_WORK_TYPE_ID_USG_CH" CHECK (type_id_usg >= -2147483648 and type_id_usg <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_ACCOUNT_CATEGORY_CH" CHECK (account_category >= 0 and account_category <= 255) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_BLLNG_NTS_TYP_CH" CHECK (billing_units_type >= -32768 and billing_units_type <= 32767) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_CELL_ID_ORIGIN_CH" CHECK (cell_id_origin >= -32768 and cell_id_origin <= 32767) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_CNTRY_CD_RGN_CH" CHECK (country_code_origin >= -32768 and country_code_origin <= 32767) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_CNTRY_CD_TRGT_CH" CHECK (country_code_target >= -32768 and country_code_target <= 32767) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_CORRIDOR_PLAN_ID_CH" CHECK (corridor_plan_id >= -2147483648 and corridor_plan_id <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_EXTERNAL_ID_TYPE_CH" CHECK (external_id_type >= -32768 and external_id_type <= 32767) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_MIU_DISP_STATUS_CH" CHECK (miu_disp_status >= 0 and miu_disp_status <= 255) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_MIU_ERROR_CODE1_CH" CHECK (miu_error_code1 >= -32768 and miu_error_code1 <= 32767) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_MIU_ERROR_CODE2_CH" CHECK (miu_error_code2 >= -32768 and miu_error_code2 <= 32767) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_MIU_ERROR_CODE3_CH" CHECK (miu_error_code3 >= -32768 and miu_error_code3 <= 32767) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_MNT_RDCTN_D_CH" CHECK (amount_reduction_id >= -2147483648 and amount_reduction_id <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_M_CCRD_CRRR_CD_CH" CHECK (miu_ccard_carrier_code >= 0 and miu_ccard_carrier_code <= 255) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_NTS_CRRNCY_CD_CH" CHECK (units_currency_code >= -32768 and units_currency_code <= 32767) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_ORIG_TYPE_ID_USG_CH" CHECK (orig_type_id_usg >= -2147483648 and orig_type_id_usg <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_POINT_ID_ORIGIN_CH" CHECK (point_id_origin >= -2147483648 and point_id_origin <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_POINT_ID_TARGET_CH" CHECK (point_id_target >= -2147483648 and point_id_target <= 2147483647) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_REV_RCV_COST_CTR_CH" CHECK (rev_rcv_cost_ctr >= -32768 and rev_rcv_cost_ctr <= 32767) ENABLE, 
  CONSTRAINT "CDR_DT_WRK_RT_CRRNCY_CD_CH" CHECK (rate_currency_code >= -32768 and rate_currency_code <= 32767) ENABLE
   ) PCTFREE 10 PCTUSED 80 INITRANS 1 MAXTRANS 255  LOGGING 
  STORAGE(
  PCTINCREASE 0 FREELISTS 31 BUFFER_POOL DEFAULT)
  TABLESPACE "ADM_CDR" 
  PARTITION BY RANGE ("TRANS_DT") 
 (PARTITION "CDRDW_SEP_07"  VALUES LESS THAN ('20071001 00:00:00') 
  PCTFREE 10 PCTUSED 80 INITRANS 1 MAXTRANS 255 
  STORAGE(INITIAL 134217728 NEXT 134217728 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 31 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "ADM_CDR_A" NOCOMPRESS , 
 PARTITION "CDRDW_OCT_07"  VALUES LESS THAN ('20071101 00:00:00') 
  PCTFREE 10 PCTUSED 80 INITRANS 1 MAXTRANS 255 
  STORAGE(INITIAL 134217728 NEXT 134217728 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 31 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "ADM_CDR_A" NOCOMPRESS

[[Rest of the code truncated...]]


Please advise.

Thanks for your time.
Tom Kyte
July 06, 2008 - 7:11 pm UTC

ops$tkyte%ORA11GR1> alter table t move tablespace tools;
alter table t move tablespace tools
            *
ERROR at line 1:
ORA-14511: cannot perform operation on a partitioned object


Ok, since you cannot move a partitioned table in a single statement, you move its partitions instead. Also, perhaps you could back up and use a MUCH MUCH smaller example (you need about 1 column and about 1, maybe 2, partitions) to demonstrate with??


ops$tkyte%ORA11GR1> CREATE TABLE t
  2  (
  3    dt  date
  4  )
  5  PARTITION BY RANGE (dt)
  6  (
  7    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) tablespace users,
  8    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) tablespace users
  9  )
 10  /

Table created.

ops$tkyte%ORA11GR1> select dbms_metadata.get_ddl( 'TABLE', 'T' ) from dual;

DBMS_METADATA.GET_DDL('TABLE','T')
-------------------------------------------------------------------------------

  CREATE TABLE "OPS$TKYTE"."T"
   (    "DT" DATE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT)
  TABLESPACE "USERS"
  PARTITION BY RANGE ("DT")
 (PARTITION "PART1"  VALUES LESS THAN (TO_DATE(' 2003-03-13 00:00:00', 'SYYYY-M
M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  <b>TABLESPACE "USERS"</b> NOCOMPRESS ,
 PARTITION "PART2"  VALUES LESS THAN (TO_DATE(' 2003-03-14 00:00:00', 'SYYYY-MM
-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  <b>TABLESPACE "USERS"</b> NOCOMPRESS )



ops$tkyte%ORA11GR1> alter table t move partition part1 tablespace tools;

Table altered.

ops$tkyte%ORA11GR1> alter table t move partition part2 tablespace tools;

Table altered.

ops$tkyte%ORA11GR1> select dbms_metadata.get_ddl( 'TABLE', 'T' ) from dual;

DBMS_METADATA.GET_DDL('TABLE','T')
-------------------------------------------------------------------------------

  CREATE TABLE "OPS$TKYTE"."T"
   (    "DT" DATE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT)
  TABLESPACE "USERS"
  PARTITION BY RANGE ("DT")
 (PARTITION "PART1"  VALUES LESS THAN (TO_DATE(' 2003-03-13 00:00:00', 'SYYYY-M
M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  <b>TABLESPACE "TOOLS"</b> NOCOMPRESS ,
 PARTITION "PART2"  VALUES LESS THAN (TO_DATE(' 2003-03-14 00:00:00', 'SYYYY-MM
-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  <b>TABLESPACE "TOOLS"</b> NOCOMPRESS )




Laxman, July 04, 2008 - 12:27 am UTC

I got the answer from your other thread

alter table <table_name> modify default attributes tablespace <tablespace_name>;

Thanks !!! :)

Moving tablespace from one database to another

DK, July 10, 2009 - 8:18 am UTC

Hi Tom,

I have a tablespace T1 for schema A in database DEV and another tablespace T2 for schema B in database TEST.

Can I move tablespace T1 over to T2 in the above case, and how?
Tom Kyte
July 14, 2009 - 5:14 pm UTC

read about transporting tablespaces - well documented, search for it in the documentation relevant to your release.
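
A minimal sketch of the transportable tablespace flow with Data Pump (it brings T1 over as-is; the tablespace must be self-contained, and the directory objects, paths and credentials here are hypothetical - add remap_schema on the import if the owning schema differs):

-- on DEV
alter tablespace t1 read only;
expdp system/password@dev directory=dp_dir dumpfile=t1_meta.dmp transport_tablespaces=T1

-- copy the T1 datafiles and t1_meta.dmp to the TEST server, then on TEST
impdp system/password@test directory=dp_dir dumpfile=t1_meta.dmp transport_datafiles='/u01/oradata/test/t1_01.dbf'
alter tablespace t1 read write;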


FOR PARTITIONED TABLES AND IOTS?

Rahul, October 08, 2009 - 10:42 pm UTC

Moving tables to different tablespaces is not allowed for partitioned tables or IOTs!

Moving tables from Dictionary to local tablespace slow

Sujit, November 24, 2009 - 4:27 pm UTC

While trying to move a few tables from DMT to LMT (table size 7 million rows, 1 GB), it was taking a long time (more than 3 hours and still not completed, so I killed it); the same type of move from LMT to LMT took less than 30 minutes. From your comments above it looks like CREATE TABLE AS SELECT or dbms_redefinition may be the approach to take. I am just wondering if there is any other approach that can be used.

If I convert the DMT to an LMT and then try the move, will it be helpful?


Note: I am using 10g on Windows 64-bit.
Tom Kyte
November 25, 2009 - 12:04 pm UTC

... 1 GB), it was taking a long time (>3 hrs, still not completed, so I killed it) ...

how did you do it?

It could well be that in the DMT you had thousands of extents and that will take a long long time to deallocate. In that case, a create table new_table tablespace LMT as select * from old_table; rename old_table to junk, rename new_table to old_table and then drop the junk (that will take a while).


... If I convert DMT to LMT and then try to move it , will it be helpfull?...

if the performance issue was the result of deallocating extents - yes, it could well help tremendously
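
A minimal sketch of that CTAS-and-rename approach (names are hypothetical; indexes, constraints, grants and triggers have to be recreated on the new table):

create table new_table tablespace lmt_ts as select * from old_table;
rename old_table to old_table_junk;
rename new_table to old_table;
drop table old_table_junk;  -- deallocating the old DMT extents may still take a while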

That was the reason

Sujit, November 28, 2009 - 2:56 pm UTC

I used Ctrl-C to kill it.

Changing to local and then moving worked like a charm. I coalesced the tablespace, then converted it to local, and then moved; it finished in 3 minutes for a 1 GB table.

moveall script - what about LOBs? (9.2.0.8)

A reader, January 20, 2010 - 7:49 pm UTC

Hi Tom,
1) Any pointers on how to move tables with LOB columns?

I believe the LOB data will have to be moved as well. Moveall does not handle LOBs?

2) Also, what if the indexes are in a different tablespace?

The script only asks for one tablespace.
How can I make it ask for a different tablespace for the indexes?

3) Can I rebuild an index in the same tablespace it existed in before?
Tom Kyte
January 21, 2010 - 8:17 am UTC

moveall can be freely modified by any programmer on the planet to do anything they wish it to do :)

You see the concept - you can do whatever you like with it.

the moving of lobs is done in conjunction with the moving of tables - it gets more complex since you would specify the tablespace for the table and the tablespace for each lob/lob segment.


As for making the script more than it is - you can do that, you can make it do whatever you like - you are programmer :)



3) yes. alter index i rebuild; that would do it for index I
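
A minimal sketch of moving a table into one tablespace and its LOB segment into another (table, column and tablespace names are hypothetical):

alter table t move tablespace data_ts
  lob (doc) store as (tablespace lob_ts);

-- the table's indexes are unusable afterwards and need rebuilding, e.g.
alter index t_pk rebuild tablespace idx_ts;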

Move LOBS and LONG

A reader, January 21, 2010 - 9:44 am UTC

Thanks very much.
Q) I see that the MOVEALL script does not take care of columns with the LONG datatype, as they need special care.

Can you please review these steps for moving LONGs and LOBs?

(Some tables have either LONG / LONG RAW columns

or LOBs - not both of them.)

1) Moving LONG or LONG RAW


Move table testl from tablespace X to tablespace Y.
Table testl (n number, l long)

Create a new table testl2 (n number, l long) in the new tablespace Y.

Use the SQL*Plus COPY command to load table testl2 from testl.

Drop table testl.

Create table testl again in the new tablespace Y.

Use the SQL*Plus COPY command to load table testl from testl2.

My concern is: if I use exp/imp, how will it land in the correct new tablespace Y even if I recreate the table in tablespace Y, because the original export would have tablespace X in its definition?

2) Moving LOBs

For moving LOBs, do I also have to reload the data?
Or do I just need to do the following:
ALTER TABLE CTABLE
MOVE LOB ("XYZ") STORE AS (TABLESPACE "Y");


Tom Kyte
January 21, 2010 - 10:44 am UTC

1) or use datapump export and import.

the sqlplus copy will only work with LONG - not long raw.

to use datapump, you can use the remap_tablespace parameter.

to use exp/imp, you would a) exp, b) drop the tables, c) recreate them with the right tablespace, d) import with ignore=y and cross your fingers and hope it works.


2) when you tested it, what happened?

ops$tkyte%ORA11GR2> alter table t move lob(y) store as (tablespace users);

Table altered.


it moves the table AND the lob. You can see that via:

ops$tkyte%ORA11GR2> create table t ( x int constraint t_pk primary key, y clob );

Table created.

ops$tkyte%ORA11GR2> insert into t values ( 1, 'hello world' );

1 row created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select index_name, status from user_indexes where table_name = 'T';

INDEX_NAME                     STATUS
------------------------------ --------
T_PK                           VALID
SYS_IL0000087124C00002$$       VALID

ops$tkyte%ORA11GR2> alter table t move
  2  lob(y) store as (tablespace users);

Table altered.

ops$tkyte%ORA11GR2> select index_name, status from user_indexes where table_name = 'T';

INDEX_NAME                     STATUS
------------------------------ --------
SYS_IL0000087124C00002$$       VALID
T_PK                           UNUSABLE

ops$tkyte%ORA11GR2> alter index t_pk rebuild;

Index altered.

ops$tkyte%ORA11GR2> select index_name, status from user_indexes where table_name = 'T';

INDEX_NAME                     STATUS
------------------------------ --------
SYS_IL0000087124C00002$$       VALID
T_PK                           VALID



the fact the index on the primary key went invalid indicates the table was moved as well as the lob.

Moveall and Triggers

A reader, January 21, 2010 - 9:55 am UTC

Also,

do I need to drop the triggers and recreate them because of the move operation?

Will the move fire the triggers?

I was looking at 9i OEM and it generates a TCL script to reorg the tablespace, and it drops some of the triggers and recreates them - not all of them.
Tom Kyte
January 21, 2010 - 10:46 am UTC

why don't you read up on alter table t move and see what it does.

No, the triggers are OK. alter table t move just moves data, NOTHING ELSE changes - the constraints, grants, triggers, everything, stay in place.

Very Helpful

A reader, January 21, 2010 - 12:59 pm UTC

Unfortunately it is 9.2.0.8.

Does Oracle simplify this in 11gR2?

Moving is always a pain, even in the real world.

Thank you
Tom Kyte
January 25, 2010 - 1:27 am UTC

so question to you:

why move?
what is so hard about an alter move statement????? It sort of made it EASY (it was hard before that statement)

MOVEALL and storage details

A reader, January 21, 2010 - 1:43 pm UTC

What is the significance of specifying storage parameters when moving to a different tablespace?

The script even generates the following:
storage ( initial 49152 next
minextents 1 maxextents 2147483645
pctincrease freelists 1);

What happens if I just use

alter table x move tablespace y;

without the storage parameters?


Tom Kyte
January 25, 2010 - 1:32 am UTC

did you take my advice and *read*?

anyway, if you use storage parameters, it'll use what you supply.

if you do not, it'll use what currently exists.

moveall and Index tablespace 9208

A reader, January 21, 2010 - 2:25 pm UTC

I have a tablespace which is used only for indexes.

When I rebuild an index as a result of an alter table move,

will it clear up the holes in the index tablespace as I am rebuilding it?

I also want to shrink the index tablespace so that all data is packed together and all free space is together. This is 9.2.0.8.

Will the index rebuild defrag like this?

Or do I need to rebuild into a different tablespace, then
coalesce the original index tablespace, and then rebuild the indexes back into the original index tablespace?
Tom Kyte
January 25, 2010 - 1:38 am UTC

The index rebuild will rebuild the index *somewhere* in the tablespace, no assurances *where*.

I would rebuild into a new tablespace, when old is empty, just drop it and rename new one if necessary.

actually, I would start with this question though before I did anything:

why? why are you doing this? are you sure it is even worth our time?

Read only tablespaces

Reviewer, April 05, 2011 - 6:41 am UTC

Hi Tom,
We have a partitioned table with each partition in a different tablespace. After a specific period, we compress the old tablespaces and change them to read only.
If we need to add a NOT NULL column (with a default value) to the table, we can bring all of the partitions' tablespaces back to read/write - but do we need to uncompress the tablespaces as well?
Regards,

Tom Kyte
April 12, 2011 - 12:19 pm UTC

versions are always sort of relevant. I'll assume 11g, which has a fast add column with default value - that doesn't touch the table at all.

ops$tkyte%ORA11GR2> CREATE TABLE t
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION p1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) tablespace p1,
 10    PARTITION p2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) tablespace p2,
 11    PARTITION p3 VALUES LESS THAN (MAXVALUE)
 12  )
 13  /

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t select to_date( '12-mar-2003', 'dd-mon-yyyy')+mod(rownum,3), rownum, rownum from all_objects;

72557 rows created.

ops$tkyte%ORA11GR2> select count(*) from t partition(p1);

  COUNT(*)
----------
     24185

ops$tkyte%ORA11GR2> select count(*) from t partition(p2);

  COUNT(*)
----------
     24186

ops$tkyte%ORA11GR2> select count(*) from t partition(p3);

  COUNT(*)
----------
     24186

ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select partition_name, compression, blocks, empty_blocks from user_tab_partitions where table_name = 'T';

PARTITION_NAME                 COMPRESS     BLOCKS EMPTY_BLOCKS
------------------------------ -------- ---------- ------------
P1                             DISABLED        110            0
P2                             DISABLED        110            0
P3                             DISABLED        110            0

ops$tkyte%ORA11GR2> alter table t compress;

Table altered.

ops$tkyte%ORA11GR2> alter table t move partition p1;

Table altered.

ops$tkyte%ORA11GR2> alter table t move partition p2;

Table altered.

ops$tkyte%ORA11GR2> alter tablespace p1 read only;

Tablespace altered.

ops$tkyte%ORA11GR2> alter tablespace p2 read only;

Tablespace altered.

ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select partition_name, compression, blocks, empty_blocks from user_tab_partitions where table_name = 'T';

PARTITION_NAME                 COMPRESS     BLOCKS EMPTY_BLOCKS
------------------------------ -------- ---------- ------------
P1                             ENABLED          72            0
P2                             ENABLED          72            0
P3                             ENABLED         110            0

ops$tkyte%ORA11GR2> alter table t add a number default 5 not null;

Table altered.

ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select partition_name, compression, blocks, empty_blocks from user_tab_partitions where table_name = 'T';

PARTITION_NAME                 COMPRESS     BLOCKS EMPTY_BLOCKS
------------------------------ -------- ---------- ------------
P1                             ENABLED          72            0
P2                             ENABLED          72            0
P3                             ENABLED         110            0

ops$tkyte%ORA11GR2> 



tablespace doesn't even have to be online.

Free space necessary to move a table

Fernanda, April 27, 2011 - 3:21 pm UTC

How much free space do I need in a target tablespace to move a table?

Example: there is one table whose size is 60 GB, with 10 million rows. I delete 5 million rows. Do I need 60 GB of free space in the target tablespace before applying the "move", or is around 30 GB sufficient?
Tom Kyte
April 27, 2011 - 4:59 pm UTC

It depends, you need enough free space to hold the resulting 5,000,000 records. Whether that is 50% of the original table or 25% or 75% or something else - I do not know.

You might consider simply "alter table t shrink space", it'll do it in place without needing a secondary copy.
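
A minimal sketch of that in-place alternative (the table name is hypothetical; shrink requires an ASSM tablespace and row movement enabled):

alter table t enable row movement;
alter table t shrink space;
-- unlike alter table move, a shrink does not leave the indexes unusable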

error while adding partition in table

manish, May 17, 2011 - 6:55 am UTC

alter table a
add partition b values less than (20) tablespace b;


When I ran the above statement, I got the error below:
"no privileges on tablespace c"

Why did I get this error when I am trying to add the partition in tablespace "b"?
Tom Kyte
May 18, 2011 - 9:36 am UTC

give example, cut and paste from sqlplus
ops$tkyte%ORA11GR2> drop user a cascade;

User dropped.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create user a identified by a quota unlimited on p2;

User created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> grant create session to a;

Grant succeeded.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> CREATE TABLE a.t
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY RANGE (x)
  8  (
  9    PARTITION part1 VALUES LESS THAN (5),
 10    PARTITION part2 VALUES LESS THAN (10)
 11  )
 12  /

Table created.

ops$tkyte%ORA11GR2> connect a/a
Connected.
a%ORA11GR2> alter table t
  2  add partition part3 values less than (20) tablespace p2
  3  /

Table altered.

a%ORA11GR2> select partition_name, tablespace_name from user_tab_partitions;

PARTITION_NAME                 TABLESPACE_NAME
------------------------------ ------------------------------
PART2                          USERS
PART1                          USERS
PART3                          P2


take that example and make it look more like your real set of tables/indexes until it reproduces. then we can talk

Moving Data from one Table to another

Sachin, December 02, 2011 - 12:41 am UTC

As always, your scripts are amazing and so simple that one is left amused at oneself.
I have one small question regarding moving data from one table A to another table B using a SQL script (assuming all column values are to be inserted).
For this, I came up with
insert into A select * from B;

Apart from this, would there be a faster way to pull/move/transfer data from one table to another?
I appreciate your time.
Tom Kyte
December 06, 2011 - 11:03 am UTC

make sure A and B have exactly the same structure; it would perhaps be better for production code to be:

insert into a ( c1, c2, c3, ... ) select c1, c2, c3, ... from b;


You can look at using parallel direct pathing with insert /*+ Append */


http://www.oracle.com/pls/db112/search?remark=quick_search&word=parallel+insert
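
A minimal sketch of a parallel direct-path load (table names as above; the degree of 4 is arbitrary):

alter session enable parallel dml;

insert /*+ append parallel(a, 4) */ into a
select * from b;

commit;  -- a direct-path insert must be committed before the session can read the table again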

startup restrict

A reader, January 30, 2013 - 2:31 pm UTC

Do we have to put the database in restricted mode for an object move? Since there are other schemas in the database, we don't want to create an outage for them.
Tom Kyte
January 31, 2013 - 2:19 pm UTC

no, you can just move it whenever you want.

it will make the table you are moving unavailable for modifications while the move and subsequent index rebuilds take place.

Some indexes VALID after table move

A reader, July 11, 2013 - 5:27 pm UTC

Tom,
I was testing table move and found something strange. I had moved a bunch of tables from one tablespace to another using alter table move statements. I was expecting all indexes on these tables to show status of UNUSABLE in dba_indexes.

However, I noticed that about 68 indexes out of 714 were showing VALID instead of UNUSABLE. I checked in DBA_SEGMENTS and these indexes did not exist there. So technically these indexes need to be rebuilt.

Are there any conditions which would cause this to happen? These are standard tables (no partitioning, no IOT, nothing). The indexes were mostly primary key indexes but some were normal indexes on non-primary key columns. Oracle 11.2.0.3.

Thanks...
Tom Kyte
July 16, 2013 - 4:02 pm UTC

I would need a way to reproduce.

if they didn't have a segment in dba_segments - they didn't exist. The unusable indexes would have had entries in there - we don't get rid of their data, so they would have a segment.


Some indexes VALID after table move

A reader, July 17, 2013 - 3:30 pm UTC

Tom,
Unfortunately, the unusable indexes are showing up as VALID randomly; I cannot find a way to reproduce it at will. I will try moving them a few more times to see if it happens on the same tables.

However, I did notice that every time I move a table out of a tablespace, the space used by its indexes is dropped from DBA_SEGMENTS, i.e., the indexes no longer have an entry in DBA_SEGMENTS but the entry does persist in DBA_INDEXES. Is this unexpected? I have been observing this consistently for all indexes.

Thanks
Tom Kyte
July 17, 2013 - 6:15 pm UTC

sorry, I misspoke on the segment bit - the index segments do go away. My mistake (this is what happens every time I don't actually demonstrate something....)

but the indexes should definitely go unusable.

ops$tkyte%ORA11GR2> create table t ( x int constraint t_pk primary key, y int );

Table created.

ops$tkyte%ORA11GR2> create index t_y_idx on t(y);

Index created.

ops$tkyte%ORA11GR2> insert into t values ( 1 , null );

1 row created.

ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> select segment_name, segment_type from user_segments;

SEGMENT_NAME                   SEGMENT_TYPE
------------------------------ ------------------
T                              TABLE
T_PK                           INDEX
T_Y_IDX                        INDEX

ops$tkyte%ORA11GR2> select index_name, status from user_indexes;

INDEX_NAME                     STATUS
------------------------------ --------
T_PK                           VALID
T_Y_IDX                        VALID

ops$tkyte%ORA11GR2> alter table t move;

Table altered.

ops$tkyte%ORA11GR2> select segment_name, segment_type from user_segments;

SEGMENT_NAME                   SEGMENT_TYPE
------------------------------ ------------------
T                              TABLE

ops$tkyte%ORA11GR2> select index_name, status from user_indexes;

INDEX_NAME                     STATUS
------------------------------ --------
T_PK                           UNUSABLE
T_Y_IDX                        UNUSABLE

ops$tkyte%ORA11GR2>

Some indexes VALID after table move

A reader, July 17, 2013 - 4:06 pm UTC

Tom,
I forgot to copy/paste the test case showing that the index segment gets dropped after the table is moved out of its tablespace.

drop table t_test;

create table t_test (c1 number, c2 varchar2(30)) tablespace users_data;

insert into t_test values (1,'Test');
insert into t_test values (2,'Test');
commit;

create index ind_t_test on t_test(c1) tablespace users_data;

select segment_name, segment_type, tablespace_name
from user_segments where segment_name like '%T_TEST%';

SEGMENT_NAME    SEGMENT_TYPE    TABLESPACE_NAME
--------------- --------------- ------------------------------
IND_T_TEST      INDEX           USERS_DATA
T_TEST          TABLE           USERS_DATA


alter table t_test move tablespace users_data;

Table altered.


select index_name, table_name, tablespace_name, status
from user_indexes where index_name='IND_T_TEST';

INDEX_NAME    TABLE_NAME   TABLESPACE_NAME     STATUS
------------- ------------ ------------------- --------
IND_T_TEST    T_TEST       USERS_DATA          UNUSABLE


select segment_name, segment_type, tablespace_name
from user_segments where segment_name like '%T_TEST%';

SEGMENT_NAME    SEGMENT_TYPE    TABLESPACE_NAME
--------------- --------------- ------------------------------
T_TEST          TABLE           USERS_DATA


The index segment is gone.


alter index ind_t_test rebuild tablespace users_data;

Index altered.


select segment_name, segment_type, tablespace_name
from user_segments where segment_name like '%T_TEST%';

SEGMENT_NAME    SEGMENT_TYPE    TABLESPACE_NAME
--------------- --------------- ------------------------------
IND_T_TEST      INDEX           USERS_DATA
T_TEST          TABLE           USERS_DATA



Some indexes VALID after table move

A reader, July 17, 2013 - 6:33 pm UTC

Thanks for confirming about the index segments and the fact that indexes must become UNUSABLE after a table move.

I am testing the move again. It takes about 2-3 hours for the table moves and 3-4 hours for the index rebuilds.

If the indexes still show as VALID even though the table has been moved, I will follow up with Support via an SR. If there is a resolution, I will post it here.

Some indexes VALID after table move

A reader, July 17, 2013 - 8:05 pm UTC

Tom,
After some digging, I found out that deferred segment creation was causing one part of this issue.

Some tables and their indexes were created, but data was never inserted into them. The tables and indexes did not have any segments allocated to them, so the indexes kept showing VALID even after the table move.

The second part was caused by tables which once contained data but had since been truncated or had all rows deleted. The indexes on such tables do occupy segments; the index segment is not dropped when the table is moved, and the index keeps showing a VALID status.

In both cases, the index remains VALID even if data is inserted into the table after the move and without rebuilding the index. Though the behavior is logical, it did catch me by surprise.

Tom Kyte
July 17, 2013 - 9:08 pm UTC

thanks for the feedback, glad it is cleared up now!
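
For reference, a minimal sketch (11.2-style syntax, hypothetical names) of the first case described above - a segment-less table whose index stays VALID across a move:

create table t_empty ( x int ) segment creation deferred;
create index t_empty_ix on t_empty(x);

alter table t_empty move;

select index_name, status from user_indexes where table_name = 'T_EMPTY';
-- with no segment ever allocated, the index is reported to remain VALID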

Moving tables to a different tablespace

binayak, June 20, 2014 - 12:17 pm UTC

Tom, it does not work when I try your second method (export & import) for moving tables from one tablespace to another. The import command still creates all the tables in the tablespace they came from.

The steps I followed are below.

>> EXP the user account.

>> Dropped the user account.

>> Recreated the user account without the UNLIMITED TABLESPACE privilege and with the default tablespace set to the new tablespace, with unlimited quota on it.

>> IMP the user's data from the exported dump file.

Please clarify.


Useful Information

Akshay Patil, May 17, 2016 - 5:01 am UTC

Easy to understand.
