Schema migration + unknown table utilization

Question and Answer

Connor McDonald

Thanks for the question, Alex.

Asked: July 14, 2016 - 7:17 pm UTC

Last updated: October 21, 2016 - 2:11 am UTC

Version: 12.1.0.2

Viewed 1000+ times

You Asked

Hi team,

I have pretty much an unanswerable question, but I thought I'd see what advice you can give anyway.

I am working on a project trying to separate many legacy applications that use shared schemas into their own self-contained schemas. There are a couple of problems, the lesser of which is code using the old schema prefix. The more difficult is that, because this is a very old application, no one really knows who's using what in the database. There are 100 or more different modules in this application, all developed and supported by different teams. Right now they're all hitting the same two schemas, and they don't know whether anyone else is a consumer of their data.

So, we have to find a way to surgically move each API's subset of objects and somehow figure out what we're going to break. I'm sure there's an appropriate Dilbert about documentation you can insert here.

The best thing we can come up with is to create synonyms like

'create synonym old_schema.table_t for new_schema.table_t'


and audit their usage to figure out who's accessing them. But that's a lot of auditing to turn on, and it might not even tell us what we need.
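
For illustration, the sort of per-object auditing in question looks something like this (a sketch only, reusing the names from the synonym above; traditional auditing also needs the audit_trail parameter set):

audit select, insert, update, delete on new_schema.table_t by access;

-- then periodically check who has been coming in
select username, obj_name, action_name, timestamp
from   dba_audit_object
where  obj_name = 'TABLE_T';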

So if you can only reply, "that sucks, good luck with that" I totally understand. I just figured you guys may have had to experience this with a client before, and may know some features that could be of some help.

Thanks.

and Connor said...

That sucks...good luck with that. Just kidding :-)

In these cases, some options you could consider:

1) Sledgehammer approach, i.e., take a testing environment, move the database objects to the desired level of schema separation, and see what's busted.
Risks:
- Sometimes tough to sell as an option.
- Needs someone to massage all of the application components

2) Tracing. Use a logon trigger for all of the apps to set sql_trace on for a few hours/days/weeks etc. Every SQL issued will then be captured to trace files. You then have a (near) definitive list of the SQLs that are run, and (depending on your app) the trace file header normally contains sufficient information to deduce which app belongs to which session. A minimal sketch of such a trigger follows the risks below.
Risks:
- Could be a lot of trace data, which might bog down performance
- Some rare app processes (eg monthly report) might be missed
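
A minimal sketch of that logon trigger (the schema names are placeholders):

create or replace trigger trace_legacy_sessions
after logon on database
begin
  -- only trace sessions connecting as the shared legacy schemas
  if sys_context('USERENV','SESSION_USER') in ('LEGACY_SCHEMA1','LEGACY_SCHEMA2') then
    execute immediate 'alter session set sql_trace = true';
  end if;
end;
/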

3) Sampling. Take regular dumps of V$SQL to gain similar information to (2) without the overhead of tracing; see the sketch after the risk below.
Risks:
- Can miss stuff
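
Something along these lines could do the sampling (all names here are illustrative):

-- empty snapshot table
create table sql_samples as
select sysdate as sample_time, sql_id, parsing_schema_name, module, sql_text
from   v$sql
where  1=0;

-- take a sample every 15 minutes
begin
  dbms_scheduler.create_job(
    job_name        => 'SAMPLE_VSQL',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin
                          insert into sql_samples
                          select sysdate, sql_id, parsing_schema_name, module, sql_text
                          from v$sql;
                          commit;
                        end;',
    repeat_interval => 'freq=minutely; interval=15',
    enabled         => true);
end;
/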

Let us know how you go.


Comments

Alex, July 15, 2016 - 2:51 pm UTC

Thanks a lot. Option #3 is something we were planning on doing, but would probably only give us a rough idea because this is a busy system and things seem to age out rather quickly.

I would love to use #1, the sledge, and you got me thinking: we have access to RAT. I have no real-world experience using it, but I believe it spits out a report of things that failed, right? So, what if we made the changes to test, and then ran a production workload for... well... as long as we could? A week would make me feel comfortable, but I don't know if that's possible.

#2 is a great idea and would give the most complete list, but I feel like that would probably generate an overwhelming amount of activity.

Chris Saxon
July 16, 2016 - 4:40 am UTC

RAT is definitely an option.
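
For reference, kicking off a capture looks something like this (the capture name and directory are made up; duration is in seconds):

begin
  dbms_workload_capture.start_capture(
    name     => 'SCHEMA_SPLIT_CAPTURE',
    dir      => 'CAPTURE_DIR',
    duration => 7 * 24 * 60 * 60);  -- the one-week window mentioned above
end;
/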

#2, if you do tracing at level 1 (i.e. no binds, no waits), generally isn't too intrusive. Then just tkprof the whole lot and sort distinct.
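
A rough sketch of the "tkprof the whole lot" step (paths are assumptions):

# format every trace file; record= writes just the SQL text to a side file
trc_dir=$ORACLE_BASE/diag/rdbms/mydb/mydb/trace
for f in "$trc_dir"/*.trc
do
  tkprof "$f" "${f%.trc}.txt" sys=no record="${f%.trc}.sql"
done

# reduce the recorded statements to a distinct list
sort -u "$trc_dir"/*.sql > distinct_sql.txt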

Alex, July 18, 2016 - 6:36 pm UTC

After thinking about how to use RAT some more, I realized there will be a ton of expected failures coming from the application we are moving to a new schema, not just the unknowns we are looking for. The prod workload will still be trying to hit the old schema. Not sure if there is any way around that.

Chris Saxon
July 19, 2016 - 9:14 am UTC

Why will there be many expected failures? If you've created synonyms from the old schema to new, all the SQL should continue to work, no?

Chris

Alex, July 19, 2016 - 3:17 pm UTC

So yes, the plan for the final end state in production was to have synonyms so everything would still work. My previous comments were specific to the testing scenario Connor mentioned in #1, with the addition of RAT.

What I was going for was being able to capture only failed SQL statements from the unknown applications by not using synonyms in test. But since the production workload will be replaying transactions in the current configuration for the app being moved, those would all bomb too. So I seem to be in a situation (for testing) where everything would either work or fail, and we have to decide which would yield the best information.
Chris Saxon
July 19, 2016 - 4:11 pm UTC

Ah, I see what you mean. Yes, in that case RAT would throw errors for all your existing statements. I'm not aware of any workarounds for this (other than the synonym approach).

Chris

Alex, September 26, 2016 - 6:29 pm UTC

Hi guys,

I have another problem to work out with this schema migration effort. I've been told it needs to be automated, which poses some challenges. The one I'm stuck on now is how to dynamically generate an export command with the table list I have. I'm terrible with analytics and I'm pretty sure that would solve my problem.

CREATE TABLE OBJECT_LIST 
   (  SOURCE_SCHEMA VARCHAR(50) NULL, 
      TARGET_SCHEMA VARCHAR(50) NULL,
      OBJECT_NAME   VARCHAR2(100) NULL,
      OBJECT_TYPE   VARCHAR2(20) NULL
   );

insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'WORKFLOW_STEP', 'TABLE');
insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'WORKFLOW_RULE', 'TABLE');
insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'USER_VAL_STRATEGY', 'TABLE');
insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'UMP_STAG_REQUESTS_LOG', 'TABLE');
insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'UMP_STAGING_REQUESTS', 'TABLE');
insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'UMP_STAGING_FILES', 'TABLE');


Looking to generate this to a file:

expdp  PARALLEL=8 DIRECTORY=MY_DIR dumpfile=tables.dmp logfile=exp.log CLUSTER=N TABLES=*list_here*


Also, if you can think of a better way to do this please let me know. I was thinking it might be better to somehow use a parfile. I have full control over how I use the list of objects, the application team is going to give us their list and we can use it however we want.
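
For what it's worth, a parfile has one nice property: nothing inside it needs shell escaping, so the quotes in an INCLUDE clause can be written literally. Something along these lines (contents illustrative):

# tables.par -- invoke as: expdp / parfile=tables.par
SCHEMAS=EAPI
INCLUDE=TABLE:"IN ('WORKFLOW_STEP','WORKFLOW_RULE','USER_VAL_STRATEGY')"
DIRECTORY=MY_DIR
DUMPFILE=tables.dmp
LOGFILE=exp.log
CLUSTER=N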
Chris Saxon
September 27, 2016 - 12:52 am UTC

How about something like this (you can customise to your needs)


SQL> drop table object_list purge;

Table dropped.

SQL>
SQL> CREATE TABLE OBJECT_LIST
  2     (  SOURCE_SCHEMA VARCHAR(50) NULL,
  3        TARGET_SCHEMA VARCHAR(50) NULL,
  4        OBJECT_NAME   VARCHAR2(100) NULL,
  5        OBJECT_TYPE   VARCHAR2(20) NULL
  6     );

Table created.

SQL>
SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'WORKFLOW_STEP', 'TABLE');

1 row created.

SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'WORKFLOW_RULE', 'TABLE');

1 row created.

SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'USER_VAL_STRATEGY', 'TABLE');

1 row created.

SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'UMP_STAG_REQUESTS_LOG', 'TABLE');

1 row created.

SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'UMP_STAGING_REQUESTS', 'TABLE');

1 row created.

SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'ORION', 'UMP_STAGING_FILES', 'TABLE');

1 row created.

SQL>
SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('SCOTT', 'ORION', 'AWORKFLOW_STEP', 'TABLE');

1 row created.

SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('SCOTT', 'ORION', 'AWORKFLOW_RULE', 'TABLE');

1 row created.

SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('SCOTT', 'ORION', 'AUSER_VAL_STRATEGY', 'TABLE');

1 row created.

SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'XYZ', 'BUMP_STAG_REQUESTS_LOG', 'TABLE');

1 row created.

SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'XYZ', 'BUMP_STAGING_REQUESTS', 'TABLE');

1 row created.

SQL> insert into object_list (source_schema, target_schema, object_name, object_type) VALUES ('EAPI', 'XYZ', 'BUMP_STAGING_FILES', 'TABLE');

1 row created.

SQL>
SQL>
SQL> with obj_list as (
  2    select
  3      source_schema,
  4      target_schema,
  5      source_schema||'_'||target_schema fname,
  6      listagg(''''||object_name||'''',',') within group ( order by object_name ) as concat_str
  7    from  object_list
  8    group by source_schema, target_schema
  9  )
 10  select 'expdp u/p@db schemas='||source_schema||' include=TABLE:"IN ('||
 11          concat_str||
 12         ')" directory=MY_DIR dumpfile='||fname||'.dmp logfile='||fname||'.log'
 13  from  obj_list;

'EXPDPU/P@DBSCHEMAS='||SOURCE_SCHEMA||'INCLUDE=TABLE:"IN('||CONCAT_STR||')"DIRECTORY=MY_DIRDUMPFILE='||FNAME||'.DMPLOGFI
------------------------------------------------------------------------------------------------------------------------
expdp u/p@db schemas=EAPI include=TABLE:"IN ('BUMP_STAGING_FILES','BUMP_STAGING_REQUESTS','BUMP_STAG_REQUESTS_LOG')" dir
ectory=MY_DIR dumpfile=EAPI_XYZ.dmp logfile=EAPI_XYZ.log

expdp u/p@db schemas=EAPI include=TABLE:"IN ('UMP_STAGING_FILES','UMP_STAGING_REQUESTS','UMP_STAG_REQUESTS_LOG','USER_VA
L_STRATEGY','WORKFLOW_RULE','WORKFLOW_STEP')" directory=MY_DIR dumpfile=EAPI_ORION.dmp logfile=EAPI_ORION.log

expdp u/p@db schemas=SCOTT include=TABLE:"IN ('AUSER_VAL_STRATEGY','AWORKFLOW_RULE','AWORKFLOW_STEP')" directory=MY_DIR
dumpfile=SCOTT_ORION.dmp logfile=SCOTT_ORION.log


3 rows selected.

SQL>
SQL>
SQL> with obj_list as (
  2    select
  3      source_schema,
  4      target_schema,
  5      source_schema||'_'||target_schema fname,
  6      source_schema||':'||target_schema remap,
  7      listagg(''''||object_name||'''',',') within group ( order by object_name ) as concat_str
  8    from  object_list
  9    group by source_schema, target_schema
 10  )
 11  select 'impdp u/p@db directory=MY_DIR dumpfile='||fname||'.dmp logfile=imp_'||fname||'.log remap_schema='||remap
 12  from  obj_list;

'IMPDPU/P@DBDIRECTORY=MY_DIRDUMPFILE='||FNAME||'.DMPLOGFILE=IMP_'||FNAME||'.LOGREMAP_SCHEMA='||REMAP
------------------------------------------------------------------------------------------------------------------------
impdp u/p@db directory=MY_DIR dumpfile=EAPI_ORION.dmp logfile=imp_EAPI_ORION.log remap_schema=EAPI:ORION
impdp u/p@db directory=MY_DIR dumpfile=EAPI_XYZ.dmp logfile=imp_EAPI_XYZ.log remap_schema=EAPI:XYZ
impdp u/p@db directory=MY_DIR dumpfile=SCOTT_ORION.dmp logfile=imp_SCOTT_ORION.log remap_schema=SCOTT:ORION

3 rows selected.

SQL>


So you get an expdp/impdp pair for each distinct source/target schema.

Alex, September 27, 2016 - 9:12 pm UTC

Thanks a lot, this is great; however, I did stumble down another rabbit hole. expdp is very picky about formats and I can't get it to accept the command I generated; it always seems to want everything in one giant string.

For example:

Export: Release 11.2.0.3.0 - Production on Tue Sep 27 19:19:50 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/backup/exports/expdat.dmp"
ORA-27038: created file already exists
Additional information: 1


It looks like it can't get past the table list and doesn't recognize the dumpfile parameter name.

This is where I got off on a tangent; it appears you can use a query to reference your table list:

http://www.gokhanatil.com/2011/06/oracle-datapump-include-parameter-and-limit-of-4000-chars.html

But I can't get this to work either. I tried two techniques:

SELECT 'expdp / schemas='
  || source_schema
  ||' include=TABLE:"IN (SELECT object_name FROM schmig.object_list where object_type = ''TABLE'')"' 
  || ' directory=DATA_PUMP_DIR dumpfile='
  || source_schema ||'_' ||target_schema
  ||'.dmp logfile='
  || source_schema ||'_' ||target_schema
  ||'.log' AS expcmd
FROM schmig.object_list
WHERE ROWNUM = 1;

SELECT 'expdp / schemas='
  || source_schema
  ||' include=TABLE:"IN (SELECT object_name FROM schmig.object_list)"' 
  ||  ' query="where object_type = ''TABLE''"'
  || ' directory=DATA_PUMP_DIR dumpfile='
  || source_schema ||'_' ||target_schema
  ||'.dmp logfile='
  || source_schema ||'_' ||target_schema
  ||'.log' AS expcmd
FROM schmig.object_list
WHERE ROWNUM = 1;


In both cases it barfs here:

LRM-00116: syntax error at 'object_type' following 'where'

I also tried adding SCHMIG to the schema list in the expdp command, didn't help.
Connor McDonald
September 28, 2016 - 7:37 am UTC

well...we sorta need to see the expdp command you were running :-)

Alex, September 28, 2016 - 2:12 pm UTC

Sorry....

expdp / schemas=EAPI,SCHMIG include=TABLE:"IN (SELECT object_name FROM schmig.object_list where object_type = 'TABLE')" directory=DATA_PUMP_DIR dumpfile=EAPI_ORION.dmp logfile=EAPI_ORION.log


expdp / schemas=EAPI include=TABLE:"IN (SELECT object_name FROM schmig.object_list)" query="where object_type = 'TABLE'"
 directory=DATA_PUMP_DIR dumpfile=EAPI_ORION.dmp logfile=EAPI_ORION.log



Alex, September 28, 2016 - 9:07 pm UTC

This has come full circle back to a linefeed-type issue. I found out what my other problem was: it wasn't escaping any special characters. So now I'm back to expdp not recognizing the full statement:

SELECT 'expdp / schemas='
  || source_schema || ',' || 'SCHMIG' || chr(10) \
  ||'include=TABLE:\"IN \(SELECT object_name FROM schmig.object_list where object_type = \''TABLE\''\)\"' || chr(10) \
  ||'directory=DATA_PUMP_DIR dumpfile='
  || source_schema ||'_' ||target_schema
  ||'.dmp logfile='
  || source_schema ||'_' ||target_schema
  ||'.log' AS expcmd
FROM schmig.object_list
WHERE ROWNUM = 1;

expdp / schemas=EAPI,SCHMIG
include=TABLE:\"IN \(SELECT object_name FROM schmig.object_list where object_type = \'TABLE\'\)\"
directory=DATA_PUMP_DIR dumpfile=EAPI_KSS.dmp logfile=EAPI_KSS.log

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/backup/PPRTL1/exports/expdat.dmp"
ORA-27038: created file already exists
Additional information: 1


A10_export.sh[3]: (SELECT: not found [No such file or directory]


A10_export.sh is just a file containing the expdp command generated by the query above.

Connor McDonald
September 29, 2016 - 12:51 am UTC

OK - it might be the asktom app mangling your post, but the query you posted doesn't even run for me

SQL> SELECT 'expdp / schemas='
  2    || source_schema || ',' || 'SCHMIG' || chr(10) \
  3    ||'include=TABLE:\"IN \(SELECT object_name FROM schmig.object_list where object_type = \''TABLE\''\)\"' || chr(10) \
  4    ||'directory=DATA_PUMP_DIR dumpfile='
  5    || source_schema ||'_' ||target_schema
  6    ||'.dmp logfile='
  7    || source_schema ||'_' ||target_schema
  8    ||'.log' AS expcmd
  9  FROM schmig.object_list
 10  WHERE ROWNUM = 1;
  || source_schema || ',' || 'SCHMIG' || chr(10) \
                                                 *
ERROR at line 2:
ORA-00911: invalid character



So I edited it a little

SQL> SELECT 'expdp / schemas='
  2    || source_schema || ',' || 'SCHMIG' || ' \'|| chr(10)
  3    ||'include=TABLE:\"IN \(SELECT object_name FROM schmig.object_list where object_type = \''TABLE\''\)\"' || ' \'|| chr(10)
  4    ||'directory=DATA_PUMP_DIR dumpfile='
  5    || source_schema ||'_' ||target_schema
  6    ||'.dmp logfile='
  7    || source_schema ||'_' ||target_schema
  8    ||'.log' AS expcmd
  9  FROM object_list
 10  WHERE ROWNUM = 1;

EXPCMD
----------------------------------------------------------------------------------------------------------------------------------
expdp / schemas=EAPI,SCHMIG \
include=TABLE:\"IN \(SELECT object_name FROM schmig.object_list where object_type = \'TABLE\'\)\" \
directory=DATA_PUMP_DIR dumpfile=EAPI_ORION.dmp logfile=EAPI_ORION.log


and took that over to my VM and ran it, once again with a little editing to make it appropriate for my system, but (hopefully) no syntactic changes

expdp scott/tiger schemas=SCOTT \
include=TABLE:\"IN \(SELECT object_name FROM user_objects where object_type = \'TABLE\'\)\" \
directory=TEMP dumpfile=EAPI_ORION.dmp logfile=EAPI_ORION.log


and it works fine

Export: Release 12.1.0.2.0 - Production on Wed Sep 28 20:46:25 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01":  scott/******** schemas=SCOTT include=TABLE:"IN (SELECT object_name FROM user_objects where object_type = 'TABLE')" directory=TEMP dumpfile=EAPI_ORION.dmp logfile=EAPI_ORION.log 
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 56.93 MB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."SAMPLE_DATASET_INTRO"              11.84 MB   10000 rows
. . exported "SCOTT"."SAMPLE_DATASET_PARTN"              11.79 MB   10000 rows
. . exported "SCOTT"."SAMPLE_DATASET_EVOLVE"             11.76 MB   10000 rows
. . exported "SCOTT"."SAMPLE_DATASET_FULLTEXT"           11.74 MB   10000 rows
. . exported "SCOTT"."SAMPLE_DATASET_XMLDB_HOL"          12.72 MB   10000 rows
. . exported "SCOTT"."SAMPLE_DATASET_XQUERY"             11.73 MB   10000 rows
. . exported "SCOTT"."DEPT"                              6.023 KB       4 rows
. . exported "SCOTT"."EMP"                               8.773 KB      14 rows
. . exported "SCOTT"."SALGRADE"                          5.953 KB       5 rows
. . exported "SCOTT"."BONUS"                                 0 KB       0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
  /tmp/EAPI_ORION.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Sep 28 20:48:11 2016 elapsed 0 00:01:43



So hopefully that SQL gets you working

Alex, September 30, 2016 - 6:02 pm UTC

For some reason it doesn't work in a script, and I'm at a loss. I was able to reproduce what you gave me (thanks). But when I try to run the expdp from the file I created, the same command doesn't work, yet it does when I just copy and paste it to the command line:

#!/bin/ksh

sqlplus -S "/" <<EOF

SET LINESIZE 160
SET PAGESIZE 9999
SET TERMOUT OFF
SET ECHO OFF
SET FEEDBACK OFF
SET HEADING OFF
SET TRIM ON

spool A10_export.sh

SELECT 'expdp / schemas='
|| source_schema || ',' || 'SCHMIG' || ' \'|| chr(10)
||'include=TABLE:\"IN \(SELECT object_name FROM schmig.object_list where object_type = \''TABLE\''\)\"' || ' \'|| chr(10)
||'directory=DATA_PUMP_DIR dumpfile='
|| source_schema ||'_' ||target_schema
||'.dmp logfile='
|| source_schema ||'_' ||target_schema
||'.log' AS expcmd
FROM schmig.object_list
WHERE ROWNUM = 1;

spool off
exit
EOF

/u0/aadamowi/demo
(qa-exd-db01.bos01.corp.akamai.com:aadamowi)> ksh run_migration.sh

expdp / schemas=EXTAPI,SCHMIG \
include=TABLE:\"IN \(SELECT object_name FROM schmig.object_list where object_type = \'TABLE\'\)\" \
directory=DATA_PUMP_DIR dumpfile=EXTAPI_KSS.dmp logfile=EXTAPI_KSS.log


/u0/aadamowi/demo
(qa-exd-db01.bos01.corp.akamai.com:aadamowi)> ksh A10_export.sh

Export: Release 11.2.0.3.0 - Production on Fri Sep 30 17:53:52 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/backup/PPRTL1/exports/expdat.dmp"
ORA-27038: created file already exists
Additional information: 1


A10_export.sh[3]: (SELECT: not found [No such file or directory]


Connor McDonald
October 01, 2016 - 1:34 am UTC

Step 1 - sort out this error first:

ORA-31641: unable to create dump file "/backup/PPRTL1/exports/expdat.dmp"
ORA-27038: created file already exists
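
That one generally just means a dump file from an earlier run is still sitting in the target directory. Remove it, or (11g onwards) tell Data Pump to overwrite it:

expdp / schemas=EAPI directory=DATA_PUMP_DIR dumpfile=EAPI_ORION.dmp logfile=EAPI_ORION.log reuse_dumpfiles=yes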


Alex, September 30, 2016 - 8:41 pm UTC

Ok, I think I finally got it now. It was indeed extra spaces being thrown at the end of the first lines, and I hadn't gotten the right combination of sqlplus commands to fix it. It ended up being set tab off + set trimspool on.
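
In other words, the working spool header ended up something like this (the script's original settings plus the two new ones):

SET LINESIZE 160
SET PAGESIZE 9999
SET TERMOUT OFF
SET ECHO OFF
SET FEEDBACK OFF
SET HEADING OFF
SET TRIM ON
SET TRIMSPOOL ON
SET TAB OFF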

Although I doubt it will be the last time you hear from me on this project, I do appreciate the help on this chunk. I feel like I owe you guys beers at this point.
Connor McDonald
October 01, 2016 - 1:36 am UTC

We'll send you the delivery address :-)


Glad you got it working

Alex, October 20, 2016 - 8:23 pm UTC

Hi guys,

I'm having another issue with this that I'm a little stumped on. In order to keep applications running smoothly that have code pointing to the schema I'm moving them out of, the plan was to rename the tables and create synonyms in their place pointing to the new schema.

I've discovered many tables are using the max identifier length of 30 characters, so I can't rename them using the full table name. I'm afraid that if I trim them so I can glue "_old" on the end, that will make backing out much more difficult.

I'd like to be able to just drop them, but at best I could probably only get away with that after the deployment is done, and the synonyms need to be in place before that. This will also be automated, running one step after another, and I worry the export step might fail while the rest of the job continues on and drops all the tables.
Connor McDonald
October 21, 2016 - 2:11 am UTC

You could move the whole thing to our cloud, where we're running 12.2 and you get 128-byte name lengths :-)

How about this for a strategy (or something similar).

I'm assuming that the tables are named in upper case (as is the norm in Oracle). Why not rename them to lower case?

rename MY_BIG_LONG_FAT_TABLE_NAME to "my_big_long_fat_table_name";
create synonym MY_BIG_LONG_FAT_TABLE_NAME for "my_big_long_fat_table_name";
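
And since the whole thing has to be automated, the rename/synonym pairs could be generated from the OBJECT_LIST table from earlier, something like the sketch below. (Note that RENAME only works on objects in your own schema, so run the generated statements as the owning user.)

select 'rename '||object_name||' to "'||lower(object_name)||'";'
from   object_list
where  object_type = 'TABLE';

select 'create synonym '||object_name||' for "'||lower(object_name)||'";'
from   object_list
where  object_type = 'TABLE';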

