Question and Answer

Chris Saxon

Thanks for the question, Akhila.

Asked: December 02, 2016 - 9:26 am UTC

Last updated: July 19, 2017 - 3:42 pm UTC

Version: Oracle 11g

Viewed 1000+ times

You Asked

Hi!

I have a few questions about data migration/replication from Oracle:

1. We have a column (of datatype NUMBER) in an Oracle table which stores the transaction amount in the format 10,52 (instead of 10.52). When we migrate (or replicate) data from Oracle, is it possible to convert 10,52 to 10.52 so we can import the converted format into our target database?

2. If the above is possible, can you please suggest the best tool to handle this migration? I have read that GoldenGate supports such format and data transformations during replication, but I would like to know if there are other options.

Kindly assist. Thank you!

and Chris said...

The display of decimal and group separators for numbers is controlled by your NLS settings. So provided it really is a number, all you need to do is change your nls_numeric_characters:

alter session set nls_numeric_characters = ',.';
create table t (
  x number
);

insert into t values ('10,52');
insert into t values ('10.52');

-- the second insert fails:
SQL Error: ORA-01722: invalid number

select * from t;

         X
----------
     10,52

alter session set nls_numeric_characters = '.,';

select * from t;

         X
----------
     10.52


But if your "number" is stored in a string, you'll have to do a data type conversion. GoldenGate supports various transformations:

https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_datainteg.htm#GWUAD451
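If the value is held in a VARCHAR2 column, you can also convert it explicitly without changing your session settings by passing nls_numeric_characters as the third argument to TO_NUMBER. A minimal sketch (the format mask and sample value are illustrative):

```sql
-- the third argument overrides the session NLS settings for this call only;
-- D in the format mask matches the decimal separator it defines (here ',')
select to_number('10,52', '999D99', 'nls_numeric_characters='',.''') as amt
from   dual;
```

This returns the NUMBER 10.52 regardless of what the session's own nls_numeric_characters is set to, which makes it safer for one-off conversions during a migration.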

Whether or not this is the "best" approach depends upon what your requirements are...


Comments

Akhila Selva Ganesh, December 02, 2016 - 2:50 pm UTC

Hi Chris

Thank you! I understand that we need to change the NLS settings of the source database prior to migration and then migrate the data. However, is it possible to retain the source database's NLS settings as they are, and convert the data only as it is stored in the target database?
Chris Saxon
December 02, 2016 - 2:58 pm UTC

The NLS settings are for your client. As long as you're transferring from a number to a number (with no conversion in between) you'll be fine.

Akhila Selva Ganesh, December 02, 2016 - 3:30 pm UTC

Okay, but I am not sure whether this applies only if the target database is also Oracle. I forgot to mention that the target database is MS SQL Server. So neither an export/import of the data from Oracle in the desired format nor active replication from Oracle to SQL Server supports this?
Chris Saxon
December 02, 2016 - 4:56 pm UTC

A tiny detail! I would suspect that you'll still be fine.

But if you're looking for help migrating off Oracle to SQL Server you'll need to find the MS equivalent of AskTom ;)

Akhila Selva Ganesh, December 05, 2016 - 10:56 am UTC

Yes, I know :( and I agree!! But we are looking to migrate one of our databases to SQL Server. To understand clearly, can you please elaborate on the term 'client'?
Chris Saxon
December 05, 2016 - 11:57 am UTC

A client is anything that connects to the database.

Akhila Selva Ganesh, December 05, 2016 - 1:00 pm UTC

Thanks Chris!!

Alex, July 17, 2017 - 9:06 pm UTC

Hi team,

I need to move some rather large tables out of a shared schema into their own self-contained schema for their respective application. For average-to-small applications I'm fine using Data Pump. However, as you know, a TB or so of data takes a long time, and I am exploring a GoldenGate zero-downtime approach. I haven't been able to find any examples or documentation for my situation. When you do the initial-load piece of the migration, can it be any form of expdp, or does it have to be a full or schema-level export? This will be happening inside the same database; it's just to break things up.

Thanks.
Chris Saxon
July 18, 2017 - 9:47 am UTC

What exactly are you doing? Just moving tables within a database to new schemas? Can't you do create table as select to build the tables in the first place? And edition-based redefinition to switch over with zero downtime?
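For a one-off move within the same database, CTAS is about as simple as it gets. A minimal sketch (the schema names app_old and app_new and the table orders are hypothetical):

```sql
-- copy the table's data and column definitions into the new schema
create table app_new.orders as
  select * from app_old.orders;

-- CTAS copies data only: indexes, constraints, grants and triggers
-- must be recreated on the new table separately
```

The switchover step (repointing the application at the new schema) is where edition-based redefinition or a short maintenance window would come in.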

Alex, July 18, 2017 - 3:23 pm UTC

Yes, we just need to move tables and other objects to a different schema in the same database. We looked at using dbms_redefinition, but it didn't appear that you could use it across schemas. Also, this needs to be an easily repeatable process. Doing CTAS, recreating all dependent keys, etc., then grabbing all the code, plus using dbms_redefinition, sounds very cumbersome.
Chris Saxon
July 18, 2017 - 4:23 pm UTC

You're repeatedly moving tables from one schema to another?! Surely this is just a one-off thing? What exactly is your requirement here?

Yes, DBMS_REDEFINITION has to be in the same schema, so it's not going to help you.

Alex, July 18, 2017 - 5:34 pm UTC

We have a bunch of legacy applications sharing a schema. On the application side they are being converted to a microservice architecture. On the database side we want a corresponding bounded context model aka their own schema. Most of them are a reasonable size for datapump. But we have a handful that won't be sensible for that.

So yes it is a one shot deal per application, but there are half a dozen or so apps.

Our DBAs said they have done a GG migration like this before, but they do not have my full confidence and I am trying to fact check whether this can be done for a subset of tables.

Any other ideas are welcome.
Chris Saxon
July 19, 2017 - 3:40 pm UTC

I see. I get why you want to split them out. But do you really need to? Seems like a lot of work for minimal benefit...

Why not just create app users for each microservice that only has access to the tables it uses?

But yes, you can use GG for a subset of tables.

To Alex : datapump twists

J. Laurindo Chiappa, July 18, 2017 - 8:06 pm UTC

Hi, Alex: sometimes I need to do something like this, and (besides GoldenGate, a very capable but very expensive tool) the best option in my experience is to use Data Pump...
For small/medium-sized schemas you can use the normal Data Pump client, but for larger schemas the best alternative (performant yet cheap, repeatable and customizable) is the Data Pump API with the parallel option; see http://dbaora.com/oracle-clone-schema-using-plsql/ for an example....
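A minimal sketch of a parallel schema export through the DBMS_DATAPUMP API (the directory object DPDIR, the schema APP_OLD and the degree of 4 are all placeholders for your own values):

```sql
declare
  h     number;
  state varchar2(30);
begin
  -- create a schema-mode export job
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- %U in the file name generates one dump file per parallel worker
  dbms_datapump.add_file(
    handle    => h,
    filename  => 'app_old%U.dmp',
    directory => 'DPDIR');

  -- restrict the job to the schema being moved
  dbms_datapump.metadata_filter(
    handle => h,
    name   => 'SCHEMA_EXPR',
    value  => q'[= 'APP_OLD']');

  -- run with four parallel workers
  dbms_datapump.set_parallel(handle => h, degree => 4);

  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_job(h, state);
end;
/
```

The matching import job is the same pattern with operation => 'IMPORT' plus a REMAP_SCHEMA transform to land the objects in the new schema.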

Best regards,

J. Laurindo Chiappa
Chris Saxon
July 19, 2017 - 3:41 pm UTC

Thanks for sharing.

Partitioning

Christian, July 19, 2017 - 2:28 pm UTC

If your goal is just to move your table to another schema without changing its structure, you could use partitioning for that:

https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:752030266230#2284434496930

I did that recently for a 30 GB table in almost no time; it works like a charm.
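The trick behind that link is a partition exchange, which swaps data segments in the dictionary rather than copying rows. A rough sketch, with hypothetical schema and table names:

```sql
-- build an empty, single-partition shell of the table in the target schema
create table app_new.orders_part
  partition by range (order_id)
    (partition p_all values less than (maxvalue))
  as select * from app_old.orders where 1 = 0;

-- swap the source table's data segment into the partition: a
-- dictionary-only operation, so it is near-instant regardless of size
alter table app_new.orders_part
  exchange partition p_all with table app_old.orders
  without validation;
```

After the exchange, the data lives under app_new and the old table in app_old is empty; grants, synonyms and dependent code still need adjusting.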

You would, of course, need an Enterprise Edition license plus the Partitioning option, so the whole solution isn't very cheap if you are currently on SE. It's fast though ;)

cheers
Chris Saxon
July 19, 2017 - 3:42 pm UTC

Good suggestion. Though if they're considering GG, I'm guessing the cost of EE + Partitioning won't be an issue ;)