
Question and Answer

Connor McDonald

Thanks for the question, Garuang.

Asked: December 18, 2005 - 7:47 pm UTC

Last updated: July 30, 2024 - 5:26 am UTC

Version: 10.0.2.0

Viewed 10K+ times

You Asked

Hi Tom,

I am getting the following error when I use the copy_file procedure of the dbms_file_transfer package. Here I am trying to copy a log file from one folder to another.

Thanks

SQL> BEGIN
2 dbms_file_transfer.copy_file(source_directory_object =>
3 'SOURCE_DIR', source_file_name => 'sqlnet.log',
4 destination_directory_object => 'DEST_DIR',
5 destination_file_name => 'sqlnet.log');
6 END;
7 /
BEGIN
*
ERROR at line 1:
ORA-19505: failed to identify file "c:\temp\source\sqlnet.log"
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 3223)
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 84
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 193
ORA-06512: at line 2


and Tom said...

this utility is NOT a general purpose "copy my file please" routine. It is specifically for database (binary) files and backups - it is not for copying just any file.


http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_ftran.htm#i999064

see the usage notes:

http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_ftran.htm#sthref3224

o The size of the copied file must be a multiple of 512 bytes.
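For a plain text file such as sqlnet.log, utl_file.fcopy is designed for copying line-oriented files between directory objects and has no 512-byte-multiple restriction. A minimal sketch, assuming the same SOURCE_DIR and DEST_DIR directory objects from the question:

```sql
begin
  -- utl_file.fcopy copies a text file line by line between
  -- two directory objects; unlike dbms_file_transfer it does
  -- not require the file size to be a multiple of 512 bytes
  utl_file.fcopy(
    src_location  => 'SOURCE_DIR',
    src_filename  => 'sqlnet.log',
    dest_location => 'DEST_DIR',
    dest_filename => 'sqlnet.log');
end;
/
```

Note utl_file.fcopy is for text files only; it reads and writes line by line, so it is not suitable for binary files.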



Rating: (8 ratings)


Comments

A reader, May 06, 2006 - 12:33 am UTC

Tom,

Can you recommend some technique on how can the database file(s) from production can be copied to physical standby site as in our shop ssh/ftp will/is disabled.

Thanks.

Tom Kyte
May 06, 2006 - 1:49 am UTC

A reader, May 06, 2006 - 5:07 pm UTC

Tom,

The database on the other site to/will be built will be physical standby.

Thanks.

Tom Kyte
May 07, 2006 - 11:26 am UTC

well, you can build standbys from backups, but that doesn't answer the question "would you like to use the package referenced above, which will allow you to copy a datafile from machine to machine without using ftp/sftp?"

A reader, May 07, 2006 - 12:34 pm UTC

Tom,

Maybe I wasn't clear enough, so let me make this question clear: I looked at dbms_file_transfer before posting this question, but in my case the database to which the file is to be transferred will be a physical standby. As per my understanding, for dbms_file_transfer to work the database must be open (please correct me if I am wrong). Having said that, is there any other way we can transfer the file through database features?

Thanks.

Tom Kyte
May 07, 2006 - 1:08 pm UTC

I don't get it - there is the step of getting the files there (you need a point to point sort of connection - so you have a primary site, it is up. You create an instance to receive it. Files get there. Who cares what you do to the instance that received it).

I'd suggest you just use your backups. There MUST be some way to restore them, isn't there?

(you can have more than one 'database' on that standby machine you know - one temporary one just to be a recipient of files).



A reader, May 07, 2006 - 11:14 pm UTC

Thanks.

dbms_file_transfer in parallel ?

Yoav, March 14, 2007 - 5:05 pm UTC

Hi Tom,
I have a tablespace with 10 data files, each file 10GB in size.
I'm transferring it every day as part of a transportable tablespace process.
As described below, the files are copied one after another.
Is there an option to move all 10 datafiles in one BULK?

declare
  cursor c is
    select file_name
      from dba_data_files
     where tablespace_name = 'TEST';
begin
  for c_rec in c loop
    dbms_file_transfer.put_file(
      'SOURCE_DIR',
      c_rec.file_name,
      'DEST_DIR',
      c_rec.file_name,
      'remote_db');
  end loop;
end;
/

Thanks.
Tom Kyte
March 15, 2007 - 7:35 am UTC

you are moving all 10 datafiles 'in bulk'

if you want to do it in 'parallel', you can use dbms_job (for example), you would schedule N jobs - and they would execute with a 'degree of parallelism' equal to your job_queue_processes init.ora setting.
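Tom's dbms_job suggestion could be sketched as follows (a minimal sketch: the directory names and the 'remote_db' link are carried over from the question above, and extracting a bare filename from the full path is left out for brevity):

```sql
declare
  l_job binary_integer;
begin
  -- submit one job per datafile; once committed, the jobs run
  -- concurrently, limited by the job_queue_processes setting
  for c_rec in (select file_name
                  from dba_data_files
                 where tablespace_name = 'TEST') loop
    dbms_job.submit(
      job  => l_job,
      what => 'begin dbms_file_transfer.put_file(' ||
              '''SOURCE_DIR'', ''' || c_rec.file_name || ''', ' ||
              '''DEST_DIR'', '''  || c_rec.file_name || ''', ' ||
              '''remote_db''); end;');
  end loop;
  commit;  -- jobs only become eligible to run after commit
end;
/
```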

How to copy non-binary files between two different DB servers

Rami Reddy V, July 16, 2024 - 6:53 pm UTC

Hi Tom,

As you said, "The size of the copied file must be a multiple of 512 bytes" for copying a file using the dbms_file_transfer package, and it applies only to binary files. Hence this package won't help to copy non-binary files of arbitrary sizes like 1k, 10k, or 100k.

File Name : TS_STAGE_20240711_tts_exp.log
Size: 1.15 KB (1,178 bytes)
Size on Disk: 4.00 KB (4,096 bytes)

declare
p_source_db varchar2(100) := 'source_db.itron.com';
l_export_log_filename  varchar2(100) := 'TS_STAGE_20240711_tts_exp.log';
begin

dbms_file_transfer.get_file(source_directory_object      => 'DATA_PUMP_DIR',
                                    source_file_name             => l_export_log_filename,
                                    source_database              => p_source_db,
                                    destination_directory_object => 'SOURCEDIRECTORY',
                                    destination_file_name        => l_export_log_filename);
end;
/

ORA-19505: failed to identify file "C:\app\vrreddy\Oracle19c\admin\ieedev\dpdump\TS_STAGE_20240711_tts_exp.log"
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch
O/S-Error: (OS 1178) The volume change journal is being deleted.
ORA-02063: preceding 4 lines from SOURCE_DB
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 54
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 151
ORA-06512: at line 6

SQL> 


Below are my questions; I look forward to your answers.

1) Is there any alternative way to achieve this other than using dbms_file_transfer package?

2) While creating the file using dbms_datapump.add_file, even though I asked it to create a 100KB file by setting the filesize parameter as shown below, it still creates only a 2 KB file.

     dbms_datapump.add_file(handle    => dp_handle,
                            filename  => l_tablespace_name || '_tts_exp.log',
                            directory => 'DATA_PUMP_DIR',
                            filesize  => 102400,
                            filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE,
                           reusefile   => 1);



Starting "MTS"."SYS_EXPORT_TABLESPACE_06":
W-1 Startup took 0 seconds
W-1 Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
W-1 . . exported "MTS"."PARENT_EMP":"SYS_P4830" 7.281 KB 1 rows in 0 seconds using direct_path
W-1 . . exported "MTS"."EMP_TRANSACTION_DTL":"SYS_P4850" 6.820 KB 1 rows in 0 seconds using direct_path
W-1 . . exported "MTS"."EMP_TXN_STATUS":"SYS_P4850" 5.929 KB 1 rows in 0 seconds using direct_path
W-1 . . exported "MTS"."REFERENCE_EMP":"SYS_P4830" 5.914 KB 1 rows in 0 seconds using direct_path
W-1 . . exported "MTS"."REFERENCE1_EMP":"SYS_P4830" 6.351 KB 1 rows in 0 seconds using direct_path
W-1 Completed 5 TABLE_EXPORT/TABLE/TABLE_DATA objects in 0 seconds
W-1 Master table "MTS"."SYS_EXPORT_TABLESPACE_06" successfully loaded/unloaded
******************************************************************************
Dump file set for MTS.SYS_EXPORT_TABLESPACE_06 is:
C:\APP\VRREDDY\ORACLE19C\ADMIN\IEEDEV\DPDUMP\TS_STAGE_20240711_TTS_EXP.DMP
Job "MTS"."SYS_EXPORT_TABLESPACE_06" successfully completed at Tue Jul 16 23:39:39 2024 elapsed 0 00:00:03

Chris Saxon
July 18, 2024 - 9:08 am UTC

1. As Tom said, this package is specifically for copying database files, not any random file you want to move.

Why are you trying to do this inside the database? The easiest method is to copy it at the OS level. If you really must run this from the database, you can use a scheduler executable job to run OS scripts.

2. The filesize parameter is the maximum size for each file, not a request to make each file a specific size. Also, from the docs:

A filesize can only be specified for dump files.

Log files are not dump files.
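For contrast, filesize does apply to dump files, where it caps the size of each piece rather than setting a target size. A sketch (dp_handle and the file name template are hypothetical):

```sql
-- dp_handle is a hypothetical handle from dbms_datapump.open.
-- filesize caps each dump piece at 100 MB; the %U template lets
-- Data Pump generate numbered pieces (exp_01.dmp, exp_02.dmp, ...)
-- as each cap is reached.
dbms_datapump.add_file(
  handle    => dp_handle,
  filename  => 'exp_%U.dmp',
  directory => 'DATA_PUMP_DIR',
  filesize  => '100MB',
  filetype  => dbms_datapump.ku$_file_type_dump_file);
```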

How to copy a file whose size is not a multiple of 512 bytes

Rami Reddy V, July 20, 2024 - 7:57 pm UTC

Thanks Tom for your quick response.

Let me explain my issue in detail for better understanding purpose.

I have two different database servers, one the source DB and the other the target DB. My source DB's partitioned tables (partitioned on a daily basis) are moved to the target DB as part of an archival process; once the data is moved to the target DB, we remove those partitions from the source DB. For this activity we first move one day's partitions (e.g. 01-July-2024) into a new tablespace (e.g. TS_STAGE_20240701) and then export and import using the Data Pump API over a DB link. However, the export into the target DB takes 5+ hours because it runs over the DB link (calling the remote DB procedure from the source DB). Hence I thought to do the export in the source DB itself, which takes only 1 hr 30 mins, and then copy the data files, export dump, and export log files to the target DB server through the DBMS_FILE_TRANSFER package. The database files and dump files copy fine, but the export log file copy fails with "file size is not a multiple of 512 bytes".

So is there any alternative way to copy the export log file to the target DB? We cannot control the export log file size to make it a multiple of 512 bytes.

This is the export log taking 5+ hrs. through DB link in the Target DB.
W-1      Completed 178 TABLE_EXPORT/TABLE/TABLE_DATA objects in 588 seconds
W-1 Master table "TEST"."SYS_EXPORT_TABLESPACE_03" successfully loaded/unloaded
******************************************************************************
Dump file set for TEST.SYS_EXPORT_TABLESPACE_03 is:
  E:\ARCHIVETABLESPACE\TS_STAGE_20200627_TTS_EXP.DMP
Job "TEST"."SYS_EXPORT_TABLESPACE_03" successfully completed at Fri Jul 12 07:12:11 2024 elapsed 0 05:15:27


This is the export log taking 1 hrs. 30 mins in source DB.
W-1      Completed 178 TABLE_EXPORT/TABLE/TABLE_DATA objects in 95 seconds
W-1 Master table "TEST"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TEST.SYS_EXPORT_TABLESPACE_01 is:
  C:\APP\ORACLE\ORADATA\SOURCEDIRECTORY\TS_STAGE_20200627_TTS_EXP.DMP
Job "TEST"."SYS_EXPORT_TABLESPACE_01" successfully completed at Sat Jul 13 02:50:27 2024 elapsed 0 01:23:06


Below is the complete code to replicate the issue.

--In Source DB

-- Create table
create table PARENT_EMP
(
  empno        NUMBER not null,
  job          VARCHAR2(20),
  sal          NUMBER(7,2),
  deptno       NUMBER(2),
  created_date DATE not null,
  gender       CHAR(1)
)partition by range(created_date) INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
    (PARTITION parent_emp_catchall VALUES less than (to_date('20000101','yyyymmdd'))
   );
-- Create/Recreate indexes 
create index IX1_PARENT_EMP on PARENT_EMP (CREATED_DATE)
  local;
create index IX2_PARENT_EMP on PARENT_EMP (JOB);
create index IX3_PARENT_EMP on PARENT_EMP (DEPTNO);
create bitmap index IX4_PARENT_EMP on PARENT_EMP (GENDER)
  local;
-- Create/Recreate primary, unique and foreign key constraints 
alter table PARENT_EMP
  add primary key (EMPNO);
  
  
-- Create table
create table REFERENCE_EMP
(
  ename  VARCHAR2(10),
  emp_id NUMBER not null,
  empno  NUMBER not null
);



alter table REFERENCE_EMP
  add constraint FK_EMPNO foreign key (EMPNO)
  references PARENT_EMP (EMPNO);
  

 alter table REFERENCE_EMP enable row movement;  
 
 
alter table REFERENCE_EMP  
  modify partition by reference ( FK_EMPNO );
  
  
-- Create/Recreate indexes 
create index IX1_REFERENCE_EMP on REFERENCE_EMP (EMPNO)
  local;
create index IX2_REFERENCE_EMP on REFERENCE_EMP (ENAME);
-- Create/Recreate primary, unique and foreign key constraints 
alter table REFERENCE_EMP
  add primary key (EMP_ID);
  
-- Create table
create table REFERENCE1_EMP
(
  ename    VARCHAR2(10),
  emp_id   NUMBER not null,
  r_emp_id NUMBER not null
);
-- Add comments to the table 
comment on table REFERENCE1_EMP
  is 'This table stores the Reference1 employees info';
-- Add comments to the columns 
comment on column REFERENCE1_EMP.ename
  is 'Reference1 employee name';
comment on column REFERENCE1_EMP.emp_id
  is 'Reference1 employee Primary key.';
comment on column REFERENCE1_EMP.r_emp_id
  is 'Reference employee primary employee';

alter table REFERENCE1_EMP
  add constraint FK_R_EMP_ID foreign key (R_EMP_ID)
  references REFERENCE_EMP (EMP_ID);



 alter table REFERENCE1_EMP enable row movement;  
 
 
alter table REFERENCE1_EMP  
  modify partition by reference ( FK_R_EMP_ID );
  

-- Create/Recreate indexes 
create index IX1_REFERENCE1_EMP on REFERENCE1_EMP (R_EMP_ID)
  local;
create index IX2_REFERENCE1_EMP on REFERENCE1_EMP (UPPER(ENAME));
-- Create/Recreate primary, unique and foreign key constraints 
alter table REFERENCE1_EMP
  add primary key (EMP_ID);


--insert data one day data 

insert into parent_emp 
select level empno, 'AJob' as job,0.001*level as sal, 10, to_date('20240711','yyyymmdd')  as created_dt , decode(mod(level,2),0,'M','F')
from dual connect by level <=1000000; 

insert into reference_emp 
select 'e'||level as ename, level, level from dual connect by level <=1000000;

insert into reference1_emp 
select 'e'||level as ename, level, level from dual connect by level <=1000000;

commit;


---create new tablespace

create tablespace TS_STAGE_20240711 datafile 'C:\app\Oracle\oradata\TS_STAGE_20240711.dbf' size 25M autoextend on next 10M;

----move one 11th July data 2024 data into tablespace TS_STAGE_20240711

alter table parent_emp move partition SYS_P4830 tablespace TS_STAGE_20240711 update indexes;
alter table reference_emp move partition SYS_P4830 tablespace TS_STAGE_20240711 update indexes;
alter table reference1_emp move partition SYS_P4830 tablespace TS_STAGE_20240711 update indexes;


---create directory as the SYS user and grant privileges to the TEST user.

CREATE DIRECTORY source_dir AS 'C:\app\Oracle\oradata\SOURCEDIRECTORY';

grant all on DIRECTORY source_dir to TEST;


----below code to create export dump 

declare
      dp_handle number;
      l_main_schema varchar2(10):='TEST';
      l_tablespace_name varchar2(100):='TS_STAGE_20240711';
    begin
      dp_handle := dbms_datapump.open(operation => 'EXPORT',job_mode  => 'TABLESPACE');
  
      dbms_datapump.add_file(handle    => dp_handle,
                            filename  => l_tablespace_name || '_tts_exp.dmp',
                            directory => 'SOURCE_DIR',
                            filetype    => dbms_datapump.ku$_file_type_dump_file,
                            reusefile   => 1);
     dbms_datapump.add_file(handle    => dp_handle,
                            filename  => l_tablespace_name || '_tts_exp.log',
                            directory => 'SOURCE_DIR',
                            --filesize  => 102400,
                            filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE,
                           reusefile   => 1);
 
      dbms_datapump.metadata_filter(handle => dp_handle, NAME => 'SCHEMA_EXPR', VALUE => 'IN(''' || l_main_schema || ''')');
   
      dbms_datapump.metadata_filter(handle => dp_handle, NAME => 'EXCLUDE_PATH_EXPR', VALUE => 'IN (''POST_TABLE_ACTION'')');

      dbms_datapump.metadata_filter(handle => dp_handle, NAME => 'EXCLUDE_PATH_EXPR', VALUE => 'IN (''INDEX_STATISTICS'')');

      dbms_datapump.metadata_filter(handle => dp_handle, NAME => 'EXCLUDE_PATH_EXPR', VALUE => 'IN (''TABLE_STATISTICS'')');

      dbms_datapump.metadata_filter(handle => dp_handle, NAME => 'EXCLUDE_PATH_EXPR', VALUE => 'IN (''OBJECT_GRANT'')');

      dbms_datapump.metadata_filter(handle => dp_handle, NAME => 'EXCLUDE_PATH_EXPR', VALUE => 'IN (''COMMENT'')');

      dbms_datapump.metadata_filter(handle => dp_handle, NAME => 'EXCLUDE_PATH_EXPR', VALUE => 'IN (''INDEX'')');

      dbms_datapump.metadata_filter(handle => dp_handle, NAME => 'EXCLUDE_PATH_EXPR', VALUE => 'IN (''CONSTRAINT'')');

      dbms_datapump.metadata_filter(handle => dp_handle, NAME => 'EXCLUDE_PATH_EXPR', VALUE => 'IN (''REF_CONSTRAINT'')');

      dbms_datapump.metadata_filter(handle => dp_handle, NAME => 'EXCLUDE_PATH_EXPR', VALUE => 'IN (''TRIGGER'')');

      DBMS_DATAPUMP.metadata_filter (handle => dp_handle, name => 'TABLESPACE_EXPR', value => 'IN(''' || l_tablespace_name || ''')');
 
     dbms_datapump.data_filter (handle      => dp_handle,
                                  name        => 'PARTITION_LIST',
                                  value       => '''SYS_P4830''',
                                  table_name  => 'PARENT_EMP',
                                 schema_name => l_main_schema);
 
     dbms_datapump.data_filter (handle      => dp_handle,
                                  name        => 'PARTITION_LIST',
                                  value       => '''SYS_P4830''',
                                  table_name  => 'REFERENCE1_EMP',
                                  schema_name => l_main_schema);
 
     dbms_datapump.data_filter (handle      => dp_handle,
                                  name        => 'PARTITION_LIST',
                                  value       => '''SYS_P4830''',
                                  table_name  => 'REFERENCE_EMP',
                                  schema_name => l_main_schema);
                                                         
                                  
      dbms_datapump.set_parameter(handle => dp_handle, NAME => 'METRICS', VALUE => 1);
      dbms_datapump.set_parameter(handle => dp_handle, name => 'INCLUDE_METADATA', value => 0); --DATA_ONLY                            
 
     dbms_datapump.start_job(dp_handle);
     dbms_datapump.detach(dp_handle);
 
end;
/


--In Target DB

---create directory as the SYS user and grant privileges to the TEST user.

CREATE DIRECTORY target_dir AS 'E:\TARGETDIRECTORY';

grant all on DIRECTORY target_dir to TEST;


--Later trying to copy the datafiles, export dump file and export log file from source DB to Target DB 

declare
tablespace_name varchar2(100) := 'TS_STAGE_20240711';
p_source_db varchar2(100) := 'source_db'; 
l_export_log_filename  varchar2(100) := tablespace_name ||'_tts_exp.log'; 
l_export_dump_filename  varchar2(100) := tablespace_name ||'_tts_exp.dmp'; 
l_sql_stmt varchar2(10000);
TYPE cur_type IS REF CURSOR;
l_df_c cur_type;
l_data_filepath           VARCHAR2(1000);
l_data_filename           VARCHAR2(128); 
source_directory   VARCHAR2(128) := 'SOURCE_DIR';
target_directory   VARCHAR2(128) := 'TARGET_DIR';  
begin

l_sql_stmt := 'select file_name from dba_data_files@' || p_source_db || ' where tablespace_name =''' || tablespace_name || ''' ';
      dbms_output.put_line('sql is >' || l_sql_stmt || '<');

      OPEN l_df_c FOR l_sql_stmt;
      LOOP
        FETCH l_df_c
          INTO l_data_filepath;
        EXIT WHEN l_df_c%NOTFOUND;

        IF instr(l_data_filepath, '\') > 0 THEN
          l_data_filename := substr(l_data_filepath, (instr(l_data_filepath, '\', -1, 1) + 1), length(l_data_filepath));
        END IF;

        IF l_data_filename IS NULL THEN

          CLOSE l_df_c;
          raise_application_error(-20051, 'Datafile file path not valid, could not extract filename for dbms_file_transfer.get_file package');
        END IF;

              
        dbms_file_transfer.get_file(source_directory_object      => source_directory,
                                    source_file_name             => l_data_filename,
                                    source_database              => p_source_db,
                                    destination_directory_object => target_directory,
                                    destination_file_name        => l_data_filename);
  
    END LOOP;

    CLOSE l_df_c;
 

        dbms_file_transfer.get_file(source_directory_object      => source_directory,
                                    source_file_name             => l_export_dump_filename,
                                    source_database              => p_source_db,
                                    destination_directory_object => target_directory,
                                    destination_file_name        => l_export_dump_filename);

        ---here failing 
        dbms_file_transfer.get_file(source_directory_object      => source_directory,
                                    source_file_name             => l_export_log_filename,
                                    source_database              => p_source_db,
                                    destination_directory_object => target_directory,
                                    destination_file_name        => l_export_log_filename);         
   
end;
/

Chris Saxon
July 22, 2024 - 12:43 pm UTC

The size of the log file, whether or not it is a multiple of 512 bytes, is irrelevant: dbms_file_transfer.get_file is not for this type of file.

As I replied above, you can use OS tools to move the file; i.e. not Oracle Database. If you really must run it all from the database, create a scheduler job of type executable that calls an OS script.


Share sample code to create a scheduler job of type executable that calls an OS script

Rami Reddy V, July 22, 2024 - 7:06 pm UTC

Thanks Tom.

Could you share sample code or any reference docs/links on how to create a job of type executable to run an OS script?
  
"create a scheduler job of type executable that calls a OS script."



Connor McDonald
July 30, 2024 - 5:26 am UTC

Here's an example

grant create job to scott;
grant create external job to scott;
grant create credential to scott;

conn scott/tiger@mydb

begin
  dbms_credential.create_credential(
    credential_name => 'OS_ACCESS',
    username        => 'oracle',
    password        => 'mypassword'
  );
end;
/

set serveroutput on
declare
  l_shell   varchar2(32767);
begin

  l_shell := 
'#!/bin/bash
 /bin/ls -l /tmp > /tmp/scott.txt';

  dbms_scheduler.create_job(
    job_name        => 'RUN_OS_SHELL',
    job_type        => 'EXTERNAL_SCRIPT',
    job_action      => l_shell,
    credential_name => 'OS_ACCESS',
    enabled         => true
  );
end;
/



Note - we don't pick up the standard profile, so make sure you set any environment variables you need *inside* the script.
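Concretely, the script body passed as job_action might set its own environment like this (a sketch; the ORACLE_HOME path below is a hypothetical example, adjust for your installation):

```shell
#!/bin/bash
# The scheduler job does not source the oracle user's login profile,
# so set any required environment variables inside the script itself.
export ORACLE_HOME="${ORACLE_HOME:-/u01/app/oracle/product/19.0.0/dbhome_1}"
export PATH="$ORACLE_HOME/bin:$PATH"
echo "ORACLE_HOME is $ORACLE_HOME"
```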

More to Explore

PL/SQL demos

Check out more PL/SQL tutorials on our LiveSQL tool.

PL/SQL docs

PL/SQL reference manual from the Oracle documentation library