Afonso, July 04, 2001 - 5:06 pm UTC
Finally the answer
Richard Wainwright, January 03, 2002 - 5:50 am UTC
After much searching it seems that my problem with building the dictionary for logminer was due to not getting a perfect match between the utl_file_dir parameter and the dictionary_location value used by utl_file.
Fantastic resource from the mundane to the arcane
Joe Suchy, September 11, 2002 - 4:25 pm UTC
Most of the other responses to UTL_FILE questions always breezed over the UTL_FILE_DIR part. No one else mentioned the need for exact matches, including case. Thanks for a perfect explanation.
Bala Nemani, October 08, 2002 - 10:42 pm UTC
I could not get this kind of response when working with Oracle Support directly. Very helpful
Sikandar Hayat, August 10, 2003 - 3:40 am UTC
Thanks for your explanation.
haihong li, September 27, 2003 - 2:32 am UTC
It's very useful.
One additional comment from metalink note 202159.1
Ken Naim, April 30, 2004 - 12:47 am UTC
If you want to specify multiple directories in an ALTER SYSTEM command, the syntax is:
ALTER SYSTEM SET UTL_FILE_DIR='directory1','directory2' scope=spfile;
April 30, 2004 - 7:26 am UTC
ahh, but if you have spfiles, you probably have 9ir2 which means you'll be wanting to not use utl_file_dir but CREATE DIRECTORY instead (no restart)
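A minimal sketch of the directory-object approach (the directory path, object name, and grantee are made up for illustration):

```sql
-- As a DBA: a directory object takes effect immediately, no restart needed
create directory app_out_dir as '/u01/app/outfiles';
grant read, write on directory app_out_dir to scott;

-- In PL/SQL: pass the directory OBJECT name (usually uppercase)
-- where the OS path used to go
declare
  l_file utl_file.file_type;
begin
  l_file := utl_file.fopen( 'APP_OUT_DIR', 'test.txt', 'w' );
  utl_file.put_line( l_file, 'hello' );
  utl_file.fclose( l_file );
end;
/
```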
lchao, June 08, 2004 - 4:30 pm UTC
useful piece of information
Mohamamd Aamir, July 10, 2004 - 9:16 am UTC
It was very useful to a beginner like me. In fact, I wish to copy data from a machine that is running the Oracle service to another machine where I can pick up the files to load into the data mart.
I hope my problem of providing the correct rights was taken care of in this article.
Praveen, March 11, 2005 - 1:24 am UTC
Query to find current directories for UTL_FILE?
Matt, June 13, 2005 - 3:57 pm UTC
Is there a view that can be queried to see the current directories/paths on the server that can used with UTL_FILE (not by reading init.ora!) ?
June 13, 2005 - 9:00 pm UTC
v$parameter where name = utl_file_dir
dba_directories, since utl_file in 9i can use that instead of utl_file_dir
External table access on networked drives in Windows is still hard
Sam C, December 16, 2005 - 12:04 am UTC
Thanks Tom for your advice. I have been trying to make this work for the last 24 hours without much success.
I am using windows 2003 server with Oracle 9i trying to access file on XP or 2003 machines.
I tried mapping drive, giving all permissions, creating logon as etc. No use.
Finally I checked metalink and found this.
Note 290703.1 -> EXTERNAL TABLES ON UNC DIRECTORIES DON'T WORK ANYLONGER BEGINNING WITH PATCHSET 22.214.171.124
Oracle accepted a problem similar to mine as a bug, but offered the alternative of using a local or mapped drive per Note 45172.1. Even that note does not help.
Has anyone found a solution? This problem is forcing me to have many Oracle installations on many servers, which is very painful and expensive :-)
Does anyone know if this problem is resolved in 10g?
Advance thanks for any help/suggestions.
December 16, 2005 - 8:21 am UTC
sorry that windows makes this so hard, I agree, things on windows are harder - I can never get things working right on that platform myself. Probably why all of my servers are not windows and I run vmware to run my database examples on my laptop.
that note states what to do:
Due to security risks, beginning with the patchset 126.96.36.199 the external
tables don't work anylonger, if the default directory is defined using UNC.
To workaround this, use the mapped letter instead of UNC pathes.
Mapped drive does not work
Sam C, December 16, 2005 - 10:40 am UTC
Whoever wrote the suggestion in the bug notes did not test it themselves. It is free advice for the people who are suffering :-)
In Windows 2003, only local drives work with external tables, not mapped drives (period).
Hope Oracle fixes this bug soon.
December 16, 2005 - 1:00 pm UTC
tell me how you set it up, there are gyrations you must go through on windows to get this networked drive visible. did you do all of the above steps?
ORA-29284: file read error
Giridhar, January 07, 2006 - 1:08 am UTC
we use UTL_FILE to read files in our PL/SQL Program.
Last week we got an exceptional file, which is greater than 2 GB. Generally it is a small file.
our program failed with the error as:
SQLCODE <-29284> SQLERRM <ORA-29284: file read error>
I checked with our DBA and he confirmed that we are on 64 bit. Do we get such errors even if we are on 64 bit database? Details of our v$version is as follows, which also confirms we are on 64 Bit:
Oracle9i Enterprise Edition Release 188.8.131.52.0 - 64bit Production
PL/SQL Release 184.108.40.206.0 - Production
CORE 220.127.116.11.0 Production
TNS for Solaris: Version 18.104.22.168.0 - Production
NLSRTL Version 22.214.171.124.0 - Production
Please guide me further to avoid such errors in future.
January 07, 2006 - 10:36 am UTC
In looking at the spec of utl_file, many of the inputs/outputs are binary_integer - signed 32bit integers:
FUNCTION FGETPOS RETURNS BINARY_INTEGER
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 FILE                           RECORD                  IN
   ID                           BINARY_INTEGER          IN
   DATATYPE                     BINARY_INTEGER          IN
   BYTE_MODE                    BOOLEAN                 IN
PROCEDURE FSEEK
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 FILE                           RECORD                  IN/OUT
   ID                           BINARY_INTEGER          IN/OUT
   DATATYPE                     BINARY_INTEGER          IN/OUT
   BYTE_MODE                    BOOLEAN                 IN/OUT
 ABSOLUTE_OFFSET                BINARY_INTEGER          IN     DEFAULT
 RELATIVE_OFFSET                BINARY_INTEGER          IN     DEFAULT
so, regardless of the word size on the machine, the UTL_FILE API is limited by this (a signed 32-bit integer tops out around 2 GB)
a) instead of using utl_file, you may use an external table and simply:
for x in (select * from et)
if your line length is >4000 bytes, you would use multiple varchar2(4000)'s to map the file.
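Option (a) might look like the following sketch (directory, table, and file names are assumptions, not from the original thread):

```sql
create table big_file_et ( line varchar2(4000) )
organization external (
  type oracle_loader
  default directory my_dir
  access parameters (
    records delimited by newline
    fields ( line char(4000) )
  )
  location ( 'big_file.txt' )
)
reject limit unlimited;

begin
  for x in ( select line from big_file_et ) loop
    null;  -- process x.line here
  end loop;
end;
/
```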
b) you could use a bfile, requires a little "parsing" on your part:
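A sketch of the bfile approach, assuming a directory object MY_DIR and a file big_file.txt (both names invented here). dbms_lob.substr on a BFILE returns RAW, so each chunk is cast back to varchar2 before you look for line terminators:

```sql
declare
  l_bfile bfile := bfilename( 'MY_DIR', 'big_file.txt' );
  l_len   number;
  l_off   number := 1;
  l_raw   raw(32767);
  l_chunk varchar2(32767);
begin
  dbms_lob.fileopen( l_bfile, dbms_lob.file_readonly );
  l_len := dbms_lob.getlength( l_bfile );  -- a NUMBER, not capped at 2 GB
  while l_off <= l_len loop
    l_raw   := dbms_lob.substr( l_bfile, 32767, l_off );
    l_chunk := utl_raw.cast_to_varchar2( l_raw );
    -- the "parsing": split l_chunk on chr(10) to recover lines, carrying
    -- any partial trailing line over into the next iteration
    l_off := l_off + utl_raw.length( l_raw );
  end loop;
  dbms_lob.fileclose( l_bfile );
end;
/
```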
Thanks a lot
Giridhar, January 08, 2006 - 12:28 pm UTC
Thanks tom for the explanation and for possible work arounds.
Mapped drive does not work
Sam C, January 19, 2006 - 4:31 pm UTC
Sorry for the late response. I went through several iterations of steps to try to get this mapping to work.
For clarity, let me call the Windows 2003 server with Oracle the "Oracle server" and the other Windows 2003 server with the shared folder the "data server".
1) I changed both the TNS listener and Oracle services on the Oracle server to log on as the administrator of the server, then stopped and started both services.
2) Shared a folder on the data server with read/write access for everyone.
3) Mapped the data server's shared folder as a local drive on the Oracle server. I could test read/write from the Windows server.
I even rebooted both servers, as Windows likes rebooting :-).
I even attempted to set UTL_FILE_DIR on the Oracle server to the shared folder and restarted Oracle as well as the Windows server.
So now I have ended up with 4 different servers with a database installed! I could have just one database server, but am forced to have many because of this problem.
Does anyone have a solution for this problem?
Sam C, February 01, 2006 - 2:29 pm UTC
If so, please share your experiences.
Another workaround for the signed 32-bit integer / 2 GB limit
Slobo, March 30, 2006 - 3:42 am UTC
FastReader does not have this limitation, unlike utl_file, and provides very fast table extracts into text files.
this is a third party tool though: www.wisdomforce.com
Problem with UTL_FILE
Ishi, April 27, 2006 - 6:47 am UTC
I was following Metalink Note 44307.1 for testing UTL_FILE and got an error.
Please see the following:
create table test (c1 number, c2 number);
insert into test values (10,25);
insert into test values (20,50);

CREATE OR REPLACE PROCEDURE test1 IS
  file_handle      UTL_FILE.FILE_TYPE; -- file handle of OS flat file
  col1             NUMBER;             -- C1 retrieved from test table
  retrieved_buffer VARCHAR2(100);      -- line retrieved from flat file
BEGIN
  -- Open file to write into and obtain its file_handle.
  file_handle := UTL_FILE.FOPEN('/u03/users/ofsa/ofsatest/oracle_utl_data','mytest.out','W');
  UTL_FILE.PUT_LINE(file_handle, 'this is line 1 as a test');
  SELECT c1 INTO col1 FROM test
   WHERE c2 = 25;
  UTL_FILE.PUTF(file_handle,
    'This is the c1 %s when the c2 is %s.\n', col1, 25);
  UTL_FILE.FCLOSE(file_handle);
  -- Open the file for reading.
  file_handle := UTL_FILE.FOPEN('/u03/users/ofsa/ofsatest/oracle_utl_data','myfile.txt','R');
  -- Read a line from the file.
  UTL_FILE.GET_LINE (file_handle, retrieved_buffer);
  -- Print fetched line out to the SQL*Plus prompt.
  DBMS_OUTPUT.PUT_LINE(retrieved_buffer);
  -- Close the file.
  UTL_FILE.FCLOSE(file_handle);
EXCEPTION
  WHEN NO_DATA_FOUND THEN NULL;
  WHEN UTL_FILE.INVALID_PATH THEN NULL;
  WHEN UTL_FILE.READ_ERROR THEN NULL;
  WHEN UTL_FILE.WRITE_ERROR THEN NULL;
  WHEN OTHERS THEN NULL;
END;
/
I did the above as user SYSTEM.
Now, when I tried to execute the procedure it came up with following error:
SQL> exec test1;
PL/SQL procedure successfully completed.
The procedure created a file in the specified directory location, which is an NFS mount point. The following is the permission of the file Oracle created (with -rw-r----- permission!):
$ ls -ltr mytest.out
-rw-r----- 1 orauat ofsatest 62 Apr 27 18:10 mytest.out
orauat is the owner of the Oracle software:
$ ps -ef |grep pmon
orauat 1826896 1 0 21:20:18 - 0:03 ora_pmon_ofsad
Directory permission is as under:
$ ls -ld oracle_utl_data
drwxrwsrwx 3 ofsatest ofsatest 2048 oracle_utl_data
Following is defined in UTL_FILE_DIR init.ora parameter:
SQL> show parameter utl_file_dir
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir string /u01/oracle/oracle_utl_data, /
SQL> select * from v$version;
Oracle9i Enterprise Edition Release 126.96.36.199.0 - 64bit Production
PL/SQL Release 188.8.131.52.0 - Production
CORE 184.108.40.206.0 Production
TNS for IBM/AIX RISC System/6000: Version 220.127.116.11.0 - Production
NLSRTL Version 18.104.22.168.0 - Production
What am I missing here?
Thanks for your help in advance.
April 27, 2006 - 3:13 pm UTC
*STOP USING SYSTEM*
do not use sys
do not use system
do not use any builtin account, just stop doing that.
but what is the problem here? what are you seeing that you were not expecting to see?
you caught WHEN OTHERS and have basically HIDDEN the error COMPLETELY AND TOTALLY. You just pretended it did not happen
you don't know what happened
you don't know where it happened
you have no idea what is going on
so tell us - why did you have a when others? Other than to make it so the above list is true?
Problem with UTL_FILE
Ishi, April 27, 2006 - 10:59 am UTC
Please ignore my last post. I know what I was missing
UTL_FILE.PUT_LINE(file_handle, 'this is line 1 as a test');
SELECT c1 INTO col1 FROM test
WHERE c2 = 25;
'This is the c1 %s when the c2 is %s.\n',
I was writing to mytest.out and was trying to read from myfile.txt.
I got it working.
Sorry for that.
Thanks and keep going ;)
April 27, 2006 - 3:23 pm UTC
LOSE THE WHEN OTHERS
that is a bug in your code, just delete it now and never code it again (unless you follow it by "raise;")
Ishi, April 28, 2006 - 7:38 am UTC
To mount remote Windows folder
Sale, May 10, 2006 - 1:45 pm UTC
How can we mount a remote Windows folder on a Unix Oracle 9iR2 server and access a Windows text file as an external table from a stored procedure?
Can you provide example? Can you provide links?
Fix for using mapped network drives with Oracle directory objects
Jody, August 18, 2006 - 9:23 am UTC
To Sam C, and any other interested parties. I have experienced exactly the same problems as described when attempting to access mapped network drives through Oracle Directory objects (via external tables and UTL_FILE package) with Windows 2003 Server on 10g (10.2.0.1.0).
Having progressed this issue through Oracle Metalink support, I was informed that this problem was resolved in 10.1.0.5 as a result of an unpublished bug fix:
4427475 KUP-04040: FILE EXT852IP_48.DAT IN NFS_HISTORY_DUANE_READE NOT FOUND
I have subsequently upgraded to 10.2.0.2.0 and can confirm that Oracle Directory Objects perform as expected when UNC specifications are used in their creation. Note that there still appears to be a problem when using the mapped drive letter (at least I couldn't get this working!).
Hope this information is of use.
thanks Jody, JUK
Sam C, November 04, 2006 - 5:35 pm UTC
Thanks for sharing the good news. I was waiting a long, long time for this.
A reader, April 24, 2009 - 5:21 am UTC
Oracle version : 10.2.0.3
We are running a script from ksh; the script is below. The procedure is expected to return a ref cursor as output. When we write a single record returned by the ref cursor (we have an exit in the loop) using UTL_FILE, the script executes as expected. But when we remove the exit to write all records, it throws an exception:
-----------> ksh script ----------------->
if [ $# -ne 4 ]; then
  echo "Usage: $0 \"id\" \"TNum\" \"ConfigFile\" \"Stg\""
  exit 1
fi
for i in `cat $3`
do
  echo "Processing $i for $1-$2"
  sqlplus -s xyz/$($GET xyz) << ?
set time on timing on
declare
  ...
  r2 CLOB; --varchar2(32000);
  r3 CLOB; --varchar2(32000);
  ...
begin
  INSERT INTO tlog VALUES (SYSDATE,'Start - New');
  l_file := UTL_FILE.fopen('LOG_DIR','pullDATA_$1-$2-$4_$i.log' , 'w',32760);
  xyz.app_security.pullData('$1-$2', '$i' ,'|', r1 , r2 , r3 , err_code , err_msg);
  if (err_code = 0) then
    loop
      fetch r1 into r4;
      exit when r1%notfound;
      ...
      exit; ------------------> Does not work when we remove this.
    end loop;
  end if;
  INSERT INTO tlog VALUES (SYSDATE,'End - New');
end;
April 27, 2009 - 11:42 am UTC
my car won't start.
I did this:
put in key
turn key ------------------> does not work
We are so even now aren't we.
care to share what "does not work" means - you know, since we cannot actually see the output you see? the error you are encountering would be really helpful
Venkat, July 22, 2009 - 11:06 am UTC
Thanks. Please don't treat this as a separate question: how do we handle special characters with the UTL_FILE package?
My file has these 2 lines:
Switch gear media tensión
TABLERO BAJA TENSION
When I read the file with UTL_FILE, it comes back as one single line, with the accented character mangled:
"Switch gear media tensi?n
TABLERO BAJA TENSION"
Please help me in resolving this one. It is something very urgently needed to be fixed.
Naveen Wason, November 02, 2009 - 7:16 am UTC
I have read your solution and applied it in my environment; it works fine and my problem has been resolved.
But I have another task related to UTL_FILE: I want to read/write files on a client drive using utl_mail - how would it be possible? It's urgent...
November 09, 2009 - 12:25 pm UTC
... I want to read/write
files in client drive using utl_mail how it would be possible. It's urgent......
it would also be "a virus" if you could do that - wouldn't it.
Think about it, you would like your database server to be able to reach out to an arbitrary client drive and start reading stuff. Please do think about that for a moment.
Your client can make the drive available to the server, the server could mount and secure that drive, and the database could read it. That would be safe and secure and supported.
Your client could invoke something in your environment that would allow that client to upload a file to the server - that would work.
But having the database reach out and read an arbitrary client drive all by itself - never in our lifetime. Not without the client exporting it and not without the server mounting it.
UTL_FILE GET_LINE ORA-29284
A reader, March 16, 2010 - 2:25 pm UTC
Recently our oracle system was migrated to the below version.
1 Oracle Database 11g Enterprise Edition Release 22.214.171.124.0 - 64bit Production
2 PL/SQL Release 126.96.36.199.0 - Production
3 CORE 188.8.131.52.0 Production
4 TNS for Linux: Version 184.108.40.206.0 - Production
5 NLSRTL Version 220.127.116.11.0 - Production
Most PL/SQL jobs worked fine, except those that involved reading or writing files on the drive under the OCFS filesystem; those jobs failed with the ORA-29284 File Read Error.
The solution we used was to change the physical path of the directories to a non-OCFS drive.
March 16, 2010 - 2:58 pm UTC
the only issue I see in the support database has to do with a partially failed OCFS installation - that is, re-installing the rpms for OCFS solved the problem. It was a generic file system issue - not plsql.
sam, April 17, 2010 - 1:23 pm UTC
I have an Oracle 11g server running on a Unix box, "ORA11". It is inside a firewall and cannot be accessed by external users.
I have an Oracle HTTP server (mod_plsql) running on box "ORAWEB". It is outside the firewall.
If a procedure in the database uses UTL_FILE and needs to create a file that is accessed by an external user, do you create a mount from ORAWEB to ORA11 and then write to that directory,
or can ORA11 write files to a directory on ORAWEB?
What is the correct setup for this, and what privileges are required?
If I am correct, Oracle can only create files locally on the server. So I need a mount from the web server directory to the
database server, so Oracle can see the web server directory as local.
Can you confirm, please?
April 17, 2010 - 1:30 pm UTC
talk to your system administrator, this would be done entirely and completely outside of Oracle.
Why wouldn't you just return a CLOB from the PL/SQL routine via mod_plsql to the end user - that is, instead of writing to an OS file, write to a LOB, return the LOB, and have the LOB delivered to the user.
it'll look just like a file to them.
sailesh kumar, January 05, 2011 - 4:15 am UTC
I am working on the email server of Informatica.
Any attachment in the email is stored as a CLOB in the Oracle database, along with other columns like from, to, cc, bcc, subject line, and content.
We need to extract this attachment from the CLOB and turn it into a file on Unix.
Please advise me if it is possible.
January 05, 2011 - 7:39 am UTC
You can use UTL_FILE in PL/SQL to write files into the file system. You can use Java IO routines to write files with a java stored procedure as well.
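A sketch of the UTL_FILE route, assuming a directory object EXPORT_DIR and a table emails with a CLOB column attachment (all names invented here). For a binary attachment you would open the file in 'wb' mode and use utl_file.put_raw on a BLOB instead:

```sql
declare
  l_clob clob;
  l_file utl_file.file_type;
  l_off  pls_integer := 1;
  l_amt  pls_integer := 32000;
begin
  select attachment into l_clob from emails where id = 1;
  l_file := utl_file.fopen( 'EXPORT_DIR', 'attachment.txt', 'w', 32767 );
  -- write the clob out in chunks small enough for utl_file's buffer
  while l_off <= dbms_lob.getlength( l_clob ) loop
    utl_file.put( l_file, dbms_lob.substr( l_clob, l_amt, l_off ) );
    utl_file.fflush( l_file );
    l_off := l_off + l_amt;
  end loop;
  utl_file.fclose( l_file );
end;
/
```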
Moving or copying a file
selva, May 30, 2011 - 12:07 am UTC
I need to copy a file from a local directory to a server directory. How can we do this in PL/SQL?
May 31, 2011 - 10:14 am UTC
you cannot - plsql runs on the server, you would need a client program running on the local system that has access to the local file system to achieve this.
Mani, June 27, 2011 - 8:25 am UTC
When the datatype is Number or Money, the value should be stored in result; otherwise it should be null. But it's not working. Can you help me with this?
create table salgrades
( MIN_NUMBER NUMBER,
result varchar2(10) default null,
case when (datatype='Number' or datatype='Money') then
result := datatype;
result := null;
June 27, 2011 - 11:31 am UTC
this is strange, constraints cannot do assignments - what are you trying to do???
You are not trying to validate anything. It looks like you want a virtual column?
In 10g and before, use a view of this table - the table will not have result as a column, but in the view, you can
select min_number, max_number, datatype, case when datatype in ('Number', 'Money' ) then datatype end result from salgrades.
In 11g, look up virtual column for a better solution.
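The 11g virtual-column version might be sketched like this (column names taken from the discussion above; treat it as an illustration, not the poster's actual schema):

```sql
create table salgrades (
  min_number number,
  max_number number,
  datatype   varchar2(10),
  -- result is computed, never stored or assigned by the application
  result     varchar2(10) generated always as (
    case when datatype in ('Number','Money') then datatype end
  ) virtual
);
```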
Javi asdsad, August 22, 2011 - 7:45 am UTC
I am a little bit confused... Can we copy a file into a remote directory using the UTL_FILE package or not?
I have Oracle 10gR2 running on Windows XP. I have tried to copy a file onto a mapped directory but I can't do it successfully; I always get the INVALID_PATH exception...
Is there any way to do it?
Thanks for all,
August 30, 2011 - 2:15 pm UTC
the code running in the database can only access directories available to the database server.
If the file system (the remote file system) is mounted and accessible to the database (eg: the database could write a trace file to that file system for example), then utl_file will be able to access it.
If not - then no.
Yes - you can use remote file systems, as long as they are properly network attached to the database server.
Works for multiple threads?
Eashwer Iyer, October 14, 2011 - 7:55 am UTC
Does this approach with UTL_FILE work for multiple threads / multiple invocations all writing to the same file?
Please use my example as an illustration if you find that it is a new question (or point me to where it is found)... please.
I use UTL_FILE to write out a debug log from within an Oracle stored function.
I wish to understand what happens when two invocations of the same function (multiple flowchart runs) try to change the contents of the file simultaneously:
invocation 1 generates this debug line 1
invocation 2 of same function which happens in another client flowchart, generates this debug line 1
invocation 1 of same function generates its debug line 2
invocation 1 of same function generates its debug line 3
invocation 1 of same function generates its debug line 4
invocation 2 of same function generates its pertinent debug line 2
invocation 2 of same function generates its pertinent debug line 3
QUESTION can two file pointers point to the same file and write/append simultaneously to the same file via utl_file ?
Instead of using UTL_FILE and writing to a file...
can I pipe the contents of dbms_output.put_line (stdout) directly into a log file?
October 14, 2011 - 11:22 pm UTC
if you have multiple processes writing to the same file - in any language - you have an unmitigated mess. You would need to serialize, and open/write/close the file in a protected section of code.
I would have each 'thread' write to its own trace file, just like we do.
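One file per session might be sketched like this (LOG_DIR is an assumed directory object); because no two sessions share a file name, there is nothing to serialize:

```sql
declare
  l_file utl_file.file_type;
begin
  -- embed the session id in the file name: one writer per file
  l_file := utl_file.fopen(
    'LOG_DIR',
    'debug_' || sys_context('userenv','sessionid') || '.log',
    'a' );
  utl_file.put_line( l_file, to_char(sysdate,'hh24:mi:ss') || ' my debug line' );
  utl_file.fclose( l_file );
end;
/
```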
A reader, October 15, 2011 - 10:45 am UTC
that would be a total mess!
You would think UTL_FILE would apply an implicit lock on the dir/file until it is done writing - the same way Oracle locks records when someone runs DML such as an update.
It seems UTL_FILE does not, and concurrency control has to be handled by the application, although PL/SQL can't lock files on the filesystem (unix chmod).
October 16, 2011 - 10:14 pm UTC
it is pretty much the way things work. Try it in C. See what happens - unless you go out of your way and ask for exclusive access.
the filesystem doesn't give two hoots about it - it is pretty much "standard operating procedure" in most languages. Windows is about the only OS I've seen that locks these things by default (note: the programming language didn't do it, WINDOWS DID IT - and many people (self included) hate that; it isn't supposed to work that way).
If you want a database, use a database. If you want a file, understand how they work - in pretty much all languages.
UTL_FILE: read a file and create a new table in Oracle
Prakash, March 15, 2012 - 11:45 pm UTC
Now I'm using the UTL_FILE package. I created a UTL file and read the Excel file into Oracle - everything was successful.
I have a table: uploading, with columns
tablename varchar2,
column_name varchar2 (primary key),
column_value varchar2.
Now I want to create a new table from this, but I need the 2 columns from before plus one new column - for example
tablename varchar2,
column_name varchar2 (reference key),
column_value varchar2.
When I run the UTL package the first time, I want the new table created automatically. Please help me...
Amine Sekkai, October 07, 2013 - 5:37 pm UTC
Thank you for the support note 45172.1.
I have 2 questions regarding this note:
1 - In the point "Accessing Network Directories": does changing the default login (SYSTEM) to an administrator user have any impact on the correct running of the database?
2 - If we add the user SYSTEM to access the shared resource, can the UTL_FILE methods "see" it?
Thanks in advance,
sam, December 11, 2013 - 1:14 pm UTC
Did Oracle change anything in UTL_FILE between 9iR1 and 11gR2?
I have several old stored procedures that use UTL_FILE and FOPEN to read files directly from the filesystem (no Oracle directory used).
In 11gR2 the FOPEN fails with an invalid directory error. I also set the initialization parameter UTL_FILE_DIR, but it still won't work. It does work when I create an Oracle directory, though.
There are a lot of stored procedure changes required due to this. Normally Oracle maintains backward compatibility, and I wonder if this was an exception.
Rohit Chaudhary, September 07, 2017 - 11:29 am UTC
Please tell me how to make the package body and synonym valid:
select owner,object_name,object_type, status from all_objects where object_name='UTL_FILE';
OWNER OBJECT_NAME OBJECT_TYPE STATUS
------------------------------ ------------------------------ ------------------- -------
SYS UTL_FILE PACKAGE VALID
SYS UTL_FILE PACKAGE BODY INVALID
PUBLIC UTL_FILE SYNONYM VALID
APEX_030200 UTL_FILE SYNONYM INVALID
September 09, 2017 - 12:51 am UTC
Run the database recompilation script (utlrp.sql) as SYSDBA.
But if UTL_FILE is invalid... well, that's very odd.
Jess, March 27, 2018 - 12:22 am UTC
We have a use case where a csv file exists on server X but needs to exist on server Z (so it could go into external table there). Both servers have Oracle databases on them and have intra-connectivity (there are db links between them).
Can UTL_FILE.FCOPY be used on Z to copy from remote to local (else on X to copy from local to remote)? If so, what does the setup look like (i.e., how does the remote directory need to be defined to make it work)?
If it can't be done this way, is there another way?
We can't scp the file to Z because we don't have any permissions directly on the server itself. We can't use dbms_file_transfer either because file sizes would have to be 512K multiples. (We considered creating an external table on X, loading the file that way, and then selecting over db link to Z, but that's not very viable because database on X is owned by external supplier, so we can't really be making changes there).
Thank you as always!
March 27, 2018 - 6:32 am UTC
File level operations can be done across servers *but* you need to remember that these operations are done as the 'oracle' account on the relevant platform, ie, the account that the database runs under.
For example, on Windows, if you map a network drive (let's call it S:) then it has most probably been done with the Admin credentials or your own credentials. Which probably means that the 'oracle' account has no idea of (or no access to) such a share.
Some more information here https://asktom.oracle.com/pls/asktom/asktom.search?tag=oracle-user-unable-to-access-network-locations-or-permission-issues
So - overcome the OS level configuration, and doing file operations from oracle should be fine.
Jess, March 27, 2018 - 7:49 am UTC
The linked page is only useful for cases where you can map the drive, which essentially fools it into being local. What about things like RDS, where you can create an Oracle directory but have no control over the file system or the directory's physical location? It can't be mapped, so you're left with an Oracle directory you can talk to, but not the filesystem. Creating external tables on, say, the remote instance is not ideal because of 'ownership' of that database...
Any advice on how a file can be placed/written into a local directory from a remote server (or vice versa) without OS changes?
March 28, 2018 - 1:38 am UTC
Some options you could explore
1) transfer to local via the OS
Something like rsync or similar so that files are (at the OS level) transported from a remote location to a local location.
2) on-demand transfer to local (or perception thereof)
create an external table with a pre-processor option, so that the pre-processor presents the remote data to the query engine
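A sketch of option 2 (the directory objects, table shape, and script name are all assumptions): the preprocessor script runs on the database server and writes the remote file's contents to stdout, which the access driver then parses:

```sql
-- exec_dir holds the script; data_dir holds a dummy location file
create table remote_csv (
  col1 varchar2(100),
  col2 varchar2(100)
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    preprocessor exec_dir:'fetch_remote.sh'  -- e.g. ssh/cat the remote file
    fields terminated by ','
  )
  location ( 'dummy.csv' )
);
```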
Jess, March 28, 2018 - 10:42 am UTC
Sorry, need to pick your brain a little more on that :)
Re: transfer to local via OS <== not for things like RDS... SQL is the only way in or out. dbms_file_transfer would work, but that's got issues with 512 multiples (unless there's some way to overcome this with some kind of unix pre-processing?)
Re: on-demand transfer... I'm aware that one can do external tables with preprocessing (read Arup's 2011 article and a few others), but I don't understand how that would work here... If I have a source database with full OS access on X and a target database on Z with no OS access.... Sure, I can define an external table on Z, but what kind of pre-processing could it do? Or are you talking about external/pre-processing on X and having Z read data from it over db link (in which case that's no good, as we have no ability to create new objects in X)... If you could clarify a bit more how you envision this working, that would be greatly appreciated.
March 28, 2018 - 12:55 pm UTC
OK, I think I understand your requirement a bit more.
- create table T (c clob);
- load the file into the clob, eg

create or replace
procedure load_a_file( p_filename in varchar2 )
as
  l_clob  clob;
  l_bfile bfile;
begin
  insert into t values ( empty_clob() )
  returning c into l_clob;

  l_bfile := bfilename( 'MY_FILES', p_filename );
  dbms_lob.fileopen( l_bfile );
  dbms_lob.loadfromfile( l_clob, l_bfile,
                         dbms_lob.getlength( l_bfile ) );
  dbms_lob.fileclose( l_bfile );
end;
/
- create table t1
- insert into T1 select * from t@x (or vice-versa)
which violates your "in which case that's no good, as we have no ability to create new objects in X", but I don't think you can get around that (unless you have OS access to Z).
What I can't really work out here is - if you have no OS access to Z, what use is there in copying a file to it? That would suggest you *do* have access on Z to some sort of OS 'structure' (ie where the file will be stored). If you have that, you can put a script there, no? Then a script could scp/ssh/etc to X to cat the file, etc.
Jess, March 28, 2018 - 10:55 am UTC
(Sorry Connor, slight clarification to my last comment. Obviously we can pad the source file with extras to make it x512, but would prefer to do as little as possible on that server/system as possible if there's another way, as it's an external system we don't have much influence with.)
Jess, March 28, 2018 - 10:40 pm UTC
Thanks for the idea (whether we can leverage it is a separate issue, but glad to have it in back pocket; at least it means there can be one generic table through which all files go the same way instead of having a separate table for each file). We didn't think of this. All of a sudden, SQL Loader is starting to look good too :)
To answer your question ("if you have no OS access to Z, what use is there copying a file to it ? That would suggest you *do* have access in Z to some sort of OS 'structure' [...]? Then a script could scp/ssh/etc to X to cat the file etc."),
As you well know, database managed services, such as RDS, restrict access/functionality, and among the first things to go is access to the file system. There is most definitely an OS structure (from/to which files can be read/written), but the only thing that can access it is a SQL process - it's not possible to issue a unix command directly or to interact with a file outside of SQL. For example, you can write a 'local' log with utl_file, but you can't just take the log file itself. Instead, you have to connect using a SQL session from somewhere else, select the contents of your file (RDS gives you a wrapper for it, else you'd be writing your own procedure), and spool them to a file on the server from which you established the connection.
It can be rather non-trivial to do this with Oracle. Other databases, such as Postgres, have the ability to read/write a remote file. Unfortunately, this means that when we have to migrate clients who do a ton of file writing directly in the database, and who have a primary objective of a managed service in the cloud and only a secondary objective of staying with Oracle, it becomes a tough sell... They see the complexity involved and want to run for the hills (and it's hard to blame them for doing so) :( :(
March 30, 2018 - 5:05 am UTC
"There is most definitely an OS structure (from/to which files can be read/written)"
That's my point. If you have file system you can write to (even only from the db), you can:
- write files there
- write *scripts* there
- use those scripts for preprocessing options in external tables, etc
A reader, July 09, 2019 - 5:54 am UTC
Can you please be specific about the changes that need to be performed in the service's Log On tab?
On which machine should the changes be performed?
July 09, 2019 - 6:39 am UTC
If you're referring to:
When an oracle instance is created the services that support it are setup to Log On As the SYSTEM
Then it's the server that the Oracle Database instance runs on.