Dynamically Changing File Name in External Tables

Question and Answer
Connor McDonald

Thanks for the question, Michael.

Asked: March 29, 2005 - 10:38 pm UTC

Last updated: August 23, 2021 - 2:16 am UTC

Version: 10.1.0.3.0

Viewed 10K+ times

You Asked

Is it possible to pass the file name(s) into an Oracle external table? My source files are from multiple sources with the same format but different names. To keep from having multiple external tables, I would like to dynamically change the file name (LOCATION). Is this possible? Can you also time stamp the log file that is being written?

Thanks
Michael

and Tom said...

alter table ext_table location ('new_filename');


http://docs.oracle.com/cd/B10501_01/server.920/a96540/statements_32a.htm#2115070

shows what you can alter about external tables.

You would/could use UTL_FILE.FRENAME on the log files (but there are timestamps contained "in" them)

and if you rename the file, you'll have a log file "per" filename -- so the OS will have the timestamp on it as well.
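For instance, a rename block along those lines might look like this (a sketch only; the directory object and file names here are hypothetical):

```sql
-- Rename the current log file to a timestamped name so each load keeps
-- its own log. DATA_DIR and the file names are hypothetical examples.
begin
  utl_file.frename(
    src_location  => 'DATA_DIR',
    src_filename  => 'external_table.log',
    dest_location => 'DATA_DIR',
    dest_filename => 'external_table_' ||
                     to_char(sysdate, 'YYYYMMDD_HH24MISS') || '.log',
    overwrite     => true);
end;
/
```

FRENAME raises ORA-29292 if the database's OS user cannot write to the file or directory.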


Comments

just to add...

Connor, March 30, 2005 - 8:08 am UTC

if memory serves, this particular table DDL didn't mark dependent procedures etc invalid.... but I'm too lazy to check this presently :-)

Tom Kyte
March 30, 2005 - 9:11 am UTC

You'll have to excuse the single user test script, I know they don't "prove" things, but it does seem to indicate a possibility that Connor is right

ops$tkyte@ORA9IR2.US.ORACLE.COM> create or replace directory data_dir as '/tmp/'
  2  /
Directory created.
 
ops$tkyte@ORA9IR2.US.ORACLE.COM> create table external_table
  2  (EMPNO NUMBER(4) ,
  3   ENAME VARCHAR2(10),
  4   JOB VARCHAR2(9),
  5   MGR NUMBER(4),
  6   HIREDATE DATE,
  7   SAL NUMBER(7, 2),
  8   COMM NUMBER(7, 2) ,
  9   DEPTNO NUMBER(2)
 10  )
 11  ORGANIZATION EXTERNAL
 12  ( type oracle_loader
 13    default directory data_dir
 14    access parameters
 15    ( fields terminated by ',' )
 16    location ('emp.dat')
 17  )
 18  /
Table created.
 
ops$tkyte@ORA9IR2.US.ORACLE.COM> create or replace procedure p
  2  as
  3  begin
  4          for x in ( select * from external_table )
  5          loop
  6                  null;
  7          end loop;
  8  end;
  9  /
Procedure created.
 
ops$tkyte@ORA9IR2.US.ORACLE.COM> create or replace view v as select * from external_table;
View created.
 
ops$tkyte@ORA9IR2.US.ORACLE.COM> @invalid
ops$tkyte@ORA9IR2.US.ORACLE.COM> break on object_type skip 1
ops$tkyte@ORA9IR2.US.ORACLE.COM> column status format a10
ops$tkyte@ORA9IR2.US.ORACLE.COM> select object_type, object_name, status
  2  from user_objects
  3  where status = 'INVALID'
  4  order by object_type, object_name
  5  /
 
no rows selected
 
ops$tkyte@ORA9IR2.US.ORACLE.COM> alter table external_table location( 'foo.dat' );
 
Table altered.
 
ops$tkyte@ORA9IR2.US.ORACLE.COM> @invalid
ops$tkyte@ORA9IR2.US.ORACLE.COM> break on object_type skip 1
ops$tkyte@ORA9IR2.US.ORACLE.COM> column status format a10
ops$tkyte@ORA9IR2.US.ORACLE.COM> select object_type, object_name, status
  2  from user_objects
  3  where status = 'INVALID'
  4  order by object_type, object_name
  5  /
 
no rows selected
 

Alter External Table

Michael, March 30, 2005 - 11:46 am UTC

Thank you! This worked great. I have set up the log file as an external table and am reviewing the external log file for errors.

David Aldridge, March 30, 2005 - 3:53 pm UTC

>> You'll have to excuse the single user test script, I know they don't "prove" things,... <<

:D

Larger systems...

Connor, March 30, 2005 - 7:16 pm UTC

> You'll have to excuse the single user test script, I know they don't "prove"
> things, but it does seem to indicate a possibility that Connor is right

Yes, but this DDL probably *does* mark procedures invalid when you're dealing with much larger systems, more important clients, and self-inflated consultants. Clearly your test case does not take this into account

hee hee

:-)

Using external table in parallel

Richard, June 04, 2005 - 4:57 am UTC

Hello Tom,
I was looking for a way to use an external table to load data in parallel from different files having the same structure.
I used an "alter table" statement and it worked for me.
My question is: is this the correct way to handle this situation, avoiding intermixed/corrupt input?
Here is my test script.
I am using 10.1.0.3.0 on RHEL3 and the directory is an NFS-mounted filesystem.
Yours,
Richard

drop table ext_test;
create table ext_test (
foo varchar2(10)
)
organization external
(
type oracle_loader
default directory vend_data
access parameters (fields terminated by ';')
location ('ext_test_2.dat')
)
/

host echo "one;" > /share/Oracle/vend_data/ext_test_1.dat
host echo "two;" >> /share/Oracle/vend_data/ext_test_1.dat
host echo "eins;" > /share/Oracle/vend_data/ext_test_2.dat
host echo "zwei;" >> /share/Oracle/vend_data/ext_test_2.dat

create table test_t1 (foo varchar2(10));

set serveroutput on
begin
for x in (select * from ext_test)
loop
insert into test_t1 values (x.foo);
dbms_lock.sleep(30);
end loop;
end;
/

select * from test_t1;

/* yields
FOO
----------
eins
zwei
*/

/* parallel in another session */
alter table ext_test location('ext_test_1.dat');

select * from ext_test;

/* yields
FOO
----------
one
two
*/





Tom Kyte
June 04, 2005 - 8:35 am UTC

oh no -- you don't want to do that.

You want to list all of the files in the single external table (it takes a list).

You want to alter the external table to be parallel.

and just do a parallel direct path insert! it'll parallelize everything for you, direct path it.

you don't want to do what you are doing, that would be a mess.
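A sketch of that approach, reusing the table names from the test script above (the parallel degree is an arbitrary choice):

```sql
-- Point the one external table at all the files at once
alter table ext_test location ('ext_test_1.dat', 'ext_test_2.dat');

-- Make the external table eligible for parallel query
alter table ext_test parallel 4;

-- Parallel direct-path insert into the target table
alter session enable parallel dml;
insert /*+ append parallel(t, 4) */ into test_t1 t
select * from ext_test;
commit;
```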

Using external table in parallel - continued

Richard, June 04, 2005 - 9:18 am UTC

Helo Tom,

I guess I was not explicit enough in describing my situation.

Actually, I am using Perl scripts to import data from external suppliers, which deliver at random intervals. The scripts normalize the data coming in files with differing structures and output the normalized data into a file, which gets loaded into a target table by means of an external table.

My problem is not how to maximize throughput, but rather how to avoid possible problems when using the same external table with differing file content in parallel.

Yours,

Richard

Tom Kyte
June 04, 2005 - 11:53 am UTC

you won't be using the same external table with differing content.

You'll need a table per content OR you'll need to serialize access to the table.



Umm...

Kashif, June 04, 2005 - 10:26 am UTC

Hey Tom -

Can you explain how we would accomplish what you recommended to "Richard from Germany/Frankfurt" a couple of posts above? I'm not entirely sure what you mean by:

<quote>
You want to list all of the files in the single external table (it takes a list)
</quote>

Thanks.

Kashif


Tom Kyte
June 04, 2005 - 12:16 pm UTC

an external table may have a list of files associated with it; they are just treated as if they were a single file. So, if you had 5 files to "load" via the external table, you just list them all in the definition, and when you select * from external_table, you are querying all 5 files.

Another External Table Issue

Faisal, June 04, 2005 - 11:05 am UTC

Hi Tom,

In our DWH, we receive data from 10 different companies; each company has 5 data files of different formats. I have created 10x5 external tables. Once the load for any given company is completed (i.e. one set of five files), we move the files to an archive folder (we must move the files because the same data files will be FTPed by the source system the next day).
Now if we query the external table, it generates an error because the flat file does not exist in the specified location. Could you suggest the best possible solution to this problem?

Thanks,

Faisal

Tom Kyte
June 04, 2005 - 12:17 pm UTC

well, tell me, what would you LIKE to have happen when you move the files?? I don't see "a problem", I see an obvious side effect from moving the files. But what should "happen"?

Faisal, June 04, 2005 - 3:06 pm UTC

In this case, if the process queries the table, it should say "no data found". But my second point: is it OK to leave an external table without its file, i.e. is that a logically accepted approach?

Regards,


Tom Kyte
June 04, 2005 - 6:45 pm UTC

if you want no data found, then when you move the files, TOUCH them in that directory (leave behind empty files).

Else, be prepared to get an ERROR when querying them.

Either is acceptable to me, you have to decide which is most appropriate for you.
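A sketch of the "leave behind empty files" idea (the paths and names are hypothetical; the host commands assume a Unix server):

```sql
-- Archive the loaded file, then leave a zero-length file of the same
-- name behind so queries return no rows instead of raising ORA-29913.
host mv /data/inbox/company1_file1.dat /data/archive/
host touch /data/inbox/company1_file1.dat

select * from company1_ext_table;   -- no rows selected, no error
```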

Using external table in parallel - continued

Maurizio De Giorgi, March 02, 2006 - 3:44 pm UTC

Tom, what do you think about the following approach (I was trying to leverage Oracle statement consistency)?

var h_flowid NUMBER;
var file_correctly_processed VARCHAR2 (1);

BEGIN
:h_flowid := 12345;
:file_correctly_processed := 'N';
END;
/

DROP TABLE EXT_PARALLEL_LOAD_TEST;

CREATE TABLE EXT_PARALLEL_LOAD_TEST
(
REC_NUM NUMBER,
DATI VARCHAR2(4000 BYTE)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY TISF_INBOX
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
NOBADFILE
NODISCARDFILE
NOLOGFILE
DATE_CACHE 0
FIELDS
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
(
REC_NUM RECNUM,
DATI CHAR (4000)
)
)
LOCATION (TISF_DONEBOX:'AY17.SE.BOEN01.CB.P040624')
)
REJECT LIMIT UNLIMITED
LOGGING
NOCACHE
NOPARALLEL;

DROP TABLE YBE_I16_IDT1F_TEMP;

CREATE TABLE YBE_I16_IDT1F_TEMP
(Z_FLOWID, PROGR_DOC, TIPO_REC, QUALIFICATORE)
NOLOGGING
AS
SELECT OBJECT_ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME
FROM all_objects
WHERE 1=0;

DROP TABLE bad_ext_locations;

CREATE TABLE bad_ext_locations
(z_flowid, ext_table, LOCATION, location_distinct_count)
NOLOGGING
AS
SELECT COUNT (*) OVER (), TABLE_NAME, LOCATION,
COUNT (DISTINCT LOCATION) OVER () location_distinct_count
FROM user_external_locations
WHERE 1=0;
/*
This loader does an ALTER...LOCATION to set the location properly
*/
BEGIN
EXECUTE IMMEDIATE '
ALTER TABLE EXT_PARALLEL_LOAD_TEST
LOCATION (TISF_DONEBOX:''AY17.SE.BOEN01.CB.P040624'')
'
;
END;
/
/*
Let's simulate another loader doing an ALTER...LOCATION
*/
DECLARE
PRAGMA autonomous_transaction;
BEGIN
EXECUTE IMMEDIATE '
ALTER TABLE EXT_PARALLEL_LOAD_TEST
LOCATION (TISF_DONEBOX:''AY17.SE.BOEN01.CB.P040623'',
TISF_DONEBOX:''AY17.SE.BOEN01.CB.P040624'')
'
;
END;
/
timing START first_attempt;
INSERT ALL
WHEN (SUBSTR (dati, 5, 4) = 'IDT1')
THEN
INTO YBE_I16_IDT1F_TEMP
(Z_FLOWID, PROGR_DOC, TIPO_REC, QUALIFICATORE)
VALUES (:h_flowid, SUBSTR (dati, 1, 4), SUBSTR (dati, 5, 4), SUBSTR (dati, 10, 3))
WHEN (LOCATION <> 'AY17.SE.BOEN01.CB.P040624' OR location_distinct_count <> 1)
THEN
INTO bad_ext_locations
(z_flowid, ext_table, LOCATION, location_distinct_count)
VALUES (:h_flowid, 'EXT_PARALLEL_LOAD_TEST', LOCATION, location_distinct_count)
WITH loc_tab AS
(
SELECT LOCATION,
COUNT (DISTINCT LOCATION) OVER () location_distinct_count
FROM user_external_locations
WHERE table_name = 'EXT_PARALLEL_LOAD_TEST')
SELECT /*+ CARDINALITY (E 1000000) */
e.rec_num, e.dati, loc_tab.LOCATION, loc_tab.location_distinct_count
FROM loc_tab LEFT OUTER JOIN EXT_PARALLEL_LOAD_TEST e
ON ( loc_tab.LOCATION = 'AY17.SE.BOEN01.CB.P040624'
AND loc_tab.location_distinct_count = 1);
timing STOP first_attempt;
/*
Was the file correctly processed, or was the location changed by another
loader between the ALTER and the INSERT ALL executed by this loader?
*/
BEGIN
SELECT DISTINCT 'N'
INTO :file_correctly_processed
FROM bad_ext_locations
WHERE z_flowid = :h_flowid;
EXCEPTION
WHEN NO_DATA_FOUND THEN
:file_correctly_processed := 'Y';
END;
/
print :file_correctly_processed;
DELETE bad_ext_locations WHERE :file_correctly_processed = 'N';
/*
This loader does an ALTER...LOCATION to set the location properly
*/
BEGIN
EXECUTE IMMEDIATE '
ALTER TABLE EXT_PARALLEL_LOAD_TEST
LOCATION (TISF_DONEBOX:''AY17.SE.BOEN01.CB.P040624'')
'
;
END;
/
/*
Let's simulate another loader doing an ALTER...LOCATION
*/
DECLARE
PRAGMA autonomous_transaction;
BEGIN
EXECUTE IMMEDIATE '
ALTER TABLE EXT_PARALLEL_LOAD_TEST
LOCATION (TISF_DONEBOX:''AY17.SE.BOEN01.CB.P040623'')
'
;
END;
/
timing START second_attempt;
INSERT ALL
WHEN (SUBSTR (dati, 5, 4) = 'IDT1')
THEN
INTO YBE_I16_IDT1F_TEMP
(Z_FLOWID, PROGR_DOC, TIPO_REC, QUALIFICATORE)
VALUES (:h_flowid, SUBSTR (dati, 1, 4), SUBSTR (dati, 5, 4), SUBSTR (dati, 10, 3))
WHEN (LOCATION <> 'AY17.SE.BOEN01.CB.P040624' OR location_distinct_count <> 1)
THEN
INTO bad_ext_locations
(z_flowid, ext_table, LOCATION, location_distinct_count)
VALUES (:h_flowid, 'EXT_PARALLEL_LOAD_TEST', LOCATION, location_distinct_count)
WITH loc_tab AS
(
SELECT LOCATION,
COUNT (DISTINCT LOCATION) OVER () location_distinct_count
FROM user_external_locations
WHERE table_name = 'EXT_PARALLEL_LOAD_TEST')
SELECT /*+ CARDINALITY (E 1000000) */
e.rec_num, e.dati, loc_tab.LOCATION, loc_tab.location_distinct_count
FROM loc_tab LEFT OUTER JOIN EXT_PARALLEL_LOAD_TEST e
ON ( loc_tab.LOCATION = 'AY17.SE.BOEN01.CB.P040624'
AND loc_tab.location_distinct_count = 1);
timing STOP second_attempt;
/*
Was the file correctly processed, or was the location changed by another
loader between the ALTER and the INSERT ALL executed by this loader?
*/
BEGIN
SELECT DISTINCT 'N'
INTO :file_correctly_processed
FROM bad_ext_locations
WHERE z_flowid = :h_flowid;
EXCEPTION
WHEN NO_DATA_FOUND THEN
:file_correctly_processed := 'Y';
END;
/
print :file_correctly_processed;
DELETE bad_ext_locations WHERE :file_correctly_processed = 'N';
/*
This loader does an ALTER...LOCATION to set the location properly
*/
BEGIN
EXECUTE IMMEDIATE '
ALTER TABLE EXT_PARALLEL_LOAD_TEST
LOCATION (TISF_DONEBOX:''AY17.SE.BOEN01.CB.P040624'')
'
;
END;
/

timing START third_attempt;
INSERT ALL
WHEN (SUBSTR (dati, 5, 4) = 'IDT1')
THEN
INTO YBE_I16_IDT1F_TEMP
(Z_FLOWID, PROGR_DOC, TIPO_REC, QUALIFICATORE)
VALUES (:h_flowid, SUBSTR (dati, 1, 4), SUBSTR (dati, 5, 4), SUBSTR (dati, 10, 3))
WHEN (LOCATION <> 'AY17.SE.BOEN01.CB.P040624' OR location_distinct_count <> 1)
THEN
INTO bad_ext_locations
(z_flowid, ext_table, LOCATION, location_distinct_count)
VALUES (:h_flowid, 'EXT_PARALLEL_LOAD_TEST', LOCATION, location_distinct_count)
WITH loc_tab AS
(
SELECT LOCATION,
COUNT (DISTINCT LOCATION) OVER () location_distinct_count
FROM user_external_locations
WHERE table_name = 'EXT_PARALLEL_LOAD_TEST')
SELECT /*+ CARDINALITY (E 1000000) */
e.rec_num, e.dati, loc_tab.LOCATION, loc_tab.location_distinct_count
FROM loc_tab LEFT OUTER JOIN EXT_PARALLEL_LOAD_TEST e
ON ( loc_tab.LOCATION = 'AY17.SE.BOEN01.CB.P040624'
AND loc_tab.location_distinct_count = 1);
timing STOP third_attempt;
/*
Was the file correctly processed, or was the location changed by another
loader between the ALTER and the INSERT ALL executed by this loader?
*/
BEGIN
SELECT DISTINCT 'N'
INTO :file_correctly_processed
FROM bad_ext_locations
WHERE z_flowid = :h_flowid;
EXCEPTION
WHEN NO_DATA_FOUND THEN
:file_correctly_processed := 'Y';
END;
/
print :file_correctly_processed;
DELETE bad_ext_locations WHERE :file_correctly_processed = 'N';

COMMIT;

Tom Kyte
March 03, 2006 - 7:44 am UTC

other than 5 pages of code - I don't see anything here.

You don't expect me to read, reverse engineer, digest all of this and try to figure out what you were doing?

q: "what do you think of this approach"

a: I have no idea, simply because I have no clue what problem you might be trying to solve or what logic you applied in trying to solve that problem - mostly because I'm not a plsql compiler and cannot really spend the time reverse engineering your software - in an attempt to figure out what it's goal is.


Funny thing: I would read the code "literally". If it does not accomplish what you intended (you wanted to accomplish "A") but it does something else (it does "B") - what would happen? I might say "looks good to me, it will accomplish what it set out to accomplish" - the problem being that might not be the problem you were trying to solve!

Using external table in parallel - continued

Maurizio De Giorgi, March 03, 2006 - 1:05 pm UTC

Tom, I'm really sorry. You give us so much, and I'm not able to give you even the basic info you need to help me. Please let me explain.

I titled my review "Using external table in parallel - continued". In my intention it was the shortest way to tell you that the context and the goals of my review were exactly the same as those of the previous posts - with the same title - from Richard. Actually, I did what Richard did to continue his first post (but his first post was just a few lines upward!).

Given that, I supposed that posting the code was less prone to ambiguity than any kind of description. It was also a way to give you a reproducible test case, as complete as possible (the files are missing, but I think you could use almost any file).

Here is my attempt to explain.
The file(s) used by a query which reads an external table are those referred to in the location when the query starts. I did some basic testing. It seems to show that once the query "is going" in session A, you can change the location in session B, but the query will continue to read the file initially pointed to by the location parameter. It seems that only new queries in session B (or A) will use the new location.

In this scenario I thought it could be possible to assign the file to the external table, query the table (actually loading the data) while tracking the file(s) in use, and check that the file used was the right one - or loop and try again if another DIY parallel process changed the location between this process setting the location and starting the query.

The code should simulate (and test) the depicted solution.

Besides, you often say "challenge authority". It seems to me that everything is working, but this is alarming me, since I saw you told Richard there was no way to do it except "to serialize access to the table".

Thanks.
Maurizio.

Tom Kyte
March 03, 2006 - 2:13 pm UTC

yeah but, five pages of code....

and the fact that I get lots of these...

and spend seconds on each (else I would do nothing else ever). It is "big"



What you are describing has nothing to do with read consistency... But I would not trust it to work forever.

We cannot always use test cases to prove something is always true. We can use them easily to prove something is NOT ALWAYS true.

I will not recommend this approach, no. I will still recommend serialization here. I would not recommend changing the location like that whilst things are happening.


Using external table in parallel - continued

Maurizio De Giorgi, March 07, 2006 - 2:47 pm UTC

Tom, thanks for your time.

I'm going through the "lock and serialize" approach per your advice, but - if possible here - I would like you to clarify a couple of points.

It seems to me that the "untrustworthy behaviour" is exactly the "picture" of the definition: "Read consistency, as supported by Oracle... guarantees that the set of data seen by a statement is consistent with respect to a single point in time and does not change during statement execution".

I feel like I'm missing something - a concept or fact - that is obvious or implicit to you (I know... there are many!). So, can you give us some detail on the reasons why "that has nothing to do with read consistency"?

Given that, do you see any "special" reason that causes Oracle not to lock and serialize automatically for us?

N.B.: I just hope this is not *so* important: Oracle 9iR2 was used for testing the code instead of 10.1, as in Richard's review.

Finally, do you think having a way of "Setting a Column to the Datafile Record Location" could be requested as an enhancement to the Oracle capabilities related to external tables (the "companion" of RECNUM - "Setting a Column to the Datafile Record Number")?

Thanks.
Maurizio.

Tom Kyte
March 09, 2006 - 11:15 am UTC

read consistency is done via undo.

There is no such thing as UNDO for external tables, the very concept doesn't even begin to exist. Read consistency quite simply "is not part of the equation" when considering external tables!

You may request any enhancement, absolutely.

Using external table in parallel - continued

Maurizio De Giorgi, March 09, 2006 - 12:06 pm UTC

Tom, sorry for going on so long but I need to understand.

Do you mean

"Multiversion Concurrency Control
Oracle automatically provides read consistency to a query so that all the data that the query sees comes from a single point in time (statement-level read consistency). Oracle can also provide read consistency to all of the queries in a transaction (transaction-level read consistency)"

simply does not apply to any query involving external tables?

If so, shouldn't it be better to clearly document this (AFAIK) unique exception?

Is it correct that in order to "guarantee" multiversion concurrency control (statement- or transaction-level read consistency) while accessing an external table, Oracle would have to lock and serialize access to the external table's location (I saw that Oracle tracks external table use in v$access), but at the moment it does not (you did not answer about this aspect)?

Thanks.
Maurizio.

Tom Kyte
March 09, 2006 - 3:28 pm UTC

you do not update external tables.

therefore, there is no such thing as read consistency - since that only applies to modified data. You NEVER MODIFY external tables, you cannot insert, update, delete them. You can do DDL to them but DDL never has and never will do "read consistency" since DDL is not insert/update/delete and so on.


read consistency kicks in when you read a block that was modified since your statement or transaction began. It is not possible for an external table block to have been modified by Oracle - hence it just never "happens", doesn't apply at all.


This is an obvious exception - since, well, you don't have the ability to MODIFY this stuff? There are no updates, no deletes, no inserts. How could it apply at all?

(and v$ tables are documented to be not read consistent as well)



external tables are just files, outside of OUR control.

Using external table in parallel - continued

Maurizio De Giorgi, March 10, 2006 - 8:30 am UTC

Tom, all these concepts are known and (I hope enough) clear to me.

Perhaps I have to admit some difficulty figuring out exactly how DDL in general (but on external tables or read-only objects especially) "interacts" with transactions and the read-consistency model - or, in other words, what exactly happens when somebody issues a DDL while "transactions are going".

I was thinking of read-only objects as included - while not "actively" - in the read-consistency model. I mean, they participate in transactions, but being RO they never change. They participate with one constant version of data. Oracle assumes - and controls - that the constant version is not being changed by anybody.

I know they are not comparable but let's consider RO tablespaces and their RO objects.

If you want to change (rename) the datafile "below" an RO tablespace, you first have to take the tablespace offline. If you try without offlining it, you get an error.

External tables do not belong to any tablespace, so this is not directly applicable, but let's imagine a "hidden", system-managed, single RO tablespace - transparent to the user - below any external table definition. When you try to change the location of the external table, the system actually renames the datafile of the tablespace. If someone is using that tablespace (accessing data through the external table), it would give you an error. You would first have to put the external table (actually the associated "hidden" tablespace) in some way "offline" before being able to change the location.

Let's consider bfiles.

"BFILEs are read-only; you cannot modify them. They support only random (not sequential) reads, and they do not participate in transactions. The underlying operating system must maintain the file integrity, security, and durability for BFILEs. The database administrator must ensure that the file exists and that Oracle processes have operating-system read permissions on the file."

but the file locator itself can be part of transactions like any other data (or not?).

In my experience, altering the external table location while someone else is accessing that table's data does not give any error, but "consistent" data. Since this is not documented as "unsafe" or "untrustworthy", I figured the location itself was a datum managed in some "transactionally read-consistent" way.

Thanks for your clarifications.
Maurizio.

Tom Kyte
March 10, 2006 - 12:30 pm UTC

short sentence should clear all up:

DDL and Transactions mix as well as OIL and WATER, they don't mix at all.




read consistency has nothing to do with DDL - DDL breaks everything. Doing DDL in a live production system is a pretty bad idea (lots of cursor invalidations, you'll get ora-4020's, nothing good comes from it)




Using external table in parallel - continued

Maurizio De Giorgi, March 10, 2006 - 5:06 pm UTC

Tom,

I do not want to be ungrateful to you. I think I am being loyal to your teachings.

After this "round trip" through read consistency, DDL, etc., my original doubts - my first questions - are here again.

If "DDL breaks everything" (etc.), why do I not get an ORA-00054(-like) error when I try to alter the location while the table is being read by someone else?

Why do you think this behaviour could be changed in the future?

Could something be implemented better "there"?

Are you omitting to say something?

Thanks.
Maurizio.

Tom Kyte
March 10, 2006 - 8:36 pm UTC

for the same reason you can truncate a table I am reading and I MIGHT get an ORA-8103 or I MIGHT get the data - it all depends.




Log

Marcio, March 13, 2006 - 4:24 pm UTC

Is it possible for the log and bad files to be created in a different directory than the default directory?

Tom Kyte
March 14, 2006 - 9:59 am UTC

yes, just tell us where, the syntax supports it.

BADFILE 'DIR_NAME1':'t.bad'
LOGFILE 'DIR_NAME2':'t.log'




Changing the Badfile & Logfile Name alone

A Reader, May 04, 2006 - 3:38 pm UTC

hi tom,

we get data from about 70 sources with the same format. We have external tables defined as follows, and our plan was to change the location, badfile and logfile.

Changing the location is no problem (using alter table). However, when trying to change the badfile and logfile, the problem we are facing is that you have to alter the entire access parameters clause - along with the positions of each field. Is there a way to change just the badfile and logfile components of the access parameters without changing the other access parameters?

will appreciate your help

create table file_cabs_lcostcen_asc
(
COUNTRY_CODE VARCHAR(3),
P_CC_CODE VARCHAR2(6),
P_CC_DESCRIPTION VARCHAR2(30),
P_CC_STD_HRS VARCHAR2(5)
)
organization external
(
type oracle_loader
default directory load_dock_cabs
access parameters
(
records delimited by newline characterset WE8ISO8859P1
logfile 'COUNTRY_MMYYLCOSTCEN_ASC.log'
badfile 'COUNTRY_MMYYLCOSTCEN_ASC.bad'
fields LRTRIM
(
COUNTRY_CODE POSITION(1:3),
P_CC_CODE POSITION (7:12),
P_CC_DESCRIPTION POSITION (13:42),
P_CC_STD_HRS POSITION (43:47)
)
)
location ('COUNTRY_MMYYLCOSTCEN_ASC.txt')
)
parallel 1
reject limit 0;
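As far as I know, ALTER TABLE ... ACCESS PARAMETERS replaces the entire clause, so one workable (if verbose) approach is simply to re-issue the full clause with only the file names changed - a sketch, with hypothetical new names:

```sql
-- Re-state the whole access parameters clause, changing only the
-- logfile/badfile names ('US_0506...' is a hypothetical example).
alter table file_cabs_lcostcen_asc
  access parameters
  ( records delimited by newline characterset WE8ISO8859P1
    logfile 'US_0506LCOSTCEN_ASC.log'   -- new log file name
    badfile 'US_0506LCOSTCEN_ASC.bad'   -- new bad file name
    fields LRTRIM
    ( COUNTRY_CODE     POSITION (1:3),
      P_CC_CODE        POSITION (7:12),
      P_CC_DESCRIPTION POSITION (13:42),
      P_CC_STD_HRS     POSITION (43:47)
    )
  );

alter table file_cabs_lcostcen_asc
  location ('US_0506LCOSTCEN_ASC.txt');
```

In practice the full clause can be kept in one place (a PL/SQL procedure or template) so only the names are substituted.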

Changing LOCATION, and documenting LOCATION

Duke Ganote, June 27, 2006 - 12:23 pm UTC

Maurizio De Giorgi wrote that "Setting a Column to the Datafile Record Location" would be requested as an enhancement.

At this point, we're working around this by using SEQUENCE(MAX) instead of RECNUM to capture the record number
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96652/ch06.htm#1008235
and having the shell script that invokes SQL*Loader update a file# column after every file load.

Obviously something similar could be done after the ALTER TABLE to change LOCATION (answer to original question), and loading SQL, if external tables were being used.

The loading SQL might even, perhaps, get the LOCATION from DBA_EXTERNAL_LOCATIONS (first checking for just one location per external table)...hmmm. I assume that RECNUM would be reset for each LOCATION if the external table had more than one?

Tom Kyte
June 27, 2006 - 2:44 pm UTC

files are transparently concatenated with external tables, as if they were just one big file.

dynamically changing file name in external tables

Sriram S, July 17, 2006 - 4:26 pm UTC

When I try to use utl_file.frename() to rename the bad and log files, I get a file operation error. I looked at the log files that got generated and the permissions are like this:
-rw-r--r-- 1 oracle 719 Jul 17 13:14 COUNTRY_MMYYLCOSTCEN_ASC.log

Would you know if there is a way to dynamically alter the external table log and bad file names as well, along with the data file name?

Tom Kyte
July 18, 2006 - 7:55 am UTC

let's see the example here - can you just write a block of code that can rename them (need no external table or anything, just the file in the OS).

Let's see how you are making the call to frename

external tables - changing the log and bad file names.

Sriram, August 01, 2006 - 2:53 pm UTC

I created an external table as follows.
CREATE TABLE myTest
(
method_name VARCHAR2(10),
argument1 VARCHAR2(10)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY METRICS_DATA_DIR
ACCESS PARAMETERS
( LOGFILE metrics_data_dir:'mylog.log'
fields terminated BY ' ' LTRIM (method_name,argument1)

)
LOCATION (METRICS_DATA_DIR:'mydata.dat')
)
REJECT LIMIT 0
NOPARALLEL
NOMONITORING;
I created the data file. Permissions are set to 777.

It gives an error. However, if you create it this way - changing the order a little bit:

drop table mytest;
CREATE TABLE myTest
(
method_name VARCHAR2(10),
argument1 VARCHAR2(10)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY metrics_data_dir
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
LOGFILE METRICS_DATA_DIR:'mylog_%p.log'
fields terminated BY ' '
LTRIM (method_name,argument1)
)
LOCATION (metrics_data_dir:'mydata.dat')
)
REJECT LIMIT 0
NOPARALLEL
NOMONITORING;

I am now able to modify the log and bad file locations dynamically. I still have issues deleting the log and bad files it creates.

I use this to remove the file.
UTL_FILE.fremove ('METRICS_DATA_DIR', i_new_filename);

This works when I remove the data file after a successful load, but fails when I try to get rid of the logs. I have a shell script now that runs every week to get rid of the logs. Sorry, I got tied up with other things at work and forgot to update the issue.
When do you plan to be in Seattle next?
Thanks
Sriram


Tom Kyte
August 02, 2006 - 10:25 am UTC

define "fails" - perhaps with an example that we can just cut and paste.

Serialise access

Billy, October 16, 2006 - 7:26 am UTC

Tom said:
> You'll need a table per content OR you'll need to serialize
> access to the table.

I have an interesting challenge (or problem ;-) ) regarding using external tables.

The developers have Perl processes that process data files collected from network devices (the Perl libs used are part of a 3rd party product suite - so no choice there).

The resulting CSV files are then loaded into Oracle. They opted for the dynamic option. They load 1000s of files - which means dynamically creating 1000s of external tables and running 1000s of non-sharable INSERT SQLs to insert the external data into the permanent Oracle table.

They now want to look at doing this in a serialised fashion - the actual load itself is millisecs and the thinking is that doing it serially will not impact overall performance, and lessen the impact of all those DDLs and dynamic non-sharable SQLs.

How to serialise?

One choice is using DBMS_LOCK and using a user lock to control access to the table. However, it is not very robust as it requires everyone to play (code) along with this - using the user lock to serialise.

The other option could be doing an ALTER TABLE to change the file location (a DDL), then grabbing an exclusive table lock on the table and checking if the location is still what it was set to - if it is, proceed with the INSERT, else try again.

Comments, thoughts and suggestions on this will be appreciated.

What would have been an excellent feature for this problem would be to allow an external table to behave like a global temporary table in terms of its location-related attributes (i.e. making them session-based).
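
As a sketch, the DBMS_LOCK user-lock option (the first option above) might look like the following - the lock name, timeout and table names are all hypothetical, and as noted it only works if every loader plays along:

```sql
-- every loader must take this lock before the ALTER/INSERT
declare
  l_handle varchar2(128);
  l_status integer;
begin
  dbms_lock.allocate_unique('EXT_TAB_LOADER', l_handle);
  l_status := dbms_lock.request(l_handle,
                                lockmode          => dbms_lock.x_mode,
                                timeout           => 60,
                                release_on_commit => false);
  if l_status <> 0 then
    raise_application_error(-20001, 'could not serialise, status=' || l_status);
  end if;

  execute immediate q'[alter table ext_table location ('file1.csv')]';
  insert into perm_table select * from ext_table;
  commit;

  l_status := dbms_lock.release(l_handle);
end;
/
```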

Tom Kyte
October 16, 2006 - 7:59 am UTC

I would disagree with the "not very robust", you are making a design decision, that doesn't affect the "robustness" of anything - it is just a decision.


I don't see how the alter table/lock table would be "more robust" or "more flexible" - in fact it would be "more error prone", as:

time  session1      session2
t0    alter table
t1                  alter table
t2    lock table - whoops! we won't process the data we thought


why not a single external table with lots of files and processed as one big set??
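
For instance (file names hypothetical), the whole batch can be attached to one table and loaded with a single, sharable statement:

```sql
-- one external table, many files, one INSERT
alter table ext_table location ('dev0001.csv', 'dev0002.csv', 'dev0003.csv');

insert /*+ append */ into perm_table
select * from ext_table;
```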

Serialise access

Billy, October 16, 2006 - 8:58 am UTC

Tom wrote:
> I don't see how the alter table/lock table would be "more
> robust" or "more flexible" - in fact it would be "more
> error prone" as:
>
> time  session1      session2
> t0    alter table
> t1                  alter table
> t2    lock table - whoops! we won't process the data we thought

Yes - exactly. Which is why I was thinking that at t2, after taking the exclusive table lock, to first check that the location specified is unchanged before continuing (using the *_EXTERNAL_LOCATIONS view).

> why not a single external table with lots of files and
> processed as one big set??

The Perl processing itself runs in parallel, as the work done outside Oracle needs to be multi-processed (and it red-lines all 4 CPUs too *sigh*). This specific system (I almost wrote "thing") does not use Perl's multi-threading, btw.

Part of the problem here is having to integrate with Perl code from Cisco (we do not want to write and maintain that ourselves - it is rather complex and changes versions often).

Hmm.. I can ask them to do the load of the CSV files as a final, single processing step. But then they will need a process coordinator to kick that off once the Perl multi-processing is done.



Transparently concatenated...double your fun

Duke Ganote, January 15, 2007 - 9:46 am UTC

select location from dba_external_locations where table_name = 'TESTLOAD';

LOCATION
----------------------
adclmstate_200504.txt

select count(*) from testLoad;
COUNT(*)
--------
18019

alter table testLoad location ('adclmstate_200504.txt', 'adclmstate_200504.txt');

select location from dba_external_locations where table_name = 'TESTLOAD';

LOCATION
----------------------
adclmstate_200504.txt
adclmstate_200504.txt

select count(*) from testLoad;
COUNT(*)
--------
36038

Identify data file name when using multiple files

A reader, July 25, 2007 - 8:59 am UTC

Hi Tom,

Is it possible to identify which file a particular record came from if I am using multiple files for a given external table?

Here is an example:

drop table test_ext;

create or replace directory test_dir as '/home/oracle/test/';

CREATE TABLE test_ext
(
    col1            varchar2(10),
    col2            varchar2(10),
    audit_fname     varchar2(50)
)
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY test_dir
    ACCESS PARAMETERS
(
     records delimited by newline
     badfile test_dir:'test%a_%p.bad'
     logfile test_dir:'test%a_%p.log'
     fields terminated by ','
     missing field values are null
     (
        col1            CHAR,
        col2            CHAR,
        audit_fname     CHAR
     )
)
LOCATION ('test1.txt','test2.txt','test3.txt')
)
PARALLEL
REJECT LIMIT UNLIMITED;

/*

Test data files
-- test1.txt

a,b
c,d

-- test2.txt
e,f
g,h

-- test3.txt
i,j
k,l

-- Expected output for select * from test_ext
a,b,test1.txt
c,d,test1.txt
e,f,test2.txt
g,h,test2.txt
i,j,test3.txt
k,l,test3.txt

*/

Tom Kyte
July 26, 2007 - 9:21 am UTC

not that I am aware of, the filename is lost.
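
Before 11g, a common workaround is to loop over the files yourself, repointing a single-location external table at each one in turn and tagging the rows as you copy them into a staging table - a sketch, where the staging table and file list are hypothetical:

```sql
-- pre-11g sketch: process one file at a time so the filename is known
declare
  type t_files is table of varchar2(100);
  l_files t_files := t_files('test1.txt', 'test2.txt', 'test3.txt');
begin
  for i in 1 .. l_files.count loop
    execute immediate
      'alter table test_ext location (''' || l_files(i) || ''')';
    insert into test_stage (col1, col2, audit_fname)
      select col1, col2, l_files(i) from test_ext;
  end loop;
  commit;
end;
/
```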

External Table and fixed length data files

Srikanth. S, August 14, 2007 - 12:35 pm UTC

Tom

What kind of access parameters would you use to select from a fixed width data file?

Here is an example of the structure:

1. create directory test_dir as '/var/tmp/';

2. create table external_table2
(
NAME VARCHAR2(50)
,EMPL_STATUS VARCHAR2(1)
,THIS_EFFDT VARCHAR2(6)
,FULL_PT_TIME VARCHAR2(1)
)
ORGANIZATION EXTERNAL
( type oracle_loader
default directory test_dir
access parameters
( fields terminated by ',') <=== THIS IS WHERE I WOULD LIKE TO SPECIFY THAT THE DATA FORMAT IS FIXED WIDTH
location ('myfile.dat')
)
reject limit unlimited;


myfile.dat

LEE,JEFFREY S T20060513F
LONGVAL,NORMAJEAN A20051024F
ELBAZ,ALBERT A20051031F
FISCHER,MARK J A20051229F

Thanks
Tom Kyte
August 20, 2007 - 12:08 pm UTC

well, I took ulcase2.ctl

LOAD DATA
INFILE 'ulcase2.dat'
INTO TABLE EMP

( EMPNO    POSITION(01:04) INTEGER EXTERNAL,
  ENAME    POSITION(06:15) CHAR,
  JOB      POSITION(17:25) CHAR,
  MGR      POSITION(27:30) INTEGER EXTERNAL,
  SAL      POSITION(32:39) DECIMAL EXTERNAL,
  COMM     POSITION(41:48) DECIMAL EXTERNAL,
  DEPTNO   POSITION(50:51) INTEGER EXTERNAL)


and had sqlldr generate an equivalent external table:
$ sqlldr scott/tiger ulcase2.ctl external_table=generate_only


and this was the result - showing what you would put in the access parameters for a fixed width, positional file...


CREATE TABLE "SYS_SQLLDR_X_EXT_EMP"
(
  "EMPNO" NUMBER(4),
  "ENAME" VARCHAR2(10),
  "JOB" VARCHAR2(9),
  "MGR" NUMBER(4),
  "SAL" NUMBER(7,2),
  "COMM" NUMBER(7,2),
  "DEPTNO" NUMBER(2)
)
ORGANIZATION external
(
  TYPE oracle_loader
  DEFAULT DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
    BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'ulcase2.bad'
    LOGFILE 'test.log_xt'
    READSIZE 1048576
    FIELDS LDRTRIM
    REJECT ROWS WITH ALL NULL FIELDS
    (
      "EMPNO" (1:4) INTEGER EXTERNAL(4),
      "ENAME" (6:15) CHAR(10),
      "JOB" (17:25) CHAR(9),
      "MGR" (27:30) INTEGER EXTERNAL(4),
      "SAL" (32:39) INTEGER EXTERNAL(8),
      "COMM" (41:48) INTEGER EXTERNAL(8),
      "DEPTNO" (50:51) INTEGER EXTERNAL(2)
    )
  )
  location
  (
    'ulcase2.dat'
  )
)REJECT LIMIT UNLIMITED

Dynamically changing logfile

Robert Massey, November 22, 2007 - 11:12 am UTC

I was looking for a way to have the current date in the logfile name. I found this article by searching EXTERNAL TABLE LOGFILE.

I figured out that the logfile could be changed by respecifying the access parameters clause. However, I didn't want to have to respecify the entire clause.

Using regular expressions, I was able to do it. The following procedure changes the logfile for the external table CENSUS_X. log_dir is the directory object for the logfile. The LOGFILE parameter has to already be present in the ACCESS PARAMETERS clause for this to work.

begin
  for r in (
    select regexp_replace(
             access_parameters, '(logfile ).+',
             '\1log_dir:''epic-census-x-'
               || to_char(sysdate, 'yyyy-mm-dd')
               || '.log''', 1, 1, 'i'
           ) as new_access_parameters
    from   user_external_tables
    where  table_name = 'CENSUS_X'
  ) loop
    execute immediate 'alter table census_x access parameters ('
      || cast(r.new_access_parameters as varchar2) || ')';
  end loop;
end;
/

ACCESS_PARAMETERS is a CLOB, so CAST is used in the EXECUTE IMMEDIATE statement.

This works in 10g. I don't recall if 9i has regular expressions.

Dynamically change filename

Bharati, September 17, 2009 - 2:30 am UTC

Yes,
You can also update the user_external_locations view, which is based on the external_locations$ SYS object,
and set location = new location (new filename),
provided the user has rights to update the external_locations$ table.

From a non-SYS user:

Update external_locations$
set location = new name
where obj# = (id from all_objects for the external table)


Tom Kyte
September 17, 2009 - 9:28 am UTC

I do not want you to ever post on this site again - never - not if you are going to post $#@@$#$@ like that, that is the worst thing you can do.

You NEVER NEVER NEVER update anything in the sys schema, never. I would fire you on the spot, immediately, no questions.


You have a documented supported command, updating the data dictionary (which we cache in the SGA - we don't see your update, your approach quite simply does not work in real life) would be the worst thing you could do.

I am amazed at the "not smart" things people do. You have a COMMAND THAT IS DOCUMENTED AND SUPPORTED THAT CHANGES THE FILENAME.

WHY WOULD YOU EVEN CONSIDER YOUR 'approach' (I use quotes because your approach isn't an approach at all - it is, to be blunt, stupid)

read:
http://asktom.oracle.com/Misc/birth-of-asktom.html

What you did (update the dictionary) is the reason this site exists. It was stupid then, it is even more stupid now.

Wow

Martijn A. Hoekstra, September 17, 2009 - 9:40 am UTC

I'm truly amazed/flabbergasted.
It's 2009(almost 2010) and people still want to act like/mess up SYS.

Looks like a clear case of:

"History repeating"

again...

so how can you ...

Sokrates, September 17, 2009 - 10:20 am UTC

... how can you set maxtrans on an index in 10g (and above)
*WITHOUT* updating the dictionary ?
Tom Kyte
September 17, 2009 - 1:28 pm UTC

maxtrans is always 255, no matter what (no matter what the documentation says or anything :) )

Unloading data to flat files using external tables

Jobin Joseph, October 15, 2009 - 4:14 am UTC

I am trying to use the external table concept with the datapump driver to unload data from Oracle tables to OS files.
I was under the impression that permissions for all files that Oracle creates are based on the umask setting of the session which starts the instance or the listener. However, I just noticed that files created by an external table using DIRECTORY as the location do not have read permission for others, even though the umask is 0022. At the same time, a file written by the same instance to the same location using the UTL_FILE package does have read permission for others.

Linux As4, bash shell

SQL>CREATE TABLE test
 ORGANIZATION EXTERNAL
 (TYPE oracle_datapump
 DEFAULT DIRECTORY test
 ACCESS PARAMETERS (
 NOLOGFILE)
 LOCATION ('test.txt')
 ) AS SELECT * from dba_tables;


-rw-r-----  1 oracle dba 356352 Oct 15 13:06 test.txt

Is this an expected behavior ? If so do I have a workaround to give read permission on external table files to other users ?


File having more fields in some rows

Venkat, February 17, 2010 - 5:36 am UTC

Hi Tom,

If the file has more columns (extra columns) in some rows than what we specified, how does the external table handle those rows? And how do we make the load fail whenever we have these types of records?

Thanks
Venkat
Tom Kyte
February 17, 2010 - 9:26 am UTC

I'm not sure what you mean.


If there are extra columns - we don't care, we only read the ones you asked us to read.


ops$tkyte%ORA11GR2> create or replace directory DATA as '/home/tkyte'
  2  /

Directory created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> CREATE TABLE "DEPT_ET"
  2  (
  3    "DEPTNO" NUMBER(2),
  4    "DNAME" VARCHAR2(14),
  5    "LOC" VARCHAR2(13)
  6  )
  7  ORGANIZATION external
  8  (
  9    TYPE oracle_loader
 10    DEFAULT DIRECTORY DATA
 11    ACCESS PARAMETERS
 12    (
 13      RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
 14      FIELDS TERMINATED BY "|" LDRTRIM
 15      REJECT ROWS WITH ALL NULL FIELDS
 16      (
 17        "DEPTNO" CHAR(255)
 18          TERMINATED BY "|",
 19        "DNAME" CHAR(255)
 20          TERMINATED BY "|",
 21        "LOC" CHAR(255)
 22          TERMINATED BY "|"
 23      )
 24    )
 25    location
 26    (
 27      'dept.dat'
 28    )
 29  )
 30  /

Table created.

ops$tkyte%ORA11GR2> !cat dept.dat
10|ACCOUNTING|NEW YORK
20|RESEARCH|DALLAS|extra
30|SALES|CHICAGO|extra|extra|read all about it
40|OPERATIONS|BOSTON

ops$tkyte%ORA11GR2> select * from dept_et;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

Naming program dynamically

sree, April 28, 2010 - 12:50 pm UTC

Making my question simple.

select ename, empno , sal, comm from emp;

I get some output from this query.
Now I want to name the output of this query dynamically, and the actual program is based on some parameter.
How do I do that?


QUICK REPLY WILL BE APPRECIATED
Tom Kyte
April 28, 2010 - 1:00 pm UTC

REPLY IS:

I HAVE NO IDEA WHAT YOU MEAN.


you do not, as far as I know, "name" query output. It does not make sense to me - I've written more than one database program and have never "named" a result set...

So, explain what you mean, pretend you were trying to describe the problem to your mom - be that clear and detailed (assuming your mom is not a programmer or computer IT person that is)

Explanation

SRI, April 28, 2010 - 3:54 pm UTC

I have to save the output of the query to a *.txt file, name the file dynamically, and the whole process should be automated.

does this make sense?

Tom Kyte
April 28, 2010 - 4:17 pm UTC

would you care to perhaps say what you would be using to connect to the database perhaps.

You know, I doubt you are using mental telepathy to communicate directly with the database so you must be using *some sort of software* to build your application with.

What would it be, and what OS would you be on - how about a little "context" here.

Read this:

So, explain what you mean, pretend you were trying to describe the problem to your mom - be that clear and detailed (assuming your mom is not a programmer or computer IT person that is)

and take it to heart. No one can help you (with anything) unless you can give them something to work with.


What does name the file dynamically mean?
What are you going to use to interact with the end user?
What are you going to use to interact with the database?
How can something "dynamic" be "automated" at the same time?

Creating a directory dynamically

Kari, May 26, 2010 - 8:37 am UTC

I found your suggestion for changing the location of an external table very helpful.

Now, the client wants to change the directory that the file gets placed in based on an input parameter. (c:\external_tbl\<customer_no>\<filename>.txt)

I am considering creating a package in the SYS schema that will create a directory dynamically and then allow the user to change the location, including both the directory:filename and the default directory for the table. I plan on creating one procedure to create the directory and one to drop it. I am a little hesitant to create a PL/SQL package in the SYS schema. What are your thoughts?
Tom Kyte
May 26, 2010 - 9:04 am UTC

Now, the client wants to change the directory that the file gets placed in
based on an input parameter. (c:\external_tbl\<customer_no>\<filename>.txt)


really bad idea, this is on the server you know, you do not want to be doing this for obvious security reasons.

I am considering creating a package in the SYS schema

STOP RIGHT THERE. The answer to that is NO. Stay out of SYS, just don't even dream about going there, stop, backup, get away - do not do that. Period.



Do be hesitant to do that, refuse to ever even consider putting your code in to SYS - just stop.


But back to the question - you do not want to do this, convince your client that it is a really bad security concept - you'll be providing a function that can open up the entire filesystem. Bad bad bad idea, since it will be accessed using the Oracle account with its privileges.

Multiple Files

Russell, March 06, 2011 - 11:17 pm UTC

We are using Oracle 10g. A thread in this topic a few years ago, asked about grabbing the filename for each record. At that time, you answered that the filename is not available. I was wondering if anything had changed in relation to this response.


We have multiple files in the same structure, containing products that a customer has. Currently, based on the way the files are extracted for us, 1 Customer's records, may be duplicated across multiple files. It is also possible in any file for a customer to have multiple of the same product (because the customer has purchased multiple of the same products at the same or different times)

EG

FILE_A12.txt
CUST_ID, PROD_ID, START_DATE
111, A12, 1/1/2001
111, A12, 1/1/2001
111, B34, 10/10/2010

FILE_B34.txt
CUST_ID, PROD_ID, START_DATE
111, A12, 1/1/2001
111, A12, 1/1/2001
111, B34, 10/10/2010

If I could get the filename, I could work out a distinct set of products for each customer.

If the filename is not available - in the external table, would you recommend, creating an external table for each type of file (approx 15-20 at 50mb - 19gb in size) and a union / merge to create a distinct set?

Thanks


Tom Kyte
March 07, 2011 - 12:32 pm UTC

In 11g you'll be set to go:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3015912000346648463

but before that, there isn't a way to get the filename being queried in a multiple file external table.


I would suggest testing your approach. Use a UNION ALL (not union, that implies a "distinct" would need to happen on the data set). I don't see how merge would apply.
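
On 10g, one way to keep the filename is a view over one external table per file type, with the name supplied as a literal - table and file names here are hypothetical:

```sql
-- each external table points at one file; the literal records its origin
create or replace view all_products as
select cust_id, prod_id, start_date, 'FILE_A12.txt' as src_file from ext_file_a12
union all
select cust_id, prod_id, start_date, 'FILE_B34.txt' as src_file from ext_file_b34;

-- distinct set of products per customer, as described above
select distinct cust_id, prod_id from all_products;
```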


External Table Can Location be a Variable

Marchalain Mullins, August 17, 2021 - 8:23 pm UTC

Is it possible to dynamically change the file name (LOCATION) to a variable? Within a PL/SQL package, I can successfully use something like this:
EXECUTE IMMEDIATE 'alter table GR_LIST location (''GR_LIST.txt'')';
However, I now need to make the location a variable, and cannot find the correct syntax that works. For example:
EXECUTE IMMEDIATE 'alter table GR_LIST location (''LV_LIST_15'')'; where the value of LV_LIST_15 might be GR_LIST_15.txt one day and UG_LIST_15.txt the next.
Connor McDonald
August 18, 2021 - 6:03 am UTC

Use the EXTERNAL TABLE MODIFY clause at query time to avoid all DDL

SQL> create table e1 (
  2    owner         varchar2(128)
  3  )
  4  organization external (
  5    type oracle_loader
  6    default directory tmp
  7    access parameters (
  8      records delimited by newline
  9      nologfile
 10      fields csv with embedded terminated by ',' optionally enclosed by '"'
 11      missing field values are null (
 12        owner
 13      )
 14    )
 15    location ('o1.txt')
 16  )
 17  reject limit unlimited;

Table created.

SQL> select * from e1;

OWNER
------------------------------
SYS
SYSTEM
DBSNMP
APPQOSSYS
DEVLIVE
DEMO
DBSFWUSER
REMOTE_SCHEDULER_AGENT
FLOWS_FILES
APEX_LISTENER
APEX_UTILS
ENDUSER
PUBLIC
MCDONAC
CTXSYS
ASKTOM
ORDS_METADATA
HR
APEX_MON
AUDSYS
OJVMSYS
SI_INFORMTN_SCHEMA
DVF
DVSYS
TRCADMIN
PERFSTAT
GSMADMIN_INTERNAL
ORDPLUGINS
GLOC
HMS_TOOLS
SODAUSER
ORDDATA
MDSYS
OLAPSYS
LBACSYS
SANGAM
OUTLN
ORACLE_OCM
APEX_210100
SCOTT
APEXCON
SOE
AV_USER
XDB
WMSYS
ORDSYS
TRCANLZR
ODTUG

48 rows selected.

SQL> select *
  2  from   e1 external modify (location ('o2.txt') );

OWNER
------------------------------
SYS
SYSTEM
DBSNMP
APPQOSSYS
DEVLIVE
DEMO
DBSFWUSER
REMOTE_SCHEDULER_AGENT
FLOWS_FILES
APEX_LISTENER
APEX_UTILS
ENDUSER
PUBLIC
MCDONAC
CTXSYS
ASKTOM
ORDS_METADATA
HR
APEX_MON
AUDSYS

20 rows selected.

SQL> variable v1 varchar2(10);
SQL> exec :v1 := 'o1.txt'

PL/SQL procedure successfully completed.

SQL> select *
  2  from   e1 external modify (location ( :v1) );

OWNER
------------------------------
SYS
SYSTEM
DBSNMP
APPQOSSYS
DEVLIVE
DEMO
DBSFWUSER
REMOTE_SCHEDULER_AGENT
FLOWS_FILES
APEX_LISTENER
APEX_UTILS
ENDUSER
PUBLIC
MCDONAC
CTXSYS
ASKTOM
ORDS_METADATA
HR
APEX_MON
AUDSYS
OJVMSYS
SI_INFORMTN_SCHEMA
DVF
DVSYS
TRCADMIN
PERFSTAT
GSMADMIN_INTERNAL
ORDPLUGINS
GLOC
HMS_TOOLS
SODAUSER
ORDDATA
MDSYS
OLAPSYS
LBACSYS
SANGAM
OUTLN
ORACLE_OCM
APEX_210100
SCOTT
APEXCON
SOE
AV_USER
XDB
WMSYS
ORDSYS
TRCANLZR
ODTUG

48 rows selected.

SQL>


External Table Can Location be a Variable

Marchalain, August 19, 2021 - 6:35 pm UTC

Version 12.2

The PL/SQL package won't compile unless the variable name is enclosed in single quotes, but that turns it into a literal. How can I overcome this?

lv_incentive_file VARCHAR2(150) := pi_incentive_file;
lv_incentive_amt pdrdedn.pdrdedn_amount1%TYPE := pi_incentive_amt;
(NOTE: the pi_ variables are parms into the procedure)

BEGIN
SELECT l.pass_number,
lv_incentive_amt
FROM jubilee.pay_well_loader EXTERNAL MODIFY (LOCATION (lv_incentive_file)) l;

(NOTE, there is no 'INTO' because the query is feeding an INSERT statement)
Connor McDonald
August 23, 2021 - 2:16 am UTC

A bit of dynamic SQL will help

SQL> create table t ( n varchar2(100));

Table created.

SQL>
SQL> declare
  2    v varchar2(10) := 'o2.txt';
  3  begin
  4    insert into t
  5    select *
  6    from   e1 external modify (location (v) );
  7  end;
  8  /
  from   e1 external modify (location (v) );
                                        *
ERROR at line 6:
ORA-06550: line 6, column 41:
PL/SQL: ORA-00905: missing keyword
ORA-06550: line 4, column 3:
PL/SQL: SQL Statement ignored


SQL>
SQL> declare
  2    v varchar2(10) := 'o2.txt';
  3  begin
  4    execute immediate '
  5    insert into t
  6    select *
  7    from   e1 external modify (location (:b1) )' using v;
  8  end;
  9  /

PL/SQL procedure successfully completed.



I'll log a bug