Does the PL/SQL routine dump the DATE type with its time part?
Connie, March 29, 2002 - 2:39 pm UTC
I am not sure whether the PL/SQL procedure dumps the DATE type with its time part. For example, if there is a date column starttime, what will that field look like in the output flat file?
Why do you use the line:
dbms_sql.define_column( l_theCursor, 1, l_columnValue, 2000);
since you have already defined the columns in the loop above?
Thx,
March 29, 2002 - 2:42 pm UTC
just issue:
alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss'
before (or during) the execution of this routine. Then dates will come out with the time.
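For example, a minimal illustration (the format mask is just one choice):

```sql
-- set the session date format so DATE columns unload with their time part
alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';

-- or, inside the unload routine itself, before executing the query:
-- execute immediate
--     'alter session set nls_date_format = ''dd-mon-yyyy hh24:mi:ss''';

select sysdate from dual;   -- e.g. 29-mar-2002 14:42:07
```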
Tom, this routine
Kulguru, April 14, 2002 - 4:38 pm UTC
writes to the flat file on the server side
my requirement right now is to write to the flat file on the client side,
will it suffice to replace all the utl_file calls with the text_io package?
Further, your routine cannot handle 'select * from emp', we have to specifically mention columns in the select.
Do you have a routine which accepts select * from emp.
Thank you
April 14, 2002 - 4:44 pm UTC
TEXT_IO is only for Forms. So, unless you are doing this in Forms -- you'll be changing the utl_file calls to dbms_output and hoping you are never asked to put out more than 255 bytes/row.
Further -- the routine can, does, and will handle select * quite well. In fact -- the example provided uses select *.
The good question to another question
Andrew, April 15, 2002 - 5:18 pm UTC
Tom,
His/her question is worded strangely. He/she wants to EXTRACT FROM A FLAT FILE INTO A TABLE (i.e., he wants to view a beautiful sunset looking towards the east). In other words, he/she wants to load.
Accordingly, he/she has to modify your technique: opening the file for reading (not writing), parsing every line, and executing SQL to insert into the table (instead of select).
Why bother with plsql. Just use sql*loader.
Will sqlplus accept Text_io
Famy, April 16, 2002 - 12:15 pm UTC
1. Tom, sqlplus is also a client tool, just like Forms and Reports; can I write a script using text_io and execute it from sqlplus?
2. Further, I have no access to the database server, so the utl_file PL/SQL procedure and sqlldr solutions are out of the question.
What I want is a utility to write the data to the client side, and also a utility that will pick it up from the client side and dump it into a table in my schema.
Is this possible?
Kindly answer.
April 16, 2002 - 9:30 pm UTC
1) no, sqlplus is NOT a client tool "just like" forms and reports. Forms and Reports actually have a full-fledged PLSQL runtime engine in them. SQLPlus does not. TEXT_IO does not exist in sqlplus. SPOOL does.
see
2) SPOOL -- see
http://asktom.oracle.com/~tkyte/flat/index.html
for ideas. As for ruling out sqlldr -- then you are basically "out of luck" -- or you are writing your OWN client program that parses an input file and loads it.
writing data to the client side -- trivial (see flat).
Loading it is trivial as well -- if you use sqlldr -- if you rule it out, it's hard.
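As a rough sketch of the SPOOL approach (the table, columns, and file path here are examples, not part of the flat utility itself):

```sql
set heading off feedback off verify off pagesize 0
set linesize 4000 trimspool on
spool c:\temp\emp.csv
select empno || ',' || ename || ',' || hiredate
  from scott.emp;
spool off
```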
dump_csv
Vasaku, August 08, 2002 - 8:43 am UTC
Tom, I have modified your dump_csv using text_io, for using from forms.
Now..
I'm passing in :system.last_query from Forms, and this also selects the rownum in the query, so how should I filter out the rownum?
I also want to filter out all the commas; where should I put the REPLACE?
And lastly, I want to pass TAB as the separator, but when I passed chr(9) as the separator it is not working?
Thanks
August 08, 2002 - 9:22 am UTC
well, if you know which column rownum is (it is probably the FIRST or the LAST column right?? ) you can skip it.
What commas???
"it is not working?" -- neither is my car, can you fix it? not working makes little sense to me. please please -- be more descriptive.
different approach to write to flat file using text_io
Vasaku, August 08, 2002 - 8:47 am UTC
Tom,
Instead of using dump_csv(which uses a query as a input), if I want to just write the contents of the block(in forms)
how can I do that.
Can you show a small example for an assumed example of a block containing 2 records and 2 columns.
Thx
August 08, 2002 - 9:24 am UTC
You would use something like the following (assuming a block with two items; the block and item names are placeholders):
declare
    l_file text_io.file_type;
begin
    l_file := text_io.fopen( 'c:\block_dump.txt', 'w' );
    go_block( 'block_name' );
    first_record;
    loop
        text_io.put_line( l_file, :block.item1 || ',' || :block.item2 );
        exit when :system.last_record = 'TRUE';
        next_record;
    end loop;
    text_io.fclose( l_file );
end;
how to start writing from the second or third line in the Excel file
Vasaku, August 08, 2002 - 3:38 pm UTC
Tom
With the help of your program I'm able to successfully write to the csv file.
But I need column headings. That is, I need to have column headings, and underneath them I need to write the data as per the user requirement.
Any idea how I can do it.
(Sorry for the above comment on 'not working' I got confused between column separator and line separator)
August 09, 2002 - 7:50 am UTC
You can use describe_columns:
is
l_desc_tab dbms_sql.desc_tab;
l_cols number;
l_cursor number;
l_query long;
begin
l_cursor := dbms_sql.open_cursor;
dbms_sql.parse( l_cursor, p_query, dbms_sql.native );
dbms_sql.describe_columns( l_cursor, l_cols, l_desc_tab );
....
to get an array of records that contain this data:
type desc_rec is record (
col_type binary_integer := 0,
col_max_len binary_integer := 0,
col_name varchar2(32) := '',
col_name_len binary_integer := 0,
col_schema_name varchar2(32) := '',
col_schema_name_len binary_integer := 0,
col_precision binary_integer := 0,
col_scale binary_integer := 0,
col_charsetid binary_integer := 0,
col_charsetform binary_integer := 0,
col_null_ok boolean := TRUE);
type desc_tab is table of desc_rec index by binary_integer;
See the Supplied Packages guide for the details. That'll give you your column headings.
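Continuing the fragment above, a sketch of how the describe output could be turned into a heading line (l_output and p_separator are assumed to be the same variables dump_csv already uses):

```sql
-- write one heading line before fetching any rows
l_separator := '';
for i in 1 .. l_cols loop
    utl_file.put( l_output, l_separator || l_desc_tab(i).col_name );
    l_separator := p_separator;
end loop;
utl_file.new_line( l_output );
```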
If you use our OWA_SYLK utility (search this site for
owa_sylk utl_file
) you'll get column headings and lots more Excel stuff easily.
how to avoid rowid , which is the first column
Vasaku, August 09, 2002 - 12:17 pm UTC
Tom
1. Can you tell me the way to change your dump_csv to ignore the first column, i.e. the rowid?
2. If I want to capture the column names in dump_csv, how should I do that, and where should I put the dbms_output?
(I can't use owa_sylk; the DBA says it is not available)
August 09, 2002 - 12:36 pm UTC
Umm, change 1 to 2 in the loop
for i in 2 .. l_colCnt loop
dbms_sql.column_value( l_theCursor, i,
l_columnValue );
utl_file.put( l_output, l_separator ||
l_columnValue );
l_separator := p_separator;
end loop;
so it loops over columns 2, 3, 4, ....
Right above this comment -- I showed how to call dbms_sql.describe_columns. You would do that after you parse. you would then use utl_file/dbms_output/whatever to print out the column headings.
OWA_SYLK is as available as DUMP_CSV is. OWA_SYLK is a utility posted on THIS SITE -- it is something I wrote (along with Chris Beck). Search for it (like I said) and you'll find it.
where did you code the raise subprogram
Sam, August 20, 2002 - 9:26 pm UTC
What is the purpose of the below
exception
when others then
if ( sqlcode = -1007 ) then exit;
else
raise;
end if;
and I don't see that you have coded the subprogram raise, which you are calling in the exception section.
Please explain.
August 20, 2002 - 9:41 pm UTC
raise is part of the plsql language itself. see the plsql guide.
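A minimal sketch of what RAISE with no argument does: inside a handler, it re-raises the exception currently being handled. Here NO_DATA_FOUND is not ORA-01007, so it propagates to the caller unchanged:

```sql
declare
    l_n number;
begin
    select 1 into l_n from dual where 1 = 0;  -- raises NO_DATA_FOUND
exception
    when others then
        if ( sqlcode = -1007 ) then  -- ORA-01007: variable not in select list
            null;                    -- the one error we expect: swallow it
        else
            raise;                   -- anything else: re-raise the same exception
        end if;
end;
```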
Extracting data to a flat file
Anil, April 10, 2003 - 4:21 am UTC
Dear Tom,
I had successfully used dump_csv function to write to a text file.
My code is like this
declare
l_rows number;
rpt_date date;
tran_date date;
begin
loop
select max(reportdate) into rpt_date from amlareport;
select max(txd_bus_dt) into tran_date from read.trandetail;
if tran_date > rpt_date
then rpt_date := rpt_date +1;
else exit;
end if;
l_rows := dump_csv('select * from read.trandetail where txd_bus_dt=(select max(reportdate)+1 from amlareport)','c:\oracle\admin\dwdev\udump','FD Report'||rpt_date);
insert into amlareport values ('FD PLACEMENT VIA CASH >= 250K',rpt_date);
commit;
exit when tran_date=rpt_date;
end loop;
end;
/
I want to have column headings in the text file. Is there a need to change the dump_csv function? Please tell me how and where the code should be changed.
Regards,
Anil
April 10, 2003 - 8:04 am UTC
Look up a page or two where we talk about this already.
SQL output in a text file
TH, April 26, 2003 - 10:11 am UTC
Dear Tom,
I need it in the reverse way.
What is the simplest way to output the result of a SQL query to a text file? I will call that query from my application. I won't use SQL*Plus.
I can use that feature for custom generated reports.
April 26, 2003 - 11:48 am UTC
what do you mean "in the reverse way". this is all about outputting the results of a query to a file -- which seems to be what you are asking for.
the dump_csv function and oracle forms
Marla, May 21, 2003 - 7:18 am UTC
hi, i tried to use the dump_csv function in Oracle Forms but when I compiled it, the following problems came out:
Encountered the symbol "AUTHID" when expecting one of the following:
.@%; is as character
The symbol ". was inserted before "AUTHID" to continue.
if I comment out the AUTHID CURRENT_USER, the following error comes out:
Implementation Restriction: 'DBMS_SQL.NATIVE': cannot directly access remote
package variable or cursor
I was hoping to use the dump_csv function because this is what I need to do:
- Information entered in the current form will be saved in the database. (this works already)
- When button pressed trigger, extract data from the table (data of the current form w/c was just saved, primary key is sir_trans_no) into a text file delimited by ';'
- :SIR_TRANS_NO is the textbox that holds the primary key
This was supposed to be my trigger:
/* trigger WHEN-BUTTON-PRESSED */
DECLARE
l_row NUMBER;
BEGIN
l_row := dump_csv('select * from sir where sir_trans_no = :SIR_TRANS_NO','c:\temporary\requests', 'REQUEST');
END;
can you help me please?
May 21, 2003 - 8:56 am UTC
that is because it is written for the database, not for forms.
authid current user is a database feature, makes no sense in forms.
You would want to use
EXEC_SQL in forms instead of DBMS_SQL.
TEXT_IO in forms instead of UTL_FILE (to get the file on the client)
You need to take the concept but recode it to be a client side plsql implementation.
Stored Procedure in DUMP_CSV
wor, November 14, 2003 - 12:11 pm UTC
Hi Tom,
I have many stored procedures which take some input parameters and return a ref cursor.
I would like to dump the results of the ref cursor into a flat file using the dump_csv function, but I notice that dump_csv takes only SQL queries.
Instead of passing SQL, I would like to call dump_csv with different stored procedures in a generic manner, and based on the ref cursor return values the flat file should be created. I do NOT want to specify array variables and do a fetch-into, because this removes the generic nature of the dump_csv functionality.
Is this possible, and how?
November 15, 2003 - 8:39 am UTC
you cannot -- a ref cursor is not "procedurally processable" by PLSQL.
PLSQL cannot describe it.
PLSQL cannot dynamically fetch column1, then column2, then column3 from it.
PLSQL would need to know AT COMPILE TIME exactly what the query looked like.
so, you CANNOT use a refcursor in this case, you must use a DBMS_SQL "cursor" instead if you want this functionality.
wor ( continue.. )
wor, November 14, 2003 - 12:19 pm UTC
If the above solution is possible in Pro*C then please tell me the Pro*C way. I assume it will be faster in Pro*C.
Thanks in advance,
Wor
November 15, 2003 - 8:39 am UTC
see the first url above, it has a pro*c "flat"
OK
Siva, January 09, 2004 - 12:44 am UTC
Dear tom,
csv -> what does it stand for?
Bye!
January 09, 2004 - 8:27 am UTC
comma separated values
Extracting Data to a unspecified directory
Scott Pedersen, February 04, 2004 - 4:57 pm UTC
Hi Tom,
I am currently working on a system where the user will need to take a dump of the data produced in Oracle Forms and save it as an Excel file.
I have read the examples you have provided and they are just what I am looking for. The only thing is having to stipulate a directory to write to using UTL_FILE_DIR in the init.ora file, which, from my understanding, only allows the user to save a file to a directory on the Oracle server.
What I would like them to be able to do is save the file to a directory of their choosing on their workstation and not on the server. Is this possible?
Thanks in advance.
Regards,
Scott.
February 04, 2004 - 6:06 pm UTC
use TEXT_IO, that writes on the "client"
To create xls documents easy
Peter Visscher, February 05, 2004 - 6:04 am UTC
Hi Tom,
I know a nice way to make Excel sheets from sqlplus. You can use the markup html tag in sqlplus. Here's an example.
SET LINESIZE 4000
SET VERIFY OFF
SET FEEDBACK OFF
SET PAGESIZE 999
SET MARKUP HTML ON ENTMAP ON SPOOL ON PREFORMAT OFF
SPOOL c:\temp\test_xls.xls
SELECT object_type
, SUBSTR( object_name, 1, 30 ) object
, created
, last_ddl_time
, status
FROM user_objects
ORDER BY 1, 2
/
SPOOL OFF
SET MARKUP HTML OFF ENTMAP OFF SPOOL OFF PREFORMAT ON
SET LINESIZE 2000 VERIFY ON FEEDBACK ON
Kind regards,
Peter Visscher
Amis Services BV
A reader, April 01, 2004 - 4:19 pm UTC
why is your dump_csv a function, and then you call it in a procedure? Why can't it be a procedure straight away? Very curious about this..
April 02, 2004 - 9:37 am UTC
you of course can obviously do whatever you want.
I wanted a function that returned the number of rows written. I then call this function from various procedures.
Column Headings in csv file
Sri, April 08, 2004 - 10:36 am UTC
I have gone through the code snippet for adding column headings. As you suggested, I used describe_columns, but it is giving an error:
*
ERROR at line 1:
ORA-01007: variable not in select list
ORA-06512: at "SYS.DBMS_SYS_SQL", line 1147
ORA-06512: at "SYS.DBMS_SQL", line 339
ORA-06512: at "CITINEW.DUMP_CSV", line 33
ORA-06512: at "CITINEW.TEST_DUMP_CSV", line 5
ORA-06512: at line 1
I have used:
l_cursor := dbms_sql.open_cursor;
dbms_sql.parse( l_cursor, p_query, dbms_sql.native );
dbms_sql.describe_columns( l_cursor, l_cols, l_desc_tab );
l_status := dbms_sql.execute(l_cursor);
LOOP
EXIT WHEN ( dbms_sql.fetch_rows(l_cursor) <= 0 );
l_separator := '';
FOR i IN 1 .. l_cols LOOP
dbms_sql.column_value( l_cursor, i, l_desc_tab(i).col_name );
utl_file.put( l_output, l_separator || l_desc_tab(i).col_name );
l_separator := p_separator;
END LOOP;
utl_file.new_line( l_output );
l_cnt := l_cnt+1;
END LOOP;
dbms_sql.close_cursor(l_cursor);
utl_file.fclose( l_output );
please guide me on what code to add to put the headings in the csv file.
April 08, 2004 - 10:55 am UTC
Umm, what happened to the define_column calls after the parse?
Take the original code and ADD to it (removing NONE of it).
utl_file_dir
Jennifer Chen, April 20, 2004 - 9:45 am UTC
Hi Tom,
I have a requirement that will give the user the option to specify the location and name of the output file for a query. I checked that I can't modify utl_file_dir on the fly.
SQL> select issys_modifiable from v$parameter where name='utl_file_dir';
ISSYS_MOD
---------
FALSE
What are my alternatives?
Thanks in advance.
April 20, 2004 - 10:22 am UTC
Really?
I choose /usr/oracle/oradata/database/system.dbf as my output destination.
Oh wait, maybe we don't really want to do that.
Rather -- might you want to list the directories the administrator of the database has deemed "these are OK for the oracle process to write into" and present that list to the end user? Much safer...
In 9i, you can use DIRECTORY objects instead of utl_file_dir. Meaning, you can on the fly "create directory FOO as '/tmp'" and use utl_file to write to that directory. However, that gives you the capability to save to system.dbf again.... not so "good" if you let them just write ANYWHERE (cause as we know, end users are going to try to write anywhere if you let them)
Think about this for a little bit first -- you really don't want to write ANYWHERE, really.
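A sketch of the 9i DIRECTORY approach mentioned above (the path and names are examples only):

```sql
-- done once, by a suitably privileged user
create or replace directory report_dir as '/u01/app/reports';
grant read, write on directory report_dir to scott;
```

and then utl_file.fopen( 'REPORT_DIR', 'out.csv', 'w' ) writes into that directory, with no utl_file_dir change or instance restart required.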
THANK YOU
Jennifer Chen, April 20, 2004 - 9:36 pm UTC
Hi Tom,
I got your point here...
I have used Oracle9i for almost 2 years and been certified with 9i. I didn't know this New Method of Using UTL_FILE in Oracle9i R2 until today. I have tried it, and it worked fine.
Many many thanks...
Questions of speed
Alvin, April 21, 2004 - 2:27 am UTC
I need to extract data from the table and have it in a file too.
Currently i use sqlplus's 'spool'.
My problem with that approach is sqlplus's innate tendency to align the columns, hence storing excessive whitespace. The query finishes quickly, but the output to the terminal (or to a file) takes much, much longer.
Is the PLSQL program above much faster than spool ?
Hints in the sql
Alvin, April 21, 2004 - 4:55 am UTC
Tried using the dump_csv function and it answered my 'unsuppressed space' problem. Very useful tool!
Can I pass a SQL statement with a hint to the dump_csv function, or will there be any side effects?
April 21, 2004 - 8:09 pm UTC
pass any sql you want.
data migration utility
Anurag, April 21, 2004 - 10:35 am UTC
Hi! tom,
I want to make a data migration utility for FoxPro and Access to Oracle in D2K. Can you briefly explain how to achieve this?
Please do not advise using the Migration Workbench; I want to create a customized utility for these two only.
Please help.
April 21, 2004 - 8:47 pm UTC
the only advice I have is "start writing code"
me, I would set up the heterogeneous services and, using ODBC, just "create table IN_ORACLE as select * from table_in_access"
Question
Alvin, April 23, 2004 - 12:26 am UTC
I'm using the dump_csv function in an anonymous PL/SQL block
declare
l_num number;
begin
l_num:= dump_csv('select /*+ INDEX (RTBETELCALLDETAILRECORDS CDR_MULTI_2) */ *
from rtbetelcalldetailrecords
where groupid=1482
and time between 1075593600000 and 1078099199999
order by time'
, ',' ,'/export/home/oracle/utl_file_dir' ,'tele_200402.csv');
dbms_output.enable(10000);
dbms_output.put_line('-------------------------');
dbms_output.put_line('Rows dumped :'||l_num);
end;
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.UTL_FILE", line 103
ORA-06512: at "SYS.UTL_FILE", line 120
ORA-06512: at "SYS.UTL_FILE", line 287
ORA-06512: at "RTBETELEPHONY.DUMP_CSV", line 50
ORA-06512: at line 4
Also i ran it as sql only.
1 select dump_csv('select /*+ INDEX (RTBETELCALLDETAILRECORDS CDR_MULTI_2) */ *
2 from rtbetelcalldetailrecords
3 where groupid=1482
4 and time between 1075593600000 and 1078099199999
5 order by time'
6 , ',' ,'/export/home/oracle/utl_file_dir' ,'tele_200402.csv')
7* from dual
11:51:06 rtbetelephony@RTBE.SUNFIRE01> /
select dump_csv('select /*+ INDEX (RTBETELCALLDETAILRECORDS CDR_MULTI_2) */ *
*
ERROR at line 1:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.UTL_FILE", line 103
ORA-06512: at "SYS.UTL_FILE", line 120
ORA-06512: at "SYS.UTL_FILE", line 287
ORA-06512: at "RTBETELEPHONY.DUMP_CSV", line 50
ORA-06512: at line 1
What I don't get is that it runs fine on some partitions and consistently raises these errors on others despite rerunning. It always raises the error at a specific file size.
April 23, 2004 - 11:20 am UTC
so, what is line 50 of dump_csv in your database?
it could be you want to:
l_output := utl_file.fopen( p_dir, p_filename, 'w', 32000 );
^^^^^^^^^^
default linesize is 1022 bytes, you might be exceeding that.
A reader, April 26, 2004 - 8:36 pm UTC
Tom
When the data which needs to be dumped to csv contains commas, will the CSV format work?
April 27, 2004 - 7:54 am UTC
you would typically "quote" it
"this is, i believe","what you would do",55
to have three fields.... you would just add "quotes"
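A sketch of how the column loop in dump_csv could quote each value, doubling any embedded quotes (the usual CSV convention; l_theCursor, l_columnValue, l_output, and the separator variables are assumed to be the same ones dump_csv already uses):

```sql
for i in 1 .. l_colCnt loop
    dbms_sql.column_value( l_theCursor, i, l_columnValue );
    -- wrap every value in double quotes, escaping any " inside it as ""
    utl_file.put( l_output, l_separator || '"' ||
                  replace( l_columnValue, '"', '""' ) || '"' );
    l_separator := p_separator;
end loop;
```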
TEXT_IO giving errors
Subramanian, May 05, 2004 - 8:46 am UTC
Hi
Good day to you.
I have copied your unloader example with slight modifications.
It fails when the program comes to the column_value procedure.
I am using FORMS 5.0 and oracle 8.1.5.
It gives the following error.
ORA-06562: Type of out argument must match type of column or bind
variable.
Please help.
--------
PROCEDURE PRcreate_data IS
mtablename varchar2(30);
column_name_str varchar2(2000);
select_str varchar2(2000);
out_file TEXT_IO.FILE_TYPE;
l_cursor_id integer;
l_col_count integer;
l_column_value varchar2(4000);
l_status number;
l_separator varchar2(1);
l_desc_tbl dbms_sql.desc_tab;
cursor table_cur is
select tablename
from upload_tables;
cursor first_cur is
select column_name, data_type
from user_tab_columns
where table_name = mtablename
order by column_id;
BEGIN
out_file := TEXT_IO.FOPEN('c:\echo.txt', 'w');
TEXT_IO.PUT_LINE(out_file, 'LOAD DATA');
TEXT_IO.PUT_LINE(out_file, 'INFILE *');
for table_rec in table_cur
LOOP
mtablename := table_rec.tablename;
TEXT_IO.PUT_LINE(out_file, 'INTO TABLE '||mtablename);
TEXT_IO.PUT_LINE(out_file, 'FIELDS TERMINATED BY ''|''');
column_name_str := '(';
select_str := 'Select ';
for first_rec in first_cur
LOOP
column_name_str := column_name_str||first_rec.column_name||',';
select_str := select_str||first_rec.column_name||',';
END LOOP;
column_name_str := rtrim(column_name_str,',');
column_name_str := column_name_str||')';
TEXT_IO.PUT_LINE(out_file, column_name_str);
TEXT_IO.PUT_LINE(out_file, 'BEGIN DATA');
select_str := rtrim(select_str,',');
select_str := select_str||' from '||mtablename;
l_cursor_id := dbms_sql.open_cursor;
dbms_sql.parse(l_cursor_id, select_str, 1);
dbms_sql.describe_columns( l_cursor_id, l_col_count, l_desc_tbl );
for i in 1 .. l_col_count
LOOP
dbms_sql.define_column( l_cursor_id, i, l_column_value, 4000);
END LOOP;
l_status := dbms_sql.execute(l_cursor_id);
while ( dbms_sql.fetch_rows(l_cursor_id) > 0)
LOOP
l_separator := '';
for i in 1 .. l_col_count
LOOP
dbms_sql.column_value( l_cursor_id, i, l_column_value );
TEXT_IO.PUT_LINE(out_file, l_separator || l_column_value );
l_separator := '|';
END LOOP;
END LOOP;
dbms_sql.close_cursor(l_cursor_id);
END LOOP;
TEXT_IO.fclose( out_file );
EXCEPTION
WHEN no_data_found THEN
TEXT_IO.FCLOSE(out_file);
when others then
TEXT_IO.FCLOSE(out_file);
message(sqlerrm);
END;
--------
Thanks in advance
Subramanian Natarajan
May 05, 2004 - 9:17 am UTC
in forms, use EXEC_SQL or even better "create_group_from_query" as that will actually array fetch to the client.
TEXT_IO
Subramanian Natarajan, May 05, 2004 - 11:53 pm UTC
Hi Tom
Thank you for your immediate response.
I searched metalink.
It is a bug and is fixed in FORMS 6.0.5.x.
Thanks.
Subramanian Natarajan.
Tiny irritating feature!
Phil, May 18, 2004 - 9:47 am UTC
Hi Tom
I've searched high and low and, annoyingly, I am sure it can be done. I have a SQL file that prompts for a few variables. It then starts a spool and immediately runs another file with a WHERE clause using some of the variables. The problem is, I don't want to see the substitution in the spool; is there a way of hiding it?
(Top of the export
old 27: FROM dsv.TSKS WHERE JBID = &&jbid ORDER BY ID
new 27: FROM dsv.TSKS WHERE JBID = 123 ORDER BY ID)
And the script is...
SET NEWPAGE 0
SET SPACE 0
SET LINESIZE 400
SET PAGESIZE 0
SET ECHO OFF
SET FEEDBACK OFF
SET HEADING OFF
prompt 'Please enter the name of the filename you wish to produce (include the full path)'
accept filename
prompt 'Which jbid do you wish to export?'
accept jbid
spool &&filename
SET termout off
@ c:\pulljob
spool off
SET termout on
set linesize 80
set feedback on
set heading on
set termout on
set verify on
set trimspool on
(pulljob is basically just a big select statement)
May 18, 2004 - 4:45 pm UTC
set verify off
set verify off
Phil, May 18, 2004 - 12:19 pm UTC
Use the above to remove the "verification" of substitution variables (sorry to have asked a dumb Q!)
how do I pass a parameter to the dump_csv function
A reader, May 31, 2004 - 10:41 am UTC
All the values are Right Justified!!!
Arindam, June 14, 2004 - 9:53 pm UTC
Tom,
I have tried to run this procedure and have been successful, thanks much. But I have a problem: all the column values come out right justified, whether Varchar2 or Number. I would like to have Varchar2 columns left justified and Numbers right justified. I am using a view to create the file, and the view has the correct datatypes, I mean Varchar2s and Numbers. Any insight on this? Maybe I am missing something. Please help.
June 15, 2004 - 3:18 pm UTC
in the view, use to_char(num,'999999999.99') instead of just num
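For example, one way to sketch it in the view (the view name and format mask are just illustrations):

```sql
create or replace view v_emp_report
as
select rpad( ename, 10 )               ename,  -- character data, left justified
       to_char( sal, '999999999.99' )  sal     -- numbers, right justified
  from emp;
```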
my code doesn't even justify anything?
A reader, July 05, 2004 - 10:16 pm UTC
Tom
I write a lot of queries for my users. Now I want to use the dump_csv function and set all those queries up using a front-end GUI (Forms 9i).
Please give me your opinion and solution. I want to store all these queries in the database and pull them up based on the query selected using the Forms GUI interface.
Now the issue I'm encountering is that each of these queries also takes parameters to filter data. I want your advice on how I should set up these queries for parameters. Once a query is selected from the GUI and parameters are supplied, the query as a whole can be sent as a parameter to the dump_csv routine. But how should we dynamically decide which parameters to supply based on the query selected?
For example I might select query 1 which is
select empno, ename from emp where deptno =
Next I might select query 2 which can be
select ename, dname
from dept d, emp e
where d.deptno = e.deptno
and ename =
Is there any way we can modify the dump_csv parameter and make this happen.
I repeat the main issue is with supplying parameters, to the query selected.
Thanks a lot in advance
July 06, 2004 - 7:33 am UTC
either use cursor_sharing=force, or use an application context.
You'll want to save in your schema the QUERY and the names of the 'binds' you want to supply -- so you can build a UI to gather this information (eg: dont save just a query, but a query plus other metadata)
ops$tkyte@ORA9IR2> create or replace package my_pkg
2 AUTHID CURRENT_USER
3 as
4 function dump_csv( p_query in varchar2,
5 p_separator in varchar2 default ',',
6 p_dir in varchar2 ,
7 p_filename in varchar2 )
8 return number;
9
10 procedure bind( p_name in varchar2, p_val in varchar2 );
11 end;
12 /
Package created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace context my_ctx using my_pkg
2 /
Context created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package body my_pkg
2 as
3
4 procedure bind( p_name in varchar2, p_val in varchar2 )
5 is
6 begin
7 dbms_session.set_context( 'my_ctx', p_name, p_val );
8 end;
9
10
11
12 function dump_csv( p_query in varchar2,
13 p_separator in varchar2 default ',',
14 p_dir in varchar2 ,
15 p_filename in varchar2 )
.......
ops$tkyte@ORA9IR2> exec my_pkg.bind( 'ename', '%A%' );
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> variable n number
ops$tkyte@ORA9IR2> exec :n := my_pkg.dump_csv( 'select * from scott.emp where ename like sys_context(''my_ctx'',''ename'')', ',', '/tmp', 'test.dat' );
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> print n
N
----------
7
Each query can have diff number of parameters
A reader, July 06, 2004 - 11:22 am UTC
Each query can have a different number of parameters; how do we set the session context for each of them? What do you mean by storing metadata about the query? What all do you think we should store in the schema?
Thanks
July 06, 2004 - 11:59 am UTC
fine, you just call bind over and over for each parameter? not a problem there.
I think you should store a master row with the query and a series of detail rows that have the query_id as a foreign key and the NAME of the bind variable.
eg:
select * from t where x = sys_context('my_ctx','x') and y > sys_context('my_ctx','y')
should have two rows stored in a detail table telling you "x and y are the inputs" -- so you don't have to parse that information out.
Well, lets see
A reader, July 06, 2004 - 6:57 pm UTC
The user selects a query.
The GUI interface pulls up the associated parameter info and displays items on the interface, so that the user can enter parameters.
Once the user enters the parameters, we just call bind over and over for each parameter.
Is that the way you are seeing it?
July 06, 2004 - 7:58 pm UTC
yes.
Interesting Proposition
A reader, July 06, 2004 - 7:05 pm UTC
Tom
I have another interesting proposition here: apart from writing to a csv file, if I also want to display the data returned by the query in the form, what is the best way to do it?
This is challenging, actually, because each query has a different number of columns. Can we use a ref cursor to achieve this? Can I have a ref cursor return a fixed set of 30 values, irrespective of the number of columns a query has? Yes, obviously if the query has more than 30 columns the query will fail, but that is o.k.
If this is possible, then we can build a fixed-width block of 30 columns in Oracle Forms and populate it using a stored procedure each time.
July 06, 2004 - 7:59 pm UTC
sure, you can always query:
select ename, empno, job, null, null, null, ....., null
from emp
and dump that to a file and have a default block built on it (heck, you can even set the item property to hidden for the 27 extra columns and unhide them later)
Error
A reader, July 06, 2004 - 11:05 pm UTC
I. Now we are creating a table storing the query in it
create table t5 ( q varchar2(4000));
insert into t5 values ( 'select ename, empno, job, null, null, null, null from emp') ;
commit;
II. Before calling a query from the table, let's try running it statically rather than retrieving it from the database
SQL> create or replace package types
2 as
3 type cursorType is ref cursor;
4 end;
5 /
Package created.
SQL>
SQL> create or replace procedure getemps( p_cursor in out types.cursorType )
2
3 as
4
5 begin
6
7 open p_cursor for select ename, empno, job, null, null, null, null from emp;
8
9 end;
10 /
Procedure created.
SQL> variable c refcursor
SQL> exec :c := sp_ListEmp
PL/SQL procedure successfully completed.
SQL> print c
ENAME EMPNO
---------- ----------
ADAMS 7876
ALLEN 7499
BLAKE 7698
CLARK 7782
FORD 7902
JAMES 7900
JONES 7566
KING 7839
MARTIN 7654
MILLER 7934
SCOTT 7788
ENAME EMPNO
---------- ----------
SMITH 7369
TURNER 7844
WARD 7521
14 rows selected.
Tom, can you give a hint as to how this package should be changed in order to accept queries from the database?
I ran into the below error:
SQL> ed
Wrote file afiedt.buf
1 create or replace procedure getemps( p_cursor in out types.cursorType )
2 as
3 lv_query varchar2(4000);
4 begin
5 select q into lv_query from t5;
6 open p_cursor for lv_query;
7* end;
8 /
Procedure created.
SQL> variable c refcursor
SQL> exec :c := sp_ListEmp
PL/SQL procedure successfully completed.
SQL> print c
ENAME EMPNO
---------- ----------
ADAMS 7876
ALLEN 7499
BLAKE 7698
CLARK 7782
FORD 7902
JAMES 7900
JONES 7566
KING 7839
MARTIN 7654
MILLER 7934
SCOTT 7788
ENAME EMPNO
---------- ----------
SMITH 7369
TURNER 7844
WARD 7521
14 rows selected.
SQL>
SQL> exec getEmps( :c )
BEGIN getEmps( :c ); END;
*
ERROR at line 1:
ORA-00936: missing expression
ORA-06512: at "SCOTT.GETEMPS", line 6
ORA-06512: at line 1
SQL>
SQL> print c
ERROR:
ORA-24338: statement handle not executed
July 07, 2004 - 7:53 am UTC
ops$tkyte@ORA9IR2> create table t5 ( q varchar2(4000));
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t5 values
2 ( 'select ename, empno, job, null, null, null, null from emp') ;
1 row created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> commit;
Commit complete.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> variable x refcursor
ops$tkyte@ORA9IR2> declare
2 l_stmt varchar2(4000);
3 begin
4 select q into l_stmt from t5;
5 open :x for l_stmt;
6 end;
7 /
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> print x
ENAME EMPNO JOB N N N N
---------- ---------- --------- - - - -
SMITH 7369 CLERK
ALLEN 7499 SALESMAN
WARD 7521 SALESMAN
JONES 7566 MANAGER
MARTIN 7654 SALESMAN
BLAKE 7698 MANAGER
CLARK 7782 MANAGER
SCOTT 7788 ANALYST
KING 7839 PRESIDENT
TURNER 7844 SALESMAN
ADAMS 7876 CLERK
JAMES 7900 CLERK
FORD 7902 ANALYST
MILLER 7934 CLERK
14 rows selected.
works for me. Check your stored query; you did not cut and paste a sqlplus session, so I can only guess you did something different on the insert -- I cannot see the actual query you fetched out.
A reader, July 06, 2004 - 11:14 pm UTC
Tom
Can you show me how the query should be stored in the database with sys_context applied to the parameter column of a query?
And then, when the query is called, how should we use the bind procedure above and pass values to multiple parameters?
Thanks in advance
July 07, 2004 - 7:53 am UTC
I already did.
look above. replace my character string constant with your plsql variable.
call "bind" over and over.
done.
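Tom's "replace the constant with a variable, call bind over and over" advice might look roughly like the skeleton below. This is a hedged sketch: the table T5, the bind names :b1/:b2, and the stored query text are all illustrative, and the define/fetch steps of the full dump routine are omitted for brevity.

```sql
-- Illustrative skeleton only: t5 and the binds :b1/:b2 are made-up names.
declare
    l_stmt   varchar2(4000);
    l_cursor integer := dbms_sql.open_cursor;
    l_status integer;
begin
    -- stored query, e.g. 'select * from emp where deptno = :b1 and job = :b2'
    select q into l_stmt from t5;
    dbms_sql.parse( l_cursor, l_stmt, dbms_sql.native );
    dbms_sql.bind_variable( l_cursor, ':b1', 20 );       -- call "bind" once...
    dbms_sql.bind_variable( l_cursor, ':b2', 'CLERK' );  -- ...per placeholder
    l_status := dbms_sql.execute( l_cursor );
    -- define_column / fetch_rows / column_value would follow here,
    -- exactly as in the dump routine above
    dbms_sql.close_cursor( l_cursor );
end;
/
```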
What about Blob?
Zeenat, September 06, 2004 - 8:44 am UTC
Thanks for the solution Tom. However how do we handle columns of blob data types
in such cases?
Appreciate your help,
Zeenat
September 06, 2004 - 10:34 am UTC
dbms_sql can handle blobs.
YOU have to decide what you want to do with one upon detecting it.
A reader, September 06, 2004 - 11:57 am UTC
can we put BLOB data into a flat file? is this advisable?
Sachi, September 07, 2004 - 3:36 am UTC
I tried this function with a table having BLOB data, but it gave an inconsistent data type error.
If dbms_sql can handle blob data, how do we change the above function to put the blob data into a flat file, and is this advisable?
September 07, 2004 - 7:30 am UTC
depends on your release, current versions of the software have a "put raw" api in utl_file.
dbms_sql can get a blob, dbms_lob can read/write the blob, utl_file in current releases can be used to write binary data.
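Putting those three pieces together (dbms_lob reads the blob, utl_file.put_raw writes it), a chunked copy might be sketched as below. This assumes a 10g-style utl_file with the 'wb' (write binary) open mode; the directory MYFILES and the table/column names are placeholders.

```sql
-- Hedged sketch: table t, column doc, and directory MYFILES are assumptions.
declare
    l_blob blob;
    l_out  utl_file.file_type;
    l_buf  raw(32767);
    l_amt  binary_integer := 32767;
    l_pos  integer := 1;
    l_len  integer;
begin
    select doc into l_blob from t where id = 1;
    l_len := dbms_lob.getlength( l_blob );
    l_out := utl_file.fopen( 'MYFILES', 'doc.bin', 'wb', 32767 );
    while l_pos <= l_len
    loop
        dbms_lob.read( l_blob, l_amt, l_pos, l_buf );  -- l_amt comes back as bytes read
        utl_file.put_raw( l_out, l_buf, true );        -- true = flush
        l_pos := l_pos + l_amt;
    end loop;
    utl_file.fclose( l_out );
end;
/
```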
unload / load Oracle tables with clob, blob , etc into flat file using WisdomForce Fastreader
Joseph, January 08, 2005 - 9:18 pm UTC
Don't know if this is related to the subject of the question, but it definitely answers some comments here.
There is an effective and very fast tool, FastReader ( www.wisdomforce.com ), which unloads Oracle tables with clob, blob, etc. into flat files and automatically generates control files and scripts for loading the data.
Follow-up on ref cursor usage
Paul, February 08, 2005 - 9:36 pm UTC
Hi Tom,
One of the follow-ups was asking about exactly the situation I am banging my head against right now => how to dump the results given an arbitrary ref cursor as the starting input.
Your answer dampened my hopes => you cannot -- a ref cursor is not "procedurally processable" by PLSQL.
But I still have some hope! Is this the final word on the matter? Patently, dbms_sql itself is able to track and manage its "cursors", but I'm not sure what internal tricks it is resorting to.
So the question is: assuming that the input is a ref cursor, and that this is an immutable given, is there _any_ way of at least describing the columns, or getting a column count?
Any words of wisdom most appreciated!
February 09, 2005 - 2:44 am UTC
my final answer:
you cannot -- a ref cursor is not "procedurally
processable" by PLSQL.
in a 3gl that lets you "describe" (like java -- meaning you could write a java STORED procedure that does this), you can.
(suppose in plsql you could get the column count and describe them, what then? you still could not "fetch" them -- for fetch is processed at COMPILE time in plsql)
To Paul
Menon, February 09, 2005 - 1:21 pm UTC
Ref Cursor Metadata in PL/SQL...not possible?
Bipin, March 30, 2005 - 1:52 pm UTC
Tom,
I have the same question as Paul. I read your book Effective Oracle by Design, but it seems I need to read it again in order to understand ref cursors. In any case, it's very difficult for me to digest this fact.
It's kind of difficult to believe that you can read ref cursor metadata and navigate the columns and rows of the result set in Java but not in PL/SQL. Isn't PL/SQL a procedural language? Isn't a ref cursor a cursor after all?
As Paul mentioned in above posting, it is a very common requirement to write such code. Where does Java get this information from? It has to be part of 'opened RefCursor', no? If yes, why can't Oracle come up with some package like dbms_sql that gets us this information?
Is there any other reading you can suggest?
Thanks again for supporting the oracle developer community.
-Bipin
March 30, 2005 - 2:07 pm UTC
in PLSQL, if you want to do dynamic sql and navigate the result set column by column (because the number, types and names change) you use DBMS_SQL.
If you know at compile time, we can use a ref cursor.
PLSQL just doesn't have the language constructs (java uses an API) to do this -- java doesn't either.
In PLSQL if you want to do it like java, you do it just like java -- dbms_sql is a lot like ResultSet.
Enhancement request for Oracle?
Bipin, March 30, 2005 - 2:35 pm UTC
Thanks for your quick answer Tom.
Do you think it is a case of a good enhancement request for Oracle or it's just impossible even with some new oracle supplied PL/SQL API?
I think this is the last question I am going to ask on this subject.
Thanks for your patience.
One off the tangent but related question:
If I write this as Java SP, can I still debug it in Jdeveloper? (I can of course, find it out myself but just being greedy)
By the way, I started using Jdeveloper after reading your book and this is the first time, it's so easy to debug a PL/SQL code! Thankyou!
Have a good day.
-Bipin.
March 30, 2005 - 3:13 pm UTC
but dbms_sql does it all -- it is the very api you are looking for.
Clarification on the question
Bipin, March 30, 2005 - 4:37 pm UTC
I previously thought I wouldn't follow up on this, but it seems that I have not made my question clear enough. Let me try:
I am aware of dbms_sql, but dbms_sql does NOT take a ref cursor as input. It expects a cursor handle (the integer returned by dbms_sql.open_cursor).
The situation I have is:
We have several SPs that are taking XML string as input and returning ref cursor for client use. This ref cursor structure of course will be different for different SPs.
I want to write a generalized SP that will take the SP name and XML string as inputs, test/debug such an SP, and print the ref cursor output. In order to do this, I need access to the ref cursor metadata, as everything (column names, types, # of columns etc.) could be different from one SP to another.
Since the original SP returns a ref cursor and not an integer, I can NOT use dbms_sql? So the question is: does/should PL/SQL have an equivalent of dbms_sql that takes a ref cursor as input?
I could change the original SP to use dbms_sql rather than a ref cursor, but then I would change the interface for the clients, who expect a ref cursor as output. So that's not an option for me.
Hope this explains the questions.
Thanks again for your help.
-Bipin.
March 30, 2005 - 5:27 pm UTC
correct. my point is -- if you want java-api-like access to a result set in plsql, you will use dbms_sql.
If you want the declarative way, where the language and SQL are tightly bound, you'll use ref cursors.
You'll have to write that test routine in C, Java, or some other 3gl, as PLSQL doesn't do that.
ThankYou.
Bipin, March 30, 2005 - 5:50 pm UTC
I think I will try Java SP.
Thanks for your time, Tom.
backup and recovery
Parag J Patankar, April 06, 2005 - 9:36 am UTC
Hi Tom,
We are on Oracle9iR2 and we would like to preserve our data for at least 7 years, as per the instructions from various authorities. For this reason we have decided to preserve the data in text format, because we are not sure that a backup taken on oracle7 would work on oracle10g or higher versions (at the time of the requirement, say at the end of the 7th year, we need this data...), and if it is not working we would have to restore the OS as well, etc. -- a big, massive piece of work. Such a restoration would make our life difficult.
Please tell me if something is wrong in the above thinking process.
So we have decided to
1. Dump all data into text files using a separator character (got an answer in this
question)
2. Dump all table and index definitions. Kindly guide me which is best for
this step (shall we do it by exp and imp with the indexfile option, or extract the
definitions from dbms_metadata, etc.?)
3. Load this text data back into the tables.
Can you help me
a. which is the best method to load data into these tables? (using sqlloader or text file ...etc)
b. by amending your script to provide a loading script?
regards & thanks
pjp
April 06, 2005 - 1:22 pm UTC
... preserve data in text format ...
why? databases were invented to....
store data.
Just upgrade as you upgrade?
"Preserving data in text format"
Parag Jayant Patankar, April 07, 2005 - 10:52 am UTC
Hi Tom,
Regarding your answer to my question in this thread: sorry, I had not mentioned that we are in the banking industry and are generating a huge number of transactions every day. Due to various reasons our vendor is purging transactions from our production database on a monthly basis. This database is outside our country, so we have little control over it; secondly, I cannot make the decision on this.
In this scenario we want to preserve our data in text format. Can you help me with my questions?
I also observed that downloading 45,000 records into a flat file using your utility took around 9 minutes. Can you tell me if that is normal?
regards & thanks
pjp
April 07, 2005 - 11:52 am UTC
which utility.
I have 3
o plsql, slow but easy, flexible
o sqlplus, faster, less flexible
o pro*c -- much faster.
45,000 records in 9 minutes seems very excessive.
Ok, silly little desktop pc. table T is a copy of ALL_OBJECTS, using sqlplus:
[tkyte@localhost tkyte]$ time flat / t > t.dat
real 0m4.228s
user 0m3.490s
sys 0m0.310s
[tkyte@localhost tkyte]$ wc t.dat
27809 38840 2732614 t.dat
[tkyte@localhost tkyte]$
so, less than 5 seconds.
Using pro*c
[tkyte@localhost array_flat]$ time array_flat userid=/ 'sqlstmt=select * from t' arraysize=100 > t.dat
Connected to ORACLE as user: /
Unloading 'select * from t'
Array size = 100
OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY
27809 rows extracted
real 0m0.694s
user 0m0.440s
sys 0m0.020s
[tkyte@localhost array_flat]$ wc t.dat
27809 94458 3344412 t.dat
[tkyte@localhost array_flat]$
Less than a second. 9 minutes would be considered excessive.
Parag Jayant Patankar, April 07, 2005 - 12:24 pm UTC
Hi Tom,
I have used pl/sql procedure to download data
create or replace function dump_csv( p_query in varchar2,
p_separator in varchar2
default ',',
p_dir in varchar2 ,
p_filename in varchar2 ) ...etc. I will recheck again and inform you.
you have mentioned that pl/sql is slower but flexible to use, sqlplus is faster ..
So if I want to execute a select statement, which one is faster -- sql or pl/sql? (My understanding from your valuable help is that PL/SQL is faster because it parses the query once and executes it again and again, giving us a good parse/execute ratio, while SQL is not doing so -- it always does at least a soft parse.)
regards & thanks
pjp
April 07, 2005 - 12:30 pm UTC
but in your case you have only one query.
sqlplus is faster because it is array fetching, my simple plsql one is not array fetching (you could correct that)
sqlplus is faster because it is C code writing the data, plsql is not.
You could make the plsql go faster simply by building a "line" and then utl_file.putting it, rather than column by column.
You could make the plsql go faster by array fetching.
You could make the plsql go faster by not using dbms_sql (since you know your query, you can use static sql, fetch BULK COLLECT, build a 32k buffer of data, call utl_file a few times instead of number of rows * number of columns....)
but I doubt you would beat the pro*c program.
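The bulk-fetch suggestion above might be sketched roughly as follows. This is a hedged example: the static query, the MYFILES directory, and the limit of 500 are illustrative, and the 32k-buffering refinement Tom mentions is left out to keep it short.

```sql
-- Sketch only: emp, MYFILES, and the array size are assumptions.
declare
    type t_lines is table of varchar2(4000);
    l_lines t_lines;
    l_out   utl_file.file_type := utl_file.fopen( 'MYFILES', 'emp.dat', 'w', 32767 );
    cursor c is
        -- build the whole "line" in SQL, one value per row
        select ename || ',' || to_char(empno) || ',' || job from emp;
begin
    open c;
    loop
        fetch c bulk collect into l_lines limit 500;   -- array fetch
        for i in 1 .. l_lines.count
        loop
            utl_file.put_line( l_out, l_lines(i) );    -- one call per row, not per column
        end loop;
        exit when c%notfound;
    end loop;
    close c;
    utl_file.fclose( l_out );
end;
/
```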
more than 4000 char query
A reader, April 07, 2005 - 3:38 pm UTC
Tom,
Was wondering if you can provide any suggestions if the query (variable p_query in dump_csv) is more than 4000 characters?
Thank you
April 07, 2005 - 4:34 pm UTC
32k is the limit for a plsql varchar2
after that, send an array, use the dbms_sql.parse that takes an array that can have a query of any size (documented in the supplied packages guide)
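As a rough sketch of that overload, the statement can be handed over as a DBMS_SQL.VARCHAR2S collection of pieces (the three short pieces here just stand in for a statement longer than 32k):

```sql
-- Sketch: each piece needs its own trailing space when lfflg is false,
-- because the pieces are concatenated exactly as given.
declare
    l_cursor integer := dbms_sql.open_cursor;
    l_sql    dbms_sql.varchar2s;
begin
    l_sql(1) := 'select ename, empno ';
    l_sql(2) := 'from emp ';
    l_sql(3) := 'where deptno = 10';
    dbms_sql.parse( l_cursor, l_sql, 1, 3, false, dbms_sql.native );
    dbms_sql.close_cursor( l_cursor );
end;
/
```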
Cannot directly access remote package variable?
A reader, April 21, 2005 - 10:24 pm UTC
Tom,
In my form I have this code:
pa_project_pub.load_key_member(p_return_status => :key_member_projects.replacement_km_status,
p_person_id => :key_member_projects.replacement_person_id,
p_project_role_type => :key_member_projects.op_role,
p_start_date => :key_member_projects.end_date_active);
When compiled it produces this error
Implementation Restriction: PA_INTERFACE_UTILS_PUB.G_PA_MISS_NUM Cannot directly access remote package variable or cursor
So, I tried the following in sql and got no error.
declare
return_status varchar2(2000);
begin
pa_project_pub.load_key_member(p_return_status => return_status,
p_person_id => 10,
p_project_role_type => 'OP Role',
p_start_date => sysdate);
end;
April 22, 2005 - 10:24 am UTC
there must be other inputs that are defaulted to that procedure and are defaulted to package variables.
describe the procedure in sqlplus, find and supply ALL inputs that are defaulted to package variables.
A reader, April 22, 2005 - 1:02 pm UTC
>desc pa_project_pub
PROCEDURE LOAD_KEY_MEMBER
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
P_API_VERSION_NUMBER NUMBER IN DEFAULT
P_INIT_MSG_LIST VARCHAR2 IN DEFAULT
P_RETURN_STATUS VARCHAR2 OUT
P_PERSON_ID NUMBER IN DEFAULT
P_PROJECT_ROLE_TYPE VARCHAR2 IN DEFAULT
P_START_DATE DATE IN DEFAULT
P_END_DATE DATE IN DEFAULT
April 22, 2005 - 1:13 pm UTC
yes, like I said -- fill in the defaults -- one of them is defaulted to a package variable and forms cannot use that default.
you can also select text from all_source where name = 'THAT_PKG' and type = 'PACKAGE' order by line; to see the actual spec and what all things are defaulted to.
Using a stored procedure, how can I export a table into a flat file?
Surya Nagendra Varma, April 28, 2005 - 6:36 am UTC
Using a pl/sql stored procedure, how can I export tables into flat files, with each flat file named as the table name plus the date?
April 28, 2005 - 8:29 am UTC
How to use on Unix
Rash, June 01, 2005 - 10:44 am UTC
Hi Tom,
I have to extract Oracle table data to a text file, and for that I want to use this proc. It is a scheduled job running on a Unix server. Can you please tell me what changes I need to make in INIT.ORA, and how I am supposed to use utl_file? What other modifications do I need to make to get it running on Unix? I think these are very basic questions, but I am new to this environment and have to meet a very close deadline.
Thanks in advance.
Rash
June 01, 2005 - 11:32 am UTC
database version?
How to use on Unix
Rash, June 01, 2005 - 11:42 am UTC
Hi,
oracle version is 9.2.0.6
Thanks for very very quick reply
Rash
June 01, 2005 - 12:55 pm UTC
use a directory object
create or replace directory myfiles as '/tmp'
/
and utl_file can write to that, no setup needed. instead of using an OS path in utl_file, just use 'MYFILES' (in UPPERCASE)
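For completeness, a minimal sketch of the whole round trip (the user name in the grant is illustrative; the grant is only needed when the writing user does not own the directory):

```sql
-- As a privileged user:
create or replace directory myfiles as '/tmp';
grant read, write on directory myfiles to scott;  -- if scott is the writer

-- Then, as the writing user -- note the UPPERCASE directory name:
declare
    l_out utl_file.file_type;
begin
    l_out := utl_file.fopen( 'MYFILES', 'extract.txt', 'w' );
    utl_file.put_line( l_out, 'hello' );
    utl_file.fclose( l_out );
end;
/
```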
dbms_job + psp = html file?
ht, August 02, 2005 - 1:16 am UTC
Hi Tom,
I've generated html output with SQL*Plus's "set markup html on" feature.
Is it possible to use dbms_job to generate a PSP into a pre-defined utl_dir location?
TIA
ht
August 02, 2005 - 7:28 am UTC
Dawar Naqvi, August 10, 2005 - 2:04 pm UTC
Tom,
You have answered just 2nd above to me as:
create or replace directory myfiles as '/tmp'
/
and utl_file can write to that, no setup needed. instead of using an OS path in
utl_file, just use 'MYFILES' (in UPPERCASE)
Now my database version is 10.1.0.3.0.
Does it apply to my database also?
Regards,
Dawar
August 11, 2005 - 8:40 am UTC
yes.
create CSV file from Oracle table
Dawar Naqvi, August 10, 2005 - 2:46 pm UTC
Tom,
DB version is 10.1.0.3.0.
I would like to create a CSV file from an Oracle table.
I did not find any test.dat file under /tmp.
SQL> create or replace function dump_csv( p_query in varchar2,
2 p_separator in varchar2
3 default ',',
4 p_dir in varchar2 ,
5 p_filename in varchar2 )
6 return number
7 AUTHID CURRENT_USER
8 is
9 l_output utl_file.file_type;
10 l_theCursor integer default dbms_sql.open_cursor;
11 l_columnValue varchar2(2000);
12 l_status integer;
13 l_colCnt number default 0;
14 l_separator varchar2(10) default '';
15 l_cnt number default 0;
16 begin
17 l_output := utl_file.fopen( p_dir, p_filename, 'w' );
18
19 dbms_sql.parse( l_theCursor, p_query, dbms_sql.native );
20
21 for i in 1 .. 255 loop
22 begin
23 dbms_sql.define_column( l_theCursor, i,
24 l_columnValue, 2000 );
25 l_colCnt := i;
26 exception
27 when others then
28 if ( sqlcode = -1007 ) then exit;
29 else
30 raise;
31 end if;
32 end;
33 end loop;
34
35 dbms_sql.define_column( l_theCursor, 1, l_columnValue,
36 2000 );
37
38 l_status := dbms_sql.execute(l_theCursor);
39
40 loop
41 exit when ( dbms_sql.fetch_rows(l_theCursor) <= 0 );
42 l_separator := '';
43 for i in 1 .. l_colCnt loop
44 dbms_sql.column_value( l_theCursor, i,
45 l_columnValue );
46 utl_file.put( l_output, l_separator ||
47 l_columnValue );
48 l_separator := p_separator;
49 end loop;
50 utl_file.new_line( l_output );
51 l_cnt := l_cnt+1;
52 end loop;
53 dbms_sql.close_cursor(l_theCursor);
54
55 utl_file.fclose( l_output );
56 return l_cnt;
57 end dump_csv;
58 /
Function created.
SQL> create or replace procedure test_dump_csv
2 as
3 l_rows number;
4 begin
5 l_rows := dump_csv( 'select *
6 from all_users
7 where rownum < 25',
8 ',', '/tmp', 'test.dat' );
9 end;
10 /
Procedure created.
SQL> create or replace directory myfiles as '/tmp';
Directory created.
SQL> create or replace procedure test_dump_csv
2 as
3 l_rows number;
4 begin
5 l_rows := dump_csv( 'select *
6 from all_users
7 where rownum < 25',
8 ',','MYFILES','test.dat' );
9 end;
10 /
Procedure created.
Am I missing anything?
Regards,
Dawar
August 11, 2005 - 8:44 am UTC
well, you didn't seem to actually RUN IT.
You created procedures, did you run them?
create CSV file from Oracle table
Dawar Naqvi, August 15, 2005 - 1:23 pm UTC
Yes.
I got it, thanks.
exec dbms_output.put_line( dump_csv( 'select * from employee_items where rownum < 5', ',', 'MYFILES', 'test.csv' ) )
working fine but dont understand
A reader, August 19, 2005 - 9:40 am UTC
Tom,
Thanks for making this code available.
But I don't understand this part of the code. Could you please explain?
FOR i IN 1 .. 255 LOOP
BEGIN
DBMS_SQL.DEFINE_COLUMN( l_theCursor, i,
l_columnValue, 2000 );
l_colCnt := i;
EXCEPTION
WHEN OTHERS THEN
IF ( SQLCODE = -1007 ) THEN EXIT;
ELSE
RAISE;
END IF;
END;
END LOOP;
What are these values 255 & 2000 for?
What purpose do they solve?
Thanks as always.
August 20, 2005 - 4:06 pm UTC
This is old code that pre-dates the describe API in dbms_sql.
I'll define up to the first 255 columns (ignoring the rest -- 255 used to be the limit on the number of columns) and stop defining the first time Oracle raises
[tkyte@desktop tkyte]$ oerr ora 1007
01007, 00000, "variable not in select list"
// *Cause:
// *Action:
telling me I've gone past the last column you selected.
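Since Tom notes the code pre-dates the describe API, the probe-until-ORA-1007 loop can nowadays be replaced with DBMS_SQL.DESCRIBE_COLUMNS, which reports the real column count directly. A hedged sketch:

```sql
-- Sketch: replaces the "for i in 1 .. 255 / catch ORA-1007" loop.
declare
    l_cursor integer := dbms_sql.open_cursor;
    l_colCnt integer;
    l_desc   dbms_sql.desc_tab;
    l_value  varchar2(2000);
begin
    dbms_sql.parse( l_cursor, 'select * from emp', dbms_sql.native );
    dbms_sql.describe_columns( l_cursor, l_colCnt, l_desc );
    for i in 1 .. l_colCnt
    loop
        dbms_sql.define_column( l_cursor, i, l_value, 2000 );
    end loop;
    dbms_sql.close_cursor( l_cursor );
end;
/
```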
naging session while spooling off
Denis, August 22, 2005 - 7:10 pm UTC
Hi
I did some testing to compare performance between the 'spool off' solution and this PL/SQL procedure. The output file was not really big, just about 100MB. Total timing was almost the same for both solutions, but CPU for PL/SQL was much higher.
'Spool off' has interesting behaviour: it actually loads the file pretty fast, then just does something else for some time, and only then releases the session. If sqlplus released the session right after the file is created and the command completed, its timing would be very short.
Now I am doing spool off to generate big files, about 2GB each. It creates the file but starts writing only after some time (30 minutes, for example). It looks like it just loads as much as it can into memory first. After the file is completed the session is not released, the command hangs, and I can't rename the file until I close the console manually. Can you explain this behaviour? What should be done here?
I have 10g SE on Windows.
here is my command and sql file:
E:\masterfeed.cmd scott/tiger E:\masterfeed.sql E:\feeds\feed_0.sql AlbumTrackRating_0 2005-08-22_00:00:00
------------------------------------------------------
@echo off
if "%1"=="" goto :usage
if "%2"=="" goto :usage
if "%3"=="" goto :usage
if "%4"=="" goto :usage
if "%5"=="" goto :usage
sqlplus -s %1 @%2 %3 %4 %5
goto :done
:Usage
echo "usage feed un/pw [script] [filename] [table_name] [date in format 2005-07-28_00:00:00]"
:done
-------------------------------------
set wrap off
set linesize 100
set feedback off
set pagesize 0
set verify off
set termout off
spool &1
select userID ||chr(9)|| albumTrackID ||chr(9)|| rating
from &2 a1
where exists
(select userID from &2 a2
where a2.userID=a1.userID
and dateModified between to_date('&4','YYYY-MM-DD_HH24:MI:SS')-30 and to_date('&4','YYYY-MM-DD_HH24:MI:SS')
and rating<>255
)
Order by userID asc
/
spool off
set termout on
@&1
exit
hanging command while spooling off
Denis, August 22, 2005 - 7:12 pm UTC
Sorry, I made a typo in my previous posting.
D2K(Clear Block)
Prasanna, August 23, 2005 - 1:26 am UTC
Hi Tom,
I'm not much into D2K, and I have a problem like this. A form contains 2 blocks. There are 2 radio buttons on the form; when I change the option (radio), the first block's contents need to be cleared. I tried Clear_Block('my'), but it disabled all the contents on the form, and when I tried Clear_List it cleared the contents of both blocks.
I hope I am clear with my question. I even tried No_Validate, but it was of no use.
Waiting for your reply.
Thanks & Regards,
Prasanna
August 24, 2005 - 3:45 am UTC
please peek at otn.oracle.com -> discussion forums. It has been over a decade since I last touched forms, they have a forum for forms over there.
comparing pl/sql to spool off
Denis, August 24, 2005 - 11:51 am UTC
Tom, could you please, answer the questions above about spool off behavior?
Thanks a lot
Denis
August 24, 2005 - 6:26 pm UTC
I'm not sure what you mean by the spool off behaviour, you said
...It creates file
but starts writing only in some time (30 minutes for example)
...
but that is not "spool off behaviour", that is whatever was happening BEFORE the spool off
spool off
Denis, August 24, 2005 - 8:29 pm UTC
Tom,
as you can see I am using this solution:
http://asktom.oracle.com/~tkyte/flat/index.html
There are 2 situations.
1. Small file, 100MB. Data is written to the file very fast. The command is not completed and the session hangs for some time after data was last written to the file. Then, after some time, the command completes and the session is released. What is sqlplus doing after it has finished writing to the file?
2. Big file, 2GB. The file is created right away, but data starts being written to it only after 30 minutes. What is happening during this time?
The command is not completed and the session hangs after data was last written to the file. So it looks like it has finished writing to the file, but is doing something else? What is this something else? I waited for a day to see if the command would complete by itself, and in the end had to close the console window manually. Similar to case 1, the difference being that the command does not complete by itself, or needs more time than I waited.
In both cases the file had all the data I need - all the necessary data was written to the file.
This is actual query:
==============
select a.userID ||chr(9)|| a.AlbumTrackID ||chr(9)|| a.rating
from userAlbumTrackRating a
where exists
(select userID from userAlbumTrackRating b
where b.userID=a.userID
and b.dateModified between to_date('1-AUG-05') and to_date('31-AUG-05')
)
Order by a.userID asc
====================================
Please advise.
Denis
Thanks
Denis
August 25, 2005 - 3:29 am UTC
1) nothing, like I said, it is still doing the query.
Perhaps you have a big table, but the table has very very few rows (relative to its overall size). So, it takes a long time to full scan - reading all of the empty blocks.
Trace it and see.
2) could be the opposite, the query you are running takes a long time to get the first row and then they start coming.
what about bulk fetching
Pet, August 30, 2005 - 12:00 pm UTC
If we have version 9i release 2, can we change these extracts to implement a bulk fetch with a limit clause? Is that possible? Please let me know.
August 30, 2005 - 12:28 pm UTC
you don't even need 9i -- dbms_sql has an array interface, you can use it, I just didn't do it for this demo.
if you have Expert One-on-One Oracle -- I cover array fetching with dbms_sql there.
dynamic
A reader, August 30, 2005 - 4:17 pm UTC
I looked at DBMS_SQL.define_array; it seems that I need to declare a set of local array variables to bind into. But in my case I don't know the SQL query -- it will be passed as a parameter.
Basically I need to create a flat file given a query passed to the pl/sql procedure as one parameter, followed by the bind variables inside the query as the rest of the parameters.
I need to do bulk collect and array fetching.
Please let me know your thoughts
August 31, 2005 - 1:43 am UTC
You would array fetch column 1 into array slots 1..100
column 2 into array slots 101..200
column 3 into array slots 201..300
and so on, you only need one array.
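A rough sketch of that single-array layout with DBMS_SQL.DEFINE_ARRAY (the query, the batch size of 100, and the slot ranges are illustrative; only one fetch batch is shown):

```sql
-- Sketch: one varchar2_table holds both columns, offset by the lower bound.
declare
    l_cursor integer := dbms_sql.open_cursor;
    l_data   dbms_sql.varchar2_table;
    l_rows   integer;
begin
    dbms_sql.parse( l_cursor, 'select ename, job from emp', dbms_sql.native );
    dbms_sql.define_array( l_cursor, 1, l_data, 100, 1 );    -- col 1 -> slots 1..100
    dbms_sql.define_array( l_cursor, 2, l_data, 100, 101 );  -- col 2 -> slots 101..200
    l_rows := dbms_sql.execute( l_cursor );
    l_rows := dbms_sql.fetch_rows( l_cursor );               -- up to 100 rows per call
    dbms_sql.column_value( l_cursor, 1, l_data );
    dbms_sql.column_value( l_cursor, 2, l_data );
    for i in 1 .. l_rows
    loop
        dbms_output.put_line( l_data(i) || ',' || l_data(100+i) );
    end loop;
    dbms_sql.close_cursor( l_cursor );
end;
/
```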
Error when puuting the function inside a cursor
Thilak, September 21, 2005 - 5:11 am UTC
Hi TOm,
The code was very useful...
But i have only one doubt...
My problem is that I am calling dump_csv inside a cursor loop; the cursor fetches the query, separator, dir and filename from a table. I have some five records in the table. The file is created for the first record, but for the second record it throws the following exception
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
ORA-06512: at "WHTDEV.DUMP_CSV", line 17
ORA-06512: at "WHTDEV.SP_RRS_ODS", line 97
ORA-06512: at line 1
All the file names are unique... I couldn't figure out the problem
September 21, 2005 - 7:15 pm UTC
worked for me,
ops$tkyte@ORA9IR2> create or replace directory tmp as '/tmp';
Directory created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
2 n number;
3 begin
4 n:=dump_csv( 'select 1 x, sysdate y, dummy from dual', ',', 'TMP', 't1.dat' );
5 n:=dump_csv( 'select 2 x, ''xxxx'' y, dummy from dual', ',', 'TMP', 't2.dat' );
6 end;
7 /
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> !ls -ltr /tmp/t?.dat
-rw-rw-r-- 1 ora9ir2 ora9ir2 14 Sep 21 19:13 /tmp/t1.dat
-rw-rw-r-- 1 ora9ir2 ora9ir2 9 Sep 21 19:13 /tmp/t2.dat
could be that your path or filename was invalid the second time around (and it wasn't that it was the second time, just that the inputs were bad). line 17 was a simple file open call
sending a multiline query string to this function
John K., September 28, 2005 - 4:55 pm UTC
I took the suggestion of a previous poster, making a wrapper procedure for the dump_csv function. All works fine; even multi-line queries can be pasted into the procedure as I enter it in sql*plus.
Is there a more direct way to enter the sql string parameter to the dump_csv function (in SQL*Plus) when the sql string has multiple lines? Much searching, posting to forums and experimenting have not kept me from bumping into the limitation of pasting multi-line text into sql*plus. I looked at declaring a variable and trying to hide that in a script file of some kind, calling it with something like @procedure_stub, then referring to the variable while calling the function/procedure, but found I am not experienced enough in PL/SQL to pull this off.
I would ultimately like to be able to paste the (multi-line) query string into the command line in sql*plus.
Does one necessarily have to write the procedure test_dump_csv in order to execute the function dump_csv?
September 28, 2005 - 5:38 pm UTC
not sure what you mean?
my example:
create or replace procedure test_dump_csv
as
l_rows number;
begin
l_rows := dump_csv( 'select *
from all_users
where rownum < 25',
',', '/tmp', 'test.dat' );
end;
/
was a multi-line sql statement?
John K., September 28, 2005 - 8:51 pm UTC
Sorry I was not clear.
Is it possible to "parameterize" this procedure in such a way that one could, say, be prompted for the sql string (and paste it in), prompted for the field separator, prompted for the ORACLE_DIRECTORY name, and prompted for the filename? I imagine being able to type something like
SQL> @dump_query
and then be able to paste in the query (and the other params) after a series of prompts.
Right now I must type in the entire procedure, not just the parameters; that works as expected. But every time I try to paste in the multi-line query after being prompted (by a script I have written based around your procedure), the errors are due to the line breaks in the pasted string. I am admittedly very green with variable substitution in sql*plus and pl/sql.
Thank you.
September 29, 2005 - 6:39 am UTC
not really, not unless you end the lines to be glued together with "-", the sqlplus continuation character.
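As an illustration of that continuation character (a hypothetical session, not from the thread; the dump_csv function and MYFILES directory from earlier posts are assumed), each line of the pasted command ends with "-" so SQL*Plus glues the pieces into one command:

```sql
SQL> variable n number
SQL> exec :n := dump_csv( 'select * -
>                          from all_users -
>                          where rownum < 25', ',', 'MYFILES', 'test.dat' )
```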
truncated fields using sqlldr_exp
John K., October 11, 2005 - 2:31 pm UTC
While trying to unload a table into a text load file, I discovered that the fields of my table are being truncated; I don't think I am bumping into the limit you describe in your documentation:
There is an absolute limit of 2000 bytes in 7.x and 4000 bytes in 8.x per line/row for unloaded data. The total size of the unloaded data is unlimited -- the maximum size of an individual row of data is what is limited.
I ran the utility, directing output to snoconso.ctl. Here is the "ctl" part of the file followed by the first two lines of data:
LOAD DATA
INFILE *
INTO TABLE tmp_sno
REPLACE
FIELDS TERMINATED BY '|'
(
cui
,lat
,ts
,lui
,stt
,sui
,ispref
,aui
,saui
,scui
,sdui
,sab
,tty
,code
,str
,srl
,suppress
,cvf
)
BEGINDATA
C0004040|ENG|S|L0568373|PF|S0735588|Y|A3071096|102379017|61615004||SNOMEDCT|SY|61615004|Single-stran
C0004040|ENG|S|L1200216|PF|S1438948|Y|A3300297|102378013|61615004||SNOMEDCT|SY|61615004|Endonuclease
As you can see, the 15th field (str) is being truncated. Here is the desc of the table in question (formatted to fit this space):
CUI NOT NULL CHAR(8)
LAT NOT NULL CHAR(3)
TS NOT NULL CHAR(1)
LUI NOT NULL CHAR(8)
STT NOT NULL VARCHAR2(3)
SUI NOT NULL CHAR(8)
ISPREF NOT NULL CHAR(1)
AUI NOT NULL CHAR(8)
SAUI VARCHAR2(50)
SCUI VARCHAR2(50)
SDUI VARCHAR2(50)
SAB NOT NULL VARCHAR2(20)
TTY NOT NULL VARCHAR2(20)
CODE NOT NULL VARCHAR2(50)
STR NOT NULL VARCHAR2(3000)
SRL NOT NULL NUMBER(38)
SUPPRESS NOT NULL CHAR(1)
CVF VARCHAR2(50)
How can I get the data into a .ctl file?
The data in the dumptable was selected from a table originally loaded using sqlldr. In other words, the table I am dumping to text was created with:
create table todump
as select * from orig_table where sab = 'avalue'
Orig_table was loaded from a text file; here is the .ctl file for that:
options (direct=true, errors=0)
load data
CHARACTERSET UTF8
infile 'orig.RRF' "str X'7c0d0a'"
badfile 'orig.bad'
discardfile 'orig.dsc'
truncate
into table orig_table
fields terminated by '|'
trailing nullcols
(CUI char(8),
LAT char(3),
TS char(1),
LUI char(8),
STT char(3),
SUI char(8),
ISPREF char(1),
AUI char(8),
SAUI char(50),
SCUI char(50),
SDUI char(50),
SAB char(20),
TTY char(20),
CODE char(50),
STR char(3000),
SRL integer external,
SUPPRESS char(1),
CVF char(50)
)
While composing this letter, I used gawk to modify the original text file from which orig_table was loaded, created a .ctl file, and loaded it into a table on a different instance, which was my goal. But I am still curious why I could not get sqlldr_exp to work in this case.
October 11, 2005 - 3:53 pm UTC
you'd have to supply me with an entire test case (inserts.....)
What is in your login.sql and glogin.sql files?
using 9i
John K., October 11, 2005 - 2:33 pm UTC
I forgot to mention that I am using 9i
Reproducing the truncation
John K., October 12, 2005 - 1:37 pm UTC
I was unable to reproduce the problem with a tiny dataset, but I can compare a spooled output of my existing table to the sqlldr_exp output. The spooled output is fine (but the cols have been formatted to suit these particular 19 rows). My login.sql is at the bottom of this post.
> set trimspool on
> set feedback off
> col str form a41
> col stt form a2
> col aui form a8
> col saui form a9
> col adui form a7
> col sdui form a7
> col sab form a8
> col tty form a3
> col code form a9
> col srl form 9
> col cvf form a3
> create table tmp_conso as select * from mrconso where rownum <20;
> select * from tmp_conso;
CUI LAT T LUI ST SUI I AUI SAUI SCUI SDUI SAB TTY CODE STR SRL S CVF
-------- --- - -------- -- -------- - -------- --------- --------- ------- -------- --- --------- ----------------------------------------- --- - ---
C0000005 ENG P L0000005 PF S0007492 Y A7755565 M0019694 D012711 MSH PEN D012711 (131)I-Macroaggregated Albumin 0 N
C0000005 ENG S L0270109 PF S0007491 Y A0016458 M0019694 D012711 MSH EN D012711 (131)I-MAA 0 N
C0000039 CZE P L3180523 PF S3708014 Y A3909890 D015060 MSHCZE MH D015060 1,2-DIPALMITOYLFOSFATIDYLCHOLIN 3 N
C0000039 ENG P L0000039 PF S0007564 N A6326244 C25778 NDFRT IN C25778 1,2-Dipalmitoylphosphatidylcholine 0 N 256
C0000039 ENG P L0000039 PF S0007564 Y A0016515 M0023172 D015060 MSH MH D015060 1,2-Dipalmitoylphosphatidylcholine 0 N 256
C0000039 ENG P L0000039 VO S1357296 Y A1317708 M0023172 D015060 MSH PM D015060 1,2 Dipalmitoylphosphatidylcholine 0 N
C0000039 ENG S L0000035 PF S0007560 Y A0016511 M0023172 D015060 MSH EN D015060 1,2-Dihexadecyl-sn-Glycerophosphocholine 0 N
C0000039 ENG S L0000035 VO S1357276 Y A1317687 M0023172 D015060 MSH PM D015060 1,2 Dihexadecyl sn Glycerophosphocholine 0 N
C0000039 ENG S L0000038 PF S0007563 Y A0016514 M0023172 D015060 MSH EN D015060 1,2-Dipalmitoyl-Glycerophosphocholine 0 N
C0000039 ENG S L0000038 VO S1357295 Y A1317707 M0023172 D015060 MSH PM D015060 1,2 Dipalmitoyl Glycerophosphocholine 0 N
C0000039 ENG S L0012507 PF S0033298 N A0049238 SNMI PT F-63675 Dipalmitoylphosphatidylcholine 4 N 256
C0000039 ENG S L0012507 PF S0033298 N A2880749 166113012 102735002 SNOMEDCT PT 102735002 Dipalmitoylphosphatidylcholine 4 N 256
C0000039 ENG S L0012507 PF S0033298 Y A0049237 M0023172 D015060 MSH EN D015060 Dipalmitoylphosphatidylcholine 0 N 256
C0000039 ENG S L0012507 VC S0627555 N A0683490 LNC CN NOCODE DIPALMITOYLPHOSPHATIDYLCHOLINE 0 N
C0000039 ENG S L0012507 VC S0627555 Y A6841046 LNC LPN LP15542 DIPALMITOYLPHOSPHATIDYLCHOLINE 0 N
C0000039 ENG S L0012508 PF S0033296 Y A0049235 M0023172 D015060 MSH EN D015060 Dipalmitoylglycerophosphocholine 0 N 256
C0000039 ENG S L0012509 PF S0033297 Y A0049236 M0023172 D015060 MSH EP D015060 Dipalmitoyllecithin 0 N 256
C0000039 ENG S L0296452 PF S0033295 Y A0049234 M0023172 D015060 MSH EN D015060 Dipalmitoyl Phosphatidylcholine 0 N 256
C0000039 ENG S L0296452 VW S0073244 Y A0100864 M0023172 D015060 MSH PM D015060 Phosphatidylcholine, Dipalmitoyl 0 N
> spool off
The sqlldr_exp output is truncated:
LOAD DATA
INFILE *
INTO TABLE tmp_conso
REPLACE
FIELDS TERMINATED BY '|'
(
cui
,lat
,ts
,lui
,stt
,sui
,ispref
,aui
,saui
,scui
,sdui
,sab
,tty
,code
,str
,srl
,suppress
,cvf
)
BEGINDATA
C0000005|ENG|P|L0000005|PF|S0007492|Y|A7755565||M0019694|D012711|MSH|PEN|D012711|(131)I-Macroaggrega
C0000005|ENG|S|L0270109|PF|S0007491|Y|A0016458||M0019694|D012711|MSH|EN|D012711|(131)I-MAA|0|N|
C0000039|CZE|P|L3180523|PF|S3708014|Y|A3909890|||D015060|MSHCZE|MH|D015060|1,2-DIPALMITOYLFOSFATIDYL
C0000039|ENG|P|L0000039|PF|S0007564|N|A6326244||C25778||NDFRT|IN|C25778|1,2-Dipalmitoylphosphatidylc
C0000039|ENG|P|L0000039|PF|S0007564|Y|A0016515||M0023172|D015060|MSH|MH|D015060|1,2-Dipalmitoylphosp
C0000039|ENG|P|L0000039|VO|S1357296|Y|A1317708||M0023172|D015060|MSH|PM|D015060|1,2 Dipalmitoylphosp
C0000039|ENG|S|L0000035|PF|S0007560|Y|A0016511||M0023172|D015060|MSH|EN|D015060|1,2-Dihexadecyl-sn-G
C0000039|ENG|S|L0000035|VO|S1357276|Y|A1317687||M0023172|D015060|MSH|PM|D015060|1,2 Dihexadecyl sn G
C0000039|ENG|S|L0000038|PF|S0007563|Y|A0016514||M0023172|D015060|MSH|EN|D015060|1,2-Dipalmitoyl-Glyc
C0000039|ENG|S|L0000038|VO|S1357295|Y|A1317707||M0023172|D015060|MSH|PM|D015060|1,2 Dipalmitoyl Glyc
C0000039|ENG|S|L0012507|PF|S0033298|N|A0049238||||SNMI|PT|F-63675|Dipalmitoylphosphatidylcholine|4|N
C0000039|ENG|S|L0012507|PF|S0033298|N|A2880749|166113012|102735002||SNOMEDCT|PT|102735002|Dipalmitoy
C0000039|ENG|S|L0012507|PF|S0033298|Y|A0049237||M0023172|D015060|MSH|EN|D015060|Dipalmitoylphosphati
C0000039|ENG|S|L0012507|VC|S0627555|N|A0683490||||LNC|CN|NOCODE|DIPALMITOYLPHOSPHATIDYLCHOLINE|0|N|
C0000039|ENG|S|L0012507|VC|S0627555|Y|A6841046||||LNC|LPN|LP15542|DIPALMITOYLPHOSPHATIDYLCHOLINE|0|N
C0000039|ENG|S|L0012508|PF|S0033296|Y|A0049235||M0023172|D015060|MSH|EN|D015060|Dipalmitoylglyceroph
C0000039|ENG|S|L0012509|PF|S0033297|Y|A0049236||M0023172|D015060|MSH|EP|D015060|Dipalmitoyllecithin|
C0000039|ENG|S|L0296452|PF|S0033295|Y|A0049234||M0023172|D015060|MSH|EN|D015060|Dipalmitoyl Phosphat
C0000039|ENG|S|L0296452|VW|S0073244|Y|A0100864||M0023172|D015060|MSH|PM|D015060|Phosphatidylcholine,
login.sql:
-- login.sql
--
-- DESCRIPTION
-- SQL*Plus login startup file.
--
-- Add any sqlplus commands here that are to be executed when a user
-- starts SQL*Plus on your system
--
-- USAGE
-- This script is automatically run when SQL*Plus starts
--
DEFINE _EDITOR='gvim -c "set filetype=sql"'
-- For backward compatibility
SET SQLPLUSCOMPATIBILITY 8.1.7
set serveroutput on size 1000000 FORMAT WRAPPED
set trimspool on
set long 5000
set linesize 300
set pagesize 999
column plan_plus_exp format a80
column global_name new_value gname
set termout off
define gname=idle
column global_name new_value gname
select lower(user) || '@' || substr( global_name, 1,
decode( dot, 0, length(global_name), dot-1) ) global_name
from (select global_name, instr(global_name,'.') dot from global_name );
set sqlprompt '&gname> '
set termout on
-- Used by Trusted Oracle
COLUMN ROWLABEL FORMAT A15
-- Used for the SHOW ERRORS command
COLUMN LINE/COL FORMAT A8
COLUMN ERROR FORMAT A65 WORD_WRAPPED
-- Used for the SHOW SGA command
COLUMN name_col_plus_show_sga FORMAT a24
-- Defaults for SHOW PARAMETERS
COLUMN name_col_plus_show_param FORMAT a36 HEADING NAME
COLUMN value_col_plus_show_param FORMAT a30 HEADING VALUE
-- Defaults for SET AUTOTRACE EXPLAIN report
COLUMN id_plus_exp FORMAT 990 HEADING i
COLUMN parent_id_plus_exp FORMAT 990 HEADING p
COLUMN plan_plus_exp FORMAT a60
COLUMN object_node_plus_exp FORMAT a8
COLUMN other_tag_plus_exp FORMAT a29
COLUMN other_plus_exp FORMAT a44
set editfile Q:\scripts\oracle\current.buf
October 13, 2005 - 9:16 am UTC
but all sqlldr_exp does is create a script that spools - nothing more.
look at sqlldr_exp - see that it just does a start /tmp/flat$$.sql -- grab /tmp/flat$$.sql and see if you can find any reason it would do that.
see how flat$$.sql (the $$ is replaced with a number by the shell, of course) differs from your spool script.
I've not had this happen to me.
for example, if I run
$ sqlldr_exp scott/tiger dept
/tmp/flat$$ will look like this:
prompt LOAD DATA
prompt INFILE *
prompt INTO TABLE dept
prompt REPLACE
prompt FIELDS TERMINATED BY '|'
prompt (
prompt deptno
prompt ,dname
prompt ,loc
prompt )
prompt BEGINDATA
select
deptno||chr(124)||
dname||chr(124)||
loc
from dept
/
exit
and that script is run by this:
sqlplus -s $1 << EOF
set wrap off
set feedback off
set pagesize 0
set verify off
set trimspool on
set linesize 5000
start /tmp/flat$$.sql
exit
EOF
exporting file
jp, October 30, 2005 - 7:26 am UTC
Tom,
I am trying to export information into a csv file, but when I run the test_dump_csv I am getting an error
1 create or replace procedure test_dump_csv
2 as
3 l_rows number;
4 begin
5 l_rows := dump_csv( 'select *
6 from scott.dept',
7 ',', 'c:\', 'test.dat' );
8* end;
SQL> /
Procedure created.
SQL> exec test_dump_csv
BEGIN test_dump_csv; END;
*
ERROR at line 1:
ORA-29280: invalid directory path
ORA-06512: at "SYS.UTL_FILE", line 18
ORA-06512: at "SYS.UTL_FILE", line 424
ORA-06512: at "SYS.DUMP_CSV", line 17
ORA-06512: at "SYS.TEST_DUMP_CSV", line 5
ORA-06512: at line 1
I even tried creating a directory, but still got the same error:
SQL> create or replace directory myfile as 'c:\tmp'
2 /
Directory created.
1 create or replace procedure test_dump_csv
2 as
3 l_rows number;
4 begin
5 l_rows := dump_csv( 'select *
6 from scott.dept',
7 ',', 'myfile', 'test.dat' );
8* end;
SQL> /
Procedure created.
SQL> exec test_dump_csv
BEGIN test_dump_csv; END;
*
ERROR at line 1:
ORA-29280: invalid directory path
ORA-06512: at "SYS.UTL_FILE", line 18
ORA-06512: at "SYS.UTL_FILE", line 424
ORA-06512: at "SYS.DUMP_CSV", line 17
ORA-06512: at "SYS.TEST_DUMP_CSV", line 5
ORA-06512: at line 1
Can you let me know what the problem is?
thanks
October 31, 2005 - 2:43 am UTC
[tkyte@localhost ~]$ oerr ora 29280
29280, 00000, "invalid directory path"
// *Cause: A corresponding directory object does not exist.
// *Action: Correct the directory object parameter, or create a corresponding
// directory object with the CREATE DIRECTORY command.
[tkyte@localhost ~]$
directory object names are case sensitive, and unquoted identifiers are folded to upper case
5 l_rows := dump_csv( 'select *
6 from scott.dept',
7 ',', 'MYFILE', 'test.dat' );
^^^^^^
In the first case, utl_file_dir probably wasn't set (a good thing, too - I would not want you to WRITE to my c:\ drive on the SERVER!!!! utl_file always writes to the SERVER)
exporting file - fixed
jp, October 30, 2005 - 11:07 am UTC
Tom,
I found the problem - I did not set the utl_file_dir value in the init.ora.
It's working fine now.
but I have another question: can I create an export file separated by tabs instead, something like
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS etc..
thanks for your help
October 31, 2005 - 2:54 am UTC
chr(9)
instead of
','
will use tabs.
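For example, calling the dump_csv routine from the top of this thread with chr(9) as the separator (the directory name MY_DIR and the file name here are just placeholders for this sketch):

```sql
declare
    l_rows number;
begin
    -- chr(9) is the tab character; pass it where ',' went before
    l_rows := dump_csv( 'select * from scott.dept',
                        chr(9), 'MY_DIR', 'dept.tsv' );
end;
/
```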
date format
jp, November 03, 2005 - 9:56 am UTC
Tom,
thanks, the chr(9) works just perfectly.
I tried to run the following query in the test_dump_csv
select TO_CHAR(insert_date, 'YYYYMMDD'),room_category,tot_room,scala_code from apart_stat
but it is failing with errors; the error seems to be related to TO_CHAR(insert_date,'YYYYMMDD').
Is there any restriction for date format?
I tested the query in sqlplus and worked fine
thanks.
November 04, 2005 - 2:32 am UTC
no, but you don't show us the example, so I'll assume you forgot to use '' to get a single ' in a character string literal.
I want java code to store/retrieve user-supplied data in/from a flat file
chandrachary, December 22, 2005 - 1:21 am UTC
December 22, 2005 - 10:32 am UTC
cool?
shouldn't be very hard for a Java programmer to write, I wouldn't expect... You have the logic virtually outlined for you right above; you just have to change it to use "result sets".
Question
PRS, December 30, 2005 - 10:35 am UTC
Tom,
Can I use directory object to create file using UTL_FILE on a different server(Application Server) than the database server? It does not create the file on my application server. Rather it creates the file on the directory used in create directory object on the database server.
Thanks,
PRS
December 30, 2005 - 11:12 am UTC
only if the database server can SEE that directory itself
otherwise, it would be very virus like, wouldn't it. How would you like it if files just started appearing on your machine magically???
enlightning
kirtish, January 07, 2006 - 5:40 am UTC
Hi Tom,
It's as useful as ever - you always learn new things here.
I was going through the above posts, and I have a doubt: can I
create a file in a directory from an "after insert trigger", and how?
Thanks
kirtish
January 07, 2006 - 10:25 am UTC
utl_file is used to create files.
HOWEVER, before you do anything NON-TRANSACTIONAL in a trigger - make sure you know what you are doing.
Say you create that file in the after insert trigger....
And the transaction is rolled back.
Files do not rollback.
Be very very very careful using any routine that starts with UTL_ in a trigger - utl_file, utl_smtp, utl_http, utl_tcp, whatever - they are typically NON-TRANSACTIONAL - they will NOT rollback when the transaction rolls back which can lead to a *mess*.
Static SQL
Preeti, March 20, 2006 - 11:45 pm UTC
Tom,
I searched on your website, I couldn't find an example of utl_file using static sql. You did mention in your article that if we have static sql we could avoid the dbms_sql.parse and hence the code would be faster. Could you please provide an example.
Also, is it possible to declare the sql in a cursor rather than putting it in a varchar2 variable.
Thanks
March 22, 2006 - 1:39 pm UTC
l_output := utl_file.fopen( 'MY_DIR', 'emp.dat', 'w' );
for x in ( select * from emp )
loop
    utl_file.put_line( l_output, x.ename || ',' || x.empno || .... );
end loop;
utl_file.fclose( l_output );
???
where I do dynamic, you do static - no "magic", as straightforward as it sounds.
If you want a generic routine as I have provided, you would use dbms_sql and dbms_sql does not accept ref cursors.
Better UTL_FILE performance
Andrew, May 29, 2006 - 4:20 am UTC
What I found gives significantly better performance in UTL_FILE is if you concatenate all your SELECT columns into a single VARCHAR2 column (if you can afford to have a limit of 4000 chars per line). You can then have a generic stored procedure which takes in any native cursor variable with a single VARCHAR2 field and which pumps the cursor contents out to a file. If I remember correctly it performed about 50% faster than a procedure which had about 10 columns doing the same work.
A question for Tom - presuming you could ensure strictly single-thread access to an external DLL, would the performance of an export-single-column DLL be very fast when called from extproc (within PL/SQL)? What we need is a fast solution which can be invoked from the server, rather than relying on shell scripts and jobs.
May 29, 2006 - 7:45 am UTC
Or....
you could select out all columns and have the data dumper build the string (which would make more sense...)
You would likely find an external procedure written in C to be "faster", yes.
reading multiple lines of a file
Paulo, August 24, 2006 - 10:54 am UTC
Hi Tom
In the past you help me, now i need once more a little help from you.
I have a file that contains 4 lines of information; in each line there are several fields separated by "|". My problem is this: my PL/SQL reads the first line perfectly, but I don't know how to read the second. Can you tell me how I can read the other lines? If possible, with an example.
I am using utl_file to work with the files.
August 27, 2006 - 7:55 pm UTC
if you know how to read the first line, why can't you read the second?
the code to read the "second" line is identical to the code to read the first line.
PAGESIZE LIMIT?
abz, August 29, 2006 - 6:47 am UTC
I have a simple query whose data I want to spool to a file using the SQL*Plus SPOOL command; the problem is that the maximum page size is 50000. What I did is:
SET LINESIZE 3000;
SET HEAD OFF;
SET PAGESIZE 50000
everything is fine in the output file, but after every 50000 lines SQL*Plus
adds a BLANK LINE to the output.
How can I solve this problem?
August 29, 2006 - 7:21 am UTC
set pagesize 0 - with a pagesize of 0 there are no pages, hence no page breaks.
a little bit confusing syntax
abz, August 30, 2006 - 2:45 am UTC
Thanks that solved the problem.
1 - But the syntax SET PAGESIZE 0 is not very intuitive to a human being; shouldn't it be SET PAGESIZE UNLIMITED or INFINITE? That would be more intuitive.
2 - One more problem: at the end of the query it gives the message "xx rows selected". How can I avoid this?
i got the answer for 2
abz, August 30, 2006 - 5:35 am UTC
I got the answer for 2 - can you please comment on 1?
SET FEED[BACK] {6|n|ON|OFF}
Displays the number of records returned by a script when a script selects at least n records. ON or OFF turns this display on or off. Turning feedback ON sets n to 1. Setting feedback to zero is equivalent to turning it OFF.
August 30, 2006 - 8:45 am UTC
already did....
Can the UTL_FILE package be used to create ASCII files?
Karthik, August 30, 2006 - 7:52 am UTC
I have a problem creating a csv file.
If a column has a single-line value, it comes out in a single cell. But if the column value spans several lines (carriage returns were typed while entering it into the table),
I am not able to create the csv file properly - that one column value takes more than one cell in the csv.
For example the column "Issue" has following value:
"Dear
I hereby updated the Human Resources: New User Registration Form Request.
And sending the request for your action.
Regards
Karthik".
If i try to create the csv file that particular record is coming as follows:
0608001,AEGIS USERID,SINGAPORE, Dear
I hereby updated the Human Resources: New User Registration Form Request.
And sending the request for your action.
Regards
Karthik,Closed.
If we try to load the data into a table it gives an error, since that one record spans more than one line. How can I store that value in a single line in the csv file?
Please help.
August 30, 2006 - 8:48 am UTC
you tell me, what would YOU like the file to look like?
once you tell us that, we can show you how to do that.
extracting data to a flat file
karthik, August 31, 2006 - 1:23 am UTC
The CSV file is being generated by this pl/sql for one record in many lines like:
0608001,AEGIS USERID,SINGAPORE, Dear
I hereby updated the Human Resources
Regards
Karthik,Closed.
But I need everything in one line.
0608001,AEGIS,DEAR I hereby updated the HumanResources Regards Karthik,Closed
August 31, 2006 - 9:11 am UTC
use translate to replace chr(13) and chr(10) with ' '
select ... translate(column, chr(13)||chr(10), ' ' ) .....
that'll turn the carriage return into a space and drop the linefeed (characters in the "from" string with no counterpart in the "to" string are simply removed), changing the data of course, but that is what you asked for.
extracting data to a flat file
Karthik, September 01, 2006 - 12:14 am UTC
Great - that is exactly what I was looking for, and it is working fine.
But I want to ask one more question. I can make the csv file using translate. But if I want to load the data in its exact original format, without modifying the data (including carriage returns and line feeds), how can I move it from one database to another? I don't have permission to use a database link. Is there any other way to load it? If so, how? Please explain.
September 01, 2006 - 8:27 am UTC
what tool will you use?
extracting data to a flat file
karthik, September 01, 2006 - 8:49 am UTC
I am using SQL LOADER.
September 01, 2006 - 10:35 am UTC
if you are doing oracle to oracle, I'm seriously wondering why you are writing code.
exp
imp
code free, be done with it. you are finished (take an early weekend!)
but if you persist in the "do it yourself", use this format of a control file and leave the newlines in the data - use a record terminator instead.
<quote src=Expert Oracle database architecture>
Use the STR Attribute
This is perhaps the most flexible method of loading data with embedded newlines. Using the STR attribute, we can specify a new end-of-line character (or sequence of characters). This allows us to create an input data file that has some special character at the end of each line; the newline is no longer special.
I prefer to use a sequence of characters, typically some special marker, and then a newline. This makes it easy to see the end-of-line character when viewing the input data in a text editor or some utility, as each record still has a newline at the end of it. The STR attribute is specified in hexadecimal, and perhaps the easiest way to get the exact hexadecimal string we need is to use SQL and UTL_RAW to produce the hexadecimal string for us. For example, assuming we are on UNIX where the end-of-line marker is CHR(10) (linefeed) and our special marker character is a pipe symbol (|), we can write this:
ops$tkyte@ORA10G> select utl_raw.cast_to_raw( '|'||chr(10) ) from dual;
UTL_RAW.CAST_TO_RAW('|'||CHR(10))
-------------------------------------------------------------------------------
7C0A
which shows us that the STR we need to use on UNIX is X'7C0A'.
Note On Windows, you would use UTL_RAW.CAST_TO_RAW( '|'||chr(13)||chr(10) ).
To use this, we might have a control file like this:
LOAD DATA
INFILE demo.dat "str X'7C0A'"
INTO TABLE DEPT
REPLACE
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(DEPTNO,
DNAME "upper(:dname)",
LOC "upper(:loc)",
COMMENTS
)
So, if our input data looks like this:
[tkyte@desktop tkyte]$ cat demo.dat
10,Sales,Virginia,This is the Sales
Office in Virginia|
20,Accounting,Virginia,This is the Accounting
Office in Virginia|
30,Consulting,Virginia,This is the Consulting
Office in Virginia|
40,Finance,Virginia,This is the Finance
Office in Virginia|
[tkyte@desktop tkyte]$
where each record in the data file ends with a |\n, the previous control file will load it correctly.
</quote>
Can we use dbms_output to dump entire row from cursor?
Robert, September 13, 2006 - 11:46 am UTC
Tom,
Is there a simple way (perhaps using refcursor) from within a procedure, to write out an entire row from a query with DBMS_OUTPUT?
I know I can string together the columns, but I want a method that is as simple as doing a 'select' in sql*plus... *without* having to do a PRINT :X after running the procedure... only using dbms_output.put_line.
Thanks,
Robert.
September 13, 2006 - 3:04 pm UTC
you have to string them together.
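A minimal sketch of what "stringing them together" looks like, using a few emp columns as an example (run set serveroutput on in SQL*Plus first):

```sql
begin
    for x in ( select empno, ename, sal from emp )
    loop
        -- concatenate each column yourself, with whatever separator you want
        dbms_output.put_line( x.empno || ',' || x.ename || ',' || x.sal );
    end loop;
end;
/
```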
Need your help
ND, September 13, 2006 - 10:02 pm UTC
Hi Tom,
This post is very helpful to my current project. I want to thank you for that. Also I have a quick question on the last define_column statement from your original post:
dbms_sql.define_column( l_theCursor, 1, l_columnValue, 2000 );
what does it do? Didn't we already define all the columns from inside the loop??
Thanks for your time,
PS: I am sure you have some Oracle-related books published; can you recommend one?
September 14, 2006 - 9:09 am UTC
typo, not needed.
Excellent....! but incomplete
Venkataramesh K, September 21, 2006 - 2:13 am UTC
Nice one, found it very useful. But the script is not handling the columns having the data with whole spaces. It is printing NULL for columns has spaces in it.
September 21, 2006 - 2:14 am UTC
"having the data with whole spaces"
eh? sorry, but no clue what you mean - however I can say "you have the code, fix it".
Description
Venkataramesh, September 21, 2006 - 2:35 am UTC
hi,
sorry for not explaining correctly.
Actually, I have a column of data type varchar2(4). The value stored in this field is '    ' (four spaces). When I use this script to generate a file, the output prints null instead of the four spaces.
Appreciate your help
Thanks in advance.
September 21, 2006 - 6:30 am UTC
ops$tkyte%ORA10GR2> create table t ( x int, y varchar2(4), z int );
Table created.
ops$tkyte%ORA10GR2> insert into t values ( 1, ' ', 2 );
1 row created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace procedure test_dump_csv
2 as
3 l_rows number;
4 begin
5 l_rows := dump_csv( 'select * from t',
6 ',', 'MY_DIR', 'test.dat' );
7 end;
8 /
Procedure created.
ops$tkyte%ORA10GR2> exec test_dump_csv;
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> !cat /tmp/test.dat
1, ,2
works for me - are you "sure"? I used 10gr2 - what about you?
One more doubt
venkataramesh, September 22, 2006 - 2:39 am UTC
I am also using the 10gr2 version, and it is working for me.
Now I have another doubt: I have a char(4) column which is nullable and has a null value in it. I executed the process.
Name Null? Type
----------- -------- ----------------------------
X NUMBER(38)
Y CHAR(4)
Z NUMBER(38)
The result from the select looks like the below.
X Y Z
---------- ---- ----------
1 2
why was y given spaces in the select query result? The export dump looks like the below (which is correct):
SQL> !cat temp.dat
12
Now my doubt is: why does the select query show four spaces?
September 22, 2006 - 3:10 pm UTC
well, why don't you do what i did...
post an ENTIRE example....
ops$tkyte%ORA10GR1> create table t ( x number, y char(4), z number );
Table created.
ops$tkyte%ORA10GR1> insert into t values ( 1, ' ', 2 );
1 row created.
ops$tkyte%ORA10GR1> create or replace procedure test_dump_csv
2 as
3 l_rows number;
4 begin
5 l_rows := dump_csv( 'select * from t',
6 ',', 'MY_DIR', 'test.dat' );
7 end;
8 /
Procedure created.
ops$tkyte%ORA10GR1>
ops$tkyte%ORA10GR1> exec test_dump_csv;
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR1> !cat /tmp/test.dat
1, ,2
Problem with the DBMS_SQL with following query
Venkataramesh K, October 12, 2006 - 5:10 am UTC
I gave the following input query to the procedure which unloads the data to a flat file:
select sysdate,'00000000' from dual
Now I am getting the following error:
ERROR at line 1:
ORA-06550: line 1, column 101:
PLS-00103: Encountered the symbol "00000" when expecting one of the following:
) , * & | = - + < / > at in is mod remainder not rem => ..
<an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
LIKE4_ LIKEC_ between || multiset member SUBMULTISET_
The symbol ", was inserted before "00000" to continue.
October 12, 2006 - 8:23 am UTC
why do you not show us your code, like I show you my code, so we can see what you are doing wrong?
ops$tkyte%ORA10GR2> create or replace procedure test_dump_csv
2 as
3 l_rows number;
4 begin
5 l_rows := dump_csv( 'select sysdate,''000000'' from dual', ',', 'MY_DIR' , 'x.dat' );
6 end;
7 /
Procedure created.
ops$tkyte%ORA10GR2> exec test_dump_csv
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> !cat /tmp/x.dat
12-OCT-06,000000
you probably didn't do quotes correctly.
Another option using SQL+ & COLSEP !!!
george lewycky, October 18, 2006 - 3:05 pm UTC
SET SERVEROUTPUT ON
SET ECHO OFF
SET VERIFY OFF
SET Heading OFF
SET LINESIZE 2000
SET NEWPAGE NONE
SET PAGESIZE 100
SET Heading OFF
SET COLSEP , <- this saves you the headache of coding each column!!!
spool c:\myfile.txt
select * from tablename <---- select all from your table
spool off
NOTE: if any of the columns have ',' embedded in them, like an address, you might be in a bit of a bind. So you might need a unique delimiter like '|' or ']'.
You must include tick marks (quotes) around this delimiter!!! See the line below:
SET COLSEP '|' or SET COLSEP ']'
your file will look something like this:
92877|S SHECTER |EMGINEERING & TECHNICAL FIELD
92881|S SHECTER |ENGINEERING & TECHNICAL FIELD
RE: Another option using SQL+ & COLSEP !!!
Maarten, October 27, 2006 - 9:06 am UTC
TO: george lewycky
How can I lose the spaces between
SHECTER and |EMGINEERING
in your file:
92877|S SHECTER |EMGINEERING & TECHNICAL FIELD
92881|S SHECTER |ENGINEERING & TECHNICAL FIELD
Replacing table Data if condition satisfied
Rinku, January 12, 2007 - 1:41 am UTC
Hi Tom,
In addition to loading the data from the table to a file, I want to replace some data in a column to some other value in the file, i.e. lets say in a column if a entry is '0' I want to replace it by 'false' in the file (column & table remains unchanged) and if it is '1' replace it by 'true'.
I hope I was clear
Regards
Rinku
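One way to do this (a sketch, not an answer from the thread - the column and table names are hypothetical) is to translate the value in the query that feeds the file, so the table itself is never touched:

```sql
select decode( flag_col, '0', 'false',
                         '1', 'true',
                         flag_col ) flag_col  -- any other value passes through unchanged
  from some_table;
```

That query can be dropped straight into the dump_csv routine shown earlier in the thread.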
getting error
Murali, January 15, 2007 - 8:15 am UTC
Hi Tom,
I copied exactly the same procedure.. in apps instance..
and trying execute it is giving error as follow
and running in Toad n tried in unix prompt as well
===========================================
declare
l_rows number;
begin
l_rows := dump_csv( 'select *
from all_users
where rownum < 25',
',', '/tmp', 'test.dat' );
END ;
========================================
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.UTL_FILE", line 98
ORA-06512: at "SYS.UTL_FILE", line 157
ORA-06512: at "APPS.DUMP_CSV", line 16
ORA-06512: at line 4
What needs to be done?
Please let me know.
Want to preserve trailing spaces
Eric Schneider, January 18, 2007 - 6:07 pm UTC
Tom,
I need to extract a table to a fixed length flat file. The first column is 3 characters and second column will be truncated to 25 characters. The problem I'm having is that if the second column ends with spaces, those spaces are not written to the file.
001ONE TWO THREE
003THIS IS TEST DATA
A12I AM TESTING THE CUT OFF
090BLUE CROSS AND BLUE SHIEL
If I turn trimspool and trimout off, the end of line character is at the 80th character within the output file, and I need it to be at the 25th.
Here is the simple code:
set heading off
set colsep ''
set feedback off
SPOOL &1
SELECT SUBSTR(cde_carrier, 5, 7),
RPAD(SUBSTR(nam_bus, 1, 25),25,' ')
FROM t_tpl_carrier;
SPOOL OFF;
Thank you,
Eric
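One possible approach (a sketch, not a confirmed answer from the thread): concatenate the two expressions into a single column so SQL*Plus does not pad between them, leave trimming off, and set linesize to the exact record width so the end-of-line lands right after the last character. The width of 28 assumes the first column really is 3 characters, as the post describes:

```sql
set heading off
set feedback off
set trimspool off
set trimout off
-- linesize must equal the exact record width: 3 + 25 = 28 (assumption)
set linesize 28
spool &1
SELECT RPAD(SUBSTR(cde_carrier, 5, 3), 3) ||
       RPAD(SUBSTR(nam_bus, 1, 25), 25)
  FROM t_tpl_carrier;
SPOOL OFF;
```

With trimspool off, each spooled line is padded out to linesize, so the trailing spaces survive and the newline sits at position 29.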
pipelining the output for a procedure
Valli, January 25, 2007 - 10:24 am UTC
Hi Tom,
I have been referring to this site for around 2 years now, but never found a need to ask a new question or follow up on something... it just has answers for everything... Great work!!!
But I have an interesting problem now... one part or another of which doesn't seem to get answered anywhere...
I have a procedure which updates a few databases using native dynamic sql. It's a kind of transaction processing, and I need to log each of the transactions. I can't use utl_file, as the log has to be put on an application server and not the DB server. And I cannot wait for the entire process to end before the logs get updated, so spooling is ruled out. One of the links led me to pipelined functions, which could not be used, as I am doing data manipulation here.
Currently, I am calling the procedure with the input data limited to 100 rows at a time, capturing the logging output into a log file, then calling it for another 100 rows, and so on, building the log file with a shell script.
But I get a feeling there is a better way and you would be able to help!! Any suggestions, Tom?
separator
shubham, February 05, 2007 - 10:37 am UTC
Hi Tom,
above you have suggested using " as an enclosure in case the separator is already present in the columns.
can we not get a .csv file with the fields enclosed in "?
i.e., rather than
i,am,shubham
I would like to have
"i","am","shubham"
is it possible?
thanks
February 05, 2007 - 10:39 am UTC
you have the code, you can make the code do whatever you would like it to do.
So, "is it possible..." sure - right after you make the very minor change to the code to do it!
separator
shubham, February 05, 2007 - 10:47 am UTC
I am looking for that very change...
i.e., what change should I make to the above DUMP_CSV function?
thanks a lot.
February 05, 2007 - 11:11 am UTC
you really cannot figure that out?
instead of
a,b,c,d
you want
"a","b","c","d"
and give the above really simple code - cannot do it??
wuh-oh.
utl_file.put( l_output, l_separator ||
l_columnValue );
could become
utl_file.put( l_output, l_separator ||
'"' || l_columnValue || '"' );
and if you think l_columnValue might have " in it and needs to be doubled up:
utl_file.put( l_output, l_separator ||
'"' || replace(l_columnValue,'"','""') || '"' );
or escaped with a \ (depends on what is reading this file)
utl_file.put( l_output, l_separator ||
'"' || replace(l_columnValue,'"','\"') || '"' );
separator
shubham, February 06, 2007 - 5:21 am UTC
thanks for the help.
spool
John, February 08, 2007 - 10:52 am UTC
I need to create 2 comma-delimited
files (extracts) from a table (one for each fiscal week). Based on the day I am running it, the files have to be for the previous 2 weeks. The sql is pretty simple; I'll call it from UNIX.
table a
-----
fy_week ,
cust,
pkg,
delvery,
date
sales_dollars
I can figure out the weeks by running the following query from calendar_dim
select
fiscal_year_week_prior1,
fiscal_year_week_prior2
from
calendar_dim
where fy_date = $1
then for fiscal_year_week_prior1 and fiscal_year_week_prior2 i need to run the following script
to spool 2 files each
select
fy_week||','||cust||','||pkg||','||delivery||','||date||','||sales_dollars
where fy_week = v_week
The only part I need help with: the file name I am spooling to has to have the fiscal week as part of it,
like:
file_1_yyyywk.dat
file_2_yyyywk.dat
Basically I will pass sysdate to a unix script, the calendar_dim query will figure out what weeks to
run, and I have to spool the two files based on the first query for each week, naming the files with fy_week as part of the file name... any ideas?
February 08, 2007 - 11:22 am UTC
you pass in the date right - so just the same way you referenced it in your query, reference it in your spool command.
John, February 08, 2007 - 12:55 pm UTC
But I have to run the SQL to determine the weeks - how would I do that? Can you please give me an example showing that?
February 08, 2007 - 1:31 pm UTC
column fy1 new_val fy1
column fy2 new_val fy2
select
fiscal_year_week_prior1 fy1,
fiscal_year_week_prior2 fy2
from
calendar_dim
where fy_date = $1;
spool &fy1..dat
.....
spool off
spool &fy2..dat
.....
spool off
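Putting the fragments above together, a complete script might look like this sketch. The table and column names come from the question; `&1` stands for the date passed in from the UNIX wrapper, and the exact file-name pattern is illustrative:

```sql
set verify off
set termout off
column fy1 new_val fy1
column fy2 new_val fy2

-- populate the substitution variables fy1 and fy2
select fiscal_year_week_prior1 fy1,
       fiscal_year_week_prior2 fy2
  from calendar_dim
 where fy_date = '&1';

set termout on
spool file_1_&fy1..dat
select fy_week||','||cust||','||pkg||','||delivery||','||sales_dollars
  from a
 where fy_week = '&fy1';
spool off

spool file_2_&fy2..dat
select fy_week||','||cust||','||pkg||','||delivery||','||sales_dollars
  from a
 where fy_week = '&fy2';
spool off
```

Note the double dot in `&fy1..dat`: the first dot terminates the substitution variable name, the second is the literal dot in the file name.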
text-->sqlldr -->flat -->text problem
John Kilbourne, February 09, 2007 - 9:58 am UTC
I loaded a text file into Oracle and later extracted it back to a flat file, and found that a long string was truncated. After some tweaking, I have a test case that reproduces this. Notice the end of the tmp.flat file; the string is truncated and ends in "SALSO", and the other fields are just gone. I include my login.sql, as I thought perhaps the answer was there, but I don't see it.
The string is not truncated in the table. The BEGINDATA part of the .ctl file is all one line in the file.
tmp.sql:
DROP TABLE tmp;
CREATE TABLE tmp
(
RXCUI VARCHAR2(8) NOT NULL,
LAT VARCHAR2 (3) DEFAULT 'ENG' NOT NULL,
TS VARCHAR2 (1),
LUI VARCHAR2(8),
STT VARCHAR2 (3),
SUI VARCHAR2 (8),
ISPREF VARCHAR2 (1),
RXAUI VARCHAR2(8) NOT NULL,
SAUI VARCHAR2 (50),
SCUI VARCHAR2 (50),
SDUI VARCHAR2 (50),
SAB VARCHAR2 (20) NOT NULL,
TTY VARCHAR2 (4) NOT NULL,
CODE VARCHAR2 (50) NOT NULL,
STR VARCHAR2 (3000) NOT NULL,
SRL VARCHAR2 (10),
SUPPRESS VARCHAR2 (1),
CVF VARCHAR2(50)
)
;
*******************************************
*******************************************
tmp.ctl:
options (direct=true, errors=0)
load data
infile *
badfile 'tmp.bad'
discardfile 'tmp.dsc'
truncate
into table tmp
fields terminated by '|'
trailing nullcols
(
RXCUI char(8),
LAT char(3),
TS char(1),
LUI char(8),
STT char(3),
SUI char(8),
ISPREF char(1),
RXAUI char(8),
SAUI char(50),
SCUI char(50),
SDUI char(50),
SAB char(20),
TTY char(20),
CODE char(50),
STR char(3000),
SRL integer external,
SUPPRESS char(1),
CVF char(50)
)
BEGINDATA
347269|ENG|P||PF||Y|1536371|1536371|347269||RXNORM|SCD|347269|ALLERGENIC EXTRACT, DANDELION POLLEN 1 UNT / ALLERGENIC EXTRACT, FIREBUSH KOCHIA POLLEN 1 UNT / ALLERGENIC EXTRACT, GLODENROD POLLEN 1 UNT / ALLERGENIC EXTRACT- COCKLEBUR, MIXED XANTHIUM SPP. 1 UNT / ALLERGENIC EXTRACT- DOCK, SOUR OR SHEEP SORREL RUMEX ACETOSELLA 1 UNT / ALLERGENIC EXTRACT- DOCK, YELLOW RUMEX CRISPUS - CURLY DOCK 1 UNT / ALLERGENIC EXTRACT- HEMP, WATER 1 UNT / ALLERGENIC EXTRACT- LAMB'S QUARTERS CHENOPODIUM ALBUM 1 UNT / ALLERGENIC EXTRACT- MEXICAN TEA CHENOPODIUM AMBROSIOIDES 1 UNT / ALLERGENIC EXTRACT- MUGWORT, COMMON ARTEMISIA VULGARIS 1 UNT / ALLERGENIC EXTRACT- PIGWEED, ROUGH REDROOT AMARANTHUS RETROFLEXUS 1 UNT / ALLERGENIC EXTRACT- PLANTAIN, ENGLISH PLANTAGO LANCEOLATA 1 UNT / ALLERGENIC EXTRACT- RAGWEED, SHORT AMBROSIA ELATIOR 1 UNT / ALLERGENIC EXTRACT- RAGWEED, TALL AMBROSIA TRIFIDA 1 UNT / ALLERGENIC EXTRACT- RAGWEED, WESTERN AMBROSIA CORONOPIFOLIA 1 UNT / ALLERGENIC EXTRACT- RUSSIAN THISTLE SALSOLA KALI 1 UNT / ALLERGENIC EXTRACT- SAGEBRUSH, COMMON ARTEMISIA TRIDENTATA 1 UNT Injectable Solution||N||
*******************************************
*******************************************
E:\asktom>sqlldr john/john control='tmp.ctl'
SQL*Loader: Release 10.2.0.1.0 - Production on Fri Feb 9 09:44:48 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Load completed - logical record count 1.
E:\asktom>type tmp.log
SQL*Loader: Release 10.2.0.1.0 - Production on Fri Feb 9 09:44:48 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Control File: tmp.ctl
Data File: tmp.ctl
Bad File: tmp.bad
Discard File: tmp.dsc
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 0
Continuation: none specified
Path used: Direct
Table TMP, loaded from every logical record.
Insert option in effect for this table: TRUNCATE
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- -----------
RXCUI FIRST 8 | CHARACTER
LAT NEXT 3 | CHARACTER
TS NEXT 1 | CHARACTER
LUI NEXT 8 | CHARACTER
STT NEXT 3 | CHARACTER
SUI NEXT 8 | CHARACTER
ISPREF NEXT 1 | CHARACTER
RXAUI NEXT 8 | CHARACTER
SAUI NEXT 50 | CHARACTER
SCUI NEXT 50 | CHARACTER
SDUI NEXT 50 | CHARACTER
SAB NEXT 20 | CHARACTER
TTY NEXT 20 | CHARACTER
CODE NEXT 50 | CHARACTER
STR NEXT 3000 | CHARACTER
SRL NEXT * | CHARACTER
SUPPRESS NEXT 1 | CHARACTER
CVF NEXT 50 | CHARACTER
Table TMP:
1 Row successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Bind array size not used in direct path.
Column array rows : 5000
Stream buffer bytes: 256000
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 1
Total logical records rejected: 0
Total logical records discarded: 0
Direct path multithreading optimization is disabled
Run began on Fri Feb 09 09:44:48 2007
Run ended on Fri Feb 09 09:44:49 2007
Elapsed time was: 00:00:00.66
CPU time was: 00:00:00.04
*******************************************
*******************************************
E:\asktom>flat john/john tmp >tmp.flat
tmp.flat:
347269|ENG|P||PF||Y|1536371|1536371|347269||RXNORM|SCD|347269|ALLERGENIC EXTRACT, DANDELION POLLEN 1 UNT / ALLERGENIC EXTRACT, FIREBUSH KOCHIA POLLEN 1 UNT / ALLERGENIC EXTRACT, GLODENROD POLLEN 1 UNT / ALLERGENIC EXTRACT- COCKLEBUR, MIXED XANTHIUM SPP. 1 UNT / ALLERGENIC EXTRACT- DOCK, SOUR OR SHEEP SORREL RUMEX ACETOSELLA 1 UNT / ALLERGENIC EXTRACT- DOCK, YELLOW RUMEX CRISPUS - CURLY DOCK 1 UNT / ALLERGENIC EXTRACT- HEMP, WATER 1 UNT / ALLERGENIC EXTRACT- LAMB'S QUARTERS CHENOPODIUM ALBUM 1 UNT / ALLERGENIC EXTRACT- MEXICAN TEA CHENOPODIUM AMBROSIOIDES 1 UNT / ALLERGENIC EXTRACT- MUGWORT, COMMON ARTEMISIA VULGARIS 1 UNT / ALLERGENIC EXTRACT- PIGWEED, ROUGH REDROOT AMARANTHUS RETROFLEXUS 1 UNT / ALLERGENIC EXTRACT- PLANTAIN, ENGLISH PLANTAGO LANCEOLATA 1 UNT / ALLERGENIC EXTRACT- RAGWEED, SHORT AMBROSIA ELATIOR 1 UNT / ALLERGENIC EXTRACT- RAGWEED, TALL AMBROSIA TRIFIDA 1 UNT / ALLERGENIC EXTRACT- RAGWEED, WESTERN AMBROSIA CORONOPIFOLIA 1 UNT / ALLERGENIC EXTRACT- RUSSIAN THISTLE SALS
*******************************************
*******************************************
john@ORCL> select str from tmp;
STR
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
------------------------------------------------------------
ALLERGENIC EXTRACT, DANDELION POLLEN 1 UNT / ALLERGENIC EXTRACT, FIREBUSH KOCHIA
POLLEN 1 UNT / ALLERGENIC EXTRACT, GLODENROD POLLEN 1 UNT / ALLERGENIC EXTRACT-
COCKLEBUR, MIXED XANTHIUM SPP. 1 UNT / ALLERGENIC EXTRACT- DOCK, SOUR OR SHEEP
SORREL RUMEX ACETOSELLA 1 UNT / ALLERGENIC EXTRACT- DOCK, YE
LLOW RUMEX CRISPUS - CURLY DOCK 1 UNT / ALLERGENIC EXTRACT- HEMP, WATER 1 UNT /
ALLERGENIC EXTRACT- LAMB'S QUARTERS CHENOPODIUM ALBUM 1 UNT / ALLERGENIC EXTRACT
- MEXICAN TEA CHENOPODIUM AMBROSIOIDES 1 UNT / ALLERGENIC EXTRACT- MUGWORT, COMM
ON ARTEMISIA VULGARIS 1 UNT / ALLERGENIC EXTRACT- PIGWEED, R
OUGH REDROOT AMARANTHUS RETROFLEXUS 1 UNT / ALLERGENIC EXTRACT- PLANTAIN, ENGLIS
H PLANTAGO LANCEOLATA 1 UNT / ALLERGENIC EXTRACT- RAGWEED, SHORT AMBROSIA ELATIO
R 1 UNT / ALLERGENIC EXTRACT- RAGWEED, TALL AMBROSIA TRIFIDA 1 UNT / ALLERGENIC
EXTRACT- RAGWEED, WESTERN AMBROSIA CORONOPIFOLIA 1 UNT / ALL
ERGENIC EXTRACT- RUSSIAN THISTLE SALSOLA KALI 1 UNT / ALLERGENIC EXTRACT- SAGEBR
USH, COMMON ARTEMISIA TRIDENTATA 1 UNT Injectable Solution
*******************************************
*******************************************
john@ORCL> select * from tmp;
RXCUI LAT T LUI STT SUI I RXAUI SAUI
SCUI SDUI
SAB TTY CODE
-------- --- - -------- --- -------- - -------- --------------------------------
------------------ --------- --------------------------------------------------
-------------------- ---- ----------
STR
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
------------------------------------------------------------
SRL S CVF
---------- - --------------------------------------------------
347269 ENG P PF Y 1536371 1536371
347269
RXNORM SCD 347269
ALLERGENIC EXTRACT, DANDELION POLLEN 1 UNT / ALLERGENIC EXTRACT, FIREBUSH KOCHIA
POLLEN 1 UNT / ALLERGENIC EXTRACT, GLODENROD POLLEN 1 UNT / ALLERGENIC EXTRACT-
COCKLEBUR, MIXED XANTHIUM SPP. 1 UNT / ALLERGENIC EXTRACT- DOCK, SOUR OR SHEEP
SORREL RUMEX ACETOSELLA 1 UNT / ALLERGENIC EXTRACT- DOCK, YE
LLOW RUMEX CRISPUS - CURLY DOCK 1 UNT / ALLERGENIC EXTRACT- HEMP, WATER 1 UNT /
ALLERGENIC EXTRACT- LAMB'S QUARTERS CHENOPODIUM ALBUM 1 UNT / ALLERGENIC EXTRACT
- MEXICAN TEA CHENOPODIUM AMBROSIOIDES 1 UNT / ALLERGENIC EXTRACT- MUGWORT, COMM
ON ARTEMISIA VULGARIS 1 UNT / ALLERGENIC EXTRACT- PIGWEED, R
OUGH REDROOT AMARANTHUS RETROFLEXUS 1 UNT / ALLERGENIC EXTRACT- PLANTAIN, ENGLIS
H PLANTAGO LANCEOLATA 1 UNT / ALLERGENIC EXTRACT- RAGWEED, SHORT AMBROSIA ELATIO
R 1 UNT / ALLERGENIC EXTRACT- RAGWEED, TALL AMBROSIA TRIFIDA 1 UNT / ALLERGENIC
EXTRACT- RAGWEED, WESTERN AMBROSIA CORONOPIFOLIA 1 UNT / ALL
ERGENIC EXTRACT- RUSSIAN THISTLE SALSOLA KALI 1 UNT / ALLERGENIC EXTRACT- SAGEBR
USH, COMMON ARTEMISIA TRIDENTATA 1 UNT Injectable Solution
N
*******************************************
*******************************************
login.sql:
--
-- Copyright (c) Oracle Corporation 1988, 2000. All Rights Reserved.
--
-- NAME
-- login.sql
--
-- DESCRIPTION
-- SQL*Plus login startup file.
--
-- Add any sqlplus commands here that are to be executed when a user
-- starts SQL*Plus on your system
--
-- USAGE
-- This script is automatically run when SQL*Plus starts
--
DEFINE _EDITOR='gvim'
-- For backward compatibility
SET SQLPLUSCOMPATIBILITY 8.1.7
set serveroutput on size 1000000 FORMAT WRAPPED
set trimspool on
set long 5000
set linesize 300
set pagesize 999
column plan_plus_exp format a80
column global_name new_value gname
set termout off
define gname=idle
column global_name new_value gname
select lower(user) || '@' || substr( global_name, 1,
decode( dot, 0, length(global_name), dot-1) ) global_name
from (select global_name, instr(global_name,'.') dot from global_name );
set sqlprompt '&gname> '
DEF ttylist=('IN','BN','DF','SBD','SBDC','SBDF','SCD','SCDF','SCDC')
DEF xx= @afiedt.buf
-- Used by Trusted Oracle
COLUMN ROWLABEL FORMAT A15
-- Used for the SHOW ERRORS command
COLUMN LINE/COL FORMAT A8
COLUMN ERROR FORMAT A65 WORD_WRAPPED
-- Used for the SHOW SGA command
COLUMN name_col_plus_show_sga FORMAT a24
-- Defaults for SHOW PARAMETERS
COLUMN name_col_plus_show_param FORMAT a36 HEADING NAME
COLUMN value_col_plus_show_param FORMAT a30 HEADING VALUE
-- Defaults for SET AUTOTRACE EXPLAIN report
COLUMN id_plus_exp FORMAT 990 HEADING i
COLUMN parent_id_plus_exp FORMAT 990 HEADING p
COLUMN plan_plus_exp FORMAT a60
COLUMN object_node_plus_exp FORMAT a8
COLUMN other_tag_plus_exp FORMAT a29
COLUMN other_plus_exp FORMAT a44
--for META, RxNorm queries
--COL str format a65
COL code format a10
COL rela format a24
COL atv format a20
col scui format a9
--set editfile Q:\scripts\oracle\afiedit.buf
set termout on
February 12, 2007 - 8:34 am UTC
$ ./flat / tmp
347269,ENG,P,,PF,,Y,1536371,1536371,347269,,RXNORM,SCD,347269,ALLERGENIC EXTRACT, DANDELION POLLEN 1 UNT / ALLERGENIC EXTRACT, FIREBUSH KOCHIA POLLEN 1 UNT / ALLERGENIC EXTRACT, GLODENROD POLLEN 1 UNT / ALLERGENIC EXTRACT- COCKLEBUR, MIXED XANTHIUM SPP. 1 UNT / ALLERGENIC EXTRACT- DOCK, SOUR OR SHEEP SORREL RUMEX ACETOSELLA 1 UNT / ALLERGENIC EXTRACT- DOCK, YELLOW RUMEX CRISPUS - CURLY DOCK 1 UNT / ALLERGENIC EXTRACT- HEMP, WATER 1 UNT / ALLERGENIC EXTRACT- LAMB'S QUARTERS CHENOPODIUM ALBUM 1 UNT / ALLERGENIC EXTRACT- MEXICAN TEA CHENOPODIUM AMBROSIOIDES 1 UNT / ALLERGENIC EXTRACT- MUGWORT, COMMON ARTEMISIA VULGARIS 1 UNT / ALLERGENIC EXTRACT- PIGWEED, ROUGH REDROOT AMARANTHUS RETROFLEXUS 1 UNT / ALLERGENIC EXTRACT- PLANTAIN, ENGLISH PLANTAGO LANCEOLATA 1 UNT / ALLERGENIC EXTRACT- RAGWEED, SHORT AMBROSIA ELATIOR 1 UNT / ALLERGENIC EXTRACT- RAGWEED, TALL AMBROSIA TRIFIDA 1 UNT / ALLERGENIC EXTRACT- RAGWEED, WESTERN AMBROSIA CORONOPIFOLIA 1 UNT / ALLERGENIC EXTRACT- RUSSIAN THISTLE SALSOLA KALI 1 UNT / ALLERGENIC EXTRACT- SAGEBRUSH, COMMON ARTEMISIA TRIDENTATA 1 UNT Injectable Solution,,N,
I cannot reproduce.
Extract data from an XML view
Rory, February 13, 2007 - 10:49 pm UTC
Hi Tom,
I've got a view
SQL> desc itiread.cpu_xml
Name Null? Type
--------- -------- ------------------------
TABLE of XMLTYPE
This view contains XML data.
By setting linesize, long, etc., I am able to extract the whole XML document (which is only one line of the view).
But my problem is that the spool file looks something like the excerpt below (this is only part of the spool file). I hit the maximum allowable LINESIZE in SQL*Plus, which makes my output wrap, thereby giving me errors when I open the file (test.xml): the browser gives an error like 'end tag does not match the start tag'. This is because the whole XML document is one line, about 200,000 characters long, while the LINESIZE limit is only 32K. Is there a workaround for this? Or if I can at least tell the select statement to spool to a new line when it sees an end tag like "</computer>", then at least the browser can read it.
Thanks.
<computers><computer id="14178708"><assettag>107459</assettag><hostname>ACVACTIDCPCTX07</hostname><model>/Server/INTEL Server/HP COMPAQ PROLIANT DL360 G4/</model><modelno>DL360 G4</modelno><serialno>USM506035W</serialno><location>/ITI/TIDC/1/1GBAY02/</location><startu>84</startu><assignment>In use</assignment><serverstatus>Production</serverstatus></computer><computer id="14178719"><assettag>107447</assettag><hostname>ACVACTIDCPDAT03</hostname><model>/Server/INTEL Server/HP COMPAQ PROLIANT DL380 G4/</model><modelno>DL380 G4</modelno><serialno>USX4450051</serialno><location>/ITI/TIDC/1/1GBAY03/</location><startu>27</startu><assignment>In use</assignment><serverstatus>Production</serverstatus></computer><computer id="14178734"><assettag>107462</assettag><hostname>ACVACTIDCPMAL01</hostname><model>/Server/INTEL Server/HP COMPAQ PROLIANT DL360 G4/</model><modelno>DL360 G4</modelno><serialno>USM50600N6</serialno><location>/ITI/TIDC/1/1GBAY01/</location><startu>61</startu><assignment>In use</assignment><serverstatus>Production</serverstatus></computer><computer id="14178747"><assettag>109347</assettag><hostname>CICTIDCSQL01</hostname><model>/Server/INTEL Server/HP COMPAQ PROLIANT DL380 G4/</model><modelno>DL380 G4</modelno><serialno>USE504A0FP</serialno><location>/ITI/TIDC/1/1EBAY26/</location><startu>47</startu><assignment>In use</assignment><serverstatus>Production</serverstatus></computer><computer id="14178782"><assettag>107204</assettag><hostname>ELWTTIDCWEB01</hostname><model>/Server/INTEL Server/HP COMPAQ PROLIANT DL360 G4/</model><modelno>DL360 G4</modelno><serialno>USM506033J</serialno><location>/ITI/TIDC/1/1EBAY07/</location><startu>51</startu><assignment>In use</assignment><serverstatus>Production</serverstatus></computer><computer id="14178793"><assettag>108845</assettag><hostname>FSHRPSILN01</hostname><model>/Server/INTEL Server/HP COMPAQ PROLIANT DL380 G3/</model><modelno>DL380 G3</modelno><serialno>D352LDN7J001</serialno><location>/ITI/LIDC/1/02-14-22/</loca
tion><
February 14, 2007 - 8:25 am UTC
write a real program using a real language instead of a simple command line scripting tool?
(you could write a plsql block of course that uses dbms_output - but be aware that in 9i and before the linesize for dbms_output is 255 which might be too small)
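As an alternative to dbms_output, the XML can be written server-side with UTL_FILE, which sidesteps the LINESIZE limit entirely. A sketch, assuming a directory object named MY_DIR exists and the view returns a single XMLTYPE row (MY_DIR and the rownum predicate are assumptions, not from the original):

```sql
declare
    l_xml   clob;
    l_file  utl_file.file_type;
    l_amt   pls_integer := 8000;   -- chunk size, well under the 32767 line limit
    l_pos   pls_integer := 1;
    l_len   pls_integer;
begin
    -- getclobval() serializes the XMLType instance to a CLOB
    select value(t).getclobval()
      into l_xml
      from itiread.cpu_xml t
     where rownum = 1;

    l_file := utl_file.fopen( 'MY_DIR', 'test.xml', 'w', 32767 );
    l_len  := dbms_lob.getlength( l_xml );
    while l_pos <= l_len
    loop
        -- put (not put_line) so no newlines are injected into the XML
        utl_file.put( l_file, dbms_lob.substr( l_xml, l_amt, l_pos ) );
        utl_file.fflush( l_file );
        l_pos := l_pos + l_amt;
    end loop;
    utl_file.fclose( l_file );
end;
/
```

One caveat: some UTL_FILE versions enforce max_linesize across successive PUT calls as well, so if the writes fail, inserting a newline after each end tag first (e.g. replace the CLOB's '</computer>' with '</computer>'||chr(10)) keeps every physical line short.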
Extract XML data
Rory, February 14, 2007 - 3:57 pm UTC
Hi Tom,
Thanks for the prompt reply.
Is it possible for you to give me a sample code to generate my needed output using dbms_output. But of course with the necessary work around for the dbms_output limitations.
The one liner view is about 200000 long.
THanks a lot.
Extracting huge vol. of data into xml file using UTL_FILE
Harsh, February 28, 2007 - 1:59 pm UTC
Hi Tom,
I am currently working on Oracle 9i Release 2.
When I try to extract a huge volume of data, i.e. around 70,000 records, from the database into an XML file by executing the code below, it gives me the following error:
expected symbol name is missing
DECLARE
*
ERROR at line 1:
ORA-20054: Invalid Operation
ORA-06512: at line 171.
Can you please sort this out?
set serveroutput on
set define ~
spool c:\volunteerList2_harsh.xml
DECLARE
CURSOR cVol IS
SELECT * FROM volunteers WHERE EXISTS
(SELECT 'X' FROM volunteer_details WHERE vdd_vdd_id = 167 and vl_pin = pin AND UPPER( attr_text ) = 'Y');
CURSOR cVolDet( vId NUMBER ) IS
SELECT data, description, domain
FROM (SELECT DISTINCT DECODE( vdd.datatype, '1', TO_CHAR( vd.attr_number ), '2', vd.attr_text,
'3', REPLACE( TO_CHAR( vd.attr_date, 'YYYYMMDD-HH24MISS'), '-','T' ) ) data, vdd.description, vdd.domain, vdd.vdd_order
FROM volunteer_detail_descriptions vdd, volunteer_details vd
WHERE vd.vl_pin = vId
AND vdd.vdd_id = vd.vdd_vdd_id
AND UPPER( vdd.description ) NOT LIKE '%(OLD)%'
AND UPPER( vdd.description ) NOT LIKE '%DO NOT USE%' )
ORDER BY vdd_order;
CURSOR cVolTrials( vId NUMBER ) IS
SELECT trial_name, t_id, tt.description, start_date, end_date, payment, pay_method, payment_date
FROM trial_types tt, trials t, volunteer_tests
WHERE vl_pin = vId
AND t_id = t_t_id
AND tt_id = t.tt_tt_id;
cLine VARCHAR2(32000);
iIndent INTEGER := 0;
temp VARCHAR2(32000);
val VARCHAR2(32000);
File1 utl_file.file_type;
PROCEDURE xmlElement( xmlTag VARCHAR2, xmlData VARCHAR2 := '', xmlFunc VARCHAR2 := '', xmlAttr VARCHAR2 := '' ) IS
cOut VARCHAR2(32000);
BEGIN
IF xmlFunc = 'C' THEN
iIndent := iIndent - 1;
END IF;
cOut := LPAD( '<', 2*iIndent+1 );
IF xmlFunc = 'C' THEN
cOut := cOut || '/';
END IF;
cOut := cOut || xmlTag;
IF xmlAttr IS NOT NULL AND xmlFunc <> 'C' THEN
cOut := cOut || ' ' || xmlAttr;
END IF;
IF xmlFunc = 'O' THEN
iIndent := iIndent + 1;
ELSIF xmlFunc IS NULL AND xmlData IS NULL THEN
cOut := cOut || ' /';
END IF;
cOut := cOut || '>';
IF xmlData IS NOT NULL THEN
cOut := cOut || REPLACE( REPLACE( REPLACE( xmlData, '&', '&amp;' ), '<', '&lt;' ), '>', '&gt;' );
IF xmlFunc IS NULL THEN
cOut := cOut || '</' || xmlTag || '>';
END IF;
END IF;
utl_file.put_line(File1, cOut,FALSE);
END;
FUNCTION tagName( name VARCHAR2 ) RETURN VARCHAR2 IS
cOut VARCHAR2(4000);
BEGIN
cOut := TRANSLATE( name, '-_/():@&<>%$£^.\?,',' ');
cOut := INITCAP( cOut );
cOut := REPLACE( cOut, ' ', '' );
cOut := LOWER( SUBSTR( cOut, 1, 1 ) ) || SUBSTR( cOut, 2 );
RETURN cOut;
END;
FUNCTION parse( data IN OUT VARCHAR2, delimiter VARCHAR2 := ' ' ) RETURN VARCHAR2 IS
i INTEGER;
output VARCHAR2(32000);
BEGIN
i := INSTR( data, delimiter );
IF i > 0 THEN
output := SUBSTR( data, 1, i - 1 );
data := SUBSTR( data, i + LENGTH( delimiter ) );
ELSE
output := data;
data := NULL;
END IF;
RETURN output;
END;
BEGIN
File1 := utl_file.fopen('ORALOAD','volunteerList22.xml', 'w');
utl_file.put_line(File1,'<?xml version="1.0"?>',FALSE);
xmlElement( 'volunteerList', '', 'O' );
FOR rVol IN cVol LOOP
xmlElement( 'volunteer', '', 'O', 'pin="' || TO_CHAR( rVol.pin ) || '"' );
xmlElement( 'email', rVol.EMAIL );
xmlElement( 'forename', rVol.FORENAME );
xmlElement( 'middlenames', rVol.MIDDLENAMES );
xmlElement( 'staffNo', TO_CHAR( rVol.STAFF_NO ) );
xmlElement( 'surname', rVol.SURNAME );
xmlElement( 'knownAs', rVol.KNOWNAS );
xmlElement( 'addressLine1', rVol.ADDRESS_LINE1 );
xmlElement( 'addressLine2', rVol.ADDRESS_LINE2 );
xmlElement( 'addressLine3', rVol.ADDRESS_LINE3 );
xmlElement( 'postCode', rVol.POSTCODE );
xmlElement( 'telNo', rVol.TEL_NO );
xmlElement( 'comments', rVol.COMMENTS );
xmlElement( 'utPin', TO_CHAR( rVol.UT_PIN ) );
xmlElement( 'department', TO_CHAR( rVol.DEPARTMENT ) );
xmlElement( 'payGroup', rVol.PAY_GROUP );
FOR rVolDet IN cVolDet( rVol.pin ) LOOP
IF rVolDet.domain IS NULL THEN
xmlElement( tagName( rVolDet.description ), rVolDet.data );
ELSE
temp := TRANSLATE( rVolDet.data, '/\,', ' ' );
WHILE temp IS NOT NULL LOOP
val := parse( temp );
IF val IS NOT NULL THEN
BEGIN
SELECT rv_low_value INTO val FROM cg_ref_codes WHERE rv_domain = rVolDet.domain AND rv_low_value = val;
xmlElement( tagName( rVolDet.description ), val );
EXCEPTION
WHEN NO_DATA_FOUND THEN
utl_file.put_line( File1, '<!--Illegal Value "' || val || '" for element "' || tagName( rVolDet.description ) || '"-->',FALSE);
END;
END IF;
END LOOP;
END IF;
END LOOP;
FOR rVolTrials IN cVolTrials( rVol.pin ) LOOP
xmlElement( 'trial', '', 'O', 'id="' || TO_CHAR( rVolTrials.t_id ) || '"' );
xmlElement( 'trialName', rVolTrials.trial_name );
xmlElement( 'description', rVolTrials.description );
xmlElement( 'startDate', REPLACE( TO_CHAR( rVolTrials.start_date, 'YYYYMMDD-HH24MISS' ), '-','T' ) );
xmlElement( 'endDate', REPLACE( TO_CHAR( rVolTrials.end_date, 'YYYYMMDD-HH24MISS' ), '-','T' ) );
xmlElement( 'payment', TO_CHAR( rVolTrials.payment ) );
xmlElement( 'payMethod', TO_CHAR( rVolTrials.pay_method ) );
xmlElement( 'paymentDate', REPLACE( TO_CHAR( rVolTrials.payment_date, 'YYYYMMDD-HH24MISS' ), '-','T' ) );
xmlElement( 'trial', '', 'C' );
END LOOP;
xmlElement( 'volunteer', '', 'C' );
END LOOP;
xmlElement( 'volunteerList', '', 'C' );
COMMIT;
utl_file.fclose(File1);
EXCEPTION
WHEN utl_file.invalid_path THEN
RAISE_APPLICATION_ERROR (-20052, 'Invalid File Location');
WHEN utl_file.invalid_operation THEN
RAISE_APPLICATION_ERROR (-20054, 'Invalid Operation');
WHEN others THEN
RAISE_APPLICATION_ERROR (-20099, 'Unknown UTL_FILE Error');
END;
/
Spool Off;
If I use dbms_output, the buffer overflows due to the size limit. Following is the code using dbms_output:
set serveroutput on
set feedback off
set define ~
spool c:\volunteerListOriginal.xml
DECLARE
CURSOR cVol IS
SELECT * FROM volunteers WHERE EXISTS
( SELECT 'X' FROM volunteer_details WHERE vdd_vdd_id = 167 and vl_pin = pin AND UPPER( attr_text ) = 'Y' ) ;
-- AND ROWNUM <=20;
CURSOR cVolDet( vId NUMBER ) IS
SELECT data, description, domain
FROM (SELECT DISTINCT DECODE( vdd.datatype, '1', TO_CHAR( vd.attr_number ), '2', vd.attr_text,
'3', REPLACE( TO_CHAR( vd.attr_date, 'YYYYMMDD-HH24MISS'), '-','T' ) ) data, vdd.description, vdd.domain, vdd.vdd_order
FROM volunteer_detail_descriptions vdd, volunteer_details vd
WHERE vd.vl_pin = vId
AND vdd.vdd_id = vd.vdd_vdd_id
AND UPPER( vdd.description ) NOT LIKE '%(OLD)%'
AND UPPER( vdd.description ) NOT LIKE '%DO NOT USE%' )
ORDER BY vdd_order;
CURSOR cVolTrials( vId NUMBER ) IS
SELECT trial_name, t_id, tt.description, start_date, end_date, payment, pay_method, payment_date
FROM trial_types tt, trials t, volunteer_tests
WHERE vl_pin = vId
AND t_id = t_t_id
AND tt_id = t.tt_tt_id;
cLine VARCHAR2(32000);
iIndent INTEGER := 0;
temp VARCHAR2(32000);
val VARCHAR2(32000);
PROCEDURE xmlElement( xmlTag VARCHAR2, xmlData VARCHAR2 := '', xmlFunc VARCHAR2 := '', xmlAttr VARCHAR2 := '' ) IS
cOut VARCHAR2(32000);
BEGIN
IF xmlFunc = 'C' THEN
iIndent := iIndent - 1;
END IF;
cOut := LPAD( '<', 2*iIndent+1 );
IF xmlFunc = 'C' THEN
cOut := cOut || '/';
END IF;
cOut := cOut || xmlTag;
IF xmlAttr IS NOT NULL AND xmlFunc <> 'C' THEN
cOut := cOut || ' ' || xmlAttr;
END IF;
IF xmlFunc = 'O' THEN
iIndent := iIndent + 1;
ELSIF xmlFunc IS NULL AND xmlData IS NULL THEN
cOut := cOut || ' /';
END IF;
cOut := cOut || '>';
IF xmlData IS NOT NULL THEN
cOut := cOut || REPLACE( REPLACE( REPLACE( xmlData, '&', '&amp;' ), '<', '&lt;' ), '>', '&gt;' );
IF xmlFunc IS NULL THEN
cOut := cOut || '</' || xmlTag || '>';
END IF;
END IF;
dbms_output.put_line( cOut );
END;
FUNCTION tagName( name VARCHAR2 ) RETURN VARCHAR2 IS
cOut VARCHAR2(4000);
BEGIN
cOut := TRANSLATE( name, '-_/():@&<>%$£^.\?,',' ');
cOut := INITCAP( cOut );
cOut := REPLACE( cOut, ' ', '' );
cOut := LOWER( SUBSTR( cOut, 1, 1 ) ) || SUBSTR( cOut, 2 );
RETURN cOut;
END;
FUNCTION parse( data IN OUT VARCHAR2, delimiter VARCHAR2 := ' ' ) RETURN VARCHAR2 IS
i INTEGER;
output VARCHAR2(32000);
BEGIN
i := INSTR( data, delimiter );
IF i > 0 THEN
output := SUBSTR( data, 1, i - 1 );
data := SUBSTR( data, i + LENGTH( delimiter ) );
ELSE
output := data;
data := NULL;
END IF;
RETURN output;
END;
BEGIN
DBMS_OUTPUT.PUT_LINE('<?xml version="1.0"?>');
xmlElement( 'volunteerList', '', 'O' );
FOR rVol IN cVol LOOP
xmlElement( 'volunteer', '', 'O', 'pin="' || TO_CHAR( rVol.pin ) || '"' );
xmlElement( 'email', rVol.EMAIL );
xmlElement( 'forename', rVol.FORENAME );
xmlElement( 'middlenames', rVol.MIDDLENAMES );
xmlElement( 'staffNo', TO_CHAR( rVol.STAFF_NO ) );
xmlElement( 'surname', rVol.SURNAME );
xmlElement( 'knownAs', rVol.KNOWNAS );
xmlElement( 'addressLine1', rVol.ADDRESS_LINE1 );
xmlElement( 'addressLine2', rVol.ADDRESS_LINE2 );
xmlElement( 'addressLine3', rVol.ADDRESS_LINE3 );
xmlElement( 'postCode', rVol.POSTCODE );
xmlElement( 'telNo', rVol.TEL_NO );
xmlElement( 'comments', rVol.COMMENTS );
xmlElement( 'utPin', TO_CHAR( rVol.UT_PIN ) );
xmlElement( 'department', TO_CHAR( rVol.DEPARTMENT ) );
xmlElement( 'payGroup', rVol.PAY_GROUP );
FOR rVolDet IN cVolDet( rVol.pin ) LOOP
IF rVolDet.domain IS NULL THEN
xmlElement( tagName( rVolDet.description ), rVolDet.data );
ELSE
temp := TRANSLATE( rVolDet.data, '/\,', ' ' );
WHILE temp IS NOT NULL LOOP
val := parse( temp );
IF val IS NOT NULL THEN
BEGIN
SELECT rv_low_value INTO val FROM cg_ref_codes WHERE rv_domain = rVolDet.domain AND rv_low_value = val AND ROWNUM = 1;
xmlElement( tagName( rVolDet.description ), val );
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE( '<!--Illegal Value "' || val || '" for element "' || tagName( rVolDet.description ) || '"-->' );
END;
END IF;
END LOOP;
END IF;
END LOOP;
FOR rVolTrials IN cVolTrials( rVol.pin ) LOOP
xmlElement( 'trial', '', 'O', 'id="' || TO_CHAR( rVolTrials.t_id ) || '"' );
xmlElement( 'trialName', rVolTrials.trial_name );
xmlElement( 'description', rVolTrials.description );
xmlElement( 'startDate', REPLACE( TO_CHAR( rVolTrials.start_date, 'YYYYMMDD-HH24MISS' ), '-','T' ) );
xmlElement( 'endDate', REPLACE( TO_CHAR( rVolTrials.end_date, 'YYYYMMDD-HH24MISS' ), '-','T' ) );
xmlElement( 'payment', TO_CHAR( rVolTrials.payment ) );
xmlElement( 'payMethod', TO_CHAR( rVolTrials.pay_method ) );
xmlElement( 'paymentDate', REPLACE( TO_CHAR( rVolTrials.payment_date, 'YYYYMMDD-HH24MISS' ), '-','T' ) );
xmlElement( 'trial', '', 'C' );
END LOOP;
xmlElement( 'volunteer', '', 'C' );
END LOOP;
xmlElement( 'volunteerList', '', 'C' );
END;
/
SPOOL OFF;
Thanks in Advance.
Regards
Hp
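A likely culprit in the UTL_FILE version above (hedged, since the WHEN OTHERS handlers re-raise everything as a generic ORA-20054 and hide the real error): FOPEN is called without a max_linesize, so the default of roughly 1,000 bytes applies, and any PUT/PUT_LINE that builds a longer physical line raises an exception. A minimal sketch of the fix is to open with an explicit maximum line size:

```sql
-- open with an explicit max_linesize (the default is only ~1024 bytes;
-- 32767 is the maximum UTL_FILE allows)
File1 := utl_file.fopen( 'ORALOAD', 'volunteerList22.xml', 'w', 32767 );
```

It would also help to include SQLERRM in the exception handlers instead of a fixed message, so the underlying ORA error is not lost.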
spool clob on to a file
Umesh Kasturi, March 07, 2007 - 3:54 am UTC
Tom
I want to display very large data in a single row.
I am concatenating all the values of a column called col1 and want to spool it to a file. But when I try this, it puts the values on separate lines.
How do I overcome that?
Thanks in advance
set long 5000000000
set termout off
alter table emp add col1 varchar2(4000);
update emp set col1 = rpad('*',3999,'*');
CREATE OR REPLACE FUNCTION lg_concatenate_list (p_cursor IN SYS_REFCURSOR)
RETURN CLOB
IS
l_clob CLOB ;
l_temp VARCHAR2(32767);
BEGIN
dbms_lob.createTemporary( l_clob, true );
dbms_lob.open( l_clob, dbms_lob.lob_readwrite );
LOOP
FETCH p_cursor INTO l_temp;
EXIT WHEN p_cursor%NOTFOUND;
dbms_lob.writeappend (l_clob, length(l_temp), l_temp );
END LOOP;
CLOSE p_cursor;
RETURN l_clob;
END;
/
connect scott/tiger
spool d:\temp\concat.csv
select lg_concatenate_list(cursor (select col1 from emp) )l from dual;
spool off
March 07, 2007 - 10:23 am UTC
SQL> set linesize N
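For reference, a combination of SQL*Plus settings that usually controls wrapping when spooling a CLOB on one line (a sketch; exact ceilings vary by SQL*Plus version, and 32767 is the LINESIZE maximum in older releases):

```sql
set long 2000000000        -- fetch the whole CLOB
set longchunksize 32767    -- fetch it in large pieces
set linesize 32767         -- widest physical line SQL*Plus allows
set trimspool on           -- strip trailing blanks from spooled lines
set pagesize 0             -- no headings or page breaks
set feedback off

spool d:\temp\concat.csv
select lg_concatenate_list( cursor( select col1 from emp ) ) l from dual;
spool off
```

If the concatenated value exceeds the LINESIZE ceiling, it will still wrap; at that point a UTL_FILE or host-language extract is the way out.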
Adds carriage return / separate lines
umesh_kasturi, March 09, 2007 - 5:58 am UTC
********************************************************************************
********************************************************************************
********************************************************************************
I did try with set lines 200.
Here is a copy-paste of my output. Actually I see that it appends a carriage return at the end. I did try to remove it by putting in "if ... end if" logic, but it still persists.
How do I overcome that and print it on one single line when I spool?
March 09, 2007 - 11:14 am UTC
sorry - not making sense to me
extracting data to a flat file
Tarang, March 09, 2007 - 3:20 pm UTC
This is what I use to extract data to a flat file.
Hope that helps.
-----------------------------
create or replace procedure datapull(isql in varchar2, ifilename in out varchar2, orowsexported out number)
is
t_sqlstatement varchar2(4000);
t_sourcecursorid number;
t_columncount integer :=0;
t_columntable dbms_sql.desc_tab;
rc number;
--
t_outfileid utl_file.file_type;
t_outfilename varchar2(100);
t_job_id number;
--
t_runid number :=0;
t_errormsg varchar2(4000);
t_exportedrows number := 0;
--
x number := 0;
tmp varchar2(32767);
data varchar2(32767);
clob_data clob;
--
begin
begin
t_sqlstatement := replace(replace(trim(isql),chr(13),' '),chr(10),' ');
t_sourcecursorid := dbms_sql.open_cursor;
dbms_sql.parse(t_sourcecursorid,t_sqlstatement,dbms_sql.native);
dbms_sql.describe_columns(t_sourcecursorid,t_columncount,t_columntable);
exception
when others then
rollback;
t_errormsg := 'error parsing source cursor:'|| to_char(sqlcode) || ':' || sqlerrm;
dbms_output.put_line(t_errormsg);
return;
end;
--
x := nvl(length(trim(ifilename)),0);
if x > 0 then
t_outfilename:= trim(ifilename)||'.txt';
else
t_errormsg := 'error outputfilename not specified:'|| to_char(sqlcode) || ':' || sqlerrm;
dbms_output.put_line(t_errormsg);
return;
end if;
--
begin
t_outfileid := utl_file.fopen('data_dir',t_outfilename,'w',32767);
dbms_output.put_line('out file open: ' || t_outfilename);
--
for x in 1 .. t_columncount
loop
dbms_output.put_line(t_columntable(x).col_name || ' is type:' || t_columntable(x).col_type);
if t_columntable(x).col_type = 112 then -- is clob
dbms_sql.define_column(t_sourcecursorid,x,clob_data);
else
dbms_sql.define_column(t_sourcecursorid,x,data,10000);
end if;
if x = 1 then
tmp := upper(t_columntable(x).col_name);
else
tmp := chr(9) || upper(t_columntable(x).col_name);
end if;
utl_file.put(t_outfileid,tmp);
end loop;
--
rc := dbms_sql.execute(t_sourcecursorid);
--
loop
exit when dbms_sql.fetch_rows(t_sourcecursorid) = 0;
utl_file.put_line(t_outfileid,null);
for x in 1 .. t_columncount
loop
if t_columntable(x).col_type = 112 then -- is clob
-- fetch the clob value first, then check its length
dbms_sql.column_value(t_sourcecursorid,x,clob_data);
if dbms_lob.getlength(clob_data) > 32700 then
t_errormsg := 'error clob too big, length: ' || dbms_lob.getlength(clob_data);
dbms_output.put_line(t_errormsg);
return;
end if;
data := dbms_lob.substr(clob_data,32700,1);
else
dbms_sql.column_value(t_sourcecursorid,x,data);
end if;
--
data := trim(data);
if x = 1 then
tmp := data;
else
tmp := chr(9) || data;
end if;
utl_file.put(t_outfileid,tmp);
end loop;
t_exportedrows := t_exportedrows + 1;
end loop;
--
exception
when others then
rollback;
t_errormsg := 'error exporting data:'|| to_char(sqlcode) || ':' || sqlerrm;
dbms_output.put_line(t_errormsg);
return;
end;
--
dbms_sql.close_cursor(t_sourcecursorid);
utl_file.fclose(t_outfileid);
dbms_output.put_line('rows exported:' || t_exportedrows);
t_errormsg := 'data pull job completed';
ifilename := t_outfilename;
orowsexported := t_exportedrows;
exception
when others then
rollback;
t_errormsg := 'error :'|| to_char(sqlcode) || ':' || sqlerrm;
dbms_output.put_line(t_errormsg);
raise_application_error(-20001,'error datapull: '||to_char(sqlcode)||':'||sqlerrm);
end datapull;
/
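A hedged usage sketch for the procedure above, assuming the `data_dir` directory object it references exists (the emp table is illustrative):

```sql
declare
    -- the procedure appends '.txt' and returns the final name via ifilename
    l_file varchar2(100) := 'emp_extract';
    l_rows number;
begin
    datapull( 'select * from emp', l_file, l_rows );
    dbms_output.put_line( l_file || ': ' || l_rows || ' rows exported' );
end;
/
```

Note the second and third parameters are OUT parameters, so they must be variables, not literals.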
Special Characters
A reader, April 13, 2007 - 6:25 am UTC
Hi Tom,
I'm using this dump_csv function for extracting data. How do we escape special characters (like ') in the SQL query below? I have a couple of queries for which I need to extract data to a flat file, and the data is huge. Please can you advise on this?
create or replace procedure test_dump_csv
as
l_rows number;
begin
l_rows := dump_csv( 'select *
from all_objects
where owner='CTXSYS',
',', 'd:\temp\output', 'test.csv' );
end;
/
Warning: Procedure created with compilation errors.
SQL> show errors
Errors for PROCEDURE TEST_DUMP_CSV:
LINE/COL ERROR
-------- -----------------------------------------------------------------
8/41 PLS-00103: Encountered the symbol "CTXSYS" when expecting one of
the following:
. ( ) , * @ % & | = - + < / > at in is mod not range rem =>
.. <an exponent (**)> <> or != or ~= >= <= <> and or like as
between from using ||
The symbol ". was inserted before "CTXSYS" to continue.
Many Thanks
April 13, 2007 - 2:08 pm UTC
to get a ' in a character string literal, you use ''
ops$tkyte%ORA9IR2> select 'how''s this going to work' from dual;
'HOW''STHISGOINGTOWORK'
------------------------
how's this going to work
or, new in 10g, q'| .... |'
ops$tkyte%ORA10GR2> select q'|how's this going to work|' from dual;
Q'|HOW'STHISGOINGTOWORK|
------------------------
how's this going to work
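The quote-doubling rule is mechanical, so if you build the query string in a client program before handing it to dump_csv, you can apply it there. A minimal Python sketch (the helper name sql_literal is made up for illustration):

```python
def sql_literal(s):
    # Oracle escapes a single quote inside a string literal by doubling it
    return "'" + s.replace("'", "''") + "'"

# building the predicate from the earlier example
query = "select * from all_objects where owner = " + sql_literal("CTXSYS")
```

The same doubling is what the q'|...|' syntax lets you avoid typing by hand in 10g and later.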
Or use chr(39) ?
Paul, April 16, 2007 - 6:09 am UTC
Using chr sometimes makes things easier to read, like when you unstring individual words from a comma separated string. Anyway, on a similar theme for those of us without 10G:
1 select 'how'
2 ||
3 chr(39)
4 ||
5 's this going to work'
6* from dual
PS@dev> /
'HOW'||CHR(39)||'STHISGO
------------------------
how's this going to work
1 row selected.
PS@dev>
A reader, April 16, 2007 - 12:20 pm UTC
Many thanks Tom
Your inputs are always very useful.
Alexander the ok, May 01, 2007 - 3:28 pm UTC
Tom,
See if this makes sense to you. I want to allow developers to use your cool dump_csv function. I granted them read/write to the directory object.
The dir permissions look like this:
SQL> !ls -la /usr/apps/corpis/planit/ftp/utl_files
total 6120
drwxrwxrwx 2 n0002220 users 256 May 1 14:09 .
drwxrwxr-x 4 n0009888 users 4096 May 1 12:50 ..
-rw-r--r-- 1 oracle dba 923 May 1 13:53 test.dat
-rw-r----- 1 oracle dba 3121285 May 1 14:09 w_test.out
What I don't understand is this: when I run dump_csv (as the oracle software account), it creates the file with world read; when they run it, it does not. If utl_file is writing to the directory using the oracle software account, why isn't it the same for both files?
May 01, 2007 - 4:20 pm UTC
when you log into oracle, do you use sqlnet or do you use a direct connection and do they do it differently?
Alexander the ok, May 01, 2007 - 4:32 pm UTC
I use sqlplus oracle/pw
they either connect from their pc or sqlplus user/pwd@db
May 01, 2007 - 9:00 pm UTC
ok, their umask is inherited from the listeners environment.
your umask is inherited from your environment..
that is, their dedicated server, the process that makes the file, is spawned by the listener
your dedicated server is spawned by you, your environment.
I'll bet your default umasks are different.
Alexander, May 02, 2007 - 9:09 am UTC
If that's the case, what can I do about it? Do I have to get a SA to change theirs? I typed in umask and got back 022. Don't know what that means though.
May 02, 2007 - 9:14 am UTC
look at the umask of the thing that starts the listener.
$ man umask
will be useful to understand what it is.
You will likely find the umask of the environment that creates the listener is 0137:
[tkyte@desktop 10gNew]$ umask
0137
[tkyte@desktop 10gNew]$ umask -S
u=rw,g=r,o=
Yours:
[tkyte@desktop 10gNew]$ umask 022
[tkyte@desktop 10gNew]$ umask -S
u=rwx,g=rx,o=rx
gave read to all.
Alexander, May 02, 2007 - 11:19 am UTC
I don't think I understand quite what you are saying. Running umask 022 will give everyone the ability to read files created by oracle? I ran it but they still can't get at their files.
Or I need to give the listener the 022? Thanks for the help, I struggle with Unix....
May 02, 2007 - 5:34 pm UTC
the environment the listener is running it - its umask is "X"
that umask of X, it is giving the file the rw r - you are seeing.
You need to make the environment of the listener, the thing that starts the listener have a umask of Y, where Y has the file permissions you desire.
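The umask arithmetic itself is simple: a newly created data file gets the requested mode (typically 0666) with the umask's bits cleared. A hedged Python sketch of that calculation, using the two umasks from this exchange:

```python
def file_mode(umask, requested=0o666):
    # a new file's permissions are the requested mode with the umask bits stripped
    return requested & ~umask

print(oct(file_mode(0o022)))  # 0o644 -> rw-r--r--, world-readable
print(oct(file_mode(0o137)))  # 0o640 -> rw-r-----, no access for "other"
```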
Alexander, May 02, 2007 - 5:53 pm UTC
The part that does not compute to me is the "thing that starts the listener". I don't know how to identify this, or change its umask. Is it a process? A user? tia
May 02, 2007 - 6:21 pm UTC
do you have a system administrator with you that does know your OS and how it works?
yes, this is the "thing that starts the listener", the "thing logged in and it's environment when you start the listener"
Alexander, May 02, 2007 - 7:53 pm UTC
We use the oracle OS account to do everything, including starting/stopping listeners if needed. That has 022, so I'm still confused, but I'll try to get an SA's input. I have a feeling they're not going to know anything about Oracle listeners though.
Wilson, May 15, 2007 - 8:15 am UTC
I tried to run this script in Oracle XE and I get an error; kindly assist. I have utl_file_dir in the init.ora and it shows that it is set:
SQL> ed
Wrote file afiedt.buf
1 create or replace
2 function dump_csv( p_query in varchar2,
3 p_separator in varchar2 default ',',
4 p_dir in varchar2 ,
5 p_filename in varchar2 )
6 return number
7 is
8 l_output utl_file.file_type;
9 l_theCursor integer default dbms_sql.open_cursor;
10 l_columnValue varchar2(2000);
11 l_status integer;
12 l_colCnt number default 0;
13 l_separator varchar2(10) default '';
14 l_cnt number default 0;
15 begin
16 l_output := utl_file.fopen( p_dir, p_filename, 'w' );
17 dbms_sql.parse( l_theCursor, p_query,
18 dbms_sql.native );
19 for i in 1 .. 255 loop
20 begin
21 dbms_sql.define_column( l_theCursor, i,
22 l_columnValue, 2000 );
23 l_colCnt := i;
24 exception
25 when others then
26 if ( sqlcode = -1007 ) then exit;
27 else
28 raise;
29 end if;
30 end;
31 end loop;
32 dbms_sql.define_column( l_theCursor, 1,
33 l_columnValue, 2000 );
34 l_status := dbms_sql.execute(l_theCursor);
35 loop
36 exit when ( dbms_sql.fetch_rows(l_theCursor) <= 0 );
37 l_separator := '';
38 for i in 1 .. l_colCnt loop
39 dbms_sql.column_value( l_theCursor, i,
40 l_columnValue );
41 utl_file.put( l_output,
42 l_separator || l_columnValue );
43 l_separator := p_separator;
44 end loop;
45 utl_file.new_line( l_output );
46 l_cnt := l_cnt+1;
47 end loop;
48 dbms_sql.close_cursor(l_theCursor);
49 utl_file.fclose( l_output );
50 return l_cnt;
51* end dump_csv;
52 /
Warning: Function created with compilation errors.
SQL> /
Warning: Function created with compilation errors.
SQL> show error
Errors for FUNCTION DUMP_CSV:
LINE/COL ERROR
-------- -----------------------------------------------------------------
7/25 PLS-00201: identifier 'UTL_FILE' must be declared
7/25 PL/SQL: Item ignored
15/8 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
15/8 PL/SQL: Statement ignored
40/16 PL/SQL: Statement ignored
40/30 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
44/12 PL/SQL: Statement ignored
44/31 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
48/8 PL/SQL: Statement ignored
48/25 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
May 15, 2007 - 8:54 pm UTC
grant yourself execute on utl_file.
it is not granted by default
A reader, May 17, 2007 - 10:48 am UTC
Tom,
Is it safe to use 'create directory' instead of setting utl_file_dir? I mean, if we grant read and write to the user for which this extract is applicable.
Thank You
May 17, 2007 - 11:30 am UTC
it is safer to use a directory, more granular security is available.
you should not use utl_file_dir in current releases.
no column headers
A reader, July 20, 2007 - 9:32 am UTC
Tom,
I'm trying to dump the data in csv format using your dump_csv function. The only problem is that I'm not able to get the column headers in the output. Your assistance is required.
Thanks in advance.
July 20, 2007 - 4:16 pm UTC
Ctrl-F (search) for "headings" on this page.
no column headers
A reader, July 20, 2007 - 10:11 am UTC
Tom,
I'm trying to dump the data in csv format using your dump_csv function. The only problem is that I'm not able to get the column headers in the output. For example:
select owner, object_name, object_type from user_objects ;
How do i get the headers OWNER OBJECT_NAME OBJECT_TYPE
Your assistance is required.
Thanks in advance.
no column headers
A reader, August 03, 2007 - 11:48 am UTC
Tom,
I managed to get it temporarily through a small change. I just added a line before the select, as:
select 'OWNER' Owner, 'OBJECT_NAME' Object_name, 'OBJECT_TYPE' Object_type from dual
union
select ....
So, that I get the first line as the
Owner,Object_name,Object_type...
Thanks Tom
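The UNION trick above is the server-side way to get a header line; client-side tools do the same thing by writing one header row before the data rows. A small Python sketch of that pattern (column names taken from the example, the data row invented):

```python
import csv
import io

def write_csv(headers, rows):
    # header line first -- the same idea as the UNION with literals
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(headers)
    writer.writerows(rows)
    return buf.getvalue()

out = write_csv(['OWNER', 'OBJECT_NAME', 'OBJECT_TYPE'],
                [('SCOTT', 'EMP', 'TABLE')])
```

One caveat with the UNION approach itself: UNION removes duplicates and gives no guaranteed row order, so the header lands first only by luck of the sort; UNION ALL with an explicit ordering column is safer.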
unload to csv file
sunitha, August 28, 2007 - 7:18 am UTC
Hi,
I am new to Oracle.
I have used the function from this review:
create or replace function dump_csv( p_query in varchar2,
p_separator in varchar2
default ',',
p_dir in varchar2 ,
p_filename in varchar2 )
return number
AUTHID CURRENT_USER
is
l_output utl_file.file_type;
l_theCursor integer default dbms_sql.open_cursor;
l_columnValue varchar2(2000);
l_status integer;
l_colCnt number default 0;
l_separator varchar2(10) default '';
l_cnt number default 0;
begin
l_output := utl_file.fopen( p_dir, p_filename, 'w' );
dbms_sql.parse( l_theCursor, p_query, dbms_sql.native );
for i in 1 .. 255 loop
begin
dbms_sql.define_column( l_theCursor, i,
l_columnValue, 2000 );
l_colCnt := i;
exception
when others then
if ( sqlcode = -1007 ) then exit;
else
raise;
end if;
end;
end loop;
dbms_sql.define_column( l_theCursor, 1, l_columnValue,
2000 );
l_status := dbms_sql.execute(l_theCursor);
loop
exit when ( dbms_sql.fetch_rows(l_theCursor) <= 0 );
l_separator := '';
for i in 1 .. l_colCnt loop
dbms_sql.column_value( l_theCursor, i,
l_columnValue );
utl_file.put( l_output, l_separator ||
l_columnValue );
l_separator := p_separator;
end loop;
utl_file.new_line( l_output );
l_cnt := l_cnt+1;
end loop;
dbms_sql.close_cursor(l_theCursor);
utl_file.fclose( l_output );
return l_cnt;
end dump_csv;
/
Function Created
create or replace procedure test_dump_csv
as
l_rows number;
begin
l_rows := dump_csv( 'select *
from sysadm.centre',
',', 'c:\test', 'test1.dat' );
end;
/
Procedure Created
But the test folder in C drive is empty(test1.dat not created)
September 04, 2007 - 12:42 pm UTC
you do understand that plsql runs in the server, on the server and the file would be created on the DATABASE SERVER right?
TEXT_IO
suni, September 05, 2007 - 6:08 am UTC
Hi Tom,
I made the dump_csv function into a procedure and created the directory utl_file_dir as 'c:\temp\utl_file'; it's working fine.
In the review you said that we have to use TEXT_IO to save on the workstation,
so I replaced utl_file with text_io,
but I got the warning: Warning: Procedure created with compilation errors.
the data in error lines
8 l_output TEXT_IO.FILE_TYPE;
16 l_output := TEXT_IO.FOPEN( p_dir, p_filename, 'w' );
40 TEXT_IO.PUT( l_output, l_separator ||
41 '"'||l_columnValue||'"' );
44 TEXT_IO.NEW_LINE( l_output );
48 TEXT_IO.FCLOSE( l_output );
SQL> SHOW ERRORS
Errors for PROCEDURE DUMP_CSV:
LINE/COL ERROR
-------- -----------------------------------------------------------------
8/21 PL/SQL: Item ignored
8/21 PLS-00201: identifier 'TEXT_IO.FILE_TYPE' must be declared
16/5 PL/SQL: Statement ignored
16/5 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
40/13 PL/SQL: Statement ignored
40/26 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
44/9 PL/SQL: Statement ignored
LINE/COL ERROR
-------- -----------------------------------------------------------------
44/27 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
48/5 PL/SQL: Statement ignored
48/21 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
September 05, 2007 - 4:14 pm UTC
text_io is for oracle forms
you cannot use plsql running IN THE SERVER to write to your disk
you can use plsql running in a forms server to write to the machine the form is running on.
you have to have a client program running on a client to write to that client (otherwise you have a "virus" like thing!)
Problem with arabic data when using dump_csv
mohamad shehadeh, September 28, 2007 - 8:29 am UTC
Dear sir,
I got unreadable text while using dump_csv with Arabic data.
Can you please help me use it with Arabic data?
September 28, 2007 - 5:43 pm UTC
back up and just write a tiny file with utl_file - and if you get "garbage", you have the perfect test case for support.
csv
A reader, January 22, 2008 - 5:13 pm UTC
Tom:
Is this creating a file on the server? I could not find it under /tmp.
Can I have it create one on the client machine?
create or replace procedure test_dump_csv
as
l_rows number;
begin
l_rows := dump_csv( 'select *
from all_users
where rownum < 25',
',', '/tmp', 'test.dat' );
end;
/
January 22, 2008 - 6:58 pm UTC
http://asktom.oracle.com/tkyte/flat/index.html sqlplus and SPOOL works for a client dump
utl_file can ONLY write to the server, utl_file is running in plsql,plsql runs on the server.
if utl_file could write to YOUR file system - it would be deemed (rightly so) a virus. Something running on the database cannot write to your filesystem (unless of course you mounted your filesystem read/write to the database server...)
csv
A reader, January 22, 2008 - 9:38 pm UTC
Tom:
I was testing with a simple text file and Excel read it fine:
A,100,200,300
B,200,400,500
Can't I just create any query, stick ||','|| between the columns, save it to my client machine and then open it in Excel?
like
spool emp.txt
select empid||','||last_name||','||first_name from emp;
January 23, 2008 - 7:47 am UTC
of course you can.
see the *very first link* in the original answer - it has a script to do just that.
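One caveat worth noting with the ||','|| approach: if a value itself contains a comma, Excel will see extra columns. A CSV writer quotes such fields. A short Python illustration of the difference (sample rows invented):

```python
import csv
import io

rows = [('A', 'Smith, John', 100),   # note the embedded comma
        ('B', 'Jones', 200)]

# the ||','|| style: plain concatenation, embedded commas shift the columns
naive = '\n'.join(','.join(str(col) for col in row) for row in rows)

# a CSV writer quotes any field containing the delimiter
buf = io.StringIO()
csv.writer(buf).writerows(rows)
quoted = buf.getvalue()
```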
spool inside anonymous block
Vishal, March 01, 2008 - 4:31 am UTC
Hi Tom,
I have a script that creates some text files based on cursor values. Due to a restriction I can't use UTL_FILE, so I am spooling all the files. But SPOOL is a SQL*Plus command, not PL/SQL, so it does not work inside this block.
Here is the SQL script:
Declare
posicio number;
nivell number:=0;
name varchar2(30);
name2 varchar2(30);
CURSOR crea IS
SELECT * FROM STRUCTURE
WHERE (parent_part IS NULL)
and (product_line = '30'
or product_line = '7T'
or product_line = 'LF')
and (substr(component_part,1,1) <> 'U')
and (length(component_part) = 6
or substr(component_part,1,2) = 'R-'
or substr(component_part,1,7) = 'REFURB-'
or substr(component_part,7,4) = '-000'
or product_line <> '30'
or
((length(component_part) = 11) and
component_part = 'Q6702-64001' or
component_part = 'Q6703-64001')
or
((substr(component_part,7,2) = '#A' or
substr(component_part,7,2) = '#B' or
substr(component_part,7,2) = '#M' or
substr(component_part,7,2) = '#U' or
substr(component_part,7,2) = '#2' or
substr(component_part,7,2) = '#4') and
(substr(component_part,1,6) = 'C7769E' or
substr(component_part,1,6) = 'C7769F' or
substr(component_part,1,6) = 'C7769G' or
substr(component_part,1,6) = 'C7770E' or
substr(component_part,1,6) = 'C7770F' or
substr(component_part,1,6) = 'C7770G' or
substr(component_part,1,6) = 'C7790D' or
substr(component_part,1,6) = 'C7790E' or
substr(component_part,1,6) = 'C7790F' or
substr(component_part,1,6) = 'C7791C' or
substr(component_part,1,6) = 'C7791D' or
substr(component_part,1,6) = 'C7791E' or
substr(component_part,1,6) = 'C7791K' or
substr(component_part,1,6) = 'C7796C' or
substr(component_part,1,6) = 'C7796D' or
substr(component_part,1,6) = 'C7796E' or
substr(component_part,1,6) = 'CG710A' or
substr(component_part,1,6) = 'Q1251E' or
substr(component_part,1,6) = 'Q1253E' or
substr(component_part,1,6) = 'Q1271A' or
substr(component_part,1,6) = 'Q1272A' or
substr(component_part,1,6) = 'Q1273A' or
substr(component_part,1,6) = 'Q1274A' or
substr(component_part,1,6) = 'Q1276A' or
substr(component_part,1,6) = 'Q1277A' or
substr(component_part,1,6) = 'Q5669A' or
substr(component_part,1,6) = 'Q5669B' or
substr(component_part,1,6) = 'Q5670A' or
substr(component_part,1,6) = 'Q5671A' or
substr(component_part,1,6) = 'Q5672A' or
substr(component_part,1,6) = 'Q5677A' or
substr(component_part,1,6) = 'Q5677B' or
substr(component_part,1,6) = 'Q6648A' or
substr(component_part,1,6) = 'Q6651A' or
substr(component_part,1,6) = 'Q6652A' or
substr(component_part,1,6) = 'Q6653A' or
substr(component_part,1,6) = 'Q6654A' or
substr(component_part,1,6) = 'Q6655A' or
substr(component_part,1,6) = 'Q6656A' or
substr(component_part,1,6) = 'Q6656B' or
substr(component_part,1,6) = 'Q6656C' or
substr(component_part,1,6) = 'Q6659A' or
substr(component_part,1,6) = 'Q6660A' or
substr(component_part,1,6) = 'Q6665A' or
substr(component_part,1,6) = 'Q6666A' or
substr(component_part,1,6) = 'Q6667A' or
substr(component_part,1,6) = 'Q6668A' or
substr(component_part,1,6) = 'Q6668B' or
substr(component_part,1,6) = 'Q6668C' or
substr(component_part,1,6) = 'Q6668D' or
substr(component_part,1,6) = 'Q6668E' or
substr(component_part,1,6) = 'Q6668F' or
substr(component_part,1,6) = 'Q6670A' or
substr(component_part,1,6) = 'Q6670B' or
substr(component_part,1,6) = 'Q6671A' or
substr(component_part,1,6) = 'Q6673A' or
substr(component_part,1,6) = 'Q6673B' or
substr(component_part,1,6) = 'Q6675A' or
substr(component_part,1,6) = 'Q6675B' or
substr(component_part,1,6) = 'Q6677A' or
substr(component_part,1,6) = 'Q6683A' or
substr(component_part,1,6) = 'Q6684A' or
substr(component_part,1,6) = 'Q6685A' or
substr(component_part,1,6) = 'Q6686A' or
substr(component_part,1,6) = 'Q6687A' or
substr(component_part,1,6) = 'Q6688A' or
substr(component_part,1,6) = 'Q6693A' or
substr(component_part,1,6) = 'Q6694A' or
substr(component_part,1,6) = 'Q6711A' or
substr(component_part,1,6) = 'Q6712A' or
substr(component_part,1,6) = 'Q6713A'))
or
((substr(component_part,7,2) = '-0' or
substr(component_part,7,2) = '-1' or
substr(component_part,7,2) = '-2') and
(substr(component_part,1,6) = 'C7769E' or
substr(component_part,1,6) = 'C7769F' or
substr(component_part,1,6) = 'C7770E' or
substr(component_part,1,6) = 'C7770F' or
substr(component_part,1,6) = 'C7790D' or
substr(component_part,1,6) = 'C7791C' or
substr(component_part,1,6) = 'C7791D' or
substr(component_part,1,6) = 'C7791K' or
substr(component_part,1,6) = 'C7796C' or
substr(component_part,1,6) = 'C7796D' or
substr(component_part,1,6) = 'C7796E' or
substr(component_part,1,6) = 'Q1271A' or
substr(component_part,1,6) = 'Q1273A' or
substr(component_part,1,6) = 'Q1277A' or
substr(component_part,1,6) = 'Q5669A' or
substr(component_part,1,6) = 'Q6655A' or
substr(component_part,1,6) = 'Q6651A' or
substr(component_part,1,6) = 'Q6652A' or
substr(component_part,1,6) = 'Q6653A' or
substr(component_part,1,6) = 'Q6654A' or
substr(component_part,1,6) = 'Q6656A' or
substr(component_part,1,6) = 'Q6656B' or
substr(component_part,1,6) = 'Q6659A' or
substr(component_part,1,6) = 'Q6675A' or
substr(component_part,1,6) = 'Q6677A' or
substr(component_part,1,6) = 'Q6683A' or
substr(component_part,1,6) = 'Q6685A' or
substr(component_part,1,6) = 'Q6687A' or
substr(component_part,1,6) = 'Q6711A' or
substr(component_part,1,6) = 'Q6712A')))
ORDER BY component_part;
CURSOR crea2 IS
SELECT distinct rtrim(a.component_part) component_part2,
'zz' product_line2,
b.description,
b.qty_per,
b.document_resp,
b.phantom_flag,
b.part_class,
b.explode_code
FROM GAMMA_INSERT a, STRUCTURE b
WHERE rtrim(a.component_part) = b.component_part
ORDER BY rtrim(a.component_part);
BEGIN
FOR cr IN crea LOOP
name:=cr.component_part;
loop
posicio:=instr(name,'/');
if posicio <> 0 then
name2:=substr(name,1,posicio-1)||'-'||
substr(name, posicio+1, LENGTH(name));
name:=name2;
else
exit;
end if;
end loop;
-- Here I want to create a txt file based on the cursor:
spool c:\'||cr.product_line||name||'.txt';---->(error giving)
dbms_output.put_line('Part List for '||cr.component_part||' '||cr.description);
dbms_output.put_line(' ');
--dbms_output.put_line('BCD BOM snapshot date (Barcelona time) : '|| to_char(sysdate,'YYYY/MM/DD HH24:MI'));
dbms_output.put_line(' ');
dbms_output.put_line('Level Part-number Description Quantity DR Revision');
dbms_output.put_line('----- ----------- ------------- -------- -- --------');
dbms_output.put_line(' ');
dbms_output.put_line(lpad(to_char(nivell),2,'0')||' '||cr.component_part||' '||cr.description||' '||cr.qty_per||' '||cr.document_resp);
END LOOP;
End;
Or do you have any other idea how we can implement this?
Vishal
April 03, 2008 - 7:59 pm UTC
(do you know about "in" where substr(a,1,2) in ( 'aa', 'bb', .... ))
you cannot of course interact with sqlplus from plsql since plsql is submitted to the server and runs on the server.
You'll really want to write a program in order to do your custom data extraction, sqlplus is a rather simple command line tool to talk to Oracle, it really isn't for programming.
how to generate a text file
hArOld kAwAii, April 03, 2008 - 3:16 am UTC
hi tom,
I am new to Oracle 10g. I would like to ask for a script, from top to bottom, on how to generate this sample text to a text file.
Can I view the generated text?
Here is the text, line by line:
Your application status is : DONE
VECO|
N
09164324030
N,Y
~
April 03, 2008 - 7:58 pm UTC
what language are you going to be programming in is the very first question that pops into my mind.
the second would be "why would you need to be told how to produce a text file in that language" - because I would presume you program in it.
re: how to generate a text file
hArOld kAwAii, April 03, 2008 - 9:16 pm UTC
I just don't know how to use Oracle.
I am a beginner/novice in Oracle, and my boss wants me to make a job/scheduler that would generate a text file, saved to a particular path outside Oracle, probably in C:\TXTCONNECT.
Thanks,
sorry if my question is not as direct as you want it to be.
April 04, 2008 - 9:55 am UTC
forget about oracle for a second, on this machine you are going to create this "job scheduler" - WHAT PROGRAMMING LANGUAGE are you planning on coding in.
eg: what *tool* are you going to use to program all of this.
how to generate a text file
hArOld kAwAii, April 03, 2008 - 9:17 pm UTC
By the way, I am using PL/SQL.
thanks,
output file
hArOld kAwAii, April 03, 2008 - 10:04 pm UTC
Hi Tom,
I would just rephrase my question:
1. Is Oracle PL/SQL capable of saving a .txt file to the desktop?
Sorry for the past questions I've posted.
Thanks for your consideration.
April 04, 2008 - 10:22 am UTC
no, plsql can write a file on the SERVER.
plsql runs in the server.
That said, you need "something" on the desktop to
a) connect to oracle
b) invoke the plsql
Now, once you tell me what that is, we can start talking about HOW to do this.
If you just need to extract a CSV file on a client, and you have sqlplus installed, you can just put into a file:
spool data.txt
select a || ',' || b || ',' || c ...
from t;
spool off
exit
and
sqlplus username/password@database @thatfile.sql
it'll create a file data.txt in your current directory
Tom replied cryptically with a question.
Allen, May 14, 2008 - 3:32 pm UTC
Cryptic to me anyway. The question PRS asked wasn't answered. We recently split servers and I had to stop using EXTERNAL defined tables. No one here could figure out how to use that or UTL_FILE in that configuration. Our DBA and I have been searching and searching (or googling and googling). I've seen others ask on various sites and get NO ANSWER. You had no answer either but rather, and excuse me for saying it, seemed to deflect the topic. It is a serious problem for us.
PRS asked:
Can I use directory object to create file using UTL_FILE on a different server(Application
Server) than the database server?
Tom replied:
only if the database server can SEE that directly itself
otherwise, it would be very virus like, wouldn't it. How would you like it if files just started
appearing on your machine magically???
May 16, 2008 - 10:51 am UTC
Allen - sorry, when I wrote
only if the database server can SEE that directly itself
I meant to say "directory", not "directly"
IF the database server cannot see the file system, then code running IN THAT DATABASE SERVER cannot see the file system. The file system does not exist.
pretend for a moment it could. So, my database - asktom on asktom.oracle.com should be able to see your file system?
If it could, think about it for a moment, would that not be very virus like.
I did not deflect anything, I said "of course NOT"
Think about this for a moment, how could it work - if the machine the database software is on has no access to this file system, how could a process that is part of the database software see this file system?
You have to make the file system available to the database server if you want the database server to have access to the file system.
No deflection
No obfuscation
The answer is: make the file system available to the database server.
If you say "we cannot, we split the machines"
Then all we can say is "sorry, we cannot do magic, you need to find a way to get the files available to the database server OR find another way entirely to do it"
Oracle unhandled userdefined exception while executing utl_files
sudhakar, June 30, 2008 - 7:50 am UTC
Dear Tom,
I am facing a problem while executing UTL_FILE.
A snapshot of the error is given below:
ERROR at line 1:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.UTL_FILE", line 120
ORA-06512: at "SYS.UTL_FILE", line 204
ORA-06512: at line 4
The source code is
create or replace procedure utl_file_test_write (
path in varchar2,
filename in varchar2,
firstline in varchar2,
secondline in varchar2)
is
output_file utl_file.file_type;
begin
output_file := utl_file.fopen (path,filename, 'W');
utl_file.put_line (output_file, firstline);
utl_file.put_line (output_file, secondline);
utl_file.fclose(output_file);
--exception
-- when others then null;
end;
July 06, 2008 - 6:43 pm UTC
okley dokley, what is your line 4 then, that would be pretty "telling" no?
Oh, but wait, your line 4 in that example is part of the parameter list...
what is YOUR line 4.
Flat file reading
Sanjeev, August 06, 2008 - 7:07 am UTC
Hi Tom,
I have a flat file which contains a single record delimited by ','.
Now I want to create an Oracle table using this record.
E.g., the flat file record looks like:
Brand,Model,2006/01(Unit),2006/02(Unit),2006/03(Unit) and this goes on for 36 months, i.e. up to 2008/12(Unit).
Now my table needs to be like:
Table_test, having columns (Brand varchar2(25), Model varchar2(50), 2006/01(Unit) number, and so on for 36 months).
Please remember that the flat file columns are not fixed: this file contains 36 months as shown, but in the future we may get a file containing 24 months. So how do we create an Oracle table from such a file's values?
Thanks in advance.
August 06, 2008 - 9:01 am UTC
then you would have to
a) read flat file
b) parse flat file
c) determine how many columns you have
d) create a table
e) load that table using the data you just parsed.
no magic.
alternatively, you could set up a table
t( brand, model, c1, c2, c3, ..... c998 );
and use an external table with "trailing nullcols" - so that we'd fill in as many columns as we see in the file - leaving the rest NULL.
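The "trailing nullcols" idea — populate as many columns as the file supplies and leave the rest NULL — can be sketched outside the database too. A hedged Python sketch, assuming a generic table wide enough for the longest expected file (MAX_COLS here is a small stand-in for the c1..c998 columns):

```python
MAX_COLS = 10  # stand-in for the c1..c998 columns of the generic table

def to_row(line, sep=','):
    # split the line, then pad with None so every row is MAX_COLS wide,
    # mimicking TRAILING NULLCOLS: missing trailing columns become NULL
    fields = line.rstrip('\n').split(sep)
    return fields + [None] * (MAX_COLS - len(fields))

row = to_row('BrandX,Model1,10,20,30')  # a 5-column line in a 10-column table
```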
Reading flat file
Sanjeev, August 07, 2008 - 3:53 am UTC
Hi Tom,
Thanks for your useful tips.
I have written the procedure below, which is failing while reading the file...
Procedure:
CREATE OR REPLACE PROCEDURE PROC_LOAD IS
v_file UTL_FILE.file_type;
err_file UTL_FILE.file_type;
l_file VARCHAR2 (100) := 'proc_test1.txt';
e_file VARCHAR2 (100) := 'err_file.txt';
v_how VARCHAR2 (2) := 'r';
v_text VARCHAR2 (32767);
v_text_1 VARCHAR2 (32767);
v_size NUMBER := 32767;
V_COUNT NUMBER:=1;
V_INCR NUMBER:=1;
v_decr number;
v_number number;
BEGIN
v_file := UTL_FILE.fopen ('TEST', l_file, v_how,v_size);
err_file := UTL_FILE.fopen ('TEST', e_file, 'w',v_size);
UTL_FILE.get_line (v_file, v_text);
FOR I IN 1..100 LOOP
if i=1 then
V_INCR:=INSTR(V_TEXT,',',1,1);
v_decr:=instr(v_text,',',1,I)-1;
V_TEXT_1:=SUBSTR(V_TEXT,1,V_DECR);
else
V_INCR:=INSTR(V_TEXT,',',1,I-1);
v_decr:=instr(v_text,',',1,I)-1;
v_number:=v_decr-v_incr;
V_TEXT_1:=SUBSTR(V_TEXT,V_INCR+1,v_number);
END IF;
UTL_FILE.put_line (err_file, v_text_1);
DBMS_OUTPUT.PUT_LINE(V_TEXT_1);
END LOOP;
BEGIN
LOOP
BEGIN
UTL_FILE.get_line (v_file, v_text);
EXCEPTION
WHEN NO_DATA_FOUND
THEN
EXIT;
END;
END LOOP;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN
UTL_FILE.put_line (err_file, 'Err : ' || SQLERRM);
END;
UTL_FILE.fclose (v_file); -- Close File
UTL_FILE.fclose (err_file);
EXCEPTION
WHEN UTL_FILE.invalid_path
THEN
DBMS_OUTPUT.put_line ('Invalid Path');
UTL_FILE.fclose (v_file); -- Close File
UTL_FILE.fclose (err_file);
WHEN UTL_FILE.invalid_mode
THEN
DBMS_OUTPUT.put_line ('Invalid Mode');
UTL_FILE.fclose (v_file); -- Close File
UTL_FILE.fclose (err_file);
WHEN UTL_FILE.write_error
THEN
DBMS_OUTPUT.put_line ('Write Error');
UTL_FILE.fclose (v_file); -- Close File
UTL_FILE.fclose (err_file);
WHEN UTL_FILE.invalid_operation
THEN
DBMS_OUTPUT.put_line (SQLCODE || ': ' || SQLERRM);
UTL_FILE.fclose (v_file); -- Close File
UTL_FILE.fclose (err_file);
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line (SQLCODE || ': ' || SQLERRM);
UTL_FILE.fclose (v_file); -- Close File
UTL_FILE.fclose (err_file);
END PROC_LOAD;
Input File:
Category,Brand,Model,Item (JAN),Launch Date,WIDTH,MONITOR SCREEN SIZE,MOVIE,ISO Sensitivity,Short Focal Distance,F Value,Long Focal Distance,OPTICAL ZOOM,GROUP OF OPTICAL ZOOM,DIGITAL ZOOM,MONITOR SCREEN,CCD TOTAL PIXELS,GROUP OF PIXEL TOTAL,WEIGHT,HEIGHT,DEPTH,WATER PROOF,STORAGE MEDIA,POWER SOURCE,MOVIE CODEC,CRADLE,FINDER,SENSOR TYPE,CCD MOVIE PIXELS,GROUP OF DVD EFFECTIVE PIXELS,EFFECTIVE PIXEL,GROUP OF EFFECTIVE PIXEL,MUSIC COMPRESSION FUNCTION TYPE,USB TYPE,CHANGEABLE LENS,WIRELESS COMMUNICATION,IMAGE STABILIZER,PICTBRIDGE,FRAME PER SECOND,MOVIE RECORDING PIXELS,CCD,LCD MONITOR PIXELS,Maximum Photographing Shots,INFRARED,PRINTER BUNDLE,LENS SET,SHUTTER SPEED MAX,SHUTTER SPEED MIN,NUMBER OF HDMI NEW,FACE DETECTION,SHOCKPROOF RECORDING,DUST-PROOF,Total(Unit),2006/01(Unit),2006/02(Unit),2006/03(Unit),2006/04(Unit),2006/05(Unit),2006/06(Unit),2006/07(Unit),2006/08(Unit),2006/09(Unit),2006/10(Unit),2006/11(Unit),2006/12(Unit),2007/01(Unit),2007/02(Unit),2007/03(Unit),2007/04(Unit),2007/05(Unit),2007/06(Unit),2007/07(Unit),2007/08(Unit),2007/09(Unit),2007/10(Unit),2007/11(Unit),2007/12(Unit),Total(Value),2006/01(Value),2006/02(Value),2006/03(Value),2006/04(Value),2006/05(Value),2006/06(Value),2006/07(Value),2006/08(Value),2006/09(Value),2006/10(Value),2006/11(Value),2006/12(Value),2007/01(Value),2007/02(Value),2007/03(Value),2007/04(Value),2007/05(Value),2007/06(Value),2007/07(Value),2007/08(Value),2007/09(Value),2007/10(Value),2007/11(Value),2007/12(Value),Total(ASP),2006/01(ASP),2006/02(ASP),2006/03(ASP),2006/04(ASP),2006/05(ASP),2006/06(ASP),2006/07(ASP),2006/08(ASP),2006/09(ASP),2006/10(ASP),2006/11(ASP),2006/12(ASP),2007/01(ASP),2007/02(ASP),2007/03(ASP),2007/04(ASP),2007/05(ASP),2007/06(ASP),2007/07(ASP),2007/08(ASP),2007/09(ASP),2007/10(ASP),2007/11(ASP),2007/12(ASP)
Output:
12:59:13 SQL> exec proc_load;
-29283: ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line
475
ORA-29283: invalid file operation
PL/SQL procedure successfully completed.
Mainly, in the above procedure I am trying to create a pivoted file from the above source file, like:
Brand
Level
2006/01(Unit)
2006/02(Unit)
.
.
.
2007/12(ASP)
As I mentioned in my earlier message, the date values are not fixed, and I need to use these dates to create a pivoted table.
Thanks
UTL_FILE problem
Marcus, August 08, 2008 - 2:01 am UTC
Hello Sanjeev,
can you access the files at all? Is there a directory named 'Test'? Maybe the problem is not the procedure logic but the access to the files.
Regards
Marcus
Reading Flat File
Sanjeev, August 08, 2008 - 7:38 am UTC
Hi Marcus,
I am able to access the flat file provided I delete some of the columns, as I believe the line is too long for the UTL_FILE functions to handle.
'Test' is a directory which I created for accessing the flat file.
So let me know if there is any way I can access all the columns of that line from the flat file, so that I can create an Oracle table using those columns. I will use this table to pivot the measures based on the time dimension (which is part of the column names in the flat file; e.g. in 2006/01 (Unit), the 2006/01 value belongs to the time dimension).
Thanks.
August 08, 2008 - 1:21 pm UTC
utl_file is limited to 32k.
you should do it the way I told you to, external tables.
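A minimal sketch of that approach follows. The column names, file name, and directory are placeholders (the real table would need one column per field in the long header row shown above):

```sql
-- Hypothetical external table over the wide CSV, assuming the directory
-- object TEST already points at the file's location on the server.
create table camera_sales_ext (
  category   varchar2(100),
  brand      varchar2(100),
  model      varchar2(100)
  -- ... one column per field in the header row
)
organization external (
  type oracle_loader
  default directory test
  access parameters (
    records delimited by newline
    skip 1                          -- skip the header row
    fields terminated by ','
    missing field values are null
  )
  location ('source.csv')
)
reject limit unlimited;
```

Once the external table exists, the pivot is an ordinary SQL query against it; no UTL_FILE reading (and no 32k line limit) is involved.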
Unloading data to a flat file with PL/SQL
rc, August 16, 2008 - 3:54 am UTC
Oracle 10g Flat File
Mike, August 19, 2008 - 6:25 pm UTC
Hi! I've got Oracle 10g and I need to create a flat file to feed SAP... anyway, this is what I have been trying... please help me.
create or replace procedure Cursos_Loading(p_nothing IN integer)
is
v_filehandle utl_file.file_type;
begin
v_filehandle := utl_file.fopen('DIRECTORY', 'FILENAME', 'w', 4000);
for r in 1..10 loop
utl_file.put_line(v_filehandle, 'LINE # : '||r);
end loop;
utl_file.fclose(v_filehandle);
end Cursos_Loading;
What am I doing wrong? When I execute the procedure, this error appears...
/* Source of PACKAGE BODY SYS.UTL_FILE is not available */
ORA-29280 Invalid directory path
August 20, 2008 - 10:35 am UTC
well, you obviously have an invalid directory path??
did you
create or replace directory "DIRECTORY" as '/x/y/z';
please don't use /x/y/z of course....
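A fuller sketch, with placeholder names: create the directory object as a privileged user and grant access to the schema that runs the procedure. ORA-29280 usually means the directory object does not exist or is spelled differently (directory object names default to upper case).

```sql
-- as a DBA or a user with CREATE ANY DIRECTORY:
create or replace directory extract_dir as '/u01/app/extracts';
grant read, write on directory extract_dir to scott;

-- then, in the procedure, reference the object by its (upper case) name:
-- v_filehandle := utl_file.fopen('EXTRACT_DIR', 'cursos.txt', 'w', 4000);
```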
Rajeev, September 29, 2008 - 8:30 am UTC
Hi
I have created a procedure to extract data from Oracle to an XML file. It works, with one problem: once the XML file reaches a size of 34KB, it does not write any more data. Your suggestions will be helpful.
The procedure is as follows:
CREATE OR REPLACE PROCEDURE extractToXML(pTname varchar2,rowSettag varchar2,rowTag varchar2,filename varchar2)
IS
BEGIN
declare
v_ctx dbms_xmlquery.ctxtype;
v_file utl_file.file_type;
v_xml clob;
v_more boolean := true;
v_date varchar2(30);
BEGIN
v_date := to_char(systimestamp - 29, 'yyyy-mm-dd')||' 00:00:00';
-- create xml context.
v_ctx := dbms_xmlquery.newcontext('select * from '||pTname);
-- dbms_xmlquery.setxslt(v_ctx, '/home/dwintuser/Development/XML/dvBatchFile.xsd') ;
-- create the transformed html document.
dbms_xmlquery.setRowSetTag(v_ctx,rowSettag);
dbms_xmlquery.setRowTag(v_ctx,rowTag);
v_xml := dbms_xmlquery.getxml(v_ctx);
dbms_xmlquery.closecontext(v_ctx);
-- output html document to file
v_file := utl_file.fopen('C:\oraload', filename, 'W',32767);
while v_more loop
utl_file.put(v_file, substr(v_xml, 1, 32767));
dbms_output.put_line('Length of XML '||length(v_xml));
if length(v_xml) > 32767
then
v_xml := substr(v_xml, 32768);
dbms_output.put_line('Inside IF Length of XML '||length(v_xml));
else
v_more := false;
end if;
end loop;
utl_file.fclose(v_file);
dbms_output.put_line('Coke Completed Successfully');
exception
when others then
dbms_output.put_line(substr(sqlerrm,1,1000));
utl_file.fclose(v_file);
end;
END;
September 29, 2008 - 9:24 am UTC
utl_file demands a newline at least every 32k.
do you have any?
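When the XML has no newlines at all, one common workaround (a sketch, assuming a directory object XML_DIR and a placeholder source table) is to open the file in binary mode and write the CLOB with PUT_RAW, which is not subject to the newline-per-32k rule:

```sql
declare
  l_file utl_file.file_type;
  l_xml  clob;
  l_amt  pls_integer := 8000;  -- chars per chunk; kept small so the raw
                               -- buffer stays under 32767 bytes even in
                               -- a multibyte character set
  l_pos  pls_integer := 1;
  l_len  pls_integer;
begin
  select xml_doc into l_xml from xml_stage where rownum = 1;  -- placeholder
  l_len  := dbms_lob.getlength(l_xml);
  l_file := utl_file.fopen('XML_DIR', 'extract.xml', 'wb', 32767);
  while l_pos <= l_len loop
    utl_file.put_raw(l_file,
                     utl_raw.cast_to_raw(dbms_lob.substr(l_xml, l_amt, l_pos)));
    l_pos := l_pos + l_amt;
  end loop;
  utl_file.fclose(l_file);
end;
/
```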
Writing Big files - Suggest a better way
Nishant, January 27, 2009 - 3:44 am UTC
Hi Tom,
We have been using UTL_FILE to write to text files from our Oracle tables. A shell script then transfers the file to a mainframe-based data mart system. We do this on a daily basis.
Our files are very large, about 40 GB on average. Sometimes they reach 90 GB when data is accumulated for multiple days.
The table from which I dump the data is parallel enabled with degree 8, and we are using ASSM.
The file writing consumes a lot of time, 4-5 hours (sometimes even more). Can you please suggest a better way of doing this?
January 28, 2009 - 8:25 am UTC
plsql would probably be my last choice for this task. Actually, I cannot imagine dumping 40-90gb to a flat file - that seems like a "bad idea". But whatever.
http://asktom.oracle.com/tkyte/flat/index.html
Big file dumping
Nishant, January 29, 2009 - 5:21 am UTC
Thank you very much Tom. However, it is like sending each commissionable transaction from the Oracle Incentive Compensation tables (approx 1 million transactions daily are commissionable, out of a total of approx 8 million transactions in our POS system) to the reporting system on the mainframe.
If we send the data for more than 4-5 weeks, then this problem arises.
If you find a better way to do this, please share it with me.
Thanks again for your time and effort.
January 30, 2009 - 2:07 pm UTC
well, I laid out options in that link for dumping to a file.
My point was "probably do not need mainframe reporting system", you already have the data in a pretty competent reporting system.
parallel dumping with a ref_cursor
Neil, February 21, 2009 - 12:33 pm UTC
Hi tom -
I wanted to dump a file onto my server in double-quick time, after reading some good stuff on www.oracle-developer.net which I have attempted to implement.
Why can I parallelize a result set in the package below
using cursor(select...), but not if I pass in a ref_cursor.
Is this a limitation, or am I doing something stupid?
Here is a reproducible test (sorry, it's more than 1000 chars):
begin
execute immediate 'alter session enable parallel query';
execute immediate 'alter session enable parallel dml';
end;
/
create table source_table
parallel
nologging
as
select a.text
from all_source a
where rownum < 1001
/
create
type dump_results_typ
as object(file_name varchar2(255)
,no_records number
,session_id number)
/
create
type dump_results_tab_typ
as table of dump_results_typ
/
create or replace package parallel_dump is
function fn_dump (i_directory in varchar2
,i_file in varchar2)
return integer;
function fn_cur
return sys_refcursor;
function fn_read_and_dump (i_directory in varchar2
,i_file in varchar2
,i_cursor in sys_refcursor)
return dump_results_tab_typ pipelined
parallel_enable
(partition i_cursor by any);
end parallel_dump;
/
create or replace package body parallel_dump is
function fn_dump (i_directory in varchar2
,i_file in varchar2)
return integer
is
l_cursor sys_refcursor;
l_results dump_results_tab_typ := dump_results_tab_typ();
begin
l_cursor := fn_cur;
-- this doesn't seem to work...
select /*+ parallel(t,4) */
dump_results_typ(file_name, no_records, session_id)
bulk collect
into l_results
from table(parallel_dump.fn_read_and_dump(i_directory
,i_file
,l_cursor)) t; -- <== Variable
if l_results.count > 0
then
for i in l_results.first..l_results.last
loop
dbms_output.put_line(to_char(l_results(i).no_records)
||' records written to '||l_results(i).file_name);
end loop;
end if;
-- but this does!!!
select /*+ parallel(t,4) */
dump_results_typ(file_name, no_records, session_id)
bulk collect
into l_results
from table(parallel_dump.fn_read_and_dump(i_directory
,i_file
,cursor(select * from source_table))) t;
if l_results.count > 0
then
for i in l_results.first..l_results.last
loop
dbms_output.put_line(to_char(l_results(i).no_records)
||' records written to '||l_results(i).file_name);
end loop;
end if;
return 0;
exception
when others then
return 1;
end fn_dump;
function fn_cur
return sys_refcursor
is
l_cursor sys_refcursor;
begin
open l_cursor for
select * from source_table;
return l_cursor;
end fn_cur;
function fn_read_and_dump (i_directory in varchar2
,i_file in varchar2
,i_cursor in sys_refcursor)
return dump_results_tab_typ
pipelined
parallel_enable (partition i_cursor by any)
as
type output_tab_typ is table of varchar2(32767);
l_rows output_tab_typ;
l_file utl_file.file_type;
l_name varchar2(255);
l_buffer varchar2(32767);
l_sid number;
l_lines pls_integer := 0;
c_eol constant varchar2(1) := chr(10);
c_eollen constant pls_integer := length(c_eol);
c_maxline constant pls_integer := 32767;
begin
select sid
into l_sid
from v$mystat
where rownum = 1;
l_name := i_file || '_' || to_char(l_sid) || '.csv';
l_file := utl_file.fopen(i_directory, l_name, 'w', 32767);
loop
fetch i_cursor
bulk collect
into l_rows
limit 250;
for i in 1 .. l_rows.count
loop
if length(l_buffer) + c_eollen + length(l_rows(i)) <= c_maxline
then
l_buffer := l_buffer || c_eol || l_rows(i);
else
if l_buffer is not null
then
utl_file.put_line(l_file, l_buffer);
end if;
l_buffer := l_rows(i);
end if;
end loop;
l_lines := l_lines + l_rows.count;
exit when i_cursor%notfound;
end loop;
close i_cursor;
utl_file.put_line(l_file, l_buffer);
utl_file.fclose(l_file);
pipe row (dump_results_typ(l_name, l_lines, l_sid));
return;
exception
when others then
if i_cursor%isopen then close i_cursor; end if;
raise_application_error(-20500, 'Unknown error in parallel_dump.fn_read_and_dump: '
||sqlerrm(sqlcode));
end fn_read_and_dump;
begin
null;
end parallel_dump;
/
/*=============================================================================================*/
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Feb 20 16:41:01 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus> declare
2 res number;
3 begin
4
5 res := parallel_dump.fn_dump(i_directory => '/your/directory/here'
6 ,i_file => 'pdtest1');
7
8* end;
9 /
1000 records written to pdtest1_521.csv --<== No parallelism with ref_cursor variable
560 records written to pdtest1_526.csv
102 records written to pdtest1_495.csv
236 records written to pdtest1_513.csv
102 records written to pdtest1_525.csv
PL/SQL procedure successfully completed.
February 21, 2009 - 9:32 pm UTC
I hate your code
http://asktom.oracle.com/pls/ask/search?p_string=%22i+hate+your+code%22

exception
when others then
return 1;
end fn_dump;
do you have access to "Expert Oracle Database Architecture", I have a full example in there.
and let me sort of try to explain what is going on. We cannot magically execute your PL/SQL in parallel. Think about it - look at this query:
select /*+ parallel(t,4) */
dump_results_typ(file_name, no_records, session_id)
bulk collect
into l_results
from table(parallel_dump.fn_read_and_dump(i_directory
,i_file
,l_cursor)) t; -- <== Variable
How could we call your code in N sessions - each executing a slice of the data? What is being 'parallelized' here? You are asking us to execute a slice of your function fn_read_and_dump in 4 different sessions - how could that be possible? There is no way for us to tell your code "hey, you are to return the 3rd quarter of the data - now do your stuff". Even if we could - how would YOU know how to get the 3rd quarter?
We can execute queries in parallel and send the slices of data to your routines (execute your routine against each bit of the parallel result set).
We cannot execute your code in parallel.
we only know what "parallel query" is.
Still don't get it
Neil, February 22, 2009 - 5:54 pm UTC
The examples need to be succinct, so the exception handling got chucked.
I still don't understand why a cursor variable passed into an exposed packaged function doesn't work and passing in "cursor(select...)" does.
open l_cursor for select * from source_table;
:
select /*+ parallel(t,4) */
dump_results_typ(file_name, no_records, session_id)
bulk collect
into l_results
from table(parallel_dump.fn_read_and_dump(i_directory
,i_file
,l_cursor)) t; -- <== Variable - no good!
-- But this is ok! Parallelises nicely...
select /*+ parallel(t,4) */
dump_results_typ(file_name, no_records, session_id)
bulk collect
into l_results
from table(parallel_dump.fn_read_and_dump(i_directory
,i_file
,cursor(select * from source_table))) t;
After all, called from the same block, the pl/sql function is receiving the SAME information in the last argument, just in a different syntax.
Maybe the question should be "what's the difference between a cursor variable and a select statement cast as a cursor, to a parallel pipelined function?"
February 22, 2009 - 6:22 pm UTC
the exception handling was not chucked, it was turned into a horrible worst practice. to chuck it would be to remove it, I would have loved to see it removed.
I have simply vowed to myself that anytime I see WHEN OTHERS not followed by RAISE or RAISE_APPLICATION_ERROR - that is, used improperly - I will write "i hate your code" and link to the growing library of examples.
Anyway
select /*+ parallel(t,4) */
dump_results_typ(file_name, no_records, session_id)
bulk collect
into l_results
from table(parallel_dump.fn_read_and_dump(i_directory
,i_file
,l_cursor)) t; -- <== Variable - no good!
you are trying to ask us to run your function in parallel. The BEST we could do is call it N times, but it would run from START TO FINISH N times (there is nothing for us to divvy up and send to you, we cannot send your routine a SLICE OF DATA, the only thing we know how to slice and dice is SQL and you don't have any SQL for us to SLICE UP). The sql is already opened, already run, already executed, it is out of our control - we are NOT opening it there, someone else did and frankly - we cannot tell if that SQL is parallel or not - it doesn't matter if it is - we already started executing it, it is going - it is too late to play games with it.
we have NO way to slice up the data to be processed by each copy, if we did parallel 4 - you would get 4 complete copies of the output (that would be *wrong*). There is NOTHING we could 'send' to your routine to tell the 3rd copy to just get the 3rd quarter of the data to process.
versus:
-- But this is ok! Parallelises nicely...
select /*+ parallel(t,4) */
dump_results_typ(file_name, no_records, session_id)
bulk collect
into l_results
from table(parallel_dump.fn_read_and_dump(i_directory
,i_file
,cursor(select * from source_table))) t;
We can, because you stated so in your declaration, slice up "select * from source_table" in any way we see fit and run N copies of your routine against each slice. Very very very different. We have some SQL to slice up, we do so and we invoke your routine N times - each with a different slice of data to process.
The only way to run your code in parallel is to take a query that is an INPUT to the code and parallelize the query and send a slice of the output to each copy of your routine. In order to do that, we need to control very finely the opening of that cursor - to parallelize it with the code.
You cannot open and the change the way it should be opened after the fact. You have to combine the opening of the query with the invocation of the routine itself.
ref_cursor and cursor(select...)
Neil, February 23, 2009 - 4:29 am UTC
To sum up then, a pipelined parallel function declaration MUST have a ref_cursor as a parameter, but that parameter CANNOT be opened before it is passed to the function; and the only way to pass a ref_cursor without opening it is to employ a CURSOR(select...) expression.
February 23, 2009 - 4:52 am UTC
I agree that the documentation:
http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14289/dcitblfns.htm#i1005017
is lacking/not clear on that fact, but yes, the cursor() construct is necessary - so that the entire outer query can be parallelized with it. In fact, I wish they had not used 'ref cursor' at all in the parallel discussion, but rather just "cursor variable"
You have to sort of think of your query:
select * from table( cursor( select * from t ) );
and being broken up into:
select * from table( cursor( select * from t where rowid between :a and :b ) );
select * from table( cursor( select * from t where rowid between :a and :b ) );
select * from table( cursor( select * from t where rowid between :a and :b ) );
select * from table( cursor( select * from t where rowid between :a and :b ) );
and that is what is actually run in parallel (conceptually). If you send it an already opened result set, it is too late to do that.
Use .tsv file as input to sql query and append the output to the .tsv file
John, April 24, 2009 - 2:06 am UTC
Hi tom, Here is my scenario..
1) I have a sample.tsv file with some data as below,
dat1 dat2 dat3
----- ---- -----
xyz1 xyz2 xyz3
xyz4 xyz5 xyz6
2) I need a script which takes the above sample.tsv as input to a SQL query, and the resulting output should be appended to the same sample.tsv file on the client side; I do not need this on the server. The output will just add an ID column to the sample.tsv file.
Ex:
select id from tab1 where col1 = xyz1 and col2 = xyz2;
{xyz1 and xyz2 are the values from above sample.tsv file}
Required resulting sample.tsv file.
id dat1 dat2 dat3
--- ---- ---- ----
1 xyz1 xyz2 xyz3
2 xyz4 xyz5 xyz6
3) Now I pick up the above sample.tsv file and use it as an input to some command to load the data of the sample.tsv file into the database. Hope I am clear. Please let me know if I am not.
I am trying to have the above process automated. I am very new to Oracle and I want to know if your dump_csv script can handle this with modifications, or if you have some other script which does something similar, or if it is possible at all to achieve this in PL/SQL, and if so, how to approach it. Please help.
Thanks,
John
April 27, 2009 - 11:36 am UTC
2) I need a script which takes the above sample.tsv as input to a SQL
query, and the resulting output should be appended to the same sample.tsv file on
the client side; I do not need this on the server. The output will just add
an ID column to the sample.tsv file.
you will be writing a client program then, frankly the database isn't even involved here - you'd have to get the file to the server and the server can only create files on the server.
You do not tell us what this client is
You do not tell us what software the client has
You cannot do this in plsql, since plsql runs on the server and the server cannot write to your filesystem.
Well, you could do this in plsql, if you are using something like mod_plsql - for example, I could write a stored procedure and put it on asktom to return tab delimited data. In fact, I could create a form that would let the client upload their file to the server and then the server returns to them a brand new file (no appending, we cannot write to YOUR filesystem - a program that runs on your client could, but we don't have anything there)
PL/SQL Performance (Bulk Collect)
Manuel Vidigal, July 29, 2009 - 10:28 am UTC
Hi Tom,
I changed your code a bit so that it can bulk collect any given query (it concatenates every column with the separator and replaces the select list with this concatenation). The only limitation is the maximum of 4000 chars per line.
Here is the code:
CREATE OR REPLACE FUNCTION dump_csv_to_file_new(p_query IN VARCHAR2,
p_separator IN VARCHAR2 DEFAULT ';',
p_dir IN VARCHAR2,
p_filename IN VARCHAR2,
p_header IN VARCHAR2 DEFAULT 'Y',
p_limit IN INTEGER DEFAULT 100)
RETURN NUMBER AUTHID CURRENT_USER IS
TYPE csv_cur_type IS REF CURSOR;
csv_cur csv_cur_type;
TYPE myarray IS TABLE OF VARCHAR2(4000);
l_data myarray;
l_query VARCHAR2(4000);
l_column_string VARCHAR2(2000);
l_thecursor INTEGER DEFAULT dbms_sql.open_cursor;
l_columnvalue VARCHAR2(2000);
l_status INTEGER;
l_colcnt NUMBER DEFAULT 0;
l_separator_csv VARCHAR2(10);
l_cnt NUMBER DEFAULT 0;
rec_tab dbms_sql.desc_tab;
l_output utl_file.file_type;
l_buffer VARCHAR2(32767) := NULL;
l_new_line VARCHAR2(10) := chr(10);
BEGIN
l_output := utl_file.fopen(p_dir, p_filename, 'w');
dbms_sql.parse(l_thecursor, p_query, dbms_sql.native);
FOR i IN 1 .. 255 LOOP
BEGIN
dbms_sql.define_column(l_thecursor, i, l_columnvalue, 2000);
l_colcnt := i;
EXCEPTION
WHEN OTHERS THEN
IF (SQLCODE = -1007) THEN
EXIT;
ELSE
RAISE;
END IF;
END;
END LOOP;
dbms_sql.define_column(l_thecursor, 1, l_columnvalue, 2000);
l_status := dbms_sql.EXECUTE(l_thecursor);
dbms_sql.describe_columns(l_thecursor, l_colcnt, rec_tab);
l_separator_csv := '';
FOR x IN 1 .. l_colcnt LOOP
l_column_string := l_column_string || l_separator_csv || rec_tab(x)
.col_name;
l_separator_csv := '||''' || p_separator || '''||';
END LOOP;
IF nvl(p_header, 'Y') != 'N' THEN
utl_file.put_line(l_output, l_column_string);
l_cnt := l_cnt + 1;
END IF;
l_query := regexp_replace(upper(p_query),
'^SELECT (.*?) FROM',
'SELECT ' || l_column_string || ' FROM');
dbms_output.put_line(l_query);
OPEN csv_cur FOR l_query;
LOOP
FETCH csv_cur BULK COLLECT
INTO l_data LIMIT p_limit;
FOR j IN 1 .. l_data.COUNT LOOP
IF length(l_buffer || l_data(j)) >= 30000 THEN
utl_file.put_line(l_output, l_buffer);
l_buffer := l_data(j);
ELSE
IF l_buffer IS NULL THEN
l_buffer := l_data(j);
ELSE
l_buffer := l_buffer || l_new_line || l_data(j);
END IF;
END IF;
l_cnt := l_cnt + 1;
END LOOP;
EXIT WHEN csv_cur%NOTFOUND;
END LOOP;
CLOSE csv_cur;
utl_file.put_line(l_output, l_buffer);
utl_file.fclose(l_output);
RETURN l_cnt;
END dump_csv_to_file_new;
I performed some tests and it looks pretty fast. I would like to compare it with the Pro*C way, but I don't understand how to compile and use the Pro*C program.
Here is the performance test:
SQL> CREATE TABLE csv_test AS
2 SELECT o.*
3 FROM all_objects o,
4 (SELECT LEVEL a
5 FROM dual
6 CONNECT BY LEVEL <= 100);
Table created.
Elapsed: 00:00:42.60
SQL> select count(1) from csv_test;
COUNT(1)
----------
6837100
Elapsed: 00:00:14.60
SQL> set autotrace traceonly statistics
SQL> select * from csv_test;
6837100 rows selected.
Elapsed: 00:01:05.79
Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
549970 consistent gets
100707 physical reads
0 redo size
807972849 bytes sent via SQL*Net to client
5014386 bytes received via SQL*Net from client
455808 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
6837100 rows processed
SQL> DECLARE
2 l_result NUMBER;
3 BEGIN
4 l_result := dump_csv_to_file_new
5 (p_query => 'select * from csv_test',
6 p_separator => ';',
7 p_dir => 'TEMP',
8 p_filename => 'dump_csv_to_file_new.csv',
9 p_header => 'N',
10 p_limit => 100);
11 END;
12 /
PL/SQL procedure successfully completed.
Elapsed: 00:01:15.03
SQL> DECLARE
2 l_result NUMBER;
3 BEGIN
4 l_result := dump_csv
5 (p_query => 'select * from csv_test',
6 p_separator => ';',
7 p_dir => 'TEMP',
8 p_filename => 'dump_csv.csv');
9 END;
10 /
PL/SQL procedure successfully completed.
Elapsed: 00:08:32.13
The modified code only takes about 10 seconds more than the query itself. The output file is 697MB. So for cases where the line length does not exceed 4000, I think one can use this new function instead of the Pro*C program, although I would like to see someone compare the two.
Cheers,
Manel
A reader, August 04, 2009 - 1:39 am UTC
Hi Tom, for writing a csv file I am using the dump_csv function. I have a problem when an address column contains a comma, like VALLEY SPRINGS, CA: when opening the csv file, the address shows in two different columns. How can I keep this in a single column? Thanks in advance.
August 04, 2009 - 2:26 pm UTC
you will probably need to put double quotes around it.
Read up on what YOUR thing that opens your csv file is expecting as a data format - here is a good one if your thing is Excel:
http://www.creativyst.com/Doc/Articles/CSV/CSV01.htm
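One possible helper (a sketch, not part of dump_csv itself): wrap each field in double quotes and double any embedded quotes, which is the convention Excel and most CSV parsers expect.

```sql
create or replace function csv_quote( p_field in varchar2 )
return varchar2
is
begin
  -- embedded double quotes are escaped by doubling them
  return '"' || replace( p_field, '"', '""' ) || '"';
end;
/
-- csv_quote('VALLEY SPRINGS, CA') renders as "VALLEY SPRINGS, CA",
-- which loads into a single spreadsheet column.
```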
Unload data on client side
Kapil Sharma, August 06, 2009 - 3:10 am UTC
Hi Tom
I really liked this post and it does clear all the doubts and issues but 1 thing that I would like to ask is if there is any way to create flat files on the client server (and not on the DB server) by calling a pl/sql procedure?
August 06, 2009 - 8:45 am UTC
ctl-f on this page for the word
virus
and read the multitude of discussions on this topic we've already had.
spooling clobs
Fernando Sánchez, December 15, 2009 - 7:12 pm UTC
Hello.
I have been asked to write in one file the whole content of a table, this table has six CLOB columns and around 7000 rows.
This file will be used to load into a table in a MySQL database, so the format is given to me. None of these CLOBs is very long (the maximum size of some of them is around 60000 characters).
I was asked to do it with PL/SQL, but I first tried to do it with spool, which has been quite fast in writing the whole file.
But I am seeing behaviour I cannot understand and need to avoid in order to have the file well formatted (I have tried on 10.1 and 10.2 databases):
When one CLOB column has a line feed included, the line feeds are perfectly written in the file, but an extra line feed is also written "at the end of the record" (only one extra line feed, even if more than one CLOB column has a line feed included).
If no CLOB column has a line feed, then no extra line feed is included at the end of the record.
I suspect I am missing something.
Thanks in advance.
December 16, 2009 - 7:09 am UTC
do not use spool, sqlplus is a very simple - very very simple (yet deceptively powerful at times) reporting tool. You have the control you have and nothing more.
If you want a well formed file, you'll be writing a bit of code to do it.
Creating XML from Oracle Table
Pranab, January 03, 2010 - 9:12 pm UTC
Looking for suggestion to write huge data into XML from oracle table
The data will be in hierarchy structure, like
Master
<Child1>
<Sub Child1>
<End Sub Child1>
<Sub Child2>
<End Sub Child2>
<Sub Child3>
<End Sub Child3>
<End Child1>
<Child2>
<Sub Child1>
<End Sub Child1>
<Sub Child2>
<End Sub Child2>
<Sub Child3>
<End Sub Child3>
<End Child2>
End Master
Data extraction will be from 10+ tables.
The number of columns will be 150+.
Most importantly, the records will be around 2-3 million.
And the table sizes are around 7-8 million records.
please help. Thanks in advance.
January 04, 2010 - 12:09 pm UTC
good luck with that, sounds like a horrifically bad (slow to boot) idea. What is going to possibly consume/use that.
Can you tell us instead what your goal is - not how you are trying to achieve said goal - but what the goal itself is.
XML is good for a very few things, massive data interchange on this scale - no, not so.
Can we have junk stuff like bcp?
A reader, January 29, 2010 - 2:53 pm UTC
I used to work with SQL Server, and it uses "bcp" to do bulk insert/extract from/to a table; it is very handy and easy.
Can we have such junk in Oracle?
January 29, 2010 - 4:26 pm UTC
but expdp and impdp do that, as does a simple create table as select (unload to an external table)
we just named them "differently"
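The "create table as select" unload Tom mentions can be sketched like this, with placeholder names; the ORACLE_DATAPUMP driver writes the query result to a file that another database can attach as an external table:

```sql
-- assumes a directory object EXTRACT_DIR already exists and is writable
create table emp_unload
organization external (
  type oracle_datapump
  default directory extract_dir
  location ('emp_unload.dmp')
)
as select * from emp;
```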
Some Tough Ask
A reader, August 04, 2010 - 3:23 am UTC
I have a requirement to dump a table's data to a flat file, and the conditions are:
==
Output - A series of comma-delimited, or tab-delimited or fixed-width files.
When using delimited or fixed-width files:
Choose a delimiter which does not appear in the data (pipe - | - is usually best);
Text fields must be “quoted”;
A header row and table layout must be included in each file to identify the field in each column.
Start date is July 1, 2006 end date is June 30, 2010
===
Could it be possible?
August 04, 2010 - 7:35 am UTC
you can write anything given a proper specification.
I would say your specification is lacking however, it is incomplete.
What if the quoted text contains quotes itself? How do you deal with that? (typically via an escaping mechanism - so that the text:
and they said "Hello there"
becomes
"and they said \"Hello there\""
but now you need to deal with text with \ in it too - and escape that...
what about text with newlines - how to deal with that?
if you select | and the data contains | - what then? do you simply restart, and if you restart - and the thing you selected exists in the data, what then?
typically back to an escaping mechanism
what about the thing receiving this data - what SPECIFICATION is it expecting - has anyone talked to them about it??
and so on
But sure, once you have a rock solid SPECIFICATION - you can write code for pretty much anything. You have my template above as a starting point, refine it to your needs once you answer all of the possible questions one could raise about your specification.
READING FROM WORD DOCUMENT(.doc)
Sunny, September 27, 2010 - 10:08 am UTC
Hi Tom,
I created a PL/SQL procedure to read from an MS Word document (.doc) for a requirement to store a Word document in an Oracle table (version 9.2).
Though I was able to load it successfully in my Oracle table in BLOB datatype, I am unable to read it.
How can I display the data from the BLOB column in exactly the same manner it was stored in word file?
thanks!
September 27, 2010 - 12:16 pm UTC
think about this, you would have to rewrite MS-WORD in plsql. They have a binary document format - you would have to know how to process a word document from start to finish.
Why not just return the word document "as is" to the client?
We can use Oracle text to extract the text from the document (no formatting) or turn it into html (which won't be close to being an exact copy formatting wise) or another document format (like RTF).
Reading from Word document(.doc)
Sunny, September 27, 2010 - 10:43 am UTC
Hi Tom,
Further to my above query, I just want to convert the BLOB datatype to the VARCHAR2 datatype.
Please let me know how I can do this in Oracle 9i.
Thanks!!
September 27, 2010 - 12:23 pm UTC
Getting error in dump_csv function
abhi, January 05, 2011 - 8:51 am UTC
Hi
I am getting this error on execution of TEST_DUMP_CSV:
ORA-29285: file write error
ORA-06512: at "SYS.UTL_FILE", line 69
ORA-06512: at "SYS.UTL_FILE", line 604
ORA-06512: at "DUMP_CSV", line 55
ORA-06512: at "TEST_DUMP_CSV", line 5
ORA-06512: at line 3
Please help.
January 05, 2011 - 9:02 am UTC
what did you do to debug this so far?
usually this is the result of the default linesize, you might want to modify the fopen to use a larger linesize if your lines are long - you can set it up to 32k.
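Concretely, the fourth parameter of UTL_FILE.FOPEN is max_linesize (it defaults to roughly 1000 bytes and can be raised to 32767). In the dump_csv routine the change would look something like:

```sql
-- before: l_output := utl_file.fopen( p_dir, p_filename, 'w' );
-- after:  allow lines up to the 32k maximum
l_output := utl_file.fopen( p_dir, p_filename, 'w', 32767 );
```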
How to modify FOPEN
abhi, January 07, 2011 - 5:16 am UTC
Can you please tell me how to modify the fopen call to use a larger linesize? I am using UTL_FILE for the first time.
Please help.
ref cursor and file write using perl
A reader, January 17, 2011 - 11:21 am UTC
Is there any advantage to using a function to return a cursor and writing the flat file in Perl? Why can't I do the same directly with the utl_file package? Is the first method faster?
January 23, 2011 - 3:22 pm UTC
you can do it with utl_file, sounds like time to benchmark - prototype something simple in each language and evaluate the results.
link is no longer working
jim, April 04, 2011 - 2:57 pm UTC
April 12, 2011 - 12:06 pm UTC
Output csv with a twist
Earl, May 30, 2011 - 9:52 pm UTC
Tom,
I've been using your function/procedure combo to output CSV files for one of my applications for quite some time. It works like a charm. Unfortunately, after we create the CSV output on the database machine, we need to scp the file to a web server where it can be downloaded by a "curl" process.
What I'd like to do is eliminate this convoluted transfer/download of the csv data by creating a procedure that can be called on demand via HTTP (this is an Apex app). You probably know that Apex will allow a user to invoke a stored procedure directly as long as it has been granted execute privileges to the public user.
My question has to do with how best to get the data from the table into a form that is HTTP output ready. I have thought about creating a temporary clob from the CSV output and then just using htp.p(l_clob).
I've also thought about using bulk collect (I have a well defined query and don't really need to do the dynamic sql stuff) but it seems that I would do the bulk collect only to loop through it to do the HTTP output so why not just do it at the time of the original query.
Just wondering what you would do in a case like this? It is a very small result set (several thousand rows of 8 or 9 columns), so not really concerned with long processing times. Thanks for any thoughts you might have.
Earl
May 31, 2011 - 12:48 pm UTC
just change the utl_file calls to htp.p or htp.prn calls.
search this site for owa_sylk for example - that creates a sylk file instead of a csv file, but does it right to the browser.
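A rough sketch of that change, streaming CSV straight to the browser from an Apex/mod_plsql callable procedure; the column list, table, and file name are placeholders:

```sql
create or replace procedure download_csv
as
begin
  -- emit HTTP headers so the browser treats the response as a CSV download
  owa_util.mime_header( 'text/csv', false );
  htp.p( 'Content-Disposition: attachment; filename="report.csv"' );
  owa_util.http_header_close;

  -- header row, then one line per row of the (well defined) query
  htp.prn( 'col1,col2,col3' || chr(10) );
  for r in ( select col1, col2, col3 from my_table )
  loop
    htp.prn( r.col1 || ',' || r.col2 || ',' || r.col3 || chr(10) );
  end loop;
end;
/
```

For a few thousand rows of 8 or 9 columns this avoids the temporary CLOB entirely; each row is written as it is fetched.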
Flat link is not working
Cid, June 17, 2011 - 7:07 am UTC
June 17, 2011 - 1:55 pm UTC
look up two reviews ago
What are the other options?
Sri Narashimalu, August 04, 2011 - 11:32 am UTC
Tom,
Thank you first of all!
In one of the reviews above, you haven't actually mentioned what the other options are for the task - extract 50 gigs worth of data from a table to file. Could you mention some options that you use for such tasks please?
plsql would probably be my last choice for this task. Actually, I cannot imagine dumping 40-90gb to a flat file -that seems like a "bad idea". But whatever.
Thanks,
Sri
August 04, 2011 - 11:44 am UTC
raj, October 20, 2011 - 10:56 am UTC
dump_csv works...awesome.. made my life easier.
CSV
Developer PA, November 14, 2011 - 9:25 am UTC
Tom,
I have used your dump_csv function and am facing some exceptions here.
_______________________________________
I declared the variables in my procedure:
v_location varchar2(1000) := '/data/cmst/cms/data/pricing';
v_file_name varchar2(1000):='Extract_Details.txt' ;
l_c1 sys.odciVarchar2List;
l_c2 sys.odciVarchar2List;
l_c3 sys.odciVarchar2List;
l_limit number := 10;
_______________________________________
The ref cursor is defined as:
TYPE custodian_cursor_type IS REF CURSOR;
_______________________________________
and now I am calling the function like this:
OPEN pio_cstdn_extract_cur FOR v_chr_sql;
loop
fetch p_cur bulk collect into l_c1, l_c2, l_c3 limit l_limit;
close pio_cstdn_extract_cur;
--dump_csv( v_chr_sql,',',v_location,v_file_name);
dump_csv takes p_query as input, but I have a ref cursor. I am new to the ref cursor and utl_file concepts and may be wrong in the code above; please guide.
___________________________________________
Thanks in advance.
November 15, 2011 - 8:42 am UTC
If you want to use a ref cursor, you'll have to rewrite the code.
I'm not sure what "exception" you are facing, you give absolutely no real information here to see what is going on.
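On 11g and later, one way to do that rewrite is DBMS_SQL.TO_CURSOR_NUMBER, which converts an open ref cursor into a DBMS_SQL cursor number; the describe/define/fetch logic of dump_csv can then be reused. A sketch (the query and variable names are placeholders):

```sql
declare
    l_rc      sys_refcursor;
    l_cursor  integer;
    l_colcnt  number;
    l_desc    dbms_sql.desc_tab;
    l_value   varchar2(4000);
begin
    open l_rc for 'select * from all_users where rownum <= 5';

    -- 11g and later: hand the already-executed ref cursor to dbms_sql;
    -- no dbms_sql.execute call is needed afterwards
    l_cursor := dbms_sql.to_cursor_number( l_rc );

    dbms_sql.describe_columns( l_cursor, l_colcnt, l_desc );
    for i in 1 .. l_colcnt loop
        dbms_sql.define_column( l_cursor, i, l_value, 4000 );
    end loop;

    while dbms_sql.fetch_rows( l_cursor ) > 0 loop
        for i in 1 .. l_colcnt loop
            dbms_sql.column_value( l_cursor, i, l_value );
            -- utl_file.put the value here, as in dump_csv
        end loop;
    end loop;
    dbms_sql.close_cursor( l_cursor );
end;
/
```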
Datatype Inconsistent Error
A reader, November 15, 2011 - 11:54 am UTC
Tom, I was able to rewrite the code.
I am using the following code and got the error: data type inconsistent.
Getting Error at line
FETCH custodian_extract_header
INTO ce_line;
Since I have many columns with different data types and have defined ce_line as varchar2, it gives me an inconsistent datatype error.
If I select only VARCHAR2 columns in my V_CHR_SQL, the code below runs perfectly fine.
I need to know what datatype ce_line has to be so that it accepts any column type.
Declare:
ce_line varchar2(20000);
V_DB_DIR_NAME varchar2 (34) := ' ';
ce_file UTL_FILE.file_type;
Body:
open custodian_extract_header for V_CHR_SQL;
IF UTL_FILE.is_open (ce_file)
THEN
LOOP
FETCH custodian_extract_header
INTO ce_line;
EXIT WHEN custodian_extract_header%NOTFOUND;
UTL_FILE.put_line (ce_file, ce_line, FALSE);
END LOOP;
END IF;
CLOSE custodian_extract_header;
UTL_FILE.fclose (ce_file);
November 16, 2011 - 9:19 am UTC
I need an example I can run.
this example will NOT have utl_file (not relevant to your issue) and will have as few lines of code as humanly possible to reproduce your issue..
You will however need to have as many bind variables as you have columns selected.
If your query - which we CANNOT EVEN SEE!!!!! - has more than one column - you'll need to supply some more variables to fetch into.
Or, just USE MY DBMS_SQL CODE as it is.
a dot appears at the end of the first line
Stefan, November 17, 2011 - 9:33 am UTC
Hi Tom,
Thank you for the function, it is very useful.
I encountered a situation that I can't explain: if the select returns only one line, a dot "." appears at the end of that line in the exported file.
If there is more than one line, the dot doesn't appear.
Example:
only 1 line
chebib;chbib7@hotmail.com.
2 lines
su;salw@live.com.pt
shan;shanyongliangsmec@163.com
Do you know what is generating this dot?
November 17, 2011 - 7:21 pm UTC
no it doesn't, can you prove it by creating a table, putting one row into it, showing us the file doesn't exist, running dump_csv against this one row table, and showing us the data with the dot?
Sort of like this:
ops$tkyte%ORA11GR2> create or replace directory my_dir as '/tmp';
Directory created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> !ls -l /tmp/test.dat
ls: /tmp/test.dat: No such file or directory
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> declare
2 l_rows number;
3 begin
4 l_rows := dump_csv( 'select *
5 from t', ',', 'MY_DIR', 'test.dat' );
6 end;
7 /
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2> !ls -l /tmp/test.dat
-rw-rw-r-- 1 ora11gr2 ora11gr2 12 Nov 17 20:20 /tmp/test.dat
ops$tkyte%ORA11GR2> !cat /tmp/test.dat
hello world
ops$tkyte%ORA11GR2>
a dot appears at the end of the first line
Stefan, November 18, 2011 - 3:09 am UTC
Yes, you are right; it seems the dot is visible only in the view option of Total Commander. If I edit the file it doesn't appear, and the same when I use the cat command.
Thank you!
ORA-29283: invalid file operation. How to solve
AnkitV, November 21, 2011 - 5:52 am UTC
Hi Tom
I am facing the below problem and have not been able to get it solved with the help of Unix administrator too till now (on the assumption that it might be a directory permission related problem)
Following code is trying to create a file in host server directory :
....
v_os_touch_file := utl_file.fopen('EXTERNAL_TABLE_DIR', 'file1.txt', 'w');
.....
We are getting the below error every-time :
"ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation"
select * from v$version
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
PL/SQL Release 10.2.0.3.0 - Production
"CORE 10.2.0.3.0 Production"
TNS for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production
Directory does exist:
-------------------------
select * from dba_directories where directory_name = 'EXTERNAL_TABLE_DIR'
Owner directory_name directory_path
SYS EXTERNAL_TABLE_DIR /ftp/wmdwh/in-process
Grants were given too :
------------------------
grant read, write on directory EXTERNAL_TABLE_DIR to PUBLIC;
Even when I execute the block below, it gives the same error:
declare
v_os_touch_file utl_file.file_type;
begin
v_os_touch_file := utl_file.fopen('EXTERNAL_TABLE_DIR', 'abc.txt', 'w');
utl_file.put_line(v_os_touch_file, 'hello here');
utl_file.fclose(v_os_touch_file);
end;
Error report:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
ORA-06512: at line 4
29283. 00000 - "invalid file operation"
*Cause: An attempt was made to read from a file or directory that does
not exist, or file or directory access was denied by the
operating system.
*Action: Verify file and directory access privileges on the file system,
and if reading, verify that the file exists.
Please help in this.
Thanks a lot.
November 21, 2011 - 2:04 pm UTC
looks like your Oracle database account does not have 'write' access to the directory in question.
if you log in as oracle and try to create a file in that directory (without using the oracle software) - does it work or does it fail.
File getting created elsewhere
A reader, November 23, 2011 - 12:45 am UTC
Hi Tom
Thanks for the above reply; there was a permission problem with the oracle account. Solved. Thanks.
That problem was solved, but another came up.
In short, files are not getting created on the server from which the jobs are run, but on the server where the database is installed.
WE WANT THEM TO BE CREATED ON THE SERVER FROM WHERE JOBS ARE BEING RUN.
Our prod database D1 is installed on host H1 and PL/SQL jobs run on server S1 accessing D1 and create files on S1 only.
JOBS RUN VIA A UNIX SCRIPT, WHICH CALLS sqlplus AND EXECUTES JOBS PASSED AS ARGUMENTS.
Recently I got same jobs scheduled to run on S2 accessing D2 database (test) installed on H2 server but files are getting created on H2 instead of S2.
Directory object is EXTERNAL_TABLE_DIR on both S1 and S2.
Actually files should be created on S2 only as jobs are being run there only.
Can you please tell why are the files getting created on H2 given that jobs are run from S2 (given that in case of PROD files are getting created on S1, NOT H1) and what can be done to rectify this ?
Thanks a lot
November 23, 2011 - 8:30 am UTC
WE WANT THEM TO BE CREATED ON THE SERVER FROM WHERE JOBS ARE BEING RUN.
that would take a virus-like capability, wouldn't it? Think about this for a minute.
Say you run this from your PC. Do you really want the database server to be able to reach out over the network and arbitrarily select your PC and create a file on it? And then when your coworker runs this on their PC, the same thing happens - a file gets created on their machine. In short, this magic database server would have the ability to write a file onto ANY MACHINE in your network!
Think about that for a minute, long and hard :)
Now, how do you go about getting machine 1 to write a file on machine 2? It is rather straight forward.
You mount the disk that machine 2 has explicitly exported onto machine 1. Machine 1, due to your setup, due to your explicit request, due to your explicit configuration and security rules, can now write to the file system owned by machine 2.
The runs might be "run from S2", but they RUN on H2 - plsql runs IN THE SERVER, not on the client.
If you want the files to be created on the client, you either:
a) mount the file system so the database has access to the file system.
b) have the client retrieve data from the database and have the CLIENT write that data to the local file system.
If you are looking to dump a file locally, maybe all you need is sqlplus:
http://asktom.oracle.com/Misc/httpasktomoraclecomtkyteflat.html
question
A reader, November 25, 2011 - 2:59 pm UTC
We have around 10K+ schemas, all on one instance. We would like to iterate through all the schemas and dump the data for one table (about 24K rows each) into a flat file.
Any recommendation as to the most efficient way of doing this?
November 28, 2011 - 10:23 am UTC
please don't tell me it is the same schema over and over again - just for different "customers" :(
simply write a plsql loop
for x in (select owner, table_name from all_tables where table_name = 'XXX' )
loop
dump_csv( 'select * from ' || owner || '.' || table_name, ... );
end loop;
If they must all go into the SAME file, just build a big UNION ALL in a plsql index by table and use the dbms_sql.parse routine that takes a plsql table type instead of a single string (to avoid the 32k issue)...
http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_sql.htm#i997676 see the third definition of parse, use that one with a varchar2A datatype as an input.
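A sketch of that overload, with a placeholder table name; each piece of the generated UNION ALL goes into its own element of a DBMS_SQL.VARCHAR2A collection, so no single string ever approaches the 32k varchar2 limit:

```sql
declare
    l_cursor integer := dbms_sql.open_cursor;
    l_stmt   dbms_sql.varchar2a;
    l_idx    number := 0;
begin
    for x in ( select owner, table_name
                 from all_tables
                where table_name = 'XXX' )   -- placeholder table name
    loop
        l_idx := l_idx + 1;
        l_stmt(l_idx) :=
            case when l_idx > 1 then ' union all ' end ||
            'select * from ' || x.owner || '.' || x.table_name;
    end loop;

    -- the parse overload that accepts a table of strings
    dbms_sql.parse( l_cursor, l_stmt,
                    1, l_idx,        -- lower/upper bound of elements to use
                    false,           -- lfflg: concatenate without newlines
                    dbms_sql.native );
    -- ... then define columns, execute and fetch exactly as dump_csv does ...
    dbms_sql.close_cursor( l_cursor );
end;
/
```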
thank you
Andrew, November 30, 2011 - 7:51 am UTC
Thank you for your response.
I ran the PL/SQL and it took around 2 min for 700K records, so in production for ~13 million records this would take about 40 min.
I also ran your Pro*C program and it took around 8 seconds for 700K records. I am thinking of using it instead of PL/SQL. How can I loop through the 18K schemas to get the output?
Or do you suggest to use BULK Collect and update the PL/SQL to optimize it?
Thank you very much.
November 30, 2011 - 8:01 am UTC
pro*c or sqlplus will pretty much blow away plsql for this particular operation - writing to a file
I would use sqlplus to write a script. for example, if you used my 'flat.sql' script, you could:
set heading off
set feedback off
set .... whatever else you need ...
spool x.sql
select '@flat u/p ' || table_name from all_tables where ....;
spool off
@x
same with the pro*c program, just have sqlplus create a script, run said script
SQL*Plus SPOOL versus PL/SQL
Stew Ashton, December 01, 2011 - 10:16 am UTC
Tom, I have been working on a generic CSV / XML dump tool, and I was disappointed in the lack of speed of the SPOOL command. Your followup made me go back and look again: here's what I found in case it helps anyone else.
First, set ARRAYSIZE to at least 100.
Second, set LINESIZE appropriately: too small and the line will wrap in the output file, too big and things will slow way down. I had used LINESIZE 32767 to be safe and it tripled processing time.
My test data set is one million lines with average row length of 350. Reading and transforming the lines to CSV format takes 17 seconds. Dumping the result with the SPOOL command takes 40 seconds, and dumping with UTL_FILE takes 60 seconds.
Properly configured, SPOOL wins but UTL_FILE isn't that far behind in my tests.
December 06, 2011 - 10:33 am UTC
make sure to have trimout set on as well!
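Putting those settings together, a SQL*Plus unload script along these lines (the file name, query, and exact LINESIZE are placeholders, to be tuned per the advice above):

```sql
set arraysize 100      -- fetch 100 rows per round trip
set linesize 400       -- just wide enough for the longest row; do not oversize
set trimout on         -- trim trailing blanks from screen output
set trimspool on       -- trim trailing blanks from the spool file
set heading off feedback off pagesize 0 termout off

spool /tmp/unload.csv
select username || ',' || user_id || ',' ||
       to_char( created, 'dd-mon-yyyy' )
  from all_users;
spool off
```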
question
A reader, December 01, 2011 - 8:23 pm UTC
I am not sure what you mean when you say this:
select '@flat u/p ' || table_name from all_tables where ....;
Can you please tell me how I can pass the schema name dynamically?
December 06, 2011 - 10:46 am UTC
run that query in sqlplus, you want to spool out all tables in a schema, well
a) log in as that schema
b) run that script - it will generate a script that unloads all of the tables in that schema.
if you do not want to log into that schema, log into a schema that can read all of the tables you want and use
select '@flat u/p ' || owner || '.' || table_name from all_tables where owner in ( .... );
SQL*Plus SPOOL
Stew Ashton, December 07, 2011 - 4:47 am UTC
On using SPOOL to output CSV to a file: did you mean SET TRIMOUT ON or SET TRIMSPOOL ON, or both?
December 07, 2011 - 1:24 pm UTC
trimout on
trimspool on
both - trimspool being the more relevant - thanks for catching that!
A reader, January 25, 2012 - 10:40 am UTC
Hi Tom,
There is a report generated in production at every month end.
We open a cursor in a procedure and then the app server writes the file, but it is very slow: it takes 3 hours to finish with 300,000 records. Can you please suggest something?
January 26, 2012 - 10:22 am UTC
you have to be kidding here right? seriously?
A reader, January 26, 2012 - 10:32 am UTC
No... Tom, I am serious... why?
January 31, 2012 - 2:55 pm UTC
well, think about it for a while - seriously. ask yourself how anyone could formulate a sensible response to your "question"
Yes, "question" with quotes - on purpose. Because it sure wasn't a well formed question in any way shape or form.
Jess, July 09, 2012 - 12:14 pm UTC
Hi Tom,
One of the columns in the table we're writing out to a file is a date (datetime). It has been created as a date because other code uses it as such. For the file to have correct format, we need to break it up into 2 distinct fields, date and time. We can't replace "select * " with "select trunc(date), to_char(time)" because we'd have to list out all the columns, and that would be a massive list for every query.
Is there a way to get around this problem without changing the source table to store the time component as a separate varchar column and select it accordingly?
Thank you.
July 11, 2012 - 1:17 pm UTC
... and that would be
a massive list for every query. ...
so what? I don't care. You should be listing them anyway...
ops$tkyte%ORA11GR2> alter session set nls_date_format = 'dd/mm/yyyy", "hh24:mi:ss';
Session altered.
ops$tkyte%ORA11GR2> select sysdate from dual;
SYSDATE
--------------------
11/07/2012, 14:15:42
but you probably can do it with the select * even though you shouldn't. Just set your NLS date format to something that works for you.
Jess, July 16, 2012 - 6:27 am UTC
Thanks Tom!
I have tried it with your suggestion. Worked great.
I also need some of the fields to be in double-quotes. Changing p_separator isn't the way to go because only text fields can be in quotes--numerics have to go without.
What is the best way to do that? I've tried to do it in the select itself by spelling out the columns and trying to include quotes, but must be doing it wrong as keep getting errors:
Basic
p_query varchar2 (32000) := 'select num_col, text_col '||
' from my_table';
works, but
p_query varchar2 (32000) :=
'select num_col), '||' "'||'text_col'||'", '||
' from my_table';
results in ORA-00904: "text_col": invalid identifier
What am I doing wrong?
Thank you in advance.
July 16, 2012 - 4:01 pm UTC
this is how I'd do it, use easier quoting and just one big line that spans lines:
p_query := q'|
select num_col, '"'||text_col||'"',
date_col, '"'||text_col2||'"',
.....
from my_table
|';
Jess, July 19, 2012 - 7:11 am UTC
As always, thanks for a great suggestion, Tom.
Ended up using q'# since q'| seemed to conflict with the || operators.
Replace Invalid Characters before writing to a file
A reader, September 28, 2012 - 1:59 pm UTC
I have a strange problem. I need to extract a description field from a table and write it to text file. If the description field has invalid characters, I need to replace them with spaces.
Invalid characters are defined as anything that is not a lowercase letter (a to z), an uppercase letter (A to Z), a number, a space, or one of $ - % ? & # @ ^ * !
If there are any other characters in the description, I need to replace it with space.
CREATE TABLE SK_DESC_TEMP
(DESCRIPTION VARCHAR2(2000));
INSERT INTO SK_DESC_TEMP
(DESCRIPTION)
VALUES
(q'*ABCD'EF*HI~JKL"mno');
I would like to get a output that's
ABCD EF*HI JKL mno
Is there an efficient way to do it?
September 29, 2012 - 8:01 am UTC
your output is botched, you said * is valid, you are missing it on the leading edge of your sample output.
ops$tkyte%ORA11GR2> CREATE TABLE t
2 (DESCRIPTION VARCHAR2(2000));
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> INSERT INTO t (DESCRIPTION) VALUES (q'|*ABCD'EF*HI~JKL"mno|');
1 row created.
ops$tkyte%ORA11GR2> insert into t (description) values (q'|abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ $-%?&#@^*!|' );
1 row created.
ops$tkyte%ORA11GR2> insert into t (description) values (q'|a1bcdefghi5jklmnop(qrstuvwxyz)ABCDEF++GHIJKLMNO=PQRSTUVWXYZ $-%?&#@^*!'|' );
1 row created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select description, regexp_replace( description, '[^A-Za-z \$\%\?\&\#\@\^\*\!\-]', ' ' ) from t;
DESCRIPTION
-------------------------------------------------------------------------------
REGEXP_REPLACE(DESCRIPTION,'[^A-ZA-Z\$\%\?\&\#\@\^\*\!\-]','')
-------------------------------------------------------------------------------
*ABCD'EF*HI~JKL"mno
*ABCD EF*HI JKL mno
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ $-%?&#@^*!
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ $-%?&#@^*!
a1bcdefghi5jklmnop(qrstuvwxyz)ABCDEF++GHIJKLMNO=PQRSTUVWXYZ $-%?&#@^*!'
a bcdefghi jklmnop qrstuvwxyz ABCDEF GHIJKLMNO PQRSTUVWXYZ $-%?&#@^*!
Replace Invalid Characters before writing to a file
A reader, September 28, 2012 - 2:16 pm UTC
Please use the following SQL script instead; I had a typo in my previous insert statement. Sorry about that.
CREATE TABLE SK_DESC_TEMP
(DESCRIPTION VARCHAR2(2000));
INSERT INTO SK_DESC_TEMP
(DESCRIPTION)
VALUES
(q'*ABCD'EF*HI~JKL"mno*');
Import data from txt files to Oracle
Alexis, November 13, 2012 - 10:57 am UTC
Hi Tom,
I need help and I think this is the correct place. I need to import data from txt files into Oracle. I don't know if it is possible to create a table matching the txt file structure, because the txt files can differ.
Thanks
November 14, 2012 - 8:17 pm UTC
read about external tables (lots of examples on this site) or sqlldr (again, tons of examples out there)
rman issue
kiran, July 26, 2013 - 10:57 am UTC
hi tom..
How do I make an obsolete backup valid again in RMAN?
Thank you in advance...
August 02, 2013 - 5:26 pm UTC
they are valid already? they are just obsolete and can be deleted if you want.
where is it?
Sal, September 04, 2013 - 10:46 pm UTC
September 09, 2013 - 10:58 am UTC
asktom.oracle.com/Misc/httpasktomoraclecomtkyteflat.html
Data Backup
Luz, September 25, 2013 - 10:31 am UTC
Hi tom,
I have this code:
BEGIN
v_proceso:=enr_f_bd_codproceso;
SELECT DISTINCT NOM_BBDD_DESTINO INTO v_bbdd_dstino FROM ENM_RED_CONTROL WHERE COD_PROCESO=v_proceso AND NOM_TABLA_DESTINO NOT LIKE 'AUX%';
-- loop over the distinct tables stored in the rowid table
FOR c_tabla in(SELECT DISTINCT rid.esquema,rid.nom_tabla,DECODE(rid.dblink,'''',NULL,rid.dblink) DBLINK_NONULL FROM ENM_RED_ROWID rid WHERE hilo=v_hilo)
LOOP
enr_p_bd_logtbl('BCK','INI',id_eje,v_proceso,c_tabla.esquema,c_tabla.nom_tabla);
v_campos:=enr_f_bd_backcamp(c_tabla.esquema,c_tabla.nom_tabla,c_tabla.DBLINK_NONULL);
BEGIN
FOR c_rowid in(SELECT NUM_ROWID FROM ENM_RED_ROWID rid WHERE ESQUEMA=c_tabla.esquema AND NOM_TABLA=c_tabla.NOM_TABLA)
LOOP
-- loop over the rowids for each table
v_query:='SELECT '''||v_bbdd_dstino||''','''||c_tabla.esquema||''','''||c_tabla.nom_tabla||''','||CHR(13)||
'''INSERT INTO '||c_tabla.esquema||'.'||c_tabla.nom_tabla||c_tabla.DBLINK_NONULL||' VALUES(''||'||v_campos||'||'');'' AS v_insert'||CHR(13)||
'FROM '||c_tabla.esquema||'.'||c_tabla.nom_tabla||c_tabla.DBLINK_NONULL||CHR(13)||
'A WHERE A.ROWID='''||c_rowid.num_rowid||'''';
v_query:= 'INSERT /*+ APPEND */ INTO ENM_RED_BACKUP '||v_query;
--dbms_output.put_line(v_query);
EXECUTE IMMEDIATE v_query;
COMMIT;
END LOOP;
enr_p_bd_logtbl('BCK','FIN',id_eje,v_proceso,c_tabla.esquema,c_tabla.nom_tabla);
COMMIT;
EXCEPTION WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(v_query);
enr_p_bd_logtbl('BCK','ERR',id_eje,v_proceso,c_tabla.esquema,c_tabla.nom_tabla,NULL,SQLCODE,SQLERRM);
END;
END LOOP;
END;
The problem is that it is very slow; is there any way to make it faster?
Thank you very much for your help
Replace Invalid Characters before writing to a file
Veslar, September 26, 2013 - 6:56 am UTC
In a regular expression, a backslash inside [] is not an escape character but a literal one. In the post with the same title above, that makes the backslash itself a valid character.
CREATE TABLE SK_DESC_TEMP
(DESCRIPTION VARCHAR2(2000));
INSERT INTO SK_DESC_TEMP
(DESCRIPTION)
VALUES
(q'*ABCD'EF*HI~JKL"mno*');
INSERT INTO SK_DESC_TEMP
(DESCRIPTION)
VALUES
(q'*AB\/CD'EF*HI~JKL"mno*');
SELECT description,
regexp_replace(description,'[^a-z0-9 $-%?&#@^*!]',' ') correct,
regexp_replace(description,'[^A-Za-z \$\%\?\&\#\@\^\*\!\-]', ' ' ) incorrect FROM SK_DESC_TEMP;
Replace Invalid Characters before writing to a file
Veslar, September 26, 2013 - 11:43 am UTC
Sorry for my previous post; there was another error and an inconsistency. The hyphen should be at the end so it is not treated as the range operator, and A-Z is added so the result does not depend on the default case sensitivity setting (for me at least).
CREATE TABLE SK_DESC_TEMP
(DESCRIPTION VARCHAR2(2000));
INSERT INTO SK_DESC_TEMP
(DESCRIPTION)
VALUES
(q'*ABCD'EF*HI~JKL"mno*');
INSERT INTO SK_DESC_TEMP
(DESCRIPTION)
VALUES
(q'*OK-azAZ09 $-%?&#@^*!NOK-~/\.,;"-*');
SELECT description,regexp_replace(description,'[^a-zA-Z0-9 $%?&#@^*!-]',' ') replaced FROM SK_DESC_TEMP;
dump_csv function
Srinivasan, November 19, 2013 - 12:46 pm UTC
I have incorporated the dump_csv code as-is. It does not throw an error, but there is no output file in the directory specified in the procedure below.
create or replace procedure test_dump_csv
as
l_rows number;
begin
l_rows := dump_csv
('select * from apps.TB03569A where rownum < 25',',','/usr/tmp2', 'test_dump.dat');
end;
/
and I confirmed that the above path is included in the utl_file_dir parameter.
SQL> @test_dump_csv.sql
Function created.
Procedure created.
Why is my code not being executed?
A reader, December 02, 2013 - 7:37 pm UTC
Hi Tom,
My code looks like below.
declare
-
-
begin
v_strt_time := systimestamp;
d_o.p_l(v_strt_time);
-
-
exec:= DBMS_SQL.execute(crsr);
loop exit when DBMS_SQL.fetch_rows(crsr)<=0;
for i in 1..colcnt loop
DBMS_SQL.column_value(crsr,i,v_buffer);
v_buffer:=seperater||v_buffer;
end loop;
utl_file.put_line_nchar(v_dir,v_file,ltrim(v_buffer,seperater));
v_spld := v_spld + 1;  -- initialized to 0
if mod(v_spld,1000)=0 then d_o.p_l('Rows exported :' ||vspld ); end if;
-
end loop;
v_end_time:=systimestamp;
d_o.p_l(v_end_time);
-
-
v_total_time:=calculation;
d_o.p_l('Time taken to spool '||vspld|| ' Rows:'||v_total_time);
-
end;
/
Here the start time and the spool progress (printing the number of rows exported for every 1000 rows) are not displayed in the order they execute.
Everything is displayed on the SQL*Plus screen only once the spooling completes.
What is the reason for this and how can I correct it?
Also, since DBMS_SQL.execute cannot give the row count of the table (assume the parsed statement is select * from table), what is the way to find the total number of records that are going to be spooled?
I want this value before utl_file starts writing lines into the file.
Also, is there any other utility to spool data to a flat file apart from SPOOL and utl_file?
Thank you so much for your time.
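A note on the behaviour described above: DBMS_OUTPUT writes into a session buffer that SQL*Plus fetches and displays only after the top-level call returns, so interleaved progress output cannot appear mid-run this way. As for counting the rows before spooling, one common approach (a sketch; the query text is a placeholder) is to wrap the same statement in a COUNT(*) first:

```sql
declare
    l_query varchar2(4000) := 'select * from all_users';  -- placeholder
    l_cnt   number;
begin
    -- count the rows the spool will produce, before opening the file.
    -- note: the data could change between this count and the later fetch
    -- unless both run against the same read-consistent snapshot.
    execute immediate 'select count(*) from (' || l_query || ')' into l_cnt;
    dbms_output.put_line( 'About to spool ' || l_cnt || ' rows' );
end;
/
```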
I have a problem.
ice, January 22, 2014 - 10:20 am UTC
SQL> exec dump_table_to_csv( 'HR','/HOME/ORACLE','HR.CSV');
BEGIN dump_table_to_csv( 'HR','/HOME/ORACLE','HR.CSV'); END;
*
ERROR at line 1:
ORA-29280: invalid directory path
ORA-06512: at "HR.DUMP_TABLE_TO_CSV", line 46
ORA-06512: at line 1
A reader, March 06, 2014 - 6:36 am UTC
Hi sir,
The function doesn't dump the CLOB datatype; any ideas? (c.value is a CLOB)
SELECT dump_csv (&query,',','CSV_FILES','table_dump.csv') FROM DUAL;
Enter value for query: select b.pid, c.value from orhdepidmid b, orhdemetadata c where b.mid=c.mid and c.mdid="21";
PL/SQL procedure successfully completed.
[oracle]$ cat table_dump.csv
FL10000,
FL1000000,
escape the p_separator for the first row
Umesh Kasturi, March 31, 2016 - 12:22 pm UTC
To avoid the p_separator I changed the code in dump_csv as follows, but the separator still appears. Can you please help?
In the first if statement I do not have the l_separator, as given below:
for i in 1 .. l_colCnt loop
if i=1 then
dbms_sql.column_value( l_theCursor, i, l_columnValue );
utl_file.put( l_output, l_columnValue );
l_separator := p_separator;
end if;
if i>1 then
dbms_sql.column_value( l_theCursor, i, l_columnValue );
utl_file.put( l_output, l_separator|| l_columnValue );
l_separator := p_separator;
end if;
end loop;
April 01, 2016 - 11:49 am UTC
What do you mean by "avoid the p_separator"? If you don't want a separator, you can just pass null
SQL> create or replace function dump_csv( p_query in varchar2,
2 p_separator in varchar2
3 default ',',
4 p_dir in varchar2 ,
5 p_filename in varchar2 )
6 return number
7 AUTHID CURRENT_USER
8 is
9 l_output utl_file.file_type;
10 l_theCursor integer default dbms_sql.open_cursor;
11 l_columnValue varchar2(2000);
12 l_status integer;
13 l_colCnt number default 0;
14 l_separator varchar2(10) default '';
15 l_cnt number default 0;
16 begin
17 l_output := utl_file.fopen( p_dir, p_filename, 'w' );
18
19 dbms_sql.parse( l_theCursor, p_query, dbms_sql.native );
20
21 for i in 1 .. 255 loop
22 begin
23 dbms_sql.define_column( l_theCursor, i,
24 l_columnValue, 2000 );
25 l_colCnt := i;
26 exception
27 when others then
28 if ( sqlcode = -1007 ) then exit;
29 else
30 raise;
31 end if;
32 end;
33 end loop;
34
35 dbms_sql.define_column( l_theCursor, 1, l_columnValue,
36 2000 );
37
38 l_status := dbms_sql.execute(l_theCursor);
39
40 loop
41 exit when ( dbms_sql.fetch_rows(l_theCursor) <= 0 );
42 l_separator := '';
43 for i in 1 .. l_colCnt loop
44 dbms_sql.column_value( l_theCursor, i,
45 l_columnValue );
46 utl_file.put( l_output, l_separator ||
47 l_columnValue );
48 l_separator := p_separator;
49 end loop;
50 utl_file.new_line( l_output );
51 l_cnt := l_cnt+1;
52 end loop;
53 dbms_sql.close_cursor(l_theCursor);
54
55 utl_file.fclose( l_output );
56 return l_cnt;
57 end dump_csv;
58 /
Function created.
SQL>
SQL>
SQL> create or replace procedure test_dump_csv
2 as
3 l_rows number;
4 begin
5 l_rows := dump_csv( 'select *
6 from all_users
7 where rownum < 25',
8 ',', 'LOG_DIR', 'test.dat' );
9 end;
10 /
Procedure created.
SQL>
SQL> exec test_dump_csv
PL/SQL procedure successfully completed.
SQL>
SQL>
SQL> host cat c:\temp\test.dat
LBACSYS,92,11/09/2014 10:28:35,YES,Y
OUTLN,13,11/09/2014 08:41:00,YES,Y
SYSTEM,8,11/09/2014 08:40:57,YES,Y
SYS,0,11/09/2014 08:40:57,YES,Y
ORDS_METADATA,212,15/02/2016 09:45:57,YES,Y
APEX_050000,117,24/11/2015 10:59:34,YES,Y
DVF,99,11/09/2014 11:12:47,YES,Y
APEX_040200,98,11/09/2014 10:29:19,YES,Y
FLOWS_FILES,94,11/09/2014 10:29:19,YES,Y
OLAPSYS,82,11/09/2014 10:05:58,YES,Y
DVSYS,1279990,11/09/2014 11:12:47,YES,Y
SI_INFORMTN_SCHEMA,78,11/09/2014 09:49:49,YES,Y
ORDPLUGINS,77,11/09/2014 09:49:49,YES,Y
ORDDATA,76,11/09/2014 09:49:49,YES,Y
ORDSYS,75,11/09/2014 09:49:49,YES,Y
CTXSYS,73,11/09/2014 09:47:42,YES,Y
WMSYS,62,11/09/2014 09:12:11,YES,Y
MDSYS,79,11/09/2014 09:49:50,YES,Y
ANONYMOUS,51,11/09/2014 08:54:43,YES,Y
XDB,50,11/09/2014 08:54:43,YES,Y
APPQOSSYS,49,11/09/2014 08:54:34,YES,Y
DBSNMP,48,11/09/2014 08:54:32,YES,Y
GSMADMIN_INTERNAL,21,11/09/2014 08:44:25,YES,Y
DEMO,222,06/03/2016 13:08:05,NO,N
SQL>
SQL> create or replace procedure test_dump_csv
2 as
3 l_rows number;
4 begin
5 l_rows := dump_csv( 'select *
6 from all_users
7 where rownum < 25',
8 null, 'LOG_DIR', 'test.dat' );
9 end;
10 /
Procedure created.
SQL>
SQL> exec test_dump_csv
PL/SQL procedure successfully completed.
SQL> host cat c:\temp\test.dat
LBACSYS9211/09/2014 10:28:35YESY
OUTLN1311/09/2014 08:41:00YESY
SYSTEM811/09/2014 08:40:57YESY
SYS011/09/2014 08:40:57YESY
ORDS_METADATA21215/02/2016 09:45:57YESY
APEX_05000011724/11/2015 10:59:34YESY
DVF9911/09/2014 11:12:47YESY
APEX_0402009811/09/2014 10:29:19YESY
FLOWS_FILES9411/09/2014 10:29:19YESY
OLAPSYS8211/09/2014 10:05:58YESY
DVSYS127999011/09/2014 11:12:47YESY
SI_INFORMTN_SCHEMA7811/09/2014 09:49:49YESY
ORDPLUGINS7711/09/2014 09:49:49YESY
ORDDATA7611/09/2014 09:49:49YESY
ORDSYS7511/09/2014 09:49:49YESY
CTXSYS7311/09/2014 09:47:42YESY
WMSYS6211/09/2014 09:12:11YESY
MDSYS7911/09/2014 09:49:50YESY
ANONYMOUS5111/09/2014 08:54:43YESY
XDB5011/09/2014 08:54:43YESY
APPQOSSYS4911/09/2014 08:54:34YESY
DBSNMP4811/09/2014 08:54:32YESY
GSMADMIN_INTERNAL2111/09/2014 08:44:25YESY
DEMO22206/03/2016 13:08:05NON
SQL>
SQL>
Avoid p_separator for the first row
umesh kasturi, April 06, 2016 - 7:50 am UTC
Please refer to the last post with a similar heading.
Apologies that my posting was not clear to you.
I will rephrase my question:
I need the separator only based on a data condition,
i.e. if sal = 1000 then I need the separator, else I don't.
Thanks in advance
April 06, 2016 - 12:01 pm UTC
Sorry - I'm still not following.
Can you post some data as it would be in a query, and then how it would appear in your file to help me.
Thanks,
Connor
clob in dumpcsv
umesh kasturi, May 19, 2016 - 1:51 am UTC
Hi ,
dump_csv helps me to write the output of 'select * from my_table' to a file.
What if my_table contains two CLOB columns?
Can I still use this dump_csv function to write to a file?
Thanks
May 20, 2016 - 3:20 am UTC
You can... but (and this isn't a clob thing, but a string thing in general) you might need to think about what you want to do with multi-line input, eg
SQL> create or replace function dump_csv( p_query in varchar2,
2 p_separator in varchar2
3 default ',',
4 p_dir in varchar2 ,
5 p_filename in varchar2 )
6 return number
7 AUTHID CURRENT_USER
8 is
9 l_output utl_file.file_type;
10 l_theCursor integer default dbms_sql.open_cursor;
11 l_columnValue varchar2(2000);
12 l_status integer;
13 l_colCnt number default 0;
14 l_separator varchar2(10) default '';
15 l_cnt number default 0;
16 begin
17 l_output := utl_file.fopen( p_dir, p_filename, 'w' );
18
19 dbms_sql.parse( l_theCursor, p_query, dbms_sql.native );
20
21 for i in 1 .. 255 loop
22 begin
23 dbms_sql.define_column( l_theCursor, i,
24 l_columnValue, 2000 );
25 l_colCnt := i;
26 exception
27 when others then
28 if ( sqlcode = -1007 ) then exit;
29 else
30 raise;
31 end if;
32 end;
33 end loop;
34
35 dbms_sql.define_column( l_theCursor, 1, l_columnValue,
36 2000 );
37
38 l_status := dbms_sql.execute(l_theCursor);
39
40 loop
41 exit when ( dbms_sql.fetch_rows(l_theCursor) <= 0 );
42 l_separator := '';
43 for i in 1 .. l_colCnt loop
44 dbms_sql.column_value( l_theCursor, i,
45 l_columnValue );
46 utl_file.put( l_output, l_separator ||
47 l_columnValue );
48 l_separator := p_separator;
49 end loop;
50 utl_file.new_line( l_output );
51 l_cnt := l_cnt+1;
52 end loop;
53 dbms_sql.close_cursor(l_theCursor);
54
55 utl_file.fclose( l_output );
56 return l_cnt;
57 end dump_csv;
58 /
Function created.
SQL> drop table t purge;
Table dropped.
SQL> create table t ( x int, y int, z clob );
Table created.
SQL>
SQL> insert into t values (1,2,'This is some test data');
1 row created.
SQL> insert into t values (3,4,'This is some test data'||chr(10)||'across two lines');
1 row created.
SQL> create or replace procedure test_dump_csv
2 as
3 l_rows number;
4 begin
5 l_rows := dump_csv( 'select * from t',',','LOG_DIR', 'test.dat' );
6 end;
7 /
Procedure created.
SQL>
SQL> exec test_dump_csv
PL/SQL procedure successfully completed.
SQL> host cat c:\temp\test.dat
1,2,This is some test data
3,4,This is some test data
across two lines
SQL>
Hard returns in data make CSVs more complicated.
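One common workaround (a sketch in the RFC 4180 style, not part of the routine above; csv_quote is a hypothetical helper) is to quote every field and double any embedded double quotes, so that newlines and separators survive inside a single field:

```sql
-- Hypothetical helper, RFC 4180 style: wrap the value in double quotes and
-- double any embedded double quotes so newlines/separators stay in one field.
create or replace function csv_quote( p_val in varchar2 ) return varchar2
as
begin
  return '"' || replace( p_val, '"', '""' ) || '"';
end;
/
-- Inside dump_csv, the write would then become:
--   utl_file.put( l_output, l_separator || csv_quote( l_columnValue ) );
```

A CSV consumer that follows RFC 4180 will then read the two-line value back as a single field.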
clob in dumpcsv
umesh kasturi, May 20, 2016 - 4:46 am UTC
Hi,
This is with respect to my above posting about printing clob columns. I have a requirement where I need to print data that is more than 32k (and is a clob). dump_csv works fine with data up to 32k but not above that.
What I want to do is:
check the data type;
if the data type is number, varchar2 or date, print it as is;
if the data type is CLOB, then I need to print the whole of the clob. Thanks.
May 20, 2016 - 8:50 am UTC
utl_file is limited to 32k per write anyway... so what would you want to do with a clob that is longer than that?
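That said, a clob can still be written out in 32k slices, since the limit applies per utl_file call rather than per file. A minimal sketch, assuming a directory object LOG_DIR and the t/z table from the example above:

```sql
declare
  l_out    utl_file.file_type;
  l_clob   clob;
  l_offset pls_integer := 1;
  l_buffer varchar2(32767);
begin
  -- open with max_linesize 32767; the default is only 1024
  l_out := utl_file.fopen( 'LOG_DIR', 'big_clob.txt', 'w', 32767 );
  select z into l_clob from t where x = 1;
  -- walk the clob in 32k slices and flush each slice to the file
  while l_offset <= dbms_lob.getlength( l_clob ) loop
    l_buffer := dbms_lob.substr( l_clob, 32767, l_offset );
    utl_file.put( l_out, l_buffer );
    utl_file.fflush( l_out );
    l_offset := l_offset + length( l_buffer );
  end loop;
  utl_file.fclose( l_out );
end;
/
```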
To : umesh kasturi
Rajeshwaran, Jeyabal, May 20, 2016 - 9:31 am UTC
For CLOBs, get piecewise access to the data and process each piece (either spool it or write it to a file using utl_file).
Here is an example of accessing a clob piecewise.
rajesh@ORA11G> select x,y,dbms_lob.getlength(z) from t;
X Y DBMS_LOB.GETLENGTH(Z)
---------- ----------- ---------------------
1 20-MAY-2016 5000
2 20-MAY-2016 1000
2 rows selected.
rajesh@ORA11G>
rajesh@ORA11G> declare
2 l_sql long ;
3 c int ;
4 l_col_cnt int;
5 l_desc_t dbms_sql.desc_tab;
6 l_value varchar2(4000);
7 l_value2 clob;
8 l_slice varchar2(32767);
9 l_rows int;
10 l_offset int :=0;
11 begin
12 l_sql :='select * from t';
13 c := dbms_sql.open_cursor;
14 dbms_sql.parse(c,l_sql,dbms_sql.native);
15 dbms_sql.describe_columns(c,l_col_cnt,l_desc_t);
16 for i in 1..l_col_cnt
17 loop
18 if l_desc_t(i).col_charsetid <> 178 then
19 dbms_sql.define_column(c,i,l_value,4000);
20 else
21 dbms_sql.define_column(c,i,l_value2);
22 end if;
23 end loop;
24 l_rows := dbms_sql.execute(c);
25
26 while ( dbms_sql.fetch_rows(c) > 0 )
27 loop
28 for i in 1..l_col_cnt
29 loop
30 if l_desc_t(i).col_charsetid <> 178 then
31 dbms_sql.column_value(c,i,l_value);
32 dbms_output.put(l_value ||',');
33 else
34 dbms_sql.column_value(c,i,l_value2);
35 while ( l_offset < dbms_lob.getlength(l_value2) )
36 loop
37 l_slice := dbms_lob.substr(l_value2,32767,greatest(l_offset,1) );
38 dbms_output.put( l_slice );
39 l_offset := l_offset + length(l_slice);
40 end loop;
41 end if;
42 end loop;
43 dbms_output.new_line;
44 dbms_output.put_line('Total length of clob ='|| l_offset);
45 dbms_output.new_line;
46 l_offset := 0;
47 end loop;
48 dbms_sql.close_cursor(c);
49 end;
50 /
1,20-MAY-2016,******************************************************************** ... (run of 5000 asterisks trimmed) ... ********************************************************************
Total length of clob =5000
2,20-MAY-2016,$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ ... (run of 1000 dollar signs trimmed) ... $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Total length of clob =1000
PL/SQL procedure successfully completed.
rajesh@ORA11G>
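One possible variation on the check above (a sketch, not a correction of the run shown): dbms_sql.describe_columns also exposes col_type, and 112 is the type code for CLOB, which reads more directly than testing the character-set id:

```sql
-- Sketch of the define loop keyed on col_type; 112 is the dbms_sql type
-- code for CLOB (1 = varchar2, 2 = number, 12 = date).
for i in 1 .. l_col_cnt loop
  if l_desc_t(i).col_type = 112 then
    dbms_sql.define_column( c, i, l_value2 );        -- l_value2 is a clob
  else
    dbms_sql.define_column( c, i, l_value, 4000 );   -- everything else as text
  end if;
end loop;
```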
Extract data from database to txt or csv file via DB procedure
Krunal, May 21, 2021 - 4:13 pm UTC
Hi Team,
Your community is very good and I learnt lots of things from it. You are doing great job!!
Basically we are working in Oracle 19c. While migrating our database from 12c to 19c we came to know that the UTL_FILE_DIR parameter is desupported in 19c.
Now I have 2 requirements, as below.
1.) I want to generate some letters via an Oracle procedure, with some dynamic values from a table and some static values in the procedure.
2.) I want to generate a csv file from a query executed in an Oracle procedure. All the data is dynamic, coming from the query executed in the procedure.
Thanks in advance.
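For context (a general note, not the original answer): the UTL_FILE package itself still exists in 19c; it is the UTL_FILE_DIR initialization parameter that was desupported, so file locations must now be named via directory objects. A sketch, with hypothetical names:

```sql
-- Hypothetical directory and grant; the path and grantee are assumptions.
create or replace directory letter_dir as '/u01/app/letters';
grant read, write on directory letter_dir to app_user;
-- dump_csv (or any utl_file caller) can then write via the directory object:
--   l_rows := dump_csv( 'select * from emp', ',', 'LETTER_DIR', 'letters.csv' );
```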
External table use
Asim, September 30, 2022 - 3:03 pm UTC
Can I create an empty external table in my schema and then use
insert into my_external_table select ... from my_internal_table
to get the data into a file on my operating system?
October 03, 2022 - 3:36 am UTC
A reader, October 04, 2022 - 7:09 am UTC
Can I create an empty external table in my schema and then use
insert into my_external_table select ... from my_internal_table
to get the data into a file on my operating system?
You can create an external table as select, with data. But you can't insert into an external table.
Instead of inserting, create another external table with the new data and a different file name. Then either create a consolidated table, or alter the location of one external table to name multiple files.
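A sketch of the create-as-select approach described above, assuming a directory object LOG_DIR exists; the ORACLE_DATAPUMP driver is what allows an external table to be created with data:

```sql
-- Write query results to an OS file by creating an external table as select.
create table emp_ext
  organization external (
    type oracle_datapump
    default directory log_dir
    location ( 'emp_ext.dmp' )
  )
as
select * from emp;

-- To consolidate later, point one external table at multiple files:
--   alter table emp_ext location ( 'emp_ext.dmp', 'emp_ext2.dmp' );
```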