Inserting data from comma delimited text file into a table

Question and Answer
Tom Kyte

Thanks for the question, Nag.

Asked: October 13, 2001 - 3:14 pm UTC

Last updated: August 02, 2013 - 5:25 pm UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Tom

A month back, I inadvertently deleted some data from production and committed. But fortunately for me, I had saved that data in a comma-delimited text file (roughly 1000 rows).

I tried using the following ways:

1. SQL*Loader: but I didn't have access to the server on which the database resides. I only had access to the network server through which we connect to the database server (the database and network servers are Unix servers, whereas my client is Win2000).

2. Compose the INSERT SQL for all the rows and insert all the rows.


No. 1 didn't work for the reasons explained above.

No. 2 worked, but when I copied all 1000 INSERT statements and pasted them at the SQL*Plus prompt, it would do only 50 to 60 inserts and then break. Why is this so?

So I copied 50 INSERT statements at a time, pasted them at the SQL*Plus prompt, and committed.


To sum up, I have the following questions.

1. Was that a buffer problem with SQL*Plus, that it did not allow me to paste all 1000 INSERT statements? If you had 1000 INSERT statements in a file, how would you execute them at the SQL*Plus prompt?

2. Can you tell me a really convenient method to handle these kinds of situations using PL/SQL (from both the client and server sides)?

3. Can SQL*Loader be used only from the database server side?

Thank you




and Tom said...

number one should work -- sqlldr works just dandy over the network.

number two -- I don't know why your cut and paste buffer was getting blown out but if you had them in a file -- why didn't you just:

SQL> @filename

that would work straight out.



PL/SQL won't be too much help here. All you really needed was sqlldr -- it would have done it just fine. Note that if you didn't have sqlldr and this was important data -- why didn't you just give it to the admin on the server and ask them to blow it in real quick? That's part of their job in a way -- to make sure your business stays up and running, after all.
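As a minimal sketch of that client-side route (the file names, table, columns, and TNS alias here are hypothetical), you would point sqlldr at the saved file and connect over SQL*Net:

```sql
-- recover.ctl -- hypothetical control file for the saved comma-delimited rows
LOAD DATA
INFILE 'saved_rows.txt'
APPEND
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
( col1, col2, col3 )
```

and then, from the Win2000 client: `sqlldr userid=scott/tiger@proddb control=recover.ctl log=recover.log` -- sqlldr reads the data file locally and ships the rows to the remote database over the network.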



Rating: (13 ratings)

Comments

A reader, October 13, 2001 - 3:31 pm UTC

While doing SQL> @filename

We should not have the semicolon at the end of each INSERT statement, am I right?

Tom Kyte
October 13, 2001 - 3:37 pm UTC

you should if you want to run it. You either need:

insert into t values ( 1 );
insert into t values ( 2 );


or you need:

insert into t values ( 1 )
/
insert into t values ( 2 )
/


either one works.


BTW -- just had an afterthought on this topic as well. You'll love a new feature in 9i called flashback query. Using that, this person would have been able to open up a query AFTER they committed that "flashed back" to the point in time right before they DELETED the data -- and just query it back out of the database! Instant recovery, no need to reload.

load_data -- that was a PL/SQL solution to load data, wasn't it?

Nag, October 13, 2001 - 3:38 pm UTC

"PL/SQL will not be much help" -- well, I believe otherwise, because I just found something on your website where you yourself have advertised and advocated it strongly:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:464420312302,%7Binsert%7D%20and%20%7Bdelimited%7D

( The name of the function is load_data)

What purpose does load_data serve, then?

In what context did you provide this solution (load_data), and in what context did you say that PL/SQL will not be much help?

Tom Kyte
October 13, 2001 - 3:40 pm UTC

Well, you said

"but I didnt have access to the server on which the database resides."

that sort of precludes using that utility as it relies 100% on utl_file and utl_file can ONLY read files on the server.

So, for you -- plsql won't be much help here as I said.

Load data is only useful if you have the file on the server.


Fascinating feature

Nag, October 13, 2001 - 4:20 pm UTC

BTW -- just had an after thought on this topic as well. You'll love a new feature in 9i called a flashback query. Using that, this person would have been able to open up a query AFTER they committed that "flashed back" to the point in time right before they DELETED the data -- and just query it back out of the database! Instant recovery, no need to reload.

Tom, "mind-boggling" and "overwhelming" are the words that instantly come to mind.

1. Can you illustrate the scenario described above, and also the SQL that makes the flashback possible?

2. If Oracle goes at this pace, it will be very difficult to catch up and stay up to date. Your comments, please.

3. How is the flashback query managed internally? You have to answer this.

Tom Kyte
October 14, 2001 - 9:18 am UTC

1) Here is an annotated demo of this feature. You can flashback either based on the SCN (system change number) or based on time (time is within a 5 minute window however -- not exact. SCN is exact)

scott@ORA9I.WORLD> @demo012
=================================================================

We'll begin by creating a table of "keep scns"

As we run through our updates, we'll remember the SCN in place when
we began/ended. We can use this to flash back to various points
in time.


scott@ORA9I.WORLD>
scott@ORA9I.WORLD>
scott@ORA9I.WORLD> create table keep_scn( msg varchar2(25), scn number );

Table created.

=================================================================


Now, we'll update and commit the EMP table many times in a loop. We
remember the system change number before we start and when we are done.
This way, we can flash back to the beginning, middle or end.

Remember, each commit in the system will up the SCN -- it's our "clock".


scott@ORA9I.WORLD>
scott@ORA9I.WORLD> begin
2 insert into keep_scn values ( 'start', dbms_flashback.get_system_change_number );
3
4 for i in 1 .. 100
5 loop
6 update SCOTT.EMP set sal = sal * 1.01;
7 commit;
8 end loop;
9
10 insert into keep_scn values ( 'stop', dbms_flashback.get_system_change_number );
11 commit;
12 end;
13 /

PL/SQL procedure successfully completed.


=================================================================

Now, let's start flashing back. We'll start with a flash back to the
beginning first:


scott@ORA9I.WORLD>
scott@ORA9I.WORLD> variable x refcursor
scott@ORA9I.WORLD> set autoprint on
scott@ORA9I.WORLD>
scott@ORA9I.WORLD>
scott@ORA9I.WORLD> declare
2 l_scn number;
3 begin
4 select scn into l_scn from keep_scn where msg = 'start';
5 dbms_flashback.enable_at_system_change_number( l_scn );
6 open :x for select ename, sal from SCOTT.EMP;
7 dbms_flashback.disable;
8 end;
9 /

PL/SQL procedure successfully completed.


ENAME SAL
---------- ----------
SMITH 800
ALLEN 1600
WARD 1250
JONES 2975
MARTIN 1250
BLAKE 2850
CLARK 2450
SCOTT 3000
KING 5000
TURNER 1500
ADAMS 1100
JAMES 950
FORD 3000
MILLER 1300

14 rows selected.


=================================================================

Now we'll get the SCN that was in effect in the "middle" of update
and view the data at that point in time


scott@ORA9I.WORLD>
scott@ORA9I.WORLD> declare
2 l_scn number;
3 begin
4 select trunc(avg(scn)) into l_scn from keep_scn;
5 dbms_flashback.enable_at_system_change_number( l_scn );
6 open :x for select ename, sal from SCOTT.EMP;
7 dbms_flashback.disable;
8 end;
9 /

PL/SQL procedure successfully completed.


ENAME SAL
---------- ----------
SMITH 1302.67
ALLEN 2605.36
WARD 2035.47
JONES 4844.33
MARTIN 2035.47
BLAKE 4640.79
CLARK 3989.45
SCOTT 4885.04
KING 8141.76
TURNER 2442.51
ADAMS 1791.19
JAMES 1546.9
FORD 4885.04
MILLER 2116.85

14 rows selected.


=================================================================

Now we'll get the SCN that was in effect at the end
and view the data at that point in time


scott@ORA9I.WORLD> declare
2 l_scn number;
3 begin
4 select scn into l_scn from keep_scn where msg = 'stop';
5 dbms_flashback.enable_at_system_change_number( l_scn );
6 open :x for select ename, sal from SCOTT.EMP;
7 dbms_flashback.disable;
8 end;
9 /

PL/SQL procedure successfully completed.


ENAME SAL
---------- ----------
SMITH 2163.82
ALLEN 4327.72
WARD 3381.09
JONES 8046.79
MARTIN 3381.09
BLAKE 7708.73
CLARK 6626.78
SCOTT 8114.43
KING 13524.08
TURNER 4057.24
ADAMS 2975.28
JAMES 2569.53
FORD 8114.43
MILLER 3516.27

14 rows selected.


=================================================================

Lastly, we'll flash back to the start, open a cursor on the
data, and then use that cursor to restore our data to that point in
time -- we'll roll back the work, in effect.


scott@ORA9I.WORLD>
scott@ORA9I.WORLD> declare
2 cursor emp_cur is select empno, sal from SCOTT.emp;
3 l_rec emp_cur%rowtype;
4 l_scn number;
5 begin
6 select scn into l_scn from keep_scn where msg = 'start';
7 dbms_flashback.enable_at_system_change_number( l_scn );
8 open emp_cur;
9 dbms_flashback.disable;
10 loop
11 fetch emp_cur into l_rec;
12 exit when emp_cur%notfound;
13 update SCOTT.emp set sal = l_rec.sal where empno = l_rec.empno;
14 end loop;
15 close emp_cur;
16 commit;
17 end;
18 /

PL/SQL procedure successfully completed.


=================================================================

Just to see that it worked...


scott@ORA9I.WORLD>
scott@ORA9I.WORLD> select ename, sal from SCOTT.emp;

ENAME SAL
---------- ----------
SMITH 800
ALLEN 1600
WARD 1250
JONES 2975
MARTIN 1250
BLAKE 2850
CLARK 2450
SCOTT 3000
KING 5000
TURNER 1500
ADAMS 1100
JAMES 950
FORD 3000
MILLER 1300

14 rows selected.



2) Well, that's a fact of everyday life for everything. I teach a course internally on new 9i features -- just a bullet on each, with a demo here and there to make it real (like the one above). It's about 170 slides and takes eight solid hours, and it only touches on new features (and doesn't get them all). There is a lot in there, that's for sure.

3) It's done the exact same way we do a consistent read -- using the UNDO we have to store to roll back queries in the first place. In 9i, there are UNDO tablespaces (no need for rollback segments anymore) whereby you set up an undo retention period of N minutes, and that dictates how far back you can "flash back".
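As a sketch of that 9i setup (the file name and sizes are hypothetical, and the retention parameter is in fact specified in seconds, as is clarified later in this thread):

```sql
-- hypothetical 9i automatic-undo configuration
CREATE UNDO TABLESPACE undotbs
  DATAFILE '/u01/oradata/undotbs01.dbf' SIZE 500M;

-- switch to automatic undo management (takes effect after a restart)
ALTER SYSTEM SET undo_management = AUTO SCOPE = SPFILE;

-- keep roughly 3 hours of undo available for flashback query (in seconds)
ALTER SYSTEM SET undo_retention = 10800;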

Flashback queries followup

Rich, October 17, 2001 - 3:36 pm UTC

Hi Tom,

Several places I've read mention something along the lines of "go back the number of MINUTES you want".

Is the flashback functionality useful for reviewing data over a longer period of time? Eg: how many sales orders were there on the system at this time last month.

If not, why not?

Thanks.


Tom Kyte
October 17, 2001 - 3:55 pm UTC

The flashback query uses the UNDO information (aka rollback data). In 9i, you would set up an UNDO tablespace with a retention period. This retention period is specified in minutes and tells us how long to hang onto rollback for (no need for rollback segments, just an undo tablespace).

If you have the room for a month's worth of UNDO online -- you could do it (I'll not be able to test that for you, however ;)

The flashback query is a very complete solution for a "smallish" window of time -- hours. You could use it in conjunction with fine-grained auditing (another new 9i feature whereby we capture the SQL, the bind variables AND the SCN when you want us to). With that information, you can see what they saw, since you can flash back. So, it's very useful for watching someone you suspect. It is very useful for recovering data you didn't mean to drop, and so on.


If I had a need to go back to last month, I would consider a tool such as Workspace Management. Using that, I can simply create a "branch" of my data every month and I'll have the ability to go back to that point in time whenever I want. This is accomplished by versioning records. You can use this to automagically get an audit trail if you like (we can keep the entire change history of a record), you can use this to create versions of your data to "what if", you can use this to create point in time snapshots of your data. Your application doesn't know this is happening (totally transparent). See

http://www.oracle.com/pls/db901/db901.to_toc?pathname=appdev.901/a88806/toc.htm

for details. It would be more applicable to what you wanted to do and would not require having 30 days of UNDO online.
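A minimal Workspace Manager sketch of that monthly-savepoint idea (the table and savepoint names are hypothetical):

```sql
-- version-enable the table; Workspace Manager keeps row history from here on
EXEC dbms_wm.EnableVersioning( 'SALES_ORDERS' );

-- at month end, mark the point in time we may want to revisit
EXEC dbms_wm.CreateSavepoint( 'LIVE', 'END_OF_OCTOBER' );

-- later: jump back and query the data as of that savepoint
EXEC dbms_wm.GotoSavepoint( 'END_OF_OCTOBER' );
SELECT COUNT(*) FROM sales_orders;

-- return to the current version of the data
EXEC dbms_wm.GotoSavepoint( 'LATEST' );
```

The application keeps querying the same table name throughout; the versioning is handled transparently underneath it.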

A reader, October 18, 2001 - 9:25 am UTC

excellent explanation on flashback query

RE: Flashback queries followup October 17, 2001

Rich, October 18, 2001 - 10:42 am UTC

Thanks for the excellent explanation and alternate solution.

concat first and second fields

Robert, May 22, 2002 - 11:27 am UTC

Tom,

Can you concatenate the first two fields in a comma-delimited infile and insert them into the first column with SQL*Loader?
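(This one went unanswered in the thread. A hedged sketch of one common approach, assuming 9i-style BOUNDFILLER fields and hypothetical names: read the first two data fields without loading them directly, then build the target column from them with an EXPRESSION column, which consumes no input field.)

```sql
LOAD DATA
INFILE 'data.csv'
INTO TABLE t
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(
  part1     BOUNDFILLER,                     -- data field 1: read, not loaded
  part2     BOUNDFILLER,                     -- data field 2: read, not loaded
  other_col,                                 -- data field 3 goes to OTHER_COL
  first_col EXPRESSION ":part1 || :part2"    -- concatenation into column 1
)
```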



A reader, October 29, 2002 - 1:24 pm UTC

You said:

"This retention period is specified in minutes and tells us how long to hang onto rollback for (no need for rollback segments, just an undo tablespace)."

In the Oracle documentation they say you specify the retention in seconds -- is that correct?


Tom Kyte
October 29, 2002 - 2:30 pm UTC

correct, typo on my part -- it is seconds.

Srinivas, January 22, 2004 - 5:04 am UTC

Hi Tom,

We have a Cold Fusion front end from which we upload a CSV file, parse the file in CF, and call a stored procedure to insert data for each record (each CSV file has around 1000 records). I am not sure if we can use sqlldr for this. Or is this the only way to do it?

Thanks

Srinivas

Tom Kyte
January 22, 2004 - 7:08 am UTC

sqlldr works on OS files. if you have an OS file, sqlldr would be appropriate.

if you have a variable (a big string), parse/insert is appropriate.
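A hedged sketch of the parse/insert route for a single line (table and column names are hypothetical; a real version would loop over the fields rather than hard-code two):

```sql
create or replace procedure load_line( p_line in varchar2 )
as
    -- append a trailing comma so every field ends the same way
    l_line varchar2(4000) := p_line || ',';
    l_pos  number;
    l_f1   varchar2(1000);
    l_f2   varchar2(1000);
begin
    l_pos := instr( l_line, ',' );
    l_f1  := substr( l_line, 1, l_pos - 1 );
    l_f2  := substr( l_line, l_pos + 1,
                     instr( l_line, ',', l_pos + 1 ) - l_pos - 1 );
    insert into t ( c1, c2 ) values ( l_f1, l_f2 );
end;
/
```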

CSV Files

Srinivas, January 23, 2004 - 4:46 am UTC

Hi Tom,

Thanks for the prompt reply, but I could not understand it.

We are using Cold Fusion 5.0 as the front end and Oracle 8i as the database. Both are on different systems, and there are other applications running on those systems as well. The Cold Fusion front end has an upload facility through which clients upload their data files (CSV files). Right now we read the CSV file and do the error checking at the front end; the valid data is sent, record by record, as parameters to a stored procedure, which inserts it. At peak times there are around 1000 users accessing the application, and the upshot is that response time suffers and we get "page cannot be displayed" errors. We are about to fine-tune the application and are identifying possible areas for improvement. What we felt is: to insert 1000 records, there are 1000 calls to the database; instead, we want to send the data and insert it in one stroke.

To achieve this, the various options considered are:
1) the data will be sent as XML (in a CLOB) and the procedure will parse it and insert it into the tables.
2) the data will be sent as a string and the procedure will parse it and insert it into the tables.
3) the uploaded file (from the Cold Fusion front end) will be copied from the Cold Fusion machine to the database system over the network, where we would like sqlldr to insert the data into the tables. Is it possible to invoke sqlldr from the front end?

Thanks

Srinivas

Tom Kyte
January 23, 2004 - 8:01 am UTC

Why not have Cold Fusion bulk insert it? Or can that API not do bulk inserts? I've never programmed Cold Fusion -- is it incapable of array inserts?
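For what the server side of an array insert might look like (hypothetical names; one call carries collections for 1000 rows instead of 1000 single-row calls):

```sql
create or replace procedure bulk_load
    ( p_c1 in dbms_sql.varchar2_table,
      p_c2 in dbms_sql.varchar2_table )
as
begin
    -- one round trip, one bulk bind, all the rows
    forall i in 1 .. p_c1.count
        insert into t ( c1, c2 ) values ( p_c1(i), p_c2(i) );
end;
/
```

Whether the Cold Fusion driver can bind the arrays in a single call is the open question; if it cannot, the XML/string-parsing options above remain the fallback.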



Importing Quoted Delimiter Text

Reader, December 08, 2004 - 1:30 pm UTC

Hi Tom

I hope my question falls in line with the other threads
in this topic.

We've encountered a scenario when importing files from
outside sources that involves having a comma in a column
like the company_name below:

  1* select company_name from company where co_id in(4480478,5087245)
SQL> /

COMPANY_NAME
------------------------------------------------------------
"STUART D. PHILLIPS, P.C."
"EcoSolutions, Inc."

SQL> 


It appears that any column with a "," gets imported into the company
table enclosed by quotes. When we have to export the data to a .txt
or .csv file, we run into problems with the records in the above
scenario: every column that comes after the quoted column gets
rejected.

How can I fix the records that already exist with this problem, and
how can I clean up files with this situation before importing?

Thanks so much for your help and advice.
I hope this question is appropriate here.

 

Tom Kyte
December 09, 2004 - 12:44 pm UTC

there is slightly insufficient data to comment here. need an example -- control file, create table (remove most columns, make it small) and sample data.

Thanks Tom

Reader, December 09, 2004 - 12:51 pm UTC

It is an import module that comes with the "canned" application
software. Users call up a form, enter the path and filename, and the
data gets imported.

I don't have access to the source code or .fmb code, so I am left
having to perform UPDATE statements using TRIM to remove the " at
both ends.

Tom Kyte
December 09, 2004 - 3:19 pm UTC

no way I can answer then.

you might be able to put a trigger on the table to remove it as well...
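A sketch of that cleanup UPDATE (assuming the quotes only ever appear as a matched leading/trailing pair; test on a copy first):

```sql
-- strip one leading and one trailing double quote where both are present
update company
   set company_name = substr( company_name, 2, length(company_name) - 2 )
 where company_name like '"%"';
```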

SQL LOADER and comma delimited

A reader, July 26, 2013 - 10:05 am UTC

Hi Tom,

I have a data file with fields separated by commas (,) and enclosed by quotation marks. But inside the quoted
strings there are again commas.

Example

"A Street","T/A, slow","123 4567 8901"

How do I load such data? Please help.

The control file looks like this:

LOAD DATA
APPEND
INTO TABLE CM_A
FIELDS TERMINATED BY ","
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
STREET CHAR "RTRIM(:STREET)",
NAME CHAR "RTRIM(:NAME)",
PHONE CHAR "RTRIM(:PHONE)"
)
Tom Kyte
August 02, 2013 - 5:25 pm UTC

That'll work just fine, as long as it is quoted like that.

ops$tkyte%ORA11GR2> !cat cm_a.ctl
LOAD DATA
infile *
replace
INTO TABLE CM_A
FIELDS TERMINATED BY ","
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
 STREET CHAR "RTRIM(:STREET)",
  NAME CHAR "RTRIM(:NAME)",
   PHONE CHAR "RTRIM(:PHONE)"
)
begindata
"A Street","T/A, slow","123 4567 8901"

ops$tkyte%ORA11GR2> !sqlldr / cm_a

SQL*Loader: Release 11.2.0.3.0 - Production on Fri Aug 2 13:20:08 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 1

ops$tkyte%ORA11GR2> select * from cm_a;

STREET                         NAME
------------------------------ ------------------------------
PHONE
------------------------------
A Street                       T/A, slow
123 4567 8901