So you're reading a JSON document stored in a column, spooling it using SQL*Plus? And these documents can be larger than 32,767 characters, so you're exceeding the maximum linesize?
If so, here are a few things you could try:
Manually add newlines

At appropriate points in the document, e.g. after each closing brace "}", add a newline (chr(10) is a line feed; use chr(13) || chr(10) if you need Windows-style line endings):
with jdoc as (
select '[{"employee_id":100,"first_name":"Steven"},{"employee_id":101,"first_name":"Neena"},{"employee_id":102,"first_name":"Lex"},{"employee_id":103,"first_name":"Alexander"},{"employee_id":104,"first_name":"Bruce"},{"employee_id":105,"first_name":"David"},{"employee_id":106,"first_name":"Valli"},{"employee_id":107,"first_name":"Diana"},{"employee_id":108,"first_name":"Nancy"},{"employee_id":109,"first_name":"Daniel"}]' j from dual
)
select replace(j, '}', '}' || chr(10))
from jdoc;
REPLACE(J,'}','}'||CHR(10))
[{"employee_id":100,"first_name":"Steven"}
,{"employee_id":101,"first_name":"Neena"}
,{"employee_id":102,"first_name":"Lex"}
,{"employee_id":103,"first_name":"Alexander"}
,{"employee_id":104,"first_name":"Bruce"}
,{"employee_id":105,"first_name":"David"}
,{"employee_id":106,"first_name":"Valli"}
,{"employee_id":107,"first_name":"Diana"}
,{"employee_id":108,"first_name":"Nancy"}
,{"employee_id":109,"first_name":"Daniel"}
]
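Because JSON parsers treat newlines between tokens as insignificant whitespace, the split output above is still one valid document. A quick sanity check, sketched in Python as a stand-in for whatever consumes the spooled file:

```python
import json

# A shortened version of the spooled output above:
# one array, split after each closing brace.
spooled = """[{"employee_id":100,"first_name":"Steven"}
,{"employee_id":101,"first_name":"Neena"}
,{"employee_id":102,"first_name":"Lex"}
]"""

# json.loads ignores the inserted newlines; the document parses unchanged.
employees = json.loads(spooled)
print(len(employees))                 # → 3
print(employees[0]["employee_id"])    # → 100
```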
Combine the lines when reading the file

You could stick with spooling the file as you currently do and accept that documents will be split over many lines.
Then stitch the lines back together when you read them. Connor has an example of how to do this using pipelined table functions on his blog:
https://connor-mcdonald.com/2017/11/01/parsing-freeform-data-in-flat-files/

Skip the file writing step!

It sounds like your process is:
- Write the file from database A
- Read it into database B as an external table
If so, why create the file at all? With a database link between the two databases, you can transfer the rows in one step:
insert into t@dbb
select * from t@dba;
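And if you do stick with spooling (the second option above), the stitching Connor does with a pipelined table function can also happen in whatever client reads the file. A minimal sketch in Python, assuming each line of the spool file is a fragment and each document ends wherever the JSON first becomes complete (the helper name and the input layout are illustrative assumptions, not part of Connor's implementation):

```python
import json

def stitch_documents(lines):
    """Accumulate spooled lines until they form a complete JSON value,
    yield it, then start accumulating the next one (hypothetical helper)."""
    decoder = json.JSONDecoder()
    buf = ""
    for line in lines:
        buf += line
        try:
            doc, end = decoder.raw_decode(buf)
        except json.JSONDecodeError:
            continue  # document still incomplete; keep reading lines
        yield doc
        buf = buf[end:].lstrip()

# Two documents, each split over several lines by the spool.
lines = [
    '{"employee_id":100,',
    '"first_name":"Steven"}',
    '{"employee_id":101,',
    '"first_name":"Neena"}',
]
docs = list(stitch_documents(lines))
print(len(docs))  # → 2
```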