Not really, because a file might not contain any header information at all that indicates its encoding. There are tools out there that will "guess" at the encoding, which might help, eg
[oracle@db192 ~]$ file -i emp.csv
emp.csv: text/plain; charset=us-ascii
You could take advantage of this by using *another* external table to query this result and then make a decision on whether you need to modify the true external table, eg
create table file_check
( ftype varchar2(64) )
organization external
( type oracle_loader
  default directory temp
  access parameters
  ( records delimited by newline
    preprocessor bin:'run_file_command.sh'
  )
  location
  ( temp:'dummy.txt' )
)
reject limit unlimited;
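Once the helper table is in place, the decision step might look something like the sketch below. Note this is only an illustration: EMP_EXT is a hypothetical name for the "true" external table, and the access parameters shown are assumptions about what that table would need.

```sql
-- see what "file -i" reported via the preprocessor
select ftype from file_check;

-- if (say) it reported utf-8, replace the real external table's
-- access parameters with ones that name the matching characterset
-- (EMP_EXT and the field spec here are assumptions, not from above)
alter table emp_ext
  access parameters
  ( records delimited by newline
    characterset al32utf8
    fields terminated by ','
  );
```

Because ALTER TABLE ... ACCESS PARAMETERS replaces the whole clause, you need to restate everything, not just the CHARACTERSET.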
where "run_file_command.sh" is something like:

#!/bin/bash
/usr/bin/file -i emp.csv | /usr/bin/sed 's/.*charset=//g'

Use full paths to the binaries, because preprocessor scripts run without your login environment. Also note the access driver passes the location file (temp:'dummy.txt' here) to the script as $1; this script simply ignores it and probes the real data file instead.
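If you would rather translate the reported charset into an Oracle characterset name as part of the check, a small sketch is below. The function name and the mapping entries are my assumptions, covering only a few common cases, not an exhaustive list.

```shell
#!/bin/bash
# charset_to_oracle: hypothetical helper mapping the charset string
# reported by "file -i" to an Oracle characterset name.
# The entries below are an assumed subset, not a complete mapping.
charset_to_oracle() {
  case "$1" in
    us-ascii)    echo "US7ASCII" ;;
    utf-8)       echo "AL32UTF8" ;;
    iso-8859-1)  echo "WE8ISO8859P1" ;;
    *)           echo "UNKNOWN" ;;
  esac
}

# example: parse a sample "file -i" output line, then map it
cs=$(echo "emp.csv: text/plain; charset=us-ascii" | sed 's/.*charset=//g')
charset_to_oracle "$cs"   # prints US7ASCII
```

You could fold this into the preprocessor script itself, so the external table returns the Oracle characterset name directly rather than the raw "file -i" output.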