Oracle's older JDBC2 drivers & statement caching
Matthew, March 02, 2005 - 12:18 am UTC
Tom,
The javadoc for the oracle.jdbc.pool.OracleConnectionCacheImpl class in the JDBC2 drivers distributed with v8.1.7 lists setStmtCacheSize() and getStmtCacheSize() methods, which implies that some of the much earlier drivers (Oracle-provided JDBC3 drivers have only been out since 10g??) can do statement caching.
I just hope the documentation isn't misleading :)
matthew
March 02, 2005 - 7:05 am UTC
More on Statement vs. PreparedStatement
A reader, March 04, 2005 - 6:05 pm UTC
Tom,
My friend referred me to the following link to an O'Reilly book, which says:
There's a popular belief that using a PreparedStatement object is faster than using a Statement object. After all, a prepared statement has to verify its metadata against the database only once, while a statement has to do it every time. So how could it be any other way? Well, the truth of the matter is that it takes about 65 iterations of a prepared statement before its total time for execution catches up with a statement. This has performance implications for your application, and exploring these issues is what this section is all about.
Here is the link:
http://www.oreilly.com/catalog/jorajdbc/chapter/ch19.html
Please let me know your thoughts.
Satish.
March 04, 2005 - 6:31 pm UTC
it is totally and completely FALSE and WRONG.
If you have access to my book "Effective Oracle by Design", I did that example -- right -- and showed that a prepared statement works faster than a statement from the very first use (they neglected to "prime the java class cache"; they benchmarked wrong).
It is *not true* what that page says -- it prompted me to write an entire chapter myself with the comprehensive single and multi-user benchmarks.
see also
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:10128287191505
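A fair version of that benchmark has to prime both code paths before timing anything. Below is a minimal sketch of the two paths (the table name `t` and the iteration count are placeholders, not from the original benchmark):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class BindBench {
    // Times a runnable in milliseconds; pure helper, usable without a database.
    static long timeMillis(Runnable r) {
        long start = System.nanoTime();
        r.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Literal-SQL path: a brand new statement text, hence a hard parse, every iteration.
    static void withLiterals(Connection conn, int n) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            for (int i = 0; i < n; i++) {
                try (ResultSet rs = stmt.executeQuery("select x from t where x = " + i)) {
                    while (rs.next()) { /* drain the cursor */ }
                }
            }
        }
    }

    // Bind-variable path: one parse, n executes of the same cursor.
    static void withBinds(Connection conn, int n) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("select x from t where x = ?")) {
            for (int i = 0; i < n; i++) {
                ps.setInt(1, i);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) { /* drain the cursor */ }
                }
            }
        }
    }
}
```

Run each method once untimed to warm the JVM and the shared pool, then time a second run; skipping that warm-up is exactly the kind of mistake that produces the "65 iterations" claim.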
Righting
A reader, March 04, 2005 - 6:37 pm UTC
you righted an article?
March 04, 2005 - 7:54 pm UTC
just call me "hooked on phonics" -- thanks, I changed right to write above
Prepared vs Callable Statement
Marshall Thompson, March 07, 2005 - 8:37 am UTC
Above you state that use of jdbc 3.0 statement caching with prepared statements is the best possible situation. I know we were discussing prepared statements - but where do callable statements (jdbc calling a stored proc) fit in? I assume it is better to use callable statements from jdbc?
March 07, 2005 - 3:00 pm UTC
you use prepared statements when you have only inputs to provide, and callable statements when you need to get outputs back (as from a stored procedure). Most stored procedure calls will be callable statements for that reason -- they generally supply outputs.
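In JDBC terms, that rule looks like this sketch (the procedure name `get_ename` and the table columns are made up for illustration):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

public class CallDemo {
    // Builds the JDBC escape syntax for a procedure call, e.g. "{ call p(?,?) }".
    static String callSyntax(String proc, int nArgs) {
        StringBuilder sb = new StringBuilder("{ call ").append(proc).append("(");
        for (int i = 0; i < nArgs; i++) {
            sb.append(i == 0 ? "?" : ",?");
        }
        return sb.append(") }").toString();
    }

    // Inputs only: a PreparedStatement is all you need.
    static void inputsOnly(Connection conn, int empno, String ename) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("update emp set ename = ? where empno = ?")) {
            ps.setString(1, ename);
            ps.setInt(2, empno);
            ps.executeUpdate();
        }
    }

    // OUT parameter: a CallableStatement is required to register and read it.
    static String withOutParam(Connection conn, int empno) throws SQLException {
        try (CallableStatement cs = conn.prepareCall(callSyntax("get_ename", 2))) {
            cs.setInt(1, empno);
            cs.registerOutParameter(2, Types.VARCHAR);
            cs.execute();
            return cs.getString(2);
        }
    }
}
```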
I can use a callable stmt in place of any prepared stmt
Marshall B Thompson, March 08, 2005 - 1:04 pm UTC
But, I could use a stored procedure and a jdbc callable statement for an insert or update too. I can bind my variables for a callable statement just like a prepared statement. Would it be better to use stored procedures and callable statements, or has the jdbc 3.0 client-side statement caching made prepared statements preferable?
Statement caching should work for both prepared and callable statements
Menon, March 08, 2005 - 6:11 pm UTC
As Tom points out in the doc.
But Tom
"but why would you use a callable statement when you don't need to?"
Well, isn't "code in Java versus code in PL/SQL" -- among other reasons that you have advocated yourself -- a good reason to stick to callable statements even for normal DML that could be done with a prepared statement?
Thanx.
March 08, 2005 - 7:24 pm UTC
Well, Menon - you wrote the book :) (literally)
If you are doing insert/update/delete without returning anything, which would you yourself use?
well...
Menon, March 08, 2005 - 8:23 pm UTC
callable statement, because of other reasons:
a. adding tracing and tuning is easier
b. adding another statement to the logic later on is easier
c. it is accessible to other third-party apps, and so on
Unless I missed something?
At work, my code contains only calls to PL/SQL code, even where it could be written within Java as a PreparedStatement.
March 08, 2005 - 11:11 pm UTC
a) how so?
b) how so?
c) how so?
a callable statement is a prepared statement that can return outputs?
well yes.
Menon, March 09, 2005 - 12:13 am UTC
But that is "splitting hairs" IMHO :-)
Typically when you use PreparedStatement, you don't use it to call stored procedures. You should not -- even if your procedure does not return any output. It does not return one today, but what if it does tomorrow? You would have to make more changes than required. There is no benefit to it, and I doubt most people are even aware of the distinction.
Also, a, b and c are true assuming the above is true and we are talking about the choice between using SQL in Java versus stored procedures...
So did I miss something again?
And Marshall, what is your usage scenario -- were you thinking of invoking stored procedures using PreparedStatement?
March 09, 2005 - 7:38 am UTC
I was simply saying that if I were to execute a statement that didn't return outputs -- insert, update, delete, select, maybe even a stored procedure -- I can use either one. You don't always need to use a callable statement, and I don't know that it would be good to always use one when you don't need to.
Prepared vs Callable
Marshall B Thompson, March 09, 2005 - 7:42 am UTC
I planned to always create stored procedures in packages. I thought they had to be called from java/jdbc using callable statements. I have Tom's books and I thought they advocated always using pl/sql code in packages - maybe that's what I wanted to hear. If a prepared statement can call a stored procedure, I did not know that.
March 09, 2005 - 7:58 am UTC
If the statement to be executed has only "inputs" -- prepared statements work
In general, stored procedures return outputs (OUT parameters) for those you must use callable statements
ok..
Menon, March 09, 2005 - 10:29 am UTC
Well, if you use a prepared statement, you have to put your insert, select, etc. statements in Java. If you use a stored procedure, you don't have to do that. This also means that granting privileges is more secure (execute on stored procedures versus direct insert privileges on the table for the user).
Tuning is easier because in most cases it is easier to take your PL/SQL code and tune it using tkprof than to change your SQL in Java, recompile, restart your middle tier, and so on.
All reasons that I learnt from your books and this site, Tom (just like Marshall did, I suppose) :-)
So you can use either PreparedStatement or CallableStatement, but using stored procedures in conjunction with CallableStatement gives you many advantages (some mentioned above).
The only place where you should switch to using PreparedStatement is when you need to use update batching feature of JDBC. If other tools (such as the usual external tables etc.) have been ruled out for good reasons then it makes sense to use PreparedStatement since Oracle does not implement true batching for CallableStatement objects even though it supports the use of standard batching syntax for these objects. This is of course assuming you are using Oracle supplied JDBC drivers.
March 09, 2005 - 12:54 pm UTC
Excellent, thanks
update batching vs passing arrays
bob, March 09, 2005 - 2:45 pm UTC
It sounded like someone had kidnapped Tom and was writing on his behalf for a little bit there. It was kind of funny how Menon was trying to convince Tom using Tom's own words of why he follows the methodology Tom teaches.
You can create the equivalent effect of batch inserts by passing arrays to PL/SQL for bulk inserts. It is nowhere near as convenient, though, as just batching single-row inserts from the client. I am weighing these options myself. The client application needs to delete-cascade and then bulk insert lots of rows from its memory, but I don't want all the users of this client app to have insert and delete on all the base tables just to delete old data and reinsert the new data. So stored procedures are the only approach, but they have the added hassle of passing arrays to PL/SQL from C++.
Thanx bob!
Menon, March 09, 2005 - 3:08 pm UTC
"You can create the equivalent effect of batch inserts by passing arrays to pl/sql for bulk inserts. "
Yes - that is another (somewhat more painful at times but perfectly valid) alternative to update batching - thanx for pointing it out.
Also, hopefully I came across as giving credit where it is due to Tom for whatever I "kidnapped" from him :)
Best Regards
stored procedures should use callable statements then
A reader, March 09, 2005 - 3:25 pm UTC
Thanks.
In our Java application, we always call stored procedures for any database operations like insert/update/delete etc.
We do have an OUT parameter to indicate any error message that occurred in the SP, to notify the client.
So in this case, my understanding is that we must use callable statements. At one time I thought they were one and the same; now I understand the difference.
Would you mind giving an example of a callable statement?
<<If the statement to be executed has only "inputs" -- prepared statements work
In general, stored procedures return outputs (OUT parameters) for those you must
use callable statements
>>
Thanks
March 09, 2005 - 3:39 pm UTC
I don't like that out parameter, should just let the exception propagate, but anyway.....
if you have out parameters you are already using callable statements?
Different results Statement vs PreparedStatement
Alex, April 04, 2005 - 4:34 pm UTC
Tom,
My question for the problem is simple: I cannot figure out why the following two statements return different results. I'm trying to capture the benefits of binding with PreparedStatements, but the query results are not the same.
Statement stmt = conn.createStatement();
//stmt.execute("alter session set sql_trace=true");
int n = 20;
for (int i = 0; i < n; i++) {
    stmt.execute("select id from data where id = " + i);
    System.out.println("The statement finished " + stmt.toString() + " " + i);
}
This yields:
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 40 156.25 18.86 0 0 0 0
Execute 40 0.00 22.26 0 0 0 0
Fetch 40 0.00 16.88 0 100 0 40
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 120 156.25 58.00 0 100 0 40
Now the PreparedStatement (which I'm using pretty much verbatim from Oracle 9i Java Programming):
if (ps == null) {
    ab = conn.prepareStatement("alter session set sql_trace=true");
    ab.execute();
    ps = conn.prepareStatement("select id from data where id = ?");
    int n = 20;
    for (int i = 0; i < n; i++) {
        ps.setInt(1, i);
        ps.execute();
        System.out.println("The statement finished " + ps.toString() + " " + i);
    }
    //ps.close();
}
This produces:
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.97 0 0 0 0
Execute 2 0.00 1.62 0 0 0 0
Fetch 2 0.00 1.29 0 5 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 0.00 3.88 0 5 0 2
The second time I ran the PreparedStatement it produced zeros across the board in the trace file. Is my logic flawed? I'm mostly concerned with the number of rows; that's how I know I didn't do the amount of work it should have.
April 04, 2005 - 5:36 pm UTC
create tables?
insert into's?
help us help you -- the entire program would be useful.
(but is it the if ps==null??? you only do this a handful of times, you didn't execute it 40 times?)
Updated code
Alex, April 05, 2005 - 10:16 am UTC
Tom,
I made a few corrections per your advice. They look OK to me now, but I still get weirdly different results. Most interestingly, different execute counts, AND neither is 20, which is what I set it to. I did change the query slightly to a count(*) to try to cause more work.
Here's the no bind statement:
public class Poor_code {

    static Connection conn = null;

    public Poor_code() throws RemoteException {
        super();
    }

    public static void main(String[] args) throws Exception {
        if (conn == null) {
            DriverManager.registerDriver(new OracleDriver());
            conn = DriverManager.getConnection("DatabaseURL", "user", "password");
        }
        Poor_code code = new Poor_code();
        code.exeBad();
    }

    public void exeBad() {
        try {
            Statement stmt = conn.createStatement();
            stmt.execute("alter session set sql_trace=true");
            int n = 20;
            for (int i = 0; i < n; i++) {
                stmt.execute("select count(*) from data where id = " + i);
                System.out.println("The statement finished " + stmt.toString() + " " + i);
            }
            // conn.close();
        } catch (Exception e) {
            System.out.println("Error caught is " + e);
        }
    }
}
With bind + 1 soft parse:
public class Good_Code {

    static PreparedStatement ps;
    static PreparedStatement ab;
    static Connection conn = null;

    public static void main(String[] args) throws Exception {
        if (conn == null) {
            DriverManager.registerDriver(new OracleDriver());
            conn = DriverManager.getConnection("DatabaseURL", "user", "password");
        }
        if (ps == null) {
            ps = conn.prepareStatement("select count(*) from data where id = ?");
        }
        ab = conn.prepareStatement("alter session set sql_trace=true");
        ab.execute();
        int n = 20;
        for (int i = 0; i < n; i++) {
            ps.setInt(1, i);
            ps.execute();
            System.out.println("The statement finished " + ps.toString() + " " + i);
        }
        // ps.close();
    }
}
April 05, 2005 - 12:17 pm UTC
don't you actually need to get the result set and fetch it?
performance testing
Alex, April 05, 2005 - 12:45 pm UTC
No I don't really care about the results. This is just to demonstrate how much better PreparedStatements are for the database. Of course, it isn't a valid test if my stats are showing different numbers of rows processed.
April 05, 2005 - 6:32 pm UTC
but unless you actually fetch data, what does "executing a select" mean.....
that was my point, not sure what it is even "doing".
How about executing
declare x number; begin x := 1; end;
declare x number; begin x := 2; end;
or the test i usually use:
create table t (x int);
and then insert into t values ( 1 ), (2), ....
Insert
Alex, April 06, 2005 - 9:31 am UTC
OK, I changed the select query to an insert -- insert into t values(?) (or the literal i) -- and here's what I observed:
No binds:
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 20 0.00 124.80 0 0 0 0
Execute 21 781.25 952.38 0 20 100 20
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 41 781.25 1077.18 0 20 100 20
Misses in library cache during parse: 20
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Then with binds:
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 1.02 0 0 0 0
Execute 21 1250.00 892.15 0 1 63 20
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 22 1250.00 893.17 0 1 63 20
Misses in library cache during parse: 0
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
So it parsed 19 more times but used more cpu? Can you make any sense out of this....
April 06, 2005 - 1:20 pm UTC
except that the cpu and elapsed numbers don't look sensible for 20 single row inserts under any conditions?
are you running 8i tkprof against 9i traces or what?
and don't forget, it is *less about time than it is about LATCHING*
latches are a type of lock
Locks are a type of serialization device
serialization devices *prevent* scalability
If you want the ready-made, already-done, been-there Java test with explanation -- I have it in Effective Oracle by Design.
tkprof
Alex, April 07, 2005 - 10:15 am UTC
Now that you mention it, yeah I am using 8i tkprof on 9i traces. I didn't know there was a difference. I'll get a newer version of tkprof and see if that does it.
April 07, 2005 - 11:26 am UTC
it'll divide those numbers by 10000 or something.
9i trace, 9i tkprof:
select *
from
all_objects
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.05 9 81 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1951 8.52 10.85 315 141623 0 29238
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1953 8.55 10.91 324 141704 0 29238
9i trace, 8i tkprof
select *
from
all_objects
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 300.00 573.43 9 81 0 0
Execute 1 0.00 3.57 0 0 0 0
Fetch 1951 85200.00 108559.12 315 141623 0 29238
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1953 85500.00 109136.12 324 141704 0 29238
so you were comparing:
total 41 781.25 1077.18 0 20 100 20
Execute 21 1250.00 892.15 0 1 63 20
.07 cpu seconds to
.12 cpu seconds
which we can very safely attribute to "noise", way too small to count as meaningful.
20 rows not enough
Alex, April 07, 2005 - 1:37 pm UTC
Well, thanks for the demo. I guess the differences are not apparent when doing a small amount of work. The example I was following in the book did 10,000 rows. I was trying to duplicate the effort of an actual statement that our application runs, but since it's not resource intensive, the benefits of binds etc. are not visible. Like you said, there are other things to consider.
I also noticed that sometimes when I run a trace, it appends to an already existing trace file. I thought it was overwriting it. So if the previous trace had 20 rows and the next run had 20, it would show 40 rows total in the file. Is there any rhyme or reason to why a new trace file is created versus appended to? You've been very helpful as always.
April 07, 2005 - 4:11 pm UTC
are you using shared server or dedicated server? connection pool or straight connection?
Alex, April 07, 2005 - 4:27 pm UTC
Dedicated and straight connection. I'm not positive about the straight connection though.
April 07, 2005 - 4:44 pm UTC
dedicated servers would not write to the same trace file if you log out and log back in. The trace files should be unique in user_dump_dest; one would only be appended to if you reused a connection.
OracleConnection.getImplicitCachingEnabled() not working as it should
pasko, June 20, 2005 - 12:01 pm UTC
Hi Tom,
and may be Menon if he sees this ,
I have been trying to use implicit statement caching with the Oracle JDBC drivers, but the feature does not seem to be working properly:
For Example:
import oracle.jdbc.OracleConnection;
import oracle.jdbc.pool.OracleDataSource;

OracleDataSource oracleDataSource = new OracleDataSource();
oracleDataSource.setUser(dbUser);
oracleDataSource.setPassword(dbPwd);
oracleDataSource.setURL(dbUrl);
OracleConnection conn = (OracleConnection) oracleDataSource.getConnection();

// Enable implicit statement cache
conn.setImplicitCachingEnabled(true);
System.out.println("Implicit Caching = " + conn.getImplicitCachingEnabled());
Here conn.getImplicitCachingEnabled() always returns false.
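A likely explanation, based on Oracle's JDBC documentation rather than on anything in this thread: the driver reports implicit caching as enabled only once the connection also has a positive statement cache size. A sketch of the fix, assuming the ojdbc driver is on the classpath (connection details are placeholders):

```java
import java.sql.SQLException;

import oracle.jdbc.OracleConnection;
import oracle.jdbc.pool.OracleDataSource;

public class CacheDemo {
    static OracleConnection connectWithCache(String url, String user, String pwd)
            throws SQLException {
        OracleDataSource ds = new OracleDataSource();
        ds.setURL(url);
        ds.setUser(user);
        ds.setPassword(pwd);

        OracleConnection conn = (OracleConnection) ds.getConnection();
        // Setting the flag alone is not enough; a nonzero cache size is required
        // before getImplicitCachingEnabled() reports true.
        conn.setStatementCacheSize(20);
        conn.setImplicitCachingEnabled(true);
        System.out.println("Implicit caching = " + conn.getImplicitCachingEnabled());
        return conn;
    }
}
```

This cannot be verified without an Oracle driver and database at hand, so treat it as a hypothesis to test against your environment.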
PreparedStatement binding question
Alan, January 24, 2006 - 11:25 am UTC
I have an SQL statement that has an 'in' condition. For example
select * from people where id_prsn in (100, 101, 102, 103);
However, the issue here is that the number of IDs within the IN condition is determined at run time. As such, I am not sure how to bind this to the statement. I tried doing the following:
select * from people where id_prsn in (?)
And then binding a string of "100, 101, 102, 103" but that does not work. How can I keep the sql static but bind different numbers of elements at run time? If not, can you give me a better solution than constructing the SQL at runtime based on the number of ids.
Thanks.
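One common technique for a variable-length IN list is to generate one placeholder per element at run time and bind each value individually, so every query with the same number of elements shares one cursor. A sketch (the helper names are made up for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InListDemo {
    // Expands a query into one bind placeholder per element: "... in (?,?,...,?)".
    // Assumes n >= 1; an empty IN list is not valid SQL.
    static String buildInListSql(String prefix, int n, String suffix) {
        StringBuilder sb = new StringBuilder(prefix).append(" in (");
        for (int i = 0; i < n; i++) {
            sb.append(i == 0 ? "?" : ",?");
        }
        return sb.append(")").append(suffix).toString();
    }

    static PreparedStatement bindIds(Connection conn, int[] ids) throws SQLException {
        String sql = buildInListSql("select * from people where id_prsn", ids.length, "");
        PreparedStatement ps = conn.prepareStatement(sql);
        for (int i = 0; i < ids.length; i++) {
            ps.setInt(i + 1, ids[i]);  // bind positions are 1-based
        }
        return ps;
    }
}
```

Binding the whole list as one string ("100, 101, 102, 103") cannot work because it binds a single scalar, so the database compares id_prsn to that one literal string. With the placeholder approach, each distinct element count produces a different statement text, so you get a handful of cursors rather than one per query; binding a collection to a server-side table function avoids even that, at the cost of PL/SQL setup.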
January 24, 2006 - 8:46 pm UTC
Batching Stored Procedure calls
Daire, February 06, 2008 - 7:24 am UTC
Hi, great information in this (old) thread. Can you just clarify then - I have a Java program (Java 1.4) that loads data to an Oracle database (Oracle 9i) using a stored procedure. The stored Procedure does not return any values to the Java program. I am not loading a massive amount of records (maybe up to 15,000 at a time) but would like to use batch processing.
Am I right in saying, then, that as the SP does not return any values I should use oracle.jdbc.driver.OraclePreparedStatement (as opposed to oracle.jdbc.driver.OracleCallableStatement), which will allow me to use the setDefaultExecuteBatch(), executeUpdate() and sendBatch() methods, and that this will result in "true batching" as opposed to just supporting batching syntax?
Thanks.
February 06, 2008 - 9:28 am UTC
if it does not return anything (excellent, you are using exceptions correctly I would guess!), then a prepared statement would be what you want.
however, batching of stored procedure calls doesn't work that way (from what I know). Oracle's proprietary batching works with conventional SQL but not with stored procedures -- you'd have to use standard JDBC batching (addBatch, executeBatch).
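Standard JDBC batching of a procedure call might look like this sketch (the procedure name `load_row` and the batch size are illustrative; note the Oracle driver may still make one round trip per element for callable statements, as discussed above):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class BatchDemo {
    static final int BATCH_SIZE = 500;

    // Sums the per-statement update counts returned by executeBatch();
    // pure helper, verifiable without a database.
    static int totalUpdateCount(int[] counts) {
        int total = 0;
        for (int c : counts) {
            total += c;
        }
        return total;
    }

    static int loadAll(Connection conn, int[] ids, String[] names) throws SQLException {
        int total = 0;
        try (CallableStatement cs = conn.prepareCall("{ call load_row(?, ?) }")) {
            for (int i = 0; i < ids.length; i++) {
                cs.setInt(1, ids[i]);
                cs.setString(2, names[i]);
                cs.addBatch();
                if ((i + 1) % BATCH_SIZE == 0) {
                    total += totalUpdateCount(cs.executeBatch());
                }
            }
            total += totalUpdateCount(cs.executeBatch());  // flush the final partial batch
        }
        return total;
    }
}
```

One caveat: for callable statements, drivers are allowed to return Statement.SUCCESS_NO_INFO (-2) in the update-count array, in which case the sum is not a row count.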
PreparedStatement datatype
KC, September 13, 2009 - 9:01 am UTC
Hi Tom,
I am now supporting a J2EE application. The application is using PreparedStatement and bind variables where applicable. However one thing I noticed is that developers are always using PreparedStatement.setString to bind a parameter, regardless of data type of the column in the database table.
I have seen cases where setString is used against a NUMBER, VARCHAR, Timestamp column in database.
For example, consider the following example where I have a table t and a NUMBER type column named num_col:
Table t
------------------
create table t (num_col number primary key, resource_name varchar2(25));
insert into t values(1,'resource 1');
insert into t values(2,'resource 2');
insert into t values(3,'resource 3');
commit;
The Java code, use setString on the number column, as follows:
Java
--------------------
PreparedStatement pstmt = conn.prepareStatement("select * from t where num_col = ?");
pstmt.setString(1, "1");
Similarly, the developers use setString against VARCHAR, Date, Timestamp, ...etc, you name it. To further set the stage, all queries in this application are SELECT statements. There is no INSERT/UPDATE/DELETE. Data modifications are done at other layers.
In light of this situation, I want to get your advice:
1. What is the implication of the above coding practice?
2. Specifically, would it have negative impact to performance and how?
3. Would it cause the implicit conversion mentioned in the index myth "Why Isn't My Index Getting Used?" in your Expert Oracle book?
4. Assuming that you are the development lead, what possible action(s) would you take?
Thanks.
September 14, 2009 - 1:32 pm UTC
PreparedStatement pstmt = conn.prepareStatement("select * from t where num_col = ?");
pstmt.setString(1, "1");
fortunately, that is the same as:
select * from t where num_col = TO_NUMBER(?)
however, it would be considered an extremely bad practice; it makes the developers look like rank amateurs, and they will be made fun of.
And they will get horribly BURNT one day. If they do:
select * from t where DATE_COL = ?
that is the same as
where DATE_COL= to_date(?)
which means it uses the CURRENT DEFAULT date format -- which can and will change (and has: the default date format of Oracle out of the box has changed over the years!).
And when it does - bam, their application either
a) stops working
b) even worse, keeps working but retrieves the wrong data because the date is interpreted wrong
They should absolutely use:
where date_col = to_date( ?, 'dd-mon-yyyy' )
assuming their format is dd-mon-yyyy, they would use whatever string format they decided to use.
Better yet, use a date datatype and bind the date, then no worries about formats or implicit conversions.
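Put concretely, the typed-bind alternative looks like this sketch (table and column names taken from the example above):

```java
import java.sql.Connection;
import java.sql.Date;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TypedBinds {
    // Bind the number as a number: no TO_NUMBER on the way in.
    static PreparedStatement byNumber(Connection conn, int id) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("select * from t where num_col = ?");
        ps.setInt(1, id);  // not setString(1, "1")
        return ps;
    }

    // Bind the date as a DATE: no NLS format mask is involved at all.
    static PreparedStatement byDate(Connection conn, Date d) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("select * from t where date_col = ?");
        ps.setDate(1, d);
        return ps;
    }
}
```

java.sql.Date.valueOf("2009-09-14") parses the unambiguous JDBC yyyy-mm-dd form on the client, so the session's NLS_DATE_FORMAT never enters the picture.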
Implicit conversions is something a newbie developer
a) does
b) because they are not aware of how evil they are.
4. Assuming that you are the development lead, what possible action(s) would you take? First, educate on the right way; then, if nothing changes, make fun of them in code reviews and reject their code without looking at anything else, until it is correct.
PreparedStatement datatype
KC, September 14, 2009 - 2:21 pm UTC
Hi Tom,
... They should absolutely use: where date_col = to_date( ?, 'dd-mon-yyyy' ) ...
Yup, they do this and setString a date to it. Thus the query should return correct data, given that the date format is explicitly specified.
... equivalent to select * from t where num_col = TO_NUMBER(?)....
Am I correct to say that this is not a big performance hit, since TO_NUMBER is executed once per query?
Or is this always the case? Would Oracle implicitly apply a function to the column, as follows:
"select * from t where to_char(num_col) = ?"
This would definitely change the execution plan because the index on primary key num_col is now precluded.
I also want to present my case to the developers from a performance perspective -- for example, why setString on a datatype other than varchar drags down performance. It would be great if you could shed some light. Thanks.
September 14, 2009 - 3:04 pm UTC
... Am I correct to say that this is not a big performance hit since TO_NUMBER is
executed once per query.
...
yes, but it makes them look like programmers who have not yet graduated from high school, writing their first programs.
... Or is this always the case? Would Oracle implicitly applied a function to the
column as follows:
"select * from t where to_char(num_col) = ?"
...
we cannot do that - when you compare a string to a number, the string is converted to a number
for you see, the strings:
1
1.0
001.
+1
1e0
1.000000000000000000000000
000000000000000000000000000001.000000000000000000000000
are all the same number - 1. So to compare a string to "X", we convert the string to the type of X:
string to date - the string is to_date'd
string to number - the string is to_number'd
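Java's BigDecimal normalizes string representations the same way, which makes the point easy to demonstrate without a database (illustrative only; Oracle's TO_NUMBER does the analogous conversion server-side):

```java
import java.math.BigDecimal;

public class SameNumber {
    // True when every string parses to a value numerically equal to 1.
    // compareTo is used rather than equals, because equals also compares
    // scale ("1" vs "1.0"), which is exactly the difference being ignored.
    static boolean allEqualOne(String... reps) {
        for (String s : reps) {
            if (new BigDecimal(s).compareTo(BigDecimal.ONE) != 0) {
                return false;
            }
        }
        return true;
    }
}
```

For example, allEqualOne("1", "1.0", "+1", "1e0", "1.000000000000000000000000") is true, even though no two of those strings are equal as strings.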
... I also want to present my case to the developers from performance perspective. ...
why can't you tell them it makes them look like first-year programmers without any experience at all -- since anyone with experience would have encountered firsthand the devil that is "implicit conversion", would have been burnt by it in the past, and would avoid it at all costs.
Or maybe they are just too immature as far as developing software goes and haven't been around long enough.
that is - shame them into it.
It shouldn't take "this is bad for performance" to do something right. No one said it would affect performance in this case. It could return wrong answers (that is generally considered 'a bad thing'), it could return an error (that is also generally considered a bad thing). Why not just use the reason "it is wrong", "implicit conversions should be avoided at all costs".
Tell you what - you are the DBA right? Change the default NLS date mask. And go on vacation. Or, change the default every day for the next month using 30 different formats - and don't tell anyone. When they call about the error - simply demonstrate how to_date(?,'dd-mon-yyyy') fixes it right up, you don't know what the problem could be.
or change default nls_numeric_characters
Sokrates, September 14, 2009 - 3:48 pm UTC
... and (implicit) to_number will fail as well
September 14, 2009 - 5:48 pm UTC
if they use decimals - absolutely.
comma or dot, which is right - neither and both :)
PreparedStatement datatype
KC YEUNG, September 14, 2009 - 11:22 pm UTC
Hi Tom,
Thanks for the analysis and background information.