nope. If you access those varchar2 columns frequently but do not always select the columns placed in front of them - this approach would actually degrade performance (minimally).
pretty much all data is stored in a "string like" fashion.
A string will have a null flag, a length byte or bytes (1 byte for <=250 characters, 2 bytes for >250 characters) followed by the data.
A number will have a null flag/length byte (numbers are 0..22 bytes in length) followed by the varying amount of data.
A binary_float will consume 5 bytes - a leading null flag/length byte followed by 4 bytes of floating point data.
A binary_double will consume 9 bytes - leading null flag/length followed by 8 bytes of floating point data.
and so on. We have to read that length byte to figure out how many bytes of the row constitute that column - skip over that data to get to the next length byte - and so on.
So, the further down the 'create' list a column is, the longer it takes to retrieve that column.
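That walk can be sketched in a few lines. This is a hypothetical, much-simplified length-prefixed encoding loosely modeled on the description above (single length byte for short values, a marker plus two-byte length for long ones) - not Oracle's actual on-disk row format - but it shows why fetching column N means reading past columns 1..N-1 first:

```python
def read_column(row: bytes, pos: int):
    """Decode one length-prefixed column starting at pos.

    Simplified encoding (illustrative only): a length byte of 250 or
    less means the value follows directly; anything larger is treated
    as a marker and the next two bytes hold the real length.
    """
    length = row[pos]
    if length <= 250:
        start = pos + 1
    else:  # long-value marker: next 2 bytes hold the length
        length = int.from_bytes(row[pos + 1:pos + 3], "big")
        start = pos + 3
    return row[start:start + length], start + length


def get_column(row: bytes, n: int) -> bytes:
    """Fetch the n-th (0-based) column of a row.

    Every preceding column must be walked over first, so the cost of
    retrieving a column grows with its position in the row.
    """
    pos = 0
    for _ in range(n):
        _, pos = read_column(row, pos)  # skip over preceding columns
    value, _ = read_column(row, pos)
    return value


# a three-column "row": values b"abc", b"hi", b"z"
row = b"\x03abc" + b"\x02hi" + b"\x01z"
print(get_column(row, 2))  # b'z' - but only after skipping two columns
```

Getting to c1000 in the test below means doing that skip 999 times per row, for every row fetched.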
ops$tkyte%ORA11GR2> @test
ops$tkyte%ORA11GR2> set echo on
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> /*
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> drop table t;
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> declare
ops$tkyte%ORA11GR2> l_create long := 'create table t ( c1 number';
ops$tkyte%ORA11GR2> begin
ops$tkyte%ORA11GR2> for i in 2 .. 1000
ops$tkyte%ORA11GR2> loop
ops$tkyte%ORA11GR2> l_create := l_create || ',c'||i||' number default ' || i;
ops$tkyte%ORA11GR2> end loop;
ops$tkyte%ORA11GR2> execute immediate l_create || ')';
ops$tkyte%ORA11GR2> end;
ops$tkyte%ORA11GR2> /
ops$tkyte%ORA11GR2> insert into t (c1, c1000 ) select rownum, rownum from all_objects;
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );
ops$tkyte%ORA11GR2> */
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> @trace
ops$tkyte%ORA11GR2> alter session set events '10046 trace name context forever, level 12';
Session altered.
ops$tkyte%ORA11GR2> begin
2 for i in 1 .. 10
3 loop
4 for x in ( select c1 from t )
5 loop
6 null;
7 end loop;
8 for x in ( select c1000 from t )
9 loop
10 null;
11 end loop;
12 end loop;
13 end;
14 /
PL/SQL procedure successfully completed.
SELECT C1 FROM T
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 10 0.00 0.00 0 0 0 0
Fetch 7230 6.41 15.72 414610 420920 0 722790
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7241 6.41 15.72 414610 420920 0 722790
********************************************************************************
SELECT C1000 FROM T
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 10 0.00 0.00 0 0 0 0
Fetch 7230 8.66 17.93 421260 3304860 0 722790
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7241 8.66 17.94 421260 3304860 0 722790
In that case, the overhead is partly the parsing of 1000 columns and partly chasing down the chained row pieces (any row with more than 254 columns will be stored in multiple pieces).
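A quick back-of-the-envelope for the piece count, assuming the per-piece column limit quoted above (254 - `row_pieces` is my own illustrative helper, not an Oracle API):

```python
import math

def row_pieces(ncols: int, cols_per_piece: int = 254) -> int:
    """Minimum number of row pieces for a row with ncols columns,
    assuming at most cols_per_piece columns per piece."""
    return max(1, math.ceil(ncols / cols_per_piece))

print(row_pieces(250))   # 1 - the row fits in a single piece
print(row_pieces(1000))  # 4 - fetching c1000 walks a chain of pieces
```

That is why the 250-column version of the test below shows no chained-row overhead, only the length-byte walking cost.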
If we change 1000 to 250 in the example code above:
SELECT C1 FROM T
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 10 0.00 0.00 0 0 0 0
Fetch 7230 0.62 0.62 1117 94520 0 722790
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7241 0.62 0.62 1117 94520 0 722790
********************************************************************************
SELECT C250 FROM T
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 10 0.00 0.00 0 0 0 0
Fetch 7230 0.96 0.97 7 94520 0 722790
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7241 0.96 0.97 7 94520 0 722790
there is still a difference - even without the multiple row pieces.
so, if the varchar2 columns are almost always selected, but the columns the DBA makes you put first are only sometimes selected - they've actually penalized your performance.
I'll just say again: my recommendation would be to put the columns in the order that makes the most sense from an aesthetic and design perspective, more than anything else.