It certainly *could* be.
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:53174778859588
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2818414600346806270

Start a fresh session and do a full refresh. As well as 'db file sequential read', check out the 'table fetch continued row' session statistic. If that is climbing quickly, then you are chaining rows, e.g.
-- 200 columns
SQL> create table T1 ( c1 int );
Table created.
SQL>
SQL> begin
2 for i in 2 .. 200 loop
3 execute immediate 'alter table T1 add c'||i||' int';
4 end loop;
5 end;
6 /
PL/SQL procedure successfully completed.
SQL>
SQL> declare
2 l_ins varchar2(32767) := 'insert into t1 values (';
3 begin
4 for i in 1 .. 200 loop
5 l_ins := l_ins || i||',';
6 end loop;
7 execute immediate 'alter session set cursor_sharing = force';
8 for i in 1 .. 1000 loop
9 execute immediate rtrim(l_ins,',')||')';
10 end loop;
11 commit;
12 end;
13 /
PL/SQL procedure successfully completed.
SQL>
SQL> select st.value
2 from v$statname s, v$sesstat st
3 where st.STATISTIC# = s.STATISTIC#
4 and st.sid = sys_context('USERENV','SID')
5 and s.name = 'table fetch continued row';
VALUE
----------
0
SQL>
SQL> select count(c200)
2 from t1;
COUNT(C200)
-----------
1000
SQL>
SQL> select st.value
2 from v$statname s, v$sesstat st
3 where st.STATISTIC# = s.STATISTIC#
4 and st.sid = sys_context('USERENV','SID')
5 and s.name = 'table fetch continued row';
VALUE
----------
0
SQL>
SQL>
Compare that to 400 columns:
SQL> drop table T1 purge;
Table dropped.
SQL> create table T1 ( c1 int );
Table created.
SQL>
SQL> begin
2 for i in 2 .. 400 loop
3 execute immediate 'alter table T1 add c'||i||' int';
4 end loop;
5 end;
6 /
PL/SQL procedure successfully completed.
SQL>
SQL> declare
2 l_ins varchar2(32767) := 'insert into t1 values (';
3 begin
4 for i in 1 .. 400 loop
5 l_ins := l_ins || i||',';
6 end loop;
7 execute immediate 'alter session set cursor_sharing = force';
8 for i in 1 .. 1000 loop
9 execute immediate rtrim(l_ins,',')||')';
10 end loop;
11 commit;
12 end;
13 /
PL/SQL procedure successfully completed.
SQL>
SQL> select st.value
2 from v$statname s, v$sesstat st
3 where st.STATISTIC# = s.STATISTIC#
4 and st.sid = sys_context('USERENV','SID')
5 and s.name = 'table fetch continued row';
VALUE
----------
0
SQL>
SQL> select count(c400)
2 from t1;
COUNT(C400)
-----------
1000
SQL>
SQL> select st.value
2 from v$statname s, v$sesstat st
3 where st.STATISTIC# = s.STATISTIC#
4 and st.sid = sys_context('USERENV','SID')
5 and s.name = 'table fetch continued row';
VALUE
----------
296
If you *really* need all those columns, then you can't eliminate the issue altogether, but you can reduce its impact as much as possible by putting the "most-nullable" columns past the 255 mark. A row with more than 255 columns is stored as multiple row pieces, but it's only when columns 256 onwards *contain* data that we need to do the piecewise fetch - which is why the 200-column table above reported zero continued-row fetches and the 400-column table did not.
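A sketch of that rebuild, assuming you can afford an outage to recreate the table. The column names (and the split between frequently-populated and mostly-null columns) are purely illustrative - you would substitute your own, and re-create indexes, constraints, grants and statistics afterwards:

```sql
-- Rebuild so the mostly-null columns sit past position 255.
-- Column names here are hypothetical placeholders.
create table t1_new as
select c1, c2, c3,          -- frequently-populated columns first
       sparse_a, sparse_b   -- mostly-null columns last
from   t1;

drop table t1 purge;
rename t1_new to t1;
```

With that ordering, rows where the trailing columns are null stay in a single row piece, so a query touching only the leading columns never pays the continued-row penalty.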