Lengthening the maximum length of VARCHAR2 columns: just a data dictionary update?


Question and Answer

Tom Kyte

Thanks for the question, Duke.

Asked: April 30, 2010 - 11:00 am UTC

Last updated: June 22, 2010 - 7:51 am UTC

Version: 10.2.0.4.0

Viewed 10K+ times

You Asked

I'll admit I was dumbfounded when someone suggested that a new VARCHAR2 column on a table needed to be larger than we expect. Why? Just in case some day we need more room and we've already got billions of records in the table. It'll take a while to expand the column if it's a big table, he said.

Pshaw, I said, it's probably just a simple data dictionary update, and the size of the table is irrelevant. And that's what I infer from a simple test:

create table T compress nologging as
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects union all
select * from dba_objects
/
SQL> select count(*) From T;

            COUNT(*)
--------------------
           1,109,927

SQL> select data_length from user_tab_columns where column_name = 'OBJECT_NAME' and table_name = 'T';

DATA_LENGTH
-----------
        128

Elapsed: 00:00:00.04
SQL> alter table T modify object_name varchar2(255);

Table altered.

Elapsed: 00:00:00.03
SQL> alter table T modify object_name varchar2(4000);

Table altered.

Elapsed: 00:00:00.04


If Oracle were some flat-file-based system that relies on fixed-width fields, I could see the point. But that's not even remotely true, is it?

and Tom said...

for varchar types - what you say is true, since a varchar is a varying-width field.

Same for number - you can increase the precision of a number in the same way.
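A quick sketch of that for number (the table name t2 is just for illustration, not from the original test):

create table t2 ( amt number(10,2) );
insert into t2 values ( 12345678.90 );
commit;

-- widening the precision should be a dictionary-only change,
-- fast no matter how many rows t2 holds:
alter table t2 modify amt number(12,2);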


However, if you have a char(n) and alter it to be char(n+1), that would require updating every row, since a char(n) is stored using either 0 (null) or n characters only (it is blank-padded).
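A small sketch of that difference (the table name zc is hypothetical, and the counts assume a single-byte character set); vsize() shows the stored width tracking the declared length, which is why every row has to be touched:

create table zc ( c char(5) );
insert into zc values ( 'ab' );
commit;

select vsize(c) from zc;        -- 5: the value is blank-padded to the declared length

alter table zc modify c char(6);

select vsize(c) from zc;        -- 6: every stored value had to be re-padded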


So, for the varchar - you can grow it instantly whenever you need to (but not necessarily shrink it).

Which means - please use the right type and size for now; do not oversize anything "just in case".


Comments

VARCHAR (phat) and CHAR (well, ...a homonym/homophone)

Duke Ganote, April 30, 2010 - 11:55 am UTC

BYTE -> CHAR changes?

Duke Ganote, June 11, 2010 - 9:54 pm UTC

More than a simple dictionary update?

SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 11 22:41:32 2010

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning and OLAP options

ORACLE_INS:DISBDATA\ciopdba> select count(*) from vo_lic_loa;

            COUNT(*)
--------------------
           2,283,400

  1  select 'alter table '||table_name||' modify '||column_name||' '||data_type||'('||data_length||' CHAR);'
  2    from user_tab_columns
  3   where table_name in ( select table_name from user_tables )
  4     and table_name like 'VO_LIC%'
  5*    and char_used = 'B'
ORACLE_INS:DISBDATA\ciopdba> /

'ALTERTABLE'||TABLE_NAME||'MODIFY'||COLUMN_NAME||''||DATA_TYPE||'('||DATA_LENGTH||'CHAR);'
--------------------------------------------------------------------------------------
alter table VO_LIC_LOA modify STATE_CD CHAR(2 CHAR);
alter table VO_LIC_LOA modify LINE_OF_AUTH_NM CHAR(70 CHAR);
alter table VO_LIC_LOA modify CH_ASGN_ID CHAR(6 CHAR);
alter table VO_LIC_LOA modify PR_ASGN_ID CHAR(6 CHAR);
alter table VO_LIC_LOA modify LICENSEE_NM VARCHAR2(50 CHAR);
alter table VO_LIC_LOA modify LIC_NBR VARCHAR2(30 CHAR);
alter table VO_LIC_LOA modify LIC_STATUS_CD CHAR(3 CHAR);
alter table VO_LIC_LOA modify CREATE_USER_ID CHAR(30 CHAR);
alter table VO_LIC_LOA modify REC_STATUS_CD CHAR(2 CHAR);
alter table VO_LIC_LOA modify HISTORY_REC_IND CHAR(1 CHAR);
alter table VO_LIC_LOA modify UPDATE_USER_ID CHAR(30 CHAR);

11 rows selected.

Elapsed: 00:00:04.60
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify STATE_CD CHAR(2 CHAR);

Table altered.

Elapsed: 00:03:09.52
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify LINE_OF_AUTH_NM CHAR(70 CHAR);

Table altered.

Elapsed: 00:02:45.27
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify CH_ASGN_ID CHAR(6 CHAR);

Table altered.

Elapsed: 00:02:09.30
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify PR_ASGN_ID CHAR(6 CHAR);

Table altered.

Elapsed: 00:01:25.20
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify LICENSEE_NM VARCHAR2(50 CHAR);

Table altered.

Elapsed: 00:00:01.87
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify LIC_NBR VARCHAR2(30 CHAR);

Table altered.

Elapsed: 00:00:00.31
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify LIC_STATUS_CD CHAR(3 CHAR);

Table altered.

Elapsed: 00:01:25.28
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify CREATE_USER_ID CHAR(30 CHAR);

Table altered.

Elapsed: 00:02:35.50
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify REC_STATUS_CD CHAR(2 CHAR);

Table altered.

Elapsed: 00:02:23.78
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify HISTORY_REC_IND CHAR(1 CHAR);

Table altered.

Elapsed: 00:01:59.64
ORACLE_INS:DISBDATA\ciopdba> alter table VO_LIC_LOA modify UPDATE_USER_ID CHAR(30 CHAR);

Table altered.

Elapsed: 00:02:42.59


Tom Kyte
June 22, 2010 - 7:51 am UTC

you are taking it from byte semantics to character semantics - very similar to:

However, if you have a char(n) and alter it to be char(n+1), that would require updating every row since a char(n) is stored using either 0 (null) or N characters only (it is blank padded)
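As an aside (not part of the original answer): if you want new columns to use character semantics from the start, so a mass BYTE-to-CHAR conversion is never needed later, the session parameter nls_length_semantics can be set before the columns are created. A sketch, with a hypothetical table name:

alter session set nls_length_semantics = char;

create table t_char_demo ( nm varchar2(30) );   -- 30 characters, not 30 bytes

select column_name, char_used, char_length
  from user_tab_columns
 where table_name = 'T_CHAR_DEMO';
-- char_used = 'C' confirms character semantics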

doing something...

Duke Ganote, June 16, 2010 - 2:21 pm UTC

Can it be spending one-and-a-half minutes on a simple dictionary update?

SQL> create table zz ( z char(2 BYTE) );

Table created.

Elapsed: 00:00:00.03
SQL> insert into zz select ' ' from dual connect by level < 10000000;

9999999 rows created.

Elapsed: 00:00:08.78
SQL> commit;

Commit complete.

Elapsed: 00:00:00.10
SQL> select bytes, blocks from user_segments where segment_name = 'ZZ';

               BYTES     BLOCKS
-------------------- ----------
         125,566,976      15328

Elapsed: 00:00:00.12
SQL> alter table zz modify z char(2 CHAR);

Table altered.

Elapsed: 00:01:35.00
SQL> select bytes, blocks from user_segments where segment_name = 'ZZ';

               BYTES     BLOCKS
-------------------- ----------
         125,566,976      15328
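One way to see whether such an ALTER really touches every row (rather than just the dictionary) is to compare the session's redo generation before and after; a sketch, assuming a freshly built zz as above and select privilege on the v$ views:

select n.name, m.value
  from v$mystat m join v$statname n on n.statistic# = m.statistic#
 where n.name = 'redo size';

alter table zz modify z char(2 char);

-- run the same query again: a dictionary-only change generates almost no redo,
-- while a per-row update generates redo roughly proportional to the table size
select n.name, m.value
  from v$mystat m join v$statname n on n.statistic# = m.statistic#
 where n.name = 'redo size';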


This is not to say...

Russ, November 24, 2010 - 2:12 pm UTC

however, that there is absolutely no impact from increasing the size/scale of number or varchar columns. You MUST evaluate your code for something like the following:

create table t (c1 number(10), c2 varchar2(10));

Table created.

create procedure p
is
 v1 number(10);  -- Matches C1.
 v2 varchar2(10);  -- Matches C2.
begin
 select c1 into v1 from t;
 select c2 into v2 from t;
end;
/

Procedure created.

insert into t values ( 1234567890, '1234567890');

1 row created.

commit;

Commit complete.

exec p;

PL/SQL procedure successfully completed.

alter table t modify c1 number(11); -- C1 now eleven digits.

Table altered.

update t set c1 = 12345678901;

1 row updated.

commit;

Commit complete.

exec p;

BEGIN p; END;
ORA-06502: PL/SQL: numeric or value error: number precision too large

alter table t modify c2 varchar2(11); -- C2 now 11 chars.

Table altered.

update t set c1 = 1234567890, c2 = '12345678901';

1 row updated.

commit;

Commit complete.

exec p;

BEGIN p; END;
ORA-06502: PL/SQL: numeric or value error: character string buffer too small


Note that procedure p was not modified for the new larger sizes of the columns. There is no impact until p is executed with the actual data in the table being larger than the size of its internal variables. This could be minutes after implementation--or days or weeks or months.
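One common safeguard against exactly this (a sketch, not from Russ's original post): anchor the local variables to the columns with %TYPE, so the declarations pick up the new sizes when the procedure is recompiled after the ALTER rather than needing to be edited by hand:

create or replace procedure p
is
 v1 t.c1%TYPE;   -- tracks C1's current precision
 v2 t.c2%TYPE;   -- tracks C2's current length
begin
 select c1 into v1 from t;
 select c2 into v2 from t;
end;
/

The ALTER still invalidates p, but on its next execution Oracle recompiles it against the new column definitions, and the ORA-06502 errors above go away.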

As innocuous as such table changes may seem, they are not without risk. Application code should be examined and tested after such a change. Your configuration-management (CM) system/tools will make it easier to find the relationships between code and tables. You do use CM tools, right?

Kind regards,
Russ