Adding a column with a default value to a table

Chris Saxon

Thanks for the question, Ricardo.

Asked: April 20, 2010 - 5:13 pm UTC

Last updated: June 29, 2022 - 11:06 am UTC

Version: 10.2

Viewed 100K+ times!

You Asked

Tom, I have a doubt about adding columns with default values to a table. Recently, I stumbled upon an Arup Nanda article on it where he states the following:

"When a user selects the column for an existing record, Oracle gets the fact about the default value from the data dictionary and returns it to the user" he further says "you can define a new column as not null and with a default value and still not incur any penalty for redo and undo generation."

which actually caught my attention. Reading through the documentation I see this: "When you specify a default value, the database immediately updates each row with the default value. Note that this can take some time, and that during the update, there is an exclusive DML lock on the table." It makes no mention of Oracle getting the default value of the column from the dictionary. I do see, however, that a new sentence has been added to the 11g documentation regarding this: "For some types of tables (for example, tables without LOB columns), if you specify both a NOT NULL constraint and a default value, the database can optimize the column add operation and greatly reduce the amount of time that the table is locked for DML." But again, it doesn't mention what KIND of optimization Oracle does. I have done a very light test adding the column with the default value and not null constraint in a single statement, as opposed to doing it step by step (add the column, update to the default value, then add the not null constraint), and whereas redo generation (according to v$mystat) is reduced, there is no significant reduction in undo generation (this is on 10gR2).

So my question is: is it correct to assume, as Arup Nanda says, that Oracle gets the default value of the column at "query time", or does Oracle still have to update all the rows when you specify a default value and a not null constraint on a column?

For your reference, here are the links to Arup Nanda article and 11g documentation

http://www.oracle.com/technology/pub/articles/oracle-database-11g-top-features/11g-schemamanagement.html
http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tables006.htm#sthref1856

Thanks!

and Tom said...

Arup is talking about 11g - a new feature.

You posted that your version is 10g Release 2; you do not have fast add column yet.


If a new column is added to a table, the column is initially NULL unless you specify the DEFAULT clause. When you specify a default value, the database immediately updates each row with the default value. Note that this can take some time, and that during the update, there is an exclusive DML lock on the table. For some types of tables (for example, tables without LOB columns), if you specify both a NOT NULL constraint and a default value, the database can optimize the column add operation and greatly reduce the amount of time that the table is locked for DML.

The documentation is correct for 11g - the 11g documentation would not be correct when discussing 10g.

Your testing in 10g is completely invalid since - in 10g - this feature DID NOT EXIST.

ops$tkyte%ORA11GR1> create table t pctfree 50 as select * from all_objects;

Table created.

ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> column redo new_val R
ops$tkyte%ORA11GR1> column undo new_val U
ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> select max(decode( a.name, 'redo size', b.value )) redo,
  2         max(decode( a.name, 'undo change vector size', b.value )) undo
  3  from v$statname a, v$mystat b
  4  where a.statistic# = b.statistic#
  5  and a.name in ( 'redo size', 'undo change vector size' );

      REDO       UNDO
---------- ----------
 112138396   31140348

ops$tkyte%ORA11GR1> alter table t add y char(80) default 'x' not null;

Table altered.

ops$tkyte%ORA11GR1> select max(decode( a.name, 'redo size', b.value )) redo,
  2         max(decode( a.name, 'redo size', b.value )) -&R redo_diff,
  3         max(decode( a.name, 'undo change vector size', b.value )) undo,
  4         max(decode( a.name, 'undo change vector size', b.value ))-&U undo_diff
  5  from v$statname a, v$mystat b
  6  where a.statistic# = b.statistic#
  7  and a.name in ( 'redo size', 'undo change vector size' );

      REDO  REDO_DIFF       UNDO  UNDO_DIFF
---------- ---------- ---------- ----------
 112147868       9472   31143192       2844

ops$tkyte%ORA11GR1> alter table t add x char(80) default 'x' null;

Table altered.

ops$tkyte%ORA11GR1> select max(decode( a.name, 'redo size', b.value )) redo,
  2         max(decode( a.name, 'redo size', b.value )) -&R redo_diff,
  3         max(decode( a.name, 'undo change vector size', b.value )) undo,
  4         max(decode( a.name, 'undo change vector size', b.value ))-&U undo_diff
  5  from v$statname a, v$mystat b
  6  where a.statistic# = b.statistic#
  7  and a.name in ( 'redo size', 'undo change vector size' );

      REDO  REDO_DIFF       UNDO  UNDO_DIFF
---------- ---------- ---------- ----------
 148759788   36611920   41369292   10226100



In 11g you can see there is a vast difference between adding a NOT NULL column with a default and a NULLable column with a default.


Comments

?

Ricardo, April 21, 2010 - 12:07 pm UTC

Thanks Tom. So I believe it is correct to assume that in 10gR2 and before:

alter table t add y char(80) default 'x' not null;

is no different than

alter table t add y char(80);
update t set y = 'x';
alter table t modify y not null;

in terms of redo/undo generation?

Tom Kyte
April 21, 2010 - 12:28 pm UTC

correct, the new 11g feature is "fast add column". New in 11g - what you would expect to find in an article entitled "Oracle Database 11g: The Top New Features for DBAs and Developers"

:)

?

Ricardo, April 21, 2010 - 12:22 pm UTC

Sorry, not the same operations above, but with respect to the already existing values I meant.
Tom Kyte
April 21, 2010 - 12:29 pm UTC

I do not know what you mean by that.

A reader, September 15, 2010 - 5:50 pm UTC

I have a partitioned table of size 80G. Is there any faster way to add a NOT NULL column with a default value?
Would a single alter table be better, or adding the column, updating the values, and then adding the not null constraint?

Tom Kyte
September 16, 2010 - 6:34 am UTC

see original answer, we discussed this. In 11g - fast add column, before that - it was a) lock table, b) add column, c) update column, d) commit.


If you want to do this online (80gb is small to medium sized, not very large) you could use dbms_redefinition in 10g.


A reader, September 16, 2010 - 5:21 pm UTC

I should have mentioned it's on 10g...
I'll check dbms_redef...

Thanks a bunch.

Be careful when adding columns if you use Designer table API

Adam Martin, April 28, 2011 - 3:03 pm UTC

Be careful when adding not-null columns with default values in 11g if you use triggers.
This is especially problematic if you use Designer-generated table APIs.
They assign all column :new values to variables and back again.

There is a bug that has been around since 11g, and still exists in 11.2.0.2,
in update triggers and fast-add column default values.  If a variable is set 
to the :new.column, and then :new.column is set back to the variable, the default value 
is lost, and the trigger attempts to insert a null.  Designer table APIs do this so you 
may get a "cannot insert null" error on a column you were not even updating.

By the way, the workaround is to update every row, and set the new column equal to itself 
(defeating the purpose of the fast add.)

Example: 

SQL> create table t (c1 number, c2 number);
 
Table created
SQL> insert into t values (1,1);
 
1 row inserted
SQL> alter table t add (c3 number default 0 not null);
 
Table altered
SQL> create or replace trigger t_bur_tr
  2  before update on t for each row
  3  declare
  4    v_new1 number;
  5    v_new2 number;
  6    v_new3 number;
  7   begin
  8    v_new1 := :new.c1;
  9    :new.c1 := v_new1;
 10    v_new2 := :new.c2;
 11    :new.c2 := v_new2;
 12    v_new3 := :new.c3;
 13    :new.c3 := v_new3;
 14    dbms_output.put_line(':old.c1 = ' || :old.c1);
 15    dbms_output.put_line(':new.c1 = ' || :new.c1);
 16    dbms_output.put_line(':old.c2 = ' || :old.c2);
 17    dbms_output.put_line(':new.c2 = ' || :new.c2);
 18    dbms_output.put_line(':old.c3 = ' || :old.c3);
 19    dbms_output.put_line(':new.c3 = ' || :new.c3);
 20  end;
 21  /
 
Trigger created
SQL> select * from t;
 
        C1         C2         C3
---------- ---------- ----------
         1          1          0
SQL> update t set c1 = 9999 where c1 = 1;
 
:old.c1 = 1
:new.c1 = 9999
:old.c2 = 1
:new.c2 = 1
:old.c3 = 0
:new.c3 = 
 
update t set c1 = 9999 where c1 = 1
 
ORA-01407: cannot update ("ABM"."T"."C3") to NULL

Cajie, December 30, 2011 - 5:46 am UTC

Tom,

The fast-add feature will certainly help improve performance for large tables. But I have a doubt as far as the fast-add column is concerned, and would like to know how Oracle handles this situation and displays results correctly.

Consider a scenario:

1) I create a table with two columns, the 2nd column being NOT NULL and with a DEFAULT value of 1
2) I insert 10 rows, supplying only the first column.
3) I select from the table and get the proper DEFAULT value (1) in the 2nd column.
4) I alter the table and change the DEFAULT value of the 2nd column to 2
5) I again insert 10 rows, supplying only the first column.
6) I select from the table again and get the proper DEFAULT values in the 2nd column: 1 for the first 10 rows and 2 for the next 10 rows.


My question is: how does Oracle handle the different default values if they are being supplied at runtime?


SQL> CREATE TABLE tmp
     (
        x    INT,
        y    INT DEFAULT 1 NOT NULL
     );


SQL> SELECT    *
     FROM      tmp
     WHERE     y = 2;



     X    Y
     ---- ----
     1    1
     2    1
     3    1
     4    1
     5    1
     6    1
     7    1
     8    1
     9    1
     10   1


SQL> INSERT INTO    tmp ( x )
     SELECT        LEVEL * 10
     FROM          DUAL
     CONNECT BY    LEVEL <= 10;

SQL> ALTER TABLE tmp MODIFY y DEFAULT 2;

SQL> SELECT    *
     FROM      tmp;


     X     Y
     ----  ----
     1    1
     2    1
     3    1
     4    1
     5    1
     6    1
     7    1
     8    1
     9    1
     10    1
     10    2
     20    2
     30    2
     40    2
     50    2
     60    2
     70    2
     80    2
     90    2
     100    2


Does Oracle update all the existing column values with the existing default value before altering the table with the new default value?


Tom Kyte
December 30, 2011 - 11:38 am UTC

I'm missing something here - like an insert or something??? this makes no sense:

SELECT *
FROM tmp
WHERE y = 2;



X Y
---- ----
1 1
2 1
3 1
4 1


please do your work in sqlplus just like I do.

And - when you insert new rows, they will have the default value right there in the row itself. Only pre-existing rows after you ADD a column use this new behavior.

unless and until you add a column - nothing has changed from the old ways.

and it will work with the add column even if you alter the default. steps to see this:


take a big table bt

look at its current size

alter table bt add y varchar2(4000) default rpad('y',4000,'y') not null;
(that happens instantly, far too fast to have added the column and updated every row)

look at its current size - it will be the same (this also shows the column was 'fast' added)

alter table bt modify y default rpad( 'x', 4000, 'x' ) not null;

look at its current size - it'll be the same
look at the first row, its Y value will be Y's...

insert new row allowing y to default
select it out and it'll be x's


we know to supply the default of yyyy's for all rows that existed at the point in time of the alter ADD column - and we know to put in xxxx's for every new row from now on
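
A minimal runnable sketch of those steps (11g and above) - the table name BT and the use of ALL_OBJECTS are just for illustration, not part of the original example:

-- create a stand-in "big table"; any existing table would do
create table bt as select object_id, object_name from all_objects;

-- fast add: no rows are touched, so this returns immediately
alter table bt add y varchar2(4000) default rpad('y',4000,'y') not null;

-- change only the default; this affects rows inserted from now on
-- (no need to repeat NOT NULL, the column already is)
alter table bt modify y default rpad('x',4000,'x');

-- a pre-existing row still shows the default that was in force when Y was added
select substr(y,1,5) from bt where rownum = 1;             -- yyyyy

-- a new row picks up (and physically stores) the current default
insert into bt (object_id, object_name) values (-1, 'NEW ROW');
select substr(y,1,5) from bt where object_id = -1;         -- xxxxx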



Thanks

Cajie, January 02, 2012 - 1:09 am UTC

Thanks Tom.

Apologies Tom.

There was a mistake in the script because I manually modified the script to explain the scenario. I always try to write script as you do but everyone knows you are Tom and we only try to be Tom.
Tom Kyte
January 02, 2012 - 8:36 am UTC

I always try to write script as you do but everyone knows
you are Tom and we only try to be Tom.


but writing a script and running it and cutting and pasting should be something anyone can do :)

Clarification

A reader, January 05, 2012 - 9:39 am UTC

Tom,
This is a very interesting feature. After reading this feature, I had the same question as the original poster:

So my question is: is it correct to assume, as Arup Nanda says, that Oracle gets the default value of the column at "query time", or does Oracle still have to update all the rows when you specify a default value and a not null constraint on a column?

Can you please clarify?

Does Oracle never actually update rows in the table, and always get the default value from the data dictionary?

Thanks...
Tom Kyte
January 05, 2012 - 10:18 am UTC

Arup is correct, and the entire point of this entire post is:

with fast add column - when you add a column to a table and that column is NOT NULL with a default value - the table is not touched during the add column

there is no locking

there is no update

the value will be supplied at retrieval time from the data dictionary.



if the column added is NOT NULL DEFAULT (some value), then we do not lock, we do not update, we retrieve the value as needed at runtime from the dictionary.
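
For what it's worth, a small sketch of where that definition can be seen - USER_TAB_COLUMNS shows the current default for the column; the extra bookkeeping that makes the fast add work is internal to the dictionary and not something you would query directly. Assuming a table T with a fast-added column Y:

select column_name, data_default
  from user_tab_columns
 where table_name = 'T';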

It's close to what I would want.

Galen Boyer, January 05, 2012 - 10:53 am UTC

I would really like it if all "default" data were retrieved from the
data dictionary and never actually persisted at the row level. Then,
it would be clear whether the data in the database came from a default
or supplied value, although, Oracle would need to offer up some "IS
DEFAULT" function quite like the "IS NULL" function. But, that would
come close to solving the "should it be NULL" vs "should I default
something" arguments.
Tom Kyte
January 05, 2012 - 12:48 pm UTC

I would really like it if all "default" data were retrieved from the
data dictionary and never actually persisted at the row level


then you would never be able to change the default value - ever.


As it is, you can change the default value after adding the column - because we will store the default value for any newly inserted rows.

We keep the default value of the column at the time of addition in the dictionary, but if it ever changes - we'll still keep it - but change the default value for any new rows added.

Clarification

A reader, January 05, 2012 - 11:42 am UTC

Thanks Tom. This raises many more questions in my mind. Some are just "loud thoughts" which may be wrong. The tables in my mind are data warehouse tables. Please let us know your expert opinion.

In this column add scenario, for each row queried, Oracle will have to retrieve the value of the column from data dictionary. Would this lead to significant increase in IO against data dictionary?

Oracle will also have to check if the value of the column for a given row is default or not. If it is default, return the data dictionary value otherwise return the actual value. Will this add some extra code path for each row?

This can also have an impact on storage because there is no actual data stored in the column. Although the column is NOT NULL by definition, in reality it is NULL.

If such a table is exported by conventional export (though it is not recommended), will the column be exported as null? I am assuming that Data Pump export would be smart enough to get the value from the data dictionary.

Is there a way to override the behavior?


Thanks...
Tom Kyte
January 05, 2012 - 1:02 pm UTC

In this column add scenario, for each row queried, Oracle will have to retrieve
the value of the column from data dictionary. Would this lead to significant
increase in IO against data dictionary?


no, we just need to get it once, then we know what it is. It is part of the metadata for the table.

Oracle will also have to check if the value of the column for a given row is
default or not. If it is default, return the data dictionary value otherwise
return the actual value. Will this add some extra code path for each row?


No it won't. This will only apply to trailing null columns. The newly added column will be the "last" column. If it is entirely "missing" and it is NOT NULL and has a default - we know to return that default value. (trailing null columns are not stored anywhere).

If it is not entirely "missing", the value will be right there in the row.

The amount of extra code here - only invoked when the column is entirely missing, is trivial.


This can also have impact on storage because there is no actual data stored in
the column. Although the column is NOT NULL by definition, in reality, it is
NULL.


no, no it isn't. It is just not stored in the table, it is definitely NOT NULL. It will have a (positive) impact on storage because the existing rows will not have to have this value stored until

a) someone changes it
b) you add a NULLABLE or a NOT NULL column without a default value after it.
c) you insert new rows (they will have the default value stored in the row)
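
A quick, hedged way to verify the storage part of this, assuming a table T that just had a NOT NULL DEFAULT column fast-added - the segment size is unchanged immediately after the ALTER ... ADD because no existing row was rewritten:

select segment_name, blocks, bytes
  from user_segments
 where segment_name = 'T';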

If such a table is exported by conventional export (though is it not
recommended), will the column be exported null


i'm not sure if I should answer that - there are so many things that export (the old one) does not do in 11g - it is deprecated entirely - that you shouldn't be using it at all.

In any case, export just uses sql to retrieve the data, it would get the value using sql and do OK.


Is there a way to override the behavior?


alter the table, add the column NULLABLE with a default.
alter the table, modify the column to be NOT NULL.
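
In SQL, that two-step alternative would look like this (a sketch using a hypothetical table T and column Y). Because the column is added nullable, the default is physically written to every existing row, so you are back to the pre-11g behaviour with its redo/undo cost:

alter table t add y char(80) default 'x';   -- nullable: every existing row is updated
alter table t modify y not null;            -- then enforce NOT NULL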


Not sure I understand

Galen Boyer, January 05, 2012 - 12:57 pm UTC

> then you would never be able to change the default value - ever.

Why not? It was never stored in the first place. It would be like
changing code that did a NVL(fld,'IAMADEFAULT') to
NVL(fld,'IAmADefault'). I would argue that the way things are now is
more closely aligned with the statement, "You would never be able to
change it", because of the implications. Currently, I can change a
default value, but I would be loath to do so, because how would I
legitimately go back and update the previously defaulted values to the
new default? To me, a default value should be the same across all
records. Were I to want to change a default value, I'd like to be
able change all records that had the previous default.
Tom Kyte
January 05, 2012 - 1:13 pm UTC

It is stored - in the dictionary.

To me, a default value should be the same across all
records. Were I to want to change a default value, I'd like to be
able change all records that had the previous default.


history and ANSI differ with you.

It is not the way it has worked in the past. The default value of a column associated with a row is assigned when the row is created. IF the default value changes in the future - all existing rows have the original default value, only new ones have the "new" default value.



Here are the steps we undertake to support the historical (and standard) practice:

a) you have a table T, it doesn't have a column X yet.

b) you alter table t add (x int default 42 not null). We do nothing to the table on disk - to the data on disk. In the data dictionary we store the fact that "if X is a trailing null column, its value is 42"


Now, if you query up X, you'll get 42's. If a new row is added, X will be physically set to 42 (it will NOT be a trailing null column in new rows).

Later you

c) alter table t modify x default 55;

Now, if you pick up a row with a trailing empty X column - we supply 42 for it even though the current 'default' is 55 - that value 42 will be remembered for as long as X is a column of that table. If you add a new row - the value 55 (the current default) will be supplied and physically stored.
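
A small runnable sketch of steps (a) through (c), using a throwaway table T populated with a few rows (the names and values are just for illustration):

create table t as select rownum id from dual connect by level <= 3;

alter table t add (x int default 42 not null);   -- fast add: existing rows are untouched
alter table t modify (x default 55);             -- the new default applies to new rows only

insert into t (id) values (4);                   -- 55 is physically stored in this row

select id, x from t order by id;                 -- rows 1-3 show 42, row 4 shows 55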



It would be like
changing code that did a NVL(fld,'IAMADEFAULT') to
NVL(fld,'IAmADefault').


that is not the way defaults work, you can insert null into a column with a default. It is only when you DO NOT SUPPLY any value at all in the insert that you get the "default" (or use the default keyword in an insert / update). It is not as simple as an NVL() of a NULL.


Currently, I can change a
default value, but I would be loath to do so, because how would I
legitimately go back and update the previously defaulted values to the
new default?


You don't - you don't even know of a column that has the old default value was "defaulted" or PURPOSELY set to that value. The changing of a default value only affects subsequent actions - it does not in any way change existing data.

it never has
it never (probably, high chance) will



A reader, January 05, 2012 - 1:12 pm UTC

<quote>

take a big table bt

look at its current size

alter table bt add y varchar2(4000) default rpad('y',4000,'y') not null;
(that happens instantly, far too fast to have added the column and updated every row)

you said "updated every row", which negates the feature - OR do you mean to say it is updated in the data dictionary?

look at its current size - it will be the same (shows also the column was 'fast' added

alter table bt modify y default rpad( 'x', 4000, 'x' ) not null;

look as current size - it'll be the same
look at first row, it's Y value will be Y's...

insert new row allowing y to default
select it out and it'll be x's

</quote>



Also I do not understand this:


(1) In a very big table, add a column as NOT NULL with 'y' default value
(it updates the data dictionary with value 'y')

(2) Query the records; we see the default value as 'y'

(3) Now again alter the table and change the default value for the column added in step 1 to 'x'

(This will again update the data dictionary and override the previous value, i.e. 'y' with 'x')

(4) Now if we query we will see 'x'

(5) Add new rows; the new rows also show 'x'

Is the above correct?

Thanks
Tom Kyte
January 05, 2012 - 1:17 pm UTC

you said "updated every row" which negate the feature OR you mean to say update
in Data dictionary



umm, I said

"far to fast to have 'X'"

where X was = 'ADDED THE COLUMN AND UPDATED EVERY ROW'


it went too fast to have actually added the column
and
it went too fast to have actually updated every row


it did not update every row, it just plugged the default value into the metadata for the table.




Also I do not understand this:


(1) In a very big table, add a column as NOT NULL with 'y' default value
(it updates the data dictionary with value 'y')

(2) Query the records; we see the default value as 'y'

(3) Now again alter the table and change the default value for the column added in step 1 to 'x'

(This will again update the data dictionary and override the previous value, i.e. 'y' with 'x')

(4) Now if we query we will see 'x'

(5) Add new rows; the new rows also show 'x'

Is the above correct?


Not at all, not even remotely. See the comment right above that I just added as you were adding this. It explains it in more detail.


A reader, January 05, 2012 - 1:23 pm UTC

But why does it show 42 and not 55, since we changed the default to 55?

Is it not wrong? Or is it expected behaviour?

Thanks
Tom Kyte
January 05, 2012 - 2:44 pm UTC

the default value is assigned to a row when it is created (or when the column is added)

Think of it this way - suppose I gave every new hire to a company a default signing bonus of X (I can override that and give them more or less, but our policy is X). Now, in the future (inflation) dictates that I increase this value. Should the recorded value of an employee hired 5 years ago suddenly change just because you changed the default value? that would be absurd.


the default value is the default value assigned to something when it is inserted, when it is created. Once the default is assigned - it becomes an attribute of that row and the only way to change it would be to update that row.


Maybe I'm getting there

Galen Boyer, January 05, 2012 - 2:36 pm UTC

> that is not the way defaults work, you can insert null into a column
> with a default. It is only when you DO NOT SUPPLY any value at all
> in the insert that you get the "default" (or use the default keyword
> in an insert / update). It is not as simple as an NVL() of a NULL.

I know that defaults don't work that way. I'm arguing how I think
they should be, completely because of your next point.

> You don't - you don't even know of a column that has the old default
> value was "defaulted" or PURPOSELY set to that value.

And that is why I believe defaults shouldn't be persisted. Instead,
you should be able to ask something like IS DEFAULT. Some guy inserts
the value -1, or it gets defaulted to -1. Then someone comes along,
knows the default of that column to be -1. That same someone then is
never quite sure how that -1 got there. With an IS DEFAULT, there
would be no ambiguity.

> The changing of a default value only affects subsequent actions - it
> does not in any way change existing data.

What I'm indirectly arguing is that, at any point in time, a "default"
value for some column should be the default for all rows in the table
no matter what point in time that record was inserted. Maybe I'm not
in tune with the rules. Is there an ANSI standard about defaults that
says they must NOT work the way I'm describing? I would think
defaults are something databases could handle however the heck they
think they should be handled.

> it never has it never (probably, high chance) will

Well, Oracle is moving closer to what I'm describing with this
functionality. I'm not saying they are moving closer because folks at
Oracle believe the way I do, I'm saying were some folks at Oracle to
believe the way I do and convincingly argue that Oracle were to
implement it, this functionality would be very smoothly extended to an
IS DEFAULT.

Tom Kyte
January 05, 2012 - 2:44 pm UTC

from just above:

Think of it this way - suppose I gave every new hire to a company a default signing bonus of X (I can override that and give them more or less, but our policy is X). Now, in the future (inflation) dictates that I increase this value. Should the recorded value of an employee hired 5 years ago suddenly change just because you changed the default value? that would be absurd.

what do you think about that?


What I'm indirectly arguing is that, at any point in time, a "default"
value for some column should be the default for all rows in the table
no matter what point in time that record was inserted. Maybe I'm not
in tune with the rules.


I disagree (see prior example). Can you come up with an example that is real world and says otherwise?

Is there an ANSI standard about defaults that
says they must NOT work the way I'm describing?


ANSI describes how defaults work - down to a T.

I would think
defaults are something databases could handle however the heck they
think they should be handled.


I wish :)


Well, Oracle is moving closer to what I'm describing with this
functionality.


How so?

Column Default value handling in 11g caused application to crash

SJ, January 06, 2012 - 5:37 am UTC

Hi,
I recently encountered an issue with the default value in an Oracle table. I observed that there is some difference in the handling of default values in Oracle 10g and 11g. As a result of this, the ASP application threw the error 'Accessor is not a parameter accessor' when Oracle 11g is used as the DB. With Oracle 10g the application was working fine.

The table mentioned above has a not null column with the default column value set to 1. In Oracle 11g, during the update the default value became null, which caused the application to crash. If I update every row in the table, the default value becomes null. Also, there is another column which has data in a specific format like '50-50-0'. This column loses the data, i.e. after the update the data shows some random characters. What is the reason for these issues, and please advise some solutions to stop the application from crashing.
The table makes use of an after-each-row update trigger to maintain an audit trail.
With Oracle 10g there was no issue as mentioned above; everything was working fine.
Tom Kyte
January 10, 2012 - 9:30 pm UTC

give example, tell me how to observe this myself. this is not sounding right, it doesn't sound like anything i've heard of.


Application crashes with Oracle 11g

SJ, January 11, 2012 - 1:19 am UTC

Thanks for looking into the issue. Is there any way I can show this issue to you? I think that would be the best.
Tom Kyte
January 11, 2012 - 9:24 am UTC

I'll reiterate:


give example, tell me how to observe this myself. this is not sounding right, it doesn't sound like anything i've heard of.



give me an example like I always give you guys examples...

Alright, I was a little over the top...

Galen Boyer, January 11, 2012 - 8:17 am UTC

> Think of it this way - suppose I gave every new hire to a company a
> default signing bonus of X (I can override that and give them more
> or less, but our policy is X). Now, in the future (inflation)
> dictates that I increase this value. Should the recorded value of an
> employee hired 5 years ago suddenly change just because you changed
> the default value? that would be absurd.

Well, okay, that's a reason to have defaults continue to work the way
they do. I have never used a default for setting values like that. So
I truly wasn't thinking of that. For something like you are talking
about, I just would have never designed it by table defaults
persisting the value into the row. I probably would have first
designed a table that represented signing bonuses and maintain that by
itself, sort of, when inflation hits, add a new record with a
effective timestamp. Then, use that table at insert time to figure
out what should be the signing bonus. But, just because I would design
something a particular way does not give me the right to say the
default behaviour should change, so, I'll backtrack on my argument
that the behaviour of defaults should change. But, let me at least
explain what I'm really after. Maybe, when defining table defaults,
one could ask for it to be a table level default that can be changed
and would therefore change the value for all rows that had been
defaulted. Leave the original behaviour, but also allow for the
default behaviour to be different. My experience for use of defaults
and my irritation with setting default values are usually found in the
datawarehousing world where every attribute of a dimension is NOT
NULL. So, we will load a row and if we can't find some of the
attributes values in the source, we default the value. Then, the
default values and real values are intermingled, the number of rows
are adding up quickly and there is no way to tell after the fact
whether a value is a default or not. We don't really give a darn what
the actual default value is, we would want them all to be the same and
we can easily decide later that the default value should be something
different. My argument in those cases was that it would be nice to
have some form of a "default object", very akin to a NULL, in that you
can ask a default, hey, are you a default. So, then if I default a
value of say 'NA', then wherever 'NA' was entered, I can ask IS
DEFAULT. Well, if defaults were never actually stored and instead
kept in the metadata of the table, then I'd have the functionality I
was looking for, and I wouldn't even need some new "DEFAULT OBJECT".

A different reason for what I'm after comes about because I was
alarmed by Jonathan Lewis's detailed explanation of how adding some
default date representing infinity (ie, way in the future) can really
confuse the optimizer and make it believe the data distribution is
much more spread out than it truly is. I've been on a few Kimball
based datawarehousing projects and following Kimball's teachings
always makes sure each dimension ends up with exactly this issue.
Now, as Jonathan is so darn good at, it just blows a big hole right in
the middle of that design. Basically, the Kimball Type2 Dimension
model forces some outlying date on all current records. Each new
dimension record will have its date defaulted to something like
'01/01/2400'. Well, it probably would be very helpful to the
optimizer to not have actually stored that.
Tom Kyte
January 11, 2012 - 9:48 am UTC

I probably would have first
designed a table that represented signing bonuses and maintain that by
itself, sort of, when inflation hits, add a new record with a
effective timestamp. Then, use that table at insert time to figure
out what should be the signing bonus.


and for what you want "a default that changes over time", I would have followed pretty much the same design principle! We are just polar opposites here. I would use the table (supply this value at runtime if there is no value) for a default that can change over time - whereas you would have used a table to supply a default value that should remain static unless updated directly.


My experience for use of defaults
and my irritation with setting default values are usually found in the
datawarehousing world where every attribute of a dimension is NOT
NULL. So, we will load a row and if we can't find some of the
attributes values in the source, we default the value. Then, the
default values and real values are intermingled, the number of rows
are adding up quickly and there is no way to tell after the fact
whether a value is a default or not.


this one is simple. Someone made a huge mistake - those attributes are obviously NULLABLE.

see what happens when you use a fake default for what should have been null:

http://asktom.oracle.com/Misc/back-from-oracle-openworld.html
http://asktom.oracle.com/Misc/use-null-for-unknonw-data.html

the data is not "defaulted", the data values are UNKNOWN and you are using a fake default to fill in the blanks.


A different reason for what I'm after comes about because I was
alarmed by Jonathan Lewis's detailed explanation of how adding some
default date representing infinity (ie, way in the future) can really
confuse the optimizer and make it believe the data distribution is
much more spread out than it truly is.


absolutely correct and a good reason to..... just let it be null. You don't have a value for it, the data model was wrong, the attributes should be NULLABLE.


If your model gets in the way of doing it right, you might just want to reconsider the model.

Oh well.

Galen Boyer, January 11, 2012 - 10:55 am UTC

> and for what you want "a default that changes over time", I would
> have followed pretty much the same design principle! We are just
> polar opposites here. I would use the table (supply this value at
> runtime if there is no value) for a default that can change over
> time - whereas you would have used a table to supply a default value
> that should remain static unless updated directly.

I would want a history of what was the default at what time period.
Whether that translated to an altering of a default for a table or a
lookup in the codebase to get it would be a different discussion.
I've thought of your design in the past, get it closer to the data
(which is sort of your mantra everywhere anyways) and I always thought
it had some merit but having it in its own table is more accepted by
all the other folks that are considering the designs. The way you are
describing involves DDL. Most folks I know are loath to have
scheduled DDL as part of the logical codebase (partitioning
notwithstanding), so I don't fight that argument. But, I would be
totally on board with your way if there were more than one of us
making that argument in a design discussion. I have no problem with
how you are doing it. I just would want the history of the default
values somewhere.

> If your model gets in the way of doing it right, you might just want
> to reconsider the model.

Well, there are a lot of folks that would say that Ralph Kimball's way
of modeling datawarehouses is correct. It certainly is a big enough
camp and he is a big enough name of industry leaders that maybe a way
to marry camps might be good? Plus, there is a full camp of Relational
designers who feel there should be NO NULLS anywhere in a data-model.
And they would probably say that Oracle's Optimiser needs to be
"fixed". I'm not one of them, (although I do feel one should try to
make all attributes NOT NULL and then back off) but relational theory
does not support the concept of NULL so there are camps of folks that
try to live by this theory. For example, one architect I worked for
was deliberately introducing "NO NULLS anywhere" into an environment
with already existing codebase that assumed NULLs existed and then,
all of the sudden, they were getting a value when previously they
weren't. I just shook my head and said, you need to win that battle
at the start of the database design, or deal with NULLS!.

I think my idea would go a long way in solving the divide. Having
something like:

create table t1
(
id number NOT NULL
,fld varchar2(100) NOT NULL BY DEFAULT DEFAULTOBJECT_XXX
)
/

might do the trick. Maybe defaults allowing to be changed could
work... Just seems to allow for all to be satisfied, but, I've
conversed with you enough in the past to know when you are not
convinced. I believe I have a good idea. You don't. No biggy.

Tom Kyte
January 11, 2012 - 3:18 pm UTC

I would want a history of what was the default at what time period.


a table approach could do that.

a default that changes all other old defaults - could not.


The way you are
describing involves DDL.


how so? I'm saying "have a table that supplies the default if the default changes, update the table that supplies this default value"

there are a lot of folks that would say that Ralph Kimball's way
of modeling datawarehouses is correct.


and there are equally as many that say otherwise and have performance numbers to back that up.


there is a full camp of Relational
designers who feel there should be NO NULLS anywhere in a data-model


I agree that there is, but the performance of such a model in real life precludes it from being something we can actually do.




I'm not arguing about how defaults should or could work - but rather, trying to make clear how they actually do work, what they do and how they do it :)


Continuing.

Galen Boyer, January 12, 2012 - 10:46 am UTC

>> The way you are describing involves DDL.

> how so? I'm saying "have a table that supplies the default if the
> default changes, update the table that supplies this default value"

You referenced,

Think of it this way - suppose I gave every new hire to a company a
default signing bonus of X (I can override that and give them more
or less, but our policy is X). Now, in the future (inflation)
dictates that I increase this value. Should the recorded value of
an employee hired 5 years ago suddenly change just because you
changed the default value? that would be absurd.

I read this as the values had been set by a default for a column. I
then said, well I would have done it table-based and not have used
defaults. Somewhere from there on, you said you would have done it
table-based. So, not sure where we are on this one. but I think we are
actually saying the same thing.

>> there is a full camp of Relational designers who feel there should
>> be NO NULLS anywhere in a data-model

> I agree that there is, but the performance of such a model in real
> life precludes it from being something we can actually do.

Well, there are a lot of things Oracle introduces that seem to perform
just fine. I fail to see how

create table t1
(
id number NOT NULL
,fld varchar2(100) NOT NULL BY DEFAULT DEFAULTOBJECT_XXX
)
/

is deemed non-performant when it isn't even an option, and how come it's
so clear that it would be non-performant? "IS NULL" works just fine.
Why couldn't "IS DEFAULT"? The idea is what I'm after. I don't hear
you taking my idea and just thinking it over a little bit. I hear you
saying,

"Here is how defaults work" combined with "Don't use defaults just
because Kimball says so or some relational theory zealot would
say to"

I'm saying, I see valid arguments in the Relational zealot's words as
well as Kimball's models as well as performance stats that say don't
do that. I think that there might be a way to have both worlds here.
Shouldn't we be constantly striving for and revisiting how things are
when they currently don't completely support the theory behind why
they exist? What is wrong with database vendors striving to match
relational theory as they evolve?

> I'm not arguing about how defaults should or could work - but
> rather, trying to make clear how they actually do work, what they do
> and how they do it :)

I understand how they work, but I also understand how much you will go
out of your way to make sure a conversation like I'm having with you
would not be misconstrued by the casual reader as somehow being how
things actually are. I appreciate that of you, understand it and am
constantly amazed at the patience you show while staying committed to
it.
Tom Kyte
January 13, 2012 - 9:06 am UTC

oh, sorry, i thought you were talking about my approach to doing a default that 'changes' over time.

I can see the potential validity for your approach to defaults - but I'm not 100% convinced they would be used appropriately - I see potential for abuse. It seems that NULL would suffice in the case of


,fld varchar2(100) NOT NULL BY DEFAULT DEFAULTOBJECT_XXX


it would be

fld varchar2(100),

if it were null, it was 'default' (unknown really)

Default column value based on another column

Ravi B, February 10, 2012 - 6:24 pm UTC

Hi Tom,

Is there a way to accomplish this without a trigger?

drop table test_cons;
create table test_cons (id number, end_date date, is_live number);

Would want to do something like this:

alter table test_cons modify is_live number default (case when end_date is null then 1 else 0 end);
Tom Kyte
February 13, 2012 - 7:45 am UTC

seems a bit strange, what if we update end_date later - shouldn't "is_live" change to match?


If so, use a view in 10g and before or a virtual column in 11g and above - it shouldn't be stored, it is a derived value.

otherwise - if it should be a default, you would have to use a trigger - but it couldn't work like a default if it were a trigger. A default is only used if you do not reference the column at all in the insert. A trigger cannot tell whether you referenced the column in the insert or not.
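
A minimal sketch of the virtual column approach in 11g, reusing the test_cons table from the question - no trigger and nothing stored; the value is derived whenever the row is read:

drop table test_cons;

create table test_cons
( id       number,
  end_date date,
  is_live  number generated always as
           (case when end_date is null then 1 else 0 end) virtual
);

insert into test_cons (id, end_date) values (1, null);
insert into test_cons (id, end_date) values (2, sysdate);

select id, is_live from test_cons;   -- row 1 is "live", row 2 is not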

On large tables

Dhruva, May 02, 2012 - 2:21 am UTC

I have a table with millions of records; now I would like to add a new column with a default value.

If I use ALTER TABLE tablename ADD column datatype DEFAULT "", the statement takes hours and the connection gets timed out.

What is the best way to achieve this?
Tom Kyte
May 02, 2012 - 1:35 pm UTC

read the original answer.

Prior to 11g the steps would have been to:

a) lock table
b) add column
c) update column (which might hurt by migrating a lot of rows) with default value
d) commit



In 11g it becomes

b) add column

and that is all with the fast add not null column with a default value.


Prior to 11g, I would recommend either:

a) create table as select with new default value, index it, grant it, put constraints on, etc - drop old table - rename new table

b) dbms_redefinition into a new table with a new column


(a) would be offline of course - but it could be done in parallel and would result in a nicely packed table/set of indexes. It could be pretty fast since you will skip undo and can skip redo as well if you like.

(b) would be online so it really doesn't matter how long it takes - the end users won't be affected.
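
A rough sketch of option (a), assuming the table is called BIG_T and the new column NEW_COL should be 0 for all existing rows; indexes, grants, constraints and statistics still have to be recreated afterwards, and this is an offline operation:

-- build the replacement table in one pass with the new column filled in
-- (add a PARALLEL clause if you want it built in parallel)
create table big_t_new
  nologging
as
select t.*, 0 as new_col
  from big_t t;

-- swap it into place during a maintenance window
drop table big_t;
rename big_t_new to big_t;

-- now make the new column behave like the fast-added one would have
alter table big_t modify new_col default 0 not null;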

HOW ABOUT NULLABLE NEW COLUMN?

Carl, May 17, 2012 - 2:50 pm UTC

Hi Tom,

We added a new column to an existing table trying to use the FAST ADD COLUMN feature on 11G.

The value for this column for the pre-existing rows is a constant (0).

At the end we want to permit this column to be nullable (for the new inserted rows).

Here is the code:

-- Here we add the new column c4 : The fast add column
-- feature is used:
alter table test add c4 number(19) default 0 not null;

-- Here we modify the column to allow null values for new
-- records:
alter table test modify c4 default null null;

We noticed that the value 0 is written to all the pre-existing rows.

Is there a way to fast add nullable columns for which we want to initialize pre-existing rows?

Best regards.


Tom Kyte
May 18, 2012 - 2:37 am UTC

You can use dbms_parallel_execute to do this "online" in a fashion.

a) add the column as "nullable" and let it be null. that'll just stick the column on. Use an editioning view if need be (rename table, create editioning view - offline for a second) to hide the column from the application.

b) put a temporary trigger on the table to set :new.new_column := 0

c) use dbms_parallel_execute to update very small slices of the table and commit.

d) when done, drop trigger - replace editioning view with one that exposes the new column and you are done


There will be a short outage to rename table, create editioning view, add trigger and then a tiny outage to drop the trigger
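
A rough sketch of step (c) with dbms_parallel_execute, assuming the table is called T, the new nullable column is NEW_COL, and it should be backfilled to 0 - the task name and chunk size are arbitrary, and running with parallel_level > 1 needs the CREATE JOB privilege:

begin
  dbms_parallel_execute.create_task('backfill_new_col');

  -- split the table into small rowid ranges of roughly 10,000 rows each
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'backfill_new_col',
    table_owner => user,
    table_name  => 'T',
    by_row      => true,
    chunk_size  => 10000);

  -- each chunk is updated and committed independently
  dbms_parallel_execute.run_task(
    task_name      => 'backfill_new_col',
    sql_stmt       => 'update t set new_col = 0 ' ||
                      'where rowid between :start_id and :end_id',
    language_flag  => dbms_sql.native,
    parallel_level => 4);

  dbms_parallel_execute.drop_task('backfill_new_col');
end;
/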


fast add column

A reader, May 18, 2012 - 3:50 am UTC

Hi Tom,

Still not quite clear about 'fast add column'.

Since the default value is stored in the dictionary and populated at runtime,

my question is: if the default value is changed many times, how does this work?
Is the dictionary responsible for storing the linkage between rowid and each version of the default value?
Or, whenever a default value is changed, does it first update the existing rows and only note down the newest default value?
If case 1, after many changes, will runtime populating performance become an issue?
If case 2, is 'fast add column' not so 'fast' as documented?

create table test(id int);
insert into test select rownum from dual connect by level<=5;
alter table test add (a int default 1 not null );
select * from test;
alter table test modify a int default 10;
select * from test;
insert into test(id) select rownum+5 from dual connect by level<=5;
select * from test;
alter table test modify a int default 100;
insert into test(id) select rownum+10 from dual connect by level<=5;
select * from test;
Tom Kyte
May 21, 2012 - 7:43 am UTC

when you do the fast add column - all we do is put into the dictionary "if you find a row without this value - meaning it is not in the row at all - there are zero bytes for this column in the row, it is a trailing nullcol (but it cannot be - it is NOT NULL) then you shall use this value"

So, in the data dictionary is the value for the column if this column is found to not exist at all in the row.

Now, after that point, after the fast add column, any row that is inserted into the table (or that row is modified in a way that necessitates that column to appear in the row) - will have that default value populated (it is the current default). so, after the fast add column - any newly added row will have the default value appear in it, it will consume space, it will not be "missing"

Later, if you modify the default value via an ALTER - we still remember in the data dictionary "this is the default value for any MISSING column value". And we update the dictionary to say "but for any newly added row - this is the default" and we put that default in for any newly added row. so, any newly added row will have the new default. any row where the column is just entirely missing will have the ORIGINAL "new" default.




What happens with nullable columns?

Sandro Ferri, August 08, 2012 - 5:47 am UTC

Hi Tom,

thanks for your precise answers. Here is one more question:

In our DB (Oracle 11.0), we have many tables (new ones are created every day) containing about 30 million rows each; our customer asked us to add a new column (not indexed) to those tables from now on. However, we also need to update the already existing tables.

Since we do not need to set a value for this column in our "past" tables, we are fine adding it as nullable with no default value, i.e.:

ALTER TABLE DAILY_TABLE_NNN ADD (NEW_COLUMN NUMBER(2));

So, in this case, is the "fast add column" feature of Oracle 11 still applied? Can we expect the "add column" operation to leave our existing datafiles almost unchanged in size? If not, is there another way to obtain this result? Unfortunately, at this moment datafile size and management are an issue for us (they are scattered in different filesystems).

Thanks again for your kind attention and dedication.

Sandro
Tom Kyte
August 17, 2012 - 1:15 pm UTC

nope. it will not be.

but it doesn't need to be - that was always 'fast'. It would take the same amount of time to add that column to a 1,000,000,000 row table as a zero row table, we don't have to touch the table at all.

Remove default value after adding a new column with a default value

A reader, February 28, 2014 - 10:05 pm UTC

Tom,
I added a new column and needed to set it to a known value for existing records. Any future data will have a value for this column. Since there are a lot of rows, instead of doing an update I added the column with a default value and a not null constraint. After adding the column I wanted the default value set up on the table to be removed. How can this be done?

CREATE TABLE ksree.TEST
(
ID NUMBER,
item VARCHAR2(30) DEFAULT 'Y' CONSTRAINT test_item_nn NOT NULL
);

alter table ksree.test
add description varchar2(100) default 'no description' constraint test_description_nn not null;


Will the statement below be the correct way to remove the default value? The statement below did not put in the default value previously set, and it seemed correct. But I am not sure, and I would like to verify if this is the correct way to do it.

ALTER TABLE KSREE.TEST
MODIFY item default '';

I tried the below statement as well and it still puts the default value for new rows.

ALTER TABLE KSREE.TEST
MODIFY item default null;

Thank you,
Kirthi

A reader, March 28, 2014 - 5:45 pm UTC

Tom, can you please comment your thoughts on my previous post. Thank you.
Kirthi
Tom Kyte
March 31, 2014 - 10:04 am UTC

the column you added was description

the column you modified was item.

try modifying the column you added, which is what you stated you wanted to do.

ops$tkyte%ORA11GR2> CREATE TABLE TEST
  2  (
  3    ID  NUMBER,
  4    item VARCHAR2(30) DEFAULT 'Y' CONSTRAINT test_item_nn NOT NULL
  5  );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into test (id) values ( 1 );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> alter table test
  2  add description varchar2(100) default 'no description' constraint test_description_nn not null;

Table altered.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into test(id) values ( 2 );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> ALTER TABLE TEST
  2  MODIFY description default null;

Table altered.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into test(id) values ( 3 );
insert into test(id) values ( 3 )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("OPS$TKYTE"."TEST"."DESCRIPTION")


ops$tkyte%ORA11GR2> select * from test;

        ID ITEM
---------- ------------------------------
DESCRIPTION
-------------------------------------------------------------------------------
         1 Y
no description

         2 Y
no description



If you modify the column description, it will work as expected.

A reader, November 19, 2014 - 3:13 pm UTC

Alter table emp
add (ssn number(9) default 0)
will do it.

Fast add feature doesn't seem to be working

Sree, January 12, 2015 - 7:57 am UTC

Hi Tom,

I'm trying this feature in Oracle 11.2.0.3.0 and it just doesn't work.

This is the query I'm firing:

Alter table <table_name>(
<column_name> varchar(12 char) default 'AA' NOT NULL
);

This takes the same time as the clause without the not null keyword. From the above discussions, I understand the table should not be locked and ideally this should finish in a matter of milliseconds (which is the behavior we see in a different DB). Are there any parameters which could be affecting the feature on my side?

Please help.

Typo in the review

Sree, January 12, 2015 - 8:21 am UTC

Hi Tom,

Apologies. There is a typo in the query I mentioned above.

Alter table <table> add
(
<column> varchar(12 char) default 'AA' not null
);

It takes around 35 mins. to execute in both cases. Number of rows: 60,000,000+

Does defaulting have an impact on insert

Sree, January 14, 2015 - 12:40 pm UTC

Hi Tom,

Please ignore the above query where it is cited that the feature is not working in 11g - the person who had done the database migration had kept 'compatible' at 10.2.0, which was hiding the feature. We changed that to 11.2.0.3 and the feature started working, and yes, the column is getting added in a matter of milliseconds.

One query, though, which came up: does this have an impact on the performance of insert/update? Does the default not null constraint put additional overhead on those operations by adding a check that the column is not null?

Could you please help for understanding this concept better?

Anvesh, February 18, 2016 - 8:59 pm UTC



Nice article!

Really, this will help people of the SQL Server community.
I have also prepared a small note on this: how to find the default value of columns in SQL Server.

http://www.dbrnd.com/2015/10/sql-server-script-to-find-all-default-values-with-columns/

Tables with LOB columns

Rajeshwaran Jeyabal, August 21, 2017 - 1:49 am UTC

http://docs.oracle.com/cd/E11882_01/server.112/e25494/tables.htm#ADMIN11005
<quote>
If a new column is added to a table, the column is initially NULL unless you specify the DEFAULT clause. When you specify a default value, the database immediately updates each row with the default value. Note that this can take some time, and that during the update, there is an exclusive DML lock on the table. For some types of tables (for example, tables without LOB columns), if you specify both a NOT NULL constraint and a default value, the database can optimize the column add operation and greatly reduce the amount of time that the table is locked for DML.
</quote>
The below demo is from 11.2.0.4.

Though we have a LOB column in the table, we see reduced undo and redo.
demo@ORA11G>
demo@ORA11G> create table t as select a.*, to_clob(rpad('*',32000,'*')) as x1 from all_objects a;

Table created.

demo@ORA11G>
demo@ORA11G> column undo new_val u
demo@ORA11G> column redo new_val r
demo@ORA11G>
demo@ORA11G> select max( decode(s1.name,'undo change vector size',s2.value)  ) undo,
  2      max( decode(s1.name,'redo size',s2.value)  )  redo
  3  from v$sysstat s1,
  4       v$mystat s2
  5  where s1.name in ('undo change vector size','redo size')
  6  and s1.statistic# = s2.statistic#;

      UNDO       REDO
---------- ----------
  25226828   87178504

demo@ORA11G> alter table t add x2 char(80) default 'x' not null;

Table altered.

demo@ORA11G>
demo@ORA11G> select max( decode(s1.name,'undo change vector size',s2.value)  ) - &u as undo_diff,
  2     max( decode(s1.name,'undo change vector size',s2.value)  ) undo,
  3     max( decode(s1.name,'redo size',s2.value)  ) - &r as redo_diff,
  4      max( decode(s1.name,'redo size',s2.value)  )  redo
  5  from v$sysstat s1,
  6       v$mystat s2
  7  where s1.name in ('undo change vector size','redo size')
  8  and s1.statistic# = s2.statistic#;
old   1: select max( decode(s1.name,'undo change vector size',s2.value)  ) - &u as undo_diff,
new   1: select max( decode(s1.name,'undo change vector size',s2.value)  ) -   25226828 as undo_diff,
old   3:        max( decode(s1.name,'redo size',s2.value)  ) - &r as redo_diff,
new   3:        max( decode(s1.name,'redo size',s2.value)  ) -   87178504 as redo_diff,

 UNDO_DIFF       UNDO  REDO_DIFF       REDO
---------- ---------- ---------- ----------
      3876   25230704      12300   87190804

demo@ORA11G> desc t
 Name                         Null?    Type
 ---------------------------- -------- -----------------------------
 OWNER                        NOT NULL VARCHAR2(30)
 OBJECT_NAME                  NOT NULL VARCHAR2(30)
 SUBOBJECT_NAME                        VARCHAR2(30)
 OBJECT_ID                    NOT NULL NUMBER
 DATA_OBJECT_ID                        NUMBER
 OBJECT_TYPE                           VARCHAR2(19)
 CREATED                      NOT NULL DATE
 LAST_DDL_TIME                NOT NULL DATE
 TIMESTAMP                             VARCHAR2(19)
 STATUS                                VARCHAR2(7)
 TEMPORARY                             VARCHAR2(1)
 GENERATED                             VARCHAR2(1)
 SECONDARY                             VARCHAR2(1)
 NAMESPACE                    NOT NULL NUMBER
 EDITION_NAME                          VARCHAR2(30)
 X1                                    CLOB
 X2                           NOT NULL CHAR(80)

demo@ORA11G>

Connor McDonald
August 21, 2017 - 5:38 am UTC

Yeah, I think it's poorly worded documentation. A trigger is an easy way to see which methods are re-visiting the table rows.


SQL> create table t ( x int, y clob );

Table created.

SQL>
SQL> insert into t
  2  select rownum ,rpad(rownum,2000,'x')
  3  from dual
  4  connect by level <= 100;

100 rows created.

SQL>
SQL> create or replace
  2  trigger trg
  3  before update on t
  4  for each row
  5  declare
  6    x int;
  7  begin
  8    x := 1/0;
  9  end;
 10  /

Trigger created.

SQL>
SQL> alter table t add z int  default 123;
alter table t add z int  default 123
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at "MCDONAC.TRG", line 4
ORA-04088: error during execution of trigger 'MCDONAC.TRG'


SQL>
SQL> alter table t add z1 int default 123 not null ;

Table altered.

SQL>
SQL>
SQL> create table t1 ( x int, y int );

Table created.

SQL>
SQL> insert into t1
  2  select rownum , rownum
  3  from dual
  4  connect by level <= 100;

100 rows created.

SQL>
SQL> create or replace
  2  trigger trg1
  3  before update on t1
  4  for each row
  5  declare
  6    x int;
  7  begin
  8    x := 1/0;
  9  end;
 10  /

Trigger created.

SQL>
SQL> alter table t1 add z int  default 123;
alter table t1 add z int  default 123
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at "MCDONAC.TRG1", line 4
ORA-04088: error during execution of trigger 'MCDONAC.TRG1'


SQL>
SQL> alter table t1 add z1 int default 123 not null ;

Table altered.

SQL>


Function as default value

Mansi Raval, June 29, 2022 - 8:00 am UTC

I could see some behaviour in my code about a default value.
The column in my table is as below:

Table 1:
Column name:   CLEAN_DATA
Datatype:      VARCHAR2(4000 BYTE)
Nullable:      Yes
Default value: GENERATED ALWAYS AS (GET_CLEAN_DATA("UNCLEAN")) Virtual

So it has a default, and its value is always based on what the function returns.
Now when I am using this column in the where clause, it should directly use the value, I believe. But the execution plan shows the function call as well in the filter.

Does it work like that? Is it calling this function every time I use this column in the where clause?
Chris Saxon
June 29, 2022 - 11:06 am UTC

The database doesn't store the values for virtual columns. These are always calculated at runtime.

So yes, the function will be executed when you reference this column in SQL. If you're using it in the WHERE clause, the database will call the function when filtering.
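
A self-contained sketch of that behaviour; the function and table names below are made up for illustration, not taken from the original poster's schema:

-- the function must be DETERMINISTIC to be usable in a virtual column
create or replace function get_clean_data (p_unclean in varchar2)
  return varchar2 deterministic
as
begin
  return upper(trim(p_unclean));   -- stand-in for the real cleansing logic
end;
/

create table t_virt
( unclean    varchar2(100),
  clean_data varchar2(100) generated always as (get_clean_data(unclean)) virtual
);

insert into t_virt (unclean) values ('  x  ');

-- the plan's filter line shows the function call, e.g. GET_CLEAN_DATA("UNCLEAN")='X'
explain plan for select * from t_virt where clean_data = 'X';
select * from table(dbms_xplan.display);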