Update all database tables with current timestamp

Question and Answer

Connor McDonald

Thanks for the question, Fiyin.

Asked: May 21, 2018 - 3:50 pm UTC

Last updated: September 21, 2018 - 1:13 am UTC

Version: 11.2.0.3.0

Viewed 10K+ times

You Asked

Hi,

I have a production database (version shown below) with about 2,000 tables, each having over 10 million rows.

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production                       
PL/SQL Release 11.2.0.3.0 - Production                                          
CORE 11.2.0.3.0 Production                                                      
TNS for Linux: Version 11.2.0.3.0 - Production                                  
NLSRTL Version 11.2.0.3.0 - Production   


For conflict resolution, I added a timestamp column to these tables. What is the most efficient way to update each row with the current timestamp, keeping locking on the tables (and, where possible, archive log generation) to a minimum, and committing every 10,000 rows?


SET SERVEROUTPUT ON SIZE UNLIMITED

DECLARE
    table_own    VARCHAR2 (30);
    table_name   VARCHAR2 (30);
    l_time       TIMESTAMP := SYSTIMESTAMP;
    sqlstmt      VARCHAR2 (1000);
    i            PLS_INTEGER := 0;

    CURSOR table_cur
    IS
          SELECT a.owner, a.table_name
            FROM dba_tables a, dba_segments b
           WHERE     a.TABLE_NAME = b.SEGMENT_NAME
                 AND a.owner = b.owner
                 AND a.owner NOT IN ('ANONYMOUS',
                                     'APEX_030200',
                                     'APEX_PUBLIC_USER',
                                     'APPQOSSYS',
                                     'CTXSYS',
                                     'DBSNMP',
                                     'DIP',
                                     'EXFSYS',
                                     'FLOWS_FILES',
                                     'MDSYS',
                                     'ORACLE_OCM',
                                     'ORDDATA',
                                     'ORDPLUGINS',
                                     'ORDSYS',
                                     'OUTLN',
                                     'MDDATA',
                                     'MGMT_VIEW',
                                     'OLAPSYS',
                                     'OWBSYS',
                                     'OWBSYS_AUDIT',
                                     'SPATIAL_CSW_ADMIN_USR',
                                     'SPATIAL_WFS_ADMIN_USR',
                                     'SI_INFORMTN_SCHEMA',
                                     'SYS',
                                     'SYSTEM',
                                     'WMSYS',
                                     'XDB',
                                     'XS$NULL',
                                     'SYSMAN',
                                     'OLAPSYS')
                 AND (a.owner, a.table_name) NOT IN
                         (SELECT owner, table_name
                            FROM dba_tab_columns
                           WHERE     column_name = 'LAST_DML_TIME'
                                 AND NUM_NULLS IS NOT NULL)
        ORDER BY b.bytes / 1024 / 1024 / 1024 DESC;
BEGIN
    OPEN table_cur;

    LOOP
        FETCH table_cur INTO table_own, table_name;

        EXIT WHEN table_cur%NOTFOUND;

        sqlstmt :=
               'update '
            || table_own
            || '.'
            || table_name
            || ' set last_dml_time = '''
            || l_time
            || '''';

        EXECUTE IMMEDIATE sqlstmt;

        i := i + 1;

        IF MOD (i, 10000) = 0   -- note: i counts tables processed, not rows updated
        THEN
            COMMIT;
        END IF;
    END LOOP;

    CLOSE table_cur;
END;
/


The average time over 3 runs has been 3 days. It is a Standard Edition database, so I can't generate an AWR report. Thanks

and Chris said...

You've already missed the fastest way:

Add the column as not null with a default of sysdate!

This is an "instant" operation as of 11g:

https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2575782100346318728
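
Against one of your tables that is a single statement, e.g. (a sketch only: the owner and table names below are placeholders, and last_dml_time is the column name from your script):

alter table your_owner.your_table
  add last_dml_time timestamp default systimestamp not null;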

You could drop the new columns and add them back again. But dropping columns generates lots of redo and has locking implications.

Luckily there's a shortcut: set the column to unused. This is also an "instant" operation. You can then add it back with the default and not null:

create table t as 
  select level x from dual
  connect by level <= 10000;
  
alter table t add ( dt timestamp );

alter table t set unused column dt;

alter table t add dt timestamp default systimestamp not null ;

desc t

Name   Null?      Type           
X                 NUMBER         
DT     NOT NULL   TIMESTAMP(6)  


If you don't want to do that, the next fastest way is to do a "create-as-select". Set the updated date by selecting sysdate here:

create table t_updated as 
  select c1, c2, ..., sysdate last_updated
  from   t


You can use dbms_redefinition to do this online:

https://oracle-base.com/articles/10g/online-table-redefinition-enhancements-10gr1
https://oracle-base.com/articles/11g/online-table-redefinition-enhancements-11gr1
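
In outline, a redefinition run looks roughly like this. This is only a sketch: the SCOTT schema, the T / T_UPDATED names and the column mapping are assumptions following the CTAS example above, and the linked articles cover the prerequisites.

declare
  l_errors pls_integer;
begin
  -- check the table can be redefined using its primary key
  dbms_redefinition.can_redef_table ('SCOTT', 'T', dbms_redefinition.cons_use_pk);

  -- the interim table T_UPDATED must already exist with the desired structure,
  -- including the new LAST_UPDATED column; the mapping below fills it with sysdate
  dbms_redefinition.start_redef_table (
    uname        => 'SCOTT',
    orig_table   => 'T',
    int_table    => 'T_UPDATED',
    col_mapping  => 'c1 c1, c2 c2, sysdate last_updated',
    options_flag => dbms_redefinition.cons_use_pk);

  -- copy indexes, triggers, constraints and grants onto the interim table
  dbms_redefinition.copy_table_dependents (
    uname      => 'SCOTT',
    orig_table => 'T',
    int_table  => 'T_UPDATED',
    num_errors => l_errors);

  -- apply changes made during the copy, then swap the tables over
  dbms_redefinition.sync_interim_table ('SCOTT', 'T', 'T_UPDATED');
  dbms_redefinition.finish_redef_table ('SCOTT', 'T', 'T_UPDATED');
end;
/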

But it's still a faff. Adding the defaulted column is my preferred method.


Comments

You guys are the very best

Fiyin Lamidi, May 21, 2018 - 6:45 pm UTC

This is my very first post on AskTOM, but hardly a day goes by that I don't come on to read the different posts. I also follow you on Twitter and read all your other blogs. Thanks for coming up with this idea.
Connor McDonald
May 22, 2018 - 7:21 am UTC

glad we could help

Any suggestions?

A Reader, June 08, 2018 - 4:39 am UTC

Hi,

I tested the two options. The "fast add column" option locks the affected tables for the duration of the operation, which typically lasts over 3 days and makes the database unusable for that period, as it wasn't designed with this task in mind. We have tried avoiding such operations due to previous horrible experiences.
Also, the dbms_redefinition option would result in missing transactions for operations that take place after the CTAS and before the redefinition; for a 24/7 production database, that'll be unacceptable. Furthermore, we are migrating the database to a new data center, and provisioning additional storage to double the database size would stretch resources.

I created a temporary table, described below, and populated it with the select statement also shown below. I have a script that creates update statements from each row of the table; however, I'd like to commit the transactions after every 10,000 rows updated. Any suggestions?

TABLE upd_all_cols
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 TABLE_NAME                                NOT NULL VARCHAR2(30)
 OWNER                                     NOT NULL VARCHAR2(30)
 COLUMN_NAME                                        VARCHAR2(4000)

/* Select statement to populate table upd_all_cols with the names of every table in select schemas having a unique key */

SELECT DISTINCT a.table_name, a.owner, d.column_name
  FROM dba_tables a
       JOIN dba_constraints c
           ON  a.table_name = c.table_name
           AND a.owner = c.owner
       LEFT JOIN dba_cons_columns d
           ON  a.table_name = d.table_name
           AND a.owner = d.owner
           AND a.owner NOT IN ('ANONYMOUS',
                               'APEX_030200',
                               'APEX_PUBLIC_USER',
                               'APPQOSSYS',
                               'CTXSYS',
                               'DBSNMP',
                               'DIP',
                               'EXFSYS',
                               'FLOWS_FILES',
                               'MDSYS',
                               'ORACLE_OCM',
                               'ORDDATA',
                               'ORDPLUGINS',
                               'ORDSYS',
                               'OUTLN',
                               'MDDATA',
                               'PERFSTAT',
                               'MGMT_VIEW',
                               'OLAPSYS',
                               'OWBSYS',
                               'OWBSYS_AUDIT',
                               'SPATIAL_CSW_ADMIN_USR',
                               'SPATIAL_WFS_ADMIN_USR',
                               'SI_INFORMTN_SCHEMA',
                               'SYS',
                               'SYSTEM',
                               'WMSYS',
                               'XDB',
                               'XS$NULL',
                               'SYSMAN',
                               'OLAPSYS',
                               'GG_USER',
                               'INGRIAN',
                               'GG_ADMIN',
                               'QUEST')
           AND (a.owner, a.table_name) IN
                   (SELECT DISTINCT a.owner, a.table_name
                      FROM dba_tab_columns a, dba_objects b
                     WHERE a.table_name = b.object_name
                       AND LOWER (column_name) = 'last_dml_timestamp')
           AND d.column_name IS NOT NULL
           AND c.constraint_type IN ('U', 'P');

/* Script to update rows from table upd_all_cols */

SET SERVEROUTPUT ON
EXEC DBMS_OUTPUT.ENABLE (buffer_size => NULL);

DECLARE
    sqlstmt         VARCHAR2 (4000);

    TYPE update_Cursor IS TABLE OF upd_all_cols%ROWTYPE
        INDEX BY PLS_INTEGER;

    Update_Id       update_Cursor;
    l_index_value   PLS_INTEGER;
BEGIN
    SELECT * BULK COLLECT INTO Update_Id FROM upd_all_cols;

    dbms_output.put_line (Update_Id.COUNT);
    l_index_value := Update_Id.FIRST;

    WHILE (l_index_value IS NOT NULL)
    LOOP
        sqlstmt :=
               'UPDATE '
            || Update_Id (l_index_value).OWNER
            || '.'
            || Update_Id (l_index_value).TABLE_NAME
            || ' SET last_dml_time = SYSDATE
                 WHERE last_dml_time IS NULL ORDER BY '
            || Update_Id (l_index_value).column_name
            || ';';

        IF MOD (Update_Id (l_index_value).column_name.count, 10000) = 0
        THEN
            COMMIT;
        END IF;

        EXECUTE IMMEDIATE sqlstmt;

        DBMS_OUTPUT.put_line (sqlstmt);

        l_index_value := Update_Id.NEXT (l_index_value);
    END LOOP;
END;
/
Connor McDonald
June 08, 2018 - 7:16 am UTC

If you use fast add, you don't need to update *anything*. It is implicitly done, eg

SQL> create table t as select object_id, object_name from dba_objects;

Table created.

SQL>
SQL> alter table t add blah date default sysdate not null;

Table altered.

SQL>
SQL> select blah, count(*) from t
  2  group by blah;

BLAH                  COUNT(*)
------------------- ----------
08/06/2018 15:15:38      82993

SQL> insert into t (object_id, object_name) values (0,0);

1 row created.

SQL>
SQL> select blah, count(*) from t
  2  group by blah;

BLAH                  COUNT(*)
------------------- ----------
08/06/2018 15:15:38      82993
08/06/2018 15:15:45          1


In addition

A Reader, June 08, 2018 - 3:37 pm UTC

Thank you for the suggestion, Connor. Not to sound cheeky, but is it possible to commit every 10,000 records updated with the "fast add column" to reduce the length of the locking? I know it's DDL with an implicit commit, but for a production system, locking the tables for too long would be a concern. Also, does it work the same on a Standard Edition database as on an Enterprise Edition database? Thanks
Connor McDonald
June 10, 2018 - 9:42 am UTC

(I think) you're missing my point (which I could probably have described better :-))

alter table T add column X default sysdate not null

will take about 0.1 seconds if the table has 100 rows.

It will also take about 0.1 seconds if the table has 100 billion rows.

POC

A Reader, June 11, 2018 - 5:27 pm UTC

Hi Connor,

I really appreciate the help and insight on this topic. I tested the "fast add column" on 2 tables with different row counts, shown below.

SQL> SET TIMING ON
SQL> set feedback on
COUNT(*)
----------
563654
1 row selected.
SQL> alter table CT.CS add last_dml timestamp default systimestamp not null;

Table altered.

Elapsed: 00:00:06.83


SQL> SET TIMING ON
SQL> set feedback on
COUNT(*)
----------
246112075
1 row selected.
SQL> alter table CT.US add last_dml timestamp default systimestamp not null;

Table altered.

Elapsed: 00:47:04.68


Is the "fast add column" feature version and edition specific because our database is 11.2.0.3 "standard edition" which might account for the difference in behavior, also the above tables are the largest and smallest in the database with over 2,000 tables in between. The entire script using this feature across all the tables in a test scenario took 2day, 16hrs and 49mins, during which most to the tables were locked for the duration of the operation, I don't think this is an option for our Production environment which brings us back to the original question of using mod in the update script.


Connor McDonald
June 12, 2018 - 1:45 am UTC

Ah.... you've hit the following:

Bug 20436283 : FAST ADD COLUMN NOT WORKING WHEN ADDING A TIMESTAMP COLUMN

which you can see a demo of here

SQL> create table t as
  2  select d.* from dba_objects d,
  3   ( select 1 from dual connect by level <= 20 );

Table created.

SQL>
SQL> set timing on
SQL> alter table T add c1 timestamp default systimestamp not null;

Table altered.

Elapsed: 00:00:22.30
SQL> alter table T add c2 date      default sysdate      not null;

Table altered.

Elapsed: 00:00:00.00


but try this little trick :-)

SQL> alter table T add c3 timestamp default sysdate      not null;

Table altered.

Elapsed: 00:00:00.00
SQL> alter table T modify c3 timestamp default systimestamp;

Table altered.

Elapsed: 00:00:00.00
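
To roll that out across many tables, a driver loop along the lines of your original script could issue both statements per table. Again just a sketch: the last_dml_timestamp column name and the 'CT' owner filter are assumptions, so adjust them to your schemas.

begin
  for r in (select owner, table_name
            from   dba_tables
            where  owner in ('CT')       -- placeholder: list your application schemas here
            order  by owner, table_name)
  loop
    -- tables that already have the column will raise ORA-01430; handle as needed
    execute immediate
      'alter table "' || r.owner || '"."' || r.table_name ||
      '" add last_dml_timestamp timestamp default sysdate not null';

    execute immediate
      'alter table "' || r.owner || '"."' || r.table_name ||
      '" modify last_dml_timestamp timestamp default systimestamp';
  end loop;
end;
/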


Thank you so much

A Reader, June 12, 2018 - 2:14 am UTC

It worked perfectly, just as you described. Thank you so much, Connor. You guys are the best.
Connor McDonald
June 12, 2018 - 3:44 am UTC

glad we could help out

Not quite as expected

A Reader, June 23, 2018 - 4:12 am UTC

Hi Guys,

The timestamp column worked as expected when inserting a new record, but not when updating an existing record. Any suggestions for a workaround, without having to add an update trigger to all the tables in the database?



TABLE CT.TXN
 Name                 Null?    Type           Default
 -------------------- -------- -------------- -----------------
 TXN_ID               NOT NULL NUMBER(19)
 TRANS_TYPE           NOT NULL VARCHAR2(255)
 DATE_CREATED                  TIMESTAMP(6)   current_timestamp
 DATE_UPDATED                  TIMESTAMP(6)
 LAST_DML_TIMESTAMP            TIMESTAMP(6)   systimestamp


CREATE OR REPLACE TRIGGER CT.TXN_DATE_UPDATED
    BEFORE INSERT OR UPDATE ON CT.TXN
    FOR EACH ROW
BEGIN
    :NEW.DATE_UPDATED := SYSTIMESTAMP;
END;
/

SQL> INSERT INTO CT.TXN (TXN_ID, TRANS_TYPE)
     VALUES (5454, 'AA')

1 row created.

SQL> COMMIT
Commit complete.


SQL> SELECT TXN_ID,
            TRANS_TYPE,
            DATE_CREATED,
            DATE_UPDATED,
            LAST_DML_TIMESTAMP
     FROM   CT.TXN
     WHERE  TXN_ID = 5454

TXN_ID  TRANS_TYPE  DATE_CREATED                  DATE_UPDATED                  LAST_DML_TIMESTAMP
  5454  AA          6/22/2018 11:51:49.166679 PM  6/22/2018 11:51:49.166777 PM  6/22/2018 11:51:49.166673 PM

1 row selected.

SQL> UPDATE CT.TXN
     SET    trans_type = 'MD'
     WHERE  TXN_ID = 5454
     AND    TRANS_TYPE = 'AA'

1 row updated.

SQL> SELECT TXN_ID,
            TRANS_TYPE,
            DATE_CREATED,
            DATE_UPDATED,
            LAST_DML_TIMESTAMP
     FROM   CT.TXN
     WHERE  TXN_ID = 5454

TXN_ID  TRANS_TYPE  DATE_CREATED                  DATE_UPDATED                  LAST_DML_TIMESTAMP
  5454  MD          6/22/2018 11:51:49.166679 PM  6/23/2018 12:00:02.915504 AM  6/22/2018 11:51:49.166673 PM

1 row selected.

Connor McDonald
June 24, 2018 - 10:39 am UTC

That is (and always has been) the definition of a default clause.

If you want to update when a record was touched, you'll need to update that column.
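
In other words, any statement that should bump it has to set the column explicitly, e.g. (a sketch reusing the table and values from your example):

update ct.txn
set    trans_type = 'MD',
       last_dml_timestamp = systimestamp
where  txn_id = 5454;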

Addendum

A Reader, June 24, 2018 - 12:43 am UTC

I should also add this is an 11.2.0.3 standard edition database

clarification

A Reader, June 25, 2018 - 3:33 pm UTC

Hi Connor,

/*

That is (and always has been) the definition of a default clause.

If you want to update when a record was touched, you'll need to update that column.

*/

Please can you add more clarity to the above? Does "That is (and always has been) the definition of a default clause" mean the column is updated by the system (auto-magically) when the timestamp column is updated, or is that a side effect of the bug/workaround? Also, for "If you want to update when a record was touched, you'll need to update that column", would this involve adding the default column to the SET clause of every update statement in every procedure, package and trigger?

Also, is there a performance implication for the database if an "after update on each row" trigger is added to update the default column?

Thanks
Chris Saxon
June 25, 2018 - 3:52 pm UTC

A default only applies on insert. Not update. You have to build your own solution for updating values.

And yes, there is overhead of doing this in a trigger (vs. part of the update):

create table t1 ( x int, y timestamp );

create table t2 ( x int, y timestamp );

create or replace
  trigger TRG2 before update on t2
  for each row
begin
  :new.y := systimestamp;
end;
/

insert into t1
  select 1, systimestamp from dual
  connect by level <= 100000;

insert into t2
  select 1, systimestamp from dual
  connect by level <= 100000;

commit;

set timing on
update t1
set    x = 2, y = systimestamp;

100,000 rows updated.

Elapsed: 00:00:10.399

update t2
set    x = 2;

100,000 rows updated.

Elapsed: 00:00:15.648


Note also that using the trigger method, the timestamp is the time the trigger fired. But doing it in the update is the statement time:

select count ( distinct y ) from t1;

  COUNT(DISTINCTY)
                 1

select count ( distinct y ) from t2;

  COUNT(DISTINCTY)
            100000

Rollback

A reader, September 19, 2018 - 3:39 am UTC

Hi Guys,

I'd like to find out what the best option is to roll back/remove the timestamp column, and any possible risks/gotchas.

1. Set the column to unused
2. Drop the column


Chris Saxon
September 19, 2018 - 10:35 am UTC

As mentioned above, set unused is instant and does minimal work. The time taken & work to drop a column is proportional to the number of rows in the table.

So unless the table stores few rows, set unused is the way to go.

Of course, with set unused the column is still technically there. It becomes a hidden column.

To remove it completely, you can drop the unused columns:

create table t (
  c1 int, c2 int
);

alter table t set unused column c2;

select column_name, hidden_column 
from   user_tab_cols
where  table_name = 'T';

COLUMN_NAME                  HIDDEN_COLUMN   
C1                           NO              
SYS_C00002_18091903:34:12$   YES       

alter table t drop unused columns;

select column_name, hidden_column 
from   user_tab_cols
where  table_name = 'T';

COLUMN_NAME   HIDDEN_COLUMN   
C1            NO          


The database will have to do the work to get rid of it at this point. But, unlike drop, this is a non-blocking operation.
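
For very large tables you can also break that cleanup into batches with a checkpoint, so it doesn't have to hold all of its undo at once. A sketch, with 10,000 as an arbitrary interval:

-- checkpoint after every 10,000 rows processed; if the operation is
-- interrupted, resume it with: alter table t drop columns continue
alter table t drop unused columns checkpoint 10000;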

Re-Rollback

Fiyinfoluwa Lamidi, September 19, 2018 - 7:42 pm UTC

Hi Chris,

The statement /* As mentioned above, set unused is instant and does minimal work */ has not matched my experience, as shown below.


SQL> CREATE TABLE t
  2  AS
  3  SELECT d.*
  4  FROM   dba_objects d,
  5         ( SELECT 1
  6           FROM DUAL
  7           CONNECT BY LEVEL <= 1000);

Table created.

Elapsed: 00:05:56.30

SQL> select count(*) from t;

COUNT(*)
----------
13169800

SQL> SET TIMING ON
SQL> SET ECHO ON
SQL>
SQL> create table t1 as select * from t;

Table created.

Elapsed: 00:00:39.45
SQL> create table t2 as select * from t;

Table created.

Elapsed: 00:00:39.04
SQL> create table t3 as select * from t;

Table created.

Elapsed: 00:00:20.99
SQL> create table t4 as select * from t;

Table created.

Elapsed: 00:00:20.98
SQL> create table t5 as select * from t;

Table created.

Elapsed: 00:00:20.98

SQL> desc t1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 OBJECT_NAME                                        VARCHAR2(128)
 SUBOBJECT_NAME                                     VARCHAR2(30)
 OBJECT_ID                                          NUMBER
 DATA_OBJECT_ID                                     NUMBER
 OBJECT_TYPE                                        VARCHAR2(19)
 CREATED                                            DATE
 LAST_DDL_TIME                                      DATE
 STATUS                                             VARCHAR2(7)
 TEMPORARY                                          VARCHAR2(1)
 GENERATED                                          VARCHAR2(1)
 SECONDARY                                          VARCHAR2(1)
 NAMESPACE                                          NUMBER
 EDITION_NAME                                       VARCHAR2(30)
 C3                                        NOT NULL TIMESTAMP(6)

SQL> desc t2
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 OBJECT_NAME                                        VARCHAR2(128)
 SUBOBJECT_NAME                                     VARCHAR2(30)
 OBJECT_ID                                          NUMBER
 DATA_OBJECT_ID                                     NUMBER
 OBJECT_TYPE                                        VARCHAR2(19)
 CREATED                                            DATE
 LAST_DDL_TIME                                      DATE
 STATUS                                             VARCHAR2(7)
 TEMPORARY                                          VARCHAR2(1)
 GENERATED                                          VARCHAR2(1)
 SECONDARY                                          VARCHAR2(1)
 NAMESPACE                                          NUMBER
 EDITION_NAME                                       VARCHAR2(30)
 C3                                        NOT NULL TIMESTAMP(6)

SQL>
SQL> alter table t1 add c4 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.02
SQL> alter table t1 modify c4 timestamp default systimestamp;

Table altered.

Elapsed: 00:00:00.02
SQL> alter table t2 add c4 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.01
SQL> alter table t2 modify c4 timestamp default systimestamp;

Table altered.

Elapsed: 00:00:00.00
SQL>
SQL> alter table t1 set unused column c4;

Table altered.

Elapsed: 00:04:24.52
SQL>
SQL> alter table t2 set unused column c4;

Table altered.

Elapsed: 00:03:53.56

SQL> alter table t3 add c4 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.02
SQL>
SQL>
SQL> alter table t3 modify c4 timestamp default systimestamp;

Table altered.

Elapsed: 00:00:00.02
SQL>
SQL> alter table t4 add c4 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> alter table t4 modify c4 timestamp default systimestamp;

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> SET TIMING ON
SQL> SET SERVEROUTPUT ON SIZE UNLIMITED

DECLARE
    table_own    VARCHAR2 (30);
    table_name   VARCHAR2 (30);
    sqlstmt      VARCHAR2 (1000);

    CURSOR table_cur IS
        SELECT owner, table_name
          FROM dba_tables
         WHERE table_name IN ('T3', 'T4')
         ORDER BY owner, table_name;
BEGIN
    OPEN table_cur;

    LOOP
        FETCH table_cur INTO table_own, table_name;

        EXIT WHEN table_cur%NOTFOUND;

        sqlstmt :=
               'alter table '
            || table_own
            || '.'
            || table_name
            || ' set unused (c4)';

        EXECUTE IMMEDIATE sqlstmt;
    END LOOP;

    CLOSE table_cur;
END;
/

PL/SQL procedure successfully completed.

Elapsed: 00:08:43.02
SQL> SQL>


In both cases, the operation is not instant for a table with millions of rows. With 1,000 tables of similar sizes, it'll make several packages and procedures that depend on these tables unavailable for a while. Is there a better way of achieving this?

Connor McDonald
September 21, 2018 - 1:13 am UTC

That looks like an issue in 11.2. I see the same delay in 11.2, but can't replicate that in 12.2

SQL> set timing on
SQL> select count(*) from t1;

  COUNT(*)
----------
  79524000

1 row selected.

Elapsed: 00:00:21.82
SQL> alter table t1 add c4 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.02
SQL> alter table t1 modify c4 timestamp default systimestamp;

Table altered.

Elapsed: 00:00:00.01
SQL> alter table t1 set unused column c4;

Table altered.

Elapsed: 00:00:00.01


I'd suggest throwing a trace on the "set unused" command and seeing where the time was lost, because it *should* be just a dictionary operation
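
A sketch of one way to capture that, assuming you can use DBMS_MONITOR and read the trace file from the diagnostic destination:

-- trace the current session (including wait events), run the DDL, stop tracing,
-- then format the resulting trace file with tkprof to see where the time goes
exec dbms_monitor.session_trace_enable(waits => true, binds => true)

alter table t1 set unused column c4;

exec dbms_monitor.session_trace_disable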
