How to forecast archive generation for a 350 GB table with no large objects

Question and Answer

Connor McDonald

Thanks for the question, Nikhil.

Asked: December 12, 2018 - 7:42 pm UTC

Last updated: December 14, 2018 - 2:55 am UTC

Version: 12.2

Viewed 1000+ times

You Asked

Hi Gurus,

Thanks for helping us whenever we need you.

I have a task to import a 350 GB table; I will create the indexes later. It's a 12.2 CDB with a single PDB, and we have a standby DB with force_logging enabled.

Is there a method to forecast archive generation before the import?

Regards,
Lancerique.

and Connor said...

We can do a little test to get an estimate:

SQL> create table t ( x char(1000));

Table created.

SQL>
SQL> insert /*+ APPEND */ into t
  2  select rownum
  3  from
  4  ( select 1 from dual connect by level <= 1000 ),
  5  ( select 1 from dual connect by level <= 1000 );

1000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','T');

PL/SQL procedure successfully completed.

SQL> select * from user_tables
  2  where table_name = 'T'
  3  @pr
==============================
TABLE_NAME                    : T
TABLESPACE_NAME               : LARGETS
CLUSTER_NAME                  :
IOT_NAME                      :
STATUS                        : VALID
PCT_FREE                      : 10
PCT_USED                      :
INI_TRANS                     : 1
MAX_TRANS                     : 255
INITIAL_EXTENT                : 65536
NEXT_EXTENT                   : 1048576
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
PCT_INCREASE                  :
FREELISTS                     :
FREELIST_GROUPS               :
LOGGING                       : NO
BACKED_UP                     : N
NUM_ROWS                      : 1000000
BLOCKS                        : 143486
EMPTY_BLOCKS                  : 0
AVG_SPACE                     : 0
CHAIN_CNT                     : 0
AVG_ROW_LEN                   : 1001
AVG_SPACE_FREELIST_BLOCKS     : 0
NUM_FREELIST_BLOCKS           : 0
DEGREE                        : 1
INSTANCES                     : 1
CACHE                         : N
TABLE_LOCK                    : ENABLED
SAMPLE_SIZE                   : 1000000
LAST_ANALYZED                 : 14-DEC-18
PARTITIONED                   : NO
IOT_TYPE                      :
TEMPORARY                     : N
SECONDARY                     : N
NESTED                        : NO
BUFFER_POOL                   : DEFAULT
FLASH_CACHE                   : DEFAULT
CELL_FLASH_CACHE              : DEFAULT
ROW_MOVEMENT                  : DISABLED
GLOBAL_STATS                  : YES
USER_STATS                    : NO
DURATION                      :
SKIP_CORRUPT                  : DISABLED
MONITORING                    : YES
CLUSTER_OWNER                 :
DEPENDENCIES                  : DISABLED
COMPRESSION                   : DISABLED
COMPRESS_FOR                  :
DROPPED                       : NO
READ_ONLY                     : NO
SEGMENT_CREATED               : YES
RESULT_CACHE                  : DEFAULT
CLUSTERING                    : NO
ACTIVITY_TRACKING             :
DML_TIMESTAMP                 :
HAS_IDENTITY                  : NO
CONTAINER_DATA                : NO
INMEMORY                      : DISABLED
INMEMORY_PRIORITY             :
INMEMORY_DISTRIBUTE           :
INMEMORY_COMPRESSION          :
INMEMORY_DUPLICATE            :
DEFAULT_COLLATION             : USING_NLS_COMP
DUPLICATED                    : N
SHARDED                       : N
EXTERNAL                      : NO
CELLMEMORY                    :
CONTAINERS_DEFAULT            : NO
CONTAINER_MAP                 : NO
EXTENDED_DATA_LINK            : NO
EXTENDED_DATA_LINK_MAP        : NO
INMEMORY_SERVICE              :
INMEMORY_SERVICE_NAME         :
CONTAINER_MAP_OBJECT          : NO

PL/SQL procedure successfully completed.

SQL> select 143486 * 8192 from dual;

143486*8192
-----------
 1175437312
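
(The 8192 above is just the database block size. If you'd rather not hard-code it, a small sketch that derives it from the tablespace instead, assuming the table sits in an ordinary tablespace:

select t.blocks * ts.block_size as approx_bytes
from   user_tables t
       join user_tablespaces ts
       on   ts.tablespace_name = t.tablespace_name
where  t.table_name = 'T';
)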

SQL> select sum(bytes) from dba_segments
  2  where segment_name = 'T';

SUM(BYTES)
----------
1218510848

1 row selected.


So our table occupies approximately 1.2 GB of space. Now we'll export it, drop it, re-import it, and check the delta in our system-wide redo consumption.

C:\>expdp mcdonac/****** dumpfile=t.dmp directory=TEMP tables=T

Export: Release 12.2.0.1.0 - Production on Fri Dec 14 10:45:00 2018

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "MCDONAC"."SYS_EXPORT_TABLE_01":  mcdonac/******** dumpfile=t.dmp directory=TEMP tables=T
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MCDONAC"."T"                               960.3 MB 1000000 rows
Master table "MCDONAC"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MCDONAC.SYS_EXPORT_TABLE_01 is:
  C:\TEMP\T.DMP
Job "MCDONAC"."SYS_EXPORT_TABLE_01" successfully completed at Fri Dec 14 10:45:32 2018 elapsed 0 00:00:31


SQL> drop table t purge;

Table dropped.


SQL> select * from v$sysstat
  2  where name like 'redo size%';

STATISTIC# NAME                                                    CLASS      VALUE
---------- -------------------------------------------------- ---------- ----------
       288 redo size                                                   2  409354092
       290 redo size for lost write detection                          2          0
       291 redo size for direct writes                                 2     549264

3 rows selected.
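
(If you'd rather not transcribe the numbers by hand, you could snapshot the baseline into a table and compute the delta afterwards. A sketch, assuming you have SELECT access on v$sysstat; REDO_BASELINE is just a throwaway name:

create table redo_baseline as
select name, value
from   v$sysstat
where  name like 'redo size%';

-- ... run the import, then:
select s.name, s.value - b.value as redo_delta
from   v$sysstat s
       join redo_baseline b on b.name = s.name;
)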

C:\>impdp mcdonac/****** dumpfile=t.dmp directory=TEMP

Import: Release 12.2.0.1.0 - Production on Fri Dec 14 10:50:21 2018

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "MCDONAC"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_FULL_01":  mcdonac/******** dumpfile=t.dmp directory=TEMP
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MCDONAC"."T"                               960.3 MB 1000000 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "MCDONAC"."SYS_IMPORT_FULL_01" successfully completed at Fri Dec 14 10:50:46 2018 elapsed 0 00:00:24


SQL>
SQL> /

STATISTIC# NAME                                                    CLASS      VALUE
---------- -------------------------------------------------- ---------- ----------
       288 redo size                                                   2 1589530412
       290 redo size for lost write detection                          2          0
       291 redo size for direct writes                                 2 1177370524

3 rows selected.



So we can see the redo consumption (about 1.18 GB) is pretty much the same as the volume of data imported:

SQL> select 1589530412 - 409354092 from dual;

1589530412-409354092
--------------------
          1180176320
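
Scaling that ratio up gives a rough forecast for your table. Assuming similar row shapes and, as you say, no large objects, the arithmetic is simply:

select round( 350 * (1180176320 / 1218510848), 1 ) as estimated_redo_gb
from   dual;

That comes out at roughly 339 GB, i.e. expect archive generation close to the size of the table being imported.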


If that is going to be excessive, you could consider doing the operation in NOLOGGING mode. Note that with force logging enabled, NOLOGGING is ignored, so you would need to disable force logging for the duration, and then do the normal steps on your standby (copy the affected datafile(s) over and recover them).
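
A sketch of that route, assuming you can tolerate a short window where changes are not shipped to the standby (coordinate with your backup strategy first):

SQL> alter database no force logging;

-- 12c Data Pump can suppress redo for the load itself; note this
-- transform is ignored while force logging is in effect
C:\>impdp mcdonac/****** dumpfile=t.dmp directory=TEMP transform=disable_archive_logging:y

SQL> alter database force logging;

Then copy the affected datafile(s) to the standby and let recovery catch up, as described above.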
