Question and Answer

Connor McDonald

Thanks for the question, Ram.

Asked: January 31, 2018 - 6:08 am UTC

Last updated: February 06, 2018 - 5:49 am UTC

Version: 12.1.0.2

Viewed 1000+ times

You Asked

Hi Oracle Masters,

Two questions:

1) I read the below line in "Advanced Security Guide" for TDE:

"If you enable TDE column encryption on a very large table, then you may need to increase the redo log size to accommodate the operation".

How are TDE and redo related? What exactly happens in the background (block level) when we enable TDE for an existing column? New blocks are used for the encrypted values and that is the reason why redo gets generated?

This is not clearly covered in the documentation.

2) With regards to the storage overhead of TDE, documentation says:

"The maximum storage overhead for each encrypted value is from 1 to 52 bytes".

I know that 52 is broken down to = 16 bytes padding for encryption + 20 bytes for integrity check (MAC) + 16 bytes for SALT.

=========
Scenario
=========

Table: x
Column: y
Number of rows in x: 100
Data Type of y: NUMBER (22 bytes)
Average column length of y: 4 bytes

Consider the encryption padding to be multiples of 16 bytes and i don't want SALT & MAC to be enabled for this column. How much extra space should i procure before I go with the encryption?
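
My rough back-of-the-envelope estimate, if I have understood the padding correctly: each 4-byte value would pad up to a single 16-byte AES block, so with no SALT and no MAC that is about 12 extra bytes per value, i.e. only around 1.2 KB for the 100 rows before any block-level overhead. Is that the right way to think about it?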

and Connor said...

It's easy to run a simple benchmark to see what the impact will be:

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "TDEpassword";

System altered.

SQL> create table t1 ( x int, y int );

Table created.

SQL> create table t2 ( x int, y int ENCRYPT);

Table created.

SQL>
SQL> insert into t1 select rownum, rownum from dual connect by level <= 100000;

100000 rows created.

SQL> insert into t2 select rownum, rownum from dual connect by level <= 100000;

100000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select segment_name, bytes from user_segments where segment_name in ('T1','T2');

SEGMENT_NAME                        BYTES
------------------------------ ----------
T1                                2097152
T2                                7340032


So for this example, we had a 5 megabyte overhead for 100,000 rows. Obviously you can adjust this test case for whatever encryption parameters are specific to you.
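
For example, to get closer to the no-salt, no-MAC scenario in your question, the encrypted column can be declared with those options spelled out - a sketch only, so check the exact ENCRYPT clause against the documentation for your release:

SQL> create table t3 ( x int, y int encrypt using 'AES128' 'NOMAC' no salt );

SQL> insert into t3 select rownum, rownum from dual connect by level <= 100000;

SQL> select segment_name, bytes from user_segments where segment_name = 'T3';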

And that also tells you why there is a redo overhead. You are writing more data, both as the data itself and as the undo for it, which is also logged in the redo. More data = more redo.
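
If you want to see that directly, you can snapshot your session's 'redo size' statistic before and after the encryption - a minimal sketch using the standard v$mystat / v$statname views:

SQL> select s.value as redo_bytes
  2  from   v$mystat s, v$statname n
  3  where  n.statistic# = s.statistic#
  4  and    n.name = 'redo size';

Run it, do the alter, run it again, and the difference is the redo your session generated.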


Comments

Follow up

Ram Dittakavi, January 31, 2018 - 8:29 am UTC

Hi Connor,

More keen to know what happens when you encrypt an existing column and the space overhead for that? I want to plan the additional storage that is needed for this.
Connor McDonald
February 01, 2018 - 12:48 am UTC

Well... we gave you a script to do the demo. Was it that tough to add a couple of lines to it? Oh well...

SQL> alter table t1 modify y encrypt;

Table altered.

SQL> select segment_name, bytes from user_segments
  2  where segment_name in ('T1','T2');

SEGMENT_NAME                        BYTES
------------------------------ ----------
T1                               11534336
T2                                7340032

SQL> alter table t1 move;

Table altered.

SQL> select segment_name, bytes from user_segments
  2  where segment_name in ('T1','T2');

SEGMENT_NAME                        BYTES
------------------------------ ----------
T1                                8388608
T2                                7340032


So you can see you'll get a segment size increase, along with some "dead space" which could be reclaimed (if you really needed to) via a "move".
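
If you want to quantify that dead space before deciding whether the move is worth it, something along these lines works for a segment in an ASSM tablespace - a sketch using DBMS_SPACE.SPACE_USAGE:

SQL> set serveroutput on
SQL> declare
  2    l_unf  number;  l_unf_b  number;
  3    l_fs1  number;  l_fs1_b  number;
  4    l_fs2  number;  l_fs2_b  number;
  5    l_fs3  number;  l_fs3_b  number;
  6    l_fs4  number;  l_fs4_b  number;
  7    l_full number;  l_full_b number;
  8  begin
  9    dbms_space.space_usage(
 10      segment_owner      => user,   segment_name => 'T1', segment_type => 'TABLE',
 11      unformatted_blocks => l_unf,  unformatted_bytes => l_unf_b,
 12      fs1_blocks => l_fs1, fs1_bytes => l_fs1_b,
 13      fs2_blocks => l_fs2, fs2_bytes => l_fs2_b,
 14      fs3_blocks => l_fs3, fs3_bytes => l_fs3_b,
 15      fs4_blocks => l_fs4, fs4_bytes => l_fs4_b,
 16      full_blocks => l_full, full_bytes => l_full_b );
 17    dbms_output.put_line('mostly empty blocks (75-100% free): '||l_fs4);
 18    dbms_output.put_line('full blocks                       : '||l_full);
 19  end;
 20  /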

To Ram

Rajeshwaran, Jeyabal, January 31, 2018 - 2:57 pm UTC

I want to plan the additional storage that is needed for this.

The right answer to this would be to benchmark in your pre-prod/UAT environment and measure the additional storage required for your production systems.
Connor McDonald
February 01, 2018 - 1:00 am UTC

Indeed.

Follow Up

Ram, February 01, 2018 - 10:42 am UTC

Hi,

I tried the below:

SQL> create table t1 ( x NUMBER, y NUMBER ) TABLESPACE TDE_SDITTAKA;

SQL> insert into t1 select dbms_random.random,dbms_random.random from dual connect by level <= 1048576;

SQL> select segment_name, bytes/1048576 "MB" from dba_segments where segment_name in ('T1');

SEGMENT_NAME                           MB
------------------------------ ----------
T1                                     35

SQL> select AVG_ROW_LEN from dba_tables where table_name='T1';

AVG_ROW_LEN
-----------
14

SQL> select OWNER, TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_LENGTH, AVG_COL_LEN from dba_tab_columns where owner='SDITTAKA' and table_name='T1';

OWNER      TABLE_NAME  COLUMN_NAME  DATA_TYPE  DATA_LENGTH AVG_COL_LEN
---------- ----------- ------------ ---------- ----------- -----------
SDITTAKA   T1          X            NUMBER              22           8
SDITTAKA   T1          Y            NUMBER              22           8

SQL> alter table t1 modify y encrypt;

Table altered.

Elapsed: 00:05:30.83

SQL> exec DBMS_STATS.GATHER_TABLE_STATS (ownname => 'SDITTAKA' , tabname => 'T1',cascade => true, estimate_percent => dbms_stats.auto_sample_size,method_opt=>'FOR ALL COLUMNS SIZE AUTO', granularity => 'ALL', degree => 8);

SQL> select segment_name, bytes/1048576 "MB" from dba_segments where segment_name in ('T1');

SEGMENT_NAME                           MB
------------------------------ ----------
T1                                    112

SQL> select AVG_ROW_LEN from dba_tables where table_name='T1';

AVG_ROW_LEN
-----------
60

SQL> select OWNER, TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_LENGTH, AVG_COL_LEN from dba_tab_columns where owner='SDITTAKA' and table_name='T1';

OWNER      TABLE_NAME  COLUMN_NAME  DATA_TYPE  DATA_LENGTH AVG_COL_LEN
---------- ----------- ------------ ---------- ----------- -----------
SDITTAKA   T1          X            NUMBER              22           8
SDITTAKA   T1          Y            NUMBER              22          53

As you can see, the average column length before the encryption was 8 and now it is 53. How can a 22-byte data type column accommodate 53 bytes of column values? Even the average row length has gone from 14 to 60, which is more than 44 (22 + 22). Is it taking the ghost copies as well?
Connor McDonald
February 02, 2018 - 12:27 am UTC

That is just the overhead of the encryption, i.e. the data at rest. All other attributes of the column remain unchanged:

SQL> create table t2 ( y number(6) );

Table created.

SQL> insert into t2 values (123456);

1 row created.

SQL> select dump(y) from t2;

DUMP(Y)
----------------------------------------------------------------------
Typ=2 Len=4: 195,13,35,57

1 row selected.

SQL> alter table t2 modify y encrypt;

Table altered.

SQL> select dump(y) from t2;

DUMP(Y)
----------------------------------------------------------------------
Typ=2 Len=4: 195,13,35,57

1 row selected.

SQL> exec dbms_stats.gather_table_stats('','T2');

PL/SQL procedure successfully completed.

SQL> select avg_col_len from user_tab_cols where table_name = 'T2';

AVG_COL_LEN
-----------
         53

1 row selected.


It's a bit like the metaphor of emailing an attachment to someone. If you email a 1MB picture, typically around 1.3MB will be transmitted because the data is encoded. We never "see" that - we see 1MB being sent, and the recipient sees 1MB being received.
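
If you line that 53 up with the overheads you quoted from the documentation, it reconciles: the 4-byte number pads out to a single 16-byte AES block, plus 20 bytes for the MAC, plus 16 bytes for the salt, plus the usual 1 length byte, ie 16 + 20 + 16 + 1 = 53. The jump in average row length (roughly 8 + 53 versus the old 8 + 8) is the same arithmetic.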

Follow Up

Ram, February 01, 2018 - 5:00 pm UTC

Found in the below link, the maximum allowable size for various data types:

https://docs.oracle.com/cloud/latest/db121/ASOAG/asotrans_config.htm#GUID-E9DFE595-0119-4189-84AD-19FFE5683CC6__CJAIDCFJ

But NUMBER is not listed. In the above scenario, did it automatically adjust to hold an encrypted value larger than 22 bytes?

Follow Up

Ram, February 02, 2018 - 6:18 am UTC

Did this:

SQL> create table t3 ( y number(2)) tablespace TDE1;

Table created.

SQL> insert into t3 values (12);

1 row created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','T3');

PL/SQL procedure successfully completed.

SQL> select avg_col_len from user_tab_cols where table_name = 'T3';

AVG_COL_LEN
-----------
3

SQL> select dump(y) from t3;

DUMP(Y)
-------------------
Typ=2 Len=2: 193,13

I haven't even encrypted the table but the AVG_COL_LEN is showing 3. Any particular reason?
Connor McDonald
February 03, 2018 - 4:31 am UTC

2 bytes for the data, plus 1 byte holding the column length so you know where the next column starts in the row.

SQL> create table t ( x varchar2(100));

Table created.

SQL>
SQL> insert into t
  2  select rpad('x',10,'x') from dual
  3  connect by level <= 1000;

1000 rows created.

SQL>
SQL> exec dbms_stats.gather_table_stats('','T');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_col_len from user_tab_cols where table_name = 'T';

AVG_COL_LEN
-----------
         11

1 row selected.

SQL>
SQL>
SQL> truncate table t;

Table truncated.

SQL>
SQL> insert into t
  2  select rpad('x',20,'x') from dual
  3  connect by level <= 1000;

1000 rows created.

SQL>
SQL> exec dbms_stats.gather_table_stats('','T');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_col_len from user_tab_cols where table_name = 'T';

AVG_COL_LEN
-----------
         21

1 row selected.

SQL>
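
A quick cross-check of that relationship, assuming the stats reflect the current data (avg_col_len is roughly the average internal column size plus that 1 length byte) - this should come back as 21 for the second load:

SQL> select round(avg(vsize(x))) + 1 as expected_avg_col_len from t;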


Follow up

Ram Dittakavi, February 05, 2018 - 2:41 pm UTC

Thanks, Connor. I've seen a different behavior while testing on a bigger table.

=================
Before Encryption
=================

Table Size : 17.8125 GB
Avg Col Len : 5 bytes
Avg Row Len : 250 bytes

=================
After Encryption
=================

Table Size : 17.8125 GB <----
Avg Col Len : 17 bytes
Avg Row Len : 262 bytes

=================
After Table Move
=================

Table Size : 18.625 GB <----
Avg Col Len : 17 bytes
Avg Row Len : 262 bytes
Connor McDonald
February 06, 2018 - 5:49 am UTC

If your table has the default pctfree = 10, then you could easily:

- start with a table of size X, where each block is 90% full
- perform operations that grow the rows, so the table is still size X, but now each block is 100% full
- an alter table move will then rebuild the table so that each block is again only 90% full, as per the pctfree setting - hence the larger size
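
You can check (and, if the rows are not going to grow any further, tighten) that setting before the move - a sketch, since whether pctfree 0 is safe depends entirely on your update pattern:

SQL> select pct_free from user_tables where table_name = 'T1';

SQL> alter table t1 pctfree 0;

SQL> alter table t1 move;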
