  • Partitioning a table caused 2x growth in size


Chris Saxon

Thanks for the question, Jim.

Asked: November 01, 2021 - 1:26 pm UTC

Last updated: September 02, 2024 - 2:32 pm UTC

Version: 19.10.0

Viewed 1000+ times

You Asked

Using CTAS, I created a new partitioned, compressed table from an unpartitioned compressed table. The segment size went from 6GB to about 10GB, with around 2000 partitions. I was able to replicate this growth in two separate environments. I'm assuming this is normal overhead, but I was hoping for confirmation.

Thanks.

and Connor said...

Partitioning does not have an overhead per se ... *but* partitioning is designed for large tables. Because of this, we initially allocate space in larger chunks than we would for a normal table.

e.g., a normal table:

SQL> create table t as select * from dba_objects;

Table created.

SQL> select extent_id, bytes
  2  from   user_extents
  3  where  segment_name = 'T';

 EXTENT_ID      BYTES
---------- ----------
         0      65536
         1      65536
         2      65536
         3      65536
         4      65536
         5      65536
         6      65536
         7      65536
         8      65536
         9      65536
        10      65536
        11      65536
        12      65536
        13      65536
        14      65536
        15      65536
        16    1048576
        17    1048576
        18    1048576
        19    1048576
        20    1048576
        21    1048576
        22    1048576
        23    1048576
        24    1048576
        25    1048576
        26    1048576
        27    1048576

28 rows selected.


We start with 64KB extents, then move to 1MB, and eventually we will use 8MB and 64MB extents if the table gets huge.

With partitioning, we assume you're going to get huge anyway, so we do this:


SQL> create table t1 partition by list ( owner ) automatic
  2  ( partition p1 values ('SYSTEM') )
  3  as select * from dba_Objects;

Table created.

SQL> select extent_id, bytes
  2  from   user_extents
  3  where  segment_name = 'T1';

 EXTENT_ID      BYTES
---------- ----------
         0    8388608
         0    8388608
         1    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608
         0    8388608

53 rows selected.



We *start* with 8MB extents.

I think you might be going a little hard with your partitions: 2000 partitions for 6GB is around 3MB per partition, so you can see why the table grew - at a minimum of 8MB per partition, 2000 partitions means roughly 16GB of allocated space.

A partition is typically no smaller than 1-2GB except in special circumstances, so I'd suggest revisiting your granularity.
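As a quick sanity check on granularity, a query along these lines shows partition count and average partition size for a table (the segment name 'T1' is a placeholder):

```sql
-- Rough granularity check: partition count and average size for a table.
-- 'T1' is a placeholder segment name; adjust to your table.
select count(*)                    partitions,
       round(sum(bytes)/1024/1024) total_mb,
       round(avg(bytes)/1024/1024) avg_mb_per_partition
from   user_segments
where  segment_name = 'T1'
and    segment_type = 'TABLE PARTITION';
```

If the average lands well under the 8MB minimum extent, the partitioning scheme itself is the source of the growth.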


Comments

That explains it!

Jim, November 02, 2021 - 1:34 pm UTC

Thanks for explaining it. Luckily, it was just a test in UAT before moving to the 1TB production table. And thanks for giving guidance on when partitioning does and doesn't make sense.

Connor McDonald
November 09, 2021 - 2:56 am UTC

glad we could help

Partition Large Extents

Ramon Caballero, November 02, 2021 - 10:59 pm UTC

Please see the MOS Note:
Initial Extent Size of a Partition Changed to 8MB from 64KB After Upgrade to 11.2.0.2 or Later (Doc ID 1295484.1)
which explains why this happens and gives you a workaround to fall back to the pre-11.2.0.2 behavior:
alter session set "_partition_large_extents" = false;
or
alter system set "_partition_large_extents" = false;
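Per the same MOS note, the 8MB default only applies when no INITIAL is specified, so an alternative to the underscore parameter is an explicit storage clause on the partitioned table. A sketch, reusing the earlier example (behaviour as described in Doc ID 1295484.1):

```sql
-- Hedged sketch: an explicit INITIAL overrides the 8MB partition default,
-- so small extents are used without touching hidden parameters.
create table t2
storage ( initial 64k )
partition by list ( owner ) automatic
( partition p1 values ('SYSTEM') )
as select * from dba_objects;
```

The trade-off is the same as with the parameter: small extents suit small partitions, but if the partitions really will grow large, the 8MB default exists for good reasons (fewer extents, less fragmentation).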

Connor McDonald
November 09, 2021 - 2:57 am UTC

good info.

I'd still recommend looking at how we get to 2000 partitions in an 8G table

What about advanced compression, 9 partitions and 15+G difference over 156G?

Mitchell, August 27, 2024 - 9:55 am UTC

We have a situation where partitioning a 156G, ~750M row table with CLOBs ends up growing by at least 10% when redefined into yearly partitions. The CLOBs are securefile, and we are getting the most out of our Advanced compression license in both the original and new table. This is a staging database run, before we run it in production - there isn't much recent data here. The database is near latest 19c on Oracle Cloud.

Both
Year   Approximate Size
P2016  10G -- includes more years from earlier
P2017  1.6G
P2018  18G
P2019  50G
P2020  51G
P2021  38G
P2022  16M
P2023  16M
P2024  16M

I thought maybe the difference might be made up in a smaller lob segment due to some difference while using the STORAGE IN ROW clause, but that's grown too, from ~6,504G to ~6,536G. The database was also 11.2.0.4 before 2018. The table has always been INSERT/SELECT only with very rare exceptions. Anyway, here's the essence of the original table definition:
CREATE TABLE OUR_BIG_DATA
(
  THE_ID    NUMBER                      NOT NULL,
...
  THE_CLOB_FIELD           CLOB                        NOT NULL,
...
  CREATION_DATE     DATE                        NOT NULL,
...
)
LOB (THE_CLOB_FIELD) STORE AS SECUREFILE (
  TABLESPACE  ORIGINAL_TS -- smallfile
  ENABLE      STORAGE IN ROW
  CHUNK       8192
  RETENTION
  COMPRESS HIGH
  NOCACHE
  LOGGING
  STORAGE    (
              INITIAL          2M
              NEXT             2M
              MINEXTENTS       1
              MAXEXTENTS       UNLIMITED
              PCTINCREASE      0
              BUFFER_POOL      DEFAULT
             ))
TABLESPACE ORIGINAL_TS
PCTFREE    10
INITRANS   25
MAXTRANS   255
STORAGE    (
            INITIAL          2M
            NEXT             2M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
LOGGING 
COMPRESS FOR OLTP
NOCACHE;


The new table has the same definition apart from the partitioning/tablespaces:
CREATE TABLE OUR_BIG_DATA
(
  THE_ID    NUMBER                      NOT NULL,
...
  THE_CLOB_FIELD           CLOB                        NOT NULL,
...
  CREATION_DATE     DATE                        NOT NULL,
...
)
LOB (THE_CLOB_FIELD) STORE AS SECUREFILE (
  TABLESPACE  NEW_TBSP_MAX -- bigfile
  ENABLE      STORAGE IN ROW
  CHUNK       8192
  RETENTION
  COMPRESS HIGH
  NOCACHE
  NOLOGGING)
COMPRESS FOR OLTP
TABLESPACE NEW_TBSP_MAX -- bigfile
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            BUFFER_POOL      DEFAULT
           )
PARTITION BY RANGE (CREATION_DATE)
(  
  PARTITION P2016 VALUES LESS THAN (TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    COMPRESS FOR OLTP
    TABLESPACE NEW_TBSP_2016
    LOB (THE_CLOB_FIELD) STORE AS SECUREFILE (
      TABLESPACE  NEW_TBSP_2016
      ENABLE      STORAGE IN ROW
      CHUNK       8192
      RETENTION
      COMPRESS HIGH
      NOCACHE
      LOGGING
      STORAGE    (
                  INITIAL          16M
                  NEXT             16M
                  MINEXTENTS       1
                  MAXEXTENTS       UNLIMITED
                  PCTINCREASE      0
                  BUFFER_POOL      DEFAULT
                 ))
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    STORAGE    (
                INITIAL          16M
                NEXT             16M
                MINEXTENTS       1
                MAXEXTENTS       UNLIMITED
                PCTINCREASE      0
                BUFFER_POOL      DEFAULT
               ),  
...

and the other 8 partitions are similar in tablespaces all created from the same script.

No problem "wasting" up to 16MB per year, but I cannot explain how this gets to 15GB. This has been highlighted because we're running a space-saving exercise, intending to prune off old partitions in nonproduction environments after cloning from production, and there are many clones.

There may be some parallel threads involved in the dbms_redefinition execution, but AFAIK the degree isn't very high - the session had either force parallel DML/DDL 8 or 16. Can anyone explain this? It takes a while to change something and rerun, and my brain hurts.
Chris Saxon
September 02, 2024 - 2:32 pm UTC

"I cannot explain how this gets to 15G"

I'm lost - where exactly are you getting this figure from and how?
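For what it's worth, when chasing a figure like this it helps to break space down by segment type, since the table data, partition segments, and LOB segments grow independently. A sketch (the owner name 'APP_OWNER' is a placeholder; OUR_BIG_DATA is the table from the question):

```sql
-- Hypothetical breakdown of space by segment type for one table,
-- including its securefile LOB segments. Owner name is a placeholder.
select s.segment_type,
       round(sum(s.bytes)/power(1024, 3), 1) gb
from   dba_segments s
where  s.owner = 'APP_OWNER'
and    ( s.segment_name = 'OUR_BIG_DATA'
         or s.segment_name in
            ( select l.segment_name
              from   dba_lobs l
              where  l.owner = 'APP_OWNER'
              and    l.table_name = 'OUR_BIG_DATA' ) )
group  by s.segment_type
order  by gb desc;
```

Running this against both the original and the redefined table would show whether the extra 15GB sits in the table partitions, the LOB segments, or both.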
