Table with LONG data type not being freed

Question and Answer

Connor McDonald

Thanks for the question, Gábor.

Asked: January 18, 2021 - 11:58 am UTC

Last updated: January 28, 2021 - 9:08 am UTC

Version: 12.1.0.2.0


You Asked

Hi Tom,

We are dealing with a system that has some legacy tables with LONG columns. One of these tables contains critical data (email bodies and attachments, chat transcripts, etc.) in a LONG column. Unfortunately we cannot remove or even change this table in any way; the system is heavily dependent on it. Two years ago we introduced the following solution:

- A new table with the same columns, but CLOB instead of LONG
- A job running every minute, which copies newly inserted records from the original table into the new table and converts the LONG to a CLOB (a sketch follows this list)
- A daily job which deletes the already-copied records from the old table
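A simplified sketch of the copy step (illustrative names, not our actual job; TO_LOB does the LONG-to-CLOB conversion and is only allowed in an INSERT ... SELECT):

-- Sketch of the copy job (illustrative table names).
-- TO_LOB converts the LONG column to a CLOB; it can only be used
-- in the subquery of an INSERT ... SELECT.
INSERT INTO new_table (qwkey, tablename, fieldname, textkey, text)
SELECT o.qwkey, o.tablename, o.fieldname, o.textkey, TO_LOB(o.text)
FROM   old_table o
WHERE  NOT EXISTS (SELECT NULL FROM new_table n WHERE n.qwkey = o.qwkey);
COMMIT;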

When this solution was introduced, we did a manual cleanup of the original table:

- Stop the system which uses the table
- Create a new table with the exact same definition (LONG column)
- Copy over all the records which had to be kept
- Rename the original table with an "_OLD" postfix and the new table to the original table name
- Drop the old table (the swap is sketched below)
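In SQL terms, the swap was roughly this (illustrative names; the column list is taken from the test table further down):

-- Roughly what the swap looked like (illustrative names)
CREATE TABLE the_table_new (
    qwkey       NUMBER(38, 0) NOT NULL,
    tablename   VARCHAR2(90 BYTE),
    fieldname   VARCHAR2(90 BYTE),
    textkey     NUMBER(38, 0),
    text        LONG
);
-- ... copy the records to keep (this needs PL/SQL or exp/imp, since a
-- plain INSERT ... SELECT cannot read a LONG column directly) ...
ALTER TABLE the_table     RENAME TO the_table_old;
ALTER TABLE the_table_new RENAME TO the_table;
DROP TABLE the_table_old;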

However, after 2 years, the customer hosting this database wanted to drop old partitions (note: the problematic table is NOT partitioned), and while checking disk space they noticed that the original table is still growing very large (this was the original problem 2 years ago as well). Our expectation with the above solution was that, while Oracle will not free up the disk space after the DELETE because of the LONG column, it will still reuse the existing blocks, so the table will stop growing after a certain point. However, this is not what is happening: the table keeps growing, day by day. It currently has about 13,000 records but over 19 million blocks. About 18,000 records are added during the day, and each of those is deleted at night.
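One way to see the mismatch between allocated blocks and blocks that actually hold rows (the table name is a placeholder):

-- Allocated blocks in the segment
SELECT blocks
FROM   user_segments
WHERE  segment_name = 'THE_TABLE';

-- Blocks that actually contain rows
SELECT COUNT(DISTINCT dbms_rowid.rowid_block_number(rowid)) AS used_blocks
FROM   the_table;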

I tried to reproduce the problem using the SQL below, but I'm not sure it's fully correct, because it produces a different result, and in fact the one we were originally expecting: after we delete and insert again, the used block count does not increase.

-- Helper proc: reports block usage via DBMS_SPACE.SPACE_USAGE
CREATE OR REPLACE PROCEDURE show_space (
    PI_TABLE_NAME IN VARCHAR2
) AS
    l_unformatted_blocks   NUMBER;
    l_unformatted_bytes    NUMBER;
    l_fs1_blocks           NUMBER;
    l_fs1_bytes            NUMBER;
    l_fs2_blocks           NUMBER;
    l_fs2_bytes            NUMBER;
    l_fs3_blocks           NUMBER;
    l_fs3_bytes            NUMBER;
    l_fs4_blocks           NUMBER;
    l_fs4_bytes            NUMBER;
    l_full_blocks          NUMBER;
    l_full_bytes           NUMBER;

    PROCEDURE p (
        p_label   IN        VARCHAR2,
        p_num     IN        NUMBER
    ) IS
    BEGIN
        dbms_output.put_line(rpad(p_label, 40, '.')
                             || p_num);
    END;

BEGIN
    dbms_space.space_usage(
        'CCQ',
        PI_TABLE_NAME,
        'TABLE',
        l_unformatted_blocks,
        l_unformatted_bytes,
        l_fs1_blocks,
        l_fs1_bytes,
        l_fs2_blocks,
        l_fs2_bytes,
        l_fs3_blocks,
        l_fs3_bytes,
        l_fs4_blocks,
        l_fs4_bytes,
        l_full_blocks,
        l_full_bytes);

    p('Unformatted Blocks ', l_unformatted_blocks);
    p('FS1 Blocks (0-25) ', l_fs1_blocks);
    p('FS2 Blocks (25-50) ', l_fs2_blocks);
    p('FS3 Blocks (50-75) ', l_fs3_blocks);
    p('FS4 Blocks (75-100)', l_fs4_blocks);
    p('Full Blocks ', l_full_blocks);
END;
/

-- Dummy table with the exact same definition as the original table
CREATE TABLE test_long (
    qwkey       NUMBER(38, 0) NOT NULL,
    tablename   VARCHAR2(90 BYTE),
    fieldname   VARCHAR2(90 BYTE),
    textkey     NUMBER(38, 0),
    text        LONG
);

ALTER TABLE test_long ADD CONSTRAINT pk_test_long PRIMARY KEY (qwkey) USING INDEX;

-- Original stats
BEGIN
    dbms_stats.gather_table_stats(ownname => '"CCQ"', tabname => '"TEST_LONG"', estimate_percent => 1);
END;
/

BEGIN
    show_space('TEST_LONG');
END;
/

/*

Output:

Unformatted Blocks .....................0
FS1 Blocks (0-25) ......................0
FS2 Blocks (25-50) .....................0
FS3 Blocks (50-75) .....................0
FS4 Blocks (75-100).....................0
Full Blocks ............................0
*/

-- Insert 1000 dummy records, each with 10,000 characters in the TEXT column to simulate email body size
DECLARE
    text VARCHAR2(10000);
BEGIN
    FOR i IN 1 .. 1000
    LOOP
        text := '';
        
        FOR j IN 1 .. 1000
        LOOP
            text := text || '1234567890';
        END LOOP;
        
        INSERT INTO test_long (qwkey, tablename, fieldname, textkey, text)
            VALUES (i, TO_CHAR(i), 'FIELD', i, text);
    END LOOP;
    
    COMMIT;
END;
/

-- Updated stats
BEGIN
    dbms_stats.gather_table_stats(ownname => '"CCQ"', tabname => '"TEST_LONG"', estimate_percent => 1);
END;
/

BEGIN
    show_space('TEST_LONG');
END;
/

/*

Output:

Unformatted Blocks .....................0
FS1 Blocks (0-25) ......................0
FS2 Blocks (25-50) .....................0
FS3 Blocks (50-75) .....................1
FS4 Blocks (75-100).....................8
Full Blocks ............................1999

Total count is 2008.
*/

-- Delete 500 records, half the table
DELETE FROM test_long WHERE MOD(TO_NUMBER(tablename), 2) = 0;
COMMIT;

-- Updated stats
BEGIN
    dbms_stats.gather_table_stats(ownname => '"CCQ"', tabname => '"TEST_LONG"', estimate_percent => 1);
END;
/

BEGIN
    show_space('TEST_LONG');
END;
/

/*

Output:

Unformatted Blocks .....................0
FS1 Blocks (0-25) ......................0
FS2 Blocks (25-50) .....................0
FS3 Blocks (50-75) .....................0
FS4 Blocks (75-100).....................1008
Full Blocks ............................1000

Total count is 2008, same as before, just with a different distribution. So far as expected: blocks are not freed by DELETE.
*/

-- Adding 500 records again
DECLARE
    text VARCHAR2(10000);
BEGIN
    FOR i IN 1001 .. 1500
    LOOP
        text := '';
        
        FOR j IN 1 .. 1000
        LOOP
            text := text || '1234567890';
        END LOOP;
        
        INSERT INTO test_long (qwkey, tablename, fieldname, textkey, text)
            VALUES (i, TO_CHAR(i), 'FIELD', i, text);
    END LOOP;
    
    COMMIT;
END;
/

-- Updated stats
BEGIN
    dbms_stats.gather_table_stats(ownname => '"CCQ"', tabname => '"TEST_LONG"', estimate_percent => 1);
END;
/

BEGIN
    show_space('TEST_LONG');
END;
/

/*

Output:

Unformatted Blocks .....................0
FS1 Blocks (0-25) ......................0
FS2 Blocks (25-50) .....................0
FS3 Blocks (50-75) .....................1
FS4 Blocks (75-100).....................8
Full Blocks ............................1999

Total block count is 2008 again, with 1000 records again, so it seems the blocks were reused.
*/

-- Delete 750 records
DELETE FROM test_long WHERE MOD(TO_NUMBER(tablename), 2) = 1;
COMMIT;

-- Updated stats
BEGIN
    dbms_stats.gather_table_stats(ownname => '"CCQ"', tabname => '"TEST_LONG"', estimate_percent => 1);
END;
/

BEGIN
    show_space('TEST_LONG');
END;
/

/*

Output:

Unformatted Blocks .....................0
FS1 Blocks (0-25) ......................0
FS2 Blocks (25-50) .....................0
FS3 Blocks (50-75) .....................1
FS4 Blocks (75-100).....................1508
Full Blocks ............................499

Total block count is still 2008, so again, nothing was freed.
*/

-- Insert 500 records again.
DECLARE
    text VARCHAR2(10000);
BEGIN
    FOR i IN 1501 .. 2000
    LOOP
        text := '';
        
        FOR j IN 1 .. 1000
        LOOP
            text := text || '1234567890';
        END LOOP;
        
        INSERT INTO test_long (qwkey, tablename, fieldname, textkey, text)
            VALUES (i, TO_CHAR(i), 'FIELD', i, text);
    END LOOP;
    
    COMMIT;
END;
/

-- Updated stats
BEGIN
    dbms_stats.gather_table_stats(ownname => '"CCQ"', tabname => '"TEST_LONG"', estimate_percent => 1);
END;
/

BEGIN
    show_space('TEST_LONG');
END;
/

/*

Output:

Unformatted Blocks .....................0
FS1 Blocks (0-25) ......................0
FS2 Blocks (25-50) .....................0
FS3 Blocks (50-75) .....................2
FS4 Blocks (75-100).....................508
Full Blocks ............................1498

Still 2008 blocks, so again, blocks were reused.
*/


and Connor said...

My guess is that your customer stopped the cleanup job, forgot about it, or missed an error from it, and it has not been running.

LONGs don't get special treatment in terms of free space management, but as you've already noted, once the table has grown to X bytes, it's going to stay at X bytes.

Here's a simple benchmark you can run that is perhaps closer to mimicking the customer app:

Session 1
========
--
-- setup plus regular long population
--
create sequence seq;

create table t_long ( 
  x int default seq.nextval primary key,
  ts date default sysdate,
  txt long );

create table t_clob ( 
  x int primary key,
  ts date,
  txt clob );
  
create table msg ( d timestamp default systimestamp, m varchar2(100));

declare
  l_hi int;
begin
  loop
    for i in 1 .. 10 
    loop
      insert into t_long (txt) values (rpad('x',30000,'x'))
        returning x into l_hi;
      commit;
      dbms_session.sleep(dbms_random.value(0.2,0.6));
    end loop;
    insert into msg (m) values ('10 records insert, HWM='||l_hi);
    commit;    
  end loop;
end;
/

-- You might need to replace dbms_session with dbms_lock on 12c
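For example, on 12c the sleep call would become (DBMS_LOCK.SLEEP is the pre-18c equivalent; it needs EXECUTE on DBMS_LOCK):

      dbms_lock.sleep(dbms_random.value(0.2,0.6));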

Session 2
=======
--
-- transfer long to clob at regular intervals
--
declare
  cnt int;
begin
loop
  cnt := 0;
  for i in ( select * from t_long where x not in ( select x from t_clob ))
  loop
    insert into t_clob values (i.x, i.ts, i.txt);
    cnt := cnt + 1;
  end loop;
  insert into msg (m) values ('Transferred '||cnt||' records');
  commit;
  dbms_session.sleep(10);
end loop;
end;
/

Session 3
=======
--
-- on demand cleanup
--
declare
  cnt int;
begin
  select count(*) into cnt from t_long;
  insert into msg (m) values ('Active rows in T_LONG: '||cnt);
  commit;
  delete from t_long
  where x in ( select x from t_clob);
  cnt := sql%rowcount;
  insert into msg (m) values ('Cleaned '||cnt||' records');
  select bytes 
  into   cnt
  from   user_segments
  where  segment_name = 'T_LONG';
  insert into msg (m) values ('Segment size: '||cnt);
  commit;
end;
/
select * from msg order by d;



I ran this for an hour on my database, and the total segment size was totally dependent on how frequently I ran the cleanup. That is, if I ran it every few minutes, the segment growth was small, because the high-water mark of the number of non-cleaned rows stayed low. If I left it for an hour, the cleanup still worked fine, but the segment size mapped to one hour's worth of rows. After that, the segment never grew, because the cleaned-up space was reused.

There were some early bugs with ASSM back in the day, where bitmap blocks did not correctly reflect free space, but those were more in the 8i/9i timeframe, not 12c. Even so, it might be worth a call with Support to make sure you haven't hit some esoteric bug.
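If you want to rule out the obvious before that call, a quick check that the segment really lives in an ASSM tablespace (owner and table name are placeholders):

select t.tablespace_name, t.segment_space_management
from   dba_tablespaces t
where  t.tablespace_name = ( select s.tablespace_name
                             from   dba_segments s
                             where  s.owner = 'CCQ'
                             and    s.segment_name = 'THE_TABLE' );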


Comments

It might be the work of the append hint

Mikhail Onishchenko, January 20, 2021 - 8:55 am UTC

Try looking into the code that is inserting the new rows into your old table. If it has an /*+ append */ hint, that is the reason your table keeps growing.

If you don't have direct access to the code in question, you could try looking in V$SQL.
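For example, something along these lines (a sketch; the table name is a placeholder):

select sql_id, sql_text
from   v$sql
where  upper(sql_text) like '%INSERT%APPEND%'
and    upper(sql_text) like '%THE_TABLE%';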

I recently encountered a piece of code that looked like this:

delete from t1 where ...
insert /*+ append */ into t1 ...

Some people just put that hint everywhere, without fully understanding the consequences of their actions, simply because it makes their inserts faster.
Connor McDonald
January 28, 2021 - 9:08 am UTC

Good input.

Checked again

Gábor Major, January 20, 2021 - 10:49 pm UTC

Thanks for the feedback. Actually, the nightly job is running. It is a single stored procedure which executes the DELETE, then a COMMIT, then some other steps, including an insert into another table which throws an index error. So the procedure ultimately fails (which we are going to fix), but the DELETE itself is committed. And as I mentioned, there were only 13 thousand records in the table when I wrote the question, while 18 thousand are deleted each day, so if the DELETE were failing, there would be many more records. Tomorrow I'll verify again with both my benchmark and yours, on the test and production DBs (I had tested on the test DB only, but the issue is in production).
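If the procedure is driven by a DBMS_SCHEDULER job, the failures also show up in its run history (a sketch; the job name is a placeholder):

select log_date, status, error#, additional_info
from   dba_scheduler_job_run_details
where  job_name = 'CLEANUP_JOB'
order  by log_date desc;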

As for the APPEND hint: it is actually a great thought, and it would never have come to my mind, even though I often use it during data migrations into new (empty) tables. But unfortunately it is not the case here: I don't have access to the source code, but V$SQL shows only simple INSERT INTO <table> statements.

You can review the code

A reader, January 21, 2021 - 7:12 am UTC


You can review the code using DBA_SOURCE or DBMS_METADATA.

Hopefully the CLOB is using SECUREFILE storage, which has other benefits as well.
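A quick way to check (the table name is a placeholder):

select column_name, securefile
from   user_lobs
where  table_name = 'THE_CLOB_TABLE';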

Cheers.
