'direct path read' is much slower in Oracle database 19c in comparison with 11g


Question and Answer

Connor McDonald

Thanks for the question, Alex.

Asked: October 10, 2022 - 10:58 am UTC

Last updated: January 17, 2023 - 1:42 am UTC

Version: 19.16

Viewed 1000+ times

You Asked

Hello

Could you please help me with the following issue.
I'm developing a procedure that reads and processes large BLOB data from a table.
It is much slower in my 19.16 dev database than in an 11.2.0.4 dev database.

I noticed in the trace files that 'direct path read' takes much longer in 19.16 (the ela field).
Do you know where the timing difference comes from?
Both databases have the same SGA, and they run on virtual machines with the same computing resources, using the same SSD disk.

11.2.0.4:

WAIT #3061052452: nam='direct path read' ela=3 file number=2 first dba=16173 block cnt=1 obj#=3862810 tim=1665396576739512


19.16:

WAIT #139622124454344: nam='direct path read' ela=105 file number=44 first dba=194052 block cnt=1 obj#=3635694 tim=167978375852



Here is the test case that I used to generate the trace files:

CREATE OR REPLACE DIRECTORY BLOB_DIR AS '/tmp';

CREATE TABLE test_blob_tab (
  id        NUMBER,
  blob_data BLOB
);

DECLARE
  l_bfile  BFILE;
  l_blob   BLOB;

  l_dest_offset INTEGER := 1;
  l_src_offset  INTEGER := 1;
BEGIN
  INSERT INTO test_blob_tab (id, blob_data)
  VALUES (1, empty_blob())
  RETURN blob_data INTO l_blob;

  l_bfile := BFILENAME('BLOB_DIR', 'myzipfile.zip');--the zip file size is 109MB
  DBMS_LOB.fileopen(l_bfile, DBMS_LOB.file_readonly);
  DBMS_LOB.loadblobfromfile (
    dest_lob    => l_blob,
    src_bfile   => l_bfile,
    amount      => DBMS_LOB.lobmaxsize,
    dest_offset => l_dest_offset,
    src_offset  => l_src_offset);
  DBMS_LOB.fileclose(l_bfile);

  COMMIT;
END;
/

declare
  l_blob blob;
begin
  for i in 1..10000 loop
    for c in (select dbms_lob.substr(tbt.blob_data, 2000, i) blob_data
                from test_blob_tab tbt) loop
      l_blob := c.blob_data;
    end loop;
  end loop;
end;
/



Here is the DBMS_METADATA.GET_DDL output for table test_blob_tab in both environments:

11.2.0.4 database:

"CREATE TABLE "APPS"."TEST_BLOB_TAB" 
   ( "ID" NUMBER, 
 "BLOB_DATA" BLOB
   ) SEGMENT CREATION DEFERRED 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
 NOCOMPRESS LOGGING
  TABLESPACE "APPS_TS_TX_DATA" 
 LOB ("BLOB_DATA") STORE AS BASICFILE (
  TABLESPACE "APPS_TS_TX_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION 
  NOCACHE LOGGING ) "



19.16 database:

"CREATE TABLE "APPS"."TEST_BLOB_TAB" 
   ( "ID" NUMBER, 
 "BLOB_DATA" BLOB
   ) SEGMENT CREATION DEFERRED 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
 NOCOMPRESS LOGGING
  TABLESPACE "APPS_TS_TX_DATA" 
 LOB ("BLOB_DATA") STORE AS SECUREFILE (
  TABLESPACE "APPS_TS_TX_DATA" ENABLE STORAGE IN ROW CHUNK 8192
  NOCACHE LOGGING  NOCOMPRESS  KEEP_DUPLICATES ) "


Thank you,
Alex

and Connor said...

I think you'll need to take that one up with Support. I can't find any real discernible difference between 11g and 19c.

11g

SQL> create table test_blob_bas
  2     ( id number,  blob_data blob
  3     ) segment creation immediate
  4   nocompress logging
  5   lob (blob_data) store as basicfile (
  6    enable storage in row chunk 8192 retention
  7    nocache logging ) ;

Table created.

SQL> set timing on
SQL> declare
  2    l_bfile  bfile;
  3    l_blob   blob;
  4
  5    l_dest_offset integer := 1;
  6    l_src_offset  integer := 1;
  7  begin
  8    insert into test_blob_bas (id, blob_data)
  9    values (1, empty_blob())
 10    return blob_data into l_blob;
 11
 12    l_bfile := bfilename('TEMP', 'perth_21c.mp4'); -- the file size is 170MB
 13    dbms_lob.fileopen(l_bfile, dbms_lob.file_readonly);
 14    dbms_lob.loadblobfromfile (
 15      dest_lob    => l_blob,
 16      src_bfile   => l_bfile,
 17      amount      => dbms_lob.lobmaxsize,
 18      dest_offset => l_dest_offset,
 19      src_offset  => l_src_offset);
 20    dbms_lob.fileclose(l_bfile);
 21
 22    commit;
 23  end;
 24  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.24

SQL> declare
  2    l_blob blob;
  3  begin
  4    for i in 1..10000 loop
  5      for c in (select dbms_lob.substr(tbt.blob_data,2000,i) blob_data from test_blob_bas tbt) loop
  6        l_blob:=c.blob_data;
  7      end loop;
  8    end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.25

SQL> declare
  2    l_blob blob;
  3  begin
  4    for i in 1..10000 loop
  5      for c in (select dbms_lob.substr(tbt.blob_data,i*100+2000,i) blob_data from test_blob_bas tbt) loop
  6        l_blob:=c.blob_data;
  7      end loop;
  8    end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.59


and 19c

SQL> create table test_blob_sec
  2     ( id number,  blob_data blob
  3     ) segment creation immediate
  4   nocompress logging
  5   lob (blob_data) store as securefile (
  6    enable storage in row chunk 8192
  7    nocache logging  nocompress  keep_duplicates ) ;

Table created.

SQL> set timing on
SQL> declare
  2    l_bfile  bfile;
  3    l_blob   blob;
  4
  5    l_dest_offset integer := 1;
  6    l_src_offset  integer := 1;
  7  begin
  8    insert into test_blob_sec (id, blob_data)
  9    values (1, empty_blob())
 10    return blob_data into l_blob;
 11
 12    l_bfile := bfilename('TEMP', 'perth_21c.mp4'); -- the file size is 170MB
 13    dbms_lob.fileopen(l_bfile, dbms_lob.file_readonly);
 14    dbms_lob.loadblobfromfile (
 15      dest_lob    => l_blob,
 16      src_bfile   => l_bfile,
 17      amount      => dbms_lob.lobmaxsize,
 18      dest_offset => l_dest_offset,
 19      src_offset  => l_src_offset);
 20    dbms_lob.fileclose(l_bfile);
 21
 22    commit;
 23  end;
 24  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.21


SQL> declare
  2    l_blob blob;
  3  begin
  4    for i in 1..10000 loop
  5      for c in (select dbms_lob.substr(tbt.blob_data,2000,i) blob_data from test_blob_sec tbt) loop
  6        l_blob:=c.blob_data;
  7      end loop;
  8    end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.38

SQL> declare
  2    l_blob blob;
  3  begin
  4    for i in 1..10000 loop
  5      for c in (select dbms_lob.substr(tbt.blob_data,i*100+2000,i) blob_data from test_blob_sec tbt) loop
  6        l_blob:=c.blob_data;
  7      end loop;
  8    end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.61
SQL>



which suggests the issue might be specific to a certain platform, patch level, etc.

Rating: (5 ratings)

Comments

I guess I found the root cause

Alex, December 07, 2022 - 10:52 am UTC

Thank you for the reply, Connor. Your comment made me check the differences between the two databases.
I found that in 11g the filesystemio_options parameter was set to NONE, while in 19c it was set to SETALL.
Presumably that made the difference: with SETALL, 19c uses direct I/O, so the OS disk cache is bypassed.
That's why 'direct path read' took much longer.
After setting filesystemio_options to SETALL in 11g, the issue appeared there too, so it does not depend on the database version.
I tried to put the test table into the KEEP buffer cache, hoping that it would help avoid direct path reads, but it did not help.
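For anyone comparing their own systems, something like the following shows the setting in each database (a sketch; changing the parameter requires an instance restart, and reverting to NONE is shown only to illustrate, not as a recommendation):

```sql
-- Check the current setting
SHOW PARAMETER filesystemio_options

-- Or query it directly
SELECT value FROM v$parameter WHERE name = 'filesystemio_options';

-- Revert to buffered I/O (takes effect after a restart)
ALTER SYSTEM SET filesystemio_options = NONE SCOPE = SPFILE;
```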

Could you tell me if my theory makes sense? And is there a way to avoid 'direct path read' for a table containing a large zip BLOB of up to 1 GB? Or at least make those reads faster?
Connor McDonald
January 17, 2023 - 1:42 am UTC

You can nominate in the LOB's definition that it is cached, or perhaps use CACHE READS if reads are the main area of concern here.
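For example, using the test table from the question, something along these lines (a sketch; adjust the table and column names to your own):

```sql
-- Cache the LOB in the buffer cache for both reads and writes
ALTER TABLE test_blob_tab MODIFY LOB (blob_data) (CACHE);

-- Or cache only reads; writes still bypass the buffer cache
ALTER TABLE test_blob_tab MODIFY LOB (blob_data) (CACHE READS);
```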
