Hello TOM,
When I use an ORDER BY, I see that consistent gets are divided by ten in my tests. Can you explain why?
Here are my tests.
First I create a table.
SQL> create table test_obj01 as select OWNER, OBJECT_NAME, SUBOBJECT_NAME from dba_objects;
SQL> insert into test_obj01 select * from test_obj01;
78417 rows created.
SQL> /
...
...
SQL> select count(*) from test_obj01;
COUNT(*)
----------
627352
I make each OWNER value unique with ROWNUM to avoid any deduplication by Oracle Net, then I collect the stats.
SQL> update test_obj01 set OWNER = OWNER || to_char(rownum);
627352 rows updated.
SQL> commit;
SQL> exec dbms_stats.gather_schema_stats('HR');
I use AUTOTRACE to see statistics.
First execution of the SELECT without ORDER BY: blocks are read from disk.
SQL> set autotrace traceonly explain statistics
SQL> SET TIMING ON
SQL> select owner from test_obj01;
627352 rows selected.
Elapsed: 00:00:04.36
Execution Plan
----------------------------------------------------------
Plan hash value: 2410895595
---------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 627K| 7351K| 1168 (1)| 00:00:01 |
| 1 | TABLE ACCESS FULL | TEST_OBJ01 | 627K| 7351K| 1168 (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
27 recursive calls
0 db block gets
45886 consistent gets
4285 physical reads
0 redo size
15681662 bytes sent via SQL*Net to client
460660 bytes received via SQL*Net from client
41825 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
627352 rows processed
Second execution of the SELECT without ORDER BY: blocks are read from memory only; 0 physical reads, ~46,000 consistent gets.
SQL> /
627352 rows selected.
Elapsed: 00:00:04.13
Execution Plan
----------------------------------------------------------
Plan hash value: 2410895595
-------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 627K| 7351K| 1168 (1)| 00:00:01 |
| 1 | TABLE ACCESS FULL | TEST_OBJ01 | 627K| 7351K| 1168 (1)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
45841 consistent gets
0 physical reads
0 redo size
15681662 bytes sent via SQL*Net to client
460660 bytes received via SQL*Net from client
41825 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
627352 rows processed
I flush the blocks to disk for my next test; I have to do this because my database has no workload, so DBWR is very slow to write the data.
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;
Now we execute the same SELECT but with an ORDER BY. Oracle does a SORT; the cost is 4029, almost four times the cost of the full table scan alone.
First execution: Oracle reads blocks from disk.
SQL> set autotrace traceonly explain statistics
SQL> SET TIMING ON
SQL> select owner from test_obj01 order by owner;
627352 rows selected.
Elapsed: 00:00:03.96
Execution Plan
----------------------------------------------------------
Plan hash value: 2720153981
-----------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 627K| 7351K| | 4029 (1)| 00:00:01 |
| 1 | SORT ORDER BY | | 627K| 7351K| 12M | 4029 (1)| 00:00:01 |
| 2 | TABLE ACCESS FULL | TEST_OBJ01 | 627K| 7351K| | 1168 (1)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
27 recursive calls
0 db block gets
4328 consistent gets
4285 physical reads
0 redo size
15681662 bytes sent via SQL*Net to client
460660 bytes received via SQL*Net from client
41825 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
627352 rows processed
Second execution, to read blocks only from memory.
SQL> /
627352 rows selected.
Elapsed: 00:00:04.10
Execution Plan
----------------------------------------------------------
Plan hash value: 2720153981
----------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 627K | 7351K| | 4029 (1)| 00:00:01 |
| 1 | SORT ORDER BY | | 627K | 7351K| 12M | 4029 (1)| 00:00:01 |
| 2 | TABLE ACCESS FULL | TEST_OBJ01 | 627K | 7351K| | 1168 (1)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4283 consistent gets
0 physical reads
0 redo size
15681662 bytes sent via SQL*Net to client
460660 bytes received via SQL*Net from client
41825 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
627352 rows processed
What do we see? With the ORDER BY, the SELECT is a little faster than without: 4.10 seconds vs 4.13 seconds.
The amount of data transferred is the same, so there is no deduplication in my SELECT; the explanation lies elsewhere.
I see that without the ORDER BY, Oracle does 45,841 consistent gets but, with the ORDER BY, only 4,283: ten times fewer!
I know that the SGA was long ago called the "Shared Global Area" and some people call the PGA the "Private Global Area". So I think blocks in the SGA are managed with latches and mutexes, unlike in the PGA, and it takes some time to manage these locks. But, and I am not very sure about this, I have read that the structures of the SGA and the PGA are very different; for example, the chunk sizes are not the same, and so on.
Can you tell me why we have fewer consistent gets when we read from the PGA than from the SGA for the same amount of data?
Have a nice day,
David DUBOIS
I'll come back to the ORDER BY case, but let's just start with this setup:
SQL> create table t1 as select * from dba_objects;
Table created.
SQL> set autotrace traceonly stat
SQL> select owner from t1;
83292 rows selected.
Statistics
---------------------------------------
13 recursive calls
7 db block gets
9731 consistent gets
1567 physical reads
So ~1,500 physical reads and ~10,000 consistent gets. Now I've made a tiny change to my SQL*Plus settings and I'll re-run:
SQL> ???????????????
SQL> select owner from t1;
83292 rows selected.
Statistics
---------------------------------------
0 recursive calls
0 db block gets
1735 consistent gets
0 physical reads
Woo hoo! Down from 10,000 gets to 1735!!!
So ... what's going on here? When we read a block in memory, we need to:
- lock access to the block (ie so no-one messes with it while we read it)
- extract the row(s) we want
- release our access
That is basically what a consistent get is.
If I asked for one row at a time, then I would do this:
Row 1:
- lock access to the block (ie so no-one messes with it while we read it)
- extract the row we want
- release our access
Row 2:
- lock access to the block (ie so no-one messes with it while we read it)
- extract the row we want
- release our access
etc ... so I would see consistent gets = number of rows (roughly).
But if I tell the database I want rows in *batches*, I can do this instead:
- lock access to the block (ie so no-one messes with it while we read it)
- tell the database I'll be visiting this block for a while
- release our access
- at this point, we're allowing other people to see the block, but we've put a flag in there saying "Use me but don't change me"
- extract LOTS of rows (1, 2, 3, 4...)
- lock access to the block (ie so no-one messes with it while we read it)
- tell the database I'm done visiting this block
- release our access
So now a single consistent get could yield lots of rows. Back to my code - this is what I did:
SQL> set arraysize 500 <<=============
SQL> select owner from t1;
83292 rows selected.
Statistics
---------------------------------------
0 recursive calls
0 db block gets
1735 consistent gets
0 physical reads
I told the database I'd be doing fetches in batches of 500. This let the database use that second, more optimal means of doing consistent gets. The original stats were with the default arraysize of 10.
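As a rough back-of-the-envelope model (my simplification, not Oracle's exact accounting): each fetch call performs at least one get on the block it is currently reading, so consistent gets land near table blocks plus the number of fetch calls. Plugging in the numbers from the runs above:

```python
# Rough model (a simplification, not Oracle's exact bookkeeping):
# consistent gets ~= table blocks + number of fetch calls,
# because each fetch call re-visits the current block once.
import math

def estimated_gets(rows, blocks, arraysize):
    fetch_calls = math.ceil(rows / arraysize)
    return blocks + fetch_calls

# Numbers from the runs above: 83,292 rows, ~1,567 blocks.
print(estimated_gets(83292, 1567, 10))   # ~9897  (observed: 9731)
print(estimated_gets(83292, 1567, 500))  # ~1734  (observed: 1735)
```

The estimates track the observed statistics closely, which is why bumping the arraysize shrinks consistent gets toward the block count.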
Coming back to our ORDER BY example: we are reading the blocks into the PGA and sorting them there, so it's block at a time (as opposed to rows at a time), and our consistent gets will line up with the number of blocks, as you've seen in your stats. Then we retrieve the rows from the PGA (or temp storage) back to the client, but by that stage the data is private to us and not coming from the buffer cache.
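The same rough model fits the question's numbers too (again my simplification; the arraysize of 15 is an inference from 627,352 rows producing 41,825 roundtrips):

```python
# Same rough model applied to the question's numbers (a sketch, not
# Oracle internals). Arraysize ~15 is inferred from the roundtrip count.
import math

rows, blocks = 627_352, 4_285

# Without ORDER BY: every fetch call goes back to the buffer cache.
fetch_calls = math.ceil(rows / 15)       # ~41,824 (observed roundtrips: 41,825)
gets_no_order_by = blocks + fetch_calls  # ~46,109 (observed: 45,841)

# With ORDER BY: one full-scan pass fills the PGA sort area, and the
# fetches then read from the PGA without touching the buffer cache.
gets_order_by = blocks                   # ~4,285  (observed: 4,283)

print(gets_no_order_by, gets_order_by)
```

So the "ten times fewer" gets is not the sort being cheap; it is the fetch phase reading private PGA memory instead of re-visiting shared buffer cache blocks on every roundtrip.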