Question and Answer

Tom Kyte

Thanks for the question, Margaret.

Asked: July 14, 2002 - 2:48 pm UTC

Last updated: December 20, 2003 - 9:26 am UTC

Version: 8.0.5

Viewed 1000+ times

You Asked

Dear Tom:
I would like to create a table by copying from an existing table with the following sql:
SQL> create table emp2
2 tablespace tools
3 as select empno, ename, hiredate
4 from emp
5 order by hiredate;
order by hiredate
*
ERROR at line 5:
ORA-00933: SQL command not properly ended

This SQL does not work with an ORDER BY clause. Is there any way to achieve this efficiently?

Thank you very much for your help.



and Tom said...

You will need Oracle8i release 1 (version 8.1.5) or later -- that is when "order by" in views, CTAS, and subqueries became permitted.

Prior to that:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table emp as select * from scott.emp where 1=0;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into emp select * from scott.emp order by ename;

14 rows created.

is one approach to inserting "sorted" data.
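
On 8.1.5 and later, the statement from the question should work as written -- a sketch (assuming the TOOLS tablespace exists):

create table emp2
tablespace tools
as
select empno, ename, hiredate
  from emp
 order by hiredate;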


Comments

create table select order

A reader, July 15, 2002 - 12:04 am UTC

I didn't get one thing: why would one need to order a select statement when the order of retrieving the data without an ORDER BY clause is not predictable? It also goes against the RDBMS concept.

Tom Kyte
July 15, 2002 - 8:38 am UTC

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3479923945167
for why this is relevant -- particularly in a data warehouse.

It is for clustering of like or related values together -- something that could be achieved also via

o an index organized table
o a b*tree cluster

for example. It is for tuning purposes.

Dave, July 15, 2002 - 5:47 am UTC

In response to the last readers comment, I have used physical row ordering for quite a while to get performance improvements, on data warehouse fact tables in particular.

Put simply, you arrange for rows that are likely to be selected together to be physically located together.

If you consider a table with 1,000,000 rows in it, at 50 rows per block, then your table covers about 20,000 blocks. Suppose you have a column 'COUNTY' with an even distribution of 20 values; you then have about 50,000 rows per COUNTY. If you select all rows where COUNTY=5, and the rows you need are mixed in with all the others in some random fashion, Oracle will have to 'touch' a great many blocks in order to get the rows you want -- probably leading to a full table scan (not necessarily a bad thing in itself, of course). Should your rows be physically ordered by COUNTY, then Oracle only needs to read about 50,000 rows, which will only occupy about 1,000 blocks, and that would promote the use of an index scan. Fewer blocks read = faster performance (generalizing a little there). Oracle can detect such a situation from the "CLUSTERING_FACTOR" of the index.
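
For illustration, one way to observe this is to look at the CLUSTERING_FACTOR of the index -- a minimal sketch, assuming a SALES table with a COUNTY column (the table and index names are hypothetical):

create index sales_county_idx on sales(county);

analyze index sales_county_idx compute statistics;

select index_name, clustering_factor
  from user_indexes
 where index_name = 'SALES_COUNTY_IDX';

A clustering factor close to the number of blocks in the table means rows with the same COUNTY are stored close together; a value close to the number of rows means they are scattered.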

On large fact tables I have reduced I/O by a factor of over 10 by using this simple technique.

By the way, if you arrange for frequently accessed columns to be physically located at the beginning of the row rather than at the end, you can also improve performance -- only worth doing on pretty wide tables, and it saves CPU cycles, not I/O. Didn't believe it myself till I tested it on a 160 column datamart fact table, and reduced a full scan selecting SUM() of the column in question from 49 seconds to 17 seconds (repeatable result, no-one else on the system, same execution plan etc).


Tom Kyte
July 15, 2002 - 9:00 am UTC

wait'll you see what Oracle9i release 2 can do!!!!

I took my "famous" big table with 1,000,000 rows created like this:

create table big_table as select * from all_objects;
insert /*+ append */ into big_table select * from big_table;
commit;
(above 2 statements repeated until there are 1,000,000 rows).  Then <b>using the new feature of table compression -- and knowledge of the data</b>, I did this:

ops$tkyte@ORA920.US.ORACLE.COM> @compressed
DOC>create table big_table_compressed<b>
DOC>COMPRESS</b>
DOC>as
DOC>select *
DOC>  from big_table
DOC> order by owner, object_type, created, last_ddl_time, object_name;
DOC>*/
big_table@ORA920.US.ORACLE.COM>

<b>a compressed table works, sort of, like a compressed index.  A symbol table is formed and repeated values on a block are represented once. The savings can be incredible (from 3:1 to 12:1).

Here, I used my knowledge of the data to put the least selective data first</b>

big_table@ORA920.US.ORACLE.COM> exec show_space( 'BIG_TABLE' )
Free Blocks.............................
Total Blocks............................14336
Total Bytes.............................117440512
Total MBytes............................112
Unused Blocks...........................532
Unused Bytes............................4358144
Last Used Ext FileId....................9
Last Used Ext BlockId...................13320
Last Used Block.........................492

PL/SQL procedure successfully completed.

big_table@ORA920.US.ORACLE.COM> exec show_space( 'BIG_TABLE_COMPRESSED' )
Free Blocks.............................
Total Blocks............................1792
Total Bytes.............................14680064
Total MBytes............................14
Unused Blocks...........................75
Unused Bytes............................614400
Last Used Ext FileId....................9
Last Used Ext BlockId...................18184
Last Used Block.........................53

PL/SQL procedure successfully completed.

big_table@ORA920.US.ORACLE.COM> select (14336-532)/(1792-75) from dual;

(14336-532)/(1792-75)
---------------------
           8.03960396

<b>That shows I achieved an 8:1 compression ratio.  My compressed table is 1/8th the size of the original!</b>

big_table@ORA920.US.ORACLE.COM>
big_table@ORA920.US.ORACLE.COM> alter tablespace users offline;

Tablespace altered.

big_table@ORA920.US.ORACLE.COM> alter tablespace users online;

Tablespace altered.

<b>flush the buffer cache..</b>

big_table@ORA920.US.ORACLE.COM>
big_table@ORA920.US.ORACLE.COM> set autotrace on
big_table@ORA920.US.ORACLE.COM> set timing on
big_table@ORA920.US.ORACLE.COM> select /*+ FULL(BIG_TABLE) */ count(*) from big_table;

  COUNT(*)
----------
   1000000
<b>
Elapsed: 00:00:28.66
</b>
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1329 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1329 Card=1000000)




Statistics
----------------------------------------------------------
        245  recursive calls
          0  db block gets<b>
      13717  consistent gets
      13647  physical reads</b>
         60  redo size
        379  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
          1  rows processed

big_table@ORA920.US.ORACLE.COM> select /*+ FULL(BIG_TABLE_COMPRESSED) */ count(*) from big_table_compressed;

  COUNT(*)
----------
   1000000
<b>
Elapsed: 00:00:04.07
</b>
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=167 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'BIG_TABLE_COMPRESSED' (Cost=167 Card=1000000)




Statistics
----------------------------------------------------------
        245  recursive calls
          0  db block gets<b>
       1744  consistent gets
       1682  physical reads</b>
          0  redo size
        379  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
          1  rows processed

big_table@ORA920.US.ORACLE.COM> set autotrace off

<b>1/8th the data, 1/8th the IO time.  Imagine your data warehouse with all of its existing data but at, say, 1/3 or less the size!  Throw in a couple of compressed indexes -- compressed materialized views (group by tends to sort -- can be very compressible if you group by in an MV)...
</b>
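
For reference, the compressed index mentioned above uses key-compression syntax -- a sketch, assuming the BIG_TABLE columns from the example (the index name is hypothetical):

create index big_table_idx on big_table(owner, object_type, object_name) compress 2;

-- COMPRESS 2 stores each distinct (owner, object_type) prefix once per leaf block
-- rather than repeating it in every index entry.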
 

A reader, July 15, 2002 - 10:19 am UTC

begin
insert into x (select * from user_objects order by object_name);
end;


will not work unless I use dynamic SQL. What is the better option when there are global temporary tables created for reporting purposes and ordered data is required?

Should we use 1. dynamic SQL or 2. an IOT?

Tom Kyte
July 15, 2002 - 12:24 pm UTC

Well, now I am worried.

If ordered data is required for the REPORT (and you are not trying to simply order the data for performance in a data warehouse, for example) -- you must, have to, need to use an ORDER BY on the query that retrieves the data. No ifs, ands, or buts on that one.

Even with an IOT -- YOU MUST USE AN ORDER BY on the select, else the data will come back in any order we like.



A reader, July 15, 2002 - 12:59 pm UTC

Tom,


Currently, the reporting requirements are like

1. A report calls a back-end procedure

2. A back-end procedure is written to populate a table which
is temporary. (The SQL is usually a join between 3 or 4 tables)

3. The procedure manipulates the data

4. The reporting tool presents the data....with very little manipulations.

The problem here is that the data which is getting inserted into the temporary table should be in a specific order.

Now here we cannot use insert into ... select ... order by.

Should our reporting procedures use native dynamic SQL
to populate these temporary tables?



Tom Kyte
July 15, 2002 - 8:27 pm UTC

NO. Data cannot be inserted in any sort of order that lets you predict the OUTCOME.

Data does NOT come out in the order it went in.

The ONLY way to reliably do what you are doing is by having the reporting tool use ORDER BY on the select -- repeat -- THE ONLY WAY TO GET SORTED DATA is to use order by. Inserting it "sorted" won't do it.

You can show me as many "examples" as you like. It just doesn't work that way; it is not assured to work that way. If the order of the data is at all relevant to you -- you will use order by on the select.

PERIOD.

So, stop playing with the insert and work on the SELECT.
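
In other words, a sketch of where the ORDER BY belongs (the table and column names are hypothetical):

insert into report_gtt
select a.cust_id, b.order_total
  from customers a, orders b
 where a.cust_id = b.cust_id;      -- no order by needed (or useful) here

select cust_id, order_total
  from report_gtt
 order by cust_id;                 -- the only thing that guarantees the output order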

Compressed tables

Dave, July 15, 2002 - 2:47 pm UTC

Ooooh! very nice.

Compressed tables!

Mark J. Bobak, July 16, 2002 - 2:17 am UTC

Now, that is just too cool!



Did not work in Oracle 8.0.5

A reader, July 29, 2002 - 8:18 pm UTC

Hi Tom.

Exactly what I need but I could not get this to work in my Oracle 8.0.5 database.  Example:

SQL> create table tt as select * from dual where 1=0;

Table created.

SQL> insert into tt select * from dual order by dummy desc;
insert into tt select * from dual order by dummy desc
                                  *
ERROR at line 1:
ORA-00933: SQL command not properly ended


SQL> show version
unknown SHOW option "version"
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle8 Enterprise Edition Release 8.0.5.1.1 - Production
PL/SQL Release 8.0.5.1.1 - Production
CORE Version 4.0.5.0.0 - Production
TNS for SEQUENT DYNIX/ptx: Version 8.0.5.0.0 - Production
NLSRTL Version 3.3.2.0.0 - Production
 

Tom Kyte
July 30, 2002 - 12:31 pm UTC

sorry, guess I messed up on that one. I should have run it in 817.

Won't work in 805, I knew that. order by in a subquery, view, select in insert, etc -- added in Oracle8i release 1 and up

Compressed Tables

Sikandar Hayat Awan, August 04, 2002 - 9:34 am UTC

Excellent solution in 9i.
With reference to the above post about reporting, my point of view is as Tom explained:

When data is inserted into a temp table, the report will display all the data in the table, so there is no WHERE clause. If the temp table occupies 100 blocks then the report has to read the data from all 100 of those blocks. Now, for performance, since you will use ORDER BY in the SELECT statement, just improve the sort operations with SORT_AREA_SIZE and the TTS parameters.
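
For example, a sketch of adjusting the session-level sort area (the 10 MB value is arbitrary):

alter session set sort_area_size = 10485760;
alter session set sort_area_retained_size = 10485760;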


Sagi, November 26, 2002 - 8:00 am UTC

Hi Tom,

In your example of COMPRESS table you said

"A symbol table is formed and repeated values on a block are represented once."

Firstly, what is the data dictionary view where we can see the associated symbol table for a compressed table? Is it readable?

Secondly, if we create a second compressed table whose values happen to match those in the symbol table created for the first compressed table, will Oracle create a new symbol table or reuse the first one? Can you give an example? It would be very helpful.

Also, can you explain a bit more about the symbol table?

Thanx in advance.

Regards,
Sagi


Tom Kyte
November 26, 2002 - 8:08 am UTC

You cannot view the symbol table -- it is stored in the block header itself. There is one symbol table per block.


Each block has a symbol table that is local and specific to only the data on that particular block.

More on compressed table

Rob, December 11, 2002 - 10:04 pm UTC

Tom:

If I create a table with the compress option specified and then do a direct path load, is the data compressed? (your example was a ctas). If I do further inserts into the table are these new rows compressed? If rows are updated do they stay compressed?

Compressed indexes:

Is the compression applicable only to the create/rebuild of indexes? Will rows added to the table have compressed entries in the index? What is the effect of updates -- are these rows uncompressed?

Thanks,

Rob


Tom Kyte
December 11, 2002 - 10:53 pm UTC

yes -- direct path loads are compressed.

inserts done either a row at a time (insert ... values) or insert as select without append -- they are not compressed.

updates -- no compress.


index key compression -- for everything. all DML.
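
A sketch of the table-compression difference between direct path and conventional inserts (the table name is hypothetical; exact sizes depend on your data):

create table t_comp compress as select * from all_objects where 1=0;

insert /*+ append */ into t_comp select * from all_objects;   -- direct path: rows get compressed
commit;

insert into t_comp select * from all_objects;                 -- conventional path: rows are NOT compressed
commit;

select blocks from user_segments where segment_name = 'T_COMP';   -- compare after each load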

Is this true

A reader, December 12, 2002 - 9:24 am UTC

One of the reviewer said

" By the way, if you arrange for frequently accessed columns to be physically
located at the beginning of the row rather than at the end, you can also improve
performance -- only worth doing on pretty wide tables, and it saves CPU cycles,
not I/O. Didn't believe it myself till I tested it on a 160 column datamart fact
table, and reduced a full scan selecting SUM() of the column in question from 49
seconds to 17 seconds (repeatable result, no-one else on the system, same
execution plan etc). "

Tom. is this true ???




Tom Kyte
December 12, 2002 - 11:00 am UTC

well, your tests indicate that it could be.... 

There could be two things at work here.

a) the rows in this table exceeded the size of a database block -- the last column was on a block far away and we had to walk through many blocks to get to it (row chaining)

b) it takes longer to parse out the last column of a table than it does the first column because Oracle stops processing the block when it hits the data it needs.


ops$tkyte@ORA920> drop table t;

Table dropped.

ops$tkyte@ORA920> declare
  2          l_stmt long;
  3  begin
  4          l_stmt := 'create table t ( c001 number';
  5          for i in 2 .. 159
  6          loop
  7                  l_stmt := l_stmt || ', c' || to_char(i,'fm000') || ' varchar2(5) default ''12345'' ';
  8          end loop;
  9          l_stmt := l_stmt || ', c160 number )';
 10          execute immediate l_stmt;
 11  end;
 12  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> insert into t (c001,c160)
  2  select rownum, rownum from all_objects where rownum <= 10000;

10000 rows created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> analyze table t compute statistics;

Table analyzed.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select avg_row_len from user_tables where table_name = 'T';

AVG_ROW_LEN
-----------
        959

ops$tkyte@ORA920>
ops$tkyte@ORA920> declare
  2      l_start number;
  3      l_run1  number;
  4      l_run2  number;
  5          l_n     number;
  6  begin
  7      insert into run_stats select 'before', stats.* from stats;
  8
  9      l_start := dbms_utility.get_time;
 10      for i in 1 .. 10
 11      loop
 12          select sum(c001) into l_n from t;
 13      end loop;
 14      l_run1 := (dbms_utility.get_time-l_start);
 15      dbms_output.put_line( l_run1 || ' hsecs' );
 16
 17      insert into run_stats select 'after 1', stats.* from stats;
 18      l_start := dbms_utility.get_time;
 19      for i in 1 .. 10
 20      loop
 21          select sum(c160) into l_n from t;
 22      end loop;
 23      l_run2 := (dbms_utility.get_time-l_start);
 24      dbms_output.put_line( l_run2 || ' hsecs' );
 25      dbms_output.put_line
 26      ( 'run 1 ran in ' || round(l_run1/l_run2*100,2) || '% of the time' );
 27
 28      insert into run_stats select 'after 2', stats.* from stats;
 29  end;
 30  /
11 hsecs
27 hsecs
run 1 ran in 40.74% of the time

<b>so, wall clock time - tis faster for sure </b>

that is strictly due in this case to the fact that getting column 1 requires less work parsing the block than getting column 160 does (which requires "walking the row" column by column to get there).  The "statistics" of runstats that I use show there were no fundamental changes:

ops$tkyte@ORA920> select a.name, b.value-a.value run1, c.value-b.value run2,
  2         ( (c.value-b.value)-(b.value-a.value)) diff
  3    from run_stats a, run_stats b, run_stats c
  4   where a.name = b.name
  5     and b.name = c.name
  6     and a.runid = 'before'
  7     and b.runid = 'after 1'
  8     and c.runid = 'after 2'
  9     and (c.value-a.value) > 0
 10     and (c.value-b.value) <> (b.value-a.value)
 11   order by abs( (c.value-b.value)-(b.value-a.value))
 12  /

....
LATCH.simulator hash latch            896       1025        129
LATCH.cache buffers chains          29014      29187        173

57 rows selected.

between the two -- it was 100% cpu related... Now, it gets even worse when:

ops$tkyte@ORA920> drop table t;

Table dropped.

ops$tkyte@ORA920> declare
  2          l_stmt long;
  3  begin
  4          l_stmt := 'create table t ( c001 number';
  5          for i in 2 .. 155
  6          loop
  7                  l_stmt := l_stmt || ', c' || to_char(i,'fm000') || ' varchar2(5) default ''12345'' ';
  8          end loop;
  9          for i in 156 .. 159
 10          loop
 11                  l_stmt := l_stmt || ', c' || to_char(i,'fm000') || ' char(2000) default ''x'' ';
 12          end loop;
 13          l_stmt := l_stmt || ', c160 number )';
 14          execute immediate l_stmt;
 15  end;
 16  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> insert into t (c001,c160)
  2  select rownum, rownum from all_objects where rownum <= 10000;

10000 rows created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> analyze table t compute statistics;

Table analyzed.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select avg_row_len from user_tables where table_name = 'T';

AVG_ROW_LEN
-----------
       8957


ops$tkyte@ORA920> declare
  2      l_start number;
  3      l_run1  number;
  4      l_run2  number;
  5          l_n     number;
  6  begin
  7      insert into run_stats select 'before', stats.* from stats;
  8
  9      l_start := dbms_utility.get_time;
 10      for i in 1 .. 10
 11      loop
 12          select sum(c001) into l_n from t;
 13      end loop;
 14      l_run1 := (dbms_utility.get_time-l_start);
 15      dbms_output.put_line( l_run1 || ' hsecs' );
 16
 17      insert into run_stats select 'after 1', stats.* from stats;
 18      l_start := dbms_utility.get_time;
 19      for i in 1 .. 10
 20      loop
 21          select sum(c160) into l_n from t;
 22      end loop;
 23      l_run2 := (dbms_utility.get_time-l_start);
 24      dbms_output.put_line( l_run2 || ' hsecs' );
 25      dbms_output.put_line
 26      ( 'run 1 ran in ' || round(l_run1/l_run2*100,2) || '% of the time' );
 27
 28      insert into run_stats select 'after 2', stats.* from stats;
 29  end;
 30  /
849 hsecs
1364 hsecs
run 1 ran in 62.24% of the time

PL/SQL procedure successfully completed.


<b>but here, it is not all about just "cpu" and parsing.  There was a ton of other work going on:</b>

ops$tkyte@ORA920> select a.name, b.value-a.value run1, c.value-b.value run2,
  2         ( (c.value-b.value)-(b.value-a.value)) diff
  3    from run_stats a, run_stats b, run_stats c
  4   where a.name = b.name
  5     and b.name = c.name
  6     and a.runid = 'before'
  7     and b.runid = 'after 1'
  8     and c.runid = 'after 2'
  9     and (c.value-a.value) > 0
 10     and (c.value-b.value) <> (b.value-a.value)
 11   order by abs( (c.value-b.value)-(b.value-a.value))
 12  /

NAME                                 RUN1       RUN2       DIFF
------------------------------ ---------- ---------- ----------
...
STAT...recursive cpu usage            851       1365        514
LATCH.multiblock read objects       24480      25680       1200
STAT...redo size                    27628      30040       2412
STAT...prefetched blocks           168546     174110       5564
LATCH.simulator lru latch           11456      19637       8181
LATCH.simulator hash latch          23995      37354      13359
STAT...physical reads              180792     264069      83277
STAT...free buffer requested       180798     264079      83281
LATCH.cache buffers lru chain      180791     264122      83331
STAT...table fetch continued row        0     100000     100000
STAT...no work - consistent read gets      200418     300421     100003
STAT...buffer is not pinned count          200426     300431     100005

STAT...consistent gets             200544     300556     100012
STAT...session logical reads       200594     300631     100037
LATCH.cache buffers chains         581982     865504     283522

72 rows selected.


In this case, in order to get column 160 -- we had to do Table Fetch Continued Row (see how we did exactly 100,000 of them -- 10 * 10000... every single row was chained and required "being put back together again")

On a really big table -- this could be really disastrous if every row were chained.
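
To see whether a table is suffering from this, something like the following works (a sketch; the CHAINED_ROWS table comes from utlchain.sql):

analyze table t compute statistics;

select chain_cnt, num_rows
  from user_tables
 where table_name = 'T';

analyze table t list chained rows into chained_rows;

select count(*) from chained_rows;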

Dave, December 12, 2002 - 12:39 pm UTC

Another Myth/Rule-of-thumb bites the dust! (with all the usual "in certain circumstances" caveats)

"Column order is irrelevant" - bah.

I wonder whether the relatively higher consumption of CPU time on selecting the first and second columns would be exacerbated by having the table compressed? Would you care to use your same test case on a compressed version of the table Tom?

Tom Kyte
December 12, 2002 - 1:18 pm UTC

Ok, so I did this:

declare
        l_stmt long;
begin
        l_stmt := 'create table t ( c001 number';
        for i in 2 .. 159
        loop
                l_stmt := l_stmt || ', c' || to_char(i,'fm000') || ' varchar2(5) default ''12345'' ';
        end loop;
        l_stmt := l_stmt || ', c160 number )';
        execute immediate l_stmt;
end;
/

insert into t (c001,c160)
select rownum, rownum from big_table.big_table where rownum <= 100000;

create table t2 compress
as
select * from t;


(big_table.big_table just has 2 million rows -- so each table t and t2 has exactly 100,000 rows)...

Now, since t2 was infinitely compressible -- it was a whopping 2 megabytes in size and t was 120 megabytes! (unrealistic). But the results were interesting:

ops$tkyte@ORA920> alter session set sql_trace=true;

Session altered.

ops$tkyte@ORA920> declare
  2          l_n number;
  3  begin
  4          for i in 1 .. 10
  5          loop
  6                  select sum(c001) into l_n from t;
  7                  select sum(c001) into l_n from t;
  8                  select sum(c160) into l_n from t;
  9                  select sum(c160) into l_n from t;
 10                  select sum(c001) into l_n from t2;
 11                  select sum(c001) into l_n from t2;
 12                  select sum(c160) into l_n from t2;
 13                  select sum(c160) into l_n from t2;
 14          end loop;
 15  end;
 16  /

PL/SQL procedure successfully completed.

SELECT sum(c001)
from
 t

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute     20      0.00       0.00          0          0          0           0
Fetch       20     18.95      23.19     264014     288800          0          20
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       42     18.96      23.19     264014     288800          0          20

SELECT sum(c160)
from
 t

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute     20      0.00       0.00          0          0          0           0
Fetch       20     21.43      26.14     264019     288800          0          20
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       42     21.44      26.14     264019     288800          0          20

SELECT sum(c001)
from
 t2

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute     20      0.00       0.00          0          0          0           0
Fetch       20      6.12       8.35          0       4640          0          20
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       42      6.12       8.36          0       4640          0          20

SELECT sum(c160)
from
 t2

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute     20      0.00       0.00          0          0          0           0
Fetch       20      6.20       6.67          0       4640          0          20
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       42      6.20       6.68          0       4640          0          20
 

Dave, December 12, 2002 - 2:31 pm UTC

It looks like the difference in CPU usage was very slight on the compressed table -- I would guess that the overhead of uncompressing the block is very high relative to the overhead of finding the end of the rows in it, and that the benefits of getting a high compression ratio will outweigh the overhead on finding the last column.

Makes sense really -- I'm guessing that the whole block will be uncompressed whether the first column of the first row or the last column of the last row is requested.

Thanks Tom.

Tom Kyte
December 12, 2002 - 2:57 pm UTC

well -- actually -- here, the compression of the block helped.

We didn't have to "uncompress" columns 1..159 -- they were fixed-length pointers into a symbol table stored on the block. This is a bad example -- a very degenerate case where the data was super compressible.

NEW LOV

Zo, June 30, 2003 - 5:02 am UTC

Dear Tom,
Look at this

Declare

TYPE Dtl_Typ IS TABLE OF agc_encaissements_details%ROWTYPE
INDEX BY BINARY_INTEGER;
Dtl_Tab Dtl_Typ;
i BINARY_INTEGER := 0;
z BINARY_INTEGER := 0;

last_rec binary_integer;

begin

if :ecs_montant_regl >= nvl(:hb_sum,0) then
go_block('FACTURES');
last_record;
last_rec:=:system.cursor_record;

first_record;
i:=i+1;
while (i<=last_rec) loop
if :factures.hb_tick='Y' then
z:=z+1;
dtl_tab(z).edt_numencais:=:agc_encaissements.ecs_numencais;
dtl_tab(z).edt_numfacture:=:factures.hb_facture;
dtl_tab(z).edt_numfacture:=:factures.hb_facture;
dtl_tab(z).edt_montant_facture:=:factures.hb_solde;
dtl_tab(z).edt_montant_regl:=:factures.hb_montant;
dtl_tab(z).edt_observation:=:agc_encaissements.ecs_codclient;
end if;
next_record;
i:=i+1;
end loop;

Clear_Block(No_Validate);
i:=0;
i:=i+1;
go_block('AGC_ENCAISSEMENTS_DETAILS');

for rec in i..z loop
create_record;
:edt_numencais:=dtl_tab(rec).edt_numencais;
:edt_numfacture:=dtl_tab(rec).edt_numfacture;
:edt_numfacture:=dtl_tab(rec).edt_numfacture;
:edt_montant_facture:=dtl_tab(rec).edt_montant_facture;
:edt_montant_regl:=dtl_tab(rec).edt_montant_regl;
:edt_observation:=dtl_tab(rec).edt_observation;
end loop;

go_item('AGC_ENCAISSEMENTS.ECS_NUMENCAIS');
:toolbar.comment:=null;
synchronize;
else
raise form_trigger_failure;
end if;
end;

Purpose:
It displays query results in block (A) (non-database) and inserts all ticked records from (A) as new records into block (B) (database).

My Objective:
You can use an LOV, but only one record at a time can be brought back (i.e., the query is needed n times). Is there any way to implement a SHIFT key for an LOV so that it can bring back more than one record?

Q1) This code works fine but takes a long time to insert data into block (B). Is there any code that should be tuned? Do you have any other solution?

Q2) Could you show sample source code behind such an LOV so that I can create my own LOV?

NB. Sorry Tom if my question does not relate to the above subject at all, but I see nowhere else to ask, and perhaps this is useful to most of us using Oracle Forms.

Kindly
Zks

Tom Kyte
June 30, 2003 - 7:06 am UTC

try

otn.oracle.com -> discussion forums. there is one for forms there. I have not touched forms since 1995 myself.

NEW LOV !!! It works fine

Zo, July 02, 2003 - 1:23 am UTC

Reply for my question above.

Q1) I worked on it yesterday and it works fine since I set 'Validate from LOV' to 'FALSE' on the item property.

Thanks Tom for your link; www.experts-exchange.com may also help sometimes.

Zo

full scan of compressed tables

D.B.Nemec, July 04, 2003 - 1:55 am UTC

Hi Tom,

in the "compressed" example above the compressed table has (1792-75 = 1717) blocks. In a full scan used for count(*) are only 1682 physical reads.

a) How is it possible after a buffer cache flush? Are some chained rows on play? I guess this have nothing to do with compression, because the not compressed table gives a similar result.

b) Are there any known trade-offs of the compression? I.e. some statements (e.g. some hash joins, exists, subqueries etc.) on compressed tables with less I/O but greater elapsed time due to decompression. In other word, is there a checklist for a decision if a data warehouse table should be compressed?


Thanks

Jaromir

http://www.db-nemec.com


Tom Kyte
July 04, 2003 - 8:30 am UTC

(speek is speak btw -- from your home page ;)

Interesting -- it is a side effect of ASSM (automatic segment space management) in 9i.  Had me going there for a while -- I couldn't reproduce straight away...

I can tell I was using it since the free blocks print out as NULL in the show space report.  Both tables come up "short".  Apparently, the bitmap blocks must be in the cache still -- when I edit the raw trace file, I can see it is skipping blocks here and there in the extents.

Here is the demo:

ops$tkyte@ORA920> column tablespace_name new_val T

ops$tkyte@ORA920> exec show_space( 'BIG_TABLE_COMPRESSED' );
Unformatted Blocks .....................               0
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................           1,696
Total Blocks............................           1,792
Total Bytes.............................      14,680,064
Total MBytes............................              14
Unused Blocks...........................              60
Unused Bytes............................         491,520
Last Used Ext FileId....................               9
Last Used Ext BlockId...................          30,600
Last Used Block.........................              68

PL/SQL procedure successfully completed.

<b>that is the output from my current show_space routine -- see how there are 1,792 total blocks -- but only 1696 "full" ones...</b>


ops$tkyte@ORA920> analyze table big_table_compressed compute statistics for table;
Table analyzed.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select table_name, blocks, tablespace_name,
  2        (select SEGMENT_SPACE_MANAGEMENT
  3           from dba_tablespaces
  4              where tablespace_name = user_tables.tablespace_name) ssm
  5  from user_tables
  6  where table_name = 'BIG_TABLE_COMPRESSED';

TABLE_NAME                         BLOCKS TABLESPACE_NAME                SSM
------------------------------ ---------- ------------------------------ ------
BIG_TABLE_COMPRESSED                 1732 USERS                          AUTO

<b>appears there are 1732 blocks to be read -- however:</b>

ops$tkyte@ORA920> alter tablespace &t offline;
old   1: alter tablespace &t offline
new   1: alter tablespace USERS offline
Tablespace altered.

ops$tkyte@ORA920> alter tablespace &t online;
old   1: alter tablespace &t online
new   1: alter tablespace USERS online
Tablespace altered.

ops$tkyte@ORA920> set autotrace traceonly statistics;
ops$tkyte@ORA920> select count(*) from big_table_compressed;


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1701  consistent gets
       1697  physical reads
          0  redo size
        379  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

<b>there were only 1697 physical IO's -- but 1697 is the number of "full blocks" plus the extent header -- the difference were the bitmap blocks used to manage space, they appear not to get flushed -- moving the table to a manually managed tablespace:</b>


ops$tkyte@ORA920> set autotrace off

ops$tkyte@ORA920> select decode( '&t', 'BIG_TABLE', 'USERS', 'BIG_TABLE') tablespace_name
  2    from dual;
old   1: select decode( '&t', 'BIG_TABLE', 'USERS', 'BIG_TABLE') tablespace_name
new   1: select decode( 'USERS', 'BIG_TABLE', 'USERS', 'BIG_TABLE') tablespace_name

TABLESPAC
---------
BIG_TABLE

ops$tkyte@ORA920> alter table big_table_compressed move tablespace &t;
old   1: alter table big_table_compressed move tablespace &t
new   1: alter table big_table_compressed move tablespace BIG_TABLE
Table altered.

ops$tkyte@ORA920> exec show_space( 'BIG_TABLE_COMPRESSED' );
Free Blocks.............................               0
Total Blocks............................           1,792
Total Bytes.............................      14,680,064
Total MBytes............................              14
Unused Blocks...........................              97
Unused Bytes............................         794,624
Last Used Ext FileId....................               6
Last Used Ext BlockId...................          16,009
Last Used Block.........................              31

PL/SQL procedure successfully completed.

<b>show_space shows us now that there are the same number of total blocks but:</b>

ops$tkyte@ORA920> analyze table big_table_compressed compute statistics for table;
Table analyzed.

ops$tkyte@ORA920> select table_name, blocks, tablespace_name,
  2        (select SEGMENT_SPACE_MANAGEMENT
  3           from dba_tablespaces
  4              where tablespace_name = user_tables.tablespace_name) ssm
  5  from user_tables
  6  where table_name = 'BIG_TABLE_COMPRESSED';

TABLE_NAME                         BLOCKS TABLESPACE_NAME                SSM
------------------------------ ---------- ------------------------------ ------
BIG_TABLE_COMPRESSED                 1694 BIG_TABLE                      MANUAL

<b>only 1694 contain data, and</b>

ops$tkyte@ORA920> alter tablespace &t offline;
old   1: alter tablespace &t offline
new   1: alter tablespace BIG_TABLE offline
Tablespace altered.

ops$tkyte@ORA920> alter tablespace &t online;
old   1: alter tablespace &t online
new   1: alter tablespace BIG_TABLE online
Tablespace altered.

ops$tkyte@ORA920> set autotrace traceonly statistics;
ops$tkyte@ORA920> select count(*) from big_table_compressed;


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1699  consistent gets
       1695  physical reads
          0  redo size
        379  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

<b>that read every one of them...</b>


good eyes there. 

Interesting Result!

Jaromir, July 07, 2003 - 5:47 pm UTC

Nice to know that, in the case of automatic segment space management, you can omit some blocks on a full table scan.

Any idea on b) above? I mean, if there are no trade-offs to compressing tables, all the data warehouse tables should be compressed some day.

Thanks for your peap (sorry peep – like my english) on my page!

Jaromir


Tom Kyte
July 07, 2003 - 6:08 pm UTC

you didn't omit them, we read them -- look at the LIOs. It is just that the alter tablespace apparently didn't flush them from the buffer cache.


There are tradeoffs --

o it takes significantly more CPU to load
o there are cases where it can take more CPU to process
o if the data is not organized to be compressible, it can add lots of CPU to load
(to try and compress) with no benefits in storage.

So, just like it's an OPTION for filesystems sometimes, it'll be an option for segments.
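
A sketch of measuring the load-time trade-off (the copy table names are hypothetical; timings will vary with the data):

create table t_uncompressed          as select * from big_table where 1=0;
create table t_compressed   compress as select * from big_table where 1=0;

set timing on
insert /*+ append */ into t_uncompressed select * from big_table order by owner, object_type;
commit;
insert /*+ append */ into t_compressed   select * from big_table order by owner, object_type;
commit;
-- expect the compressed load to use noticeably more CPU; then compare the
-- segment sizes in USER_SEGMENTS to see what that extra CPU bought you.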

alter table compress

A reader, December 19, 2003 - 2:07 pm UTC

Hi Tom,

Could you explain what the effect of issuing an "alter table ... compress" against a normal table is?

It finished very fast for me, so I think it is not actually _compressing_ the existing rows. Will it only compress new rows when they are inserted?

Thanks


Tom Kyte
December 20, 2003 - 9:26 am UTC

it sets the attribute "compress" to true for the table, meaning any FUTURE bulk insert /*+ append */s, direct path loads, and alter table ... move operations will be done in compress mode.
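
If you want the rows that are already there compressed as well, you have to rebuild the segment -- a sketch (the table and index names are hypothetical):

alter table t compress;        -- attribute only: affects future direct path operations
alter table t move compress;   -- rebuilds the table, compressing the rows already in it
alter index t_idx rebuild;     -- indexes go UNUSABLE after the move and must be rebuilt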


