Database, SQL and PL/SQL

On Measuring Distance and Aging

 

Our expert computes distance without aging and highlights favorite Oracle9i features.

By Tom Kyte

May/June 2003

 

I created a PL/SQL function to compute the distance between two points, but it takes over one-half hour to execute when I call it from SQL. Here is the function:

create or replace function f_dist
 (in_lat1 in number, in_long1 in number,
  in_lat2 in number, in_long2 in number)
return number
as
begin
   return 3959 * ( acos (
     (sin(in_lat1/57.3)*
      sin(in_lat2/57.3))
        +
     (cos(in_lat1/57.3)*
      cos(in_lat2/57.3)*
      cos(abs(in_long2)/57.3
        - abs(in_long1)/57.3))
     ) );
exception
  when others then return 9999;
end f_dist;
/

I use a SQL query like the following to call the function:

select substr(Z2.ZIPP, 1, 7) ZIP
  from A PT, B TA, C Z2
 where (PT.REGION || PT.AREA ||
       PT.DISTRICT || PT.LOCATION) = :str
   and PT.LOCATION_ID = TA.LOCATION_ID
   and Z2.AVGLAT <
       (TA.LATITUDE + ( :dist / 69 ))
   and Z2.AVGLAT >
       (TA.LATITUDE - ( :dist / 69 ))
   and Z2.AVGLONG <
       (TA.LONGITUDE + ( :dist / 47 ))
   and Z2.AVGLONG >
       (TA.LONGITUDE - ( :dist / 47 ))
   and F_DIST(TA.LATITUDE, TA.LONGITUDE,
      Z2.AVGLAT, Z2.AVGLONG) <= :dist;

If I do not make the PL/SQL call but instead run the acos/sin computation directly in the SQL statement, the query runs in about 15 seconds. My questions are: 1) why is the PL/SQL so much slower, and 2) is there a way to speed up this process?

This is an interesting question. After a little research (I got the real data to play with), I discovered that with this data set there were 612,296 invocations of the PL/SQL function F_DIST. The query took about 2,083 seconds to run on my system (about the same as in the original setup), meaning the PL/SQL function was called 294 times a second. That is a context switch from the SQL engine to the PL/SQL engine 294 times—every second. Each callout took only about 0.003 seconds, which by itself is very little time, but when you do it 612,296 times, it adds up! Any time you can run the code directly in SQL—and you should—it will almost certainly be faster than calling out to PL/SQL time and time again.
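For illustration, "running the code directly in SQL" here simply means substituting the body of F_DIST for the function call in the WHERE clause, along the following lines (a sketch derived from the formula above; the rest of the query is unchanged):

   and 3959 * acos(
        (sin(TA.LATITUDE/57.3)*
         sin(Z2.AVGLAT/57.3))
           +
        (cos(TA.LATITUDE/57.3)*
         cos(Z2.AVGLAT/57.3)*
         cos(abs(Z2.AVGLONG)/57.3
           - abs(TA.LONGITUDE)/57.3))
        ) <= :dist;

With the arithmetic inlined this way, the SQL engine evaluates the expression itself, and the 612,296 context switches simply never happen.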

As for the second part—making it go faster—I think we can do that. Not only can you make it go faster, but you can do it more easily. Hidden inside the database is a lot of functionality for extended datatypes—XML, text, image, audio, video, and even spatial data.

So I asked our resident spatial guru, Clarke Colombo, to look at this problem, and he suggested augmenting the tables to include a spatial column and then using that spatial column in the query. Here is the code Clarke had to add:

alter table b add
(geometry mdsys.sdo_geometry);
update b a
  set a.geometry =
    mdsys.sdo_geometry(2001,null,
       mdsys.sdo_point_type(a.longitude,
                          a.latitude,
                          null),
       null, null)
 where latitude is not null;
insert into user_sdo_geom_metadata
values ('B','GEOMETRY',
   mdsys.sdo_dim_array(
      mdsys.sdo_dim_element
      ('X',-180,180,.00005),
      mdsys.sdo_dim_element
      ('Y',-90,90,.00005)), null);
create index b_sidx on b(geometry)
  indextype is mdsys.spatial_index;

The above process was for table B in your question. For table C, Clarke used the same process: he added a geometry column, registered it with the USER_SDO_GEOM_METADATA table, and then created a spatial index on it as well. Now Clarke's query to compute the distance becomes simply:

select substr(Z2.ZIPP, 1, 7) ZIP
  from A PT, B TA, C Z2
 where (PT.REGION || PT.AREA ||
       PT.DISTRICT || PT.LOCATION) = :str
   and PT.LOCATION_ID = TA.LOCATION_ID
   AND MDSYS.LOCATOR_WITHIN_DISTANCE
    (Z2.AVG_GEOM, TA.GEOMETRY,
     'distance = '||:dist||',units=mile')
                              = 'TRUE';

Clarke found that this query runs in 1 second versus more than 15 seconds using straight SQL.
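For completeness, the analogous setup for table C would look something like the following sketch (assuming the AVGLAT and AVGLONG columns used in the original query; the AVG_GEOM column name comes from Clarke's query above):

alter table c add
(avg_geom mdsys.sdo_geometry);
update c a
  set a.avg_geom =
    mdsys.sdo_geometry(2001,null,
       mdsys.sdo_point_type(a.avglong,
                          a.avglat,
                          null),
       null, null)
 where avglat is not null;
insert into user_sdo_geom_metadata
values ('C','AVG_GEOM',
   mdsys.sdo_dim_array(
      mdsys.sdo_dim_element
      ('X',-180,180,.00005),
      mdsys.sdo_dim_element
      ('Y',-90,90,.00005)), null);
create index c_sidx on c(avg_geom)
  indextype is mdsys.spatial_index;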


Snapshot Too Old?

I wanted to drop all the indexes of my schema, so I tried the following code:

begin
for i in (select index_name
           from user_indexes)
loop
   execute immediate
   'drop index '||
        i.index_name||' ';
end loop;
end;
/

I had about 500 indexes to drop, and it gave me an ORA-01555 error midway through. When I reran the code, it dropped the rest of the indexes. I do not understand this error; what do the rollback segments have to do with a drop index command? Can you please explain the mechanism?

Well, first I need to explain what causes an ORA-01555 "snapshot too old" error, and then I can describe what happened here. In short, when Oracle processes a query, such as your select index_name from user_indexes query, it does so using a consistent-read mechanism. This is implemented by using the UNDO information your session or other sessions created during the processing of inserts, updates, and deletes. So when Oracle Database reads a block to answer your query, it may need to read some UNDO information as well. This mechanism is known as MVRC, for multiversioning and read consistency, two of the most important concepts to understand about Oracle Database. To get the entire story behind this process, I encourage you to read the Oracle9i Database Concepts guide.

In the normal processing of your query, Oracle reads some UNDO information. The ORA-01555 error is raised when some UNDO you need for answering your query no longer exists because it has been overwritten. UNDO (stored either in an UNDO tablespace or in manually managed rollback segments) is guaranteed to be retained only for as long as the transaction that generated it is still active. Once that transaction commits, Oracle may overwrite that UNDO and reuse the space. If you have sized your UNDO tablespace (rollback segments) correctly, Oracle will not immediately overwrite this information but will instead keep it around for a while, so that queries that need the UNDO information still have access to it.

The ORA-01555 error is caused by having insufficiently sized UNDO space for the work your system does: Oracle wraps around and reuses the space, overwriting UNDO that is needed by some active query. One of the most frequent causes of the ORA-01555 error is the bad practice of committing inside a cursor FOR LOOP, such as the following:

For x in ( select *
             from emp )
loop
   ... process record ...
   update emp set ...
     where empno = x.empno;
   commit;
End loop;

By doing the update, you are generating UNDO against the EMP table. At the same time, by committing right after the update, you are telling Oracle "it is OK to reuse that undo space whenever." The problem is that your own query against EMP needs that UNDO information in order to get a read-consistent view of the EMP table. By committing in the loop, you are releasing resources that you yourself need. This style of coding is just asking for the ORA-01555 error.

Why is this happening to you? At first glance, it would appear that you are not using the above technique. However, as it turns out, you are! The two things to understand are the following:

  • DDL commits. Your drop index statement is, in fact, a COMMIT (a quick demonstration follows this list).

  • DDL modifies the dictionary tables. Dropping an index deletes rows from the data dictionary, and you are querying that very same data dictionary.
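You can see the first point for yourself with a quick test (a sketch, assuming a scratch table t with a single numeric column x):

insert into t values ( 1 );
create index t_idx on t(x);
-- the DDL above issued an implicit COMMIT
rollback;
-- too late; the insert was already committed
select count(*) from t;
-- the count is 1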

Your code is just like my EMP table example, above. The question now becomes, "How can you avoid this?" The answer is to not keep the cursor open across commits so that you do not need that UNDO information. Your new logic can be coded like this:

declare
   l_index_name varchar2(30);
begin
   loop
      select index_name into l_index_name
        from user_indexes
       where rownum=1;
      execute immediate 'drop index ' ||
                         l_index_name;
   end loop;
exception
   when no_data_found then null;
end;
/

You are no longer committing in a cursor FOR LOOP; you simply run a query to find an index to drop, drop it, and then run yet another query—over and over—until finally there is no more data to process. This will not generate an ORA-01555 error, since the query is short in duration and you do not keep it open across commits.

If you search for ORA-01555 on asktom.oracle.com, you'll find a wealth of information on this "snapshot too old" error and other case studies involving it.


UNDO Retention

I have heard that the UNDO_RETENTION parameter is only a "directive" and that undo space is reused if the undo tablespace is full, regardless of the value of the UNDO_RETENTION parameter. Is this true?

The UNDO_RETENTION parameter applies only to Oracle9i Database Release 1 and later, because it affects the new UNDO tablespace available in Oracle9i. The UNDO_RETENTION parameter specifies how long Oracle should attempt to retain UNDO information on disk before reusing it. This helps to alleviate the ORA-01555 issue discussed in the previous question: if UNDO were retained for as long as it took that user to drop 500 indexes, for example, the user would not have hit the ORA-01555 error. Additionally, the UNDO_RETENTION parameter allows the DBA to retain UNDO information in support of the Flashback Query facility. If you have a requirement to allow users to flash back up to three hours, this is the parameter you would use.
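For example, asking Oracle to try to keep three hours of undo (the value I use in the test below) is a one-liner; UNDO_RETENTION is specified in seconds:

alter system set undo_retention = 10800;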

Now to the question, "Is the undo retention period a mandate or a suggestion?" It is, in fact, a suggestion. If Oracle Database has sufficient space or can extend the UNDO segment to have sufficient space, it will not reuse the UNDO information until it has expired. On the other hand, Oracle will not cause a transaction to fail just because there was insufficient space to satisfy the retention period; Oracle will prematurely expire UNDO information generated by committed transactions if it needs to. You can see this via a simple test. I'll set up two UNDO tablespaces, UNDO_BIG and UNDO_SMALL. UNDO_BIG will start at 1MB and grow up to 2GB in size. UNDO_SMALL will start and remain at 1MB in size. I'll process transactions in each of these UNDO tablespaces and note their ending size. I'll use an UNDO_RETENTION value of 10,800 seconds (three hours) and see what happens:

SQL> create undo tablespace
  2  UNDO_BIG datafile
  3  size 1m autoextend on next 1m
  4  maxsize 2048m;
Tablespace created.
SQL> create undo tablespace
  2  UNDO_SMALL datafile
  3  size 1m autoextend off;
Tablespace created.
SQL> show parameter undo_retention
NAME            VALUE
----------      ------------
undo_retention   10800
SQL> alter system
  2  set undo_tablespace = UNDO_BIG
  3  scope = memory;
System altered.
SQL> drop table t;
Table dropped.
SQL> create table t
  2  ( x char(2000),
  3    y char(2000),
  4    z char(2000)
  5  );
Table created.
SQL> insert into t values('x','x','x');
1 row created.
SQL> begin
  2  for i in 1 .. 500
  3  loop
  4     update t set x=i,y=i,z=i;
  5     commit;
  6  end loop;
  7  end;
  8  /
PL/SQL procedure successfully completed.
SQL> select bytes,maxbytes
  2    from dba_data_files
  3   where tablespace_name = 'UNDO_BIG';
     BYTES    MAXBYTES
----------  ----------
   5242880  2147483648

What this shows is that I generated about 5MB of UNDO. We know that the UNDO_BIG tablespace was just created, and its initial size was 1MB. The fact that the data file takes up 5MB of space now shows that Oracle grew it in order to retain the UNDO for the three-hour suggested time frame. Now I'll use a small UNDO tablespace (UNDO_SMALL) and try again:

SQL> alter system
  2  set undo_tablespace = UNDO_SMALL
  3  scope = memory;
System altered.
SQL> show parameter undo_t
NAME              VALUE
-----------       ---------
undo_tablespace    UNDO_SMALL

Here I ran the same DROP TABLE, CREATE TABLE, and INSERT INTO TABLE statements and updated the row 500 times. This time, however, the results are different:

SQL> select bytes,maxbytes
  2    from dba_data_files
  3   where tablespace_name = 'UNDO_SMALL';
     BYTES    MAXBYTES
----------  ----------
   1048576           0

Oracle did not "grow" my data file (because it was not allowed to), but it did not cause my transactions to fail either. Instead, Oracle prematurely expired UNDO information that was not needed for possible rollback anymore. Note that prematurely expired UNDO information may lead to the dreaded ORA-01555 error and would be an indication that your UNDO tablespace is just too small for the work you do. You can use the UNXPBLKREUCNT column of V$UNDOSTAT (a dynamic performance view) to see how many unexpired blocks were cycled out of (expired from) the UNDO tablespace prematurely. When I did that for this test, I discovered that 413 blocks had been cycled out for the UNDO_SMALL tablespace test—a total of 3.3MB on my system.
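To see when it happened, a query along the following lines works (V$UNDOSTAT keeps one row per ten-minute interval):

select begin_time, end_time,
       unxpblkreucnt
  from v$undostat
 where unxpblkreucnt > 0
 order by begin_time;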


How Long to Roll Back?

When a session rolls back for some error (typically, when the rollback segment or undo space cannot grow), what is the best way to determine if it is rolling back and how much is left to be rolled back? I use the query SELECT USED_UBLK FROM V$TRANSACTION to see how many undo blocks the transaction has allocated to its changes. If the figure drops, then this is an indication that the transaction is rolling back. Is this the preferred approach, or is there a better one? Also, how do I know how many undo blocks the transaction had when it started the rollback process, so I can estimate how much longer the rollback will continue?

USED_UBLK is what you would query. You can peek at the FLAG column in V$TRANSACTION as well. If you run select to_char(flag,'0000000X') from v$transaction and look at the second-to-last character of the result, a value of 8 most likely indicates a rollback, while 0 indicates a "normal" transaction. So 00001E8E, for example, would be a rollback, and 00001E03 would be a normal transaction.
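Putting the two together, a quick status check might look like this sketch (in practice you would add a predicate to pick out the transaction of interest):

select addr, used_ublk,
       to_char(flag,'0000000X') flag_hex
  from v$transaction;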

As for the rest of your question, you needn't know how many blocks were used at the beginning of the rollback. You simply need to know the following:

  • How many blocks are used now (used_now)

  • How many blocks are used in, say, 60 seconds (used_later)

Now calculate used_later / (used_now - used_later). Since used_now - used_later is the number of undo blocks rolled back per minute, the result is approximately how many minutes you have left in the rollback process. (Of course this won't be 100 percent accurate, but it is a good guess.)

So if you have 500 used_ublks to start and 60 seconds later you have 400, you can guess that 400/(500-400) = 4 minutes is the amount of rollback time remaining.
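Here is a minimal sketch of that two-sample technique as a PL/SQL block. It assumes a single transaction of interest, execute privileges on DBMS_LOCK, and that the rollback makes some progress during the sample (otherwise the division would fail):

declare
   l_used_now   number;
   l_used_later number;
begin
   select used_ublk into l_used_now
     from v$transaction where rownum = 1;
   dbms_lock.sleep( 60 );
   select used_ublk into l_used_later
     from v$transaction where rownum = 1;
   -- blocks freed per minute = l_used_now - l_used_later
   dbms_output.put_line( 'about ' ||
      round( l_used_later /
             (l_used_now - l_used_later), 1 ) ||
      ' minutes of rollback remaining' );
end;
/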


Converting to Hexadecimal

Does Oracle have a function or an easy way to convert hexadecimal to decimal or decimal to hexadecimal?

As a matter of fact, since Oracle8i Release 8.1.5, we do. It is a simple to_number/to_char format mask for numbers. You can use select to_number( 'AA', 'xx' ) from dual; to convert from hexadecimal to decimal and select to_char( 111, 'xxxx' ) from dual; to convert from decimal to hexadecimal.
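For example:

select to_number( 'AA', 'xx' ) from dual;
-- returns 170
select to_char( 111, 'xxxx' ) from dual;
-- returns 6f (an uppercase mask, 'XXXX', returns 6F)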

That anticipates the next question: "What about other bases, such as binary and octal?" Oracle doesn't convert those natively, but I do have a set of PL/SQL functions that accomplish these conversions. For more information, see asktom.oracle.com.
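To give a flavor of the approach, here is a minimal sketch of a decimal-to-any-base function (a simplified illustration, not necessarily the code posted there):

create or replace function to_base
  (p_dec in number, p_base in number)
return varchar2
as
   -- digit set supports bases 2 through 16
   l_digits varchar2(16) := '0123456789ABCDEF';
   l_str    varchar2(255);
   l_num    number := trunc(p_dec);
begin
   if l_num = 0 then return '0'; end if;
   while l_num > 0
   loop
      l_str := substr( l_digits,
                 mod(l_num, p_base)+1, 1 )
               || l_str;
      l_num := trunc( l_num / p_base );
   end loop;
   return l_str;
end to_base;
/

select to_base( 10, 2 ) from dual;
-- returns 1010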


What's New?

Could you help me address the comments below (contributed by a teammate in relation to upgrading to Oracle9i for data warehousing applications)?

Many of the "new features" in Oracle9i are just Oracle bundling its other products, such as the OLAP server, which is just the old Express. I also think that Oracle Warehouse Builder is an old product and that data mining is just the algorithms, not visualization. There is not much that is really new, except for some more performance-tuning capability around views (the bitmap join index), lead and lag function upgrades, and extra partitioning capability.

Well, the online analytical processing (OLAP) server in Oracle9i is not the old Express. It is light-years different. Express meant "dump your data from Oracle into this other database and use a different language to query, manage, secure, back it up, and so on." OLAP means "connect to Oracle and there you go; all of the data is ready to be queried relationally or 'OLAP cube-wise.'" If your teammate thinks it is just Express with a new name, now would be a good time to take another look.

And to call data mining "just algorithms" is true, sure, but isn't all software? The mining part is hard. And the visualization is there via Oracle9i JDeveloper. Moving on to the core database itself, here is a short list of new Oracle9i features:

  • Compressed tables. Very impressive. Potentially huge savings.

  • List partitioning.

  • External tables. Parallel direct path loader in SQL. I love this feature for data warehousing.

  • MERGE command, multitable inserts.

  • Direct path loads with SQL functions.

  • Program Global Area (PGA) aggregate target. No more sort area size, hash area size, or this size or that size.

  • Pipelined functions.

  • WITH clause in select . This reduces the number of times a subquery or inline view has to be materialized to process a query.

  • User-defined aggregates. Very fast.

  • Natively compiled PL/SQL. PL/SQL is turned into object code and executed natively in the operating system.

  • Associative arrays for doing lookups/scrubbing. PL/SQL table types where the index is a string. So, for example, state_name := state_lookup( 'AK' ); would be valid and fast (see the sketch after this list).

  • Oracle9i Real Application Clusters (RAC). Use Oracle9i RAC if you are serious about availability and scaling out by adding more computers to your cluster as well as scaling up by buying larger computers to replace the smaller ones.

  • Online operations for everything. Do online rebuilds and reorgs.

  • Block-level recovery. Use when a single 16K block in a 32GB data file goes "bad" and you need the database to be up.

  • Significant enhancements to the Resource Manager. Limit the number of active sessions in a resource group; get predictive estimates of query execution time.

  • Bitmap join indexes.

  • Bitmap indexes on index-organized tables (IOTs).

  • Multiple block size databases. Transport that 4K block size online transaction processing (OLTP) set of already formatted and indexed tables to your data warehouse and do the extraction, transformation, and loading (ETL) using create table as select or insert /*+ append */ with complex queries against the OLTP data. (Your data warehouse will have a 16K or 32K block size, and your OLTP system will have a 2K, 4K, or 8K block size, so this is relevant.)

  • Fine-grained auditing. Efficiently and effectively capture the questions people are asking, along with binds. This is good not only for auditing, but also for performance analysis.

  • Resumable space allocation. If you are using Oracle9i, for example, to rebuild a 15GB bitmap index, and you get a space-related error, you can suspend the statement, fix the space problem (add a file to temp, for example) and have the index rebuild pick up where it left off.

  • Index monitoring. Does anyone really use that index?

  • Update of GLOBAL indexes. Instead of having GLOBAL indexes becoming invalid and needing to be rebuilt, they are maintained during partition management.

  • Resize the SGA online. Add block buffers and take them away.

  • Buffer cache/shared pool/etc. advisors. New V$ views tell you, for example, how many fewer physical I/Os you would do if you increased the buffer cache by a given amount.
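As promised in the associative-array item above, here is a minimal sketch of that lookup idiom (the state_lookup collection and its contents are hypothetical):

declare
   type lookup_t is table of varchar2(30)
        index by varchar2(2);
   state_lookup lookup_t;
   state_name   varchar2(30);
begin
   state_lookup('AK') := 'Alaska';
   state_lookup('AL') := 'Alabama';
   -- an in-memory lookup keyed by a string
   state_name := state_lookup('AK');
   dbms_output.put_line( state_name );
end;
/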

My hand hurts from typing just this partial feature list. [And the editor had to cut a lot of the information due to lack of space. —Ed.] If you would like to see some examples or slides, go to asktom.oracle.com and take a look at the Oracle9i presentations at the bottom of the page.


Next Steps

ASK Tom
 asktom.oracle.com
Oracle Vice President Tom Kyte answers your most difficult technology questions. Highlights from that forum appear in this column.

READ
more Tom
asktom.oracle.com

Oracle documentation
 /documentation

 
