Database, SQL and PL/SQL

On Searching and Synonyms

 

Our technologist searches, moves, and finds synonyms.

By Tom Kyte

July/August 2005

 

I want to create a new index, and I get the following error message: "ORA-1652: unable to extend temp segment by 1024 in tablespace ACCT." This new index should be stored in tablespace ACCT, and the user creating the index is assigned the TEMP tablespace for temporary space. Why is Oracle using ACCT for temporary space?

When you create a new segment, Oracle uses temporary extents to initially build it and then, at the end of the process, converts (via a simple dictionary update) the temporary extents into permanent ones.

It does this so that if the instance crashed halfway through the index build, the System Monitor Process (SMON) would find the temporary extents out there and clean them up—nothing special needs to be done.

So, these "temporary" extents are really your INDEX extents, and this message is saying "Sorry, insufficient space to create your index in this tablespace." The solution in this case: Add more space to the ACCT tablespace so it can hold the index.
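The space can be added either with a new datafile or by resizing an existing one. A minimal sketch (the file names and sizes here are hypothetical; use your own):

```sql
-- add a second datafile to the tablespace
alter tablespace acct
  add datafile '/u01/oradata/acct02.dbf' size 500m;

-- or grow an existing datafile
alter database datafile '/u01/oradata/acct01.dbf' resize 1000m;
```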

Moving Files

I'm running Oracle8i (Release 8.1.6) in noarchivelog mode on Windows 2000, and I need to move the datafile G:\SAARV2\DATA\VLMAIN01.dbf to L:\SAARV2\DATA\VLMAIN01.dbf. This is my first time for this kind of move in this setup, and I'm a bit worried.

Well, if you are in noarchivelog mode, you are basically saying, "We will lose data someday"—not might but will. You have no ability to recover from media failure, so if disk L: fails tomorrow, you will lose everything since your last backup. I strongly encourage you to revisit the noarchivelog-mode decision.

It is pretty easy to move a datafile for any tablespace except SYSTEM.

Where old_name and new_name are the fully qualified filenames, the steps are the following:

  1. Alter tablespace ts_vlmain offline.

  2. Move the files in the operating system.

  3. Alter database rename file old_name to new_name.

  4. Alter tablespace ts_vlmain online.
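Those four steps, sketched in SQL*Plus (old_name and new_name stand in for the real fully qualified file names):

```sql
alter tablespace ts_vlmain offline;

-- move or copy the datafile at the operating-system level here,
-- then tell the data dictionary about the new location:
alter database rename file 'old_name' to 'new_name';

alter tablespace ts_vlmain online;
```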

You will want to back up your control files after a change like this as well. In the event that you need to move the SYSTEM tablespace datafiles, you can use one of two techniques:

  1. Shut down, move the files, recreate the control files, and start up.

  2. Shut down, start up in mount mode, move the files, rename them, and then open the database.

Because the second is easier, I'll demonstrate that:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
System Global Area  122754516 bytes
Fixed Size             452052 bytes
Variable Size        88080384 bytes
Database Buffers     33554432 bytes
Redo Buffers           667648 bytes
Database mounted.
SQL> !mv system01.dbf system.dbf
SQL> alter database rename file
  2  '/home/.../system01.dbf'
  3  to '/home/.../system.dbf';
Database altered.
SQL> alter database open;
Database altered.

Back up your control file again, and that's it.

Grouping ID

I never really understood the usage of the GROUPING_ID function. I heard that it avoids using multiple GROUPING functions. Can you please illustrate with a small example?

The original question and answer for this, with even more information, can be found at asktom.oracle.com/~tkyte/grouping_id. You use the GROUPING_ID function with GROUPING SETS, GROUP BY CUBE, and GROUP BY ROLLUP—it tells you the level of aggregation that any given row represents.

In the following example we'll show GROUPING SETS (Listing 1), GROUP BY ROLLUP (Listing 2), and GROUP BY CUBE (Listing 3) to see what we get with each, using the standard SCOTT.EMP table. The first query shows the sum of salaries by DEPTNO and by JOB. We need the GROUPING_ID function to determine what aggregation each row represents. It might not be obvious from the example why we would need this in general, but consider what would happen if DEPTNO or JOB were nullable: there would be no way to distinguish a detail row from an aggregated row.

The GROUPING_ID function returns 0 or 1 when given a single column. (In this case, it works just like the GROUPING function.) A return value of 0 indicates a detail value for that particular column: the column in question was not aggregated over (was not collapsed). A return value of 1 means the column in question was aggregated over—any aggregates in the SELECT list were computed over that column's entire set of values. GROUPING_ID differs from GROUPING in that you can send it a list of columns; GROUPING_ID treats the list as bits and returns a decimal number. That means a call to GROUPING_ID(a,b,c) can return any number between 0 and 7, depending on which 0/1 combination is produced. Given that fact, we can use a CASE statement in the query to see whether the row is a detail row for DEPTNO, for JOB, for both, or for neither.

Using GROUPING SETS in Listing 1, we asked for GROUP BY only on DEPTNO and then only on JOB . So, that one query was like running the following query

select deptno, null, sum(sal)
from emp group by deptno
union all
select null, job, sum(sal)
from emp group by job;

Code Listing 1: Using GROUPING_ID with GROUPING SETS

SQL> select deptno,
  2         job,
  3         sum(sal),
  4         grouping_id(deptno) gid_d,
  5         grouping_id(job) gid_j,
  6         grouping_id(deptno,job) gid_dj,
  7             bin_to_num(grouping_id(deptno),grouping_id(job)) b2n,
  8         case when grouping_id(deptno,job) = 0
  9              then 'Dtl both'
 10              when grouping_id(deptno,job) = 1
 11              then 'Agg over job'
 12              when grouping_id(deptno,job) = 2
 13              then 'Agg over deptno'
 14              when grouping_id(deptno,job) = 3
 15              then 'Agg over both'
 16              end what
 17    from emp
 18   group by grouping sets( (deptno), (job) )
 19  /
DEPTNO    JOB           SUM(SAL)     GID_D    GID_J    GID_DJ    B2N   WHAT
________  ___________   __________   ______   ______   _______   ___   _______________
10                            8750        0        1         1     1   Agg over job
20                           10875        0        1         1     1   Agg over job
30                            9400        0        1         1     1   Agg over job
          ANALYST             6000        1        0         2     2   Agg over deptno
          CLERK               4150        1        0         2     2   Agg over deptno
          MANAGER             8275        1        0         2     2   Agg over deptno
          PRESIDENT           5000        1        0         2     2   Agg over deptno
          SALESMAN            5600        1        0         2     2   Agg over deptno
8 rows selected.

. . . but without having to make two passes on the EMP table, as would be the case with the UNION ALL .

In looking at the columns involved in the query in Listing 1, we can see that the function GROUPING(column_name) shows us when a column is aggregated over or preserved as a detail record. When GROUPING(deptno) = 0, DEPTNO is preserved in the output; when it is 1, DEPTNO is aggregated over. However, we have two columns in the set we are aggregating by, for a total of four possible 0/1 combinations. (In this query, only two are possible.) Using the GROUPING_ID function on this vector of columns, we can easily see what each row represents. I've also included the alternative, more verbose way to accomplish this—the BIN_TO_NUM() function, to which we can send a list of 0s and 1s and get back a decimal number. I'm pretty sure you'll agree that GROUPING_ID(c1,c2,c3) is easier than the corresponding BIN_TO_NUM call with three GROUPING calls.
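A minimal sketch of that bit-vector equivalence, using the same SCOTT.EMP columns:

```sql
-- For each row, gid and b2n are the same value:
-- GROUPING_ID treats the grouping bits as a binary number,
-- which is exactly what BIN_TO_NUM does with individual GROUPING calls.
select deptno, job,
       grouping_id(deptno, job) gid,
       bin_to_num(grouping(deptno), grouping(job)) b2n
  from emp
 group by cube(deptno, job);
```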

In Listing 2, we take a look at GROUP BY ROLLUP. A rollup by the two columns DEPTNO and JOB will produce

  1. Detail records by DEPTNO, JOB (sum of SAL for each DEPTNO/JOB combination).

  2. A summary record for each DEPTNO over JOB (like a subtotal).

  3. A summary record over DEPTNO and JOB—a single aggregate for the entire result.

Listing 2 shows the query and the results.

Code Listing 2: Using GROUPING_ID with GROUP BY ROLLUP

SQL> select deptno, job, sum(sal),
  2         grouping_id(deptno) gid_d,
  3         grouping_id(job) gid_j,
  4         case when grouping_id(deptno,job) = 0
  5              then 'Dtl both'
  6              when grouping_id(deptno,job) = 1
  7              then 'Agg over job'
  8              when grouping_id(deptno,job) = 2
  9              then 'Agg over deptno'
 10              when grouping_id(deptno,job) = 3
 11              then 'Agg over both'
 12              end what
 13    from emp
 14   group by rollup( deptno, job )
 15  /
DEPTNO    JOB           SUM(SAL)       GID_D    GID_J     WHAT
________  _____________ ___________    _______  _______   __________
10        CLERK          1300          0        0         Dtl both
10        MANAGER        2450          0        0         Dtl both
10        PRESIDENT      5000          0        0         Dtl both
10                       8750          0        1         Agg over job
20        CLERK          1900          0        0         Dtl both
20        ANALYST        6000          0        0         Dtl both
20        MANAGER        2975          0        0         Dtl both
20                      10875          0        1         Agg over job
30        CLERK           950          0        0         Dtl both
30        MANAGER        2850          0        0         Dtl both
30        SALESMAN       5600          0        0         Dtl both
30                       9400          0        1         Agg over job
                        29025          1        1         Agg over both
13 rows selected.

A rollup is sort of like a running-total report, and GROUPING_ID tells us where the rollups happened. The data is sorted by DEPTNO, JOB; we have subtotals by DEPTNO (aggregated over JOB) and a grand total (aggregated over both), along with the details by DEPTNO/JOB.

As you can see in Listing 2, the GROUPING_ID function was useful in telling us when we were dealing with a rolled-up record and the level of detail we could expect in that record.

Last, we'll look at GROUP BY CUBE. CUBE is similar to ROLLUP, in that you get the same three record types as shown in Listing 2, but you also get all possible aggregations. A CUBE grouping by DEPTNO and JOB will give you records for all of the following:

  1. DEPTNO and JOB (the detail records)
  2. DEPTNO over JOB
  3. JOB over DEPTNO
  4. A single total aggregate

You get every possible aggregate. Listing 3 shows the syntax and output and how to use the GROUPING_ID function to see what the level of detail is for each row. It is interesting to note that GROUP BY CUBE produces a superset of the rows we observed in the first query (in Listing 1). You could use GROUPING_ID with CUBE to generate the same result set as the original grouping sets query. That is, adding

having grouping_id(deptno,job) = 2
    or grouping_id(deptno,job) = 1
. . . to the GROUP BY CUBE query would cause it to be the logical equivalent of the GROUPING SETS query. But you shouldn't do that! If you need only some of the aggregates, use GROUPING SETS to get just the ones you need computed and avoid computing the others altogether. It would be fair to say that GROUPING_ID doesn't avoid multiple grouping functions but GROUPING SETS does. However, GROUPING_ID plays an important role in seeing what data is what.
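For completeness, a sketch of that CUBE-plus-HAVING form (the same rows as the GROUPING SETS query, but the database computes—and then discards—the extra aggregates):

```sql
select deptno, job, sum(sal)
  from emp
 group by cube(deptno, job)
having grouping_id(deptno, job) in (1, 2);  -- keep only the two partial aggregates
```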

Code Listing 3: Using GROUPING_ID with GROUP BY CUBE

 SQL> select deptno, job, sum(sal),
  2         grouping_id(deptno) gid_d,
  3         grouping_id(job) gid_j,
  4         case when grouping_id(deptno,job) = 0
  5              then 'Dtl both'
  6              when grouping_id(deptno,job) = 1
  7              then 'Agg over job'
  8              when grouping_id(deptno,job) = 2
  9              then 'Agg over deptno'
 10              when grouping_id(deptno,job) = 3
 11              then 'Agg over both'
 12              end what
 13    from emp
 14   group by cube( deptno, job )
 15   order by grouping_id(deptno,job)
 16  /
DEPTNO      JOB             SUM(SAL)     GID_D     GID_J    WHAT
_________   _____________   ___________  _______   _______  ________
10          CLERK            1300        0         0        Dtl both
10          MANAGER          2450        0         0        Dtl both
10          PRESIDENT        5000        0         0        Dtl both
20          CLERK            1900        0         0        Dtl both
30          CLERK             950        0         0        Dtl both
30          SALESMAN         5600        0         0        Dtl both
30          MANAGER          2850        0         0        Dtl both
20          MANAGER          2975        0         0        Dtl both
20          ANALYST          6000        0         0        Dtl both
10                           8750        0         1        Agg over job
20                          10875        0         1        Agg over job
30                           9400        0         1        Agg over job
            CLERK            4150        1         0        Agg over deptno
            ANALYST          6000        1         0        Agg over deptno
            MANAGER          8275        1         0        Agg over deptno
            PRESIDENT        5000        1         0        Agg over deptno
            SALESMAN         5600        1         0        Agg over deptno
                            29025        1         1        Agg over both
18 rows selected.

I encourage you to visit the above-referenced URL, where reader Philip from Cincinnati, Ohio, was kind enough to share how you can use GROUPING_ID (or GROUPING) in materialized views to facilitate querying out just the part of the cube/rollup you are interested in.

Words of Caution

Are there any issues or drawbacks to be aware of when adding some logic, beyond a default value assignment, to the variable declaration section of a stored procedure, as opposed to putting the logic in the body? (The different methods are in Listing 4.)

Code Listing 4: Different methods and locations for logic

Method 1—in the declaration section
v_todate  date  := trunc(nvl(pi_todate, to_date('1-JAN-2199','DD-MON-YYYY')));
Method 2—in the body
v_todate := trunc(nvl(pi_todate, to_date('1-JAN-2199','DD-MON-YYYY')));
Method 3—if-then-else (in the body)
if pi_todate is null or pi_todate = '' then
   v_todate := to_date('1-JAN-2199','DD-MON-YYYY');
else
   v_todate := trunc(pi_todate);
end if;

I pointed out a couple of differences at asktom.oracle.com, but others came along and added more to the list. This column combines them all, but to see the original discussion and who contributed what ideas, see asktom.oracle.com/~tkyte/caution.html.

I use Method 1 whenever I can: you are initializing a variable, whatever it takes. Method 3 in this case probably performs marginally faster, because NVL() evaluates both of its arguments before assigning a value, whereas the if-then-else in Method 3 evaluates only one of them. There is also the following possible side effect—not in the above example, but in general:

SQL> declare
  2   x number := nvl( 1/1, 1/0 );
  3  begin
  4   null;
  5  end;
  6  /
declare
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at line 2
SQL> declare
  2   x number;
  3  begin
  4   if ( 1/1 is null )
  5   then
  6       x := 1/0;
  7   else
  8       x := 1/1;
  9   end if;
 10  end;
 11  /
 PL/SQL procedure successfully completed.

If an argument to NVL would raise an error if it were evaluated, the if-then-else of Method 3 can avoid evaluating it.
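A related note: Oracle's COALESCE function is documented to use short-circuit evaluation, so it can sidestep this side effect where NVL cannot. A minimal sketch:

```sql
declare
  x number;
begin
  -- NVL would evaluate both arguments, raising ORA-01476 here:
  --   x := nvl( 1/1, 1/0 );
  -- COALESCE stops at the first non-null argument, so 1/0 is never evaluated
  x := coalesce( 1/1, 1/0 );
end;
/
```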

A reader added further:

There's nothing in the question that suggests that exception sections were used in the original stored procedures, but if that's the case, beware that exceptions raised in the declaration section are not caught by the exception section. By simply moving initializations to the declaration section, you are changing the behavior of the application:

SQL> create or replace
  2  function test (x number)
  3  return number
  4  is
  5    l_temp number;
  6  begin
  7    l_temp := 1/x;
  8    return l_temp;
  9  exception
 10    when zero_divide then
 11      return 0;
 12  end;
 13  /
Function created.
SQL> select test (0) from dual;
   TEST(0)
___________
           0
SQL> create or replace
  2  function test (x number)
  3  return number
  4  is
  5    l_temp number := 1/x;
  6  begin
  7      return l_temp;
  8  exception
  9    when zero_divide then
 10      return 0;
 11  end;
 12  /
Function created.
SQL> select test (0) from dual;
select test (0) from dual
       *
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at "DELLERA.TEST", line 4
As you can see, moving code that can raise an exception out of the executable section (between BEGIN and END) and into the declaration section changes the way this code executes.

But all of that aside, if I can initialize the value in the declaration section, I do it that way.

Synonym Used by Objects

How can I find synonyms used in different packages or procedures? Given a synonym as input, I need as output the names of the objects that use it.

The dictionary view DBA_DEPENDENCIES has that information (and more). Consider the following:

SQL> desc dba_dependencies;
 Name
 _________________
 OWNER
 NAME
 TYPE
 REFERENCED_OWNER
 REFERENCED_NAME
 REFERENCED_TYPE
 REFERENCED_LINK_NAME
 DEPENDENCY_TYPE 

From that, we see that we can find all objects that have relationships. If we want to find the objects that reference the synonym DBMS_OUTPUT owned by SYS, we may query:

SQL> select owner, name, type
  2  from dba_dependencies
  3  where referenced_name =
  4       'DBMS_OUTPUT'
  5    and referenced_type =
  6       'SYNONYM'
  7    and referenced_owner =
  8   'SYS';

That would produce a list of all objects (views, procedures, packages, functions, triggers, and so on) that reference that synonym.
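If you also need the objects that depend on those objects (and so on), DBA_DEPENDENCIES can be walked hierarchically with CONNECT BY. A sketch, using the same synonym (be aware that circular dependencies would make this query loop):

```sql
-- level 1 rows reference the synonym directly;
-- deeper levels reference those objects in turn
select level, owner, name, type
  from dba_dependencies
 start with referenced_name  = 'DBMS_OUTPUT'
        and referenced_type  = 'SYNONYM'
        and referenced_owner = 'SYS'
connect by prior owner = referenced_owner
       and prior name  = referenced_name
       and prior type  = referenced_type;
```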

Next Steps

ASK Tom
your most difficult technology questions.
 asktom.oracle.com

READ
more on GROUPING_ID
 more on adding logic to declaration sections



 

DISCLAIMER: We've captured these popular historical magazine articles for your reference. Links contained within these articles may not work. These articles are provided as-is with no guarantee that the information within them is the best advice for current versions of Oracle Database.