January/February 2001
We use an application that deletes data from a fact table containing in excess of 20 million records. The delete takes the following form:
DELETE measurements
WHERE last_modified_by
LIKE '%'||'&1'||'#'||'&2'||'#%'
This process currently takes two to three hours each time it runs. Is there a way we can speed this up? Note that &1 is of fixed length and &2 is of varying length and mixed case. The character in front of &1 is #.
Let's say the &1 is after the first # and the &2 is after the second, up to the third. Then I would recommend a function-based index (an index on a function—a capability added in Oracle8i). You can make an index on some function of the last_modified_by column that will allow you to find all of the rows you want to delete via the index—as opposed to the full scan that is occurring now. The index you want to create would take the form:
SQL> create index t_idx on
2 t(substr( last_modified_by,
3 instr(last_modified_by,'#',1,1)+1,
4 instr(last_modified_by,'#',1,3) -
5 instr(last_modified_by,'#',1,1)-1 )
6 )
7 /
Index created.
If I'm off on the placement of the #'s, you just need to adjust the fourth parameter to instr in the above—I'm looking for the first and third ones. The index you create will pick off just the fields you are interested in—in effect, it will index &1#&2 for you. You would then create a view to delete from, so that you can avoid typos in the function.
SQL> create or replace view t_view
2 as
3 select t.*,
4 substr( last_modified_by,
5 instr(last_modified_by,'#',1,1)+1,
6 instr(last_modified_by,'#',1,3) -
7 instr(last_modified_by,'#',1,1)-1 ) idx_col
8 from t
9 /
View created.
This new view will use the index once the necessary session or system parameters are set (the following may be set in your init.ora to make them the default):
SQL> alter session set QUERY_REWRITE_ENABLED=TRUE;
Session altered.
SQL> alter session set QUERY_REWRITE_INTEGRITY=TRUSTED;
Session altered.
SQL> set autotrace on explain
SQL> delete /*+ index( t_view t_idx ) */ from t_view
2 where idx_col = 'amper1_data#amper2_data';
1 row deleted.
Execution Plan
--------------------------------------------
0 DELETE STATEMENT Optimizer=CHOOSE (Cost=1 Card=1
1 0 DELETE OF 'T'
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=1
3 2 INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE)
SQL> set autotrace off
This process can make use of an index, and, if the number of rows to be deleted is a small fraction of the 20 million rows present, will definitely speed up the operation.
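Putting it together, the original delete could then be issued against the view rather than the base table. This is just a sketch under the same assumptions as above: t stands in for the measurements table, and &1 and &2 are the same SQL*Plus substitution variables used in the original statement.

delete /*+ index( t_view t_idx ) */
  from t_view
 where idx_col = '&1'||'#'||'&2';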
I use the script below for reporting on loans:
break on report
compute sum of sum(loan_amount) on report
select officer, sum(loan_amount)
from table_x
group by officer;
The results are as follows:
officer   sum(loan_amount)
--------------------------
Mr Q      3000
Mr W      4000
Mr T      6000
Mr X      7000
--------------------------
sum       20000
But I would prefer the results to look like the following:
officer   sum(loan_amount)   percentage
---------------------------------------
Mr Q      3000               15
Mr W      4000               20
Mr T      6000               30
Mr X      7000               35
---------------------------------------
sum       20000
Can you help me with the relevant code to calculate the respective percentage of each officer's total in relation to the grand total? My database is Oracle7 Release 7.1.6.
Starting with Release 7.1 of Oracle, users have had access to a feature called an inline view. An inline view is a view within a query. Using this feature, you can easily accomplish your task. Every row in the report must have access to the total sum of loan_amount. You can simply divide sum(loan_amount) by that total, and you'll have a number that represents the percentage of the total. Given that, you can use a query such as the following (where deptno stands in for officer and sal for loan_amount):
  1  select deptno, sum(sal), sum(sal)/tot_sal
  2  from emp, ( select sum(sal) tot_sal from emp )
  3* group by deptno, tot_sal
scott@ORA716.WORLD> /
DEPTNO SUM(SAL) SUM(SAL)/TOT_SAL
------------------------------
10 8750 .301464255
20 10875 .374677003
30 9400 .323858742
With Oracle8i Release 2 (8.1.6 and higher), you can calculate percentages by using the new analytic functions as well. The query using an analytic function might look like this:
scott@TKYTE816> select deptno, sum(sal), ratio_to_report(sum(sal)) over ()
  2  from emp
  3  group by deptno;
DEPTNO   SUM(SAL) RATIO_TO_REPORT(SUM(SAL))OVER()
------------------------------
10 8750 .301464255
20 10875 .374677003
30 9400 .323858742
The query produces the same answer—but it does so more efficiently, because it does not have to make two passes over the data to arrive at the answer. Because the analytic functions are built-in, queries that use them will find the answer more rapidly than the "pure" SQL-based approach.
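If you want the report to show a whole-number percentage column, as in the desired output above, you can scale and round the ratio. This is a sketch using the questioner's own table and column names, and it assumes Oracle8i Release 2 or later, as noted.

select officer,
       sum(loan_amount),
       round( ratio_to_report(sum(loan_amount)) over () * 100 ) percentage
  from table_x
 group by officer;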
I want to confirm that lsnrctl has the following permissions for all of my servers:
-rwsr-s--x   1 oracle   dba   494456 Dec  7  1999 lsnrctl
Are these default permissions, or were they reset at some point? Are they the same for all users? Does this permission set mean that any user can start and stop my listener? I would like to change it to -rwsr-s---. Will this cause problems?
You could make that change; however, a more correct method would be to password-protect the listener functions.
See the Oracle documentation covering Net8 administration for details, but in short, you can do the following:
LSNRCTL> change_password
[just press enter if you don't have one yet]
Old password:
New password:
Reenter new password:
Connecting to
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=slackdog)(PORT=1521)))
Password changed for LISTENER
The command completed successfully
LSNRCTL> set password
Password:
The command completed successfully
LSNRCTL> save_config
Connecting to
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=slackdog)(PORT=1521)))
Saved LISTENER configuration parameters.
Listener Parameter File
/d01/home/oracle8i/network/admin/listener.ora
Old Parameter File
/d01/home/oracle8i/network/admin/listener.bak
The command completed successfully
LSNRCTL>
You need to use a password to perform various operations (such as STOP) but not all (such as STATUS). This allows you to protect your listener but does not inhibit people from using the listener in a valid way.
It is normally difficult to list all privileges and roles assigned to a specific user in one select, since a privilege can be assigned to a role, which can be assigned to another role, which in turn can be assigned to yet another role, which can be granted to a user. Can you help me write a query that hierarchically lists users, roles, and privileges (system and table)?
It's not really that difficult, but the query can be slow using the straight data dictionary (its view has a group by to remove duplicate entries) unless you first create a more suitable view to answer this question specifically. The following view lists all of your roles, all of the roles granted to those roles, and so on:
create view user_role_hierarchy
as
select u2.name granted_role
  from ( select *
           from sys.sysauth$
        connect by prior privilege# = grantee#
          start with grantee# = uid or grantee# = 1 ) sa,
       sys.user$ u2
 where u2.user# = sa.privilege#
union all select user from dual
union all select 'PUBLIC' from dual
/
grant select on user_role_hierarchy to public;
That view is based on the existing data dictionary view, and if you create the new view as SYS, it will show the currently logged-in user his or her roles (and all of the roles granted to those roles, and so on). You'll find this view to be very fast.
To get all of your "table" privileges, you should do this:
select distinct PRIVILEGE, owner, table_name
from dba_tab_privs
where grantee in ( select * from user_role_hierarchy )
/
And to get all of your "system" privileges, you should do this:
select distinct PRIVILEGE
from dba_sys_privs
where grantee in ( select * from user_role_hierarchy )
/
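If you want both sets in a single result, a small variation (my own sketch, not from the original column) unions the two queries so the view is reused once for each:

select 'SYSTEM' privilege_type, privilege, to_char(null) owner, to_char(null) table_name
  from dba_sys_privs
 where grantee in ( select * from user_role_hierarchy )
union
select 'TABLE', privilege, owner, table_name
  from dba_tab_privs
 where grantee in ( select * from user_role_hierarchy )
/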
Would you provide some examples of how to use DBMS_OBFUSCATION_TOOLKIT?
DBMS_OBFUSCATION_TOOLKIT is the DES encryption package; it ships with Oracle8i Release 2 and later and provides, for the first time, field-level encryption in the database. The trick to using this package is to make sure everything is a multiple of eight: both the key and the input data must have a length divisible by eight (and the key must be exactly 8 bytes long).
Here's an example:
ops$tkyte@DEV816> variable x varchar2(25)
ops$tkyte@DEV816> exec :x := 'How Now Brown Cow';
PL/SQL procedure successfully completed.
ops$tkyte@DEV816> declare
2 l_data varchar2(255);
3 begin
4 l_data := rpad( :x, (trunc( length(:x)/8)+1)*8, chr(0));
5
6 dbms_obfuscation_toolkit.DESEncrypt
7 ( input_string => l_data,
8 key_string =>'MagicKey',
9 encrypted_string=> :x );
10 end;
11 /
PL/SQL procedure successfully completed.
ops$tkyte@DEV816> print x
X
______________
"rè=_,w]_ëSWÒ:,-Ù ?T"
ops$tkyte@DEV816> declare
2 l_data varchar2(255);
3 begin
4 dbms_obfuscation_toolkit.DESDecrypt
5 ( input_string => :x,
6 key_string => 'MagicKey', decrypted_string=> l_data );
7
8 :x := rtrim( l_data, chr(0) );
9 end;
10 /
PL/SQL procedure successfully completed.
ops$tkyte@DEV816> print x
X
______________
How Now Brown Cow
You must protect and preserve your "magickey"—8 bytes of data that is used to encrypt/decrypt the data. If it becomes compromised, your data is vulnerable.
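If you would rather not repeat the padding logic in every caller, you can wrap it once. Here is a minimal sketch: the function name encrypt_str is hypothetical (it is not part of the package), and it simply combines the rpad trick and the DESEncrypt call shown above.

create or replace function encrypt_str( p_data in varchar2,
                                        p_key  in varchar2 )
return varchar2
as
    l_padded    varchar2(255);
    l_encrypted varchar2(255);
begin
    -- pad the input with chr(0) out to the next multiple of 8 bytes,
    -- exactly as in the example above
    l_padded := rpad( p_data, (trunc(length(p_data)/8)+1)*8, chr(0) );

    dbms_obfuscation_toolkit.DESEncrypt
        ( input_string     => l_padded,
          key_string       => p_key,
          encrypted_string => l_encrypted );

    return l_encrypted;
end;
/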
I'd like to check tablespace information similar to the OEM display.
The query you're after is:
select a.file_name,
       a.file_id,
       a.tablespace_name,
       a.bytes allocated,
       nvl(b.free,0) free,
       a.bytes-nvl(b.free,0) used
  from dba_data_files a,
       ( select file_id, sum(bytes) free
           from dba_free_space
          group by file_id ) b
 where a.file_id = b.file_id (+)
/
Using an inline view like that is extremely powerful.
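A variation on the same idea (my own sketch, not part of the original answer) rolls the figures up to the tablespace level, which is closer to what the OEM tablespace screen shows:

select a.tablespace_name,
       a.allocated,
       nvl(b.free,0) free,
       a.allocated - nvl(b.free,0) used
  from ( select tablespace_name, sum(bytes) allocated
           from dba_data_files
          group by tablespace_name ) a,
       ( select tablespace_name, sum(bytes) free
           from dba_free_space
          group by tablespace_name ) b
 where a.tablespace_name = b.tablespace_name (+)
/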
I have submitted ten jobs, scheduled so that no more than three jobs overlap. I have five SNP processes configured: job_queue_processes = 5, job_queue_interval = 60. I've noticed that jobs configured to start at 09:00:00 a.m. today with an interval of 24 hours start at 09:00:12 a.m. tomorrow. This happens for all jobs; each run starts 10 to 12 seconds later than the last. Hence I have to reset my jobs every week.
Moreover, the total_time displayed in dba_jobs for one of my jobs is 57363 seconds. Does this mean that this job is taking 956 minutes (about 15 hours)? If that were true, then for most of the day my dba_jobs_running view would have shown some records, which it does not.
Are there additional settings that would make each job start at the correct time, eliminating the delay of 10 to 12 seconds?
The job "sliding" like that is a normal side effect of the way the job queues work. The NEXT_DATE
is computed right before the job runs. The queues are inspected every 60 seconds (not on the minute—every 60 seconds from some point in time). Hence, it might be 9:00:50 when the job is getting run, not 9:00:00. You are probably just using
sysdate+1
as the next date. That means the job will run about 9:00:50 tomorrow—but tomorrow it slips almost full minute, and so on. In a month, it's 20 to 30 minutes (or more) off. To get something scheduled at a fixed time of the day, you should always use an absolute date rather than a relative one. Your NEXT
should be:
trunc(sysdate)+1+9/24
That will always set the next date to 9 a.m. the next day—regardless of when the job happens to be run (for example, if a job got stuck and this one wasn't able to run until 11 a.m. today, the NEXT_DATE would still evaluate to 9 a.m. tomorrow). Typically, you want to use an absolute calculation like that with the job queues to avoid the "slide"—unless it's a job that should happen every 5 minutes or so.
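For example, here is a sketch of submitting such a job. The procedure name my_proc is hypothetical; the point is that the absolute expression is passed as the interval, so it is re-evaluated to 9 a.m. of the following day each time the job completes.

variable jobno number
begin
   dbms_job.submit( job       => :jobno,
                    what      => 'my_proc;',
                    next_date => trunc(sysdate)+1+9/24,
                    interval  => 'trunc(sysdate)+1+9/24' );
   commit;
end;
/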
The total time is a cumulative counter. The value of 15 hours shows that a total of 15 hours has been spent running this job over its lifetime.
I created an index name_idx on a table temp, on its column name (varchar2(20)). The name column does not have a NOT NULL constraint on it. When I issue a query like
select ... from temp
where name='JAMES';
I found out (using autotrace) that this index is not being used. I made the index work by using a hint with this query. However, if I issue
select count(*) from temp
where name='JAMES';
this index gets used. Am I missing something?
I created the temp table as a copy of scott.emp. It has 16 rows, and count(*) returns 1.
You are using the cost-based optimizer (CBO), and it is doing the most correct thing possible.
You have a very small table; it basically occupies one block. The CBO understands that it is very small. It understands that to use an index to answer select ... from temp where name = 'JAMES', it would have to read the index and then do a table access by rowid. It also understands that it could just read the entire table in one I/O and get the data just as easily. Here, on such a small table, the index increases overhead rather than decreasing it.
When you do the count(*), the optimizer understands that it does not need to access the table. Hence the query can be entirely answered by reading the equally tiny index—so it does.
Using the rule-based optimizer, which does not consider the size of tables, you get the less optimal plan of "index range scan, table access by rowid".
The bottom line here is that indexes do not mean faster access in all cases. The optimizer is doing the correct thing here, given the stats on the table.
Here is an example that illustrates this behavior:
ops$tkyte@ORA8i.WORLD> create table emp
2 as
3 select * from scott.emp;
Table created.
ops$tkyte@ORA8i.WORLD> create index emp_idx
on emp(ename);
Index created.
ops$tkyte@ORA8i.WORLD> set autotrace
traceonly explain
ops$tkyte@ORA8i.WORLD> select empno, ename
2 from emp
3 where ename = 'JAMES';
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP'
2    1     INDEX (RANGE SCAN) OF 'EMP_IDX' (NON-UNIQUE)
Using RBO, which uses rules only to develop a query plan, we use the index. Using the index is actually not the optimal plan given the size of this table!
ops$tkyte@ORA8i.WORLD> select count(*)
2 from emp
3 where ename = 'JAMES';
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE
1    0   SORT (AGGREGATE)
2    1     INDEX (RANGE SCAN) OF 'EMP_IDX' (NON-UNIQUE)
Here it uses the even smaller index—since it knows that it need not go to the table to pick up other columns. This is optimal.
ops$tkyte@ORA8i.WORLD> analyze table emp
compute statistics;
Table analyzed.
ops$tkyte@ORA8i.WORLD> select empno, ename
2 from emp
3 where ename = 'JAMES';
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1
1    0   TABLE ACCESS (FULL) OF 'EMP' (Cost=1 Card=1
Now it picks the better plan for this query given this small amount of data.
ops$tkyte@ORA8i.WORLD> select count(*)
2 from emp
3 where ename = 'JAMES';
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1
1    0   SORT (AGGREGATE)
2    1     INDEX (RANGE SCAN) OF 'EMP_IDX' (NON-UNIQUE)
and the count(*) is unaffected...